People: Nov. 30, 2015 Balcom Agency promoted Ashley Freer to group director, a new management position within the agency. Freer brings more than 16 years of marketing experience to the position, in which she will drive communication strategies for clients and help build and mentor a group of account service team members. Since joining the agency in 2009 as an account director, Freer has managed award-winning campaigns and projects for clients including Dairy MAX, Cook Children's Health Care System, Southwest Bank, Mrs Baird's Bread, Southwestern Seminary and Bennett Benner Partners, while also serving as public relations specialist for the agency. In addition, Freer will oversee all PR efforts for Balcom, including hiring and training staff, managing the agency's PR coordinators and specialists, and overseeing PR planning for the agency and its clients. Freer serves on the boards of Fort Worth Sister Cities International and Common Ground. She is a member of the Greater Fort Worth Chapter of the Public Relations Society of America and the Junior League of Fort Worth, and will serve as the Junior League's community vice president in 2016-2017. Steve Ellis, the former CEO of Asurion and worldwide managing director of Bain & Co., joined TPG as managing partner of the global operations group and portfolio business building. Ellis is based in the San Francisco office. He serves on the board of Charles Schwab, Asurion and The Bridgespan Group. He also is an adviser to the chancellor office of the University of California at Berkeley and has served on the advisory committee for the Stanford Graduate School of Business, where he is a regular guest lecturer. Boards & Organizations Dr. Francesco Benazzo, a world-renowned orthopedic surgeon, lecturer and researcher, joined the medical device and clinical materials advisory board for CoorsTek Medical. 
Benazzo is chairman of the orthopedic and traumatology department at the University of Pavia, San Matteo Hospital, and also chairs the program of residency in orthopedics and traumatology. Since 1981, he has been involved in the care of top athletes of the Italian track and field national team. In 2005, he became president of the European Federation of the National Sports Traumatology Societies. He is a member of the International Cartilage Repair Society and the International Society of Arthroscopy, Knee Surgery and Orthopaedic Sports Medicine, and is the secretary of the Italian College of Professors of Orthopaedics and Traumatology. Benazzo also serves as visiting surgeon and instructor for Zimmer Biomet. The University of Texas at Arlington appointed Jon Weidanz, founding chairman of the department of immunotherapeutics and biotechnology in the School of Pharmacy at Texas Tech University Health Sciences Center, as associate vice president for research and professor of biology. Weidanz's research has focused on immuno-oncology – therapies and treatments aimed at helping a patient's immune system to combat cancer, including using soluble T-cell receptors and T-cell receptor-mimicking antibodies for targeting tumors. He also engineered T-cell receptor molecules that recruit and activate immune cells to kill tumor cells. He also has been active in the field of vaccine research and development. Weidanz has more than 40 issued, pending and provisional patents for his work. UT Arlington named Rosalinda "Rose" Youngblood, a senior development director at Baylor University, as the assistant vice president for development and university initiatives. During her decade with Baylor's Office of University Development, Youngblood has been a leader in managing the university's frontline development staff and a program that raises more than $100 million annually. The City of Fort Worth named Dakisha R. 
Wesley as interim municipal court services director until a permanent selection and placement is made. Wesley has more than 16 years of financial/organizational analysis, staff development, management and professional leadership experience in nonprofit and municipal government. She has experience from the cities of Allen and Grand Prairie and has been with the city of Fort Worth for nearly 11 years, where she was municipal court services assistant director for five years. Wesley is a North Central Texas Council of Governments Urban Fellow and a member of the Leadership Fort Worth Leading Edge Class of 2012, the Executive Leadership Development Institute Class of 2008, American Society for Public Administrators, National Forum for Black Public Administrators and the Texas Municipal Courts Association. Orchard Park, a senior living community providing assisted living and memory care in Murphy, has unveiled its new name, Lynridge Senior Living, after Sagora Senior Living acquired the location in August. Sagora Senior Living also operates two communities in Carrollton: Lakeview at Josey Ranch and Briarview, which is under construction and slated for completion in the spring. Texas Rangers Manager Jeff Banister has been named 2015 American League Manager of the Year by the Baseball Writers' Association of America. Banister won the award after leading the Rangers to a 21-game turnaround and an AL West title in 2015 in his first year as a major league manager. Banister became the only manager in franchise history to guide the club to either a first place finish or a postseason berth in his first year. The Rangers' 88 wins also were the most ever by a franchise first-year manager. The Score A Goal In The Classroom School Incentive Program honored Alissa Clark with the 2015 Bayard H. Friedman HERO Award as outstanding special education teacher in North Texas. 
Clark has taught physically challenged students at Chisholm Trail High School in the Eagle Mountain-Saginaw Independent School District for six years. In addition to the HERO Award, the Mary Potishman Lard Trust presented Clark with a $1,000 honorarium. Texas Health Arlington Memorial Hospital recently achieved the Stage 7 award for electronic health record adoption from the Healthcare Information and Management Systems Society. Stage 7 is the highest level on the Electronic Medical Record Adoption Model, which is used to track patient progress at hospitals across the country. For the second year in a row, Ben E. Keith Foods has won Shareholder of the Year from Distribution Market Advantage. The award recognizes the shareholder making the most significant contribution to chain operators and the DMA organization as a whole. Criteria include the shareholder's overall customer satisfaction scores, support of developing markets, sales growth, strategic leadership and sales leadership. Baylor Scott & White Health has been ranked 10th healthiest employer in America by Healthiest Employer LLC, based on its culture and employee wellness programs. The U.S. Chamber awarded Grapevine Chamber of Commerce a 5-Star Accreditation for its sound policies, effective organizational procedures and positive impact on the community. There are 31 accredited chambers in Texas, and Grapevine is one of only 12 in the state with 5-Star Accreditation. Forshey Prostok LLP hired Juan Mendoza as an associate with the firm. Mendoza's practice will focus on bankruptcy, business reorganizations and commercial litigation. Prior to joining Forshey Prostok, Mendoza was a law clerk for Judge Robert L. Jones, U.S. bankruptcy judge for the Northern District of Texas, based in Lubbock. During law school, Mendoza interned for U.S. Bankruptcy Judges Margaret Murphy of the Northern District of Georgia and Laurel Isicoff of the Southern District of Florida. 
His experience also includes a summer associate position with the Atlanta offices of Hunton & Williams LLP. Mendoza is licensed to practice law in Florida and Georgia. His admission to the Texas Bar is pending. New chief development officer at Helping Restore Ability is Debbie McGee. McGee has been vice president of resource development since 2012 at United Way for Greater Austin. Prior to that, she was vice president of major gifts and volunteer engagement from 2007 to 2012 at United Way of Tarrant County. McGee worked nine years for JPMorgan Chase as vice president of national accounts in Houston. Send newsmakers to Betty Dillard at bdillard@bizpress.net
\section{INTRODUCTION} \subsection{Motivation} With the recent development of deep learning, many object detection, classification and semantic segmentation methods have achieved impressive performance in areas such as autonomous vehicles (AV) and advanced driver-assistance systems (ADAS). Object tracking frameworks play an essential role in both systems. To realize object tracking, many popular algorithms have been built on top of object detection frameworks, such as Sort \cite{sort}, Deepsort \cite{deepsort} and ASMS \cite{vojir2014robust}, all of which achieve excellent performance. However, these tracking frameworks operate in the 2D image plane and track targets by relying on stable 2D object detectors. Most of them use an Intersection-over-Union (IoU) matching matrix or other data association methods to match detected objects across consecutive frames, which means these trackers are strongly influenced by detector performance and are not robust to object interactions. The most fundamental requirement of intelligent vehicle applications is to help the vehicle understand its surrounding environment and make firm decisions based on that information. We believe the 3D point cloud is more accurate and stable than the image for environment perception. Point clouds can be gathered easily by the LiDAR that is usually mounted on intelligent vehicles. Moreover, the scale of an object in a point cloud is invariant, whereas it constantly changes in a common RGB image. Many popular 3D object detection models, such as Complex-Yolo \cite{Complex-YOLO}, MV3D \cite{MV3D}, VoxelNet \cite{VoxelNet} and PIXOR \cite{PIXOR}, are available if we want to transfer the idea of the IoU matching matrix from 2D bounding boxes to 3D bounding boxes. 
Although computing the IoU matching matrix from 3D bounding boxes can work, a challenge remains: current 3D object detection models do not predict object orientation well, so the IoU matrix is built with high error from the 3D object proposals, even though those models can locate 3D objects in space precisely. \subsection{Contribution} To solve this problem, we first project the 3D point clouds acquired by LiDAR into a spherical coordinate system, inspired by SqueezeSeg \cite{SqueezeSeg} and PointSeg \cite{PointSeg}. With this 2D data, we can perform 2D tracking and recover the 3D information directly. To mitigate the interaction problem of common tracking frameworks, we introduce additional 3D information from the projected spherical image through a 3D instance segmentation framework. Compared with 3D object detection, 3D instance segmentation is better suited to locating objects in 3D space, because it not only locates each object but also predicts a point-wise mask for further processing. Nevertheless, a trade-off between feature-extraction time and prediction accuracy always exists in the 3D instance segmentation task: many 3D instance segmentation methods use Multi-layer Perceptrons (MLP) to generate feature representations, which cannot be applied directly to large-scale scenes and are time-consuming due to the large number of stacked PointNet \cite{pointnet} layers. We propose an efficient 3D instance segmentation framework with a lightweight network structure that takes advantage of the spherical image to speed up processing, by compressing the 3D information into a 2D data type with channels. We also extend the fast tracking algorithm Sort \cite{sort} to take 3D information into account when building the cost matrix, yielding a 3D tracking framework for road objects. 
We name this pipeline \textit{PointIT}, as shown in Fig. \ref{fig:pipeline}. Our work can track 3D objects at a speed of 15 fps over the entire forward process. In general, we highlight our contributions as follows: \begin{figure*}[!ht] \centering \includegraphics[width=1\textwidth]{svg/baseline/baseline} \caption{The pipeline of \textit{PointIT}. It contains two parts: (1) an instance segmentation process that produces instance masks from an input spherical image; (2) generation of the association matrix with the extended Sort from the output masks and the corresponding spatial information. The different colors in this figure are only used to separate instances. } \label{fig:pipeline} \end{figure*} \begin{itemize} \item We propose an end-to-end pipeline for 3D point cloud instance segmentation with projected spherical images as input. \item We generate a spherical image dataset from the KITTI 3D Object Track dataset \cite{KITTI} with corresponding instance labels to train our model. \item We extend Sort \cite{sort} to build the graph with the normalized distance between object location centers, instead of only using the IoU of the detected bounding boxes, to solve the path interaction problem. \item We propose a stable and fast multi-object tracking structure with the extended Sort \cite{sort} and a light-version 3D instance segmentation pipeline based on Mask RCNN \cite{MRCNN} and the MobileNet structure \cite{mobilenet}. \end{itemize} \section{Related Work} In this section, we review recent approaches to instance segmentation, multiple object tracking and the data fusion methods commonly used in popular 3D object detection models. 
\subsection{Instance Segmentation} Owing to the significant development of bounding-box object detection, many early instance segmentation approaches work in two stages: (1) use a box detector to generate a set of bounding boxes for each possible object position; (2) predict the pixel-wise mask from the bounding-box proposals and feature maps that are sensitive to the instance location. For example, MNC \cite{MNC} followed these two stages to build a multiple-branch cascade model that generates accurate results from all bounding-box proposals. However, this method is time-consuming and thus unsuitable for intelligent vehicles; for instance, MNC \cite{MNC} spends around 0.4s on feature extraction alone. Some recent approaches combine segmentation methods and object detection systems to achieve instance segmentation with an extended fully convolutional network that would otherwise apply only to semantic segmentation. For example, DeepMask \cite{deepmask} and FCIS \cite{fcis} predict position-sensitive score maps from an extended Fully Convolutional Network (FCN). Those maps locate the object and predict masks at the same time. However, they tend to make mask prediction errors where adjacent instances overlap \cite{deepmask} \cite{fcis}. In this paper, we use Mask RCNN \cite{MRCNN}, unlike SGPN \cite{SGPN}, as the pipeline, which predicts the object bounding box and the corresponding pixel-wise mask in parallel from a shared branch. SGPN \cite{SGPN} performs 3D instance segmentation well in indoor scenes, but problems remain when it is applied to outdoor large-scale scenes: the large number of points causes memory inefficiency in \cite{SGPN}. To improve on this, we introduce spherical images generated from the point clouds, and use this 2D projected data as the input, recovering the point-wise mask from the model outputs. 
\subsection{Multiple Object Tracking} Many popular methods treat the Multiple Object Tracking (MOT) problem as a data association problem and match detected targets between consecutive frames. For example, Sort applies a Kalman filter to the bounding boxes, generates an association matrix between two frames and optimizes the assignment cost matrix with the Hungarian algorithm \cite{sort}. However, identity switches and tracked-target loss are problems in Sort \cite{sort}, because its association matrix is easily corrupted by object intersections. Deepsort \cite{deepsort} used recursive Kalman filtering and a more complex association matrix to reduce the large number of identity switches. However, its deep association matrix can only be trained on a particular large-scale object re-identification dataset, which makes it inconvenient to apply in different scenes. Moreover, some approaches to 3D tracking combine RGB image information and 3D point clouds through data alignment to achieve state estimation in 3D space. For example, 3D-CNN/PMBM used a deep learning structure to estimate the camera-to-object distance and incorporated that distance into the association matrix \cite{PMBM}. In practice, these methods cannot provide stable results on intelligent vehicles due to inaccurate estimation. \subsection{Data Fusion in Deep Learning} In deep learning, researchers often combine multiple data sources to produce more consistent and robust features. MV3D used the bird's-eye view and front view of the LiDAR together with the corresponding RGB image to obtain sufficient feature descriptions for 3D object detection \cite{MV3D}. Additionally, F-PointNet \cite{F-PointNet} combined the RGB image with depth information to extract more stable feature representations. 
PointSeg \cite{PointSeg} and SqueezeSeg \cite{SqueezeSeg} projected 3D point clouds into the spherical coordinate system and performed semantic segmentation on the projected data. Moreover, VoxelNet \cite{VoxelNet} proposed a novel layer that learns useful features directly from the points in each voxel. \section{Methodology} This section first details the input of the proposed \textit{PointIT} and the main features of the instance segmentation part. The rest of the section discusses the key points of the extended Sort method. \begin{figure}[!h] \centering \includegraphics[width=0.4\textwidth]{svg/projected_image/spherical} \caption{The projection from 3D point clouds to the spherical image. } \label{fig:spherical} \end{figure} \subsection{Network Input} We find that PointSeg \cite{PointSeg} and SqueezeSeg \cite{SqueezeSeg} perform semantic segmentation well on the spherical image and optimize the mask with the corresponding 3D information. We draw on their success in converting 3D point cloud data into a spherical image as the input of our \textit{PointIT}. Different from the input of \cite{PointSeg} and \cite{SqueezeSeg}, we project the 3D LiDAR data into a spherical projected image with four channels, corresponding to the Cartesian coordinates $(x, y, z)$ and the $reflectivity$. We transform the LiDAR data into the spherical image as follows: \begin{equation} \alpha = \arcsin(\frac{z}{\sqrt{x^2+y^2+z^2}}) \ \ \ \bar{\alpha}=\lfloor\frac{\alpha}{\Delta\alpha}\rfloor, \label{equ:azimuth} \end{equation} \begin{equation} \beta = \arcsin(\frac{y}{\sqrt{x^2+y^2}}) \ \ \ \bar{\beta}=\lfloor\frac{\beta}{\Delta\beta}\rfloor, \label{equ:zenith} \end{equation} where $\alpha$ and $\beta$ are the zenith and azimuth angles, respectively, as shown in Fig.~\ref{fig:spherical}. $\Delta \alpha$ and $\Delta \beta$ are determined by the size of the spherical image we want to generate. 
In our model, we set $\Delta \alpha = 64$ and $\Delta \beta = 512$. $\bar{\alpha}$ and $\bar{\beta}$ are the position indexes on the projected image. We generate an array with the shape $64\times 512 \times 4$ as the input of our model. $64$ is chosen because the 3D LiDAR data comes from a Velodyne HDL-64E LiDAR with 64 vertical channels, while $512$ is chosen because we only project the front-view area $(-45^\circ, 45^\circ)$ into the spherical image. After the transformation, we feed this image-type data into the instance segmentation part of \textit{PointIT} to obtain the instance masks directly. \subsection{Instance Segmentation} To achieve satisfactory efficiency, we build a fast instance segmentation model following MobileNet \cite{mobilenet} and Mask RCNN \cite{MRCNN}. \begin{table}[h] \center \caption{The parameters of the encoder.} \renewcommand\arraystretch{1.5} \renewcommand\tabcolsep{7.4pt} \begin{tabular}{@{}ccll@{}} \toprule {Input} & {Operation} & {c} & {s} \\ \midrule H$\times$W$\times$4 & conv2d & 32 & 2 \\ \midrule H/2$\times$W/2$\times$32 & depthwise\_separable\_block & 64 & 1 \\ \midrule H/2$\times$W/2$\times$64 & depthwise\_separable\_block & 128 & 2 \\ \midrule H/4$\times$W/4$\times$128 & depthwise\_separable\_block & 128 & 1 \\ \midrule H/4$\times$W/4$\times$128 & depthwise\_separable\_block & 256 & 2 \\ \midrule H/8$\times$W/8$\times$256 & depthwise\_separable\_block & 256 & 1 \\ \midrule H/8$\times$W/8$\times$256 & depthwise\_separable\_block & 512 & 2 \\ \midrule H/16$\times$W/16$\times$512 & depthwise\_separable\_block & 512 & 1 \\ \midrule H/16$\times$W/16$\times$512 & depthwise\_separable\_block & 512 & 1 \\ \midrule H/16$\times$W/16$\times$512 & depthwise\_separable\_block & 512 & 1 \\ \midrule H/16$\times$W/16$\times$512 & depthwise\_separable\_block & 512 & 1 \\ \midrule H/16$\times$W/16$\times$512 & depthwise\_separable\_block & 512 & 1 \\ \bottomrule \end{tabular} \label{table:para} \end{table} 
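As a concrete illustration of the projection just described, the following NumPy sketch maps an $N\times 4$ LiDAR array $[x, y, z, reflectivity]$ onto the $64\times 512\times 4$ input tensor. The bin-edge handling, the min-max normalization of the zenith angle, and the field-of-view clipping are our assumptions for illustration, not the paper's exact implementation:

```python
import numpy as np

def project_to_spherical(points, H=64, W=512, fov_h=np.pi / 2):
    """Project an (N, 4) LiDAR array [x, y, z, reflectivity] onto an
    H x W x 4 spherical image storing (x, y, z, reflectivity) per pixel.
    Illustrative sketch only; binning details are assumptions."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    dist = np.sqrt(x**2 + y**2 + z**2)
    alpha = np.arcsin(z / (dist + 1e-9))                        # zenith angle
    beta = np.arcsin(y / (np.sqrt(x**2 + y**2) + 1e-9))         # azimuth angle
    # Discretize the angular ranges onto pixel indices (min-max for rows,
    # a fixed +/-45 degree horizontal FOV for columns).
    u = ((alpha - alpha.min()) / (alpha.max() - alpha.min() + 1e-9) * (H - 1)).astype(int)
    v = ((beta + fov_h / 2) / fov_h * (W - 1)).astype(int)
    img = np.zeros((H, W, 4), dtype=np.float32)
    keep = (v >= 0) & (v < W)            # drop points outside the front view
    img[u[keep], v[keep]] = points[keep]
    return img
```

Points that fall into the same pixel simply overwrite each other here; a real implementation might keep the nearest point instead.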
MobileNet \cite{mobilenet} proposed a novel layer called the depthwise separable convolution block, which is constructed from a depthwise convolutional layer and a pointwise convolutional layer. With this block, the model keeps a balance between accuracy and efficiency. In this paper, we take this block as the unit of our model's feature extractor; more details can be found in \cite{mobilenet}. The parameters of the encoder are shown in Table \ref{table:para}. The \textit{c} column gives the output channels of the operation, and the \textit{s} column gives the stride size. The input is downsampled four times, so the final feature maps have the dimension $H/16\times W/16 \times512$. We do not add further downsampling to the encoder because the Feature Pyramid Network (FPN) features \cite{fpn} are generated from the final feature maps in the encoder with max-pooling layers, shown as the green box in Fig. \ref{fig:network}. We also cap the total stride at $16$ to preserve information and avoid feature maps with a height of only $2$. \begin{figure}[!h] \centering \includegraphics[width=0.48\textwidth]{svg/network/network} \caption{The structure of the instance segmentation model in \textit{PointIT}. It uses the backbone (MobileNet) to extract features and provides four different scales of feature maps to the RPN layer. } \label{fig:network} \end{figure} The structure of the instance segmentation module is shown in Fig. \ref{fig:network}. It contains four downsampling operations in the encoder. The RPN layers generate ROIs (regions of interest) based on the four different scales of feature maps. Then the network head builds the classifier graph and the mask graph in parallel. 
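To see why the depthwise separable block is cheaper than a standard convolution, a small parameter-count comparison can help; the $3\times 3$ kernel size is our assumption (the table above does not state it), and biases are ignored:

```python
def conv_params(k, c_in, c_out):
    """Weights in a standard k x k convolution mapping c_in -> c_out channels."""
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    """Depthwise k x k convolution (one filter per input channel) followed by
    a 1 x 1 pointwise convolution that mixes channels."""
    return k * k * c_in + c_in * c_out

# Example: the first separable block in the encoder table (32 -> 64 channels).
standard = conv_params(3, 32, 64)                  # 3*3*32*64 = 18432 weights
separable = depthwise_separable_params(3, 32, 64)  # 288 + 2048 = 2336 weights
print(standard, separable, round(standard / separable, 1))
```

The roughly 8x reduction per block is what lets the encoder stay fast enough for the 15 fps pipeline.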
The classifier graph is constructed with one convolutional layer with a $7\times 7$ kernel and two $1\times 1$ convolutional layers that generate the feature maps for object classification and bounding-box regression. The mask graph is built with convolutional and deconvolutional layers to create the mask map with the shape $28\times 28$. \subsection{Extended Sort} To extend Sort into a 3D tracking framework, we adopt the spatial information $[x, y, z]^{\top}$ contained in the projected data together with recursive Kalman filtering, and we build a more stable inter-frame association matrix than the original function in \cite{sort}. We describe the details in the following. \subsubsection{State Estimation} To estimate object motion, we build a motion model to predict the location of each target in the next frame. The state of each target is defined as: \begin{align} &X_{Object} = (x_p,y_p,s,r,\bar{x}, \bar{y}, \bar{s})^{\top} \label{equ:ob2d}\\ &X_{Center} =(x_w, y_w, z_w, \bar{V_x}, \bar{V_y}, \bar{V_z}, \bar{A_x}, \bar{A_y})^{\top} \label{equ:ob3d} \end{align} Eq. \ref{equ:ob2d} contains the center of the bounding box $[x_p, y_p]^{\top}$, its scale $s$ and the aspect ratio $r$. We use a constant-velocity Kalman filter and take $[x_p,y_p,s,r]^{\top}$ as the observation of the object state. Eq. \ref{equ:ob3d} contains the center location $[x_w, y_w, z_w]^{\top}$ of the object in 3D space, the velocity $[\bar{V_x}, \bar{V_y}, \bar{V_z}]^{\top}$ and the acceleration $[\bar{A_x}, \bar{A_y}]^{\top}$. This Kalman filter assumes constant velocity in $z$ and includes acceleration in $x$ and $y$ to smooth the estimation; its observation is $[x_w, y_w, z_w]^{\top}$. \subsubsection{Data Assignment Problem} To assign the tracked objects in each frame, we estimate the object bounding box and the object location in world space from the current frame. 
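The association described in this subsection — a weighted sum of box IoU and an exponentially decayed 3D center distance, solved as an assignment problem — can be sketched as follows. The box format, equal weights $\alpha = \beta = 0.5$, and a brute-force solver standing in for the Hungarian algorithm are our assumptions for illustration:

```python
from itertools import permutations
import numpy as np

def iou(a, b):
    """IoU of two axis-aligned boxes [x1, y1, x2, y2]."""
    w = min(a[2], b[2]) - max(a[0], b[0])
    h = min(a[3], b[3]) - max(a[1], b[1])
    inter = max(0.0, w) * max(0.0, h)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / (union + 1e-9)

def association_graph(det_boxes, det_centers, trk_boxes, trk_centers, alpha=0.5):
    """Graph(i, j) = alpha * IoU(i, j) + (1 - alpha) * exp(-||P_i - P_j||)."""
    g = np.zeros((len(det_boxes), len(trk_boxes)))
    for i, (db, dc) in enumerate(zip(det_boxes, det_centers)):
        for j, (tb, tc) in enumerate(zip(trk_boxes, trk_centers)):
            d = np.linalg.norm(np.subtract(dc, tc))
            g[i, j] = alpha * iou(db, tb) + (1 - alpha) * np.exp(-d)
    return g

def assign(graph):
    """Pick the column permutation maximizing total similarity; a brute-force
    stand-in for the Hungarian solver (assumes equal detection/track counts)."""
    best_score, best = -np.inf, None
    for cols in permutations(range(graph.shape[1])):
        score = sum(graph[i, c] for i, c in enumerate(cols))
        if score > best_score:
            best_score, best = score, cols
    return list(enumerate(best))

# Two detections vs. two predicted tracks (hypothetical values).
dets = [[0, 0, 10, 10], [20, 20, 30, 30]]
det_c = [[1.0, 0.0, 0.0], [5.0, 2.0, 0.0]]
trks = [[21, 19, 31, 29], [1, 0, 11, 10]]
trk_c = [[5.1, 2.0, 0.0], [1.0, 0.1, 0.0]]
print(assign(association_graph(dets, det_c, trks, trk_c)))  # [(0, 1), (1, 0)]
```

In production one would use a proper Hungarian implementation (e.g. `scipy.optimize.linear_sum_assignment` on the negated graph) rather than enumerating permutations.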
We build the cost matrix from a weighted IoU term and a weighted distance term. The weighted IoU is calculated between all currently detected bounding boxes and the newly predicted bounding boxes. The weighted distance is computed between all detected target locations and the estimated locations of all targets. The functions are given in Eqs. \ref{equ:graph}, \ref{equ:IoU} and \ref{equ:dis}. We then use the Hungarian algorithm to solve this assignment problem for the minimum cost. The graph functions are as follows: \begin{align} &Graph(i,j) = \alpha \cdot I(i,j) + \beta \cdot D(i,j) \label{equ:graph}\\ & I(i,j) = \frac{Box_i \bigcap Box_j}{Box_i \bigcup Box_j} \label{equ:IoU}\\ & D(i,j) = \exp[-dis(i,j)] \\ & dis(i,j) = \left\| P_i -P_j \right\|_2 \ \ \ P=[x,y,z]^{\top} \label{equ:dis} \end{align} In Eq. \ref{equ:graph}, $\alpha + \beta =1$ and they weight the balance between the two matrices. We set $\alpha$ to 0.5 so that both terms have the same weight. $I$ is the IoU matrix and $D$ is the distance matrix; the values of both lie in $[0,1]$. $i$ indexes the detected targets and $j$ indexes the predicted targets from the state estimation. The details of the process are described in Algorithm \ref{alg:exsort}. \begin{algorithm} \caption{Extended Sort process.} \begin{algorithmic}[1] \STATE \textbf{Input of frame $t$ : } {Detection boxes $B_t=\left(B_1,B_2,...,B_i\right)$}; masks $M_t =\left(M_1, M_2,...,M_i\right)$; estimated targets $E_t=\left(E_1,E_2,...,E_j\right)$. \STATE \textbf{Output of frame $t$ :} matched indices $M$, unmatched indices $UM$ and the estimated state of frame $t+1$. \STATE \textbf{1:} Compute center locations $L_t$ with the corresponding masks: $L_t = \left(L_1,L_2,...,L_i\right)$ \STATE \textbf{2:} Build the cost matrix $Cost_t$ with Eq. 
\ref{equ:graph} \STATE \textbf{3:} Initialize the set of matched indices $M\gets \emptyset $ and the set of unmatched indices $UM\gets B_t$ \STATE \textbf{4:} $MID_{t}, UID_{t} \gets$ $Minimize(Cost_{t})$ \STATE \textbf{5:} Remove $ID$ from $UM$ for each $ID$ in $MID_{t}$ \STATE \textbf{6:} Add $ID$ to $M$ for each $ID$ in $MID_{t}$ \end{algorithmic} \label{alg:exsort} \end{algorithm} In Algorithm \ref{alg:exsort}, we describe the whole process of the proposed extended Sort, assuming the task starts at frame $t$. In frame $t+1$, the matched and unmatched index sets in step 3 are not re-initialized; they carry over the tracking results from the previous frame. \begin{table*}[t] \center \caption{Performance of the proposed approach on the generated sequences} \scalebox{1.3}{ \begin{tabular}{@{}clcccccccc@{}} \toprule \textbf{Method} & \textbf{Type} & \textbf{MOTA}$\uparrow$ & \textbf{MOTP}$\uparrow$ & \textbf{MT}$\uparrow$ & \textbf{ML}$\downarrow$ & \textbf{ID\_sw}$\downarrow$ & \textbf{FM}$\downarrow$ & \textbf{FP} $\downarrow$ & \textbf{FN}$\downarrow$ \\ \midrule \multicolumn{1}{l}{baseline (sort)} & Online & 0.451 & 0.80 & 0.137 & 0.379 & 5 & 50 & 836 & 1945 \\ \multicolumn{1}{l}{our proposed} & Online & \textbf{0.457} &0.80 &\textbf{0.155} & \multicolumn{1}{l}{\textbf{0.327}} & \textbf{2} & 56 & 895 & \textbf{1895} \\ \bottomrule \end{tabular}} \label{tabel:sort} \end{table*} \section{Experiments} All experiments are conducted on a server equipped with one NVIDIA GeForce GTX GPU, with CUDA 9 and CUDNN v7. When training the instance segmentation model, we set the learning rate to 0.0001. We train the network for 2000 epochs, with 500 steps per epoch. \subsection{Datasets and Evaluation Results} We first train the instance segmentation model on a generated dataset transformed from the KITTI Track Object dataset \cite{KITTI}. We split the generated dataset into two parts. 
One part (the instance dataset), from sequence '0000' to sequence '0018', is used to train the instance segmentation model. The other sequences, '0019' and '0020', are used for the tracking test. There are 5000 frames in the generated instance dataset, which we separate into one training set of 4500 frames and one evaluation set of 500 frames. \subsubsection{Evaluation of the Instance Segmentation on the Generated Dataset} \begin{figure}[!h] \centering \includegraphics[width=0.45\textwidth]{svg/out_ins/drawing} \caption{The results of the instance segmentation on the generated data without tracking ids. The red bounding box represents a car, the blue bounding box represents a cyclist, and pedestrians are shown with green bounding boxes. } \label{fig:ins_image} \end{figure} Table \ref{table:runtime} shows the evaluation results of the instance segmentation part of our model. It performs well on the projected point data: the AP (average precision) reaches $0.617$ on the evaluation set when the IoU threshold is 0.5. When we evaluate the test dataset with a threshold of 0.7, the AP is only $0.237$. The reason for this gap is that the input shape is quite small: when segmented objects are far from the sensor, a slight change in the detected box has a large influence on the IoU between the detected bounding box and the ground truth. Sample instance segmentation results are shown in Fig. \ref{fig:ins_image}. The comparison of runtime and average precision is shown in Table \ref{table:runtime}: a light-version structure (MobileNet) with the instance segmentation baseline achieves performance similar to the ResNet backbone \cite{resnet} on the projected spherical image, while saving memory and computation time. 
\begin{table}[t] \caption{The performance comparison of different backbones in runtime and average precision.} \scalebox{1.3}{ \begin{tabular}{cccc} \toprule backbone & runtime(s) & AP\_0.5 & AP\_0.7 \\\midrule Resnet\_50 & 0.091 & 0.66 &0.251 \\ Mobilenet & 0.061 & 0.617 & 0.237 \\ \bottomrule \end{tabular}} \label{table:runtime} \end{table} \subsubsection{Evaluation of PointIT} We evaluate the performance of our tracking model with multi-target scores. The evaluation metrics include: \begin{itemize} \item MOTA \cite{mota}: multi-object tracking accuracy. \item MOTP \cite{mota}: multi-object tracking precision. \item MT: number of mostly tracked trajectories, where a target is tracked for more than 80\% of its life span. \item ML: number of mostly lost trajectories, where a target is tracked for less than 20\% of its life span. \item ID\_sw: number of times an ID switches to a different tracked object. \item FM: number of times a track is interrupted by a missing detection. \end{itemize} In the evaluation measurements, $(\uparrow)$ indicates that a higher score is better and $(\downarrow)$ indicates that a lower score is better. Table \ref{tabel:sort} only shows the evaluation results for the car class, because the generated test dataset mostly contains cars throughout the sequences. In Table \ref{tabel:sort}, we compare our results only with Sort \cite{sort}. With the additional information from 3D space, our method shows a clear improvement in \textbf{MT}, \textbf{ML}, \textbf{FN} and \textbf{ID\_sw} on the test dataset. In the spherical image, the indexes are calculated with the resolution of height and width, so the occlusion problem is eased by this projection. Because of this, the ID\_sw score is low over the whole test sequence. 
We do not compare with other state-of-the-art methods such as DeepSort \cite{deepsort}, both because we cannot generate a suitable dataset to train its deep association metric and because our method aims at an efficient way to bring spatial information into the 3D object tracking task. Tracking results are shown in Fig. \ref{fig:result}; for each sequence, three frames are chosen to illustrate the tracking framework, and we do not visualize the 3D point data in the paper. \begin{figure*}[!h] \centering \includegraphics[width=1\textwidth]{svg/track_res/drawing} \caption{Tracking results with track IDs; each row shows one sequence. Red boxes represent cars and green boxes represent pedestrians. The mask colors are set only for visualization. } \label{fig:result} \end{figure*} \section{Discussion} Our \textit{PointIT} achieves strong results, showing that tracking objects in 3D space is more accurate. We propose a simple way to introduce spatial information into existing popular tracking frameworks that are built directly on bounding-box detectors, such as Sort \cite{sort}. With this additional spatial information, not only tracking methods but also other frameworks can cooperate well with our proposed 3D instance segmentation framework. From Table \ref{table:runtime}, we find that the AP$_{0.5}$ of ResNet is only about 4\% higher than that of MobileNet, which we did not expect. We attribute this to differences between standard 2D images and the projected spherical data, such as the domain distribution. We assume that a deeper network structure would influence feature learning, and that the imbalanced data distribution imposes an upper bound on performance. We restrict the LiDAR field of view from 360 degrees to 90 degrees, since the front view is the most important region for a vehicle to monitor.
Our projected data does not contain the distance channel included in \cite{PointSeg} and \cite{SqueezeSeg}, because we found no substantial difference between the two implementations. In the ideal case, every index in the spherical projection would have a distinct corresponding point with $x, y, z$ and $reflectivity$. In practice, however, 3D point clouds are noisy, and the LiDAR loses some points depending on the intensity or the material of the object surface. Both problems cause missing points in the projected spherical image and the irregular masks in the predicted results shown in Fig. \ref{fig:ins_image}. \section{Conclusion} In this paper, we proposed a fast online tracking framework based on 3D point clouds. We constructed a lightweight structure to achieve 3D point cloud instance segmentation of road objects. In addition, we trained our instance prediction model on datasets generated from the KITTI Object Track dataset. We showed that, with the help of accurate spatial information, e.g. the centre point of a 3D instance object, the tracking performance can be improved substantially. The proposed \textit{PointIT} offers a good trade-off between performance and efficiency in the 3D object tracking challenge. \bibliographystyle{IEEEtran}
The twenty-third season of The Simpsons began airing in the United States on Fox on 25 September 2011 and ran until 20 May 2012. This season included the show's 500th episode, which aired on 19 February 2012.
\section{INTRODUCTION} \input{sections/02_introduction} \section{METHODS} \input{sections/03_methods} \section{ARRAY PERFORMANCE} \input{sections/04_results} \section{DISCUSSION} \input{sections/05_discussion} \section{CONCLUSION} \input{sections/06_conclusion} \begin{acknowledgements} \input{sections/07_acknowledgements} \end{acknowledgements} \bibliographystyle{pasa-mnras} \subsection{Simulations} Gamma rays and charged cosmic rays (protons and heavier nuclei) generate extended air showers by colliding with atmospheric molecules, resulting in a cascade of particles showering towards the ground and generating Cherenkov radiation. This light is detected by IACTs, with cosmic-ray showers occurring at least $\sim$1000 times more frequently than those from gamma rays. For this study, Monte Carlo simulations of gamma-ray showers, proton showers, and the Cherenkov light they produce were made with \verb|CORSIKA|\footnote{Version 7.7100 with the QGSJET II-03 interaction model.} \citep{1998cmcc}. Gamma rays were simulated originating from a point north of the array, and at a zenith angle of 20\textdegree\ where optimal sensitivity could be expected. The telescopes were aimed with a 1\textdegree\ offset from this point. Diffuse emission of protons (for background) and further gamma rays (for unbiased reconstruction models) were also simulated. The geomagnetic field of a site located in Arkaroola (30.3\textdegree\,S, 139.3\textdegree\,E) was used to emulate a potential Australian site, generated by the \verb|Geomag 7.0| software \citep{Alken2021}. \autoref{settings} presents the simulation settings used for both site altitudes of 0\,m and 1000\,m.
\begin{figure}[t] \begin{center} \includegraphics[width=0.7\columnwidth]{pdf_figures/tels} \caption{Arrangement of IACTs (shown as numbers) used in simulations, allowing for multiple different configurations of baseline distances and number of telescopes to be studied.} \label{tels} \end{center} \end{figure} The Cherenkov photons generated by the air showers were passed into the \verb|sim_telarray| \citep{Bernloehr2008} package to simulate IACTs observing the showers. For each photomultiplier converting photons to electrons in a camera sensor, this produces a waveform within a short ($\sim$100\,ns) time window around a triggered event. The package takes into account aspects such as mirror reflectivity, telescope structure shadowing the mirrors, night sky background, trigger conditions for event recording, and the quantum efficiency of the camera sensor. The telescopes were arranged with a central telescope surrounded by three more in an equilateral triangle each 80\,m away, and another three 160\,m away (see \autoref{tels}). This allowed for the study of setups from one to four telescopes with a variety of baseline distances (the maximum distance between telescopes in an array). As a basis for testing, the state-of-the-art CTA Prod-5\footnote{Version 2020-06-28.} \citep{CTAO2021} designs of the 12-metre Medium-Sized Telescope (MST) and 4-metre Small-Sized Telescope (SST) were chosen to be simulated as affordable solutions to provide good sensitivity above 0.1\,TeV. The altitude at which an IACT operates will affect various aspects of its observations \citep{Hassan2017}. Most of the Cherenkov light in an extensive air shower is produced 5 -- 10\,km above ground level. As it propagates towards the ground, the Cherenkov light pool spreads out, covering more area but with lower photon density. 
For showers whose core lands close to a telescope, higher altitude sites will produce images with greater intensity, generally leading to better event reconstruction and a lower energy threshold. However, the smaller Cherenkov light pool at higher altitudes results in a lower likelihood of showers being detected by multiple telescopes, and more distant showers would be seen with lower photon density, or not at all. Simulation runs were thus made at two different heights above sea level (0\,m and 1000\,m) to study the variation in performance given the site altitudes available in Australia. \subsection{Analysis} Tools from \verb|ctapipe| \citep{Kosack2021}, a prototype data processing framework for CTA, were used to perform the low-level event processing. Pixel intensities and arrival times were extracted from their waveforms using the \verb|Neighbour Peak Window Sum| method, which chooses extraction windows dependent on the surrounding pixel waveforms. The extraction windows were optimised for accurate noise extraction\footnote{Width/shift values of 6/3 samples were used for SSTs, and 4/1 samples for MSTs.}. Images were cleaned to remove pixels without Cherenkov light using a combination of the two-level tail cut approach\footnote{Core/boundary thresholds were chosen at 10/5 photoelectrons for MSTs and 3/1.5 for SSTs.} (to remove dim pixels not adjacent to bright pixels) and \verb|time delta cleaning| (to remove pixels with arrival times not coincident with those adjacent). Cleaned images were parameterised by the second-moment Hillas analysis \citep{hillas1985} to be used for removing low-quality images, energy reconstruction, direction reconstruction, and gamma/hadron separation. 
The following quality cuts were required to remove low-quality images: \begin{itemize} \item Leakage\footnote{An analogue for Cherenkov ellipse truncation by the edge of the camera sensor, defined as the ratio (post-cleaning) between summed pixel intensities at the edge of the camera and the total summed intensity.}: < 0.2 \item Total photoelectrons: > 70 for MSTs, 30 for SSTs \item Surviving pixels: > 5 \item Number of islands\footnote{Disjoint clusters of pixels post-cleaning.}: < 4 \end{itemize} Training and applying models for event reconstruction were performed with \verb|aict-tools| \citep{aict-tools}. Models for reconstructing energy, direction, and event conformity to a gamma ray (gamma score) were produced using random forests (RFs), a well-established technique for event reconstruction with IACTs \citep{Albert2008}. All available diffuse gamma rays were used for creating the models, and an equal number of diffuse protons were separated from the dataset to train on. The models were applied to the point-source gamma rays and the remaining diffuse protons. Events detected by multiple telescopes were reconstructed from the mean of individual telescope reconstructions, weighted by image intensity (total number of extracted photoelectrons). Additionally, a geometric direction reconstruction was performed along the weighted intersection of planes passing through the major axes of cleaned images, resulting in both geometric and RF-reconstructed directions and $\theta$ values (the distance between true and reconstructed source position). For a given array setup, there was a geometrically-reconstructed impact distance (distance from the array centre to the shower core on the ground) beyond which the geometric direction reconstruction performed worse on average than the RF direction reconstruction due to the more acute stereo angle (the angle formed by the projection of the shower axis in two cameras). 
Thus, for each telescope arrangement, geometric direction reconstruction was used within this calculated distance, and RF direction reconstruction was used beyond it (values presented in \autoref{rfcuts}). \begin{figure*}[t] \begin{center} \centering \includegraphics[width=0.8\textwidth]{pdf_figures/sensitivity_new_colours} \caption{50-hour differential point-source flux sensitivity for a 5$\sigma$ detection as a function of reconstructed gamma-ray energy. Bands represent the range of sensitivities across the studied altitudes (0\,m and 1000\,m) and baseline distances (80\,m to 277\,m). Cuts on gamma score and $\theta^{2}$ were applied for each energy bin to optimise sensitivity for each array setup. No cuts on the number of telescopes triggered were applied. The H.E.S.S. 50-hour sensitivity curve is shown for comparison \citep{holler2015}.} \label{sensitivity} \end{center} \end{figure*} A telescope array's differential sensitivity for a given energy range is defined as the minimum flux required for a point source to be observed with a significance of 5\,$\sigma$ after 50 hours. The significance was found using the Li \& Ma method \citep{Li1983} with one on-region and five off-regions (chosen equidistant around the camera centre). At least 10 excess on-region counts, 10 counts in the off regions, and an excess-to-background ratio of $>\frac{1}{20}$ were required for each energy bin, as per the standards adopted by CTA \citep{Hassan2017} and others. For each array setup, a minimum gamma score and maximum $\theta^{2}$ (where $\theta$ is the radius of on- and off-regions) were chosen to optimise sensitivity for each energy bin. The effective area was determined for each energy bin by multiplying the area over which the showers were thrown at the array by the ratio of the number of observed showers (post-cuts) and the total number of thrown showers.
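The significance calculation above follows Eq. 17 of Li & Ma (1983), which can be transcribed directly; a sketch (not our analysis code), with alpha the on/off exposure ratio (1/5 here, for five off-regions):

```python
import math

def li_ma_significance(n_on, n_off, alpha):
    # Li & Ma (1983), Eq. 17: significance of the on-region excess,
    # where alpha is the ratio of on-source to off-source exposure.
    n_tot = n_on + n_off
    term_on = n_on * math.log((1 + alpha) / alpha * n_on / n_tot)
    term_off = n_off * math.log((1 + alpha) * n_off / n_tot)
    return math.sqrt(2.0 * (term_on + term_off))

# e.g. 60 on-region counts against 125 counts in five off-regions:
sigma = li_ma_significance(60, 125, alpha=1 / 5)  # ~5.24, just above the 5-sigma threshold
```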
The angular resolution was calculated as the 68\textsuperscript{th} percentile of $\theta$ values (post-cuts) for each energy bin. As differential point-source sensitivity was the primary performance metric used to compare setups, the gamma score and $\theta^{2}$ cuts found to optimise sensitivity for each energy bin were applied to the datasets used in both angular resolution and effective area plots. Depending on the desired performance criteria, cuts could instead be chosen to optimise for angular resolution (with stronger gamma score cuts) or for effective area (with the loosening of either gamma score or $\theta^{2}$ cuts). \subsection{Number of Telescopes per Array} The number of telescopes in an array was varied from one to four. As expected there were improvements to all performance metrics with more telescopes across all altitudes, baseline distances, and telescope sizes. Increasing from 1 to 2 and 2 to 4 telescopes provided an approximately 2.5 times improvement in sensitivity (see \autoref{sensitivity}), roughly 0.05\textdegree\ better angular resolution (see \autoref{baseline}), and $\sim$30\% larger effective areas (see \autoref{effarea}). This is to be expected as more telescopes allow for larger ground coverage, more accurate stereoscopic direction reconstruction, and more estimates of particle type, source position, and energy for producing weighted-average predictions. \subsection{Altitude} Arrays at 1000\,m altitude had small improvements in low-energy performance over those at 0\,m (see \autoref{sens_alt}). The lowest energy bin in the sensitivity curves of all MST arrays extended to $\sim$120\,GeV when at 1000\,m altitude. Compared to those at 0\,m, arrays with four MSTs had an order of magnitude improvement at this energy, and the lowest energy bin for arrays with one or two MSTs was at $\sim$200\,GeV. 
SST arrays showed similar results, with an additional lower energy bin down to $\sim$930\,GeV for one SST, and down to $\sim$580\,GeV for four SSTs with a 139\,m baseline. Angular resolution at low energies improved by up to 50\% for both MST and SST arrays with two telescopes 80\,m apart, but otherwise there was negligible performance variation. Conversely, all arrays at 0\,m altitude had up to 25\% larger effective area above 1\,TeV and a higher energy bin in the sensitivity of monoscopic MST setups (extending up to $\sim$230\,TeV). This can be understood by the fact that Cherenkov light pools become broader and less photon-dense as they propagate. \subsection{Array Baseline} The performance with respect to baseline distance was compared, varying between 80\,m (only for two-telescope arrays), 139\,m, and 277\,m. When increasing the baseline from 80\,m to 277\,m, arrays of two telescopes had improvements in angular resolution of $\sim$50\% near the energy threshold (down to below 0.2\textdegree), and MSTs showed up to 25\% improvement above 1\,TeV (down to almost 0.1\textdegree) (see \autoref{baseline}). 3- and 4-telescope arrays had up to two-fold improvements in angular resolution across most energies (down to $\sim$0.05\textdegree) when doubling the triangular outer baseline from 139\,m to 277\,m. These results were due to the more perpendicular stereo angle allowing for improved geometric direction reconstruction. Larger baselines also corresponded to increases in effective area in all arrays across all energies (by $\sim$20-40\%). The effect of baseline distance on sensitivity was most notable in 4-telescope MST arrays at 1000\,m altitude, with improvements of up to 50\% below 700\,GeV with a 139\,m baseline compared to one of 277\,m, and a worsening of performance between 700\,GeV and 5\,TeV for the same comparison (see \autoref{sens_baseline}). Differences in sensitivity performance with respect to baseline for other arrays were otherwise small.
In the 1000\,m SST arrays, shorter baselines also resulted in improved angular resolution below $\sim$3\,TeV. For these comparatively dim showers near the energy threshold, this result can be understood as an effect of smaller Cherenkov light pools at higher altitudes. With a larger baseline, a higher proportion of events are those that land between the telescopes and have obtuse stereo angles. This results in poorer geometric reconstruction on average, and up to 50\% worse angular resolution at these lower energies. \subsection{SST vs MST} The largest difference in performance between telescope types was in energy threshold. SST arrays had energy thresholds of $\sim$1.2\,TeV whereas MST arrays had thresholds of $\sim$300\,GeV. Angular resolution was comparable for a given number of telescopes, with marginal improvements for stereoscopic MST arrays over stereoscopic SST arrays (see \autoref{baseline}). A single MST provided a larger effective area below 40\,TeV than four SSTs, and for a given number of telescopes an equivalent MST array improved on it four-fold (see \autoref{effarea}). Below 10\,TeV one MST had similar sensitivity to two SSTs and two MSTs were comparable to four SSTs (see \autoref{sensitivity}). \subsection{Sensitivity vs Time} \begin{figure}[t] \begin{center} \centering \includegraphics[width=0.9\columnwidth]{pdf_figures/sensvtime} \caption{Differential point-source flux sensitivity for a 5$\sigma$ detection as a function of observation time for selected energy bins for arrays at 0\,m altitude with baselines of 277\,m. Cuts on gamma score and $\theta^{2}$ were applied for each energy bin to optimise sensitivity for each array setup. The SST lacks a 320\,GeV line as it is outside the detectable energy range. The sensitivity of \emph{Fermi}-LAT (grey) is shown for comparison.} \label{sensvtime} \end{center} \end{figure} \autoref{sensvtime} shows the lowest flux detectable at 5$\sigma$ significance as a function of time. 
This metric is of note as it pertains to the ability to probe short-timescale flux variations and transient events. MST arrays were several orders of magnitude more sensitive than \emph{Fermi}-LAT at $\sim$320\,GeV. The SST array's sensitivity does not extend this low, highlighting that the main benefit of the MST array is its lower energy threshold. WCDs such as those to be employed at SWGO have sensitivities ranging between $\sim$10\textsuperscript{2}--10\textsuperscript{-3} erg s\textsuperscript{-1} cm\textsuperscript{-2} at $\sim$300\,GeV over the temporal range shown \citep{Albert2019}. This plot clearly highlights the benefits of an IACT array over alternate methods for observing faint, short-lived, or quickly varying transient phenomena. \section{Direction reconstruction cuts} Presented below are the impact distance cuts applied when using a combination of geometric and Random Forest models for direction reconstruction. \begin{table}[!htb] \caption{Impact distance (metres) from the centre of a subset of telescopes within an array to the shower core above which RF direction reconstruction performs better on average than geometric direction reconstruction.} \centering \begin{tabular}{c|cccc} \hline & \multicolumn{2}{c}{MST} & \multicolumn{2}{c}{SST} \\ Array & 0m & 1000m & 0m & 1000m \\ \hline \begin{tabular}[c]{@{}c@{}}2 tels,\\ 80m baseline\end{tabular} & 0 & 110 & 60 & 0 \\ \hline \begin{tabular}[c]{@{}c@{}}2 tels,\\ 139m baseline\end{tabular} & 130 & 120 & 120 & 90 \\ \hline \begin{tabular}[c]{@{}c@{}}2 + central tels,\\ 139m baseline\end{tabular} & 150 & 150 & 120 & 100 \\ \hline \begin{tabular}[c]{@{}c@{}}2 tels,\\ 277m baseline\end{tabular} & 170 & 310 & 110 & 120 \\ \hline \begin{tabular}[c]{@{}c@{}}2 + central tels,\\ 277m baseline\end{tabular} & 390 & 370 & 170 & 200 \\ \hline \begin{tabular}[c]{@{}c@{}}3 or 4 tels,\\ 139m baseline\end{tabular} & 310 & 290 & 150 & 150 \\ \hline \begin{tabular}[c]{@{}c@{}}3 or 4 tels,\\ 277m baseline\end{tabular} & 420 & 410
& 860 & 440 \\ \hline \end{tabular} \label{rfcuts} \end{table} \section{Sensitivity vs altitude and baseline} Presented here are sensitivity curves for specific setups, showing the effects of altitude and baseline distance. These follow the method described in \autoref{sensitivity}. \\ \\ \begin{figure}[!htb] \begin{center} \centering \includegraphics[width=0.95\columnwidth]{pdf_figures/sens_alt} \caption{Sensitivity for 0\,m (dotted) and 1000\,m (solid) altitude arrays showing the improvement at low energies for the 1000\,m altitude arrays. 4-telescope arrays had a 139\,m baseline.} \label{sens_alt} \end{center} \end{figure} \begin{figure}[!htb] \begin{center} \centering \includegraphics[width=0.95\columnwidth]{pdf_figures/sens_baseline} \caption{Sensitivity for 1000\,m altitude arrays with baselines of 80\,m (dotted), 139\,m (dashed), and 277\,m (solid) showing minimal differences in sensitivity performance due to baseline distance.} \label{sens_baseline} \end{center} \end{figure} \section{Short GRB lightcurves} Presented here are simulated lightcurves for a GRB~160821B-like ``short GRB'' event. While the flux quickly decays, this shows the suitability of a small MST array for detecting and monitoring such events. \begin{figure}[!htb] \begin{center} \centering \includegraphics[width=0.85\columnwidth]{pdf_figures/shortgrblightcurve} \caption{Simulated light curves for a GRB 160821B-like event for 0\,m altitude, 277\,m baseline arrays. Intrinsic source flux was based on the model in \citet{MAGICCollaboration2020} and scaled to match the flux seen by MAGIC, with temporal flux decay following $F(t) \propto t^{-0.8}$. The mean background rates per bin were 6/4/1 protons and electrons per minute for 4$\times$MST/2$\times$MST/4$\times$SST.} \label{shortgrblightcurve} \end{center} \end{figure}
If you thought the experience of the late Mulrunji Doomadgee was a one-off, watch this video about some police in Queensland. In all this talk about Liberal staffers writing chapters for Liberal MPs who plagiarise from Kiwi businessmen, let's not forget the poor folk at Melbourne University Press. If the MUP website is any indicator, the publisher must be wishing a certain book about the Liberal Party's future would just disappear deeper into the catalogue. As of 4:45pm yesterday, the new book is absent from the MUP homepage, including from the news feed on the right-hand side and the "November Highlights From MUP". The page dealing with "News and Forthcoming Titles" does mention the book under its "November 2008" section, though it appears below forthcoming titles on Graham Kennedy, Gough Whitlam and cricket. The "Events Calendar" page doesn't mention the book's launch, preferring launches of the books about Gough Whitlam, Graham Kennedy and feminism. Compare this to the huge billing Peter Costello's memoirs continue to receive in the homepage's news feed and Photo Gallery, and to the huge success of Wayne Errington and Peter Van Onselen's excellent biography of former Prime Minister John Winston Howard. If a professional outfit like MUP can't market a book about the future of the Liberal Party, who can? Indeed, if the MUP website is any indication, the future of the Liberal Party seems rather uncertain. An edited version of this piece was first published in the Crikey daily alert for 30 October 2008.
Q: VSU 2012 - Release project error on another computer (SQL CE) Recently, I finished my large application project. I used SQL Server Compact as the database because I had read that it is a local database, so I was very happy :). Trouble came as soon as I finally finished the application. The first thing I did was try to open it on another computer without the development software (Visual Studio etc.) installed, but it failed with a SQL CE error. I searched for a solution for a week, but everything I found was for older versions of Visual Studio. I have Visual Studio 2012 Ultimate Edition (I'm writing in C++) and I don't know what to do now. System.Data.SqlServerCe.SqlCeException (0x80004005): Unable to load the native components of SQL Server Compact corresponding to the ADO.NET provider version 8876. Install the correct version of SQL Server Compact. See article 974247 in the Knowledge Base for more details. Please help me ;X A: SQL Server Compact is included on the Visual Studio 2012 DVD. You can start the installation using the following steps: *Insert your Visual Studio 2012 DVD *Open the DVD in Windows Explorer *Navigate to the packages folder *Navigate to the SSCE40 folder *Run SSCERuntime_x64-???.exe for 64-bit or SSCERuntime_x86-???.exe for 32-bit, where ??? indicates the language. For instance, the 64-bit English-language version is named SSCERuntime_x64-enu.exe. Choose the installation options that meet your requirements and start the installation. If you do not have your Visual Studio 2012 DVD handy, you can download SQL Server Compact from the MSDN website.
端午安康! May you have peace, health, sweetness, wealth and happiness. The children celebrated the Dragon Boat Festival through singing and dancing to festival songs and making 粽子 (zongzi), also known as sticky rice dumplings, on 14th June 2021. The Dragon Boat Festival commemorates Qu Yuan, a poet and political figure of the state of Chu in ancient China during the third century BC. We learnt about the history and practices of the festival through videos and stories. The children were also introduced to some of the main ingredients of the rice dumpling, as well as the purpose behind it. They helped to scoop and pour the fillings into the folded bamboo leaves before the teacher tied them up into triangular shapes. Additionally, the children were given a simple matching activity, and had fun challenging one another to a dragon boat race. Singapore's 56th Birthday on 6th August 2021 Children's Day – 1st October 2021
Anne Rankine was the youngest daughter of a tenant farmer, John Rankine of Adamhill Farm, which lay two miles from the Burns family farm at Lochlea. She married John Merry, an inn-keeper in Cumnock, on 29 December 1782, and is buried in Cumnock old churchyard. She maintained that she was the 'Annie' of Robert Burns's song 'The Rigs o' Barley', although some maintain that she was merely trying to encourage business at their inn at Cumnock. Her father was brother-in-law to the poet John Lapraik. Life and character As stated, Anne was the daughter of a tenant farmer, a friend of Robert Burns, who is described by him as "rough, rude, ready-witted Rankine". She married the inn-keeper John Merry, who died in 1802, and thereafter she ran the inn herself until she died in 1843, aged 84. Burns lodged at the inn in August 1786, four years after the song was written. Association with Robert Burns She maintained that she was the Annie of 'The Rigs o' Barley', although she married in the same year the song was written. Burns sometimes escorted her to her father's house from festive gatherings in the neighbourhood. The poet was said to have been passionately fond of her, and indeed he gave her a lock of his hair and one of the miniature paintings of himself, which she treasured all her life together with the song. Burns himself is silent on the matter of the heroine's identity. The poet wrote 'The Rigs o' Barley' quite early in his career, in 1782. It is said that Anne met the poet soon after the song was published and told him that she had not expected to be celebrated in print, to which he replied "Oh ay, I was just wanting to give you a cast among the lave!" In 1817 she was asked if she remembered nights with Burns among the rigs o' barley and she said "No!", adding however that "I mind o' many a happy night wi' him, though."
Micro-history The rigs referred to in the song were part of a traditional drainage system based on dividing fields into ridges around three feet high and ploughing them from end to end; the resulting furrows drained excess water from the ridges above them, here planted with corn. See also Jean Armour Lesley Baillie Alison Begbie Nelly Blair May Cameron Mary Campbell (Highland Mary) Jenny Clow Jean Gardner Jean Glover Helen Hyslop Kate Kemp Nelly Kilpatrick Jessie Lewars Elizabeth Paton Isabella Steven Peggy Thompson
Trust in organisations and institutions Q. How much trust do you have in the following institutions and organisations? Respondents answered on a four-point scale: a lot of trust, some trust, a little trust, or no trust; 'Total trust' is an aggregate figure achieved by adding 'A lot of trust' and 'Some trust'. Institutions asked about included the High Court, the ABC, the Reserve Bank, charitable organisations, environment groups, the Commonwealth Public Service, online news media, TV news media and Federal Parliament. [The table of percentages has not been preserved.] * The Commonwealth Public Service figure is from a question asked on 6 Feb 12. Overall, there have been small increases in trust across all organisations since this question was last asked in June. However, there has been no significant change in the rankings. Respondents had most trust in the High Court (63%), the ABC (59%), charitable organisations (53%) and the Reserve Bank (53%). They had least trust in political parties (16%), trade unions (23%), business groups (25%), State Parliaments (25%), Federal Parliament (26%) and TV news media (26%). Compared to the average, Labor voters had more trust in Federal Parliament (40%), the High Court (67%), the ABC (68%), the Reserve Bank (61%), the Commonwealth Public Service (42%), trade unions (41%), environment groups (48%) and local councils (39%). Liberal/National voters, compared to the average, had more trust in religious organisations (37%) and business groups (32%) but less trust in Federal Parliament (21%), the Commonwealth Public Service (28%), trade unions (14%) and environment groups (27%).
package com.androidzeitgeist.ani.discovery;

import static org.mockito.Mockito.doNothing;
import static org.mockito.Mockito.doReturn;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.spy;
import static org.mockito.Mockito.verify;

import org.junit.Assert;
import org.junit.Test;

/**
 * Unit tests for the {@link Discovery} class.
 */
public class DiscoveryTest {
    /**
     * Calling {@link Discovery#enable(DiscoveryListener)} is a shortcut for calling
     * {@link Discovery#setDisoveryListener(DiscoveryListener)} and
     * {@link Discovery#enable()}.
     */
    @Test
    public void testEnableWithListenerIsShortcut() throws Exception {
        DiscoveryListener listener = mock(DiscoveryListener.class);

        Discovery discovery = spy(new Discovery());
        doNothing().when(discovery).enable();

        discovery.enable(listener);

        verify(discovery).setDisoveryListener(listener);
        verify(discovery).enable();
    }

    /**
     * Calling {@link Discovery#enable()} without setting a listener will throw an
     * {@link IllegalStateException}.
     */
    @Test(expected = IllegalStateException.class)
    public void testEnableThrowsExceptionIfNoListenerIsSet() throws Exception {
        Discovery discovery = new Discovery();
        discovery.enable();
    }

    /**
     * Calling {@link Discovery#disable()} without a matching call to
     * {@link Discovery#enable()} will throw an {@link IllegalAccessError}.
     */
    @Test(expected = IllegalAccessError.class)
    public void testDisableWithoutEnablingWillThrowException() throws Exception {
        Discovery discovery = new Discovery();
        discovery.disable();
    }

    /**
     * Calling {@link Discovery#enable()} will create and start a {@link DiscoveryThread}.
     */
    @Test
    public void testEnableWillCreateAndStartDiscoveryThread() throws Exception {
        DiscoveryThread thread = mock(DiscoveryThread.class);
        DiscoveryListener listener = mock(DiscoveryListener.class);

        Discovery discovery = spy(new Discovery());
        discovery.setDisoveryListener(listener);
        doReturn(thread).when(discovery).createDiscoveryThread();

        discovery.enable();

        verify(discovery).createDiscoveryThread();
        verify(thread).start();
    }

    /**
     * Calling {@link Discovery#enable()} twice without calling {@link Discovery#disable()}
     * in between throws an {@link IllegalAccessError}.
     */
    @Test(expected = IllegalAccessError.class)
    public void testEnablingDiscoveryTwiceThrowsException() throws Exception {
        DiscoveryThread thread = mock(DiscoveryThread.class);
        DiscoveryListener listener = mock(DiscoveryListener.class);

        Discovery discovery = spy(new Discovery());
        discovery.setDisoveryListener(listener);
        doReturn(thread).when(discovery).createDiscoveryThread();

        discovery.enable();
        discovery.enable();
    }

    /**
     * Calling {@link Discovery#disable()} will call {@link DiscoveryThread#stopDiscovery()}.
     */
    @Test
    public void testCallingDisableWillCallDisableDiscoveryOnThread() throws Exception {
        DiscoveryThread thread = mock(DiscoveryThread.class);
        DiscoveryListener listener = mock(DiscoveryListener.class);

        Discovery discovery = spy(new Discovery());
        discovery.setDisoveryListener(listener);
        doReturn(thread).when(discovery).createDiscoveryThread();

        discovery.enable();
        discovery.disable();

        verify(thread).start();
        verify(thread).stopDiscovery();
    }

    /**
     * Calling {@link Discovery#createDiscoveryThread()} returns a new {@link DiscoveryThread}
     * instance on every call.
     */
    @Test
    public void testCreateDiscoveryThreadReturnsNewInstance() {
        Discovery discovery = new Discovery();

        DiscoveryThread thread1 = discovery.createDiscoveryThread();
        DiscoveryThread thread2 = discovery.createDiscoveryThread();

        Assert.assertNotNull(thread1);
        Assert.assertNotNull(thread2);
        Assert.assertNotSame(thread1, thread2);
    }
}
Sobaki V Kosmose

The band Sobaki V Kosmose was formed in 2000 and could be described as an indie act; the band has achieved everything on its own. First, Sobaki V Kosmose became the most prominent ska band in its native city of Kharkov; then the band began to attract attention across Ukraine, and its growth continues. Today the band has extensive experience playing gigs in clubs and other venues. Sobaki V Kosmose took part in festivals such as "Raz.Live" (2003, 2005), "Musician Island" and "Tavrian Games". In 2003 the band recorded its first album, "Vafli" (Waffles). As of December 2007, Sobaki V Kosmose were awaiting the release of their new album "Gruppa ishet producera" (The Band Is Looking for a Producer). Some of the band's songs have been included in various compilations: "Rock Format" (2003), "SKA Unity Review" (2006), "Serhiy Zhadan and the Choir of Mongolian Policemen" (2007) and "The First Musical 3" (2007). The last compilation marked the beginning of a collaboration with the company "Odyssey". The band now performs frequently, and the number of its fans is growing swiftly.

information from: http://www.myspace.com/sobakivkosmose
photo from official site: http://sobaki.kh.ua/new/

Serhiy Zhadan, Sobaki V Kosmose. Sportyvnyy klub armiji (Sports Club of the Army)
...Consequently, we have in one pot epatage, sarcasm, philosophy, lyric poetry, ska, reggae, punk, folk singing and pure drive, both in the words and in the music. If you prefer art that is not too polished, this is an album that should not escape your attention.

Sobaki V Kosmose, Serhiy Zhadan. Byjsya za neji /book-comics-CD/ (Fight for Her)
Serhiy Zhadan is one of the most "flexible" contemporary authors in Ukraine when it comes to the means of communicating with the public. This is true both of his live performances and of the forms his publications take. "Fight for Her" is just another experiment of this kind: cool poetry, tough, "raw" music, and interesting graphics...

Yevshanzillya 4. Rock Collection 2010 (mp3)
In 2010, following what is now a tradition, "Yevshanzillya" (for the fourth year in succession) became the first rock compilation of the New Year. And, again as usual, the main focus is not on pop-rock and heavy alternative, as in most rock compilations, but on a wide range of other rock music.
Chiemsee is a municipality in Germany, in the federal state of Bavaria, in the administrative region of Upper Bavaria, in the region of Südostoberbayern, in the district of Rosenheim; it belongs to the administrative community of Breitbrunn. It lies about 25 km west of Rosenheim. The municipality consists of three islands in Lake Chiemsee: Herrenchiemsee, Frauenchiemsee and Krautinsel. It is one of the smaller Bavarian municipalities. Demographics Politics The mayor of the municipality is Georg Huber of the FWG; the municipal council consists of 8 members. References Rosenheim district Municipalities in Bavaria
\section{Introduction}
\subsection{Background}
Coronavirus disease 2019 (COVID-19) has changed the world so profoundly that a predictable influx of epidemiological models has been racing to accurately describe the movement of the disease since its onset. As the first wave of vaccines reached the general public in early 2021, so did a new wave of models attempting to describe vaccine distribution and the different strategies followed around the world. Vaccination helps develop immunity to the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) and possibly limits the spread of the disease through communities (the extent to which vaccines restrict disease spread was not yet clear at the time of writing). However, decisions regarding who should be offered COVID-19 vaccines first when supplies are limited remain debatable. The institution tasked with minimizing negative outcomes due to disease in the US is the Centers for Disease Control and Prevention (CDC). The phase allocation plans for COVID-19 vaccines provided by the CDC have addressed the importance of keeping a balance between the prevention of morbidity and mortality and the preservation of societal functioning \cite{dooling2020advisory}. Residents of long-term care facilities are at the highest risk of infection and severe illness from COVID-19, which places them at the top of vaccination plans aimed at preventing morbidity and mortality. While less prone to severe infection, healthcare and other front-line essential workers are at higher risk of infection, and their well-being is critical for the continuity of the essential functions of society. As a result, different social groups are given priority for vaccination around the world. For example, most states in the U.S. and most of Europe prioritize senior groups, but Indonesia first gave vaccines to essential workers.
Speculation about whom to prioritize varies widely: the general consensus is that healthcare workers and the elderly should come first, but there are many confounding factors. For example, disadvantaged communities can have trouble accessing the vaccine and thus may need to be prioritized as well \cite{subbaraman2020gets}. A closely related question is how different vaccination plans affect the transmission of a disease over time and the achievement of herd immunity (resistance to the spread of an infectious disease within a population due to a large proportion of immunity acquired through previous infection or vaccination). A leading question is the following: does vaccinating communities with heightened interaction rates but a lower mortality rate cause fewer fatalities in the long run?

In this paper we put a novel spin on an existing method of modeling disease spread. We couple an extended SIR model with an optimization problem in order to show that such a method can not only test possible outcomes, but also directly inform policy decisions such as which subset of people to vaccinate first. Our approach couples an age-based interaction matrix with public US job records in order to develop vaccination schedules for various states in the United States.

\subsection{Related works}
Epidemiological models date back to 1932. The seminal work \cite{kermack1932contributions} of Kermack and McKendrick subdivided populations into compartments and provided differential equations driven by infection and recovery rates. The model is also called SIR after the names of its compartments: susceptible, infected and removed.
Many generalizations were proposed, including: 1) time-dependent parameters, travel and zoonotic infections \cite{chen2020time,ruktanonchai2020assessing}; 2) an increased number of compartments to capture disease progression \cite{giordano2020modelling}; 3) age structure and spatially distributed populations \cite{Britton846,colombo2020well,colombo2020age,Chyba2020a,zhang2020evolving,zhang2020changes}; and 4) in-host dynamics \cite{bellomo2020multi}. For the specific case of New Jersey, a compartmental model was instrumental in quickly identifying the hospital bed needs by county \cite{allred2020regional}, and was cited in a letter from New Jersey Governor Phil Murphy \cite{Murphy2020}. Compartmental models are useful for large-scale optimization, a fact we take advantage of in the optimization of vaccine schedules using our model. However, they have limitations: they lack a description of in-host dynamics, are based only on aggregated data, and have difficulty representing the spatial component of the dynamics \cite{KimQuaini2020}. Various works have discussed the usefulness of mathematical epidemiological models \cite{Hyman17,metcalf2020mathematical,vespignani2020modelling}. From the onset of the virus, researchers raced to build models describing how populations could slow the disease using Non-Pharmaceutical Interventions (NPIs). With uncertainty surrounding the duration of vaccine efficacy, modeling different possible futures allows us a glimpse at the policy decisions that should be made. In \cite{borchering2021modeling} a model is developed to estimate future case counts in the United States by varying projected vaccine efficacy and NPI policies such as social distancing, mask wearing, and testing.
In \cite{luo2020managing}, a spatial epidemic model is coupled with an optimization technique to resolve safety-and-mobility trade-offs in epidemic response plans.\\ In June 2020, \cite{block2020social} built an agent-based model to predict the effects of social distancing and isolation on the transmission of a virus. Another agent-based model, developed by \cite{patel2021association}, is used to simulate different vaccination strategies in North Carolina based on the projected efficacy of a vaccine in fighting the virus. It is important to note that most of these models consider the minimization of deaths. As seen by the United States stimulus programs, which are programs developed by the US government to jump-start the economy through direct payments to citizens, loans to companies and more \cite{StimProg}, this pandemic has had a huge economic impact in many countries as well. In \cite{shaker2021cost}, rather than using deaths as the cost, a model is developed to minimize the economic cost to the governments of the United Kingdom, Canada, and the United States. In late 2020, as vaccinations started to become available, it became clear that this would be the driving intervention in the COVID-19 pandemic. In \cite{matrajt2021vaccine}, an age-based model was developed before the release of the vaccines to predict their impact on the virus. The effectiveness of full vaccine coverage in preventing hospitalization in the age group $65-75$, for example, was estimated at $96\%$ for Pfizer-BioNTech, $96\%$ for Moderna, and $84\%$ for Janssen according to \cite{MolineEtal}. In \cite{brosh2021bnt162b2}, a study in Israel, it is shown that patients with more comorbidities are much more likely than average to develop serious infection. Researchers have also looked for optimal vaccination strategies and estimated the effect the vaccine would have on certain populations.
In fact, in \cite{Roghani2021} it is mentioned that the effects of the vaccine on a given population are not observable for quite some time. Models are therefore imperative for predicting and optimizing the best strategy. In \cite{hoertel2021optimizing}, a model was made to examine how long it would take for vaccination to be a strong enough intervention to lift all NPIs. While evaluating the efficacy of vaccines is important, determining a vaccination schedule is crucial for each governing body during a pandemic. In \cite{bubar2021model}, a compartmental model is used to address age-based questions about vaccine distribution. The seemingly strongest strategy to minimize deaths is the obvious one of vaccinating from the oldest down to the youngest populations. Similarly, in \cite{foy2021comparing} a compartmental model is developed to answer the question of how to distribute vaccines in India. One key difference in the model presented in this paper is the stages at which a person can be vaccinated. In \cite{foy2021comparing}, the vaccine is given only to the susceptible population, while in our model we allow the vaccine to be distributed to exposed and infected populations as well. Previous work on pandemic control also includes: quantifying the effect of containment measures \cite{Gatto2020}, determining controllability using daily data \cite{casella2020can}, considering individual reactions to non-pharmaceutical interventions \cite{lin2020conceptual}, determining the best timing of interventions \cite{gevertz2021novel,perkinsoptimal}, and including testing and quarantining \cite{aronna2021model}. Moreover, some of the considered interventions were already modeled for other viruses such as human papillomavirus \cite{BROWN2011126,saldana2019optimal}.
Finally, some papers focused on the economic cost considering uncertainty in data \cite{gollier2020pandemic}, the cost of lockdown \cite{acemoglu2020optimal,alvarez2020simple}, and hospital and intensive care unit occupancy \cite{Chyba2020b,Portugal21}.

\subsection{Contribution of the paper}
The focus of this paper is to develop a finite-dimensional model which couples a system of ordinary differential equations with a numerical optimizer to design optimal vaccination strategies to minimize the mortality amongst a population due to the spread of COVID-19. The control in question is the vaccination strategy, and the objective is the minimization of mortality amongst the population. Many NPIs were developed early in the pandemic to curb the spread of the virus, such as social distancing, contact tracing and testing. While these methods are extremely beneficial, their economic costs rise almost linearly with their frequency of use. Furthermore, if policies are lifted before the disease is all but eradicated, we see an uptick in both cases and deaths soon after. Therefore it seems the most beneficial intervention, both in mortality rate and economically, will be the vaccine. Not only have the most prevalent vaccines been shown to be very effective, but a vaccinated person also seems to hold relatively good immunity to sickness, and possibly a lower rate of transmission and infectivity, for much longer than a government can financially afford to keep the other interventions in place.
Four key factors distinguish our approach from others:\\ 1) By splitting the population into age ranges, we are able to develop very specific interaction matrices by age group.\\ 2) Using CDC work data \cite{cdc}, we are able to split our age groups into essential-worker and non-essential-worker categories to get a more realistic picture of interactions in the United States.\\ 3) Our model allows vaccinated people to become infected (but not seriously sick). This decision was made after it became clear that, while a vaccinated person has strong immunity to sickness, it is unclear how strong the vaccine is in preventing transmission of the virus.\\ 4) Our model is coupled with a complete optimization approach thanks to the CasADi software \cite{Anderson2019}, thus finding the true optimal solution of our model rather than running various simulations with preset schedules.

This project began with an augmented age-based SEIRV (Susceptible, Exposed, Infected, Removed, Vaccinated) model which uses an interaction matrix to describe interactions between age groups. This decision was based on the ample data suggesting that the mortality rate for the COVID-19 pandemic is (as expected) closely tied to age. With ``work from home orders'' and ``social distancing mandates'' being put in place, and, conversely, essential workers being unable to follow such mandates, a simple way to model these interactions is to put the essential workers of each group into their own categories, for which interactions are less affected by mandates. The time horizon is chosen to be $180$ days, as this seems to be a realistic amount of time to vaccinate the majority of the eligible population (with the optimistic point of view that the entire population would like to be vaccinated). This point of view has proven, in many countries, to be very optimistic, as made clear by vaccine hesitancy and outright opposition in many parts of the world (including in the United States).
As booster shots are developed and approved at the government level, one sees that human behavior is leading to a lag in herd immunity. Our interaction matrices are developed using data from \cite{prem2017projecting}. We use age-based death rates derived from \cite{cdc} in order to estimate the expected number of deaths for a given vaccination schedule.\\ We then adjust the model slightly to become an augmented age-and-work-based SEIRV model. The big question amongst policy makers seems to be whether it is optimal to prioritize essential workers or to prioritize the oldest populations first. We capture the competition between these two ideas by breaking our model up using labor force statistics from the Bureau of Labor Statistics \cite{Laborstat}. We split all working-age populations into ``essential'' and ``non-essential'' categories in order to see whether the optimal policy changes when the interactions of essential workers are scaled up. We then further develop the model by removing the single vaccinated compartment and instead including a vaccinated counterpart for each of the usual compartments: Susceptible Vaccinated, Exposed Vaccinated, Infected Vaccinated and Removed Vaccinated. This decision was made in order to make the model better fit the physical properties of the population; a vaccinated person has a high enough probability of becoming infectious and spreading the virus that we felt it crucial to include such dynamics. With this change, the user can capture the real-life implications of an unknowing exposed or infected individual (or a knowing one, if there is a perceived health benefit) getting vaccinated. This new model also allows a user the ability to capture an elusive interaction: vaccinated infected-vaccinated susceptible.
While there is not much literature about the transmission of the virus amongst vaccinated susceptible populations for COVID-19, it is clearer than ever that there are two driving forces of a virus: transmissibility and mortality. If a virus spreads with ease throughout the vaccinated compartments, then those in the vaccinated compartments, while assumed safe from serious illness, are not safe from being hosts through which the virus can travel to a more vulnerable host, thus causing serious illness. Our model opens the door to tests which could show how these interactions between the vulnerable and the vaccinated spreader affect the population as a whole. Once our model is built, we tune the parameters to state-specific data (in this paper we show results from both New Jersey and Florida) in order to test vaccination schedules. We then formulate an optimal control problem, consisting of minimizing the total number of deaths over the time horizon, which we numerically solve by a direct transcription using \cite{Anderson2019}, an open-source tool for nonlinear optimization and algorithmic differentiation. Our numerical simulations produce an optimal vaccination schedule for the given parameters and data set. As will be mentioned later, it is uncertain what exactly constitutes an ``essential worker'' in each state, a value as elusive as the initial replication rate and the scaling of the interaction matrix. Because of this, we test several scenarios with various choices as a sensitivity analysis in order to paint a broader picture. The most striking yet comforting result is that in every case, the optimal solution seems to be the one chosen by most policy makers during this pandemic: to vaccinate the oldest population first and then work down the list, vaccinating based on age while always vaccinating the essential worker population first.
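Although our schedules are obtained by solving the full nonlinear program via \cite{Anderson2019}, the trade-off the optimizer resolves can be illustrated with a deliberately tiny simulation. The Python sketch below compares two fixed schedules, seniors first versus essential workers first, on a hypothetical two-group model; the populations, contact rates, infection-fatality ratios and all other numbers are invented for illustration and are not the calibrated values used in this paper.

```python
import numpy as np

def simulate_deaths(order, days=180, doses_per_day=3000.0):
    """Toy forward-Euler simulation of vaccination under a priority order.

    order : group indices; each group is fully vaccinated before doses
            move to the next one. All parameters are hypothetical.
    """
    N = np.array([6e5, 1e5])           # [essential workers, seniors]
    ifr = np.array([0.001, 0.05])      # invented infection-fatality ratios
    contacts = np.array([12.0, 4.0])   # invented daily contact rates
    beta, gamma = 0.025, 1.0 / 10.0    # transmission and recovery rates
    S, I = N.copy(), np.array([500.0, 50.0])
    deaths = 0.0
    for _ in range(days):
        # Allocate today's doses following the priority order.
        w, budget = np.zeros(2), doses_per_day
        for g in order:
            w[g] = min(budget, S[g])
            budget -= w[g]
        # Force of infection weighted by contact rates.
        lam = beta * contacts * (contacts @ I) / (contacts @ N)
        new_inf = np.minimum(lam * S, S - w)   # cannot exceed remaining S
        deaths += float(ifr @ new_inf)
        S = S - new_inf - w
        I = I + new_inf - gamma * I
    return deaths

deaths_seniors_first = simulate_deaths(order=[1, 0])
deaths_workers_first = simulate_deaths(order=[0, 1])
print(deaths_seniors_first, deaths_workers_first)
```

Running both orders side by side is the simplest form of the sensitivity analysis described above; the full model replaces this toy dynamic with the age-and-work structured system and searches over all daily allocations rather than two fixed orders.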
While this policy seems to be robust, such a choice is not always easy, as seen in countries which chose to vaccinate essential workers first at the onset of vaccine production. The tool that we have designed could be useful not only for vaccination distribution in this pandemic, but also as a blueprint for future vaccination schedule optimization questions.

\section{Discussion on Agent Based and Compartmental Models}
There are generally two types of epidemiological models used to examine the trends and fluctuations of populations due to a disease: agent-based models (ABM) and compartmental models (CM). These models differ in a few ways, starting with how they organize a population: a CM focuses on capturing the collective behavior of a population or sub-population as a whole. For example, in a standard SIR CM we have three compartments: Susceptible, Infected and Removed. We define differential equations which govern the movements of the population amongst these three compartments. At each time step, the entire population is defined as the sum of the three categories, with each member of the population's only defining characteristic being the compartment that they are in. CMs have been extensively used to model infectious diseases, including pertussis~\cite{rohani2010contact, de2012estimation}, measles~\cite{bokler1993chaos}, Ebola~\cite{lekone2006statistical, mamo2015mathematical}, HIV~\cite{ozalp2011fractional, demirci2011fractional}, tuberculosis~\cite{side2016global, bowong2009mathematical}, cholera~\cite{crooks2014agent} and others. CM models were also successfully used for COVID-19, for instance in \cite{zhou2020clinical, Branas2020, Ferguson2020, Giordano2020, Gatto202004978}. In contrast, an ABM, as the name suggests, focuses on the behavior of each individual in the system. Each individual is considered an ``agent'', and the state of an agent is governed by probabilities based on interactions with other agents.
One immediately sees both the merits and the restrictions of an ABM. In considering each agent and all of their defining characteristics, we are able to gain a very detailed view of the progression of a disease, all the way down to knowing a few defining characteristics about each and every agent. However, the restriction is that in order to get such a detailed view, one must input many details about the system. This type of model works best for a population which the researcher is able to describe through many choices and consequences made by each agent and the probabilities which govern these choices on a large scale. As such, an ABM requires large amounts of precise data in order to paint an accurate picture of real-world dynamics. This raises concerns about the robustness of a model, both in the accuracy of its inputs and in its transferability to different data sets. This highlights another key difference between the two model types, namely the use of probability. An ABM is in essence a stochastic model using probabilities to govern behavior. Therefore, when using an ABM one must complete multiple simulations and take the average of the solutions in order to ensure an outlier solution has not been found. For example, in \cite{hoertel2021optimizing}, an ABM is used to model vaccine strategy strength, testing whether the vaccines would be robust enough to allow for the removal of non-pharmaceutical interventions in France. The model is run 200 times and the results are averaged in order to paint the most probable picture. An ABM, once built, has the capacity to adjust efficiently for micro-scale policy changes in a population, such as social distancing or mask wearing regulations. In \cite{kerr2020covasim}, the authors highlight the ease with which their ABM can be adapted to policy change.
However, requiring each member of a population to have such complex characteristics requires the model to remember the state of each individual at each time step, and can result in a very costly model. This is where the CM becomes more efficient. While a CM is less easily adapted to new policy, it can scale to any population at much lower cost than an ABM, because one need not track each individual in the population. For example, in \cite{hoertel2021optimizing}, the ABM being used considers only $500,000$ agents despite the population of France being around 67 million people at the time. This need for scaling and then re-scaling often still allows for an accurate model given realistic parameters, but highlights the computational power needed to run an ABM at full capacity. An advantage here for the CM is that the substitution of a different data set, for example using the same model for two different states in the United States, results in a very minor difference in cost. This is a reason why in this paper we describe the use of a CM to model vaccine distribution in the United States. While different states have vastly different populations and age distributions, the same optimization problem can be solved on them using the same CM with a predictable difference in cost. In this paper we use data from the 50 states, treating each state's vaccination schedule as its own optimization problem. The fluctuating populations therefore suggest that a CM is better suited here, and thus it is what we employ.

\section{An age-structured compartmental model}
Conventional epidemiological models assume the homogeneity of different social groups as well as the effectiveness of vaccines. To personalize the vaccination phase allocation with a balance of mortality prevention and societal function preservation, we first build an age-and-work structured model.
The entire population is divided into seven age groups, and the groups of working age are further divided into essential and non-essential workers. The dynamics of the system are described by an age-and-work structured susceptible-exposed-infected-removed (AW-SEIR) model. Each susceptible individual undertakes the risk of interacting with an infected person, becoming exposed, and after a latent exposed period ending up either recovered or deceased. Essential workers are at higher risk of infection due to extended exposure to a larger population of possibly infected people. They may also have lacked adequate access to personal protective equipment (PPE) in their workplaces early on. The senior age groups are at higher risk of morbidity or mortality at the final stage. Other groups may also have different social activity levels compared to the pre-pandemic situation. Modeling interactions (i.e., social contacts \cite{prem2017projecting}) between different age-and-work groups is certainly a central task. The intensity of social contacts within each group and between groups is primarily dependent on various social-distancing measures and behavior changes. The locations of those contacts are categorized as work, home, school, and others \cite{prem2017projecting}. This compartmental model is able to describe a variety of policies, such as school closure, enforced work-from-home for non-essential workers, or social distancing in workplaces, all through the interaction matrices defined later. The social contacts between different age or work groups can give rise to non-trivial phase allocation plans for vaccine distribution during a pandemic. For instance, it is clear that a disease with extremely low morbidity, or less age-dependent morbidity, would require those with more interactions to be vaccinated early. However, higher age-dependent morbidity would require the more susceptible ages to be vaccinated first.
The goal of models such as these is to find a precise plan which takes all factors into account. Considering the primary objective of preventing morbidity and mortality, one would naturally assume that the senior-age groups should be given vaccines before others. Nevertheless, due to the slow vaccination process, a considerable proportion of essential workers being exposed to the disease may become super spreaders. Those super spreaders may bring the virus home and to workplaces so that the infected population grows exponentially because of the failure of adequate protection. Even if the mortality rate of those age groups is smaller than the mortality rate of the seniors, the larger basis of population with severe symptoms may exhaust limited healthcare resources and lead to unexpected consequences. This consideration is valid especially when the basic reproduction number $R_0$ is large. The interaction between those vulnerable societal groups and active social groups plays a prominent role in determining the weights on allocating the limited supply of vaccines. The phase allocation of vaccines is formulated as an \emph{optimal control problem} based on this AW-SEIR model. In each time step, the decision-maker observes the susceptible population of each societal group and determines the number of vaccine doses that should be allocated for each group. For each societal group, the vaccination decision made in the current period will not only affect the prevented number of COVID-19 cases in this group but have a spillover effect on other groups through social contacts; vaccination of any person reduces the chances of that person to become infected, but also reduces the chances of that person infecting others. Therefore, the controller must optimize the vaccination plans for all groups synchronously with the objective of reducing the total mortality in the population before reaching herd immunity. 
The age structured epidemic model with vaccination divides the total population into the 11 groups in Table \ref{tab:AgeGroups}.
\begin{table}[ht]
\begin{tabular}{|p{1in}|p{3.5in}|}
\hline
Name & Description \\ [0.5ex]
\hline
Group 1 & Age 0-4 population \\
\hline
Group 2 & Age 5-14 population \\
\hline
Group 3 & Age 15-19 population with no job or non-essential \\
\hline
Group 4 & Age 20-39 population with no job or non-essential \\
\hline
Group 5 & Age 40-59 population with no job or non-essential \\
\hline
Group 6 & Age 60-69 population with no job or non-essential \\
\hline
Group 7 & Age 70+ population \\
\hline
Group 8 & Age 15-19 population who are essential workers \\
\hline
Group 9 & Age 20-39 population who are essential workers \\
\hline
Group 10 & Age 40-59 population who are essential workers \\
\hline
Group 11 & Age 60-69 population who are essential workers \\ [1ex]
\hline
\end{tabular}
\caption{Groups by age and work status}\label{tab:AgeGroups}
\end{table}
We use the subscript $j$ to indicate a social group, and denote by $N_j$ the number of people in group $j \in \{1, \dots, 11 \}$. We then have: $S_j$, the number of susceptible; $E_j$, the number of latently infected or exposed; $I_j$, the number of infectious people not isolated; $R_j$, the number of removed; and $V_j$, the number of vaccinated. The dynamics of the age-structured SEIR model are given by:
\begin{eqnarray}\label{eq:SEIRV1}
\begin{aligned}
\dot{S_j}&= - u \frac{S_j\, \displaystyle \sum_{k=1}^{11} l_{kj}\ I_k}{\displaystyle \sum_{k=1}^{11} l_{kj}\ N_k} - w_j\\
\dot{E_j}&= u\ \frac{S_j\,\displaystyle \sum_{k=1}^{11} l_{kj}\ I_k}{\displaystyle \sum_{k=1}^{11} l_{kj}\ N_k} -\delta E_j\\
\dot{I_j}&=\delta E_j -\gamma {I_j}\\
\dot{R_j}&= \gamma {I_j}\\
\dot{V_j}&= w_j
\end{aligned}
\end{eqnarray}
where $u = R_0\,\gamma$ reflects the lockdown measures (and could be time-dependent), and $L=(l_{kj})$ is the interaction matrix among age groups during the pandemic.
The maximal eigenvalue of this matrix is given by the basic reproduction number $R_0$. $w_j$ is the number of people vaccinated in age group $j$ per day, so that $\sum_{j=1}^{11}w_j$ is the number of vaccines available per day (and could be time-dependent); $\delta=1/D_E$ with $D_E$ being the latent period in days; similarly, $\gamma=1/D_I$ with $D_I$ denoting the infectious period in days. It is also important to choose epidemiological constants that are specific to the disease in question. These parameters are estimated from CDC sources in Table \ref{tab:VarTable}. \begin{table}[ht] \begin{center} \begin{tabular}{||c c c c||} \hline Name & Description & Estimate & Units \\ [0.5ex] \hline $R_0$ & Basic reproduction number & 1.0-1.2 & – \\ \hline $D_I$ & Infectious period & 5-14 & days \\ \hline $D_E$ & Latent period & 4-7 & days \\ [1ex] \hline \end{tabular} \caption{Description of Variables}\label{tab:VarTable} \end{center} \end{table} After a few weeks of vaccination, the notion began to spread that the vaccine not only keeps a person from contracting the disease, but may also keep an infected person from developing symptoms and from spreading the disease. In \cite{mallapaty2021can}, a driving question is whether the COVID-19 vaccine can stop one from being a spreader of the disease after contracting the infection. To model these possibilities, we extend the model by expanding the vaccinated compartment: rather than a single compartment V, we add Susceptible Vaccinated (Sv), Exposed Vaccinated (Ev), Infected Vaccinated (Iv) and Removed Vaccinated (Rv), in which we are able to change the parameters governing the ability of the vaccinated populations to transmit the virus; see Figure \ref{fig:ModelDiag}.
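To make the dynamics concrete, the right-hand side of system \eqref{eq:SEIRV1} can be sketched in Python (the language of our released code). This is a minimal illustrative sketch, not the released implementation; the function name `seirv_rhs` and all numerical inputs are placeholders.

```python
import numpy as np

def seirv_rhs(S, E, I, w, L, N, R0, gamma, delta):
    """Right-hand side of the age-structured SEIR-V system: all state
    arguments are length-11 arrays, L[k, j] is the interaction
    coefficient l_{kj}, w[j] is the daily vaccine allocation for group j,
    and u = R0 * gamma scales the force of infection."""
    u = R0 * gamma
    # Force of infection for group j: sum_k l_{kj} I_k / sum_k l_{kj} N_k
    foi = (L.T @ I) / (L.T @ N)
    dS = -u * S * foi - w
    dE = u * S * foi - delta * E
    dI = delta * E - gamma * I
    dR = gamma * I
    dV = w
    return dS, dE, dI, dR, dV
```

Summing the five derivatives gives zero for every group, so the total population is conserved: a quick sanity check when integrating the system.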
\begin{figure} \begin{center} \includegraphics[angle=0, width=5.0cm, height=7cm, trim={0cm 0cm 0cm 0cm},clip]{figures/Vaccine_Diagram.png} \caption{All possible paths through which populations may flow into other populations.} \label{fig:ModelDiag} \end{center} \end{figure} So in the end, our model is described by the system of equations \eqref{eq:SEIRV2}. \begin{eqnarray} \label{eq:SEIRV2} \begin{aligned} \dot{S_j}&= - u \frac{S_j\, \displaystyle \sum_{k=1}^{11} \left( l_{kj}\, I_k + 0.3\, l_{kj}\, Iv_k \right)}{\displaystyle \sum_{k=1}^{11} l_{kj}\ N_k} - \frac{w_j\ S_j}{S_j+E_j+I_j}\\ \dot{E_j}&= u\ \frac{S_j\, \displaystyle \sum_{k=1}^{11} \left( l_{kj}\, I_k + 0.3\, l_{kj}\, Iv_k \right)}{\displaystyle \sum_{k=1}^{11} l_{kj}\ N_k} -\delta E_j-\frac{w_j\ E_j}{S_j+E_j+I_j}\\ \dot{I_j}&=\delta E_j -\gamma {I_j}-\frac{w_j\ I_j}{S_j+E_j+I_j}\\ \dot{R_j}&= \gamma {I_j}\\ \dot{Sv_j}&= - u \frac{Sv_j\,\displaystyle \sum_{k=1}^{11} \left( 0.3\, l_{kj}\, I_k + 0.09\, l_{kj}\, Iv_k \right)}{\displaystyle \sum_{k=1}^{11} l_{kj}\ N_k}+\frac{w_j\ S_j}{S_j+E_j+I_j} \\ \dot{Ev_j}&= u\ \frac{Sv_j\, \displaystyle \sum_{k=1}^{11} \left( 0.3\, l_{kj}\, I_k + 0.09\, l_{kj}\, Iv_k \right)}{\displaystyle \sum_{k=1}^{11} l_{kj}\ N_k} -\delta Ev_j+\frac{w_j\ E_j}{S_j+E_j+I_j}\\ \dot{Iv_j}&=\delta Ev_j -\gamma {Iv_j}+\frac{w_j\ I_j}{S_j+E_j+I_j}\\ \dot{Rv_j}&= \gamma {Iv_j} \end{aligned} \end{eqnarray} In the final model, the compartments include the usual SEIR states together with four additional compartments representing vaccinated populations. We have the interaction matrix $l_{kj}$ governing interactions, $w_j$ the total vaccine allocation in a day, $N_k$ the total population of group $k$, $\delta$ the inverse of the mean latent period and $\gamma$ the inverse of the mean infectious period. An infected but vaccinated person has a reduced probability of transmitting the disease; we model this by a factor of $0.3$.
It is important to note that this choice is an estimate, as there is not much reliable data in the literature about how the vaccine affects transmission. However, \cite{singanayagam2021community} reports that vaccination reduces transmission inside the household to roughly forty percent. Therefore, thirty percent seems a reasonable estimate for transmission outside of the household, where other non-pharmaceutical interventions are also at play. Accordingly, a vaccinated-vaccinated interaction has its chance of transmission reduced by a factor of $0.09$. This new model also accounts for the ability to vaccinate compartments other than $S$ (Susceptible): terms such as $\frac{w_j\ S_j}{S_j+E_j+I_j}$ model the administration of the vaccines. This reflects the fact that, with the influx of vaccination, testing also diminished, so an ``exposed'' or ``infected'' but asymptomatic person could receive the vaccine. \section{Census Data and Interaction Matrix} In order to test our model, we use collected data to develop initial conditions for S, E, I and R, interaction matrices, death rates, the basic reproduction number, and infection and recovery rates for our age groups. The majority of the data used comes from the United States census. We first take the raw population information from each state in the United States and use the Age and Sex tables provided to break the population in each state down by age group. From there we fit the populations into the age groups defined above for our model.
We then define the number of susceptibles to be the number of people in each age group after removing the exposed, infected and removed; the number of exposed is set equal to a certain percentage of the infected from the first 5 days of the model (Dec 16-20); the number of infected is set to be the number of cases from the first 5 days of the model minus the exposed, assigned proportionally based on the number of people in each age group; the number of removed is set to be all cases from January up to and including Dec 15, assigned proportionally based on the number of people in each age group. As the definition of the term ``essential worker'' is a bit ambiguous, we must also justify our percentages. We begin with labor force statistics from the Bureau of Labor Statistics. We then decide which occupations count as essential, based on descriptions of essential workers from \cite{suzannehultin_2021}. We then sum the total workers in each age group and reassign people to the essential worker category based on our age groups (the age groups provided by the Bureau of Labor Statistics did not perfectly match ours). Lastly, we use the total population from the CDC age groups to calculate an ``essential worker rate'' per age group. The basic reproduction number (used to scale $A$) was found per state at rt.live \cite{rt}, a live COVID tracking website which recalculates and reports daily replication numbers. The total vaccine availability per day $w$ was estimated from CDC sources. We chose this number to be $10{,}000$, as this is both a reasonable assumption for a state in the US and a good estimate for reaching herd immunity within six months, which was a goal of policy makers. Of course, this is a simplification of the process since, as we are now seeing live, the number of vaccines administered drops after the more eager portion of the population has been successfully vaccinated.
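The proportional assignment of cases to age groups described above can be sketched as follows. This is an illustrative sketch with hypothetical inputs and a hypothetical helper name, not the actual data pipeline.

```python
import numpy as np

def initial_conditions(pop, cases_first5, cases_prior, exposed_frac):
    """Split state-level case counts into per-group S/E/I/R proportionally
    to group population. pop: population per age group; cases_first5: cases
    reported in the first five model days; cases_prior: cumulative cases
    before the model start; exposed_frac: share of recent cases assumed
    to still be latent (exposed)."""
    share = pop / pop.sum()
    E = exposed_frac * cases_first5 * share
    I = (1.0 - exposed_frac) * cases_first5 * share
    R = cases_prior * share
    S = pop - E - I - R
    return S, E, I, R
```

By construction, $S_j + E_j + I_j + R_j$ equals the population of group $j$.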
We construct the interaction matrix using $A$ and $B$: $\alpha$ is chosen so that the dominant eigenvalue (call this $\lambda_{max}$) of the matrix $\alpha (A-B) + \beta B$ equals the basic reproduction number $R_0$. The eleven age groups are $\{1 ,2 , 3 , 3E , 4 ,$ $ 4E , 5 , 5E , 6 , 6E , 7 \}$. We use a combination of $A$, the interaction matrix, and $B$, the working interaction matrix, for groups 1 through 7 to build a new interaction matrix which describes the interactions of nonessential and essential age groups. Matrices $A$ and $B$ are shown below. $A$ gives the interaction coefficients which represent how often members of different age groups are exposed to each other. $B$ gives the interactions due only to the working environment. Therefore, we subtract the interaction coefficient given in $B$ from that in $A$ to describe the interaction coefficient of a non-essential worker in an age group, since non-essential workers are assumed to have no working interactions. \begin{align*} A= \begin{bmatrix} 2.5982 & 0.8003 & 0.3160 & 0.7934 & 0.3557 & 0.1548 & 0.0564 \\ 0.6473 & 4.1960 & 0.6603 & 0.5901 & 0.4665 & 0.1238 & 0.0515 \\ 0.1737 & 1.7500 & 11.1061 & 0.9782 & 0.7263 & 0.0815 & 0.0273 \\ 0.5504 & 0.5906 & 1.2004 & 1.8813 & 0.9165 & 0.1370 & 0.0397 \\ 0.3894 & 0.7848 & 1.3139 & 1.1414 & 1.3347 & 0.2260 & 0.0692 \\ 0.3610 & 0.3918 & 0.3738 & 0.5248 & 0.5140 & 0.7072 & 0.1469 \\ 0.1588 & 0.3367 & 0.3406 & 0.2286 & 0.3673 & 0.3392 & 0.3868 \end{bmatrix} \end{align*} \begin{align*} B=\begin{bmatrix} 0 & 0 & 0 & 0 & 0 & 0.0000 & 0.0000 \\ 0 & 0.0195 & 0.0094 & 0.0116 & 0.0115 & 0.0000 & 0.0000 \\ 0 & 0.0168 & 0.6441 & 0.3590 & 0.1893 & 0.0067 & 0.0000 \\ 0 & 0.0272 & 0.3060 & 0.8135 & 0.5203 & 0.0218 & 0.0000 \\ 0 & 0.0361 & 0.2103 & 0.6465 & 0.6465 & 0.0234 & 0.0000 \\ 0.0000 & 0.0079 & 0.0083 & 0.0585 & 0.0692 & 0.0035 & 0.0000 \\ 0.0000 & 0.0000 & 0.0000 & 0.0000 & 0.0000 & 0.0000 & 0.0000 \end{bmatrix} \end{align*} We use $A$ and $B$ to build an 11$\times$11 matrix for age groups \[ \{1 ,2 , 3 , 3E , 4 , 4E , 5 , 5E ,
6 , 6E , 7 \}, \] where $3$ is the third youngest age group (age 15 to 19) excluding Essential workers (``Non-Essential Workers''), and $3E$ is the group of Essential workers in this age group. We define the Nonessential-to-Nonessential interactions as $\alpha (A- B) $. This is an 11$\times$11 matrix with zeros on the rows and columns corresponding to essential workers ($\{3E , 4E , 5E , 6E \}$) and entries from $\alpha (A- B) $ on the other rows and columns. The Nonessential-to-Essential, Essential-to-Nonessential and Essential-to-Essential interactions have entries from $\alpha (A- B) + \beta B$ on the corresponding rows and columns. For the total interaction matrix, we have: \[ \begin{aligned} I &= [C_1 (A- B)] + [C_1 (A- B) + B] + [C_1 (A- B) + B] + [C_1 (A- B) + B], \end{aligned} \] where $C_1$ is a scaling factor that adds weight to the nonworking interactions and the four bracketed terms fill the corresponding blocks of the 11$\times$11 matrix. \section{Optimization of vaccination schedule} We consider an optimal control problem for the system \eqref{eq:SEIRV2}, with cost given by the total number of deaths over a time horizon of 180 days. The model was tuned with data from New Jersey and Florida on January 1st, 2021. The number of available vaccines was set so that in 180 days all individuals above 15 years of age could be vaccinated. To solve an optimal control problem numerically, it is usual and well known to distinguish between \emph{direct} and \emph{indirect} methods. Roughly speaking, direct methods (or direct transcription) consist of fully discretizing the optimal control problem under consideration before applying an optimization routine. Discretizing means: choose a discretization scheme for the differential equations (like Euler or Runge-Kutta) and a discretization for the integral cost criterion (like the rectangle or Simpson rules). One can also approach the solutions globally in time by pseudo-spectral methods or by collocation. There are infinitely many choices.
In all cases, the full discretization yields a classical nonlinear optimization problem under constraints, in finite dimension (this dimension growing as the discretization is refined). This high-dimensional nonlinear optimization problem can then be solved by numerical optimization: gradient methods, penalization, Lagrange methods (Karush-Kuhn-Tucker). There also, one has infinitely many choices for designing a numerical scheme. We note that, in direct methods, we first discretize, then optimize (or dualize). In the indirect approach, this is exactly the other way round: we first apply a first-order necessary condition for optimality to the optimal control problem, the \emph{Pontryagin maximum principle} (see \cite{BressanPiccoli,LeeMarkus,Pontryagin,Trelat_book}), which leads to a \emph{shooting problem} that can then be solved numerically by a Newton method (see \cite{BCT2007} for well-posedness of the shooting method, in relationship with conjugate point theory). In other words, in the indirect approach, we first optimize (or dualize) and then discretize. We refer the reader to \cite{trelat_JOTA2012,Trelat_book} for a survey on numerical methods in optimal control and on the pros and cons of direct vs.\ indirect approaches. Also, note that many existing numerical methods are neither direct nor indirect but are rather of a hybrid nature. The choice of method depends on the context and on the issues at hand. Here, we choose the direct transcription approach because the optimal control problem that we consider involves state constraints that would be difficult to handle in the indirect approach (indeed, using the Pontryagin maximum principle in the presence of state constraints is much more involved). Moreover, direct methods are more flexible insofar as they allow one to easily modify the model.
We choose to discretize the control system with the implicit RK2 (trapezoidal) scheme and the cost functional with the trapezoidal rule, on a regular subdivision of the time interval (we take: one step = one day). Writing the control system as $\dot x(t) = f(x(t),u(t))$ and the cost functional as $C(u) = \int_0^T f^0(x(t),u(t))\, dt$, this means: \begin{align*} & x_{k+1} = x_k + \frac{h}{2} \left( f(x_k,u_k) + f(x_{k+1},u_{k+1}) \right), \qquad k=0,\ldots,N-1 \\ & \min \frac{h}{2} \sum_{k=0}^{N-1} \left( f^0(x_k,u_k) + f^0(x_{k+1},u_{k+1}) \right) \end{align*} where $h$ is the step of the (assumed) regular subdivision of the time frame $[0,T]$. As an optimization routine to solve the resulting (finite-dimensional) optimization problem, we use the interior-point optimization routine \texttt{IpOpt}~\cite{Waechter2006}, combined with the modeling language \texttt{AMPL}, which performs automatic differentiation. It is by now well known that the use of automatic differentiation is a real plus in solving efficiently nonacademic, nonobvious optimal control problems that cannot be solved so efficiently without this powerful tool. To initialize the optimization, we simply take constant (discretized) controls and states, with the state constants corresponding to the initial conditions. This is a very rough initialization, but it happens to be enough for our needs. The execution on a standard desktop machine is almost instantaneous. One of the goals is to compare the policy of vaccinating based on age (oldest to youngest) as opposed to work status (essential workers first). Note that, in order to test the elasticity of our model, we vary the initial replication rate $R_0$ ($1.0$, $1.1$, $1.2$), the percent of workers (PE) deemed ``essential'' ($24\%$, $34\%$, $44\%$) and the infection rate $\beta$ ($0.25$, $0.5$, $0.75$) of COVID-19.
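For illustration, one implicit trapezoidal (RK2) step as written above can be solved for $x_{k+1}$ by fixed-point iteration. This is a sketch of the scheme itself, not of the \texttt{AMPL}/\texttt{IpOpt} transcription; the helper name is hypothetical.

```python
def implicit_trapezoid_step(f, x, u, u_next, h, iters=50):
    """One step of x_{k+1} = x_k + (h/2) (f(x_k, u_k) + f(x_{k+1}, u_{k+1})),
    with the implicit equation solved by fixed-point iteration
    (a contraction for sufficiently small step size h)."""
    fx = f(x, u)
    x_next = x + h * fx  # explicit Euler predictor as a starting guess
    for _ in range(iters):
        x_next = x + 0.5 * h * (fx + f(x_next, u_next))
    return x_next
```

On the linear test problem $\dot x = -x$, this reproduces the exact trapezoidal update $x_{k+1} = x_k \, (1 - h/2)/(1 + h/2)$.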
\begin{figure} \begin{center} \includegraphics[angle=0, width=13.0cm, height=7cm, trim={0cm 0cm 0cm 0cm},clip]{figures/NJR0=1.0.png} \caption{Sample tests for New Jersey with $R_0 = 1.0$ (mildest case) while varying the percent of essential workers and $\beta$. } \label{fig:NJR01.0} \end{center} \end{figure} \begin{sidewaysfigure} \vspace{30mm} \hspace{0mm} \includegraphics[ trim={0cm 0cm 0cm 0cm},clip]{figures/NewJerseyFullTestsSep6small1.png} \caption{Results using New Jersey data-set plotted by initial replication rate.} \label{fig:NJTests} \end{sidewaysfigure} \begin{sidewaysfigure} \vspace{30mm} \hspace{0mm} \includegraphics[angle=180, trim={0cm 0cm 0cm 0cm},clip]{figures/FloridaFullTestsSep6small1.png} \caption{Results using Florida data-set plotted by initial replication rate.} \label{fig:FLTests} \end{sidewaysfigure} One reassuring result, seen clearly in Figure \ref{fig:NJR01.0}, is that any amount of vaccination greatly diminishes the number of deaths regardless of who is being vaccinated, even in the case of a very mild disease. A case where $R_0=1.0$ corresponds to a very benign disease, yet we still see that a vaccine saves about $2000$ lives even in the non-optimal case. However, if $R_0=1.2$, we see in Figure \ref{fig:NJTests} that the number of lives saved jumps to $23{,}500$ regardless of other parameters. Therefore, it is very clear that the vaccine, regardless of schedule, is extremely effective in minimizing mortality across all categories. A small rise in $R_0$ results in an almost exponential rise in deaths in an unvaccinated population, but any vaccination plan diminishes this jump significantly. This can be clearly seen in Table \ref{tab:DeathsTable}, where we keep the number of essential workers and $\beta$ constant and vary $R_0$.
Further, we see that while varying PE and $\beta$ changes the number of deaths (increasing both results in a rise in deaths), the driving factor behind the consequences of COVID-19 is indisputably the infectivity of the virus. This can be seen very strongly in Figure \ref{fig:FLTests}, where raising the percent of essential workers raises the number of deaths by less than one hundred in some cases and not much more in others, compared to the few hundred lives lost when raising the replication rate of the virus even in the optimal vaccination case. \begin{table}[ht] \caption{Deaths Projected with Varying $R_0$}\label{tab:DeathsTable} \begin{tabular}{|p{1in}|p{.5in}|p{1.5in}|p{1.5in}|} \hline State & $R_0$ & Projected Deaths With No Vaccine & Projected Deaths With Vaccine \\ [0.5ex] \hline New Jersey & $1.0$ & 9316 & 6710\\ \hline New Jersey & $1.1$ & 15609 & 6906\\ \hline New Jersey & $1.2$ & 31681 & 7289\\ \hline Florida & $1.0$ & 28467 & 21678 \\ \hline Florida & $1.1$ & 44657 & 22287 \\ \hline Florida & $1.2$ & 87349 & 23298 \\ \hline \end{tabular} \end{table} \vskip 5mm Note in the dynamics of our program, such as the exposed population in Figure \ref{fig:pops}, that the program terminates at $180$ days. The decision to choose a time span of 180 days is due to the simple fact that the most susceptible populations are vaccinated by then, and thus the virus is virtually non-existent beyond this point. This can clearly be seen in the infected population in Figure \ref{fig:pops}, where the infected populations are all but eliminated and only those with extremely low death rates are still infected. If this policy happened in reality, while the virus might still persist at a biological level, the sickness and death rates would be so low that in the eyes of the public it would be non-existent.
While this tool can be fit to any data given information about parameters, we have chosen state data in the United States to test our model. See in Figure \ref{fig:heatmap} an example of an optimal strategy for New Jersey. In Figures \ref{fig:pops} and \ref{fig:VaccPop} are also the accompanying plots of the S, E, I, R, Sv, Ev, Iv and Rv populations. We see from the figures that the compartments act as expected. Notice in the infected population in Figure \ref{fig:pops} that each infected population grows at the beginning of the time span, but rapidly approaches zero as it is vaccinated. This is because once somebody is a member of this compartment, s/he can either move to removed (by recovering from the disease or passing away) or to vaccinated infected. The latter may occur since the model assumes that an infected person is able to become vaccinated, an event very possible for asymptomatic carriers. As the compartment empties, new infected do appear from the susceptible population; however, as we rapidly vaccinate the susceptible populations, the infected diminish as they have no source of vulnerable people. \begin{figure} \begin{center} \includegraphics[angle=0, width=9.0cm, height=7cm,clip]{figures/VaccinePolicyHeatMap.png} \caption{Optimal vaccination strategy for reproduction number $1.2$ and $44\%$ of workers considered essential.} \label{fig:heatmap} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[angle=0, width=14.0cm, height=14cm, trim={0cm 0cm 0cm 0cm},clip]{figures/populations.png} \caption{Population dynamics for the unvaccinated compartments: Susceptible, Exposed, Infected, and Recovered.} \label{fig:pops} \end{center} \end{figure} This strategy seems robust not only amongst different values of $\beta$, distributions of essential workers and choices of $R_0$, but also amongst different states.
It seems that for the COVID-19 pandemic, the optimal vaccination policy was the one chosen by most countries: vaccinate the oldest population first and work down based on age. In total we have 27 simulations from each of the states tested. In every run, the optimizer chose to vaccinate the oldest population first. The goal of this paper is to demonstrate how a compartmental epidemiological model can be used in an optimal control problem to minimize casualties when vaccinating the general population. The main limitation of the results is the less-than-realistic assumption that a municipality is able to vaccinate an entire sub-population with ease. See in Figure \ref{fig:pops} how the susceptible population approaches zero because all people are either vaccinated or have been infected. In reality, breaking a threshold of something closer to $70\%$ of a population being fully vaccinated would be an optimistic goal for any age group. In fact, we see live in the United States a slowing of vaccination despite the fact that we seemingly have not yet achieved herd immunity. Note that the cost here is the number of deaths which the population suffers in our time range of 180 days, starting vaccination at day one. In Figure \ref{fig:NJTests} there is a vaccination schedule for the same parameters as the above plots, with essential workers of the largest working population being vaccinated first, a policy that many may believe to be optimal. Notice that the number of deaths is larger if one were to use this schedule rather than the optimal one. This is noteworthy, as some countries such as Indonesia chose to prioritize their essential workers over their elderly. Despite these shortcomings, this program could be updated and adjusted to fit a multitude of situations such as these simply by adjusting the equations of the model.
When the COVID-19 vaccine arrived, many countries chose this optimal solution of vaccinating on an age-based schedule, while others chose to vaccinate their essential workers first. Programs such as ours set the framework to make a more informed decision in future times of crisis. All code used in the design and implementation of our epidemiological model is placed on GitHub for free use; all programs are written in the Python programming language. \begin{figure} \begin{center} \includegraphics[angle=0, width=14.0cm, height=14cm, trim={0cm 0cm 0cm 0cm},clip]{figures/VaccinatedPopulations.png} \caption{Population dynamics of the vaccinated compartments: Susceptible vaccinated, Exposed vaccinated, Infected vaccinated, and Recovered vaccinated.} \label{fig:VaccPop} \end{center} \end{figure} \section{Acknowledgements} The authors acknowledge the support of the NSF CMMI project \#~2033580 ``Managing pandemic by managing mobility''. R.W., S.T.M. and B.P. acknowledge the support of the Joseph and Loretta Lopez Chair endowment.
\section{Introduction} \textbf{Motivation. } This paper considers scheduling problems in which a planner must act on $k$ out of $N$ binary-state processes each round. The planner fully observes the state of the processes on which she acts, then all processes undergo an action-dependent Markovian state transition; the state of the process is unobserved until it is acted upon again, resulting in uncertainty. The planner's goal is to maximize the number of processes that are in some ``good'' state over the course of $T$ rounds. This class of problems is natural in the context of \textit{monitoring tasks} which arise in many domains such as sensor/machine maintenance \cite{iannello2012optimality,glazebrook2006some,abbou2019group,villar2016indexability}, anti-poaching patrols \cite{qian2016restless}, and especially healthcare. For example, nurses or community health workers are employed to monitor and improve the adherence of patient cohorts to medications for diseases like diabetes \cite{newman2018community}, hypertension \cite{brownstein2007effectiveness}, tuberculosis \cite{rahedi2014effects,chang2013house} and HIV \cite{kenya2013using,kenya2011can}. Their goal is to keep patients adherent (i.e., in the ``good'' state) but a health worker can only intervene on (visit) a limited number of patients each day. Health workers can play a similar role in monitoring and delivering interventions for patient mental health, e.g., in the context of depression \cite{lowe2004monitoring,mundorf2018reducing} or Alzheimer's Disease \cite{lin2018selective}. We adopt the solution framework of \textit{Restless Multi-Arm Bandits} (RMABs), a generalization of Multi-Arm Bandits (MABs) in which a planner may act on $k$ out of $N$ arms each round, where each arm follows a Markov Decision Process (MDP). Solving an RMAB is PSPACE-hard in general \cite{papadimitriou1999complexity}.
Therefore, a common approach is to consider the Lagrangian relaxation of the problem in which the $\frac{k}{N}$ budget constraint is dualized. Solving the relaxed problem gives Lagrange multipliers which act as a greedy index heuristic, known as the Whittle index, for the original problem. The Whittle index approach has been shown to be asymptotically optimal (i.e., $N\rightarrow{}\infty$ with fixed $\frac{k}{N}$) \cite{weber1990index} and performs well empirically \cite{ansell2003whittle}, making it a common solution technique for RMABs. Critically, using the Whittle index approach requires two key components: (i) a fast method for computing the index and (ii) proving the problem satisfies a condition known as \textit{indexability}. Without (i) the approach can be prohibitively slow, and without (ii) performance guarantees are sacrificed. Neither (i) nor (ii) is known for general RMABs. Therefore, to capture the scheduling problems addressed in this work, we introduce a new subclass of RMABs, \textit{Collapsing Bandits}, distinguished by the following feature: when an arm is played, the agent fully observes its state, ``collapsing'' any uncertainty, but when an arm is passive, no observation is made and uncertainty evolves. We show that this RMAB subclass is more general than previous models and leads to new theoretical results, including conditions under which the problem is indexable and under which optimal policies follow one of two simple threshold types. We use these results to develop algorithms for quickly computing the Whittle index. In experiments, we analyze the algorithms' performance on (i) data from a real-world healthcare scheduling task in which our approach ties state-of-the-art performance at a fraction of the runtime and (ii) various synthetic distributions, on some of which the algorithm achieves performance comparable to the state of the art even outside its optimality conditions.
To summarize, our contributions are as follows: (i) we introduce a new subclass of RMABs, Collapsing Bandits, (ii) derive theoretical conditions for Whittle indexability and for the optimal policy to be threshold-type, and (iii) develop an efficient solution that achieves a 3-order-of-magnitude speedup compared to more general state-of-the-art RMAB techniques, without sacrificing performance. \section{Restless Multi-Armed Bandits} \label{section:preliminaries} An RMAB consists of a set of $N$ arms, each associated with a \emph{two-action} MDP \cite{puterman2014markov}. An MDP $\{ \mathcal{S}, \mathcal{A}, r, P\}$ consists of a set of states $\mathcal{S}$, a set of actions $\mathcal{A}$, a state-dependent reward function $r: \mathcal{S} \rightarrow \mathbb{R}$, and a transition function $P$, where $P^a_{s,s^\prime}$ denotes the probability of transitioning from state $s$ to $s^\prime$ when action $a$ is taken. An MDP \emph{policy} $\pi: \mathcal{S} \rightarrow \mathcal{A}$ represents a choice of action to take at each state. We will consider both discounted and average reward criteria. The long-term \emph{discounted reward} starting from state $s_0 = s$ is defined as $R_{\beta}^\pi(s) = E\left[\sum_{t=0}^\infty \beta^t r(s_t)|\pi, s_0 = s\right]$ where $\beta \in [0,1)$ is the discount factor and actions are selected using $\pi$. To define average reward, let $f^\pi(s): \mathcal{S} \rightarrow [0,1]$ denote the \emph{occupancy frequency} induced by policy $\pi$, i.e., the fraction of time spent in each state of the MDP. The \emph{average reward} $\overline{R}^\pi$ of policy $\pi$ is defined as the expected reward computed over the occupancy frequency: $\overline{R}^\pi = \sum_{s \in \mathcal{S}} f^\pi(s) r(s)$. Each arm in an RMAB is an MDP with the action set $\mathcal{A}=\{0,1\}$. Action $1$ ($0$) is called the \emph{active} (\emph{passive}) action and denotes the arm being pulled (not pulled). The agent can pull at most $k$ arms at each time step.
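As a worked example of the average-reward criterion above, the occupancy frequency of a fixed policy can be estimated as the stationary distribution of the policy-induced chain. This is a minimal illustrative sketch with a hypothetical helper, not code from the paper.

```python
import numpy as np

def average_reward(P_pi, r, iters=10_000):
    """Average reward of a fixed policy: the occupancy frequency f is the
    stationary distribution of the policy-induced chain P_pi (estimated
    here by power iteration from the uniform distribution), and the
    state-dependent reward r is averaged under f."""
    f = np.full(len(r), 1.0 / len(r))
    for _ in range(iters):
        f = f @ P_pi  # one step of the chain, applied to the distribution
    return float(f @ np.asarray(r))
```

For an ergodic chain, the iteration converges to the unique stationary distribution, so the returned value matches $\overline{R}^\pi = \sum_s f^\pi(s) r(s)$.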
The agent's goal is to maximize either her discounted or average reward across the arms over time. Some RMAB problems need to account for partial observability of states. It is sufficient to let the MDP state be the \emph{belief state}: the probability of being in each latent state \citep{kaelbling1998planning}. While intractable in general due to the infinite number of reachable belief states, most partially observable RMABs studied (including our Collapsing Bandits) have polynomially many belief states due to a finite time horizon or other structures. \textbf{Related work. } RMABs have been an attractive framework for studying various stochastic scheduling problems since Whittle indices were introduced \cite{whittle1988restless}. Because general RMABs are PSPACE-hard \cite{papadimitriou1999complexity}, RMAB studies usually consider restricted classes under which some performance guarantees can be derived. Collapsing Bandits form one such novel class that generalizes some existing results which we note in later sections. \citet{zhao_liyu_paper} develop an efficient Whittle index policy for a 2-state partially observable RMAB subclass in which the state transitions are unaffected by the actions taken and reward is accrued from the active arms only. \citet{akbarzadeh2019restless} define a class of bandits with ``controlled restarts,'' giving indexability results and a method for computing the Whittle index. However, ``controlled restarts'' define the active action as state independent, a stronger assumption than Collapsing Bandits which allow state-dependent action effects.
\citet{glazebrook2006some} give Whittle indexability results for three classes of restless bandits: (1) A machine maintenance regime with deterministic active action effect (we consider stochastic active action effect) (2) A switching regime in which the passive action freezes state transitions (in our setting, states always change regardless of action) (3) A reward depletion/replenishment bandit which deterministically resets to a start state on passive action (we consider stochastic passive action effect). \citet{AOI_paper1} and \citet{sombabu_paper} augment the machine maintenance problem from \citet{glazebrook2006some} to include either i.i.d.\ or Markovian-evolving probabilities of an active action having no effect, a limited form of state-dependent action. \citet{meshram2018whittle} introduce Hidden Markov Bandits which, similar to our approach, consider binary state transitions under partial observability, but do not allow for state-dependent rewards on passive arms. In sum, our Collapsing Bandits introduce a new, more general RMAB formulation than the special subclasses previously considered. \citet{qian2016restless} present a generic approach for any indexable RMAB based on solving the (partially observable) MDPs on arms directly. Because we derive a closed form for the Whittle index, our algorithm is orders of magnitude faster. \section{Collapsing Bandits} \label{section:problem_formulation} We introduce \emph{Collapsing Bandits} (CoB) as a specially structured RMAB with partial observability. In CoB, each arm $n\in\{1,\ldots,N\}$ has binary latent states $\mathcal{S}=\{0, 1\}$, representing the \emph{bad} and \emph{good} state, respectively. The agent acts on each day $t \in \{1, \ldots, T\}$ of a finite horizon. Let $a_t \in \{0,1\}^N$ denote the vector of actions taken by the agent on day $t$. Arm $n$ is said to be \emph{active} at $t$ if $a_t(n)=1$ and \emph{passive} otherwise.
The agent acts on $k$ arms per day, i.e., $\left\|a_t\right\|_1 = k$, where $k \ll N$ because resources are limited. When acting on arm $n$, the true latent state of $n$ is fully observed by the agent and thus its uncertainty ``collapses'' to a realization of the binary latent state. We denote this observation as $\omega \in \mathcal{S}$. States of passive arms are completely unobservable by the agent. Active arms transition according to the \emph{transition matrix} ${P}_{s,s'}^{a,n}$ and passive arms transition according to $P_{s,s'}^{p,n}$. We drop the superscript $n$ when there is no ambiguity. Our scheduling problem, like many problems in analogous domains, exhibits the following natural structure: (i) processes are more likely to stay ``good'' than to change from ``bad'' to ``good''; (ii) when acted on, they tend to improve. These natural structures are respectively captured by imposing the following constraints on $P^p$ and $P^a$ for each arm: (i)~$P_{0,1}^p < P_{1,1}^p$ and $P_{0,1}^a < P_{1,1}^a$; (ii)~$P_{0,1}^{p} < P_{0,1}^{a}$ and $P_{1,1}^{p} < P_{1,1}^{a}$. To avoid unnecessary complications from edge cases, all transition probabilities are assumed to be nonzero. The agent receives reward $r_t = \sum_{n=1}^{N}s_t(n)$ at $t$, where $s_t(n)$ is the latent state of arm $n$ at $t$. The agent's goal is to maximize the long-term reward, either discounted or average, as defined in Sec.~\ref{section:preliminaries}.
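For concreteness, the structural constraints (i) and (ii) can be verified mechanically for an estimated pair of transition matrices. A minimal sketch (the function name and conventions are ours, not from the original implementation; rows index the current state, columns the next state, and column $1$ is the good state):

```python
def is_valid_cob_arm(P_p, P_a):
    """Check the Collapsing Bandit structure for one arm.

    P_p, P_a: 2x2 passive/active transition matrices with rows summing
    to 1; entry [s][1] is the probability of moving into the good state.
    Returns True iff constraints (i) and (ii) hold and all entries are
    strictly positive (avoiding degenerate edge cases).
    """
    for P in (P_p, P_a):
        for row in P:
            if abs(sum(row) - 1.0) > 1e-9 or min(row) <= 0.0:
                return False  # not a strictly positive stochastic matrix
    good_is_sticky = P_p[0][1] < P_p[1][1] and P_a[0][1] < P_a[1][1]  # (i)
    acting_helps = P_p[0][1] < P_a[0][1] and P_p[1][1] < P_a[1][1]    # (ii)
    return good_is_sticky and acting_helps
```

Such a check is useful when transition matrices are estimated from noisy data, where the natural ordering can be violated by sampling error.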
\begin{wrapfigure}{r}{0.45\textwidth} \centering \resizebox{!}{60pt}{% \begin{tikzpicture}[ -Triangle, every loop/.append style = {-Triangle}, start chain=main going right, state/.style={circle,minimum size=9mm, draw}, node distance=3mm, font=\scriptsize, >=stealth, auto ] \node [state, on chain, fill=black, text=white] (1) {\textbf{\boldmath $b_0$(1)}}; {[start branch=A going below] \node[state, on chain, fill=black, text=white] (A) {\textbf{\boldmath $b_1$(1)}}; } \node [state, on chain] (2) {$b_0$(2)}; {[start branch=A going below] \node[state, on chain] (B) {$b_1$(2)}; } \node [state, on chain] (3) {$b_0$(3)}; {[start branch=A going below] \node[state, on chain] (C) {$b_1$(3)}; } \node[state, on chain] (4) {$b_0$(4)}; {[start branch=A going below] \node[state, on chain] (D) {$b_1$(4)}; } \node[state, on chain] (5) {...}; {[start branch=A going below] \node[state, on chain] (E) {...}; } \path[] (1) edge node[above] {1} (2) (2) edge node[above] {1} (3) (3) edge node[above] {1} (4) (4) edge node[above] {1} (5); \path[] (A) edge node[above] {1} (B) (B) edge node[above] {1} (C) (C) edge node[above] {1} (D) (D) edge node[above] {1} (E); \end{tikzpicture} } \caption{Belief-state MDP under the policy of always being passive. There is one chain for each observation $\omega \in \{0,1\}$ with the head marked black. Belief states deterministically transition down the chains.} \label{fig:bsMDP} \end{wrapfigure} \paragraph{Belief-State MDP Representation} In limited observability settings, belief-state MDPs have organized chain-like structures, which we will exploit. In particular, the only information that affects our belief of an arm being in state $1$ is the number of days since that arm was last pulled and the state $\omega$ observed at that time. Therefore, we can arrange these belief states into two ``chains'' of length $T$, each for an observation $\omega$. A sketch of the belief state chains under the passive action is shown in Fig.~\ref{fig:bsMDP}. 
Let $b_\omega(u)$ denote the belief state, \emph{i.e., the probability that the state is $1$}, if the agent received observation $\omega \in \{ 0,1\} $ when it acted on the process $u$ days ago. Note that $b_\omega(u)$ is also the expected reward associated with that belief state, and let $\mathcal{B}$ be the set of all belief states. When the belief-state MDP is allowed to evolve under some policy, the following mechanism arises: first, after an action, the state $\omega$ is observed (uncertainty ``collapses''), then one round passes causing the agent's belief to become $P_{\omega,1}^a$, representing the head of the chain determined by $\omega$. Subsequent passive actions cause the process to transition deterministically down the same chain (though, the transition in the latent state is still stochastic). Then when the process's arm is active, it transitions to the head of one of the chains with probability equal to the belief that the corresponding observation would be emitted (see Fig.~\ref{fig:chains} for an illustration). The belief associated with a belief state can be calculated in closed form with the given transition probabilities. Formally, \begin{small} \begin{align} \label{eq:tau_maintext} b_{\omega}(u) = \tau_{u-1}(P_{\omega,1}^a) \text{ } \forall u \in [T]\hspace{1mm}\text{where}\hspace{1mm} \tau_u (b)= \frac{P_{0,1}^p - (P_{1,1}^p-P_{0,1}^p)^u(P_{0,1}^p- b(1+P_{0,1}^p-P_{1,1}^p))}{(1+P_{0,1}^p-P_{1,1}^p)} \end{align} \end{small} \section{Collapsing Bandits: Threshold Policies and Whittle Indexability} \label{section_computing_whittle_index} Because of the well-known intractability of solving general RMABs, the widely adopted solution concept in the literature of RMABs is the Whittle index approach; for a comprehensive description, see \citet{whittle1988restless}. 
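As a sanity check, the closed form of $\tau_u$ in Eq.~\eqref{eq:tau_maintext} can be tested against the one-step belief update $b \mapsto b P_{1,1}^p + (1-b) P_{0,1}^p$ that it telescopes. A minimal sketch (variable names are ours):

```python
def tau(u, b, p01, p11):
    """Closed-form belief after u passive rounds (Eq. tau_maintext).
    b is the current probability of being in state 1; p01, p11 are the
    passive-transition probabilities into state 1."""
    d = 1.0 + p01 - p11  # d = 1 - (p11 - p01)
    return (p01 - (p11 - p01) ** u * (p01 - b * d)) / d

def tau_step(u, b, p01, p11):
    """Reference implementation: apply the one-step update u times."""
    for _ in range(u):
        b = b * p11 + (1.0 - b) * p01
    return b
```

Here $b_\omega(u)$ is obtained as `tau(u - 1, P_a[omega][1], p01, p11)`; the two functions agree to floating-point precision, and $\tau_u(b)$ converges to the passive chain's stationary belief $P_{0,1}^p/(1 + P_{0,1}^p - P_{1,1}^p)$ as $u$ grows.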
Intuitively, the Whittle index captures the value of acting on an arm in a particular state by finding the minimum \emph{subsidy} $m$ the agent would accept to \textit{not act}, where the subsidy is some exogenous ``donation'' of reward. Formally, the modified reward function becomes $r_m: \mathcal{S}\times\mathcal{A} \rightarrow \mathbb{R}$, where $r_m(s,0) = r(s) + m$ and $r_m(s,1) = r(s)$. Let $R_{\beta,m}^\pi(s) = E\left[\sum_{t=0}^\infty \beta^tr_m(s_t,\pi(s_t))|\pi, s_0 = s\right]$ and $\overline{R}^\pi_m = \sum_{s \in \mathcal{S}} f^\pi(s) r_m(s,\pi(s))$ be the discounted and average reward criteria for this new subsidy setting, respectively. The former is maximized by the discounted value function (we give a value function for the average reward criterion in \textbf{Fast Whittle Index Computation}): \begin{equation} \label{eq:discounted_value_fn_definition} \begin{split} V_m(b) = \max \begin{cases} m+b+\beta V_m(\tau_1(b)) & \text{passive} \\ b + \beta(bV_m(P_{1,1}^a) + (1-b)V_m(P_{0,1}^a)) & \text{active} \end{cases} \end{split} \end{equation} where $\tau$ is defined in Eq.~\ref{eq:tau_maintext} and $b$ is shorthand for $b_\omega(u)$. In a CoB, the Whittle index of a belief state $b$ is the smallest $m$ s.t.~it is equally optimal to be active or passive in the current state. Formally: \begin{align}\label{whittle_subproblem} W(b) = \inf_m\{m : V_{m}(b;a=0) \ge V_m(b;a=1)\} \end{align} Critically, performance guarantees hold only if the problem satisfies \emph{indexability} \cite{weber1990index,whittle1988restless}, a condition which says that for all states, the optimal action cannot switch to active as $m$ increases. Let $\Pi^*_m$ be the set of policies that maximize a given reward criterion under subsidy $m$. 
\begin{definition}[Indexability] \label{def:indexability} An arm is indexable if $\mathcal{B}^*(m) = \{b : \forall\pi~\in~\Pi^*_m, \pi(b)=0\}$ monotonically increases from $\emptyset$ to the entire state space as $m$ increases from $-\infty$ to $\infty$. An RMAB is indexable if every arm is indexable. \end{definition} The following special type of MDP policy is central to our analysis. \begin{definition}[Threshold Policies] A policy is a \emph{forward (reverse) threshold policy} if there exists a threshold $b_{th}$ such that $\pi(b) = 0$ ($\pi(b) = 1$) if $b>b_{th}$ and $ \pi(b)=1$ ($\pi(b)=0$) otherwise. \label{def:threshold_pols} \end{definition} \begin{theorem}\label{thm:indexability_maintext} If for each arm and any subsidy $m \in \mathbb{R}$, there exists an optimal policy that is a forward or reverse threshold policy, the Collapsing Bandit is indexable under discounted and average reward criteria. \end{theorem} \begin{proof}[Proof Sketch] Using linearity of the value function in subsidy $m$ for any fixed policy, we first argue that when forward (reverse) threshold policies are optimal, proving indexability reduces to showing that the threshold monotonically decreases (increases) with $m$. Unfortunately, establishing such a monotonic relationship between the threshold and $m$ is a well-known challenging task in the literature that often involves problem-specific reasoning \cite{zhao_liyu_paper}. Our proof features a sophisticated induction argument exploiting the finite size of $\mathcal{B}$ and relies on tools from real analysis for limit arguments. \end{proof} All formal proofs can be found in the appendix. We remark that Thm.~\ref{thm:indexability_maintext} generalizes the result in the seminal work by \citet{zhao_liyu_paper} who proved the indexability for a special class of CoB. In particular, the RMAB in \citet{zhao_liyu_paper} can be viewed as a CoB setting with $P^a = P^p$, i.e., transitions are independent of actions. 
Though the Whittle index is known to be challenging to compute in general \cite{whittle1988restless}, we are able to design an algorithm that computes the Whittle index efficiently assuming the optimality of threshold policies, which we now describe. \paragraph{Fast Whittle Index Computation} \label{section:algorithm} The main algorithmic idea we use is the Markov chain structure that arises from imposing a \textit{forward} threshold policy on an MDP. A forward threshold policy can be defined by a tuple of the first belief state in each chain that is less than or equal to some belief threshold $b_{th} \in [0, 1]$. In the two-observation setting we consider, this is a tuple $(X_0^{b_{th}}, X_1^{b_{th}})$, where $X_\omega^{b_{th}} \in \{1,\ldots,T\}$ is the index of the first belief state in each chain where it is optimal to act (i.e., the belief is less than or equal to $b_{th}$). We now drop the superscript $b_{th}$ for ease of exposition. See Fig.~\ref{fig:chains} for a visualization of the transitions induced by such an example policy. For a forward threshold policy $(X_0, X_1)$, the occupancy frequencies induced for each state $b_\omega(u)$ are: \begin{align} f^{(X_0,X_1)}(b_\omega(u))= \begin{cases} \alpha & \textrm{if }\omega=0,u \le X_0 \\ \beta & \textrm{if }\omega=1,u \le X_1 \\ 0 & \textrm{otherwise} \end{cases} \\ \alpha = \bigg(\frac{X_1 b_0(X_0)}{1-b_1(X_1)} + X_0\bigg)^{-1} \text{, } \beta = \bigg(\frac{X_1 b_0(X_0)}{1-b_1(X_1)} + X_0\bigg)^{-1} \frac{b_0(X_0)}{1-b_1(X_1)} \end{align} These equations are derived from standard Markov chain theory. These occupancy frequencies do not depend on the subsidy. Let $J_m^{(X_0,X_1)}$ be the average reward of policy $(X_0,X_1)$ under subsidy $m$.
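The occupancy computation above is easy to encode and to check against the two balance conditions that define it. A minimal sketch (function and argument names are ours; here $\alpha,\beta$ are the chain frequencies of the induced Markov chain, with $\beta$ distinct from the discount factor):

```python
def occupancy(X0, X1, b0_at_X0, b1_at_X1):
    """Occupancy frequencies of the Markov chain induced by forward
    threshold policy (X0, X1): each of the X0 occupied states in chain 0
    has frequency alpha, each of the X1 states in chain 1 has frequency
    beta.  b0_at_X0 = b_0(X0) and b1_at_X1 = b_1(X1) are the beliefs at
    the two active (threshold) states."""
    alpha = 1.0 / (X1 * b0_at_X0 / (1.0 - b1_at_X1) + X0)
    beta = alpha * b0_at_X0 / (1.0 - b1_at_X1)
    return alpha, beta
```

Two invariants of the induced chain make good unit tests: normalization, $X_0\alpha + X_1\beta = 1$, and flow balance into the head of chain 1, $\beta(1 - b_1(X_1)) = \alpha\, b_0(X_0)$.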
We decompose the average reward into the contribution of the state reward and the subsidy: \begin{align}\label{eqn:Ravg} J_m^{(X_0,X_1)} = \sum_{b\in \mathcal{B}}bf^{(X_0,X_1)}(b) + m(1-f^{(X_0,X_1)}(b_1(X_1))-f^{(X_0,X_1)}(b_0(X_0))) \end{align} Recall that for any belief state $b_\omega(u)$, the Whittle index is the smallest $m$ for which the active and passive actions are both optimal. Given forward threshold optimality, this translates to two corresponding threshold policies being equally optimal. Such policies must have adjacent belief states as thresholds, as can be concluded from Lemma 1 in Appendix A. Note that for a belief state $b_0(X_0)$, the only adjacent threshold policies with active and passive as the optimal action at $b_0(X_0)$ are $(X_0,X_1)$ and $(X_0+1,X_1)$, respectively. The subsidy that makes these two policies equal in value must thus be the Whittle index for $b_0(X_0)$, which we obtain by solving $J_m^{(X_0, X_1)} = J_m^{(X_0+1, X_1)}$ for $m$. We use this idea to construct two fast Whittle index algorithms. \begin{figure}[t!]
\centering \subfloat[]{ \resizebox{!}{72pt}{% \begin{tikzpicture}[ -Triangle, every loop/.append style = {-Triangle}, start chain=main going right, state/.style={circle,minimum size=9mm,draw}, node distance=6mm, font=\scriptsize, >=stealth, bend angle=28, auto ] \node [state, on chain, fill=black, text=white] (1) {\textbf{\boldmath $b_0$(1)}}; {[start branch=A going below] \node[state, on chain, fill=black, text=white] (A) {\textbf{\boldmath$b_1$(1)}}; } \node [state, on chain] (2) {$b_0$(2)}; {[start branch=A going below] \node[state, on chain] (B) {$b_1$(2)}; } \node [state, on chain] (3) {$b_0$(3)}; {[start branch=A going below] \node[state, on chain, fill=lightgray] (C) {$b_1$(3)}; } \node[state, on chain, fill=lightgray] (4) {$b_0$(4)}; {[start branch=A going below] \node[state, on chain] (D) {$b_1$(4)}; } \node[state, on chain] (5) {...}; {[start branch=A going below] \node[state, on chain] (E) {...}; } \foreach \i in {1,...,2} { \draw let \n1 = { int(\i+1) } in (\i) edge[] (\n1); } \draw (3) edge[] (4); \draw (A) edge[](B); \draw (B) edge[] (C); \path[->,draw, thick] (C) edge node[near end] {$1-b_1(3)$} (1); \path[->,draw, thick, bend left=25] (C) edge node[near start] {$b_1(3)$} (A); \path[->,draw, thick, bend right=25] (4) edge node[near end, above] {$1-b_0(4)$} (1); \path[->,draw, thick] (4) edge node[near start] {$b_0(4)$} (A); \end{tikzpicture} }\label{fig:chains} } \hspace{10mm} \subfloat[]{{\includegraphics[width=0.31\textwidth]{figures/nib_v_sb_side.pdf}}\label{fig:nib}}% \caption{(a) Visualization of forward threshold policy ($X_0=4$,$X_1=3$). Black nodes are the head of each chain and grey nodes are the thresholds. (b) Non-increasing belief (NIB) process has non-increasing belief in both chains. 
A split belief process (SB) has non-increasing belief after being observed in state $1$, but non-decreasing belief after being observed in state $0$.}% \end{figure} \paragraph{Sequential index computation algorithm} Alg.~\ref{alg:algo1} precomputes the Whittle index of every belief state for each process and has time complexity $\mathcal{O}(|\mathcal{S}|^2T)$ per process. It is optimized for settings in which the Whittle index can be precomputed. However, for online learning settings, we give an alternative method in Appendix F that computes the Whittle index on-demand, in a closed form. \begin{algorithm}[h!] \SetAlgoLined Initialize counters to heads of the chains: $X_1 = 1$, $X_0 = 1$ \\ \While{$X_1 < T$ or $X_0 < T$} { Compute $ m_1 := m$ such that $ J_{m}^{(X_0,X_1)}=J_{m}^{(X_0,X_1+1)}$ \\ Compute $m_0 := m$ such that $ J_{m}^{(X_0,X_1)}=J_{m}^{(X_0+1,X_1)}$ \\ Set $i=\arg\min\{m_0, m_1\}$ and $W(X_i) = \min \{m_0, m_1 \}$ \\ Increment $X_i$ } \caption{Sequential index computation algorithm \label{alg:algo1}} \end{algorithm} Our algorithm also requires that belief is decreasing in $X_0$ and $X_1$. Formally, we require: \begin{definition}[Non-increasing belief (NIB) processes] A process has \emph{non-increasing belief} if, for any $u \in [T]$ and for any $\omega \in \mathcal{S}$, $b_\omega(u)\ge b_\omega(u+1)$. \label{def:NIB} \end{definition} All possible CoB belief trends are shown in Fig.~\ref{fig:nib} (full derivation omitted for space). We make this distinction because the computation of the Whittle index in Alg.~\ref{alg:algo1} is guaranteed to be exact for NIB processes that are also forward threshold optimal, though we show empirically that our approach works surprisingly well for most distributions. In the next section, we analyze the possible forms of optimal policies to find conditions under which threshold policies are optimal. 
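Since $J_m^{(X_0,X_1)}$ is affine in $m$, each ``Compute $m$'' step in Alg.~\ref{alg:algo1} amounts to equating two affine functions of $m$. The loop can be sketched as follows (our own illustrative code under the forward threshold optimality and NIB assumptions, not the authors' released implementation):

```python
def threshold_whittle(P_p, P_a, T):
    """Sequential Whittle index computation (Alg. 1 sketch) for one arm.
    Returns a dict mapping belief state (omega, u) -> Whittle index."""
    p01, p11 = P_p[0][1], P_p[1][1]
    d = 1.0 + p01 - p11

    def belief(omega, u):
        # b_omega(u) = tau_{u-1}(P_a[omega][1]), per Eq. tau_maintext
        b0 = P_a[omega][1]
        return (p01 - (p11 - p01) ** (u - 1) * (p01 - b0 * d)) / d

    def avg_reward_terms(X0, X1):
        # J_m = R + m * c: R is the state reward, c the passive frequency
        alpha = 1.0 / (X1 * belief(0, X0) / (1.0 - belief(1, X1)) + X0)
        beta = alpha * belief(0, X0) / (1.0 - belief(1, X1))
        R = alpha * sum(belief(0, u) for u in range(1, X0 + 1)) \
            + beta * sum(belief(1, u) for u in range(1, X1 + 1))
        return R, 1.0 - alpha - beta

    def indifference(old, new):
        # subsidy m at which J_m^{old} = J_m^{new}
        R1, c1 = avg_reward_terms(*old)
        R2, c2 = avg_reward_terms(*new)
        return (R2 - R1) / (c1 - c2)

    W, X0, X1 = {}, 1, 1
    while X0 < T or X1 < T:
        m0 = indifference((X0, X1), (X0 + 1, X1)) if X0 < T else float("inf")
        m1 = indifference((X0, X1), (X0, X1 + 1)) if X1 < T else float("inf")
        if m0 <= m1:
            W[(0, X0)], X0 = m0, X0 + 1
        else:
            W[(1, X1)], X1 = m1, X1 + 1
    return W
```

This naive version recomputes the occupancy sums in each iteration, costing $\mathcal{O}(T^2)$ per arm; caching running sums of the beliefs recovers the per-process complexity quoted above.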
\paragraph{Types of Optimal Policies} \begin{wrapfigure}{r}{0.5\textwidth} \centering \includegraphics[width=0.40\textwidth]{figures/value_func.pdf} \caption{Components of $V_m(b)$ in Eq.~\ref{eq:discounted_value_fn_definition}. Since the passive action is convex in $b$, active action is linear in $b$, and value function is a max over these, at most three optimal policy types are possible.}% \label{fig:value_func_forms} \end{wrapfigure} Analyzing Eq.~\ref{eq:discounted_value_fn_definition} reveals that at most three types of optimal policies exist. This follows directly from the definition of $V_m(b)$, which is a max over the passive action value function and the active action value function. The former is convex in $b$, a well-known POMDP result \cite{sondik1978optimal}, and the latter is linear in $b$. Thus, as shown in Fig.~\ref{fig:value_func_forms}, there are three ways in which the value functions of each action may intersect; this defines three optimal policy forms of \textit{forward}, \textit{reverse} and \textit{dual} threshold types, respectively. Forward and reverse threshold policies are defined in Def.~\ref{def:threshold_pols}; dual threshold policies are active between two separate threshold points and passive elsewhere. Not only do threshold policies greatly reduce the optimization search space, they often admit closed form expressions for the index as demonstrated earlier in this section. We now derive sufficient conditions on the state transition probabilities under which each type of policy is verifiably optimal. \begin{theorem}\label{thm:forward_threshold_opt} Consider a belief-state MDP corresponding to an arm in a Collapsing Bandit. 
For any subsidy $m$, there is a \emph{forward} threshold policy that is optimal under the condition: \begin{align} \label{eq:final_forward_threshold_condition} (P_{1,1}^p-P_{0,1}^p)(1+\beta(P_{1,1}^a-P_{0,1}^a))(1-\beta) \geq P_{1,1}^a-P_{0,1}^a \end{align} \end{theorem} \begin{proof}[Proof Sketch] Forward threshold optimality requires that if the optimal action at a belief $b$ is passive, then it must be so for all $b'>b$. This can be established by requiring that the derivative of the passive action value function is greater than the derivative of the active action value function w.r.t. $b$. The main challenge is to distill this requirement down to measurable quantities so the final condition can be easily verified. We accomplish this by leveraging properties of $\tau(b)$ and using induction to derive both upper and lower bounds on $V_m(b_1)-V_m(b_2) \ \forall \ b_1,b_2\ $ as well as a lower bound on $\frac{d(V_m(b))}{db}$. \end{proof} Intuitively, the condition requires that the intervention effect on processes in the ``bad'' state must be large, making $P_{1,1}^a-P_{0,1}^a$ small. Note that \citet{zhao_liyu_paper} consider the case where $P_{1,1}^a = P_{1,1}^p$ and $P_{0,1}^a = P_{0,1}^p$, which makes Eq.~\ref{eq:final_forward_threshold_condition} always true. Thus we generalize their result for threshold optimality. \begin{theorem}\label{thm:reverse_threshold_opt} Consider a belief-state MDP corresponding to an arm in a Collapsing Bandit. For any subsidy $m$, there is a \emph{reverse} threshold policy that is optimal under the condition: \begin{align} \label{eq:final_reverse_threshold_condition_maintext} (P_{1,1}^p-P_{0,1}^p)\Big(1+\frac{\beta (P_{1,1}^a-P_{0,1}^a) }{1-\beta}\Big)\le P_{1,1}^a-P_{0,1}^a \end{align} \end{theorem} Intuitively, the condition requires small intervention effect on processes in the ``bad'' state, the opposite of the forward threshold optimal requirement. 
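Both sufficient conditions involve only the transition ``spreads'' $P_{1,1}-P_{0,1}$ under each action and the discount factor, so candidate arms can be screened cheaply. A small sketch (helper names are ours; the inequalities are transcribed from the two theorems above):

```python
def forward_threshold_ok(P_p, P_a, beta):
    """Sufficient condition for forward threshold optimality: a small
    active spread, i.e., a large intervention effect in the bad state."""
    dp = P_p[1][1] - P_p[0][1]  # passive spread
    da = P_a[1][1] - P_a[0][1]  # active spread
    return dp * (1.0 + beta * da) * (1.0 - beta) >= da

def reverse_threshold_ok(P_p, P_a, beta):
    """Sufficient condition for reverse threshold optimality: a large
    active spread, i.e., a small intervention effect in the bad state."""
    dp = P_p[1][1] - P_p[0][1]
    da = P_a[1][1] - P_a[0][1]
    return dp * (1.0 + beta * da / (1.0 - beta)) <= da
```

As expected from the intuition above, an arm with a small active spread satisfies the forward condition but not the reverse one, and vice versa.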
Note that both Thm.~\ref{thm:forward_threshold_opt} and Thm.~\ref{thm:reverse_threshold_opt} also serve as conditions for the average reward case as $\beta\rightarrow{}1$ (a proof based on Dutta's Theorem \cite{dutta1991discounted} is given in Appendix D). \begin{conjecture}\label{conjecture:no_dual_thresh} Dual threshold policies are never optimal for Collapsing Bandits. \end{conjecture} This conjecture is supported by extensive numerical simulations over the random space of state transition probabilities, values of $\beta$, and values of subsidy $m$; its proof remains an open problem. Note that this would imply that all Collapsing Bandits are indexable. \section{Experimental Evaluation} We evaluate our algorithm on several domains using both real and synthetic data distributions. We test the following algorithms: \noindent\textbf{Threshold Whittle} is the algorithm developed in this paper. \textbf{\citet{qian2016restless}}, a slow but precise general method for computing the Whittle index, is our main baseline that we improve upon. \textbf{Random} selects $k$ processes to act on at random each round. \textbf{Myopic} acts on the $k$~processes that maximize the expected reward at the immediate next time step. Formally, at time $t$, this policy picks the $k$ processes with the largest values of $\Delta b_{t} = (b_{t+1}|a=1)-(b_{t+1}|a=0)$. \textbf{Oracle} fully observes all states and uses \citet{qian2016restless} to calculate Whittle indices. We measure performance in terms of \emph{intervention benefit}, where $0\%$ corresponds to the reward of a policy that is always passive and $100\%$ corresponds to Oracle. All results are averaged over 50 independent trials. \subsection{Real Data: Monitoring Tuberculosis Medication Adherence} We first test on tuberculosis medication adherence monitoring data, which contains daily adherence information recorded for each real patient in the system, as obtained from \citet{killian2019learning}.
The ``good'' and ``bad'' states of the arm (patient) correspond to ``Adhering'' and ``Not Adhering'' to medication, respectively. State transition probabilities are estimated from the data. Because this data is noisy and contains only the adherence records and not the intervention (action) information (as the authors state), we perturb the computed average transition matrix by reducing (increasing) $P_{\omega,1}$ by $\delta_1, \delta_2$ ($\delta_3, \delta_4$) to obtain $P_{\omega,1}^p$ ($P_{\omega,1}^a$) for the simulation. Reward is measured as the undiscounted sum of patients (arms) in the adherent state over all rounds, where each trial lasts $T=180$ days (matching the length of first-line TB treatment) with $N$ patients and a budget of $k$ calls per day. In Fig.~\ref{fig:runtime-plot}, we plot the runtime in seconds vs the number of patients $N$. Fig.~\ref{fig:tb-performance} compares the intervention benefit for $N=100, 200, 300, 500$ patients and $k=10\% $ of $N$. In the $N=200$ case, the runtimes of a single trial of Qian et al.~and Threshold Whittle index policy are $3708$ seconds and $3$ seconds, respectively, while attaining near-identical intervention benefit. Our algorithm is thus $3$ orders of magnitude faster than the previous state of the art without sacrificing performance. We next test Threshold Whittle as the resource level~$k$ is varied. Fig.~\ref{fig:kplot} shows the performance in the $k=5\%N$, $k=10\%N$ and $k=15\%N$ regimes ($N=200$). Threshold Whittle outperforms Myopic and Random by a large margin in these low resource settings. We also affirm the robustness of our algorithm to $\delta$, the perturbation parameter used to approximate real-world $P_{\omega,1}^p$ and $P_{\omega,1}^a$ from the data and present the extensive sensitivity analysis in Appendix G. 
Finally, in Appendix F we couple our algorithm to a Thompson Sampling-based learning approach and show it performs well in the real-world case where transition probabilities would need to be learned online, supporting the deployability of our work. \begin{figure}[h!] \centering \subfloat[]{{\includegraphics[ width=0.28\textwidth, clip]{figures/runtime_only.pdf}}\label{fig:runtime-plot}} \hspace{2mm} \subfloat[]{{\includegraphics[ width=0.35\textwidth, clip]{figures/tb_performance.pdf}}\label{fig:tb-performance}}% \hspace{2mm} \subfloat[]{\includegraphics[width=.27\textwidth]{figures/newKplot.pdf}\label{fig:kplot}} \caption{(a) Threshold Whittle is several orders of magnitude faster than Qian et al.~and scales to thousands of patients without sacrificing performance on realistic data (b). (c) Intervention benefit of Threshold Whittle is far larger than naive baselines and nearly as large as Oracle.}% \end{figure} \subsection{Synthetic Domains} We test our algorithm on four synthetic domains that potentially characterize other healthcare or related domains, each highlighting different phenomena. Specifically, we: (i) identify situations where Myopic fails completely while Whittle remains close to optimal, (ii) analyze the effect of latent state entropy on policy performance, (iii) identify limitations of Threshold Whittle by constructing processes for which Threshold Whittle shows separation from Oracle, and (iv) test the robustness of our algorithm outside of the theoretically guaranteed conditions. To facilitate comparison with the real data distribution, we simulate trials for $T=180$ rounds where reward is the undiscounted sum of arms in state $1$ over all rounds. We consider the space of transition probabilities satisfying the assumed natural constraints, as outlined in Sec.~\ref{section:problem_formulation}. Fig.~\ref{fig:synth_results}a demonstrates a domain characterized by processes that are either self-correcting or non-recoverable.
Self-correcting processes have a high probability of transitioning from state $0$ to $1$ regardless of the action taken, while non-recoverable processes have a low chance of doing so. We show that when the immediate reward is larger for the former than the latter, Myopic can perform even worse than Random. That is because a myopic policy always prefers to act on the self-correcting processes due to their larger immediate reward, while Threshold Whittle, capable of long-term planning, avoids spending resources on these processes. In this regime, the best long-term plan is to always act on the non-recoverable processes to keep them from failing. An analytical explanation of this phenomenon is presented in Appendix E. We set the resource level to $k=10\%N$ in our simulation for Fig.~\ref{fig:synth_results}a. Note that the performance of Myopic drops as the fraction of self-correcting processes becomes larger, reaching a minimum at $x=100\%-k=90\%$. Beyond this point, Threshold Whittle can no longer completely avoid the self-correcting processes and the gap subsequently starts to decrease. Fig.~\ref{fig:synth_results}b explores the effect of uncertainty in the latent state on long-term planning. For each point on the $x$-axis, we draw all transition probabilities according to $P_{\omega,1}^p, P_{\omega,1}^a \sim [x,x+0.1]$. The entropy of a process's state is maximal near $0.5$, making long-term planning most uncertain; as a result, this point shows the biggest gap with Oracle, which can observe all states in each round. Note that the Myopic and Whittle policies perform similarly, as expected for (nearly) stochastically identical arms. Fig.~\ref{fig:synth_results}c studies processes that have a large propensity to transition to state $0$ when passive and a correspondingly low active action impact, but a significantly larger active action impact in state $1$. This makes it attractive to exclusively act on processes in state $1$.
This simulates healthcare domains where a fraction of patients degrade rapidly, but can recover, and indeed respond very well to interventions if already in a good state. To simulate these, we draw transition matrices with $P_{0,1}^p, P_{1,1}^p, P_{0,1}^a \sim [0.3,0.32]$ and $ P_{1,1}^a \sim [0.7,0.72]$ in varying proportions and sample the rest from the real TB adherence data. Because the best plan is to act on processes in state $1$, both Myopic and Whittle act on the processes with the largest belief giving Oracle a significant advantage as it has perfect knowledge of states. Although we provide theoretical guarantees on our algorithm for forward threshold optimal processes with non-increasing belief, Fig.~\ref{fig:synth_results}d reveals that Alg.~\ref{alg:algo1} performs well empirically even with these conditions relaxed. Here, we sample processes uniformly at random from the state transition probability space, and use rejection sampling to vary the proportion of threshold optimal processes. Threshold Whittle performs well even when as few as $20\%$ of the processes are forward threshold optimal; we briefly analyze this phenomenon in Appendix H. \begin{figure}[h!] \includegraphics[ width=.85\textwidth, clip]{figures/combined.pdf} \centering \caption{(a) Myopic can be trapped into performing even worse than Random while Threshold Whittle remains close to optimal. (b) Long-term planning is least effective when entropy of states is maximum. (c) Myopic and Whittle planning become similar when more processes are prone to failures. (d) Threshold Whittle is surprisingly robust to processes even outside of theoretically guaranteed conditions.} \label{fig:synth_results} \end{figure} \section{Conclusion} We open a new subspace of Restless Bandits, \emph{Collapsing Bandits}, which applies to a broad range of real-world problems, especially in healthcare delivery. 
We give new theoretical results that cover a large portion of real-world data as well as an algorithm that runs thousands of times faster than the state of the art without sacrificing performance. \bibliographystyle{plainnat}
Concordance of programmed death-ligand 1 expression between primary and metastatic non-small cell lung cancer by immunohistochemistry and RNA in situ hybridization. ABSTRACT: We investigated the concordance of programmed death-ligand 1 (PD-L1) expression between primary cancer at initial diagnosis and metastasis at recurrence in resected non-small cell lung cancer (NSCLC). PD-L1 expression was evaluated using the SP142 assay in 37 NSCLC patients with paired primary lung cancer and surgically resected metastases at recurrence. PD-L1 positivity was defined as immunohistochemistry (IHC) and also evaluated by RNA in situ hybridization (RISH). The concordance rate of PD-L1 between primaries and metastases and correlation with clinicopathological factors were analyzed. PD-L1 expression was higher in squamous cell carcinoma, wild-type EGFR, and smokers than in non-squamous carcinoma, mutant EGFR, and never smokers, respectively. PD-L1 positivity was observed in 18.9% of primaries and 21.6% of metastases. IHC demonstrated 78.4% concordance of PD-L1 positivity between primary and metastatic cancers. In 10.8% of cases, PD-L1 positivity was higher in primaries than in metastases, and vice versa in the remaining 10.8%. By PD-L1 RISH, 35.1% of primaries and 27.0% of metastases demonstrated PD-L1 positivity. There was 62.2% concordance in PD-L1 by RISH between the primaries and metastases. Our results thus highlight the clinical importance of replacing metastases with primary archival tissue, particularly when re-biopsy is difficult at recurrence. Concordance of PD-L1 expression and CD8+ TIL intensity between NSCLC and synchronous brain metastases. Project description:Programmed death-ligand 1 (PD-L1) is suggested to be a predictive biomarker in non-small-cell lung carcinoma (NSCLC). However, the differential expression of PD-L1 in primary lung tumor vs. synchronous metastases, especially brain metastasis (BM), remains unclear. 
This study assessed the concordance of PD-L1 expression on tumor cells and tumor-infiltrating lymphocytes (TILs) and CD8+ TIL intensity between primary lung tumors and synchronous BMs from 24 NSCLC patients. PD-L1, CD3, and CD8 positivity was determined by immunohistochemistry (IHC). PD-L1 scoring was based on the proportion of tumor cells with membranous expression of PD-L1 and the cutoff values <1%, 1-49%, and ?50%. CD3 and CD8 positivity in TILs was evaluated semi-quantitatively and the proportion of CD3+/CD8+ TILs was determined. PD-L1 expression on tumor cells and TILs was evaluated in relation to CD3+/CD8+ TIL proportions and the intensity of CD8+ TILs between the paired primary lung and BM tissues. In the primary lung tumors, PD-L1 positivity was observed in 25%, 37.5%, and 37.5% cases for the cutoff values <1%, 1-49%, and ?50%, respectively. PD-L1 expression on tumor cells was strongly correlated between the paired primary lung and BM tissues, in all cutoff groups. However, PD-L1 expression on TILs and the proportion of CD3+/CD8+ TILs were not strongly correlated in all three groups between the paired primary lung tumors and BMs. The intensity of CD8+ TILs was concordant in only 54.16% of the paired primary lung tumors and BMs. This study showed a high concordance of PD-L1 expression in neoplastic cells between primary NSCLC and synchronous BMs. Prognostic impact of PD-L1 expression in correlation with neutrophil-to-lymphocyte ratio in squamous cell carcinoma of the lung. Project description:The prognostic impact of tumoral programmed death-ligand 1 (PD-L1) expression in correlation with neutrophil-to-lymphocyte ratio (NLR) was retrospectively assessed in 83 patients with completely resected stage I squamous cell carcinoma of the lung, as PD-L1 is a potent regulator of cancer immunity and NLR is a potential surrogate of immune status. Forty-three patients (51.8%) had tumor with positive PD-L1 expression. 
There was no significant correlation between PD-L1 expression and NLR. PD-L1-positivity failed to provide a significant prognostic impact (overall survival [OS] rate at 5 years, 53.0% in PD-L1-positive patients versus 70.1% in PD-L1-negative patients; P = 0.117). Among NLR-low (<2.2) patients, however, PD-L1-positivity was significantly correlated with a poor prognosis (OS rate at 5 years, 46.1% versus 86.0%; P = 0.020). In contrast, among NLR-high (≥2.2) patients, PD-L1-positivity provided no prognostic impact (P = 0.680). When NLR status and tumoral PD-L1 status were combined, "NLR-low and PD-L1-negative" was a significant and independent factor to predict a favorable recurrence-free survival (hazard ratio, 0.237 [95% confidence interval, 0.083 to 0.674]; P = 0.007) and OS (hazard ratio, 0.260 [0.091 to 0.745]; P = 0.012). These results suggest the prognostic impact of tumoral PD-L1 expression might be influenced by the status of NLR. Brain metastasis PD-L1 and CD8 expression is dependent on primary tumor type and its PD-L1 and CD8 status. Project description:BACKGROUND:Brain metastases (Bmets) are frequent; however, limited data exist on the efficacy of immunotherapy in these lesions. The aims of the study were to analyze the immunohistochemical expressions of programmed death ligand 1 (PD-L1) and CD8 in Bmets and to compare them with their expressions in paired primary tumors, as well as correlate the results with clinicopathological features. METHODS:This is a retrospective study of 233 patients with Bmets and 111 paired primaries. Clinical, histological, and molecular data were recorded and compared with the immunohistochemical results of PD-L1 and CD8 expressions. The statistical analysis included χ² test, Cramer's V test, factorial analyses of variance, simple regression analysis, and Kaplan-Meier analysis with log-rank product limit estimation.
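The combined marker in the squamous cell carcinoma study above is just a cross-classification of each patient by the NLR cutoff (2.2) and tumor PD-L1 status, yielding four risk groups. A minimal sketch with invented patient values (the labels and data are hypothetical, chosen only to mirror the abstract's wording):

```python
# Cross-classify patients into the four NLR / PD-L1 groups described in the
# abstract (NLR cutoff 2.2). Patient data here are invented for illustration.
from collections import Counter

NLR_CUTOFF = 2.2

def risk_group(nlr, pdl1_positive):
    nlr_label = "NLR-low" if nlr < NLR_CUTOFF else "NLR-high"
    pdl1_label = "PD-L1-positive" if pdl1_positive else "PD-L1-negative"
    return f"{nlr_label}/{pdl1_label}"

patients = [
    (1.8, False),  # the favorable group in the study
    (1.5, True),
    (3.0, False),
    (2.9, True),
    (2.2, False),  # exactly at the cutoff counts as NLR-high
]

groups = Counter(risk_group(nlr, pos) for nlr, pos in patients)
print(groups["NLR-low/PD-L1-negative"])  # 1
```

In the study, only the "NLR-low/PD-L1-negative" group independently predicted favorable recurrence-free and overall survival.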
RESULTS:PD-L1 expression was found in 23.6% of Bmets and in 29.0% of primary tumors, with concordant expression between them in 75.5% of cases. Bmets PD-L1 expression was associated with primary tumor PD-L1 expression and the primary tumor type. Significant CD8 peritumoral expression was found in 68.6% of Bmets and in 87.7% of primary tumors. CD8 expression was concordant between primary and metastatic tumors in 73.3% of cases. Bmets CD8 expression was associated with primary tumor CD8 expression and primary tumor type. PD-L1 expression was associated with CD8 expression in both primary and metastatic tumors. The concordance between primary and metastatic tumor PD-L1 expression was independent of all factors studied. The concordance between primary and metastatic CD8 expressions was marginally associated with the time of Bmets development. No prognostic role for PD-L1 and CD8 expression in Bmets was found. CONCLUSION:PD-L1 and CD8 expressions in Bmets are associated with the primary tumor type and its PD-L1 and CD8 expressions. No factor predicts the discordance for PD-L1 expression, while time to Bmets development is associated with CD8 expression discordance. PD-L1 and Tumor Infiltrating Lymphocytes as Prognostic Markers in Resected NSCLC. Project description:INTRODUCTION:Immune checkpoint inhibition has shifted treatment paradigms in non-small cell lung cancer (NSCLC). Conflicting results have been reported regarding the immune infiltrate and programmed death-ligand 1 (PD-L1) as a prognostic marker. We correlated the immune infiltrate and PD-L1 expression with clinicopathologic characteristics in a cohort of resected NSCLC. METHODS:A tissue microarray was constructed using triplicate cores from consecutive resected NSCLC. Immunohistochemistry was performed for CD8, FOXP3 and PD-L1. Strong PD-L1 expression was predefined as greater than 50% tumor cell positivity.
Matched nodal samples were assessed for concordance of PD-L1 expression. RESULTS:Of 522 patients, 346 were node-negative (N0), 72 N1 and 109 N2; 265 were adenocarcinomas (AC), 182 squamous cell cancers (SCC) and 75 other. Strong PD-L1 expression was found in 24% of cases. In the overall cohort, PD-L1 expression was not associated with survival. In patients with N2 disease, strong PD-L1 expression was associated with significantly improved disease-free survival (DFS) and overall survival (OS) in multivariate analysis (HR 0.49, 95%CI 0.36-0.94, p = 0.031; HR 0.46, 95%CI 0.26-0.80, p = 0.006). In this resected cohort only 5% harboured EGFR mutations, whereas 19% harboured KRAS mutations and 23% other alterations. KRAS-mutated tumors were more likely to highly express PD-L1 than EGFR-mutated tumors (22% vs 3%). A stromal CD8 infiltrate was associated with significantly improved DFS in SCC (HR 0.70, 95%CI 0.50-0.97, p = 0.034), but not AC, whereas FOXP3 was not prognostic. Matched nodal specimens (N = 53) were highly concordant for PD-L1 expression (89%). CONCLUSION:PD-L1 expression was not prognostic in the overall cohort. PD-L1 expression in primary tumor and matched nodal specimens was highly concordant. The observed survival benefit in N2 disease requires confirmation. Clinicopathological analysis and prognostic significance of programmed cell death-ligand 1 protein and mRNA expression in non-small cell lung cancer. Project description:In this study, we present the clinicopathological features associated with PD-L1 protein and mRNA expression in a large Asian cohort of patients with non-small cell lung cancer (NSCLC) and assess the prognostic implications of PD-L1 expression, particularly in early stage NSCLC. We retrospectively analyzed 687 NSCLC specimens (476 adenocarcinoma and 211 squamous cell carcinoma) using tissue microarray. PD-L1 immunohistochemistry (IHC) was performed using the Dako 22C3 pharmDx assay and PDL1 mRNA was measured using RNA in situ hybridization (RISH).
The overall prevalence of PD-L1 protein expression was 25.2% in tumor cells and PDL1 mRNA expression was 11.9%. There was a strong positive correlation between PD-L1 IHC and RISH results (Spearman's rho = 0.6, p<0.001). In adenocarcinoma, PD-L1 protein and mRNA expressions significantly correlated with poorly differentiated histologic subtype (p<0.001 and p = 0.002, respectively). PD-L1 expression was also associated with genetic alteration in adenocarcinoma. High PD-L1 expression level was associated with EGFR-naïve and KRAS-mutant subgroup (p = 0.001 and p = 0.017, respectively). With a 1% cut-off value, PD-L1 protein expression showed a short overall survival duration in early stage adenocarcinoma with marginal significance (p = 0.05, Hazard ratio = 1.947). Our study revealed that PD-L1 expression varied with histologic subtype and genomic alteration status in lung adenocarcinoma, and activation of the PD-L1 pathway may be a poor prognostic factor especially in early stage lung adenocarcinoma. In addition, PDL1 RISH showed promising results in predicting PD-L1 protein expression in NSCLC. Differential Expression of PD-L1 between Primary and Metastatic Sites in Clear-Cell Renal Cell Carcinoma. Project description:PD-L1 expression in primary clear-cell renal cell carcinoma (ccRCC) increases the likelihood of response to anti-PD-1 inhibition, but fails to identify all responders. We hypothesized that PD-L1 levels assessed in randomly selected areas of the primary tumors may not accurately reflect expression levels in metastatic lesions, which are the target of systemic therapy. Therefore, we compared PD-L1 expression in a series of primary ccRCC and their metastases. Tissue blocks from 53 primary ccRCCs and 76 corresponding metastases were retrieved. Areas with predominant and highest nuclear grade were selected. Slides were immunostained with a validated anti-PD-L1 antibody (405.9A11). Membranous expression in tumor cells was quantified using H-score. 
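The H-score mentioned above for membranous staining is conventionally the weighted sum of the percentages of cells staining at weak (1+), moderate (2+) and strong (3+) intensity, giving a value between 0 and 300. A small sketch of that standard formula (the example numbers are invented):

```python
# H-score for membranous staining: weighted sum of the percentage of cells
# staining at weak (1+), moderate (2+) and strong (3+) intensity, 0-300.
# Standard definition; the example values are invented for illustration.

def h_score(pct_weak, pct_moderate, pct_strong):
    assert 0 <= pct_weak + pct_moderate + pct_strong <= 100
    return 1 * pct_weak + 2 * pct_moderate + 3 * pct_strong

print(h_score(20, 10, 5))  # 20 + 20 + 15 = 55
```

A tumor with no staining at all scores 0, while one with 100% strongly staining cells scores the maximum of 300.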
Expression in tumor-infiltrating mononuclear cells (TIMC) was quantified using a combined score. Discordant tumor cell PD-L1 staining between primary tumors and metastases was observed in 11 of 53 cases (20.8%). Overall, tumor cell PD-L1 levels were not different in primary tumors and metastases (P = 0.51). Tumor cell PD-L1 positivity was associated with higher T stage (P = 0.03) and higher Fuhrman nuclear grade (P < 0.01). Within individual lesions, PD-L1 positivity was heterogeneous and almost exclusively detected in high nuclear grade areas (P < 0.001). No difference was found in PD-L1 levels in TIMCs between primary tumors and metastases (P = 0.82). The heterogeneity of PD-L1 expression in ccRCC suggests that its assessment as a predictive biomarker for PD-1 blockade may require analysis of metastatic lesions. Notably, because PD-L1 expression was mostly detected in high nuclear grade areas, to avoid false-negative results, these areas should be specifically selected for assessment. Quantitative and pathologist-read comparison of the heterogeneity of programmed death-ligand 1 (PD-L1) expression in non-small cell lung cancer. Project description:PD-L1 is expressed in a percentage of lung cancer patients and those patients show increased likelihood of response to PD-1 axis therapies. However, the methods and assays for the assessment of PD-L1 using immunohistochemistry are variable and PD-L1 expression appears to be highly heterogeneous. Here, we examine assay heterogeneity parameters toward the goal of determining variability of sampling and the variability due to pathologist-based reading of the immunohistochemistry slide. SP142, a rabbit monoclonal antibody, was used to detect PD-L1 by both chromogenic immunohistochemistry and quantitative immunofluorescence using a laboratory-derived test. 
Five pathologists scored the percentage of PD-L1 positivity in tumor- and stromal-immune cells of 35 resected non-small cell lung cancer cases, each represented on three separate blocks. An intraclass correlation coefficient of 94% agreement was seen among the pathologists for the assessment of PD-L1 in tumor cells, but only 27% agreement was seen in stromal/immune cell PD-L1 expression. The block-to-block reproducibility of each pathologist's score was 94% for tumor cells and 75% among stromal/immune cells. Lin's concordance correlation coefficient between pathologists' readings and the mean immunofluorescence score among blocks was 94% in tumor and 68% in stroma. Pathologists were highly concordant for PD-L1 tumor scoring, but not for stromal/immune cell scoring. Pathologist scores and immunofluorescence scores were concordant for tumor tissue, but not for stromal/immune cells. PD-L1 expression was similar among all the three blocks from each tumor, indicating that staining of one block is enough to represent the entire tumor and that the spatial distribution of heterogeneity of expression of PD-L1 is within the area represented in a single block. Future studies are needed to determine the minimum representative tumor area for PD-L1 assessment for response to therapy. PD-L1 expression in bladder cancer and metastasis and its influence on oncologic outcome after cystectomy. Project description:Platinum-based chemotherapy is the standard of care in metastatic bladder cancer. With the approval of various checkpoint inhibitors, immunotherapy has revolutionized the traditional treatment modalities. The aim of the study was to evaluate whether PD-L1 expression on tumor cells (TCs) and tumor-infiltrating immune cells (ICs) can be used as biomarker to predict recurrence-free survival (RFS), overall survival (OS) and disease-specific survival (DSS) in bladder cancer patients after radical cystectomy (RC) developing disease recurrence followed by first-line chemotherapy. 
PD-L1 was measured on formalin-fixed, paraffin-embedded tissue sections of RC specimens in all patients (n=61) and in 27 matched metastatic biopsy samples by immunohistochemistry. PD-L1 expression on TCs was defined by the percentage of PD-L1 positive tumor cells (<1% = IC0, ≥1% but <5% = IC1, ≥5% = IC2/3), and was considered negative or positive for ICs. On 27 paired samples, the IC1/2/3 score on TCs was homogeneously distributed, at 59.3% in both primary tumors and metastases, but with a high discordance rate of 44.4% for PD-L1 positivity on ICs. High PD-L1 expression (IC2/3) on TCs was more frequently seen in histologic subtypes of urothelial cancer compared to pure urothelial cancers (46.2% vs. 20.8%; p=0.002). PD-L1 expression on TCs in primary tumors (IC2/3 vs. IC0, median: 3.2 vs. 13.8 months, p=0.019) and metastatic sites (IC2/3 vs. IC0, median: 6.1 vs. 21.8 months, p=0.014) was associated with poor chemo-response, represented by significantly shortened DSS. These results suggest that PD-L1 may be a potential target involved in chemo-resistance mechanisms and poses potential for therapy stratification in the future. CD274/PD-L1 gene amplification and PD-L1 protein expression are common events in squamous cell carcinoma of the oral cavity. Project description:Immunomodulatory therapies targeting the immune checkpoint receptor-ligand complex PD-1/PD-L1 have shown promising results in early phase clinical trials in solid malignancies, including carcinomas of the head and neck. In this context, PD-L1 protein expression has been proposed as a potentially valuable predictive marker. In the present study, expression of PD-L1 and PD-1 was evaluated by immunohistochemistry in 80 patients with predominantly HPV-negative oral squamous cell carcinomas and associated nodal metastasis. In addition, CD274/PD-L1 gene copy number status was assessed by fluorescence in situ hybridization analysis.
PD-L1 expression was detected in 36/80 (45%) cases and concordance of PD-L1 expression in primary tumor and corresponding nodal metastasis was present in only 20/28 (72%) cases. PD-1 expression was found in tumor-infiltrating lymphocytes (TILs) but not in tumor cells. CD274/PD-L1 gene amplification was detected in 19% of cases, with high level PD-L1 amplification present in 12/80 (15%), and low level amplification in 3/80 (4%). Interestingly, CD274/PD-L1 gene amplification was associated with positive PD-L1 immunostaining in only 73% of cases. PD-L1 copy number status was concordant in primary tumor and associated metastases. Clinically, PD-L1 tumor immunopositivity was associated with a higher risk for nodal metastasis at diagnosis, overall tumor-related death and recurrence. Based on our findings we propose to include PD-L1 copy number status in addition to protein status in screening programs for future clinical trials with immunotherapeutic strategies targeting the PD-1/PD-L1 axis. PD-L1 expression combined with microsatellite instability/CD8+ tumor infiltrating lymphocytes as a useful prognostic biomarker in gastric cancer. Project description:While the importance of programmed death-ligand 1 (PD-L1), mutation burden caused by microsatellite instability (MSI), and CD8+ tumor infiltrating lymphocytes (TILs) has become evident, the significance of PD-L1 expression on prognosis still remains controversial. We evaluated the usefulness of combined markers of PD-L1 and MSI or CD8+ TILs as a prognostic biomarker in gastric cancer. A total of 283 patients with gastric cancer were reviewed retrospectively. PD-L1 expression on >5% tumor cells was defined as PD-L1-positive. The PD-L1-positive rate was 15.5% (44/283). PD-L1 positivity was significantly correlated with invasive and advanced cancer and also significantly correlated with MSI, whereas no significance was observed with CD8+ TILs.
Kaplan-Meier analysis showed that PD-L1 positivity significantly correlated with a poor prognosis (p = 0.0025). Multivariate analysis revealed that PD-L1 positivity was an independent poor prognostic factor (hazard ratio [HR]: 1.97, p = 0.0106) along with diffuse histological type and lymph node metastases. Combinations of PD-L1 and MSI (HR: 2.18) or CD8+ TILs (HR: 2.57) were stronger predictive factors for prognosis than PD-L1 alone. In conclusion, combined markers of PD-L1 and MSI or CD8+ TILs may be more useful prognostic biomarkers in gastric cancer, and better clarify the immune status of gastric cancer patients.
Nobody loves a cliché more than football. Phrases like "I've seen them given", "great feet for a big guy" and "another famous European night at Anfield" drive most of us, unless you're a Liverpool fan or Peter Crouch, completely bonkers when they're spewed like confetti by Andy Townsend. Yet there's one cliché that most of us seem willing to accept and even encourage: "The Magic of the Cup". We all dreamed of it when we were kids (there you go – another one) – scoring the winning goal in the FA Cup final at Wembley. Well, here's your chance with the FA People's Cup, a 5-a-side tournament for the good people of Britain. In collaboration with the BBC Get Inspired campaign, the FA People's Cup offers everyone the chance of a day at Wembley with a giant knockout competition that will give 14,000 teams and over 100,000 players a chance to taste cup glory. It is open both to established teams and to Joes and Janes who can join 'wild card' teams that are formed on the day. You'll be a bit like that ringer kid who always turns up down the park wanting a kick-about. The competition will take place over three rounds, with the first round taking place on the weekend of the 20th-22nd of February. There will be eight regional venues hosting the semi-finals between the 24th and 26th of April. Disability Football finalists will be decided through a series of Disability Football Festivals taking place in March and April. The final will take place on Bank Holiday Monday, the 25th of May, in Manchester, and the winning teams will be invited to the FA Cup Final at Wembley to receive their People's Cup trophy on the 30th of May. Some of the country's leading small-sided providers have signed up, so there should be plenty of places to get playing, with Goals, PlayFootball and Powerleague all involved. For details of your local festival, email [email protected] stating your postcode, and you'll need to get registered at www.bbc.co.uk/getinspired before the 14th of February, so be quick!
Surprise! The buggiest Windows 10 update Microsoft released in recent memory is also the least successful. This has been revealed in the newest numbers from AdDuplex. The advertising network shared the latest intelligence gathered from the world of Windows, and it shows that the April 2018 Update (Version 1803) continues to be the number one version of the operating system with an 80.2% slice of the pie. The recently released October 2018 Update follows behind with just 12.4%. The software titan made this new version available late last year, but pulled it only a few days after the original launch due to issues potentially causing data loss. Soon after, a number of other bugs were unearthed in this version of the OS. As it turns out, all the problems that users experienced with the October 2018 Update convinced others that it is better to just wait and not install it right away. That is probably why, even with this update available for all devices via a manual check for updates in Windows Update, not many took the risk. Microsoft followed that by pushing this new version as an automatic download for the first wave of devices this month, but it is clear that people are taking a wait-and-see approach. What's worse for Redmond is that the company is close to finalizing the next Windows 10 update, codenamed 19H1, which is on track to arrive in a couple of months. With home users and IT professionals thinking twice before upgrading their devices, this is not a good place to be in terms of fragmentation. The adoption of the October 2018 Update will continue in the coming months. But what would be interesting is whether it manages to grow at a quick pace before 19H1 arrives, and whether it is able to overtake the April 2018 Update at all.
refurbished coupon code

It is an incredibly popular – and rather frightening – statistic that everyone in the business world has heard: "90% of startups fail". If you look at these failed companies, which once started with much vitality and enthusiasm, they rarely fail because the idea or the concept of the business itself was bad. There are plenty of businesses with great potential that fail because of the founders' lack of knowledge about marketing or actually making a sale. It does not matter how good your product or service is if you do not know how to turn your leads into conversions. It is comparable to a car showroom without a salesman: there will be many visitors who come to see the cars, but without that added push, there will be no sales. A website without a sales funnel is the same, which is why you should buy Click Funnels in order to get the highest number of sales possible. Click Funnels provides you with a pre-designed and well-curated sales funnel that takes your visitors through a persuasive journey, so that they buy your product at the end. Click Funnels has built a range of sales funnels that can lead you to achieve your goals by taking your visitors through a well-thought-out sales process. A sales funnel positively influences your visitors' decision to go forward with your goal. It essentially provides the little push a salesperson would give in a physical store to finally make up a customer's mind to go ahead with a purchase. A website is like a digital store, and the purpose of putting a large investment into designing a website is ultimately to help increase the revenue of your business.
Most people, especially startups and entrepreneurs who are new to the online business world, focus so much on the look of the design that they never consider whether it works well enough to actually make a sale. You may spend a lot of money to hire the best web designers and developers, and they may even deliver a great-looking website, but if you do not think about the sales process within the website, your investment will only cost you money without a return. This is why you should buy Click Funnels: a website created through that service is laser-focused on delivering great marketing and sales results from beginning to end. The following are the four important things that well-chosen and well-placed sales funnels do. Attracting new visitors to the website is the first and most important job a sales funnel does; this is the mouth of the funnel. The next stage is narrower and must be well maintained, because these leads are likely to make a purchase in the next few stages as long as you keep them happy. Closing is the stage of a funnel where a lead becomes a customer: they actively place an order and buy your product, sign up for your newsletter, or generally do what you intended them to do by building the funnel. The next level is customer retention, or keeping customers excited about your product or their purchase, so they become return customers.
Illustrations of the Family of Psittacidae, or Parrots is an 1832 book containing 42 hand-coloured lithographs by Edward Lear. He produced 175 copies for sale to subscribers as a part-publication, which were later bound as a book. Lear started painting parrots in 1830 when he was 18 years old, and to get material for his book he studied live birds at the London Zoo and in private collections. The latter included those of Edward Smith Stanley, later 13th Earl of Derby, who had a large menagerie at Knowsley Hall, and Benjamin Leadbeater, a taxidermist and trader in specimens. Lear drew onto lithographic plates for printing by Charles Joseph Hullmandel, who was known for the quality of his reproductions of fine art. Although the book was a financial failure, Lear's paintings of parrots established his reputation as one of the best natural history artists of his time. It found him work with John Gould, Stanley and other leading contemporary naturalists, and the young Queen Victoria engaged him to help her with her painting technique. Parrots was a forerunner to the major volumes of bird paintings by Gould, and Lear's work has influenced children's illustrators such as Beatrix Potter and Maurice Sendak as well as bird specialists like William Y. Cooper, Elizabeth Butterworth and Walton Ford. Lear continued with his nature painting for some years, but from about 1835 he became concerned about his failing eyesight, and increasingly concentrated on his nonsense works and landscape painting, although he may have contributed to the illustrations for Charles Darwin's Voyage of the Beagle.

Background

Early scientific works on birds, such as those of Conrad Gessner, Ulisse Aldrovandi and Pierre Belon, relied for much of their content on the authority of the Ancient Greek philosopher Aristotle and the teachings of the church, and included much extraneous material relating to the species, such as proverbs, references in history and literature, or its use as an emblem.
The arrangement of the species was by alphabetical order in Gessner's Historia animalium, and by arbitrary criteria in most other early works. In the late 16th and early 17th centuries, Francis Bacon had advocated the advancement of knowledge through observation and experiment, and the English Royal Society and its members such as John Ray, John Wilkins and Francis Willughby sought to put the empirical method into practice, including travelling widely to collect specimens and information. The first modern ornithology, intended to describe all the then-known birds worldwide, was produced by Ray and Willughby and published in Latin as Ornithologiae Libri Tres (Three Books of Ornithology) in 1676, and in English, as The Ornithology of Francis Willughby of Middleton, in 1678. Its innovative features were an effective classification system based on anatomical features, including the bird's beak, feet and overall size, and a dichotomous key, which helped readers to identify birds by guiding them to the page describing that group. The authors also placed an asterisk against species of which they had no first-hand knowledge, and were therefore unable to verify. The commercial success of the Ornithology is unknown, but it was historically significant, influencing writers including René Réaumur, Mathurin Jacques Brisson, Georges Cuvier and Carl Linnaeus in compiling their own works. George Edwards was a leading British naturalist and illustrator in the 18th century. He was the librarian to the Royal College of Physicians with access to their collection of 8,000 books, and he used these, together with stuffed and live animals, to produce illustrated publications. His four-volume A Natural History of Uncommon Birds (1743–1751) and its three supplements covered more than 600 natural history topics, and his publications enabled Linnaeus to name 350 bird species, including many type specimens.
During the early 19th century, several ornithologies were written in English, and Edward Lear's main contributions to the development of bird painting were to concentrate on a single bird group, in his case the parrots, paint mainly from live birds rather than stuffed specimens or skins, and use a large page size. Lear was not the first to produce an illustrated parrot monograph. French artist Jacques Barraband created 145 images for François Levaillant's Histoire naturelle des perroquets (1801–1805). Lear's book had an immediate effect, including its impact on John Gould's five-volume Birds of Europe, which was published between 1832 and 1837.

Edward Lear

Edward Lear was born on 12 May 1812 in Holloway, North London, the penultimate (and youngest to survive) of perhaps 21 children of Jeremiah and Ann (née Skerrett) Lear. Jeremiah was a stockbroker who in 1816 defaulted to the London Stock Exchange to the tune of £2150 11s. 1d. in the economic turmoil following the Napoleonic Wars. The family left their home, Bowmans Lodge, and Edward was raised by his eldest sister, also named Ann, 21 years his senior. Partly because Edward suffered from epilepsy, bronchitis and asthma, Ann acted as a mother to him from when he was four until her death when he was almost 50 years old. Ann and another of Edward's sisters, Sarah, were both competent artists and taught their brother to draw and paint. From 1827, aged about 15, Edward was taking paid work, including medical illustrations. His first major commission was to illustrate an account of the scientific discoveries of a Royal Navy expedition to the Pacific. HMS Blossom, commanded by Captain Frederick W. Beechey, had a successful three-year voyage (1825–1828), visiting California, the Pitcairn Islands, Tahiti, and previously largely unknown parts of northwest North America. Lear painted 12 plates of birds and two of mammals for The Zoology of Captain Beechey's Voyage, probably in 1829, when he was aged 17, or in 1830.
Long delays by another contributor, the keeper of zoology at the British Museum, Edward Gray, meant that the book was more than ten years out of date when it was finally published in 1839, several other expeditions having taken place in the interim.

Research

Lear's plan was to produce 175 copies of a large folio book, larger than any European nature painter had previously used. He met and became friends with John James Audubon, who had just published his 1827 double elephant-size The Birds of America, and this book may have inspired him to also choose a large format. The publication was to be sold by subscription as fourteen parts, each priced at ten shillings, a total cost of £7. Its full title as published was Illustrations of the family of Psittacidæ, or parrots: the greater part of them species hitherto unfigured, containing forty-two lithographic plates, drawn from life, and on stone, as printed on the title page of the book. The first subscribers included a friend, Mrs Anne Wentworth, and her sisters and daughter, followed by leading naturalists, including the London Zoo's Nicholas Vigors and the president of the Linnean Society, Edward Smith-Stanley, later 13th Earl of Derby. Subscribers from the aristocracy included the Duke of Norfolk, the Earl of Egremont, and the Duke of Northumberland and his Duchess. The London Zoological Society and the Linnean Society also subscribed as organisations. Lear's early sketchbooks include sketches and drawings of parrots, including a citron-crested cockatoo, a watercolour of two green parrots, and another of a blue macaw's head with two of its feathers, but for his project he needed access to live birds. In June 1830 he was given permission by the Zoological Society of London (ZSL) to sketch at London Zoo, and he also had access to the zoo's museum in nearby Bruton Street. As well as skins and stuffed birds, the museum also had aviaries with some live birds.
Although other artists were not granted similar access, Lear was introduced to the ZSL by the well-connected Mrs Wentworth, who was interested in both art and natural history. He also painted parrots owned by Stanley and Vigors, and saw several species, including Baudin's black cockatoo, in the collection of Benjamin Leadbeater, a taxidermist and trader in specimens. When he could not view live birds, Lear resorted to Gould's stuffed specimens.

Production

Lear's illustrations were produced using lithography, in which artists copied their paintings onto a fine-textured limestone slab using a special waxy crayon. The block was then treated with nitric acid and gum arabic to etch away the parts of the stone not protected by the wax. The etched surface was wetted before adding an oil-based ink, which would be held only by the greasy crayon lines, and copies were printed from the stone. The printed plates were hand-coloured, mainly by young women. Lear drew directly on to the limestone instead of first making a painting and then copying it onto the stone, thus saving him considerable expense. Although this method was technically more difficult, drawing directly onto stone could give a livelier feel to the final illustration, and was favoured by some other contemporary bird artists such as John Gerrard Keulemans. Lear largely taught himself lithographic techniques, using stones hired at the studio of his printer, Charles Joseph Hullmandel. Hullmandel was the author of The Art of Drawing on Stone (1824), and the leading exponent of lithographic printing in Britain. His colourists used egg white to give a sheen to the parrot's plumage and a shine to the bird's eye. Lear designed wrappers for each part, but changed the design when he was granted permission to dedicate his book to Queen Adelaide, consort of King William IV.
Lear struggled with the costs of producing his book, despite erasing his drawings as soon as he had the necessary 175 copies, to reduce the expense of hiring the lithographic blocks. He ran out of funds when he had printed only twelve of the intended fourteen parts, with 42 plates and no text. He sold only 125 subscriptions, and not all his subscribers actually paid what they owed. To help with funds, Lear worked for Gould from 1832 to 1837, illustrating his five-part Birds of Europe and teaching lithography to Gould's wife, Elizabeth. Lear still owed money for his parrots book, and in March 1833 he sold the remaining 50 copies and the rights to the plates to Gould for £50.

Reception

The sheer cost of producing his book meant that the final two parts were never completed and it was a financial failure, although Lear had anticipated this possibility, saying "Their publication was a speculation which — so far as it made me known & procured me employment in Zoological drawing — answered my expectations — but in matters of money occasioned me considerable loss." The first two parts were published on 1 November 1830, and Lear, still only 18, was promptly nominated for membership of the Linnean Society by Vigors and the zoologists Thomas Bell and Edward Bennett. Audubon bought a copy of the final bound book, despite its cost and his own limited funds; William John Swainson asked for duplicates of two plates that he could have framed and hung next to his Audubon paintings; and Prideaux John Selby said the plates were "beautifully coloured & I think infinitely superior to Audubon's in softness and the drawing as good". Parrots established Lear as a leading nature painter, and he was continually in demand thereafter.

Related works

When Lear sold his remaining copies of Parrots to Gould, part of the agreement was that Lear would travel to the zoos of continental Europe with him to collect material for Birds of Europe.
The trip, initially delayed by Elizabeth Gould's premature labour and the Goulds getting influenza, took place in July 1833, and Lear eventually produced 68 plates for the book, acknowledged by Gould. He produced at least ten plates for Gould's A Monograph of the Ramphastidae, or Family of Toucans. Although he signed several plates in the first edition, his signatures had disappeared in the second edition of 1854. Lear painted backgrounds for some of the plates in A Monograph of the Trogonidae, or Family of Trogons (1835–1838), but all 36 plates are signed only as by John and Elizabeth Gould. Lear was fond of Elizabeth Gould, and admired John for his work ethic, but he disliked him as a person. When Gould died in 1881, Lear wrote "One I never liked really... a harsh and violent man... ever as unfeeling for those about him." Lear did not work exclusively for Gould. He had been doing watercolours for Selby's British Ornithology (1821–1834) since he was 16, and from 1825 he painted for Selby's collaboration with William Jardine, Illustrations of Ornithology. He also illustrated Jardine's Illustrations of the Duck Tribe, and created paintings, mainly of pigeons and parrots, for Jardine's The Naturalist's Library. Stanley became Lear's most important patron when he inherited his father's title in 1834. Now Lord Derby, he used the grounds of the ancestral home, Knowsley Hall, to create a private zoo on the estate, and he employed Lear to paint watercolours of many of the creatures in his menagerie. From about 1835, Lear became concerned about his eyesight, claiming "no birds under an ostrich should I soon be able to see to do", and increasingly concentrated on his nonsense works and landscape painting, although he may have contributed to the illustrations for Charles Darwin's Voyage of the Beagle. In 1846, he was invited to give lessons to the young Queen Victoria to improve her landscape painting.
He gave the young queen ten lessons at Osborne House in July, and two more at Buckingham Palace in August. Victoria sent Lear an engraving as a present the next winter; Lear told his sister Ann about the gift, but said she should not tell anyone else lest it look like boasting.

Legacy

An immediate effect of the reputational success of Lear's parrot book was its influence on Gould. Until then he was primarily a taxidermist, often working for the Zoological Society of London, with just one published book, his 1832 A Century of Birds from the Himalaya Mountains, with backgrounds painted by Lear. Following Parrots, Gould decided to produce books based on Lear's model, using Hullmandel as his printer, and over the next twenty years produced some 40 volumes. Lear's macaw Anodorhynchus leari was named by Charles Lucien Bonaparte in 1856. Bonaparte had identified it as a new species from Lear's accurate painting in his book, which had been captioned as a hyacinthine macaw. Two other parrot species named for Lear, the cockatoo Lophochroa leari (now Major Mitchell's cockatoo) and the parakeet Platycercus leari (now crimson rosella), are no longer accepted under those names. Lear was the first to describe five of the species and subspecies depicted; his plates are therefore the holotypes and he is the authority. These are the Australasian species Baudin's black cockatoo (Plate 6 as Calyptorhynchus Baudinii), Antipodes parakeet (Plate 25 as Platycercus unicolor), regent parrot (Plate 29 as Platycercus anthopeplus), varied lorikeet (Plate 36 as Trichoglossus versicolor) and the pale-headed rosella subspecies Platycercus adscitus palliceps (Plate 19 as Platycercus palliceps). Lear's influence on illustrators of children's books, such as Beatrix Potter and Maurice Sendak, was more through his nonsense books than his bird paintings, but other illustrators made more conscious use of his avian works. William Y.
Cooper and Elizabeth Butterworth both painted birds from life, and made deliberate efforts to incorporate elements of Lear's style; Butterworth has illustrated four books on parrots, including Amazon Parrots: A Monograph, written by Rosemary Low. Other contemporary artists used Lear's style with a modern twist. Walton Ford paints parrots, but often in settings that show them in potentially lethal situations involving traps or predators. Like Lear, Ford frequently has marginal notes on the paintings, although in his case for the benefit of his audience, rather than as self-reminders. James Prosek made his reputation through painting fish, but also incorporated nonsense elements in his work by creating imaginary birds in Lear's style, with annotations including alternative names, behavioural notes and the supposed locations of sightings. A large collection of sketches and studies for Parrots is included in the major Lear art collection held at the Harvard University Houghton Library. In 2018, a copy of Parrots was sold at auction by Bonhams for £90,000, and in 2020 another copy was listed by Christie's with a guide price of £40,000–60,000, and fetched £60,000.
\section*{Introduction} One of the key challenges in plant breeding and crop production is to predict performance (seed yield) in unseen and new environments. This active research area is complicated by the time and expense of generating an extensive dataset to represent a wide range of genotypes and environments. Among different crops, soybean has a long history of cultivation in North America, with the first reported production in Georgia in 1766 \cite{hymowitz1983introduction}. Over the years, production in the US and Canada has expanded longitudinally as far west as the Kansas-Colorado border and latitudinally from southern Texas to Canada \cite{websitereference_1,websitereference_2}. North American annual soybean yield trials (known as Uniform Soybean Tests (UST)) have been coordinated in the United States and Canada through the United States Department of Agriculture (USDA) between public breeders in university and government settings since 1941 \cite{websitereference_4, websitereference_5}. These trials are used to evaluate current and experimental varieties in multiple environments within their range of adaptation. Therefore, these trials are valuable sources of historical and current data to improve prediction performance with the assimilation of genetic and environmental variables. Management and permanent environmental effects have been examined primarily at small scales due to the labor required for managing large numbers of plots \cite{zhang2016warming, puteh2013soybean}. With each added layer of characterization of the environment, less of the observed difference need be ascribed to a generic "environmental" component, and individual factors can instead be examined in combination with plant genetics. The nexus of genetic and non-genetic variables forms the cornerstone of plant breeding strategies, irrespective of crop species, for meeting crop production challenges in the future \cite{lenaerts2019improving,Wulff2019breed}.
Climatic resiliency in cultivars is an important objective for plant breeders and farmers seeking high seed yield across a myriad of environments \cite{ICARDA2018resil}. Climatic variability can be associated with changes in temperature and rainfall events (including patterns and magnitude) and other weather variables. In addition to spatial variability, temporal variability of weather variables \cite{websitereference_3} is equally important but generally less understood or not included in yield prediction studies. It is important to understand how agricultural production is affected by the variability of weather parameters in the presence of global climate change, especially with the higher occurrence of extreme weather events. Therefore, prediction of the effects of changing environments on performance can help in making informed plant breeding decisions, marketing decisions, optimizing production and comparing results over multiple years \cite{jagtap2002adaptation}. Traditionally, crop growth models have been proposed to simulate and predict crop production under different scenarios including climate, genotype, soil properties, and management factors~\cite{blanc2017statistical}. These provide a reasonable explanation of biophysical mechanisms and responses but have deficiencies related to input parameter estimation and prediction in complex and unforeseen circumstances~\cite{roberts2017comparing}. Previous attempts at yield prediction across environments have relied on crop models generated by quantifying response in a limited number of lines while altering a single environmental variable, limiting the inference scope~\cite{bishop2014seasonal}. To bypass the limitations of crop growth models, linear models have also been used to predict yield with some success~\cite{jewison2013USDA}.
However, these low-capacity models typically rely on a rather small subset of factors, therefore failing to capture the complexity of biological interactions and more site-specific weather variable complexities. Traditional linear methods such as the Autoregressive Integrated Moving Average (ARIMA) have been used for time series forecasting problems \cite{petricua2016limitation}, but these methods are only effective at predicting future steps within the same time series. For time series prediction tasks, deep neural networks show robustness to noisy inputs and also have the capability to approximate arbitrary non-linear functions~\cite{dorffner1996neural}. Deep learning models can provide solutions in the presence of such complex data comprising different weather variables, maturity groups and zones, and genotype information. Long Short Term Memory (LSTM) networks are very useful for time series modeling as they can capture the long-term temporal dependencies in complex multivariate sequences \cite{malhotra2015long}. LSTMs have shown state-of-the-art results in various applications including off-line handwriting recognition \cite{doetsch2014fast}, natural language processing \cite{sutskever2014sequence} and engineering systems \cite{gangopadhyay2020deep}. LSTMs have also been used effectively for multivariate time series prediction tasks \cite{jiang2018predicting,gangopadhyay2018temporal, shook2018integrating}. Considering the importance of climate extremes for agricultural predictions, random forest has been utilized to predict grid-cell yield anomalies \cite{vogel2019effects}. Previous work \cite{you2017deep} using deep learning for yield prediction has utilized multi-spectral images to predict yield (instead of leveraging only multivariate time series as input) without considering model interpretability. Khaki et al.
\cite{khaki2019crop} applied deep neural networks for yield prediction of maize hybrids using environmental data, but their model is not capable of explicitly capturing the temporal correlations and also lacks explainability. An LSTM-based model has been used for corn yield estimation \cite{jiang2019deep}, but these models lack interpretability; that study is based on geospatial data without field-scale farming management data and lacks temporal resolution in the absence of daily weather data. Attention-based LSTM has been used along with multi-task learning (MTL) output layers \cite{lin2020deepcropnet} for county-level corn yield anomaly prediction based only on meteorological data (maximum daily temperature, minimum daily temperature) without field-scale farming data. Other approaches to predict yield rely on the use of sensors to identify the most informative set of variables to predict yield \cite{parmley2019tpp,parmley2019scirep}, which is very useful in multiple applications; however, there is still a need to integrate weather parameters in a time series approach involving multiple genotypes. With these motivations, we developed a model that can capture the temporal variability of different weather variables across the growing season in an explainable manner to predict soybean yield from the UST dataset of field trials spanning 13 years across 28 states and provinces. \begin{figure*}[tbhp] \begin{center} \setlength{\unitlength}{0.012500in}% \includegraphics[width=12cm, height=8 cm]{fig/map_locations.PNG} \end{center} \caption{Map showing different locations in the USA and Canada included in our dataset. The dataset comprises different maturity groups (MGs), some of which are labeled in the figure. The relative size of a yellow dot (representing a location) indicates the size of the dataset for that particular location.
The dataset includes observations from the National Uniform Soybean Tests for years 2003-2015 and is split into North (MG 0 to 4) and South (MG 4 to 8) regions \cite{UST2018S, UST2018N}, consisting of 103,365 performance records over 13 years and 150 locations. These records are matched to weekly weather data for each location throughout the growing season (30 weeks). This generated a dataset with 35,000 plots having phenotype data for all agronomic traits.} \label{map_details} \end{figure*} We propose a framework based on LSTM and temporal attention to predict crop yield with 30 weeks of weather data per year (over 13 years) provided as input, along with a reduced representation of the pedigree to capture differences in the response of varieties to the environment. We vary the number of input time-steps and compare the performance of our proposed Temporal Attention model with the Stacked LSTM model for two variations of each model. We also compared against the results of random forest (RF), LASSO regression and the data-driven state-of-the-art USDA model. The temporal attention mechanism highlights the significant time periods during the growing season leading to high or low yield predictions, in concurrence with domain knowledge. In this paper, we report improved-fidelity interpretation of the prediction outcomes without sacrificing accuracy for multivariate time-series prediction. Our proposed framework can have widespread applications in plant breeding, crop science research, and agricultural production. \section*{Methods} \subsection*{Preparation of Performance Records} Files from the 2003-2015 USTs were downloaded as PDFs \cite{websitereference_4, websitereference_5}. Using the on-line utility Zamzar (zamzar.com), all 26 PDFs from this period were converted to .xlsx files, with each tab corresponding to a single page in the file. In this way, the vast majority of tables were recovered with no errors or need for human translation.
However, random manual checks were performed to verify accuracy. These tables were manually curated to align all performance records for a given genotype/location combination into a single row. Records that did not have yield data (due to a variety not being planted in a specific location or dying prior to production of seed) were removed from the file. Following data cleaning, the final dataset comprised 103,365 performance records over 13 years representing 5839 unique genotypes, along with all available management information. After compilation, we imported performance records into Python for further data analysis. \subsection*{Acquisition and Sub-Sampling of Weather Records} Daily weather records for all location/year combinations were compiled based on the nearest available weather station (25km grid) on Weather.com. We downsampled the dataset to include maximum, minimum, and average conditions on different time frames throughout the growing season (defined as April 1 through October 31) and this information was appended to performance records. \subsection*{Genotype Clustering} We included genotype-specific criteria to apply the model for specific genotypes and mean location yield across genotypes. Due to the nature of the UST program, most of the genotypes tested in this period do not have molecular marker data available, preventing the use of a G matrix. To circumvent these restrictions, we developed a completely connected pedigree for all lines with available parentage information, resulting in the formation of a 5839 x 5839 correlation matrix. To improve the model performance, genotypes were clustered based on the organization which developed them, providing additional control over relatedness. We clustered genotypes into 5 clusters using the K-means clustering technique based on the correlation matrix to extract information about relatedness.
With a specified number of clusters ($n$), the K-means algorithm finds $n$ groups of equal variance by choosing centroids of the clusters to minimize a criterion known as \emph{inertia} (also called the within-cluster sum of squares). This algorithm is effective for a large number of samples and finds application across different domains. With this hard clustering technique, each genotype belongs to one of the 5 clusters. The clustering is used to represent each line as a function of membership in 5 groups, which is fed into the model to allow differentiation of lines. \subsection*{Model Development} To leverage the temporal sequence of variables, a modeling approach based on recurrent neural networks (RNNs) was developed to capture correlation across time. Gradient descent of an error criterion may be inadequate to train RNNs, especially for tasks involving long-term dependencies \cite{bengio1994learning}. To overcome these challenges, long short-term memory (LSTM) was used, which is an RNN architecture designed to overcome the error backflow problems \cite{hochreiter1997long}. By learning long-range correlations in a sequence, LSTM can accurately model complex multivariate sequences~\cite{malhotra2015long}. \begin{figure*} \begin{center} \setlength{\unitlength}{0.012500in}% \includegraphics[width=16cm, keepaspectratio]{fig/stacked_lstms_without_attention.PNG} \end{center} \caption{The Stacked LSTM Model. The input feature vector is $x^{<t>}$ at time-step $t$. Depending on whether the maturity group and genotype cluster information are incorporated in the model, the vector $x^{<t>}$ can be 9-dimensional or 7-dimensional. We included 7 weather variables in our study. The embedding vector $a^{<T_x>}$ encodes the entire input sequence and summarizes the sequential dependencies from time-step 0 to time-step $T_x$.
We designed two variants of our proposed model based on input information, with the time series encoding part remaining the same for both variants. This model (when including MG and cluster) had 106,511 learnable parameters and the training time per epoch was 60 secs.} \label{stacked_lstms_model} \end{figure*} \begin{figure*} \begin{center} \setlength{\unitlength}{0.012500in}% \includegraphics[width=13cm, keepaspectratio]{fig/stacked_lstms_with_attention.PNG} \end{center} \caption{The Temporal Attention Model. The LSTM encoding part is the same as that of the Stacked LSTM Model, where we get the annotations $a^{<t>}$ for each time-step. Instead of only using $a^{<T_x>}$, this model utilizes all annotations, which act as inputs for the temporal attention mechanism. Based on the computed context vector, the two variants of this model are designed depending on the input information. This model (when including MG and cluster) had 106,562 learnable parameters and the training time per epoch was 60 secs.} \label{temporal_attention_model} \end{figure*} We developed two models based on LSTM: (a) the Stacked LSTM Model (without using any attention) (Fig.~\ref{stacked_lstms_model}), and (b) the Temporal Attention Model (using a temporal attention mechanism) (Fig.~\ref{temporal_attention_model}). The output of both models is yearly seed yield, as this is a many-to-one prediction problem. For each model, we formulated the model variants depending on whether the performance records comprise data on maturity group and genotype cluster. The same modeling approach was used to compute the time-step-wise encoding for both models. Two stacked LSTM layers were used to encode the $T_x$ time-steps of the input sequence as shown in Fig.~\ref{stacked_lstms_model}. Depending on the variant, for both models, we concatenated MG and genotype cluster values with the compressed time-series information.
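The two-layer encoder with MG/cluster concatenation can be sketched in Keras (the library the paper reports using). This is a minimal illustration, not the paper's exact architecture: the layer width of 64 units and the use of a one-hot cluster vector plus a scalar maturity group are assumptions for the sketch.

```python
# Sketch of a stacked-LSTM encoder for many-to-one yield prediction,
# assuming 30 weekly time-steps and 7 weather variables per step.
# Hidden sizes are illustrative, not the paper's tuned hyper-parameters.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

T_X, N_WEATHER = 30, 7           # weekly time-steps, weather variables
N_MG, N_CLUSTERS = 1, 5          # maturity group scalar, cluster one-hot

weather_in = keras.Input(shape=(T_X, N_WEATHER), name="weather")
aux_in = keras.Input(shape=(N_MG + N_CLUSTERS,), name="mg_cluster")

# Two stacked LSTM layers; only the final hidden state a^<T_x> is kept.
h = layers.LSTM(64, return_sequences=True)(weather_in)
encoding = layers.LSTM(64)(h)

# Concatenate MG and genotype-cluster membership after the second LSTM,
# mirroring the "including MG, cluster" model variant.
merged = layers.Concatenate()([encoding, aux_in])
yield_out = layers.Dense(1, name="seed_yield")(merged)

model = keras.Model([weather_in, aux_in], yield_out)
model.compile(optimizer=keras.optimizers.Adam(learning_rate=0.001),
              loss="mse")
```

Training would then call `model.fit` on the scaled weather sequences and auxiliary vectors against observed yields.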
In the Stacked LSTM Model, the last hidden state of the encoding part is assumed to be the compressed representation of the entire input sequence. This fixed-dimensional representation was used for predicting the output value of seed yield (Fig.~\ref{stacked_lstms_model}). For the Temporal Attention Model, the compressed information (context) is computed after aggregating the information from the sequence of hidden states using the attention mechanism. The concept of soft temporal attention \cite{bahdanau2014neural} was first proposed in the context of neural machine translation to overcome the bottleneck of the encoder-decoder model \cite{cho2014learning, sutskever2014sequence} for long sequences. Compressing all information from the input time-steps into a fixed-length single vector was the major bottleneck for the encoder-decoder model. Temporal attention can be applied for many-to-many time series prediction \cite{gangopadhyay2018temporal} and many-to-one prediction \cite{gangopadhyayexplainable,gangopadhyay2019deep}. The proposed approach (Fig.~\ref{temporal_attention_model}) does not incorporate a decoder LSTM as we are performing a many-to-one prediction problem. Taking the annotations of all time-steps as input, the attention block aggregates the information and computes the context vector. A greedy search method was utilized to empirically determine the most influential weather variables for seed yield prediction, considering data from both the northern and southern U.S. regions. In the first step of the greedy search, the Stacked LSTM model was trained for each of the 7 variables and we chose the variable that had the lowest RMSE. With this variable added, in the second step, the model was trained for each of the other 6 variables. In this way, variables were added one at a time. More information is provided in the supplementary materials (Supplementary Tables 5, 6 and 7). All input features were scaled in the range (-1, 1) with the scaler fitted on the training set.
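The soft temporal attention aggregation described above can be sketched in NumPy: per-time-step scores are normalized with a softmax into attention weights, and the context vector is the weighted sum of the annotations. The dot-product scorer used here is an assumption for illustration; the paper's alignment model may differ.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    z = np.exp(x - x.max())
    return z / z.sum()

def temporal_attention(annotations, w):
    """Aggregate LSTM annotations a^<t> into a single context vector.

    annotations: (T_x, d) array of per-time-step hidden states.
    w: (d,) scoring vector -- a simple dot-product scorer, assumed
       here purely for illustration.
    """
    scores = annotations @ w        # one score e_t per week
    alphas = softmax(scores)        # attention weights, sum to 1
    context = alphas @ annotations  # weighted sum of annotations
    return context, alphas

rng = np.random.default_rng(0)
annotations = rng.normal(size=(30, 8))   # 30 weekly annotations, d = 8
context, alphas = temporal_attention(annotations, rng.normal(size=8))
```

Because the weights `alphas` sum to one, they can be read directly as the relative importance of each week of the growing season, which is what Fig.~4-style attention plots visualize.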
We compute the Root Mean Square Error (RMSE) after inverting the applied scaling so that forecasts and actual values are in the original scale. Data were randomly split into training (80\%), validation (10\%) and test (10\%) sets. Models were evaluated by computing RMSE for the test set. Both models were trained for 200 epochs to get the optimal RMSE scores. For training, the Adam optimizer was used \cite{kingma2014adam} (learning rate of 0.001) and the mean squared error loss function was computed. Models were developed using Keras \cite{chollet2015keras} with the TensorFlow backend \cite{abadi2016tensorflow} and were trained using NVIDIA GPUs. \section*{Results} To determine the appropriate temporal sampling of weather information for predicting yield using our proposed frameworks, the test set RMSE was used to select the optimal (lowest RMSE) number of time points. Using a step-wise approach building from monthly, bi-weekly, weekly and finally daily data, similar performance was observed in each scenario (approximate test RMSE = 7.206) except for daily data. The intermediate scenario of weekly data was picked for all subsequent analyses, to facilitate faster training of LSTMs while not downsampling so heavily that long-range temporal dependencies are lost. \begin{figure*}[tbhp] \begin{center} \setlength{\unitlength}{0.012500in}% \includegraphics[width=17cm, keepaspectratio]{fig/prediction_results.PNG} \end{center} \caption{Results for different inputs to the Stacked LSTM model. The vertices of the triangle demonstrate results including only the maturity group, only the genotype cluster and only the weather variables in the input. The edges show the results with a combination of inputs from the respective vertices. The results showed improvement when the genotype cluster was included with weather variables.
The coefficient of determination increased further when the maturity group was included with weather variables. The best results were observed when information from all sources was incorporated (shown at the center of the triangle). The best performance (RMSE = 7.130) is about 14\% of the average seed yield for the test set (50.745) and 44.5\% of the standard deviation (16.019).} \label{prediction_results} \end{figure*} Using weekly weather aggregate data in our model, the prediction models were built starting with a heuristic importance assigned to each variable. For example, precipitation was deemed to be most important, followed by average surface temperature, and so on. However, the largest drop in test RMSE was observed for the maturity group when it was used as a predictor in the model, and adding the MG classification after the 2nd LSTM as well caused a further improvement in model performance. No perceptible change in performance was observed with variation of the number of clusters (5, 10, 15, 20, 25) using the hard clustering technique (K-means clustering). Therefore, subsequent analyses were done using 5 clusters in the proposed models for prediction and variable search. Adding the genotype cluster information at every time-step and also after the 2nd LSTM showed better results. From our greedy search, we observed that average relative humidity had the lowest test RMSE. With the inclusion of average relative humidity in the prediction model, average direct normal irradiance was the next most important variable. Sequentially, the remaining weather variables were: maximum direct normal irradiance, maximum surface temperature, minimum surface temperature, average surface temperature, and average precipitation. A second greedy search initiated with the inclusion of maturity group and pedigree-based clustering revealed minimum surface temperature as the most important weather variable (lowest RMSE).
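The greedy forward-selection procedure used for ranking weather variables can be sketched in plain Python. Here `evaluate_rmse` is a hypothetical stand-in for training the Stacked LSTM on a given variable subset and returning its test RMSE; the toy scorer below exists only so the sketch runs end to end.

```python
def greedy_forward_selection(variables, evaluate_rmse):
    """Greedy (forward) search over weather variables.

    At each step, tentatively add every remaining variable, keep the
    one yielding the lowest RMSE, and repeat until all are ranked.
    evaluate_rmse(subset) is a hypothetical stand-in for training the
    model on that subset and returning its test RMSE.
    """
    selected, remaining, history = [], list(variables), []
    while remaining:
        best = min(remaining, key=lambda v: evaluate_rmse(selected + [v]))
        selected.append(best)
        remaining.remove(best)
        history.append((best, evaluate_rmse(selected)))
    return history

# Toy scorer: pretend RMSE grows with an arbitrary per-variable "cost",
# so the greedy search ranks variables in ascending cost order.
cost = {"avg_rh": 1, "avg_dni": 2, "max_dni": 3, "max_temp": 4,
        "min_temp": 5, "avg_temp": 6, "avg_precip": 7}
order = [v for v, _ in greedy_forward_selection(
    cost, lambda subset: sum(cost[v] for v in subset))]
```

In the paper's setting, each call to the scorer is a full model training run, so the search costs $7 + 6 + \dots + 1 = 28$ trainings per region.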
The greedy search results revealed the following sequence of weather variable importance obtained from a forward selection approach: average direct normal irradiance, average surface temperature, maximum direct normal irradiance, average precipitation, average relative humidity, and maximum surface temperature. Noticeably, the ranking of the variables was different, but the absolute change in RMSE scores was minimal. Overall, a correlation of 0.894 between predicted and observed yields in the testing and validation sets was attained, largely capturing the differences in performance between environments and years. However, the model remains somewhat limited in its ability to generate genotype-specific yield predictions due to the limited complexity of relationships which can be modeled using LSTM, and a lack of genomic information on each genotype. Since a lack of molecular marker data for each line precludes leveraging genomic prediction, its integration with the LSTM model is the next step for the approach presented in our paper. As currently implemented, the model's average absolute error is 5.4 bu/acre, which is reasonable given the levels of variability within a given environment/year combination. For example, in Ames, IA, during 2003, yields ranged from 33.3-55.3 bu/acre. In spite of this large range, an average error of only 4.5 bu/acre was observed for this environment. No perceptible trends were observed when we looked at statewide results combined over years. We also looked at originating breeding state as well as private company entries, and no geographical trends were noticeable. Both proposed models (Stacked LSTM, Temporal Attention) showed similar performance, and results improved when more information was included (Fig.~\ref{prediction_results}). The coefficient of determination was highest (0.802) when information from all the sources (maturity group, genotype cluster, weather variables) was incorporated.
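The RMSE values above are reported in original yield units, which requires inverting the (-1, 1) min-max scaling before computing the error. A minimal NumPy sketch, with toy training yields and toy prediction errors assumed purely for illustration:

```python
import numpy as np

def fit_minmax(x_train, lo=-1.0, hi=1.0):
    """Fit a (-1, 1) min-max scaler on training data only."""
    xmin, xmax = x_train.min(), x_train.max()
    scale = (hi - lo) / (xmax - xmin)
    transform = lambda x: lo + (x - xmin) * scale
    invert = lambda z: (z - lo) / scale + xmin
    return transform, invert

train_yield = np.array([20.0, 40.0, 60.0, 80.0])   # toy training yields
transform, invert = fit_minmax(train_yield)

# Forecasts come back in scaled units; invert both forecasts and
# targets before computing RMSE so the error is in bu/acre.
y_true_scaled = transform(np.array([50.0, 70.0]))
y_pred_scaled = y_true_scaled + np.array([0.1, -0.1])   # toy errors
rmse = np.sqrt(np.mean((invert(y_pred_scaled) - invert(y_true_scaled)) ** 2))
```

Fitting the scaler on the training split alone, as the paper describes, prevents information from the test set leaking into the reported errors.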
The best model performance (test RMSE = 7.130) was $\sim$14\% of the average yield for the test set (50.745) and 44.5\% of the standard deviation (16.019) (Fig.~\ref{prediction_results}). Comparatively, a test RMSE of 12.779 was obtained from least absolute shrinkage and selection operator (LASSO) regression, while the random forest test RMSE was 9.889 with the same input features. Therefore, both the Stacked LSTM and Temporal Attention models outperform the LASSO and RF models. In comparison with the data-driven state-of-the-art USDA model \cite{jewison2013USDA}, our deep learning approach performs significantly better, demonstrating much lower absolute errors. The USDA approach uses linear regression with coefficients based on historical statewide yields and weather averages. However, the USDA model does not predict performance for individual locations. Due to this limitation, we compare results of our model with the USDA model using year-wise averages across states for the test set. In comparison with the USDA model, the absolute errors of our model are lower for all years except 2011. For 2014 and 2015, the absolute errors of the deep learning models were 0.03 and 0.35 (compared to 1.32 and 1.70 for the USDA model), respectively. Detailed comparison results are provided in the supplementary material (Supplementary Table 10). \begin{figure*}[tbhp] \begin{center} \setlength{\unitlength}{0.012500in}% \includegraphics[width=17cm, keepaspectratio]{fig/attention_results.PNG} \end{center} \caption{Results showing the distribution of attention weights for the entire input sequence (spanning the growing season). Considering different ranges of actual yield, the results are demonstrated for two different maturity groups (MG = 1, MG = 7) representing starkly different geo-climatic regions (Fig.~\ref{map_details}).
Early season variables were observed to be comparatively less important for prediction of the highest yielding genotypes.} \label{attention_results} \end{figure*} In addition to accurate yield prediction, the Temporal Attention Model provided insights (Fig.~\ref{attention_results}) about how early-season variables were less important for yield prediction in the highest yielding genotypes for two geographically distinct maturity groups: MG1 (Northern US adaptation) and MG7 (Southern US adaptation). We observed mild sigmoid curves for the highest yielding group in the case of both MG1 and MG7. However, we note that while MG1 had a significantly large number of plots ($\approx 550$) for the highest yielding group, MG7 had only about $30$ such plots. This points to the increasing importance of features in the August--September time phases for both the North and South US regions. These time phases coincide with crop reproductive phases, emphasizing their importance in the final yield, and need functional validation, which is outside the scope of our research. However, this is an example of the hypothesis-generation advantage of these models, motivating future research. \section*{Discussion} We establish the potential for use of a long short-term memory-based method for yield prediction to allow models to account for temporal differences in the occurrence of weather events. Predictions using this system can be made reasonably accurate due to the large amount of training data made available through the mining of historical records. Our approach (using LSTM and attention) is an efficient modeling scheme to analyze soybean crop growth interaction with the weather, to identify hypotheses for plasticity, and to identify key physio-environmental features that are important to include in any predictive model.
For example, differences in the timing of extreme heat events, as well as drought periods, would affect soybean plants in various ways depending on the stage of plant development. For instance, heat stress during flowering is particularly damaging, while heat in vegetative stages of development may not produce a significant reduction in harvested yield~\cite{westgate1993flower}. With a larger, encompassing dataset, breeders and researchers can be empowered to parse out the most informative time periods, weather variables, and crop responses. This information sets up the framework for breeding strategies to develop climate-resilient and responsive varieties. Our results -- via our hypothesis generation approach -- show a potential mismatch with heuristic/empirical results on the importance of weather variables. The finding of minimum surface temperature as the most significant weather variable suggests that nighttime temperatures play a larger role in yield prediction than previously suggested~\cite{gibson1996influence}. Our study has a retrospective design and cannot conclude definitively that this is the case; however, these findings necessitate further empirical investigations and can be used to formulate the next set of hypotheses. Our findings are significant, as minimum temperatures have been reported to be increasing at a faster rate than maximum temperatures~\cite{karl1993asymmetric}. More studies are needed to ascertain the relative importance of these variables and can motivate morpho-physiologically attentive breeding approaches to assemble sturdier varieties for future scenarios. A large-capacity machine learning approach, such as the one presented in this paper using LSTM-RNNs, will be robust enough to incorporate weather changes and adjust performance predictions accordingly.
Additional information that may improve the results of this approach includes any supplemental irrigation provided, soil fertility levels, disease pressure and resistance levels, and direct genetic markers for the tested varieties, all of which would further strengthen predictive ability. Therefore, future implementations may be expanded to include genomic data; additional factors such as preceding crop, row spacing, planting date, and soil texture; or additional temporal data in the form of soil sensor measurements and remote sensing data for morphological and physiological traits. The approach presented in this work will further enhance phenomics-assisted breeding that collects in-season data using different sensors and payloads~\cite{parmley2019scirep,parmley2019tpp,gao2018novel} using machine and deep learning approaches suitable for plant sciences applications~\cite{singh2016machine,singh2018deep,ghosal2018explainable}. Our work shows a unique strategy to assimilate and utilize complex data for seed yield prediction. For comparative purposes, we compared our models with the RF, LASSO, and data-driven USDA models. The USDA model is limited in the type of data it can utilize and in its application. For example, as the USDA model computes predictions at the state level, the finer resolution available with our model may help in making regional marketing decisions, as well as in creating yield predictions that capture intra-state variation due to factors such as differences in rainfall in different areas of the state. Since our results are built on more than a decade of data, they also indicate that early-season weather variables are less useful in seed yield prediction; empirical evidence is needed to confirm the genetic variability in plasticity of soybean genotypes in earlier stages of growth and development.
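For context, the headline error figures reported in the Results can be checked with simple arithmetic. All values below are those reported in the paper; nothing is re-estimated:

```python
# Reported test-set figures (Results section of this paper).
best_rmse = 7.130     # best deep learning model, test RMSE
lasso_rmse = 12.779   # LASSO baseline, test RMSE
rf_rmse = 9.889       # Random Forest baseline, test RMSE
mean_yield = 50.745   # test-set average yield
std_yield = 16.019    # test-set standard deviation of yield

# RMSE expressed relative to the scale and spread of the target variable.
rel_to_mean = best_rmse / mean_yield  # ~0.14: error is ~14% of the average yield
rel_to_std = best_rmse / std_yield    # ~0.445: error is 44.5% of one std deviation

# Ordering reported in the paper: deep learning < RF < LASSO.
assert best_rmse < rf_rmse < lasso_rmse
print(f"{rel_to_mean:.1%} of mean yield, {rel_to_std:.1%} of std dev")
# prints "14.1% of mean yield, 44.5% of std dev"
```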
Importantly, we emphasize that the utilization of the attention module within an LSTM framework allows us to tease out potentially important features for further testing. This alleviates a key disadvantage of DL models -- which otherwise serve as purely black-box predictive models -- by allowing for hypothesis generation that enables scientific insight via targeted follow-up analyses and experiments. The advantages of LSTM-based models have recently been established for maize yield prediction at the county level \cite{jiang2019deep}, but that model lacked interpretability. Attention-based LSTM along with multi-task learning (MTL) output layers has also been used for maize yield prediction using county-level data based on meteorological data (maximum daily temperature, minimum daily temperature, and daily precipitation) \cite{lin2020deepcropnet}. These studies are important steps toward solving the yield prediction challenge; however, the models are based on geospatial data without field-scale farming management data, variety information is indiscernible, and they rely on a limited set of weather variables. In our soybean study, we included seven weather variables and detailed field-scale farming data with multiple maturity groups spanning the continental U.S. and full variety representation. We have shown that an LSTM-based approach can improve seed yield prediction accuracy due to its ability to identify both the temporal effects of weather events and the relative importance of various weather variables for crop yield prediction. Developing an explainable yield prediction model using an attention mechanism is an attractive advance. The basic framework of LSTM for phenotypic prediction can be applied to any crop with weather-dependent variability in order to better understand the genotype $\times$ environment effects found in the course of multi-environment testing.
As such, this approach can be immediately useful for researchers in a variety of crops and environments, and may prove to be exceptionally powerful when used in collaborative efforts between researchers operating in contrasting climatic zones, and in conjunction with sensor data for prescriptive breeding\cite{parmley2019scirep}, including for root traits\cite{falk2020computer}. The insights provided by our model can help in understanding the impact of weather variability on agricultural production in the presence of climate change, and in devising breeding strategies for variety plasticity to circumvent these climatic challenges. The ability to make accurate predictions of crop performance can lead to optimization across many different levels of organization. At the federal level, improved crop insurance recommendations can be made based on weather forecasts before planting, and be continually updated throughout the season as more data are recorded and forecasts are updated. Railroads, grain cooperatives, and end-users can streamline the logistics of handling the desired quantities of grain if they have a better understanding of how much grain (and of what quality) will be produced in a given region. Farmers can make better marketing decisions if they have an accurate, high-confidence prediction of their production for the year, allowing them to sell their crops at the most opportune time. We envision that similar work on other crops and over a longer time span will generate invaluable insights for cultivar development, plant breeding, and production-related research in a challenging climate. \section*{Conclusion} Unraveling causality would be a substantial step forward in understanding the impact of climate change on variety plasticity. Viewed through the lens of causality, DL-based and process-based predictive models have distinct pros and cons.
Process-based models have clear causal relationships (by construction); however, causality is limited to the confines of the model parameters, and it is non-trivial to assimilate additional data to extract broader causal trends. On the other hand, incorporating causality into DL-based models is an open problem in the AI/ML community; despite much activity, no principled approaches yet exist to accomplish this. However, DL-based models (in contrast to process-based models) have the ability to seamlessly assimilate additional data. Our vision is therefore to evaluate whether systematically augmenting DL-based predictive models with increasing amounts of physio-morphologically informative features provides a way towards unraveling causal relationships. We accomplish this by deploying our DL framework as a `hypothesis generation tool'. We build DL models using a large volume and variety of data, incorporating domain-based knowledge. We then systematically probe the impact of various physio-morphological and environmental parameters on yield (via sensitivity analysis and ``what if'' scenario evaluation), and establish a framework to generate hypotheses for different crop species and physio-morphological characteristics under different climatic conditions. Until causality-based DL becomes feasible, hypothesis-generating DL models will have the greatest impact in meeting the needs of climate change scenarios and in incorporating plasticity responses in future varieties. \section*{Acknowledgements} Funding for this project was provided by the Iowa Soybean Association (AKS), Monsanto Chair in Soybean Breeding (AKS), RF Baker Center for Plant Breeding (AKS), Plant Sciences Institute (SS, BG and AKS), USDA (SS, BG, AKS), NSF NRT (graduate fellowship to JS) and ISU's Presidential Interdisciplinary Research Initiative (AKS, BG, SS). The authors thank Vikas Chawla for his assistance with querying weather data for this project. \section*{Author contributions statement} A.K.S., J.S., S.S.
and B.G. conceived the research; all authors contributed to the design of the analysis and interpretation; J.S. compiled the UST performance and pedigree data; T.G. and J.S. performed statistical analysis; T.G. and L.W. built machine learning models, and the results were interpreted by T.G. and J.S. with inputs from S.S., A.K.S., and B.G.; J.S. and T.G. wrote the first draft with inputs from A.K.S. and S.S.; all authors contributed to the development of the manuscript.
The Rifugio Lavaredo is an alpine refuge located at the foot of the Tre Cime di Lavaredo, in the Auronzo Dolomites, at 2,344 m above sea level. History: the refuge was built in 1954 by the alpine guide Francesco Corte Colò "Mazzetta", one of the founders of the Auronzo mountain rescue service. Access: from the Rifugio Auronzo (2,320 m), following trail 101, in about 20 minutes (T); from the Val Marzon, at the end of the village of Auronzo di Cadore, via trail 1104, in about 4 hours 30 minutes (E); from the Rifugio Locatelli (2,450 m), following trail 101 (T). Related entries: Rifugio Auronzo, Rifugio Locatelli.
Complementary Therapies in Equine Health Care: An Integrative Approach By Lindsay Day, REMT Horse owners today have an array of complementary practices and products to turn to in the care and management of their equine partners. Things like acupuncture, chiropractic and massage are now more widely accessible than ever before. But while the availability of, and interest in, complementary modalities continue to grow, at times what remains less clear is how and when to incorporate these therapies to best effect. Like any decision relating to your horse's health care, making informed choices is important. When it comes to complementary therapies, this means getting a clear picture of what they can (and cannot) offer, who has the education and training to safely deliver them, and how they fit in with your horse's regular veterinary care. An integrative model "Complementary modalities work together with conventional medicine," emphasizes Conny Mosley, an anesthesiologist and instructor at the Ontario Veterinary College who also practices acupuncture. "They give us one more thing to use in addition to veterinary medicine, but they are not a replacement for it. If the horse needs an antibiotic, it needs an antibiotic. If the horse has a fracture, it needs conventional medicine to diagnose and treat that." Massage therapists manipulate the soft tissues of the body through the application of varying degrees of pressure and movement. Photos: Ashley Harris Photography The growing use of complementary therapies alongside conventional veterinary medicine represents a shift towards a more integrative approach to equine health care, she says. The goal is to combine the best of both worlds, while considering all aspects of an animal's health in their management.
"You look at the diet, exercise, and you consider their mental state too. You want to look at everything to develop a more complete picture of what's going on for that animal - not just, for example, blood work or other single diagnostic tools we use in veterinary specialties - but all the pieces together." How complementary therapies are used An additional diagnostic tool – Veterinarians trained in acupuncture or chiropractic may sometimes use these modalities as an additional means of assessment when examining a horse. Acupuncture points to which the horse is particularly sensitive or reactive, for example, may help flag an issue or suggest areas for further examination. But this needs to be combined with a thorough veterinary workup, stresses Mosley. "If you were to just rely on acupuncture alone to diagnose, I think you would miss some details, and would be at risk of misdiagnosing the problem if you are inexperienced." It is also important to note that non-veterinary therapists are not at liberty to diagnose, says Anna Drygalski, a Registered Equine Massage Therapist based in White Rock, British Columbia. That's not to say the insights garnered by these practitioners are not of value, but in any instance where a horse is suffering from lameness or ill-health, an accurate veterinary diagnosis is key to ensuring the most appropriate treatment. Part of the treatment plan – Complementary therapies like massage, chiropractic, and acupuncture are often well suited to address compensatory issues that arise with lameness problems. "Oftentimes we are dealing with layers of a problem," says Katie Crossan, DVM, who practices chiropractic in addition to regular veterinary medicine out of Kirkton Equine Clinic in London, Ontario. She offers the example of a horse that tends to get stuck in its sacroiliac (SI) joint (where the pelvis connects to the spine) on account of a hock problem.
"They'll end up with muscle spasm around the SI joint because they've been hiking their hind end to avoid flexing their hocks." Horse owners employing complementary health care should inform their veterinarian. Practitioners of different modalities can then work together to provide the horse with the best care possible. Photo: monkeybusiness/Cansto With reduced movement of the pelvis, the lumbar vertebrae compensate by moving more than they are designed to, which in turn leads to more muscle spasm along the lumbar spine, she explained. "We might need to inject the hocks in the treatment of that horse, but what we find is that if you don't address all the other issues, it takes much longer to get that horse back to the level it was competing at before. If you leave it to work through all the secondary issues on its own you won't be nearly as satisfied with the outcome." For Crossan, that may also include working with a massage therapist or an acupuncturist. "If you don't do something to address that muscle spasm, then you have a harder time getting your adjustment to hold, because you really can't separate out muscle function and joint function." Preventative care – Complementary therapies can also play a role in promoting optimal health. "When a horse's muscles are working correctly, it takes the strain off of ligaments and tendons and joints," says Drygalski. "A lot of times people will use massage not because there is something wrong with their horse but maybe they have a little bit of trouble with a flying change in one direction or they're a bit stiffer one way." Therapies like massage and chiropractic can also help detect subtle changes or areas of tension and restriction early on, before they lead to bigger problems, added Crossan. 
Working together to achieve more Much has changed in the past decade in terms of the acceptance of complementary health care for horses, with practitioners of different modalities often working together with a horse's vet and owner to provide the best care possible. "At the end of the day, the more eyes you have on your horse, and the more insights you have, the better you can tailor the management of that horse - so long as everyone is communicating, and on the same page in terms of the horse's needs," says Crossan. Through the insertion of small needles, acupuncture stimulates specific points on the body. Photo: JC Ced-Con/Flickr Indeed, as more research begins to support the use of the more widely practiced complementary therapies, a growing number of vets feel comfortable in recommending them. "I'll get calls from people now because their vet said their horse could use a massage," says Drygalski. "That didn't happen 10 years ago." In her practice, Drygalski says working as part of a team is important. "That includes vets, chiropractors, saddle-fitters, trainers, and the owners." Crossan agrees. "The more you work with different people, the more you know what they are good at and you develop a cohesive team approach. I have a massage therapist and an acupuncturist I work with quite a bit, and we agree that when we both work on an animal together we get better results than if we each did it separately." When it comes to finding therapists to work with, Crossan recommends that horse owners start by asking their regular veterinarian. "They will probably have a pretty good idea of who's practicing in their area and should be able to give you a couple of names." Word of mouth and the Internet can be helpful as well, but it's important to check a therapist's credentials too, she advised. "I think that's important, because if they're serious enough about what they are doing, they'll have credentials." 
When you decide to work with a therapist, it's worthwhile keeping your vet in the loop, she added. "Most vets don't mind if you call. Even if you don't want them to come out, they can let you know if something is a reasonable option for your horse, or if you might be at risk of missing something else." Ultimately, keeping the lines of communication open and making sure everyone is on the same page translates to more coordinated care and better results. "In terms of getting the most bang for your buck," says Crossan, "you will have a better experience if you have a team that is open to one another and able to communicate effectively." Chiropractic, or spinal manipulation, is a form of manual therapy that uses controlled forces applied to joints of the spine and limbs. According to the American Association of Equine Practitioners (AAEP), it should be considered a medical act and thus only performed by a licensed veterinarian or human doctor of chiropractic under the referral of a veterinarian. When a joint is immobile for any length of time, the health of the joint is compromised. The chiropractor seeks to restore the normal range of motion. Photo: Brooke Sellers/Flickr The reasoning behind chiropractic is that when a joint is immobile for any length of time, the health of that joint is compromised. "When a joint is held immobile there is a decrease in the production of joint fluid and an increase in the inflammatory mediators," says Crossan. Because horses are so good at compensating, reduced motion in one joint can impact the function of others. "Say for example a horse develops stiffness between C3 and C4 (the third and fourth cervical vertebrae in the neck); they'll just not move it, and they'll move the joint above and below a little bit extra to make up for it. The stiff joint can sit for quite some time not moving." Acupuncture involves the stimulation of specific points on the body, typically through the insertion of small needles. 
Like chiropractic, acupuncture is generally considered a medical act, and recognized training courses in equine acupuncture in North America are only open to licensed veterinarians (or senior-year veterinary students in some programs). Traditional Chinese Medicine focusses on stimulation of points along the body's energy pathways or meridians, which are indicated by the body's underlying disease. Western medicine focuses on the relationship between acupuncture points and the anatomic location of nerve endings to trigger the release of chemicals and hormones throughout the body. "Every joint in your body has a set range of motion, and in chiropractic we go through and test that range of motion," explains Crossan. "When we find a joint that's not moving freely we adjust that joint to restore motion." There is a lot of misconception about the therapy, she adds. "When we use the term subluxation, it's not a matter of a joint being out of place. If you hold your wrist flexed, for example, it's going to look like you have a bump. But it's not that you have a joint out of place, it's just that it's not moving through its normal range of motion. When you put normal motion back into it, then it sits back where it should be and you don't see the bump." Two main approaches and theories are related to the use of acupuncture. Traditional Chinese Medicine (TCM) maintains that there are energy pathways through the body represented by meridians. Stimulation of points along these pathways is indicated on the basis of the underlying pattern of disease or imbalance, which is assessed according to TCM principles. Practitioners of this approach may also recommend the use of traditional Chinese herbs to help address identified imbalances. The Western medical approach focuses instead on the relationship between acupuncture points and the anatomic location of nerve endings and other features of the nervous system. 
The insertion of acupuncture needles (which can also be stimulated with an electric current, as in electroacupuncture) induces the release of chemicals involved in nerve communication, which in turn can trigger the release of other chemicals and hormones through the body. While studies have revealed the pain-relieving effect that acupuncture can have, the mechanisms of action and the role it can play in treating other disease processes are still not fully understood, though research is ongoing. Massage therapists use their hands to manipulate the soft tissues of the body (muscle, connective tissue, tendons, ligaments and skin) through the application of varying degrees of pressure and movement. Training and education standards in equine massage vary greatly in terms of program length (from two days to two years) as well as substance. Registered equine massage therapists who use the title REMT complete a two-year course of study (similar to the education requirements for RMTs who work on people) as well as a veterinary-supervised accreditation exam. Other certifications in equine massage are offered by a number of different schools, each with their own qualification requirements. As an unregulated profession, it is ultimately up to the horse owner to ask about the training, experience and credentials of a therapist they are considering. Lindsay Day explains her methodology for massage as a process of "listening to the horse's response and feeling, and adapting my pressure, approach, or technique based on that." Photos: Ashley Harris Photography "We use massage to address muscle tension, or other issues where the soft tissues are affected, such as adhesions or scar tissue," says Drygalski. Muscles of the body create movement and help stabilize the joints, so when you get excess tension, or restrictions in the connective tissues, that function is compromised. Sore and tight muscles are not healthy muscles.
With chronic tension, a muscle's blood supply is restricted, limiting its access to oxygen, nutrients, and fresh tissue fluids. The related nerve endings become irritated, causing pain and loss of fine-tuned coordination. Massage can help address this pain-tension cycle specifically, but may also be used to promote overall mental and physical relaxation. Massage therapy may include many techniques such as Swedish massage, sports massage, myofascial release, trigger point therapy, acupressure, and other bodywork approaches. The type of massage given typically depends on the horse's needs or condition as well as the therapist's training. Some therapists may also incorporate or recommend basic hydrotherapy and stretches, or other exercises for a horse. Lindsay Day is a Registered Equine Massage Therapist and award-winning writer based in Guelph, Ontario. www.EQmassage.ca This article originally appeared in the September 2014 issue of Canadian Horse Journal. Main Photo: The chiropractor uses controlled forces applied to the joints of the spine and limbs. Photo: iStock/DH
# 10.4: Area of Composite Shapes

What if you wanted to find the area of a shape that was made up of other shapes? How could you use your knowledge of the area of rectangles, parallelograms, and triangles to help you? After completing this Concept, you'll be able to answer questions like these.

### Guidance

Perimeter is the distance around a shape. The perimeter of any figure must have a unit of measurement attached to it. If no specific units are given (feet, inches, centimeters, etc.), write "units." Area is the amount of space inside a figure. If two figures are congruent, they have the same area. This is the congruent areas postulate. This postulate needs no proof because congruent figures have the same amount of space inside them. Keep in mind that two figures with the same area are not necessarily congruent.

A composite shape is a shape made up of other shapes. To find the area of such a shape, simply find the area of each part and add them up. The area addition postulate states that if a figure is composed of two or more parts that do not overlap each other, then the area of the figure is the sum of the areas of the parts.

#### Example A

Find the area of the figure below. You may assume all sides are perpendicular.

Split the shape into two rectangles and find the area of each.

$A_{top\ rectangle} = 6 \cdot 2 = 12\ ft^2$ and $A_{bottom\ square} = 3 \cdot 3 = 9\ ft^2$.

The total area is $12 + 9 = 21\ ft^2$.

#### Example B

- Divide the shape into two triangles and one rectangle.
- Find the area of the two triangles and rectangle.
- Find the area of the entire shape.

Solution:

- One triangle on the top and one on the right; the rectangle is the rest.
- Area of the triangle on top is $\frac{8(5)}{2} = 20\ units^2$. Area of the triangle on the right is $\frac{5(5)}{2} = 12.5\ units^2$. Area of the rectangle is $375\ units^2$.
- Total area is $407.5\ units^2$.

#### Example C

Find the area of the figure below.

Divide the figure into a triangle and a rectangle with a small triangle cut out of the lower right-hand corner.

$A = A_{top\ triangle} + A_{rectangle} - A_{small\ triangle} = \left(\frac{1}{2} \cdot 6 \cdot 9\right) + (9 \cdot 15) - \left(\frac{1}{2} \cdot 3 \cdot 6\right) = 27 + 135 - 9 = 153\ units^2$

### Vocabulary

Perimeter is the distance around a shape; it must have a unit of measurement attached to it. Area is the amount of space inside a figure and is measured in square units. A composite shape is a shape made up of other shapes.

### Guided Practice

1. Find the area of the rectangles and triangle.
2. Find the area of the whole shape.

Solutions:

1. Rectangle #1: $\text{Area} = 24(9+12) = 504\ units^2$. Rectangle #2: $\text{Area} = 15(9+12) = 315\ units^2$. Triangle: $\text{Area} = \frac{15(9)}{2} = 67.5\ units^2$.
2. You need to subtract the area of the triangle in the bottom right corner: $\text{Total Area} = 504 + 315 + 67.5 - \frac{15(12)}{2} = 796.5\ units^2$.

### Practice

Use the picture below for questions 1-2. Both figures are squares.

1. Find the area of the unshaded region.
2. Find the area of the shaded region.

Find the area of the figure below. You may assume all sides are perpendicular. Then find the areas of the composite figures.

Use the figure to answer the questions.

1. What is the area of the square?
2. What is the area of the triangle on the left?
3. What is the area of the composite figure?

Find the area of the following figures.

1. Find the area of the unshaded region.
2. Lin bought a tract of land for a new apartment complex. The drawing below shows the measurements of the sides of the tract. Approximately how many acres of land did Lin buy? You may assume any angles that look like right angles are $90^\circ$. (1 acre $\approx$ 40,000 square feet)
3. Linus has 100 ft of fencing to use in order to enclose a 1200 square foot rectangular pig pen. The pig pen is adjacent to the barn so he only needs to form three sides of the rectangular area as shown below. What dimensions should the pen be?
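The area-addition computations in the worked examples above can be verified numerically. A short check, using the dimensions given in the lesson:

```python
def rect(w, h):
    """Area of a rectangle."""
    return w * h

def tri(base, height):
    """Area of a triangle."""
    return 0.5 * base * height

# Example A: L-shape = 6x2 rectangle + 3x3 square.
example_a = rect(6, 2) + rect(3, 3)                 # 12 + 9 = 21 ft^2

# Example C: top triangle + rectangle, minus the small cut-out triangle.
example_c = tri(6, 9) + rect(9, 15) - tri(3, 6)     # 27 + 135 - 9 = 153 units^2

# Guided Practice: two rectangles + triangle, minus the corner triangle.
guided = rect(24, 9 + 12) + rect(15, 9 + 12) + tri(15, 9) - tri(15, 12)  # 796.5 units^2

print(example_a, example_c, guided)
```

The same area-addition postulate drives every case: sum the non-overlapping parts, and subtract any region that was counted but is not actually part of the figure.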
Meek Mill also posted a similar sentiment. He put up the same video as Game with the caption, "Executed this man about selling bootleg DVDs! Smh it's only gone get worst! They don't care about us!" Chuck D, Tyga, Russell Simmons and others also reacted to the death of Alton Sterling. His passing follows a lineage of people like Michael Brown, Walter Scott, Tamir Rice and Samuel Dubose who were all gunned down by authorities under suspicious circumstances. Check out some of hip-hop's reaction to the Alton Sterling shooting below: All we gone do is repost it & type "SMH" over & over again huh ?? Then we gone put #JusticeForAltonSterling & #BlackLivesMatter hashtags on Instagram & pretend that's us taking action huh ??? What happened to the generation of people who stood together, held hands & took to the streets peacefully or violently if it had to come to that & wouldn't move or be detoured from that stance until real results were handed over to us ?!?!? Huh ??? WTF is wrong with us these days that we will sit here & watch a video of yet another innocent man shot & killed in cold blood by police officers in this country ?!?!? WTF is wrong with ME, YOU & everyone else who will watch the video & read this caption & DO NOTHING until it happens again... Then what we gone do ?? The same shit we been doing... Hashtag #JUSTICEFOR[insert name here__________] & #BLACKLIVESMATTER when it really seems that they don't at all... & if #BLACKLIVES don't even matter to #BLACKPEOPLE why the fuck would they matter to anyone else ?!?!?!? What on earth is wrong with US ?!?!?!? But lemme stop so we can get back to turning up, dab'n & hittin "them folks" while this man lay dead in a county morgue for selling some fucking CD's & asking what he was being detained for ?!!?!?!?!? NAW, fuck this....... WE AINT HAVIN THIS SHIT NO MORE !!!!!!!!!!! #THETIMETOFIGHTBACKISNOW !!!!! 
#MartinLutherKing got shot on a fuckin balcony because he wasn't scared to stand up for our parents, grandparents & US.... The reason our black asses can even do 1/2 the shit we do is because that man sacrificed his life along with many others so we could..... & we ain't even got enough heart to stand together as a RACE & pay them back !!!! #IMNOTSCARED #FAKECARINGASSGENERATION #WHOELSEOUTTHEREGONEFAKECARE ??? #WEAINTGONEDOSHITBUTREPOSTANDHASHTAG #SCARYASSRACE #AMERICAREMOVED A video posted by The Game (@losangelesconfidential) on Jul 5, 2016 at 11:10pm PDT It saddens me that 5 kids are growing up without their father. 2 white Police Officers killed Alton Sterling while he was standing outside a store selling CD's. These officers got a call that a man was outside the store with a gun but for some reason the officers were wearing Body Cam's that weren't connected & recording. Wrestled him down to the ground with no warrant to do so & shot him in the chest & back!! When is America & the judicial system that's suppose to protect us, gonna make these officers who are killing black men take responsibility for what's going on. This is literally getting away with MURDER & it continues to happen #RipAltonSterling A photo posted by Fabolous (@myfabolouslife) on Jul 6, 2016 at 3:51am PDT Executed this man about selling bootleg DVDs! Smh it's only gone get worst! They don't care about us! A video posted by Meek Mill (@meekmill) on Jul 5, 2016 at 11:21pm PDT Selling..............cds? W.......T.....F????? This cop was deranged and KNEW he was gonna DO this one day-it was a matter of when AND who — Chuck D (@MrChuckD) July 6, 2016 IF anything THIS proves why the term #BlackLivesMatter is the CRYOUT to the rest of the Planet Earth from this United States Of America. Written by Paul Meara (Photos from left: Bennett Raglin/Getty Images for D'USSE, Prince Williams/WireImage)
# Library

Music » Rise Against »

## The Strength to Go On

80 scrobbles

Tracks (80)
Track Album Duration Date
The Strength to Go On 3:27 8 Jun 2011, 11:22
The Strength to Go On 3:27 19 May 2011, 11:40
The Strength to Go On 3:27 6 May 2011, 11:40
The Strength to Go On 3:27 3 May 2011, 11:03
The Strength to Go On 3:27 1 May 2011, 6:45
The Strength to Go On 3:27 20 Apr 2011, 7:40
The Strength to Go On 3:27 19 Apr 2011, 18:35
The Strength to Go On 3:27 17 Apr 2011, 23:10
The Strength to Go On 3:27 16 Apr 2011, 14:09
The Strength to Go On 3:27 15 Apr 2011, 16:27
The Strength to Go On 3:27 10 Apr 2011, 7:44
The Strength to Go On 3:27 10 Apr 2011, 7:37
The Strength to Go On 3:27 28 Nov 2010, 18:40
The Strength to Go On 3:27 28 Nov 2010, 18:13
The Strength to Go On 3:27 5 Oct 2010, 13:14
The Strength to Go On 3:27 2 Oct 2010, 15:08
The Strength to Go On 3:27 29 Sep 2010, 18:08
The Strength to Go On 3:27 28 Sep 2010, 18:53
The Strength to Go On 3:27 22 Sep 2010, 14:18
The Strength to Go On 3:27 4 Jul 2010, 11:47
The Strength to Go On 3:27 15 May 2010, 8:33
The Strength to Go On 3:27 15 May 2010, 8:23
The Strength to Go On 3:27 14 May 2010, 21:57
The Strength to Go On 3:27 14 May 2010, 21:42
The Strength to Go On 3:27 13 May 2010, 14:05
The Strength to Go On 3:27 13 May 2010, 11:46
The Strength to Go On 3:27 13 May 2010, 11:35
The Strength to Go On 3:27 13 May 2010, 11:20
The Strength to Go On 3:27 13 May 2010, 10:40
The Strength to Go On 3:27 8 May 2010, 8:59
The Strength to Go On 3:27 4 May 2010, 13:46
The Strength to Go On 3:27 1 May 2010, 20:01
The Strength to Go On 3:27 15 Apr 2010, 21:29
The Strength to Go On 3:27 11 Mar 2010, 16:28
The Strength to Go On 3:27 9 Mar 2010, 20:45
The Strength to Go On 3:27 8 Mar 2010, 20:29
The Strength to Go On 3:27 8 Mar 2010, 18:36
The Strength to Go On 3:27 7 Mar 2010, 17:27
The Strength to Go On 3:27 7 Mar 2010, 10:58
The Strength to Go On 3:27 2 Feb 2010, 19:09
The Strength to Go On 3:27 2 Feb 2010, 15:50
The Strength to Go On 3:27 31 Jan 2010, 8:46
The Strength to Go On 3:27 12 Jan 2010, 17:14
The Strength to Go On 3:27 5 Jan 2010, 16:54
The Strength to Go On 3:27 16 Dec 2009, 22:05
The Strength to Go On 3:27 22 Nov 2009, 13:51
The Strength to Go On 3:27 22 Nov 2009, 11:29
The Strength to Go On 3:27 21 Nov 2009, 12:49
The Strength to Go On 3:27 20 Nov 2009, 18:34
The Strength to Go On 3:27 20 Nov 2009, 18:11
The Strength to Go On 3:27 18 Nov 2009, 19:23
The Strength to Go On 3:27 18 Nov 2009, 19:08
The Strength to Go On 3:27 12 Nov 2009, 17:50
The Strength to Go On 3:27 12 Nov 2009, 15:27
The Strength to Go On 3:27 10 Nov 2009, 17:34
The Strength to Go On 3:27 7 Nov 2009, 14:59
The Strength to Go On 3:27 3 Nov 2009, 12:28
The Strength to Go On 3:27 29 Oct 2009, 18:17
The Strength to Go On 3:27 26 Oct 2009, 19:23
The Strength to Go On 3:27 26 Oct 2009, 14:20
The Strength to Go On 3:27 25 Oct 2009, 7:42
The Strength to Go On 3:27 21 Oct 2009, 19:05
The Strength to Go On 3:27 21 Oct 2009, 16:51
The Strength to Go On 3:27 19 Oct 2009, 18:01
The Strength to Go On 3:27 15 Oct 2009, 16:24
The Strength to Go On 3:27 14 Oct 2009, 12:48
The Strength to Go On 3:27 7 Oct 2009, 17:26
The Strength to Go On 3:27 6 Oct 2009, 10:34
The Strength to Go On 3:27 2 Oct 2009, 20:51
The Strength to Go On 3:27 2 Oct 2009, 11:19
The Strength to Go On 3:27 1 Oct 2009, 14:23
The Strength to Go On 3:27 1 Oct 2009, 13:36
The Strength to Go On 3:27 1 Oct 2009, 13:29
The Strength to Go On 3:27 30 Sep 2009, 13:18
The Strength to Go On 3:27 29 Sep 2009, 18:53
The Strength to Go On 3:27 29 Sep 2009, 18:35
The Strength to Go On 3:27 29 Sep 2009, 18:14
The Strength to Go On 3:27 28 Sep 2009, 18:28
The Strength to Go On 3:27 28 Sep 2009, 18:15
The Strength to Go On 3:27 28 Sep 2009, 18:12
\section{1 Introduction} \label{intro} The advance in technology in the last decades has allowed the creation of increasingly smaller devices, reaching the point where the realisation of logic structures on the atomic level is possible \cite{Smith95,Terabe05,Schimmel04,Fuechsle}. Because of their low dimensionality and temperature, the dynamics of such systems can be dominated by quantum effects, opening a large playground for experimental tests of many-body correlation effects on particle (charge or mass) transport. These ideas have also boosted the investigation of transport of ultracold atoms in systems with reduced dimensionality. Transport of fermionic and bosonic ultracold atoms in quantum wires and in one-dimensional optical lattices is studied theoretically in \cite{Ax,Schlagheck10,Schlagheck13,Chien12,Bruderer,Chien13}. In \cite{A31} a possible realisation of an atom analogue of an electronic quantum point contact, based on a microfabricated magnetic wave\-guide, is presented. In the experiment of \cite{Esslinger12}, a macroscopic atomic cloud was divided by a laser beam into two reservoirs separated by a narrow channel, thus creating a cold-atom analog of a mesoscopic conductor. Recent advances in the manipulation of cold atoms loaded in optical lattices are presented in \cite{Schlosser12}. Decreasing the dimensionality of the tunneling contact to zero opens a new field: atomtronics. The creation of bosonic analogues of the mesoscopic systems used in electronic devices, such as a diode or a field-effect transistor, is suggested in \cite{Anderson12} and also theoretically investigated in \cite{Pepino10,Gajdacz12}. In this work we focus on bosonic transport through a chain of quantum dots coupled to two bosonic reservoirs that keep the system far from equilibrium. Given the by now very well understood behaviour of electronic (fermionic) systems, the first obvious question concerns the differences between bosonic and fermionic transport.
It is known that the fermionic Anderson impurity model -- a quantum dot with a few energy levels coupled to two electrodes (electron baths) -- is the simplest possible model for a field-effect transistor (FET). Since ultracold-gas-based systems offer a much higher degree of `designability' and coherence control, it is also natural to investigate the possibility of a bosonic FET. With these goals in mind, we offer a formal framework for the investigation of such systems on the one hand, and on the other hand propose a number of efficient and physically meaningful approximation techniques that are able to treat even interacting systems. In Section 2 we derive a set of stochastic differential equations for the time evolution of the reduced system by writing down the Keldysh partition function of the system and integrating out the reservoir degrees of freedom. In order to derive the set of equations one performs essentially the same steps as in \cite{polk03a}, where a closed system is considered, the difference being only in the addition of two bosonic reservoirs. In Section 3 we restrict the system to the special case of two bosonic Markovian reservoirs, which is analytically solvable in the noninteracting case. In Section 3.1 we focus on the steady state properties of the system. New effects, which appear after the addition of an interparticle interaction term to the system Hamiltonian, are explained using the spectral properties of the chain of quantum dots. A possible solution in the strongly interacting limit is also suggested. In Section 3.2 the transient behaviour of an initially empty chain of quantum dots, which is instantaneously coupled to two Markovian reservoirs, is calculated. We find a simple scaling law between the time needed to reach a steady state and the strength of the interparticle interaction. Section 4 concludes the paper, offers a possible experimental realisation of our setup, and outlines avenues for further research.
\section{2 General derivation of a stochastic differential equation} \label{sec:1} \subsection{2.1 Single quantum dot coupled to a bosonic reservoir} To start with we consider a system consisting of a single quantum dot at energy $\Delta$ coupled to a bosonic reservoir with spectral density $\mathcal{D}(\omega)$ and occupation of the modes $n(\omega)$. At the initial time $t_i$ the density matrix of the system is assumed to be a direct product of the density matrices of the reservoir $\hat{\rho}$ and the quantum dot $\hat{\sigma} $. The reservoir is modeled as a set of noninteracting harmonic oscillator levels. Their eigenfrequencies $\varepsilon_k$ should form a continuum, which ensures that the time evolution is irreversible and a steady state is reached. The Hamiltonian of the system is given by \begin{equation} \begin{array}{rcl} \hat{H} & = & \Delta \hat{a}^{\dagger}_{}\hat{a} - \sum \limits^{}_{k} \gamma^{}_{k}\big( \hat{a}^{\dagger}_{}\hat{L}^{}_{k} + \hat{L}^{\dagger}_{k}\hat{a} \big) + \sum \limits^{ }_{k} \varepsilon^{}_{k}\hat{L}^{\dagger}_{k}\hat{L}^{}_{k}, \end{array} \end{equation} where $\hat{a}^{\dagger}_{},\hat{L}^{\dagger}_k $ create a particle in the quantum dot or in the reservoir mode $ k $. One can write down the Keldysh partition function \cite{Kamenev07}, which in the continuous time notation is given by \begin{equation} \begin{array}{rcl} \mathcal{Z} & = & \int \prod \limits^{ }_{k } D[\textbf{L}^*,\textbf{L}] \langle L_{k,-}(t_i) | \hat{\rho}_{k} | L_{k,+}(t_i) \rangle \\ & & \times \int D[\textbf{a}^*, \textbf{a}] \langle a_{-}(t_i) | \hat{\sigma} | a_{+}(t_i) \rangle \\[1.0mm] & & \times e^{-L^{*}_{k,-}(t_i) \cdot L^{}_{k,-}(t_i) } e^{-a^{*}_{-}(t_i) \cdot a^{}_{-}(t_i) } e^{i\mathcal{S}} ,\\ \textbf{L}_{k}(t) & = & ( L^{}_{k,-}(t) , L^{}_{k,+}(t) )^T_{}, \\ \textbf{a}(t) & = & ( a^{}_{-}(t) , a^{}_{+}(t) )^T_{} . 
\end{array} \end{equation} The $-/+$ subscript denotes the position of the field on the forward/backward branch of the Keldysh contour and the ket-vectors $ | a \rangle, | L_{k} \rangle $ are eigenvectors of the annihilation operators $\hat{a}$ and $\hat{L}_{k}$. The initial time on both branches of the Keldysh contour is denoted by $t_i$ and its turning point by $t_f$. The corresponding action is given by \begin{equation} \begin{array}{rcl} \mathcal{S} & = & \int^{t_f}_{t_i} d\tau \big \lbrace \textbf{a}^{\dagger}(\tau) g^{-1}(\tau) \textbf{a}(\tau) + \sum \limits^{ }_{k } \textbf{L}^{\dagger}_{k}(\tau) g^{-1}_{k}(\tau) \textbf{L}^{}_{k}(\tau) \\ & & + \sum \limits^{ }_{k } \gamma^{}_{k} \big( \textbf{L}^{\dagger}_{k}(\tau) \sigma^{}_z \textbf{a}(\tau) + \textbf{a}^{\dagger}_{}(\tau) \sigma^{}_z \textbf{L}^{}_{k}(\tau) \big) \big\rbrace . \end{array} \end{equation} where $g^{-1}(\tau) = (i\partial^{}_{\tau} - \Delta )\sigma^{}_z $, $g^{-1}_k (\tau) = (i \partial^{}_{\tau} - \varepsilon^{}_k)\sigma^{}_z $ and $\sigma^{}_z$ is the Pauli $z$-matrix. If one uses the discrete time notation, one can include $ \langle L_{k,-}(t_i) | \hat{\rho}_k | L_{k,+}(t_i) \rangle$ $ e^{-L^{*}_{k,-}(t_i) \cdot L_{k,-}(t_i) } $ into the time discrete form of the matrix $g^{-1}_k(\tau)$ \cite{Kamenev07} and integrate out the reservoir degrees of freedom, thus giving the final result \begin{equation} \begin{array}{rcl} \mathcal{Z} & \! = \! & \int D[\textbf{a}^*,\textbf{a}] e^{-a^*_{-}(t_i)a^{}_{-}(t_i) } \langle a_{-}(t_i) | \hat{\sigma} | a_{+}(t_i) \rangle e^{i \mathcal{S}'} , \\[2.0mm] \mathcal{S}' & \! = \! & \int^{t_f}_{t_i}d\tau_1 d\tau_2 \textbf{a}^{\dagger}(\tau_1) G^{-1}_{}( \tau_1, \tau_2 ) \textbf{a}(\tau_2) ,\\ G^{-1}_{}( \tau_1, \tau_2 ) & \! = \! & \delta(\tau_1 - \tau_2)g^{-1}(\tau_1) - \sum \limits^{ }_{k } \gamma^2_{k}\sigma^{}_z g^{}_k(\tau_1 - \tau_2)\sigma^{}_z . 
\end{array} \end{equation} The expectation value of a normally ordered observable $\hat{\mathcal{O}} \equiv \mathcal{O}(\hat{a}^{\dagger},\hat{a})$ at the turning point $t_f$ of the contour is given by \begin{equation} \label{eq:Exp_val_Obs} \begin{array}{rcl} \langle \hat{\mathcal{O}}(t_f) \rangle & = & \int D[\textbf{a}^*, \textbf{a}] \big\lbrace \langle a_{-}(t_i) | \hat{\sigma} | a_{+}(t_i) \rangle e^{-a^{*}_{-}(t_i) \cdot a^{}_{-}(t_i) } \\[1.5mm] & & \times \mathcal{O}(a^*_{+}(t_f),a^{}_{-}(t_f)) e^{i\mathcal{S}'} \big\rbrace. \end{array} \end{equation} In the same way as in \cite{polk03a}, where the case of a closed system is considered, one can apply the Wigner transformation ($a_{\mp}(\tau) = \psi(\tau) \pm \frac{1}{2} \eta(\tau)$) and integrate out the $\eta(t_i), \eta(t_f)$ fields to reduce Eq.(\ref{eq:Exp_val_Obs}) to \begin{equation} \begin{array}{rcl} \langle \hat{\mathcal{O}}(t_f) \rangle & = & \int D[\psi^*, \psi, \eta^*, \eta] \big\lbrace \sigma_{\mathcal{W}}(\psi^*(t_i) ,\psi (t_i) ) \\[1.5mm] & & \times \mathcal{O}_{\mathcal{W}}(\psi^*(t_f),\psi^{}(t_f)) e^{i\mathcal{S}''} \big\rbrace. \end{array} \end{equation} The Wigner transform of the density matrix $ \sigma_{\mathcal{W}}(\psi^*(t_i) ,\psi (t_i) ) $ and the Weyl symbol of the observable $\mathcal{O}_{\mathcal{W}}(\psi^*(t_f),\psi^{}(t_f))$ are both obtained after integrating out the $\eta^*(t_i),\eta(t_i)$ and $\eta^*(t_f),\eta(t_f)$-fields respectively \begin{equation} \begin{array}{rcl} \sigma_{ \mathcal{W} }(\psi^*, \psi) & = & \int \frac{d\eta^* d\eta}{4\pi^2} \hspace{1.0mm} \big\lbrace \langle \psi \! + \! \eta/2 |\hat{\sigma} | \psi \! - \! \eta/2 \rangle \\[1.5mm] & & \times e^{-|\psi|^2 - 1/4|\eta|^2 +1/2(\eta^* \psi - \eta \psi^*) } \big\rbrace , \\[1.5mm] \mathcal{O}_{\mathcal{W}}(\psi^*, \psi) & = &\int \frac{d\eta^*d\eta}{2 \pi} \hspace{1.0mm} e^{-|\eta|^2/2} {\mathcal{O}}( \psi^* \! - \! \eta^*/2, \psi \! + \! \eta/2). 
\end{array} \end{equation} Calculating $\mathcal{O}_{\mathcal{W}}$ is equivalent to writing down the normal ordered operator in a symmetrised form and then replacing $\hat{a}^{\dagger},\hat{a}$ with $\psi^*,\psi$, respectively. The new action has the form: \begin{equation} \begin{array}{l} \mathcal{S}'' = i \int^{t_f}_{t_i} d\tau_1 d\tau_2 \big\lbrace \eta^*(\tau_1) 2i \big(\Gamma n+\Gamma/2 \big) (\tau_1 - \tau_2) \eta(\tau_2) \\[1.9mm] + \psi^*(\tau_1)[ \delta( \! \tau_1 \! - \! \tau_2 \! )(i\partial_{\tau_2} - \! \Delta) - 2i\Gamma( \! \tau_1 \! - \! \tau_2 \! )\Theta( \! \tau_2 \! - \! \tau_1 \! ) ]\eta(\tau_2) \\[1.9mm] + \eta^*(\tau_1)[ \delta( \! \tau_1 \! - \! \tau_2 \! )(i\partial_{\tau_2} - \! \Delta) + 2i\Gamma( \! \tau_1 \! - \! \tau_2 \! )\Theta( \! \tau_1 \! - \! \tau_2 \! ) ]\psi(\tau_2) \big\rbrace . \end{array} \end{equation} where $\Gamma(t) = \pi \int^{\infty}_{-\infty} \frac{d\omega}{2 \pi} \mathcal{D}(\omega) \gamma^2(\omega) e^{-i\omega t} $ and $ \big( \Gamma n \big) (t) = \pi \int^{\infty}_{-\infty} \frac{d\omega}{2 \pi} \mathcal{D}(\omega) \gamma^2(\omega) n(\omega) e^{-i\omega t} $. In the noninteracting case the action contains only terms that are linear or quadratic in the $\eta,\eta^*$ fields. Both types of terms can be integrated out to give the following result: \begin{equation} \begin{array}{rcl} \langle \hat{\mathcal{O}}(t_f) \rangle & = & \hspace{3.0mm} \int D[\xi^*, \xi ] e^{- \sum_{lk} \xi^{*}_l \Sigma^{-1}_{lk} \xi^{}_{k} }\\[1.5mm] & & \times \int D[\psi^*, \psi] \big\lbrace \sigma_{\mathcal{W}}(\psi^*(t_i), \psi(t_i) ) \\[1.5mm] & & \times \mathcal{O}_{\mathcal{W}}(\psi^*(t_f), \psi(t_f) ) \delta( f_1(\psi) )\delta( f_2(\psi^*) ) \big\rbrace, \\[1.5mm] \Sigma^{}_{lk} & = & 2 (\Gamma n + \Gamma/2)(t_l-t_k). 
\end{array} \end{equation} In order to derive the last expression we have divided the time interval into $N$ equal parts $\Delta t = \frac{t_f-t_i}{N}$ ($t_l = t_i + l\cdot \Delta t$) and defined $\Sigma \in \mathbb{C}^{N+1 \hspace{0.1mm} \times \hspace{0.1mm} N+1}$ ($ \Sigma_{lk} \equiv \Sigma(t_l - t_k) $), $ \vec{ \xi } , \vec{ \xi }^* \in \mathbb{C}^{N+1}$ ($\xi_l \equiv \xi(t_l)$). The time evolution of $\psi$ is determined entirely from the argument of the $\delta$-function. If one sets $f_1(\psi)$ equal to zero, one obtains the following stochastic differential equation: \begin{equation} \label{eq:Langevin_eq} \begin{array}{c} \partial_{t}\psi(t ) = - i \Delta \psi( t ) - \int^{t}_{t_i} d\tau 2\Gamma( \! t \! - \! \tau \! ) \psi(\tau ) + \zeta( t ), \end{array} \end{equation} where $\zeta(t)$ is a Gaussian stochastic process with zero mean and autocorrelation function given by $ \langle \zeta(t) \zeta^{\dagger}(t') \rangle = \Sigma( t - t' )$. The equation of motion for $\psi^*(t)$ is obtained by setting $f_2(\psi^*)$ equal to zero and it is equal to the complex conjugate of Eq. (\ref{eq:Langevin_eq}). In order to calculate $\langle \hat{\mathcal{O}} (t_f) \rangle$ one has to sample a finite number of points $\lbrace \psi_j(t_i) \rbrace_{j=1\ldots N_T}$ from $\sigma_{\mathcal{W}}( \psi^*(t_i), \psi(t_i) )$, let them evolve according to the stochastic differential equation (\ref{eq:Langevin_eq}) and then calculate the following expectation value: \begin{equation} \begin{array}{rcl} \langle \hat{\mathcal{O}}(t_f) \rangle & \approx & \frac{1}{N_T} \sum\limits^{N_T}_{j=1} \mathcal{O}_{\mathcal{W}}(\psi^*_j(t_f), \psi^{}_j(t_f) ). \end{array} \end{equation} For large enough $t_f$, a steady state should be reached.\\ One should note that the strength of the memory kernel in the second term of Eq. (\ref{eq:Langevin_eq}) and the autocorrelation function of the noise depend entirely on the properties of the reservoir.
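As a numerical illustration of this sampling procedure (our sketch, not part of the derivation), Eq. (\ref{eq:Langevin_eq}) can be integrated with an Euler--Maruyama scheme. For concreteness we take a structureless reservoir, for which the memory kernel collapses to $\Gamma\psi(t)$ and $\zeta(t)$ becomes white noise with $\langle \zeta(t)\zeta^{\dagger}(t') \rangle = 2\Gamma(n+1/2)\delta(t-t')$; initial points are drawn from the vacuum Wigner function $(2/\pi)e^{-2|\psi|^2}$, and the Weyl symbol of the occupation, $|\psi|^2 - 1/2$, is averaged over trajectories. The steady-state occupation of the dot should approach the reservoir occupation $n$ (parameter values are ours, chosen only for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

Gamma, Delta, n_res = 1.0, 0.0, 2.0   # coupling, dot energy, reservoir occupation
dt, steps, n_traj = 0.01, 800, 20000  # t_f = 8, i.e. many relaxation times 1/(2*Gamma)

# Sample psi(t_i) from the vacuum Wigner function (2/pi) exp(-2|psi|^2): <|psi|^2> = 1/2.
psi = (rng.standard_normal(n_traj) + 1j * rng.standard_normal(n_traj)) / 2.0

D = 2.0 * Gamma * (n_res + 0.5)       # white-noise strength <zeta zeta^dagger>
for _ in range(steps):
    # complex Gaussian white noise increment with <w w*> = 1
    w = (rng.standard_normal(n_traj) + 1j * rng.standard_normal(n_traj)) / np.sqrt(2.0)
    psi += dt * (-(1j * Delta + Gamma) * psi) + np.sqrt(D * dt) * w

# Trajectory average of the Weyl symbol of the number operator
n_dot = np.mean(np.abs(psi) ** 2) - 0.5
print(f"steady-state occupation ~ {n_dot:.3f} (reservoir: {n_res})")
```

With $2\times 10^4$ trajectories the statistical error of the estimate is on the percent level, so the printed value sits close to the reservoir occupation $n=2$.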
For a reservoir with a constant density of states over the entire frequency spectrum, energy-independent couplings $\gamma_k$ and a constant occupation of the modes (i.e. $\mathcal{D}(\omega) = \mathcal{D}={\rm const}$, $\gamma_k = \gamma = {\rm const}$, $n(\omega)= n={\rm const}$), the Markovian limit is reached, where the memory kernel vanishes and the stochastic process $\zeta(t)$ becomes a Gaussian white noise: \begin{equation} \label{eq:Mark_limit} \begin{array}{rcl} \partial_t \psi (t) & = & -i\Delta \psi (t) - \Gamma \psi (t) + \zeta(t) \\[1.5mm] \left\langle\zeta(t)\zeta^{\dagger}(t') \right\rangle & = & 2 \Gamma(n + \frac{1}{2})\delta(t-t') \end{array} \end{equation} \\ It is important to stress that the same equation is obtained if one starts with the Master equation in Lindblad form for the density matrix $\hat{\sigma}$ of a single quantum dot \begin{equation} \label{eq:QMeq} \begin{array}{rcl} \partial_t \hat{\sigma} & = & - i [\Delta \hat{a}^{\dagger} \hat{a}, \hat{\sigma}] + \hat{\mathcal{L}}\hat{\sigma} \\ \hat{\mathcal{L}}\hat{\sigma} & = & -\Gamma(n+1)[\hat{a}^{\dagger} \hat{a}\hat{\sigma} + \hat{\sigma}\hat{a}^{\dagger} \hat{a} - 2 \hat{a}\hat{\sigma}\hat{a}^{\dagger} ] \\ & & -\Gamma n[\hat{a} \hat{a}^{\dagger} \hat{\sigma} + \hat{\sigma}\hat{a} \hat{a}^{\dagger} - 2 \hat{a}^{\dagger}\hat{\sigma}\hat{a} ], \end{array} \end{equation} applies the operator correspondences given in \cite{otago3} in order to map the last expression to a Fokker-Planck equation (FPE), and then uses the fact that the FPE can be rewritten as a Langevin equation. The addition of a dephasing Lindblad operator $ \mathcal{L} \hat{\sigma} = -\frac{\gamma}{2}[\hat{a}^{\dagger}_{}\hat{a}, [ \hat{a}^{\dagger}_{}\hat{a}, \hat{\sigma} ]] $ to the equation will result only in the appearance of $ \sqrt{\gamma} \psi^*_{}(t) \tilde{\zeta}(t) $ on the RHS of Eq.
(\ref{eq:Mark_limit}), where $ \tilde{\zeta}(t) $ is a Gaussian white noise $\big( \left\langle \tilde{\zeta}(t) \tilde{\zeta}^{\dagger}_{}(t') \right\rangle = \delta(t-t') \big) $.\\ The addition of an on-site repulsion term $\frac{U}{2}\hat{a}^{\dagger} \hat{a}^{\dagger} \hat{a} \hat{a}$ to the system Hamiltonian is reflected in the action $\mathcal{S}''$ by the addition of \begin{equation} \begin{array}{c} -U\int d\tau \big[ \big( \psi^{*2}\psi\eta + \eta^* \psi^* \psi^2 \big) +\frac{1}{4}\big( \eta^{*2} \eta \psi + \psi^* \eta^* \eta^2 \big) \big]. \end{array} \end{equation} The $\tau$-dependence is dropped for simplicity here. The terms in the second bracket are neglected to allow for a mapping onto a set of stochastic differential equations. This is the essence of the so-called Truncated Wigner Approximation (TWA) \cite{otago1,otago2,otago3}. \subsection{2.2 Chain of $\mathcal{N}$ quantum dots coupled to two bosonic reservoirs} The generalisation of the simple example from the previous subsection to the case of an arbitrary number of wells (quantum dots) $\mathcal{N}$ between two reservoirs is straightforward.
The Hamiltonian of this system is given by \begin{equation} \begin{array}{c} \hat{H} = \sum \limits^{\mathcal{N}}_{j=1} \Delta^{}_{j} \hat{a}^{\dagger}_j \hat{a}^{}_j + \sum \limits^{}_{k} \varepsilon^{}_{k}\hat{L}^{\dagger}_{k}\hat{L}^{}_{k} + \sum \limits^{}_{k'} \varepsilon^{}_{k'}\hat{R}^{\dagger}_{k'}\hat{R}^{}_{k'}\\ - \sum \limits^{}_{k} \gamma^{}_{L,k}\big( \hat{a}^{\dagger}_1\hat{L}^{}_{k} +\hat{L}^{\dagger}_{k}\hat{a}^{}_1 \big) - \sum \limits^{}_{k'} \gamma^{}_{R,k'}\big( \hat{a}^{\dagger}_{\mathcal{N}}\hat{R}^{}_{k'} + \hat{R}^{\dagger}_{k'}\hat{a}^{}_{\mathcal{N}} \big)\\ -\sum \limits^{\mathcal{N}-1}_{j=1} J \big( \hat{a}^{\dagger}_{j+1}\hat{a}^{}_{j} + \hat{a}^{\dagger}_{j} \hat{a}^{}_{j+1} \big) + \frac{1}{2}\sum \limits^{\mathcal{N} }_{j=1}U^{}_{j} \hat{a}^{\dagger}_{j}\hat{a}^{\dagger}_{j}\hat{a}^{}_{j}\hat{a}^{}_{j} \end{array} \end{equation} The ladder operators $\hat{L}^{(\dagger)}_{k},\hat{R}^{(\dagger)}_{k},\hat{a}^{(\dagger)}_{}$ are responsible for the annihilation (creation) of an excitation in the left reservoir, the right reservoir and the chain of quantum dots. We always set $U_1=0=U_{\mathcal{N}}$ and $\Delta_1 = 0 = \Delta_{ \mathcal{N} }$. \\ The corresponding set of stochastic differential equations that one has to solve is given by \begin{equation} \label{eq:Set_of_Stoch_DGL} \begin{array}{rcl} \partial_t \psi^{}_1(t) & \! = \! & - \int^{t}_{t_i}\! d\tau 2 \Gamma_L(t-\tau)\psi^{}_{1}(\tau) + iJ\psi^{}_{2}(t) + \zeta^{}_L(t) \\[1.5mm] \partial_t \psi^{}_{j}(t) & \! = \! & -i\Delta^{}_j \psi^{}_j(t) +iJ\big(\psi^{}_{j-1}(t) + \psi^{}_{j+1}(t) \big) \\[1.5mm] & & -iU^{}_{j}\psi^2_j(t)\psi^*_{j}(t) \hspace{23.0mm}(1<j<\mathcal{N}) \\[1.5mm] \partial_t \psi^{}_{\mathcal{N}}(t) & \! = \! & - \int^{t}_{t_i} \! d\tau 2 \Gamma_R(t-\tau)\psi^{}_{ \mathcal{N}}(\tau) + iJ\psi^{}_{ \mathcal{N }\! - \! 1 \! }(t) + \zeta^{}_{R}(t) \end{array} \end{equation} where $\Gamma_{L,R},\zeta_{L,R}$ are defined in the same way as in Eq.
(\ref{eq:Langevin_eq}) and the subscript $L,R$ refers to the left, right reservoir. We assume that initially the lattice chain is empty ($\langle \hat{a}^{\dagger}_i \hat{a}_j \rangle = 0$) and at $t_i=0$ it is instantaneously coupled to the environment, i.e. we take $\gamma^{}_k(t)=\gamma^{}_k \theta(t)$. The Wigner function of the initial state is then \begin{equation} \begin{array}{rcl} \sigma_{\mathcal{W}}(\psi^*, \psi) & = & \prod \limits_j \big( \frac{2}{\pi} e^{-2\psi^*_j \psi^{}_j } \big). \end{array} \end{equation} \section{3 Results for a chain of $\mathcal{N}$ quantum dots coupled to two Markovian reservoirs} \label{sec:2} \subsection{3.1 Steady state properties of the system} \label{subsec:Res_Steady} We first consider the case $\mathcal{N}=3$. Using the nonequilibrium Green's function approach we get exact results for the noninteracting case and Markovian reservoirs. The mean occupation number $n_j$ $(j=1,2,3)$ of the dots and the steady state current $I$ are given by the following exact solutions, for $\Gamma = \pi \gamma^2 \mathcal{D}$, $\Delta_2=0$, and $x=J / \Gamma$: \begin{eqnarray} n_1 & = & n_L - \frac{n_L-n_R}{2}\frac{x^2}{1+x^2} \label{eq:eq_1} ,\\ n_2 & = & \frac{n_L + n_R}{2} \, , \label{eq:eq_2} \\ n_3 & = & n_R - \frac{n_R-n_L}{2 }\frac{x^2}{1+x^2} \, , \label{eq:eq_3} \\ I & = & J\frac{x}{1+x^2}(n_L-n_R) \, , \label{eq:eq_4} \end{eqnarray} where $n_{L/R}$ are the occupation numbers of the modes of the left/right reservoir. We should note that the steady state current remains the same independent of the length of the lattice chain as long as $U_j=0=\Delta_j$ $ \forall j $. In the case of nonzero interparticle interaction in the Markovian limit we approximate the interaction contribution to the self-energy only by the tadpole diagram (one loop diagram with two external legs, also referred to as Hartree contribution) \cite{Rammer_book}. 
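Because the $U_j=0$ problem is linear, these steady-state expressions can be cross-checked without stochastic sampling: the stationary covariance $C_{ij}=\langle \psi^{}_i \psi^*_j \rangle$ of Eq. (\ref{eq:Set_of_Stoch_DGL}) in the Markovian limit obeys the Lyapunov equation $AC + CA^{\dagger} + D = 0$, with drift matrix $A$ and noise matrix $D = {\rm diag}\big(2\Gamma(n_L+\frac{1}{2}),0,2\Gamma(n_R+\frac{1}{2})\big)$, and $n_j = C_{jj} - \frac{1}{2}$. A small sketch of this check (our notation; we evaluate the bond current as $I = 2J\,{\rm Im}\langle \hat{a}^{\dagger}_1 \hat{a}^{}_2 \rangle$, which is one common convention):

```python
import numpy as np

Gamma, J = 1.0, 0.5
nL, nR = 4.0, 1.0
x = J / Gamma

# Drift matrix of the linear (U_j = 0, Delta_2 = 0) Langevin equations, Markovian limit
A = np.array([[-Gamma, 1j * J, 0.0],
              [1j * J, 0.0,    1j * J],
              [0.0,    1j * J, -Gamma]], dtype=complex)
# Noise matrix: only the end dots are driven by the reservoirs
D = np.diag([2 * Gamma * (nL + 0.5), 0.0, 2 * Gamma * (nR + 0.5)]).astype(complex)

# Solve A C + C A^dagger + D = 0 by vectorization (column-major vec convention)
eye = np.eye(3)
M = np.kron(eye, A) + np.kron(A.conj(), eye)
C = np.linalg.solve(M, -D.flatten(order="F")).reshape(3, 3, order="F")

occ = np.real(np.diag(C)) - 0.5        # n_j = <|psi_j|^2> - 1/2
current = 2 * J * np.imag(C[1, 0])     # I = 2J Im<a_1^dagger a_2>

# Compare with the exact expressions, Eqs. (15)-(18)
n1 = nL - (nL - nR) / 2 * x**2 / (1 + x**2)
n2 = (nL + nR) / 2
n3 = nR - (nR - nL) / 2 * x**2 / (1 + x**2)
I = J * x / (1 + x**2) * (nL - nR)
print(occ, current)
```

For the parameters above the Lyapunov solution reproduces $(n_1,n_2,n_3)=(3.7,\,2.5,\,1.3)$ and $I=0.6$, in agreement with the closed-form results.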
We shall see later that this approximation already yields a number of interesting details, which are consistent with the TWA predictions. In the current case the tadpole diagram renormalises the energy level of the middle quantum dot from $\Delta_2 = 0 $ to $ \Delta_2 = U_2(1+n_L+n_R) = U_2(1+2n_2) $. At this point it is important to realise that Eq. (\ref{eq:eq_2}) is also valid for $\Delta_2 \neq 0 $, which means that $n_2$ is unchanged in this approximation. The same behaviour of $n_2$ is obtained by the TWA. \begin{figure}[t] \centering \includegraphics[width=0.93\linewidth]{Current_dif_Gamma.eps} \caption{\label{fig:Current_ch_Gamma}Steady state current for a chain of three quantum dots coupled to two Markovian reservoirs for nonzero interparticle interaction $U_2/J=10^{-3}$. Truncated Wigner approximation (triangles - $\Gamma/J = 5$, squares - $\Gamma/J = 50$) and tadpole approximation (solid line - $\Gamma/J=5$, dashed line - $\Gamma/J=50$). Additional parameters: $\Delta_2 / J=0 $, $n_R=100$. The peaks in the tadpole approximation are at $n_L=5 \times 10^{3}$ and $n_L=5 \times 10^{4}$ for $\Gamma/J = 5$ and $\Gamma/J = 50$, respectively. The corresponding new values of $\Delta_2/J \rightarrow \Delta_2/J + U_2(1+n_L+n_R)/J \approx U_2n_L/J $ are $5$ and $50$.} \end{figure} \begin{figure}[t] \centering \includegraphics[width=1.0\linewidth]{Spec_fct_100dpi.eps} \caption{\label{fig:Spec_fct_ch_Gamma} Spectral functions of the first (left panel) and the second (right panel) quantum dot. For $\Gamma/J = 5$ $(50)$ the black, dark grey and grey lines in the upper (lower) two figures denote the spectral functions for $\Delta_2/J = 0$, $5$ and $10$ ($0$, $50$ and $100$) respectively. The peak of $\mathcal{A}_2(\omega)$ is at $\omega=\Delta_2$. The dashed vertical line denotes the value of the critical $\Delta_2 \approx U_2n_2$ where the peak in the current in Fig. \ref{fig:Current_ch_Gamma} in the tadpole approximation is reached.} \end{figure} From Fig.
\ref{fig:Current_ch_Gamma} one sees that the steady state current shows qualitatively the same behaviour in the TWA and in the tadpole approximation for not too large $n_L$. The slope of the curves and the position of the peaks in the latter approximation can be explained with the spectral functions of the three quantum dots $\mathcal{A}^{}_j(\omega)$, $(j=1,2,3)$ \cite{Mahan10}, which can be obtained from the action $\mathcal{S}'$ of the noninteracting system after the substitution $\Delta_2 \rightarrow \Delta_2 + U_2(1+2n_2)$. In this approximation the spectral functions of the first and third quantum dot are exactly the same, since the system is symmetric under the exchange of the $(1,L)\leftrightarrow (3,R)$ indices (except $n_{L,R}$) and the retarded Green's functions of the system do not depend on $n_{L,R}$ in the noninteracting case. In the tadpole approximation this symmetry is not broken, since only $\Delta_2$ is renormalised. For increasing $\Gamma$, $\mathcal{A}_1(\omega)$ and $\mathcal{A}_3(\omega)$ become wider and they do not change when varying the energy level $\Delta_2$ of the middle quantum dot, except for the appearance of a small dip and peak at $\omega=\Delta_2$. In the following discussion the latter effect is not important. On the other hand, $\mathcal{A}_2(\omega)$ has only a narrow peak at $\omega=\Delta_2$. \\ Now we look at the overlap of the spectral functions of the left and the middle quantum dots, $\mathcal{A}_1(\omega)$ and $\mathcal{A}_2(\omega)$ (the results for the overlap between $\mathcal{A}_2(\omega)$ and $\mathcal{A}_3(\omega)$ are exactly the same). For $\Delta_2 = 0$ and increasing $\Gamma$ the overlap stays in the same energy range, since the width of $\mathcal{A}_2(\omega)$ is almost unchanged in comparison to the width of $\mathcal{A}_1(\omega)$. Only particles in the left dot with energies also accessible in $\mathcal{A}_2( \omega )$ can tunnel to the middle dot.
But this number is smaller since, for larger $\Gamma$, $\mathcal{A}_1( \omega )$ spreads over a wider range of energies and the particles at the left dot are distributed over this range. It follows that the current should also decrease. This explains the difference in the slope of the curves plotted in Fig. \ref{fig:Current_ch_Gamma} for small values of $n_L - n_R$. The same behaviour can also be seen in Eq. (\ref{eq:eq_4}) in the relevant parameter regime $x=J/\Gamma \ll 1$.\\ With this simple picture one can also explain the position of the peaks of the curves plotted in Fig. \ref{fig:Current_ch_Gamma}. In the tadpole approximation an increase of the interparticle interaction strength leads to a change of the energy level of the dot ($\Delta_2 \rightarrow \Delta_2 + U_2(1+n_L+n_R) \approx U_2n_L$). We have to take into account the competition between two effects. On the one hand, an increase of $n_L$ shifts the peak of $\mathcal{A}_2(\omega)$ to higher values, thus decreasing the overlap between $\mathcal{A}_1(\omega)$ and $\mathcal{A}_2(\omega)$, meaning that the relative number of particles that can tunnel to the middle quantum dot decreases. On the other hand, looking at Eq. (\ref{eq:eq_1}), the total particle number in the first quantum dot increases almost linearly with $n_L$. The position of the peak should be at the point where the first effect begins to dominate over the second one. From Fig. \ref{fig:Spec_fct_ch_Gamma} we see that for the two different values of $\Gamma$ this is the value where $\mathcal{A}_1(\Delta_2 = U_2n_L)$ is equal to half of its maximum. \\ \begin{figure}[htbp] \includegraphics[width=0.94\linewidth]{4Wells_ch_U.eps} \caption{\label{fig:Occupation_4Wells_ch_U} Mean particle occupation of the quantum wells for a chain of four quantum dots coupled to two Markovian reservoirs, TWA. The black, dashed, dotted and dash-dotted lines denote the mean particle occupation in the first, second, third and fourth quantum dot.
We use the parameters: $n_L=4000$, $n_R=100$, $\Gamma /J=5$, $\Delta_1 =\Delta_2=\Delta_3=0$, $U_2=U_3=U$. } \end{figure} Within our approximations and keeping the number of quantum dots $\mathcal{N}=3$, there is no difference in the results for the mean particle occupation $n_j$ ($1<j<\mathcal{N}$) of the quantum dots between the interacting and the noninteracting regime. For $\mathcal{N}\geq 4$ such a difference can be seen, as shown in Fig. \ref{fig:Occupation_4Wells_ch_U} for $\mathcal{N}=4$ and $U_2=U_3=U$ after applying the TWA and solving Eq. (\ref{eq:Set_of_Stoch_DGL}). The tadpole approximation cannot describe such a difference in the particle occupation of the middle two dots, since it gives the same correction to their energy levels $\Delta_2$ and $\Delta_3$. We attempted a self-consistent calculation, which leads to the following equations for the occupations of the middle two quantum dots, $(n_2,n_3)=(f_2( \Delta_2,\Delta_3 ),f_3( \Delta_2,\Delta_3 ))$ \cite{Anderson61}: \begin{equation} \label{eq:Self_con_eq} \begin{array}{rcl} f_2( U(n_2 -1/2) , U(n_3 - 1/2) ) & = & n_2 \\[1.5mm] f_3( U(n_2 -1/2) , U(n_3 - 1/2) ) & = & n_3 . \end{array} \end{equation} These equations can be solved numerically for a wide range of parameters. In the limit of strong interparticle interactions we find that the agreement between the resulting solutions and the TWA predictions improves with growing $U$. To explain the results in the strongly interacting limit one has to take into account that each of the Markovian reservoirs forces the occupation in the wells to be equal to the occupation $n_{L/R}$ of the reservoir modes. In the case $\mathcal{N}=4$ and very strong interparticle interactions one should expect the coupling between the middle two quantum dots to be effectively zero, in analogy to the self-trapping effect one observes for a Bose-Einstein condensate in a double well potential \cite{Legget01}.
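Equations of the form (\ref{eq:Self_con_eq}) lend themselves to a damped fixed-point iteration. The sketch below is our own illustration: \texttt{f2} and \texttt{f3} are hypothetical smooth stand-ins for the actual occupation functions (which follow from the Green's functions and are not reproduced here); only the iteration scheme itself is the point.

```python
import math

def solve_self_consistent(f2, f3, U, start=(1.0, 1.0), damping=0.3,
                          tol=1e-12, max_iter=100_000):
    """Damped fixed-point iteration for the coupled system
    n2 = f2(U(n2-1/2), U(n3-1/2)),  n3 = f3(U(n2-1/2), U(n3-1/2))."""
    n2, n3 = start
    for _ in range(max_iter):
        d2, d3 = U * (n2 - 0.5), U * (n3 - 0.5)   # renormalised levels
        t2, t3 = f2(d2, d3), f3(d2, d3)
        if abs(t2 - n2) < tol and abs(t3 - n3) < tol:
            break
        n2 = (1 - damping) * n2 + damping * t2    # damped update
        n3 = (1 - damping) * n3 + damping * t3
    return n2, n3

# Purely illustrative stand-ins: occupations that detune smoothly with the
# level splitting.  These are NOT the physical f2, f3 of the paper.
nL, nR = 40.0, 10.0
f2 = lambda d2, d3: nL / (1.0 + math.exp((d2 - d3) / 50.0)) + nR / 2.0
f3 = lambda d2, d3: nR / (1.0 + math.exp((d3 - d2) / 50.0)) + nL / 2.0

n2, n3 = solve_self_consistent(f2, f3, U=0.1)
# At convergence the residuals |f2 - n2|, |f3 - n3| are below the tolerance.
```

The damping factor trades convergence speed for stability, which matters once the physical occupation functions become steep in the strongly interacting regime.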
One can assume that the first two quantum dots are coupled only to the left reservoir -- and the last two only to the right one. In this case the occupation of the first two and the last two dots is equal to $n_L$ and $n_R$, respectively, which indeed seems to be the case after an extrapolation of the results of both approximations in the limit of large interparticle interaction strengths. \subsection{Transient behaviour of the system} \label{subsec:Res_Transient} In order to find an analytical expression for the behaviour of an empty chain of quantum dots after an instantaneous coupling to two reservoirs one has to calculate the retarded, advanced and lesser Green's functions $G^{R,A,<}$ of the system. The case of a single \emph{fermionic} quantum dot coupled to a reservoir was already considered in \cite{Langreth91,Schmidt08}, both for the noninteracting case and in the lowest-order self-energy (tadpole) approximation. The generalisation to a chain of quantum dots and two Markovian bosonic reservoirs is straightforward. For $U=0$ the retarded/advanced Green's function is obtained from the solution of the set of equations: \begin{equation} \label{eq:Set_for_D_R_A} \begin{array}{rl} \big(i\partial^{}_t - \Delta^{}_l \big)G^{R/A}_{lk}(t,t') & = \delta_{lk}\delta(t-t') + \\[1.5mm] & \sum_j \int d\tau \Sigma^{R/A}_{lj}(t,\tau)G^{R/A}_{jk}(\tau,t'), \\[1.5mm] \big( \!\! - \! i\partial^{}_{t'} - \Delta^{}_k \big)G^{R/A}_{lk}(t,t') & = \delta_{lk}\delta(t-t') + \\[1.5mm] & \sum_j \int d\tau G^{R/A}_{lj}(t,\tau)\Sigma^{R/A}_{jk}(\tau,t'). \end{array} \end{equation} The retarded/advanced part of the self-energy has the form \begin{equation} \begin{array}{rcl} \Sigma^{R}_{lk}(t,t') & = & \big(\! - \! i\Gamma \theta(t)( \delta_{l1} \! \delta_{k1} + \delta_{l \mathcal{N}} \! \delta_{k \mathcal{N} } ) -J\delta_{l,k\pm1} \big) \delta( \! t \! - \! t' \! ), \\[1.5mm] \Sigma^{A}_{lk}(t,t') & = & \big(\! + \! i\Gamma \theta(t)( \delta_{l1} \! \delta_{k1} + \delta_{l \mathcal{N}} \!
\delta_{k \mathcal{N} } ) -J\delta_{l,k\pm1} \big) \delta( \! t \! - \! t' \! ). \end{array} \end{equation} After solving Eq. (\ref{eq:Set_for_D_R_A}) one can obtain $G^{<}(t,t')$ by making use of the fact that the chain of quantum dots is empty at $t=0$: \begin{equation} \begin{array}{rcl} G^{<}(t,t') & = & \int d\tau_1 d\tau_2 G^{R}(t,\tau_1) \Sigma^{<}(\tau_1, \tau_2) G^{A}(\tau_2,t') . \end{array} \end{equation} With $G^{R,A,<}(t,t')$ one can obtain all system observables. The tadpole approximation to the Green's functions of the chain of quantum dots (denoted by $\tilde{G}$) is obtained via the following equation: \begin{equation} \begin{array}{rcl} \tilde{G}_{lk}(t,t') \! & \! = \! & G_{lk}(t,t') + \sum \limits_j 2U_j \int_c d\tau n_j(\tau) G_{lj}(t,\tau) G_{jk} (\tau, t'). \end{array} \end{equation} The mean occupation number at the $l^{\rm th}$ lattice site is then given by \begin{equation} \begin{array}{rcl} \label{eq:Occupation_Tad} \tilde{n}_{l}(t) & = & i\tilde{G}^{<}_{ll}(t,t)\\[1.5mm] & = & iG^{<}_{ll}(t,t) + \sum_j 2U_j \int d\tau n^{}_j(\tau) G^R_{lj}(t,\tau)iG^{<}_{jl}(\tau,t) \\[1.5mm] & & \hspace{14.2mm} + \sum_j 2U_j \int d\tau n^{}_j(\tau) iG^{<}_{lj}(t,\tau)G^A_{jl}(\tau,t). \end{array} \end{equation} The first term is the noninteracting result and the last two are the perturbative corrections from the interaction. In the following we consider the case $\mathcal{N}=3$ and observe only the behaviour of $n_2(t)$ and $\tilde{n}_2(t)$.
In the noninteracting case one can clearly distinguish two regimes, in which the observable has the following form: \begin{equation} \begin{array}{rcl} n_2(t) & = & 0.5(n_L+n_R)f_A(t) \hspace{5.0mm} \Gamma < 2^{3/2}J \\[1.5mm] n_2(t) & = & 0.5(n_L+n_R)f_B(t) \hspace{5.0mm} \Gamma > 2^{3/2}J \end{array} \end{equation} with $f_A(t),f_B(t)$ given by: \begin{equation} \label{eq:f_A_and_f_B} \begin{array}{rcl} f_A(t) & = & 1 + \frac{e^{-t\Gamma}}{\beta^2}\big( -8J^2 + \Gamma^2 \cos(t\beta ) - \Gamma \beta \sin(t \beta ) \big)\\[1.5mm] f_B(t) & = & 1 + \frac{e^{-t\Gamma}}{\beta^2}\big( 8J^2 - \Gamma^2 \cosh(t\beta ) -\Gamma \beta \sinh(t \beta ) \big)\\[1.5mm] \beta & = & \sqrt{|8J^2 - \Gamma^2|}. \end{array} \end{equation} In the regime $\Gamma \gg 2^{3/2}J$ (Fig. \ref{fig:FABB}) the observable converges exponentially to its steady state, as is the case for the particle occupation of a single quantum dot coupled to a Markovian reservoir. The time scale of this process is proportional to $\Gamma/(4J^2)$. In the limit of very small $\Gamma$, however, one observes a step-like behaviour of the particle occupation, the length of the steps being equal to $2\pi/\beta$. One can also see that the fastest convergence to a steady state is obtained in the case $\Gamma \sim J$.\\ The next task is to see whether the interparticle interactions at the middle dot can influence this transient behaviour. For the special case of $\mathcal{N}=3$ one can bring Eq. (\ref{eq:Occupation_Tad}) into the more compact form \begin{equation} \begin{array}{rcl} \tilde{n}_2(t) & = & n_2(t) + 4U_2\int d\tau n_2(\tau) \Re \big( G^R_{22}(t,\tau) iG^{<}_{22}(\tau,t) \big). \end{array} \end{equation} The correction to the particle occupation in the middle quantum dot is zero. It follows that not only the steady state but also the transient behaviour of $n_2(t)$ is unchanged by the presence of interactions, at least within these approximations.
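The closed form for $f_A$ can be cross-checked numerically. The sketch below is our own construction (the factor conventions for the Markovian lesser self-energy, $\Sigma^{<}_{11}\propto -2i\Gamma n_L\,\delta(t-t')$ and likewise for the right end, are our assumptions): it builds the retarded propagator from the effective non-Hermitian Hamiltonian of the $\mathcal{N}=3$ chain and evaluates $n_2(t)$ from the $G^{R}\Sigma^{<}G^{A}$ convolution.

```python
import numpy as np

J, Gamma, nL, nR = 1.0, 0.3, 100.0, 10.0   # Gamma < 2^{3/2} J regime

# Effective non-Hermitian Hamiltonian: hopping -J, damping -i*Gamma at the ends.
H = np.array([[-1j * Gamma, -J, 0.0],
              [-J, 0.0, -J],
              [0.0, -J, -1j * Gamma]], dtype=complex)
evals, V = np.linalg.eig(H)
Vinv = np.linalg.inv(V)

def U(s):
    """Retarded propagator U(s) = exp(-i H s), s > 0, via eigendecomposition."""
    return V @ np.diag(np.exp(-1j * evals * s)) @ Vinv

def n2_numeric(t, steps=2001):
    """n_2(t) = 2*Gamma * int_0^t ds [nL |U_21(s)|^2 + nR |U_23(s)|^2]."""
    s = np.linspace(0.0, t, steps)
    f = np.array([nL * abs(U(si)[1, 0])**2 + nR * abs(U(si)[1, 2])**2
                  for si in s])
    ds = s[1] - s[0]
    return 2 * Gamma * (f.sum() - 0.5 * (f[0] + f[-1])) * ds   # trapezoid rule

def n2_analytic(t):
    """0.5*(nL + nR)*f_A(t) with f_A from Eq. (f_A_and_f_B)."""
    beta = np.sqrt(8 * J**2 - Gamma**2)
    fA = 1 + np.exp(-t * Gamma) / beta**2 * (
        -8 * J**2 + Gamma**2 * np.cos(t * beta) - Gamma * beta * np.sin(t * beta))
    return 0.5 * (nL + nR) * fA
```

Both expressions start at zero (empty chain) and relax to the steady-state value $(n_L+n_R)/2$, with the oscillation frequency set by $\beta$.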
The situation is different if one looks at the numerical solution of Eq. (\ref{eq:Set_of_Stoch_DGL}), where all classical contributions of the interparticle interaction are taken into account. In both parameter regimes $(\Gamma \lessgtr 2^{3/2}J )$ one observes a quadratic dependence of the time needed to reach a steady state on the interparticle interaction strength in the middle quantum dot (Fig. \ref{fig:FABB}). \begin{figure}[h!tb] \centering \includegraphics[width=1.0\linewidth]{FAB3.eps} \caption{ \label{fig:FABB} Time evolution of $f_A(t),f_B(t)$ defined in Eq.~(\ref{eq:f_A_and_f_B}) for $U=0$, $\Gamma = \frac{1}{20}2^{3/2}J$ (solid line in the upper panel) and $\Gamma = 5J$ (solid line in the lower panel). The other lines are the results from the TWA obtained after dividing $n_2(t)$ by $(n_L+n_R)/2$. The values of $U_2/J$ are $5 \times 10^{-4},10^{-3},10^{-2}$ ($ 10^{-3},5 \times 10^{-3},10^{-2} $) for the dashed, dotted and dash-dotted lines in the upper (lower) panel. The inset shows the time that $f_A(t)$ or $f_B(t)$ needs to reach $0.95$. The numerical results are fitted with a curve of the form $g(U_2) = a + bU^2_2$. } \end{figure} \section{Conclusions} \label{concl} We have studied the transient behaviour and the steady state properties of a chain of quantum dots that is instantaneously coupled to two Markovian reservoirs. For the case of three dots an exact solution in the noninteracting case has been presented. We see that the interparticle interaction does not change the mean particle occupation in the middle well in either the TWA or the tadpole approximation, but the time the system needs to reach a stationary state increases quadratically with the interaction strength in the TWA. We have also found a qualitative explanation for the behaviour of the steady state current based on the spectral properties of the chain of dots. Increasing the number of wells from three to four, additional effects arise from the interparticle interactions.
Here the interaction effectively reduces the coupling between the middle two dots, such that $n_1 = n_2 = n_L$ and $n_3 = n_4 = n_R$ in the limit of very strong interactions. In order to access this interesting physics experimentally we envisage the following procedure, which has already been partly realized by the authors of \cite{Anderson12}. One starts with a rather large trap containing a Bose-Einstein condensate in perfect equilibrium. Then, by an instantaneous potential shift, one induces a sloshing of the condensate. After that the system should be cut into two subsystems, for instance by an impenetrable barrier. In this way one produces two different bosonic reservoirs which contain a large number of particles in excited states. Gradually removing the barrier one can then couple these ``reservoirs'' and hence allow for transport. The additional structuring of the contact area into several quantum dots can be accomplished in a way similar to that described in \cite{Anderson12} for one well, or by adding a lattice potential along the channel created in \cite{Esslinger12}. We hence expect that such a `bosonic FET' can be manufactured with state-of-the-art experimental methods. Needless to say, there is room for improvement of our approach. While an extension of the TWA appears to be highly non-trivial, the inclusion of higher-order self-energies is, in principle, rather straightforward. Since the latter will definitely generate energy-dependent quantities, we expect not only quantitative but also qualitative differences from our predictions to emerge. However, these would only play a significant role for intermediate to strong interactions. \section*{Acknowledgements} We are very grateful to Peter Schlagheck and Martin Bruderer for valuable discussions and for support by the DFG Forschergruppe 760 (Grant No. WI 3426/3-1) and the Heidelberg Center for Quantum Dynamics.
Home Archive 2013 A Thiel-good story: Local twins overcome Kitchen Nightmares on way to American... A Thiel-good story: Local twins overcome Kitchen Nightmares on way to American Dream Author: Ryan Graff "Because I'm better looking," Jeff Thiel said as he comes out of the kitchen, tying his stained and faded black apron securely around his barrel-like midsection. Thiel references how the staff and patrons of The Capri Italian Restaurant tell him and his identical twin, Jim, apart. As Jim — younger than Jeff by five minutes — makes his way from the outside alley and into the front of the restaurant and begins to pour himself a soda, Jeff again pokes fun at his not so "little" brother. "Well, look who finally got here, that troublemaker over there," he said staring Jim down and pursing his lips in a small grin. Though they are quick to tease each other, the Thiels, 49, have been inseparable since birth, doing everything together from going to school and playing sports to entering the acting industry at age 22. "We actually prefer to be together," Jim said. "We definitely know how to best push each others' buttons and when one of us wants to, we easily can. But in the end, we're cohesive together and closer than anything else." Born in New York City, the twins moved to Orange County as teenagers after living in Massachusetts for their more formative years. There, a talent scout spotted them and set them on their path into the spotlight. The twins' first acting job was set for a Cabbage Patch astronaut commercial in 1986 where they were cast as two aliens. However, the spot was canceled weeks later after the Challenger space shuttle's infamous crash. Despite the anti-climatic debut, the Thiels stayed in the acting business for nearly two decades, appearing in various commercials, shows for the small screen and minor parts in movies. 
But it was the desire to move beyond acting and obtain a more formal education that would eventually lead the twins to what they consider their true calling: owning and operating their own restaurant. After Jim's decision to transfer from Ripon College in Wisconsin back to Southern California to attend the University of Southern California and later pursue a marketing major at California State University, Los Angeles, the twins and their parents began frequenting The Capri, founded in 1963 and originally owned by Joe and Helen Sams. The pizza and pasta quickly became Thiel family favorites. Since the elder Thiels had already dabbled in the restaurant business while the brothers were in college, they made an offer the very day that the Sams decided to put The Capri up for sale. The Thiels took over the restaurant as a family business 17 years ago. After a decade as proprietors, the twins' parents retired and bequeathed the location to Jeff and Jim seven years ago. The brothers jumped at the opportunity to once again join forces and run the business that held so many family memories. "At the time, I just saw it as 'yay, free pizza,'" Jim said while smiling and throwing his hands over his head. "But when we took this place over, our dad told us that just like acting, the restaurant business is not an easy one to get into." In fact, it wasn't long before the twins began to experience adversity in their endeavor as business was dwindling, and they were at risk of having to close the doors of The Capri after just three years as the official owners. According to both Jeff and Jim, the menu had become too large to support, and they turned to frozen ingredients of questionable quality. "We had simply misplaced all of our good ideas," Jim said. 
As a possible solution, the brothers ripped a proverbial page out of their acting handbooks after receiving an email from a friend who works for the casting department of the television series Kitchen Nightmares with Chef Gordon Ramsay. After watching previous episodes of the show and carefully considering the best trajectory for the future of the restaurant, the brothers decided that they had to sign up for the show. "It definitely wasn't an easy decision to make," Jim said. "And if you ask me if I recommend doing a show like Kitchen Nightmares: no I don't recommend it. Sure, you have a vision of what things will be like afterwards, and you get excited about being able to use the show for advertising and recognition. But you know that first, you have to admit that you have a problem and are in need of the help." The difficult choice was further complicated when the then-chef refused to do the show and walked off the job. Head-chef-less and all, the twins opened their doors for Ramsay and his camera crew in hopes that they could revitalize the business and keep it afloat. According to the Thiels, Ramsay entered The Capri with a "less is more" mentality as he dished out various changes that included cutting the menu from 40 pasta dishes to 12 and three red sauces to one, offering only a la carte items instead of large-portioned meals, using all fresh ingredients, removing old acting pictures from the walls and reformatting the dining room to tables and chairs instead of dilapidated wooden booths. The brothers also established that Ramsay, known for his antics of yelling and throwing things around the kitchen on television, is actually supportive and helpful behind the scenes. After the show aired on May 6, 2011, Jeff and Jim noticed a "big jump" to The Capri that included repeat customers and the presence of families. "Kids run the family," Jeff said. "They are often the ones who dictate where the family goes out to eat. 
If we can spike an interest in kids, we are more likely to get more and more families through the door." One such family includes a little girl named Claire, who often tells her mother that she wants to go see "The Chef Twins" at The Capri. "The change that has occurred here from before and after Kitchen Nightmares is incredible," Claire's mother said as Claire attempted to eat a forkful of pasta. "Claire is always so happy to come here and see Jim and Jeff and, as a family, we love it, too." While both twins understand the importance of running the business end of The Capri — the "back of the house" as they call it — they are oftentimes found in the "front of the house," socializing with and serving customers. For Jeff, the main goal is to keep the "mom and pop" feel of a small, Eagle Rock business while offering an enhanced experience for guests. "When you come into The Capri, you don't just buy our food, you buy the entire package," Jeff said, "And that includes buying into the brand that is me and my brother. We want to have a certain energy and atmosphere in here and the change has been amazing. We still have great service. That has never dimmed. But now, we have a fresher, simpler approach." According to the twins, Ramsay was incredibly satisfied on his follow-up visit to The Capri last January to see that Jeff and Jim were running an improved operation. They gave much of the credit to current head chef and kitchen manager James Dunn, who was hired between the original show and the revisit. Over the last two years, Dunn has become a staple of The Capri's staff, with his creative mindset, labeling system and ability to run the entire kitchen and cook simultaneously. Though new business and an expanded network are always ambitions of The Capri, the brothers have never forgotten about their local flavor, making appearances at nearby sporting events and offering discounts to students and members of various organizations. 
"I'll pay the tax on this one, so your total is 28 dollars," Jeff said to a customer who works at a local burger joint called The Bucket. They are also in the process of luring in old customers lost to the consolidated menu. The improvements include the addition of more specials that will resurrect dishes such as ravioli while allowing them to keep Ramsay's mantra of simplification and a fresh, reasonable stock. The main tune sung by the Thiel twins is still one of always moving onto bigger and better things and leaving the past behind them. The brothers just returned from a pizza expo in Las Vegas, where they learned everything from recipe ideas and newfangled ways of personnel management. "In this business, you can't let things go flat or stagnant," Jim said. "You have to keep forward, consistent and in better running order. Someday, we hope to open another [restaurant]. But for now, that's a long way away." 'I know this is where I'm supposed to be.' Navy vet Daryl Barker's long journey back to Occidental Occidental journalists find their start at public radio powerhouse KPCC 89.3 FM
Q: Avoid multiple insert into oracle table **Faster way to load 100,000 rows. Instead of multiple inserts want 1 insert ** I have a script that takes data from multiple oracle tables. Based on the order types, there are multiple inserts to a temp table. 100,000 or more records to be inserted based on order type. Multiple inserts based on order type is taking 12-14 minutes. Any faster way? prompt Querying freight ... SET serveroutput ON SIZE 1000000 DECLARE CURSOR c_progpcon IS SELECT cust_id ,div_no FROM dss.program_processing_controls; CURSOR c_custord(in_orgrole_id_customer IN dss.orders.orgrole_id_customer%TYPE) IS SELECT id order_id ,order_type ,order_number ,customer_po FROM dss.orders WHERE order_type = 'CUST' AND orgrole_id_customer = in_orgrole_id_customer; CURSOR c_outbound(in_order_id IN dss.orders.id%TYPE) IS SELECT ship.id ship_id ,ship.shipper_no ,shptrk.id shptrk_id ,shptrk.waybill ,shptrk.estimated_freight ,shptrk.actual_freight ,shptrk.dt_created FROM dss.shipments ship ,dss.shipment_trackings shptrk WHERE ship.order_id = in_order_id AND shptrk.ship_id = ship.id -- and ship.id = 2290451 AND shptrk.dt_created BETWEEN TO_DATE('01-JAN-2017','dd-MON-yyyy') AND TO_DATE('31-DEC-2017','dd-MON-yyyy'); CURSOR c_ordsch(in_order_id IN dss.orders.id%TYPE) IS SELECT ordsch.id ordsch_id FROM dss.orders ord ,dss.ordered_items orditm ,dss.ordered_item_schedules ordsch WHERE ord.id = in_order_id AND orditm.order_id = ord.id AND ordsch.orditm_id = orditm.id; CURSOR c_inbound(in_orditm_id IN dss.ordered_items.id%TYPE) IS SELECT recshp.id recshp_id ,recshp.waybill ,recshp.estimated_freight ,recshp.actual_freight ,recshp.dt_created FROM dss.built_items bltitm ,dss.received_shipments recshp WHERE bltitm.orditm_id_rcvd = in_orditm_id AND recshp.id = bltitm.recshp_id AND recshp.dt_created BETWEEN TO_DATE('01-JAN-2017','dd-MON-yyyy') AND TO_DATE('31-DEC-2017','dd-MON-yyyy') UNION ALL SELECT recshp.id recshp_id ,recshp.waybill ,recshp.estimated_freight ,recshp.actual_freight 
,recshp.dt_created FROM dss.received_items rcvitm ,dss.received_shipments recshp WHERE rcvitm.orditm_id_rcvd = in_orditm_id AND recshp.id = rcvitm.recshp_id AND recshp.dt_created BETWEEN TO_DATE('01-JAN-2017','dd-MON-yyyy') AND TO_DATE('31-DEC-2017','dd-MON-yyyy'); v_cust_processed NUMBER := 0; v_custord_processed NUMBER := 0; v_orgrole_id_customer dss.org_roles.id%TYPE; v_estimated_freight_custord adwaram.order_freight.estimated_freight%TYPE; v_actual_freight_custord adwaram.order_freight.actual_freight%TYPE; v_orditm_id_core dss.exchange_cores.orditm_id%TYPE; v_order_id_core dss.orders.id%TYPE; v_bltitm_id_core dss.po_histories.bltitm_id%TYPE; v_order_type dss.orders.order_type%TYPE; v_order_number dss.orders.order_number%TYPE; v_order_id_xfer dss.orders.id%TYPE; v_order_id_inbound dss.orders.id%TYPE; v_orditm_id_po ordered_items.id%TYPE; --anu v_calc_freight number:=0; v_method varchar2(4000); BEGIN FOR c_progpcon_rec IN c_progpcon LOOP v_cust_processed := v_cust_processed + 1; SELECT orgrole_id INTO v_orgrole_id_customer FROM dss.customers WHERE id = c_progpcon_rec.cust_id; FOR c_custord_rec IN c_custord(v_orgrole_id_customer) LOOP v_custord_processed := v_custord_processed + 1; -- outbound customer order FOR c_outbound_rec IN c_outbound(c_custord_rec.order_id) LOOP begin v_calc_freight:=DSS.PKG_ESTIMATED_FREIGHT.GET_ESTIMATED_FREIGHT (null,c_outbound_rec.ship_id,v_method); exception when others then v_calc_freight := 0; end; INSERT INTO adwaram.order_freight (order_type ,order_number ,shipper_no ,waybill ,actual_freight ,estimated_freight ,waybill_entered ,order_id ,ship_id ,shptrk_id ,recshp_id ,cust_id ,order_id_cust ,notes ,dt_created) VALUES (c_custord_rec.order_type ,c_custord_rec.order_number ,c_outbound_rec.shipper_no ,c_outbound_rec.waybill ,c_outbound_rec.actual_freight ,v_calc_freight--c_outbound_rec.estimated_freight ,c_outbound_rec.dt_created ,c_custord_rec.order_id ,c_outbound_rec.ship_id ,c_outbound_rec.shptrk_id ,NULL ,c_progpcon_rec.cust_id 
,c_custord_rec.order_id ,'OUTBOUND CUST ORDER' ,SYSDATE); END LOOP; FOR c_ordsch_rec IN c_ordsch(c_custord_rec.order_id) LOOP -- get core BEGIN SELECT xccore.orditm_id ,pohist.bltitm_id INTO v_orditm_id_po ,v_bltitm_id_core FROM dss.exchange_units xcunit ,dss.exchange_cores xccore ,dss.po_histories pohist WHERE xcunit.ordsch_id = c_ordsch_rec.ordsch_id AND xccore.xcitm_id = xcunit.xcitm_id AND pohist.orditm_id(+) = xccore.orditm_id; IF v_bltitm_id_core IS NOT NULL THEN v_order_id_core := dss.pkg_inven.func_get_order(v_bltitm_id_core ,'ORDER_ID'); v_orditm_id_core := dss.pkg_inven.func_get_order(v_bltitm_id_core ,'ORDITM_ID'); ELSE v_order_id_core := NULL; END IF; IF v_order_id_core IS NOT NULL THEN -- outbound order for received core (repair order or customer order) FOR c_outbound_rec IN c_outbound(v_order_id_core) LOOP begin v_calc_freight:=DSS.PKG_ESTIMATED_FREIGHT.GET_ESTIMATED_FREIGHT (null,c_outbound_rec.ship_id,v_method); exception when others then v_calc_freight := 0; end; SELECT order_type ,order_number INTO v_order_type ,v_order_number FROM dss.orders WHERE id = v_order_id_core; INSERT INTO adwaram.order_freight (order_type ,order_number ,shipper_no ,waybill ,actual_freight ,estimated_freight ,waybill_entered ,order_id ,ship_id ,shptrk_id ,recshp_id ,cust_id ,order_id_cust ,notes ,dt_created) VALUES (v_order_type ,v_order_number ,c_outbound_rec.shipper_no ,c_outbound_rec.waybill ,c_outbound_rec.actual_freight ,v_calc_freight--c_outbound_rec.estimated_freight ,c_outbound_rec.dt_created ,v_order_id_core ,c_outbound_rec.ship_id ,c_outbound_rec.shptrk_id ,NULL ,c_progpcon_rec.cust_id ,c_custord_rec.order_id ,'OUTBOUND '||v_order_type||' ORDER' ,SYSDATE); END LOOP; END IF; -- xfer related to customer order BEGIN SELECT ord.id INTO v_order_id_xfer FROM dss.orders ord ,dss.ordered_items orditm WHERE ord.order_type = 'XFER' AND ord.div_no = c_progpcon_rec.div_no AND orditm.order_id = ord.id AND orditm.customer_po = c_custord_rec.customer_po; FOR c_outbound_rec IN 
c_outbound(v_order_id_xfer) LOOP begin v_calc_freight:=DSS.PKG_ESTIMATED_FREIGHT.GET_ESTIMATED_FREIGHT ( null,c_outbound_rec.ship_id,v_method); exception when others then v_calc_freight := 0; end; SELECT order_type ,order_number INTO v_order_type ,v_order_number FROM dss.orders WHERE id = v_order_id_xfer; INSERT INTO adwaram.order_freight (order_type ,order_number ,shipper_no ,waybill ,actual_freight ,estimated_freight ,waybill_entered ,order_id ,ship_id ,shptrk_id ,recshp_id ,cust_id ,order_id_cust ,notes ,dt_created) VALUES (v_order_type ,v_order_number ,c_outbound_rec.shipper_no ,c_outbound_rec.waybill ,c_outbound_rec.actual_freight ,v_calc_freight--c_outbound_rec.estimated_freight ,c_outbound_rec.dt_created ,v_order_id_xfer ,c_outbound_rec.ship_id ,c_outbound_rec.shptrk_id ,NULL ,c_progpcon_rec.cust_id ,c_custord_rec.order_id ,'OUTBOUND '||v_order_type||' ORDER' ,SYSDATE); END LOOP; EXCEPTION WHEN NO_DATA_FOUND OR TOO_MANY_ROWS THEN NULL; END; -- inbound orders associate with exchange - v_orditm_id_core (if ro) -- v_orditm_id_po (csp po) IF v_orditm_id_core IS NOT NULL THEN FOR c_inbound_rec IN c_inbound(v_orditm_id_core) LOOP begin v_calc_freight:=DSS.PKG_ESTIMATED_FREIGHT.GET_ESTIMATED_FREIGHT ( c_inbound_rec.recshp_id,null,v_method); exception when others then v_calc_freight := 0; end; SELECT ord.order_type ,ord.order_number ,ord.id INTO v_order_type ,v_order_number ,v_order_id_inbound FROM dss.ordered_items orditm ,dss.orders ord WHERE orditm.id = v_orditm_id_core AND ord.id = orditm.order_id; INSERT INTO adwaram.order_freight (order_type ,order_number ,shipper_no ,waybill ,actual_freight ,estimated_freight ,waybill_entered ,order_id ,ship_id ,shptrk_id ,recshp_id ,cust_id ,order_id_cust ,notes ,dt_created) VALUES (v_order_type ,v_order_number ,NULL ,c_inbound_rec.waybill ,c_inbound_rec.actual_freight ,v_calc_freight--c_inbound_rec.estimated_freight ,c_inbound_rec.dt_created ,v_order_id_inbound ,NULL ,NULL ,c_inbound_rec.recshp_id ,c_progpcon_rec.cust_id 
,c_custord_rec.order_id ,'INBOUND '||v_order_type||' ORDER' ,SYSDATE); END LOOP; END IF; IF v_orditm_id_po IS NOT NULL THEN FOR c_inbound_rec IN c_inbound(v_orditm_id_po) LOOP begin v_calc_freight:=DSS.PKG_ESTIMATED_FREIGHT.GET_ESTIMATED_FREIGHT ( c_inbound_rec.recshp_id ,NULL , v_method ); exception when others then v_calc_freight := 0; end; SELECT ord.order_type ,ord.order_number ,ord.id INTO v_order_type ,v_order_number ,v_order_id_inbound FROM dss.ordered_items orditm ,dss.orders ord WHERE orditm.id = v_orditm_id_po AND ord.id = orditm.order_id; INSERT INTO adwaram.order_freight (order_type ,order_number ,shipper_no ,waybill ,actual_freight ,estimated_freight ,waybill_entered ,order_id ,ship_id ,shptrk_id ,recshp_id ,cust_id ,order_id_cust ,notes ,dt_created) VALUES (v_order_type ,v_order_number ,NULL ,c_inbound_rec.waybill ,c_inbound_rec.actual_freight ,v_calc_freight--c_inbound_rec.estimated_freight ,c_inbound_rec.dt_created ,v_order_id_inbound ,NULL ,NULL ,c_inbound_rec.recshp_id ,c_progpcon_rec.cust_id ,c_custord_rec.order_id ,'INBOUND '||v_order_type||' ORDER' ,SYSDATE); END LOOP; END IF; EXCEPTION WHEN NO_DATA_FOUND OR TOO_MANY_ROWS THEN NULL; END; END LOOP; END LOOP; END LOOP; COMMIT; dbms_output.put_line(TO_CHAR(v_cust_processed)||' customers processed.'); dbms_output.put_line(TO_CHAR(v_custord_processed)||' customer orders processed.'); END; /

A:

* Check the execution plans for all queries and make sure that they work as expected.
* Bulk processing with BULK COLLECT and FORALL, as described in this Oracle Magazine article by Steven Feuerstein, may help.
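As a sketch of the second suggestion: the outbound-customer branch of the script can be rewritten as one set-based cursor fetched with BULK COLLECT and inserted with a single FORALL per batch, replacing thousands of single-row inserts with one bulk bind per 10,000 rows. The collection type, the batch size, the JOIN rewrite of the nested cursors, and the use of the stored `estimated_freight` column (the original computes it per row via `DSS.PKG_ESTIMATED_FREIGHT.GET_ESTIMATED_FREIGHT`, which would need to be applied per fetched row, or inlined in the SELECT if the function is deterministic) are all illustrative assumptions, not taken from the original script:

```sql
DECLARE
  -- Assumes the SELECT list below matches adwaram.order_freight's column order
  TYPE t_freight_tab IS TABLE OF adwaram.order_freight%ROWTYPE;
  v_rows t_freight_tab;

  -- Set-based version of the nested c_progpcon / c_custord / c_outbound loops
  CURSOR c_src IS
    SELECT ord.order_type, ord.order_number, ship.shipper_no, shptrk.waybill,
           shptrk.actual_freight, shptrk.estimated_freight, shptrk.dt_created,
           ord.id, ship.id, shptrk.id, NULL, pc.cust_id, ord.id,
           'OUTBOUND CUST ORDER', SYSDATE
    FROM   dss.program_processing_controls pc
    JOIN   dss.customers cust            ON cust.id = pc.cust_id
    JOIN   dss.orders ord                ON ord.orgrole_id_customer = cust.orgrole_id
    JOIN   dss.shipments ship            ON ship.order_id = ord.id
    JOIN   dss.shipment_trackings shptrk ON shptrk.ship_id = ship.id
    WHERE  ord.order_type = 'CUST'
    AND    shptrk.dt_created BETWEEN DATE '2017-01-01' AND DATE '2017-12-31';
BEGIN
  OPEN c_src;
  LOOP
    FETCH c_src BULK COLLECT INTO v_rows LIMIT 10000;  -- batch size is illustrative
    EXIT WHEN v_rows.COUNT = 0;
    FORALL i IN 1 .. v_rows.COUNT                      -- one context switch per batch
      INSERT INTO adwaram.order_freight VALUES v_rows(i);
  END LOOP;
  CLOSE c_src;
  COMMIT;
END;
/
```

The inbound and transfer branches can each get an analogous set-based cursor. If the freight calculation can be expressed entirely in SQL, a plain `INSERT ... SELECT` (optionally direct-path) avoids PL/SQL collections altogether and is usually faster still.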
There is no need to worry about the accuracy of our list of the best home security system. Our team did everything they could to make a completely unbiased rating list. They compared such factors of each home security system consumer reports as material, weight, price, number of customer reviews, and many more details. We are open to suggestions so feel free to email us if you think we missed something. Beginners who do not know anything about home security system or online shopping will definitely find this page helpful. Our list of the best home security system includes 10 models with the best price/quality ratio. Choose a good home security system in a preferable price range! 1 year warranty and lifetime free technical support. Four free surveillance warning notes are also provided. Secure and stable long range wireless signals, use up to four hd wireless security cameras on the same system without worrying about signal interference. Multi-function, easy remote access anywhere anytime, the system supports video preview, recording, playback, backup, motion detection and email alarm. You can view the video on smart devices(ios, android) by wifi or 2g/3g/4g anywhere anytime. Please download free app ip pro3 from apple app store or android google play. Easy-to-install wireless system, cromorc 1080p nvr integrated with 10.1″ lcd monitor pairs cameras automatically by wireless without complicated settings. Hd wireless cameras offer both installation flexibility and ease since no video cables are needed. High definition waterproof security cameras, cromorc wireless hd security cameras offer crystal-clear high definition video that ensures crisp security footage and enhanced night vision capability. Cameras built in 3pcs array ir leds and smart automatic switch ir cut filter to support continuous day/night surveillance. Ip66 waterproof cameras with metal housings bring in longer service life. 
Built in 1tb security grade hard drive, — support ultra-long continuous recording and backup by usb. Dvr can be set to automatically overwrite the oldest internally stored footage or you can transfer those video files by usb to a memory stick or external hard drive. View from anywhere anytime, — the free app zosi smart lets you see all of your cameras in one place. Check in on your home or business wherever you have an internet connection. Free app for live view & playback on smart devices with wifi or 2g/3g/4g anywhere anytime. Smart phone support: iphone, android, ipad. Customizable motion detection zone and alerts, — be notified when there are unexpected movements. Smart notifications with image will be pushed to your smartphone via iphone/android app. You can also set up the detection zone from your dvr to minimize false alarms. 2 years warranty, -2 years quality warranty.60 days money back guarantee. 8ch 1080p hd-tvi dvr/8pcs 1080p weatherproof camera, — get a sharp and crisp image day or night with the camera's automatic ir-cut filter. Aluminum weatherproof housing, cameras can be used indoors and outdoors. 1tb hard drive.8 channel dvr home security camera system with 6pcs ip66 weatherproof bullet indoors&outdoors 1080p security cameras, provide you crystal super clear image, night vision up to 65ft/20m. Pre-installed 1tb hdd for continuous recording at highest resolution, protect your home and business 24/7. Smart remote access, cctv camera security system allows you remote access simultaneously at anytime, anywhere with wifi/4g on smart phone, pad and windows/mac pc. Scan qr code or download danale on mobile phone (free app), super easy setup in seconds. One kit can only be connected to one authorized device user, who can share the device with friends or relatives etc. Install cd within package on your pc so that you can remote view by cms software. Christmas gift, email: safevantservice hotmail. 
Safevant security camera system with 60-day money back guarantee, two year warranty and free lifetime tech support. If we miss your call, please email us at safevantservice hotmail. Motion detection&alerts, home security system supports motion detection and recording. When motion is detected, you will get mobile notification and email alerts so that you will always be notified of what's happening. Motion recording starts when motion is detected and stops when motion detection is over. We kindly suggest setting up timed recording and motion recording at the same time, since motion recordings last a short time. Not wireless, surveillance camera system comes with 2 power supply adapters: 1.12v/2a for ahd dvr.12v/4a with 2, power splitter for 6 pcs surveillance cameras. Each cctv camera matched with 60ft bnc video cable. How safe is it to purchase a home security system online? Buying a home security system online is absolutely safe! In fact, it might be even safer than buying them in a local store, since you do not need to leave your house. Moreover, if you get a malfunctioning item you can always get a refund or a replacement. Plug and play,: hd home security system supports 720p video input, connect all security cameras to power supplies, ahd / cvi / tvi / analog video inputs, connect dvr to tv/pc monitor with hdmi/vga cable. Hd video security system supports recording motion detection and outputting 1080n resolution via hdmi or vga. View anytime&anywhere,: it allows you to view live video and playback video remotely anytime and anywhere by phone/pad(support ios&android system) via wifi or 2g/3g/4g network. Download free app danale from android google play or apple app store, register a new account, then add device id. You can playback history video in your palm by app. Pc/laptop view: only support windows system(not available for mac), you can view and playback video by cms software(e-mail to vipbuy100 hotmail. 
Super waterproof,: 8pcs 720p bullet cameras, ip66 super waterproof grade, durable outdoor weatherproof casing. 100% satisfaction product guarantee 30-day money-back guarantee with 2 year warranty(provide free brand-new parts for replacement in warranty) and lifetime free technical support. Com or call toll free: 1-866-388-1666(available after 7: 00 pm at pacific time). Customizable motion detection,: full hd indoor and outdoor surveillance system with 1080p ahd day/night outdoor bullet cameras supports intelligent motion detection email alert with images. Any movement detected will be recorded. Alarm notification will be sent immediately to your smart phone and get alarm alert with image to will be sent to your email when set it up. Stunning viewing,: powerful ir-led for night vision up to 65ft to keep your home and business safe in day and night.16 channels of hd realtime record with pre-installed 1 tb hdd, support smartphone / web browser remote viewing. Are new home security systemsupposed to be expensive? Obviously, a good home security system cannot be very cheap. If you want to buy a decent product, you should not be trying to save money on it. We recommend you to order the best models from our list of the best home security systemif you can afford it. Worry-free warranty, easy-to-reach lifetime email support.3-year warranty and 90-day money back guarantee. Amazing night vision, the home security cameras have greater photo-sensitivity than ordinary infrared cameras. And they are able to produce amazingly clear night vision even in deep darkness, almost free of image noises. 1080p hd starlight effect, the 8ch 3mp 5-in-1 security dvr works with four starlight surveillance cameras to deliver bright and color-saturated pictures. The 1080p security cameras use a sensor with a larger target surface as well as a lens with a larger aperture, thus bring a feast to your eyes. 
Customized motion detection, mark out the relevant areas for motion detection to minimize false alerts. And you will gain complete peace of mind by receiving instant email alerts with snapshots and app pushes. Advanced h.264+ compression, enable h.264+, and you can record continuously for 816 hours with a 1tb hdd on normal conditions. In contrast, h.264 takes 3tb hdd to achieve the same hours. With the h.264+ compression, you can effectively save 50% on storage. Customized motion detection, mark out the relevant areas for motion detection to minimize false alerts. And you will gain complete peace of mind by receiving instant email alerts with snapshots and app alarm pushes. Quick remote access, this whole surveillance system can be accessed and controlled remotely via the annke vision app on your mobile devices. Thus, you can review and watch live videos anytime from anywhere at convenience. Worry-free warranty, all annke products have passed ce & fcc, hdmi senior member certified and iso 9001 certified. Easy-to-reach lifetime telephone and email support.2-year warranty and 60-day money back guarantee. Crisp and smooth footage, the robust 8ch 1080p lite 5-in-1 dvr works perfectly with the ip66 weatherproof cameras. You can enjoy 1080p hd live viewing.1080p lite recording and playback, and night vision can cover up to 100ft. Advanced h.264+ video compression, with this advanced technology, more memory space will be saved in storing the same surveillance video. It helps deliver much smoother video recordings and saves you the cost of buying an extra hdd. 4 channel hd wireless security camera system for villa, home, office, shop, warehouse or elsewhere(indoor/ outdoor). Please note that these are not battery-powered cameras. Wireless camera system doesn't mean you can use it without any cables. Power supply still needed to power on the cameras and nvr (smonet doesn't take charge of installation). Powered by stable power from nearby outlets.24×7 hours live surveillance. 
1 year warranty and lifetime free technical support. Provide free brand-new parts for replacement. Provide free 10db antenna and 10/16/30 feet power extension cable. Us toll free: 1-866-678-0666(available after 5: 00 pm at pacific time). Continuous day/night surveillance is accomplished by ir-cut smart system, powerful ir-led for night vision. Getting a crystal clear picture, even in total darkness. Wireless video security system is easy to setup and diy installation without any video cables. Connect the nvr and cameras with power supply provided. Connect pc/tv monitor to nvr with a vga/hdmi cable. Connect the router lan port to nvr wan port with network cable provided. The system builds a more powerful wireless signal coverage and make the connection quick and easy. Working without disturbing your regular internet speed. How much money should I spend for a nice home security system? Price is a very important factor of any home security system. Usually, the higher it costs the better its quality. However, we recommend you to consider all characteristics of a home security system. There are many models with an average price tag that can compete with the most expensive options. Or call us: +1(352)702-5883(eastern time monday to friday: 9: 00am — 5: 00pm), +1(352)396-4031(eastern time monday to friday: 8: 00 pm to 10: 00 pm, saturday and sunday: 10: 00 am to 5: 00 pm). Provide free 10ft antenna and power extension cable. If you buy accessories, there will be a 15% discount. Wireless, please note the system is not battery operated. Both nvr and cameras need to be plugged into nearby power outlets. Within the system, nvr and cameras communicate automatically and wirelessly. This is a robust wireless system, however, under certain circumstances, due to factors such as distance and obstacles etc, the system wireless signals may not be strong enough to support live monitoring, then a special wi-fi extender is recommended to extend greater wireless. 
Expandable, this system includes 1x 8-channel 1080p (1920×1080) nvr and 4 x 720p (1920 x 720) wireless ip66 waterproof cameras. It is ideal for both indoor and outdoor uses, perfect for villas, homes, offices, workshops, warehouses, restaurants, etc. This expandable system can support up to 8 cameras. Additional cameras may be purchased on amazon. Oossxx is not responsible for installation. Easy setup, the system is simple to install and set up. For online monitoring and remote access, please download free app «ip pro» or «eseecloud» from android google play or apple app store. This system supports both wireless/wired connection. When wireless signals cannot be covered, video can be transmitted by setting up wired network. System supports ac110-240v input. Hd security camera system: wireless cctv system(8ch 1080p nvr recorder+4pcs 960p cams+1tb hard drive pre-installed) is used for villa, home, office, shop, warehouse or elsewhere(indoor/ outdoor). Please note that these are not battery-powered cameras. Wireless camera system doesn't mean you can use it without any cables. Power supply still needed to power on the cameras and nvr (smonet doesn't take charge of installation). Powered by stable power from nearby outlets.24×7 hours live surveillance. Easy remote view: wireless surveillance system allows you to view the live video remotely anytime and anywhere by phone and pad(available for android & ios system, not windows users). Download free app ip pro or eseecloud from android google play or apple app store. Register a new account, then add device id. You can view the video by wifi or 2g/3g/4g network. Pc/laptop view: windows system/mac system (e-mail to smonet hotmail. Special feature: surveillance system supports sync-playback, video backup and video detection. You can receive email alerts upon motion detection or app alert when set it up (noted: please set it up properly to avoid email blast). Simply record and playback on your mobile devices. 
Seamlessly stream video directly to your smartphone, tablet and pc. Keep an eye on your belongings anywhere and anytime. Getting a crystal clear picture in total darkness. Expandable system: support to add more cameras. Only compared with smonet 1080p or 960p ip camera. Camera asin code is b01ir4txa8 or b01ir4tvgo. If need, please search code on amazon. 1 year warranty and lifetime free technical support. Provide free brand-new parts for replacement. Provide free 10db antenna and 10/16/30 feet power extension cable. Us toll free: 1-866-678-0666(available after 5: 00 pm at pacific time). True plug and play: wireless video security system is easy to setup and diy installation without any video cables. Connect the nvr and cameras with power supply provided. Connect pc/tv monitor to nvr with a vga/hdmi cable. Connect the router lan port to nvr wan port with network cable provided. The system builds a more powerful wireless signal coverage and make the connection quick and easy. Working without disturbing your regular internet speed. Weather resistant cameras — the 16 4mp super hd bullet cameras (sdc-89440bf) are ideal for both indoor and outdoor environment, the cameras are ip66 rated which is guaranteed to endure frigid winters and sultry summers. Withstanding extreme temperatures of -22 f to 122 f (-30 c to 50 c). Wide angle and night vision — get a 105 wide angle camera view. The true day and night with ir cut filter allows you to record at 4mp super hd day and night, with night vision up to 130ft. Large storage — the super hd 16 channel dvr comes with 2tb hard drive, allowing you to store hours of video. The dvr also utilizes high compression technology which allows you to record even longer. Motion zone and event detection — select the desired areas to detect motion and avoid false alarms that may trigger the system. Get alerts when there is motion, tampering or video loss detected. 
Remote vewing — monitor from anywhere, anytime using your smartphone, tablet, pc or mac. What should be my main choice criteria when buying a home security system? If you are buying a home security system for the fist time in your life and do not really know much about it, you should always look carefully for user reviews from other customers. Learn from the experience of other people and try to find a home security system according to its material and assembly quality. What should be the warranty period for a good home security system? You should buy home security systemfrom brands with customer-friendly warranty policy. Trustworthy home security system manufacturers offer two-year or even longer warranty period. You would not have any problems in case of any issues — simply take a broken product to the service center to get it replaced or fixed.
{"url":"https:\/\/www.aimsciences.org\/article\/doi\/10.3934\/cpaa.2007.6.505","text":"# American Institute of Mathematical Sciences\n\n\u2022 Previous Article\nBlowup behaviors for degenerate parabolic equations coupled via nonlinear boundary flux\n\u2022 CPAA\u00a0Home\n\u2022 This Issue\n\u2022 Next Article\nBoundary blow-up for elliptic problems involving exponential nonlinearities with nonlinear gradient terms and singular weights\nJune\u00a0 2007,\u00a06(2):\u00a0505-520. doi:\u00a010.3934\/cpaa.2007.6.505\n\n## Burgers' type equation with vanishing higher order\n\n 1 Graduate School of Pure and Applied Sciences, University of Tsukuba, 305-8571, Ibaraki, Japan, Japan\n\nReceived\u00a0 May 2006 Revised\u00a0 December 2006 Published\u00a0 March 2007\n\nWe consider a scalar conservation law of Burgers' type: $u_t+(u^2\/2)_x = \\varepsilon u_{x x}-\\delta u_{x x x}+\\gamma u_{x x x x x}$ ($(x, t)\\in \\mathbf R \\times$ (0, \u221e)). We prove that if $\\varepsilon$, $\\delta=\\delta(\\varepsilon)$, $\\gamma=\\gamma(\\varepsilon)$ tend to $0$, then for $q\\in (2, 16\/5)$, the sequence {$u^\\varepsilon$} of solutions converges in $L^k(0, T^\\star; L^p(\\mathbf R)) (k<$ \u221e, p$<$q) to the unique entropy solution $u\\in L^\\infty (0, T^\\star; L^q(\\mathbf R))$ to the inviscid Burgers equation $u_t+(u^2\/2)_x = 0$. More precisely we show that, under the condition $\\delta=O(\\varepsilon^{3\/(3-q)})$ and $\\gamma=O(\\varepsilon^4$ $\\delta^{(8q-7)\/9})$ for $q\\in(2,3)$ or $\\delta=O(\\varepsilon^{12\/(19-4q)}$ $\\gamma^{3\/(19-4q)})$ and $\\gamma=O(\\varepsilon^{4}$ $\\delta^{(8q-7)\/9})$ for $q\\in[3,16\/5)$, the limit of the sequence is the entropy solution. Moreover if we assume the uniform boundedness of {$u^\\varepsilon(\\cdot,t)$} in $L^q(\\mathbf R)$ for $q>2$, the condition $\\delta=o(\\varepsilon^3)$ and $\\gamma=o(\\varepsilon^4\\delta)$ is sufficient to establish the conclusion. 
We derive new a priori estimates which enable to use the technique of the compensated compactness, the Young measures and the entropy measure-valued solutions.\nCitation: Naoki Fujino, Mitsuru Yamazaki. Burgers' type equation with vanishing higher order. Communications on Pure & Applied Analysis, 2007, 6 (2) : 505-520. doi: 10.3934\/cpaa.2007.6.505\n [1] Ammari Zied, Liard Quentin. On uniqueness of measure-valued solutions to Liouville's equation of Hamiltonian PDEs. Discrete & Continuous Dynamical Systems - A, 2018, 38 (2) : 723-748. doi: 10.3934\/dcds.2018032 [2] Evgeny Yu. Panov. On a condition of strong precompactness and the decay of periodic entropy solutions to scalar conservation laws. Networks & Heterogeneous Media, 2016, 11 (2) : 349-367. doi: 10.3934\/nhm.2016.11.349 [3] Stefano Bianchini, Elio Marconi. On the concentration of entropy for scalar conservation laws. Discrete & Continuous Dynamical Systems - S, 2016, 9 (1) : 73-88. doi: 10.3934\/dcdss.2016.9.73 [4] Azmy S. Ackleh, Vinodh K. Chellamuthu, Kazufumi Ito. Finite difference approximations for measure-valued solutions of a hierarchically size-structured population model. Mathematical Biosciences & Engineering, 2015, 12 (2) : 233-258. doi: 10.3934\/mbe.2015.12.233 [5] Boris P. Andreianov, Giuseppe Maria Coclite, Carlotta Donadello. Well-posedness for vanishing viscosity solutions of scalar conservation laws on a network. Discrete & Continuous Dynamical Systems - A, 2017, 37 (11) : 5913-5942. doi: 10.3934\/dcds.2017257 [6] Shijin Deng, Weike Wang. Pointwise estimates of solutions for the multi-dimensional scalar conservation laws with relaxation. Discrete & Continuous Dynamical Systems - A, 2011, 30 (4) : 1107-1138. doi: 10.3934\/dcds.2011.30.1107 [7] Giuseppe Maria Coclite, Lorenzo di Ruvo. A note on the convergence of the solutions of the Camassa-Holm equation to the entropy ones of a scalar conservation law. Discrete & Continuous Dynamical Systems - A, 2016, 36 (6) : 2981-2990. 
doi: 10.3934\/dcds.2016.36.2981 [8] Young-Sam Kwon. On the well-posedness of entropy solutions for conservation laws with source terms. Discrete & Continuous Dynamical Systems - A, 2009, 25 (3) : 933-949. doi: 10.3934\/dcds.2009.25.933 [9] Darko Mitrovic, Ivan Ivec. A generalization of $H$-measures and application on purely fractional scalar conservation laws. Communications on Pure & Applied Analysis, 2011, 10 (6) : 1617-1627. doi: 10.3934\/cpaa.2011.10.1617 [10] Darko Mitrovic. New entropy conditions for scalar conservation laws with discontinuous flux. Discrete & Continuous Dynamical Systems - A, 2011, 30 (4) : 1191-1210. doi: 10.3934\/dcds.2011.30.1191 [11] Yanning Li, Edward Canepa, Christian Claudel. Efficient robust control of first order scalar conservation laws using semi-analytical solutions. Discrete & Continuous Dynamical Systems - S, 2014, 7 (3) : 525-542. doi: 10.3934\/dcdss.2014.7.525 [12] Yan Chen, Kewei Zhang. Young measure solutions of the two-dimensional Perona-Malik equation in image processing. Communications on Pure & Applied Analysis, 2006, 5 (3) : 617-637. doi: 10.3934\/cpaa.2006.5.617 [13] Philip Trautmann, Boris Vexler, Alexander Zlotnik. Finite element error analysis for measure-valued optimal control problems governed by a 1D wave equation with variable coefficients. Mathematical Control & Related Fields, 2018, 8 (2) : 411-449. doi: 10.3934\/mcrf.2018017 [14] Gui-Qiang Chen, Monica Torres. On the structure of solutions of nonlinear hyperbolic systems of conservation laws. Communications on Pure & Applied Analysis, 2011, 10 (4) : 1011-1036. doi: 10.3934\/cpaa.2011.10.1011 [15] C. M. Khalique, G. S. Pai. Conservation laws and invariant solutions for soil water equations. Conference Publications, 2003, 2003 (Special) : 477-481. doi: 10.3934\/proc.2003.2003.477 [16] Laurent L\u00e9vi, Julien Jimenez. Coupling of scalar conservation laws in stratified porous media. Conference Publications, 2007, 2007 (Special) : 644-654. 
doi: 10.3934\/proc.2007.2007.644 [17] Georges Bastin, B. Haut, Jean-Michel Coron, Brigitte d'Andr\u00e9a-Novel. Lyapunov stability analysis of networks of scalar conservation laws. Networks & Heterogeneous Media, 2007, 2 (4) : 751-759. doi: 10.3934\/nhm.2007.2.751 [18] Giuseppe Maria Coclite, Lorenzo Di Ruvo. A note on the convergence of the solution of the high order Camassa-Holm equation to the entropy ones of a scalar conservation law. Discrete & Continuous Dynamical Systems - A, 2017, 37 (3) : 1247-1282. doi: 10.3934\/dcds.2017052 [19] Ezzeddine Zahrouni. On the Lyapunov functions for the solutions of the generalized Burgers equation. Communications on Pure & Applied Analysis, 2003, 2 (3) : 391-410. doi: 10.3934\/cpaa.2003.2.391 [20] Leonardi Filippo. A projection method for the computation of admissible measure valued solutions of the incompressible Euler equations. Discrete & Continuous Dynamical Systems - S, 2018, 11 (5) : 941-961. doi: 10.3934\/dcdss.2018056\n\n2018\u00a0Impact Factor:\u00a00.925","date":"2019-07-18 00:30:23","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.6597127914428711, \"perplexity\": 2503.7530029414115}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": 
\"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2019-30\/segments\/1563195525483.62\/warc\/CC-MAIN-20190718001934-20190718023934-00555.warc.gz\"}"}
If your debit card is lost or stolen, please call (501) 982-4511 or (800) 982-4511 immediately. Our 24/7 Customer Support team will be able to assist you. Unless you have been grossly negligent or engaged in fraud, you will not be liable for any unauthorized transactions made with your lost or stolen Mastercard debit card. This does not apply to ATM transactions or transactions using your Personal Identification Number, which are not processed by Mastercard. Member FDIC © First Arkansas Bank & Trust.

Currently, the zip code you have entered for eligibility does not qualify you to open an account with FAB&T. If you have any questions or believe we have reached this decision in error, please contact us at (800) 982-4511 or via e-mail at ebranch@fabandt.com.
{ "redpajama_set_name": "RedPajamaC4" }
2,664
Q: How to convert a numpy array to GeoTIFF in python? I haven't found an answer to this simple question. Please help. How do I convert Qcal (a numpy array) to a TIFF image? Everything I've found doesn't really work.

    import math
    import numpy as np
    from osgeo import gdal

    substr1 = 'RADIANCE_MULT_BAND_10'
    substr2 = 'RADIANCE_ADD_BAND_10'
    substr3 = 'K1_CONSTANT_BAND_10'
    substr4 = 'K2_CONSTANT_BAND_10'
    RADIANCE_MULT_BAND_10 = 1
    RADIANCE_ADD_BAND_10 = 1
    K1_CONSTANT_BAND_10 = 1
    K2_CONSTANT_BAND_10 = 1

    with open('LC08_L1TP_180028_20170623_20170630_01_T1_MTL.txt') as file:
        for line in file:
            if substr1 in line:
                startIndex = line.find('=')
                RADIANCE_MULT_BAND_10 = float(line[startIndex + 2:])
            if substr2 in line:
                startIndex = line.find('=')
                RADIANCE_ADD_BAND_10 = float(line[startIndex + 2:])
            if substr3 in line:
                startIndex = line.find('=')
                K1_CONSTANT_BAND_10 = float(line[startIndex + 2:])
            if substr4 in line:
                startIndex = line.find('=')
                K2_CONSTANT_BAND_10 = float(line[startIndex + 2:])

    ds = gdal.Open("B10.tif")
    # Quantized and calibrated standard product pixel values (DN)
    Qcal = np.array(ds.GetRasterBand(1).ReadAsArray())

    for i in range(Qcal.shape[0]):
        for j in range(Qcal.shape[1]):
            Qcal[i][j] = RADIANCE_MULT_BAND_10 * Qcal[i][j] + RADIANCE_ADD_BAND_10
            Qcal[i][j] = K2_CONSTANT_BAND_10 / math.log(K1_CONSTANT_BAND_10 / Qcal[i][j] + 1)

(Note: the last line originally read `math.log1p(K1_CONSTANT_BAND_10/Qcal+1)`, which divides by the whole array and adds 1 twice, since `log1p(x)` is `log(1 + x)`.)

A: Do you want to modify the existing image? If so, you should open it with the update option like this:

    ds = gdal.Open("B10.tif", gdal.GA_Update)

Next do your modifications to your np array as you are doing. You are actually just editing the numpy array Qcal and not the actual raster band in the tif. Now to save your modifications into the same band you can do the following:

    ds.GetRasterBand(1).WriteArray(Qcal)

This writes the updated Qcal array into the raster band of the tif.

If you want to save it as a new image, you can create a copy of the existing image and write the Qcal array into it like this:

    driver = gdal.GetDriverByName('GTiff')
    dst_ds = driver.CreateCopy("example.tif", ds, 1)
    dst_ds.GetRasterBand(1).WriteArray(Qcal)
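As a side note, the per-pixel double loop in the question can be replaced by vectorized numpy operations before writing the band back. A sketch with made-up calibration constants standing in for the values parsed from the MTL file:

```python
import numpy as np

# Hypothetical calibration constants; in practice these come from the MTL file.
RADIANCE_MULT_BAND_10 = 3.342e-4
RADIANCE_ADD_BAND_10 = 0.1
K1_CONSTANT_BAND_10 = 774.8853
K2_CONSTANT_BAND_10 = 1321.0789

# Fake DN values standing in for ds.GetRasterBand(1).ReadAsArray().
Qcal = np.array([[10000.0, 20000.0], [30000.0, 40000.0]])

# DN -> top-of-atmosphere radiance, then radiance -> brightness temperature,
# computed for the whole array at once instead of a Python double loop.
radiance = RADIANCE_MULT_BAND_10 * Qcal + RADIANCE_ADD_BAND_10
brightness_temp = K2_CONSTANT_BAND_10 / np.log(K1_CONSTANT_BAND_10 / radiance + 1.0)
```

The resulting `brightness_temp` array can then be passed to `WriteArray` exactly as shown in the answer.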
{ "redpajama_set_name": "RedPajamaStackExchange" }
7,831
Ozzy update: it's a miracle! We went to the doggie ophthalmologist this morning, and Dr. M. told us that Ozzy's got 100% vision in his right eye and 90% in his left eye. The side effects of the prednisone--lethargy, general mopey-ness, peeing like there ain't no tomorrow--will subside as we lower the dosage over the next couple of weeks. He will need to take a low dosage of prednisone for the rest of his little doggie life, but it won't have any of these adverse effects on him. It will just keep his little doggie eyeballs from breaking again. Good boy!
{ "redpajama_set_name": "RedPajamaC4" }
1,084
Q: How do I open a file with the associated application when an instance of that app is already running — bringing the running instance to the front and opening the file there?

I wrote an app associated with specific file extensions. Double-clicking a file with one of those extensions launches the app through the registry:

    # location HKEY_CLASSES_ROOT\MyApp.Bif\shell\open\command
    "C:\Users\alpha\AppData\Roaming\MyCompany\MyApp.exe" "%1"

Now I would like the file to be opened by the already-running instance: bring the app to the front, give it focus, and open the file there. The only approach I have thought of is:

* open a new instance of the app;
* check whether there is an already-running instance;
* if not, open the file in the new instance;
* if yes, notify the running instance to open the file, then close the new instance.

For the notification: TCP? A named pipe? Is there a common function or a more direct method? Thanks in advance.
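One common way to implement the "check and notify" steps is to take an exclusive local lock at startup and forward the file path to the existing instance if the lock is already held. A minimal, language-agnostic sketch in Python, using a localhost TCP port as the lock (the port number and function names are made up for illustration; on Windows a named pipe or mutex works the same way):

```python
import socket

LOCK_PORT = 48621  # arbitrary fixed port acting as the single-instance lock

def try_become_primary():
    """Return a listening socket if we are the first instance, else None."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.bind(("127.0.0.1", LOCK_PORT))  # only one process can bind this port
        s.listen(1)
        return s  # we are the primary instance; keep this socket open
    except OSError:
        s.close()
        return None  # another instance already holds the lock

def forward_to_primary(path):
    """Send the file path to the already-running instance, then exit."""
    with socket.create_connection(("127.0.0.1", LOCK_PORT)) as conn:
        conn.sendall(path.encode("utf-8"))

# Startup logic (sketch):
#   lock = try_become_primary()
#   if lock is None:
#       forward_to_primary(file_path_from_argv); exit
#   else:
#       run the main loop, accept() on `lock`, bring the window to the
#       front and open each received path.
```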
{ "redpajama_set_name": "RedPajamaStackExchange" }
4,977
{"url":"https:\/\/docs.ethermon.io\/play-2d\/energy-system","text":"Energy system\nUPDATE (DEC 25, 2021): Energy is recharged twice daily. All players now receive the same amount of energy across all 6 ladders, according to the following.\n\u2022 10 energy recharge (per ladder): Players owning less than 3 on-chain mons (i.e., using 1-3 off-chain Mons to play for free) will be refueled to 10 maximum energy every 12 hours across ladders.\n\u2022 20 energy recharge (per ladder): Players owning at least 3 on-chain Mons will be refueled up to 20 maximum energy every 12 hours, plus bonus energy based on their additional Mon holdings (see below).\nEach battle costs 2 energy and forming a new team in Tournament Mode costs 4 energy (except for the first time you join each ladder with a new account, in which the energy fee is waived.)\nYour energy level is displayed on your Quests & Achievements and Battle dashboards and cannot accumulate beyond a maximum amount determined by your investment in the game.\n\n## Bonus energy system\n\nFor users who have at least 3 on-chain Mons, you can still further increase your maximum energy \u2014 recharging up to 30 every 12 hours.\nFor each additional on-chain Mon you own (excluding on-chain starter mons: Kyari, Omnom, Mintol, Hambrisk, and Snobbit), you can increase the amount of ENERGY you receive beyond the base of 20 according to the formula below. Please note that this will still only top you off to the following max energy counts based on your Mon holdings.\n$ROUNDDOWN(\u221aMons)$\nSince 1 battle requires 2 energy, the following table shows the bonus energy\u00a0you will receive in intervals of 2 for quick reference. 
The column \"on-chain Mons owned\" shows the minimum number of Mons you must own to receive each energy bonus.\nOn-chain Mons owned\n10\n0\n20\n3\n22\n7\n24\n19\n26\n39\n28\n67\n30\n103","date":"2022-01-17 11:01:33","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 1, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.2454361617565155, \"perplexity\": 4335.783460741442}, \"config\": {\"markdown_headings\": true, \"markdown_code\": false, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2022-05\/segments\/1642320300533.72\/warc\/CC-MAIN-20220117091246-20220117121246-00604.warc.gz\"}"}
{"url":"https:\/\/www.physicsforums.com\/threads\/hermitian-inner-product-btw-2-complex-vectors-angle-btw-them.333047\/","text":"# Hermitian inner product btw 2 complex vectors & angle btw them\n\n1. Aug 27, 2009\n\n### raja0088\n\nWhat is the relationship btw the Hermitian inner product btw 2 complex vectors & angle btw them.\nx,y are 2 complex vectors.\n$$\\theta$$ angle btw them\n\nwhat is the relation btw $$x^{H}$$y and cos($$\\theta$$)??\nAny help will be good?\n\n2. Aug 27, 2009\n\n### g_edgar\n\nWhat is definition of \"angle\" in complex space? Same as the underlying real space? Then, of course, use the underlying real inner product, which is the real part of the complex inner product.","date":"2018-07-19 10:45:28","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 1, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.594879686832428, \"perplexity\": 2722.1410136578424}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2018-30\/segments\/1531676590794.69\/warc\/CC-MAIN-20180719090301-20180719110301-00443.warc.gz\"}"}
import NativeObject from './NativeObject';
import {toValueString} from './Console';
import CryptoKey, {Algorithm, AlgorithmECDH, AlgorithmHKDF, AlgorithmInternal, _CryptoKey} from './CryptoKey';
import {allowOnlyKeys, allowOnlyValues, getBuffer, getCid, getNativeObject} from './util';
import checkType from './checkType';

export type TypedArray = Int8Array | Uint8Array | Uint8ClampedArray
  | Int16Array | Uint16Array | Int32Array | Uint32Array;

export default class Crypto {

  readonly subtle!: SubtleCrypto;
  private readonly _nativeObject!: NativeCrypto;

  constructor() {
    Object.defineProperties(this, {
      _nativeObject: {enumerable: false, writable: false, value: NativeCrypto.getInstance()},
      subtle: {enumerable: false, writable: false, value: new SubtleCrypto()}
    });
  }

  getRandomValues(typedArray: TypedArray) {
    if (arguments.length === 0) {
      throw new Error('Not enough arguments to Crypto.getRandomValues');
    }
    if (
      !ArrayBuffer.isView(typedArray)
      || typedArray instanceof Float32Array
      || typedArray instanceof Float64Array
    ) {
      throw new Error(`Argument ${toValueString(typedArray)} is not an accepted array type`);
    }
    return this._nativeObject.getRandomValues(typedArray);
  }

}

const validAlgorithms = new Set(['SHA-1', 'SHA-256', 'SHA-384', 'SHA-512']);

class SubtleCrypto {

  private readonly _nativeObject!: NativeCrypto;

  constructor() {
    Object.defineProperty(this, '_nativeObject', {
      enumerable: false, writable: false, value: NativeCrypto.getInstance()
    });
  }

  async digest(algorithm: string, data: ArrayBuffer | TypedArray) {
    if (arguments.length < 2) {
      return Promise.reject(new TypeError('Not enough arguments to SubtleCrypto.digest'));
    }
    if (!validAlgorithms.has(algorithm)) {
      return Promise.reject(new TypeError(`Algorithm: Unrecognized name ${algorithm}`));
    }
    if (!getBuffer(data)) {
      return Promise.reject(new TypeError(`Argument ${toValueString(data)} is not an accepted array type`));
    }
    return new Promise(
      (resolve, reject) => this._nativeObject.subtleDigest({algorithm, data, resolve, reject})
    );
  }

  async importKey(
    format: string,
    keyData: ArrayBuffer | TypedArray,
    algorithm: Algorithm,
    extractable: boolean,
    keyUsages: string[]
  ): Promise<CryptoKey> {
    if (arguments.length !== 5) {
      throw new TypeError(`Expected 5 arguments, got ${arguments.length}`);
    }
    allowOnlyValues(format, ['spki', 'pkcs8', 'raw'], 'format');
    checkType(getBuffer(keyData), ArrayBuffer, {name: 'keyData'});
    if (typeof algorithm === 'string') {
      allowOnlyValues(algorithm, ['ECDH', 'AES-GCM', 'HKDF'], 'algorithm');
    } else {
      checkType(algorithm, Object, {name: 'algorithm'});
      allowOnlyValues(algorithm.name, ['ECDH', 'AES-GCM'], 'algorithm.name');
      if (algorithm.name === 'ECDH') {
        allowOnlyKeys(algorithm, ['name', 'namedCurve']);
        allowOnlyValues(algorithm.namedCurve, ['P-256'], 'algorithm.namedCurve');
      } else {
        allowOnlyKeys(algorithm, ['name']);
      }
    }
    checkType(extractable, Boolean, {name: 'extractable'});
    checkType(keyUsages, Array, {name: 'keyUsages'});
    const nativeObject = new _CryptoKey();
    const algorithmKeys = Object.keys(algorithm);
    const algorithmInternal = algorithmKeys.length === 1 && algorithmKeys[0] === 'name'
      ? (algorithm as {name: string}).name as AlgorithmInternal
      : algorithm as AlgorithmInternal;
    await nativeObject.import(format, keyData, algorithmInternal, extractable, keyUsages);
    return new CryptoKey(nativeObject, {
      algorithm: algorithmInternal,
      extractable,
      usages: Object.freeze(keyUsages.concat())
    });
  }

  async deriveBits(
    algorithm: Algorithm,
    baseKey: CryptoKey,
    length: number
  ): Promise<ArrayBuffer> {
    if (arguments.length !== 3) {
      throw new TypeError(`Expected 3 arguments, got ${arguments.length}`);
    }
    checkDeriveAlgorithm(algorithm);
    checkType(baseKey, CryptoKey, {name: 'baseKey'});
    checkType(length, Number, {name: 'length'});
    const nativeObject = new _CryptoKey();
    try {
      await nativeObject.derive(algorithm, baseKey, {length, name: 'AES-GCM'}, true, []);
      return new Promise((onSuccess, onReject) =>
        this._nativeObject.subtleExportKey('raw', nativeObject, onSuccess, onReject)
      );
    } finally {
      nativeObject.dispose();
    }
  }

  async deriveKey(
    algorithm: Algorithm,
    baseKey: CryptoKey,
    derivedKeyAlgorithm: {name: string, length: number},
    extractable: boolean,
    keyUsages: string[]
  ): Promise<CryptoKey> {
    if (arguments.length !== 5) {
      throw new TypeError(`Expected 5 arguments, got ${arguments.length}`);
    }
    checkDeriveAlgorithm(algorithm);
    allowOnlyKeys(derivedKeyAlgorithm, ['name', 'length']);
    allowOnlyValues(derivedKeyAlgorithm.name, ['AES-GCM'], 'derivedKeyAlgorithm.name');
    checkType(derivedKeyAlgorithm.length, Number, {name: 'derivedKeyAlgorithm.length'});
    checkType(baseKey, CryptoKey, {name: 'baseKey'});
    checkType(extractable, Boolean, {name: 'extractable'});
    checkType(keyUsages, Array, {name: 'keyUsages'});
    const nativeObject = new _CryptoKey();
    await nativeObject.derive(algorithm, baseKey, derivedKeyAlgorithm, extractable, keyUsages);
    return new CryptoKey(nativeObject, {
      algorithm,
      extractable,
      type: 'secret',
      usages: Object.freeze(keyUsages.concat())
    });
  }

  async decrypt(
    algorithm: {name: string, iv: ArrayBuffer | TypedArray, tagLength?: number},
    key: CryptoKey,
    data: ArrayBuffer | TypedArray
  ): Promise<ArrayBuffer> {
    if (arguments.length !== 3) {
      throw new TypeError(`Expected 3 arguments, got ${arguments.length}`);
    }
    allowOnlyKeys(algorithm, ['name', 'iv', 'tagLength']);
    allowOnlyValues(algorithm.name, ['AES-GCM'], 'algorithm.name');
    checkType(algorithm.tagLength, Number, {name: 'algorithm.tagLength', nullable: true});
    checkType(getBuffer(algorithm.iv), ArrayBuffer, {name: 'algorithm.iv'});
    checkType(key, CryptoKey, {name: 'key'});
    checkType(getBuffer(data), ArrayBuffer, {name: 'data'});
    return new Promise((onSuccess, onReject) =>
      this._nativeObject.subtleDecrypt(algorithm, key, data, onSuccess, onReject)
    );
  }

  async encrypt(
    algorithm: {name: string, iv: ArrayBuffer | TypedArray, tagLength?: number},
    key: CryptoKey,
    data: ArrayBuffer | TypedArray
  ): Promise<ArrayBuffer> {
    if (arguments.length !== 3) {
      throw new TypeError(`Expected 3 arguments, got ${arguments.length}`);
    }
    allowOnlyKeys(algorithm, ['name', 'iv', 'tagLength']);
    allowOnlyValues(algorithm.name, ['AES-GCM'], 'algorithm.name');
    checkType(algorithm.tagLength, Number, {name: 'algorithm.tagLength', nullable: true});
    checkType(getBuffer(algorithm.iv), ArrayBuffer, {name: 'algorithm.iv'});
    checkType(key, CryptoKey, {name: 'key'});
    checkType(getBuffer(data), ArrayBuffer, {name: 'data'});
    return new Promise((onSuccess, onReject) =>
      this._nativeObject.subtleEncrypt(algorithm, key, data, onSuccess, onReject)
    );
  }

  async exportKey(
    format: 'raw' | 'spki',
    key: CryptoKey
  ): Promise<ArrayBuffer> {
    if (arguments.length !== 2) {
      throw new TypeError(`Expected 2 arguments, got ${arguments.length}`);
    }
    allowOnlyValues(format, ['raw', 'spki'], 'format');
    checkType(key, CryptoKey, {name: 'key'});
    return new Promise((onSuccess, onReject) =>
      this._nativeObject.subtleExportKey(format, key, onSuccess, onReject)
    );
  }

  async generateKey(
    algorithm: AlgorithmECDH,
    extractable: boolean,
    keyUsages: string[]
  ): Promise<{privateKey: CryptoKey, publicKey: CryptoKey}> {
    if (arguments.length !== 3) {
      throw new TypeError(`Expected 3 arguments, got ${arguments.length}`);
    }
    allowOnlyKeys(algorithm, ['name', 'namedCurve']);
    allowOnlyValues(algorithm.name, ['ECDH'], 'algorithm.name');
    allowOnlyValues(algorithm.namedCurve, ['P-256'], 'algorithm.namedCurve');
    checkType(extractable, Boolean, {name: 'extractable'});
    checkType(keyUsages, Array, {name: 'keyUsages'});
    const nativeObject = new _CryptoKey();
    await nativeObject.generate(algorithm, extractable, keyUsages);
    const nativePrivate = new _CryptoKey(nativeObject, 'private');
    const nativePublic = new _CryptoKey(nativeObject, 'public');
    return {
      privateKey: new CryptoKey(nativePrivate, {algorithm, extractable}),
      publicKey: new CryptoKey(nativePublic, {algorithm, extractable})
    };
  }

}

class NativeCrypto extends NativeObject {

  private static instance: NativeCrypto;

  static getInstance() {
    if (!this.instance) {
      this.instance = new NativeCrypto();
    }
    return this.instance;
  }

  get _nativeType() {
    return 'tabris.Crypto';
  }

  getRandomValues(typedArray: ArrayBufferView) {
    const byteLength = typedArray.byteLength;
    const values = new Uint8Array(
      this._nativeCall('getRandomValues', {byteLength}) as ArrayBuffer
    );
    if (values.byteLength !== byteLength) {
      throw new Error('Not enough random bytes available');
    }
    new Uint8Array(typedArray.buffer).set(values);
    return typedArray;
  }

  subtleDigest(arg: {
    algorithm: string,
    data: ArrayBuffer | TypedArray,
    resolve: (buffer: ArrayBuffer) => any,
    reject: (ex: Error) => any
  }) {
    this._nativeCall('subtleDigest', {
      algorithm: arg.algorithm,
      data: ArrayBuffer.isView(arg.data) ? arg.data.buffer : arg.data,
      onSuccess: (result: ArrayBuffer) => {
        if (!(result instanceof ArrayBuffer) || result.byteLength === 0) {
          throw new TypeError('Internal Type Error: result is not valid ArrayBuffer');
        }
        arg.resolve(result);
      },
      onError: (reason: unknown) => arg.reject(new Error(String(reason)))
    });
  }

  subtleDecrypt(
    algorithm: {name: string, iv: ArrayBuffer | TypedArray, tagLength?: number},
    key: CryptoKey,
    data: ArrayBuffer | TypedArray,
    onSuccess: (buffer: ArrayBuffer) => any,
    onError: (ex: Error) => any
  ): void {
    const {name, iv, tagLength} = algorithm;
    this._nativeCall('subtleDecrypt', {
      algorithm: {name, iv: getBuffer(iv), tagLength: isNaN(tagLength as number) ? 128 : tagLength},
      key: getNativeObject(key).cid,
      data: ArrayBuffer.isView(data) ? data.buffer : data,
      onSuccess,
      onError: (reason: unknown) => onError(new Error(String(reason)))
    });
  }

  subtleExportKey(
    format: string,
    key: CryptoKey | _CryptoKey,
    onSuccess: (value: ArrayBuffer) => void,
    onError: (ex: any) => void
  ): void {
    this._nativeCall('subtleExportKey', {
      format,
      key: getCid(key),
      onSuccess,
      onError: (reason: unknown) => onError(new Error(String(reason)))
    });
  }

  subtleEncrypt(
    algorithm: {name: string, iv: ArrayBuffer | TypedArray, tagLength?: number},
    key: CryptoKey,
    data: ArrayBuffer | TypedArray,
    onSuccess: (value: ArrayBuffer) => void,
    onError: (ex: any) => void
  ): void {
    const {name, iv, tagLength} = algorithm;
    this._nativeCall('subtleEncrypt', {
      algorithm: {name, iv: getBuffer(iv), tagLength: isNaN(tagLength as number) ? 128 : tagLength},
      key: getCid(key),
      data: getBuffer(data),
      onSuccess,
      onError: (reason: unknown) => onError(new Error(String(reason)))
    });
  }

}

function checkDeriveAlgorithm(algorithm: Algorithm): asserts algorithm is (AlgorithmHKDF | AlgorithmECDH | 'HKDF') {
  if (algorithm === 'HKDF') {
    return;
  }
  if (algorithm === 'AES-GCM') {
    throw new TypeError('AES-GCM not supported for this function');
  }
  allowOnlyKeys(algorithm, ['name', 'namedCurve', 'public', 'hash', 'salt', 'info']);
  allowOnlyValues(algorithm.name, ['ECDH', 'HKDF'], 'algorithm.name');
  if (algorithm.name === 'ECDH') {
    allowOnlyValues(algorithm.namedCurve, ['P-256'], 'algorithm.namedCurve');
    checkType(algorithm.public, CryptoKey, {name: 'algorithm.public'});
  } else if (algorithm.name === 'HKDF') {
    checkType(algorithm.hash, String, {name: 'algorithm.hash'});
    checkType(getBuffer(algorithm.salt), ArrayBuffer, {name: 'algorithm.salt'});
    checkType(getBuffer(algorithm.info), ArrayBuffer, {name: 'algorithm.info'});
  }
}
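The NativeCrypto class above guards native access behind a lazily created static singleton. The pattern in isolation, detached from the tabris native bridge (the class name and fields here are illustrative only):

```typescript
class Registry {
  private static instance: Registry;
  private readonly entries = new Map<string, string>();

  // Lazily create the single shared instance on first access.
  static getInstance(): Registry {
    if (!Registry.instance) {
      Registry.instance = new Registry();
    }
    return Registry.instance;
  }

  set(key: string, value: string): void {
    this.entries.set(key, value);
  }

  get(key: string): string | undefined {
    return this.entries.get(key);
  }
}
```

Every call site then shares one instance, exactly as `NativeCrypto.getInstance()` is used from both `Crypto` and `SubtleCrypto` above.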
{ "redpajama_set_name": "RedPajamaGithub" }
1,224
'White Boy Rick' granted parole

LANSING, Mich. (WJBK) - The Michigan Parole Board unanimously voted to grant parole to Richard Wershe in its meeting Friday morning. Wershe, who was known on the streets as "White Boy Rick," has been in prison for 29 years for drug crimes committed when he was a teenager. He turns 48 next week.

Wershe is aware of the parole board's decision but isn't free yet. We're told Wershe can be released from Michigan's custody at the Oaks Correctional Facility in Manistee as early as mid-August.

Additionally, he may have to serve another 22 months in a Florida prison for his involvement in a car theft scheme. If he is sent to Florida to serve that time, at no point will he be considered a "free man" during his transfer. The State of Florida is being informed of his anticipated release, and officials there will be responsible for making arrangements for his transfer to serve his sentence there.

Wershe was sentenced to serve five years in prison in Florida after being convicted on racketeering and conspiracy to commit racketeering charges in 2006. The crimes happened while he was incarcerated in Florida as part of the federal witness protection program.

Wershe was 17 when he was caught with cocaine. He says he had worked as an FBI informant and reported corrupt Detroit police officers but wasn't given leniency. Wershe was sentenced to life in prison in 1988. The sentence was later changed to give him a shot at parole. In June, he told parole board members that he's been rehabilitated and knows drugs destroy communities.

M.L. Elrick is in Lansing, where the parole board met. This is a developing story. Stay with FOX 2 for updates.
The Wayne County Prosecutor's Office released the following statement after receiving the news of Wershe's parole Friday: "The position of the Wayne County Prosecutor's Office is that this is a decision that has been made by the parole board and that we have no further position. We respect and accept the decision of the parole board."

6/5/2017 - 'White Boy' Rick Wershe parole hearing set for June 8
4/14/2017 - 'White Boy' Rick Wershe granted public parole hearing
2/14/2017 - Parole board could vote for 'White Boy Rick' hearing after meeting with chairman
9/29/2017 - Appeals Court rules Rick Wershe won't be resentenced
9/20/2017 - Should "White Boy Rick" be released from jail?
9/4/2015 - 'White Boy' Rick Wershe to be resentenced Sept. 18

Richard Wershe Jr. got into the drug trade in the 1980s as a 14-year-old informant for the FBI. After he started selling coke, he became notorious for his youth. And of course, there was that nickname.

In his final remarks during June's parole hearing, Wershe made a promise the parole board probably hears a lot. "I know I messed up," he said. "I can't go back. I can only go forward. You'll never see me again."
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
8,645
\section{Introduction} \label{sec1} Measure theory and integration theory are major topics in mathematics with practical applications. For example, they serve as the foundation of probability theory whose formalization in proof assistants is used to verify information security (e.g., \cite{abate2021csf}) or artificial intelligence (e.g., \cite{tassarotti2021cpp}). It is therefore no wonder that the topic of formalization of measure and integration theory in proof assistants has already been tackled several times (e.g., \cite{bialas1992jfm,bialas1995jfm,hurd2002phd,coble2010phd,holzl2011itp}). In fact, experiments are still going on~\cite{vanDoorn2021itp}, some still dealing with the basics~\cite{endou2020jfm,boldo2021tr}. Our motivation is to develop measure and integration theory on top of \textsc{MathComp}{}~\cite{mathcomp}, a library of formalized mathematics developed using the \textsc{Coq}{} proof assistant~\cite{coq}. The \textsc{MathComp}{} library consists of several algebraic theories that made it possible to formalize the Odd Order theorem by following its published, revised proof~\cite[\sect{6}]{gonthier2013itp}. There are now several libraries that are built on top of \textsc{MathComp}{}, the main ones being made available as parts of the Mathematical Components project\footnote{\url{https://github.com/math-comp/}}. Among them, \textsc{MathComp-Analysis}{}~\cite{cohen2018jfr,affeldt2020ijcar} aims at taking advantage of the algebraic theories provided by \textsc{MathComp}{} to develop classical analysis (topology, real and complex analysis, etc.). In this paper, we report on an original formalization of measure and integration theory. Our approach is to extend \textsc{MathComp-Analysis}{} with reusable theories while following textbook presentations~\cite{klenke2014,liintegration}. The best illustration is the construction of the Lebesgue measure that we formalize. 
This is a standard construction from a semiring of sets, using the measure extension theorem. To the best of our knowledge, it has never been formalized with the abstraction of ring of sets or semiring of sets in a dependently-typed proof assistant. Yet, its formalization is the occasion to develop new mathematical structures of general interest. Similarly, the construction of the Lebesgue integral gives us the opportunity to develop a generic formalization of simple functions and to extend the formalization of the iterated operators of \textsc{MathComp}{}~\cite{bertot2008tphols}, one key to the successful formalization of the Odd Order theorem. Our contribution in this paper is twofold. First, we bring to the \textsc{Coq}{} proof assistant a formalization of measure and integration theory that is compatible with the algebraic theories of \textsc{MathComp}{}. Second, we demonstrate recent formalization techniques developed in the context of the Mathematical Components project. In particular, we use \textsc{Hierarchy-Builder}{}~\cite{cohen2020fscd} to formalize a hierarchy of mathematical structures for measure theory and to provide a compositional formalization of simple functions. Our technical contributions materialize as extensions to \textsc{MathComp-Analysis}{} in the form of reusable formal theories about sequences (of reals and of extended real numbers) and about sums over general sets and over finitely-supported functions. \paragraph*{Paper Outline} In Sect.~\ref{sec:ereal}, we explain how we develop the theory of extended real numbers by extending \textsc{MathComp}{} and \textsc{MathComp-Analysis}{}. In Sect.~\ref{sec:measure_theory}, we explain how we encode the basic definitions of measure theory, demonstrating the use of \textsc{Hierarchy-Builder}{}. In Sect.~\ref{sec:extension}, we formalize the measure extension theorem which shows how to extend a measure over a semiring of sets to a $\sigma$-\textsl{algebra}{}. 
This is a standard and generic approach to the construction of measures. In Sect.~\ref{sec:lebesgue_measure}, we obtain the Lebesgue measure by extending a measure over the semiring of sets of intervals. In Sect.~\ref{sec:lebesgue_integral}, we show that the framework developed so far allows for a formalization of the Lebesgue integral up to the dominated convergence and Fubini's theorems. We review related work in Sect.~\ref{sec:related_work} and conclude in Sect.~\ref{sec:conclusion}.

\def\accompanying#1{\cite[\L!#1!]{analysis}}

\paragraph{About the Notations Used in this Paper}
The Mathematical Components project has been favoring ASCII notations. Most of them are unsurprising because they are inspired by \LaTeX\ commands. This paper follows this tradition; ASCII notations will be explained in the prose or in Tables~\ref{tab:iterated} and~\ref{tab:set}. As a consequence, we can display the \textsc{Coq}{} code almost verbatim; we allow pretty-printing only for a few standard symbols (such as \L!<-! instead of {\tt <-}, \L!->! instead of {\tt ->}, \L!forall! instead of {\tt forall}, \L!exists! instead of {\tt exists}, \L!<=! instead of {\tt <=}, \L+!=+ instead of {\tt !=}, \L!/\! instead of \verb!/\!, etc.).

\section{Support for Extended Real Numbers}
\label{sec:ereal}

Since a measure is potentially infinite, it is represented by extended real numbers. A prerequisite for the construction of measures is therefore the development of the theory of extended real numbers and of their sequences. This actually calls for a substantial extension of \textsc{MathComp-Analysis}{}~\cite{cohen2018jfr}. Our starting point is the hierarchy of numeric and real interfaces provided by \textsc{MathComp}{} and \textsc{MathComp-Analysis}{}. It contains (among others) the type \L!numDomainType! for numeric integral domains, the type \L!numFieldType! for numeric fields, the type \L!realFieldType! for real fields (see~\cite[Chapter~4]{cohen2012phd}), and the type \L!realType!
for real numbers. They form an inheritance chain as depicted in Fig.~\ref{fig:numtypes}.

\begin{figure}[h]
\centering
\includegraphics[width=2.5cm]{numtypes.pdf}
\caption{Numeric types provided by \textsc{MathComp}{} and \textsc{MathComp-Analysis}{} used in this paper}
\label{fig:numtypes}
\end{figure}

The definition of extended real numbers is unsurprising (and predates the work presented in this paper):
\begin{lstlisting}
Inductive extended (R : Type) := EFin of R | EPInf | ENInf.
\end{lstlisting}
Hereafter, the notation \L!+oo! (resp.\ \L!-oo!) is for the constructor \L!EPInf! (resp.\ \L!ENInf!). The constructor \L!EFin! injects a real number \L!r! into the set of extended real numbers; this injection also comes with a dedicated notation.

\subsection{Algebraic Aspects of Extended Real Numbers}
\label{sec:ereal_algebraic}

The expression $\infty-\infty$ is undefined in mathematical practice. How to deal with this is a crucial aspect of our formalization. We define it to be $-\infty$ because it makes the extended real numbers a commutative monoid, so we can use \textsc{MathComp}{}'s iterated operators~\cite{bertot2008tphols}. Furthermore, we can combine the iterated operators of \textsc{MathComp}{} with the notion of limit, which comes from \textsc{MathComp-Analysis}{}~\cite{cohen2018jfr}, to introduce a notation for infinite sums. On the one hand, \textsc{MathComp}{} comes with a generic definition of iterated operators \L!\big[op/idx]_(i <- s | P i) f i! where \L!f! is a function whose domain corresponds to the list of indices \L!s! and \L!P! is a boolean predicate. Depending on the properties of the binary operator \L!op! and the element \L!idx!, many lemmas are available that have been key to important formalizations in \textsc{Coq}{} (e.g., \cite{gonthier2013itp}). The notation \L!\big[op/idx]_(i < n | P i) f i! is a special case where the indices are the natural numbers less than~\L!n!.
As for the notation \L!\sum_(i <- s | P i) f i!, it is a special case for the iterated addition when \L!f! is a numerical type-valued function. On the other hand, \textsc{MathComp-Analysis}{} comes with a definition of limit~\cite[\sect{2.3}]{cohen2018jfr}. It can be applied to sequences, i.e., functions of type \L!nat -> T! (notation \L!T^nat!). Given a sequence~\L!u!, \L!lim u! is the limit of the sequence $\texttt{u}_n$ when $n \to \infty$. We combine these two notations into a family of notations \L!\sum_(i <oo | P i) f i!, which is simply defined as \L!lim (fun n => \big[op/idx]_(i < n | P i) f i)!. Of course, these new notations need to be instrumented with many lemmas; the rest of this paper will provide several examples. Table~\ref{tab:iterated} contains a summary of the notations for iterated operators we have discussed so far\footnote{ Table~\ref{tab:iterated} also contains notations that we will introduce later in this paper. We summarize these notations together to highlight their resemblances and serve as a reading guide.}.

\makeatletter
\renewcommand{\boxed}[1]{\text{\fboxsep=.2em\fbox{\m@th$\displaystyle#1$}}}
\makeatother
\def\boxed{\mathtt{op}}{\boxed{\mathtt{op}}}

\begin{table}[h]
{\centering
\caption{Summary of iterated operators and alike used or newly introduced in this paper. The symbol $\boxed{\mathtt{op}}$ is the iterated operator corresponding to \texttt{op}. }
\label{tab:iterated}
\begin{center}
\begin{tabular}{|ll|}
\hline
\multicolumn{2}{|l|}{Finitely iterated operators \cite{bertot2008tphols}:} \\
\L!\big[op/idx]_(i <- s | P i) f i! & $\boxed{\mathtt{op}}_{i < \mid s\mid, i \in P} f(s_i)$ \\
\L!\big[op/idx]_(i < n | P i) f i! & $\boxed{\mathtt{op}}_{0 \leq i < n, i \in P} f(i)$ \\
\L!\big[op/idx]_(m <= i < n | P i) f i! & $\boxed{\mathtt{op}}_{m \leq i < n, i \in P} f(i)$ \\
\multicolumn{2}{|l|}{\quad Application to numeric functions (see Table~\ref{tab:set} for application to sets):} \\
\L!\sum_(i <- s | P i) f i!
& $\sum_{i < \mid s\mid, i \in P} f(s_i)$ \\
\hline
\multicolumn{2}{|l|}{Countably iterated sum of numeric functions (Sect.~\ref{sec:ereal_algebraic}):} \\
\L!\sum_(i <oo | P i) f i! & $\sum_{i = 0, i \in P}^\infty f(i)$ \\
\L!\sum_(m <= i <oo | P i) f i! & $\sum_{i = m, i \in P}^\infty f(i)$ \\
\hline
\multicolumn{2}{|l|}{Iterated operators over finite supports (Sect.~\ref{sec:fsbigop}):} \\
\L!\big[op/idx]_(i \in D) f i! & $\boxed{\mathtt{op}}_{i \in D}f(i)$ if $f(i) \neq \mathtt{idx}$ for \\
 & finitely many $i \in D$; $\mathtt{idx}$ otherwise \\
\hline
\multicolumn{2}{|l|}{Sum of extended real numbers over general sets (Sect.~\ref{sec:esum}):} \\
\L!\esum_(i in P) f i! & $\sum_{i \in P} f(i)$ \\
\hline
\multicolumn{2}{|l|}{Integral (Sect.~\ref{sec:integral_measurable_function}):} \\
\L!\int[mu]_(x in D) f x! & $\int_{x \in D} f(x) d\mu(x)$ \\
\hline
\end{tabular}
\end{center}
}
\end{table}

\subsection{Topological Aspects of Extended Real Numbers}
\label{sec:ereal_topology}

\textsc{MathComp-Analysis}{} provides several mathematical structures (topological, uniform, pseudometric spaces, etc.) together with generic lemmas. To enjoy these lemmas, it is necessary to equip extended real numbers with these structures by showing they meet their interfaces. Extended real numbers form a pseudometric space. The instantiation of the mathematical structures essentially relies on the definition and properties of an order-preserving bijective function from the set of extended real numbers to $[-1;1]$ (see \accompanying{constructive_ereal.v} for details):
\begin{lstlisting}
Definition contract (x : \bar R) : R :=
  match x with EFin r => r / (1 + `|r|) | +oo => 1 | -oo => -1 end.
\end{lstlisting}
There is no hope of getting a richer structure (say, \textsc{MathComp}'s \L!zmodType!) on the full type though, because, as already discussed above, $\infty-\infty$ is taken to be $-\infty$.
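The convention deserves a word of justification: taking $\infty-\infty$ to be $0$, for instance, would already break associativity, since
$$({+\infty} + ({-\infty})) + 1 = 0 + 1 = 1 \qquad\textrm{but}\qquad {+\infty} + (({-\infty}) + 1) = {+\infty} + ({-\infty}) = 0,$$
whereas with $-\infty$ absorbing both sides evaluate to $-\infty$ (the dual convention, with $+\infty$ absorbing, would work symmetrically). A group structure is out of reach in any case: ${+\infty} + x$ is either $+\infty$ or $-\infty$, never $0$, so $+\infty$ has no additive inverse.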
\subsection{Sequences of Extended Real Numbers}
\label{sec:ereal_seq}

The preparatory steps (Sections~\ref{sec:ereal_algebraic} and~\ref{sec:ereal_topology}) we briefly overviewed above are necessary to produce a theory about sequences of extended real numbers that blends into \textsc{MathComp-Analysis}{} in a satisfactory way. For the sake of illustration, let us present two sample lemmas. The first one shows that the limit of a sum is the sum of limits:
\begin{lstlisting}
Lemma ereal_limD (R : realType) (f g : (\bar R)^nat) :
  cvg f -> cvg g -> lim f +? lim g ->
  lim (f \+ g) = lim f + lim g.
\end{lstlisting}
We already explained the notation \L!lim! in Sect.~\ref{sec:ereal_algebraic}. See Fig.~\ref{fig:numtypes} for \L!realType!. The definition \L!cvg f! (\L!cvg! is for ``convergence'') states that \L!lim f! exists without naming it explicitly. The notation \L!a +? b! is a predicate that checks whether the addition of \L!a! and \L!b! is well-defined; the notation \L!f \+ g! is for the pointwise addition of two functions. The hypothesis \L!lim f +? lim g! rules out indeterminate cases: for instance, with $f_n = n$ and $g_n = -n$, the pointwise sum converges to $0$, while $\lim f + \lim g = \infty - \infty$, which is taken to be $-\infty$. The second illustrative lemma shows the commutation of finite and infinite sums of sequences of non-negative terms:
\begin{lstlisting}
Lemma nneseries_sum_nat (R : realType) n (f : nat -> nat -> \bar R) :
  (forall i j, 0 <= f i j) ->
  \sum_(j <oo) (\sum_(0 <= i < n) f i j) =
  \sum_(0 <= i < n) (\sum_(j <oo) (f i j)).
\end{lstlisting}
There are many lemmas dealing with sequences of extended real numbers that have been added to \textsc{MathComp-Analysis}{} for the purpose of this work (see \accompanying{sequences.v} and \accompanying{normedtype.v}). These are reusable lemmas that make the rest of our formalization possible.

\subsection{Iterated Operators over Finite Supports}
\label{sec:fsbigop}

To be able to succinctly formalize some proofs relying on iterated operators, we also extend the library of iterated operators of \textsc{MathComp-Analysis}{} with \newterm{iterated operators over finite supports}.
They take the form of the notation \L!\big[op/idx]_(i \in A) f i! for the iterated application of the operator \L!op! to \L!f i!'s where \L!i! ranges over \L!A! and \L!f! has a finite support. The definition of the finite support of a function relies on a theory about the cardinality properties of sets whose development was also triggered by the work presented in this paper. From this theory, we use in particular the function \L!fset_set! (\L!fset_set A! returns a finite set when the set \L!A! is indeed finite and the empty set otherwise) to define the finite support of a function:
\begin{lstlisting}
Definition finite_support {I : choiceType} {T : Type} (idx : T)
    (D : set I) (F : I -> T) : seq I :=
  fset_set (D `&` F @^-1` [set~ idx] : set I).
\end{lstlisting}
The notation for iterated operators over finite supports combines this definition with \textsc{MathComp}'s iterated operators:
\begin{lstlisting}
Notation "\big [ op / idx ]_ ( i '\in' D ) F" :=
  (\big[op/idx]_(i <- finite_support idx D (fun i => F)) F) : big_scope.
\end{lstlisting}
The integral of simple functions in Sect.~\ref{sec:sintegral} will provide a concrete use of this new notation.

\subsection{Sums over General Sets}
\label{sec:esum}

Last, we extend \textsc{MathComp-Analysis}{} with \newterm{sums over general sets}, i.e., the notation $$\sum_{i \in S} a_i \overset{\textrm{def}}{=} \sup \left\{ \sum_{i \in A} a_i \,\text{\textbar}\, A \textrm{ non-empty finite subset of } S \right\}.$$ For that purpose, we introduce the definition \L!fsets S! for the finite sets included in~\L!S!. It is defined using the predicate \L!finite_set A! that holds when the set \L!A! is finite. Using this definition and the notation for iterated operators over finite supports from Sect.~\ref{sec:fsbigop}, the pencil-and-paper definition of sums over general sets translates directly to:
\begin{lstlisting}
Variables (R : realFieldType) (T : choiceType).
Definition fsets S : set (set T) := [set F | finite_set F /\ F `<=` S].
Definition esum (S : set T) (a : T -> \bar R) :=
  ereal_sup [set \sum_(x \in A) a x | A in fsets S].
\end{lstlisting}
The type \L!realFieldType! is one of the numeric types of \textsc{MathComp}, see Fig.~\ref{fig:numtypes}. This generalizes the notation \L!\sum_(i <oo | P i) f i! for sums of sequences of extended real numbers of Sect.~\ref{sec:ereal_algebraic}. For illustration, in the context of Lebesgue integration it is in particular useful to establish the following partitioning property:
\begin{align}
J_k \textrm{ pairwise-disjoint} \to (\forall j, j \in \bigcup_{k \in K} J_k \to 0 \leq a_j) \to \nonumber \\
\sum_{i \in \bigcup_{k \in K} J_k} a_i = \sum_{k \in K} \left(\sum_{j \in J_k} a_j\right) \nonumber
\end{align}
Here follows the corresponding formal statement, where the hypothesis about the pairwise-disjointness of the sets $J_k$ is slightly generalized (see Table~\ref{tab:set} for notations):
\begin{lstlisting}
Lemma esum_bigcup J a : trivIset [set i | a @` J i != [set 0]] J ->
  (forall x, (\bigcup_(k in K) J k) x -> 0 <= a x) ->
  \esum_(i in \bigcup_(k in K) J k) a i = \esum_(k in K) \esum_(j in J k) a j.
\end{lstlisting}

\def\cplt#1{{#1}^{\complement}}

\begin{table}[h]
{
\caption{Summary of the set-theoretic notations used in this paper. The type \L!set T! is defined as \L!T -> Prop!. Most set-theoretic constructs are given ASCII notations, otherwise we use the \textsc{Coq}{} identifier directly (as with {\tt set0} or \L!trivIset!). }
\begin{center}
\begin{tabular}{|l|l|l|l|}
\hline
ASCII & \textsc{Coq}{} & Meaning \\
notation & identifier & \\
\hline
{\tt set0} & {\tt set0} & The empty set \\
\L![set: A]! & \L!setT! & The full set of elements of type \L!A! \\
{\tt `\char`\|`} & \L!setU! & Set union \\
{\tt `\&`} & \L!setI! & Set intersection \\
\L!`\`! & \L!setD! & Set difference \\
\L!~`! & \L!setC! & Set complement \\
{\tt `<=`} & \L!subset! & Set inclusion \\
\L!f @` A! & \L!image! & Image by \L!f! of \L!A! \\
\L!f @^-1` A! & \L!preimage! & Preimage by \L!f! of \L!A!
\\
\L![set x]! & \L!set1! & The singleton set $\{x\}$ \\
\L![set~ x]! & see \cite{analysis} & The complement of $\{x\}$ \\
\L![set E | x in P]! & see \cite{analysis} & The set of \L!E! with \L!x! ranging over \L!P! \\
\L!range f! & see \cite{analysis} & Image by \L!f! of the full set \\
\L!\big[setU/set0]_! & see Table~\ref{tab:iterated} & $\bigcup_{i < \mid s\mid, i \in P} f(s_i)$ \\
\L! (i <- s | P i) f i! & & \\
\L!\bigcup_(k in P) F k! & {\tt bigcup} & Countable union \\
\L!\bigcap_(k in P) F k! & {\tt bigcap} & Countable intersection \\
\L!trivIset D F! & {\tt trivIset} & \L!F! is a sequence of pairwise \\
 & & disjoint sets over the domain \L!D! \\
\L![set` p]! & see \cite{analysis} & Set corresponding to the boolean \\
 & & predicate \L!p! \\
\hline
\end{tabular}
\end{center}
\label{tab:set}
}
\end{table}

\section{Basic Definitions of Measure Theory}
\label{sec:measure_theory}

The main mathematical definitions for measure theory are $\sigma$-\textsl{algebra}{} and measure. The goal of the construction of the Lebesgue measure is to build a function that satisfies the properties of a measure. This is not trivial because such a function does not exist in general when the domain is an arbitrary powerset, hence the introduction of $\sigma$-\textsl{algebra}{}s. This section proposes a formalization of the basic definitions of measure theory using \textsc{Hierarchy-Builder}{}~\cite{cohen2020fscd}, a tool that automates the writing of \newterm{packed classes}~\cite{garillot2009tphols}, a methodology to build hierarchies of mathematical structures that is used pervasively in the Mathematical Components project.

\subsection{Overview of \textsc{Hierarchy-Builder}{}}
\label{sec:hb_overview}

\textsc{Hierarchy-Builder}{} extends \textsc{Coq}{} with commands to define hierarchies of mathematical structures and functions. It is designed so that hierarchies can evolve (for example by splitting a structure into smaller structures) without breaking existing code.
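As a minimal illustration of the commands described next, consider the following toy hierarchy with a single structure of inhabited types (the names \L!IsInhabited!, \L!Inhabited!, and \L!inhType! are ours, for illustration only; they do not belong to \textsc{MathComp-Analysis}{}):
\begin{lstlisting}
HB.mixin Record IsInhabited T := { default : T }.
#[short(type=inhType)]
HB.structure Definition Inhabited := {T of IsInhabited T}.
HB.instance Definition _ := IsInhabited.Build nat 0.
\end{lstlisting}
After the last declaration, \L!nat! is recognized as an \L!inhType!, i.e., it can be used wherever a type equipped with this structure is expected.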
These commands are compiled to packed classes~\cite{garillot2009tphols}, but the technical details of their implementation in \textsc{Coq}{} (modules, records, coercions, implicit arguments, canonical structures instances, notations, etc.) are hidden from the user. The main concept of \textsc{Hierarchy-Builder}{} is that of a \newterm{factory}. This is a record defined by the command \L!HB.factory! that packs a carrier, operations, and properties. This record usually corresponds to the standard definition of a mathematical structure. \newterm{Mixins} are factories used as the default definition for a mathematical structure; they are defined by the command \L!HB.mixin!. \newterm{Structures} defined by the command \L!HB.structure! are essentially sigma-types pairing a carrier with one or more factories. A mixin often extends a structure, so it typically takes as parameters a carrier and other structures. Factories are instantiated using the command \L!HB.instance!. Instances are built with an ``\L!xyz.Build!'' function which is automatically generated for each \L!xyz! factory. A \newterm{builder} is a function that shows that a factory is sufficient to build a mixin. To write a builder, one uses the command \L!HB.builders! that opens a \textsc{Coq}{} section starting from a factory and ending with instances of mixins. In addition to commands to build hierarchies, \textsc{Hierarchy-Builder}{} also checks their validity by detecting missing interfaces or \newterm{competing inheritance paths}~\cite{affeldt2020ijcar}. More than an inheritance mechanism, \textsc{Hierarchy-Builder}{} therefore provides help in the design of hierarchies of structures.

\subsection{Mathematical Structures for Measure Theory}
\label{sec:math_struct_measure}

A $\sigma$-\textsl{algebra}{} is a mathematical structure that comprises a set of sets that contains the empty set, and is stable by complement and by countable union.
It is best defined as a hierarchy of mathematical structures because poorer structures actually play a key role in the construction by extension of the Lebesgue measure.

\subsubsection{Inheritance Chain from Semiring of Sets}
\label{sec:inheritance_chain}

The hierarchy of mathematical structures for measure theory starts with \newterm{semirings of sets}. They are formalized using \textsc{Hierarchy-Builder}{} (see Sect.~\ref{sec:hb_overview}) as follows:
\begin{lstlisting}[numbers=left,xleftmargin=3.0ex]
HB.mixin Record isSemiRingOfSets (d : measure_display) T := { 7\label{loc:isSemiRingOfSets}7
  ptclass : Pointed.class_of T; 7\label{loc:ptclass}7
  measurable : set (set T) ; 7\label{loc:measurable}7
  measurable0 : measurable set0 ; 7\label{loc:measurable0}7
  measurableI : setI_closed measurable; 7\label{loc:measurableI}7
  semi_measurableD : semi_setD_closed measurable }. 7\label{loc:measurableD}7
#[short(type=semiRingOfSetsType)] 7\label{loc:semiRingOfSetsType}7
HB.structure Definition SemiRingOfSets d := 7\label{loc:SemiRingOfSets}7
  {T of isSemiRingOfSets d T}.
\end{lstlisting}
The declaration of the mixin starts at line \ref{loc:isSemiRingOfSets}. The parameter~\L!d! is part of a mechanism to implement user-friendly notations; it can be ignored at this stage because it is not used in the definition; we defer its explanation to Sect.~\ref{sec:display} where we will have more material to demonstrate it with examples.
The purpose of line~\ref{loc:ptclass} is to make structures in measure theory pointed types\footnote{At the time of this writing, we have to augment our structures with manually-defined packed classes, because the port of \textsc{MathComp-Analysis}{} to use \textsc{Hierarchy-Builder}{} is not yet completed (this is work in progress, see \url{https://github.com/math-comp/analysis/pull/698}).} in the sense of the hierarchy of algebraic and topological structures of \textsc{MathComp-Analysis}{}, i.e., their carrier contains at least one element and has a boolean equality operator and a choice operator\footnote{ We require the equality and choice operators to avoid non-forgetful inheritance~\cite{affeldt2020ijcar} problems. Note that we can always provide these operators thanks to the classical setting of \textsc{MathComp-Analysis}{}~\cite[Sect.~5]{cohen2018jfr}. }. Line \ref{loc:measurable} corresponds to the carrier. A semiring of sets contains the empty set (line \ref{loc:measurable0}). It is also stable by finite intersection; this is captured by line~\ref{loc:measurableI}, where \L+setI_closed G+ is formally defined as \L+forall A B, G A -> G B -> G (A `&` B)+. At line~\ref{loc:measurableD}, \L+semi_setD_closed G+ means that the relative complement of two sets in \L+G+ can be partitioned into a finite number of sets in \L+G+ (recall from Sect.~\ref{sec:esum} that \L!finite_set! holds for finite sets):
\begin{lstlisting}
Definition "*semi_setD_closed*" G := forall A B, G A -> G B -> exists D,
  [/\ finite_set D, D `<=` G, A `\` B = \bigcup_(X in D) X & trivIset D id].
\end{lstlisting}
The definition of semiring of sets is completed at line~\ref{loc:SemiRingOfSets} by declaring the structure (as explained in Sect.~\ref{sec:hb_overview}) and providing a conventional notation for the corresponding type (line~\ref{loc:semiRingOfSetsType}). Hereafter, we call \newterm{measurables} the sets that form a semiring of sets.
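The prototypical example of a semiring of sets, which we will meet again with the Lebesgue measure, is the class of half-open intervals $[a;b)$ of real numbers: it contains the empty set ($[a;a)$), it is stable by finite intersection since $[a;b) \cap [c;d) = [\max(a,c); \min(b,d))$, and it satisfies \L+semi_setD_closed+ since, for $c \leq d$,
$$[a;b) \setminus [c;d) = [a; \min(b,c)) \cup [\max(a,d); b)$$
is a disjoint union of at most two half-open intervals, even though it is in general not a half-open interval itself (so that the class is stable neither by union nor by difference).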
A \newterm{ring of sets} is a non-empty set of sets that is closed under union and difference. It can be defined by extending a semiring of sets with the axiom that it is stable by finite union. Its interface can be defined using \textsc{Hierarchy-Builder}{} as follows:
\begin{lstlisting}[numbers=left,xleftmargin=3.0ex]
HB.mixin Record RingOfSets_from_semiRingOfSets d T of isSemiRingOfSets d T := {7\label{loc:ringextension}7
  measurableU : setU_closed 7\label{loc:measurableU}7
    (@measurable d [the semiRingOfSetsType d of T]) }. 7\label{loc:the}7
#[short(type=ringOfSetsType)]
HB.structure Definition RingOfSets d := 7\label{loc:RingOfSets}7
  {T of RingOfSets_from_semiRingOfSets d T & SemiRingOfSets d T 7\label{loc:extends}7}.
\end{lstlisting}
This declaration provides a new mixin that extends the mixin for semiring of sets (note the \L!of! declaration at line~\ref{loc:ringextension}). At line~\ref{loc:measurableU}, the expression \L+setU_closed G+ means that the class \L+G+ is stable by finite unions and is formally defined as \L+forall A B, G A -> G B -> G (A `|` B)+. The modifier \L!@! at line~\ref{loc:the} is \textsc{Coq}{} syntax to enforce the explicit input of implicit arguments. The notation \L![the A of B]! used at line~\ref{loc:the} is a generic syntax provided by \textsc{Hierarchy-Builder}{} to infer the mathematical structure \L!A! corresponding to the support type~\L!B!. The corresponding structure is declared at line~\ref{loc:RingOfSets} where it is marked as satisfying the mixin \L!RingOfSets_from_semiRingOfSets! and extending the structure of semiring of sets \L!SemiRingOfSets! (line \ref{loc:extends}).

An \newterm{algebra of sets} is a set of sets that contains the empty set and is stable by (finite) union and complement. Algebras of sets are defined as extending rings of sets with the axiom that the full set belongs to the measurables.
The \textsc{Hierarchy-Builder}{} declaration is similar to the one of semiring of sets and ring of sets:
\begin{lstlisting}
HB.mixin Record AlgebraOfSets_from_RingOfSets d T of RingOfSets d T := {
  measurableT : measurable [setT: T] }.
#[short(type=algebraOfSetsType)]
HB.structure Definition AlgebraOfSets d :=
  {T of AlgebraOfSets_from_RingOfSets d T & RingOfSets d T}.
\end{lstlisting}
Finally, $\sigma$-\textsl{algebra}{}s are defined by extending algebras of sets with the axiom of stability by countable union:
\begin{lstlisting}
HB.mixin Record Measurable_from_algebraOfSets d T of AlgebraOfSets d T := {
  bigcupT_measurable : forall F, (forall k, measurable (F k)) ->
    measurable (\bigcup_k (F k)) }.
#[short(type=measurableType)]
HB.structure Definition Measurable d :=
  {T of Measurable_from_algebraOfSets d T & AlgebraOfSets d T}.
\end{lstlisting}
These definitions form an inheritance chain (Fig.~\ref{fig:inheritance_chain}), so that $\sigma$-\textsl{algebra}{}s are also algebras of sets, which are also rings of sets, and therefore semirings of sets.

\begin{figure}[h]
\centering
\includegraphics[width=2.5cm]{ismeasurable.pdf}
\caption{Inheritance chain from semiring of sets to measurables}
\label{fig:inheritance_chain}
\end{figure}

\subsubsection{Direct Definition of Measurable Spaces}
\label{sec:measurable_factory}

The set of interfaces provided by the hierarchy of mathematical structures for measure theory is not the only way to instantiate structures. We also provide factories (introduced in Sect.~\ref{sec:hb_overview}). For example, the following factory provides an alternative interface for $\sigma$-\textsl{algebra}{}s:
\begin{lstlisting}
HB.factory Record isMeasurable d T := {
  ptclass : Pointed.class_of T;
  measurable : set (set T) ;
  measurable0 : measurable set0 ;
  measurableC : forall A, measurable A -> measurable (~` A) ;
  measurable_bigcup : forall F, (forall k, measurable (F k)) ->
    measurable (\bigcup_k (F k)) }.
\end{lstlisting}
It is arguably closer to the textbook definition from which we started Sect.~\ref{sec:math_struct_measure}. More generally, it is often the case that the textbook definition of mathematical structures is better sought in factories rather than in mixins.

\subsection{Generated $\sigma$-\textsl{algebra}{}s}
\label{sec:generated_salgebra}

The notion of \newterm{generated $\sigma$-\textsl{algebra}{}} is used to define the measure extension theorem, and also to develop the theory of measurable functions. The generated $\sigma$-\textsl{algebra}{} \L!<<s D, G >>! is the smallest $\sigma$-\textsl{algebra}{} that contains the set of sets \L!G!, such that the complement is taken w.r.t.\ a set \L!D!. This is defined using a generic \L+smallest+ predicate:
\begin{lstlisting}
Definition smallest C G := \bigcap_(A in [set M | C M /\ G `<=` M]) A. 7\label{loc:smallest}7
...
Context {T}.
Definition "*sigma_algebra*" D G :=
  [/\ G set0, (forall A, G A -> G (D `\` A)) &
    (forall A : (set T)^nat, (forall n, G (A n)) -> G (\bigcup_k A k))].
...
Notation "'<<s' D , G '>>'" := (smallest ("*sigma_algebra*" D) G).
\end{lstlisting}
Below, the notation \L!<<s G >>! is for the measurables of the $\sigma$-\textsl{algebra}{} generated from the set of sets \L!G! with complement taken w.r.t.\ the full set. Note that the definition \L+smallest+ is well-defined (i.e., is indeed the smallest set in the class \L+C+) whenever the smallest fixpoint of the class \L+C+ indeed exists. This is why the definition of a generated $\sigma$-\textsl{algebra}{} can also be found elsewhere~\cite[\sect{4.2}]{boldo2021tr} defined as an inductive predicate instead.
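For a concrete instance of the definitions above, the $\sigma$-\textsl{algebra}{} generated by a single set $A$ (with complement taken w.r.t.\ the full set) can be computed by hand: any $\sigma$-\textsl{algebra}{} containing $A$ must contain $\emptyset$, $A$, $\cplt{A}$, and the full set, and this four-element class is already stable by complement and countable union, so it is exactly the generated $\sigma$-\textsl{algebra}{}.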
The choice of using the \L+smallest+ predicate rather than an inductive definition is for the sake of genericity: we have a unique function symbol and a common theory to deal with all generated classes (Dynkin, $\sigma$-\textsl{algebra}{}, etc.), and since \L+smallest+ itself is monotone, we can reduce the comparison of generated classes to the comparison of the generating classes themselves. However, this has the drawback that the elimination principle and correctness lemmas are not automatically proven by \textsc{Coq}{} as they would be with the \L!Inductive! command. We envision switching back to inductive types together with canonical structures of initiality in the future.

Since the set of sets \L!<<s G >>! forms a $\sigma$-\textsl{algebra}{}, we can equip it with the structure of \L!measurableType! from Sect.~\ref{sec:inheritance_chain} using \textsc{Hierarchy-Builder}{}. This starts by introducing a dedicated identifier \L!salgebraType! (line~\ref{salgebraType} below) and a dedicated display \L!sigma_display! (line~\ref{line:sigma_display}) (this provides us with an example to explain the display mechanism in the next section---Sect.~\ref{sec:display}). Let us assume that we are given the proofs \L!sigma_algebra{0,C,_bigcup}! corresponding to the $\sigma$-\textsl{algebra}{} properties of a generated $\sigma$-\textsl{algebra}{}. To associate to the identifier \L!salgebraType! a structure of $\sigma$-\textsl{algebra}{}, we use the command \L!HB.instance! at line~\ref{line:hbinstance} with the constructor of the factory of Sect.~\ref{sec:measurable_factory} at line~\ref{line:isMeasurableBuild}. The corresponding display appears at line \ref{line:isMeasurableBuild} and the proofs of the $\sigma$-\textsl{algebra}{} properties appear at lines \ref{line:proofs1}--\ref{line:proofs2}.
\begin{lstlisting}[numbers=left,xleftmargin=3.0ex]
Definition salgebraType {T} (G : set (set T)) := T. 7\label{salgebraType}7
Definition sigma_display {T} : set (set T) -> measure_display.
7\label{line:sigma_display}7 Proof. exact. Qed.
Section g_salgebra_instance.
Variables (T : pointedType) (G : set (set T)).
Canonical salgebraType_eqType := EqType (salgebraType G) (Equality.class T).
Canonical salgebraType_choiceType := ChoiceType (salgebraType G) (Choice.class T).
Canonical salgebraType_ptType := PointedType (salgebraType G) (Pointed.class T).
HB.instance Definition _ := 7\label{line:hbinstance}7
  @isMeasurable.Build (sigma_display G) (salgebraType G) 7\label{line:isMeasurableBuild}7
  (Pointed.class T) <<s G >>
  (@sigma_algebra0 _ setT G) (@sigma_algebraC) 7\label{line:proofs1}7
  (@sigma_algebra_bigcup _ setT G). 7\label{line:proofs2}7
End g_salgebra_instance.
\end{lstlisting}

\subsection{Displays for Measurable Types}
\label{sec:display}

We saw in the previous sections that the structures for measure theory are parameterized by a display parameter. Its purpose is to disambiguate the printing of expressions of the (input) form \L!measurable A!. This is useful when several of them appear in the same local context or when \L!A! does not provide enough information to infer the right measurable type. More concretely, let us consider the basic case of a measurable type~\L!T! with display~\L!d! (e.g., \L!T : ringOfSetsType d!). To assert that a set \L!A : set T! is measurable, one can always write \L!measurable A!. Yet, the display mechanism is such that \textsc{Coq}{} prints back \L!d.-measurable A!. This is achieved by providing a type for displays and a notation:
\begin{lstlisting}
Inductive measure_display := default_measure_display.
Declare Scope measure_display_scope.
Delimit Scope measure_display_scope with mdisp.
Notation "d .-measurable" := (@measurable d) : measure_display_scope.
\end{lstlisting}
The display mechanism can be used to disambiguate expressions. Let us consider the case of generated $\sigma$-\textsl{algebra}{}s. We saw that the display for generated $\sigma$-\textsl{algebra}{} is parameterized by the generator set (\L!sigma_display!
in the previous section---Sect.~\ref{sec:generated_salgebra}). We can therefore introduce a notation \L!G.-sigma! for the display associated with the generator set \L!G! and a notation \L!G.-sigma.-measurable! for the measurables of the $\sigma$-\textsl{algebra}{} generated by \L!G!:
\begin{lstlisting}
Notation "G .-sigma" := (sigma_display G) : measure_display_scope.
Notation "G .-sigma.-measurable" :=
  (measurable : set (set (salgebraType G))) : classical_set_scope.
\end{lstlisting}
For example, we can use these notations to regard the empty set \L!set0! as a member of the $\sigma$-\textsl{algebra}{} generated by any set \L!G!:
\begin{lstlisting}
Goal forall (T : pointedType) (G : set (set T)), G.-sigma.-measurable set0.
Proof. by move=> T G; exact: measurable0. Qed.
\end{lstlisting}
In comparison, the input \L!measurable set0! would not type check. The rest of this paper will provide more illustrations of the display mechanism.

\subsection{Functions on Classes of Sets}
\label{sec:functions_classes_sets}

There are several notions of functions from classes of sets to the real numbers (or, implicitly, extended reals) which fall under the umbrella name of ``measure''. In the literature, they are named additive measures (a.k.a.\ \newterm{content}), premeasures, outer measures, $\sigma$-subadditive measures, and $\sigma$-additive measures (a.k.a.\ measures). We define predicates for all of these notions, but we only define structures for the three most useful of them: additive measures, measures, and outer measures.

\subsubsection{Additive Measures}
\label{sec:additive_measure}

An \newterm{additive measure}~$\mu$ is a non-negative function defined over a semiring of sets such that the measure of the empty set is $0$ and such that $\mu(\cup_{k=1}^n F_k) = \sum_{k=1}^n \mu(F_k)$ for a finite number of pairwise-disjoint measurable sets~$F$.
We first provide a definition for the latter condition:
\begin{lstlisting}
Definition semi_additive mu := forall F n,
  (forall k, measurable (F k)) -> trivIset setT F ->
  measurable (\big[setU/set0]_(k < n) F k) ->
  mu (\big[setU/set0]_(k < n) F k) = \sum_(k < n) mu (F k).
\end{lstlisting}
The pairwise-disjointness of sets is captured by the generic predicate \L!trivIset! (Table~\ref{tab:set}). Asking $\cup_{k=1}^n F_k$ to be measurable is superfluous on a ring of sets, where it follows from stability by finite union. Additive measures are finally defined by the following mixin and structure:
\begin{lstlisting}
HB.mixin Record isAdditiveMeasure (R : numFieldType)
    (T : semiRingOfSetsType) (mu : set T -> \bar R) := {
  measure_ge0 : forall x, 0 <= mu x ;
  measure_semi_additive : semi_additive mu }.
HB.structure Definition AdditiveMeasure (R : numFieldType)
    (T : semiRingOfSetsType) := { mu & isAdditiveMeasure R T mu }.
\end{lstlisting}
See Fig.~\ref{fig:numtypes} for \L!numFieldType!. An essential property of additive measures is that they can be extended from a semiring of sets~$\mathcal{S}$ to its generated ring of sets~$R(\mathcal{S})$. We can define the latter similarly to how we defined generated $\sigma$-\textsl{algebra}{}s in Sect.~\ref{sec:generated_salgebra}:
\begin{lstlisting}
Definition setring G := [/\ G set0, setU_closed G & setDI_closed G].
Notation "'<<r' G '>>'" := (smallest setring G).
\end{lstlisting}
Generated rings of sets can be equipped with a canonical structure of ring of sets. The measurables of these rings of sets can in fact be expressed as finite disjoint unions of (non-empty) sets in the original semiring of sets~$\mathcal{S}$ (\L!rT! in the lemma below):
\begin{lstlisting}
Lemma "*ring_finite_set*" (A : set rT) : measurable A ->
  exists B : set (set T), [/\ finite_set B,
    (forall X, B X -> X !=set0), trivIset B id,
    (forall X, X \in B -> measurable X) & A = \bigcup_(X in B) X].
\end{lstlisting}
Thanks to this lemma, we can make this decomposition explicit by the following function~\L+decomp+, which, given a set $A$ in~$R(\mathcal{S})$, returns a finite set of sets in~$\mathcal{S}$ that cover~$A$:
\begin{lstlisting}
Definition decomp (A : set rT) : set (set T) :=
  if A == set0 then [set set0] else
  if pselect (measurable A) is left mA then
    projT1 (cid ("*ring_finite_set*" mA)) else [set A].
\end{lstlisting}
The function \L!decomp! is written in an idiomatic way to retrieve in \textsc{Coq}{} a witness from an existential proof. The identifier \L!pselect! comes from \textsc{MathComp-Analysis}{} and is a strong version of the law of excluded middle~\cite[Sect.~5.2]{cohen2018jfr}; \L!cid! is the axiom of constructive indefinite description. Using \L!decomp!, we can extend the measure over the original semiring of sets by summing the components:
\begin{lstlisting}
Definition measure (R : numDomainType) (mu : set T -> \bar R)
  (A : set rT) : \bar R := \sum_(X \in decomp A) mu X.
\end{lstlisting}
We thus have a \L+measure mu+ function for all functions \L+mu+, which is equal to \L+mu+ on the sets of the semiring of sets where \L+mu+ is defined, and which is a content on the generated ring of sets when \L+mu+ is a content (section \L!additive_measure! in~\cite[file \L!measure.v!]{analysis}), $\sigma$-subadditive when \L+mu+ is (lemma \L!ring_sigma_sub_additive!), and $\sigma$-additive when \L+mu+ is (lemma \L!ring_sigma_additive!). \subsubsection{Measures} \label{sec:measure} A \newterm{measure}~$\mu$ is defined similarly to an additive measure. The difference lies in the additivity axiom: $\mu(\cup_k F_k) = \sum_k \mu(F_k)$ must hold for any sequence~$F$ of pairwise-disjoint measurable sets.
We provide a definition for the latter condition, generalizing it to semirings of sets by requiring the union $\cup_k F_k$ to be measurable as a precondition, thus merging the notions of measure and premeasure into one:
\begin{lstlisting}
Definition semi_sigma_additive mu := forall F,
  (forall k, measurable (F k)) -> trivIset setT F ->
  measurable (\bigcup_k F k) ->
  (fun n => \sum_(k < n) mu (F k)) --> mu (\bigcup_k F k).
\end{lstlisting}
The notation \L!f --> l! denotes convergence of functions and comes from \textsc{MathComp-Analysis}{}~\cite{cohen2018jfr}. In particular, when \L!f --> l! holds, we have \L!lim f = l! using the \L!lim! notation of Sect.~\ref{sec:ereal}. The precondition \L!measurable (\bigcup_k F k)! can be removed whenever we know that the underlying type is a $\sigma$-\textsl{algebra}{}. We use this definition to define the mixin corresponding to measures, which extends the one for additive measures:
\begin{lstlisting}
HB.mixin Record isMeasure0 (R : numFieldType) (T : semiRingOfSetsType) mu
    of isAdditiveMeasure R T mu := {
  measure_semi_sigma_additive : semi_sigma_additive mu }.
#[short(type=measure)]
HB.structure Definition Measure (R : numFieldType) (T : semiRingOfSetsType) :=
  {mu of isMeasure0 R T mu &}.
\end{lstlisting}
In practice, to construct a measure, one would rather use the following factory (we introduced the notion of factory in Sect.~\ref{sec:measurable_factory}), whose interface is close to the textbook definition of a measure:
\begin{lstlisting}
HB.factory Record isMeasure (R : realFieldType) (T : semiRingOfSetsType)
    (mu : set T -> \bar R) := {
  measure0 : mu set0 = 0 ;
  measure_ge0 : forall x, 0 <= mu x ;
  measure_semi_sigma_additive : semi_sigma_additive mu }.
\end{lstlisting}
Measures are equipped with the notation \L!{measure set T -> \bar R}!. \subsubsection{Outer Measures} \label{sec:outer_measure} \newterm{Outer measures} are the object of study of the measure extension theorems.
An outer measure is intuitively a ``relaxed'' definition of measure. It does not require the measure to be $\sigma$-\textsl{additive}{} but only $\sigma$-\textsl{subadditive}{}:
\begin{lstlisting}
Definition sigma_subadditive (R : numFieldType) (T : Type)
  (mu : set T -> \bar R) := forall (F : (set T)^nat),
  mu (\bigcup_n (F n)) <= \sum_(n <oo) mu (F n).
\end{lstlisting}
Compared to $\sigma$-\textsl{additivity}{}, in $\sigma$-\textsl{subadditivity}{} the relation between the measure of the countable union and the sum of the measures is an inequality, there are no conditions on the sequence of sets, and the support type need not be a $\sigma$-\textsl{algebra}{}. As with additive measures and measures (Sections~\ref{sec:additive_measure} and~\ref{sec:measure}), an outer measure is encoded as a \textsc{Hierarchy-Builder}{} mixin:
\begin{lstlisting}
HB.mixin Record isOuterMeasure (R : numFieldType) (T : Type)
    (mu : set T -> \bar R) := {
  outer_measure0 : mu set0 = 0 ;
  outer_measure_ge0 : forall x, 0 <= mu x ;
  le_outer_measure : {homo mu : A B / A `<=` B >-> A <= B} ;
  outer_measure_sigma_subadditive : sigma_subadditive mu }.
\end{lstlisting}
(The notation \L!{homo f : x y / r x y >-> s x y}! is a generic \textsc{MathComp}{} notation for homomorphisms \L!f! with respect to the relations \L!r! and \L!s!.) Outer measures come with the notation \L!{outer_measure set T -> \bar R}! for the type. \section{Measure Extension} \label{sec:extension} A standard approach to the construction of measures is to extend a function over a ring of sets or a semiring of sets to a measure over an enclosing $\sigma$-\textsl{algebra}{}. This extension theorem and its variations are known under different names (Carath\'eodory/Carath\'eodory-Fréchet/Carath\'eodory-Hopf/Hahn/etc.\ extension theorem); we will refer to it as the Extension theorem. As in the textbooks we follow~\cite{klenke2014,liintegration}, we decompose this theorem into reusable constructions and lemmas.
The first, which we refer to as the \newterm{outer measure construction}, extends a non-negative function $\mu$ such that $\mu(\emptyset) = 0$ over a semiring of sets $\mathcal{S}$ to an outer measure (Sect.~\ref{sec:caratheodory1}). This is then shown to be a measure over the $\sigma$-\textsl{algebra}{} of \newterm{Carath\'eodory-measurable sets} (Sect.~\ref{sec:caratheodory2}). When restricted to this $\sigma$-\textsl{algebra}{}, we call it the \newterm{Carath\'eodory measure}. Now, if $\mu$ is a $\sigma$-\textsl{subadditive}{} content on $\mathcal{S}$, the $\sigma$-\textsl{algebra}{} of Carath\'eodory-measurable sets contains the $\sigma$-\textsl{algebra}{} generated by~$\mathcal{S}$, and the Carath\'eodory measure is uniquely determined on it by the values of $\mu$ on $\mathcal{S}$ (Sect.~\ref{sec:hahn_extension}). \subsection{Outer Measure Construction} \label{sec:caratheodory1} The first part of the Extension theorem builds an outer measure (Sect.~\ref{sec:outer_measure}) given a function defined over a semiring of sets. In textbooks it is often stated in a weaker form, starting from a ring of sets or an algebra of sets. The outer measure in question is more precisely defined as the infimum of the measures of covers: $\inf_F\left\{ \sum_{k=0}^{\infty} \mu(F_k) \,\text{\textbar}\, (\forall k, \measurable{F_k} ) \land X \subseteq \bigcup_k F_k\right\},$ which translates in \textsc{MathComp-Analysis}{} as:
\begin{lstlisting}
Definition measurable_cover X := [set F | (forall k, measurable (F k)) /\
  X `<=` \bigcup_k (F k)].
\end{lstlisting}
\begin{lstlisting}
Variables (R : realType) (T : semiRingOfSetsType).
Variable mu : {additive_measure set T -> \bar R}.
Definition "*mu_ext*" (X : set T) : \bar R :=
  ereal_inf [set \sum_(k <oo) mu (F k) | F in measurable_cover X].
\end{lstlisting}
The identifier \L!ereal_inf! is from \textsc{MathComp-Analysis}{} and corresponds to the infimum of a set of extended real numbers. In the following, \L!mu_ext mu! is written \L!mu^*!.
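To give a concrete feel for this construction, here is a toy Python sketch (a finite universe with purely hypothetical weights, unrelated to the formal development): it computes the infimum-over-covers extension of a premeasure given on a small class of sets, and checks by brute force which sets satisfy the Carath\'eodory criterion of Sect.~\ref{sec:caratheodory2}.

```python
# Toy illustration (hypothetical finite setting, not the formal development):
# a premeasure mu on the class S = {{}, {0,1}, {2,3}, {0,1,2,3}}, extended
# to an outer measure mu* by taking the infimum over covers by members of S.
from itertools import chain, combinations

U = frozenset({0, 1, 2, 3})
S = {frozenset(): 0, frozenset({0, 1}): 1,
     frozenset({2, 3}): 1, U: 2}            # hypothetical premeasure on S

def subsets(xs):
    """All finite families that can be drawn from xs."""
    xs = list(xs)
    return [frozenset(c) for r in range(len(xs) + 1)
            for c in combinations(xs, r)]

def mu_star(X):
    """Infimum of sum(mu(F_k)) over covers of X by members of S."""
    best = None
    for cover in subsets(S):
        if X <= frozenset(chain.from_iterable(cover)):
            cost = sum(S[F] for F in cover)
            best = cost if best is None else min(best, cost)
    return best

def caratheodory(A):
    """Caratheodory criterion: mu*(X) = mu*(X & A) + mu*(X \\ A) for all X."""
    return all(mu_star(X) == mu_star(X & A) + mu_star(X - A)
               for X in subsets(U))

# mu* coincides with mu on S ...
assert all(mu_star(F) == S[F] for F in S)
# ... and charges a non-measurable set with its cheapest cover:
assert mu_star(frozenset({0})) == 1         # covered by {0,1}
```

On this toy example, the Carath\'eodory-measurable sets computed by `caratheodory` turn out to be exactly the $\sigma$-\textsl{algebra}{} generated by the initial class, matching the situation described in Sect.~\ref{sec:hahn_extension}.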
The difficulty is to show that \L!mu^*! is $\sigma$-\textsl{subadditive}{}. A typical textbook proof~\cite[\sect{X.1}]{liintegration} translates to a proof script of 63 lines of code (lemma \L!mu_ext_sigma_subadditive!, \accompanying{measure.v}). The main technical point is the use of sums over general sets. Precisely, in the course of proving $\sigma$-\textsl{subadditivity}{}, we run into the following subgoal ($\mu^*$ is the outer measure under construction): $ \mu^*(\cup_i F_i) \leq \sum_i^\infty \left(\mu^*(F_i) + \frac{\varepsilon}{2^i}\right). $ The proof goes on by showing $ \mu^*(\cup_i F_i) \leq \sum_{i,j} \mu(G_i \, j) \leq \sum_i \sum_j \mu(G_i \, j) $ for some well-chosen~$G$, such that $ F_i \subseteq \cup_j G_i \, j $ and $ \sum_j \mu(G_i \, j) \leq \mu^*(F_i) + \varepsilon/2^i $. The proof can be completed by the partitioning lemma we saw in Sect.~\ref{sec:esum}. Coming back to \L!mu^*!, we also show that it coincides with the input measure~\L!mu! (lemma \L!measurable_mu_extE! in \accompanying{measure.v}). \subsection{From an Outer Measure to a Measure} \label{sec:caratheodory2} The second part of this construction builds, given an outer measure, a $\sigma$-\textsl{algebra}{} and a measure over it. The resulting $\sigma$-\textsl{algebra}{} is formed of Carath\'eodory measurable sets, i.e., sets~$A$ such that $\forall X, \mu^*(X)=\mu^*(X\cap A) + \mu^*(X\cap\bar{A})$ where $\mu^*$ is an outer measure. Hereafter, the set of Carath\'eodory measurable sets for an outer measure \L!mu! will appear as the notation \L!mu.-cara.-measurable! (this notation is implemented using the display mechanism explained in Sect.~\ref{sec:display}). Given our newly developed theory of sequences of extended real numbers (Sect.~\ref{sec:ereal_seq}), proving that \L!mu.-cara.-measurable! is a $\sigma$-\textsl{algebra}{} given an outer measure \L!mu! is almost a direct translation of pencil-and-paper proofs (see lemmas \L!caratheodory_measurable_{set0,setC,bigcup}!
in \cite[file \L!measure.v!]{analysis}). Similarly, proving that the restriction of the outer measure \L!mu! to \L!mu.-cara.-measurable! is a measure is also almost a direct translation of pencil-and-paper proofs (see lemmas \L!caratheodory_measure{0,_ge0,_sigma_additive}!). We formally prove a number of properties about the resulting measure, in particular that it is \newterm{complete}, i.e., negligible sets are measurable. \subsection{The Measure Extension Theorem} \label{sec:hahn_extension} Finally, we show that a measure over a semiring of sets can be extended to a measure over a $\sigma$-\textsl{algebra}{} which contains all the measurable sets of the smallest $\sigma$-\textsl{algebra}{} that contains the semiring of sets. We place ourselves in the following context: \begin{lstlisting} Variables (d : measure_display) (R : realType). Variable T : semiRingOfSetsType d. Variable mu : {additive_measure set T -> \bar R}. \end{lstlisting} In this context, we can build an outer measure \L!mu^*! using the results of Sect.~\ref{sec:caratheodory1} and its $\sigma$-\textsl{algebra}{} \L!mu^*.-cara.-measurable! using the results of Sect.~\ref{sec:caratheodory2}. We can show that this $\sigma$-\textsl{algebra}{} contains all the measurables generated from the semiring of sets: \begin{lstlisting} Hypothesis mu_sub : sigma_sub_additive mu. Lemma sub_caratheodory : (d.-measurable).-sigma.-measurable `<=` mu^*.-cara.-measurable. \end{lstlisting} Recall from Sect.~\ref{sec:display} that \L!G.-sigma.-measurable! corresponds to the $\sigma$-\textsl{algebra}{} generated from \L!G! and that in our context \L!d.-measurable! corresponds to the measurables of the semiring of sets~\L!T!. We use this last fact to build a measure over the $\sigma$-\textsl{algebra}{} generated from the semiring of sets: this is the Hahn extension~\cite{liintegration} (recall from Sect.~\ref{sec:generated_salgebra} that \L!salgebraType G! 
is the measurable type generated by~\L!G!):
\begin{lstlisting}
Let I := salgebraType (@measurable _ T).
Let Hahn_ext : set I -> \bar R := mu^*.
HB.instance Definition _ := isMeasure.Build _ _ Hahn_ext
  Hahn_ext0 Hahn_ext_ge0 Hahn_ext_sigma_additive.
\end{lstlisting}
The proofs \L!Hahn_ext{0,ge0,_sigma_additive}! correspond to the properties of a measure as explained in Sect.~\ref{sec:measure}. See \accompanying{measure.v} for details. Furthermore, we prove that the measure extension is unique. We use monotone classes for that purpose~\cite[\sect{V.2.1}]{liintegration}. This can also be proved using the equivalent notion of Dynkin systems (as mentioned in~\cite{holzl2011itp}, see also \accompanying{measure.v}). Uniqueness holds under the condition that the measure is $\sigma$-\textsl{finite}{}, i.e., the full set can be covered by a countable union of sets of finite measure. When this holds, any other measure \L!mu'! that coincides with \L!mu! on the original semiring of sets also coincides with the measure extension over the generated $\sigma$-\textsl{algebra}{}:
\begin{lstlisting}
Lemma Hahn_ext_unique : @sigma_finite _ T setT mu ->
  (forall mu' : {measure set I -> \bar R},
    (forall X, d.-measurable X -> mu X = mu' X) ->
    (forall X, (d.-measurable).-sigma.-measurable X -> Hahn_ext X = mu' X)).
\end{lstlisting}
\section{Construction of a Measure over a Semiring of Sets} \label{sec:lebesgue_measure} In this section, we explain how we derive the Lebesgue measure from the semiring of sets of intervals of the form $]a, b]$ using the measure extension from the previous section. \subsection{Intervals} In \textsc{MathComp}{}, the type \L!interval R!, where \L!R! is typically an ordered type, is defined as the pairs of bounds of type \L!itv_bound!:
\begin{lstlisting}
Variant itv_bound (T : Type) : Type :=
  BSide : bool -> T -> itv_bound T | BInfty : bool -> itv_bound T.
Variant interval (T : Type) := Interval of itv_bound T & itv_bound T.
\end{lstlisting}
The constructor \L!BSide!
is for open or closed bounds, \L!BInfty! is for infinite bounds. How the boolean parameter distinguishes between open and closed bounds is better explained with illustrations. For example, the left bounds of the intervals \L!`[x, +oo[! and \L!`]x, +oo[! are respectively \L!BSide true x! and \L!BSide false x!, while the right bound of the interval \L!`]-oo, x[! is \L!BSide true x!. This type allows for the statements of generic lemmas about intervals, when they happen to hold independently of whether a bound is open or closed. The length of an interval is defined by subtracting its left bound from its right bound. For the sake of generality, this is formally defined over arbitrary sets for which we take the hull using \L!Rhull! (see \accompanying{normedtype.v} for the definition of \L!Rhull!):
\begin{lstlisting}
Definition hlength (A : set R) : \bar R := let i := Rhull A in i.2 - i.1.
\end{lstlisting}
\subsection{Semiring of Sets and Lebesgue Measure} \label{sec:semiring_lebesgue_instance} Let us define the following set of open-closed intervals:
\begin{lstlisting}
Definition ocitv_type : Type := R.
Definition ocitv := [set `]x.1, x.2] | x in setT].
\end{lstlisting}
This set forms a semiring of sets. Indeed, it contains \L!set0!, it is closed under finite intersection, and it satisfies the \L!semi_setD_closed! predicate from Sect.~\ref{sec:inheritance_chain} (proofs \L!ocitv{0,I,D}! below):
\begin{lstlisting}
Definition ocitv_display : Type -> measure_display.
Proof. exact. Qed.
HB.instance Definition _ := @isSemiRingOfSets.Build (ocitv_display R)
  ocitv_type (Pointed.class R) ocitv ocitv0 ocitvI ocitvD.
\end{lstlisting}
On the other hand, \L!hlength! is an additive measure. Indeed, the empty set has length~$0$ (proof \L!hlength0! below), \L!hlength! is non-negative (proof \L!hlength_ge0'!), and, more importantly, \L!hlength! is \L!semi_additive! over \L!ocitv!:
\begin{lstlisting}
Lemma hlength_semi_additive :
  semi_additive (hlength : set ocitv_type -> _).
Proof.
(* see 7{\color{myred}{\cite{analysis}}}7 *) Qed.
HB.instance Definition _ := isAdditiveMeasure.Build R _ hlength
  (@hlength0 _) (@hlength_ge0') hlength_semi_additive.
\end{lstlisting}
Moreover, \L!hlength! is also $\sigma$-\textsl{subadditive}{} over \L!ocitv!, so that we obtain the Lebesgue measure as an application of the measure extension from Sect.~\ref{sec:hahn_extension}:
\begin{lstlisting}
Lemma hlength_sigma_sub_additive :
  sigma_sub_additive (hlength : set ocitv_type -> _).
Proof. (* see 7{\color{myred}{\cite{analysis}}}7 *) Qed.
Definition lebesgue_measure : {measure set (salgebraType ocitv) -> \bar R} :=
  Hahn_ext_measure hlength_sigma_sub_additive.
\end{lstlisting}
The above construction provides a unique measure that applies to a $\sigma$-\textsl{algebra}{} generated from open-closed intervals (see Sect.~\ref{sec:generated_salgebra} for \L!salgebraType!), which includes the Borel sets: this is the definition of the Lebesgue measure. The $\sigma$-\textsl{algebra}{} generated from open-closed intervals can easily be shown to be the same as the one generated by open intervals, open rays, etc. It can also be easily extended to a $\sigma$-\textsl{algebra}{} over extended real numbers. These facts (see~\accompanying{lebesgue_measure.v}) are useful to establish the properties of measurable functions in the next section. \section{Construction of the Lebesgue Integral} \label{sec:lebesgue_integral} We now show that the infrastructure we have developed for the Lebesgue measure can be used to develop the theory of the Lebesgue integral up to Fubini's theorem, which covers the typical set of properties that demonstrate the usefulness of such a formalization. This experiment improves in particular on related work in \textsc{Coq}{} by providing theorems for functions that are not necessarily non-negative and that are extended-real valued, and also by experimenting with simpler encodings, in particular the one of simple functions.
Hereafter, we shorten code snippets with the following convention: \L!T! has type \L!measurableType d! for some display parameter \L!d!, \L!R! has type \L!realType!, and \L!mu! is a measure of type \L!{measure set T -> \bar R}!. \subsection{Measurable Functions} Ultimately, the Lebesgue integral is about \newterm{measurable functions}. A function is measurable when the preimage of any measurable set is measurable. We defined it for functions with domain~\L!D! as follows:
\begin{lstlisting}
Definition measurable_fun d d' (T : measurableType d) (U : measurableType d')
    (D : set T) (f : T -> U) :=
  measurable D -> forall Y, measurable Y -> measurable (D `&` f @^-1` Y).
\end{lstlisting}
Note that when, in the above definition, \L!T! or \L!U! is actually \L!R! or \L!\bar R! with \L!R : realType!, a concrete instance of $\sigma$-\textsl{algebra}{} needs to have been declared beforehand as explained in Sect.~\ref{sec:semiring_lebesgue_instance}. \def\indic#1{\textbf{1}_{#1}} \subsection{Simple Functions} \label{sec:simple_function} The construction of the Lebesgue integral starts with simple functions. A \newterm{simple function} $f$ is typically defined by a sequence of pairwise-disjoint and measurable sets $A_0, \ldots, A_{n-1}$ and a sequence of elements $a_0, \ldots, a_{n-1}$ such that $f(x) = \sum_{k=0}^{n-1} a_k\indic{A_k}(x)$. It might be tempting (in particular for a computer scientist) to encode this definition using lists to represent the range of simple functions. This actually turns out to be detrimental to formalization (see Sect.~\ref{sec:related_work}). Instead, we strive for modularity by obtaining simple functions from even more basic functions. For that purpose, we again put \textsc{Hierarchy-Builder}{} to good use. We first define functions with a finite image (notation \L!{fimfun T >-> R}!):
\begin{lstlisting}
HB.mixin Record FiniteImage aT rT (f : aT -> rT) :=
  {fimfunP : finite_set (range f)}.
HB.structure Definition FImFun aT rT := {f of @FiniteImage aT rT f}.
\end{lstlisting}
We then package measurable functions (notation \L!{mfun T >-> R}!):
\begin{lstlisting}
HB.mixin Record IsMeasurableFun (aT : measurableType) (rT : realType)
    (f : aT -> rT) := { measurable_funP : measurable_fun setT f }.
HB.structure Definition MeasurableFun aT rT := {f of @IsMeasurableFun aT rT f}.
\end{lstlisting}
As a consequence, simple functions (notation \L!{sfun T >-> R}!) can be defined by combining both:
\begin{lstlisting}
HB.structure Definition SimpleFun (aT : measurableType) (rT : realType) :=
  {f of @IsMeasurableFun aT rT f & @FiniteImage aT rT f}.
\end{lstlisting}
Similarly, we introduce non-negative functions (notation \L!{nnfun T >-> R}!) and define non-negative simple functions (notation \L!{nnsfun T >-> R}!) resulting in the hierarchy displayed in Fig.~\ref{fig:nnsfun}. \begin{figure}[h] \centering \includegraphics[width=3.5cm]{nnsfun.pdf} \caption{Definition of non-negative simple functions} \label{fig:nnsfun} \end{figure} The introduction of the above collection of types is fertile ground for the formalization of the properties of simple functions. We show in particular that simple functions form a ring structure (a \L!comRingType! in \textsc{MathComp}{}'s parlance) and thus that they can be combined accordingly (see the \L!Section comring! in \accompanying{lebesgue_integral.v}). Among all the simple functions, indicator functions \L!indic A! (notation \L!\1_A!, where \L!A! is a set) are of particular interest because they are used pervasively in the theory of integration:
\begin{lstlisting}
Definition indic {T} {R : ringType} (A : set T) (x : T) : R := (x \in A)%:R.
\end{lstlisting}
In particular, any function with a finite image (and thus any simple function) is a linear combination of indicator functions:
\begin{lstlisting}
Lemma fimfunE T (R : ringType) (f : {fimfun T >-> R}) x :
  f x = \sum_(y \in range f) (y * \1_(f @^-1` [set y]) x).
\end{lstlisting}
This fact is instrumental in proofs using the monotone convergence theorem, such as Fubini's theorem (Sect.~\ref{sec:dct_fubini}). \subsection{The Integral of Simple Functions} \label{sec:sintegral} The integral of a simple function is the sum of its images multiplied by the measure of the associated preimage. In textbooks, the corresponding formula can be written in two ways. One can make explicit the finite image of the simple function and sum w.r.t.\ the indices, i.e., as $\sum_{k=0}^{n-1} a_k\mu(A_k)$ using the notations from the previous section and some measure $\mu$. Since the image of a simple function is finite, one can alternatively use sums over finite supports (Sect.~\ref{sec:fsbigop}) and write: $\sum_{x \in \mathbb{R}} x \, \mu(f^{-1}\{x\})$. From the viewpoint of formalization, the former reveals implementation details while the latter is more compact and allows for the following simple definition of the integral of simple functions:
\begin{lstlisting}
Variables (T : Type) (R : numDomainType) (mu : set T -> \bar R).
Variable (f : T -> R).
Definition sintegral := \sum_(x \in [set: R]) x%:E * mu (f @^-1` [set x]).
\end{lstlisting}
See Fig.~\ref{fig:numtypes} for \L!numDomainType!. The development of the properties of the integral of simple functions goes on by establishing the properties of the integral of non-negative simple functions such as semi-linearity, monotonicity, etc. Among them, the fact that the integral of the sum of simple functions is the sum of the integrals is the most technical result. Yet, it can be proved within 23 lines of script using generic properties of sums over finite supports (see \accompanying{lebesgue_integral.v}). \subsection{Integral of Measurable Functions} \label{sec:integral_measurable_function} The integral of a measurable function is defined as the difference between its non-negative part and its non-positive part, both considered as non-negative functions.
We therefore first temporarily define the integral of a non-negative measurable function, as the supremum of the integrals of smaller non-negative simple functions:
\begin{lstlisting}
Let nnintegral f := ereal_sup [set sintegral mu h |
  h in [set h : {nnsfun T >-> R} | forall x, (h x)%:E <= f x]].
\end{lstlisting}
Regarding the definition of the integral of a measurable function, we make the design choice to have it parameterized by the domain of integration. For that purpose, we introduce the notation \L!f \_ D! for the function that behaves as \L!f! over the set \L!D! and takes some default value elsewhere. The definition of the integral follows (notation \L!\int[mu]_(x in D) f x!):
\begin{lstlisting}
Variables (D : set T).
Definition integral f (g := f \_ D) := nnintegral (g ^\+) - nnintegral (g ^\-).
\end{lstlisting}
In the code just above, the notation \L!f ^\+! is for $\lambda x. \max(\texttt{f}(x),0)$ and the notation \L!f ^\-! is for $\lambda x. \max(-\texttt{f}(x),0)$. See \accompanying{lebesgue_measure.v} for the development of the theory of integration as presented in~\cite{liintegration}, and the next section for two illustrative examples. \subsection{Dominated Convergence and Fubini's Theorem} \label{sec:dct_fubini} The dominated convergence theorem establishes the convergence of a sequence of integrals of functions $f_n$ given a hypothesis of pointwise convergence of the functions $f_n$ and a hypothesis of domination by an integrable function; these two hypotheses are true ``almost everywhere''. The standard presentation (e.g.,~\cite[\sect{IV.2}]{liintegration}) is to first prove the theorem when the hypotheses are unconditionally true, in which case the proof is essentially a consequence of Fatou's lemma and of the linearity properties of the integral. As for the generalization to hypotheses that are true ``almost everywhere'', it is almost always only sketched in textbooks. The complete statement of the dominated convergence theorem follows.
The notation \L!{ae mu, forall x, P x}! means that \L!P! holds almost everywhere for the measure \L!mu! (see \accompanying{measure.v}):
\begin{lstlisting}
Variables (D : set T) (mD : measurable D).
Variables (f_ : (T -> \bar R)^nat) (f : T -> \bar R) (g : T -> \bar R).
Hypothesis mf_ : forall n, measurable_fun D (f_ n).
Hypothesis mf : measurable_fun D f.
Hypothesis f_f : {ae mu, forall x, D x -> f_ ^~ x --> f x}.
Hypothesis ig : mu.-integrable D g.
Hypothesis f_g : {ae mu, forall x n, D x -> `|f_ n x| <= g x}.
Theorem dominated_convergence : [/\ mu.-integrable D f,
  [sequence \int[mu]_(x in D) `|f_ n x - f x|]_n --> 0 &
  [sequence \int[mu]_(x in D) (f_ n x)]_n --> \int[mu]_(x in D) (f x) ].
\end{lstlisting}
Fubini's theorem is a commutation result about integration. It is a good testbed for a combined formalization of measure and integration theory because, on the one hand, it requires the construction of the \newterm{product measure}, and, on the other hand, its proof relies on several lemmas about integration. Given two measures \L!m1! and \L!m2! respectively over two measurable types \L!T1! and \L!T2!, \L!m2! being $\sigma$-\textsl{finite}{}, the product measure is defined as \L!\int[m1]_x (m2 \o xsection A) x! where \L!xsection A x! is the set of \L!y! such that the pair \L!(x, y)! belongs to \L!A!. By virtue of the uniqueness of measures (Sect.~\ref{sec:hahn_extension}), inverting the roles of \L!m1! and \L!m2! actually gives rise to the {\em same} measure. For the proof of Fubini's theorem, we follow the presentation by Li~\cite[\sect{V.3}]{liintegration}, which is standard. The first step is to prove Fubini-Tonelli's theorem, which is essentially Fubini's theorem for non-negative functions.
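On a toy discrete space (hypothetical finite measures and integrand, for illustration only), the section-based definition of the product measure and the commutation asserted by Fubini's theorem can be checked directly, integrals degenerating into finite sums:

```python
# Toy check (hypothetical finite measures, illustration only): the product
# measure computed through x-sections matches the rectangle formula, and
# the two iterated integrals (finite sums here) commute.
T1, T2 = [0, 1, 2], [0, 1]
m1 = {0: 0.5, 1: 1.0, 2: 2.0}   # measure of each singleton of T1
m2 = {0: 1.5, 1: 0.25}          # measure of each singleton of T2

def xsection(A, x):
    # The x-section of A: the set of y such that (x, y) is in A.
    return {y for (a, y) in A if a == x}

def product_measure(A):
    # Discrete analogue of \int[m1]_x (m2 \o xsection A) x.
    return sum(m1[x] * sum(m2[y] for y in xsection(A, x)) for x in T1)

# On a rectangle X x Y, the product measure is m1(X) * m2(Y):
rect = {(x, y) for x in (0, 1) for y in (0,)}
assert product_measure(rect) == (m1[0] + m1[1]) * m2[0]

# Iterated integrals of a sample integrand commute (Fubini):
f = lambda x, y: x * x + 3 * y
lhs = sum(m1[x] * sum(m2[y] * f(x, y) for y in T2) for x in T1)
rhs = sum(m2[y] * sum(m1[x] * f(x, y) for x in T1) for y in T2)
assert abs(lhs - rhs) < 1e-9
```

The uniqueness argument of Sect.~\ref{sec:hahn_extension} is what guarantees, in the formal development, that defining the product measure through \L!xsection! or through its symmetric counterpart yields the same measure; in this finite sketch both computations reduce to reordering a double sum.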
The decomposition of functions with a finite image into a linear combination of indicator functions (Sect.~\ref{sec:simple_function}) comes in handy to prove Fubini-Tonelli's theorem because the latter is first established for indicator functions, then for simple functions, and finally for measurable functions. The other main ingredient is the monotone convergence theorem (see \accompanying{lebesgue_integral.v}). Fubini's theorem is essentially an application of Fubini-Tonelli's theorem: \begin{lstlisting} Variables (T1 T2 : measurableType) (R : realType). Variables (m1 : {measure set T1 -> \bar R}) (m2 : {measure set T2 -> \bar R}). Hypotheses (sf_m1 : sigma_finite setT m1) (sf_m2 : sigma_finite setT m2). Variable f : T1 * T2 -> \bar R. Hypothesis mf : measurable_fun setT f. Let m : {measure set [the semiRingOfSetsType of T1 * T2] -> \bar R} := Product_measure1 m1 sf_m2. Hypothesis imf : m.-integrable setT f. Theorem Fubini : \int[m1]_x (\int[m2]_y f (x, y)) = \int[m2]_y (\int[m1]_x f (x, y)). \end{lstlisting} \section{Related Work} \label{sec:related_work} \paragraph*{About Measure and Integration Theory in \textsc{Coq}{}} We are not aware of any formalization of the measure extension {\em for general semirings of sets} in a proof-assistant using dependent types (neither \textsc{Coq}{} nor Lean). There is a formalization in \textsc{Coq}{} of the Lebesgue integral based on the \textsc{Coquelicot}{} library~\cite{boldo2021tr} which has recently been extended with a formalization of the Bochner integral~\cite{boldo2022coq}. This development is driven by detailed pencil-and-paper proofs written for the purpose of formalization~\cite{clement2021arxiv}. However, this framework is limited to non-negative functions, the development of the theory of Lebesgue integration stops at Fatou's lemma and does not contain the Lebesgue measure (at least as advertised in~\cite{boldo2021tr}). 
The authors have communicated to us that there is work in progress on the Lebesgue measure and Tonelli's theorem; the former is however not a modular construction like ours, and the latter is of course still about non-negative functions~\cite{milc}. But the difference with our work lies more in the supporting infrastructure than in the gallery of theorems. We cannot reuse this framework because of many diverging choices of conventions, one of them being the assumption that $\infty - \infty = 0$, which results in the addition of the extended real numbers not being associative and prevents the use of iterated operators {\it \`a la} \textsc{MathComp}{}~\cite[\sect{3.2}]{boldo2021tr}. We insist on developing abstractions and components alongside \textsc{MathComp-Analysis}{} so as to find the best encodings. For example, Boldo et al.\ use a very concrete encoding of simple functions whose ranges are represented by sorted lists. Notwithstanding the fact that sorting is not essential to develop integration theory, it appears that this unreasonably complicates formal proofs (compare \cite[\sect{6.3}]{boldo2021tr} with \L!sintegralD!~\accompanying{lebesgue_integral.v}). Another example is having a \L+sigma_algebra+ predicate or a \L+measurableType+ structure while Boldo et al.\ use the fact that a class of sets is a $\sigma$-\textsl{algebra}{} if and only if it is equal to the smallest $\sigma$-\textsl{algebra}{} generated by its elements. We found this characterization impractical in the presence of the hierarchy of classes of sets, for which we need inheritance to work in order to share theorems across structures. With such a characterization, theorems defined on a larger class of sets (e.g., semirings of sets) could not be applied to a $\sigma$-\textsl{algebra}{}.
On a related note, our definition of generated $\sigma$-\textsl{algebra}{} in Sect.~\ref{sec:generated_salgebra} generalizes the one by Boldo et al.\ by defining the complement with respect to an arbitrary set instead of the full set. This is very useful in practice to develop the theory of measurable partial functions and ultimately to define the Lebesgue integral as parameterized by a domain (Sect.~\ref{sec:integral_measurable_function}). The C-CoRN library has a formalization of the fundamental theorem of calculus~\cite{cruzfilipe2002types} but it is in a constructive setting. The coq-proba library~\cite{coqproba} provides a formalization of the Lebesgue measure and integral but limited to real-valued functions and closed intervals. \paragraph*{About Measure Theory in Other Proof Assistants} There are several formalizations of measure theory in proof assistants other than \textsc{Coq}{}: work in Mizar in 1992~\cite{bialas1992jfm} inspired work in HOL in 2002~\cite[\sect{2.2.2}]{hurd2002phd}, which was generalized to HOL4 in 2010~\cite[\sect{2.3}]{coble2010phd} and then triggered a port to Isabelle/HOL in 2011~\cite[\sect{4.2}]{holzl2011itp}, which in turn inspired work in Lean~\cite{mathlib}. The Lebesgue measure is defined in Isabelle/HOL using the gauge integral that was available in Isabelle/HOL, i.e., it is not built as an extension of a premeasure~\cite[\sect{4.6}]{holzl2011itp}. Lean has an extensive formalization of integration theory in a proof assistant with dependent types. The main source of documentation is the code of \textsc{mathlib}{}~\cite{mathlib}. To our understanding, measures are defined as a special case of outer measures~\cite{vanDoorn2021itp}, following the idea that any non-negative function can generate an outer measure which in turn can generate the $\sigma$-\textsl{algebra}{} of its Carath\'eodory measurable sets.
Hence \textsc{mathlib}{} does not have a hierarchy of classes of sets reflecting the literature, as we did (Sect.~\ref{sec:math_struct_measure}), even though we believe they naturally occur inside the proofs. \textsc{mathlib}{} has supported the recent formalization of the Haar measure~\cite{vanDoorn2021itp}, which generalizes the Lebesgue measure. The Lebesgue measure in Mizar has recently been reconstructed~\cite{endou2020jfm} to fix an older formalization~\cite{bialas1995jfm}. This is yet another approach by extension from a semialgebra of intervals, but of course in a very different setting since Mizar has no dependent types. \paragraph*{About Integration Theory in Other Proof Assistants} One can find a substantial formalization of the Lebesgue integral in Isabelle/HOL~\cite{holzl2011itp} (it shares the same history as the Isabelle/HOL formalization of measure theory, coming partially from previous work in HOL4, see above). One can also find a formalization of the Lebesgue integral in HOL~\cite{mhamdi2010itp} with a proof of the monotone convergence theorem and some applications to probability theory. Lean also provides the Lebesgue integral and its standard lemmas up to Fubini's theorem, and it is actually further generalized to the Bochner integral. \section{Conclusion} \label{sec:conclusion} This paper introduced a \textsc{Coq}{} formalization of measure theory and Lebesgue integration that is compatible with \textsc{MathComp}{} and that extends \textsc{MathComp-Analysis}{}. This includes an original formalization of mathematical structures for measure theory (Sect.~\ref{sec:math_struct_measure}) and an original formalization of the construction of measures using the extension theorem (Sect.~\ref{sec:hahn_extension}), whose application to a measure over a semiring of intervals yields the Lebesgue measure (Sect.~\ref{sec:lebesgue_measure}).
This also allows for the construction of the Lebesgue integral and the formalization of its theory up to Fubini's theorem (Sect.~\ref{sec:lebesgue_integral}). We argued about technical aspects of this formalization that we believe improve on related work (Sect.~\ref{sec:related_work}). At the beginning of this experiment, much work was dedicated to the formalization of structures for measure theory and to enriching the foundations (in particular, extended real numbers). As a consequence, progress on measure theory proper was slow. However, our attention to detail started to pay off when formalizing integration theory, which felt comparatively comfortable. Our development now provides new reusable libraries of general interest, in particular for extended real numbers and their sequences (Sect.~\ref{sec:ereal}), sums over general sets (Sect.~\ref{sec:esum}) and over finite supports (Sect.~\ref{sec:fsbigop}). As a concrete illustration of the reusability of our formalization, we can mention the Lebesgue-Stieltjes measure, which could be formalized using the same approach we used for the Lebesgue measure in Sect.~\ref{sec:lebesgue_measure}. \paragraph*{Current and Future Work} The \textsc{Coq}{} community now has several formalizations of integration that rely on different grounds. We have been exchanging ideas with the members of the MILC project~\cite{milc} to look for ways to share the development effort. As the next step of our formalization, we plan to formalize the fundamental theorem of calculus to connect with the theory of derivatives of \textsc{MathComp-Analysis}{}. We have also started working on the formalization of probability theory, so as to generalize previous work on the formalization of discrete probabilities on top of \textsc{MathComp}{} (e.g., \cite{affeldt2020cs}) and to apply it to the formalization of the semantics of programming languages (e.g., the extension of~\cite{affeldt2021jfp} to continuous probabilities). \bibliographystyle{abbrv}
Apartment-Hindi movie by Jagmohan Mundra to release in theatres, on internet Friday Bollywood director Jagmohan Mundhra, who believes that films should be released on every platform on the same day, will simultaneously release his movie 'Apartment' in theatres and on the Rajshri website Friday. Rajshri.com is the official website of Rajshri Media Private Ltd. By IANS-CT / April 22, 2010 Mumbai, April 22 (Calcutta Tube) 'People going to see 'Apartment' will not be able to see it for free, regardless of which platform they choose. The person who wants to download and watch it on the computer will have to pay for it. Even DVDs should be released on the same day as the movie,' Mundhra told IANS. Mundhra believes that releasing the film in every mode simultaneously would help prevent piracy. 'Piracy fulfils the need of people who don't go to the theatre. Whether you release a movie or not, pirated CDs and DVDs are available in the market. It's better to release movies on every platform as long as revenue is generated,' said Mundhra. 'Apartment' is based on a girl from a small town. Preeti Sengupta (Tanushree Dutta) is an air hostess who lives with her boyfriend Karan Malhotra (Rohit Roy). She is possessive and has issues with trust. Suspecting infidelity, Preeti throws her boyfriend out of the house but soon realizes she can't afford the apartment rent on her own. Neha Bhardwaj (Neetu Chandra), a small-town girl, comes and asks for accommodation. Impressed by her simplicity, Preeti believes she has found a perfect roommate. Slowly, however, things begin to go wrong. Actor Anupam Kher plays a pivotal role in the film.
Asked about the deal with Rajshri.com, Mundhra said: 'There is a business share. I don't know the details of the business deal but I know Rajshri.com makes a standard deal. They are the content aggregators. They don't create the content; they take it from people like me.' Tags: Apartment, Hindi Movie, Indian Cinema, Jagmohan Mundra
Peter Jansen (born 1938) was educated as a painter at the Royal Academy of Art in The Hague, the Netherlands. He taught art at the Academy of Art in Rotterdam and the Utrecht School of the Arts. He was a pioneer in the use of computers in art education in the Netherlands, and in 1988 he became Managing Director of the Utrecht School of the Arts, Faculty of Art, Media & Technology. He currently travels the world non-stop, creating his 'Panoramic Works'. He uses a large variety of shot types and then manipulates them together, placing them in one image so that he can create sculptures of movement representing what happens over a period of time. His sculptures are mainly based on human movement.
<?php
/**
 * @package DatabaseSchema
 * @subpackage Tests
 */
class ezcDatabaseSchemaGenericDiffTest extends ezcTestCase
{
    public function tearDown()
    {
        $this->removeTempDir();
    }

    private static function getSchema1()
    {
        return new ezcDbSchema( array(
            'bugdb' => new ezcDbSchemaTable(
                array(
                    'integerfield1' => new ezcDbSchemaField( 'integer' ),
                )
            ),
            'bugdb_deleted' => new ezcDbSchemaTable(
                array(
                    'integerfield1' => new ezcDbSchemaField( 'integer' ),
                )
            ),
            'bugdb_change' => new ezcDbSchemaTable(
                array(
                    'integerfield1' => new ezcDbSchemaField( 'integer' ),
                    'integerfield3' => new ezcDbSchemaField( 'integer' ),
                ),
                array(
                    'primary' => new ezcDbSchemaIndex( array( 'integerfield1' => new ezcDbSchemaIndexField() ), true ),
                    'tertiary' => new ezcDbSchemaIndex( array( 'integerfield3' => new ezcDbSchemaIndexField() ), false, true )
                )
            ),
        ) );
    }

    private static function getSchema2()
    {
        return new ezcDbSchema( array(
            'bugdb' => new ezcDbSchemaTable(
                array(
                    'integerfield1' => new ezcDbSchemaField( 'integer' ),
                )
            ),
            'bugdb_added' => new ezcDbSchemaTable(
                array(
                    'integerfield1' => new ezcDbSchemaField( 'integer' ),
                )
            ),
            'bugdb_change' => new ezcDbSchemaTable(
                array(
                    'integerfield2' => new ezcDbSchemaField( 'integer', 0, true ),
                    'integerfield3' => new ezcDbSchemaField( 'text', 64 ),
                ),
                array(
                    'secondary' => new ezcDbSchemaIndex( array( 'integerfield3' => new ezcDbSchemaIndexField() ), false, true ),
                    'primary' => new ezcDbSchemaIndex( array( 'integerfield2' => new ezcDbSchemaIndexField() ), true )
                )
            ),
        ) );
    }

    private static function getSchema3()
    {
        return new ezcDbSchema( array(
            'table' => new ezcDbSchemaTable(
                array(
                    'from' => new ezcDbSchemaField( 'integer' ),
                )
            ),
            'select' => new ezcDbSchemaTable(
                array(
                    'group' => new ezcDbSchemaField( 'integer' ),
                )
            ),
            'bugdb_change' => new ezcDbSchemaTable(
                array(
                    'from' => new ezcDbSchemaField( 'integer' ),
                    'table' => new ezcDbSchemaField( 'integer' ),
                ),
                array(
                    'primary' => new ezcDbSchemaIndex( array( 'from' => new ezcDbSchemaIndexField() ), true ),
                    'join' => new ezcDbSchemaIndex( array( 'table' => new ezcDbSchemaIndexField() ), false, true )
                )
            ),
        ) );
    }

    private static function getSchema4()
    {
        return new ezcDbSchema( array(
            'table' => new ezcDbSchemaTable(
                array(
                    'from' => new ezcDbSchemaField( 'integer' ),
                )
            ),
            'order' => new ezcDbSchemaTable(
                array(
                    'right' => new ezcDbSchemaField( 'integer' ),
                )
            ),
            'bugdb_change' => new ezcDbSchemaTable(
                array(
                    'group' => new ezcDbSchemaField( 'integer', false, true, 0 ),
                    'table' => new ezcDbSchemaField( 'integer' ),
                ),
                array(
                    'from' => new ezcDbSchemaIndex( array( 'table' => new ezcDbSchemaIndexField() ), false, true ),
                    'primary' => new ezcDbSchemaIndex( array( 'group' => new ezcDbSchemaIndexField() ), true )
                )
            ),
        ) );
    }

    private static function getSchemaDiff1()
    {
        return ezcDbSchemaComparator::compareSchemas( self::getSchema1(), self::getSchema2() );
    }

    private static function getSchemaDiff2()
    {
        return ezcDbSchemaComparator::compareSchemas( self::getSchema3(), self::getSchema4() );
    }

    public function testWrite1()
    {
        $schema = self::getSchemaDiff1();
        $ddl = $schema->convertToDDL( $this->db );
        self::assertEquals( $this->getDiffExpectations1(), $ddl );
    }

    public function testApply1()
    {
        $schema1 = self::getSchema1();
        $schema1->writeToDb( $this->db );
        $schemaDiff = self::getSchemaDiff1();
        $schemaDiff->applyToDb( $this->db );
        $schemaInDb = ezcDbSchema::createFromDb( $this->db );
        $this->resetDb();
        self::assertEquals( self::getSchema2(), $schemaInDb );
    }

    public function testWrite2()
    {
        $schema = self::getSchemaDiff2();
        $ddl = $schema->convertToDDL( $this->db );
        self::assertEquals( $this->getDiffExpectations2(), $ddl );
    }

    public function testWrite2WithDbName()
    {
        $schema = self::getSchemaDiff2();
        $ddl = $schema->convertToDDL( $this->db->getName() );
        self::assertEquals( $this->getDiffExpectations2(), $ddl );
    }

    public function testWrite2WithUnknownDbName()
    {
        $schema = self::getSchemaDiff2();
        try
        {
            $ddl = $schema->convertToDDL( 'hottentottententententoonstellingsterrijnen' );
            self::fail( "Expected exception not thrown." );
        }
        catch ( ezcDbSchemaUnknownFormatException $e )
        {
            self::assertEquals( "There is no 'difference write' handler available for the 'hottentottententententoonstellingsterrijnen' format.", $e->getMessage() );
        }
    }

    public function testWrite2WithBrokenDbName()
    {
        $schema = self::getSchemaDiff2();
        try
        {
            $ddl = $schema->convertToDDL( 42 );
            self::fail( "Expected exception not thrown." );
        }
        catch ( ezcDbSchemaUnknownFormatException $e )
        {
            self::assertEquals( "There is no 'difference write' handler available for the '42' format.", $e->getMessage() );
        }
    }

    public function testApply2()
    {
        $schema1 = self::getSchema3();
        $schema1->writeToDb( $this->db );
        $schemaDiff = self::getSchemaDiff2();
        $schemaDiff->applyToDb( $this->db );
        $schemaInDb = ezcDbSchema::createFromDb( $this->db );
        $this->resetDb();
        $schema4 = self::getSchema4()->getSchema();
        $schemaInDb = $schemaInDb->getSchema();
        self::assertEquals( $schema4['table'], $schemaInDb['table'] );
        self::assertEquals( $schema4['order'], $schemaInDb['order'] );
        self::assertEquals( $schema4['bugdb_change'], $schemaInDb['bugdb_change'] );
    }

    // bug #8900
    public function testTwoTablesPrimaryKey()
    {
        $fileNameWithout = realpath( $this->testFilesDir . 'bug8900-without-index.xml' );
        $schemaWithout = ezcDbSchema::createFromFile( 'xml', $fileNameWithout );
        $fileNameWith = realpath( $this->testFilesDir . 'bug8900.xml' );
        $schemaWith = ezcDbSchema::createFromFile( 'xml', $fileNameWith );
        $diff = ezcDbSchemaComparator::compareSchemas( $schemaWithout, $schemaWith );
        $text = '';
        foreach ( $diff->convertToDDL( $this->db ) as $statement )
        {
            $text .= $statement . ";\n";
        }
        $name = strtolower( $this->db->getName() );
        $sql = file_get_contents( $this->testFilesDir . "bug8900-diff_{$name}.sql" );
        self::assertEquals( $sql, $text );
    }

    // bug #10801
    public function testAddingAutoIncrementField()
    {
        $dbh = $this->db;
        $schema1 = new ezcDbSchema( array(
            'table10801' => new ezcDbSchemaTable(
                array(
                    'id' => ezcDbSchemaField::__set_state( array(
                        'type' => 'integer',
                        'length' => false,
                        'notNull' => false,
                        'default' => 0,
                        'autoIncrement' => false,
                        'unsigned' => false,
                    ) ),
                    'text' => new ezcDbSchemaField( 'text' )
                )
            )
        ) );
        $schema2 = new ezcDbSchema( array(
            'table10801' => new ezcDbSchemaTable(
                array(
                    'id' => ezcDbSchemaField::__set_state( array(
                        'type' => 'integer',
                        'length' => false,
                        'notNull' => true,
                        'default' => null,
                        'autoIncrement' => true,
                        'unsigned' => false,
                    ) ),
                    'text' => new ezcDbSchemaField( 'text' )
                )
            )
        ) );
        $schema1->writeToDb( $dbh );
        $diff = ezcDbSchemaComparator::compareSchemas( $schema1, $schema2 );
        $diff->applyToDb( $dbh );

        $q = $dbh->createInsertQuery();
        $stmt = $q->insertInto( $dbh->quoteIdentifier( 'table10801' ) )
                  ->set( $dbh->quoteIdentifier( 'text' ), $q->bindValue( 'text' ) )
                  ->prepare();
        $stmt->execute();

        $q = $dbh->createSelectQuery();
        $stmt = $q->select( '*' )->from( $dbh->quoteIdentifier( 'table10801' ) )->prepare();
        $stmt->execute();
        $result = $stmt->fetchAll( PDO::FETCH_ASSOC );
        $this->assertEquals( 1, $result[0]['id'] );
    }
}
?>
We supply and service all your splashback needs! Quality & Service incomparable on the Northern Beaches / North Shore. Kim Finlay did a very good job and gave a prompt service. I highly recommend them. I am waiting on the insurance company to approve my claim, but once done I will contact Kim to complete the job. Kim provided a quote over the phone and was polite and professional.
Q: Time integration stability in Modelica

I am constructing a finite volume model in Dymola which evolves in time and space. The spatial discretization is hard coded in the equations section; the time evolution is implemented with a term consisting of der(phi). Is the time integration of Dymola always numerically stable when using a variable step size algorithm? If not, can I do something about that? Is the Euler integration algorithm from Dymola the explicit or implicit Euler method?

A: The stability of time integration is going to depend on your integrator. Generally speaking, implicit methods are going to be much better than explicit ones. But since you mention spatial and time discretization, I think it is worth pointing out that for certain classes of problems things can get pretty sticky. In general, I think elliptic and parabolic PDEs are pretty safe to solve in this way. But hyperbolic PDEs can get very tricky. For example, the Courant-Friedrichs-Lewy (CFL) condition will affect the overall stability of the solution method. But by discretizing in space first, you leave the solver with information only regarding time and it cannot check or conform to the CFL condition. My guess is that a variable time step integrator will detect the error being introduced by not following the CFL condition but that it will struggle to identify the proper time step and probably also end up permitting an unacceptably unstable solution.

A: The Dymola Euler solver by default is explicit (if an inline solver is not selected).
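To make the CFL point above concrete, here is a small self-contained sketch (plain Python/NumPy rather than Modelica, and independent of Dymola) of the classic first-order upwind scheme for the advection equation u_t + c·u_x = 0: with a step size respecting the CFL condition (c·Δt/Δx ≤ 1) the scheme stays bounded, while violating it makes the solution blow up.

```python
import numpy as np

def advect_step(u, c, dx, dt):
    # One first-order upwind step for u_t + c u_x = 0 (c > 0, periodic grid)
    return u - (c * dt / dx) * (u - np.roll(u, 1))

def max_amplitude(c, dx, dt, steps=50, n=100):
    # Advect an initial spike and report the largest amplitude reached
    u = np.zeros(n)
    u[n // 2] = 1.0
    for _ in range(steps):
        u = advect_step(u, c, dx, dt)
    return float(np.max(np.abs(u)))

# CFL number c*dt/dx: <= 1 is stable for this scheme, > 1 blows up
stable = max_amplitude(c=1.0, dx=0.01, dt=0.005)   # CFL = 0.5
unstable = max_amplitude(c=1.0, dx=0.01, dt=0.02)  # CFL = 2.0
```

A variable-step ODE solver that only sees the spatially discretized system has no notion of this CFL limit; it can only react to the error it observes, which is why the answer above expects it to struggle near the stability boundary.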
Q: RAID5 and eSATA

We have RAID5 (contains three disks). Now we want to add an eSATA disk to the same computer, but it mustn't be part of the RAID5, just a single drive (it already has data on it). So, the RAID5 with three disks stays untouched + the eSATA disk as a single (non-RAID) drive, also with all the data on it. Is this possible? When we add it, it tries to add it to the RAID5.

A: Maybe you should check the BIOS configuration, in the Serial ATA controller settings (RAID enabled/disabled).

A: Probably this disk was previously used in a RAID5 array on an Intel onboard RAID controller, and then was reused as a single disk on some other controller (or even an Intel onboard controller configured in non-RAID mode). In this case, unless the array was properly destroyed in the RAID configuration utility of the original RAID controller, the disk still has Intel RAID metadata, and when you connect it to an Intel onboard RAID controller (the same or newer model), the controller tries to assemble the RAID array according to that metadata and fails because there are no other disks from that array. To make this disk usable on an Intel onboard RAID controller again, you need to wipe the Intel RAID metadata from it. Just removing the array in the RAID BIOS, however, will also wipe the existing data, which you don't want to do. I don't know a Windows utility which can remove RAID metadata (except a raw disk editor, but using it would require expert knowledge about RAID metadata formats). However, the dmraid utility for Linux can do it: sudo dmraid -r -E /dev/sdX (find the device name assigned to this particular disk and specify it instead of /dev/sdX; omit sudo if already working as root). You should do this on another machine to avoid messing up your working array, and maybe even disconnect all other disks and boot from a Live CD or USB stick (I would use SystemRescueCd for this task, but an Ubuntu 12.04 disk should work too; old versions might not have a recent enough dmraid). And you should have a backup of the data on that disk anyway…

A: Typically with Intel onboard RAID controllers, when you just add a disk and don't explicitly configure it to be part of a volume, it will be just a disk (unless it has RAID headers with the format the RAID BIOS is expecting). However, it's possible your controller doesn't support this. Try downloading and installing all applicable BIOS and firmware updates for your board, and then try again. Make sure you're not creating a new volume after you plug the disk in...
\section{Introduction} The IceCube Neutrino Observatory, the world's largest neutrino detector, has detected 54 neutrino events within 1347 days with energies between 20 TeV and 2.3 PeV~\cite{aartsen2014observation,kop}. Shower events, most likely due to $\nu_e$ or $\nu_\tau$ charge current $\nu N$ interactions and also due to neutral current $\nu N$ interactions of all flavors, dominate the event list (39, including 3 events with 1--2 PeV energy) while track events, most likely due to $\nu_\mu$ charge current $\nu N$ interactions, constitute the rest. Among the total of 54 events, about 21 could be due to atmospheric neutrino ($9.0^{+8.0}_{-2.2}$) and muon ($12.6\pm 5.1$) backgrounds. A background-only origin of all 54 events has been rejected at the 6.5-$\sigma$ level~\cite{kop}. Therefore a cosmic origin of a number of neutrino events is robust. The track events have on average $\sim 1^\circ$ angular resolution, but the dominant shower events have much poorer angular resolution, $\sim 15^\circ$ on average~\cite{kop}. Searching for the sources of these events is now one of the major challenges in astrophysics. Pinpointing the astrophysical sources where these neutrinos are coming from is difficult due to the large uncertainty in their arrival directions. High energy cosmic rays (CRs) can interact with low energy photons and/or low energy protons to produce neutrinos and high energy gamma rays inside the source or while propagating to earth. So a multi-messenger study of neutrinos, CRs and gamma rays can identify the possible astrophysical sources. In our first attempt to search for sources we looked for a correlation of the earlier 37 cosmic neutrino events with Ultra-High Energy (UHE) CRs~\cite{Moharana:2015nxa}. A detailed analysis of IceCube neutrino events with the Pierre Auger Observatory (PAO) and Telescope Array (TA) has been done in collaboration~\cite{Aartsen:2015dml}.
Here we study the correlation of IceCube neutrino events with TeVCat, the {\it Swift}-BAT 70 month X-ray source catalog~\cite{2013ApJS..207...19B} and the 3LAC source catalog~\cite{Ackermann:2015yfk}. A similar study of the correlation of IceCube neutrinos with gamma-ray sources has also been done~\cite{Resconi:2015nva}; a search for a correlation of $Fermi$-LAT sources with only the track HESE events has been done as well~\cite{anthony}. Recently, a detailed correlation analysis showed an at least 2$\sigma$ result with extreme blazars~\cite{Resconi2} and 3$\sigma$ with star-forming regions~\cite{Emig:2015dma}. For specific correlation studies we use different cuts on the energy flux of these sources, as well as different sets of source types, and show the results of this study. \begin{figure}[h] \includegraphics[width=36pc]{allsource.png} \caption{\label{skymap}Sky map of the 52 IceCube cosmic neutrino events with error circles and sources from different catalogs in the Galactic coordinate system.} \end{figure} \section{IceCube neutrino events and Source catalogs} For our analysis we consider all 52 IceCube detected neutrino events. Two track events (event numbers 28 and 32) have coincident hits in the IceTop surface array and are almost certainly a pair of atmospheric muon background events~\cite{aartsen2014observation}. Therefore we excluded them from our analysis. Fig.~\ref{skymap} shows the sky map of the 52 events in Galactic coordinates with the reported angular errors. For the correlation analysis we have used 3 different source catalogs: the {\it Swift}-BAT 70 month X-ray source catalog~\cite{2013ApJS..207...19B}, the $Fermi$ Third Catalog of Active Galactic Nuclei (3LAC)~\cite{Ackermann:2015yfk}, and TeVCat~\cite{2008ICRC....3.1341W}. The sky map in Fig.~\ref{skymap} shows the extragalactic sources from these catalogs. The {\it Swift}-BAT 70 month X-ray source catalog includes 1210 objects; after excluding Galactic sources the number of sources becomes 785.
In our previous study~\cite{Moharana:2015nxa} we found 18 sources from this catalog that are correlated simultaneously with UHECRs and IceCube neutrino events. The PAO collaboration has also found an anisotropy at $\sim 98.6\%$ CL in UHECRs with energy $\ge 58$ EeV within $\sim 18^\circ$ circles around the AGNs in the {\em Swift}-BAT catalog at distance $\le 130$ Mpc~\cite{PierreAuger:2014yba}. These 18 sources mostly have an X-ray energy flux $\ge 10^{-11}$ ${\rm{erg} \, \rm{cm^{-2}} \, \rm{sec}^ {-1}}$. So, in the present analysis we use all the sources from this catalog which have flux $\ge 10^{-11}$ ${\rm{erg} \, \rm{cm^{-2}} \, \rm{sec}^ {-1}}$. This condition decreased the number of sources to 687. In the sky map of Fig.~\ref{skymap} we have shown these 687 sources. TeVCat contains sources that are detected in very high energy (VHE) gamma rays with energy $\ge 50$ GeV. It includes 161 sources, out of which 22 are unidentified. This is the highest energy source catalog, particularly interesting for $\nu$ production. The sky map in Fig.~\ref{skymap} contains the TeVCat sources that are not in the Galactic plane. The {\it Third Catalog of Active Galactic Nuclei (AGNs)} detected by Fermi LAT (3LAC)~\cite{Ackermann:2015yfk} is a subset of the {\it Fermi} LAT {\it Third Source Catalog (3FGL)}~\cite{2015ApJS..218...23A}. The 3FGL catalog includes 3033 sources detected above a 4$\sigma$ significance (test statistic $>$ 25) on the whole sky during the first 4 years of the Fermi mission (2008-2012). The original 3LAC sample includes 1591 AGNs from 3FGL, though 28 are duplicate associations. An additional cut had also been performed to exclude the Galactic plane region ($|b| \leq 10^\circ$) where the incompleteness of the counterpart catalogs significantly hinders AGN association.
However, in this paper, we chose to study what we call the ``extended 3LAC'' sample of 1773 sources, which includes sources in the Galactic plane that could be associated with several neutrino events. In the extended 3LAC sample, 491 sources are flat spectrum radio quasars (FSRQs), 662 are BL Lacs, 585 are blazars of unknown type (BCU), and 35 are non-blazar AGNs. \section{Statistical method for Correlation study} To study the correlation between cosmic neutrinos and sources from the different catalogs separately, we map the Right Ascension and Declination $(RA, Dec)$ of the event directions and sources into unit vectors on a sphere as $$ {\hat x} = (\sin\theta \cos\phi, \sin\theta \sin\phi, \cos\theta)^T, $$ where $\phi = RA$ and $\theta = \pi/2 - Dec$. The scalar product of the neutrino and source vectors $({\hat x}_{\rm neutrino}\cdot {\hat x}_{\rm source})$ is therefore independent of the coordinate system. The angle between the two vectors \begin{equation} \label{gamma} \gamma = \cos^{-1} ({\hat x}_{\rm neutrino}\cdot {\hat x}_{\rm source}), \end{equation} is an invariant measure of the angular correlation between the neutrino event and source directions~\cite{Virmani:2002xk,Moharana:2015nxa}. Following ref.~\cite{Virmani:2002xk} we use a statistic made from the invariant $\gamma$ for each neutrino direction ${\hat x}_i$ and source direction ${\hat x}_j$ pair as \begin{equation} \label{delta} \delta\chi^2_i = {\rm min}_j (\gamma_{ij}^2/\delta\gamma_i^2), \end{equation} which is minimized over all $j$. Here $\delta\gamma_i$ is the 1-$\sigma$ angular resolution of the neutrino events. We use the exact resolutions reported by the IceCube collaboration for each event~\cite{aartsen2014observation}. A value $\delta \chi^2_i \le 1$ is considered a ``good match'' between the $i$-th neutrino and a source direction. We exploit distributions of all $\delta\chi^2_i$ statistics to study the angular correlation between IceCube neutrino events and the sources in a catalog.
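For illustration, the invariant statistic defined above can be sketched in a few lines of plain Python/NumPy (variable names are ours, not from the analysis code):

```python
import numpy as np

def unit_vec(ra, dec):
    # x = (sin(theta)cos(phi), sin(theta)sin(phi), cos(theta)),
    # with phi = RA and theta = pi/2 - Dec (angles in radians)
    return np.array([np.cos(dec) * np.cos(ra),
                     np.cos(dec) * np.sin(ra),
                     np.sin(dec)])

def delta_chi2(nu_ra, nu_dec, nu_err, sources):
    # delta_chi2_i = min_j gamma_ij^2 / (delta gamma_i)^2;
    # a value <= 1 counts the neutrino as a "hit" (good match)
    nu = unit_vec(nu_ra, nu_dec)
    gammas = np.array([np.arccos(np.clip(np.dot(nu, unit_vec(ra, dec)),
                                         -1.0, 1.0))
                       for ra, dec in sources])
    return float(np.min(gammas ** 2) / nu_err ** 2)

# A neutrino with 10 deg angular resolution; the closest source lies
# 20 deg away, i.e. at 2 sigma, so delta_chi2 = 4 (not a hit)
val = delta_chi2(0.0, 0.0, np.radians(10.0),
                 [(np.radians(20.0), 0.0), (np.radians(90.0), 0.0)])
```

The semi-isotropic null distribution is then obtained by recomputing this statistic after scrambling the $RA$ of the catalog sources while keeping their $Dec$ fixed.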
The distribution from the observed data, giving a number of ``hits'' $N_{\rm hits}$ with $\delta\chi^2 \le 1$, therefore forms the basis to claim correlation. Note that in case more than one source direction from the catalog lies within the error circle of a neutrino event, the $\delta\chi^2$ value for the source closest to the neutrino direction is chosen in this method. We estimate the significance of any correlation in data by comparing $N_{\rm hits}$ with the corresponding number from null distributions. We construct null distributions by randomizing only the $RA$ of the sources, keeping their $Dec$ the same as their direction in the catalog. This {\it semi-isotropic null} is a quick way to check significance. We perform 100,000 realizations of drawing random numbers to assign new $RA$ values for each source to construct $\delta\chi^2$ distributions in the same way as done with the real data. We calculate the statistical significance of the correlation in the real data, or $p$-value (chance probability), using the frequentists' approach. We count the number of times we get a random data set that gives equal or more hits than the $N_{\rm hits}$ in the real data within the $\delta\chi^2 \le 1$ bin. Dividing this number by the total number of random data sets generated (100,000) gives us the $p$-value. We cross-check this $p$-value by calculating the Poisson probability of obtaining $N_{\rm hits}$ within the $\delta\chi^2 \le 1$ bin given the corresponding average hits expected from the null distribution. We found that the $N_{\rm hits}$ distribution in $\delta\chi^2 \le 1$ does not follow the Poisson distribution. \section{Results and Discussions} We used all 45 HBL (high-frequency peaked BL Lac) type sources listed in TeVCat for our first correlation study with the neutrino events. A similar correlation study was carried out in~\cite{Sahu:2014fua} using HBLs and neutrino data.
Our study gave a $p$-value of 0.58 with the frequentist method, while the Poisson probability is 0.1, with 16 neutrinos correlating with different HBLs, almost the same as for the null distribution. The distribution is shown in Fig.~\ref{hbl}. The {\it Swift}-BAT 70 month X-ray catalog selection included 657 sources with energy flux $\ge 10^{-11} {\rm{erg} \, \rm{cm^{-2}} \, \rm{sec}^ {-1}}$. The correlation study with the neutrino events gave a $p$-value of 0.825, with 39 $N_{\rm hits}$ for the real data and nearly 40 for the null distribution, as shown in Fig.~\ref{swift}. \begin{figure}[ht] \begin{minipage}{16pc} \includegraphics[width=16pc]{hbl} \caption{\label{hbl}Correlation study for all 45 HBL sources from TeVCat.} \end{minipage}\hspace{2pc}% \begin{minipage}{18pc} \includegraphics[width=16pc]{swift_11} \caption{\label{swift}Correlation study for {\it Swift}-BAT X-ray catalog sources with energy flux more than $10^{-11} {\rm{erg} \, \rm{cm^{-2}} \, \rm{sec}^ {-1}}$, also shown in~\cite{rewin}.} \end{minipage} \end{figure} The correlation study of all 1773 sources in the extended 3LAC catalog gives a $p$-value of 0.806, with 41 $N_{\rm hits}$ for the real data, as shown in Fig.~\ref{3lac_all}. Most of the 3LAC sources are populated in the region of energy flux $10^{-11} {\rm{erg} \, \rm{cm^{-2}} \, \rm{sec}^ {-1}}$, and the population decreases abruptly at higher flux. So, we took the set of sources with energy flux $\ge 10^{-11} {\rm{erg} \, \rm{cm^{-2}} \, \rm{sec}^ {-1}}$. This decreased the number of sources in the set to 652, and the correlation study has a $p$-value of 0.763, with 39 $N_{\rm hits}$ in $\delta\chi^2 \le 1$, shown in Fig.~\ref{3lac_f}.
\begin{figure}[h] \begin{minipage}{16pc} \includegraphics[width=16pc]{3lac} \caption{\label{3lac_all}Correlation study for all 1773 sources of the extended 3LAC catalog, also shown in~\cite{rewin}.} \end{minipage}\hspace{2pc}% \begin{minipage}{16pc} \includegraphics[width=16pc]{3lac_11} \caption{\label{3lac_f}Correlation study for sources from the extended 3LAC catalog with energy flux $\ge$ $10^{-11} {\rm{erg} \, \rm{cm^{-2}} \, \rm{sec}^ {-1}}$.} \end{minipage} \end{figure} In order to study different types of sources further, we used the 662 BL Lac source set from the extended 3LAC catalog. The correlation $p$-value for these sources is 0.764, shown in Fig.~\ref{bllac}. Similarly, for the 491 FSRQ sources from the extended 3LAC catalog the $p$-value is 0.784, shown in Fig.~\ref{fsrq}. For the BL Lac and FSRQ sources we found 39 and 38 $N_{\rm hits}$, respectively. \begin{figure}[h] \begin{minipage}{16pc} \includegraphics[width=16pc]{3lac_bll} \caption{\label{bllac}Correlation study of BL Lac sources from the extended 3LAC catalog.} \end{minipage}\hspace{2pc}% \begin{minipage}{16pc} \includegraphics[width=16pc]{3lac_fsrq} \caption{\label{fsrq}Correlation study of FSRQ sources from the extended 3LAC catalog.} \end{minipage} \end{figure} The correlation study of IceCube neutrino events with different types of sources such as TeVCat HBLs, 3LAC BL Lacs and FSRQs has been done, but we have not found any statistically significant result for these sets. We have also put constraints on the energy flux of the 3LAC catalog sources and the sources observed by {\it Swift} in 70 months of its observation, and the result is not significant. However, with this type of study we can disfavor different types of extragalactic sources for the IceCube neutrino events.
\begin{table*}\centering \begin{tabular}{|c|c|c|c|} \hline {Catalog Name} & {Source type} & {\# of sources} & {$p$-value} \\ \hline TeVCat & HBL & 45 & 0.58 \\ $Swift$-BAT X-ray & energy flux > $10^{-11} \,{\rm{erg} \, \rm{cm^{-2}} \, \rm{sec}^ {-1}}$ & 657 & 0.825 \\ 3LAC (Extended) & All & 1773 & 0.806 \\ 3LAC (Extended) & energy flux > $10 ^{-11} \,{\rm{erg} \, \rm{cm^{-2}} \, \rm{sec}^ {-1}}$ & 652 & 0.763 \\ 3LAC (Extended) & BL Lac & 662 & 0.764 \\ 3LAC (Extended) & FSRQ & 491 & 0.786 \\ \hline \end{tabular} \caption{Results of the correlation study.} \label{tab:res} \end{table*} \section{Summary} The IceCube neutrino observatory has detected at least 54 neutrino events with energies of 30 TeV--2 PeV. The sources of these events are still a puzzle for both particle physics and astrophysics. In our project we have tried to find a correlation of the arrival directions of these events with the directions of sources from the TeVCat, {\it Swift} and 3LAC catalogs. In order to test for correlation we have used an invariant statistic, called the minimum $\delta \chi^2$, as in~\cite{Virmani:2002xk,Moharana:2015nxa}. Out of 52 neutrino events, 16 were correlated with HBLs from TeVCat, but the $p$-value of this correlation is 0.58. Similarly we studied the correlation of neutrino events with sources from {\it Swift} and 3LAC having energy flux $\ge$ $10^{-11} {\rm{erg} \, \rm{cm^{-2}} \, \rm{sec}^ {-1}}$, for which we also found poor statistical significance. The FSRQs and BL Lacs from the 3LAC catalog also showed no statistically significant correlation.
Q: Consistency in the appearances of loops from rectangular nodes I would like to have a left-positioned loop "look the same" as a below-positioned loop emanating from a rectangular node. \documentclass{article} \usepackage{tikz} \tikzstyle{rect} = [rectangle, rounded corners, minimum width=3cm, minimum height=1cm,text centered, draw=black, fill=red!30] \begin{document} \begin{tikzpicture} \node (A) (r) [rect] {Rectangle}; \path[->] (r) edge [loop below] node {Below} (); \path[->] (r) edge [loop left] node {Left} (); \end{tikzpicture} \end{document} For example, the left loop is fatter and bigger. I want it to start and finish in the middle of the left-vertical side of the rectangle and be of the same size/shape as the lower loop. Thanks Ron A: As per Harish Kumar's answer (with some minor adjustments) \begin{tikzpicture} \node (r) [rect] {Rectangle}; \path[->] (r) edge [loop below] node {Below} (); \path[->] (r.185) edge [out=195, in=170,distance=0.8cm] node[anchor=east] {Left}(r.175); \end{tikzpicture} A: Both loops measure from the center of the node when calculating their parameters, and since the width and height of the node are different, the loops come out in different sizes. You may draw the loops yourself in such cases. \documentclass{article} \usepackage{tikz} \tikzstyle{rect} = [rectangle, rounded corners, minimum width=3cm, minimum height=1cm,text centered, draw=black, fill=red!30] \begin{document} \begin{tikzpicture} \node[draw,minimum size=1cm] (a){}; \path[->] (a) edge [loop below] node {Below} (); \path[->] (a) edge [loop left] node {Left} (); \end{tikzpicture} \begin{tikzpicture} \node (A) (r) [rect] {Rectangle}; \path[->] (r.300) edge [out=300,in=240,distance=1.5cm]node[anchor=north] {Below} (r.240) ; \path[->] (r.190) edge [out=190,in=170,distance=1.5cm] node[anchor=east] {Left} (r.170); \end{tikzpicture} \end{document} You may have to adjust the looseness/tension to make them exactly similar.
More than 100,000 people stood in silence outside the Australian War Memorial in Canberra in the dark, 100 years after the landing at Gallipoli. The words of long-dead servicemen and women, in letters and diary entries read by military personnel, echoed across the crowd. "If Gallipoli was a cry, it was one long cry," one said. Another soldier wrote back to his wife describing the horrors of war. "Hilda, it is horrifying to see how the young men are buried," he wrote. On the centenary of the landing of Anzac troops in Gallipoli, a huge crowd attended the ACT dawn service to commemorate their sacrifice. Australian War Memorial director Brendan Nelson said the estimated 120,000 people - more than double the 50,000 that had been predicted - was 80,000 more than last year. "It fills me with pride that so many people have come to commemorate Anzac Day, on this most significant national occasion, at the Australian War Memorial," Dr Nelson said. Meanwhile in Sydney, a record 30,000 people turned out for the Dawn Service in Martin Place and more than 80,000 came to the Shrine of Remembrance in Melbourne. Just before 6am the Last Post rang out as the sun rose behind the War Memorial. Chief of Army Lieutenant General David Morrison spoke in tribute to Australia's soldiers who fought in wars across the world over the past century. "They were from fate and bloody circumstance, Anzacs by name, but more essentially men and women changed forever by war," he said. "We have not forgotten them, and we are defined at least in part by that act of remembrance. It makes us who we are and reminds us, in the face of an unknown future, who we can be." Janine Pearce, 55, of Queensland, said she and her husband had driven six days to get to the service. "Both of my grandfathers fought in the First World War, one was in France, one was in Egypt, and that's the reason that we decided to come down," she said. "The service was very moving ... 
It's great to see everybody here and that the tradition carries on." Ryan Pilkington, 33, said he had brought his nine-year-old son and six-year-old daughter from Moss Vale in NSW so they could see the centenary service. "We have to pay our respects to soldiers who have gone," he said. "It's part of what makes Australia the way it is... [children] need to get the idea of why we have the freedoms we do in Australia." The Dawn Service commenced with an Indigenous naval rating, Alan Patterson, playing the didgeridoo from the memorial's parapet.
Anzac Day: Thousands brave cold for Canberra Dawn Service on Gallipoli centenary, by Ben Westcott, April 25 2015 - 8:57AM
\section{INTRODUCTION} \label{INTRODUCTION} \vspace{-0.5em} This paper aims to develop an analytical foundation, based on hybrid systems theory, nonlinear control, quadratic programming, and safety-critical systems, for a hierarchical control algorithm that enables safe and stable cooperative locomotion of robotic guide dogs and visually impaired people (see Fig. \ref{v60_human}). One of the most challenging problems in deploying autonomous guide robots is enabling \textit{ubiquitous mobility}. More than half the Earth's landmass is inaccessible to wheeled vehicles, which motivates the deployment of intelligent and highly agile \textit{legged guide robots} to access these environments. In particular, infrastructures for human-centered communities, including factories, offices, and homes, are developed for humans, who are bipedal walkers capable of stepping over gaps and walking up/down stairs. \begin{figure}[t!] \centering \includegraphics[width=\linewidth]{V60_human.jpg} \vspace{-1.8em} \caption{(a) Illustration of a visually impaired human being guided by a quadrupedal assistance robot. (b) Vision 60 robot manufactured by Ghost Robotics \cite{Ghost_Robotics} whose full-order hybrid model will be used for the numerical simulations.} \label{v60_human} \vspace{-1.8em} \end{figure} \vspace{-1em} \subsection{Related Work} \vspace{-0.3em} Although important theoretical and technological advances have occurred in the construction and control of guide robots, state-of-the-art approaches are mainly tailored to the deployment of wheeled vehicles and \textit{not} legged guide robots (e.g., \cite{blind_01,blind_02,blind_03}). Unlike wheeled guide robots, legged robots are \textit{inherently unstable}, complex dynamical systems with a hybrid nature and high degrees of freedom (DOF). This complicates the design of feedback control algorithms that ensure stable and safe cooperative locomotion of guide dogs and humans.
Hybrid systems theory has become a powerful approach for the modeling and control of legged robots, both in theory and practice \cite{Grizzle_Asymptotically_Stable_Walking_IEEE_TAC,Westervelt_Grizzle_Koditschek_HZD_IEEE_TRO,Chevallereau_Grizzle_3D_Biped_IEEE_TRO,Ames_RES_CLF_IEEE_TAC,Ames_DURUS_TRO,Sreenath_Grizzle_HZD_Walking_IJRR,Park_Grizzle_Finite_State_Machine_IEEE_TRO,Poulakakis_Grizzle_SLIP_IEEE_TAC,Tedrake_Robus_Limit_Cycles_CDC,Byl_HZD,Johnson_Burden_Koditschek,Spong_Controlled_Symmetries_IEEE_TAC,Manchester_Tedrake_LQR_IJRR,Vasudevan2017}. Existing nonlinear control approaches that address the hybrid nature of legged locomotion models are developed based on hybrid reduction \cite{Ames_HybridReduction_Original_Paper}, controlled symmetries \cite{Spong_Controlled_Symmetries_IEEE_TAC}, transverse linearization \cite{Manchester_Tedrake_LQR_IJRR}, and hybrid zero dynamics (HZD) \cite{Westervelt_Grizzle_Koditschek_HZD_IEEE_TRO,Ames_RES_CLF_IEEE_TAC}. State-of-the-art nonlinear control approaches for dynamic legged locomotion have been tailored to stable locomotion of legged robots, but \textit{not} stable and safe cooperative locomotion of legged guide robots and visually impaired people. \begin{figure*}[t!] \centering \includegraphics[width=\linewidth]{graph_and_control.jpg} \vspace{-1.8em} \caption{(a) Illustration of the hybrid models for the unleashed and leashed locomotion of the guide robot and human.
(b) Illustration of the proposed hierarchical control strategy for the safe and stable cooperative locomotion.} \label{Illustration_control_scheme} \vspace{-1.7em} \end{figure*} \vspace{-1em} \subsection{Objectives and Contributions} \vspace{-0.2em} The \textit{objectives} and \textit{contributions} of this paper are to present a formal foundation towards 1) developing complex hybrid models of cooperative locomotion of legged guide dogs and humans, and 2) creating a hierarchical control algorithm, based on nonlinear control, quadratic programming, and control barrier functions (CBFs) \cite{ames2017cbf,gurriet2018online,nguyen20163d}, to ensure stability, safety, and obstacle avoidance. We address complex and high-dimensional models of cooperative legged locomotion via a hybrid systems approach. An actuated leash structure is considered for the coordination of the dog and human locomotion while steering the human for safety and obstacle avoidance. At the higher level, the proposed hierarchical control algorithm employs local and nonlinear controllers, referred to as baseline controllers, that induce asymptotically stable unleashed locomotion patterns for the robotic dog and human. The baseline controllers are synthesized via the HZD approach and assumed to have access to the local state measurements as well as the force measurement applied by the leash structure. The leash baseline controller is then designed to keep the human at a safe distance from the robot while following it. The existence and stability of complex and leashed locomotion patterns for the coupled dynamics are addressed through the Poincar\'e return map. At the lower level of the control strategy, the baseline controllers for the dog and leash are modified by a real-time quadratic program (QP) that includes CBF constraints to ensure safety and obstacle avoidance.
The power of the analytical results is demonstrated through an extensive numerical simulation of a complex hybrid model that represents cooperative locomotion of a quadrupedal robot, referred to as Vision 60 \cite{Ghost_Robotics} (see Fig. \ref{v60_human}), and a human model. The complex and full-order hybrid dynamical model has $60$ state variables and $20$ control inputs together with $16$ continuous-time domains to describe a trotting gait of the robot and a bipedal gait of the human. The performance of the closed-loop hybrid system in the presence of a discrete set of obstacles around the complex gait is investigated. \vspace{-0.65em} \section{HYBRID MODELS OF LOCOMOTION} \label{HYBRID MODELS OF LOCOMOTION} \vspace{-0.3em} Hybrid models of locomotion can be described by directed cycles. In this section, we will first present the hybrid models for the locomotion of each agent (i.e., robot and human). We will then address the complex hybrid model that describes the cooperative locomotion of the agents. \vspace{-1em} \subsection{Directed Cycles} \label{Directed Cycles} \vspace{-0.3em} Throughout this paper, we shall consider \textit{multi-domain} hybrid models described by the following tuple \cite{Hamed_Ma_Ames_Vision60} \begin{equation}\label{open_loop_hybrid_model} \Sigma\left(\mathcal{G},\mathcal{D},\mathcal{S},\Delta,FG\right), \end{equation} where $\mathcal{G}:=(\mathcal{V},\mathcal{E})$ represents a \textit{directed cycle} (i.e., graph) for the studied locomotion pattern (see Fig. \ref{Illustration_control_scheme}a). In our formulation, the vertices $\mathcal{V}$ denote the continuous-time dynamics of legged locomotion, referred to as \textit{domains} or \textit{phases}.
The edges $\mathcal{E}\subseteq\mathcal{V}\times\mathcal{V}$ represent the discrete-time transitions among continuous-time dynamics arising from changes in physical constraints (e.g., a new contact point is added to the set of existing contact points with the ground or an existing contact point leaves the ground). For every $v_{i},v_{j}\in\mathcal{V}$, $e=(v_{i}\rightarrow{}v_{j})\in\mathcal{E}$ if $v_{i}$ and $v_{j}$ are adjacent on $\mathcal{G}$. The state variables and control inputs of the hybrid system are shown by $x\in\mathcal{X}$ and $u\in\mathcal{U}$, respectively. The set of state manifolds and set of admissible controls are then denoted by $\mathcal{X}:=\{\mathcal{X}_{v}\}_{v\in\mathcal{V}}$ and $\mathcal{U}:=\{\mathcal{U}_{v}\}_{v\in\mathcal{V}}$, in which $\mathcal{X}_{v}$ and $\mathcal{U}_{v}$ are the state space and admissible controls for the domain $v\in\mathcal{V}$. The set of domains of admissibility are further represented by $\mathcal{D}:=\{\mathcal{D}_{v}\}_{v\in\mathcal{V}}$, where $\mathcal{D}_{v}\subseteq\mathcal{X}_{v}\times\mathcal{U}_{v}$ denotes the set of all points $(x,u)$ on which the unilateral constraints and friction cone conditions are satisfied (i.e., legs are above the walking surface and the foot slippage does not occur). The evolution of the hybrid system during the continuous-time domain $v\in\mathcal{V}$ is described by an ordinary differential equation (ODE) arising from the Euler-Lagrange equations as $\dot{x}=f_{v}(x)+g_{v}(x)\,u$ for all $(x,u)\in\mathcal{D}_{v}$. In addition, $FG:=\{(f_{v},g_{v})\}_{v\in\mathcal{V}}$ represents the set of control systems on $\mathcal{D}$. In order to simplify the presentation, we define the \textit{next domain function} as $\mu:\mathcal{V}\rightarrow\mathcal{V}$ by $\mu(v_{i})=v_{j}$ if $e=(v_{i}\rightarrow{}v_{j})\in\mathcal{E}$. 
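As a concrete illustration (not from the paper), the directed-cycle bookkeeping and the next-domain function $\mu$ can be captured in a few lines; the two-domain gait graph used for the human model later in the paper serves as a stand-in example.

```python
# Minimal bookkeeping for the directed cycle of a multi-domain hybrid
# system: vertices are continuous-time domains, edges are the discrete
# transitions between consecutive domains along the cycle.

class DirectedCycle:
    def __init__(self, vertices):
        # consecutive vertices are adjacent; the last wraps to the first
        self.vertices = list(vertices)
        n = len(self.vertices)
        self.edges = [(v, self.vertices[(i + 1) % n])
                      for i, v in enumerate(self.vertices)]

    def next_domain(self, v):
        """The next-domain function mu: V -> V along the cycle."""
        i = self.vertices.index(v)
        return self.vertices[(i + 1) % len(self.vertices)]

# e.g., the human gait graph with two domains (right and left stance)
human = DirectedCycle(["right_stance", "left_stance"])
```

The robot's trotting graph would be built the same way from its eight domains.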
The evolution of the hybrid system during the discrete-time transition $e\in\mathcal{E}$ is further described by the instantaneous mapping $x^{+}=\Delta_{e}(x^{-})$, where $x^{-}(t):=\lim_{\tau\nearrow t}x(\tau)$ and $x^{+}(t):=\lim_{\tau\searrow t}x(\tau)$ represent the state of the system right before and after the discrete transition, respectively. $\Delta:=\{\Delta_{e}\}_{e\in\mathcal{E}}$ denotes the set of discrete-time dynamics. The guards of the hybrid system are finally given by $\mathcal{S}:=\{\mathcal{S}_{e}\}_{e\in\mathcal{E}}$, on which the state trajectories undergo an abrupt change according to the discrete-time dynamics $\Delta_{e}$ when the state and control trajectories $(x,u)$ hit the surface $\mathcal{S}_{e}$ in $\mathcal{D}$. \vspace{-1em} \subsection{Continuous-Time Dynamics} \label{Continuous-Time Dynamics} \vspace{-0.3em} In this section, we consider the continuous-time dynamics for each agent. We assume that $q\in\mathcal{Q}\subset\mathbb R^{n}$ denotes the configuration variables for the robot and/or human. The configuration space is further represented by $\mathcal{Q}$. The state vector is taken as $x:=\textrm{col}(q,\dot{q})\in\textrm{T}\mathcal{Q}$, where $\textrm{T}\mathcal{Q}$ denotes the tangent bundle of $\mathcal{Q}$. We remark that the Vision 60 robot has $n=18$ DOFs. For the human model, we make use of an $n=12$ DOF tree structure with a torso and two identical legs consisting of femur and tibia links. The control inputs $u\in\mathcal{U}\subset\mathbb R^{m}$ are finally taken as torques at the joint levels (i.e., $m=12$ for the dog robot and $m=6$ for the human model) (see Section \ref{NUMERICAL SIMULATIONS AND RESULTS} for further details on the models). It is supposed that $\eta_{v}(q)\equiv0$ represents the holonomic constraints during the domain $v\in\mathcal{V}$ arising from the contact conditions between the leg ends and the ground.
The equations of motion during the continuous-time domain $v$ are then described by the Euler-Lagrange equations and principle of virtual work as follows \begin{alignat}{6} &D(q)\,\ddot{q}+C\left(q,\dot{q}\right)\dot{q}+G(q)&&=B\,u+J_{v}^\top(q)\,\lambda\nonumber\\ &J_{v}(q)\,\ddot{q}+\frac{\partial}{\partial q}\left(J_{v}(q)\,\dot{q}\right)\dot{q}&&=0,\label{EL_equations} \end{alignat} where $D(q)\in\mathbb R^{n\times{}n}$ denotes the positive definite mass-inertia matrix, $C(q,\dot{q})\,\dot{q}+G(q)\in\mathbb R^{n}$ represents the Coriolis, centrifugal, and gravitational terms, $B\in\mathbb R^{n\times{m}}$ denotes the input distribution matrix, $\lambda$ represents the Lagrange multipliers (i.e., ground reaction forces), and $J_{v}(q):=\frac{\partial \eta_{v}}{\partial q}(q)$ is the contact Jacobian matrix. If $J_{v}$ has full rank, one can eliminate the Lagrange multipliers to express \eqref{EL_equations} as \begin{equation}\label{EL_equations_02} D(q)\,\ddot{q}+H_{v}\left(q,\dot{q}\right)=T_{v}(q)\,u, \end{equation} in which $H_{v}:=\textrm{proj}_{v}\,H+J_{v}^\top\,(J_{v}\,D^{-1}\,J_{v}^\top)^{-1}\frac{\partial}{\partial q}(J_{v}\,\dot{q})\dot{q}$, $H=C(q,\dot{q})\,\dot{q}+G(q)$, $T_{v}:=\textrm{proj}_{v}\,B$, and $\textrm{proj}_{v}:=I-J_{v}^\top(J_{v}\,D^{-1}\,J_{v}^\top)^{-1}J_{v}\,D^{-1}$. We remark that \eqref{EL_equations_02} can be expressed as an input-affine system, i.e., $\dot{x}=f_{v}(x)+g_{v}(x)\,u$. \vspace{-1em} \subsection{Discrete-Time Dynamics} \label{Discrete-Time Dynamics} \vspace{-0.3em} If a new contact point is added to the existing set of contact points with the ground, we employ a rigid impact model \cite{Hurmuzlu_Impact} to describe the abrupt changes in the velocity coordinates according to the impact. 
In particular, if $\delta\lambda$ represents the intensity of the impulsive ground reaction force on the contacting points, integrating \eqref{EL_equations} over the infinitesimal period of the impact (i.e., $[t^{-},t^{+}]$) yields \begin{equation}\label{impact} D(q)\,\dot{q}^{+}-D(q)\,\dot{q}^{-}=J_{\mu(v)}^\top\,\delta\lambda,\,\,\,J_{\mu(v)}(q)\,\dot{q}^{+}=0, \end{equation} in which $\dot{q}^{-}$ and $\dot{q}^{+}$ represent the generalized velocity vector right before and after the impact, respectively. From the continuity of position, we assume $q^{+}=q^{-}$ and then from \eqref{impact}, one can solve for $\dot{q}^{+}$ and $\delta\lambda$ in terms of $(q^{-},\dot{q}^{-})$ to have a closed-form expression as $x^{+}=\Delta_{e}(x^{-})$. Furthermore, if the leg leaves the ground, we take $\Delta_{e}$ as the identity map to preserve the continuity of position and velocity coordinates over the corresponding discrete transition. \vspace{-1em} \subsection{Complex Hybrid Models for Cooperative Locomotion} \label{Complex Hybrid Model for Cooperative Locomotion} \vspace{-0.3em} Throughout this paper, we shall assume that there is a rigid and massless leash model that connects a point on the dog (e.g., head) to a point on the human (e.g., hand or hip) via ball (i.e., socket) joints. The leash will further be assumed to be actuated to control its length and orientation so that the human can follow the dog in a safe manner. This will be clarified with more details in Section \ref{Leash Baseline Controller}. The state and control inputs for the robotic dog and human are shown by $x^{i}:=\textrm{col}(q^{i},\dot{q}^{i})$ and $u^{i}$, respectively, for $i\in\{d,h\}$, where the superscripts ``$d$'' and ``$h$'' stand for the dog and human. 
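Before moving on, note that the plastic impact map in \eqref{impact} reduces to a single linear solve for $(\dot{q}^{+},\delta\lambda)$. A minimal numerical sketch, with a generic mass matrix and contact Jacobian as placeholders (not the Vision 60 model), is:

```python
import numpy as np

def impact_map(D, J, dq_minus):
    """Post-impact velocity and contact impulse from the rigid impact model
        D dq_plus - D dq_minus = J^T dlam,   J dq_plus = 0,
    solved as one linear (KKT-style) system in (dq_plus, dlam)."""
    n, k = D.shape[0], J.shape[0]
    A = np.block([[D, -J.T],
                  [J, np.zeros((k, k))]])
    b = np.concatenate([D @ dq_minus, np.zeros(k)])
    sol = np.linalg.solve(A, b)
    return sol[:n], sol[n:]
```

By construction the map preserves the configuration, $q^{+}=q^{-}$; when a leg lifts off instead, the transition map is the identity, as in the text.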
\noindent\textbf{Complex Graph:} The complex hybrid model that describes the cooperative locomotion of the robot and human will have a complex graph that is taken as the strong product of graphs $\mathcal{G}^{d}=(\mathcal{V}^{d},\mathcal{E}^{d})$ and $\mathcal{G}^{h}=(\mathcal{V}^{h},\mathcal{E}^{h})$. The strong product is denoted by $\mathcal{G}^{c}:=\mathcal{G}^{d}\boxtimes\mathcal{G}^{h}$ that has the vertex set $\mathcal{V}^{c}:=\mathcal{V}^{d}\times\mathcal{V}^{h}$, and any two vertices $(v,w)$ and $(v',w')$ in $\mathcal{V}^{c}$ are adjacent if and only if 1) $v=v'$ and $(w\rightarrow{}w')$ is an edge in $\mathcal{E}^{h}$, or 2) $(v\rightarrow{}v')$ is an edge in $\mathcal{E}^{d}$ and $w=w'$, or 3) $(v\rightarrow{}v')$ is an edge in $\mathcal{E}^{d}$ and $(w\rightarrow{}w')$ is an edge in $\mathcal{E}^{h}$. In our notation, the superscript ``$c$'' represents the complex model. The augmented state and control inputs are further denoted by $x^{c}:=\textrm{col}(x^{d},x^{h})$ and $u^{c}:=\textrm{col}(u^{d},u^{h})$, respectively. \noindent\textbf{Complex Continuous-Time Dynamics:} For every vertex $(v,w)\in\mathcal{V}^{c}$, the evolution of the composite mechanical system, consisting of the robot and human, can be described by the following nonlinear and \textit{coupled} dynamics \begin{alignat}{8} &&D^{d}\left(q^{d}\right)\ddot{q}^{d}&+H^{d}_{v}\left(q^{d},\dot{q}^{d}\right)&=T_{v}^{d}\left(q^{d}\right)u^{d}&-J_{\textrm{head}}^{d\top}\left(q^{d}\right)F\nonumber\\ &&D^{h}\left(q^{h}\right)\ddot{q}^{h}&+H^{h}_{w}\left(q^{h},\dot{q}^{h}\right)&=T_{w}^{h}\left(q^{h}\right)u^{h}&+J_{\textrm{hand}}^{h\top}\left(q^{h}\right)F,\label{coupled_dyn} \end{alignat} in which $J_{\textrm{head}}^{d}(q^{d})$ and $J_{\textrm{hand}}^{h}(q^{h})$ denote the Jacobian matrices for the end points of the leash at the dog and human sides, respectively, and $F\in\mathbb R^{3}$ represents the force applied by the leash to the human hand. 
We remark that the superscripts ``$d$'' and ``$h$'' represent the dynamic and kinematic terms for the dog and human models, respectively. \noindent\textbf{Complex Discrete-Time Dynamics:} Since the leash model is assumed to be massless and cannot transmit impulsive forces, the evolution of the composite mechanical system over the discrete transition $(v,w)\rightarrow(v',w')$ can be described by the following nonlinear and \textit{decoupled} mappings \begin{equation}\label{complex_impact} x^{d+}=\Delta_{v\rightarrow{}v'}^{d}\left(x^{d-}\right),\quad x^{h+}=\Delta_{w\rightarrow{}w'}^{h}\left(x^{h-}\right). \end{equation} We remark that if $v=v'$ (resp. $w=w'$) in \eqref{complex_impact}, the mapping $\Delta_{v\rightarrow{}v'}$ (resp. $\Delta_{w\rightarrow{}w'}$) is simply taken as the identity. \begin{remark} In this paper, we shall consider a trotting gait for the Vision 60 robot with $8$ continuous-time domains (see Fig. \ref{Illustration_control_scheme}a for more details). The graph for the bipedal gait of the human model also has $2$ continuous-time domains that represent the right and left stance phases. Consequently, the complex hybrid model of locomotion would have $8\times2=16$ continuous-time domains for which there are $2\times(n^{d}+n^{h})=2\times(18+12)=60$ state variables and $m^{d}+m^{h}+m^{l}=12+6+2=20$ control inputs. Here, $m^{l}=2$ represents the number of actuators for the leash (see Section \ref{Leash Baseline Controller}). \end{remark} \vspace{-1em} \section{HIERARCHICAL CONTROL STRATEGY} \label{HIERARCHICAL CONTROL STRATEGY} \vspace{-0.5em} In order to have stable and safe cooperative locomotion for the robot and human, we will present a two-level control strategy for the robotic dog and leash (see Fig. \ref{Illustration_control_scheme}b).
Since the mathematical models for the local controller of the human part are not known, we shall assume a nonlinear local controller for the human, but will \textit{not} change that controller to address unforeseen events and obstacle avoidance. We will instead focus on the dog and leash hierarchical control strategy to ensure stability and safety. At the higher level of the control strategy, we will employ a local nonlinear controller for the robot that has access to its own state variables as well as the force applied by the leash (i.e., force measurement). This controller will be referred to as the \textit{robot baseline controller}. The objective is to asymptotically drive certain outputs to zero that encode the locomotion patterns for the guide robot (e.g., trot, amble, walk, or gallop gaits at desired speeds). The baseline controller for the dog will be designed via the HZD and virtual constraints approach in Section \ref{LOCAL VIRTUAL CONSTRAINT CONTROLLERS}. This controller exponentially stabilizes gaits for the hybrid model of the dog in the presence of the leash force. The baseline controller for the leash will be designed to ensure that 1) there is always a safe distance between the robot and human, and 2) the human follows the robot (see Section \ref{Leash Baseline Controller}). At the lower level, we will solve a real-time QP optimization to ensure safety and obstacle avoidance. In particular, the QP optimization modifies the baseline controllers for the robot as well as the leash to keep the dog and human at a safe distance from obstacles. The QP framework will be set up based on CBFs in Section \ref{SAFETY CRITICAL CONTROL AND CONVEX QP OPTIMIZATION}. \vspace{-1em} \subsection{Local Baseline Controllers for the Agents} \vspace{-0.3em} In this section, we consider the robot and human as two multi-body ``agents'' specified by the superscript $i\in\{d,h\}$.
\begin{definition}[Local Baseline Controllers] We suppose that there are local and smooth feedback laws $\Gamma^{i}(x^{i},F):=\{\Gamma_{v}^{i}(x^{i},F)\}_{v\in\mathcal{V}^{i}}$ for the agent $i\in\{d,h\}$ to have stable locomotion patterns. In our notation, $\Gamma_{v}^{i}(x^{i},F)$ is a local and nonlinear feedback controller, referred to as a \textit{baseline controller}, that is employed during the continuous-time domain $v\in\mathcal{V}^{i}$ and assumed to have access to the state variables of the agent $i$ (i.e., $x^{i}$) as well as the force $F$. \end{definition} \begin{assumption}[Transversal Stable Periodic Orbits] \label{Transvsersal Stable Periodic Orbits} By employing the local baseline controllers for the agent $i\in\{d,h\}$ in the unleashed case (i.e., $F\equiv0$), we assume that there is a period-one orbit (i.e., gait) for the closed-loop hybrid model $\Sigma^{i}$, denoted by $\mathcal{O}^{i}_{\textrm{ul}}$, that is transversal to the guards $\mathcal{S}^{i}$. In our notation, the subscript ``$\textrm{ul}$'' stands for the unleashed gait. The orbit $\mathcal{O}^{i}_{\textrm{ul}}$ is further supposed to be locally exponentially stable. \end{assumption} For future purposes, the evolution of the state variables $x^{i}$ on the unleashed orbit $\mathcal{O}^{i}_{\textrm{ul}}$ is represented by $x_{\star}^{i}(t)$ for $t\geq0$. The orbit $\mathcal{O}^{i}_{\textrm{ul}}$ can then be expressed as \begin{equation} \mathcal{O}^{i}_{\textrm{ul}}:=\left\{x^{i}=x_{\star}^{i}(t)\,|\,0\leq{t}<T^{i}\right\}, \end{equation} in which $T^{i}>0$ denotes the minimal period of $x_{\star}^{i}(t)$. Section \ref{LOCAL VIRTUAL CONSTRAINT CONTROLLERS} will present a class of HZD-based local baseline controllers to exponentially stabilize the periodic gaits $\mathcal{O}^{i}_{\textrm{ul}}$. \begin{assumption}[Common Multiples of Gait Periods] \label{Common Multiples of Gait Periods} We assume that there are common multiples for the periods of the dog and human unleashed gaits.
More specifically, there are positive integers $N^{d}$ and $N^{h}$ such that $N^{d}\,T^{d}=N^{h}\,T^{h}$. For future purposes, we denote the minimum of these values by $N^{d}_{\min}$ and $N^{h}_{\min}$. \end{assumption} \vspace{-0.5em} \subsection{Leash Baseline Controller} \label{Leash Baseline Controller} \vspace{-0.3em} As mentioned previously, the leash structure is assumed to be rigid. We further suppose that its length and orientation can be independently controlled by two linear and rotational actuators. To make this notion more precise, let us denote the Cartesian coordinates of the dog head and the human hand by $p^{d}_{\textrm{head}}(q^{d})\in\mathbb R^{3}$ and $p^{h}_{\textrm{hand}}(q^{h})\in\mathbb R^{3}$, respectively. Next, consider the vector connecting $p^{h}_{\textrm{hand}}(q^{h})$ to $p^{d}_{\textrm{head}}(q^{d})$. The representation of this vector in cylindrical coordinates is given by $(r,\theta,z)$. We assume that there are sensors for the leash structure to measure $(r,\theta)$. The objective here is to design a local force feedback controller for the leash that has access to $(r,\theta)$ to keep the human at a safe distance from the robot dog while regulating the angle $\theta$. In particular, we are interested in (i) having $r\in[r_{\min},r_{\max}]$ for some $0<r_{\min}<r_{\max}$ and (ii) imposing $\theta\rightarrow0$. This controller is referred to as the \textit{leash baseline controller}. One possible way to design such a controller is to decompose the force $F$ into $(F_{r},F_{\theta},F_{z})$, in which $F_{r}(r,\dot{r})$ is the longitudinal force designed to be sufficiently differentiable while being zero over the safe zone $[r_{\min},r_{\max}]$. Moreover, $F_{\theta}(\theta,\dot{\theta})$ is a torsional force that can be taken as a simple PD controller to regulate $\theta$. For the purpose of this paper, $F_{z}$ is assumed to be zero.
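One concrete instance of such a leash force law, offered purely as an illustration (the gains, lengths, and quadratic smoothing are assumptions, not the paper's design, and damping in $r$ is omitted for brevity), keeps $F_{r}$ identically zero on the safe zone and differentiable at its corners, with a PD law for $F_{\theta}$ and $F_{z}=0$:

```python
import numpy as np

def leash_force(r, theta, dtheta, r_min=1.0, r_max=2.0,
                k_r=50.0, k_th=10.0, d_th=2.0):
    """Illustrative leash baseline force F_b = (F_r, F_theta, F_z) in
    cylindrical coordinates.  F_r vanishes on the safe zone
    [r_min, r_max] and grows quadratically outside it, so it is
    differentiable at the corners; F_theta is a PD law regulating
    theta -> 0; F_z = 0 as in the text."""
    if r > r_max:        # human lagging too far: pull inward
        F_r = -k_r * (r - r_max) ** 2
    elif r < r_min:      # human too close: push outward
        F_r = k_r * (r_min - r) ** 2
    else:                # deadzone: no longitudinal force
        F_r = 0.0
    F_theta = -k_th * theta - d_th * dtheta
    return np.array([F_r, F_theta, 0.0])
```

Scaling all gains by a common factor $\kappa$ recovers the requirement that the force vanishes identically at $\kappa=0$.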
For future purposes, the leash baseline controller will be represented by $F_{b}(r,\dot{r},\theta,\dot{\theta},\kappa)\in\mathbb R^{3}$, where the subscript ``$b$'' stands for the baseline control and $\kappa$ represents some adjustable controller parameters, e.g., PD gains. \begin{assumption} \label{leash_assumption} We assume that $F_{b}$ is sufficiently differentiable with respect to its arguments $(r,\dot{r},\theta,\dot{\theta},\kappa)$. Furthermore, for $\kappa=0$, $F_{b}(r,\dot{r},\theta,\dot{\theta},\kappa)\equiv0$. \end{assumption} \begin{remark} Since the longitudinal force $F_{r}(r,\dot{r})$ is assumed to be zero over the safe zone $[r_{\min},r_{\max}]$, $F_{r}$ would have a deadzone structure over $[r_{\min},r_{\max}]$. Assumption \ref{leash_assumption} ensures that $F_{r}$ is designed to be differentiable at the corners $r_{\min}$ and $r_{\max}$ such that the stability analysis can be carried out via the Poincar\'e return map in Theorem \ref{Stability of Complex Gaits with Leash}. \end{remark} \vspace{-1em} \subsection{Stability Analysis of Complex Gaits} \vspace{-0.3em} This section addresses the existence and stability of periodic orbits for the cooperative locomotion of the robot and human in the presence of leash. We make use of the Poincar\'e sections analysis and present the following theorem. \begin{theorem}[Stability of Complex Gaits with Leash] \label{Stability of Complex Gaits with Leash} Under Assumptions \ref{Transvsersal Stable Periodic Orbits}-\ref{leash_assumption}, there is an open neighborhood of $0$, denoted by $\mathcal{N}(0)$, such that for all gain values $\kappa\in\mathcal{N}(0)$, there is an exponentially stable complex gait for the leashed closed-loop hybrid system $\Sigma^{c}$. 
\end{theorem} \begin{proof} From Assumptions \ref{Transvsersal Stable Periodic Orbits} and \ref{Common Multiples of Gait Periods}, the following augmented orbit \begin{equation} \mathcal{O}^{c}_{\textrm{ul}}:=\left\{x^{c}=\textrm{col}(x_{\star}^{d}(t),x_{\star}^{h}(t))\,|\,0\leq{t}<N^{d}_{\min}\,T^{d}\right\} \end{equation} is indeed a periodic orbit for the complex and unleashed hybrid system $\Sigma^{c}$. We next choose a Poincar\'e section transversal to this orbit, denoted by $\mathscr{S}$, and consider a Poincar\'e return map for $\Sigma^{c}$ from $\mathscr{S}$ back to $\mathscr{S}$ as $P^{c}(x^{c},\kappa)$. According to the construction procedure, there is a fixed point for the Poincar\'e map that corresponds to $\mathcal{O}^{c}_{\textrm{ul}}$, that is $P^{c}(x_{\star,\textrm{ul}}^{c},0)=x_{\star,\textrm{ul}}^{c}$, in which $x_{\star,\textrm{ul}}^{c}$ represents the fixed point. We next consider the algebraic equation $E(x^{c},\kappa):=P^{c}\left(x^{c},\kappa\right)-x^{c}=0$. Since $\mathcal{O}_{\textrm{ul}}^{c}$ is exponentially stable for the unleashed complex system, the Jacobian matrix $\frac{\partial E}{\partial x^{c}}(x_{\star,\textrm{ul}}^{c},0)=\frac{\partial P^{c}}{\partial x^{c}}(x_{\star,\textrm{ul}}^{c},0)-I$ is nonsingular. Hence, from the Implicit Function Theorem, there exists $\mathcal{N}(0)$ such that for all $\kappa\in\mathcal{N}(0)$, there is a fixed point for $P^{c}(x^{c},\kappa)$. Moreover, since the elements and eigenvalues of the Jacobian matrix $\frac{\partial P^{c}}{\partial x^{c}}(x^{c},\kappa)$ continuously depend on $\kappa$, one can choose $\mathcal{N}(0)$ sufficiently small such that the eigenvalues of the Jacobian matrix remain inside the unit circle. This completes the proof of exponential stability for leashed locomotion. 
\end{proof} \vspace{-1em} \section{LOCAL VIRTUAL CONSTRAINT CONTROLLERS WITH FORCE FEEDBACK} \label{LOCAL VIRTUAL CONSTRAINT CONTROLLERS} \vspace{-0.3em} The objective of this section is to design the local baseline controller for the robotic dog. The controller is designed based on the virtual constraints approach \cite{Grizzle_Asymptotically_Stable_Walking_IEEE_TAC,Westervelt_Grizzle_Koditschek_HZD_IEEE_TRO} to ensure exponential stability of the gait for the unleashed case. Virtual constraints are defined as kinematic constraints (i.e., outputs) that encode the locomotion pattern. They are imposed through the action of the baseline controllers. The idea is to coordinate the motion of the links within domains. We make use of relative degree one and relative degree two virtual constraints (i.e., outputs). In particular, during the continuous-time domain $v\in\mathcal{V}^{d}$, we consider the following outputs to be regulated \begin{equation}\label{virtual_constraint_outputs} y_{v}^{d}\left(x^{d}\right):=\begin{bmatrix} y_{1v}^{d}(q^{d},\dot{q}^{d})\\ y_{2v}^{d}(q^{d}) \end{bmatrix}, \end{equation} in which $y_{1v}^{d}(q^{d},\dot{q}^{d})$ represents relative degree one nonholonomic outputs for velocity regulation and $y_{2v}^{d}(q^{d})$ denotes relative degree two holonomic outputs for position tracking. Using the nonlinear dynamics \eqref{coupled_dyn} and standard input-output linearization \cite{Isidori_Book}, one can obtain \begin{equation}\label{output_dyn_0} \begin{bmatrix} \dot{y}_{1v}^{d}\\ \ddot{y}_{2v}^{d} \end{bmatrix}=A_{v}^{d}\left(x^{d}\right)u^{d}+b_{v}^{d}\left(x^{d},F\right), \end{equation} where $A_{v}^{d}(x^{d})$ is a decoupling matrix and $b_{v}^{d}$ consists of Lie derivatives (see \cite{Hamed_Ma_Ames_Vision60} for more details). 
Furthermore, we would like to solve for $u^{d}$ that results in the following output dynamics \begin{equation}\label{output_dyn} \begin{bmatrix} \dot{y}_{1v}^{d}\\ \ddot{y}_{2v}^{d} \end{bmatrix}=-\ell_{v}(x^{d}):=-\begin{bmatrix} K_{P}\,y_{1v}^{d}\\ K_{D}\,\dot{y}_{2v}^{d}+K_{P}\,y_{2v}^{d} \end{bmatrix} \end{equation} with $K_{P}$ and $K_{D}$ being positive PD gains. The local baseline controller for the dog is finally chosen as \begin{equation}\label{HZD_controllers} \Gamma_{v}^{d}\left(x^{d},F\right):=-A_{v}^{d\top}\left(A_{v}^{d}\,A_{v}^{d\top}\right)^{-1}\left(b_{v}^{d}+\ell_{v}\right) \end{equation} that 1) requires local state and force measurements and 2) exponentially stabilizes the equilibrium point $(y_{1v}^{d},y_{2v}^{d},\dot{y}_{2v}^{d})=(0,0,0)$ for the output dynamics \eqref{output_dyn} in the presence of the external force $F$, i.e., $\lim_{t\rightarrow\infty}y_{v}^{d}(t)=0$. \begin{remark}[Proper Selection of Virtual Constraints] The periodic orbit $\mathcal{O}^{d}_{\textrm{ul}}$ can be designed in an offline manner through direct collocation based trajectory optimization techniques \cite{FROST,Ames_DURUS_TRO}. For a given periodic gait $\mathcal{O}_{\textrm{ul}}^{d}$, the output functions $y_{v}^{d}$ in \eqref{virtual_constraint_outputs} are chosen to vanish on the desired gait $\mathcal{O}_{\textrm{ul}}^{d}$. We have observed that the stability of gaits in the virtual constraint approach depends on the proper selection of the output functions $y_{v}^{d}$ to be regulated \cite{Hamed_Buss_Grizzle_BMI_IJRR}. Our previous work \cite{Hamed_Buss_Grizzle_BMI_IJRR,Hamed_Gregg_decentralized_control_IEEE_CST} has developed a recursive algorithm, based on semidefinite programming, to systematically design output functions for which the gaits are exponentially stable for the corresponding hybrid dynamics. The algorithm is offline and assumes a finite-dimensional parameterization of the output functions to be determined. 
Then it translates the exponential stabilization problem into a recursive optimization problem that is set up based on linear and bilinear matrix inequalities. Sufficient conditions for the convergence of the algorithm to a set of stabilizing parameters have been addressed in \cite{Hamed_Gregg_decentralized_control_IEEE_CST,Hamed_Gregg_Ames_ACC}. \end{remark} \begin{remark} Nonlinear local controllers for the human model are not known. However, for the purpose of this paper, we assume virtual constraint-based controllers, analogous to \eqref{HZD_controllers}, for the human model. Furthermore, evidence suggests that phase-dependent models can reasonably predict human joint behavior across perturbations \cite{Villarreal:TRO}. \end{remark} \vspace{-0.8em} \section{SAFETY-CRITICAL CONTROL AND QP OPTIMIZATION} \label{SAFETY CRITICAL CONTROL AND CONVEX QP OPTIMIZATION} \vspace{-0.5em} This section aims to develop low-level safety-critical control algorithms that ensure obstacle avoidance while implementing the baseline controllers for the agents and leash in the hierarchical control structure. We will address safety-critical conditions through set invariance and CBFs. In particular, a system being safe is commonly defined as the system never leaving the safety set \cite{ames2017cbf,gurriet2018online,nguyen20163d}. For low-dimensional dynamical systems, analytical control strategies can be derived. However, finding such a control policy for high-DOF and complex models of cooperative legged locomotion of guide dogs and humans is a challenge. To tackle this problem, we make use of a real-time QP formulation to address safety specifications represented by CBFs \cite{ames2017cbf}. To present the main idea, let us consider a discrete set of static point obstacles $\mathscr{P}^{o}_{\alpha}$ for $\alpha\in\mathcal{I}^{o}$ whose Cartesian coordinates in the $xy$-plane are given by $r_{\alpha}^{o}:=\textrm{col}(x_{\alpha}^{o},y_{\alpha}^{o})$. 
Next, we assume a set of critical points on the robot and human that are supposed to be at a safe distance from these obstacles. These points are denoted by $\mathscr{P}^{d}_{\beta}$ and $\mathscr{P}^{h}_{\gamma}$ for the dog and human, respectively, for some $\beta\in\mathcal{I}^{d}$ and $\gamma\in\mathcal{I}^{h}$. One typical example includes the hip points of the robot and human models. The Cartesian coordinates of $\mathscr{P}^{d}_{\beta}$ and $\mathscr{P}^{h}_{\gamma}$ in the $xy$-plane are further denoted by $r_{\beta}^{d}(q^{d})\in\mathbb R^{2}$ and $r_{\gamma}^{h}(q^{h})\in\mathbb R^{2}$. We formulate the safety set as \begin{alignat}{6} &&\mathcal{C}:=\big\{x^{c}=\textrm{col}\left(x^{d},x^{h}\right)&\,|\,h_{\beta,\alpha}^{d}\left(q^{d}\right)\geq0,\, h_{\gamma,\alpha}^{h}\left(q^{h}\right)\geq0,\nonumber\\ && &\forall(\alpha,\beta,\gamma)\in\mathcal{I}^{o}\times\mathcal{I}^{d}\times\mathcal{I}^{h}\big\}, \end{alignat} where $h_{\beta,\alpha}^{d}(q^{d}):=\|r^{d}_{\beta}(q^{d})-r_{\alpha}^{o}\|_{2}^{2}-h_{\min}^{2}$ and $h_{\gamma,\alpha}^{h}(q^{h}):=\|r^{h}_{\gamma}(q^{h})-r_{\alpha}^{o}\|_{2}^{2}-h_{\min}^{2}$ for some safety distance $h_{\min}>0$. In addition, $\|.\|_{2}$ denotes the Euclidean norm. The safety constraints $h_{\beta,\alpha}^{d}(q^{d})\geq0$ and $h_{\gamma,\alpha}^{h}(q^{h})\geq0$ have relative degree two. Our objective is to modify the torques for the dog robot $u^{d}$ as well as the leash force $F$ to render the safety set $\mathcal{C}$ forward invariant under the flow of the closed-loop complex model. We remark that we are \textit{not} allowed to change the human controller $u^{h}=\Gamma^{h}(x^{h},F)$ as the person can be visually impaired and cannot react properly. For this purpose, we make use of the concept of exponential CBFs (ECBFs) \cite{Sreenaath_HighDegreeCBF}. 
In particular, we define the ECBFs as follows \begin{alignat}{6} &&\mathcal{B}_{\beta,\alpha}^{d}\left(x^{d}\right)&:=\dot{h}_{\beta,\alpha}^{d}\left(x^{d}\right)&+\lambda\,h_{\beta,\alpha}^{d}\left(x^{d}\right)\label{CBFS_1}\\ &&\mathcal{B}_{\gamma,\alpha}^{h}\left(x^{h}\right)&:=\dot{h}_{\gamma,\alpha}^{h}\left(x^{h}\right)&+\lambda\,h_{\gamma,\alpha}^{h}\left(x^{h}\right)\label{CBFS_2} \end{alignat} for all $(\alpha,\beta,\gamma)\in\mathcal{I}^{o}\times\mathcal{I}^{d}\times\mathcal{I}^{h}=:\mathcal{I}$, where $\lambda>0$ is an adjustable parameter. The exponential CBF condition further implies that \begin{alignat}{6} && \dot{\mathcal{B}}_{\beta,\alpha}^{d}\left(x^{d},u^{d},F\right)&+\omega\,\mathcal{B}_{\beta,\alpha}^{d}\left(x^{d}\right)&\geq0\label{ECBFs_01}\\ && \dot{\mathcal{B}}_{\gamma,\alpha}^{h}\left(x^{h},F\right)&+\omega\,\mathcal{B}_{\gamma,\alpha}^{h}\left(x^{h}\right)&\geq0\label{ECBFs_02} \end{alignat} for all $(\alpha,\beta,\gamma)\in\mathcal{I}$ and some adjustable scalar $\omega>0$. Substituting \eqref{CBFS_1} and \eqref{CBFS_2} into \eqref{ECBFs_01} and \eqref{ECBFs_02} results in \begin{alignat}{6} && \ddot{h}_{\beta,\alpha}^{d}&+(\lambda+\omega)\,\dot{h}_{\beta,\alpha}^{d}&+\lambda\,\omega\,h_{\beta,\alpha}^{d}&\geq0\label{ECBFs_01ver2}\\ && \ddot{h}_{\gamma,\alpha}^{h}&+(\lambda+\omega)\,\dot{h}_{\gamma,\alpha}^{h}&+\lambda\,\omega\,h_{\gamma,\alpha}^{h}&\geq0\label{ECBFs_02ver2} \end{alignat} for every $(\alpha,\beta,\gamma)\in\mathcal{I}$. From \eqref{coupled_dyn}, we remark that \eqref{ECBFs_01ver2} and \eqref{ECBFs_02ver2} can be expressed as affine inequalities in terms of the robot torques and leash force $(u^{d},F)$. 
This can be expressed as follows \begin{alignat}{6} && &A_{\beta,\alpha}^{d}\left(x^{d}\right)\begin{bmatrix} u^{d}\\ F \end{bmatrix}&+b_{\beta,\alpha}^{d}\left(x^{d}\right)&\geq0\label{ECBFs_01ver3}\\ && &A_{\gamma,\alpha}^{h}\left(x^{h}\right)F&+b_{\gamma,\alpha}^{h}\left(x^{h}\right)&\geq0\label{ECBFs_02ver3} \end{alignat} for all $(\alpha,\beta,\gamma)\in\mathcal{I}$. Next, we set up the following real-time QP to ensure safety-critical constraints while being close to the baseline controllers \begin{alignat}{6} \min_{(u^{d},F)} &\,\,\,\left\|u^{d}-\Gamma^{d}\left(x^{d},F\right)\right\|_{2}^{2}+\left\|F-F_{b}\left(r,\dot{r},\theta,\dot{\theta},\kappa\right)\right\|_{2}^{2}\nonumber\\ \textrm{s.t.} &\,\,\,A_{\beta,\alpha}^{d}\left(x^{d}\right)\begin{bmatrix} u^{d}\\ F \end{bmatrix}+b_{\beta,\alpha}^{d}\left(x^{d}\right)\geq0,\,\,\forall(\alpha,\beta,\gamma)\in\mathcal{I}\nonumber\\ &\,\,\,A_{\gamma,\alpha}^{h}\left(x^{h}\right)F+b_{\gamma,\alpha}^{h}\left(x^{h}\right)\geq0,\quad\,\,\,\forall(\alpha,\beta,\gamma)\in\mathcal{I}\nonumber\\ &\,\,\,u_{\min}\leq{u^{d}}\leq{}u_{\max}\nonumber\\ &\,\,\,F_{\min}\leq{F}\leq{}F_{\max}\label{QP_otptimization}, \end{alignat} where $u_{\min}$, $u_{\max}$, $F_{\min}$, and $F_{\max}$ denote the lower and upper bounds for the torques and forces. We remark that according to the construction procedure of the baseline controller in \eqref{output_dyn_0} and \eqref{HZD_controllers}, $b^{d}_{v}(x^{d},F)$ and $\Gamma^{d}_{v}(x^{d},F)$ are affine in terms of the leash force $F$ for every $v\in\mathcal{V}^{d}$. Hence, the cost function in \eqref{QP_otptimization} is indeed quadratic in terms of $(u^{d},F)$. The outputs of the QP framework are eventually employed as the control inputs for the robotic dog as well as the leash. \begin{remark} In the QP formulation \eqref{QP_otptimization}, one would need to measure the human state variables $x^{h}$ to check for the inequality constraints \eqref{ECBFs_02ver3}. 
However, we do \textit{not} modify the torques for the human model. This assumption is not restrictive as one can measure the human state variables via 1) a set of wearable inertial measurement units (IMUs) and 2) asymptotic observers. In particular, our previous work \cite{Hamed_Ames_Gregg_ACC} has developed a systematic approach for asymptotic estimation of the state variables for human models via hybrid observers and IMUs. For the purpose of this paper, we hence assume that $x^{h}$ is available for the QP framework. \end{remark} \vspace{-1em} \section{NUMERICAL SIMULATIONS AND RESULTS} \label{NUMERICAL SIMULATIONS AND RESULTS} \vspace{-0.3em} The objective of this section is to numerically validate the theoretical results of the paper. For this purpose, we consider a complex and full-order hybrid dynamical model that describes the cooperative locomotion of Vision 60 and a human model. \begin{figure*}[!t] \centering \subfloat[\label{COM_trajectories_no_leash}]{\includegraphics[width=2.2in]{robot_trace_noleash.eps}}\!\! \subfloat[\label{COM_trajectories_with_leash}]{\includegraphics[width=2.2in]{robot_trace_leash_noobstacle.eps}}\!\! \subfloat[\label{COM_trajectories_point_obstacles_CBFs}]{\includegraphics[width=2.2in]{robot_trace_point_obstacles.eps}} \vspace{-0.7em} \caption{(a) Robot and human COM trajectories in the $xy$-plane without using the leash structure. The unleashed gait for the dog is exponentially stable (i.e., it walks along a line parallel to the $x$-axis on which the yaw angle is zero). However, the one for the human is only stable modulo yaw. (b) COM trajectories using the leash structure. Here the leash and each agent have their own local baseline controllers and there is \textit{no} CBF-based QP optimization. Both the robot and human converge to a complex gait with a common speed while having yaw stability. (c) COM trajectories using the proposed hierarchical control strategy for the dog and leash structure in the presence of point obstacles. 
The obstacles are illustrated by the circles.} \vspace{-1.0em} \end{figure*} \begin{figure*}[!t] \centering \subfloat[\label{Yaw_Roll_Motion_pointobstacles_CBFs:a}]{\includegraphics[width=2.2in]{position_yaw.eps}}\!\! \subfloat[\label{Yaw_Roll_Motion_pointobstacles_CBFs:b}]{\includegraphics[width=2.2in]{position_roll.eps}}\!\! \subfloat[\label{COM_trajectories_point_obstacles_CBFs_moredense}]{\includegraphics[width=2.2in]{robot_trace_point_obstacles_moredense.eps}} \vspace{-0.7em} \caption{(a) and (b) Time profile of the yaw and roll motions for the dog robot using the proposed hierarchical control strategy in the presence of point obstacles. (c) COM trajectories in the $xy$-plane in the presence of a more-dense set of obstacles.} \vspace{-1.0em} \end{figure*} \noindent\textbf{Vision 60 Robot:} Vision 60 is an autonomous quadrupedal robot manufactured by Ghost Robotics \cite{Ghost_Robotics}. It weighs approximately 26 kg. Vision 60 has 18 DOFs of which 12 leg DOFs are actuated. More specifically, each leg of the robot consists of a 1 DOF actuated knee joint with pitch motion and a 2 DOF actuated hip joint with pitch and roll motions. In addition, 6 DOFs are associated with the translational and rotational motions of the torso. \noindent\textbf{Human Model:} The human model consists of a rigid tree structure with a torso link, including hands and head, and two identical legs terminating at point feet (see \cite{Hamed_Gregg_decentralized_control_IEEE_CST}). Each leg of the model includes 3 actuated joints: a 2 DOF hip (ball) joint with roll and pitch motions and a 1 DOF knee joint in the sagittal plane. The model has 12 DOFs: 6 DOF for the translational and rotational motions of the torso and 6 DOF for the internal shape variables. The kinematic and dynamic parameter values for the links are taken according to those reported in \cite{Human_Data} from a human cadaver study. 
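As a toy numerical illustration of the baseline controller structure \eqref{HZD_controllers} used throughout, the sketch below evaluates $\Gamma_{v}^{d}=-A^{\top}(AA^{\top})^{-1}(b+\ell)$ on random placeholder data; the dimensions, matrices, and gains are invented, and only the right-pseudoinverse structure is taken from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder data: a full-row-rank decoupling matrix A_v, the drift
# term b_v, and the stabilizing feedback l_v from the PD output law.
n_out, n_act = 4, 12          # outputs vs. actuators (n_act >= n_out)
A_v = rng.standard_normal((n_out, n_act))
b_v = rng.standard_normal(n_out)
ell = rng.standard_normal(n_out)   # l_v = [K_P y_1; K_D y2_dot + K_P y_2]

def baseline_torque(A, b, l):
    """Gamma_v^d = -A^T (A A^T)^{-1} (b + l): the minimum-norm torque
    rendering the output dynamics [y1_dot; y2_ddot] = -l_v."""
    return -A.T @ np.linalg.solve(A @ A.T, b + l)

u = baseline_torque(A_v, b_v, ell)
# By construction, A_v u + b_v = -l_v, i.e., the outputs obey the
# exponentially stable PD dynamics prescribed in the text.
```

The right pseudoinverse requires $A_{v}^{d}$ to have full row rank, i.e., more independent actuators than regulated outputs, which holds for the fully actuated legs considered here.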
\noindent\textbf{Path Planning:} We consider an unleashed trotting gait $\mathcal{O}_{\textrm{ul}}^{d}$ for the dog robot at the speed of $1.2$ (m/s). To generate the gait, we make use of FROST (Fast Robot Optimization and Simulation Toolkit) --- an open-source toolkit for path planning of dynamic legged locomotion \cite{FROST,Ames_DURUS_TRO}. FROST makes use of direct collocation based trajectory optimization. In particular, it utilizes the Hermite-Simpson collocation approach to translate the path planning problem into a finite-dimensional nonlinear program (NLP) that can be effectively solved with state-of-the-art NLP tools such as IPOPT. A desired periodic bipedal gait $\mathcal{O}_{\textrm{ul}}^{h}$ is designed for the locomotion of the human model at the speed of $1.1$ (m/s). We intentionally design the human gait to be slower than that of the dog to show that, using the proposed control strategy, there can be a common-speed leashed gait. \noindent\textbf{Local Baseline Controllers:} Using the semidefinite programming algorithm of \cite{Hamed_Buss_Grizzle_BMI_IJRR,Hamed_Gregg_decentralized_control_IEEE_CST}, we synthesize the virtual constraint controllers of \eqref{HZD_controllers} in an offline manner to exponentially stabilize the unleashed gaits for the dog and human models. In particular, the algorithm looks for the optimal outputs to be regulated such that the stability condition in Assumption \ref{Transvsersal Stable Periodic Orbits} is satisfied. We further do not consider full-state stability for the human gait. Instead, we consider the \textit{stability modulo yaw} \cite[Section 6.5]{Hamed_Buss_Grizzle_BMI_IJRR} to model the locomotion of visually impaired people. We remark that the dog robot together with the leash structure will have the responsibility to stabilize the yaw motion for itself as well as the human. The leash baseline controller is further designed to keep the human in the safe zone of $[1.25,1.75]$ (m). 
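The real-time safety filter \eqref{QP_otptimization} can likewise be illustrated on a deliberately tiny instance. The numbers below are invented, and the decision variable is reduced to one torque and one scalar force, whereas the actual problem couples all twelve torques with the three-dimensional leash force; the sketch only shows the shape of the optimization (stay close to the baselines subject to an affine ECBF row and box bounds).

```python
import numpy as np
from scipy.optimize import minimize

# Toy instance: decision variable z = (u, F) in R^2, baseline values
# z_des, one affine ECBF constraint a.z + b >= 0, and box bounds.
z_des = np.array([2.0, 1.0])          # (Gamma^d, F_b) from the baselines
a, b = np.array([-1.0, -1.0]), 2.5    # ECBF row: a.z + b >= 0
bounds = [(-3.0, 3.0), (-3.0, 3.0)]   # u_min <= u <= u_max, etc.

res = minimize(
    lambda z: np.sum((z - z_des) ** 2),          # stay near baselines
    x0=z_des,
    constraints=[{"type": "ineq",
                  "fun": lambda z: a @ z + b}],  # safety (ECBF) row
    bounds=bounds,
    method="SLSQP",
)
z_safe = res.x
# The baseline (2, 1) violates u + F <= 2.5; the QP projects it onto
# the constraint boundary, yielding approximately (1.75, 0.75).
```

When the constraint is inactive, the minimizer returns the baselines unchanged, which is why the filter only perturbs the nominal gait near obstacles.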
Figures \ref{COM_trajectories_no_leash} and \ref{COM_trajectories_with_leash} depict the robot and human center of mass (COM) trajectories in the $xy$-plane without and with using the leash structure, respectively. We remark that without the leash, the human gait does \textit{not} have yaw stability (see Fig. \ref{COM_trajectories_no_leash}). However, utilizing the leash structure, the robot and human trajectories converge to a complex gait with the same speed while having yaw stability (i.e., locomotion along the $x$-axis on which the yaw angle is zero) (see Fig. \ref{COM_trajectories_with_leash}). \noindent\textbf{Obstacle Avoidance:} In order to demonstrate the power of the proposed hierarchical control algorithm, we consider a set of point obstacles $\mathscr{P}_{\alpha}^{o}$ for some $\alpha$ in the discrete set $\mathcal{I}^{o}$. The critical points on the robot and human (i.e., $\mathscr{P}^{d}_{\beta}$ and $\mathscr{P}_{\gamma}^{h}$) are then chosen as the hip points. In the first simulation, we consider $11$ obstacles around the steady-state trajectory of Fig. \ref{COM_trajectories_with_leash}. Without employing the real-time QP-based modification, the robot and human COM can hit the obstacles. In particular, Fig. \ref{COM_trajectories_with_leash} illustrates an undershoot around $-0.3$ (m) along the $y$-axis for the human COM that can easily collide with the obstacle located there in Fig. \ref{COM_trajectories_point_obstacles_CBFs}. However, utilizing the hierarchical control algorithm with the QP running at 1 kHz, the robot and human trajectories are locally modified around the steady-state gait such that the safety-critical conditions are satisfied (see Fig. \ref{COM_trajectories_point_obstacles_CBFs}). The time profile for the robot's yaw and roll motions to accommodate the obstacles is depicted in Figs. \ref{Yaw_Roll_Motion_pointobstacles_CBFs:a} and \ref{Yaw_Roll_Motion_pointobstacles_CBFs:b}. 
The performance of the closed-loop hybrid system in the presence of a more-dense set of obstacles is shown in Fig. \ref{COM_trajectories_point_obstacles_CBFs_moredense}. Figure \ref{Snapshots} illustrates the snapshots of the robot and human locomotion around the obstacles. Animations can be found online \cite{YouTube_CooperativeLoco}. \begin{figure*}[!t] \vspace{1em} \includegraphics[width=1\linewidth]{snapshot.jpg} \vspace{-1.4em} \caption{Simulation snapshots illustrating the evolution ((a) to (e)) of Vision 60 and human trajectories during a close encounter with the obstacles. The visualization does not illustrate the actuated leash. However, its effect is clearly demonstrated by the augmented human trajectory in the figures (see \cite{YouTube_CooperativeLoco} for the animation).} \label{Snapshots} \vspace{-2em} \end{figure*} \vspace{-1em} \section{CONCLUSION} \vspace{-0.1em} This paper presented a formal method towards 1) addressing complex hybrid dynamical models that describe cooperative locomotion of guide legged robots and humans and 2) systematically designing hierarchical control algorithms that enable stable and safe collaborative locomotion in the presence of discrete obstacles. At the higher level of the proposed control strategy, baseline controllers are assumed for the robotic dog and the leash structure. The robot baseline controller is developed based on the HZD approach to asymptotically stabilize a pre-designed unleashed gait for the quadrupedal robot. The leash baseline controller is further developed to keep the human at a safe distance from the dog while following it. The existence and exponential stability of leashed gaits for the complex model are investigated via the Poincar\'e return map. At the lower level, a real-time QP is solved to modify the baseline controllers for the robot as well as the leash to ensure safety (i.e., obstacle avoidance) via CBFs. 
The power of the analytical approach is validated through extensive numerical simulations of a complex hybrid model with $60$ state variables and $20$ control inputs that represents the cooperative locomotion of Vision 60 and a human model. We considered an unleashed trotting gait for the dog and a bipedal gait for the human, where the dog gait is assumed to be faster. We further assumed that the human gait does not have yaw stability. It is shown that using the proposed control strategy, the dog and human can reach a common speed for the leashed motion. Moreover, we demonstrated that the robot can stabilize the yaw motion for the human model. The proposed approach can locally guarantee safety around pre-designed unleashed trajectories. The QP framework can significantly reduce the overshoot and undershoot in the human COM trajectories for following the guide robot. For future research, we will improve control algorithms to address sharp turns around corners and obstacles. We will extend the approach to consider dynamic obstacles. We will also investigate robust hierarchical approaches to address cooperative locomotion over uneven terrains. \vspace{-0.8em} \bibliographystyle{IEEEtran}
Q: How do you prevent a user from posting data multiple times on a website

I am working on a web application (J2EE) and I would like to know the options that are available for handling a double post from the browser. The solutions that I have seen and used in the past are all client-side:

* Disable the submit button as soon as the user clicks it.
* Follow a POST-Redirect-GET pattern to prevent POSTs when the user clicks the back button.
* Handle the onSubmit event of the form and keep track of the submission status with JavaScript.

I would prefer to implement a server-side solution if possible. Are there any better approaches than the ones I have mentioned above, or are client-side solutions best?

A: You could supply a "ticket" as part of the form, some random number - and make sure it doesn't get accepted twice, on the server side.

A: Two server-side solutions come to mind:

* Create one-time use "tokens" in a hidden form field. Once a token is used, it is deleted from whatever database or session context object you're storing it in. The second time, it's not accepted.
* Cache information received, and if an identical form is received within a certain time period (10 minutes? an hour? You decide!) it is ignored.

A: We use a time-sensitive, one-time ticket. It's like a session id of sorts, but it is tied to the form/page. You discard the ticket when the user submits the page, and you only process pages that come with a valid ticket. You can, at the same time, tighten security by attaching the ticket to a user, so that if a ticket comes in from a user other than the one it was issued to, you reject the request.

A: Implement a unique id to go with the request and log it along with the execution. If the id was already logged, you don't do the job again. This is kinda like the fallback solution - you should try to disable the button or link client-side as well, as you suggested yourself.

A: Struts has something like this built in if you happen to be using it. http://struts.apache.org/1.x/apidocs/org/apache/struts/util/TokenProcessor.html

A: It's hard to implement an idiot-proof solution (as they are always improving the idiots). No matter what you do, the client side can be manipulated or perform incorrectly. Your solution has got to be server-side to be reliable and secure. That said, one approach is to review the request and check system/database state or logs to determine if it was already processed. Ideally, the process on the server side should be idempotent if possible, and it will have to protect against dupe submits if it can't be.

A: I'd use a timestamp and compare values with your server-side code. If two timestamps are close enough and have the same IP address, ignore the second form submission.
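The ticket/token answers above all boil down to the same server-side pattern: issue a single-use token when the form is rendered, embed it in a hidden field, and atomically consume it on submit so a duplicate POST is rejected. A minimal sketch (in Python for brevity, purely for illustration; a J2EE version would keep the same structure in the HttpSession or a database):

```python
import secrets
import threading

class FormTokenStore:
    """Single-use form tokens: issue one per rendered form, consume
    atomically on POST, and reject any token seen twice."""

    def __init__(self):
        self._tokens = set()
        self._lock = threading.Lock()

    def issue(self):
        """Generate a token to embed in a hidden form field."""
        token = secrets.token_urlsafe(16)
        with self._lock:
            self._tokens.add(token)
        return token

    def consume(self, token):
        """Return True exactly once per issued token; a duplicate
        submit (double click, back button, replay) returns False."""
        with self._lock:
            try:
                self._tokens.remove(token)
                return True
            except KeyError:
                return False

store = FormTokenStore()
t = store.issue()
first, second = store.consume(t), store.consume(t)
```

The lock makes issue/consume atomic, so two simultaneous duplicate POSTs cannot both succeed; in a real deployment you would also expire unconsumed tokens.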
Red Pearl is a new grape tomato from Johnny's breeding program. It has been a standout among the dozens of grape tomato varieties we have trialed in the past few years. It has resistance to Fusarium and intermediate resistance to late blight. Red Pearl is nearly seedless and has exceptionally tender skin. It is sweet like most grape tomatoes, but has a more complex tomato flavor than many varieties. Compared to Red Grape, the fruits are slightly larger, and yields are similar. The fruits resist cracking and hold well on the vine, even when ripe, which reduces the need to harvest every day. Picking is easy, though, because the tomatoes are visible and accessible on the tall, open plants. Red Pearl is an indeterminate variety and a good choice for hoophouse and greenhouse production. 58 days to maturity. Seed was produced on the JSS research farm and is certified organic. Lisianthus is a great crop for the hoophouse because it can be planted when the weather is still cold, and it will tolerate extremely high summer temperatures under plastic. In windy areas, growing it under protection is essential to producing long stems. The plants can get to 3 feet tall and need to be supported; we recommend Hortanova mesh erected horizontally over the bed. Because lisianthus is very slow growing, it needs to be started in a greenhouse at least 13 weeks before you want to plant it outside. The Echo series, one of the earliest varieties, will bloom 20 to 24 weeks from sowing. The seeds need light to germinate, so should be covered only with a light sprinkling of vermiculite to hold in moisture. Start the seed on a heating mat set to 75°F and provide good air circulation. Applying T-22HC Plantshield is recommended to provide protection against root pathogens during the slow early growth of the seedlings. After emergence, the temperature should be reduced to 60-75°F. Do not allow the plugs to become rootbound, as this can permanently stunt them. Plant on 4" x 6" spacing. 
Echo Blue petals show water spots, so avoid overhead watering and harvest when two or three of the flowers on a stem are beginning to open. In his book The New Organic Grower, Eliot Coleman writes: "Soil blocks constitute the best system I have yet found for growing seedlings." We couldn't agree more. They produce a much better plant that establishes quickly with no transplant shock. Soil blocks also eliminate the expense, waste, and storage issues of plastic pots. Once you have purchased soil block makers and trays, your only annual cost for transplants will be for potting mix. Johnny's soil block making system has everything you need to make soil blocks for all your vegetable, flower, and herb transplants. Johnny's 512 mix and Vermont Compost's Fort Vee mix are both good choices for soil blocks because they will hold together well when compressed into blocks. We also have a new propagation tray with a flat mesh bottom that provides good drainage for soil blocks. It is a standard 10'x20" size, so it can be used with our leak-proof trays and acrylic domes. Another new addition to the line is a potting tray that lends itself well to soil blocking, allowing you to compress the soil mix tightly by pushing down and twisting the blocker back and forth. It even has an optional shelf for holding seeds, markers, and other supplies. The block makers themselves are available in several sizes, as hand-held or stand-up models. Many growers use a 3/4" mini blocker to maximize the number of seeds they can germinate on a heat mat. Then they transplant the mini blocks into larger blocks where they are grown on until it's time to transplant them outside. Optional square dibble inserts that make depressions the exact size of the mini blocks are available for all the larger blockers, allowing for easy transplanting. 
The hand-held block makers are available in three sizes: the 3/4" square mini blocker for germinating seeds or for small transplants such as lettuces; a 2" square for all vegetables, large-seeded flowers, and herbs; and a 4" square for large plants or late transplanting. Commercial stand-up models are easy on the back and make multiple blocks quickly. They are available in three sizes, and all make small depressions in the tops for seeds. I believe that has been fixed. We've launched a new web site and some of the old links weren't working. Let me know if you can't view them. Sorry for the inconvenience. Do you sell the Hortanova mesh or something similar? We don't sell Hortanova. I think the most similar product would be Trellis Plus netting. See product, click here.
{ "redpajama_set_name": "RedPajamaC4" }
3,055
<?php declare(strict_types=1); namespace Somnambulist\Collection\Behaviours\MapReduce; use function array_reduce; /** * Trait Reduce * * @package Somnambulist\Collection\Behaviours * @subpackage Somnambulist\Collection\Behaviours\MapReduce\Reduce * * @property array $items */ trait Reduce { /** * Reduces the Collection to a single value, returning it, or $initial if no value * * @link https://www.php.net/array_reduce * * @param callable $callback Receives mixed $carry, mixed $value * @param mixed $initial (optional) Default value to return if no result * * @return mixed */ public function reduce($callback, $initial = null) { return array_reduce($this->items, $callback, $initial); } }
Letcombe Valley is a nature reserve south of Letcombe Regis in Oxfordshire. It is managed by the Berkshire, Buckinghamshire and Oxfordshire Wildlife Trust (BBOWT), with the assistance of the Friends of Letcombe Valley. The reserve was part of a manor formerly known as Benhams, which was purchased in 1851 by a racehorse owner called Thomas Parr. After the Second World War, the manor became a centre for scientific research, which was acquired in 1985 by a chemical company, Dow Elanco. The company created a nature reserve on land next to Letcombe Brook, and when the research centre closed in 2002 the main part of the site was developed for housing. BBOWT was given a fifty-year lease on the nature reserve in 2010 as a condition of planning permission for the housing. Letcombe Brook, which runs through the reserve, is one of only two chalk streams in Oxfordshire and 161 nationwide. Wildlife includes water voles and fish such as bullhead, brown trout and the primitive brook lamprey. There are also Daubenton's bats, while insects include rare flies. Additional habitats are ancient woodland and a small area of chalk grassland. There is access from a bridleway at the top of South Street.
\section{Introduction} \vspace{-2mm} Suppose that there is a simple algorithm for a problem, and that there are two ways to improve on its time complexity: (a) develop a new algorithm with a smaller complexity, or (b) prove by complexity analysis that the complexity of the simple algorithm is actually small. Both types of improvements are important in theoretical computer science, but these days almost all results are of type (a). Developing algorithms of type (a) is non-trivial, thus many recent algorithms and their complexity analyses are difficult to understand. Moreover, these algorithms often require some structure in the input, hence the problem formulations tend to be distant from the real world. On the contrary, type (b) has great advantages on these points. Even though the analysis is complicated, we can hide the difficulty by producing general statements applicable to many problems. At least, we do not have to implement the complicated proofs in a program. Motivated by this, we study complexity analysis in this paper, namely amortized analysis for enumeration algorithms. Amortized analysis is a paradigm of complexity analysis. In this paradigm, we charge the cost of iterations with long computation time to those with shorter time, to obtain a smaller upper bound on the computation time per iteration. Compared to the usual analysis considering the worst case, amortized analysis is often more powerful; examples include dynamic trees, union-find, and some enumeration algorithms\cite{DynamicTree,UnionFind}. In the case of dynamic trees, the cost of changing the shape of the tree is charged to the preceding changes with smaller costs, which yields $O(\log n)$ average time complexity for each change, where $n$ is the size of the tree. This time complexity is not attained by the usual worst-case analysis, and it seems hard to design algorithms with the same complexity under that analysis. 
This is similar to the union-find algorithm, whose resulting time complexity is $O(n\alpha(n))$ while straightforward algorithms take $O(n^2)$ time. The concept of ``charging the cost'' made a paradigm shift in the design of algorithms. Some enumeration algorithms are designed so that the time complexity of an iteration is linear in the number of subproblems, so that the average computation time per child is short\cite{KpRm95,SrTmUn97}. Enumeration is now rapidly increasing its presence in theoretical computer science. One of the biggest reasons comes from its importance in application areas. An example is the pattern mining problem in data mining. The problem is to find all the patterns belonging to a class of structures, such as subsets and trees, such that the patterns satisfy some constraints in the given database, such as appearing at least $k$ times. One more motivation is that there have not been many studies, even on simple problems, thus there are great possibilities. On the other hand, enumeration has several interesting aspects which we cannot observe in other problems. For example, by dealing only with the difference between output solutions, and outputting each solution as its difference from the previous one, we can often attain computation time shorter than the output size. Another example is its well-structured recursion. We can frequently obtain structural results on enumeration, and these give interesting algorithms and mathematical properties, while it is hard to characterize, say, when a branch and bound algorithm cuts off subproblems. A structured recursion often admits a good amortization. Thus, there is great interest in investigating amortized analysis of enumeration algorithms. According to this motivation and interest, this paper addresses the amortized analysis of enumeration algorithms. One of our goals on this topic is to fill the gap between theory and practice. 
In practice, enumeration algorithms often run much faster than the theoretical upper bound on their computation time. Filling the gap gives an understanding of both the theoretical and practical properties of the data and the algorithms: the properties of the data that accelerate the algorithms, and the mechanisms of the algorithms that enable us to attain smaller bounds. We have observed that the recursive structures of enumeration algorithms satisfy a property which we call {\em bottom-expanded}. Iterations of enumeration algorithms generate several recursive calls. Thus, the number of iterations increases exponentially in the deeper levels of the recursion. On the other hand, iterations on deeper levels often have relatively small inputs compared to upper levels. Thus, we can expect that iterations near the root of the recursion are few and spend a long time, and iterations near the bottom of the recursion are many and spend very short time. In practice, we frequently observe this, especially in many kinds of pattern mining algorithms. This also implies that the amortized computation time per iteration, or even per solution, is short. This mechanism is what we call bottom-expanded. We can see this mechanism not only in practice but also in classic enumeration algorithms. This mechanism motivated us to develop a good amortized analysis. However, amortization is not easy in general, since it is hard to globally estimate the number of iterations and the computation time. Thus, in many existing studies, the computation time is amortized between a parent and its children, and sometimes its grandchildren\cite{enumPEO,Ep90,Ep94,ksubtree,KpRm95,SrTmUn97}. These local structures are easier to analyze than the global structures. Extensions of this idea to more global structures are non-trivial. 
For example, if we want to amortize between iterations in different subtrees of the recursion, we have to understand the relation and the correspondence between all iterations in the different subtrees. This is often a difficult task. In this paper, we propose a new way of carrying out amortized analysis of the time complexity of enumeration algorithms, and propose new algorithms for the enumeration of matchings, elimination orderings, and connected vertex induced subgraphs. We also show that this amortized analysis can prove the existing complexity results in very simple ways, for the enumeration of spanning trees, perfect elimination orderings, and perfect sequences, while the existing approaches often need sophisticated algorithms or data structures. We can also see that the condition in the analysis is often satisfied in practice, thus this amortized analysis explains why enumeration algorithms are efficient in practice. These results satisfy our basic motivations for this kind of study. Our amortization of an iteration is basically done with all its descendants. For each iteration, we push out its computation time to its children so that the assigned time is proportional to their computation time. By applying this push-out from the root of the recursion to deeper levels, the long computation time near the root is diffused to deeper levels, which have shorter time on average. Since it is very hard to capture the structure of the recursion, we give a condition, called the {\em Push Out condition}, such that the amortized computation time is bounded whenever the condition is satisfied. As the condition is stated on the relation between each iteration and its children, proving that the condition holds is often not difficult. As a result, to bound the amortized time complexity, all we have to do is to prove that the condition holds for the algorithm in question. 
In this way, we propose algorithms for enumerating matchings, elimination orderings, and connected vertex induced subgraphs, and prove that the condition holds for each. This implies that these graph objects can be enumerated in constant time per solution. We also show that the condition holds for the algorithm for spanning tree enumeration, which gives a very simple proof compared to the existing ones. The paper is organized as follows. Section \ref{sec:prlm} is for preliminaries, and Section \ref{sec:PO} describes our Push out amortization and the Push Out condition. Sections \ref{sec:elim}, \ref{sec:match}, \ref{sec:CIS} and \ref{sec:sptree} show the algorithms and their proofs. We conclude the paper in Section \ref{sec:cncl}. \vspace{-2mm} \section{Preliminaries}\label{sec:prlm} \vspace{-2mm} Let $\cal A$ be an enumeration algorithm. Suppose that $\cal A$ is a recursive algorithm, i.e., composed of a subroutine that recursively calls itself several times (or none). Thus, the recursion structure of the algorithm forms a tree. We call the subroutine, or an execution of the subroutine, an {\em iteration}. Note that an iteration does not include the computation done in the subroutines recursively called by the iteration, thus no iteration is included in another. When the algorithm is composed of several kinds of subroutines and operations, the recursion is a nest of several kinds of subroutines; in such cases, we consider a series of iterations of different types as a single iteration. When an iteration $X$ recursively calls an iteration $Y$, $X$ is called the {\em parent} of $Y$, and $Y$ is called a {\em child} of $X$. The {\em root iteration} is the one with no parent. For a non-root iteration $X$, its parent is unique, and is denoted by $P(X)$. The set of the children of $X$ is denoted by $C(X)$. The parent-child relation between iterations forms a tree structure called a {\em recursion tree}. 
An iteration is called a {\em leaf iteration} if it has no child, and an {\em inner iteration} otherwise. For an iteration $X$, an upper bound on the execution time (the number of operations) of $X$ is denoted by $T(X)$. Here we exclude the computation for the output process from the computation time. Recall that $T(X)$ is the local execution time, and thus does not include the computation time of the recursive calls generated by $X$. For example, when $T(X) = O(n^2)$, $T(X)$ is written as $cn^2$ for some constant $c$. $T^*$ is the maximum of $T(X)$ among all leaf iterations $X$. Here, $T^*$ can be either a constant, or a polynomial of the input size. If $X$ is an inner iteration, let $\overline{T}(X) = \sum_{Y \in C(X)} T(Y)$. In this paper, we assume that a graph is stored as an adjacency list. For a vertex subset $U$ of a graph $G=(V,E)$, the {\em induced subgraph} of $U$ is the graph whose vertex set is $U$, and whose edge set contains the edges of $E$ connecting two vertices of $U$. An edge is called a {\em bridge} if its removal increases the number of connected components of the graph. An edge $f$ is said to be {\em parallel} to $e$ if $e$ and $f$ have the same endpoints, and {\em series} to $e$ if $e$ is a bridge in $G\setminus f$ but not in $G$. For an edge $e$ of a graph $G$, we denote the graph obtained by removing $e$ from $G$ by $G\setminus e$, and the graph obtained by removing $e$ and the edges adjacent to $e$ by $G^+(e)$. Similarly, for a vertex $v$ of $G$, $G\setminus v$ is the graph obtained from $G$ by removing $v$ and the edges incident to $v$. For an edge $(u,v)$ of $G$, the graph {\em contracted} by $(u,v)$, denoted by $G/(u,v)$, is the graph obtained by unifying the vertices $u$ and $v$ into one. For an edge set $F=\{e_1,\ldots,e_k\}$, $G/F$ denotes the graph $G/e_1/e_2/\cdots/e_k$. 
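The graph operations just introduced ($G\setminus e$, $G^+(e)$, $G\setminus v$, and contraction) are used by all the algorithms below. As a minimal illustration only, they can be realized on an edge-list representation as follows (the function names are ours, not the paper's):

```python
def remove_edge(edges, e):
    """G \\ e : delete the edge e."""
    return [f for f in edges if f != e]

def remove_adjacent(edges, e):
    """G+(e) : delete e and every edge sharing an endpoint with e."""
    return [f for f in edges if not set(e) & set(f)]

def remove_vertex(edges, v):
    """G \\ v : delete v together with its incident edges."""
    return [f for f in edges if v not in f]

def contract(edges, e):
    """G / (u, v) : unify v into u; self-loops vanish, parallel edges remain."""
    u, v = e
    out = []
    for a, b in edges:
        a, b = (u if a == v else a), (u if b == v else b)
        if a != b:
            out.append((a, b))
    return out

square = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(remove_adjacent(square, (0, 1)))  # [(2, 3)]
print(contract(square, (0, 1)))         # [(0, 2), (2, 3), (3, 0)]
```

Note that `contract` deliberately keeps parallel edges, since the spanning tree algorithm of Section \ref{sec:sptree} distinguishes them.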
\vspace{-2mm} \section{Push Out Amortization}\label{sec:PO} \vspace{-2mm} The size of the input of each iteration of a recursive algorithm often decreases with the depth of the recursion. Thus, iterations near the root iteration take a relatively long time, and iterations near the leaf iterations take a relatively short time. Motivated by this observation, we amortize the computation time by moving the computation time of each iteration to its children. We carry out this move from the top to the bottom, so that the computation time of ancestors is recursively diffused to their descendants. When we can obtain a short amortized computation time in this way, iterations with long computation times have many descendants, at least proportionally many to their computation time; the average computation time per iteration will be long only when they have few descendants. However, it is not easy to prove that every inner iteration has sufficiently many descendants. Instead, we use a local condition on the relation between a parent and its children. Suppose that $\alpha > 1$ and $\beta\geq 0$ are two constants.\\ \vspace{-1mm} \noindent {\bf \em Push Out Condition (PO condition):} for iteration $X$, $\overline{T}(X) \ge \alpha T(X) - \beta (|C(X)|+1)T^*$.\\ \vspace{-1mm} \noindent Fig. \ref{fig:POcond} shows a simple example of this condition. After the assignment of computation time of $\alpha\beta (|C(X)|+1)T^*$ to the children and the remainder to itself, the inequality $\overline{T}(X) \ge \alpha T(X)$ holds. Intuitively, this implies that the total computation time of one level of the recursion increases with the depth, unless many iterations are leaf iterations. Considering that enumeration algorithms usually spend less time in the deeper levels of the recursion, we can see that this implies that each iteration has many children on average. 
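As a concrete numeric illustration of the condition (all constants and timings below are hypothetical, chosen only for the check), PO condition and the push-out charging used in the proof of Theorem \ref{poa} can be evaluated directly:

```python
# Hypothetical constants: alpha > 1, beta >= 0, and the leaf bound T*.
ALPHA, BETA, T_STAR = 2.0, 1.0, 1.0

def po_condition(t_x, child_times):
    """PO condition: sum of children's times >= alpha*T(X) - beta*(|C(X)|+1)*T*."""
    return sum(child_times) >= ALPHA * t_x - BETA * (len(child_times) + 1) * T_STAR

def push_out(s_x, t_x, child_times):
    """One application of the push out rule: distribute the surplus
    S(X) + T(X) - beta/(alpha-1)*(|C(X)|+1)*T* among the children,
    proportionally to their own computation times."""
    surplus = s_x + t_x - BETA / (ALPHA - 1) * (len(child_times) + 1) * T_STAR
    total = sum(child_times)
    return [surplus * t / total for t in child_times]

t_x, children = 10.0, [9.0, 8.0, 8.0]
assert po_condition(t_x, children)       # 25 >= 2*10 - 1*4
assert not po_condition(10.0, [2.0, 3.0])  # cheap children violate the condition
# The claim in the proof: each child receives at most T(Z)/(alpha-1).
for s_z, t_z in zip(push_out(t_x / (ALPHA - 1), t_x, children), children):
    assert s_z <= t_z / (ALPHA - 1)
```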
This is in some sense not a typical condition for bounding the time complexity of recursive algorithms; usually we want to decrease the total computation time in the deeper levels. However, in enumeration, the number of leaf iterations is fixed, and thereby the total computation time at the bottom level is also fixed. Thus, this condition implies that the total computation time is short. \begin{theorem}\label{poa} If any inner iteration of an enumeration algorithm satisfies PO condition, the amortized computation time of an iteration is $O(T^*)$. \end{theorem} \vspace{-3mm} \begin{figure}[t] \begin{center} \begin{minipage}{160pt} \vspace{-4mm} \begin{center} \includegraphics[scale=0.4]{pic1.eps} \end{center} \caption{An iteration, its children, and their computation time represented by rectangle lengths; this seems inefficient if the children take a long time, but it in fact results in many descendants. }\label{fig:POcond} \end{minipage} \hspace{5mm} \begin{minipage}{160pt} \vspace{-4mm} \begin{center} \includegraphics[scale=0.4]{pic2.eps} \end{center} \vspace{-4mm} \caption{Push out rule; an iteration (center) receives computation time from its parent (white rectangle), and delivers it together with its own computation time (gray rectangle) to its children, in proportion to their computation time. }\label{fig:POrule} \end{minipage} \end{center} \vspace{-4mm} \end{figure} \proof To prove the theorem, we charge the computation time. We neither move the operations nor modify the algorithm, but just charge the computation time; the computation time can be considered as tokens, and we move the tokens so that each iteration has a small number of tokens. We charge the computation time from an iteration to its children, i.e., from the top of the recursion tree to the bottom. Thus, an iteration receives computation time from its parent. We charge (push out) its computation time, together with that received from its parent, to its children. 
The computation time is charged to the children in proportion to their individual computation time, using the following rule.\\ \vspace{-1mm} \noindent {\bf \em Push out rule:} Suppose that iteration $X$ receives computation time of $S(X)$ from its parent, so that $X$ has computation time of $S(X) + T(X)$ in total. Then, we fix $\frac{\beta}{\alpha-1}(|C(X)|+1) T^*$ of the computation time to $X$, and charge (push out) the remaining computation time of quantity $S(X) + T(X) - \frac{\beta}{\alpha-1}(|C(X)|+1) T^*$ to its children. Each child $Z$ of $X$ receives computation time proportional to $T(Z)$, i.e., \vspace{-1mm} \[ S(Z) = (S(X) + T(X) - \frac{\beta}{\alpha-1} (|C(X)|+1)T^*) \frac{T(Z)}{\overline{T}(X)}. \] \vspace{-3mm} \noindent See Fig. \ref{fig:POrule} for an example. According to this rule, we charge the computation time from the root iteration to the leaf iterations, so that each inner iteration $X$ keeps $O((|C(X)|+1)T^*)$ computation time. Since the sum of the numbers of children over all nodes in a tree is no greater than the number of nodes in the tree, this is equivalent to each iteration keeping $O(T^*)$ time. The remaining issue is to show that each leaf iteration receives computation time of $O(T^*)$; this suffices to prove the statement of the theorem. To show it, we state the following claim.\\ \vspace{-2mm} \noindent {\bf \em Claim}: if we charge the computation time in the manner of the push out rule, each iteration $X$ receives computation time of at most $T(X) / (\alpha-1)$ from its parent, i.e., $S(X) \le T(X) / (\alpha-1)$.\\ \vspace{-2mm} \noindent The root iteration satisfies this condition. Suppose that an iteration $X$ satisfies it. 
Then, for any child $Z$ of $X$, $Z$ receives computation time of \vspace{-4mm} \begin{eqnarray*} && (S(X) + T(X) - \frac{\beta}{\alpha-1} (|C(X)|+1)T^*) \frac{T(Z)}{\overline{T}(X)}\\ &\le& (T(X) / (\alpha-1) + T(X) - \frac{\beta}{\alpha-1} (|C(X)|+1)T^*) \frac{T(Z)}{\overline{T}(X)}\\ &=& \frac{\alpha T(X) - \beta (|C(X)|+1)T^*}{\alpha-1} \times \frac{T(Z)}{\overline{T}(X)}\\ &=& \frac{\alpha T(X) - \beta (|C(X)|+1)T^*}{\overline{T}(X)} \times \frac{T(Z)}{\alpha-1}. \end{eqnarray*} \vspace{-3mm} \noindent Since PO condition is satisfied, $\overline{T}(X) \ge \alpha T(X) - \beta (|C(X)|+1)T^*$. Thus, \vspace{-1mm} \[ \frac{\alpha T(X) - \beta (|C(X)|+1)T^*}{\overline{T}(X)} \frac{T(Z)}{\alpha-1} \le \frac{T(Z)}{\alpha-1}. \] \vspace{-2mm} \noindent By induction, any iteration satisfies the condition in the claim. \qed Note that PO condition does not require iterations to have at least two children. \vspace{-2mm} \section{Enumeration of Elimination Orderings}\label{sec:elim} \vspace{-2mm} Let ${\cal L}$ be a class of structures such as sets, graphs, and sequences. Suppose that any structure $Z\in {\cal L}$ consists of a set of elements called its {\em ground set}, denoted by $V(Z)$. Examples of ground sets are the vertex set of a graph, the edge set of a graph, the cells of a matrix, and the letters of a string. The empty structure $\perp$ is the unique structure with $V(\perp) = \emptyset$, and hereafter we consider only classes $\cal L$ that include the empty structure. For each $Z\in {\cal L}, Z\ne \perp$, we define the set of {\em removable elements} $R(Z)$, such that for each removable element $e\in R(Z)$, the removal of $e$ from $Z$ results in a structure $Z'\in {\cal L}$ with $V(Z') = V(Z)\setminus \{ e\}$. We denote the removal of $e$ from $Z$ by $Z\setminus e$, and we assume that no two different structures can be generated by the removal of $e$. By using removable elements, we define {\em elimination orderings}. 
An elimination ordering is an ordering $(z_1,\ldots,z_n)$ of the elements of $V(Z)$ iteratively removed from $Z$ until $Z$ becomes $\perp$, i.e., each $z_i$ is removable in the structure $Z_i$ obtained by removing $z_1,\ldots,z_{i-1}$ from $Z$. Examples of elimination orderings are the removal of leaves from a tree, and perfect elimination orderings of a chordal graph. A simple algorithm for enumerating elimination orderings can be described as follows. \begin{tabbing} {\bf Algorithm} EnumElimOrdering ($Z, S$)\\ 1. {\bf if} $|V(Z)| = 1$, {\bf output} $S + z$ where $V(Z) = \{ z\}$; {\bf return}\\ 2. {\bf for} each element $z\in V(Z)$ {\bf do}\\ 3. \ \ {\bf if} $z\in R(Z)$, {\bf call} EnumElimOrdering ($Z\setminus z, S + z$)\\ 4. {\bf end for} \end{tabbing} Suppose that we are given a structure $Z$ in a class $\cal L$, together with the removable elements of each structure arising from $Z$. We suppose that we can list all elements of $R(Z)$ in $\Theta(p(|V(Z)|)q(n))$ time, where $p(|V(Z)|)$ is a polynomial of $|V(Z)|$, and $q(n)$ is a function of an invariant $n$ of the input structure, such as the number of edges in the original graph. We also assume that the removal of an element takes $\Theta(p(|V(Z)|)q(n))$ time. \begin{theorem}\label{elim} Elimination orderings of a class $\cal L$ can be enumerated in $O(q(n))$ time for each, if $|R(Z)| \ge 2$ holds for each $Z\in {\cal L}$ such that $|V(Z)|$ is larger than a constant number $c$. \end{theorem} \proof We first bound the computation time except for the output processes, that is, step 1 of EnumElimOrdering. First, we choose two constants $\delta>c$ and $\alpha>1$ such that $\frac{2p(i-1)}{p(i)} > \alpha$ holds for any $i > \delta$. Since $p$ is a polynomial function, $\frac{p(i)}{p(i-1)}$ converges to $1$, thus such an $\alpha$ always exists. Let $X$ be an iteration. When $X$ inputs $Z$ with $|V(Z)| \le \delta$, the computation time is $O(q(n))$, except for the output process. Hence, we have $T^* = O(q(n))$. 
For the case $|V(Z)| \le \delta$, the computation time of $X$ is thus bounded by $O(q(n))$. For the case $|V(Z)| > \delta$, we have \vspace{-2mm} \[ \overline{T}(X) \ \ge \ 2(|V(Z)|-1)p(|V(Z)|-1)q(n) \ > \ \alpha |V(Z)|p(|V(Z)|)q(n),\] \vspace{-3mm} \noindent since $X$ has at least two children. Thus, $X$ satisfies PO condition with any constant $\beta > 0$. From Theorem \ref{poa}, except for the output process, the computation time is bounded by $O(q(n))$ per iteration whose input has at least $\delta$ elements. Since an inner iteration $Y$ can have exactly one child only if $|V(Y)| \le c$, the number of inner iterations is bounded by the number of leaf iterations multiplied by $c$. Therefore, the computation time for each elimination ordering is bounded by $O(cq(n)) = O(q(n))$. Next, let us consider the output process. Instead of explicitly outputting the elimination orderings, we output each elimination ordering $S$ as its difference from the ordering $S'$ output just before $S$. We can output the orderings compactly in this way. Although the difference can be as large as $|V(Z)|$, we can see that it is bounded by the number of operations done since the previous output process. Thus, the total size of all output differences, except for the first ordering, which is output in the usual way, is at most proportional to the total computation time. Therefore, the computation time for the output process is also bounded by $O(q(n))$ per ordering. \qed The next corollary immediately follows from the theorem. \begin{corollary} For a given structure class, elimination orderings can be enumerated by EnumElimOrdering in $O(1)$ amortized time for each, if each inner iteration generates at least two recursive calls, and takes $O(p(|V(Z)|))$ time, where $p$ is a polynomial of $|V(Z)|$. \qed \end{corollary} There are actually several elimination orderings to which this theorem can be applied, and they are listed below. 
For conciseness, we describe each by its structure class and removable elements.\\ \vspace{-1mm} {\bf Example (a): perfect elimination orderings of a chordal graph\cite{enumPEO}}\\ In a graph, a vertex is called {\em simplicial} if the vertices adjacent to it form a clique. An elimination ordering of simplicial vertices is called a {\em perfect elimination ordering}\cite{elimord}, and a graph is {\em chordal} if it has a perfect elimination ordering. We define $\cal L$ as the set of chordal graphs, $V(Z)$ as the vertex set of $Z\in {\cal L}$, and $R(Z)$ as the set of its simplicial vertices. It is known that any chordal graph $Z$ admits a clique tree, whose vertices are the maximal cliques of $Z$, whose edges connect overlapping cliques, and in which the maximal cliques including any fixed vertex form a subtree. If $Z$ is a clique, all vertices of $Z$ are simplicial. If not, it is known that there are at least two maximal cliques each of which has a vertex included in no other maximal clique; these cliques are leaf cliques of a clique tree. Such a vertex is simplicial, hence $|R(Z)| \ge 2$ always holds. Since we can check whether a vertex is simplicial or not in $O(|V(Z)|^2)$ time, we can enumerate all perfect elimination orderings in $O(1)$ time for each. Note that although the algorithm in \cite{enumPEO} already attained the same time complexity, our analysis yields a much simpler algorithm and proof.\\ \vspace{-1mm} {\bf Example (b): perfect sequences\cite{perfectseq}}\\ $\cal L$ is the class of chordal graphs $Z$, and $V(Z)$ is the set of maximal cliques of $Z$. A maximal clique is removable if it is a leaf of some clique tree of $Z$, and the removal of a maximal clique $z$ from $Z$ is the removal of all vertices of $z$ that do not belong to another maximal clique. 
The removal of these vertices results in a graph that includes the remaining maximal cliques, and no new maximal clique appears in the graph. Note that a clique tree has at least two leaves if it has more than one vertex, thus $|R(Z)|\ge 2$. Such an elimination ordering is called a {\em perfect sequence}. Since all removable maximal cliques can be found in time polynomial in the number of maximal cliques\cite{perfectseq}, all perfect sequences can be enumerated in $O(1)$ time for each.\\ \vspace{-1mm} The elimination orderings induced by the following removable elements can also be enumerated in $O(1)$ time for each. \vspace{-2mm} \begin{itemize} \item non-cut vertices of a connected graph \item points on the surface of the convex hull of a point set in the plane \item leaves of a tree \item vertices of degree less than seven in a simple planar graph. \end{itemize} \vspace{-4mm} \section{Enumeration of Matchings}\label{sec:match} \vspace{-2mm} A {\it matching} of a graph $G=(V,E)$ is an edge subset in which no two edges are adjacent. The matchings are enumerated by the following algorithm. \begin{tabbing} {\bf Algorithm} EnumMatching ($G=(V,E), M$)\\ 1: {\bf if} $E = \emptyset$ {\bf then output} $M$; {\bf return}\\ 2: choose an edge $e$ from $E$\\ 3: {\bf call} EnumMatching ($G\setminus e, M$)\\ 4: {\bf call} EnumMatching ($G^+(e), M\cup \{ e\}$) \end{tabbing} \noindent The time complexity of an iteration of EnumMatching is $O(|V|)$. Since each inner iteration generates two children, the computation time for each matching is $O(|V|)$, and no better algorithm has been proposed in the literature. A leaf iteration takes $O(1)$ time, thus $T^* = O(1)$. However, PO condition may not hold for some iterations, so this bound cannot be improved beyond $O(|V|)$ in a straightforward way. PO condition fails when many edges are adjacent to $e$: in such cases, $G^+(e)$ has few edges, thus the subproblem on $G^+(e)$ takes so little time that PO condition does not hold. 
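Before the modification, note that the branching of EnumMatching itself is easy to make concrete; the following runnable sketch mirrors it directly (the efficiency issue just discussed does not affect correctness):

```python
def enum_matchings(edges, matching=None):
    """Yield every matching of the graph given as a list of edges (u, v).

    Mirrors EnumMatching: branch on one edge e, recursing on G \\ e
    (matchings avoiding e) and on G+(e) (matchings containing e).
    """
    if matching is None:
        matching = []
    if not edges:
        yield list(matching)
        return
    e = edges[0]
    # matchings that do not use e
    yield from enum_matchings(edges[1:], matching)
    # matchings that use e: drop every edge sharing an endpoint with e
    rest = [f for f in edges[1:] if not set(e) & set(f)]
    yield from enum_matchings(rest, matching + [e])

triangle = [(0, 1), (1, 2), (0, 2)]
print(len(list(enum_matchings(triangle))))  # 4: the empty matching and each single edge
```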
To avoid this situation, we modify the recursion as follows, so that in such cases the iteration has many children. Let $v$ be a vertex, $u_1,\ldots,u_k$ the vertices adjacent to $v$, and $e_i = (v,u_i)$. We partition the matchings to be enumerated into \vspace{-1mm} \begin{itemize} \item matchings including $e_1$ \item matchings including $e_2$ \item $\cdots$ \item matchings including $e_k$ \item matchings including no edge incident to $v$. \end{itemize} \vspace{-1mm} We see that any matching belongs to exactly one of these groups. To recur, we derive $G^+(e_1),\ldots,G^+(e_k)$ and $G\setminus v$. $G\setminus v$ and $G^+(e_1)$ can be derived in $O(|E|)$ time. To shorten the computation time for $G^+(e_i)$, $i\ge 2$, we construct $G^+(e_i)$ from $G^+(e_{i-1})$: we add back all edges of $G$ incident to $u_{i-1}$, and remove all edges adjacent to $u_i$. This can be done in $O(d(u_{i-1})+d(u_i))$ time. To construct $G^+(e_i)$ for all $i=2,\ldots,k$, we need \vspace{-1mm} \[ O(\ (d(u_1)+d(u_2))\ +\ (d(u_2)+d(u_3))\ +\ \cdots\ +\ (d(u_{k-1})+d(u_k))\ )\ \ =\ O(|E|) \] \vspace{-3mm} \noindent time. Thus, the computation time of an iteration is bounded by $c|E|$ for a constant $c$. The algorithm is described as follows. \begin{tabbing} {\bf Algorithm} EnumMatching2 ($G=(V,E), M$)\\ 1: {\bf if} $E = \emptyset$ {\bf then output} $M$; {\bf return}\\ 2: choose a vertex $v$ having the maximum degree in $G$\\ 3: {\bf call} EnumMatching2 ($G\setminus v, M$)\\ 4: {\bf for} each edge $e$ adjacent to $v$, {\bf call} EnumMatching2 ($G^+(e), M\cup \{ e\}$) \end{tabbing} \begin{theorem}\label{match} All matchings in a graph can be enumerated in $O(1)$ time for each, with $O(|E|+|V|)$ space. \end{theorem} \proof The amortized computation time of the output process is bounded by $O(1)$ per matching by outputting differences, as in the elimination ordering case. Let us consider an inner iteration $X$. 
In iteration $X$, if $d(v)\ge |E|/4$, we generate at least $|E|/4$ recursive calls, thus we have $|C(X)|=\Omega(|E|)$ and PO condition is satisfied by choosing a sufficiently large $\beta$. If $d(v) < |E|/4$, the subproblem on $G\setminus v$ takes at least $3c|E|/4$ time, and the subproblem on $G^+(e_1)$ takes at least $c|E|/2$ time. Hence, by setting $\alpha = 1.25$, we have \vspace{-1mm} \[ \overline{T}(X) \ge 3c|E|/4 + c|E|/2 = 5c|E|/4 \ge \alpha T(X) - \beta |C(X)|T^*, \] \vspace{-3mm} \noindent and thereby PO condition holds. Recall that each inner iteration generates two or more recursive calls; hence the number of iterations does not exceed twice the number of matchings. Since any inner iteration satisfies PO condition and $T^* = O(1)$, the statement holds. Recall also that we assumed that there is no isolated vertex in the input graph, and thus the number of matchings in the graph is greater than the number of vertices and the number of edges. \qed \vspace{-2mm} \section{Enumeration of Connected Vertex Induced Subgraphs}\label{sec:CIS} \vspace{-2mm} We consider the problem of enumerating all vertex sets of a given graph $G=(V,E)$ that induce connected subgraphs (connected induced subgraphs, in short). In the literature, an algorithm has been proposed that runs in $O(|V|)$ time for each\cite{AvFk96}. For the enumeration, it is sufficient to enumerate all connected induced subgraphs including a given vertex $r$. For a vertex $v$ adjacent to $r$, the connected induced subgraphs including $r$ are partitioned into those including $v$ and those not including $v$. The former are the connected induced subgraphs in $\underline{G/(r,v)}$ and the latter are those in $G\setminus v$. We obtain the following algorithm according to this partition, and we prove that this algorithm satisfies PO condition. 
\begin{tabbing} {\bf Algorithm} EnumConnect ($G=(V,E), S, r$)\\ 1: {\bf if} $d(r) = 0$ {\bf then output} $S$; {\bf return}\\ 2: choose a vertex $v$ adjacent to $r$\\ 3: {\bf call} EnumConnect ($\underline{G/(r,v)}, S\cup \{ v\}, r$)\\ 4: {\bf call} EnumConnect ($G\setminus v, S, r$) \end{tabbing} \begin{theorem}\label{connect} All connected vertex induced subgraphs of a graph can be enumerated in $O(1)$ time for each, with $O(|E|+|V|)$ space. \end{theorem} \proof The correctness of the algorithm and the bound on the memory usage are clear. Since each inner iteration generates exactly two recursive calls, the number of iterations is linearly bounded by the number of connected induced subgraphs, and $T^* = O(1)$. As in the matching enumeration, the computation time of the output process is bounded by $O(1)$ per solution. An inner iteration $X$ of the algorithm takes $O(d(r)+d(v))$ time. We assume that $T(X) = c(3d(r)+d(v))$ for a constant $c$, and that a leaf iteration takes $3c$ time, since $T^* = O(1)$. The constant factor of three is the key to PO condition. The degree of $r$ is at least $(d(r)+d(v))/2-1$ in $\underline{G/(r,v)}$, and $d(r)-1$ in $G\setminus v$. Note that $d(r)$ and $d(v)$ are the degrees of $r$ and $v$ in $G$. From this, we can see that the child iteration on $\underline{G/(r,v)}$ takes at least $3c((d(r)+d(v))/2-1)$ time, and that on $G\setminus v$ takes at least $3c(d(r)-1)$ time. Their sum is at least \vspace{-1mm} \[ 3c((d(r)+d(v))/2-1) + 3c(d(r)-1) = \frac{3}{2}c(3d(r)+d(v)) - 6c = \frac{3}{2}T(X) - 6c.\] \vspace{-3mm} \noindent Setting $\beta = 6$, we can see that $X$ satisfies PO condition. Thanks to Theorem \ref{poa}, the computation time for each connected induced subgraph is $O(1)$. \qed \vspace{-2mm} \section{Spanning Trees}\label{sec:sptree} \vspace{-2mm} A subtree $T$ of a graph $G=(V,E)$ is called a {\em spanning tree} if every vertex of $G$ is incident to at least one edge of $T$. Any spanning tree has $|V|-1$ edges. 
There have already been several studies of this problem~\cite{KpRm95,SrTmUn97,Un99}; among them, \cite{Un99} is the simplest and uses an amortized analysis similar to ours. Without loss of generality, we assume that the input graph has no bridge. Let $e_1$ be an edge of $G$. If there are several edges $e_2,\ldots,e_k$ parallel to $e_1$, let $F = \{ e_1,\ldots,e_k\}$ and $F_i = F\setminus \{ e_i\}$. Since at most one edge of $F$ can be included in a spanning tree, we enumerate the spanning trees in $(G\setminus F_1)/e_1,\ldots, (G\setminus F_k)/e_k$. We further enumerate the spanning trees in $G\setminus F$ if it is connected. Any spanning tree is enumerated in exactly one of these subproblems. When $e_1$ has no parallel edges, $e_1$ may have series edges. If there are several edges $e_2,\ldots,e_k$ in series with $e_1$, again let $F = \{ e_1,\ldots,e_k\}$ and $F_i = F\setminus \{ e_i\}$. Since any spanning tree includes at least $k-1$ edges of $F$, we enumerate the spanning trees in $(G/F_1)\setminus e_1,\ldots, (G/F_k)\setminus e_k$. We further enumerate the spanning trees in $G/F$ if $F$ does not form a cycle. Also in this case, any spanning tree is enumerated in exactly one of these subproblems. Using these subdivisions, we construct the following algorithm.
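Before the pseudocode, the basic partition underlying this subdivision can be sketched in Python: spanning trees containing a chosen edge $e$ are the spanning trees of $G/e$ (contract), and those avoiding $e$ are the spanning trees of $G\setminus e$ (delete), which exist only when $e$ is not a bridge. The sketch is ours and deliberately omits the parallel/series bundling described above, so it illustrates the correctness of the partition but not the constant amortized delay.

```python
def enum_spanning_trees(n, edges):
    """Enumerate the spanning trees of a connected multigraph with
    vertices 0..n-1 and edges given as (u, v, label) triples, by the
    basic contract/delete recursion on the first remaining edge."""
    results = []

    def connected(edge_list, vertices):
        # Simple DFS connectivity test over the surviving vertices.
        adj = {u: [] for u in vertices}
        for a, b, _ in edge_list:
            adj[a].append(b)
            adj[b].append(a)
        seen, stack = set(), [next(iter(vertices))]
        while stack:
            u = stack.pop()
            if u not in seen:
                seen.add(u)
                stack.extend(adj[u])
        return seen == vertices

    def rec(vertices, edge_list, tree):
        if len(vertices) == 1:
            results.append(frozenset(tree))
            return
        a, b, lab = edge_list[0]
        if a == b:  # self-loop created by an earlier contraction
            rec(vertices, edge_list[1:], tree)
            return
        # Contract (a, b): merge b into a; these trees contain the edge.
        merged = [(a if x == b else x, a if y == b else y, l)
                  for x, y, l in edge_list[1:]]
        rec(vertices - {b}, merged, tree + [lab])
        # Delete (a, b): allowed only if the rest stays connected.
        rest = edge_list[1:]
        if connected(rest, vertices):
            rec(vertices, rest, tree)

    rec(set(range(n)), list(edges), [])
    return results
```

Since the two branches partition the trees by membership of the chosen edge, every spanning tree is output exactly once; on a triangle the three two-edge trees are produced, and on $K_4$ Cayley's formula gives $4^2 = 16$ trees.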
\begin{tabbing} {\bf Algorithm} EnumSpanningTree ($G=(V,E), T$)\\ 1: {\bf if} $E = \emptyset$ {\bf then output} $T$; {\bf return}\\ 2: choose an edge $e_1$ from $E$\\ 3: $F^p := \{ e_1\} \cup \{ e \mid e \mbox{ is parallel to } e_1\}$ ;\\ \ \ \ \ \ $F^s := \{ e_1\} \cup \{ e \mid e \mbox{ is not parallel to } e_1, \mbox{ and } e \mbox{ is in series with } e_1\}$\\ 4: {\bf for} each $e_i\in F^p$, \ {\bf call} EnumSpanningTree ($(G\setminus (F^p\setminus \{e_i\}))/ e_i, T\cup \{ e_i\}$)\\ 5: {\bf for} each $e_i\in F^s$, \ {\bf call} EnumSpanningTree ($(G/ (F^s\setminus \{e_i \}))\setminus e_i, T\cup (F^s \setminus \{ e_i\})$) \end{tabbing} \noindent We observe that these $k$ subgraphs are actually isomorphic in both cases, except for the edge label $e_i$, thus constructing these graphs takes $O(|V|+|E|)$ time. \begin{theorem}\label{spantree} All spanning trees in a graph can be enumerated in $O(1)$ time for each, with $O(|E|+|V|)$ space. \end{theorem} \proof The space complexity of the algorithm is $O(|E|+|V|)$, and an iteration takes $\Theta(|V|+|E|)$ time, since all edges parallel to or in series with an edge can be found by two connected component decompositions in $O(|V|+|E|)$ time. If no edge is parallel to or in series with $e_1$, we generate two subproblems with $|E|-1$ edges each, thus the PO condition holds. If $k$ edges are parallel to or in series with $e_1$, we have at least $k+1\ge 2$ subproblems with $|E|-(k+1)$ edges each. When $k+1\ge |E|/4$, $T(X) - \beta (|C(X)|+1)T^* \le 0$ holds for some $\beta>0$, and the PO condition holds. When $k+1< |E|/4$, $(k+1)(|E|-(k+1)) \ge 1.5|E|$ holds, so the PO condition holds for $\alpha = 1.5$ and some $\beta>0$. Since each iteration either generates at least two recursive calls or outputs a solution, the number of iterations is at most twice the number of solutions; therefore the statement holds.
\qed \vspace{-2mm} \section{Conclusion}\label{sec:cncl} \vspace{-2mm} We introduced a new way of amortizing the computation time of enumeration algorithms, via local conditions on recursion trees. We clarified conditions that suffice to give non-trivial upper bounds on the average computation time per iteration, and that depend only on the relation between the computation time of a parent iteration and that of its child iterations. We showed that many algorithms for elimination orderings have good properties satisfying the conditions, so that the orderings can be enumerated in constant time each. Several other enumeration algorithms, for matchings, connected vertex induced subgraphs, and spanning trees, were also described, whose time complexities are $O(1)$ per solution. There are many problems whose enumeration algorithms do not satisfy the conditions. An interesting direction for future work is to develop new algorithms for these problems that satisfy the conditions. Another direction is to study other conditions for bounding amortized computation time. Further studies on amortized analysis may fill the gaps between theory and practice, and clarify the mechanisms of enumeration algorithms. \ \\ \noindent {\bf \large Acknowledgments: } Part of this research is supported by the Funding Program for World-Leading Innovative R\&D on Science and Technology, Japan, and a Grant-in-Aid for Scientific Research (KAKENHI), Japan. \vspace{-3mm}
New Release Round-Up October 9th-15th Hey Girl! Check out these new releases from October 9th-15th! Just Lunch (The Shorts Book 3) by Nia Forrester Part 2 of the 'Coffee Date' Randall "The Rocket" Reese is beginning to reclaim his life both professionally and personally, with a new outlook, and a new woman, Dani Erlinger, by his side. Rand and Dani are in a comfortable groove that suits them both, but an unexpected invitation 'just for lunch' and a calamitous weekend excursion have them questioning whether they've become much too comfortable, much too soon. Everything To Me by A.C. Taylor Everyone comes into your life for a reason… Going through a tough time in her life, the only thing on Haze's mind is getting away from anything causing her pain. Her heart feels empty. Her thoughts bring her sadness. Feeling like she has no choice, she packs everything that she owns and leaves town. But in the midst of her escape, someone special crosses her path and does something for her that no one else could. When Dominic approached Haze, his only concern was turning her frown into a smile. He never expected one encounter to spark an instant friendship and a love so strong that he couldn't imagine his life without her. Without a doubt, Haze was the one woman in his life who could make him see things in a way that no one else could. But even with the bond that they share, their complicated lives continue to keep them apart. Someone has to give in and take a real chance on love. The question is, who? And when that time comes, will love win? Or will the smallest obstacle continue to keep them apart? Deliberate Seduction by Marie Nash "I still can't believe that I slept with that man." Serena Jamison was pacing back and forth in her office. "Never mind the fact that I don't even like him. I have never done anything so out of character before. It must have been the alcohol. It went straight to my head." She looked at her friend with a raised brow.
She hadn't said a word since she started her rant. "Aren't you going to say anything?" Anna Winston shrugged her shoulders. "What would you like for me to say? You go through this every time you think about that delicious Trent Denison," she said with a sigh. "I don't know what you're whining about. If I had spent the night with him I would not be bemoaning the fact that it happened. It's been a month, Serena. So you slept with him. It was just one night. I'm beginning to think that he left a lasting impression on you." "He impressed me alright. That man has skills. Every night when I close my eyes I see his face. I'm trying to forget." "What was so bad about that night?" "Anna, I never said the night was bad. Matter of fact, it was fantastic. And that's the problem. It was one of the best nights of my life." "I don't understand. If it was so good why do you call it a mistake? Now that I think about it, you've been very secretive about that night. You gave me very few details." Serena gave her a strange look. "Because it just was, okay." When Anna continued to stare at her, Serena sat down behind her desk to explain. Anna crossed her legs to wait. She watched her friend gather her thoughts. Serena was a beautiful woman and she worked much too hard. She didn't have much of a personal life. She worked late most nights and went home to an empty house. But she had a big heart and Anna loved her like the sister she never had. Serena had liked Anna from the first day she hired her. Anna was her legal assistant and a darn good one. Serena was a very good attorney who had always dreamed of having a private practice. Last year she made the decision to take the plunge. She was still building her client base but so far she was doing very well. Serena cleared her throat. "I haven't said much to you because I'm still confused by my behavior. I had a little too much to drink that night, and I fell into bed with the first available man." Anna rolled her eyes.
"Let me stop you right there. If you're going to tell this story at least be honest with me, and more importantly, with yourself. You did not fall into bed with a stranger. You had hot sex with a man you are very much attracted to." Anna raised her hand when Serena started to go into her denial mode. "Let me finish. No matter how many times you say you dislike Trent, we both know that's not true. I think the alcohol gave you the courage to do something you've wanted to do for a long time. Later, you had second thoughts and now you're calling it a mistake." Serena opened her mouth to reply but then she closed it. She laid her head back against the soft leather of her office chair. "Oh, Anna. What am I going to do? He keeps trying to contact me. And no matter how many times I ignore his calls it doesn't discourage him." "He does seem very determined. If I had to guess, I'd say he's not the kind to give up. Especially when faced with a challenge. That would be you, girlfriend." "I try to avoid him. I've run into him a couple of times. Every time I see him, he winks and gives me a sexy grin. It drives me crazy. I don't know what he finds so amusing. My theory is that he's imagining me naked. I wish he would stop calling. I am not going to sleep with him again." Anna raised a brow. "Are you sure about that? I have a feeling that not a lot of sleep went on that night. The fact that you can't forget about that night tells me the two of you have some unfinished business." A Love Sent From Heaven: A Christian Romance Novel (Heaven On Earth Book 2) by Taretha Jones Thirty-three-year-old Sierra North is beautiful, saved, and single. She has patiently looked on as her younger sisters have all gotten married. Sierra is happy for her siblings, but she can't help but want a special someone for herself. When she develops an attraction to the handsome manager of her local supermarket — Camden Mason — she thinks she's found her soulmate.
Her family and friends try to convince her that Camden isn't the one for her, but she doesn't want to hear about that. She finally forms a friendship with Derek Young, a talented, saved contractor who's settled down in her quaint little hometown. Derek falls in love with Sierra and is convinced that she's destined to be his wife. But first he has to prove to her that they share a love sent from Heaven. It's Simple (a different perspectives series Book 1) by Michelle Richardson | unapologetically representing black love | here's a unique look at a progressive couple and how their choices impact their journey; providing a truthful and sometimes painful look at real-life scenarios and how two fiercely driven and stubborn lovers choose to handle them. they met as teenagers. however, life and opportunities pull them in different directions on opposite coasts. despite the geographical distance, Tia and Chase remain together. it's simple (a different perspectives series, #1) Shoe Fetish 2: Grown Into High Heels by Sharon Bennett and Beatrice Moore There is definitely an immediate emotional attachment we experience when we see "the one"—that shoe (or that man) and try them on, praying for the perfect fit, and Oh, Lord Have Mercy, how we pray that it will fit! These three friends continue to find their way through life, love, and its relationships, transferring all power and emotions via their shoes, which hold secrets. Some harmless and some deadly; and, though Carmen is untrusting of men and their secrets, she has a few deadly secrets of her own. Women love their shoes for varied reasons. Shoes are not only for protection, beauty, and sex, but for the life that they can transport you to, and the secrets that may not have been shared with another person are sometimes within the soles of the shoes. "In Shoe Fetish, Bennett and Moore have created a sad yet warmly uplifting version of the time-honored transition into womanhood.
Their sensitive portrayal of Carmen's resolute honesty and ultimate empowerment makes for an enduring heroine and entertaining reading." – Ellen Tanner Marsh, NY Times Bestselling Romance Novelist. "I absolutely loved reading Shoe Fetish in my trailer between takes. I can't wait for the sequel!" – Maria Howell, singer/actress of The Color Purple, Daddy's Girls, Army Wives, and more. Take a Chance on Me (Baymoor Book 3) by D.A. Young Asked by his sister Georgina to find her old friend Annabelle Gaines, Graham Carlton becomes fascinated with the woman in the photo. He is obsessed with finding her, ensuring her safe return, but most importantly, claiming her as his own. Annabelle has been on the run from one man's brutality and never expected to find a savior in another. Graham Carlton's intensity, virility, arrogance, and masculinity should've scared her. Instead, his bold, domineering nature challenges, infuriates, and arouses Annabelle in ways she hadn't deemed possible. Journey along with man of mystery Graham Carlton and veterinarian Annabelle Gaines as they find a way to make their relationship work in Baymoor, Maryland. Don't miss the chance to meet new characters, and catch up with some of the town's interesting residents along the way. IT IS NECESSARY TO READ THE SERIES IN ORDER. SERIES ORDER: Book One: The Farmer & The Belle Book Two: Lost & Found Book Three: Take a Chance on Me PLEASE READ WARNINGS BELOW. MUST BE 18+ TO READ. THIS BOOK CONTAINS GRAPHIC AND SADISTIC VIOLENCE, STEAMY/EXPLICIT LOVE SCENES AND COARSE LANGUAGE. IT IS NOT RECOMMENDED FOR THOSE WHO ARE EASILY OFFENDED! The Christmas Promise (McClendon Holiday) by Sean D. Young Jennifer McClendon is in need of a vacation. What better place to get away from everything than a sun-drenched tropical island? And there's no better way to get over her ex and have some fun than to spend a little time with a handsome software engineer who's also on holiday. What could it hurt? They won't see each other again anyway.
Simeon Baker wants more than a holiday romance with the gorgeous woman he meets on vacation, but a tragic accident that almost ends his life leaves him in hospital for months, and he isn't able to follow up on his promise to keep in touch. But Fate steps in four years after their magical island holiday, and Simeon realizes Jennifer didn't only leave the island with good memories. If he can convince Jennifer to give them a chance at something permanent, this might just be their first family Christmas. Each book in the McClendon Holiday series is a standalone story that can be enjoyed out of order. Book #1 A McClendon Thanksgiving Book #2 The Christmas Promise If there are any new releases that we missed, feel free to send them our way and we will add them ASAP!
Where is Steve Banerjee Now? Is Somen Still in Jail After Chippendales? With the release of Hulu's Welcome to Chippendales, many true crime fans are wondering: where is Steve Banerjee now? The Hulu miniseries Welcome to Chippendales follows the life of Somen (better known as Steve) Banerjee and his rise to become the founding mogul of Chippendales, a captivating and booming male exotic dancer business in the '80s that focused on female pleasure. Banerjee (played by Kumail Nanjiani) enjoyed life as the head of the most successful male stripper empire, but things took a turn when his business partner Nick De Noia (played by Murray Bartlett) rose to fame after he acquired the touring rights. Banerjee had hired De Noia and was eventually caught conspiring to murder him. But what exactly happened after his arrest? Here's what happened to Steve Banerjee and where he is now. Where is Steve Banerjee now? Image: Erin Simkin / ©Hulu / Courtesy Everett Collection So where is Steve Banerjee now? After De Noia was found fatally shot in the face in his Times Square office, many did not expect Banerjee to be a suspect. However, after the rights were bought back from De Noia's family, all signs pointed to him. He was arrested in September 1993 and pleaded guilty to racketeering, which included organizing the murder of De Noia. However, on October 23, 1994, the day before Banerjee was to be sentenced, he committed suicide in his cell. Banerjee had set the assassination plot in motion in 1987 by hiring Ray Colon, a former Palm Springs police officer and entertainer, and his accomplice Gilbert Rivera Lopez, who agreed to kill De Noia for $25,000. Banerjee and De Noia became business partners after Banerjee saw the success of the Los Angeles location and hired the Emmy-winning choreographer to help open the New York location.
After several locations emerged, Banerjee and De Noia split the business in half, with Banerjee overseeing the permanent locations and De Noia handling the touring aspect of the company. The touring company turned out to be very successful, which made Banerjee jealous, and he wanted to eliminate his business partner. Even after De Noia's death, Banerjee had his sights set on his competitors. He also allegedly hired Colon to carry out the murders of former Chippendales dancers Read Scot and Steve White, who had started their own company. They eventually became the founders of the London club Adonis and were in direct competition with the Banerjee brand. In the A&E documentary Secrets of the Chippendales, Colon also confessed that Banerjee had asked him to burn down Osko's, a Los Angeles club owned by his friends. In the footage obtained by The Sun, he said: "He asked me if I could burn it down. I could understand why he wanted it [burned], because it was full, but it was gigantic – three times the size of Chippendales. So he threw down $7,000…I just took it." Things stalled when Colon hired a hitman using the alias "Strawberry," who flew all the way to London to carry out the Adonis murders but got cold feet before he could inject Scot and White with cyanide. Strawberry turned himself in to the FBI and brought down Colon in the process. After serving seven months in prison, Colon aided the FBI by obtaining confessions from Banerjee in Switzerland. Banerjee was arrested and charged with conspiracy to violate the federal murder-for-hire law and five counts of causing others to travel in foreign commerce and use facilities of foreign commerce to further the murder plot, according to UPI. Initially, Banerjee pleaded not guilty to the charges, according to the AP; conviction on all charges would have resulted in life in prison and a $1.75 million fine. He eventually pleaded guilty to counts of attempted arson, racketeering and contract killing.
He struck a plea deal that would see his sentence reduced to 26 years, but he would lose his Chippendales fortune. At the time, a federal prosecutor told a federal judge that Banerjee had told an informant he planned to "leave the country or kill himself" if arrested. Reonard McFadden, an executive assistant to the director of the Los Angeles Metropolitan Detention Center, said in a statement at the time that Banerjee was depressed but did not appear to be at risk of killing himself: "Banerjee, like every other inmate, had been questioned by a staff psychologist. There was no indication he was suicidal." The man who actually carried out the murder, Lopez, was convicted of second-degree murder and sentenced to 25 years to life in prison. Colon, meanwhile, received a reduced sentence, since he had worked with the FBI to build the case against Banerjee. He pleaded guilty to conspiracy and contract killing and was released in 1996, two years after Banerjee's death. An Indian immigrant who had once worked as a gas station attendant, Banerjee otherwise led a normal family life. He married his wife Irene in the 1980s, whom actress Annaleigh Ashford said was difficult to portray, as little about her life is public. "I hadn't done any research because there is nothing on this woman," she said. "So the only thing I had to work through was the given circumstances of the situation, the real events. It was very important to me to create a person who is genuinely human and complicated and yet lives in the world of the late '70s and '80s." Irene inherited the Chippendales business along with his entire fortune after his death. The couple had two children, a daughter and a son, Christian, who eventually decided to continue his father's legacy by becoming a stripper. In conversation with the New York Post, Christian said of his father, and of what his father would have thought of his endeavors today with his own company, Strippendales: "People have a lot of opinions and that's okay.
He was a good guy…I always had this connection to my father, even when he wasn't alive, through Chippendales. I think he would want to push me in that direction. He would want to continue his legacy through his son." Welcome to Chippendales can be viewed on Hulu. Courtesy of Kerrera House Press: Deadly Dance, the basis for the Hulu series Welcome to Chippendales (starring Kumail Nanjiani, Murray Bartlett, Annaleigh Ashford, Dan Stevens and Juliette Lewis), tells the fascinating story of Steve Banerjee, founder and owner of LA's amazing nightclub Chippendales. In the post-pill, pre-AIDS LA club scene of the 1980s, celebrities, desperate housewives, and wild bachelors all converged in one place: Chippendales — and behind it all was arson, the mob, and murder. If you or someone you know is having suicidal thoughts, call or text 988 to contact the 988 Suicide & Crisis Lifeline. The Lifeline offers 24/7 confidential support for people experiencing suicidal crises or emotional distress.
Cautious optimism from recent results of COVID-19 vaccines Neelam Sen, JNU With the increasing number of COVID-19 infections around the world, developing vaccines is critical for controlling the pandemic. Vaccines are substances that help prevent infections. Typically, they contain an agent that resembles the germ causing a disease, which activates the immune system to recognize and destroy the germ. COVID-19 vaccines are expected to provide immunity to large sections of populations and thus alleviate the pandemic. Recently published results about the ChAdOx1 nCoV-19 vaccine [1] (co-developed by Oxford University and AstraZeneca in the UK) and the mRNA-1273 vaccine [2] (co-developed by the National Institute of Allergy and Infectious Diseases and Moderna in the US) from their first and second stage trials have provided hope to many countries around the world in the midst of the COVID-19 pandemic. The development of vaccines typically takes several years, involving initial exploratory research to formulate the vaccine, followed by pre-clinical testing in animals to assess safety and effectiveness. This is followed by at least 3 stages of human clinical trials. The first and second stage human clinical trials of a vaccine try to answer the questions 'Is it safe?' and 'Does it activate an immune response?', respectively. The critical third stage clinical trial asks how effectively the vaccine protects a population group against the disease. Regulatory approvals have been given in cases with efficacies ranging from 50% (flu vaccines) to 98% (measles vaccines). Results from the early stage trials of the two COVID-19 vaccines mentioned above have shown that they are mostly safe, producing only the local or systemic reactions commonly associated with vaccinations, and that the overall efficacy of these two vaccines appears positive. However, it is important to note the risks of drawing overly optimistic conclusions about the vaccines.
For example, the stage 1 & 2 trials of the ChAdOx1 nCoV-19 vaccine were limited to 129 individuals in the UK (mostly white), and the 56-day observation period covers less than 10% of the intended one-year follow-up [1]. The Moderna mRNA-1273 vaccine was tested on 45 healthy adults in the US, of whom 89% were white, and observations were again limited to 56 days [2]. Both vaccines produced an increased immune response, characterized by a rise in virus-specific antibodies by day 28. Their data also show an understandably wide range of responses across individuals. Within the limited statistical and biological understanding that these studies provide, it is clear that the vaccines have some success in activating the immune response, but that this response varied among those who received the dose. The critical Phase 3 trials will provide better insight into the effectiveness of these vaccines. It is particularly important to consider the range of safety and efficacy values, especially as multiple ethnicities, age groups and co-morbidities are taken into account. For example, the Drugs Controller General of India approved phase 2 & 3 trials of the ChAdOx1 nCoV-19 vaccine in India across 17 selected sites. These trials will provide a clearer understanding of how varied the response to the vaccine would be across the ethnic groups and common comorbidities prevalent in the Indian population. It is important to note that the speed at which these vaccines are being developed is unprecedented, and any expectations from vaccines should be based only on available scientific data. [1] Folegatti PM, Ewer KJ, Aley PK, et al. Safety and immunogenicity of the ChAdOx1 nCoV-19 vaccine against SARS-CoV-2: a preliminary report of a phase 1/2, single-blind, randomised controlled trial, Lancet (2020) [2] Jackson LA, Anderson EJ, Rouphael NG, et al. An mRNA vaccine against SARS-CoV-2 — preliminary report, New England Journal of Medicine (2020) [Last updated 22 August 2020]
David Wolfson, Baron Wolfson of Tredegar QC, born in July 1968 in Liverpool, is a British barrister and life peer. He was appointed a minister at the Ministry of Justice on 22 December 2020, a post he left on 13 April 2022.

Biography

Born in Liverpool in 1968, Wolfson was educated at King David High School in Liverpool, then spent a year at Yeshivat HaKotel in Jerusalem. He read Oriental Studies and Law at Selwyn College, Cambridge, graduating in 1991. Wolfson attended the Inns of Court School of Law, where he received an Inns of Court scholarship. He was called to the bar at Inner Temple, one of the Inns of Court, which had awarded him a major scholarship, in October 1992, and he is now a bencher there. Wolfson practises commercial law at One Essex Court in the Temple, London. He has acted in many major banking and commercial disputes in recent years, and his practice extends across a broad range of commercial law, in both litigation and international arbitration. He also sits as an arbitrator in domestic and international disputes. Before joining the UK government, Wolfson was named "Commercial Litigation Silk of the Year 2020" by The Legal 500, as well as "Commercial Litigation Silk of the Year" at the Chambers UK Bar Awards 2020. Wolfson was appointed Parliamentary Under-Secretary of State for Justice at the Ministry of Justice on 22 December 2020. He was then created Baron Wolfson of Tredegar, of Tredegar in the County of Gwent, on 30 December 2020, and was introduced in the House of Lords on 7 January 2021.

References

External links

People associated with Liverpool
1968 births
Life peers
<?php /** * SubUnit * * This class has been auto-generated by the Doctrine ORM Framework * * @package mediaSCORE * @subpackage model * @author Your name here * @version SVN: $Id: Builder.php 7490 2010-03-29 19:53:27Z jwage $ */ class SubUnit extends BaseSubUnit { }
Anopinella sympatrica is a species of moth of the family Tortricidae. It is found in Guatemala. The length of the forewings is 8.2–12.0 mm. External links Systematic revision of Anopinella Powell (Lepidoptera: Tortricidae: Euliini) and phylogenetic analysis of the Apolychrosis group of genera Anopinella Moths of Central America Moths described in 2003
pspp-users mailing list

Re: "ERROR I/O" on a large database on version 0.7.9 Win7 binary

From: John Darrington
Subject: Re: "ERROR I/O" on a large database on version 0.7.9 Win7 binary
Date: Tue, 5 Jun 2012 06:31:08 +0000
User-agent: Mutt/1.5.18 (2008-05-17)

[ Moving conversation to address@hidden since this is now clearly a bug ]

This problem seems to keep coming and going. And it has only been reported on Windows, and it seems only on some Windows machines which are configured in a particular way (we don't know the exact criteria).

Windows is rather difficult to debug - the only way is to keep producing test versions to spit out diagnostic messages. This is compounded by the fact that few of the developers have or know how to use Windows. Harry is the only person who has the skills to produce Windows binaries.

So if we are going to fix this problem we'll need a commitment from myself or Ben (to work on the diagnosis and fix), from Harry (to build diagnostic versions) and from Henry (to try out these diagnostic versions and report the results).

I know that people have a lot of other things to do. So the question is: are all those people prepared to invest the time and effort to fix this bug once and for all?

J'

On Mon, Jun 04, 2012 at 08:03:27PM +0000, Gong, Henry wrote:

Hi,

I switched from 0.7.9 March 15 64-bit to 0.7.9 May 15 64-bit (pspp.awardspace.com); here are the results.

SHOW N.
note: SHOW: N is 51475.

LIST.
C:\path\to\pspp.exe: writing to temporary file: No such file or directory
<every cell says I/O Error>

Psppire still shows every cell as blank after I attempt to scroll past case 51464; LIST still outputs errors; and psppire is still unable to do analysis on more than 51464 cases. So the differences are: SHOW N. works, and LIST now says that there is no such temporary file.

Henry

--
PGP Public key ID: 1024D/2DE827B3
fingerprint = 8797 A26D 0854 2EAB 0285 A290 8A67 719C 2DE8 27B3
See http://keys.gnupg.net or any PGP keyserver for public key.
Choroszczynka is a village in the administrative district of Gmina Tuczna, within Biała Podlaska County, Lublin Voivodeship, in eastern Poland. It lies approximately 4 kilometres north of Tuczna, 27 kilometres south-east of Biała Podlaska, and 96 kilometres north-east of the regional capital Lublin.
Greenspace Blog Office of Sustainability & the Environment City of Seattle Blog Seattle's Buildings Are Using Less Energy November 4, 2016 by WysockS How can we tell if Seattle is becoming more energy efficient? And why is that important? Seattle is committed to accelerating energy efficiency improvements because the more we reduce our energy use, the more we reduce our impact on the climate. One key way we measure our progress in building energy efficiency is through the data submitted to the City as part of Seattle's Benchmarking Ordinance. Seattle buildings 20,000 sq. ft. or larger are required to track energy performance and annually report to the City. A recent analysis shows that the energy use of Seattle's benchmarked buildings is moving in the correct direction – which is down. Collectively, Seattle's benchmarked buildings show a 2.7% decrease in energy consumption from 2014 to 2015, after adjusting for differences in weather, a decrease of around 450 million kBtu. The City's own portfolio of buildings has reduced energy use by approximately 4.5% through a focused effort on conservation aimed at reaching our goal for a 20% reduction from 2008 to 2020. The figure below shows the downward trend in the median site Energy Use Intensity (EUI), or energy use per square foot, between 2014 and 2015 for Seattle's most common benchmarked building types. These building types—office, multifamily, and retail buildings—alone make up nearly 50% of Seattle's benchmarked energy use. The median site EUI decreased across all these building types, by between 0.4 and 3.8 kBtu per square foot, from 2014 to 2015. Of the 24 building types analyzed in total, 20 showed a decrease in median site EUI between 2014 and 2015. These findings suggest that the decrease from 2014 to 2015 is widespread across Seattle's buildings rather than limited to a small subset of building types. The trend over the last two years is encouraging, but Seattle still has a lot of work to do.
Buildings are responsible for 33% of Seattle's core greenhouse gas (GHG) emissions. The City of Seattle is aiming for a 39% reduction in total building-related emissions by 2030. In order to achieve those GHG reductions by 2030, we need to reduce our commercial building energy use by 10% and our residential energy use by 20%, even as the City's population, jobs and building stock continue to grow. Strong energy codes encourage efficiency in new buildings, but the added square footage still increases the total building energy use across the City. Last year, 72 new buildings were added, which combined use a total of nearly 300 million kBtus of energy per year—the equivalent of adding three Columbia Centers to Seattle. As Seattle is in the midst of a major construction boom, aggressively pursuing energy efficiency in both new and existing buildings is key to meeting our climate goals. Interested in seeing how your building compares to similar buildings in Seattle? Check out the Energy Benchmarking Dashboard to see how your building stacks up. Enter your building's ENERGY STAR score or EUI to find out!
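For reference, site EUI is simply annual site energy divided by gross floor area. A minimal sketch with hypothetical numbers (the building values below are illustrative, not drawn from the City's benchmarking dataset):

```python
def site_eui(annual_energy_kbtu, floor_area_sqft):
    """Site Energy Use Intensity in kBtu per square foot per year."""
    return annual_energy_kbtu / floor_area_sqft

# Hypothetical office at the ordinance's 20,000 sq ft reporting threshold:
eui_2014 = site_eui(1_200_000, 20_000)  # 60.0 kBtu/sf
eui_2015 = site_eui(1_150_000, 20_000)  # 57.5 kBtu/sf

# Year-over-year change, as reported for the benchmarked portfolio
percent_change = (eui_2015 - eui_2014) / eui_2014 * 100  # about -4.2%
```

Comparing your own building's EUI against the median for its type is exactly what the dashboard does.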
Q: How to check whether a point is inside a polygon if the points are not aligned correctly? I was working on a question in which we have to find the convex hull of given N points and check whether another point P is inside the resulting polygon. I implemented the Jarvis march (gift wrapping) algorithm. It returns the polygon's points correctly ordered as long as none of the endpoints of the convex hull are collinear, but when collinear points are present they do not come out in an aligned (ordered) manner. If such a polygon arrives without aligned endpoints, how can we check whether the point P is inside or outside? All the algorithms I know of or can think of rely on knowing exactly which point comes first and which comes second along the boundary. Is there an approach for this special case? example: If the points of the polygon are {0,0},{1,1},{4,4},{2,2},{3,3},{2,0} and the point is {2,1}, the algorithm should treat the polygon as {0,0},{1,1},{2,2},{3,3},{4,4},{2,0} and report that {2,1} is in the polygon. A: You could start by transforming the polygon to ensure there are no collinear points. Clearly only the endpoints of any collinear segment need to be retained.
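A sketch of the suggested transformation (the function names are mine, not from the answer): Andrew's monotone-chain hull with a strict turn test discards interior collinear points automatically, after which a simple sign-consistency test decides membership for the convex polygon.

```python
def convex_hull(points):
    """Andrew's monotone chain. Popping on cross <= 0 enforces strictly
    convex turns, so only the endpoints of each collinear run survive.
    Returns the hull in counter-clockwise order."""
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    hull = []
    for seq in (pts, pts[::-1]):          # lower hull, then upper hull
        part = []
        for p in seq:
            while len(part) >= 2 and cross(part[-2], part[-1], p) <= 0:
                part.pop()
            part.append(p)
        hull.extend(part[:-1])            # last point repeats in the next pass
    return hull

def inside_convex(p, hull):
    """True if p lies inside or on a counter-clockwise convex polygon:
    p must be on the left of (or on) every directed edge."""
    n = len(hull)
    for i in range(n):
        a, b = hull[i], hull[(i + 1) % n]
        if (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0]) < 0:
            return False
    return True
```

For the example in the question, the hull of {0,0},{1,1},{4,4},{2,2},{3,3},{2,0} collapses to the triangle (0,0),(2,0),(4,4), and (2,1) is reported inside.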
Theo Jay Wharton (born 15 November 1994) is a footballer who plays as a midfielder for Cymru Premier club Barry Town United and the Saint Kitts and Nevis national team. Early and personal life Wharton was born in Cwmbran, Torfaen, and is the son of former footballer Sean Wharton and godson of former footballer Nathan Blake. Club career Early career Wharton played for Pontypool-based side Race Juniors and helped the team reach the Tesco Cup final at the City of Manchester Stadium. Cardiff City Wharton started his career at Cardiff City, playing for their under-18 side during the 2011–12 season. His impressive form for the youth team resulted in a first-team call-up by manager Malky Mackay for the FA Cup game against West Bromwich Albion, in which he came on as a substitute. Following his debut, Mackay admitted that Wharton would train with the first team and had a "big future ahead of him". On 29 March 2012, Wharton signed his first professional contract, which would keep him at Cardiff until at least June 2014. Wharton joined National League South club Weston-super-Mare on 23 March 2017 on loan. York City and Nuneaton Borough Wharton signed for newly relegated National League North club York City on 30 June 2017 on a one-year contract. He joined York's divisional rivals Tamworth on 22 December 2017 on a one-month loan. Tamworth's attempts to extend his loan for the rest of the season were unsuccessful, and he finished his spell at the club with four appearances. Wharton made nine appearances for York as they finished 2017–18 in 11th place in the table. He was released at the end of the season. Wharton signed for National League North club Nuneaton Borough in July 2018. Hereford Wharton signed for National League North club Hereford on 22 December 2018. Barry Town United Wharton signed for Cymru Premier club Barry Town United on 13 January 2020.
International career Wharton was first selected for the Wales national under-17 team for the 2011 UEFA European Under-17 Championship qualifying round, starting two matches against Belgium and Denmark. Wharton was called up to the Wales under-21 team to play Moldova on 22 March 2013. On 9 September 2014, Wharton made his Wales under-21 debut in a 1–1 draw against Lithuania under-21s. In 2016, Wharton chose to switch allegiance to play for Saint Kitts and Nevis, the country where his grandparents were born. He made his debut for the side on 13 November 2016 in a 2–0 defeat to Haiti. Career statistics Club International As of match played 14 October 2018. Saint Kitts and Nevis score listed first, score column indicates score after each Wharton goal. References External links Profile at the Nuneaton Borough F.C. website 1994 births Living people Footballers from Cwmbran Welsh footballers Wales youth international footballers Wales under-21 international footballers Saint Kitts and Nevis footballers Saint Kitts and Nevis international footballers Association football midfielders Cardiff City F.C. players Weston-super-Mare A.F.C. players York City F.C. players Tamworth F.C. players Nuneaton Borough F.C. players Barry Town United F.C. players National League (English football) players Welsh people of Saint Kitts and Nevis descent Cymru Premier players
import React, { Component } from 'react'
import { callPlayer, getSDK } from '../utils'
import { canPlay } from '../patterns'

const SDK_URL = 'https://w.soundcloud.com/player/api.js'
const SDK_GLOBAL = 'SC'

export default class SoundCloud extends Component {
  static displayName = 'SoundCloud'
  static canPlay = canPlay.soundcloud
  static loopOnEnded = true

  callPlayer = callPlayer
  duration = null
  currentTime = null
  fractionLoaded = null

  componentDidMount () {
    this.props.onMount && this.props.onMount(this)
  }

  load (url, isReady) {
    getSDK(SDK_URL, SDK_GLOBAL).then(SC => {
      if (!this.iframe) return
      const { PLAY, PLAY_PROGRESS, PAUSE, FINISH, ERROR } = SC.Widget.Events
      if (!isReady) {
        this.player = SC.Widget(this.iframe)
        this.player.bind(PLAY, this.props.onPlay)
        this.player.bind(PAUSE, () => {
          const remaining = this.duration - this.currentTime
          if (remaining < 0.05) {
            // Prevent onPause firing right before onEnded
            return
          }
          this.props.onPause()
        })
        this.player.bind(PLAY_PROGRESS, e => {
          this.currentTime = e.currentPosition / 1000
          this.fractionLoaded = e.loadedProgress
        })
        this.player.bind(FINISH, () => this.props.onEnded())
        this.player.bind(ERROR, e => this.props.onError(e))
      }
      this.player.load(url, {
        ...this.props.config.options,
        callback: () => {
          this.player.getDuration(duration => {
            this.duration = duration / 1000
            this.props.onReady()
          })
        }
      })
    })
  }

  play () {
    this.callPlayer('play')
  }

  pause () {
    this.callPlayer('pause')
  }

  stop () {
    // Nothing to do
  }

  seekTo (seconds) {
    this.callPlayer('seekTo', seconds * 1000)
  }

  setVolume (fraction) {
    this.callPlayer('setVolume', fraction * 100)
  }

  mute = () => {
    this.setVolume(0)
  }

  unmute = () => {
    if (this.props.volume !== null) {
      this.setVolume(this.props.volume)
    }
  }

  getDuration () {
    return this.duration
  }

  getCurrentTime () {
    return this.currentTime
  }

  getSecondsLoaded () {
    return this.fractionLoaded * this.duration
  }

  ref = iframe => {
    this.iframe = iframe
  }

  render () {
    const { display } = this.props
    const style = {
      width: '100%',
      height: '100%',
      display
    }
    return (
      <iframe
        ref={this.ref}
        src={`https://w.soundcloud.com/player/?url=${encodeURIComponent(this.props.url)}`}
        style={style}
        frameBorder={0}
        allow='autoplay'
      />
    )
  }
}
https://forum.sierrawireless.com/t/cant-find-a-correct-at-commands-reference-for-mc8355/6162

Can't find a correct AT commands reference for MC8355

#1
Hello everyone,

As I specified in the title, I can't find any document covering the AT command interface of the MC8355 modem module. The closest thing I could find was 2130617_Support_AT_Command_Reference-v2.4.pdf, but aside from the most common and universal AT commands (+CFUN, for instance), there is no reference to the commands the MC8355 modem supports (listed with AT+CLAC). There is also no reference at all for the commands starting with $, while I get plenty of references for commands starting with ! (which the MC8355 doesn't seem to support).

Does someone know where I can find the correct document?

Thanks everyone for the support,

Matt

#2
Hi,

Please check the 2130616 AirPrime MC8XXX Extended AT guide.

Thanks.
Q: How to arrange 15 women and 15 men so no two females are seated next to each other? To a certain conference, each firm can send two employee representatives, on the condition that one of them is a male and the other a female. If 15 firms were represented in this conference, what is the probability that no two females are seated next to each other? (Assume that all 30 members are seated in a single row of seats) A] 1/2 B] 14!/30! C] 15!/30! D] 28/30! For the above question I am trying the following: 15 men can be arranged in 15! ways. There are 16 spots among the men (-M-M-.....-M-). These 16 spots can be taken by 15 women in 16! ways. My answer is (15!*16!)/30!. But it's not listed in the answer choices. Am I missing something, or are the answers incorrect? My approach is similar to the approach in the question in the link below. How many ways are there for 10 women and six men to stand in a line A: My Solution: If we represent a man by M and a woman by W, then we use the Gap Method. Arrange the men first, leaving a gap on either side of each: _M_M_M_M_M_M_M_M_M_M_M_M_M_M_M_ The $15$ men can be arranged in $15!$ ways (here the men and women are all distinct). Now we can place the $15$ women in these $16$ gaps, which can be done in $\displaystyle \binom{16}{15}\times 15! = 16!$ ways. So the total number of arrangements in which no $2$ women sit together is $\displaystyle 15! \times 16!$, and the probability is $\displaystyle \frac{\bf{Favourable \; cases}}{\bf{Total \; cases}} = \frac{15!\times 16!}{30!}$. So I think your answer is right. A: I think the answer is wrong because it doesn't take into account that two men can sit side by side (if you have two women on the exterior): WMWMWM..WMMW Something like ${30*15!*15!\over30!}$
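The gap-method count can be sanity-checked by brute force on smaller instances. This sketch (my own helper, not from the thread) enumerates the distinct M/W seat patterns for n men and n women; that suffices because each pattern corresponds to n!·n! labeled seatings, which cancels in the ratio:

```python
from itertools import permutations
from math import factorial

def no_two_women_adjacent_prob(n):
    """Probability that no two of n women sit next to each other when
    n men and n women are seated in one row, by brute-force enumeration
    of the C(2n, n) distinct M/W patterns."""
    people = ['M'] * n + ['W'] * n
    patterns = set(permutations(people))
    good = sum(
        all(not (a == 'W' and b == 'W') for a, b in zip(p, p[1:]))
        for p in patterns
    )
    return good / len(patterns)

def gap_formula(n):
    """Gap-method formula from the question: n! * (n+1)! / (2n)!"""
    return factorial(n) * factorial(n + 1) / factorial(2 * n)
```

For n = 2, 3, 4 the brute-force probability matches n!·(n+1)!/(2n)! exactly, supporting the asker's answer of 15!·16!/30! for n = 15 (which indeed is not among the listed choices).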
\section{introduction} Frustrated magnets often exhibit a magneto-structural transition in which the system releases the magnetic frustration by spontaneously distorting the underlying lattice, resulting in a magnetic long-range order (LRO). Such a spin-lattice-coupled ordering can commonly be seen in a series of spinel chromium oxides $A$Cr$_2$O$_4$ ($A$=Hg, Cd, Zn, Mg) \cite{ZnCrO_Lee_00, CdCrO_Chung_05, HgCrO_Ueda_06, MgCrO_Ortega_08}, where the magnetic Cr$^{3+}$ sites form the pyrochlore lattice, a three-dimensional network consisting of corner-sharing tetrahedra. The characteristic properties of these compounds are a first-order simultaneous magnetic and structural transition at zero field \cite{ZnCrO_Lee_00, CdCrO_Chung_05, HgCrO_Ueda_06, MgCrO_Ortega_08} and a field-induced $\frac{1}{2}$-magnetization-plateau phase \cite{HgCrO_Ueda_06,CdCrO_Kojima_08,CdCrO_Miyata_13,ZnCrO_Miyata_jpsj_11,ZnCrO_Miyata_prl_11,ZnCrO_Miyata_jpsj_12,HgCrO_Nakamura_jpsj_14,MgCrO_Miyata_jpsj_14} which is considered to originate from the spin-lattice coupling (SLC). Similar zero-field and in-field properties have been observed also in LiInCr$_4$O$_8$ which belongs to a new class of chromium oxides hosting the so-called breathing pyrochlore lattice, an alternating array of small and large tetrahedra \cite{BrPyro_Okamoto_13,BrPyro_Tanaka_14,BrPyro_Nilsen_15,BrPyro_Saha_16,BrPyro_Lee_16,BrPyro_Hdep_Okamoto_17,BrPyro_Hdep_Gen_19}. In this paper, we theoretically investigate effects of the magnetic field on the spin-lattice-coupled ordering in the breathing-pyrochlore antiferromagnets, based on the two possible simplified models describing the SLC, the bond-phonon \cite{Bond_Penc_04,Bond_Motome_06,Bond_Shannon_10} and site-phonon \cite{Site_Jia_05,Site_Bergman_06,Site_Wang_08,Site_AK_16,Site_AK_19} models, which will be explained below.
In the chromium oxides, three Hund-coupled $3d$ electrons at each Cr$^{3+}$ site occupy the 3-fold $t_{2g}$ level, constituting a localized $S=3/2$ spin with the orbital degrees of freedom quenched. As the magnetic anisotropy is negligible, the classical Heisenberg model should provide a reasonable modeling. In the nearest neighbor (NN) antiferromagnetic classical Heisenberg model, it is theoretically well established that no magnetic LRO occurs at any finite temperature due to a massive ground-state degeneracy \cite{Reimers_MC_92,Moessner-Chalker_prl,Moessner-Chalker_prb}, but in the $A$Cr$_2$O$_4$ family, the degeneracy is lifted, via SLC, by lattice distortions which lower the lattice symmetry from cubic to tetragonal or orthorhombic, leading to antiferromagnetic LRO's \cite{ZnCrO_Lee_00, CdCrO_Chung_05, HgCrO_Ueda_06, MgCrO_Ortega_08}. Besides, in a magnetic field, $A$Cr$_2$O$_4$ commonly show the $\frac{1}{2}$-magnetization plateau \cite{HgCrO_Ueda_06,CdCrO_Kojima_08,CdCrO_Miyata_13,ZnCrO_Miyata_jpsj_11,ZnCrO_Miyata_prl_11,ZnCrO_Miyata_jpsj_12,HgCrO_Nakamura_jpsj_14,MgCrO_Miyata_jpsj_14}, pointing to a robust 3-up and 1-down collinear spin configuration on each tetrahedron. Such a collinear state is considered to be stabilized by the biquadratic interaction of the form $-({\bf S}_i \cdot {\bf S}_j)^2$ originating from the SLC \cite{Bond_Penc_04,Bond_Motome_06,Bond_Shannon_10}. In this $\frac{1}{2}$-plateau phase, the crystal structure changes to cubic in the Hg and Cd compounds \cite{HgCrO_Matsuda_07,CdCrO_Inami_06,CdCrO_Matsuda_prl_10}. These zero-field and in-field experimental results suggest that SLC is essential in the uniform pyrochlore antiferromagnets $A$Cr$_2$O$_4$.
\begin{figure*}[t] \begin{center} \includegraphics[scale=0.80]{Snap_SP015J06_rv.eps} \caption{Real-space spin structures of the low-field P1, middle-field P2, and high-field P3 phases obtained in the MC simulations for the site-phonon model (\ref{eq:Hamiltonian_SP}) with $J'/J=0.6$ and $b=b'=0.15$ at (a) $H/J=2.0$, (b) $H/J=3.5$, and (c) $H/J=4.3$. Red and yellow (blue) arrows represent spins pointing upward (downward) along the applied magnetic field $H$. In (a), the in-plane components of the red and yellow spins are antiparallel to each other. In the P1, P2, and P3 phases, local spin configurations on each tetrahedron are the cant 2:2, 3-up 1-down, and cant3:1, respectively (see the inset of each figure). These three 16-sublattice states are realized on both the {\it uniform} and {\it breathing} pyrochlore lattices. Concerning the spin structures of other phases unique to the breathing system (SP1, SP2, SP2', SP3, and SP4 phases in Fig. \ref{fig:HT_siteall}), see Fig. \ref{fig:snap_site020J06} in the main text and Figs. \ref{fig:snap_site020J02H0180} and \ref{fig:snap_site020J02H0350-0370} in Appendix C. \label{fig:snap_site015J06}} \end{center} \end{figure*} Breathing-pyrochlore magnets also provide examples of the spin-lattice-coupled ordering. Among so-far reported several compounds \cite{BrPyro_Saha_17,BrPyro_doped_Okamoto_15,BrPyro_doped_Wang_17,BrPyro_doped_Wawrzynczak_17,BrPyro_Sulfides_Okamoto_18,BrPyro_Sulfides_Pokharel_18,BrPyro_Hdep_Gen_20,BrPyro_Sulfides_Kanematsu_20,BrPyro_Sulfides_Pokharel_20,qBrPyro_Kimura_14,qBrPyro_Haku_prb16,qBrPyro_Haku_jpsj16,qBrPyro_Rau_16,qBrPyro_Rau_18}, the chromium oxides Li(Ga, In)Cr$_4$O$_8$ \cite{BrPyro_Okamoto_13,BrPyro_Tanaka_14,BrPyro_Nilsen_15,BrPyro_Saha_16,BrPyro_Lee_16,BrPyro_Hdep_Okamoto_17,BrPyro_Hdep_Gen_19} exhibit magnetic properties similar to those of $A$Cr$_2$O$_4$. 
In Li(Ga, In)Cr$_4$O$_8$, the NN interactions on small and large tetrahedra, $J$ and $J'$, are antiferromagnetic with different strength, where the ratio $J'/J$ is estimated from the bond-length difference to be $J'/J \sim 0.1$ and $0.6$ for the In and Ga compounds, respectively \cite{BrPyro_Okamoto_13}. In these compounds, the massive ground-state degeneracy, which is still present at the level of the NN model with different antiferromagnetic $J$ and $J'$ \cite{BrPyro_NNmodel_Benton_15}, is lifted by distorting the lattice from cubic to tetragonal, although in contrast to the uniform case of $A$Cr$_2$O$_4$, the structural transition slightly preempts the magnetic one \cite{BrPyro_Tanaka_14,BrPyro_Nilsen_15,BrPyro_Saha_16,BrPyro_Lee_16}. We note that according to Refs. \cite{BrPyro_Nilsen_15,BrPyro_Saha_16}, the structural transition in these breathing systems is incomplete and the low-temperature ordered phase is a coexistence of the original cubic and emergent tetragonal crystal domains. In addition to this zero-field property, recent high-field measurements on LiInCr$_4$O$_8$ show the occurrence of the $\frac{1}{2}$-magnetization plateau \cite{BrPyro_Hdep_Okamoto_17,BrPyro_Hdep_Gen_19}, which suggests that the SLC is also important in the breathing-pyrochlore chromium oxides. Interestingly, in the chromium sulfide CuInCr$_4$S$_8$ with antiferromagnetic $J$ and ferromagnetic $J'$, the $\frac{1}{2}$ plateau has also been observed and the effect of the SLC has been pointed out \cite{BrPyro_Hdep_Gen_20}. In this paper, bearing the chromium oxides in our mind, we will theoretically investigate the effects of both the SLC and the external magnetic field on the spin ordering in the breathing pyrochlore antiferromagnets with antiferromagnetic $J$ and $J'$. Theories of the SLC in pyrochlore antiferromagnets could be classified into two. 
One is a phenomenological theory based on the group theoretical classification of the lattice distortion \cite{SLC_Yamashita_00,SLC_Tchernyshyov_prl_02,SLC_Tchernyshyov_prb_02}, and the other is a microscopic theory taking account of the effect of {\it local} lattice distortions which have been modeled alternatively by bond phonons or site phonons. In this work, we take the latter microscopic approach. In the bond-phonon model which was first introduced by Penc {\it et al}. \cite{Bond_Penc_04,Bond_Motome_06,Bond_Shannon_10} and has conventionally been used to describe the SLC effect, each bond is assumed to vibrate independently, whereas in the site-phonon model \cite{Site_Jia_05,Site_Bergman_06,Site_Wang_08,Site_AK_16,Site_AK_19}, the lattice vibration is modeled by the Einstein phonon, i.e., each site is assumed to vibrate independently. The bond phonon involves only two spins ${\bf S}_i$ and ${\bf S}_j$ residing on both ends of each bond, giving rise to the effective spin interaction of the biquadratic form $-({\bf S}_i \cdot {\bf S}_j)^2$. The site phonon, on the other hand, additionally involves inter-bond spins, so that it mediates effective further neighbor interactions in addition to the biquadratic one. In both the uniform and breathing pyrochlore antiferromagnets at zero field, the {\it site-phonon} system undergoes a first-order transition into a collinear magnetic LRO which is characterized by $(1,1,0)$-type [$(\frac{1}{2},\frac{1}{2},\frac{1}{2})$-type] magnetic Bragg peaks for weak (strong) SLC \cite{Site_AK_16, Site_AK_19}. The $(1,1,0)$ state realized in the weak SLC regime, which corresponds to the antiferromagnetic order with the lattice distortion of the $E_u$ phonon in Ref. \cite{SLC_Tchernyshyov_prb_02}, has the same spin structure as that observed in the tetragonal crystal domains of the breathing pyrochlore antiferromagnets Li(Ga, In)Cr$_4$O$_8$ \cite{BrPyro_Nilsen_15,BrPyro_Saha_16}. 
Furthermore, although the $(1,1,0)$ state itself does not seem to be reported in the uniform pyrochlore antiferromagnets $A$Cr$_2$O$_4$ \cite{CdCrO_Chung_05,ZnCrO_Lee_08,HgCrO_Matsuda_07,MgCrO_Ortega_08}, it has been shown that a $(1,1,0)$-like state slightly modified by the Dzyaloshinskii-Moriya (DM) interaction is consistent with the N\'eel state of CdCr$_2$O$_4$ \cite{SLC_Chern_06}. Thus, the site-phonon model should capture an essential feature of the SLC in the chromium oxides. In the bond-phonon model, on the other hand, no magnetic LRO appears and only a spin nematic state is realized in the low-temperature ordered phase, so that to induce a magnetic LRO, additional further neighbor interactions need to be incorporated \cite{Bond_Shannon_10}. Indeed, in Refs. \cite{Bond_Motome_06,Bond_Shannon_10,ZnCrO_Miyata_jpsj_11,ZnCrO_Miyata_jpsj_12} taking the bond-phonon picture, the additional ferromagnetic third NN interaction is incorporated as the simplest example, although the obtained magnetic LRO at zero field is not consistent with the experimental result.
Then, the natural question is how the magnetization processes in the two models behave in the breathing case. In this work, we examine the in-field properties of both the bond-phonon and site-phonon models on the breathing pyrochlore lattice, focusing on the weak SLC regime which should be relevant to the existing materials Li(Ga, In)Cr$_4$O$_8$. It will be shown by means of Monte Carlo (MC) simulations that in both models, the $\frac{1}{2}$ plateau is robust against the breathing bond-alternation, but that whether the spin-lattice-coupled orderings are affected or not depends on the model. In the bond-phonon model, the ordering properties are not altered by the breathing bond-alternation: no magnetic LRO appears, and two types of quadrupolar orders as well as a spin-liquid-plateau phase appear as in the uniform case \cite{Bond_Shannon_10}. In the site-phonon model, on the other hand, three magnetically long-range-ordered states, the low-field, middle-field $\frac{1}{2}$-plateau, and high-field phases (see P1, P2, and P3 phases in Figs. \ref{fig:snap_site015J06}, \ref{fig:GS}, and \ref{fig:HT_siteall}), appear in a wide range of the parameter space on both the {\it uniform} and {\it breathing} pyrochlore lattices. Local spin configurations on each tetrahedron in the three phases are the so-called cant 2:2, 3-up 1-down, and cant 3:1, respectively. In addition to the three, the breathing bond-alternation can induce unconventional phases just below the $\frac{1}{2}$-plateau phase and the saturation field (see SP1, SP2, SP2', SP3, and SP4 phases in Fig. \ref{fig:HT_siteall}). In contrast to the basic three phases shown in Fig. \ref{fig:snap_site015J06} where the spin configurations on all tetrahedra are equivalent to one another, in the SP1, SP2, SP2', SP3, and SP4 phases (for their real-space structures, see Figs.
\ref{fig:snap_site020J06}, \ref{fig:snap_site020J02H0180}, and \ref{fig:snap_site020J02H0350-0370}), the spin configuration on each tetrahedron is no longer equivalent, resulting in tetrahedron-based LRO's, which could be attributed to a feature characteristic of the breathing pyrochlore lattice, i.e., the existence of the nonequivalent small and large tetrahedra. The outline of this paper is as follows: In Sec. II, we introduce the models taking account of the local lattice distortions, i.e., the bond-phonon and site-phonon models, and derive their effective spin Hamiltonians. Physical quantities relevant to the present system and numerical methods are explained in Sec. III. This is followed by Secs. IV and V, in which the ordering properties of the bond-phonon and site-phonon models are discussed, respectively. We end the paper with a summary and discussion in Sec. VI. In the Appendix, details of the numerical method and real-space structures of several magnetic LRO's are explained. \section{model} In this section, we derive the effective spin Hamiltonian describing the SLC in the presence of the breathing bond-alternation. Throughout this paper, the NN sites denote the neighboring sites connected by a bond independent of its length.
Supposing that the displacement vector at each site $i$ from its regular position ${\bf r}^0_i$ on the lattice is denoted by ${\bf u}_i$, a minimal microscopic Hamiltonian could be written as \begin{equation}\label{eq:original_H} {\cal H} = \sum_{\langle i,j \rangle } J_{\rm ex}\big(|{\bf r}^0_{ij} + {\bf u}_i-{\bf u}_j|\big){\bf S}_i \cdot {\bf S}_j -H\sum_i S_i^z+ {\cal H}_{\rm L}, \end{equation} where ${\bf S}_i$ is the classical Heisenberg spin at the site $i$, ${\bf r}^0_{ij} \equiv {\bf r}^0_i-{\bf r}^0_j$, $J_{\rm ex}$ is the exchange interaction which is, for simplicity, assumed to depend only on the distance between the two spins, the summation $\langle i,j \rangle$ is taken over all the NN sites, $H$ is a magnetic field applied in the $z$-direction in the spin space, and ${\cal H}_{\rm L}$ denotes the elastic energy. As the displacement is usually small, i.e., $|{\bf u}_i|/|{\bf r}^0_i| \ll 1$, we can expand the exchange interaction with respect to the displacement as follows: \begin{eqnarray}\label{eq:expansion} J_{\rm ex}\big(|{\bf r}^0_{ij} + {\bf u}_i-{\bf u}_j| \big) &=& J_{\rm ex}\big(|{\bf r}^0_{ij}|\big) + \frac{d J_{\rm ex}}{dr}\Big|_{r=|{\bf r}^0_{ij}|} \, {\bf e}_{ij} \cdot ({\bf u}_i-{\bf u}_j ) \nonumber\\ &+& O\big(\big[{\bf e}_{ij} \cdot ({\bf u}_i-{\bf u}_j )\big]^2\big), \end{eqnarray} where ${\bf e}_{ij} \equiv {\bf r}^0_{ij}/|{\bf r}^0_{ij}|$ is the unit vector connecting NN sites $i$ and $j$. Hereafter, we take the leading-order correction of the order of $O({\bf e}_{ij} \cdot ({\bf u}_i-{\bf u}_j ))$ in Eq. (\ref{eq:expansion}) into account. \begin{figure}[t] \begin{center} \includegraphics[scale=0.3]{model_rv.eps} \caption{Two possible minimal models describing the local lattice distortions. (a) Bond-phonon model taking account of the independent bond-length change $u_{ij}$.
(b) Site-phonon model taking account of the independent site displacement ${\bf u}_i$.\label{fig:model}} \end{center} \end{figure} Concerning the elastic energy ${\cal H}_{\rm L}$, although in reality the neighboring ${\bf u}_i$'s should be correlated with each other in the form of dispersive phonon modes, we use here the local phonon models, i.e., the bond-phonon and site-phonon models, because they are minimal tractable models describing phonon-mediated spin interactions. In the bond-phonon and site-phonon models, the variables describing the lattice degrees of freedom are the length change of each bond $u_{ij}\equiv{\bf e}_{ij}\cdot\big({\bf u}_i-{\bf u}_j \big)$ and the displacement at each site ${\bf u}_i$, respectively (see Fig. \ref{fig:model}), and the elastic energy ${\cal H}_{\rm L}$ in each model is given by \begin{equation}\label{eq:lattice_H} {\cal H}_{\rm L} = \left \{\begin{array}{l} \displaystyle{ \frac{c_{\rm BP}}{2}\sum_{\langle i,j \rangle} u_{ij}^2 } \qquad (\mbox{bond-phonon model}), \nonumber\\ \displaystyle{ \frac{c_{\rm SP}}{2}\sum_i |{\bf u}_i|^2 } \qquad (\mbox{site-phonon model}), \end{array} \right . \end{equation} with elastic constants $c_{\rm BP}$ and $c_{\rm SP}$ \cite{Bond_Penc_04,Site_Bergman_06}. Note that, compared with the conventional Debye model, the bond-phonon model assumes that the variables describing the lattice degrees of freedom are not the ${\bf u}_i$'s but the $u_{ij}$'s, with the elastic energy ${\cal H}_{\rm L}$ being of the same form as that of the Debye phonon, whereas the site-phonon (Einstein-phonon) model assumes that the elastic energy is a local one, with the lattice degrees of freedom ${\bf u}_i$'s being unchanged. Substituting Eq. (\ref{eq:expansion}) into Eq.
(\ref{eq:original_H}) and integrating out the lattice degrees of freedom $u_{ij}$ for the bond-phonon model and ${\bf u}_i$ for the site-phonon model, we obtain the bond-phonon spin Hamiltonian ${\cal H}_{\rm BP}$ as \begin{eqnarray}\label{eq:Hamiltonian_BP} {\cal H}_{\rm BP} &=& J \, \sum_{\langle i,j \rangle_S } {\bf S}_i \cdot {\bf S}_j + J' \, \sum_{\langle i,j \rangle_L } {\bf S}_i \cdot {\bf S}_j -H\sum_i S_i^z \nonumber\\ &-& J \, b \, \sum_{\langle i,j \rangle_S } \big( {\bf S}_i \cdot {\bf S}_j \big)^2 - J' \, b' \,\sum_{\langle i,j \rangle_L } \big( {\bf S}_i \cdot {\bf S}_j \big)^2 , \end{eqnarray} and the site-phonon spin Hamiltonian ${\cal H}_{\rm SP}$ as \begin{eqnarray}\label{eq:Hamiltonian_SP} {\cal H}_{\rm SP} &=& J \, \sum_{\langle i,j \rangle_S } {\bf S}_i \cdot {\bf S}_j + J' \, \sum_{\langle i,j \rangle_L } {\bf S}_i \cdot {\bf S}_j -H\sum_i S_i^z\\ &-& J \, b \, \sum_{\langle i,j \rangle_S } \big( {\bf S}_i \cdot {\bf S}_j \big)^2 - J' \, b' \,\sum_{\langle i,j \rangle_L } \big( {\bf S}_i \cdot {\bf S}_j \big)^2 \nonumber\\ &-& \sum_i \Big\{ \frac{Jb}{4} \sum_{j\neq k \in N_S(i)}+\frac{J'b'}{4}\sum_{j\neq k \in N_L(i)}\Big\} \big( {\bf S}_i \cdot {\bf S}_j \big)\big( {\bf S}_i \cdot {\bf S}_k \big) \nonumber\\ &-& \sqrt{J \, J' \,b \, b'}\sum_i \sum_{j \in N_S(i) } \sum_{k \in N_L(i) } {\bf e}_{ij} \cdot {\bf e}_{ik} \, \big( {\bf S}_i \cdot {\bf S}_j \big)\big( {\bf S}_i \cdot {\bf S}_k \big), \nonumber \end{eqnarray} where two kinds of NN exchange interactions $J\equiv J_{\rm ex}\big(|{\bf r}^0_{ij}|_{\rm Small}\big)$ and $J'\equiv J_{\rm ex}\big(|{\bf r}^0_{ij}| _{\rm Large}\big)$, NN sites $N_{S}(i)$ and $N_{L}(i)$, and the summations $\langle i,j \rangle _{S}$ and $\langle i,j \rangle _{L}$ are defined only on the small and large tetrahedra, respectively. 
The degree of the breathing lattice-distortion is quantified by the ratio $0<J'/J \leq 1$, and the dimensionless parameters \begin{eqnarray}\label{eq:b-def} b &=& \frac{1}{cJ}\Big[ \frac{d J_{\rm ex}}{dr}\big|_{r=|{\bf r}^0_{ij}|_{\rm Small}} \Big]^2 \nonumber\\ b' &=& \frac{1}{cJ'}\Big[ \frac{d J_{\rm ex}}{dr}\big|_{r=|{\bf r}^0_{ij}|_{\rm Large}} \Big]^2 \end{eqnarray} measure the strength of the SLC for small and large tetrahedra, respectively, where $c=2c_{\rm BP}$ in the bond-phonon model and $c=c_{\rm SP}$ in the site-phonon model. We take $J, \, J'>0$ and $d J_{\rm ex}/dr < 0$, so that $b, \, b' >0$. In the uniform case of $J'/J=1$, there is, of course, no distinction between the small and large tetrahedra, so that there is only one SLC parameter, i.e., $b=b'$. The feature common to the two models is the existence of the biquadratic interaction of the form $-({\bf S}_i \cdot {\bf S}_j)^2$. Since the overall sign of this interaction is always negative irrespective of the signs of $J$ and $J'$, the $-({\bf S}_i \cdot {\bf S}_j)^2$ term tends to align neighboring spins to be collinear and is known to be an origin of the spin nematic state. In contrast to the bond-phonon model (\ref{eq:Hamiltonian_BP}), the site-phonon model (\ref{eq:Hamiltonian_SP}) contains additional intra-tetrahedron interactions [the third line in Eq. (\ref{eq:Hamiltonian_SP})] and inter-tetrahedron ones [the fourth line in Eq. (\ref{eq:Hamiltonian_SP})]. Due to the inter-tetrahedron interactions involving further neighbor spins, a magnetic LRO becomes possible in the site-phonon model. By contrast, as we will demonstrate in Sec. IV, the bond-phonon model does not exhibit any magnetic LRO because of the absence of such effective further neighbor interactions. 
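The origin of the biquadratic term can be seen explicitly for a single small-tetrahedron bond of the bond-phonon model by completing the square in $u_{ij}$ (a one-bond sketch of integrating out the lattice variable, in the notation of Eqs. (\ref{eq:expansion}) and (\ref{eq:b-def})). The bond energy reads

```latex
E_{ij}(u_{ij}) = J \, {\bf S}_i \cdot {\bf S}_j
 + \frac{d J_{\rm ex}}{dr}\Big|_{r=|{\bf r}^0_{ij}|_{\rm Small}} u_{ij} \,
   {\bf S}_i \cdot {\bf S}_j
 + \frac{c_{\rm BP}}{2}\, u_{ij}^2 ,
```

which is minimized at $u_{ij}^* = -\frac{1}{c_{\rm BP}} \frac{d J_{\rm ex}}{dr}\big|_{r=|{\bf r}^0_{ij}|_{\rm Small}} \, {\bf S}_i \cdot {\bf S}_j$, yielding

```latex
E_{ij}(u_{ij}^*) = J \, {\bf S}_i \cdot {\bf S}_j
 - \frac{1}{2c_{\rm BP}}\Big[ \frac{d J_{\rm ex}}{dr}\Big|_{r=|{\bf r}^0_{ij}|_{\rm Small}} \Big]^2
   \big( {\bf S}_i \cdot {\bf S}_j \big)^2
 = J \, {\bf S}_i \cdot {\bf S}_j - J \, b \, \big( {\bf S}_i \cdot {\bf S}_j \big)^2 ,
```

consistent with the definition of $b$ in Eq. (\ref{eq:b-def}) with $c=2c_{\rm BP}$.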
\section{Relevant physical quantities and numerical method} In this section, we first introduce physical quantities relevant to our systems described by the Hamiltonians (\ref{eq:Hamiltonian_BP}) and (\ref{eq:Hamiltonian_SP}), and then explain the numerical methods used to calculate them. Throughout this paper, $N$ is the total number of spins and $\langle {\cal O} \rangle$ denotes the thermal average of a physical quantity ${\cal O}$. \subsection{Relevant physical quantities} Since both the bond-phonon and site-phonon models have the biquadratic term $-({\bf S}_i \cdot {\bf S}_j)^2$ favoring collinear spin states, a key physical quantity is the spin collinearity, which can be measured by \begin{equation}\label{eq:OP_nematic} P = \frac{3}{2} \Big\langle \frac{1}{N^2}\sum_{i,j} \big( {\bf S}_i\cdot{\bf S}_j\big)^2 - \frac{1}{3} \Big\rangle. \end{equation} As the magnetic field $H$ is applied in the $S^z$ direction, it is convenient to divide $P$ into the collinearity along the field direction, $P_{\parallel}$, and those in the perpendicular plane, $P_{\perp 1}$ and $P_{\perp 2}$, as follows: \begin{eqnarray}\label{eq:OP_nematic_div} P &=& P_\parallel + P_{\perp 1} + P_{\perp 2}, \nonumber\\ P_\parallel &=& \frac{3}{4}\Big\langle (Q^{3z^2-r^2})^2 \Big\rangle , \nonumber\\ P_{\perp 1} &=& \frac{3}{4}\Big\langle |{\bf Q}_{\perp 1}|^2 \Big\rangle, \quad {\bf Q}_{\perp 1}=(Q^{xz},Q^{yz}), \nonumber\\ P_{\perp 2} &=& \frac{3}{4}\Big\langle |{\bf Q}_{\perp 2}|^2 \Big\rangle, \quad {\bf Q}_{\perp 2}=(Q^{x^2-y^2}, Q^{xy}) .
\end{eqnarray} Here, we have introduced the rank-two tensor order parameters $Q^\alpha$ defined by $Q^\alpha = \frac{1}{N}\sum_{i} Q^\alpha_i$ with the local quadrupole moments \cite{Bond_Shannon_10} \begin{eqnarray} Q^{3z^2-r^2}_i &=& \frac{1}{\sqrt{3}}\big\{ 2(S_i^z)^2- (S_i^x)^2 - (S_i^y)^2 \big\}, \nonumber\\ Q^{x^2-y^2}_i &=& (S_i^x)^2- (S_i^y)^2, \nonumber\\ Q^{xy}_i &=& 2S_i^x S_i^y, \nonumber\\ Q^{xz}_i &=& 2S_i^x S_i^z, \nonumber\\ Q^{yz}_i &=& 2S_i^y S_i^z . \end{eqnarray} For a spin-space rotation in the $S^xS^y$-plane by the angle $\phi$, ${\bf Q}_{\perp n}$ is rotated by the angle $n \phi$, so that $P_{\perp n}$ can be used as an order parameter to detect the $n$-fold breaking of rotational symmetry in the $S^xS^y$ plane. When a magnetic (dipolar) LRO is absent but a quadrupole LRO characterized by a nonzero value of $P_{\perp 1}$ ($P_{\perp 2}$) exists, such an LRO is called the vector-multipole (nematic) order \cite{Bond_Shannon_10}. Whether a magnetic (dipolar) LRO is present or not can be examined by measuring the spin structure factors \begin{eqnarray}\label{eq:F_S} F_{S\parallel}({\bf q}) &=& \Big\langle \Big| \frac{1}{N} \sum_i S^z_i \, e^{i{\bf q}\cdot{\bf r}^0_i}\Big|^2\Big\rangle, \nonumber\\ F_{S\perp}({\bf q}) &=& \Big\langle \sum_{\nu=x,y} \Big| \frac{1}{N} \sum_i S^\nu_i \, e^{i{\bf q}\cdot{\bf r}^0_i}\Big|^2\Big\rangle. \end{eqnarray} Noting that the magnetic field $H$ is applied in the $S^z$ direction, we have introduced $F_{S\parallel}({\bf q})$ for the $S^z$ component of spins and $F_{S\perp}({\bf q})$ for the $S^xS^y$-plane component. Also, since the breathing bond-alternation has already been incorporated in the spin Hamiltonian, we have taken ${\bf r}^0_i$ in Eqs. (\ref{eq:F_S}) as the regular position on the {\it uniform} pyrochlore lattice, ignoring the bond-length alternation for simplicity.
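As a concrete illustration, the double sum in Eq. (\ref{eq:OP_nematic}) can be evaluated for a single configuration in $O(N)$ time via the second-moment tensor $M_{\mu\nu}=\frac{1}{N}\sum_i S_i^\mu S_i^\nu$, using the identity $\frac{1}{N^2}\sum_{i,j}({\bf S}_i\cdot{\bf S}_j)^2 = {\rm Tr}\,M^2$. A minimal NumPy sketch (the function name and array layout are ours, for illustration only, and are not taken from the simulation code used in this work):

```python
import numpy as np

def collinearity(spins):
    """Per-configuration collinearity P of Eq. (OP_nematic) and its
    decomposition (P_par, P_perp1, P_perp2) of Eq. (OP_nematic_div).

    spins: (N, 3) array of unit vectors.  Uses
    (1/N^2) sum_ij (S_i.S_j)^2 = Tr M^2 with M = (1/N) sum_i S_i S_i^T,
    which avoids the O(N^2) double sum.
    """
    M = spins.T @ spins / len(spins)              # 3x3 second-moment tensor, Tr M = 1
    # quadrupole components Q^alpha = (1/N) sum_i Q^alpha_i
    q_3z2 = (2 * M[2, 2] - M[0, 0] - M[1, 1]) / np.sqrt(3)
    q_x2y2, q_xy = M[0, 0] - M[1, 1], 2 * M[0, 1]
    q_xz, q_yz = 2 * M[0, 2], 2 * M[1, 2]
    p_par = 0.75 * q_3z2**2
    p_perp1 = 0.75 * (q_xz**2 + q_yz**2)
    p_perp2 = 0.75 * (q_x2y2**2 + q_xy**2)
    return p_par + p_perp1 + p_perp2, p_par, p_perp1, p_perp2
```

For any fully collinear configuration the sum $P_\parallel + P_{\perp 1} + P_{\perp 2}$ equals 1, and for an isotropic spin distribution it vanishes as $N \rightarrow \infty$, matching the normalization of Eq. (\ref{eq:OP_nematic}).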
\begin{figure*}[t] \begin{center} \includegraphics[scale=0.85]{HT_BP015.eps} \caption{Temperature and magnetic-field phase diagrams obtained in the bond-phonon model with $b=b'=0.15$ for $J'/J=1$ (left), $J'/J=0.6$ (center), and $J'/J=0.2$ (right). Note that a gray cross merely indicates the broad-peak temperature of the specific heat $C$ and thus does not signal a phase transition (for details, see the text). \label{fig:HT_bond}} \end{center} \end{figure*} In both the bond-phonon and site-phonon models, once a spin state is obtained, the local lattice distortions can be evaluated directly from the given spin configuration \cite{Site_AK_16,Site_AK_19, HgCrO_Kimura_jpsj_14,CdCrO_Rossi_prl_19}, so that the essential information on the spin-lattice-coupled orders is contained in the spin state. Thus, in this paper, we will basically focus on the ordering properties of the spins. By measuring the above physical quantities defined in Eqs. (\ref{eq:OP_nematic_div}) and (\ref{eq:F_S}) as well as the fundamental ones such as the specific heat $C = \frac{1}{T^2N}\big( \langle {\cal H}^2\rangle - \langle {\cal H} \rangle^2\big)$ and the magnetization $m = \langle | \frac{1}{N}\sum_{i} {\bf S}_i | \rangle$, we identify low-temperature phases in the applied magnetic field $H$. \subsection{Numerical method} To calculate the physical quantities introduced above, we perform Monte Carlo (MC) simulations for the bond-phonon and site-phonon Hamiltonians (\ref{eq:Hamiltonian_BP}) and (\ref{eq:Hamiltonian_SP}). Since our cubic unit cell contains 16 sites (see Fig. \ref{fig:snap_site015J06}), the total number of spins $N$ is related to the linear system size $L$ via $N=16 L^3$. In our MC simulation, we basically perform $2\times 10^6$ MC sweeps at each temperature and magnetic field under periodic boundary conditions, with the first half discarded for thermalization.
A single spin flip at each site consists of the conventional Metropolis update and a successive over-relaxation-like process in which we try to rotate a spin by the angle $\pi$ around the local mean field \cite{Loop_Shinaoka_14}. Observations are made every 10 MC steps, and the statistical average is taken over 4-8 independent runs. In most cases of the present system, efficient MC algorithms such as the temperature-exchange method \cite{Fukushima_exchange} and a cluster-update method, the so-called loop-flip algorithm \cite{Loop_Shinaoka_14}, do not work, except for some parameter sets (see below). Thus, we perform the single-spin-flip MC simulations, as mentioned above. In the present single-spin-flip MC simulations, we often encounter various metastable states, as the spin state is easily trapped in a local minimum and is not efficiently updated. In particular, in the site-phonon model possessing complex inter-tetrahedron interactions, the low-temperature spin states obtained in the four different processes, i.e., cooling and warming runs at a fixed field and field-increase and field-decrease runs at a fixed temperature, sometimes differ. In such a situation, we compare the thermal averages of the energies of these states and regard the lowest-energy state as the equilibrium state. Since this procedure is applicable only in the lower-temperature region where the entropy effect is relatively weak, at relatively higher temperatures we use the mixed-phase method (see Ref. \cite{MixedMethod_Creutz_79} and Appendix A) taking account of the entropy effect. Based on the above analysis, we determine the temperature and magnetic-field phase diagrams shown in Figs. \ref{fig:HT_bond} and \ref{fig:HT_siteall}, where the phase boundary between the low-temperature ordered and high-temperature disordered states is determined by the cooling runs.
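The combined Metropolis plus over-relaxation update can be sketched as follows for a generic classical Heisenberg model; for brevity we use a periodic chain with only the bilinear NN coupling as a stand-in for the pyrochlore neighbor sums (all names here are illustrative and not taken from the simulation code actually used):

```python
import numpy as np

rng = np.random.default_rng(1)

def local_field(spins, i, J=1.0):
    # nearest-neighbor exchange field on a periodic chain; the energy of
    # spin i is -h . S_i with h = -J (S_{i-1} + S_{i+1})
    n = len(spins)
    return -J * (spins[(i - 1) % n] + spins[(i + 1) % n])

def energy(spins, J=1.0):
    # H = J sum_i S_i . S_{i+1} with periodic boundary conditions
    return J * np.sum(np.einsum('ij,ij->i', spins, np.roll(spins, -1, axis=0)))

def random_unit():
    v = rng.normal(size=3)
    return v / np.linalg.norm(v)

def sweep(spins, T, J=1.0):
    for i in range(len(spins)):
        # (1) Metropolis update with a fresh random direction
        h = local_field(spins, i, J)
        new = random_unit()
        dE = -h @ (new - spins[i])
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            spins[i] = new
        # (2) over-relaxation: rotate by pi about the local field.  This is
        # energy-conserving for the bilinear term; in the actual bond-/site-
        # phonon models this rotation is itself Metropolis-accepted ("we try
        # to rotate"), since the biquadratic terms break exact conservation.
        h = local_field(spins, i, J)
        nh = np.linalg.norm(h)
        if nh > 1e-12:
            hn = h / nh
            spins[i] = 2 * (spins[i] @ hn) * hn - spins[i]
```

The over-relaxation step moves the spin as far as possible on its constant-energy circle around the local field, which decorrelates configurations much faster than the Metropolis step alone.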
We note in passing that near the strong first-order transition between the low-field and middle-field $\frac{1}{2}$-plateau phases, we first use the mixed-phase method to obtain the equilibrium state, and then calculate the thermal average of the physical quantities (for details, see Appendix A). In addition to the local-update MC simulations, we use the temperature-exchange method \cite{Fukushima_exchange} to verify the spin-liquid behavior appearing in the $\frac{1}{2}$-plateau region in the bond-phonon model [see Fig. \ref{fig:bond} (c)] and the complex magnetic structures of the SP3 and SP4 phases in the site-phonon model shown in Fig. \ref{fig:snap_site020J02H0350-0370}. In the latter case, the temperature-exchange method can be applied to the $L=3$ system which is the smallest size for the SP3 and SP4 phases, but for the larger size of $L=6$, it does not work because of the first-order character of the transition from the high-temperature paramagnetic phase. Furthermore, we examine the ground state of the site-phonon model for the small number of spins $N=32$, and check that the results obtained in the finite-temperature MC simulations are consistent with those obtained in the ground-state analysis. To search for the global minimum of the Hamiltonian (\ref{eq:Hamiltonian_SP}), we use the ``NMinimize'' function in the Wolfram {\it Mathematica} software 11.3.0. With these multiple checks, we believe that the finite-temperature phases determined in the above procedure are the true equilibrium states, although we cannot rule out the possibility that there exists another phase which cannot be reached by any of the numerical methods used here. Throughout this paper, we restrict ourselves to the case of $b=b'$ for simplicity, although in general, the SLC parameters $b$ and $b'$ defined in Eq. (\ref{eq:b-def}) should take different values in the breathing case. 
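The {\it Mathematica}-based ``NMinimize'' search itself is not reproduced here. As a minimal illustration of ground-state minimization for a spin cluster, the following sketch (our own construction, with the biquadratic and field terms switched off, i.e., $b=b'=0$ and $H=0$) relaxes a single antiferromagnetic Heisenberg tetrahedron by iteratively aligning each spin antiparallel to its local exchange field:

```python
import numpy as np

rng = np.random.default_rng(0)

def tetra_energy(spins, J=1.0):
    """Energy J * sum_{i<j} S_i.S_j over the six bonds of one tetrahedron,
    via sum_{i<j} S_i.S_j = (|sum_i S_i|^2 - N)/2 for unit spins."""
    m = spins.sum(axis=0)
    return 0.5 * J * (m @ m - len(spins))

def relax(spins, J=1.0, n_sweeps=500):
    """Zero-temperature relaxation: repeatedly point each spin antiparallel
    to the field generated by the other spins (coordinate descent)."""
    for _ in range(n_sweeps):
        for i in range(len(spins)):
            h = J * (spins.sum(axis=0) - spins[i])  # field from the other spins
            spins[i] = -h / np.linalg.norm(h)
    return spins

# random initial configuration of 4 unit spins
spins = rng.normal(size=(4, 3))
spins /= np.linalg.norm(spins, axis=1, keepdims=True)
relax(spins)
```

For the classical AFM tetrahedron the minimum is the total-spin-zero manifold with $E=-2J$, which this relaxation reaches from a generic random start; the full Hamiltonian (\ref{eq:Hamiltonian_SP}) would instead require minimizing over all 32 spins including the biquadratic and inter-tetrahedron terms, as done in the ground-state analysis of Sec. V.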
\section{Result in the bond-phonon model} In this section, we will discuss the ordering properties of the bond-phonon model. In the bond-phonon model on the {\it uniform} pyrochlore lattice, it was shown by Shannon, Penc, and Motome that no magnetic (dipolar) LRO appears at any finite temperature or magnetic field; instead, two different quadrupole LRO's, the nematic and vector-multipole orders characterized by nonzero values of $P_{\perp 2}$ and $P_{\perp 1}$, respectively, are realized at low and high fields, whereas at middle fields, the paramagnetic phase persists down to $T=0$, showing spin-liquid behavior \cite{Bond_Shannon_10}. This spin-liquid state consists only of the 3-up and 1-down tetrahedra, so that it possesses a magnetization of $m=\frac{1}{2}$. Furthermore, the spin-liquid state with $m=\frac{1}{2}$ extends over the middle-field window, exhibiting the $\frac{1}{2}$ plateau in the magnetization curve, so that it is called the spin-liquid plateau phase. Here, we discuss the stability of the nematic, vector-multipole, and spin-liquid-plateau phases against the breathing bond-alternation, i.e., the change in $J'/J$. \begin{figure}[t] \begin{center} \includegraphics[width=\columnwidth]{BP_HdepTdep.eps} \caption{MC results for the bond-phonon model with $b=b'=0.15$ and $J'/J=0.2$. (a) Field dependence of the magnetization $m$ (upper panel) and the $S^xS^y$-plane spin collinearities $P_{\perp 1}$ and $P_{\perp 2}$ (lower panel) at $T/J=0.04$. (b) Temperature dependence of the specific heat $C$ (upper panel) and the spin collinearities $P_{\parallel}$, $P_{\perp 1}$, and $P_{\perp 2}$ (lower panel) at $H/J=2.5$. (c) Spin structure factors $F_{S \parallel}({\bf q})$ (upper panels) and $F_{S\perp}({\bf q})$ (lower panels) in the $(h,h,l)$ plane obtained at $H/J=0.8$ and $T/J=0.06$ (left), $H/J=2.5$ and $T/J=0.04$ (center), and $H/J=3.4$ and $T/J=0.04$ (right) for $L=8$.
In $F_{S \parallel}({\bf q})$, the high-intensity trivial peak at ${\bf q}=0$ indicated by the cross, which corresponds to $m^2$, has been removed. \label{fig:bond}} \end{center} \end{figure} Figure \ref{fig:HT_bond} shows the $J'/J$ dependence of the temperature and magnetic-field phase diagram in the bond-phonon model with $b=b'=0.15$. Although the characteristic temperature scales, e.g., the transition temperature between the paramagnetic and ordered phases, are suppressed with decreasing $J'/J$ (increasing the strength of the breathing bond-alternation), the relative stability among the nematic, vector-multipole, and spin-liquid-plateau phases is almost unchanged. Below, we will discuss the nature of these three phases. In Fig. \ref{fig:bond}, we show the field dependence of various physical quantities in the strongly breathing case of $J'/J=0.2$. One can see from the upper panel of Fig. \ref{fig:bond} (a) that the magnetization $m$ increases linearly with the applied field $H$ and, via a discontinuous first-order transition, exhibits the $\frac{1}{2}$ plateau, which is followed by continuous growth in the higher-field phase. The low-field, middle-field $\frac{1}{2}$-plateau, and high-field phases correspond to the nematic, spin-liquid-plateau, and vector-multipole phases, respectively. As one can see from the $S^xS^y$-component spin structure factors $F_{S\perp}({\bf q})$ shown in the lower panels of Fig. \ref{fig:bond} (c), no magnetic Bragg reflections are found in any of the three phases, suggesting that the spin components perpendicular to the applied field are disordered. The same holds for the $S^z$ spin component parallel to the field: in the $S^z$-component spin structure factors $F_{S\parallel}({\bf q})$ shown in the upper panels of Fig. \ref{fig:bond} (c), no nontrivial Bragg peaks are found except the $(1,1,1)$-type peaks originating from the uniform magnetization $m$.
Thus, in all three phases, the spins (dipole moments) remain disordered down to the lowest temperature. The low-temperature ordered and high-temperature paramagnetic phases can be distinguished by the spin collinearity. One can see from the lower panel of Fig. \ref{fig:bond} (a) that the low-field nematic and high-field vector-multipole phases are characterized by nonzero values of $P_{\perp 2}$ and $P_{\perp 1}$, respectively. In the middle-field $\frac{1}{2}$-plateau phase, on the other hand, no LRO occurs for either the quadrupole moments or the dipole moments (spins), i.e., $P_{\perp 2}=P_{\perp 1}=0$, so that this phase is a spin-liquid state, as is also suggested by the specific-heat data shown in Fig. \ref{fig:bond} (b), where no signature of a phase transition can be seen. Note that the broad peak in $C$ is associated with the growth of the collinearity along the field direction $P_{\parallel}$ [see the lower panel of Fig. \ref{fig:bond} (b)], which can also be interpreted as the formation of the 3-up and 1-down spin configuration on each tetrahedron. The broad-peak temperature is indicated by the gray cross in the temperature and magnetic-field phase diagrams in Fig. \ref{fig:HT_bond}. \section{Result in the site-phonon model} In this section, we will discuss the ordering properties of the site-phonon model, in which the inter-tetrahedron interactions work as effective further neighbor interactions, leading to magnetic LRO's. Since, as mentioned in Sec. I, the zero-field phase realized in the weak SLC regime of the site-phonon model has the same spin structure as that observed in Li(Ga, In)Cr$_4$O$_8$ \cite{BrPyro_Nilsen_15,BrPyro_Saha_16,Site_AK_19}, we will focus on this weak SLC regime of $b=b'<0.25$, which should be relevant to the existing materials. This zero-field spin state, which is realized on both the uniform and breathing pyrochlore lattices, is tetragonal-symmetric, being characterized by the $(1,1,0)$-type magnetic Bragg reflections.
Concerning the in-field properties of the site-phonon model on the {\it uniform} pyrochlore lattice, it was shown by Bergman {\it et al.} that the $\frac{1}{2}$ plateau shows up in the magnetization curve as in the case of the bond-phonon model and that the ground-state spin structure of the $\frac{1}{2}$-plateau phase is the $R$ state with the $P$4$_3$32 symmetry \cite{Site_Bergman_06} (the P2 phase explained below is exactly the same as the $R$ state), which consists of the $\uparrow\uparrow\uparrow\downarrow$ chains running along all the bond directions, keeping the 3-up and 1-down configuration on each tetrahedron. This $\uparrow\uparrow\uparrow\downarrow$ state is cubic-symmetric, which is consistent with the experimental results on HgCr$_2$O$_4$ and CdCr$_2$O$_4$ \cite{HgCrO_Matsuda_07, CdCrO_Matsuda_prl_10}. Bearing this fundamental physics in mind, we will discuss the in-field properties of the site-phonon model on the {\it breathing} pyrochlore lattice. \subsection{Ground-state analysis} \begin{figure*}[t] \centering \includegraphics[width=\linewidth]{GS_rv4.eps} \caption{Result of the ground-state analysis of the site-phonon model for the small number of $N=32$ spins. (a) SLC parameter $b=b'$ and the magnetic field $H$ dependence of the ground state for $J'/J=1$ (left), $J'/J=0.6$ (center), and $J'/J=0.2$ (right). Filled (open) symbols denote 16-sublattice (32-sublattice) magnetic structures. (b) Associated magnetization curves for $b=b'=0.02$ (red), $b=b'=0.05$ (green), and $b=b'=0.10$ (blue) in the cases of $J'/J=1$ (top), $J'/J=0.6$ (middle), and $J'/J=0.2$ (bottom).} \label{fig:GS} \end{figure*} We first discuss results of the ground-state analysis for $N=32$ spins.
Since the zero-field ground state of the site-phonon model is a 16-sublattice state characterized by $(1,1,0)$ Bragg reflections or a 32-sublattice one characterized by $(\frac{1}{2},\frac{1}{2},\frac{1}{2})$ reflections \cite{Site_AK_19}, even the small system of $N=32$ spins can capture at least the zero-field magnetic structures. Figure \ref{fig:GS} (a) shows the SLC parameter $b=b'$ and the magnetic field $H$ dependence of the ground state for $J'/J=1$ (left), $J'/J=0.6$ (center), and $J'/J=0.2$ (right), where the magnetic field is normalized by the saturation field for $b=b'=0$, i.e., $4J+4J'$ \cite{BrPyro_Hdep_Gen_20}. Note that in the limit of $b=b' \rightarrow 0$, the ground state remains disordered even in the presence of a magnetic field, at least in the uniform case of $J'/J=1$ \cite{Bond_Shannon_10}. For the parameter sets we investigated, we obtain 14 kinds of magnetic structures, apart from the trivial high-field fully polarized state. Half of them are 16-sublattice states and the remaining half are 32-sublattice ones, whose stability regions are represented by filled and open symbols, respectively, in Fig. \ref{fig:GS} (a). In the uniform case of $J'/J=1$, three magnetically long-range-ordered states are realized in a wide range of the parameter space, i.e., the low-field, middle-field, and high-field phases, which will hereafter be called the P1, P2, and P3 phases, respectively. These P1, P2, and P3 phases appear also in the breathing cases of $J'/J=0.6$ and $J'/J=0.2$. For smaller values of $b=b'$, an intermediate phase appears between the P1 and P2 phases in both the uniform and breathing cases [see the orange and yellow regions in Fig. \ref{fig:GS} (a)]. This intermediate phase is the cant 2:1:1 state \cite{Bond_Penc_04,Bond_Shannon_10} or the ``1-up, 1-down, and V'' state, which will be called the P4 and P5 phases, respectively (for their real-space structures, see Fig. \ref{fig:cant211U} in Appendix B).
The P4 phase is favored in the strongly breathing case of $J'/J=0.2$, whereas in the uniform and weakly breathing cases of $J'/J=1$ and 0.6, the P4 phase is degenerate with the P5 phase, at least within our numerical accuracy. For moderate SLC, on the other hand, additional phases are favored by the breathing bond-alternation. For example, in the weakly breathing case of $J'/J=0.6$, there exists a higher-field phase between the saturation field and the P3 phase, which we call the SP1 phase. The SP1 phase is a 16-sublattice state, like the P1, P2, P3, P4, and P5 phases. In the relatively large $b$ region, a variety of 32-sublattice states become favorable near the saturation field and the $\frac{1}{2}$ plateau, reflecting the fact that in the strong SLC regime of $b,b'>0.25$, the zero-field ground state is the $(\frac{1}{2},\frac{1}{2},\frac{1}{2})$-type 32-sublattice state. Figure \ref{fig:GS} (b) shows the parameter dependence of the magnetization curve. One can see that as the SLC parameter $b=b'$ increases, the $\frac{1}{2}$ plateau, which corresponds to the P2 phase, becomes wider and the associated magnetization jump from the P1 phase becomes more pronounced. The existence of the P4 or P5 phase is reflected in a bending of the plateau just above the magnetization jump. We note that although the top and middle panels of Fig. \ref{fig:GS} (b) are obtained by assuming that the intermediate phase is the P5 phase, the magnetization curve is not altered even if it is assumed to be the P4 phase. Although the above ground-state analysis for $N=32$ spins offers crucial information about magnetic LRO's in the site-phonon model, careful analysis is necessary to check whether the states obtained for $N=32$ remain stable in the thermodynamic limit of $L \rightarrow \infty$, or equivalently, $N \rightarrow \infty$.
In particular, in the relatively large $b$ and smaller $J'/J$ regions, the occurrence of the 32-sublattice states in the ground-state analysis indicates that structures involving possibly more than 32 spins should be taken into account to further lower the energy of the system. Indeed, as we will see below, in the finite-temperature MC simulations, the tetrahedron-based orders involving more than 32 spins (SP2, SP2', SP3, and SP4 phases) appear just below the saturation field and the $\frac{1}{2}$ plateau for $J'/J=0.2$. In addition, the P3 phase is slightly modified in the strongly breathing case of $J'/J=0.2$. \subsection{Temperature and magnetic-field phase diagram} \begin{figure*}[t] \begin{center} \includegraphics[scale=0.85]{HT_SPall_rv3.eps} \caption{Temperature and magnetic-field phase diagrams obtained in the site-phonon model with (a) $b=b'=0.10$, (b) $b=b'=0.15$, and (c) $b=b'=0.20$ for $J'/J=1$ (left), $J'/J=0.6$ (center), and $J'/J=0.2$ (right). \label{fig:HT_siteall}} \end{center} \end{figure*} Now that we have understood the ground-state properties of the site-phonon model for $N=32$, we will next discuss finite-temperature properties for larger numbers of spins $N=16L^3$ with $L \geq 4$. Since the characteristic in-field feature of the spin-lattice-coupled system is the occurrence of the $\frac{1}{2}$ plateau, hereafter, we will focus on the SLC parameters of $b=b' \geq 0.10$ for which the $\frac{1}{2}$ plateau is relatively wide. Figure \ref{fig:HT_siteall} shows the $J'/J$ dependence of the temperature and magnetic-field phase diagram in the site-phonon model with the SLC parameters of $b=b'=0.10$, 0.15, and 0.20. The finite-temperature results in Fig. \ref{fig:HT_siteall} are basically consistent with the results obtained in the ground-state analysis for the small number of spins $N=32$ [see Fig. \ref{fig:GS} (a)] except that in the strongly breathing case of $J'/J=0.2$, LRO's involving more than 32 spins appear as the lowest-energy states.
We note that according to the ground-state analysis for $b=b'=0.15$ and $J'/J <1$, an additional higher-field phase may exist just above the P3 or P3' phase in the center and right panels of Fig. \ref{fig:HT_siteall} (b), but it is not obtained in the temperature range of our MC simulations. Before going into the details of the ordered phases, we briefly summarize the results. In the uniform case of $J'/J=1$, the three magnetically long-range-ordered states, i.e., the P1, P2, and P3 phases, are stabilized in the low-field, middle-field, and high-field regions, respectively. The P1 and P2 phases are robust against the breathing alternation, while the P3 one is not. Although the P3 phase still exists in the weakly breathing case of $J'/J=0.6$, its spin structure is modified in the strongly breathing case of $J'/J=0.2$. We call this modified state the P3' phase. For the weak SLC's of $b=b'=0.10$ and 0.15, only the above four phases, P1, P2, P3, and P3', come into play. By contrast, for the moderate SLC of $b=b'=0.20$, additional phases are induced by the breathing bond-alternation. In the weakly breathing case of $J'/J=0.6$, the SP1 phase appears in a higher-field region, whereas in the strongly breathing case of $J'/J=0.2$, there are two corresponding higher-field phases, which will be called the SP3 and SP4 phases. Another noteworthy aspect in the case of $J'/J=0.2$ is the occurrence of the intermediate phase between the P1 and P2 phases. This phase just below the $\frac{1}{2}$ plateau will be named the SP2 phase, and its lower-temperature state the SP2' phase, in which the correlation between the $S^xS^y$ components of the spins differs from that in the higher-temperature SP2 phase. Now, we shall discuss the details of the ordered states, starting from the P1, P2, and P3 phases appearing on both the {\it uniform} and {\it breathing} pyrochlore lattices.
\subsection{Ordered states appearing on both the uniform and breathing pyrochlore lattices} \begin{figure}[t] \begin{center} \includegraphics[scale=0.74]{SP_Hdep.eps} \caption{MC results for the site-phonon model with $b=b'=0.15$. (a)-(c) Field dependence of $m$, $P_{\perp 1}$, and $P_{\perp 2}$ for (a) $J'/J=1$ and $T/J=0.06$, (b) $J'/J=0.6$ and $T/J=0.04$, and (c) $J'/J=0.2$ and $T/J=0.02$. (d) Spin structure factor obtained in the high-field P3' phase at $H/J=3.4$ and $T/J=0.02$ for $L=8$ in the strongly breathing case of $J'/J=0.2$. (e) Spin structure factors obtained at $T/J=0.04$ in the weakly breathing case of $J'/J=0.6$. The left, center, and right panels are respectively obtained in the low-field P1 phase at $H/J=1.0$, the middle-field P2 phase at $H/J=3.5$, and the high-field P3 phase at $H/J=4.3$ for $L=8$. In $F_{S \parallel}({\bf q})$, the high-intensity trivial peak at ${\bf q}=0$ indicated by the cross, which corresponds to $m^2$, has been removed. \label{fig:site015}} \end{center} \end{figure} Figure \ref{fig:site015} shows the field dependence of various physical quantities in the uniform ($J'/J=1$), weakly breathing ($J'/J=0.6$), and strongly breathing ($J'/J=0.2$) cases for $b=b'=0.15$. As is readily seen from the upper panels of Figs. \ref{fig:site015} (a), (b), and (c), the $\frac{1}{2}$ plateau shows up in the magnetization curve and is relatively robust against the breathing bond-alternation, i.e., the change in $J'/J$. This $\frac{1}{2}$-plateau phase is the P2 phase, and the low-field (high-field) phase below (above) the $\frac{1}{2}$ plateau is the P1 (P3 or P3') phase. As one can see from the lower panels of Figs. \ref{fig:site015} (a) and (b), the P1 and P3 phases are characterized by nonzero values of $P_{\perp 2}$ and $P_{\perp 1}$, respectively.
These features of the low-field (P1), middle-field (P2), and high-field (P3) phases are quite similar to those of the nematic, spin-liquid-plateau, and vector-multipole phases in the bond-phonon model [see Fig. \ref{fig:bond} (a)], but in the present site-phonon model, magnetic LRO's are realized in the P1, P2, and P3 phases, as is indicated by Bragg peaks in the spin structure factors $F_{S\perp}({\bf q})$ and $F_{S\parallel}({\bf q})$ shown in Fig. \ref{fig:site015} (e). In the low-field P1 phase, $F_{S\perp}({\bf q})$ for the $S^xS^y$ spin component perpendicular to the field exhibits Bragg peaks at $\pm(1,1,0)$, while $F_{S\parallel}({\bf q})$ for the $S^z$ spin component parallel to the field only exhibits the trivial $(0,0,0)$ and $(1,1,1)$ peaks stemming from the uniform magnetization [see the left panels of Fig. \ref{fig:site015} (e)]. The P1 phase, characterized by the $(1,1,0)$-type Bragg reflections, is tetragonal-symmetric in the sense that among the three equivalent points $(1,1,0)$, $(1,0,1)$, and $(0,1,1)$, only one is selected. The real-space spin configuration of the P1 phase is shown in Fig. \ref{fig:snap_site015J06} (a). As one can see from the periodic pattern of the yellow and red arrows in Fig. \ref{fig:snap_site015J06} (a), the perpendicular spin components constitute $\uparrow\downarrow\uparrow\downarrow$ chains along two opposite bonds of a tetrahedron and $\uparrow\uparrow\downarrow\downarrow$ chains along the remaining four bonds. All the spins are canted along the field direction, so that every tetrahedron takes the cant 2:2 spin configuration. In the middle-field P2 phase characterized by the $\frac{1}{2}$-magnetization plateau, as one can see from the real-space spin configuration shown in Fig. \ref{fig:snap_site015J06} (b), the parallel spin components constitute $\uparrow\uparrow\uparrow\downarrow$ chains along all six tetrahedral bonds, keeping the 3-up and 1-down configuration on each tetrahedron.
This $\uparrow\uparrow\uparrow\downarrow$ chain structure is reflected in $F_{S\parallel}({\bf q})$ as magnetic Bragg peaks at $(1,1,0)$ [see the center upper panel of Fig. \ref{fig:site015} (e)] and at the other cubic-symmetric points $(0,1,1)$ and $(1,0,1)$. Thus, the spin state is cubic-symmetric. In the P2 phase, all the spins are collinearly aligned along the field direction, and the $S^xS^y$ components perpendicular to the field exhibit only a short-range correlation. Indeed, no Bragg reflections are found in the $S^xS^y$-component spin structure factor $F_{S\perp}({\bf q})$ shown in the center lower panel of Fig. \ref{fig:site015} (e). The high-field P3 phase is a canted state of the P2 phase. As one can see from the real-space configuration shown in Fig. \ref{fig:snap_site015J06} (c), the $\uparrow\uparrow\uparrow\downarrow$ chains are canted from the field direction such that the cant directions of the up and down spins are antiparallel to each other, keeping the $S^xS^y$ components collinear. As a result, not only the $S^z$ component but also the $S^xS^y$ one forms $\uparrow\uparrow\uparrow\downarrow$ chains running along all six tetrahedral bonds. Reflecting this ordering pattern, the spin structure factor shown in the right panels of Fig. \ref{fig:site015} (e) has the $(1,1,0)$-type Bragg peaks not only in the $S^z$ sector [$F_{S\parallel}({\bf q})$] but also in the $S^xS^y$ sector [$F_{S\perp}({\bf q})$]. In contrast to the P2 phase, the $S^xS^y$ components exhibit LRO in the P3 phase. Comparing the lower left and right panels of Fig. \ref{fig:site015} (e), one notices that the $F_{S\perp}({\bf q})$'s of the P1 and P3 phases look quite similar to each other, but in the P3 phase, the Bragg peaks show up at the wave vectors $(1,0,1)$ and $(0,1,1)$ as well as $(1,1,0)$. Thus, the P3 phase is cubic-symmetric.
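The connection between the $\uparrow\uparrow\uparrow\downarrow$ chain pattern and the Bragg-peak positions can be illustrated with a minimal one-dimensional sketch (our illustrative Python snippet, not part of the original analysis): a period-4 up-up-up-down Ising chain produces an extensive structure-factor peak at the commensurate wave vector, the 1D analogue of the $(1,1,0)$-type reflections discussed above.

```python
import numpy as np

# Illustrative 1D analogue (not the paper's code) of the Bragg peaks produced
# by the up-up-up-down chains: a period-4 Ising chain gives an extensive
# structure-factor peak at the commensurate wave vector q = pi/2, while the
# intensity at a generic (incommensurate) q stays of order one.

def structure_factor(spins, q):
    n = len(spins)
    phases = np.exp(1j * q * np.arange(n))
    return np.abs(np.sum(spins * phases)) ** 2 / n

n = 64                                           # a multiple of the period 4
chain = np.tile([1.0, 1.0, 1.0, -1.0], n // 4)   # up-up-up-down pattern

f_bragg = structure_factor(chain, np.pi / 2)     # commensurate wave vector
f_generic = structure_factor(chain, 1.0)         # generic wave vector
print(f_bragg, f_generic)                        # extensive vs O(1) intensity
```

Here $F(q)=|\sum_j S_j e^{iqj}|^2/N$ plays the role of the structure factors above: the Bragg intensity grows linearly with the chain length, while the intensity at a generic wave vector remains of order one.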
Although the P3 phase is robust against the weak breathing bond alternation of $J'/J=0.6$, it is modified into the P3' phase in the strongly breathing case of $J'/J=0.2$. \begin{figure}[t] \begin{center} \includegraphics[scale=0.55]{Snap_SP020J06.eps} \caption{MC results obtained in the higher-field SP1 phase at $H/J=6.0$ for the site-phonon model with $b=b'=0.20$ and $J'/J=0.6$. (a) Spin snapshot at $T/J=0.001$ and (b) the associated small-tetrahedron distribution within the cubic unit cell. (c) Spin structure factors $F_{S \parallel}({\bf q})$ (upper panels) and $F_{S\perp}({\bf q})$ (lower ones) in the $(h,k,0)$ (left), $(h,0,l)$ (center), and $(0,k,l)$ (right) planes obtained at $T/J=0.002$ for $L=8$. In $F_{S \parallel}({\bf q})$, the high-intensity trivial peak at ${\bf q}=0$ indicated by the cross has been removed. \label{fig:snap_site020J06}} \end{center} \end{figure} \subsection{Ordered states favored on the breathing pyrochlore lattice} Among the various magnetic LRO's induced by the breathing lattice structure, we first discuss the P3' phase. As shown in Figs. \ref{fig:site015} (c) and (d), the ordering properties of the high-field P3' phase are similar to those of the P3 phase: the $(1,1,0)$-type magnetic Bragg peaks can be found in both $F_{S\parallel}({\bf q})$ and $F_{S\perp}({\bf q})$, and the in-plane collinearity $P_{\perp 1}$ develops on entering the P3' phase from the P2 phase. When we take a closer look at the system-size dependence of $P_{\perp 1}$, however, $P_{\perp 1}$ in the P3' phase is suppressed with increasing system size $L$, in sharp contrast to the corresponding behavior in the P3 phase [see the lower panels of Figs. \ref{fig:site015} (a) and (b)]. We note that in the MC simulation, even when we take the spin configuration of the P3 phase as the initial state, the system eventually evolves into a slightly disturbed state, and $P_{\perp 1}$ is suppressed to a value smaller than those in the P1 and P3 phases.
Furthermore, the energy of the system gradually becomes lower than that of the P3 phase with increasing system size $L$, although the Bragg peak is still located at $(1,1,0)$ up to the largest size of $L=12$. Thus, there is a possibility that in the P3' phase, an incommensurate order whose wave vector is close to $(1,1,0)$ might be realized in the thermodynamic limit of $L\rightarrow\infty$. Identifying the spin structure of the P3' phase requires a larger-$L$ analysis, which we leave for future work. The above P1, P2, P3, and P3' phases are realized in all three cases of $b=b'=0.10$, 0.15, and 0.20 (see Fig. \ref{fig:HT_siteall}). For the relatively strong SLC of $b=b'=0.20$, additional phases, the SP1, SP2, SP2', SP3, and SP4 phases, are induced by the breathing bond alternation. Below, we discuss these phases, starting with the SP1 phase occurring just below the saturation field in the weakly breathing case of $J'/J=0.6$ [see the center panel of Fig. \ref{fig:HT_siteall} (c)]. Figure \ref{fig:snap_site020J06} shows the spin structure of the SP1 phase. One can see from the real-space spin configuration shown in Fig. \ref{fig:snap_site020J06} (a) that the SP1 phase is a 16-sublattice state, similar to the P1, P2, and P3 phases. In contrast to the P1, P2, and P3 phases, where all the tetrahedra are equivalent to one another, taking the same spin configuration on each tetrahedron, the SP1 phase consists of two different types of small tetrahedra, the 4-up and the cant 2:2 ones. As one can see from Figs. \ref{fig:snap_site020J06} (a) and (b), the distribution of the 4-up and cant 2:2 tetrahedra within the cubic unit cell is tetragonal-symmetric: the 4-up tetrahedron pair and the cant 2:2 tetrahedron pair are stacked along the $z$ axis, leading to an alternating stack of 4-up and cant 2:2 layers in the whole system.
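The tetrahedron-based bookkeeping used above (4-up, cant 2:2, 3-up and 1-down) amounts to counting, on each tetrahedron, the spins whose field-parallel component is positive. A minimal hypothetical helper (ours, not the paper's code) could look like:

```python
# Hypothetical helper (not from the paper): classify a single tetrahedron by
# the number of spins whose S^z (field-parallel) component is positive --
# the bookkeeping behind the "4-up", "3-up and 1-down", and "cant 2:2" labels.

def classify_tetrahedron(sz_components):
    """sz_components: the four S^z values of one tetrahedron."""
    n_up = sum(1 for sz in sz_components if sz > 0)
    labels = {4: "4-up", 3: "3-up-1-down", 2: "cant 2:2",
              1: "1-up-3-down", 0: "4-down"}
    return labels[n_up]

# The SP1 phase alternates layers of 4-up and cant 2:2 tetrahedra:
print(classify_tetrahedron([0.9, 0.8, 0.95, 0.85]))   # 4-up
print(classify_tetrahedron([0.5, 0.5, -0.5, -0.5]))   # cant 2:2
print(classify_tetrahedron([0.9, 0.9, 0.9, -0.7]))    # 3-up-1-down
```

Running such a classifier over every small tetrahedron of an MC snapshot directly yields distributions like the one in Fig. \ref{fig:snap_site020J06} (b).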
The tetragonal symmetry of this kind can clearly be seen in the difference among the spin structure factors in the $(h,k,0)$, $(h,0,l)$, and $(0,k,l)$ planes. As readily seen from Fig. \ref{fig:snap_site020J06} (c), among the cubic-symmetric family of $(1,0,0)$, $(0,1,0)$, and $(0,0,1)$, only $(0,0,1)$ is picked up in $F_{S\perp}({\bf q})$, and the same holds for the family of $(0,1,1)$, $(1,0,1)$, and $(1,1,0)$. We note that although in the case of Fig. \ref{fig:snap_site020J06}, the $z$ direction is special, any of the $x$, $y$, and $z$ directions can be the tetragonal axis, i.e., the stacking direction of the alternating 4-up and cant 2:2 layers. Indeed, the stacking direction differs from run to run in the MC simulations. \begin{figure}[t] \begin{center} \includegraphics[scale=0.52]{SP020J02.eps} \caption{ (a) The field dependence of $m$ (reddish symbols), $P_{\perp 1}$ (greenish ones), and $P_{\perp2}$ (bluish ones) at $T/J=0.052$ for the site-phonon model with $b=b'=0.20$ in the strongly breathing case of $J'/J=0.2$. The SP2 phase is realized in the field range of $1.5 \leq H/J \leq 1.9$. (b) Spin structure factor obtained at $H/J=1.8$ and $T/J=0.05$ for $L=12$, where the notations are the same as those of Fig. \ref{fig:snap_site020J06} (c). \label{fig:site020J02H0180}} \end{center} \end{figure} In the strongly breathing case of $J'/J=0.2$, as shown in the right panel of Fig. \ref{fig:HT_siteall} (c), the SP2 and SP2' (SP3 and SP4) phases appear just below the $\frac{1}{2}$ plateau (the saturation field). Figure \ref{fig:site020J02H0180} (a) shows the field dependence of the magnetization $m$ and the $S^xS^y$-plane collinearities $P_{\perp 1}$ and $P_{\perp 2}$, where the SP2 phase is stabilized in the field range of $1.5 \leq H/J \leq 1.9$. As one can see from the two-step jump in the magnetization curve, the SP2 phase is separated by first-order transitions from the low-field P1 and middle-field P2 phases.
Figure \ref{fig:site020J02H0180} (b) shows the spin structure factor in the SP2 phase. $F_{S\parallel}({\bf q})$ exhibits Bragg peaks at $\pm(\frac{2}{3},\frac{2}{3},0)$ and $\pm(\frac{4}{3},\frac{4}{3},0)$ but not at the other cubic-symmetric points, so the state should be tetragonal. The $S^xS^y$ components of the spins, on the other hand, are not long-range ordered, as suggested by the absence of Bragg peaks in $F_{S\perp}({\bf q})$ [see the lower panels of Fig. \ref{fig:site020J02H0180} (b)]. Together with the result that both $P_{\perp 1}$ and $P_{\perp 2}$ vanish in the SP2 phase [see Fig. \ref{fig:site020J02H0180} (a)], this shows that the $S^xS^y$ components of the spins remain disordered. We note that in the lower-temperature SP2' phase, the perpendicular components of the spins are ordered into a state characterized by a nonzero value of $P_{\perp 2}$, and associated $(\frac{2}{3},\frac{2}{3},0)$-type Bragg peaks develop in $F_{S\perp}({\bf q})$, with $F_{S\parallel}({\bf q})$ remaining almost unchanged from that in the SP2 phase. The real-space structure of the SP2 phase is very complicated; its periodic pattern involves as many as $16\times6^3$ spins. At the tetrahedron level, similarly to the SP1 phase, the SP2 phase is composed of two different kinds of small tetrahedra, the cant 2:2 ones and the 3-up and 1-down ones, reflecting the fact that this phase is intercalated between the P1 and P2 phases, which consist of cant 2:2 and 3-up and 1-down tetrahedra, respectively. In the SP2 phase, these two elements, i.e., the cant 2:2 tetrahedra and the 3-up and 1-down ones, are arranged periodically over the whole system, forming a tetragonal-symmetric pattern (for details, see Fig. \ref{fig:snap_site020J02H0180} in Appendix C). The SP3 and SP4 phases appearing just below the saturation field are also tetrahedron-based orders. In these phases, the fundamental element is the 4-up tetrahedron.
The SP3 and SP4 phases are characterized by different ordering patterns of the 4-up tetrahedra, both of which are noncubic (see Fig. \ref{fig:snap_site020J02H0350-0370} in Appendix C). Although the real-space structures of the SP2, SP2', SP3, and SP4 phases are rather complicated, the existence of nonequivalent tetrahedra is a feature common to these phases and the SP1 phase, all of which are induced by the breathing bond alternation. By contrast, in the P1, P2, and P3 phases appearing on both the uniform and breathing pyrochlore lattices, all the tetrahedra are equivalent, having the same spin configuration. The key role of the breathing lattice structure is the introduction of nonequivalent tetrahedra into the system, leading to the tetrahedron-based magnetic orders. \section{Summary and Discussion} In this paper, we have theoretically investigated the in-field properties of the spin-lattice-coupled ordering in breathing-pyrochlore antiferromagnets, based on simplified models taking into account the effect of local lattice distortions, the so-called bond-phonon and site-phonon models. It is found by means of MC simulations that the site-phonon model exhibits a rich variety of magnetic LRO's, some of which are unique to the breathing system, while the bond-phonon model does not show any magnetic (dipolar) LRO regardless of whether the system is breathing or not. In the site-phonon model with the small SLC parameters of $0.10 \leq b=b' \leq 0.20$, the low-field, middle-field, and high-field phases appearing on the {\it uniform} pyrochlore lattice, which are, respectively, named the P1, P2, and P3 phases in Fig. \ref{fig:HT_siteall}, are also realized on the breathing pyrochlore lattice. Note that the magnetization curve shows the $\frac{1}{2}$ plateau in the middle-field P2 phase.
In addition to these three phases, as a result of the combined effect of the breathing bond alternation, which introduces nonequivalent tetrahedra into the system, and the inter-tetrahedron interactions characteristic of the site-phonon model, new tetrahedron-based LRO's are induced just below the $\frac{1}{2}$ plateau and the saturation field (the SP1, SP2, SP2', SP3, and SP4 phases in Fig. \ref{fig:HT_siteall}). It is also found from the ground-state analysis that in the much weaker SLC regime, where the $\frac{1}{2}$ plateau is quite narrow, an intermediate phase appears between the P1 and P2 phases. This intermediate phase is the P4 or P5 phase, whose local spin configurations on each tetrahedron are the ``cant 2:1:1'' and ``1-up, 1-down and V'' ones, respectively (see Fig. \ref{fig:GS} together with Fig. \ref{fig:cant211U} in Appendix B). In deriving the site-phonon effective spin Hamiltonian, the following approximations have been made: the exchange interaction is assumed to depend only on the distance between the two spins, although the situation in real materials would be more complicated, and it is expanded to first order in the site displacement, neglecting higher-order contributions [see Eqs. (\ref{eq:original_H}) and (\ref{eq:expansion})]. Also, the elastic energy of the lattice is assumed to be local, depending only on each site, although in reality neighboring displacements should be correlated in the form of dispersive phonon modes. Due to the assumed local nature of the phonons, lattice-related physics such as net lattice distortions, volume changes, and phonon dispersions cannot be described within the present model. Nevertheless, as will be discussed below, the site-phonon model captures the essential features of the ``spin'' ordering physics of the spin-lattice-coupled phenomena in the chromium oxides, not only at zero field but also at finite fields.
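Schematically, denoting by ${\bf u}_i$ the displacement of site $i$ and by $\hat{\bf e}_{ij}$ the unit vector along the undistorted bond $ij$, a first-order expansion of this type takes the generic form
\begin{equation*}
J\big(|{\bf r}_{ij}+{\bf u}_i-{\bf u}_j|\big) \simeq J(r_{ij}) + \left.\frac{dJ}{dr}\right|_{r_{ij}} \hat{\bf e}_{ij}\cdot({\bf u}_i-{\bf u}_j),
\end{equation*}
where the notation here is schematic and illustrative only; the precise definitions are those of Eq. (\ref{eq:expansion}).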
In experiments on the {\it uniform} pyrochlore antiferromagnets $A$Cr$_2$O$_4$ ($A$=Hg, Cd, Zn, Mg), the $\frac{1}{2}$ plateau has commonly been observed, and the spin structure suggested by the neutron diffraction patterns for HgCr$_2$O$_4$ and CdCr$_2$O$_4$ \cite{HgCrO_Matsuda_07, CdCrO_Matsuda_prl_10} is the same as that of the above P2 phase, i.e., the state consisting of $\uparrow\uparrow\uparrow\downarrow$ chains running along all the tetrahedral bonds. We note that although the P2 phase was already predicted by Bergman {\it et al.} \cite{Site_Bergman_06}, its finite-temperature properties are clarified in the present paper. In addition to the P2 phase, the existence of the P4 or P5 phase just below the $\frac{1}{2}$ plateau is consistent with the observation of an intermediate phase in ZnCr$_2$O$_4$ \cite{ZnCrO_Miyata_jpsj_11,ZnCrO_Miyata_prl_11,ZnCrO_Miyata_jpsj_12} and MgCr$_2$O$_4$ \cite{MgCrO_Miyata_jpsj_14}. The observed intermediate phase has been interpreted as the cant 2:1:1 state based on the bond-phonon picture, but our result suggests that the P5 phase shown in Fig. \ref{fig:cant211U} (b) is also a possible candidate for this phase. At zero field, on the other hand, the P1 phase does not seem to have been reported in $A$Cr$_2$O$_4$ \cite{CdCrO_Chung_05,ZnCrO_Lee_08,HgCrO_Matsuda_07,MgCrO_Ortega_08}, where the spin-ordering patterns vary from material to material, indicating that effects beyond the present simplified model, such as higher-order contributions in Eq. (\ref{eq:expansion}), dispersive phonon modes, and other interactions, should also be relevant. Indeed, an additional DM interaction slightly modifies the P1 phase, and the modified state is consistent with the N\'eel state of CdCr$_2$O$_4$ \cite{SLC_Chern_06}. These results suggest that the zero-field and in-field properties of the uniform pyrochlore antiferromagnets $A$Cr$_2$O$_4$ can basically be described by the site-phonon model in spite of the simplified treatment of the lattice distortions.
In the {\it breathing} pyrochlore antiferromagnets Li(Ga, In)Cr$_4$O$_8$, the P1 phase (the tetragonal-symmetric spin structure consisting of the 2-up and 2-down tetrahedra) is realized at zero field \cite{BrPyro_Nilsen_15,BrPyro_Saha_16,Site_AK_19}, so the site-phonon model could also be applied to this class of magnets. Accordingly, the $\frac{1}{2}$ plateau observed in LiInCr$_4$O$_8$ \cite{BrPyro_Hdep_Okamoto_17,BrPyro_Hdep_Gen_19} would point to the realization of the P2 phase (the cubic-symmetric spin structure consisting of the 3-up and 1-down tetrahedra). Notably, the magnetization jump just below the $\frac{1}{2}$ plateau in LiInCr$_4$O$_8$ is drastic and amounts to $\sim$0.75~$\mu_{\rm B}$ \cite{BrPyro_Hdep_Gen_19}, which is comparable to that in HgCr$_2$O$_4$, the member of the {\it A}Cr$_2$O$_4$ family with the strongest SLC \cite{HgCrO_Kimura_jpsj_14, CdCrO_Kimura_jpsj_15, ZnCrO_Miyata_jpsj_11, MgCrO_Miyata_jpsj_14}. This indicates that LiInCr$_4$O$_8$ possesses a relatively strong SLC. In the present theoretical work, we have demonstrated that in the site-phonon model with the relatively strong SLC of $b=b'=0.20$, an intermediate phase (the SP2 or SP2' phase) appears between the P1 and P2 phases in the strongly breathing case of $J'/J=0.2$, showing a two-step magnetization jump [see Fig. \ref{fig:site020J02H0180} (a)]. However, such a feature is not observed in LiInCr$_4$O$_8$, indicating that $b$ and/or $b'$ may be smaller than 0.20 in this compound. On the other hand, according to recent first-principles calculations \cite{FirstPrinciple_Ghosh_npj_19}, the breathing parameter $J'/J$ in LiInCr$_4$O$_8$, which was at first estimated to be $\sim 0.1$ based on the empirical relationship between the NN Cr-Cr bond length and the exchange coupling \cite{BrPyro_Okamoto_13}, is highly temperature dependent and becomes close to 1 at 20~K, slightly above the transition temperature \cite{FirstPrinciple_Ghosh_npj_19}.
Thus, at present, it is not clear whether the absence of the intermediate phase in LiInCr$_4$O$_8$ is due to the SLC parameters $b$ and $b'$ or the breathing parameter $J'/J$. Concerning the Ga compound, both the empirical and the first-principles calculations show that the ratio between $J$ and $J'$ is $\sim 0.6$, but the latter predicts that $J$ is smaller than $J'$, in contrast to the naive expectation \cite{FirstPrinciple_Ghosh_npj_19}. We believe that future high-field measurements on LiGaCr$_4$O$_8$ should provide fundamental information on the system parameters. In this work, we have assumed $b=b'$ for simplicity, but different values of the SLC parameters, $b \neq b'$, may lead to various types of tetrahedron-based LRO's other than the SP1, SP2, SP2', SP3, and SP4 phases, depending on the value of $J'/J$. Although it might not be easy to determine where a complicated real material is placed in the parameter space of the site-phonon model, this work, presenting results for the simplified case of $b=b'$, should aid the understanding of the spin-lattice-coupled orderings in the breathing pyrochlore antiferromagnets. Finally, we would like to comment on the validity of the site-phonon model in other related systems. As pointed out in Ref.~\cite{BrPyro_Hdep_Gen_20}, the present model Eq.~(\ref{eq:Hamiltonian_SP}) on the breathing pyrochlore lattice can be extended to the case of antiferromagnetic $J>0$ and ferromagnetic $J'<0$. In this case, the cant 2:1:1 phase appears in a relatively wide parameter region, which seems to be relevant to the gradual magnetization increase prior to the $\frac{1}{2}$ plateau observed in CuInCr$_4$S$_8$ \cite{BrPyro_Sulfides_Okamoto_18, BrPyro_Hdep_Gen_20}. Furthermore, the site-phonon model on the {\it triangular} lattice can reproduce the zero-field zigzag ground state and the in-field $\frac{1}{5}$-magnetization plateau observed in the multiferroic compound CuFeO$_2$ \cite{Site_Wang_08}.
Hence, it would be intriguing to apply the site-phonon model to other frustrated spin systems in search of novel spin-lattice-coupled phenomena. \begin{acknowledgments} The authors thank R. Osamura for useful discussions. We are grateful to the ISSP, the University of Tokyo, and the YITP, Kyoto University, for providing us with CPU time. This work is supported by JSPS KAKENHI Grant Numbers JP16K17748, JP17H06137, and JP20J10988. \end{acknowledgments}
# Basic Set Theory: Determining Relations: Reflexive, Symmetric, Transitive

(Physics Forums homework thread, physicsforums.com)

I am taking a philosophy course that covers basic set theory as part of the introduction. I'm not sure in which section of the forum set theory should be, but I think this is the right place.

## Homework Statement

For each of the following relations, indicate whether it is Reflexive, Nonreflexive, Irreflexive, Symmetric, Nonsymmetric, Asymmetric, Antisymmetric, Transitive, Nontransitive, or Intransitive.

9) {(b,d), (a,c), (d,c), (e,e), (b,c)} on the set {a,b,c,d,e}.

## The Attempt at a Solution

I believe the relation is nonreflexive, nonsymmetric, and transitive.

I do not know whether it is asymmetric or antisymmetric, because I do not know how to deal with (e,e).

**HallsofIvy** (Homework Helper): You are correct that this relation is not symmetric, because it contains (a, c) but not (c, a). It is not reflexive because it does not contain (a, a), (b, b), (c, c), and (d, d). It is transitive because the only pairs of the form "(x, y), (y, z)" are (b, d) and (d, c), and it does contain (b, c). What is the difference between "asymmetric" and "antisymmetric"?

**Reply:**

> What is the difference between "asymmetric" and "antisymmetric"?

Asymmetric: $xRy \Rightarrow \neg (yRx)$

Antisymmetric: $xRy \wedge yRx \Rightarrow x = y$
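The properties discussed in the thread can also be checked mechanically. The following snippet (our illustration, not part of the original thread) tests the definitions directly on relation 9:

```python
# Mechanical check of the properties discussed above for the relation in
# problem 9 (illustrative code, not part of the original thread).

def is_reflexive(r, s):   return all((x, x) in r for x in s)
def is_irreflexive(r, s): return all((x, x) not in r for x in s)
def is_symmetric(r):      return all((y, x) in r for (x, y) in r)
def is_asymmetric(r):     return all((y, x) not in r for (x, y) in r)
def is_antisymmetric(r):  return all(x == y for (x, y) in r if (y, x) in r)
def is_transitive(r):
    return all((x, w) in r for (x, y) in r for (z, w) in r if y == z)

S = {"a", "b", "c", "d", "e"}
R = {("b", "d"), ("a", "c"), ("d", "c"), ("e", "e"), ("b", "c")}

print(is_reflexive(R, S), is_symmetric(R), is_transitive(R))        # False False True
print(is_asymmetric(R), is_antisymmetric(R), is_irreflexive(R, S))  # False True False
```

Running it confirms the conclusions above: the relation is nonreflexive (but not irreflexive, because of (e,e)), nonsymmetric, antisymmetric (the only pair present together with its reverse is (e,e)), not asymmetric, and transitive.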
\section{Introduction}\label{sec:intro} In the \emph{graph matching problem} we are given as input two graphs $G$ and $H$ with an equal number of vertices, and the objective is to find a bijective function, or \emph{matching}, between the vertices of $G$ and $H$ such that the alignment between the edges of $G$ and $H$ is maximized. This problem appears in many applications such as computer vision \citep{sunfei}, network de-anonymization \citep{nay}, pattern recognition \citep{conte, streib}, protein-protein interactions and computational biology \citep{zasla,singh}. In computer vision, for example, it is used as a method of comparing two objects (or images) encoded as graph structures, or to identify the correspondence between the points of two discretized images of the same object at different times. In network de-anonymization, the goal is to learn information about an anonymized (unlabeled) graph using a related labeled graph as a reference, exploiting their structural similarities. In \citep{naya2}, for example, the authors show that it was possible to effectively de-anonymize the Netflix database using the IMDb (Internet Movie Database) as the ``reference'' network. While the graph matching problem is well defined for any pair of graphs (weighted or unweighted), it is intractable in the worst case, since it can be framed as an instance of the NP-hard \emph{quadratic assignment problem} (QAP) \citep{QAPhard2014}. It also contains the ubiquitous \emph{graph isomorphism problem} (of unknown complexity) as a special case. However, in the average-case situation, many polynomial-time algorithms have recently been shown to recover, either perfectly or partially, the ground-truth vertex matching with high probability. It is thus customary to assume that the observed graphs $G,H$ are generated by a model for correlated random graphs, where the problem can be efficiently solved.
The two most popular models are the correlated Erdös-Renyi model \citep{PedGloss}, where two graphs are independently sampled from an Erdös-Renyi mother graph, and the correlated Wigner model \citep{deg_prof,Grampa}, which considers $G,H$ to be complete weighted graphs with independent Gaussian entries on each edge; see Section \ref{sec:RGmodels} for a precise description. Recently, other models of correlation have been proposed for random graphs with a latent geometric structure \citep{geo_1,geo_2}, community structure \citep{AniRac}, and power-law degree profile \citep{Powerlaw}. \paragraph{Seeded graph matching.} The statistical analysis of the graph matching problem has mainly focused on two different versions of this problem, depending on whether side information is available or not. In the \emph{seeded} version of this problem, side information is provided (together with the two graphs $G$ and $H$) in the form of a seed, which is a bijective map from the vertices of $G$ to the vertices of $H$. The quality of the seed can be measured by its overlap with the ground-truth matching. This definition of a seed is more general than what is often considered in the literature \citep{MosselXu}, including the notion of a partially correct (or noisy) seed \citep{LubSri,YuXuLin}. The seeded version of the problem is motivated by the fact that in many applications, a set of correctly matched vertices is usually available, either as prior information or because it can be constructed by hand or via an algorithm. Several algorithms, based on different techniques, have been proposed for seeded graph matching. In \citep{PedGloss,YarGross}, the authors use a percolation-based method to ``grow'' the seed in order to recover (at least partially) the ground-truth matching.
Other algorithms \citep{LubSri,YuXuLin} construct a similarity matrix between the vertices of both graphs and then solve the maximum linear assignment problem (either optimally or by a greedy approach) using the similarity matrix as the cost matrix. The latter strategy has also been successfully applied in the case described below, when no side information is provided. \paragraph{Seedless graph matching.} In the \emph{seedless} version, the only information available is the pair of graphs to be matched, and therefore only structural information can be used to produce an estimate of the ground-truth matching. One thoroughly studied family of seedless algorithms comprises those based on a spectral approach, ranging from the celebrated result of \citep{Spectral_weighted_Ume} to the more recent contributions in \citep{spec_align,Grampa,ganMass}. Other algorithms have been proposed using different techniques. For example, in \citep{deg_prof} a degree-based signature is constructed for each vertex of $G$ and $H$ separately, and these signatures are then used to produce a vertex matching. Other methods based on convex relaxations \citep{bach}, random walks \citep{isorank_1,isorank_2}, and non-convex methods \citep{YuYan,XuLuo} have also been proposed. Most of those methods require either a strong correlation between $G$ and $H$ or a superpolynomial running time \citep{barak}. There are some exceptions; for example, in \citep{ganMass} the sparse Erdös-Renyi model is considered and a partially correct matching is output when the two graphs differ in at most a constant fraction of edges. To the best of our knowledge, the algorithm with the strongest theoretical guarantees is the one in \citep{MaoRud}, which assumes that the observed graphs are (relatively) sparse Erdös-Renyi graphs.
It works in a two-step process: a first algorithm takes as input the two graphs to be matched and outputs a matching for which only $n^{1-c}$ vertices are incorrectly assigned, where $n$ is the number of vertices in each graph and $c$ is a small positive constant. A second algorithm is then used to refine the solution of the first algorithm and obtain an exact matching. In this paper we analyse the performance of the \emph{projected power method} (PPM) for the seeded graph matching problem in the context of the correlated Wigner model. This family of iterative algorithms has recently been successfully applied to several problems in machine learning and statistics \citep{chen2016_alignment,boumal2016,Wang2021OptimalNE}. We prove that PPM can exactly recover the ground-truth permutation provided that a sufficiently good initial permutation is supplied. Our analysis extends the analysis of the refinement algorithm \cite[Alg.4]{MaoRud} to the case of (dense) Wigner graphs and represents, to the best of our knowledge, the first analysis of PPM in the dense regime. The main technical difficulty in proving the convergence of PPM lies in showing that each step of the algorithm is a contraction, which requires establishing a uniform bound for the error in a neighborhood of the ground truth. As a byproduct of our analysis, we see that PPM provides a general framework which generalizes some of the state-of-the-art algorithms in the seeded case, such as \citep[Alg.1]{YuXuLin}, \citep[Alg.2]{LubSri} and \citep[Alg.4]{MaoRud}. \paragraph{Contributions.} The main contributions of this paper can be summarized as follows. \begin{itemize} \item We provide (see Theorems \ref{prop:one_it_conv}, \ref{prop:partial_rec}) exact and partial recovery guarantees under the Gaussian Wigner model when the PPM is initialized with a given data-independent seed and only one iteration of the PPM algorithm is performed.
For this result to hold, it suffices that the overlap of the seed with the ground-truth permutation is $\Omega(\sqrt{n \log n})$. This matches the best-known bound for the sparse Erdös-Renyi case \citep{YuXuLin}, for which an overlap of $\Omega(\sqrt{n\log n})$ is required to obtain exact recovery. \item We prove (see Theorem \ref{thm:unif_rec_ppm}) that when multiple iterations are allowed, PPM converges to the ground-truth matching in $\mathcal{O}(\log n)$ iterations provided that it is initialized with a seed with overlap $\Omega\big((1-\kappa)n\big)$, for a sufficiently small constant $\kappa$, even if the initialization is data-dependent or adversarial. This extends the results in \citep{MaoRud} from the sparse Erd\"os-Renyi setting to the dense Wigner case. \item We complement our theoretical results with experiments on synthetic data showing that PPM can significantly improve the accuracy of the matching (for the correlated Wigner model) compared to that obtained by a standalone application of existing seedless methods. \end{itemize} \subsection{Notation}\label{sec:notation} We denote by $\ensuremath{\mathcal{P}}_n$ the set of permutation matrices of size $n\times n$ and by $\ensuremath{\mathcal{S}}_n$ the set of permutation maps on the set $[n]=\{1,\cdots,n\}$. To each element $X \in \ensuremath{\mathcal{P}}_n$ (we reserve capital letters), there corresponds one and only one element $x\in \ensuremath{\mathcal{S}}_n$ (we use lowercase letters). We denote by $\operatorname{Id}$ (resp. $\operatorname{id}$) the identity matrix (resp. identity permutation), where the size will be clear from context. For $X\in \ensuremath{\mathcal{P}}_n$ ($x\in\ensuremath{\mathcal{S}}_n$), we define $S_X=\{i\in[n]:X_{ii}=1\}$ to be the set of fixed points of $X$, and $s_x=|S_X|/n$ its fraction of fixed points. The symbols $\langle \cdot,\cdot\rangle_F$ and $\|\cdot\|_F$ denote the Frobenius inner product and matrix norm, respectively.
For any matrix $X\in\mathbb{R}^{n\times n}$, let $[X]\in \mathbb{R}^{n^2}$ denote its vectorization, obtained by stacking its columns one on top of another. For two random variables $X,Y$ we write $X\stackrel{d}{=}Y$ when they are equal in law. For a matrix $A\in \mathbb{R}^{n\times n}$, $A_{i:}$ (resp. $A_{:i}$) will denote its $i$-th row (resp. column). \subsection{Mathematical description} Let $A,B$ be the adjacency matrices of the graphs $G,H$, each with $n$ vertices. In the graph matching problem, the goal is to find the solution of the following optimization problem \begin{equation}\label{form:1} \max_{x\in \mathcal{S}_n}\sum_{i,j}A_{ij}B_{x(i)x(j)} \enskip\tag{P1} \end{equation} which is equivalent to solving \begin{equation}\label{form:1'} \max_{X\in \mathcal{P}_n}\langle A,X BX^T\rangle_F. \tag{P1'} \end{equation} Observe that \eqref{form:1} is a well-defined problem not only for adjacency matrices but for any pair of matrices of the same size. In particular, it is well defined when $A,B$ are adjacency matrices of weighted graphs, which is the main setting of this paper. Moreover, this is an instance of the well-known \emph{quadratic assignment problem}, a combinatorial optimization problem known to be NP-hard in the worst case \citep{QAP}. Another equivalent formulation of \eqref{form:1} is given by the following ``lifted'' (or vector) version of the problem \begin{equation} \label{form:1''} \max_{[X]\in [\mathcal{P}_n]}[X]^TB\otimes A[X]\tag{P1''} \end{equation} where $[\mathcal{P}_n]$ is the set of permutation matrices in vector form. This formulation has already been considered in the literature, notably in the family of spectral methods \citep{Villar,spec_align}.
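The equivalence of \eqref{form:1} and \eqref{form:1'} is easy to check numerically (an illustrative Python sketch of ours, not code from the paper, with the convention $X_{i,x(i)}=1$):

```python
import numpy as np

# Numerical sanity check (illustrative, not from the paper): the objective of
# (P1), sum_ij A_ij B_{x(i)x(j)}, coincides with the matrix form <A, X B X^T>_F
# of (P1') under the convention X[i, x(i)] = 1, since then
# (X B X^T)_{ij} = B_{x(i), x(j)}.

rng = np.random.default_rng(0)
n = 6
A = rng.normal(size=(n, n))
B = rng.normal(size=(n, n))

x = rng.permutation(n)          # a permutation map x
X = np.zeros((n, n))
X[np.arange(n), x] = 1.0        # its permutation matrix

obj_p1 = sum(A[i, j] * B[x[i], x[j]] for i in range(n) for j in range(n))
obj_p1_prime = np.trace(A.T @ X @ B @ X.T)   # <A, X B X^T>_F
print(abs(obj_p1 - obj_p1_prime))            # agrees up to round-off
```

The same identity is what allows the lifted form \eqref{form:1''} to be derived via the standard vectorization rule for Kronecker products.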
\subsection{Statistical models for correlated random graphs}\label{sec:RGmodels} Most of the theoretical statistical analysis for the graph matching problem has been performed so far under two random graph models: the \emph{correlated Erd\"os-Renyi} and the \emph{correlated Wigner model}. In these models the dependence between the two graphs $A$ and $B$ is explicitly described by the inclusion of a ``noise'' parameter which captures the degree of correlation between $A$ and $B$. \paragraph{Correlated Wigner model $W(n,\sigma,x^*)$.} The problem \eqref{form:1} is well defined for matrices that are not necessarily $0/1$ graph adjacencies, so a natural extension is to consider two complete weighted graphs. The following Gaussian model has been proposed in \citep{deg_prof} \[A_{ij}\sim\begin{cases}\ensuremath{\mathcal{N}}(0,\frac1n)\text{ if }i\neq j, \\ \ensuremath{\mathcal{N}}(0,\frac2n)\text{ if } i= j, \end{cases}\] and $B_{x^*(i)x^*(j)}=\sqrt{1-\sigma^2}A_{ij}+\sigma Z_{ij}$, where $Z\stackrel{d}{=}A$. Both $A$ and $B$ are distributed as the GOE (Gaussian orthogonal ensemble). Here the parameter $\sigma>0$ should be interpreted as the noise level; in that sense, $B$ can be regarded as a ``noisy perturbation'' of $A$. Moreover, $x^* \in \ensuremath{\mathcal{S}}_n$ is the ground-truth (or latent) permutation that we seek to recover. It is not difficult to verify that the solution of \eqref{form:1} is in fact the maximum likelihood estimator (MLE) of $x^*$ under the correlated Wigner model. \paragraph{Correlated Erd\"os-Renyi $G(n,q,s,x^*)$.} For $q,s\in[0,1]$, the correlated Erdös-Renyi model with latent permutation $x^*\in \ensuremath{\mathcal{S}}_n$ can be described in two steps. \begin{enumerate} \item $A$ is generated according to the Erdös-Renyi model $G(n,q)$.
\item Conditional on $A$, the entries of $B$ are i.i.d.\ according to the law % \begin{equation}\label{eq: ER_def} B_{x^*(i),x^*(j)}\sim\begin{cases} Bern(s)\quad \text{if}\quad A_{ij}=1,\\ Bern\big(\frac{q}{1-q}(1-s)\big)\quad \text{if } A_{ij}=0. \end{cases} \end{equation} \end{enumerate} There is another equivalent description of this model in the literature, where to obtain correlated Erdös-Renyi graphs, we first sample an Erdös-Renyi ``mother'' graph and then define $A,B$ as independent subsamples with a certain density parameter. We refer to \citep{PedGloss} for details. \subsection{Related work} \label{sec:rel_work} \paragraph{Projected power method (PPM).} PPM, which is also often referred to as a \emph{generalized power method} (GPM) in the literature, is a family of iterative algorithms for solving constrained optimization problems. It has been used with success for various tasks including clustering in the SBM \citep{Wang2021OptimalNE}, group synchronization \citep{boumal2016,GaoZhang}, joint alignment from pairwise differences \citep{chen2016_alignment}, low-rank matrix recovery \citep{chi2019} and the generalized orthogonal Procrustes problem \citep{Ling}. It is a useful iterative strategy for solving non-convex optimization problems, and usually requires a good enough initial estimate. In general, we start with an initial candidate satisfying a set of constraints and at each iteration we perform \begin{enumerate} \item a \emph{power step}, which typically consists of multiplying the current candidate by one or more data-dependent matrices, and \item a \emph{projection step}, where the result of the power step is projected onto the set of constraints of the optimization problem. \end{enumerate} These two operations are iteratively repeated and often convergence to the ``ground-truth signal'' can be ensured in $\mathcal{O}(\log n)$ iterations, provided that a reasonably good initialization is available.
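To make the two steps above concrete, the following sketch (written under our own conventions; it is an illustration, not the exact \texttt{PPMGM} implementation analysed below) samples a correlated Wigner pair with $x^*=\operatorname{id}$ and runs the PPM iteration for graph matching: the power step forms $C=AXB$ and the projection step solves a linear assignment problem over $\ensuremath{\mathcal{P}}_n$ (here via SciPy's Hungarian solver):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def correlated_wigner(n, sigma, rng):
    """Sample (A, B) from the correlated Wigner model with x* = id."""
    def goe(rng):
        M = rng.standard_normal((n, n)) / np.sqrt(2 * n)
        return M + M.T  # off-diagonal variance 1/n, diagonal variance 2/n
    A, Z = goe(rng), goe(rng)
    B = np.sqrt(1 - sigma**2) * A + sigma * Z
    return A, B

def ppm_graph_matching(A, B, x0, n_iter=20):
    """PPM sketch: power step C = A X B, projection onto permutations by
    maximum-weight linear assignment. `x0` encodes the seed (i -> x0[i])."""
    n, x = A.shape[0], x0.copy()
    for _ in range(n_iter):
        X = np.eye(n)[x]                      # permutation matrix of x
        C = A @ X @ B                         # power step
        _, cols = linear_sum_assignment(-C)   # projection: argmax_X' <C, X'>
        if np.array_equal(cols, x):
            break                             # reached a fixed point
        x = cols
    return x
```

For a mildly corrupted seed (say $90\%$ correct entries) and moderate noise, a handful of iterations typically returns the planted permutation, in line with the theory developed in this paper.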
The use of PPM for graph matching was first proposed and experimentally analysed in \citep{Villar}, and it has subsequently been analysed in the case of sparse Erdös-Renyi graphs in \citep{LubSri,YuXuLin} (only for one iteration) and in \citep{MaoRud} (although the connection with PPM is not mentioned in those works). \paragraph{Graph matching.} For the graph matching problem, numerous algorithmic and information-theoretic \citep{CullKi,HallMass,recons_thr} results have been obtained recently for both the Wigner and the Erdös-Renyi models. In \citep{recons_thr} the sharp threshold for reconstruction has been obtained for the Gaussian and Erdös-Renyi models. In the case of the Wigner model, the authors prove in \citep[Thm.1]{recons_thr} that for $\sigma^2\leq 1-\frac{(4+\epsilon)\log n}{n}$ the maximum likelihood estimator of the ground-truth permutation $x^*$ achieves perfect recovery with probability $1-o(1)$. There has also been a lot of recent work from an algorithmic point of view. In the context of seedless algorithms, where no side information is available, several polynomial-time algorithms have been proposed relying on spectral methods \citep{Spectral_weighted_Ume,Grampa,ganMass,spec_align,Balanced_GM}, degree profiles \citep{deg_prof,dai_cullina}, other vertex signatures \citep{MaoRud}, random-walk based approaches \citep{isorank_1,isorank_2,Gori04graphmatching}, convex and concave relaxations \citep{afla,Lyzin,bach}, and other non-convex methods \citep{YuYan,XuLuo,Villar}. Most of the previous algorithms have theoretical guarantees only in the low-noise regime. For instance, the \texttt{Grampa} algorithm proposed in \citep{Grampa} provably recovers the ground-truth permutation exactly for the correlated Wigner model when $\sigma=\mathcal{O}(\frac1{\log n})$, and in \citep{deg_prof} it is required for the Erdös-Renyi model that the fraction of edges differing between the two graphs be of order $\mathcal{O}(\frac1{\log^2 n})$.
There are a few exceptions: in \citep{ganMass2} the authors present an algorithm that returns a partially correct matching in the sparse Erdös-Renyi case, allowing a constant fraction of edges to differ between the two graphs. Another exception is the recent work \citep{MaoRud}, where the authors propose and analyse an algorithm that can match correlated Erdös-Renyi graphs with a constant correlation parameter, under some sparsity assumptions. The results in \citep{MaoRud} give, to the best of our knowledge, the strongest known theoretical guarantees for sparse correlated Erdös-Renyi graphs. \paragraph{Seeded algorithms.} In the context of seeded algorithms \citep{PedGloss,YarGross,MosselXu,fish,LubSri,YuXuLin}, a set of seeds of the form $S=\{(i,i'): i\in V(G),i'\in V(H)\}$ is given as side information. Many algorithms in this class work under the assumption that the information in the set of seeds corresponds perfectly to the ground-truth permutation, that is, $(i,i')\in S$ if and only if $x^*(i)=i'$. Some algorithms relax this requirement by allowing ``noisy'' seeds, where for some $(i,i')$ in $S$ it happens that $x^*(i)\neq i'$ \citep{YarGross,NoisySeeds,LubSri,YuXuLin,MaoRud}. Most of the previous work on the seeded version of the problem has been devoted to the Erdös-Renyi model, under different assumptions on the sparsity. To the best of our knowledge, the state-of-the-art algorithm in this category is the \texttt{j-hop} algorithm \citep[Alg.1]{YuXuLin}, although it shares similarities with \citep[Alg.2]{LubSri} and \citep[Alg.4]{MaoRud}. On the other hand, it will be evident from our analysis of PPM for graph matching that those algorithms can also be seen as instances of PPM. \subsection{Proof of Theorem \ref{thm:unif_rec_ppm}} \label{subsec:proof_unif_seed_ppm} The general proof idea is based on the decoupling strategy used by \citep{MaoRud} for Erdös-Renyi graphs.
To extend their result from binary graphs to weighted graphs, we need to use an appropriate measure of similarity. For $i, i'\in [n], W\subset [n]$ and $g\in \ensuremath{\mathcal{S}}_n$, let us define \[ \langle A_{i:}, B_{i':} \rangle_{g,W} := \sum_{j\in W} A_{ig(j)}B_{i'j} \] to be the similarity between $i$ and $i'$ restricted to $W$ and measured with a scalar product depending on $g$ (the permutation used to align $A$ and $B$). When $g=\operatorname{id}$ or $W=[n]$ we will drop the corresponding subscript(s). If $A$ and $B$ were binary matrices, we would have the following correspondence \[ \langle A_{i:}, B_{i':} \rangle_{g,W} = |g(\ensuremath{\mathcal{N}}_A(i)\cap W)\cap \ensuremath{\mathcal{N}}_B(i') |.\] This last quantity plays an essential role in Proposition 7.5 of \citep{MaoRud}. Here $g(\ensuremath{\mathcal{S}})$ denotes the image of a set $\ensuremath{\mathcal{S}} \subseteq [n]$ under the permutation $g$. \paragraph{Step 1.} The algorithm design relies on the fact that if the matrices $A$ and $B$ were correctly aligned, then the correlation between $A_{i:}$ and $B_{i:}$ should be large and the correlation between $A_{i:}$ and $B_{i':}$ should be small for all $i\neq i'$. The following two lemmas precisely quantify these correlations when the two matrices are well aligned. \begin{lemma}[Correlation between corresponding nodes]\label{lem:nb_ngbh1_mt} Let $(A,B)\sim W(n,\sigma, x^*=\operatorname{id})$ and assume that the diagonals of $A$ and $B$ have been removed. Then for $n$ large enough, we have with probability at least $1-n^{-2}$ that \[ \langle A_{i:}, B_{i:}\rangle \geq \sqrt{1-\sigma^2}(1-\epsilon_1)-\sigma \epsilon_2 \text{ for all } i\in [n], \] where $\epsilon_1, \epsilon_2 =O(\sqrt{\frac{\log n}{n}})$. \end{lemma} \begin{lemma}[Correlation between different nodes]\label{lem:nb_ngbh2_mt} Let $(A,B)\sim W(n,\sigma, \operatorname{id})$ and assume that the diagonals of $A$ and $B$ have been removed.
Then for $n$ large enough, we have with probability at least $1-n^{-2}$ that \[ \left| \langle A_{i:}, B_{i':} \rangle\right|\leq \sqrt{1-\sigma^2}\epsilon_2+\sigma \epsilon_3 \text{ for all } i,i'\in [n] \text{ such that } i'\neq i, \] where $\epsilon_3=O(\sqrt{\frac{\log n}{n}})$. \end{lemma} The proofs of Lemmas \ref{lem:nb_ngbh1_mt} and \ref{lem:nb_ngbh2_mt} can be found in Appendix \ref{sec:app_thm3}. \paragraph{Step 2.} Since the ground-truth alignment between $A$ and $B$ is unknown, we need to use an approximate alignment (provided by $X^{(0)}$). It will suffice that $X^{(0)}$ is close enough to the ground-truth permutation. This is linked to the fact that if $|S_{X^{(0)}}|$ is large enough, then the number of nodes for which a substantial amount of information is contained in $S_{X^{(0)}}^c$ is small. This is shown in the following lemma. \begin{lemma}[Growing a subset of vertices]\label{lem:growing_vert} Let $G$ be a graph generated from the Wigner model with self-loops removed, associated with an adjacency matrix $A$, and let $I $ be a random subset of $[n]$ (possibly depending on $A$) with $|I|\geq (1-\kappa)n$ where $ \kappa \in (0,1/2)$. Let $\delta = 8\kappa$ and define a random subset of vertices \[ \tilde{I}= \lbrace i \in [n]: \norm{A_{i:}}_{I^c}^2<\delta \rbrace .\] Then for $n$ large enough, we have \[\ensuremath{\mathbb{P}} \left( |\tilde{I}^c|\leq \frac{1}{4}|I^c| \right) \geq 1-e^{-c' \kappa n} \] for some constant $c' > 0$. \end{lemma} In order to prove this lemma we will need the following decoupling lemma. \begin{lemma}[An elementary decoupling] \label{lem:decoupling} Let $M>0$ be a parameter and $G$ be a weighted graph on $[n]$, with weights of magnitude bounded by $1$ and without self-loops, represented by an adjacency matrix $A\in [-1,1]^{n\times n}$.
Assume that there are two subsets of vertices $Q,W\subset [n]$ such that \[ \norm{A_{i:}}_W^2 \geq M \text{ for all } i\in Q.\] Then there are subsets $Q'\subseteq Q$ and $W'\subseteq W$ such that $Q'\cap W' =\emptyset$, $|Q'|\geq |Q|/5$ and \[ \norm{A_{i:}}_{W'}^2 \geq M/2 \text{ for all } i\in Q'. \] \end{lemma} \begin{proof} If $|Q\setminus W|\geq |Q|/5$ then one can take $Q'=Q\setminus W$ and $W'= W$. So we can assume that $|Q\cap W|\geq 4|Q|/5$. Let $\tilde{W}:=W\setminus Q$ and let $\hat{Q}$ be a random subset of $Q\cap W$, where each element $j\in Q\cap W$ is included in $\hat{Q}$ independently with probability $1/2$. Consider the random disjoint sets $\hat{Q}$ and $W':=\tilde{W}\cup ((Q\cap W)\setminus \hat{Q})$. First, we will show the following claim. % \begin{claim} For every $i \in Q\cap W$, we have $ \ensuremath{\mathbb{P}}( \norm{A_{i:}}_{W'}^2\geq M/2 |i \in \hat{Q})\geq 1/2.$ \end{claim} Indeed, we have by definition \[ \norm{A_{i:}}_{W'}^2=\sum_{j \in W' } A_{ij}^2 = \sum_{j \in W\cap Q } A_{ij}^2\indic_{j\not \in \hat{Q}} +\sum_{j \in \tilde{W} } A_{ij}^2 .\] By taking the expectation conditional on $i \in \hat{Q}$, we obtain \[ \ensuremath{\mathbb{E}} \left( \norm{A_{i:}}_{W'}^2 \middle| i \in \hat{Q} \right) = \sum_{j \in W\cap Q} \frac{A_{ij}^2}{2} + \sum_{j \in \tilde W} A_{ij}^2 \geq \frac{1}{2}\sum_{j\in W}A_{ij}^2 \geq \frac{M}{2}.\] But since $\sum_{j\in W\cap Q} A_{ij}^2(\indic_{j\not \in \hat{Q}}-\frac{1}{2})$ is a symmetric random variable we have that \[ \ensuremath{\mathbb{P}}\left(\norm{A_{i:}}_{W'}^2\geq \ensuremath{\mathbb{E}}(\norm{A_{i:}}_{W'}^2 \,|\, i \in \hat{Q}) \middle|i \in \hat{Q}\right) \geq 1/2\] and hence \[ \ensuremath{\mathbb{P}}\left(\norm{A_{i:}}_{W'}^2\geq \frac{M}{2} \middle|i \in \hat{Q}\right)\geq 1/2.
\] Consequently, we have \[ \ensuremath{\mathbb{E}}\left(\sum_{i\in Q\cap W} \indic_{\lbrace \norm{A_{i:}}_{W'}^2\geq M/2 \rbrace} \indic_{i \in \hat{Q}}\right) = \sum_{i\in Q\cap W} \ensuremath{\mathbb{P}}(i\in \hat{Q})\ensuremath{\mathbb{E}}\left( \indic_{\lbrace \norm{A_{i:}}_{W'}^2\geq M/2 \rbrace}\middle| i\in \hat{Q}\right) \geq \frac{|Q\cap W|}{4} \geq \frac{|Q|}{5}.\] Therefore, there is a realization $Q'$ of $\hat{Q}$ such that $Q'$ and $W'$ satisfy the required conditions. \end{proof} \begin{proof}[Proof of Lemma \ref{lem:growing_vert}] By considering sets $W=I^c$ and $Q \subset \tilde{I}^c$ we obtain the following inclusion % \[ \lbrace |\tilde{I}^c|> \frac{1}{4}|I^c| \rbrace \subset \ensuremath{\mathcal{E}}:= \lbrace \exists\, Q,W\subset [n]: |W|\leq \kappa n, |Q|\geq |W|/4\neq 0, \norm{A_{i:}}_{W}^2\geq \delta \text{ for all }i\in Q \rbrace . \] % According to Lemma \ref{lem:decoupling}, $\ensuremath{\mathcal{E}}$ is contained in % \[\ensuremath{\mathcal{E}}':=\lbrace \exists\, Q', W'\subset [n]: |W'|\leq \kappa n, |Q'|\geq |W|/20\neq 0, Q'\cap W'= \emptyset, \norm{A_{i:}}_{W'}^2\geq \delta/2 \text{ for all }i\in Q' \rbrace.\] % For given subsets $Q'$ and $W'$, the random variables $(\norm{A_{i:}}_{W'}^2)_{i\in Q'}$ are independent. So, by a union bound argument we get % \[ \ensuremath{\mathbb{P}} \left( |\tilde{I}^c|> \frac{1}{4}|I^c| \right) \leq \sum_{w=1}^{\ceil{\kappa n}}\sum_{|W'|=w}\sum_{k=\ceil{w/20}}^{n}\binom{n}{k}\ensuremath{\mathbb{P}}\left( \norm{A_{i:}}_{W'}^2\geq \delta/2 \right)^k. 
\] % According to Lemma \ref{lem:lau_mass}, for the choice $t=\kappa n$ we have for all $W'$ \[ \ensuremath{\mathbb{P}}\left( \norm{A_{i:}}_{W'}^2\geq \delta/2 \right) \leq \ensuremath{\mathbb{P}}\left( n\norm{A_{i:}}_{W'}^2\geq |W'|+\sqrt{|W'|t}+2t \right) \leq e^{-\kappa n}.\] Consequently, for $n$ large enough, we have \[ \ensuremath{\mathbb{P}} \left( |\tilde{I}^c|> \frac{1}{4}|I^c| \right) \leq \sum_{w=1}^{\ceil{\kappa n}}\sum_{k=\ceil{w/20}}^{n}\left(\frac{en}{w}\right)^w\left(\frac{en}{k}\right)^ke^{-k\kappa n} < e^{-c\kappa n}\] for a constant $c>0$. Indeed, since \[ \frac{en}{ke^{\kappa n}}<1\] for $n$ large enough we have \[ \sum_{k=\ceil{w/20}}^{n} \left(\frac{en}{k}\right)^ke^{-k\kappa n} \leq C\left(\frac{en}{e^{\kappa n}}\right)^{\ceil{w/20}}\] by the properties of geometric series, where $C>0$ is a constant. But by the same argument \[ \sum_{w=1}^{\ceil{\kappa n}}\left(\frac{en}{w}\right)^w \left(\frac{(en)^{1/20}}{e^{\kappa n/20}}\right)^{w} \leq \frac{(en)^{1/20}}{e^{\kappa n/20}}\leq e^{-c\kappa n}\] where $c > 0$ is a constant. \end{proof} \paragraph{Step 3.} We are now in a position to show that at each step the set of fixed points of the permutation obtained with \texttt{PPMGM}\, increases. \begin{lemma}[Improving a partial matching]\label{lem:improve_matching} Let $G$ and $G'$ be two graphs as before, and $g$ be a random permutation possibly depending on $G$ and $G'$. Further assume that $\sqrt{1-\sigma^2}>48\kappa$. Let \[ \ensuremath{\mathcal{E}} := \lbrace |i\in [n]: g(i)=i |\geq (1-\kappa)n \rbrace\] be the event that the number of fixed points of $g$ is large enough. Define a random permutation $\tilde{g}$ and a random set $\tilde{J}$ as follows.
Let $\delta=8\kappa$; we say that a vertex $i\in [n]$ belongs to $\tilde{J}$ if there is a unique $i'\in [n]$ such that \begin{itemize} \item $\langle A_{i:},B_{i':}\rangle_{g}\geq 3 \delta$; \item $|\langle A_{i:},B_{j:}\rangle_{g}|< 3\delta$ for all $j \neq i'$; \item $|\langle A_{j:},B_{i':}\rangle_{g}|< 3\delta$ for all $j \neq i $. \end{itemize} Then we set $\tilde{g}(i)=i'$ for any such pair of vertices. We complete $\tilde{g}$ into a permutation in an arbitrary way. If $n$ is sufficiently large and $\kappa$ sufficiently small, we have with probability at least $\ensuremath{\mathbb{P}}(\ensuremath{\mathcal{E}})-\frac{3}{n^2}$, \[ |\lbrace i\in[n]: \tilde{g}(i)=i \rbrace|\geq \frac{n}{2}+\frac{|\lbrace i\in[n]: g(i)=i \rbrace|}{2}.\] This implies in particular that the set of fixed points of $\tilde{g}$ is strictly larger than the set of fixed points of $g$ (whenever $g$ is not already the identity). \end{lemma} \begin{remark} The description of \texttt{GMWM} does not involve the use of a threshold, but for the nodes that satisfy the conditions described in Lemma \ref{lem:improve_matching}, \texttt{GMWM} provides by definition the same matching (this can be seen using the notion of row-column dominance and Lemma \ref{lem:overlap_event}). Since the nodes that do not satisfy these conditions can be matched in an arbitrary way, we can use \texttt{GMWM} instead of the thresholding procedure and the analysis remains valid.
\end{remark} \begin{proof} Define the random sets \begin{align*} I:=&\lbrace j\in [n]: g(j)=j \rbrace,\\ \tilde{I}:=&\lbrace j\in [n]: \norm{A_{j:}}_{I^c}^2 < \delta \rbrace ,\\ \tilde{I}':=&\lbrace j\in [n]: \norm{B_{j:}}_{I^c}^2 < \delta \rbrace, \end{align*} where $\delta=8\kappa$ and consider the event $\ensuremath{\mathcal{E}}' = \ensuremath{\mathcal{E}}_1' \cap \ensuremath{\mathcal{E}}_2' \cap \ensuremath{\mathcal{E}}_3'$ where \begin{align*} \ensuremath{\mathcal{E}}_1' &:=\lbrace |\tilde{I}^c|\vee |(\tilde{I}')^c|\leq \frac{1}{4}|I^c| \rbrace \\ \ensuremath{\mathcal{E}}_2' &:=\lbrace \forall i\in [n] : \ \langle A_{i:}, B_{i:}\rangle \geq 0.9\sqrt{1-\sigma^2} \rbrace \\ \ensuremath{\mathcal{E}}_3' &:=\lbrace \forall i\neq i'\in [n]: \ |\langle A_{i:}, B_{i':}\rangle|<C\sqrt{\tfrac{\log n}{n}} \rbrace \end{align*} for a suitably large constant $C > 0$ (which is the constant hidden in the $O(\cdot)$ symbol in Lemma \ref{lem:nb_ngbh2_mt}). If $n$ is sufficiently large and $\kappa$ satisfies $\sqrt{1-\sigma^2}>48\kappa$, one can show that \[ \ensuremath{\mathbb{P}}(\ensuremath{\mathcal{E}}'\cap \ensuremath{\mathcal{E}})\geq \ensuremath{\mathbb{P}}(\ensuremath{\mathcal{E}})-\frac{3}{n^2} \] by combining Lemmas \ref{lem:nb_ngbh1_mt}, \ref{lem:nb_ngbh2_mt} and \ref{lem:growing_vert}. Condition on any realization of $G,G',g$ such that the event $\ensuremath{\mathcal{E}}'\cap\ensuremath{\mathcal{E}}$ holds. Let $i \in \tilde{I}\cap \tilde{I}'$. By definition of $\ensuremath{\mathcal{E}}'\cap\ensuremath{\mathcal{E}}$, we have \begin{align*} \langle A_{i:},B_{i:}\rangle_{g} &\geq \langle A_{i:},B_{i:}\rangle_{g,I}-|\langle A_{i:},B_{i:}\rangle_{g,I^c}|\\ &\geq \langle A_{i:},B_{i:}\rangle-|\langle A_{i:},B_{i:}\rangle_{I^c}|-|\langle A_{i:},B_{i:}\rangle_{g,I^c}|\\ &\geq 3\delta. \end{align*} Here we used the fact that for all permutations $g$, $|\langle A_{i:},B_{i:}\rangle_{g,I^c}|\leq \norm{A_{i:}}_{I^c}\norm{B_{i:}}_{I^c}$ (because $g(I^c) = I^c$ by definition of $I$).
On the other hand, for every $i'\in [n]\setminus\{i\}$ we have \begin{align*} |\langle A_{i:},B_{i':}\rangle_{g}| &\leq |\langle A_{i:},B_{i':}\rangle_{g,I}|+|\langle A_{i:},B_{i':}\rangle_{g,I^c}|\\ &< 3\delta . \end{align*} Similarly we have $ |\langle A_{i':},B_{i:}\rangle_{g}|< 3\delta $. Hence $\tilde{I}\cap \tilde{I}'\subset \tilde{J}$ and $\tilde{g}(i)=i$ for all $i\in \tilde{I}\cap \tilde{I}'$. Moreover, we have by the first condition on $\ensuremath{\mathcal{E}}'$ \[ |\tilde{I}\cap \tilde{I}'|\geq n-|\tilde{I}^c|-|(\tilde{I}')^c|\geq n-\frac{|I^c|}{2}=\frac{n}{2}+\frac{|I|}{2},\] so the result follows. \end{proof} \paragraph{Conclusion.} By Lemma \ref{lem:improve_matching}, if the initial number of fixed points is at least $(1-\kappa)n$, then after one iteration step the number of fixed points of the new iterate is at least $(1-\kappa/2)n$ with probability greater than $1-\frac{3}{n^2}$. So after $2\log n$ iterations the set of fixed points has size at least $(1-\kappa/2^{2\log n})n>n-1$ with probability greater than $1-\frac{6\log n}{n^2}$. \section{Proof outline} \subsection{Proof of Theorem \ref{prop:one_it_conv}}\label{sec:thm_one_it} For $A,B\sim W(n,\sigma,\operatorname{id})$, the proof of Theorem \ref{prop:one_it_conv} relies heavily on the concentration properties of the entries of the matrix $C=AXB$, which is the matrix that is projected by our proposed algorithm. In particular, we use the fact that $C$ is diagonally dominant with high probability under the assumptions of Theorem \ref{prop:one_it_conv}, which is given by the following result. Its proof is deferred to Appendix \ref{app:concentration}. \begin{proposition} [Diagonal dominance property for the matrix $C=AXB$]\label{prop:diago_dom} Let $A,B\sim W(n,\sigma,\operatorname{id})$ with correlation parameter $\sigma\in[0,1)$ and let $X\in \ensuremath{\mathcal{P}}_n$ with $S_X$ the set of its fixed points and $s_x:=|S_X|/n$. Assume that $s_x\geq 10/n$ and that $n\geq 10$. Then the following is true.
\begin{enumerate}[(i)] \item \textbf{Noiseless case.} For a fixed $i\in[n]$ it holds that \begin{equation*} \ensuremath{\mathbb{P}}\big(\exists j\neq i: (AXA)_{ij}>(AXA)_{ii}\big)\leq 4ne^{-\frac{s_x^2}{96}n}. \end{equation*} \item \textbf{Noisy case.} For $C=AXB$ and a fixed $i\in [n]$ it holds that \begin{equation*} \ensuremath{\mathbb{P}}{(\exists j\neq i : C_{ij}>C_{ii})}\leq 5ne^{-c(\sigma)s_x^2n} \end{equation*} where $c(\sigma)=\frac1{384}\frac{1-\sigma^2}{1+2\sigma\sqrt{1-\sigma^2}}$. \end{enumerate} \end{proposition} With this we can proceed with the proof of Theorem \ref{prop:one_it_conv}. \begin{proof}[Proof of Theorem \ref{prop:one_it_conv}] To prove part $(i)$ of the theorem, it suffices to notice that in Proposition \ref{prop:diago_dom} part $(ii)$ we upper bound the probability that $C=AXB$ is not diagonally dominant for each fixed row. Using the union bound, summing over the $n$ rows, we obtain the desired upper bound on the probability that $C$ is not diagonally dominant. We now prove part $(ii)$. Notice that the assumption $\|X-\operatorname{Id}\|_F\leq \theta\sqrt{n}$ for $\theta< \sqrt{2}$ implies that $s_x$ is strictly positive. Moreover, from this assumption and the fact that $\|X-\operatorname{Id}\|^2_F=2(n-|S_X|)$ we deduce that \begin{equation}\label{eq:theta_fp} s_x\geq \Big(1-\frac{{\theta}^2}2\Big). \end{equation} On the other hand, we have \begin{align*} \ensuremath{\mathbb{P}}(\Pi\neq \operatorname{Id})&\leq \ensuremath{\mathbb{P}}(C \text{ is not diagonally dominant})\\ &= \ensuremath{\mathbb{P}}(\exists i,j\in[n],i\neq j:C_{ii}<C_{ij})\\ &\leq 5n^2e^{-c(\sigma)s_x^2n}\\ &\leq 5n^2e^{-c(\sigma)\big(1-\frac{\theta^2}2\big)^2n} \end{align*} where we used Lemma \ref{lem:diagdom_LAP} in the first inequality, Proposition \ref{prop:diago_dom} in the penultimate step, and \eqref{eq:theta_fp} in the last inequality. \end{proof} \subsubsection{Proof of Proposition \ref{prop:diago_dom}} In Proposition \ref{prop:diago_dom} part $(i)$ we assume that $\sigma=0$.
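Before outlining the argument, the diagonal dominance phenomenon is easy to observe numerically. The following sketch (under our own sampling conventions, with an illustrative noise level $\sigma=0.4$ and a seed with roughly $90\%$ fixed points; not part of the formal proof) builds $C=AXB$ and checks row-wise dominance:

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma = 500, 0.4

# Correlated Wigner pair with ground truth x* = id.
M1 = rng.standard_normal((n, n)) / np.sqrt(2 * n); A = M1 + M1.T
M2 = rng.standard_normal((n, n)) / np.sqrt(2 * n); Z = M2 + M2.T
B = np.sqrt(1 - sigma**2) * A + sigma * Z

# A seed permutation with ~90% fixed points (first 50 indices scrambled).
x = np.arange(n)
x[:50] = rng.permutation(50)
X = np.eye(n)[x]

C = A @ X @ B
D = C.copy()
np.fill_diagonal(D, -np.inf)
# Row-wise diagonal dominance: C_ii exceeds every off-diagonal entry of row i.
diag_dominant = bool(np.all(np.diag(C) > D.max(axis=1)))
```

At these parameter values `diag_dominant` is typically `True`: the diagonal entries concentrate near $s_x\sqrt{1-\sigma^2}$ while the off-diagonal entries are of order $\sqrt{\log n/n}$.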
The following are the main steps of the proof. \begin{enumerate} \item We first prove that for all $X\in\ensuremath{\mathcal{P}}_n$, with $s_x=|S_X|/n$, and for all $i\neq j\in[n]$, the gap $C_{ii}-C_{ij}$ is of order $s_x$ in expectation. \item We prove that $C_{ii}$ and $C_{ij}$ are sufficiently concentrated around their means. In particular, the probability that $C_{ii}$ is smaller than $s_x/2$ is exponentially small. The same is true for the probability that $C_{ij}$ is larger than $s_x/2$. \item We use the bound $\ensuremath{\mathbb{P}}(C_{ii}\leq C_{ij})\leq\ensuremath{\mathbb{P}}(C_{ii} \leq s_x/2)+\ensuremath{\mathbb{P}}(C_{ij}\geq s_x/2)$ to control the probability that $C$ is not diagonally dominant. \end{enumerate} The proof is mainly based upon the following two lemmas. \begin{lemma}\label{lem:expectation} For the matrix $C=AXA$ and with $s_x=|S_X|/n$ we have \[\ensuremath{\mathbb{E}}[C_{ij}]=\begin{cases} s_x+\frac1n\mathbbm{1}_{i\in S_X} \enskip\text { for }i=j, \\ \frac1n\mathbbm{1}_{x(j)=i} \enskip\text { for }i\neq j, \\ \end{cases}\] and from this we deduce that for $i,j\in[n]$ with $i\neq j$ \[s_x-\frac1n\leq \ensuremath{\mathbb{E}}{[C_{ii}]}-\ensuremath{\mathbb{E}}{[C_{ij}]}\leq s_x+\frac1n.\] \end{lemma} \begin{lemma}\label{lem:tailbounds} Assume that $s_x\in(10/n,1]$ and $n\geq 10$. Then for $i,j\in[n]$ with $i\neq j$ we have \begin{align}\label{eq:bounddiag} \ensuremath{\mathbb{P}}(C_{ii}\leq s_x/2)&\leq 4 e^{-\frac{s_x^2}{48}n}, \\%f(s_x)^{n/2}\\ \label{eq:boundoffdiag} \ensuremath{\mathbb{P}}(C_{ij}\geq s_x/2)&\leq 3e^{-\frac{s_x^2}{96}n}. \end{align} \end{lemma} With these we can prove Proposition \ref{prop:diago_dom} part $(i)$. \begin{proof}[Proof of Prop. \ref{prop:diago_dom} $(i)$] Define the event $\mathcal{E}_j=\{C_{ii}<\frac{s_x}2\}\cup \{C_{ij}>\frac{s_x}2\}$ and note that for $j\neq i$, we have $\{C_{ij}>C_{ii}\}\subset\mathcal{E}_j$.
With this and the bounds \eqref{eq:bounddiag} and \eqref{eq:boundoffdiag} we have \begin{align*} \ensuremath{\mathbb{P}}\big(\exists j\neq i: C_{ij}>C_{ii}\big)&=\ensuremath{\mathbb{P}}(\cup_{j\neq i}\{C_{ij}>C_{ii}\})\\ &\leq \ensuremath{\mathbb{P}}(\cup_{j\neq i}\mathcal{E}_j)\\ &\leq \ensuremath{\mathbb{P}}(C_{ii}\leq \frac{s_x}{2})+\sum_{j\neq i} \ensuremath{\mathbb{P}}(C_{ij}\geq \frac{s_x}{2})\\ &\leq 4e^{-\frac{s_x^2}{96}n}+3(n-1)e^{-\frac{s_x^2}{96}n}\\ &\leq 4ne^{-\frac{s_x^2}{96}n}. \end{align*} \end{proof} The proof of Lemma \ref{lem:expectation} is short and we include it in the main body of the paper. On the other hand, the proof of Lemma \ref{lem:tailbounds} mainly uses concentration inequalities for Gaussian quadratic forms, but the details are quite technical. Hence we defer its proof to Appendix \ref{app:diagdom_row_noiseless}. Before proceeding with the proof of Lemma \ref{lem:expectation}, observe that the following decomposition holds for the matrix $C$. \begin{equation}\label{eq:Cdecom} C_{ij}=\sum_{k,k'}A_{ik}X_{k,k'}A_{k'j} = \begin{cases} \sum_{k\in S_X}A^2_{ik}+\sum_{k\notin S_X}A_{ik}A_{ix(k)} \enskip\text { for }i=j,\\ \sum^n_{k=1}A_{ik}A_{x(k)j} \enskip\text{ for }i\neq j. \end{cases} \end{equation} \begin{proof}[Proof of Lemma \ref{lem:expectation}] From \eqref{eq:Cdecom} we have that \begin{align*} \ensuremath{\mathbb{E}}[C_{ii}] =\sum_{k\in S_X}\ensuremath{\mathbb{E}}[A^2_{ik}]+\sum_{k\notin S_X}\ensuremath{\mathbb{E}}[A_{ik}A_{ix(k)}] =\frac{|S_X|}n+\frac{\mathbbm{1}_{i\in S_X}}n, \end{align*} where the second sum vanishes since $A_{ik}$ and $A_{ix(k)}$ are independent and centered for $k\notin S_X$. Similarly, for $j\neq i$ it holds that \begin{align*} \ensuremath{\mathbb{E}}[C_{ij}] =\sum^n_{k=1}\ensuremath{\mathbb{E}}[A_{ik}A_{x(k)j}] =\frac1n\mathbbm{1}_{i,j\notin S_X, x(j)=i} =\frac{\mathbbm{1}_{x(j)=i}}n \end{align*} from which the result follows easily.
\end{proof} The proof of Proposition \ref{prop:diago_dom} part $(ii)$, which corresponds to the case $\sigma\neq 0$, uses similar ideas and the details can be found in Appendix \ref{app:diagdom_row_noise}. \subsection{Proof of Theorem \ref{prop:partial_rec}} \label{subsec:proof_thm_partial_rec} The proof of Theorem \ref{prop:partial_rec} will be based on the following lemma, which extends Proposition \ref{prop:diago_dom}. \begin{lemma}\label{lem:not_rc_dom} For a fixed $i\in[n]$, we have \begin{equation*} \ensuremath{\mathbb{P}}(C_{ii} \textup{ is not row-column dominant})\leq 16ne^{-c(\sigma)s_x^2n}. \end{equation*} \end{lemma} The proof of Lemma \ref{lem:not_rc_dom} is included in Appendix \ref{app:lem_not_rc_dom}. We now prove Theorem \ref{prop:partial_rec}. The main idea is that for a fixed $i\in[n]$, with high probability the term $C_{ii}$ will be the largest in the $i$-th row and the $i$-th column, and so \texttt{GMWM} will assign $\pi(i)=i$. We will also use the following event inclusion, which follows directly from \eqref{eq:overlap_event} in Lemma \ref{lem:overlap_event}. \begin{equation} \label{eq:overlap_event2} \{\operatorname{overlap}(\pi,\operatorname{id})< r/n\}\subset\bigcup^r_{i=1}\{C_{ii} \text{ is not row-column dominant }\}. \end{equation} \begin{proof}[Proof of Theorem \ref{prop:partial_rec} ] By \eqref{eq:overlap_event2} we have that \begin{align*} \ensuremath{\mathbb{P}}(\operatorname{overlap}(\pi,\operatorname{id})< r/n)&\leq \sum^r_{i=1}\ensuremath{\mathbb{P}}(C_{ii} \text{ is not row-column dominant})\\ &\leq \sum^r_{i=1}\ensuremath{\mathbb{P}}(\exists j\neq i,\text{ s.t. }C_{ij}\vee C_{ji}>C_{ii} )\\ &\leq 16rne^{-c(\sigma) s_x^2n} \end{align*} where we used Lemma \ref{lem:not_rc_dom} in the last inequality. \end{proof} \begin{remark} Notice that the RHS of \eqref{eq:overlap_event2} is a superset of the RHS of \eqref{eq:overlap_event}. To improve this, it is necessary to include dependency information.
In other words, we need to `beat Hölder's inequality'. To see this, define \[ E_i:=\mathbbm{1}_{C_{ii}\text{ is not row-column dominant }},\enskip \varepsilon_{I}:=\mathbbm{1}_{\sum_{i\in I}E_i>0}, \text{ for } I\subset [n]; \] then $\varepsilon_{I'}$, for $I'=[r]$, is the indicator of the event in the RHS of \eqref{eq:overlap_event2}. On the other hand, the indicator of the event in the RHS of \eqref{eq:overlap_event} is ${\displaystyle \prod_{\substack{I\subset[n],|I|=r}}}\varepsilon_I$. If $\ensuremath{\mathbb{E}}\big[\varepsilon_I\big]$ is the same for all $I$, then Hölder's inequality gives \[\ensuremath{\mathbb{E}}\Big[{\displaystyle \prod_{\substack{I\subset[n],|I|=r}}}\varepsilon_I\Big]\leq \ensuremath{\mathbb{E}}[\varepsilon_{I'}]\] which does not help in quantifying the difference between \eqref{eq:overlap_event} and \eqref{eq:overlap_event2}. This is not surprising, as we are not taking into account the dependency between the events $\varepsilon_I$ for the different sets $I\subset[n],|I|=r$. \end{remark} \input{proof_thm3} \section{Concluding remarks} In this work, we analysed the performance of the projected power method (proposed in \citep{Villar}) as a seeded graph matching algorithm in the correlated Wigner model. We proved that for a non-data-dependent seed with $\Omega(\sqrt{n\log n})$ correctly pre-assigned vertices, PPM exactly recovers the ground-truth matching in one iteration. This is analogous to the state-of-the-art results for algorithms in the case of relatively sparse correlated Erdös-Renyi graphs. We additionally proved that PPM can exactly recover the optimal matching in $\mathcal{O}(\log n)$ iterations for a seed that contains $\Omega\big((1-\kappa)n\big)$ correctly matched vertices, for a constant $\kappa\in (0,1)$, even if the seed is potentially dependent on the data.
For the latter result, we extended the arguments of \citep{MaoRud} from the (sparse) correlated Erd\"os-Renyi model to the (dense) correlated Wigner case, providing a uniform control on the error when the seed contains $\Omega\big((1-\kappa)n\big)$ fixed points. This provides theoretical guarantees for the use of PPM as a refinement algorithm (or a post-processing step) for other seedless graph matching methods. An open question is to find an efficient initialization method which outputs a permutation with order $(1-\kappa)n$ correctly matched vertices in regimes with higher $\sigma$ (say for $\sigma>1/2$). For those noise levels, spectral methods do not seem to perform well (at least in our experiments). An idea could be to adapt the results of \citep{MaoRud} from the sparse Erdös-Renyi case to the Wigner case. In that paper, the authors construct for each vertex a signature containing the neighborhood information of that vertex, encoded as a tree. A matching is then constructed by matching those trees. It is, however, unclear how to adapt those results (which rely heavily on sparsity) to the Wigner case. \section{Main results}\label{sec:conv_analysis} Our goal in this section is to prove recovery guarantees for Algorithm \ref{alg:ppmgm} when the input matrices $A,B$ are realizations of the correlated Wigner model, described earlier in Section \ref{sec:RGmodels}. In what follows, we will assume without loss of generality that $X^*=\operatorname{Id}$. \subsection{Exact recovery in one iteration}\label{sec:mainstep1} For any given seed $x^{(0)}$ that is close enough to $x^*$, the main result of this section states that $x^*$ is recovered exactly in one iteration of Algorithm \ref{alg:ppmgm} with high probability.
Let us first introduce the following definition: we say that a matrix $M$ is diagonally dominant\footnote{This is weaker than the usual notion of diagonal dominance, where $|M_{ii}|\geq \sum_{j\neq i}|M_{ij}|$ for all $i\in [n]$.} if for all $i,j$ with $i\neq j$ we have $M_{ii}>M_{ij}$. This notion will be used in conjunction with the following lemma, whose proof is in Appendix \ref{app:proofs_lem_diagdom}. \begin{lemma}\label{lem:diagdom_LAP} If a matrix $C$ satisfies the diagonal dominance property, then the greedy algorithm \texttt{GMWM} with input $C$ returns the identity permutation. Consequently, for $C=AXB$ and $\Pi=\tau(C)$, we have \begin{equation}\label{eq:probneqId} \ensuremath{\mathbb{P}}(\Pi\neq \operatorname{Id})\leq \ensuremath{\mathbb{P}}(C \textup{ is not diag. dominant}). \end{equation} \end{lemma} The next theorem allows us to control the probability that $C$ is not diagonally dominant and, in turn, proves that Algorithm \ref{alg:ppmgm} recovers the ground truth permutation with high probability. The proof of Theorem \ref{prop:one_it_conv} is outlined in Section \ref{sec:thm_one_it}. \begin{theorem}\label{prop:one_it_conv} Let $A,B\sim W(n,\sigma,\operatorname{id})$ and $X\in\ensuremath{\mathcal{P}}_n$ with $\|X-\operatorname{Id}\|_F\leq \theta\sqrt{n}$, where $0\leq \theta \leq\sqrt{2(1-\frac{10}n)}$ and $n\geq 10$. Then the following holds. % \begin{enumerate}[(i)] \item For $C=AXB$ we have \[\ensuremath{\mathbb{P}}(C \textup{ is not diag. dominant })\leq 5n^2e^{-c(\sigma)\big(1-\frac{\theta^2}{2}\big)^2n}\] where $c(\sigma)=\frac1{384}(\frac{1-\sigma^2}{1+2\sigma\sqrt{1-\sigma^2}})$.
\item Denote by $\Pi$ the output of Algorithm \ref{alg:ppmgm} with input $(A,B,X^{(0)}=X,N=1)$; then \[\ensuremath{\mathbb{P}}(\Pi=\operatorname{Id})\geq 1-5n^2e^{-c(\sigma)\big(1-\frac{\theta^2}2\big)^2n}.\] In particular, if $\|X-\operatorname{Id}\|^2_F\leq 2\Big(n-\sqrt{\frac{1}{c(\sigma)}n\log{(5n^3)}}\Big)$ then \[\ensuremath{\mathbb{P}}(\Pi=\operatorname{Id})\geq 1-n^{-1}.\] \end{enumerate} \end{theorem} \begin{remark} The assumption $\|X-\operatorname{Id}\|^2_F\leq 2(n-\sqrt{\frac{1}{c(\sigma)}n\log{(5n^3)}})$ can be restated as $|S_X|\geq \sqrt{\frac{1}{c(\sigma)}n\log{5n^3}}$, where $S_X$ is the set of fixed points of $X$. That is, for this assumption to hold, we need $X$ to have a number of fixed points of order $\Omega_\sigma(\sqrt{n\log n})$. Also note that $c(\sigma)$ is decreasing in $\sigma$, which is consistent with the intuition that larger levels of noise make it more difficult to recover the ground truth permutation. We include a plot of $c(\sigma)$ (rescaled) in Figure \ref{fig:c_2_sig}. \begin{figure}[!ht] \centering \includegraphics[scale=0.29]{img/constant_sigma.pdf} \caption{The constant $c(\sigma)$ (re-scaled by multiplying by $384$) appearing in Theorem \ref{prop:one_it_conv}.} \label{fig:c_2_sig} \end{figure} \end{remark} \paragraph{Discussion.} Given an initial seed $X^{(0)}\in \ensuremath{\mathcal{P}}_n$, the case $N=1$ in Algorithm \ref{alg:ppmgm} can alternatively be interpreted as the following two-step process: first, compute a similarity matrix $AX^{(0)}B$ and then round the similarity matrix to an actual permutation matrix. This strategy has been frequently applied in graph matching algorithms in both the seeded and seedless case \citep{Spectral_weighted_Ume,Grampa,LubSri,YuXuLin}. In terms of the quality of the seed, Theorem \ref{prop:one_it_conv} gives the same guarantees as those obtained in \citep[Thm.1]{YuXuLin}, which requires $\Omega(\sqrt{n\log n})$ vertices in the seed to be correctly matched.
However, the results of \citep{YuXuLin} are specifically for the correlated Erdös-Renyi model. \subsection{Partial recovery in one iteration} \label{subsec:partial_one_step} In the partial recovery setting, we are interested in the fraction of nodes that are correctly matched. To this end, let us define the following measure of performance \begin{equation}\label{eq:overlap_def} \operatorname{overlap}(\nu,\nu'):=\frac{1}{n}|\{i\in[n]:\nu(i)=\nu'(i)\}| \end{equation} for any pair $\nu,\nu'\in\ensuremath{\mathcal{S}}_n$. Recall that we assume that the ground truth permutation is $x^*=\operatorname{id}$ and $\pi$ is the output of Algorithm \ref{alg:ppmgm} with input $(A,B,X^{(0)}=X,N=1)$ where $\Pi=\operatorname{GMWM} (AXB)$. Observe that $\operatorname{overlap}(\pi,x^*=\operatorname{id})=s_\pi$ is the fraction of fixed points of the permutation $\pi$. It will be useful to consider the following definition. We say that $C_{ij}$ is \emph{row-column dominant} if $C_{ij}> C_{i'j}$ for all $i'\neq i$ and $C_{ij}>C_{ij'}$ for all $j'\neq j$. The following lemma relates the overlap of the output of $\texttt{GMWM}$ with the property that a subset of the entries of $C$ is row-column dominant; its proof is outlined in Appendix \ref{app:proofs_lem_diagdom}. \begin{lemma}\label{lem:overlap_event} Let $C$ be an $n\times n$ matrix with the property that there exists a set $\{i_1,\cdots,i_r\}$, with $1\leq r \leq n$, such that $C_{i_k,i_k}$ is row-column dominant for $k\in[r]$. Let $\pi\in\ensuremath{\mathcal{S}}_n$ be the permutation corresponding to $\operatorname{GMWM}(C)\in\ensuremath{\mathcal{P}}_n$. Then it holds that $\pi(i_k)=i_k$ for $k\in[r]$ and, in consequence, the following event inclusion holds \begin{equation}\label{eq:overlap_event} \{\operatorname{overlap}(\pi,\operatorname{id})< r/n\}\subset\bigcap_{\substack{I_r\subset [n]\\|I_r|=r}}\bigcup_{i\in I_r}\{C_{ii} \textup{ is not row-column dominant } \}.
\end{equation} \end{lemma} Equipped with this lemma, we can prove the following generalization of Theorem \ref{prop:one_it_conv}, whose proof is detailed in Section \ref{subsec:proof_thm_partial_rec}. \begin{theorem}\label{prop:partial_rec} Let $A,B\sim W(n,\sigma,\operatorname{id})$ and $X\in \ensuremath{\mathcal{P}}_n$ with $\|X-\operatorname{Id}\|_F\leq \theta\sqrt{n}$, where $0\leq \theta \leq\sqrt{2(1-\frac{10}n)}$ and $n\geq 10$. Let $\pi\in \ensuremath{\mathcal{S}}_n$ be the output of Algorithm \ref{alg:ppmgm} with input $(A,B,X^{(0)}=X,N=1)$. Then, for $r\in[n]$ \begin{equation*} \ensuremath{\mathbb{P}}( \operatorname{overlap}(\pi,\operatorname{id})> r/n)\geq 1-16rne^{-c(\sigma)\big(1-\frac{\theta^2}2\big)^2n}. \end{equation*} In particular, if $x\in\ensuremath{\mathcal{S}}_n$ is the map corresponding to $X$ and $|S_X|\geq \sqrt{\frac1{c(\sigma)}n\log{(16rn^2)}}$, then \begin{equation*} \ensuremath{\mathbb{P}}( \operatorname{overlap}(\pi,\operatorname{id})> r/n)\geq 1-n^{-1}. \end{equation*} \end{theorem} \subsection{Exact recovery after multiple iterations, uniformly in the seed} The results in Sections \ref{sec:mainstep1} and \ref{subsec:partial_one_step} hold for any given seed $X^{(0)}$, and it is crucial there that the seed does not depend on the graphs $A, B$. In this section, we provide convergence guarantees for \texttt{PPMGM}\ which hold uniformly over all choices of the seed in a neighborhood around $x^*$. \begin{theorem} \label{thm:unif_rec_ppm} Let $\sigma \in [0,1)$, $A,B\sim W(n,\sigma,\operatorname{id})$ and let $X^{(0)}$ be a -- possibly random and data-dependent -- permutation such that $|S_{X^{(0)}}|\geq (1-\kappa)n$ for a constant $\kappa>0$ such that $\sqrt{1-\sigma^2}>48\kappa$.
Then by applying \texttt{PPMGM}\, with input ($\ensuremath{\mathcal{H}}(A),\ensuremath{\mathcal{H}}(B), X^{(0)}, N=2\log n$), where $\ensuremath{\mathcal{H}}(X)$ corresponds to the matrix $X$ with the diagonal removed, when $n$ is large enough, we obtain a permutation $X^{(N)}$ such that \[ \ensuremath{\mathbb{P}}(X^{(N)} = \operatorname{Id}) \geq 1- \frac{6 \log n}{n^2}.\] \end{theorem} The diagonal of the adjacency matrices $A$ and $B$ in Algorithm \ref{alg:ppmgm} was removed in the above theorem only for ease of analysis. Its proof is detailed in Section \ref{subsec:proof_unif_seed_ppm}. \begin{remark} Contrary to our previous theorems, here the strong consistency of the estimator holds uniformly over all possible seeds that satisfy the condition $|S_{X^{(0)}}|\geq (1-\kappa)n$. For this reason, we need a stronger condition than $|S_X|=\Omega(\sqrt{n\log n})$, as was the case in Theorem \ref{prop:one_it_conv}. Our result is non-trivial and cannot be obtained from Theorem \ref{prop:one_it_conv} by taking a union bound. The proof relies on a decoupling technique adapted from \citep{MaoRud}, where a similar refinement method was used for Erdös-Renyi graphs. \end{remark} \begin{remark} Contrary to the results obtained in the seedless case, which require $\sigma=o(1)$ for exact recovery \citep{Grampa}, we can allow $\sigma$ to be of constant order. The condition $\sqrt{1-\sigma^2}>48\kappa$ seems to be far from optimal, as shown in the experiments in Section \ref{sec:experiments}. For example, \texttt{PPMGM}\, can achieve exact recovery when $\kappa=0.08$ and $\sigma=0.6$. But interestingly, this condition shows that when the noise $\sigma$ increases, \texttt{PPMGM}\, needs a more accurate initialization, hence a smaller $\kappa$, to recover the latent permutation. This is confirmed by our experiments.
\end{remark} \section{Models for correlated random graphs}\label{sec:RGmodels} Most of the theoretical statistical analysis of the graph matching problem has so far been carried out for two random graph models: the \emph{correlated Erdös-Renyi} and the \emph{correlated Wigner} models. In those models, the dependence between the two graphs $A$ and $B$ is explicitly described by the inclusion of a ``noise'' parameter. The main difference between the two models is that the correlated Wigner model generates complete weighted random graphs, with Gaussian weights, while the correlated Erdös-Renyi model generates graphs with $\{0,1\}$ weights. \textbf{Gaussian Wigner model $W(n,\sigma,x^*)$}: The problem of graph matching \eqref{form:1} is well defined for matrices that are not necessarily adjacencies, so a natural extension is to consider two complete weighted graphs to be matched. In this direction, one model that has been proposed, and analysed, is the Gaussian Wigner model, see \citep{Grampa} for example. Here we assume that $A$ and $B$ are symmetric matrices with i.i.d. Gaussian entries (except for the symmetry constraint) and that there is a linear relation between them. More specifically, we assume that \[A_{ij}\sim\begin{cases}\ensuremath{\mathcal{N}}(0,\frac1n)\text{ if }i\neq j\\ \ensuremath{\mathcal{N}}(0,\frac2n)\text{ if }i= j\end{cases}\] and $B_{x^*(i)x^*(j)}=\sqrt{1-\sigma^2}A_{ij}+\sigma Z_{ij}$, where the $Z_{ij}$ are the entries of a multivariate standard Gaussian matrix. Both $A$ and $B$ are distributed as the GOE (Gaussian orthogonal ensemble). Here the parameter $\sigma>0$ should be interpreted as the noise parameter ($B$ is a ``noisy perturbation'' of $A$). \textbf{Correlated Erdös-Renyi $G(n,q,s,x^*)$:} for $q,s\in[0,1]$, the correlated Erdös-Renyi model, with latent permutation $x^*\in \ensuremath{\mathcal{S}}_n$, can be described in two steps: \begin{enumerate} \item $A$ is generated according to the Erdös-Renyi model $G(n,q)$.
\item Conditionally on $A$, the entries of $B$ are independent, distributed according to the law: \begin{equation} B_{x^*(i),x^*(j)}\sim\begin{cases} Bern(s)\quad \text{if}\quad A_{ij}=1,\\ Bern\big(\frac{q}{1-q}(1-s)\big)\quad \text{if } A_{ij}=0 \end{cases}. \end{equation} \end{enumerate} Another way to describe this model is the following: first sample a ``mother'' graph according to the Erdös-Renyi model with parameter $\frac qs$, and then generate $A_{ij}$ and $B_{x^*(i)x^*(j)}$ independently, by keeping each corresponding edge of the mother graph with probability $s$ (if $s=0$ we consider that $A$ and $B$ are independent $G(n,q)$ graphs). In this model, the parameter $s$ quantifies the correlation between $A$ and $B$, one being a perturbation of the other: when $s=1$, $A$ and $B$ are isomorphic, while for $s=0$, $A$ and $B$ are independent. \subsection{Concentration inequalities used in Theorem \ref{thm:unif_rec_ppm}}\label{sec:app_thm3} In this section we provide proofs of Lemmas \ref{lem:nb_ngbh1_mt} and \ref{lem:nb_ngbh2_mt}, used to prove Theorem \ref{thm:unif_rec_ppm}. \begin{proof}[Proof of Lemmas \ref{lem:nb_ngbh1_mt} and \ref{lem:nb_ngbh2_mt}.] Recall that $B_{ij}= \sqrt{1-\sigma^2}A_{ij}+\sigma Z_{ij}$. \paragraph{Step 1.} First let us consider the terms of the form $\langle A_{i:},A_{i:} \rangle$. We can write \[ \langle A_{i:},A_{i:} \rangle = \sum_{j=1}^{n-1}\mu_j g_j^2\] where the $g_j$ are independent standard Gaussian random variables and $\mu_j =1/n$ for all $j$. Observe that $||\mu||_2=\sqrt{\frac{n-1}{n^2}}$.
By Lemma \ref{lem:lau_mass} we have for $i \in [n]$ and all $t>0$ \[ \ensuremath{\mathbb{P}}\left(\langle A_{i:},A_{i:} \rangle \leq \frac{n-1}{n}-2\sqrt{\frac{t(n-1)}{n^2}}\right)\leq e^{-t}.\] For the choice $t= 5\log n$ we obtain \[ \langle A_{i:},A_{i:} \rangle \geq 1-O\left(\sqrt{\frac{\log n}{n}}\right)\] with probability at least $1-e^{-5\log n}$. \paragraph{Step 2.} Let us now consider terms of the form $\langle A_{i:},Z_{i:} \rangle$. We can write \[ \langle A_{i:},Z_{i:} \rangle = \frac{1}{n}\sum_{j=1}^{n-1} g_jg_j' = \frac{1}{n} G^\top G' \] where $G=(g_j)_{j=1}^{n-1}$ and $G'=(g'_j)_{j=1}^{n-1}$ are independent standard Gaussian vectors. We can write \[ G^\top G'= \norm{G}\left( \left(\frac{G}{\norm{G}}\right)^\top G'\right).\] Since the distribution of $G'$ is rotation-invariant, $(\frac{G}{\norm{G}})^\top G'$ is independent of $G$ and has distribution $\ensuremath{\mathcal{N}}(0,1)$. By the Gaussian concentration inequality we hence have \[ \left(\frac{G}{\norm{G}}\right)^\top G' \leq C\sqrt{\log n}\] with probability at least $1-e^{-5\log n}$ for a suitable choice of $C$. Similarly, by Lemma \ref{lem:lau_mass} we have \[ \norm{G} \leq 2\sqrt{n} \] with probability at least $1-e^{-5\log n}$. Hence with probability at least $1-2e^{-5\log n}$ we have \[ \frac{1}{n} G^\top G' \leq 2C\sqrt{\frac{\log n}{n}}.\] \paragraph{Step 3.} The same argument can be used to show that for $i\neq j$ \[ \ensuremath{\mathbb{P}}\left(\langle A_{i:},A_{j:} \rangle \geq C\sqrt{\frac{\log n}{n}} \right)\leq e^{-5\log n}.\] \paragraph{Conclusion.} We conclude by using the identity $B_{ij}= \sqrt{1-\sigma^2}A_{ij}+\sigma Z_{ij}$ and taking a union bound over all indices $i\neq j$. \end{proof} \section{Problem statement} The aim of \emph{graph matching} is to find, given two graphs $G=(V(G),E(G))$ and $H=(V(H),E(H))$ of equal size as input, a bijective function between $V(G)$ and $V(H)$ such that the alignment of the adjacency matrices is maximized.
More formally, if $A$ and $B$ are the adjacency matrices of $G$ and $H$ respectively, we want to solve the following optimization problem: \begin{equation}\label{form:1} \max_{x\in \mathcal{S}_n}\sum_{i,j}A_{ij}B_{x(i)x(j)}\tag{P1} \end{equation} where $\mathcal{S}_n$ is the set of all permutations of $[n]:=\{1,\cdots,n\}$. The objective function in \eqref{form:1} is called the \emph{adjacency agreement} for a given permutation $x\in\mathcal{S}_n$. Notice that if the graphs $G$ and $H$ are isomorphic, then the maximal adjacency agreement has value $2|E|$ and is attained by any isomorphism $x\in\mathcal{S}_n$. In matrix language, the problem \eqref{form:1} can be rewritten in the form \begin{equation*} \max_{X\in \mathcal{P}_n}\langle A,X BX^T\rangle_F \end{equation*} where $\mathcal{P}_n$ is the set of permutation matrices of size $n$ and $\langle \cdot,\cdot\rangle_F$ is the Frobenius inner product in the space of $n\times n$ matrices. Observe that \eqref{form:1} is a well defined problem not only for adjacency matrices, but for any pair of matrices of the same size. Moreover, this is an instance of the well-known \emph{quadratic assignment problem}, which is a combinatorially hard problem (known to be NP-hard in the worst case). It will often be useful to consider the ``lifted'' (or vector) formulation of the problem, where we identify a matrix in $\mathbb{R}^{n\times n}$ with a vector in $\mathbb{R}^{n^2}$. For a matrix $X\in \mathbb{R}^{n\times n}$ we denote by $[X]$ the vector in $\mathbb{R}^{n^2}$ obtained by stacking the columns of $X$. It is easy to see that \eqref{form:1} can be written in vector form as follows \begin{equation} \label{form:1'} \max_{[X]\in [\mathcal{P}_n]}[X]^T(B\otimes A)[X]\tag{P1'} \end{equation} where $[\mathcal{P}_n]$ is the set of permutation matrices in vector form.
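To make the equivalence between \eqref{form:1}, its matrix form and its lifted form concrete, the following pure-Python sketch (an illustration, not part of the paper's code) checks numerically that $\sum_{i,j}A_{ij}B_{x(i)x(j)}=\langle A,XBX^T\rangle_F=[X]^T(B\otimes A)[X]$ for a random permutation matrix $X$.

```python
import random

def matmul(P, Q):
    # plain triple-loop matrix product
    return [[sum(P[i][k] * Q[k][j] for k in range(len(Q)))
             for j in range(len(Q[0]))] for i in range(len(P))]

def kron(P, Q):
    # Kronecker product of an n x n matrix P with an m x m matrix Q
    n, m = len(P), len(Q)
    return [[P[i // m][j // m] * Q[i % m][j % m]
             for j in range(n * m)] for i in range(n * m)]

def vec(P):
    # stack the columns of P into a single vector
    n = len(P)
    return [P[i][j] for j in range(n) for i in range(n)]

random.seed(1)
n = 4
A = [[random.gauss(0, 1) for _ in range(n)] for _ in range(n)]
B = [[random.gauss(0, 1) for _ in range(n)] for _ in range(n)]
x = list(range(n)); random.shuffle(x)                      # a permutation x
X = [[1.0 if j == x[i] else 0.0 for j in range(n)] for i in range(n)]

# objective of (P1): sum_{ij} A_ij B_{x(i)x(j)}
obj_p1 = sum(A[i][j] * B[x[i]][x[j]] for i in range(n) for j in range(n))
# matrix form: <A, X B X^T>_F
XBXt = matmul(matmul(X, B), [list(r) for r in zip(*X)])
obj_mat = sum(A[i][j] * XBXt[i][j] for i in range(n) for j in range(n))
# lifted form (P1'): [X]^T (B kron A) [X]
v, K = vec(X), kron(B, A)
obj_vec = sum(v[i] * K[i][j] * v[j] for i in range(n * n) for j in range(n * n))

assert abs(obj_p1 - obj_mat) < 1e-9 and abs(obj_mat - obj_vec) < 1e-9
```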
Even if graph matching is known to be intractable in the worst case and the related \emph{graph isomorphism} problem has unknown complexity, both can be solved efficiently for many particular instances. Indeed, in \citep{RG_Isom_BES} the authors provide an algorithm that checks if two graphs are isomorphic in time linear in the number of edges and succeeds for ``almost all graphs''. The latter can be rephrased as ``the algorithm works with high probability if the graphs are chosen uniformly'', which is equivalent to restricting ourselves to graphs generated by the Erdös-Renyi model $G(n,\frac12)$, where each edge appears with probability $1/2$ and all edges are decided independently. Our goal is of a similar nature, that is, to propose an efficient algorithm for the graph matching problem and study its statistical guarantees for a class of random graph models. \section{Numerical experiments}\label{sec:experiments} In this section, we present numerical experiments to assess the performance of the \texttt{PPMGM} algorithm and compare it to state-of-the-art algorithms for graph matching, under the correlated Wigner model. We divide this section into two parts. In Section \ref{sec:perf_comp} we generate correlated Wigner graphs $A,B\sim W(n,\sigma,x^*)$ for a random permutation $x^*$, and apply to $A,B$ the spectral algorithms \texttt{Grampa} \citep{Grampa} and the classic \texttt{Umeyama} \citep{Spectral_weighted_Ume}, both of which work in the seedless case. As a second step, we apply algorithm \texttt{PPMGM} with the initialization given by the output of \texttt{Grampa} and \texttt{Umeyama}. We show experimentally that applying \texttt{PPMGM} improves the solution obtained in both cases, when measured as the overlap (defined in \eqref{eq:overlap_def}) of the output with the ground truth. We also run experiments by initializing \texttt{PPMGM} with $X^{(0)}$ randomly chosen at a certain distance from the ground truth permutation $X^*$.
Specifically, we select $X^{(0)}$ uniformly at random from the set of permutation matrices that satisfy $\|X^{(0)}-X^*\|_F=\theta'\sqrt{n}$, and vary the value of $\theta'\in (0,1)$. In Section \ref{sec:spar_st} we run algorithm \texttt{PPMGM} with different pairs of input matrices. We consider the Wigner correlated matrices $A,B$ and also the pairs of matrices ($A^{\operatorname{spar}_1},B^{\operatorname{spar}_1}$), ($A^{\operatorname{spar}_2},B^{\operatorname{spar}_2}$) and ($A^{\operatorname{spar}_3},B^{\operatorname{spar}_3}$), which are produced from $A,B$ by means of a sparsification procedure (detailed in Section \ref{sec:spar_st}). The main idea behind this setting is that, to the best of our knowledge, the best theoretical guarantees for exact graph matching have been obtained in \citep{MaoRud} for relatively sparse Erdös-Renyi graphs. The algorithm proposed in \citep{MaoRud} has two steps, the first of which is a seedless-type algorithm which produces a partially correct matching, later refined by a second algorithm \citep[Alg.4]{MaoRud}. Their proposed algorithm \texttt{RefinedMatching} shares similarities with \texttt{PPMGM} and with the algorithms \texttt{1-hop} \citep{LubSri,YuXuLin} and \texttt{2-hop} \citep{YuXuLin}. Formulated as it is, \texttt{RefinedMatching} \citep{MaoRud} (and the same is true for \texttt{2-hop}, for that matter) only accepts graphs with binary edges as input, and uses a threshold-based rounding approach instead of Algorithm \ref{alg:gmwm}, which might be difficult to calibrate in practice. In this way, we address experimentally the fact that the analysis (and algorithms) in \citep{MaoRud} do not extend automatically to a simple `binarization' of the (dense) Wigner matrices, and that, especially in high noise regimes, the sparsification strategies do not perform very well.
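One concrete way to draw such a seed (a minimal sketch, assuming the ground truth is the identity; for a general $x^*$, compose the result with $x^*$) is to fix a uniformly chosen set of $r$ fixed points and rejection-sample a derangement of the remaining indices:

```python
import random

def overlap(nu, nu_prime):
    # fraction of indices on which two permutations agree, cf. (eq:overlap_def)
    return sum(a == b for a, b in zip(nu, nu_prime)) / len(nu)

def seed_with_overlap(n, r, rng=random):
    """Uniform permutation of [n] with exactly r fixed points (overlap r/n
    with the identity).  Requires 0 <= r <= n and r != n-1, since exactly
    n-1 fixed points is impossible."""
    while True:
        fixed = set(rng.sample(range(n), r))
        rest = [i for i in range(n) if i not in fixed]
        image = rest[:]
        rng.shuffle(image)
        if all(i != j for i, j in zip(rest, image)):   # derangement of `rest`
            perm = list(range(n))
            for i, j in zip(rest, image):
                perm[i] = j
            return perm

random.seed(0)
x0 = seed_with_overlap(800, 80)      # overlap 0.1 with the identity
assert abs(overlap(x0, list(range(800))) - 0.1) < 1e-12
```

Each accepted draw is uniform over the permutations with exactly $r$ fixed points, since the fixed set and the derangement of the complement are both uniform; the rejection loop accepts with probability roughly $1/e$ per trial.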
\subsection{Performance of \texttt{PPMGM}}\label{sec:perf_comp} In Figure \ref{fig1-a} we plot the recovery fraction, which is defined as the overlap (see \eqref{eq:overlap_def}) between the ground truth permutation and the output of five algorithms: \texttt{Grampa}, \texttt{Umeyama}, \texttt{Grampa+PPMGM}, \texttt{Umeyama+PPMGM} and \texttt{PPMGM}. The algorithms \texttt{Grampa+PPMGM} and \texttt{Umeyama+PPMGM} use the output of \texttt{Grampa} and \texttt{Umeyama} as seeds for \texttt{PPMGM}, which is performed with $N=1$. In the algorithm \texttt{PPMGM}, we use an initial permutation $x^{(0)}\in \ensuremath{\mathcal{S}}_n$ chosen uniformly at random in the set of permutations such that $\operatorname{overlap}{(x^{(0)},x^*)}=0.08$; this is referred to as `\texttt{PPMGM} rand.init'. We take $n=800$ and consider the average overlap over $25$ Monte Carlo runs. In Figure \ref{fig1-b} we plot the performance of the \texttt{PPMGM} algorithm for randomly chosen seeds with different numbers of correctly pre-matched vertices. More specifically, we consider initial permutations $x^{(0)}_j\in \ensuremath{\mathcal{S}}_n$ (corresponding to initializations $X^{(0)}_j\in\ensuremath{\mathcal{P}}_n$) for $j=1,\cdots,4$ with $\operatorname{overlap}(x^{(0)}_1,x^*)=0.05$, $\operatorname{overlap}(x^{(0)}_2,x^*)=0.1$, $\operatorname{overlap}(x^{(0)}_3,x^*)=0.15$ and $\operatorname{overlap}(x^{(0)}_4,x^*)=0.5$. Equivalently, we have $\|X^{(0)}_j-X^*\|_F=\theta'_j\sqrt{n}$, where $\theta'_j=\sqrt{2\big(1-\operatorname{overlap}(x^{(0)}_j,x^*)\big)}$. Each permutation $x^{(0)}_j$ is chosen uniformly at random in the subset of permutations that satisfies the corresponding overlap condition. We observe that initializing the algorithm with an overlap of $0.1$ with the ground truth permutation already produces perfect recovery in one iteration for levels of noise as high as $\sigma=0.6$.
\begin{figure} \begin{subfigure}[t]{0.475\textwidth} \includegraphics[width=\textwidth]{img/refined_grampa_ume2.pdf} \caption{Performance of \texttt{PPMGM} as a refinement of the Grampa and Umeyama algorithms, compared with PPM with a random initialization $x^{(0)}$, such that $\operatorname{overlap}(x^{(0)},x^*)=0.08$.} \label{fig1-a} \end{subfigure}\hfill \begin{subfigure}[t]{0.475\textwidth} \includegraphics[width=\textwidth]{img/comparison_n1000_diff_init.pdf} \caption{Performance of \texttt{PPMGM} with different initializations. Here $in.1,in.2,in.3,in.4$ correspond to an overlap of $x^{(0)}$ with the ground truth of $0.05,0.1,0.15$ and $0.5$ respectively.} \label{fig1-b} \end{subfigure} \caption{} \label{fig:perf-1} \end{figure} \paragraph{Varying the number of iterations $N$.} We experimentally evaluate the performance of \texttt{PPMGM} when varying the number of iterations $N$ in Algorithm \ref{alg:ppmgm}. In Figure \ref{fig:perf-2} we plot the recovery rate of \texttt{PPMGM}, initialized with $x^{(0)}$, with an overlap of $0.1$ with the ground truth. In Fig. \ref{fig2-a} we see that adding more iterations increases the performance of the algorithm for $n=500$; however, the improvement is less pronounced in the higher noise regime. In other words, the number of iterations cannot make up for the fact that the initial seed is of poor quality (relative to the noise level). We use $N=1,2,4,8,30$ iterations and we observe a moderate gain between $N=8$ and $N=30$. In Fig. \ref{fig2-b} we use a matrix of size $n=1000$ and we see that the difference between using $N=1$ and $N>1$ is even less pronounced (we omit the case of $30$ iterations for readability purposes, as it is very similar to $N=8$). \begin{figure} \begin{subfigure}[t]{0.475\textwidth} \includegraphics[width=\textwidth]{img/it_matters.pdf} \caption{\texttt{PPMGM} with an initialization such that $\operatorname{overlap}(x^{(0)},x^*)=0.1$.
Here $it.1,it.2,it.3,it.4,it.5$ correspond to $1,2,4,8$ and $30$ iterations respectively. } \label{fig2-a} \end{subfigure}\hfill \begin{subfigure}[t]{0.475\textwidth} \includegraphics[width=\textwidth]{img/it_matters_n1000.pdf} \caption{Here $it.1,it.2,it.3,it.4$ correspond to $1,2,4$ and $8$ iterations respectively.} \label{fig2-b} \end{subfigure} \caption{} \label{fig:perf-2} \end{figure} \subsection{Sparsification strategies}\label{sec:spar_st} Here we run \texttt{PPMGM} using different input matrices which are all transformations of the Wigner correlated matrices $A,B$. Specifically, we compare \texttt{PPMGM} with $A,B$ as input with the application of \texttt{PPMGM} to three different pairs of input matrices ($A^{\operatorname{spar}_1},B^{\operatorname{spar}_1}$), ($A^{\operatorname{spar}_2},B^{\operatorname{spar}_2}$) and ($A^{\operatorname{spar}_3},B^{\operatorname{spar}_3}$) that are defined as follows. \begin{align*} A^{\operatorname{spar}_1}_{ij}&=\mathbbm{1}_{|A_{ij}|<\tau};\enskip B^{\operatorname{spar}_1}_{ij}=\mathbbm{1}_{|B_{ij}|<\tau}, \\ A^{\operatorname{spar}_2}_{ij}&=A_{ij}\mathbbm{1}_{|A_{ij}|<\tau};\enskip B^{\operatorname{spar}_2}_{ij}=B_{ij}\mathbbm{1}_{|B_{ij}|<\tau}, \\ A^{\operatorname{spar}_3}_{ij}&=A_{ij}\mathbbm{1}_{|A_{ij}|\in \operatorname{top_k}(A_{i:})};\enskip B^{\operatorname{spar}_3}_{ij}=B_{ij}\mathbbm{1}_{|B_{ij}|\in \operatorname{top_k}(B_{i:})}, \end{align*} where $\tau>0$ and for $k\in\mathbb{N}$ and an $n\times n$ matrix $M$, $\operatorname{top_k}(M_{i:})$ is the set of the $k$ largest elements in absolute value (breaking ties arbitrarily) of $M_{i:}$ (the $i$-th row of $M$).
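A minimal pure-Python sketch of the three sparsification operators above (an illustration with matrices stored as lists of rows, not the experiment code):

```python
def sparsify(A, tau, k):
    """Return (A_spar1, A_spar2, A_spar3): a binarization of the entries
    smaller than tau in absolute value, a hard thresholding that keeps
    those small entries, and a row-wise top-k (in absolute value) selection."""
    n = len(A)
    spar1 = [[1.0 if abs(A[i][j]) < tau else 0.0 for j in range(n)]
             for i in range(n)]
    spar2 = [[A[i][j] if abs(A[i][j]) < tau else 0.0 for j in range(n)]
             for i in range(n)]
    spar3 = []
    for i in range(n):
        # indices of the k largest entries of row i in absolute value
        topk = set(sorted(range(n), key=lambda j: abs(A[i][j]),
                          reverse=True)[:k])
        spar3.append([A[i][j] if j in topk else 0.0 for j in range(n)])
    return spar1, spar2, spar3
```

Applying the same operator, with the same $\tau$ or $k$, to both $A$ and $B$ yields the input pairs used in the experiments.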
The choice of the parameter $\tau$ is mainly determined by the sparsity assumptions in \citep[Thm.B]{MaoRud}, \emph{i.e.}, if $G,H$ are two Erdös-Renyi graphs to be matched with connection probability $p$ (which is equal to $qs$ in the definition \eqref{eq: ER_def}), then the assumption is that \begin{equation}\label{eq:sparsity_assump} (1+\epsilon)\frac{\log n}n\leq p\leq n^{\frac{1}{R\log\log n}-1} \end{equation} where $\epsilon>0$ is arbitrary and $R$ is an absolute constant. We refer the reader to \citep{MaoRud} for details. For each $p$ in the range defined by \eqref{eq:sparsity_assump} we solve the equation \begin{equation}\label{eq:param_tau} \ensuremath{\mathbb{P}}(|A_{ij}|\leq \tau_p)=2\Phi(-\tau_p\sqrt n)=p \end{equation} where $\Phi$ is the standard Gaussian cdf (which is a bijection, so $\tau_p$ is well defined). In our experiments, we solve \eqref{eq:param_tau} numerically. Notice that $A^{\operatorname{spar}_1}$ and $ B^{\operatorname{spar}_1}$ are sparse correlated Erdös-Renyi graphs with a correlation that depends on $\sigma$. For the value of $k$ that defines $A^{\operatorname{spar}_3},B^{\operatorname{spar}_3}$ we choose $k=\Omega(\log n)$ or $k=\Omega(n^{o(1)})$, to maintain the sparsity degree in \eqref{eq:sparsity_assump}. In Figure \ref{fig:spar} we plot the performance comparison between \texttt{PPMGM} without sparsification and the different sparsification strategies. We see in Figs. \ref{figsp-a} and \ref{figsp-b} (initialized with overlap $0.5$ and $0.1$) that the use of the full information $A,B$ outperforms the sparser versions in the higher noise regimes and for small overlap of the initialization. On the other hand, the performance tends to be more similar for low levels of noise and a moderately large number of correct initial seeds. In theory, sparsification strategies have a moderate denoising effect (and might considerably speed up computations), but this process seems to destroy important correlation information.
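Although we solve \eqref{eq:param_tau} numerically in the experiments, it also admits a closed form: $2\Phi(-\tau_p\sqrt n)=p$ is equivalent to $\tau_p=-\Phi^{-1}(p/2)/\sqrt n$. A quick sketch using only the Python standard library (an illustration, not the code used for the figures):

```python
from math import sqrt
from statistics import NormalDist

def tau_for_density(p, n):
    # Solve P(|A_ij| <= tau_p) = 2*Phi(-tau_p*sqrt(n)) = p for A_ij ~ N(0, 1/n):
    # tau_p = -Phi^{-1}(p/2) / sqrt(n)
    return -NormalDist().inv_cdf(p / 2) / sqrt(n)

# sanity check: plugging tau_p back into the left-hand side recovers p
n, p = 1500, 51e-3
tau = tau_for_density(p, n)
assert abs(2 * NormalDist().cdf(-tau * sqrt(n)) - p) < 1e-9
```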
\begin{figure}[!ht] \begin{subfigure}[t]{0.475\textwidth} \includegraphics[width=\textwidth]{img/comparison_n1000_fp1.pdf} \caption{Initial overlap is equal to $0.5$} \label{figsp-a} \end{subfigure}\hfill \begin{subfigure}[t]{0.475\textwidth} \includegraphics[width=\textwidth]{img/comparison_n1000_fp2.pdf} \caption{Initial overlap is equal to $0.1$} \label{figsp-b} \end{subfigure} \caption{Comparison between \texttt{PPMGM} with and without sparsification. Here $thr.1$ corresponds to the pair of matrices ($A^{\operatorname{spar}_1},B^{\operatorname{spar}_1}$), $thr.2$ corresponds to the pair ($A^{\operatorname{spar}_2},B^{\operatorname{spar}_2}$) and top $k$ corresponds to ($A^{\operatorname{spar}_3},B^{\operatorname{spar}_3}$).} \label{fig:spar} \end{figure} \subsubsection{Choice of the sparsification parameter $\tau$}\label{sec:tau_sel} Solving \eqref{eq:param_tau} for $p$ in the range \eqref{eq:sparsity_assump} we obtain a range of possible values for the sparsification parameter $\tau$. To choose between them, we use a simple grid search where we evaluate the recovery rate for each sparsification parameter on graphs of size $n=1500$, and take the mean over $25$ independent Monte Carlo runs. In Fig. \ref{fig-hm}, we plot a heatmap with the results. We see that the best performing parameter in this experiment was $\tau_5$, corresponding to a probability $p_5=51\times 10^{-3}$, although the variation among the different choices of $p$ is moderate. \begin{figure}[!ht] \centering \includegraphics[width=0.57\textwidth]{img/test.pdf} \caption{Heatmap for the recovery rate of \texttt{PPMGM} algorithm with input ($A^{\operatorname{spar}_1},B^{\operatorname{spar}_1}$) for different threshold values $\tau_i$ ($y$-axis), $i=1,\cdots,6$, and different values of $\sigma$ ($x$-axis).
Here $\tau_i$ corresponds to the solution of \eqref{eq:param_tau} with $n=1500$ and $p_i$ for $i=1,2,\cdots,6$ in a uniform grid between $p_1=42\times 10^{-3}$ and $p_6=54\times 10^{-3}$.} \label{fig-hm} \end{figure} \section{Algorithm}\label{sec:alg_res} \subsection{Projected power method for graph matching}\label{sec:ppmgm} The projected power method (PPM) has been used to solve the graph matching problem, and its variants, by several authors \citep{Villar}. Most of the work so far has been empirical and, to the best of our knowledge, theoretical guarantees have been obtained only in the case of sparse Erdös-Renyi graphs, such as in \citep[Thm.B]{MaoRud} in the case of multiple iterations, and \citep{YuXuLin,LubSri} in the case of one iteration. Interestingly, the connection with the PPM is not explicitly stated in any of these works. We start by defining the projection operator onto $\ensuremath{\mathcal{P}}_n$ for a matrix $C\in \ensuremath{\mathbb{R}}^{n\times n}$. We will use the greedy maximum weight matching (GMWM) algorithm introduced in \citep{LubSri}, for the problem of graph matching with partially correct seeds, and subsequently used in \citep{YuXuLin}. The steps are outlined in Algorithm \ref{alg:gmwm}. \begin{algorithm} \caption{\texttt{GMWM} (Greedy maximum weight matching)}\label{alg:gmwm} \begin{algorithmic}[1] \Require{A cost matrix $C\in\mathbb{R}^{n\times n}$.} \Ensure{A permutation matrix $X$.} \State Select $(i_1,j_1)$ such that $C_{i_1,j_1}$ is the largest entry in $C$ (break ties arbitrarily). Define $C^{(1)}\in\mathbb{R}^{n\times n}$: $C^{(1)}_{ij}=C_{ij}\mathbbm{1}_{i\neq i_1,j\neq j_1}-\infty\cdot\mathbbm{1}_{i=i_1\text{ or } j= j_1}$. \For{$k=2$ to $n$} \State Select $(i_k,j_k)$ such that $C^{(k-1)}_{i_k,j_k}$ is the largest entry in $C^{(k-1)}$. \State Define $C^{(k)}\in\mathbb{R}^{n\times n}$: $C^{(k)}_{ij}=C^{(k-1)}_{ij}\mathbbm{1}_{i\neq i_k,j\neq j_k}-\infty\cdot\mathbbm{1}_{i=i_k\text{ or } j= j_k}$.
\EndFor \State Define $X\in \{0,1\}^{n\times n}$: $X_{ij}=\sum^n_{k=1}\mathbbm{1}_{i=i_k,j=j_k}$. \State\Return{$X$} \end{algorithmic} \end{algorithm} Notice that the original version of GMWM works by erasing the row and column of the largest entry of the matrix $C^{(k)}$ at each step $k$. We change this to assign $-\infty$ to each element of the row and column of the largest entry (which is equivalent), mainly to maintain the original indexing. The output of Algorithm \ref{alg:gmwm} is clearly a permutation matrix, hence we define \begin{equation}\label{eq:projection} \tau (C):=\{\text{Output of GMWM with input } C\} \end{equation} which can be considered a projection since $\tau^2(C)=\tau(C)$ for all $C\in\mathbb{R}^{n\times n}$. Notice that, in general, the output of GMWM will be different from solving the linear assignment problem \begin{align*} \tilde{\tau}(C): =\argmin{}{\{\|C-X\|_F\enskip |\enskip X\in\ensuremath{\mathcal{P}}_n\}} =\argmax{\Pi\in\ensuremath{\mathcal{P}}_n}{\langle \Pi,C \rangle_F}\nonumber \end{align*} which provides an orthogonal projection, while $\tau$ corresponds to an oblique projection. \begin{algorithm} \caption{\texttt{PPMGM} (PPM for graph matching)}\label{alg:ppmgm} \begin{algorithmic}[1] \Require{Matrices $A,B$, an initial point $X^{(0)}$ and $N$ the maximum number of iterations.} \Ensure{A permutation matrix $X$.} \For{$k=0$ to $N-1$} \State $X^{(k+1)} \gets \tau(AX^{(k)}B)$. \EndFor \State\Return{$X=X^{(N)}$} \end{algorithmic} \end{algorithm} The PPM is outlined in Algorithm \ref{alg:ppmgm}. Given the estimate of the permutation $X^{(k)}$ from step $k$, the power step corresponds to the operation $AX^{(k)}B$ while the projection step is given by the application of the projection $\tau$ on $AX^{(k)}B$. The similarity matrix $C^{(k+1)}:=AX^{(k)}B$ is the matrix form of the left multiplication of $[X^{(k)}]$ by the matrix $B\otimes A$.
Indeed, given that $A$ and $B$ are symmetric matrices, we have $[AX^{(k)}B]=(B\otimes A)[X^{(k)}]$, by \citep[eqs. 6 and 10]{Schacke}. All previous works related to the PPM for graph matching use $(B\otimes A)[X^{(k)}]$ in the power step, which is highly inconvenient in practice. Also, a power step of the form $AX^{(k)}B$ connects the PPM with the seeded graph matching methods proposed for correlated Erdös-Renyi graphs \citep{LubSri,YuXuLin,MaoRud}, where related similarity matrices are used, thus providing a more general framework. Indeed, the set of elements correctly matched by the initial permutation $x^{(0)}\in\ensuremath{\mathcal{S}}_n$ can be considered as the seed of the problem, \emph{i.e.}, we take the set of seeds $S:=\{(i,i'): x^{(0)}(i)=i'\}$. Thus, the number of correct seeds will be the number of elements $i\in [n]$ such that $x^{(0)}(i)=x^*(i)$. Observe that the definition of the seed as a permutation is more general than a set $S$ of bijectively pre-matched vertices, because any such $S$ can be augmented (arbitrarily) to a permutation. \paragraph{Initialization.} We prove in Section \ref{sec:conv_analysis} that Algorithm \ref{alg:ppmgm} recovers the ground truth permutation $x^*$ provided that the initialization $x^{(0)}$ is sufficiently close to $x^*$. The initialization assumption will be written in the form \begin{equation}\label{assumption:init} \|X^{(0)}-X^*\|_F\leq \theta \sqrt n \end{equation} for some $\theta\in[0,\sqrt 2)$. Here, the value of $\theta$ measures how good $X^{(0)}$ is as a seed. Indeed, since $\|X^{(0)}-X^*\|^2_F=2(n-\langle X^{(0)},X^*\rangle_F)$, \eqref{assumption:init} can be equivalently stated as: the number of correct seeds is at least $n(2-\theta^2)/2$. The question of finding a good initialization method can be seen as a seedless graph matching problem, where only partial recovery guarantees are necessary. In practice, we can use existing seedless algorithms such as those in \citep{Spectral_weighted_Ume,Grampa,spec_align} to initialize Algorithm \ref{alg:ppmgm}.
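To make Algorithms \ref{alg:gmwm} and \ref{alg:ppmgm} concrete, the following is a minimal NumPy sketch; the function names \texttt{gmwm} and \texttt{ppmgm}, and the masking-by-$-\infty$ implementation of the greedy step, are our own illustrative choices, not a reference implementation.

```python
import numpy as np

def gmwm(C):
    """Greedy maximum weight matching (Algorithm GMWM): repeatedly select
    the largest remaining entry of C and mask its row and column with -inf."""
    C = np.array(C, dtype=float)
    n = C.shape[0]
    X = np.zeros((n, n))
    for _ in range(n):
        # position of the current largest entry (ties broken arbitrarily)
        i, j = np.unravel_index(np.argmax(C), C.shape)
        X[i, j] = 1.0
        C[i, :] = -np.inf  # equivalent to erasing row i and column j
        C[:, j] = -np.inf
    return X

def ppmgm(A, B, X0, n_iter=10):
    """Projected power method: X <- tau(A X B), with tau realized by GMWM."""
    X = X0
    for _ in range(n_iter):
        X = gmwm(A @ X @ B)
    return X
```

On a noiseless instance $B=X^{*T}AX^*$ with a seed that matches $90\%$ of the vertices (so $\theta^2=0.2$ in \eqref{assumption:init}), a few iterations of this sketch typically recover $X^*$ exactly.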
We compare different initialization methods numerically in Section \ref{sec:experiments}. \begin{remark}[PPM as a gradient method] The projected power method can be seen as a projected gradient ascent method for solving the MLE formulation in \eqref{form:1'}. From the formulation \eqref{form:1''} it is clear that the gradient of the likelihood evaluated on $X\in \ensuremath{\mathcal{P}}_n$ is $2(B\otimes A)[X]$ or, equivalently, $2AXB$ in matrix form. This interpretation of the PPM has been acknowledged in the context of other statistical problems \citep{jour,chen2016_alignment}. \end{remark} \begin{remark}[Optimality] Algorithms based on the PPM or GPM have been shown to attain optimal, or near-optimal, statistical guarantees for several problems in statistics, including community detection \citep{Wang2021OptimalNE,WangManchoso}, group synchronization \citep{boumal2016,Gao2019IterativeAF} and the generalized orthogonal Procrustes problem \citep{Ling}. \end{remark} \begin{remark}[Complexity] The computational time complexity of Algorithm \ref{alg:ppmgm} is $\mathcal{O}(n^\omega\log{n}+n^2\log^2{n})$, where $\mathcal{O}(n^\omega)$ is the matrix multiplication complexity and $\mathcal{O}(n^2\log{n})$ is the complexity of Algorithm \ref{alg:gmwm} \citep{YuXuLin}. The bound $\omega\leq 2.373$ is established in \citep{Le_Gall}. \end{remark} \iffalse \section{Proof of Theorem \ref{prop:partial_rec}} The proof of Theorem \ref{prop:partial_rec} will be based on the following lemma, which extends Proposition \ref{prop:diago_dom}. \begin{lemma}\label{lem:not_rc_dom} For a fixed $i\in[n]$, we have \begin{equation*} \ensuremath{\mathbb{P}}(C_{ii} \textup{ is not row-column dominant})\leq 16ne^{-c(\sigma)s_x^2n}. \end{equation*} \end{lemma} The proof of Lemma \ref{lem:not_rc_dom} is included in Appendix \ref{app:lem_not_rc_dom}. We now prove Theorem \ref{prop:partial_rec}.
The main idea is that for a fixed $i\in[n]$, with high probability the term $C_{ii}$ is the largest term in the $i$-th row and the $i$-th column, and then GMWM will assign $\pi(i)=i$. We will also use the following event inclusion, which follows directly from eq.~\eqref{eq:overlap_event} in Lemma \ref{lem:overlap_event} \begin{equation} \label{eq:overlap_event2} \{\operatorname{overlap}(\pi,\operatorname{id})< r/n\}\subset\bigcup^r_{i=1}\{C_{ii} \text{ is not row-column dominant }\} \end{equation} \begin{proof}[Proof of Theorem \ref{prop:partial_rec}] By eq.~\eqref{eq:overlap_event2} we have that \begin{align*} \ensuremath{\mathbb{P}}(\operatorname{overlap}(\pi,\operatorname{id})< r/n)&\leq \sum^r_{i=1}\ensuremath{\mathbb{P}}(C_{ii} \text{ is not row-column dom.})\\ &\leq \sum^r_{i=1}\ensuremath{\mathbb{P}}(\exists j\neq i,\text{ s.t. }C_{ij}\vee C_{ji}>C_{ii} )\\ &\leq 16rne^{-c(\sigma) s_x^2n} \end{align*} where we used Lemma \ref{lem:not_rc_dom} in the last inequality. \end{proof} \begin{remark} Notice that the RHS of \eqref{eq:overlap_event2} is a superset of the RHS of \eqref{eq:overlap_event}. To improve this, it is necessary to include dependency information. In other words, we need to 'beat Hölder's inequality'. To see this, define \[E_i:=\mathbbm{1}_{C_{ii}\text{ is not row-column dominant }},\enskip \varepsilon_{I}:=\mathbbm{1}_{\sum_{i\in I}E_i>0}, \text{ for } I\subset [n]\] then $\varepsilon_{I'}$, for $I'=[r]$, is the indicator of the event in the RHS of \eqref{eq:overlap_event2}. On the other hand, the indicator of the event in the RHS of \eqref{eq:overlap_event} is ${\displaystyle \prod_{\substack{I\subset[n],|I|=r}}}\varepsilon_I$.
If $\ensuremath{\mathbb{E}}\big[\varepsilon_I\big]$ is equal for all $I$, then Hölder gives \[\ensuremath{\mathbb{E}}\Big[{\displaystyle \prod_{\substack{I\subset[n],|I|=r}}}\varepsilon_I\Big]\leq \ensuremath{\mathbb{E}}[\varepsilon_{I'}]\] which does not help in quantifying the difference between \eqref{eq:overlap_event} and \eqref{eq:overlap_event2}. This is not surprising, as we are not taking into account the dependency between the events $\varepsilon_I$ for the different sets $I\subset[n],|I|=r$. \end{remark} \fi \section{Proof of Proposition \ref{prop:diago_dom}}\label{app:concentration} We divide the proof into two subsections. In Appendix \ref{app:diagdom_row_noiseless} we prove Lemma \ref{lem:tailbounds}, and in Appendix \ref{app:diagdom_row_noise} we prove part $(ii)$ of Proposition \ref{prop:diago_dom}. Before proceeding, let us introduce and recall some notation. Define $C':=AXA$ and $C'':=AXZ$; then $C=AXB=\sqrt{1-\sigma^2}C'+\sigma C''$. Recall that for a permutation $x$, $S_X$ denotes the set of fixed points of $x$ (the set of non-zero diagonal terms of its matrix representation $X$), and we will often write $s_x=|S_X|/n=Tr(X)/n$. We write $Y\sim\chi^2_K$ when the real random variable $Y$ follows a central chi-squared distribution with $K$ degrees of freedom. \subsection{Proof of Lemma \ref{lem:tailbounds}}\label{app:diagdom_row_noiseless} \iffalse We assume that $\sigma=0$, in which case we have $C=AXB=AXA=C'$, in our notation. The following are the main steps of the proof. \begin{enumerate} \item Notice that for every permutation $X$ with $s_x=|S_X|/n$ and for $i\neq j\in[n]$, the gap $C_{ii}-C_{ij}$ is of order $s_x$ in expectation. \item We prove that $C_{ii}$ and $C_{ij}$ are sufficiently concentrated around their means. In particular, the probability that $C_{ii}$ is smaller than $s_x/2$ is exponentially small. The same is true for the probability that $C_{ij}$ is larger than $s_x/2$.
\item We use the fact $\ensuremath{\mathbb{P}}(C_{ii}\leq C_{ij})<\ensuremath{\mathbb{P}}(C_{ii} \leq s_x/2)+\ensuremath{\mathbb{P}}(C_{ij}\geq s_x/2)$ to control the probability that $C$ is not diagonally dominant. \end{enumerate} We start with the following lemmas. \begin{lemma}\label{lem:expectation} For the matrix $C=AXA$ and with $s_x=|S_X|/n$ we have \[\ensuremath{\mathbb{E}}[C_{ij}]=\begin{cases} s_x+\frac1n\mathbbm{1}_{i\in S_X} \enskip\text { for }i=j\\ \frac1n\mathbbm{1}_{x(j)=i} \enskip\text { for }i\neq j\\ \end{cases}\] and from this we deduce that for $i,j\in[n]$ with $i\neq j$ \[s_x-\frac1n\leq \ensuremath{\mathbb{E}}{[C_{ii}]}-\ensuremath{\mathbb{E}}{[C_{ij}]}\leq s_x+\frac1n.\] \end{lemma} \begin{lemma}\label{lem:tailbounds} Assume that $s_x\in(10/n,1]$ and $n\geq 10$. Then for $i,j\in[n]$ with $i\neq j$ we have \begin{align}\label{eq:bounddiag} \ensuremath{\mathbb{P}}(C_{ii}\leq s_x/2)&\leq 4 e^{-\frac{s_x^2}{48}n}\\ \label{eq:boundoffdiag} \ensuremath{\mathbb{P}}(C_{ij}\geq s_x/2)&\leq 3e^{-\frac{s_x^2}{96}n} \end{align} \end{lemma} With this we can prove Proposition \ref{prop:diago_dom} part $(i)$. \begin{proof}[Proof of Prop. \ref{prop:diago_dom} $(i)$] Define the event $\mathcal{E}_j=\{C_{ii}<\frac{s_x}2\}\cup \{C_{ij}>\frac{s_x}2\}$ and note that for $j\neq i$, we have $\{C_{ij}>C_{ii}\}\subset\mathcal{E}_j$.
With this and the bounds \eqref{eq:bounddiag} and \eqref{eq:boundoffdiag} we have \begin{align*} \ensuremath{\mathbb{P}}\big(\exists j\neq i: C_{ij}>C_{ii}\big)&=\ensuremath{\mathbb{P}}(\cup_{j\neq i}\{C_{ij}>C_{ii}\})\\ &\leq \ensuremath{\mathbb{P}}(\cup_{j\neq i}\mathcal{E}_j)\\ &\leq \ensuremath{\mathbb{P}}(C_{ii}\leq \frac{s_x}{2})+\sum_{j\neq i} \ensuremath{\mathbb{P}}(C_{ij}\geq \frac{s_x}{2})\\ &\leq 4e^{-\frac{s_x^2}{96}n}+3(n-1)e^{-\frac{s_x^2}{96}n}\\ &\leq 4ne^{-\frac{s_x^2}{96}n} \end{align*} \end{proof} Before proceeding with the proofs of Lemmas \ref{lem:expectation} and \ref{lem:tailbounds}, we notice that the following decomposition holds for the matrix $C$ \begin{equation}\label{eq:Cdecom}C_{ij}=\sum_{k,k'}A_{ik}X_{k,k'}A_{k'j}=\begin{cases}\sum_{k\in S_X}A^2_{ik}+\sum_{k\notin S_X}A_{ik}A_{ix(k)},\enskip\text { for }i=j,\\ \sum^n_{k=1}A_{ik}A_{x(k)j}, \enskip\text{ for }i\neq j. \end{cases} \end{equation} \begin{proof}[Proof of Lemma \ref{lem:expectation}] From \eqref{eq:Cdecom} we have that \begin{align*} \ensuremath{\mathbb{E}}[C_{ii}]&=\sum_{k\in S_X}\ensuremath{\mathbb{E}}[A^2_{ik}]+\sum_{k\notin S_X}\ensuremath{\mathbb{E}}[A_{ik}A_{ix(k)}]\\ &=\frac{|S_X|}n+\frac{\mathbbm{1}_{i\in S_X}}n \end{align*} Similarly, for $j\neq i$ it holds \begin{align*} \ensuremath{\mathbb{E}}[C_{ij}]&=\sum^n_{k=1}\ensuremath{\mathbb{E}}[A_{ik}A_{x(k)j}]\\ &=\frac1n\mathbbm{1}_{i,j\notin S_X, x(j)=i}\\ &=\frac{\mathbbm{1}_{x(j)=i}}n \end{align*} from which the result follows easily. \end{proof} \fi The proof of Lemma \ref{lem:tailbounds} mainly revolves around the use of concentration inequalities for quadratic forms of Gaussian random vectors. For that, it will be convenient to use the following representation of the entries of $C$: \begin{equation}\label{eq:Cdecom2} C_{ij}=\langle A_{:i},XA_{:j}\rangle \end{equation} where we recall that $A_{:k}$ represents the $k$-th column of the matrix $A$.
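As a quick numerical sanity check of the representation \eqref{eq:Cdecom2}, the following hedged NumPy snippet (the sampling scheme and parameter values are our own illustrative choices) verifies entrywise that $C_{ij}=\langle A_{:i},XA_{:j}\rangle$ for $C=AXA$, and that the diagonal of $C$ concentrates around the fixed-point ratio $s_x$ while the off-diagonal entries concentrate around $0$.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 400
G = rng.normal(size=(n, n)) / np.sqrt(n)
A = (G + G.T) / np.sqrt(2)          # Wigner-type: off-diagonal variance 1/n, diagonal 2/n

# build a permutation x with fixed-point ratio s_x = 1/2
perm = np.arange(n)
moved = rng.choice(n, size=n // 2, replace=False)
perm[moved] = perm[np.roll(moved, 1)]   # one long cycle on the moved indices
X = np.eye(n)[perm]                      # X[k, perm[k]] = 1
s_x = np.mean(perm == np.arange(n))

C = A @ X @ A
# entrywise representation C_ij = <A_{:i}, X A_{:j}>
i, j = 3, 17
inner = A[:, i] @ (X @ A[:, j])

diag_mean = np.diag(C).mean()
off_mean = (C.sum() - np.trace(C)) / (n * n - n)
print(s_x, diag_mean, off_mean)
```

In this simulation the average diagonal entry is close to $s_x=1/2$, in line with the expectation computation of Lemma \ref{lem:expectation}.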
\begin{proof}[Proof of Lemma \ref{lem:tailbounds}] ~ \paragraph{High probability bound for $C_{ii}$.} Define $\tilde{a}_i$ to be a vector in $\mathbb{R}^n$ such that \begin{equation*} \tilde{a}_i(k)=\begin{cases} A_{ki},\text{ for } k\notin \{i,x^{-1}(i)\},\\ \frac1{\sqrt{2}}A_{ii},\text{ for } k\in \{i,x^{-1}(i)\}. \end{cases} \end{equation*} Using representation \eqref{eq:Cdecom2} we have \[C_{ii}=\langle \tilde{a}_i,X\tilde{a}_i\rangle + \mathcal{Z}_i\] where \begin{equation*} \mathcal{Z}_i:=\frac12A_{ii}\big(A_{x(i)i}+A_{x^{-1}(i)i}\big). \end{equation*} It is easy to see that $\sqrt{n}\tilde{a}_i$ is a standard Gaussian vector. Using Lemma \ref{lem:dist_gaussian_inner} we obtain \[ n\langle\tilde{a}_i,X\tilde{a}_i\rangle\stackrel{d}{=} \sum^{n_1}_{i=1}\mu_ig^2_i-\sum^{n_2}_{i=1}\nu_ig'^2_i \] where $(\mu_i)^{n_1}_{i=1}, (-\nu_i)^{n_2}_{i=1}$ (with $\mu_i\geq 0$, $\nu_i\geq0$ and $n_1+n_2=n$) is the sequence of eigenvalues of $\frac12(X+X^T)$ and $g=(g_1,\cdots, g_{n_1})$, $g'=(g'_1,\cdots, g'_{n_2})$ are two independent sets of i.i.d standard Gaussians. Lemma \ref{lem:dist_gaussian_inner} tells us in addition that $\|\mu\|_1-\|\nu\|_1=s_xn$, $\|\mu\|_2+\|\nu\|_2\leq \sqrt{2n}$ and $\|\mu\|_\infty,\|\nu\|_\infty\leq 1$. Using Corollary \ref{cor:lau_mass} \eqref{eq:lau_mass_lt}, we obtain \begin{equation}\label{eq:bound_aXa} \ensuremath{\mathbb{P}}(n\langle\tilde{a}_i,X\tilde{a}_i\rangle\leq s_xn-2\sqrt{2nt}-2t)\leq 2e^{-t} \end{equation} for all $t\geq 0$. To obtain a concentration bound for $\mathcal{Z}_i$ we will distinguish two cases. \noindent \textit{(a) \underline{Case $i\in S_X$.}} In this case, we have $\mathcal{Z}_i=A^2_{ii}\geq 0$, which implies that $C_{ii}\geq \langle\tilde{a}_i,X\tilde{a}_i\rangle$.
Hence \[\ensuremath{\mathbb{P}}(nC_{ii}\leq s_xn-2\sqrt{2nt}-2t)\leq 2e^{-t}.\] Replacing $t=\overline{t}:=\frac{n}{2}(\sqrt{1+\frac{s_x}2}-1)^2$ in the previous expression, one can verify\footnote{Indeed, the inequality $(\sqrt{1+x}-1)^2\geq \frac16x^2$ follows from the inequality $x^2+(2\sqrt{6}-6)x\leq 0$, which holds for $0<x\leq 1$.} that $\overline{t}\geq \frac{n}{48}s_x^2$, for $s_x\in (0,1]$, hence \begin{equation*} \ensuremath{\mathbb{P}}(C_{ii}\leq s_x/2)\leq 2e^{-\frac{s_x^2}{48}n} \end{equation*} which proves \eqref{eq:bounddiag} in this case. \noindent \textit{(b) \underline{Case $i\notin S_X$.}} Notice that in this case, $A_{ii}$ is independent of $A_{x(i)i}+A_{x^{-1}(i)i}$, hence $n\mathcal{Z}_i\stackrel{d}{=}g_1g_2$, where $g_1,g_2$ are independent standard Gaussians. Using the polarization identity $g_1g_2=\frac14(g_1+g_2)^2-\frac14(g_1-g_2)^2$, we obtain \[n\mathcal{Z}_i\stackrel{d}{=}\frac12(\tilde{g}^2_1-\tilde{g}^2_2)\] where $\tilde{g}_1,\tilde{g}_2$ are independent standard Gaussians. By Corollary \ref{cor:lau_mass} we have \begin{equation}\label{eq:boundZ_i} \ensuremath{\mathbb{P}}\Big(2n\mathcal{Z}_i\leq -4\sqrt{t}-2t\Big)\leq 2e^{-t}. \end{equation} Using \eqref{eq:bound_aXa} and \eqref{eq:boundZ_i}, we get \begin{equation*} \ensuremath{\mathbb{P}}(nC_{ii}\leq s_xn-2(\sqrt {2n}+1)\sqrt{t}-3 t)\leq 4e^{-t} \end{equation*} or, equivalently \begin{equation}\label{eq:bound_Cii} \ensuremath{\mathbb{P}}\left(C_{ii}\leq s_x-2(\sqrt2+1/\sqrt{n})\sqrt{\frac tn}-3 \frac tn\right)\leq 4e^{-t}. \end{equation} Replacing $t=\overline{t}:=\frac{n}{36}\big(\sqrt{d^2+6s_x}-d\big)^2$, where $d=2(\sqrt2+1/\sqrt{n})$, in the previous expression and noticing that $\overline{t}\geq \frac{1}{6}s_x^2n$, we obtain the bound \begin{equation*} \ensuremath{\mathbb{P}}(C_{ii}\leq s_x/2)\leq 4e^{-\frac{s^2_x}{6}n}.
\end{equation*} \paragraph{High probability bound for $C_{ij}$, $i\neq j$.} Let us first define the vectors $\tilde{a}_i,\tilde{a}_j\in \mathbb{R}^{n}$ as \begin{equation*} \tilde{a}_i(k):=\begin{cases} A_{ki},\enskip \text{ for }k\notin\{j,x^{-1}(i)\},\\ 0,\enskip \text{ for }k\in\{j,x^{-1}(i)\},\\ \end{cases} \end{equation*} and \begin{equation*} \tilde{a}_j(k):=\begin{cases} A_{kj},\enskip \text{ for }k\notin\{j,x^{-1}(i)\},\\ 0,\enskip \text{ for }k\in\{j,x^{-1}(i)\}.\\ \end{cases} \end{equation*} Contrary to $A_{:i}$ and $A_{:j}$, which share a coordinate, the vectors $\tilde{a}_i$ and $\tilde{a}_j$ are independent. With this notation, we have the following decomposition \begin{equation*} C_{ij}=\langle \tilde{a}_i,X\tilde{a}_j\rangle+ A_{ji}\Big(A_{x(j)j}+A_{x^{-1}(i)i}\Big). \end{equation*} For the first term, we will use the following polarization identity \begin{equation}\label{eq:polarization} \langle \tilde{a}_i,X\tilde{a}_j\rangle= \|\frac12(\tilde{a}_i+X\tilde{a}_j)\|^2-\|\frac12(\tilde{a}_i-X\tilde{a}_j)\|^2. \end{equation} By the independence of $\tilde{a}_i$ and $\tilde{a}_j$, it is easy to see that $\tilde{a}_i+X\tilde{a}_j$ and $\tilde{a}_i-X\tilde{a}_j$ are independent Gaussian vectors and $\ensuremath{\mathbb{E}}[\langle \tilde{a}_i,X\tilde{a}_j\rangle]=0$. Using \eqref{eq:polarization} and defining $\mathcal{Z}_{ij}:=nA_{ji}\Big(A_{x(j)j}+A_{x^{-1}(i)i}\Big)$, it is easy to see that \begin{equation}\label{eq:decomp_offdiag} nC_{ij}\stackrel{d}{=}\sum^{n-1}_{i=1}\mu_ig^2_i-\sum^{n-1}_{i=1}\nu_ig'^2_i+\mathcal{Z}_{ij} \end{equation} where $g_1,\cdots,g_{n-1}$ and $g'_1,\cdots,g'_{n-1}$ are two sets of independent standard Gaussian variables and $\mu_i,\nu_i\in \{\frac12,\frac34,1\}$, for $i\in[n-1]$. The sequences $(\mu_i)^{n-1}_{i=1},(\nu_i)^{n-1}_{i=1}$ will be characterised below, when we divide the analysis into the two cases $x(j)=i$ and $x(j)\neq i$. We first state the following claim about $\mathcal{Z}_{ij}$.
\begin{claim}\label{claim:distrZ} For $i\neq j$, we have \[\mathcal{Z}_{ij}\stackrel{d}{=}\begin{cases} q_{ij}(\zeta_1-\zeta_2)\enskip \text{ if }x(j)\neq i,\\ 2\zeta_3 \enskip \text{ if }x(j)=i, \end{cases}\] where $\zeta_1,\zeta_2$ and $\zeta_3$ are independent Chi-squared random variables with one degree of freedom and \[q_{ij}=\begin{cases} \sqrt\frac32 \enskip \text{ if }i\in S_X,j\notin S_X\text{ or }i\notin S_X,j\in S_X,\\ \sqrt 2 \text{ if }i,j\in S_X,\\ \frac1{\sqrt2 } \text{ if }i,j\notin S_X. \end{cases}\] \end{claim} We delay the proof of this claim until the end of this section. From the expression \eqref{eq:decomp_offdiag}, we deduce that the vectors $g=(g_1,\cdots,g_{n-1})$, $g'=(g'_1,\cdots,g'_{n-1})$ and $\mathcal{Z}_{ij}$ are independent. Hence, by Claim \ref{claim:distrZ} the following decomposition holds \begin{equation*} nC_{ij}\stackrel{d}{=}\sum^{n}_{i=1}\mu_ig^2_i-\sum^{n}_{i=1}\nu_ig'^2_i \end{equation*} where \[\mu_{n}= \begin{cases} q_{ij}\text{ if }x(j)\neq i, \\ 2\text{ if }x(j)=i, \end{cases} \text{ and } \nu_{n}= \begin{cases} q_{ij}\text{ if }x(j)\neq i, \\ 0\text{ if }x(j)=i. \end{cases} \] Let us define $\mu:=(\mu_1,\cdots,\mu_{n})$ and $\nu:=(\nu_1,\cdots,\nu_{n})$. We will now distinguish two cases. \noindent \textit{(a) \underline{Case $x(j)\neq i$.}} In this case, we can verify that one of the $\mu_1,\cdots,\mu_{n-1}$ is equal to $0$ (and the same is true for the values $\nu_1,\cdots,\nu_{n-1}$). Assume without loss of generality that $\mu_1=\nu_1=0$. Also, one of the following situations must happen for the sequence $\mu_2,\cdots,\mu_{n-1}$ (resp. $\nu_2,\cdots,\nu_{n-1}$): either $n-3$ of the elements of the sequence are equal to $\frac12$ and one is equal to $1$, or $n-4$ are equal to $\frac12$ and two are equal to $\frac34$, or $n-3$ are equal to $\frac12$ and one is equal to $\frac34$.
In either of those cases, the following is verified \begin{align*} \|\mu\|_1-\|\nu\|_1&=0,\\ \|\mu\|_2+\|\nu\|_2&\leq \sqrt{2n},\\ \|\mu\|_\infty,\|\nu\|_\infty&\leq \sqrt{2} \end{align*} where the first equality comes from Lemma \ref{lem:expectation}, and the inequality on the norm $\|\cdot\|_2$ comes from the fact that in the worst case $\|\mu\|_2=\|\nu\|_2\leq \sqrt{\frac{n+1}4}$. The statement about the norm $\|\cdot\|_\infty$ can be easily seen from the definition of $\mu$ and $\nu$. Using \eqref{eq:lau_mass_ut}, we obtain \begin{equation*} \ensuremath{\mathbb{P}}(nC_{ij}\geq 4\sqrt{nt}+4t)\leq 2e^{-t}. \end{equation*} Replacing $t=\overline{t}:=\frac{n}{4}(\sqrt{1+\frac{s_x}2}-1)^2$ in the previous expression and noticing that $\overline{t}\geq \frac1{96}s_x^2n$ for $s_x\in (0,1]$ leads to the bound \begin{equation*} \ensuremath{\mathbb{P}}(C_{ij}\geq s_x/2)\leq 2e^{-\frac{s_x^2}{96}n}. \end{equation*} \noindent \textit{(b) \underline{Case $x(j)= i$.}} In this case, we have that for the sequence $\mu_1,\cdots,\mu_{n-1}$ (resp. $\nu_1,\cdots,\nu_{n-1}$): either $n-2$ of the elements of the sequence are equal to $\frac12$ and one is equal to $1$, or $n-3$ are equal to $\frac12$ and two are equal to $\frac34$. In either case, the following holds \begin{align*} \|\mu\|_1-\|\nu\|_1&=2,\\ \|\mu\|_2+\|\nu\|_2&\leq 2\sqrt{n},\\ \|\mu\|_\infty,\|\nu\|_\infty&\leq 2. \end{align*} Here, the inequalities for the norms $\|\cdot\|_1,\|\cdot\|_{\infty}$ follow directly from the definition of $\mu$ and $\nu$, and the inequality for $\|\cdot\|_2$ follows from the fact that, in the worst case, $\|\mu\|_2+\|\nu\|_2=\sqrt{\frac{n+6}4}+\sqrt{\frac{n+2}4}$. Using \eqref{eq:lau_mass_ut}, we get \begin{equation*} \ensuremath{\mathbb{P}}(nC_{ij}\geq 2+4\sqrt{nt}+4t)\leq 2e^{-t}.
\end{equation*} Replacing $t=\overline{t}:=\frac{n}{4}(\sqrt{1+\frac{s_x}2-\frac2n}-1)^2$ in the previous expression and noticing that $\overline{t}\geq \frac1{20}s_x^2n$ for $s_x\in (10/n,1]$ we get \begin{equation*} \ensuremath{\mathbb{P}}(C_{ij}\geq s_x/2)\leq 2e^{-\frac{s_x^2}{20}n+\frac4n}\leq 3e^{-\frac{s_x^2}{20}n}, \end{equation*} where we used that $n\geq 10$. \end{proof} \begin{proof}[Proof of Claim \ref{claim:distrZ}] Observe that when $x(j)=i$ (or equivalently $x^{-1}(i)=j$) we have $\mathcal{Z}_{ij}=2nA^2_{ij}$. Given that $i\neq j$ by assumption, it holds that $A_{ij}\sim \ensuremath{\mathcal{N}} (0,\frac1n)$, which implies that $\mathcal{Z}_{ij}\stackrel{d}{=}2\zeta_3$ for $\zeta_3\sim \chi^2_1$. In the case $x(j)\neq i$, let us define \begin{equation*} \psi_1:=\sqrt{n}A_{ij},\enskip\psi_2:=\sqrt{n}A_{jx(j)},\enskip\psi_3:=\sqrt{n}A_{ix^{-1}(i)}, \end{equation*} which are all independent Gaussian random variables. Moreover, $\psi_1\sim \ensuremath{\mathcal{N}}(0,1)$ and \[\psi_2+\psi_3 \sim \begin{cases} \ensuremath{\mathcal{N}}(0,2)\text{ if }i,j\notin S_X,\\ \ensuremath{\mathcal{N}}(0,3) \text{ if }i\in S_X,j\notin S_X\text{ or }i\notin S_X,j\in S_X,\\ \ensuremath{\mathcal{N}}(0,4)\text { if } i,j\in S_X. \end{cases}\] Consider the case $i,j\notin S_X$. In this case, it holds \begin{align*} \mathcal{Z}_{ij}=\sqrt{2}\psi_1\Big(\frac{\psi_2+\psi_3}{\sqrt{2}}\Big) =\frac{1}{\sqrt{2}}{\Big(\frac{\psi_1}{\sqrt 2}+\frac{\psi_2+\psi_3}{2}\Big)}^2-\frac{1}{\sqrt{2}}{\Big(\frac{\psi_1}{\sqrt 2}-\frac{\psi_2+\psi_3}{2}\Big)}^2. \end{align*} Notice that $\frac{\psi_1}{\sqrt 2}+\frac{\psi_2+\psi_3}{2}$ and $\frac{\psi_1}{\sqrt 2}-\frac{\psi_2+\psi_3}{2}$ are independent standard normal random variables, hence $\mathcal{Z}_{ij}\stackrel{d}{=}\frac1{\sqrt{2}}(\zeta_1-\zeta_2)$, where $\zeta_1$ and $\zeta_2$ are independent $\chi^2_1$ random variables. The proof for the other cases is analogous.
\end{proof} \subsection{Proof of Proposition \ref{prop:diago_dom} part $(ii)$}\label{app:diagdom_row_noise} Now we consider the case where $\sigma\neq 0$. It is easy to see that the analysis of the noiseless case still applies (up to re-scaling by $\sqrt{1-\sigma^2}$) for the matrix $C'=AXA$. We can proceed in an analogous way for the matrix $C''=AXZ$, which will complete the analysis (recalling that $C=\sqrt{1-\sigma^2}C'+\sigma C''$). Before we proceed with the proof, we explain how the tail analysis of the entries of $C'$ in Prop.~\ref{prop:diago_dom} part $(i)$ helps us with the tail analysis of $C''$. Observe that for each $i,j\in[n]$ we have \begin{equation*} C''_{ij}=\sum_{k,k'}A_{ik}X_{k,k'}Z_{k',j}=\sum^n_{k=1}A_{ik}Z_{x(k)j}= \langle A_{:i},XZ_{:j}\rangle. \end{equation*} The term $C''_{ij}$, for all $i,j\in[n]$, can be controlled similarly to the term $C'_{i'j'}$ (when $i'\neq j'$). Indeed, we have the following lemma. \begin{lemma}\label{lem:tailbound_noise} For $t\geq 0$ we have \begin{equation*} \ensuremath{\mathbb{P}}(nC''_{ij}\leq -4\sqrt{nt}-2t)=\ensuremath{\mathbb{P}}(nC''_{ij}\geq 4\sqrt{nt}+2t)\leq 2e^{-t}. \end{equation*} Consequently, \begin{equation*} \ensuremath{\mathbb{P}}(C''_{ij}\geq s_x/2)\leq 2e^{-\frac{s_x^2}{96}n}. \end{equation*} \end{lemma} \begin{proof} We define $h_1:=\frac12(A_{:i}+XZ_{:j})$ and $h_2:=\frac12(A_{:i}-XZ_{:j})$. It is easy to see that $h_1$ and $h_2$ are two i.i.d Gaussian vectors of dimension $n$. By the polarization identity, we have \begin{align*} n \langle A_{:i},XZ_{:j}\rangle=n(\|h_1\|^2-\|h_2\|^2) \stackrel{d}{=}\sum^n_{i=1}\mu_ig^2_i-\sum^n_{i=1}\nu_ig'^2_i \end{align*} where $g=(g_1,\cdots,g_n)$ and $g'=(g'_1,\cdots,g'_n)$ are independent standard Gaussian vectors and the vectors $\mu=(\mu_1,\cdots,\mu_n),\nu=(\nu_1,\cdots,\nu_n)$ have non-negative entries that satisfy, for all $i\in [n]$, $\mu_i,\nu_i\in \{\frac12,\frac34,1\}$.
For $\mu_i$ (and the same is true for $\nu_i$) the following two cases can happen: either $n-1$ of its entries are $\frac12$ and one entry takes the value $1$ (when $i=j$), or $n-2$ of its entries are $\frac12$ and two entries take the value $\frac34$ (when $i\neq j$). In any of those cases, one can readily see that \[\|\mu\|_1=\|\nu\|_1,\enskip \|\mu\|_2+\|\nu\|_2\leq 2\sqrt{n},\enskip \|\mu\|_\infty,\|\nu\|_\infty\leq 1.\] Using Corollary \ref{cor:lau_mass} we obtain \begin{align*} \ensuremath{\mathbb{P}}\big(n(\|h_1\|^2-\|h_2\|^2)\geq 4\sqrt{nt}+2t\big)&\leq 2e^{-t},\\ \ensuremath{\mathbb{P}}\big(n(\|h_1\|^2-\|h_2\|^2)\leq -4\sqrt{nt}-2t\big)&\leq 2e^{-t}. \end{align*} Arguing as in the proof of Proposition \ref{prop:diago_dom} part $(i)$ we obtain the bound \begin{equation*} \ensuremath{\mathbb{P}}(C''_{ij}\geq s_x/2)\leq 2e^{-\frac{s_x^2}{96}n}. \end{equation*} \end{proof} Now we introduce some definitions that will be used in the proof. We define $s_{\sigma,x}:=\frac12\sqrt{1-\sigma^2}s_x$, and for $\delta>0$, $i,j\in[n]$, we define the following events \[\mathcal{E}^i_\delta:=\{\sqrt{1-\sigma^2}C'_{ii}\leq s_{\sigma,x}+\delta \}\cup \{\sigma C''_{ii}\leq -\delta\},\] \[\mathcal{E}^{ij}:=\{\sqrt{1-\sigma^2}C'_{ij}\geq s_{\sigma,x}/2 \}\cup \{\sigma C''_{ij}\geq s_{\sigma,x}/2 \}\enskip\text{, for }i\neq j.\] One can easily verify that $\{C_{ii}\leq s_{\sigma,x}\}\subset \mathcal{E}^i_{\delta}$, hence it suffices to control the probability of $\mathcal{E}^i_{\delta}$. For that we use the union bound and the already established bounds in Lemmas \ref{lem:tailbounds} and \ref{lem:tailbound_noise}. To handle the off-diagonal case, we observe that $\{C_{ij}\geq s_{\sigma,x}\}\subset \mathcal{E}^{ij}$. The following lemma allows us to bound the probability of the events $\mathcal{E}^i_\delta$ and $\mathcal{E}^{ij}$. \begin{lemma}\label{lem:probaevents} Let $\delta$ be such that $0\leq \delta\leq \frac{s_x}2\sqrt{1-\sigma^2}$.
Then for $i,j\in[n]$ with $i\neq j$ we have the following bounds \begin{align}\label{eq:eventdiag} \ensuremath{\mathbb{P}}(\mathcal{E}^i_\delta)&\leq 4e^{-\frac{1}{96}(\frac{s_x}2-\frac\delta{\sqrt{1-\sigma^2}})^2n}+2e^{-\frac{1}{96}(\frac{\delta}{\sigma})^2n} \\ \label{eq:eventoffdiag} \ensuremath{\mathbb{P}}(\mathcal{E}^{ij})& \leq 4e^{-\frac1{384} s_x^2(\frac{1-\sigma^2}{\sigma^2}\wedge 1)n}. \end{align} In particular, we have \begin{equation}\label{eq:eventdiag2} \ensuremath{\mathbb{P}}(\mathcal{E}^i_{\delta_{\sigma,x}})\leq 6e^{-\frac1{384} s_x^2(\frac{1-\sigma^2}{1+2\sigma\sqrt{1-\sigma^2}})n} \end{equation} where $\delta_{\sigma,x}=\frac{\sigma\sqrt{1-\sigma^2}}{\sigma+\sqrt{1-\sigma^2}}\frac{s_x}2$. \end{lemma} \begin{proof} Using \eqref{eq:bound_Cii}, we have that \begin{equation*} \ensuremath{\mathbb{P}}\Big(\sqrt{1-\sigma^2}C'_{ii}\leq \sqrt{1-\sigma^2}\big(s_x-2(\sqrt2+1/\sqrt{n})\sqrt{\frac tn}-3 \frac tn\big)\Big)\leq 4e^{-t}. \end{equation*} Replacing $t=\overline{t}:=\frac{n}{36}\big(\sqrt{d^2+{6s_x-\frac{12\delta}{\sqrt{1-\sigma^2}}}}-d\big)^2$ in the previous expression, where $d=2(\sqrt2+1/\sqrt{n})$, and observing that $\overline{t}\geq \frac{1}{6}(\frac{s_x}{2}-\frac{\delta}{\sqrt{1-\sigma^2}})^2n$, which is valid for $0\leq \delta\leq \frac{s_x}2\sqrt{1-\sigma^2}$, we obtain \begin{equation*} \ensuremath{\mathbb{P}}\Big(\sqrt{1-\sigma^2}C'_{ii}\leq s_{\sigma,x}+\delta \Big)\leq 4e^{-\frac{1}{6}(\frac{s_x}{2}-\frac{\delta}{\sqrt{1-\sigma^2}})^2n}.
\end{equation*} Using this and Lemma \ref{lem:tailbound_noise} we have \begin{align} \ensuremath{\mathbb{P}}(\mathcal{E}^i_\delta)&\leq \ensuremath{\mathbb{P}}(\sqrt{1-\sigma^2}C'_{ii}\leq s_{\sigma,x}+\delta)+\ensuremath{\mathbb{P}}(\sigma C''_{ii}\leq -\delta)\nonumber\\ &\leq 4e^{-\frac{1}{6}(\frac{s_x}2-\frac\delta{\sqrt{1-\sigma^2}})^2n}+2e^{-\frac{1}{96}(\frac{\delta}{\sigma})^2n}.\nonumber \end{align} Similarly, to prove \eqref{eq:eventoffdiag} we verify that \begin{align*} \ensuremath{\mathbb{P}}(\mathcal{E}^{ij})&\leq \ensuremath{\mathbb{P}}(C'_{ij}\geq \frac{s_x}4)+\ensuremath{\mathbb{P}}(C''_{ij}\geq \frac{\sqrt{1-\sigma^2}}{\sigma}\frac{s_x}4)\\ &\leq 2e^{-\frac{1}{384}s_x^2n}+2e^{-\frac1{384} s_x^2(\frac{1-\sigma^2}{\sigma^2})n}\\ &\leq 4e^{-\frac1{384} s_x^2(\frac{1-\sigma^2}{\sigma^2}\wedge 1)n}. \end{align*} To prove \eqref{eq:eventdiag2} it suffices to use \eqref{eq:eventdiag} with the choice $\delta=\delta_{\sigma,x}=\frac{\sigma\sqrt{1-\sigma^2}}{\sigma+\sqrt{1-\sigma^2}}\frac{s_x}2$. \end{proof} With this we prove the diagonal dominance for each fixed row of $C$. \begin{proof}[Proof of Prop. \ref{prop:diago_dom} part $(ii)$] Define $\tilde{\mathcal{E}}_j:=\{C_{ii}\leq s_{\sigma,x}\}\cup\{C_{ij}\geq s_{\sigma,x}\}$, which clearly satisfies $\{C_{ii}\leq C_{ij}\}\subset\tilde{\mathcal{E}}_j$.
Then by the union bound, \begin{align*} \ensuremath{\mathbb{P}}(\cup_{j\neq i}\tilde{\mathcal{E}}_j)&\leq \ensuremath{\mathbb{P}}(C_{ii}\leq s_{\sigma,x})+\sum_{j\neq i} \ensuremath{\mathbb{P}}(C_{ij}\geq s_{\sigma,x})\\ &\leq \ensuremath{\mathbb{P}}(\mathcal{E}^i_{\delta_{\sigma,x}})+\sum_{j\neq i}\ensuremath{\mathbb{P}}(\mathcal{E}^{ij})\\ &\leq 6e^{-\frac1{384} s_x^2(\frac{1-\sigma^2}{1+2\sigma\sqrt{1-\sigma^2}})n}+4(n-1)e^{-\frac1{384} s_x^2(\frac{1-\sigma^2}{\sigma^2}\wedge 1)n}\\ &\leq 5ne^{-\frac1{384} s_x^2(\frac{1-\sigma^2}{1+2\sigma\sqrt{1-\sigma^2}})n} \end{align*} where in the third inequality we used Lemma \ref{lem:probaevents}, and in the last inequality we used the fact that $\frac{1-\sigma^2}{\sigma^2}\wedge 1\geq \frac{1-\sigma^2}{1+2\sigma\sqrt{1-\sigma^2}}$. \end{proof} \section{Proof of Lemma \ref{lem:not_rc_dom}}\label{app:lem_not_rc_dom} The proof of Lemma \ref{lem:not_rc_dom} uses elements of the proof of Proposition \ref{prop:diago_dom}. The interested reader is invited to read the proof of Proposition \ref{prop:diago_dom} first. \begin{proof}[Proof of Lemma \ref{lem:not_rc_dom}] It will be useful to first generalize our notation. For that, we denote \[C_{ij,x}=(AXB)_{ij}, \enskip C'_{ij,x}=(AXA)_{ij},\enskip C''_{ij,x}=(AXZ)_{ij} \] for $x\in \ensuremath{\mathcal{S}}_n$, and \[\mathcal{E}^{ij}_{x^{-1}}:=\{\sqrt{1-\sigma^2}C'_{ij,x^{-1}}\geq s_{\sigma,x}/2 \}\cup \{\sigma C''_{ij,x^{-1}}\geq s_{\sigma,x}/2 \}\] where $x^{-1}$ is the inverse permutation of $x$. The fact that $\ensuremath{\mathbb{P}}(C_{ii,x}<C_{ij,x})\leq 8e^{-c(\sigma)s_x^2n}$ follows directly from the bound for $\tilde{\mathcal{E}}_j$ derived in the proof of Proposition \ref{prop:diago_dom} part $(ii)$. To bound $\ensuremath{\mathbb{P}}(C_{ii,x}<C_{ji,x})$, notice that $C'_{ji,x}=C'_{ij,x^{-1}}$ and that $C''_{ji,x}\stackrel{d}{=}C''_{ij,x^{-1}}$. On the other hand, notice that $s_x=s_{x^{-1}}$ (hence $s_{\sigma,x}=s_{\sigma,x^{-1}}$).
Arguing as in Lemma \ref{lem:probaevents}, it is easy to see that \[\ensuremath{\mathbb{P}}(C_{ii,x}<C_{ji,x})\leq 8e^{-c(\sigma)s_x^2n}.\] The bound on $\ensuremath{\mathbb{P}}(\exists j,\text{ s.t }C_{ij,x}\vee C_{ji,x}>C_{ii,x})$ then follows directly by the union bound. \end{proof} \section{Proofs of Lemmas \ref{lem:diagdom_LAP} and \ref{lem:overlap_event}} \label{app:proofs_lem_diagdom} \begin{proof}[Proof of Lemma \ref{lem:diagdom_LAP}] By assumption $C$ is diagonally dominant, which implies that $\exists i_1$ such that $C_{i_1i_1}=\max_{i,j}C_{ij}$ (in other words, if the largest entry of $C$ is in the $i_1$-th row, then it has to be $C_{i_1i_1}$, otherwise it would contradict the diagonal dominance of $C$). In the first step of $\operatorname{GMWM}$ we select $C_{i_1i_1}$, assign $\pi(i_1)=i_1$ and erase the $i_1$-th row and column of $C$. By erasing the $i_1$-th row and column of $C$ we obtain a matrix which is itself diagonally dominant. So by iterating this argument we see that there exist $i_1,\cdots,i_n\in[n]$ such that $\pi(i_k)=i_k$, for all $k$, so $\pi$ has to be the identity permutation. This proves that if $C$ is diagonally dominant, then $\Pi=\operatorname{Id}$. Taking the contrapositive, \eqref{eq:probneqId} follows. \end{proof} \begin{proof}[Proof of Lemma \ref{lem:overlap_event}] We argue by contradiction. Assume that for some $1\leq k\leq r$, we have $\pi(i_k)\neq i_k$ (and $\pi^{-1}(i_k)\neq i_k$). This means that at some step $j$ the algorithm selects either $C^{(j)}_{i_k\pi(i_k)}$ or $C^{(j)}_{\pi^{-1}(i_k)\pi(i_k)}$ as the largest entry, but this contradicts the row-column dominance of $i_k$. This proves that if there exists a set of indices $I_r\subset[n]$ of size $r$ such that for all $i\in I_r$, $C_{ii}$ is row-column dominant, then that set is selected by the algorithm, which implies that $\pi(i)=i$ for $i\in I_r$, thus $\operatorname{overlap}(\pi,\operatorname{id})\geq r$.
\eqref{eq:overlap_event} follows by contraposition. \end{proof} \section{Additional technical lemmas}\label{sec:additionalLemmas} Here we gather some technical lemmas used throughout the paper. \subsection{General concentration inequalities} The following lemma corresponds to \citep[Lemma 1.1]{LauMass} and controls the tails of weighted sums of squares of Gaussian random variables. \begin{lemma}[Laurent-Massart bound]\label{lem:lau_mass} Let $X_1,\cdots,X_n$ be i.i.d standard Gaussian random variables. Let $\mu=(\mu_1,\cdots,\mu_n)$ be a vector with non-negative entries and define $\zeta=\sum^n_{i=1}\mu_i(X^2_i-1)$. Then it holds for all $t\geq 0$ that \begin{align*} \ensuremath{\mathbb{P}}(\zeta\geq 2\|\mu\|_2\sqrt{t}+2\|\mu\|_\infty t)\leq e^{-t}\\ \ensuremath{\mathbb{P}}(\zeta\leq -2\|\mu\|_2\sqrt{t})\leq e^{-t} \end{align*} \end{lemma} An immediate corollary now follows. \begin{corollary}\label{cor:lau_mass} Let $X_1,\cdots,X_{n_1}$ and $Y_1,\cdots,Y_{n_2}$ be two independent sets of i.i.d standard Gaussian random variables. Let $\mu=(\mu_1,\cdots,\mu_{n_1})$ and $\nu=(\nu_1,\cdots,\nu_{n_2})$ be two vectors with non-negative entries. Define $\zeta=\sum^{n_1}_{i=1}\mu_iX^2_i$ and $\xi=\sum^{n_2}_{i=1}\nu_iY^2_i$. Then it holds for $t\geq 0$ that \begin{align}\label{eq:lau_mass_ut} \ensuremath{\mathbb{P}}\big(\zeta-\xi\geq \|\mu\|_1-\|\nu\|_1+2(\|\mu\|_2+\|\nu\|_2)\sqrt{t}+2\|\mu\|_\infty t\big)&\leq 2e^{-t}, \\ \label{eq:lau_mass_lt} \ensuremath{\mathbb{P}}\big(\zeta-\xi\leq \|\mu\|_1-\|\nu\|_1-2(\|\mu\|_2+\|\nu\|_2)\sqrt{t}-2\|\nu\|_\infty t\big)&\leq 2e^{-t}. \end{align} \end{corollary} The next lemma gives us a distributional equality for terms of the form $\langle g, X g\rangle $ where $g$ is a standard Gaussian vector and $X$ is a permutation matrix. \begin{lemma}\label{lem:dist_gaussian_inner} Let $X\in\ensuremath{\mathcal{P}}_n$ and $g=(g_1,\cdots,g_n)$ be a standard Gaussian vector.
Then it holds \[\langle g,Xg\rangle\stackrel{d}{=}\sum^{n}_{i=1}\lambda_ig'^2_i,\] where $\lambda_i$ are the eigenvalues of $\frac12(X+X^T)$ and $g'=(g'_1,\cdots,g'_n)$ is a vector of independent standard Gaussians. Moreover, if $|S_X|=s_xn$ for $s_x\in(0,1]$, $\mu\in \mathbb{R}^{n_1}$ is a vector containing the positive eigenvalues of $\frac12(X+X^T)$, and $-\nu\in \mathbb{R}^{n_2}$ is a vector containing the negative eigenvalues of $\frac12(X+X^T)$, then \begin{align*} \|\mu\|_1-\|\nu\|_1&=s_xn,\\ \sqrt{n/2}\leq\|\mu\|_2+\|\nu\|_2&\leq \sqrt{2n},\\ \|\mu\|_\infty,\|\nu\|_\infty&\leq 1 . \end{align*} \end{lemma} \begin{proof} Notice that $\langle g,Xg\rangle=\langle g,\frac12(X+X^T)g\rangle$ and, given the symmetry of the matrix $\frac12(X+X^T)$, all its eigenvalues are real. Consider its eigendecomposition $\frac12(X+X^T)=V\Lambda V^T$. We have that \begin{align*} \langle g,\frac12(X+X^T)g\rangle&= (V^Tg)^T\Lambda V^Tg \stackrel{d}{=}\sum^n_{i=1}\lambda_ig'^2_i \end{align*} using the rotation invariance of standard Gaussian vectors. Notice that \[|S_X|=Tr(X)=Tr\left(\frac12(X+X^T)\right)=\sum^n_{i=1}\lambda_i\] which leads to \[\|\mu\|_1-\|\nu\|_1=\sum^n_{i=1}\lambda_i=|S_X|=s_xn.\] The fact that $\|\mu\|_\infty,\|\nu\|_\infty\leq 1$ follows easily since $X$ is an orthogonal matrix. For the remaining bounds, notice that \[\|\mu\|_2^2+\|\nu\|_2^2=\Big\|\frac12(X+X^T)\Big\|_F^2=\frac{n+Tr(X^2)}2\in\Big[\frac n2,n\Big].\] The lower bound follows since $\|\mu\|_2+\|\nu\|_2\geq \sqrt{\|\mu\|_2^2+\|\nu\|_2^2}\geq \sqrt{n/2}$, and the upper bound follows from $\|\mu\|_2+\|\nu\|_2\leq \sqrt{2}\sqrt{\|\mu\|_2^2+\|\nu\|_2^2}\leq \sqrt{2n}$. \end{proof} \section{Algorithm}\label{sec:alg_res} \subsection{Projected power method for Graph matching}\label{sec:ppmgm} The projected power method (PPM) has been used to solve the graph matching problem, and its variants, by several authors \citep{Villar}.
Most of the work so far has been empirical and, to the best of our knowledge, theoretical guarantees have been obtained only in the case of sparse Erdös-Renyi graphs, such as in \citep[Thm.B]{MaoRud} in the case of multiple iterations, and \citep{YuXuLin,LubSri} in the case of one iteration. Interestingly, the connection with the PPM is not explicitly stated in any of these works. We start by defining the projection operator onto $\ensuremath{\mathcal{P}}_n$ for a matrix $C\in \ensuremath{\mathbb{R}}^{n\times n}$. We will use the greedy maximum weight matching (GMWM) algorithm introduced in \citep{LubSri}, for the problem of graph matching with partially correct seeds, and subsequently used in \citep{YuXuLin}. The steps are outlined in Algorithm \ref{alg:gmwm}. \begin{algorithm} \caption{\texttt{GMWM} (Greedy maximum weight matching)}\label{alg:gmwm} \begin{algorithmic}[1] \Require{A cost matrix $C\in\mathbb{R}^{n\times n}$.} \Ensure{A permutation matrix $X$.} \State Select $(i_1,j_1)$ such that $C_{i_1,j_1}$ is the largest entry in $C$ (break ties arbitrarily). Define $C^{(1)}\in\mathbb{R}^{n\times n}$: $C^{(1)}_{ij}=C_{ij}\mathbbm{1}_{i\neq i_1,j\neq j_1}-\infty\cdot\mathbbm{1}_{i=i_1\text{ or } j= j_1}$. \For{$k=2$ to $n$} \State Select $(i_k,j_k)$ such that $C^{(k-1)}_{i_k,j_k}$ is the largest entry in $C^{(k-1)}$. \State Define $C^{(k)}\in\mathbb{R}^{n\times n}$: $C^{(k)}_{ij}=C^{(k-1)}_{ij}\mathbbm{1}_{i\neq i_k,j\neq j_k}-\infty\cdot\mathbbm{1}_{i=i_k\text{ or } j= j_k}$. \EndFor \State Define $X\in \{0,1\}^{n\times n}$: $X_{ij}=\sum^n_{k=1}\mathbbm{1}_{i=i_k,j=j_k}$. \State\Return{$X$} \end{algorithmic} \end{algorithm} Notice that the original version of GMWM works by erasing the row and column of the largest entry of the matrix $C^{(k)}$ at each step $k$. We change this to assign $-\infty$ to each element of the row and column of the largest entry (which is equivalent), mainly to maintain the original indexing.
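For concreteness, the greedy selection above can be sketched in a few lines of Python. This is a minimal illustration of Algorithm \ref{alg:gmwm} only; the function name \texttt{gmwm} and the list-of-lists matrix encoding are our own choices, not part of any reference implementation.

```python
def gmwm(C):
    """Greedy maximum weight matching on a cost matrix C (list of lists).

    Returns the permutation pi as a list: row i is matched to column pi[i].
    Ties are broken by scan order, mirroring the 'break ties arbitrarily' step.
    """
    n = len(C)
    C = [row[:] for row in C]  # work on a copy of C
    pi = [None] * n
    for _ in range(n):
        # select the largest remaining entry (i_k, j_k)
        _, i_k, j_k = max((C[i][j], i, j) for i in range(n) for j in range(n))
        pi[i_k] = j_k
        # "erase" row i_k and column j_k by assigning -infinity
        for t in range(n):
            C[i_k][t] = float("-inf")
            C[t][j_k] = float("-inf")
    return pi
```

In particular, when the input matrix is diagonally dominant the procedure returns the identity permutation, in line with Lemma \ref{lem:diagdom_LAP}.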
The output of Algorithm \ref{alg:gmwm} is clearly a permutation matrix, hence we define \begin{equation}\label{eq:projection} \tau (C):=\{\text{Output of GMWM with input } C\} \end{equation} which can be considered a projection since $\tau^2(C)=\tau(C)$ for all $C\in\mathbb{R}^{n\times n}$. Notice that, in general, the output of GMWM will be different from solving the linear assignment problem \begin{align*} \tilde{\tau}(C):=\argmin{}{\{\|C-X\|_F\enskip |\enskip X\in\ensuremath{\mathcal{P}}_n\}} =\argmax{\Pi\in\ensuremath{\mathcal{P}}_n}{\langle \Pi,C \rangle_F}\nonumber \end{align*} which provides an orthogonal projection, while $\tau$ corresponds to an oblique projection in general. \begin{algorithm} \caption{\texttt{PPMGM} (PPM for graph matching)}\label{alg:ppmgm} \begin{algorithmic}[1] \Require{Matrices $A,B$, an initial point $X^{(0)}$ and $N$ the maximum number of iterations.} \Ensure{A permutation matrix $X$.} \For{$k=0$ to $N-1$} \State $X^{(k+1)} \gets \tau(AX^{(k)}B)$. \EndFor \State\Return{$X=X^{(N)}$} \end{algorithmic} \end{algorithm} The PPM is outlined in Algorithm \ref{alg:ppmgm}. Given the estimate of the permutation $X^{(k)}$ from step $k$, the power step corresponds to the operation $AX^{(k)}B$ while the projection step is given by the application of the projection $\tau$ on $AX^{(k)}B$. The similarity matrix $C^{(k+1)}:=AX^{(k)}B$ is the matrix form of the left multiplication of $[X^{(k)}]$ by the matrix $B\otimes A$. Indeed, given that $A$ and $B$ are symmetric matrices, we have $[AX^{(k)}B]=(B\otimes A)[X^{(k)}]$, by \citep[eqs. 6 and 10]{Schacke}. All previous works related to the PPM for graph matching use $(B\otimes A)[X^{(k)}]$ in the power step, which is highly inconvenient in practice. Also, a power step of the form $AX^{(k)}B$ connects the PPM with the seeded graph matching methods proposed for correlated Erdös-Renyi graphs \citep{LubSri,YuXuLin,MaoRud} where related similarity matrices are used, thus providing a more general framework.
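To fix ideas, one full projected power iteration can be sketched as follows. This is a plain Python sketch of Algorithm \ref{alg:ppmgm} in which a greedy GMWM-style routine plays the role of $\tau$; all helper names and the toy instance are ours.

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def tau(C):
    """Greedy (GMWM-style) projection of a cost matrix onto permutation matrices."""
    n = len(C)
    C = [row[:] for row in C]
    X = [[0] * n for _ in range(n)]
    for _ in range(n):
        _, i, j = max((C[a][b], a, b) for a in range(n) for b in range(n))
        X[i][j] = 1
        for t in range(n):  # erase row i and column j
            C[i][t] = float("-inf")
            C[t][j] = float("-inf")
    return X

def ppmgm(A, B, X0, N):
    """Projected power iterations X <- tau(A X B), started at X0."""
    X = X0
    for _ in range(N):
        X = tau(matmul(matmul(A, X), B))
    return X

# tiny sanity-check instance (ours): with A = B = Id the power step returns
# X itself and tau leaves a permutation matrix fixed, so the iteration is stationary
I3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
X0 = [[0, 1, 0], [0, 0, 1], [1, 0, 0]]
```

This stationarity check is only a structural sanity test of the iteration, not a statistical claim about recovery.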
Indeed, the set of elements correctly matched by the initial permutation $x^{(0)}\in\ensuremath{\mathcal{S}}_n$ can be considered as the seed of the problem, \emph{i.e.}, we take the set of seeds $S:=\{(i,i'): x^{(0)}(i)=i'\}$. Thus, the number of correct seeds will be the number of elements $i\in [n]$ such that $x^{(0)}(i)=x^*(i)$. Observe that the definition of the seed as a permutation is more general than a set $S$ of bijectively pre-matched vertices, because $S$ can be augmented (arbitrarily) to a permutation. \paragraph{Initialization.} We prove in Section \ref{sec:conv_analysis} that Algorithm \ref{alg:ppmgm} recovers the ground truth permutation $x^*$ provided that the initialization $x^{(0)}$ is sufficiently close to $x^*$. The initialization assumption will be written in the form \begin{equation}\label{assumption:init} \|X^{(0)}-X^*\|_F\leq \theta \sqrt n \end{equation} for some $\theta\in[0,\sqrt 2)$. Here, the value of $\theta$ measures how good $X^{(0)}$ is as a seed. Indeed, since $\|X^{(0)}-X^*\|^2_F=2\big(n-|\{i:x^{(0)}(i)=x^*(i)\}|\big)$, \eqref{assumption:init} can be equivalently stated as: the number of correct seeds is at least $n(2-\theta^2)/2$. The question of finding a good initialization method can be seen as a seedless graph matching problem, where only partial recovery guarantees are necessary. In practice, we can use existing seedless algorithms such as those in \citep{Spectral_weighted_Ume,Grampa,spec_align} to initialize Algorithm \ref{alg:ppmgm}. We compare different initialization methods numerically in Section \ref{sec:experiments}. \begin{remark}[PPM as a gradient method] The projected power method can be seen as a projected gradient ascent method for solving the MLE formulation in \eqref{form:1'}. From the formulation \eqref{form:1''} it is clear that the gradient of the likelihood evaluated on $X\in \ensuremath{\mathcal{P}}_n$ is $2(B\otimes A)[X]$ or, equivalently, $2AXB$ in matrix form.
This interpretation of PPM has been acknowledged in the context of other statistical problems \citep{jour,chen2016_alignment}. \end{remark} \begin{remark}[Optimality] Algorithms based on PPM or GPM have been shown to attain optimal, or near-optimal, statistical guarantees for several problems in statistics, including community detection \citep{Wang2021OptimalNE,WangManchoso}, group synchronization \citep{boumal2016,Gao2019IterativeAF} and the generalized orthogonal Procrustes problem \citep{Ling}. \end{remark} \begin{remark}[Complexity] The computational time complexity of Algorithm \ref{alg:ppmgm} is $\mathcal{O}(n^\omega\log{n}+n^2\log^2{n})$, where $\mathcal{O}(n^\omega)$ is the matrix multiplication complexity and $\mathcal{O}(n^2\log{n})$ is the complexity of Algorithm \ref{alg:gmwm} \citep{YuXuLin}. In \citep{Le_Gall}, the bound $\omega\leq 2.373$ is established. \end{remark} \section{Problem statement} The aim of \emph{graph matching} is to find, given two graphs $G=(V(G),E(G))$ and $H=(V(H),E(H))$ of equal size as input, a bijective function between $V(G)$ and $V(H)$ such that the alignment of the adjacency matrices is maximized. More formally, if $A$ and $B$ are the adjacency matrices of $G$ and $H$ respectively, we want to solve the following optimization problem: \begin{equation}\label{form:1} \max_{x\in \mathcal{S}_n}\sum_{i,j}A_{ij}B_{x(i)x(j)}\tag{P1} \end{equation} where $\mathcal{S}_n$ is the set of all permutations of $[n]:=\{1,\cdots,n\}$. The objective function in \eqref{form:1} is called the \emph{adjacency agreement} for a given permutation $x\in\mathcal{S}_n$. Notice that if the graphs $G$ and $H$ are isomorphic, then the maximal adjacency agreement has value $2|E|$ and is attained by any isomorphism $x\in\mathcal{S}_n$.
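As a toy illustration of \eqref{form:1} (our own example, not part of the analysis), the following snippet evaluates the adjacency agreement of a permutation and checks that an isomorphism of a $3$-vertex path attains the maximal value $2|E|$:

```python
def adjacency_agreement(A, B, x):
    """sum_{i,j} A[i][j] * B[x[i]][x[j]] for a permutation x given as a list."""
    n = len(A)
    return sum(A[i][j] * B[x[i]][x[j]] for i in range(n) for j in range(n))

def relabel(A, x):
    """Adjacency matrix of the same graph with vertex i renamed x[i]."""
    n = len(A)
    B = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            B[x[i]][x[j]] = A[i][j]
    return B

# toy example: the path 0-1-2 (|E| = 2), relabeled by x* = (2, 0, 1)
A_path = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
x_star = [2, 0, 1]
B_path = relabel(A_path, x_star)
```

By construction $x^*$ is an isomorphism between the two labelings, so its agreement equals $2|E|=4$, while a wrong matching scores strictly less.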
In matrix language, the problem \eqref{form:1} can be rewritten in the form \begin{equation*} \max_{X\in \mathcal{P}_n}\langle A,X BX^T\rangle_F \end{equation*} where $\mathcal{P}_n$ is the set of permutation matrices of size $n$ and $\langle \cdot,\cdot\rangle_F$ is the Frobenius inner product in the space of $n\times n$ matrices. Observe that \eqref{form:1} is a well defined problem not only for adjacency matrices, but for any pair of matrices of the same size. Moreover, this is an instance of the well-known \emph{quadratic assignment problem}, which is a combinatorially hard problem (known to be NP-hard in the worst case). It will often be useful to consider the ``lifted'' (or vector) formulation of the problem, where we identify a matrix in $\mathbb{R}^{n\times n}$ with a vector in $\mathbb{R}^{n^2}$. For a matrix $X\in \mathbb{R}^{n\times n}$ we denote by $[X]$ the vector in $\mathbb{R}^{n^2}$ obtained by stacking the columns of $X$ together. It is easy to see that \eqref{form:1} can be written in vector form as follows \begin{equation} \label{form:1'} \max_{[X]\in [\mathcal{P}_n]}[X]^T(B\otimes A)[X]\tag{P1'} \end{equation} where $[\mathcal{P}_n]$ is the set of permutation matrices in vector form. Even if graph matching is known to be intractable in the worst case and the related \emph{graph isomorphism} problem has unknown complexity, both can be solved efficiently for many particular instances. Indeed, in \citep{RG_Isom_BES} the authors provide an algorithm that checks whether two graphs are isomorphic in time linear in the number of edges, and succeeds for ``almost all graphs''. The latter can be rephrased as ``the algorithm works with high probability if the graphs are chosen uniformly'', which is equivalent to restricting ourselves to graphs generated by means of the Erdös-Renyi model $G(n,\frac12)$, where each edge appears with probability $1/2$ and all edges are decided independently.
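The equivalence between the matrix and lifted formulations can be checked numerically. The sketch below, with hand-rolled \texttt{kron} and \texttt{vec} helpers following the column-stacking convention above, verifies $\langle A, XBX^T\rangle_F=[X]^T(B\otimes A)[X]$ on a small symmetric instance of our own choosing:

```python
def vec(X):
    """Stack the columns of X into a single list (column-major)."""
    n = len(X)
    return [X[i][j] for j in range(n) for i in range(n)]

def kron(B, A):
    """Kronecker product B (x) A as a dense list of lists."""
    nb, na = len(B), len(A)
    return [[B[p][q] * A[r][s] for q in range(nb) for s in range(na)]
            for p in range(nb) for r in range(na)]

def matrix_form(A, B, X):
    """<A, X B X^T>_F via a direct quadruple loop."""
    n = len(A)
    return sum(A[i][j] * X[i][k] * B[k][l] * X[j][l]
               for i in range(n) for j in range(n)
               for k in range(n) for l in range(n))

def lifted_form(A, B, X):
    """[X]^T (B (x) A) [X] in the lifted formulation."""
    v, K = vec(X), kron(B, A)
    m = len(v)
    return sum(v[p] * K[p][q] * v[q] for p in range(m) for q in range(m))

# small symmetric instance and a permutation matrix (our toy values)
A_sym = [[2, 1, 0], [1, 3, 1], [0, 1, 4]]
B_sym = [[1, 0, 2], [0, 2, 1], [2, 1, 3]]
X_perm = [[0, 1, 0], [0, 0, 1], [1, 0, 0]]
```

The two evaluations agree exactly on integer inputs, which makes the check robust to the index conventions used in \texttt{vec} and \texttt{kron}.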
Our goal is of a similar nature, that is, to propose an efficient algorithm for the graph matching problem and study its statistical guarantees for a class of random graph models. \iffalse \section{Proof of Theorem \ref{prop:partial_rec}} The proof of Theorem \ref{prop:partial_rec} will be based on the following lemma, which extends Proposition \ref{prop:diago_dom}. \begin{lemma}\label{lem:not_rc_dom} For a fixed $i\in[n]$, we have \begin{equation*} \ensuremath{\mathbb{P}}(C_{ii} \textup{ is not row-column dominant})\leq 16ne^{-c(\sigma)s_x^2n} \end{equation*} \end{lemma} The proof of Lemma \ref{lem:not_rc_dom} is included in Appendix \ref{app:lem_not_rc_dom}. We now prove Theorem \ref{prop:partial_rec}. The main idea is that for a fixed $i\in[n]$, with high probability the term $C_{ii}$ is the largest term in the $i$-th row and the $i$-th column, and then GMWM will assign $\pi(i)=i$. We will also use the following event inclusion, which is direct from Lemma \ref{lem:overlap_event} eq.\eqref{eq:overlap_event} \begin{equation} \label{eq:overlap_event2} \{\operatorname{overlap}(\pi,\operatorname{id})< r/n\}\subset\bigcup^r_{i=1}\{C_{ii} \text{ is not row-column dominant }\} \end{equation} \begin{proof}[Proof of Theorem \ref{prop:partial_rec} ] By eq.\eqref{eq:overlap_event2} we have that \begin{align*} \ensuremath{\mathbb{P}}(\operatorname{overlap}(\pi,\operatorname{id})< r/n)&\leq \sum^r_{i=1}\ensuremath{\mathbb{P}}(C_{ii} \text{ is not row-column dom.})\\ &\leq \sum^r_{i=1}\ensuremath{\mathbb{P}}(\exists j\neq i,\text{ s.t }C_{ij}\vee C_{ji}>C_{ii} )\\ &\leq 16rne^{-c(\sigma) s_x^2n} \end{align*} where we used Lemma \ref{lem:not_rc_dom} in the last inequality. \end{proof} \begin{remark} Notice that the RHS of \eqref{eq:overlap_event2} is a superset of the RHS of \eqref{eq:overlap_event}. To improve this, it is necessary to include dependency information. In other words, we need to 'beat Hölder's inequality'.
To see this, define \[E_i:=\mathbbm{1}_{C_{ii}\text{ is not row-column dominant }},\enskip \varepsilon_{I}:=\mathbbm{1}_{\sum_{i\in I}E_i>0}, \text{ for } I\subset [n]\] so that $\varepsilon_{I'}$, with $I'=[r]$, is the indicator of the event in the RHS of \eqref{eq:overlap_event2}. On the other hand, the indicator of the event in the RHS of \eqref{eq:overlap_event} is ${\displaystyle \prod_{\substack{I\subset[n],|I|=r}}}\varepsilon_I$. If $\ensuremath{\mathbb{E}}\big[\varepsilon_I\big]$ is equal for all $I$, then Hölder gives \[\ensuremath{\mathbb{E}}\Big[{\displaystyle \prod_{\substack{I\subset[n],|I|=r}}}\varepsilon_I\Big]\leq \ensuremath{\mathbb{E}}[\varepsilon_{I'}]\] which does not help quantify the difference between \eqref{eq:overlap_event} and \eqref{eq:overlap_event2}. This is not surprising, as we are not taking into account the dependency between the events $\varepsilon_I$ for the different sets $I\subset[n],|I|=r$. \end{remark} \fi \section{Proof of Proposition \ref{prop:diago_dom}}\label{app:concentration} We divide the proof into two subsections. In Appendix \ref{app:diagdom_row_noiseless} we prove Lemma \ref{lem:tailbounds} and in Appendix \ref{app:diagdom_row_noise} we prove part $(ii)$ of Proposition \ref{prop:diago_dom}. Before proceeding, let us introduce and recall some notation. Define $C':=AXA$ and $C'':=AXZ$, then $C=AXB=\sqrt{1-\sigma^2}C'+\sigma C''$. Recall that for a permutation $x$, $S_X$ will denote the set of fixed points of $x$ (the set of non-zero diagonal terms of its matrix representation $X$) and we will often write $s_x=|S_X|/n=Tr(X)/n$. We will write $Y\sim\chi^2_K$ when a real random variable $Y$ follows a central Chi-squared distribution with $K$ degrees of freedom. \subsection{Proof of Lemma \ref{lem:tailbounds}}\label{app:diagdom_row_noiseless} \iffalse We assume that $\sigma=0$, in which case we have $C=AXB=AXA=C'$, in our notation. The following are the main steps of the proof.
\begin{enumerate} \item Notice that for all the permutations $X$ such that $s_x=|S_X|/n$ and for $i\neq j\in[n]$ the gap $C_{ii}-C_{ij}$ is of order $s_x$ in expectation. \item We prove that $C_{ii}$ and $C_{ij}$ are sufficiently concentrated around their means. In particular, the probability that $C_{ii}$ is smaller than $s_x/2$ is exponentially small. The same is true for the probability that $C_{ij}$ is larger than $s_x/2$. \item We use the fact that $\ensuremath{\mathbb{P}}(C_{ii}\leq C_{ij})\leq\ensuremath{\mathbb{P}}(C_{ii} \leq s_x/2)+\ensuremath{\mathbb{P}}(C_{ij}\geq s_x/2)$ to control the probability that $C$ is not diagonally dominant. \end{enumerate} We start with the following lemmas. \begin{lemma}\label{lem:expectation} For the matrix $C=AXA$ and with $s_x=|S_X|/n$ we have \[\ensuremath{\mathbb{E}}[C_{ij}]=\begin{cases} s_x+\frac1n\mathbbm{1}_{i\in S_X} \enskip\text { for }i=j\\ \frac1n\mathbbm{1}_{x(j)=i} \enskip\text { for }i\neq j\\ \end{cases}\] and from this we deduce that for $i,j\in[n]$ with $i\neq j$ \[s_x-\frac1n\leq \ensuremath{\mathbb{E}}{[C_{ii}]}-\ensuremath{\mathbb{E}}{[C_{ij}]}\leq s_x+\frac1n\] \end{lemma} \begin{lemma}\label{lem:tailbounds} Assume that $s_x\in(10/n,1]$ and $n\geq 10$. Then for $i,j\in[n]$ with $i\neq j$ we have \begin{align}\label{eq:bounddiag} \ensuremath{\mathbb{P}}(C_{ii}\leq s_x/2)&\leq 4 e^{-\frac{s_x^2}{48}n}\\ \label{eq:boundoffdiag} \ensuremath{\mathbb{P}}(C_{ij}\geq s_x/2)&\leq 3e^{-\frac{s_x^2}{96}n} \end{align} \end{lemma} With this we can prove Proposition \ref{prop:diago_dom} part $(i)$. \begin{proof}[Proof of Prop. \ref{prop:diago_dom} $(i)$] Define the event $\mathcal{E}_j=\{C_{ii}<\frac{s_x}2\}\cup \{C_{ij}>\frac{s_x}2\}$ and note that for $j\neq i$, we have $\{C_{ij}>C_{ii}\}\subset\mathcal{E}_j$.
With this and the bounds \eqref{eq:bounddiag} and \eqref{eq:boundoffdiag} we have \begin{align*} \ensuremath{\mathbb{P}}\big(\exists j\neq i: C_{ij}>C_{ii}\big)&=\ensuremath{\mathbb{P}}(\cup_{j\neq i}\{C_{ij}>C_{ii}\})\\ &\leq \ensuremath{\mathbb{P}}(\cup_{j\neq i}\mathcal{E}_j)\\ &\leq \ensuremath{\mathbb{P}}(C_{ii}\leq \frac{s_x}{2})+\sum_{j\neq i} \ensuremath{\mathbb{P}}(C_{ij}\geq \frac{s_x}{2})\\ &\leq 4e^{-\frac{s_x^2}{96}n}+3(n-1)e^{-\frac{s_x^2}{96}n}\\ &\leq 4ne^{-\frac{s_x^2}{96}n} \end{align*} \end{proof} Before proceeding with the proofs of Lemmas \ref{lem:expectation} and \ref{lem:tailbounds}, we notice that the following decomposition holds for the matrix $C$ \begin{equation}\label{eq:Cdecom}C_{ij}=\sum_{k,k'}A_{ik}X_{k,k'}A_{k'j}=\begin{cases}\sum_{k\in S_X}A^2_{ik}+\sum_{k\notin S_X}A_{ik}A_{ix(k)},\enskip\text { for }i=j,\\ \sum^n_{k=1}A_{ik}A_{x(k)j}, \enskip\text{ for }i\neq j. \end{cases} \end{equation} \begin{proof}[Proof of Lemma \ref{lem:expectation}] From \eqref{eq:Cdecom} we have that \begin{align*} \ensuremath{\mathbb{E}}[C_{ii}]&=\sum_{k\in S_X}\ensuremath{\mathbb{E}}[A^2_{ik}]+\sum_{k\notin S_X}\ensuremath{\mathbb{E}}[A_{ik}A_{ix(k)}]\\ &=\frac{|S_X|}n+\frac{\mathbbm{1}_{i\in S_X}}n \end{align*} where the second sum vanishes since, for $k\notin S_X$, the entries $A_{ik}$ and $A_{ix(k)}$ are independent and centered. Similarly, for $j\neq i$ it holds \begin{align*} \ensuremath{\mathbb{E}}[C_{ij}]&=\sum^n_{k=1}\ensuremath{\mathbb{E}}[A_{ik}A_{x(k)j}]\\ &=\frac1n\mathbbm{1}_{i,j\notin S_X, x(j)=i}\\ &=\frac{\mathbbm{1}_{x(j)=i}}n \end{align*} from which the result follows easily. \end{proof} \fi The proof of Lemma \ref{lem:tailbounds} mainly revolves around the use of concentration inequalities for quadratic forms of Gaussian random vectors. For that, it will be useful to use the following representation of the entries of $C$. \begin{equation}\label{eq:Cdecom2} C_{ij}=\langle A_{:i},XA_{:j}\rangle \end{equation} where we recall that $A_{:k}$ represents the $k$-th column of the matrix $A$.
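The representation \eqref{eq:Cdecom2} is a purely algebraic identity relying only on the symmetry of $A$, and can be sanity-checked on a toy instance; the helpers and numerical values below are ours:

```python
def perm_matrix(x):
    """Matrix representation of a permutation given as a list."""
    n = len(x)
    X = [[0] * n for _ in range(n)]
    for i in range(n):
        X[i][x[i]] = 1
    return X

def entry_via_columns(A, X, i, j):
    """<A_{:i}, X A_{:j}>: the i-th column of A against X times the j-th column."""
    n = len(A)
    XAj = [sum(X[k][t] * A[t][j] for t in range(n)) for k in range(n)]
    return sum(A[k][i] * XAj[k] for k in range(n))

def entry_via_product(A, X, i, j):
    """(A X A)_{ij} expanded directly as sum_{k,l} A_{ik} X_{kl} A_{lj}."""
    n = len(A)
    return sum(A[i][k] * X[k][l] * A[l][j] for k in range(n) for l in range(n))

# symmetric toy matrix and a 3-cycle permutation (our choices)
A_sym = [[2, 1, 0], [1, 3, 1], [0, 1, 4]]
X_cyc = perm_matrix([1, 2, 0])
```

For a non-symmetric $A$ the two expressions would differ, which is why the symmetry of $A$ matters in \eqref{eq:Cdecom2}.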
\begin{proof}[Proof of Lemma \ref{lem:tailbounds}] ~ \paragraph{High probability bound for $C_{ii}$.} Define $\tilde{a}_i$ to be a vector in $\mathbb{R}^n$ such that \begin{equation*} \tilde{a}_i(k)=\begin{cases} A_{ki},\text{ for } k\notin \{i,x^{-1}(i)\},\\ \frac1{\sqrt{2}}A_{ii},\text{ for } k\in \{i,x^{-1}(i)\}. \end{cases} \end{equation*} Using representation \eqref{eq:Cdecom2} we have \[C_{ii}=\langle \tilde{a}_i,X\tilde{a}_i\rangle + \mathcal{Z}_i\] where \begin{equation*} \mathcal{Z}_i:=\frac12A_{ii}\big(A_{x(i)i}+A_{x^{-1}(i)i}\big). \end{equation*} It is easy to see that $\sqrt{n}\tilde{a}_i$ is a standard Gaussian vector. Using Lemma \ref{lem:dist_gaussian_inner} we obtain \[ n\langle\tilde{a}_i,X\tilde{a}_i\rangle\stackrel{d}{=} \sum^{n_1}_{i=1}\mu_ig^2_i-\sum^{n_2}_{i=1}\nu_ig'^2_i \] where $(\mu_i)^{n_1}_{i=1}, (-\nu_i)^{n_2}_{i=1}$ (with $\mu_i\geq 0$, $\nu_i\geq0$ and $n_1+n_2=n$) is the sequence of eigenvalues of $\frac12(X+X^T)$ and $g=(g_1,\cdots, g_{n_1})$, $g'=(g'_1,\cdots, g'_{n_2})$ are two independent sets of i.i.d standard Gaussians. Lemma \ref{lem:dist_gaussian_inner} tells us in addition that $\|\mu\|_1-\|\nu\|_1=s_xn$, $\|\mu\|_2+\|\nu\|_2\leq \sqrt{2n}$ and $\|\mu\|_\infty,\|\nu\|_\infty\leq 1$. Using Corollary \ref{cor:lau_mass} \eqref{eq:lau_mass_lt}, we obtain \begin{equation}\label{eq:bound_aXa} \ensuremath{\mathbb{P}}(n\langle\tilde{a}_i,X\tilde{a}_i\rangle\leq s_xn-2\sqrt{2nt}-2t)\leq 2e^{-t} \end{equation} for all $t\geq 0$. To obtain a concentration bound for $\mathcal{Z}_i$ we will distinguish two cases. \noindent \textit{(a) \underline{Case $i\in S_X$.}} In this case, we have $\mathcal{Z}_i=A_{ii}^2\geq 0$, which implies that $C_{ii}\geq \langle\tilde{a}_i,X\tilde{a}_i\rangle$.
Hence \[\ensuremath{\mathbb{P}}(nC_{ii}\leq s_xn-2\sqrt{2nt}-2t)\leq 2e^{-t}.\] Replacing $t=\overline{t}:=\frac{n}{2}(\sqrt{1+\frac{s_x}2}-1)^2$ in the previous expression, one can verify\footnote{Indeed, the inequality $(\sqrt{1+x}-1)^2\geq \frac16x^2$ follows from the inequality $x^2+(2\sqrt{6}-6)x\leq 0$, which holds for $0<x\leq 1$.} that $\overline{t}\geq \frac{n}{48}s_x^2$, for $s_x\in (0,1]$, hence \begin{equation*} \ensuremath{\mathbb{P}}(C_{ii}\leq s_x/2)\leq 2e^{-\frac{s_x^2}{48}n} \end{equation*} which proves \eqref{eq:bounddiag} in this case. \noindent \textit{(b) \underline{Case $i\notin S_X$.}} Notice that in this case, $A_{ii}$ is independent of $A_{x(i)i}+A_{x^{-1}(i)i}$, hence $n\mathcal{Z}_i\stackrel{d}{=}g_1g_2$, where $g_1,g_2$ are independent standard Gaussians (assuming $x(i)\neq x^{-1}(i)$; the case of a transposition is analogous). Using the polarization identity $g_1g_2=\frac14(g_1+g_2)^2-\frac14(g_1-g_2)^2$, we obtain \[n\mathcal{Z}_i\stackrel{d}{=}\frac12(\tilde{g}^2_1-\tilde{g}^2_2)\] where $\tilde{g}_1,\tilde{g}_2$ are independent standard Gaussians. By Corollary \ref{cor:lau_mass} we have \begin{equation}\label{eq:boundZ_i} \ensuremath{\mathbb{P}}\Big(2n\mathcal{Z}_i\leq -4\sqrt{t}-2t\Big)\leq 2e^{-t}. \end{equation} Using \eqref{eq:bound_aXa} and \eqref{eq:boundZ_i}, we get \begin{equation*} \ensuremath{\mathbb{P}}(nC_{ii}\leq s_xn-2(\sqrt {2n}+1)\sqrt{t}-3 t)\leq 4e^{-t} \end{equation*} or, equivalently \begin{equation}\label{eq:bound_Cii} \ensuremath{\mathbb{P}}\left(C_{ii}\leq s_x-2(\sqrt2+1/\sqrt{n})\sqrt{\frac tn}-3 \frac tn\right)\leq 4e^{-t}. \end{equation} Replacing $t=\overline{t}:=\frac{n}{36}\big(\sqrt{d^2+6s_x}-d\big)^2$, where $d=2(\sqrt2+1/\sqrt{n})$, in the previous expression and noticing that $\overline{t}\geq \frac{1}{48}s_x^2n$, we obtain the bound \begin{equation*} \ensuremath{\mathbb{P}}(C_{ii}\leq s_x/2)\leq 4e^{-\frac{s^2_x}{48}n}.
\end{equation*} \paragraph{High probability bound for $C_{ij}$, $i\neq j$.} Let us first define the vectors $\tilde{a}_i,\tilde{a}_j\in \mathbb{R}^{n}$ as \begin{equation*} \tilde{a}_i(k):=\begin{cases} A_{ki},\enskip \text{ for }k\notin\{j,x^{-1}(i)\},\\ 0,\enskip \text{ for }k\in\{j,x^{-1}(i)\},\\ \end{cases} \end{equation*} and \begin{equation*} \tilde{a}_j(k):=\begin{cases} A_{kj},\enskip \text{ for }k\notin\{j,x^{-1}(i)\},\\ 0,\enskip \text{ for }k\in\{j,x^{-1}(i)\}.\\ \end{cases} \end{equation*} Contrary to $a_i$ and $a_j$ which share a coordinate, the vectors $\tilde{a}_i$ and $\tilde{a}_j$ are independent. With this notation, we have the following decomposition \begin{equation*} C_{ij}=\langle \tilde{a}_i,X\tilde{a}_j\rangle+ A_{ji}\Big(A_{x(j)j}+A_{x^{-1}(i)i}\Big). \end{equation*} For the first term, we will use the following polarization identity \begin{equation}\label{eq:polarization} \langle \tilde{a}_i,X\tilde{a}_j\rangle= \|\frac12(\tilde{a}_i+X\tilde{a}_j)\|^2-\|\frac12(\tilde{a}_i-X\tilde{a}_j)\|^2. \end{equation} By the independence of $\tilde{a}_i$ and $\tilde{a}_j$, it is easy to see that $\tilde{a}_i+X\tilde{a}_j$ and $\tilde{a}_i-X\tilde{a}_j$ are independent Gaussian vectors and $\ensuremath{\mathbb{E}}[\langle \tilde{a}_i,X\tilde{a}_j\rangle]=0$. Using \eqref{eq:polarization} and defining $\mathcal{Z}_{ij}:=A_{ji}\Big(A_{x(j)j}+A_{x^{-1}(i)i}\Big)n$, it is easy to see that \begin{equation}\label{eq:decomp_offdiag} nC_{ij}\stackrel{d}{=}\sum^{n-1}_{i=1}\mu_ig^2_i-\sum^{n-1}_{i=1}\nu_ig'^2_i+\mathcal{Z}_{ij} \end{equation} where $g_1,\cdots,g_{n-1}$ and $g'_1,\cdots,g'_{n-1}$ are two sets of independent standard Gaussian variables and $\mu_i,\nu_i\in \{\frac12,\frac34,1\}$, for $i\in[n-1]$. The sequences $(\mu_i)^{n-1}_{i=1},(\nu_i)^{n-1}_{i=1}$ will be characterised below, when we divide the analysis into two cases $x(j)=i$ and $x(j)\neq i$. We first state the following claim about $\mathcal{Z}_{ij}$. 
\begin{claim}\label{claim:distrZ} For $i\neq j$, we have \[\mathcal{Z}_{ij}\stackrel{d}{=}\begin{cases} q_{ij}(\zeta_1-\zeta_2)\enskip \text{ if }x(j)\neq i,\\ 2\zeta_3 \enskip \text{ if }x(j)=i, \end{cases}\] where $\zeta_1,\zeta_2$ and $\zeta_3$ are independent Chi-squared random variables with one degree of freedom and \[q_{ij}=\begin{cases} \sqrt\frac32 \enskip \text{ if }i\in S_X,j\notin S_X\text{ or }i\notin S_X,j\in S_X,\\ \sqrt 2 \text{ if }i,j\in S_X,\\ \frac1{\sqrt2 } \text{ if }i,j\notin S_X. \end{cases}\] \end{claim} We delay the proof of this claim until the end of this section. From the expression \eqref{eq:decomp_offdiag}, we deduce that the vectors $g=(g_1,\cdots,g_{n-1})$, $g'=(g'_1,\cdots,g'_{n-1})$ and $\mathcal{Z}_{ij}$ are independent. Hence, by Claim \ref{claim:distrZ} the following decomposition holds \begin{equation*} nC_{ij}\stackrel{d}{=}\sum^{n}_{i=1}\mu_ig^2_i-\sum^{n}_{i=1}\nu_ig'^2_i \end{equation*} where \[\mu_{n}= \begin{cases} q_{ij}\text{ if }x(j)\neq i, \\ 2\text{ if }x(j)=i, \end{cases} \text{ and } \nu_{n}= \begin{cases} q_{ij}\text{ if }x(j)\neq i, \\ 0\text{ if }x(j)=i. \end{cases} \] Let us define $\mu:=(\mu_1,\cdots,\mu_{n})$ and $\nu:=(\nu_1,\cdots,\nu_{n})$. We will now distinguish two cases. \noindent \textit{(a) \underline{Case $x(j)\neq i$.}} In this case, we can verify that one of the $\mu_1,\cdots,\mu_{n-1}$ is equal to $0$ (and the same is true for the values $\nu_1,\cdots,\nu_{n-1}$). Assume without loss of generality that $\mu_1=\nu_1=0$. Also, one of the following situations must happen for the sequence $\mu_2,\cdots,\mu_{n-1}$ (resp. $\nu_2,\cdots,\nu_{n-1}$): either $n-3$ of the elements of the sequence are equal to $\frac12$ and one is equal to $1$, or $n-4$ are equal to $\frac12$ and two are equal to $\frac34$, or $n-3$ are equal to $\frac12$ and one is equal to $\frac34$.
In either of those cases, the following holds \begin{align*} \|\mu\|_1-\|\nu\|_1&=0,\\ \|\mu\|_2+\|\nu\|_2&\leq \sqrt{2n},\\ \|\mu\|_\infty,\|\nu\|_\infty&\leq \sqrt{2} \end{align*} where the first equality comes from Lemma \ref{lem:expectation}, and the inequality on the norm $\|\cdot\|_2$ comes from the fact that in the worst case $\|\mu\|_2=\|\nu\|_2\leq \sqrt{\frac{n+1}4}$. The statement about the norm $\|\cdot\|_\infty$ can be easily seen from the definition of $\mu$ and $\nu$. Using \eqref{eq:lau_mass_ut}, we obtain \begin{equation*} \ensuremath{\mathbb{P}}(nC_{ij}\geq 4\sqrt{nt}+4t)\leq 2e^{-t}. \end{equation*} Replacing $t=\overline{t}:=\frac{n}{4}(\sqrt{1+\frac{s_x}2}-1)^2$ in the previous expression and noticing that $\overline{t}\geq \frac1{96}s_x^2n$ for $s_x\in (0,1]$ leads to the bound \begin{equation*} \ensuremath{\mathbb{P}}(C_{ij}\geq s_x/2)\leq 2e^{-\frac{s_x^2}{96}n}. \end{equation*} \noindent \textit{(b) \underline{Case $x(j)= i$.}} In this case, we have that for the sequence $\mu_1,\cdots,\mu_{n-1}$ (resp. $\nu_1,\cdots,\nu_{n-1}$): either $n-2$ of the elements of the sequence are equal to $\frac12$ and one is equal to $1$, or $n-3$ are equal to $\frac12$ and two are equal to $\frac34$. In either case, the following holds \begin{align*} \|\mu\|_1-\|\nu\|_1&=2,\\ \|\mu\|_2+\|\nu\|_2&\leq 2\sqrt{n},\\ \|\mu\|_\infty,\|\nu\|_\infty&\leq 2. \end{align*} Here, the inequalities for the norms $\|\cdot\|_1,\|\cdot\|_{\infty}$ follow directly from the definition of $\mu$ and $\nu$, and the inequality for $\|\cdot\|_2$ follows by the fact that, in the worst case, $\|\mu\|_2+\|\nu\|_2=\sqrt{\frac{n+6}4}+\sqrt{\frac{n+2}4}$. Using \eqref{eq:lau_mass_ut}, we get \begin{equation*} \ensuremath{\mathbb{P}}(nC_{ij}\geq 2+4\sqrt{nt}+4t)\leq 2e^{-t}.
\end{equation*} Replacing $t=\overline{t}:=\frac{n}{4}(\sqrt{1+\frac{s_x}2-\frac2n}-1)^2$ in the previous expression and noticing that $\overline{t}\geq \frac{1}{96}s_x^2n-\frac1{12}$ for $s_x\in (10/n,1]$ (which ensures $\frac{s_x}2-\frac2n>0$), we get \begin{equation*} \ensuremath{\mathbb{P}}(C_{ij}\geq s_x/2)\leq 2e^{\frac1{12}}e^{-\frac{s_x^2}{96}n}\leq 3e^{-\frac{s_x^2}{96}n}, \end{equation*} which proves \eqref{eq:boundoffdiag}. \end{proof} \begin{proof}[Proof of Claim \ref{claim:distrZ}] Observe that when $x(j)=i$ (or equivalently $x^{-1}(i)=j$) we have $\mathcal{Z}_{ij}=2nA^2_{ij}$. Given that $i\neq j$ by assumption, it holds $A_{ij}\sim \ensuremath{\mathcal{N}} (0,\frac1n)$, which implies that $\mathcal{Z}_{ij}\stackrel{d}{=}2\zeta_3$ for $\zeta_3\sim \chi^2_1$. In the case $x(j)\neq i$, let us define \begin{equation*} \psi_1:=\sqrt{n}A_{ij},\enskip\psi_2:=\sqrt{n}A_{jx(j)},\enskip\psi_3:=\sqrt{n}A_{ix^{-1}(i)}, \end{equation*} which are all independent Gaussian random variables. Moreover, $\psi_1\sim \ensuremath{\mathcal{N}}(0,1)$ and \[\psi_2+\psi_3 \sim \begin{cases} \ensuremath{\mathcal{N}}(0,2)\text{ if }i,j\notin S_X,\\ \ensuremath{\mathcal{N}}(0,3) \text{ if }i\in S_X,j\notin S_X\text{ or }i\notin S_X,j\in S_X,\\ \ensuremath{\mathcal{N}}(0,4)\text { if } i,j\in S_X. \end{cases}\] Consider the case $i,j\notin S_X$. In this case, it holds \begin{align*} \mathcal{Z}_{ij}=\sqrt{2}\psi_1\Big(\frac{\psi_2+\psi_3}{\sqrt{2}}\Big) =\frac{1}{\sqrt{2}}{\Big(\frac{\psi_1}{\sqrt 2}+\frac{\psi_2+\psi_3}{2}\Big)}^2-\frac{1}{\sqrt{2}}{\Big(\frac{\psi_1}{\sqrt 2}-\frac{\psi_2+\psi_3}{2}\Big)}^2. \end{align*} Notice that $\frac{\psi_1}{\sqrt 2}+\frac{\psi_2+\psi_3}{2}$ and $\frac{\psi_1}{\sqrt 2}-\frac{\psi_2+\psi_3}{2}$ are independent standard normal random variables, hence $\mathcal{Z}_{ij}\stackrel{d}{=}\frac1{\sqrt{2}}(\zeta_1-\zeta_2)$, where $\zeta_1$ and $\zeta_2$ are independent $\chi^2_1$ random variables. The proof for the other cases is analogous.
\end{proof} \subsection{Proof of Proposition \ref{prop:diago_dom} part $(ii)$}\label{app:diagdom_row_noise} Now we consider the case where $\sigma\neq 0$. It is easy to see that here the analysis of the noiseless case still applies (up to re-scaling by $\sqrt{1-\sigma^2}$) for the matrix $C'=AXA$. We can proceed in an analogous way for the matrix $C''=AXZ$ which will complete the analysis (recalling that $C=\sqrt{1-\sigma^2}C'+\sigma C''$). Before we proceed with the proof, we explain how the tail analysis of entries of $C'$ in Prop.\ref{prop:diago_dom} part $(i)$ helps us with the tail analysis of $C''$. Observe that for each $i,j\in[n]$ we have \begin{equation*} C''_{ij}=\sum_{k,k'}A_{ik}X_{k,k'}Z_{k',j}=\sum^n_{k=1}A_{ik}Z_{x(k)j}= \langle A_{:i},XZ_{:j}\rangle. \end{equation*} The term $C''_{ij}$, for all $i,j\in[n]$, can be controlled similarly to the term $C'_{i'j'}$ (when $i'\neq j'$). Indeed, we have the following \begin{lemma}\label{lem:tailbound_noise} For $t\geq 0$ we have \begin{equation*} \ensuremath{\mathbb{P}}(C''_{ij}\leq -4\sqrt{nt}-2t\big)=\ensuremath{\mathbb{P}}(C''_{ij}\geq 4\sqrt{nt}+2t\big)\leq 2e^{-t}. \end{equation*} Consequently, \begin{equation*} \ensuremath{\mathbb{P}}(C''_{ij}\geq s_x/2)\leq 2e^{-\frac{s_x^2}{96}n}. \end{equation*} \end{lemma} \begin{proof} We define $h_1:=\frac12(A_{:i}+XZ_{:j})$ and $h_2:=\frac12(A_{:i}-XZ_{:j})$. It is easy to see that $h_1$ and $h_2$ are two i.i.d Gaussian vectors of dimension $n$. By the polarization identity, we have \begin{align*} n \langle A_{:i},XZ_{:j}\rangle=n(\|h_1\|^2-\|h_2\|^2) \stackrel{d}{=}\sum^n_{i=1}\mu_ig^2_i-\sum^n_{i=1}\nu_ig'^2_i \end{align*} where $g=(g_1,\cdots,g_n)$ and $g'=(g'_1,\cdots,g'_n)$ are independent standard Gaussian vectors and the vectors $\mu=(\mu_1,\cdots,\mu_n),\nu=(\nu_1,\cdots,\nu_n)$ have positive entries that satisfy, for all $i\in [n]$, $\mu_i,\nu_i\in \{\frac1{\sqrt2},\sqrt{\frac34},1\}$. 
For $\mu_i$ (and the same is true for $\nu_i$) the following two cases can happen: either $n-1$ of its entries are $1/\sqrt{2}$ and one entry takes the value $1$ (when $i=j$) or $n-2$ of its entries are $1/\sqrt{2}$ and two entries take the value $\sqrt{3/4}$ (when $i\neq j$). In any of those cases, one can readily see that \[\|\mu\|_1=\|\nu\|_1,\enskip \|\mu\|_2+\|\nu\|_2\leq \sqrt{n},\enskip \|\mu\|_\infty,\|\nu\|_\infty\leq 1.\] Using Corollary \ref{cor:lau_mass} we obtain \begin{align*} \ensuremath{\mathbb{P}}\big(n(\|h_1\|^2-\|h_2\|^2)\geq 4\sqrt{nt}+2t\big)&\leq 2e^{-t},\\ \ensuremath{\mathbb{P}}\big(n(\|h_1\|^2-\|h_2\|^2)\leq -4\sqrt{nt}-2t\big)&\leq 2e^{-t}. \end{align*} Arguing as in the proof of Proposition \ref{prop:diago_dom} part $(i)$ we obtain the bound \begin{equation*} \ensuremath{\mathbb{P}}(C''_{ij}\geq s_x/2)\leq 2e^{-\frac{s_x^2}{96}n}. \end{equation*} \end{proof} Now we introduce some definitions that will be used in the proof. We define $s_{\sigma,x}:=\frac12\sqrt{1-\sigma^2}s_x$, and for $\delta>0$, $i,j\in[n]$, we define the following events \[\mathcal{E}^i_\delta:=\{\sqrt{1-\sigma^2}C'_{ii}\leq s_{\sigma,x}+\delta \}\cup \{\sigma C''_{ii}\leq -\delta\},\] \[\mathcal{E}^{ij}:=\{\sqrt{1-\sigma^2}C'_{ij}\geq s_{\sigma,x}/2 \}\cup \{\sigma C''_{ij}\geq s_{\sigma,x}/2 \}\enskip\text{, for }i\neq j.\] One can easily verify that $\{C_{ii}\leq s_{\sigma,x}\}\subset \mathcal{E}^i_{\delta}$, hence it suffices to control the probability of $ \mathcal{E}^i_{\delta}$. For that we use the union bound and the already established bounds in Lemmas \ref{lem:tailbounds} and \ref{lem:tailbound_noise}. To attack the off-diagonal case, we observe that the following holds $\{C_{ij}\geq s_{\sigma,x}\}\subset \mathcal{E}^{ij}$. The following lemma allows us to bound the probability of the events $\mathcal{E}^i_\delta$ and $\mathcal{E}^{ij}$. \begin{lemma}\label{lem:probaevents} Let $\delta$ be such that $0\leq \delta\leq \frac{s_x}2\sqrt{1-\sigma^2}$. 
Then for $i,j\in[n]$ with $i\neq j$ we have the following bounds \begin{align}\label{eq:eventdiag} \ensuremath{\mathbb{P}}(\mathcal{E}^i_\delta)&\leq 4e^{-\frac{1}{96}(\frac{s_x}2-\frac\delta{\sqrt{1-\sigma^2}})^2n}+2e^{-\frac{1}{96}(\frac{\delta}{\sigma})^2n} \\ \label{eq:eventoffdiag} \ensuremath{\mathbb{P}}(\mathcal{E}^{ij})& \leq 4e^{-\frac1{384} s_x^2(\frac{1-\sigma^2}{\sigma^2}\wedge 1)n}. \end{align} In particular, we have \begin{equation}\label{eq:eventdiag2} \ensuremath{\mathbb{P}}(\mathcal{E}^i_{\delta_{\sigma,x}})\leq 6e^{-\frac1{384} s_x^2(\frac{1-\sigma^2}{1+2\sigma\sqrt{1-\sigma^2}})n} \end{equation} where $\delta_{\sigma,x}=\frac{\sigma\sqrt{1-\sigma^2}}{\sigma+\sqrt{1-\sigma^2}}\frac{s_x}2$. \end{lemma} \begin{proof} Using \eqref{eq:bound_Cii}, we have that \begin{equation*} \ensuremath{\mathbb{P}}\Big(\sqrt{1-\sigma^2}C'_{ii}\leq \sqrt{1-\sigma^2}\big(s_x-2(\sqrt2+1/\sqrt{n})\sqrt{\frac tn}-3 \frac tn\big)\Big)\leq 4e^{-t}. \end{equation*} Replacing $t=\overline{t}:=\frac{n}{36}\big(\sqrt{d^2+{6s_x-\frac{12\delta}{\sqrt{1-\sigma^2}}}}-d\big)^2$ in the previous expression, where $d=2(\sqrt2+1/\sqrt{n})$, and observing that $\overline{t}\geq \frac{1}{96}(\frac{s_x}{2}-\frac{\delta}{\sqrt{1-\sigma^2}})^2n$, which is valid for $0\leq \delta\leq \frac{s_x}2\sqrt{1-\sigma^2}$, we obtain \begin{equation*} \ensuremath{\mathbb{P}}\Big(\sqrt{1-\sigma^2}C'_{ii}\leq s_{\sigma,x}+\delta \Big)\leq 4e^{-\frac{1}{96}(\frac{s_x}{2}-\frac{\delta}{\sqrt{1-\sigma^2}})^2n}. \end{equation*} Using this and Lemma \ref{lem:tailbound_noise} we have \begin{align} \ensuremath{\mathbb{P}}(\mathcal{E}^i_\delta)&\leq \ensuremath{\mathbb{P}}(\sqrt{1-\sigma^2}C'_{ii}\leq s_{\sigma,x}+\delta)+\ensuremath{\mathbb{P}}(\sigma C''_{ii}\leq -\delta)\nonumber\\ &\leq 4e^{-\frac{1}{96}(\frac{s_x}2-\frac\delta{\sqrt{1-\sigma^2}})^2n}+2e^{-\frac{1}{96}(\frac{\delta}{\sigma})^2n}.\nonumber \end{align} Similarly, to prove \eqref{eq:eventoffdiag} we verify that \begin{align*} \ensuremath{\mathbb{P}}(\mathcal{E}^{ij})&\leq \ensuremath{\mathbb{P}}(C'_{ij}\geq \frac{s_x}4)+\ensuremath{\mathbb{P}}(C''_{ij}\geq \frac{\sqrt{1-\sigma^2}}{\sigma}\frac{s_x}4)\\ &\leq 2e^{-\frac{1}{384}s_x^2n}+2e^{-\frac1{384} s_x^2(\frac{1-\sigma^2}{\sigma^2})n}\\ &\leq 4e^{-\frac1{384} s_x^2(\frac{1-\sigma^2}{\sigma^2}\wedge 1)n}. \end{align*} To prove \eqref{eq:eventdiag2} it suffices to use \eqref{eq:eventdiag} with the choice of $\delta=\delta_{\sigma,x}=\frac{\sigma\sqrt{1-\sigma^2}}{\sigma+\sqrt{1-\sigma^2}}\frac{s_x}2$. \end{proof} With this we prove the diagonal dominance for each fixed row of $C$. \begin{proof}[Proof of Prop. \ref{prop:diago_dom} part $(ii)$] Define $\tilde{\mathcal{E}}_j:=\{C_{ii}\leq s_{\sigma,x}\}\cup\{C_{ij}\geq s_{\sigma,x}\}$, which clearly satisfies $\{C_{ii}\leq C_{ij}\}\subset\tilde{\mathcal{E}}_j$.
Then by the union bound, \begin{align*} \ensuremath{\mathbb{P}}(\cup_{j\neq i}\tilde{\mathcal{E}}_j)&\leq \ensuremath{\mathbb{P}}(C_{ii}\leq s_{\sigma,x})+\sum_{j\neq i} \ensuremath{\mathbb{P}}(C_{ij}\geq s_{\sigma,x})\\ &\leq \ensuremath{\mathbb{P}}(\mathcal{E}^i_{\delta_{\sigma,x}})+\sum_{j\neq i}\ensuremath{\mathbb{P}}(\mathcal{E}^{ij})\\ &\leq 6e^{-\frac1{384} s_x^2(\frac{1-\sigma^2}{1+2\sigma\sqrt{1-\sigma^2}})n}+4(n-1)e^{-\frac1{384} s_x^2(\frac{1-\sigma^2}{\sigma^2}\wedge 1)n}\\ &\leq 5ne^{-\frac1{384} s_x^2(\frac{1-\sigma^2}{1+2\sigma\sqrt{1-\sigma^2}})n} \end{align*} where in the third inequality we used Lemma \ref{lem:probaevents}, and in the last inequality we used the fact that $\frac{1-\sigma^2}{\sigma^2}\wedge 1\geq \frac{1-\sigma^2}{1+2\sigma\sqrt{1-\sigma^2}}$. \end{proof} \section{Proof of Lemma \ref{lem:not_rc_dom}}\label{app:lem_not_rc_dom} The proof of Lemma \ref{lem:not_rc_dom} uses elements of the proof of Proposition \ref{prop:diago_dom}. The interested reader is invited to read the proof of Proposition \ref{prop:diago_dom} first. \begin{proof}[Proof of Lemma \ref{lem:not_rc_dom}] It will be useful to first generalize our notation. For that, we denote \[C_{ij,x}=(AXB)_{ij}, \enskip C'_{ij,x}=(AXA)_{ij},\enskip C''_{ij,x}=(AXZ)_{ij} \] for $x\in \ensuremath{\mathcal{S}}_n$, and \[\mathcal{E}^{ij}_{x^{-1}}:=\{\sqrt{1-\sigma^2}C'_{ij,x^{-1}}\geq s_{\sigma,x}/2 \}\cup \{\sigma C''_{ij,x^{-1}}\geq s_{\sigma,x}/2 \}\] where $x^{-1}$ is the inverse permutation of $x$. The fact that $\ensuremath{\mathbb{P}}(C_{ii,x}<C_{ij,x})\leq 8e^{-c(\sigma)s_x^2n}$ follows directly from the bound for $\tilde{\mathcal{E}}_j$ derived in the proof of Proposition \ref{prop:diago_dom} part $(ii)$. To bound $\ensuremath{\mathbb{P}}(C_{ii,x}<C_{ji,x})$, notice that $C'_{ji,x}=C'_{ij,x^{-1}}$ and that $C''_{ji,x}\stackrel{d}{=}C''_{ij,x^{-1}}$. On the other hand, notice that $s_x=s_{x^{-1}}$ (hence $s_{\sigma,x}=s_{\sigma,x^{-1}}$).
Arguing as in Lemma \ref{lem:probaevents} it is easy to see that \[\ensuremath{\mathbb{P}}(C_{ii,x}<C_{ji,x})\leq 8e^{-c(\sigma)s_x^2n}.\] The bound on $\ensuremath{\mathbb{P}}(\exists j,\text{ s.t }C_{ij,x}\vee C_{ji,x}>C_{ii,x})$ then follows directly by the union bound. \end{proof} \section{Proofs of Lemmas \ref{lem:diagdom_LAP} and \ref{lem:overlap_event}} \label{app:proofs_lem_diagdom} \begin{proof}[Proof of Lemma \ref{lem:diagdom_LAP}] By assumption $C$ is diagonally dominant, which implies that $\exists i_1$ such that $C_{i_1i_1}=\max_{i,j}C_{ij}$ (in other words, if the largest entry of $C$ is in the $i_1$-th row, then it has to be $C_{i_1i_1}$, otherwise it would contradict the diagonal dominance of $C$). In the first step of $\operatorname{GMWM}$ we select $C_{i_1i_1}$, assign $\pi(i_1)=i_1$ and erase the $i_1$-th row and column of $C$. By erasing the $i_1$-th row and column of $C$ we obtain a matrix which is itself diagonally dominant. Iterating this argument, we see that there exist $i_1,\cdots,i_n\in[n]$ such that $\pi(i_k)=i_k$ for all $k$, so $\pi$ has to be the identity permutation. This proves that if $C$ is diagonally dominant, then $\Pi=\operatorname{Id}$. By contraposition, \eqref{eq:probneqId} follows. \end{proof} \begin{proof}[Proof of Lemma \ref{lem:overlap_event}] We argue by contradiction. Assume that for some $1\leq k\leq r$, we have $\pi(i_k)\neq i_k$ (and $\pi^{-1}(i_k)\neq i_k$). This means that at some step $j$ the algorithm selects either $C^{(j)}_{i_k\pi(i_k)}$ or $C^{(j)}_{\pi^{-1}(i_k)\pi(i_k)}$ as the largest entry, but this contradicts the row-column dominance of $i_k$. This proves that if there exists a set of indices $I_r\subset[n]$ of size $r$ such that for all $i\in I_r$, $C_{ii}$ is row-column dominant, then that set is selected by the algorithm, which implies that $\pi(i)=i$ for $i\in I_r$, thus $\operatorname{overlap}(\pi,\operatorname{id})\geq r$.
\eqref{eq:overlap_event} follows by contraposition. \end{proof} \section{Additional technical lemmas}\label{sec:additionalLemmas} Here we gather some technical lemmas used throughout the paper. \subsection{General concentration inequalities} The following lemma corresponds to \citep[Lemma 1.1]{LauMass} and controls the tails of the weighted sums of squares of Gaussian random variables. \begin{lemma}[Laurent-Massart bound]\label{lem:lau_mass} Let $X_1,\cdots,X_n$ be i.i.d standard Gaussian random variables. Let $\mu=(\mu_1,\cdots,\mu_n)$ be a vector with non-negative entries and define $\zeta=\sum^n_{i=1}\mu_i(X^2_i-1)$. Then it holds for all $t\geq 0$ that \begin{align*} \ensuremath{\mathbb{P}}(\zeta\geq 2\|\mu\|_2\sqrt{t}+2\|\mu\|_\infty t)&\leq e^{-t},\\ \ensuremath{\mathbb{P}}(\zeta\leq -2\|\mu\|_2\sqrt{t})&\leq e^{-t}. \end{align*} \end{lemma} An immediate corollary now follows. \begin{corollary}\label{cor:lau_mass} Let $X_1,\cdots,X_{n_1}$ and $Y_1,\cdots,Y_{n_2}$ be two independent sets of i.i.d standard Gaussian random variables. Let $\mu=(\mu_1,\cdots,\mu_{n_1})$ and $\nu=(\nu_1,\cdots,\nu_{n_2})$ be two vectors with non-negative entries. Define $\zeta=\sum^{n_1}_{i=1}\mu_iX^2_i$ and $\xi=\sum^{n_2}_{i=1}\nu_iY^2_i$. Then it holds for $t\geq 0$ that \begin{align}\label{eq:lau_mass_ut} \ensuremath{\mathbb{P}}\big(\zeta-\xi\geq \|\mu\|_1-\|\nu\|_1+2(\|\mu\|_2+\|\nu\|_2)\sqrt{t}+2\|\mu\|_\infty t\big)&\leq 2e^{-t}, \\ \label{eq:lau_mass_lt} \ensuremath{\mathbb{P}}\big(\zeta-\xi\leq \|\mu\|_1-\|\nu\|_1-2(\|\mu\|_2+\|\nu\|_2)\sqrt{t}-2\|\nu\|_\infty t\big)&\leq 2e^{-t}. \end{align} \end{corollary} The next lemma gives us a distributional equality for terms of the form $\langle g, X g\rangle $ where $g$ is a standard Gaussian vector and $X$ is a permutation matrix. \begin{lemma}\label{lem:dist_gaussian_inner} Let $X\in\ensuremath{\mathcal{P}}_n$ and $g=(g_1,\cdots,g_n)$ be a standard Gaussian vector.
Then it holds \[\langle g,Xg\rangle\stackrel{d}{=}\sum^{n}_{i=1}\lambda_ig'^2_i,\] where $\lambda_i$ are the eigenvalues of $\frac12(X+X^T)$ and $g'=(g'_1,\cdots,g'_n)$ is a vector of independent standard Gaussians. Moreover, if $|S_X|=s_xn$ for $s_x\in(0,1]$, $\mu\in \mathbb{R}^{n_1}$ is a vector containing the positive eigenvalues of $\frac12(X+X^T)$, and $-\nu\in \mathbb{R}^{n_2}$ is a vector containing the negative eigenvalues of $\frac12(X+X^T)$, then \begin{align*} \|\mu\|_1-\|\nu\|_1&=s_xn,\\ \sqrt{n}\leq\|\mu\|_2+\|\nu\|_2&\leq \sqrt{2n},\\ \|\mu\|_\infty,\|\nu\|_\infty&\leq 1. \end{align*} \end{lemma} \begin{proof} Notice that $\langle g,Xg\rangle=\langle g,\frac12(X+X^T)g\rangle$ and, given the symmetry of the matrix $\frac12(X+X^T)$, all its eigenvalues are real. Consider its eigendecomposition $\frac12(X+X^T)=V\Lambda V^T$. We have that \begin{align*} \langle g,\frac12(X+X^T)g\rangle&= (V^Tg)^T\Lambda V^Tg \stackrel{d}{=}\sum^n_{i=1}\lambda_ig'^2_i \end{align*} using the rotation invariance of the standard Gaussian vectors. Notice that \[|S_X|=Tr(X)=Tr\left(\frac12(X+X^T)\right)=\sum^n_{i=1}\lambda_i\] which leads to \[\|\mu\|_1-\|\nu\|_1=\sum^n_{i=1}\lambda_i=|S_X|=s_xn.\] The fact that $\|\mu\|_\infty,\|\nu\|_\infty\leq 1$ follows easily since $X$ is an orthogonal matrix. The inequality $\|\mu\|_2+\|\nu\|_2\geq \sqrt{n}$ follows from the fact that $\|\mu\|_2^2+\|\nu\|_2^2=n$. From the latter, we deduce that $\|\mu\|_2+\|\nu\|_2\leq \sqrt{\|\mu\|^2_2}+\sqrt{n-\|\mu\|^2_2}\leq 2\sqrt{\frac{n}2}$, and the result follows. \end{proof} \section{Concluding remarks} In this work, we analysed the performance of the projected power method (proposed in \citep{Villar}) as a seeded graph matching algorithm, in the correlated Wigner model. We proved that for a non-data-dependent seed with $\mathcal{O}(\sqrt{n\log n})$ correctly pre-assigned vertices, the PPM exactly recovers the ground truth matching in one iteration.
This is analogous to the state-of-the-art results for algorithms in the case of relatively sparse correlated Erdös-Renyi graphs. We additionally proved that the PPM can exactly recover the optimal matching in $\mathcal{O}(\log n)$ iterations for a seed that contains $\Omega\big((1-\kappa)n\big)$ correctly matched vertices, for a constant $\kappa\in (0,1)$, even if the seed can potentially be dependent on the data. For the latter result, we extended the arguments of \citep{MaoRud} from the (sparse) correlated Erd\"os-Renyi model to the (dense) correlated Wigner case, providing a uniform control on the error when the seed contains $\Omega\big((1-\kappa)n\big)$ fixed points. This provides theoretical guarantees for the use of PPM as a refinement algorithm (or a post-processing step) for other seedless graph matching methods. An open question is to find an efficient initialization method which outputs a permutation with order $(1-\kappa)n$ correctly matched vertices in regimes with higher $\sigma$ (say for $\sigma>1/2$). For those noise levels, spectral methods do not seem to perform well (at least in the experiments). An idea could be to adapt the results of \citep{MaoRud} from the sparse Erdös-Renyi case to the Wigner case. In that paper, the authors construct for each vertex a signature containing the neighborhood information of that vertex, encoded as a tree. Then a matching is constructed by matching those trees. It is, however, unclear how to adapt those results (which heavily rely on the sparsity) to the Wigner case. \section{Numerical experiments}\label{sec:experiments} In this section, we present numerical experiments to assess the performance of the \texttt{PPMGM} algorithm and compare it to state-of-the-art algorithms for graph matching, under the correlated Wigner model. We divide this section into two parts.
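Both parts rely on the same two primitives: sampling a correlated Wigner pair and applying projected power steps rounded by a greedy matching in the spirit of Algorithm \ref{alg:gmwm}. The following is a minimal, illustrative sketch of these primitives (not the exact experimental code); it assumes the model $B=\sqrt{1-\sigma^2}\,{X^*}^{T}AX^*+\sigma Z$ with $A,Z$ independent Wigner matrices, and all function names and parameter values are ours.

```python
import numpy as np

def wigner(n, rng):
    # Symmetric Gaussian matrix whose entries have variance of order 1/n.
    M = rng.normal(size=(n, n)) / np.sqrt(n)
    return (M + M.T) / np.sqrt(2)

def gmwm(C):
    # Greedy rounding: repeatedly pick the largest remaining entry C[i, j],
    # set pi(i) = j, then erase row i and column j.
    C = C.copy()
    n = C.shape[0]
    pi = np.empty(n, dtype=int)
    for _ in range(n):
        i, j = np.unravel_index(np.argmax(C), C.shape)
        pi[i] = j
        C[i, :] = -np.inf
        C[:, j] = -np.inf
    return pi

def ppmgm_step(A, B, x0):
    # One projected power step: form A X^(0) B and round it to a permutation.
    X0 = np.eye(len(x0))[x0]  # permutation matrix with X0[k, x0[k]] = 1
    return gmwm(A @ X0 @ B)

rng = np.random.default_rng(0)
n, sigma = 500, 0.2
A, Z = wigner(n, rng), wigner(n, rng)
x_star = rng.permutation(n)
X_star = np.eye(n)[x_star]
B = np.sqrt(1 - sigma**2) * X_star.T @ A @ X_star + sigma * Z

# Seed agreeing with x_star on roughly half of the vertices.
x0 = x_star.copy()
wrong = rng.choice(n, size=n // 2, replace=False)
x0[wrong] = x_star[rng.permutation(wrong)]

pi = ppmgm_step(A, B, x0)
print(np.mean(pi == x_star))  # fraction of correctly matched vertices
```

At this noise level, a seed of overlap about $1/2$ is expected to be lifted to (near-)perfect overlap after a single step, in line with the behaviour reported in Figure \ref{fig1-b}.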
In Section \ref{sec:perf_comp} we generate correlated Wigner graphs $A,B\sim W(n,\sigma,x^*)$ for a random permutation $x^*$, and apply to $A,B$ the spectral algorithms \texttt{Grampa} \citep{Grampa} and the classic \texttt{Umeyama} \citep{Spectral_weighted_Ume}, both of which work in the seedless case. As a second step, we apply algorithm \texttt{PPMGM} with the initialization given by the output of \texttt{Grampa} and \texttt{Umeyama}. We show experimentally that applying \texttt{PPMGM} improves the solution obtained in both cases, as measured by the overlap (defined in \eqref{eq:overlap_def}) of the output with the ground truth. We also run experiments by initializing \texttt{PPMGM} with $X^{(0)}$ randomly chosen at a certain distance from the ground truth permutation $X^*$. Specifically, we select $X^{(0)}$ uniformly at random from the set of permutation matrices that satisfy $\|X^{(0)}-X^*\|_F=\theta'\sqrt{n}$, and vary the value of $\theta'\in (0,1)$. In Section \ref{sec:spar_st} we run algorithm \texttt{PPMGM} with different pairs of input matrices. We consider the Wigner correlated matrices $A,B$ and also the pairs of matrices ($A^{\operatorname{spar}_1},B^{\operatorname{spar}_1}$), ($A^{\operatorname{spar}_2},B^{\operatorname{spar}_2}$) and ($A^{\operatorname{spar}_3},B^{\operatorname{spar}_3}$), which are produced from $A,B$ by means of a sparsification procedure (detailed in Section \ref{sec:spar_st}). The main idea behind this setting is that, to the best of our knowledge, the best theoretical guarantees for exact graph matching have been obtained in \citep{MaoRud} for relatively sparse Erdös-Renyi graphs. The algorithm proposed in \citep{MaoRud} has two steps, the first of which is a seedless-type algorithm which produces a partially correct matching, that is later refined with a second algorithm \citep[Alg.4]{MaoRud}.
Their proposed algorithm \texttt{RefinedMatching} shares similarities with \texttt{PPMGM} and with the algorithms \texttt{1-hop} \citep{LubSri,YuXuLin} and \texttt{2-hop} \citep{YuXuLin}. Formulated as it is, \texttt{RefinedMatching} \citep{MaoRud} (and the same is true for \texttt{2-hop} for that matter) only accepts graphs with binary edges as input and also uses a threshold-based rounding approach instead of Algorithm \ref{alg:gmwm}, which might be difficult to calibrate in practice. With this we address experimentally the fact that the analysis (and algorithms) in \citep{MaoRud} do not extend automatically to a simple `binarization' of the (dense) Wigner matrices, and that especially in high noise regimes, the sparsification strategies do not perform very well. \subsection{Performance of \texttt{PPMGM}}\label{sec:perf_comp} In Figure \ref{fig1-a} we plot the recovery fraction, which is defined as the overlap (see \eqref{eq:overlap_def}) between the ground truth permutation and the output of five algorithms: \texttt{Grampa}, \texttt{Umeyama}, \texttt{Grampa+PPMGM}, \texttt{Umeyama+PPMGM} and \texttt{PPMGM}. The algorithms \texttt{Grampa+PPMGM} and \texttt{Umeyama+PPMGM} use the output of \texttt{Grampa} and \texttt{Umeyama} as seeds for \texttt{PPMGM}, which is performed with $N=1$. In the algorithm \texttt{PPMGM}, we use an initial permutation $x^{(0)}\in \ensuremath{\mathcal{S}}_n$ chosen uniformly at random from the set of permutations such that $\operatorname{overlap}{(x^{(0)},x^*)}=0.08$; this is referred to as `\texttt{PPMGM} rand.init'. We take $n=800$ and consider the average overlap over $25$ Monte Carlo runs. In Figure \ref{fig1-b} we plot the performance of the \texttt{PPMGM} algorithm for randomly chosen seeds with different numbers of correctly pre-matched vertices.
More specifically, we consider an initial permutation $x^{(0)}_j\in \ensuremath{\mathcal{S}}_n$ (corresponding to initializations $X^{(0)}_j\in\ensuremath{\mathcal{P}}_n$) for $j=1,\cdots,4$ with $\operatorname{overlap}(x^{(0)}_1,x^*)=0.05$, $\operatorname{overlap}(x^{(0)}_2,x^*)=0.1$, $\operatorname{overlap}(x^{(0)}_3,x^*)=0.15$ and $\operatorname{overlap}(x^{(0)}_4,x^*)=0.5$. Equivalently, we have $\|X^{(0)}_j-X^*\|_F=\theta'_j\sqrt{n}$, where $\theta'_j=\sqrt{2\big(1-\operatorname{overlap}(x^{(0)}_j,x^*)\big)}$. Each permutation $x^{(0)}_j$ is chosen uniformly at random in the subset of permutations that satisfy each overlap condition. We observe that initializing the algorithm with an overlap of $0.1$ with the ground truth permutation already produces perfect recovery in one iteration for levels of noise as high as $\sigma=0.6$. \begin{figure} \begin{subfigure}[t]{0.475\textwidth} \includegraphics[width=\textwidth]{img/refined_grampa_ume2.pdf} \caption{Performance of \texttt{PPMGM} as a refinement of the Grampa and Umeyama algorithms, compared with PPM with a random initialization $x^{(0)}$ such that $\operatorname{overlap}(x^{(0)},x^*)=0.08$.} \label{fig1-a} \end{subfigure}\hfill \begin{subfigure}[t]{0.475\textwidth} \includegraphics[width=\textwidth]{img/comparison_n1000_diff_init.pdf} \caption{Performance of \texttt{PPMGM} with different initializations. Here $in.1,in.2,in.3,in.4$ correspond to an overlap of $x^{(0)}$ with the ground truth of $0.05,0.1,0.15$ and $0.5$, respectively.} \label{fig1-b} \end{subfigure} \caption{} \label{fig:perf-1} \end{figure} \paragraph{Varying the number of iterations $N$.} We experimentally evaluate the performance of \texttt{PPMGM} when varying the number of iterations $N$ in Algorithm \ref{alg:ppmgm}. In Figure \ref{fig:perf-2} we plot the recovery rate of \texttt{PPMGM}, initialized with $x^{(0)}$, with an overlap of $0.1$ with the ground truth. In Fig.
\ref{fig2-a} we see that adding more iterations increases the performance of the algorithm for $n=500$; however, the improvement is less pronounced in the higher noise regime. In other words, the number of iterations cannot make up for the fact that the initial seed is of poor quality (relative to the noise level). We use $N=1,2,4,8,30$ iterations and we observe a moderate gain between $N=8$ and $N=30$. In Fig. \ref{fig2-b} we use a matrix of size $n=1000$ and we see that the difference between using $N=1$ and $N>1$ is even less pronounced (we omit the case of $30$ iterations for readability purposes, as it is very similar to $N=8$). \begin{figure} \begin{subfigure}[t]{0.475\textwidth} \includegraphics[width=\textwidth]{img/it_matters.pdf} \caption{\texttt{PPMGM} with an initialization such that $\operatorname{overlap}(x^{(0)},x^*)=0.1$. Here $it.1,it.2,it.3,it.4,it.5$ correspond to $1,2,4,8$ and $30$ iterations, respectively.} \label{fig2-a} \end{subfigure}\hfill \begin{subfigure}[t]{0.475\textwidth} \includegraphics[width=\textwidth]{img/it_matters_n1000.pdf} \caption{Here $it.1,it.2,it.3,it.4$ correspond to $1,2,4$ and $8$ iterations, respectively.} \label{fig2-b} \end{subfigure} \caption{} \label{fig:perf-2} \end{figure} \subsection{Sparsification strategies}\label{sec:spar_st} Here we run \texttt{PPMGM} using different input matrices which are all transformations of the Wigner correlated matrices $A,B$. Specifically, we compare \texttt{PPMGM} with $A,B$ as input with the application of \texttt{PPMGM} to three different pairs of input matrices ($A^{\operatorname{spar}_1},B^{\operatorname{spar}_1}$), ($A^{\operatorname{spar}_2},B^{\operatorname{spar}_2}$) and ($A^{\operatorname{spar}_3},B^{\operatorname{spar}_3}$) that are defined as follows.
\begin{align*} A^{\operatorname{spar}_1}_{ij}&=\mathbbm{1}_{|A_{ij}|\geq\tau};\enskip B^{\operatorname{spar}_1}_{ij}=\mathbbm{1}_{|B_{ij}|\geq\tau}, \\ A^{\operatorname{spar}_2}_{ij}&=A_{ij}\mathbbm{1}_{|A_{ij}|\geq\tau};\enskip B^{\operatorname{spar}_2}_{ij}=B_{ij}\mathbbm{1}_{|B_{ij}|\geq\tau}, \\ A^{\operatorname{spar}_3}_{ij}&=A_{ij}\mathbbm{1}_{|A_{ij}|\in \operatorname{top_k}(A_{i:})};\enskip B^{\operatorname{spar}_3}_{ij}=B_{ij}\mathbbm{1}_{|B_{ij}|\in \operatorname{top_k}(B_{i:})}, \end{align*} where $\tau>0$ and for $k\in\mathbb{N}$ and an $n\times n$ matrix $M$, $\operatorname{top_k}(M_{i:})$ is the set of the $k$ largest elements (breaking ties arbitrarily) of $M_{i:}$ (the $i$-th row of $M$). The choice of the parameter $\tau$ is mainly determined by the sparsity assumptions in \citep[Thm.B]{MaoRud}, \emph{i.e.}, if $G,H$ are two Erdös-Renyi graphs to be matched with connection probability $p$ (which is equal to $qs$ in the definition \eqref{eq: ER_def}), then the assumption is that \begin{equation}\label{eq:sparsity_assump} (1+\epsilon)\frac{\log n}n\leq p\leq n^{\frac{1}{R\log\log n}-1} \end{equation} where $\epsilon>0$ is arbitrary and $R$ is an absolute constant. We refer the reader to \citep{MaoRud} for details. For each $p$ in the range defined by \eqref{eq:sparsity_assump} we solve the equation \begin{equation}\label{eq:param_tau} \ensuremath{\mathbb{P}}(|A_{ij}|\geq \tau_p)=2\Phi(-\tau_p\sqrt n)=p \end{equation} where $\Phi$ is the standard Gaussian cdf (which is bijective, so $\tau_p$ is well defined). In our experiments, we solve \eqref{eq:param_tau} numerically. Notice that $A^{\operatorname{spar}_1}$ and $ B^{\operatorname{spar}_1}$ are sparse correlated Erdös-Renyi graphs with a correlation that depends on $\sigma$. For the value of $k$ that defines $A^{\operatorname{spar}_3},B^{\operatorname{spar}_3}$ we choose $k=\Omega(\log n)$ or $k=\Omega(n^{o(1)})$, to maintain the sparsity degree in \eqref{eq:sparsity_assump}.
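For concreteness, the threshold equation $2\Phi(-\tau_p\sqrt n)=p$ has the closed form $\tau_p=-\Phi^{-1}(p/2)/\sqrt{n}$, and the three sparsifications admit a short implementation. The sketch below is illustrative only: it assumes entries $A_{ij}\sim\ensuremath{\mathcal{N}}(0,1/n)$, reads the thresholding as keeping the entries of large absolute value (consistent with the top-$k$ variant), and the function names are ours, not part of the experimental code.

```python
import numpy as np
from statistics import NormalDist

def tau_from_p(p, n):
    # Closed-form solution of 2 * Phi(-tau * sqrt(n)) = p,
    # i.e. tau_p = -Phi^{-1}(p / 2) / sqrt(n).
    return -NormalDist().inv_cdf(p / 2) / np.sqrt(n)

def spar1(A, tau):
    # 0/1 adjacency supported on the entries of large absolute value.
    return (np.abs(A) >= tau).astype(float)

def spar2(A, tau):
    # Same support as spar1, but keeping the Gaussian weights.
    return A * (np.abs(A) >= tau)

def spar3(A, k):
    # Keep, in every row, the k entries of largest absolute value.
    n = A.shape[0]
    idx = np.argsort(-np.abs(A), axis=1)[:, :k]
    keep = np.zeros(A.shape, dtype=bool)
    keep[np.arange(n)[:, None], idx] = True
    return A * keep
```

For instance, with $n=1500$ and $p=0.05$ this gives $\tau_p\approx 0.05$, and the empirical edge density of the first sparsification concentrates around $p$.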
In Figure \ref{fig:spar} we plot the performance comparison between \texttt{PPMGM} without sparsification and the different sparsification strategies. We see in Figs. \ref{figsp-a} and \ref{figsp-b} (initialized with overlap $0.5$ and $0.1$) that the use of the full information $A,B$ outperforms the sparser versions in the higher noise regimes and for small overlap of the initialization. On the other hand, the performance tends to be more similar for low levels of noise and a moderately large number of correct initial seeds. In theory, sparsification strategies have a moderate denoising effect (and might considerably speed up computations), but this process seems to destroy important correlation information. \begin{figure}[!ht] \begin{subfigure}[t]{0.475\textwidth} \includegraphics[width=\textwidth]{img/comparison_n1000_fp1.pdf} \caption{Initial overlap is equal to $0.5$.} \label{figsp-a} \end{subfigure}\hfill \begin{subfigure}[t]{0.475\textwidth} \includegraphics[width=\textwidth]{img/comparison_n1000_fp2.pdf} \caption{Initial overlap is equal to $0.1$.} \label{figsp-b} \end{subfigure} \caption{Comparison between \texttt{PPMGM} with and without sparsification. Here $thr.1$ corresponds to the pair of matrices ($A^{\operatorname{spar}_1},B^{\operatorname{spar}_1}$), $thr.2$ corresponds to the pair ($A^{\operatorname{spar}_2},B^{\operatorname{spar}_2}$) and top $k$ corresponds to ($A^{\operatorname{spar}_3},B^{\operatorname{spar}_3}$).} \label{fig:spar} \end{figure} \subsubsection{Choice of the sparsification parameter $\tau$}\label{sec:tau_sel} Solving \eqref{eq:param_tau} for $p$ in the range \eqref{eq:sparsity_assump}, we obtain a range of possible values for the sparsification parameter $\tau$. To choose between them, we use a simple grid search where we evaluate the recovery rate for each sparsification parameter on graphs of size $n=1500$, and take the mean over $25$ independent Monte Carlo runs. In Fig. \ref{fig-hm}, we plot a heatmap with the results.
We see that the best performing parameter in this experiment was $\tau_5$, corresponding to the probability $p_5=51\times 10^{-3}$, although the differences between the choices of $p$ are moderate. \begin{figure}[!ht] \centering \includegraphics[width=0.57\textwidth]{img/test.pdf} \caption{Heatmap for the recovery rate of the \texttt{PPMGM} algorithm with input ($A^{\operatorname{spar}_1},B^{\operatorname{spar}_1}$) for different threshold values $\tau_i$ ($y$ axis), $i=1,\cdots,6$, and different values of $\sigma$ ($x$ axis). Here $\tau_i$ corresponds to the solution of \eqref{eq:param_tau} with $n=1500$ and $p_i$ for $i=1,2,\cdots,6$ in a uniform grid between $p_1=42\times 10^{-3}$ and $p_6=54\times 10^{-3}$.} \label{fig-hm} \end{figure} \section{Introduction}\label{sec:intro} In the \emph{graph matching problem} we are given as input two graphs $G$ and $H$ with the same number of vertices, and the objective is to find a bijective function, or \emph{matching}, between the vertices of $G$ and $H$ such that the alignment between the edges of $G$ and $H$ is maximized. This problem appears in many applications such as computer vision \citep{sunfei}, network de-anonymization \citep{nay}, pattern recognition \citep{conte, streib}, protein-protein interactions and computational biology \citep{zasla,singh}. In computer vision, for example, it is used as a method of comparing two objects (or images) encoded as graph structures or to identify the correspondence between the points of two discretized images of the same object at different times. In network de-anonymization, the goal is to learn information about an anonymized (unlabeled) graph using a related labeled graph as a reference, exploiting their structural similarities. In \citep{naya2} for example, the authors show that it was possible to effectively de-anonymize the Netflix database using the IMDb (Internet Movie Database) as the ``reference'' network.
While the graph matching problem is well defined for any pair of graphs (weighted or unweighted), it is intractable in the worst case, since it can be framed as an instance of the NP-hard \emph{quadratic assignment problem} (QAP) \citep{QAPhard2014}. It also contains the ubiquitous \emph{graph isomorphism} problem (whose complexity status is unknown) as a special case. However, in the average case, many polynomial time algorithms have recently been shown to recover, either perfectly or partially, the ground-truth vertex matching with high probability. It is thus customary to assume that the observed graphs $G,H$ are generated by a model for correlated random graphs, where the problem can be efficiently solved. The two most popular models are the correlated Erdös-Renyi model \citep{PedGloss}, where two graphs are independently sampled from an Erdös-Renyi mother graph, and the correlated Wigner model \citep{deg_prof,Grampa}, which considers that $G,H$ are complete weighted graphs with independent Gaussian entries on each edge; see Section \ref{sec:RGmodels} for a precise description. Recently, other models of correlation have been proposed for random graphs with a latent geometric structure \citep{geo_1,geo_2}, community structure \citep{AniRac} and with power law degree profile \citep{Powerlaw}. \paragraph{Seeded graph matching.} The statistical analysis of the graph matching problem has mainly focused on two different versions of this problem, depending on whether side information is available or not. In the \emph{seeded} version of this problem, side information is provided (together with the two graphs $G$ and $H$) in the form of a seed, which is a bijective map from the vertices of $G$ to the vertices of $H$. The quality of the seed can be measured by its overlap with the ground-truth matching.
This definition of a seed is more general than what is often considered in the literature \citep{MosselXu}, including the notion of a partially correct (or noisy) seed \citep{LubSri,YuXuLin}. The seeded version of the problem is motivated by the fact that in many applications, a set of correctly matched vertices is usually available -- either as prior information, or it can be constructed by hand (or via an algorithm). Several algorithms, based on different techniques, have been proposed for seeded graph matching. In \citep{PedGloss,YarGross}, the authors use a percolation-based method to `grow' the seed to recover (at least partially) the ground-truth matching. Other algorithms \citep{LubSri,YuXuLin} construct a similarity matrix between the vertices of both graphs and then solve the maximum linear assignment problem (either optimally or by a greedy approach) using the similarity matrix as the cost matrix. The latter strategy has also been successfully applied in the case described below, when no side information is provided. \paragraph{Seedless graph matching.} In the \emph{seedless} version, the only information available is the pair of graphs to be matched and, therefore, only structural information can be used to produce an estimation of the ground truth matching. A family of seedless algorithms that has been thoroughly studied consists of those based on a spectral approach, starting from the celebrated result \citep{Spectral_weighted_Ume} through the more recent contributions in \citep{spec_align,Grampa,ganMass}. Other algorithms have been proposed using different techniques. For example, in \citep{deg_prof} a signature based on its degree is constructed for each vertex of $G$ and $H$ separately and then these signatures are used to produce a vertex matching. Other methods based on convex relaxations \citep{bach}, random walks \citep{isorank_1,isorank_2} and non-convex methods \citep{YuYan,XuLuo} have also been proposed.
Most of those methods require either a strong correlation between $G$ and $H$ or a superpolynomial running time \citep{barak}. There are some exceptions; for example, in \citep{ganMass} the sparse Erdös-Renyi model is considered and a partially correct matching is output when the two graphs differ in at most a constant fraction of edges. To the best of our knowledge, the algorithm with the strongest theoretical guarantees is the one in \citep{MaoRud}, which assumes that the observed graphs are (relatively) sparse Erdös-Renyi graphs. It works in a two-step process: a first algorithm takes as input the two graphs to be matched and outputs a matching for which only $n^{1-c}$ vertices are incorrectly assigned, where $n$ is the number of vertices in each graph and $c$ is a small positive constant. Then a second algorithm is used to refine the solution of the first algorithm to obtain an exact matching. In this paper we analyse the performance of the \emph{projected power method} (PPM) for the seeded graph matching problem in the context of the correlated Wigner model. This family of iterative algorithms has recently been successfully applied to several problems in machine learning and statistics \citep{chen2016_alignment,boumal2016,Wang2021OptimalNE}. We prove that PPM exactly recovers the ground-truth permutation provided that a sufficiently good initial permutation is supplied. Our analysis extends the analysis of the refinement algorithm \cite[Alg.4]{MaoRud} to the case of (dense) Wigner graphs and represents, to the best of our knowledge, the first analysis of PPM in the dense regime. The main technical difficulty in proving the convergence of PPM lies in showing that each step of the algorithm is a contraction, which requires establishing a uniform bound for the error in a neighborhood of the ground truth.
As a byproduct of our analysis, we see that PPM provides a general framework which generalizes some of the state-of-the-art algorithms in the seeded case, such as \citep[Alg.1]{YuXuLin}, \citep[Alg.2]{LubSri} and \citep[Alg.4]{MaoRud}. \paragraph{Contributions.} The main contributions of this paper can be summarized as follows. \begin{itemize} \item We provide (see Theorems \ref{prop:one_it_conv}, \ref{prop:partial_rec}) exact and partial recovery guarantees under the Gaussian Wigner model when the PPM is initialized with a given data-independent seed, and only one iteration of the PPM algorithm is performed. For this result to hold, it suffices that the overlap of the seed with the ground-truth permutation is $\Omega(\sqrt{n \log n})$. This matches the best-known bound for the sparse Erdös-Renyi case \citep{YuXuLin}, for which an overlap of $\Omega(\sqrt{n\log n})$ is required to obtain exact recovery. \item We prove (see Theorem \ref{thm:unif_rec_ppm}) that when multiple iterations are allowed, PPM converges to the ground-truth matching in $\mathcal{O}(\log n)$ iterations provided that it is initialized with a seed with overlap $\Omega\big((1-\kappa)n\big)$, for a sufficiently small constant $\kappa$, even if the initialization is data-dependent or adversarial. This extends the results in \citep{MaoRud} from the sparse Erd\"os-Renyi setting to the dense Wigner case. \item We complement our theoretical results with experiments on synthetic data showing that PPM can significantly improve the accuracy of the matching (for the correlated Wigner model) compared to that obtained by a standalone application of existing seedless methods. \end{itemize} \subsection{Notation}\label{sec:notation} We denote by $\ensuremath{\mathcal{P}}_n$ the set of permutation matrices of size $n\times n$ and by $\ensuremath{\mathcal{S}}_n$ the set of permutation maps on the set $[n]=\{1,\cdots,n\}$.
To each element $X \in \ensuremath{\mathcal{P}}_n$ (we reserve capital letters), there corresponds one and only one element $x\in \ensuremath{\mathcal{S}}_n$ (we use lowercase letters). We denote by $\operatorname{Id}$ (resp. $\operatorname{id}$) the identity matrix (resp. identity permutation), where the size will be clear from context. For $X\in \ensuremath{\mathcal{P}}_n$ (equivalently, $x\in\ensuremath{\mathcal{S}}_n$), we define $S_X=\{i\in[n]:X_{ii}=1\}$ to be the set of fixed points of $X$, and $s_x=|S_X|/n$ its fraction of fixed points. The symbols $\langle \cdot,\cdot\rangle_F$ and $\|\cdot\|_F$ denote the Frobenius inner product and matrix norm, respectively. For any matrix $X\in\mathbb{R}^{n\times n}$, let $[X]\in \mathbb{R}^{n^2}$ denote its vectorization, obtained by stacking its columns one on top of another. For two random variables $X,Y$ we write $X\stackrel{d}{=}Y$ when they are equal in law. For a matrix $A\in \mathbb{R}^{n\times n}$, $A_{i:}$ (resp. $A_{:i}$) will denote its $i$-th row (resp. column). \subsection{Mathematical description} Let $A,B$ be the adjacency matrices of the graphs $G,H$, each with $n$ vertices. In the graph matching problem, the goal is to find the solution of the following optimization problem \begin{equation}\label{form:1} \max_{x\in \mathcal{S}_n}\sum_{i,j}A_{ij}B_{x(i)x(j)} \enskip\tag{P1} \end{equation} which is equivalent to solving % \begin{equation}\label{form:1'} \max_{X\in \mathcal{P}_n}\langle A,X BX^T\rangle_F. \tag{P1'} \end{equation} Observe that \eqref{form:1} is a well defined problem -- not only for adjacency matrices -- but for any pair of matrices of the same size. In particular, it is well defined when $A,B$ are adjacency matrices of weighted graphs, which is the main setting of this paper. Moreover, this is an instance of the well-known \emph{quadratic assignment problem}, a combinatorial optimization problem known to be NP-hard in the worst case \citep{QAP}.
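To make the equivalence between \eqref{form:1} and \eqref{form:1'} concrete, here is a small brute-force Python sketch (our own illustrative code, feasible only for tiny $n$ precisely because the QAP is NP-hard). With the convention $X_{i,x(i)}=1$, one can check numerically that $\langle A,XBX^T\rangle_F=\sum_{i,j}A_{ij}B_{x(i)x(j)}$:

```python
import itertools
import numpy as np

def qap_bruteforce(A, B):
    """Solve (P1) exactly by enumerating all n! permutation maps (tiny n only)."""
    n = A.shape[0]
    best_val, best_x = -np.inf, None
    for x in itertools.permutations(range(n)):
        # objective of (P1): sum_{i,j} A_ij * B_{x(i) x(j)}
        val = sum(A[i, j] * B[x[i], x[j]] for i in range(n) for j in range(n))
        if val > best_val:
            best_val, best_x = val, x
    return best_val, best_x

def perm_to_matrix(x):
    """Permutation map x -> permutation matrix X with X[i, x(i)] = 1."""
    X = np.zeros((len(x), len(x)))
    X[np.arange(len(x)), x] = 1.0
    return X

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n)); A = (A + A.T) / 2   # symmetric weighted graph
B = rng.standard_normal((n, n)); B = (B + B.T) / 2

val_p1, x_opt = qap_bruteforce(A, B)
X = perm_to_matrix(x_opt)
val_p1_prime = np.sum(A * (X @ B @ X.T))  # <A, X B X^T>_F at the same permutation
```

The two objective values agree exactly (up to floating point error), illustrating that \eqref{form:1} and \eqref{form:1'} are the same problem under this matrix convention.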
Another equivalent formulation of \eqref{form:1} is given by the following ``lifted'' (or vector) version of the problem \begin{equation} \label{form:1''} \max_{[X]\in [\mathcal{P}_n]}[X]^T(B\otimes A)[X]\tag{P1''} \end{equation} where $[\mathcal{P}_n]$ is the set of permutation matrices in vector form. This form has already been considered in the literature, notably in the family of spectral methods \citep{Villar,spec_align}. \subsection{Statistical models for correlated random graphs}\label{sec:RGmodels} Most of the theoretical statistical analysis for the graph matching problem has so far been performed under two random graph models: the \emph{correlated Erd\"os-Renyi model} and the \emph{correlated Wigner model}. In these models the dependence between the two graphs $A$ and $B$ is explicitly described by the inclusion of a ``noise'' parameter which captures the degree of correlation between $A$ and $B$. \paragraph{Correlated Wigner model $W(n,\sigma,x^*)$.} The problem \eqref{form:1} is well defined for matrices that are not necessarily $0/1$ graph adjacencies, so a natural extension is to consider two complete weighted graphs. The following Gaussian model has been proposed in \citep{deg_prof} \[A_{ij}\sim\begin{cases}\ensuremath{\mathcal{N}}(0,\frac1n)\text{ if }i\neq j, \\ \ensuremath{\mathcal{N}}(0,\frac2n)\text{ if } i= j, \end{cases}\] and $B_{x^*(i)x^*(j)}=\sqrt{1-\sigma^2}A_{ij}+\sigma Z_{ij}$, where $Z\stackrel{d}{=}A$ is independent of $A$. Both $A$ and $B$ are distributed as the GOE (Gaussian orthogonal ensemble). Here the parameter $\sigma>0$ should be interpreted as the noise level and, in that sense, $B$ can be regarded as a ``noisy perturbation'' of $A$. Moreover, $x^* \in \ensuremath{\mathcal{S}}_n$ is the ground-truth (or latent) permutation that we seek to recover. It is not difficult to verify that solving \eqref{form:1} yields the maximum likelihood estimator (MLE) of $x^*$ under the correlated Wigner model.
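The model above is straightforward to simulate. The following Python sketch (our own code; the function names are hypothetical) samples a correlated Wigner pair and can be used to check, for instance, that corresponding off-diagonal entries of $A$ and $B$ have correlation $\sqrt{1-\sigma^2}$:

```python
import numpy as np

def sample_goe(n, rng):
    """Wigner matrix with off-diagonal variance 1/n and diagonal variance 2/n."""
    G = rng.standard_normal((n, n)) / np.sqrt(n)
    return (G + G.T) / np.sqrt(2)

def sample_correlated_wigner(n, sigma, x_star, rng):
    """Sample (A, B) from W(n, sigma, x*):
    B_{x*(i) x*(j)} = sqrt(1 - sigma^2) * A_ij + sigma * Z_ij."""
    A = sample_goe(n, rng)
    Z = sample_goe(n, rng)                 # independent copy, Z =_d A
    noisy = np.sqrt(1 - sigma**2) * A + sigma * Z
    B = np.empty_like(A)
    B[np.ix_(x_star, x_star)] = noisy      # relabel the vertices by x*
    return A, B

rng = np.random.default_rng(1)
n, sigma = 300, 0.5
x_star = rng.permutation(n)
A, B = sample_correlated_wigner(n, sigma, x_star, rng)

# empirical correlation of corresponding off-diagonal entries
iu = np.triu_indices(n, k=1)
corr = np.corrcoef(A[iu], B[np.ix_(x_star, x_star)][iu])[0, 1]
```

With $\sigma=0.5$ the empirical correlation concentrates around $\sqrt{1-\sigma^2}\approx 0.866$, and both sampled matrices are symmetric, as the model requires.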
\paragraph{Correlated Erd\"os-Renyi model $G(n,q,s,x^*)$.} For $q,s\in[0,1]$, the correlated Erdös-Renyi model with latent permutation $x^*\in \ensuremath{\mathcal{S}}_n$ can be described in two steps. \begin{enumerate} \item $A$ is generated according to the Erdös-Renyi model $G(n,q)$. \item Conditional on $A$, the entries of $B$ are i.i.d.\ according to the law % \begin{equation}\label{eq: ER_def} B_{x^*(i),x^*(j)}\sim\begin{cases} Bern(s)\quad \text{if}\quad A_{ij}=1,\\ Bern\big(\frac{q}{1-q}(1-s)\big)\quad \text{if } A_{ij}=0. \end{cases} \end{equation} \end{enumerate} There is another equivalent description of this model in the literature, where to obtain correlated Erdös-Renyi graphs, we first sample an Erdös-Renyi ``mother'' graph and then define $A,B$ as independent subsamples with a certain density parameter. We refer to \citep{PedGloss} for details. \subsection{Related work} \label{sec:rel_work} \paragraph{Projected power method (PPM).} PPM, which is also often referred to as a \emph{generalized power method} (GPM) in the literature, is a family of iterative algorithms for solving constrained optimization problems. It has been used with success for various tasks including clustering in the stochastic block model \citep{Wang2021OptimalNE}, group synchronization \citep{boumal2016,GaoZhang}, joint alignment from pairwise differences \citep{chen2016_alignment}, low-rank matrix recovery \citep{chi2019} and the generalized orthogonal Procrustes problem \citep{Ling}. It is a useful iterative strategy for solving non-convex optimization problems, and usually requires a good enough initial estimate. In general, we start with an initial candidate satisfying a set of constraints and at each iteration we perform \begin{enumerate} \item a \emph{power step}, which typically consists of multiplying the current candidate by one or more data-dependent matrices, and \item a \emph{projection step}, where the result of the power step is projected onto the set of constraints of the optimization problem.
\end{enumerate} These two operations are repeated iteratively, and convergence to the ``ground-truth signal'' can often be ensured in $\mathcal{O}(\log n)$ iterations, provided that a reasonably good initialization is supplied. The use of PPM for graph matching was first proposed and experimentally analysed in \citep{Villar}, and it has subsequently been analysed in the case of sparse Erdös-Renyi graphs in \citep{LubSri,YuXuLin} (only for one iteration) and in \citep{MaoRud} (although the connection with PPM is not mentioned in those works). \paragraph{Graph matching.} For the graph matching problem, numerous algorithmic and information-theoretic \citep{CullKi,HallMass,recons_thr} results have been obtained recently for both the Wigner and the Erdös-Renyi models. In \citep{recons_thr} the sharp threshold for reconstruction has been obtained for the Gaussian and Erdös-Renyi models. In the case of the Wigner model, the authors prove in \citep[Thm.1]{recons_thr} that for $\sigma^2\leq 1-\frac{(4+\epsilon)\log n}{n}$ the maximum likelihood estimator of the ground truth permutation $x^*$ achieves perfect recovery with probability $1-o(1)$. There has also been a lot of recent work from an algorithmic point of view. In the context of seedless algorithms, where no side information is available, several polynomial time algorithms have been proposed relying on spectral methods \citep{Spectral_weighted_Ume,Grampa,ganMass,spec_align,Balanced_GM}, degree profiles \citep{deg_prof,dai_cullina}, other vertex signatures \citep{MaoRud}, random walk based approaches \citep{isorank_1,isorank_2,Gori04graphmatching}, convex and concave relaxations \citep{afla,Lyzin,bach}, and other non-convex methods \citep{YuYan,XuLuo,Villar}. Most of the previous algorithms have theoretical guarantees only in the low noise regime.
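To make the two steps above concrete, here is a minimal Python sketch of a projected power iteration for graph matching, with power step $C=AX^{(t)}B$ and a greedy projection onto the set of permutations. This is an illustrative reconstruction under our own conventions, not the exact implementation of the algorithms cited above; the greedy routine mimics the \texttt{GMWM} rounding used later in the paper:

```python
import numpy as np

def gmwm_greedy(C):
    """Greedy maximum-weight matching: repeatedly pick the largest remaining
    entry C[i, j], set pi(i) = j, then discard row i and column j."""
    C = C.copy()
    n = C.shape[0]
    pi = np.full(n, -1)
    for _ in range(n):
        i, j = np.unravel_index(np.argmax(C), C.shape)
        pi[i] = j
        C[i, :] = -np.inf
        C[:, j] = -np.inf
    return pi

def ppm_graph_matching(A, B, x0, n_iter=1):
    """Projected power iterations: power step C = A X B, then projection onto
    permutations via greedy rounding (a sketch, not the paper's exact code)."""
    n = A.shape[0]
    x = np.array(x0)
    for _ in range(n_iter):
        X = np.zeros((n, n))
        X[np.arange(n), x] = 1.0   # permutation matrix of the current candidate
        C = A @ X @ B              # power step
        x = gmwm_greedy(C)         # projection step
    return x

# Illustration on a correlated Wigner pair with ground truth x* = id
rng = np.random.default_rng(0)
n, sigma = 500, 0.2
G = rng.standard_normal((n, n)) / np.sqrt(n); A = (G + G.T) / np.sqrt(2)
H = rng.standard_normal((n, n)) / np.sqrt(n); Z = (H + H.T) / np.sqrt(2)
B = np.sqrt(1 - sigma**2) * A + sigma * Z

x0 = np.arange(n)                          # seed with 60% correct entries
wrong = rng.choice(n, size=n - int(0.6 * n), replace=False)
x0[wrong] = x0[np.roll(wrong, 1)]          # cycle the remaining 40%
x_hat = ppm_graph_matching(A, B, x0, n_iter=1)
overlap = np.mean(x_hat == np.arange(n))
```

With a seed fixing a constant fraction of the vertices and moderate noise, a single iteration typically already recovers (essentially all of) the ground truth, in line with the one-iteration guarantees proved below.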
For instance, the \texttt{Grampa} algorithm proposed in \citep{Grampa} provably exactly recovers the ground truth permutation for the correlated Wigner model when $\sigma=\mathcal{O}(\frac1{\log n})$, and in \citep{deg_prof} it is required, for the Erdös-Renyi model, that the fraction of edges that differ between the two graphs be of order $\mathcal{O}(\frac1{\log^2 n})$. There are a few exceptions: in \citep{ganMass2} the authors present an algorithm that returns a partially correct matching in the sparse Erdös-Renyi case, even when a constant fraction of the edges differ between the two graphs. Another exception is the recent work in \citep{MaoRud}, where the authors propose and analyse an algorithm that can match correlated Erdös-Renyi graphs with constant correlation parameter, under some sparsity assumptions. The results in \citep{MaoRud} give, to the best of our knowledge, the strongest known theoretical guarantees for sparse correlated Erdös-Renyi graphs. \paragraph{Seeded algorithms.} In the context of seeded algorithms \citep{PedGloss,YarGross,MosselXu,fish,LubSri,YuXuLin}, a set of seeds of the form $S=\{(i,i'): i\in V(G),i'\in V(H)\}$ is given as side information. Many algorithms in this class work under the assumption that the information in the set of seeds corresponds perfectly to the ground truth permutation, that is, $(i,i')\in S$ if and only if $x^*(i)=i'$. Some algorithms relax this requirement by allowing ``noisy'' seeds, where for some $(i,i')$ in $S$ it happens that $x^*(i)\neq i'$ \citep{YarGross,NoisySeeds,LubSri,YuXuLin,MaoRud}. Most of the previous work on the seeded version of the problem has been devoted to the Erdös-Renyi model, under different assumptions on the sparsity. To the best of our knowledge, the state-of-the-art algorithm in this category is the \texttt{j-hop} algorithm \citep[Alg.1]{YuXuLin}, although it shares similarities with \citep[Alg.2]{LubSri} and \citep[Alg.4]{MaoRud}.
On the other hand, it will be evident from our analysis of PPM for graph matching that those algorithms can also be seen as instances of the PPM. \section{Main results}\label{sec:conv_analysis} Our goal in this section is to prove recovery guarantees for Algorithm \ref{alg:ppmgm} when the input matrices $A,B$ are realizations of the correlated Wigner model, described earlier in Section \ref{sec:RGmodels}. In what follows, we will assume without loss of generality that $X^*=\operatorname{Id}$. \subsection{Exact recovery in one iteration}\label{sec:mainstep1} For any given seed $x^{(0)}$ that is close enough to $x^*$, the main result of this section states that $x^*$ is recovered exactly in one iteration of Algorithm \ref{alg:ppmgm} with high probability. Let us first introduce the following definition: we say that a matrix $M$ is diagonally dominant\footnote{This is weaker than the usual notion of diagonal dominance, where for all $i\in [n]$ $|M_{ii}|\geq \sum_{j\neq i}|M_{ij}|$.} if for all $i,j$ with $i\neq j$ we have $M_{ii}>M_{ij}$. This notion will be used in conjunction with the following lemma, whose proof is in Appendix \ref{app:proofs_lem_diagdom}. \begin{lemma}\label{lem:diagdom_LAP} If a matrix $C$ satisfies the diagonal dominance property, then the greedy algorithm \texttt{GMWM} with input $C$ returns the identity permutation. Consequently, for $C=AXB$ and $\Pi=\tau(C)$, we have \begin{equation}\label{eq:probneqId} \ensuremath{\mathbb{P}}(\Pi\neq \operatorname{Id})\leq \ensuremath{\mathbb{P}}(C \textup{ is not diag. dominant}). \end{equation} \end{lemma} The next theorem allows us to control the probability that $C$ is not diagonally dominant and, in turn, proves that Algorithm \ref{alg:ppmgm} recovers the ground truth permutation with high probability.
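As a quick numerical sanity check of the first claim of Lemma \ref{lem:diagdom_LAP}, one can verify that a greedy maximum-weight matching (a hypothetical implementation of \texttt{GMWM}: repeatedly take the largest remaining entry and delete its row and column) returns the identity on any matrix satisfying the weak diagonal dominance defined above:

```python
import numpy as np

def gmwm_greedy(C):
    """Greedy matching: take the largest remaining entry C[i, j], set pi(i) = j,
    then remove row i and column j; repeat until all rows are matched."""
    C = C.astype(float).copy()
    n = C.shape[0]
    pi = np.full(n, -1)
    for _ in range(n):
        i, j = np.unravel_index(np.argmax(C), C.shape)
        pi[i] = j
        C[i, :] = -np.inf
        C[:, j] = -np.inf
    return pi

rng = np.random.default_rng(2)
n = 50
C = rng.random((n, n))
# enforce the weak diagonal dominance of the lemma: C[i, i] > C[i, j] for j != i
C[np.arange(n), np.arange(n)] = C.max(axis=1) + 0.1
pi = gmwm_greedy(C)
```

The check mirrors the lemma's argument: the global maximum of a weakly diagonally dominant matrix must lie on the diagonal, and deleting the corresponding row and column preserves dominance, so by induction the greedy procedure outputs the identity.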
The proof of Theorem \ref{prop:one_it_conv} is outlined in Section \ref{sec:thm_one_it}. \begin{theorem}\label{prop:one_it_conv} Let $A,B\sim W(n,\sigma,\operatorname{id})$ and $X\in\ensuremath{\mathcal{P}}_n$ with $\|X-\operatorname{Id}\|_F\leq \theta\sqrt{n}$, with $0\leq \theta \leq\sqrt{2(1-\frac{10}n)}$ and $n\geq 10$. Then the following holds. % \begin{enumerate}[(i)] \item For $C=AXB$ we have \[\ensuremath{\mathbb{P}}(C \textup{ is not diag. dominant })\leq 5n^2e^{-c(\sigma)\big(1-\frac{\theta^2}{2}\big)^2n},\] where $c(\sigma)=\frac1{384}\frac{1-\sigma^2}{1+2\sigma\sqrt{1-\sigma^2}}$. \item Let $\Pi$ denote the output of Algorithm \ref{alg:ppmgm} with input $(A,B,X^{(0)}=X,N=1)$; then \[\ensuremath{\mathbb{P}}(\Pi=\operatorname{Id})\geq 1-5n^2e^{-c(\sigma)\big(1-\frac{\theta^2}2\big)^2n}.\] In particular, if $\|X-\operatorname{Id}\|^2_F\leq 2\Big(n-\sqrt{\frac{1}{c(\sigma)}n\log{(5n^3)}}\Big)$ then \[\ensuremath{\mathbb{P}}(\Pi=\operatorname{Id})\geq 1-n^{-1}.\] \end{enumerate} \end{theorem} \begin{remark} The assumption $\|X-\operatorname{Id}\|^2_F\leq 2\big(n-\sqrt{\frac{1}{c(\sigma)}n\log{(5n^3)}}\big)$ can be restated as $|S_X|\geq \sqrt{\frac{1}{c(\sigma)}n\log{(5n^3)}}$, where $S_X$ is the set of fixed points of $X$. That is, for this assumption to hold, we need $X$ to have a number of fixed points of order $\Omega_\sigma(\sqrt{n\log n})$. Also note that $c(\sigma)$ is decreasing in $\sigma$, which is consistent with the intuition that larger levels of noise make it more difficult to recover the ground truth permutation. We include a plot of $c(\sigma)$ (rescaled) in Figure \ref{fig:c_2_sig}.
\begin{figure}[!ht] \centering \includegraphics[scale=0.29]{img/constant_sigma.pdf} \caption{The constant $c(\sigma)$ (rescaled by a factor of $384$) appearing in Theorem \ref{prop:one_it_conv}.} \label{fig:c_2_sig} \end{figure} \end{remark} \paragraph{Discussion.} Given an initial seed $X^{(0)}\in \ensuremath{\mathcal{P}}_n$, the case $N=1$ in Algorithm \ref{alg:ppmgm} can alternatively be interpreted as the following two-step process: first, compute a similarity matrix $AX^{(0)}B$, and then round the similarity matrix to an actual permutation matrix. This strategy has frequently been applied in graph matching algorithms, in both the seeded and the seedless case \citep{Spectral_weighted_Ume,Grampa,LubSri,YuXuLin}. In terms of the quality of the seed, Theorem \ref{prop:one_it_conv} gives the same guarantees obtained by \citep[Thm.1]{YuXuLin}, which requires $\Omega(\sqrt{n\log n})$ vertices in the seed to be correctly matched. However, the results of \citep{YuXuLin} are specifically for the correlated Erdös-Renyi model. \subsection{Partial recovery in one iteration} \label{subsec:partial_one_step} In the partial recovery setting, we are interested in the fraction of nodes that are correctly matched. To this end, let us define the following measure of performance \begin{equation}\label{eq:overlap_def} \operatorname{overlap}(\nu,\nu'):=\frac{1}{n}|\{i\in[n]:\nu(i)=\nu'(i)\}| \end{equation} for any pair $\nu,\nu'\in\ensuremath{\mathcal{S}}_n$. Recall that we assume that the ground truth permutation is $x^*=\operatorname{id}$ and that $\pi$ is the output of Algorithm \ref{alg:ppmgm} with input $(A,B,X^{(0)}=X,N=1)$, that is, $\Pi=\operatorname{GMWM}(AXB)$. Observe that $\operatorname{overlap}(\pi,x^*=\operatorname{id})=s_\pi$ is the fraction of fixed points of the permutation $\pi$. It will be useful to consider the following definition. We say that $C_{ij}$ is \emph{row-column dominant} if $C_{ij}> C_{i'j}$ for all $i'\neq i$ and $C_{ij}>C_{ij'}$ for all $j'\neq j$.
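The role of row-column dominance can be illustrated numerically: if the first $r$ diagonal entries of a matrix $C$ are row-column dominant, then a greedy \texttt{GMWM}-style rounding (sketched below with our own hypothetical implementation) necessarily fixes those $r$ indices, whatever it does elsewhere:

```python
import numpy as np

def gmwm_greedy(C):
    """Greedy matching: take the largest remaining entry C[i, j], set pi(i) = j,
    then remove row i and column j; repeat until all rows are matched."""
    C = C.astype(float).copy()
    n = C.shape[0]
    pi = np.full(n, -1)
    for _ in range(n):
        i, j = np.unravel_index(np.argmax(C), C.shape)
        pi[i] = j
        C[i, :] = -np.inf
        C[:, j] = -np.inf
    return pi

rng = np.random.default_rng(3)
n, r = 40, 10
C = rng.random((n, n))                   # all entries lie in [0, 1)
C[np.arange(r), np.arange(r)] += 2.0     # C_ii is now row-column dominant for i < r
pi = gmwm_greedy(C)
overlap = np.mean(pi == np.arange(n))    # fraction of fixed points of pi
```

A row-column dominant diagonal entry can never be discarded before it is selected, since every competing entry in its row and column is strictly smaller; hence the greedy procedure matches those indices correctly and the overlap is at least $r/n$.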
The following lemma relates the overlap of the output of $\texttt{GMWM}$ with the property that a subset of the entries of $C$ is row-column dominant; its proof is outlined in Appendix \ref{app:proofs_lem_diagdom}. \begin{lemma}\label{lem:overlap_event} Let $C$ be an $n\times n$ matrix with the property that there exists a set $\{i_1,\cdots,i_r\}$, with $1\leq r \leq n$, such that $C_{i_k,i_k}$ is row-column dominant for $k\in[r]$. Let $\pi\in\ensuremath{\mathcal{S}}_n$ be the permutation corresponding to $\operatorname{GMWM}(C)\in\ensuremath{\mathcal{P}}_n$. Then it holds that $\pi(i_k)=i_k$ for $k\in[r]$ and, in consequence, the following event inclusion holds \begin{equation}\label{eq:overlap_event} \{\operatorname{overlap}(\pi,\operatorname{id})< r/n\}\subset\bigcap_{\substack{I_r\subset [n]\\|I_r|=r}}\bigcup_{i\in I_r}\{C_{ii} \textup{ is not row-column dominant } \}. \end{equation} \end{lemma} Equipped with this lemma, we can prove the following generalization of Theorem \ref{prop:one_it_conv}; its proof is detailed in Section \ref{subsec:proof_thm_partial_rec}. \begin{theorem}\label{prop:partial_rec} Let $A,B\sim W(n,\sigma,\operatorname{id})$ and $X\in \ensuremath{\mathcal{P}}_n$ with $\|X-\operatorname{Id}\|_F\leq \theta\sqrt{n}$, where $0\leq \theta \leq\sqrt{2(1-\frac{10}n)}$ and $n\geq 10$. Let $\pi\in \ensuremath{\mathcal{S}}_n$ be the output of Algorithm \ref{alg:ppmgm} with input $(A,B,X^{(0)}=X,N=1)$. Then, for $r\in[n]$, \begin{equation*} \ensuremath{\mathbb{P}}( \operatorname{overlap}(\pi,\operatorname{id})> r/n)\geq 1-16rne^{-c(\sigma)\big(1-\frac{\theta^2}2\big)^2n}. \end{equation*} In particular, if $x\in\ensuremath{\mathcal{S}}_n$ is the map corresponding to $X$ and $|S_X|\geq \sqrt{\frac1{c(\sigma)}n\log{(16rn^2)}}$, then \begin{equation*} \ensuremath{\mathbb{P}}( \operatorname{overlap}(\pi,\operatorname{id})> r/n)\geq 1-n^{-1}.
\end{equation*} \end{theorem} \subsection{Exact recovery after multiple iterations, uniformly in the seed} The results in Sections \ref{sec:mainstep1} and \ref{subsec:partial_one_step} hold for any given seed $X^{(0)}$, and it is crucial that the seed does not depend on the graphs $A, B$. In this section, we provide convergence guarantees for \texttt{PPMGM} \ which hold uniformly over all choices of the seed in a neighborhood around $x^*$. \begin{theorem} \label{thm:unif_rec_ppm} Let $\sigma \in [0,1)$, $A,B\sim W(n,\sigma,\operatorname{id})$ and let $X^{(0)}$ be a -- possibly random and data dependent -- permutation such that $|S_{X^{(0)}}|\geq (1-\kappa)n$ for a constant $\kappa>0$ satisfying $\sqrt{1-\sigma^2}>48\kappa$. Then, by applying \texttt{PPMGM}\, with input ($\ensuremath{\mathcal{H}}(A),\ensuremath{\mathcal{H}}(B), X^{(0)}, N=2\log n$), where $\ensuremath{\mathcal{H}}(X)$ denotes the matrix $X$ with its diagonal removed, when $n$ is large enough, we obtain a permutation $X^{(N)}$ such that \[ \ensuremath{\mathbb{P}}(X^{(N)} = \operatorname{Id}) \geq 1- \frac{6 \log n}{n^2}.\] \end{theorem} The diagonal of the adjacency matrices $A$ and $B$ in Algorithm \ref{alg:ppmgm} was removed in the above theorem only for ease of analysis. Its proof is detailed in Section \ref{subsec:proof_unif_seed_ppm}. \begin{remark} Contrary to our previous theorems, here the strong consistency of the estimator holds uniformly over all possible seeds that satisfy the condition $|S_{X^{(0)}}|\geq (1-\kappa)n$. For this reason, we need a stronger condition than $|S_X|=\Omega(\sqrt{n\log n})$, as was the case in Theorem \ref{prop:one_it_conv}. Our result is non-trivial and cannot be obtained from Theorem \ref{prop:one_it_conv} by taking a union bound. The proof relies on a decoupling technique adapted from \citep{MaoRud}, which used a similar refinement method for Erdös-Renyi graphs.
\end{remark} \begin{remark} Contrary to the results obtained in the seedless case, which require $\sigma=o(1)$ for exact recovery \citep{Grampa}, we can allow $\sigma$ to be of constant order. The condition $\sqrt{1-\sigma^2}>48\kappa$ seems to be far from optimal, as shown in the experiments in Section \ref{sec:experiments}. For example, \texttt{PPMGM}\, can achieve exact recovery when $\kappa=0.08$ and $\sigma=0.6$. Interestingly, this condition shows that when the noise $\sigma$ increases, \texttt{PPMGM}\, needs a more accurate initialization, hence a smaller $\kappa$, to recover the latent permutation. This is confirmed by our experiments. \end{remark}
\section{Proof outline} \subsection{Proof of Theorem \ref{prop:one_it_conv}}\label{sec:thm_one_it} For $A,B\sim W(n,\sigma,\operatorname{id})$, the proof of Theorem \ref{prop:one_it_conv} relies heavily on the concentration properties of the entries of the matrix $C=AXB$, which is the matrix that is projected by our proposed algorithm. In particular, we use the fact that, under the assumptions of Theorem \ref{prop:one_it_conv}, $C$ is diagonally dominant with high probability, which is given by the following result. Its proof is deferred to Appendix \ref{app:concentration}. \begin{proposition} [Diagonal dominance property for the matrix $C=AXB$]\label{prop:diago_dom} Let $A,B\sim W(n,\sigma,\operatorname{id})$ with correlation parameter $\sigma\in[0,1)$ and let $X\in \ensuremath{\mathcal{P}}_n$ with $S_X$ the set of its fixed points and $s_x:=|S_X|/n$. Assume that $s_x\geq 10/n$ and that $n\geq 10$. Then the following is true. \begin{enumerate}[(i)] \item \textbf{Noiseless case.} For a fixed $i\in[n]$ it holds that \begin{equation*} \ensuremath{\mathbb{P}}\big(\exists j\neq i: (AXA)_{ij}>(AXA)_{ii}\big)\leq 4ne^{-\frac{s_x^2}{96}n}. \end{equation*} \item For $C=AXB$ and $i\in [n]$ it holds that \begin{equation*} \ensuremath{\mathbb{P}}{(\exists j\neq i : C_{ij}>C_{ii})}\leq 5ne^{-c(\sigma)s_x^2n} \end{equation*} where $c(\sigma)=\frac1{384}\frac{1-\sigma^2}{1+2\sigma\sqrt{1-\sigma^2}}$. \end{enumerate} \end{proposition} With this we can proceed with the proof of Theorem \ref{prop:one_it_conv}. \begin{proof}[Proof of Theorem \ref{prop:one_it_conv}] To prove part $(i)$ of the theorem, it suffices to notice that Proposition \ref{prop:diago_dom} part $(ii)$ upper bounds, for each fixed row, the probability that $C=AXB$ is not diagonally dominant in that row. Using the union bound, summing over the $n$ rows, we obtain the desired upper bound on the probability that $C$ is not diagonally dominant. We now prove part $(ii)$.
Notice that the assumption $\|X-\operatorname{Id}\|_F\leq \theta\sqrt{n}$ for $\theta< \sqrt{2}$ implies that $s_x$ is strictly positive. Moreover, from this assumption and the fact that $\|X-\operatorname{Id}\|^2_F=2(n-|S_X|)$ we deduce that \begin{equation}\label{eq:theta_fp} s_x\geq \Big(1-\frac{{\theta}^2}2\Big). \end{equation} On the other hand, we have \begin{align*} \ensuremath{\mathbb{P}}(\Pi\neq \operatorname{Id})&\leq \ensuremath{\mathbb{P}}(C \text{ is not diag.\ dominant})\\ &= \ensuremath{\mathbb{P}}(\exists i,j\in[n],i\neq j:C_{ii}<C_{ij})\\ &\leq 5n^2e^{-c(\sigma)s_x^2n}\\ &\leq 5n^2e^{-c(\sigma)\big(1-\frac{\theta^2}2\big)^2n} \end{align*} where we used Lemma \ref{lem:diagdom_LAP} in the first inequality, Proposition \ref{prop:diago_dom} in the penultimate step and \eqref{eq:theta_fp} in the last inequality. \end{proof} \subsubsection{Proof of Proposition \ref{prop:diago_dom}} In Proposition \ref{prop:diago_dom} part $(i)$ we assume that $\sigma=0$. The following are the main steps of the proof. \begin{enumerate} \item We first prove that for all $X\in\ensuremath{\mathcal{P}}_n$ with fraction of fixed points $s_x=|S_X|/n$, and for $i\neq j\in[n]$, the gap $C_{ii}-C_{ij}$ is of order $s_x$ in expectation. \item We prove that $C_{ii}$ and $C_{ij}$ are sufficiently concentrated around their means. In particular, the probability that $C_{ii}$ is smaller than $s_x/2$ is exponentially small, and the same is true for the probability that $C_{ij}$ is larger than $s_x/2$. \item We use the fact that $\ensuremath{\mathbb{P}}(C_{ii}\leq C_{ij})\leq\ensuremath{\mathbb{P}}(C_{ii} \leq s_x/2)+\ensuremath{\mathbb{P}}(C_{ij}\geq s_x/2)$ to control the probability that $C$ is not diagonally dominant. \end{enumerate} The proof is mainly based upon the following two lemmas.
\begin{lemma}\label{lem:expectation} For the matrix $C=AXA$ and with $s_x=|S_X|/n$ we have \[\ensuremath{\mathbb{E}}[C_{ij}]=\begin{cases} s_x+\frac1n\mathbbm{1}_{i\in S_X} \enskip\text { for }i=j, \\ \frac1n\mathbbm{1}_{x(j)=i} \enskip\text { for }i\neq j, \\ \end{cases}\] and from this we deduce that for $i,j\in[n]$ with $i\neq j$ \[s_x-\frac1n\leq \ensuremath{\mathbb{E}}{[C_{ii}]}-\ensuremath{\mathbb{E}}{[C_{ij}]}\leq s_x+\frac1n.\] \end{lemma} \begin{lemma}\label{lem:tailbounds} Assume that $s_x\in(10/n,1]$ and $n\geq 10$. Then for $i,j\in[n]$ with $i\neq j$ we have \begin{align}\label{eq:bounddiag} \ensuremath{\mathbb{P}}(C_{ii}\leq s_x/2)&\leq 4 e^{-\frac{s_x^2}{48}n}, \\%f(s_x)^{n/2}\\ \label{eq:boundoffdiag} \ensuremath{\mathbb{P}}(C_{ij}\geq s_x/2)&\leq 3e^{-\frac{s_x^2}{96}n}. \end{align} \end{lemma} With this we can prove Proposition \ref{prop:diago_dom} part $(i)$. \begin{proof}[Proof of Prop. \ref{prop:diago_dom} $(i)$] Define the event $\mathcal{E}_j=\{C_{ii}<\frac{s_x}2\}\cup \{C_{ij}>\frac{s_x}2\}$ and note that for $j\neq i$, we have $\{C_{ij}>C_{ii}\}\subset\mathcal{E}_j$. With this, the bounds \eqref{eq:bounddiag} and \eqref{eq:boundoffdiag}, and the fact that $e^{-\frac{s_x^2}{48}n}\leq e^{-\frac{s_x^2}{96}n}$, we have \begin{align*} \ensuremath{\mathbb{P}}\big(\exists j\neq i: C_{ij}>C_{ii}\big)&=\ensuremath{\mathbb{P}}(\cup_{j\neq i}\{C_{ij}>C_{ii}\})\\ &\leq \ensuremath{\mathbb{P}}(\cup_{j\neq i}\mathcal{E}_j)\\ &\leq \ensuremath{\mathbb{P}}(C_{ii}\leq \frac{s_x}{2})+\sum_{j\neq i} \ensuremath{\mathbb{P}}(C_{ij}\geq \frac{s_x}{2})\\ &\leq 4e^{-\frac{s_x^2}{96}n}+3(n-1)e^{-\frac{s_x^2}{96}n}\\ &\leq 4ne^{-\frac{s_x^2}{96}n}. \end{align*} \end{proof} The proof of Lemma \ref{lem:expectation} is short, and we include it in the main body of the paper. On the other hand, the proof of Lemma \ref{lem:tailbounds} mainly uses concentration inequalities for Gaussian quadratic forms, but the details are quite technical. Hence we delay its proof to Appendix \ref{app:diagdom_row_noiseless}.
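Lemma \ref{lem:expectation} is also easy to check by simulation. The following Monte Carlo sketch (our own illustrative code) takes $\sigma=0$ and a permutation fixing half of the points, and estimates $\ensuremath{\mathbb{E}}[C_{ii}]$ for a fixed point $i\in S_X$ (which should be close to $s_x+\frac1n$) together with $\ensuremath{\mathbb{E}}[C_{ij}]$ for an off-diagonal entry with $x(j)\neq i$ (which should be close to $0$):

```python
import numpy as np

rng = np.random.default_rng(4)
n, trials = 40, 2000
m = n // 2                              # number of fixed points, so s_x = 1/2
x = np.arange(n)
x[m:] = np.roll(x[m:], 1)               # fix the first m points, cycle the rest
X = np.zeros((n, n)); X[np.arange(n), x] = 1.0
s_x = np.mean(x == np.arange(n))        # equals 0.5 by construction

diag_samples, offdiag_samples = np.zeros(trials), np.zeros(trials)
for t in range(trials):
    G = rng.standard_normal((n, n)) / np.sqrt(n)
    A = (G + G.T) / np.sqrt(2)          # off-diagonal variance 1/n, diagonal 2/n
    C = A @ X @ A                       # the noiseless case sigma = 0
    diag_samples[t] = C[0, 0]           # 0 is a fixed point, so E = s_x + 1/n
    offdiag_samples[t] = C[0, 1]        # x(1) = 1 != 0, so E = 0
```

The empirical means match the lemma well within Monte Carlo error, and the empirical gap between the diagonal and off-diagonal entries is of order $s_x$, as claimed in the first step of the proof outline above.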
Before proceeding with the proof of Lemma \ref{lem:expectation}, observe that the following decomposition holds for the matrix $C$. \begin{equation}\label{eq:Cdecom} C_{ij}=\sum_{k,k'}A_{ik}X_{k,k'}A_{k'j} = \begin{cases} \sum_{k\in S_X}A^2_{ik}+\sum_{k\notin S_X}A_{ik}A_{ix(k)} \enskip\text { for }i=j,\\ \sum^n_{k=1}A_{ik}A_{x(k)j} \enskip\text{ for }i\neq j. \end{cases} \end{equation} \begin{proof}[Proof of Lemma \ref{lem:expectation}] From \eqref{eq:Cdecom} we have that \begin{align*} \ensuremath{\mathbb{E}}[C_{ii}] =\sum_{k\in S_X}\ensuremath{\mathbb{E}}[A^2_{ik}]+\sum_{k\notin S_X}\ensuremath{\mathbb{E}}[A_{ik}A_{ix(k)}] =\frac{|S_X|}n+\frac{\mathbbm{1}_{i\in S_X}}n, \end{align*} where the second sum vanishes since, for $k\notin S_X$, the entries $A_{ik}$ and $A_{ix(k)}$ are independent and centered. Similarly, for $j\neq i$ it holds that \begin{align*} \ensuremath{\mathbb{E}}[C_{ij}] =\sum^n_{k=1}\ensuremath{\mathbb{E}}[A_{ik}A_{x(k)j}] =\frac1n\mathbbm{1}_{i,j\notin S_X, x(j)=i} =\frac{\mathbbm{1}_{x(j)=i}}n, \end{align*} from which the result follows easily. \end{proof} The proof of Proposition \ref{prop:diago_dom} part $(ii)$, which corresponds to the case $\sigma\neq 0$, uses similar ideas and the details can be found in Appendix \ref{app:diagdom_row_noise}. \subsection{Proof of Theorem \ref{prop:partial_rec}} \label{subsec:proof_thm_partial_rec} The proof of Theorem \ref{prop:partial_rec} is based on the following lemma, which extends Proposition \ref{prop:diago_dom}. \begin{lemma}\label{lem:not_rc_dom} For a fixed $i\in[n]$, we have \begin{equation*} \ensuremath{\mathbb{P}}(C_{ii} \textup{ is not row-column dominant})\leq 16ne^{-c(\sigma)s_x^2n}. \end{equation*} \end{lemma} The proof of Lemma \ref{lem:not_rc_dom} is included in Appendix \ref{app:lem_not_rc_dom}. We now prove Theorem \ref{prop:partial_rec}. The main idea is that, for a fixed $i\in[n]$, with high probability the term $C_{ii}$ will be the largest term in the $i$-th row and the $i$-th column, and so \texttt{GMWM} will assign $\pi(i)=i$.
We will also use the following event inclusion, which is direct from \eqref{eq:overlap_event} in Lemma \ref{lem:overlap_event}. \begin{equation} \label{eq:overlap_event2} \{\operatorname{overlap}(\pi,\operatorname{id})< r/n\}\subset\bigcup^r_{i=1}\{C_{ii} \text{ is not row-column dominant }\}. \end{equation} \begin{proof}[Proof of Theorem \ref{prop:partial_rec}] By \eqref{eq:overlap_event2} we have that \begin{align*} \ensuremath{\mathbb{P}}(\operatorname{overlap}(\pi,\operatorname{id})< r/n)&\leq \sum^r_{i=1}\ensuremath{\mathbb{P}}(C_{ii} \text{ is not row-column dominant})\\ &\leq \sum^r_{i=1}\ensuremath{\mathbb{P}}(\exists j\neq i,\text{ s.t. }C_{ij}\vee C_{ji}>C_{ii} )\\ &\leq 16rne^{-c(\sigma) s_x^2n}, \end{align*} where we used Lemma \ref{lem:not_rc_dom} in the last inequality. \end{proof} \begin{remark} Notice that the RHS of \eqref{eq:overlap_event2} is a superset of the RHS of \eqref{eq:overlap_event}. To improve this, it is necessary to include dependency information. In other words, we need to `beat Hölder's inequality'. To see this, define \[ E_i:=\mathbbm{1}_{C_{ii}\text{ is not row-column dominant }},\enskip \varepsilon_{I}:=\mathbbm{1}_{\sum_{i\in I}E_i>0}, \text{ for } I\subset [n]; \] then $\varepsilon_{I'}$, for $I'=[r]$, is the indicator of the event in the RHS of \eqref{eq:overlap_event2}. On the other hand, the indicator of the event in the RHS of \eqref{eq:overlap_event} is ${\displaystyle \prod_{\substack{I\subset[n],|I|=r}}}\varepsilon_I$. If $\ensuremath{\mathbb{E}}\big[\varepsilon_I\big]$ is equal for all $I$, then Hölder's inequality gives \[\ensuremath{\mathbb{E}}\Big[{\displaystyle \prod_{\substack{I\subset[n],|I|=r}}}\varepsilon_I\Big]\leq \ensuremath{\mathbb{E}}[\varepsilon_{I'}],\] which does not help in quantifying the difference between \eqref{eq:overlap_event} and \eqref{eq:overlap_event2}. 
This is not surprising, as we are not taking into account the dependency between the events $\varepsilon_I$ for the different sets $I\subset[n],|I|=r$. \end{remark} \input{proof_thm3} \subsection{Proof of Theorem \ref{thm:unif_rec_ppm}} \label{subsec:proof_unif_seed_ppm} The general proof idea is based on the decoupling strategy used in \citep{MaoRud} for Erd\H{o}s--R\'enyi graphs. To extend their result from binary graphs to weighted graphs, we need to use an appropriate measure of similarity. For $i, i'\in [n], W\subset [n]$ and $g\in \ensuremath{\mathcal{P}}_n$, let us define \[ \langle A_{i:}, B_{i':} \rangle_{g,W} := \sum_{j\in W} A_{ig(j)}B_{i'j} \] to be the similarity between $i$ and $i'$ restricted to $W$ and measured with a scalar product depending on $g$ (the permutation used to align $A$ and $B$). When $g=id$ or $W=[n]$ we will drop the corresponding subscript(s). If $A$ and $B$ were binary matrices, we would have the following correspondence: \[ \langle A_{i:}, B_{i':} \rangle_{g,W} = |g(\ensuremath{\mathcal{N}}_A(i)\cap W)\cap \ensuremath{\mathcal{N}}_B(i') |.\] This last quantity plays an essential role in Proposition 7.5 of \citep{MaoRud}. Here $g(\ensuremath{\mathcal{S}})$ denotes the image of a set $\ensuremath{\mathcal{S}} \subseteq [n]$ under the permutation $g$. \paragraph{Step 1.} The algorithm design relies on the fact that if the matrices $A$ and $B$ were correctly aligned, then the correlation between $A_{i:}$ and $B_{i:}$ should be large and the correlation between $A_{i:}$ and $B_{i':}$ should be small for all $i\neq i'$. The following two lemmas precisely quantify these correlations when the two matrices are well aligned. \begin{lemma}[Correlation between corresponding nodes]\label{lem:nb_ngbh1_mt} Let $(A,B)\sim W(n,\sigma, x^*=id)$ and assume that the diagonals of $A$ and $B$ have been removed. 
Then for $n$ large enough, we have with probability at least $1-n^{-2}$ that \[ \langle A_{i:}, B_{i:}\rangle \geq \sqrt{1-\sigma^2}(1-\epsilon_1)-\sigma \epsilon_2 \text{ for all } i\in [n], \] where $\epsilon_1, \epsilon_2 =O(\sqrt{\frac{\log n}{n}})$. \end{lemma} \begin{lemma}[Correlation between different nodes]\label{lem:nb_ngbh2_mt} Let $(A,B)\sim W(n,\sigma, id)$ and assume that the diagonals of $A$ and $B$ have been removed. Then for $n$ large enough, we have with probability at least $1-n^{-2}$ that \[ \left| \langle A_{i:}, B_{i':} \rangle\right|\leq \sqrt{1-\sigma^2}\epsilon_2+\sigma \epsilon_3 \text{ for all } i,i'\in [n] \text{ such that } i'\neq i, \] where $\epsilon_3=O(\sqrt{\frac{\log n}{n}})$. \end{lemma} The proofs of Lemmas \ref{lem:nb_ngbh1_mt} and \ref{lem:nb_ngbh2_mt} can be found in Appendix \ref{sec:app_thm3}. \paragraph{Step 2.} Since the ground truth alignment between $A$ and $B$ is unknown, we need to use an approximate alignment (provided by $X^{(0)}$). It will suffice that $X^{(0)}$ is close enough to the ground truth permutation. This is linked to the fact that if $|S_{X^{(0)}}|$ is large enough, then the number of nodes for which there is a substantial amount of information contained in $S_{X^{(0)}}^c$ is small. This is shown in the following lemma. \begin{lemma}[Growing a subset of vertices]\label{lem:growing_vert} Let $G$ be a graph generated from the Wigner model with self-loops removed, associated with an adjacency matrix $A$, and let $I$ be a random subset of $[n]$ (possibly depending on $A$) with $|I|\geq (1-\kappa)n$ where $\kappa \in (0,1/2)$. Let $\delta = 8\kappa$ and define a random subset of vertices \[ \tilde{I}= \lbrace i \in [n]: \norm{A_{i:}}_{I^c}^2<\delta \rbrace .\] Then for $n$ large enough, we have \[\ensuremath{\mathbb{P}} \left( |\tilde{I}^c|\leq \frac{1}{4}|I^c| \right) \geq 1-e^{-c' \kappa n} \] for some constant $c' > 0$. \end{lemma} In order to prove this lemma we will need the following decoupling lemma. 
\begin{lemma}[An elementary decoupling] \label{lem:decoupling} Let $M>0$ be a parameter and $G$ be a weighted graph on $[n]$, with weights of magnitude bounded by $1$ and without self loops, represented by an adjacency matrix $A\in [-1,1]^{n\times n}$. Assume that there are two subsets of vertices $Q,W\subset [n]$ such that \[ \norm{A_{i:}}_W^2 \geq M \text{ for all } i\in Q.\] Then there are subsets $Q'\subseteq Q$ and $W'\subseteq W$ such that $Q'\cap W' =\emptyset$, $|Q'|\geq |Q|/5$ and \[ \norm{A_{i:}}_{W'}^2 \geq M/2 \text{ for all } i\in Q'. \] \end{lemma} \begin{proof} If $|Q\setminus W|\geq |Q|/5$ then one can take $Q'=Q\setminus W$ and $W'= W$. So we can assume that $|Q\cap W|\geq 4|Q|/5$. Let $\Tilde{W}:=W\setminus Q$ and $\hat{Q}$ be a random subset of $Q\cap W$ where each element $j\in Q\cap W$ is selected independently with probability $1/2$ in $\hat{Q}$. Consider the random disjoint sets $\hat{Q}$ and $W':=\tilde{W}\cup ((Q\cap W)\setminus \hat{Q})$. First, we will show the following claim. 
% \begin{claim} For every $i \in Q\cap W$, we have $ \ensuremath{\mathbb{P}}( \norm{A_{i:}}_{W'}^2\geq M/2 \mid i \in \hat{Q})\geq 1/2.$ \end{claim} Indeed, we have by definition \[ \norm{A_{i:}}_{W'}^2=\sum_{j \in W' } A_{ij}^2 = \sum_{j \in W\cap Q } A_{ij}^2\indic_{j\not \in \hat{Q}} +\sum_{j \in \tilde{W} } A_{ij}^2 .\] By taking the expectation conditional on $i \in \hat{Q}$, we obtain \[ \ensuremath{\mathbb{E}} \left( \norm{A_{i:}}_{W'}^2 \middle| i \in \hat{Q} \right) = \sum_{j \in W\cap Q} \frac{A_{ij}^2}{2} + \sum_{j \in \tilde W} A_{ij}^2 \geq \frac{1}{2}\sum_{j\in W}A_{ij}^2 \geq \frac{M}{2}.\] But since $\sum_{j\in W\cap Q} A_{ij}^2(\indic_{j\not \in \hat{Q}}-\frac{1}{2})$ is a symmetric random variable, we have that \[ \ensuremath{\mathbb{P}}\left(\norm{A_{i:}}_{W'}^2\geq \ensuremath{\mathbb{E}}\left(\norm{A_{i:}}_{W'}^2 \middle| i \in \hat{Q}\right) \middle|i \in \hat{Q}\right) \geq 1/2\] and hence \[ \ensuremath{\mathbb{P}}\left(\norm{A_{i:}}_{W'}^2\geq \frac{M}{2} \middle|i \in \hat{Q}\right)\geq 1/2. \] Consequently, we have \[ \ensuremath{\mathbb{E}}\left(\sum_{i\in Q\cap W} \indic_{\lbrace \norm{A_{i:}}_{W'}^2\geq M/2 \rbrace} \indic_{i \in \hat{Q}}\right) = \sum_{i\in Q\cap W} \ensuremath{\mathbb{P}}(i\in \hat{Q})\ensuremath{\mathbb{E}}\left( \indic_{\lbrace \norm{A_{i:}}_{W'}^2\geq M/2 \rbrace}\middle| i\in \hat{Q}\right) \geq \frac{|Q\cap W|}{4} \geq \frac{|Q|}{5}.\] Therefore, there is a realization of $\hat{Q}$ for which at least $|Q|/5$ of its elements $i$ satisfy $\norm{A_{i:}}_{W'}^2\geq M/2$; taking $Q'$ to be this set of elements (with $W'$ the corresponding realization of $\tilde{W}\cup ((Q\cap W)\setminus \hat{Q})$), the sets $Q'$ and $W'$ satisfy the required conditions. \end{proof} \begin{proof}[Proof of Lemma \ref{lem:growing_vert}] By considering the sets $W=I^c$ and $Q \subset \tilde{I}^c$ we obtain the following inclusion % \[ \lbrace |\tilde{I}^c|> \frac{1}{4}|I^c| \rbrace \subset \ensuremath{\mathcal{E}}:= \lbrace \exists\, Q,W\subset [n]: |W|\leq \kappa n, |Q|\geq |W|/4\neq 0, \norm{A_{i:}}_{W}^2\geq \delta \text{ for all }i\in Q \rbrace . 
\] % According to Lemma \ref{lem:decoupling}, $\ensuremath{\mathcal{E}}$ is contained in % \[\ensuremath{\mathcal{E}}':=\lbrace \exists\, Q', W'\subset [n]: |W'|\leq \kappa n, |Q'|\geq |W'|/20\neq 0, Q'\cap W'= \emptyset, \norm{A_{i:}}_{W'}^2\geq \delta/2 \text{ for all }i\in Q' \rbrace.\] % For given subsets $Q'$ and $W'$, the random variables $(\norm{A_{i:}}_{W'}^2)_{i\in Q'}$ are independent. So, by a union bound argument we get % \[ \ensuremath{\mathbb{P}} \left( |\tilde{I}^c|> \frac{1}{4}|I^c| \right) \leq \sum_{w=1}^{\ceil{\kappa n}}\sum_{|W'|=w}\sum_{k=\ceil{w/20}}^{n}\binom{n}{k}\ensuremath{\mathbb{P}}\left( \norm{A_{i:}}_{W'}^2\geq \delta/2 \right)^k. \] % According to Lemma \ref{lem:lau_mass}, for the choice $t=\kappa n$ we have for all $W'$ \[ \ensuremath{\mathbb{P}}\left( \norm{A_{i:}}_{W'}^2\geq \delta/2 \right) \leq \ensuremath{\mathbb{P}}\left( n\norm{A_{i:}}_{W'}^2\geq |W'|+\sqrt{|W'|t}+2t \right) \leq e^{-\kappa n}\] (here we used $|W'|\leq \kappa n$ and $t=\kappa n$, so that $|W'|+\sqrt{|W'|t}+2t\leq 4\kappa n= n\delta/2$). Consequently, for $n$ large enough, we have \[ \ensuremath{\mathbb{P}} \left( |\tilde{I}^c|> \frac{1}{4}|I^c| \right) \leq \sum_{w=1}^{\ceil{\kappa n}}\sum_{k=\ceil{w/20}}^{n}\left(\frac{en}{w}\right)^w\left(\frac{en}{k}\right)^ke^{-k\kappa n} < e^{-c\kappa n}\] for a constant $c>0$. Indeed, since \[ \frac{en}{ke^{\kappa n}}<1\] for all $k\geq 1$ when $n$ is large enough, we have \[ \sum_{k=\ceil{w/20}}^{n} \left(\frac{en}{k}\right)^ke^{-k\kappa n} \leq C\left(\frac{en}{e^{\kappa n}}\right)^{\ceil{w/20}}\] by the property of geometric series, where $C>0$ is a constant. But by the same argument \[ \sum_{w=1}^{\ceil{\kappa n}}\left(\frac{en}{w}\right)^w \left(\frac{(en)^{1/20}}{e^{\kappa n/20}}\right)^{w} \leq C'\, en\,\frac{(en)^{1/20}}{e^{\kappa n/20}}\leq e^{-c\kappa n}\] for $n$ large enough, where $C' > 0$ and $c > 0$ are constants. \paragraph{Step 3.} We are now in a position to show that at each step the set of fixed points of the permutation obtained with \texttt{PPMGM}\, increases. 
\begin{lemma}[Improving a partial matching]\label{lem:improve_matching} Let $G$ and $G'$ be two graphs as before, and $g$ be a random permutation possibly depending on $G$ and $G'$. Further assume that $\sqrt{1-\sigma^2}>48\kappa$. Let \[ \ensuremath{\mathcal{E}} := \lbrace |\lbrace i\in [n]: g(i)=i \rbrace|\geq (1-\kappa)n \rbrace\] be the event that the number of fixed points of $g$ is large enough. Define a random permutation $\tilde{g}$ and a random set $\tilde{J}$ as follows. Let $\delta=8\kappa$; we say that a vertex $i\in [n]$ belongs to $\tilde{J}$ if there is a unique $i'\in [n]$ such that \begin{itemize} \item $\langle A_{i:},B_{i':}\rangle_{g}\geq 3 \delta$; \item $|\langle A_{i:},B_{j:}\rangle_{g}|< 3\delta$ for all $j \neq i'$; \item $|\langle A_{j:},B_{i':}\rangle_{g}|< 3\delta$ for all $j \neq i $. \end{itemize} Then we set $\tilde{g}(i)=i'$ for any such pair of vertices. We complete $\tilde{g}$ into a permutation in an arbitrary way. If $n$ is sufficiently large and $\kappa$ sufficiently small, we have with probability at least $\ensuremath{\mathbb{P}}(\ensuremath{\mathcal{E}})-\frac{3}{n^2}$, \[ |\lbrace i\in[n]: \tilde{g}(i)=i \rbrace|\geq \frac{n}{2}+\frac{|\lbrace i\in[n]: g(i)=i \rbrace|}{2}.\] In particular, this implies that the number of fixed points of $\tilde{g}$ is strictly larger than that of $g$ whenever $g$ is not already the identity. \end{lemma} \begin{remark} The description of \texttt{GMWM} does not involve the use of a threshold, but for the nodes that satisfy the conditions described in Lemma \ref{lem:improve_matching}, \texttt{GMWM} provides by definition the same matching (this can be seen using the notion of row-column dominance and Lemma \ref{lem:overlap_event}). Since the nodes that do not satisfy these conditions can be matched in an arbitrary way, we can use \texttt{GMWM} instead of the thresholding procedure and the analysis remains valid. 
\end{remark} \begin{proof} Define the random sets \begin{align*} I:=&\lbrace j\in [n]: g(j)=j \rbrace,\\ \tilde{I}:=&\lbrace j\in [n]: \norm{A_{j:}}_{I^c}^2 < \delta \rbrace ,\\ \tilde{I}':=&\lbrace j\in [n]: \norm{B_{j:}}_{I^c}^2 < \delta \rbrace, \end{align*} where $\delta=8\kappa$, and consider the event $\ensuremath{\mathcal{E}}' = \ensuremath{\mathcal{E}}_1' \cap \ensuremath{\mathcal{E}}_2' \cap \ensuremath{\mathcal{E}}_3'$ where \begin{align*} \ensuremath{\mathcal{E}}_1' &:=\lbrace |\tilde{I}^c|\vee |(\tilde{I}')^c|\leq \frac{1}{4}|I^c| \rbrace \\ \ensuremath{\mathcal{E}}_2' &:=\lbrace \forall i\in [n] : \ \langle A_{i:}, B_{i:}\rangle \geq 0.9\sqrt{1-\sigma^2} \rbrace \\ \ensuremath{\mathcal{E}}_3' &:=\lbrace \forall i\neq i'\in [n]: \ |\langle A_{i:}, B_{i':}\rangle|<C\log n/n \rbrace \end{align*} for a suitably large constant $C > 0$ (which is the constant hidden in the $O(\cdot)$ symbol in Lemma \ref{lem:nb_ngbh2_mt}). If $n$ is sufficiently large and $\kappa$ satisfies $\sqrt{1-\sigma^2}>48\kappa$, one can show that \[ \ensuremath{\mathbb{P}}(\ensuremath{\mathcal{E}}'\cap \ensuremath{\mathcal{E}})\geq \ensuremath{\mathbb{P}}(\ensuremath{\mathcal{E}})-\frac{3}{n^2} \] by combining Lemmas \ref{lem:nb_ngbh1_mt}, \ref{lem:nb_ngbh2_mt} and \ref{lem:growing_vert}. Condition on any realization of $G,G',g$ such that the event $\ensuremath{\mathcal{E}}'\cap\ensuremath{\mathcal{E}}$ holds. Let $i \in \tilde{I}\cap \tilde{I}'$. By definition of $\ensuremath{\mathcal{E}}'\cap\ensuremath{\mathcal{E}}$, we have \begin{align*} \langle A_{i:},B_{i:}\rangle_{g} &\geq \langle A_{i:},B_{i:}\rangle_{g,I}-|\langle A_{i:},B_{i:}\rangle_{g,I^c}|\\ &\geq \langle A_{i:},B_{i:}\rangle-|\langle A_{i:},B_{i:}\rangle_{I^c}|-|\langle A_{i:},B_{i:}\rangle_{g,I^c}|\\ &\geq 3\delta. \end{align*} Here we used the fact that for all permutations $g$, $|\langle A_{i:},B_{i:}\rangle_{g,I^c}|\leq \norm{A_{i:}}_{I^c}\norm{B_{i:}}_{I^c}$ (because $g(I^c) = I^c$ by definition of $I$). 
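To verify the last inequality in the display above, note that on $\ensuremath{\mathcal{E}}_2'$ the first term is at least $0.9\sqrt{1-\sigma^2}$, while by Cauchy--Schwarz and the definition of $\tilde{I}\cap\tilde{I}'$ each of the two error terms is at most $\norm{A_{i:}}_{I^c}\norm{B_{i:}}_{I^c}<\delta$. Hence, under the assumption $\sqrt{1-\sigma^2}>48\kappa$ and with $\delta=8\kappa$, \[ \langle A_{i:},B_{i:}\rangle_{g}\geq 0.9\sqrt{1-\sigma^2}-2\delta > 0.9\cdot 48\kappa-16\kappa = 27.2\,\kappa > 24\kappa = 3\delta. \]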
On the other hand, for every $i'\in [n]\setminus\{i\}$ we have \begin{align*} |\langle A_{i:},B_{i':}\rangle_{g}| &\leq |\langle A_{i:},B_{i':}\rangle_{g,I}|+|\langle A_{i:},B_{i':}\rangle_{g,I^c}|\\ &< 3\delta . \end{align*} Similarly we have $ |\langle A_{i':},B_{i:}\rangle_{g}|< 3\delta $. Hence $\tilde{I}\cap \tilde{I}'\subset \tilde{J}$ and $\tilde{g}(i)=i$ for all $i\in \tilde{I}\cap \tilde{I}'$. Moreover, we have by the first condition on $\ensuremath{\mathcal{E}}'$ \[ |\tilde{I}\cap \tilde{I}'|\geq n-|\tilde{I}^c|-|(\tilde{I}')^c|\geq n-\frac{|I^c|}{2}=\frac{n}{2}+\frac{|I|}{2},\] so the result follows. \end{proof} \paragraph{Conclusion.} By Lemma \ref{lem:improve_matching}, if the initial number of fixed points is at least $(1-\kappa)n$, then after one iteration step the size of the set of fixed points of the new iteration is at least $(1-\kappa/2)n$ with probability greater than $1-\frac{3}{n^2}$. So after $2\log n$ iterations the set of fixed points has size at least $(1-\kappa/2^{2\log n})n>n-1$ with probability greater than $1-\frac{6\log n}{n^2}$. \subsection{Concentration inequalities used in Theorem \ref{thm:unif_rec_ppm}}\label{sec:app_thm3} In this section we provide proofs of Lemmas \ref{lem:nb_ngbh1_mt} and \ref{lem:nb_ngbh2_mt} used to prove Theorem \ref{thm:unif_rec_ppm}. \begin{proof}[Proof of Lemmas \ref{lem:nb_ngbh1_mt} and \ref{lem:nb_ngbh2_mt}.] Recall that $B_{ij}= \sqrt{1-\sigma^2}A_{ij}+\sigma Z_{ij}$. \paragraph{Step 1.} First let us consider the terms of the form $\langle A_{i:},A_{i:} \rangle$. We can write \[ \langle A_{i:},A_{i:} \rangle = \sum_{j=1}^{n-1}\mu_j g_j^2\] where the $g_j$ are independent standard Gaussian random variables and $\mu_j =1/n$ for all $j$. Observe that $||\mu||_2=\sqrt{\frac{n-1}{n^2}}$. 
By Lemma \ref{lem:lau_mass} we have for $i \in [n]$ and all $t>0$ \[ \ensuremath{\mathbb{P}}\left(\langle A_{i:},A_{i:} \rangle \leq \frac{n-1}{n}-2\sqrt{\frac{t(n-1)}{n^2}}\right)\leq e^{-t}.\] For the choice $t= 5\log n$ we obtain \[ \langle A_{i:},A_{i:} \rangle \geq 1-O\left(\sqrt{\frac{\log n}{n}}\right)\] with probability at least $1-e^{-5\log n}$. \paragraph{Step 2.} Let us now consider terms of the form $\langle A_{i:},Z_{i:} \rangle$. We can write \[ \langle A_{i:},Z_{i:} \rangle = \frac{1}{n}\sum_{j=1}^{n-1} g_jg_j' = \frac{1}{n} G^\top G' \] where $G=(g_j)_{j=1}^{n-1}$ and $G'=(g'_j)_{j=1}^{n-1}$ are independent vectors with i.i.d. standard Gaussian entries. We can write \[ G^\top G'= \norm{G}\left( \left(\frac{G}{\norm{G}}\right)^\top G'\right).\] Since the distribution of $G'$ is rotation invariant, $(\frac{G}{\norm{G}})^\top G'$ is independent of $G$ and has distribution $\ensuremath{\mathcal{N}}(0,1)$. By the Gaussian concentration inequality we hence have \[ \left|\left(\frac{G}{\norm{G}}\right)^\top G'\right| \leq C\sqrt{\log n}\] with probability at least $1-e^{-5\log n}$ for a suitable choice of $C$. Similarly, by Lemma \ref{lem:lau_mass} we have \[ \norm{G} \leq 2\sqrt{n} \] with probability at least $1-e^{-5\log n}$. Hence with probability at least $1-2e^{-5\log n}$ we have \[ \frac{1}{n} \left|G^\top G'\right| \leq 2C\sqrt{\frac{\log n}{n}}.\] \paragraph{Step 3.} The same argument can be used to show that for $i\neq j$ \[ \ensuremath{\mathbb{P}}\left(\left|\langle A_{i:},A_{j:} \rangle\right| \geq C\sqrt{\frac{\log n}{n}} \right)\leq e^{-5\log n}.\] \paragraph{Conclusion.} We can conclude by using the identity $B_{ij}= \sqrt{1-\sigma^2}A_{ij}+\sigma Z_{ij}$ and taking a union bound over all indices $i\neq j$. \end{proof}
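\begin{remark} Explicitly, the concluding step uses the bilinear expansion \[ \langle A_{i:},B_{i':}\rangle = \sqrt{1-\sigma^2}\,\langle A_{i:},A_{i':}\rangle+\sigma\,\langle A_{i:},Z_{i':}\rangle, \] so the case $i'=i$ combines Steps 1 and 2 (yielding Lemma \ref{lem:nb_ngbh1_mt}), while the case $i'\neq i$ combines Steps 2 and 3 (yielding Lemma \ref{lem:nb_ngbh2_mt}); a union bound over the $O(n^2)$ index pairs, each failing with probability at most a constant multiple of $e^{-5\log n}=n^{-5}$, gives the stated probability $1-n^{-2}$. \end{remark}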
Bloggers Get Mixing for Manhattan Month this October. Beautiful Booze is a popular source for the home cocktail enthusiast and professional bartenders alike, with over 78,000 followers on Instagram. Natalie created a one-of-a-kind beautiful Manhattan to help celebrate this year's Manhattan Month. Preparation: Add all ingredients into a rocks glass with ice; stir to combine. Garnish with maraschino cherries.
Why is tuna gassed? Tuna quickly turns an unappetizing brown (or chocolate, as it is called in the industry), whether it is fresh or conventionally frozen and thawed. Carbon monoxide, a gas that is also a component of wood smoke, prevents the flesh from discoloring. What's more expensive, tuna or salmon? At over 1 billion pounds per year, tuna consumption in the U.S. is more than twice that of salmon. Salmon is more expensive (especially wild-caught salmon) and is more likely to be considered a delicacy. Comparison chart (protein per serving): salmon 58 g; tuna 68 g. Is Bonito a tuna? Bonito (genus Sarda) is a tunalike schooling fish of the tuna and mackerel family, Scombridae (order Perciformes). Bonitos are swift, predacious fishes found worldwide. They have striped backs and silvery bellies and grow to a length of about 75 cm (30 inches). Is Bonita a tuna? Commonly called bonita, false albacore, or little tuna, it resembles the Atlantic bonito, skipjack tuna, and species of mackerel. ... It is considered by many to be a trash fish because it has a darker and stronger taste than that of the other tunas. Is fresh-caught tuna safe to eat raw? Raw tuna is generally safe when properly handled and frozen to eliminate parasites. Tuna is highly nutritious, but due to high mercury levels in certain species, it's best to eat raw tuna in moderation. What's the difference between bonito and tuna? Bonito is a species associated with the tuna family, but cannot be marketed as tuna in many countries. Bonito is quite popular as a fried fish with olive oil, especially in the Mediterranean region. What is the most expensive tuna? Bluefin is extremely popular in Japan for sashimi, due to its large size, color, texture and its high fat content. Its quality in combination with its rarity makes it the most expensive tuna species. What is the biggest species of tuna? Bluefin tuna. Bluefin are the largest tunas and can live up to 40 years. 
They migrate across all oceans and can dive deeper than 3,000 feet. Bluefin tuna are made for speed: built like torpedoes, have retractable fins and their eyes are set flush to their body. Can you save a tough roast? How to Fix Dry or Tough Meat. You may be able to salvage a roast that's dry by adding more liquid. However, for meat that's tough, we like to repurpose it instead. Let the meat cool, then use forks or your fingers to pull the overcooked meat into shreds. Does Aldi have pre-cooked bacon? Fully Cooked Bacon - Appleton Farms | ALDI US. How do you know when crab is cooked? The crabs are done when they turn orange and the meat flakes when tested with a fork. Carefully remove the crabs from the pot with clean tongs and serve on a platter with a sprinkling of seafood seasoning and some lemon wedges. What temperature should I cook a leg of lamb on? 350 degrees F. Preheat oven to 350 degrees F. Line a roasting pan with aluminum foil. Pat lamb dry with paper towels. Using a sharp knife, score the top side of the lamb by making shallow cuts all over. Why does my meatloaf shrink so much? Meatloaf is ALWAYS going to shrink when you cook it. That's what meat does; the proteins contract. ... You're also always going to get moisture that comes out; that's just part of cooking meatloaf. Using a leaner meat might result in less fat coming out of it, but you'll still have excess liquid. Is jerky refrigerated? Jerky is a lightweight, dried meat product that is a handy food for backpackers, campers and outdoor sports enthusiasts. It requires no refrigeration. Jerky can be made from almost any lean meat, including beef, pork, venison or smoked turkey breast. ... Freezing will not eliminate bacteria from the meat. Do you serve smoked salmon hot or cold? Both types of smoked salmon can be eaten cold right out of the package. Hot-smoked salmon can also be reheated and is great in hot dishes. 
Unlike fresh salmon, which should be prepared and eaten within 48 hours, smoked salmon has a longer shelf life. Does asparagus need lots of water? Asparagus needs regular watering, especially while young. Can I cook steak 4 days after the use-by date? For beef, that's three to five days, as the USDA advises. It's therefore perfectly safe to cook steak one day past the sell-by date, or even a few days after it. How long do you wait for a thermometer? Hold the thermometer in place. You will hear a beep in about 30 seconds. For glass thermometers, hold in place for 3 minutes. Take the thermometer out and read the temperature. Can I put raw bacon on frozen pizza? Better to pan-fry it and scatter it over near the end. How do you boil crayfish? To boil: Keep the cray in salted boiling water for about 10-12 minutes or until their shells turn a bright orange colour. Put in cold water immediately afterwards to arrest the cooking process. To grill: Cut the cray in half lengthwise, baste with a delicate marinade and grill flesh-side up until flesh turns opaque.
All the Wrong Places: A Life Lost and Found. PHILIP CONNORS. For M.J. PART ONE. Up in the Air. A sense of obligation drew me to the Southwest that first time. My brother had recently proposed to his girlfriend, and although we weren't close—I'd seen him just once in the two years since he'd moved to New Mexico—meeting his fiancée before the week of their wedding seemed like the brotherly thing to do. With the celebration just a few months away, my January break at college offered the last best chance to get an introduction. I reached Albuquerque in two long days of driving, only to find we were the same oddball brothers we'd always been, perhaps more so. He shook my hand in our aunt and uncle's driveway, pointy-toed boots on his feet, a ten-gallon hat on his head: playing cowboy, I thought, a little derisively. I'd arrived in my own borrowed costume, the garb of the campus outlaw, black combat boots and surplus East German Army pants, aggressive sideburns sculpted on my cheeks. If we hadn't been brothers and we'd passed in the street, one look would have soured me on him, and no doubt him on me. My soon-to-be sister-in-law was a pale kid with hair dyed black and nails painted purple, a look that spoke of rebellious instincts not at all in keeping with the vision I'd had of my brother's future wife. She was seventeen years old, the daughter of Dan's boss, an electrician who would soon make Dan foreman of his own crew. Emily would graduate from high school just weeks before the wedding. She showed an adoring deference toward Dan—a kind of puppyish infatuation in her eyes and in the tilt of her head—that I knew would one day fade, and I hoped it wouldn't curdle when it did, not least because she was the boss's daughter. She appeared inordinately curious about me, as if every word out of my mouth might contain a clue to parts of Dan's past and personality about which she knew too little. 
Food offered the organizing principle of our time together, as was customary in our family. Over the course of several days I received a tutorial in the versatility of New Mexico's famous chile peppers. The state question, Dan told me, was three words long—Red or green?—and was usually asked in reference to that staple of New Mexican cuisine, enchiladas. Red sauce, smooth as velvet, was made of dried ripe chiles run through a blender and a sieve; green sauce took the form of a chunky broth built around fresh diced chiles. Either could run the gamut from mild to apocalyptic, and the hottest iterations, to which Dan was partial, had the potential to tear the roof off your mouth. We added raw chiles to our eggs at breakfast, and our aunt used them in an apple pie. My tolerance for spicy food, always high, helped me pass what I came to understand was a kind of test administered to the first-time visitor. We ate out most evenings, washed down our food with ice-cold Mexican beer and reposado tequila, and when the mood loosened Dan spoke of flying. That was to be the prize of the trip. He told me about the annual balloon fiesta, the main event in the public life of Albuquerque and the largest gathering of its kind in the world, a tradition dating back more than twenty years. Each October several hundred balloonists ascended over the city while tens of thousands of spectators gathered to look skyward. Dan had been taken with the romance of flight from the moment he'd arrived in the city. He'd logged trainee hours for months to earn his pilot's license, flying almost every weekend with his boss. The Albuquerque weather was ideal for ballooning much of the year, with prevailing winds creating a phenomenon known as the box. Surface breezes typically blew one direction, upper elevation winds blew the other, a crosscurrent caused by temperature inversion and the geographic features of the valley, with mountains on the east, the big mesa on the west. 
A skilled balloonist could make use of the conflicting winds to take off and land in the same general area, avoiding an embarrassing trespass of private property or a nightmare descent in harsh terrain. Dan liked to be in the air as early as possible, before the warmth of the sun stirred the wind, so we rose on a Saturday morning before dawn and dressed in haste. A pot of coffee, made on a timer, awaited pouring into a thermos. All other preparations had occurred the night before. The air outside was brisk, scented with the unmistakable tang of the desert in winter. The horizon above the mountains had begun to glow like a coal. Four of us got in the truck and set off. We each took a turn at the thermos. I looked out the window at the pale, naked earth. My stomach felt sick from strong coffee and lack of sleep. I'd never been much of a morning person, another way in which Dan and I differed. Dan turned the truck off pavement and followed a two-track road, a rooster tail of dust rising behind us. In the middle of a mesa he found an opening in which to park, and everyone set to work as if they'd done this a hundred times, which in fact they had, a mechanical ballet involving earth, wind, and sky, though I did not fully grasp it yet. I helped my aunt Ruth lift the wicker basket out of the truck bed. My uncle Robert readied the gasoline-powered fan. Dan unrolled the envelope of the balloon on the face of the mesa, securing it to the basket with a series of cables. Once Robert had the fan running, Dan pointed it at the mouth of the balloon. The envelope began to fill with cold air, and he tugged on the cloth here and there to keep it from snagging as the panels of yellow, red, and blue fabric rippled like a flag in a big wind. I stood back, watching silently, feeling useless and a little bit awed. 
I wanted to offer help but I didn't know how, and Dan gave no sign that he needed any, so I stood with my hands in my pockets, trying to stay warm, trying to look ready but not too eager. Dan lit the propane burner mounted above the basket. He fired off a horizontal sword of flame, slowly warming the air inside the balloon. The burner roared and went silent, roared and went silent. No one said a word. The balloon slowly stretched taut. Everyone looked up, at the balloon and the sky beyond it, the sword of flame now and then appearing in the balloon's mouth, until the silken bubble swung into place overhead. Get in! my aunt Ruth yelled, as a gust of wind came up, the first of the morning. The urgency in her voice jarred me from my reverie. No time to grab the two-way radio for contact with the chase truck. No time to grab my camera to document the moment. No time to waste if I didn't want to miss the ride. I got one leg over the edge as the basket made a lateral hop. It came to ground with a thud and hopped again, and I feared I'd lose my balance and tip over the side. Dan was working the burner, trying to achieve the requisite heat for lift-off, and I had one hand on the basket's edge and one hand raised behind my head like a bull rider for balance, waving a frantic goodbye to the ground, or what I hoped was goodbye and not hello, for in that moment it could have gone either way. Dan grabbed the collar of my jacket and pulled hard, yanking me into the basket with him. For a second I crouched at his feet, breathing with quick little rasps of fear at how close I'd come to missing the ride. He looked down at me and laughed, shook his head. He hadn't expected the sudden wind and admitted as much. Still, he couldn't resist a poke at such an easy target. Way to be quick on your feet, he said. After a two-beat pause he added: You might want to stand up for this. I rose and looked down at the world we were leaving behind. 
Ruth waved up at us, a stick figure receding on the mesa; the mesa itself shrank to the size of a tabletop, then a postage stamp. The whole of Albuquerque slumbered in the cold light of sunrise, the dun-colored earth inscribed by the valley of the Rio Grande, a gray thread curling through the city's sere heart. The rugged spine of the Sandias loomed in the east, a sky island dark with pine in the upper elevations, stark contrast to the spare, lion-colored flora of the desert below. Across the valley to the west little conical peaks rose here and there like scale-model remnants of ancient volcanoes. From somewhere off in the distance I heard the faint bark of a dog. There were no other sounds but the burner and the breeze. Two thousand feet below us suburban rooftops glinted like bits of confetti. Cars moved like tiny beetles, scuttling on the stems of the interstates. The entirety of the city had the look of a modular, mutant amoeba stretched across the surface of a pale brown sea. Ten minutes earlier I'd been an earthbound creature. Now I floated in the sky as if cupped in the talons of some magisterial bird. I was twenty-two years old and I'd never been in an airplane. I'd never defied gravity for longer than a bounce on a trampoline or a flop from a high-diving board. The grace of our lift into sky made me giddy. I pretended to shiver from cold even as I trembled with something like euphoria. Dan's demeanor encouraged me to play it outwardly cool, though. He tended the propane burner with confidence, one hand slung free at his side, his face accented by his first real stab at a mustache, not shabby for a twenty-one-year-old. He'd filled out in the time since I'd seen him last, gaining twenty pounds, much of it muscle. His calm self-assurance had nothing of a pose about it. He was clearly in his element. 
He pointed out the city's major landmarks: the university, the airport, the bosque along the river's course, the Old Town where we'd walked the day before through narrow streets hemmed by neat walls of adobe. I nodded and looked where he told me to, but from the corner of my eye I couldn't stop watching him. It was as if I were seeing him clearly for the first time in my life—no longer the eternal kid brother, but a man in his own right, possessed of a passion I'd barely known about until I was invited to share in its pleasures. See the mountains over there? he asked, pointing at the Sandias. Someday I'm going to fly over them. Not many people have tried it. It takes a serious tailwind, and you have to be prepared for anything, because the wind shear on top can smash you to pieces on the other side. But I think it's worth the risk. He continued looking that way for a moment, as if gaming it out in his head. I wanted to know more about the logistics of such a flight, but I couldn't think of what to ask, so I joined him in silent appraisal of the morning light on the mountains. My joy lasted the entire ascent and more than half the descent. We drifted down and caught the low-level breeze again, moving lateral to the surface of the earth, then found a pocket of calm and drifted down some more. I looked away, looked at Dan. His concentration on the task at hand was total, both hands working the rope line running to the parachute vent in the top of the balloon, through which he let hot air escape. I forgot to tell you, he said, looking upward through the mouth of the balloon. My friends nicknamed this thing the Cactus Plower. Landing is always the most interesting part. He surveyed the ground below us, judging the suitability of potential landing zones. None of them looked soft. The earth was rising to meet us in a game of chicken we couldn't win. 
At some final point of surrender to wind and gravity he let go of the rope line to the parachute vent and braced his arms on the edge of the basket. I did likewise. I closed my eyes seconds before we came to ground, unable to bear the sickening wait for impact. The force of the landing made my teeth chatter. The basket tipped on its side; my body went momentarily airborne. When I opened my eyes I found myself curled in a ball, covered in the crusty soil of the desert. I brushed the dust from my jacket and squatted on my haunches, careful not to stand before my dizziness passed. I muttered something about having had all the excitement I could handle in one day. That's too bad, he said, because there's one more thing before you're legit. Among hot air balloonists there was a baptismal tradition that Dan honored: when a person made his virgin flight, the rite of passage was marked with champagne, cases of which had been the ballast of choice for the earliest balloonists. Our aunt Ruth having tracked us down in the chase truck, the balloon having been packed and stowed, we drove back into the city. In Ruth and Robert's back yard I was told to kneel, hands behind my back, and bend toward a small paper cup placed on the ground in front of me. The idea was to grip the lip of the cup in my teeth and drink the champagne by lifting and tilting my head. Dan sat facing me, giving instructions. He insisted that I not use my hands, that I not leave my position until the cup was emptied in my mouth and set back on the ground with my teeth. I was a willing acolyte, eager to do anything he told me. I should have sensed the whole thing was a setup, meant to place me in a defenseless position. At the moment of my most intense concentration, when all I could see of the world was the cup attached to my face and Dan just beyond it, Ruth and Robert came forward and poured their glasses and the rest of the bottle over my head, and Dan doused me with his own glass for good measure. 
I roared up off my knees, shaking like a muskrat, dabbing with my shirtsleeve at the champagne in my eyes. Welcome to the club, brother, he said. We shared another cup from a spare bottle as I toweled off with my flannel shirt. Sticky with cheap bubbly, shivering in the late morning breeze, I felt my admiration for him blossom into something more powerful, almost disorienting, uncomfortably close to envy. Without any fuss or the least hint of self-congratulation, he'd shepherded me through an experience I already sensed would last in memory the rest of my life. The closer I looked at him, the more impressive he seemed. He had the kind of adult life I lacked, not to mention a major talent, bordering on artistry, that allowed him to rise above the world whenever he felt like it, assuming the wind was right. My undergraduate reality looked insubstantial by comparison, with its basement keg parties and communal living arrangements, the rah-rah silliness of Saturday afternoon football games. These feelings were so unexpected, so far from anything I'd ever felt about him, that I could not find the courage to express them, and anyway words of appreciation had never come naturally for either of us. We had been farm kids, after all, and emotional effusiveness was not our style, not by a long shot. In my earliest memories there was no such thing as him or me, only us. Dan and I were born one year and nine days apart, and though I was the older I had no recollection of life before he appeared. Until I went to kindergarten at the age of five we were an inseparable pair, coconspirators unmindful of language, at home in the out of doors, amid the smells of sloughs and mud and skunks and manure. We snuck ripe strawberries from our mother's garden together, built snow forts in the windbreak of woods, swam and fished in the river, made up games of war, American boys on the American land. 
Growing up on a farm three miles from the nearest town, we each were all the other had, until our sister Lisa arrived three years after Dan and took her place as our mascot. We knew early it was our destiny to be farmers. Our father farmed a rented homestead of a quarter section. His uncles farmed to the west and to the north. Our grandmother grew up on the farm where our great-grandfather spent his entire life, the original homestead claimed by our great-great-grandfather, in 1887. We were said to have descended from a Parisian pharmacist and grocer named Louis Hebert, who emigrated to Quebec early in the seventeenth century and became the man referred to in history books as "the first farmer of Canada." Dan and I would have been the fifth generation to work the soil in the same little corner of southwest Minnesota, Des Moines River headwaters country, on the western edge of what had once been the tallgrass prairie. The first object I can recall coveting was a shiny toy tractor with an enclosed cab, which I received for a birthday gift the year I turned four. We used it to practice growing corn in a patch of earth behind the garage. We tilled the soil and planted seeds snuck from bags in the granary; we weeded the rows and watered the plants until they'd grown to scale with our tractor. Then we cut and chopped our tiny stalks the way our father did for silage, and like our father we covered our piles with a swatch of black plastic to ferment them with the warmth of the sunlight, fodder for the cattle, to get them through the lean months till the grain came in from the fields. It was an enchanting world in its way, as most childhood landscapes are: an agrarian paradise of rich post-glacial soil, with just a sliver of the old wildness remaining to invite you past the manicured fields of corn and beans, their rectilinear geometry. 
Marshlands and prairie pothole lakes dimpled the low spots in the land, and where the water still pooled and on its edges, along the drainage ditches that ran square as the rows of corn, in strips of untamed earth along the railroad right-of-way, some of the ancient prairie still survived. These remnants were sparse, though, and anyway our mission was to tame the land and bend it to our will and take our living from it. We didn't earn money by admiring it. That was a lesson imparted early. Other lessons we learned by watching, still others we learned by doing. Our father needed the help. He was in deep with the bank from the beginning, having made his start with borrowed money, and he tended his own land while also helping his uncle on the home place down the road. As soon as we had muscles, he put them to work. We learned the toughest job first, picking rock, then later in the summer pulling weeds from the soybean fields. Rock-picking was springtime duty, before the crop was planted but after the fields were plowed. Someone drove a tractor with a loader bucket in front or a wagon hitched behind, or both, and we walked alongside it through the soft and yielding clods of overturned soil, hoisting anything bigger than a softball up into the pile. Rocks could damage the planter or, worse, the combine at harvest. Removing them was a preventative measure, a hedge against damaged machines and lost time, and among the most stupefying of labors ever performed by humans on earth. Giant cairns marked each corner of the rockiest fields, monuments to our labor and the labor of those before us. They had a simple beauty not at all in keeping with the brutality of the work that had formed them. The springtime winds chapped our lips, our hands cracked from digging in the dirt, but we knew better than to complain. Farming wasn't easy. We heard that often enough. 
Rocks and weeds and bad weather were the enemies, and since one of the three could not be controlled, we had to do our best where we were able. Farming tested a person; those found wanting failed. This was the ironclad law of the life we were born for. My going to kindergarten a year before Dan nudged us apart, as did overheard jokes about our paternity, for though we were close in age we looked nothing alike, to the point where that was our most notable characteristic, the one people fixated on—our physical dissimilarity. Under "comments" in his baby book, our mother had written the first things said about him at birth, among them: He's so different from Philip. I had our mother's dark features, he had our father's strawberry blond hair and fair skin. Our personalities and interests formed as distinctly as our looks. I became a reader, asthmatic and sensitive, squeamish around farm animals, more comfortable baking cookies than baling hay. Early on he showed competence with his hands, unafraid to plunge his arm into a sow and extract a piglet, quicker to learn how to drive a tractor or run a grain auger, more instinctive with tools. Being the older brother meant never wanting to show weakness in his presence, so I scooped manure and castrated pigs alongside him, outwardly capable, inwardly doubtful. I'm sure anyone could have seen which of us was touched by a faint delicacy of manner, and anyway our 4-H projects told the tale. Dan always showed a hog at the county fair, while I played at artsier things—black-and-white photography, model airplanes. The one thing we'd always taken for granted, that we would someday be farmers, became the one option unavailable to us the year I turned twelve. The bankers lost their patience; we held a sale, packed our things in boxes, and left the only home we'd ever known to the wind and time. We'd been found wanting, not in work ethic but in financial viability. 
Old Lady Leysen rented the land to a neighbor, and that was the end of another homestead. No one would ever live there again. The buildings would eventually be burned to the ground as part of a training exercise for the local volunteer fire department, leaving only a metal Quonset hut and a concrete silo as headstones to mark our failed efforts, the rest of the rooms of our childhood consumed in flame. As much as I missed certain special places on the land, places where I felt the first tendrils of connection to things more enduring than the human-built world, I was also secretly relieved when we left. I'd never felt sure of myself in the more complicated work of the farm, never gained a feel for it, the way Dan had instinctively. A fresh chance at self-invention appealed to me. I can't say how Dan felt, though of course I can guess. I never asked, and he never said, but I had cause to wonder if in the loss of the farm he lost something of himself he could never recover. As a teenager I became obsessed with sports. I trained for basketball and track in the humid clamor of the high school weight room; I pored over copies of the Sporting News after I finished my homework at night, dreaming of one day seeing my name in print, if only in the local sports pages. Dan focused his efforts on the wood shop, becoming skilled enough to hire on summers with his shop teacher, with whom he built furniture and cabinets. As a wrestler, he viewed my passion for basketball as something of a retreat from manlier pursuits. Insofar as my teenage mind believed anything with bedrock conviction, it was that the fast-break style of the Los Angeles Lakers in the Showtime years was the pinnacle of team-sport artistry, and Dan countered by claiming that the Detroit Pistons—known as the Bad Boys, for their intimidating physicality and brutish antics—were his favorite team. 
He spent the weekends tinkering with cars, an investment of time and energy that confounded me, since he would smash them up during races at the county fair each August, undoing all his hard work in a few loops around the track. No one was surprised when I went away to college and he chose the path of blue-collar work. It was the natural move for each of us, and after he accepted our aunt and uncle's invitation to move to New Mexico and bunk with them while he got himself settled, he existed only on the far edge of my consciousness. We were brothers in our early twenties, each of us making his own way in the world, more than a thousand miles apart. I suspect he thought of me as infrequently as I thought of him. After the champagne baptism, we drank beer and made dinner and spoke of work and school and other such pleasantries. We played a game of Monopoly with Emily and her parents, settling into the banter of good-natured competition, affectionate teasing of the kind that made everyone around us laugh. The elation from our early morning flight continued to hum in my mind. The whole day had about it the character of a festive reunion. Beer flowed, old stories were retold, others told for the first time. Late in the evening, when Dan asked about my work as a reporting intern for the Fargo Forum the previous summer—our mother, he admitted, had sent him some of my clips—I had enough beer in me to tell him the truth, which was that the whole experience had been something of a farce. One Monday morning, shortly after my feature on the city's pet groomers was splashed across the entire face of the B section, along with color photos of poodles and dachshunds undergoing various forms of makeover, I decided I'd had enough. One month remained of my internship, one month more than I could stand. I skipped breakfast and went straight to a sports-medicine clinic. To a kindly but perplexed nurse, I explained that I was with the drama department at the university. 
We were putting on a play in the fall, and in the play there was a character who wore a sling on his arm. Our prop room lacked a sling. I asked if she might let me borrow one, or, if that wasn't an option, whether she might take cash for it. She seemed to pity me for some reason, perhaps the transparency of my lie; she let me have the thing for free. I told her I'd stop by with a couple of complimentary tickets in the fall, before the play opened, and she pretended to sound pleased. Half an hour later I appeared in the office of the managing editor, empty shirtsleeve dangling at my side like a flag of surrender. I explained my history of shoulder trouble (true), told him in detail how I'd dislocated it over the weekend in a game of pickup basketball (false), and informed him that I needed to leave immediately to see my doctor back in Minneapolis about the likelihood of major rotator-cuff surgery (preposterous). The old man stabbed out his cigarette and lit another, wheezing as he shifted his enormous girth in his chair. He peered at me over the top of his half-moon glasses. I'm sorry, I said, but I can't stay. I can't even take notes anymore. You can use a tape recorder, he said. I don't have one. We'll get you one. But I can't even type, I said, wiggling my pathetic chicken wing for emphasis. Sure you can, he said. You'll just have to use one hand. Hunt and peck. Half the monkeys in the newsroom type that way. His arguments were futile. By noon I'd packed my car, having worn the sling the whole time in case a colleague from the paper drove past the empty frat house—Alpha Gamma Rho, the farm-boy fraternity—where I'd rented a room for the summer. I was thirty miles down Interstate 94, smoking a celebratory joint, when I remembered I wasn't really injured and didn't need the sling. Hearing this, Dan snorted so hard that beer geysered out of his nose. 
He'd never thought of me as all that amusing, and though I'd done my best to leave out the boring parts of the story, I hadn't expected to hit his funny bone quite so squarely. His reaction proved contagious. We laughed until our faces were wet with tears. I couldn't remember the last time we'd done so. Maybe we never had. The joint, he said. It was the joint. I can see you lighting it, no hands on the wheel. Soon afterward, perhaps wanting to be funny in his turn, he mentioned—apropos of the coming holiday—that he planned to take an out-of-town trip over the long weekend, to a balloon rally in the northwest part of New Mexico, since he wouldn't have to work on Martin Luther Coon Day. My shock was immediate and visceral. I wanted to believe I'd misheard him. Dan had a smirk on his face, a look of mischievous pride, that assured me I had not misheard. No one else seemed to notice or care. He may have thought it a harmless joke, but for me it was neither harmless nor a joke, so I went to the fridge and got another beer, then another as the conversation limped on. I performed some elaborate mental contortions to avoid placing the blame for the remark where it belonged, with the owner of the mouth that had uttered it. I settled on the notion that he was taking his cues on the postures of masculinity from the men he was hanging around at the time, men you might call, to be gracious about it, illiberal. With time and maturity he'd see the folly of their crude worldview. He'd shake off those bits of boilerplate prejudice he'd borrowed in the project of crafting a self and become his own man. We returned to being out of the loop with each other the moment I left on the long drive home. 
When I arrived back at school that January I briefly considered sending him a note of thanks, along with a photocopy of "Letter from Birmingham Jail," but I figured he'd take it as a calculated insult, the high-minded snobbery of his college-boy brother, so I didn't bother with the essay or the note. Emily called off their wedding not long afterward, a fact relayed to me by my mother, so our plan to gather as a family that summer dissolved. The following autumn my telephone rang at home. It was Dan, calling to catch up. We hadn't talked in most of a year. I was half drunk and in no mood for chitchat, so I lied and told him I was deeply invested in a Monday Night Football game. I told him I'd call him back at halftime and I hung up the phone. For reasons that remain obscure to me, although they surely had something to do with the words Martin Luther Coon Day, I never returned the call. This would always remain the final exchange between us: his calling to connect, my turning away. A few months later, on the day that turned out to be his last, I arrived in New York for a summer internship at the Nation magazine. I'd arranged the job in part so I could spend time with my girlfriend, who'd already graduated and left for New York ahead of me; it was also meant to be my springboard into an honest-to-god career, the last bit of polish on my résumé before I returned to Montana to finish my degree. Marie and I had become involved after working on our college newspaper together. Amid late nights of intense work in a hothouse office, I'd fallen hard for her, in a way I hadn't for anyone in my life to that point. She was a smart editor, a varsity tennis player with the legs that entailed, fluent in French, with a daring sense of fashion shaped by a semester in Paris. With her hair cut short and a cigarette in her hand, she looked like the brunette twin of the movie starlet Jean Seberg, another midwestern girl with an air of irrepressible sensuality. 
She called me chéri and undertook to expose me to the spiritual dimensions of gourmet coffee and good red wine, an education I can only think to call erotic in its devotion to sensory pleasures, to smells and tastes and textures, most of them a major revelation for a descendant of the first farmer of Canada. Some nights, early in our courtship, we'd sneak away from the paper for a couple of hours, fix dinner at her place, share a bottle of Bordeaux, then return to the office to meet our deadline. Our attraction, shy and halting at first, was the headiest thing I'd ever been a part of, the affections we'd hidden from our colleagues, the long hours engrossed in the creation of something real—an actual newspaper—and of course the moment long after midnight when our work was done and we were finally alone. We had dreamed of New York even then, walking across campus together in the quiet of snowy nights, and now the summer was ours, the city was ours, a possibility I'd been imagining for a very long time. After Marie and I had been reacquainted, I called my mother to let her know I was in New York safely. We talked for a bit about my travels—I'd come by car all the way from Montana—and then she told me, with an edge of concern in her voice, that she'd talked to Dan earlier in the day, around lunchtime. He'd told her that his new girlfriend had decided to break things off. Wendy was ten years older than him and hadn't signed the papers on her divorce, and although her two kids liked Dan, they were confused by the sudden appearance of a man they couldn't help viewing as their father's replacement. Everything had happened too quickly between them, and she needed a break to put her life in order. My mother did her best to cheer him up, my father too, and when they hung up they figured he'd have a few bumpy weeks. Eventually he'd find someone else, someone more suitable, ideally unmarried and closer to his own age. Still, my mother said, he sounded pretty down.
It might cheer him up to hear from you. I told her I'd call him. I hung up the phone and thought, Sure, I'll call him—silly kid brother and his silly troubles with women. I'll call him in a few days. Next week, maybe. I'd been reunited with Marie for a couple of hours. We'd spent nine months apart, writing letters across the distance between us, and to find myself at last within reach of her touch made me want nothing else. Anything aside from that could wait. The next morning I went to the offices of the magazine, thinking it was going to be my first day on the job, not having received the news that the interns' start date had been moved back one day, unaware it would be more than a year before I'd return for my internship, by which time Marie would be gone for Paris again, our love but a memory. I spent an hour at the office, met some of the editors, grabbed a stack of back issues. In possession of a free afternoon I hadn't expected to be free, I was at a loss for what to do with myself. The city seemed huge and half mad, a roiling carnival of commerce, an immense performance of human longing. I called Marie from a pay phone. We made a dinner plan, a celebration of our reunion. I walked all over Lower Manhattan, tuning in to the pace of street life, browsing amid the evocative, moldering-book smell of the Strand. I found an open bench in Union Square and unfolded a copy of the Times—my new hometown paper. Tears of happiness welled in my eyes as I sat there on that bench. Everything had come together, exactly as we'd planned it. Late that afternoon, in the final minutes of my innocence, when he was already gone and I didn't know it, I puttered around Marie's apartment in Queens, listening to her Rickie Lee Jones albums, holding her clothes to my face, savoring the scent of her, delirious with longing. I was getting dressed for my first-ever dinner in Manhattan when the telephone rang. I muted the Rickie Lee Jones. I picked up the phone. 
I knew from my father's quavering voice that whatever he was about to tell me would change everything. The known facts were these: He'd spent the afternoon with friends, drinking. He'd spoken to Wendy in the evening by phone. He hadn't shown up for work the next morning. He'd died alone in his apartment. He'd done the deed with a gun. The week surrounding the burial was a maelstrom of tears and bewilderment and wild speculation about what had gone so wrong inside his head that he would choose to point a gun at it, and most of that time, mercifully, remains a fog in my memory. One moment stood out, though, a moment that would define my life in the years to come. It happened on the afternoon of the wake, when one of my uncles, in a moment of thoughtless candor, told me that if the family had been forced to choose ahead of time which of us was more likely to off himself, the odds would've favored me. At first I had no idea what to make of this extraordinary statement, except to wonder whether everyone's sorrow might have been a little less intense, a little less violent, if the death had been mine. People said a lot of foolish things in the midst of their initial shock, but this one stayed with me: the idea that I'd bucked the odds and lived. In moments of self-pity, I allowed myself to wonder whether I'd failed the family by not performing to expectations. Viewed from a different angle, my uncle's words offered up the rest of my life as an unexpected gift, an opportunity for the most radical improvisation. I could be whatever I wanted to be, as long as I didn't end up another corpse in the casket with a hole in his head. Anything went. Anything was permissible, as long as I lived. It soon became clear that the manner of his death had turned him into something of a cipher. People saw him one way or the other: sufferer or coward, victim or murderer. He either succumbed to outside forces or succumbed to the darkest impulse within. 
In the days after his death, when people's explanations were forming and quickly hardening—little stories they thought they could live with—I often felt I was the only one who vacillated between the two extremes, pitying him one hour and hating him the next. Everyone else, it seemed, had chosen, or was clinging to a brave front of certainty. The gunshot was a mistaken impulse, the gunshot was a calculated rebuke. He slipped over the edge, he was pushed over the edge. He was broken by a battle with depression, he was broken by the sudden loss of love. He clung too tightly to other people, he didn't know how to reach for help. The list of explanations was as long as the list of people who'd known him, and each seemed to me a simplification, perhaps even a lie. I understood these accounts were attempts by those who loved him to soothe the pain of a sudden, inexplicable absence, but I took it as my duty to preserve some ambiguity, if for no other reason than to allow him an inner life of some complexity, resistant to easy answers and summary judgments. I hoped that time and patience would one day reward me with the truth but I was in no hurry to get there. The question for me was never, Why did he kill himself? He killed himself, I assumed, because his life became unbearable. The question, therefore, was why his life had become unbearable, and since I knew very little about his life at the end, and even less about his frame of mind, I couldn't answer that question, and maybe never would. The proximate cause of his suicide—the breakup of an eight-month relationship—struck me as both too pat and maddeningly sketchy, a combination that led me to fixate on his final moments, improvising on the known facts, searching for a way into the mystery. I imagined his final hours again and again, long after a finer mind would have found peace or given up. I didn't want to find peace. 
To have found peace, I thought, would have meant giving up my obsession with him, but that obsession had become the one thing that gave my life meaning. The evening meals I shared with my parents that summer in Minnesota were funereal, as was only appropriate. To speak was to invite the possibility of invoking his name, and his name was just then unutterable, though he was always in our thoughts. In the beginning those thoughts focused on the last time we saw him, the last time we spoke to him; we hunted for clues we should have seen and didn't, or we tried ourselves on the charge of failing to love him sufficiently, a trial that couldn't help but end in a verdict of guilt. My sister, with whom I'd always been able to speak freely on any subject, was deep inside the drama that would result in a brief, failed marriage, and therefore unavailable for sibling heart-to-hearts. I knew better than to hope that my mother and father would look deep into each other's souls, reaffirm their vows of fidelity in the face of tragedy, and draw me into the safe, warm bosom of their loving embrace. Never having been the kind of people who spoke freely—or even elliptically—about their innermost feelings, they weren't about to start now, when the stakes were so much higher. My father soon devised a mantra—life was too short to dwell on a death he could not undo—that baffled me with what I took to be its refusal to feel a legitimate sadness; my mother's devastation revealed itself wordlessly, with an expression of almost complete vacancy in her eyes, as if she'd gone somewhere in her mind from which she would never return. Their estrangement from each other's experience of grief was too painful for me to contemplate it for more than a moment, so I turned away from them, turned inward—a strategy that became a habit, a habit that became a posture, a posture that solidified into an all-encompassing personality, that of a man shrouded in almost total self-regard. 
The ambiguity I preserved in the story of his death was already on its way to becoming the story of my life. He was my silent partner, my all-purpose excuse, my left-hand man, and depending on my whim I was sometimes calculating, sometimes impulsive, one minute attentive and the next minute aloof, one day hungry for intimacy and the next day desperate for freedom. By remaining enigmatic—by refusing to be any one way or any one thing—I honored him. He would remain forever unfinished, and so would I.

PART TWO

Fax Boy

My address was the movie house, downtown Missoula, on the banks of the Clark Fork. The yellow marquee glowed outside my bedroom window, and night after night an early and a late show played through the wall of the balcony across the hall. I read novels till dawn, slept till noon, napped around seven each evening with plugs in my ears to keep the movies muted. I walked the river paths after dark. I lurked in AA meetings in order to hear people talk honestly about terrible things. I drank coffee in one of three coffee shops each afternoon, whiskey in one of five bars most nights. I went months without having a conversation lasting more than three minutes. I swam through time like it was motor oil. I made one promise to myself. I would not buy a gun. I took a semester off and returned to New York on borrowed money, my first cash advance on my first credit card. I sublet an apartment in Queens whose occupant, an Italian man in his thirties, was laid up in the hospital with two broken legs. I didn't ask why. I completed my aborted internship at the Nation—a year and a half later than originally planned—for the sum of one hundred bucks a week, a willingly indentured servant at a magazine founded by abolitionists. I spent my days fact-checking articles on how to reinvigorate the labor movement, a staple of Nation reportage whose frequency and desperation of tone increased as union membership declined.
During lulls between deadlines I gathered specious research for a contrarian columnist on what he called the hoax of global warming. Back in Missoula, I worked on my pool game at Flipper's, my drinking game at Al's & Vic's. One day I received a piece of paper in the mail saying that I'd earned a bachelor's degree. I couldn't have begun to tell you how. Lacking immediate prospects after graduation, I stayed on in Montana. There was no urgency to make anything of my life, and Missoula was as fine a place as any to hide out from postgrad choices. Besides, the place was too beautiful to leave in summertime, and I couldn't bear to give up an apartment that cost $180 a month and placed me within easy walking distance of so many quality bars. On summer days fishermen cast their flies upstream from the Higgins Avenue Bridge, a hundred yards from my room above the Wilma, while a bagpiper went through his mournful musical paces, using the bridge abutments as acoustic enhancement. I eked out a living baking bread in the early morning hours alongside a failed novelist who'd mastered the texture of the baguette, though not the art of fiction, during two years in Paris in the 1970s. Afternoons in my apartment, with the windows thrown open to the breeze off the black cottonwoods along the river, I worked halfheartedly on what I hoped would become my own first novel, a doomed imitation of Paul Auster's New York Trilogy that stalled forever at page forty with the impossible scenario of a man tailing the ghost of himself after digging up his own grave and finding nothing in it. I felt authentically bohemian as I pounded on my manual typewriter, earplugs in place, while the muffled soundtrack of the week's feature film pulsed and droned through the wall. One of the theater employees was a daytime drinker who liked to stop by my room in the late afternoons and slyly proposition me, vodka fumes on his breath. 
He probably did so with all the bachelor boys, but I was vain enough, and lonely enough, to take it as a compliment. The building's manual elevator, one of the few of its kind still in operation west of the Mississippi, was staffed in part by a woman who'd never abandoned the apartment upstairs where her husband had shot himself a decade earlier, or so the rumor went. Riding the lift with her after a night out drinking, I fantasized about holding her hand in mine and telling her she was not alone. More than once I heard another rumor that David Lynch had spent some time around the place, long enough to use it as a model for the apartment building in Blue Velvet. Once you'd lived there awhile, the story had the ring of plausibility, though of course it turned out to be a fabrication. Every so often, when I felt myself slipping into a neurasthenic funk, I'd walk to the Orange Street entrance ramp on I-90 and hitchhike to Seattle to visit my uncle, hoping a brush with danger would snap me back to reality. Nothing very interesting happened on those trips, except for the time I was aggressively solicited to proffer my cock so my driver could fondle it with his right hand while steering with his left. He claimed all he wanted was to touch my cock for a while, then pull off the road and finish the job with his mouth. For this he'd drive me all the way to Seattle from the Idaho border. When I demurred, he stuck his thumb in his mouth and removed his dentures, allowing them to dangle in the space between us. He said, with real conviction, It'll be the best damn blow job you ever had. For a time I convinced myself that I'd given up on journalism. Life was too weird for journalism. I wanted to devote myself to art, to a bleak and eccentric vision along the lines of David Lynch. But the fact was I'd borrowed twenty-five grand to pay for an education in print journalism, so I had little choice but to pursue a career in print journalism, in order to pay off the twenty-five grand.
Baking bread for six bucks an hour in Missoula, Montana, was not going to cut it, and there was nothing else I was any good at. New York beckoned once more. My first apartment in the city was a Hell's Kitchen sublet arranged on my behalf by a friend. An actress owned the apartment; she'd gone to some backwater city in the American South to appear in a Shakespeare festival. I covered her co-op payments and looked after her cats while she was away. There were four of them. Three had come off the streets, and their ways had rubbed off on the fourth, so that all were now at least part feral. Perhaps they felt abandoned by their owner, perhaps they just didn't like me, but they ceased to use their litter box, or rather they made the entire apartment their litter box. I chased them around with a broom, tried to frighten them into behaving, but that only provoked them to new outrages. I came home one night and found they'd torn apart my pillow, now just a cloudscape of synthetic stuffing floating across the bedroom floor. From then on I made my home away from home at McHale's, a bar off the west edge of Times Square, four blocks from my apartment. The hamburger at McHale's was the best in the city, the bartenders—all of them female and all of them comely—poured spirits with a heavy hand, and the stools felt as if they'd been designed by ergonomic specialists devoted to the comfort of the human rump. Soft orange lamps burned dimly through the cigarette haze, and ceiling fans spun languidly in the sepia-toned light. I went there more than once in the daytime, but it was a bar built for the needs of the night. It was a hangout for off-duty cops and neighborhood residents and people who worked in the theater district, grips and lighting people and understudies and even the occasional name actor. It had the feel of a place that had been in the family for a very long time, as I later learned it had: half a century, to be precise. 
Ticket scalpers used it as a drop-off point, so there was a lot of traffic in and out, people leaning over the bar and offering their names, leaving with envelopes slipped in purses and pockets, a trade that gave the place a casually illicit flavor. I liked it in part because the help had a masterful sense for the balance of friendliness and discretion. The one thing they felt a need to know about you was your name. All the rest unfolded in conversation if you felt like talking. If you didn't, that was fine too. No one there knew my story, which was just as well. Nobody could vouch for me, or badmouth me, as long as I avoided romantic entanglements with the regulars. For a while, avoiding romantic entanglements became my highest priority, next to finding a job. I sent my résumé to two dozen magazines and a handful of newspapers. I was summoned for an interview just once, a courtesy I was granted because I knew someone who knew someone who worked at the magazine. It was called Civilization and was affiliated with the Library of Congress. A secretary guided me to the office of the editor, Nelson Aldrich, who asked me about my internships. I told him of the meticulous fact-checking I'd done at the Nation, the intrepid street-level reporting I'd done during my summer at the Fargo Forum, the many things I'd learned about the ways of the world while staring into the abyss of an impending deadline. I must have gone too far with the self-marketing, because Nelson Aldrich said I was overqualified. He was looking for an editorial assistant—a gofer, essentially. I told him I really wanted the job, wanted the chance to be part of an organ of substantive journalism, even if only as a gofer. He said I'd probably find the work boring and he didn't want a bored assistant moping around the office. I told him it wasn't my style to mope in the workplace. He told me the pay was poor and I could almost certainly find something better. 
I told him I'd already been looking for two months and didn't share his optimism. We spent most of the interview in this way—me begging in an unseemly manner for the job, him trying to talk me out of wanting it. After I left his office I never saw him again. I might have had to leave the city a failure if I hadn't called the head of the journalism department at the University of Montana. Before retreating to academia, Frank Allen had worked at the Wall Street Journal, so I figured he knew some people in New York who could lend me a hand. He'd been kind to me as a transfer student, helping me match classes I'd already taken with a new curriculum, and now he gave me the name of an editor at the Journal, told me I should call her and ask her to coffee. The thinking was that she might know someone who was willing to take a chance on a hungry young journalist from the northern plains. Francine Schwadel oversaw the paper's legal-affairs coverage. We met on the mezzanine level of the paper's home building at 200 Liberty Street, just across West Street from the World Trade Center towers. Sitting at a tiny table with a faux-marble surface, a paper cup of coffee in her hand, Francine Schwadel said, in her gravelly Brooklyn accent, that Frank Allen had hired her when he was chief of the Philadelphia Bureau of the Wall Street Journal, and for that she was eternally grateful. There was no longer a Philadelphia Bureau of the Wall Street Journal, and about that she was sad. She asked me a few questions about my experience, my goals, and then she said, Well, young man, my time is short, but your timing is awfully good. I've just been given clearance to hire a news assistant. Would you be interested in the job? Yes, I said. Of course. She told me to send her a résumé, cover letter, and six samples of my writing by the end of the week.
When I left the interview, which I hadn't even known was going to be an interview—I thought she'd give me the names of some people she knew, and I'd have coffee with them too, and they'd give me the names of other people with whom I'd have coffee, and I'd follow the chain of connections until someone offered me a job—I was conflicted. All of a sudden I had a chance for a job at a paper that considered itself the world's most important publication, but I didn't want to work at the world's most important publication. Journalism had appealed to me, in the beginning, because I'd been told by one of my professors that it was among the surest means of comforting the afflicted and afflicting the comfortable. To an idealistic undergrad with socialist inclinations, that chestnut made journalism sound both noble and fun, but of all the places for a young man on the make to pursue a career in journalism, the Wall Street Journal seemed about the least compatible with a desire to comfort the afflicted and afflict the comfortable. I had a problem, though, and it wasn't politics, which had begun to matter a lot less than the growing balance on my credit card. The legal-affairs editor wanted to see six samples of my writing, but I had only four, maybe five good ones from my days as an intern at North Dakota's largest daily newspaper. I didn't want to fall back on clips from my student newspaper days. The piece of which I was fondest was an essay I had written for the Nation about a proposed open-pit gold mine on the Blackfoot River in western Montana. In a throwaway line about a logging company whose clear-cuts of healthy forest had fouled the river with silt and killed untold numbers of fish, I'd written the following: "Even a newspaper as sympathetic to corporate plunder as the Wall Street Journal once called Plum Creek the 'Darth Vader of the timber industry.'" I doubted the legal-affairs editor thought of her employer as sympathetic to corporate plunder. 
And I very much doubted she would hire me if she discovered I'd written such a thing. I suppose I could have laughed it off as a youthful indiscretion with the English language if she asked, but I didn't want to take that chance. I had an acquaintance I trusted at the Nation and I called him, explained my situation, and asked if he'd do me a giant favor. Would he open the electronic archive of the magazine, touch up my article that said unkind things about the Wall Street Journal, and then print for me a copy of the doctored article, which would no longer say unkind things? At first he was reluctant. He didn't want to tinker with the historical record of the magazine. I told him he should of course change back my wording before saving and closing the file. Not exactly the sort of thing I'd been taught in J-school, but he complied. Shortly afterward, I was hired. I showed up for my first day of work wearing a starched white shirt and a sober red tie, wanting to make a good impression. The first order of business was to get my picture taken and affixed to a magnetic pass card. When waved in front of a beam of discerning red light, the pass card unlocked security doors in the paper's austere corridors. Later I would learn that before the paper moved to the World Financial Center it did not have locked doors in its hallways, and one day a senior executive had returned from his lunch to find a sample of human feces on his desk chair. When the paper moved to its new headquarters, the executive insisted on the installation of locked doors that could only be unlocked with special pass cards. In theory a security measure, the pass cards also allowed the paper to track the movements of individual employees as they circulated through the hallways, thereby discouraging anyone who might have had a hankering to leave a malodorous turd on an executive's desk chair. As a news assistant, I mainly fetched faxes and replenished empty water coolers.
I spent most of each day standing over a squadron of fax machines, collating and stapling press releases and court documents, then delivering them to reporters who covered corporate law, telecommunications, and the various health care industries. I performed this task with actuarial efficiency, the paper a blur in my hand like a magician's trick; I served the reporters their faxes with the cordial discretion of a headwaiter in an uptown restaurant. The best means I had of telling good days from bad was by noting, at the end of my shift, whether or not I'd avoided a paper cut. I'd spent my late teens and early twenties working dismal jobs—donut fryer, bartender, UPS package unloader—and borrowing heavily to pay for a college education that qualified me for a job that was already obsolete. People didn't need to send faxes anymore. They could send email. I thought about mentioning this to Francine Schwadel. Could we not encourage people to send their documents electronically, thereby saving the world lots of paper and me lots of time? But then I wondered whether that would result, a little too efficiently, in my own obsolescence. So I kept my mouth shut, sorted and stapled the faxes, and every two weeks cashed my paycheck, which still came quaintly on paper, despite the advent of direct deposit. One day my phone rang at work. It was my friend Mary, who'd put me in touch with the actress sublessor, she of the feline-occupied apartment. Mary was feeling a little chagrined about the cats and wanted to make things right. She said she had a lead on another apartment. A friend of hers was moving to New York from Detroit. The friend from Detroit had recently visited the city and, while staying with people she knew in Brooklyn, was shown a lovely old brownstone apartment. The landlord, being bisexual and living on the premises, sought tenants who were gay, bisexual, or gay-friendly. 
The woman from Detroit happened to be a lesbian; being a homo-friendly straight guy, I was deemed a suitable candidate to be her roommate, at least by Mary's reckoning. Mary was a writer, young and fledgling but with obvious talent. We'd met at the Nation and talked in an easy way from the moment I showed interest in her work. She had introduced me to the poetry of Theodore Roethke, a good thing to have in a dark time. I sensed early on that Mary wanted to be more than friends. She only made this clearer as time passed, and with each manic flutter of her eyelids I wondered: What could be so wrong with her that she found me attractive? I didn't want to be Mary's boyfriend. I wanted to be Mary's friend. She'd been kind to me when no one else had; she was among the few people who'd taken an interest in me in that lonely city. On my computer at work I could enter any street address in America and retrieve census tract data in a couple of clicks—one of the many slick tools available to editorial employees of the Journal. I typed the number and the street name I'd been given, and onto my screen came a statistical snapshot of the neighborhood. A median family income barely half the national average. More than a third of residents with incomes below the poverty line. A population quantified like this:

American Indian—0
Asian—0
Black—4,294
Hispanic—162
White—13

At first I thought there'd been a misprint. Thirteen of my hue in a sample of almost 4,500? A minority population of 99.71 percent? The numbers didn't seem plausible. Then again, almost everywhere I'd ever lived—Iowa, Minnesota, Montana—the ratio of white to black had been reversed. If I was as broad-minded as I thought I was, what did I care if I was in the minority for once? As I considered the merits of a move to Bedford-Stuyvesant, I sensed an opportunity to achieve, among other things, a kind of experiential compensation for my job.
Every day my employer published a record of the news that was about the length of a short novel, and the version of reality contained in those pages, while interesting and even sometimes useful to the degree you had money lying around—and often most enlightening for the unspoken assumptions undergirding its conventional wisdom—bore almost no resemblance to the world I confronted day-to-day, and left out most of what interested me. It aimed to be the indispensable read for the rich and the reactionary, of which I was neither. The saying about the place was that you got two papers for the price of one: a respectable, often hard-hitting news section that glorified and scrutinized titans of commerce and empire, and a piss-and-vinegar editorial page that acted as the bullhorn for the interests of the moneyed class and the Republican Party. Some reporters I knew refused to read the editorials on principle, as if to acknowledge their existence was to admit that they tainted the integrity of the paper's reporting. Merely to mention the editorial page in the newsroom was to elicit a chuckle or a grimace, as if you'd audibly passed gas. My own embarrassment was intensified by the fact of my peonage. My duties were unrelated to any notion of integrity. I was a fax ferrier, a nobody, the guy whose most important task was to ensure that the water cooler didn't run dry in the middle of the day. To say I worked in news was only accurate to the extent that I worked in a newsroom; I had nothing to do with the pursuit of news. One morning I showed up at work to find a message on my telephone from a man named Peter Brinch. He said he was a friend of Frank Allen's and was calling because he had a tip he wanted to share with a journalist. Frank Allen had kindly told him to call me. I immediately called him back. I didn't tell Brinch that I wasn't a practicing journalist. Nor did I tell him that what the public thinks of as a good tip is often not news at all. 
There was a hint of sophistication in his voice that made me think he might tell me something extraordinary, something that would change the flat-line trajectory of my so-called career. Brinch told me that he knew a man with an obsession for McDonald's. That didn't sound promising. People obsessed about all sorts of things, and their obsessions were not news. They were just obsessions, some of them mildly intriguing, most of them pointless or creepy or sad. Brinch continued: This guy has made it his life's goal to eat at as many McDonald's restaurants as possible. So far he's eaten at more than ten thousand of them, most in the United States, a few in Canada and the Caribbean. He's been to many that no longer exist. He started eating at McDonald's in the 1970s. Name any town or city in the U.S., and this guy can tell you whether it has a McDonald's, and if so how many, and where they're located. He has a photographic memory. He takes all his vacations in places where he hasn't eaten at the McDonald's yet. He considers his visits to new McDonald's a form of collecting. Collecting the McDonald's experience, he calls it. He makes notes on every restaurant he visits. He's never told his story to anyone. But I think he might be ready to talk. Brinch told me the guy's name, which was Peter as well—Peter Holden—and he passed along Holden's phone number. I thanked him for the tip. I think it would make a good A-hed story, I really do, Brinch said before hanging up. An A-hed was a story that ran in column four on the front page of the Wall Street Journal every weekday—a light-hearted, often humorous story that readers loved because it represented an island of levity within a sea of more serious news. Editors called it the A-hed because the box around the two-deck headline above it was shaped like a square-topped A. It occurred to me, after I hung up with Brinch, that the A-hed was often about someone's weird obsession. During my lunch break I called Peter Holden. 
He told me he worked for a data-imaging company in Virginia. He explained that his firm scanned documents and compiled them in databases that people could peruse with computers. This eliminated the need to replicate documents in paper form, and therefore saved a lot of trees. Holden told me he was coming to New York on business the following week. We agreed to meet for lunch at a McDonald's near the newspaper's office in Lower Manhattan. He said he had red hair and a brown briefcase. He offered no other particulars about his appearance, but I made the natural assumption. When I arrived at the restaurant, I looked for the fattest man in the place, but the fattest man in the place did not have red hair or a brown briefcase. The only man with red hair and a brown briefcase was tall, trim, and looked about forty-five years old. Holden had told me he was fifty-three. We shook hands, ordered lunch. He was friendly, a little bit shy of his achievement, and a little bit proud beneath the shyness, prouder as the lunch wore on. He ate two Quarter Pounders with cheese—no onions—and drank a large Coke. I ate a Big Mac Value Meal with fries, drank a Hi-C Orange. He said that when I'd first called, he couldn't believe a reporter would have interest in a story such as his. Then he realized that if The Guinness Book of World Records had an entry for solo visits to McDonald's, he would almost certainly own it. As a token of thanks for my interest, he wanted to pay for both of our meals. I told him he couldn't do that; I would have to pay for both meals. At first he resisted, but I told him it was journalistic protocol. A reporter could never accept gifts from potential sources or subjects, even if the gift was only a Big Mac Value Meal: Journalism Ethics 101, avoiding the appearance of a quid pro quo. Holden showed me several folders full of notes about his visits to McDonald's. I looked at the number for the most recent entry: 10,892. 
That's not even all of the ones I've visited, he said. For years I went to McDonald's without taking notes. Only after I'd been to a thousand or so did I start. I asked him to tell me how many McDonald's there were in Fargo, North Dakota, and he did. I asked him how many McDonald's there were in Missoula, Montana, and he listed them by the names of the streets they were on. I asked him why he started doing this—collecting the McDonald's experience. He said that by the 1970s he'd visited every state capital and national park in the U.S. of A. He'd collected them all, from Montpelier and Cheyenne to Montgomery and Santa Fe, Glacier, Zion, Gettysburg, the Everglades. I wondered what else there was to do, he said. So I thought I'd try to eat at every McDonald's. But they built them faster than I could get to them all. He said his one-day record for visits to McDonald's was forty-five. He'd accomplished this in the suburbs of Detroit. Partway through that epic day he bought cookies for the road, since a visit didn't count unless he ate something from the restaurant, although the actual eating didn't have to happen in the restaurant. At the conclusion of our lunch, I invited him up for a tour of the newspaper. He seemed delighted by the fact that I could wave a little pass card with my picture on it, and doors in the hallways of the Wall Street Journal would open for me. He asked me what subjects I covered for the paper. I was ashamed to admit I sorted faxes and replenished water coolers, so I told him I was a special research assistant to reporters who wrote about law, telecommunications, and the various health care industries. As we circulated through the maze of cubicles in the newsroom, I made sure to avoid the wing of the tenth floor where people knew me. I told him his story was fascinating, a kind of quest story of a uniquely American kind. 
If my editor gave the go-ahead, perhaps I'd visit him where he lived, in Virginia, and we'd try to find a McDonald's somewhere in the vicinity, ideally a McDonald's he hadn't visited yet, although that seemed unlikely. I first saw Bed-Stuy after dark, so I hardly saw it at all. The C train carried me from the glassy chill of the Financial District to the Kingston–Throop stop on Fulton Street, and from there I walked the dozen blocks to Monroe Street, just off Marcus Garvey Boulevard. It was raining when I got off the train. Everyone hurried along the sidewalks hunched with umbrellas and newspapers over their heads, their knees bent in semi-crouch. With our hands up and our heads down, we looked like we were fleeing the wrath of something horrible come down from heaven. The walk was long, fifteen minutes from the subway. The neighborhood was mostly residential, street after street of beautiful old brownstones, bodegas here and there on the avenues, an occasional barbershop or storefront church. The landlord answered the door when I rang. He introduced himself as Ben. He was a sharp-looking man, bald-headed, thirtyish, from Trinidad, with a suave but laid-back British Island accent. He lived on the top floor of his three-story brownstone. A lesbian couple lived on the ground floor, and the middle floor was open. The place was lovely: high ceilings, decorative molding, a claw-footed tub, two bedrooms and a decent-sized kitchen. I looked the place over. Ben looked me over. I'd come straight from work wearing a blue dress shirt, a red tie, and a black corduroy overcoat. I tried hard to appear a gay-friendly dude who'd pay his rent on time. Mary tells me you work at the Wall Street Journal, he said. It's true. I'm pretty sure I'm the only socialist there. So you don't mind situations in which you're the outsider, he said. I think that's safe to say, I said. He gave me an application to fill out, told me he'd check my references and get back to me afterward. 
You come as a friend of a friend of a friend, he said. It's probably yours if you want it, but let me do my due diligence. When I told Francine Schwadel about Peter Holden, she thought I was kidding. She asked if I could verify his claim to have visited 10,892 McDonald's. I said, No, not exactly, but he showed me some of the notes he took about them and he seems pretty trustworthy. That's not good enough, she said. We need absolute proof. If you can prove it, I think we've got a story. I called Holden. I told him I needed to see copies of his notes from all 10,892 of his visits to McDonald's. He said that would be impossible. Each collection of notes ranged from a few sentences to half a page or more. It would take him forever to make copies. I asked if he could use his company's technology, scan the notes, and create for me a searchable database. He said he didn't think he could use the company's technology for personal business. I reminded him I was a reporter at the Wall Street Journal. We needed solid sources. We verified facts before putting them in the paper. He said, Why don't I send the last three thousand entries or so, and you can look through them and send them back? They're all numbered. I didn't start at five thousand. Come to think of it, I'll send you some from the beginning and some from the middle and some from the end. I ran this by Francine Schwadel. Tell him we'll pay to have them shipped, she said. When they arrived, I took them home and spent an evening with them. Their banal repetition had a strange poetry to it, a kind of Whitmanesque list-making for the end of the millennium; in almost every instance he'd noted what he'd eaten, and the thought of all those empty calories, millions and millions of them, staggered me. All that ground-up cow flesh. All that corn syrup. All that time spent breathing the oleaginous air of the nation's McDonald's franchises. The next morning I showed the notes to Francine Schwadel. 
I told her about my idea to visit Holden where he lived and take him to a McDonald's he'd not yet seen. I'd discovered the existence of a new one not far from his home, just across the state line in Maryland. That way, I said, I'll be there for breaking news. Very clever, she said, grimacing. Write me a proposal and I'll send it on to the page-one desk. I'll see if they'll let you travel. Don't get your hopes up. And tell your guy not to talk to any other reporter, anywhere, until your story runs. After approval came down from page one, I was instructed to book a Friday evening train to Washington with my own credit card. I was needed at the fax machines during regular work hours, and as a greenhorn I would not be allowed to report on company time, though all my expenses would be reimbursed. For someone of my position, reporting was an extracurricular activity. I'd learned some tricks about the various strategies one could employ from listening to the reporters around me. The guy in the cubicle to my left had the manner of a no-nonsense dentist. He was blunt and demanding, all business, insisting that he didn't want to waste anyone's time, so why beat around the bush, just tell me what I want to know and I won't bother you again—and people did. The woman in the cubicle across from me adopted the pose of a hopeless neophyte, confused, in over her head. She asked for things she pretended not to understand to be repeated, slower this time, like you're talking to your adolescent niece—and people did. Sometimes she laughed to herself after hanging up, amused by her own performance. She ought to have been. She broke news on all sorts of sophisticated Wall Street shenanigans and she got a lot of her leads by sounding like a complete ditz on the telephone. With Holden I simply shut up and listened, nodded and ah-hummed a lot, and took page after page of notes as he extemporized. On a muggy Saturday morning we drove to a new McDonald's in College Park, Maryland. 
From the moment we stepped inside he smiled with childlike enthusiasm, his head swiveling as he tried to take it all in. The seating area had plastic tabletops laminated with the University of Maryland shield, and the walls were emblazoned with the words MARYLAND TERRAPINS. Look at this, Holden exclaimed. I love this stuff! He told me his greatest excitement in life came from finding a McDonald's restaurant with something slightly different about it, since most were carbon copies. He thanked me profusely for leading him to a version that was one of a kind, and for sharing in the joy of the discovery. We stayed no more than half an hour; the place was jammed with customers, and it didn't feel right to linger merely to admire the tabletops, the walls. That afternoon he showed me around his home in suburban Virginia. Each room contained a different collection of some object: African masks in the living room, Russian nesting dolls in the dining room, and so on, dozens and dozens of each particular thing. In its museumlike tidiness, it looked like the kind of place a fastidious serial killer might call home. I couldn't stop myself from picturing a collection of severed body parts somewhere in the attic—thumbs, ears. Are you a collector of anything? he asked. About to say no, I thought of the commonplace book I'd been keeping. If I'd wanted to disturb him even more than he'd disturbed me, I could have quoted some of the entries.

Pavese: No one ever lacks a good reason for suicide.
Jong: It was easy enough to kill yourself in a fit of despair. . . . It was harder to do nothing.
Freud: No neurotic harbors thoughts of suicide which are not murderous impulses against others redirected upon himself.
Nietzsche: The thought of suicide is a great consolation: with the help of it, one has got through many a bad night.
Pliny the Elder: Amid the miseries of our life on earth, suicide is God's gift to man.
Artaud: If I commit suicide, it will not be to destroy myself but to put myself back together again.

And so on. I assumed that counted as collecting, but it wasn't the sort of collection you shared with anyone. Baseball cards, I told him. As a kid. In my spare time at work I continued reporting. I called Holden's boss at the data-imaging company, who told me that he did his best to accommodate Peter Holden's urge to go out of his way on his business travels and collect the McDonald's experience. If you can handle his mysterious routes from point A to point B, he said, he's the best employee you could have. I called Holden's ex-girlfriend. She told me that at first she couldn't understand why Holden stopped so often for snacks on their vacations, always at McDonald's. After about a year, she said, I finally confronted him. Why stop at McDonald's six times a day? Why not Burger King? I wasn't hungry or thirsty, so I'd sit in the car. I'd see him inside taking notes. I'm a psychotherapist, and I could never figure him out. She went on record that McDonald's played no role in their eventual breakup. I called officials at McDonald's corporate headquarters, who declined to comment. I called a woman at McClip, the barbershop inside the McDonald's headquarters building, where Holden told me he'd had his hair cut several times. The stylist remembered him immediately. She said Holden was enamored of the fact of getting his hair cut in the same chair where the McDonald's CEO got his trim. In fact, once every six or eight weeks for nearly two years in the early 1990s, Peter Holden had made the 725-mile trip from his home in Virginia to Oak Brook, Illinois, to get a twelve-dollar haircut. Thus did I come to write my first story for the Wall Street Journal, a front-pager, an A-hed, a humorous and lighthearted tale of one man's obsession that would turn out to represent the crowning achievement of my career in journalism, though I couldn't have known it at the time.
The headline read: "Not All McDonald's Are Carbon Copies, a Collector Attests: Peter Holden Eats at 10,893 and, Like a Wine Lover, Enjoys Subtle Differences." That day the recipients of my fax deliveries, some of whom had yet to acknowledge my existence, dared to make eye contact. Some even whispered words of encouragement. It felt like my coming-out party, minus most of the things that make a party a party. Just before lunch, my telephone rang. It was a literary agent. He asked if I'd considered turning my story into a book. I told him I was flattered by the idea but I thought it wouldn't make much of a book. There were some details I wished I could have added if I'd had more space, but not many. The agent said I was probably right. Maybe the thing to do, he said, is find ten other obsessed people and write stories about them and package them in a book of short pieces. Maybe, I said. The week after my story about Holden appeared, an assistant managing editor stopped by my desk and told me that he hoped to see more of my byline in the paper. Since my phone wasn't ringing with more good tips, I became the guy to whom editors turned when they needed someone to write a small item on deadline. I wrote one, for instance, about a medical study on the dangers of cigar smoking. The headline read: "Cigar Smokers Face Increased Risk of Cancer, Study Says." I thought this was pretty obvious, but the medical editor assured me it was breaking news. After my first few months on the job, Francine Schwadel called me into her office and gave me a performance review. She said I did a very fine job of handing out faxes and was proving myself to be a diligent reporter. If I showed patience, I would one day be promoted and could move on to something more important than handing out faxes. For a moment I was taken with the thought of someone bringing faxes to me. I finished the term of my sublet, rented a moving van, and moved what little I owned in one trip to Bed-Stuy. 
To celebrate, I went for a few beers and a burger at McHale's, my farewell visit as a resident of the neighborhood, though I knew I'd always return, no matter where I lived in the city. Three hours later, feeling a little queasy, I decided to splurge on a taxi home. Where to? the driver said. When I told him the address, he said, Where's that? Bed-Stuy. He looked at me in the mirror. I think I can find it. I hope so, I said. I've only been there twice. He scribbled on his clipboard, reset the meter. A few moments later, stopped at a red light, he looked at me in the mirror again. You know someone there? I don't know anyone there. I just moved there this afternoon. What for? The price? Two bedrooms for eight hundred bucks. Sweet deal, he said. But you're aware that ain't your neighborhood. He was one of the few white-ethnic cabbies I'd seen in the city—Irish, apparently, from the name on the license on the back of his seat—but even so the sternness of his tone surprised me. Yeah, haven't seen too many pale faces in the neighborhood, I said. You're not going to, he said. We drove in silence over the East River, the diorama of the city skyline receding behind us. He found the address without trouble. Remember this, he said, turning to face me. When you need a lift, tell your driver to use the Williamsburg Bridge. It's the quickest and easiest way. You come onto Broadway and look for Woodhull. Right after the hospital hang a right and you're on Marcus Garvey. Five minutes and you're home. Thanks, appreciate that, I said. Let me tell you something else, he said. A lot of drivers won't want to come out here. It's a no-man's-land for getting a fare back to the city. Watch how many taxis you see in the street. I'm telling you there won't be many. Easiest way to shirk a fare is to say, I don't know how to get there. So you'd better know for them. Until then I hadn't felt the tiniest tremor of fear about my move. 
To be afraid, I thought, would have been to admit to a streak of latent racism, and I didn't believe I was racist. Nonetheless a veil of suspicion dropped between me and the neighborhood all of a sudden. Worried, I cast about for points in my favor, as if polishing my make-believe résumé of racial sensitivity. At the age of seventeen I'd read The Autobiography of Malcolm X and was so moved I went out and bought a T-shirt with his face silk-screened on the front. In Minneapolis, in 1992, I'd marched alongside some deodorant-averse white people to protest the verdict in the beating of Rodney King. In college I'd judged Martin Luther King, Jr.'s "I Have a Dream" speech the greatest of American orations. As a result of my liberal arts education, I'd gained some acquaintance with the works of Frederick Douglass, Langston Hughes, James Baldwin, Toni Morrison, names unknown in the house where I'd grown up. I owned several dozen jazz albums, from King Oliver to Sonny Rollins. In the greatest NBA rivalry of my lifetime I'd been on the side of the Lakers over the Celtics—Magic trumping Bird, Showtime all the way. Maybe this collection of random facts would cohere into a signal of my harmlessness and emanate from my being on the streets, discernible via ESP on the lower frequencies. Surely my new neighbors would recognize a kindred soul, a fellow American acquainted with the deep meaning of the blues. Class kinship would trump racial difference, that old dream of the democratic socialists. Then again, the way the cabbie spoke, with a note of warning in his voice—as if he'd sniffed the provinces on me and felt compelled to protect me from my ignorance—clued me in to the fact that my choice of neighborhood was unlikely to be viewed by its longtime residents as a compensatory counterweight to the fact of my employment at the flagship paper of Dow Jones & Company, unless I wore a sandwich board announcing my motives on my walks to and from the subway. 
My family and friends were bemused that someone of my political persuasion would end up working at the Wall Street Journal, and the few colleagues at the paper with whom I shared the news of my new residence were just as baffled that I would choose to live someplace where I so obviously did not belong. At the newspaper the mere mention of whose name evoked images of power, I had none; in a neighborhood that stood as a stark example of powerlessness, I had the look of a man with more power than anyone by far. Walking down Marcus Garvey Boulevard each morning to the train, wearing a suit and tie on my journey between these worlds, I felt myself traversing the righteous path of the outcast. It was a kind of performance, a daily tightrope walk across a yawning chasm, a journey both precarious and surreal, and I savored every delicious and delirious second of it. Once I convinced myself I would be welcomed in Bed-Stuy, it was only a short leap to imagine myself saved in Bed-Stuy. By being called to the surface of things, by being forced to rise out of self-obsession and deal with the tangible world around me as something other than a bad joke, maybe I could begin the work of forgetting the phone calls I hadn't made, the words I hadn't said. Maybe, by some miraculous encounter in the streets, I'd be granted the forgiveness I couldn't grant myself—or, failing that, endure my punishment and emerge reborn. White Boy Monroe Street marked the northern edge of those stately brownstones that gave Bed-Stuy its architectural charm. Not just Monroe Street, but my side of Monroe Street. Across the street to the north most of the houses were wooden or vinyl-sided, and some had been abandoned, plywood nailed over their windows. A little farther north the Marcy Houses loomed, aesthetic monstrosities that always put me in mind of medium-security prison architecture, but taller. 
Jay Z had grown up there, but at the time I couldn't have told you the first thing about Jay Z, even though "Hard Knock Life (Ghetto Anthem)" had been playing all over the city for a year. There were three businesses on the corner of Monroe and Marcus Garvey: the Fried Chicken Palace, a Chinese takeout, and a tiny bodega. Up the avenue, just before the projects, was the only grocery store within walking distance. I shopped there twice, once in ignorance and a second time in desperation. The fruits and vegetables looked secondhand. A box of macaroni and cheese sold for about twice what it would've cost on the Upper East Side of Manhattan. The neighborhood, it turned out, was what sociologists called a food desert. One night I went to the Chinese takeout. The kitchen was sealed behind a wall of bulletproof glass so opaque that only the smells of food and fryer oil gave away the fact that the men in back, barely visible as white blurs, were cooking and not stamping license plates. I ordered the chicken lo mein and stood back to wait. Loitering was not encouraged. There was no place to sit. If you wanted to eat on the premises you could set your food on a chest-high shelf along the wall. A man said to me, Hey, I saw you the other day. He was short—maybe five-five—and wore thick glasses. The skin on his face was mottled pink in places, as if he'd had a series of skin grafts that hadn't quite worked out. In the bodega across the street, I said. What are you doing here? Getting dinner. No, I mean here, he said, waving his arm to take in the whole neighborhood. I live here. You buy a house? I rent. He stared at me with a look of profound confusion. He opened his mouth as if to speak but couldn't find the words. A friend of mine put me in touch with the landlord, I said. He wanted to rent to people he knew, or people his friends knew. And the price was right. My name's Phil—I extended my hand—what's yours? He told me and said, I been in the neighborhood forty-six years. 
Born and raised. I'm a short-timer by comparison. What do you do? There were five or six people standing around waiting for food, and they were all looking at me. I tried to think of something to say—other than the truth—but nothing clever came to mind. I work at the Wall Street Journal, I said. Shit, he said. Trading stocks and making stacks of cash. No, I said—and here I did lie, for reasons that were inexplicable; the lie just came to my lips and escaped in an instant—I write about people who trade stocks. A journalist? he said. A journalist? He squinted and turned up his nose. Throwing mud at people, he said. Draggin' 'em through the dirt. Ruinin' people's right to make a living. A journalist. He turned and spat on the floor as if the word had dirtied his mouth. I'd written five or six pieces in my time at the paper, most of them tiny spot-news fillers, things I could tap out at the margins of my days. I was still first and foremost a fax boy, earning barely twenty grand a year, but I was making myself sound like some kind of big shot. Still, I knew I couldn't backtrack without looking like a fool. I throw mud at people who deserve mud flung at them, I said. He smiled and looked around the room. He raised his arm and gestured toward me. I thought he might be about to hit me, and my arms tensed, ready to deflect his punch. Instead he said, as if he were the arbiter of such things, as if he knew he held my fate in his hands but decided to let me slide: This guy's all right. The woman behind the bulletproof glass called out my order. I stepped up and paid through the little cash-exchange hole. He sidled up as I put the change in my wallet. Will you give me a dollar, man? Give you a dollar? Yeah, man, just a dollar. Again I felt myself performing, everyone waiting to see what I'd say. I thought I'd do well to avoid establishing a reputation as the white boy in the neighborhood who went around giving away his money. No, man, I worked hard for this dollar. 
Come on, man. I need this dollar. I need to buy lunch tomorrow. I need to pay my rent. Okay, okay, he said, palms up in a gesture of surrender. I'll see you around, though, I said. That's right you will, he said. Every day. I never saw him again. Less than a year into my tenure at the Journal, I learned of a job opening on the Leisure & Arts page. It was listed on the company's internal Web site, a copyediting job, repairing split infinitives and run-on sentences. I fastened with unreasoning hope on the notion that the job—and the raise that came with it—could be mine. My hope vanished the moment I learned that, in order to get the job, I would first have to sit for an interview with Bob Bartley, the editorial page editor of the paper, who oversaw hiring for the Leisure & Arts page, which he otherwise supervised with benign neglect. Bob Bartley was among the most influential American journalists of the second half of the twentieth century, although his name was not widely known outside of New York and Washington. He was fairly soft-spoken, and his posture was not what you'd call ideal. He rarely smiled, but when he did he looked like a cat who'd just swallowed your canary. Bob Bartley's two abiding obsessions were taxes and weapons. He thought taxes should be cut always and everywhere, except for poor people, on whom they should be raised as a disincentive to being poor, and as for weapons he thought America should build as many as possible. The more weapons we had, in his view, the less likely we were to need them. But he believed that occasionally we needed them to bomb other nations that were trying to develop them too, because those nations couldn't be trusted not to use them. In order to further thwart the nations that, unlike ours, couldn't be trusted not to use their weapons, he thought we should spend however many trillions it took to build a missile-defense shield, that sci-fi umbrella that would protect America from the rain of other nations' missiles. 
Bob Bartley believed that with tax cuts, lots of weapons, and a missile-defense shield, Americans would remain safe, happy, and prosperous. Bob Bartley had been writing editorials about these ideas for almost thirty years. Someone once made a joke about editorial writers. Why is writing an editorial like pissing yourself in a blue serge suit? Because it gives you a warm feeling, and nobody notices what you've done. Bob Bartley was no trouser-wetter, though. From what I could discern he never had warm feelings, and people in power tended to notice what he wrote. The arena in which he'd had his greatest influence was tax policy. He was American journalism's leading proponent of trickle-down economics: by cutting taxes on rich people and raising them for poor people, he argued, more money would end up not only in the hands of rich people but—because the rich people would spend it on housekeepers and yachts—in the hands of people who kept houses and built fancy boats. Because everyone would be making more money, the government would generate more revenue in taxes, even though the top tax rates were lower. Since bloating government coffers with more taxpayer money was actually a bad thing, an evil outcome of sound policy, the government would be obliged to funnel the extra tax revenues to bomb-building projects—in effect throwing the money away, since it created wealth, in the form of weapons, that could only be used once, if at all, and then only to destroy, never to create more wealth, which thus ran counter to the essence of capitalism, wealth creating wealth—while at the same time cutting programs for poor people and generally running the machinery of government with an incompetence bordering on malice, which would make poor people angry at the government and entice them to vote for Republicans, just like most rich people did, ensuring Republican rule forever. Despite the baroque strangeness of some of his ideas, Bob Bartley had once won a Pulitzer Prize. 
When I first joined the paper, Bob Bartley was in the late, hysterical stages of his obsession with Bill Clinton. Bob Bartley's editorial page had printed enough editorials about Whitewater to fill three thousand pages in six anthologies. Bob Bartley was proud of these books, even though no one bought them. He thought Whitewater was comparable to Watergate; he was hoping to bring down a president, in the manner of Woodward and Bernstein, and perhaps win another Pulitzer Prize. But despite his three thousand pages of editorials, the Whitewater investigation devolved into an absurd argument about whether fellatio is actually sex, and the president did not resign and was not forced from office, although Bob Bartley was adamant that he should have been, because Bob Bartley did not approve of extramarital fellatio, at least not for Democrats. When a reporter had asked him whether he and his editorial page would've attacked Newt Gingrich or another prominent Republican faced with similar charges of sexual misconduct, Bob Bartley admitted that "we would have defended them. That's the way it is." I was nervous when I went to Bob Bartley's office. My internship at the Nation featured prominently on my résumé. While the work I had done there was utterly harmless to the spread of corporate capitalism, the Nation was known to say kind things about socialists. Bob Bartley detested socialists. Bob Bartley held my résumé in his hands. I feared he would ask me about socialism, taxes, trickle-down economics. I would then face a choice: I could either tell him what I thought about these things, whereupon he would refuse to hire me to work on the Leisure & Arts page, or I could betray my principles, such as they were, and lie. I'd been here before, and I knew which path I'd choose. He did not ask me about these things. We talked about Minnesota and Iowa, where, it turned out, we had both lived as boys. 
He'd been born in southwest Minnesota but grew up mostly in Ames, Iowa, while I'd been born in Ames, Iowa, and grew up mostly in southwest Minnesota. This struck me as appropriate, our moving in opposite directions at the beginning of our lives—me upward and to the left on the map, him downward and to the right. Bob Bartley asked me only one serious question, with two leading follow-ups: What is your ambition in life? Do you, for instance, want to be a reporter? Or do you want to be editorial page editor of the Wall Street Journal? I was pretty sure I didn't want to be a reporter, especially not at the Wall Street Journal, where many reporters covered a single industry (airlines, pharmaceuticals) or even a single company (General Motors, Microsoft), had minimal opportunities to afflict the comfortable and even fewer to comfort the afflicted, and never detached themselves from their cell phones. Even though a part of me did want to be editorial page editor of the Wall Street Journal, which was the same thing as saying I wanted to be the most important person at the world's most important publication, I knew I'd never get that chance, because I didn't believe any of the things Bob Bartley believed. I figured I'd have to say something completely harmless, though not without a hint of some trivial ambition. I said, No, I want to write historical fiction. My answer pleased him, as I'd figured it would. It wasn't long before I was told the job was mine. When I moved to the Leisure & Arts page, I assumed I'd have no personal contact with the editorial writers, but my cubicle was situated smack in the midst of theirs. A couple of them came forward to welcome me, but most of them did not. The ones who welcomed me overlooked the fact that my politics were repugnant. Those who did not welcome me could not overlook that fact. Admittedly, by hanging posters of Emma Goldman and Ralph Nader in my cubicle, I made it a hard fact to overlook. 
Though I had little in the way of social interaction with the editorial writers, I began to read their pieces very closely, sometimes even dipping into the archives to sample their obsessions over the decades. They wrote with the zeal of converts, as if they'd all been communists in their youth, and each of them rode a favorite right-wing hobbyhorse into the ground, month after month, year after year: not only cutting taxes and stockpiling weapons but the treachery and moral lassitude of the Palestinians, the deleterious effects of the 1960s on American moral values, the heroic necessity of Pinochet's bloody dictatorship in crushing democratic socialism in Chile. The collective voice of the newspaper—the unsigned editorial—was always the furthest to the right of the range of beliefs held by the editorial board members, no accident on Bob Bartley's part. He held the most extreme position on almost every issue and, because he couldn't write three editorials a day himself, took great care in his choice of lieutenants. His fondness for partisan hacks led him to hire people who could just as well have been Republican speechwriters, as indeed some of them had been (Peggy Noonan) or soon would be (Bill McGurn). For the most part, Bob Bartley held meetings only with people who shared his opinions, in a little conference room near my cubicle—meetings with men like Kenneth Starr, the special prosecutor who wrote the most famous volume of pornography during the 1990s, and William Bennett, the moralist who gambled away, at the tables in Vegas, the earnings from his books and speeches, which proselytized on behalf of virtue and self-discipline. I came to think of this conference room as the echo chamber for the vast right-wing conspiracy, though not because of its acoustics. I tried once to engage in a reasonable discussion about politics with one of the editorial writers. She was a voluble young woman who'd grown up in Oregon and gone to college at Princeton. 
She worked in the cubicle next to mine, so I overheard her on the phone every day, talking the crazy with like-minded crazies—suggesting, for instance, that the U.S. Navy, after being pressured to stop raining practice bombs on the Puerto Rican island of Vieques, should instead bombard the Arctic National Wildlife Refuge, making it the opposite of a wildlife refuge. She cackled when she said this, though not because she was kidding. She wrote a lot about the environment—she was reliably against it—and one time I told her I disagreed with something she'd written about federal forest policy. The essence of my argument was simple: I didn't think trees should be cut down carelessly. She told me that trees existed to be cut down. She said she preferred clear-cuts—forests transformed into nonforests. She said clear-cuts grew back as peaceful meadows, which were aesthetically superior to forests. I disagreed not just on the aesthetics but also with regard to the effect on wildlife and watersheds. She said I had an unhealthy, sentimental attitude about trees; she accused me of wanting to hug them. I told her I didn't want to hug them, I just didn't think they should all die and take with them songbirds and squirrels and all the other life that makes an ancient forest more than a stand of timber poised to become lumber. She said most trees would be better off dead, after which they could be given a more useful second life as furniture, houses, or fax paper. We didn't talk much after that, although we always exchanged cordial hellos when passing in the hallways. It took me a while to notice that the only people who would speak to me on the streets of Bed-Stuy were over forty or strung out on crack. No twenty-five-year-old guy was going to strike up a casual conversation with a white dude in plain view of anyone else; even less likely a young woman. The women were magisterial in their ability to pretend I wasn't there when we passed in the streets. 
I loved the irony of my status as an invisible man. The homeboys would cast a glance my way—surprised, bemused, sometimes aggressive, as if sizing me up, trying to guess my angle—but not the women. To them I was less than an ectoplasm, though I know they sensed my presence and must have been curious. I couldn't blame them for their posture of indifference. In fact I was secretly grateful. They didn't need any trouble, and neither did I. For a while I lived without a home telephone, so I made my calls at a pay phone, around the corner on Marcus Garvey. One afternoon I left a message for a friend I planned to meet that night for dinner. I hung up and turned to find myself in sole possession of the gaze of a woman maybe fifty years old, wearing a green and gold head scarf. She walked with an erectness of posture that made me think she might be some kind of neighborhood ambassador, there to take the measure of me. Happy New Year, she said. You new to the neighborhood? I live on Monroe Street, I said. Just moved in. Well, welcome. You know, there's another couple in the neighborhood. I saw them in the Laundromat a few weeks ago. Young folks like you. I hadn't seen a white face yet. The closest I had come were rumors, secondhand reports of sightings, which sort of disappointed me. I wanted the other thirteen to have fled. I wanted to be the one and only. I'm sorry, she said. That was a foolish thing to say. You know what I mean, I hope. You mean about the other couple? She nodded, looking rueful. Of course, I said. It's pretty obvious. Well, kinda put my foot in my mouth. There's no point denying I'm white. It's a hard thing not to notice. It felt good to speak frankly, as if by stating the irrefutable I was doing my part to advance the cause of racial understanding. This neighborhood is ninety-nine percent black, she said. When you see white folk here they catch your attention, like that couple I told you about. They had dreadlocks and all these tattoos. 
Real eccentric-looking, but nice. When I'd tried to imagine the other thirteen white people, that's what I typically pictured—white Rastafarians, that most peculiar of oxymorons. I know what it's like to stick out in a neighborhood, she said. The other day I went out for a typewriter ribbon. It was Hanukkah, so all the Jewish stores were closed. I didn't realize it was a Jewish holiday until I found a couple of stores locked. So I went back home and looked in the yellow pages and called some other places. I found one that had the ribbon I needed. Except it was in Greenpoint. I had to take the bus, and after I got off the bus I was lost. I went into a bar to ask for directions. The place was filled with old white men, Polish or Irish or something. What nationality are your people? French and Irish, I said. May the road rise up to meet you. That's Irish. I'm African and Hispanic. Anyway, I'm in this bar with a dozen old white men, somewhere in Greenpoint. Looking pretty scary half drunk in the afternoon, I said. Oh, yes. I know. I've been in those bars. You got that right, darlin'. I told the bartender I was lost. He looked at me a little funny but he took me outside and pointed down the street and showed me where to go. He was pretty sweet about it. In the end we're all human no matter our color. We live in our neighborhoods but we're all in this together. The border of the world ain't the edge of our neighborhood. I'm finding it instructive to live somewhere where I'm conscious of my skin color, I said. I've never had that experience. What you find first are the kindness and curiosity of strangers. At least here. Most places, she said, most places. It seemed as if we'd sussed out an essential truth about the human condition, and there was nothing left to say. I should run, she said. I've kept you long enough. Happy New Year, I said. Maybe I'll see you around. Indeed you will, she said. Keep faith with the Lord and all will be well. I never saw her again. 
I worked ten to six on weekdays, so I was usually around the neighborhood only at night. Young people flirted outside the Fried Chicken Palace. Deals went down on Marcus Garvey. The heat in my apartment was oppressive that winter, the steam radiators working full bore without modulation, so I left my bedroom windows open. Lying in bed reading by lamplight, I could hear the night sounds below: a shout in the street, the thump of a bass line from a passing car. Most of the time these sounds were a comfort to me, evidence of a complicated social life I could access vicariously. I knew I'd never be a real part of the community, but that didn't matter. I wanted a situation where nothing was asked of me, nothing expected, and while you could find that pretty much anywhere in New York if you were a refugee from the hinterlands, it seemed purer for me in Bed-Stuy. At night I would often see groups of men my age gathered in the barbershops, cutting each other's hair and laughing, dapping, telling jokes. There was no way in hell I could have joined them, I knew that, but I didn't mind. Scenes of joy in camaraderie only reinforced the bitter bite of my bittersweet solitude. I devoted my free hours to reading, as I mostly had since my brother's death, reading being one of the surest escapes from the cocoon of solipsism in which I was otherwise so comfortably nestled. One book in particular seized me that winter: James Baldwin's collected nonfiction, The Price of the Ticket. I'd read parts of it years earlier, and now I picked it up again, looking, I suppose, to his fierce intelligence for an anchor in the swirl of impressions I'd encountered in Bed-Stuy. Instead the thing that gripped me to the point of obsession was a passage from the essay "Nothing Personal," which a footnote said was "written with Richard Avedon" but sounded like vintage Baldwin: . . . sometimes, at 4 a.m. . . . 
with all one's wounds awake and throbbing, and all one's ghastly inadequacy staring and shouting from the walls and the floor—the entire universe having shrunk to the prison of the self—death glows like the only light on a high, dark, mountain road, where one has, forever and forever! lost one's way.—And many of us perish then. But if one can reach back, reach down—into oneself, into one's life—and find there some witness, however unexpected or ambivalent, to one's reality, one will be enabled, though perhaps not very spiritedly, to face another day. . . . What one must be enabled to recognize, at four o'clock in the morning, is that one has no right, at least not for reasons of private anguish, to take one's life. All lives are connected to other lives and when one man goes, much more than the man goes with him. I'd done some systematic reading in the literature of suicide, most of it amounting to a thumbs-up or -down on whether it was permissible. That question held little interest for me. No matter what judgment Kant or Schopenhauer offered on the subject, thirty thousand Americans a year did themselves in, hundreds of thousands more worldwide. I intuited the raw impulsiveness of the act. You either got there or you didn't; the route was mysterious, and no religious prohibition or philosophical text seemed likely to sway a person in the throes of suicidal despair. Who has time for The Myth of Sisyphus when the gun is right there within reach? ("There is but one truly serious philosophical problem," Camus had written. "Judging whether life is or is not worth living amounts to answering the fundamental question of philosophy.") Anyway, in the case of my brother, the deed had been done. It hardly mattered whether I condemned him or offered my posthumous blessing. He was dead and he was going to stay dead, no need to bury him at the crossroads with a stake in his heart. 
Baldwin had a point of view on the morality of the deed, but even as he made his judgment he expressed a nuanced sympathy for the lost and the damned. He saw that dark night through their eyes. To find my brother's state of mind—my own state of mind many nights—expressed so clearly offered me a different sort of anchor than the one I'd been looking for. I repeated that one ringing line to myself—all lives are connected to other lives and when one man goes, much more than the man goes with him—so often that it became a mantra, a reason for living another day. It gripped me so fixedly I ignored a pertinent warning that came not long after: Then one selects the uniform which one will wear. This uniform is designed to telegraph to others what to see so they will not be made uncomfortable and probably hostile by being forced to look on another human being. . . . It is necessary to make anyone on the streets think twice before attempting to vent his despair on you. Before I moved to Bed-Stuy, I'd been under the impression that the crack epidemic—always so called, as if it were some virus beyond human agency, akin to Ebola or monkey pox—had run its course in the city. There were no longer any stories about it in the papers, but every other day or so, on my walks to and from the subway, I'd find an empty vial in the seams of the sidewalks. Their stoppers came in various colors, and soon I had a little collection—yellow, red, white, purple, green, blue. In the mornings before work, when I'd try to write and fail, I'd pull them out of my desk drawer and hold them in the palm of my hand, wondering what it felt like to have that kind of high, that kind of need for a high. Out in the streets you could see the crackheads coming from a block away: stumbling, weaving, a beatific smile on their faces if they'd just smoked up, their eyes meat-red and their noses smeared with snot. 
If you got close enough they smelled something horrible, like they were already dead and beginning to rot. There was a woman who hung around the bodega on the corner asking everyone who passed for cash or beer. She may have been no older than twenty-five but she had only half her teeth left. The skin on her face was swollen so tight I feared it might rupture if she so much as coughed. Her shoes were sometimes mismatched. Other times she went barefoot. I could tell she'd once been a great beauty. She still had the legs, although they were awfully skinny now, and her eyes were huge and lit from within as if she'd seen the Rapture coming. Mister, wooyoo bry me and ache bull? she slurred, the first time I saw her. I'm sorry? Mister, wooyoo priss doobie faber an bry me an ache bull? It took me another moment to understand she was asking for a forty-ouncer of malt liquor. My first impulse was to say, Honey, that wouldn't be doing you a favor. But what sort of favor could I do her? She terrified me, she'd ruined herself so completely—as if committing suicide in slow motion. Do you smoke? I said, making the universal symbol for a cigarette, index finger and middle finger splayed in front of my lips. I didn't want her to think I meant crack. She nodded. In the bodega I bought a carton of orange juice, asked the clerk to throw in two loosies. I handed the woman a cigarette, lit it for her, lit my own. Her hand shook as she held it. She thanked me, told me I was a nice man. I thought: No, I'm not. I am not a nice man. Instead I said: You're welcome. As much as there was a part of me that secretly feared—and even more secretly craved—being harmed in Bed-Stuy, there was another cloistered and delusional part of me that thought I might be redeemed by intimacy with squalor and degradation. Those words appeared more than once in my journals around that time, and what was a better synonym for the squalid and degraded than a crackhead on the streets of Bed-Stuy? 
Despite the veneer of higher education, I was still an ignorant white boy. I'd never seen a Spike Lee movie, never listened to a word of Biggie Smalls, but I knew the motto Bed-Stuy Do or Die, and with no appreciation for the obvious irony I'd taken it as my own. I may have read Baldwin but I hadn't understood. Mired as I was in my own dark trip, I wasn't terribly interested in the social texture of the neighborhood—in its history, in the stories of the people who lived there, their struggles and hopes, fears and dreams—and truth be told, I didn't want to get all that intimate with squalor and degradation, whatever they might mean. It's not like I was prepared to take a crackhead home and give her a hot bath and a home-cooked meal. I got a little tingle from being in proximity to self-inflicted suffering, but I didn't want to have to do anything about it. I constantly reminded myself that I hadn't gone to school to be a social worker. I'd chosen a course of study that taught me various tricks for how to observe the workings of the world, to take notes and write them up in stories, so I took notes, as much out of habit as anything. I knew I had zero chance of convincing an editor at the Wall Street Journal to let me write a feature on the myriad ways the American government and moneyed interests had turned their backs on Bed-Stuy, a neglect—a spiteful, willful neglect—that was nothing short of criminal. For a time, squalor and degradation were all I could see in the streets, or all I chose to see—and not just in the world around me but in my own mind. At my day job I could impersonate a competent, self-possessed young man, but my inner life festered with diseased visions: At the laundromat today I saw a poster tacked to the wall. It said the 79th precinct's homicide squad is looking for information about the death of a livery cab driver who was found murdered in his car on the corner of Marcus Garvey Blvd. and Monroe St. on Feb. 1. 
The thought of it chilled me: a dead body forty steps from my front door, shot for the cash in his pocket. On the stoop across the street a man lifted his shriveled penis from his pants and relieved himself on the topmost step. His urine steamed in the cold. It steamed in its arc and steamed where it splashed on the concrete. Think of the cool reassurance of gunmetal in palm. The dull and languid hereafter, the painless hereafter. Think of the bliss of death. I'd been in the neighborhood a couple of months when I was given a raise at work, and one of the ways I celebrated was by going on a little shopping spree at Century 21, just up the street from the paper. I bought a suit jacket off a sales rack, a DKNY number that fit well enough, though it was a touch long in the arms. On another sales rack I found a pair of pants that more or less matched the jacket. I bought three new dress shirts and four ties to go with them. To complete the ensemble, I rode the train up to Eighth Avenue and Forty-first Street, to a hat shop I'd passed many times on my walks through the city, and I bought myself a sharp-looking fedora, black, with a red and yellow feather in the band. Most of the men at the paper wore white shirts and patterned ties that looked like they'd been bought a generation or two before. If you looked closely you could see little stains on them where they'd caught a splash of soup or a dribble of mustard, possibly during the Reagan administration. I opted for colored shirts—salmon-pink, lime-green, cornflower-blue—and ties with a bit of pizzazz in their patterns. The fedora, though, was the real flourish, a gesture of pure irony. It offered not a clue about who I really was, where I'd come from, what I believed. A natty socialist at the Wall Street Journal. A white guy in a black neighborhood. Strange how comfortable my discomfort became. In other words, I was asking for it. The woman who was supposed to move from Detroit and share the apartment never did. 
After a while, another friend of Mary's became my roommate. Beth had just ended a long-term relationship with a woman and decided to explore the complications of hetero life by seeing a married man. Among the palest people I'd ever encountered, she was impossible not to notice on the streets of Bed-Stuy—she made me look swarthy by comparison—but she carried herself with an air of oblivious good cheer that made me believe she'd remain immune to trouble. She wasn't around much anyway. She stayed most nights with her boyfriend Dave in Manhattan, in the apartment he'd taken after leaving his wife. Dave came to Bed-Stuy once to spend the night. I never saw him there again, but he told me a story I never forgot. He'd taken the train partway, then the bus. It was rush hour and the bus was full, so he'd had to stand. In front of him a woman held a child to her chest. The child couldn't stop staring over her mother's shoulder. Block after block the child stared, her eyes, according to Dave, revealing an intensity of awe unlike anything he'd ever seen. I'm pretty sure the kid had never seen a white person, Dave said. When the bus drew near Dave's stop, he scooted a couple of steps toward the door. Just before he got off, the little girl reached over her mother's shoulder and, in the most tentative way imaginable, touched Dave's cheek with her finger. It was almost as if she couldn't believe I was real, Dave said. Once she realized my skin felt like anyone else's, you should have seen her smile. As I walked the streets of the neighborhood that story kept returning to me. Not because anything so dramatic happened to me, but because I did feel the force of people's curiosity—the sideways glances I got as I walked the streets, the questions I fielded from the neighborhood elders while buying a carton of orange juice in a bodega. 
A lot of people assumed I was a Jehovah's Witness peddling the literature of salvation, since that was about the only kind of white person who made an appearance in the heart of Bed-Stuy, the kind harvesting souls for Jesus. This was the funniest joke I'd heard in months, all the funnier for its repetition, but I didn't laugh for long. One night I went to meet my uncle in Manhattan. He was in the city on business from Seattle. He'd invited me to dinner on a client's expense account, a seven-course affair at a fancy TriBeCa restaurant, so I put on a new shirt and tie, my new jacket, the fedora. I was strolling down the street toward the C train, looking forward to an evening of food and wine and talk, whistling to myself—I'd been listening to some Sinatra before I left, the famous live recording with Count Basie, in particular "Fly Me to the Moon," that swinging masterpiece—when I was jumped from behind. I say jumped, but that doesn't do justice to the force of it. I'm not sure if the guy meant to keep a grip on me or not. He hit me so hard I flew about eight feet before I landed on the sidewalk. It all happened so quickly, I suppose I acted on pure adrenal reflex. I jumped up and ran. Halfway down the block I stopped to catch my breath, having heard no footsteps behind me. I'd lost my glasses and my hat, and my knees and palms were scratched and bleeding. The rest of me seemed intact, if severely jangled. I couldn't see very well without my glasses but I did notice a man walking toward me, lit from behind by a streetlamp, his face obscured in shadow. Mister, he said, your hat. He held it in his outstretched hand. I was still too stunned to speak, and there was something about him that made me nervous—perhaps the fact that he seemed more concerned about returning my hat than inquiring about my well-being. He said it three times—Mister, your hat—and held it in front of him in one hand, like a priest administering the Eucharist. I extended my hand to take it from him. 
Just as my fingertips touched the brim he yanked it away. I turned and ran toward the street. A tree and a parked car blocked my path, and in the half second it took me to change course, he jumped on my back. I carried him across the street in a piggyback ride before he wrestled me to the sidewalk. Don't fucking move, he said. My friend's got a gun and he ain't afraid to use it. I was on my hands and knees, gasping for breath, unable to see the guy on top of me, and I thought: You think I give a shit, you fuckers? Go ahead. Go ahead and kill me. A pair of feet appeared in front of my face, spread in the posture of a man about to take target practice. Hand over your wallet. And nothing stupid or you're gonna get hurt. The guy on my back gave it to his friend, who emptied the cash—fifty bucks—and dropped the wallet on the ground. Don't look up, don't follow us, and don't call the cops and we won't have to hunt your ass down and shoot you. They ran down the street, laughing as they went. I stood, testing my ankles and knees. I brushed myself off, flexing my elbows and fingers. I touched my lip, which was swollen and bleeding. I ran my tongue over my teeth and found them all still there, though I spat a mouthful of blood. There was a car sitting in the street, idling, its lights on, its muffler sending little thought-balloon shapes of exhaust into the air, but when I went to its window I found it unoccupied. Up the street I went, hobbling a little, stooping to look for the shapes of my glasses, my hat—the hat my muggers had used like bait to hook some idiot fish by the mouth. For a while afterward, my fear boiled up in plain view. I'd risen to the surface of things, all right, though not in the way I'd hoped—or perhaps exactly in the way I'd hoped. I was surprised by the vehemence of my outrage; I hadn't expected to feel so protective of my own skin. My sense of distinction in the neighborhood, my illusory belief in my own goodness and even bravery, it all vanished. 
Though I ought to have counted myself lucky—I'd merely suffered a few bruises, lost fifty bucks—I instead felt forewarned, as if the mugging were a teaser of what lay in store for me. It exposed a part of me I'd managed to avoid knowing was there, the run-of-the-mill bigot, the guy who casts a jaundiced eye on a whole group of people for the actions of a couple of punks redistributing the wealth with the finesse of middle linebackers. It was sport, what they'd done. They'd wanted my money, and I must have looked like I had a lot more than it turned out I did. For a week or so I imagined crossing the river to Jersey and getting my hands on a nine. I'd shove its snout into someone's ear, the next person who dared look at me crossways. I'd become the one to fear instead of the one who feared. Emotion trumping intellect, problem-solving with guns: maybe they ran in the family. It was just a few days later that the verdict came down in the murder of Amadou Diallo. Like many New Yorkers I'd been appalled by the poor man's fate: a West African immigrant street peddler, in the wrong place at the wrong time, shot by four members of a Street Crimes Unit in the Bronx as he fumbled for proof of ID. The officers had fired forty-one shots at Diallo as he stood in his apartment vestibule, holding in his hand nothing more threatening than a wallet—a black wallet the cops took for a gun. On February 25, 2000, a jury acquitted the cops of all wrongdoing. The news was loud and clear and not exactly breaking: the police could shoot an innocent man on the streets of New York and there would be no consequences. An innocent black man, that is. It was hard to imagine a similar verdict if the victim had been a white man in Manhattan. On the day after the acquittal, walking through the city as I often did on weekend afternoons, I came upon a parade of protesters streaming down Broadway, chanting, It's just a wallet, not a gun, no excuse for forty-one. 
As with many public protests in the city, I agreed with the protesters on principle. Like them, I thought the verdict was a travesty. Unlike them, I didn't want to reduce my feelings about it to a refrain that stated the obvious in the form of a rhyme. I remembered, all too vividly, chanting, No justice, no peace, in the streets of Minneapolis in 1992, one of the emptiest slogans I'd ever uttered, as if I had any intention of continually disturbing the peace in the absence of justice for Rodney King or anyone else. I watched them pass by and then I walked on. Going home that night from the subway, I came upon four young men having an animated discussion outside a bodega on Marcus Garvey. When I passed within earshot I heard one of the men say, No, no, this is fucking bullshit, bro. It's time we fought back against these motherfuckers. I'm talking real firepower. We gotta send a message. They put a bullet in one of us, we put two in one of them. Forty-one for Diallo, eighty-two for him—and he pointed at me, thumb and forefinger at a right angle, in the shape of a gun. I turned away and walked on. A week later I saw a flyer announcing a gathering at Ebenezer Baptist Church, not far from my apartment. The Reverend Al Sharpton, it said, would be the keynote speaker. When the night came I polished my black dress shoes, donned my finest churchworthy threads, and went to see what the reverend had to say. I took a seat in the balcony as the choir began to sway and sing. Just as I'd shed my coat and settled in, a man came to the end of my row, pointed at me, and said: I have a lady I'd like to seat there. Would you mind moving? I looked around. There were numerous empty seats nearby, seats closer to the aisle, but his tone insinuated that it didn't matter one bit if I minded. It was a little power trip, and I played my part. Who could blame him? Such chances must not have come his way often. I nodded, gathered my coat, and stepped away. 
For a while I moved through the church, looking for another place to sit. I decided I'd do best to stand in the back of the balcony. On my walk around the church I'd counted two other white faces out of perhaps three hundred—one carrying a TV news camera, the other a microphone. The program began with half an hour of gospel music from a choir dressed in blue and gold robes. The entire congregation stood and sang and clapped. The energy was electric, contagious, like nothing I'd experienced—certainly not in the dour Catholic services of my youth, with their mournful hymns, their repeated messages of inexpungeable guilt. At that moment, swept up by a communal energy for the first time in a long time, I felt prepared to give something of myself. I wanted to be asked to do something useful, something brave in the service of justice. I didn't know what the something was but I knew I'd be ready if I heard it. I'd spent too much time thinking about a subject that scrambled all categories of justice, and I was growing sick of my befuddlement. I wanted answers. The likelihood of my hearing them seemed to diminish noticeably when the first speaker welcomed us to "reservation headquarters," by which metaphor I would have been a visiting agent of the Bureau of Indian Affairs. The Reverend Sharpton spoke last. In high biblical cadence he sketched a series of proposals to rectify the injustice of the Diallo killing. When you go to the polls on Tuesday for the presidential primary, he said, reject the corrupt status quo and write in the name Diallo. We want to have thousands, tens of thousands, hundreds of thousands of votes for Amadou Diallo. We want to show the powers that be that this city will not move on until justice is served! When someone asks you how you are, he said, you reply: Amadou. Let his name ring from our lips daily, hourly, on every street corner in the city! Let me hear you say it—Amadou! Amadou. Amadou! Amadou. 
He continued with a proposal to cripple the city's financial institutions by withdrawing black money from white banks. Then he closed with a retelling of the biblical story of David, which he linked to his own interactions with Diallo's father. I told him, he said, I told him that in spite of my funny suit and the fact that I'm a Christian and you're a Muslim, we are brothers, separated long ago on the continent of Africa! The reverend's sermon was a slightly better plan for fighting racial injustice than the one I'd heard on the street: trade two bullets for one. Then again, in that plan I'd have served some purpose, a nice white target in the dark. My money wasn't black. My money was green, and it was clearly safer in the bank than on my person. If I started going around muttering Amadou, I'd have been seen as a danger to myself or others. I'd been a fool to think I was among the reverend's audience just because I was in the reverend's audience. As the music wound down afterward, I made my way toward the exit. Out of the corner of my eye I saw a familiar face—my landlord, Ben, the suave Trinidadian. I sliced sideways through the crowd, placed a hand on his shoulder. Ben, I said. Good to see you. I'm not sure what I expected—perhaps a little friendly chitchat, an exchange of impressions on what we'd just witnessed. I thought he'd at least acknowledge me, the gay-friendly dude who paid his rent on time. But he must have seen a whole mess of trouble he didn't need, because he recoiled slightly, as if he didn't know me. I stood there dumbly, a pariah. In a moment he was gone, lost in the crowd. Walking home along Fulton Street, then north on Malcolm X Boulevard, I knew my time in Bed-Stuy was up, and that it must have looked to the outside observer like nothing more than an exercise in cheap voyeurism—slumming, quite literally. To say out loud that I knew a thing or two about the emotional devastation wrought by gun violence: How far would that have gotten me? 
About the only commonality in the stories of Diallo and my brother was that white hands had held the guns. When I knocked at Ben's apartment the next week, he didn't answer. I slipped a note under the door, giving him notice that I'd be gone by the end of the month. I never saw him again. Lover Boy My new boss, Raymond Sokolov, was the sophisticate among my immediate colleagues at the Journal. In 1982 he'd founded the Leisure & Arts page as a daily staple of the paper. Prior to that he'd been a book and movie critic for Newsweek, a food editor at the New York Times, and a columnist for Natural History magazine. He'd written several books, among them a biography of A. J. Liebling. With his diminutive stature, his shock of white hair, his round spectacles and colorful bow ties, he had the appearance of a mischievous, throwback intellectual, a holdover from the glory days of PM and the Herald-Tribune. He was no ideologue when it came to politics. He seemed, from what I could gather in our oblique conversations, to be a sensible moderate—except on Israel, where he may have been to the right of Bob Bartley—but he had a streak of iconoclasm, a desire to tweak the sensibilities of the powers that were, which gave him a raffish charm. Working for him was a pleasure, as long as you didn't screw up. Once he'd read a piece, he washed his hands of it and preferred not to think of it again. Negotiating his changes with writers could be tricky when they didn't approve. I quickly learned that was my problem, not his. He left his subordinates alone to their work, and his trust made me want to do it justice. Copyediting is peculiar work, a realm for word geeks and control freaks, part art and part science, and ultimately an exercise in professional anonymity. It's not as if readers have the chance to compare a writer's first draft with the printed version—look how much was needed!—so my work went completely unnoticed until a mistake showed up in print. 
The better I did my job, the more I receded into obscurity, even from Ray, so that I came to understand the ultimate goal was an erasure of my existence beyond interactions with the writers whose work I smoothed over, and the people in production who brought it all together in the end. Copy editors were the Zamboni drivers of the newspaper world. We kept the surface polished so the writers could perform their little pirouettes. In the course of editing Ray's stable of critics, I played both cheerleader and scold in varying ratios as needed. I spoke by phone with interesting characters on at least a weekly basis: Nat Hentoff, the Village Voice columnist who wrote about jazz and was a living link to some of its legendary players; Ada Louise Huxtable, one of the most regal beings on the planet and America's preeminent architecture critic during the second half of the twentieth century; and the novelist Francine Prose, who wrote fluently about art and never failed to answer her phone in a breathless rush, as if juggling five assignments at once. But the writer whose work changed my life came out of nowhere. Late in 1999, Ray came up with an idea to get a friend of his, the poet Frederick Seidel, the exposure Ray thought he deserved. I'd never heard of Seidel; Ray said he was brilliant but had a tough time getting started on a poem. In fact, in a review in the Journal in the late 1980s, which I found by searching the online archives, Ray had called him "gifted" but "maddeningly unproductive." Seventeen years had passed between his first collection of poems and his second. In a career spanning forty years, he was about to publish his fifth book. Ray's plan was to give Seidel a monthly deadline, as if he were a columnist. Seidel would write one poem per month under the title of that month, and not only would the deadline prod him to action, the paper would offer him a readership the size of which most poets could only dream. 
Here, Ray said one day, reaching into his bookshelves. Take these home with you. See what you think. I spent that evening in the bathtub with four of Seidel's collections. Some of what I found was beautiful, sweet little rhapsodies in staccato lines, but reading him more often felt like riding shotgun on a fancy motorbike, through a city of voluptuous corruption as imagined by Hieronymus Bosch, with a mad driver tricked out in handmade shoes and Savile Row finery. The voice of the poems was absolutely shameless, musical and brutal in equal measure. He was a dandy, a daredevil, a scourge of official pieties, a celebrant of luxury, unsentimental in the extreme, sometimes innocent, more often guilty. He wrote with a bracing mix of erotic derangement and total liberty, and the images that recurred—cockpit voice recorders, women in stockings and garters—were charged with morbid mystery or an amorous aura or sometimes both. Among his obsessions was the thought of suicide, and I found new lines to add to my collection of commonplace quotes. Children, of all things bad, the best is to kill a king. / Next best: to kill yourself out of fear of death. . . . He had a knack for asking the sort of question I thought I'd been alone in asking: If you put a gun to your temple and close your eyes, And the enormous pressure builds and builds, And slowly you squeeze the trigger . . . Do you hear the big bang? The dude knew. He'd found words for a subject too often taboo. Sitting in the tub as the water turned tepid, growing excited at the many moments of recognition, I came upon a line to which I had a physical reaction, as if my skin had been pricked by a pin: Convinced life is meaningless, / I lack the courage of my conviction. Seidel began to write his newspaper poems in March 2000, and it was my job to format them, make sure all the italics and em-dashes and capital letters were just so, and then fax him a copy to inspect and approve. 
I was too shy to tell him his earlier poems had hit me with such force. I didn't want to sound like what I feared I was, the kind of guy who reads poetry in search of himself. When we talked on the phone, I felt like a supplicant in the presence of royalty. Here was a man who as a freshman at Harvard had visited Ezra Pound in the hospital and had the temerity to suggest corrections to Pound's translation of Confucius, who had called on T. S. Eliot in London and been the Paris editor of the Paris Review, for which he had interviewed Robert Lowell. If he wasn't royalty, he had at least touched it. On the telephone he always said, Phil, my boy, how are you? in the most sophisticated voice I'd ever heard, very precise, as if his concourse were typically with the gods but he'd learned English as a second language so he could order lunch. He had what was called a Harvard accent, but it sounded like money to me. He never failed to ask what I thought of his latest poem, continued asking long after I'd given him sufficient reason to stop. What could I tell him? I was on deadline every time we spoke, with headlines and photo captions still to write, and stories to cut to make them fit on the page. I didn't read his poems with the care they deserved until the next morning, before the hum of the day began, when I could sit with a cup of coffee in my hand and my feet up on my desk, the paper spread in my lap. They were by far the most interesting thing on offer, perhaps the most interesting writing to appear in those pages since Charles Dow and Edward Jones had changed the name of their daily news bulletin for Wall Street traders, the Customers' Afternoon Letter, in 1889. Dude, I wanted to tell him, I can't believe you're getting away with this in the Wall Street Journal! You're my hero! 
But I didn't want to be fawning, so I'd focus on a particular stanza whose music I liked, or a particular image that struck me, avoiding mention of the lines I caressed most dearly: Put the pills back in the vial. / Put the gun back in the drawer. / Ventilate the carbon monoxide. / Back away from the railing. I think he believed that I wasn't a very bright boy, a fact I confirmed when one time he asked me what else would be appearing on the next morning's page, alongside his poem. I mentioned a piece on the Elgin Marbles, and he took a quick, shallow breath, aghast. EL-jin, my boy, he said in that godlike voice of his. EL-jin! Never having heard the word spoken aloud, I'd pronounced the g as you would in god, which to a certain cast of mind was akin to calling Socrates sock-RAT-us. Around this time there began to be heard complaints about the political thrust and aesthetic sensibilities of the Leisure & Arts page. Ray mentioned these complaints to me in elliptical asides during conversations on other matters. He'd apparently been forwarded some scolding letters to the editor about a couple of Seidel's poems; he'd also received a memo from the publisher that raised concerns about propriety and sound judgment. But Ray was a cagey fellow, a survivor of twenty years in the shadow of Bob Bartley, and although I never asked him how he responded to questions about his stewardship, I imagined him pointing out that the occasional kerfuffle proved he had his readers' attention, and besides, every single day a big fat ad appeared on his page. We were, after all, in the midst of a millennial madness, and the paper was not just an avid chronicler of the madness but an active participant in it. One month after I was originally hired, the Dow Jones Industrial Average—composed of thirty corporations chosen by the managing editor of the Wall Street Journal, and the only company brand more recognizable than the paper itself—closed above 10,000 for the first time. 
The Journal celebrated this triumph with a banner six-column headline, only the third in its history, the others having blared the news of the bombing of Pearl Harbor and the start of the First Gulf War. On March 10, 2000, the NASDAQ index reached an all-time high of 5,048.62. The paper was so fat with tech-company advertising, the average subscriber—white, fiftyish, male, with a yearly household income of around $200,000—risked a herniated disk when he lifted it from his doorstep. Management went on a hiring spree to fill an ever greater need for copy, and the paper hatched new daily sections and weekly supplements to cash in on the advertising lucre of companies that would go belly up before the end of their second fiscal year. I knew colleagues who charged every movie, every dinner out, every new book or bottle of high-end wine to their Dow Jones credit cards. Ad managers at the paper's sister publication, Barron's, were said to keep open tabs at various Manhattan bars and entertain clients by expensing the cost of strippers. It was easy to be carried along on this tide of giddy prosperity, writing the occasional, mildly subversive piece in order to cling to what I thought of as possession of my soul. The paper promised an audience of millions, and a part of me couldn't quite shake the idea that the goal of writing was to have your work read by as many people as possible. Once a month or so I'd propose an article for the Leisure & Arts page, and more often than not Ray would go for it. By almost any yardstick I'd lucked out in my professional life. I could have lived in Bismarck, or Dubuque, like some of my old college friends, writing stories about city council meetings. I could have been waiting tables in a Lower East Side tapas bar. Instead I saw jazz for free in any club I cared to visit, just by calling ahead and telling the doorman where I worked. 
When out-of-towners came to visit, I took them to Windows on the World, just across the street from the office, a view that never failed to awe. If I profiled a writer or musician—Larry McMurtry, Charlie Haden—the subject's latest book or album shot up the Amazon sales rankings. I was, for a moment anyway, moving units and meeting people. I built a sweet home library from the spoils of the weekly book giveaway, the constant pile of review copies sent by American publishers to the paper's literary editor. Not only did I make off with reissued classics from Penguin and the Modern Library, I surreptitiously swiped the volumes on tantric sex, slipping them into my bag when no one else was looking. When uttered during the exchange of small talk at parties in Brooklyn tenements—always somewhat sheepishly, and only in response to direct questions about my gainful employment—the words Wall Street Journal had the effect of a potent narcotic: dilated pupils, flushed face, and what I perceived as a slackening of sexual inhibition, of which, being a socially awkward midwesterner, I rarely had the courage to take advantage, despite my collection of books on tantric sex. My new apartment was a one-bedroom, second-story walk-up in Queens, on the border of Astoria and Long Island City, four stops to Manhattan on the N train. It had been trashed pretty badly by the previous occupants, the only reason it wasn't gone before I came across the listing. When the landlord showed me the place he apologized for its condition, but I was desperate. I offered him a deal. I'd repaint the whole thing floor to ceiling, lay new tile in the kitchen, tear up the worn purple carpet in the living room, and sand and refinish the wood floors—if he'd waive the security deposit and give me the first three months rent-free. He looked at me as if I were insane, but I'd done the math—I'd save more than two grand—and when I extended my hand, he shook it. 
I removed the carpet only to discover little drifts of mouse turds along the walls, plus cockroach corpses by the dozen. The new paint job required multiple coats to cover the underlying shade of Pepto-Bismol pink. I rented a large circular sander for the wood floors and applied sealer every other day in strips so I could move from room to room without ruining the finish. The work took almost every spare waking minute I had over three weeks, and the smells of paint and polyurethane were a long time in fading. Still, it was satisfying to live alone again—no roommate, no feral cats—and in a neighborhood where I had no trouble blending in: middle class, ethnically diverse, with a Mediterranean flavor thanks to one of the largest expat populations of Greeks in the world. Though I finally had a set of rooms all my own, I found my new freedom slightly unnerving. Unlike in Bed-Stuy, there were plenty of restaurants and bars and cafés within a short walk of my apartment. The options for whiling away an evening overwhelmed me with their variety; I couldn't seem to find the place to call mine, the place where a loner could sit cocooned in silence and remain unremarked-upon, unseen. Committing to the life of a loner involves one difficulty above all others: even loners, perhaps especially loners, often find themselves horny. In New York whole industries thrived on the basis of this simple fact, and nowhere was this more evident than in the Village Voice classifieds. I began to study those pages with what I thought of as a detached and almost scholarly amusement, but one ad in particular kept calling to me with the promise of amateur phone sex. The very existence of amateur phone sex intrigued me. I'd always assumed it was a realm for professionals. It wasn't long before I memorized the prerecorded greeting. 
I even learned to mimic the perky-bimbo inflections of the woman who recited it: Thanks for calling the all-live, all-the-time phone line where ladies call free to share their fantasies with you. If you're under eighteen, you must hang up. . . . Welcome to the exciting new way to talk one-on-one with the area's hottest students, housewives, and working girls for just thirty-five cents per minute, seventy-five for the first. . . . I knew the city's hottest students, housewives, and working girls weren't sitting at home pressing speed-dial with one hand while petting themselves with the other, but when I called that first night I thought I might get lucky and connect with an introverted bombshell, a naughty librarian. We'd talk about music or books or the Kyoto Protocol. We'd choose a place to meet for a drink. We'd proceed to her place, or mine, and lick each other's privates in the dark. Half the single people my age in New York were already using the Internet as a portal to erotic adventure, but I'd always been a little slow adopting new technologies. It was the new millennium and I was still using a manual typewriter. Main menu: Press one for sexy recorded personals, or press two for live connections on the talk line. I pressed two. Press one to talk to women, or press two to talk to men. I pressed one. Live talk main menu: Press one to connect with callers who are on the line right now. Press two to record or update your dateline personals greeting. I pressed one. You have ninety seconds to describe who you are and what you're interested in. Take care with your privacy—no full names, addresses, or other information that could be abused by other callers. Here's your chance to make an introduction. The most intriguing greetings get the most responses, so make your ad as sexy as you can. Your privacy is guaranteed. Your greeting will play only to others who are on the talk line when you are. To remove your greeting, just hang up. 
You can rerecord as often as you need to, until you're satisfied. Start speaking at the tone. Press pound when you're done. Good luck. I was drearily earnest at first. I stressed my status as a gainfully employed, suit-wearing monkey. I laid on the midwestern charm, the whole small-town-boy-in-the-big-city act. I waxed poetic about my love of music and books, going to museums, eating out. I was, in short, Prince Charming, a perfect gentleman straight from the script of a rom-com, just the push of a button away. Welcome to the talk line! Rarely have I heard such scorn. Women sent recorded messages in which they simply cackled at me. Some were incredulous: You're actually looking for a date? On this line? One even presumed to judge my anatomy: Come on, little boy, pull that itsy-bitsy, teeny-weenie out of your pants and play with momma. . . . I hung up that first night completely demoralized. I wanted to be appalled at all the perverts and misfits on their telephones across the city—the heavy-breathers, the pre-op transsexuals, the women from the Bronx looking to play for pay—but I was mostly disappointed in myself. They, at least, were candid about what they wanted. And what did I want? There's no way I could have been honest about that. What was I supposed to say: I need someone to sleep with me so I can tell the story of my brother's death? That would have had the virtue of being true, as if the truth were a virtue on a phone-sex line. Over the course of a few short-lived flings in the time since Dan's suicide, I'd discovered that sex emptied my mind of everything nonessential, and the one thing that remained essential, I thought, was the story of his suicide. Everything else was a dream or an anecdote. Nothing else meant a thing, not compared with the big story, and I just couldn't talk about it unless I'd bared myself in physical intimacy. 
Hard to imagine working that up as an attractive come-on, though: Hey, sweetheart, let's screw with our eyes closed and then snuggle up for some pillow talk about the mysteries of self-inflicted death. Will you listen if I tell you? In time I worked through my initial misgivings about phone sex. I did the practical thing. I listened and learned. The rules were simple. You could lie about what you looked like—who would know the difference?—but you'd best be blunt about your desires if you didn't want to waste anyone's time. It was all there for the ear, an aural smorgasbord of titillation and perversion, thirty-five cents per minute, seventy-five for the first, every kinky fantasy you've ever heard about and more, and plenty of people willing to pay and be paid for real-world sex. You listened, one after another, to little personal ads ("greetings") in the voice of the person being personal, and make no mistake, they were personal, about everything under the sun from golden showers to gang bangs, with an emphasis on interracial pleasure seeking and an unmistakable undertone of pitiful desperation. Press one to repeat this greeting. Press two to send a private message. Press three to ask this caller to connect with you live, one-on-one. Press four to hear the next caller's greeting. Press five to return to the previous ad. Press seven to block this caller from contacting you. With a bit of practice I developed a whole portfolio of personae, ranging from the iconic to the cryptic. Clark Kent Calling from a Phone Booth was my go-to line. His ready-made image allowed me to dispense with laborious physical description. He was also the perfect fantasy man of the women's magazines—a reliable breadwinner, a modest but hunky journalist who morphed into Superman when he took off his clothes. Super-Exhibitionistic Horse-Cock Boy was a bit of inspired ad-lib. 
One night I made up a story about masturbating in front of my living room window while a neighbor woman watched me from her kitchen across the courtyard. Messages flooded in. Everyone wanted to hear about it. Part of the allure of an amateur sex line involved its invitation to be playful with the rituals of the form: it felt appropriate to situate the fantasy itself inside an act of voyeurism. The Sound of One Hand Slapping was a late addition to my repertoire, and by no means original; I heard many masterly variations. I merely put my own spin on an old phone-sex standard. The trick, of course, was in the execution. I tried at first for authenticity, recording an actual masturbatory stroke, but it was too subtle for the mouthpiece to pick up, and I kept getting a prerecorded admonition: I'm sorry, your message must be at least ten seconds long. Please try again. At first I misheard this as: I'm sorry, your member must be at least ten inches long. Please try again. I experimented until I found a plausible substitute, which involved rubbing my index finger back and forth across the mouthpiece. When I replayed the message to confirm it, I heard a sound that hinted at some sort of deviant friction. By pressing my fingertip with greater or lesser force, I could create a stylized rendition of vigorous, almost violent copulation, or gentle, sensuous cock-stroking. (Later I even recorded an actual slap, although I struck my thigh instead of my ass, having learned that on the talk line impression is reality.) The virtue of this method arose from its ambiguity, its invitation for others to initiate the fantasy. It allowed me, in the opening joust of a phone fuck, to shield my voice from other callers. I'd dialed so often my voice had become a known quantity. Once I got hooked I had to make a real effort not to call every night. Evenings when I stayed away from the phone tended to play out in the same way. 
I'd be abducted by one of my blue moods, a combination of loneliness and claustrophobia at the thought of all the human longing playing out in the towers and the streets, in the privacy of little urban rooms. I'd run out of patience for reading, my usual strategy of escape, so I'd pace my apartment, listening to Lester Young and Coleman Hawkins, until I tired of retracing my steps. I'd take my notebook and go for a beer at one of the Irish joints in my neighborhood: O'Hanlon's, McCann's, McCaffrey & Burke. There was always something soothing in the murmur of voices and the clank of glassware, men and sometimes even a few women talking in the smoky, intimate light. I liked to imagine I'd find a beautiful woman sipping whiskey all alone in the corner. Our eyes would meet. I'd buy her a drink. We'd step, just for a moment, from the frame of the Hopper painting that circumscribed our lives. Or maybe we'd step into the frame, create a moment of melancholy beauty we could hold with us forever. No matter. She was never there. One night my friend Rachel called me at home. It was rare to hear her voice but always a pleasure when it happened. We'd met during our respective internships at the Nation and the Village Voice and had kept in touch, mostly by letter, me from Montana, her from Seattle, then later me from New York and her from Virginia. Toward the end of our brief season in the city I'd confessed my attraction to her, a confession she did not reciprocate, though I sensed no worse than ambivalence in her cryptic silence. Problem was, she had a boyfriend. But now her boyfriend had gone abroad, to study international relations at the London School of Economics, while she pursued a master's in poetry at the University of Virginia. I'd met the boyfriend once. He was a very small fellow with unwashed hair, tiny round glasses, and tremendous, outsized hands. You couldn't not notice his beautiful hands. 
Rachel told me those hands had found another woman to reach for in her absence. He'd called and confessed this to her, two months after the fact. She told him it was over. He immediately flew to Virginia in an act of contrition. They wept, they cursed; they held each other tenderly, they screwed not very tenderly. Then he left for London. Nothing was resolved. She was alone with her wounds, alone in Charlottesville, Virginia. He called again and again and pleaded with her to give him another chance. He swore he'd prove his devotion, if only she would grant him one more chance. She could not do that. Instead she called me. She said she'd never stopped thinking about what it would be like to be involved with me. I hadn't known she'd ever started—and besides, I wanted to say now, it would probably be a nightmare to be involved with me. But I stayed mum. She said she'd been adept at disguising her attraction, but it had been there all along. Adept is too pale a word, I told her. She wanted to come out and say something but didn't know if it was appropriate. She admitted she felt oddly giddy and drunk, as if she were capable, suddenly, of anything, and this scared her, made her think that she should play things close to the vest. Go ahead, I said. Say it anyway. What's to lose? Our friendship, she said. We'll always be friends, I said. Almost as if changing the subject, she said she'd be spending spring break in upstate New York, at her father's country home. But she wasn't changing the subject. What train would I take to get there? I asked. She told me the line and the station stop, said she'd happily meet me on arrival. Nine days later I was there. The country home was sprawling and drafty, nearly three hundred years old, with low ceilings and a thriving resident population of mice. 
Her father, a theater producer, made it clear that he was not to be bothered unless we had ideas about where he could find the money to stage an adaptation of one of Thomas Mann's lesser novels. He gave me the once-over and then dialed someone on the phone. After a tour of the grounds, Rachel and I walked down a long dirt road and sat in a field of what had been alfalfa the year before. The sun was warm on our faces, and we reclined at the crest of a hill where we could see out over a valley to the low mountains that rose on the far side. As the sun dropped behind her it turned her auburn hair a fiery gold. I wanted badly to kiss her but lacked the courage to move near enough to her mouth. We walked down the hill amid the springtime smells of melting water and pine needles. I knew I would remember that afternoon with perfect clarity for the rest of my life—the blue sky, the geese and their honking overhead, the light on her hair and her nervous half-smile, the smell of dust off the road. The anticipation of how sweet and soft her lips would be. After dinner that night she handed me a letter she'd written a couple of days earlier. She said she'd wanted to write one last time before everything changed. Then she left the room, and I sat in the lamplight, reading. The letter closed by saying, This is a goodbye kiss to everything that has come before. It was a beautiful piece of writing, funny and sweet and passionate; it even included a long meditation on the beauty and eroticism of the word passion. I didn't quite know what to think about what was happening between us. I'd always wanted to be more than friends. She'd always insisted we couldn't be more than friends. I'd finally made my peace with that, and now the tables had turned. She was the one in pursuit, after I'd given up. I was more nervous now than when my feelings had gone unreciprocated. I'd become quite comfortable with my feelings going unreciprocated. Preemptive rejection kept the stakes manageable. 
I sat alone with her letter for a long time. When she returned she held a glass of red wine for each of us. I'm sick of boys, do you know that? she said. All I've ever had are boys. It's time I had an adult relationship. We drank the wine. We went upstairs together. It felt like an adult situation. We undressed and went to the bed. I was feeling more adult by the moment. After some exploratory caressing, we agreed that we'd be wiser to stop before we went too far, wait for another time when we'd be more comfortable, more sure in each other's presence—our adult selves applying some objective analysis to potential outcomes. I told her I had all the patience in the world. Coming from any other guy, she said, I wouldn't believe that for a moment. But from you I do. As if to disabuse her of her foolishness, I began to kiss her stomach, her thighs. It wasn't long before we were doing exactly what we had said we weren't going to do. As we lay tangled in the sheets, she said, I had no idea you were so religious. What do you mean? You kept saying, Oh my god, oh my god. I guess I must have. But doesn't everyone? I thought Catholic school had taken all of that out of you. No, I said, I've merely found a new altar at which to worship. We raked the yard in the late morning sun, exposing the fecund smell of wet, decaying leaves, the wetness trapped since autumn. She exuded elegance even in yard work, elegance in posture, an elegant wool sweater just so on her shoulders, that shy half-smile she smiled to herself that signified her mind at play. I'd never known a woman more attractive in the act of thinking. I saw in my own mind her naked body, its dips and arcing lines, petite navel, small nipples on ample breasts, powerful yet graceful thighs and calves. Cheeks round and red as plums. A delicate, slender neck. Biceps firm from regular games of squash. Pale skin and eyes of a glacial blue, a sprinkle of freckles on her face. We walked through the woods along a little creek. 
She tried to leap across the water and landed in mud. Her shoe, covered in slime, made a sucking sound when she pulled it free. We laughed and laughed. In an open swath of grass the sun-bleached bones of an animal, the size of a small dog, lay in the very shape in which it must have died—except the skull, which was a foot away and facing back upon the rest of itself, in terror or bemusement it was hard to tell. We noted this and moved on, a tiny fissure in the texture of the day. In bed that night she said, I don't want to lose control. Not yet. If I let you get me off, everything will change. Hasn't it changed already? I mean really change. She said every decision she'd made in the previous three years was now called into question. She doubted that anything she'd done was due to passion. With her boyfriend she'd found comfort, someone who wouldn't challenge her, who provided stability and familiarity—the safe and reliable college beau. So much for stability, reliability. She'd gone with him to Seattle after college to escape the hard choices of life out of school, when suddenly all of one's options are narrowed, when you finally have to figure out who to be. She'd wanted a sabbatical from decision making. She'd wanted a sojourn of sea-smelling air and Asian-Pacific cuisine, a lush landscape of emerald-green. Now something inside of her was ignited, and she wanted more. Her mind was on fire and her body with it. Her head crackled with ideas. She wanted to write for real. She wanted to create again. Her master's program in Virginia challenged her to think more deeply. Her boyfriend's departure and betrayal liberated her to feel more intensely. So this is what it feels like to make love to a liberated woman, I said. I'll show you a liberated woman, she said. After she'd shown me, I told her that every decision I'd made in the past three years could be traced to the death of my brother. 
I recounted how one of my uncles had mentioned that I was the likelier candidate for a self-inflicted death; I admitted that although his death had been unbearably sad in the beginning I'd found a way to take a kind of grim pleasure in it. It was the black heart of my life; it gave my life a bleak grandeur it would have lacked otherwise. I'd come to treasure that grandeur. Careless bliss and unspoiled contentment were for the simpleminded. He had revealed the secret passageway off to the side of the life we all led. He'd pulled the curtain on the central fact of existence, which until then I'd failed to comprehend as more than an abstraction: life was optional. From the far end of that passageway he beckoned, mutely suggestive, wrapped in mystery. This was the unmentionable secret at the center of my days in the time since his death: the fact that he was always with me, though dimly remembered and void of substance, like a phantom limb. So this is what it feels like to have a threesome, she said. I've never known anyone to make a joke about my brother, I said. I'm not trying to be callous, but don't you think it's past time? Maybe. And about what your uncle said— Not tonight. Let it go. Exactly, she said. Let it go. I soon became familiar with the long train ride to Charlottesville, the halting, slowly accelerating departure, newspapers and books shielding faces, drinks in the jolly bar car. Strange intimacies with strangers, the proffered stories and the swiveled glances. The endless telephone poles and the scalloped pattern of their lines, rising and falling, rising and falling out the windows. The filthy ditches and the piles of gravel and the scrap-metal heaps. Long lengths of gleaming metal pipes stacked in pyramid form. Featureless glass office towers, low-slung factories abandoned to rot. Brick bungalows and back yard swings. The huge neon sign on the Delaware River Bridge, part boast, part lament: TRENTON MAKES—THE WORLD TAKES. 
She must have known that my devotion to our correspondence hinted at larger devotions; yet I wondered how something so powerful could have remained dormant all this time, or if not dormant then at least hidden. She was a mysterious creature, sensuous in the way she moved, self-possessed in the extreme, yet beneath the calm exterior a fierce intelligence burned, a hunger for ideas and language. When I arrived at her apartment the first time, she insisted on reading aloud to me the last eight pages of Don DeLillo's first novel, Americana, which she'd just finished. Then she ordered me to carry her to the bed. We spent the rest of the afternoon there in the warm yellow light, delicious hours of indulgent pleasure. How sweet the taste of stolen bread. The next day, while we made dinner, the phone rang. By the way her face changed when she answered it, I knew it was her boyfriend. He told her he'd received her letter, in which she'd flatly stated their relationship was over. He couldn't let himself believe it. There must be someone else, he said. She did not answer. In that silence there's a name, he said. There is someone, isn't there? She hung up the phone. Later, while we were lying next to each other in the dark, she said, It scares me to say this. But I know as long as I live I'm never going to feel about anyone the way I feel about you. I'm never going to find someone who makes me feel so good about myself. I respect you. I respect your need for freedom. That's the beauty of this, I said. We respect the other person's needs. I don't think we come to this with blinders on. We can have freedom in togetherness. Guardians of each other's solitude, she said. Yes. Guardians of each other's solitude. We appeared to agree on everything that mattered. We could have each other. We could have our work. We could have our space in which to think and create and we could do it in nearness to a lover. I allowed myself to believe we could have it all. 
At the end of her spring semester she came to New York for nine days. We went out most nights, drank martinis and smoked expensive cigarettes, ate Vietnamese food in Chinatown, walked through SoHo and the West Village, hopping from bar to bar. She tried to educate me on the poetry criticism she'd been reading, Randall Jarrell and Helen Vendler. I tried to interest her in the novels I'd been reading, The Virgin Suicides and The Pure and the Impure. We confessed our dreams of writing something great, professed our desire to support each other in the work of doing so. You have an idealized vision of me, she said one night. I can't possibly live up to it. I think you'll be disappointed eventually. I'm afraid of that. I wanted to assure her it wouldn't happen; if anything, I'd disappoint her. Of course she wasn't some angel. She wasn't perfect. But all I knew of her, all I could intuit, intrigued me. Even her moodiness when hungry charmed me. Rather than be wounded by her testy tongue, I was moved to feed her. I wanted to learn the art of taking care, and anyway she mostly took care of herself, and happily. She avoided making me a perpetual human-improvement project. I did her the same courtesy. We each had our flaws—mine were impossible to hide—but we had no urge to modify them in each other. I began with an image of her and wanted only to add understanding and nuance and roll with the punches. I didn't want her to conform to my image. I wanted her to expand and complicate it. Or so I told myself. She did so in spades one night when I made some stray comment concerning my notebooks. They were my repository of toxic thoughts and unspeakable dreams, my testing ground for scenes and ideas, my suicide commonplace book, my sanctuary of the mind. Mention of them cast a shadow over her face. It was there suddenly, and just as suddenly it was gone, and I knew the passing shadow meant that she had read them. I said, You didn't, did you? 
She twisted her face in shame and said yes. She admitted she'd gone looking for herself, gone looking for my most candid thoughts about her. It was narcissistic, she said. I wanted to know what you had written about me. It was that simple. Only a few days earlier I had told her it would be over if she ever dared violate my private writings. I'll throw you out on your ass, I said. That will be it. Finito. Done. My trust would be destroyed. We were no longer in the realm of the hypothetical. My threat had backfired. Instead of warning her off, I'd fueled her curiosity, made her think there were things worth reading in those mad scribblings. For everything else she was, aside from a narcissistic snoop—witty, well read, a great beauty, a kind soul, a sporting lover—I decided I should try to forgive her. So you want to know my secrets? I said. Is this a trap? she said. I'll tell you my secrets. You don't need to go hunting for them. I don't know if I want to hear this. You can handle it. You think? I do. Here's one: I fear I'll one day put a gun to my head, to know what that feels like, to bring myself closer to the one person I can't seem to reach another way. Um, okay. Also, I could have saved him. That's the big one. I had my chance. I could have saved him but I betrayed him with selfishness and inattention. I don't buy it. I knew you wouldn't. I don't expect you to. But it's true. She sat for a long time in silence. Do you really believe that? she said at last. I've thought about it almost every day since he died, I said. Here's what I think, she said. I think what he did was the ultimate act of selfishness. And you're one of the casualties. You didn't kill him. He killed himself. But you can't bring yourself to blame him, so you've got to go looking for suspects, and the most convenient one to finger is yourself. A fury rose within me, an urge to defend him, but I couldn't think of how, so I held my tongue. Be clear what it is you're mourning, she said. 
You're not mourning what you had. You're mourning what can never be. You're mourning the loss of possibilities. I wanted to tell her she was wrong, or maybe just glib. I wanted to tell her that the manner of the death made a difference in the manner of the mourning. I wanted to tell her that a suicide bequeathed the grieving a unique blend of emotions—anger and guilt first among them—and an intensity of regret otherwise unknown in the human experience. But I feared her superior education, her wider breadth of reference. She'd probably run circles around me, leaving me feeling lousy about how poorly I understood the central event of my life, and I didn't need it, not that night—I was too invested in my own mythology; I'd exposed myself and been told I was wrong—so I rolled over and pretended to sleep. We honed a routine of twice-weekly telephone talks, occasional visits to the other's city, afternoons in bed whenever possible, followed by nights on the town. We tinkered with the definition of our situation. For a time we were having an affair. For a time we were boyfriend-girlfriend. For a time we were even something like friends with benefits, free to see other people. Depending on the week, we were either in cahoots or in love, or all of the above. Bob Bartley and I talked so infrequently I remember every occasion with uncanny clarity. I even recorded these encounters in my journal, they were so strange and suggestive. The first time, he asked if I would proofread something he'd written. I didn't want to proofread his work, but you don't say no to the most important person at the world's most important publication. I read the column. I disagreed with everything in it, but it was powerfully written. That was the unmistakable thing about his editorials—even if you thought they were crude ideological screeds, as they almost invariably were, they left you with no doubt about what he believed. 
He claimed to craft everything he wrote for optimum "muzzle velocity," as he once put it to another journalist. His style owed a great deal to the old yellow journalism of personal invective; he didn't just savage his opponents' ideas, he aimed to obliterate his opponents altogether, or at least ream them with a rusty poker for their intellectual bankruptcy, their moral cretinism. I told him I saw only one mistake. He'd made the words "pipe dream" one word, with no space between them. I told him it should be two words, according to Webster's New World Dictionary, which was my authoritative source in such matters. He told me he didn't care what Webster's New World Dictionary said. It was his editorial, and he wanted pipe dream to be one word: pipedream. He said I should delete the space I'd inserted between pipe and dream. I did. We talked a second time a few months later. I was standing in the hallway with a colleague from the Leisure & Arts page, and Bob Bartley approached us. He said he had two doctors' appointments on the Upper East Side of Manhattan the next day. He had a bit of leisure time to spare between them, and wondered if there was any art worth seeing at the museums on the Upper East Side. I said, Yes, there's a wonderful show of Walker Evans photos at the Met. He said, Thanks, I may have a look at that. A few days later I met him in the hallway. I said hello. He did not say hello. I said, Bob, did you see the Walker Evans show at the Met? He stopped and looked at me. I wondered if I should have called him Mr. Bartley. He said, Yes, I saw it. What did you think? It wasn't for me, he said. I stayed for five minutes and went to the Egyptian galleries. Walker Evans was, among other things, a great documentarian of Depression-era southern poverty; Bob Bartley was appalled by the very idea of poor people. He'd once told the Washington Post Magazine that he didn't think there were any poor people left in America, "just a few hermits or something like that." 
To Bob Bartley, Walker Evans's photos were a form of pornography that depicted human beings in a sinful state of filth and depravity, and such images had no place in an American museum. Of course I disagreed. Not only did I appreciate the unadorned honesty of Walker Evans's photographs, I'd grown up in a poor family myself. As a child coming of age on a farm where we couldn't make enough money to get by, I'd stood in line with my mother at the community hall in Currie, Minnesota, for handouts of surplus government cheese. Pictures of people like us from the time of the Great Depression hung in many museums, farmers too broke to feed themselves without government help. Bob Bartley didn't believe the government should be in the cheese-handout business. Rachel came to visit during her winter break. She was working on a long paper and wanted someplace quiet to hole up and write. My apartment served nicely, as I was gone each day for ten hours. I'd come home from work to find her in bed, exactly where I'd left her, surrounded by a scattering of papers. Wound up from copyediting against deadline, I'd pour myself a glass of bourbon and put some Miles Davis on the stereo, cook us dinner. The music didn't bother her. She stayed in the bedroom, naked, unshowered, writing intently. I made it my duty to see that she ate, since she claimed to be uninterested in food, only words, ideas. I brought her cold drinks. When she panicked at the prospect of running out of paper, I went to the stationery store and bought a stack of legal pads. I was doing my best to be the handmaiden to creativity. That's what I would have wanted from her. I took her at her word that she was on to something, a new theory, a work of genius. I could hardly tell her to stop; that would have been heresy to both of us. If you're working and it's flowing, you run with it. The muse didn't visit very often, so you had to give yourself over like a love slave when she did. 
I assumed at first that her claims to genius were at least tinged with irony. On the third evening she began to frighten me. She'd hardly slept. She talked in great strings of sentences, making metaphors one after another. I sat on the edge of the bed and listened as her pronouncements became ever more grandiose. She claimed she was channeling James Joyce. She said she was rewriting the Book of Genesis. She was drawing a map for the politics of a new era. She was going to touch off a peaceful revolution, achieve what Marx had only dreamed of. She was going mad in my bedroom. It's the year one, she said. It's a politics for an age of information overload. The only difference between words and worlds is a typographical error. The only difference between immodality and immorality is a typographical error. I'm writing a bible for our times. It's going to change everything. George Bush won't bring peace to the Middle East. I will. I'm going to show the way. I'm going to have enemies, and they're going to want to put me away in an institution. I'm going to tell them I'm already in an institution. The University of Virginia is my institution. I'm going to make geniuses of everyone in the world. I'm going to succeed where Jesus failed. How? I asked. Because Jesus was not a woman. Because women weren't allowed to paint and act and study philosophy and math and they weren't allowed to write. I'm the first. I'm the female Jesus. I told her that she was scaring me, that she sounded delusional, but she only laughed at me with pity. You'll see, she said. Maybe you should just go away for a while. Come back when I'm finished. Then you can read it and know. I left to run some errands. I stayed away for hours, stopped in a bar for drinks, hoping that by the time I returned she'd have left. She hadn't moved. She started calling herself a mystic and a prophet. 
She claimed that if only she could sit down with Hillary Clinton, one-on-one, and teach her to see the world through the theory of the oneness of everything, then Mrs. Clinton would become our next President and the world would never again experience want or war. I began to argue with her, my voice rising in frustration. She remained unperturbed. She said she was sad that I couldn't see the future the way she could, but that one day soon I would. I would see and I would understand. She would help me along to the place where she was, and there we'd be partners in bliss. The next morning she stuffed her papers in a bag and left for the train to her father's place. Some of her clothes remained scattered in the bedroom. She didn't say a proper goodbye, just walked out the door in mismatched socks, her unwashed hair twisted in little pretzel shapes. I loathed myself for the relief I felt once she was gone. I was scared she might never be the same, that by encouraging her, replenishing her paper supply and bringing her dinner in bed, telling her all the while to keep writing, I'd unwittingly chauffeured the vehicle that had driven her over the edge. I called Rachel's father, but he didn't answer. I left a message telling him to be alert for changes in his daughter's state of mind. Then I caught the train to work and tried not to think about any of it. But when I got back home I looked up some half-remembered lines of Seidel: You said you were Baudelaire— / Or was it Marlowe?— / You said you were Blake / Talking English with the angels, / And said you were Christ, of course, / But never would say / You were yourself. They appeared in a poem called "Hart Crane Near the End," and I didn't need reminding how he'd met his. Her father saw right away that something was wrong. It took forty-eight hours to get Rachel an appointment to see her longtime therapist. After talking to Rachel for fifteen minutes, the shrink advised admitting her to a psychiatric emergency room, ASAP.
The shrink's secretary called an ambulance. The cops arrived. They spoke to the shrink, who told them what the situation was: a manic episode gone out of control. The cops then talked to Rachel, who calmly convinced them she was fine. An argument ensued among Rachel's father, the therapist, and the cops. While they argued, Rachel slipped away and vanished into the city, but before she left she told another therapist that if she were forced to go to the hospital she'd kill herself. She did not want to be held captive. She would not stand for it. Rachel's father called me at work, told me Rachel was on the lam. If I saw her or heard from her I was to call immediately. I hurried through my work of fitting headlines, writing photo captions, proofing copy, then I rushed for the elevator, hustled over West Street on the pedestrian bridge, and crossed Liberty Street to the subway station beneath the World Trade Center towers. I took an uptown N to Fourteenth Street, caught the East Side express to Fifty-ninth, transferred back to the N train home. No message on the machine; no sign she'd been in the apartment. I ate a bowl of soup. I paced. I called two of her friends in New York but neither one answered. There was a knock at the door, firmer than she would have knocked. I squinted through the peephole at two cops. I opened the door. They flashed their badges, offered their names. They asked if I'd seen her recently. I told them it had been a couple of days. Do you mind if we come in? one of them asked. Not at all. Just a formality, the other said. Gotta make sure she's not cut up in pieces in your tub. They sauntered around the apartment, thumbs hooked in their belts, leaning torso-first through doorways. I heard the curtain rings slide back and forth on the shower rod. The cops thanked me and left. An hour later she rang the buzzer, three large shopping bags in her arms, a new pair of shoes on her feet. She told me she'd eaten a nice dinner and had her nails done. 
Afterward, she went inside a Catholic church somewhere in Manhattan, sat in the back pew. I heard breathing, she said. And whispering. Like the devil was whispering in my ear. I could tell because the voice spoke with a forked tongue. She went on about rectifying Einstein's inadvertent creation of the atom bomb, using her vision to reveal the secrets of the unity of matter encoded in the invisible digits of zeros and ones that undergirded the vast computer web. She said she'd discovered a theory that would have saved Virginia Woolf and Sylvia Plath from suicide, if only she'd been able to share it with them. When she went to the bathroom, I called her father. Just stay with her tonight, her father said. I'll come get her in the morning. Rachel emerged from the bathroom and gave me a funny look. I tried to stay calm as a fury of helplessness rose inside of me. I'm sorry, I said, what's the plan again? Silence on the other end. I think sooner is better, I said. Okay. I'll call the cops and have them come right over. Fifteen minutes later, the same two cops came to the door. The look on Rachel's face when she saw them absolutely crushed me. You didn't, she said. You didn't just betray me. I'm sorry. It's for the better. You backstabbing son of a bitch, she said. You dirty Judas! I rode with her in the ambulance, held her hand. For the first time she looked scared. She said she'd known all along that her genius would be punished, she just hadn't expected me to be among her punishers. Her father met us at the hospital. He explained the situation to the intake nurse. Two other nurses coaxed Rachel onto a gurney and wheeled her down a dismal hallway, through a set of doors. We heard a shriek, then a long, piercing wail. We later learned they'd strapped her down and injected her with Haldol and Benadryl, a typical welcome-to-the-psych-ward cocktail. The next time I saw her was in a private room at Mount Sinai Medical Center, a few days later. 
She was heavily doped on a combination of drugs whose names I couldn't keep straight. She still professed visions, warned against the unseen and disruptive powers of static electricity. I didn't know who she was anymore, or what she was in the process of becoming. I feared she might never come back, which made me scared for her, and almost as scared for me. Her psychotic break confirmed a thing I'd long suspected, that if I let people get too close to me they were doomed; she later admitted that in the worst of her mania she'd been gripped by visions of my brother as described in the journals of mine she'd read, those grotesque gropings toward understanding him by imagining his brain sprayed on the wall. They brought a man in, she said. A big black man, and they did the same thing to him that they did to me. They held him down on the bed and put a needle in his arm, and he screamed just like I did, and I said, Don't you see? He's human and he needs to be touched with love and not poked by your instruments. They told me to go away and shut up. It's not your concern. Go away. It's not your concern. They said it over and over. She turned and spoke to her nurse. You're a healer. I like how you treat me. You don't poke me with anything. You touch me. You hold my hand. You're helping me come back to the ground. It got scary up there. I began to visit her every day. Sometimes she was happy to see me. Other times she hardly appeared to notice I was there. She'd long since stopped asking questions of anyone. She slept, or watched TV, or read and reread the get-well cards she'd been sent. It's so beautiful, she said. So beautiful. Everyone is together. My mom is laughing with me. She brought me chocolates and flowers and lots of gifts. She never did that before. This is what I want, exactly what I want. I'm so happy. It's been so long since it was like this. 
What a few days earlier she'd viewed as a betrayal and an imprisonment, she now saw as a means of unifying the nuclear family. Her parents had divorced more than half her lifetime ago, but now she'd gone to the loony bin and brought everyone together again. Her mania was a blessing in disguise, she said, just the thing to repair the breach between her mother and father, and between them and her siblings. I didn't have the heart to tell her it wouldn't last. One night she asked if I would please bring her a map and a book about gardening. I want to look at places far away from here, she said. I want to think about growing things. I want the idea of fresh air. I can't have any here. They won't let me leave. Not even for five minutes. The next night I brought her a guide to flower gardening and a Rand McNally atlas. She set them aside without looking at them. During her second week at Mount Sinai, I fell ill with a foul winter cold. I called in sick to work and didn't leave my bed for three days. I'd never been so thankful to feel so bad; it meant I didn't have to sit in her sterile room and pretend I was still her boyfriend. The morning she checked out, her father dropped her off at my place with a prescription bag full of pill bottles. After a few minutes of bogus pleasantries he fled, as I'd figured he would. I tended to Rachel with cool solicitude, clinical courtesy. She shone with a glittering confusion, beautiful and fragile as a Fabergé egg. That fragility haunted me. Our relationship had been best when we'd offered each other pleasure with our minds and bodies on a plane of equals. I had no sense that she wanted me anymore, only that she needed me, and it was precisely that neediness, so uncharacteristic of her, that made me want to flee. If I truly wanted to learn the art of taking care, here was my chance—but to be needed by someone was more than I could stand, because I knew I would fail. I just knew. 
I get the feeling I'm becoming a burden on you, she said on the third day. This isn't the easiest thing I've ever done. I'm not sure who you are anymore. She looked at me with a mixture of surprise and disdain. I think I need some time at home, she said. I've been away too long. Those words were the sweetest music I'd heard in weeks. Maybe if her father had whisked her away to the country for a month, my fear would have passed and we'd have made it. But I knew it was over and I knew I was a bastard for being grateful it was over. And as long as it was truly over I could live with being a bastard. I'd lived with worse. That afternoon she left for Virginia. That evening I called the phone sex line. It felt like coming home.

PART THREE

Falling Man

One might think an institution that existed to chronicle the story of American capitalism would be uniquely prepared for a fluctuation in the business cycle, but Dow Jones & Company had made a number of blunders that left it ill-prepared for tough times. To give but the most glaring example: In the late 1980s, the company bought ever-increasing shares of an electronic provider of business information called Telerate, for a final total of $1.6 billion. It was meant to compete with Reuters and Bloomberg. A decade later, Dow Jones dumped Telerate at a loss of more than a billion dollars, and no one ever heard of it afterward. The "new economy" bubble was a fiction that allowed the company to believe it might recover from the shocks of Telerate and other poor decisions, but the sudden implosion of tech stocks in 2000 hit Dow Jones like a blow to the solar plexus. Ad revenue tanked. Managers at the flagship editorial product in the universe of Dow Jones brands were instructed to streamline their budgets. In the spring of 2001 we received a memo announcing company-wide layoffs. Two months later another memo appeared in our in-boxes, announcing further staff reductions. The memos kept coming.
In July we received one informing us that the indoor plants would no longer be maintained throughout our offices—for a total savings of $40,000 a year. Reporters and editors were urged to pick a plant to babysit if we wished to see it stay, or, failing that, adopt one and take it home. Rumors began to circulate that the company would be sold to a competitor—the New York Times, the Washington Post. In the office cafeteria, reporters and editors had the hangdog look of a dwindling tribe being hunted by enemies with superior weaponry. That summer I scheduled a trip to New Mexico, while I still possessed the benefit of paid vacations. It was less vacation, though, than reporting mission. I wanted to see the public records on Dan's death. After years spent imagining and reimagining the scene of his end, I'd finally made peace with the fact that his suicide was the only story that really interested me. Trouble was, my stance of ambiguity toward it now felt phony; I didn't know him at all, never really had, probably never would, and what I called ambiguity was just a convenient cover for my ignorance. My mistake, I belatedly realized, was to so fixate on his death that I lost contact with who he'd been in life. The darkness of the act of suicide, the violence of it, the despair it bespoke, the raised middle finger it offered to the world—all that blotted out whatever he'd been while he lived. I couldn't access him. Too many questions; too few answers. A yawning silence, a black hole: that's what he'd become, more symbol than flesh. It was time, at long last, to confront the carnage head-on, to replace all the horrific images of my own devising with the cold, clinical truth. It was time to start at the end and see if I could work backward into his life. Seidel had suggested this in one of his poems, in lines I felt were aimed straight at me: To start at End / And work back / To the Mouth / Is the start—. 
He had all the answers, it seemed, for the questions that gripped me just then. The best way not to kill yourself / Is to ride a motorcycle very fast. / How to avoid suicide? / Get on and really ride. I didn't have a motorcycle but I secured a rental car and took it for a spin on the interstate south of Albuquerque, out beyond Belen, pushing the needle into triple digits. All the while I thought of what I'd find in the files. That night in my hotel I thought too of my little sister and how she'd reacted in the immediate aftermath of Dan's suicide. Lisa and I shared a similar calm exterior, an inscrutable demeanor that left our true feelings a mystery to those around us—even, on occasion, to each other. On the day Dan's body had arrived on a plane from New Mexico, to be driven from Minneapolis to the town in southern Minnesota where our parents lived, Lisa had slipped away unnoticed and paid a visit to Almlie Funeral Home. She told Mr. Almlie that she wanted to view the body. She wanted, for her own peace of mind, to say goodbye in person. Almlie hesitated before he replied. He understood, he said. He'd probably even feel the same way if he were in her shoes. But the decision to have a closed-casket funeral had been made for good reason. I don't care, she said, I need to see him. He tried to dissuade her. She wouldn't back down. And so he told her that when he'd finished preparing the body, she could have her wish. He needed a few hours. He'd give her a signal. When he turned on the light above the front door of the funeral home she could enter, but not before. In this way he could honor her request to avoid calling her at our parents' house and arousing their suspicion. Around nine-thirty or ten o'clock, he said. I should be finished by then. He paused again, thinking. Can you identify him by his hands? Yes, she said. Beginning around nine-thirty she drove back and forth along the street in front of the funeral home. Fifteen minutes passed, a half hour. 
She began to think he might go back on his word, or she'd missed his signal and he'd gone home for the night. She parked her car across the street and waited with the radio playing softly, waited what seemed an eternity. When the light came on, Almlie led her to a room where the coffin stretched against the back wall. The two portions of its lid were open, but the head of the corpse was covered by a cloth. Lisa stepped forward and looked at the pair of hands folded across an unmoving torso. She noted the fine red hair between the knuckles. Would you mind if I had a little time alone with him? she asked. He nodded and backed away. She waited until his footsteps faded down the hallway. When she was sure he was gone, she lifted the cloth to reveal his head. Her first feeling was one of relief that his face was there at all—a conjecture had floated through our extended family that the force of the gunshot had blown it away. His eyes, however, had been removed, the sockets sewn shut. At his right temple was a neat hole the size of a dime, on the left side of his head a much larger hole whose size she did not quantify, only to say that it convinced her he had not suffered, that his death had been instantaneous. She took a pair of scissors from her pocket and discreetly cut a lock of his hair to give to our mother. She unfolded the cloth and tucked it back in place. She pulled from her purse a letter she'd written him and placed it in the pocket of his blazer, taking care it didn't protrude and invite attention. She whispered a few final words—I never asked her what she'd said, or what she'd written, and she never volunteered to share—and then she called Almlie back in the room. When Lisa told me all this, late on the night it happened, I was torn between admiration and jealousy. 
Admiration for her courage, certainly—a nineteen-year-old woman, nervy as can be in the face of her brother's ugly death—but also an irrational jealousy at how insistent was her desire to pay last respects. I wondered whether she thought of this act of visitation as a burden she was best equipped to carry, and knowing this she decided to carry it alone; I suspected as much. That was the little sister I'd always known, unafraid of doing what was hard if she thought it was right, with no need for a pat on the back, and no interest in philosophizing about what it meant. I was jealous, I realized, because I hadn't thought of it myself. I wondered if it would have made a difference to have seen him in the flesh one last time, to have looked his deed in the face and not just metaphorically, to have told him what I secretly thought of him before he went underground. Maybe. Maybe not. It was too late to find out. After a lousy night's sleep in a Motel 6 near the airport, I set out in the morning for the Albuquerque Police Department and the state Office of Medical Investigators, each of which, I'd learned, possessed a report on Dan's death. I went first to the police, where a records clerk promptly made copies of the file at fifty cents per page, eight pages in all: The decedent was seated on the living room couch which was against the west wall of the living room. An SKS rifle was found to the immediate right of the decedent between his right arm and right leg. The decedent was attired in green shorts and T-shirt. . . . Several pieces of skull fragments, brain matter and high velocity blood spatter were found throughout the lower level of the apartment. . . . Later in the report the officer wrote, I took overall photos of the apartment, exterior and interior. I asked the clerk if I was entitled to copies of the photos, and she directed me to an office on the second floor called Criminalistics, where I was told the photo lab needed three days to process my order. 
The photos would be sent to me by mail and arrive in a week to ten days. Twenty minutes later, on a satellite campus of the University of New Mexico, two men greeted me at the Office of Medical Investigators, the records manager and the doctor who'd performed the autopsy. I'd called ahead to have them pull the file. The records manager was a tall, bearded, solemn, soft-spoken man named Walt, while the medical examiner, Marcus, was genial to the point of oddity, smiling at me, head tilted jauntily, while speaking of the finer points of cranial disfigurement. Walt handed me the autopsy report. I asked him if photos were an additional part of the file, and he said that more than likely they had photos in their archive, unless they were misplaced or damaged in the developing. He promised to check for me and said that if they existed, I did have a right to see them or have them copied. He could mail them to me, in fact. Marcus grimaced at the thought. I personally think it's much better if you sit down with us here and allow us to explain what you're seeing, he said. The images, from my reading of the report—and I confess I don't remember this particular case—but the images will likely be very graphic. I'd feel more comfortable showing them to you here than putting them in the mail and having them arrive one day in your mailbox. If you see them and you still want copies, we'll be happy to have them made and sent to you. Walt consulted his photo archivist. It turned out the photos were readily accessible, so I agreed to return in two hours for a viewing. Afternoon cumuli had begun to sprout like enormous white mushrooms over the peaks of the Sandias. The temperature in the valley edged toward one hundred. My palm made a sweaty print on the manila envelope containing the autopsy report. As I walked across campus I felt no urge to open it. My purpose did not demand haste. I wasn't on a deadline. The freshness of the evidence wasn't at issue. 
There was no criminal who at any moment might strike again, no victim in dire need of justice. I found my way to an asphalt basketball court in the middle of campus. I asked around and learned the circulation desk at the library kept a ball it allowed to be checked out like a book, as long as you left an ID. I dribbled along the sidewalk toward the court. My veins quivered with adrenaline. I began the routine I'd developed as a teenager dreaming of making varsity: a couple of hard runs at the basket, left-hand layup then right, a few short jumpers, little five- and eight-footers kissed off the board, then some turnaround fadeaways from the baseline. Legs limbering and sweat beginning to flow, I drifted out beyond the three-point line, working my way around it right to left, squaring my shoulders before I shot, chasing down the rebound and spinning the ball out in front of me as I sprinted back toward the arc, where I caught the ball and turned, made a quick fake, and stepped one step left or right before leaping and following through, releasing at the apex of the jump, the seams in the ball perpendicular to my fingers—a habit of the purest shooters, the gym rats with an aesthetic devotion to the pretty arc and spin of the perfect jump shot. I was deep inside a trance of fingertip and follow-through and ball and net, the world reduced to a set of internalized geometries, when a tall, broad-shouldered Native American man sauntered onto the court, snared a rebound in his huge hands, and took a shot, banking it in from ten feet out. He wore jeans and a sweaty tank top and had a slightly forward-leaning posture of defiance. You got a nice shot, he said. Thanks. You too. I played some. Me too. Long time ago. He laughed and said, I know how it is. 
We circled and shot with unspoken playground etiquette, one man rebounding, the other shooting, the shooter entitled to at least five shots and as many beyond that as he could make in a row, the roles switching when the shooter missed. Within twelve feet he was deadly. He always shot while moving to his right. He didn't dribble well with his left hand. I noticed these things in anticipation of the question he asked a few minutes later, after he'd curled his arm around the ball and wiped the sweat from his brow. Wanna go one-on-one? Sure, I said. Shoot for ball? Sure. Make-it-take-it to eleven, win by two? He nodded and took off his shirt. His free throw bounced off the back rim. Mine touched nothing but net. I stepped out beyond the three-point arc. He rolled the ball to me as if it were a bowling ball. I bent to pick it up. He gave me a five-foot cushion, daring me to shoot from where I stood. He'll learn soon enough, I thought, as I lofted a shot toward the rim. One-zero, I said. You like that shot, huh? I'll take it if you give it to me. He rolled the ball toward my feet again, a small taunt, a gesture of disrespect meant to annoy me. I didn't hesitate this time. I lifted the ball from the asphalt and cocked it above my shoulder and bent at the knees and rose and shot, one fluid motion that lasted half a second. Two-zero, I said. He slapped the ball between his hands and mumbled something I couldn't hear. Outwardly I projected an air of utter placidity but in my head I talked a silent stream of trash. You don't want to set up in my face? I'm gonna shoot you down without breaking a sweat. You're not even going to get a shot off before it's over. I'm gonna blank your ass. Eleven-zip, motherfucker. He bounced the ball to me this time, a token of begrudging respect. I caught it and shot again in a single coiled stroke. He leaped toward me, stretching to block or tip the shot, but he was late by a fraction of a second. Three-zero. The freebies were over. 
He knew he couldn't give me space. He didn't bounce or roll the ball, he handed it over from arm's length. He crowded me and waved his hands in my face. I dribbled backward a couple of steps, slowly, nonchalantly, and when he started moving toward me, his momentum carrying him away from the basket, I made a quick crossover dribble, left to right, and blew past him toward the hoop. When I'd finished whipping him and we'd shaken hands, we sat in the shade of some trees and shared water from his jug. He said his name was Raymond. I asked him about his job on the campus grounds crew. He'd been doing the same few things every day for five summers now: mowing, trimming trees and bushes, inspecting and maintaining the sprinkler system. Today he'd been repairing a valve in a water line and merely wanted to prolong his break, divert his mind from the boredom of his work. He thanked me for playing with him, shook my hand again very intently, called me a worthy foe. I wasn't so sure. There could be no good reason for the fierceness with which I'd beaten the poor man—unless I'd unconsciously made him a stand-in for my brother, the brother I'd always begged to shoot hoops with me when we were kids, the brother whose corpse was described in painstaking detail in the soggy manila envelope at my feet. After Raymond left, I sat for a while and read. The body is received clad in an olive green T-shirt which is blood-stained in the back, an olive green pair of shorts, two white socks and one pair of blue undershorts. The body is cool to touch. Rigor mortis is fully fixed. Fixed purple livor mortis extends over the dorsal surfaces of the body, except in areas exposed to pressure. The scalp hair is red and measures 3 inches in length over the crown. The irises are hazel. The pupils are bilaterally equal at 0.6 cm. The cornea are translucent. The sclerae and conjunctivae are unremarkable. The nose and ears are not unusual. The decedent wears a 1/2 inch red mustache and beard. 
The teeth are natural and in good repair. The neck is unremarkable. The thorax is well-developed and symmetrical. The abdomen is flat. The anus and back are unremarkable. The penis is circumcised. The testes are bilaterally descended within the scrotum. The upper and lower extremities are well-developed and symmetrical, without absence of digits. No identifying marks or scars are readily apparent. There is no evidence of medical intervention. I returned the basketball, retrieved my driver's license, and walked across the grounds to the Office of Medical Investigators. Walt and Marcus ushered me into a bland conference room equipped with a slide projector and a pull-down screen. First Marcus offered explanations for some of the more technical language in the autopsy report. "Distorted calvarium," for instance, meant that part of Dan's skull had collapsed from the force of the bullet. Marcus again warned me about the gruesomeness of what I was about to see. I swiveled in my chair and faced the screen. Are you ready? Marcus asked. Yes, I'm ready, I said. Walt turned out the lights. The image projected to the screen was more horrific than any I'd imagined, and over the years I'd imagined a lot. I'd expected a grainy black-and-white snapshot, but the color saturation was as lush as a tropical sunset, the reds as vibrant as the seeds of a pomegranate. A giant chunk of the left side of his head was gone. His left eye and ear were still intact, but barely—above them was a gaping red cavity where his brain used to be. A piece of skull appeared to hang as if on a hinge from the top of his head, and what remained of his right forehead was crumpled inward. His eyes stared implacably at the camera, and his mouth hung ajar, as if he'd been in the middle of saying something when he pulled the trigger. 
The doctor showed several more pictures, some of them close-ups of the entrance and exit wounds, but it was the first that stayed with me—the force of the bullet evident in all its ferocity, the visual confirmation of the laconic language in the report: Portions of the cerebral hemisphere are submitted in a separate plastic bag. Not surprisingly, Seidel had seen it all in advance:

The wind lifts off his face,
Which flutters
In the wind and snaps back and forth,
Just barely attached.
It smiles horribly—
A flag flapping on a flagpole.

I took copies of the photos, thanked the two men, and headed north for the backcountry of Bandelier National Monument, where for three days I did something I hadn't done since my time in Montana. I lived in silence, in the out of doors, moving through mesa country with a pack on my back past no more evidence of human life than adobe ruins and scattered potsherds, lost in a tactile world of stone and wood and clay, sleeping beneath the stars in a land as foreign to me as the moon. I didn't see a soul. I didn't want to leave. But duty called. I'd been back in New York for two weeks when I came home from work one night and found a fat envelope in my mailbox. The return address—Albuquerque Police Department, Criminalistics Division, Forensic Photography Unit—warned me what I'd find inside. I poured myself a glass of bourbon and sat with it, turned it in my hand, sipping now and then, savoring the smoky burn, telling myself there was nothing to fear. He was dead. The documentation wouldn't change a thing that mattered. I finished the glass of bourbon and poured myself another and then I opened the envelope.
Among the images it contained—a photo from the front of his apartment, down the long hallway to his living room, a slumped body faintly visible on the couch in the corner; a photo taken from partway down the hallway, the slumped body now more prominent, pale arms, pale legs showing through the gloom; a photo of his bedroom, two white cowboy hats upside down on the bed, as if he'd tried them on and tossed them aside; a photo of his kitchen counter, coffeemaker on the left of the frame, a hunk of bloody viscera pooled next to it, twenty feet from the body; a photo of the wall above the couch, spattered with blood and yellowish bits of brain matter; another like it from another angle, and another like it, and another like it; a photo of his body from above, right arm pointed downward, left arm bent at the elbow, gun cradled between them at an angle, stock resting on the floor next to his right leg, barrel pointed toward his left shoulder; a photo still closer of just his head, eyes open as if in shock, portions of his skull peeled back from the force of the bullet, the entire left side of his head all red and wet like a watermelon crushed with a baseball bat; a photo of a big chunk of his brain, disgorged but still intact, lying next to him on the couch all glistening and salmon-colored, like a skinned cat; a photo of a bullet hole in the ceiling; a photo of a rifle cartridge on the floor, nestled in the carpet; a photo of a box of cartridges on the floor, PMC brand, prominently stamped with the words WARNING: KEEP OUT OF THE REACH OF CHILDREN—I came across a photo of his left foot, clad in a white ankle sock, and next to it the telephone, its cord pulled taut across the room, its keypad stippled with a mist of blood. That was the one I couldn't get past. I put down the photos and picked up the bottle. For weeks afterward I walked the night streets of the city, caressing the bitter estrangement of my secret knowledge, my glimpse at the tableau of his end. 
I drank myself into furious oblivion, alone in my new neighborhood bar at closing time, and in the morning I stared at a blank piece of paper in the typewriter before I dressed for work. I should have called him. I could have called him. That's what I kept thinking, staring at that empty page. I should have called him. I could have called him. My mother had suggested as much, and I'd put it off. I'd figured it could wait. The sight of that telephone in the police photos only confirmed my conviction that I'd had a chance to save him and missed it. There was no way around it. The phone was right there, within arm's reach. Its ringing could have changed the course of his day. My voice could have given him a reason to live. I'd never have known it, of course. Hard to imagine him saying, months later, So, bro, that time you called me after the breakup . . . you saved my life. Instead he was gone, still and forever gone. I spoke to Bob Bartley for the last time on the day he announced his retirement as editorial page editor. Dow Jones & Company required senior executives to retire at the age of sixty-five, so Bob Bartley would be replaced as editorial page editor by Paul Gigot, who'd won a Pulitzer Prize for commentary and often appeared on The MacNeil/Lehrer NewsHour on PBS. Bob Bartley would still write a weekly column called Thinking Things Over, in which he would say the same things he'd been thinking for thirty years all over again. We boarded the elevator together, just the two of us. His hair was mussed, and his shoulders were slumped. He had the doleful look of an injured horse aware it's about to be taken out to pasture and shot. Big day, I said, trying to sound jocular. Yes, he said. Now Paul gets to see how hard you work, I said, staying jocular. That's right, he said. And I have to figure out how to disengage. Not sure how to do that. Maybe stop coming into the office every day. Yes, I can imagine that would be a challenge after thirty years. 
He didn't respond. I tried to think of something else to say to him—something big-picture or consequential, now that his reign was up. I thought of asking him how he felt about an in-depth study of his editorials by the Columbia Journalism Review, which found that his page "rarely offers balance, is often unfair, and is riddled with errors—distortions and outright falsehoods of every kind and stripe." I thought too of asking him whether he felt in any way responsible for the death of Vincent Foster, the White House counsel to Bill Clinton who'd killed himself shortly after Bob Bartley published a series of attacks on his integrity. Foster's suicide note, discovered in his briefcase six days after police found his body in a suburban Washington park, expressed frustration that "the WSJ editors lie without consequence." After Foster's death, Bob Bartley's editorials insinuated that Foster may have been murdered for knowing too much about Whitewater, and called for a special counsel to investigate. "The American public is entitled to know if Mr. Foster's death was somehow connected to his high office," Bob Bartley wrote. I thought the American public was entitled to know if Bob Bartley thought Vince Foster's suicide was somehow connected to irresponsible journalism, and I wondered whether Bob Bartley had considered, for even a moment, the family of the dead man when he wrote those words. They made it difficult to think of Bob Bartley as a man who no doubt loved his wife and kids, was generous with colleagues, tithed the appropriate amount at his church of choice, and did kind things for his friends. Though I detested his politics, it was this sin that disfigured him so grotesquely, turned him into a caricature of a human being in my mind, an obsession I would even dare say: that nothing was off-limits in the pursuit of a political vendetta, including paranoid musings on a man's tragic suicide. 
In my heart I knew it was the wrong day for such questions, and anyway I was a coward when it came to asking tough questions of anyone but myself, and even often of myself—one of the reasons I knew I wasn't cut out to be a reporter. We parted ways in the lobby, him heading for his limousine to Brooklyn, me for the subway to Queens. Well, I said, enjoy your newfound freedom. I'll try, he said. I never spoke to him again. I held mornings sacred, time entirely my own, in my own space, with my own music on the stereo. Alone in my bed I'd wake to the alarm and the day's first light, make a pot of coffee. A hot shower and a shave, a shirt, a tie. An hour at the typewriter, coffee at hand, jabbing the machine to make a sound like a thing being built. I wrote and rewrote the story of my brother's end that summer, based on what I'd once been told by my aunt Ruth, who'd spoken to all the principals. It was the one story that never got old in the revision. I'd compile new versions, new variations, improvisations on a tune I couldn't quite seem to hear; I'd give it a rest when the versions started changing one word at a time instead of blossoming in new directions. Then I'd strip away anything extraneous, cut it to the bone, aiming for the very minimum I could say for certain plus a little I could not. I wanted to fashion an ice pick out of words. I wanted concision, dispassion, an accurate accounting of a man's last moves on the brink of a self-willed death. Despite my best effort, I couldn't help thinking it stylized and incomplete: He spent his last afternoon with friends, this much was known. One of them had a hot tub. They soaked and drank beer and told stories and laughed. They were all hot air balloonists, and their stories tended to circle around dubious flying conditions, botched landings, hairy takeoffs in squirrelly winds. The afternoon passed toward evening. 
Someone suggested they go out for a drink, to a neighborhood place they liked in Rio Rancho, a place called Phil's Bar. Dan said he wanted to go home and get his darts so he could play some cricket. Seven o'clock, it was agreed. They'd meet at seven. They'd see him then. Back in his apartment, he picked up the telephone and dialed his ex-girlfriend. She'd been his girlfriend for eight months, his ex since the previous day. She had two kids, a boy and a girl, with a man from whom she was separated, not divorced. The kids were ages seven and eleven and both had begun to dress in a cowboy hat and boots, in imitation of Dan. Final custody remained uncertain, property yet to be divided, papers yet to be signed. It was a messy situation, and Wendy had insisted on a break while she got her life in order. A break and then they'd see where things stood. He was drunk when he called. They spoke briefly, talking past each other, saying things they didn't mean, as sundered lovers will. You're not thinking straight, she told him. Sleep it off. Call me tomorrow. This was his last known contact with another human voice. From then on it's all conjecture. Maybe he went to the couch with his lowball glass, working it around in his hand, swirling the ice, not even noticing the taste when he drank. Maybe he couldn't sit still. Maybe he was up and pacing the apartment. Maybe he was thinking that the only thing to do was hurt her back. Maybe he was thinking that the only thing to do was hurt himself. He stepped out on his deck for some fresh air. Or maybe he went again to the freezer, put another chunk of ice in his glass, poured himself another finger of scotch. Or maybe he opened a beer. He paced from one room to the next. Or maybe he sat on the couch and worked the glass around in his hand some more. There it is, he thought. Right there, behind the closet door, the answer for everything. He opened the closet. He reached for the gun. He felt the barrel, smooth cool metal. 
Maybe he caressed it with a kind of loving tenderness. Maybe he simply connected the clip, snapped it in place with a grim satisfaction. He drank a long swallow of scotch. Or maybe he cracked another beer. He walked from one end of the apartment to the other, brandishing the gun, getting a feel for it. Or maybe he sat and tested it against his temple, savoring the chill of it on his skin, testing the membrane between life and death. Maybe he leaned into it, relieved by the onrushing prospect of freedom. Maybe he was calm. Maybe he took his time. Maybe he was itching to get it over. Maybe he was furious. Maybe he felt as if his head were entrapped in a goldfish bowl. There was no one else in his diminishing world, nothing but the vise grip of despair, squeezing him without mercy, reducing his options for escape to the flash from the mouth of the gun. This was my continuing dream that summer, the locus for my imagination, a voyeuristic wish to inhabit the scene of a suicide, to see the carnage firsthand and taste the smoke in the air, the smoke from the heat of the bullet. My commute, door to door, typically took fifty minutes on the N train. It was another part of my day I cherished, a three-quarter-hour journey in benevolent captivity, an in-between time. On the train I came back to the world outside my own head. Boarding near the end of the line, before the cars became crowded, I usually managed to find a place to sit. I used the time to read or, when my attention faltered, to survey the kaleidoscope of city life in the faces of my fellow New Yorkers, the galvanic friction of young and old, rich and poor, black and white and every shade between. Bound in a fragile intimacy sustained through studied nonchalance, we were acutely aware of those near us but discreet with our attentions lest we send a creeper vibe, most of us anyway. There were always creepers and I tried not to be one of them. 
Nonetheless about every other day I swooned for a woman I would never see again, a one-way romance consummated in a sideways glance and lasting mere minutes, poignant in its transience and futility, in the sickly purity of my unexpressed longing. I had one commute more memorable than all others by far. It was election day, the mayoral primary—a day on which the city painted itself in red, white, and blue, posters and placards taped to light poles and subway-stop railings, an upbeat but languid mood in the streets, as people played hooky from work to do their civic duty. I intended to vote in the evening at my polling place in Queens, for Mark Green, who I felt sure would be the city's next mayor if he survived Fernando Ferrer. As it happened, neither man would be much heard from again. At a little after nine a.m. my telephone rang. It startled me. No one ever called me in the morning. I had a bad feeling before I even answered. My friend Sarah wanted to know if I'd heard the news. I told her I hadn't heard any news. She said two planes had hit the World Trade Center towers. It looked like terrorism. You probably won't be going to work today, she said. Damned if I won't, I thought. On my way to the subway, having just missed a train pulling out of the station, I stopped in a bar and looked at the television, saw the two towers framed by the camera, both of them smoking, not white smoke but black, a hint of the tremendous heat at work. It looked bad, but I couldn't begin to imagine how bad. I made a vow that sustained me through the next three hours of travel by train and by foot to an office building I would enter for what would turn out to be the last time: I would not be reduced to a stunned spectator. I would not sit in a bar and stare at a screen. This was the biggest story in the world all of a sudden, and it was happening just across the street from my employer, a newspaper regarded as a secular bible by some of the people who worked in those towers.
I didn't care what I had to do, I was going to work, straight to the managing editor, to whom I'd offer myself for whatever was needed, phone dictation, rewrite, you name it. It was strange to feel this way—preemptively purposeful. I'd become so jaded with the limitations of journalism that I no longer thought of myself as a journalist but as merely another drone in the hive mind of Lower Manhattan, trading eight hours of each weekday for cash at a paper whose editorial stance I found not just wrong but dangerous. Some habits die hard, I guess. I'd spent seven years, off and on, and many tens of thousands of dollars on an education that taught me three major things: stay curious, be dogged, run toward the story. Old instincts kicked in like a muscle memory. So I went to the story, which turned out to be many stories, depending on how you looked at it: a heinous crime, an audacious act of mass murder, a made-for-TV spectacle, a catastrophic fire, an airborne toxic event, and the most successful terrorist attack in the history of terrorism. September 11 was a lot of things and the beginning of many more: refugees and civilian dead in foreign lands, killed and wounded soldiers, TSA gropings, the Patriot Act, extraordinary rendition, CIA black sites, waterboarding, a linguistic squabble disguising a moral question about the meaning of the word torture, the prison at Guantánamo Bay, sadomasochistic photo shoots at Abu Ghraib, Total Information Awareness, drone assassinations, border hysteria, NSA data collection . . . a full accounting is beyond my ken. But before all that, before it became a rallying cry for war and state surveillance, it was a drama of suicide. Nineteen men on a mission demanding death on a day chosen for them. An untold number of jumpers from the towers who faced a choice of deaths on a day not chosen by them. A chain reaction of suicides. The hijackers believed their reward awaited them in the afterlife. The jumpers, who can say what they believed? 
When it came to the afterlife, they must have believed dozens of different things, but the one thing they all believed was that ten final seconds of flight was preferable to the inferno they fled. Later I tried to imagine their final moments, as I had with my brother's, but how far inside another man's death can we truly see? Even our own is a mystery until it's upon us, and for the people in those towers it can only be guessed at in the most superficial way. The thunderous explosion from the impact of the plane. Instantaneous fire erupting with a searing heat, the fire quickly growing. Panic as all exits close off. Smoke and flames swallowing all hope of survival, breathing excruciating, lungs overwhelmed. Suffocate or roast to death or jump, those were the choices, the last set of options, the question of how to die. There wasn't much time to mull it over. It wasn't a philosophical exercise. Die now—but die how? It was a question whose horror you couldn't inhabit. I was always ending up in all the wrong places: Bed-Stuy, the Wall Street Journal, the make-believe province of telephonic copulation. In order not to feel satisfied with life in the wake of my brother's death—in order to prove to myself that I had loved him—I'd denied myself contentment in all its forms, as if pleasure were anathema to my holy grief. In an orderly world I'd have had no business working across the street from those towers, a pig farmer's son from Minnesota, graduate of the University of Montana, a country boy in every way that mattered, though I'd tried to pretend otherwise. I had no business at all living in New York City, a place I'd judged hostile to most of what was beautiful about life on earth when I first encountered it. 
An arts page copy editor, I certainly had no business staring into the center of the biggest story in the world on a late summer day, rubbing my eyes, snapping pictures with a digital camera, inhaling pulverized asbestos, burning plastic, burning metal, burning paper, burning fuel, burning flesh. The images I encountered that day were ghastly, a scene of destruction on a scale unimaginable even as I stood on the edge of it, but it was the smell that stayed with me, remains with me to this day: the smell of an airplane made into a bullet. By an accident of fate I finally got my wish. I paid witness in the flesh to the scene of a suicide—countless suicides. There was nothing else to do. The office was empty when I got there. The whole building was empty, evacuated hours before. I climbed the fire stairs and walked around the newsroom, amazed to find myself alone at lunchtime on a weekday, in a workplace typically restless with several hundred people living in the perpetual now of gathering news. When it finally sank in that I was useless, I went back outside and stood on the edge of the smoking rubble, trying and failing to understand what had happened, a spectator minus the distancing screen. Paper blanketed the ash that blanketed the streets. Firefighters sat stunned, covered in dust, their heads in their hands. There was nowhere to look and not find evidence of ruin. I joined a group of five reporters at the southwest edge of the pile. Two of them scribbled in notebooks. One fiddled with a tape recorder. One snapped pictures, one held a video camera. No one said a word that was printable in a family newspaper. When the smoke made my lungs clench with intimations of an asthma attack, I walked north until I found a city bus. I sat down next to a man who told me he'd evacuated the north tower, and on his way down he'd walked past dozens of firefighters headed up. They're gone, he said. They have to be, every one of them. I remember the faces. 
I don't think I'll ever forget. I got off the bus at McHale's, dusted in ash to the knees, but the usual discretion ruled. On me, the barkeep said, as she poured my first drink. No one asked me where I'd been. No one needed to. All eyes were on the television; I watched for a while and then I caught a taxi home. The next day's Wall Street Journal was produced that night in New Jersey and carried a six-column headline in type nearly as large as the masthead, the fourth banner headline in the paper's history:

Terrorists Destroy World Trade Center,
Hit Pentagon in Raid with Hijacked Jets

I bought it and a copy of the Times at my corner newsstand early on the morning of September 12. I read all the front page stories and most of the inside of both papers, but one simple photo on page A7 of the Times stopped me cold. Taken by an Associated Press photographer, it showed a man in midflight. His head was down, his torso parallel to the vertical ribbing of the two towers behind him, several stories of them that filled the frame to the edge. He appeared to be falling along the demarcation line between them. One leg was straight, the other bent at ninety degrees. Together they formed a little triangle. One of his boots stood out, starkly black. His pants were black, his shirt white. His arms appeared relaxed. He looked almost peaceful, like a man suspended on a string, even as he hurtled with accelerating speed. His was the emblematic image of the terror of that day, though afterward it was not much seen again in the world of American journalism. We airbrushed him from the record. Readers excoriated the papers that published the photo, and the papers scrubbed it from their Web sites. We couldn't bear to think of the panic of his final moments, his awful need for flight. We wanted pictures of heroism, patriotism—firemen or flags, or better yet firemen holding flags—and he did not fit the bill.
He was the incarnation of our last taboo, the avatar of our worst private nightmare, a human being captured in the act of a self-willed death.

Only Connect

After the attacks we commuted to the cornfields of New Jersey, a trip that took me two hours one way. We put the paper together in a makeshift newsroom in the training wing of Dow Jones corporate headquarters near Princeton. Almost all the stories in the paper concerned terrorism: its practitioners, finances, backers, tactics, goals. It felt, for a time, a little embarrassing to edit pieces about the Cave of Altamira or an Ansel Adams show. When anthrax turned up in the offices of other media companies, all of our mail underwent a heat-steam treatment. The mailroom workers sorted it with masks on their faces and rubber gloves on their hands. They looked like lab technicians working with a deadly poison. When opened, the envelopes crackled like dead leaves, and the ink on the letters was often illegible. On the editorial page the imprint of Bob Bartley lingered, his obsessions trotted out for endless encores: the beneficence of tax cuts, the imperative of a missile defense system, the need for military spending on hardware and troops for vast overseas mobilization. Saddam Hussein became an urgent addition to the repertoire; Osama bin Laden appeared as an afterthought. I started keeping a folder of clippings, called FULL BLOWN INSANITY ON THE WSJ EDITORIAL PAGE. On September 12 the lead editorial stated: "We are entitled to presume that this is the work of the usual suspects—Saddam Hussein, the Taliban, the Iranian mullahs and other dictators who invoke Muslim fundamentalism to justify their fundamentally illegitimate power." There was no mention of what made the authorial we entitled to such a wide-ranging presumption, nor was there mention of the man who turned out to be the mastermind of the attacks.
The next day his name snuck into print alongside the primary suspect: "We would not be surprised if this week's atrocity was the work of Saddam or bin Laden or both." This contention was driven home by the pull quote in the adjacent opinion piece: "Can Osama bin Laden sow terror alone? Not likely. His group has had help from Saddam Hussein, and from Sudan." The next day the lead editorial called for hastening deployment of a missile-defense shield—"missile defense is as much a defense against hijacked airliners as it is against missiles," it stated bizarrely—an effort that seemed to me like a man lifting an umbrella over his head while being pelted in the groin by snowballs. On September 19, an unsigned editorial argued that the first and most important steps in combatting terrorism ought to include capital-gains tax cuts and immediate drilling for oil in Alaska. The same editorial stated: "Throughout history the periods of greatest military innovation have been wars. Now is the time to push for next-generation weaponry and electronics that will keep the U.S. ahead of not just terrorists but all adversaries. Democracies are reluctant to spend money on defense in peacetime, but in a war they will give the military whatever it needs." It would seem that war was needed, because a massive military buildup was needed, because nineteen men with box cutters had flown passenger jets into three iconic buildings on American soil. I couldn't follow the logic but I knew they wouldn't stop clamoring until they got themselves an honest-to-god, maim-and-kill war. Reading the paper became an exercise in cognitive dissonance. One day the news section would report that "U.S. 
Officials Discount Any Role by Iraq in Terrorist Attacks," quoting intelligence officials who noted that bin Laden disliked Saddam and the two had nothing in common but a hatred for America; the next day the editorial page would write that "reports are swirling that Saddam Hussein was also behind last week's attacks. . . . Deposing Saddam has to be considered another war aim." In this rank potpourri of erroneous speculation, dubious reasoning, and calculated propaganda, about the only thing in the back pages of the A section that felt true was Seidel's monthly poem. All the opinion columns calling for "total war," targeted assassinations, the bombing of madrassas, and the American occupation of countries as diverse as Afghanistan, Iraq, Sudan, Libya, Iran, and Syria—"The Answer to Terrorism? Colonialism," a headline proposed a month after the attacks—all of it seemed unhinged and delusional next to eight stanzas of Seidel's verse, which, by adopting a voice as twisted and chilling as that of Osama bin Laden, seemed to get much closer to the heart of the matter.

I like the color of the smell. I like the odor of spoiled meat.
I like how gangrene transubstantiates warm firm flesh into rotten sleet.
When the blue blackens and they amputate, I fly.
I am flying a Concorde of modern passengers to gangrene in the sky.

Needless to say, some of the paper's more sensitive readers were not impressed; several wrote letters to the editor calling for Seidel to cease and desist. Post-attacks, I heard a noticeable increase in traffic on the talk line. I called often with the hope I'd get lucky. One night I did. Her greeting was ambiguous, almost shy: Hi, this is Christine. Just looking for something interesting. . . . Her voice had a quality of innocence unlike the moaners, the nasty talkers, the men in their girlfriends' lacy underwear.
I pressed two and recorded my Clark Kent Calling from a Phone Booth routine—professional journalist, late twenties, looking for a smart, sexy woman to share bedroom superheroics. She responded. She laughed and said she liked my voice. She was a photographer and was intrigued by writers, especially writers with superhero powers. She wondered where I was calling from, where I was from originally. I didn't sound like a native New Yorker. She couldn't place the voice, but it wasn't New York. We traded polite messages for a few minutes—I lived in Queens, she lived in Manhattan. I was from the Midwest, she was from the South. I was twenty-nine and single, she was thirty-seven and separated from her husband. We shared a taste in music: blues, jazz, country, gospel. Finally I made the move. I pressed three and recorded my pitch: Hey, listen, you sound really cool and I was hoping you might want to talk directly. I hope so. . . . Your connection will be arranged shortly. Please hold. . . . Please hold for a live connection. . . . We're about to connect you one-on-one with another talk line caller. If you hear a chime, that means another caller has sent you a message. To disconnect from your live chat, just press the star key. Now, prepare to speak to caller number 32. For a moment I was speechless. I had no idea what she was after, but there was a seductive quality to her voice that made me want to figure it out and give it to her, whatever it might be. Mercifully, she untied my tongue with humor. She teasingly called me Clark and wondered why I wasn't out in the city, saving damsels in distress. She speculated that I was recovering from an encounter with Kryptonite, and when I confessed that, like most superheroes, I was taciturn, not at all a smooth talker, she had great fun with the irony—a not-so-smooth talker on a talk line. 
Our laughter led to candor, and soon we were exchanging confessions of embarrassment: two urban professionals, not repulsive in any obvious way, reduced to seeking sexual gratification through a telephone line. Maybe my faux-humility charmed her, but all of a sudden she said, Listen, Superman, do you want to come over? I stammered in reply—Uh, you mean, uh, now? Tonight?—and my hesitation must have made her wonder what I'd failed to disclose. The wife? The felony rap? The prosthetic hook for a hand? Because she began to backtrack, saying she'd never done this before, it was crazy, she didn't know me at all, I could be a stalker, some sadistic weirdo. I suppose I could be, I said. But I'm being honest when I tell you I'm not. I'm a shy boy from a little house on the prairie. I make my bed every morning. I pay my bills on time. We went around and around like that. Having extended the invitation, she felt a need to explore every single reason it was a bad idea. But I wasn't going to let it slide. I had a hunch I could convince her. Eventually, I did. We don't have to do anything, she said. If we don't find each other attractive we can just, I don't know. Talk. Or do nothing. Walk away. Okay. Just one thing. What's your real name? Phil. Is yours Christine? No. It's Molly. Molly. I'll see you soon, Molly. When I came up the stairs she was leaning half out of her doorway, hoping to see me before I saw her. We looked at each other and smiled, a wave of mutual relief—thank goodness he/she isn't hideous!—washing over us. I can't believe I'm doing this, she said. She wore a white blouse and blue jeans. Her hair was long and curly, the color of cinnamon. Her lips were darkened with fresh lipstick. She looked younger than thirty-seven. Her jeans clung tightly—but not too tightly—to her hips. She'd obviously spent some time—but not too much time—primping for a visitor. I guess you should come in, she said. We sat at the kitchen table. A stick of incense smoked in an ashtray. 
The place looked dramatically uncluttered for a Manhattan apartment. Then I remembered her husband had just moved out, him and all his things. She set two beers on the table, lit herself a cigarette, offered the pack to me. I wondered how long we would pretend this was a date. She told me her husband had left two months ago. He'd given no precise reason. He felt them drifting apart, he needed some space, a bunch of vague clichés. At first she was devastated. She hadn't seen it coming. Then he left, and that was it. She was alone. She told herself she'd better get used to it. On September 11, he'd come back and spent the night—the world is ending, at least we have each other—but it felt wrong. She indulged him for forty-eight hours because she was fearful too. Then she told him to get out. He said he was ready to try again, but she wasn't. It hurt, goddamn it hurt, but she had to do it. You don't just walk out on a marriage and walk back in when the world makes you scared to be alone. It couldn't be the same, not after what he'd done. He now held all the power—I want to go, I want to stay, I want I want I want. She couldn't let him have that. She couldn't let him have that and still respect herself. She knew that if she let him back in she'd live in constant fear of the next departure, the final departure, and she knew the fear would disfigure her, make her crazy with dread. Seventeen years! she said, shaking her head. Gone. Just like that. After we stubbed our cigarettes, I reached across the table and brushed my thumb across the tan line where her wedding ring used to be. Our fingers interlocked, and I slid my chair closer to hers across the kitchen linoleum. We kissed very softly on the lips. I haven't been with another man since I was a teenager, she said. I feel like a teenager. Me too, I lied. With our clothes off, we chose to make what we were doing count. There was no need to be bashful. 
She told me what she wanted, mouth here, hands there, and I did as she said. The erotic geometries aligned very nicely. Seen from above and behind, her body had the elegance of a double helix—arms thrust forward and crossed, back in the shape of an hourglass, her spine a dotted line. She wanted it rough and loud, as if to shatter all memory of her husband, so we wrestled with the ferocity of quarreling lovers overcoming the quarrel, then we rested and did it again, more tenderly this time. We smoked a cigarette in bed, and the talk turned to our families, as I'd felt sure it would—her mother dead of cancer far too young, my brother dead of a bullet in the brain even younger. I didn't belabor the point, and neither did she. We mentioned these facts only briefly, in passing, as if the specifics weren't required because we both already knew them, had known them all along. She rose and straddled me. She seemed to know what I wanted without my even saying it. She wanted to taste me, she said, she wanted to taste herself on me, and I offered her everything. Don't go, she said. Sleep here. Just a few hours. I have to be up for work at six. We can get coffee from the deli on the corner. Once in the night she rolled over, and amid the gauzy confusion of half sleep I remembered I was lying next to a stranger, an attractive stranger, and I smelled her hair and the smell of sex. I moved on top of her, and she woke and moaned and arched her back. At six her alarm went off. While she showered, I dressed and went for coffee. We shared a cigarette at the kitchen table, exhausted and guarded, unsure of how to say goodbye. We kissed in her doorway, and she watched me leave, leaning out into the hallway just as she had when I'd arrived. Outside, the predawn streets were nearly empty. The light was cold and lunar, the sky the color of a daguerreotype. I bought a newspaper for the subway ride home, but when the train came I couldn't read. 
I stared out the window at the darkness of the passing tunnel. Each time I called the talk line, I hoped to hear Molly's voice; every time I was disappointed. I had her cell phone number, she had my home number, but neither of us made the move. Something about the way we'd met made a friendly call—unmediated by four menu prompts and the perky-bimbo voice—too intimate, too presumptuous. One night I connected with a woman named Ashley seeking a horny young stud from my particular neighborhood. She was bossy, and her bossiness turned me on. She ordered me to take off my clothes, and I did. She ordered me to put on a pair of running shorts, and I did. She told me she wanted me to go to a certain street corner in Queens, pretend I'd been out for a run when I realized I'd locked myself out of my apartment, and ask her boyfriend, who would be waiting there smoking a cigarette, to use his telephone, and, once inside the boyfriend's apartment, first make a pretend call to a friend with a set of spare keys and then, profoundly grateful for the use of the telephone, submit to the boyfriend's wish to get down on his knees and go to work with his mouth. Then she repeated her instructions, beginning to end. I confessed I was only interested in meeting her. She said that would come later: first, the boyfriend. She wanted to hear all about my cock from her boyfriend. When I told her no, she became petulant, and I noticed a slight burr in her voice. This ain't no boyfriend-girlfriend thing, I said. Your voice is too breathy, too nasal, like you're pretending to be someone you're not. You sound like— He didn't let me say it. He pressed star and was gone. One weekend afternoon a "cute little uptown Dominican, 36-C, all natural, no implants" sent a message. She said she needed to get off before she rehearsed that night with her rock band. We connected, exchanged vitals. I gave her directions to my place. It was all very straightforward, simple as summoning a plumber. 
When she arrived, she said, Some guys think I look underage. I can show you ID if you want. I didn't think she looked underage, nor did I want to card her as if she were buying cigarettes. Her hair was shoulder-length and shiny, like delicate strands of obsidian, and her skin smelled of lavender. She seemed incapable of looking me in the eye but made up for it by being very frank. I offered to make us tea, or a whiskey Coke, whatever she wanted, whatever would help her relax. She shrugged and looked at the floor. After a moment, she said, Why don't we just, like, fuck? When we were through I offered her a cigarette. I lit one for myself and reclined on the bed next to her. She said, Most guys I've met only last, like, five minutes at the most. They don't like to look at my face. They bend me over and do it and then they want me to leave. I wanted her to leave but I knew it would be callous to say so. I only lasted five minutes, I said. Yeah, but at least you know something about foreplay. How many guys have you met off the line? Oh, I don't know. Maybe, like, a dozen. Maybe more. Why? Why do I do it? Yeah. I don't know. I suppose I shouldn't. She was quiet for a moment. It's just that nothing's, like, permanent. So why not admit it and stop trying to find someone to be with forever? I guess my dad's death made me realize everything can be gone tomorrow. Might as well enjoy today. Her voice became inflectionless, deadly matter-of-fact. He was hit by a drunk driver when I was six. Right down the street from our house, right in the middle of the day. When I heard the sirens I came out of the house. There was blood, like, everywhere. I saw a body in the street. The head was barely attached. The cops told me to go home. In a little while they came to the door and told us it was my dad. I didn't even recognize him when I saw him in the street. 
I told her about my brother killing himself with a semiautomatic assault rifle, about how I'd gone to the police and had them make copies of the crime scene photos for me, the gun and the body and the blood on the walls. This seemed to make her feel better. Before she left, she said, You know, I never do this twice with the same guy. That's okay, I said. But take care and good luck and stuff. You too, I said. For days afterward, the words of the cute Dominican girl resounded in my mind, a prod to my imagination. Body, blood, head barely attached . . . I could only escape them by stepping into the streets and walking for hours through the vastness of Queens, past the all-night bodegas and the empty factories, the ill-lit rail yards and the derelict waterfronts. I dressed in a suit and tie, a flâneur of the city's dark edges—inviting curious glances and the possibility of violence—and I often ended the evening by climbing the fire escape of an old factory in Long Island City, watching from the roof as the elevated trains crawled below me like silver caterpillars. I smoked and tossed my cigarettes onto Northern Boulevard, seven stories below, where they exploded in a flower of sparks. I thought of the tower jumpers, twirling in the air like my cigarette, their quick and poignant plunge to the pavement, their escape from life an escape from pain. Now when I wrote for the paper the stakes felt higher, and with Seidel as my example—the writer willing to say the unsayable in a climate of fear and self-censorship, flinging daggers that sang toward the unsuspecting reader—I chose my subjects with greater care. Still, it was easier as a journalist. I could simply quote the words of others, neither condemning nor condoning.
In a profile of jazz trumpeter Dave Douglas, for instance, I quoted him calling the war in Afghanistan "more of a trade show and a laboratory for new weapons than a real pursuit of those who perpetrated that horrible event" already known by the glib shorthand 9/11. Indeed. It would take a decade for the mastermind to be snuffed, and the country where the plan was hatched would remain a failed state and a cesspool of extremism despite the best efforts of our misused soldiers, but it wasn't as if the paper's subscribers were looking to the arts pages for an understanding of the "war on terror." The assignment I enjoyed most occurred when Ray asked me to write a profile of a radical performance artist named William Pope.L. He'd once walked the streets of New York with a twelve-foot white phallus strapped to his midsection, a comment on white fears of black sexuality that sent the National Endowment for the Arts—which had once bestowed on Pope.L a grant of taxpayer money—into a tizzy. His most famous work, however, involved eating a copy of the Wall Street Journal with the aid of ketchup and milk, then regurgitating the meal, all while sitting on a gleaming porcelain toilet perched atop a ten-foot scaffold. He told me he'd once seen an ad campaign for the paper that made it out to be the modern equivalent of a primitive cultural object imbued with mystical powers. I quoted his explanation at length: "The ads suggested that if you bought a subscription, good things would happen to you. They proposed that the paper could have a magical effect. You didn't have to read it. Just having it near you, having it land on your doorstep, would multiply your wealth." It was Ray's brilliant idea to hold the piece until the day the paper, after more than a century in existence, first enlivened its pages with color ink. 
Thus I could quip that we'd spiced up the product mainly for the sake of its digestibility, an ironic bit of institutional self-mockery that, far from buttressing Pope.L's critique, laughed and winked at it. Nothing I wrote elicited more comments from my colleagues, and everyone thought it was a gas. With each compliment I grew more uneasy, until I began to understand that my habit of privately laughing and winking at myself, at least in relation to my work—the self-proclaimed democratic socialist, working as a low-level functionary at a rag whose very name was practically synonymous with the triumph of finance capitalism—had now spilled over into my writing about others. I had come, at last, to inhabit the voice of a trapped man who perversely enjoyed his cage. I was running out of tricks on the talk line. My clever come-ons had begun to bore even me, and I heard the same voices time and again, reciting their own rote greetings: Brown Sugar, Mistress Tina, dozens of others who never gave their names but whose intonations were as familiar as old friends. What had once surprised me—the novelty of amateur pleasure seekers finding a voice for their fantasies, seeking a sympathetic listener to enliven and validate them—now struck me as not much more than a lurid form of group therapy in which true self-awareness remained forever elusive. Like all good things, the sounds of a working vibrator or the click of a pair of handcuffs became first dull and eventually repulsive with ceaseless repetition. One night Molly called me at home. She said she'd been out with a friend, having a drink, and she'd told the friend our story—our improbable, slightly kinky, strangely sweet story. I think of you a lot, out there in the city, she said. I wonder sometimes what you're doing. I think about you too, I said. An hour later I was at her door. We sat at the table, just like the first time. We drank a beer and smoked a cigarette, just like the first time. 
We talked about little things, work and such. Then she told me how, on a recent trip home, she'd found some cassette tapes her mother had stowed in the attic. Her mother had made them not long before her death, conversations with a psychic in which the psychic had made a cryptic reference to a lover. Molly's mother confirmed that there was "someone special" in her life. After listening to the tapes, Molly asked her mother's best friend about this mention of a lover. The friend said that Molly's mother had indeed had a lover, outside of her marriage, for forty years. Was he my real dad? Molly asked. She wondered because she didn't look anything like her father. Many times her mother had hinted that Molly was special, somehow different from her siblings. Yes, the friend said, the lover was your father. He was also, she said, my son. Molly was stunned. All these years the woman she'd known as her mother's best friend had been her grandmother too. Life just gets more complicated, she said. One day you're walking along, deep in your routine, you've got a husband you love and a father you have no reason to think is not your father, and the next day you've got neither, but also somehow more. Everything you ever assumed is turned upside down. We went to the bedroom and undressed. It felt very uncomplicated. This gesture has this effect. This movement elicits this response. This part fits here. Silently we joined, as if picking the lock on a door to forgetfulness. For a long time I didn't mention my phone-sex forays to anyone. Then an old friend from college, Rebecca, invited me to Arizona for a week of work and solitude at a little cabin she'd rented for the winter. We wrote in the mornings—she was at work on a novel—read and walked in the afternoons, made dinner together in the evenings. We talked about politics, books, and personal matters; there were few people in the world I was happier to listen to, and none I knew who listened more intently.
One night over dinner and a bottle of wine I told her about my calls to the talk line, my meetings with Molly. If anyone would get it, then Rebecca surely would. It wasn't in her nature to be prudish or judgmental. She was the coolest cookie I'd ever known, as shrewd and nonchalant as a cat. Little did I know just how well she'd understand, for when I finished she surprised me with a story of her own. She was on her way out of the house, and the phone rang. It was a man. He spoke haltingly at first. She could barely make him out. Eventually she understood he was a graduate student at the University of Michigan. For a seminar in human sexuality, he was conducting random telephone surveys. He wondered if she could spare a moment to answer a few questions. She told him she was already running late. She had to give a lecture in half an hour. She was sorry but she didn't have time. Can I call back later? he asked. Yes, I suppose, she said, although there was an edge of desperation in his voice that almost made her say no. What is your lecture about? he asked. She told him. She was an expert in her area of interest, and he was impressed. Tell me one other thing before I go, he said. What's that? she said. Tell me what you're wearing to the lecture. She was surprised by the question, but she looked at herself in the mirror. I'm wearing a peach-colored dress, she said. A woman in a peach dress on a spring day. That's nice. The way he said those words—a woman in a peach dress on a spring day—both excited and frightened her. I have to go, she said. Okay. I'll call back later, he said. She gave her lecture. Afterward, people asked her questions for an hour, and she came home exhausted and exhilarated. She made herself some dinner. The telephone rang. It was the man again. He said his name was Joseph. He had just a few questions. It wouldn't take long. But first, how was your lecture? he asked. 
She told him about it, told him it had gone well, and he said, with what seemed to her sincerity, that he was happy to hear it. She felt a sudden flush in her face, a little surge of self-satisfaction. It had been a long time since she'd heard warm affirmation in a male voice. He said he was going to ask her a series of questions about her sex life. When she heard the phrase, she laughed. Why do you laugh? he asked. Because I don't have a sex life, she said. My husband left me. I found out he was sleeping with his secretary. Maybe I should give you his number. She laughed bitterly. I'm sorry, he said. That's awful. It's okay, she said. I'm glad he's gone. In some ways it's the most terrible thing that's ever happened to me, but in others it's been a blessing. I'm learning things about myself I wouldn't have otherwise. I can eat cereal for dinner if I don't feel like cooking. I've started mountaineering and I've lost forty pounds. I feel better inside my own skin than I ever have. He asked her several generic questions—her age now, at what age she'd had her first sexual experience, how many partners she'd been with, whether she'd ever had an abortion or a sexually transmitted disease, how many times per week, on average, she'd had sex with her husband. She answered them all. His questions became more intimate. He asked her whether she liked to perform oral sex, whether she liked to receive oral sex, whether she'd ever had anal sex, whether she'd ever had lesbian fantasies, lesbian experiences. She began to feel uncomfortable but she answered his questions. He sounded a little embarrassed to be asking them. When he was done, he thanked her, and they said goodbye. A few days later he called back. I tried to reach you a couple of times, he said. But I only got your answering machine. Rebecca. I like that name. And I missed your voice. I wanted to hear it again. She was flattered—and apprehensive. She'd made a point of not telling him her name when he called before. 
She hadn't thought he'd call again. But she talked to him. She liked the sound of his voice too, although she didn't tell him that. He began to call every few days. She looked forward to hearing from him. It was a pleasant distraction from her work, from her loneliness in the house she'd once shared with her husband. He was a good listener, and she did most of the talking—about work, about the new life she'd suddenly found herself living. One day he asked again what she was wearing. She told him. What are you wearing underneath? he asked. She was surprised by the question—surprised and a little turned on. Will you take it off? he asked. All of it? Yes, she said. If you want me to. He continued to call. Each time, they had phone sex—he would talk, she would touch herself. He said that he loved to give her pleasure; it made him feel good to make her feel good. She enjoyed it too. She'd ceased to have any interest in sex when her husband left, so disgusted was she by his philandering. Joseph was helping her discover a portion of herself she thought had vanished for good. His voice soothed her. She got lost inside of it as she touched herself. She sometimes worried, though, that he was in cahoots with people who wanted to tarnish her reputation. She worried that he was taping their calls. She often worked with people who fought corruption in large corporations, and her work had cost those corporations serious money. Corporate lawyers and operatives had searched high and low for ways to discredit her, to no avail. She wondered if they might be getting desperate. It was too late now. She could only hope that he was telling the truth. One day he called and said, I have to confess, I lied to you. Her throat tightened, her hands began to shake. I wasn't doing a survey, he said, although I did call your number at random. I'd just received some terrible news, and I was lonely. I didn't have anyone to talk to. 
I just started dialing numbers, and you were the first person who answered. So I made up a reason to talk to you. What's wrong? she asked. What happened? I don't really want to tell you, he said. It's my problem. I don't want to drag that into what we have. So you're not a student, she said. No. I'm a landscape architect. I don't live in Michigan. I live in Chicago. But I did go to college in Michigan about fifteen years ago. He told her about his work, how much he loved it—being outdoors and doing creative things and making the world a tiny bit more beautiful. Moved by his passion, she forgave his lie, although she wondered why he couldn't tell her what had happened to him. The next time he called, she said, Guess what? I'm coming to Chicago to give a talk. I'd like to meet you. But I understand if you'd rather not. No, he said. I'd like that too. More than I can tell you. They arranged to meet in the lobby of a hotel. A part of her knew she shouldn't do it. He'd lied to her at least once. He could be a sociopath, a serial seducer of vulnerable women. She told no one where she was going. How could she? As she drove to the hotel, she became very afraid. She might disappear, might end up at the bottom of a river somewhere, dumped in a landfill, any number of horrible places, and no one would know how or why. She hadn't left word with a soul. But she went ahead. She was too curious to turn back. He was waiting in the lobby. He was tall, sandy-haired, lean and tan from working outdoors. He wore elegant, thin-framed glasses. She thought him quite handsome. His face exuded a thoughtful calm. When he spotted her looking at him, he smiled. They went up to his room. As soon as they entered the room, they undressed. They made love, rested, talked, made love, rested, talked—all day long and into the night. Once, when she came back from the bathroom, he was doubled over on the bed, holding his stomach. Are you okay? she asked. Yes, I'm fine, he said. 
Just a little indigestion. They slept. In the morning, they showered, dressed, and checked out. He walked her to her car. Thank you, he said. You have no idea how great a gift you've given me. When my husband left, I was sure I'd never make love to anyone again, she said. What we've had—what we have—is so precious. They hugged, and she drove away, feeling both invigorated and a little apart from herself, as if she'd been watching herself in a movie. A few days later, he called. Listen, I have to tell you the truth now, he said. Here it comes, she thought. My whole life is about to unravel. In the hotel room, when you asked me if I was okay—well, I'm not okay. That first day I called you, I'd just been diagnosed with pancreatic cancer. I was told I probably had less than a year to live. She tried to speak, but nothing came out of her mouth. She did the math in her head—it had been five months since that first call. He told her he was married. When he got the news about the cancer, he drove alone in his car for hours. He couldn't bring himself to go home and tell his wife. He'd hidden his doctor visits from her, hoping it was nothing serious, not wanting to alarm her. Suddenly their days together were numbered. Her life was about to be torn apart, and she didn't even know it. His was about to be over. He thought about ways of killing himself, speeding up the inevitable: driving his car into a tree, jumping off a bridge. That's when he started dialing random numbers, unable to bear his solitude, and when he heard her voice he composed himself and made up a story, because she sounded kind, and he didn't want to scare her, he wanted to talk to her—he wanted, somehow, to connect. Oh, Joseph, you should have told me. No. Then it wouldn't have been the same. None of this would have happened. The whole thing would have been colored by sadness. I'm a married man. You would have pitied me, or been disgusted by me. 
You've allowed me to have something apart from all that, to create something new, one final thing. I couldn't have lasted this long without you. But I want you to know I don't have much time now. I might not be calling as often. Just call when you can, she said. Please. Call my cell phone if you don't reach me at home. Please. She gave him her cell phone number. When he called, they no longer had phone sex. They talked about his illness, about his preparations for death, and when he got tired of talking about that, they talked about her work. One day he called, but she didn't recognize his voice at first. He was very weak. It was just before Thanksgiving. He strained to tell her that he didn't have the energy to speak very much, but could she please tell him what she was doing for Thanksgiving, whether she would see her family, anything at all about her plans. His life, he said, was over now. He was simply waiting to die. He envied people with plans other than dying. She broke down and cried, trembled uncontrollably with helplessness. Then, when the sobs had run their course, she got ahold of herself, for his sake. She blew her nose and wiped her eyes. She told him she would visit her parents in the little town where they lived. She would help them cook dinner for the whole family, all the children and grandchildren, the first great-grandchild of her still-living grandmother, and she told him everything they would make, all their favorite Thanksgiving food, turkey, gravy, mashed potatoes, creamed green beans, rolls and marmalade, pecan pie. I wish I could eat, he said. I miss having an appetite. Those were the last words she remembered him saying. He had whispered them. She'd barely been able to hear him. They never spoke again. A few weeks later, her phone rang. It was a woman. My name is Elan, she said. I don't know you, but I'm Joseph's sister. He asked me to call you and thank you for your friendship. He passed away quietly last week, in the company of family. 
He wanted you to know. When Rebecca finished her story, I was shattered. It was so improbable, so heartbreaking—a strange act of connection amid the horror of impending death. We talked for a while about the mysteries of chance, about the link between mortality and desire, and then I went to my room in the cabin. Without thinking, I picked up the phone. When I reached for the keypad, poised to dial with my index finger, I realized I had no idea whom I'd meant to call. I suppose it was a reflex, triggered by empathy for Joseph and Rebecca both. I couldn't articulate it at the time, but I must have begun to understand that phone sex had kept alive a part of myself I feared losing, just as it had for Rebecca, and that the solace of strangers on a telephone was the only solace that seemed attainable, just as it had for Joseph. Their story, adjusting for the particulars, had a spooky resonance with my own. For a second I played one of those mental games that seem so absurd in retrospect. Do I replace the receiver in the cradle and admit I experienced a flickering instance of dementia, the body acting without direction or consent from the mind? Or do I recover myself, think of a number to dial, and trick myself into believing I'd had a purpose in picking up the phone all along? I opted for a harmless bit of self-deception and called my home number, thinking I ought to check for messages left in my absence. There was only one. It was, of all people, Molly. When I returned to New York, I called her. This time, I said, I want to take you out on an honest-to-god date. Let's meet for a drink. I told her the address of a bar I liked. She was waiting at a table, sipping a beer, when I arrived. She told me her work was going well. She was part of a group show at a gallery, and people were impressed by her photos. She also had a new job assisting an artist, a well-known painter, and she enjoyed it.
She answered his mail, ran his errands, traveled with him to Mexico City when a show of his work opened there. He had friends in the movie and the music industries, and she was meeting a lot of talented people. She'd even started dating someone—tentatively, but it was going well. No rush, nothing serious, just enjoying each other's company. She and her husband were about to finalize their divorce. We had several drinks, moved into the same side of the booth, held hands. It felt very sweet, almost innocent, a strange turn given how we'd met, what we'd made of the meeting. At the sound of the bartender counting the change in his till, we looked around and discovered we were the last two people in the place. We hailed a taxi and took it to her apartment. I walked her to the door. I have to be to work pretty early, she said. I should get a good night's rest. Me too, I said. I'm glad we did this. I gave her a hug. We kissed on the lips, briefly, lightly, and then she turned and was gone. I decided to walk home, across the width of Manhattan and over the Fifty-ninth Street Bridge, past the riverside projects and up through Long Island City to its border with Astoria. The night was beautiful, clear and crisp. It was a Sunday. The city was quiet. Partway across the bridge, I looked back at the skyline, that proud, almost disdainful skyline, still proud despite its diminishment. Standing there, suspended above the shining black surface of the East River, I had a hunch I'd never see her again, that she'd disappeared forever among that glittering immensity of glass and steel. We'd done everything backward, and now, still moving backward, it was as if we were entering the time before our first date, before I'd ever known her. Not long afterward, I gave up calling the talk line. At the time I suppose I'd have said I quit because I grew bored with the predictability of it, and because from a cost-benefit point of view it was nuts. 
But it may not have been entirely coincidental that around this time I finally realized there could be no substitute for the missed connection about which I fantasized most. All through my foray into phone sex I kept opening that envelope of photographs taken by the police when they'd discovered my brother's body, and each and every time I imagined the call I should have placed to him on the day he died, the consoling words I might have offered, the gesture of brotherly love he might have accepted as a reason to go on living another day. Equally likely, I belatedly realized, was the possibility that nothing could have dissuaded him, that my call would have been in vain, and whatever I said or did not say would have become the source of my regret, instead of my failure to call at all. Among the remnants of his life I'd tracked down, in addition to the official reports, was his final phone bill from US West Communications, a piece of paper I often studied for hints about his state of mind at the end. The time span of the phone bill made its meager data particularly stark, even if they only served to multiply the uncertainties. The bill claimed to cover activity through July 1, 1996, though it was dated June 12. It showed he was due a refund of $11.96. He paid for some services a month in advance, and the previous bill ran through June 1. This final bill, then, contained information, spookily enough, for just one day—the last day of his life, June 2, 1996. It listed three long-distance calls, all to Minnesota, one to our parents in Tracy, the others to our sister in Granite Falls. He placed the first at 11:04 a.m. The call lasted four minutes. Our mother and father had returned home from church and were in the midst of making lunch. It was 12:04 p.m. their time. They said they'd call him back when they finished eating and they did as promised. June 2, 11:04 a.m., Mountain Standard Time: the first of his three long-distance calls. 
Had he called to say goodbye without coming straight out with the word goodbye? My mother recalled telling him that perhaps if he gave Wendy some time to think things over, they might still work it out. Oh, I'm going to give her a lot of time to think, she remembered him saying. Was there a clue in those words? A note of warning? My sister received the other two long-distance calls he made that day, though she had no reason to suspect he called twice, since she wasn't home for either call and he only left one message. He placed the first call shortly after he finished speaking with our parents, at 12:16 p.m. his time, the second at 1:52 p.m., each call lasting less than a minute. When Lisa arrived home to find his message she was surprised. In the two years since she'd left home and moved in with her boyfriend, Dan had not called her once. His voice mail was so out of character that it gave her cause to worry. For him to have called, she thought, something must be up. Something must be wrong. She immediately picked up the phone and dialed his number. He did not answer. She left him a message. He did not return the call. What changed for him between the time he left a message and the time she left one back? Why did he try not once but twice to call her—a thing, it bears repeating, he'd never done before that day—and then not a third time when he knew the call would be answered? In her message she'd made sure to say she'd be home the rest of the day. If he'd tried a third time he'd have reached her. Did he go from desperately seeking a lifeline to abandoning all hope for a lifeline in the span of a few hours? Was it the afternoon of drinking with friends that changed something? Was it the evening conversation with Wendy that crystallized his fate? The phone bill listed one further piece of pertinent data. It showed that at 9:15 p.m. he dialed *69 to activate last-call return, for which he was charged seventy-five cents. What could it mean? 
Someone had called him while he was otherwise occupied, and he hadn't answered in time? Someone had called him and failed to leave a message, his answering machine recording nothing but the sound of the hang-up, and he wanted to know who it was? Or he'd been away from his apartment for a time, returned with a hope that someone had called, perhaps Wendy, and finding no message on his machine he checked for a missed call anyway? Ultimately, all this piece of paper told me, despite my hopes for more, was that he was still alive at 9:15 p.m. on the night he did himself in. That left a six-hour window between my mother telling me to call him and his last known effort to reach for something outside of himself, six hours the better part of which I'd spent in bed with Marie. In my most self-lacerating moods I imagined our bodies joined in pleasure as, two thousand miles away, he leaned into the barrel of the gun. Once the arts and editorial pages were settled again in Manhattan, in temporary quarters above the West Side Garment District, the men in the suites looked again for places to squeeze. Eventually a quarter of the company's workforce would be cut, and Ray Sokolov was among those to leave. At the age of sixty, after twenty years of service to the company, he was enticed to take early retirement and replaced by a genial reactionary who'd cut his teeth at the Moonie-owned Washington Times. I was unsurprised when, shortly thereafter, word came down that the Leisure & Arts page would more fully integrate its coverage with the editorial page, whose mantra, "free markets and free people," unwittingly tipped its hand by which of the two it placed first. Soon it came to pass that I was given a chance to work on pieces of greater world-political import. I was sitting with my feet on my desk, editing a story about a play in Chicago or the lovely wines of the Alsace, when Paul Gigot asked me to follow him into an empty conference room. He invited me to sit. 
He cut straight to the chase. He said that for the foreseeable future I would continue copyediting for the Leisure & Arts page, but beginning in a few weeks I would do the same for the editorial page of the Wall Street Journal's European edition. I told him I didn't want to do that. He seemed surprised. I told him I didn't agree with the politics of the page—with its viewpoint on just about everything. He said he wasn't asking me to write things I didn't believe. He was asking me to edit copy for spelling, punctuation, and grammar. I told him I didn't want my hands on the editorial page in any way, shape, or form. He said he would give me a small raise in compensation for my added responsibilities, and I would do whatever he told me to do. I thanked him for the raise. A college friend of mine, M.J., wrote that spring and told me she'd secured a gig as a fire lookout in southern New Mexico. She suggested I lift my flabby keister out of my cubicle and come have a look at the country, breathe some clean air, unplug for a moment from the rat race. In late May I booked another flight to Albuquerque, where I planned to rummage around for a couple days in my brother's past before heading south for a rendezvous with M.J., a trip on which I would blow my yearly paid vacation and, with luck, find the answer to the question of what I should be next. Almost in spite of myself, in spite of my calculated evasion of the title in my so-called career, I was still for a little while longer a reporter, though one without pretense to worldly concerns, much less objectivity or evenhandedness. I made a date to visit the family who'd known Dan best at the end, thinking they of all people might know something I didn't: his boss George, George's wife Barbara, and their daughter Emily, all of whom I'd met on my first visit to New Mexico, when Emily and Dan were still engaged. I hadn't talked to them since the last time I saw Dan alive. I'd met them only that once. 
For reasons of distance and logistics, I didn't make it to Dan's memorial service in Albuquerque, and they didn't attend the funeral in Minnesota. It was hard to avoid the feeling that we were strangers communing over the memory of a ghost—a ghost who, seven and a half years earlier, had sat with us one evening after dinner and taken our phony money in a game of Monopoly, laughing as he filled the board with hotels, confident and happy and secure in the love of what he thought were his future in-laws. Emily and Barbara were at the kitchen table looking through boxes of pictures when I arrived. They wanted to give me some snapshots of Dan. Barbara offered me a beer. Emily talked of her family; less than a year after she'd ended things with Dan, and not long before his death, she'd married someone else and now had two little children. She'd also found God, and this had led her to the belief that everything had worked out according to plan, that the Lord Jesus had known Dan was in trouble, and had sent her a signal to bail before she got caught up in trouble she couldn't escape. Our conversation danced around the edge of the reason I'd come there. No one knew how to talk about him for more than a minute or two. I wasn't surprised; I could count on one hand the number of times I'd found someone willing to talk about him for longer than a minute or two in the six years since his death. George said, Come here, I want to show you something. He led me to the dining room. Against one wall stood a china cabinet Dan had made for him, an elegant piece fashioned from oak, with natural stain and a lustrous wax finish. I'd seen similar pieces in other homes, always beautiful and perfectly functional, including in the home of my parents, but no matter how many times I looked at his work it always impressed me with its craftsmanship. The kid was good, George said. He took care. Not just with woodwork, but everything he did. 
Emily took us to the guest bedroom and pointed to another of his pieces, a chest of drawers. Dan made that for me when we first started dating, she said. I think he even signed it on the back of the bottom drawer. Sure enough, when we pulled it out we found the words, To Emily, From Dan, With Love. We all stood in awkward silence. I turned away from the sight of a tear tracing the curve of Emily's cheek. George and I went together to the back yard, where more of Dan's handiwork awaited: the swimming pool he'd helped George build, the shed they'd put up in which to store Dan's hot air balloon. George spoke of summer afternoons they'd spent together grilling food, swimming in the pool, drinking beer, telling stories. Even after Dan and Emily broke up, George said, we remained friends. It wasn't easy for Emily. It was pretty awkward, if you want to know the truth. She thought of it as taking sides, but I couldn't cut him out of my life. George admitted—quietly, seemingly out of the side of his mouth—that he'd sought counseling after Dan's death. He alone bore the horror of having discovered Dan's body. I wouldn't have known this if I hadn't chased down the police report. But it was right there on the last page: Ofc. Faerber advised that the decedent was found by a co-worker named George Goodwin. Mr. Goodwin had gone to the apartment to check on the welfare of the decedent because he had failed to show for work. Mr. Goodwin was able to gain access via the pass key provided by the building manager. Ofc. Faerber further advised that Mr. Goodwin had become startled upon discovery of the decedent. Mr. Goodwin then accidentally broke the stair rail from the wall. Mr. Goodwin did not proceed any further than the top of the stairs of the apartment interior. . . . George led me back inside, where he retired to his study for the evening. The memories, it seemed, were too painful to explore any further. 
One of the things I wanted to know was what had happened the morning his body was discovered. Six years had passed since that day, but I knew less than I thought I should about the hours surrounding the gunshot. I still believed that if I could tease a comprehensive narrative from those hours, I might be freed at last from the grip of a morbid devotion to the mystery of his death. Now that George was gone, I delicately broached the subject with Barbara. Between what she told me and what I'd read in the police report, the basic story of that morning came into as sharp a focus as it ever would. It wasn't like Dan to be late for work. George tried to call around ten o'clock. He hoped Dan had had too much to drink, was sleeping off a hangover. The phone rang. No answer until the machine picked up. George left a message. He tried to make a joke of it but he couldn't hide the worry in his voice. He left the work site where his crew was installing fiber-optic cable, the crew on which Dan was foreman. On his way home he drove past Dan's apartment. Dan's truck was parked in front of his door. George circled through the parking lot and left without stopping. When George got home he told his wife that Dan was AWOL. Why don't you go check on him? Barbara asked. I went by, he said. His truck's still there. He paused. I don't know. I've got a bad feeling. What do you mean, a bad feeling? He's never been late. Something must be wrong. Did you knock on his door? No, George said. He's probably hungover and sleeping in, Barbara said. Yeah, George said. I suppose you're right. He dialed Dan's number again. This time he didn't leave a message. Just drive over and wake him up, Barbara said. George loitered around the house. Twenty minutes passed. Do you want me to go with you? Barbara said. No, that's all right, George said. I'll do it. He got in his truck. He drove to the apartment and parked in the lot. He waited for several minutes, unsure of what to do.
If he hadn't known Dan so well he might have been less concerned, but the kid was more to him than a hired hand. In the year since his daughter had broken off her engagement to Dan, George had walked a tightrope, honoring Emily's decision but maintaining his closeness with the young man who was to have been his son-in-law. He worked with him every day. He still respected him. It had been difficult, no doubt about it. His jumbled-up feelings of loyalty to both of them. His hopes for both of them. Now he sat paralyzed with dread. He thought about driving off, just leaving, waiting at the work site until Dan showed. Finally, though, he worked up the courage to get out of his truck and go to the door. He knocked. No answer. He knocked again. No sound of anyone stirring. He tried the door. It was locked. He went to the office and found the manager, a guy by the name of Jones. He explained the situation. Jones got his keys. Another employee named Roschevitz joined them. They walked together to No. E43. Jones gave George the key. George slipped the key in the lock. The door gave way. Inside, the shades were drawn, the apartment mostly dark, just one lamp on. He saw a figure sitting on the couch. He called Dan's name, but there was no answer. He started into the apartment. He made it a few feet. Then he saw the wound in the side of Dan's head. He became very afraid. He tried to turn and leave, but in turning he pulled too hard on the handrail along the entryway steps. It gave way, tore from the wall. George stumbled on the stairs, righted himself, got himself out of the apartment. Jones looked at him and said, What is it? The cops, George said, call the cops. At the kitchen table, Emily and Barbara told more stories. Emily said that toward the end of her relationship with Dan, he'd been drinking a lot. It was like there were two sides to him, she said. He was different when he drank. He got angry. One night he threw a glass against the wall and it shattered everywhere. 
That's when I started having second thoughts about marriage. I wondered if I really knew him. I couldn't figure out the source of his anger. Barbara said that on the day before Dan killed himself he'd brought his gun to their house and sat at the kitchen table, exactly where we were now, and spent an hour or more cleaning it. This was the day after Wendy—the new woman, the one he'd started seeing after Emily called off the wedding—had broken up with him. Barbara's mother, who'd been visiting that same day, later said she had an inkling Dan was suicidal, something in his voice and in his eyes, a hint of despair, the tenderness he'd shown the gun, as if preparing it for a moment of truth. She later wished she'd done something for him, something that would have saved him. In this she was not alone. Emily asked, Did your parents hate me when I called off the wedding? Were they angry with me? Did they blame me for what happened? I assured her they did not. Barbara left the table, and Emily and I sat there alone. She talked about traveling to Minnesota to meet my family for the first time, not long after she and Dan confirmed their engagement. I felt like a queen, she said. Everywhere they went people were thrilled to see Dan again, and all were curious about his bride-to-be. The enthusiasm with which they were greeted almost made her want to move to Minnesota. Some of Dan's antics gave her pause, though, such as the midnight run of sign-stealing he and a friend had made on the lesser-traveled country roads, the sort of thing he and his buddies had done in high school and apparently had yet to grow out of. Emily, still a teenager herself, not even out of high school, didn't exactly find her fiancé's behavior indicative of maturity. She leaned across the table, and a hush came over her voice. I don't know why, she half whispered, but I feel a strong connection to you. Like you're my brother in a weird way. 
I know that makes no sense, since we only saw each other once before, but maybe we went through some of the same things afterward. Yes, I told her, no doubt we did. There's something I want to ask you, she said. Dan had a secret. I'm pretty sure I'm the only person he ever told, but I wonder if he told you too. I wasn't sure what she meant but I couldn't think of any secret. I don't know if I should share it now, she said. I mean, if you agree to keep someone's secret do you still have to keep it after he's gone? I didn't want to encourage the notion that she ought to keep his secret but I suspected she required a nuanced response if she was going to give it up. So I improvised. I told her that my situation was unique: if I didn't destroy all my notebooks, people would learn certain things about me after my death that might surprise them, and I had come to accept this. I couldn't presume to tell her what to do, but I made clear I was curious about anything that could help me better understand my brother, especially since I could no longer ask him directly. Just then Barbara walked back in the room. You know, it's getting late, Emily said. I need to get the kids to bed. I should show Phil how to get to his hotel. He can follow me there. I'll see you guys tomorrow. I hugged Barbara and said good night and wished her well, promised to keep in touch. Emily and I drove to the hotel separately. We stood in the parking lot, in the warm night air of the desert, making more small talk. At last she dropped her bombshell. What she wanted to tell me was that my brother had been raped. Dan had shared this with her not long before they broke up, when he knew he was losing her and was drinking hard in an effort to deny it. He'd been just a child when it happened, seven or eight years old, if she remembered correctly. 
When she told me Dan's description of the person who'd done it—a certain someone with an identifying characteristic "who Dan said you both hated when you were young"—I knew exactly whom he'd meant. I couldn't bear to sit still with this news roaring inside my cranium, so I canceled my hotel reservation and drove south into the desert, windows rolled down, Satch and Duke's The Great Summit on the stereo, as loud as I could stand it. I tried to hold my concentration to the lines on the road even as I felt something cold and hard calve inside of me like a glacier. Only past midnight did I finally fall asleep in the back seat of my rental car, alongside a lonely country road near Truth or Consequences, a place name whose bitter irony shadowed my feverish dreams. I had uncovered at last a hidden truth, though the consequences eluded me.

Into the Wilderness

Of all the friends I had in the world, M.J. was foremost among those whose company I could tolerate under the circumstances. Temperamentally, we could hardly have been more different. She rolled through the world freestyle, exuding irreverence and mirth, always leading with the heart. Though she'd known darkness, she'd chosen not to hunker down and live inside of it, but its traces could be seen if you looked hard enough, etched in subtle lines on her face. Once she left her little hometown in Nebraska she committed herself to a cosmopolitan life of adventure and travel and refused to look back. She'd spent time in Alaska, Ghana, the Sahara, Costa Rica; she was taking a summer of paid R&R stateside, in her fire tower, before she began a master's program in Argentina. She evinced a charming lack of guile that disguised a canny mind and allowed her to fit in anywhere, from the streets of Cairo to the cowboy bars of southern New Mexico. She stood five-foot-two and weighed a hundred pounds fully clothed; she chewed Levi Garrett and took her whiskey neat.
To a guy like me she easily could have appeared a little too carefree: an impish world traveler in pigtails, a hell of a lot smarter than she let on, and more ambitious than she gave reason to suspect—a chameleon of sorts. Instead she'd drawn me, also a chameleon, irresistibly into her orbit, shown me things about openhearted friendship that I'd not known previously. Her first offer to hang out had involved sneaking onto the University of Montana golf course at daybreak and sprinting through a round of speed golf before the clubhouse opened and the groundskeepers nabbed us for failing to pay greens fees. Her dedication to frivolity in all its forms was contagious. Telling her no just wasn't an option. We'd kept in touch for years by letter, and she never failed to entertain with comic stories from her travels. Hers was just the face—freckled, smiling, blue eyes twinkling with mischief—I needed to see, and there she was, standing outside the Hilltop Café in T or C, New Mexico, suntanned and lean as a mountain lion from hikes to and from her lookout all summer. Also, charmingly, still a little buzzed from a night spent at Elephant Butte Lake with some rowdy, off-duty firefighters. We stopped for groceries before we left town, then I followed her by car across the creosote flats toward the rim of the Black Range. We drove through two little foothills villages, relics of the mining boom of the 1870s, into piñon-juniper country, then up into the taller, statelier ponderosa forest with its shaggy-needled, red-barked trees, the road all the while making serpentine curves. At a pass high on the divide we turned onto a dead-end dirt road, where we parked and began a two-hour hoof to her mountain with our packs. It was strange country, foreign to my experience, the driest time of a dry season in the driest forest I'd ever known. The grasses were sere and brittle, wildflowers few. The needles of the pines crunched underfoot. 
In the beginning of the walk, at a little over eight thousand feet above sea level on a south-facing slope, we passed a few alligator junipers, as well as scattered oaks of various types and the occasional yucca in bloom. Higher up the ponderosas predominated, their faint scent of vanilla sweetening the air, and then we'd round a ridge and enter the mixed conifer of the cooler north slopes, dense and dark and fragrant with resins, the bark of the trees draped in lichens. For the last mile I labored, short of breath from cigarettes and sea-level living, until we topped out in an open meadow above ten thousand feet, where a tower rose another fifty feet in the air. We dumped our packs against the concrete footers and climbed the sixty-five steps through a staggered series of four landings, each offering a more impressive tease than the last of what awaited on top. The view from the little glass-walled room nearly made me topple from vertigo. The Black Range ran north and south, scored by deep canyons on its east side, the most rugged country I had ever seen. The crest of the divide loomed like a bulwark blocking the view to the northwest, but in every other direction the vistas stretched for sixty, eighty, a hundred miles or more—long, open expanses of desert with scattered ramparts of rock beneath sky-island peaks. I gripped the windowsill and tried to take it all in as M.J. pointed out the distant landmarks, from the dark shoulder of the Manzanos just south of Albuquerque to the Tres Hermanas, three little pyramids marking the gateway to the Mexican border, peeking over the flinty shoulder of Cookes Peak—the Matterhorn of New Mexico, M.J. said, flashing air quotes with her fingers. Not a bad view, huh? she said. I've never seen anything like it. I think I'm already in love. Crazy thing is you can watch all day, and it never looks the same for longer than an hour or two. And they pay you for this. I know. Can you believe it? The next afternoon I walked. 
I felt myself drawn along the trails to the north and west, into the upper headwaters canyons of the Spirit Creek, where pink bluffs rose to chiseled turrets on the ridgetops and vultures circled lazily overhead. I meandered for hours through thickets of oak and massive contiguous stands of pure aspen whose leaves shimmered in the breeze with a sound like muffled applause. I sat and rested beneath ancient firs it would have taken three of me and my wingspan to encircle. Jays chattered and squawked in the canopy. Scat of various types dotted the trail. Muddy wallows showed where bears had recently rolled, and I held in my hands mule deer bones whose edges had been chamfered by the teeth of rodents. I put one such bone in my pocket, not really knowing why. Back on the mountain, in the last of the day's light, we tossed a Frisbee in the meadow below the tower. M.J. cooked dinner in an iron skillet, quesadillas with thick slices of avocado and fresh pico de gallo, heavy on jalapeños and fresh-squeezed lime. At dusk we lit kindling in the bonfire circle, downwind of the cabin, and stoked the fire with limbs gathered from the wooded edges of the meadow. We squatted on the periphery of the fire's warmth and sipped bourbon out of plastic cups. After a couple of drinks, I told her of my time in Albuquerque and what I had learned there. She asked a number of questions, each of which I tried to answer. We spoke quietly. She came near and placed her arm around mine and held my left hand in hers, squeezing gently in the absence of words. For a long time we were silent, our eyes drawn to the mesmerizing leap and dance of the flames, friends joined in touch and tears. What will you do now? she asked. I told her I didn't know. I didn't think I could return to New York and pretend everything was unchanged, but the next move escaped me. 
I knew almost immediately that Emily's revelation had put an end to my desire to learn more about his life, at least any more than was locked away in my own memory. I couldn't bear to think there were other skeletons leering in the closet, waiting to be discovered, if only I managed to find the person with the knowledge of the secret. Perhaps there were no more secrets. One could hope. Some of the speculations of those who had loved him had ultimately struck me as sound, or at least plausible. Depression, sure—my aunt Ruth had suspected as much, and she was about as close to him as anyone in the end. Anguish over the breakup with his girlfriend, okay. Been there myself, not good. But a secret he carried with him most of his life, a violation of the most brutal and sadistic sort? I couldn't wrap my mind around that one. I knew that was a cliché, but that was also it exactly: I couldn't absorb the thought, even as it leaped out as a probable cause. I couldn't fathom what had been done to him, how he'd lived with it, how it had changed him, what it had made him. I was already well aware that I hadn't known him the way a brother should. Now he slipped even further from reach—a failure of imagination on my part, a failure of empathy. I knew this much: most of my prior assumptions had been called into doubt. Everything about him became infinitely more complicated. Cracks appeared in my story of who had failed him, and how, and when. The persistent notion that it was my inability to pick up the phone and call him that led to his death—my hold on that idea, already tenuous, became untenable. In the beginning, it had been as if I couldn't stand the thought that other factors contributed to his suicide, anything other than my failure to call him the day of it. I needed that distinction. I needed to believe I was that important to him. I had clung—far longer than a rational man would have—to the notion that my call would have been answered, and that it would have swayed him. 
In this way, it was never about him. It was always about me. The mind of the suicide survivor tends to be haunted by the thought that the dead passed judgment on the living, and that whatever else a suicide signifies, it can't help but contain the message that none of the living were enough of a sustaining connection to temper the allure of self-annihilation. The news that he was raped as a boy—this brought to the surface a series of hidden truths about his death, truths I had failed, somehow, to grasp. That it was, in the end, about no one but him; that it was nothing personal, at least insofar as his family was concerned. That perhaps there was nothing we could have done differently with the knowledge we possessed at the time. That he'd hidden his pain and shame so brilliantly, so capably—an acting job of unbelievable fortitude—that we never could have known him in all his complexity, no matter how hard we may have tried. No wonder he'd become a cipher in death. He'd been in hiding all his life. Before I left her mountain, M.J. did me a favor I could never repay. She made noises about being bored in the lookout, wanting to get out on a fire, then maybe a camp crew for a hunting outfit—if only she could find a replacement on fire watch—but I suspect she secretly made it her mission to get me out of the city. She set it up as me doing her a favor, when in fact we both knew otherwise. No one resists M.J.'s charms for long, and certainly not her boss back in district headquarters, to whom she took our plan devised by firelight and whiskey. Toby Cash Richards was born to that country, an aspiring logger turned schoolteacher and summer firefighter who'd worked his way up to become the Black Range district FMO (fire management officer) through the sheer ballsiness of letting things burn on landscape scale, in a landscape where fire was essential for a healthy forest. He was as country as country got. 
He knew his way around guns and was a master with a chain saw; he hunted elk with a bow and arrow, and people I came to trust eventually told me they never saw a man on a prescribed fire run drip torches with greater efficiency and zeal. If your truck was stuck in the mud or your horse had thrown you from the saddle, you wanted him alongside you. I once saw him drink a case of beer in the course of an afternoon and evening and wake the next morning at five o'clock to cook breakfast before another day in the woods, while the rest of us slumbered or moaned, at least until the smell of bacon roused us from our mummy bags. He did nothing half-assed, drinking included, and it never seemed to impinge on his capacity for work the next day, or his ability to two-step at closing time in the Pine Knot Bar. After rehearsing her argument with me, M.J. got on the two-way radio and told Toby she needed out. She was going stir-crazy in her tower. She needed time on a crew in the woods, wanted to see the action from another angle, up close and personal on the hot, smoky edges of a fire. She told him I needed a career change and some time to think, that I was competent with maps and binoculars, and that she'd personally train me in all the idiosyncrasies of the lookout's tools—a ten-minute job, she said, and he laughed. She talked like a raving pyromaniac, sick of looking at fire from a distance. She knew her audience. Toby, I would learn, was nothing if not keen on fire. He respected her gumption. Eventually he buckled, said fine, he'd take a chance on a greenhorn, and when could I be back and ready for duty? Fifteen days, it was decided. I'd offer two weeks' notice at the paper and take the earliest Saturday flight back. I'd relieve M.J. for whatever remained of the season, no guarantees on the length of my employment, rainfall and fire danger the deciding factors. After she'd signed off the radio, M.J. stared at me with as serious a look as she could muster. 
Don't screw this up, she said. I told her I was so grateful I would do whatever it took to earn her trust in me. Her face contorted in laughter. Kidding! she howled. It's not possible to screw up as a lookout, as long as you stay awake on the job. I gave her a bear hug and shouldered my pack, took one last long look around the mountain I would soon call home. On my way down the trail I built a cairn on a wind-whipped ridge, in a place I felt sure no one but me would ever visit—a place as wild as the feeling in my heart—and set the deer bone inside of it. I met Paul Gigot as I got off the elevator on my second-to-last day of work at the Journal. Well, it's been a pleasure, I said. Yes, good luck, he said. Now you'll be able to hire someone who's more enthusiastic about working on the editorial page, I said. We've had a change of plans, he said. Your replacement is only going to work on Leisure & Arts. He stepped onto the elevator and threw me a little half wave, half salute. I'd always expected I would one day be shown the door. It was some kind of miracle that I'd lasted as long as I had. Having earned my original position at the paper by means of sanitizing the truth to my advantage, I had to admire the fact that I'd been purged by my own hand. But what was I going to do about it? Rescind my resignation? Beg him to let me stay? So long, I said, waving. Watching over the wilderness of the Gila country, alone with the wind and the stars and the bears and the birds, day after day, night after night—eventually season after season, for more than a decade—was far from easy at first. The enforced solitude made me not just mentally but physically uncomfortable, like a snake molting its skin. All the stimulations and diversions on which I'd come to rely in the city were gone, except the whiskey I made sure to pack in on mules, with all my other supplies. Beyond that I had only myself and the landscape, nothing but time and nothing to do but watch. 
At long last I had a way of being in the world that didn't feel fraudulent. Outside was a world that dwarfed the self, and I fell hard for the country, especially those parts of it that remained wildest. The headwaters of the Gila River encompassed the first place on earth where an industrial society made a conscious decision to avoid disturbing the landscape with motorized or mechanized machines, an administrative order of the Forest Service in 1924, and it remained a harsh and forbidding landscape, unpunctured by roads, where all travel occurred by foot or by horse. Day by day the place worked its magic on me. Its harshness spoke to something harsh inside of me. Its cruelty attracted. And it was beautiful as only those pieces of the old, wild world can be, places where the ancient music of birdsong and elk bugle still plays undrowned by man and his tools. I lost myself in the manic profusion of starlight, the blinding glare of noon; I hovered in numinous mysteries, laughed like a madman at my unexpected good fortune. By staying put through all the various moods and weathers I couldn't help but feel awe of a sort I'd previously thought unattainable, an ecstatic dissolution of the self. The place tore me down and remade me; its indifference to my cares and sorrows was magisterial and, in unexpected ways, comforting. I had believed that the streets of New York were the pinnacle of indifference to the individual human life and I had been mistaken. In the streets of New York you could always perform and at least pretend someone watched, or recede yourself into the act of watching, a necessary member of the audience for the performance all around you. Alone on a mountain there were no such luxuries. Having seen two towers reduced to a crumble of rubble on fire, I couldn't help but appreciate the poetic reversal of watching for fires from a tower in the wilderness. It felt like a useful act of witness, like journalism minus the obsession with ephemera. 
But it's also the case that in my renewed grief for Dan and all that he had suffered, I wanted to honor the gift he'd given me the last time I saw him, the gift of an incomparable view of mountains and desert from above the great rift valley of the Rio Grande. From the moment I stepped foot inside the seven-by-seven-foot cab of M.J.'s tower I was reminded of the basket of Dan's balloon, and the unimpeded view from her peak—a view that took in that same rift valley—called back that long-ago feeling of flight, the dignity and grandeur of floating eye-level with distant mountains. I wanted to perpetuate that feeling. I wanted to live inside of it again, remaining close to what was best in him. If it took an act of intentional downward mobility to do so, trading a job in journalism for a vocation less than a quarter as remunerative, so be it. That great sweep of sky more than made up the difference. The adventure he had dreamed of but never attempted, soaring over the Sandias in a big wind—I could live a version of it every day, afternoons amid the lightning, mornings above the clouds. I never really left southern New Mexico after that first taste, not in my heart of hearts anyway, although it would take me another two years before I left the city for good. While there during those last dismal winters between fire seasons, I mimicked a human being with cosmopolitan cares but I no longer had any such thing, if indeed I ever had. The first winter I burned through my Dow Jones 401(k) and looked in vain for freelance work; I participated in the big antiwar protest, when half a million people took to the streets to issue a warning on the rush to invade Iraq on ginned-up pretenses. I'd always thought of the city as the natural home of free speech and collective action but I watched while protesters were penned in like hamsters by metal barricades and threatened with arrest on the flimsiest of pretexts. 
Innocent people were brutally cuffed and stuffed, and cops on horseback charged crowds for no good reason, threatening the safety of parents and the kids they carried on their shoulders. Those of us who lived in the city that suffered the brunt of the terrorist attack made clear our distaste for visiting a misguided version of the devastation on Baghdad, and we were treated like dogs, some of us manhandled and jailed, all of us told to shut up and keep shopping and the wise men in Washington would handle the rest. Marching felt like pissing into a headwind. The storm was coming, and we knew it. Chicken hawks were in charge, itching for glory. But someone had to say no, even if—especially if—those in power viewed us with contempt. In the bitter last days I worked a series of demoralizing jobs, dreaming of the next fire season, the low point arriving when I signed on at twelve bucks an hour to transcribe tapes of CEOs and senior executives shilling for their companies to something called the Wall Street Transcript, which published the interviews verbatim and at preposterous length in a weekly printed booklet. And by tapes I do mean tapes. I played the cassettes with a foot pedal that allowed me to stop the recording or rewind when necessary. I'd have preferred not to suffer such humiliation, but I had to make a living somehow, and the WST was the only thing I could find. At the end of my first fire season I flew to Minneapolis and rented a car for the drive to the little town in southern Minnesota where my parents lived. On the day I was to leave to catch a flight to New York, when they were both home from work for their lunch hour, I asked them to sit with me at the kitchen table. I said I was very sorry for what I was about to tell them but I thought they deserved to know. My father's reaction was about what I expected: unemotional, rational in the extreme. That explains some things, he said, when I finished telling him what Emily had told me. 
I asked him what it explained, not because I didn't feel similarly—victims of childhood sexual abuse are many times more likely to attempt suicide than the general population, for starters—but because I was curious about his take. He said that he'd always suspected Dan of being afraid of sexual intimacy. He'd had so few girlfriends in his life, and when he lost them he was disconsolate in a way that was hard to fathom. Nonetheless, the loss of Wendy, the proximate cause of his suicide, had never seemed a sufficient reason for putting a gun to his head. The fact that he'd carried with him such a secret for most of his life placed his difficulties with women in a new light. Do you know who it was? he asked. When I offered a name, his jaw set in resignation tinged with anger. I can't say that surprises me, he said. My mother reacted about as I'd expected too. Her face suddenly drained of color; she wept a few silent tears while my father and I speculated about what it all meant, and then she went to their bedroom and closed the door. Just before I left to catch a flight home from Minneapolis, my mother reemerged and said to me, There's a blue notebook in the office, on top of a box in the closet. You can read it if you want. After she and my father left again for work, I found the notebook and sat at the kitchen table. June 3rd—I was at work & Bill told me that Bob had called & said to go home for a minute. I thought he'd hurt his back. When I walked in the door, Bob was leaning against the kitchen sink with Father Evers beside him. Bob grabbed me and pulled me against him. I thought he said Dad died, then I realized he said Dan. I was stunned. After a short length of time, sitting on the couch, I asked Father Evers if Dan would go to hell for this—I didn't know if Dan believed in God. After that I don't remember much for the next week. My heavy heart was in my throat & I couldn't swallow or breathe. I couldn't eat, drink, think, or sleep. 
The neighbor kid asked his dad why we were having so much fun if Dan had died. He had heard all of these people out on the deck all night long, laughing and telling stories, trying to deal with his death in the best way they knew how. Sam & Jan went to Granite Falls to tell Lisa & bring her home. Who told Phil about Dan? When did he find out? Was he alone when he heard? How terrible that he had to take that long plane ride by himself. Dan called Sunday at noon. Thinking back on that conversation, I think he knew what he was going to do. He said he and Wendy were having trouble. I said, "Give her some time." He said, "Oh I'll give her a lot of time." If I had only known, I would have got on a plane right then & gone down to see him. I forgot to tell him "I love you" before I handed the phone to Bob so they could talk about fishing. Sam & Jan helped us through the funeral decisions. We were told to bring friends in case we couldn't understand any of the decisions we needed to make. I'm sure that Mr. Almlie thought we would be VERY distressed over this suicide. Lisa was with us & helped make some of the decisions, which I hardly remember. I only knew I didn't want to bury him, I wanted him alive. Bob made the decision not to see his body, after Almlie said it wouldn't be a good idea. I regret that decision to this day, but don't hold it against Bob. He wanted to remember him like he was, not with a hole blown through his head—maybe he didn't have his face left? We were glad when Lisa went to the funeral home late the next night when his body finally came in. She came back reporting that he looked fine. She only saw, under the cloth on his face, a bruised looking spot on the one side, & they had his eyes sewn shut. She cut off a small lock of his hair & she brought it back to me. That's all I have left of him. I keep it in a small coin purse in a drawer. I can't bring it out to look at because it brings all the heartache back again. 
It takes all my strength to not think about him & talk about him. That's the only way I've been able to get through these past 5 yrs. Even writing this, the tears are flowing so hard I can hardly see the page. Every year on this day, and on his birthday, I just want to stay in bed. I don't want to do anything or see anybody. Thank God today fell on Sunday so I didn't have to go to work. Were we bad parents that we didn't raise our son to feel strong enough not to take his own life? Now when I see a beautiful morning, a beautiful sunset, a bird, lovers in a park, people fishing, I think: Why did he want to give that up? Why did he want to deprive us of his birthdays, his wedding, his children, visits to his home? I need someone to say the right words to me so that I can deal with this heartbreaking sadness in a positive way because right now—all I do is cry. I worry about my kids being lonely and being alone. There are days when I feel guilty for not crying or for being able to sleep. Double rainbow on his funeral day. Liberated by writing this down. When asked how many kids I have, it's hard to answer three. I'm afraid they'll ask me about Dan. And if I talk about him, I'll cry. It's my birthday today, Phil called and we talked for 2 hours, some about Dan. I cry while I'm talking but it still feels good to talk. When I hear a song on the radio that I knew he liked I want to turn it off—but I can't force myself. If it stays on maybe he is close by listening. I copied this down, word for word, transcribing through my own tears, and then I returned the notebook to the place where I'd found it, unaware I'd begun writing this book. Until then my thoughts on my brother's death remained very rarely spoken aloud, mostly locked up in private notebooks—tens of thousands of words' worth of the most bleak and lugubrious maunderings—but my mother's brave act of connection set me free. 
If she could share her innermost thoughts, maybe I could tell a story worth sharing too, in my own rude way. Shortly afterward she sent me a package containing VHS tapes of Dan's varsity wrestling matches. She didn't think she would ever be able to watch them, so she wanted me to have them, just in case. I'd become the documentarian in the family, the keeper of my brother's records—photographs, report cards, test score results, 4-H ribbons, bank statements, wrestling tourney programs, balloon pilot logs—which I saved with the usual journalist's pack-rat mentality, except in this case it had all added up to squat in answer to the major question. I tried twice to watch him but I couldn't get more than a few minutes in. He was as I remembered, fun to watch, tough in the clinch, a technical master and an escape artist more than a brute force. The incongruity of seeing him alive, grappling his opponents into submission—he won twenty-five matches his senior year—was too much for me. I was surprised that my father wouldn't wish to keep the tapes, but then I remembered that more than once over the previous years he'd told me that he refused to dwell in the past, that he would not let his son's death define his life. I have no reason to doubt that he succeeded through a herculean effort of will, or maybe just a cold shrug of contempt for unpleasantness of any sort. I know for a fact that he thought my interest in the story to be an unhealthy wallowing in darkness—his alien, oversensitive son, gripped by morbid curiosities. Of all the theories I'd heard, his rang truest: that whatever sickness festered inside his youngest son, the suicidal impulse had been just that, an impulse he mistakenly heeded with the aid of booze and a gun, that all too lethal combination for sad young men. 
I had the presence of mind to avoid telling my father that I felt certain, almost from the moment I heard the news, that my brother's death would be the most interesting thing to happen in what remained of my life, that surpassing it in sheer riveting power would take something so horrible as to be unimaginable, or so wonderful as to be unreal, and that to deny these facts would have taken more determination than I possessed. My father went his way, I went mine, and never the twain shall meet, though I'm closer to him now in other ways than I've ever been. Later on, a little bit braver, or maybe merely masochistic, I stuck a tape recorder under both my parents' noses, one at a time in private moments, conducting what I called research, and what came out of it was totally unexpected, some of it funny, some of it sad, most of it wildly off topic. I couldn't make myself make them talk about it for longer than a question or two, and they weren't prepared to go there on their own. To speak of it with my mother, in particular, seemed a willful act of torture. I had received a very targeted education in the art of making people talk about uncomfortable things, and still I couldn't do it, not to them. They'd overcome too much. My father had transformed himself from failed farmer to bank vice president; they traveled now, drank nice wine, cultivated a beautiful garden, had a whole new set of late-life friends. How could I justify continuing to poke at the wound? "All families of suicides are alike," Janet Malcolm has written. "They wear a kind of permanent letter S on their chests. Their guilt is never assuaged. Their anxiety never lifts. They are freaks among families the way prodigies are freaks among individuals." That about sums it up, except for the prodigy comparison. By definition, prodigies are blessed with a gift. The families of suicides are not blessed. 
In the winter of 2002 I undertook a journey I'd been planning and dreading for months, all the while in silence. I knew well my capacity for anger; I knew, in other words, that I had needed some time to chill. Plan some lines of inquiry. Judge what it was I wanted to know. But since I doubted I'd extract a confession, it was less about what I wanted to know and more about what I wanted him to know. Maybe one shred of justice could be wrung from the whole sad affair. He would forever know that I knew. So I traveled out of my way to see him, in a town better left unnamed. I found him at his workplace—a little flabbier than I remembered him, a bit too falsely jovial, in the manner of an upbeat high school football coach. I hadn't called ahead to apprise him of my visit. He appeared to be baffled by my coming but he shook my hand, invited me up to his office. I sensed immediately the pride he felt in having an office. A dear connection of his had died not long before, a woman I had cause to know in my youth, and I used this as the pretext for dropping by. I told him that personal business had brought me to the vicinity, and since I'd found myself with spare time on my hands, I wanted to offer my condolences in person. He told me about the woman's final hours, some touching last moments they'd shared, a death with ease and dignity. I nodded my head at all the right moments. He asked about my life in New York. I told him it had been good but was coming to an end. I'd quit my job in journalism. I had no prospects there anymore. I was broke. My candor clearly made him uneasy. People didn't talk this way where we were from. Our conversation dwindled to inanities. The moment had arrived to announce my true purpose. I had fantasized about announcing it with a roundhouse to his nose; now that I found myself within arm's reach of him I felt nervous, even ashamed somehow. I could barely bring myself to look at him. In fact I turned away, looked at the wall. What if I was wrong? 
What if I had the wrong guy? Or worse, what if my brother had made up a story for sympathy in a moment of vulnerability, when he felt himself to be losing his fiancée? Horrified that I would attribute to my brother such conniving instincts, I forgot the question I'd rehearsed. I nearly rose and left without explanation. Then it returned to me. You're a God-fearing man, correct? I said. He appeared mystified. I go to church, yes, if that's what you mean. Have you asked God's forgiveness for what you did to my brother? There was a pause. He asked me what I meant, so I told him. I have no recollection of any such thing, he said. I pressed the point. He became flustered, sweaty, red of face, but still he denied it. Tellingly, I thought, he never denied doing it; he denied any memory of having done it. I have no recollection of such a thing, he said, over and over. We went around and around, and his story never changed. I didn't have a leg to stand on—a secondhand piece of news, a rumor whispered from the lips of the dead. There existed no corroborating witness, no one to offer incriminating testimony. I knew it was folly to believe that he'd confess if I persisted in my questioning. Maybe he'd truly convinced himself it hadn't happened, a strategic lapse of memory that allowed him to avoid succumbing to a crippling guilt. Maybe he really hadn't done it, and I was crudely bullying an innocent man in an effort to make myself feel like a soldier in the cause of justice, as if our confrontation could possibly balance the scales. I would never know for sure. I could only trust my gut, and my gut told me to pray there had been no other victims, as if prayer could make it so. I gave him a scrap of paper with my phone number on it, told him to call me if his recollection changed. Before I left I wished him good luck with his god. I knew I'd never hear from him. 
Try though I might, I could think of nothing more to do with this bit of innuendo, not without inviting a lawsuit for slander. It was a bitter and unsatisfying coda to a story I sometimes thought I'd rather not have unearthed—a story that, despite appearing to offer a perverse absolution to Dan and those of us who loved him, still had about it the odor of a spoiled fruit. In the coming years I would often think of Wendy, Dan's last girlfriend, wondering what had become of her. Though I asked around, no one knew how to reach her. No one had seen her since the memorial service in Albuquerque a few weeks after his death. No one even remembered her last name. Eventually it occurred to me to look more closely at Dan's balloon pilot log, in which he'd recorded the date and time of his flights, their duration, his launch and landing sites, plus any passengers he'd had on board. For years it had sat unopened in a box I carted with me every time I moved, a box marked ALL THINGS DAN, into which I'd tossed it after a cursory look at its first few pages. She was there, of course. Along with her two children, she'd been a frequent companion on those flights over the last eight months of his life. Though I now knew her full name, I took it no further. Another year passed, then two, and still I resisted the urge to track her down. I knew some of what she'd been through and I feared my phone call would be greeted as an unwelcome reminder of an episode she'd rather forget. There were those among our family and friends who'd blamed her for Dan's death. It had come so soon after their breakup that the impulse was understandable. By this reckoning, she'd carelessly played with his heart; she'd had her fun with a younger man, used him as a plaything, a distraction from the pain of her marital split, and when the novelty had worn off she'd dumped him. Had these speculations ever reached her ears? I hoped not. Had she intuited them nonetheless? Perhaps. 
The thought of it made me sick—her having to reckon with that sort of guilt. I'd never been seduced by the temptation to blame her. People break up all the time. A certain amount of pain and sadness ensues, but to kill oneself over it seemed to me an act so extreme and vengeful, so blindly self-regarding, as to be monstrous. Despite having tested the idea, I ultimately couldn't believe my brother to be that sort of monster. One night I typed her name in a search engine and found she still lived in New Mexico. I called information and procured a number, which I wrote on a scrap of paper and tucked in my wallet. I carried it around for months. As more and more time went by, it seemed less and less likely that I would ever muster the nerve to call. I didn't know what I could say to her; I couldn't imagine what she'd say to me. We'd never met. We'd never spoken a word. I'd come across one snapshot of the two of them, in a collection of photos kept by my mother: she was thin and pretty, with blond hair and green eyes, and they were sitting next to each other, turned toward someone out of the frame, both of them laughing, two beers on the table in front of them. I often looked at this photo and tried to imagine what unforeseen trajectory her life had taken in the aftermath of the bullet. I wasn't sure I wanted to know the truth. In some matters the truth, when we find it, is worse than our worst imaginings. After a couple of glasses of whiskey one evening—sixteen years, nine months, and two days after his death—I decided to hell with it. Maybe she'd hate me for calling, maybe not, but there was only one way to find out. I dialed the number. She picked up on the third ring. I told her my name, who I was. She said, Oh, okay, and waited in silence for what would come next. I said there was one thing I needed to tell her, one thing I felt sure Dan would want me to say on his behalf if he knew we were speaking: I'm sorry. I waited in silence for what would come next. 
I figured fifty-fifty she'd tell me off and hang up. Thank you, she said. My call was unexpected, to say the least. So much time had passed, and she hadn't seen or heard from a soul who'd known him since very shortly after his death, though she still thought of him all the time. She remembered him as a very private person, very quiet, but generally happy, smart for his age, good at his job, a skillful balloon pilot, a take-charge kind of guy. His self-confidence was very attractive. When he showed up it was as if a ray of sunshine had come bursting into her life. He'd made her life better during a difficult time. She told me he'd been mature for his age, she'd been immature for hers, and they'd met in the sweet spot in the middle. They'd spent almost every minute together when he wasn't working, shared dinner together every night, went out often, usually to a bar in Rio Rancho, a suburb of Albuquerque. They drank and played darts, hung out with friends. They were having fun, smitten with each other's company, and they indulged—perhaps a little too much, she admitted. He'd lost a fiancée, she'd lost a husband, but they'd found each other, and for a while it felt like he was everything she needed. He'd melded well with her kids. They liked and respected him. In fact they still talked about him sometimes, all these years later. At the time, though, it was complicated. Her life had felt tangled, too many things unresolved. She and her husband were fighting over custody, over the division of property. The kids thought Dan might be the cause of their parents' breakup. He talked to them honestly, told them he wasn't there to replace their father but was open to listening and helping in any way he could. You have to get back in touch with them, she remembered him telling her. They're hurting. They need you. She took his words to heart. She decided she needed a break, mostly for the sake of her children. 
They agreed to put things on hold and revisit their situation when everything cooled down. They both knew this was for the best, though it wasn't an easy decision. That same weekend he moved back into his apartment. It was his first time alone there in months; he'd been living with her since not long after they'd met. He left on a Friday night. By sometime late Sunday he was dead. It was the strangest thing, she said, but that night I swore I heard his truck drive past. I asked my daughter, Did you hear Dan's truck just now? Mom, you're hearing things, she told me. Don't be silly. She could still smell and taste things from the day she got the news. First his boss had called and asked if Dan was with her, since no one had heard from him. She said no, they'd had a difficult weekend, talking bad to each other after the split. He'd been drunk when last they spoke. She could only assume he was sleeping off a hangover. It made us all feel so empty, she said, so sickly sad given all he'd accomplished and all he still had ahead of him. It just didn't make sense. I tried to come up with an explanation. Was it work stress that left him feeling overwhelmed? Was it depression I hadn't noticed? Was the breakup the final straw? And if so, how dare he? She evaded the temptation to assign herself guilt: it was his choice, after all. Besides sadness and shock, she felt anger—a tremendous, devouring anger. He'd had so many friends. He could have reached out to one of them. He had other options. Instead he took the one that could not be undone, a permanent solution to a temporary problem. That was a turning point in her life, as it would have to be. Afterward, she lost her taste for drinking. The bar they'd hung out in, their special place, closed, and she was glad to see it gone. She couldn't have gone there without him. It would have been too painful. She'd never remarried. Her kids had become the focus of her life, in addition to the small business she ran. 
Her son was a property manager and lived in Seattle. Her daughter lived nearby in New Mexico and had two little kids of her own. She'd never imagined herself a grandma, but now here she was. I have these déjà vu moments, she said. They bring on a memory and it's like he's here again, like he never left. Whenever I notice a hot air balloon, which is pretty often in Albuquerque, I think of him, or when I visit the post office and see the security cameras he installed. They're still there. I guess I shouldn't be surprised that they'd last. He was so good at everything he did. We spoke for a bit about ourselves, our work. I wasn't sure if I should bring the conversation back around to Dan or just let it go. It felt a little unfair to allude to the possibility that he'd been raped—as if this fact might color her memory of him—but it felt even more unfair not to. She was shocked to hear it. He'd never said anything about it to her, and she'd never suspected such a thing. She didn't know what to make of it. She'd need some time to think on it. I told her I hadn't called her looking for theories or answers; I had all the answers I would ever have, and they would always remain not enough. I'd called her only to connect, one person to another, over the memory of someone we'd both loved. He was a wonderful man, she said. I know, I said. I try not to define our time together by how his life ended. It's hard. But I think I've managed it. I'm glad of that, I said. And I'm glad to have heard someone speak so sweetly of the man he was. I haven't had that chance very often. She accepted my offer to meet for dinner if ever I was in her neck of the woods. It would be nice, we both agreed, to sit down someday, face-to-face, now that a silence had been broken. When in the course of conversation the subject of siblings arises, I've been known to fudge the truth and leave off mention of Dan—heeding the old taboo. I have a sister, I say. 
She's a joy to be around, with a ribald sense of humor and a skeptical intelligence. I picked on her mercilessly as a child, but she forgave me for it, even later laughed about it with me. She left home at the age of seventeen, moved in with a boyfriend, attained a GED, worked all sorts of jobs, including, like me, night baker. She's now a corrections officer in Minnesota, a Harley rider, a lover of camping in the north country in the summertime. She tells fascinating stories about her work in a medium-security state pen, the damaged men, the ethnic gangs, the squalor and the silliness, the desperation. She lives in a little burg of six hundred people, in an immaculately kept house filled with books and cozy places to sit, on the last street in town, where the howling of the winter wind across the open prairie gives her a healthy appreciation for the adversities of life, as if she needed that. Twice divorced, she's had unfortunate luck with men but never betrayed a worry over her own self-sufficiency. She's well schooled in the tactics of restraint, even teaches others in her line of work. I know damn well she could hurt me if she needed to. I love her no less for the fact that I talk to her on the phone maybe four or five times a year, maximum. When we do talk, we tell each other the truth unvarnished. We understand we owe each other that much. I told her once how on the day of Dan's funeral I spoke with one of his good friends, a fellow farm kid, who told me that back in high school he and Dan would drive out to the farm on summer nights and sit in the yard drinking beer, listening to the crickets in the fields. No matter where their evening had taken them it always ended at the farm. They'd park in the lane and sit on the tailgate of Dan's truck, looking up at the stars. 
We'd been gone for several years by then, and bit by bit the place was coming undone, first the windows of the house shot out, then chunks of good lumber wrenched free and hauled off, finally whole walls smashed and copper wire stripped. Each time they went back the place looked worse. It pissed him off, John said. He used to hope we'd find the vandals there when we showed up, so we could catch them in the act and whip their asses. The worse the place looked, the more Dan talked about the way it was when we were kids. Dan could tell stories for hours, apparently, about us playing kick-the-can with our cousins from Iowa, the way we slid down the stairs in the house inside our sleeping bags, pretending to be bobsledders, or the snow forts we built in the woods behind the chicken coop. He loved that place, John said. He hated having to leave it. I'm just glad he was gone for New Mexico by the time it burned down. While he lived I'd never thought to wonder whether he had the same nostalgic yearnings I did, whether he, like me, drove there in later years and walked through the shell of what was once our life. It saddened me to think we had this in common and never knew it, even worse to think it took his death for me to learn it. We'd been told so often we had nothing in common that we came to believe it; this was the first of our misunderstandings, though hardly the last. Unlike me, he never tried to bleed the country boy out of himself, drop by solitary drop. There had been a time in my adolescence when I began to view our failure at farming as a blessing of sorts. It untethered me from a family calling passed down the generations, set me free to make of myself whatever I could dream up—the American way and the way I preferred it. If given the chance, Lisa and I agreed, he'd never have left. When I went back to the farm—rarely, always alone—I was looking for some piece of myself I had lost in a place whose loss, paradoxically, had liberated me to become my true self. 
Maybe my hunch about him was wrong, and he went back not because he'd lost something of himself but because he wanted close contact with something he would always have or always be. We each eventually drifted away to distant cities, but I was the restless striver, the chameleon, trying on a series of potential identities, while he became a slightly different version of what he'd always been, a shit-kickin' country boy who adopted a northern finger of the Chihuahuan Desert as his new home country, though not for long. Because our leaving the farm marked a kind of rupture in our lives, I return in memory to the time before, years I'm tempted to think of as prelapsarian. The memories are vague, though, and faded like the color in the Polaroids my mother saves in photo albums, but in spite of their haziness they're a major part of what I turn to when I try to reconstitute the brothers we once were. Much of what I recall arises from pure sense memory. The heavy feel of bottle glass in our hands, empty pints and quarts of booze the previous farmer, Old Man Leysen, hid everywhere in the grove of woods behind the house where Dan and I played in summertime. The scaly texture of the wild asparagus we picked outside the barbed-wire fence on the northwest corner of the pasture for our mother to cook with dinner. The milky surface of the ponds in the low spots of the pasture, where we played broom ball in winter, bruising our knees on the ice. The neat cords of firewood we stacked next to the house after our father split the rounds he'd bucked up with his chain saw by the river. The seed-dust smell of the granary where we laid traps for mice. The crunch of shells as we walked through ancient lake beds drained for farmland, picking rocks. The dirt beneath our fingernails from our practice farming in the side yard. That beautiful soil that crumbled in our hands and smelled of ten thousand years of prairie fecundity. 
All of this I suspected Dan remembered too, the shared geography of a vanished way of life, though we never spoke of it. Sometimes after a hard summer rain we'd step into the yard with a flashlight and a pail and collect the elongated earthworms stretched in the wet grass, dozens and dozens of them writhing in the dark. They contracted as soon as they were touched, and they left a film on our hands when we handled them. No bait was more effective in catching bullheads. They were so fat you could pinch them in half and bait two hooks with one worm. Fishing was our major pastime away from farm work; if we left the farm it was usually to buy groceries in town, attend church on Sunday, or ride our bikes to the bridge across the river with our poles and tackle boxes. Those hours were the sweetest of my childhood, brothers at play on the land, at play on the water, a simple enjoyment of each other's presence amid the thrill of catching fish. There was an old schoolhouse just down the road—an abandoned one-room country school with a potbellied stove and a blackboard still hung on one wall. Our grandmother received her first years of education there, in the depths of the Depression. When we played inside of it, often with Lisa alongside us, dust motes stirred in the light slanting through the cracked windowpanes. One day the three of us startled a skunk who'd been taking shelter under the floorboards. We ran when we saw it, and it ran when it saw us, and thankfully a moving target isn't easy to hit when the thing taking aim is moving too. We'd heard that the only way to rid yourself of the stench of a skunk was to take a bath in tomato juice, a thought that repulsed us and, mercifully, a remedy we never had to endure. One close call was enough. We never played in the old schoolhouse again. A short while later it was struck by lightning and charred. 
Someone decided the risk of it burning was too great—it was only a short distance from a telephone pole whose wires passed close overhead—so it was demolished, the wood hauled away for kindling in someone's stove, a harbinger of what was to come for our own home. When I picture us a little later, around the ages of nine and ten, I see myself with a basketball shooting endlessly at the hoop on the side of the granary, and I see Dan hunched over some project in a corner of the garage, which was actually a kind of workshop. It was a world of metallic smells and funky fumes that made you feel funny if you sniffed them deeply at close range, but all the various tools nestled in boxes or hung on nails seemed to speak to him of a world that made sense, a world you could take apart and reassemble with your hands, a world in which every thing fit with some other thing and if it stopped fitting you either fixed it or threw it out and replaced it with another. It was a world he felt drawn to by skill and temperament. It was a world he in fact mastered, working with his hands all his short life. A car engine was a world to be taken apart and rebuilt. A china cabinet was a world to be carved from nature and assembled as functional art. His pieces, if you knew when he'd made them, showed evidence of his ever-increasing skill. They were scattered in homes from the Midwest to the Southwest and places between. I have seen and touched them. Everything fits. Everything is smooth and plumb and buffed to a sheen. It seems only natural to wonder if his undoing was that thing he could not make fit in the story of himself, the one thing that did not make sense and could never be fixed or discarded. Whenever I'm in Minnesota I visit his grave, but its chilly rectitude, the cold headstone, do not summon him. There he's simply underground inside a box. 
I visit too our old home on the farm, but the architecture of the place, the house and barns in which we lived and worked together, are nothing but a ghostly memory. Maybe our erasure from the land erased my memory, more so than his death; either way, what I've written here is most of what I have in the way of story from my childhood. I wish there were more. Maybe it will come in old age, as some say it does. For now, when I want to be close to him, I visit that cairn I built on a lonely ridge of the Black Range. Over the years I've added more chamfered bones, antlers and potsherds and turquoise beads, snake skins and mushrooms, turkey feathers, stones, the serendipitous accumulations of my evening walks. He'd like it there, I feel certain. The view from the ridge is wild as all get-out, the deep headwaters canyons of a trout stream to the west, the distant valley of the Rio Grande across the desert to the east. It's as wild a place as you can still find in the Lower 48. In the summer of my twelfth year on the mountain it burned, as we'd all known it would one day. I watched as the plume took off in a running crown fire of two-hundred-foot flames, the smoke billowing black into the sky, as if roaring from a fissure in the basement of the world. A helicopter plucked me from the mountain ahead of the flames. I watched the spectacle for a month from a different tower thirty miles north. The fire covered two hundred square miles in the end. On the hottest day of its run it torched ten thousand acres in an afternoon. The smoke plume rose into the lower troposphere, a pulsing column of heat topped by its own pyrocumulus cloud, from which could be heard the rumble of thunder. Charred oak leaves fluttered to the ground as far away as twenty-five miles. I returned after the late-summer rains. It was a peculiar hike in, the first time back. 
The burn area was still closed to the public, so I let myself through a locked gate on the highway near the forest boundary, aware that the country was entirely mine, for a little while anyway. As I walked and gawked I added everything I saw to my memory's palimpsest of the landscape, the original layer as I'd found it in the beginning with M.J., another layer as I'd seen it before a fir-beetle outbreak killed thousands of trees, and another layer after, one from the whirlybird on my way out, and now the newest and most radical revision as it greeted me in the aftermath of the burn, black as black gets in places. About two-thirds of the way to the top, big islands of untouched forest appeared where the fire'd had no impact on the canopy. From the open meadow on top you couldn't tell there'd been a fire at all. The peak still wore a cap of green, the grass luxuriant from the rains, the trees along the peak's edges untouched. I wandered around looking for the places where the fire's fingers made their highest runs. I didn't have to go far. A couple hundred yards in any direction there were big patches of scorched earth. Back on top, an hour after my arrival, something bright green quivered in the grass between the cabin and the outhouse: a tree frog. In all my seasons there I'd never seen one. It felt like an omen, a sign that despite the tremendous changes, the life of the mountain carried on as before. The next day I visited the cairn with the half-charred pelvis of a mule deer. I wasn't sure what I'd find. All around stood the spooky pikes of burnt trees, a forest poised between what it had been and what it would be. Ash had turned to mud on the ground. No birds sang, but the grass was already greening, the oaks resprouting; soon the birds would return, the aspens would burst from the char, the cycle of death and rebirth gone around once more. Strangely, the landscape felt more like home than ever. 
Perhaps when your childhood home is lost to the bankers and then lost forever when its new owner torches everything on the property for two more acres of tillable land, you can't help but be mesmerized by the erasures achieved by fire. Perhaps when your brother ends his life with a bullet to the brain, you can't help but feel an intuitive understanding of the forces of earthly destruction. Standing inside the black can feel like a form of belonging. By some miracle the cairn remained untouched by the flames, solid as the day I'd built it, a tiny oasis amid the burn scar. I removed the cap rock. I placed the bone inside. I felt the enormity of his loss once more. The pain of it never does fade entirely, never will—no doubt it disfigured me in ways that will endure for what remains of my life—but at last I found a place to put it where it wouldn't eat me alive. My devotion to his memory led me there, the place I venerate above all others on earth, my little voodoo shrine to the lost and the damned, as wild and remote as the country of grief itself.

Also by Philip Connors

Fire Season: Field Notes from a Wilderness Lookout

Author's Note

Some names and identifying details have been changed in an effort to protect the innocent and the guilty. Portions of this book first appeared, in different form, in n+1, Ninth Letter, the Dublin Review, and Lapham's Quarterly, the editors of which are gratefully acknowledged; special thanks are due Keith Gessen, Brendan Barrington, and Elias Altman.

Copyright © 2015 by Philip Connors

All rights reserved

First Edition

Excerpts from "After the Party," "A Beautiful Day Outside," "The Complete Works of Anton Werbern," "February," "Hart Crane Near the End," "To Start at End," "Easter," and "December" from _Poems: 1959–2009_ by Frederick Seidel. Copyright © 2009 by Frederick Seidel. Reprinted by permission of Farrar, Straus and Giroux, LLC.

For information about permission to reproduce selections from this book, write to Permissions, W. W.
Norton & Company, Inc., 500 Fifth Avenue, New York, NY 10110

For information about special discounts for bulk purchases, please contact W. W. Norton Special Sales at specialsales@wwnorton.com or 800-233-4830

Book design by Mary Austin Speaker

Production manager: Anna Oler

ISBN 978-0-393-08876-2
ISBN 978-0-393-24648-3 (e-book)

W. W. Norton & Company, Inc.
500 Fifth Avenue, New York, N.Y. 10110
www.wwnorton.com

W. W. Norton & Company Ltd.
Castle House, 75/76 Wells Street, London W1T 3QT
The Seek Reveal PRO Thermal Camera offers 76,800 pixels of infrared data, or four times more than any other stand-alone infrared camera under $4000. It runs at a higher frame rate, gives a smoother experience, and is wrapped in a rugged housing including a Corning Gorilla Glass screen. The excellent resolution and sensitivity make the Reveal PRO a very capable thermal imager for applications ranging from building inspection, to facilities maintenance, to outdoor recreation. All this from a camera that costs just $699. It's simply one of the best buys in infrared imaging. How Far Can the Reveal PRO See? For more images and video see our review of the Seek Reveal Pro. The Seek Reveal PRO uses a 32-degree lens to give the right balance between a medium and wide angle view. The lens feels just right for most applications, whether inside or out. With the higher resolution of the Reveal PRO, a digital zoom function allows you to enlarge a target without sacrificing picture detail beyond usefulness. We find that the fixed focus lens combined with a surprisingly fast startup (under 3 seconds) make the Seek PRO ready whenever it's needed. The Seek Reveal PRO comes with a 2-year extended warranty. Ivy Tools provided excellent customer service, both in processing the order and in delivery. The camera does perform well, though I had an instance where the camera refused to turn off, no matter how long I held the power button. I reset the camera back to factory settings, and hopefully this will not occur again. I cannot thank Ivy Tools enough for swiftly processing and shipping my order. I was in a pinch and they got the order right out and made it next-day delivery. Outstanding. Thank you. The Seek thermal pro works well. I have just started using it and am learning all the features. I ordered a Seek Reveal Pro and Holster from Ivy Tools and couldn't be happier with the service I received. Milt was top notch to deal with and got my order to me quickly. 
I would definitely order from them again without hesitation.
\section{INTRODUCTION} Relocalization is a fundamental problem in robotics and computer vision. A robot has to localize itself when moving in urban or indoor environments to achieve competent autonomy. Several existing solutions employ Global Navigation Satellite System (GNSS) to perform localization. However, GNSS is not always available, such as in indoor environments, and the accuracy of GNSS cannot be guaranteed in urban environments with high-rise buildings since they can block GNSS signals. There is a significant body of knowledge in visual localization, as it has been studied for decades. Conventional geometry-based visual localization systems mainly utilize handcrafted features and descriptors, which are typically sensitive to illumination variation, dynamic objects and viewpoint change~\cite{Huang2019PriorEnvironments}. Recently, learning-based visual localization methods such as PoseNet and variants~\cite{Kendall2015,Kendall2016ModellingRelocalization,Kendall2017,Brahmbhatt2018} have been proposed to solve these challenges, which leverage either a single image or a sequence of images to predict 6-Degree-of-Freedom (6-DoF) poses directly. Unlike retrieval-based learning approaches e.g. CamNet~\cite{Ding2019CamNetRe-Localization}, RelocNet~\cite{Balntas2018RelocNetNets} and Camera Relocalization CNN~\cite{Laskar2017CameraNetwork}, location-related information of these deep learning methods is implicitly encoded within the parameters of these deep neural networks, and therefore these methods require agents that have previously traversed the same environment. However, vision sensors inherently suffer from several drawbacks which restrict their ability to be used in scenarios where reliability is highly desirable, such as self-driving cars. Visual inputs are easily impacted by ambient environmental conditions, e.g. sunshine, rain and fog, and are further limited by the camera's narrow Field-of-View (FoV). 
Emerging Frequency-Modulated Continuous Wave (FMCW) radar sensors can effectively solve many of the shortcomings of cameras. They can provide a $360\degree$ view of the scene and range objects hundreds of meters away. Meanwhile, they can function reliably in unstructured environments in diverse conditions, e.g. snow, darkness, fog, smoke and direct sunlight, without impact~\cite{Barnes2020TheDataset}. These characteristics of radar make it suitable for robot localization, especially for autonomous agents which operate in large-scale urban scenes. Inspired by the aforementioned deep pose regression methods that use images, the aim of this work is to investigate and provide a robust radar localization system, allowing robots to relocalize themselves in previously visited scenes. \begin{figure} \centering \includegraphics[width=\columnwidth]{RadarLoc_System_Overview.pdf} \caption{System overview of the proposed RadarLoc relocalization framework. A raw FMCW radar scan is first transformed into a Cartesian Radar Image. The radar image is then fed to RadarLoc, which directly estimates the 6-DoF pose in an end-to-end manner.} \label{fig:system_overview} \end{figure} Specifically, we propose a novel geometry-aware neural network architecture, termed RadarLoc, which can estimate the 6-DoF pose using a single radar scan. The proposed self-attention module of a nested encoder-decoder architecture further improves the localization performance. During the training phase, RadarLoc takes as input a sequence of radar scans, and predicts poses optimized and constrained by both absolute and relative (geometric) pose losses. At inference time the 6-DoF pose is regressed from a single input scan. Fig.~\ref{fig:system_overview} illustrates the overview of the proposed fully differentiable relocalization system. Our contributions are summarized as follows: We demonstrate that radar scans can be employed to estimate absolute 6-DoF poses in an end-to-end fashion. 
We further refine pose estimations by leveraging geometric constraints between radar pairs as one component of the loss function. Comprehensive experiments and an ablation study have been conducted to demonstrate the effectiveness of RadarLoc, which outperforms state-of-the-art radar-based localization and DNN-based camera relocalization methods by a significant margin. \section{RELATED WORK} \label{related_work} \subsection{Deep Camera Localization} Apart from problems of computation and storage, traditional visual localization in dynamic environments is still very difficult because of foreground outliers and appearance variations \cite{Huang2019PriorEnvironments}. For tackling these problems, recent works propose DNN-based methods to estimate 6-DoF poses directly. Single or sequential images are fed into a neural network model which comprises a feature extractor and a pose regressor for estimating absolute poses in an end-to-end manner. PoseNet~\cite{Kendall2015} was the first to demonstrate that 6-DoF camera poses can be directly predicted by a neural network. Subsequent variants \cite{Kendall2016ModellingRelocalization,Kendall2017} improve the performance of PoseNet by introducing a geometric loss and modelling the uncertainty of poses with a Bayesian Neural Network. Walch \textit{et al.}~\cite{Walch2017} proposed to utilize LSTM for structural feature correlation to improve the performance. Although these approaches are promising, they are still limited by the disadvantages of visual sensors. Our work extends this line of research by leveraging FMCW scanning radar to perform deep global localization. \subsection{Radar Geometry} A $360\degree$ FMCW radar continuously scans the surrounding environment with a total of $M$ azimuth angles. The radar emits a beam and collapses the return signal for each azimuth angle~\cite{Hong2020RadarSLAM:Weathers}. The raw scan of the FMCW radar is a polar image, which can be transformed into a Cartesian image. 
Formally, given a point $(a, b)$ where $a$ is the azimuth and $b$ is the range on a raw polar image, the range angle $\theta$ in the corresponding Cartesian coordinate is: \begin{equation} \theta\ =\ 2\pi\cdot a\ /\ M \end{equation} Thus, the corresponding coordinate $\mathbf{Z}$ in the Cartesian image can be calculated as: \begin{equation} \mathbf{Z}\ =\ \begin{bmatrix} \alpha \cdot \cos\theta \cdot b\\ \alpha \cdot \sin\theta \cdot b \end{bmatrix} \end{equation} where $\alpha$ is a scaling factor between the image pixel space and the world metric space. The Cartesian representation of the radar scan is visually comprehensible, and is better for neural networks to learn and optimize than the raw polar representation. \subsection{Radar Odometry} Recent works proposed to utilize radar scans for ego-motion estimation, which is known as radar odometry. Cen \textit{et al.} \cite{Cen2018PreciseConditions} extracted landmarks from radar scans and then conducted scan matching to predict ego-motion based on unary descriptors and pairwise compatibility scores. Barnes \textit{et al.}~\cite{Barnes2019MaskingInformation} developed a robust and real-time radar odometry system based on deep correlative scan matching with a learnt feature embedding and a self-supervised distraction-free module. Afterwards, they proposed a deep keypoint detection approach for radar odometry estimation and metric localization by embedding a differentiable point-based motion estimator~\cite{Barnes2020UnderRadar}. Note that different from these methods, our work focuses on radar-based absolute localization, which predicts global poses w.r.t. the world coordinate rather than relative poses. Fig.~\ref{fig:loc_diff} illustrates the differences between these two different localization tasks. \begin{figure} \centering \includegraphics[width=\columnwidth]{loc_diff.pdf} \caption{The difference between radar odometry and radar relocalization~\cite{Maddern2016,Barnes2020TheDataset}. 
Radar odometry predicts relative poses between consecutive radar scans and thus accumulates drift over time, while radar relocalization estimates global poses w.r.t. the world coordinate and requires the environment to have been traversed before. These are two different tasks in localization, and this work focuses on radar relocalization.} \label{fig:loc_diff} \end{figure} \section{Deep Radar Relocalization} \label{deep_radar_relocalization} In this section, we introduce the proposed deep radar relocalization framework in detail. The overall architecture of RadarLoc is illustrated in Fig.~\ref{fig:radarloc_architecture}, which consists of a self-attention module, a radar encoder, and a deep pose regressor. \begin{figure*} \centering \includegraphics[width=\linewidth]{RadarLoc.pdf} \caption{The architecture of RadarLoc. RadarLoc consists of a self-attention module, a radar encoder and a deep pose regressor. A raw FMCW radar scan is transformed into a Cartesian radar image, and then it is fed into a self-attention module to learn a soft attention map. DenseNet~\cite{Huang2017DenselyNetworks} is employed as the radar encoder to extract useful features for relocalization. The deep pose regressor predicts the parameterized translation $\mathbf{p} \in R^3$ and rotation $\log\mathbf{q} \in R^3$ \cite{Brahmbhatt2018}. The predicted parameterized rotation vector $\log\mathbf{q}$ can be further transformed to the 4-D rotation vector $\mathbf{q} \in R^4$.} \label{fig:radarloc_architecture} \end{figure*} Since the original output of the FMCW scanning radar is a polar image, we transform it into the Cartesian space as a grey-scale bird's-eye-view-like image for better representation and improved localization performance~\cite{Wang2019Pseudo-lidarDriving}. During the training phase, the neural network is optimized by the geometry-aware loss function which employs a sequence of radar scans to learn global 6-DoF poses and relative transformations simultaneously. 
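As a minimal, runnable sketch of the polar-to-Cartesian transformation described in the radar geometry section, the snippet below scatters an $M \times B$ polar scan onto a square Cartesian grid following $\theta = 2\pi a / M$ and $\mathbf{Z} = (\alpha \cos\theta \cdot b,\ \alpha \sin\theta \cdot b)$. The grid size, the value of the scaling factor $\alpha$, and the nearest-pixel scattering scheme are illustrative assumptions, not values from the paper.

```python
import numpy as np

def polar_to_cartesian(polar, alpha=1.0, out_size=64):
    """Scatter a polar radar scan (M azimuths x B range bins) onto a
    Cartesian grid centered on the sensor, using theta = 2*pi*a/M and
    Z = (alpha*cos(theta)*b, alpha*sin(theta)*b)."""
    M, B = polar.shape
    cart = np.zeros((out_size, out_size), dtype=polar.dtype)
    center = out_size // 2  # sensor sits at the image center
    for a in range(M):
        theta = 2.0 * np.pi * a / M
        for b in range(B):
            x = int(round(center + alpha * np.cos(theta) * b))
            y = int(round(center + alpha * np.sin(theta) * b))
            if 0 <= x < out_size and 0 <= y < out_size:
                # keep the strongest return when bins collide
                cart[y, x] = max(cart[y, x], polar[a, b])
    return cart
```

A production implementation would vectorize this (or use an inverse mapping with interpolation), but the loop form mirrors the two equations directly.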
During the test phase, RadarLoc estimates the 6-DoF pose from a single radar input. \subsection{Problem Formulation} The scope of this work is to predict absolute 6-DoF poses of the mobile agent given radar scans as inputs. The agent has visited the scene before, and can therefore relocalize itself within it. The relocalization of the agent is parameterized by a 6-DoF pose $\mathbf{P} = [\mathbf{p}, \mathbf{q}]$ with respect to the world coordinate, where $\mathbf{p} \in R^3$ is a 3-D translation vector and $\mathbf{q} \in R^4$ is a 4-D rotation vector. At each timestamp $t$, the agent receives a Cartesian radar image $\mathbf{I} \in R^{H \times W}$ from the FMCW scanning radar, where $H$ is height and $W$ is width. The deep radar relocalization framework learns a function $f$ so that $f(\mathbf{I}) = [\mathbf{p}, \mathbf{q}]$, where $f$ is a deep neural network. \subsection{Self-Attention for Robust Relocalization} For the radar relocalization task, there are two categories of noise which can significantly affect the accuracy of pose predictions. One is noise from the radar sensor itself. The current FMCW scanning radar is affected by multiple noise sources, e.g. range error, angular error, and false positive and false negative detections, which make the radar scans noisier than camera images. The other is foreground moving objects in dynamic environments. There are several types of dynamic outliers, e.g. pedestrians, bikes, buses and trucks, in complex urban environments, which have different shapes and sizes. Since the radar can scan a range of more than 150 meters, it is likely that one radar image can contain these different types of moving objects. Therefore, the aforementioned noise can inevitably bias the neural network, making radar relocalization quite challenging. Barnes \textit{et al.}~\cite{Barnes2019MaskingInformation} proposed a U-Net structure to predict distraction-free radar odometry. 
Wang \textit{et al.}~\cite{Wang2020AtLoc:Localization} designed a non-local self-attention module to filter out moving objects for camera relocalization. However, these methods neither learn semantic features in a fine-grained manner~\cite{Barnes2019MaskingInformation} nor are designed specifically for radar images~\cite{Wang2020AtLoc:Localization}. To this end, we propose a novel self-attention module for radar relocalization as shown in Fig.~\ref{fig:radarloc_self_att}, which is a nested encoder-decoder style neural network, to mitigate the impact of these noises by filtering them out. \begin{figure} \centering \includegraphics[width=\linewidth]{RadarLoc_self_att.pdf} \caption{The architecture of the self-attention module. The module is a nested encoder-decoder structure~\cite{Zhou2018UNet++:Segmentation}. For better visualization, we only depict 3 levels in the figure, and in our implementation, the nested structure has 6 levels. We adopt a soft attention mechanism, and fuse output features at the upper level to have a fine-grained attention map.} \label{fig:radarloc_self_att} \end{figure} Our design intuition is that considering the different shapes and sizes of moving objects, the self-attention module should have the ability to extract fine-grained features and filter out this dynamic noise. Compared to the U-Net style architecture, the nested encoder-decoder architecture can gradually down-sample, fuse and up-sample features from inputs, which can reduce the semantic gap between the feature maps and extract fine-grained semantic information. 
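At its core, the module's output is a soft attention map: the fused node features are squashed through a Sigmoid and applied elementwise to the input radar image, suppressing noisy regions. A toy NumPy sketch of that gating step (the array shapes and the simple mean fusion over node features are assumptions for illustration):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def soft_attention(radar_image, node_outputs):
    """Fuse the uppermost-level node feature maps by averaging, then
    gate the input with a Sigmoid mask: I' = sigmoid(I_node) * I."""
    i_node = np.mean(node_outputs, axis=0)  # fuse n node outputs
    mask = sigmoid(i_node)                  # soft attention in (0, 1)
    return mask * radar_image               # elementwise gating
```

Because the mask is soft rather than binary, the gating stays differentiable and can be trained end-to-end with the rest of the network.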
We choose the re-designed skip pathways proposed by Zhou~\textit{et al.}~\cite{Zhou2018UNet++:Segmentation} due to their impressive performance on multiple medical image segmentation tasks, in which the feature maps pass through a dense convolution block whose number of convolution layers depends on the pyramid level, and the stack of feature maps is calculated as: \begin{equation} \mathbf{X}^{i,j} = \begin{cases} g(\mathbf{X}^{i-1,j}),& j = 0 \\ g([[\mathbf{X}^{i,k}]^{j-1}_{k=0}, g^{'}(\mathbf{X}^{i+1,j-1})]),& j > 0 \end{cases} \end{equation} where $\mathbf{X}^{i,j}$ is the extracted feature of each node in Fig.~\ref{fig:radarloc_self_att}, $g(*)$ denotes a convolution layer with an activation function, $g^{'}(*)$ is an up-sampling layer, $[*]$ indicates a concatenation layer, $i \in [0, 1, ..., n-1]$ is the index of the down-sampling layer along the encoder, $j \in [0, 1,...,n-1]$ is the index of the convolution layer of the dense block along the skip pathway, and $n$ is the number of pyramid levels. In order to learn features at different scales, we fuse node outputs on the uppermost level to generate the output features $\mathbf{I}_{node}$ by averaging them: \begin{equation} \mathbf{I}_{node} = \frac{1}{n}\sum_{j=0}^{n-1}\mathbf{X}^{0,j} \end{equation} Thus, our self-attention module is an encoder-decoder pyramid structure with dense skip pathways followed by an activation function. We adopt a soft attention mechanism to learn the mask, so the activation function we use is \textit{Sigmoid}. Given a Cartesian radar image $\mathbf{I} \in R^{H \times W}$, the self-attention module serves to learn a noise-free feature map $\mathbf{I}^{'} \in R^{H \times W}$: \begin{equation} \mathbf{I}^{'} = \sigma(\mathbf{I}_{node}) \cdot \mathbf{I} \end{equation} where $\sigma$ is the \textit{Sigmoid} function, and $\cdot$ represents the dot product. \subsection{Radar Encoder} The radar encoder extracts features from a radar image for relocalization. 
Existing state-of-the-art camera relocalization approaches~\cite{Brahmbhatt2018,Wang2020AtLoc:Localization,Huang2019PriorEnvironments} employ ResNet~\cite{He2016} as the visual encoder, since residual neural networks can be trained deeper and alleviate the vanishing-gradient problem. DenseNet~\cite{Huang2017DenselyNetworks}, which consists of densely connected convolutional networks, has been shown to achieve better performance than ResNet on four object recognition tasks. Hence, RadarLoc adopts pre-trained DenseNet as the radar encoder for feature extraction in relocalization. We broadcast the feature map $I^{'}$ to 3 channels, and replace the last 1000-dimensional fully connected layer with an M-dimensional fully connected layer. Formally, given the $\mathbf{I^{'}}$ from the self-attention module, the feature encoder $f_{encoder}$ extracts the feature vector $\mathbf{z} \in R^{M \times 1}$ from $\mathbf{I}^{'}$, which can be expressed as: \begin{equation} \mathbf{z} = f_{encoder}(\mathbf{I}^{'}) \end{equation} \subsection{Deep Pose Regressor} The deep pose regressor receives the feature vector $\mathbf{z}$ from the Radar Encoder, and predicts the position $\mathbf{p}$ and the rotation $\mathbf{q}$ respectively. It consists of two branches of Multi-Layer Perceptrons (MLPs). An activation function is applied to each layer of the MLPs except the last one. The pose regressor which ultimately estimates the global pose $\mathbf{P} = [\mathbf{p}, \mathbf{q}]$ is defined as: \begin{equation} \mathbf{P} = f_{MLPs}(\mathbf{z}) \end{equation} \subsection{Loss Function with Geometric Constraints} For the loss function, we employ the definition in \cite{Brahmbhatt2018} as it has been shown to be effective in existing image-based global pose regression tasks. 
The vanilla loss function $h$ is defined as: \begin{equation} h(\mathbf{P}, \hat{\mathbf{P}}) =\Vert \mathbf{p}-\mathbf{\hat{p}} \Vert_{1} e^{-\beta} + \beta + \Vert \log \mathbf{q}-\log\mathbf{\hat{q}} \Vert_{1} e^{-\gamma} + \gamma \label{eq:h} \end{equation} where $\mathbf{p}$ and $\log\mathbf{q}$ are translation and orientation of the predicted global pose $\mathbf{P}$, $\hat{\mathbf{p}}$ and $\log\hat{\mathbf{q}}$ are translation and orientation of the ground-truth global pose $\hat{\mathbf{P}}$, $\Vert * \Vert_{1}$ denotes the $L_{1}$ loss function, and $\beta$ and $\gamma$ are learnable balance factors which are initialized by $\beta^0$ and $\gamma^0$ respectively. $\log\mathbf{q}$ is the logarithmic form of a unit quaternion $\mathbf{q} = (u, \mathbf{v})$, where $u$ is a scalar and $\mathbf{v}$ is a 3-D vector, which is defined as: \begin{equation} \log\mathbf{q}= \begin{cases} \frac{\mathbf{v}}{\Vert\mathbf{v}\Vert}\cos^{-1}u,& \text{if } \Vert\mathbf{v}\Vert \neq 0 \\ \mathbf{0},& \text{otherwise} \end{cases} \end{equation} Since a 2-D radar image can provide metric information within a wide range, we further improve the performance of relocalization by leveraging geometric constraints to optimize parameters of the neural network. 
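The two definitions above can be sketched numerically as follows: the quaternion log map sends a unit quaternion to a 3-D vector, and the balanced loss weighs translation and rotation errors with the learnable factors $\beta$ and $\gamma$. This is a plain NumPy illustration (in practice $\beta$ and $\gamma$ would be trainable parameters of the network, not plain floats):

```python
import numpy as np

def log_quaternion(q):
    """Log map of a unit quaternion q = (u, v): (v/||v||) * arccos(u),
    or the zero vector when ||v|| = 0."""
    u, v = q[0], np.asarray(q[1:], dtype=float)
    norm = np.linalg.norm(v)
    if norm == 0.0:
        return np.zeros(3)
    return v / norm * np.arccos(u)

def pose_loss(p, logq, p_gt, logq_gt, beta, gamma):
    """Balanced L1 pose loss h(P, P_hat):
    ||p - p_gt||_1 * exp(-beta) + beta
    + ||logq - logq_gt||_1 * exp(-gamma) + gamma."""
    t_err = np.abs(np.asarray(p) - np.asarray(p_gt)).sum()
    r_err = np.abs(np.asarray(logq) - np.asarray(logq_gt)).sum()
    return t_err * np.exp(-beta) + beta + r_err * np.exp(-gamma) + gamma
```

The $e^{-\beta}$ and $e^{-\gamma}$ factors let the optimizer trade off the translation and rotation terms automatically instead of hand-tuning a fixed weight.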
\begin{table} \footnotesize \centering \caption{\small Dataset Descriptions on the Oxford Radar RobotCar.} \vspace{.5em} \resizebox{\linewidth}{!}{ \begin{tabular}{ccccc} \toprule \multirow{1}{*}{Scene} & Time & Tag & Training & Test \\ \midrule Seq-01 & 2019-01-11-14-02-26 & sun & \checkmark & \\ Seq-02 & 2019-01-14-12-05-52 & overcast & \checkmark & \\ Seq-03 & 2019-01-14-14-48-55 & overcast & \checkmark & \\ Seq-04 & 2019-01-15-14-24-38 & overcast & \checkmark & \\ Seq-05 & 2019-01-18-15-20-12 & overcast & \checkmark & \\ Seq-06 & 2019-01-10-11-46-21 & rain & & \checkmark \\ Seq-07 & 2019-01-14-12-41-28 & overcast & & \checkmark \\ Seq-08 & 2019-01-15-13-06-37 & overcast & & \checkmark \\ Seq-09 & 2019-01-17-14-03-00 & sun & & \checkmark \\ Seq-10 & 2019-01-18-14-14-42 & overcast & & \checkmark \\ \bottomrule \end{tabular} } \label{tab:dataset_radar_robotcar} \vspace{-.5em} \end{table} During training, we choose $N$ radar images, consisting of the current radar image $I_{0}$ as well as $N-1$ sequential radar images $\{I_{1},...,I_{N-1}\}$ close to $I_{0}$. Consequently, RadarLoc learns both global poses ($\mathcal{L}_{gp}$) and relative pose transformations ($\mathcal{L}_{rp}$) between radar image pairs. The improved loss functions are defined as: \begin{equation} \mathcal{L}_{gp} = \sum_{i=0}^{N-1} h(\mathbf{P}_{i}, \hat{\mathbf{P}}_{i}) \quad \mathcal{L}_{rp} = \sum_{i=0}^{N-2}h(\mathbf{Q}_{i}, \hat{\mathbf{Q}}_{i}) \end{equation} where $\mathbf{P}_{i},\mathbf{Q}_{i}$ are the predicted global poses and relative pose transformations, $\hat{\mathbf{P}}_{i},\hat{\mathbf{Q}}_{i}$ are the ground-truth global poses and relative pose transformations respectively, and $h$ is the distance function defined in Eq.~\ref{eq:h}.
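In code, the sequence-level objective is a straightforward sum; the sketch below (illustrative Python, with `h` passed in as the distance function above) mirrors these definitions:

```python
def sequence_loss(h, P, P_hat, Q, Q_hat):
    # L_gp: global-pose terms over the N images; L_rp: relative-pose terms
    # over the N-1 consecutive pairs. The overall training loss is their sum.
    l_gp = sum(h(p, p_hat) for p, p_hat in zip(P, P_hat))
    l_rp = sum(h(q, q_hat) for q, q_hat in zip(Q, Q_hat))
    return l_gp + l_rp
```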
Therefore, the ultimate loss function for RadarLoc is formulated as: \begin{equation} \mathcal{L}_{total} = \mathcal{L}_{gp} + \mathcal{L}_{rp} \end{equation} Importantly, we employ multiple images in the training phase, but only a single radar image in the test phase. \section{Experiments} \label{experiments} In this section, we evaluate our proposed RadarLoc on the recently released Oxford Radar RobotCar Dataset~\cite{Barnes2020TheDataset,Maddern2016}, and compare it with state-of-the-art radar-based localization methods and deep camera and LiDAR relocalization methods. \subsection{Dataset} The dataset provides Navtech CTS350-X FMCW scanning radar data, RGB images and corresponding ground-truth poses. It was collected in January 2019 over thirty-two traversals of a central Oxford route spanning a total of 280 km of urban driving, covering a variety of lighting, weather and traffic conditions~\cite{Barnes2020TheDataset}. Each sequence is around 9 km long and all sequences traverse the same route, making the dataset large-scale and complex. The relocalization task is particularly challenging since the urban scenes contain many foreground objects, e.g., people, cars and buses, which significantly influence relocalization performance. The training and test sequences we use from the Oxford Radar RobotCar Dataset are listed in Table~\ref{tab:dataset_radar_robotcar}. Note that although seasonal variations can affect localization significantly, this dataset only covers January. \begin{table*} \footnotesize \centering \caption{\small Results showing the mean translation error (m) and rotation error (\degree) for state-of-the-art radar-based localization methods and deep camera and LiDAR relocalization methods on the Oxford Radar RobotCar Dataset. For RadarSLAM and Adapted methods, the sensory data is an FMCW radar scan. 
The sensory data of AtLoc and PointLoc are camera RGB image and LiDAR point cloud respectively.} \vspace{.5em} \resizebox{\linewidth}{!}{ \begin{tabular}{cccccccc|c} \toprule \multirow{3}{*}{Sequence} & RadarSLAM & Adapted Masking & Adapted PoseNet17 & Adapted AtLoc & Adapted LSTM & AtLoc & PointLoc & RadarLoc \\ & \cite{Hong2020RadarSLAM:Weathers} & \cite{Barnes2019MaskingInformation} & \cite{Kendall2017} & \cite{Wang2020AtLoc:Localization} & \cite{Walch2017} & \cite{Wang2020AtLoc:Localization} & \cite{Wang2020} & (Ours) \\ & [Radar] & [Radar] & [Radar] & [Radar] & [Radar] & [RGB] & [LiDAR] & [Radar] \\ \midrule Seq-06 &49.81m, 5.22\degree & 12.54m, 3.93\degree & 15.12m, 4.08\degree & 15.85m, 4.20\degree & 15.86m, 4.28\degree & 15.36m, 3.37\degree & 14.42m, {\bf 2.77\degree} & {\bf 8.43m}, 3.44\degree \\ Seq-07 &24.73m, 3.36\degree & 8.11m, 3.04\degree & 13.59m, 3.54\degree & 13.23m, 3.82\degree & 13.33m, 2.47\degree & 39.76m, 8.31\degree & 8.46m, {\bf 1.82 \degree} & {\bf 5.12m}, 2.87\degree \\ Seq-08 &26.09m, 1.57\degree & 11.32m, 4.18\degree & 14.81m, 3.46\degree & 14.17m, 2.94\degree & 14.86m, 2.88\degree & 31.68m, 4.34\degree & 9.52m, {\bf 2.14\degree} & {\bf 6.56m}, 3.06\degree \\ Seq-09 & 39.84m, 5.67\degree & 11.53m, 2.76\degree & 14.44m, 3.04\degree & 15.71m, 3.23\degree & 13.86m, 2.71\degree & 47.06m, 9.38\degree & 11.52m, {\bf 1.98\degree} & {\bf 6.51m}, 2.91\degree \\ Seq-10 & 17.83m, 1.71\degree & 9.42m, 1.81\degree & 13.21m, 2.02\degree & 13.22m, 1.94\degree & 14.65m, 1.89\degree & 10.35m, {\bf 1.26\degree} & 8.43m, 1.40\degree & {\bf 5.34m}, 1.78\degree \\ \midrule Average & 31.66m, 3.50\degree & 10.58m, 3.15\degree & 14.23m, 3.23\degree & 14.44m, 3.22\degree & 14.51m, 2.85\degree & 28.84m, 5.33\degree & 10.47m, {\bf 2.02\degree} & {\bf 6.39m}, 2.81\degree \\ \bottomrule \end{tabular} } \label{tab:results} \vspace{-.5em} \end{table*} \subsection{Implementation} The spatial dimensions of the self-attention module of RadarLoc are 8, 16, 32, 64, 128 
and 256 respectively. The size of a Cartesian radar image is set to $224 \times 224$ in order to utilize the DenseNet pre-trained on ImageNet. For all experiments, the number of training epochs is set to 100, and we tune all baseline methods for the best performance. The learning rate is set to $1 \times 10^{-4}$, and we set the initial values $\beta^{0} = 0.0$ and $\gamma^{0} = -3.0$. Furthermore, we retrieve a sequence of $N=4$ radar images each time. For all methods, the Adam~\cite{Kingma2015} optimizer is applied to the neural networks. \subsection{Baselines} We compare RadarLoc with both radar-based methods and state-of-the-art RGB and LiDAR techniques. We also adapt learning-based visual relocalization pipelines to use radar images as input. RadarSLAM~\cite{Hong2020RadarSLAM:Weathers} is a full radar-based graph SLAM system for reliable localization in large-scale scenarios. Masking by Moving~\cite{Barnes2019MaskingInformation} is the state-of-the-art deep learning-based radar odometry approach, and we adapt its feature extraction module for relocalization. PoseNet17~\cite{Kendall2017}, LSTM-Pose~\cite{Walch2017}, and AtLoc~\cite{Wang2020AtLoc:Localization} are state-of-the-art camera image-based deep relocalization methods; since a radar scan can be seen as a 2-D $224 \times 224$ grey-scale image, we examine the performance of these architectures on radar inputs. We apply these neural networks to radar images for adapted radar relocalization. AtLoc (RGB) is the state-of-the-art deep camera relocalization method, and PointLoc (LiDAR) is the state-of-the-art DNN-based LiDAR point cloud relocalization method. \subsection{Results} The experimental results are illustrated in Table~\ref{tab:results}, and the qualitative comparisons are depicted in Fig.~\ref{fig:paths_gt}. As Table~\ref{tab:results} shows, the proposed RadarLoc outperforms radar-based methods by a significant margin.
RadarSLAM can predict consecutive poses but accumulates drift with increasing distance, which leads to large localization errors as shown in Table~\ref{tab:results}. Note also that RadarSLAM is a continuous localization technique, while RadarLoc is single-shot. For adapted Masking by Moving, adapted PoseNet17, adapted AtLoc and adapted LSTM, the results indicate that our proposed neural network architecture is superior to previously proposed architectures for both deep radar odometry and deep camera relocalization. \begin{figure*} \begin{subfigure} \centering \includegraphics[width=.24\textwidth]{figures/RadarSLAM_10.png} \end{subfigure} \begin{subfigure} \centering \includegraphics[width=.24\textwidth]{figures/adapted_masking_10.png} \end{subfigure} \begin{subfigure} \centering \includegraphics[width=.24\textwidth]{figures/adapted_posenet_10.png} \end{subfigure} \begin{subfigure} \centering \includegraphics[width=.24\textwidth]{figures/adapted_atloc_10.png} \end{subfigure} \newline \begin{subfigure} \centering \includegraphics[width=.24\textwidth]{figures/adapted_lstm_10.png} \end{subfigure} \begin{subfigure} \centering \includegraphics[width=.24\textwidth]{figures/atloc_10.png} \end{subfigure} \begin{subfigure} \centering \includegraphics[width=.24\textwidth]{figures/pointloc_10.png} \end{subfigure} \begin{subfigure} \centering \includegraphics[width=.24\textwidth]{figures/radarloc_10.png} \end{subfigure} \caption{Visual comparisons of all localization approaches for Sequence 10. Poses were projected from 6-DoF to 3-DoF, except for RadarSLAM, which natively outputs 3-DoF poses. 
} \label{fig:paths_gt} \end{figure*} \begin{figure*} \centering \begin{tabular}{c c c c c} \includegraphics[width=.18\textwidth]{figures/cdf_seq_06_trans.pdf} & \includegraphics[width=.18\textwidth]{figures/cdf_seq_07_trans.pdf} & \includegraphics[width=.18\textwidth]{figures/cdf_seq_08_trans.pdf} & \includegraphics[width=.18\textwidth]{figures/cdf_seq_09_trans.pdf} & \includegraphics[width=.18\textwidth]{figures/cdf_seq_10_trans.pdf} \\ \includegraphics[width=.18\textwidth]{figures/cdf_seq_06_rot.pdf} & \includegraphics[width=.18\textwidth]{figures/cdf_seq_07_rot.pdf} & \includegraphics[width=.18\textwidth]{figures/cdf_seq_08_rot.pdf} & \includegraphics[width=.18\textwidth]{figures/cdf_seq_09_rot.pdf} & \includegraphics[width=.18\textwidth]{figures/cdf_seq_10_rot.pdf} \\ \small (a) Sequence 06 & \small (b) Sequence 07 & \small (c) Sequence 08 & \small (d) Sequence 09 & \small (e) Sequence 10 \end{tabular} \medskip \caption{Cumulative distributions of translation and rotation errors.} \label{fig:cdf_trans_rot} \end{figure*} We also compare with camera-based and LiDAR-based deep relocalization methods to examine the differences among sensing modalities for deep pose regression. The results in Table~\ref{tab:results} demonstrate that the radar-based deep relocalization method is considerably more accurate than the camera-based method. The probable reasons are that radar sensors provide a broader FoV of the scene and are less sensitive to environmental conditions than cameras. Interestingly, RadarLoc significantly outperforms PointLoc in translation while remaining comparable in rotation performance. This is most likely because LiDAR provides full 3-D metric depth, rather than the 2-D bird's eye view of the FMCW radar scanner, which aids full 6-DoF pose estimation. Fig.~\ref{fig:cdf_trans_rot} shows the cumulative distribution function (CDF) of both translation and orientation errors for the above-mentioned approaches.
RadarLoc consistently produces low errors in all sequences, closely followed by PointLoc. Pose accuracy for RadarSLAM and AtLoc, on the other hand, is highly dependent on the sequence being considered. Most noticeably, in Sequence 09, over 40\% of poses estimated with RadarSLAM showed errors beyond 40m. \subsection{Ablation Study} In order to study the impact of the different components of the proposed RadarLoc system, we conduct the ablation study shown in Table~\ref{tab:ablation}. For the ablation experiments, we keep all architecture designs the same as RadarLoc except that we respectively remove the self-attention module (w/o SA), use a UNet as the self-attention module (SA w/ UNet), use a ResNet as the radar encoder (ResNet), and drop the geometric constraints from the loss function (w/o GC). RadarLoc improves over w/o SA by 39.77\% in translation and 5.70\% in rotation, which shows that our self-attention module is very effective in improving radar localization performance. To delve into the reasons behind this improvement, we visualize the soft attention map as depicted in Fig.~\ref{fig:radarloc_architecture}. The self-attention module helps RadarLoc focus more on static objects like streets and buildings rather than feature-less regions of a radar image. Moreover, RadarLoc improves over SA w/ UNet by 12.71\% and over ResNet by 31.51\% in translation while remaining comparable in rotation (less than $0.1\degree$ apart). Note that translation accuracy is crucial in application scenarios like indoor parking lots or outdoor autonomous robots. Furthermore, RadarLoc improves over w/o GC by 36.63\% in translation and 5.39\% in rotation, which demonstrates that the geometric constraints can greatly improve the performance of radar relocalization.
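As a quick sanity check, the improvement percentages quoted above follow, up to small rounding differences (the paper presumably computes them from unrounded errors), from the mean translation errors reported in Table~\ref{tab:ablation}:

```python
def improvement(baseline, ours):
    # Relative improvement of `ours` over `baseline`, in percent.
    return (baseline - ours) / baseline * 100.0

radarloc = 6.39  # mean translation error (m) of the full model
for name, baseline in [("w/o SA", 10.61), ("SA w/ UNet", 7.32),
                       ("ResNet", 9.33), ("w/o GC", 10.08)]:
    # approximately 39.77, 12.71, 31.51 and 36.63 percent respectively
    print(name, round(improvement(baseline, radarloc), 2))
```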
Meanwhile, RadarLoc without geometric constraints (w/o GC) also outperforms all the baselines in Table~\ref{tab:results} except the rotation of PointLoc, which reveals the effectiveness of the proposed neural network architecture, since the only difference between RadarLoc and w/o GC is the loss function. \begin{table} \footnotesize \centering \caption{\small Results showing the mean translation error (m) and rotation error (\degree) of ablation studies on the Oxford Radar RobotCar Dataset.} \vspace{.5em} \resizebox{\linewidth}{!}{ \begin{tabular}{ccccc|c} \toprule Sequence & w/o SA & SA w/ UNet & ResNet & w/o GC & RadarLoc \\ \midrule Seq-06 & 12.56m, 3.89\degree & 9.96m, 3.62\degree & 11.51m, 3.62\degree & 11.13m, 3.80\degree & {\bf 8.43m}, {\bf3.44\degree} \\ Seq-07 & 10.26m, 3.16\degree & 6.74m, 2.76\degree & 8.09m, {\bf2.75\degree} & 8.04m, 2.95 \degree & {\bf 5.12m}, 2.87\degree \\ Seq-08 & 10.91m, 3.38\degree & {\bf6.46m}, 2.72\degree & 9.42m, {\bf2.60\degree} & 10.74m, 3.47\degree & 6.56m, 3.06\degree \\ Seq-09 & 10.36m, 2.82\degree & 7.77m, 3.13\degree & 9.73m, 2.86\degree & 10.34m, {\bf 2.77\degree} & {\bf 6.51m}, 2.91\degree \\ Seq-10 & 8.94m, 1.65\degree & 5.64m, {\bf1.61\degree} & 7.91m, 1.81\degree & 10.15m, 1.88\degree & {\bf 5.34m}, 1.78\degree \\ \midrule Average & 10.61m, 2.98\degree & 7.32m, 2.77\degree & 9.33m, {\bf2.73\degree} & 10.08m, 2.97\degree & {\bf 6.39m}, 2.81\degree \\ \bottomrule \end{tabular} } \label{tab:ablation} \vspace{-.5em} \end{table} \section{CONCLUSIONS} ~\label{conclusions} This paper proposes RadarLoc, a novel deep learning-based radar relocalization system that directly predicts 6-DoF global poses in an end-to-end fashion. The system can be deployed in urban areas like Oxford for localization, or as a component of an existing radar localization system to correct the accumulated drift of radar odometry.
One important direction for extending this work is to reduce prediction outliers, which significantly affect the performance of large-scale localization. Another direction is to integrate the deep radar relocalization system with deep radar odometry to provide a superior localization system in the real world. In the future, we plan to collect more radar sensory data to address the shortage of open radar datasets, and to test our methods on it. \addtolength{\textheight}{-12cm} \section*{ACKNOWLEDGMENT} This work was supported in part by the NIST grant 70NANB17H185 and UKRI EP/S030832/1 ACE-OPS. The authors would like to thank Dr. Sen Wang for the fruitful discussion and suggestions. \bibliographystyle{abbrv}
A numerical solution of a second order ODE

1. Mar 26, 2015

manifold

(Member warned about posting with no template)

Hello everyone; I'd like some help with this problem: I want to solve numerically the differential equation y''(t) + t*cos(y) = y by a second-order Taylor expansion. I first have to turn this into a first-order differential equation by taking the vector Z = [y' y]; then we have Z' = [y'' y'], which equals Z' = [y - t*cos(y) y']. I then put w = y', and after that I use the Taylor formula w(n+1) = w(n) + h*[y(n) - t(n)*cos(y(n))] + h^2/2*[........], and here I get confused. I must take the derivative of (y - t*cos(y)) with respect to t and then with respect to y... is it true that when I differentiate with respect to t I must also differentiate y (or w) with respect to t, or should I treat them as constants?
Then y(n+1) = y(n) + h*w(n) + h^2/2*[d(w)/dt]; here as well I don't know if that is true or not.

2. Mar 26, 2015

RUber

I think you only need to worry about differentiation with respect to t, right. Use the chain rule for $t\cos(y(t))$.
$Z = \begin{bmatrix} y \\ y' \end{bmatrix}, \quad Z' = \begin{bmatrix} y' \\ y'' \end{bmatrix}$
So what is Z''? (It should be solvable in terms of your other variables.)
$Z(n+1) = Z(n) + hZ'(n) + \frac{h^2}{2}Z''$
This should be the first-order system you solve.

3. Mar 26, 2015

manifold

Here's what I found.
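For what it's worth, the scheme RUber describes can be sketched in Python like this (my own variable names; it assumes the equation y''(t) + t*cos(y) = y from post #1, i.e. f = y'' = y - t*cos(y), and uses df/dt = w - cos(y) + t*sin(y)*w from the chain rule, since y depends on t):

```python
import math

def taylor2(y0, w0, t0, t1, n):
    # Second-order Taylor method for y'' = y - t*cos(y), with w = y'.
    h = (t1 - t0) / n
    t, y, w = t0, y0, w0
    for _ in range(n):
        f = y - t * math.cos(y)                       # f = y'' = dw/dt
        fdot = w - math.cos(y) + t * math.sin(y) * w  # df/dt via the chain rule
        y, w, t = (y + h * w + 0.5 * h * h * f,       # y(n+1)
                   w + h * f + 0.5 * h * h * fdot,    # w(n+1)
                   t + h)
    return y, w
```

So yes: when differentiating y - t*cos(y) with respect to t, y must be differentiated too (its derivative is w); treating it as a constant would drop the w and t*sin(y)*w terms and reduce the scheme to first order.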
Q: Garbage values --> ��� are coming in the description of swagger-ui (io.swagger) in spring-mvc. How to remove this?

I have configured io.swagger for my spring-mvc application. The problem is that garbage values --> ��� appear in the description part of the defined parameter, as shown in the screenshot below. I have defined this parameter using @ApiImplicitParam. Below is my controller:

    @RestController
    @RequestMapping("/test")
    @Api(value = "TestApi", description = "Description")
    public class TestController {

        @RequestMapping(value = "/apiTest", method = RequestMethod.GET)
        @ApiOperation(notes = "The Test API", value = "Response")
        @ApiImplicitParams(value = {
                @ApiImplicitParam(name = "employee", value = "Test description for question purposes",
                        dataType = "string", paramType = "query")
        })
        public String getTestData(HttpServletRequest request) {
            return "test";
        }
    }

The "value" string of @ApiImplicitParam is rendered as ���. Below is the screenshot.

A: Try enabling UTF-8 encoding by adding the lines below to the header of the HTML file:

    <meta http-equiv="X-UA-Compatible" content="IE=edge">
    <meta http-equiv="content-type" content="text/html; charset=utf-8"/>

https://github.com/swagger-api/swagger-ui/issues/3525

A: In

    @ApiImplicitParam(name = "employee", value = "Test description for question purposes",
            dataType = "string", paramType = "query")

you've made the mistake of writing dataType = "string"; instead it should be dataType = "String". The compiler doesn't understand the dataType as you've defined it in your code, which is why it prints garbage values instead of the actual values.
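Another knob worth checking (a hedged suggestion beyond the answers above; it assumes a classic `web.xml` deployment): register Spring's `CharacterEncodingFilter` so request and response bodies are processed as UTF-8 before they reach the controller:

```xml
<filter>
    <filter-name>encodingFilter</filter-name>
    <filter-class>org.springframework.web.filter.CharacterEncodingFilter</filter-class>
    <init-param>
        <param-name>encoding</param-name>
        <param-value>UTF-8</param-value>
    </init-param>
    <init-param>
        <param-name>forceEncoding</param-name>
        <param-value>true</param-value>
    </init-param>
</filter>
<filter-mapping>
    <filter-name>encodingFilter</filter-name>
    <url-pattern>/*</url-pattern>
</filter-mapping>
```

If the ��� characters persist, also check that the source files themselves are compiled as UTF-8 (e.g. `-encoding UTF-8` for javac), since annotation string values are baked in at compile time.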
Q: Broadcasting assignment slow Julia

I have something like this (simple example):

    using BenchmarkTools
    function assign()
        e = zeros(100, 90000)
        e2 = ones(100) * 0.16
        e[:, 100:end] .= e2[:]
    end
    @benchmark assign()

and need to do this for thousands of time steps. This gives

    BenchmarkTools.Trial:
      memory estimate:  68.67 MiB
      allocs estimate:  6
      --------------
      minimum time:     16.080 ms (0.00% GC)
      median time:      27.811 ms (0.00% GC)
      mean time:        31.822 ms (12.31% GC)
      maximum time:     43.439 ms (27.66% GC)
      --------------
      samples:          158
      evals/sample:     1

Is there a faster way of doing this?

A: First of all I will assume that you meant

    function assign1()
        e = zeros(100, 90000)
        e2 = ones(100) * 0.16
        e[:, 100:end] .= e2[:]
        return e # <- important!
    end

since otherwise you will not return the first 99 columns of e(!):

    julia> size(assign())
    (100, 89901)

Secondly, don't do this:

    e[:, 100:end] .= e2[:]

e2[:] makes a copy of e2 and assigns that, but why? Just assign e2 directly:

    e[:, 100:end] .= e2

Ok, but let's try a few different versions. Notice that there is no need to make e2 a vector; just assign a scalar:

    function assign2()
        e = zeros(100, 90000)
        e[:, 100:end] .= 0.16 # Just broadcast a scalar!
        return e
    end

    function assign3()
        e = fill(0.16, 100, 90000) # use fill instead of writing all those zeros that you will throw away
        e[:, 1:99] .= 0
        return e
    end

    function assign4()
        # only write exactly the values you need!
        e = Matrix{Float64}(undef, 100, 90000)
        e[:, 1:99] .= 0
        e[:, 100:end] .= 0.16
        return e
    end

Time to benchmark:

    julia> @btime assign1();
      14.550 ms (5 allocations: 68.67 MiB)

    julia> @btime assign2();
      14.481 ms (2 allocations: 68.66 MiB)

    julia> @btime assign3();
      9.636 ms (2 allocations: 68.66 MiB)

    julia> @btime assign4();
      10.062 ms (2 allocations: 68.66 MiB)

Versions 1 and 2 are equally fast, though you'll notice that there are 2 allocations instead of 5; of course, the big allocation dominates.

Versions 3 and 4 are faster, not dramatically so, but you can see that avoiding duplicate work, such as writing values into the matrix twice, pays off. Version 3 is the fastest, though not by much; this changes if the assignment is a bit more balanced, in which case version 4 is faster:

    function assign3_()
        e = fill(0.16, 100, 90000)
        e[:, 1:44999] .= 0
        return e
    end

    function assign4_()
        e = Matrix{Float64}(undef, 100, 90000)
        e[:, 1:44999] .= 0
        e[:, 45000:end] .= 0.16
        return e
    end

    julia> @btime assign3_();
      11.576 ms (2 allocations: 68.66 MiB)

    julia> @btime assign4_();
      8.658 ms (2 allocations: 68.66 MiB)

The lesson is to avoid doing unnecessary work.
package com.google.security.zynamics.binnavi.ZyGraph.Implementations;

import java.awt.Window;
import java.util.ArrayList;
import java.util.List;

import javax.swing.JDialog;
import javax.swing.JFrame;

import com.google.common.base.Preconditions;
import com.google.common.collect.Iterables;
import com.google.security.zynamics.binnavi.CUtilityFunctions;
import com.google.security.zynamics.binnavi.Exceptions.MaybeNullException;
import com.google.security.zynamics.binnavi.Gui.GraphWindows.CGraphModel;
import com.google.security.zynamics.binnavi.Gui.GraphWindows.CommentDialogs.InitialTab;
import com.google.security.zynamics.binnavi.Gui.GraphWindows.CommentDialogs.CodeNodeComments.DialogEditCodeNodeComment;
import com.google.security.zynamics.binnavi.Gui.GraphWindows.CommentDialogs.FunctionComments.CDialogEditFunctionNodeComment;
import com.google.security.zynamics.binnavi.Gui.GraphWindows.CommentDialogs.Interfaces.IComment;
import com.google.security.zynamics.binnavi.Gui.GraphWindows.CommentDialogs.TextNodeComments.DialogTextNodeComment;
import com.google.security.zynamics.binnavi.Gui.errordialog.NaviErrorDialog;
import com.google.security.zynamics.binnavi.disassembly.CNaviViewEdge;
import com.google.security.zynamics.binnavi.disassembly.CTextNode;
import com.google.security.zynamics.binnavi.disassembly.INaviCodeNode;
import com.google.security.zynamics.binnavi.disassembly.INaviEdge;
import com.google.security.zynamics.binnavi.disassembly.INaviFunction;
import com.google.security.zynamics.binnavi.disassembly.INaviFunctionNode;
import com.google.security.zynamics.binnavi.disassembly.INaviInstruction;
import com.google.security.zynamics.binnavi.disassembly.INaviViewNode;
import com.google.security.zynamics.binnavi.disassembly.views.INaviView;
import com.google.security.zynamics.reil.ReilGraph;
import com.google.security.zynamics.reil.translators.InternalTranslationException;
import com.google.security.zynamics.reil.translators.ReilTranslator;
import com.google.security.zynamics.reil.translators.StandardEnvironment;
import com.google.security.zynamics.zylib.gui.GuiHelper;
import com.google.security.zynamics.zylib.gui.zygraph.edges.EdgeType;

/**
 * Contains helper functions related to graph nodes.
 */
public final class CNodeFunctions {
  /**
   * You are not supposed to instantiate this class.
   */
  private CNodeFunctions() {
  }

  /**
   * Creates a comment editing dialog for a given view node.
   *
   * @param node Node for which the dialog is created.
   *
   * @return The comment editing dialog for the given node.
   */
  private static JDialog getCommentDialog(final CGraphModel model, final INaviViewNode node,
      final InitialTab initialTab) {
    if (node instanceof INaviCodeNode) {
      return new DialogEditCodeNodeComment(model, (INaviCodeNode) node, initialTab);
    } else if (node instanceof INaviFunctionNode) {
      return new CDialogEditFunctionNodeComment(model, (INaviFunctionNode) node, initialTab);
    } else {
      throw new IllegalStateException("IE02127: Unknown node type");
    }
  }

  /**
   * Transfers the comments from the old node to the two new nodes after node splitting.
   *
   * @param node The old node.
   * @param newNode1 The upper new node.
   * @param newNode2 The lower new node.
   */
  private static void transferLocalCodeNodeComments(final INaviCodeNode node,
      final INaviCodeNode newNode1, final INaviCodeNode newNode2) {
    newNode1.getComments().initializeLocalCodeNodeComment(
        node.getComments().getLocalCodeNodeComment());

    for (final INaviInstruction naviInstruction : node.getInstructions()) {
      if (newNode1.hasInstruction(naviInstruction)) {
        newNode1.getComments().initializeLocalInstructionComment(naviInstruction,
            node.getComments().getLocalInstructionComment(naviInstruction));
      } else {
        newNode2.getComments().initializeLocalInstructionComment(naviInstruction,
            node.getComments().getLocalInstructionComment(naviInstruction));
      }
    }
  }

  /**
   * Copy REIL code for node
   *
   * @param parent Parent used for dialogs
   * @param node The node for which to generate REIL code
   * @return The corresponding REIL graph
   */
  public static ReilGraph copyReilCode(final Window parent, final INaviCodeNode node) {
    final ReilTranslator<INaviInstruction> translator = new ReilTranslator<INaviInstruction>();

    try {
      return translator.translate(new StandardEnvironment(), node);
    } catch (final InternalTranslationException e) {
      CUtilityFunctions.logException(e);

      final String message = "E000XXX: " + "Could not show REIL code for node";
      final String description = CUtilityFunctions.createDescription(
          String.format("BinNavi could not show the REIL code for basic block at '%X'.",
              node.getAddress()),
          new String[] {"The instructions could not be converted to REIL code."},
          new String[] {"You can not fix this problem yourself. Please contact "
              + "the BinNavi support."});

      NaviErrorDialog.show(parent, message, description, e);
    }

    return null;
  }

  /**
   * Attaches a comment node to a given view node.
   *
   * @param parent Parent used for dialogs.
   * @param view The view where the new comment node is created.
   * @param node The node the new comment node is attached to.
   */
  public static void createCommentNode(final JFrame parent, final INaviView view,
      final INaviViewNode node) {
    Preconditions.checkNotNull(parent, "IE02128: Parent argument can not be null");
    Preconditions.checkNotNull(view, "IE02129: View argument can not be null");
    Preconditions.checkNotNull(node, "IE01726: Node argument can not be null");

    // TODO (timkornau): this is just transposed from the old code
    // needs to be checked to if we still want this to be like this.
    final CTextNode source = view.getContent().createTextNode(null);
    final CNaviViewEdge edge = view.getContent().createEdge(source, node, EdgeType.TEXTNODE_EDGE);

    final DialogTextNodeComment dlg = new DialogTextNodeComment(parent, source);
    GuiHelper.centerChildToParent(parent, dlg, true);
    dlg.setVisible(true);

    final List<IComment> newComment = dlg.getComment();
    if (newComment == null) {
      view.getContent().deleteEdge(edge);
      view.getContent().deleteNode(source);
    }
  }

  /**
   * Shows a dialog to edit the comments of a node.
   *
   * @param node The node whose comments are edited.
   * @param initialTab The initially visible tab.
   */
  public static void editNodeComments(final CGraphModel model, final INaviViewNode node,
      final InitialTab initialTab) {
    Preconditions.checkNotNull(node, "IE02131: Node argument can not be null");

    final JDialog dialog = getCommentDialog(model, node, initialTab);
    GuiHelper.centerChildToParent(model.getParent(), dialog, true);
    dialog.setVisible(true);
  }

  /**
   * Splits a node.
   *
   * @param view View the node belongs to.
   * @param originalNode Node to split.
   * @param instruction Instruction after which the node is split.
   */
  public static void splitAfter(final INaviView view, final INaviCodeNode originalNode,
      final INaviInstruction instruction) {
    final Iterable<INaviInstruction> oldInstructions = originalNode.getInstructions();

    if (instruction == Iterables.getLast(oldInstructions)) {
      // Splitting after the last instruction of a node does not make
      // sense at all.
      return;
    }

    // Step I: Find out what instructions belong to the new upper block and what
    // instructions belong to the new lower block.
    final List<INaviInstruction> upperInstructions = new ArrayList<INaviInstruction>();
    final List<INaviInstruction> lowerInstructions = new ArrayList<INaviInstruction>();

    List<INaviInstruction> currentInstructions = upperInstructions;
    for (final INaviInstruction oldInstruction : oldInstructions) {
      currentInstructions.add(oldInstruction);
      if (oldInstruction == instruction) {
        currentInstructions = lowerInstructions;
      }
    }

    // Step II: Create the two new code nodes.
    INaviFunction parentFunction = null;
    try {
      parentFunction = originalNode.getParentFunction();
    } catch (final MaybeNullException e) {
      // No parent function
    }

    final INaviCodeNode newNode1 =
        view.getContent().createCodeNode(parentFunction, upperInstructions);
    final INaviCodeNode newNode2 =
        view.getContent().createCodeNode(parentFunction, lowerInstructions);

    newNode1.setColor(originalNode.getColor());
    newNode1.setBorderColor(originalNode.getBorderColor());
    newNode2.setColor(originalNode.getColor());

    // Step III: Transfer node comments and instruction comments from the old node
    // to the new nodes.
    transferLocalCodeNodeComments(originalNode, newNode1, newNode2);

    // Step IV: Connect the two new nodes.
    view.getContent().createEdge(newNode1, newNode2, EdgeType.JUMP_UNCONDITIONAL);

    // Step V: Recreate the incoming and outgoing edges of the old node.
    for (final INaviEdge incomingEdge : originalNode.getIncomingEdges()) {
      view.getContent().createEdge(incomingEdge.getSource(), newNode1, incomingEdge.getType());
    }
    for (final INaviEdge outgoingEdge : originalNode.getOutgoingEdges()) {
      view.getContent().createEdge(newNode2, outgoingEdge.getTarget(), outgoingEdge.getType());
    }

    // Step VI: Get rid of the old node.
    view.getContent().deleteNode(originalNode);
  }
}
Source: https://www.oesf.org/forum/index.php?action=printpage;topic=19304.0

# OESF Portables Forum

## Model Specific Forums => Sharp Zaurus => Zaurus - pdaXrom => Topic started by: patzoul on May 10, 2006, 07:38:30 am

Title: Justreader Command Line
Post by: patzoul on May 10, 2006, 07:38:30 am

Does anyone know the syntax to open an HTML file with JustReader from the command line? I tried `justreader x.html`, but it doesn't open the file. It just launches JustReader, and an error message pops up saying it couldn't open the file.

Title: Justreader Command Line
Post by: grog on May 10, 2006, 07:37:30 pm

I can't install it to try out right now, but I'd suggest giving justreader the full path to the file, `justreader /mnt/card/myfiles/myfile.html` for example. See if that works. HTH

Title: Justreader Command Line
Post by: patzoul on May 11, 2006, 09:03:27 am

That didn't work. My objective is to find a command that works and set it in the ROX file manager to automatically open HTML files.

Title: Justreader Command Line
Post by: karlto on May 11, 2006, 04:11:23 pm

Most commands will give you some usage tips on the command line. Try `justreader -h` in a terminal to see if it will tell you...

Title: Justreader Command Line
Post by: grog on May 11, 2006, 06:19:50 pm

That didn't work (for me anyway), and since I couldn't find any other way, and never being one to shrink from a challenge, I came up with this 'simple' script. Hope it works for you, patzoul. Just stick it somewhere in your path.

```sh
#!/bin/sh
# justreader.sh - enables opening a specified file from the command line
# 2006-05-11 GROG!
#set -x   # TESTING

# Make sure justreader isn't already running.
ps | grep -E "\b[j]ustreader\b" | grep -v $$ && exit 0

if [ ! -s "$1" ]; then
    echo "Usage: ${0##*/} filename" >&2
    exit 1
fi

INFILE=$1
INDIR=${1%/*}
CONFFILE=~/Choices/common/JustReader.conf
TMPFILE=${CONFFILE%/*}/$$

trap "\rm -f $TMPFILE" 1 2 3 15

# Rewrite the File and Folderpath entries in JustReader's config file.
while read LINE; do
    set -- $LINE
    case $1 in
        File) echo "File = $INFILE";;
        Folderpath) echo "Folderpath = $INDIR";;
        *) echo "$LINE";;
    esac
done < $CONFFILE > $TMPFILE

\mv -f $TMPFILE $CONFFILE
exec justreader
```

HTH
---
layout: post
author: businessowl
title: "Aaron Plocharczyk's reflection post"
---

<strong>Lightbulb moment:</strong> <br/> When creating my blackjack game, I tried to pass a list of the cards in the user's hand to a function that would evaluate the value of each card and total it. But after I got the total, it seemed like the user never had any face cards. It turns out that this was because I was overwriting values in the supposedly "local" list that got passed into the function I was calling. I realized then that even though the list was passed as a parameter, its scope was not local to that function. It was a pointer to a list instead of a separate copy of the list, so I had been overwriting values on a larger scope than I had intended. In the future, when I pass lists to functions, I will clone the list before editing it so I don't run into unforeseen complications. Here is the beginning of that function where I was having the problem:

```
def hand_to_score(hand):
    my_hand = hand[:]  # this line clones the list and solves the issue
    for i in range(len(my_hand)):
        if my_hand[i] > 10:
            my_hand[i] = 10
```

<br/> <br/> <strong>Confusion:</strong> <br/> Editing global variables is different in Python than in other languages. I am not used to having to specifically say that a given variable is global at the beginning of a function, and I still gloss over this sometimes. Before I figured out that this was a requirement, I had no idea what was wrong with the code I was writing, but now I understand. It is a constant reminder to remember the scope of my variables. Here is the beginning of a function from the "Simon" game that I made for my clicky turtles post. It exemplifies this concept well:

```
sequence = []
guesses = []

def addGuess(newIndex):
    global guesses, sequence
    guesses.append(newIndex)
```

<br/> <br/> <strong>Still fuzzy:</strong> <br/> I have never bound a function to a keypress in Python, so I think that is something I could experiment with. I have understood the examples in class, but I'd like to try it myself. If I don't try it out in a class assignment, I will definitely give it a go on my own. I think it's something that can be more fully grasped after doing it myself.

<br/> <br/> <strong>Problem solving strategy:</strong> <br/> I am a huge fan of picking a reasonable direction <i>first</i> and <i>then</i> starting to code. I try to do this all the time so I don't get three hours into an assignment and realize that I don't like what I'm creating or that it isn't going to work well. On my clicky turtles assignment, I got a couple hundred lines into a failed, unexciting experiment before I called it quits and started over, and it was all because I was just messing around and never chose a solid direction to go in when I first started coding. The assignment went much better when I actually decided to make a concrete thing like the "Simon" game.
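The aliasing pitfall from the lightbulb moment above is easy to reproduce in isolation. Here is a minimal sketch (the function names and the toy hand are illustrative, not from the original game):

```python
def total_without_clone(hand):
    # Mutates the caller's list: 'hand' is a reference, not a copy.
    for i in range(len(hand)):
        if hand[i] > 10:
            hand[i] = 10  # face cards count as 10
    return sum(hand)

def total_with_clone(hand):
    my_hand = hand[:]  # shallow copy: the caller's list is untouched
    for i in range(len(my_hand)):
        if my_hand[i] > 10:
            my_hand[i] = 10
    return sum(my_hand)

cards = [4, 13, 11]               # 13 and 11 stand in for face cards
print(total_with_clone(cards))    # 24
print(cards)                      # [4, 13, 11] -- face cards still visible
print(total_without_clone(cards)) # 24, but...
print(cards)                      # [4, 10, 10] -- the original was overwritten
```

Both functions return the same score; the difference only shows up in the caller's list afterward, which is exactly why the bug was hard to spot.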
Let the Christmas crafting begin! Last year I started this Kaisercraft tree and wanted to make this wreath, but as we were renovating, packing and had the house on the market, not much was happening in the way of Christmas craft. I pulled all of these out yesterday afternoon. Every second Wednesday afternoon it's just me and my girl, so she made a start on painting her tree. I got these earlier in the year at a $2.00 shop, one for me and one for her; I haven't opened mine yet. While she waited for the paint to dry, we agreed she was old enough and capable enough to try stitching with a real needle. She has stitched in pre-formed holes with a blunt-end plastic needle before. I printed out a template from here, then adjusted the size to make it easy enough for her. She chose some colours; I cut it out, threaded the needle, tied the knots and started it off for her. I helped with the stuffing in an instructional manner and placed the ribbon. I see a lot of little felt stockings in my future. I have linked up on Our Creative Spaces and I'll keep you updated with our Christmas makings. I have also just set up a Facebook fan page, so you can like that if you are inclined to follow my crafty things and/or prefer Facebook :). Has anyone else started creating for Christmas yet?

I have started some Christmas crafting for gifts; we won't be decorating this year, so all those ideas will have to wait for next year.

How wonderful when they're at an age to craft like that for the first time. And to want to do it with you - priceless.

Oh cute, your little girl did a great job of sewing her stocking! I have started on some ornaments for swaps and presents for the kids, but lacking motivation to do it in a hurry... lol. Might get more done once the school holidays start.
In terms of market share, AT Internet is clearly lagging behind, losing to Facebook Domain Insights in all segments. AT Internet does not lead Facebook Domain Insights in any website category. Facebook Domain Insights is leading in most countries, including the United States, Japan, Russia, France and 216 other countries. AT Internet does not lead Facebook Domain Insights in any country.
The rubber industry of Colombia refers to the production of rubber in Colombia.

History

Rubber extraction dates back to pre-conquest times, when indigenous peoples used rubber as a waterproofing agent on some of their tools. Colombian companies began extracting black rubber in 1885, although some men had begun tapping the product even earlier. At the end of the 1890s, territorial disputes arose with Brazil and Peru over questions of Colombian sovereignty; the lack of real boundary demarcation and institutional presence allowed, by omission, the entry of other states into Colombia's national borders, and abuses were committed against the indigenous population. By 1900, Brazil was committing acts against Colombian sovereignty, while Peruvians were already killing indigenous people in the Loreto region. In the same period, after the collapse of the quinine trade, many ambitious men decided to devote themselves to rubber extraction. Many undertook the colonization of these regions, and in 1880 Julio César Arana arrived in Iquitos, from where he would build an enterprise for rubber extraction. At the end of the 1890s, the "Calderón" rubber company established itself in Putumayo, and in 1900 the enslavement of indigenous people began amid competition between Brazil and Colombia to develop extractive economies; later, after Julio César Arana's persecution of other rubber traders, the company was forced to sell out to him.

In 1896, Julio César Arana arrived in Putumayo, forming a partnership with Benjamín and Rafael Larrañaga (who had worked in the manufacture and sale of hats until that trade declined). After the Thousand Days' War, the colonization of remote areas resumed in Colombia, but Colombian merchants supplied themselves through Iquitos rather than trading with other Colombian cities, evidence of poor communication between Colombia's regions and a disconnect from national life. In 1904, Arana began buying up all the rubber companies in the region, first purchasing Larrañaga's lands, and by 1908 he had consolidated for himself all the surrounding territories; with the backing of the Peruvian government, he pursued his commercial expansion. Arana committed many acts of violence and genocide in the Amazon; it is striking that, during the sale between Larrañaga and Arana, the latter imprisoned the son of the former in Iquitos, discarding any close relationship with Larrañaga even though they had once been business partners. In London, Arana registered his company under the name The Peruvian Amazon Company so that, given the strong backing of English investors, the Colombian government would not protest the abuses committed. In 1907, two Iquitos newspapers denounced the poor working conditions imposed by the Casa Arana; "La Sensación" and "La Felpa" were suspended for these denunciations, but in 1909 a London newspaper condemned the genocide the Casa Arana was committing against the indigenous population. The matter reached high places (the English Parliament and the Foreign Office), and the English investors withdrew their support for the company.

Torture and genocide committed by the Casa Arana (whipping, mutilation of body parts, burning victims alive, drowning, and rape) were the reasons the English liquidated The Peruvian Amazon Company in 1911. By 1920, the profits of the Casa Arana had fallen sharply: the better-organized, capitalist development of Asian rubber plantations nearly ruined it financially, although paradoxically in 1927 the Casa Arana still counted 5,000 indigenous workers who, together with their families, amounted to 12,000 people, among them women and children who also served as labor. All of this would end after the Second World War with the introduction of synthetic rubber.

Types of rubber

There is a great variety of rubbers in the Amazon, each serving a particular purpose, although three main varieties are especially important: black rubber, white rubber, and the siringas or jebes. Regarding extraction, it should be noted that rubber was harvested in crews of 10 to 500 men (never fewer than 5).

Black rubber

Black rubber, or Castilla ulei, is produced in the Amazon piedmont and the plain near the Andes mountain range, in an arc spanning Colombia, Ecuador and Peru. Of the various types of rubber extracted in the Amazon, it was the most heavily exploited. This type of rubber is native to the Amazon region and also grew abundantly in Caquetá, although it never developed as much as in La Uribe. It is extracted by felling the tree, cutting into it, and waiting for all of its latex to drain into containers previously set out by the tappers. The latex is then collected and deposited in pans one meter square and 20 centimeters deep, where it is left to coagulate before being pressed and cut. The rubber in its final state is known as Castilloa. Leaving these trees standing meant making only incisions, through which fungi could later enter and kill the tree; moreover, an inexperienced tapper making bad cuts could also kill it.

White rubber

Also known as Sepium verum, it is especially abundant in temperate and cold regions. It is extracted by felling the tree, in the same way as black rubber.

Siringas or jebes

Also known as Hevea, this type of rubber tree is native to the Amazon and began to be tapped for its latex in the middle of the century. The bad cuts made during extraction destroyed a large share of these trees, whose rubber was of higher quality than that of the other gum-producing trees.

See also

Amazon rubber boom (Fiebre del caucho)

External links

LA CASA ARANA EN EL PUTUMAYO. Julio Cesar Arana. Colombia en la economía mundial
ARANA, REY DEL CAUCHO by Ovidio Lagos.
Melissa Woodhouse has a knack for fixing things, and for making good things even better. Perhaps it's her original training as an Occupational Therapist (OT), a field dedicated to returning patients to their activities of daily living after an injury or illness. It requires intelligence, insight, creativity, patience, ingenuity, persistence and, of course, a positive outlook. These traits have served Melissa well as an OT, and have been employed and enjoyed en route to her current RVNA role, Director of HomeCare and Client Services. I grew up in Brookfield, CT and moved back six years ago! Director of HomeCare and Client Services. I oversee RVNA's private caregiver company and centralized care coordination team (also known as 'intake'). Together we ensure that clients receive care from one of our many service lines at RVNA. I began working in home health as an Occupational Therapist, where I got to treat the most amazing patients in their homes. Getting to interact with clients, families, and caregivers. When did you originally decide to become an OT? After working many years as a Respiratory Therapist, I decided I wanted to pursue a career that furthered my ability to help change the lives of those who recently suffered an illness or injury. How does your prior experience inform your new position? It allows me to understand where clients are coming from, train caregivers how to manage clients, and ensure that our clients can age at home safely. Go hiking or camping with my two daughters and husband. We love the outdoors! Art — I love any and everything art related! Painting, drawing, or doing crafts with my kids. Buy a house on a lake in the Adirondacks! Getting to watch Nick Depuy and Barbie Tatum perform at the Autumn Dinner this year as a tribute to his late mom and my former patient. It's something I'll never forget. This event brings so many wonderful RVNA supporters together and is an amazing reminder of how many lives the RVNA impacts.
A large reformed west-facing villa with 4 bedrooms in the main house and an independent attached apartment. The property has an above-average-sized kitchen, en-suite bathrooms to all of the bedrooms, a large raised west-facing terrace with views across the valley, and a large split-level garden. 6 Bedrooms, 5 Bathrooms, Built 280 m², Garden/Plot 1000 m².
Source: https://tutorial.math.lamar.edu/Solutions/CalcIII/MultiVrbleFcns/Prob1.aspx

Paul's Online Notes

### Section 12.5 : Functions of Several Variables

1. Find the domain of the following function.

$f\left( {x,y} \right) = \sqrt {{x^2} - 2y}$

Solution

There really isn't all that much to this problem. We know that we can't have negative numbers under the square root and so whatever $$\left( {x,y} \right)$$ is, it will need to satisfy,

${x^2} - 2y \ge 0$

Let's do a little rewriting on this so we can attempt to sketch the domain.

${x^2} \ge 2y\hspace{0.5in} \Rightarrow \hspace{0.5in}y \le \frac{1}{2}{x^2}$

So, it looks like we need to be on or below the parabola above. The domain is illustrated by the green area and red line in the sketch below.
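The domain condition can be sanity-checked numerically; here is a quick Python sketch (the helper name `in_domain` is ours, not from the notes):

```python
def in_domain(x, y):
    # f(x, y) = sqrt(x^2 - 2y) is real exactly when x^2 - 2y >= 0,
    # i.e. when (x, y) lies on or below the parabola y = x^2 / 2.
    return x**2 - 2*y >= 0

assert in_domain(2, 1)       # below the parabola: 4 - 2 >= 0
assert in_domain(2, 2)       # on the parabola: 4 - 4 = 0
assert not in_domain(0, 1)   # above the parabola: -2 < 0
```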
{"url":"https:\/\/getreuer.info\/tutorials\/matlabimaging\/","text":"This tutorial discusses how to use Matlab for image processing. Some familiarity with Matlab is assumed (you should know how to use matrices and write an M-file).\n\nIt is helpful to have the Matlab Image Processing Toolbox, but fortunately, no toolboxes are needed for most operations. Commands requiring the Image Toolbox are indicated with [Image Toolbox].\n\n## Image representation\n\nThere are five types of images in Matlab.\n\n\u2022 Grayscale. A grayscale image $$M$$ pixels tall and $$N$$ pixels wide is represented as a matrix of double datatype of size $$M\\times N$$. Element values (e.g., MyImage(m, n)) denote the pixel grayscale intensities in [0,\u00a01] with 0=black and 1=white.\n\n\u2022 Truecolor RGB. A truecolor red-green-blue (RGB) image is represented as a three-dimensional $$M\\times N \\times 3$$ double matrix. Each pixel has red, green, blue components along the third dimension with values in [0,\u00a01], for example, the color components of pixel (m, n) are MyImage(m, n, 1) = red, MyImage(m, n, 2) = green, MyImage(m, n, 3) = blue.\n\n\u2022 Indexed. Indexed (palette based) images are represented with an index matrix of size $$M\\times N$$ and a colormap matrix of size $$K\\times 3$$. The colormap holds all colors used in the image and the index matrix represents the pixels by referring to colors in the colormap. For example, if the 22nd color is magenta MyColormap(22, :) = [1, 0, 1], then MyImage(m, n) = 22 is a magenta-colored pixel.\n\n\u2022 Binary. A binary image is represented by an $$M\\times N$$ logical matrix where pixel values are 1 (true) or 0 (false).\n\n\u2022 uint8. This type uses less memory and some operations compute faster than with double types. For simplicity, this tutorial does not discuss uint8 further.\n\nWhen possible, it is convenient to work with grayscale format for image processing. 
In cases requiring color, an RGB color image can be decomposed and handled as three separate grayscale images. Indexed images must be converted to grayscale or RGB for most operations.\n\nBelow are some common manipulations and conversions. A few commands require the Image Toolbox and are indicated with [Image Toolbox].\n\n% Display a grayscale or binary image\nimage(MyGray * 255);\naxis image\ncolormap(gray(256));\n\n% Display an RGB image (error if any element outside of [0, 1])\nimage(MyRGB);\naxis image\n% Display an RGB image (clips elements to [0, 1])\nimage(min(max(MyRGB, 0), 1));\naxis image\n\n% Display an indexed image\nimage(MyIndexed);\naxis image\ncolormap(MyColormap);\n\n% Separate the channels of an RGB image\nMyRed = MyRGB(:, :, 1);\nMyGreen = MyRGB(:, :, 2);\nMyBlue = MyRGB(:, :, 3);\n% Put the channels back together\nMyRGB = cat(3, MyRed, MyGreen, MyBlue);\n\n% Convert grayscale to RGB\nMyRGB = cat(3, MyGray, MyGray, MyGray);\n\n% Convert RGB to grayscale using simple average\nMyGray = mean(MyRGB, 3);\n% Convert RGB to grayscale using NTSC weighting [Image Toolbox]\nMyGray = rgb2gray(MyRGB);\n% Convert RGB to grayscale using NTSC weighting\nMyGray = 0.299*MyRGB(:, :, 1) + 0.587*MyRGB(:, :, 2) + 0.114*MyRGB(:, :, 3);\n\n% Convert indexed image to RGB [Image Toolbox]\nMyRGB = ind2rgb(MyIndexed, MyColormap);\n% Convert indexed image to RGB\nMyRGB = reshape(cat(3, MyColormap(MyIndexed, 1), MyColormap(MyIndexed, 2),...\nMyColormap(MyIndexed, 3)), size(MyIndexed, 1), size(MyIndexed, 2), 3);\n\n% Convert an RGB image to indexed using K colors [Image Toolbox]\n[MyIndexed, MyColormap] = rgb2ind(MyRGB, K);\n\n% Convert binary to grayscale\nMyGray = double(MyBinary);\n\n% Convert grayscale to binary\nMyBinary = (MyGray > 0.5);\n\n## Reading and writing image files\n\nMatlab can read and write images with the imread and imwrite commands. Although a fair number of file formats are supported, some are not. 
Use imformats to see what your installation supports:\n\n>> imformats\nEXT ISA READ WRITE ALPHA DESCRIPTION\n--------------------------------------------------------------------------\nbmp isbmp readbmp writebmp 0 Windows Bitmap (BMP)\ngif isgif readgif writegif 0 Graphics Interchange Format (GIF)\npbm ispbm readpnm writepnm 0 Portable Bitmap (PBM)\npcx ispcx readpcx writepcx 0 Windows Paintbrush (PCX)\npgm ispgm readpnm writepnm 0 Portable Graymap (PGM)\npng ispng readpng writepng 1 Portable Network Graphics (PNG)\npnm ispnm readpnm writepnm 0 Portable Any Map (PNM)\nppm isppm readpnm writepnm 0 Portable Pixmap (PPM)\n...\n\nWhen reading images, an inconvenience is that imread returns the image data in uint8 datatype, which must be converted to double and rescaled before use. So instead of calling imread directly, I use the following M-file function to read and convert images:\n\ngetimage.m\n\nfunction Img = getimage(Filename)\n%GETIMAGE Read an image given a filename\n% V = GETIMAGE(FILENAME) where FILENAME is an image file. The image is\n% returned either as an MxN double matrix for a grayscale image or as an\n% MxNx3 double matrix for a color image, with elements in [0,1].\n\n% Pascal Getreuer 2008-2009\n\nImg = double(Img);\n\nif ~isempty(Map) % Convert indexed image to RGB.\nImg = Img + 1;\nImg = reshape(cat(3, Map(Img, 1), Map(Img, 2), Map(Img, 3)), ...\nsize(Img, 1), size(Img, 2), 3);\nelse\nImg = Img \/ 255; % Rescale to [0, 1].\nend\n\nSave this code as getimage.m to use this M-function. If image baboon.png is in the current directory (or somewhere in the Matlab search path), you can read it with MyImage = getimage('baboon.png'). 
You can also use partial paths, for example if the image is in <current directory>\/images\/ with getimage('images\/baboon.png').\n\nTo write a grayscale or RGB image, use\n\nimwrite(MyImage, 'myimage.png');\n\nTake care that MyImage is a double matrix with elements in [0, 1]\u2014if improperly scaled, the saved file will probably be blank.\n\nWhen writing image files, I highly recommend using the PNG file format. This format is a reliable choice since it is lossless, supports truecolor RGB, and compresses pretty well. Use other formats with caution.\n\n## Basic operations\n\nBelow are some basic operations on a grayscale image u. Commands requiring the Image Toolbox are indicated with [Image Toolbox].\n\n% Statistics\nuMax = max(u(:)); % Compute the maximum value\nuMin = min(u(:)); % Minimum\nuPower = sum(u(:).^2); % Power\nuAvg = mean(u(:)); % Average\nuVar = var(u(:)); % Variance\nuMed = median(u(:)); % Median\nhist(u(:), linspace(0, 1, 256)); % Plot histogram\n\n% Basic manipulations\nuClip = min(max(u, 0), 1); % Clip elements to [0,1]\nuPad = u([1, 1:end, end], [1, 1:end, end]); % Pad with one-pixel margin\nuCrop = u(RowStart:RowEnd, ColStart:ColEnd); % Crop image\nuFlip = flipud(u); % Flip up\/down\nuFlip = fliplr(u); % Flip left\/right\nuResize = imresize(u, ScaleFactor); % Resize [Image Toolbox]\nuRot = rot90(u, k); % Rotate by k*90 degrees\nuRot = imrotate(u, Angle); % Rotate [Image Toolbox]\nuc = (u - min(u(:))\/(max(u(:)) - min(u(:))); % Stretch contrast to [0, 1]\nuq = round(u * (K - 1))\/(K - 1); % Quantize to K graylevels\n\n% Add white Gaussian noise of standard deviation sigma.\nuNoisy = u + randn(size(u)) * sigma;\n% Simluate salt and pepper noise.\nuNoisy = u; uNoisy(rand(size(u)) < p) = round(rand(size(u)));\n\n% Debugging\nany(~isfinite(u(:))) % Are any elements are infinite or NaN?\nnnz(u > 0.5) % Count elements satisfying some condition.\n\n(Note: For any array, the syntax u(:) means \u201cunroll u into a column vector.\u201d For example, if u 
= [1,5;0,2], then u(:) is [1;0;5;2].)\n\nFor example, image signal power is used in computing signal-to-noise ratio (SNR) and peak signal-to-noise ratio (PSNR). Given clean image uclean and noise-contaminated image u,\n\n% Compute SNR\nsnr = -10 * log10( sum((uclean(:) - u(:)).^2) \/ sum(uclean(:).^2) );\n\n% Compute PSNR (where the maximum possible value of uclean is 1)\npsnr = -10 * log10( mean((uclean(:) - u(:)).^2) );\n\nBe careful with norm: the behavior is norm(v) on vector v computes sqrt(sum(v.^2)), but norm(A) on matrix A computes the induced $$L^2$$ matrix norm,\n\nnorm(A) = sqrt(max(eig(A' * A))) % gaah!\n\nSo norm(A) is certainly not sqrt(sum(A(:).^2)). It is nevertheless an easy mistake to use norm(A) where it should have been norm(A(:)).\n\n## Linear filters\n\nLinear filtering is the cornerstone technique of signal processing. To briefly introduce, a linear filter is an operation where at every pixel $$x_{m,n}$$ of an image, a linear function is evaluated on the pixel and its neighbors to compute a new pixel value $$y_{m,n}$$.\n\nA linear filter in two dimensions has the general form $y_{m,n} = \\sum_j \\sum_k h_{j,k} x_{m-j,n-k}$ where $$x$$ is the input, $$y$$ is the output, and $$h$$ is the filter impulse response. Different choices of $$h$$ lead to filters that smooth, sharpen, and detect edges, to name a few applications. The right-hand side of the above equation is denoted concisely as $$h*x$$ and is called the \u201cconvolution of $$h$$ and $$x$$.\u201d\n\n### Spatial-domain filtering\n\nTwo-dimensional linear filtering is implemented in Matlab with conv2. Unfortunately, conv2 can only handle filtering near the image boundaries by zero-padding, which for images is usually inappropriate. To work around this, we can pad the input image and then use the 'valid' option when calling conv2. 
The following M-function does this.\n\nconv2padded.m\n\nfunction x = conv2padded(varargin)\n% Y = CONV2PADDED(X,H) applies 2D filter H to X with constant\n%\n% Y = CONV2PADDED(H1,H2,X) first applies 1D filter H1 along the rows\n% and then applies 1D filter H2 along the columns.\n%\n% If X is a 3D array, filtering is done separately on each channel.\n\n% Pascal Getreuer 2009\n\nif nargin == 2 % Function was called as \"conv2padded(x,h)\"\nx = varargin{1};\nh = varargin{2};\ntop = ceil(size(h, 1)\/2) - 1;\nbottom = floor(size(h, 1)\/2);\nleft = ceil(size(h, 2)\/2) - 1;\nright = floor(size(h, 2)\/2);\nelseif nargin == 3 % Function was called as \"conv2padded(h1,h2,x)\"\nh1 = varargin{1};\nh2 = varargin{2};\nx = varargin{3};\ntop = ceil(length(h1)\/2) - 1;\nbottom = floor(length(h1)\/2);\nleft = ceil(length(h2)\/2) - 1;\nright = floor(length(h2)\/2);\nelse\nerror('Wrong number of arguments.');\nend\n\nxPadded = x([ones(1, top), 1:size(x, 1), size(x, 1) + zeros(1, bottom)],...\n[ones(1, left), 1:size(x, 2), size(x, 2) + zeros(1, right)], :);\n\n% Since conv2 cannot handle 3D inputs, do filtering channel by channel\nfor p = 1:size(x, 3)\nif nargin == 2\nx(:, :, p) = conv2(xPadded(:, :, p), h, 'valid'); % Call conv2\nelse\nx(:, :, p) = conv2(h1, h2, xPadded(:, :, p), 'valid'); % Call conv2\nend\nend\n\nSave this code as conv2padded.m to use this M-function. 
Here are some examples:\n\n% A light smoothing filter\nh = [0, 1, 0;\n1, 4, 1;\n0, 1, 0];\nh = h \/ sum(h(:)); % Normalize the filter\n\n% A sharpening filter\nh = [0, -1, 0;\n-1, 8, -1;\n0, -1, 0];\nh = h \/ sum(h(:)); % Normalize the filter\n\n% Sobel edge detection\nhx = [1, 0, -1;\n2, 0, -2;\n1, 0, -1];\nhy = rot90(hx, -1);\nEdgeStrength = sqrt(u_x.^2 + u_y.^2);\n\n% Moving average\nWindowSize = 5;\nh1 = ones(WindowSize, 1) \/ WindowSize;\n\n% Gaussian filtering\nsigma = 3.5;\nFilterRadius = ceil(4 * sigma); % Truncate the Gaussian at 4*sigma\nh1 = h1 \/ sum(h1); % Normalize the filter\nuSmooth = conv2padded(h1, h1, u);\n\nA 2D filter h is said to be separable if it can be expressed as the outer product of two 1D filters h1 and h2, that is, h = h1(:) * h2(:)'. It is faster to pass h1 and h2 than h, as is done above for the moving average window and the Gaussian filter. In fact, the Sobel filters hx and hy in the above code are also separable\u2014what are h1 and h2?\n\n### Fourier-domain filtering\n\nFor large filters, spatial-domain filtering with conv2 is easily a computationally expensive operation. For a K\u00d7K filter on an M\u00d7N image, conv2 costs $$O(MNK^2)$$ additions and multiplications, or $$O(N^4)$$ supposing M, N, K are similar magnitudes.\n\nFor large filters, filtering in the Fourier domain is faster since the computational cost is reduced to $$O(N^2 \\log N)$$. 
Using the convolution-multiplication property of the Fourier transform, the convolution is equivalently computed by\n\n% Compute y = h*x with periodic boundary extension\n[k1, k2] = size(h);\nhpad([end + 1 - floor(k1\/2):end, 1:ceil(k1\/2)], ...\n[end + 1 - floor(k2\/2):end, 1:ceil(k2\/2)]) = h;\ny = real(ifft2(fft2(hpad) .* fft2(x)));\n\nThe result is equivalent to conv2padded(x,h) except near the boundary, where the above computation uses periodic boundary extension.\n\nFourier-based filtering can also be done with symmetric boundary extension by reflecting the input in each direction:\n\n% Compute y = h*x with symmetric boundary extension\nxSym = [x, fliplr(x)]; % Symmetrize horizontally\nxSym = [xSym; flipud(xSym)]; % Symmetrize vertically\n[k1,k2] = size(h);\nhpad([end + 1 - floor(k1\/2):end, 1:ceil(k1\/2)], ...\n[end + 1 - floor(k2\/2):end, 1:ceil(k2\/2)]) = h;\ny = y(1:size(y, 1)\/2, 1:size(y, 2)\/2);\n\n(Note: An even more efficient method is FFT overlap-add or overlap-save filtering. The Signal Processing Toolbox implements FFT overlap-add in one-dimension in fftfilt.)\n\n## Nonlinear filters\n\nA nonlinear filter is an operation where each filtered pixel $$y_{m,n}$$ is a nonlinear function of $$x_{m,n}$$ and its neighbors. Here we briefly discuss a few types of nonlinear filters.\n\n### Order statistic filters\n\nIf you have the Image Toolbox, order statistic filters can be performed with ordfilt2 and medfilt2. An order statistic filter sorts the pixel values over a neighborhood and selects the kth largest value. The min, max, and median filters are special cases.\n\n### Morphological filters\n\nIf you have the Image Toolbox, bwmorph implements various morphological operations on binary images, like erosion, dilation, open, close, and skeleton. There are also commands available for morphology on grayscale images: imerode, imdilate, and imtophat, among others.\n\nOccasionally we want to use a new filter that Matlab does not have. 
The code below is a simple generic template for implementing filters.

```matlab
[M, N] = size(x);
y = zeros(size(x));

r = 1; % Adjust for desired window size

for n = 1+r:N-r
    for m = 1+r:M-r
        % Extract a window of size (2r+1)x(2r+1) around (m,n)
        w = x(m + (-r:r), n + (-r:r));

        % ... write the filter here ...

        y(m, n) = result;
    end
end
```

(Note: A frequent misguided claim is that loops in Matlab are slow and should be avoided. This was once true, back in Matlab 5 and earlier, but loops in modern versions are reasonably fast.)

For example, the alpha-trimmed mean filter ignores the d/2 lowest and d/2 highest values in the window, and averages the remaining values. The filter is a balance between a median filter and a mean filter. The alpha-trimmed mean filter can be implemented in the template as

```matlab
% The alpha-trimmed mean filter
w = sort(w(:));
y(m, n) = mean(w(1 + d/2:end - d/2)); % Compute the result y(m,n)
```

As another example, the bilateral filter is

$$y_{m,n} = \frac{\sum_{j,k} h_{j,k,m,n}\, x_{m-j,n-k}}{\sum_{j,k} h_{j,k,m,n}}$$

where

$$h_{j,k,m,n} = \mathrm{e}^{-(j^2 + k^2)/(2\sigma_s^2)}\, \mathrm{e}^{-(x_{m-j,n-k} - x_{m,n})^2/(2\sigma_d^2)}$$

The bilateral filter can be implemented as

```matlab
% The bilateral filter
[k, j] = meshgrid(-r:r, -r:r);
h = exp( -(j.^2 + k.^2)/(2*sigma_s^2) ) .* ...
    exp( -(w - w(r+1, r+1)).^2/(2*sigma_d^2) );
y(m, n) = h(:)' * w(:) / sum(h(:));
```

If you don't have the Image Toolbox, the template can be used to write substitutes for missing filters, though they will not be as fast as the Image Toolbox implementations.

```matlab
% medfilt2
y(m, n) = median(w(:));

% ordfilt2
w = sort(w(:));
y(m, n) = w(k); % Select the kth largest element

% imdilate
% Define a structuring element as a (2r+1)x(2r+1) array
SE = [0, 1, 0; 1, 1, 1; 0, 1, 0];
y(m, n) = max(w(logical(SE)));
```

## Special topics

Up to this point, we have covered those operations that are generally useful in image processing.
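The window-template idea carries over directly to other languages. Here is a hypothetical NumPy rendering of the same template (function and variable names are ours, not from the tutorial), with the median and alpha-trimmed mean filters as example bodies; as in the Matlab version, border pixels are simply left at zero:

```python
import numpy as np

def window_filter(x, r, op):
    """Apply op to each (2r+1)x(2r+1) window of x.
    Border pixels are left at zero, as in the Matlab template above."""
    M, N = x.shape
    y = np.zeros_like(x)
    for m in range(r, M - r):
        for n in range(r, N - r):
            w = x[m - r:m + r + 1, n - r:n + r + 1]  # extract the window
            y[m, n] = op(w)                          # "write the filter here"
    return y

rng = np.random.default_rng(0)
x = rng.random((8, 8))

# medfilt2-style median filter
y_median = window_filter(x, 1, np.median)

# alpha-trimmed mean: drop the d/2 smallest and d/2 largest values
d = 4
y_trimmed = window_filter(
    x, 1, lambda w: np.sort(w.ravel())[d // 2:-(d // 2)].mean())
```

Passing the filter body as a callable keeps the loop structure identical across filters, just as the Matlab template isolates the per-window computation in one place.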
You must look beyond this tutorial for the details on the topic of your interest. A large amount of Matlab code and material is freely available online for filter design, wavelet and multiresolution techniques, PDE-based imaging, morphology, and wherever else researchers have made their code publicly available.
\section{INTRODUCTION TO LOW $R$ FLOW} \label{chap:Intro Low R} \subsection{Overview} In 1851, shortly after writing down the Navier-Stokes equations, Sir George Gabriel Stokes turned his attention to what modern researchers might whimsically refer to as \lq\lq the hydrogen atom'' of fluid mechanics: the determination of the drag on a sphere or an infinite cylinder moving at fixed speed in a highly viscous fluid \cite{Stokes51}. Just as the quantum theory of the hydrogen atom entailed enormous mathematical difficulties, ultimately leading to the development of quantum field theory, the problem posed by Stokes has turned out to be much harder than anyone could reasonably have expected: it took over 100 years to obtain a justifiable lowest order approximate solution, and that achievement required the invention of a new branch of applied mathematics, \emph{matched asymptotic expansions}. And just as the fine structure of the hydrogen atom's spectral lines eventually required renormalization theory to resolve the problems of \lq\lq infinities'' arising in the theory, so too, Stokes' problem is plagued by divergences that are, to a physicist, most naturally resolved by renormalization group theory \cite{Feynman48, Schwinger48, Tomonaga48, Stuckelberg1953, Gellmann1954, Wilson1971a, Wilson1971b, Wilson1983, CGO96}. In order to appreciate the fundamental difficulty of such problems, and to expose the similarity with familiar problems in quantum electrodynamics, we need to explain how perturbation theory is used in fluid dynamics. Every flow that is governed by the Navier-Stokes equations only (i.e. the transport of passive scalars, such as temperature, is not considered; there are no rotating frames of reference or other complications) is governed by a single dimensionless parameter, known as the Reynolds number, which we designate as $R$.
The Reynolds number is a dimensionless number made up of a characteristic length scale $L$, a characteristic velocity of the flow $U$, and the kinematic viscosity $\nu\equiv \eta/\rho$, where $\eta$ is the viscosity and $\rho$ is the density of the fluid. In the problems at hand, defined precisely below, the velocity scale is the input fluid velocity at infinity, $u_\infty$, and the length scale is the radius $a$ of the body immersed in the fluid. Then the Reynolds number is given by: \begin{equation} R\equiv \frac{u_\infty a}{\nu} \end{equation} The Reynolds number is frequently interpreted as the ratio of the inertial to viscous terms in the Navier-Stokes equations. For very viscous flows, $R\rightarrow 0$, and so we anticipate that a sensible way to proceed is perturbation theory in $R$ about the problem with infinite viscosity, i.e. $R=0$. In this respect, the unwary reader might regard this as an example very similar to quantum electrodynamics, where the small parameter is the fine structure constant. However, as we will see in detail below, there is a qualitative difference between a flow with $R=0$ and a flow with $R\rightarrow 0$. The fundamental reason is that by virtue of the circular or spherical geometry, the ratio of inertial to viscous forces in the Navier-Stokes equations is not constant in space: it varies as a function of radial distance $r$ from the body, scaling as \Order[R r/a]{}. Thus, when $R=0$, this term is everywhere zero; but for any non-zero $R$, as $r/a\rightarrow\infty$ the ratio of inertial to viscous forces becomes arbitrarily large. Thus, inertial forces cannot legitimately be regarded as negligible with respect to viscous forces everywhere: the basic premise of perturbation theory is not valid. Perturbation theory has to somehow express, or manifest, this fact, and it registers its objection by generating divergent terms in its expansion.
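The scaling \Order[R r/a]{} quoted above follows from a standard order-of-magnitude estimate. As a sketch (using the crude approximations that velocities are of order $u_\infty$ and gradients of order $1/r$ far from the body):

```latex
% Estimate of the inertial and viscous terms at distance r from the body,
% in dimensional variables, with |u| ~ u_infinity and gradients ~ 1/r:
\begin{align*}
  \left|(\vec{u}\cdot\nabla)\vec{u}\right| &\sim \frac{u_\infty^2}{r}, &
  \left|\nu\,\nabla^2\vec{u}\right| &\sim \frac{\nu\,u_\infty}{r^2}, \\[4pt]
  \frac{\text{inertial}}{\text{viscous}} &\sim
    \frac{u_\infty^2/r}{\nu\,u_\infty/r^2}
    \;=\; \frac{u_\infty\,r}{\nu}
    \;=\; R\,\frac{r}{a} .
\end{align*}
```

However crude, the estimate captures the essential point: no matter how small $R$ is, the neglected inertial terms dominate once $r/a \gtrsim 1/R$, and the divergent terms in the expansion are perturbation theory's record of this failure.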
These divergences are not physical, but are the perturbation theory's way of indicating that the zeroth order solution---the point about which perturbation theory proceeds---is not a correct starting point. The reader might wonder if the precise nature of the breakdown of perturbation theory, signified by the divergences, can be used to deduce what starting point would be a valid one. The answer is yes: this procedure is known as the perturbative renormalization group (RG), and we will devote a significant fraction of this article to expounding this strategy. As most readers will know, renormalization \cite{Feynman48, Schwinger48, Tomonaga48} and renormalization group \cite{Stuckelberg1953, Gellmann1954, Wilson1971a, Wilson1971b, Wilson1983} techniques in quantum field theories have been stunningly successful. In the most well-controlled case, that of quantum electrodynamics, the smallness of the fine structure constant allows agreement of perturbative calculations with high-precision measurements to 12 significant figures \cite{GABR06}. Do corresponding techniques work as well in low Reynolds number fluid dynamics, where one wishes to calculate and measure the drag $C_D$ (defined precisely below)? Note that in this case, it is the {\it functional form\/} in $R$ of the drag that is of interest, rather than the drag at {\it one\/} particular value of $R$, so the measure of success is rather more involved. Nevertheless, we will see that calculations can be compared with experiments, though these comparisons, too, require careful interpretation. Historically, a different strategy was followed, leading to a set of techniques known generically as singular perturbation theory, in particular encompassing boundary layer theory and the method of matched asymptotic expansions. We will explain these techniques, developed by mathematicians starting in the 1950's, and show their connection with renormalization group methods.
Although the calculational techniques of matched asymptotic expansions are widely regarded as resting on a systematically firm footing, their best results apply only to infinitesimally small Reynolds number. As shown in Figure \ref{bestinitialcomparision}, the large deviations between theory and experiment for $R \sim 0.5$ demonstrate the need for theoretical predictions which are more robust for small but non-infinitesimal Reynolds numbers. Ian Proudman, who, in a {\it tour de force\/}, helped obtain the first matched asymptotics result for a sphere \cite{Proudman57}, expressed it this way: ``It is therefore particularly disappointing that the numerical `convergence' of the expansion is so poor.'' \cite{Chester69} In spite of its failings, Proudman's solution from 1957 was the first mathematically rigorous one for flow past a sphere; all preceding theoretical efforts were worse. \begin{figure} \psfrag{Re}{$R = U_\infty a/\nu$} \psfrag{CD}{$C_D R/4\pi$} \psfrag{Jayaweera XXXXXX}{Jayaweera} \psfrag{Tritton}{Tritton} \psfrag{Kaplun}{Eqn. \ref{cylinder:KaplunCD}} \begin{center} \includegraphics[width=.494 \textwidth]{fig1a} \hfill \psfrag{Re}{$R = U_\infty a/\nu$} \psfrag{CD}{$C_D R/6\pi - 1$} \psfrag{Maxworthy XXXXXX}{Maxworthy} \psfrag{Le Clair}{Le Clair} \psfrag{Dennis}{Dennis} \psfrag{Chester and}{Eqn. \ref{matchedcd}} \includegraphics[width=.494 \textwidth]{fig1b} \end{center} \caption{(Color online) Comparing experiment with ``state of the art'' theoretical predictions for a cylinder \cite{Jay65,Tritton59} (left) and a sphere \cite{Den71,Maxworthy65,LC70} (right).} \label{bestinitialcomparision} \end{figure} Further complicating matters, the literature surrounding these problems is rife with ``paradoxes'', revisions, ad-hoc justifications, disagreements over attribution, mysterious factors of two, conflicting terminology, non-standard definitions, and language barriers.
Even a recent article attempting to resolve this quagmire \cite{Lindgren99} contains an inaccuracy regarding publication dates and scientific priority. This tortured history has left a wake of experiments and numerical calculations which are of widely varying quality, although they can appear to agree when not examined closely. For example, it turns out that the finite size of experimental systems has a dramatic effect on measurements and simulations, a problem not appreciated by early workers. Although in principle the matched asymptotics results can be systematically extended by working to higher order, this is not practical. The complexity of the governing equations prohibits further improvement. We will show here that techniques based on the renormalization group ameliorate some of the technical difficulties, and result in a more accurate drag coefficient at small but non-infinitesimal Reynolds numbers. Given the historical importance of the techniques developed to solve these problems, we hope that our solutions will be of general methodological interest. We anticipate that some of our readers will be fluid dynamicists interested in assessing the potential value of renormalization group techniques. We hope that this community will see that our use of the renormalization group is quite distinct from applications to stochastic problems, such as turbulence, and can serve a different purpose. The second group of readers may be physicists with a field theoretic background, encountering fluids problems for the first time, perhaps in unconventional settings, such as heavy ion collisions and QCD \cite{PhysRevLett.86.402, CMT05, Hen05, HG06, BRW06, CKM06} or 2D electron gases \cite{stone1990sdf, 1998PhyB..256...47E}. We hope that this review will expose them to the mathematical richness of even the simplest flow settings, and introduce a familiar conceptual tool in a non-traditional context. This review has two main purposes. 
The first purpose of the present article is to attempt a review and synthesis of the literature, sufficiently detailed that the subtle differences between different approaches are exposed, and can be evaluated by the reader. This is especially important, because this is one of those problems so detested by students, in which there are a myriad of ways to achieve the right answer for the wrong reasons. This article highlights all of these. A second purpose of this article is to review the use of renormalization group techniques in the context of singular perturbation theory, as applied to low Reynolds number flows. These techniques generate a non-trivial estimate for the functional form of $C_D (R)$ that can be sensibly used at moderate values of $R\sim \Order[1]{}$, not just infinitesimal values of $R$. As $R \rightarrow 0$, these new results reduce to those previously obtained by matched asymptotic expansions, in particular accounting for the nature of the mathematical singularities that must be assumed to be present for the asymptotic matching procedure to work. Renormalization group techniques were originally developed in the 1950's to extend and improve the perturbation theory for quantum electrodynamics. During the late 1960's and 1970's, renormalization group techniques famously found application in the problem of phase transitions \cite{Wilson1971a,Kadanoff1966,Widom1963}. During the 1990's, renormalization group techniques were developed for ordinary and partial differential equations, at first for the analysis of nonequilibrium (but deterministic) problems which exhibited anomalous scaling exponents \cite{GMO+90, CGO91} and subsequently for the related problem of travelling wave selection \cite{CGO+94, CGO94b, CG95}. The most recent significant development of the renormalization group---and the one that concerns us here---was the application to singular perturbation problems \cite{CGO94, CGO96}. 
The scope of \cite{CGO96} encompasses boundary layer theory, matched asymptotic expansions, multiple scales analysis, WKB theory, and reductive perturbation theory for spatially-extended dynamical systems. We do not review all these developments here, but focus only on the issues arising in the highly pathological singularities characteristic of low Reynolds number flows. For a pedagogical introduction to renormalization group techniques, we refer the reader to \cite{GoldenfeldBook}, in particular Chapter 10 which explains the connection between anomalous dimensions in field theory and similarity solutions of partial differential equations. We mention also that the RG techniques discussed here have also been the subject of rigorous analysis \cite{bricmont:rpd, bricmont1994rga, moise1998nsd, ziane2000crg, moise2000rgm, moise2001rgm, blomker2002sns, lan2004abc, wirosoetisno59fgw, petcu2004rgm} in other contexts of fluid dynamics, and have also found application in cavitation \cite{josserand1999cie} and cosmological fluid dynamics \cite{iguchi1998rga, nambu1999rlw, nambu2000rga, belinchon2002rga, nambu2002bra}. This review is organized as follows. After precisely posing the mathematical problem, we review all prior theoretical and experimental results. We identify the five calculations and measurements which are accurate enough, and which extend to sufficiently small Reynolds number, to be useful for evaluating theoretical predictions. Furthermore, we review the history of all theoretical contributions, and clearly present the methodologies and approximations behind previous solutions. In doing so, we eliminate prior confusion over chronology and attribution. We conclude by comparing the best experimental results with our new, RG-based, theoretical prediction. This exercise makes the shortcomings that Proudman lamented clear. 
\subsection{Mathematical formulation} The goal of these calculations is to determine the drag force exerted on a sphere and on an infinite cylinder by steady, incompressible, viscous flows. The actual physical problem concerns a body moving at constant velocity in an infinite fluid, where the fluid is at rest in the laboratory frame. In practice, it is more convenient to analyze the problem in an inertial frame moving with the fixed body, an approach which is entirely equivalent.\footnote{Nearly all workers, beginning with Stokes \cite{Stokes51}, use this approach, which Lindgren \cite{Lindgren99} refers to as the ``steady'' flow problem.} \begin{figure} \psfrag{UInfinity}{$\vec{u}_{\infty}$} \psfrag{a}{$a$} \psfrag{r}{$r$} \psfrag{q}{$\theta$} \begin{center} \includegraphics*[width=.35 \textwidth]{fig2} \caption{(Color online) Schematic for flow past a sphere or cylinder.} \label{fig:Flowschematic} \end{center} \end{figure} Flow past a sphere or cylinder is shown schematically in Figure \ref{fig:Flowschematic}. The body has a characteristic length scale, which we have chosen to be the radius ($a$), and it is immersed in a uniform stream of fluid. At large distances, the undisturbed fluid moves with velocity $\vec{u}_{\infty}$. The quantities shown in Table \ref{tab:gov1} characterize the problem. We assume incompressible flow, so $\rho =$ const. \begin{table} \begin{center} \begin{tabular}{|c|l|} \hline Quantity & Description\\ \hline $\vec{r}$ & Coordinate Vector \\ $\vec{u}(\vec{r})$ & Velocity Field \\ $\rho$ & Fluid Density \\ $p(\vec{r})$ & Pressure \\ $\nu$ & Kinematic Viscosity \\ $ a $ & Characteristic Length of Fixed Body \\ $\vec{u}_{\infty}$ & The Uniform Stream Velocity \\ \hline \end{tabular} \caption{Quantities needed to characterize low $R$ flow past a rigid body.} \label{tab:gov1} \end{center} \end{table} The continuity equation (Eqn. \ref{Continuity2}) and the time-independent Navier-Stokes equations (Eqn.
\ref{NS1}) govern steady-state, incompressible flow. \begin{equation} \label{Continuity2} \nabla\cdot\vec{u} = 0 \end{equation} \begin{equation} \label{NS1} (\vec{u} \cdot \nabla) \vec{u} = - \frac{\nabla p}{\rho} + \nu \nabla^{2}\vec{u} \end{equation} These equations must be solved subject to two boundary conditions, given in Eqn. \ref{boundaryconditions}. First, the \emph{no-slip} condition is imposed on the surface of the fixed body (Eqn. \ref{no-slip}). Second, the flow must be a uniform stream far from the body (Eqn. \ref{uniform}). To calculate the pressure, one also needs to specify an appropriate boundary condition (Eqn. \ref{pressurebc}), although as a matter of practice this is immaterial, as only pressure differences matter when calculating the drag coefficient. \begin{subequations} \label{boundaryconditions} \begin{eqnarray} \vec{u}(\vec{r}) &=& 0 \quad \vec{r} \in \textrm{\{Surface of Fixed Body\}} \label{no-slip} \\ \lim_{|\vec{r}| \to \infty} \vec{u}(\vec{r}) &=& \vec{u}_{\infty} \label{uniform} \\ \lim_{|\vec{r}| \to \infty} p(\vec{r}) &=& p_{\infty} \label{pressurebc} \end{eqnarray} \end{subequations} It is convenient to analyze the problem using non-dimensional quantities, which are defined in Table \ref{tab:gov2}. \begin{table} \begin{center} \begin{tabular}{|c|l|} \hline Dimensionless Quantity & Definition\\ \hline $\vec{r}^{*}$ & $\vec{r}/a$ \\ $\vec{u}^{*}(\vec{r})$ & $\vec{u}(\vec{r})/|\vec{u}_\infty|$ \\ $p^{*}(\vec{r})$ & $a\ p(\vec{r})/\rho \ \nu \ |\vec{u}_\infty|$\\ $\vec{\nabla}^{*}$ & $a \ \vec{\nabla}$ \\ \hline \end{tabular} \caption{Dimensionless variables.} \label{tab:gov2} \end{center} \end{table} When using dimensionless variables, the governing equations assume the forms given in Eqns. \ref{Continuity2ND} and \ref{NonDNS}, where we have introduced the \emph{Reynolds Number}, $ R = |\vec{u}_{\infty}| a/\nu$, and denoted scaled quantities by an asterisk.
\begin{equation} \label{Continuity2ND} \nabla^{*} \cdot \vec{u^{*}} = 0 \end{equation} \begin{equation} R (\vec{u}^{*} \cdot \nabla^{*})\vec{u}^{*} = - \nabla^{*} p^{*} + \nabla^{*2} \vec{u}^{*} \label{NonDNS} \end{equation} The boundary conditions also transform, and will later be given separately for both the sphere and the cylinder (Eqns. \ref{Sphere BC}, \ref{Cylinder BC}). Henceforth, the $^{*}$ will be omitted from our notation, except when dimensional quantities are explicitly introduced. It is useful to eliminate pressure from Eqn. \ref{NonDNS} by taking the curl and using the identity $\nabla \times \nabla p = 0$, leading to \begin{equation} \label{NS3} (\vec{u} \cdot \nabla)(\nabla \times \vec{u}) - ((\nabla \times \vec{u}) \cdot \nabla)\vec{u} = \frac{1}{R} \nabla^{2}(\nabla \times \vec{u}) \end{equation} \subsubsection{Flow past a cylinder} For the problem of the infinite cylinder, it is natural to use cylindrical coordinates, $\vec{r}=(r, \theta, z)$. We examine the problem where the uniform flow is in the $\hat{x}$ direction (see Figure \ref{fig:Flowschematic}). We will look for two-dimensional solutions, which satisfy $\partial_z{\vec{u}} = 0$. Since the problem is two dimensional, one may reduce the set of governing equations (Eqns. \ref{Continuity2ND} and \ref{NonDNS}) to a single equation involving a scalar quantity, the \emph{Lagrangian} stream function, usually denoted $\psi(r,\theta)$. It is defined by Eqn. \ref{cylinderstreamfunction}.\footnote{Although many authors prefer to solve the vector equations, we follow Proudman and Pearson \cite{Proudman57}.} \begin{equation} u_r =\frac{1}{r} \frac{\partial \psi}{\partial \theta} \qquad u_\theta = -\frac{\partial \psi}{\partial r} \qquad u_z = 0 \label{cylinderstreamfunction} \end{equation} This definition guarantees that equation (\ref{Continuity2ND}) will be satisfied \cite{Goldstein29}. Substituting the stream function into equation (\ref{NS3}), one obtains the governing equation (Eqn. \ref{CylinderEqn}).
Here we follow the compact notation of Proudman and Pearson \cite{Proudman57,Hinch91}. \begin{equation} \label{CylinderEqn} \nabla_r^4 \psi(r,\theta) = - \frac{R}{r} \frac{\partial (\psi, \nabla_r^2 \psi)}{\partial(r,\theta)} \end{equation} where \begin{displaymath} \nabla_r^2 \equiv \frac{\partial^2}{\partial r^2} + \frac{1}{r} \frac{\partial}{\partial r} + \frac{1}{r^2} \frac{\partial^2}{\partial \theta^2} \end{displaymath} The boundary conditions which fix $\vec{u}(\vec{r})$ (Eqns. \ref{no-slip}, \ref{uniform}) also determine $\psi(r,\theta)$ up to an irrelevant additive constant.\footnote{The constant is irrelevant because it vanishes when the derivatives are taken in Eqn. \ref{cylinderstreamfunction}.} Eqn. \ref{Cylinder BC} gives the boundary conditions expressed in terms of stream functions. \begin{subequations} \label{Cylinder BC} \begin{eqnarray} \psi(r = 1, \theta) &=& 0 \\ \frac{\partial \psi(r,\theta)}{\partial r} \bigg|_{r=1} &=& 0 \\ \lim_{r \to \infty} \frac{\psi(r,\theta)}{r} &=& \sin(\theta) \end{eqnarray} \end{subequations} To calculate the drag force on a cylinder, we must first solve Equation \ref{CylinderEqn} subject to the boundary conditions given in Eqn. \ref{Cylinder BC}. \subsubsection{Flow past a sphere} To study flow past a sphere, we use spherical coordinates: $\vec{r}=(r, \theta, \phi)$. We take the uniform flow to be in the $\hat{z}$ direction. Consequently, we are interested in solutions which are independent of $\phi$, because there can be no circulation about the $\hat{z}$ axis. Since the problem has axial symmetry, one can use the \emph{Stokes' stream function} (or Stokes' current function) to reduce Eqns. \ref{Continuity2ND} and \ref{NonDNS} to a single equation.
This stream function is defined through the following relations: \begin{equation} v_r = \frac{1}{r^2 \sin{\theta}}\psi_{\theta} \qquad v_{\theta} = -\frac{1}{r \sin{\theta}} \psi_{r} \qquad v_{\phi} = 0 \label{spherestreamfunction} \end{equation} These definitions guarantee that Eqn. \ref{Continuity2ND} will be satisfied. Substituting Eqn. \ref{spherestreamfunction} into Eqn. \ref{NS3}, one obtains the governing equation for $\psi(r,\theta)$ \cite{Proudman57}: \begin{equation} \label{SphereEqn} D^4 \psi = R \left( \frac{1}{r^2} \frac{\partial(\psi, D^2 \psi)}{\partial(r,\mu)} + \frac{2}{r^2} D^2 \psi L \psi \right) \end{equation} In this equation, \begin{subequations} \nonumber \begin{eqnarray} \nonumber \mu &\equiv& \cos{\theta} \\ \nonumber D^2 &\equiv& \frac{\partial^2}{\partial r^2} + \frac{1-\mu^2}{r^2} \frac{\partial^2}{\partial \mu^2} \\ \nonumber L &\equiv& \frac{\mu}{1-\mu^2}\frac{\partial}{\partial r} + \frac{1}{r} \frac{\partial}{\partial \mu} \end{eqnarray} \end{subequations} Here we follow the notation of Proudman and Pearson \cite{Proudman57}. Other authors, such as Van Dyke \cite{VanDyke1975} and Hinch \cite{Hinch91}, write their stream function equations in an equivalent, albeit less compact, notation. As in the case of the cylinder, the boundary conditions which fix $\vec{u}(\vec{r})$ (Eqns. \ref{no-slip}, \ref{uniform}) determine $\psi$ up to an irrelevant additive constant. The transformed boundary conditions are given by Eqn. \ref{Sphere BC}. \begin{subequations} \label{Sphere BC} \begin{eqnarray} \psi(r = 1, \mu) &=& 0 \\ \frac{\partial \psi(r,\mu)}{\partial r} \bigg|_{r=1} &=& 0 \\ \lim_{r \to \infty} \frac{\psi(r,\mu)}{r^2} &=& \frac{1}{2} \left( 1 - \mu^2 \right) \end{eqnarray} \end{subequations} In this paper, we obtain approximate solutions for Eqn. \ref{CylinderEqn} (subject to Eqn. \ref{Cylinder BC}), and Eqn. \ref{SphereEqn} (subject to Eqn. \ref{Sphere BC}). 
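As a concrete check on these boundary conditions (our own illustration, not a calculation from the text), the classical Stokes stream function for the sphere in these scaled variables, $\psi_0 = \frac{1}{4}\left(2r^2 - 3r + r^{-1}\right)(1-\mu^2)$, satisfies Eqn. \ref{Sphere BC} and solves Eqn. \ref{SphereEqn} at $R = 0$, where the right-hand side vanishes and the equation reduces to $D^4\psi = 0$. A short SymPy sketch verifies this:

```python
import sympy as sp

r, mu = sp.symbols('r mu', positive=True)

# Classical Stokes stream function for the sphere (a = 1, |u_infinity| = 1)
psi = sp.Rational(1, 4) * (2*r**2 - 3*r + 1/r) * (1 - mu**2)

# The operator D^2 defined in the text
D2 = lambda f: sp.diff(f, r, 2) + (1 - mu**2)/r**2 * sp.diff(f, mu, 2)

# No-slip conditions at the sphere surface r = 1
assert sp.simplify(psi.subs(r, 1)) == 0
assert sp.simplify(sp.diff(psi, r).subs(r, 1)) == 0

# Uniform stream at infinity: psi/r^2 -> (1 - mu^2)/2
assert sp.simplify(sp.limit(psi/r**2, r, sp.oo) - (1 - mu**2)/2) == 0

# At R = 0 the governing equation reduces to D^4 psi = 0
assert sp.simplify(D2(D2(psi))) == 0
```

The difficulty discussed in the Overview is precisely that this $R = 0$ solution, despite satisfying all the boundary conditions, is not a uniformly valid starting point for perturbation theory in $R$.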
These solutions are then used to calculate drag coefficients, which we compare to experimental results. \subsubsection{Calculating the drag coefficient} Once the Navier-Stokes equations have been solved, and the stream function is known, calculating the drag coefficient, $C_D$, is a mechanical procedure. We follow the methodology described by Chester and Breach \cite{Chester69}. This analysis is consistent with the work done by Kaplun \cite{Kaplun57c} and Proudman \cite{Proudman57}, although these authors do not detail their calculations. This methodology is significantly different from that employed by other workers, such as Tomotika \cite{Tomotika50, Oseen10}. Tomotika calculates $C_D$ approximately, based on a linearized calculation of the pressure. Although these approximations are consistent with the approximations inherent in their solution of the Navier-Stokes equations, they are inadequate for the purposes of obtaining a systematic approximation to any desired order of accuracy. Calculating the drag on the body begins by determining the force exerted on the body by the moving fluid. Using dimensional variables, the force per unit area is given by \cite{LandauLifschitz}: \begin{equation} P_i = -\sigma_{ik} n_k \label{forceperunitarea} \end{equation} Here $\sigma_{ik}$ is the stress tensor, and $\vec{n}$ is a unit vector normal to the surface. For an incompressible fluid, the stress tensor takes the form \cite{LandauLifschitz}: \begin{equation} \sigma_{ik} = -p\delta_{ik} + \eta\left(\frac{\partial v_{i}}{\partial{x_k}}+\frac{\partial v_k}{\partial x_i}\right) \label{Stress2} \end{equation} $\eta$ is the \emph{dynamic viscosity}, related to the kinematic viscosity by $\eta = \nu \rho$. The total force is found by integrating Eqn. \ref{forceperunitarea} over the surface of the solid body. We now use these relations to derive explicit formulae, expressed in terms of stream functions, for both the sphere and the cylinder. {\it a.
Cylinder} In the case of the cylinder, the components of the velocity field are given through the definition of the Lagrangian stream function (Eqn. \ref{cylinderstreamfunction}). Symmetry requires that the net force on the cylinder must be in the same direction as the uniform stream. Because the uniform stream is in the $\hat{x}$ direction, it follows from Eqns. \ref{forceperunitarea} and \ref{Stress2} that the force\footnote{The form of $\sigma_{ik}$ in cylindrical coordinates is given in Landau \cite{LandauLifschitz}.} on the cylinder per unit length is given by: \begin{eqnarray} \label{cylinderforceeqn} F_{\hat{x}} &=& \oint (\sigma_{rr} \cos{\theta} - \sigma_{r\theta} \sin{\theta}) \text{d}s \\ \nonumber &=& \left[ \int_0^{2\pi} \left( \sigma_{rr} \cos{\theta} - \sigma_{r\theta} \sin{\theta} \right) r\ \text{d}\theta \right]_{r=a} \\ \nonumber &=& \bigg[ \int_0^{2\pi} \bigg( \left(-p+2\eta \frac{\partial v_r}{\partial r} \right) \cos{\theta} - \eta \left(\frac{1}{r} \frac{\partial v_r}{\partial \theta} + \frac{\partial v_\theta}{\partial r} - \frac{v_\theta}{r} \right) \sin{\theta} \bigg) r\ \text{d}\theta\bigg]_{r=a} \end{eqnarray} The drag coefficient for an infinite cylinder is \emph{defined} as $C_D = F_{\text{Net}}/\rho |\vec{u}_\infty|^2 a$. Note that authors (e.g., \cite{Kaplun67, Tritton59}) who define the Reynolds number based on diameter nonetheless use the same definition of $C_D$, which is based on the radius. For this problem, $F_{\text{Net}} = F_{\hat{x}}$, as given by Eqn. \ref{cylinderforceeqn}. Introducing the dimensionless variables defined in Table \ref{tab:gov2} into Eqn. \ref{cylinderforceeqn}, we obtain Eqn. \ref{finalcylforce}. Combining this with the definition of $C_D$, we obtain Eqn. \ref{CDcylinder}.
\begin{equation} \label{finalcylforce} F_{\hat{x}} = \frac{\rho |\vec{u}_\infty|^2 a}{R} \bigg[ \int_0^{2\pi} \bigg( \left(-p(r,\theta)+2 \frac{\partial u_r}{\partial r} \right) \cos{\theta} - \left(\frac{1}{r} \frac{\partial u_r}{\partial \theta} + \frac{\partial u_\theta}{\partial r} - \frac{u_\theta}{r} \right) \sin{\theta} \bigg) r\ \text{d}\theta\bigg]_{r=1} \end{equation} \begin{equation} \label{CDcylinder} C_D = \frac{1}{R} \bigg[ \int_0^{2\pi} \bigg( \left(-p(r,\theta)+2 \frac{\partial u_r}{\partial r} \right) \cos{\theta} - \left(\frac{1}{r} \frac{\partial u_r}{\partial \theta} + \frac{\partial u_\theta}{\partial r} - \frac{u_\theta}{r} \right) \sin{\theta} \bigg) r\ \text{d}\theta \bigg]_{r=1} \end{equation} To evaluate this expression, we must first derive $p(r,\theta)$ from the stream function. The pressure can be determined to within an irrelevant additive constant by integrating the $\hat{\theta}$ component of the Navier-Stokes equations (Eqn. \ref{NonDNS}) \cite{Chester69, LandauLifschitz}. The constant is irrelevant because, in Eqn. \ref{CDcylinder}, $\int_0^{2\pi} C \cos{\theta} \text{d} \theta=0$. Note that all gradient terms involving $z$ vanish by construction. \begin{equation} \label{cylpressure} p(r,\theta) = r \ \int \left[ -R \left( \left(\vec{u} \cdot \nabla \right) u_\theta + \frac{u_r u_\theta}{r} \right) + \nabla^2 u_\theta + \frac{2}{r^2} \frac{\partial u_r}{\partial \theta} - \frac{u_\theta}{r^2} \right] \text{d} \theta \end{equation} Given a solution for the stream function $\psi$, the set of dimensionless Eqns. \ref{cylinderstreamfunction}, \ref{CDcylinder}, and \ref{cylpressure} uniquely determines $C_D$ for a cylinder. However, because the velocity field satisfies no-slip boundary conditions, these general formulae often simplify considerably. For instance, consider the class of stream functions which meets the boundary conditions (Eqn.
\ref{Cylinder BC}) and can be expressed as a Fourier sine series: $\psi(r,\theta) = \sum_{n=1}^\infty f_n(r) \sin{n\theta}$. Using the boundary conditions it can be shown that, for these stream functions, Eqn. \ref{CDcylinder} reduces to the simple expression given by Eqn. \ref{cylinder:convenientdrag}. \begin{equation} \label{cylinder:convenientdrag} C_D = -\frac{\pi}{R} \left(\frac{\textrm{d}^3}{\textrm{d}r^3} f_1(r)\right)_{r=1} \end{equation} {\it b. Sphere} The procedure for calculating $C_D$ in the case of the sphere is nearly identical to that for the cylinder. The components of the velocity field are given through the definition of the Stokes' stream function (Eqn. \ref{spherestreamfunction}). As before, symmetry requires that any net force on the sphere must be in the direction of the uniform stream, in this case the $\hat{z}$ direction. From Eqn. \ref{forceperunitarea}, the net force on the sphere is given by Eqn. \ref{sphereforceeqn}. \begin{eqnarray} \label{sphereforceeqn} F_{\hat{z}} &=& \oint (\sigma_{rr} \cos{\theta} - \sigma_{r\theta} \sin{\theta}) \text{d}s \\ \nonumber &=& 2 \pi \left[ \int_0^{\pi} \left( \sigma_{rr} \cos{\theta} - \sigma_{r\theta} \sin{\theta} \right) r^2 \sin{\theta} \ \text{d}\theta \right]_{r=a} \end{eqnarray} For the sphere, the drag coefficient is \emph{defined} as $C_D \equiv F_{\text{Net}}/\rho |\vec{u}_\infty|^2 a^2$. Often the drag coefficient is given in terms of the \emph{Stokes' Drag}, $D_S \equiv 6 \pi \rho |\vec{u}_\infty| a \nu = 6 \pi \rho |\vec{u}_\infty|^2 a^2/R$. In these terms, $C_D = 6\pi F_{\text{Net}}/(D_S R)$. If $F_\text{Net} = D_S$, $C_D = 6 \pi/R$, the famous result of Stokes \cite{Stokes51}. Not all authors follow Stokes' original definition of $C_D$. For instance, S. Goldstein \cite{Goldstein29,Goldstein38} and H. Liebster \cite{Liebster26,Liebster24} define $C_D$ using a factor based on cross-sectional areas: $C_D^{\textrm{Goldstein}} = C_D 2/\pi$.
These authors also define $R$ using the diameter of the sphere rather than the radius. S. Dennis defines $C_D$ similarly to Goldstein, but without the factor of two: $C_D^{\textrm{Dennis}} = C_D/\pi$ \cite{Den71}. Using the form of Eqn. \ref{Stress2} given in Landau \cite{LandauLifschitz} and introducing the dimensionless variables defined in Table \ref{tab:gov2} into Eqn. \ref{sphereforceeqn}, we obtain Eqn. \ref{finalsphforce}. Combining this with the definition of $C_D$, we obtain Eqn. \ref{CDsphere}. \begin{equation} \label{finalsphforce} F_{\hat{z}} = \frac{D_S}{3} \bigg[ \int_0^{\pi} \bigg( \left(-p(r,\theta)+2 \frac{\partial u_r}{\partial r} \right) \cos{\theta} - \left(\frac{1}{r} \frac{\partial u_r}{\partial \theta} + \frac{\partial u_\theta}{\partial r} - \frac{u_\theta}{r} \right) \sin{\theta} \bigg) r^2 \sin{\theta} \ \text{d}\theta \bigg]_{r=1} \end{equation} \begin{equation} \label{CDsphere} C_D = \frac{2 \pi}{R} \bigg[ \int_0^{\pi} \bigg( \left(-p(r,\theta)+2 \frac{\partial u_r}{\partial r} \right) \cos{\theta} - \left(\frac{1}{r} \frac{\partial u_r}{\partial \theta} + \frac{\partial u_\theta}{\partial r} - \frac{u_\theta}{r} \right) \sin{\theta} \bigg) r^2 \sin{\theta} \ \text{d}\theta \bigg]_{r=1} \end{equation} As with the cylinder, the pressure can be determined to within an irrelevant additive constant by integrating the $\hat{\theta}$ component of the Navier-Stokes equations (Eqn. \ref{NonDNS}) \cite{Chester69, LandauLifschitz}. Note that gradient terms involving $\phi$ must vanish. \begin{equation} \label{sphpressure} p(r,\theta) = r \ \int \left[ -R \left( \left(\vec{u} \cdot \nabla \right) u_\theta + \frac{u_r u_\theta}{r} \right) + \nabla^2 u_\theta + \frac{2}{r^2} \frac{\partial u_r}{\partial \theta} - \frac{u_\theta}{r^2 \sin^2{\theta}} \right] \text{d} \theta \end{equation} Given a solution for the stream function $\psi$, the set of dimensionless Eqns.
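Because these conventions differ by both a normalization factor and the choice of length scale in $R$, it is easy to compare results from different authors incorrectly. The following sketch collects the conversions stated above (function and variable names are ours, purely illustrative):

```python
from math import pi, isclose

def cd_stokes(F_net, rho, u_inf, a):
    """Drag coefficient in Stokes' convention: C_D = F_Net / (rho |u_inf|^2 a^2)."""
    return F_net / (rho * u_inf**2 * a**2)

def cd_goldstein(CD):
    """Goldstein/Liebster convention, normalized by cross-sectional area: C_D * 2/pi."""
    return CD * 2 / pi

def cd_dennis(CD):
    """Dennis' convention, Goldstein's without the factor of two: C_D / pi."""
    return CD / pi

# Sanity check: if F_Net equals the Stokes drag D_S = 6 pi rho |u_inf|^2 a^2 / R,
# then C_D = 6 pi / R in Stokes' convention.
rho = u_inf = a = 1.0
R = 0.5
D_S = 6 * pi * rho * u_inf**2 * a**2 / R
assert isclose(cd_stokes(D_S, rho, u_inf, a), 6 * pi / R)
```

Note that converting the Reynolds number itself (radius-based versus diameter-based) is a separate step from converting $C_D$.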
\ref{spherestreamfunction}, \ref{CDsphere}, and \ref{sphpressure} uniquely determines $C_D$ for a sphere. As with the cylinder, the imposition of no-slip boundary conditions considerably simplifies these general formulae. In particular, consider stream functions of the form $\psi(r,\theta) = \sum_{n=1}^\infty f_n(r)Q_n(\cos{\theta})$, where $Q_n(x)$ is defined as in Eqn. \ref{Oseen:Goldstein}. If these stream functions satisfy the boundary conditions, the drag is given by Eqn. \ref{sphere:simpledrag}: \begin{equation} \label{sphere:simpledrag} C_D = \frac{2\pi}{3 R} \left( -2 f_1^{''}(r) + f_1^{'''}(r) \right)_{r=1} \end{equation} {\it c. A subtle point} When applicable, Eqns. \ref{cylinder:convenientdrag} and \ref{sphere:simpledrag} are the most convenient way to calculate the drag given a stream function. They simply require differentiation of a single angular term's radial coefficient. However, they only apply to functions that can be expressed as a series of harmonic functions. Moreover, for these simple formulae to apply, the series expansions \emph{must} meet the boundary conditions exactly. This requirement implies that \emph{each} of the functions $f_i(r)$ independently meets the boundary conditions. The goal of our work is to derive and understand approximate solutions to the Navier-Stokes' equations. These approximate solutions generally will not satisfy the boundary conditions exactly. What --- if any --- applicability do Eqns. \ref{cylinder:convenientdrag} and \ref{sphere:simpledrag} have if the stream function does not exactly meet the boundary conditions? In some rare cases, the stream function of interest can be expressed in a convenient closed form. In these cases, it is natural to calculate the drag coefficient using the full set of equations. However we will see that the solution to these problems is generally only expressible as a series in harmonic functions.
In these cases, it is actually preferable to use the simplified Eqns. \ref{cylinder:convenientdrag} and \ref{sphere:simpledrag}. First, these equations reflect the essential symmetry of the problem, the symmetry imposed by the uniform flow. Eqns. \ref{cylinder:convenientdrag} and \ref{sphere:simpledrag} explicitly demonstrate that, given an exact solution, only the lowest harmonic will matter: Only terms which have the same angular dependence as the uniform stream will contribute to the drag. By utilizing the simplified formula for $C_D$ as opposed to the general procedure, we effectively discard contributions from higher harmonics. This is exactly what we want, since these contributions are artifacts of our approximations, and would not be present in an exact solution. The contributions from inaccuracies in how the lowest harmonic meets the boundary conditions are more subtle. As long as the boundary conditions are satisfied to the accuracy of the overall approximation, it does not matter whether one uses the full-blown or simplified drag formula. The drag coefficients will agree to within the accuracy of the original approximation. In general, we will use the simplified formula. This is the approach taken explicitly by many matched asymptotics workers \cite{Chester69,Ski75}, and implicitly by other workers \cite{Proudman57,VanDyke1975}. It should be noted that these workers only use the portion\footnote{To be precise, they use only the Stokes' expansion, rather than a uniform expansion.} of their solutions which can exactly meet the assumptions of the simplified drag formula. However, as we will subsequently discuss, this is an oversimplification. \section{HISTORY OF LOW $R$ FLOW STUDIES} \label{chap:history} \subsection{Experiments and numerical calculations} Theoretical attempts to determine the drag by solving the Navier-Stokes' equations have been paralleled by an equally intricate set of experiments.
In the case of the sphere, experiments usually measured the terminal velocity of small falling spheres in a homogeneous fluid. In the case of the cylinder, workers measured the force exerted on thin wires or fibers immersed in a uniformly flowing viscous fluid. These experiments, while simple in concept, were difficult undertakings. The regime of interest necessitates some combination of small objects, slow motion, and viscous fluid. Precise measurements are not easy, and neither is ensuring that the experiment actually examines the same quantities that the theory predicts. All theoretical drag coefficients concern objects in an infinite fluid, which asymptotically tends to a uniform stream. Any real drag coefficient measurements must take care to avoid effects due to the finite size of the experiment. Due to the wide variety of reported results in the literature, we found it necessary to make a complete survey, as presented in this section. \subsubsection{Measuring the drag on a sphere} As mentioned, experiments measuring the drag on a sphere at low Reynolds number were intertwined with theoretical developments. Early experiments, which essentially confirmed Stokes' law as a reasonable approximation, include those of Allen \cite{Allen00}, Arnold \cite{Arnold11}, Williams \cite{Williams15}, and Wieselsberger \cite{Wie21}. The next round of experiments was done in the 1920s, motivated by the theoretical advances begun by C. W. Oseen \cite{Oseen10}. These experimentalists included Schmiedel \cite{schmiedel28} and Liebster \cite{Liebster26,Liebster24}. The results of Allen, Liebster, and Arnold were analyzed, collated, and averaged by Castleman \cite{Castleman25}, whose paper is often cited as a summary of prior experiments. The state of affairs after this work is well summarized in plots given by Goldstein (p. 16) \cite{Goldstein38}, and Perry \cite{Perry50}.
Figure \ref{Goldstein Sphere} shows Goldstein's plot, digitized and re-expressed in terms of the conventional definitions of $C_D$ and $R$. \begin{figure}[tb] \begin{center} \includegraphics[width=.8 \textwidth]{fig3} \caption{(Color online) Early measurements of the drag on a sphere \cite{Goldstein38}.} \label{Goldstein Sphere} \end{center} \end{figure} Figure \ref{Goldstein Sphere} shows the experimental data at this point, prior to the next theoretical development, matched asymptotics. Although the experimental data seem to paint a consistent portrait of the function $C_D(R)$, in reality they are not good enough to discriminate between different theoretical predictions. Finite geometries cause the most significant experimental errors for these measurements \cite{Tri88,Maxworthy65,Lindgren99}. Tritton notes that ``the container diameter must be more than one hundred times the sphere diameter for the error to be less than 2 percent'', and Lindgren estimates that a ratio of 50 between the container and sphere diameters will result in a 4\% change in drag force. In 1961, Fidleris et al. experimentally studied the effects of finite container size on drag coefficient measurements \cite{Fid61}. They concluded that there were significant finite size effects in previous experiments, but also proposed corrections to compensate for earlier experimental limitations. Lindgren also conducted some related experiments \cite{Lindgren99}. T. Maxworthy also realized this problem, and undertook experiments which could be used to evaluate the more precise predictions of matched asymptotics theories. In his own words, \begin{quote} From the data plotted in Goldstein or Perry, it would appear that the presently available data is sufficient to accurately answer any reasonable question. However, when the data is plotted `correctly'; that is, the drag is non-dimensionalized with respect to the Stokes drag, startling inaccuracies appear. 
It is in fact impossible to be sure of the drag to better than $\pm 20\%$ ... The difficulties faced by previous investigators seemed to be mainly due to an inability to accurately compensate for wall effects \cite{Maxworthy65}. \end{quote} Maxworthy refined the falling sphere technique to produce the best experimental measurements yet --- $2\%$ error. He also proposed a new way of plotting the data, which removes the $R^{-1}$ divergence in Eqn. \ref{CDsphere} (as $R\rightarrow 0$). His approach makes clear the failings of earlier measurements, as can be seen in Figure \ref{Maxworthy Sphere}, where the drag measurements are normalized by the Stokes drag, $C_D^{\textrm{Stokes}} = 6 \pi/R$. \begin{figure}[tb] \psfrag{Re}{\mbox{\Large $R$}} \psfrag{CD}{\mbox{\Large $C_D R/6\pi - 1$}} \psfrag{Maxworthy}{Maxworthy} \psfrag{Previous Expts.}{Previous Expts.} \begin{center} \includegraphics[width=.8 \textwidth]{fig4} \caption{(Color online) Maxworthy's accurate measurements of the drag on a sphere \cite{Maxworthy65} contrasted with previous experiments \cite{Goldstein38}.} \label{Maxworthy Sphere} \end{center} \end{figure} In Maxworthy's apparatus, the container diameter is over 700 times the sphere diameter, and does not contribute significantly to experimental error, which he estimates at better than 2 percent. Note that the data in Figure \ref{Maxworthy Sphere} are digitized from his paper, as raw data are not available. This problem also attracted the attention of atmospheric scientists, who realized its significance in cloud physics, where ``cloud drops may well be approximated by rigid spheres.''\cite{Pru70} In a series of papers (e.g., \cite{Pru70,Pru68,Bea69,LC70}), H.R. Pruppacher and others undertook numerical and experimental studies of the drag on the sphere. 
They were motivated by many of the same reasons as Maxworthy, because his experiments covered only Reynolds numbers between 0.4 and 11, and because ``Maxworthy's experimental setup and procedure left considerable room for improvement'' \cite{Pru68}. Their results included over 220 measurements, which they binned and averaged. They presented their results in the form of a set of linear fits. Adopting Maxworthy's normalization, we collate and summarize their findings in Eqn. \ref{Pruppacher1}. \begin{equation} C_D \frac{R}{6\pi} - 1 = \left\{ \begin{array}{ll} 0.102\left(2 R\right)^{0.955} & 0.005 < R \leq 1.0 \\ 0.115 \left(2 R\right)^{0.802} & 1.0 < R \leq 20 \\ 0.189\left(2 R\right)^{0.632} & 20 < R \leq 200 \end{array} \right. \label{Pruppacher1} \end{equation} Unfortunately, one of their later papers includes the following footnote (in our notation): ``At $R < 1$ the most recent values of $C_D R/6\pi - 1$ (Pruppacher, 1969, unpublished) tended to be somewhat higher than those of Pruppacher and Steinberger.'' \cite{LC70} Their subsequent papers plot these unpublished data as ``experimental scatter.'' As the unpublished data are in much better agreement with both Maxworthy's measurements and their own numerical analysis \cite{LC70}, we question the accuracy of the results given in Eqn. \ref{Pruppacher1}. There are many other numerical calculations of the drag coefficient for a sphere, including: Dennis \cite{Den71}, Le Clair \cite{LC70,Pru70}, Hamielec \cite{Ham67}, Rimon \cite{Rim69}, Jenson \cite{Jen59}, and Kawaguti \cite{Kaw50}. Most of these results are not useful either because of large errors (e.g., Jenson), or because they study ranges of Reynolds number which do not include $R < 1$. Many numerical studies examine only a few (or even just a single) Reynolds numbers. For the purposes of comparing theoretical predictions of $C_D$ at low Reynolds number, only Dennis \cite{Den71} and Le Clair \cite{LC70} have useful calculations.
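For reference, the piecewise fits of Eqn. \ref{Pruppacher1} are straightforward to evaluate numerically. A minimal sketch (function names are ours; the factor of two converts our radius-based $R$ to the diameter-based Reynolds number used in the original fits):

```python
from math import pi

def pruppacher_fit(R):
    """Piecewise power-law fits of Eqn. (Pruppacher1) for C_D * R/(6*pi) - 1.

    R is the Reynolds number based on the sphere radius.
    """
    if 0.005 < R <= 1.0:
        return 0.102 * (2 * R) ** 0.955
    elif 1.0 < R <= 20.0:
        return 0.115 * (2 * R) ** 0.802
    elif 20.0 < R <= 200.0:
        return 0.189 * (2 * R) ** 0.632
    raise ValueError("fit only covers 0.005 < R <= 200")

def pruppacher_CD(R):
    """Recover the drag coefficient itself, in Stokes' convention."""
    return (pruppacher_fit(R) + 1.0) * 6.0 * pi / R

# At R = 0.5 the diameter-based Reynolds number is 1, so the first
# branch reduces to its prefactor:
assert abs(pruppacher_fit(0.5) - 0.102) < 1e-12
```

As the fits always return a positive correction, the resulting $C_D$ lies above Stokes' law throughout the fitted range.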
Both of these papers report tabulated results which are in very good agreement with both each other and Maxworthy; at $R=0.5$, the three sets of results agree to within 1\% in $C_D$, and to within 10\% in the transformed variable, $C_D R/6\pi -1$. The agreement is even better for $R < 0.5$. \begin{figure}[tb] \psfrag{Re}{\mbox{\Large $R$}} \psfrag{CD}{\mbox{\Large $C_D R/6\pi - 1$}} \psfrag{CD2 XXXXXXX}{\tiny{$C_D R/6\pi - 1$}} \psfrag{Maxworthy XXXXXX}{Maxworthy} \psfrag{Pruppacher}{Eqn. \ref{Pruppacher1}} \psfrag{Le Clair}{Le Clair} \psfrag{Dennis}{Dennis} \begin{center} \includegraphics[width=.8 \textwidth]{fig5} \caption{(Color online) A summary of experimental and numerical studies of $C_D$ for a sphere \cite{Maxworthy65,LC70,Den71}.} \label{TotSphere1} \end{center} \end{figure} Figure \ref{TotSphere1} shows all relevant experimental and numerical results for the drag on a sphere. Note the clear disagreement between Pruppacher's results (Eqn. \ref{Pruppacher1}), and all of the other results for $R < 1$ --- including Le Clair and Pruppacher's numerical results \cite{LC70}. This can be clearly seen in the inset graph. Although Pruppacher's experimental results do agree very well with other data for larger values of $R$ ($R \gtrsim 20$), we will disregard them for the purposes of evaluating theoretical predictions at low Reynolds number. It should also be noted that there is a community of researchers interested in sedimentation and settling velocities who have studied the drag on a sphere. In a contribution to this literature, Brown reviews all of the authors discussed here, as he tabulates $C_D$ for $R < 5000$ \cite{Bro03}. His report addresses a larger range of Reynolds numbers and he summarizes a number of experiments not treated here.
His methodology is to apply the Fidleris correction \cite{Fid61} to previous experiments where tabulated experimental data were published.\footnote{Brown incorrectly reports Dennis' work \cite{Den71} as experimental.} While this yields a reasonably well-behaved drag coefficient for a wide range of Reynolds numbers, it is not particularly useful for our purposes, as less accurate work obfuscates the results of the most precise experiments near $R=0$. It also does not include numerical work or important results which are only available graphically (e.g., Maxworthy \cite{Maxworthy65}). \subsubsection{Measuring the drag on a cylinder} Experiments designed to measure the drag on an infinite cylinder in a uniform fluid came later than those for spheres. In addition to being a more difficult experiment --- theoretical calculations assume the cylinder is infinite --- there were no theoretical predictions to test before Lamb's result in 1911 \cite{Lam11}. In 1914, E. F. Relf conducted the first experiments \cite{Rel14}. These looked at the force exerted on long wires in a fluid. Relf measured the drag down to a Reynolds number of about ten. In 1921, Wieselsberger measured the drag at still lower Reynolds number, reaching $R=2.11$ by looking at the deflection of a weight suspended on a wire in an air stream \cite{Wie21b}. These experiments, combined with others \cite{Lin31,Goldstein38} at higher Reynolds number, characterize the drag over a range of Reynolds numbers (see Goldstein, pg. 15). However, they do not probe truly small Reynolds numbers ($R\ll 1$), and are of little use for evaluating theories which are only valid in that range. Curiously, there is no shortage of claims otherwise, such as Lamb, who says ``The formula is stated to be in good agreement with experiment for sufficiently small values of $U_\infty a/\nu$; see Wieselsberger'' \cite{Lamb1932}. In 1933, Thom measured the ``pressure drag'', extending observations down to $R=1.75$.
Thom also notes that this Reynolds number is still too high to compare with calculations: ``Actually, Lamb's solution only applies to values of $R$ less than those shown, in fact to values much less than unity, but evidently in most cases the experimental results are converging with them.'' \cite{Tho33} In 1946, White undertook a series of measurements, which were flawed due to wall effects \cite{Whi46}. The first high quality experiments which measured the drag at low Reynolds number were done by R. K. Finn \cite{Fin53}. His results, available only in graphical form, are reproduced in Figure \ref{TotCyl1}. While vastly superior to any previous results, Finn's measurements exhibit considerable scatter, and they have largely been surpassed by later experiments. Tritton, in 1959, conducted experiments which reached a Reynolds number of $R=0.2$, and also filled in some gaps in the $R-C_D$ curve \cite{Tritton59}. Tritton estimates his accuracy at $\pm 6\%$, and compares his results favorably to previous work, commenting that, ``Probably the lowest R points of the other workers were stretching their techniques a little beyond their limits.'' Tritton is also the first author to give a discussion of systematic errors.\footnote{Tritton does caution that his measurements may be negatively biased at higher Reynolds number ($R \gtrsim 30$).} Tritton's results are shown in Figure \ref{Tritton Data 1}. All of his data are available in tabular form. \begin{figure}[tb] \psfrag{Re}{$R = \frac{U_\infty a}{\nu}$} \psfrag{Log}{$\log_{10}$} \psfrag{Previous Experiments and}{Previous Expts. \cite{Goldstein38}} \begin{center} \includegraphics[width=.8 \textwidth]{fig6} \caption{(Color online) Tritton's measurements of the drag on a cylinder \cite{Tritton59}.} \label{Tritton Data 1} \end{center} \end{figure} Maxworthy improved plots of the drag on a sphere (Fig.
\ref{Goldstein Sphere}), by arguing that the leading divergence must be removed to better compare experiments and predictions (Fig. \ref{Maxworthy Sphere}). This same criticism applies to plots of the drag on a cylinder. In the case of the cylinder, $C_D$ goes as $R^{-1}$ (with logarithmic corrections) as $R \rightarrow 0$ (Eqn. \ref{CDcylinder}). This means we ought to plot $C_D R/4\pi$. This function tends to zero as $R\rightarrow 0$, so it is not necessary to plot $C_D R/4\pi-1$, as in the case of the sphere. Figure \ref{TotCyl1} shows both Finn's and Tritton's data re-plotted with the leading divergence removed. In 1965, K. O. L. F. Jayaweera \cite{Jay65} undertook measurements of the drag on very long (but finite) cylinders. At very low Reynolds number ($R \leq 0.135$), his data are available in tabular form. At higher Reynolds number, they had to be digitized. His data, plotted with the leading divergence removed, are also shown in Figure \ref{TotCyl1}. \begin{figure}[tb] \psfrag{Re}{\mbox{\Large $R$}} \psfrag{CD}{\mbox{\Large $C_D R/4\pi$}} \psfrag{CD2 XXXXXXX}{\tiny{$C_D R/4\pi$}} \psfrag{Jayaweera XXXXX}{Jayaweera} \psfrag{Finn}{Finn} \psfrag{Tritton}{Tritton } \begin{center} \includegraphics[width=.8 \textwidth]{fig7} \caption{(Color online) Summary of measurements of the drag on a cylinder \cite{Jay65,Fin53,Tritton59}.} \label{TotCyl1} \end{center} \end{figure} The agreement amongst these experiments is excellent. Henceforth, Finn's data will not be plotted, as they exhibit larger experimental variations, and are surpassed by the experiments of Jayaweera and Tritton. Jayaweera's data exhibit the least scatter, and may be slightly better than Tritton's. However, both experiments have comparable, large ratios of cylinder length to width (the principal source of experimental error), and there is no a priori reason to favor one experimental design over the other.
We consider these two experiments to be equivalent for the purposes of evaluating theoretical predictions. As with the sphere, there are numerical calculations, including: Underwood \cite{Und69}, Son \cite{Son69}, Kawaguti \cite{Kaw66}, Dennis \cite{Den64}, Thom \cite{Tho33}, Apelt \cite{Ape61}, and Allen \cite{All55}. Of these, most treat only a few Reynolds numbers, none of which are sufficiently small. Others, such as Allen and Dennis, have had their results subsequently questioned \cite{Und69}. The only applicable studies are Kawaguti \cite{Kaw66}, and Underwood \cite{Und69}. Kawaguti has a calculation only for $R=0.5$, and is omitted. Underwood's results are in principle important and useful, but are only available in a coarse plot, which cannot be digitized with sufficient accuracy. Consequently, no numerical results will be used for evaluating analytical predictions. There are many different experimental and numerical drag coefficient measurements. We will subsequently use only the best as benchmarks for evaluating the performance of theoretical predictions. In the case of the sphere, the experimental measurements of Maxworthy \cite{Maxworthy65} as well as the numerical calculations of Dennis \cite{Den71} and Le Clair \cite{LC70} all extend to sufficiently small $R$ and possess sufficient accuracy. For the cylinder, the experiments of Tritton \cite{Tritton59} and Jayaweera \cite{Jay65} are both excellent. Although they exhibit small differences, we cannot judge either to be superior, and we will compare both with theoretical results. \subsection{Theoretical history} Since these problems were posed by Stokes in 1851, there have been many attempts to solve them. All of these methods involve approximations, which are not always rigorous (or even explicitly stated). There is also considerable historical confusion over contributions and attribution.\footnote{For an explanation of confusion over early work, see Lindgren \cite{Lindgren99}.
Proudman and Pearson \cite{Proudman57} also begin their article with an insightful, nuanced discussion, although there are some errors \cite{Lindgren99}.} Here we review and summarize the substantial contributions to the literature, focusing on what approximations are used, in both deriving governing equations and in their subsequent solution. We discuss the validity and utility of important results. Finally, we emphasize methodological shortcomings and how they have been surmounted. \subsubsection{Stokes and paradoxes} In the first paper on the subject, Stokes approximated $R=0$ in Eqn. \ref{NonDNS} and solved the resulting equation (a problem equivalent to solving Eqn. \ref{SphereEqn} with $R=0$) \cite{Stokes51}. After applying the boundary conditions (Eqn. \ref{Sphere BC}), his solution is given in terms of a stream function by Eqn. \ref{stokessol}. \begin{equation} \label{stokessol} \psi(r,\mu) = \frac{1}{4}\left( 2 r^2 - 3 r + \frac{1}{r} \right) \left(1-\mu^2 \right) \end{equation} By substituting $\psi(r,\mu)$ into Eqns. \ref{spherestreamfunction}, \ref{CDsphere}, and \ref{sphpressure} (or by using Eqn. \ref{sphere:simpledrag}), we reproduce the famous result of Stokes, given by Eqn. \ref{stokes:famoussol}. \begin{equation} \label{stokes:famoussol} C_D = \frac{6 \pi}{R} \end{equation} Stokes also tackled the two-dimensional cylinder problem in a similar fashion, but could not obtain a solution. The reason for his failure can be seen by setting $R=0$ in Eqn. \ref{CylinderEqn}, and attempting a direct solution. Enforcing the $\sin\theta$ angular dependence results in a solution of the form $\psi(r,\theta) = \left( C_1 r^3 + C_2 r \ln{r} + C_3 r + C_4/r \right) \sin\theta$. Here $C_i$ are integration constants. No choice of $C_i$ will meet the boundary conditions Eqn. (\ref{Cylinder BC}), as this solution cannot match the uniform flow at large $r$.
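Stokes' result is easy to verify symbolically via the simplified drag formula (Eqn. \ref{sphere:simpledrag}). The sketch below assumes the convention $Q_1(\mu) = (\mu^2-1)/2$ for the radial coefficient of Eqn. \ref{stokessol}; the overall sign of $f_1$ depends on that convention:

```python
import sympy as sp

r, R = sp.symbols('r R', positive=True)

# Radial coefficient of Stokes' stream function (Eqn. stokessol),
# assuming Q_1(mu) = (mu^2 - 1)/2, so psi = f1(r) Q_1(mu):
f1 = -sp.Rational(1, 2) * (2*r**2 - 3*r + 1/r)

# No-slip boundary conditions require f1(1) = f1'(1) = 0:
assert f1.subs(r, 1) == 0
assert sp.diff(f1, r).subs(r, 1) == 0

# Simplified drag formula (Eqn. sphere:simpledrag):
C_D = 2*sp.pi/(3*R) * (-2*sp.diff(f1, r, 2) + sp.diff(f1, r, 3)).subs(r, 1)
assert sp.simplify(C_D - 6*sp.pi/R) == 0   # Stokes' famous 6 pi / R
```

The calculation amounts to evaluating $f_1''(1) = -3$ and $f_1'''(1) = 3$, so that $-2f_1''(1) + f_1'''(1) = 9$.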
The best one can do is to set $C_1 = 0$, resulting in a partial solution: \begin{equation} \psi(r,\theta) = C \left(2 r \ln{r} - r + \frac{1}{r} \right) \sin\theta \label{stokes:cylindersol} \end{equation} Nonetheless, this solution is \emph{not} a description of fluid flow which is valid everywhere. Moreover, due to the indeterminable constant $C$, Eqn. \ref{stokes:cylindersol} cannot be used to estimate the drag on the cylinder. A more elegant way to see that no solution may exist is through dimensional analysis \cite{LandauLifschitz,Happel73}. The force per unit length may only depend on the cylinder radius, fluid viscosity, fluid density, and uniform stream velocity. These quantities are given in Table \ref{tab:dimanal1}, with $M$ denoting a unit of mass, $T$ a unit of time, and $L$ a unit of length. From these quantities, one may form two dimensionless groups \cite{Buckingham04}: $\Pi_0 = R = |\vec{u}_{\infty}| a/\nu$, $\Pi_1 = F_{\text{Net}}/(\rho \nu |\vec{u}_{\infty}|)$. Buckingham's $\Pi$ Theorem \cite{Buckingham04} then tells us that: \begin{equation} \Pi_1 = F(R) \label{buckingham} \end{equation} If we make the assumption that the problem does not depend on $R$, as Stokes did, then we obtain $\Pi_1 = \text{const}$, whence \begin{equation} F_{\text{Net}} \propto \rho \nu |\vec{u}_{\infty}| \label{buckinghamresult} \end{equation} However, Eqn. \ref{buckinghamresult} does not depend on the cylinder radius, $a$! This is physically absurd, and demonstrates that Stokes' assumptions cannot yield a solution. The explanation is that in taking the $R\rightarrow 0$ limit in Eqn. \ref{buckingham}, we made the incorrect assumption that $F(R)$ tends toward a \emph{finite, non-zero limit}. This is an example of \emph{incomplete similarity}, or \emph{similarity of the second kind} (in the Reynolds number) \cite{Barenblatt79}.
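The counting behind this argument can be checked mechanically: dimensionless products correspond to nullspace vectors of the dimension matrix. A sketch, with rows following Table \ref{tab:dimanal1} (the matrix layout is ours):

```python
import sympy as sp

# Exponents of (M, L, T) for each quantity in the dimensional analysis:
dims = sp.Matrix([
    [1, 0, -2],   # F_Net per unit length: M T^-2
    [0, 2, -1],   # nu:                    L^2 T^-1
    [0, 1,  0],   # a:                     L
    [1, -3, 0],   # rho:                   M L^-3
    [0, 1, -1],   # |u_inf|:               L T^-1
])

# A product F^x1 * nu^x2 * a^x3 * rho^x4 * u^x5 is dimensionless
# exactly when dims^T x = 0, so the Pi groups span the nullspace:
null = dims.T.nullspace()
assert len(null) == 2   # exactly two independent dimensionless groups

# The Reynolds number Pi_0 = |u_inf| a / nu corresponds to x = (0, -1, 1, 0, 1):
assert dims.T * sp.Matrix([0, -1, 1, 0, 1]) == sp.zeros(3, 1)
```

With five quantities and three independent dimensions, the rank of the dimension matrix is three, so Buckingham's theorem guarantees $5 - 3 = 2$ groups, in agreement with $\Pi_0$ and $\Pi_1$ above.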
Note that the problem of flow past a sphere involves force, \emph{not} force per unit length, and therefore is not subject to the same analysis. \begin{table} \begin{center} \begin{tabular}{|c|l|c|} \hline Quantity & Description & Dimensions\\ \hline $F_{\text{Net}}$ & Net Force per Unit Length & $MT^{-2}$ \\ $\nu$ & Kinematic Viscosity & $L^2T^{-1}$\\ $ a $ & Cylinder Radius & $L$ \\ $ \rho $ & Fluid Density & $ML^{-3}$ \\ $|\vec{u}_{\infty}|$ & The Uniform Stream Speed & $LT^{-1}$ \\ \hline \end{tabular} \caption{Dimensional analysis of Stokes' problem.} \label{tab:dimanal1} \end{center} \end{table} Stokes incorrectly took this nonexistence of a solution to mean that steady-state flow past an infinite cylinder could not exist. This problem, which is known as \emph{Stokes' paradox}, has been shown to occur with any unbounded two-dimensional flow \cite{Kra53}. But such flows really do exist, and this mathematical problem has since been resolved by the recognition of the existence of boundary layers. In 1888, Whitehead attempted to find higher approximations for flow past a sphere, ones which would be valid for small but non-negligible Reynolds numbers \cite{Whitehead89}. He used Stokes' solution (Eqn. \ref{stokessol}) to approximate the inertial contributions (the RHS of Eqn. \ref{SphereEqn}), aiming to iteratively obtain higher approximations to the flow. In principle, this approach can be repeated indefinitely, always using a linear governing equation to obtain higher order approximations. Unfortunately, Whitehead found that his next order solution could not meet all of the boundary conditions (Eqn. \ref{Sphere BC}), because he could not match the uniform stream at infinity \cite{VanDyke1975}. These difficulties are analogous to the problems encountered in Stokes' analysis of the infinite cylinder.
Whitehead's approach is equivalent to a perturbative expansion in the Reynolds number, an approach which is ``never valid in problems of uniform streaming'' \cite{Proudman57}. This mathematical difficulty is common to all three-dimensional uniform flow problems, and is known as \emph{Whitehead's paradox}. Whitehead thought this was due to discontinuities in the flow field (a ``dead-water wake''), but this is incorrect, and his ``paradox'' has also since been resolved \cite{VanDyke1975}. \subsubsection{Oseen's equation} {\it a. Introduction} In 1893, Rayleigh pointed out that Stokes' solution would be uniformly applicable if certain inertial forces were included, and noted that the ratio of those inertial forces to the viscous forces which Stokes considered could be used to estimate the accuracy of Stokes' approximations \cite{Ray93}. Building on these ideas in 1910, C. W. Oseen proposed an ad hoc approximation to the Navier-Stokes equations which resolved both paradoxes. His linearized equations (the \emph{Oseen equations}) attempted to deal with the fact that the equations governing Stokes' perturbative expansion are invalid at large $|\vec{r}|$, where they neglect important inertial terms. In addition to Oseen, a number of workers have applied his equations to a wide variety of problems, including both the cylinder and the sphere.\footnote{Lamb \cite{Lamb1932} solved the Oseen equations for the cylinder approximately, as Oseen \cite{Oseen10} did for the sphere. The Oseen equations have been solved exactly for a cylinder by Fax\'en \cite{Faxen27}, as well as by Tomotika and Aoi \cite{Tomotika50}, and those for the sphere were solved exactly by Goldstein \cite{Goldstein29}.} Oseen's governing equation arises independently in several different contexts. Oseen derived the equation in an attempt to obtain an approximate equation which describes the flow everywhere. 
In modern terminology, he sought a governing equation whose solution is a uniformly valid approximation to the Navier-Stokes equations. Whether he succeeded is a matter of some debate. The short answer is ``Yes, he succeeded, but he got lucky.'' This story is further complicated by historical confusion. Oseen's equations ``are valid but for the wrong reason'' \cite{Lindgren99}; Oseen originally objected to working in the inertial frame where the solid body is at rest, and therefore undertook calculations in the rest frame of the uniform stream. This complication is overlooked largely because many subsequent workers have only understood Oseen's intricate three-paper analysis through the lens of Lamb's later work \cite{Lam11}. Lamb --- in addition to writing in English --- presents a clearer, ``shorter way of arriving at his [Oseen's] results'', which he characterizes as ``somewhat long and intricate.'' \cite{Lam11} In 1913, Fritz Noether, using both Rayleigh's and Oseen's ideas, analyzed the problem using stream functions \cite{Noe11}. Noether's paper prompted criticisms from Oseen, who then revisited his own work. A few months later, Oseen published another paper, which included a new result for $C_D$ (Eqn. \ref{Oseen:dragsphere}) \cite{Ose13}. Burgess also explains the development of Oseen's equation, and presents a clear derivation of Oseen's principal results, particularly of Oseen's new formula for $C_D$ \cite{BR16}. Lindgren offers a detailed discussion of these historical developments \cite{Lindgren99}. However, he incorrectly reports Noether's publication date as 1911, rather than 1913. As a result, he mistakenly concludes that Noether's work was independent of Oseen's, contradicting claims made in Burgess \cite{BR16}. Although the theoretical justification for Oseen's approximation is tenuous, its success at resolving the paradoxes of both Stokes and Whitehead led to widespread use.
Oseen's equation has been fruitfully substituted for the Navier-Stokes equations in a broad array of low Reynolds number problems. Happel and Brenner describe its application to many problems in the dynamics of small particles where interactions can be neglected \cite{Happel73}. Many workers have tried to explain the utility and unexpected accuracy of Oseen's governing equations. Finally, the Oseen equation, as a partial differential equation, arises both in matched asymptotic calculations and in our new work. In these cases, however, its genesis and interpretation are entirely different, and the similarity is purely formal. Due to its ubiquity and historical significance, we now discuss both Oseen's equation and its \emph{many} different solutions in detail. {\it b. Why Stokes' approximation breaks down} Oseen solved the paradoxes of Stokes and Whitehead by using Rayleigh's insight: compare the magnitude of inertial and viscous forces \cite{Oseen10,Ray93}. Stokes and Whitehead had completely neglected inertial terms in the Navier-Stokes equations, working in the regime where the Reynolds number is vanishingly small (so-called ``creeping flow''). However, this assumption can only be valid near the surface of the fixed body. It is \emph{never} valid everywhere. To explain why, we follow the spirit of Lamb's analysis, presenting Oseen's conclusions ``under a slightly different form.'' \cite{Lam11} Consider first the case of the sphere. We can estimate the magnitude of the neglected inertial terms by using Stokes' solution (Eqn. \ref{stokessol}). Substituting this result into the RHS of Eqn. \ref{SphereEqn}, we see that the dominant inertial components are convective accelerations arising from the nonlinear terms in Eqn. \ref{SphereEqn}. These terms reflect interactions between the uniform stream and the perturbations described by Eqn. \ref{stokessol}. For large values of $|\vec{r}|$, these terms are of \Order[R/r^2]{}.
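This $r^{-2}$ decay of the convective terms can be confirmed numerically. The sketch below is our own check, not part of the original analyses; it assumes the standard closed-form Cartesian representation of Stokes' velocity field (equivalent to Eqn. \ref{stokessol}), with unit sphere radius, unit stream speed, and the stream along $\hat{z}$:

```python
# Numerical sketch (ours, not from the original analyses): verify that the
# convective term (u . grad)u of Stokes' solution decays like 1/r^2.
# Assumptions: unit sphere radius, unit stream speed, stream along z-hat,
# and the standard Cartesian form of Stokes' velocity field.
import numpy as np

U = np.array([0.0, 0.0, 1.0])  # uniform stream

def u(x):
    """Stokes' solution for flow past a unit sphere (Cartesian form)."""
    r = np.linalg.norm(x)
    rhat = x / r
    Ur = U @ rhat
    return (U
            - (3.0 / (4.0 * r)) * (U + Ur * rhat)            # Stokeslet, O(1/r)
            + (1.0 / (4.0 * r**3)) * (3.0 * Ur * rhat - U))  # doublet, O(1/r^3)

def convective(x, h=1e-3):
    """(u . grad)u estimated by central differences."""
    grad = np.array([(u(x + h * e) - u(x - h * e)) / (2.0 * h)
                     for e in np.eye(3)])   # grad[i] = d u / d x_i
    return u(x) @ grad

# Sample along an off-axis ray; doubling r should quarter the magnitude.
n = np.array([1.0, 0.0, 1.0]) / np.sqrt(2.0)
f = [np.linalg.norm(convective(r * n)) for r in (100.0, 200.0)]
slope = np.log2(f[0] / f[1])
print(slope)   # ~2, i.e. the neglected inertial terms fall off like 1/r^2
```

The fitted slope of about two matches the \Order[R/r^2]{} estimate above (the Reynolds number enters only as an overall prefactor and is omitted here).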
Estimating the magnitude of the relevant viscous forces is somewhat trickier. If we substitute Eqn. \ref{stokessol} into the LHS of Eqn. \ref{SphereEqn}, the LHS vanishes identically. To learn anything, we must consider the terms individually. There are two kinds of terms which arise far from the sphere. First, there are components due solely to the uniform stream. These are of \Order[r]{-2}. However, the uniform stream satisfies Eqn. \ref{SphereEqn} independently, without the new contributions in Stokes' solution. Mathematically, this means that all of the terms of \Order[r]{-2} necessarily cancel amongst themselves.\footnote{VanDyke \cite{VanDyke1975} does not treat this issue in detail, and we recommend Proudman \cite{Proudman57} or Happel \cite{Happel73} for a more careful discussion.} We are interested in the magnitude of the remaining terms, perturbations which result from the other components of Stokes' solution. These viscous terms (i.e., the $\partial_\theta^4$ term in Eqn. \ref{SphereEqn}) are of \Order[r]{-3} as $r \to \infty$. Combining these two results, the ratio of inertial to viscous terms, in the $r \to \infty$ limit, is given by Eqn. \ref{oseen:stokesbreakdown}. \begin{equation} \frac{\textrm{inertial}}{\textrm{viscous}} = \Order[Rr]{} \label{oseen:stokesbreakdown} \end{equation} This ratio is small near the body (where $r$ is small) and justifies neglecting inertial terms in that regime. However, Stokes' implicit assumption that inertial terms are everywhere small compared to viscous terms breaks down when $R r \sim \Order[1]{}$, where the two kinds of forces are of the same magnitude. In this regime, Stokes' solution is not valid, and therefore cannot be used to estimate the inertial terms (as Whitehead had done). Technically speaking, Stokes' approximation breaks down because of a singularity at infinity, an indication that this is a \emph{singular perturbation} in the Reynolds number.
As Oseen pointed out, this is the genesis of Whitehead's ``paradox''. What does this analysis tell us about the utility of Stokes' solution? Different opinions can be found in the literature. Happel, for instance, claims that it ``is not uniformly valid'' \cite{Happel73}, while Proudman asserts ``Stokes' solution is therefore actually a uniform approximation to the total velocity distribution.'' \cite{Proudman57} By a \emph{uniform approximation}, we mean that the approximation asymptotically approaches the exact solution as the Reynolds number goes to zero \cite{Kaplun57a}; see Section \ref{section:uniform} for further discussion. Proudman and Pearson clarify their comment by noting that although Stokes' solution is a uniform approximation to the total velocity distribution, it does not adequately characterize the perturbation to the uniform stream, or the \emph{derivatives} of the velocity. This is a salient point, for the calculations leading to Eqn. \ref{oseen:stokesbreakdown} examine components of the Navier-Stokes equations, not the velocity field itself. These components are forces --- derivatives of velocity. However, Proudman and Pearson offer no proof that Stokes' solution is actually a uniform approximation, and their claim that it is ``a valid approximation to many bulk properties of the flow, such as the resistance'' \cite{Proudman57} goes unsupported. In fact, any calculation of the drag requires derivatives of the velocity field, so their argument is inconsistent. We are forced to conclude that Stokes' solution is not a uniformly valid approximation, and that his celebrated result, Eqn. \ref{stokes:famoussol}, is the fortuitous result of uncontrolled approximations. Remarkably, Stokes' drag formula is in fact the correct zeroth-order approximation, as can be shown using either matched asymptotics or the Oseen equation!
This coincidence is essentially due to the fact that the drag is determined by the velocity field and its derivatives at the surface of the sphere, where $r=1$, and Eqn. \ref{oseen:stokesbreakdown} is \Order[R]{1}. The drag coefficient calculation uses Stokes' solution in the regime where his assumptions are the most valid. A similar analysis affords insight into the origin of Stokes' paradox in the problem of the cylinder. Although we have seen previously that Stokes' approach must fail on both algebraic and dimensional grounds, examining the ratio between inertial and viscous forces highlights the physical inconsistencies in his assumptions. We can use the incomplete solution given by Eqn. \ref{stokes:cylindersol} to estimate the relative contributions of inertial and viscous forces in Eqn. \ref{CylinderEqn}. More specifically, we examine the behavior of these forces at large values of $r$. Substituting Eqn. \ref{stokes:cylindersol} into the RHS of Eqn. \ref{CylinderEqn}, we find that the inertial forces are $\mathcal{O}\left(R C^2 \log{r}/{r^2}\right)$ as $r \rightarrow \infty$. We estimate the viscous forces as in the case of the sphere, again ignoring contributions due solely to the uniform stream. The result is that the viscous forces are $\mathcal{O}\left(C \log{r}/{r^3} \right)$.\footnote{This result disagrees with the results of Proudman \cite{Proudman57} and VanDyke \cite{VanDyke1975}, who calculate that the ratio of inertial to viscous forces $\sim R r \ln{r}$. However, both results lead to the same conclusions.} Combining the two estimates, we obtain the result given in Eqn. \ref{oseen:stokesbreakdown2}. \begin{equation} \frac{\textrm{inertial}}{\textrm{viscous}} = \Order[Rr]{} \label{oseen:stokesbreakdown2} \end{equation} This result demonstrates that the paradoxes of Stokes and Whitehead are the result of the same failures in Stokes' uncontrolled approximation.
Far from the solid body, there is a regime where it is incorrect to assume that the inertial terms are negligible in comparison to viscous terms. Although these approximations happened to lead to a solution in the case of the sphere, Stokes' approach is invalid and technically inconsistent in both problems. {\it c. How Oseen Resolved the Paradoxes} Not only did Oseen identify the physical origin of the breakdowns in previous approximations, but he also discovered a solution \cite{Oseen10}. As explained above, the problems arise far from the solid body, where inertial terms are no longer negligible. However, in this region ($r \gg 1$), the flow field is nearly a uniform stream --- it is almost unperturbed by the solid body. Oseen's inspiration was to replace the inertial terms with linearized approximations far from the body. Mathematically, the fluid velocity $\vec{u}$ in Eqn. \ref{NonDNS} is replaced by the quantity $\vec{u}_\infty+\vec{u}$, where $\vec{u}$ represents the perturbation to the uniform stream, and is considered to be small. Neglecting terms of \Order[|\vec{u}|]{2}, the inertial forces of the Navier-Stokes equations --- $R \left(\vec{u} \cdot \nabla\vec{u}\right)$ --- are approximated by $R \left(\vec{u}_\infty \cdot\nabla\vec{u}\right)$. This results in Oseen's equation: \begin{equation} \label{Oseen:Eqn} R \left(\vec{u}_\infty \cdot \nabla \vec{u}\right) = - \nabla p + \nabla^2 \vec{u} \end{equation} The left-hand side of this equation is negligible in the region where Stokes' solution applies. One way to see this is by explicitly substituting Eqn. \ref{stokessol} or Eqn. \ref{stokes:cylindersol} into the LHS of Eqn. \ref{Oseen:Eqn}. The result is of \Order[R]{}. This can also be done self-consistently with any of the solutions of Eqn. \ref{Oseen:Eqn}; it can thereby be explicitly shown that the LHS only becomes important when $r \gg 1$, where the ratios in Eqns. \ref{oseen:stokesbreakdown} and \ref{oseen:stokesbreakdown2} are of \Order[1]{}.
Coupled with the continuity equation (Eqn. \ref{Continuity2ND}), and the usual boundary conditions, the Oseen equation determines the flow field everywhere. The beautiful thing about Oseen's equation is that it is \textit{linear}, and consequently is solvable in a wide range of geometries. In terms of stream functions, the Oseen equation for a sphere takes on the form given by Eqn. \ref{Oseen:SphereEqn}. The boundary conditions for this equation are still given by Eqn. \ref{Sphere BC}. \begin{equation} \label{Oseen:SphereEqn} D^4 \psi = R \left(\frac{1-\mu^2}{r} \frac{\partial}{\partial \mu} + \mu \frac{\partial}{\partial r} \right) D^2 \psi(r,\mu) \end{equation} Here, $D$ is defined as in Eqn. \ref{SphereEqn}. For the cylinder, where the boundary conditions are given by Eqn. \ref{Cylinder BC}, Oseen's equation takes the form given by Eqn. \ref{Oseen:CylinderEqn}. \begin{equation} \label{Oseen:CylinderEqn} \nabla_r^4 \psi(r,\theta) = R \left(\cos(\theta) \frac{\partial}{\partial r} - \frac{\sin (\theta)}{r}\frac{\partial}{\partial \theta} \right) \nabla_r^2 \psi(r,\theta) \end{equation} Here $\nabla_r$ is defined as in Eqn. \ref{CylinderEqn}. This equation takes on a particularly simple form in Cartesian coordinates (where $x=r\cos{\theta}$): $\left( \nabla^2 - R\partial_x \right)\nabla^2\psi(r,\theta) = 0$. A few historical remarks must be made. First, Oseen and Noether were motivated to refine Stokes' work and include inertial terms because they objected to the analysis being done in the rest frame of the solid body. While their conclusions are valid, there is nothing wrong with solving the problem in any inertial frame. Second, Oseen made no use of stream functions; the above equations summarize results from several workers, particularly Lamb. There are many solutions to Oseen's equations, applying to different geometries and configurations, including some exact solutions.
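The Cartesian form is convenient for sanity checks. A short numerical sketch (our own; the values of $R$, the sample point, and the step size are arbitrary test choices) confirms that $\phi = e^{Rx/2}K_0(Rr/2)$ satisfies $\left(\nabla^2 - R\partial_x\right)\phi = 0$, so any $\psi$ whose Laplacian is built from such terms and harmonic functions solves the Cartesian form above; terms of exactly this shape appear in the closed-form cylinder solutions discussed below.

```python
# Numerical sketch (ours): in two dimensions the convected operator
# (Lap - R d/dx) exactly annihilates phi = exp(R x / 2) K0(R r / 2).
# R, the sample point, and the step size h are arbitrary test choices.
import numpy as np
from scipy.special import k0

R = 1.0
h = 1e-3

def phi(x, y):
    return np.exp(R * x / 2.0) * k0(R * np.hypot(x, y) / 2.0)

def residual(x, y):
    lap = (phi(x + h, y) + phi(x - h, y) + phi(x, y + h) + phi(x, y - h)
           - 4.0 * phi(x, y)) / h**2           # 5-point Laplacian
    dphi_dx = (phi(x + h, y) - phi(x - h, y)) / (2.0 * h)
    return lap - R * dphi_dx

print(abs(residual(1.0, 1.5)))   # only finite-difference error remains
```

The residual is at the level of the finite-difference truncation error, consistent with the operator annihilating $\phi$ analytically.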
However, for any useful calculations, such as $C_D$, even the exact solutions must be supplemented with approximations. There have been many years of discussion about how to properly interpret Oseen's approximations, and how to understand the limitations of both his approach and concomitant solutions. Before embarking on this analysis, we summarize the important solutions to Eqns. \ref{Oseen:SphereEqn} and \ref{Oseen:CylinderEqn}. {\it d. A plethora of solutions} Oseen himself provided the first solution to Eqn. \ref{Oseen:SphereEqn}, solving it exactly for flow past a sphere \cite{Oseen10}. Eqn. \ref{Oseen:1sol} reproduces this result in terms of stream functions, a formula first given by Lamb \cite{Lamb1932}. \begin{equation} \label{Oseen:1sol} \psi(r,\theta)=\frac{1}{4}\left(2 r^2 + \frac{1}{r} \right)\sin^2{\theta} - \frac{3}{2 R} \left( 1+ \cos{\theta} \right)\left(1 - e^{-\frac{1}{2}R r \left(1 - \cos{\theta}\right)}\right) \end{equation} This solution is reasonably behaved everywhere, and may be used to obtain Oseen's improved approximation for the drag coefficient (Eqn. \ref{Oseen:dragsphere}). \begin{equation} \label{Oseen:dragsphere} C_D = \frac{6 \pi}{R}\left(1 + \frac{3}{8}R \right) + \Order[R]{2} \end{equation} Oseen obtained this prediction for $C_D$ at the prompting of Noether, and only presented it in a later paper \cite{Ose13}. Burgess also obtained this result \cite{BR16}. Oseen's work was hailed as a resolution to Whitehead's paradox. While it \emph{did} resolve the paradoxes (i.e., it showed how to deal with the inertial terms), and his solution is uniformly valid, it does \emph{not} possess sufficient accuracy to justify the ``$3/8 R$'' term in Eqn. \ref{Oseen:dragsphere}. What Oseen really did was to rigorously derive the leading-order term, proving the validity of Stokes' result (Eqn. \ref{stokes:famoussol}). Remarkably, his new term is also correct! This is a coincidence which will be carefully considered later. This solution (Eqn.
\ref{Oseen:1sol}) is exact in the sense that it satisfies Eqn. \ref{Oseen:SphereEqn}. However, it does not exactly meet the boundary conditions (Eqn. \ref{Sphere BC}) at the surface of the sphere. It satisfies those requirements only approximately, to \Order[R]{1}. This can readily be seen by expanding Eqn. \ref{Oseen:1sol} about $r=1$: \begin{equation} \label{Oseen:SphereExpand} \psi(r,\theta) = \frac{1}{4}\left( 2 r^2 - 3 r + \frac{1}{r} \right)\sin^2{\theta} + \Order[R]{1} \end{equation} Up to \Order[R]{} this is simply Stokes' solution (Eqn. \ref{stokessol}), which vanishes identically at $r=1$. The new terms fail to satisfy the boundary conditions at the surface, but are higher order in $R$. Thus Oseen's solution is an exact solution to an approximate governing equation which satisfies boundary conditions approximately. The implications of this confounding hierarchy of approximations will be discussed below. Lamb contributed a simplified method for both deriving and solving Oseen's equation \cite{Lam11}. His formulation was fruitfully used by later workers (e.g., \cite{Faxen27,Goldstein29,Tomotika50}), and Lamb himself used it both to reproduce Oseen's results and to obtain the first result for the drag on an infinite cylinder. Lamb's basic solution for flow around an infinite cylinder appears in a number of guises. His original solution was given in terms of velocity components, and relied on expansions of modified Bessel functions which kept only the most important terms in the series. This truncation results in a solution (Eqn. \ref{Oseen:Lambcyl1}) which only approximately satisfies the governing equations (Eqn. \ref{Oseen:CylinderEqn}), and is only valid near the surface. 
\begin{subequations} \label{Oseen:Lambcyl1} \begin{eqnarray} u_x &=& 1+ \delta \left(\gamma -\frac{1}{2} + \log{\frac{rR}{4}}+\frac{1}{2}\left(r^2-1\right)\frac{\partial^2}{\partial x^2} \log{r} \right) \\ u_y &=& \frac{\delta}{2} \left(r^2-1 \right) \frac{\partial^2}{\partial x \partial y} \log{r} \\ u_z &=& 0 \end{eqnarray} \end{subequations} In this equation, $\delta = \left(\frac{1}{2}-\gamma-\log{\frac{R}{4}}\right)^{-1}$. Note that, although it only approximately satisfies Oseen's governing equation, this result satisfies the boundary conditions (Eqn. \ref{boundaryconditions}) exactly. Lamb used his solution to derive the first result (Eqn. \ref{Oseen:LambDrag}) for the drag on an infinite cylinder, ending Stokes' paradox: \begin{equation} \label{Oseen:LambDrag} C_D = \frac{4\pi\delta}{R} \end{equation} In his own words, `` ... Stokes was led to the conclusion that steady motion is impossible. It will appear that when the inertia terms are partially taken into account ... that a definite value for the resistance is obtained.'' \cite{Lam11} As with all analyses based on the ad hoc Oseen equation, it is difficult to quantify either the accuracy or the limitations of Lamb's result. Many authors formulate alternate expressions of Lamb's solution by retaining the modified Bessel functions rather than replacing them with expansions valid for small $R$ and $r$. This form is given by Eqn. \ref{Oseen:LambcylClose}, and is related to the incomplete form given by VanDyke (p.
162) \cite{VanDyke1975}.\footnote{Note that VanDyke incorrectly attributes this result to Oseen, rather than to Lamb.} \begin{subequations} \label{Oseen:LambcylClose} \begin{eqnarray} u_x &=& 1 + \delta \left(\frac{x^2}{r^4} - \frac{1}{2r^2} + \frac{2x}{Rr^2} - e^{Rx/2}K_0\left(\frac{Rr}{2}\right) - \frac{x}{r}e^{Rx/2} K_1\left(\frac{Rr}{2}\right)\right) \\ u_y &=& \delta \left( \frac{xy}{r^4} + \frac{2 y}{R r^2} - \frac{y}{r}e^{Rx/2}K_1\left(\frac{Rr}{2}\right)\right) \\ u_z &=& 0 \end{eqnarray} \end{subequations} Here $I_n$ and $K_n$ are modified Bessel functions. In contrast to Eqn. \ref{Oseen:Lambcyl1}, this solution is an exact solution to Oseen's equation (Eqn. \ref{Oseen:CylinderEqn}), but only meets the boundary conditions to first approximation. In particular, it breaks down for harmonics other than $\sin{\theta}$. Whether Eqn. \ref{Oseen:Lambcyl1} or Eqn. \ref{Oseen:LambcylClose} is preferred is a matter of some debate, and ultimately depends on the problem one is trying to solve. Some workers prefer expressions like Eqn. \ref{Oseen:LambcylClose}, which are written in terms of $\vec{u}$. Unlike the solutions for the stream function, these results can be written in closed form. This motivation is somewhat misguided, as applying the boundary conditions nonetheless requires a series expansion. In terms of stream functions, Eqn. \ref{Oseen:LambcylClose} transforms into Eqn. \ref{Oseen:Lambcyl} \cite{Proudman57}. \begin{equation} \label{Oseen:Lambcyl} \psi(r,\theta) = \left(r + \frac{\delta}{2 r} \right) \sin{\theta} - \sum_{n=1}^\infty \delta\phi_n\left(\frac{R r}{2}\right) \frac{r \sin{n\theta}}{n} \end{equation} Here, \begin{displaymath} \phi_n(x) = 2 K_1(x)I_n(x) + K_0(x) \left(I_{n+1}(x) + I_{n-1}(x)\right) \end{displaymath} This result is most easily derived as a special case of Tomotika's general solution (Eqn. \ref{cylinder:1sol}) \cite{Tomotika50}, although Proudman et al. intimate that it can also be directly derived from Lamb's solution (Eqn.
\ref{Oseen:LambcylClose}) \cite{Proudman57}. Bairstow et al. were the first to retain Bessel functions while solving Oseen's equation for flow past a cylinder \cite{Bai23}. They followed Lamb's approach, but endeavored to extend it to larger Reynolds numbers, and obtained the drag coefficient given in Eqn. \ref{Bairstowsol}. When expanded near $R=0$, this solution reproduces Lamb's result for $C_D$ (Eqn. \ref{Oseen:LambDrag}). It can also be obtained from Tomotika's more general solution (Eqn. \ref{cylinder:1sol}). \begin{equation} C_D = \frac{4\pi}{R \left(I_0(R/2) K_0(R/2) + I_1(R/2)K_1(R/2)\right)} \label{Bairstowsol} \end{equation} Bairstow also made extensive comparisons between experimental measurements of $C_D$ and theoretical predictions \cite{Rel14}. He concluded, ``For the moment it would appear that the maximum use has been made of Oseen's approximation to the equations of viscous fluid motion.'' At this point, the ``paradoxes'' were ``resolved'', but by an approximate governing equation which had itself been solved only approximately. This unsatisfactory state of affairs was summarized by Lamb in the last edition of his book: `` ... even if we accept the equations as adequate the boundary-conditions have only been approximately satisfied.'' \cite{Lamb1932} His comment was prompted largely by the work of Hilding Fax\'en, who initiated the next theoretical development: exact solutions to Oseen's approximate governing equation (Eqn. \ref{Oseen:Eqn}) which also exactly satisfy the boundary conditions. Beginning with his thesis and spanning a number of papers, Fax\'en systematically investigated the application of boundary conditions to solutions of Oseen's equations \cite{Fax23,Fax21}. Fax\'en initially studied low Reynolds number flow around a sphere, and he began by re-examining Oseen's analysis. He derived a formula for $C_D$ which differed from Oseen's accepted result (Eqn. \ref{Oseen:dragsphere}).
Fax\'en realized that this was due to differences in the application of approximate boundary conditions; within the limitations of their respective analyses, the results actually agreed. Fax\'en next solved Oseen's equation (Eqn. \ref{Oseen:SphereEqn}), but in bounded, finite spaces where the boundary conditions could be satisfied exactly. He initially studied flow near infinite parallel planes, but ultimately focused on flow around a sphere within a cylinder of finite radius. He aimed to calculate the drag force in a finite geometry, and then take the limit of that solution as the radius of the cylinder tends to infinity. Unfortunately, in the low Reynolds number limit, the problem involves incomplete similarity, and it is incorrect to assume that solutions will be well behaved (e.g., tend to a finite value) as the boundary conditions are moved to infinity. The drag force which Fax\'en calculated involved a number of undetermined coefficients, so he also calculated it using solutions to Stokes' governing equations. This solution \emph{also} has unknown coefficients, which he then calculated numerically. Arguing that the two solutions ought to be the same, he matched coefficients between the two results, substituted the numerical coefficients, and thereby arrived at a drag force based on the Oseen governing equation. This work is noteworthy for two reasons. First, the matching of coefficients between solutions derived from the two different governing equations is prescient, foreshadowing the development of matched asymptotics 30 years later. Secondly, Fax\'en ultimately concluded that Oseen's ``improvement'' (Eqn. \ref{Oseen:dragsphere}) on Stokes' drag coefficient (Eqn. \ref{stokes:famoussol}) is invalid \cite{Fax23}. Fax\'en's analysis demonstrates that --- when properly solved --- Oseen's equation yields the same drag coefficient as Stokes', without any additional terms \cite{Lindgren99}. 
Studies by Bohlin and Haberman concur with Fax\'en's conclusions \cite{Boh60,HS58,Lindgren99}. It is not surprising that his results reject Oseen's new term ($3 R/8$). We previously explained that Oseen's analysis, although it eliminates the ``paradoxes'', does not possess sufficient accuracy to justify more than the lowest order term in Eqn. \ref{Oseen:dragsphere}. However, Fax\'en's results suffer from two problems. First, they cannot be systematically used to obtain better approximations. Second, Fax\'en actually solves the problem for bounded flow, with the boundary conditions prescribed by finite geometries. He uses a limiting procedure to extend his solutions to unbounded flow (with boundary conditions imposed on the uniform stream only at infinity, as in Eqn. \ref{boundaryconditions}). In problems like this, which involve incomplete similarity, it is preferable to work directly in the infinite domain. Fax\'en's meticulous devotion to properly applying boundary conditions culminated in the first complete solution to Eqn. \ref{Oseen:CylinderEqn}. In 1927, he published a general solution for flow around an infinite cylinder which could exactly satisfy arbitrary boundary conditions \cite{Faxen27}. Unfortunately, Fax\'en's solution contains an infinite number of undetermined integration constants, and approximations must be used to determine these constants. Although this destroys the ``exact'' nature of the solution, these approximations can be made in a controlled, systematic fashion --- an improvement over the earlier results of Lamb and Oseen. Although Fax\'en's heroic solution was the first of its kind, his real insight was realizing that approximations in the application of boundary conditions could be as important as the approximations in the governing equations. His formal solutions are in essence a difficult extension of Lamb's reformulation of Oseen's equations, and they inspired several similar solutions.
In 1929, Goldstein completed a similarly herculean calculation to derive a general solution to Oseen's equation for flow around a sphere \cite{Goldstein29}. Like Fax\'en's result for the cylinder, Goldstein's solution can --- in principle --- exactly satisfy the boundary conditions. Unfortunately, it also suffers from the same problems: It is impossible to determine all of the infinitely many integration constants. Goldstein's solution is summarized by Tomotika, who also translated it into the language of stream functions \cite{Tomotika50}. We combine elements from both papers in quoting the solution given in Eqn. \ref{Oseen:Goldstein}. \begin{equation} \label{Oseen:Goldstein} \psi(r,\theta) = -r^2 Q_1(\cos{\theta}) + \sum_{n=1}^\infty \left(B_n r^{-n} + \sum_{m=0}^\infty X_m r^2 \Phi_{m,n}(r R /2) \right) Q_n(\cos{\theta}) \end{equation} In this equation, \begin{subequations} \begin{eqnarray} Q_n(\mu) &=& \int_{-1}^\mu P_n(\mu) \textrm{d}\mu \\ \Phi_{m,n}(x) &=& - \left(\frac{m}{2m-1} \chi_{m-1}(x) + \frac{m+1}{2m+3}\chi_{m+1}(x)\right)f_{m,n}(x) \nonumber \\ & & - \left(\frac{m}{2m+1} f_{m-1,n}(x)+\frac{m+1}{2m+1} f_{m+1,n}(x) \right)\chi_m(x) \\ \chi_m(x) &=& \left(2m+1\right) \left(\frac{\pi}{2x}\right)^{\left(\frac{1}{2}\right)}K_{m+\frac{1}{2}}(x) \\ f_{m,n}(x) &=& (2 n +1)\sum_{j=0}^m \frac{(2j)!(2m-2j)!(2n-2j)!}{(j!)^2(2m+2n-2j+1)!} \nonumber \\ & & \times \left(\frac{(m+n-j)!}{(m-j)!(n-j)!}\right)^2\phi_{m+n-2j}(x) \\ \phi_{n}(x)&=&(2n+1)\left(\frac{\pi}{2x}\right)^{\frac{1}{2}}I_{n+\frac{1}{2}}(x )\nonumber \\f_{m,n}(x) &=&\sum_{j=0}^{m} C_m(k) \frac{\partial^j \phi_n(x)}{\partial x^j} \end{eqnarray} \end{subequations} Here $K_n(x)$ and $I_n(x)$ are Bessel functions, $P_m(x)$ are Legendre polynomials, and $C_m(k)$ is the coefficient of $x^k$ in $P_m(x)$. Note that the second expression for $f_{m,n}(x)$, written in terms of derivatives, is computationally convenient \cite{Goldstein29}. Eqn. 
\ref{Oseen:Goldstein} is given with undetermined constants of integration, $B_n$ and $X_m$. Methods to determine these constants were discussed by both Tomotika \cite{Tomotika50} and Goldstein \cite{Goldstein29}. We will present our own analysis later. There are many different results which have been obtained using the above general solution. The exact formula for the stream function and the drag coefficient depend on what terms in the solution are retained, and how one meets the boundary conditions. In general, retaining $n$ angular terms in Eqn. \ref{Oseen:Goldstein} requires the retention of $m=n-1$ terms in the second sum. In his original paper, Goldstein retains three terms in each series, and thereby calculates the formula for $C_D$ given in Eqn. \ref{GoldsteinCD}. \begin{equation} C_D = \frac{6\pi}{R}\left(1 + \frac{3}{8}R -\frac{19}{320} R^2 + \frac{71}{2560}R^3 - \frac{30179}{2150400} R^4 + \frac{122519}{17203200} R^5 + \Order[R]{6} \right) \label{GoldsteinCD} \end{equation} The coefficient of the last term reflects a correction due to Shanks \cite{Sha55}. To obtain the result in Eqn. \ref{GoldsteinCD}, Goldstein both truncated his solution for the stream function and then expanded the resulting $C_D$ about $R=0$. Van Dyke extended this result to include an additional 24 terms, for purposes of studying the mathematical structure of the series, but not because of any intrinsic physical meaning \cite{VD70}. Van Dyke does not state whether he was including more harmonics in the stream function solution or simply increasing the length of the power series given in Eqn. \ref{GoldsteinCD}. In addition to expressing Goldstein's solution for the geometry of a sphere in terms of stream functions, Tomotika derived his own exact solution to Eqn. \ref{Oseen:CylinderEqn} for flow past a cylinder \cite{Tomotika50}. 
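As a quick numerical aside on Eqn. \ref{GoldsteinCD} (our own check, using the series exactly as truncated above): because of the overall $6\pi/R$ prefactor, the first term Goldstein adds beyond Oseen's result, $-\tfrac{19}{320}R^2$, makes the difference between the two drag coefficients scale \emph{linearly} in $R$:

```python
# Our own sketch comparing the truncated drag series of Goldstein
# (Eqn. GoldsteinCD) against Oseen's two-term result (Eqn. Oseen:dragsphere).
import math

def cd_goldstein(R):
    # Bracketed series of Eqn. GoldsteinCD, truncated at the R^5 term
    # (with Shanks' corrected coefficient).
    s = (1 + 3*R/8 - 19*R**2/320 + 71*R**3/2560
         - 30179*R**4/2150400 + 122519*R**5/17203200)
    return 6.0 * math.pi / R * s

def cd_oseen(R):
    # Eqn. Oseen:dragsphere, truncated.
    return 6.0 * math.pi / R * (1 + 3*R/8)

diffs = [cd_goldstein(R) - cd_oseen(R) for R in (0.05, 0.025)]
ratio = abs(diffs[0] / diffs[1])
print(ratio)   # ~2: halving R roughly halves the difference
```

This is consistent with the discussion above: the extra terms are refinements of the solution to Oseen's \emph{approximate} governing equation, entering at orders that equation does not control.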
Tomotika closely followed the spirit of Lamb \cite{Lam11} and Goldstein \cite{Goldstein29}, and his resulting ``analysis is quite different from Fax\'en's.'' \cite{Tomotika50}. His solution to Eqn. \ref{Oseen:CylinderEqn} is given in Eqn. \ref{cylinder:1sol} below, conveniently expressed in terms of stream functions. Note that Tomotika's result suffers from the same problem as his predecessors': an infinite number of undetermined integration constants. \begin{equation} \label{cylinder:1sol} \psi(r,\theta) = r \sin{\theta} + \sum_{n=1}^\infty \left( B_n r^{-n} + \sum_{m=0}^\infty X_m r \Phi_{m,n}(r R /2) \right) \sin{n \theta} \end{equation} where \begin{eqnarray} \Phi_{m,n}(x) &=& \left(K_{m+1}(x)+K_{m-1}(x)\right)\left(I_{m-n}(x)+I_{m+n}(x)\right) \nonumber \\ & & + K_m(x) \left(I_{m-n-1}(x)-I_{m-n+1}(x)-I_{m+n-1}(x)+I_{m+n+1}(x)\right) \end{eqnarray} As before, $B_n$ and $X_m$ are constants of integration which must be determined by the boundary conditions (Eqn. \ref{Cylinder BC}). As with Goldstein's solution for the sphere, approximations are necessary in order to actually calculate a drag coefficient. By retaining the $m=0$ and $n=1$ terms, Tomotika reproduced Bairstow's result for $C_D$ (Eqn. \ref{Bairstowsol}). He also numerically calculated drag coefficients based on retaining more terms. As with the Goldstein solution, keeping $n$ angular terms requires keeping $m=n-1$ terms in the second sum. The solutions given in Eqns. \ref{Oseen:Goldstein} and \ref{cylinder:1sol} represent the culmination of years of efforts to solve Oseen's equation for both the sphere and the cylinder. These general solutions are also needed in both matched asymptotics and the new techniques presented in this section \cite{Proudman57}. There is a final noteworthy solution to Eqn. \ref{Oseen:CylinderEqn}. In 1954, Imai published a general method for solving the problem of flow past an arbitrary cylindrical body \cite{II54}.
His elegant technique, based on analytic functions, applies to more general geometries. Imai calculated a formula for $C_D$, approximating the functions in his exact solution with power series about $R=0$. His result (re-expressed in our notation) is given in Eqn. \ref{imaisol}. \begin{equation} C_D = \frac{4 \pi}{R} \delta + R \left(-\frac{\pi}{2} + \frac{\pi \delta}{4}-\frac{5 \pi \delta^2}{32} \right) \label{imaisol} \end{equation} Note that Imai's result agrees with Eqn. \ref{Oseen:LambDrag} at lowest order, the only order to which Oseen's equation really applies. A priori, his result is neither better nor worse than any other solution of Oseen's equation. It is simply different. {\it e. Discussion} We have presented Oseen's governing equations for low Reynolds number fluid flow. These equations are a linearized approximation to the Navier-Stokes equations. We have also presented a number of different solutions, for both stream functions and drag coefficients; each of these solutions comes from a unique set of approximations. The approximations which have been made can be put into the following broad categories: \begin{itemize} \item The governing equation --- Oseen's equation approximates the Navier-Stokes equations. \item Solutions which satisfy Oseen's equation only approximately. \item Solutions which satisfy the boundary conditions only approximately. \item Solutions where the stream function is expanded in a power series about $R=0$ after its derivation. \item Approximations in the drag coefficient derivation. \item Drag coefficients which were expanded in a power series about $R=0$ after their derivation. \end{itemize} The first approximation is in the governing equations. Oseen's approximation is an \textit{ad hoc} approximation which, although it can be shown to be self-consistent, requires unusual cleverness to obtain.
Because it is not derived systematically, it can be difficult to understand either its applicability or the limitations of its solutions. There have been years of discussion and confusion about both the equation and its solutions. The short answer is this: Oseen's governing equation is a zeroth order uniformly valid approximation to the Navier-Stokes equations; the equation and its solutions are valid only at \Order[R]{0}. It is not easy to prove this claim rigorously \cite{Fax23}. However, it can be easily shown that Oseen's equations are self-consistent with their solutions, and that the error in the solution is of \Order[R]{1}. One way to explicitly demonstrate this is by substituting a solution of Oseen's equation into the LHS of the Navier-Stokes equations (Eqn. \ref{NonDNS}), thereby estimating the contribution of inertial terms for the flow field characterized by the solution. By repeating that substitution into the LHS of Oseen's equation (Eqn. \ref{Oseen:Eqn}), one can estimate the contribution of inertial terms under Oseen's approximations. Comparing the two results gives an estimate of the inaccuracies in Oseen's governing equations. Concretely, for the sphere, we substitute Eqn. \ref{Oseen:1sol} into the RHS of Eqn. \ref{Oseen:SphereEqn}, and into the RHS of Eqn. \ref{SphereEqn}. The difference between the two results is of \Order[R]{1}. For the cylinder, substitute Eqn. \ref{Oseen:Lambcyl} into the RHS of Eqns. \ref{Oseen:CylinderEqn} and \ref{CylinderEqn}. The difference between the exact and approximate inertial terms is of \Order[R\delta]{}, where $\delta$ is defined as in Eqn. \ref{Oseen:Lambcyl}. These conclusions do not depend on the choice of solution (or on the number of terms retained in Eqn. \ref{Oseen:Lambcyl}). They explicitly show that the governing equation is only valid to \Order[R]{} (or \Order[R\delta]{}).
Consequently, the solutions can only be meaningful to the same order, and the boundary conditions need only be satisfied to that order. With these considerations, almost all of the solutions in the preceding section are equivalent. The only ones which are not --- such as Eqn. \ref{Oseen:Lambcyl1} --- are those in which the solution itself has been further approximated.\footnote{In this case, the Bessel functions have been expanded near $R=0$ and are no longer well behaved as $R \rightarrow \infty$.} Since the formulae for determining $C_D$ (Eqns. \ref{CDcylinder} and \ref{CDsphere}) are of the form $1/R$ + terms linear in stream function + nonlinear terms, a stream function which is valid to \Order[R]{} will result in a drag coefficient which is valid to \Order[R]{0}. Thus, in all of the formulae for $C_D$ which have been presented so far, only the first term is meaningful. For a sphere, this is the Stokes' drag (Eqn. \ref{stokes:famoussol}), and for the cylinder, Lamb's result (Eqn. \ref{Oseen:LambDrag}). We have concluded that it is only good fortune that Oseen's new ``$3/8R$'' term is actually correct. This concurs with the analysis of Proudman et al., who wrote, ``Strictly, Oseen's method gives only the leading term ... and is scarcely to be counted as superior to Stokes's method for the purpose of obtaining the drag.'' \cite{Proudman57} Proudman and Pearson also note that the vast effort expended finding exact solutions to Oseen's equation is ``of limited value.'' Goldstein's formula for $C_D$, for instance, is expanded to \Order[R]{5}, well beyond the accuracy of the original governing equations. The reason for Oseen's good fortune is rooted in the symmetry of the problem. Chester and Van Dyke both observe that the non-linear terms which Oseen's calculation neglects, while needed for a correct stream function, do not contribute to the drag because of symmetry \cite{Che62,VanDyke1975}.
Lindgren argues that Fax\'en proved that, when the boundary conditions are met properly and Oseen's equations solved exactly, the resulting $C_D$ is that obtained by Stokes (Eqn. \ref{stokes:famoussol}) \cite{Lindgren99}. Whether this argument is correct does not matter, as Oseen's additional term is beyond the accuracy of his governing equations. There is another approximation which arises while computing $C_D$ in the context of Oseen's equation. Many workers (e.g., \cite{Tomotika50}) compute the pressure in Eqns. \ref{CDcylinder} and \ref{CDsphere} by integrating Oseen's equation (Eqn. \ref{Oseen:Eqn}), rather than the Navier-Stokes equations (Eqn. \ref{NonDNS}). In Eqns. \ref{cylpressure} and \ref{sphpressure}, we presented a pressure calculation based on the Navier-Stokes equations. Calculating pressure using the linearized Oseen equation introduces an additional approximation into $C_D$. While not necessarily problematic or inconsistent, this approximation can be difficult to identify. {\it f. Two different interpretations} One criticism of the Oseen equation is that it may be obtained by linearizing the Navier-Stokes equations, without regard to the magnitude of inertial and viscous terms. By writing $\vec{u} = \vec{U}_\infty + \delta\vec{u}$, treating $\delta\vec{u}$ as a small perturbation, and expanding Eqn. \ref{NonDNS}, one can formally reproduce Oseen's equations. Clearly, the disturbance to the uniform stream is not negligible near the surface of the solid body, and therefore Oseen's equations ``would appear to be a poor approximation in the neighborhood of the body where the boundary condition $\vec{u}=0$ requires that the true inertial term be small.'' \cite{Happel73} This incorrect argument, put forth as a reason to use Stokes' solutions, overlooks the origins of Oseen's equations. The point of Oseen's approximation is that inertial terms are \emph{only} significant at large values of $\vert r\vert$, where $R\vert r\vert$ is no longer negligible.
Near the surface of the solid, the approximate inertial terms which Oseen introduced are negligible in comparison to the viscous terms, because they are multiplied by the factor $R$ (in the LHS of Eqn. \ref{Oseen:Eqn}). Hence the difference between Oseen's and Stokes' equations in the neighborhood of the sphere will be of \Order[R]{}, and is beyond the scope of either theory. {\it g. Better approximations} The approach of Whitehead was essentially to improve Stokes' solution for the sphere in an iterative fashion \cite{Whitehead89}. By substituting the first approximation into the governing equations, he estimated the neglected terms. He then tried, and failed, to solve the resulting governing equation. This approach fails because the Stokes' equations are not uniformly valid to zeroth order. Oseen's equations are uniformly valid, and, as Proudman remarked, ``there seems little reason to doubt that Whitehead's iterative method, using Oseen's equation rather than Stokes's equation would yield an expansion, each successive term of which would represent a uniformly valid higher approximation to the flow. In each step of the iteration, a lower-order approximation would be used to calculate those particular inertia terms that are neglected ... the expansion generated in this way would seem to be the most economic expansion possible.'' \cite{Proudman57} Proudman did not follow through on this idea, instead developing a solution based on matched asymptotic expansions (see below). In an appendix, Van Dyke relates the unpublished work of C. R. Illingworth (1947) \cite{VanDyke1975}. Illingworth carried through Whitehead's program, deriving a new expression (Eqn. \ref{illingworth}) for $C_D$, which agrees to \Order[R^2\ln{R}]{} with the later results of matched asymptotic calculations (Eqn. \ref{matchedcd}).
\begin{equation} C_D = \frac{6\pi}{R} \left( 1+ \frac{3}{8}R + \frac{9}{40}R^2 \log{R} + 0.1333 R^2 + \frac{81}{320}R^3\log{R} - 0.0034 R^3 + \ldots \right) \label{illingworth} \end{equation} Although this result has since been subsumed by matched asymptotics, it is nonetheless remarkable, substantially improving on all previous drag calculations and rigorously justifying Oseen's $3/8R$ term. There have also been efforts (e.g., \cite{Sha55,VD70}) to ``re-sum'' Goldstein's series expansion for $C_D$ (Eqn. \ref{GoldsteinCD}). However, these results have little intrinsic (as opposed to methodological) value, as Goldstein's result is only valid to \Order[R]{}. If applied to more accurate approximations, such as Eqn. \ref{illingworth}, these methods could be worthwhile. Alas, even improved approximations lack a sufficient number of terms in the expression for $C_D$ to make this practicable. {\it h. Summary} Simply put, Oseen's equations resolved the paradoxes of Stokes and Whitehead, put Stokes' results on firm theoretical ground, and led to the first solution for the drag on a cylinder. Although the Oseen equations happen to provide a uniformly valid first approximation, it is difficult to extend this work to higher order approximations. Figure \ref{OseenSpherePlot1} compares the ``predictions'' of Oseen theory to experimental and numerical data for the drag on a sphere. Again, Oseen's first order theory is, strictly speaking, not adequate to make the predictions with which it is traditionally credited. The theoretical drag coefficients are roughly valid for $R \lesssim 0.2$, with Goldstein's solution (Eqn. \ref{GoldsteinCD}) being slightly better than Oseen's prediction (Eqn. \ref{Oseen:dragsphere}). All are clearly superior to Stokes' formula (Eqn. \ref{stokes:famoussol}). Figure \ref{OseenSpherePlot1} also shows the prediction of Illingworth's second order Oseen theory (Eqn. \ref{illingworth}).
Not surprisingly, it gives the best prediction of $C_D$, particularly when compared to Dennis' numerical results. \begin{figure}[tb] \psfrag{Re}{\mbox{\Large $R$}} \psfrag{CD}{\mbox{\Large $C_D R/6\pi - 1$}} \psfrag{CD2 XXXXXXX}{\tiny{$C_D R/6\pi - 1$}} \psfrag{Maxworthy XXXXXX}{Maxworthy} \psfrag{Pruppacher}{Eqn. \ref{Pruppacher1}} \psfrag{Le Clair}{Le Clair} \psfrag{Dennis}{Dennis} \psfrag{Stokes}{Stokes, Eqn. \ref{stokes:famoussol}} \psfrag{Oseen}{Oseen, Eqn. \ref{Oseen:dragsphere}} \psfrag{Goldstein}{Goldstein, Eqn. \ref{GoldsteinCD}} \psfrag{Illingworth}{Illing., Eqn. \ref{illingworth}} \begin{center} \includegraphics[width=.8 \textwidth]{fig8} \caption{(Color online) Drag on a sphere, experiment vs. Oseen theory \cite{Maxworthy65,LC70,Den71}. The Stokes' solution (Eqn. \ref{stokes:famoussol}) is shown at the bottom for reference. In these coordinates, it is defined by the line $y=0$.} \label{OseenSpherePlot1} \end{center} \end{figure} \begin{figure}[tb] \psfrag{Re}{\mbox{\Large $R$}} \psfrag{CD}{\mbox{\Large $C_D R/4\pi$}} \psfrag{Jayaweera XXXXXX}{Jayaweera} \psfrag{Tritton}{Tritton} \psfrag{Imai}{Imai, Eqn. \ref{imaisol}} \psfrag{Bairstow}{Bairstow, Eqn. \ref{Bairstowsol}} \psfrag{Lamb}{Lamb, Eqn. \ref{Oseen:LambDrag}} \begin{center} \includegraphics[width=.8 \textwidth]{fig9} \caption{(Color online) Drag on a cylinder, experiment vs. Oseen theory \cite{Jay65,Tritton59}.} \label{OseenCylPlot1} \end{center} \end{figure} Figure \ref{OseenCylPlot1} shows the important predictions of Oseen theory for the drag on an infinite cylinder. As with the sphere, the theory is only truly entitled to predict the lowest order term. Figure \ref{OseenCylPlot1} shows decent agreement with the data. Although more ``exact'' solutions (such as Bairstow's and Imai's) do better than Lamb's lowest order solution, this is purely coincidental. Tomotika's solutions exhibit similar characteristics to these two solutions. 
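For reference, the sphere drag laws discussed above are simple enough to evaluate directly. The following sketch tabulates $C_D$ for a few Reynolds numbers (the function names are ours; $R$ is the radius-based Reynolds number used throughout):

```python
import math

def cd_stokes(R):
    # Stokes' law: C_D = 6*pi/R
    return 6 * math.pi / R

def cd_oseen(R):
    # Oseen's correction: C_D = (6*pi/R) * (1 + 3R/8)
    return 6 * math.pi / R * (1 + 3 * R / 8)

def cd_goldstein(R):
    # Goldstein's truncated series, with Shanks' corrected last coefficient
    s = (1 + 3*R/8 - 19*R**2/320 + 71*R**3/2560
         - 30179*R**4/2150400 + 122519*R**5/17203200)
    return 6 * math.pi / R * s

def cd_illingworth(R):
    # Illingworth's second order Oseen theory
    s = (1 + 3*R/8 + 9*R**2*math.log(R)/40 + 0.1333*R**2
         + 81*R**3*math.log(R)/320 - 0.0034*R**3)
    return 6 * math.pi / R * s

for R in (0.05, 0.1, 0.2):
    print(R, cd_stokes(R), cd_oseen(R), cd_goldstein(R), cd_illingworth(R))
```

For $R \lesssim 0.2$ the corrections beyond Stokes are at the percent level, consistent with the figures; as emphasized above, only the leading $6\pi/R$ term is strictly justified by Oseen's equation itself.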
\subsubsection{Matched asymptotics} Efforts to systematically improve Oseen's results led to the development of \emph{matched asymptotic expansions}.\footnote{This technique is also known as the method of \emph{inner and outer expansions} or \emph{double asymptotic expansions}.} This branch of applied mathematics was developed gradually, with systematic work beginning with papers by Kaplun and Lagerstrom et al. \cite{Lag55,Kap54}. Kaplun subsequently used these techniques to calculate the drag on a cylinder, obtaining an entirely new result for $C_D$ \cite{Kaplun67}. Proudman and Pearson later applied matched asymptotics to both the sphere and the cylinder, deriving a new result for the drag on a sphere \cite{Proudman57}. \begin{quote} ``The principle of asymptotic matching is simple. The interval on which a boundary-value problem is posed is broken into a sequence of two or more \emph{overlapping} subintervals. Then, on each subinterval perturbation theory is used to obtain an asymptotic approximation to the solution of the differential equation valid on that interval. Finally, the matching is done by requiring that the asymptotic approximations have the same functional form on the overlap of every pair of intervals. This gives a sequence of asymptotic approximations ... the end result is an approximate solution to a boundary-value problem valid over the entire interval'' \cite{BenderOrzag}. \end{quote} Both low Reynolds number problems are attacked in a similar fashion. The problem is divided into only two regions. The first region is near the surface of the solid body. In this region, inertial terms are small, the approximation of Stokes ($R\approx0$) applies, and the problem is solved perturbatively (in $R$). At each order in $R$, the two no-slip boundary conditions at the surface are applied. One undetermined constant remains (at each order in $R$).
Loosely speaking, it is determined by the boundary condition as $\vert \vec{r} \vert \to \infty$. This expansion is referred to as the \emph{Stokes expansion}. The second region is far from the sphere, where inertial terms are important. In this region, $R\vert r\vert \sim \Order[1]{}$, and the approximations which led to Oseen's governing equation apply. The Oseen problem is then solved perturbatively, and the boundary condition as $|\vec{r}| \to \infty$ is applied. There are two undetermined constants remaining; they are related to the boundary conditions on the surface. This perturbative expansion is referred to as the \emph{Oseen expansion}. The next part of this calculation is \emph{asymptotic matching}, which determines the remaining coefficients.\footnote{At this point, there are two unknown coefficients in the Oseen expansion, and one in the Stokes expansion.} In this process, we expand the Oseen expansion for small $ R |\vec{r}|$, and the Stokes expansion for large $|\vec{r}|$. By choosing the three hitherto undetermined coefficients appropriately, these two limiting forms are made to agree order by order in $R$. For this to be possible, the two asymptotic functional forms must overlap. With the coefficients determined, the two unique, locally valid perturbative approximations are complete. If desired, they can be combined to make a single uniformly valid approximation. While straightforward in theory, asymptotic matching is difficult in practice, particularly for an equation like the Navier-Stokes equation. However, it is still far simpler than alternatives, such as iteratively solving the Oseen equations. Van Dyke's book is an excellent presentation of the many subtleties which arise in applying matched asymptotics to problems in fluid mechanics \cite{VanDyke1975}. We now present the matched asymptotic solutions for Eqns. \ref{CylinderEqn} and \ref{SphereEqn}. 
These solutions result in the ``state of the art'' drag coefficients for both the sphere and the cylinder. {\it a. Sphere} Although Lagerstrom and Cole initially applied matched asymptotics to the problem of the sphere, the seminal work came in an elegant 1957 paper by Proudman and Pearson \cite{Lag55, Proudman57}. Chester and Breach extended this paper via a difficult calculation in 1969 \cite{Chester69}. We summarize the results of both papers here. These workers used a perturbative solution in the Stokes regime of the form: \begin{equation} \psi(r,\mu) = \psi_0 + R \psi_1 + R^2 \log{R} \psi_{2L} + R^2 \psi_2 + R^3\log{R}\,\psi_{3L} + R^3\psi_3 + \ldots \label{PnPStokes1} \end{equation} This rather peculiar perturbative form cannot be determined a priori. Rather, it arose in a fastidious incremental fashion, calculating one term at a time. The procedure of asymptotic matching \emph{required} including terms like $R^2\log{R}$ in the expansion; otherwise, no matching is possible. Note that matched asymptotics gives no explanation for the origin of these singular terms. The first step to finding a perturbative solution in the Oseen region is to define the \emph{Oseen variables}: \begin{displaymath} \rho = R r, \qquad \Psi(\rho,\mu) = R^2 \psi(r,\mu) \end{displaymath} Part of the reason for this transformation can be understood via the derivation of the Oseen equation. The region where inertial effects become important has been shown to be where $ R \vert r\vert \sim \Order[1]{}$. Intuitively, the variable $\rho = R r$ is a natural choice to analyze this regime, as it will be of \Order[1]{}. The precise selection of these variables is justified by a technique from boundary layer theory known as a \emph{dominant balance} argument, which we will revisit later \cite{BenderOrzag}.
The perturbative expansion in the Oseen region takes the form: \begin{equation} \Psi(\rho,\mu) = \Psi_0 + R \Psi_1 + R^2 \Psi_2 + R^3\log{R}\Psi_{3L} + \Order[R]{3} \label{PnPOseen1} \end{equation} Note that there is no $R^2\log{R}$ term in this expansion; none is required to match with the Stokes expansion. As with the Stokes expansion, this form cannot be determined a priori. Proudman and Pearson completely solved for the Stokes' expansion through \Order[R^2\log{R}]{}, and partially solved for the \Order[R]{2} term. They determined the Oseen expansion through \Order[R]{}. Chester and Breach extended these results up to a partial solution for \Order[R]{3} in the Stokes' expansion, and to \Order[R^3\log{R}]{} in the Oseen expansion. The exact form of these expansions is given in Chester and Breach.\footnote{Note that the expression for $\psi_2$ in Proudman and Pearson is incorrect \cite{Proudman57}. There is also a mistake in Chester and Breach \cite{Chester69}, Eqn. 3.5; the coefficient of $c_8$ should be $r^{-3}$ not $r^{-2}$.} Some aspects of these results have been seen before: The leading order in the Stokes' expansion ($\psi_0$) is simply the Stokes solution (Eqn. \ref{stokessol}). In the Oseen expansion, $\Psi_0$ is simply the formula for the uniform stream expressed in Oseen variables. The second term, $\Psi_1$, is the rotational part of Oseen's solution (Eqn. \ref{Oseen:1sol}). Both sets of authors then used their result for the Stokes expansion to calculate $C_D$, which is given in Eqn. \ref{matchedcd}.
\begin{eqnarray} C_D &=& \frac{6 \pi}{R} \bigg( \underbrace{1\vphantom{\bigg(}}_{\textrm{``Stokes''}} \underbrace{\vphantom{\bigg(}+ \frac{3}{8}R}_{\textrm{``Oseen''}} \underbrace{\vphantom{\bigg(}+ \frac{9}{40} R^2 \log{R}}_{\textrm{Proudman}} \nonumber \\ & & \underbrace{\vphantom{\bigg(}+ \frac{9}{40}R^2\left( \gamma + \frac{5}{3} \log{2} - \frac{323}{360} \right) + \frac{27}{80} R^3 \log{R}}_{\textrm{Chester and Breach}} + \Order[R]{3} \bigg) \label{matchedcd} \end{eqnarray} Here $\gamma$ is Euler's constant. This formula reproduces and extends nearly all earlier work. Eqn. \ref{matchedcd} shows both the original results of Proudman and Pearson and the higher order contributions of Chester and Breach \cite{Chester69,Proudman57}. The ``Stokes'' term is Stokes' original result (Eqn. \ref{stokes:famoussol}), which was rigorously justified by Oseen. The ``Oseen'' term is generally credited to Oseen (Eqn. \ref{Oseen:dragsphere}), although it is really beyond the accuracy of his work, and is only justified by this calculation.\footnote{Illingworth's unpublished result also justifies this term.} \begin{figure} \psfrag{Re}{$R$} \psfrag{CD}{$C_D R/6\pi - 1$} \psfrag{CD2 XXXXXX}{\tiny{$C_D \frac{R}{6\pi} - 1$}} \psfrag{Maxworthy XXXXXX}{Maxworthy \cite{Maxworthy65}} \psfrag{Le Clair}{Le Clair \cite{LC70}} \psfrag{Dennis}{Dennis \cite{Den71}} \psfrag{Oseen}{Oseen (\ref{matchedcd})} \psfrag{Proudman}{Proudman (\ref{matchedcd})} \psfrag{Chester and Breach 2}{Chester (\ref{matchedcd})} \begin{minipage}[t]{0.49\columnwidth} \includegraphics[width=1 \textwidth]{fig10a} \label{MatchedSphere1} \end{minipage} \begin{minipage}[t]{0.49\columnwidth} \includegraphics[width=1 \textwidth]{fig10b} \label{MatchedSphere2} \end{minipage} \caption{(Color online) Drag on a sphere, experiment vs. matched asymptotic theory.
Experimental and numerical results are plotted as in Figure \ref{OseenSpherePlot1}.} \label{MatchedSphere0} \end{figure} Figure \ref{MatchedSphere0} compares the results of matched asymptotics (Eqn. \ref{matchedcd}) with experimental data, numerical results, and the basic prediction of Oseen's equation (Eqn. \ref{Oseen:dragsphere}). This plot has been the source of some confusion. Maxworthy examined his data and concluded that $C_D$ as computed by Oseen and Goldstein (Eqn. \ref{GoldsteinCD}) were as good as any matched asymptotics predictions \cite{Maxworthy65}. The calculations of Dennis and Le Clair, however, refute that claim, and demonstrate the systematic improvement that results from matched asymptotics. Neither is it immediately clear that the extra terms in Eqn. \ref{matchedcd} due to Chester and Breach are actually any improvement on the work of Proudman and Pearson. Van Dyke notes, ``This result is disappointing, because comparison with experiment suggests that the range of applicability has scarcely been increased.'' \cite{VanDyke1975}, and Chester himself remarks that ``there is little point in continuing the expansion further.'' At very low Reynolds number, however, the results of Dennis ``indicate that the expression of Chester and Breach gives a better approximation to the drag coefficient than any other asymptotic solution until about [$R=0.3$].'' \cite{Den71} Figure \ref{MatchedSphere0} shows the excellent low $R$ agreement between Dennis' numerical results and Eqn. \ref{matchedcd}. The prediction of matched asymptotics (Eqn. \ref{matchedcd}) is close to Illingworth's second order Oseen theory (Eqn. \ref{illingworth}). Close examination shows that the matched asymptotics results are slightly closer to Dennis' calculations in the limit of low Reynolds number. Strictly speaking, these two theories should only be compared as $R \rightarrow 0$, and in this regime matched asymptotics is superior.
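The diminishing returns of the higher order terms can be made concrete by tabulating the successive contributions to Eqn. \ref{matchedcd}; a minimal sketch (variable names are ours):

```python
import math

def matched_terms(R):
    """Successive contributions to C_D * R/(6*pi) in the matched asymptotics
    drag formula for the sphere (Proudman-Pearson / Chester-Breach)."""
    gamma = 0.5772156649015329  # Euler's constant
    return {
        "Stokes": 1.0,
        "Oseen": 3 * R / 8,
        "Proudman": 9 * R**2 * math.log(R) / 40,
        "Chester-Breach": (9 * R**2 / 40) * (gamma + 5/3 * math.log(2) - 323/360)
                          + 27 * R**3 * math.log(R) / 80,
    }

for R in (0.05, 0.1, 0.3):
    t = matched_terms(R)
    print(R, {name: round(v, 6) for name, v in t.items()})
```

Even at $R=0.3$, the Chester and Breach contribution is well below one percent of the total, which is consistent with Van Dyke's remark that the range of applicability is scarcely increased.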
This is not surprising, as the best matched asymptotic calculation is a higher order approximation than that of Illingworth. {\it b. Cylinder} \label{matchedcylinder} In 1957, Kaplun applied matched asymptotics to the problem of the cylinder, and produced the first new result for $C_D$ \cite{Kaplun57c}. Additional stream function calculations (but without a drag coefficient) were done by Proudman and Pearson \cite{Proudman57}. Kaplun's calculations were extended to higher order by Skinner, whose work also explored the structure of the asymptotic expansions \cite{Ski75}. We summarize results from all three papers here. Near the surface of the cylinder, the Stokes' expansion applies, and the perturbative solution takes the following form. \begin{equation} \psi(r,\theta)=\psi_0(r,\theta,\delta) + R \psi_1(r,\theta,\delta) + R^2 \psi_2(r,\theta,\delta) + \Order[R]{3} \label{matchedcylstokes} \end{equation} Here, $\delta = \delta(R)$ is defined as in Eqn. \ref{Oseen:Lambcyl1}. What is remarkable about the structure of this expansion is its dependence on $\delta$. To be precise, each function $\psi_n$ is actually another perturbative expansion, in $\delta$: \begin{equation} \psi_n(r,\theta,\delta) = \delta F_{n,1}(r,\theta) + \delta^2 F_{n,2}(r,\theta) + \Order[\delta]{3} \label{form1} \end{equation} This formulation is equivalent to an asymptotic expansion in terms of $(\log{R})^{-1}$, which is used by Proudman and Pearson: \begin{equation} \psi_n(r,\theta,\log{R}) = \frac{\tilde{F}_{n,1}(r,\theta)}{(\log{R})^1} + \frac{\tilde{F}_{n,2}(r,\theta)}{(\log{R})^2} + \Order[\frac{1}{(\log{R})^3}]{} \label{form2} \end{equation} This form is much less efficient than that given in Eqn. \ref{form1}, in the sense that more terms in the Stokes and Oseen expansions are needed to obtain a given number of terms in $C_D$. For that reason, expansions in $\delta$ are used here. This curious asymptotic form is necessitated by matching requirements.
It is also the source of a number of bizarre complications. The first implication is that \emph{all} terms in Eqn. \ref{matchedcylstokes} of \Order[R]{} and higher will be transcendentally smaller than \emph{any} of the terms in the expansion for $\psi_0$. This is true asymptotically, as $R \to 0$. The reason for this is that inertial terms \emph{never} enter into any of the governing equations for the Stokes' expansion; they enter only through the matching process with the Oseen expansion. As with the sphere, the first step to finding a perturbative solution in the Oseen region is to transform into the relevant Oseen variables. In this case, \begin{equation} \rho = R r, \qquad \Psi(\rho,\theta) = R \psi(r,\theta) \end{equation} The perturbative expansion which can solve the problem in the Oseen region has the same generic form as Eqn. \ref{matchedcylstokes}. \begin{equation} \Psi(\rho,\theta)=\Psi_0(\rho,\theta,\delta) + R \Psi_1(\rho,\theta,\delta) + \Order[R]{2} \label{matchedcyloseen} \end{equation} The functions $\Psi_n(\rho,\theta,\delta)$ can also be expressed as a series in $\delta(R)$. However, the formula cannot be written down as conveniently as it could in Eqn. \ref{form1}. The first two terms take the forms given in Eqn. \ref{eqn:form2}. \begin{subequations} \begin{eqnarray} \Psi_0(\rho,\theta,\delta) &=& F_{0,0}(\rho,\theta) + \delta F_{0,1}(\rho,\theta) + \Order[\delta]{2} \\ \Psi_1(\rho,\theta,\delta) &=& \delta^{-1} F_{1,-1}(\rho,\theta) + F_{1,0}(\rho,\theta) + \Order[\delta]{} \end{eqnarray} \label{eqn:form2} \end{subequations} Kaplun and Proudman both considered only terms of \Order[R]{0} in the Stokes' expansion. As $R \to 0$, this is an excellent approximation, as all higher terms are transcendentally smaller.
In this limit, the Stokes expansion takes a particularly simple form: \begin{equation} \psi(r,\theta) = \psi_0(r,\theta,\delta) = \sum_{n=1}^\infty a_n \delta^n \left(2 r \log{r} -r + \frac{1}{r} \right)\sin{\theta} \end{equation} Kaplun obtained terms up to and including $n=3$. Proudman et al. also obtained expressions for the Oseen expansion, albeit expressed as a series in $(\log{R})^{-1}$. Skinner extended Kaplun's Stokes expansion to include terms up to \Order[\delta]{3}, \Order[R\delta]{}, and \Order[R^2\delta]{} \cite{Ski75}. He obtained approximate solutions for the Oseen expansion, including terms up to \Order[\delta]{} and \Order[R]{}. The lowest order solutions in the Oseen expansion are related to the expression for a uniform stream and the solution of Lamb (Eqn. \ref{Oseen:Lambcyl1}). Using his solution, Kaplun computed a new result for the drag coefficient (Eqn. \ref{cylinder:KaplunCD}) which agrees with Lamb's result (Eqn. \ref{Oseen:LambDrag}) at lowest order. \begin{equation} \label{cylinder:KaplunCD} C_D =\frac{4 \pi}{R} \left( \delta - k \delta^3\right) \end{equation} Here, $k = \int_0^{\infty} K_0(x) K_1(x) \left( x^{-1} I_1(2 x) - 4 K_1(x)I_1(x)+1\right) \textrm{d}x \approx 0.87$. Skinner extended these results, showed that terms of \Order[R]{} do not contribute to the drag, and calculated the first transcendentally smaller contribution, which is of \Order[R]{2}. His result is given in Eqn. \ref{cylinder:SkinnerCD}. \begin{equation} \label{cylinder:SkinnerCD} C_D=\frac{4 \pi}{R} \left( \delta - k \delta^3 + \Order[\delta]{4} - \frac{R^2}{32} \left(1 - \frac{\delta}{2} + \Order[\delta]{2} \right) + \Order[R]{4} \right) \end{equation} The value of these new terms is questionable, and Skinner himself noted that they are likely negligible in comparison to the neglected terms of \Order[\delta]{4}. Asymptotically this is unequivocally true. Figure \ref{MatchedCylPlot1} compares the predictions of matched asymptotic theory with Lamb's result (Eqn.
\ref{Oseen:LambDrag}) based on Oseen's equation. Although both theories agree as $R \rightarrow 0$, matched asymptotic results seem no better than Lamb's solution. The comparison is further complicated by the scatter in the different experiments; matched asymptotics agree more closely with Tritton's measurements, while Lamb's solution agrees better with Jayaweera's. We draw two conclusions from Figure \ref{MatchedCylPlot1}: Both theories break down for $R\gtrsim 0.1$ and neither theory is demonstrably more accurate. Even more disappointingly, Skinner's result is nowhere better than Kaplun's --- it is actually worse at higher Reynolds numbers. \begin{figure}[tb] \psfrag{Re}{\mbox{\large $R$}} \psfrag{CD}{\mbox{\large $C_D R/4\pi$}} \psfrag{Jayaweera XXXXXX}{Jayaweera} \psfrag{Tritton}{Tritton} \psfrag{Kaplun}{Kaplun, Eqn. \ref{cylinder:KaplunCD}} \psfrag{Skinner}{Skinner, Eqn. \ref{cylinder:SkinnerCD}} \psfrag{Lamb}{Lamb, Eqn. \ref{Oseen:LambDrag}} \begin{center} \includegraphics[width=.8 \textwidth]{fig11} \caption{(Color online) Drag on a cylinder, experiment vs. matched asymptotic theory \cite{Jay65, Tritton59}.} \label{MatchedCylPlot1} \end{center} \end{figure} Part of the problem with the matched asymptotics approach arises from the need for two expansions, in $\delta$ and $R$. Because infinitely many orders of $\delta$ are needed before \emph{any} higher orders in $R$ are relevant, infinitely many terms in the Oseen expansion must be calculated before the second order term in the Stokes expansion. This is inefficient, and is the reason for Skinner's lack of success. A recent paper by Keller et al. solved this problem numerically \cite{Kel96}. They developed a method to sum \emph{all} of the orders of $\delta$ for the first two orders of $R$. Their ``beyond all orders'' numerical results prove the importance of these higher order terms.
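As an aside, Kaplun's constant $k$ in Eqn. \ref{cylinder:KaplunCD} can be checked by direct numerical quadrature. A sketch (assuming SciPy is available; the exponentially scaled Bessel functions $K_\nu(x)e^{x}$ and $I_\nu(x)e^{-x}$ are used so that the growing and decaying exponentials in the integrand cancel analytically rather than overflowing):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import ive, kve

def integrand(x):
    # K0(x)*K1(x)*(I1(2x)/x - 4*K1(x)*I1(x) + 1), rewritten with the scaled
    # functions kve(v,x) = K_v(x)*e^x and ive(v,x) = I_v(x)*e^(-x):
    #   K0*K1*I1(2x)/x  ->  kve0*kve1*ive(1,2x)/x       (exponentials cancel)
    #   K1(x)*I1(x)     ->  kve1*ive(1,x)               (exponentials cancel)
    #   K0*K1           ->  kve0*kve1*exp(-2x)
    k0, k1 = kve(0, x), kve(1, x)
    return (k0 * k1 * ive(1, 2 * x) / x
            + np.exp(-2 * x) * k0 * k1 * (1 - 4 * k1 * ive(1, x)))

k, err = quad(integrand, 0, np.inf)
print(k)  # approximately 0.87
```

The quadrature reproduces the quoted value $k \approx 0.87$.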
When such terms are accounted for, the resulting $C_D$ is vastly improved from Kaplun's, and is superior to any of the analytic solutions discussed here. Interestingly, it seems to agree very well with the experiments of Tritton, although it is difficult to tell from the plot in their paper, which does not remove the leading order divergence. \subsubsection{Other theories} Amongst the community interested in settling velocities and sedimentation, there are many theoretical models of the drag on a sphere. These workers specify $C_D$ as a function of $R$ by means of a ``sphere drag correlation.'' An overview of these formulae is given by Brown \cite{Bro03}. These results are generally semi-empirical, relying on a blend of theoretical calculations and phenomenologically fit parameters to predict $C_D$ over a large range of Reynolds number. While practically useful, these results are not specific to low Reynolds numbers, and cannot be derived from the Navier-Stokes equations. They address a different problem, and will not be further considered here. One other semi-empirical theory is due to Carrier \cite{Car53}. He argued that the inertial corrections in the Oseen equation were over-weighted, and multiplied them by a coefficient which he constrained to be between $0$ and $1$. Consequently, his theory is in some sense ``in between'' that of Stokes and that of Oseen. He ultimately determined this coefficient empirically. \subsubsection{Terminology} \label{section:terminology} Confusing terminology, particularly in the matched asymptotics literature, riddles the history of these problems. We previously detailed discrepancies in the definition of $C_D$. In this section we explain the sometimes conflicting terms used in the matched asymptotics literature, introduce a convention which eliminates confusion, and also explain how some authors adopt different definitions of the Reynolds number.
Matched asymptotics literature discusses numerous perturbative expansions, each of which is valid in a different regime, or ``domain of validity.'' Different authors use different labels for these expansions. Most workers define the ``inner'' expansion to be the expansion which is valid inside the \emph{boundary layer} \cite{BenderOrzag}. A boundary layer is a region of rapid variation in the solution. The ``outer'' expansion is valid outside of the boundary layer, where the solution is slowly varying \cite{BenderOrzag}. Problems with multiple boundary layers require additional terminology. The outer expansion is based ``upon the primary reference quantities in the problem,'' and the inner expansion is usually obtained by stretching the original variables by dimensionless functions of the perturbation parameter \cite{VanDyke1975}. The appropriate stretching, or scaling, functions are obtained through a \emph{dominant balance} analysis, which can be difficult. After this rescaling, the argument of the inner expansion will be of $\mathcal{O}(1)$ inside the boundary layer. Accompanying these inner and outer expansions are ``inner variables'', ``outer variables'', ``inner limits'', and ``outer limits''. The low Reynolds number flow problems are complicated by the fact that some authors, including Van Dyke, also define expansions on the basis of their physical location \cite{VanDyke1975}. The ``outer'' limit is valid far from the solid body ($|\vec{r}|$ is large), and the ``inner'' limit is valid near the surface of the body ($|\vec{r}| \approx 1$). This is consistent with yet another definition, based on proximity to the origin of the chosen coordinate system.
In a review paper, Lagerstrom and Casten define the ``inner limit'' as being ``valid near the origin,'' and the ``outer limit'' as being ``valid except near the origin.'' \cite{Lag72} Part of their motivation for this new definition was to distinguish between the \emph{domain of validity} of an expansion, and the limit process by which it is obtained. Finally, Kaplun refers to the inner and outer limits based on their correspondence to high Reynolds number flow \cite{Kaplun57c}. He identifies the Stokes' approximation as the ``inner'' limit, and Oseen's equation as the ``outer'' limit. Part of the confusion arises because of disagreements over the location of the boundary layer. Van Dyke claims that ``it is the neighborhood of the point at infinity'', while Kaplun argues that the boundary layer is near the surface. Definitions referenced to the boundary layer disagree when there are disagreements about its location. To eliminate this confusion, a preferable alternative notation has emerged from subsequent work \cite{Proudman57,Kaplun57a}. We follow this notation, defining the ``Oseen'' and ``Stokes'' expansions, which were used in the previous section. The Oseen expansion is valid far from the surface, and is expressed in stretched coordinates. The Stokes limit is valid near the surface of the sphere, where $r$ is small, and is expressed in the original variables.\footnote{Van Dyke's book is not consistent in relating ``inner'' and ``outer'' expansions to the Stokes and Oseen expansions.} Matched asymptotics workers also discuss \emph{uniform approximations}, \emph{intermediate} expansions, or \emph{composite} expansions \cite{BenderOrzag, Kaplun67, VanDyke1975}. The basic idea is that the Stokes and Oseen expansions can be blended together to form a single expression which is valid everywhere. This result reduces to the two original expansions when expanded asymptotically in the two limits. How to calculate a uniform expansion is discussed below.
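The blending just described can be tried out on a standard textbook example rather than on the flow problem itself: the boundary-layer problem $\epsilon y'' + y' + y = 0$, $y(0)=0$, $y(1)=1$, whose exact solution is elementary. The Python sketch below assembles the composite (uniform) approximation from the familiar leading-order outer solution $e^{1-x}$, inner solution $e(1-e^{-x/\epsilon})$, and overlap $e$; the example and its leading-order pieces are taken from standard perturbation-theory textbooks, not from the flow calculations of this paper.

```python
import math

def exact(x, eps):
    # Exact solution of eps*y'' + y' + y = 0 with y(0) = 0, y(1) = 1
    s = math.sqrt(1.0 - 4.0 * eps)
    m_slow = (-1.0 + s) / (2.0 * eps)   # slowly varying root
    m_fast = (-1.0 - s) / (2.0 * eps)   # boundary-layer root
    return ((math.exp(m_slow * x) - math.exp(m_fast * x)) /
            (math.exp(m_slow) - math.exp(m_fast)))

def composite(x, eps):
    # Leading-order composite expansion: outer + inner - overlap
    y_outer = math.exp(1.0 - x)                           # valid away from x = 0
    y_inner = math.exp(1.0) * (1.0 - math.exp(-x / eps))  # valid inside the layer
    y_overlap = math.exp(1.0)                             # common (matching) part
    return y_outer + y_inner - y_overlap

# Maximum error of the composite approximation on a coarse grid
errors = {}
for eps in (0.05, 0.01):
    errors[eps] = max(abs(exact(i / 10.0, eps) - composite(i / 10.0, eps))
                      for i in range(11))
```

The composite error shrinks roughly linearly in $\epsilon$, as expected for a leading-order construction, while satisfying both boundary conditions.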
There are also minor differences in the definition of the Reynolds number, $R$. Some authors define $R$ based on the diameter of the solid, while others base it on the radius. This factor of $2$ can be difficult to track. We define the Reynolds number using the \emph{radius} of the fixed body: $R = |\vec{u}_{\infty}| a/\nu$. It is worth noting that Kaplun \cite{Kaplun67}, Tomotika \cite{Tomotika50}, Goldstein \cite{Goldstein38}, Liebster \cite{Liebster26}, Thom \cite{Tho33} and Tritton \cite{Tritton59} all use the \emph{diameter}. \subsection{Uniformly valid approximations} \label{section:uniform} As mentioned previously, the inner and outer expansions may be combined into a single, \emph{uniformly valid} approximation, which is applicable everywhere. For a function of one variable, the uniform approximation is constructed as in Eqn. \ref{benderuniform} \cite{BenderOrzag}. \begin{equation} \label{benderuniform} y_{\textrm{uniform}}(x) = y_{\textrm{outer}}(x) + y_{\textrm{inner}}(x) - y_{\textrm{overlap}}(x) \end{equation} $y_{\textrm{overlap}}(x)$ consists of the common ``matching'' terms between the inner and outer expansions. Kaplun demonstrates that $y_{\textrm{uniform}}(x) \to y(x)$ as the expansion variable $R \to 0$, i.e. the uniform approximation tends to the exact solution everywhere \cite{Kaplun67}. To be more precise, if the matched asymptotics solution is constructed to \Order[R]{1}, then \begin{equation*} \lim_{R\rightarrow 0} y(x) - y_{\textrm{uniform}}(x) \sim \Order[R]{1} \end{equation*} As a matter of practice, calculating the uniform solution is mechanistic. First, express the inner and outer expansions in the same coordinates; in our case, express the Oseen expansion in Stokes variables.\footnote{Note that this transformation affects both the radial coordinates and the stream function, and that it differs for the sphere and cylinder.} Alternatively, one can express the Stokes expansion in Oseen variables. 
Next, express both solutions as a power series in the expansion parameter, $R$. By construction, the Stokes expansion is already in this form, but the transformed Oseen expansion is not, and must be expanded to the same power in $R$ as the Stokes solution. From these two power series we can identify the ``overlap'' function, $y_{\textrm{overlap}}$. This function consists of the terms which are in common between the two expansions, and is usually obtained by inspection. Of course, $y_{\textrm{overlap}}$ is only valid to the same order as the original matched asymptotics solution, and higher order terms should be discarded. The uniformly valid approximation is then obtained using $y_{\textrm{overlap}}$ and Eqn. \ref{benderuniform}. \subsubsection{The correct way to calculate $C_D$} Proudman and Pearson argue that ``uniformly valid approximations \emph{per se} are not usually of much physical interest ... In the present problem, for instance, it is the Stokes expansion that gives virtually all the physically interesting information.'' \cite{Proudman57} All matched asymptotics calculations are based solely on the Stokes expansion, and are therefore influenced by the Oseen expansion only via the boundary conditions. For instance, the drag coefficient is calculated using only the Stokes expansion. Other properties of the stream function, such as the size of the dead water wake directly behind the sphere or cylinder, are also calculated using the Stokes expansion. In this section we argue that this approach is incorrect, and that the uniformly valid approximation should be used to calculate all quantities of interest. By adopting this viewpoint, we obtain new results for $C_D$, and demonstrate that these drag coefficients systematically improve on previous matched asymptotics results. Matched asymptotics workers argue that the drag coefficient is calculated at the surface of the solid (Eqns. \ref{CDsphere}, \ref{CDcylinder}), where $r=1$.
Since the Oseen solution applies for large $r$, the Stokes solution applies for small $r$, and the Stokes solution ought to be used to calculate $C_D$. In fact, by construction, any uniformly valid approximation must reduce to the Stokes expansion in the limit as $R r \to 0$. Curiously, proponents of the Oseen equation argue conversely \cite{Happel73,Faxen27}. They claim that because the Oseen expansion \emph{happens} to apply everywhere, it should be used to calculate all sorts of quantities of interest, including $C_D$. In fact, Happel and Brenner wrote a book essentially devoted to this premise \cite{Happel73}. In fairness, it must be mentioned that all of these authors were well aware of their choices, and motivated their approach pragmatically: They obtained useful solutions to otherwise intractable problems. In reality, both approaches converge to the exact solution for suitably small Reynolds numbers. However, for small but non-infinitesimal $R$, the best estimate of derivative quantities such as $C_D$ is obtained not by using the Stokes expansion, but by using a uniformly valid approximation calculated with both the Stokes and Oseen expansions. Such a drag coefficient must agree with results derived from the Stokes expansion as $R r \to 0$, and it can \emph{never} be inferior. Moreover, this approach makes determination of the drag coefficient's accuracy straightforward; it is determined solely by the accuracy of the uniform expansion, without any need to be concerned about its domain of applicability. We now calculate the drag coefficients for both the sphere and the cylinder using uniformly valid approximations, using previously published inner and outer expansions. These corrections are small but methodologically noteworthy, and are absent from the existing literature. {\it a. Cylinder} Although the state-of-the-art matched asymptotics solutions are due to Kaplun, it is more convenient to work with stream functions \cite{Kaplun57c}.
Skinner conveniently combines previous work, providing a concise summary of Stokes and Oseen stream functions \cite{Ski75}. We base our derivation of a uniformly valid approximation on the results in his paper. The Stokes expansion is given by Eqn. \ref{cylinder:stokes} \cite{Ski75}. \begin{equation} \label{cylinder:stokes} \psi(r,\theta) = \frac{1}{2}\left(\delta - k \delta^3 + \Order[\delta]{4} \right) \left(2 r \log{r} - r + \frac{1}{r} \right) \sin{\theta} + \Order[R]{1} \end{equation} The Oseen expansion is given by Eqn. \ref{cylinder:oseenU}. \begin{equation} \label{cylinder:oseenU} \Psi(\rho,\theta) = \left( \rho \sin{\theta} - \delta \sum_{n=1}^\infty \phi_n \left(\frac{\rho}{2}\right) \frac{\rho}{n} \sin{n \theta} + \Order[\delta]{2} + \Order[R]{1} \right) \end{equation} With these results, creating the uniform approximation and calculating $C_D$ is straightforward. The only subtlety is the sine series in Eqn. \ref{cylinder:oseenU}. However, Eqn. \ref{cylinder:convenientdrag} tells us that, for the purposes of calculating the drag, only the coefficient of $\sin{\theta}$ matters. We calculate the overlap between the two functions by expanding Eqn. \ref{cylinder:oseenU} about $\rho=0$. The result is given by Eqn. \ref{cylinder:overlap}. \begin{equation} \label{cylinder:overlap} \psi_{\textrm{overlap}}(r,\theta) = \delta \frac{r}{2} \left(2 \log{r} - 1\right) \sin{\theta} + \Order[\delta]{2} + \Order[R]{1} \end{equation} Combining this with the Oseen and Stokes expansions, we obtain the uniformly valid approximation given by Eqn. \ref{cylinder:uniform}.
\begin{eqnarray} \label{cylinder:uniform} \psi_{\textrm{uniform}}(r,\theta) &=& \left(r + \delta \left(\frac{1}{2r} - r \phi_1(\frac{r R}{2}) \right) + k \delta^3 \left( \frac{r}{2} - r \log{r} - \frac{1}{2r} \right) \right) \sin{\theta} - \nonumber \\ & &\delta \sum_{n=2}^\infty \phi_n \left(\frac{R r}{2}\right) \frac{r}{n} \sin{n \theta} + \Order[\delta]{2} + \Order[R]{1} \end{eqnarray} By substituting this result into Eqn. \ref{cylinder:convenientdrag}, we obtain a new result for $C_D$: \begin{equation} \label{cylinder:uniformCD} C_D = \frac{\pi \delta \left( 24 - 32 k \delta^3 + 6 R^2 \phi_1^{''}(R/2) + R^3 \phi_1^{'''}(R/2) \right)}{8 R} \end{equation} Fig. \ref{fig:cylinder:uniform} compares Eqn. \ref{cylinder:uniformCD} with Kaplun's usual result (Eqn. \ref{cylinder:KaplunCD}). The new drag coefficient (Eqn. \ref{cylinder:uniformCD}) is a small but \emph{systematic} improvement over the results of Kaplun. Because they are asymptotically identical up to \Order[\delta]{4} and \Order[R]{}, they agree as $R \rightarrow 0$. However, at small but non-infinitesimal $R$, our new result is superior. Comparing Figures \ref{fig:cylinder:uniform} and \ref{MatchedCylPlot1}, we can also see a second surprise: The new result betters Skinner's $C_D$, even though they were based on the same stream functions. If Skinner had used a uniformly valid approximation, his result would not have misleadingly appeared inferior to Kaplun's. \begin{figure}[tb] \psfrag{Re}{\mbox{\large $R$}} \psfrag{CD}{\mbox{\large $C_D R/4\pi$}} \psfrag{Jayaweera XXXXXX}{Jayaweera} \psfrag{Tritton}{Tritton} \psfrag{Kaplun}{Kaplun, Eqn. \ref{cylinder:KaplunCD}} \psfrag{Uniform CD}{Eqn. \ref{cylinder:uniformCD}} \begin{center} \includegraphics[width=.8 \textwidth]{fig12} \caption{(Color online) Drag on a cylinder, comparing uniformly valid calculations with matched asymptotics results \cite{Jay65,Tritton59}.} \label{fig:cylinder:uniform} \end{center} \end{figure} {\it b.
Sphere} As with the cylinder, calculating $C_D$ from a uniformly valid expansion yields an improved result. However, there is a substantial difference in this case. Although matched asymptotics calculations have been done through \Order[R]{3} in Eqn. \ref{PnPStokes1} and \Order[R^3\log{R}]{} in Eqn. \ref{PnPOseen1}, the higher order terms in the Oseen expansion are impossible to express in a simple analytic form. Asymptotic expressions exist (and have been used for matching), but these cannot be used to construct a uniformly valid expansion. Consequently, we can only compute the uniform expansion through \Order[R]{}, and its predictions can only be meaningfully compared to the first two terms in Eqn. \ref{matchedcd}. The solutions for the Stokes and Oseen expansions are given in Chester and Breach, and are quoted here \cite{Chester69}. The Stokes expansion: \begin{eqnarray} \psi(r,\mu) &=&-\frac{1}{2}\left(2r^2-3r+\frac{1}{r}\right)Q_1(\mu) - R \frac{3}{16} \Bigg( \left(2r^2-3r+\frac{1}{r} \right) Q_1(\mu) - \nonumber \\ & &\left(2r^2-3r+1-\frac{1}{r}+\frac{1}{r^2}\right) Q_2(\mu) \Bigg) + \Order[R^2\log{R}]{} \label{sphere:unif:stokes} \end{eqnarray} The Oseen expansion: \begin{equation} \Psi(\rho,\mu)=-\rho^2Q_1(\mu) - R \frac{3}{2} \left(1+\mu\right)\left(1-e^{-\frac{1}{2}\rho\left(1-\mu\right)} \right) + \Order[R]{2} \label{sphere:unif:oseen} \end{equation} By taking the $\rho \to 0$ limit of Eqn. \ref{sphere:unif:oseen}, we can calculate the overlap between these two expansions. The result is given in Eqn. \ref{sphere:unif:overlap}. \begin{equation} \label{sphere:unif:overlap} \psi_{\textrm{overlap}}(r,\mu) = \frac{r}{8}\left(12 - 8 r \right)Q_1(\mu) + \frac{r R}{8}\left(3 r Q_2(\mu) - 3 r Q_1(\mu) \right) + \Order[R]{2} \end{equation} Eqns. 
\ref{sphere:unif:overlap}, \ref{sphere:unif:oseen}, and \ref{sphere:unif:stokes} can be combined to form a uniformly valid approximation: \begin{equation} \psi_{\textrm{uniform}}(r,\mu)=\psi(r,\mu)-\psi_{\textrm{overlap}}(r,\mu) +\frac{\Psi(r R,\mu)}{R^2} + \Order[R^2\log{R}]{} \label{sphere:uniform} \end{equation} Due to the $e^{-\frac{1}{2}\rho\left(1-\mu\right)}$ term, we cannot use the simple expression for $C_D$ (Eqn. \ref{sphere:simpledrag}). Instead, we must use the full set of Eqns. \ref{spherestreamfunction}, \ref{CDsphere}, and \ref{sphpressure}. After completing this procedure, we obtain a new result for $C_D$, given by Eqn. \ref{sphere:uniformcd}. \begin{eqnarray} \label{sphere:uniformcd} C_D&=&\frac{6 \pi}{R} \Bigg( \frac{e^{-2 R}}{320 R^3} \Big( 40 e^{R} \left(1728+1140R + 335R^2+56R^3+6R^4 \right) - 60 R \left(1+R\right) \nonumber \\ & & + e^{2 R} \big(-69120 + 23580 R -2420 R^2 +20 (10 + \pi) R^3 + 10 (18-\pi)R^4 - 8 R^5 \nonumber \\ & & -3R^6 \big) \Big) - \frac{e^{-R/2} \pi I_1(R/2)}{4 R} \Bigg) + \Order[R]{1} \end{eqnarray} This result is plotted in Figure \ref{UniformSphere1}. Asymptotically, it agrees with the matched asymptotics predictions to \Order[1]{}, as it must, and reproduces the $3/8R$ ``Oseen'' term. As $R$ increases, however, the uniform calculation becomes superior to the first two terms of the matched asymptotic $C_D$. Although it is a much higher order solution than either of the other two results, we show the full matched asymptotics prediction for comparison. \begin{figure}[tb] \psfrag{Re}{\mbox{\large $R$}} \psfrag{CD}{\mbox{\large $C_D R/6\pi - 1$}} \psfrag{Maxworthy XXXXXX}{Maxworthy} \psfrag{Le Clair}{Le Clair} \psfrag{Dennis}{Dennis} \psfrag{Oseen}{Eqn. \ref{matchedcd}, 2 terms} \psfrag{Chester and Breach 2}{Eqn. \ref{matchedcd}, 5 terms} \psfrag{Uniform}{Eqn. \ref{sphere:uniformcd}} \begin{center} \includegraphics[width=.8 \textwidth]{fig13} \caption{(Color online) Drag on a sphere, experiment vs. 
theory \cite{Maxworthy65,LC70,Den71}.} \label{UniformSphere1} \end{center} \end{figure} \section{THE RENORMALIZATION GROUP APPLIED TO LOW $R$ FLOW} \label{chap:RG} \subsection{Introduction to the renormalization group} In 1961, Lagerstrom proposed the first of a number of ``model problems'', ordinary differential equations which exhibited many of the same asymptotic features as the low Reynolds number problems. They were used to study and develop the theory of matched asymptotic expansions. The mathematical solution of these problems is closely analogous to the actual solutions of the Navier-Stokes equations. A review of these equations, and of their matched asymptotic solutions, is given in Lagerstrom \cite{Lag72}. The relevant models can be summarized by the following equation: \begin{equation} \frac{\mathrm{d}^2 u}{\mathrm{d} x^2} + \frac{n-1}{x}\frac{\mathrm{d} u}{\mathrm{d} x} + u \frac{\mathrm{d} u}{\mathrm{d} x} + \delta \left(\frac{\mathrm{d} u}{\mathrm{d} x} \right)^2 = 0 \label{modeleqn1} \end{equation} This ODE is subject to the boundary conditions $u(\epsilon) = 0$, $u(\infty)=1$. In this equation, $n$ corresponds to the number of spatial dimensions ($n=2$ for the cylinder, $n=3$ for the sphere). $\delta = 0$ characterizes incompressible flow, and $\delta=1$ corresponds to compressible flow. This equation is similar to the Navier-Stokes equations expressed in Oseen variables. There are fundamental differences between the structure of the incompressible and compressible flow equations. These model problems are posed in Hinch, albeit in terms of ``Stokes'' (rather than ``Oseen'') variables \cite{Hinch91}. Hinch begins by examining the model describing incompressible flow past a sphere.
He next examines incompressible flow past a cylinder, which he calls ``A worse problem.'' Finally, he treats compressible flow past a cylinder, which he dubs ``A terrible problem.'' These problems, which have historically been the proving ground of matched asymptotics, were recently solved using new Renormalization Group (RG) techniques in two papers by Chen et al. \cite{CGO96,CGO94}. These techniques afford both quantitative and methodological advantages over traditional matched asymptotics. The RG approach derives all of the subtle terms (e.g., $R^2 \log{R}$) which arise during asymptotic matching, demonstrating that the origin of these terms lies in the need to correct flaws inherent in the underlying expansions. Moreover, RG does not require multiple rescalings of variables, and its results, while asymptotically equivalent to those of matched asymptotics, apply over a much larger range (e.g., they extend to higher $R$). In particular, Chen et al. solved Hinch's first model, which describes incompressible flow past a sphere ($n=3$, $\delta=0$), as well as the model for both kinds of flow past a cylinder ($n=2$, $\delta=0,1$) \cite{CGO96,CGO94}. In a notation consistent with Hinch, they termed these models the ``Stokes-Oseen caricature'' and the ``terrible problem.'' The dramatic success of the RG techniques in solving the model problems inspired their application to the original low Reynolds number flow problems. That is our primary purpose here, as the low Reynolds number problems are the traditional proving ground for new methodologies. We will show that the RG techniques perform well when applied to these problems. RG produces results superior to and encompassing the predictions of matched asymptotics. More importantly, the RG calculations are considerably simpler than matched asymptotics, requiring half the work.
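These model problems are also easy to solve numerically, which makes them a convenient testbed. The sketch below integrates the Stokes-Oseen caricature ($n=3$, $\delta=0$) by shooting on the unknown slope $u'(\epsilon)$, and compares it with the leading-order uniform approximation $u(x) \approx 1 - e_2(x)/e_2(\epsilon)$, where $e_2(x) \equiv \int_x^\infty e^{-t} t^{-2}\,\mathrm{d}t$; that formula follows from the same renormalization argument detailed below for the terrible problem. The step size, integration cutoff, and slope bracket are ad hoc choices of this sketch, not taken from the cited papers.

```python
import math

GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def e1(x):
    # Exponential integral E1(x) = -gamma - ln(x) - sum_{k>=1} (-x)^k/(k*k!),
    # a convergent series adequate for the modest arguments used here.
    total = -GAMMA - math.log(x)
    term = 1.0
    for k in range(1, 60):
        term *= -x / k              # term = (-x)^k / k!
        total -= term / k
    return total

def e2(x):
    # e2(x) = integral_x^inf e^-t t^-2 dt = e^-x/x - E1(x)
    return math.exp(-x) / x - e1(x)

def caricature(eps, h=0.01, x_max=50.0):
    # Shooting solution of u'' + (2/x) u' + u u' = 0, u(eps) = 0, u(inf) = 1.
    def integrate(slope, x_stop):
        # Fourth-order Runge-Kutta march from x = eps with u = 0, u' = slope
        f = lambda x, u, up: -2.0 * up / x - u * up
        x, u, up = eps, 0.0, slope
        while x < x_stop - 0.5 * h:
            k1u, k1p = up, f(x, u, up)
            k2u, k2p = up + 0.5*h*k1p, f(x + 0.5*h, u + 0.5*h*k1u, up + 0.5*h*k1p)
            k3u, k3p = up + 0.5*h*k2p, f(x + 0.5*h, u + 0.5*h*k2u, up + 0.5*h*k2p)
            k4u, k4p = up + h*k3p, f(x + h, u + h*k3u, up + h*k3p)
            u += (h / 6.0) * (k1u + 2*k2u + 2*k3u + k4u)
            up += (h / 6.0) * (k1p + 2*k2p + 2*k3p + k4p)
            x += h
        return u
    lo, hi = 0.0, 100.0             # bracket for the unknown initial slope
    for _ in range(30):             # bisect on the far-field condition u(x_max) = 1
        mid = 0.5 * (lo + hi)
        if integrate(mid, x_max) < 1.0:
            lo = mid
        else:
            hi = mid
    slope = 0.5 * (lo + hi)
    return lambda x: integrate(slope, x)

eps = 0.1
u_num = caricature(eps)
u_unif = lambda x: 1.0 - e2(x) / e2(eps)  # leading-order uniform approximation
```

For $\epsilon = 0.1$ the shooting solution and the leading-order uniform approximation agree to within a few percent across the whole range.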
The utility of the RG approach is most easily seen through an example, which will also provide a framework for understanding the analysis presented in subsequent sections. Several pedagogical examples can also be found in the references (e.g., \cite{CGO94,CGO96,Oon00,GoldenfeldBook}). We begin here with an analysis of the most complicated model problem, the ``terrible problem,'' which caricatures compressible flow past a cylinder. \subsubsection{Detailed analysis of the ``terrible problem''} Although the ``terrible problem'' is solved in a paper by Chen et al., we re-examine it here in considerably more detail, as its solution is closely analogous to those of the low Reynolds number flow problems. This switchback problem is exceptionally delicate\footnote{Hinch notes, ``It is unusual to find such a difficult problem ...'' \cite{Hinch91}.}, requiring the calculation of an infinite number of terms for the leading order asymptotic matching. There are pitfalls and ambiguities in applying RG techniques, even to the ``terrible problem,'' which, while terrible, is considerably simpler than the real low Reynolds number problems. Understanding these subtleties in this simpler context provides essential guidance when attacking the Navier-Stokes equations. We want to solve the ODE given in Eqn. \ref{terribleeqn}, subject to the boundary conditions \ref{terriblebc}. This equation can be derived from Eqn. \ref{modeleqn1} by setting $n=2$, $\delta = 1$, and transforming to the ``Stokes'' variables, $r=x/\epsilon$. Unlike Eqn. \ref{modeleqn1}, Eqn. \ref{terrible} is obviously a singular perturbation in $\epsilon$, which has been removed from the boundary conditions. The last term in the equation vanishes when $\epsilon=0$.
\begin{subequations} \begin{eqnarray} \frac{d^2 u(r)}{d r^2} + \frac{1}{r}\frac{d u(r)}{d r} + \left(\frac{d u(r)}{d r}\right)^2 + \epsilon u(r) \frac{d u(r)}{d r} =0 \quad \label{terribleeqn}\\ u(1)=0, \quad u(r=\infty)=1 \quad \label{terriblebc} \end{eqnarray} \label{terrible} \end{subequations} This problem cannot be solved exactly, although numerical solution is straightforward. Trouble arises due to the \emph{boundary layer}\footnote{A boundary layer is a region of rapid variation in the solution, $y(t)$.} located near $r = \infty$. RG analysis requires that we work in the ``inner'' variable for our approximation to capture the correct behavior near the boundary layer\footnote{Here we use ``inner'' in the usual sense \cite{BenderOrzag}. For further discussion, see Section \ref{section:terminology}}. This requirement may also be qualitatively motivated by arguing that one must choose coordinates to ``stretch out'' the boundary layer so that it can be well characterized by our approximate solution. To determine the appropriate change of variables, we need to analyze Eqn. (\ref{terrible}) using a \emph{dominant balance} argument \cite{BenderOrzag}. As it stands, the first three terms of Eqn. (\ref{terribleeqn}) will dominate, since $\epsilon$ is small. The rescaling $x=\epsilon r$ yields inner Eqn. (\ref{terrible2}). This, of course, is the same equation originally given by Lagerstrom (Eqn. \ref{modeleqn1}). \begin{subequations} \begin{eqnarray} \frac{d^2 u(x)}{d x^2} + \frac{1}{x}\frac{d u(x)}{d x} + \left(\frac{d u(x)}{d x}\right)^2 + u(x) \frac{d u(x)}{d x} =0 \quad \label{terribleeqn2}\\ u(\epsilon)=0, \quad u(x=\infty)=1 \quad \label{terriblebc2} \end{eqnarray} \label{terrible2} \end{subequations} The next step in the RG solution is to begin with the ansatz that the solution to Eqn. (\ref{terrible2}) can be obtained from a perturbation expansion (Eqn. \ref{naive terrible}). 
We fully expect this ansatz to fail, since we have a singular perturbation in our ODE. We therefore refer to this starting point as the \emph{na\"\i ve} perturbation expansion. \begin{equation} \label{naive terrible} u(x) = u_0(x) + \epsilon u_1(x) + \epsilon^2 u_2(x) + \mathcal{O}(\epsilon^3) \end{equation} Collecting powers of $\epsilon$, we obtain differential equations for $u_0(x)$, $u_1(x)$, etc: \begin{eqnarray} \label{terriblegovern1} \Order{0} &:& \frac{u_0^{'}(x)}{x} + u_0(x)u_0^{'}(x)+u_0^{'}(x)^2+u_0^{''}(x)=0 \\ \label{terriblegovern2} \Order{1} &:& u_1u_0^{'}+\frac{u_1^{'}}{x}+u_0u_1^{'}+2u_0^{'}u_1^{'}+u_1^{''}=0 \\ \label{terriblegovern3} \Order{2} &:& u_2 u_0^{'} + u_1^{'} u_1 + u_0 u_2^{'} + (u_1^{'})^2 + 2 u_0^{'}u_2^{'} +\frac{u_2^{'}}{x} + u_2^{''} = 0 \end{eqnarray} {\it a. $\mathcal{O}(\epsilon^0)$ solution} The first complication of the terrible problem arises when we attempt to solve Eqn. \ref{terriblegovern1}, a nonlinear ODE. Although one solution --- $u_0(x)=A_0$ --- is seen by inspection, an additional integration constant is not forthcoming, and our solution to the $\mathcal{O}(\epsilon^0)$ problem cannot satisfy both of the boundary conditions (Eqn. \ref{terriblebc2}). The resolution to this quandary is simple: Ignore the problem and it will go away; continue constructing the na\"\i ve solution as if $u_0(x)=A_0$ were wholly satisfactory. The qualitative idea is that the $\mathcal{O}(\epsilon^0)$ solution is the uniform field which we have far from any disturbance source. Why is this acceptable? The RG method is robust against shortcomings in the na\"\i ve expansion. We know that singular perturbation problems cannot be solved by a single perturbation expansion. We therefore expect problems, such as secular behavior, to arise in our solution for the na\"\i ve expansion. RG techniques can be used to remove these flaws from the perturbative solution, turning it into a uniformly valid approximation \cite{CGO96}.
It does not matter whether these defects arise from an incomplete solution for $u_0(x)$, the intrinsic structure of the equation, or a combination of the two. To solve the terrible problem (and later the low Reynolds number problems), we must exploit this flexibility. For subsequent calculations, there are two ways to proceed. First, we may retain $A_0$ as an arbitrary constant, one which will ultimately be renormalized in the process of calculating a uniformly valid approximation. Alternatively, we may set $A_0=1$, satisfying the boundary condition at $x=\infty$.\footnote{Meeting the boundary condition at $x=\epsilon$ results only in the trivial solution $u_0(x)=0$.} This unconventional approach to the RG calculation effectively shifts the freedom that usually comes with the $\mathcal{O}(\epsilon^0)$ constants of integration into the $\mathcal{O}(\epsilon^1)$ solution. This artifice greatly simplifies subsequent calculations, and is invaluable in treating the Navier-Stokes equations. Moreover, these two approaches are equivalent, as we now show. {\it b. $\mathcal{O}(\epsilon^1)$ solution} If $u_0(x)=A_0$, Eqn. \ref{terriblegovern2} simplifies to Eqn. \ref{terrible1a}. \begin{equation} \label{terrible1a} \frac{d^2 u_1}{d x^2} + \left( \frac{1}{x}+A_0\right) \frac{d u_1}{d x}=0 \end{equation} The solution is: $u_1(x)=B_0 + B_1 e_1(A_0 x)$, where $e_n(x)\equiv \int_x^\infty e^{-t} t^{-n} \text{d} t$. Notice that the first term is linearly dependent on the $u_0(x)$ solution. There are many opinions regarding how to utilize this degree of freedom \cite{Kun95,Woodruff95}. In our approach, one is free to choose the homogeneous solutions of $u_0, u_1, \textrm{etc.}$ for convenience. The only constraint\footnote{Of course the solution must also satisfy the governing equation.} is that the ``na\"\i ve'' solution (Eqn. \ref{naive terrible}) must have a sufficient number of integration constants to meet the boundary conditions.
In this example, that means two constants of integration. Different choices of particular solutions will ultimately result in different approximate solutions to the ODE. However, all of these solutions will agree within the accuracy limitations of the original approximation (in this case the na\"\i ve expansion). This can be shown explicitly. In this example, as in the low Reynolds number problems, we choose a particular solution which simplifies subsequent calculations. Setting $B_0 = 0$ (note that this is \emph{not} the same as a redefinition of the constant $A_0$), we obtain the solution: \begin{equation} \label{Oeterriblesol} u(x)=A_0+ \underbrace{\epsilon B_1 e_1(A_0 x)}_{\textrm{divergent as } x \rightarrow 0} + \mathcal{O}(\epsilon^2) \end{equation} The second term in Eqn. \ref{Oeterriblesol} diverges logarithmically as $x\rightarrow 0$. One may argue that this divergence is irrelevant, since the range of the original variable is $r \in [1,\infty)$, and numerical solutions demonstrate that the solutions to Eqn. \ref{terrible} in $[1,\infty)$ diverge when extended to $r < 1$. But the argument that the divergence in Eqn. \ref{Oeterriblesol} is an intrinsic part of the solution (and therefore should not be considered problematic) is incorrect. Although the original variable, $r$, is limited to $r \in [1,\infty)$, the transformed variable, $x = \epsilon r$, has the range $x \in [0,\infty)$. This occurs because there are no restrictions on the lower limit of $\epsilon$. The divergence exhibited by the second term of Eqn. \ref{Oeterriblesol} must be removed via renormalization in order to turn the flawed na\"\i ve solution into a uniformly valid approximation. This divergence arises for two reasons. First, we are perturbing about an $\mathcal{O}(\epsilon^0)$ solution which is deficient; it is missing the second integration constant (and concomitant fundamental solution). 
More fundamentally, Eqn. \ref{naive terrible} attempts to solve a singular perturbation problem with a regular expansion, an approach which must fail. The RG technique solves these problems by restructuring the na\"\i ve expansion and eliminating the flaws in $u_0(x)$. Although $A_0$ is simply a constant of integration when $\epsilon = 0$, it must be modified when $\epsilon \neq 0$. We will absorb the divergences into a modification, or renormalization, of the constant of integration $A_0$. Formally, one begins by ``splitting'' the secular terms, replacing $e_1(A_0 x)$ by $e_1(A_0 x) - e_1(A_0 \tau) + e_1(A_0 \tau)$, where $\tau$ is an arbitrary position. This results in Eqn. \ref{Oeterriblesol2}: \begin{equation} \label{Oeterriblesol2} u(x)=A_0+ \epsilon B_1 (e_1(A_0 x) - e_1(A_0 \tau)+ e_1(A_0 \tau)) + \mathcal{O}(\epsilon^2) \end{equation} Since $\tau$ is arbitrary, it can be chosen such that $e_1(A_0 x) - e_1(A_0 \tau)$ is non-secular (for a given $x$). The divergence is now contained in the last term of Eqn. (\ref{Oeterriblesol2}), and is exhibited as a function of $\tau$. It is dealt with by introducing a multiplicative renormalization constant, $Z_1 = 1 + \sum_{i=1}^\infty a_i(\tau) \epsilon^i$, and then renormalizing $A_0$ as $ A_0 = Z_1 A_0(\tau)$.\footnote{$A_0$ is the only constant which can be renormalized to remove the divergences, as $B_1$ is proportional to the secular terms.} The coefficients $a_i(\tau)$ can then be chosen\footnote{Note that the coefficients \emph{must} also be independent of $x$.} order by order so as to eliminate the secular term in Eqn. (\ref{Oeterriblesol2}). Substituting, and choosing $a_1$ to eliminate the final term of Eqn.
\ref{Oeterriblesol2}, we obtain \begin{equation} \label{terribleRG1} u(x)=A_0(\tau)+ \epsilon B_1 (e_1(A_0(\tau) x) - e_1(A_0(\tau) \tau)) + \mathcal{O}(\epsilon^2) \end{equation} where $a_1$ satisfies \begin{equation} a_1(\tau)= \frac{-B_1 e_1(\tau A_0(\tau)(1+\sum_{i=1}^\infty a_i(\tau) \epsilon^i))}{A_0(\tau)} \end{equation} Note that to obtain Eqn. \ref{terribleRG1} we needed to expand $e_1$ about $\epsilon=0$. Unusually, in this equation the renormalized constant ($A_0(\tau)$) appears in the argument of the exponential integral; this complicates the calculation. We will later show how to avoid this problem by restructuring our calculations. Qualitatively, the idea underlying Eqn. \ref{terribleRG1} is that boundary conditions far away (from $x = \epsilon$) are unknown to our solution at $x \gg \epsilon$, so that $A_0$ is undetermined at $x = \tau$. RG determines $A_0$ in this regime through the renormalization constant $Z_1$ (which depends on $\tau$). Afterward there will be new constants which can be used to meet the boundary conditions. The RG condition states that the solution $u(x)$ cannot depend on the arbitrary position $\tau$. This requirement can be implemented in one of two ways. First, since $\partial_\tau u(x) = 0$, apply $\partial_\tau$ to the RHS of Eqn. \ref{terribleRG1} and set the result equal to zero: \begin{equation} \label{terribleRGans1} A_0^{'}(\tau) + \epsilon B_1 \left( \frac{e^{-A_0(\tau) \tau}}{\tau} + \frac{A_0^{'}(\tau)}{A_0(\tau)}\left( e^{-A_0(\tau) \tau} - e^{-A_0(\tau) x} \right) \right) + \mathcal{O}(\epsilon^2) = 0 \end{equation} The next step in RG is to realize that Eqn. \ref{terribleRGans1} implies that $A_0^{'}(\tau) \sim \mathcal{O}(\epsilon)$. Retaining only terms of $\mathcal{O}(\epsilon)$, we obtain: \begin{equation} \label{terribleRGans2} \frac{d A_0(\tau)}{d \tau} + \epsilon B_1 \left( \frac{e^{-A_0(\tau) \tau}}{\tau} \right) + \mathcal{O}(\epsilon^2) = 0 \end{equation} In principle, we simply solve Eqn.
\ref{terribleRGans2} for $A_0(\tau)$. Unfortunately, that is not possible, due to the presence of $A_0(\tau)$ in the exponential. This complication also occurs in other switchback problems, as well as in the low Reynolds number problems. Eqn. \ref{terribleRGans2} can be solved by an iterative approach: Initially set $\epsilon=0$, and solve for $A_0(\tau)=\alpha_0$, a constant. Next substitute this result into the $\mathcal{O}(\epsilon)$ term in Eqn. \ref{terribleRGans2}, solving for $A_0(\tau)$ again: \begin{equation} \label{A0ans} A_0(\tau)= \alpha_0 + \epsilon B_1 e_1(\alpha_0 \tau) \end{equation} In this solution, we have a new integration constant, $\alpha_0$. Having obtained this result, we again must exploit the arbitrary nature of $\tau$. Setting $\tau = x$, and substituting into Eqn. \ref{terribleRG1}, we obtain: \begin{equation} \label{terribleRG1b} u(x)=\alpha_0 + \epsilon B_1 e_1(\alpha_0 x) + \mathcal{O}(\epsilon^2) \end{equation} But this is identical to the original solution (Eqn. \ref{terrible1a})! What have we accomplished? This renormalized result is guaranteed to be uniformly valid, $\forall x$. The renormalization procedure ensures that the logarithmic divergence in Eqn. \ref{terribleRG1b} is required by the solution, and is \emph{not} an artifact of our approximations. Obtaining the same answer is a consequence of solving Eqn. \ref{terribleRGans1} iteratively. Had we been able to solve that equation exactly, this disconcerting coincidence would have been avoided. We obtain the final solution to Eqn. \ref{terribleeqn} by applying the boundary conditions (Eqn. \ref{terriblebc2}) to Eqn. \ref{terribleRG1b}: $\alpha_0 = 1$, $B_1 = -1/(\epsilon e_1(\epsilon))$. Lastly, we undo the initial change of variables ($r=x/\epsilon$), yielding the result given in Eqn. \ref{terribleO1sol}. As shown in Chen et al., this is an excellent approximate solution \cite{CGO96}. 
\begin{equation} \label{terribleO1sol} u(r) = 1 - \frac{e_1(r \epsilon)}{e_1(\epsilon)} + \Order{2} \end{equation} Furthermore, if we expand the coefficient $B_1=-1/(\epsilon e_1(\epsilon))$ for $\epsilon \rightarrow 0^+$, $\epsilon B_1(\epsilon) \sim -1/\ln{(1/\epsilon)} - \gamma / \ln^2{(1/\epsilon)}$. These logarithmic functions of $\epsilon$ are exactly those which are required by asymptotic matching! These ``unexpected'' orders in $\epsilon$ make the solution of this problem via asymptotic matching very difficult. They must be deduced and introduced order by order, so as to make matching possible. In the RG solution, they are seen to arise naturally as a consequence of the term $1/e_1(\epsilon)$. There are several other equivalent ways to structure this calculation. It is worthwhile to examine these (and to demonstrate their equivalence), in order to streamline our approach for the low Reynolds number problems. The first variation occurs in how we apply the RG condition. Rather than applying $\partial_\tau$ to Eqn. \ref{terribleRG1}, we may also realize that the original constants of integration, $ A_0 = Z_1(\tau) A_0(\tau)$, must be independent of $\tau$. Hence the ``alternative'' RG equation: \begin{displaymath} \frac{\partial A_0}{\partial \tau} = \frac{\partial (Z_1(\tau) A_0(\tau))}{\partial \tau} = 0 \end{displaymath} Substituting $Z_1 = 1 + \epsilon \left( -B_1 e_1(\tau A_0(\tau)(1+\sum_{i=1}^\infty a_i(\tau) \epsilon^i))\right)/A_0(\tau) + \Order{2}$, one obtains: \begin{equation} \label{terribleRGans2b} A_0^{'}(\tau) + \epsilon B_1 \left(\frac{e^{-A_0(\tau) \tau}}{\tau} + \frac{A_0^{'}(\tau)}{A_0(\tau)} e^{-A_0(\tau) \tau} \right) + \Order{2} = 0 \end{equation} Because this implies $A_0^{'}(\tau) \sim \Order{1}$, Eqn. \ref{terribleRGans2b} simplifies to Eqn. \ref{terribleRGans2} (to within \Order{2}), and these two methods of implementing the RG condition are equivalent. 
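The logarithmic expansion above is easy to confirm numerically; the sketch below compares the combination $\epsilon B_1 = -1/e_1(\epsilon)$ against the two-term logarithmic series (the test values of $\epsilon$ are arbitrary assumptions, not taken from the text):

```python
from mpmath import mp, e1, log, euler

mp.dps = 30

def exact(eps):
    # epsilon*B_1 = -1/e_1(epsilon)
    return -1/e1(eps)

def two_term(eps):
    # -1/ln(1/eps) - gamma/ln^2(1/eps)
    L = log(1/eps)
    return -1/L - euler/L**2

# The relative error is O(1/ln^2(1/eps)), so it shrinks (slowly) as eps -> 0+.
for eps in (mp.mpf('1e-3'), mp.mpf('1e-6')):
    rel = abs(exact(eps) - two_term(eps))/abs(exact(eps))
    assert rel < 0.05
```

The slow, purely logarithmic decay of the error is precisely why these orders are so awkward to anticipate in a matched-asymptotics calculation.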
In addition to this dichotomous implementation of the RG condition, there is yet another way to structure the analysis from the outset: We set $A_0=1$ in the zeroth order solution, and rely on the robustness of the RG approach to variations in our perturbative solution. With this $u_0(x)$ solution, there is no longer any freedom in our choice of $u_1(x)$ integration constants --- both are needed to meet boundary conditions. In this approach, our na\"\i ve perturbative solution is: \begin{equation} \label{Oeterriblesol2b} u(x)=1 + \epsilon ( B_0 + \underbrace{B_1 e_1(x)}_{\textrm{divergent}}) + \Order{2} \end{equation} Proceeding as before, replace $e_1(x)$ by $e_1(x) - e_1(\tau) + e_1(\tau)$: \begin{displaymath} u(x)=1 + \epsilon \left(B_0 + B_1 \left(e_1(x) - e_1(\tau) + e_1(\tau)\right)\right) + \Order{2} \end{displaymath} Again introduce renormalization constants ($Z_1 = 1 + \sum_{i=1}^\infty a_i(\tau) \epsilon^i$, $Z_2 = 1 + \sum_{i=1}^\infty b_i(\tau) \epsilon^i$), and renormalize $B_0$, $B_1$ as $B_0 = Z_1 B_0(\tau)$ and $B_1 = Z_2 B_1(\tau)$. In fact, only $B_0$ needs to be renormalized, as the $B_1$ term multiplies the secular term and consequently cannot absorb that divergence. This can be seen systematically by attempting to renormalize both variables. With an appropriate choice of coefficients, $a_1 = -B_1(\tau) e_1(\tau)$ and $b_1 = 0$, the final term in the last equation is eliminated. $b_1=0$ demonstrates that $B_1$ does not need to be renormalized at \Order{1}. The resulting equation is given in Eqn. \ref{terribleRGb2}. \begin{equation} \label{terribleRGb2} u(x)=1 + \epsilon \left(B_0(\tau) + B_1(\tau) \left(e_1(x) - e_1(\tau)\right)\right) + \Order{2} \end{equation} We did not actually need to determine $a_1$ or $b_1$ in order to write the above equation; it could have been done by inspection. Determination of these quantities is useful for two reasons. First, it helps us see which secular terms are being renormalized by which integration constants. 
Secondly, it allows the second implementation of the RG condition which was described above. This can sometimes simplify calculations. Using the first implementation (requiring $\partial_\tau u(x) = 0$), and using Eqn. \ref{terribleRGb2}, we obtain: \begin{equation} \label{terribleRGb1} B_0^{'}(\tau) + B_1^{'}(\tau)\left( e_1(x) - e_1(\tau) \right) + B_1(\tau) \frac{e^{-\tau}}{\tau} = \Order{1} \end{equation} This can only be true $\forall x$ if $B_1^{'}(\tau) = 0$, or $B_1(\tau) = \beta_2$, a constant (as expected). Knowing this, we solve for $B_0(\tau) = \beta_1 + \beta_2 e_1(\tau)$. Substituting this result into Eqn. \ref{terribleRGb2}, and setting $\tau = x$, we obtain the renormalized solution: \begin{equation} \label{terribleRGsolv2} u(x) = 1 + \epsilon \left( \beta_1 + \beta_2 e_1(x) \right) \end{equation} The boundary conditions in Eqn. \ref{terriblebc2} are satisfied if $\beta_1 = 0$ and $\beta_2 = -1/(\epsilon e_1(\epsilon))$. Returning to the original variable ($r=x/\epsilon$), we obtain: \begin{equation} \label{terribleO1solb} u(r) = 1 - \frac{e_1(r \epsilon)}{e_1(\epsilon)} + \Order{2} \end{equation} This is identical to Eqn. \ref{terribleO1sol}, demonstrating the equivalence of these calculations. The latter method is preferable, as it avoids the nonlinear RG equation (Eqn. \ref{terribleRGans2}). We will use this second approach for analyzing the low Reynolds number problems. The RG analysis has shown us that the logarithmic divergences present in Eqn. \ref{Oeterriblesol} are an essential component of the solution, Eqn. \ref{terribleO1solb}. However, we must work to \Order{2} in order to see the true utility of RG and to understand all of the nuances of its application. {\it c. \Order{2} solution} We base our treatment of the \Order{2} solution on the second analysis presented above. Through \Order{1}, the na\"\i ve solution is: $u_0(x) = 1$, $u_1(x)= B_0 + B_1 e_1(x)$. Substituting into Eqn. 
\ref{terriblegovern3}, we obtain the governing equation for $u_2(x)$: \begin{equation} \label{terriblegovern3b} u_2^{''} + \left( 1 + \frac{1}{x} \right)u_2^{'} = \frac{B_0 B_1 e^{-x}}{x} - \frac{B_1^2 e^{-2x}}{x^2} + \frac{B_1^2 e^{-x} e_1(x)}{x} \end{equation} This has the same homogeneous solution as $u_1(x)$, $u_2^{(h)}(x) = C_0 + C_1 e_1(x)$. A particular solution is: \begin{displaymath} u_2^{(p)}(x) = -B_1 B_0 e^{-x} + 2 B_1^2 e_1(2x) - \frac{1}{2}B_1^2 e_1^2(x) - B_1^2 e^{-x}e_1(x) \end{displaymath} As discussed previously, we are free to choose $C_0$, $C_1$ to simplify subsequent calculations. The constants $B_0$, $B_1$ are able to meet the boundary conditions, so there is no need to retain the \Order{2} constants: We choose $C_0 = 0$, $C_1 = 0$. In this case, the differing choices of $\{C_0, C_1\}$ correspond to a redefinition of $\{B_0, B_1\}$ plus a change of \Order{3}, i.e. $\tilde B_0 = B_0 + \epsilon C_0$.\footnote{This was not true at the previous order.} Our na\"\i ve solution through \Order{2} is thus: \begin{eqnarray} \label{o2terriblenaive} u(x) &=& 1 + \epsilon \left( B_0 + \underline{B_1 e_1(x)}\right) + \\ & & \epsilon^2 \left( -B_1 B_0 e^{-x} + \underline{2 B_1^2 e_1(2x)} - \underline{\underline{\frac{1}{2}B_1^2 e_1^2(x)}} - \underline{B_1^2 e^{-x}e_1(x)} \right) + \Order{3} \nonumber \end{eqnarray} The underlined terms in this expression are divergent as $x \rightarrow 0$; the doubly underlined term is the most singular ($\sim \ln^2 x$). RG can be used to address the divergences in Eqn. \ref{o2terriblenaive}. However, there is a great deal of flexibility in its implementation; while most tactics yield equivalent approximations, there are significant differences in complexity. We now explore all of the organizational possibilities in the terrible problem, an exercise which will subsequently guide us through the low Reynolds number calculations. The first possibility is to treat only the most secular term at \Order{2}. 
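One can verify the quoted particular solution directly: applying the linear operator $u \mapsto u^{''} + \left(1 + 1/x\right)u^{'}$ (which annihilates the homogeneous solutions $C_0 + C_1 e_1(x)$) to $u_2^{(p)}$ must reproduce the inhomogeneous terms. A numerical sketch with mpmath; the values of $B_0$, $B_1$ and the test point $x_0$ are arbitrary assumptions:

```python
from mpmath import mp, mpf, e1, exp, diff

mp.dps = 30
B0, B1 = mpf('0.3'), mpf('-1.2')   # arbitrary test values (assumption)

def u2p(x):
    # the particular solution u_2^{(p)}(x) quoted in the text
    return (-B1*B0*exp(-x) + 2*B1**2*e1(2*x)
            - B1**2*e1(x)**2/2 - B1**2*exp(-x)*e1(x))

def rhs(x):
    # the inhomogeneous terms of the governing equation for u_2
    return (B0*B1*exp(-x)/x - B1**2*exp(-2*x)/x**2
            + B1**2*exp(-x)*e1(x)/x)

x0 = mpf('0.7')   # arbitrary test point (assumption)
lhs = diff(u2p, x0, 2) + (1 + 1/x0)*diff(u2p, x0)
assert abs(lhs - rhs(x0)) < mpf('1e-15')
```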
The doubly underlined term dominates the divergent behavior, and contains the most important information needed for RG to construct a uniformly valid approximation. The approximation reached by this approach is necessarily inferior to those obtained utilizing additional terms. However it is nonetheless valid and useful, and eliminating most of the \Order{2} terms simplifies our calculations. Discarding all \Order{2} terms except the doubly underlined term, we begin the calculation in the usual manner, but come immediately to the next question: Ought we replace $e_1^2(x)$ by $e_1^2(x) - e_1^2(\tau) + e_1^2(\tau)$ or by $\left(e_1(x)-e_1(\tau)\right)^2 + 2 e_1(x) e_1(\tau) - e_1^2(\tau)$? Each option eliminates the divergence in $x$, replacing it with a divergence in $\tau$. Both merit consideration. Beginning with the latter, the renormalized perturbative solution is: \begin{eqnarray} \label{o2terrible1} u(x) &=& 1 + \epsilon \left( B_0(\tau) + B_1(\tau) \left(e_1(x)-e_1(\tau) \right) \right) - \epsilon^2 \left(\frac{1}{2}B_1(\tau)^2 \left(e_1(x)-e_1(\tau) \right)^2 \right) \nonumber \\ & & + \epsilon^2 \left(\textrm{less divergent terms} \right) + \Order{3} \end{eqnarray} Applying the RG condition ($\partial_\tau u(x) = 0$) results in a lengthy differential equation in $\tau$. Because we want our solution to be independent of $x$, we group terms according to their $x$ dependence. Recognizing that $B^{'}_1(\tau) \sim \Order{1}$, $B^{'}_0(\tau) \sim \Order{1}$, and working to \Order{3}, we obtain two equations which must be simultaneously satisfied: \begin{subequations} \label{o2terriblesol1} \begin{eqnarray} B_1^{'}(\tau) - \frac{\epsilon e^{-\tau} B_1^2(\tau)}{\tau} &=& \Order{3} \label{b1sol} \\ \frac{e^{-\tau} \left( B_1(\tau) + \epsilon B_1^2(\tau)e_1(\tau)\right) }{\tau}-e_1(\tau)B_1^{'}(\tau)+B_0^{'}(\tau) &=& \Order{3} \label{b2sol} \end{eqnarray} \end{subequations} Eqn. 
\ref{b1sol} has the solution \begin{displaymath} B_1(\tau) = \frac{1}{\beta_1 + \epsilon e_1(\tau)} + \Order{2} \end{displaymath} Substituting this result into Eqn. \ref{b2sol}, and solving, we obtain the result \begin{displaymath} B_0(\tau)= \beta_0 + \frac{\ln{(\beta_1 + \epsilon e_1(\tau))}}{\epsilon} + \Order{2} \end{displaymath} Both $\beta_0$ and $\beta_1$ are constants of integration which can be later used to meet the boundary conditions. Substituting these solutions into Eqn. \ref{o2terrible1}, setting $\tau=x$, and disregarding terms of \Order{2} and higher, we obtain the renormalized solution: \begin{equation} \label{o2terriblesol0} u(x)=1 + \epsilon \left( \beta_0 + \frac{\ln{\left(\beta_1 + \epsilon e_1(x)\right)}}{\epsilon}\right) + \Order{2} \end{equation} Choosing $\beta_0$ and $\beta_1$ to satisfy Eqn. \ref{terriblebc2} results in Eqn. \ref{o2terrible1sol}. \begin{equation} \label{o2terrible1sol} u(x) = \ln \left(e + \frac{\left(1-e\right) e_1(x)}{e_1(\epsilon)}\right) + \Order{2} \end{equation} Expressing this in the original variable ($r=x/\epsilon$) results in the final answer (Eqn. \ref{finalo2terrible}). \begin{equation} u(r) = \ln \left(e + \frac{\left(1-e\right) e_1(\epsilon r)}{e_1(\epsilon)}\right) + \Order{2} \label{finalo2terrible} \end{equation} This is the solution previously obtained by Chen et al., albeit with a typographical error corrected \cite{CGO96}. We will now revisit this analysis, using the alternative ``splitting'' of the most secular term in Eqn. \ref{o2terriblenaive}, but not yet considering less secular (or non-secular) terms of \Order{2}. If we replace $e_1^2(x)$ in Eqn. \ref{o2terriblenaive} by $e_1^2(x) - e_1^2(\tau) + e_1^2(\tau)$, we obtain the new na\"ive expansion given by Eqn. \ref{o2terrible2}. 
\begin{eqnarray} \label{o2terrible2} u(x) &=& 1 + \epsilon \left( B_0(\tau) + B_1(\tau) \left(e_1(x)-e_1(\tau) \right) \right) - \epsilon^2 \left(\frac{1}{2}B_1(\tau)^2 \left(e_1^2(x)-e_1^2(\tau) \right) \right) \nonumber \\ & & + \epsilon^2 \left(\textrm{less divergent terms} \right) + \Order{3} \end{eqnarray} We now repeat the same calculations: \begin{enumerate} \item Apply the RG condition ($\partial_\tau u(x) = 0$). \item Group the resulting equation according to $x$ dependence. This will result in two equations which must be satisfied independently. \item Discard terms of \Order{3}, observing that $B_0^{'}(\tau), B_1^{'}(\tau)$ must be of $\Order{1}$. \item Solve these differential equations simultaneously for $B_0(\tau),B_1(\tau)$. \item Substitute these solutions into the original equation (i.e. Eqn. \ref{o2terrible2}), and set $\tau = x$. \item Choose the integration constants in this result to satisfy Eqn. \ref{terriblebc2}. \item Obtain the final solution by returning to the original variable, $r=x/\epsilon$. \end{enumerate} For Eqn. \ref{o2terrible2}, steps 1 - 4 result in the following solutions for our renormalized constants: $B_1(\tau) = \beta_1 + \Order{2}$, $B_0(\tau) = \beta_0 + \beta_1 e_1(\tau) - \epsilon \beta_1^2 e_1^2(\tau)/2 + \Order{2}$. Completing step 5, we obtain the renormalized result: \begin{equation} \label{o2terrible2sol} u(x) = 1 + \epsilon \left( \beta_0 + \beta_1 e_1(x) \right) - \epsilon^2 \frac{\beta_1 ^2 e_1^2(x)}{2} + \Order{2} \end{equation} This is identical to our starting point, Eqn. \ref{o2terriblenaive} (retaining only the most secular terms). This should no longer be surprising, as we observed the same phenomenon in the \Order{1} analysis (Eqn. \ref{terribleRG1b}). However, it is worth noticing that we obtained two different results (Eqns. \ref{o2terrible1sol}, \ref{o2terrible2sol}) depending on how we structured our RG calculation. This apparent difficulty is illusory, and the results are equivalent: Expanding Eqn. 
\ref{o2terriblesol0} for small $\epsilon$ reproduces Eqn. \ref{o2terrible2sol}. Here, as in previous cases, we are free to structure the RG calculation for convenience. The easiest calculation is the second approach --- in which only one constant of integration is actually renormalized --- and our renormalized result is the same as our na\"\i ve starting point. This simplified analysis (considering only the most secular terms) illustrates some of the pitfalls which can arise in applying RG to switchback problems. However, we must finish the \Order{2} analysis by considering all terms in Eqn. \ref{o2terriblenaive} to understand the final nuances of this problem. There is a new complication when we attempt to renormalize all terms of Eqn. \ref{o2terriblenaive}: The final term, $-B_1^2 e^{-x}e_1(x)$, has the same kind of ``splitting'' ambiguity which we encountered in dealing with the doubly underlined term. We introduce our arbitrary position variable, $\tau$, which we want to choose so as to eliminate the secular term in $x$ by replacing it with a divergence in $\tau$. In many cases, it is clear how to deal with the secular term. For example, a linear divergence --- $x$ --- can be replaced with $ x - \tau + \tau$. The final $\tau$ will be absorbed into the renormalized constants of integration, and the $x - \tau$ term (which is now considered non-secular), will ultimately disappear after renormalization. However, the term $-B_1^2e^{-x}e_1(x)$ is confusing. As seen above, there are two ways to ``split'' the $B_1^2 e_1^2(x)/2$ term. There are \emph{four} different ways to split $e^{-x}e_1(x)$. 
It may be replaced by any of the following: \begin{enumerate} \item $\left(e^{-x}-e^{-\tau}\right)e_1(x)+e^{-\tau}e_1(x)$ \item $e^{-x}e_1(x)-e^{-\tau}e_1(\tau)+e^{-\tau}e_1(\tau)$ \item $\left(e^{-x}-e^{-\tau}\right)\left(e_1(x)-e_1(\tau)\right)+e^{-\tau}e_1(x)+e^{-x}e_1(\tau)-e^{-\tau}e_1(\tau)$ \item $e^{-x}\left(e_1(x)-e_1(\tau)\right)+e^{-x}e_1(\tau)$ \end{enumerate} All four of these options ``cure'' the divergent term (i.e. the secular term will vanish when we subsequently set $\tau = x$), and are equal to $e^{-x}e_1(x)$. If handled properly, any of these options can lead to a valid renormalized solution. However, we will show that the fourth and final option is most natural, and results in the simplest algebra. How do we choose? The first consideration is subtle: \emph{The overall renormalized perturbative result must satisfy the governing equation (Eqn. \ref{terriblegovern1}) independently for each order in $\epsilon$}. How we renormalize the \Order{1} divergences (Eqn. \ref{terribleRGb2}) has implications for \Order{2} calculations. For example, in \Order{1} renormalization, there is an important difference between Eqn. \ref{terribleRGb2} and Eqn. \ref{Oeterriblesol2}. The former has the additional term $- \epsilon B_1(\tau) e_1(\tau)$. This term requires the presence of an additional \Order{2} term: $\epsilon^2e^{-x}B_1^2(\tau)e_1(\tau)$. Without this term the \Order{2} renormalized solution will not satisfy Eqn. \ref{terriblegovern3}, and the renormalization procedure will yield an incorrect solution. We were able to gloss over this before because we were considering only the most secular term at \Order{2}. Inspecting the four possible splittings enumerated above, we see that only the last two options provide the necessary $\epsilon^2e^{-x}B_1^2(\tau)e_1(\tau)$ term, and can satisfy Eqn. 
\ref{terriblegovern3} without contrivances.\footnote{The first two options \emph{can} satisfy the governing equation \emph{if} we carefully choose a different homogeneous solution at \Order{2}. With the proper non-zero choice of $C_0$ and $C_1$ we can use the first two splittings enumerated, and they will result in an equivalent RG solution.} In examining both of these options, we split the $e_1^2(x)$ term for simplicity, as in the derivation of Eqn. \ref{o2terrible2sol}.\footnote{In principle, each of the possible \Order{1} splittings could be paired with all possibilities at \Order{2}, resulting in eight total possibilities.} Considering the third option first, our renormalized perturbation solution becomes: \begin{eqnarray} \label{terrible:I} u(x) &=& 1 + \epsilon \left( B_0(\tau) + B_1(\tau) \left(e_1(x)-e_1(\tau) \right) \right) + \epsilon^2 \Big(-B_1(\tau)B_0(\tau)e^{-x} - \nonumber \\ & & B_1^2(\tau)\left(e^{-x}-e^{-\tau}\right)\left(e_1(x)-e_1(\tau)\right) -\frac{1}{2}B_1(\tau)^2 \left(e_1^2(x)-e_1^2(\tau)\right) + \nonumber \\ & & 2 B_1^2(\tau)\left(e_1(2x)-e_1(2 \tau)\right)\Big) + \Order{3} \end{eqnarray} As it must, this result satisfies Eqn. \ref{terriblegovern3} to \Order{2}. By applying the RG condition ($\partial_\tau u(x) = 0$) to this equation, and grouping the resulting equation according to $x$ dependence, we obtain a lengthy equation which can only be satisfied to \Order{3} $\forall x$ if: \begin{eqnarray} \label{terrible:Iconditions} B_1^{'}(\tau) e^{\tau} &=& \epsilon B_1^2(\tau) \\ e^{2 \tau} \tau B_0^{'}(\tau) &=& e^{2 \tau} \tau e_1(\tau) B_1^{'}(\tau)-e^{\tau}B_1(\tau)-3\epsilon B_1^2(\tau)+e^{\tau}\epsilon B_1^2(\tau) e_1(\tau) - e^{\tau} \tau \epsilon B_1^2(\tau) e_1(\tau) \nonumber \\ 0 &=& \epsilon \left( \epsilon B_1(\tau) + e^{\tau} \tau \epsilon B_0^{'}(\tau) \right) \nonumber \end{eqnarray} Generally, no solution will exist, as we have two unknown functions and three differential equations. 
In this case, however, the first equation requires that: \begin{equation} \label{terrible:almost1} B_1(\tau) = \frac{e^{\tau}}{\epsilon + e^\tau \beta_1} \end{equation} For this $B_1(\tau)$ solution, it is actually possible to satisfy the latter equations simultaneously to \Order{3}: This occurs because the last equation is simply the lowest order of the second one.\footnote{This can be seen explicitly by substituting Eqn. \ref{terrible:almost1}.} There is another noteworthy point regarding the second part of Eqn. \ref{terrible:Iconditions}. In all previous calculations, we discarded terms like $\epsilon^2 B_0^{'}(\tau)$, since $B_0^{'}(\tau)$ and $B_1^{'}(\tau)$ had to be of $\Order{1}$. To solve these equations, however, $B_0^{'}(\tau)$ can \emph{not} be $\Order{1}$ (although $B_1^{'}(\tau)$ is). Solving for $B_0$, \begin{equation} \label{terrible:almost} B_0(\tau)= \beta_0 - \int_\epsilon^\tau \frac{4 \epsilon + e^{\sigma}\beta_1 - \epsilon e^{\sigma} e_1(\sigma)}{\sigma \left( \epsilon + e^{\sigma}\beta_1 \right)^2} \textrm{d}\sigma \end{equation} This solution, while valid, is cumbersome. Consider instead the fourth possible ``split'' enumerated above. Eqn. \ref{terrible:II} gives our renormalized perturbation solution, which satisfies Eqn. \ref{terriblegovern3}. 
\begin{eqnarray} \label{terrible:II} u(x) &=& 1 + \epsilon \left( B_0(\tau) + B_1(\tau) \left(e_1(x)-e_1(\tau) \right) \right) + \epsilon^2 \Big(-B_1(\tau)B_0(\tau)e^{-x}- \nonumber \\ & & B_1^2(\tau)e^{-x}\left(e_1(x)-e_1(\tau)\right) -\frac{1}{2}B_1(\tau)^2 \left(e_1^2(x)-e_1^2(\tau)\right) + \nonumber \\ & & 2 B_1^2(\tau)\left(e_1(2x)-e_1(2 \tau)\right)\Big) + \Order{3} \end{eqnarray} Applying the RG condition ($\partial_\tau u(x) = 0$), and requiring that it be satisfied $\forall x$, we obtain the following solutions for $B_0(\tau)$ and $B_1(\tau)$: \begin{subequations} \begin{eqnarray} \label{terrible:O2sol} B_1(\tau)&=&\beta_1 +\Order{3}\\ B_0(\tau)&=&\beta_0 + \beta_1 e_1(\tau) + \epsilon \left( - \frac{\beta_1^2 e_1^2(\tau)}{2}+2\beta_1^2 e_1(2 \tau) \right) + \Order{3} \end{eqnarray} \end{subequations} Substituting these results into Eqn. \ref{terrible:II} and setting $\tau = x$, we obtain the final RG result, given by Eqn. \ref{terrible:final}. \begin{eqnarray} \label{terrible:final} u(x) &=& 1 + \epsilon \left( \beta_0 + \beta_1 e_1(x)\right) + \\ & & \epsilon^2 \Bigg( -\beta_1 \beta_0 e^{-x} + 2 \beta_1^2 e_1(2x) - \frac{1}{2}\beta_1^2 e_1^2(x) - \beta_1^2 e^{-x}e_1(x) \Bigg) + \Order{3} \nonumber \end{eqnarray} This is, of course, identical to our na\"\i ve starting point, a happenstance we have seen several times previously. It is worth noting that the renormalized solutions obtained using Eqns. \ref{terrible:almost1} and \ref{terrible:almost} are asymptotically equivalent to Eqn. \ref{terrible:final}. It may seem that we have needlessly digressed into the ``terrible'' problem. However, a clear-cut ``best'' strategy has emerged from our detailed exploration. Furthermore, we have identified --- and resolved --- a number of subtleties in the application of RG. Before applying these lessons to the problem of low Reynolds number flow past a cylinder, we summarize our conclusions. The ``best'' strategy is the one used to derive Eqn. 
\ref{terrible:final}, a result which is identical to our na\"\i ve solution (Eqn. \ref{o2terriblenaive}). First, transform to the inner equation. Solve the \Order{0} equation incompletely (obtaining just one constant of integration), which can then be set to satisfy the boundary condition at $\infty$. This ``trick'' necessitates retention of integration constants at \Order{1}, but results in computational simplifications (avoiding a non-linear RG equation) which are \emph{essential} in dealing with the Navier-Stokes equations. At \Order{2}, the homogeneous solutions are identical to those at \Order{1}. Consequently, the \Order{2} integration constants need not be retained, as we can meet the boundary conditions with the \Order{1} constants. We just pick a convenient particular solution. To apply RG to the terrible problem, we first ``split'' the secular terms. There are several ways to do this, even after requiring that the renormalized perturbation expansions satisfy the governing equations at each order. We can again choose for simplicity, bearing in mind that \Order{1} renormalization can impact \Order{2} calculations. It is easiest to apply the RG condition to the renormalized perturbation expansion, rather than applying it to the integration constants directly. In solving the resulting equation, we want solutions which are valid $\forall x$. To solve the RG equation, care must be taken to satisfy several conditions simultaneously, and it cannot be assumed that our renormalized constants have a derivative of \Order{1}. Although there is quite a bit of flexibility in implementing the RG technique, our results are robust: Regardless of how we structure the calculation, our solutions agree to within an accuracy limited by the original na\"\i ve perturbative solution; they are asymptotically equivalent. It is this robustness which makes RG a useful tool for the low Reynolds number problems, where the complexity of the Navier-Stokes equations will constrain our choices. 
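Before moving on, a quick numerical sanity check of the uniformly valid \Order{2} result (Eqn. \ref{finalo2terrible}): at $r = 1$ the argument of the logarithm collapses to $e + (1-e) = 1$, so $u = 0$ exactly, while as $r \rightarrow \infty$, $e_1(\epsilon r) \rightarrow 0$ and $u \rightarrow \ln e = 1$. A minimal sketch with mpmath (the value of $\epsilon$ is an arbitrary assumption):

```python
from mpmath import mp, mpf, e1, log, e

mp.dps = 30
eps = mpf('0.05')   # arbitrary test value (assumption)

def u(r):
    # Eqn (finalo2terrible)
    return log(e + (1 - e)*e1(eps*r)/e1(eps))

assert abs(u(1)) < mpf('1e-25')               # u = 0 at r = 1
assert abs(u(mpf('1e6')) - 1) < mpf('1e-3')   # u -> 1 as r -> infinity
```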
\subsection{Flow past a cylinder} \subsubsection{Rescaling} To solve Eqn. \ref{CylinderEqn} using RG techniques, we begin by transforming the problem to the Oseen variables. As in the terrible problem, to find a solution which is valid for all $\vec{r}$, we need to analyze Eqn. \ref{CylinderEqn} using a \emph{dominant balance} argument. As it stands, different terms of Eqn. \ref{CylinderEqn} will dominate in different regimes.\footnote{i.e. the LHS, which is comprised of \emph{viscous} terms, dominates for small $|\vec{r}|$, whereas at large $|\vec{r}|$ the \emph{inertial} terms which comprise the RHS are of equal or greater importance.} Looking for a rescaling of $\psi$ and $r$ which makes all terms of the same magnitude (more precisely, of the same order in $R$), yields the rescaling given in Eqn. \ref{Oseendef} \cite{Proudman57}. \begin{equation} \label{Oseendef} \rho = R r, \qquad \Psi = R \psi \end{equation} Transforming to these variables, Eqn. \ref{CylinderEqn} becomes: \begin{equation} \label{OseenCylinder} \nabla_\rho^4 \Psi(\rho,\theta) = - \frac{1}{\rho} \frac{\partial (\Psi, \nabla_\rho^2 \Psi)}{\partial(\rho,\theta)} \end{equation} The boundary conditions (Eqn. \ref{Cylinder BC}) become: \begin{equation} \label{Oseen Cylinder BC} \Psi(\rho = R, \theta) = 0, \qquad \frac{\partial \Psi(\rho,\theta)}{\partial \rho} \bigg|_{\rho=R} = 0, \qquad \lim_{\rho \to \infty} \frac{\Psi(\rho,\theta)}{\rho} = \sin(\theta) \end{equation} \subsubsection{Na\"\i ve perturbation analysis} The next step in obtaining the RG solution is to begin with the ansatz that the solution can be obtained from a perturbation expansion (Eqn. \ref{cylinder:naive}). \begin{equation} \label{cylinder:naive} \Psi(\rho,\theta) = \Psi_0(\rho,\theta) + R \Psi_1(\rho,\theta) + R^2 \Psi_2(\rho,\theta)+ \Order[R]{3} \end{equation} Substituting Eqn. \ref{cylinder:naive} into Eqn. 
\ref{OseenCylinder}, and collecting powers of $R$ yields a series of equations which must be satisfied: \setlength\arraycolsep{1pt} \begin{eqnarray} \Order[R]{0}: \nabla_\rho^4 \Psi_0(\rho,\theta) &=& \frac{1}{\rho} \left( \frac{\partial \Psi_0}{\partial \theta} \frac{\partial}{\partial \rho} - \frac{\partial \Psi_0}{\partial \rho} \frac{\partial}{\partial \theta}\right) \nabla_\rho^2 \Psi_0 \\ \Order[R]{1}: \nabla_\rho^4 \Psi_1(\rho,\theta) &=& \frac{1}{\rho} \Bigg(\! \left( \frac{\partial \Psi_1}{\partial \theta} \frac{\partial}{\partial \rho} - \frac{\partial \Psi_1}{\partial \rho} \frac{\partial}{\partial \theta}\right) \nabla_\rho^2 \Psi_0 + {} \left( \frac{\partial \Psi_0}{\partial \theta} \frac{\partial}{\partial \rho} - \frac{\partial \Psi_0}{\partial \rho} \frac{\partial}{\partial \theta}\right) \nabla_\rho^2 \Psi_1 \Bigg) \nonumber \\ \Order[R]{2}: \nabla_\rho^4 \Psi_2(\rho,\theta) &=& \frac{1}{\rho} \Bigg(\! \left( \frac{\partial \Psi_2}{\partial \theta} \frac{\partial}{\partial \rho} - \frac{\partial \Psi_2}{\partial \rho} \frac{\partial}{\partial \theta}\right) \nabla_\rho^2 \Psi_0 + \left( \frac{\partial \Psi_0}{\partial \theta} \frac{\partial}{\partial \rho} - \frac{\partial \Psi_0}{\partial \rho} \frac{\partial}{\partial \theta}\right) \nabla_\rho^2 \Psi_2 \nonumber \\ & & + \left( \frac{\partial \Psi_1}{\partial \theta} \frac{\partial}{\partial \rho} - \frac{\partial \Psi_1}{\partial \rho} \frac{\partial}{\partial \theta}\right) \nabla_\rho^2 \Psi_1 \!\Bigg) \nonumber \label{cylinder:gov} \end{eqnarray} \subsubsection{\Order[R]{0} solution} The zeroth order part of Eqn. \ref{cylinder:gov} is the same as Eqn. \ref{OseenCylinder}, and is equally hard to solve. But RG does not need a complete solution; we just need a starting point. We will begin with the equation which describes a uniform stream. This is analogous to the constant \Order{0} solution in the ``terrible'' problem. 
A first integral to the \Order[R]{0} equation can be obtained by noting that any solutions of $\nabla_\rho^2 \Psi_0(\rho,\theta) = 0$ are also solutions of Eqn. \ref{cylinder:gov}. This is Laplace's equation in cylindrical coordinates, and has the usual solution (assuming the potential is single-valued): \begin{equation} \label{laplacesoln} \Psi_0(\rho,\theta) = A_0 + B_0 \ln{\rho} + \sum_{n=1}^\infty \left( \left(A_n \rho^n + B_n \rho^{-n} \right) \sin{n \theta} + \left(C_n \rho^n + D_n \rho^{-n} \right)\cos{n \theta} \right) \end{equation} We are only interested in solutions with the symmetry imposed by the uniform flow (Eqn. \ref{Cylinder BC}). Hence $A_0=B_0=C_n=D_n=0$. Furthermore, the boundary conditions at infinity require that $A_n = 0$ for $n>1$. For simplicity at higher orders, we set $B_n=0$; this is not required, but these terms will simply re-appear at \Order[R]{1}. Finally set $A_1=1$ to satisfy the boundary condition at $\infty$ (Eqn. \ref{Oseen Cylinder BC}). As in the ``terrible'' problem, this is done for technical convenience, but will not change our results. We are left with the potential describing the uniform flow: \begin{equation} \label{cylinder:0sol} \Psi_0(\rho,\theta) = \rho \sin(\theta) \end{equation} \subsubsection{\Order[R]{1} solution} By substituting Eqn. \ref{cylinder:0sol} into the \Order[R]{1} governing equation, we obtain Eqn. \ref{cylinder:oseen}. \begin{equation} \label{cylinder:oseen} \nabla_\rho^4 \Psi_1(\rho,\theta) = \left(\cos(\theta) \frac{\partial}{\partial \rho} - \frac{\sin (\theta)}{\rho}\frac{\partial}{\partial \theta} \right) \nabla_\rho^2 \Psi_1 \end{equation} This equation is formally identical to Oseen's equation (Eqn. \ref{Oseen:CylinderEqn}), albeit derived through a different argument. This is fortuitous, as its solutions are known \cite{Tomotika50}. 
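Both of these steps can be verified symbolically. The sketch below (with $f$ standing in for $\nabla_\rho^2 \Psi_1$) checks that $\Psi_0 = \rho \sin(\theta)$ is harmonic, and that substituting it into the \Order[R]{1} right-hand side reproduces the operator appearing in Eqn. \ref{cylinder:oseen}:

```python
import sympy as sp

rho, th = sp.symbols('rho theta', positive=True)
Psi0 = rho*sp.sin(th)

def lap(g):
    # Laplacian in plane polar coordinates
    return sp.diff(g, rho, 2) + sp.diff(g, rho)/rho + sp.diff(g, th, 2)/rho**2

# Psi0 = rho*sin(theta) is harmonic, hence a first integral of the O(R^0) equation
assert sp.simplify(lap(Psi0)) == 0

# Substituting Psi0 into the O(R^1) right-hand side,
# (1/rho)(dPsi0/dtheta d_rho - dPsi0/drho d_theta), applied to f = nabla^2 Psi_1,
# gives the advective operator of Eqn (cylinder:oseen).
f = sp.Function('f')(rho, th)
lhs = (sp.diff(Psi0, th)*sp.diff(f, rho) - sp.diff(Psi0, rho)*sp.diff(f, th))/rho
rhs = sp.cos(th)*sp.diff(f, rho) - sp.sin(th)*sp.diff(f, th)/rho
assert sp.simplify(lhs - rhs) == 0
```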
Unfortunately, when working with stream functions, the solution can only be expressed as an infinite sum involving combinations of modified Bessel functions, $K_n$, $I_n$. The general solution can be obtained either by following Tomotika or by using variation of parameters \cite{Proudman57}. It is comprised of two parts, the first being a solution of Laplace's equation (as at \Order[R]{0}). The same considerations of symmetry and boundary conditions limit our solution: In Eqn. \ref{laplacesoln}, $A_0=B_0=C_n=D_n=0$; $A_n = 0$, if $n>1$. Here, however, we retain the constants $B_n$, and do not fix $A_1$. This is analogous to what was done with the homogeneous terms at \Order{1} in the ``terrible'' problem. The second part of the general solution is analogous to a particular solution in the ``terrible'' problem, and can be obtained from Tomotika's solution (Eqn. \ref{cylinder:1sol}). These two results are combined in Eqn. \ref{cylinder:1solb}, which will be the basis for our RG analysis. \begin{equation} \label{cylinder:1solb} \Psi_1(\rho,\theta) = A_1 \rho \sin{\theta} + \sum_{n=1}^\infty \left( B_n \rho^{-n} + \sum_{m=0}^\infty X_m \rho \Phi_{m,n}(\rho/2) \right) \sin{n \theta} \end{equation} Before discussing the application of RG to Eqn. \ref{cylinder:1solb}, it is worthwhile to discuss Eqn. \ref{cylinder:oseen} in general terms. Eqn. \ref{cylinder:oseen} may be re-written as: \begin{equation} \label{cylinder:oseen2} \mathcal{L}\Psi_1 \equiv \left(\nabla_\rho^2 - \cos(\theta) \frac{\partial}{\partial \rho} + \frac{\sin (\theta)}{\rho}\frac{\partial}{\partial \theta} \right) \nabla_\rho^2 \Psi_1 = 0 \end{equation} We see explicitly that the LHS of this equation is a linear operator ($\mathcal{L}$) acting on $\Psi_1$, and that the RHS is zero. This is the \emph{homogeneous Oseen equation}. It is only because of our judicious choice of $\Psi_0$ that we do not need to deal with the inhomogeneous counterpart, i.e. with a non-zero RHS. 
However, the inhomogeneous Oseen equation governs $\Psi_n$ at all higher orders. This can be seen for \Order[R]{2} from Eqn. \ref{cylinder:gov}. In general, the solutions to the inhomogeneous Oseen equation are found using the method of variation of parameters. It is worth exploring these solutions, as they provide some insight into the structure of Eqn. \ref{cylinder:1sol}. We now solve Eqn. \ref{cylinder:oseen2} for a particular kind of inhomogeneity, one which can be written as a Fourier sine series.\footnote{The symmetry of the problem precludes the possibility of cosine terms in the governing equations for $\Psi_n$, $\forall n > 1$.} We want to solve: \begin{equation} \label{cylinder:oseen3} \mathcal{L} \Psi_1 = \sum_{n=1}^\infty \tilde F_n(\rho) \sin{n \theta} \end{equation} The substitution $\nabla^2\Psi_1 = e^{\rho \cos{\theta}/2} \Pi(\rho,\theta)$\footnote{ $\nabla^2 \Psi_1(\rho,\theta)$ is the \emph{vorticity}.}, allows us to obtain the first integral of Eqn. \ref{cylinder:oseen3}. This result is given by Eqn. \ref{cylinder:oseen4} \cite{Proudman57}. \begin{equation} \label{cylinder:oseen4} \left( \nabla^2 - \frac{1}{4}\right)\Pi(\rho,\theta) = \sum_{n=1}^\infty F_n(\rho) \sin{n \theta} \end{equation} Here the $F_n(\rho)$ are the coefficients obtained by re-expanding $e^{-\rho\cos{\theta}/2} \sum_{n=1}^\infty \tilde F_n(\rho) \sin{n \theta}$ as a sine series. To solve for $\Pi(\rho,\theta)$, begin by noting that the symmetry of the inhomogeneous terms implies that $\Pi(\rho,\theta)$ can be written as a sine series. Consequently, substitute $\Pi(\rho,\theta) = \sum_{n=1}^\infty g_n(\rho)\sin{n\theta}$ into Eqn. \ref{cylinder:oseen4} to obtain: \begin{equation} \label{oseen:1stint} g_n^{''}(\rho) + \frac{1}{\rho}g_n^{'}(\rho) - \left( \frac{1}{4} + \frac{n^2}{\rho^2} \right) g_n(\rho) = F_n(\rho) \end{equation} The fundamental solutions of this equation are $K_n(\rho/2)$, $I_n(\rho/2)$. Using variation of parameters, the general solution of Eqn.
\ref{oseen:1stint} may be written: \begin{equation} \label{oseen:1stintsol} g_n(\rho)=-I_n\left(\frac{\rho}{2}\right)\left(\alpha_n + \mathcal{J}_1^{(n)}(\rho) \right) + K_n\left(\frac{\rho}{2}\right)\left(\beta_n+\mathcal{J}_2^{(n)} (\rho)\right) \end{equation} Here, $\mathcal{J}_1^{(n)}(\rho)=\int\textrm{d}\rho \rho F_n(\rho) K_n(\rho/2)$, $\mathcal{J}_2^{(n)}(\rho) = \int\textrm{d}\rho \rho F_n(\rho) I_n(\rho/2)$, and $\alpha_n$, $\beta_n$ are constants. The next step is to undo our original transformation, and to solve the resulting equation: \begin{eqnarray} \nabla^2 \Psi_1(\rho,\theta) &=& e^{\frac{\rho \cos{\theta}}{2}} \sum_{n=1}^\infty g_n(\rho) \sin{n \theta} \\ &=& \sum_{n=1}^\infty b_n(\rho)\sin{n\theta} \nonumber \label{oseen:2ndint} \end{eqnarray} In this equation, $b_n(\rho)=\sum_{m=1}^\infty g_m(\rho)\left(I_{n-m}\left(\rho/2\right)-I_{n+m}\left(\rho/2\right)\right)$. We have the unfortunate happenstance that each $b_n$ depends on \emph{all} of the harmonics of the first integral. This is the origin of the nested sum (over $m$) in Tomotika's solution (Eqn. \ref{cylinder:1sol}). As before, symmetry will require that $\Psi_1(\rho,\theta)$ be representable as a sine series: $\Psi_1(\rho,\theta) = \sum_{m=1}^\infty X_m(\rho)\sin{m\theta}$. With this substitution we obtain (for each $m$), the radial component of Poisson's equation in cylindrical coordinates: \begin{equation} \label{possion} X_m^{''}(\rho) + \frac{1}{\rho} X_m^{'}(\rho) - \frac{m^2}{\rho^2}X_m(\rho)=b_m(\rho) \end{equation} The fundamental solutions were discussed before in the context of Laplace's equation: $\rho^m$, $\rho^{-m}$.
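Both radial equations have classical fundamental solutions. As a numerical sanity check (a sketch, independent of the derivation; the sample points are arbitrary), one can confirm that $K_n(\rho/2)$ and $I_n(\rho/2)$ solve the homogeneous version of Eqn. \ref{oseen:1stint}:

```python
import mpmath as mp

# Sketch: numerically verify that K_n(rho/2) and I_n(rho/2) satisfy
#   g'' + g'/rho - (1/4 + n^2/rho^2) g = 0
# for the first few harmonics, at a couple of sample points.
mp.mp.dps = 30

def residual(f, n, rho):
    return (mp.diff(f, rho, 2) + mp.diff(f, rho) / rho
            - (mp.mpf(1) / 4 + mp.mpf(n)**2 / rho**2) * f(rho))

checks = []
for n in (1, 2, 3):
    fK = lambda r, n=n: mp.besselk(n, r / 2)
    fI = lambda r, n=n: mp.besseli(n, r / 2)
    for rho0 in (mp.mpf('0.7'), mp.mpf(3)):
        checks.append(abs(residual(fK, n, rho0)))
        checks.append(abs(residual(fI, n, rho0)))

max_residual = max(checks)  # expect ~0 to working precision
```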
As before, a particular integral is obtained through variation of parameters, and the general solution may be written: \begin{equation} \label{cylinder:sol2} X_n(\rho) = -\rho^n \left(A_n + \mathcal{I}_1^{(n)}(\rho)\right) + \frac{1}{\rho^n}\left(B_n+\mathcal{I}_2^{(n)}(\rho)\right) \end{equation} Here $\mathcal{I}_1^{(n)}(\rho)=-\frac{1}{2n}\int\textrm{d}\rho\, \rho^{1-n}\, b_n(\rho)$, $\mathcal{I}_2^{(n)}(\rho)=-\frac{1}{2n}\int\textrm{d}\rho\, \rho^{n+1}\, b_n(\rho)$, and $A_n$, $B_n$ are integration constants. It is useful to relate Eqn. \ref{cylinder:sol2} to Tomotika's solution (Eqn. \ref{cylinder:1sol}). There are four integration constants for each angular harmonic. Two are obvious: $A_n$, $B_n$. The other two arise in the first integral (the vorticity solution), Eqn. \ref{oseen:1stintsol}. However, every vorticity integration constant appears in each harmonic of Eqn. \ref{cylinder:sol2}. For example, one cannot uniquely assign $\alpha_1$ and $\beta_1$ to the $\sin{\theta}$ harmonic of Eqn. \ref{cylinder:sol2}. However, if one considers $n$ terms from Eqn. \ref{oseen:1stintsol} and $n$ terms from Eqn. \ref{cylinder:sol2}, there will be $4n$ integration constants --- four per retained harmonic of Eqn. \ref{cylinder:sol2}. In passing we note that matched asymptotics workers avoid this problem by using the vorticity directly, and thereby simplify their treatment of boundary conditions. This approach does not work in conjunction with RG. It is mildly disconcerting to have four integration constants, as there are only three boundary conditions for each harmonic (Eqn. \ref{Oseen Cylinder BC}). However, two of the constants --- $A_n$ and $\alpha_n$ --- will be determined by the boundary conditions at infinity. This claim is not obvious, particularly since terms which are divergent prior to renormalization might not be present after the renormalization procedure. We outline here an argument which can be made rigorous. There are two kinds of divergences in Eqn.
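The variation-of-parameters formula can be made concrete with a toy check (the inhomogeneity $b_1(\rho) = 8\rho$ is invented for illustration, not taken from the flow problem): for $n=1$, the integrals $\mathcal{I}_1^{(n)}$ and $\mathcal{I}_2^{(n)}$ should reproduce the exact particular solution $\rho^3$ of $X^{''} + X^{'}/\rho - X/\rho^2 = 8\rho$.

```python
import sympy as sp

# Toy check of the variation-of-parameters formula for X_n.
# For n = 1 and b_1(rho) = 8*rho (invented), the particular solution
# of X'' + X'/rho - X/rho^2 = 8*rho is exactly rho**3.
rho = sp.symbols('rho', positive=True)
n = 1
b = 8 * rho

I1 = sp.integrate(-rho * b / (2 * n * rho**n), rho)  # -> -2*rho**2
I2 = sp.integrate(-rho * b * rho**n / (2 * n), rho)  # -> -rho**4

# Particular part of the general solution (A_n = B_n = 0):
X = sp.simplify(-rho**n * I1 + rho**(-n) * I2)       # expect rho**3

# Verify the ODE directly:
ode_residual = sp.simplify(sp.diff(X, rho, 2) + sp.diff(X, rho) / rho
                           - n**2 * X / rho**2 - b)  # expect 0
```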
\ref{cylinder:sol2}: Terms which are secular as $\rho \rightarrow 0$, and terms which diverge too quickly as $\rho \rightarrow \infty$.\footnote{To be precise, terms which diverge faster than $\rho$ as $\rho \rightarrow \infty$ are problematic, and prevent satisfying the boundary conditions (Eqn. \ref{Oseen Cylinder BC}).} After renormalization, we will need to meet the boundary conditions (Eqn. \ref{Oseen Cylinder BC}). As in the case of the ``terrible'' problem, it will turn out that the simplest approach to renormalization yields a renormalized perturbation solution which is the same as the na\" \i ve series. Consider Eqn. \ref{cylinder:sol2}. The terms which are secular as $\rho \rightarrow 0$ will not preclude satisfying the boundary conditions. Those which diverge too quickly as $\rho \rightarrow \infty$, however, will conflict with Eqn. \ref{Oseen Cylinder BC}. These terms must be eliminated by a suitable choice of integration constants. It turns out not to matter whether we do this before or after the renormalization procedure. For simplicity, we will do it before renormalizing. First, the coefficient of $\rho^n$ must vanish for all $n > 1$. This can happen, with an appropriate choice of $A_n$, if \begin{displaymath} \lim_{\rho \to \infty} \mathcal{I}_1^{(n)}(\rho) \sim \Order[1]{} \end{displaymath} For this requirement to be met, the coefficient of $I_n(\rho/2)$ in Eqn. \ref{oseen:1stintsol} must vanish (e.g., $\alpha_n = \lim_{\rho \rightarrow \infty} \mathcal{J}_1^{(n)}(\rho)$). It is always possible to choose $\alpha_n$ appropriately, because the following condition is satisfied for all $n$: \begin{displaymath} \lim_{\rho \to \infty} \mathcal{J}_1^{(n)}(\rho) \sim \Order[1]{} \end{displaymath} In our problem this is true because $F_n(\rho)$ is based on solutions to the lower order governing equations. By construction, these are well-behaved as $\rho \to \infty$. Therefore, for the inhomogeneous Oseen equation under consideration (Eqn.
\ref{cylinder:oseen4}), we see that two of the four integration constants --- $A_n$, $\alpha_n$ --- are needed to satisfy the boundary conditions at infinity. More specifically, the immediate problem requires us to consider the homogeneous Oseen equation (Eqn. \ref{cylinder:oseen2}), and Tomotika's solution (Eqn. \ref{cylinder:1sol}). For this problem, $F_n(\rho) = 0$, and the coefficient of $I_n(\rho/2)$ in Eqn. \ref{oseen:1stintsol} has no $\rho$ dependence. So we simply choose $\alpha_n$ such that this coefficient vanishes. Simplifying Eqn. \ref{oseen:1stintsol}, we then have the following solution for the vorticity: \begin{equation} \nabla^2 \Psi_1(\rho,\theta) = e^{\frac{\rho \cos{\theta}}{2}} \sum_{n=1}^\infty \beta_n K_n\left(\frac{\rho}{2}\right) \sin{n \theta} \end{equation} Since this solution for the vorticity is well-behaved as $\rho \to \infty$, it follows that we can choose $A_n$ ($n > 1$) in Eqn. \ref{cylinder:sol2} so that the coefficient of $\rho^n$ vanishes as $\rho \to \infty$. We are left with the solution \begin{equation} \label{xneqn} X_n(\rho) = A_n \rho \delta_{n,1} + \rho^n \left(\mathcal{I}_1^{(n)}(\rho)-\mathcal{I}_1^{(n)}(\infty)\right) + \rho^{-n} \left(B_n + \mathcal{I}_2^{(n)}(\rho) \right) \end{equation} For the homogeneous Oseen equation, $\mathcal{I}_1^{(n)}(\rho)$ and $\mathcal{I}_2^{(n)}(\rho)$ simplify to: \begin{eqnarray} \label{someintegrals} \mathcal{I}_1^{(n)}(\rho) &=& \int \textrm{d}\rho \frac{-\rho}{2 n} \rho^{-n} \left(\sum_{m=1}^{\infty} \beta_m K_m\left(\frac{\rho}{2}\right) \left( I_{n-m}\left(\frac{\rho}{2}\right) - I_{n+m}\left(\frac{\rho}{2}\right) \right) \right) \\ \mathcal{I}_2^{(n)}(\rho) &=& \int \textrm{d}\rho \frac{-\rho}{2 n} \rho^n \left(\sum_{m=1}^{\infty} \beta_m K_m\left(\frac{\rho}{2}\right) \left( I_{n-m}\left(\frac{\rho}{2}\right) - I_{n+m}\left(\frac{\rho}{2}\right) \right) \right) \end{eqnarray} This result is fundamentally the same as Tomotika's (Eqn. \ref{cylinder:1sol}).
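The combinations $I_{n-m}(\rho/2) - I_{n+m}(\rho/2)$ in Eqn. \ref{someintegrals} trace back to the re-expansion of $e^{\rho\cos{\theta}/2}\sin{m\theta}$ in angular harmonics: the $\sin{n\theta}$ Fourier coefficient of $e^{z\cos{\theta}}\sin{m\theta}$ is $I_{n-m}(z)-I_{n+m}(z)$. This identity can be checked numerically (a sketch; the value $z=0.8$ is arbitrary):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import iv

# Sketch: verify that the sin(n*theta) Fourier coefficient of
# exp(z*cos(theta))*sin(m*theta) equals I_{n-m}(z) - I_{n+m}(z),
# the harmonic-mixing rule behind the nested sums over m.
z = 0.8

def fourier_sin_coeff(n, m, z):
    val, _ = quad(lambda t: np.exp(z * np.cos(t)) * np.sin(m * t) * np.sin(n * t),
                  0.0, np.pi, epsabs=1e-13, epsrel=1e-13)
    return 2.0 * val / np.pi  # (2/pi) * integral over [0, pi]

errs = []
for n in range(1, 5):
    for m in range(1, 5):
        exact = iv(abs(n - m), z) - iv(n + m, z)
        errs.append(abs(fourier_sin_coeff(n, m, z) - exact))

max_err = max(errs)  # expect ~0
```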
However, his solution is more useful, as he evaluated the integrals in Eqn. \ref{someintegrals}. What is the point of all this work? Firstly, the approach based on the variation of parameters may be applied to the inhomogeneous Oseen equation, which must be solved for orders higher than \Order[R]{1}. Secondly, we see explicitly what happens to the two sets of integration constants $\alpha_n$ and $A_n$. Tomotika's solution has but two integration constants per harmonic\footnote{There is also $A_1$, but that is a special case.} --- $B_n$ and $\beta_n$. The other constants have already been chosen so as to satisfy the boundary conditions at $\infty$. We have shown explicitly how they must be determined, and stated without proof that this may be done prior to renormalization. In short, we have explained why Eqn. \ref{cylinder:1solb} is the appropriately general \Order[R]{1} solution for our na\" \i ve perturbation analysis. In addition to explaining why Tomotika's solution is a suitable starting point for RG, our analysis also connects with the \Order[R]{1} solution of Proudman and Pearson \cite{Proudman57}. We have shown that the vorticity must be well-behaved at $\rho=\infty$ if the overall solution is to satisfy the boundary conditions. {\it a. Secular behavior} Combining Eqns. \ref{cylinder:0sol}, \ref{cylinder:1solb}, we obtain the following na\" \i ve solution: \begin{equation} \label{cylinder:naivesol} \Psi(\rho,\theta) = \rho \sin(\theta) + R \left( A_1 \rho \sin{\theta} + \sum_{n=1}^\infty \left( B_n \rho^{-n} + \sum_{m=0}^\infty X_m \rho \Phi_{m,n}\left(\frac{\rho}{2}\right) \right) \sin{n \theta} \right) +\Order[R]{2} \end{equation} Although intimidating, this is conceptually equivalent to Eqn. \ref{Oeterriblesol2} (in the terrible problem). The first step in our analysis is identifying which terms are divergent. As explained above, Eqn. \ref{cylinder:naivesol} is specifically constructed to be of \Order[\rho]{1} as $\rho \to \infty$.
In fact, only the \Order[R]{0} and $A_1$ terms matter at large $\rho$. As $\rho \to 0$, however, many other terms in Eqn. \ref{cylinder:naivesol} diverge. All of the $B_n$ terms diverge. Most of the $\Phi_{m,n}(\rho)$ terms are also secular. Rather than enumerating and sorting through the different divergences, we simply treat the problem abstractly. Eqn. \ref{cylinder:naivesol} can be rewritten as: \begin{equation} \label{cylinder:beginRG} \Psi(\rho,\theta) = \rho \sin(\theta) + R \left( A_1 \rho \sin{\theta} + \mathcal{R}(\rho,\theta; \{B_i\}; \{X_j\}) + \mathcal{S}(\rho,\theta; \{B_m\}; \{X_n\}) \right) \end{equation} Here, $\mathcal{S}$ includes the terms which are secular as $\rho \to 0$, and $\mathcal{R}$ includes regular terms. \subsubsection{Renormalization} Equation \ref{cylinder:beginRG} is renormalized just as the terrible problem was. We begin with the renormalized perturbation expansion given in Eqn. \ref{cylinder:RG2}. Note that we are not specifying the details of which terms are secular, or how we are ``splitting'' these terms. The only term we are explicitly considering is $A_1$. This is a trick built on consideration of the terrible problem. Our ``best'' solution (Eqn. \ref{terrible:final}) to that problem was built on the renormalization of just \emph{one} constant, $B_0$ in Eqn. \ref{terrible:O2sol}. Essentially, we will repeat that procedure here, using $A_1$ as that constant. \begin{eqnarray} \label{cylinder:RG2} \Psi(\rho,\theta) &=& \rho \sin(\theta) + R \Big( A_1(\tau) \rho \sin{\theta} + \mathcal{R}(\rho,\theta; \{B_i(\tau)\}; \{X_j(\tau)\}) + \\ \nonumber & & \mathcal{S}(\rho,\theta; \{B_m(\tau)\}; \{X_n(\tau)\}) - \mathcal{S}(\tau,\theta; \{B_m(\tau)\}; \{X_n(\tau)\}) + \Order[R]{2} \Big) \end{eqnarray} We will now apply the RG condition --- $\partial_\tau \Psi(\rho,\theta) = 0$ --- to this equation. Accomplishing this in complete generality is difficult.
However, using our experience from the terrible problem, we can see that this is not necessary. The RG condition may be satisfied as follows: First, suppose that $X_n^{'}(\tau) = \Order[R]{2} \quad \forall n$, $B_m^{'}(\tau) = \Order[R]{2} \quad \forall m$. These equations are satisfied by $X_n(\tau) = \chi_n$, $B_m(\tau)=\beta_m$. Substituting these results into Eqn. \ref{cylinder:RG2}, and applying the RG condition results in: \begin{equation} 0 = R \left( A_1^{'}(\tau) \rho \sin{\theta} - \mathcal{S}^{'}(\tau,\theta; \{\beta_m\}; \{\chi_n\})\right) \end{equation} This is easily solved for $A_1(\tau)$. \begin{equation} \label{cylinder:RGsol1} A_1(\tau) = \frac{\mathcal{S}(\tau,\theta; \{\beta_m\}; \{\chi_n\})}{ \rho \sin{\theta}} + \alpha_1 \end{equation} We have explicitly validated our supposition that $\{X_n(\tau)\}$ and $\{B_m(\tau)\}$ can be constants. With this supposition, we have shown that the RG condition applied to Eqn. \ref{cylinder:RG2} can be satisfied with an appropriate choice of $A_1(\tau)$. We have satisfied the RG condition through clever tricks derived from our experience with the terrible problem. However, this solution is entirely valid, and our experience with the terrible problem has shown us that more complicated solutions are asymptotically equivalent. Substituting Eqn. \ref{cylinder:RGsol1} into Eqn. \ref{cylinder:RG2}, and setting $\tau = \rho$, we obtain our renormalized solution: \begin{equation} \label{cylinder:endRG} \Psi(\rho,\theta) = \rho \sin(\theta) + R \left( \alpha_1 \rho \sin{\theta} + \mathcal{R}(\rho,\theta; \{\beta_i\}; \{\chi_j\}) + \mathcal{S}(\rho,\theta; \{\beta_m\}; \{\chi_n\}) \right) \end{equation} By now it should not be surprising that this is the same equation as our na\" \i ve perturbation solution (Eqn. \ref{cylinder:beginRG}), and by extension the same solution obtained by Tomotika \cite{Tomotika50}. As in the case of the terrible problem, however, we now know that this is a uniformly valid approximation.
We now may choose the integration constants to satisfy the boundary conditions, and then calculate the drag coefficient. {\it a. Truncation} Unfortunately, there are infinitely many integration constants, and it is impossible to apply the boundary conditions to our renormalized solution (or Eqn. \ref{cylinder:naivesol}). To progress further, we must make the same sort of uncontrolled approximations made by previous workers \cite{Proudman57,Tomotika50}.\footnote{Kaplun was able to avoid this difficulty by using the velocity field instead of stream functions, although his approach brings other problems: the solution cannot be expressed in closed form, and must be approximated to apply the boundary conditions (see section \ref{matchedcylinder}).} Our approximation consists of a careful truncation, in both $m$ and $n$, of the series in Eqn. \ref{cylinder:naivesol}. There are two important points to consider. First is the $\sin{\theta}$ symmetry of the overall problem: terms proportional to $\sin{\theta}$ reflect the symmetries exhibited by the uniform flow which are imposed on our solution via the boundary conditions at infinity. The importance of this harmonic is further seen in Eqn. \ref{cylinder:convenientdrag}: Only the coefficient of $\sin{\theta}$ will be needed for the computation of $C_D$. Secondly, we recall that the remaining boundary conditions are imposed at the surface of the cylinder, at $\rho = R$ in Oseen coordinates. When applying the boundary conditions, terms which are secular as $\rho \to 0$ will therefore be most important. Specifically, we cannot truncate any terms which are divergent, although we are at liberty to set their coefficients equal to zero. These considerations allow exactly one solution. First, set all $B_n = 0$ for $n > 1$. Secondly, set all $X_m = 0$ for $m > 0$. We retain three coefficients: $A_1, B_1, X_0$, which will permit the boundary conditions to be satisfied for the $\sin{\theta}$ harmonic. What about the higher harmonics?
These terms are truncated in an \emph{uncontrolled} approximation. However, as we will show, the discarded terms are \Order[R^3\log{R}]{} or higher at the surface of the cylinder. They are regular terms, and thus negligible in comparison to the secular terms retained (which are \Order[R]{-1}). Now, suppose we follow Tomotika, and try to extend this approach, by retaining a few more terms. The next step would be to retain the $B_2, X_1$ terms, and to try to satisfy the boundary conditions for the $\sin{2 \theta}$ harmonic. As before, all the higher $B_n,X_m$ are set to zero. Why not include the next harmonic or two? The answer lies in the terms we discard. If we satisfy the boundary conditions at $\rho = R$ for the first $n$ harmonics, we must retain the coefficients $X_0, \ldots, X_{n-1}$. To minimize the amount of truncation we do, first set $X_m = 0$ for all $m > n-1$ and $B_k=0$ for all $k > n$. What, then, is the form of the terms which are discarded from our solution? \begin{equation} \label{cylinder:discards} \Psi_{\textrm{discard}}^{(n)}(\rho,\theta) = R \left(\sum_{k=n+1}^\infty \sum_{m=0}^{n-1} X_m \Phi_{m,k}(\rho/2) \rho \sin{k \theta} \right) \end{equation} $\Psi_{\textrm{discard}}^{(n)}(\rho,\theta)$ is largest as $\rho \to 0$, and will be most important at $\rho = R$, on the surface of the cylinder. If we retain only the $n=1$ harmonic, $\Psi_{\textrm{discard}}^{(1)}(\rho,\theta) \sim \Order[R^3 \log{R}]{}$. Since we are only working to \Order[R]{1}, this is fine. We must also consider the derivative, since we want to satisfy all of the boundary conditions (Eqn. \ref{Cylinder BC}) to the same order.
$\Psi_{\textrm{discard}}^{'(1)}(\rho,\theta) \sim \Order[R^2\log{R}]{}$. Therefore, in the case where we retain only the $\sin{\theta}$ harmonic, the discarded terms are negligible, as we are working to \Order[R]{1}.\footnote{This argument is somewhat simplistic: The neglected terms also contribute, when meeting the boundary conditions, to the values of the retained coefficients, i.e., all non-zero $X_m$ affect $X_0$. But these are lower order effects.} When we retain higher harmonics, everything changes. Table \ref{tab:discards} shows the magnitude of the discarded terms at $\rho = R$ for the first four harmonics. \begin{table} \begin{center} \begin{tabular}{|c|cccc|} \hline n = & 1 & 2 & 3 & 4 \\ \hline $\Psi_{\textrm{discard}}^{(n)}(\rho,\theta)$ & \Order[R^3 \log{R}]{} & \Order[R]{2} & \Order[R]{1} & \Order[R]{0} \\ $\Psi_{\textrm{discard}}^{'(n)}(\rho,\theta)$ & \Order[R^2\log{R}]{} & \Order[R]{1} & \Order[R]{1} & \Order[R]{-1} \\ \hline \end{tabular} \caption{Relative importance of discarded terms at $\rho=R$.} \label{tab:discards} \end{center} \end{table} From Table \ref{tab:discards}, we see immediately that to retain $\sin{2 \theta}$ harmonics, we must have an error in our derivative boundary condition of \Order[R]{1} --- the order to which we are trying to work. If we retain higher harmonics, this situation gets worse. First we have an \Order[R]{1} error in the stream function itself, and then we begin to have errors which are \emph{divergent} in $R$! For $n>4$, both $\Psi_{\textrm{discard}}^{(n)}(\rho,\theta)$ and $\Psi_{\textrm{discard}}^{'(n)}(\rho,\theta)$ are increasingly divergent functions of $R$. Since it is in practice impossible to fit the boundary conditions to Eqn. \ref{cylinder:naivesol}, we must truncate the series expansion.
We have shown that there is only one truncation consistent with both the symmetry requirements of the problem and the demand that we satisfy the boundary conditions to \Order[R]{1}: \begin{equation} \label{cylinder:truncsol} \Psi(\rho,\theta) = \rho \sin(\theta) + R \left( A_1 \rho + B_1 \rho^{-1} + X_0 \rho \Phi_{0,1}\left(\frac{\rho}{2}\right) \right) \sin{\theta} \end{equation} This result is identical to Proudman's \Order[R]{1} result for the Oseen stream function \cite{Proudman57}. However, he arrives at this result by considering matching requirements with the \Order[R]{0} Stokes expansion and by imposing $\sin{\theta}$ symmetry on the first integral (Eqn. \ref{oseen:2ndint}). Our approach arrives at the same conclusion, but without the need for asymptotic matching or the two expansions it requires. Moreover, we did not need the expertise and finesse which matched asymptotics workers needed to deduce the unusual form of their expansions (e.g., the $1/\log{R}$ term in Eqn. \ref{form2}). Finally, we note that Tomotika's numerical results support our truncation \cite{Tomotika50}. {\it b. Meeting boundary conditions} It is straightforward to apply the boundary conditions (Eqn. \ref{Cylinder BC}) to Eqn. \ref{cylinder:truncsol}. To satisfy the condition at infinity, $A_1 = 0$. The other two requirements are met by the following choice of coefficients: \begin{eqnarray} \label{cylinder:BC} B_1 &=& \frac{-R^2 \Phi_{0,1}^{'}(R/2)}{4 \Phi_{0,1}(R/2) + R \Phi_{0,1}^{'}(R/2)}\\ X_0 &=& \frac{-4}{R \left( 4 \Phi_{0,1}(R/2)+R \Phi_{0,1}^{'}(R/2)\right)} \end{eqnarray} Notice that we are using the Oseen stream function. The Stokes stream function is related by: $\psi(r,\theta)=\Psi(rR,\theta)/R$. Putting everything together, we have the new result given by Eqn. \ref{cylinder:finalsol}.
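As an aside, the coefficient choice in Eqn. \ref{cylinder:BC} can be verified symbolically. The sketch below treats $\Phi_{0,1}(R/2)$ and $\Phi_{0,1}^{'}(R/2)$ as opaque symbols $p_0$, $p_1$ (their detailed form is not needed) and confirms that both surface conditions are met exactly:

```python
import sympy as sp

# Sketch: with B_1 and X_0 as quoted in the text, both surface conditions
# Psi = dPsi/drho = 0 at rho = R hold identically.  p0, p1 stand for
# Phi_{0,1}(R/2) and Phi_{0,1}'(R/2); A_1 = 0.
R, p0, p1 = sp.symbols('R p0 p1', positive=True)

den = 4 * p0 + R * p1
B1 = -R**2 * p1 / den
X0 = -4 / (R * den)

# sin(theta) coefficient of Psi and of dPsi/drho, evaluated at rho = R
# (the chain rule gives a factor of 1/2 on Phi'):
psi_at_R  = R + R * (B1 / R + X0 * R * p0)
dpsi_at_R = 1 + R * (-B1 / R**2 + X0 * p0 + X0 * R * p1 / 2)

bc1 = sp.simplify(psi_at_R)   # expect 0
bc2 = sp.simplify(dpsi_at_R)  # expect 0
```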
\begin{eqnarray} \label{cylinder:finalsol} \Psi(\rho,\theta) &=& \rho \sin(\theta) + R \Bigg( \frac{-R^2 \Phi_{0,1}^{'}(R/2)}{4 \Phi_{0,1}(R/2) + R \Phi_{0,1}^{'}(R/2)} \rho^{-1} + \\ & & \frac{-4}{R \left( 4 \Phi_{0,1}(R/2)+R \Phi_{0,1}^{'}(R/2)\right)} \rho \Phi_{0,1}(\rho/2) \Bigg) \sin{\theta} \nonumber \end{eqnarray} Remember that although our truncated solution satisfies the boundary conditions \emph{exactly}, it only satisfies the governing equations \emph{approximately}. \subsubsection{Calculating the drag coefficient} We now transform Eqn. \ref{cylinder:finalsol} into Stokes' coordinates, and substitute the result into Eqn. \ref{cylinder:convenientdrag}.\footnote{Or, alternatively, into Eqns. \ref{cylinderstreamfunction}, \ref{CDcylinder}, and \ref{cylpressure}.} We thereby obtain a new result for $C_D$, given by Eqn. \ref{cylinder:O1CD}. \begin{equation} \label{cylinder:O1CD} C_D = \frac{\pi \left( -12 \Phi_{0,1}^{'}(R/2) + R \left( 6 \Phi_{0,1}^{''}(R/2) + R \Phi_{0,1}^{'''}(R/2) \right) \right)}{8 \Phi_{0,1}(R/2) + 2 R \Phi_{0,1}^{'}(R/2)} \end{equation} This result is plotted in Figure \ref{fig:cylinder:RG}, where it is compared against the principal results of Oseen theory, matched asymptotic theory, and experiments. When compared asymptotically, all of these theoretical predictions agree. At small but not infinitesimal Reynolds number, the largest difference is seen between Kaplun's second order result and the first order predictions, including Eqn. \ref{cylinder:O1CD}. As explained previously, current experimental data cannot determine whether Kaplun's second order matched asymptotics solution is actually superior. The RG result lies among the first order predictions. Fundamentally, the RG calculation begins with an equation similar to Oseen's, so this is not too surprising. Within this group Eqn. \ref{cylinder:O1CD} performs very well, and is only slightly bettered by Imai's prediction (Eqn. \ref{imaisol}).
These two results are very close over the range $0 < R < 1$. \begin{figure}[tb] \psfrag{Re}{\mbox{\large $R$}} \psfrag{CD}{\mbox{\large $C_D R/4\pi$}} \psfrag{Kaplun}{Kaplun, Eqn. \ref{cylinder:KaplunCD}} \psfrag{Imai}{Imai, Eqn. \ref{imaisol}} \psfrag{Bairstow XXXXXXX}{Bairstow, Eqn. \ref{Bairstowsol}} \psfrag{Lamb}{Lamb, Eqn. \ref{Oseen:LambDrag}} \psfrag{RG}{RG, Eqn. \ref{cylinder:O1CD}} \begin{center} \includegraphics[width=.9 \textwidth]{fig14} \caption{(Color online) Drag on a cylinder, comparing RG predictions to other theories at low $R$.} \label{fig:cylinder:RG} \end{center} \end{figure} \begin{figure}[tb] \psfrag{Re}{\mbox{\large $R$}} \psfrag{CD}{\mbox{\large $C_D R/4\pi$}} \psfrag{Kaplun}{Kaplun, Eqn. \ref{cylinder:KaplunCD}} \psfrag{Imai}{Imai, Eqn. \ref{imaisol}} \psfrag{Bairstow XXXXXXX}{Bairstow, Eqn. \ref{Bairstowsol}} \psfrag{Lamb}{Lamb, Eqn. \ref{Oseen:LambDrag}} \psfrag{RG}{RG, Eqn. \ref{cylinder:O1CD}} \begin{center} \includegraphics[width=.9 \textwidth]{fig15} \caption{(Color online) Drag on a cylinder, comparing RG predictions to other theories at higher $R$.} \label{fig:cylinder:RG2} \end{center} \end{figure} The real strength of Eqn. \ref{cylinder:O1CD} can be seen in Figure \ref{fig:cylinder:RG2}. As the Reynolds number increases beyond $R=1$, all other theories begin to behave pathologically. They diverge from experimental measurements and behave non-physically (e.g., a negative drag coefficient). The RG prediction suffers from none of these problems; it is well behaved for all $R$. As it is still based on a perturbative solution, it does become less accurate as $R$ increases. \subsection{Flow past a sphere} \subsubsection{Rescaling} Our analysis of low Reynolds number flow past a sphere closely follows both the cylinder problem and the terrible problem. We omit redundant explanations. As before, the first step is a rescaling of both $r$ and $\psi$ --- the transformation into Oseen coordinates.
A dominant balance analysis identifies the rescaling given in Eqn. \ref{sphere:rescale}. \begin{equation} \label{sphere:rescale} \rho = Rr, \quad \Psi = R^2 \psi \end{equation} In Oseen variables, the governing equation (Eqn. \ref{SphereEqn}) becomes: \begin{equation} \label{sphere:oseeneqn} D_\rho^4 \Psi(\rho,\mu) = \frac{1}{\rho^2}\left(\frac{\partial(\Psi(\rho,\mu), D_\rho^2 \Psi(\rho,\mu))}{\partial(\rho,\mu)} + 2 D_\rho^2 \Psi(\rho,\mu) L_\rho \Psi(\rho,\mu) \right) \end{equation} where \begin{equation} \mu \equiv \cos{\theta}, \qquad D_\rho^2 \equiv \frac{\partial^2}{\partial \rho^2} + \frac{1-\mu^2}{\rho^2} \frac{\partial^2}{\partial \mu^2}, \qquad L_\rho \equiv \frac{\mu}{1-\mu^2}\frac{\partial}{\partial \rho} + \frac{1}{\rho} \frac{\partial}{\partial \mu} \end{equation} The boundary conditions (Eqn. \ref{Sphere BC}) transform into: \begin{equation} \label{sphere:oseenbc} \Psi(\rho = R, \mu) = 0, \qquad \frac{\partial \Psi(\rho,\mu)}{\partial \rho} \bigg|_{\rho=R} = 0, \qquad \lim_{\rho \to \infty} \frac{\Psi(\rho,\mu)}{\rho^2} = \frac{1}{2} \left( 1 - \mu^2 \right) \end{equation} \subsubsection{Na\"\i ve perturbation analysis} We continue by substituting our na\" \i ve perturbation assumption (Eqn. \ref{cylinder:naive}) into Eqn. \ref{sphere:oseeneqn}, and then collecting powers of $R$. 
\begin{subequations} \setlength\arraycolsep{1pt} \begin{eqnarray} \Order[R]{0}: D_\rho^4 \Psi_0(\rho,\mu) &=&\frac{1}{\rho^2}\left(\frac{\partial(\Psi_0(\rho,\mu), D_\rho^2 \Psi_0(\rho,\mu))}{\partial(\rho,\mu)} + 2 D_\rho^2 \Psi_0(\rho,\mu) L_\rho \Psi_0(\rho,\mu) \right) \qquad \qquad \label{sphere:0} \\ \Order[R]{1}: D_\rho^4 \Psi_1(\rho,\mu) &=& \frac{1}{\rho^2}\Bigg(\frac{\partial(\Psi_0, D_\rho^2 \Psi_1)}{\partial(\rho,\mu)} + \frac{\partial(\Psi_1, D_\rho^2 \Psi_0)}{\partial(\rho,\mu)} + \nonumber \\ & & 2 \left(D_\rho^2 \Psi_0 L_\rho \Psi_1 + D_\rho^2 \Psi_1 L_\rho \Psi_0 \right)\Bigg) \label{sphere:1} \\ \Order[R]{2}: D_\rho^4 \Psi_2(\rho,\mu) &=& \frac{1}{\rho^2}\Bigg(\frac{\partial(\Psi_0, D_\rho^2 \Psi_2)}{\partial(\rho,\mu)} + \frac{\partial(\Psi_1, D_\rho^2 \Psi_1)}{\partial(\rho,\mu)} + \frac{\partial(\Psi_2, D_\rho^2 \Psi_0)}{\partial(\rho,\mu)} + \nonumber \\ & & 2 \left(D_\rho^2 \Psi_0 L_\rho \Psi_2 + D_\rho^2 \Psi_1 L_\rho \Psi_1 + D_\rho^2 \Psi_2 L_\rho \Psi_0 \right) \Bigg) \label{sphere:2} \end{eqnarray} \label{sphere:gov} \end{subequations} \subsubsection{\Order[R]{0} solution} As seen with both the cylinder problem and the terrible problem, Eqn. \ref{sphere:0} is the same as the original governing equation (Eqn. \ref{sphere:oseeneqn}). As before, we proceed using an incomplete solution for $\Psi_0$: the uniform stream which describes flow far from any disturbances. Analogously to the cylinder, we notice that Eqn. \ref{sphere:0} is satisfied if $\Psi_0(\rho,\mu)$ obeys $D_\rho^2 \Psi_0(\rho,\mu) = 0$. The general solution of this equation which also satisfies the appropriate symmetry requirement ($\Psi_0(\rho,\mu = \pm 1) = 0$) is given by Eqn. \ref{sphere:laplace}. \begin{equation} \label{sphere:laplace} \Psi_0(\rho,\mu)= \sum_{n=0}^\infty \left(A_n \rho^{n+1} + B_n \rho^{-n} \right) Q_n(\mu) \end{equation} Here $Q_n(\mu)$ is defined as in Eqn. \ref{Oseen:Goldstein}. 
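The separable terms of Eqn. \ref{sphere:laplace} can be checked mechanically. The sketch below assumes the Gegenbauer form $Q_1(\mu) = (\mu^2 - 1)/2$ (the sign convention consistent with the boundary condition at infinity, Eqn. \ref{sphere:oseenbc}) and confirms that the $n=1$ terms satisfy $D_\rho^2 f = 0$:

```python
import sympy as sp

# Sketch: verify that the n = 1 terms of the sphere's Laplace-type
# solution are annihilated by D_rho^2.  We assume Q_1(mu) = (mu^2 - 1)/2,
# the convention for which -rho^2 Q_1 -> rho^2 (1 - mu^2)/2 at infinity.
rho, mu = sp.symbols('rho mu', positive=True)
Q1 = (mu**2 - 1) / 2

def D2(f):
    # D_rho^2 = d^2/drho^2 + ((1 - mu^2)/rho^2) d^2/dmu^2
    return sp.diff(f, rho, 2) + (1 - mu**2) / rho**2 * sp.diff(f, mu, 2)

res_growing  = sp.simplify(D2(rho**2 * Q1))  # A_1 rho^{n+1} Q_1 term; expect 0
res_decaying = sp.simplify(D2(Q1 / rho))     # B_1 rho^{-n} Q_1 term; expect 0
```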
Following the analysis used for the cylinder, we set all of the coefficients to zero, except $A_1 = -1/2$. This choice of $A_1$ satisfies the uniform stream boundary condition (Eqn. \ref{sphere:oseenbc}) at $\rho = \infty$. We thereby obtain: \begin{equation} \label{sphere:0sol} \Psi_0(\rho,\mu) = - \rho^2 Q_1(\mu) \end{equation} \subsubsection{\Order[R]{1} solution} Substituting Eqn. \ref{sphere:0sol} into Eqn. \ref{sphere:1}, we obtain Eqn. \ref{sphere:1eqn}: \begin{equation} \label{sphere:1eqn} D_\rho^4 \Psi_1(\rho,\mu) = \left(\frac{1-\mu^2}{\rho} \frac{\partial}{\partial \mu} + \mu \frac{\partial}{\partial \rho} \right) D_\rho^2 \Psi_1(\rho,\mu) \end{equation} This result is also derived in matched asymptotic analysis, and is formally identical to the Oseen equation for a sphere (Eqn. \ref{Oseen:Eqn}). Structurally, this problem is similar to what we have seen previously, and is solved in two steps \cite{Goldstein29}. First use the transformation $D_\rho^2 \Psi_1 = e^{\rho\mu/2} \Phi(\rho,\mu)$ to obtain Eqn. \ref{sphere:1stintgov}.\footnote{$D_\rho^2 \Psi_1(\rho,\mu)$ is the vorticity.} \begin{equation} \label{sphere:1stintgov} \left(D_\rho^2 - \frac{1}{4} \right) \Phi(\rho,\mu)=0 \end{equation} This may be solved to obtain the first integral: \begin{equation} \label{oseen:1stintb} D_\rho^2 \Psi_1(\rho,\mu)= e^{\frac{1}{2}\rho \mu} \sum_{n=1}^\infty \left( \mathcal{A}_n \left(\frac{\rho}{2}\right)^{\frac{1}{2}} K_{n+\frac{1}{2}}\left(\frac{\rho}{2}\right)+B_n \left(\frac{\rho}{2}\right)^{\frac{1}{2}} I_{n+\frac{1}{2}}\left(\frac{\rho}{2}\right) \right) Q_n(\mu) \end{equation} As in the case of the cylinder, the RHS of Eqn. \ref{oseen:1stintb}, which serves as the inhomogeneity for the second integration, consists of integration constants which multiply the two modified Bessel functions. We are beset by the same considerations, which (properly speaking) must be resolved by applying boundary conditions (Eqn. \ref{sphere:oseenbc}) to the renormalized solution.
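The $n=1$ term of Eqn. \ref{oseen:1stintb} can also be checked in closed form. Substituting a separable $\Phi = f(\rho) Q_1(\mu)$ into Eqn. \ref{sphere:1stintgov} and using the Gegenbauer relation $(1-\mu^2) Q_1^{''} = -2 Q_1$ leaves the radial equation $f^{''} - \left(2/\rho^2 + 1/4\right) f = 0$, which $\sqrt{\rho/2}\, K_{3/2}(\rho/2)$ should satisfy. A sketch using the elementary identity $K_{3/2}(x) = \sqrt{\pi/(2x)}\, e^{-x}\left(1 + 1/x\right)$:

```python
import sympy as sp

# Sketch: verify that f(rho) = sqrt(rho/2) * K_{3/2}(rho/2) satisfies
#   f'' - (2/rho^2 + 1/4) f = 0,
# the n = 1 radial equation of the sphere's first integral.  We use the
# elementary closed form of the half-integer Bessel function K_{3/2}.
rho = sp.symbols('rho', positive=True)
x = rho / 2
K32 = sp.sqrt(sp.pi / (2 * x)) * sp.exp(-x) * (1 + 1 / x)
f = sp.sqrt(x) * K32

residual = sp.simplify(sp.diff(f, rho, 2)
                       - (2 / rho**2 + sp.Rational(1, 4)) * f)  # expect 0
```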
Following the same arguments given for the cylinder, we set the coefficients $B_n=0$, which will later make it possible to satisfy the boundary conditions at infinity. Completing the second integration is difficult, but was accomplished by Goldstein \cite{Goldstein29}. The requisite solution is essentially the second term in Eqn. \ref{Oseen:Goldstein}: \begin{equation} \label{sphere:psi1gold} \Psi_1^{\textrm{(a)}}(\rho,\mu)= A_1 \rho^2 Q_1(\mu) + \sum_{n=1}^\infty \left(B_n \rho^{-n} + \sum_{m=0}^\infty X_m \rho^2 \Phi_{m,n}(\rho /2) \right) Q_n(\mu) \end{equation} Note that we have omitted the terms $A_n \rho^{n+1} Q_n(\mu)$ which diverge too quickly at infinity (this was also done for the cylinder). Alternatively, one may simplify the series in Eqn. \ref{oseen:1stintb}, by retaining only the $n=1$ term (setting all other $\mathcal{A}_n = 0$). It is then possible to complete the second integration with a closed form solution: \begin{equation} \label{sphere:oseen1sol} \Psi_1^{\textrm{(b)}}(\rho,\mu)= A_1 \rho^2 Q_1(\mu) + \mathcal{A}_1 \left( 1+ \mu \right)\left(1 - e^{-\frac{1}{2}\rho \left(1 - \mu\right)}\right) + \sum_{n=1}^\infty B_n\rho^{-n} Q_n(\mu) \end{equation} As before, we neglect the $A_n \rho^{n+1} Q_n(\mu)$ solutions. This is essentially Oseen's solution (Eqn. \ref{Oseen:1sol}), expressed in the appropriate variables and with undetermined coefficients. We therefore have two solutions (Eqns. \ref{sphere:psi1gold}, \ref{sphere:oseen1sol}) which can be used for $\Psi_1$. For the moment, we will consider both. We will later demonstrate that the former is the preferred choice by considering boundary conditions. \subsubsection{Secular behavior} We consider our \Order[R]{1} na\" \i ve solution abstractly: \begin{equation} \label{sphere:sphereRsol} \Psi(\rho,\mu) = -\rho^2 Q_1(\mu) + R \left(A_1 \rho^2 Q_1(\mu) + \sum_{n=1}^\infty B_n\rho^{-n} Q_n(\mu) + \cdots \right) + \Order[R]{2} \end{equation} This generic form encompasses both Eqn. \ref{sphere:oseen1sol} and Eqn.
\ref{sphere:psi1gold}. It also possesses two key similarities with both the ``terrible'' and the cylinder problems. First, there is a term at \Order[R]{1} which is a multiple of the \Order[R]{0} solution ($A_1 \rho^2 Q_1(\mu)$). Secondly, the secular behavior in our na\" \i ve solution occurs \emph{at the same order} as the integration constants which we hope to renormalize.\footnote{These secular terms are not written explicitly in Eqn. \ref{sphere:sphereRsol}. They can be found in Eqns. \ref{sphere:oseen1sol} and \ref{sphere:psi1gold}.} This fact is in essence related to equations like Eqn. \ref{terribleRGans2}, which must be solved iteratively. We avoided that kind of RG equation by introducing the constant which could have been associated with the \Order[R]{0} solution at \Order[R]{1}. But renormalizing divergences into integration constants \emph{at the same order} limits the ability of RG to ``re-sum'' our na\" \i ve series. In all of these cases, the real power of RG techniques could be seen by extending our analysis to \Order[R]{2}. Because of the similarities between Eqn. \ref{sphere:sphereRsol} and Eqn. \ref{cylinder:naivesol}, we can tackle this problem in a manner formally the same as the cylinder. By construction, Eqn. \ref{sphere:sphereRsol} is \Order[\rho]{2} as $\rho \to \infty$. Hence the only terms with problematic secular behavior occur in the limit $\rho \to 0$. As before, these divergences need not even be explicitly identified. We write: \begin{equation} \label{sphere:beginRG} \Psi(\rho,\mu) = -\rho^2 Q_1(\mu) + R \left( A_1 \rho^2 Q_1(\mu) + \mathcal{R}(\rho,\mu; \{B_i\}; \{X_j\}) + \mathcal{S}(\rho,\mu; \{B_m\}; \{X_n\}) \right) \end{equation} Here, $\mathcal{S}$ includes the terms which are secular as $\rho \to 0$, and $\mathcal{R}$ includes regular terms. \subsubsection{Renormalization} Eqn. \ref{sphere:beginRG} is only cosmetically different from Eqn. \ref{cylinder:beginRG}.
Renormalizing the two equations can proceed in \emph{exactly} the same fashion. Therefore, we may immediately write the renormalized solution: \begin{equation} \label{sphere:endRG} \Psi(\rho,\mu) = -\rho^2 Q_1(\mu) + R \left( \alpha_1 \rho^2 Q_1(\mu) + \mathcal{R}(\rho,\mu; \{\beta_i\}; \{\chi_j\}) + \mathcal{S}(\rho,\mu; \{\beta_m\}; \{\chi_n\}) \right) \end{equation} This is, of course, the same solution from which we began. As in the previous two problems, we now know that it is a uniformly valid solution, and turn to the application of the boundary conditions. \subsubsection{Meeting the boundary conditions} We have two possible solutions for $\Psi_1(\rho,\mu)$. Considering the boundary conditions on the surface of the sphere (Eqn. \ref{sphere:oseenbc}) will demonstrate why Eqn. \ref{sphere:psi1gold} is preferable. Eqn. \ref{sphere:oseen1sol} can never satisfy the two requirements for all of the angular harmonics. Expanding the exponential term, we see that although it has but one integration constant, it contributes to \emph{all} of the powers of $\mu$. The second solution, Eqn. \ref{sphere:psi1gold}, can meet both of the boundary conditions --- in principle. However, as in the case of the cylinder, this is practically impossible, and we must consider truncating our solution. It is clear that we will need to approximate our solutions in order to apply the boundary conditions. Our procedure is governed by the following considerations. First, we demand that our approximate solution satisfy the boundary conditions as accurately as possible. This requirement is necessary because our goal is to calculate the drag coefficient, $C_D$, a calculation which is done by evaluating quantities derived from the stream function \emph{on the surface of the sphere}. Hence it is necessary that the stream function be as accurate as possible in that regime.
Secondly, we want the difference between our modified solution and the exact solution (one which satisfies the governing equations) to be as small as possible. {\it a. Oseen's solution} First, consider trying to satisfy these requirements starting from Eqn. \ref{sphere:oseen1sol}. Although this is the less general solution to Oseen's equation, we consider Oseen's solution because of (1) its historical importance, including widespread use as a starting point for matched asymptotics work and (2) the appealing simplicity of a closed-form solution. We combine Eqns. \ref{sphere:oseen1sol} and \ref{sphere:0sol} to begin from the solution: $\Psi(\rho,\mu) = \Psi_0(\rho,\mu) + R \Psi_1^{\textrm{(b)}}(\rho,\mu)$. Since we are interested in the solution near the surface of the sphere ($\rho = R$), and because there is no other way to determine the integration constants, we expand the exponential in that vicinity. Retaining terms up to $\Order[R\rho]{1} \sim \Order[\rho]{2}$, we obtain: \begin{equation} \label{sphere:approxossensol} \Psi(\rho,\mu) = \left( - \rho^2 + R \left( A_1 \rho^2 - \mathcal{A}_1 \rho \right) \right) Q_1(\mu) + R \sum_{n=1}^\infty B_n\rho^{-n} Q_n(\mu) \end{equation} The boundary conditions are satisfied if $B_n = 0$ $\forall n > 1$, $A_1=0$, $\mathcal{A}_1 = -3/2$, and $B_1 = -R^2/2$. In passing, we note that substituting these values into Eqn. \ref{sphere:oseen1sol} reproduces Oseen's original solution \cite{Oseen10}. Continuing, we substitute these values into Eqn. \ref{sphere:approxossensol}, obtaining: \begin{equation} \label{sphere:truncoseen} \Psi(\rho,\mu) = \left( - \rho^2 + \frac{3 R \rho}{2} - \frac{R^3}{2 \rho} \right) Q_1(\mu) \end{equation} This is nothing more than Stokes' solution (Eqn. \ref{stokessol}), albeit expressed in Oseen variables. Consequently, when substituted into Eqns. \ref{spherestreamfunction}, \ref{CDsphere}, and \ref{sphpressure}, Eqn. \ref{sphere:truncoseen} reproduces $C_D = 6 \pi/R$.
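As a quick numerical sanity check (a Python sketch of our own, not part of the derivation), one can confirm that the radial part of Eqn. \ref{sphere:truncoseen} satisfies both boundary conditions at the surface $\rho = R$:

```python
import math

def f(rho, R):
    # Radial part of Eqn. (sphere:truncoseen): -rho^2 + 3*R*rho/2 - R^3/(2*rho)
    return -rho**2 + 1.5 * R * rho - R**3 / (2.0 * rho)

def fprime(rho, R, h=1e-6):
    # Central finite difference for d f / d rho
    return (f(rho + h, R) - f(rho - h, R)) / (2.0 * h)

R = 0.3
print(abs(f(R, R)))       # should be ~ 0 (no-penetration condition)
print(abs(fprime(R, R)))  # should be ~ 0 (no-slip condition)
```

Both conditions hold to machine precision, as they must for Stokes' solution.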
How accurate is our approximate solution? The difference between Eqn. \ref{sphere:truncoseen} and Eqn. \ref{sphere:oseen1sol} is given by: \begin{equation} \Delta \Psi = -\frac{3}{4} R \left(1+\mu\right) \left( -2 + 2 e^{-\frac{1}{2}\rho(1-\mu)} + \rho \left(1-\mu\right) \right) \end{equation} At the surface of the sphere ($\rho = R$), this equates to an \Order[R]{3} error in the stream function, and an \Order[R]{2} error in the derivative. That is entirely acceptable. However, at large $\rho$, $\Delta \Psi$ grows unbounded, being of \Order[\rho]{1}. This is the fundamental problem with the solution given by Eqn. \ref{sphere:truncoseen}. By beginning from Eqn. \ref{sphere:psi1gold}, we can avoid this difficulty. It is at first a little disconcerting that Oseen used his solution to obtain the next approximation to $C_D$ (Eqn. \ref{Oseen:dragsphere}) \cite{Ose13}. How can our results be worse? As noted previously, ``Strictly, Oseen's method gives only the leading term ... and is scarcely to be counted as superior to Stokes' method for the purpose of obtaining the drag.''\cite{Proudman57} {\it b. Goldstein's solution} We now apply the boundary conditions to Eqn. \ref{sphere:psi1gold}. By starting from the more general solution to Oseen's equation, we can remedy the difficulties encountered above. This analysis will be very similar to the truncation performed on Tomotika's solution for the cylinder problem. We combine Eqns. \ref{sphere:psi1gold} and \ref{sphere:0sol} to begin from the solution: $\Psi(\rho,\mu) = \Psi_0(\rho,\mu) + R \Psi_1^{\textrm{(a)}}(\rho,\mu)$. As with the cylinder, we will approximate the full solution by truncating the series in both $m$ and $n$. Our first consideration is again symmetry: The uniform flow imposes a $\sin{\theta}$, or $Q_1(\mu)$ symmetry on the problem. Hence we must retain the $n=1$ term in Eqn. \ref{sphere:psi1gold}. The importance of this term is clearly seen from Eqn. 
\ref{sphere:simpledrag}: \emph{Only} the coefficient of $Q_1(\mu)$ is needed to calculate the drag if the stream function satisfies the boundary conditions. As in the case of the cylinder, if we retain $n$ harmonics, we must retain $m=n-1$ terms in the second sum (the sum over $m$) in order to meet both boundary conditions. To minimize the error introduced by our approximations we set all \emph{other} $B_n$, $X_m$ equal to zero. The remaining terms, those which would violate the boundary conditions and must be truncated, are then given by Eqn. \ref{sphere:discards}. \begin{equation} \label{sphere:discards} \Psi_{\textrm{discard}}^{(n)}(\rho,\mu) = R \left(\sum_{k=n+1}^\infty \sum_{m=0}^{n-1} X_m \Phi_{m,k}(\rho/2) \rho^2 Q_k(\mu) \right) \end{equation} We want to estimate the magnitude of the error in our approximation, both overall and at the surface (the error in the boundary conditions). The error is given by Eqn. \ref{sphere:discards}. First, we calculate the magnitude of both $\Psi_{\textrm{discard}}^{(n)}(\rho,\mu)$ and its derivative at the surface ($\rho = R$) with $n$ retained harmonics. The results are given in Table \ref{tab:spherediscards}. \begin{table} \begin{center} \begin{tabular}{|c|cccc|} \hline n = & 1 & 2 & 3 & 4 \\ \hline $\Psi_{\textrm{discard}}^{(n)}(\rho,\mu)$ & \Order[R]{3} & \Order[R]{2} & \Order[R]{1} & \Order[R]{0} \\ $\Psi_{\textrm{discard}}^{'(n)}(\rho,\mu)$ & \Order[R]{2} & \Order[R]{1} & \Order[R]{0} & \Order[R]{-1} \\ \hline \end{tabular} \caption{Importance of discarded terms at $\rho = R$.} \label{tab:spherediscards} \end{center} \end{table} From Table \ref{tab:spherediscards}, we see that to retain the $Q_2(\mu)$ harmonics, we must have an error in our derivative boundary condition of \Order[R]{1} --- the order to which we are trying to work. If we retain higher harmonics, this situation gets worse. Since it is in practice impossible to fit the boundary conditions to all harmonics, we must truncate the series expansion.
We see that there is only one truncation consistent with both the symmetry requirements of the problem and the demand that we satisfy the boundary conditions to \Order[R]{1}: \begin{equation} \label{sphere:truncsol} \Psi(\rho,\mu) = -\rho^2 Q_1(\mu) + R \left( A_1 \rho^2 + B_1 \rho^{-1} + X_0 \Phi_{0,1}(\rho/2) \rho^2 \right) Q_1(\mu) + \Order[R]{2} \end{equation} We also must consider the overall error, e.g., how big can $\Psi_{\textrm{discard}}^{(1)}(\rho,\mu)$ get? Although, at the surface of the sphere, Eqn. \ref{sphere:truncsol} is no better than Eqn. \ref{sphere:truncoseen}, it is superior for $\rho \ne R$. The magnitude of the error is maximized as $\rho \to \infty$. It can be shown by Taylor expansion (separately accounting for $m=0,m=n$, etc.) that $\Phi_{m,n}(x \to \infty) \sim x^{-2}$. Therefore, \begin{displaymath} \lim_{\rho \to \infty} \Psi_{\textrm{discard}}^{(1)}(\rho,\mu) = \Order[R]{1} \end{displaymath} Although this is somewhat unsatisfactory, this solution does not suffer from the same shortcomings as Eqn. \ref{sphere:approxossensol}. The error remains bounded. Eqn. \ref{sphere:truncsol} will satisfy the boundary conditions (Eqn. \ref{sphere:oseenbc}) if $A_1 = 0$ and \begin{eqnarray} X_0 &=& \frac{6}{6 R \Phi_{0,1}(R/2) + R^2 \Phi_{0,1}^{'}(R/2)}\\ B_1 &=& \frac{R^3 \Phi_{0,1}^{'}(R/2)}{6 \Phi_{0,1}(R/2) + R \Phi_{0,1}^{'}(R/2)} \end{eqnarray} As in the case of the cylinder, the resulting stream function satisfies the boundary conditions exactly, and the governing equations approximately. 
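These statements can be verified numerically. The following sketch (our own illustration, using the closed form of $\Phi_{0,1}$ quoted below) confirms that, with these values of $X_0$ and $B_1$, the radial part of Eqn. \ref{sphere:truncsol} (with $A_1 = 0$) and its derivative both vanish at $\rho = R$:

```python
import math

def phi01(x):
    # Closed form of Phi_{0,1}(x), quoted with the final solution below
    return -(3 * math.pi / (4 * x**2)) * (2 - 2/x + 1/x**2 - math.exp(-2*x) / x**2)

def dphi01(x, h=1e-6):
    # Central finite difference for Phi_{0,1}'(x)
    return (phi01(x + h) - phi01(x - h)) / (2 * h)

def stream_radial(rho, R):
    # Radial part of the truncated solution, Eqn. (sphere:truncsol), A_1 = 0
    P, dP = phi01(R / 2), dphi01(R / 2)
    X0 = 6.0 / (6 * R * P + R**2 * dP)
    B1 = R**3 * dP / (6 * P + R * dP)
    return -rho**2 + R * (B1 / rho + X0 * phi01(rho / 2) * rho**2)

R = 0.5
g = lambda rho: stream_radial(rho, R)
print(abs(g(R)))                                  # ~ 0 (no-penetration)
print(abs((g(R + 1e-6) - g(R - 1e-6)) / 2e-6))    # ~ 0 (no-slip)
```
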
Our final solution is: \begin{eqnarray} \label{sphere:goldfinal} \Psi(\rho,\mu) &=& -\rho^2 Q_1(\mu) + R \Bigg( \frac{R^3 \Phi_{0,1}^{'}(R/2)}{6 \Phi_{0,1}(R/2) + R \Phi_{0,1}^{'}(R/2)} \rho^{-1} + \\ & & \frac{6}{6 R \Phi_{0,1}(R/2) + R^2 \Phi_{0,1}^{'}(R/2)} \Phi_{0,1}(\rho/2) \rho^2 \Bigg) Q_1(\mu) + \Order[R]{2} \nonumber \end{eqnarray} For reference, \begin{displaymath} \Phi_{0,1}(x) = -\frac{3 \pi}{4 x^2} \left(2 - \frac{2}{x}+\frac{1}{x^2}-\frac{e^{-2x}}{x^2}\right) \end{displaymath} \subsubsection{Calculating the drag coefficient} We calculated the drag coefficient by substituting Eqn. \ref{sphere:goldfinal} into Eqn. \ref{sphere:simpledrag}, giving this new result: \begin{equation} \label{sphere:cdphi} C_D = \frac{ \pi \left( -16 \Phi_{0,1}^{'}(R/2) + R \left( 8 \Phi_{0,1}^{''}(R/2) + R \Phi_{0,1}^{'''}(R/2) \right) \right)}{2 \left(6 \Phi_{0,1}(R/2) + R \Phi_{0,1}^{'}(R/2) \right)} \end{equation} This can be expressed in terms of more conventional functions by substituting for $\Phi_{0,1}(x)$, resulting in the drag coefficient given by Eqn. \ref{sphere:cdresult}. \begin{equation} \label{sphere:cdresult} C_D = \frac{4 \pi \left(24 + 24 R + 8 R^2 + R^3 + 4 e^R \left(R^2 - 6 \right)\right)}{R \left( 2 \left(R + 1\right) + e^R \left(R^2 -2 \right)\right)} \end{equation} This result is plotted in Figure \ref{fig:sphere:RG}, where it is compared against the principal results of Oseen theory, matched asymptotic theory, numerical results, and experiments. As $R \rightarrow 0$, there is excellent agreement. At small but non-infinitesimal Reynolds numbers, RG is nearly identical to Oseen's prediction (Eqn. \ref{Oseen:dragsphere}), which is disappointing. It is surprising that Goldstein's result is better than the RG result, as the two are calculations of the same order in $R$, and Goldstein's is only a series approximation.
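As a consistency check (a numerical sketch of our own, not part of the derivation), one can confirm that Eqn. \ref{sphere:cdresult} agrees with Eqn. \ref{sphere:cdphi}, and that it reduces to the Stokes drag $C_D = 6\pi/R$, with Oseen's $3R/8$ correction, as $R \to 0$:

```python
import math

def phi01(x):
    # Phi_{0,1}(x) as given above
    return -(3 * math.pi / (4 * x**2)) * (2 - 2/x + 1/x**2 - math.exp(-2*x) / x**2)

def d(f, x, h=1e-3):
    return (f(x + h) - f(x - h)) / (2 * h)

def cd_phi(R):
    # Eqn. (sphere:cdphi), with derivatives of Phi_{0,1} taken numerically
    x = R / 2
    p1 = d(phi01, x)
    p2 = d(lambda y: d(phi01, y), x)
    p3 = d(lambda y: d(lambda z: d(phi01, z), y), x)
    return (math.pi * (-16 * p1 + R * (8 * p2 + R * p3))
            / (2 * (6 * phi01(x) + R * p1)))

def cd_closed(R):
    # Eqn. (sphere:cdresult)
    num = 4 * math.pi * (24 + 24*R + 8*R**2 + R**3 + 4 * math.exp(R) * (R**2 - 6))
    den = R * (2 * (R + 1) + math.exp(R) * (R**2 - 2))
    return num / den

print(cd_phi(0.8) / cd_closed(0.8))            # ~ 1: the two formulas agree
print(cd_closed(0.01) * 0.01 / (6 * math.pi))  # ~ 1 + 3R/8 for small R
```
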
That the matched asymptotics predictions are superior is not surprising; Chester and Breach's result began with a much higher order perturbative approximation. If a higher order RG calculation were possible, RG ought to be better than the same order matched asymptotics prediction. \begin{figure}[tb] \psfrag{Re}{\mbox{\Large $R$}} \psfrag{CD}{\mbox{\Large $C_D R/6\pi - 1$}} \psfrag{CD2 XXXXXX}{\tiny{$C_D \frac{R}{6\pi} - 1$}} \psfrag{Maxworthy XXXXXX}{Maxworthy} \psfrag{Le Clair}{Le Clair} \psfrag{Dennis}{Dennis} \psfrag{Oseen}{Oseen, Eqn. \ref{matchedcd}} \psfrag{Proudman}{Proudman, Eqn. \ref{matchedcd}} \psfrag{Chester and Breach 2}{Chester, Eqn. \ref{matchedcd}} \psfrag{Goldstein}{Goldstein, Eqn. \ref{GoldsteinCD}} \psfrag{RG}{RG, Eqn. \ref{sphere:cdresult}} \begin{center} \includegraphics[width=.9 \textwidth]{fig16} \caption{(Color online) Drag on a sphere, comparing RG to other theories \cite{Maxworthy65,LC70,Den71}.} \label{fig:sphere:RG} \end{center} \end{figure} As in the case of the cylinder, the real strength of Eqn. \ref{sphere:cdresult} can be seen as the Reynolds number increases. Figure \ref{fig:sphere:RG2} demonstrates that all other theories diverge from experimental measurements for $R\gtrsim1$. This is an unavoidable aspect of their structure and derivation --- they are only valid asymptotically. The RG prediction suffers from none of these problems. Eqn. \ref{sphere:cdresult} is well behaved for all $R$, although it does become less accurate at larger Reynolds numbers. \begin{figure}[tb] \psfrag{Re}{\mbox{\Large $R$}} \psfrag{CD}{\mbox{\Large $C_D R/6\pi - 1$}} \psfrag{CD2 XXXXXX}{\tiny{$C_D \frac{R}{6\pi} - 1$}} \psfrag{Maxworthy XXXXXX}{Maxworthy} \psfrag{Le Clair}{Le Clair} \psfrag{Dennis}{Dennis} \psfrag{Oseen}{Oseen, Eqn. \ref{matchedcd}} \psfrag{Proudman}{Proudman, Eqn. \ref{matchedcd}} \psfrag{Chester and Breach 2}{Chester, Eqn. \ref{matchedcd}} \psfrag{Goldstein}{Goldstein, Eqn. \ref{GoldsteinCD}} \psfrag{RG}{RG, Eqn. 
\ref{sphere:cdresult}} \begin{center} \includegraphics[width=.9 \textwidth]{fig17} \caption{(Color online) Drag on a sphere, comparing RG at larger $R$ \cite{Maxworthy65,LC70,Den71}.} \label{fig:sphere:RG2} \end{center} \end{figure} \section{CONCLUSIONS} We have devoted a substantial effort to the historical problem of calculating the drag coefficient for flow around a cylinder and a sphere at low Reynolds number. We report four principal accomplishments. First, we have untangled over 150 years of diffuse, confusing, and sometimes contradictory experimental, numerical, and theoretical results. We have expressed all important previous work within a consistent mathematical framework, and explained the approximations and assumptions which have gone into previous calculations. Moreover, by plotting experimental results and theoretical predictions with the leading order divergence removed (an idea originally due to Maxworthy), we have consistently and critically compared all available measurements. There are no other such exhaustive comparative reviews available in the existing literature. Secondly, we have extended traditional matched asymptotics calculations. We advance and justify the idea that \emph{uniformly valid approximations}, not the Stokes or Oseen expansions, should be used to calculate derivative quantities such as $C_D$. By combining this approach with previously published matched asymptotics results, we obtain new results for the drag coefficients. These results \emph{systematically} improve on published drag coefficients, which relied only on the Stokes expansion. This methodology also resolved a problem in the existing literature: the most accurate calculations for a cylinder, due to Skinner, had failed to improve $C_D$ \cite{Ski75}. When treated via a uniformly valid approximation, our new result based on Skinner's solutions betters all matched asymptotics predictions. 
We have also explored the structure and subtleties involved in applying renormalization group techniques to the ``terrible'' problem posed by Hinch and Lagerstrom \cite{Hinch91,Lag72}. This problem, previously solved by Chen et al. \cite{CGO96}, contains a rich and heretofore unexplored collection of hidden subtleties. We exhaustively examined all possible complications which can arise while solving this problem with the renormalization group. To treat some of these possibilities, we identified and implemented a new constraint on the RG calculation; the renormalized perturbation solution itself, not just the expansion on which it is based, must satisfy the governing equations to the appropriate order in $\epsilon$. While this had been done implicitly in previous calculations, we had to deal with it explicitly (e.g., by appropriate choices of homogeneous solutions). In the process of doing so, we obtained several new second order approximate solutions to the ``terrible'' problem, and demonstrated their equivalence. The work with the ``terrible'' problem laid the foundation for our most significant new calculation. In close analogy with the ``terrible'' problem, we used the RG to derive new results for the drag coefficients for both a sphere and a cylinder (Eqns. \ref{sphere:cdresult} and \ref{cylinder:O1CD}, respectively). These new results agree asymptotically with previous theoretical predictions, but greatly surpass them at larger $R$. Other theories diverge pathologically, while the results from the RG calculation remain well behaved. We demonstrated that these new techniques could reproduce and improve upon the results of matched asymptotics --- when applied to the very problem which that discipline was created to solve! Matched asymptotics requires the use of two ingenious and intricate expansions, replete with strange terms (like $R \log{R}$) which must be introduced while solving the problem via a painful iterative process.
RG requires only a single generic expansion, which can always be written down a priori, even in complicated singular perturbation problems with boundary layers. It therefore gives rise to a much more economical solution, requiring half the work and yielding a superior result. It is hoped that demonstrating the utility of these techniques on this historical problem will result in increased interest and further application of renormalization group techniques in fluid mechanics. \section*{ACKNOWLEDGMENTS} The authors are grateful, in some sense, to Charlie Doering for his initial suggestion to consider the tortuous problems discussed here using RG techniques. This work was partially supported by the National Science Foundation through Grant No. NSF-DMR-99-70690.
\section{Introduction} Probabilistic ensembles with one or more adjustable parameters are often used to model complex networks, including social networks, biological networks, the Internet, etc.; see e.g. Fienberg \cite{FienbergI,FienbergII}, Lov\'{a}sz \cite{Lovasz2009} and Newman \cite{Newman}. One of the standard complex network models is the exponential random graph model, originally studied by Besag \cite{Besag}. We refer to Snijders et al. \cite{Snijders}, Rinaldo et al. \cite{Rinaldo} and Wasserman and Faust \cite{Wasserman} for history and a review of recent developments. The phenomenon of phase transitions in exponential random graph models has recently attracted a lot of attention in the literature. The statistical content of such models can be described by the {\em free energy density}, an appropriately scaled version of the probability normalization. The free energy density is also a standard quantity in statistical physics. In particular, its limit as the system size becomes infinite can be used to draw phase diagrams corresponding (for example) to the familiar fluid, liquid and solid phases of matter~\cite{FisherRadin}. Using the large deviations formula for Erd\H{o}s-R\'{e}nyi graphs of Chatterjee and Varadhan~\cite{ChatterjeeVaradhan}, Chatterjee and Diaconis~\cite{ChatterjeeDiaconis} obtained a variational formula for the limiting free energy density for a large class of exponential random graph models. Radin and Yin~\cite{Radin} used this to formalize, for the first time, the notion of a phase transition for exponential random graphs, explicitly computing phase diagrams for a family of two-parameter models. A similar three-parameter family was studied by Yin~\cite{Yin}. Previous non-rigorous analysis using mean-field theory and other approximations can be found in Park and Newman~\cite{ParkI,ParkII} and the references in H\"{a}ggstr\"{o}m and Jonasson~\cite{Haggstrom}. 
We consider a family of directed exponential random graphs parametrized by edges and outward directed $p$-stars. Such models are standard and important in the literature of social networks; see e.g. Holland~\cite{Holland} and the references therein. For directed graphs, recently developed techniques based on the graph limit theory of Lov\'{a}sz~\cite{Lovasz} and the large deviations formula of Chatterjee and Varadhan~\cite{ChatterjeeVaradhan} cannot be directly applied. Instead of trying to adapt these techniques to the directed case, we use completely different methods which lead to {\em better} asymptotics for the free energy density. From the limiting free energy density, we find that the model has a phase diagram essentially identical to the one of~\cite{Radin}. Because our asymptotics are more precise, we are able to build on the results in~\cite{Radin}. In particular, by carefully studying partial derivatives of the free energy density along the phase transition curve, we obtain precise scaling laws for the variance of edge and star densities and we compute exactly the limiting edge probabilities. To explain how our results fit into the phase transition framework of~\cite{Radin}, we need to make the notions of free energy and phase transition more precise. Consider the probability measure on the set of graphs on $n$ nodes defined by \begin{equation}\label{ERGM} {\mathbb P}_n(X) = Z_n(\beta_1,\beta_2)^{-1} \exp\left(n^2\left[\beta_1 e(X) + \beta_2 s(X)\right]\right), \end{equation} where $\beta_1$, $\beta_2$ are real parameters, $Z_n(\beta_1,\beta_2)$ is the probability normalization, and $e(X)$ (resp. $s(X)$) is the probability that a random function from a single edge (resp. a $p$-star) into $X$ is a homomorphism, i.e., an edge preserving map between the vertex sets. The quantities $e(X)$ and $s(X)$ are called homomorphism densities; see e.g.~\cite{ChatterjeeDiaconis} for details and a discussion. We consider both undirected and directed graphs.
The model~\eqref{ERGM} has at least a superficial similarity to the grand canonical ensemble in statistical physics, which describes the statistical properties of matter in thermal equilibrium~\cite{Gallavotti}. The grand canonical ensemble consists of a probability measure, defined on the set of locally finite subsets of $[-n/2,n/2]^d$, of the form \begin{equation}\label{grandcanon} {\mathbb P}_n(Y) = Z_n(\beta,\mu)^{-1}\exp\left(n^d\left[-\beta \mu N(Y) - \beta E(Y)\right]\right), \end{equation} where $\beta = 1/(k_BT)$ (with $T$ temperature and $k_B$ Boltzmann's constant), $\mu$ is chemical potential, $N(Y) = |Y|/n^d$ is the number density of $Y$, $E(Y)$ is the energy density of $Y$, and $d = 2$ or $3$. Here, each point of $Y$ represents a particle (e.g., atom). A standard fact in statistical physics is that essentially all relevant physical quantities in the model can be obtained through the {\em free energy density} \begin{equation*} \psi_n(\beta,\mu):= n^{-d} \log Z_n(\beta,\mu). \end{equation*} In particular, the average and variance of $N(Y)$ and $E(Y)$ (or more generally, all of their moments) can be obtained by differentiating $\psi_n$ with respect to $\beta$ or $\mu$. Moreover, under appropriate conditions on $E$ the limit \begin{equation*} \psi(\beta,\mu) := \lim_{n\to \infty} \psi_n(\beta,\mu) \end{equation*} exists, and for any $i,j\in\mathbb{N}$, \begin{equation*} \lim_{n\rightarrow\infty}\frac{\partial^{i+j}}{\partial\beta^{i}\partial\mu^{j}} \psi_{n}(\beta,\mu) =\frac{\partial^{i+j}}{\partial\beta^{i}\partial\mu^{j}} \lim_{n\rightarrow\infty}\psi_{n}(\beta,\mu) =\frac{\partial^{i+j}}{\partial\beta^{i}\partial\mu^{j}} \psi(\beta,\mu) \end{equation*} whenever the derivative on the right hand side exists~\cite{Yang}. 
The limit $\psi(\beta,\mu)$ is key to understanding phases of matter: for instance, for a ``typical'' energy density $E$ (e.g., based on the commonly used Lennard-Jones particle interaction~\cite{LennardJones}), the function $\psi$ is analytic except along two curves with an endpoint; these curves correspond exactly to the solid/liquid/vapor phase transitions, and the endpoint is called the {\em critical point}~\cite{FisherRadin}. See Figure 1(i). Actually, though the preceding statement is widely believed and supported by numerical experiments, proofs are possible only in very special cases; see e.g. Lebowitz et al.~\cite{Lebowitz}. The following analogy between the models~\eqref{ERGM} and~\eqref{grandcanon} was explored in Radin and Yin~\cite{Radin}. For the model~\eqref{ERGM} we can define the free energy density in the same way, \begin{equation*} \psi_n(\beta_1,\beta_2) = n^{-2} \log Z_n(\beta_1,\beta_2). \end{equation*} It is proved in~\cite{Radin} that in the undirected graph case, the limit \begin{equation*} \psi(\beta_1,\beta_2) = \lim_{n\to \infty} \psi_n(\beta_1,\beta_2) \end{equation*} is analytic except along a certain curve with an endpoint, which we will call the phase transition curve and critical point, respectively; see Figure 1(ii). Moreover, on the curve but away from the critical point, the first order partial derivatives of $\psi$ have a jump discontinuity; at the critical point, the first order partial derivatives of $\psi$ are continuous but the second order derivatives diverge. Precisely the same behavior occurs on the liquid-vapor transition curve in the model~\eqref{grandcanon}. \begin{figure}\label{fig1} \begin{center} \includegraphics[scale=0.7]{trans_curve2.eps} \end{center} \caption{Simple phase diagrams in (i) the grand canonical ensemble, and (ii) the ERGM model. The critical point is labeled with a $*$.} \end{figure} To understand these singularities better, consider the following.
First, just as in the grand canonical ensemble~\eqref{grandcanon}, \begin{align*} &\lim_{n\to \infty}\frac{\partial}{\partial \beta_i} \psi_n(\beta_1,\beta_2) = \frac{\partial}{\partial \beta_i} \psi(\beta_1,\beta_2), \quad i \in \{1,2\}\\ &\lim_{n\to \infty}\frac{\partial^2}{\partial \beta_i \partial \beta_j} \psi_n(\beta_1,\beta_2) = \frac{\partial^2}{\partial \beta_i\partial \beta_j} \psi(\beta_1,\beta_2), \quad i,j\in\{1,2\}, \end{align*} provided the derivatives on the right hand side exist~\cite{Radin}. Next, straightforward computations show that \begin{align}\begin{split}\label{derivs} &\frac{\partial}{\partial \beta_1} \psi_n(\beta_1,\beta_2) = {\mathbb E}_n[e(X)], \quad \frac{\partial^2}{\partial \beta_1^2} \psi_n(\beta_1,\beta_2) = n^2{\hbox{Var}_n}(e(X))\\ &\frac{\partial}{\partial \beta_2} \psi_n(\beta_1,\beta_2) = {\mathbb E}_n[s(X)], \quad \frac{\partial^2}{\partial \beta_2^2} \psi_n(\beta_1,\beta_2) = n^2{\hbox{Var}_n}(s(X)).\end{split} \end{align} Thus, a jump discontinuity in $\partial\psi/\partial \beta_1$ (resp. $\partial\psi/\partial \beta_2$) along the transition curve implies a jump in the average value of $e(X)$ (resp. $s(X)$) across the curve as $n\to \infty$. Similarly, at the critical point, divergence of $\partial^2\psi/\partial \beta_1^2$ (resp. $\partial^2\psi/\partial \beta_2^2$) implies that the variance of $e(X)$ (resp. $s(X)$) decays more slowly than $n^{-2}$. Away from the transition curve, all partial derivatives of $\psi$ (of any order) exist and are finite, so in particular the variance of $e(X)$ and $s(X)$ decays at least as fast as $n^{-2}$. Analogous statements can be made in the model~\eqref{grandcanon} about the average and variance of $N(Y)$ and $E(Y)$. 
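The identities in~\eqref{derivs} can be verified directly by exhaustive enumeration for very small systems. The following sketch (our own illustration, with $n = 3$ and $p = 2$; not part of the original analysis) compares $\partial \psi_n/\partial \beta_1$, computed by finite differences, against ${\mathbb E}_n[e(X)]$:

```python
import itertools, math

n, p = 3, 2

def densities(X):
    # X is a tuple of n*n entries in {0,1}; row i lists the out-edges of node i
    rows = [X[i*n:(i+1)*n] for i in range(n)]
    e = sum(X) / n**2
    s = sum(sum(r)**p for r in rows) / n**(p + 1)
    return e, s

def psi_n(b1, b2):
    # Free energy density psi_n = n^{-2} log Z_n, by brute-force enumeration
    Z = sum(math.exp(n**2 * (b1 * e + b2 * s))
            for X in itertools.product((0, 1), repeat=n*n)
            for e, s in [densities(X)])
    return math.log(Z) / n**2

def mean_e(b1, b2):
    # E_n[e(X)] under the measure (ERGM)
    Z = w_e = 0.0
    for X in itertools.product((0, 1), repeat=n*n):
        e, s = densities(X)
        w = math.exp(n**2 * (b1 * e + b2 * s))
        Z += w
        w_e += w * e
    return w_e / Z

b1, b2, h = -0.5, 0.25, 1e-5
lhs = (psi_n(b1 + h, b2) - psi_n(b1 - h, b2)) / (2 * h)
print(lhs, mean_e(b1, b2))   # the two agree: d(psi_n)/d(beta_1) = E_n[e(X)]
```
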
More detailed statements would require an analysis of the free energy density $\psi_n$ for {\em finite} $n$; this is much more difficult to study than the limit $\psi$, which can be obtained (at least in the undirected case) via the large deviations results of Chatterjee and Diaconis~\cite{ChatterjeeDiaconis}. In this paper we consider~\eqref{ERGM} in the directed graph case, with $H_2$ an outward directed $p$-star. For finite $n$, we obtain asymptotics for $\psi_n$ and certain quantities related to its partial derivatives. Besides using these asymptotics to rederive a formula essentially the same as the one in~\cite{ChatterjeeDiaconis,Radin} for the limiting free energy density $\psi$, we obtain precise scaling of the variance and covariance of $e(X)$ and $s(X)$ as $n\to \infty$ for all parameter values $(\beta_1,\beta_2)$, including on the transition curve and at the critical point. By analogy with the model~\eqref{grandcanon}, the scaling at the critical point yields what in physics is called the {\em critical exponent}~\cite{Gallavotti}. We also use our asymptotics for $\psi_n$ to prove that in the limit $n \to \infty$, there is an edge between fixed nodes according to a Bernoulli random variable. On the transition curve, across which we recall the average of $e(X)$ has a jump discontinuity in the limit $n\to \infty$, we give an explicit formula for the Bernoulli parameter as a {\em convex combination} of the averages of $e(X)$ on both sides of the jump. The paper is organized as follows. In Section~\ref{Description}, we describe the model in detail. Main results are stated in Section~\ref{THEOREMS}. The results are obtained by estimates, stated in Section~\ref{ESTIMATES}, which allow for a precise computation of the free energy density and derivatives thereof. All proofs are in Section~\ref{PROOFS}.
\section{Description of the model}\label{Description} Consider directed graphs on $n$ nodes, where a graph is represented by a matrix $X = (X_{ij})_{1\le i,j \le n}$ with each $X_{ij} \in \{0,1\}$. Here, $X_{ij} = 1$ means there is a directed edge from node $i$ to node $j$; otherwise, $X_{ij}=0$. Give the set of such graphs the probability \begin{equation}\label{1} {\mathbb P}_n(X) = Z_n(\beta_{1},\beta_{2})^{-1}\exp\left[n^2\left(\beta_{1} e(X) + \beta_{2} s(X)\right)\right], \end{equation} where \begin{align}\begin{split}\label{es} &e(X) := n^{-2}\sum_{1\le i,j\le n} X_{ij},\\ &s(X) := n^{-p-1} \sum_{1\leq i,j_{1},j_{2},\ldots,j_{p}\leq n}X_{ij_{1}}X_{ij_{2}}\cdots X_{ij_{p}}. \end{split} \end{align} Here, $Z_n(\beta_{1},\beta_{2})$ is the appropriate normalization. Note that $e(X)$ and $s(X)$, defined in~\eqref{es}, represent the directed edge and outward directed $p$-star homomorphism densities of $X$. It is easy to see that $s(X)$ has the alternative expression \begin{equation} s(X) = n^{-p-1} \sum_{i=1}^n \left(\sum_{j=1}^n X_{ij}\right)^p. \end{equation} We allow $X_{ii}$ to equal $1$ for ease of notation. It is not hard to see that without this simplification, our main results in Section~\ref{THEOREMS} below hold exactly as stated, and our estimates in Section~\ref{ESTIMATES} hold with only small modifications. Define the free energy density \begin{equation*} \psi_n(\beta_{1},\beta_{2}) := n^{-2} \log Z_n(\beta_{1},\beta_{2}) \end{equation*} and the limiting free energy density \begin{equation*} \psi(\beta_{1},\beta_{2}) = \lim_{n\to \infty} \psi_n(\beta_1,\beta_2). \end{equation*} Our analysis will involve a closely related function \begin{equation*} \ell(x) := \beta_{1} x + \beta_{2} x^p-x \log x - (1-x)\log (1-x). \end{equation*} It is easy to see that $\ell$ is analytic in $(0,1)$ and continuous on $[0,1]$. 
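The equivalence of the two expressions for $s(X)$ is easy to confirm numerically; the following sketch (our own illustration) checks it on a random adjacency matrix:

```python
import itertools, math, random

random.seed(0)
n, p = 4, 3

# Random directed graph as an adjacency matrix (diagonal entries allowed,
# as in the model above)
X = [[random.randint(0, 1) for _ in range(n)] for _ in range(n)]

# Definition via homomorphism counts: sum over i and (j_1, ..., j_p)
s_hom = sum(
    math.prod(X[i][jk] for jk in js)
    for i in range(n)
    for js in itertools.product(range(n), repeat=p)
) / n**(p + 1)

# Alternative expression: sum over i of (out-degree of i)^p
s_deg = sum(sum(X[i][j] for j in range(n))**p for i in range(n)) / n**(p + 1)

print(abs(s_hom - s_deg))  # 0: the two expressions coincide
```
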
Note that $\ell$ is essentially identical to the function of the same name studied in~\cite{Radin}: after multiplying $\beta_1$ and $\beta_2$ by two, the functions differ only by a constant. This allows us to use results from~\cite{Radin} concerning $\ell$. \section{Main results}\label{THEOREMS} We rederive the following formula for the limiting free energy density, first obtained in~\cite{ChatterjeeDiaconis} in the undirected graph case: \begin{theorem}\label{free_energy} For any $\beta_1$, $\beta_2$ we have \begin{equation*} \psi_n(\beta_1,\beta_2) = \ell(x^*)+O(n^{-1}\log n), \end{equation*} where $x^*$ is a global maximizer of $\ell$. \end{theorem} We restate the following result proved in~\cite{Radin}: \begin{theorem}[Radin and Yin~\cite{Radin}]\label{trans_curve} There is a certain curve in the $(\beta_1,\beta_2)$-plane with the endpoint \begin{equation*} (\beta_{1}^{c},\beta_{2}^{c})=\left(\log(p-1) - \frac{p}{p-1},\frac{p^{p-1}}{(p-1)^p}\right), \end{equation*} such that off the curve and at the endpoint, $\ell$ has a unique global maximizer $x^* \in (0,1)$, while on the curve away from the endpoint, $\ell$ has two global maximizers, $x_{1}^{*}$ and $x_{2}^{*}$, with $0 < x_1^* < (p-1)/p < x_2^* < 1$. \end{theorem} The curve in Theorem~\ref{trans_curve} will be called the {\em phase transition curve} and written $\beta_2 = q(\beta_1)$. The endpoint will be called the {\em critical point}. It is not possible to write an explicit equation for the curve; see~\cite{Radin} for a graph obtained numerically. However, in~\cite{Radin} it is shown that $q(\beta_{1})$ is continuous and decreasing in $\beta_{1}$, with $\lim_{\beta_{1}\rightarrow-\infty}|q(\beta_{1})+\beta_{1}|=0$.
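The endpoint formula in Theorem~\ref{trans_curve} can be checked numerically (our illustration, not part of the text): at $(\beta_1^c,\beta_2^c)$ the maximizer $x=(p-1)/p$ of $\ell$ should be degenerate, with $\ell'$ and $\ell''$ both vanishing there.

```python
import math

def ell(x, b1, b2, p):
    # ell(x) = b1*x + b2*x^p - x log x - (1-x) log(1-x)
    return b1 * x + b2 * x ** p - x * math.log(x) - (1 - x) * math.log(1 - x)

def critical_point(p):
    # Endpoint (beta_1^c, beta_2^c) from Theorem trans_curve
    return math.log(p - 1) - p / (p - 1), p ** (p - 1) / (p - 1) ** p

for p in (2, 3):
    b1, b2 = critical_point(p)
    x, h = (p - 1) / p, 1e-4
    # Central finite differences for the first and second derivatives of ell
    d1 = (ell(x + h, b1, b2, p) - ell(x - h, b1, b2, p)) / (2 * h)
    d2 = (ell(x + h, b1, b2, p) - 2 * ell(x, b1, b2, p) + ell(x - h, b1, b2, p)) / h ** 2
    assert abs(d1) < 1e-5 and abs(d2) < 1e-3, (p, d1, d2)
```

For $p=2$ this recovers $(\beta_1^c,\beta_2^c)=(-2,2)$ with degenerate maximizer $x=1/2$.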
We have the following more precise result, which, since it concerns only the function $\ell$, holds for both our model and that of~\cite{Radin}: \begin{theorem}\label{PropertyCurve} (i) $q(\beta_{1})$ is differentiable for $\beta_{1}<\beta_{1}^{c}$ with \begin{equation*} q'(\beta_{1})=-\frac{x_{1}^{\ast}-x_{2}^{\ast}}{(x_{1}^{\ast})^{p}-(x_{2}^{\ast})^{p}}<0. \end{equation*} In particular, \begin{equation*} \lim_{\beta_{1}\rightarrow\beta_{1}^{c}}q'(\beta_{1})=-\frac{p^{p-2}}{(p-1)^{p-1}}, \qquad \text{and} \qquad \lim_{\beta_{1}\rightarrow-\infty}q'(\beta_{1})=-1. \end{equation*} (ii) $q(\beta_{1})$ is convex in $\beta_{1}$. \end{theorem} When $p = 2$, along the line $\beta_1 + \beta_2 = 0$ the function $\ell$ is symmetric around $1/2$. It follows that $x_1^* + x_2^* = 1$ along this line, so Theorem~\ref{PropertyCurve} implies that the transition curve is $q(\beta_1) = -\beta_1$. See Figure~\ref{fig2}(i). As discussed in the introduction, the following theorems give the scaling of the variance of $e(X)$ and $s(X)$. Note that we compute this for any $(\beta_1,\beta_2)$, including on the phase transition curve and at the critical point; this extends the results in~\cite{Radin}, which only hold off the phase transition curve. \begin{theorem}\label{MainThm} Off the phase transition curve, \begin{equation*} \lim_{n\to \infty}\frac{\partial^2 }{\partial \beta_1^2}\psi_n(\beta_1,\beta_2) = \frac{\partial^2 }{\partial \beta_1^2}\lim_{n\to \infty}\psi_n(\beta_1,\beta_2) = \frac{1}{|\ell''(x^*)|}. \end{equation*} On the phase transition curve except at the critical point, \begin{equation*} \lim_{n\rightarrow\infty}\frac{1}{n}\frac{\partial^2 }{\partial \beta_1^2}\psi_n(\beta_1,\beta_2) =\frac{(x_{1}^{\ast}-x_{2}^{\ast})^{2}\sqrt{x_{1}^{\ast}(1-x_{1}^{\ast})|\ell''(x_{1}^{\ast})|} \sqrt{x_{2}^{\ast}(1-x_{2}^{\ast})|\ell''(x_{2}^{\ast})|}} {\left(\sqrt{x_{1}^{\ast}(1-x_{1}^{\ast})|\ell''(x_{1}^{\ast})|} +\sqrt{x_{2}^{\ast}(1-x_{2}^{\ast})|\ell''(x_{2}^{\ast})|}\right)^{2}}.
\end{equation*} At the critical point, \begin{equation*} \lim_{n\rightarrow\infty}\frac{1}{n^{1/2}} \frac{\partial^2 }{\partial \beta_1^2}\psi_n(\beta_1,\beta_2) =\frac{\Gamma(\frac{3}{4})}{\Gamma(\frac{1}{4})} \frac{2\sqrt{6}(p-1)}{p^{5/2}}. \end{equation*} \end{theorem} \begin{theorem}\label{starvariance} Off the phase transition curve, \begin{equation*} \lim_{n\to \infty}\frac{\partial^2 }{\partial \beta_2^2}\psi_n(\beta_1,\beta_2) = \frac{\partial^2 }{\partial \beta_2^2}\lim_{n\to \infty}\psi_n(\beta_1,\beta_2) = \frac{p^{2}(x^{\ast})^{2p-2}}{|\ell''(x^*)|}. \end{equation*} On the transition curve except at the critical point, \begin{equation*} \lim_{n\rightarrow\infty}\frac{1}{n}\frac{\partial^2 }{\partial \beta_2^2}\psi_n(\beta_1,\beta_2) =\frac{((x_{1}^{\ast})^{p}-(x_{2}^{\ast})^{p})^{2}\sqrt{x_{1}^{\ast}(1-x_{1}^{\ast})|\ell''(x_{1}^{\ast})|} \sqrt{x_{2}^{\ast}(1-x_{2}^{\ast})|\ell''(x_{2}^{\ast})|}} {\left(\sqrt{x_{1}^{\ast}(1-x_{1}^{\ast})|\ell''(x_{1}^{\ast})|} +\sqrt{x_{2}^{\ast}(1-x_{2}^{\ast})|\ell''(x_{2}^{\ast})|}\right)^{2}}. \end{equation*} At the critical point, \begin{equation*} \lim_{n\rightarrow\infty}\frac{1}{n^{1/2}}\frac{\partial^2 }{\partial \beta_2^2}\psi_n(\beta_1,\beta_2) =\frac{2\sqrt{6}\Gamma(\frac{3}{4})}{\Gamma(\frac{1}{4})}\frac{(p-1)^{2p-1}}{p^{2p-\frac{3}{2}}}. \end{equation*} \end{theorem} \begin{theorem}\label{covariance} Off the phase transition curve, \begin{equation*} \lim_{n\to \infty}\frac{\partial^{2}}{\partial\beta_{1}\partial\beta_{2}}\psi_n(\beta_1,\beta_2) =\frac{\partial^{2}}{\partial\beta_{1}\partial\beta_{2}}\lim_{n\to \infty}\psi_n(\beta_1,\beta_2) =\frac{p(x^{\ast})^{p-1}}{|\ell''(x^*)|}. 
\end{equation*} On the transition curve except at the critical point, \begin{align*} &\lim_{n\rightarrow\infty}\frac{1}{n} \frac{\partial^2 }{\partial \beta_1 \partial \beta_2}\psi_n(\beta_1,\beta_2) \\ &= \frac{((x_{1}^{\ast})^{p}-(x_{2}^{\ast})^{p})(x_{1}^{\ast}-x_{2}^{\ast}) \sqrt{x_{1}^{\ast}(1-x_{1}^{\ast})|\ell''(x_{1}^{\ast})|} \sqrt{x_{2}^{\ast}(1-x_{2}^{\ast})|\ell''(x_{2}^{\ast})|}} {\left(\sqrt{x_{1}^{\ast}(1-x_{1}^{\ast})|\ell''(x_{1}^{\ast})|} +\sqrt{x_{2}^{\ast}(1-x_{2}^{\ast})|\ell''(x_{2}^{\ast})|}\right)^{2}}. \end{align*} At the critical point, \begin{equation*} \lim_{n\rightarrow\infty}\frac{1}{n^{1/2}} \frac{\partial^2 }{\partial \beta_1 \partial \beta_2}\psi_n(\beta_1,\beta_2) =\frac{2\sqrt{6}\Gamma(\frac{3}{4})}{\Gamma(\frac{1}{4})}\frac{(p-1)^{p}}{p^{p+\frac{1}{2}}}. \end{equation*} \end{theorem} \begin{figure} \begin{center} \includegraphics[scale=0.7]{edge_variance_v4.eps} \end{center} \caption{(i): The graph of the phase transition curve $\beta_2 = q(\beta_1)$ when $p = 2$, with the critical point labeled by $*$. (ii)-(iv): Scaling of the variance of $e(X)$ (ii) off the phase transition curve, (iii) at the critical point, and (iv) on the phase transition curve away from the critical point. For (ii)-(iv) we use $p = 2$ and $(\beta_1,\beta_2)$ values of $(-3/2,3/2)$, $(-2,2)$ and $(-5/2,5/2)$, respectively. The straight lines are obtained from the scaling in Theorem~\ref{MainThm}, and the squares are obtained by Monte Carlo simulation.} \label{fig2} \end{figure} The next theorem gives the limiting edge densities. \begin{theorem}\label{marginaldensities} Off the phase transition curve and at the critical point, \begin{equation} \lim_{n\rightarrow\infty}\mathbb{P}_{n}(X_{12}=1) =1-\lim_{n\rightarrow\infty}\mathbb{P}_{n}(X_{12}=0)=x^{\ast}. 
\end{equation} On the phase transition curve except at the critical point, \begin{equation} \lim_{n\rightarrow\infty}\mathbb{P}_{n}(X_{12}=1) =1-\lim_{n\rightarrow\infty}\mathbb{P}_{n}(X_{12}=0)=\alpha x_{1}^{\ast}+(1-\alpha)x_{2}^{\ast}, \end{equation} where \begin{equation} \alpha:=\frac{\sqrt{ x_{2}^{\ast}(1-x_{2}^{\ast})|\ell''(x_{2}^{\ast})|}} {\sqrt{x_{1}^{\ast}(1-x_{1}^{\ast})|\ell''(x_{1}^{\ast})|} +\sqrt{x_{2}^{\ast}(1-x_{2}^{\ast})|\ell''(x_{2}^{\ast})|}}. \end{equation} \end{theorem} In~\cite{Radin} it is proved that, off the phase transition curve and at the critical point, for large $n$ a typical graph behaves like the Erd\H{o}s-R\'{e}nyi graph $G(n,p^*)$, where $p^{\ast}=x^{\ast}$ is the unique global maximizer of $\ell$. (See Theorem 3.4 of~\cite{Radin} for a more precise statement.) It is also shown that, on the phase transition curve except at the critical point, for large $n$ a typical graph behaves like $G(n,p^{\ast})$, where $p^{\ast}$ is a mixture of the two global maximizers $x^{\ast}_{1}<x^{\ast}_{2}$ of $\ell$. However, $p^{\ast}$ is not determined explicitly. In our model, since we consider only directed graphs, we do not obtain an Erd\H{o}s-R\'{e}nyi graph in the $n\to \infty$ limit. Nevertheless, Theorem~\ref{marginaldensities} is a qualitatively similar result about limiting edge probabilities, with an explicit formula for the mixture of edge probabilities, $p^* = \alpha x_{1}^{\ast}+(1-\alpha)x_{2}^{\ast}$, along the phase transition curve. \section{Key Estimates}\label{ESTIMATES} First we have the following formula for the normalization $Z_n(\beta_1,\beta_2)$: \begin{proposition}\label{Zn} Let $W$ be a binomial random variable with parameters $n$ and $\frac{1}{2}$: \begin{equation*} \mathbb{P}(W = i)= 2^{-n}\binom{n}{i}. \end{equation*} Then \begin{equation*} Z_{n}(\beta_{1},\beta_{2}) = 2^{n^2} \left({\mathbb E}\left[\exp\left(\beta_{1} W + \frac{\beta_{2}}{n^{p-1}}W^p\right)\right]\right)^n. 
\end{equation*} \end{proposition} Next we approximate the expectation in Proposition~\ref{Zn} in terms of an integral: \begin{proposition}\label{E} Let $W$ be a binomial random variable with parameters $n$ and $\frac{1}{2}$. Then for any $r < 1$, \begin{align*} &{\mathbb E}\left[W^k\exp\left(\beta_{1} W + \frac{\beta_{2}}{n^{p-1}}W^p\right)\right] \\ &= \begin{cases} \left(1+O\left(n^{1/2-r}\right)\right) n^k 2^{-n}\sqrt{\frac{n}{2\pi}}\int_{0}^{1} \sqrt{\frac{x^{2k}}{x(1-x)}}e^{n\ell(x)}\,dx, & (\beta_1,\beta_2) \ne (\beta_1^c, \beta_2^c)\\ \left(1+O\left(n^{1/4-r}\right)\right) n^k 2^{-n}\sqrt{\frac{n}{2\pi}}\int_{0}^{1} \sqrt{\frac{x^{2k}}{x(1-x)}}e^{n\ell(x)}\,dx, &(\beta_1,\beta_2) = (\beta_1^c, \beta_2^c) \end{cases} \end{align*} \end{proposition} Lastly we give a technical lemma for computing the integral in Proposition~\ref{E}: \begin{proposition}\label{laplace} Let $f$ be an analytic function in $(0,1)$ with Taylor expansion at $c \in (0,1)$ given by \begin{equation*} f(x) = d_0(c) + d_1(c)(x-c) + d_2(c)(x-c)^2 + \ldots, \quad d_j(c) := \frac{f^{(j)}(c)}{j!}. \end{equation*} For $c \in (0,1)$, define \begin{align}\begin{split}\label{bda} &b_k(c) = \frac{\ell^{(k)}(c)}{k!},\\ &{\alpha}_k(c) = \frac{1}{2}\Gamma\left(\frac{k}{2}\right)|b_2(c)|^{-k/2},\\ &{\gamma}_k(c) = \frac{1}{4}\Gamma\left(\frac{k}{4}\right)|b_4(c)|^{-k/4}.\end{split} \end{align} Assume that $f(x)e^{n\ell(x)} \in L^1[0,1]$ for each $n$. Then as $n\rightarrow\infty$, we have the following. \begin{itemize} \item[(i)] Off the phase transition curve, \begin{equation*} \int_0^1 f(x) e^{n\ell(x)}\,dx = e^{n\ell(c)}\left[n^{-1/2}d_0\alpha_1 + n^{-3/2}\Lambda + O(n^{-5/2})\right] \end{equation*} where \begin{equation*} \Lambda := d_2\alpha_3 + d_1 b_3\alpha_5 + d_0 b_4 \alpha_5 + \frac{1}{2}d_0 b_3^2 \alpha_7 \end{equation*} with $c = x^*$ the unique maximizer of $\ell$, and $d_j = d_j(c)$, $b_j = b_j(c)$, $\alpha_j = \alpha_j(c)$.
\item[(ii)] On the phase transition curve except at the critical point, \begin{equation*} \int_0^1 f(x) e^{n\ell(x)}\,dx = e^{n\ell(c)}\left[n^{-1/2}\left(d_0(c_1)\alpha_1(c_1) + d_0(c_2)\alpha_1(c_2)\right) + O(n^{-3/2})\right] \end{equation*} where $c_1$ and $c_2$ are the maximizers of $\ell$. \item[(iii)] At the critical point, \begin{equation*} \int_0^1 f(x) e^{n\ell(x)}\,dx = e^{n\ell(c)}\left[n^{-1/4}d_0\gamma_1 + n^{-3/4}\Theta + O(n^{-5/4})\right] \end{equation*} where \begin{equation*} \Theta := d_2\gamma_3 + d_1 b_5\gamma_7 + d_0 b_6 \gamma_7 + \frac{1}{2}d_0 b_5^2 \gamma_{11} \end{equation*} with $c = x^*$ the unique maximizer of $\ell$, and $d_j = d_j(c)$, $b_j = b_j(c)$, $\gamma_j = \gamma_j(c)$. \end{itemize} \end{proposition} Note that this strategy allows for a relatively precise computation of $Z_n(\beta_1,\beta_2)$. Unfortunately, arbitrary precision cannot be achieved, due to the error inherent in the sum-to-integral approximation of Proposition~\ref{E}. \section{Proofs}\label{PROOFS} Before turning to the proofs of the theorems in Section~\ref{THEOREMS}, we will prove the estimates in Section~\ref{ESTIMATES}. The following result will be needed in almost all of our proofs. \begin{proposition}\label{order} Off the phase transition curve, \begin{equation*} \ell'(x^*) = 0,\quad \ell''(x^*) < 0. \end{equation*} On the phase transition curve except at the critical point, \begin{equation*} \ell'(x_1^*) = \ell'(x_2^*) = 0,\quad \ell''(x_1^*)<0, \quad \ell''(x_2^*) < 0. \end{equation*} At the critical point, \begin{equation*} \ell'(x^*)=\ell''(x^*) = \ell'''(x^*) = 0, \quad \ell^{(4)}(x^*)=\frac{-p^{5}}{(p-1)^{2}}< 0.
\end{equation*} \end{proposition} \begin{proof} It is straightforward to compute that \begin{align*} &\ell'(x)=\beta_{1}+p\beta_{2}x^{p-1}-\log\left(\frac{x}{1-x}\right), \\ &\ell''(x)=p(p-1)\beta_{2}x^{p-2}-\frac{1}{x}-\frac{1}{1-x}, \\ &\ell'''(x)=p(p-1)(p-2)\beta_{2}x^{p-3} +\frac{1}{x^{2}}-\frac{1}{(1-x)^{2}}, \\ &\ell^{(4)}(x)=p(p-1)(p-2)(p-3)\beta_{2}x^{p-4} -\frac{2}{x^{3}}-\frac{2}{(1-x)^{3}}. \end{align*} Since $\lim_{x\rightarrow 0^{+}}\ell'(x)=+\infty$ and $\lim_{x\rightarrow 1^{-}}\ell'(x)=-\infty$, the maximum of $\ell$ over $[0,1]$ is achieved at an interior local maximum, so $\ell'(x^{\ast})=0$. Let us first show that $\ell''(x^{\ast})<0$ off the critical point (where $x^*$ denotes either $x_1^*$ or $x_2^*$ if we are on the phase transition curve). Following the proof of Proposition 3.2 in Radin and Yin~\cite{Radin}, we first analyze the properties of $\ell''(x)$. We can rewrite $\ell''(x)$ as \begin{equation} \ell''(x)=x^{p-2}p(p-1)\left[\beta_{2}-\frac{1}{p(p-1)x^{p-1}(1-x)}\right]. \end{equation} Consider the function \begin{equation} m(x):=\frac{1}{p(p-1)x^{p-1}(1-x)}. \end{equation} It is easy to observe that $m(x)\geq\frac{p^{p-1}}{(p-1)^{p}}$, with equality if and only if $x=\frac{p-1}{p}$. (i) If $\beta_{2}<\frac{p^{p-1}}{(p-1)^{p}}$, $\ell''(x)<0$ on $[0,1]$ and in particular $\ell''(x^{\ast})<0$. (ii) If $\beta_{2}>\frac{p^{p-1}}{(p-1)^{p}}$, there exist $0<x_{1}<\frac{p-1}{p}<x_{2}<1$ so that $\ell''(x)<0$ on $0<x<x_{1}$, $\ell''(x)>0$ on $x_{1}<x<x_{2}$ and $\ell''(x)<0$ on $x_{2}<x<1$. Moreover $\ell''(x_{1})=\ell''(x_{2})=0$. If $\ell'(x_{1})\geq 0$, $\ell(x)$ has a unique local and hence global maximizer $x^{\ast}>x_{2}$; if $\ell'(x_{2})\leq 0$, $\ell(x)$ has a unique local and hence global maximizer $x^{\ast}<x_{1}$. Finally, if $\ell'(x_{1})<0<\ell'(x_{2})$, then $\ell(x)$ has two local maximizers $x_{1}^{\ast}$ and $x_{2}^{\ast}$ so that $x_{1}^{\ast}<x_{1}<\frac{p-1}{p}<x_{2}<x_{2}^{\ast}$.
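The lower bound $m(x)\geq p^{p-1}/(p-1)^p$, with equality only at $x=(p-1)/p$, can be confirmed numerically (a sketch added for illustration only, not part of the proof):

```python
def m(x, p):
    # m(x) = 1 / (p (p-1) x^{p-1} (1-x))
    return 1.0 / (p * (p - 1) * x ** (p - 1) * (1 - x))

for p in (2, 3, 4):
    # Minimize m over a fine grid in (0, 1)
    xs = [i / 10000 for i in range(1, 10000)]
    xmin = min(xs, key=lambda x: m(x, p))
    claimed = p ** (p - 1) / (p - 1) ** p
    assert abs(xmin - (p - 1) / p) < 1e-3
    assert abs(m(xmin, p) - claimed) < 1e-6
```

For instance, $p=2$ gives $m(x)=1/(2x(1-x))$ with minimum value $2$ at $x=1/2$.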
Since $\ell''$ vanishes only at $x_{1}$ and $x_{2}$, we have proved that $\ell''(x^{\ast})<0$. (iii) If $\beta_{2}=\frac{p^{p-1}}{(p-1)^{p}}$, $\ell''(x)\leq 0$ on $[0,1]$ and $\ell''(x)=0$ if and only if $x=\frac{p-1}{p}$ by the properties of $m(x)$. Therefore, $\ell''(x^{\ast})=0$ if and only if $x^{\ast}=\frac{p-1}{p}$. Since $\ell'(x^{\ast})=0$, $x^{\ast}=\frac{p-1}{p}$ if and only if \begin{equation} \beta_{1}=-p\frac{p^{p-1}}{(p-1)^{p}}\left(\frac{p-1}{p}\right)^{p-1} +\log\left(\frac{\frac{p-1}{p}}{1-\frac{p-1}{p}}\right)=\beta_{1}^{c}. \end{equation} Hence $\ell''(x^{\ast})<0$ off the critical point and $\ell''(x^{\ast})=0$ at the critical point. Furthermore, at the critical point $(\beta_{1},\beta_{2})=(\beta_{1}^{c},\beta_{2}^{c})$, we can compute that \begin{equation*} \ell'''(x^{\ast})=p(p-1)(p-2)\frac{p^{p-1}}{(p-1)^{p}}\frac{(p-1)^{p-3}}{p^{p-3}} +\frac{p^{2}}{(p-1)^{2}}-p^{2}=0. \end{equation*} Moreover, \begin{align*} \ell^{(4)}(x^{\ast})&=p(p-1)(p-2)(p-3)\frac{p^{p-1}}{(p-1)^{p}}\frac{(p-1)^{p-4}}{p^{p-4}} -\frac{2p^{3}}{(p-1)^{3}}-2p^{3} \\ &=\frac{-p^{5}}{(p-1)^{2}}<0. \end{align*} \end{proof} The next three proofs are for the results in Section~\ref{ESTIMATES}. \begin{proof}[Proof of Proposition~\ref{Zn}] Let $Y = (Y_{ij})_{1\le i,j \le n}$ be an $n\times n$ matrix of i.i.d. Bernoulli random variables: \begin{equation*} {\mathbb P}(Y_{ij} = 0) = \frac{1}{2} = {\mathbb P}(Y_{ij} = 1). \end{equation*} For $i=1,\ldots,n$ define \begin{equation*} W_i = \sum_{j=1}^n Y_{ij}.
\end{equation*} Then \begin{align*} Z_{n}(\beta_{1},\beta_{2}) &= 2^{n^2}\,{\mathbb E}\left[\exp\left(n^2(\beta_{1} e(Y) + \beta_{2} s(Y))\right)\right]\\ &= 2^{n^2}\,{\mathbb E}\left[\exp\left(\sum_{i=1}^n \beta_{1} W_i + \frac{\beta_{2}}{n^{p-1}}W_i^p\right)\right]\\ &= 2^{n^2}\,{\mathbb E}\left[\prod_{i=1}^{n}\exp\left(\beta_{1} W_i + \frac{\beta_{2}}{n^{p-1}}W_i^p\right)\right]\\ &= 2^{n^2}\,\prod_{i=1}^{n}{\mathbb E}\left[\exp\left(\beta_{1} W_i + \frac{\beta_{2}}{n^{p-1}}W_i^p\right)\right]\\ &= 2^{n^2} \left({\mathbb E}\left[\exp\left(\beta_{1} W + \frac{\beta_{2}}{n^{p-1}}W^p\right)\right]\right)^n. \end{align*} \end{proof} \begin{proof}[Proof of Proposition~\ref{E}] We will prove only the case $k = 0$, as the other cases are easy extensions. Observe that \begin{equation*} {\mathbb E}\left[\exp\left(\beta_{1} W + \frac{\beta_{2}}{n^{p-1}}W^p\right)\right] = 2^{-n} \sum_{i=1}^n \binom{n}{i} \exp\left(\beta_{1}i + \frac{\beta_{2}}{n^{p-1}}i^p\right). \end{equation*} Using the fact that for all $n\ge 1$, \begin{equation*} n\log n - n + \frac{1}{2}\log n \le \log n! \le n\log n - n + \frac{1}{2}\log n + 1, \end{equation*} we obtain \begin{equation}\label{bin1} \binom{n}{i} \le \exp\left(n\left[-\frac{i}{n}\log \frac{i}{n} - \left(1-\frac{i}{n}\right)\log \left(1-\frac{i}{n}\right) + \frac{1}{2n}\log \frac{n}{i(n-i)}\ + \frac{1}{n}\right]\right). \end{equation} Define \begin{equation*} A_n = \{i\in \{1,\ldots,n\}\,:\,i/n \in (\varepsilon, 1 - \varepsilon)\} \end{equation*} where $\varepsilon>0$ will be specified momentarily. From~\eqref{bin1}, for any $\varepsilon \in (0,1)$ we have \begin{align*} \max_{i \in \{1,\ldots,n\}\setminus A_n} \binom{n}{i} \exp\left(\beta_{1} i + \frac{\beta_{2}}{n^{p-1}}i^p\right) &\le e\left(1-\frac{1}{n}\right)^{-\frac{1}{2n}} \sup_{x \in [0,1]\setminus (\varepsilon, 1-\varepsilon)} e^{n\ell(x)}\\ &\le 3\sup_{x \in [0,1]\setminus (\varepsilon, 1-\varepsilon)} e^{n\ell(x)}. 
\end{align*} Since $\ell'(x) \to \infty$ as $x \to 0$ and $\ell'(x) \to -\infty$ as $x \to 1$, we may choose $\varepsilon > 0$ such that for some $\delta > 0$, \begin{equation*} \sup_{x \in [0,1]\setminus(\varepsilon,1-\varepsilon)}\ell(x) < \ell(x^*) - \delta. \end{equation*} Thus, \begin{equation*} \sum_{i \in \{1,\ldots,n\}\setminus A_n}\binom{n}{i} \exp\left(\beta_{1} i + \frac{\beta_{2}}{n^{p-1}}i^p\right) = O\left(e^{n(\ell(x^*) - \delta)}\right). \end{equation*} For $i \in A_n$, Stirling's formula allows us to write \begin{align*} \binom{n}{i}&= \left(1+O\left(n^{-1}\right)\right)\frac{1}{\sqrt{2\pi}}\sqrt{\frac{n}{i(n-i)}} \\ &\qquad\qquad\qquad \times\exp\left(n\left[-\frac{i}{n}\log\frac{i}{n}- \left(1-\frac{i}{n}\right)\log \left(1-\frac{i}{n}\right)\right]\right). \end{align*} The last two displays yield \begin{align}\begin{split}\label{mainexp} &{\mathbb E}\left[\exp\left(\beta_{1} W + \frac{\beta_{2}}{n^{p-1}}W^p\right)\right] \\ &=2^{-n} \left(\sum_{i=1}^n\binom{n}{i} \exp\left(\beta_{1} i + \frac{\beta_{2}}{n^{p-1}}i^p\right)\right)\\ &= 2^{-n} \left(O\left(e^{n(\ell(x^*) - \delta)}\right) + \sum_{i \in A_n}\binom{n}{i} \exp\left(\beta_{1} i + \frac{\beta_{2}}{n^{p-1}}i^p\right)\right)\\ &= 2^{-n} \left(O\left(e^{n(\ell(x^*) - \delta)}\right) + \left(1+O\left(n^{-1}\right)\right) \frac{1}{\sqrt{2\pi n}}\sum_{i \in A_n} \sqrt{\frac{1}{(i/n)(1-i/n)}}e^{n\ell(i/n)} \right).\end{split} \end{align} We will approximate the sum in~\eqref{mainexp} by an integral. Consider first the case off the transition curve. Thus, there is a unique maximizer $x^*$ of $\ell$, and $\ell'(x^*) = 0$, $\ell''(x^*) < 0$. Let $q \in (1/3,1/2)$ and define \begin{equation*} B_n = \{i \in \{1,\ldots,n\}\,:\,i/n \in (x^*-n^{-q},x^*+n^{-q})\}. 
\end{equation*} For any $j \in A_n$, note that \begin{align}\begin{split}\label{err1} &\left|\frac{1}{n}\sqrt{\frac{1}{(j/n)(1-j/n)}}e^{n\ell(j/n)} - \int_{j/n}^{j/n+1/n} \sqrt{\frac{1}{x(1-x)}}e^{n\ell(x)}\,dx\right| \\ &\le \frac{1}{n}\max_{x,y \in [j/n,\,j/n+1/n]} \left|\sqrt{\frac{1}{x(1-x)}}e^{n\ell(x)} - \sqrt{\frac{1}{y(1-y)}}e^{n\ell(y)}\right| \\ &\le \frac{1}{n}\max_{x,y \in [j/n,\,j/n+1/n]} \sqrt{\frac{1}{x(1-x)}} \max_{x,y \in [j/n,\,j/n+1/n]}\left|e^{n\ell(x)}- e^{n\ell(y)}\right|\\ &\quad + \frac{1}{n}e^{n\ell(x^*)}\max_{x,y \in [j/n,\,j/n+1/n]} \left|\sqrt{\frac{1}{x(1-x)}} - \sqrt{\frac{1}{y(1-y)}}\right|\\ &= O(n^{-1})\max_{x,y \in [j/n,\,j/n+1/n]} \left|e^{n\ell(x)} - e^{n\ell(y)}\right| +O(n^{-2})e^{n\ell(x^*)}.\end{split} \end{align} Fix $j \in A_n$ and let $x, y \in [j/n,j/n+1/n]$. Note that for all $x$, \begin{equation*} |e^x -1 | \le e^{|x|}-1. \end{equation*} We use this, the fact that $\ell''(x^*) < 0$, and the mean value theorem to write \begin{align}\begin{split}\label{err2} \left|e^{n\ell(x)} - e^{n\ell(y)}\right| &= e^{n\ell(x^*)}e^{n(\ell(y)-\ell(x^*))} \left|e^{n(\ell(x)-\ell(y))} - 1\right|\\ &= e^{n\ell(x^*)}\exp\left(n\frac{\ell''(x^*)}{2}(y-x^*)^2 + n\frac{\ell'''(\xi)}{6}(y-x^*)^3\right) \\ &\quad \times\left|\exp\left(n\ell'(y)(x-y) + \frac{n\ell''(\nu)}{2}(x-y)^2\right)-1\right|\\ &= e^{n\ell(x^*)}\exp\left(n\frac{\ell''(x^*)}{2}(y-x^*)^2 + n\frac{\ell'''(\xi)}{6}(y-x^*)^3\right) \\ &\quad \times\left|\exp\left(n\ell''(\zeta)(y-x^*)(x-y) + \frac{n\ell''(\nu)}{2}(x-y)^2\right)-1\right|\\ &\le e^{n\ell(x^*)}\exp\left(n\frac{-|\ell''(x^*)|}{2}(y-x^*)^2 + n\frac{\ell'''(\xi)}{6}(y-x^*)^3\right) \\ &\quad \times\left(\exp\left(n|\ell''(\zeta)||y-x^*||x-y| + \frac{n|\ell''(\nu)|}{2}(x-y)^2\right)-1\right)\end{split} \end{align} where $\xi$ and $\zeta$ are between $y$ and $x^*$, and $\nu$ is between $y$ and $x$. 
Observe that \begin{align*} &\exp\left(n\frac{-|\ell''(x^*)|}{2}(y-x^*)^2 + n\frac{\ell'''(\xi)}{6}(y-x^*)^3\right)\\ &= \begin{cases} O\left(\exp\left(-\frac{|\ell''(x^*)|}{2}n^{1-2q}\right)\right)\left(1+O(n)\right), & j \notin B_n\\ 1+O(n^{1-3q}), & j \in B_n \end{cases} \end{align*} and that \begin{align*} &\exp\left(n|\ell''(\zeta)||y-x^*||x-y| + \frac{n|\ell''(\nu)|}{2}(x-y)^2\right)-1\\ &= \begin{cases} O(1), & j \notin B_n\\ O(n^{-q}), & j \in B_n \end{cases} \end{align*} Let $t = 1-2q > 0$ and $\omega \in(0, |\ell''(x^*)|/2)$. The last three displays show that \begin{equation*} \max_{x,y \in [j/n,\,j/n+1/n]}\left|e^{n\ell(x)} - e^{n\ell(y)}\right| = \begin{cases} e^{n\ell(x^*)}O(\exp(-\omega n^t)), & j \notin B_n \\ e^{n\ell(x^*)}O(n^{-q}), & j \in B_n \end{cases}, \end{equation*} and so from~\eqref{err1}, \begin{align}\begin{split}\label{diff} &\left|\frac{1}{n}\sqrt{\frac{1}{(j/n)(1-j/n)}}e^{n\ell(j/n)} - \int_{j/n}^{j/n+1/n} \sqrt{\frac{1}{x(1-x)}}e^{n\ell(x)}\,dx\right| \\ &= \begin{cases} e^{n\ell(x^*)}O(\exp(-\omega n^t)), & j \notin B_n \\ e^{n\ell(x^*)}O(n^{-1-q}), & j \in B_n \end{cases}.\end{split} \end{align} Observe that \begin{equation}\label{AnBn} |B_n| = O(n^{1-q}),\quad |A_n\setminus B_n| = O(n).
\end{equation} Now from~\eqref{diff}, for any $r < 1$, \begin{align}\begin{split}\label{sumbound} &\left|\frac{1}{n}\sum_{i \in A_n} \sqrt{\frac{1}{(i/n)(1-i/n)}}e^{n\ell(i/n)} - {\int_0^{1} \sqrt{\frac{1}{x(1-x)}}e^{n\ell(x)}\,dx}\right| \\ &\le e^{n\ell(x^*)}\left(|B_n|\,O(n^{-1-q}) + |A_n\setminus B_n|\,O(\exp(-\omega n^{t}))\right) \\ &\qquad\qquad\qquad\qquad+ {\int_{[0,1]\setminus(\varepsilon,\,1-\varepsilon)} \sqrt{\frac{1}{x(1-x)}}e^{n\ell(x)}\,dx}\\ &\le e^{n\ell(x^*)}\left(O(n^{-2q}) + O(\exp(-\omega n^{t}))\right) + O\left(e^{n(\ell(x^*) - \delta)}\right)\\ &\le e^{n\ell(x^*)}O(n^{-r}).\end{split} \end{align} Now by~\eqref{sumbound} and Proposition~\ref{laplace}, \begin{align*} &\left|\frac{1}{n}\sum_{i \in A_n} \sqrt{\frac{1}{(i/n)(1-i/n)}}e^{n\ell(i/n)} - {\int_0^{1} \sqrt{\frac{1}{x(1-x)}}e^{n\ell(x)}\,dx}\right|\\ &\quad \times\left( \int_{0}^{1} \sqrt{\frac{1}{x(1-x)}}e^{n\ell(x)}\,dx\right)^{-1} = O(n^{1/2-r}). \end{align*} Thus, \begin{equation*} \frac{1}{n} \sum_{i \in A_n} \sqrt{\frac{1}{(i/n)(1-i/n)}}e^{n\ell(i/n)} = \left(1+O\left(n^{1/2-r}\right)\right)\int_{0}^{1} \sqrt{\frac{1}{x(1-x)}}e^{n\ell(x)}\,dx. \end{equation*} Now from~\eqref{mainexp} we conclude \begin{align*} &{\mathbb E}\left[\exp\left(\beta_{1} W + \frac{\beta_{2}}{n^{p-1}}W^p\right)\right]\\ &= \left(1+O\left(n^{1/2-r}\right)\right) 2^{-n}\sqrt{\frac{n}{2\pi}}\int_{0}^{1} \sqrt{\frac{1}{x(1-x)}}e^{n\ell(x)}\,dx. \end{align*} Next, consider $(\beta_1,\beta_2)$ on the transition curve away from the critical point. By Theorem~\ref{trans_curve}, there are two maximizers of $\ell$, say $x_1^*$ and $x_2^*$. Defining \begin{equation*} B_n = \{i \in \{1,\ldots,n\}\,:\,i/n \in (x_1^*-n^{-q},x_1^*+n^{-q})\cup (x_2^*-n^{-q},x_2^*+n^{-q})\}, \end{equation*} it is not hard to see that the arguments above can be repeated to obtain the same result. Finally, consider the case at the critical point. Here, equation~\eqref{err1} still holds, but~\eqref{err2} needs to be modified, as follows.
By Proposition~\ref{order}, we have $\ell'(x^*) = \ell''(x^*) = \ell'''(x^*) = 0$ and $\ell^{(4)}(x^*) < 0$, so by the mean value theorem we have \begin{align}\begin{split} &\left|e^{n\ell(x)} - e^{n\ell(y)}\right| \\ &= e^{n\ell(x^*)}e^{n(\ell(y)-\ell(x^*))} \left|e^{n(\ell(x)-\ell(y))} - 1\right|\\ &= e^{n\ell(x^*)}\exp\left(n\frac{\ell^{(4)}(x^*)}{4!}(y-x^*)^4 + n\frac{\ell^{(5)}(\xi)}{5!}(y-x^*)^5\right) \\ &\quad \times\left|\exp\left(n\ell'(y)(x-y) + \frac{n\ell''(\nu)}{2}(x-y)^2\right)-1\right|\\ &= e^{n\ell(x^*)}\exp\left(n\frac{\ell^{(4)}(x^*)}{4!}(y-x^*)^4 + n\frac{\ell^{(5)}(\xi)}{5!}(y-x^*)^5\right) \\ &\quad \times\left|\exp\left(n\ell^{(4)}(\zeta)(v-x^*)(u-x^*)(y-x^*)(x-y) + \frac{n\ell''(\nu)}{2}(x-y)^2\right)-1\right|\\ &\le e^{n\ell(x^*)}\exp\left(n\frac{-|\ell^{(4)}(x^*)|}{4!}(y-x^*)^4 + n\frac{\ell^{(5)}(\xi)}{5!}(y-x^*)^5\right) \\ &\quad \times\left(\exp\left(n|\ell^{(4)}(\zeta)||v-x^*||u-x^*||y-x^*||x-y| + \frac{n|\ell''(\nu)|}{2}(x-y)^2\right)-1\right)\end{split} \end{align} where $u$, $v$, $\xi$ and $\zeta$ are between $y$ and $x^*$, and $\nu$ is between $y$ and $x$. Let $q \in (1/5,1/4)$ and note that \begin{align*} &\exp\left(n\frac{-|\ell^{(4)}(x^*)|}{4!}(y-x^*)^4 + n\frac{\ell^{(5)}(\xi)}{5!}(y-x^*)^5\right)\\ &= \begin{cases} O\left(\exp\left(\frac{-|\ell^{(4)}(x^*)|}{4!}n^{1-4q}\right)\right) \left(1 + O(n)\right), & j \notin B_n\\ 1 + O(n^{1-5q}), & j \in B_n \end{cases} \end{align*} and that \begin{align*} &\exp\left(n|\ell^{(4)}(\zeta)||v-x^*||u-x^*||y-x^*||x-y| + \frac{n|\ell''(\nu)|}{2}(x-y)^2\right)-1\\ &= \begin{cases} O(1),& j \notin B_n \\ O(n^{-3q}),& j \in B_n.\end{cases} \end{align*} Let $\omega \in (0, |\ell^{(4)}(x^*)|/4!)$ and $t = 1-4q > 0$. The last three displays show that \begin{equation*} \max_{x,y \in [j/n,\,j/n+1/n]}\left|e^{n\ell(x)} - e^{n\ell(y)}\right| = \begin{cases} e^{n\ell(x^*)}O(\exp(-\omega n^t)), & j \notin B_n \\ e^{n\ell(x^*)}O(n^{-3q}), & j \in B_n \end{cases}. 
\end{equation*} So from~\eqref{err1}, \begin{align}\begin{split}\label{diff2} &\left|\frac{1}{n}\sqrt{\frac{1}{(j/n)(1-j/n)}}e^{n\ell(j/n)} - \int_{j/n}^{j/n+1/n} \sqrt{\frac{1}{x(1-x)}}e^{n\ell(x)}\,dx\right| \\ &= \begin{cases} e^{n\ell(x^*)}O(\exp(-\omega n^t)), & j \notin B_n \\ e^{n\ell(x^*)}O(n^{-1-3q}), & j \in B_n \end{cases}.\end{split} \end{align} Using~\eqref{diff2} and~\eqref{AnBn}, for any $r < 1$, \begin{align}\begin{split}\label{sumbound2} &\left|\frac{1}{n}\sum_{i \in A_n} \sqrt{\frac{1}{(i/n)(1-i/n)}}e^{n\ell(i/n)} - {\int_0^{1} \sqrt{\frac{1}{x(1-x)}}e^{n\ell(x)}\,dx}\right| \\ &\le e^{n\ell(x^*)}\left(|B_n|\,O(n^{-1-3q}) + |A_n\setminus B_n|\,O(\exp(-\omega n^{t}))\right) \\ &\qquad\qquad\qquad\qquad+ {\int_{[0,1]\setminus(\varepsilon,\,1-\varepsilon)} \sqrt{\frac{1}{x(1-x)}}e^{n\ell(x)}\,dx}\\ &\le e^{n\ell(x^*)}\left(O(n^{-4q}) + O(\exp(-\omega n^{t}))\right) + O\left(e^{n(\ell(x^*) - \delta)}\right)\\ &\le e^{n\ell(x^*)}O(n^{-r}).\end{split} \end{align} Now by~\eqref{sumbound2} and Proposition~\ref{laplace}, \begin{align*} &\left|\frac{1}{n}\sum_{i \in A_n} \sqrt{\frac{1}{(i/n)(1-i/n)}}e^{n\ell(i/n)} - {\int_0^{1} \sqrt{\frac{1}{x(1-x)}}e^{n\ell(x)}\,dx}\right|\\ &\quad \times\left( \int_{0}^{1} \sqrt{\frac{1}{x(1-x)}}e^{n\ell(x)}\,dx\right)^{-1} = O(n^{1/4-r}). \end{align*} Thus, \begin{equation*} \frac{1}{n} \sum_{i \in A_n} \sqrt{\frac{1}{(i/n)(1-i/n)}}e^{n\ell(i/n)} = \left(1+O\left(n^{1/4-r}\right)\right)\int_{0}^{1} \sqrt{\frac{1}{x(1-x)}}e^{n\ell(x)}\,dx. \end{equation*} Now from~\eqref{mainexp} we conclude \begin{align*} &{\mathbb E}\left[\exp\left(\beta_{1} W + \frac{\beta_{2}}{n^{p-1}}W^p\right)\right] \\ &=\left(1+O\left(n^{1/4-r}\right)\right) 2^{-n}\sqrt{\frac{n}{2\pi}}\int_{0}^{1} \sqrt{\frac{1}{x(1-x)}}e^{n\ell(x)}\,dx. \end{align*} \end{proof} \begin{proof}[Proof of Proposition~\ref{laplace}] We will prove only {(i)} and {(iii)}, as {(ii)} is standard. We first consider {(i)}.
Note that for $b>0$ and $k \in {\mathbb N}$, \begin{equation*} \int_{-\infty}^{\infty} x^k e^{-bx^2}\,dx = \begin{cases} 0,& k \hbox{ odd }\\ \frac{1}{2}\Gamma\left(\frac{k+1}{2}\right)b^{-\frac{k+1}{2}}, & k \hbox{ even }\end{cases} \end{equation*} So for any $\delta > 0$, \begin{align*} \int_{-\delta}^{\delta} u^k e^{-nbu^2} \,du &= n^{-\frac{k+1}{2}} \int_{-\delta n^{1/2}}^{\delta n^{1/2}} x^k e^{-bx^2}\,dx \\ &= n^{-\frac{k+1}{2}}\left(O(e^{-bn}) + \int_{-\infty}^\infty x^k e^{-bx^2}\,dx\right) \\ &= \begin{cases} 0,& k \hbox{ odd }\\ \frac{1}{2}\Gamma\left(\frac{k+1}{2}\right)(nb)^{-\frac{k+1}{2}} + O(e^{-bn}) , & k \hbox{ even }.\end{cases} \end{align*} Now let $c = x^*$ and $u = x-c$, and pick $0<\delta < \min\{c,1-c\}$. We use Taylor expansions of $x^k$ and $\ell(x)$ at $c$ and of $e^x$ at zero, along with Proposition~\ref{order}, to compute \begin{align*} &\int_{c-\delta}^{c+\delta} f(x) e^{n\ell(x)}\,dx \\ &= \int_{-\delta}^{\delta}\left[d_0 + d_1u +\ldots\right] e^{n(b_0 + b_1u +b_2 u^2 + \ldots)}\,du\\ &= e^{n\ell(c)}\int_{-\delta}^{\delta}\left[d_0 + d_1u +\ldots\right] e^{nb_2 u^2 + nb_3u^3+ \ldots}\,du\\ &=e^{n\ell(c)}\int_{-\delta}^\delta\left[d_0 + d_1u +\ldots\right] \left[1 + (nb_3u^3 + \ldots) +\frac{1}{2}(nb_3u^3 + \ldots)^2+ \ldots\right]e^{nb_2 u^2}\,du\\ &= e^{n\ell(c)}\left[n^{-1/2}d_0\alpha_1 + n^{-3/2}\Lambda + O(n^{-5/2})\right] \end{align*} where the last step is obtained by collecting terms of the same order, and the interchange of sum and integral is justified by the dominated convergence theorem. Since $x^* = c$ is the unique global maximizer of $\ell$, we conclude that for some $\varepsilon > 0$, \begin{equation*} \int_{0}^{1} f(x) e^{n\ell(x)}\,dx = \int_{c-\delta}^{c+\delta} f(x) e^{n\ell(x)}\,dx + O\left(e^{n(\ell(c)-\varepsilon)}\right). \end{equation*} It follows that \begin{equation*} \int_{0}^{1} f(x) e^{n\ell(x)}\,dx = e^{n\ell(c)}\left[n^{-1/2}d_0\alpha_1 + n^{-3/2}\Lambda + O(n^{-5/2})\right].
\end{equation*} Now we turn to {(iii)}. Note that for $b>0$ and $k \in {\mathbb N}$, \begin{equation*} \int_{-\infty}^{\infty} x^k e^{-bx^4}\,dx = \begin{cases} 0,& k \hbox{ odd }\\ \frac{1}{4}\Gamma\left(\frac{k+1}{4}\right)b^{-\frac{k+1}{4}}, & k \hbox{ even }\end{cases}. \end{equation*} So for any $\delta > 0$, \begin{align*} \int_{-\delta}^\delta u^k e^{-nbu^4} \,du &= n^{-\frac{k+1}{4}} \int_{-\delta n^{1/4}}^{\delta n^{1/4}} x^k e^{-bx^4}\,dx \\ &= n^{-\frac{k+1}{4}}\left(O(e^{-bn}) + \int_{-\infty}^\infty x^k e^{-bx^4}\,dx\right) \\ &= \begin{cases} 0,& k \hbox{ odd }\\ \frac{1}{4}\Gamma\left(\frac{k+1}{4}\right)(nb)^{-\frac{k+1}{4}} + O(e^{-bn}) , & k \hbox{ even }\end{cases}. \end{align*} As before we let $c = x^*$ and $u = x-c$, pick $0< \delta < \min\{c,1-c\}$ and use Taylor expansions of $x^k$ and $\ell(x)$ at $c$ and of $e^x$ at zero, along with Proposition~\ref{order}, to write \begin{align*} &\int_{c-\delta}^{c+\delta} f(x) e^{n\ell(x)}\,dx\\ &= \int_{-\delta}^{\delta}\left[d_0 + d_1u +\ldots\right] e^{n(b_0 + b_1u +b_2 u^2 + \ldots)}\,du\\ &= e^{n\ell(c)}\int_{-\delta}^{\delta}\left[d_0 + d_1u +\ldots\right] e^{nb_4 u^4 + nb_5u^5+ \ldots}\,du\\ &=e^{n\ell(c)}\int_{-\delta}^{\delta}\left[d_0 + d_1u +\ldots\right] \left[1 + (nb_5u^5 + \ldots) +\frac{1}{2}(nb_5u^5 + \ldots)^2 + \ldots\right]e^{nb_4 u^4}\,du\\ &= e^{n\ell(c)}\left[n^{-1/4}d_0\gamma_1 + n^{-3/4}\Theta + O(n^{-5/4})\right], \end{align*} where again the last step is obtained by collecting terms of the same order, and the interchange of sum and integral is justified by the dominated convergence theorem. As before, since $x^* = c$ is the unique global maximizer of $\ell$, we can conclude that \begin{equation*} \int_{0}^{1} f(x) e^{n\ell(x)}\,dx = e^{n\ell(c)}\left[n^{-1/4}d_0\gamma_1 + n^{-3/4}\Theta + O(n^{-5/4})\right]. \end{equation*} \end{proof} The remainder of the proofs are for the results in Section~\ref{THEOREMS}.
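For intuition (our addition, not part of the proofs): the factor $\Gamma(\frac{3}{4})/\Gamma(\frac{1}{4})$ appearing in the critical-point results arises as the normalized second moment of the quartic weight $e^{-bu^4}$, a convention-free ratio that can be confirmed by direct numerical integration:

```python
import math

def moment(k, b, half_width=6.0, steps=200_000):
    # Midpoint rule for the integral of u^k * exp(-b u^4) over the real line
    # (the integrand is negligible beyond |u| = half_width for b of order 1)
    h = 2 * half_width / steps
    total = 0.0
    for i in range(steps):
        u = -half_width + (i + 0.5) * h
        total += u ** k * math.exp(-b * u ** 4)
    return total * h

b = 1.0
ratio = moment(2, b) / moment(0, b)
expected = math.gamma(0.75) / math.gamma(0.25) / math.sqrt(b)
assert abs(ratio - expected) < 1e-6
```

In general the ratio equals $\Gamma(\frac{3}{4})/\Gamma(\frac{1}{4})\,b^{-1/2}$, which with $b = |\ell^{(4)}(x^*)|/4!$ produces the constants in Theorems~\ref{MainThm}--\ref{covariance} at the critical point.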
\begin{proof}[Proof of Theorem~\ref{free_energy}] By Propositions~\ref{E} and~\ref{laplace}, we have \begin{align}\begin{split}\label{long} \psi_n(\beta_{1},\beta_{2}) &= n^{-2} \log Z_n(\beta_{1},\beta_{2}) \\ &= \log 2 + n^{-1} \log {\mathbb E} \left[\exp\left(\beta_{1} W + \frac{\beta_{2}}{n^{p-1}}W^p\right)\right] \\ &= O(n^{-1}\log n) + \frac{1}{n}\log \int_0^1 \sqrt{\frac{1}{x(1-x)}}e^{n\ell(x)}\,dx\\ &= O(n^{-1}\log n) + \ell(x^*). \end{split} \end{align} \end{proof} \begin{proof}[Proof of Theorem~\ref{PropertyCurve}] (i) Let $x_1^*<x_2^*$ be the two local maximizers of $\ell$ in the V-shaped region~\cite{Radin} that contains the phase transition curve except the critical point. Along the phase transition curve, we have \begin{align} &\beta_{1}+pq(\beta_{1})(x_{1}^{\ast})^{p-1}-\log\left(\frac{x_{1}^{\ast}}{1-x_{1}^{\ast}}\right)=0, \label{EqnI} \\ &\beta_{1}+pq(\beta_{1})(x_{2}^{\ast})^{p-1}-\log\left(\frac{x_{2}^{\ast}}{1-x_{2}^{\ast}}\right)=0, \label{EqnII} \\ &\beta_{1}x_{1}^{\ast}+q(\beta_{1})(x_{1}^{\ast})^{p} -x_{1}^{\ast}\log x_{1}^{\ast}-(1-x_{1}^{\ast})\log(1-x_{1}^{\ast}) \nonumber \\ &\qquad\qquad =\beta_{1}x_{2}^{\ast}+q(\beta_{1})(x_{2}^{\ast})^{p} -x_{2}^{\ast}\log x_{2}^{\ast}-(1-x_{2}^{\ast})\log(1-x_{2}^{\ast}). \label{EqnIII} \end{align} By Proposition~\ref{order}, $\ell''(x_1^*)$ and $\ell''(x_2^*)$ are nonzero away from the critical point. The implicit function theorem then implies that $x_{1}^{\ast}$ and $x_{2}^{\ast}$ are analytic functions of both $\beta_{1}$ and $\beta_2$. Differentiating \eqref{EqnIII} with respect to $\beta_{1}$ and using \eqref{EqnI} and \eqref{EqnII}, we can show that \begin{equation*} x_{1}^{\ast}+q'(\beta_{1})(x_{1}^{\ast})^{p} = x_{2}^{\ast}+q'(\beta_{1})(x_{2}^{\ast})^{p}, \end{equation*} which implies that $q'(\beta_{1})=-\frac{x_{1}^{\ast}-x_{2}^{\ast}}{(x_{1}^{\ast})^{p}-(x_{2}^{\ast})^{p}}$.
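For completeness, the differentiation step can be made explicit via an envelope-theorem computation: since $\ell'(x_{i}^{\ast})=0$ for $i=1,2$ (this is \eqref{EqnI} and \eqref{EqnII}), only the explicit dependence of $\ell$ on $\beta_{1}$ contributes, and
\begin{equation*}
\frac{d}{d\beta_{1}}\,\ell\big(x_{i}^{\ast}(\beta_{1})\big)
=\ell'(x_{i}^{\ast})\frac{\partial x_{i}^{\ast}}{\partial\beta_{1}}
+x_{i}^{\ast}+q'(\beta_{1})(x_{i}^{\ast})^{p}
=x_{i}^{\ast}+q'(\beta_{1})(x_{i}^{\ast})^{p},
\qquad i=1,2.
\end{equation*}
Equating these two derivatives, since $\ell(x_{1}^{\ast})=\ell(x_{2}^{\ast})$ holds identically along the curve, yields the display above.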
As $\beta_{1}\rightarrow\beta_{1}^{c}$, $x_{2}^{\ast}-x_{1}^{\ast}\rightarrow 0$ and both $x_{2}^{\ast}$ and $x_{1}^{\ast}$ converge to the common maximizer $x^{\ast}_{c}=\frac{p-1}{p}$. Therefore, \begin{equation*} \lim_{\beta_{1}\rightarrow\beta_{1}^{c}}q'(\beta_{1}) =-\frac{1}{p(x^{\ast}_{c})^{p-1}}=-\frac{p^{p-2}}{(p-1)^{p-1}}. \end{equation*} Since $x_{1}^{\ast}\rightarrow 0$ and $x_{2}^{\ast}\rightarrow 1$ as $\beta_{1}\rightarrow-\infty$, we get $\lim_{\beta_{1}\rightarrow-\infty}q'(\beta_{1})=-1$. (ii) Differentiating $q'(\beta_{1})$ with respect to $\beta_{1}$, we get \begin{align} q''(\beta_{1})&= -\frac{1}{((x_{1}^{\ast})^{p}-(x_{2}^{\ast})^{p})^{2}} \left[(1-p)(x_{1}^{\ast})^{p} +p(x_{1}^{\ast})^{p-1}x_{2}^{\ast}-(x_{2}^{\ast})^{p}\right] \frac{\partial x_{1}^{\ast}}{\partial\beta_{1}} \nonumber \\ &\qquad -\frac{1}{((x_{1}^{\ast})^{p}-(x_{2}^{\ast})^{p})^{2}} \left[(1-p)(x_{2}^{\ast})^{p} +p(x_{2}^{\ast})^{p-1}x_{1}^{\ast}-(x_{1}^{\ast})^{p}\right] \frac{\partial x_{2}^{\ast}}{\partial\beta_{1}}.\label{SecondDerivative} \end{align} Differentiating \eqref{EqnI} and \eqref{EqnII} with respect to $\beta_{1}$, we get \begin{align} &1+pq'(\beta_{1})(x_{1}^{\ast})^{p-1} +\left[pq(\beta_{1})(p-1)(x_{1}^{\ast})^{p-2}-\frac{1}{x_{1}^{\ast}(1-x_{1}^{\ast})}\right] \frac{\partial x_{1}^{\ast}}{\partial\beta_{1}}=0,\label{EqnIV} \\ &1+pq'(\beta_{1})(x_{2}^{\ast})^{p-1} +\left[pq(\beta_{1})(p-1)(x_{2}^{\ast})^{p-2}-\frac{1}{x_{2}^{\ast}(1-x_{2}^{\ast})}\right] \frac{\partial x_{2}^{\ast}}{\partial\beta_{1}}=0.\label{EqnV} \end{align} Notice that \begin{equation}\label{EqnVI} 1+pq'(\beta_{1})(x_{1}^{\ast})^{p-1} =1-p\frac{x_{1}^{\ast}-x_{2}^{\ast}}{(x_{1}^{\ast})^{p}-(x_{2}^{\ast})^{p}}(x_{1}^{\ast})^{p-1} >0, \end{equation} and \begin{equation}\label{EqnVII} 1+pq'(\beta_{1})(x_{2}^{\ast})^{p-1} =1-p\frac{x_{1}^{\ast}-x_{2}^{\ast}}{(x_{1}^{\ast})^{p}-(x_{2}^{\ast})^{p}}(x_{2}^{\ast})^{p-1} <0. \end{equation} Moreover, in Proposition \ref{order}, we showed that
\begin{align} &\ell''(x_{1}^{\ast}) =pq(\beta_{1})(p-1)(x_{1}^{\ast})^{p-2}-\frac{1}{x_{1}^{\ast}(1-x_{1}^{\ast})}<0,\label{EqnVIII} \\ & \ell''(x_{2}^{\ast}) =pq(\beta_{1})(p-1)(x_{2}^{\ast})^{p-2}-\frac{1}{x_{2}^{\ast}(1-x_{2}^{\ast})}<0.\label{EqnIX} \end{align} Therefore, from \eqref{EqnIV}, \eqref{EqnV}, \eqref{EqnVI}, \eqref{EqnVII}, \eqref{EqnVIII} and \eqref{EqnIX}, we conclude that $\frac{\partial x_{1}^{\ast}}{\partial\beta_{1}}>0$ and $\frac{\partial x_{2}^{\ast}}{\partial\beta_{1}}<0$. Finally, by noticing that in \eqref{SecondDerivative}, \begin{align*} &(1-p)(x_{1}^{\ast})^{p} +p(x_{1}^{\ast})^{p-1}x_{2}^{\ast}-(x_{2}^{\ast})^{p}<0, \\ &(1-p)(x_{2}^{\ast})^{p} +p(x_{2}^{\ast})^{p-1}x_{1}^{\ast}-(x_{1}^{\ast})^{p}>0, \end{align*} we conclude that $q''(\beta_{1})>0$. \end{proof} In the proofs below, let $d_m^{(n)}$ be defined as in Proposition~\ref{laplace} for the function \begin{equation*} f(x) = \frac{x^n}{\sqrt{x(1-x)}}. \end{equation*} \begin{proof}[Proof of Theorem~\ref{MainThm}] Off the phase transition curve, the result follows immediately from Theorem~\ref{free_energy} and results in~\cite{Radin}. Thus, we prove only the last two displays in Theorem~\ref{MainThm}. From the second line of~\eqref{long}, we have \begin{align}\label{2nd_deriv} \frac{\partial^2}{\partial \beta_{1}^2}\psi_n(\beta_{1},\beta_{2}) &= n^{-1}\Bigg\{\frac{{\mathbb E}\left[W^2\exp\left(\beta_{1} W + \frac{\beta_{2}}{n^{p-1}}W^p\right)\right]} {{\mathbb E}\left[\exp\left(\beta_{1} W + \frac{\beta_{2}}{n^{p-1}}W^p\right)\right]} \\ &\qquad\qquad\qquad - \left(\frac{{\mathbb E}\left[W\exp\left(\beta_{1} W + \frac{\beta_{2}}{n^{p-1}}W^p\right)\right]} {{\mathbb E}\left[\exp\left(\beta_{1} W + \frac{\beta_{2}}{n^{p-1}}W^p\right)\right]}\right)^2\Bigg\}. \nonumber \end{align} We use Proposition~\ref{E} and Proposition~\ref{laplace} to estimate each of the terms in~\eqref{2nd_deriv}. We first consider the case on the transition curve excluding the critical point. 
By Theorem~\ref{trans_curve}, there are two global maximizers $x_{1}^{\ast}< x_{2}^{\ast}$ of $\ell$. Let us write $\ell(x_{1}^{\ast})=\ell(x_{2}^{\ast})=\ell(x^{\ast})$. By Proposition \ref{laplace} and Proposition \ref{E}, for any $r<1$, we have \begin{align}\label{Moments} &{\mathbb E}\left[W^{k}\exp\left(\beta_{1} W + \frac{\beta_{2}}{n^{p-1}}W^p\right)\right] \\ &=\left[1+O(n^{\frac{1}{2}-r})\right] \frac{n^{k}2^{-n}\sqrt{n}}{\sqrt{2\pi}}\int_{0}^{1}\sqrt{\frac{x^{2k}}{x(1-x)}}e^{n\ell(x)}dx \nonumber \\ &=\left[1+O(n^{\frac{1}{2}-r})\right] \frac{n^{k}2^{-n}\sqrt{n}}{\sqrt{2\pi}}\frac{e^{n\ell(x^{\ast})}}{\sqrt{n}} \left[\sqrt{\frac{2\pi(x_{1}^{\ast})^{2k}}{x_{1}^{\ast}(1-x_{1}^{\ast})|\ell''(x_{1}^{\ast})|}} +\sqrt{\frac{2\pi(x_{2}^{\ast})^{2k}}{x_{2}^{\ast}(1-x_{2}^{\ast})|\ell''(x_{2}^{\ast})|}}+O(n^{-1})\right] \nonumber \\ &= n^{k}2^{-n}e^{n\ell(x^{\ast})} \left[\frac{(x_{1}^{\ast})^{k}} {\sqrt{x_{1}^{\ast}(1-x_{1}^{\ast})|\ell''(x_{1}^{\ast})|}} +\frac{(x_{2}^{\ast})^{k}} {\sqrt{x_{2}^{\ast}(1-x_{2}^{\ast})|\ell''(x_{2}^{\ast})|}}+O(n^{\frac{1}{2}-r})\right].
\nonumber \end{align} Hence, \begin{align} &\frac{\partial^{2}}{\partial\beta_{1}^{2}}\psi_{n}(\beta_{1},\beta_{2}) \\ &=n^{-1}n^{2}\frac{\frac{(x_{1}^{\ast})^{2}} {\sqrt{x_{1}^{\ast}(1-x_{1}^{\ast})|\ell''(x_{1}^{\ast})|}} +\frac{(x_{2}^{\ast})^{2}} {\sqrt{x_{2}^{\ast}(1-x_{2}^{\ast})|\ell''(x_{2}^{\ast})|}}} {\frac{1} {\sqrt{x_{1}^{\ast}(1-x_{1}^{\ast})|\ell''(x_{1}^{\ast})|}} +\frac{1} {\sqrt{x_{2}^{\ast}(1-x_{2}^{\ast})|\ell''(x_{2}^{\ast})|}}} \nonumber \\ &\qquad\qquad\qquad\qquad -n^{-1}n^{2} \frac{\left(\frac{x_{1}^{\ast}} {\sqrt{x_{1}^{\ast}(1-x_{1}^{\ast})|\ell''(x_{1}^{\ast})|}} +\frac{x_{2}^{\ast}} {\sqrt{x_{2}^{\ast}(1-x_{2}^{\ast})|\ell''(x_{2}^{\ast})|}}\right)^{2}} {\left(\frac{1} {\sqrt{x_{1}^{\ast}(1-x_{1}^{\ast})|\ell''(x_{1}^{\ast})|}} +\frac{1} {\sqrt{x_{2}^{\ast}(1-x_{2}^{\ast})|\ell''(x_{2}^{\ast})|}}\right)^{2}}+O(n^{\frac{3}{2}-r}) \nonumber \\ &= n \frac{\frac{(x_{1}^{\ast}-x_{2}^{\ast})^{2}}{\sqrt{x_{1}^{\ast}(1-x_{1}^{\ast})|\ell''(x_{1}^{\ast})|} \sqrt{x_{2}^{\ast}(1-x_{2}^{\ast})|\ell''(x_{2}^{\ast})|}}} {\left(\frac{1} {\sqrt{x_{1}^{\ast}(1-x_{1}^{\ast})|\ell''(x_{1}^{\ast})|}} +\frac{1} {\sqrt{x_{2}^{\ast}(1-x_{2}^{\ast})|\ell''(x_{2}^{\ast})|}}\right)^{2}} +O(n^{\frac{3}{2}-r}) \nonumber \\ &= n \frac{(x_{1}^{\ast}-x_{2}^{\ast})^{2}\sqrt{x_{1}^{\ast}(1-x_{1}^{\ast})|\ell''(x_{1}^{\ast})|} \sqrt{x_{2}^{\ast}(1-x_{2}^{\ast})|\ell''(x_{2}^{\ast})|}} {\left(\sqrt{x_{1}^{\ast}(1-x_{1}^{\ast})|\ell''(x_{1}^{\ast})|} +\sqrt{x_{2}^{\ast}(1-x_{2}^{\ast})|\ell''(x_{2}^{\ast})|}\right)^{2}} +O(n^{\frac{3}{2}-r}). \nonumber \end{align} Next consider the case at the critical point. 
By Proposition \ref{laplace} and Proposition \ref{E}, for any $r<1$, \begin{align} &{\mathbb E}\left[W^{k}\exp\left(\beta_{1} W + \frac{\beta_{2}}{n^{p-1}}W^p\right)\right] \\ &=\left[1+O(n^{\frac{1}{4}-r})\right] \frac{n^{k}2^{-n}\sqrt{n}}{\sqrt{2\pi}}\int_{0}^{1}\sqrt{\frac{x^{2k}}{x(1-x)}}e^{n\ell(x)}dx \nonumber \\ &= \frac{n^{k}2^{-n}\sqrt{n}}{\sqrt{2\pi}}e^{n\ell(x^{\ast})}\left[n^{-1/4}d_{0}^{(k)}\gamma_1 + n^{-3/4}\Theta^{(k)} + O(n^{-r})\right], \nonumber \end{align} where \begin{equation*} \Theta^{(k)}:=d_{2}^{(k)}\gamma_3 + d_{1}^{(k)} b_5\gamma_7 + d_{0}^{(k)} b_6 \gamma_7 + \frac{1}{2}d_{0}^{(k)} b_5^2 \gamma_{11}, \qquad\qquad k=0,1,2. \end{equation*} Then \begin{equation*} d_{0}^{(0)}=\frac{1}{\sqrt{x^{\ast}(1-x^{\ast})}}, \qquad d_{0}^{(1)}=\frac{x^{\ast}}{\sqrt{x^{\ast}(1-x^{\ast})}}, \qquad d_{0}^{(2)}=\frac{(x^{\ast})^{2}}{\sqrt{x^{\ast}(1-x^{\ast})}}, \end{equation*} \begin{equation*} d_{1}^{(0)}=\frac{x^{\ast}-\frac{1}{2}}{(x^{\ast}(1-x^{\ast}))^{3/2}}, \qquad d_{1}^{(1)}=\frac{\frac{x^{\ast}}{2}}{(x^{\ast}(1-x^{\ast}))^{3/2}}, \qquad d_{1}^{(2)}=\frac{\frac{3}{2}(x^{\ast})^{2}-(x^{\ast})^{3}}{(x^{\ast}(1-x^{\ast}))^{3/2}}, \end{equation*} \begin{equation*} d_{2}^{(0)}=\frac{2(x^{\ast})^{2}-2x^{\ast}+\frac{3}{4}}{2(x^{\ast}(1-x^{\ast}))^{5/2}}, \quad d_{2}^{(1)}=\frac{(x^{\ast})^{2}-\frac{x^{\ast}}{4}}{2(x^{\ast}(1-x^{\ast}))^{5/2}}, \quad d_{2}^{(2)}=\frac{\frac{3}{4}(x^{\ast})^{2}}{2(x^{\ast}(1-x^{\ast}))^{5/2}}. \end{equation*} It is easy to observe that $d_{0}^{(2)}d_{0}^{(0)}=(d_{0}^{(1)})^{2}$. By differentiating this identity, we get $d_{1}^{(2)}d_{0}^{(0)}+d_{0}^{(2)}d_{1}^{(0)}=2d_{1}^{(1)}d_{0}^{(1)}$. 
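As a standalone numerical sanity check (an aside, not part of the proof): taking $d_m^{(k)}$ to be the $m$-th Taylor coefficient of $f(x)=x^k/\sqrt{x(1-x)}$ at the maximizer, the closed forms for $d_0^{(k)}$ and $d_1^{(k)}$ above, together with the two product identities, can be verified by finite differences at a sample point:

```python
import math

def f(x, k):
    # f(x) = x^k / sqrt(x(1-x)), the weight function in the Laplace expansion
    return x ** k / math.sqrt(x * (1.0 - x))

c = 0.75  # sample point, e.g. x* = (p-1)/p for p = 4

def d0(k):
    return c ** k / math.sqrt(c * (1 - c))

d1 = {0: (c - 0.5) / (c * (1 - c)) ** 1.5,
      1: (c / 2) / (c * (1 - c)) ** 1.5,
      2: (1.5 * c ** 2 - c ** 3) / (c * (1 - c)) ** 1.5}

h = 1e-4
for k in (0, 1, 2):
    # d_0^{(k)} = f(c); d_1^{(k)} = f'(c), estimated by a central difference
    assert abs(f(c, k) - d0(k)) < 1e-12
    fd1 = (f(c + h, k) - f(c - h, k)) / (2 * h)
    assert abs(fd1 - d1[k]) < 1e-5

# the product identities used to cancel lower-order terms in the proof
assert abs(d0(2) * d0(0) - d0(1) ** 2) < 1e-12
assert abs(d1[2] * d0(0) + d0(2) * d1[0] - 2 * d1[1] * d0(1)) < 1e-9
```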
Therefore, by \eqref{2nd_deriv} and \eqref{Moments}, \begin{align} &\frac{\partial^{2}}{\partial\beta_{1}^{2}}\psi_{n}(\beta_{1},\beta_{2}) \\ &=n^{-1}n^{2}\frac{n^{-\frac{1}{4}}d_{0}^{(2)}\gamma_{1}+n^{-\frac{3}{4}}\Theta^{(2)}} {n^{-\frac{1}{4}}d_{0}^{(0)}\gamma_{1}+n^{-\frac{3}{4}}\Theta^{(0)}} \nonumber \\ &\qquad\qquad\qquad\qquad -n^{-1}n^{2}\frac{(n^{-\frac{1}{4}}d_{0}^{(1)}\gamma_{1}+n^{-\frac{3}{4}}\Theta^{(1)})^{2}} {(n^{-\frac{1}{4}}d_{0}^{(0)}\gamma_{1}+n^{-\frac{3}{4}}\Theta^{(0)})^{2}} +O(n^{\frac{5}{4}-r}) \nonumber \\ &=n\frac{n^{-1}\gamma_{1}[d_{0}^{(2)}\Theta^{(0)}+d_{0}^{(0)}\Theta^{(2)} -2d_{0}^{(1)}\Theta^{(1)}]+O(n^{-\frac{3}{2}})} {n^{-\frac{1}{2}}(d_{0}^{(0)})^{2}\gamma_{1}^{2}}+O(n^{\frac{5}{4}-r}) \nonumber \\ &=\frac{n^{\frac{1}{2}}}{(d_{0}^{(0)})^{2}\gamma_{1}} \left[\gamma_{3}\left(d_{0}^{(2)}d_{2}^{(0)}+d_{0}^{(0)}d_{2}^{(2)} -2d_{0}^{(1)}d_{2}^{(1)}\right)\right] \nonumber \\ &\qquad\qquad +\frac{n^{\frac{1}{2}}}{(d_{0}^{(0)})^{2}\gamma_{1}} \left[b_{5}\gamma_{7}\left(d_{0}^{(2)}d_{1}^{(0)}+d_{0}^{(0)}d_{1}^{(2)} -2d_{0}^{(1)}d_{1}^{(1)}\right)\right] +O(n^{\frac{5}{4}-r}) \nonumber \\ &=\frac{n^{\frac{1}{2}}\gamma_{3}}{(d_{0}^{(0)})^{2}\gamma_{1}} \left(d_{0}^{(2)}d_{2}^{(0)}+d_{0}^{(0)}d_{2}^{(2)} -2d_{0}^{(1)}d_{2}^{(1)}\right) +O(n^{\frac{5}{4}-r}) \nonumber \\ &=n^{\frac{1}{2}}\frac{\gamma_{3}}{\gamma_{1}} +O(n^{\frac{5}{4}-r}) \nonumber \\ &=n^{\frac{1}{2}}\frac{\Gamma(\frac{3}{4})}{\Gamma(\frac{1}{4})} \frac{1}{\sqrt{\frac{|\ell^{(4)}(x^{\ast})|}{4!}}} +O(n^{\frac{5}{4}-r}) \nonumber \\ &=n^{\frac{1}{2}}\frac{\Gamma(\frac{3}{4})}{\Gamma(\frac{1}{4})} \frac{2\sqrt{6}(p-1)}{p^{5/2}} +O(n^{\frac{5}{4}-r}), \nonumber \end{align} where we used Proposition \ref{order} in the last line. \end{proof} \begin{proof}[Proof of Theorem~\ref{starvariance}] We prove only the last two displays in Theorem~\ref{starvariance}, since the first display follows immediately from Theorem~\ref{free_energy} and results in~\cite{Radin}.
From the second line of~\eqref{long}, we have \begin{align} \frac{\partial^2}{\partial\beta_{2}^{2}}\psi_n(\beta_{1},\beta_{2}) &=n^{-1}\Bigg\{\frac{{\mathbb E}\left[\frac{W^{2p}}{n^{2(p-1)}}\exp\left(\beta_{1} W + \frac{\beta_{2}}{n^{p-1}}W^p\right)\right]} {{\mathbb E}\left[\exp\left(\beta_{1} W + \frac{\beta_{2}}{n^{p-1}}W^p\right)\right]} \\ &\qquad\qquad\qquad - \left(\frac{{\mathbb E}\left[\frac{W^{p}}{n^{p-1}}\exp\left(\beta_{1} W + \frac{\beta_{2}}{n^{p-1}}W^p\right)\right]} {{\mathbb E}\left[\exp\left(\beta_{1} W + \frac{\beta_{2}}{n^{p-1}}W^p\right)\right]}\right)^2\Bigg\}. \nonumber \end{align} Consider first the case on the phase transition curve excluding the critical point. Then, similar to the proof of Theorem \ref{MainThm}, for any $r<1$, \begin{align*} \frac{\partial^{2}}{\partial\beta_{2}^{2}}\psi_{n}(\beta_{1},\beta_{2}) &=n\frac{((x_{1}^{\ast})^{p}-(x_{2}^{\ast})^{p})^{2}\sqrt{x_{1}^{\ast}(1-x_{1}^{\ast})|\ell''(x_{1}^{\ast})|} \sqrt{x_{2}^{\ast}(1-x_{2}^{\ast})|\ell''(x_{2}^{\ast})|}} {\left(\sqrt{x_{1}^{\ast}(1-x_{1}^{\ast})|\ell''(x_{1}^{\ast})|} +\sqrt{x_{2}^{\ast}(1-x_{2}^{\ast})|\ell''(x_{2}^{\ast})|}\right)^{2}} \\ &\qquad\qquad\qquad\qquad\qquad +O(n^{\frac{3}{2}-r}). \end{align*} Now consider the case at the critical point. We have \begin{align*} &d_{0}^{(p)}=\frac{(x^{\ast})^{p}}{\sqrt{x^{\ast}(1-x^{\ast})}}, \\ &d_{1}^{(p)}=\frac{(p-\frac{1}{2})(x^{\ast})^{p}-(p-1)(x^{\ast})^{p+1}}{(x^{\ast}(1-x^{\ast}))^{3/2}}, \\ &d_{2}^{(p)}=\frac{(p^{2}-2p+\frac{3}{4})(x^{\ast})^{p} -(2p^{2}-5p+2)(x^{\ast})^{p+1} +(p^{2}-3p+2)(x^{\ast})^{p+2}}{2(x^{\ast}(1-x^{\ast}))^{5/2}}. \end{align*} It is easy to observe that $d_{0}^{(2p)}d_{0}^{(0)}=(d_{0}^{(p)})^{2}$. By differentiating this identity, we get $d_{1}^{(2p)}d_{0}^{(0)}+d_{0}^{(2p)}d_{1}^{(0)}=2d_{1}^{(p)}d_{0}^{(p)}$. 
Similar to the proof of Theorem \ref{MainThm}, for any $r<1$, \begin{align} &\frac{\partial^{2}}{\partial\beta_{2}^{2}}\psi_{n}(\beta_{1},\beta_{2}) \nonumber \\ &=n\frac{(n^{-\frac{1}{4}}d_{0}^{(2p)}\gamma_{1}+n^{-\frac{3}{4}}\Theta^{(2p)}) (n^{-\frac{1}{4}}d_{0}^{(0)}\gamma_{1}+n^{-\frac{3}{4}}\Theta^{(0)}) -(n^{-\frac{1}{4}}d_{0}^{(p)}\gamma_{1}+n^{-\frac{3}{4}}\Theta^{(p)})^{2}} {(n^{-\frac{1}{4}}d_{0}^{(0)}\gamma_{1}+n^{-\frac{3}{4}}\Theta^{(0)})^{2}} \nonumber \\ &\qquad\qquad\qquad\qquad\qquad +O(n^{\frac{5}{4}-r}) \nonumber \\ &=\frac{n^{\frac{1}{2}}\gamma_{3}}{(d_{0}^{(0)})^{2}\gamma_{1}} \left(d_{0}^{(2p)}d_{2}^{(0)}+d_{0}^{(0)}d_{2}^{(2p)} -2d_{0}^{(p)}d_{2}^{(p)}\right) +O(n^{\frac{5}{4}-r}) \nonumber \\ &=p^{2}(x^{\ast})^{2p-2}\frac{\gamma_{3}}{\gamma_{1}}n^{1/2} +O(n^{\frac{5}{4}-r}) \nonumber \\ &=n^{\frac{1}{2}}p^{2}\left(\frac{p-1}{p}\right)^{2p-2} \frac{\Gamma(\frac{3}{4})}{\Gamma(\frac{1}{4})} \frac{2\sqrt{6}(p-1)}{p^{5/2}} +O(n^{\frac{5}{4}-r}). \nonumber \end{align} \end{proof} \begin{proof}[Proof of Theorem~\ref{covariance}] Again we prove only the last two displays in the theorem. From the second line of~\eqref{long}, we have \begin{align} &\frac{\partial^2}{\partial\beta_{1}\partial\beta_{2}}\psi_n(\beta_{1},\beta_{2}) \\ &=n^{-1}\frac{{\mathbb E}\left[W\frac{W^{p}}{n^{(p-1)}}\exp\left(\beta_{1} W + \frac{\beta_{2}}{n^{p-1}}W^p\right)\right]} {{\mathbb E}\left[\exp\left(\beta_{1} W + \frac{\beta_{2}}{n^{p-1}}W^p\right)\right]} \nonumber \\ &\qquad -n^{-1}\frac{{{\mathbb E}\left[W\exp\left(\beta_{1} W + \frac{\beta_{2}}{n^{p-1}}W^p\right)\right]} {\mathbb E}\left[\frac{W^{p}}{n^{p-1}} \exp\left(\beta_{1} W + \frac{\beta_{2}}{n^{p-1}}W^p\right)\right]} {\left({\mathbb E}\left[\exp\left(\beta_{1} W + \frac{\beta_{2}}{n^{p-1}}W^p\right)\right]\right)^{2}}.
\nonumber \end{align} Similar to the proof of Theorem \ref{MainThm}, on the phase transition curve excluding the critical point, for any $r<1$, \begin{align*} &\frac{\partial^2 }{\partial \beta_1\partial\beta_2}\psi_n(\beta_1,\beta_2) \\ &= n\frac{((x_{1}^{\ast})^{p}-(x_{2}^{\ast})^{p})(x_{1}^{\ast}-x_{2}^{\ast}) \sqrt{x_{1}^{\ast}(1-x_{1}^{\ast})|\ell''(x_{1}^{\ast})|} \sqrt{x_{2}^{\ast}(1-x_{2}^{\ast})|\ell''(x_{2}^{\ast})|}} {\left(\sqrt{x_{1}^{\ast}(1-x_{1}^{\ast})|\ell''(x_{1}^{\ast})|} +\sqrt{x_{2}^{\ast}(1-x_{2}^{\ast})|\ell''(x_{2}^{\ast})|}\right)^{2}} +O(n^{\frac{3}{2}-r}). \end{align*} Consider now the case at the critical point. It is easy to observe that $d_{0}^{(p+1)}d_{0}^{(0)}=(d_{0}^{(1)})(d_{0}^{(p)})$. By differentiating this identity, we get $d_{1}^{(p+1)}d_{0}^{(0)}+d_{0}^{(p+1)}d_{1}^{(0)}=d_{1}^{(1)}d_{0}^{(p)} +d_{0}^{(1)}d_{1}^{(p)}$. Therefore, similar to the proof of Theorem \ref{MainThm}, we get for any $r<1$, \begin{align} &\frac{\partial^{2}}{\partial\beta_{1}\partial\beta_{2}}\psi_{n}(\beta_{1},\beta_{2}) \nonumber \\ &=n\frac{(n^{-\frac{1}{4}}d_{0}^{(p+1)}\gamma_{1}+n^{-\frac{3}{4}}\Theta^{(p+1)}) (n^{-\frac{1}{4}}d_{0}^{(0)}\gamma_{1}+n^{-\frac{3}{4}}\Theta^{(0)})} {(n^{-\frac{1}{4}}d_{0}^{(0)}\gamma_{1}+n^{-\frac{3}{4}}\Theta^{(0)})^{2}} \\ &\qquad\qquad -n\frac{(n^{-\frac{1}{4}}d_{0}^{(1)}\gamma_{1}+n^{-\frac{3}{4}}\Theta^{(1)}) (n^{-\frac{1}{4}}d_{0}^{(p)}\gamma_{1}+n^{-\frac{3}{4}}\Theta^{(p)})} {(n^{-\frac{1}{4}}d_{0}^{(0)}\gamma_{1}+n^{-\frac{3}{4}}\Theta^{(0)})^{2}} +O(n^{\frac{5}{4}-r}) \nonumber \\ &=n\frac{n^{-1}\gamma_{1}[d_{0}^{(p+1)}\Theta^{(0)}+d_{0}^{(0)}\Theta^{(p+1)} -d_{0}^{(1)}\Theta^{(p)}-d_{0}^{(p)}\Theta^{(1)}]+O(n^{-\frac{3}{2}})} {n^{-\frac{1}{2}}(d_{0}^{(0)})^{2}\gamma_{1}^{2}+O(n^{-1})} +O(n^{\frac{5}{4}-r}) \nonumber \\ &=\frac{n^{\frac{1}{2}}\gamma_{3}}{(d_{0}^{(0)})^{2}\gamma_{1}} \left(d_{0}^{(p+1)}d_{2}^{(0)}+d_{0}^{(0)}d_{2}^{(p+1)} -d_{0}^{(1)}d_{2}^{(p)}-d_{0}^{(p)}d_{2}^{(1)}\right) +O(n^{\frac{5}{4}-r}) \nonumber \\
&=p(x^{\ast})^{p-1}\frac{\gamma_{3}}{\gamma_{1}}n^{1/2}+O(n^{\frac{5}{4}-r}) \nonumber \\ &=p\left(\frac{p-1}{p}\right)^{p-1}\frac{\Gamma(\frac{3}{4})}{\Gamma(\frac{1}{4})} \frac{2\sqrt{6}(p-1)}{p^{5/2}}n^{1/2}+O(n^{\frac{5}{4}-r}). \nonumber \end{align} \end{proof} \begin{proof}[Proof of Theorem~\ref{marginaldensities}] Observe first that $\mathbb{P}_{n}(X_{12}=1)=\mathbb{E}_{n}[X_{12}] =\frac{1}{n}\mathbb{E}_{n}[\sum_{j=1}^{n}X_{1j}]$. Thus, off the transition curve we have \begin{align*} \lim_{n\rightarrow\infty}\mathbb{P}_{n}(X_{12}=1) &=\lim_{n\rightarrow\infty}\frac{1}{n}\mathbb{E}_{n}\left[\sum_{j=1}^{n}X_{1j}\right] \\ &=\lim_{n\rightarrow\infty}\frac{1}{n}\frac{\mathbb{E}\left[W\exp\left(\beta_{1} W + \frac{\beta_{2}}{n^{p-1}}W^p\right)\right]} {\mathbb{E}\left[\exp\left(\beta_{1} W + \frac{\beta_{2}}{n^{p-1}}W^p\right)\right]} \\ &=\lim_{n\rightarrow\infty} \frac{\left(1+O\left(n^{1/2-4q}\right)\right) 2^{-n}\sqrt{\frac{n}{2\pi}}\int_{0}^{1} \sqrt{\frac{x^{2}}{x(1-x)}}e^{n\ell(x)}\,dx}{\left(1+O\left(n^{1/2-4q}\right)\right) 2^{-n}\sqrt{\frac{n}{2\pi}}\int_{0}^{1} \sqrt{\frac{1}{x(1-x)}}e^{n\ell(x)}\,dx} \\ &=\lim_{n\rightarrow\infty}\frac{\sqrt{\frac{2\pi(x^{\ast})^{2}} {x^{\ast}(1-x^{\ast})|\ell''(x^{\ast})|}}n^{-\frac{1}{2}}e^{n\ell(x^{\ast})}} {\sqrt{\frac{2\pi}{x^{\ast}(1-x^{\ast})|\ell''(x^{\ast})|}}n^{-\frac{1}{2}}e^{n\ell(x^{\ast})}} \\ &=x^{\ast}. 
\end{align*} Similarly, at the critical point, \begin{align*} \lim_{n\rightarrow\infty}\mathbb{P}_{n}(X_{12}=1) &=\lim_{n\rightarrow\infty}\frac{1}{n}\frac{\mathbb{E}\left[W\exp\left(\beta_{1} W + \frac{\beta_{2}}{n^{p-1}}W^p\right)\right]} {\mathbb{E}\left[\exp\left(\beta_{1} W + \frac{\beta_{2}}{n^{p-1}}W^p\right)\right]} \\ &=\lim_{n\rightarrow\infty} \frac{\left(1+O\left(n^{1/4-4q}\right)\right) 2^{-n}\sqrt{\frac{n}{2\pi}}\int_{0}^{1} \sqrt{\frac{x^{2}}{x(1-x)}}e^{n\ell(x)}\,dx}{\left(1+O\left(n^{1/4-4q}\right)\right) 2^{-n}\sqrt{\frac{n}{2\pi}}\int_{0}^{1} \sqrt{\frac{1}{x(1-x)}}e^{n\ell(x)}\,dx} \\ &=\lim_{n\rightarrow\infty}\frac{e^{n\ell(x^{\ast})}n^{-\frac{1}{4}}d_{0}^{(1)}\gamma_{1}} {e^{n\ell(x^{\ast})}n^{-\frac{1}{4}}d_{0}^{(0)}\gamma_{1}} \\ &=x^{\ast}. \end{align*} Finally, on the phase transition curve except at the critical point, \begin{align*} \lim_{n\rightarrow\infty}\mathbb{P}_{n}(X_{12}=1) &=\lim_{n\rightarrow\infty}\frac{1}{n}\frac{\mathbb{E}\left[W\exp\left(\beta_{1} W + \frac{\beta_{2}}{n^{p-1}}W^p\right)\right]} {\mathbb{E}\left[\exp\left(\beta_{1} W + \frac{\beta_{2}}{n^{p-1}}W^p\right)\right]} \\ &=\lim_{n\rightarrow\infty}\frac{\left(\sqrt{\frac{2\pi(x_{1}^{\ast})^{2}} {x_{1}^{\ast}(1-x_{1}^{\ast})|\ell''(x_{1}^{\ast})|}} +\sqrt{\frac{2\pi(x_{2}^{\ast})^{2}} {x_{2}^{\ast}(1-x_{2}^{\ast})|\ell''(x_{2}^{\ast})|}}\right)n^{-\frac{1}{2}}e^{n\ell(x^{\ast})}} {\left(\sqrt{\frac{2\pi}{x_{1}^{\ast}(1-x_{1}^{\ast})|\ell''(x_{1}^{\ast})|}} +\sqrt{\frac{2\pi}{x_{2}^{\ast}(1-x_{2}^{\ast})|\ell''(x_{2}^{\ast})|}}\right)n^{-\frac{1}{2}}e^{n\ell(x^{\ast})}} \\ &=\frac{x_{1}^{\ast}\sqrt{\frac{1} {x_{1}^{\ast}(1-x_{1}^{\ast})|\ell''(x_{1}^{\ast})|}} +x_{2}^{\ast}\sqrt{\frac{1} {x_{2}^{\ast}(1-x_{2}^{\ast})|\ell''(x_{2}^{\ast})|}}} {\sqrt{\frac{1}{x_{1}^{\ast}(1-x_{1}^{\ast})|\ell''(x_{1}^{\ast})|}} +\sqrt{\frac{1}{x_{2}^{\ast}(1-x_{2}^{\ast})|\ell''(x_{2}^{\ast})|}}}. 
\end{align*} \end{proof} \section*{Acknowledgements} The authors are very grateful to Mei Yin for helpful discussions.
Q: How to find the maximum contiguous SUM in an array (which contains positive and negative numbers)? I want to write a function ContigSum(i,j) that calculates the sum of the contiguous elements a[i] through a[j], where i<=j and a[] contains positive and negative numbers. Could you please tell me a time-efficient solution to find the maximum contiguous sum in the array?

A: Well explained in the wikipedia entry about the subject. I find the Python code (i.e., executable pseudocode) they give for Kadane's Algorithm to be a little gem:

    def max_subarray(A):
        max_so_far = max_ending_here = 0
        for x in A:
            max_ending_here = max(0, max_ending_here + x)
            max_so_far = max(max_so_far, max_ending_here)
        return max_so_far

A: This is discussed in Column 7 of the 1st Edition or Column 8 of the 2nd Edition of 'Programming Pearls' by Jon Bentley.

A: Alex, you have a very elegant algorithm but it needs correction for an array that contains a single element that is negative. Of course, in the original algorithm of Kadane's, one can get the subarray start and end indexes, which is useful for knowing the "path". Here's an inelegant but I think correct Python function:

    def max_subarray(A):
        (maxSum, maxStartIndex, maxEndIndex) = (float("-inf"), 0, 0)
        (currentMaxSum, currentStartIndex, currentEndIndex) = (0, 0, 0)
        for item in A:
            currentMaxSum = currentMaxSum + item
            if currentMaxSum > maxSum:
                (maxSum, maxStartIndex, maxEndIndex) = (currentMaxSum, currentStartIndex, currentEndIndex)
            if currentMaxSum < 0:
                currentMaxSum = 0
                currentStartIndex = currentEndIndex + 1  # continue here.
            currentEndIndex = currentEndIndex + 1
        return (maxSum, maxStartIndex, maxEndIndex)

A:

    static void MaxContiguousSum(int[] x, int lb, int[] result) {
        int start, end, sum, testSum;
        start = lb;
        end = lb;
        /* Empty vector has 0 sum */
        sum = 0;
        testSum = 0;
        for (int i = lb; i < x.length; i++) {
            if (sum + x[i] < 0) {
                /* Net contribution by current term is negative. So, contiguous
                   sum lies in [start, i-1] or [i+1, array upper bound] */
                MaxContiguousSum(x, i + 1, result);
                if (result[0] < sum) {
                    result[0] = sum;
                    result[1] = start;
                    result[2] = end;
                }
                return;
            } else {
                testSum += x[i];
                if (testSum > 0) {
                    /* Move the end marker since incrementing range is beneficial. */
                    end = i;
                    /* update the sum */
                    sum += testSum;
                    /* reset the testSum */
                    testSum = 0;
                }
            }
        }
        /* Update the results */
        result[0] = sum;
        result[1] = start;
        result[2] = end;
        return;
    }

A: This is the correct Java Code which will handle scenarios including all negative numbers.

    public static long[] leftToISumMaximize(int N, long[] D) {
        long[] result = new long[N];
        result[0] = D[0];
        long currMax = D[0];
        for (int i = 1; i < N; i++) {
            currMax = Math.max(D[i], currMax + D[i]);
            result[i] = Math.max(result[i - 1], currMax);
        }
        return result;
    }

A: Here is the C++ Code I just implemented and tested on Visual Studio 2012.

    #include <cstdio>
    #include <climits>

    int maxSum(int *A, int lo, int hi) {
        int left = lo, right = lo, sum = INT_MIN, currentMaxSum = 0, maxLeft = lo, maxRight = lo;
        for (int i = lo; i < hi; i++) {
            currentMaxSum += A[i];
            if (currentMaxSum > sum) {
                sum = currentMaxSum;
                right = i;
                maxLeft = left;
                maxRight = right;
            }
            if (currentMaxSum < 0) {
                left = i + 1;
                right = left;
                currentMaxSum = 0;
            }
        }
        printf("Maximum sum contiguous subarray :");
        for (int i = maxLeft; i <= maxRight; i++)
            printf(" %d", A[i]);
        printf("\n");
        return sum;
    }

Below is the main() code to call the above function.

    int main() {
        int A[] = {3, -4, -3, 2, 6};
        int N = sizeof(A) / sizeof(int);
        printf("Maximum sum : %d\n", maxSum(A, 0, N));
        return 0;
    }

A: Here is my solution in Ruby. Return the maximum contiguous subsum in O(n) time and O(1) memory. I also wrote some unit tests just in case ;)

    def largest_contiguous_subsum(array)
      max_sum = 0
      current_sum = 0
      array.each do |num|
        current_sum += num
        max_sum = current_sum if current_sum >= max_sum
        current_sum = 0 if current_sum < 0
      end
      return max_sum
    end
Paal Kaasen (born 14 November 1883 in Oslo, died 11 July 1963 in Oslo) was a Norwegian sailor and Olympian, winner of a gold medal in the sailing regattas at the 1920 Summer Olympics in Antwerp. At the 1920 Summer Olympics he won gold in the 6 Metre sailing class (1919 rating). The crew of the yacht Jo also included Andreas Brecke and Ingolf Rød.
\section{Introduction} Discovering semantic relations between text entities is a key task in natural language understanding. It is a critical component which enables the success of knowledge representation systems such as TextRunner \cite{yates2007textrunner}, ReVerb \cite{fader2011identifying}, and NELL \cite{carlson2010toward}, which in turn are useful for a variety of NLP applications, including temporal scoping \cite{talukdar2012coupled}, semantic parsing \cite{krishnamurthy2012weakly} and entity linking \cite{lin2012entity}. In this work, we examine \emph{coordinate} relations between words. According to the WordNet glossary, \emph{X} and \emph{Y} are defined as \emph{coordinate terms} if they share a common hypernym \cite{miller1995wordnet, christiane1998wordnet}. This is a symmetric relation that indicates a semantic similarity, meaning that \emph{X} and \emph{Y} are ``a type of the same thing'', since they share at least one common ancestor in some hypernym taxonomy (to paraphrase the definition of Snow et al. \cite{snow2006semantic}). Semantic similarity relations are normally discovered by comparing corpus statistics associated with the entities: for instance, two entities $X$ and $Y$ that usually appear in similar contexts are likely to be semantically similar \cite{pereira1993distributional,pantel2003clustering,curran2004distributional}. However, in technical domains, we have access to additional information about the real-world objects that are named by the entities: e.g., we might have biographical data about a person entity, or a 3D structural encoding of a protein entity. In such situations, it seems plausible that a ``grounded'' NLP method, in which corpus statistics are coupled with data on the real-world referents of $X$ and $Y$, might lead to improved methods for relation discovery. Here we explore the idea of grounded relation discovery in the domain of software.
In particular, we consider the detection of coordinate-term relationships between entities that (potentially) refer to Java classes. We use a software domain text corpus derived from the Q\&A website StackOverflow (SO), in which users ask and answer questions about software development, and we extract posts which have been labeled by users as \emph{Java} related. From this data, we collected a small set of entity pairs that are labeled as coordinate terms (or not) based on high-precision Hearst patterns and frequency statistics, and we attempt to label these pairs using information available from higher-recall approaches based on distributional similarity. \begin{figure*}[t] \centering \includegraphics[width=0.85\textwidth]{topology_big} \caption{Visualization of predicted coordinate term pairs, where each pair of coordinate classes is connected by an edge. Highly connected components are labeled by edge color, and it can be noted that they contain classes with similar functionality. Some areas containing a functional class group have been magnified for easier readability.} \label{fig:topology} \end{figure*} We describe an entity linking method in order to map a given text entity to an underlying class type implementation from the Java standard libraries. Next, we describe corpus and code based information that we use for the relation discovery task. Corpus based methods include distributional similarity and string matching similarity. Additionally, we use two sources of code based information: (1) we define the \emph{class-context} of a Java class in a given code repository, and are therefore able to calculate a code-based distributional similarity measure for classes, and (2) we consider the hierarchical organization of classes, described by the Java class type and namespace hierarchies. We demonstrate that using our approach, cross-validation accuracy on this dataset is improved from 60.9\% to 88\%. 
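To make the pattern-based labeling concrete, high-precision conjunction-pattern extraction of the kind described above can be sketched as follows (the regex, the capitalized-token heuristic, and the example posts are illustrative assumptions only; the actual pattern set and frequency filtering may differ):

```python
import re

# Illustrative "X and Y" / "X or Y" conjunction pattern over capitalized
# tokens, which in Java text tend to be class names.
CONJ = re.compile(r"\b([A-Z][A-Za-z0-9]*) (?:and|or) ([A-Z][A-Za-z0-9]*)\b")

def candidate_pairs(sentences):
    # collect unordered candidate coordinate-term pairs
    pairs = set()
    for s in sentences:
        for x, y in CONJ.findall(s):
            if x != y:
                pairs.add(tuple(sorted((x, y))))
    return pairs

posts = ["You can use ArrayList and LinkedList here.",
         "Try HashMap or TreeMap for sorted keys."]
print(sorted(candidate_pairs(posts)))
# prints [('ArrayList', 'LinkedList'), ('HashMap', 'TreeMap')]
```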
According to human labeling, our classifier has an F1-score of 86\% over the highest-ranking 1000 predicted pairs. We see this work as a first step towards building a knowledge representation system for the software domain, in which text entities refer to elements from a software code base, for example classes, methods, applications and programming languages. Understanding software entity relations will allow the construction of a domain specific taxonomy and knowledge base, which can enable higher reasoning capabilities in NLP applications for the software domain \cite{weimer2007automatically,wang2009extracting,branavan2010reading,movshovitzattias-wcohen:2013:ACL} and improve a variety of code assisting applications, including code refactoring and token completion \cite{han2009code,jacob2010code,binkley2011improving,schulam:2013:DAPSE13p1}. Figure~\ref{fig:topology} shows a visualization based on coordinate term pairs predicted using the proposed method. Java classes with similar functionality are highly connected in this graph, indicating that our method can be used to construct a code taxonomy. \section{Related Work} \textbf{Semantic Relation Discovery.} Previous work on semantic relation discovery, in particular, coordinate term discovery, has used two main approaches. The first is based on the insight that certain lexical patterns indicate a semantic relationship with high-precision, as initially observed by Hearst \cite{hearst1992automatic}. For example, the conjuction pattern ``X and Y'' indicates that $X$ and $Y$ are coordinate terms. Other pattern-based classifier have been introduced for meronyms \cite{girju2003learning}, synonyms \cite{lin2003identifying}, and general analogy relations \cite{turney2003combining}. The second approach relies on the notion that words that appear in a similar context are likely to be semantically similar. In contrast to pattern based classifiers, context distributional similarity approaches are normally higher in recall. 
\cite{pereira1993distributional,pantel2003clustering,curran2004distributional,snow2004learning}. In this work we attempt to label samples extracted with high-precision Hearst patterns, using information from higher-recall methods. \textbf{Grounded Language Learning.} The aim of grounded language learning methods is to learn a mapping between natural language (words and sentences) and the observed world \cite{siskind1996computational,yu2004integration,gorniak2007situated}, where more recent work includes grounding language to the physical world \cite{krishnamurthy2013jointly}, and grounding of entire discourses \cite{minh2013parsing}. Early work in this field relied on supervised aligned sentence-to-meaning data \cite{zettlemoyer2005learning,ge2005statistical}. However, in later work the supervision constraint has been gradually relaxed \cite{kate2007learning,liang2009learning}. Relative to prior work on grounded language acquisition, we use a very rich and complex representation of entities and their relationships (through software code). However, we consider a very constrained language task, namely coordinate term discovery. \textbf{Statistical Language Models for Software.} In recent work by NLP and software engineering researchers, statistical language models have been adapted for modeling software code. NLP models have been used to enhance a variety of software development tasks such as code and comment token completion \cite{han2009code,jacob2010code,movshovitzattias-wcohen:2013:ACL,schulam:2013:DAPSE13p1}, analysis of code variable names \cite{lawrie2006s,binkley2011improving}, and mining software repositories \cite{gabel2008javert}. This has been complemented by work from the programming language research community for structured prediction of code syntax trees \cite{omar2013structured}. To the best of our knowledge, there is no prior work on discovering semantic relations for software entities. 
\section{Coordinate Term Discovery} In this section we describe a coordinate term classification pipeline, as depicted at a high level in Figure~\ref{fig:classificationPippline}. All the following steps are described in detail in the sections below. Given a software domain text corpus (StackOverflow) and a code repository (Java Standard Libraries), our goal is to predict a coordinate relation for $\langle X,Y \rangle$, where $X$ and $Y$ are nouns which potentially refer to Java classes. We first attempt a baseline approach of labeling the pair $\langle X,Y \rangle$ based on corpus distributional similarity. Since closely related classes often exhibit morphological closeness, we use as a second baseline the string similarity of $X$ and $Y$. Next, we map noun $X$ to an underlying class implementation from the code repository, named $X'$, according to an estimated probability for $p(\textrm{Class }X' | \textrm{Word }X)$, s.t. $X'=\arg\max_C{\hat{p}(C|X)}$ over all candidate classes $C$. $X'$ is then the code referent of $X$. Similarly, we map $Y$ to the class $Y'$. Given a code-based grounding for $X$ and $Y$ we extract information using the class implementations: (1) we define a code based distributional similarity measure, using code-context to encode the usage pattern of a class, and (2) we use the hierarchical organization of classes, described by the type and namespace hierarchies. Finally, we combine all the above information in a single SVM classifier. \begin{figure}[t] \centering \includegraphics[width=0.5\textwidth]{pipeline} \caption{Classification Pipeline for determining whether nouns $X$ and $Y$ are coordinate terms. Each noun is mapped to an underlying class from the code repository with probability $p(\textrm{Class} | \textrm{Word})$. Textual features are extracted from the input words, code-based features are extracted using the mapped classes, and all of these are given to the coordinate term classifier.
} \label{fig:classificationPippline} \end{figure} \subsection{Baseline: Corpus Distributional Similarity.}\label{sec:text-dist-sim} As an initial baseline we calculate the corpus distributional similarity of nouns $\langle X,Y \rangle$, following the assumption that words with similar context are likely to be semantically similar. Our implementation follows Pereira et al. \cite{pereira1993distributional}. We calculate the empirical context distribution for noun $X$ \begin{equation}\label{eq:empirical-context-dist} p_X(c) = f(c,X) / \sum_{c'} f(c',X) \end{equation} where $f(c,X)$ is the frequency of occurrence of noun $X$ in context $c$. We then measure the similarity of nouns $X$ and $Y$ using the \emph{relative entropy} or \emph{Kullback-Leibler divergence} \begin{equation} D(p_X || p_Y) = \sum_z p_X(z)\log\frac{p_X(z)}{p_Y(z)} \end{equation} As this measure is not symmetric, we take the distributional similarity of $X$ and $Y$ to be $D(p_X || p_Y) + D(p_Y || p_X)$. \subsection{Baseline: String Similarity.}\label{sec:string-sim} Due to naming conventions, many related classes exhibit some morphological closeness. For example, classes that provide Input/Output access to the file system will often contain the suffix \texttt{Stream} or \texttt{Buffer}. Likewise, many classes extend the names of their superclasses (e.g., \texttt{JRadioButtonMenuItem} extends the class \texttt{JMenuItem}). More examples can be found in Figure~\ref{fig:topology} and Table~\ref{tbl:pairs-top-predicted}. We therefore include a second baseline which attempts to label the noun pair $\langle X,Y \rangle$ as coordinate terms according to their string-matching similarity. We use the SecondString open source Java toolkit\footnote{http://secondstring.sourceforge.net/}. Each string is tokenized by camel case (such that \emph{ArrayList} is represented as \emph{Array List}). We consider the SoftTFIDF distance of the tokenized strings, as defined by Cohen et al.
\cite{cohen2003comparison}. \subsection{Entity Linking.}\label{sec:text-to-code-map} In order to draw code based information on text entities, we define a mapping function between words and class types. Our goal is to find $p(C|W)$, where $C$ is a specific class implementation and $W$ is a word. This mapping is ambiguous since, for example, users rarely mention the fully qualified class name (e.g., \verb|java.lang.String|) and usually use the class \emph{label}, meaning the name of the class without its package (e.g., \verb|String|). As an example, the terms \verb|java.lang.String| and \verb|java.util.Vector| appear 37 times and once, respectively, in our corpus, versus the terms \verb|String| and \verb|Vector| which appear 35K and 1.6K times. Additionally, class names appear with several variations, including case variations, spelling mistakes, or informal names (e.g., \emph{array} instead of \emph{ArrayList}). Therefore, in order to approximate $p(C,W)$ in \begin{equation} p(C|W) = \frac{p(C,W)}{p(W)} \end{equation} we estimate a word to class-type mapping that is mediated through the class label, $L$, as \begin{equation} \hat{p}(C,W) = p(C,L) \cdot p(L,W) \end{equation} Since $p(C,L) = p(C|L)p(L)$, this can be estimated by the corresponding MLEs \begin{eqnarray} \hat{p}(C,L) &=& \hat{p}(C|L)\cdot \hat{p}(L) \nonumber \\ &=& \frac{f(C)}{\sum_{C' \in L}f(C')}\cdot \frac{f(L)}{\sum_{L'}f(L')} \end{eqnarray} where $f()$ is the frequency function. Note that since $\sum_{C' \in L}f(C') = f(L)$ we get that $\hat{p}(C,L)=\hat{p}(C)$, as the class label is uniquely determined by the qualified class name (the opposite does not hold, since multiple class types may correspond to the same label). Finally, the term $p(L,W)$ is estimated by the symmetric string distance between the two strings, as described in Section~\ref{sec:string-sim}.
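A minimal sketch of this linking scheme follows. The frequency counts are invented, and the similarity function is a crude case-insensitive stand-in for the SoftTFIDF distance of Section~\ref{sec:string-sim}, counting only near-exact label matches:

```python
from difflib import SequenceMatcher

# Invented class-frequency table keyed by fully qualified name;
# the label L is the unqualified class name.
class_freq = {
    "java.lang.String": 3700,
    "java.util.Vector": 160,
    "java.util.List": 800,
}
total = sum(class_freq.values())

def label(qualified):
    return qualified.rsplit(".", 1)[-1]

def string_sim(a, b):
    # Crude stand-in for the symmetric SoftTFIDF distance: only
    # near-exact, case-insensitive label matches count.
    ratio = SequenceMatcher(None, a.lower(), b.lower()).ratio()
    return ratio if ratio > 0.8 else 0.0

def link(word):
    """Map a word to its most probable class: argmax_C p-hat(C) * p(L, word)."""
    scores = {c: (f / total) * string_sim(label(c), word)
              for c, f in class_freq.items()}
    best = max(scores, key=scores.get)
    return best, scores[best]

print(link("string")[0])  # java.lang.String
print(link("vector")[0])  # java.util.Vector
```

The frequency term keeps the mapping from being fooled by rare classes with accidentally similar labels, while the similarity term handles case and spelling variation.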
We consider the linking probability of $\langle X,Y \rangle$ to be $\hat{p}(X'|X)\cdot \hat{p}(Y'|Y)$, where $X'$ is the best-matching class for $X$, s.t. $X'=\arg\max_C{\hat{p}(C|X)}$, and similarly for $Y'$. \subsection{Code Distributional Similarity.}\label{sec:software-dist-sim} Corpus distributional similarity evaluates the occurrence of words in particular semantic contexts. By defining the \emph{class-context} of a Java class, we can then similarly calculate a \emph{code distributional similarity} between classes. Our definition of class context is based on the usage of a class as an argument to methods and on the API which the class provides, and it is detailed in Table~\ref{tbl:code-context}. We observe over 23K unique contexts in our code repository. Based on these definitions we can compute the distributional similarity measure between classes $X'$ and $Y'$ based on their code-context distributions, as previously described for the corpus distributional similarity (Section~\ref{sec:text-dist-sim}, following Pereira et al. \cite{pereira1993distributional}). For the code-based case, we calculate the empirical context distribution of $X'$ (see Equation~\ref{eq:empirical-context-dist}) using $f(c,X')$, the occurrence frequency of class $X'$ in context $c$, where $c$ is one of the ARG-\emph{Method} or API-\emph{Method} contexts (defined in Table~\ref{tbl:code-context}) for methods observed in the code repository. The distributional similarity of $\langle X',Y' \rangle$ is then taken, using the relative entropy, as $D(p_{X'} || p_{Y'}) + D(p_{Y'} || p_{X'})$. \begin{table} [t] \centering \begin{tabular}{ p{3in} } \toprule \textbf{ARG-\emph{Method}:} \verb|Class| is being passed as an argument to \verb|Method|.
We count an occurrence of this context once for the method definition\\ \verb| Method(Class class, ...)|\\ as well as for each method invocation\\ \verb| Method(class, ...)|\\ For example, given the statement\\ \verb| str = toString(i);|\\ where $i$ is an Integer, we would count an occurrence for this class in the context ARG-toString. \tabularnewline \cmidrule{1-1} \textbf{API-\emph{Method}:} \verb|Class| provides the API method \verb|Method|. We count an occurrence of this context once for the method definition, and for every occurrence of the method invocation, e.g. \verb|class.Method(...)|.\\ For example, given the statement\\ \verb| s = map.size();|\\ where $map$ is a HashMap, we would count an occurrence for this class in the context API-size. \tabularnewline \bottomrule \end{tabular} \caption{Definition of two types of code-contexts for a class type, \texttt{Class}, or an instantiation of that type (e.g., \texttt{class}).} \label{tbl:code-context} \end{table} \subsection{Code Hierarchies and Organization.}\label{sec:code-hierarchies} The words $X$ and $Y$ are defined as coordinate terms if they have the same hypernym in a given taxonomy, meaning they have at least one common ancestor in this taxonomy \cite{snow2004learning}. For the purpose of comparing two class types, we therefore define an ancestry relation between them using two taxonomies based on the code namespace and type hierarchies. \textbf{Package Taxonomy:} A package is the standard way for defining namespaces in the Java language. It is a mechanism for organizing sets of classes that normally share common functionality. Packages are organized in a hierarchical structure which can be easily inferred from the class name. For example, the class \verb|java.lang.String| belongs to the \verb|java.lang| package, which belongs to the \verb|java| package.
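The package chain of a class can be read directly off its qualified name; a minimal sketch:

```python
def package_chain(qualified_name):
    """All packages a class belongs to, inferred from its qualified name."""
    parts = qualified_name.split(".")[:-1]   # drop the class label itself
    return [".".join(parts[:i + 1]) for i in range(len(parts))]

print(package_chain("java.lang.String"))  # ['java', 'java.lang']
```

Intersecting the chains of two classes yields their common package ancestors, which feed the package-ancestry features defined next.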
\textbf{Type Taxonomy:} The inheritance structure of classes and interfaces in the Java language defines a type hierarchy, such that class $A$ is an ancestor of class $B$ if $B$ extends or implements $A$. We define type-ancestry and package-ancestry relations between classes $\langle X',Y' \rangle$, based on the above taxonomies. For the type taxonomy, \begin{quote} $A_{type}^n(X',Y')$ = \{\# of common ancestors $X'$ and $Y'$ share within the \emph{n} highest levels of the type taxonomy\} \end{quote} for \emph{n} from 1 to 6. $A_{package}^n$ is defined similarly for the package taxonomy. As an example, \begin{equation*} A_{package}^2(\textrm{ArrayList}, \textrm{Vector}) = 2 \end{equation*} as these classes both belong to the package \verb|java.util|, and therefore their common level-2 ancestors are \verb|java| and \verb|java.util|. Moreover, \begin{equation*} A_{type}^1(\textrm{ArrayList}, \textrm{Vector}) = 5 \end{equation*} since both classes extend the \verb|AbstractList| class and also implement four common interfaces: \verb|List|, \verb|RandomAccess|, \verb|Cloneable|, and \verb|Serializable|. \section{Experimental Settings} \subsection{Data Handling.} We downloaded a dump of the interactions on the StackOverflow website\footnote{http://www.clearbits.net/creators/146-stack-exchange-data-dump} from its launch in 2008 until 2012. We use only the 277K questions labeled with the user-assigned \emph{Java} tag, and their 629K answers. Text from the SO HTML posts was extracted with the Apache Tika toolkit\footnote{http://tika.apache.org/} and then tokenized with the Mallet statistical NLP package \cite{mccallum2002mallet}. In this study, we use only the text portions of the SO posts, and exclude all raw code segments, as indicated by the user-labeled $<$$code$$>$ markup. Next, the text was POS tagged with the Stanford POS tagger \cite{toutanova2003feature} and parsed with the MaltParser \cite{nivre2006maltparser}.
Finally, we extract noun pairs with the conjunction dependencies: \emph{conj} or \emph{inv-conj}, a total of 255,150 pairs, which we use as positive training samples. We use the Java standard libraries code repository as a grounding source for Java classes, as we expect that users will often refer to these classes in the \emph{Java} tagged SO posts. This data includes: 7072 source code files, the implementation of 10562 class and interface types, and 477 packages. The code repository is parsed using the Eclipse JDT compiler tools, which provide APIs for accessing and manipulating Abstract Syntax Trees. \subsection{Classification.}\label{sec:coord-term-class} We follow the classification pipeline described in Figure~\ref{fig:classificationPippline}, using the LibLinear SVM classifier \cite{fan2008liblinear,chang2011libsvm} with the following features: \begin{description} \item[Corpus-Based Features] \hfill \begin{itemize} \item \emph{Corpus distributional similarity} (Corpus Dist. Sim.) - see Section~\ref{sec:text-dist-sim}. \item \emph{String similarity} (String Sim.) - see Section~\ref{sec:string-sim}. \end{itemize} \item[Code-Based Features] \hfill \begin{itemize} \item \emph{Text to code linking probability} (Text-to-code Prob.) - see Section~\ref{sec:text-to-code-map}. \item \emph{Code distributional similarity} (Code Dist. Sim.) - see Section~\ref{sec:software-dist-sim}. \item \emph{Package and type ancestry} ($A_{package}^1$ - $A_{package}^6$ and $A_{type}^1$ - $A_{type}^6$) - see Section~\ref{sec:code-hierarchies}. 
\end{itemize} \end{description} \begin{table} [t] \centering \rowcolors{1}{white}{gray!13} \begin{tabular}{ p{4.6cm} l } \toprule High PMI & Low PMI \tabularnewline \cmidrule{1-1} \cmidrule(l){2-2} \textbf{\small{$\langle$JTextField,JComboBox$\rangle$}} & \small{$\langle$threads,characters$\rangle$} \tabularnewline \textbf{\small{$\langle$yearsPlayed,totalEarned$\rangle$}}& \textbf{\small{$\langle$server,user$\rangle$}} \tabularnewline \textbf{\small{$\langle$PostInsertEventListener,} \newline \small{PostUpdateEventListener$\rangle$}} & \textbf{\small{$\langle$code,design$\rangle$}} \tabularnewline \textbf{\small{$\langle$removeListener,addListener$\rangle$}} & \textbf{\small{$\langle$Java,client$\rangle$}} \tabularnewline \small{$\langle$MinTreeMap,MaxTreeMap$\rangle$} & \small{$\langle$Eclipse,array$\rangle$} \tabularnewline \bottomrule \end{tabular} \caption{Sample set of word pairs with high and low \emph{PMI} scores. Many of the high \emph{PMI} pairs refer to software entities such as variable, method and Java class names, whereas the low \emph{PMI} pairs contain more general software terms.} \label{tbl:word-sample-pmi} \end{table} \begin{table} [t] \centering \begin{tabular}{ l c c } \toprule Method & \emph{Coord} & \emph{Coord-PMI} \tabularnewline \cmidrule{1-1} \cmidrule(l){2-2} \cmidrule(l){3-3} Code \& Corpus & \textbf{85.3} & \textbf{88} \tabularnewline \midrule \emph{Baselines:} \tabularnewline \texttt{ }Corpus Dist. Sim. & 57.8 & 58.2 \tabularnewline \texttt{ }String Sim. & 65.2 & 65.8 \tabularnewline \texttt{ }Corpus Only & 64.7 & 60.9 \tabularnewline \texttt{ }Code Only & 80.1 & 81.1 \tabularnewline \midrule \emph{Code Features:} \tabularnewline \texttt{ }Code Dist. Sim. 
& 67 (60.2) & 67.2 (59) \tabularnewline \texttt{ }$A_{package}^1$ & 64.2 (63.8) & 64.3 (63.9) \tabularnewline \texttt{ }$A_{package}^2$ & 64.2 (63.8) & 61.2 (64.8) \tabularnewline \texttt{ }$A_{package}^3$ & 65.8 (64.3) & 66 (64.6) \tabularnewline \texttt{ }$A_{package}^4$ & 52.5 (52) & 64.7 (58.7) \tabularnewline \texttt{ }$A_{package}^5$ & 52.5 (52) & 52.6 (58.7) \tabularnewline \texttt{ }$A_{package}^6$ & 50.4 (51.6) & 52.3 (52) \tabularnewline \texttt{ }$A_{type}^1$ & 51.4 (51.4) & 55.1 (53.7) \tabularnewline \texttt{ }$A_{type}^2$ & 54 (53.9) & 55.5 (54.3) \tabularnewline \texttt{ }$A_{type}^3$ & 56.8 (56.7) & 57 (56.9) \tabularnewline \texttt{ }$A_{type}^4$ & 57.1 (56.9) & 57.3 (57.1) \tabularnewline \texttt{ }$A_{type}^5$ & 57.4 (57.6) & 58 (57.9) \tabularnewline \texttt{ }$A_{type}^6$ & 57.2 (57.4) & 57.5 (57.5) \tabularnewline \texttt{ }Text-to-code Prob. & 55.7 & 55.8 \tabularnewline \bottomrule \end{tabular} \caption{Cross validation accuracy results for the coordinate term SVM classifier (Code \& Corpus), as well as baselines using corpus distributional similarity, string similarity, all corpus based features (Corpus Only), or all code based features (Code Only), and all individual code based features. The weighted version of the code based features (see Section~\ref{sec:coord-term-class}) is in parentheses. Results are shown for both the \emph{Coord} and \emph{Coord-PMI} datasets.} \label{tbl:classification-text-v-code} \end{table} Since the validity of the code based features above is directly related to the success of the entity linking phase, each of the code based features is used in the classifier once with the original value and a second time with the value weighted by the text to code linking probability. Of the noun pairs $\langle X,Y \rangle$ in our data, we keep only pairs for which the linking probability $\hat{p}(X'|X)\cdot \hat{p}(Y'|Y)$ is greater than $0.1$.
Note that this guarantees that each noun must be mapped to at least one class with non-zero probability. Next, we evaluate the string morphology and its resemblance to a camel-case format, which is the standard formatting for Java class names. We therefore select alphanumeric terms with at least two upper-case characters and one lower-case character. We name this set of noun pairs the \emph{Coord} dataset. A key assumption underlying statistical distributional similarity approaches is that ``high-interest'' entities are associated with higher corpus frequencies; therefore, given sufficient statistical evidence, ``high-interest'' relations can be extracted. In the software domain, real-world factors may introduce biases in a software-focused text corpus which may affect the corpus frequencies of classes: e.g., users may discuss classes based on the clarity of their API, the efficiency of their implementation, or simply because they are fundamental in software introduced to novice users. Another motivation for using grounded data, such as the class implementation, is that it may highlight additional aspects of interest, for example, classes that are commonly inherited from. We therefore define a second noun dataset, \emph{Coord-PMI}, which attempts to address this issue, in which noun pairs are selected based on their pointwise mutual information (\emph{PMI}): \begin{equation} \textrm{\emph{PMI}}(X,Y) = \log{\frac{p(X,Y)}{p(X)p(Y)}} \end{equation} where the frequency of the pair $\langle X,Y \rangle$ in the corpus is positive. In this set we include coordinate term pairs with high \emph{PMI} scores, which appear more rarely in the corpus and are therefore harder to predict using standard NLP techniques. The negative set in this data consists of noun pairs which appear frequently on their own but do not appear as coordinate terms, and are therefore marked by low \emph{PMI} scores.
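A minimal sketch of the \emph{PMI} computation from raw corpus counts (the counts below are invented for illustration):

```python
from math import log

def pmi(pair_count, x_count, y_count, total_pairs, total_words):
    """Pointwise mutual information of a noun pair, from raw corpus counts."""
    p_xy = pair_count / total_pairs
    p_x = x_count / total_words
    p_y = y_count / total_words
    return log(p_xy / (p_x * p_y))

# Invented counts: a pair that almost always co-occurs (high PMI) vs. two
# frequent terms that are rarely paired (low PMI).
high = pmi(pair_count=40, x_count=60, y_count=70,
           total_pairs=10_000, total_words=1_000_000)
low = pmi(pair_count=5, x_count=9_000, y_count=7_000,
          total_pairs=10_000, total_words=1_000_000)
print(high > low)  # True
```

PMI normalizes away raw frequency, which is exactly why the high-PMI pairs are rarer and harder for purely frequency-driven methods.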
To illustrate this point, we provide a sample of noun pairs with low and high \emph{PMI} scores in Table~\ref{tbl:word-sample-pmi}, where pairs highlighted with bold font are labeled as coordinate terms in our data. We can see that the high \emph{PMI} set contains pairs that are specific and interesting in the software domain while not necessarily being frequent words in the general domain. For example, some pairs seem to represent variable names (e.g., \emph{$\langle$yearsPlayed, totalEarned$\rangle$}), others likely refer to method names (e.g., \emph{$\langle$removeListener, addListener$\rangle$}). Some pairs refer to Java classes, such as \emph{$\langle$JTextField, JComboBox$\rangle$} whose implementation can be found in the Java code repository. We can also see examples of pairs such as \emph{$\langle$PostInsertEventListener, PostUpdateEventListener$\rangle$} which are likely to be user-defined classes with a relationship to the Java class \verb|java.util.EventListener|. In contrast, the low \emph{PMI} set contains more general software terms (e.g., \emph{code, design, server, threads}). \begin{table*} [t] \centering \rowcolors{1}{white}{gray!13} \scalebox{0.97}{ \begin{tabular}{ p{5.3cm} p{5.8cm} l } \toprule Code Dist. 
Sim & $A_{package}^3$ & $A_{type}^5$ \tabularnewline \cmidrule{1-1} \cmidrule(l){2-2} \cmidrule(l){3-3} \small{$\langle$FileOutputStream,OutputStream$\rangle$} & \small{$\langle$KeyEvent,KeyListener$\rangle$} & \small{$\langle$JMenuItem,JMenu$\rangle$} \tabularnewline \small{$\langle$AffineTransform,AffineTransformOp$\rangle$} & \small{$\langle$StyleConstants,SimpleAttributeSet$\rangle$} & \small{$\langle$JMenuItems,JMenu$\rangle$} \tabularnewline \small{$\langle$GZIPOutputStream,}\newline \small{DeflaterOutputStream$\rangle$} & \small{$\langle$BlockQueue,ThreadPoolExecutor$\rangle$} & \small{$\langle$JMenuItems,JMenus$\rangle$} \tabularnewline \small{$\langle$OutputStream,DataOutputStream$\rangle$} & \small{$\langle$BufferedImage,WritableRaster$\rangle$} & \small{$\langle$JLabel,DefaultTreeCellRenderer$\rangle$} \tabularnewline \small{$\langle$AtomicInteger,AtomicIntegerArray$\rangle$} & \small{$\langle$MouseListener,MouseWheelListener$\rangle$} & \small{$\langle$JToggleButton,JRadioButtonMenu$\rangle$} \tabularnewline \small{$\langle$ResourceBundle,ListResourceBundle$\rangle$} & \small{$\langle$DocumentBuilderFactory,}\newline \small{DocumentBuilder$\rangle$} & \small{$\langle$JFrame,JDialogs$\rangle$} \tabularnewline \small{$\langle$setIconImages,setIconImage$\rangle$} & \small{$\langle$ActionListeners,FocusListeners$\rangle$} & \small{$\langle$JTable,JTableHeader$\rangle$} \tabularnewline \small{$\langle$ComboBoxModel,}\newline \small{DefaultComboBoxModel$\rangle$} & \small{$\langle$DataInputStream,DataOutputStream$\rangle$} & \small{$\langle$JTextArea,JEditorPane$\rangle$} \tabularnewline \small{$\langle$JTextArea,TextArea$\rangle$} & \small{$\langle$greaterOrEqualThan,lesserOrEqualThan$\rangle$} & \small{$\langle$JTextPane,JEditorPane$\rangle$} \tabularnewline \small{$\langle$ServerSocketChannel,SocketChannel$\rangle$} & \small{$\langle$CopyOnWriteArrayList,}\newline \small{ConcurrentLinkedQueue$\rangle$} & \small{$\langle$JTextArea,JTable$\rangle$} 
\tabularnewline \bottomrule \end{tabular} } \caption{Top ten coordinate terms predicted by classifiers using one of the following features: code distributional similarity, package hierarchy ancestry ($A_{package}^3$), and type hierarchy ancestry ($A_{type}^5$). All of the displayed predictions are \emph{true}.} \label{tbl:pairs-top-predicted} \end{table*} \section{Results} \subsection{Classification and Feature Analysis.} In Table~\ref{tbl:classification-text-v-code} we report the cross validation accuracy of the coordinate term classifier (\emph{Code \& Corpus}) as well as baseline classifiers using corpus distributional similarity (\emph{Corpus Dist. Sim.}), string similarity (\emph{String Sim.}), all corpus features (\emph{Corpus Only}), or all code features (\emph{Code Only}). Note that using all code features is significantly more successful on this data than any of the corpus baselines (corpus baselines' accuracy is between 57\% and 65\%, whereas code-based accuracy is over 80\%). When using both data sources, performance is improved even further (to over 85\% on the \emph{Coord} dataset and 88\% on \emph{Coord-PMI}). We provide an additional feature analysis in Table~\ref{tbl:classification-text-v-code}, and report the cross validation accuracy of classifiers using each single code feature. Interestingly, code distributional similarity (\emph{Code Dist. Sim.}) is the strongest single feature, and it is a significantly better predictor than corpus distributional similarity, achieving around 67\% vs. around 58\% for both datasets. \subsection{Evaluation by Manual Labeling.} \begin{figure}[t] \centering \includegraphics[width=0.45\textwidth]{f1_manual_labeling} \caption{Manual Labeling Results. F1 results of the top 1000 predicted coordinate terms by rank.
The final data point in each line is labeled with the F1 score at rank 1000.} \label{fig:manualLabeling} \end{figure} The cross-validation results above are based on labels extracted using Hearst conjunction patterns. In Figure~\ref{fig:manualLabeling} we provide an additional analysis based on manual human labeling of samples from the \emph{Coord-PMI} dataset, following a procedure similar to that of prior researchers exploring semi-supervised methods for relation discovery \cite{carlson2010toward,lao2011random}. After all development was complete, we hand-labeled the top 1000 coordinate term pairs according to the ranking by our full classifier (using all code and corpus features) and the top 1000 pairs predicted by the classifiers based on code and corpus distributional similarities only. We report the F1 results of each classifier by the rank of the predicted samples. According to our analysis, the F1 score for the text and code distributional similarity classifiers degrades quickly after the first 100 and 200 top-ranked pairs, respectively. At rank 1000, the score of the full classifier is at 86\%, whereas the code and text classifiers are only at 56\% and 28\%, respectively. To highlight the strength of each of the code based features, we provide in Table~\ref{tbl:pairs-top-predicted} the top ten coordinate terms predicted using the most successful code based features. For example, the top prediction using type hierarchy ancestry ($A_{type}^5$) is $\langle$JMenuItem, JMenu$\rangle$. Since \texttt{JMenu} extends \texttt{JMenuItem}, the two classes indeed share many common interfaces and classes. Alternatively, all of the top predictions using the package hierarchy ancestry ($A_{package}^3$) are labels that have been matched to pairs of classes that share at least three levels of the package hierarchy.
So for example, \texttt{BlockQueue} has been matched to \texttt{java.util.concurrent.BlockingQueue}, which was predicted as a coordinate term of \texttt{ThreadPoolExecutor}, which belongs to the same package. Using code distributional similarity, one of the top predictions is the pair $\langle$GZIPOutputStream, DeflaterOutputStream$\rangle$, which share many common API methods such as \texttt{write}, \texttt{flush}, and \texttt{close}. Many of the other pairs predicted by this feature have been mapped to the same class and therefore have exactly the same context distribution. \subsection{Taxonomy Construction.} We visualize the coordinate term pairs predicted using our method (with all features) by aggregating them into a graph where entities are nodes and edges are determined by a coordinate term relation (Figure~\ref{fig:topology}). Graph edges are colored using the Louvain method \cite{louvain2008} for community detection and an entity label's size is determined by its betweenness centrality. We can see that high-level communities in this graph correspond to class functionality, indicating that our method can be used to create an interesting code taxonomy. Note that our predictions also highlight connections within functional groups that cannot be found using the package or type taxonomies directly. One example can be highlighted within the GUI functionality group. \texttt{Listener} classes facilitate a response mechanism to GUI \texttt{Actions}, such as \emph{pressing a button} or \emph{entering text}; however, for historical reasons these classes belong to different packages than the basic GUI components. In our graph, Action and Listener classes belong to the same communities as the GUI components they are normally used with. \section{Conclusions} We have presented an approach for grounded discovery of coordinate term relationships between text entities representing Java classes.
Using a simple entity linking method we map text entities to an underlying class type implementation from the Java standard libraries. With this code-based grounding, we extract information on the usage pattern of the class and its location in the Java class and namespace hierarchies. Our experimental evaluation shows that using only corpus distributional similarity for the coordinate term prediction task is unsuccessful, achieving prediction accuracy of around 58\%. However, adding information based on the entities' software implementation improves accuracy dramatically to 88\%. Our classifier has an F1 score of 86\% according to human labeling over the top 1000 predicted pairs. We have shown that our predictions can be used to build an interesting code taxonomy which draws from the functional connections, common usage patterns, and implementation details that are shared between classes.
\section{Introduction} \label{sec:Intro} The interaction structure of the Standard Model of Particle Physics (SM) strongly suggests a mechanism of unification. On the one hand, Grand Unified Theories (GUTs) elegantly address questions related to fermion charge assignments in addition to a range of other shortcomings that are present in the SM. Along these lines, a range of less traditional approaches to grand unification have been proposed recently (for a recent review see e.g.~\cite{Croon:2019kpe}). A scenario that we will focus on in this work is grand unification in the context of gauge-Higgs unification~\cite{Espinosa:1998re,Hall:2001zb,Burdman:2002se,Medina:2007hz,Hosotani:2004wv,Terazawa:1977hq,Lim:2007jv}. In particular, we will focus on the model of Refs.~\cite{Hosotani:2017edv,Hosotani:2017hmu}. As shown in Ref.~\cite{Englert:2019xhz}, this model is consistent with current LHC measurements with future LHC measurements being able to extend the currently observed sensitivity to exotic states to the multi-TeV range. If a new state is discovered in the future, a question that will arise as part of the ensuing characterisation programme is its role as a potential harbinger of unification. Answers to this question will be model-dependent but can be informed by theoretical consistency arguments. One of these consistency arguments that is typically highlighted in GUT scenarios is the tree-level prediction of the Weinberg angle \begin{equation} \label{eq:weinberg} \sin^2\theta_W = {3\over 8}\,, \end{equation} as a consequence of an (intermediate) SU(5) unification~\cite{Georgi:1974sy,Georgi:1974yf,Marciano:1979yg}. In perturbative theories, reproducing this value in the UV is critical to support the hypothesis of unification. The relation of Eq.~\eqref{eq:weinberg} receives perturbative corrections that will modify its value in the UV as a function of the theory's fundamental input parameters.
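The qualitative picture can be illustrated with a pure-SM one-loop sketch, using the standard GUT-normalised beta coefficients and approximate electroweak-scale inputs; the model's additional states, which are the subject of this work, are deliberately omitted here:

```python
from math import log, pi

# One-loop SM beta coefficients in GUT normalisation, g1^2 = (5/3) g'^2.
B1, B2 = 41 / 10, -19 / 6

# Illustrative electroweak-scale inputs (approximate values).
MZ = 91.19            # GeV
ALPHA_EM = 1 / 127.9  # at M_Z
SIN2_MZ = 0.2312

def sin2_theta_w(mu):
    """One-loop running weak mixing angle with SM field content only."""
    t = log(mu / MZ)
    a1_inv = (3 / 5) * (1 - SIN2_MZ) / ALPHA_EM - (B1 / (2 * pi)) * t
    a2_inv = SIN2_MZ / ALPHA_EM - (B2 / (2 * pi)) * t
    # sin^2(theta_W) = alpha' / (alpha' + alpha_2) with alpha' = (3/5) alpha_1
    return 3 * a2_inv / (3 * a2_inv + 5 * a1_inv)

print(round(sin2_theta_w(MZ), 4))    # 0.2312 (input value)
print(round(sin2_theta_w(1e13), 3))  # ~0.375, i.e. close to 3/8
```

With SM content alone, $\alpha_1$ and $\alpha_2$ cross near $10^{13}$~GeV, where $\sin^2\theta_W = 3/8$ by construction; the point of the RGE analysis in this work is to quantify how the model's additional states modify this running.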
However, the dominant relation between UV and TeV scales is captured in the renormalisation group running of $\sin^2\theta_W$, i.e. starting from the observed value at the electroweak scale and including corrections from new particles becoming accessible, we should approach the relation of Eq.~\eqref{eq:weinberg} or discover the necessity of additional model constraints. This is the focus of this work in the context of the aforementioned gauge-Higgs unification scenario of Refs.~\cite{Hosotani:2017edv,Hosotani:2017hmu}. We perform a detailed renormalisation group equation (RGE) investigation of the 4D and 5D phases of the scenario with a particular focus on the weak mixing angle. In Sec.~\ref{sec:model} we briefly outline the model to make our work self-contained. In Sec.~\ref{sec:RGEeff} we lay out the RGE solving methods within the respective 4D and 5D formalisms and discuss their qualitative behaviour using a particular parameter benchmark scenario. In Sec.~\ref{sec:WeinbergRGE} we comment on the Weinberg angle at the GUT scale as a means to gauge unification in the considered theoretical framework. Sec.~\ref{sec:ResDiscConc} is devoted to a numerical RGE scan. Particular attention is given to the number of RGE-active fermion generations that can provide guidance for future model-building. Sec.~\ref{sec:Conc} offers conclusions. \section{The model} \label{sec:model} The model of Refs.~\cite{Hosotani:2017edv,Hosotani:2017hmu} is a 6D space-time with hybrid (warped+flat) compactification and an $SO(11)$ gauge symmetry, described by a Randall-Sundrum--like metric~\cite{Randall:1999ee} \begin{equation} ds^2 = e^{-2 \sigma (y)} (\eta_{\mu\nu} dx^\mu dx^\nu + d w^2) + dy^2\,, \end{equation} where $e^{-2 \sigma (y)}$ is the warp factor associated with the $y\in [0, L_5]$ direction, $w \in [0, 2\pi R_6]$ is a Euclidean direction, and $\eta_{\mu\nu} = \mathop{\mathrm{diag}}(-1 , +1, +1, +1)$ is the 4D Minkowski space-time metric.
A $\mathds{Z}_2$ transformation $(x^\mu, y, w) \rightarrow (x^\mu, -y, -w)$ results in a $\mathcal{M}_4 \times (T^2 / \mathds{Z}_2)$ orbifold with 5D branes $\mathcal{M}_4 \times S^1$ at the fixed points $y = 0, L_5$. We assume a compactification $M_\text{GUT}^{-1}\sim R_6\ll \left\{ \pi k/(z_L - 1)\right\}^{-1}$, where $k$ is the $\mathrm{AdS}_5$ curvature and $z_L = e^{k L_5}$, implying Kaluza-Klein (KK) mass scales of the fifth and sixth dimensions $m_{\text{KK}_5} \ll m_{\text{KK}_6}\sim M_\text{GUT}$. The matter content as well as its localisation on the orbifold fixed points is given in Tab.~\ref{table:ModelFields}. \begin{table*}[!t] \centering \parbox{0.6\textwidth}{ \begin{tabular}{|c|c|c|c|} \toprule Name & Field & $SO(11)$ rep. & Bulk/Brane \\ \colrule Gauge bosons & $A_M (x, y, w)$ & $\mathbf{55}$ & 6D Bulk \\ Dirac spinors & $\Psi^\alpha_\mathbf{32}(x, y, w)$ & $\mathbf{32}$ & 6D Bulk \\ Dirac vectors & $\Psi_\mathbf{11}^\beta (x, y, w)$ & $\mathbf{11}$ & 6D Bulk \\ Dirac vectors & $\Psi'^{\beta}_\mathbf{11}(x, y, w)$ & $\mathbf{11}$ & 6D Bulk \\ Spinor scalar & $\Phi_\mathbf{32}(x, w)$ & $\mathbf{32}$ & 5D Brane at $y=0$\\ Majorana spinor & $\chi_\mathbf{1}^\beta(x, w)$ & $\mathbf{1}$ & 5D Brane at $y=0$ \\ \toprule \end{tabular}} \parbox{0.35\textwidth}{ \vspace{1.2cm} \caption{Field content of the model of Refs.~\cite{Hosotani:2017edv,Hosotani:2017hmu}. The columns provide details of the field content, its transformation properties under the $SO(11)$ gauge symmetry, and its localisation in the 6D setup. $\alpha, \beta$ are generational indices where $\alpha = 1, 2, 3, 4$ and $\beta = 1, 2, 3$.} \label{table:ModelFields}} \end{table*} \begin{figure}[!b] \begin{center} \includegraphics[width=1.0\columnwidth]{CustomPlot-mKK5zLThetaHiggsFilter1000.pdf} \caption{Scatter plot of representative parameter space points for the $SO(11)$ model as functions of the KK scale $m_{\text{KK}_5}$ and warp factor $z_L$.
The color reflects the order parameter $\langle\theta_H\rangle$. Points highlighted as hexagons are SM-like, i.e. they reproduce the SM in the low energy regime at the 95\% confidence level~\cite{Englert:2019xhz}. Faded points do not meet the 95\% confidence level criterion.}\label{fig:Res} \end{center} \end{figure} Symmetry breaking to Quantum Chromodynamics (QCD) and Electrodynamics (QED) proceeds in three stages: Firstly, orbifolding with appropriate parity assignments~\cite{Scherk:1978ta,Scherk:1979zr} breaks $SO(11)\to G_\text{PS}= SU(4)_C \times SU(2)_L \times SU(2)_R$, the Pati-Salam~\cite{Pati:1974yy} group, on the infrared (IR) brane at $y=L_5$. Secondly, 5D brane-localised interactions at $y=0$ of $\Phi_\mathbf{32}$ break $SO(11)\to SU(5)$ spontaneously, leading to an $SU(5) \cap G_\text{PS} = G_\text{SM} = SU(3)_C\times SU(2)_L \times U(1)_Y$ zero mode spectrum in the gauge field KK decomposition. Finally, below the 5D compactification scale (i.e. where a 4D description of the theory is appropriate), the Hosotani mechanism~\cite{Hosotani:1983xw, Hosotani:1988bm, Hosotani:1983vn} breaks $SU(2)_L \times U(1)_Y \to U(1)_{\text{EM}}$ through the vacuum expectation value of a Wilson loop $\theta_H$ along the $y$ direction that carries the quantum numbers of the SM Higgs field. In addition to reproducing the SM at the electroweak scale, the theory predicts KK towers for the $SO(11)$ gauge bosons and bulk matter fields in Tab.~\ref{table:ModelFields}. The masses of these modes are set by the various symmetry breaking stages and the two associated mass scales $m_{\text{KK}_5}, m_{\text{KK}_6}$. For the purposes of exploring the model's parameter space, as done in e.g. \cite{Hosotani:2017edv}, we identify the Weinberg angle at the electroweak scale as $\sin^2\theta_W = 0.2312$. As shown in Ref.~\cite{Englert:2019xhz}, the parameter region leading to an acceptable low energy phenomenology can be extended with adapted statistical sampling methods.
This is highlighted in Fig.~\ref{fig:Res}, where we identify a parameter point as ``SM-like'' when it reproduces the SM at the $95\%$ confidence level.\footnote{We refer the interested reader to Ref.~\cite{Englert:2019xhz} for details.} \begin{figure}[!b] \begin{center} \includegraphics[width=1.0\columnwidth]{TowerKKTheories.pdf} \caption{Tower of EFTs that approximate the UV 6D theory. The 4D description is valid within $[M_Z, M_{\text{KK}_5}]$ with $G_{\cancel{\text{SM}}} \equiv SU(3)_C \times U(1)_\text{EM}$ gauge symmetry and within $[M_{\text{KK}_5}, 1 / L_5 ]$ with $G_\text{PS}$ gauge symmetry. The 5D description is valid within $[M_{\text{KK}_5}, \Lambda_\text{Max}]$ with a $G_\text{PS}$ gauge symmetry. Above $\Lambda_\text{Max}$ the full 6D description comes into effect.}\label{TowerOfTheories} \end{center} \end{figure} \section{RGE effects}\label{sec:RGEeff} \subsection*{General remarks} At the TeV scale the model is effectively the 4D SM, and we evolve the parameters according to the 4D theory's properties. This is admissible until we approach $M_{\text{KK}_5}$, where the 5D structure becomes apparent. At this stage we could continue using 4D RGEs including the additional KK states that have non-trivial quantum numbers under the SM gauge group. Alternatively, one can directly work in a 5D approximation~\cite{Choi:2002ps} of the theory to obtain identical results, see Fig.~\ref{TowerOfTheories}. Above the $M_{\text{KK}_5}$ scale additional KK states of the 5D theory become accessible, which correct the behaviour of the 5D running. The 5D regime is determined by the Pati-Salam symmetry group together with the active KK states and thresholds. 6D compactification effects are not relevant in this context as we assume $M_{\text{KK}_5} \ll M_\text{GUT} \sim 1/ R_6$.
Without a 6D RGE formalism, a complete evolution to the GUT scale in our one-loop analysis is not possible since there is a scale \begin{equation} \Lambda_\text{Max} \sim \frac{16 \pi^2}{g_5^2} \ll M_\text{GUT}\,, \end{equation} which signifies a loss of perturbative control of the 5D regime before the unification scale. In this work, we opt to understand this scale as a lower bound on the GUT scale itself and use the difference of the Weinberg angle with respect to Eq.~\eqref{eq:weinberg} as a measure to gauge unification qualitatively. The gauge-related states with masses $\mathcal{O}(M_{\text{KK}_5})$ relevant for our discussion are gauge fields that transform under the symmetries \begin{equation} A_M \sim \begin{cases} G_\text{PS} / G_\text{SM} \\ G_\text{SM} \\ SO(5) / SO(4) \end{cases}\hspace{-0.2cm}. \end{equation} In the theory's 5D regime, these states have well-defined transformation properties under the Pati-Salam $G_\text{PS}$ symmetry. The coset $SO(5) / SO(4)$ sector, which transforms as $(1, \mathbf{2}, \mathbf{2})$ under $G_\text{PS}$ and eventually triggers electroweak symmetry breaking via the Hosotani mechanism, induces corrections to the gauge couplings $g_{2L}, g_{2R}$. Note that the $w$ gauge component KK states of $SO(5)/SO(4)$ obtain large masses via brane interactions (see~\cite{Hosotani:2017edv}) and are therefore not relevant for our discussion. The fermionic matter content relevant in the same regime is again characterised by symmetry properties under $G_\text{PS}$.
States with masses $\mathcal{O}(M_{\text{KK}_5})$ are given by \begin{align} \begin{split} & (\mathbf{4}, \mathbf{2}, 1)_{L,R}, \quad (\mathbf{4}, 1, \mathbf{2})_{L,R}, \\ & ( \mathbf{6}, 1, 1)^{(+)}_{L,R}, \quad (\mathbf{6}, 1, 1)^{(-)}_{L,R}, \\ & (1, \mathbf{2}, \mathbf{2})^{(+)}_{L,R}, \quad (1, \mathbf{2}, \mathbf{2})^{(-)}_{L,R}, \quad (1, 1, 1)^{(+)}_{L,R}, \quad (1, 1, 1)^{(-)}_{L,R}, \end{split} \label{MatterContent} \end{align} which all originate from the $\Psi^\alpha_\mathbf{32}, \Psi_\mathbf{11}^\beta, \Psi_\mathbf{11}'^{\beta}$ bulk fields. The signs $\pm$ refer to parity assignments that guarantee 6D $SO(11)$ chiral anomaly cancellation; see Ref.~\cite{Hosotani:2017edv}. We divide the full energy range in which the 5D EFT is valid (i.e. $[M_Z, \Lambda_\text{Max}]$) into two regions. The first region is given by the energy range in which the 5D EFT is well-approximated by its 4D EFT counterpart. This region's cut-off energy is dictated by the $M_{\text{KK}_5}$ mass threshold, around which the gauge bosons of the Pati-Salam symmetry are resolved. This corresponds to a scale given by the first non-zero mode of the photon tower.\footnote{ For warp factor choices $z_L > 10$ that yield realistic low energy spectra, the solutions for the first photon mode and the PS gauge bosons are almost degenerate.} Thus, the first region is very well approximated by a $G_\text{SM}$ theory with additional matter states (that correspond to the $\theta_H$ shifted KK towers), which is valid within $[M_Z, M_{\text{KK}_5}]$. We describe the remaining energy range $[M_{\text{KK}_5}, \Lambda_\text{Max}]$ in the 5D $G_\text{PS}$ formalism following \cite{Choi:2002ps}, where the cut-off represents the energy at which we lose perturbative control of the 5D theory and the more fundamental 6D theory is required. The tower of theories is schematically shown in Fig.~\ref{TowerOfTheories}.
We now turn to the discussion of the 4D evolution, which will provide the IR boundary conditions for the 5D theory. We first fix our (electroweak) input parameters at $M_Z$ by setting $\alpha_{3C}, \alpha_\text{EM}, \sin \theta_W$ to their experimentally observed values~\cite{Chakraborty:2014aca, Mohr:2015ccw} \begin{equation} \begin{split} \alpha_{3C} &= 0.11822\,,\\ \alpha_\text{EM}^{-1}& = 127.916\,, \\ \sin^2 \theta_W &= 0.2312\,, \end{split} \end{equation} where $\alpha_{3C}, \alpha_\text{EM}$ denote the strong and electromagnetic fine structure constants, respectively (we will discuss the impact of uncertainties on our results below). Subsequently, we evolve $\alpha_{3C}, \alpha_\text{EM}, \sin \theta_W$ via the $G_\text{SM}$ RGEs in the broken phase (using the formalism outlined in \cite{Erler:2004in}) until we reach the energy scale at which a new KK state becomes available. At this scale, we include new RGE contributions arising from resolved KK states until we reach $M_{\text{KK}_5}$, where we include threshold corrections $\lambda_i$ associated with integrating out the heavy states of the $G_\text{PS} \rightarrow G_\text{SM}$ breaking (we do not include logarithmic threshold corrections arising from the matter fields). The 4D/5D matching requires the identification of coupling constants at the relevant scale. The electroweak couplings of the unbroken $SU(2)_L\times U(1)_Y$ phase, $\alpha_{1Y}, \alpha_{2L}$, are related to their broken phase counterparts by\footnote{This is done in the 4D framework, and we have adopted the standard $3/5$ GUT normalisation for the hypercharge coupling.} \begin{equation} \begin{split} \frac{1}{\alpha_{1Y} (\mu)} \eval_{\mu = M_{\text{KK}_5} } &= \frac{3}{5} (1- \sin^2 \theta_W) \frac{1}{\alpha_\text{EM} (\mu)} \eval_{\mu = M_{\text{KK}_5} } \,, \\ \frac{1}{\alpha_{2L} (\mu) } \eval_{\mu = M_{\text{KK}_5} } &= \sin^2 \theta_W \frac{1}{\alpha_\text{EM} (\mu)} \eval_{\mu = M_{\text{KK}_5} }\,.
\end{split} \end{equation} \begin{figure*}[!t] \begin{center} \includegraphics[scale = 0.43]{KKTowers.pdf} \caption{Tower of states from $M_Z$ to the next states at scales beyond $ M_{\text{KK}_5}$. The labels indicate the relevant fermion and boson fields, and their markers show the mass of the respective KK state. $W^{\pm}_\mu$ refer to the W boson tower, $Z^{0}_\mu$ to the Z boson tower, $\psi_t$ denotes the top quark tower, $\psi_b$ is the bottom quark tower, $\psi_D$ is the ``dark fermion'' multiplet tower, $W^{R}_\mu$ is the Pati-Salam $SU(2)_R$ W boson tower, $\gamma_\mu$ is the photon tower, $A^{4, 11}_z$ is the Higgs tower, and $\psi_{\tau}$ is the tau tower. }\label{KKTowerExplicit} \end{center} \end{figure*} With this we can now find the values of the Pati-Salam gauge couplings $\alpha_{4C}, \alpha_{2L}, \alpha_{2R}$ at the $M_{\text{KK}_5}$ scale \begin{equation} \begin{split} \frac{1}{\alpha_{4C}} & = \frac{1}{\alpha_{3C}} + \frac{1 }{12 \pi} \,,\\ \frac{1}{\alpha_{2R}} & = \frac{5}{3} \frac{1}{\alpha_{1Y}} - \frac{2}{3} \frac{1}{\alpha_{3C}} + \frac{8}{45 \pi} \,, \label{eq:paticoup} \end{split} \end{equation} ($\alpha_{2L}$ is already given as the coupling of the $SU(2)_L$ group). These serve as the boundary conditions for the 5D theory, where \begin{equation} g_{\mathrm{5D}} \sqrt{L_5} = g_{\mathrm{4D}}\eval_{\mu = M_{\text{KK}_5}}. \end{equation} We evolve the Pati-Salam couplings $\alpha_4, \alpha_{2L}, \alpha_{2R}$ within the 5D formalism described in Ref.~\cite{Choi:2002ps} in the energy range $[M_{\text{KK}_5}, \Lambda_\text{Max}]$. 
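The matching chain above (broken-phase couplings $\to$ unbroken $SU(2)_L \times U(1)_Y$ couplings $\to$ Pati-Salam couplings via Eq.~\eqref{eq:paticoup}) is straightforward to sketch numerically. The following minimal illustration inserts the $M_Z$ input values directly; in the actual analysis the couplings evolved up to $M_{\text{KK}_5}$ enter instead:

```python
from math import pi

# Electroweak-scale inputs as quoted in the text (illustration only --
# in the actual matching the values evolved up to M_KK5 are used)
inv_alpha_EM = 127.916   # 1/alpha_EM
sin2_thetaW = 0.2312     # sin^2(theta_W)
alpha_3C = 0.11822       # strong coupling

# Broken phase -> unbroken SU(2)_L x U(1)_Y (3/5 GUT normalisation for Y)
inv_alpha_2L = sin2_thetaW * inv_alpha_EM
inv_alpha_1Y = 3.0 / 5.0 * (1.0 - sin2_thetaW) * inv_alpha_EM

# SU(2)_L x U(1)_Y -> Pati-Salam couplings, including the constant
# threshold terms of Eq. (paticoup)
inv_alpha_4C = 1.0 / alpha_3C + 1.0 / (12.0 * pi)
inv_alpha_2R = (5.0 / 3.0 * inv_alpha_1Y - 2.0 / 3.0 / alpha_3C
                + 8.0 / (45.0 * pi))
```

A useful consistency check is $1/\alpha_{2L} + (5/3)(1/\alpha_{1Y}) = 1/\alpha_\text{EM}$, which holds by construction of the broken-phase relations.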
Using this running we then extract the coupling values and compare the Weinberg angle \begin{multline} \sin^2 \theta_W (\mu) = \Bigg( \frac{1}{\alpha_{2L}\alpha_{4C}} \Big( \alpha_{2L} \alpha_{4C} + \frac{2}{3} \alpha_{2L}\alpha_{2R} + \\ \alpha_{2R}\alpha_{4C} - \frac{5}{3} \alpha_{2L}\alpha_{2R} \alpha_{4C} \frac{8}{45 \pi} \Big) \Bigg)^{-1}\eval_{\mu} \label{eqn:WeinbergAnglePS} \end{multline} to its predicted GUT value. Before we discuss the RGEs in detail below, it is instructive to define a reference point to guide our discussion. To get a qualitative understanding of how the KK thresholds modify the RG evolution of the theory, we consider the set of parameters from~\cite{Hosotani:2017edv}, which provide an SM-like physical mass spectrum \begin{align} \begin{split} \mathcal{P}_\text{sample} := & \Big\{ k = 89130, z_L = 35, c_1 = 0, c_2 = -0.7, \\ & c'_0 = 0.5224, \mu_1 = 11.18, \mu_{11} = 0.108, \\ & \tilde{\mu}_2 = 0.7091 , \mu'_{11} = 0.108 \Big\} \,. \end{split}\label{eqn:samplePoint} \end{align} $c_1, c_2, c'_0$ are the fermion bulk mass parameters along the warped direction, and $\mu_{1}, \tilde{\mu}_2, \mu_{11}, \mu'_{11}$ are couplings localised on the 5D brane at $y=0$ (for details see \cite{Hosotani:2017edv}). This choice results in the tower of states shown in Fig.~\ref{KKTowerExplicit}, which we will use as a reference point in the following. \subsection*{4D Approximation and RGEs} By performing the RGE analysis in the broken phase, we evolve the QCD gauge coupling $g_3$ along with the electromagnetic coupling $g_\text{EM}$, which in turn determines the RGE evolution of the Weinberg angle $\sin \theta_W$ via the matter content. To facilitate an unambiguous transition to the Pati-Salam phase we then proceed to relate the latter to the unbroken $U(1)_Y$ hypercharge and $SU(2)_L$ weak couplings.
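As a numerical cross-check of Eq.~\eqref{eqn:WeinbergAnglePS}, the expression reduces to the unified value $3/8$ when all three Pati-Salam couplings coincide, up to the small constant threshold term (the coupling value below is purely illustrative):

```python
from math import pi

def sin2_thetaW_PS(a2L, a2R, a4C):
    """Weinberg angle in the Pati-Salam phase, Eq. (WeinbergAnglePS)."""
    bracket = (a2L * a4C + 2.0 / 3.0 * a2L * a2R + a2R * a4C
               - 5.0 / 3.0 * a2L * a2R * a4C * 8.0 / (45.0 * pi))
    return a2L * a4C / bracket

alpha_unified = 0.04  # illustrative value only
s2 = sin2_thetaW_PS(alpha_unified, alpha_unified, alpha_unified)
```

In the limit of vanishing couplings the threshold term drops out and $\sin^2\theta_W \to 3/8$ exactly.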
The renormalisation group equations are expressed in terms of the gauge couplings $g_i$ as \begin{equation} \mu \frac{\partial g_i}{ \partial \mu} = \beta_i (g_i, \mu)\,, \qquad \frac{1}{\alpha_i} = \frac{4 \pi}{ g_i^2}\,, \end{equation} where the $\beta_i$ are the beta functions determined by the gauge group and the matter representations. The QCD beta function $\beta_{g_3}$ has the generic form arising from an $SU(N)$ gauge theory \cite{Machacek:1984zw} with fermions and scalars in representations $F_i$ and $S_i$, \begin{equation*} \beta_{g_3} = \frac{g_3^3}{(4\pi)^2} \left\{ -\frac{11}{3} C_2\left(SU(3)\right) + \frac{4}{3}\kappa S_2\left(F_i\right) + \frac{1}{6} \eta S_2\left(S_i\right) \right\} \end{equation*} where $C_2\left(G_i\right)$ is the quadratic Casimir of the group $G_i$, $S_2\left(F_i\right), S_2\left(S_i\right)$ are the Dynkin indices for the fermion/scalar representations, $\kappa = 1/2, 1$ for Weyl and Dirac fermions, respectively, and $\eta = 1, 2$ for real and complex scalar fields. For the RGE runnings of the QED gauge coupling $g_\text{EM}$ and Weinberg angle $\sin \theta_W$, we use the formalism presented in Ref.~\cite{Erler:2004in}. The QED beta function is \begin{equation*} \beta_{g_\text{EM}} = \frac{g_\text{EM}^3}{(4\pi)^2} \frac{1}{6} \left\{ \sum_{i} N_i^c \gamma_i Q^2_i \right\} \,, \end{equation*} where $N_i^c$ are the fermion colour factors, $Q_i$ are the EM charges, and $\gamma_i = \{ -22, 8, 4, 2 \}$ correspond to gauge bosons, Dirac fermions, chiral fermions and complex scalar fields, respectively. We begin our RGE evolution at $M_Z \simeq \SI{91}{GeV}$. The QCD and QED couplings have beta function coefficients \begin{equation} \beta_{g_3} = -7 \frac{g_3^3}{(4\pi)^2}\, , \qquad \beta_{g_\text{EM}} = 22 \frac{g_\text{EM}^3}{(4\pi)^2}\,, \end{equation} which are determined by the SM matter content and its $SU(3)_C$ and $U(1)_\text{EM}$ charges. As we evolve the couplings and encounter new states, the beta functions pick up new contributions.
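At one loop, a beta function of the form $\beta_g = b\, g^3/(4\pi)^2$ is equivalent to a linear running of the inverse structure constant, $\mathrm{d}\alpha^{-1}/\mathrm{d}\ln\mu = -b/(2\pi)$. A minimal sketch with the coefficients quoted above ($b_{3C} = -7$, $b_\text{EM} = 22$), neglecting all threshold effects:

```python
from math import log, pi

M_Z = 91.19  # GeV

def inv_alpha_one_loop(inv_alpha_MZ, b, mu):
    """One-loop evolution of 1/alpha from M_Z to mu for beta = b g^3/(4 pi)^2."""
    return inv_alpha_MZ - b / (2.0 * pi) * log(mu / M_Z)

# Illustrative evolution to 1 TeV (thresholds ignored)
inv_a3_TeV = inv_alpha_one_loop(1.0 / 0.11822, -7.0, 1000.0)
inv_aEM_TeV = inv_alpha_one_loop(127.916, 22.0, 1000.0)
```

The signs behave as expected: the QCD coupling decreases towards the UV (asymptotic freedom), while the electromagnetic coupling grows.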
The additional contributions to the QCD beta function take the form \begin{subequations} \label{eq:betfu} \begin{equation} \beta_{g_3} \rightarrow \beta_{g_3} + \begin{cases} -\frac{11}{3} C_2(SU(3)) \\ +\frac{4}{3} \kappa S_2(F_i) \cdot N_G \\ +\frac{1}{6} \sum \eta S_2(S_i) \end{cases} \end{equation} depending on the nature of the state. Analogously, for the QED beta function we have \begin{equation} \beta_{g_\text{EM}} \rightarrow \beta_{g_\text{EM}} + \begin{cases} -\frac{11}{3} N_i^c Q_i^2 \\ +\frac{4}{3} N_i^c Q_i^2 \cdot N_G \\ +\frac{1}{3} N_i^c Q_i^2 \end{cases} \hspace{-0.2cm}, \end{equation} \end{subequations} where the numerical coefficients correspond to $\gamma_i/6$ for gauge bosons, Dirac fermions and complex scalars, respectively. In Eqs.~\eqref{eq:betfu}, we have introduced the $N_G$ factor in the fermionic contributions to account for the number of matter generations present in the model. In this paper we examine the $N_G=1, 3$ cases. For $N_G = 3$ we assume that all three SM generations contribute and that the mass differences between the associated KK states are negligible for the non-zero modes. Similarly, for $N_G = 1$ we assume that there is a mass separation mechanism between the third family and the other two, which effectively decouples their non-zero modes from the theory, leaving only the third family as relevant, as in \cite{Hosotani:2017edv}. Comparing the different assumptions will point towards future model-building directions (see below) in the light of expected unification. With this framework in place we can now form a piecewise system of differential equations.
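Since every one-loop segment is linear in $\ln\mu$, the resulting piecewise system simply accumulates slope changes at the KK thresholds. A minimal sketch, with purely illustrative threshold masses and coefficient shifts rather than the model's actual spectrum:

```python
from math import log, pi

M_Z = 91.19  # GeV

# (threshold mass in GeV, shift of the one-loop coefficient b above it);
# both numbers are illustrative placeholders, not the model spectrum
thresholds = [(500.0, 2.0), (2000.0, 4.0)]

def inv_alpha_piecewise(inv_alpha_MZ, b0, mu):
    """Piecewise-linear one-loop running of 1/alpha from M_Z up to mu."""
    inv_alpha, b, scale = inv_alpha_MZ, b0, M_Z
    for m_th, delta_b in thresholds:
        if mu <= m_th:
            break
        # integrate the current segment up to the threshold, then shift b
        inv_alpha -= b / (2.0 * pi) * log(m_th / scale)
        b, scale = b + delta_b, m_th
    return inv_alpha - b / (2.0 * pi) * log(mu / scale)
```

By construction the solution is continuous across every threshold; only its slope jumps.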
As shown in \cite{Erler:2004in} the Weinberg angle's RGE running is fully determined by its experimental value at $M_Z$, the matter content of the theory, and the running of $\alpha_\text{EM}$ \begin{multline} \sin^2 \theta_W (\mu) = \frac{\alpha_\text{EM} (\mu)}{\alpha_\text{EM} (\mu_0)} \sin^2 \theta_W (\mu_0) \\ + \frac{\sum_{i} N_i^c \gamma_i Q_i T_i }{\sum_{i} N_i^c \gamma_i Q^2_i } \left[1 - \frac{\alpha_\text{EM} (\mu)}{\alpha_\text{EM} (\mu_0)} \right] \,, \label{eqn:GaugeRel} \end{multline} where $T_i$ is the third component of the weak isospin ($T_3 = +1/2$ for $u_i, \nu_i$, $T_3 = -1/2$ for $d_i, e_i$, $T_3 = \pm 1$ for $W^\pm$). The RGE running for the Weinberg angle starting at $M_Z$ is determined by the matter content of the SM and has a growth coefficient \begin{equation} \frac{\sum_{i} N_i^c \gamma_i Q_i T_i }{\sum_{i} N_i^c \gamma_i Q^2_i } = -\frac{19}{22}. \end{equation} Therefore, using the numerical solution for $\alpha_\text{EM}$ we can now create an analogous piecewise solution for $\sin \theta_W$ based on the present matter content. The fields' charges under $SU(3)_C \times U(1)_\text{EM}$, along with their $T_3$ values are given in Table \ref{table:RGEcharges}. \begin{table}[!t] \centering \begin{tabular}{|c|c|c|c|} \toprule Name & $SU(3)_C$ Charge & $U(1)_\text{EM}$ Charge & $T_3$ \\ \colrule Tau ($\tau$) & 1 & -1 & -1/2 \\ Bottom ($b$) & 3 & -1/3 & -1/2 \\ Top ($t$) & 3 & +2/3 & +1/2 \\ Neutrino ($\nu$) & 1 & 0 & +1/2 \\ $W^\pm$ & 1 & $\pm 1$ & $\pm 1$ \\ Dark Fermion $\psi_D^{\nu}$ & 1 & 0 & +1/2 \\ Dark Fermion $\psi_D^{u}$ & 3 & +2/3 & +1/2 \\ Dark Fermion $\psi_D^{\overline{u}}$ & $\overline{3}$ & -2/3 & -1/2 \\ Dark Fermion $\psi_D^{e}$ & 1 & -1 & +1/2 \\ \toprule \end{tabular} \caption{Charge assignments for fields contributing to the RGE runnings. } \label{table:RGEcharges} \end{table} \begin{figure}[t!] 
\begin{center} \includegraphics[width = 1.0\columnwidth]{gGSM_4D_1Gens.pdf} \caption{Piecewise RGE evolution for the SM couplings $g_{3C}, g_{2L}, g_{1Y}$ with the different $\beta$ function changes at the multiple encountered KK states marked as dashed lines. ($M_{\text{KK}_5}$ itself is the furthest right dashed line.) Note that the piecewise forms for $g_{2L}, g_{1Y}$ are obtained via Eq.~\eqref{eq:weinbergc}. }\label{GSM_4DRGE} \end{center} \end{figure} When we reach $M_{\text{KK}_5}$, we recover the hypercharge and weak couplings from the evolved values of $\alpha_\text{EM}$ and $\sin \theta_W$ via \begin{equation} \begin{split} \label{eq:weinbergc} \frac{1}{\alpha_{2L}(\mu)} & = \frac{1}{\alpha_\text{EM} (\mu)} \sin^2 \theta_W (\mu)\,, \\ \frac{1}{\alpha_{1Y}(\mu)} & = \frac{3}{5} \frac{1}{\alpha_\text{EM}(\mu)} (1 - \sin^2 \theta_W(\mu))\,. \end{split} \end{equation} Since $M_{\text{KK}_5}$ is the energy threshold at which the Pati-Salam states become available, we transition to the PS phase of the theory where we obtain the gauge couplings based on the symmetry breaking $SU(4)_C \times SU(2)_L \times SU(2)_R \rightarrow SU(3)_C \times SU(2)_L \times U(1)_Y$. This in turn provides us with the aforementioned 4D/5D boundary conditions of Eq.~\eqref{eq:paticoup} evaluated at $M_{\text{KK}_5}$. Following this procedure, the $G_\text{SM}$ gauge coupling running in the energy range $[M_Z, M_{\text{KK}_5}]$ for the spectrum of Fig.~\ref{KKTowerExplicit} is shown in Fig.~\ref{GSM_4DRGE}. \subsection*{5D RGEs and cut-offs} We now turn to the 5D running with the boundary conditions at $M_{\text{KK}_5}$ detailed above as input. The matter content in our approximated 5D theory was mentioned earlier in Eq.~\eqref{MatterContent}, in addition to the $(1, \mathbf{2}, \mathbf{2}) \sim SO(5)/SO(4)$ state. The formalism in \cite{Choi:2002ps} specifies the 5D RGE running for generic 5D field parity assignments on a \mbox{$S^1/ \mathds{Z}_2 \times \mathds{Z}'_2$} orbifold. 
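The piecewise treatment just described (the update of Eq.~\eqref{eqn:GaugeRel} followed by the coupling recovery of Eq.~\eqref{eq:weinbergc}) can be sketched as follows; the SM growth coefficient $-19/22$ quoted above is used, with the segment-appropriate coefficient entering once KK states are resolved:

```python
def sin2_thetaW_evolved(inv_aEM_mu0, inv_aEM_mu, s2_mu0, coeff=-19.0 / 22.0):
    """Eq. (GaugeRel): evolve sin^2(theta_W) from the running of alpha_EM."""
    ratio = inv_aEM_mu0 / inv_aEM_mu  # = alpha_EM(mu) / alpha_EM(mu0)
    return ratio * s2_mu0 + coeff * (1.0 - ratio)

def unbroken_couplings(inv_aEM, s2):
    """Eq. (weinbergc): recover 1/alpha_2L and the GUT-normalised 1/alpha_1Y."""
    return s2 * inv_aEM, 3.0 / 5.0 * (1.0 - s2) * inv_aEM

# Illustrative values for 1/alpha_EM at mu0 = M_Z and at mu = M_KK5
s2 = sin2_thetaW_evolved(127.916, 119.5, 0.2312)
inv_a2L, inv_a1Y = unbroken_couplings(119.5, s2)
```

The recovered couplings satisfy $1/\alpha_{2L} + (5/3)(1/\alpha_{1Y}) = 1/\alpha_\text{EM}$ identically, as they must.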
Since we started with a 6D theory defined on \mbox{$\mathcal{M}_4 \times T^2 / \mathds{Z}_2$}, the $S^1/ \mathds{Z}_2 \times \mathds{Z}'_2$ assignments arise from the orbifold assignments along the warped direction. These assignments are tabulated for fermions, gauge bosons and scalars in Tabs.~\ref{table:FermionParity},~\ref{table:GaugeParity}, and~\ref{table:GaugeScalarParity}, respectively. \begin{table}[t!] \centering \begin{tabular}{c} \begin{tabular}{ | c | c | c |} \toprule $G_\text{PS}$ rep. & Parent Field & $(\mathds{Z}_2 , \mathds{Z}'_2)$ \\ \colrule $(\mathbf{4}, \mathbf{2}, 1)_L$ & $\Psi^\alpha_\mathbf{32}$ & $(+,+)$ \\ $(\mathbf{4}, \mathbf{2}, 1)_R$ & $\Psi^\alpha_\mathbf{32}$ & $(-,-)$ \\ $(\mathbf{4}, 1, \mathbf{2})_R$ & $\Psi^\alpha_\mathbf{32}$ & $(+,+)$ \\ $(\mathbf{4}, 1, \mathbf{2})_L$ & $\Psi^\alpha_\mathbf{32}$ & $(-,-)$ \\ \toprule \end{tabular} \\ \\ \begin{tabular}{ | c | c | c |} \toprule $G_\text{PS}$ rep. & Parent Field & $(\mathds{Z}_2 , \mathds{Z}'_2)$ \\ \colrule $(\mathbf{4}, \mathbf{2}, 1)_L$ & $\Psi^4_\mathbf{32}$ & $(+,-)$ \\ $(\mathbf{4}, \mathbf{2}, 1)_R$ & $\Psi^4_\mathbf{32}$ & $(-,+)$ \\ $(\mathbf{4}, 1, \mathbf{2})_L$ & $\Psi^4_\mathbf{32}$ & $(-,+)$ \\ $(\mathbf{4}, 1, \mathbf{2})_R$ & $\Psi^4_\mathbf{32}$ & $(+,-)$ \\ \toprule \end{tabular} \\ \\ \begin{tabular}{| c | c | c |} \toprule $G_\text{PS}$ rep. & Parent Field & $(\mathds{Z}_2 , \mathds{Z}'_2)$ \\ \colrule $(\mathbf{6}, 1, 1)^{(+)}_R$ & $\Psi^\beta_\mathbf{11}$ & $(+,+)$ \\ $(\mathbf{6}, 1, 1)^{(-)}_L$ & $\Psi^\beta_\mathbf{11}$ & $(+,+)$ \\ $(\mathbf{6}, 1, 1)^{(+)}_L$ & $\Psi^\beta_\mathbf{11}$ & $(-,-)$ \\ $(\mathbf{6}, 1, 1)^{(-)}_R$ & $\Psi^\beta_\mathbf{11}$ & $(-,-)$ \\ \toprule \end{tabular} \\ \\ \begin{tabular}{| c | c | c |} \toprule $G_\text{PS}$ rep.
& Parent Field & $(\mathds{Z}_2 , \mathds{Z}'_2)$ \\ \colrule $(1, \mathbf{2}, \mathbf{2})^{(+,-)}_{L,R}$ & $\Psi^{\beta'}_\mathbf{11}$ & $(+,+)$ \\ $(1, 1, 1)^{(+,-)}_{R,L}$ & $\Psi^{\beta'}_\mathbf{11}$ & $(+,+)$ \\ $(1, \mathbf{2}, \mathbf{2})^{(+,-)}_{R,L}$ & $\Psi^{\beta'}_\mathbf{11}$ & $(-,-)$ \\ $(1, 1, 1)^{(+,-)}_{L ,R}$ & $\Psi^{\beta'}_\mathbf{11}$ & $(-,-)$ \\ \toprule \end{tabular} \end{tabular} \caption{Fermion parity assignments under $S^1 / \mathds{Z}_2 \times \mathds{Z}'_2$.} \label{table:FermionParity} \end{table} \begin{table}[t!] \centering \begin{tabular}{c} \begin{tabular}{| c | c | c |} \toprule $G_\text{SM}$ rep. & Parent Field & $(\mathds{Z}_2 , \mathds{Z}'_2)$ \\ \colrule $(1, \mathbf{3}, 0)$ & $A_\mu \in G_\text{SM}$ & $(+,+)$ \\ $(\mathbf{8}, 1, 0)$ & $A_\mu \in G_\text{SM}$ & $(+,+)$ \\ $(1, 1, 0)$ & $A_\mu \in G_\text{SM}$ & $(+,+)$ \\ \toprule \end{tabular} \\ \\ \begin{tabular}{| c | c | c |} \toprule $G_\text{SM}$ rep. & Parent Field & $(\mathds{Z}_2 , \mathds{Z}'_2)$ \\ \colrule $(\mathbf{3}, 1, 0 )$ & $A_\mu \in G_\text{PS} / G_\text{SM}$ & $(-,+)$ \\ $(\overline{\mathbf{3}}, 1, 0)$ & $A_\mu \in G_\text{PS} / G_\text{SM}$ & $(-,+)$ \\ \toprule \end{tabular} \\ \\ \begin{tabular}{| c | c | c |} \toprule $G_\text{PS}$ rep. & Parent Field & $(\mathds{Z}_2 , \mathds{Z}'_2)$ \\ \colrule $(1, \mathbf{2}, \mathbf{2})$ & $A^{a, 11}_\mu \in SO(5) / SO(4)$ & $(-,-)$ \\ $(1, 1, \mathbf{3}) $ & $W^\pm_R, Z_R \in G_\text{PS} / G_\text{SM}$ & $(-,+)$ \\ \toprule \end{tabular} \end{tabular} \caption{Gauge boson parity assignment under \mbox{$S^1 / \mathds{Z}_2 \times \mathds{Z}'_2$}. Note that we have to treat the $G_\text{PS}$ and $G_\text{SM}$ representations separately due to the mixed parity assignments in the full 6D model.} \label{table:GaugeParity} \end{table} \begin{table}[ht!] \centering \begin{tabular}{c} \begin{tabular}{| c | c | c |} \toprule $G_\text{PS}$ rep.
& Parent Field & $(\mathds{Z}_2 , \mathds{Z}'_2)$ \\ \colrule $(\mathbf{15}, 1, 1 )$ & $A_y \in G_\text{PS}$ & $(-,-)$ \\ $(1, 1, \mathbf{3})$ & $A_y \in G_\text{PS}$ & $(-,-)$ \\ $(1, \mathbf{3}, 1)$ & $A_y \in G_\text{PS}$ & $(-,-)$ \\ \toprule \end{tabular} \\ \\ \begin{tabular}{| c | c | c |} \toprule $G_\text{PS}$ rep. & Parent Field & $(\mathds{Z}_2 , \mathds{Z}'_2)$ \\ \colrule $(\mathbf{15}, 1, 1)$ & $A_w \in G_\text{PS}$ & $(-,-)$ \\ $(1, 1, \mathbf{3})$ & $A_w \in G_\text{PS}$ & $(-,-)$ \\ $(1, \mathbf{3}, 1)$ & $A_w \in G_\text{PS}$ & $(-,-)$ \\ \toprule \end{tabular} \\ \\ \begin{tabular}{| c | c | c |} \toprule $G_\text{PS}$ rep. & Parent Field & $(\mathds{Z}_2 , \mathds{Z}'_2)$ \\ \colrule $(1, \mathbf{2}, \mathbf{2})$ & $A^{4, 11}_y \in SO(5) / SO(4)$ & $(+,+)$ \\ \toprule \end{tabular} \end{tabular} \caption{Scalar parity assignment under $S^1 / \mathds{Z}_2 \times \mathds{Z}'_2$. In the 5D RGE formalism these states are treated as scalars originating either from the gauge boson projections or as remnants of the 6D approximation.} \label{table:GaugeScalarParity} \end{table} The 5D RGEs take the generic form~\cite{Choi:2002ps} \begin{equation} \frac{1}{g^2_a (\mu)} = \frac{\pi L_5}{g^2_{a_{\mathrm{5D}}} \left( \Lambda_\text{Max} \right) } + \frac{1}{8\pi^2} \sum_\xi \overline{\Delta}_a \left( \xi; \mu , \ln \Lambda_\text{Max} \right), \end{equation} where $g_a$ is the 4D gauge coupling corresponding to the respective gauge group in $SU(4)_C \times SU(2)_L \times SU(2)_R$ (where by $a$ we denote $4C, 2L, 2R$), and $g^2_{a_{\mathrm{5D}}}$ is the squared 5D gauge coupling (which has mass dimension $M^{-1}$). $\overline{\Delta}_a$ (see Appendix \ref{appendix:5DRGEs}) denote the one-loop corrections due to the theory's field content, labelled by $\xi \in \left\{ \phi, \psi, A_\mu \right\}$ for scalars, fermions and gauge bosons.
$\overline{\Delta}_N (\xi)$ for a gauge group $SU(N)$ and a field $\xi$ are given in Ref.~\cite{Choi:2002ps} and reproduced in the appendix for completeness. We can define a cut-off $\Lambda_\text{Max}$ as the scale at which we lose perturbative control of the 5D theory, \begin{equation} \Lambda_\text{Max} \simeq \frac{16 \pi^2}{ g^2_{a_{\mathrm{5D}} } (\Lambda_\text{Max}) }\,. \end{equation} This is the scale where the formal expansion parameter becomes too large (see Ref.~\cite{Sundrum:2005jf}) to deliver reliable results within the context of our leading order RGE analysis. To get a numerical estimate for $\Lambda_\text{Max}$ we can use the RGEs evaluated at $M_{\text{KK}_5}$, i.e. \begin{multline} \frac{1}{g^2_a \left( M_{\text{KK}_5} \right) } = \frac{\pi L_5}{ g^2_{a_{\mathrm{5D}}} \left( \Lambda_\text{Max} \right) } \\+ \frac{1}{8\pi^2} \sum_\xi \overline{\Delta}_a \left( \xi; M_{\text{KK}_5} , \ln \Lambda_\text{Max} \right) \\ \equiv C^a_5 \left( \Lambda_\text{Max} \right) + \frac{1}{8\pi^2} \sum_\xi \overline{\Delta}_a \left( \xi; M_{\text{KK}_5} , \ln \Lambda_\text{Max} \right) . \label{eqn:GaugeCt5D} \end{multline} This is an implicit equation for our unknown 5D gauge coupling at the cut-off scale. To find the unknown dimensionless $C^a_5$ (and scale $\Lambda_\text{Max}$), we can recast the above as a functional equation and solve it numerically for $C^a_5$. More specifically, we can express $\Lambda_\text{Max}$ as \begin{equation} \Lambda_\text{Max} = \frac{16 \pi}{L_5} C^a_5\, , \end{equation} which then provides us with the functional form when substituted into Eq.~\eqref{eqn:GaugeCt5D}, \begin{equation} C^a_5 = \frac{1}{g^2_a \left( M_{\text{KK}_5} \right) } - \frac{1}{8\pi^2} \sum_\xi \overline{\Delta}_a \left( \xi; M_{\text{KK}_5} , \ln \left( \frac{16 \pi}{L_5} C^a_5 \right) \right).
\end{equation} Solving this equation numerically yields cut-off scales for each of the gauge couplings, $\Lambda_\text{Max}^{4C}, \Lambda_\text{Max}^{2L}, \Lambda_\text{Max}^{2R}$. For the remainder of this paper we will refer to the smallest of the three when discussing the cut-off of the theory, beyond which a more fundamental 6D theory should come into effect, \begin{equation} \Lambda_\text{Max} = \min \left \{ \Lambda_\text{Max}^{4C}, \Lambda_\text{Max}^{2L}, \Lambda_\text{Max}^{2R} \right\} \,. \end{equation} The running in the 5D regime for our sample point is shown in Fig.~\ref{protoRunning}. \begin{figure}[t!] \begin{center} \includegraphics[width = 1.0\columnwidth]{GPS_5D_1Gens.pdf} \caption{Effective 4D $SU(4)_C\times SU(2)_L \times SU(2)_R$ gauge couplings obtained via the 5D Pati-Salam approximation, using the evolved coupling values originating from the 4D formalism. The dotted line corresponds to the $M_{\text{KK}_5}$ threshold at which we start our 5D runnings.}\label{protoRunning} \end{center} \end{figure} \section{Weinberg Angle: $SU(5)$ prediction vs running} \label{sec:WeinbergRGE} We can now turn to an analysis of the RGE-corrected Weinberg angle. Switching from the broken $SU(3)_C \times U(1)_\text{EM}$ phase to the $G_\text{SM}$ phase, the Weinberg angle $\sin \theta_W$ and the electromagnetic fine structure constant $\alpha_\text{EM}$ determine the weak and hypercharge couplings according to Eq.~\eqref{eq:weinbergc}. Similarly, the $G_\text{SM}$ couplings are related to the $G_\text{PS}$ ones as expressed in Eq.~\eqref{eq:paticoup}, leading to the Weinberg angle expression of Eq.~\eqref{eqn:WeinbergAnglePS}. At the unification scale, i.e. the energy at which the first non-zero GUT KK state becomes available, $m_{\text{KK}_6} \sim 1/ (2 \pi R_6)$, we can write a series of identities between the 4D, 5D, and 6D couplings based on the principle that there is only one fundamental gauge coupling.
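Returning briefly to the numerical determination of the cut-off: Eq.~\eqref{eqn:GaugeCt5D}, recast in terms of $C^a_5$, is a one-dimensional fixed-point problem, $C^a_5 = f(C^a_5)$, which converges rapidly under simple iteration. A minimal sketch, with a generic linear stand-in for the summed one-loop terms $\sum_\xi \overline{\Delta}_a$ (the actual expressions depend on the full field content) and $L_5$ set to unity:

```python
from math import log, pi

def solve_C5(inv_g2_MKK5, delta_bar, L5=1.0, tol=1e-12, max_iter=1000):
    """Fixed-point iteration for
    C5 = 1/g^2(M_KK5) - Delta_bar(ln(16 pi C5 / L5)) / (8 pi^2)."""
    C5 = inv_g2_MKK5  # starting guess: neglect the correction term
    for _ in range(max_iter):
        C5_new = inv_g2_MKK5 - delta_bar(log(16.0 * pi * C5 / L5)) / (8.0 * pi**2)
        if abs(C5_new - C5) < tol:
            return C5_new
        C5 = C5_new
    raise RuntimeError("fixed-point iteration did not converge")

# Illustrative stand-in for the one-loop corrections (not the model's):
C5 = solve_C5(inv_g2_MKK5=2.0, delta_bar=lambda lnL: 3.0 * lnL)
Lambda_Max = 16.0 * pi * C5  # cut-off in units of 1/L_5
```

Since the correction term enters only logarithmically, the iteration is strongly contracting and a handful of steps suffice in practice.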
Before gauge symmetry breaking, the 5D and 4D equivalent $SO(11)$ couplings at the 5D Planck and IR branes are related to the 6D gauge coupling by, \begin{align*} \alpha^{SO(11)}_{\mathrm{6D}} & = \frac{\alpha^{SO(11)- \text{IR}}_{\mathrm{5D}} }{ 2 \pi R_6 } = \frac{\alpha^{SO(11)- \text{Pl}}_{\mathrm{5D}} }{ 2 \pi R_6 } \\ & = \frac{\alpha^{SO(11)- \text{IR}}_{\mathrm{4D}} }{ 2 \pi R_6 L_5 } = \frac{\alpha^{SO(11)- \text{Pl}}_{\mathrm{4D}} }{ 2 \pi R_6 L_5 } \,. \end{align*} On the Planck brane the gauge symmetry is broken down to $SU(5)$ via the vacuum expectation value (VEV) $\langle \Phi_\mathbf{32} \rangle$. In terms of the equivalent 4D gauge couplings, the identification at 1-loop is equivalent to \cite{Hall:1980kf,Babu:2015bna} \begin{multline} \frac{1}{\alpha^{SU(5) - \text{Pl}}_{\mathrm{4D}}} = \frac{1}{\alpha^{SO(11) - \text{Pl}}_{\mathrm{4D}}} \\- \frac{1}{12\pi}\left[ C_2(SO(11)) - C_2(SU(5)) \right]. \end{multline} Recasting this in terms of the 6D coupling, we have \begin{equation} \frac{1}{\alpha^{SU(5) - \text{Pl}}_{\mathrm{4D}}} = \left\{ \frac{1}{\alpha^{SO(11)}_{\mathrm{6D}}} - 2 \pi R_6 L_5 \frac{\lambda_{11\rightarrow 5}}{12\pi} \right\} \frac{1}{2 \pi R_6 L_5}\,, \label{eqn:SU5gauge} \end{equation} where $\lambda_{11\rightarrow 5} = \left[ C_2(SO(11)) - C_2(SU(5)) \right]$. 
Similarly, on the IR brane we break $SO(11)\rightarrow SU(4)_C \times SU(2)_L \times SU(2)_R$ via boundary conditions, which produce the gauge identifications at 1-loop, \begin{align*} \frac{1}{\alpha^{SU(4)_C - \text{IR}}_{\mathrm{4D}}} = \frac{1}{\alpha^{SO(11) - \text{Pl}}_{\mathrm{4D}}} - \frac{\lambda_{11\rightarrow 4}}{12\pi}\, ,\\ \frac{1}{\alpha^{SU(2)_L - \text{IR}}_{\mathrm{4D}}} = \frac{1}{\alpha^{SO(11) - \text{Pl}}_{\mathrm{4D}}} - \frac{\lambda_{11\rightarrow 2}}{12\pi}\, ,\\ \frac{1}{\alpha^{SU(2)_R - \text{IR}}_{\mathrm{4D}}} = \frac{1}{\alpha^{SO(11) - \text{Pl}}_{\mathrm{4D}}} - \frac{\lambda_{11\rightarrow 2}}{12\pi} \,, \end{align*} where $\lambda_{11\rightarrow 4} = C_2(SO(11)) - C_2(SU(4))$ and $\lambda_{11\rightarrow 2} = C_2(SO(11)) - C_2(SU(2))$. In terms of the 6D couplings this means \begin{equation} \begin{split} \frac{1}{\alpha^{SU(4)_C - \text{IR}}_{\mathrm{4D}}} =\left\{ \frac{1}{\alpha^{SO(11) - \text{Pl}}_{\mathrm{6D}}} - 2\pi R_6 L_5 \frac{\lambda_{11\rightarrow 4}}{12\pi} \right\} \frac{1}{ 2 \pi R_6 L_5 }\,, \\ \frac{1}{\alpha^{SU(2)_L - \text{IR}}_{\mathrm{4D}}} =\left\{ \frac{1}{\alpha^{SO(11) - \text{Pl}}_{\mathrm{6D}}} - 2\pi R_6 L_5 \frac{\lambda_{11\rightarrow 2}}{12\pi} \right\} \frac{1}{ 2 \pi R_6 L_5 }\,, \label{eqn:PatiSalamGauge} \\ \frac{1}{\alpha^{SU(2)_R - \text{IR}}_{\mathrm{4D}}} =\left\{ \frac{1}{\alpha^{SO(11) - \text{Pl}}_{\mathrm{6D}}} - 2\pi R_6 L_5 \frac{\lambda_{11\rightarrow 2}}{12\pi} \right\} \frac{1}{ 2 \pi R_6 L_5 }\,. \end{split} \end{equation} Ignoring the Casimir terms for a moment to keep the discussion transparent, at the unification scale, instead of Eqs.~\eqref{eqn:SU5gauge} and \eqref{eqn:PatiSalamGauge}, we have \begin{equation} \begin{split} \frac{1}{\alpha^{SU(4)_C - \text{IR}}_{\mathrm{4D}}} & = \frac{1}{\alpha^{SU(2)_L - \text{IR}}_{\mathrm{4D}}} = \frac{1}{\alpha^{SU(2)_R - \text{IR}}_{\mathrm{4D}}} \\ &= \frac{1}{\alpha^{SU(5) - \text{Pl}}_{\mathrm{4D}}} = \frac{1}{\alpha^{SO(11) - 
\text{Pl}}_{\mathrm{6D}}} \frac{1}{ 2 \pi R_6 L_5 }\,. \end{split} \end{equation} When combined with the expression for the Weinberg angle in the Pati-Salam phase, Eq.~\eqref{eqn:WeinbergAnglePS}, these relations lead to the expected \begin{equation} \label{eq:weinuv} \sin^2 \theta_W (\mu) \eval_{\mu = (2\pi R_6)^{-1} }= \frac{1}{\frac{2}{3}+ 1 +1} = \frac{3}{8}\,. \end{equation} In essence, this is the $SU(5)$ prediction translated from the Planck brane to the IR brane.\footnote{ The scale of $SU(5)$ breaking is dictated by $\langle \Phi_\mathbf{32}\rangle \sim R_6^{-1}$, which is localised on the UV brane $y=0$, i.e. the scale in Eq.~\eqref{eq:weinuv} is consistent.} Again, we emphasise that this scale is not accessible within our 5D formalism, but we can infer some useful conclusions depending on the values of the RGE runnings at $\Lambda_\mathrm{Max}$, as we will see in Sec.~\ref{sec:ResDiscConc}. Including the Casimir corrections, we find the slightly modified relation \begin{equation} \sin^2 \theta_W (\mu)= \frac{36 - 18 \pi \alpha^{SO(11)}_{\mathrm{4D}}}{96 - \dfrac{1}{\pi} 20 \alpha^{SO(11)}_{\mathrm{4D}} - 44 \pi \alpha^{SO(11)}_{\mathrm{4D}}} \Biggr\rvert_{\mu = (2\pi R_6)^{-1} }. \end{equation} Since the Casimir-corrected Weinberg angle requires a value for the $SO(11)$ 4D equivalent gauge coupling, we examine the possible deviation from the $3/8$ GUT prediction as a function of the possible values of $\alpha^{SO(11)}_{\mathrm{4D}}$, as shown in Fig.~\ref{figure:WeinCass}. For reasonable $\alpha^{SO(11)}_{\mathrm{4D}}$ coupling values (e.g. Ref.~\cite{Babu:2015bna}) we see that the deviations arising from the Casimir corrections amount to $\lesssim - 0.0013$. Since this $\sim 0.4 \%$ deviation is negligible, we can safely ignore the Casimir contributions in the following without qualitatively changing our results. \begin{figure}[h!] \begin{center} \includegraphics[width = 1.0\columnwidth]{WeinCass.pdf} \caption{Numerical impact of the Casimir correction (blue line) as a function of the unknown inverse unified coupling $(\alpha_{\mathrm{4D}}^{SO(11)})^{-1}$. The green line represents the lower bound for the $\sim 0.4 \%$ deviation, occurring at $(\alpha_{\mathrm{4D}}^{SO(11)})^{-1} \simeq 20$. The orange line represents the GUT hypothesis $3/8$. The smaller $\alpha_{\mathrm{4D}}^{SO(11)}$, the less impact the Casimir corrections have on the prediction, as they are weighted by $\alpha_{\mathrm{4D}}^{SO(11)}$. } \label{figure:WeinCass} \end{center} \end{figure} \begin{figure*}[!t] \begin{center} \subfigure[\label{fig:SMabsDeV1Gena}]{\includegraphics[width=1.0\columnwidth]{CustomPlot-SM-AbsDevRatio-MKK5-Sin2ThW-1Gens-Filter1000.pdf}} \subfigure[\label{fig:PSabsDeV1Genb}]{\includegraphics[width=1.0\columnwidth]{CustomPlot-PS-AbsDevRatio-LambdaMax-Sin2ThW-1Gens-Filter1000.pdf}} \caption{(a) Scatter plot of the parameter space points for the $N_G=1$ case, using the same convention as in Fig.~\ref{fig:Res}. Each point shows the value of the unification measure $\Delta(G_\text{SM}; M_{\text{KK}_5}, M_Z)$ in the 4D SM phase between the Kaluza-Klein scale $M_{\text{KK}_5}$ and $M_Z$; the colour shading denotes the value of the Weinberg angle $\sin^2 \theta_W (M_{\text{KK}_5})$. (b) The corresponding measure for the $N_G=1$ case in the 5D phase, $\Delta(G_\text{PS}; \Lambda_\text{Max}, M_{\text{KK}_5})$, shown as a function of the cut-off scale $\Lambda_\text{Max}$ where perturbativity is lost (see text for details). The colour shading again represents the Weinberg angle at the cut-off. Highlighted hexagon points refer to realistic low energy spectra compatible with exotics searches. 
\label{fig:SMabsDeV1Gen} } \end{center} \end{figure*} \begin{figure*}[!t] \begin{center} \subfigure[\label{fig:SMabsDeV3Gena}]{\includegraphics[width=1.0\columnwidth]{CustomPlot-SM-AbsDevRatio-MKK5-Sin2ThW-3Gens-Filter1000.pdf}} \subfigure[\label{fig:PSabsDeV3Genb}]{\includegraphics[width=1.0\columnwidth]{CustomPlot-PS-AbsDevRatio-LambdaMax-Sin2ThW-3Gens-Filter1000.pdf}} \caption{Scatter plots analogous to Figs.~\ref{fig:SMabsDeV1Gena} and~\ref{fig:PSabsDeV1Genb} for the degenerate $N_G=3$ case. \label{fig:absDeV3Gen} } \end{center} \end{figure*} \section{Results and Conclusions}\label{sec:ResDiscConc} The running is crucially influenced by the number of active fermion generations $N_G$. We therefore comment on our results for $N_G=1$ and $N_G=3$ separately. In the first case, we include only the third fermion generation, as mentioned before. This implicitly assumes that there is a large mass gap between the third family and the remaining two, decoupling the associated zero-mode KK states from the RGE flow (see Ref.~\cite{Hosotani:2017edv}). In the second case, we assume that all three SM generations are present and that the different generational mass states are nearly degenerate. Contrasting these two avenues and their implications for unification can therefore act as a guideline for future model-building in the fermion sector. To examine the extent to which the gauge couplings converge in the 4D and 5D regimes, tensioned against the unification value of the Weinberg angle, we introduce a ``unification measure'' \begin{multline} \Delta(G; M_2, M_1) = \frac{\displaystyle \sum_{ i,j \in G | i \neq j} |\alpha_i (M_2) - \alpha_j(M_2)| }{ \displaystyle \sum_{ i,j \in G | i \neq j} |\alpha_i (M_1) - \alpha_j(M_1)| }\,, \end{multline} i.e. the ratio of the summed mutual coupling deviations at two scales $M_2 > M_1$. Here the $\alpha_i$ are the gauge couplings of the subgroups that form the gauge group $G$. 
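For concreteness, the measure just defined can be sketched numerically as follows; the coupling values used here are illustrative placeholders only, not results of our scans:

```python
# A minimal sketch of the unification measure Delta(G; M2, M1):
# the sum of pairwise |alpha_i - alpha_j| at the higher scale M2,
# divided by the same sum at the lower scale M1.
def unification_measure(alphas_m2, alphas_m1):
    def spread(alphas):
        return sum(abs(a - b)
                   for i, a in enumerate(alphas)
                   for b in alphas[i + 1:])
    return spread(alphas_m2) / spread(alphas_m1)

# Placeholder values for (alpha_1Y, alpha_2L, alpha_3C) at M_Z and at a
# higher scale, chosen only to illustrate the converging case.
alphas_mz = [0.0102, 0.0338, 0.1184]
alphas_high = [0.0110, 0.0320, 0.0700]
print(unification_measure(alphas_high, alphas_mz))  # < 1: approaching unification
```

Note that summing over unordered pairs, as done here, differs from the ordered double sum only by an overall factor of two, which cancels in the ratio.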
This ratio measures how quickly the gauge couplings approach each other as a function of the energy scale. Since we are interested in gauge coupling unification at $M_2 > M_1$, the value of $\Delta(G; M_2, M_1)$ indicates \begin{equation} \Delta(G; M_2, M_1) \begin{cases} >1 \enskip \Leftrightarrow \enskip \text{departure from unification} \\ <1 \enskip \Leftrightarrow \enskip \text{approaching unification} \\ \sim 0 \enskip \Leftrightarrow \enskip \text{unification} \end{cases} \hspace{-0.2cm}. \end{equation} We plot this unification measure in the 4D SM phase between $M_Z$ and $M_{\text{KK}_5}$, along with the Weinberg angle value at $M_{\text{KK}_5}$, in Figs.~\ref{fig:SMabsDeV1Gena} and \ref{fig:SMabsDeV3Gena} for the $N_G = 1$ and $N_G = 3$ cases. Figs.~\ref{fig:PSabsDeV1Genb} and \ref{fig:PSabsDeV3Genb} show the same measure for $N_G = 1$ and $N_G = 3$ in the 5D PS phase between $M_{\text{KK}_5}$ and $\Lambda_\text{Max}$. We start our discussion with the $N_G = 1$ case. Examining Fig.~\ref{fig:SMabsDeV1Gena}, we see that within the 4D SM phase all the points that are consistent with the SM have a unification measure smaller than unity, while the evolved Weinberg angle is around $\sin^2\theta_W \simeq 0.25$. 
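As an aside, the size of the Casimir correction quoted earlier can be checked directly from the closed-form expression for $\sin^2 \theta_W$ given in the previous section; a minimal numerical sketch, taking $(\alpha_{\mathrm{4D}}^{SO(11)})^{-1} = 20$ as the reference value:

```python
import math

# Casimir-corrected sin^2(theta_W) at mu = (2 pi R_6)^-1, evaluating the
# closed-form expression quoted in the text for a given 4D-equivalent
# SO(11) coupling alpha.
def sin2_theta_w_corrected(alpha):
    num = 36 - 18 * math.pi * alpha
    den = 96 - (20 / math.pi) * alpha - 44 * math.pi * alpha
    return num / den

alpha = 1 / 20  # reference value (alpha_4D^SO(11))^-1 = 20
deviation = sin2_theta_w_corrected(alpha) - 3 / 8
print(f"{deviation:+.4f}")  # -0.0013: the ~0.4% deviation quoted in the text
```

In the limit $\alpha \to 0$ the expression reduces to $36/96 = 3/8$, recovering the uncorrected GUT prediction.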
\begin{figure*}[t!]% \centering \subfigure[~RGE evolution of the piecewise hypercharge coupling $g_{1Y}$ for the sample point in Eqn.~\eqref{eqn:samplePoint} in the $N_G = 1$ case.]{{ \includegraphics[width=0.96\columnwidth]{g1Y_4D_1Gens.pdf}\label{fig:g1Y_1Gen} }}% % \subfigure[~RGE evolution of the piecewise hypercharge coupling $g_{1Y}$ for the sample point in Eqn.~\eqref{eqn:samplePoint} in the $N_G = 3$ case.]{{ \includegraphics[width=0.96\columnwidth]{g1Y_4D_3Gens.pdf}\label{fig:g1Y_3Gens} }}% \caption{Comparison of the piecewise RGE evolutions of the hypercharge coupling for the sample point in Eqn.~\eqref{eqn:samplePoint} between the $N_G=1$ and $N_G=3$ cases.}% \label{fig:g1YComparison}% \end{figure*} \begin{figure*}[t!]% \centering \subfigure[~RGE evolution of the piecewise weak coupling $g_{2L}$ for the sample point in Eqn.~\eqref{eqn:samplePoint} in the $N_G = 1$ case.]{{ \includegraphics[width=0.96\columnwidth]{g2L_4D_1Gens.pdf}\label{fig:g2L_1Gen} }}% % \subfigure[~RGE evolution of the piecewise weak coupling $g_{2L}$ for the sample point in Eqn.~\eqref{eqn:samplePoint} in the $N_G = 3$ case.]{{ \includegraphics[width=0.96\columnwidth]{g2L_4D_3Gens.pdf}\label{fig:g2L_3Gens} }}% \caption{Comparison of the piecewise RGE evolutions of the weak coupling for the sample point in Eqn.~\eqref{eqn:samplePoint} between the $N_G=1$ and $N_G=3$ cases.}% \label{fig:g2LComparison}% \end{figure*} The Weinberg angle evolves towards its predicted unified value as the gauge couplings converge. The numerical results are similar between the $N_G = 1$ and $N_G = 3$ cases; in the $N_G = 3$ case the unification measure is smaller due to the additional positive fermionic contributions, which increase the slope of the running of the hypercharge coupling. 
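The generation dependence described above can be oriented against the ordinary 4D one-loop SM beta coefficients; the sketch below uses the standard textbook coefficients only for illustration, since our piecewise runnings additionally include the KK thresholds:

```python
# Ordinary 4D one-loop SM beta-function coefficients b_i as a function of
# the number of fermion generations n_gen, with d g_i / d ln(mu) =
# b_i g_i^3 / (16 pi^2), g' in the SM normalisation and one Higgs doublet.
def beta_coeffs(n_gen, n_higgs=1):
    b_1y = (20 / 9) * n_gen + n_higgs / 6           # hypercharge g'
    b_2l = -22 / 3 + (4 / 3) * n_gen + n_higgs / 6  # weak SU(2)_L
    b_3c = -11 + (4 / 3) * n_gen                    # strong SU(3)_C
    return b_1y, b_2l, b_3c

for n_gen in (1, 3):
    print(n_gen, tuple(round(b, 3) for b in beta_coeffs(n_gen)))
# n_gen = 3 reproduces the familiar (41/6, -19/6, -7); additional
# generations steepen the hypercharge running and weaken asymptotic freedom.
```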
We note that this effect is in competition with the weak corrections, which tend to be strong enough to change the direction of the gauge coupling running away from asymptotic freedom. This in turn leads to a smaller Weinberg angle in the UV. We can see this behaviour for the $N_G=1,3$ cases in Figs.~\ref{fig:g1YComparison} and \ref{fig:g2LComparison}. In the 5D phase shown in Fig.~\ref{fig:PSabsDeV1Genb}, we see that the converging behaviour is maintained, although the unification measure increases compared to the 4D phase. The measure remains below unity while the Weinberg angle also increases along the RGE flow. This reflects the need for a complete RGE analysis within higher dimensional theories (see e.g. Refs.~\cite{Randall:2001gb, Randall:2001gc}). Under the assumption that the coupling behaviour remains similar in the 6D phase of the theory, we can infer that gauge coupling unification is consistent with the predicted value for the Weinberg angle. Put differently, the cut-off scale depicted in Fig.~\ref{fig:PSabsDeV1Genb} provides us with a lower bound for the unification scale, $M_\text{GUT} > \Lambda_\text{Max}$, dictated by gauge coupling unification and consistency with the Weinberg angle prediction. Let us turn to the $N_G=3$ case, where we observe an amplified version of the aforementioned KK-state effect (Fig.~\ref{fig:SMabsDeV3Gena}) due to their increased number. In total, this leads to the gauge couplings being pushed increasingly away from unification in the 5D phase, while the Weinberg angle flow is still consistent with its unification value. Under the assumption that this behaviour continues in the 6D theory, we could face a potential inconsistency: the predicted $SU(5)$ value for the Weinberg angle is reached without gauge coupling unification being achieved. 
While this could be compensated by large radiative corrections that shift the Weinberg angle away from the GUT hypothesis, it sets fairly tight constraints on the dynamics of the fermion sector. We finally comment on the impact of uncertainties, in particular those of the input parameters $\alpha_{3C}$ and $\sin^2 \theta_W$ at the weak scale. Errors as small as $\sigma({\alpha_{3C}}) = \pm 0.00074$ are possible from a theoretical perspective (e.g. Ref.~\cite{Chakraborty:2014aca}), and for demonstration purposes we consider a conservative $5\%$ uncertainty in the value of the Weinberg angle, $\sigma({\sin^2 \theta_W}) = \pm 0.01156$. Taking into account both of these uncertainties, we perform our analysis for the sample point highlighted in Eq.~\eqref{eqn:samplePoint}. In both the $N_G = 1$ and $N_G = 3$ cases the resulting percentage difference in the unification measure at $M_{\mathrm{KK}_5}$ amounts to $\sim 2\%$. This effect is less pronounced at $\Lambda_{\mathrm{Max}}$, decreasing to $\sim 0.2 \%$ for $N_G = 1$ and $\sim 0.1 \%$ for $N_G = 3$. In the $N_G = 1$ case the Weinberg angle at $M_{\mathrm{KK}_5}$ is affected by $\sim 4.7 \%$, decreasing to $\sim 3.9 \%$ at $\Lambda_{\mathrm{Max}}$. In the $N_G = 3$ case the impact on the Weinberg angle is similar; at $M_{\mathrm{KK}_5}$ we obtain $\sim 4.8\%$, increasing to $\sim 4.92\%$ at $\Lambda_{\mathrm{Max}}$. \section{Conclusions} \label{sec:Conc} Grand Unified Theories are attractive solutions to shortcomings of the Standard Model of Particle Physics. In non-supersymmetric realisations, scale separation can be achieved by employing a higher dimensional background geometry~\cite{Randall:1999ee}, where electroweak symmetry breaking can also be implemented elegantly as a radiative phenomenon~\cite{Hosotani:1983xw}. 
Transitioning through the different phases of such scenarios is less straightforward than in ``standard'' 4D GUTs (see e.g.~\cite{Babu:2015bna,Lazarides:1980nt,Hall:1993gn,Barr:1981qv,Dimopoulos:1981zb,Derendinger:1983aj,Antoniadis:1987dx,Maekawa:2002bk}). This is the purpose of our study: a detailed analysis of the 4D and 5D phases of the model of~\cite{Hosotani:2017edv,Hosotani:2017hmu}, contrasted with electroweak scale measurements as well as LHC constraints. We pay particular attention to the Weinberg angle, whose size is determined by $SU(5)$ relations and can therefore be used to test gauge unification (or lack thereof). While a fully conclusive test will require a full investigation of the 6D phase of the theory, which we leave for future work, we gather evidence that the 4D and 5D effective theories can remain under perturbative control up to scales of $\sim 10^7$~GeV. If unification is to be approached in a controlled way, new dynamics should appear at scales about two orders of magnitude above the 5D compactification scale. This scale can be interpreted as a lower limit on the GUT scale of $\sim 5000$~TeV in light of observed physics at and around the electroweak scale. Fermionic thresholds crucially impact the running of the couplings and, as a consequence, the model-building aspects related to the three fermion generations play an important role in the high energy behaviour of the theory. Unless there is a hierarchical approach to lifting the zero modes of the fermion fields to their observed SM values, the 6D theory will play a more important role in achieving unification in the sense of Eq.~\eqref{eq:weinuv}. \acknowledgements We thank the referee of Physics Letters B for their comments. CE acknowledges support by the Durham Institute for Particle Physics Phenomenology (IPPP) Associateship Scheme and thanks the IPPP for the hospitality extended to him while this work was finalised. 
CE and DJM are supported by the UK Science and Technology Facilities Council (STFC) under grant ST/P000746/1. DDS is supported by a University of Glasgow College of Science \& Engineering PhD scholarship.
/** * @license AngularJS v2.0.0-rc.1 * (c) 2010-2016 Google, Inc. https://angular.io/ * License: MIT */ var __extends = (this && this.__extends) || function (d, b) { for (var p in b) if (b.hasOwnProperty(p)) d[p] = b[p]; function __() { this.constructor = d; } d.prototype = b === null ? Object.create(b) : (__.prototype = b.prototype, new __()); }; (function (global, factory) { typeof exports === 'object' && typeof module !== 'undefined' ? factory(exports, require('@angular/core'), require('@angular/common')) : typeof define === 'function' && define.amd ? define(['exports', '@angular/core', '@angular/common'], factory) : (factory((global.ng = global.ng || {}, global.ng.platformBrowser = global.ng.platformBrowser || {}), global.ng.core, global.ng.common)); }(this, function (exports, _angular_core, _angular_common) { 'use strict'; var globalScope; if (typeof window === 'undefined') { if (typeof WorkerGlobalScope !== 'undefined' && self instanceof WorkerGlobalScope) { // TODO: Replace any with WorkerGlobalScope from lib.webworker.d.ts #3492 globalScope = self; } else { globalScope = global; } } else { globalScope = window; } var IS_DART = false; // Need to declare a new variable for global here since TypeScript // exports the original value of the symbol. 
var global$1 = globalScope; var Date = global$1.Date; var _devMode = true; function assertionsEnabled() { return _devMode; } // TODO: remove calls to assert in production environment // Note: Can't just export this and import in in other files // as `assert` is a reserved keyword in Dart global$1.assert = function assert(condition) { // TODO: to be fixed properly via #2830, noop for now }; function isPresent(obj) { return obj !== undefined && obj !== null; } function isBlank(obj) { return obj === undefined || obj === null; } function isString(obj) { return typeof obj === "string"; } function isFunction(obj) { return typeof obj === "function"; } function isArray(obj) { return Array.isArray(obj); } function stringify(token) { if (typeof token === 'string') { return token; } if (token === undefined || token === null) { return '' + token; } if (token.name) { return token.name; } if (token.overriddenName) { return token.overriddenName; } var res = token.toString(); var newLineIndex = res.indexOf("\n"); return (newLineIndex === -1) ? 
res : res.substring(0, newLineIndex); } var StringWrapper = (function () { function StringWrapper() { } StringWrapper.fromCharCode = function (code) { return String.fromCharCode(code); }; StringWrapper.charCodeAt = function (s, index) { return s.charCodeAt(index); }; StringWrapper.split = function (s, regExp) { return s.split(regExp); }; StringWrapper.equals = function (s, s2) { return s === s2; }; StringWrapper.stripLeft = function (s, charVal) { if (s && s.length) { var pos = 0; for (var i = 0; i < s.length; i++) { if (s[i] != charVal) break; pos++; } s = s.substring(pos); } return s; }; StringWrapper.stripRight = function (s, charVal) { if (s && s.length) { var pos = s.length; for (var i = s.length - 1; i >= 0; i--) { if (s[i] != charVal) break; pos--; } s = s.substring(0, pos); } return s; }; StringWrapper.replace = function (s, from, replace) { return s.replace(from, replace); }; StringWrapper.replaceAll = function (s, from, replace) { return s.replace(from, replace); }; StringWrapper.slice = function (s, from, to) { if (from === void 0) { from = 0; } if (to === void 0) { to = null; } return s.slice(from, to === null ? 
undefined : to); }; StringWrapper.replaceAllMapped = function (s, from, cb) { return s.replace(from, function () { var matches = []; for (var _i = 0; _i < arguments.length; _i++) { matches[_i - 0] = arguments[_i]; } // Remove offset & string from the result array matches.splice(-2, 2); // The callback receives match, p1, ..., pn return cb(matches); }); }; StringWrapper.contains = function (s, substr) { return s.indexOf(substr) != -1; }; StringWrapper.compare = function (a, b) { if (a < b) { return -1; } else if (a > b) { return 1; } else { return 0; } }; return StringWrapper; }()); var NumberParseError = (function (_super) { __extends(NumberParseError, _super); function NumberParseError(message) { _super.call(this); this.message = message; } NumberParseError.prototype.toString = function () { return this.message; }; return NumberParseError; }(Error)); var NumberWrapper = (function () { function NumberWrapper() { } NumberWrapper.toFixed = function (n, fractionDigits) { return n.toFixed(fractionDigits); }; NumberWrapper.equal = function (a, b) { return a === b; }; NumberWrapper.parseIntAutoRadix = function (text) { var result = parseInt(text); if (isNaN(result)) { throw new NumberParseError("Invalid integer literal when parsing " + text); } return result; }; NumberWrapper.parseInt = function (text, radix) { if (radix == 10) { if (/^(\-|\+)?[0-9]+$/.test(text)) { return parseInt(text, radix); } } else if (radix == 16) { if (/^(\-|\+)?[0-9ABCDEFabcdef]+$/.test(text)) { return parseInt(text, radix); } } else { var result = parseInt(text, radix); if (!isNaN(result)) { return result; } } throw new NumberParseError("Invalid integer literal when parsing " + text + " in base " + radix); }; // TODO: NaN is a valid literal but is returned by parseFloat to indicate an error. 
NumberWrapper.parseFloat = function (text) { return parseFloat(text); }; Object.defineProperty(NumberWrapper, "NaN", { get: function () { return NaN; }, enumerable: true, configurable: true }); NumberWrapper.isNaN = function (value) { return isNaN(value); }; NumberWrapper.isInteger = function (value) { return Number.isInteger(value); }; return NumberWrapper; }()); var RegExpWrapper = (function () { function RegExpWrapper() { } RegExpWrapper.create = function (regExpStr, flags) { if (flags === void 0) { flags = ''; } flags = flags.replace(/g/g, ''); return new global$1.RegExp(regExpStr, flags + 'g'); }; RegExpWrapper.firstMatch = function (regExp, input) { // Reset multimatch regex state regExp.lastIndex = 0; return regExp.exec(input); }; RegExpWrapper.test = function (regExp, input) { regExp.lastIndex = 0; return regExp.test(input); }; RegExpWrapper.matcher = function (regExp, input) { // Reset regex state for the case // someone did not loop over all matches // last time. regExp.lastIndex = 0; return { re: regExp, input: input }; }; RegExpWrapper.replaceAll = function (regExp, input, replace) { var c = regExp.exec(input); var res = ''; regExp.lastIndex = 0; var prev = 0; while (c) { res += input.substring(prev, c.index); res += replace(c); prev = c.index + c[0].length; regExp.lastIndex = prev; c = regExp.exec(input); } res += input.substring(prev); return res; }; return RegExpWrapper; }()); // Can't be all uppercase as our transpiler would think it is a special directive... 
var Json = (function () { function Json() { } Json.parse = function (s) { return global$1.JSON.parse(s); }; Json.stringify = function (data) { // Dart doesn't take 3 arguments return global$1.JSON.stringify(data, null, 2); }; return Json; }()); var DateWrapper = (function () { function DateWrapper() { } DateWrapper.create = function (year, month, day, hour, minutes, seconds, milliseconds) { if (month === void 0) { month = 1; } if (day === void 0) { day = 1; } if (hour === void 0) { hour = 0; } if (minutes === void 0) { minutes = 0; } if (seconds === void 0) { seconds = 0; } if (milliseconds === void 0) { milliseconds = 0; } return new Date(year, month - 1, day, hour, minutes, seconds, milliseconds); }; DateWrapper.fromISOString = function (str) { return new Date(str); }; DateWrapper.fromMillis = function (ms) { return new Date(ms); }; DateWrapper.toMillis = function (date) { return date.getTime(); }; DateWrapper.now = function () { return new Date(); }; DateWrapper.toJson = function (date) { return date.toJSON(); }; return DateWrapper; }()); function setValueOnPath(global, path, value) { var parts = path.split('.'); var obj = global; while (parts.length > 1) { var name = parts.shift(); if (obj.hasOwnProperty(name) && isPresent(obj[name])) { obj = obj[name]; } else { obj = obj[name] = {}; } } if (obj === undefined || obj === null) { obj = {}; } obj[parts.shift()] = value; } var wtfInit = _angular_core.__core_private__.wtfInit; var DebugDomRootRenderer = _angular_core.__core_private__.DebugDomRootRenderer; var SecurityContext = _angular_core.__core_private__.SecurityContext; var SanitizationService = _angular_core.__core_private__.SanitizationService; /** * A pattern that recognizes a commonly useful subset of URLs that are safe. * * This regular expression matches a subset of URLs that will not cause script * execution if used in URL context within a HTML document. 
Specifically, this * regular expression matches if (comment from here on and regex copied from * Soy's EscapingConventions): * (1) Either a protocol in a whitelist (http, https, mailto or ftp). * (2) or no protocol. A protocol must be followed by a colon. The below * allows that by allowing colons only after one of the characters [/?#]. * A colon after a hash (#) must be in the fragment. * Otherwise, a colon after a (?) must be in a query. * Otherwise, a colon after a single solidus (/) must be in a path. * Otherwise, a colon after a double solidus (//) must be in the authority * (before port). * * The pattern disallows &, used in HTML entity declarations before * one of the characters in [/?#]. This disallows HTML entities used in the * protocol name, which should never happen, e.g. "h&#116;tp" for "http". * It also disallows HTML entities in the first path part of a relative path, * e.g. "foo&lt;bar/baz". Our existing escaping functions should not produce * that. More importantly, it disallows masking of a colon, * e.g. "javascript&#58;...". * * This regular expression was taken from the Closure sanitization library. */ var SAFE_URL_PATTERN = /^(?:(?:https?|mailto|ftp|tel|file):|[^&:/?#]*(?:[/?#]|$))/gi; function sanitizeUrl(url) { if (String(url).match(SAFE_URL_PATTERN)) return url; return 'unsafe:' + url; } /** * Regular expression for safe style values. * * Quotes (" and ') are allowed, but a check must be done elsewhere to ensure * they're balanced. * * ',' allows multiple values to be assigned to the same property * (e.g. background-attachment or font-family) and hence could allow * multiple values to get injected, but that should pose no risk of XSS. * * The rgb() and rgba() expression checks only for XSS safety, not for CSS * validity. * * This regular expression was taken from the Closure sanitization library. 
*/ var SAFE_STYLE_VALUE = /^([-,."'%_!# a-zA-Z0-9]+|(?:rgb|hsl)a?\([0-9.%, ]+\))$/; /** * Checks that quotes (" and ') are properly balanced inside a string. Assumes * that neither escape (\) nor any other character that could result in * breaking out of a string parsing context are allowed; * see http://www.w3.org/TR/css3-syntax/#string-token-diagram. * * This code was taken from the Closure sanitization library. */ function hasBalancedQuotes(value) { var outsideSingle = true; var outsideDouble = true; for (var i = 0; i < value.length; i++) { var c = value.charAt(i); if (c === '\'' && outsideDouble) { outsideSingle = !outsideSingle; } else if (c === '"' && outsideSingle) { outsideDouble = !outsideDouble; } } return outsideSingle && outsideDouble; } function sanitizeStyle(value) { if (String(value).match(SAFE_STYLE_VALUE) && hasBalancedQuotes(value)) return value; return 'unsafe'; } /** * DomSanitizationService helps preventing Cross Site Scripting Security bugs (XSS) by sanitizing * values to be safe to use in the different DOM contexts. * * For example, when binding a URL in an `<a [href]="someValue">` hyperlink, `someValue` will be * sanitized so that an attacker cannot inject e.g. a `javascript:` URL that would execute code on * the website. * * In specific situations, it might be necessary to disable sanitization, for example if the * application genuinely needs to produce a `javascript:` style link with a dynamic value in it. * Users can bypass security by constructing a value with one of the `bypassSecurityTrust...` * methods, and then binding to that value from the template. * * These situations should be very rare, and extraordinary care must be taken to avoid creating a * Cross Site Scripting (XSS) security bug! * * When using `bypassSecurityTrust...`, make sure to call the method as early as possible and as * close as possible to the source of the value, to make it easy to verify no security bug is * created by its use. 
* * It is not required (and not recommended) to bypass security if the value is safe, e.g. a URL that * does not start with a suspicious protocol, or an HTML snippet that does not contain dangerous * code. The sanitizer leaves safe values intact. */ var DomSanitizationService = (function () { function DomSanitizationService() { } return DomSanitizationService; }()); var DomSanitizationServiceImpl = (function (_super) { __extends(DomSanitizationServiceImpl, _super); function DomSanitizationServiceImpl() { _super.apply(this, arguments); } DomSanitizationServiceImpl.prototype.sanitize = function (ctx, value) { if (value == null) return null; switch (ctx) { case SecurityContext.NONE: return value; case SecurityContext.HTML: if (value instanceof SafeHtmlImpl) return value.changingThisBreaksApplicationSecurity; this.checkNotSafeValue(value, 'HTML'); return this.sanitizeHtml(String(value)); case SecurityContext.STYLE: if (value instanceof SafeStyleImpl) return value.changingThisBreaksApplicationSecurity; this.checkNotSafeValue(value, 'Style'); return sanitizeStyle(value); case SecurityContext.SCRIPT: if (value instanceof SafeScriptImpl) return value.changingThisBreaksApplicationSecurity; this.checkNotSafeValue(value, 'Script'); throw new Error('unsafe value used in a script context'); case SecurityContext.URL: if (value instanceof SafeUrlImpl) return value.changingThisBreaksApplicationSecurity; this.checkNotSafeValue(value, 'URL'); return sanitizeUrl(String(value)); case SecurityContext.RESOURCE_URL: if (value instanceof SafeResourceUrlImpl) { return value.changingThisBreaksApplicationSecurity; } this.checkNotSafeValue(value, 'ResourceURL'); throw new Error('unsafe value used in a resource URL context'); default: throw new Error("Unexpected SecurityContext " + ctx); } }; DomSanitizationServiceImpl.prototype.checkNotSafeValue = function (value, expectedType) { if (value instanceof SafeValueImpl) { throw new Error('Required a safe ' + expectedType + ', got a ' + 
value.getTypeName()); } }; DomSanitizationServiceImpl.prototype.sanitizeHtml = function (value) { // TODO(martinprobst): implement. return value; }; DomSanitizationServiceImpl.prototype.bypassSecurityTrustHtml = function (value) { return new SafeHtmlImpl(value); }; DomSanitizationServiceImpl.prototype.bypassSecurityTrustStyle = function (value) { return new SafeStyleImpl(value); }; DomSanitizationServiceImpl.prototype.bypassSecurityTrustScript = function (value) { return new SafeScriptImpl(value); }; DomSanitizationServiceImpl.prototype.bypassSecurityTrustUrl = function (value) { return new SafeUrlImpl(value); }; DomSanitizationServiceImpl.prototype.bypassSecurityTrustResourceUrl = function (value) { return new SafeResourceUrlImpl(value); }; return DomSanitizationServiceImpl; }(DomSanitizationService)); DomSanitizationServiceImpl.decorators = [ { type: _angular_core.Injectable }, ]; var SafeValueImpl = (function () { function SafeValueImpl(changingThisBreaksApplicationSecurity) { this.changingThisBreaksApplicationSecurity = changingThisBreaksApplicationSecurity; // empty } return SafeValueImpl; }()); var SafeHtmlImpl = (function (_super) { __extends(SafeHtmlImpl, _super); function SafeHtmlImpl() { _super.apply(this, arguments); } SafeHtmlImpl.prototype.getTypeName = function () { return 'HTML'; }; return SafeHtmlImpl; }(SafeValueImpl)); var SafeStyleImpl = (function (_super) { __extends(SafeStyleImpl, _super); function SafeStyleImpl() { _super.apply(this, arguments); } SafeStyleImpl.prototype.getTypeName = function () { return 'Style'; }; return SafeStyleImpl; }(SafeValueImpl)); var SafeScriptImpl = (function (_super) { __extends(SafeScriptImpl, _super); function SafeScriptImpl() { _super.apply(this, arguments); } SafeScriptImpl.prototype.getTypeName = function () { return 'Script'; }; return SafeScriptImpl; }(SafeValueImpl)); var SafeUrlImpl = (function (_super) { __extends(SafeUrlImpl, _super); function SafeUrlImpl() { _super.apply(this, arguments); } 
SafeUrlImpl.prototype.getTypeName = function () { return 'URL'; }; return SafeUrlImpl; }(SafeValueImpl)); var SafeResourceUrlImpl = (function (_super) { __extends(SafeResourceUrlImpl, _super); function SafeResourceUrlImpl() { _super.apply(this, arguments); } SafeResourceUrlImpl.prototype.getTypeName = function () { return 'ResourceURL'; }; return SafeResourceUrlImpl; }(SafeValueImpl)); var Map$1 = global$1.Map; var Set$1 = global$1.Set; // Safari and Internet Explorer do not support the iterable parameter to the // Map constructor. We work around that by manually adding the items. var createMapFromPairs = (function () { try { if (new Map$1([[1, 2]]).size === 1) { return function createMapFromPairs(pairs) { return new Map$1(pairs); }; } } catch (e) { } return function createMapAndPopulateFromPairs(pairs) { var map = new Map$1(); for (var i = 0; i < pairs.length; i++) { var pair = pairs[i]; map.set(pair[0], pair[1]); } return map; }; })(); var createMapFromMap = (function () { try { if (new Map$1(new Map$1())) { return function createMapFromMap(m) { return new Map$1(m); }; } } catch (e) { } return function createMapAndPopulateFromMap(m) { var map = new Map$1(); m.forEach(function (v, k) { map.set(k, v); }); return map; }; })(); var _clearValues = (function () { if ((new Map$1()).keys().next) { return function _clearValues(m) { var keyIterator = m.keys(); var k; while (!((k = keyIterator.next()).done)) { m.set(k.value, null); } }; } else { return function _clearValuesWithForEach(m) { m.forEach(function (v, k) { m.set(k, null); }); }; } })(); // Safari doesn't implement MapIterator.next(), which is used in Traceur's polyfill of Array.from // TODO(mlaval): remove the workaround once we have a working polyfill of Array.from var _arrayFromMap = (function () { try { if ((new Map$1()).values().next) { return function createArrayFromMap(m, getValues) { return getValues ?
Array.from(m.values()) : Array.from(m.keys()); }; } } catch (e) { } return function createArrayFromMapWithForeach(m, getValues) { var res = ListWrapper.createFixedSize(m.size), i = 0; m.forEach(function (v, k) { res[i] = getValues ? v : k; i++; }); return res; }; })(); /** * Wraps Javascript Objects */ var StringMapWrapper = (function () { function StringMapWrapper() { } StringMapWrapper.create = function () { // Note: We are not using Object.create(null) here due to // performance! // http://jsperf.com/ng2-object-create-null return {}; }; StringMapWrapper.contains = function (map, key) { return map.hasOwnProperty(key); }; StringMapWrapper.get = function (map, key) { return map.hasOwnProperty(key) ? map[key] : undefined; }; StringMapWrapper.set = function (map, key, value) { map[key] = value; }; StringMapWrapper.keys = function (map) { return Object.keys(map); }; StringMapWrapper.values = function (map) { return Object.keys(map).reduce(function (r, a) { r.push(map[a]); return r; }, []); }; StringMapWrapper.isEmpty = function (map) { for (var prop in map) { return false; } return true; }; StringMapWrapper.delete = function (map, key) { delete map[key]; }; StringMapWrapper.forEach = function (map, callback) { for (var prop in map) { if (map.hasOwnProperty(prop)) { callback(map[prop], prop); } } }; StringMapWrapper.merge = function (m1, m2) { var m = {}; for (var attr in m1) { if (m1.hasOwnProperty(attr)) { m[attr] = m1[attr]; } } for (var attr in m2) { if (m2.hasOwnProperty(attr)) { m[attr] = m2[attr]; } } return m; }; StringMapWrapper.equals = function (m1, m2) { var k1 = Object.keys(m1); var k2 = Object.keys(m2); if (k1.length != k2.length) { return false; } var key; for (var i = 0; i < k1.length; i++) { key = k1[i]; if (m1[key] !== m2[key]) { return false; } } return true; }; return StringMapWrapper; }()); var ListWrapper = (function () { function ListWrapper() { } // JS has no way to express a statically fixed size list, but dart does so we // keep both methods. 
ListWrapper.createFixedSize = function (size) { return new Array(size); }; ListWrapper.createGrowableSize = function (size) { return new Array(size); }; ListWrapper.clone = function (array) { return array.slice(0); }; ListWrapper.forEachWithIndex = function (array, fn) { for (var i = 0; i < array.length; i++) { fn(array[i], i); } }; ListWrapper.first = function (array) { if (!array) return null; return array[0]; }; ListWrapper.last = function (array) { if (!array || array.length == 0) return null; return array[array.length - 1]; }; ListWrapper.indexOf = function (array, value, startIndex) { if (startIndex === void 0) { startIndex = 0; } return array.indexOf(value, startIndex); }; ListWrapper.contains = function (list, el) { return list.indexOf(el) !== -1; }; ListWrapper.reversed = function (array) { var a = ListWrapper.clone(array); return a.reverse(); }; ListWrapper.concat = function (a, b) { return a.concat(b); }; ListWrapper.insert = function (list, index, value) { list.splice(index, 0, value); }; ListWrapper.removeAt = function (list, index) { var res = list[index]; list.splice(index, 1); return res; }; ListWrapper.removeAll = function (list, items) { for (var i = 0; i < items.length; ++i) { var index = list.indexOf(items[i]); list.splice(index, 1); } }; ListWrapper.remove = function (list, el) { var index = list.indexOf(el); if (index > -1) { list.splice(index, 1); return true; } return false; }; ListWrapper.clear = function (list) { list.length = 0; }; ListWrapper.isEmpty = function (list) { return list.length == 0; }; ListWrapper.fill = function (list, value, start, end) { if (start === void 0) { start = 0; } if (end === void 0) { end = null; } list.fill(value, start, end === null ? 
list.length : end); }; ListWrapper.equals = function (a, b) { if (a.length != b.length) return false; for (var i = 0; i < a.length; ++i) { if (a[i] !== b[i]) return false; } return true; }; ListWrapper.slice = function (l, from, to) { if (from === void 0) { from = 0; } if (to === void 0) { to = null; } return l.slice(from, to === null ? undefined : to); }; ListWrapper.splice = function (l, from, length) { return l.splice(from, length); }; ListWrapper.sort = function (l, compareFn) { if (isPresent(compareFn)) { l.sort(compareFn); } else { l.sort(); } }; ListWrapper.toString = function (l) { return l.toString(); }; ListWrapper.toJSON = function (l) { return JSON.stringify(l); }; ListWrapper.maximum = function (list, predicate) { if (list.length == 0) { return null; } var solution = null; var maxValue = -Infinity; for (var index = 0; index < list.length; index++) { var candidate = list[index]; if (isBlank(candidate)) { continue; } var candidateValue = predicate(candidate); if (candidateValue > maxValue) { solution = candidate; maxValue = candidateValue; } } return solution; }; ListWrapper.flatten = function (list) { var target = []; _flattenArray(list, target); return target; }; ListWrapper.addAll = function (list, source) { for (var i = 0; i < source.length; i++) { list.push(source[i]); } }; return ListWrapper; }()); function _flattenArray(source, target) { if (isPresent(source)) { for (var i = 0; i < source.length; i++) { var item = source[i]; if (isArray(item)) { _flattenArray(item, target); } else { target.push(item); } } } return target; } // Safari and Internet Explorer do not support the iterable parameter to the // Set constructor. We work around that by manually adding the items. 
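/*
 * Illustrative note (an addition for clarity, not part of the original bundle):
 * the Set helper below uses the same feature-detection pattern as the Map
 * helpers above. On browsers whose Set constructor ignores its iterable
 * argument (older Safari and IE), `new Set$1([1, 2, 3]).size` is 0, so
 * `createSetFromList` falls back to adding the items one by one. Either way
 * the observable behavior is the same, e.g.:
 *
 *   var s = SetWrapper.createFromList(['a', 'b', 'a']);
 *   s.size;     // 2 (duplicates collapse)
 *   s.has('b'); // true
 */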
var createSetFromList = (function () { var test = new Set$1([1, 2, 3]); if (test.size === 3) { return function createSetFromList(lst) { return new Set$1(lst); }; } else { return function createSetAndPopulateFromList(lst) { var res = new Set$1(lst); if (res.size !== lst.length) { for (var i = 0; i < lst.length; i++) { res.add(lst[i]); } } return res; }; } })(); var SetWrapper = (function () { function SetWrapper() { } SetWrapper.createFromList = function (lst) { return createSetFromList(lst); }; SetWrapper.has = function (s, key) { return s.has(key); }; SetWrapper.delete = function (m, k) { m.delete(k); }; return SetWrapper; }()); var _DOM = null; function getDOM() { return _DOM; } function setDOM(adapter) { _DOM = adapter; } function setRootDomAdapter(adapter) { if (isBlank(_DOM)) { _DOM = adapter; } } /* tslint:disable:requireParameterType */ /** * Provides DOM operations in an environment-agnostic way. */ var DomAdapter = (function () { function DomAdapter() { this.xhrType = null; } /** @deprecated */ DomAdapter.prototype.getXHR = function () { return this.xhrType; }; Object.defineProperty(DomAdapter.prototype, "attrToPropMap", { /** * Maps attribute names to their corresponding property names for cases * where attribute name doesn't match property name. */ get: function () { return this._attrToPropMap; }, set: function (value) { this._attrToPropMap = value; }, enumerable: true, configurable: true }); ; ; return DomAdapter; }()); /** * Provides DOM operations in any browser environment. 
*/ var GenericBrowserDomAdapter = (function (_super) { __extends(GenericBrowserDomAdapter, _super); function GenericBrowserDomAdapter() { var _this = this; _super.call(this); this._animationPrefix = null; this._transitionEnd = null; try { var element = this.createElement('div', this.defaultDoc()); if (isPresent(this.getStyle(element, 'animationName'))) { this._animationPrefix = ''; } else { var domPrefixes = ['Webkit', 'Moz', 'O', 'ms']; for (var i = 0; i < domPrefixes.length; i++) { if (isPresent(this.getStyle(element, domPrefixes[i] + 'AnimationName'))) { this._animationPrefix = '-' + domPrefixes[i].toLowerCase() + '-'; break; } } } var transEndEventNames = { WebkitTransition: 'webkitTransitionEnd', MozTransition: 'transitionend', OTransition: 'oTransitionEnd otransitionend', transition: 'transitionend' }; StringMapWrapper.forEach(transEndEventNames, function (value, key) { if (isPresent(_this.getStyle(element, key))) { _this._transitionEnd = value; } }); } catch (e) { this._animationPrefix = null; this._transitionEnd = null; } } GenericBrowserDomAdapter.prototype.getDistributedNodes = function (el) { return el.getDistributedNodes(); }; GenericBrowserDomAdapter.prototype.resolveAndSetHref = function (el, baseUrl, href) { el.href = href == null ? baseUrl : baseUrl + '/../' + href; }; GenericBrowserDomAdapter.prototype.supportsDOMEvents = function () { return true; }; GenericBrowserDomAdapter.prototype.supportsNativeShadowDOM = function () { return isFunction(this.defaultDoc().body.createShadowRoot); }; GenericBrowserDomAdapter.prototype.getAnimationPrefix = function () { return isPresent(this._animationPrefix) ? this._animationPrefix : ""; }; GenericBrowserDomAdapter.prototype.getTransitionEnd = function () { return isPresent(this._transitionEnd) ? 
this._transitionEnd : ""; }; GenericBrowserDomAdapter.prototype.supportsAnimation = function () { return isPresent(this._animationPrefix) && isPresent(this._transitionEnd); }; return GenericBrowserDomAdapter; }(DomAdapter)); var _attrToPropMap = { 'class': 'className', 'innerHtml': 'innerHTML', 'readonly': 'readOnly', 'tabindex': 'tabIndex' }; var DOM_KEY_LOCATION_NUMPAD = 3; // Map to convert some key or keyIdentifier values to what will be returned by getEventKey var _keyMap = { // The following values are here for cross-browser compatibility and to match the W3C standard // cf http://www.w3.org/TR/DOM-Level-3-Events-key/ '\b': 'Backspace', '\t': 'Tab', '\x7F': 'Delete', '\x1B': 'Escape', 'Del': 'Delete', 'Esc': 'Escape', 'Left': 'ArrowLeft', 'Right': 'ArrowRight', 'Up': 'ArrowUp', 'Down': 'ArrowDown', 'Menu': 'ContextMenu', 'Scroll': 'ScrollLock', 'Win': 'OS' }; // There is a bug in Chrome for numeric keypad keys: // https://code.google.com/p/chromium/issues/detail?id=155654 // 1, 2, 3 ... are reported as A, B, C ... var _chromeNumKeyPadMap = { 'A': '1', 'B': '2', 'C': '3', 'D': '4', 'E': '5', 'F': '6', 'G': '7', 'H': '8', 'I': '9', 'J': '*', 'K': '+', 'M': '-', 'N': '.', 'O': '/', '\x60': '0', '\x90': 'NumLock' }; /** * A `DomAdapter` powered by full browser DOM APIs. 
*/ /* tslint:disable:requireParameterType */ var BrowserDomAdapter = (function (_super) { __extends(BrowserDomAdapter, _super); function BrowserDomAdapter() { _super.apply(this, arguments); } BrowserDomAdapter.prototype.parse = function (templateHtml) { throw new Error("parse not implemented"); }; BrowserDomAdapter.makeCurrent = function () { setRootDomAdapter(new BrowserDomAdapter()); }; BrowserDomAdapter.prototype.hasProperty = function (element, name) { return name in element; }; BrowserDomAdapter.prototype.setProperty = function (el, name, value) { el[name] = value; }; BrowserDomAdapter.prototype.getProperty = function (el, name) { return el[name]; }; BrowserDomAdapter.prototype.invoke = function (el, methodName, args) { el[methodName].apply(el, args); }; // TODO(tbosch): move this into a separate environment class once we have it BrowserDomAdapter.prototype.logError = function (error) { if (window.console.error) { window.console.error(error); } else { window.console.log(error); } }; BrowserDomAdapter.prototype.log = function (error) { window.console.log(error); }; BrowserDomAdapter.prototype.logGroup = function (error) { if (window.console.group) { window.console.group(error); this.logError(error); } else { window.console.log(error); } }; BrowserDomAdapter.prototype.logGroupEnd = function () { if (window.console.groupEnd) { window.console.groupEnd(); } }; Object.defineProperty(BrowserDomAdapter.prototype, "attrToPropMap", { get: function () { return _attrToPropMap; }, enumerable: true, configurable: true }); BrowserDomAdapter.prototype.query = function (selector) { return document.querySelector(selector); }; BrowserDomAdapter.prototype.querySelector = function (el, selector) { return el.querySelector(selector); }; BrowserDomAdapter.prototype.querySelectorAll = function (el, selector) { return el.querySelectorAll(selector); }; BrowserDomAdapter.prototype.on = function (el, evt, listener) { el.addEventListener(evt, listener, false); }; 
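/*
 * Usage sketch (an addition for illustration, not part of the original
 * bundle): an application installs this adapter once at bootstrap, and
 * framework code then reaches the DOM through `getDOM()` instead of touching
 * `document` directly:
 *
 *   BrowserDomAdapter.makeCurrent();   // register as the root DomAdapter
 *   var dom = getDOM();
 *   var el = dom.createElement('div');
 *   dom.setProperty(el, 'id', 'app');
 *   dom.on(el, 'click', function (ev) { dom.preventDefault(ev); });
 */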
BrowserDomAdapter.prototype.onAndCancel = function (el, evt, listener) { el.addEventListener(evt, listener, false); // Needed to follow Dart's subscription semantic, until fix of // https://code.google.com/p/dart/issues/detail?id=17406 return function () { el.removeEventListener(evt, listener, false); }; }; BrowserDomAdapter.prototype.dispatchEvent = function (el, evt) { el.dispatchEvent(evt); }; BrowserDomAdapter.prototype.createMouseEvent = function (eventType) { var evt = document.createEvent('MouseEvent'); evt.initEvent(eventType, true, true); return evt; }; BrowserDomAdapter.prototype.createEvent = function (eventType) { var evt = document.createEvent('Event'); evt.initEvent(eventType, true, true); return evt; }; BrowserDomAdapter.prototype.preventDefault = function (evt) { evt.preventDefault(); evt.returnValue = false; }; BrowserDomAdapter.prototype.isPrevented = function (evt) { return evt.defaultPrevented || isPresent(evt.returnValue) && !evt.returnValue; }; BrowserDomAdapter.prototype.getInnerHTML = function (el) { return el.innerHTML; }; BrowserDomAdapter.prototype.getOuterHTML = function (el) { return el.outerHTML; }; BrowserDomAdapter.prototype.nodeName = function (node) { return node.nodeName; }; BrowserDomAdapter.prototype.nodeValue = function (node) { return node.nodeValue; }; BrowserDomAdapter.prototype.type = function (node) { return node.type; }; BrowserDomAdapter.prototype.content = function (node) { if (this.hasProperty(node, "content")) { return node.content; } else { return node; } }; BrowserDomAdapter.prototype.firstChild = function (el) { return el.firstChild; }; BrowserDomAdapter.prototype.nextSibling = function (el) { return el.nextSibling; }; BrowserDomAdapter.prototype.parentElement = function (el) { return el.parentNode; }; BrowserDomAdapter.prototype.childNodes = function (el) { return el.childNodes; }; BrowserDomAdapter.prototype.childNodesAsList = function (el) { var childNodes = el.childNodes; var res = 
ListWrapper.createFixedSize(childNodes.length); for (var i = 0; i < childNodes.length; i++) { res[i] = childNodes[i]; } return res; }; BrowserDomAdapter.prototype.clearNodes = function (el) { while (el.firstChild) { el.removeChild(el.firstChild); } }; BrowserDomAdapter.prototype.appendChild = function (el, node) { el.appendChild(node); }; BrowserDomAdapter.prototype.removeChild = function (el, node) { el.removeChild(node); }; BrowserDomAdapter.prototype.replaceChild = function (el, newChild, oldChild) { el.replaceChild(newChild, oldChild); }; BrowserDomAdapter.prototype.remove = function (node) { if (node.parentNode) { node.parentNode.removeChild(node); } return node; }; BrowserDomAdapter.prototype.insertBefore = function (el, node) { el.parentNode.insertBefore(node, el); }; BrowserDomAdapter.prototype.insertAllBefore = function (el, nodes) { nodes.forEach(function (n) { return el.parentNode.insertBefore(n, el); }); }; BrowserDomAdapter.prototype.insertAfter = function (el, node) { el.parentNode.insertBefore(node, el.nextSibling); }; BrowserDomAdapter.prototype.setInnerHTML = function (el, value) { el.innerHTML = value; }; BrowserDomAdapter.prototype.getText = function (el) { return el.textContent; }; // TODO(vicb): removed Element type because it does not support StyleElement BrowserDomAdapter.prototype.setText = function (el, value) { el.textContent = value; }; BrowserDomAdapter.prototype.getValue = function (el) { return el.value; }; BrowserDomAdapter.prototype.setValue = function (el, value) { el.value = value; }; BrowserDomAdapter.prototype.getChecked = function (el) { return el.checked; }; BrowserDomAdapter.prototype.setChecked = function (el, value) { el.checked = value; }; BrowserDomAdapter.prototype.createComment = function (text) { return document.createComment(text); }; BrowserDomAdapter.prototype.createTemplate = function (html) { var t = document.createElement('template'); t.innerHTML = html; return t; }; BrowserDomAdapter.prototype.createElement = 
function (tagName, doc) { if (doc === void 0) { doc = document; } return doc.createElement(tagName); }; BrowserDomAdapter.prototype.createElementNS = function (ns, tagName, doc) { if (doc === void 0) { doc = document; } return doc.createElementNS(ns, tagName); }; BrowserDomAdapter.prototype.createTextNode = function (text, doc) { if (doc === void 0) { doc = document; } return doc.createTextNode(text); }; BrowserDomAdapter.prototype.createScriptTag = function (attrName, attrValue, doc) { if (doc === void 0) { doc = document; } var el = doc.createElement('SCRIPT'); el.setAttribute(attrName, attrValue); return el; }; BrowserDomAdapter.prototype.createStyleElement = function (css, doc) { if (doc === void 0) { doc = document; } var style = doc.createElement('style'); this.appendChild(style, this.createTextNode(css)); return style; }; BrowserDomAdapter.prototype.createShadowRoot = function (el) { return el.createShadowRoot(); }; BrowserDomAdapter.prototype.getShadowRoot = function (el) { return el.shadowRoot; }; BrowserDomAdapter.prototype.getHost = function (el) { return el.host; }; BrowserDomAdapter.prototype.clone = function (node) { return node.cloneNode(true); }; BrowserDomAdapter.prototype.getElementsByClassName = function (element, name) { return element.getElementsByClassName(name); }; BrowserDomAdapter.prototype.getElementsByTagName = function (element, name) { return element.getElementsByTagName(name); }; BrowserDomAdapter.prototype.classList = function (element) { return Array.prototype.slice.call(element.classList, 0); }; BrowserDomAdapter.prototype.addClass = function (element, className) { element.classList.add(className); }; BrowserDomAdapter.prototype.removeClass = function (element, className) { element.classList.remove(className); }; BrowserDomAdapter.prototype.hasClass = function (element, className) { return element.classList.contains(className); }; BrowserDomAdapter.prototype.setStyle = function (element, styleName, styleValue) { 
element.style[styleName] = styleValue; }; BrowserDomAdapter.prototype.removeStyle = function (element, stylename) { element.style[stylename] = null; }; BrowserDomAdapter.prototype.getStyle = function (element, stylename) { return element.style[stylename]; }; BrowserDomAdapter.prototype.hasStyle = function (element, styleName, styleValue) { if (styleValue === void 0) { styleValue = null; } var value = this.getStyle(element, styleName) || ''; return styleValue ? value == styleValue : value.length > 0; }; BrowserDomAdapter.prototype.tagName = function (element) { return element.tagName; }; BrowserDomAdapter.prototype.attributeMap = function (element) { var res = new Map(); var elAttrs = element.attributes; for (var i = 0; i < elAttrs.length; i++) { var attrib = elAttrs[i]; res.set(attrib.name, attrib.value); } return res; }; BrowserDomAdapter.prototype.hasAttribute = function (element, attribute) { return element.hasAttribute(attribute); }; BrowserDomAdapter.prototype.hasAttributeNS = function (element, ns, attribute) { return element.hasAttributeNS(ns, attribute); }; BrowserDomAdapter.prototype.getAttribute = function (element, attribute) { return element.getAttribute(attribute); }; BrowserDomAdapter.prototype.getAttributeNS = function (element, ns, name) { return element.getAttributeNS(ns, name); }; BrowserDomAdapter.prototype.setAttribute = function (element, name, value) { element.setAttribute(name, value); }; BrowserDomAdapter.prototype.setAttributeNS = function (element, ns, name, value) { element.setAttributeNS(ns, name, value); }; BrowserDomAdapter.prototype.removeAttribute = function (element, attribute) { element.removeAttribute(attribute); }; BrowserDomAdapter.prototype.removeAttributeNS = function (element, ns, name) { element.removeAttributeNS(ns, name); }; BrowserDomAdapter.prototype.templateAwareRoot = function (el) { return this.isTemplateElement(el) ? 
this.content(el) : el; }; BrowserDomAdapter.prototype.createHtmlDocument = function () { return document.implementation.createHTMLDocument('fakeTitle'); }; BrowserDomAdapter.prototype.defaultDoc = function () { return document; }; BrowserDomAdapter.prototype.getBoundingClientRect = function (el) { try { return el.getBoundingClientRect(); } catch (e) { return { top: 0, bottom: 0, left: 0, right: 0, width: 0, height: 0 }; } }; BrowserDomAdapter.prototype.getTitle = function () { return document.title; }; BrowserDomAdapter.prototype.setTitle = function (newTitle) { document.title = newTitle || ''; }; BrowserDomAdapter.prototype.elementMatches = function (n, selector) { var matches = false; if (n instanceof HTMLElement) { if (n.matches) { matches = n.matches(selector); } else if (n.msMatchesSelector) { matches = n.msMatchesSelector(selector); } else if (n.webkitMatchesSelector) { matches = n.webkitMatchesSelector(selector); } } return matches; }; BrowserDomAdapter.prototype.isTemplateElement = function (el) { return el instanceof HTMLElement && el.nodeName == "TEMPLATE"; }; BrowserDomAdapter.prototype.isTextNode = function (node) { return node.nodeType === Node.TEXT_NODE; }; BrowserDomAdapter.prototype.isCommentNode = function (node) { return node.nodeType === Node.COMMENT_NODE; }; BrowserDomAdapter.prototype.isElementNode = function (node) { return node.nodeType === Node.ELEMENT_NODE; }; BrowserDomAdapter.prototype.hasShadowRoot = function (node) { return node instanceof HTMLElement && isPresent(node.shadowRoot); }; BrowserDomAdapter.prototype.isShadowRoot = function (node) { return node instanceof DocumentFragment; }; BrowserDomAdapter.prototype.importIntoDoc = function (node) { var toImport = node; if (this.isTemplateElement(node)) { toImport = this.content(node); } return document.importNode(toImport, true); }; BrowserDomAdapter.prototype.adoptNode = function (node) { return document.adoptNode(node); }; BrowserDomAdapter.prototype.getHref = function (el) { return 
el.href; }; BrowserDomAdapter.prototype.getEventKey = function (event) { var key = event.key; if (isBlank(key)) { key = event.keyIdentifier; // keyIdentifier is defined in the old draft of DOM Level 3 Events implemented by Chrome and // Safari // cf // http://www.w3.org/TR/2007/WD-DOM-Level-3-Events-20071221/events.html#Events-KeyboardEvents-Interfaces if (isBlank(key)) { return 'Unidentified'; } if (key.startsWith('U+')) { key = String.fromCharCode(parseInt(key.substring(2), 16)); if (event.location === DOM_KEY_LOCATION_NUMPAD && _chromeNumKeyPadMap.hasOwnProperty(key)) { // There is a bug in Chrome for numeric keypad keys: // https://code.google.com/p/chromium/issues/detail?id=155654 // 1, 2, 3 ... are reported as A, B, C ... key = _chromeNumKeyPadMap[key]; } } } if (_keyMap.hasOwnProperty(key)) { key = _keyMap[key]; } return key; }; BrowserDomAdapter.prototype.getGlobalEventTarget = function (target) { if (target == "window") { return window; } else if (target == "document") { return document; } else if (target == "body") { return document.body; } }; BrowserDomAdapter.prototype.getHistory = function () { return window.history; }; BrowserDomAdapter.prototype.getLocation = function () { return window.location; }; BrowserDomAdapter.prototype.getBaseHref = function () { var href = getBaseElementHref(); if (isBlank(href)) { return null; } return relativePath(href); }; BrowserDomAdapter.prototype.resetBaseElement = function () { baseElement = null; }; BrowserDomAdapter.prototype.getUserAgent = function () { return window.navigator.userAgent; }; BrowserDomAdapter.prototype.setData = function (element, name, value) { this.setAttribute(element, 'data-' + name, value); }; BrowserDomAdapter.prototype.getData = function (element, name) { return this.getAttribute(element, 'data-' + name); }; BrowserDomAdapter.prototype.getComputedStyle = function (element) { return getComputedStyle(element); }; // TODO(tbosch): move this into a separate environment class once we have it 
BrowserDomAdapter.prototype.setGlobalVar = function (path, value) { setValueOnPath(global$1, path, value); }; BrowserDomAdapter.prototype.requestAnimationFrame = function (callback) { return window.requestAnimationFrame(callback); }; BrowserDomAdapter.prototype.cancelAnimationFrame = function (id) { window.cancelAnimationFrame(id); }; BrowserDomAdapter.prototype.performanceNow = function () { // performance.now() is not available in all browsers, see // http://caniuse.com/#search=performance.now if (isPresent(window.performance) && isPresent(window.performance.now)) { return window.performance.now(); } else { return DateWrapper.toMillis(DateWrapper.now()); } }; return BrowserDomAdapter; }(GenericBrowserDomAdapter)); var baseElement = null; function getBaseElementHref() { if (isBlank(baseElement)) { baseElement = document.querySelector('base'); if (isBlank(baseElement)) { return null; } } return baseElement.getAttribute('href'); } // based on urlUtils.js in AngularJS 1 var urlParsingNode = null; function relativePath(url) { if (isBlank(urlParsingNode)) { urlParsingNode = document.createElement("a"); } urlParsingNode.setAttribute('href', url); return (urlParsingNode.pathname.charAt(0) === '/') ? 
urlParsingNode.pathname : '/' + urlParsingNode.pathname; } var PublicTestability = (function () { function PublicTestability(testability) { this._testability = testability; } PublicTestability.prototype.isStable = function () { return this._testability.isStable(); }; PublicTestability.prototype.whenStable = function (callback) { this._testability.whenStable(callback); }; PublicTestability.prototype.findBindings = function (using, provider, exactMatch) { return this.findProviders(using, provider, exactMatch); }; PublicTestability.prototype.findProviders = function (using, provider, exactMatch) { return this._testability.findBindings(using, provider, exactMatch); }; return PublicTestability; }()); var BrowserGetTestability = (function () { function BrowserGetTestability() { } BrowserGetTestability.init = function () { _angular_core.setTestabilityGetter(new BrowserGetTestability()); }; BrowserGetTestability.prototype.addToWindow = function (registry) { global$1.getAngularTestability = function (elem, findInAncestors) { if (findInAncestors === void 0) { findInAncestors = true; } var testability = registry.findTestabilityInTree(elem, findInAncestors); if (testability == null) { throw new Error('Could not find testability for element.'); } return new PublicTestability(testability); }; global$1.getAllAngularTestabilities = function () { var testabilities = registry.getAllTestabilities(); return testabilities.map(function (testability) { return new PublicTestability(testability); }); }; global$1.getAllAngularRootElements = function () { return registry.getAllRootElements(); }; var whenAllStable = function (callback) { var testabilities = global$1.getAllAngularTestabilities(); var count = testabilities.length; var didWork = false; var decrement = function (didWork_) { didWork = didWork || didWork_; count--; if (count == 0) { callback(didWork); } }; testabilities.forEach(function (testability) { testability.whenStable(decrement); }); }; if (!global$1.frameworkStabilizers) { 
global$1.frameworkStabilizers = ListWrapper.createGrowableSize(0); } global$1.frameworkStabilizers.push(whenAllStable); }; BrowserGetTestability.prototype.findTestabilityInTree = function (registry, elem, findInAncestors) { if (elem == null) { return null; } var t = registry.getTestability(elem); if (isPresent(t)) { return t; } else if (!findInAncestors) { return null; } if (getDOM().isShadowRoot(elem)) { return this.findTestabilityInTree(registry, getDOM().getHost(elem), true); } return this.findTestabilityInTree(registry, getDOM().parentElement(elem), true); }; return BrowserGetTestability; }()); /** * A DI Token representing the main rendering context. In a browser this is the DOM Document. * * Note: Document might not be available in the Application Context when Application and Rendering * Contexts are not the same (e.g. when running the application into a Web Worker). */ var DOCUMENT = new _angular_core.OpaqueToken('DocumentToken'); var BaseException = (function (_super) { __extends(BaseException, _super); function BaseException(message) { if (message === void 0) { message = "--"; } _super.call(this, message); this.message = message; this.stack = (new Error(message)).stack; } BaseException.prototype.toString = function () { return this.message; }; return BaseException; }(Error)); var EVENT_MANAGER_PLUGINS = /*@ts2dart_const*/ new _angular_core.OpaqueToken("EventManagerPlugins"); var EventManager = (function () { function EventManager(plugins, _zone) { var _this = this; this._zone = _zone; plugins.forEach(function (p) { return p.manager = _this; }); this._plugins = ListWrapper.reversed(plugins); } EventManager.prototype.addEventListener = function (element, eventName, handler) { var plugin = this._findPluginFor(eventName); return plugin.addEventListener(element, eventName, handler); }; EventManager.prototype.addGlobalEventListener = function (target, eventName, handler) { var plugin = this._findPluginFor(eventName); return plugin.addGlobalEventListener(target, 
eventName, handler); }; EventManager.prototype.getZone = function () { return this._zone; }; /** @internal */ EventManager.prototype._findPluginFor = function (eventName) { var plugins = this._plugins; for (var i = 0; i < plugins.length; i++) { var plugin = plugins[i]; if (plugin.supports(eventName)) { return plugin; } } throw new BaseException("No event manager plugin found for event " + eventName); }; return EventManager; }()); EventManager.decorators = [ { type: _angular_core.Injectable }, ]; EventManager.ctorParameters = [ { type: undefined, decorators: [{ type: _angular_core.Inject, args: [EVENT_MANAGER_PLUGINS,] },] }, { type: _angular_core.NgZone, }, ]; var EventManagerPlugin = (function () { function EventManagerPlugin() { } // That is equivalent to supporting $event.target EventManagerPlugin.prototype.supports = function (eventName) { return false; }; EventManagerPlugin.prototype.addEventListener = function (element, eventName, handler) { throw new Error('not implemented'); }; EventManagerPlugin.prototype.addGlobalEventListener = function (element, eventName, handler) { throw new Error('not implemented'); }; return EventManagerPlugin; }()); var CssAnimationOptions = (function () { function CssAnimationOptions() { /** classes to be added to the element */ this.classesToAdd = []; /** classes to be removed from the element */ this.classesToRemove = []; /** classes to be added for the duration of the animation */ this.animationClasses = []; } return CssAnimationOptions; }()); var Math$1 = global$1.Math; var CAMEL_CASE_REGEXP = /([A-Z])/g; function camelCaseToDashCase(input) { return StringWrapper.replaceAllMapped(input, CAMEL_CASE_REGEXP, function (m) { return '-' + m[1].toLowerCase(); }); } var Animation = (function () { /** * Stores the start time and starts the animation * @param element * @param data * @param browserDetails */ function Animation(element, data, browserDetails) { var _this = this; this.element = element; this.data = data; this.browserDetails =
browserDetails; /** functions to be called upon completion */ this.callbacks = []; /** functions for removing event listeners */ this.eventClearFunctions = []; /** flag used to track whether or not the animation has finished */ this.completed = false; this._stringPrefix = ''; this.startTime = DateWrapper.toMillis(DateWrapper.now()); this._stringPrefix = getDOM().getAnimationPrefix(); this.setup(); this.wait(function (timestamp) { return _this.start(); }); } Object.defineProperty(Animation.prototype, "totalTime", { /** total amount of time that the animation should take including delay */ get: function () { var delay = this.computedDelay != null ? this.computedDelay : 0; var duration = this.computedDuration != null ? this.computedDuration : 0; return delay + duration; }, enumerable: true, configurable: true }); Animation.prototype.wait = function (callback) { // Firefox requires 2 frames for some reason this.browserDetails.raf(callback, 2); }; /** * Sets up the initial styles before the animation is started */ Animation.prototype.setup = function () { if (this.data.fromStyles != null) this.applyStyles(this.data.fromStyles); if (this.data.duration != null) this.applyStyles({ 'transitionDuration': this.data.duration.toString() + 'ms' }); if (this.data.delay != null) this.applyStyles({ 'transitionDelay': this.data.delay.toString() + 'ms' }); }; /** * After the initial setup has occurred, this method adds the animation styles */ Animation.prototype.start = function () { this.addClasses(this.data.classesToAdd); this.addClasses(this.data.animationClasses); this.removeClasses(this.data.classesToRemove); if (this.data.toStyles != null) this.applyStyles(this.data.toStyles); var computedStyles = getDOM().getComputedStyle(this.element); this.computedDelay = Math$1.max(this.parseDurationString(computedStyles.getPropertyValue(this._stringPrefix + 'transition-delay')), this.parseDurationString(this.element.style.getPropertyValue(this._stringPrefix + 'transition-delay'))); 
this.computedDuration = Math$1.max(this.parseDurationString(computedStyles.getPropertyValue(this._stringPrefix + 'transition-duration')), this.parseDurationString(this.element.style.getPropertyValue(this._stringPrefix + 'transition-duration'))); this.addEvents(); }; /** * Applies the provided styles to the element * @param styles */ Animation.prototype.applyStyles = function (styles) { var _this = this; StringMapWrapper.forEach(styles, function (value, key) { var dashCaseKey = camelCaseToDashCase(key); if (isPresent(getDOM().getStyle(_this.element, dashCaseKey))) { getDOM().setStyle(_this.element, dashCaseKey, value.toString()); } else { getDOM().setStyle(_this.element, _this._stringPrefix + dashCaseKey, value.toString()); } }); }; /** * Adds the provided classes to the element * @param classes */ Animation.prototype.addClasses = function (classes) { for (var i = 0, len = classes.length; i < len; i++) getDOM().addClass(this.element, classes[i]); }; /** * Removes the provided classes from the element * @param classes */ Animation.prototype.removeClasses = function (classes) { for (var i = 0, len = classes.length; i < len; i++) getDOM().removeClass(this.element, classes[i]); }; /** * Adds events to track when animations have finished */ Animation.prototype.addEvents = function () { var _this = this; if (this.totalTime > 0) { this.eventClearFunctions.push(getDOM().onAndCancel(this.element, getDOM().getTransitionEnd(), function (event) { return _this.handleAnimationEvent(event); })); } else { this.handleAnimationCompleted(); } }; Animation.prototype.handleAnimationEvent = function (event) { var elapsedTime = Math$1.round(event.elapsedTime * 1000); if (!this.browserDetails.elapsedTimeIncludesDelay) elapsedTime += this.computedDelay; event.stopPropagation(); if (elapsedTime >= this.totalTime) this.handleAnimationCompleted(); }; /** * Runs all animation callbacks and removes temporary classes */ Animation.prototype.handleAnimationCompleted = function () { 
this.removeClasses(this.data.animationClasses); this.callbacks.forEach(function (callback) { return callback(); }); this.callbacks = []; this.eventClearFunctions.forEach(function (fn) { return fn(); }); this.eventClearFunctions = []; this.completed = true; }; /** * Adds animation callbacks to be called upon completion * @param callback * @returns {Animation} */ Animation.prototype.onComplete = function (callback) { if (this.completed) { callback(); } else { this.callbacks.push(callback); } return this; }; /** * Converts the duration string to the number of milliseconds * @param duration * @returns {number} */ Animation.prototype.parseDurationString = function (duration) { var maxValue = 0; // duration must have at least 2 characters to be valid. (number + type) if (duration == null || duration.length < 2) { return maxValue; } else if (duration.substring(duration.length - 2) == 'ms') { var value = NumberWrapper.parseInt(this.stripLetters(duration), 10); if (value > maxValue) maxValue = value; } else if (duration.substring(duration.length - 1) == 's') { var ms = NumberWrapper.parseFloat(this.stripLetters(duration)) * 1000; var value = Math$1.floor(ms); if (value > maxValue) maxValue = value; } return maxValue; }; /** * Strips the letters from the duration string * @param str * @returns {string} */ Animation.prototype.stripLetters = function (str) { return StringWrapper.replaceAll(str, RegExpWrapper.create('[^0-9]+$', ''), ''); }; return Animation; }()); var CssAnimationBuilder = (function () { /** * Accepts public properties for CssAnimationBuilder */ function CssAnimationBuilder(browserDetails) { this.browserDetails = browserDetails; /** @type {CssAnimationOptions} */ this.data = new CssAnimationOptions(); } /** * Adds a temporary class that will be removed at the end of the animation * @param className */ CssAnimationBuilder.prototype.addAnimationClass = function (className) { this.data.animationClasses.push(className); return this; }; /** * Adds a class that will 
remain on the element after the animation has finished * @param className */ CssAnimationBuilder.prototype.addClass = function (className) { this.data.classesToAdd.push(className); return this; }; /** * Removes a class from the element * @param className */ CssAnimationBuilder.prototype.removeClass = function (className) { this.data.classesToRemove.push(className); return this; }; /** * Sets the animation duration (and overrides any defined through CSS) * @param duration */ CssAnimationBuilder.prototype.setDuration = function (duration) { this.data.duration = duration; return this; }; /** * Sets the animation delay (and overrides any defined through CSS) * @param delay */ CssAnimationBuilder.prototype.setDelay = function (delay) { this.data.delay = delay; return this; }; /** * Sets styles for both the initial state and the destination state * @param from * @param to */ CssAnimationBuilder.prototype.setStyles = function (from, to) { return this.setFromStyles(from).setToStyles(to); }; /** * Sets the initial styles for the animation * @param from */ CssAnimationBuilder.prototype.setFromStyles = function (from) { this.data.fromStyles = from; return this; }; /** * Sets the destination styles for the animation * @param to */ CssAnimationBuilder.prototype.setToStyles = function (to) { this.data.toStyles = to; return this; }; /** * Starts the animation and returns a promise * @param element */ CssAnimationBuilder.prototype.start = function (element) { return new Animation(element, this.data, this.browserDetails); }; return CssAnimationBuilder; }()); var BrowserDetails = (function () { function BrowserDetails() { this.elapsedTimeIncludesDelay = false; this.doesElapsedTimeIncludesDelay(); } /** * Determines if `event.elapsedTime` includes transition delay in the current browser. At this * time, Chrome and Opera seem to be the only browsers that include this. 
*/ BrowserDetails.prototype.doesElapsedTimeIncludesDelay = function () { var _this = this; var div = getDOM().createElement('div'); getDOM().setAttribute(div, 'style', "position: absolute; top: -9999px; left: -9999px; width: 1px;\n height: 1px; transition: all 1ms linear 1ms;"); // Firefox requires that we wait for 2 frames for some reason this.raf(function (timestamp) { getDOM().on(div, 'transitionend', function (event) { var elapsed = Math$1.round(event.elapsedTime * 1000); _this.elapsedTimeIncludesDelay = elapsed == 2; getDOM().remove(div); }); getDOM().setStyle(div, 'width', '2px'); }, 2); }; BrowserDetails.prototype.raf = function (callback, frames) { if (frames === void 0) { frames = 1; } var queue = new RafQueue(callback, frames); return function () { return queue.cancel(); }; }; return BrowserDetails; }()); BrowserDetails.decorators = [ { type: _angular_core.Injectable }, ]; BrowserDetails.ctorParameters = []; var RafQueue = (function () { function RafQueue(callback, frames) { this.callback = callback; this.frames = frames; this._raf(); } RafQueue.prototype._raf = function () { var _this = this; this.currentFrameId = getDOM().requestAnimationFrame(function (timestamp) { return _this._nextFrame(timestamp); }); }; RafQueue.prototype._nextFrame = function (timestamp) { this.frames--; if (this.frames > 0) { this._raf(); } else { this.callback(timestamp); } }; RafQueue.prototype.cancel = function () { getDOM().cancelAnimationFrame(this.currentFrameId); this.currentFrameId = null; }; return RafQueue; }()); var AnimationBuilder = (function () { /** * Used for DI * @param browserDetails */ function AnimationBuilder(browserDetails) { this.browserDetails = browserDetails; } /** * Creates a new CSS Animation * @returns {CssAnimationBuilder} */ AnimationBuilder.prototype.css = function () { return new CssAnimationBuilder(this.browserDetails); }; return AnimationBuilder; }()); AnimationBuilder.decorators = [ { type: _angular_core.Injectable }, ]; 
AnimationBuilder.ctorParameters = [ { type: BrowserDetails, }, ]; var SharedStylesHost = (function () { function SharedStylesHost() { /** @internal */ this._styles = []; /** @internal */ this._stylesSet = new Set(); } SharedStylesHost.prototype.addStyles = function (styles) { var _this = this; var additions = []; styles.forEach(function (style) { if (!SetWrapper.has(_this._stylesSet, style)) { _this._stylesSet.add(style); _this._styles.push(style); additions.push(style); } }); this.onStylesAdded(additions); }; SharedStylesHost.prototype.onStylesAdded = function (additions) { }; SharedStylesHost.prototype.getAllStyles = function () { return this._styles; }; return SharedStylesHost; }()); SharedStylesHost.decorators = [ { type: _angular_core.Injectable }, ]; SharedStylesHost.ctorParameters = []; var DomSharedStylesHost = (function (_super) { __extends(DomSharedStylesHost, _super); function DomSharedStylesHost(doc) { _super.call(this); this._hostNodes = new Set(); this._hostNodes.add(doc.head); } /** @internal */ DomSharedStylesHost.prototype._addStylesToHost = function (styles, host) { for (var i = 0; i < styles.length; i++) { var style = styles[i]; getDOM().appendChild(host, getDOM().createStyleElement(style)); } }; DomSharedStylesHost.prototype.addHost = function (hostNode) { this._addStylesToHost(this._styles, hostNode); this._hostNodes.add(hostNode); }; DomSharedStylesHost.prototype.removeHost = function (hostNode) { SetWrapper.delete(this._hostNodes, hostNode); }; DomSharedStylesHost.prototype.onStylesAdded = function (additions) { var _this = this; this._hostNodes.forEach(function (hostNode) { _this._addStylesToHost(additions, hostNode); }); }; return DomSharedStylesHost; }(SharedStylesHost)); DomSharedStylesHost.decorators = [ { type: _angular_core.Injectable }, ]; DomSharedStylesHost.ctorParameters = [ { type: undefined, decorators: [{ type: _angular_core.Inject, args: [DOCUMENT,] },] }, ]; var NAMESPACE_URIS = /*@ts2dart_const*/ { 'xlink': 
'http://www.w3.org/1999/xlink', 'svg': 'http://www.w3.org/2000/svg' }; var TEMPLATE_COMMENT_TEXT = 'template bindings={}'; var TEMPLATE_BINDINGS_EXP = /^template bindings=(.*)$/g; var DomRootRenderer = (function () { function DomRootRenderer(document, eventManager, sharedStylesHost, animate) { this.document = document; this.eventManager = eventManager; this.sharedStylesHost = sharedStylesHost; this.animate = animate; this._registeredComponents = new Map(); } DomRootRenderer.prototype.renderComponent = function (componentProto) { var renderer = this._registeredComponents.get(componentProto.id); if (isBlank(renderer)) { renderer = new DomRenderer(this, componentProto); this._registeredComponents.set(componentProto.id, renderer); } return renderer; }; return DomRootRenderer; }()); var DomRootRenderer_ = (function (_super) { __extends(DomRootRenderer_, _super); function DomRootRenderer_(_document, _eventManager, sharedStylesHost, animate) { _super.call(this, _document, _eventManager, sharedStylesHost, animate); } return DomRootRenderer_; }(DomRootRenderer)); DomRootRenderer_.decorators = [ { type: _angular_core.Injectable }, ]; DomRootRenderer_.ctorParameters = [ { type: undefined, decorators: [{ type: _angular_core.Inject, args: [DOCUMENT,] },] }, { type: EventManager, }, { type: DomSharedStylesHost, }, { type: AnimationBuilder, }, ]; var DomRenderer = (function () { function DomRenderer(_rootRenderer, componentProto) { this._rootRenderer = _rootRenderer; this.componentProto = componentProto; this._styles = _flattenStyles(componentProto.id, componentProto.styles, []); if (componentProto.encapsulation !== _angular_core.ViewEncapsulation.Native) { this._rootRenderer.sharedStylesHost.addStyles(this._styles); } if (this.componentProto.encapsulation === _angular_core.ViewEncapsulation.Emulated) { this._contentAttr = _shimContentAttribute(componentProto.id); this._hostAttr = _shimHostAttribute(componentProto.id); } else { this._contentAttr = null; this._hostAttr = null; } } 
DomRenderer.prototype.selectRootElement = function (selectorOrNode, debugInfo) { var el; if (isString(selectorOrNode)) { el = getDOM().querySelector(this._rootRenderer.document, selectorOrNode); if (isBlank(el)) { throw new BaseException("The selector \"" + selectorOrNode + "\" did not match any elements"); } } else { el = selectorOrNode; } getDOM().clearNodes(el); return el; }; DomRenderer.prototype.createElement = function (parent, name, debugInfo) { var nsAndName = splitNamespace(name); var el = isPresent(nsAndName[0]) ? getDOM().createElementNS(NAMESPACE_URIS[nsAndName[0]], nsAndName[1]) : getDOM().createElement(nsAndName[1]); if (isPresent(this._contentAttr)) { getDOM().setAttribute(el, this._contentAttr, ''); } if (isPresent(parent)) { getDOM().appendChild(parent, el); } return el; }; DomRenderer.prototype.createViewRoot = function (hostElement) { var nodesParent; if (this.componentProto.encapsulation === _angular_core.ViewEncapsulation.Native) { nodesParent = getDOM().createShadowRoot(hostElement); this._rootRenderer.sharedStylesHost.addHost(nodesParent); for (var i = 0; i < this._styles.length; i++) { getDOM().appendChild(nodesParent, getDOM().createStyleElement(this._styles[i])); } } else { if (isPresent(this._hostAttr)) { getDOM().setAttribute(hostElement, this._hostAttr, ''); } nodesParent = hostElement; } return nodesParent; }; DomRenderer.prototype.createTemplateAnchor = function (parentElement, debugInfo) { var comment = getDOM().createComment(TEMPLATE_COMMENT_TEXT); if (isPresent(parentElement)) { getDOM().appendChild(parentElement, comment); } return comment; }; DomRenderer.prototype.createText = function (parentElement, value, debugInfo) { var node = getDOM().createTextNode(value); if (isPresent(parentElement)) { getDOM().appendChild(parentElement, node); } return node; }; DomRenderer.prototype.projectNodes = function (parentElement, nodes) { if (isBlank(parentElement)) return; appendNodes(parentElement, nodes); }; 
DomRenderer.prototype.attachViewAfter = function (node, viewRootNodes) { moveNodesAfterSibling(node, viewRootNodes); for (var i = 0; i < viewRootNodes.length; i++) this.animateNodeEnter(viewRootNodes[i]); }; DomRenderer.prototype.detachView = function (viewRootNodes) { for (var i = 0; i < viewRootNodes.length; i++) { var node = viewRootNodes[i]; getDOM().remove(node); this.animateNodeLeave(node); } }; DomRenderer.prototype.destroyView = function (hostElement, viewAllNodes) { if (this.componentProto.encapsulation === _angular_core.ViewEncapsulation.Native && isPresent(hostElement)) { this._rootRenderer.sharedStylesHost.removeHost(getDOM().getShadowRoot(hostElement)); } }; DomRenderer.prototype.listen = function (renderElement, name, callback) { return this._rootRenderer.eventManager.addEventListener(renderElement, name, decoratePreventDefault(callback)); }; DomRenderer.prototype.listenGlobal = function (target, name, callback) { return this._rootRenderer.eventManager.addGlobalEventListener(target, name, decoratePreventDefault(callback)); }; DomRenderer.prototype.setElementProperty = function (renderElement, propertyName, propertyValue) { getDOM().setProperty(renderElement, propertyName, propertyValue); }; DomRenderer.prototype.setElementAttribute = function (renderElement, attributeName, attributeValue) { var attrNs; var nsAndName = splitNamespace(attributeName); if (isPresent(nsAndName[0])) { attributeName = nsAndName[0] + ':' + nsAndName[1]; attrNs = NAMESPACE_URIS[nsAndName[0]]; } if (isPresent(attributeValue)) { if (isPresent(attrNs)) { getDOM().setAttributeNS(renderElement, attrNs, attributeName, attributeValue); } else { getDOM().setAttribute(renderElement, attributeName, attributeValue); } } else { if (isPresent(attrNs)) { getDOM().removeAttributeNS(renderElement, attrNs, nsAndName[1]); } else { getDOM().removeAttribute(renderElement, attributeName); } } }; DomRenderer.prototype.setBindingDebugInfo = function (renderElement, propertyName, propertyValue) { var 
dashCasedPropertyName = camelCaseToDashCase(propertyName); if (getDOM().isCommentNode(renderElement)) { var existingBindings = RegExpWrapper.firstMatch(TEMPLATE_BINDINGS_EXP, StringWrapper.replaceAll(getDOM().getText(renderElement), /\n/g, '')); var parsedBindings = Json.parse(existingBindings[1]); parsedBindings[dashCasedPropertyName] = propertyValue; getDOM().setText(renderElement, StringWrapper.replace(TEMPLATE_COMMENT_TEXT, '{}', Json.stringify(parsedBindings))); } else { this.setElementAttribute(renderElement, propertyName, propertyValue); } }; DomRenderer.prototype.setElementClass = function (renderElement, className, isAdd) { if (isAdd) { getDOM().addClass(renderElement, className); } else { getDOM().removeClass(renderElement, className); } }; DomRenderer.prototype.setElementStyle = function (renderElement, styleName, styleValue) { if (isPresent(styleValue)) { getDOM().setStyle(renderElement, styleName, stringify(styleValue)); } else { getDOM().removeStyle(renderElement, styleName); } }; DomRenderer.prototype.invokeElementMethod = function (renderElement, methodName, args) { getDOM().invoke(renderElement, methodName, args); }; DomRenderer.prototype.setText = function (renderNode, text) { getDOM().setText(renderNode, text); }; /** * Performs animations if necessary * @param node */ DomRenderer.prototype.animateNodeEnter = function (node) { if (getDOM().isElementNode(node) && getDOM().hasClass(node, 'ng-animate')) { getDOM().addClass(node, 'ng-enter'); this._rootRenderer.animate.css() .addAnimationClass('ng-enter-active') .start(node) .onComplete(function () { getDOM().removeClass(node, 'ng-enter'); }); } }; /** * If animations are necessary, performs animations then removes the element; otherwise, it just * removes the element. 
* @param node */ DomRenderer.prototype.animateNodeLeave = function (node) { if (getDOM().isElementNode(node) && getDOM().hasClass(node, 'ng-animate')) { getDOM().addClass(node, 'ng-leave'); this._rootRenderer.animate.css() .addAnimationClass('ng-leave-active') .start(node) .onComplete(function () { getDOM().removeClass(node, 'ng-leave'); getDOM().remove(node); }); } else { getDOM().remove(node); } }; return DomRenderer; }()); function moveNodesAfterSibling(sibling, nodes) { var parent = getDOM().parentElement(sibling); if (nodes.length > 0 && isPresent(parent)) { var nextSibling = getDOM().nextSibling(sibling); if (isPresent(nextSibling)) { for (var i = 0; i < nodes.length; i++) { getDOM().insertBefore(nextSibling, nodes[i]); } } else { for (var i = 0; i < nodes.length; i++) { getDOM().appendChild(parent, nodes[i]); } } } } function appendNodes(parent, nodes) { for (var i = 0; i < nodes.length; i++) { getDOM().appendChild(parent, nodes[i]); } } function decoratePreventDefault(eventHandler) { return function (event) { var allowDefaultBehavior = eventHandler(event); if (allowDefaultBehavior === false) { // TODO(tbosch): move preventDefault into event plugins... 
getDOM().preventDefault(event); } }; } var COMPONENT_REGEX = /%COMP%/g; var COMPONENT_VARIABLE = '%COMP%'; var HOST_ATTR = "_nghost-" + COMPONENT_VARIABLE; var CONTENT_ATTR = "_ngcontent-" + COMPONENT_VARIABLE; function _shimContentAttribute(componentShortId) { return StringWrapper.replaceAll(CONTENT_ATTR, COMPONENT_REGEX, componentShortId); } function _shimHostAttribute(componentShortId) { return StringWrapper.replaceAll(HOST_ATTR, COMPONENT_REGEX, componentShortId); } function _flattenStyles(compId, styles, target) { for (var i = 0; i < styles.length; i++) { var style = styles[i]; if (isArray(style)) { _flattenStyles(compId, style, target); } else { style = StringWrapper.replaceAll(style, COMPONENT_REGEX, compId); target.push(style); } } return target; } var NS_PREFIX_RE = /^@([^:]+):(.+)/g; function splitNamespace(name) { if (name[0] != '@') { return [null, name]; } var match = RegExpWrapper.firstMatch(NS_PREFIX_RE, name); return [match[1], match[2]]; } var modifierKeys = ['alt', 'control', 'meta', 'shift']; var modifierKeyGetters = { 'alt': function (event) { return event.altKey; }, 'control': function (event) { return event.ctrlKey; }, 'meta': function (event) { return event.metaKey; }, 'shift': function (event) { return event.shiftKey; } }; var KeyEventsPlugin = (function (_super) { __extends(KeyEventsPlugin, _super); function KeyEventsPlugin() { _super.call(this); } KeyEventsPlugin.prototype.supports = function (eventName) { return isPresent(KeyEventsPlugin.parseEventName(eventName)); }; KeyEventsPlugin.prototype.addEventListener = function (element, eventName, handler) { var parsedEvent = KeyEventsPlugin.parseEventName(eventName); var outsideHandler = KeyEventsPlugin.eventCallback(element, StringMapWrapper.get(parsedEvent, 'fullKey'), handler, this.manager.getZone()); return this.manager.getZone().runOutsideAngular(function () { return getDOM().onAndCancel(element, StringMapWrapper.get(parsedEvent, 'domEventName'), outsideHandler); }); }; 
KeyEventsPlugin.parseEventName = function (eventName) { var parts = eventName.toLowerCase().split('.'); var domEventName = parts.shift(); if ((parts.length === 0) || !(StringWrapper.equals(domEventName, 'keydown') || StringWrapper.equals(domEventName, 'keyup'))) { return null; } var key = KeyEventsPlugin._normalizeKey(parts.pop()); var fullKey = ''; modifierKeys.forEach(function (modifierName) { if (ListWrapper.contains(parts, modifierName)) { ListWrapper.remove(parts, modifierName); fullKey += modifierName + '.'; } }); fullKey += key; if (parts.length != 0 || key.length === 0) { // returning null instead of throwing to let another plugin process the event return null; } var result = StringMapWrapper.create(); StringMapWrapper.set(result, 'domEventName', domEventName); StringMapWrapper.set(result, 'fullKey', fullKey); return result; }; KeyEventsPlugin.getEventFullKey = function (event) { var fullKey = ''; var key = getDOM().getEventKey(event); key = key.toLowerCase(); if (StringWrapper.equals(key, ' ')) { key = 'space'; // for readability } else if (StringWrapper.equals(key, '.')) { key = 'dot'; // because '.' 
is used as a separator in event names } modifierKeys.forEach(function (modifierName) { if (modifierName != key) { var modifierGetter = StringMapWrapper.get(modifierKeyGetters, modifierName); if (modifierGetter(event)) { fullKey += modifierName + '.'; } } }); fullKey += key; return fullKey; }; KeyEventsPlugin.eventCallback = function (element, fullKey, handler, zone) { return function (event) { if (StringWrapper.equals(KeyEventsPlugin.getEventFullKey(event), fullKey)) { zone.runGuarded(function () { return handler(event); }); } }; }; /** @internal */ KeyEventsPlugin._normalizeKey = function (keyName) { // TODO: switch to a StringMap if the mapping grows too much switch (keyName) { case 'esc': return 'escape'; default: return keyName; } }; return KeyEventsPlugin; }(EventManagerPlugin)); KeyEventsPlugin.decorators = [ { type: _angular_core.Injectable }, ]; KeyEventsPlugin.ctorParameters = []; var CORE_TOKENS = { 'ApplicationRef': _angular_core.ApplicationRef, 'NgZone': _angular_core.NgZone }; var INSPECT_GLOBAL_NAME = 'ng.probe'; var CORE_TOKENS_GLOBAL_NAME = 'ng.coreTokens'; /** * Returns a {@link DebugElement} for the given native DOM element, or * null if the given native element does not have an Angular view associated * with it. */ function inspectNativeElement(element) { return _angular_core.getDebugNode(element); } function _createConditionalRootRenderer(rootRenderer) { if (assertionsEnabled()) { return _createRootRenderer(rootRenderer); } return rootRenderer; } function _createRootRenderer(rootRenderer) { getDOM().setGlobalVar(INSPECT_GLOBAL_NAME, inspectNativeElement); getDOM().setGlobalVar(CORE_TOKENS_GLOBAL_NAME, CORE_TOKENS); return new DebugDomRootRenderer(rootRenderer); } /** * Providers which support debugging Angular applications (e.g. via `ng.probe`). 
*/ var ELEMENT_PROBE_PROVIDERS = [ /*@ts2dart_Provider*/ { provide: _angular_core.RootRenderer, useFactory: _createConditionalRootRenderer, deps: [DomRootRenderer] } ]; var DomEventsPlugin = (function (_super) { __extends(DomEventsPlugin, _super); function DomEventsPlugin() { _super.apply(this, arguments); } // This plugin should come last in the list of plugins, because it accepts all // events. DomEventsPlugin.prototype.supports = function (eventName) { return true; }; DomEventsPlugin.prototype.addEventListener = function (element, eventName, handler) { var zone = this.manager.getZone(); var outsideHandler = function (event) { return zone.runGuarded(function () { return handler(event); }); }; return this.manager.getZone().runOutsideAngular(function () { return getDOM().onAndCancel(element, eventName, outsideHandler); }); }; DomEventsPlugin.prototype.addGlobalEventListener = function (target, eventName, handler) { var element = getDOM().getGlobalEventTarget(target); var zone = this.manager.getZone(); var outsideHandler = function (event) { return zone.runGuarded(function () { return handler(event); }); }; return this.manager.getZone().runOutsideAngular(function () { return getDOM().onAndCancel(element, eventName, outsideHandler); }); }; return DomEventsPlugin; }(EventManagerPlugin)); DomEventsPlugin.decorators = [ { type: _angular_core.Injectable }, ]; var _eventNames = { // pan 'pan': true, 'panstart': true, 'panmove': true, 'panend': true, 'pancancel': true, 'panleft': true, 'panright': true, 'panup': true, 'pandown': true, // pinch 'pinch': true, 'pinchstart': true, 'pinchmove': true, 'pinchend': true, 'pinchcancel': true, 'pinchin': true, 'pinchout': true, // press 'press': true, 'pressup': true, // rotate 'rotate': true, 'rotatestart': true, 'rotatemove': true, 'rotateend': true, 'rotatecancel': true, // swipe 'swipe': true, 'swipeleft': true, 'swiperight': true, 'swipeup': true, 'swipedown': true, // tap 'tap': true, }; var HammerGesturesPluginCommon = 
(function (_super) { __extends(HammerGesturesPluginCommon, _super); function HammerGesturesPluginCommon() { _super.call(this); } HammerGesturesPluginCommon.prototype.supports = function (eventName) { eventName = eventName.toLowerCase(); return StringMapWrapper.contains(_eventNames, eventName); }; return HammerGesturesPluginCommon; }(EventManagerPlugin)); var HAMMER_GESTURE_CONFIG = /*@ts2dart_const*/ new _angular_core.OpaqueToken("HammerGestureConfig"); var HammerGestureConfig = (function () { function HammerGestureConfig() { this.events = []; this.overrides = {}; } HammerGestureConfig.prototype.buildHammer = function (element) { var mc = new Hammer(element); mc.get('pinch').set({ enable: true }); mc.get('rotate').set({ enable: true }); for (var eventName in this.overrides) { mc.get(eventName).set(this.overrides[eventName]); } return mc; }; return HammerGestureConfig; }()); HammerGestureConfig.decorators = [ { type: _angular_core.Injectable }, ]; var HammerGesturesPlugin = (function (_super) { __extends(HammerGesturesPlugin, _super); function HammerGesturesPlugin(_config) { _super.call(this); this._config = _config; } HammerGesturesPlugin.prototype.supports = function (eventName) { if (!_super.prototype.supports.call(this, eventName) && !this.isCustomEvent(eventName)) return false; if (!isPresent(window['Hammer'])) { throw new BaseException("Hammer.js is not loaded, can not bind " + eventName + " event"); } return true; }; HammerGesturesPlugin.prototype.addEventListener = function (element, eventName, handler) { var _this = this; var zone = this.manager.getZone(); eventName = eventName.toLowerCase(); return zone.runOutsideAngular(function () { // Creating the manager bind events, must be done outside of angular var mc = _this._config.buildHammer(element); var callback = function (eventObj) { zone.runGuarded(function () { handler(eventObj); }); }; mc.on(eventName, callback); return function () { mc.off(eventName, callback); }; }); }; 
HammerGesturesPlugin.prototype.isCustomEvent = function (eventName) { return this._config.events.indexOf(eventName) > -1; }; return HammerGesturesPlugin; }(HammerGesturesPluginCommon)); HammerGesturesPlugin.decorators = [ { type: _angular_core.Injectable }, ]; HammerGesturesPlugin.ctorParameters = [ { type: HammerGestureConfig, decorators: [{ type: _angular_core.Inject, args: [HAMMER_GESTURE_CONFIG,] },] }, ]; /** * A service that can be used to get and set the title of a current HTML document. * * Since an Angular 2 application can't be bootstrapped on the entire HTML document (`<html>` tag) * it is not possible to bind to the `text` property of the `HTMLTitleElement` elements * (representing the `<title>` tag). Instead, this service can be used to set and get the current * title value. */ var Title = (function () { function Title() { } /** * Get the title of the current HTML document. * @returns {string} */ Title.prototype.getTitle = function () { return getDOM().getTitle(); }; /** * Set the title of the current HTML document. * @param newTitle */ Title.prototype.setTitle = function (newTitle) { getDOM().setTitle(newTitle); }; return Title; }()); /** * JS version of browser APIs. This library can only run in the browser. */ var win = typeof window !== 'undefined' && window || {}; var ChangeDetectionPerfRecord = (function () { function ChangeDetectionPerfRecord(msPerTick, numTicks) { this.msPerTick = msPerTick; this.numTicks = numTicks; } return ChangeDetectionPerfRecord; }()); /** * Entry point for all Angular debug tools. This object corresponds to the `ng` * global variable accessible in the dev console. */ var AngularTools = (function () { function AngularTools(ref) { this.profiler = new AngularProfiler(ref); } return AngularTools; }()); /** * Entry point for all Angular profiling-related debug tools. This object * corresponds to the `ng.profiler` in the dev console. 
*/ var AngularProfiler = (function () { function AngularProfiler(ref) { this.appRef = ref.injector.get(_angular_core.ApplicationRef); } /** * Exercises change detection in a loop and then prints the average amount of * time in milliseconds how long a single round of change detection takes for * the current state of the UI. It runs a minimum of 5 rounds for a minimum * of 500 milliseconds. * * Optionally, a user may pass a `config` parameter containing a map of * options. Supported options are: * * `record` (boolean) - causes the profiler to record a CPU profile while * it exercises the change detector. Example: * * ``` * ng.profiler.timeChangeDetection({record: true}) * ``` */ AngularProfiler.prototype.timeChangeDetection = function (config) { var record = isPresent(config) && config['record']; var profileName = 'Change Detection'; // Profiler is not available in Android browsers, nor in IE 9 without dev tools opened var isProfilerAvailable = isPresent(win.console.profile); if (record && isProfilerAvailable) { win.console.profile(profileName); } var start = getDOM().performanceNow(); var numTicks = 0; while (numTicks < 5 || (getDOM().performanceNow() - start) < 500) { this.appRef.tick(); numTicks++; } var end = getDOM().performanceNow(); if (record && isProfilerAvailable) { // need to cast to <any> because type checker thinks there's no argument // while in fact there is: // // https://developer.mozilla.org/en-US/docs/Web/API/Console/profileEnd win.console.profileEnd(profileName); } var msPerTick = (end - start) / numTicks; win.console.log("ran " + numTicks + " change detection cycles"); win.console.log(NumberWrapper.toFixed(msPerTick, 2) + " ms per check"); return new ChangeDetectionPerfRecord(msPerTick, numTicks); }; return AngularProfiler; }()); var context = global$1; /** * Enabled Angular 2 debug tools that are accessible via your browser's * developer console. * * Usage: * * 1. Open developer console (e.g. in Chrome Ctrl + Shift + j) * 1. 
Type `ng.` (usually the console will show auto-complete suggestion) * 1. Try the change detection profiler `ng.profiler.timeChangeDetection()` * then hit Enter. */ function enableDebugTools(ref) { context.ng = new AngularTools(ref); } /** * Disables Angular 2 tools. */ function disableDebugTools() { delete context.ng; } /** * Predicates for use with {@link DebugElement}'s query functions. */ var By = (function () { function By() { } /** * Match all elements. * * ## Example * * {@example platform/dom/debug/ts/by/by.ts region='by_all'} */ By.all = function () { return function (debugElement) { return true; }; }; /** * Match elements by the given CSS selector. * * ## Example * * {@example platform/dom/debug/ts/by/by.ts region='by_css'} */ By.css = function (selector) { return function (debugElement) { return isPresent(debugElement.nativeElement) ? getDOM().elementMatches(debugElement.nativeElement, selector) : false; }; }; /** * Match elements that have the given directive present. * * ## Example * * {@example platform/dom/debug/ts/by/by.ts region='by_directive'} */ By.directive = function (type) { return function (debugElement) { return debugElement.providerTokens.indexOf(type) !== -1; }; }; return By; }()); var BROWSER_PLATFORM_MARKER = /*@ts2dart_const*/ new _angular_core.OpaqueToken('BrowserPlatformMarker'); /** * A set of providers to initialize the Angular platform in a web browser. * * Used automatically by `bootstrap`, or can be passed to {@link platform}. 
*/ var BROWSER_PROVIDERS = [ /*@ts2dart_Provider*/ { provide: BROWSER_PLATFORM_MARKER, useValue: true }, _angular_core.PLATFORM_COMMON_PROVIDERS, /*@ts2dart_Provider*/ { provide: _angular_core.PLATFORM_INITIALIZER, useValue: initDomAdapter, multi: true }, ]; function _exceptionHandler() { // !IS_DART is required because we must rethrow exceptions in JS, // but must not rethrow exceptions in Dart return new _angular_core.ExceptionHandler(getDOM(), !IS_DART); } function _document() { return getDOM().defaultDoc(); } var BROWSER_SANITIZATION_PROVIDERS = [ /* @ts2dart_Provider */ { provide: SanitizationService, useExisting: DomSanitizationService }, /* @ts2dart_Provider */ { provide: DomSanitizationService, useClass: DomSanitizationServiceImpl }, ]; /** * A set of providers to initialize an Angular application in a web browser. * * Used automatically by `bootstrap`, or can be passed to {@link PlatformRef.application}. */ var BROWSER_APP_COMMON_PROVIDERS = /*@ts2dart_const*/ [ _angular_core.APPLICATION_COMMON_PROVIDERS, _angular_common.FORM_PROVIDERS, BROWSER_SANITIZATION_PROVIDERS, /* @ts2dart_Provider */ { provide: _angular_core.PLATFORM_PIPES, useValue: _angular_common.COMMON_PIPES, multi: true }, /* @ts2dart_Provider */ { provide: _angular_core.PLATFORM_DIRECTIVES, useValue: _angular_common.COMMON_DIRECTIVES, multi: true }, /* @ts2dart_Provider */ { provide: _angular_core.ExceptionHandler, useFactory: _exceptionHandler, deps: [] }, /* @ts2dart_Provider */ { provide: DOCUMENT, useFactory: _document, deps: [] }, /* @ts2dart_Provider */ { provide: EVENT_MANAGER_PLUGINS, useClass: DomEventsPlugin, multi: true }, /* @ts2dart_Provider */ { provide: EVENT_MANAGER_PLUGINS, useClass: KeyEventsPlugin, multi: true }, /* @ts2dart_Provider */ { provide: EVENT_MANAGER_PLUGINS, useClass: HammerGesturesPlugin, multi: true }, /* @ts2dart_Provider */ { provide: HAMMER_GESTURE_CONFIG, useClass: HammerGestureConfig }, /* @ts2dart_Provider */ { provide: DomRootRenderer, useClass: 
DomRootRenderer_ }, /* @ts2dart_Provider */ { provide: _angular_core.RootRenderer, useExisting: DomRootRenderer }, /* @ts2dart_Provider */ { provide: SharedStylesHost, useExisting: DomSharedStylesHost }, DomSharedStylesHost, _angular_core.Testability, BrowserDetails, AnimationBuilder, EventManager, ELEMENT_PROBE_PROVIDERS ]; function initDomAdapter() { BrowserDomAdapter.makeCurrent(); wtfInit(); BrowserGetTestability.init(); } exports.__platform_browser_private__; (function (__platform_browser_private__) { __platform_browser_private__.DomAdapter = DomAdapter; function getDOM$$() { return getDOM(); } __platform_browser_private__.getDOM = getDOM$$; function setDOM$$(adapter) { return setDOM(adapter); } __platform_browser_private__.setDOM = setDOM$$; __platform_browser_private__.setRootDomAdapter = setRootDomAdapter; __platform_browser_private__.BrowserDomAdapter = BrowserDomAdapter; __platform_browser_private__.AnimationBuilder = AnimationBuilder; __platform_browser_private__.CssAnimationBuilder = CssAnimationBuilder; __platform_browser_private__.CssAnimationOptions = CssAnimationOptions; __platform_browser_private__.Animation = Animation; __platform_browser_private__.BrowserDetails = BrowserDetails; })(exports.__platform_browser_private__ || (exports.__platform_browser_private__ = {})); var BrowserPlatformLocation = (function (_super) { __extends(BrowserPlatformLocation, _super); function BrowserPlatformLocation() { _super.call(this); this._init(); } // This is moved to its own method so that `MockPlatformLocationStrategy` can overwrite it /** @internal */ BrowserPlatformLocation.prototype._init = function () { this._location = getDOM().getLocation(); this._history = getDOM().getHistory(); }; Object.defineProperty(BrowserPlatformLocation.prototype, "location", { /** @internal */ get: function () { return this._location; }, enumerable: true, configurable: true }); BrowserPlatformLocation.prototype.getBaseHrefFromDOM = function () { return getDOM().getBaseHref(); }; 
BrowserPlatformLocation.prototype.onPopState = function (fn) { getDOM().getGlobalEventTarget('window').addEventListener('popstate', fn, false); }; BrowserPlatformLocation.prototype.onHashChange = function (fn) { getDOM().getGlobalEventTarget('window').addEventListener('hashchange', fn, false); }; Object.defineProperty(BrowserPlatformLocation.prototype, "pathname", { get: function () { return this._location.pathname; }, set: function (newPath) { this._location.pathname = newPath; }, enumerable: true, configurable: true }); Object.defineProperty(BrowserPlatformLocation.prototype, "search", { get: function () { return this._location.search; }, enumerable: true, configurable: true }); Object.defineProperty(BrowserPlatformLocation.prototype, "hash", { get: function () { return this._location.hash; }, enumerable: true, configurable: true }); BrowserPlatformLocation.prototype.pushState = function (state, title, url) { this._history.pushState(state, title, url); }; BrowserPlatformLocation.prototype.replaceState = function (state, title, url) { this._history.replaceState(state, title, url); }; BrowserPlatformLocation.prototype.forward = function () { this._history.forward(); }; BrowserPlatformLocation.prototype.back = function () { this._history.back(); }; return BrowserPlatformLocation; }(_angular_common.PlatformLocation)); BrowserPlatformLocation.decorators = [ { type: _angular_core.Injectable }, ]; BrowserPlatformLocation.ctorParameters = []; /** * An array of providers that should be passed into `application()` when bootstrapping a component * when all templates * have been precompiled offline. */ var BROWSER_APP_STATIC_PROVIDERS = /*@ts2dart_const*/ BROWSER_APP_COMMON_PROVIDERS; function browserStaticPlatform() { if (isBlank(_angular_core.getPlatform())) { _angular_core.createPlatform(_angular_core.ReflectiveInjector.resolveAndCreate(BROWSER_PROVIDERS)); } return _angular_core.assertPlatform(BROWSER_PLATFORM_MARKER); } /** * See {@link bootstrap} for more information. 
*/ function bootstrapStatic(appComponentType, customProviders, initReflector) { if (isPresent(initReflector)) { initReflector(); } var appProviders = isPresent(customProviders) ? [BROWSER_APP_STATIC_PROVIDERS, customProviders] : BROWSER_APP_STATIC_PROVIDERS; var appInjector = _angular_core.ReflectiveInjector.resolveAndCreate(appProviders, browserStaticPlatform().injector); return _angular_core.coreLoadAndBootstrap(appInjector, appComponentType); } function browserPlatform() { if (isBlank(_angular_core.getPlatform())) { _angular_core.createPlatform(_angular_core.ReflectiveInjector.resolveAndCreate(BROWSER_PROVIDERS)); } return _angular_core.assertPlatform(BROWSER_PLATFORM_MARKER); } exports.browserPlatform = browserPlatform; exports.DomEventsPlugin = DomEventsPlugin; exports.EventManager = EventManager; exports.EVENT_MANAGER_PLUGINS = EVENT_MANAGER_PLUGINS; exports.ELEMENT_PROBE_PROVIDERS = ELEMENT_PROBE_PROVIDERS; exports.BROWSER_APP_COMMON_PROVIDERS = BROWSER_APP_COMMON_PROVIDERS; exports.BROWSER_SANITIZATION_PROVIDERS = BROWSER_SANITIZATION_PROVIDERS; exports.BROWSER_PROVIDERS = BROWSER_PROVIDERS; exports.By = By; exports.Title = Title; exports.enableDebugTools = enableDebugTools; exports.disableDebugTools = disableDebugTools; exports.HAMMER_GESTURE_CONFIG = HAMMER_GESTURE_CONFIG; exports.HammerGestureConfig = HammerGestureConfig; exports.DOCUMENT = DOCUMENT; exports.DomSanitizationService = DomSanitizationService; exports.SecurityContext = SecurityContext; exports.bootstrapStatic = bootstrapStatic; exports.browserStaticPlatform = browserStaticPlatform; exports.BROWSER_APP_STATIC_PROVIDERS = BROWSER_APP_STATIC_PROVIDERS; exports.BrowserPlatformLocation = BrowserPlatformLocation; }));
\section{Introduction} \label{myIntro} \IEEEPARstart{R}{esearchers} and engineers often use transforms to analyze and process signals. A common desired property of a transform is that it allow signals to be represented as combinations of a small number of ``atoms''. For example, the Fourier transform is commonly used for analyzing audio signals \cite{gold2011speech} since they tend to be composed of a small number of harmonic components. Piecewise smooth signals, on the other hand, are much more compactly represented by the Wavelet transform, which is thus popular in image processing \cite{chan2005image}. The emergence of the field of sparse representations \cite{mallat1993matching} initiated the systematic construction of dictionaries that allow representing signals as linear combinations of a \emph{small} number of their atoms \cite{engan1999method,aharon2006rm}. Today, this concept constitutes a key ingredient in numerous areas, ranging from image enhancement to signal recovery and compression~\cite{elad2010sparse,eldar2012compressed,zhang2015survey}. In this paper, we address a fundamental question relating to the expressive power of sparse representations. Specifically, we study conditions under which a redundant dictionary can be used to represent every signal in \(\mathbb{R}^d\) as a linear combination of at most \(k<d\) of its atoms, with an error no larger than \( \varepsilon \). Our goal is to obtain necessary and sufficient conditions on the minimal number of atoms~$n$ allowing this. This problem has two motivations. First, when the sparse representation model is used as a prior, as in compressed sensing or signal restoration \cite{elad2006image,yang2010image}, only a small set of signals is meant to be sparsely representable over the dictionary. This is often achieved by learning a dictionary from a set of relevant training examples (\emph{e.g.}, patches from natural images) \cite{engan1999method,aharon2006rm,lewicki2006learning,mairal2009online}. 
In this context, it is of interest to identify when a dictionary has an unnecessarily large overcompleteness (\emph{i.e.}, one which allows sparse representation of every signal, and not only of those from the designated set). A second motivation relates to the use of the sparse representation model as a generic transform, under which all signals are sparse. It is easy to show that when $k$ is taken to be a fixed fraction of $d$, the set of signals in \(\mathbb{R}^d\) that can be approximately represented by a specific choice of \(k\) atoms has a volume which is exponentially small in $k$ (see Sec.~\ref{Derivations} for details). This is, in fact, the principle underlying the Johnson-Lindenstrauss lemma \cite{johnson1984extensions}. This lemma asserts that when projecting points in $\mathbb{R}^d$ onto a random $k$-dimensional space, there is a concentration of measure effect whereby the points' norms are approximately preserved up to a factor of $k/d$. Therefore, when $k$ is much smaller than $d$, such projections are very far from the original points with high probability. Nevertheless, in our context, if $k$ is also a fixed fraction of $n$, then the number of choices of \(k\) atoms from a dictionary of size \(n\) is exponentially large in $k$. Therefore, it is not a priori clear whether the overcompleteness should be very large in order to allow universal sparse representation. Interestingly, our results show that for certain regimes of error and sparsity levels, universal sparse representation can be achieved with moderate redundancy. For other regimes, on the other hand, universal sparse representation becomes impractical. \IEEEpubidadjcol It should be noted that our setting is very different from that of compressed sensing. There, the quantity of interest is the minimal number of linear measurements from which any $k$-sparse signal can be uniquely recovered \cite{donoho2006compressed,tropp2007signal,donoho2006stable,candes2006stable}. 
Merging the measurement matrix into the dictionary, this problem is equivalent to asking for the minimal number of rows of a dictionary that allows unique recovery of any signal that is $k$-sparse in the standard (non-overcomplete) basis. In contrast, here we analyze the minimal number of atoms (columns of an overcomplete dictionary) with which all signals possess a sparse representation. Furthermore, we do not require uniqueness of the representation. Mathematically, universal sparse representation can be viewed as a covering problem, a branch of mathematics with many known results \cite{boroczky2004finite}. However, most works consider ball covering problems, whereas our setting is concerned with covering by dilations of linear subspaces (spanned by subsets of atoms from the dictionary). Moreover, these subspaces share atoms and are thus constrained to intersect. To the best of our knowledge, such settings have not been studied in the past. Only a few attempts were made to characterize the representation ability of overcomplete dictionaries. In \cite[Ch.~7]{aharonovercomplete} the author provided approximations for the relative volume of signals that admit a sparse representation with an allowable error. However, the expressions depend on properties of the dictionary (the minimal and maximal singular values of any subset of columns) and are thus not universal. Furthermore, the accuracy of the approximations in high dimensions is not clear. In \cite{akccakaya2008frame} the authors analyzed a stochastic setting, for which they provided a lower bound on the achievable mean squared error (MSE) of the representation as a function of the sparsity and the dictionary's overcompleteness. The analysis is universal in that it holds for all dictionaries. However, it does not provide an upper bound on the error (from which an upper bound on the required overcompleteness could be deduced), and it does not provide deterministic (worst-case) results. 
In this paper, we study the universal sparse representation problem from both a worst-case standpoint and an average-case one. In the worst-case setting, we request that the representation error be bounded by $\varepsilon$ for every signal in \(\mathbb{R}^d\). In the average-case setting, we assume that the signal is random and require that the probability that it can be sparsely represented with an error less than $\varepsilon$ be high. For each scenario we give lower and upper bounds on the minimal required overcompleteness allowing universal sparse representation. As opposed to previous works, our bounds have simple closed-form expressions, which make it easy to deduce the asymptotic behavior of the required overcompleteness. In particular, our bounds reveal that if $\varepsilon \ll 1$ or $k\ll d$, then the minimal required overcompleteness behaves like $ ( 1/ \varepsilon) ^{d/k-1} $ up to polynomial factors in $d/k$. We provide simulations showing that our bounds correctly predict the threshold at which sparse coding techniques start to succeed in approximating arbitrary signals. As a side effect of our derivations, we obtain a tight lower bound on the regularized incomplete beta function, which may be interesting in its own right. The paper is organized as follows. Section \ref{MainResults} introduces our problem in mathematical terms and describes our main results. Section \ref{Derivations} includes the derivations of the bounds, and Section \ref{NumericalComp} presents numerical experiments, which illustrate and validate the theorems. \section{Main Results} \label{MainResults} Our goal is to be able to represent every signal $x\in\mathbb{R}^d$ as a linear combination of a small number $k<d$ of atoms from some dictionary $\Phi\in \mathbb{R}^{d\times n}$. 
Note that this is impossible to do without incurring some error, since the set of signals admitting such a $k$-sparse representation is a union of $\binom{n}{k}$ subspaces of dimension at most $k$, which is strictly contained in $\mathbb{R}^d$. However, the question we ask is: Under what conditions can we guarantee a $k$-sparse representation for every signal in $\mathbb{R}^d$ \emph{with a small error}? \newtheorem{RepresentationError}[Definitions]{Definition} \begin{RepresentationError}[Normalized \(k\)-sparse representation error] \label{RepresentationErrorDef} We define the \emph{\(k\)-sparse representation error} of a signal \(x \in \mathbb{R}^d \) over a dictionary \(\Phi \in \mathbb{R}^{d\times n} \) as \begin{equation}\label{RepresentationErrorEq} \epsilon(x,\Phi) \triangleq \min_{\alpha \in \mathbb{R}^n} \frac{\norm{x-\Phi \alpha} }{\norm{x}} \quad \text{s.t.} \quad \norm{\alpha}_0 \leq k, \end{equation} where the $\ell_0$ (pseudo) norm $\|\cdot\|_0$ counts the number of nonzero elements of its vector argument. We shall say that \(x\) has a \emph{\(k\)-sparse representation over \(\Phi\) with precision \(\varepsilon\)} if \(\epsilon(x,\Phi)\leq \varepsilon\). \end{RepresentationError} The normalized error is indifferent to scaling of $x$. Therefore, without loss of generality, we will restrict our analysis to signals lying on the unit sphere (\textit{i.e.}, with \(\norm{x} = 1\)). We are interested in the existence of dictionaries $\Phi$ such that $\epsilon(x,\Phi)$ is small for all, or at least most, signals $x$. More specifically, we consider both a worst-case design (Sec. \ref{WorstCaseResults}) and an average-case one (Sec. \ref{AverageCaseResults}). In the former, we require that the error $\epsilon(x,\Phi)$ be bounded by $\varepsilon$ for every $x\in\mathbb{R}^d$. In the latter, we assume that $x$ is a random vector and require that the probability that $\epsilon(x,\Phi)\leq\varepsilon$ be large. 
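For small problem sizes, the quantity in Definition~\ref{RepresentationErrorDef} can be evaluated directly. The following Python sketch (an illustration of ours, not part of the paper; it assumes \texttt{numpy} is available and all names are hypothetical) computes $\epsilon(x,\Phi)$ by exhaustively scanning all $\binom{n}{k}$ supports and orthogonally projecting $x$ onto the span of each:

```python
import itertools
import numpy as np

def sparse_rep_error(x, Phi, k):
    """Normalized k-sparse representation error of Definition 1,
    computed by brute force over all size-k supports (tiny sizes only)."""
    d, n = Phi.shape
    best = np.linalg.norm(x)  # error of the all-zero coefficient vector
    for support in itertools.combinations(range(n), k):
        sub = Phi[:, list(support)]
        # Least squares = orthogonal projection onto the span of the k atoms.
        coef = np.linalg.lstsq(sub, x, rcond=None)[0]
        best = min(best, np.linalg.norm(x - sub @ coef))
    return best / np.linalg.norm(x)

rng = np.random.default_rng(0)
Phi = rng.standard_normal((4, 8))
x = 3.0 * Phi[:, 2]                      # a signal that is exactly 1-sparse over Phi
print(sparse_rep_error(x, Phi, k=1))     # ~0: x lies in the span of a single atom
```

The scaling invariance noted above is visible here: multiplying \texttt{x} by any nonzero constant leaves the returned error unchanged.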
The two cardinal parameters in our problem are the sparsity factor \( s \) and the overcompleteness ratio \( o \), defined as \begin{equation}\label{CardinalParametersDef} s \triangleq \frac{k}{d}, \qquad o \triangleq \frac{n}{d}. \end{equation} The overcompleteness ratio can be thought of as the aspect ratio of the (wide) dictionary matrix $\Phi\in\mathbb{R}^{d\times n}$. Similarly, the sparsity factor $s$ is the aspect ratio of the (tall) sub-matrix of $\Phi$ containing the $k$ atoms participating in the decomposition of the signal $x$. Note that $s$ is not the fraction of nonzeros in the coefficient vector $\alpha$ in \eqref{RepresentationErrorEq} (which would be $k/n$). Our goal is to characterize the minimal overcompleteness $o$ such that all/most signals in $\mathbb{R}^d$ possess a sparse representation with sparsity $s$, up to some permissible error $\varepsilon$. Before we state our main results, let us first give some intuition into why this problem is not trivial. As mentioned above, each choice of $k$ atoms from the dictionary corresponds to a single subspace of dimension at most $k$. The volume of the set of signals whose normalized distance from this subspace is bounded by $\varepsilon$ can be computed in closed form (see Sec.~\ref{Derivations}). The problem is that when the number of atoms $n$ tends to infinity while keeping the ratio $\frac{k}{n}$ fixed, almost all pairs of groups of $k$ atoms from the dictionary share a significant number of atoms. That is, almost all pairs of subspaces intersect. These intersections cannot be disregarded, especially when seeking to upper-bound the minimal required $o$. Specifically, let $Q=\binom{n}{k}^2$ denote the total number of ordered pairs of subgroups of $k$ atoms and let $q(\ell)$ denote the number of pairs that share precisely $\ell$ atoms. Then, by definition, $q(\ell)/Q$ is a hypergeometric distribution with parameters $(n,k,k)$. 
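The hypergeometric form of $q(\ell)/Q$ can be checked numerically. A short Python sketch (ours, for illustration; it uses only the standard library) fixes the first subset, counts the second subsets sharing exactly $\ell$ atoms, and verifies that the resulting fractions form a probability distribution with mean $k\times\frac{k}{n}$:

```python
from math import comb

n, k = 40, 10
Q = comb(n, k) ** 2                       # ordered pairs of k-subsets

def overlap_fraction(l):
    # Fix the first subset; choose l shared atoms from it and the
    # remaining k - l atoms from the other n - k atoms.
    return comb(n, k) * comb(k, l) * comb(n - k, k - l) / Q

probs = [overlap_fraction(l) for l in range(k + 1)]
print(abs(sum(probs) - 1.0) < 1e-12)      # fractions sum to one (Vandermonde)
mean = sum(l * p for l, p in enumerate(probs))
print(abs(mean - k * k / n) < 1e-12)      # mean overlap is k * (k/n) = 2.5
```

Repeating this with larger $n$ at fixed $k/n$ reproduces the concentration of the relative overlap around $k/n$ described next.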
It is well known that the mean of this distribution is $k\times \frac{k}{n}$ and that its standard deviation is $\sqrt{k}\times \sqrt{\frac{k}{n}(1-\frac{k}{n})\frac{n-k}{n-1}}$. Thus, normalizing by the maximal possible overlap $k$ and taking $n$ to infinity while keeping $\frac{k}{n}$ fixed, we obtain a probability distribution whose mean tends to $\frac{k}{n}$ and whose standard deviation tends to $0$ as $O(\frac{1}{\sqrt{k}})$. In other words, almost all pairs of groups of $k$ atoms share precisely $k\times \frac{k}{n}$ atoms in this regime. This phenomenon is illustrated in Fig.~\ref{fig:Hypergeometric}. \begin{figure}[!t] \centering \includegraphics[width=3.5in]{Fig1} \caption{Given a set of $n$ atoms, there exist $\binom{n}{k}$ distinct subsets of $k$ atoms. The graph shows the relative amount of pairs of such subsets as a function of their overlap. As $n$ tends to infinity while keeping $k/n$ fixed, the relative overlap develops a sharp peak at $k/n$ and its standard deviation tends to $0$. Thus, when $n$ is large, practically all pairs of groups of $k$ atoms share $k\times (k/n)$ atoms. Here, $k/n=0.25$.} \label{fig:Hypergeometric} \end{figure} To overcome this difficulty, in our upper bounds analyses, we focus on a special type of (sub-optimal) structured dictionaries and also pose a certain (sub-optimal) restriction on the allowed choices of atoms from the dictionary. These assumptions significantly simplify the derivations, and while they may seem to lead to a crude overestimation of the required overcompleteness, we show that the resulting bounds are rather accurate in quite a wide range of settings. \subsection{Worst-case analysis} \label{WorstCaseResults} We begin by studying the problem from a worst-case standpoint. 
\newtheorem{UniversalSparseRepresentation}[Definitions]{Definition} \begin{UniversalSparseRepresentation}[Universal \(k\)-sparse representation dictionary] \label{UniversalSparseRepresentationDef} We say that \(\Phi \in \mathbb{R}^{d \times n}\) is a \emph{universal \(k\)-sparse representation dictionary with precision \(\varepsilon \)} if all signals in \(\mathbb{R}^d\) admit a \(k\)-sparse representation with precision \(\varepsilon\) over $\Phi$, namely \(\epsilon(x,\Phi)\leq\varepsilon,\, \forall x\in\mathbb{R}^d\) (equivalently, \( \max_{x \in \mathbb{R}^d } \epsilon(x,\Phi)\leq\varepsilon \)). \end{UniversalSparseRepresentation} Let us denote by \( o^{\dagger} \) the minimal overcompleteness allowing universal sparse representation. In the theorems below we provide upper and lower bounds on $o^{\dagger}$. These bounds are expressed only in terms of the sparsity $s$ and allowable error $\varepsilon$. Particularly, they are independent of the dimension \(d\) (despite the fact that $o^{\dagger}$ itself does depend on $d$). \newtheorem{WorstCaseLowerBound}[Theorems]{Theorem} \begin{WorstCaseLowerBound}[Worst-case lower bound] \label{WorstCaseLowerBoundTheorem} If \( \varepsilon \in (0, \sqrt{1-s} ) \), then \begin{equation} \label{WorstCaseLowerBoundEq} o^{\dagger} \geq c_1( s,\varepsilon ) \times s^{ \frac{3}{2} } \left(\frac{1}{\varepsilon}\right)^{ \frac{1}{s} - 1 }, \end{equation} where $c_1( s,\varepsilon ) = e^{-1}\sqrt{(1-s)^{1/s-1}/({1-\varepsilon^2})} \geq e^{-\frac{3}{2}}$ for all \(s\) and \( \varepsilon \). If $\varepsilon \in (\sqrt{1-s},1) $, then \begin{equation} o^{\dagger} = 1. \end{equation} \end{WorstCaseLowerBound} As expected, when either the sparsity factor $s$ or the precision $\varepsilon$ are small, the required overcompleteness is large. However, interestingly, the dependence on $s$ and $\varepsilon$ is quite different. 
While the bound is polynomial in $\varepsilon^{-1}$, it is exponential in $s^{-1}$, implying that universal sparse representation is practically impossible at very small sparsity factors. \newtheorem{WorstCaseUpperBound}[Theorems]{Theorem} \begin{WorstCaseUpperBound}[Worst-case upper bound] \label{WorstCaseUpperBoundTheorem} If \( \varepsilon \in (0, 1) \), \(k\) is a divisor of \(d\) and \( s \leq \frac{1}{3} \), then \begin{equation} \label{WorstCaseUpperBoundEq} o^{\dagger} \leq c_2( s,\varepsilon ) \times \log (s^{-1}) s^{-\frac{1}{2}} \left(\frac{1}{\varepsilon}\right)^{ \frac{1}{s} - 1 }, \end{equation} where \(c_2( s,\varepsilon )= \sqrt{2\pi} (1 + \frac {2} {\log s^{-1}} ) (1 + \frac{\log \log s^{-1}}{\log s^{-1}} + \frac{\sqrt{e}} {s^{-1}} ) (1-\varepsilon^2 \frac{1-s}{1+s})^{\frac{1}{2}} \leq 12 \) for all \(s\) and \( \varepsilon \). If \(k\) is not a divisor of \(d\), then this bound holds true with \( s^{-1} \) replaced by \( \ceil{s^{-1}} \). \end{WorstCaseUpperBound} As can be seen, both bounds are exponentially equivalent\footnote{Namely, the ratio between the log of the bounds and the log of $ ( 1/\varepsilon ) ^{1/s-1} $ tends to 1 as either $s$ or $\varepsilon$ tend to zero.} to $ ( 1/\varepsilon ) ^{1/s-1} $. This implies that under the conditions of Theorems~\ref{WorstCaseLowerBoundTheorem} and~\ref{WorstCaseUpperBoundTheorem}, the minimal overcompleteness $o^{\dagger}$ satisfies \begin{equation} \label{ExponentialEquality} o^{\dagger} \approx \left(\frac{1}{\varepsilon}\right)^{\frac{1}{s} -1}, \end{equation} where $\approx$ denotes exponential equivalence. One can think of the representation error as noise, in which case the term $1/\varepsilon$ can be interpreted as the signal-to-noise ratio (SNR). 
Therefore,~\eqref{ExponentialEquality} can also be written as \begin{equation} \label{ExponentialEquality_SNR} o^{\dagger}_{\text{dB}} \sim \left( \frac{1}{s} -1 \right) \mathrm{SNR}_{\text{dB}} , \end{equation} where \(\sim\) denotes asymptotic equivalence, $\mathrm{SNR}_{\text{dB}} = 20\log_{10}(1/\varepsilon)$ and $o^{\dagger}_{\text{dB}} = 20\log_{10}(o^{\dagger})$. Exponential equivalence is agnostic to polynomial dependencies. Thus, to refine our intuition, it is instructive to examine the ratio between the bounds \eqref{WorstCaseUpperBoundEq} and \eqref{WorstCaseLowerBoundEq}. As can be seen, this ratio is bounded from above as a function of $\varepsilon$ and is only polynomial in $s^{-1}$ (behaves as \( \Theta( \log (s^{-1}) s^{-2}) \)). This indicates that our bounds are relatively accurate for moderate sparsity factors, even when $\varepsilon$ is small, but may become inaccurate for very small $s$. \subsection{Average-case analysis} \label{AverageCaseResults} The upper bound of Theorem \ref{WorstCaseUpperBoundTheorem} is rather pessimistic as it guarantees that \emph{all} signals can be sparsely represented with precision \(\varepsilon\), including esoteric and unlikely signals. In many practical situations, it may be enough to loosen this requirement and replace it by a probabilistic one. Specifically, suppose we have prior knowledge in the form of a distribution $\Omega$ over signals in $\mathbb{R}^d$. In this case, it may be enough to settle for dictionaries allowing sparse representation only \emph{with high probability}. 
\newtheorem{OptimalSuccessProbability}[Definitions]{Definition} \begin{OptimalSuccessProbability}[Optimal success probability] \label{OptSuccessProbDef} For any given distribution \( \Omega \) over \( \mathbb{R}^d \) and dictionary size \( d \times n \), we define the \emph{optimal success probability} under \(\Omega\) as \begin{equation}\label{OptSuccessProbEq} \optSuccProb{\Omega} \triangleq \max_{\Phi \in \mathbb{R}^{d \times n} } \prob{ \epsilon(x,\Phi) \leq \varepsilon }, \end{equation} where \(x\) is a random vector with distribution $\Omega$. \end{OptimalSuccessProbability} To obtain bounds that do not depend on the prior $\Omega$, we will examine the worst-case optimal success probability over all possible distributions $\Omega$. Mathematically, let \( \mathcal{D} (\mathbb{R} ^d) \) be the collection of all distributions over \(\mathbb{R} ^d\). Then the worst-case optimal success probability is defined as \begin{equation} \label{WorstCaseOptimalSuccessProb} \mathcal{P}^* = \min_{\Omega \in \mathcal{D} (\mathbb{R} ^d) } \optSuccProb{\Omega}. \end{equation} Studying the behavior of $ \mathcal{P}^* $ is particularly interesting in high dimensions. In this setting there is a sharp transition between the regime of overcompleteness factors at which $ \mathcal{P}^* $ tends to $1$ and the regime at which it tends to $0$. We would therefore like to study the limit of \( \mathcal{P}^* \) as \(d\) tends to infinity, while keeping the sparsity factor \( s \) fixed. To this end, we denote by \( o^{*} \) the minimal overcompleteness such that \( \lim_{d \rightarrow \infty } \mathcal{P}^* = 1 \). There are several important distinctions between $o^{*}$ of the average-case scenario and $o^{\dagger}$ of the worst-case setting. First, $o^{*}$ is only affected by typical signals, whereas $o^{\dagger}$ takes into account all signals. Therefore, we necessarily have that $o^{*}\leq o^{\dagger}$. 
Second, as can be seen from \eqref{OptSuccessProbEq}, it may be that for each distribution \( \Omega \) the optimal dictionary is different. Thus, as opposed to the worst-case analysis, in the average case setting we do not guarantee the existence of a single dictionary that is good for all signals. Finally, $o^{*}$ is defined only for $d\rightarrow\infty$, whereas $o^{\dagger}$ is defined for all $d$. This is particularly important when bounding these quantities from above, since the minimal required overcompleteness becomes smaller as $d$ increases. This further contributes to our ability to obtain an upper bound on $o^{*}$, which is lower than the upper bound on $o^{\dagger}$ in Theorem~\ref{WorstCaseUpperBoundTheorem}. The next two theorems are analogous to Theorems~\ref{WorstCaseLowerBoundTheorem} and~\ref{WorstCaseUpperBoundTheorem}. The first statement in each theorem bounds \( o^{*} \), and thus provides an asymptotic analysis. The second statement characterizes the convergence to the asymptotic behavior, and is relevant for any finite dimension. \newtheorem{AverageCaseLowerBound}[Theorems]{Theorem} \begin{AverageCaseLowerBound}[Average-case lower bound] \label{AverageCaseLowerBoundTheorem} If \( \varepsilon \in (0, \sqrt{1-s} ) \), then \begin{equation} \label{AverageCaseLowerBoundEq} o^{*} \geq c_1( s,\varepsilon ) \times s^{ \frac{3}{2} } \left(\frac{1}{\varepsilon}\right)^{ \frac{1}{s} - 1 }, \end{equation} where \( c_1( s,\varepsilon )\) is as in \eqref{WorstCaseLowerBoundEq}. Furthermore, for any finite dimension \( d \), \begin{equation} \label{AverageCaseLowerBoundProb} \mathcal{P}^* \leq \frac{1}{\sqrt{2\pi s \left(1 - \frac{s}{o}\right)}} \times d^{-\frac{1}{2}}\exp\{ - c_3( s,\varepsilon, o ) d \} . 
\end{equation} Here, \( c_3( s,\varepsilon, o ) = \frac{1}{2} \relent{1-s}{\varepsilon^2} - o \entropyi{ \frac{s}{o} } \), where $\entropyi{\alpha} =-\alpha\log(\alpha) -(1-\alpha)\log(1-\alpha)$ is the entropy of a $\text{Bernoulli}(\alpha)$ random variable, and \( \relent{\alpha}{\beta} = \alpha\log(\frac{\alpha}{\beta}) + (1-\alpha)\log(\frac{1-\alpha}{1-\beta}) \) is the Kullback-Leibler divergence between the \( \text{Bernoulli}(\alpha)\) and \(\text{Bernoulli}(\beta)\) distributions. \end{AverageCaseLowerBound} In fact, as we show in Sec.~\ref{AverageCaseLowerBoundAnalysis}, when the overcompleteness \( o \) is smaller than the right-hand side of \eqref{AverageCaseLowerBoundEq}, not only does $ \mathcal{P}^* $ fail to tend to $1$, it actually tends to $0$. This implies that below this bound, universal sparse representation is practically impossible in high dimensions (for the worst-case distribution). The next theorem is stated in terms of the regularized incomplete beta function $\Ii{x}{\alpha}{\beta}$, which is the probability that a \( \myBetai{\alpha}{\beta}\) random variable is smaller than $x$. \newtheorem{AverageCaseUpperBound}[Theorems]{Theorem} \begin{AverageCaseUpperBound}[Average-case upper bound] \label{AverageCaseUpperBoundTheorem} If \( s = \frac{1}{m} \) for some \( m \in \mathbb{N} \), then \begin{equation} \label{AverageCaseUpperBoundEq} o^{*} \leq c_4( s,\varepsilon ) \times s^{\frac{1}{2}} \left(\frac{1}{\varepsilon}\right)^{ \frac{1} {s} - 1 }, \end{equation} where \(c_4( s,\varepsilon )= \sqrt{\tfrac{\pi}{2} (1-\varepsilon^2 (1-s)/(1+s) ) }\leq \sqrt{\tfrac{\pi}{2}} \) for all \(s\) and \( \varepsilon \). 
Furthermore, for any finite dimension \( d \), if \( \varepsilon^2 \leq \tfrac{1}{2} \) and the overcompleteness ratio satisfies \begin{equation} \label{AverageCaseUpperBoundProb1} o \geq \frac{s} { \I{\delta^2}{\frac{1}{2}(\frac{1}{s}-1)}{ \frac{1}{2} } } = \bigtheta{ s^{\frac{1}{2}} \left(\frac{1}{\delta}\right)^{ \frac{1} {s} - 1 } } \end{equation} for some \( \delta \in (0, \varepsilon ) \), then \begin{equation} \label{AverageCaseUpperBoundProb2} \mathcal{P}^* \geq 1 - \frac{c_5(s,\varepsilon,o,\delta)}{c_5(s,\varepsilon,o,\delta) + d}, \end{equation} where \( c_5(s,\varepsilon,o,\delta) = \frac{ (1-2\varepsilon^2) (\Ii{1-2\varepsilon^2} {\frac{1}{2}}{\frac{1-s}{2s}})^{ \frac{o}{ s } } + \varepsilon^4} {(\varepsilon^2-\delta^2)^2} \frac{(1+2s)}{s } \) is a constant independent of the dimension \(d\). \end{AverageCaseUpperBound} Note that Theorem \ref{AverageCaseLowerBoundTheorem} provides a \emph{lower bound} on the minimal required overcompleteness (see \eqref{AverageCaseLowerBoundEq}) and an \emph{upper bound} on the probability of success (see \eqref{AverageCaseLowerBoundProb}). Similarly, Theorem \ref{AverageCaseUpperBoundTheorem} provides an \emph{upper bound} on the minimal required overcompleteness (see \eqref{AverageCaseUpperBoundEq}), which is further refined via a \emph{lower bound} on the probability of success (see \eqref{AverageCaseUpperBoundProb1},\eqref{AverageCaseUpperBoundProb2}). Comparing Theorems \ref{WorstCaseUpperBoundTheorem} and \ref{AverageCaseUpperBoundTheorem}, it can be seen that the average-case analysis provides an improvement of \( \Theta (s^{-1} \log s^{-1} ) \) over the worst-case analysis (compare \eqref{WorstCaseUpperBoundEq} with \eqref{AverageCaseUpperBoundEq}). Another interesting comparison is between the lower and upper bounds in the average-case scenario.
In general, the ratio between \eqref{AverageCaseLowerBoundEq} and \eqref{AverageCaseUpperBoundEq} is a complicated function of \( s \) and \( \varepsilon \) that behaves as \( \Theta( s^{-1} ) \). Yet, for certain regimes of \( \varepsilon \) and \( s \) we can obtain simple expressions. When \( \varepsilon \ll 1 \), the ratio becomes approximately \( \sqrt{ \frac{\pi e^2 }{2(1-s)^{s^{-1}-1} } } s^{-1} \). This expression is independent of \( \varepsilon \), implying that the bounds are relatively tight for moderate values of \( s \) when \( \varepsilon \) is small. Similarly, when \( s \ll 1 \), the ratio becomes approximately \( \sqrt{ \frac{\pi e^3}{2} } (1-\varepsilon^2) s^{-1} \). \section{Derivations} \label{Derivations} In this section, we prove Theorems~\ref{WorstCaseLowerBoundTheorem}-\ref{AverageCaseUpperBoundTheorem}. As discussed above, each choice of $k$ atoms from a dictionary \( \Phi \in \mathbb{R} ^{d \times n } \) allows us to perfectly represent all signals lying in some linear subspace of dimension at most $k$. Therefore, the set of all signals admitting a $k$-sparse representation over $\Phi$ corresponds to the union of these \( N = \binom{n}{k} \) linear subspaces, which we denote by \( \{ \psi_i\}_{i=1}^N \). In our setting, we allow for \emph{approximate} representations with a relative error of at most $\varepsilon$. To this end, we use the notion of spherical dilations. Specifically, the spherical dilation of a set $X\subset \mathbb{R}^d$ with radius \(r\) is defined as the set \begin{equation} \label{BallDef} \ball{X}{r} = \{ y \in \mathbb{R}^d : \exists x \in X \quad \text{s.t.} \quad \norm{ y - x } \leq r \}. \end{equation} In order for \( \Phi \) to be a universal \( k \)-sparse representation dictionary with precision \( \varepsilon \), each point on the unit sphere must be contained in at least one of the \( \varepsilon \)-dilations of its subspaces \( \{ \psi_i\}_{i=1}^N \).
In other words, the universal sparse representation problem is in fact a covering problem, as we are interested in covering the unit sphere by \( \varepsilon \)-dilations of linear subspaces. That is, we would like that \begin{equation} \label{SphereCover} S^{d-1} \subseteq \bigcup\limits_{i=1}^{N} \ball{\psi_i}{\varepsilon}, \end{equation} where $S^{d-1}$ is the unit sphere in $\mathbb{R}^d$. \begin{figure} \centering \includegraphics[trim=32 47 23 40, clip, width=3.5in]{Fig2.png} \caption{The area from the unit sphere in $\mathbb{R}^3$ covered by an $\varepsilon$-dilation of a 2D subspace is shown in green. Corollary~\ref{SubspaceCoverageCorollary} gives an explicit formula for the ratio between the green area and the surface area of the unit sphere, for any ambient dimension $d$, subspace dimension $k$, and dilation $\varepsilon$.}\label{fig:ball} \end{figure} Let us start by examining the relative area covered by an \( \varepsilon \)-dilation of a single linear subspace\footnote{A similar analysis was provided in \cite{akccakaya2008frame} for complex spaces.} $\psi$ (namely, the ratio between the area of $\ball{\psi}{\varepsilon}\cap S^{d-1}$ and the area of $S^{d-1}$). This relative area, illustrated in Fig.~\ref{fig:ball}, can be interpreted as the probability that a point \(x\) chosen uniformly at random from the unit sphere $S^{d-1}$ is \( \varepsilon \)-close to \( \psi \). The squared distance between \( x \) and \( \psi \) is \(\normi{x-\proj{x}{\psi}}^2=\normi{x}^2 - \normi{\proj{x}{\psi}}^2\), where \(\proj{}{\psi}\) is the orthogonal projection matrix onto $\psi$. Thus, since \(\normi{x} = 1\), \(x\) is \( \varepsilon \)-close to \( \psi \) if and only if \( \normi{\proj{x}{\psi}}^2 \geq 1-\varepsilon^2 \).
This implies that we have the relation \begin{equation} \label{FundamentalEquality} \frac{\area{\ball{\psi}{\varepsilon} \cap S^{d-1}}}{\area{S^{d-1}}} = \prob{\norm{\proj{x}{\psi}}^2 \geq 1-\varepsilon^2}, \end{equation} where $\area{V}$ denotes the area of the $d-1$ dimensional manifold $V\subset \mathbb{R}^d$. Our problem thus boils down to determining the distribution of the length of a random vector with uniform distribution on the unit sphere, projected down onto a fixed \(k\)-dimensional subspace. This result can be found \emph{e.g.}, in \cite{muirhead1982aspects}. For our purposes, it is more convenient to state the result for the equivalent setting where the subspace is random and the unit vector is deterministic. \newtheorem{RandomSubspaceProjection}[Lemmas]{Lemma} \begin{RandomSubspaceProjection}[Random subspace projection \cite{muirhead1982aspects}] \label{RandomSubspaceProjectionTheorem} Let \( \mathcal{V} \) denote the \( k \)-dimensional subspace spanned by a set \( \{\nu_i\}_{i=1}^k \) of independent isotropically distributed random vectors in \( \mathbb{R}^d \). Then for every deterministic unit-norm vector \(x \in S^{d-1}\), \begin{equation} \label{RandomSubspaceProjectionDistribution} \norm{\proj{x}{ \mathcal{V} }}^2 \thicksim \myBeta{\frac{k}{2}}{\frac{d-k}{2}}. \end{equation} \end{RandomSubspaceProjection} Note that the distribution of \( \normi{\proj{x}{ \mathcal{V}}}^2 \) is independent of \( x \), hence this result also holds true when $x$ is a random unit norm vector that is independent of \(\{\nu_i\}\). From \eqref{RandomSubspaceProjectionDistribution} we have that \( \probi{\normi{\proj{x}{\psi}}^2 \geq 1-\varepsilon^2} = 1-\Ii{1-\varepsilon^2}{\frac{k}{2}}{\frac{d-k}{2}} \), where $\Ii{x}{\alpha}{\beta}$ is the cumulative distribution function of the $\myBetai{\alpha}{\beta}$ distribution, also known as the regularized incomplete beta function. 
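Lemma~\ref{RandomSubspaceProjectionTheorem} lends itself to a quick Monte Carlo sanity check. The following sketch (not part of the derivation; the values of $d$, $k$, and the number of trials are arbitrary) samples points uniformly from the sphere and projects them onto the span of the first $k$ coordinates, which is no loss of generality by rotation invariance:

```python
import random

# Monte Carlo check of the random-subspace-projection lemma (illustrative sketch):
# for x uniform on S^{d-1} and psi = span of the first k coordinates,
# ||P_psi x||^2 ~ Beta(k/2, (d-k)/2), whose mean is (k/2)/(d/2) = k/d.
random.seed(0)
d, k, trials = 20, 5, 50_000
total = 0.0
for _ in range(trials):
    g = [random.gauss(0.0, 1.0) for _ in range(d)]  # isotropic Gaussian direction
    norm2 = sum(v * v for v in g)                   # normalizing gives a uniform point on the sphere
    total += sum(v * v for v in g[:k]) / norm2      # squared length of the projection onto psi
mean = total / trials
print(round(mean, 2))  # ≈ k/d = 0.25
```

The empirical mean matches the Beta mean $k/d$, in agreement with \eqref{RandomSubspaceProjectionDistribution}.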
From the properties of the beta distribution, we can also write \( 1-\Ii{1-\varepsilon^2}{\frac{k}{2}}{\frac{d-k}{2}} = \Ii{\varepsilon^2} {\frac{d-k}{2}}{\frac{k}{2} }\). Thus, from \eqref{FundamentalEquality} and \eqref{RandomSubspaceProjectionDistribution} we reach the following conclusion. \newtheorem{SubspaceCoverage}[Corollaries]{Corollary} \begin{SubspaceCoverage}[Subspace coverage] \label{SubspaceCoverageCorollary} Let \( \psi \) be a \( k \)-dimensional linear subspace in \( \mathbb{R}^d \). Then the relative area covered by \( \ball{\psi}{\varepsilon} \) from the unit sphere is precisely \( \Ii{\varepsilon^2 }{\frac{d-k}{2}}{\frac{k}{2}} \). \end{SubspaceCoverage} We remark that this corollary can be seen as a generalization of \cite{li2011concise}, which proved it for one-dimensional subspaces. We are now ready to prove the theorems of Section \ref{MainResults}. In Sections \ref{WorstCaseLowerBoundAnalysis} and \ref{WorstCaseUpperBoundAnalysis} we prove the lower and upper bounds, respectively, for the worst-case setting. Similarly, in Sections \ref{AverageCaseLowerBoundAnalysis} and \ref{AverageCaseUpperBoundAnalysis} we derive the necessary and sufficient conditions, respectively, for the average-case setting. \subsection{Worst-case lower bound}\label{WorstCaseLowerBoundAnalysis} In this section we prove Theorem \ref{WorstCaseLowerBoundTheorem}. Following the discussion above, it is clear that \( \Phi \) is \emph{not} a universal \(k\)-sparse representation dictionary if the unit sphere \( S^{d-1} \) is not contained in the union of the \( \varepsilon \)-dilations of its $\binom{n}{k}$ subspaces $\{\psi_i\}$, namely \begin{equation}\label{WorstCaseLowerBoundCondition} S^{d-1} \not\subset \bigcup\limits_{i=1}^{N} \ball{\psi_i}{\varepsilon}. \end{equation} This condition is satisfied when \begin{equation}\label{WorstCaseLowerBoundCondition2} \frac{\area{\bigcup\limits_{i=1}^{N} \ball{\psi_i}{\varepsilon} \cap S^{d-1}}}{\area{S^{d-1}}} < 1.
\end{equation} Applying the union bound to the left-hand side of \eqref{WorstCaseLowerBoundCondition2} gives \begin{equation}\label{WorstCaseUnionBound} \frac{\area{\bigcup\limits_{i=1}^{N} \ball{\psi_i}{\varepsilon} \cap S^{d-1}}}{\area{S^{d-1}}} \leq \sum\limits_{i=1}^{N} \frac{\area{ \ball{\psi_i}{\varepsilon} \cap S^{d-1}}}{\area{S^{d-1}}}. \end{equation} From Corollary \ref{SubspaceCoverageCorollary}, each of the summands on the right-hand side of \eqref{WorstCaseUnionBound} is equal to \( \Ii{\varepsilon^2 }{\frac{d-k}{2}}{\frac{k}{2}} \). Therefore, a sufficient condition for \eqref{WorstCaseLowerBoundCondition2} to hold is that \begin{equation}\label{WorstCaseLowerBoundCondition3} \binom{n}{k} \,\I{\varepsilon^2}{\frac{d-k}{2}} {\frac{k}{2}} < 1. \end{equation} To simplify this expression, let us further bound the terms \( \binom{n}{k} \) and \( \Ii{\varepsilon^2 } {\frac{d-k}{2}} {\frac{k}{2}} \). For the binomial coefficient, we use the bound \begin{equation}\label{BinomialUpperBound} \binom{n}{k} < \left( \frac{ne}{k} \right)^k = \exp\left\{k \left(\log\left(\frac{n}{k}\right)+1\right)\right\}. \end{equation} For the incomplete beta function, we use the following tail bound from \cite{dasgupta2003elementary}. \newtheorem{IncomBetaUpperBound}[Lemmas]{Lemma} \begin{IncomBetaUpperBound}[\cite{dasgupta2003elementary}]\label{IncomBetaUpperBoundTheorem} Let \( L \thicksim \myBetai{\frac{k}{2}}{\frac{d-k}{2}} \), where \(k<d\) are natural numbers. Then for every \( \beta > 1 \), \begin{equation} \prob{L \geq \frac{\beta k}{d}} \leq \beta^{ \frac{k}{2} } \left( 1+ \frac{(1-\beta)k}{d-k} \right)^{\frac{d-k}{2}}. \end{equation} \end{IncomBetaUpperBound} To match our setting in \eqref{FundamentalEquality}, we choose \( \frac{\beta k}{d} = 1-\varepsilon^2 \). This relation implies that $\beta = \frac{1-\varepsilon^2}{s} $ and \( 1+ \frac{(1-\beta)k}{d-k} = \frac{\varepsilon^2}{1-s} \).
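The binomial bound \eqref{BinomialUpperBound} is elementary and can be spot-checked numerically; the pairs $(n,k)$ below are arbitrary illustrative values, not parameters from the paper:

```python
import math

# Numeric spot-check of binom(n, k) < (n*e/k)^k for a few illustrative pairs.
checks = []
for n, k in [(100, 5), (1000, 50), (64, 8)]:
    lhs = math.comb(n, k)
    rhs = (n * math.e / k) ** k
    checks.append(lhs < rhs)
print(checks)  # [True, True, True]
```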
Note that the lemma applies to $\beta>1$, which translates to the requirement that \( s < 1- \varepsilon^2 \). Applying the lemma gives the bound \begin{equation} \label{BetaUpperBound} \I{\varepsilon^2 }{\frac{d-k}{2}}{\frac{k}{2}} \leq \left( \frac{1-\varepsilon^2}{s} \right)^{ \frac{k}{2} } \left( \frac{\varepsilon^2}{1-s} \right)^{\frac{d-k}{2}}. \end{equation} To present this bound more compactly, we use the function \( \relent{\alpha}{\beta} = \alpha\log(\frac{\alpha}{\beta}) + (1-\alpha)\log(\frac{1-\alpha}{1-\beta}) \), which is the Kullback–Leibler divergence between the \( \text{Bernoulli}(\alpha)\) and \(\text{Bernoulli}(\beta)\) distributions. Then, the right-hand side of \eqref{BetaUpperBound} can be equivalently expressed as \begin{equation}\label{BetaUpperBound2} \left( \frac{1-\varepsilon^2}{s} \right)^{ \frac{k}{2} } \left( \frac{\varepsilon^2}{1-s} \right)^{\frac{d-k}{2}} = \exp\left\{-\frac{d}{2} \relent{1- s }{\varepsilon^2}\right\}. \end{equation} Using the bounds \eqref{BinomialUpperBound} and \eqref{BetaUpperBound},\eqref{BetaUpperBound2} in \eqref{WorstCaseLowerBoundCondition3}, we conclude that the relative area is upper-bounded by \begin{equation}\label{WorstCaseLowerBoundCondition4} \begin{aligned} \exp\left\{ -\frac{d}{2} \relent{1-s}{\varepsilon^2}+k\left(\log\left(\frac{n}{k}\right)+1\right) \right\}. \end{aligned} \end{equation} Note from \eqref{CardinalParametersDef} that \(\frac{n}{k}=\frac{o}{s} \). Consequently, the argument of the exponent in \eqref{WorstCaseLowerBoundCondition4} can be written as \begin{align} -k\left(\frac{1}{2s} \relent{1-s}{\varepsilon^2} - \log\left(\frac{o}{s}\right)-1 \right).
\end{align} Therefore, to guarantee that the relative area is strictly smaller than $1$, it is sufficient to require that \( \frac{1}{2s} \relent{1-s}{\varepsilon^2} - 1 \geq \log(\frac{o}{s}) \), implying that if \begin{equation}\label{eq:WorstCaseLowerBoundExp} o \leq s \exp\left\{\frac{1}{2s}\relent{1-s}{\varepsilon^2}-1\right\}, \end{equation} then universal sparse representation is impossible. Substituting the expression for $\relent{1-s}{\varepsilon^2}$ in \eqref{eq:WorstCaseLowerBoundExp} gives the right-hand side of \eqref{WorstCaseLowerBoundEq}, thus completing the proof of the first part of Theorem~\ref{WorstCaseLowerBoundTheorem}. Note that the constant $c_1(s,\varepsilon)$ in \eqref{WorstCaseLowerBoundEq} satisfies \begin{align} c_1(s,\varepsilon) \geq c_1(s,0) \geq \lim_{s\to 0} c_1(s,0) = e^{-\frac{3}{2}} \end{align} for every $\varepsilon$ and $s$ in the range $(0,1)$. We next examine the case \( \varepsilon^2>1-s \). On one hand, if the number of atoms \( n \) is smaller than the dimension \( d \), then there exists a linear subspace in \( \mathbb{R}^d \) which is orthogonal to all atoms. Signals from this subspace will have a representation error of~1. Therefore, we must have that \( o^{\dagger} \geq 1 \). On the other hand, we will show that a dictionary with \( d \) atoms is sufficient for representing all signals in \( \mathbb{R}^d \) with precision \( \sqrt{1-s} \). Specifically, take the dictionary to be the \( d \) dimensional identity matrix \( I_{d \times d } \). It is easy to see that in this setting, the representation error is \begin{equation} \label{eq:ErrorForIdentityMatrix} \epsilon^2\left(x , I_{d \times d } \right) = \min_{ |\kappa| = d-k } \sum_{i \in \kappa } x^2_{ i }. 
\end{equation} Therefore, the worst-case error can be written as \begin{equation} \label{eq:WorstCaseSignal} \max_{x \in S^{d-1} } \epsilon^2\left(x , I_{d \times d } \right) = 1- \min_{x \in S^{d-1} } \max_{ |\kappa| = k } \sum_{i \in \kappa } x^2_{ i }, \end{equation} where we used the fact that \( \norm{x}^2 = 1 \). Denoting \( \xi_i = x^2_i \), the right-hand term reduces to \begin{equation} \label{eq:ConvexOptimization} \min_{\xi \in \Delta^d } \max_{ |\kappa| = k } \sum_{i \in \kappa } \xi_i, \end{equation} where \( \Delta^d \) is the \( d \)-dimensional unit simplex. It is well known that max-\(k\)-sums are convex functions and that the unit simplex is a convex set. Therefore, this is a convex optimization problem. Since max-\(k\)-sums are invariant under permutations of the components, this optimization problem has a minimizer \( \xi^* \) that satisfies \begin{equation} \xi^*_i = \xi_0 \end{equation} for all $ i \in\{1,\ldots,d\} $, for some constant \(\xi_0\). From the constraints, we have that \( \xi_0 = \tfrac{1}{d} \). Substituting this solution in \eqref{eq:WorstCaseSignal} we obtain \begin{equation} \max_{x \in S^{d-1} } \epsilon^2\left(x , I_{d \times d } \right) = 1-\frac{k}{d}=1-s, \end{equation} which is less than \( \varepsilon^2 \) by assumption. Therefore, the required overcompleteness \( o^{\dagger} \) must be equal to 1. \subsection{Worst-case upper bound}\label{WorstCaseUpperBoundAnalysis} Next we prove Theorem \ref{WorstCaseUpperBoundTheorem}. As discussed above, a sufficient condition for universal sparse representation corresponds to covering the unit sphere \( S^{d-1} \) (see \eqref{SphereCover}).
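Before proceeding, the identity-dictionary argument concluding the previous subsection can be verified numerically. The following sketch (illustrative only, with arbitrary $d$ and $k$) checks that the flat vector attains the error $1-s$ and that no sampled unit vector does worse:

```python
import math, random

# For Phi = I_d, the best k-sparse representation keeps the k largest-magnitude
# entries of x, so eps^2(x, I) is the sum of the (d - k) smallest squared entries.
def eps2_identity(x, k):
    sq = sorted(v * v for v in x)
    return sum(sq[: len(x) - k])

d, k = 10, 3
flat = [1.0 / math.sqrt(d)] * d        # the minimizer xi_i^* = 1/d of the simplex problem
worst_flat = eps2_identity(flat, k)
print(round(worst_flat, 6))            # 1 - k/d = 0.7

random.seed(1)
worst_sampled = 0.0
for _ in range(10_000):                # random unit vectors never exceed the flat vector's error
    g = [random.gauss(0.0, 1.0) for _ in range(d)]
    nrm = math.sqrt(sum(v * v for v in g))
    worst_sampled = max(worst_sampled, eps2_identity([v / nrm for v in g], k))
print(worst_sampled <= worst_flat + 1e-9)  # True
```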
To obtain an upper bound, we will restrict attention to suboptimal dictionaries with a block-diagonal structure \begin{equation}\label{MyDictionary} \Phi = \begin{bmatrix} \Phi_{1} & 0 & \dots & 0 \\ 0 & \Phi_{2} & \dots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \dots & \Phi_{k} \end{bmatrix}, \end{equation} where \( \frac{d}{k} \) is a natural number and each $\Phi_i$ is a $\frac{d}{k} \times \frac{n}{k}$ matrix with columns $\{\varphi^{(i)}_j\}_{j=1}^{\frac{n}{k}}$. Interestingly, it turns out that the degradation in performance incurred by using such dictionaries is moderate, as we show experimentally in Sec.~\ref{AverageCaseUpperBoundAnalysis}. Let us denote the corresponding partitioning of the signal \( x \) as \begin{equation}\label{MySignal} x^T = [x_1^T \quad x_2^T \quad \cdots \quad x_k^T], \end{equation} where each $x_i$ is of length $\frac{d}{k}$. The block-diagonal structure \eqref{MyDictionary} allows us to simplify the problem of covering by dilations of subspaces into a problem of covering by balls. Specifically, \(\Phi\) of~\eqref{MyDictionary} is a universal \(k\)-sparse representation dictionary with precision \(\varepsilon \) for signals in $\mathbb{R}^d$ if the atoms of each sub-dictionary \(\Phi_i\) form an \(\varepsilon\)-ball covering of the unit sphere in \(\mathbb{R}^{ \frac{d}{k}}\). Indeed, if each \(\Phi_i\) forms an \(\varepsilon\)-ball covering, then it contains at least one atom whose relative error from \(x_i\) is less than \(\varepsilon \). Choosing this single atom from each dictionary, we obtain a $k$-sparse representation with a squared relative error of \begin{align} \label{ModificationInequality} \frac{\|x-\Phi\alpha\|^2}{\|x\|^2} &= \frac{\sum_{i=1}^{k}\limits \min_{1 \leq j \leq m }\limits \norm{x_i - \proj{x_i}{\varphi^{(i)}_j}}^2}{\norm{x}^2 } \nonumber\\ &\leq \frac{\sum_{i=1}^{k}\limits\norm{x_i}^2\varepsilon^2}{\|x\|^2} = \varepsilon^2.
\end{align} Hence, our problem is reduced to finding the minimal number of balls \(m=\frac{n}{k}\) with radius \( \varepsilon\) that suffice to cover the unit sphere in \(\mathbb{R}^{\frac{d}{k}} \). The problem of covering a sphere by balls has been studied extensively. In particular, several papers have derived bounds on the minimal covering \emph{density} \(\nu \) \cite{coxeter1959covering,rogers1963covering, boroczky2003covering, dumer2007covering,boroczky2004finite}. In our terminology, the covering density is the sum of all relative surface areas from the unit sphere that are covered by the balls. These \(m\) relative surface areas are all the same, and according to Corollary~\ref{SubspaceCoverageCorollary} are given by\footnote{Note that the area covered by a subspace of dimension $1$ spanned by a vector $\psi$ corresponds to all unit vectors that are $\varepsilon$-close to either $\psi$ or $-\psi$, hence the factor $\frac{1}{2}$.} \( \frac{1}{2} \Ii{\varepsilon^2 } {\frac{1-s}{2s}}{\frac{1}{2}} \). Therefore, \( \nu = \frac{m}{2}\Ii{\varepsilon^2} {\frac{1-s}{2s}} {\frac{1}{2}} \). In \cite{boroczky2003covering} it was proven that for all \( s^{-1}=\frac{d}{k} \geq 3 \), the minimal covering density for this problem is bounded by \begin{equation} \nu \leq h(s^{-1})\log(s^{-1})s^{-1}, \end{equation} where \( h(x)= (1 + \frac {2} {\log x} ) (1 + \frac{\log \log x}{\log x} + \frac{\sqrt{e}} {x\log x} ) \). From this bound we get that \begin{equation} m = \frac{2h(s^{-1})\log(s^{-1})s^{-1}}{\I{\varepsilon^2 } { \frac{1-s}{2s}}{\frac{1}{2} }} \end{equation} atoms per sub-dictionary suffice. Since $n=mk$, the corresponding overcompleteness is $o=n/d=mk/d=ms$. We thus showed that if \begin{equation}\label{eq:WorstCaseUpperBoundBeta} o \geq \frac{2 h(s^{-1})\log(s^{-1})}{\I{\varepsilon^2 } {\frac{1-s}{2s}} {\frac{1}{2}} }, \end{equation} then universal sparse representation is possible.
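The bound \eqref{eq:WorstCaseUpperBoundBeta} is indeed easy to evaluate numerically. The sketch below evaluates it for the arbitrary illustrative choices $s=0.2$ and $\varepsilon^2=0.1$, computing the regularized incomplete beta function by simple trapezoidal integration (adequate here, since the first parameter exceeds $1$):

```python
import math

# Evaluate o >= 2*h(1/s)*log(1/s) / I_{eps^2}((1-s)/(2s), 1/2) at s = 0.2, eps^2 = 0.1
# (arbitrary illustrative values, not parameters taken from the paper).
def reg_inc_beta(x, a, b, n=100_000):
    """I_x(a, b) by trapezoidal integration of the beta density (valid here since a > 1)."""
    lnB = math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)
    f = lambda t: 0.0 if t <= 0.0 else math.exp((a - 1) * math.log(t) + (b - 1) * math.log(1 - t) - lnB)
    h = x / n
    return h * (0.5 * (f(0.0) + f(x)) + sum(f(i * h) for i in range(1, n)))

def h_fun(x):  # the function h from the covering-density bound cited in the text
    lx = math.log(x)
    return (1 + 2 / lx) * (1 + math.log(lx) / lx + math.sqrt(math.e) / (x * lx))

s, eps2 = 0.2, 0.1
I = reg_inc_beta(eps2, (1 - s) / (2 * s), 0.5)
o_bound = 2 * h_fun(1 / s) * math.log(1 / s) / I
print(round(o_bound))  # on the order of a few thousand for these parameter values
```

For these parameters $a=\frac{1-s}{2s}=2$, where $\Ii{x}{2}{\frac{1}{2}}$ has the closed form $1-\tfrac{3}{2}\sqrt{1-x}+\tfrac{1}{2}(1-x)^{3/2}$, which can be used to validate the numerical integration.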
The bound~\eqref{eq:WorstCaseUpperBoundBeta} can be easily computed for any $\varepsilon$ and $s$. However, its asymptotic behavior for small $\varepsilon$ and $s$ is hard to interpret. To obtain a simpler expression, let us use the following lemma (see proof in Appendix \ref{BetaBoundProof}). \newtheorem{BetaBoundPF}[Lemmas]{Lemma} \begin{BetaBoundPF}[Lower bound on incomplete beta function] \label{BetaLowerBoundLemma} Let \( a >0\), \( 0 < b \leq 1 \), and \(x \in [0,1] \). Then \begin{equation} \label{BetaLowerBound} \Ii{x}{a}{b} \geq \frac{x^a}{ \Gamma(b) \left( (a+b)( 1 - x \tfrac{a}{a+1} ) \right)^{1-b} }. \end{equation} \end{BetaBoundPF} By setting \( a = \frac{1-s}{2s} \), \( b = \frac{1}{2} \) and \( x = \varepsilon^2\) we get \begin{equation} \I{\varepsilon^2} {\frac{1-s}{2s}} {\frac{1}{2}} \geq \sqrt{\frac{2s}{\pi(1-\varepsilon^2\frac{1-s}{1+s})} } \varepsilon^{s^{-1}-1}, \end{equation} where we used the fact that $\Gamma(\frac{1}{2})=\sqrt{\pi}$. Therefore, we have \begin{equation} o \geq \sqrt{2\pi(1-\varepsilon^2 \tfrac{1-s}{1+s})} h(s^{-1}) \log (s^{-1}) s^{-\frac{1}{2}} \left(\frac{1}{\varepsilon}\right)^{ s^{-1} - 1 }, \end{equation} which demonstrates \eqref{WorstCaseUpperBoundEq} and thus completes the proof of Theorem~\ref{WorstCaseUpperBoundTheorem}. It is easily verified that $c_2(s,\varepsilon)$ of \eqref{WorstCaseUpperBoundEq} satisfies $c_2(s,\varepsilon)\leq c_2(\frac{1}{3},0) < 12$ for every $\varepsilon\in(0,1)$ and $s\in(0,\frac{1}{3})$. A few words are in order regarding the tightness of the tail bound in Lemma \ref{BetaLowerBoundLemma}, at least for the case $b=\frac{1}{2}$. From Corollary~\ref{SubspaceCoverageCorollary}, we have that the relative surface area covered from the unit sphere in \( \mathbb{R}^d \) by a spherical cap with half chord~\( \varepsilon \) is given by \( \frac{1}{2} \Ii{\varepsilon^2}{\frac{d-1}{2}}{\frac{1}{2}} \).
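Lemma~\ref{BetaLowerBoundLemma} can be spot-checked at parameters where the incomplete beta function admits a closed form. For $a=2$, $b=\tfrac{1}{2}$ (corresponding to $s=0.2$), a direct integration gives $\Ii{x}{2}{\frac{1}{2}} = 1-\tfrac{3}{2}\sqrt{1-x}+\tfrac{1}{2}(1-x)^{3/2}$; the following sketch (with the arbitrary choice $x=0.1$) confirms the bound and shows that it is fairly tight:

```python
import math

# Spot-check of the lemma's bound
#   I_x(a, b) >= x^a / (Gamma(b) * ((a+b)*(1 - x*a/(a+1)))^(1-b))
# at a = 2, b = 1/2, x = 0.1, where I_x(2, 1/2) has a closed form.
a, b, x = 2.0, 0.5, 0.1
exact = 1 - 1.5 * math.sqrt(1 - x) + 0.5 * (1 - x) ** 1.5
bound = x ** a / (math.gamma(b) * ((a + b) * (1 - x * a / (a + 1))) ** (1 - b))
print(bound <= exact, round(bound / exact, 2))  # True 0.95
```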
This geometric quantity arises in many fields, including machine learning \cite{hanneke2014theory,safran2016quality}, estimation theory \cite{ramirez2012low}, communication \cite{chaaban2016free}, and even systematic biology~\cite{klingenberg2013evolutionary}. As a result, obtaining tight bounds for this quantity has attracted the interest of various researchers, \emph{e.g.}, \cite[Corollary 3.2]{boroczky2003covering}, \cite[Th. 3.1]{frankl1990some}. To the best of our knowledge, the tightest bounds are found in \cite{frankl1990some}, where it has been shown that for all \( \varepsilon \in (0,1) \) and \( d \in \mathbb{N} \), this relative area is lower bounded by \begin{align} \label{eq:FranklBounds} \frac{1}{2} \I{\varepsilon^2}{\frac{d-1}{2}}{\frac{1}{2}}> \frac{ 1 - \left( 1- \frac{1}{\sqrt{d}} \right)^{ \frac{d-1}{2} } }{\sqrt{2 \pi \frac{(d-1)^2}{d-2}\left(1-\varepsilon^2\left(1- \frac{1}{\sqrt{d}}\right)\right)}} \varepsilon^{d-1}. \end{align} Our lemma provides the lower bound \begin{equation}\label{eq:OurLowerBoundBeta} \frac{1}{2} \I{\varepsilon^2}{\frac{d-1}{2}}{\frac{1}{2}}\geq\frac{\varepsilon^{d-1} }{ \sqrt{ 2\pi d (1-\varepsilon^2\frac{d-1}{d+1}) }} . \end{equation} It is easily verified that \( \frac{(d-1)^2}{d-2}> d \), \( 1-d^{- \frac{1}{2} } \leq \frac{d-1}{d+1}\) and \( ( 1- d^{- \frac{1}{2} } )^{ \frac{d-1}{2} }>0 \). Thus, our lower bound is higher than \eqref{eq:FranklBounds} for all \( \varepsilon \) and \( d \). The authors of \cite{frankl1990some} also derived the upper bound \begin{equation} \label{eq:FranklBounds2} \frac{1}{2} \I{\varepsilon^2}{\frac{d-1}{2}}{\frac{1}{2}} < \frac{\varepsilon^{d-1}}{\sqrt{2\pi(d-1)(1-\varepsilon^2)}}. \end{equation} It can be seen that for high dimensions, the ratio between our lower bound and this upper bound tends to one, implying that they are asymptotically tight (the tightness of \eqref{eq:FranklBounds2} was also noted in \cite{frankl1990some}).
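The asymptotic tightness can also be illustrated numerically. In the ratio of \eqref{eq:OurLowerBoundBeta} to \eqref{eq:FranklBounds2}, the factor $\varepsilon^{d-1}$ cancels, leaving $\sqrt{\frac{(d-1)(1-\varepsilon^2)}{d(1-\varepsilon^2\frac{d-1}{d+1})}}$; the sketch below evaluates it for the arbitrary choice $\varepsilon^2=0.25$:

```python
import math

# Ratio of the lower bound (eq:OurLowerBoundBeta) to the upper bound
# (eq:FranklBounds2); the eps^(d-1) factor cancels, avoiding underflow.
eps2 = 0.25  # arbitrary illustrative value of epsilon^2
ratios = [math.sqrt((d - 1) * (1 - eps2) / (d * (1 - eps2 * (d - 1) / (d + 1))))
          for d in (10, 100, 1000, 10_000)]
print([round(r, 3) for r in ratios])  # [0.921, 0.992, 0.999, 1.0]
```

Working with the analytically simplified ratio instead of the two bounds separately is also numerically necessary, since $\varepsilon^{d-1}$ underflows to zero in floating point for large $d$.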
\subsection{Average-case lower bound} \label{AverageCaseLowerBoundAnalysis} We now turn to prove Theorem \ref{AverageCaseLowerBoundTheorem}, which provides an average-case lower bound on the overcompleteness for the setting in which $x$ is a random vector. As mentioned in Section~\ref{MainResults}, to obtain bounds that do not depend on the distribution of $x$, we consider the worst case distribution (\textit{i.e.} the one leading to the lowest optimal success probability). We begin with the following observation, whose proof is provided in Appendix~\ref{WorstCaseDistributionProof}. \newtheorem{WorstCaseDistribution}[Lemmas]{Lemma} \begin{WorstCaseDistribution}[Worst case distribution] \label{WorstCaseDistributionLemma} Any isotropic distribution \( \Theta\) achieves the \emph{minimum} of the optimal success probability, \textit{i.e.} \begin{equation} \mathcal{P}^* = \optSuccProb{\Theta}. \end{equation} \end{WorstCaseDistribution} This lemma shows that we can safely focus on the case where \( x \thicksim N(0,I_{d \times d})\), which is an isotropic distribution. Our derivation is similar to the one in Sec.~\ref{WorstCaseLowerBoundAnalysis}. For any dictionary~$\Phi$, the success probability \( \probi{ \epsilon(x,\Phi) \leq \varepsilon } \) can be expressed in terms of a union of events \begin{equation}\label{UnionProbability} \prob{\epsilon(x,\Phi) \leq \varepsilon } = \prob{\bigcup\limits_{i=1}^{N} \norm{\proj{x}{\psi_i}}^2 \geq 1-\varepsilon^2}, \end{equation} where \( \{\psi_i\}_{i=1}^N \) are all the $ N = \binom{n}{k} $ subspaces that are spanned by some choice of \(k\) atoms from \(\Phi\). Applying the union bound to \eqref{UnionProbability}, we get \begin{equation}\label{UnionBoundProb} \prob{ \bigcup\limits_{i=1}^{N} \norm{ \proj{x}{\psi_i} }^2 \geq 1-\varepsilon^2 } \leq \sum\limits_{i=1}^{N} \prob{ \norm{ \proj{x}{\psi_i} }^2 \geq 1-\varepsilon^2 }.
\end{equation} From Lemma~\ref{RandomSubspaceProjectionTheorem}, we know that \(\norm{\proj{x}{\psi_i}}^2 \thicksim \myBetai{\frac{k}{2}}{\frac{d-k}{2}} \) for all \(i\). Thus, using the inequality from \eqref{BetaUpperBound},\eqref{BetaUpperBound2} we have that \begin{equation}\label{eq:UnionBoundRandom1} \prob{\bigcup\limits_{i=1}^{N} \norm{\proj{x}{\psi_i}}^2 \geq 1-\varepsilon^2} \leq \binom{n}{k} \exp\left\{-\frac{d}{2} \relent{1-s}{\varepsilon^2}\right\}. \end{equation} To bound the binomial coefficient, we use the next lemma from \cite{ash1965information}. \newtheorem{BinomialCoefficientBounds}[Lemmas]{Lemma} \begin{BinomialCoefficientBounds}[\cite{ash1965information}] \label{BinomialCoefficientBoundsTheorem} Let \( n,k \in \mathbb{N} \) be such that \( 0< \frac{k}{n} <1 \). Then \begin{equation} \frac{\exp\{n \entropy{\frac{k}{n}} \} }{ \sqrt{ 8 k (1 - \frac{k}{n} ) } } \leq \binom{n}{k} \leq \frac{\exp\{ n \entropy{\frac{k}{n}} \}}{ \sqrt{ 2 \pi k (1 - \frac{k}{n} ) } }, \end{equation} where \(\entropyi{\alpha}\) is the entropy of the \(\text{Bernoulli}(\alpha)\) distribution, defined as \begin{equation}\label{Entropy} \entropy{\alpha} = -\alpha\log(\alpha) - (1-\alpha)\log(1-\alpha). \end{equation} \end{BinomialCoefficientBounds} This bound on the binomial coefficient gives a slightly better result than the bound \eqref{BinomialUpperBound}. Using this lemma in \eqref{eq:UnionBoundRandom1}, we obtain \begin{equation} \prob{\epsilon(x,\Phi) \leq \varepsilon } \leq \frac{ \exp\left\{ - \frac{d}{2} \relent{1-s}{\varepsilon^2} + n \entropy{ \frac{k}{n} } \right\} } { \sqrt{2\pi k(1 - \frac{k}{n})}}. \end{equation} The right-hand side of this inequality is independent of the dictionary \( \Phi \).
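The entropy bounds of Lemma~\ref{BinomialCoefficientBoundsTheorem} can be spot-checked directly; the pairs $(n,k)$ below are arbitrary, and $\entropyi{\cdot}$ uses natural logarithms, as elsewhere in the text:

```python
import math

# Spot-check of the entropy bounds on the binomial coefficient:
# exp(n*H(k/n))/sqrt(8k(1-k/n)) <= C(n,k) <= exp(n*H(k/n))/sqrt(2*pi*k*(1-k/n)).
def H(p):
    return -p * math.log(p) - (1 - p) * math.log(1 - p)

checks = []
for n, k in [(100, 20), (500, 50), (64, 32)]:
    p = k / n
    lo = math.exp(n * H(p)) / math.sqrt(8 * k * (1 - p))
    hi = math.exp(n * H(p)) / math.sqrt(2 * math.pi * k * (1 - p))
    checks.append(lo <= math.comb(n, k) <= hi)
print(checks)  # [True, True, True]
```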
Therefore, by taking the maximum over all dictionaries \( \Phi \in \mathbb{R} ^{ d \times n } \) we get \begin{equation}\label{AverageCaseLowerBoundResult} \mathcal{P}^* \leq \frac{ \exp\{ - d(\frac{1}{2} \relent{1-s}{\varepsilon^2} - o \,\entropy{ \frac{s}{o} }) \} } { \sqrt{2\pi d s (1 - \frac{s}{o})}}, \end{equation} where we used~\eqref{CardinalParametersDef}. This proves the second statement of Theorem~\ref{AverageCaseLowerBoundTheorem}. To prove the first part, let us derive a sufficient condition for the argument of the exponent in~\eqref{AverageCaseLowerBoundResult} to be negative. It is easy to show that \( -(1-\alpha)\log(1-\alpha) \leq \alpha \) for all \( \alpha \in [0,1] \). Hence \( o \entropyi{\frac{s}{o}} \leq -s \log(\frac{s}{o}) +s \), which implies that \begin{equation} \frac{1}{2} \relent{1-s}{\varepsilon^2} - o \,\entropy{ \frac{s}{o} } \geq \frac{1}{2} \relent{1-s}{\varepsilon^2} + s \log\left(\frac{s}{o}\right) -s . \end{equation} Therefore, to ensure that the left-hand side is positive, we will require that \( \frac{1}{2} \relent{1-s}{\varepsilon^2} + s \log(\frac{s}{o}) -s > 0 \). Isolating~$o$ leads to the condition \begin{equation}\label{eq:UnionBoundRandom2} o \leq e^{-1} s \exp\left\{\frac{1}{2s} \relent{1-s}{\varepsilon^2} \right\}. \end{equation} We thus conclude that when this condition holds, \begin{equation} \lim\limits_{d \rightarrow \infty } \mathcal{P}^* = 0. \end{equation} Substituting $\relent{1-s}{\varepsilon^2}$, the right-hand side of \eqref{eq:UnionBoundRandom2} coincides with the right-hand side of \eqref{AverageCaseLowerBoundEq}, thus completing the proof of Theorem~\ref{AverageCaseLowerBoundTheorem}. \subsection{Average-case upper bound}\label{AverageCaseUpperBoundAnalysis} We next prove Theorem \ref{AverageCaseUpperBoundTheorem}, which provides an average-case upper bound on the required overcompleteness. Recall from Lemma \ref{WorstCaseDistributionLemma} that the worst case distribution is isotropic.
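The elementary step $o\entropyi{\frac{s}{o}} \leq -s\log(\frac{s}{o})+s$ used in the lower-bound derivation above can be checked numerically; the $(s,o)$ pairs below are arbitrary illustrative values:

```python
import math

# Numeric check of o*H(s/o) <= -s*log(s/o) + s, which follows from
# the inequality -(1-a)*log(1-a) <= a on [0, 1].
def H(p):
    return -p * math.log(p) - (1 - p) * math.log(1 - p)

pairs = [(0.1, 2.0), (0.2, 5.0), (0.05, 1.5)]
ok = all(o * H(s / o) <= -s * math.log(s / o) + s for s, o in pairs)
print(ok)  # True
```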
Therefore, as in section \ref{AverageCaseLowerBoundAnalysis}, we will analyze the case where \( x \thicksim N(0,I_{d \times d}) \). Since the signal \( x \) is stochastic, it will be more convenient to use a stochastic dictionary as well. The next lemma shows that this does not change the probability of success. \newtheorem{StochasticDictionary}[Lemmas]{Lemma} \begin{StochasticDictionary}[Stochastic dictionary] \label{StochasticDictionaryLemma} Let \( x \thicksim \Omega\) be a $d$-dimensional random vector and denote by \( \mathcal{D} (\mathbb{R} ^{ d \times n }) \) the collection of all distributions over \(\mathbb{R} ^{ d \times n }\). Then \begin{equation} \max_{\Phi \in \mathbb{R} ^{ d \times n } } \prob{ \epsilon(x,\Phi) \leq \varepsilon} = \max_{ \Theta \in \mathcal{D} (\mathbb{R} ^{ d \times n }) } \prob{ \epsilon(x,\Psi) \leq \varepsilon }, \end{equation} where \( \Psi\) in the right-hand side is a random dictionary with distribution $\Theta \in \mathcal{D} (\mathbb{R} ^{ d \times n })$, independent of \(x\). \end{StochasticDictionary} Here, the probability in the left-hand side is over the randomness of $x$ alone while the probability in right-hand side is over the randomness of both $x$ and $\Psi$. Similarly to Sec.~\ref{WorstCaseUpperBoundAnalysis}, to obtain an upper bound on the required overcompleteness, we introduce two sub-optimal restrictions. First, instead of searching for the distribution of the dictionary that maximizes the success probability, we choose one particular distribution. Specifically, we focus on a block-diagonal dictionary of the form~\eqref{MyDictionary} with i.i.d.~entries distributed as $N(0,1)$. This choice is made mainly for convenience, and is worse than \emph{e.g.}, a full Gaussian dictionary (whose atoms are spread isotropically). Second, we require the sparse representation to contain exactly one atom from each of the $k$ sub-dictionaries. 
Clearly, this latter limitation can only increase the representation error as it reduces the number of subspaces from \( \binom{n}{k} \) to \( (\frac{n}{k})^k \leq \binom{n}{k} \). The advantage of this limitation is that, as noted in Sec.~\ref{WorstCaseUpperBoundAnalysis}, it makes the squared representation error~\eqref{RepresentationErrorEq} separable (see~\eqref{ModificationInequality}). This allows us to bound the normalized $k$-sparse representation error of any signal $x$ over our block-diagonal $\Phi$ by \begin{equation}\label{ModificationInequality2} \epsilon^2(x,\Phi) \leq \frac{1}{\norm{x}^2 } \sum_{i=1}^{k} \min_{1 \leq j \leq m } \norm{x_i - \proj{x_i}{\varphi^{(i)}_j}}^2, \end{equation} where we used the notation \eqref{MySignal} for the partitioning of the signal into $k$ parts. Figure~\ref{fig:Modifications} illustrates the effect of these restrictions on the probability of success, when using the OMP algorithm to obtain the sparse representation. As can be seen, compared to using a full Gaussian dictionary with no restriction on the atom selection (black curve), each of these modifications introduces only a moderate increase in the required overcompleteness (red and blue curves). Combining the two restrictions introduces an additional minor increase in the overcompleteness (green curve). Our approach is to lower bound the green curve. \begin{figure}[!t] \centering \includegraphics[width=3.5in]{Fig3} \caption{Probability of successfully obtaining a sparse representation of a white Gaussian signal using the OMP algorithm, as a function of the overcompleteness of the dictionary. Here, the dimension is $d = 100$, the sparsity factor is $s = 0.2$, and the allowed reconstruction error corresponds to $\mathrm{SNR} = 10 [\text{dB}] $. The black dotted curve shows the performance of standard OMP over a full Gaussian dictionary. The red solid curve is with a full Gaussian dictionary but with choice limitation.
The blue dashed line is with a block diagonal Gaussian dictionary and without choice limitation. The green circled curve is with a block diagonal Gaussian dictionary and with choice limitation. We focus on lower bounding the green curve, thus also bounding the best achievable performance.} \label{fig:Modifications} \end{figure} Since $\normi{x_i- \proj{x_i}{\varphi^{(i)}_j}}^2 = \normi{x_i}^2- \normi{\proj{x_i}{\varphi^{(i)}_j}}^2$ and $\sum_{i=1}^k \normi{x_i}^2=\normi{x}^2$, the inequality in \eqref{ModificationInequality2} can be written as \begin{equation}\label{UpdatedRepresentationError} \epsilon^2(x,\Phi) \leq 1-\frac{1}{\norm{x}^2} \sum_{i=1}^{k} \max_{1 \leq j \leq m } \norm{\proj{x_i}{\varphi^{(i)}_j}}^2. \end{equation} For simplicity, let us denote \begin{equation}\label{RandomVariableDefinition} \gamma_i = \frac{\norm{x_i}^2}{\norm{x}^2} , \quad y^{(i)}_j = \frac{\norm{ \proj{x_i}{\varphi^{(i)}_j}}^2} {\norm{x_i}^2}, \quad Z_i = \max_{1 \leq j \leq m } y^{(i)}_j. \end{equation} Then we can rewrite \eqref{UpdatedRepresentationError} as \begin{equation}\label{eq:gammai_Zi} \epsilon^2(x,\Phi) \leq 1-\sum_{i=1}^{k} \gamma_i Z_i. \end{equation} Thus, we have the following lower bound on the optimal success probability: \begin{equation}\label{ProbabilityInequality} \mathcal{P}^* \geq \prob{ \epsilon(x,\Phi) \leq \varepsilon } \geq \prob{ \sum_{i=1}^{k} \gamma_i Z_i \geq 1- \varepsilon^2 }, \end{equation} where the first inequality is because our $\Phi$ is block diagonal and the second inequality follows from \eqref{eq:gammai_Zi}, which is due to the restriction on the atom selection. Our goal is to show that the sum $\sum_{i=1}^{k} \gamma_i Z_i$ converges to its mean as $k\to\infty$, so that if its mean is greater than $1- \varepsilon^2$, then $ \mathcal{P}^* \to 1$. We will show this by explicitly calculating the mean and variance of the sum.
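As a quick sanity check on \eqref{eq:gammai_Zi}, the quantities $\gamma_i$ and $Z_i$ can be simulated directly. The sketch below (plain Python; the parameter values $d=100$, $k=20$, $m=50$ are illustrative, and random Gaussian vectors stand in for the sub-dictionary atoms) draws a white Gaussian signal, computes the squared projection fractions, and confirms that the $\gamma_i$ sum to one and that the resulting bound on $\epsilon^2$ lies in $[0,1]$:

```python
import random

random.seed(0)
d, k, m = 100, 20, 50        # dimension, number of blocks, atoms per sub-dictionary
p = d // k                   # block size

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

x = [random.gauss(0.0, 1.0) for _ in range(d)]
blocks = [x[i * p:(i + 1) * p] for i in range(k)]      # x_1, ..., x_k
norm2 = dot(x, x)

gamma = [dot(b, b) / norm2 for b in blocks]            # gamma_i = ||x_i||^2 / ||x||^2
Z = []
for b in blocks:
    best = 0.0
    for _ in range(m):
        phi = [random.gauss(0.0, 1.0) for _ in range(p)]
        # y_j = <x_i, phi_j>^2 / (||x_i||^2 ||phi_j||^2): squared projection fraction
        y = dot(b, phi) ** 2 / (dot(b, b) * dot(phi, phi))
        best = max(best, y)
    Z.append(best)                                     # Z_i = max_j y_j

eps2_bound = 1.0 - sum(g * z for g, z in zip(gamma, Z))   # RHS of (eq:gammai_Zi)
```

Sweeping $m$ in this sketch shows `eps2_bound` decreasing as more atoms are added, consistent with the role of the overcompleteness in the analysis below.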
From Lemma~\ref{RandomSubspaceProjectionTheorem}, we know that $ y^{(i)}_j \thicksim \myBetai{\frac{1}{2}}{\frac{1-s}{2s}}$ for all $i,j$, and \( \gamma_i \thicksim \myBetai{\frac{d}{2k}}{\frac{d(k-1)}{2k}} \) for all $i$. Using the properties of the Beta distribution, this implies that \begin{equation}\label{MeanAndVarianceGamma} \Ei{\gamma_i} = \frac{1}{k}, \qquad \mathrm{Var}(\gamma_i) = \frac{2}{k^2} \frac{k-1}{d+2}. \end{equation} Since the random variables $\{ y^{(i)}_j : i \leq k , j \leq m \}$ are identically distributed, so are \( \{Z_i\}\). Furthermore, it can be shown that \( \{ y^{(i)}_j : i \leq k , j \leq m \} \) are mutually independent and are also independent of \( \{\gamma_i\}\) (see proof in Appendix \ref{DistributionPropertiesProof}). This implies that the random variables \( \{Z_i\}\) are also mutually independent and are independent of \( \{\gamma_i\}\). Let us denote \begin{equation}\label{MeanAndVarianceZ} \E{Z_i} = \mu, \qquad \mathrm{Var}(Z_i) = \sigma ^2. \end{equation} Then \begin{equation}\label{SeriesExpectation} \E{\sum_{i=1}^{k} \gamma_i Z_i} = \sum_{i=1}^{k} \E{\gamma_i} \E{Z_i} = \sum_{i=1}^{k} \frac{1}{k} \mu = \mu. \end{equation} Furthermore, \begin{align} \mathrm{Var}\left(\sum_{i=1}^{k} \gamma_i Z_i\right) &= \E {\left( \sum_{i=1}^{k} \gamma_i Z_i-\mu \right)^2} \nonumber\\ &= \E{\left( \sum_{i=1}^{k} \gamma_i (Z_i-\mu) \right)^2 } \nonumber\\ &= \E{\sum_{i=1}^{k} \gamma_i^2 (Z_i-\mu)^2} \nonumber\\ &= \sum_{i=1}^{k} \E{\gamma_i^2} \E{(Z_i-\mu)^2}, \end{align} where the second equality follows from the fact that \(\sum_{i=1}^{k} \gamma_i = 1\) and the third and fourth equalities follow from the independence of $\{Z_i\}$ and $\{\gamma_i\}$, which implies that \( \E{\gamma_i \gamma_j (Z_i-\mu)(Z_j-\mu)} = \E{\gamma_i \gamma_j} \E{Z_i-\mu}\E{Z_j-\mu}\) for all \( i \neq j \) (and obviously \( \E{Z_i-\mu}=0 \) for all $i$). 
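The moments in \eqref{MeanAndVarianceGamma} follow from the standard Beta formulas $\E{W}=\frac{a}{a+b}$ and $\mathrm{Var}(W)=\frac{ab}{(a+b)^2(a+b+1)}$ for $W\thicksim\mathrm{Beta}(a,b)$, and the identity can be verified exactly in rational arithmetic (the values $d=100$, $k=20$ below are arbitrary):

```python
from fractions import Fraction as F

def beta_mean_var(a, b):
    # mean and variance of a Beta(a, b) random variable
    mean = a / (a + b)
    var = a * b / ((a + b) ** 2 * (a + b + 1))
    return mean, var

d, k = F(100), F(20)                 # illustrative values of d and k = s*d
a = d / (2 * k)                      # shape parameters of gamma_i
b = d * (k - 1) / (2 * k)
mean, var = beta_mean_var(a, b)
assert mean == 1 / k                              # E[gamma_i] = 1/k
assert var == F(2, 1) / k**2 * (k - 1) / (d + 2)  # Var(gamma_i) = (2/k^2)(k-1)/(d+2)
```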
Using the second moment formula \(\E{\gamma_i^2} = \mathrm{Var}(\gamma_i) + (\E{\gamma_i})^2\), with the variance and expectation given in \eqref{MeanAndVarianceGamma}, and writing \(\E{(Z_i-\mu)^2} = \sigma^2\), we can further simplify this expression as \begin{align}\label{SeriesVariance} \mathrm{Var}\left(\sum_{i=1}^{k} \gamma_i Z_i\right) &= \sigma^2\sum_{i=1}^k \left(\mathrm{Var}(\gamma_i) + (\E{\gamma_i})^2\right) \nonumber\\ &= k\sigma^2\left(\frac{1}{k^2}+\frac{2}{k^2} \frac{k-1}{d+2}\right) \nonumber\\ &\leq \frac{(1+2s)\sigma^2}{k}. \end{align} We see that the expectation does not depend on $k$, whereas the variance decays as \( \frac{1}{k} \) (equivalently $\frac{1}{d}$). Therefore, from Chebyshev's inequality, this sum converges in probability to its mean. Consequently, in order for the optimal success probability to converge to $1$, we must require that \( \mu \geq 1- \varepsilon^2 \). Since $Z_i$ is the maximum over i.i.d.\ variables, to bound its expectation $\mu$ we will use the next lemma \cite[Sec.~4.5]{david2004order}. \newtheorem{MaxLowerBound}[Lemmas]{Lemma} \begin{MaxLowerBound}[Lower bound on the order statistics of RVs] \label{MaxLowerBoundLemma} Let \( \{q_j\}_{j=1}^m \) be a set of i.i.d.\ random variables defined on an interval \( \mathcal{I} \subseteq \mathbb{R} \), with marginal cumulative distribution function \( F_q(\alpha) \). Assume that \( F_q \) is differentiable, concave and strictly monotonically increasing on \( \mathcal{I} \). Let \( q_{[i]} \) denote the \(i\)th smallest value among \( \{q_j\}_{j=1}^m \). Then \begin{equation} \Ei{q_{[i]}} \geq F_q^{-1} \left(\frac{i}{m+1}\right), \end{equation} where \( F_q^{-1}\) is the inverse function of \(F_q\) on the interval \( \mathcal{I}\) (satisfying \( F_q^{-1}(F_q(\alpha)) = \alpha \quad \forall \alpha \in \mathcal{I} \)). \end{MaxLowerBound} As mentioned above, the random variables $\{y_j^{(i)}\}$ are i.i.d.\ and distributed as $\myBetai{\frac{1}{2}}{\frac{1-s}{2s}}$.
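Lemma~\ref{MaxLowerBoundLemma} can be checked in closed form on a toy distribution of our choosing (not the Beta variables of the text): for $q_j = U_j^2$ with $U_j$ uniform on $[0,1]$, the CDF $F_q(\alpha)=\sqrt{\alpha}$ is concave and increasing, $\E{\max_j q_j} = \frac{m}{m+2}$, and $F_q^{-1}(\frac{m}{m+1}) = (\frac{m}{m+1})^2$, so the lemma's inequality becomes an exact rational statement:

```python
from fractions import Fraction as F

# Toy distribution with concave, increasing CDF: q_j = U_j^2, U_j ~ Uniform(0,1),
# so F_q(t) = sqrt(t).  Closed forms: E[max_j q_j] = m/(m+2) and
# F_q^{-1}(m/(m+1)) = (m/(m+1))^2.  The lemma says the first dominates the second.
for m in range(1, 500):
    lhs = F(m, m + 2)            # E[q_{[m]}], the expected maximum
    rhs = F(m, m + 1) ** 2       # F_q^{-1}(m/(m+1))
    assert lhs >= rhs
```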
Their cumulative distribution function \( F_y(\alpha) = \Ii{\alpha} {\frac{1}{2}} {\frac{1-s}{2s}} \) is differentiable, concave and strictly monotonically increasing. Hence, the conditions of Lemma~\ref{MaxLowerBoundLemma} are satisfied and we have that \begin{equation}\label{ExpectationBound} \mu=\Ei{Z_i}= \E{\max_{1 \leq j \leq m } y^{(i)}_j} \geq I^{-1}_{\frac{m}{m+1}} \left( \frac{1}{2},\frac{1-s}{2s} \right) . \end{equation} For any given $\delta>0$, let us require that \begin{align}\label{eq:expect_one_minus_delta} I^{-1}_{\frac{m}{m+1}}\left(\frac{1}{2},\frac{1-s}{2s}\right)\geq 1-\delta^2, \end{align} so that from \eqref{ExpectationBound}, we have that $\mu\geq1-\delta^2$. In particular, if we set $\delta=\varepsilon$, then we get $\mu\geq1-\varepsilon^2$, as desired. Due to the monotonicity of the function $I_{\frac{m}{m+1}}(\frac{1}{2},\frac{1-s}{2s})$ w.r.t.\@ $m$, the requirement \eqref{eq:expect_one_minus_delta} translates into the condition \begin{equation} \label{SubDictionaryConnection} m \geq \frac{\I {1-\delta^2} {\frac{1}{2}} {\frac{1-s}{2s}}} { \I{\delta^2}{\frac{1-s}{2s}}{\frac{1}{2}}} = \frac{ \prob{ y_j^{(i)} \leq 1-\delta^2 } } {\prob{ y_j^{(i)} \geq 1-\delta^2 } }. \end{equation} Since $o=n/d=mk/d=ms$, we conclude that if the overcompleteness ratio satisfies \begin{equation}\label{eq:overcomp_Idelta} o \geq \frac{ s } { \I{\delta^2}{\frac{1-s}{2s}}{\frac{1}{2}} }, \end{equation} then we are guaranteed to have $ \mu \geq 1-\delta^2$. Using \( \delta =\varepsilon \) and lower-bounding the denominator using Lemma \ref{BetaLowerBoundLemma}, we get that if \begin{equation} o \geq \sqrt{\frac{\pi}{2} (1-\varepsilon^2 \frac{1-s}{1+s} ) } s^{\frac{1}{2}} \left(\frac{1}{\varepsilon}\right)^{ s^{-1} - 1 }, \end{equation} then \(\lim\limits_{d \rightarrow \infty } \mathcal{P}^* = 1\), which proves the first part of Theorem \ref{AverageCaseUpperBoundTheorem}. 
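The ratio on the right-hand side of \eqref{SubDictionaryConnection} can be estimated by Monte Carlo without special functions: for $s=0.2$ the block dimension is $p=1/s=5$, and $y^{(i)}_j$ is the squared cosine between a $p$-dimensional Gaussian vector and a fixed axis. A rough sketch (the values $s=0.2$ and $\delta^2=0.5$ are arbitrary illustrations):

```python
import random

random.seed(1)
s = 0.2
p = int(1 / s)        # block dimension, so y ~ Beta(1/2, (1-s)/(2s))
delta2 = 0.5
N = 100_000
hits = 0
for _ in range(N):
    v = [random.gauss(0.0, 1.0) for _ in range(p)]
    y = v[0] ** 2 / sum(u * u for u in v)   # squared cosine with the first axis
    if y >= 1 - delta2:
        hits += 1
# estimate of P(y <= 1-delta^2) / P(y >= 1-delta^2), the minimal m in (SubDictionaryConnection)
ratio = (N - hits) / hits
```

For these values the exact ratio is $I_{0.5}(\frac12,2)/(1-I_{0.5}(\frac12,2))\approx 7.6$, i.e., roughly eight atoms per sub-dictionary already suffice to push $\mu$ above $1-\delta^2$.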
Next, we prove the second part of the theorem, which requires bounding the probability \( \probi{ \sum_{i=1}^{k} \gamma_i Z_i \geq 1- \varepsilon^2 } \) from below for any finite dimension \( d \). We will assume from this point on that \eqref{eq:overcomp_Idelta} holds with some \( \delta < \varepsilon \), which ensures that $\mu \geq 1-\delta^2 > 1-\varepsilon^2$. Note that \( \{ \gamma_i \} \) are not independent, as \( \sum_{i=1}^{k} \gamma_i = 1\) w.p.~1. Therefore, we cannot use bounds such as Hoeffding's inequality. Instead, we will use Cantelli's inequality, which can be seen as a one-sided Chebyshev inequality. Cantelli's inequality states that for any random variable \(W\), \begin{equation}\label{GeneralCantelliInequality} \prob{W-\E{W} \geq \lambda } ~ \begin{cases} \leq \frac{\mathrm{Var}(W)}{\mathrm{Var}(W) + \lambda^2} ,& \lambda > 0,\\ \geq 1-\frac{\mathrm{Var}(W)}{\mathrm{Var}(W) + \lambda^2} ,& \lambda < 0. \end{cases} \end{equation} In our case, this inequality gives \begin{align}\label{CantelliInequalitySimple} &\prob{\sum_{i=1}^{k} \gamma_i Z_i \geq 1-\varepsilon^2 } \nonumber\\ &\hspace{2.5cm}= \prob{\sum_{i=1}^{k} \gamma_i Z_i - \mu \geq -(\mu -1+\varepsilon^2) } \nonumber\\ &\hspace{2.5cm}\geq 1-\frac{\mathrm{Var}(\sum_{i=1}^{k} \gamma_i Z_i)}{\mathrm{Var}(\sum_{i=1}^{k} \gamma_i Z_i) + (\mu -1+\varepsilon^2)^2} \nonumber\\ &\hspace{2.5cm}= \left(1+\frac{\mathrm{Var}(\sum_{i=1}^{k} \gamma_i Z_i)}{(\mu -1+\varepsilon^2)^2}\right)^{-1}. \end{align} Thus, using \eqref{SeriesVariance} and writing $k=s d$, we get from \eqref{ProbabilityInequality} that \begin{equation}\label{NumericStochasticBound} \mathcal{P}^* \geq \left(1+\frac{(1+2s)\sigma^2}{(\mu -1+\varepsilon^2)^2 s} \frac{1}{d} \right)^{-1}. \end{equation} We note that this bound can be computed by evaluating the terms \(\mu\) and \(\sigma^2\) numerically, either by numerical integration or by using Monte Carlo simulations. 
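Cantelli's inequality \eqref{GeneralCantelliInequality} can be verified exactly for a Bernoulli variable, whose upper tail is available in closed form (a toy check in rational arithmetic, unrelated to the specific sum $\sum_i \gamma_i Z_i$):

```python
from fractions import Fraction as F

def cantelli_upper(var, lam):
    # upper branch of Cantelli: P(W - E[W] >= lam) <= var / (var + lam^2), lam > 0
    return var / (var + lam * lam)

# Exact check for W ~ Bernoulli(q): P(W - q >= lam) equals q when lam <= 1-q, else 0
for qn in range(1, 10):
    q = F(qn, 10)
    var = q * (1 - q)
    for ln in range(1, 20):
        lam = F(ln, 10)
        tail = q if lam <= 1 - q else F(0)
        assert tail <= cantelli_upper(var, lam)
```

Note that the bound is tight here at $\lambda = 1-q$, where the tail equals $q$ and the bound equals $q$ as well.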
To obtain an expression that does not require numerical approximations, we can replace $\mu$ in \eqref{NumericStochasticBound} by its lower bound $1-\delta^2$ and also replace $\sigma^2$ by an upper bound, as follows (see proof in Appendix \ref{VarianceUpperBoundProof}). \newtheorem{VarianceBound}[Lemmas]{Lemma} \begin{VarianceBound}[Upper bound on the variance] \label{VarianceUpperBoundLemma} Let \( W \) be a random variable defined on the interval \( [0,1] \) with cumulative distribution function \( F_W \). Then for every \( \rho \in [0,\frac{1}{2}] \), \begin{equation} \mathrm{Var}(W) \leq (1-2\rho)F_W(1-2\rho)+\rho^2. \end{equation} \end{VarianceBound} In our case, \( F_{Z_i}(\alpha) = ( \I{\alpha}{\frac{1}{2}}{\frac{1-s} {2s}})^m \). Thus, by setting \( \rho = \varepsilon^2 \leq \frac{1}{2} \) we have \begin{equation} \label{VarBound} \sigma^2 = \mathrm{Var}(Z_i) \leq (1-2\varepsilon^2)( \I{1-2\varepsilon^2} {\tfrac{1}{2}}{\tfrac{1-s} {2s}})^m+\varepsilon^4. \end{equation} Remark: There exist several bounds on the variance of a bounded random variable, the most popular of which are Popoviciu's inequality and the Bhatia-Davis inequality. Popoviciu's inequality uses no knowledge of the probability distribution besides its support, and thus only gives \( \sigma^2\leq \frac{1}{4} \) in our case. The Bhatia-Davis inequality relies on the additional knowledge of the expectation and thus gives the slightly better result \(\sigma^2\leq(1-\mu )\mu \) in our case. However, considering that \( 1-\mu \simeq \varepsilon^2 \), this bound is only on the order of $\varepsilon^2$. Lemma \ref{VarianceUpperBoundLemma}, in contrast, additionally exploits knowledge of the cumulative distribution function of the random variable. This gives us an upper bound that scales like \( \varepsilon^4 \), as the left term in \eqref{VarBound} can become arbitrarily small when $m$ is large.
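Lemma~\ref{VarianceUpperBoundLemma} can likewise be checked exactly on a simple case: for $W$ uniform on $[0,1]$ we have $\mathrm{Var}(W)=\frac{1}{12}$ and $F_W(\alpha)=\alpha$, so the claimed bound reads $(1-2\rho)^2+\rho^2$ and must dominate $\frac{1}{12}$ for all $\rho\in[0,\frac12]$:

```python
from fractions import Fraction as F

# W ~ Uniform(0,1): Var(W) = 1/12 and F_W(a) = a, so the lemma's bound becomes
# (1-2*rho)^2 + rho^2, which must dominate 1/12 on rho in [0, 1/2].
var_w = F(1, 12)
for rn in range(0, 101):
    rho = F(rn, 200)                     # grid over [0, 1/2]
    bound = (1 - 2 * rho) ** 2 + rho ** 2
    assert var_w <= bound
```

The minimum of the bound over this grid is $\frac{1}{5}$, attained at $\rho=0.4$, comfortably above $\frac{1}{12}$.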
Substituting \eqref{VarBound} and $\mu\geq1-\delta^2$ into \eqref{NumericStochasticBound} gives \begin{equation}\label{FinalStochasticBound} \mathcal{P}^* \geq \left( 1 + \frac{ (1-2\varepsilon^2) (\I{1-2\varepsilon^2} {\frac{1}{2}}{\frac{1-s}{2s}})^m+\varepsilon^4} {(\varepsilon^2-\delta^2)^2} \frac{(1+2s)}{s d}\right)^{-1}, \end{equation} which proves the second statement of Theorem \ref{AverageCaseUpperBoundTheorem}. \section{Numerical Simulations} \label{NumericalComp} \begin{figure*}[!t] \centering \subfloat[Lower bound \eqref{WorstCaseLowerBoundEq},\eqref{AverageCaseLowerBoundEq}.]{ \includegraphics[height=2in]{Fig4a} \label{Deterministic_Necessary_fig} } \subfloat[Average-case upper bound \eqref{AverageCaseUpperBoundEq}.]{ \includegraphics[height=2in]{Fig4b} \label{Deterministic_Sufficient_fig} } \subfloat[Worst-case upper bound \eqref{WorstCaseUpperBoundEq}.]{ \includegraphics[height=2in]{Fig4c} \label{Stochastic_Sufficient_fig} } \subfloat{ \includegraphics[height=2in,trim={0.15in 0 0.3in 0},clip]{Fig4d} \label{colorbar} } \caption{Our bounds on the minimal overcompleteness that allows universal sparse representation, as functions of the sparsity factor $s$ and the permissible normalized error $\varepsilon$. Note that the color scale is logarithmic.} \label{Surfaces_fig} \end{figure*} In this section, we present simulations that demonstrate the bounds from Section \ref{MainResults}. Figure~\ref{Surfaces_fig} depicts the values of the worst-case and average-case lower bounds \eqref{WorstCaseLowerBoundEq},\eqref{AverageCaseLowerBoundEq}, the worst-case upper bound \eqref{WorstCaseUpperBoundEq} and the average-case upper bound \eqref{AverageCaseUpperBoundEq}, as functions of the allowed error \( \varepsilon \) and the sparsity \( s \). Note that the color scale in all plots is logarithmic, thus highlighting the fact that the minimal required overcompleteness becomes extremely large for small values of \( \varepsilon \) and \( s \).
We can see the asymmetric dependence of the overcompleteness on \( s \) and \( \varepsilon \), which is exponential in \( s^{-1} \) and only polynomial in \( \varepsilon^{-1} \). This illustrates that for small values of $s$, it is practically impossible to achieve universal sparse representation with any reasonable error $\varepsilon$ (the required overcompleteness is extremely large). However, for small values of $\varepsilon$, universal sparse representation may still be practical if the sparsity is not too small (\emph{e.g.}, $s\approx 0.3$). To better visualize the differences between the bounds, Figs.~\ref{Overcomp_Spars_fig} and \ref{Overcomp_SNR_fig} show slices from the two-dimensional surfaces. Specifically, Fig.~\ref{Overcomp_Spars_fig} depicts the bounds as functions of \( s^{-1} \) at a constant representation error $\varepsilon$ corresponding to $\mathrm{SNR}_{\text{dB}} = 20\log_{10}(1/\varepsilon)=10$dB. Figure~\ref{Overcomp_SNR_fig} shows the bounds as functions of the SNR at a constant sparsity factor of $s=0.2$. Here we can see that the worst-case upper bound is quite pessimistic compared to the average-case one. \begin{figure}[!t] \centering \includegraphics[width=3.5in]{Fig5} \caption{Overcompleteness as a function of \( s^{-1}\) at \(\mathrm{SNR} = 10\)dB.} \label{Overcomp_Spars_fig} \end{figure} \begin{figure}[!t] \centering \includegraphics[width=3.5in]{Fig6} \caption{Overcompleteness as a function of \(\mathrm{SNR}\) at a sparsity of \( s=0.2 \).} \label{Overcomp_SNR_fig} \end{figure} We next compare our bounds to the actual performance of a sparse coding algorithm. To the best of our knowledge, there exists no practical method for calculating the worst-case error \( \max_{x \in \mathbb{R}^d } \epsilon(x,\Phi) \) for a given dictionary $\Phi$. This means that we cannot verify whether a given $\Phi$ is a universal \(k\)-sparse representation dictionary. Consequently, we focus on examining only the average case scenario.
In this setting, we take \(x \in \mathbb{R}^d \) to be a Gaussian vector with i.i.d.\ coordinates, which, according to Lemma \ref{WorstCaseDistributionLemma}, is a worst-case distribution. For any given $\Phi$, this allows us to easily approximate the probability of success $\prob{ \epsilon(x,\Phi) \leq \varepsilon }$, simply by applying the OMP algorithm to many draws of $x$ and counting the relative number of times the resulting sparse approximation satisfies our error constraint. This still leaves us with the problem of choosing the optimal $\Phi$. Since there is no closed form expression for the optimal $\Phi$, here we make a suboptimal choice, which is to take the dictionary to be a random matrix with i.i.d.\ Gaussian entries. Recall that according to Lemma~\ref{StochasticDictionaryLemma}, the best possible probability of success is the same whether we restrict the search to deterministic dictionaries or also allow random dictionaries. Figure~\ref{Prob_overcomp_fig} compares the probability of success of the OMP algorithm over a Gaussian dictionary to the lower and upper bounds on the probability of success \eqref{AverageCaseLowerBoundProb} and \eqref{AverageCaseUpperBoundProb2} as well as to the tighter bound \eqref{NumericStochasticBound}, which we calculated numerically. This simulation was carried out for a fixed dimension of $d=1600$. As can be seen, the success probability is indeed between the upper and lower bounds. Moreover, it exhibits a sharp phase-transition at some overcompleteness. Below this critical overcompleteness, the probability of success is nearly 0, while above the threshold it climbs very steeply towards 1. Obviously, both our choice of sparse-coding algorithm and our choice of dictionary are suboptimal. Therefore, we must keep in mind that the simulation gives us an underestimation of the optimal performance (the true best achievable probability of success is actually higher than the black dash-dotted curve).
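The experiment just described can be reproduced in a few dozen lines. The sketch below uses a plain-Python OMP (greedy selection plus Gram-Schmidt, equivalent to re-projecting the residual onto the span of the selected atoms) over a normalized i.i.d.\ Gaussian dictionary; the small values $d=40$, $o=8$ are chosen only to keep the pure-Python run fast, not the $d=1600$ of Fig.~\ref{Prob_overcomp_fig}:

```python
import random, math

random.seed(0)

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def axpy(a, u, v):
    return [a * ui + vi for ui, vi in zip(u, v)]

def normalize(v):
    nv = math.sqrt(dot(v, v))
    return [a / nv for a in v]

d, s, o = 40, 0.2, 8             # dimension, sparsity factor, overcompleteness
k, n = int(s * d), int(o * d)
eps = 10 ** (-10 / 20)           # SNR = 10 dB  ->  eps ~ 0.316

atoms = [normalize([random.gauss(0, 1) for _ in range(d)]) for _ in range(n)]

def omp_residual(x):
    """Residual of greedy OMP: select k atoms, project via an orthonormal basis."""
    r = list(x)
    basis = []
    for _ in range(k):
        j = max(range(n), key=lambda t: abs(dot(atoms[t], r)))
        q = list(atoms[j])
        for b in basis:                  # Gram-Schmidt against the current span
            q = axpy(-dot(q, b), b, q)
        nq = math.sqrt(dot(q, q))
        if nq < 1e-12:
            continue
        q = [a / nq for a in q]
        basis.append(q)
        r = axpy(-dot(r, q), q, r)       # residual stays orthogonal to the span
    return r

trials, success = 50, 0
for _ in range(trials):
    x = [random.gauss(0, 1) for _ in range(d)]
    r = omp_residual(x)
    if math.sqrt(dot(r, r)) <= eps * math.sqrt(dot(x, x)):
        success += 1
p_hat = success / trials
```

Sweeping the overcompleteness `o` and plotting `p_hat` reproduces the phase-transition shape discussed above (the transition sharpens at larger $d$).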
\begin{figure}[!t] \centering \includegraphics[width=3.5in]{Fig7} \caption{Probability of successfully finding a sparse representation for a white Gaussian signal, as a function of the dictionary's overcompleteness. Here, the dimension is \( d = 1600\), the sparsity factor is $s = 0.2$, and the permissible error is $\mathrm{SNR} = 10 {[\text{dB}]}$.} \label{Prob_overcomp_fig} \end{figure} Figure~\ref{Overcomp_dimension_fig} demonstrates the asymptotic behavior in Theorems~\ref{AverageCaseLowerBoundTheorem} and~\ref{AverageCaseUpperBoundTheorem} (\emph{i.e.}, the bounds in \eqref{AverageCaseLowerBoundEq} and \eqref{AverageCaseUpperBoundEq}). Here, we compare the bounds to the minimal overcompleteness that allows obtaining a sparse representation with overwhelming probability. Specifically, we set a threshold of 0.99 on the success probability, and numerically found the minimal overcompleteness needed to surpass this success rate. This was done by gradually increasing the overcompleteness ratio, for each dimension $d$, until we hit the 0.99 success probability threshold for the first time. As can be seen, at high dimensions~$d$, the overcompleteness required for overwhelming success probability is indeed between the two bounds, as Theorems \ref{AverageCaseLowerBoundTheorem} and \ref{AverageCaseUpperBoundTheorem} predict. \begin{figure}[!t] \centering \includegraphics[width=3.5in]{Fig8} \caption{Minimal overcompleteness that allows finding a sparse representation for a white Gaussian signal with probability 0.99, as a function of the dimension $d$. Here, the sparsity factor is \( s = 0.2\) and the permissible normalized error is $\mathrm{SNR} = 10 {[\text{dB}]} $.} \label{Overcomp_dimension_fig} \end{figure} \section{Conclusion} In this paper, we presented and studied the \emph{universal sparse representation} problem, which relates to the ability to construct sparse approximations of all signals in the space, up to a predefined error.
We analyzed the problem in a deterministic setting as well as in a stochastic one. In both cases, we derived necessary and sufficient conditions on the minimal required overcompleteness. Our conditions have simple explicit forms, and, as we illustrated through simulations, accurately capture the behavior of sparse coding algorithms in practice. \appendices \section{Proof of Lemma \ref{BetaLowerBoundLemma}} \label{BetaBoundProof} By the definition of the regularized incomplete beta function, we know that \begin{equation} \I{x}{a}{b} = \frac{1}{B(a ,b)} \int_{0}^{x} t^{a-1}(1-t)^{b-1}dt, \end{equation} where \( B(a ,b) \) is the Beta function with parameters \(a\) and \(b\). We note that for \( 0<b \leq 1 \) the function \( w(t) = (1-t)^{b-1} \) is convex in \(t\). Moreover, we have that \begin{equation} \int_{0}^{x} \frac{a}{x^a} t^{a-1}dt = 1. \end{equation} Let us represent the incomplete beta function as follows \begin{equation} \I{x}{a}{b} = \frac{x^a}{aB(a,b)} \int_{0}^{x} \frac{a}{x^a} t^{a-1}w(t)dt. \end{equation} Then by Jensen's inequality we have that \begin{align} \int_{0}^{x} \frac{a}{x^a} t^{a-1}w(t)dt &\geq w\left( \int_{0}^{x} \frac{a}{x^a} t^{a-1}tdt \right) \nonumber\\ &= w\left(x \frac{a}{a+1}\right) \nonumber\\ &= \left( 1-x \frac{a}{a+1}\right)^{b-1}. \end{align} Therefore \begin{equation} \I{x}{a}{b} \geq \frac{x^a}{aB(a,b)(1-x\tfrac{a}{a+1})^{1-b}}. \end{equation} The Beta function can be expressed in terms of Gamma functions as \begin{equation} B(a,b) = \frac{\Gamma(a)\Gamma(b)}{\Gamma(a+b)}. \end{equation} From the properties of the Gamma function, we know that \( a\Gamma(a) = \Gamma(a+1) \). Therefore, \begin{equation} aB( a , b ) = \Gamma(b) \frac{ \Gamma(a+1)} {\Gamma(a+b)}. \end{equation} The ratio of the gamma functions is bounded by \cite[(3.2)]{mukhopadhyay2016stirling} \begin{equation} \frac{ \Gamma(a+1)} {\Gamma(a+b)} < (a+b)^{1-b}, \end{equation} which concludes the proof.
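The Jensen step in this proof can be checked numerically with a crude midpoint-rule integrator (the parameters $a=0.5$, $b=3$, $x=0.3$ are arbitrary; note that for $b=2$ the integrand $w(t)=1-t$ is linear, so Jensen's inequality holds with equality):

```python
import math

def betainc_reg(x, a, b, n=200_000):
    """Regularized incomplete beta I_x(a,b) via a crude midpoint rule."""
    h = x / n
    s = sum(((i + 0.5) * h) ** (a - 1) * (1 - (i + 0.5) * h) ** (b - 1)
            for i in range(n))
    B = math.gamma(a) * math.gamma(b) / math.gamma(a + b)
    return s * h / B

a, b, x = 0.5, 3.0, 0.3
B = math.gamma(a) * math.gamma(b) / math.gamma(a + b)
# Jensen lower bound: I_x(a,b) >= x^a / (a B(a,b) (1 - x a/(a+1))^(1-b))
lower = x ** a / (a * B * (1 - x * a / (a + 1)) ** (1 - b))
assert betainc_reg(x, a, b) >= lower
```

For these parameters the true value is $I_{0.3}(0.5,3)\approx 0.840$, against a Jensen bound of about $0.832$; the midpoint rule is accurate enough here despite the integrable singularity at $t=0$.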
\section{Proof of Lemma \ref{WorstCaseDistributionLemma}} \label{WorstCaseDistributionProof} To prove Lemma \ref{WorstCaseDistributionLemma}, we first prove the following useful property. \newtheorem{IsotropicDistributionLemma}[Lemmas]{Lemma} \begin{IsotropicDistributionLemma}[Isotropic distribution] \label{IsotropicDistribution} Let \( \mathcal{D} (\mathbb{R}^{d}) \) denote the collection of all probability distributions defined on \(\mathbb{R}^{d}\). Then, for every distribution \(\Omega \in \mathcal{D}(\mathbb{R}^{d})\) there exists an isotropic distribution \(\Theta \in \mathcal{D}(\mathbb{R}^{d})\) for which the optimal success probability is not greater than the optimal success probability for \(\Omega \). That is, \begin{equation} \optSuccProb{\Theta} \leq \optSuccProb{\Omega}. \end{equation} \end{IsotropicDistributionLemma} \begin{IEEEproof}[Proof of Lemma \ref{IsotropicDistribution}] Let \(x \thicksim \Omega\) be some \(d\)-dimensional random vector. Denote by \(SO(d)\) the \emph{Special Orthogonal} group in \(\mathbb{R}^d\), which corresponds to all \(d\times d\) rotation matrices (satisfying \(R^TR=RR^T=I\) and \(\text{det}(R)=1\)). It is well known that there exists an invariant measure on \(SO(d)\) \cite[Sec.~2]{leon2006statistical}. Therefore, let \( M \) be a matrix chosen uniformly at random from \(SO(d)\), and independent of \(x\). Then define \begin{equation} y = Mx. \end{equation} It is easy to see that the distribution of \(y\), which we denote by \( \Theta \), is isotropic. We will show that the optimal success probability for the distribution \( \Theta \) is no larger than that of \( \Omega \).
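The rotation $M$ used in this proof is easy to sample in low dimension. The sketch below builds a random element of $SO(3)$ by Gram-Schmidt plus a cross product (dimension $3$ is chosen purely for brevity) and confirms the norm-preservation property $\norm{Mx}=\norm{x}$ that the argument relies on:

```python
import random, math

random.seed(0)

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def random_rotation3():
    """A random element of SO(3): Gram-Schmidt on two Gaussian vectors,
    cross product for the third row (so the determinant is +1)."""
    u = [random.gauss(0, 1) for _ in range(3)]
    nu = math.sqrt(dot(u, u))
    u = [a / nu for a in u]
    v = [random.gauss(0, 1) for _ in range(3)]
    p = dot(v, u)
    v = [vi - p * ui for vi, ui in zip(v, u)]
    nv = math.sqrt(dot(v, v))
    v = [a / nv for a in v]
    w = [u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]]          # w = u x v
    return [u, v, w]                          # rows of M

M = random_rotation3()
x = [random.gauss(0, 1) for _ in range(3)]
Mx = [dot(row, x) for row in M]
# rotations preserve norms: ||Mx|| = ||x||, the property invoked in the proof
assert abs(dot(Mx, Mx) - dot(x, x)) < 1e-9
```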
Using the law of total probability we have \begin{equation} \label{TotalProb} \max_{\Phi \in \mathbb{R}^{d \times n}} \prob{\epsilon(y,\Phi) \leq \varepsilon} = \max_{\Phi \in \mathbb{R}^{d \times n}} \E{\condProb{\epsilon(Mx,\Phi) \leq \varepsilon}{M}}, \end{equation} where the left-hand side is the definition of the optimal success probability \( \optSuccProb{\Theta} \). Let us recall the definition of the representation error \begin{equation} \label{ErrorDef} \epsilon(Mx,\Phi) = \min_{\alpha \in \mathbb{R}^n} \frac{\norm{Mx-\Phi \alpha} }{\norm{Mx}} \quad \text{s.t.} \quad \norm{\alpha}_0 \leq k. \end{equation} By the properties of orthogonal matrices, we know that \( \norm{Mx-\Phi \alpha} = \norm{x- M^T \Phi \alpha} \) and \( \norm{Mx} = \norm{x} \). Therefore \(\epsilon(Mx,\Phi) = \epsilon(x, M^T \Phi)\), so that \begin{equation} \label{Orthogonal} \condProb{\epsilon(Mx,\Phi) \leq \varepsilon}{M} = \condProb{\epsilon(x, M^T \Phi) \leq \varepsilon}{M}. \end{equation} The expectation of a pointwise maximum is greater than or equal to the maximum of the expectation, \textit{i.e.} \begin{align} \label{MaxInequality} &\max_{\Phi \in \mathbb{R}^{d \times n}} \E{\condProb{\epsilon(x, M^T \Phi) \leq \varepsilon}{M}} \leq \nonumber\\ &\hspace{2cm}\E{\max_{\Phi \in \mathbb{R}^{d \times n}} \condProb{\epsilon(x, M^T \Phi) \leq \varepsilon }{M}}. \end{align} Recall that \( x \) and \( M \) are statistically independent. Therefore, \begin{equation} \max_{\Phi \in \mathbb{R}^{d \times n}} \condProb{\epsilon(x, M^T \Phi) \leq \varepsilon}{M} = \max_{\Phi \in \mathbb{R}^{d \times n}} \prob{\epsilon(x,\Phi) \leq \varepsilon}. \end{equation} We notice that the right-hand side is independent of \(M\). Thus, plugging this result into \eqref{MaxInequality}, we get \begin{equation} \optSuccProb{\Theta} \leq \max_{\Phi \in \mathbb{R}^{d \times n}} \prob{\epsilon(x,\Phi) \leq \varepsilon} = \optSuccProb{\Omega}, \end{equation} which completes the proof.
\end{IEEEproof} We will now show that the minimum of the optimal success probability over the set \( \mathcal{D}(\mathbb{R}^{d}) \) is attained by an isotropic distribution. It is always true that there exists a sequence of distributions \( \{ \Omega_n \}_{n=1}^\infty \subset \mathcal{D} (\mathbb{R}^{d}) \) on which the optimal success probability converges to the minimal value. Mathematically, \begin{equation} \lim\limits_{n \rightarrow \infty} \optSuccProb{\Omega_n} = \min_{\Omega \in \mathcal{D} (\mathbb{R}^{d})} \optSuccProb{\Omega}. \end{equation} By Lemma \ref{IsotropicDistribution} we know that for each \( n \in \mathbb{N} \) there exists an isotropic distribution \( \Theta_n \) such that \( \optSuccProb{\Theta_n} \leq \optSuccProb{\Omega_n} \). It is easy to see from the definition that all isotropic distributions have the same optimal success probability. Let \( \Theta \) be a given isotropic distribution; then we have that \begin{equation} \forall n \in \mathbb{N} \qquad \optSuccProb{\Theta} = \optSuccProb{\Theta_n} \leq \optSuccProb{\Omega_n}. \end{equation} The left-hand side is independent of \(n\); thus, by taking the limit we get that \begin{equation} \optSuccProb{\Theta} \leq \lim\limits_{n \rightarrow \infty} \optSuccProb{\Omega_n} = \min_{\Omega \in \mathcal{D} (\mathbb{R}^{d})} \optSuccProb{\Omega}. \end{equation} We thus conclude that \( \Theta \) is a minimizer, which completes the proof. \section{Proof of Lemma \ref{StochasticDictionaryLemma}} \label{StochasticDictionaryProof} On the one hand, a deterministic dictionary is a special case of a stochastic dictionary. Hence, we have the inequality \begin{equation} \max_{\Phi \in \mathbb{R} ^{ d \times n } } \prob{ \epsilon(x,\Phi) \leq \varepsilon} \leq \max_{ \Theta \in \mathcal{D} (\mathbb{R} ^{ d \times n }) } \prob{ \epsilon(x,\Psi) \leq \varepsilon }.
\end{equation} On the other hand, by the law of total probability, \begin{equation} \max_{ \Theta \in \mathcal{D} (\mathbb{R} ^{ d \times n }) } \prob{ \epsilon(x,\Psi) \leq \varepsilon } = \max_{ \Theta \in \mathcal{D} (\mathbb{R} ^{ d \times n }) } \E{\condProb{ \epsilon(x,\Psi) \leq \varepsilon}{\Psi }}. \end{equation} Obviously, the maximum is greater than or equal to the expectation, that is \begin{equation} \E{\condProb{ \epsilon(x,\Psi) \leq \varepsilon}{\Psi}} \leq \max_{\Psi \in \mathbb{R} ^{ d \times n } } \prob{ \epsilon(x,\Psi) \leq \varepsilon}. \end{equation} Here, we used the fact that \( x \) and \( \Psi \) are statistically independent to remove the conditioning in the right-hand term. The above inequality is true for any distribution \( \Theta \in \mathcal{D} (\mathbb{R} ^{ d \times n }) \), and thus in particular it applies also to the maximum. Therefore, we obtain the opposite inequality \begin{equation} \max_{ \Theta \in \mathcal{D} (\mathbb{R} ^{ d \times n }) } \prob{ \epsilon(x,\Psi) \leq \varepsilon } \leq \max_{\Psi \in \mathbb{R} ^{ d \times n } } \prob{ \epsilon(x,\Psi) \leq \varepsilon}, \end{equation} and the result follows. \section{Proof of Lemma \ref{VarianceUpperBoundLemma} } \label{VarianceUpperBoundProof} The variance of \(W\) is defined as \begin{equation} \mathrm{Var}(W) = \E{(W-\E{W})^2}. \end{equation} Let us consider this formula as a mean square error (MSE) between \(W\) and its expectation. It is well known that the expectation achieves the minimum MSE over all constants. Thus, by replacing the expectation with \(1-\rho \) we can only increase the value of the MSE. Hence, we have \begin{equation} \E{(W-\E{W})^2} \leq \E{(W-1+\rho)^2}. \end{equation} Using the law of total expectation we get that for any \( \rho \in [0,\frac{1}{2}] \), \begin{align} & \E{(W-1+\rho)^2} \nonumber\\ & = \condE{ (W-1+\rho)^2 }{ W \leq 1-2\rho} \times \prob{W \leq 1-2\rho} \nonumber\\ & + \condE{ (W-1+\rho)^2 }{ W > 1-2\rho} \times \prob {W > 1-2\rho}.
\end{align} It is easy to see that if \(W\in [0,1-2\rho] \) then \begin{equation} (W-1+\rho)^2 \leq (1-\rho)^2 \end{equation} and if \(W\in (1-2\rho,1] \) then \begin{equation} (W-1+\rho)^2 \leq \rho^2. \end{equation} Therefore, we have that \begin{align} \E{(W-1+\rho)^2} \leq & (1-\rho)^2 \times \prob{W \leq 1-2\rho} \nonumber\\ &+ \rho^2 \times \prob {W > 1-2\rho}. \end{align} By definition, \( \prob{W \leq 1-2\rho} = F_W(1-2\rho) \) and \( \prob {W > 1-2\rho} = 1-F_W(1-2\rho) \). Therefore, after simple algebra we get that \begin{equation} \E{(W-1+\rho)^2} \leq (1-2\rho)F_W(1-2\rho)+\rho^2, \end{equation} which concludes the proof. Remark: If we choose \( \rho = \frac{1}{2} \), then we get the value of Popoviciu's inequality, which in this case is equal to \( \frac{1}{4} \). \section{Proof of independence property} \label{DistributionPropertiesProof} To show that the random variables in the set \( \{ y^{(i)}_l : i \leq k , l \leq m \} \) are mutually independent and are independent of the set \( \{\gamma_i\}_{i=1}^k \), we will use the characteristic function. Recall the definitions of these sets, \begin{equation} \gamma_i = \frac{\norm{x_i}^2}{\norm{x}^2}, \qquad y^{(i)}_j = \frac{\norm{ \proj{x_i}{\varphi^{(i)}_j}}^2} {\norm{x_i}^2}. \end{equation} For a general random vector \( W \in \mathbb{R}^n \), the characteristic function is defined by \begin{equation} \bigchi_W(t) = \E{e^{jt^TW}} , \qquad t \in \mathbb{R}^n, \end{equation} where \( j = \sqrt{-1} \) is the unit imaginary number. Let \( Y \) and \(\Gamma \) be vector representations of the sets \( \{ y^{(i)}_l : i \leq k , l \leq m \} \) and \( \{\gamma_i\}_{i=1}^k \) respectively (concatenated in arbitrary order in each vector). Then, \begin{equation} \bigchi_{(Y, \Gamma)}(t,s) = \E{e^{jt^TY+js^T\Gamma}}, \qquad t \in \mathbb{R}^n,\ s \in \mathbb{R}^k, \end{equation} where \(n = km \) is the number of random variables in the vector \( Y \).
Using the law of total expectation we have that \begin{equation} \E{e^{jt^TY+js^T\Gamma}} = \E{\condE{e^{jt^TY+js^T\Gamma}}{x}}. \end{equation} Given \(x\), the variables \( \{\gamma_i\}_{i=1}^k \) are determined deterministically. Additionally, since \( \{ \varphi^{(i)}_l : i \leq k , l \leq m \} \) are mutually independent, we have that \( \{ y^{(i)}_l : i \leq k , l \leq m \} \) are conditionally independent given \(x\). Therefore, we can write \begin{equation} \E{\condE{e^{jt^TY+js^T\Gamma}}{x}} = \E{e^{js^T\Gamma} \prod_{i=1}^{n}\condE{e^{jt_iY_i}}{x}}. \end{equation} From Lemma~\ref{RandomSubspaceProjectionTheorem} we notice that the conditional distribution of \( Y_i \) given \( x \) is the same as the unconditioned distribution. Thus, we have \begin{equation} \condE{e^{jt_iY_i}}{x} = \E{e^{jt_iY_i}} = \bigchi_{Y_i}(t_i). \end{equation} We point out that the right-hand term is a deterministic function; therefore, \begin{equation} \bigchi_{(Y, \Gamma)}(t,s) = \E{e^{js^T\Gamma}} \prod_{i=1}^{n}\bigchi_{Y_i}(t_i) = \bigchi_{\Gamma}(s) \prod_{i=1}^{n}\bigchi_{Y_i}(t_i), \end{equation} which proves the desired independence. \bibliographystyle{IEEEtran}
\section{Introduction} As experimental tools have progressed (e.g. \cite{Bloch2008Ultracold,Blatt2012TrappedIon}), the microscopic control of quantum systems has become increasingly accessible. These advancements, along with a correlated increase in theoretical interest, have led to the discovery of many new and surprising phenomena that emerge when periodic driving, interactions, and their interplay are considered. For example, periodically driven systems can be used to stabilize otherwise unusual behavior. A recent important example is topological Floquet insulators \cite{Kitagawa2010FTI,lindner2011Floquet,Rudner2020FTI}, where novel topological features of the band structure may emerge due to the inherent periodicity of the non-interacting quasi-energy spectrum. Furthermore, it was shown in \cite{titum2016anomalous} that, by combining spatial disorder with a topological Floquet insulator model introduced by Rudner-Lindner-Berg-Levin (RLBL) \cite{rudner2013anomalous}, a new topological phase may be realized called the anomalous Floquet-Anderson insulator (AFAI). Discrete time crystals \cite{Sacha2020DTCBook,Else2020DTCRev,Khemani2016DTC,Else2016DTC} are another important example of behavior that may occur in periodically driven, but not static \cite{Watanabe2015noTC}, systems. Namely, a time crystal is a system where time-translation symmetry is spontaneously broken (in analogy to spatial translation symmetry spontaneously breaking to form ordinary crystals). Combining periodic driving with interactions, however, can often be problematic, as generic, clean, interacting Floquet systems are expected to indefinitely absorb energy from their drive and thus quickly converge to a featureless infinite temperature state \cite{Lazarides2014Therm,Dalessio2014Therm,Ponte2015Therm}.
This problem may be side-stepped in several ways: by considering many-body localization (MBL) \cite{Abanin2019MBLRev,DAlessio2013MBL,Ponte2015MBL,Ponte2015MBLPRL,Lazarides2015MBL,Khemani2016MBL,Agarwala2017MBL}, in which strong disorder is utilized to help stave off thermalization; by considering the effective evolution of pre-thermal states \cite{Bukov2015pretherm,Kuwahara2016pretherm,Else2017pretherm,Abanin2017pretherm,Zeng2017pretherm,Machado2019pretherm} that, in the best cases, take exponentially long to thermalize; or by connecting the system to a bath to facilitate cooling and arrive at interesting, non-equilibrium steady states \cite{Dehghani2014Dissipative,Iadecola2015Bath,Iadecola2015Bath2,Seetharam2015Bath}. Yet another route for realizing non-trivial dynamics despite the expected runaway heating of interacting Floquet systems is to consider systems in which ergodicity is weakly broken, i.e. where there are subspaces (whose size scales only polynomially in the system size) of the Hilbert space that do not thermalize even though the rest of the Hilbert space does. These non-thermal states are called quantum many-body scars \cite{Turner2018Scars,Ho2019Scars,Moudgalya2022Scars} and have been shown to support many interesting phenomena including, for example, discrete time crystals \cite{Yarloo2020ScarDTC}. Furthermore, in constrained systems, the full Hilbert space may fragment into subspaces where some of the subspaces thermalize while others do not \cite{Sala2020HFrag,Moudgalya2022Scars}. When the fraction of non-thermal states is of measure zero in the thermodynamic limit, the system is an example of quantum many-body scarring. In other cases, however, the non-thermal subspaces form a finite fraction of the full Hilbert space and therefore correspond to a distinct form of ergodicity breaking. 
In addition to leading to heating, interactions are also often responsible for our inability to efficiently study or describe many-body quantum states in both Floquet and static Hamiltonian systems. However, there are situations where interactions play the opposite role, creating specialized states of particular simplicity or utility. For example, systems with interactions can exhibit counter-intuitive bound states due to coherent blocking of evolution. A nice class of such systems is given by the edge-locked few-particle systems studied in \cite{haque:060401,haque2010self}. In this work, we consider Floquet drives where hopping between neighboring pairs of sites is sequentially activated. The theoretical and experimental tractability of such models has made them a popular workhorse for fleshing out a broad range of the exciting properties of periodically driven systems (e.g. \cite{rudner2013anomalous,Kumar2018evenodd,Ljubotina2019evenodd,Piroli2020evenodd,Lu2022EvenOddPRL}). We find that, when interactions are added to such systems, there exist special values of interaction strength and driving frequency at which the dynamics becomes exactly solvable. Furthermore, the complete set of these special parameter values may be determined via emergent Diophantine equations \cite{Cohen2007Dioph}. At other parameter values, the Hilbert space is fragmented. Initial states contained in some (thermal) subspaces will ergodically explore that subspace (though not the entire Hilbert space), while initial states contained in other (non-thermal) subspaces will evolve according to a classical cellular automaton (CA) \cite{Wolfram1983CA}, i.e. the system evolves in discrete time steps where, after each step, the occupancy of any given site is updated deterministically based on a small set of rules determined by the occupancy of neighboring sites. 
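To make the cellular-automaton picture concrete, the following minimal sketch (our own illustration, not one of the specific lattice models studied below) implements a one-dimensional brick-wall drive in which every activated pair of sites undergoes a perfect swap; a single particle then moves ballistically, and the occupancies follow exactly such a deterministic CA update rule.

```python
# Illustrative classical cellular automaton: an even-odd ("brick-wall") drive in
# which every activated pair of sites undergoes a perfect swap. At the special
# parameter values discussed below, the Floquet dynamics of Fock states reduces
# to a deterministic update rule of precisely this kind.

def swap_step(occ, offset):
    """Swap occupancies on the disjoint pairs (i, i+1), i = offset, offset+2, ..."""
    occ = list(occ)
    for i in range(offset, len(occ) - 1, 2):
        occ[i], occ[i + 1] = occ[i + 1], occ[i]
    return occ

def drive(occ, periods):
    """One Floquet period = even-pair swaps followed by odd-pair swaps."""
    for _ in range(periods):
        occ = swap_step(occ, 0)  # activate pairs (0,1), (2,3), ...
        occ = swap_step(occ, 1)  # activate pairs (1,2), (3,4), ...
    return occ

occ0 = [1, 0, 0, 0, 0, 0, 0, 0]  # a single particle at the left edge
print(drive(occ0, 2))            # the particle has moved ballistically right
```

Each period translates an isolated particle by two sites; no entanglement is ever generated, since Fock states map to Fock states.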
As examples, we consider RLBL(-like) models with added nearest neighbour (NN) or Hubbard interactions, as well as an even-odd Floquet drive in one dimension with NN interactions (more detailed descriptions of these models are given below). We note that some work has been done in the first two cases \cite{Nathan2019AFI,Nathan2021AFI}, where it was argued that novel MBL anomalous Floquet insulating phases emerge when a disorder potential is added. We will discuss how our focus on special parameter values leads to new insights into these models and how it suggests a possible route towards other exciting phenomena, such as the support of discrete time crystals within fragments of the Hilbert space. \section{Conditions for evolution by Fock state permutations} In this section, we examine conditions for deterministic evolution of Fock states into Fock states in fermion models. Here we consider real space Fock states, which have a well defined fermion occupation on each lattice site (we will also refer to such states as fermion product states). We consider models where hopping between non-overlapping selected pairs of sites is sequentially activated. Two models of this type, discussed in detail below, deal with Hubbard and nearest neighbour interactions. The approach can be naturally extended to deal with more general interactions in sequentially applied evolution models. \subsection{Example 1: Hubbard-RLBL} \label{Section: Hubbard RLBL} As a particularly illuminating example, consider the Rudner-Lindner-Berg-Levin model \cite{rudner2013anomalous}. This model is an exact toy model for a topological Floquet insulator and has been very useful in fleshing out some of the salient properties of such insulators. In addition, it provides the starting point for other states, such as the anomalous Floquet-Anderson insulators \cite{titum2016anomalous}. 
The model is two dimensional; however, its simplicity lies in its similarity to even-odd type models \cite{Kumar2018evenodd,Ljubotina2019evenodd,Piroli2020evenodd,Lu2022EvenOddPRL}, in that the evolution activates disjoint pairs of sites at each stage. The model can be tuned to a particular point where the stroboscopic evolution of product states is deterministic, exhibiting bulk periodic motion and edge propagation. Similarly, one can tune the driving frequency to completely freeze the stroboscopic evolution. Here, we add interactions to the model and ask when we can make the evolution a product state permutation, at least in some sectors. The Hubbard-RLBL evolution is written as \begin{eqnarray} \label{eq: Floquet steps} U=U_{wait} U_4 U_3 U_2 U_1 \end{eqnarray} where $U_i(V,\tau)=e^{-i \tau {\cal H}_i}$. For $i=1,\ldots,4$, \begin{gather} {\cal H}_i = -t_{hop}\sum_{(i,j)\in A_i;\sigma} (a_{i,\sigma}^\dagger a_{j,\sigma} + h.c.) + V \sum_{i\in A_i} n_{i,\uparrow} n_{i,\downarrow} \label{eq: RLBL Hubbard Hamiltonian} \end{gather} where $n_{i,\sigma}=a_{i,\sigma}^\dagger a_{i,\sigma}$ and the sets $A_i$ are described in Fig. \ref{fig:RLBL}. Note that this is equivalent to the model investigated in \cite{Nathan2021AFI} when $U_{wait} \rightarrow U_{dis}$, i.e. when the waiting period corresponds to evolution with random local potentials and no hopping \footnote{Technically, in \cite{Nathan2021AFI} a weak disorder potential is added during the $U_i$ steps and then the disorder strength during the wait step is effectively made stronger by increasing the length of time the wait step is applied. However, this slight difference in how the disorder potential is applied does not seriously alter the dynamics, and so we will not make a hard distinction between the two.}. In that work, it was shown that this model supports a new family of few-body topological phases characterized by a hierarchy of topological invariants. These results may be viewed from the following perspective. 
First, finely-tuned points where the dynamics is exactly solvable were studied (namely, $\tau=\frac{\pi}{2}$ and $V=0$ or $V \rightarrow \infty$). Second, it is argued that regions near these special points are stabilized (i.e. localized, at least for finite particle number cases) by disorder, leading to robust phases. Finally, topological invariants characterizing these phases ($V$ small vs. $V$ large) can be found and shown to be distinct, implying two distinct topological phases. An application of the methods we propose in this work will allow us to generalize the first step above and find families of these exactly solvable points. We leave discussion of when regions in parameter space near these points may or may not be stabilized by disorder to future work. Since, at these exactly solvable points, we will be mapping product states to product states, $U_{dis}$ will only act as an unobservable global phase, and thus for the rest of our analysis we will set $U_{wait} = I$. Furthermore, throughout the rest of the paper we will work in units where $t_{hop}=1$ and $\hbar=1$. \begin{figure}[h] \centering \includegraphics[width=0.4\textwidth]{RLBL.PNG} \caption{The RLBL model. Hopping is sequentially activated among neighbouring sites connected in the set $A_i$, $i=1,\ldots,4$.} \label{fig:RLBL} \end{figure} We now look for conditions under which the total evolution \eqref{eq: Floquet steps} reduces to a permutation on the set of product states, i.e. such that an initial configuration of fermions placed at a selection of locations evolves into a different assignment of locations without generating entanglement. To do so, we note that the evolution of each pair of sites may be considered separately due to the disjoint nature of the set of pairs $A_i$. Thus, we consider the evolution on a pair of sites $i,j$, \begin{eqnarray} U_{(i,j)}(V,\tau)=e^{-i \tau \left[ -\sum_{\sigma}(a_{i,\sigma}^\dagger a_{j,\sigma} + h.c.) + V( n_{i,\uparrow} n_{i,\downarrow}+ n_{j,\uparrow} n_{j,\downarrow}) \right]}. \label{eq: RLBL Hubbard pair evolution} \end{eqnarray} Since the evolution preserves particle number, we can treat the sub-spaces of $0,1,2,3,$ and $4$ particles on each neighboring pair of sites separately. In the case of $0$ or $4$ particles, the evolution is trivially the identity (due to Pauli blocking in the $4$-particle case). For $1$ or $3$ particles, the interaction term in \eqref{eq: RLBL Hubbard Hamiltonian} is constant (zero in the $1$-particle case, while for $3$ particles one of the two sites is always doubly occupied, contributing a constant $V$) and thus does not affect the evolution. In this case, solving the two-site non-interacting evolution, we see that in the one-particle sector a fermion starting initially at site $i$ has a probability $p = \sin^2{ \tau}$ to hop to the other site $j$ of the pair and probability $1-p$ to stay. Similarly, in the $3$-particle sector, an initially placed hole at site $i$ hops to the other site $j$ with the same probability $p$. Thus, when \begin{gather} \tau = \frac{\pi}{2} \ell \label{eq: 2 site perm condition 0} \end{gather} for some integer $\ell$, the evolution of initial product states in the $1$- and $3$-particle subspaces is completely deterministic, with trivial evolution for even $\ell$ and the particle hopping to the other site of the pair with probability $1$ (henceforth referred to as perfect swapping) when $\ell$ is odd. Clearly, for these values of $\tau$ (and independently of $V$), no new entanglement is created in any pair with $1$ or $3$ particles. To render the evolution in the $2$-particle pair subspace simple, it is shown in appendix \ref{appendix: hubbard 2 sector} that deterministic evolution occurs when the two conditions below are simultaneously satisfied: \begin{gather} \tau \sqrt{4^2 + V^2} = 2 \pi m \label{eq: 2-site permutation condit 1} \\ \text{and} \nonumber \\ \frac{1}{2} \tau V + \pi m = \pi n \label{eq: 2-site permutation condit 2} \end{gather} with $n,m \in \mathbb{Z}$. 
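These conditions are easy to check numerically. The sketch below (our own code; the basis ordering and hopping sign conventions in the two-particle block are a choice, and the spectrum, hence the conditions, does not depend on them) verifies the hopping probability $p=\sin^2\tau$ in the one-particle sector and the freezing of the opposite-spin two-particle sector at $\tau=4\pi$, $V=3$, for which $m=10$ and $n=16$.

```python
import numpy as np

def expmH(H, tau):
    """U = exp(-i*tau*H) via the spectral decomposition of a Hermitian H."""
    evals, vecs = np.linalg.eigh(H)
    return vecs @ np.diag(np.exp(-1j * tau * evals)) @ vecs.conj().T

# One-particle sector of an activated pair (units t_hop = 1):
H1 = np.array([[0.0, -1.0], [-1.0, 0.0]])
tau = 0.3
print(np.isclose(abs(expmH(H1, tau)[1, 0]) ** 2, np.sin(tau) ** 2))  # p = sin^2(tau)
print(np.isclose(abs(expmH(H1, np.pi / 2)[1, 0]), 1.0))   # ell = 1: perfect swap

# Opposite-spin two-particle sector in the basis {|ud,0>, |u,d>, |d,u>, |0,ud>}
# (one fixed sign convention; the eigenvalues {0, V, (V +- sqrt(V^2+16))/2}
# are convention independent):
def H2(V):
    return np.array([[V, -1.0, -1.0, 0.0],
                     [-1.0, 0.0, 0.0, -1.0],
                     [-1.0, 0.0, 0.0, -1.0],
                     [0.0, -1.0, -1.0, V]])

# tau = 4*pi, V = 3 gives tau*sqrt(16 + V^2) = 2*pi*m with m = 10 and
# tau*V/2 + pi*m = pi*n with n = 16; n even, so this sector is frozen:
print(np.allclose(expmH(H2(3.0), 4 * np.pi), np.eye(4), atol=1e-8))
```

These particular values of $\tau$ and $V$ correspond to the solution $w_1=3$, $w_2=1$, $d=1$ of the Diophantine analysis that follows.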
Note that \eqref{eq: 2-site permutation condit 1} guarantees the preservation of the number of doubly occupied sites (doublons). When $n$ is even, the sub-system returns to its initial state. On the other hand, if $n$ is odd, the system exhibits perfect swapping, i.e. each particle hops to the other site of the pair. By solving for $\tau$ and $V$ in terms of $n$ and $m$, we may now summarize when evolution is deterministic in each of the particle-number sub-spaces: \begin{gather} \label{eq: freeze condition} \begin{tabular}{c | c | c} particles & $\tau$ & $V$ \\ \hline 1 or 3 & $\tau = \frac{\pi}{2} \ell$ & $V$ arbitrary\\ 2, opposite spins & $\tau = \frac{\pi}{2} \sqrt{2 m n - n^2}$ & $V = \frac{4 (n-m)}{\sqrt{2 m n - n^2}}$ \\ otherwise & any & any \end{tabular} \end{gather} When $n$ or $\ell$ is even (odd), the evolution is frozen (perfect swapping). To keep the solutions real, Eq. \eqref{eq: freeze condition} also implies we must take $2 m n-n^2>0$. Can all the conditions \eqref{eq: 2 site perm condition 0}, \eqref{eq: 2-site permutation condit 1}, and \eqref{eq: 2-site permutation condit 2} be simultaneously satisfied? In such a case the evolution $U$ is simply a permutation (being a product of identities and site swaps) and generates no new entanglement in any of the sectors. \subsection{The Diophantine Equation} Combining the conditions \eqref{eq: 2 site perm condition 0}, \eqref{eq: 2-site permutation condit 1}, and \eqref{eq: 2-site permutation condit 2} yields the following equation: \begin{gather} \label{eq: diophantine Floquet Hubbard} \ell^2 + n^2 = 2 m n, \,\, \ell,n,m \in \mathbb{Z}. \end{gather} Eq. \eqref{eq: diophantine Floquet Hubbard} is a homogeneous Diophantine equation of degree $2$ and can be solved. We now give a brief review of Diophantine equations and the strategy for solving homogeneous quadratic equations. 
Diophantine equations are algebraic (often polynomial) equations of several unknowns where only integer or rational solutions are of interest. They are named in honor of Diophantus of Alexandria for his famous treatise on the subject written in the 3rd century, though the origins of Diophantine equations can be found across ancient Babylonian, Egyptian, Chinese, and Greek texts \cite{Cohen2007Dioph}. Despite their often innocuous appearance, they are an active area of research, with solutions frequently requiring surprisingly sophisticated mathematical techniques, and they have been the centerpiece of several famous, long-standing mathematical problems that have only (relatively) recently been resolved, including Fermat's Last Theorem \cite{Wiles1995Fermat} and Hilbert's Tenth Problem \cite{Matiyasevich1970Hilbert}. In this section, we are interested in the relatively simple case of a homogeneous quadratic Diophantine equation, i.e. an equation of the form \begin{gather} X^T Q X = 0 \label{eq:quad Diophantine} \end{gather} with variables $X^T = \left(x_0,x_1,\ldots,x_{n-1} \right)$ and coefficients given by the $n\times n$ symmetric matrix $Q$ with integral diagonal entries and half-integral off-diagonal entries. As we shall see, however, for interactions beyond Hubbard a broader class of Diophantine equations may need to be considered. For information on broader classes of Diophantine equations and for more details on the derivation to follow, see, for example, \cite{Cohen2007Dioph}. The general strategy for finding rational solutions to \eqref{eq:quad Diophantine} (we will specialize to integer solutions for our cases of interest at the end) is to first find a particular solution and then generate all other rational solutions from it. Particular solutions can be found simply by inspection or through existing efficient algorithms \cite{Cohen2007Dioph}. The main task is then to generate all other rational solutions from a given particular solution. 
Take $X_0^T = \left(x_{0,0},x_{1,0},\ldots,x_{n-1,0} \right)$ to be a particular solution, i.e. \begin{gather} X_0^T Q X_0 = 0. \label{eq:particular quad Diophantine} \end{gather} Since \eqref{eq:quad Diophantine} is quadratic, a generic line through $X_0$ will intersect the hypersurface defined by \eqref{eq:quad Diophantine} at a single other point (see Fig. \ref{fig:Diophantine}). Furthermore, if the line through $X_0$ is rational (i.e. has rational coefficients), as we see below, the second intersection point must also be rational. Therefore, it is possible to generate every rational solution to \eqref{eq:quad Diophantine} by finding the second intersection point of every rational line through $u X_0$, where $u$ is rational. \begin{figure} \centering \includegraphics[width=0.3\textwidth]{DiophantineFigA.png} \caption{Any line passing through the null surface has two points of intersection. Given a particular solution $X_0$ of the homogeneous Diophantine Eq. \eqref{eq:particular quad Diophantine}, other rational solutions are found by looking at lines emanating from $u X_0$ with rational slopes.} \label{fig:Diophantine} \end{figure} Here, since \eqref{eq:quad Diophantine} is homogeneous, it is convenient to work in projective space $\mathbb{P}_n(\mathbb{Q})$, where a general line passing through $X_0$ is parameterized by \begin{gather} X = u X_0 + v W \label{eq:gen line through X0} \end{gather} with $(u,v) \in \mathbb{P}_2(\mathbb{Q})$ and any $W=(w_1,\ldots,w_n) \in \mathbb{P}_n(\mathbb{Q})$ not equal to $X_0$. Combining \eqref{eq:gen line through X0} and \eqref{eq:quad Diophantine}, \begin{gather} 0 = (u X_0 + v W)^T Q (u X_0 + v W) \\ = v \left(2u W^T Q X_0 + v W^T Q W \right) \end{gather} where we have simplified using \eqref{eq:particular quad Diophantine}. We may thus take as the solution $(u,v) = \left(W^T Q W,-2 W^T Q X_0 \right)$. 
Combining with Eq \eqref{eq:gen line through X0} and multiplying by a general $d \in \mathbb{Q}$ to restore full solutions (since we considered $X$ as an element of a projective space), we find \begin{gather} X = d \left[(W^T Q W)X_0 - 2 (W^T Q X_0) W \right]. \label{eq:final X} \end{gather} For integer solutions, we need simply to rescale $W \rightarrow \frac{W}{\zeta}$ and $d \rightarrow d \zeta^2$ where $\zeta = \gcd({w_i})$. After rescaling, the only non-integer information is coming from $d$, so all integer solutions may be found simply by considering integer $d$. For the relevant case of $n=3$, let us, without loss of generality, diagonalize $Q = diag(A,B,C)$ and let $W^T = (w_1,w_2,0)$ where (after rescaling with $\zeta$) $w_1$ and $w_2$ are co-prime integers and the final element of $W$ may be set to $0$ due to the required linear independence with $X_0$. Simplifying \eqref{eq:final X} then becomes \begin{gather} X = d(A w_1^2 + B w_2^2) \left(\begin{tabular}{c} $x_{0,0}$ \\ $x_{1,0}$ \\ $x_{2,0}$ \end{tabular}\right) \nonumber \\ - 2d (w_1 A x_{0,0} + w_2 B x_{1,0}) \left(\begin{tabular}{c} $w_1$ \\ $w_2$ \\ $0$ \end{tabular}\right) \\ = d \left(\begin{tabular}{c} $-(A w_1^2 - B w_2^2 )x_{0,0} - 2 B w_1 w_2 x_{1,0}$ \\ $(A w_1^2 - B w_2^2 )x_{1,0} - 2 A w_1 w_2 x_{0,0}$\\ $(A w_1^2 + B w_2^2 )x_{2,0}$ \end{tabular} \right) \label{eq: final 3 quadratic dio solution} \end{gather} \subsection{Solution for product state permutation dynamics with Hubbard interaction} Following the previous section, we write our Diophantine eq. \eqref{eq: diophantine Floquet Hubbard} in a diagonal form: \begin{gather} \ell^2 + n^2 = 2m n\\ \implies \left(\begin{tabular}{c c c} $\ell$ & $\Tilde{n}$ & $m$ \\ \end{tabular}\right) \left(\begin{tabular}{c c c} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & -1 \\ \end{tabular}\right) \left(\begin{tabular}{c} $\ell $ \\ $\Tilde{n} $ \\ $m$ \\ \end{tabular}\right) = 0, \end{gather} where we have defined $\Tilde{n} \equiv n-m$. 
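The parameterization \eqref{eq: final 3 quadratic dio solution} can be checked directly. The sketch below (our own code; function and variable names are illustrative) generates solutions for diagonal $Q = \mathrm{diag}(A,B,C)$ and, specializing to the Hubbard case $Q=\mathrm{diag}(1,1,-1)$ with the particular solution $(\ell,\Tilde{n},m)=(-1,0,1)$ used next, confirms that every generated triple satisfies $\ell^2+n^2=2mn$.

```python
# Direct check of the three-variable solution formula: for Q = diag(A, B, C)
# and a particular solution (x00, x10, x20) of A x^2 + B y^2 + C z^2 = 0,
# every choice of (d, w1, w2) yields another integer solution.

def generate_solution(A, B, C, x0, w1, w2, d=1):
    x00, x10, x20 = x0
    return (d * (-(A * w1**2 - B * w2**2) * x00 - 2 * B * w1 * w2 * x10),
            d * ((A * w1**2 - B * w2**2) * x10 - 2 * A * w1 * w2 * x00),
            d * ((A * w1**2 + B * w2**2) * x20))

# Hubbard case: Q = diag(1, 1, -1), particular solution (ell, n - m, m) = (-1, 0, 1).
for w1, w2 in [(1, 0), (3, 1), (3, -1), (5, 2)]:
    ell, ntil, m = generate_solution(1, 1, -1, (-1, 0, 1), w1, w2)
    n = ntil + m
    assert ell**2 + n**2 == 2 * m * n  # the Diophantine constraint of the text
    print((ell, n, m))
```

For instance, $(w_1,w_2)=(3,1)$ produces $(\ell,n,m)=(8,16,10)$, the triple behind the worked example $\tau=4\pi$, $V=3$ below.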
Note, this is the famous Diophantine equation for Pythagorean triples. By inspection, a non-trivial solution is $\ell=-1,\Tilde{n}=0,m=1$. Utilizing Eq. \eqref{eq: final 3 quadratic dio solution} we find \begin{gather} \left(\begin{tabular}{c} $\ell$ \\ $\Tilde{n}$ \\ $m$ \end{tabular}\right) = d \left(\begin{tabular}{c} $w_1^2 - w_2^2$ \\ $ 2 w_1 w_2$\\ $ w_1^2 + w_2^2$ \end{tabular} \right) \label{eq:pythag triples} \\ \implies \left(\begin{tabular}{c} $\ell$ \\ $n$ \\ $m$ \end{tabular}\right) = d \left(\begin{tabular}{c} $w_1^2 - w_2^2$ \\ $[w_1+w_2]^2$\\ $ w_1^2 + w_2^2 $ \end{tabular} \right) \label{eq:final hubbard dio solution} \end{gather} Note, Eq. \eqref{eq:pythag triples} is the standard solution for Pythagorean triples. We thus found that the set of $n$, $m$, and $\ell$ simultaneously satisfying the conditions for simple dynamics can be written as: \begin{subequations} \label{eq: diophantine solution} \begin{gather} \ell = d(w_1^2 - w_2^2) \\ m = d(w_1^2 + w_2^2) \\ n = d(w_1 + w_2)^2 \end{gather} \end{subequations} where $d,w_1, w_2 \in \mathbb{Z}$ and $w_1,w_2$ are coprime. Note, in \eqref{eq: diophantine solution}, if $\ell$ is even (odd) then so is $n$. This implies that the only way to completely satisfy the conditions in Eq. \eqref{eq: freeze condition} is if all motion is frozen or all motion (not constrained by Pauli exclusion) becomes perfect swapping. Inspecting the above solutions, we see that $2 m n-n^2=(w_1^2-w_2^2)^2$, automatically satisfying the condition $2 m n-n^2>0$ for $V$ and $\tau$ to be real. Finally our solution is summarized by \begin{eqnarray} \tau=\frac{\pi}{2} d(w_1^2 - w_2^2)\,\,;\,\, V=\frac{8 w_1 w_2}{|w_1^2-w_2^2|}. \label{eq: tau and V hubbard} \end{eqnarray} Note that $V$ doesn't depend on the choice of $d$, and that any choice involving $w_1=0$ or $w_2=0$ will yield a non-interacting model. As an illustration, consider the following example choices: \\ 1. 
Taking $w_1=1,w_2=0,d=1$ yields $\tau={\pi \over 2},V=0$, which is the non-interacting dynamics considered in the original RLBL model, with perfect swapping. \\ 2. Taking $w_1=3,w_2=1,d=1$ yields $\tau={4 \pi },V=3$. Since $\ell$ is even in this case, the dynamics is completely frozen. \\ 3. Taking $w_1=3,w_2=-1,d=1$ yields $\tau={4 \pi },V=-3$, i.e. frozen dynamics in a model with an attractive Hubbard interaction. It is important to note that the special values of interaction strength and driving frequency in Eq. \eqref{eq: tau and V hubbard} hold for any Hubbard-Floquet procedure where hopping between pairs of sites is sequentially activated. This is the case for such systems on any lattice and in any dimension. We also note that the Diophantine solution is ill-suited to describe the singular case of infinite $V$ and finite $\tau$, and therefore this situation must be handled separately. In the limit of large $V$, the interaction strength overpowers the hopping strength and all evolution is frozen in the $2$-particle sector. On the other hand, evolution in the $1,3$-particle sector is independent of $V$ and therefore may exhibit perfect swapping or freezing. Thus, in this case, it is possible to have one sector (the $2$-particle sector) frozen while the other (the $1,3$-particle sector) exhibits perfect swapping. \subsection{Example 2: Nearest neighbour interactions on a Lieb lattice.} \label{section: NN RLBL model} In the next two examples, we consider interactions involving nearest neighbours. Unfortunately, adding nearest neighbour interactions to the RLBL model directly destroys an essential feature for the solvability of the problem: that the evolution operators of different pairs of sites are not directly coupled (and therefore commute). Here, instead, we choose to work with RLBL-like dynamics on a Lieb lattice as described in \cite{wampler2021stirring}. 
The dynamics we consider here essentially activates pairs that are separated by several lattice sites at each step. The sequence of activations is described in Fig.~\ref{fig:MIC}. \begin{figure} \centering \includegraphics[width=0.4\textwidth]{MIC.PNG} \caption{RLBL-like model on a Lieb lattice. Hopping between neighboring pairs of sites within $A_i$ is activated during step $i$ of the Floquet drive. The same sequence of activated site pairs is achieved with the chiral measurement scheme introduced in \cite{wampler2021stirring}. During each step $i$, evolution is confined between neighboring sites in $A_i$ by rapidly measuring (in the Zeno limit) all sites in the complementary set $A_i^c$. Both models, with NN interactions, will share the same conditions (Eqs. \eqref{eq: NN delta not 0 condit} and \eqref{eq: NN delta 0 condit}) for number state to number state evolution.} \label{fig:MIC} \end{figure} Here, we consider spinless fermions on the Lieb lattice. The drive consists of $8$ steps: at step $i$ we activate hopping between nearest-neighbour sites belonging to the set $A_i$. The evolution is given by \begin{gather} U= U_8 U_7 U_6 U_5 U_4 U_3 U_2 U_1 \label{eq: NN-RLBL U} \end{gather} where $U_i = e^{-i {\cal H}_i \tau}$, and \begin{gather} {\cal H}_i = -t_{hop}\sum_{(i,j)\in A_i} (a_{i}^\dagger a_{j} + h.c.) + V \sum_{<i,j>} n_i n_j. \label{eq: Lieb Hamiltonian} \end{gather} We proceed, as in Section \ref{Section: Hubbard RLBL}, by considering the evolution of a single connected pair during step $i$ and exactly solving for values of $V$ and $\tau$ where the pair exhibits freezing or perfect swapping. The evolution of a 2-site pair of sites $i,j$ for one step is given by \begin{gather} U_{(i,j)}=e^{-i\tau[ - t_{hop}( a_{i}^\dagger a_{j} + h.c.) + V n_i\sum_{k:\langle i,k\rangle} n_{k}+V n_j\sum_{k:\langle j,k\rangle} n_{k}]}. \label{eq: 2 site NN evolve} \end{gather} Note that the number operators on neighbours of $i,j$ commute with the evolution. 
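Since those neighbouring number operators are conserved, the pair evolution reduces to a two-level problem that can be checked numerically. The sketch below (our own code, anticipating the definitions of $N_i$, $N_j$, and $\Delta$ in the next paragraph; constant energy shifts common to both sites drop out) confirms an instance of freezing at $\tau\sqrt{4+\Delta^2V^2}=2\pi m$ and an instance of perfect swapping at $\Delta=0$, $\tau=\pi/2$.

```python
import numpy as np

# Effective 2x2 problem for one particle on an activated pair (t_hop = 1), with
# the static neighbours of the two sites contributing potentials V*N_i and V*N_j.
def pair_unitary(Ni, Nj, V, tau):
    H = np.array([[V * Ni, -1.0], [-1.0, V * Nj]])
    evals, vecs = np.linalg.eigh(H)
    return vecs @ np.diag(np.exp(-1j * tau * evals)) @ vecs.conj().T

# The level splitting is sqrt(4 + Delta^2 V^2) with Delta = N_i - N_j, so the
# particle returns to its initial site (up to a phase) whenever
# tau * sqrt(4 + Delta^2 V^2) = 2*pi*m:
Ni, Nj, V, m = 2, 1, 2.0, 1
tau = 2 * np.pi * m / np.sqrt(4 + (Ni - Nj) ** 2 * V**2)
print(np.isclose(abs(pair_unitary(Ni, Nj, V, tau)[0, 0]), 1.0))

# Perfect swapping requires Delta = 0, e.g. at tau = pi/2:
print(np.isclose(abs(pair_unitary(1, 1, V, np.pi / 2)[1, 0]), 1.0))
```

For $\Delta\neq 0$ the hopping amplitude never reaches unit magnitude, which is why perfect swapping is restricted to $\Delta=0$ in the analysis that follows.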
Let the initial number of occupied neighbours of the sites $i$ and $j$ be $N_i$ and $N_j$ respectively (not counting $i,j$ themselves). Evolution of the 2-site pair is now exactly solvable in terms of $\Delta=N_i-N_j$, the difference in the number of particles neighboring sites $i$ and $j$ in the 2-site pair respectively (see Figure \ref{fig:Delta}). \begin{figure} \centering \includegraphics[width=0.2\textwidth]{particle_diff.PNG} \caption{Evolution of a 2-site pair in the NN-RLBL model on a Lieb lattice. All evolution is restricted to the red ellipse above. Evolution within the red ellipse (i.e. between site 1 and site 2) is determined by $\tau$, $V$, and the neighboring particle number difference $\Delta = |N_1 - N_2|$. In this case, $N_1 = 2$ and $N_2 = 1$, so $\Delta = 1$. If the $\Delta = 1$ condition on $V$ and $\tau$ in Eq. \eqref{eq: NN delta not 0 condit} is satisfied, then the particle at site 2 will exactly return to site 2 after a time $\tau$ (at intermediate times, the particle may be in a generic superposition of being located at site 1 and site 2). } \label{fig:Delta} \end{figure} Solving the two site evolution, we find that evolution is frozen when \begin{eqnarray}\label{eq:DeltaEq} \sqrt{4 + \Delta^2 V^2} \tau = 2 \pi m \end{eqnarray} for some $m \in \mathbb{Z}$. We find that the evolution may only be perfect swapping when $\Delta=N_i-N_j = 0$ and occurs when $\tau = \frac{\pi}{2} + \pi m$ for $m \in \mathbb{Z}$ (see appendix \ref{appendix NN RLBL} for details). In the rest of the paper, whenever considering the evolution on a pair of sites, we will denote $\Delta$ as the difference in the number of (static) particles that are nearest neighbours of the two sites during the relevant evolution step. \subsection{A coupled set of Diophantine Equations} \label{Section: coupled diophantine} For a generic initial position of the particles, $N_i-N_j$ will not be uniform across the sample. 
Thus, for proper particle permutation dynamics, we must simultaneously find a solution of \eqref{eq:DeltaEq} for all possible values of $|N_i-N_j|$. Note that $N_i$ takes the values $0,..,D_i-1$, where $D_i$ is the degree (number of neighbours) of lattice site $i$. It follows that $|N_i-N_j| \in\{0,..,max(D_i,D_j)-1\}$. Thus, if $D_{max}$ is the maximum degree of the lattice, we have the simultaneous conditions: \begin{gather} \sqrt{4 + \Delta^2 V^2} \tau = 2 \pi m_{\Delta} \hbox{ } \forall \hbox{ } \Delta =1,...,(D_{max}-1) \label{eq: NN delta not 0 condit} \\ \tau = \frac{\pi}{2} m_0 \hbox{ } \text{corresponds to} \hbox{ } N_i=N_j \label{eq: NN delta 0 condit} \end{gather} with all $m_i \in \mathbb{Z}$. Equations \eqref{eq: NN delta not 0 condit} and \eqref{eq: NN delta 0 condit} provide $D_{max}$ equations that must be solved simultaneously. The first two equations set the values for $\tau$ and $V$ in terms of $m_0$, $m_1$: \begin{eqnarray}\label{eq:tauAndV} \tau=\frac{\pi}{2} m_0\,\,;\,\, V^2=4( \frac{4m_1^2 }{m_0^2}-1). \end{eqnarray} However, the rest of the equations for $m_i$, with $i>1$, must be simultaneously solved with these values for $\tau$ and $V$ yielding the coupled equations: \begin{gather}\label{eq:NN Diophantine set} 4m_l^2=(1-l^2 ) m_0^2+4 l^2 m_1^2\,\\ \,m_l\in \mathbb{Z} \, , \, l =2,3,...,(D_{max}-1) \end{gather} A first solution to this system may be obtained by taking $m_0=2m_1=2m_2=...=2m_{D_{max}-1}$, which, by \eqref{eq:tauAndV}, yields the non-interacting case $V=0$. We now search for non-trivial solutions (i.e. $\tau,V \neq 0$). {\it Solution for $D_{max}=3$}. For $D_{max}=3$, we describe a general solution in appendix \ref{appendix NN RLBL} that yields non-trivial solutions. The result: \begin{gather} \left(\begin{tabular}{c} $m_0$ \\ $m_1$\\ $m_2$ \end{tabular} \right) = d \left(\begin{tabular}{c} $ - 32 w_1 w_2$ \\ $-3 w_1^2 - 16 w_2^2 $\\ $2 \left[-3 w_1^2 + 16 w_2^2 \right]$ \end{tabular} \right) \label{eq:NN Dio Solution 0 1 2}. 
\end{gather} We note that $m_0$ is always even and thus all evolution is frozen. Due to the hierarchy of the equations, total freezing must then occur for any solutions with $D_{max} \geq 3$. {\it Solution for $D_{max}=4$}. We combine equations \eqref{eq:NN Dio Solution 0 1 2} and the $\Delta=3$ equation from \eqref{eq: NN delta not 0 condit} to find a new Diophantine equation for the case $D_{max}=4$: \begin{gather} m_3^2 = 81 w_1^4 + 2304 w_2^4 - 1184 w_1^2 w_2^2 \label{eq: NN 4 delta Dio} \end{gather} The Diophantine equation \eqref{eq: NN 4 delta Dio} is harder to solve. However, a numerical search does find non-trivial ($V \neq 0$) solutions. For example, $(w_1;w_2;m_3) = (3;9471;4305592257)$ is a solution with $V \approx 6,394$ and $\tau = 454,608 \pi$. Whether there exist $V,\tau$ such that lattices with a maximum degree larger than $4$ may exhibit fully product-state permutation evolution is an open question. The result for $D_{max}=4$ suggests the conjecture that there are solutions to the system of equations for any $D_{max}$. Similar to the strategy above, by solving for $D_{max}=k$, it is possible to construct a new Diophantine equation for $D_{max}=k+1$. Determining whether this tower of equations is solvable is outside the scope of the present paper. On the other hand, as can already be seen in the case of $D_{max}=4$, the values of $V,\tau$ for which the system exhibits such freezing for any initial number state quickly become prohibitively large for typical physical systems as the maximum lattice degree increases. \color{black} {\it Remark.} It is straightforward to generalize the Hamiltonian \eqref{eq: Lieb Hamiltonian} to include more elaborate interactions as long as, at each step, the number operators associated with the neighbourhood of each evolving pair are constant. For example, we can write \begin{gather} {\cal H}_i = -t_{hop}\sum_{(i,j)\in A_i} (a_{i}^\dagger a_{j} + h.c.) 
+ \sum_{\langle i,j\rangle} V_{ij}n_{i} n_{j} , \label{eq: RLBL general} \end{gather} Given the number of particles in the neighborhood of each 2-site pair, we write (note that here we include the potentials $V$ in the definition of $\Delta$): \begin{eqnarray} \Delta_{ij}= \sum_{k:\langle i,k\rangle} V_{ik}n_{k}- \sum_{k:\langle j,k\rangle} V_{jk} n_{k}\label{eq: Delta forms} \end{eqnarray} and the freezing condition becomes: \begin{gather} \tau \sqrt{4 + \Delta_{ij}^2 } = 2 \pi m_{ij}, \,\, m_{ij} \in \mathbb{Z} \end{gather} for all $\Delta_{ij}$ of the form \eqref{eq: Delta forms}. \subsection{Example 3: Deterministic evolution in the measurement induced chirality model on a Lieb lattice.} \label{section: MIC} As another example, we consider the measurement induced chirality protocol of \cite{wampler2021stirring} with added nearest neighbour interactions and in the Zeno limit. In that work, a simple hopping Lieb lattice model of fermions was subjected to repeated measurements changing according to a prescribed chiral protocol. In contrast to the previous models, the Hamiltonian is not time dependent and all hopping terms in the Hamiltonian remain activated throughout the process. It was shown in \cite{wampler2021stirring} that in the limit of rapid measurements, the so-called Zeno limit, the resulting dynamics is a classical stochastic process of permuting Fock states. We will see that, in this case too, we can find special values of interaction strength and protocol duration where the dynamics becomes deterministic. In fact, we will see the dynamics is governed by the same Diophantine equation as in example 2. Specifically, we consider fermions hopping on a Lieb lattice with nearest-neighbor interactions given by \begin{gather} \label{eq: H NN} {\cal H} = -t_{\text{hop}} \sum_{<i,j>} a_i^\dagger a_j + V \sum_{<i,j>} n_i n_j . \end{gather} We now apply the measurement protocol introduced in \cite{wampler2021stirring} to the system.
Namely, we consider an 8-step measurement protocol in which, during the $i^{th}$ step that runs for a time $\tau$, the local particle densities at all sites in a set $A_i^c$ are measured. In the Zeno limit, all evolution during a step is restricted to neighboring sites in the subspace $A_i$ (see Figure \ref{fig:MIC} for details), while the rest of the sites are kept frozen. Thus, in the Zeno limit, the evolution is effectively split into $8$ steps evolved by the Hamiltonian \eqref{eq: Lieb Hamiltonian}, interspersed with additional measurements. The measurements keep projecting the system onto Fock states; the particular states obtained are, however, statistically distributed. If, on the other hand, the step evolution \eqref{eq: 2 site NN evolve} maps Fock states into Fock states, the whole procedure yields a deterministic evolution of an initial Fock state into another. In other words, the conditions for permutative evolution (and the corresponding set of Diophantine equations) for this model are equivalent to those found in the interacting Floquet model investigated in example 2. \begin{comment} A simple generalization to the NN interacting case of the result in \cite{wampler2021stirring} shows that the effective evolution under the measurement protocol for a single step $i$ is given by a single projective measurement followed by evolution under the Hamiltonian \begin{gather} {\cal H}_i = -t_{hop}\sum_{(i,j)\in A_i} (a_{i}^\dagger a_{j} + h.c.) + V \sum_{\langle i,j\rangle} n_{i} n_{j} , \label{eq: NN Hamiltonian step i} \end{gather} for a time $\tau$. At the special points where the evolution only permutes particles, the projective measurement will act as an identity, so we need only focus on the unitary evolutions for a time $\tau$ in \eqref{eq: NN Hamiltonian step i}.
In other words, the measurement protocol will share the same CA points with a Floquet (unitary) drive given by \begin{gather} U= U_8 U_7 U_6 U_5 U_4 U_3 U_2 U_1 \end{gather} where $U_i = e^{-i {\cal H}_i \tau}$. We proceed, as in Section \ref{Section: Hubbard RLBL}, by considering the evolution of a single connected pair during step $i$ and exactly solving for values of $V$ and $\tau$ where the pair exhibits freezing or perfect swapping. The evolution of a 2-site pair of sites $i,j$ for one step is given by \begin{gather} U_{(i,j)}=e^{-i\tau[ - t_{hop}( a_{i}^\dagger a_{j} + h.c.) + V n_i\sum_{k:\langle i,k\rangle} n_{k}+V n_j\sum_{k:\langle j,k\rangle} n_{k}]}. \label{eq: 2 site NN evolve} \end{gather} Note, the number operators for all sites other than $i,j$ in \eqref{eq: 2 site NN evolve} commute with the evolution. \end{comment} \begin{comment} A major tool used in the analysis of the Hubbard RLBL model was that the Hubbard interaction preserved the disjoint nature of the steps of the periodic drive. However, special parameter values where classical evolution occurs may be found even with interactions that do not preserve this property. As a simple example, here we analyze an RLBL model with nearest-neighbor interactions. The model, as the original RLBL, is spinless and now in each step ${\cal H}_i$ is given by \begin{gather} {\cal H}_i = -t_{hop}\sum_{(i,j)\in A_i} (a_{i}^\dagger a_{j} + h.c.) + V \sum_{i\in A_i;\langle i,j\rangle} n_{i} n_{j} , \label{eq: RLBL NN Hamiltonian} \end{gather} where the interacting terms involve nearest neighbours that are not restricted to the sites in $A_i$, thus, the evolution of the 2-site pairs is no longer disjoint due to the n.n. interactions. Let us, nonetheless, consider the evolution of a single 2-site pair $i,j$, \begin{gather} U_{(i,j)}=e^{-i\tau[ - t_{hop}( a_{i}^\dagger a_{j} + h.c.) + V n_i\sum_{k:\langle i,k\rangle} n_{k}+V n_j\sum_{k:\langle j,k\rangle} n_{k}]}. 
\end{gather} If there are $0$ or $2$ particles in the pair, then the pair is static (again due to Pauli blocking in the case of $2$ particles). Evolution within any 2-site pair may only be non-trivial for states with a single particle within the 2-sites. \begin{figure}[h] \centering \includegraphics[width=0.3\textwidth]{neighbours.PNG} \caption{The evolution in the subspace associated with a pair of neighbours will depend on the number occupation of the neighbours. On a square lattice a particle in a neighbouring pair can have anywhere between $0$ and $4$ particles. However, when $4$ neighbours are occupied the other site in the pair is occupied and hopping is blocked. In the example above, $N_1=3,N_2=2$, and the effective potential difference that determines the evolution is $(N_1-N_2) V=V$} \label{fig:neighbours} \end{figure} We now search for conditions that freeze the dynamics of the entire system. If the dynamics are frozen, then in each step in the evolution, the number operators of neighbours outside the pair commute with the evolutions. Therefore, if we start the evolution from a product state the interaction with neighbouring sites will be constant and determined by the number of particles occupying sites neighboring the 2-site pair where evolution is happening. \end{comment} \section{Hilbert Space Fragmentation}\label{Section: Scars} In Section \ref{Section: coupled diophantine}, we gave $D_{\text{max}}$ conditions that must be simultaneously satisfied for Fock state permutative dynamics in models on a Lieb lattice with NN interactions. Similarly, in Section \ref{Section: Hubbard RLBL} we gave conditions for permutative evolution in the Floquet-Hubbard RLBL model. If in these models not all of these conditions are satisfied, then the evolution of a general initial state will require consideration of the full quantum many-body Floquet Hamiltonian. 
However, evolution for certain initial states may still be deterministic even if only one or a few of the conditions for Fock state to Fock state evolution are met. This fragments \cite{Moudgalya2022Scars} the Hilbert space, ${\cal H}$, into disconnected Krylov subspaces, ${\cal K}_i$, i.e. \begin{gather} {\cal H} = \bigoplus_i {\cal K}_i ,~ {\cal K}_i = span_n\{{\cal U}^n |\psi_i \rangle \} \end{gather} where the states $|\psi_i\rangle$ are chosen to be local number states in such a way that the ${\cal K}_i$ are unique. In the rest of this section, we will explore the nature of the Hilbert space fragmentation in the example interacting Floquet and measurement induced models discussed in the previous section. Namely, we will see how the Hilbert spaces in these systems simultaneously support Krylov subspaces that are one-dimensional and correspond to frozen product states, few-dimensional and correspond to states that evolve according to a classical cellular automaton \cite{Wolfram1983CA}, and exponentially large and may evolve with more generic quantum many-body evolution. \subsection{Arrested development} Let us take as an example the NN-RLBL model on a Lieb lattice considered in Section \ref{section: NN RLBL model}. We have seen that satisfying the i$^{th}$ condition in equations \eqref{eq: NN delta not 0 condit} and \eqref{eq: NN delta 0 condit} implies that evolution on any neighboring pair of sites will be frozen if $\Delta = i$ (also requiring that $m_0$ be even if $\Delta = 0$). Thus, any given number state will be frozen under the evolution so long as every neighboring 2-site pair in the system containing a single particle has $\Delta=i$. In Figure \ref{fig:CA zoo}, we provide example frozen states for several values of $\Delta$.
\begin{figure}[h] \centering \includegraphics[width=0.3\textwidth]{CA_Zoo.PNG} \caption{A zoo of frozen particle configurations when only some of the conditions in \eqref{eq: NN delta not 0 condit} and \eqref{eq: NN delta 0 condit} are satisfied on a nearest neighbour interacting Lieb-RLBL model. At the top, a particle configuration that requires only that the $\Delta=0$ condition (and $m_0$ even) be satisfied for frozen evolution. In the bulk of the system are particle configurations that will be frozen so long as the $\Delta = 1$ condition is satisfied. The lower edge of the system provides an example of a particle configuration that will be frozen so long as both the $\Delta=1$ and $\Delta=3$ conditions are satisfied. Since all the particle configurations above are disjoint, the simultaneous satisfaction of the $\Delta = 0$, $\Delta = 1$, and $\Delta=3$ conditions implies that the entire system above will be frozen. Each frozen particle configuration corresponds to a 1D Krylov subspace of the full Hilbert space.} \label{fig:CA zoo} \end{figure} Since these frozen states are trivially mapped back onto themselves (stroboscopically), they correspond to one-dimensional Krylov subspaces. Note that disjoint unions of frozen particle configurations will also be frozen. Therefore, since the number of possible disjoint unions of these frozen particle configurations grows exponentially with the system size, so too will the number of one-dimensional Krylov subspaces. Furthermore, if several of the conditions \eqref{eq: NN delta not 0 condit} and \eqref{eq: NN delta 0 condit} are satisfied, say the $\Delta=i$ and $\Delta=j$ conditions, then a zoo of frozen particle configurations emerges. Any disjoint union of particle configurations frozen by the $\Delta=i$ or $\Delta=j$ condition alone will remain frozen. Additionally, new frozen particle configurations will emerge that require both the $\Delta=i$ and $\Delta=j$ conditions simultaneously in order to be frozen.
An example particle configuration requiring the simultaneous satisfaction of the $\Delta = 1$ and $\Delta = 3$ conditions is also given in Figure \ref{fig:CA zoo}. We emphasize here that the chiral nature of the Floquet procedure played no role in the emergence of these frozen states. In fact, any procedure that sequentially activates hopping between neighboring pairs of sites (suitably spaced to keep evolution disjoint after adding NN interactions) will exhibit the exact same frozen states. For example, consider a new procedure where, at each step in the evolution, the system is evolved with a $U_i$ from equation \eqref{eq: NN-RLBL U} chosen at random (uniformly), i.e. an example realization of this aperiodic, random evolution is given by \begin{gather} \label{eq: chaotic NN-RLBL} U = ...U_4 U_5 U_3 U_3 U_1 U_2 U_7 U_3 . \end{gather} The exact same states will be frozen in this model as in the NN-RLBL model on a Lieb lattice, and therefore the two models will share each of the one-dimensional Krylov subspaces. In the random model, the Hilbert space thus fragments into an exponentially large number of one-dimensional, frozen Krylov subspaces and a single Krylov subspace whose dimension scales exponentially with the system size. Due to the random, aperiodic nature of the full evolution, we expect evolution within the large-dimensional Krylov subspace to correspond to chaotic dynamics. In other words, we expect that time evolution of a random initial product state under \eqref{eq: chaotic NN-RLBL} will result in either frozen evolution or ergodic dynamics within the Krylov subspace (for more on Krylov-restricted thermalization see \cite{Moudgalya2021scarbook}). Each outcome occurs for a finite fraction of initial product states.
\subsection{Krylov Subspaces of Cellular Automata}\label{section: Cellular Automation} Since the dynamics of a particle configuration that obeys the Diophantine conditions depends crucially on the particles on the neighbouring sites, it can be naturally encoded as a cellular automaton step. We will now see how Krylov subspaces supporting classical CA \cite{Wolfram1983CA} at each evolution step may emerge in interacting Floquet and measurement-induced systems when a few of the conditions for number state to number state evolution are satisfied. To elucidate this effect, we consider again the NN-RLBL model on the Lieb lattice. In this case, we take the $\Delta = 0$ and the $\Delta = 1$ conditions for number state to number state evolution to both be satisfied, but this time the $\Delta=0$ condition is satisfied for perfect swapping while the $\Delta=1$ condition is satisfied for freezing. This may happen at, for example, $\tau = \frac{\pi}{2}$ and $V=\sqrt{12}$. It is now possible to find number states such that the initial particle configuration, $|\Psi_{init}\rangle$, and the resulting states after evolution of each step in the Floquet drive, all satisfy either $\Delta=0$ or $\Delta=1$ for every activated two-site pair in the system with a single particle. We give an example particle configuration where this may occur in Figure \ref{fig: NN 3 bound particles}. Here, the space of states $span_n \{U^n |\Psi_{init} \rangle \}$ defines a Krylov subspace where evolution is completely given by a CA since at each step in the Floquet drive the local particle densities are updated deterministically based on the neighboring particle densities (i.e. if $\Delta = 0$ or $1$).
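The two-site update rule behind this CA can be stated compactly: an activated pair holding a single particle accumulates the Rabi phase $\tau\sqrt{4t_{hop}^2+(V\Delta)^2}$ over one step; an even multiple of $\pi$ freezes the pair, while for $\Delta=0$ an odd multiple yields a perfect swap. A minimal numerical sketch (Python; the function name and the convention $t_{hop}=1$ are ours):

```python
import math

def classify_pair(delta, V, tau, t_hop=1.0, tol=1e-9):
    """Stroboscopic fate of an activated 2-site pair holding one particle.

    delta is the integer difference in occupied neighbours of the two
    sites, so the detuning of the effective two-level problem is V*delta
    and the Rabi phase accumulated over one step is
    tau * sqrt(4 t_hop^2 + (V*delta)^2).
    """
    phase = tau * math.sqrt(4.0 * t_hop**2 + (V * delta) ** 2)
    k = phase / math.pi
    if abs(k / 2.0 - round(k / 2.0)) < tol:
        return "frozen"    # full Rabi cycle: identity up to a phase
    if delta == 0 and abs(k - round(k)) < tol:
        return "swap"      # resonant half cycle: perfect swap
    return "quantum"       # generic superposition: non-classical step
```

At $V=\sqrt{12}$ and $\tau=\frac{\pi}{2}$ this reproduces the rule used above: $\Delta=0$ pairs swap, $\Delta=1$ pairs freeze, and $\Delta\geq 2$ pairs evolve non-classically.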
\begin{figure}[h] \centering \includegraphics[width=0.4\textwidth]{3_Bound_Particles_NN.PNG} \caption{Example evolution within a cellular automaton Krylov subspace set by the simultaneous satisfaction of the $\Delta = 0$ and $\Delta = 1$ conditions in equations \eqref{eq: NN delta not 0 condit} and \eqref{eq: NN delta 0 condit}. In this case, 2-site pairs with $\Delta=0$ evolve with perfect swapping while 2-site pairs with $\Delta=1$ are frozen. The resulting cellular automaton returns the particles of this example initial configuration to their initial sites after $19T$. Example values of $V,\tau$ that achieve this evolution are $V=\sqrt{12}$ and $\tau = \frac{\pi}{2}$. Particle trajectories are drawn with orange, green, and magenta arrows.} \label{fig: NN 3 bound particles} \end{figure} Similarly to the case of frozen initial particle configurations, disjoint unions of particle configurations that evolve as a CA will also evolve as a CA. For particle configurations whose CA evolution leaves all particles contained in a volume that does not scale with system size (for example, the evolution of the configuration in Figure \ref{fig: NN 3 bound particles} remains contained within the $5 \times 5$ site square), the number of CA Krylov subspaces will grow exponentially with the system size (since there are exponentially many disjoint unions of such particle configurations). These CA subspaces may coexist with frozen Krylov subspaces as well as with exponentially large subspaces with more general quantum evolution. It is important to note that these CA subspaces break the underlying $T$ time translation symmetry of the evolution operator. For example, the particle configuration in Fig. \ref{fig: NN 3 bound particles} returns to its initial configuration after $19T$. However, the exact realization of this Krylov subspace requires fine-tuning in parameter space.
If an alteration of this model were possible such that the realization of these Krylov subspaces did not require fine-tuning, then such a model would be a realization of a time crystal. In fact, since the systems we have considered may simultaneously support Krylov subspaces that break the $T$ time translation symmetry in different ways, such a stabilized system would simultaneously support several different time crystals depending on which Krylov subspace contains the initial state. Recent works \cite{Nathan2019AFI,Nathan2021AFI} have argued that disorder may stabilize dynamics for regions in parameter space near similarly fine-tuned points in an interacting Floquet model to achieve anomalous Floquet insulating phases. We plan to address when disorder may stabilize dynamics for the entire system or for specific Krylov subspaces in our more general set of finely-tuned points in a future work. \begin{comment} The examples shown in the "scar face" figure, Fig. \ref{fig:CA zoo} illustrates this phenomenon. Namely, in the figure, the lower particle configurations exhibit states that only require satisfaction of the $\Delta = 1$ condition to be stroboscopically frozen. The top particle configuration requires only the $\Delta = 0$ condition to be satisfied for classical evolution to occur. Furthermore, disjoint unions of classically evolving particle configurations will also evolve classically. For example, for the entire system in Fig. \ref{fig:CA zoo} to evolve classically, all that is required is that both the $\Delta = 0$ and $\Delta = 1$ conditions are satisfied. \color{red} next paragraph overlaps with previous \color{black} When a small number of the conditions \eqref{eq: NN delta not 0 condit} and \eqref{eq: NN delta 0 condit} are satisfied, a zoo of classically evolving particle configurations emerges. In addition to the particle configurations that require a single $\Delta$ condition to evolve classically as explored in Fig.
\ref{fig:CA zoo}, new classically evolving particle configurations emerge that require the combined conditions to evolve classically. For example, in the particle configuration given in Figure \ref{fig: NN 3 bound particles}, a perfect swapping $\Delta=0$ condition and a frozen $\Delta = 1$ condition conspire to return the particle configuration to its initial state after $19T$. This happens at, for example, $\tau = \frac{\pi}{2}$ and $V=\sqrt{12}$. We emphasize that these simple, classically evolving configurations exist despite the fact that, at this interaction strength and driving frequency, none of the $\Delta>1$ conditions are satisfied and so the full evolution is highly non-classical. In fact, even if a single 2-site, single particle pair exists in the system such that $\Delta = i$ and the $i^{th}$ equation in \eqref{eq: NN delta not 0 condit} is not satisfied, then the system may thermalize. \end{comment} \color{black} \begin{comment} In the Floquet-Hubbard RLBL model, it can be seen that classically evolving states (in addition to frozen states) may still exist in the system even when not all $3$ conditions, \eqref{eq: 2 site perm condition 0}, \eqref{eq: 2-site permutation condit 1}, and \eqref{eq: 2-site permutation condit 2} are satisfied. For this to occur, we must require that any neighboring pair in the system - at all times - must evolve classically under the subset of the conditions \eqref{eq: 2 site perm condition 0}, \eqref{eq: 2-site permutation condit 1}, and \eqref{eq: 2-site permutation condit 2}. 
\color{red} next sentence not clear or repetitive \color{black} Since the classical evolution of the system is a general permutation of the sites, we thus have that any initial state will evolve classically so long as every pair of sites in the state (not just neighboring pairs) evolve classically under the subset of the conditions \eqref{eq: 2 site perm condition 0}, \eqref{eq: 2-site permutation condit 1}, and \eqref{eq: 2-site permutation condit 2} that are met. For example, if only conditions \eqref{eq: 2-site permutation condit 1} and \eqref{eq: 2-site permutation condit 2} are satisfied, the evolution of any state where every site initially has a single particle (for example a Neel ordered state) will evolve classically. This is because the only possible particle configurations in any 2-site pair of such a state are $\uparrow \uparrow$, $\downarrow \downarrow$, $\uparrow \downarrow$, and $\downarrow \uparrow$. Evolution for all of these combinations under \eqref{eq: 2-site permutation condit 1} and \eqref{eq: 2-site permutation condit 2} is a permutation even though \eqref{eq: 2 site perm condition 0} is not satisfied. Thus, the evolution of the entire system will also be a permutation. In a similar fashion, any initial state with only doublons will evolve classically. It is interesting to note that, in both of these cases, the RLBL nature of the permutations still imply that the evolution of these states will support a local bulk and chiral edge modes. However, in contrast to the non-interacting RLBL, the particle being transported along the edge of the system is a repulsively bound fermion pair. \color{red} time crystals\color{black} \end{comment} \subsection{Frozen states of Floquet evolution on a chain with nearest neighbour interactions} A major tool used in the analysis of the interacting Floquet and measurement models above was that the interactions preserved the disjoint nature of the steps of the periodic drive. 
However, using the same tools as in the disjoint case, it is possible to find frozen states even when the activated neighboring pairs interact (i.e. do not commute). Here, we investigate an example model where the interactions ruin the disjoint nature of the Floquet drive and show how, at special values of interaction strength and driving frequency, it is still possible to find states that are frozen. Namely, we take as an example a 1D, NN interacting Hamiltonian of the form \begin{gather} {\cal H}(t) = H_0(t) + V\sum_{i=0}^{N-2} n_i n_{i+1} \end{gather} where \begin{gather} H_0(t) = \begin{cases} \sum_{i \text{ even}} (a_{i}^\dagger a_{i+1} + h.c.) & 0 \leq t<\frac{T}{2} \equiv \tau \\ \sum_{i \text{ odd}} (a_{i}^\dagger a_{i+1} + h.c.) & \frac{T}{2} \leq t<T . \end{cases} \end{gather} Similarly to the previous cases, let us again consider a single 2-site pair where hopping is activated. If the occupancy of the sites neighboring the pair happens to be static, then the conditions for frozen or perfect swapping \eqref{eq: NN delta not 0 condit} and \eqref{eq: NN delta 0 condit} will still hold (except here with $D_{max} = 2$). However, this is, of course, not generally the case. Even if a neighboring pair is stroboscopically frozen, its occupations may still fluctuate within a step; this is already enough to make $\Delta$ ill-defined and ruin the conditions \eqref{eq: NN delta not 0 condit} and \eqref{eq: NN delta 0 condit}. However, if every 2-site pair with a single particle is located on the edge of a domain wall in the system, then $\Delta$ will again be well defined (since any neighboring particles will be stationary due to Pauli exclusion) and the conditions \eqref{eq: NN delta not 0 condit} and \eqref{eq: NN delta 0 condit} will hold for these particle configurations. In Figure \ref{fig:Domain wall freezing}, we give examples of such states that will be stroboscopically frozen when the $\Delta=1$ condition is satisfied, i.e.
all these states are eigenstates of the evolution operator ${\cal U}(T) = {\cal T}e^{-i \int_0^T {\cal H}(t)\,dt}$. \begin{figure}[h] \centering \includegraphics[width=0.3\textwidth]{Domain_wall_freezing.PNG} \caption{Particle configurations frozen in the even-odd NN model at values of $V,\tau$ that satisfy the $\Delta=1$ condition in \eqref{eq: NN delta not 0 condit}. The only 2-site pairs with a single particle are located on the domain walls. Since, within the uniform domain, particles are frozen at all times due to Pauli exclusion, the neighboring particle number difference for 2-site pairs on the domain wall is constant and given by $\Delta=1$. } \label{fig:Domain wall freezing} \end{figure} We now turn to numerically investigating the emergence of these frozen states and the Hilbert space fragmentation in this system. We exactly diagonalize ${\cal U}(T)$ at the special points $V=\sqrt{12}$, $\tau=\frac{\pi}{2}$ and $\tau=\pi$ \footnote{As a technical note, the frozen domain wall states will be highly degenerate and numerical diagonalization will give a random basis of eigenstates within the degenerate subspace. To find the frozen states within this basis, we apply a small disorder potential during the wait step in the evolution to split the energy levels. This disorder potential will add only a global phase to the frozen states and thus allows a direct numerical route to finding them.}. Here, the $\Delta=1$ condition for freezing is satisfied, while the $\Delta=0$ condition yields perfect swapping ($\tau=\frac{\pi}{2}$) or freezing ($\tau=\pi$), respectively. If the activated neighboring pairs were disjoint, evolution at these parameter values would be exactly solvable (with dynamics either being a CA or stroboscopically frozen). As we will see, however, this is not the case here. The Hilbert space instead fragments into exponentially many subspaces of frozen domain wall states and a single, exponentially large, ergodic subspace.
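The domain-wall argument above can be cast as a purely combinatorial check on occupation configurations. The sketch below (Python; function name and conventions are ours) verifies, for both half-steps of the even-odd drive, that every activated pair containing a single particle sees an outside-neighbour occupation difference $\Delta = \pm 1$ and that the occupied outside neighbour is itself static during that half-step (Pauli-blocked in its own activated pair, or unpaired at the chain edge); this is a necessary condition for stroboscopic freezing when the $\Delta=1$ condition holds:

```python
def is_frozen_config(occ):
    """Necessary condition for a Fock configuration of the 1D even-odd
    NN model to be stroboscopically frozen when the Delta = 1 freezing
    condition is satisfied: every activated single-particle pair must
    sit at a domain wall whose occupied outside neighbour is static."""
    N = len(occ)
    for parity in (0, 1):                    # even / odd half-step
        for i in range(parity, N - 1, 2):    # activated pairs (i, i+1)
            j = i + 1
            if occ[i] + occ[j] != 1:
                continue                     # empty or Pauli-blocked pair
            nl = occ[i - 1] if i > 0 else 0  # outside neighbours
            nr = occ[j + 1] if j + 1 < N else 0
            if abs(nl - nr) != 1:
                return False                 # pair does not have Delta = +-1
            s = i - 1 if nl == 1 else j + 1  # the occupied outside neighbour
            # its partner in this half-step (out of range => unpaired, static)
            p = s + 1 if s % 2 == parity else s - 1
            if 0 <= p < N and occ[p] != 1:
                return False                 # neighbour not Pauli-blocked
    return True
```

For example, uniform domains such as $[1,1,1,0,\ldots]$ pass the check, while an isolated particle fails since its activated pair has $\Delta=0$.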
To separate the two classes of subspaces, we calculate the half-chain entanglement entropy of the eigenstates (shown in Figure \ref{fig:domain wall entanglement entropy}). The frozen eigenstates have zero entanglement entropy while the other eigenstates have finite (and, as can be seen from Fig. \ref{fig:domain wall entanglement entropy}, large) entanglement entropy. Upon plotting the average local particle densities of a sample of the zero entanglement entropy eigenstates, we find that they do indeed correspond to the expected frozen domain wall states. \begin{figure}[h] \centering \includegraphics[width=0.495\textwidth]{Entangle_entropy.PNG} \caption{Half-chain entanglement entropy of all eigenstates of the evolution in the even-odd NN Floquet model. Eigenstates were found by exactly diagonalizing a 16-site chain. The parameter values were chosen such that both the $\Delta=0$ and $\Delta=1$ conditions in \eqref{eq: NN delta not 0 condit} and \eqref{eq: NN delta 0 condit} are satisfied: $V=\sqrt{12}$, $\tau=\frac{\pi}{2}$ (left) and $\tau=\pi$ (right). Despite the non-disjoint nature of the activated hopping site pairs, the conditions \eqref{eq: NN delta not 0 condit} and \eqref{eq: NN delta 0 condit} will still be valid for domain wall states, which will, therefore, be frozen under the dynamics. These number states have no entanglement entropy and are indicated with red arrows in the figure above. The other eigenstates exhibit near-maximal entanglement entropy. This is a signature of the fragmentation of the Hilbert space into frozen Krylov subspaces and an ergodic Krylov subspace.} \label{fig:domain wall entanglement entropy} \end{figure} The large half-chain entanglement entropy of non-domain wall states suggests that the rest of the Hilbert space might be thermalized. To provide further evidence for this claim, we analyze an indicator often used to differentiate between ergodic and integrable systems: the statistics of level spacing ratios.
For thermalizing systems, it is expected \cite{Dalessio2014Therm} that the evolution operator ${\cal U}$ resembles random matrices drawn from a circular ensemble (the analog of Gaussian ensembles for unitary matrices). Unlike the evolution operators for integrable systems, eigenstates of circular ensembles are random vectors and the spectrum exhibits level repulsion. Thus, it is possible to assess whether a system is ergodic by analyzing the statistics of the spacing of energy levels to see if the distribution is Poissonian (corresponding to no level repulsion) or if it corresponds to the expected level spacing distribution of circular ensembles (see \cite{Dalessio2014Therm} for explicit formulas). Namely, consider the level spacings between two neighboring eigen-quasienergies $\varepsilon$ (i.e. the $\varepsilon$'s are the phases of the eigenvalues of ${\cal U}$), \begin{gather} \delta_n = \varepsilon_{n+1} - \varepsilon_n . \label{eq: level spacing} \end{gather} The ratio of level spacings is given by \begin{gather} r_n = \frac{\min\{\delta_n,\delta_{n+1}\}}{\max\{\delta_n,\delta_{n+1}\}}. \end{gather} We then expect the statistics of $r$ to match that of the circular ensembles instead of yielding a Poissonian distribution if the system is ergodic. In our case, however, the system is not completely ergodic since the domain wall number states are eigenstates of the evolution. We instead wish to study the nature of the subspace which is the complement of the set of all frozen Krylov subspaces within the Hilbert space. We will thus only consider $\delta_n$ in \eqref{eq: level spacing} if the corresponding eigenstates of $\varepsilon_{n+1}$ and $\varepsilon_{n}$ have non-zero half-chain entanglement entropy. The results of this analysis are shown in Fig. \ref{fig:level spacing}. As can be seen in the figure, the probability distribution is in good agreement with that of the circular orthogonal ensemble (COE), suggesting that the Krylov subspace is thermal.
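The ratio statistic is straightforward to evaluate from the sorted quasienergies; a minimal sketch (Python with NumPy):

```python
import numpy as np

def spacing_ratios(quasienergies):
    """Consecutive level-spacing ratios r_n = min(d_n, d_{n+1}) / max(...)
    for a 1D array of quasienergies (phases of the eigenvalues of U)."""
    e = np.sort(np.asarray(quasienergies, dtype=float))
    d = np.diff(e)  # spacings delta_n
    return np.minimum(d[:-1], d[1:]) / np.maximum(d[:-1], d[1:])
```

In the analysis above, spacings involving frozen (zero-entanglement) eigenstates are discarded before forming the $r_n$.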
In summary, we have shown that the Hilbert space of the even-odd NN Floquet model is fragmented at special values of interaction strength and driving frequency. The fragmented Hilbert space simultaneously supports exponentially many (in system size) frozen Krylov subspaces and a single, exponentially large ergodic Krylov subspace. In this model, we did not find evidence of CA subspaces. Whether these subspaces are realizable in other non-disjoint models is an open question. Furthermore, for neighboring two-site pairs each with a single particle, the interactions between the pairs could conspire to produce special values of $V,\tau$ not given by equations \eqref{eq: NN delta not 0 condit} and \eqref{eq: NN delta 0 condit} where evolution is stroboscopically frozen. We leave both of these open questions for future work. \begin{figure}[h] \centering \includegraphics[width=0.495\textwidth]{level_spacing.PNG} \caption{Level spacing statistics in the non-frozen Krylov subspace for evolution in the even-odd NN Floquet model. As in Fig. \ref{fig:domain wall entanglement entropy}, parameter values are chosen as $V=\sqrt{12}$, $\tau=\frac{\pi}{2}$ (left) and $\tau=\pi$ (right). The probability distribution, $P(r)$, of the level spacing ratios, $r$, for quasi-energy levels not corresponding to frozen eigenstates is in good agreement with the level spacing probability distribution of random matrices in the circular orthogonal ensemble (COE). This suggests that the Krylov subspace is ergodic.} \label{fig:level spacing} \end{figure} \section{Summary and discussion} In recent years the study of quantum many-body states that break ergodicity has been an active field of research. Here, we considered conditions for dynamics in interacting systems that take initial local number states to local number states. We have found such conditions for systems with sequentially activated hopping involving interactions such as Hubbard and nearest neighbour density interactions.
Studying the resultant Diophantine relations between interaction strength, hopping energy, and hopping activation time, we discovered solutions to a variety of such systems. The resultant dynamics can be cast into two types: (1) evolution that is deterministic for any initial Fock state, and (2) fragmentation of the Hilbert space into deterministic subspaces and non-deterministic ones. Our results introduce new sets of dynamically tractable interacting systems, with an emphasis on 2D, where such results are scarce. Furthermore, the approach is applicable to similar systems in other dimensions. At the special solvable points, we get a variety of behaviors, from frozen dynamics of Fock states to cellular-automaton-like evolution of selected subspaces. In cases where only some of the Diophantine conditions are met, we have shown that the special subspaces can exist simultaneously with states that possess volume-law entanglement entropy and level statistics suggesting thermalizing behavior. As discussed in section \ref{section: Cellular Automation}, although the ratios of Hamiltonian parameters (interaction strength, evolution time, etc.) considered here are finely tuned, previous work suggests that similarly finely tuned points may be stabilized by disorder to realize novel dynamical phases. In particular, periodic cellular automata evolution in our models may lead to new classes of time crystals. The problem of finding complete freezing of Fock states also led us to an interesting number-theoretic problem involving the solution of a tower of Diophantine equations described in \eqref{eq: NN delta not 0 condit} and \eqref{eq: NN delta 0 condit}. We have explicitly exhibited solutions for dynamics on lattices with maximal degree of up to $4$ nearest neighbours and conjecture that a solution can be found for arbitrary maximal degree.
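The numerical search over the tower of Diophantine equations reduces to a perfect-square test of the right-hand side; for the $D_{max}=4$ equation \eqref{eq: NN 4 delta Dio}, a brute-force sketch reads (Python; the search bound is illustrative and far smaller than what the solution quoted in the text requires):

```python
import math

def rhs(w1, w2):
    """Right-hand side of the D_max = 4 Diophantine equation."""
    return 81 * w1**4 + 2304 * w2**4 - 1184 * w1**2 * w2**2

def search(bound):
    """All (w1, w2, m3) with 0 <= w1, w2 <= bound and m3^2 = rhs(w1, w2)."""
    sols = []
    for w1 in range(bound + 1):
        for w2 in range(bound + 1):
            r = rhs(w1, w2)
            if r < 0:
                continue
            m3 = math.isqrt(r)       # exact integer square root
            if m3 * m3 == r:
                sols.append((w1, w2, m3))
    return sols
```

Since $81=9^2$ and $2304=48^2$, the axes give solution families such as $(w_1,w_2,m_3)=(1,0,9)$ and $(0,1,48)$.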
We remark that the same methods may be applicable to bosonic systems, and systems with pairing terms where resultant cellular automata may not be of the number preserving type. \emph{Acknowledgments.} We thank G. Refael and E. Berg for discussions. Our work was supported in part by the NSF grant DMR-1918207.