\titlespacing\section{0pt}{12pt plus 4pt minus 2pt}{0pt plus 2pt minus 2pt} \titlespacing\subsection{0pt}{12pt plus 4pt minus 2pt}{0pt plus 2pt minus 2pt} \titlespacing\subsubsection{0pt}{12pt plus 4pt minus 2pt}{0pt plus 2pt minus 2pt} \begin{document} \title{Membership Inference Attack for Beluga Whales Discrimination} \numberofauthors{6} \author{ \alignauthor Voncarlos M. Araújo \affaddr{Département d'Informatique, Université du Québec à Montréal } \email{} \alignauthor Sébastien Gambs \affaddr{Département d'Informatique, Université du Québec à Montréal } \email{} \alignauthor Clément Chion \affaddr{Département des Sciences Naturelles, Université du Québec en Outaouais} \email{} \and \alignauthor Robert Michaud \affaddr{Groupe de recherche et d'éducation sur les mammifères marins (GREMM)} \email{} \alignauthor Léo Schneider \affaddr{Département d'Informatique, Université du Québec à Montréal } \email{} \alignauthor Hadrien Lautraite \affaddr{Département d'Informatique, Université du Québec à Montréal } \email{} } \maketitle \begin{abstract} To efficiently monitor the growth and evolution of a particular wildlife population, one of the fundamental challenges to address in animal ecology is the re-identification of individuals that have been previously encountered, but also the discrimination between known and unknown individuals (the so-called ``open-set problem''), which must be solved before re-identification can take place. In particular, in this work, we are interested in discriminating beluga whales in digital photos, as belugas are known to be among the most challenging marine species to discriminate due to their lack of distinctive features. To tackle this problem, we propose a novel approach based on the use of Membership Inference Attacks (MIAs), which are normally used to assess the privacy risks associated with releasing a particular machine learning model.
More precisely, we demonstrate that the problem of discriminating between known and unknown individuals can be solved efficiently using state-of-the-art approaches for MIAs. Extensive experiments on three benchmark datasets related to whales, two different neural network architectures and three MIAs clearly demonstrate the performance of the approach. In addition, we have also designed a novel MIA strategy, which we coin ensemble MIA, that combines the outputs of different MIAs to increase the attack accuracy while diminishing the false positive rate. Overall, one of our main objectives is also to show that research on privacy attacks can be leveraged ``for good'' by helping to address practical challenges encountered in animal ecology. \end{abstract} \section{Introduction} In animal ecology, the ability to re-identify (re-ID) an individual animal across multiple encounters allows for addressing a broad range of questions related to ecosystem function, community and population dynamics as well as behavioral ecology~\cite{Arts2015, Krebs1999}. In many cases, especially for aquatic species such as marine mammals, re-ID requires extensive training and practical experience for a human to acquire sufficient expertise to accurately recognize a particular individual. To partially circumvent this issue, biologists usually rely on approaches such as tagging and photo-identification (photo-ID)~\cite{articlepratical, Krebs1999}. While accurate, the tagging approach is intrusive to animals and is often expensive and laborious. In contrast, the photo-ID approach uses visual identification from camera images (\emph{e.g.}, hand-held camera, camera trap or drone), which is non-invasive for animals and has a lower cost. Nonetheless, there are some practical and methodological challenges associated with its use. First, even among experienced researchers, there is a non-negligible chance of human error and bias when reviewing photos~\cite{foster2012}.
Second, it is also time-consuming and expensive in terms of human involvement to manually filter through thousands of images. To overcome these limitations, one possible strategy is to rely on computer vision techniques to standardize and automate the animal re-ID process~\cite{Schneider2019}. To realize this, for decades, ``feature engineering'', which can be defined as the process of selecting or transforming raw data into informative features, has been the most commonly used technique. Basically, it means that most of the algorithms for animal re-ID are designed and implemented to focus exclusively on predetermined traits, such as patterns of spots or stripes, to discriminate among individuals. However, feature engineering requires programming experience as well as sufficient familiarity with the species considered to identify relevant features. In addition, this approach lacks generality, as once a feature detection algorithm has been designed for one species, it is unlikely to be useful for others~\cite{tiger}. More recently, the last decade has witnessed the emergence of deep learning systems that make use of large data volumes to automatically learn discriminative features~\cite{7906512}. In particular, Convolutional Neural Networks (CNNs) have achieved state-of-the-art results in a variety of use cases based on the assumption of a closed world (\emph{i.e.}, a fixed number of classes/identities). However, CNNs are known to lack robustness when deployed in real-world classification/recognition applications, in which incomplete knowledge of the world during training results in unknown classes being submitted to the model during testing. This corresponds, for instance, to the situation in which, when used in the wild, the model has to recognize individuals that it has not seen during training.
In marine ecology, one of the main challenges related to the re-ID of animals such as wild whales is the encounter of large populations in which new individuals frequently appear due to birth or migration, therefore creating an ``open-set'' setting~\cite{6365193} wherein the identity model must deal with ``classes'' (\emph{i.e.}, individuals) unseen during training. Thus, a desirable feature for an animal re-ID approach is the ability not only to identify animals that belong to the catalog but also to recognize new individuals (\emph{i.e.}, previously unknown animals). To address this issue, we investigate the use of Membership Inference Attacks (MIAs), a form of privacy leakage in which the objective of the adversary is to decide whether a given data sample was in a machine learning model's training dataset~\cite{DBLP:conf/sp/ShokriSSS17, yeom2018, salem2019, nasri, long, Chen_2021}. Knowing that a specific data sample was used to train a particular model may lead to potential privacy breaches if, for instance, this membership reveals a sensitive characteristic (\emph{e.g.}, being part of the cohort of patients having a particular disease or being a member of a vulnerable group). The gist of our approach is that we can leverage a MIA to discriminate whether or not a new beluga whale was present in the training set. Then, this information can be used in the re-ID pipeline to decide whether to classify a known individual or to add a new entry in the catalog for an unknown individual. To summarize, in this paper our main contribution is the proposition of a novel approach for whale discrimination through images (photo-ID), which relies on the use of MIAs. In particular, one of our objectives is to show that by drawing on the significant body of work on MIAs, it is possible to efficiently address the ``open-set'' vs ``closed-set'' problem.
To demonstrate this, extensive experiments have been conducted with three state-of-the-art MIAs that leverage different information produced by the model (\emph{i.e.}, prediction confidence, predicted and ground truth label, or both of them) as well as different attack strategies (neural network-based, metric-based and query-based attacks). More precisely, we have performed a comprehensive measurement of the success of MIAs at addressing the open-set problem over two model architectures (ResNet50~\cite{resnet50} and DenseNet-121~\cite{densenet121}) and three benchmark image datasets related to whale species (GREMM~\cite{Michaud2014}, Humpback~\cite{humpdata, Cheeseman2021} and NOAA~\cite{nooa}), along with three state-of-the-art MIAs, namely Yeom \emph{et al.}~\cite{yeom2018}, Salem \emph{et al.}~\cite{salem2019} and LabelOnly~\cite{Choo2020}, thus building a total of 36 attack scenarios. In addition, previous works~\cite{DBLP:conf/sp/ShokriSSS17,salem2019,Choo2020} assume that machine learning models are more likely to leak information when they overfit; we verify this assumption by evaluating overfitted and non-overfitted models while monitoring the false positive rate, as recommended in~\cite{carlini,onThediff}, to ensure the reliability of the results. Finally, we introduce a novel attack design for whale discrimination, which we coin ensemble MIA, that combines the outputs of different MIAs to increase the attack accuracy while decreasing the false positive rate. The outline of the paper is as follows. First, in Section~\ref{sect_related}, we review the relevant background on automated photo identification systems as well as on membership inference attacks. Then, in Section~\ref{sect_approach}, we describe the St. Lawrence beluga whale re-ID pipeline from side pictures, the training of the attack model as well as the different MIA strategies that we propose to implement the discrimination between known and unknown belugas.
Afterwards, in Section~\ref{sect_experiments}, we present the experimental setting used to evaluate our approach, which includes the datasets, the experimental configuration as well as the target and attack models. Finally, in Section~\ref{sect_result}, we report on the performance of the approach under different scenarios and discuss how the attack can generalize to different settings as well as the factors influencing its success and its robustness, before concluding in Section~\ref{sect_conc}. \section{Related Work} \label{sect_related} In this section, we first review the related work on re-identification and discrimination of marine mammals as well as the background on MIAs. \subsection{Automated Photo Identification of Marine Mammals} The research on the individual identification of cetaceans using natural markings began in the early 1970s \cite{review1,review2}, including the use of unique markings and coloration, or notches in the dorsal fin or fluke \cite{finreview,review3}. For instance, Pollicelli, Coscarella and Delieux~\cite{Pollicellire} have evaluated the opportunity to use image metadata (\emph{i.e.}, annotations describing the animal characteristics as well as the time and place at which the picture was taken) as an attribute for photo-ID to reduce the number of possible matches in the identification step. In this work, classical machine learning techniques, such as neural networks, Bayesian classifiers, decision trees and $k$-nearest neighbors, were applied on the metadata of 869 pictures of 223 Commerson's dolphin individuals taken over seven years. Overall, the decision tree classifier was able to correctly identify 90\% of the individuals in the validation set based only on the metadata of their pictures. One clear limitation of this work is the reliance on metadata rather than on intrinsic visual characteristics of the animals. In addition, manual work is also required and the system has to be retrained to include new individuals.
In~\cite{RENO201995}, a fully automated system called Smart Photo-ID of Risso's dolphin (SPIR) was developed to study the presence of Risso's dolphins in the Gulf of Taranto. This species is characterized by several distinctive scars over the dorsal fin, a useful pattern for automated recognition. The dataset necessary for training this system was created with the involvement of the general public in research activities, side by side with experts. The first step of the system consists in preprocessing the input image to segment the dorsal fin using Otsu's thresholding technique \cite{otsu} and morphological operators. After detection, feature extraction is performed using Speeded Up Robust Features (SURF)~\cite{surf} and the Scale-Invariant Feature Transform (SIFT)~\cite{sift}, which are methods for extracting local characteristics in images. To predict the identity of an unknown dolphin, the input image is compared with all of the images available in the database. Then, the picture with the highest number of features matching the query image is selected as the best-matching dolphin. The results obtained demonstrate that SIFT outperforms the SURF feature detector, showing better performance and achieving a 90\% accuracy in the validation experiment. Unfortunately, the application of SPIR cannot be extended easily to other species, especially if these are not characterized by scars over the dorsal fin. Recently, Maglietta and collaborators~\cite{Maglietta2020} have proposed a novel methodology called NNPool, dedicated to the automatic discrimination of unknown vs. known Risso's dolphins. More precisely, NNPool consists of a pool of $n$ CNNs, each one being trained to recognize a particular known individual versus the rest of the dolphins (\emph{i.e.}, a form of one-versus-all classification).
The models were trained on Risso's dolphin data and photos acquired between 2013 and 2018 in the Northern Ionian Sea (Central-Eastern Mediterranean Sea). The results obtained have also been validated using another dataset composed of unknown images of Risso's dolphins from the Northern Ionian Sea and the Azores, acquired in 2019. More precisely, their validation experiments considered 28 individuals and 300 images of Risso's dolphin fins, of which 40 images belong to some of the 23 known dolphins while the remaining 260 belong to unknown dolphins. A discrimination accuracy of 87\% was measured on the validation dataset, and NNPool can be used as a preprocessing step of SPIR~\cite{RENO201995} to detect an unknown dolphin before performing the photo-ID of known individuals. This work is the closest to ours, in the sense that it considers the discrimination task of distinguishing known vs unknown individuals, rather than only photo re-ID. Nonetheless, it is applied to a species that is much easier to discriminate because of its distinctive marks. In addition, the dataset used to conduct the experiments is not publicly available, which makes it impossible to compare our approach to theirs. Deep learning approaches are relatively novel in the field of animal photo-ID~\cite{chim, Miele2021, Korschens2019, Nepovinnykh2020, Maglietta2020}. For example, Bogucki and co-authors introduced a fully automated system based on three CNNs for photo-ID of North Atlantic right whales~\cite{Bogucki2019}. This system participated in a competition hosted on the Kaggle platform in 2015 on the automation of the right whale recognition process using a dataset of aerial photographs of animals~\cite{humpdata}. The training dataset provided for the competition consisted of 4544 images, each containing a single right whale labeled with the correct identity. Submissions were evaluated on a test set of 2493 images, used to determine the rankings of the competitors.
The number of pictures per whale varied considerably in this dataset (\emph{e.g.}, six individuals had only one photograph whereas two whales had eighty-two images each). This is a challenging setting for classification, whose performance depends on the number of images available for each individual. The proposed method uses a CNN that selects the region of interest and outputs a bounding box around the head of the whale, which is then used to crop the high-resolution image. The authors developed a network that automatically scales, rotates and crops the input image. This is achieved by training a CNN to locate two key points on the top of the whale's head from already labeled data. Data augmentation was applied, adding rotated and re-scaled versions of the images in the original dataset. Finally, another CNN was used to perform the actual whale identification, achieving an individual right whale recognition accuracy of 87.44\%. The authors explained that the wide variability in the number of images per individual whale impacted the performance of the last CNN devoted to individual recognition. More precisely, having more images per individual improves the recognition accuracy. More recently, Bergler and co-authors~\cite{Bergler2021} have developed a deep-learning-based framework for identifying killer whales. The approach, called FIN-PRINT, was trained and evaluated on a dataset of 367 individuals collected over an 8-year period (2011--2018) in the coastal waters of western North America. First, object detection is performed to identify unique killer whale markings, which achieves 94.1\% precision using a recent version of YOLO (YOLOv5) \cite{glenn_jocher_2022_7347926}. Second, all previously detected killer whale markings are extracted.
The third step introduces a data enhancement mechanism by filtering valid versus invalid (VVI) markings from previous processing levels, in which a ResNet34 is used for the binary VVI classification of identification images, achieving 97.5\% precision. The fourth and final step involves multi-class identification, which assigns a test sample a label among the top-100 killer whales. FIN-PRINT achieves an accuracy of 92.5\% and 97.2\% using respectively top-1 and top-3 predictions for photo-identified killer whales. Note that the top-100 killer whales each have more than 325 images per individual while the remaining individuals have fewer images per class, which leads to an unbalanced dataset challenge. In Cheeseman \emph{et al.}~\cite{Cheeseman2021}, the authors have developed a new CNN-based similarity algorithm for humpback whale individuals. The method relies on a Densely Connected Convolutional Network (DenseNet) to extract key points from an image of the ventral surface of the fluke and then train the CNN model. The extracted features are then compared against those of the reference set of previously known humpback whales for similarity. The ArcFace algorithm~\cite{Deng_2021} uses fluke shape, edge pattern and surface markings to locate images in a hypersphere space in which proximity becomes the similarity measure. For testing, they evaluated the complete dataset of 15494 humpback whale individuals, considering the 33321 whale fluke images used in the Kaggle competition. The authors argue that CNN-based image recognition is much faster and more accurate than traditional manual matching, reducing the time for identifying a picture by over 98\% and decreasing the error rate from approximately 6--9\% to 1--3\%. To the best of our knowledge, there is no automatic photo-ID system for beluga whale re-identification available in the literature, as individual belugas often lack unique or permanent pigmentation.
In addition, they also do not have a dorsal fin, a feature common to other ice-inhabiting whales (\emph{e.g.}, humpback whales). Although photo-ID studies of beluga whales are being conducted in Cook Inlet, Alaska~\cite{Mcguire}, the White Sea, Russia~\cite{popu}, and the St. Lawrence Estuary, Canada~\cite{Michaud2014}, a standardized and public database is not yet available for this task to be investigated by the computer vision scientific community. \subsection{Membership Inference Attack} With respect to privacy, in addition to the sensitive inferences that can be drawn from the data itself, it is also important to understand how much the output of the learning algorithm itself (\emph{e.g.}, the model) leaks information about the input data it was trained on. For instance, privacy attacks (also called inference attacks) have been developed against machine learning models to reconstruct the training data from the model or to predict whether the profile of a particular individual known to the adversary was in the training dataset~\cite{DBLP:conf/sp/ShokriSSS17}. Generally, this membership inference is deemed problematic if revealing that a profile belongs to this database enables the adversary to learn sensitive information about this individual (\emph{e.g.}, the training set is composed of individuals suffering from a particular disease or from particularly vulnerable subgroups). More precisely, in a MIA an adversary that knows the particular profile of an individual tries to infer whether this profile was in the training dataset used to learn a particular model~\cite{Hu2022}. Generally, the adversary models considered in the MIA literature assume either black-box or white-box access to the model being attacked. In a black-box setting, the term oracle is sometimes used to refer to the access of the adversary, since he can only submit requests to the model and observe the model outputs (\emph{i.e.}, he does not have access to the structure of the model).
Such attacks need little information and as such are quite general and versatile, but at the same time they usually offer lower performance than attacks conducted in the white-box setting. In contrast, a white-box adversary is assumed to have (partial or full) knowledge of the model, such as its architecture, its parameters as well as its weights. The attacks conducted in this setting usually achieve better performance since they can be adapted to specific models and also have access to more information at inference time. Usually, the success of MIAs is impacted by model overfitting. Indeed, if the attacked model has overfitted the training data, it will behave quite differently when an example contained in the training set is submitted (\emph{e.g.}, the confidence of its prediction will be higher). This means that the success of MIAs can be decreased by employing mechanisms classically used in machine learning to reduce overfitting, as well as by using more training samples to avoid too precise a memorization. In contrast, in our case, we will exploit overfitting in a positive way as a manner to increase the success of the MIAs and thus the discrimination of known vs unknown belugas. Standard machine learning metrics such as precision, recall and F1-measure can be used to quantify the success of MIAs. However, they might be interpreted differently. For instance, in the attack context, one might want to have a high precision even if it means reducing recall (\emph{e.g.}, by tuning a confidence threshold). Indeed, realizing a MIA on a few individuals with high confidence can be considered a greater privacy breach than performing a MIA on a large number of individuals but with low confidence~\cite{carlini}. More precisely, the false positive rate (also sometimes called the false alarm rate) should be reduced as much as possible, making it a good indicator of attack performance.
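To make the evaluation criteria above concrete, the following is a minimal sketch (our own illustrative code, not part of the systems discussed) of how precision, recall (TPR) and the false positive rate of a threshold-based membership inference attack can be computed from attack scores:

```python
import numpy as np

def mia_metrics(scores, labels, threshold):
    """Evaluate a membership inference attack at a given decision threshold.

    scores: attack scores (higher means 'more likely a member').
    labels: ground truth membership, 1 = member, 0 = non-member.
    """
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    pred = (scores >= threshold).astype(int)  # 1 = predicted member

    tp = int(np.sum((pred == 1) & (labels == 1)))
    fp = int(np.sum((pred == 1) & (labels == 0)))
    fn = int(np.sum((pred == 0) & (labels == 1)))
    tn = int(np.sum((pred == 0) & (labels == 0)))

    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0  # true positive rate
    fpr = fp / (fp + tn) if fp + tn else 0.0     # false alarm rate
    return {"precision": precision, "recall": recall, "fpr": fpr}
```

Sweeping the threshold over a range of values traces out the TPR/FPR and precision/recall trade-offs; as discussed above, raising the threshold typically trades recall for precision and a lower false positive rate.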
\section{Proposed Approach} \label{sect_approach} In this section, we first describe the generic beluga whale re-ID pipeline before detailing the training process for the attack model as well as the different MIAs from the state of the art that we have used to implement it. Finally, we also describe our novel approach to perform a MIA based on an ensemble strategy. \subsection{Beluga Whale Identification Pipeline} The general pipeline for beluga whale identification is illustrated in Figure~\ref{inferenceAttack}. It consists of two phases: (1) discrimination, for distinguishing between known and unknown whale individuals through a MIA, and (2) re-identification (re-ID). \begin{figure}[h!] \centering \includegraphics[width=\linewidth]{samples/figures/proposed.png} \caption{Overview of the whale identification pipeline.} \label{inferenceAttack} \end{figure} \textbf{Discrimination.} The attack model trained to conduct the MIA is used to determine whether the target sample $x$ of a beluga is part of the training set of the target model. We describe how to build the attack model in Section~\ref{attackModeltrainSection}. \textbf{Re-ID.} Once it has been determined by the attack model whether the target sample $x$ corresponds to a known (\emph{i.e.}, within the training set) or unknown beluga (\emph{i.e.}, out of the training set), known examples can be immediately classified through a standard classifier. Otherwise, for unknown belugas, the recognition has to be done manually through side information. For instance, this side information could be acquired through experts to confirm whether that individual is indeed new, and otherwise to decide to which class the unknown individual will be assigned. \subsection{Training of the Attack Model} \label{attackModeltrainSection} We assume that the adversary has access to a local dataset, which we call the attack dataset $D^s$.
The attack dataset comes from the same distribution (\emph{i.e.}, the same population or the same individuals) as the one used to train the target model. To infer whether the sample $x$ is in the training set of the target model, our core idea is to train an attack model $M_{attack}$ that can detect whether a particular sample corresponds to the picture of a beluga that was part of the training set or not. Figure~\ref{trainAttack} provides a high-level overview of the training process of the attack model, which we describe in detail hereafter. \begin{figure*}[h!] \centering \includegraphics[width=0.85\textwidth]{samples/figures/training.png} \caption{Attack model training procedure.} \label{trainAttack} \end{figure*} \textbf{Training process.} $D_{train}$ is the training dataset, which is used for training the target model using the learning algorithm $A$. $D^{s}$ is the attack dataset that is disjoint from the training dataset $D_{train}$ and contains data points coming from the same data distribution as the training members in $D_{train}$. The adversary first trains the attack model using the attack training dataset $D^{s}$ and the learning algorithm $A$, in such a way that the attack model mimics the behavior of the target model. $T$ is the attack test dataset, which is assumed to be disjoint from both $D^{s}$ and $D_{train}$, in the sense that it is composed of non-member individuals never seen in $D^{s}$ or $D_{train}$. When the training of the attack model is completed, the adversary queries the attack model using the attack training and test datasets to obtain the outputted prediction vector for each data point. More formally, we denote a prediction vector as $\hat{p}(y\, |\, x)$, in which ``members'' are labelled as 1 and ``non-members'' as 0.
Then, each ``member'' dataset and ``non-member'' dataset are represented as follows: \begin{equation} \label{membAndnonMemb1} P_{i}^{m} = \left\{\hat{p}(y \, | \, x),1\right\} \end{equation} \begin{equation} \label{membAndnonMemb2} P_{i}^{n} = \left\{\hat{p}(y \, | \, x),0\right\} \end{equation} More precisely, the prediction vector of each point $i$ in the attack training dataset is labeled ``member'' $P^{m}_{1}, \ldots, P^{m}_{k}$ and the prediction vector of each point $i$ in the attack test dataset is labeled ``non-member'' $P^{n}_{1}, \ldots, P^{n}_{k}$. Thus, the adversary can build $k$ ``member'' data points and $k$ ``non-member'' data points, which jointly form the training dataset of the attack model. Finally, the problem of recognizing the complex relationship between members and non-members is converted into a binary classification problem. Once trained, the attack model $M_{attack}$ can be used by the adversary to implement MIAs on arbitrary data points. The attack model takes the prediction vector $\hat{p}(y\, |\, x)$ of the target model for a data point $x$ as input and outputs whether this point is in $D_{train}$ of the target model or not. \subsection{Attack Model Design} To instantiate the MIA, we have applied three state-of-the-art MIAs from the literature that leverage different types of information outputted by the target model (namely prediction confidence, ground truth label or both of them) and different attack strategies (\emph{i.e.}, neural network-based, metric-based and query-based), as shown in Table~\ref{miafeature}. These MIAs all consider an adversary with black-box access to the model and thus are quite generic. Note that we did not consider MIAs with white-box access to the model~\cite{white}; we leave their investigation as future work to further increase the attack success of beluga discrimination.
In security, white-box access is usually considered less realistic than black-box access in many real-life situations (\emph{e.g.}, the use of machine learning as a service). However, this is not the case in our setting, as we fully control the implementation pipeline of the MIA. \begin{table}[h!] \centering \caption{Summary of MIAs investigated. In the features column, C denotes the use of confidence while L corresponds to the use of label.} \label{miafeature} \begin{tabular}{|c|c|c|} \hline MIA & Features & Attack strategy \\ \hline Yeom \emph{et al.}~\cite{yeom2018} & C, L & Metric-based \\ \hline Salem \emph{et al.}~\cite{salem2019} & C & Neural network-based \\ \hline Label-only~\cite{Choo2020} & L & Query-based \\ \hline \end{tabular} \end{table} \textbf{Yeom \emph{et al.}~\cite{yeom2018}} In contrast to neural network-based attacks such as Salem \emph{et al.}~\cite{salem2019}, explained in the next paragraph, metric-based attacks leverage a certain metric and a predefined threshold over this metric (computed over the attack dataset by querying the attack model) to differentiate members and non-members. More precisely, the attack of Yeom \emph{et al.}~\cite{yeom2018} uses the prediction confidence of the correct class, under the assumption that the confidence should be high for member samples since the target model is optimized with this objective. The $Metric_{conf}$ attack is defined as follows: \begin{equation} \label{yeomloss} Metric_{conf} (\hat{p}(y \, | \, x), \, y) = \mathbb{1}\left(\mathcal{L}(\hat{p}(y \, | \, x); \, y) \leq \tau\right), \end{equation} in which $\mathcal{L}$ is the cross-entropy loss function, $\tau$ is a preset threshold and $\mathbb{1}$ denotes the indicator function. An adversary infers that an input record is a member if its prediction loss is smaller than the average loss of all training members; otherwise, it is inferred to be a non-member. The intuition for this attack is that the target model has been learnt on its training members by minimizing their prediction loss.
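As an illustration, this loss-threshold rule can be sketched in a few lines of Python. This is our own minimal sketch, assuming the target model's predicted probability for the ground-truth class is available; the function names are illustrative and not taken from the authors' implementation:

```python
import numpy as np

def cross_entropy(p_true):
    # Per-sample cross-entropy loss, given the probability the target
    # model assigns to the true class (clipped to avoid log(0)).
    return -np.log(np.clip(p_true, 1e-12, 1.0))

def loss_threshold_attack(p_true, tau):
    """Predict 'member' (1) when the prediction loss is at most the threshold tau."""
    return (cross_entropy(np.asarray(p_true, dtype=float)) <= tau).astype(int)
```

In the attack of Yeom \emph{et al.}, $\tau$ would be set to the average loss over the training members; a sample whose loss falls below this threshold is declared a member.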
Thus, the prediction loss of a training record should be smaller than the prediction loss of a test record. The threshold is an input hyperparameter of the attack and as such it can be learned, for instance by using an evaluation set. To identify the threshold yielding optimal accuracy, we use the evaluation set from the target set and treat one half as members, with the rest as non-members. We compute the AUC (Area Under the Curve) and precision/recall metrics, sweep over a range of values for the threshold $\tau$ and measure the resulting attack's FPR/TPR and precision/recall trade-offs. We can then choose the best threshold $\tau$ based on the membership inference accuracy for this simulated setup. \textbf{Salem \emph{et al.}~\cite{salem2019}} This attack takes the prediction vector confidences as the input to the attack model. The adversary derives the training dataset of the attack model by querying the attack model with the attack training dataset (labeled as members) and the attack testing dataset (labeled as non-members). With the attack training dataset, the adversary can learn the attack model, which is a multi-layer perceptron (MLP). A traditional 3-layer MLP with 64, 32 and 2 hidden neurons per layer is used for the neural network-based attacks. We use the same hyperparameters as in the overfitting setting described in Section~\ref{sec2}. Once the attack model is learnt, the adversary can perform the attack over the target model to differentiate members and non-members with respect to $D_{train}$. \textbf{Label-only~\cite{Choo2020}.} Rather than using confidence predictions, query-based attacks restrict the attack to using only the predicted labels from the target model. Label-only attacks determine membership status by sending multiple queries to the target model, which are concretely generated by adding adversarial perturbations to the input sample until the predicted label has been changed.
The attack measures the magnitude of the perturbation and considers the data sample as a member if this magnitude is larger than a predefined threshold. More formally, given some estimate $dist(x, y)$ of a point's $\ell_2$ distance to the model's decision boundary, the attack predicts $x$ to be a member if $dist(x, y) \geq \tau$ for some threshold $\tau$. To estimate the distance, the attack starts from a random misclassified point. Then, a ``walk'' along the boundary is performed while minimizing the distance to $x$ using HopSkipJump~\cite{chennmia}, which closely approximates stronger white-box attacks. HopSkipJump is initialized with a misclassified sample blended with uniform noise, which is then iteratively moved along the decision boundary to get closer to the attacked image. For a given target model, the attack assumes that the robustness to adversarial perturbations is higher for a member sample than for a non-member one, as the former was involved in the training of the model. \subsection{Ensemble Membership Inference Attack} \label{sec_ensemble_mia} In this section, we propose a novel way to perform a MIA, illustrated in Figure~\ref{EnsembleAttack}, which we coin an ensemble membership inference attack. In an ensemble MIA, instead of a single one, $l$ attack models are built using different subsets of the data. More precisely, the attack model $M_{attack}$ is not trained directly on the whole dataset; rather, this dataset is split into disjoint subsets to create several attack models ($M_{attack_{1}}$ to $M_{attack_{l}}$). For instance, when the dataset contains 60 individuals, it is split into 6 subsets of 10 individuals each to train $M_{attack_{1}}$ to $M_{attack_{6}}$. \begin{figure}[h!]
\centering \includegraphics[width=\linewidth]{samples/figures/proposedEnsemble.png} \caption{Ensemble MIA.} \label{EnsembleAttack} \end{figure} At discrimination time, the ensemble MIA generates $l$ predicted outputs for each new sample $x$, which are combined using a combination rule $E$. The combination rule generates the final output, ``member'' or ``non-member''. In this paper, we have used the simple combination rule $E$ by which an input $x$ is labelled as ``member'' if at least one $M_{attack}$ prediction output assigned it as a ``member''; otherwise, it is considered to be a ``non-member''. Our rationale behind the design of the ensemble MIA is that training an attack model with fewer individuals makes the classifier better at discriminating between similar classes. Indeed, smaller subsets decrease the complexity of the discrimination, as models trained on them usually assign a higher prediction score to individuals seen during training than models built on a bigger dataset. Moreover, our experiments confirmed that the attack performance may vary across different individuals due to the different overfitting levels for each set of classes (see Table~\ref{Overffitingclasses} in Section~\ref{reliabilityM}). \section{Experimental Setting} \label{sect_experiments} In this section, we present the experimental setting used to validate our approach of using MIAs for beluga whale discrimination. More precisely, we first describe the datasets used in the experiments (Section~\ref{datasets1}), followed by the experimental configuration (Section~\ref{sec_exp_configuration}) and finally the target and attack models' architectures and training settings (Section~\ref{sec2}). \subsection{Datasets} \label{datasets1} The experiments were conducted on three datasets with distinct visual characteristics: GREMM, Humpback and NOAA.
\begin{itemize} \item The GREMM dataset is made of photos from hand-held cameras taken during photo-identification surveys conducted from June to October between 1989 and 2007 as part of an ongoing long-term study of the social organization of the St. Lawrence Estuary beluga population in Québec (Canada). This dataset covers 983 beluga individuals and thousands of side-view beluga images. However, the number of pictures per individual varies significantly, with many belugas having only a small number of pictures. Thus, we selected a part of this dataset that contains 3402 images distributed across 180 individuals. In addition, as a pre-processing step, we used the method previously proposed in~\cite{Araujo2022} to detect and crop the images. \item The Humpback dataset was derived from the Happywhale - Whale and Dolphin Identification competition dataset~\cite{humpdata}, which originally contains images of over 15000 unique individual marine mammals from 30 different species collected from 28 different research organizations. We selected only the humpback species to evaluate our approach because it is known to be among the easiest species to recognize due to the very distinctive patterns on the flukes. For example, the first-ranked solution~\cite{1placeKagl} achieved 0.973 on the private leaderboard of a Kaggle competition dedicated to identifying humpback whales~\cite{Simoes2020}. The Humpback dataset contains 270 individuals with a total of 4814 images. \item Finally, the last dataset has been collected by the US federal agency National Oceanic and Atmospheric Administration (NOAA), which monitors five different populations of belugas across Alaskan waters, with a focus on the Cook Inlet belugas. More precisely, the NOAA dataset is composed of 380 individuals with 5158 images in total, corresponding to top views of beluga whales.
Note that the top-view pictures that compose the NOAA dataset are considered more informative than the pictures of beluga flanks taken from the side of the animals that compose the GREMM dataset. \end{itemize} To summarize, Table~\ref{tableDataset} reports the number of individuals and sample pictures in each of these three datasets while their visual characteristics are illustrated in Figure~\ref{datasetImage}. \begin{table}[h!] \centering \caption{Statistics of the datasets.} \label{tableDataset} \begin{tabular}{|c|c|c|} \hline Dataset & Nb of individuals & Nb of samples \\ \hline GREMM & 180 & 3402 \\ \hline Humpback & 270 & 4814 \\ \hline NOAA & 380 & 5158 \\ \hline \end{tabular} \end{table} \begin{figure}[h!] \centering \includegraphics[width=\linewidth]{samples/figures/datasetImage.png} \caption{Visual characteristics of the datasets. (A) GREMM, (B) Humpback and (C) NOAA.} \label{datasetImage} \end{figure} \subsection{Experimental Configuration} \label{sec_exp_configuration} Each dataset used to train MIAs is composed of individual whales, each of them having many pictures associated with it. To assess the MIA for beluga discrimination, we have sampled three disjoint sub-datasets with an equal or approximately equal number of identities. However, the number of images per individual (\emph{i.e.}, an individual beluga is an identity) varies widely from one beluga to another. Therefore, the sampling cannot guarantee that each class has an equal number of data points in each sub-dataset unless the number of data points is increased through augmentation. The augmentation process includes operations such as rotation and brightness variation. More precisely, augmented images are obtained by rotating the original image by 90, 180, 270 and 330 degrees clockwise and applying a random brightness factor between 0.0 and 1.0~\cite{augbloice}.
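The augmentation step can be sketched as follows (a NumPy-based illustration with arrays standing in for images; the helper names are ours, not the authors' pipeline):

```python
# Sketch of the augmentation described above: clockwise rotations plus a
# random brightness factor in [0.0, 1.0]. NumPy arrays stand in for images;
# names are illustrative, not the authors' code.
import numpy as np

ROTATIONS = [90, 180, 270, 330]  # degrees, clockwise

def augment(image: np.ndarray, rng: np.random.Generator) -> list[np.ndarray]:
    """Return one augmented copy of an HxWxC image per rotation angle."""
    augmented = []
    for angle in ROTATIONS:
        if angle % 90 == 0:
            # np.rot90 rotates counter-clockwise, so negate for clockwise.
            rotated = np.rot90(image, k=-(angle // 90))
        else:
            # Non-right angles (e.g. 330 degrees) need interpolation, e.g.
            # scipy.ndimage.rotate; that detail is skipped in this sketch.
            rotated = image
        factor = rng.uniform(0.0, 1.0)  # 0.0 = black, 1.0 = original
        augmented.append(np.clip(rotated * factor, 0.0, 1.0))
    return augmented
```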
Based on this observation, we construct three disjoint augmented subsets to which the identities of the belugas are randomly assigned: the target set, the attack set and the evaluation set (see Table~\ref{tableXXX}). After augmentation of the individuals with fewer samples, this leads to 75 images per ID. Thus, each of these subsets contains approximately $\frac{1}{3}$ of the identities ($ID^1$, $ID^2$ and $ID^3$). More precisely, the subsets are composed of 60, 90 and 127 individuals, corresponding to 1500, 2250 and 3175 images per subset, for the GREMM, Humpback and NOAA datasets respectively. In total, the evaluation set ($ID^1$ and $ID^3$) is composed of 3000, 4500 and 6350 samples for the GREMM, Humpback and NOAA datasets respectively.
\begin{table}[h!]
\caption{Summary of experimental configuration datasets.}\label{tableXXX}
\begin{adjustbox}{width=0.47\textwidth}
\small
\begin{tabular}{cccccccc}
\hline
\multirow{3}{*}{Dataset} & \multirow{3}{*}{per ID} & \multicolumn{3}{c}{Target set} & \multicolumn{2}{c}{Attack set} & Evaluation \\
 & & train & val & test & member & non-member & set \\
 & & $ID^1$ & $ID^1$ & $ID^1$ & $ID^1$ & $ID^2$ & $ID^{1,3}$ \\
\hline
GREMM & 25 & 60 & 60 & 60 & 60 & 60 & 120 \\
Humpback & 25 & 90 & 90 & 90 & 90 & 90 & 180 \\
NOAA & 25 & 127 & 127 & 127 & 127 & 127 & 254 \\
\hline
\end{tabular}
\end{adjustbox}
\end{table}
As seen in Table~\ref{tableXXX}, the target set is split into a training, validation and test set, each being composed of approximately one third of the pictures for each ID.
The target training set is then used to train the target model while the validation and test sets are used respectively to validate the hyperparameters and to assess the accuracy of this model. Second, the attack set contains the examples of non-members that are required to train the attack model. In addition, the pictures of the target validation set are used as representatives of members. Finally, the evaluation set contains non-members whose identities are different from the ones used to build the attack model. The objective here is to assess the generalization power of the attack. Indeed, as the identities of the belugas in this set are different from the ones in the attack set, we avoid the situation in which the attack model overfits the attack set with respect to non-members. Here, the target test set provides the examples of members for evaluating the success of the attack model. We balance each subset with the same number of individuals and samples to ensure that we can use the target validation set as members in our attack set and the target test set as members in the evaluation set. Figure~\ref{ExplaindatasetImage} provides an example of the experimental configuration for the GREMM dataset, which contains 180 individuals. Here, $ID^1$, $ID^2$ and $ID^3$ represent subsets of pictures of whale individuals whose identities are totally different from one another. For instance, $ID^1$ contains individuals that belong to the same IDs as in the target set, but for which different samples (\emph{i.e.}, different pictures) are used to compose the training, validation and test sets of the target model. Thus, $ID^1$ individuals are considered as members in the attack set (blue stroke rectangle) and in the evaluation set (red dotted rectangle). In contrast, $ID^2$ and $ID^3$ are individuals totally unknown to the target model (\emph{i.e.}, non-members).
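The identity-disjoint partitioning described above can be sketched as follows (a simplified illustration with hypothetical helper names, not the authors' pipeline):

```python
# Sketch of the identity-disjoint split: identities are shuffled and divided
# into three equal groups (ID^1, ID^2, ID^3), and the pictures of each ID^1
# identity are further split into target train/val/test thirds.
# Names and proportions follow the paper's description; the helpers are
# illustrative, not the authors' code.
import random

def split_identities(ids, seed=42):
    """Split identity labels into three disjoint, (almost) equal groups."""
    ids = list(ids)
    random.Random(seed).shuffle(ids)
    third = len(ids) // 3
    return ids[:third], ids[third:2 * third], ids[2 * third:]

def split_pictures(pictures):
    """Split one identity's pictures into target train/val/test thirds."""
    third = len(pictures) // 3
    return pictures[:third], pictures[third:2 * third], pictures[2 * third:]
```

With 180 GREMM identities and 75 images per ID, this yields three groups of 60 identities and 25 pictures per ID in each of the target train, validation and test sets.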
In a nutshell, the non-members of $ID^2$ are used to train the attack model while the ones of $ID^3$ are used to evaluate the attack performance of the attack model on IDs never seen before. This same schema is applied to the Humpback and NOAA datasets, updating the numbers of individuals based on the information provided in Table~\ref{tableXXX}. \begin{figure}[h!] \centering \includegraphics[width=\linewidth]{samples/figures/ExplaindatasetImage.png} \caption{Dataset distribution. The main dataset is split into thirds of the individuals (\emph{e.g.}, 60 individuals in each subset: target set, attack set and evaluation set for the GREMM dataset). The arrows indicate that the same individuals are re-used from one dataset to another.} \label{ExplaindatasetImage} \end{figure} \subsection{Target and Attack Models} \label{sec2} We adopt two popular neural network architectures as our target model: ResNet50~\cite{resnet50} and DenseNet121~\cite{densenet121}. \begin{itemize} \item \emph{ResNet50}. The ResNet50 architecture contains 50 layers and uses a stack of three layers with 1$\times$1, 3$\times$3, and 1$\times$1 convolutions as the building residual block. The three-layer residual block is designed as a bottleneck to enhance computational efficiency, in which the 1$\times$1 layers are responsible for reducing and then restoring the dimensions, leaving the 3$\times$3 layer as a bottleneck with small input and output dimensions~\cite{resnet50}. Batch normalization (BN)~\cite{bn} is applied after each convolution and before the ReLU activation. In addition, global average pooling (GAP)~\cite{lin} is performed before the final fully connected layer ($fc$), whose size corresponds to the number of individuals of the respective dataset. After training, $fc$ outputs floating-point values, which correspond to the predicted result. \item \emph{DenseNet121}. The DenseNet architecture is designed around a simple connectivity pattern of dense blocks and transition layers.
A dense block is a module containing many layers connected densely with feature maps of the same size. In a dense block, each layer obtains additional inputs from all preceding layers and passes on its own feature maps to all subsequent layers. The transition layer links two neighboring dense blocks and reduces the size of the feature maps through pooling. In contrast with ResNet, which connects layers through element-level addition, layers in DenseNet are connected by concatenating them at the channel level. Similar to ResNet, DenseNet uses a composite of three consecutive operations for each convolution: $BN$+$ReLU$+$convolution$. \end{itemize} These target models were trained in two different settings. \begin{itemize} \item \emph{No-overfitting}. In this setting, the optimization algorithm of the CNNs is Stochastic Gradient Descent (SGD), with a learning rate of 0.0001 and a weight decay of 0.5. The batch size is set to 32, the number of training epochs to 200, and batch-norm and dropout (0.5) are used to reduce the overfitting level. \item \emph{Overfitting}. We use the same hyperparameter settings as in the no-overfitting setting, but we remove the batch-norm, weight decay and dropout techniques to ensure that the model overfits. \end{itemize} For the neural network-based (\emph{i.e.}, Salem \emph{et al.}) and metric-based (\emph{i.e.}, Yeom \emph{et al.}) MIAs, the attack model uses the same architecture and hyperparameter settings as the target model. For label-only attacks, we follow the implementation from the Adversarial Robustness Toolbox (ART)~\cite{art}, an open-source project that provides Python tools for developers to assess the robustness of machine learning models against security threats.
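The two training settings differ only in the regularization applied; they can be summarized by the following plain-Python sketch (the dictionary structure is our illustration, assuming a PyTorch-style training loop consumes it):

```python
# The two training settings described above, expressed as configuration
# dictionaries. The structure is illustrative, not the authors' code; a
# PyTorch-style trainer is assumed to consume these values.

BASE = {
    "optimizer": "SGD",
    "learning_rate": 1e-4,
    "batch_size": 32,
    "epochs": 200,
}

def make_config(overfitting: bool) -> dict:
    """Return the hyperparameters for one of the two training settings."""
    cfg = dict(BASE)
    if overfitting:
        # Remove every regularizer so the model memorizes the training set.
        cfg.update(weight_decay=0.0, batch_norm=False, dropout=None)
    else:
        cfg.update(weight_decay=0.5, batch_norm=True, dropout=0.5)
    return cfg
```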
Similarly to previous works in the literature~\cite{me1,salem2019,DBLP:conf/sp/ShokriSSS17,Choo2020}, we evaluate the attack performance using accuracy (\emph{i.e.}, the attack success rate) as the main evaluation metric, both for the original classification tasks and for the MIAs. We also evaluate the False Positive Rate (FPR), being aware that the attack accuracy alone is not a sufficient measure of the success of the attack in open-set problems~\cite{carlini}. As mentioned previously, the attack model is trained with the same architecture as the target model. However, in contrast to the standard setting of most MIAs in the literature~\cite{DBLP:conf/sp/ShokriSSS17,salem2019,Choo2020}, we assume that the non-members in the attack dataset come from a different distribution than the target dataset (\emph{i.e.}, individuals never seen before in the target dataset are part of the attack dataset, as in $ID^2$). \section{Results} \label{sect_result} In this section, we provide the results of our experiments and discuss the main findings that we can draw from them. More precisely, in Section~\ref{comparisonLiteratureAttacks}, we compare the attack performance of the proposed MIA algorithms on the different whale datasets. Then, in Section~\ref{generalization}, we discuss how the choice of the attack dataset and of the attack model's architecture impacts the generalization power of the MIA. Afterwards, in Section~\ref{factor_overfittion_sampling_cnn}, we explore the influence of different factors, such as overfitting, on the attack's performance. Finally, in Section~\ref{reliabilityM}, we present the performance of our novel ensemble MIA in different real-world scenarios. \subsection{Evaluation of MIAs} \label{comparisonLiteratureAttacks} The attack performance of the MIAs was tested against different architectures for the target model, namely ResNet50 and DenseNet121. Figure~\ref{Attackperformance} displays the performance of the different MIAs on the different datasets.
Overall, it can be observed that ResNet50+LabelOnly performs the best while ResNet50+Yeom and ResNet50+Salem have a lower performance. For example, on the NOAA dataset, the attack accuracy is 0.976 for ResNet50+LabelOnly against 0.913 for ResNet50+Yeom. This is expected, as the signal used by ResNet50+Yeom, which considers both the prediction confidence and the label, is relatively coarse: many non-members are misclassified as members whenever their predicted label is correct. In contrast, the LabelOnly MIA provides a finer-grained metric, as it relies on the magnitude of the perturbation needed to change the predicted label, which helps to further distinguish between members and non-members. However, LabelOnly requires a larger query budget and higher computation costs than the other attacks, as it needs to query the target model multiple times and craft the adversarial perturbation that changes the predicted label. Table~\ref{timeconsum} presents a comparison of the training and discrimination times for the investigated MIAs. More precisely, the training time indicates the time required to train the attack model while the discrimination time refers to the average computational time for a membership inference on a single data point. Nonetheless, metric-based MIAs can often achieve a performance that is not too far from the best attack. For instance, on the GREMM dataset, the attack performance of ResNet50+LabelOnly is 0.744 against 0.695 for ResNet50+Yeom. Therefore, if the adversary has limited computational resources, metric-based MIAs may be a more appropriate choice than the LabelOnly MIA. \begin{table}[h!] \centering \caption{Computational time for training an attack model and running a single test image (``discrimination'').
The numbers were obtained on the biggest dataset (NOAA), which contains 6350 samples in the attack set and 6350 samples in the evaluation set (\emph{i.e.}, members and non-members).}\label{timeconsum}
\begin{tabular}{@{}|c|c|c|@{}}
\hline
\multirow{2}{*}{Attack} & Training Time & Discrimination Time \\
 & (hr) & (s) \\
\hline
Yeom \emph{et al.}~\cite{yeom2018} & 4.67 & 0.66 \\
Salem \emph{et al.}~\cite{salem2019} & 4.19 & 0.62 \\
Label-only~\cite{Choo2020} & 6.41 & 1.57 \\
\hline
\end{tabular}
\end{table} In addition, it seems that the attack performance can be improved by changing the model's architecture. For instance, on the Humpback dataset, the attack performance of DenseNet121+LabelOnly is 0.942 while an accuracy of 0.976 is achieved when the ResNet50 architecture is used. Thus, a more elaborate architecture is likely to boost the ability of the attack model to differentiate between members and non-members. This is in line with the findings of recent studies in the literature~\cite{DBLP:conf/sp/ShokriSSS17, onThediff,8844607}, which have shown that increasing the complexity of the attacked model is likely to increase the success of the MIA due to the increased capacity of the model to memorize the training set. \begin{figure}[h!] \centering \includegraphics[width=\linewidth]{samples/figures/Attackperformance.png} \caption{Accuracy of the different MIAs for different datasets and target model architectures. As the evaluation set is balanced (\emph{i.e.}, composed of exactly half members and half non-members), a naïve attack model that produces a random prediction would have an accuracy of 0.5.} \label{Attackperformance} \end{figure} Finally, we can observe that the success of the attacks is significantly lower on the GREMM dataset. Our intuition is that the hardness of discriminating beluga whales in GREMM comes from their lack of discriminative characteristics.
In particular, beluga individuals in the GREMM dataset are very similar to each other, leading the attack model to misclassify non-members as members of the target model. In contrast, individuals from the Humpback and NOAA datasets normally have very distinctive features (\emph{e.g.}, detailed features present in fins, marks and shapes). For instance, as seen in Figure~\ref{datasetImage}, in comparison with Humpback and NOAA individuals, GREMM individuals have no marks on the dorsal ridge nor a detailed tail, making beluga whales the hardest species to attack. \subsection{Generalization of the Attack} \label{generalization} Most previous works in the literature~\cite{me1,DBLP:conf/sp/ShokriSSS17} focused on the setting in which the adversary trains an attack model (of the same architecture as the target model) on an attack dataset that comes from the same distribution (\emph{i.e.}, members) as the target dataset. Generally, these works generate the target dataset and the attack dataset coming from the ``same distribution'' by splitting the original dataset into two parts. We depart from this assumption by creating an attack dataset composed half of members and half of non-members that are totally different from the target dataset ($ID^2$ in Figure~\ref{ExplaindatasetImage}), in the sense that even for the members the pictures used are different from the ones used in the target dataset. More precisely, for the identity of a particular beluga contained in the target set, we have several pictures associated with it. When we build the different datasets as shown in Figure~\ref{ExplaindatasetImage}, we make sure that the pictures used for training the target model are different from the ones used for building the attack model or the evaluation dataset. In this situation, a successful attack means that the MIA is able to generalize to new pictures of members as well as to new non-members. In particular, we want to guarantee that the attack model has learned generic features rather than simply distinguishing in an overfitted manner the members and non-members of the attack set. In this situation, even when new individuals emerge over time, the attack model will be able to identify whether an individual is known by the target model or not.
In the following, we focus on the LabelOnly attack with the ResNet50 architecture, which has shown the best performance in the experiments conducted and can handle the case in which the target and attack datasets have different visual characteristics (\emph{e.g.}, beluga dorsal ridges in the GREMM dataset, humpback tails in Humpback and beluga top views in NOAA). First, we analyze the situation in which we relax the assumptions of a same-distribution attack dataset and of a same-architecture attack model. To do so, we evaluate whether a MIA is still effective when the attack dataset is composed of non-members issued from a different dataset. \textbf{Attack performance with an attack dataset coming from a different distribution.} So far, previous works~\cite{salem2019,DBLP:conf/sp/ShokriSSS17} have only considered the ``same distribution'' setting in which the attack dataset is based on images sampled from the same dataset. However, in reality, when constructing an attack dataset for wild individuals, it might be the case that the system will face unknown individuals that emerge over time. Figure~\ref{Table_sets} shows the MIA performance when the attack dataset contains non-members coming from a different distribution than the target dataset. In this situation, we can observe that the attack performance remains almost the same. For instance, when the target and attack datasets both originate from GREMM, the attack performance is 0.744 while the attack is still effective (0.719 and 0.721) when the attack dataset originates respectively from Humpback and NOAA. This observation indicates that we can relax the assumption of a same-distribution attack dataset. In practice, this can have a big impact in the situation in which the target dataset is of limited size and we do not have the liberty to sacrifice some of its data to build the attack dataset. \begin{figure}[h!]
\centering \includegraphics[width=\linewidth]{samples/figures/Table_sets.png} \caption{Performance of the membership inference attack (LabelOnly) when the attack dataset comes from a different distribution than the target dataset.} \label{Table_sets} \end{figure} The results obtained demonstrate that even if we add new individuals that have never been seen before by the attack model (\emph{e.g.}, from other dataset distributions), the attacks are still effective. For instance, all attacks reach over 0.922 accuracy when the target dataset is Humpback and the attack dataset is GREMM or NOAA, even in the cases in which the attack and target models have different architectures. To the best of our knowledge, we are the first to quantify the generalization power of MIAs with an attack dataset that is composed half of known members from the target set and half of totally unknown non-members (\emph{i.e.}, from a different distribution). \textbf{Attack performance with a different model architecture.} Figure~\ref{MixArchicture} shows that the attacks are still effective even when the target and attack models' architectures are different. For instance, on the Humpback dataset (Figure~\ref{MixArchicture}b), the attack performance is 0.976 when ResNet50 is the architecture of both the target and attack models, and it decreases only to 0.962 when the attack model's architecture is changed to DenseNet121. This observation hints that we can relax the assumption that the attack model should necessarily have the same architecture as the target model. \begin{figure}[h!]
\centering \includegraphics[width=\linewidth]{samples/figures/MixArchicture.png} \caption{Performance of the membership inference attack (LabelOnly) when the attack model has a different architecture than the target model.} \label{MixArchicture} \end{figure} \subsection{MIAs Influence Factors} \label{factor_overfittion_sampling_cnn} This section explores the factors that influence the success of MIAs in our setting. To do so, we study how factors such as the overfitting level and the cross-entropy distance between distributions correlate with the attack performance. During our evaluation, we focus on the ResNet50+Salem, ResNet50+Yeom and ResNet50+LabelOnly attacks, as the first relies only on confidence information while the latter two also exploit the ground-truth label information. \textbf{Difference between overfitting and no-overfitting.} The traditional way of training machine learning models normally aims at avoiding the overfitting phenomenon~\cite{avoidover, RAVOOR2020100289}. Indeed, the main concern about overfitting is that the model performs well on the training data but generalizes poorly to unseen samples (\emph{i.e.}, the test set). In the privacy domain, overfitting has also been shown to make the model more vulnerable to privacy attacks as it results in the model memorizing more information about the training set~\cite{8844607, DBLP:conf/sp/ShokriSSS17}. In the following, we investigate how overfitting affects the performance of MIAs and more precisely whether overfitted models can more easily discriminate between known and unknown individuals. When training the target models, we considered two different settings: no-overfitting and overfitting, as described in Section~\ref{sec2}. The results of the experiments are summarized in Figure~\ref{OVERxNOover}. \begin{figure}[h!]
\centering \includegraphics[width=\linewidth]{samples/figures/OVERxNOover.png} \caption{Success of MIAs in the overfitting vs no-overfitting settings. Note that we average the attack performance under different attacks for each dataset and show the standard deviations.} \label{OVERxNOover} \end{figure} As expected, models trained with overfitting display a higher vulnerability against MIAs. For instance, on GREMM, the best average attack accuracy against the original model (trained with overfitting) was 0.706 while it was only 0.539 with no-overfitting, close to the 0.5 accuracy of a baseline random prediction. In terms of the utility of the target model with respect to MIAs, overfitting improves the MIA's performance in all cases (\emph{i.e.}, for the different datasets and model architectures). Thus, as expected, overfitting is effective at increasing the leakage of information and can be leveraged to discriminate more efficiently between members and non-members. \textbf{Impact of the overfitting level.} As seen previously, the attack performance varies with the dataset and model considered. Previous works~\cite{yeom2018,overfff,DBLP:conf/sp/ShokriSSS17} have also explored how the level of overfitting impacts the success of privacy attacks. In a nutshell, the overfitting level of a given model can be defined by subtracting the testing accuracy from the training accuracy. We report the training/testing accuracies on the classification tasks for overfitted and non-overfitted models in Table~\ref{NoWithoverfitted}. \begin{table}[h!] \centering \caption{Performance of overfitting vs non-overfitting models on the original classification tasks for the three datasets. Both the training accuracy and the testing accuracy (in parenthesis) are reported for the different model architectures.
}\label{NoWithoverfitted} \begin{tabular}{@{}cccccccccccc@{}} \hline \multirow{2}{*}{\rotatebox[origin=r]{90}{Data}} & \multicolumn{2}{c}{ResNet50} && \multicolumn{2}{c}{DenseNet121} \\ \cline{2-3} \cline{5-6} & \multirow{2}{*}{Overfitted} & \multirow{2}{*}{No} & & \multirow{2}{*}{Overfitted} & \multirow{2}{*}{No} \\\\ \hline \multirow{4}{*}{\rotatebox[origin=c]{90}{GREMM}} \\ \\ & 1.000 (0.282) & 0.746 (0.406) & & 1.000 (0.222) & 0.746 (0.356) \\ \\ \hline \multirow{4}{*}{\rotatebox[origin=c]{90}{Humpback}} \\ \\ & 1.000 (0.382) & 0.893 (0.835) & & 1.000 (0.341) & 0.893 (0.751) \\ \\ \hline \multirow{3}{*}{\rotatebox[origin=c]{90}{NOAA}} & \multirow{3}{*}{1.000 (0.371)} & \multirow{3}{*}{0.872 (0.797)}& & \multirow{3}{*}{1.000 (0.357)} & \multirow{3}{*}{0.872 (0.737)} \\ \\ \\ \hline \end{tabular} \end{table} Figure~\ref{over_distance} shows the correlation of the overfitting level with the attack performance. In particular, the MIA vulnerability is associated with the increase of the overfitting level. For example, in Figure~\ref{over_distance}a, the overfitting level goes from 0 to 0.61 when the target model's training epochs range from 0 to 80, which results in the attack success rate of the ResNet50+LabelOnly attack varying from 0.55 to 0.69. This observation highlights the fact that the overfitting level contributes to the vulnerability of a model to MIAs. However, an unexpected outcome is that the attack performance is still increasing when the overfitting level stabilizes. As shown in Figure~\ref{over_distance}, when the overfitting level is around 0.6 (which corresponds to epochs ranging from 80 to 200), the attack performance still improves with the increase in the number of epochs. This shows that the overfitting level is not the only factor related to MIA vulnerability. 
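The overfitting level defined above can be computed directly from the reported accuracies. A minimal illustrative sketch, using the GREMM/ResNet50 entries from the table above:

```python
# A minimal sketch: the overfitting level is the training accuracy
# minus the testing accuracy of the target model.
def overfitting_level(train_acc: float, test_acc: float) -> float:
    return train_acc - test_acc

# Illustrative values from the GREMM/ResNet50 entries of the table:
# overfitted model: 1.000 train / 0.282 test; non-overfitted: 0.746 / 0.406.
overfitted = overfitting_level(1.000, 0.282)
non_overfitted = overfitting_level(0.746, 0.406)
```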
To address this issue, we additionally investigated the correlation between the distance in terms of cross-entropy between the distributions for members and non-members and the vulnerability of the model to MIAs. \begin{figure}[h!] \centering \includegraphics[width=\linewidth]{samples/figures/over_distance.png} \caption{The distance in terms of cross-entropy and attack performance against the target model ResNet50 on the GREMM dataset under different numbers of epochs for model training.} \label{over_distance} \end{figure} \textbf{Kullback–Leibler divergence.} We use the Kullback–Leibler divergence (KL divergence)~\cite{kl} to measure the distance between the distributions of members and non-members, computed from the cross-entropy of each sample. The KL divergence is a widely used metric to measure the distance between two probability distributions, as defined in Equation~\ref{KLequation}. \begin{equation} \mathcal{L}_{KL}(P,Q)=\sum_{x}P(x)\log\frac{P(x)}{Q(x)}, \label{KLequation} \end{equation} in which $P$ and $Q$ are two probability distributions over events. The loss function includes both the prediction loss and the KL divergence loss. From this, we can compute the cross-entropy distributions for members and non-members and normalize them into probability distributions~\cite{kl}. The cross-entropy loss is one of the most common loss functions used for classification tasks, and it is defined as: \begin{equation} \mathcal{L}_{CE}(y,p)=-\sum_{i=1}^ky_{i}\log p_{i}, \label{cross} \end{equation} in which $p$ is a vector that represents the confidence predictions of the sample over different pre-defined classes, with $k$ being the total number of classes. $y_i$ equals 1 only if the sample belongs to class $i$ and 0 otherwise, while $p_{i}$ is the $i$-th element of the confidence posteriors. More precisely, we computed the KL-divergence of the normalized cross-entropy distributions between members and non-members. 
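To make this computation concrete, the sketch below computes the KL-divergence between the normalized cross-entropy distributions of members and non-members. The per-sample losses are simulated and the binning parameters are our own illustrative choices, not values from the paper:

```python
import numpy as np

def cross_entropy(p: np.ndarray, true_class: int) -> float:
    # L_CE(y, p) = -sum_i y_i log p_i; with a one-hot y this reduces
    # to -log p[true_class]. Epsilon avoids log(0).
    return float(-np.log(p[true_class] + 1e-12))

def kl_divergence(P: np.ndarray, Q: np.ndarray) -> float:
    # L_KL(P, Q) = sum_x P(x) log(P(x) / Q(x)); epsilon avoids log(0).
    return float(np.sum(P * np.log((P + 1e-12) / (Q + 1e-12))))

def loss_distribution(losses: np.ndarray, bins: int = 20,
                      max_loss: float = 5.0) -> np.ndarray:
    # Histogram the per-sample losses and normalize the counts
    # into a probability distribution.
    hist, _ = np.histogram(losses, bins=bins, range=(0.0, max_loss))
    return hist / hist.sum()

# Simulated losses: members (seen during training) tend to have lower
# cross-entropy than non-members, especially for overfitted models.
rng = np.random.default_rng(0)
member_losses = rng.exponential(0.3, size=1000)
non_member_losses = rng.exponential(1.5, size=1000)
P = loss_distribution(member_losses)
Q = loss_distribution(non_member_losses)
divergence = kl_divergence(P, Q)  # larger values suggest easier membership inference
```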
Figure~\ref{over_distance}a shows the KL-divergence of the cross-entropy distributions and the overfitting level under different numbers of training epochs when the target model is ResNet50 trained on GREMM. We can see that the KL-divergence of the cross-entropy is highly correlated with the attack performance. For example, in Figure \ref{over_distance}a, the KL-divergence of the cross-entropy of the target model ranges from 0.0 to 0.40 when the epochs range from 0 to 120, with the attack success rate of ResNet50+LabelOnly varying from 0.55 to 0.72. More interestingly, from Figure \ref{over_distance}a and Figure \ref{over_distance}b, we can also see that there is a clear turning point after 120 epochs, at which both the KL-divergence and the attack performance become stable. These results convincingly demonstrate that, compared to the overfitting level, the KL-divergence of members' and non-members' cross-entropy has a higher correlation with the attack performance. Note that for LabelOnly attacks, we do not have confidence predictions but only the labels predicted by the target model. Thus, we can view the predicted label as the ground truth to calculate the cross-entropy loss instead of the KL-divergence loss in the distillation process. \subsection{MIA Robustness and Performance of Ensemble MIA} \label{reliabilityM} Discrimination based on MIA might be impractical when the FPR is too high, which leads to non-member samples often being erroneously predicted as members. We observe a low FPR for the Humpback and NOAA datasets, with an average FPR of 0.03\%. This means that most members and non-members are well discriminated for those datasets. In contrast, GREMM has an extremely high FPR, which can be decreased to 0.34\% using the ResNet50+LabelOnly attack under the influence of overfitting. We have investigated the FPR obtained using the best proposed attack in Figure~\ref{roc_MIA_GREEMimg} for the GREMM dataset. 
Some negative individuals are misclassified as positive due to the strong visual similarity between members and non-members in the GREMM dataset. Interestingly, even the attack on an extremely overfitted model such as ResNet50 (green line) still suffers from a high FPR. This type of error makes the predicted membership signal unreliable, especially since most samples are non-members in real-world applications. To reduce this, we have proposed the novel ensemble MIA approach (described in Section~\ref{sec_ensemble_mia}) to enhance the MIA performance while reducing the attack's FPR. \begin{figure}[h!] \centering \includegraphics[width=\linewidth]{samples/figures/roc_MIA_GREEMimg.png} \caption{True positive versus false positive rates for different settings of MIAs.} \label{roc_MIA_GREEMimg} \end{figure} While the gain in average attack success rate is modest, the success rate at low false positive rates can be very high. For instance, looking at Table~\ref{Overffitingclasses}, we notice a variation in attack accuracy across different subsets of individuals. This suggests that there is a subset of examples that are easier to distinguish than others, which is a phenomenon that has also been observed in the literature on MIAs~\cite{liu2022}. In our experiments, ensembles were designed to explore the attack model using different sets of individuals. As seen in Figure~\ref{roc_MIA_GREEMimg}, using an ensemble composed of 2 subsets, we decreased the FPR to 0.35 while increasing the attack success rate to 0.781. We further investigated whether increasing the number of subsets lowers the FPR, which we observed: with 15 subsets, the FPR decreased to 0.28. The best results are obtained when we create an attack model for each unique whale identity. The main insight behind the creation of attack models using unique individuals is that the attack set is composed of pairs of individuals, one known (member) and one unknown (non-member). 
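To illustrate the ensemble strategy, the sketch below merges the outputs of several attack models, each trained on a different subset of identities. The averaging-and-threshold combination rule shown here is an assumption made for illustration; the paper only states that the outputs are merged:

```python
import numpy as np

def ensemble_mia(attack_scores: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    # attack_scores has shape (l, n_samples): row i holds the membership
    # scores output by the attack model trained on the i-th identity subset.
    # Merging rule (assumed for illustration): average the scores across
    # attack models, then threshold to obtain a membership decision.
    avg_scores = attack_scores.mean(axis=0)
    return (avg_scores >= threshold).astype(int)

# Toy example: 3 per-identity attack models scoring 4 query images.
scores = np.array([
    [0.9, 0.2, 0.6, 0.1],
    [0.8, 0.3, 0.7, 0.2],
    [0.7, 0.1, 0.4, 0.3],
])
predictions = ensemble_mia(scores)  # 1 = member (known), 0 = non-member (unknown)
```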
The non-member is selected randomly from individuals never seen before to train the attack model. In this way, we guarantee the maximum overfitting level for each individual and merge the outputs of $M_{attack_{1}}$ to $M_{attack_{l}}$. For instance, the ensemble of $M_{attack_{1}}$ to $M_{attack_{60}}$, with 60 outputs for the GREMM dataset, achieved an FPR of 0.26 and an attack accuracy of 86\%. In fact, a high overfitting level combined with fewer individuals acts in synergy to increase the success rate of the attack while decreasing the FPR. In addition, ensemble MIA combines the output of each attack model to better discriminate similar individuals. Finally, we have also performed additional experiments to investigate how the overfitting level varies for different subsets of individuals, in order to observe whether the attack performance varies across subsets. For instance, in Table~\ref{Overffitingclasses}, we have split the GREMM dataset respectively into two and six subsets. For example, with two subsets composed of individuals whose identities range between 0-29 and 30-59, the overfitting level is respectively 0.619 and 0.624. This demonstrates that the membership leakage effect also varies among different individuals from the same dataset. \begin{table}[h!] \centering \caption{The overfitting level in different attack subsets using the LabelOnly attack when the target model is ResNet50 trained on GREMM. 
Class Index is the number of individuals used for each subset (\emph{e.g.}, 2 subsets containing 30 individuals each and 6 subsets with 10 individuals).} \label{Overffitingclasses} \begin{tabular}{@{}|l|c|c|@{}} \hline Class index & Overfitting Level & Subsets \\ \hline 0-29 & 0.619 & \multirow{2}{*}{2} \\ 30-59 & 0.624 & \\ \hline 0-9 & 0.683 & \multirow{6}{*}{6} \\ 10-19 & 0.604 & \\ 20-29 & 0.738 & \\ 30-39 & 0.752 & \\ 40-49 & 0.792 & \\ 50-59 & 0.724 & \\ \hline \end{tabular} \end{table} \begin{comment} \begin{table}[width=.99\linewidth,cols=3,pos=h] \caption{The average attack accuracy of three datasets under label-only attack under two sampling method after ten repetitions. }\label{sampleresults} \begin{tabular}{@{}lccccccccc@{}} \toprule Dataset & FAR\\ \midrule NOAA & 0.35 \\ Humpback & 0.30\\ GREMM & 0.28 \\ \bottomrule \end{tabular} \end{table} \begin{table}[width=.99\linewidth,cols=3,pos=h] \caption{The average attack accuracy of three datasets under label-only attack under two sampling method after ten repetitions. }\label{sampleresults} \begin{tabular}{@{}lccccccccc@{}} \toprule Dataset &Subsets & Individuals & Attack Acc & FAR\\ \midrule & 2 & 30 & 78\%& 0.35 \\ & 6 & 10 & 83\%& 0.30\\ GREMM & 15 & 4 & 84\%& 0.28 \\ & 60 & 1 & 86\%& 0.26 \\ \bottomrule \end{tabular} \end{table} \end{comment} \label{experimentalSetup} \section{Conclusion} \label{sect_conc} In this paper, we have performed MIAs against models trained on open-set datasets of whales with the objective of discriminating between known and unknown individuals. More precisely, we have investigated three state-of-the-art MIAs using two popular model architectures as well as three whale benchmark datasets. Overall, the results obtained demonstrate that the combination of model architecture and MIA, ResNet50+LabelOnly, performs the best and is able to discriminate members and non-members even when they have fine-grained visual similarity. 
We have shown that the assumption that the non-members should be from the same distribution can be relaxed. In particular, the non-members used to train the attack model can be taken from a different whale population without significantly impacting the success of the discrimination. Additionally, the results also highlight that the architecture of the attack model does not need to be similar to that of the target model. Finally, from the observation that the overfitting level in small subsets leads to a higher leak of information than in larger subsets, we have proposed a novel approach called ensemble MIA. Ensemble MIA leads to an enhancement of 12\% in attack performance while decreasing the FPR by 13\%. As future work, we would like to explore the use of white-box MIAs to further improve the accuracy of the discrimination while reducing the FPR, in particular for the GREMM dataset. We will also investigate how MIA-based approaches for discrimination compare to deep metric learning ones~\cite{Bouma2019,Schneider2022}. In addition, while our focus in this paper was on discriminating between member and non-member whales, we plan to integrate the proposed MIA into a full pipeline for beluga whale re-ID that we plan to open source. Finally, we hope that our work, in which we leverage privacy attacks to address practical challenges encountered in animal ecology, will foster further research at the crossing of these two domains. \bibliographystyle{abbrv}
{ "redpajama_set_name": "RedPajamaArXiv" }
2,260
Klasztor Sankt Ottilien – barokowy klasztor benedyktynów, znajdujący się w Eresing. Źródła Bals, Claudius: Die Erzabtei St. Ottilien. Missionarisches Mönchstum. St. Ottilien 2004, . Klasztory w Bawarii Architektura barokowa w Niemczech
{ "redpajama_set_name": "RedPajamaWikipedia" }
7,496
{"url":"http:\/\/sfc.kr\/wp-admin\/wiki\/ba3784-linear-function-answer","text":"2. Want to see the step-by-step answer? Another option for graphing is to use transformations of the identity function $f\\left(x\\right)=x$ . Example 1: . A: Click to see the answer. a) [70 Points] You should create a function that will perform linear interpolation from a set of measured data from a file shown below. 2\/3. Linear functions are some of the most basic functions in mathematics yet extremely important to understand because they are widely applied in electrocnics, physics, economics, chemistry, ...Also several concepts in the theory of functions and related topics depends strongly on the concept of linear functions. All linear functions cross the y-axis and therefore have y-intercepts. This topic covers: - Intercepts of linear equations\/functions - Slope of linear equations\/functions - Slope-intercept, point-slope, & standard forms - Graphing linear equations\/functions - Writing linear equations\/functions - Interpreting linear equations\/functions - Linear equations\/functions word problems A linear function is one that has the form f(x) = ax + b. We are more than happy to answer any math specific question you may have about this problem. The graph of linear function ... Click to see the answer. We are going to use this same skill when working with functions. Some of the worksheets for this concept are Work, Review linear equations, Writing linear equations, Linear function work with answers, Graphing linear equations work answer key, Review graphing and writing linear equations, Review linear, Date period. SURVEY . A function may be transformed by a shift up, down, left, or right. Review Of Linear Functions Lines Answer Key - Displaying top 8 worksheets found for this concept.. answer choices . The only thing different is the function notation. In order to find the next term in the sequence, you can use the recursive formula. It has many important applications. 
You first must be able to identify an ordered pair that is written in function notation. 1-2. none of the above. SURVEY . Mathway currently does not support Ask an Expert Live in Chemistry. Here for each value of x there is only one corresponding value of f(x) and every value of f(x) is due to only one particular value of x. Vertical Stretch or Compression. All linear functions behave similarly to the one in this example. Graphing of linear functions needs to learn linear equations in two variables.. 3\/2. question_answer. Punchline Bridge To Algebra Functions And Linear Equations And Inequalities Answer Key Zip Compare features of two linear functions represented in different ways. They are for Self-assessment and Review.. Each problem (or group of problems) has an \"answer button\" which you can click to look at an answer. lesson 1 3 practice a transforming linear functions answer key, Identifying function transformations Our mission is to provide a free, world-class education to anyone, anywhere. Q: solve using quadratic formula. If this is what you were looking for, please contact support. Solution: Let\u2019s rewrite it as ordered pairs(two of them). Some solutions have a \"further explanation button\" which you can click to see a more complete, detailed solution. A linear function has the following form. 8. We are here to assist you with your math questions. 4. 4. f(x)_____ check_circle Expert Answer. Khan Academy is a 501(c)(3) nonprofit organization. What to Do Q. b. 300 seconds . See Answer. The common difference is the constant change between each term. Evaluate the function at an input value of zero to find the y-intercept. Answer: V (t) = \u2212 750 t + 12,000. If you're seeing this message, it means we're having trouble loading external resources on our website. Determine if a relation is a function from the mapping diagram, ordered pairs, or graph. A: Click to see the answer. View questions and answers from the MATLAB Central community. 
The graph of g is a reflection in the x-axis of the graph of the parent quadratic function\u2026 How To: Given the equation for a linear function, graph the function using the y-intercept and slope. File \"TestDataSpace.dat\" provided. SURVEY . Experts are waiting 24\/7 to provide step-by-step solutions in as fast as 30 minutes! answer choices -4-2\/3. 3. Arithmetic Sequences. Sample answer: The graph of h is a translation 2 units up of the graph of the parent linear function. 3) Linear functions. Models such as this one can be extremely useful for analyzing relationships and making predictions based on those relationships. Create a non-linear function equation that has a solution at (-2, 6). You can select different variables to customize these Linear Equations Worksheets for your needs. Tags: Question 3 . The ... Find a linear function that gives the webpage ranking based on the number of links that direct users to it. question_answer. Include a calculation that demonstrates why this is a solution to your function. Answer. Linear Functions. If you studied the writing equations unit, you learned how to write equations given two points and given slope and a point. Graphing a Linear Function Using Transformations. In this section, we will explore examples of linear function models. It is attractive because it is simple and easy to handle mathematically. Correct answer to the question Which of the following equations do not represent linear functions? A linear pattern shows a constant change between each term in the sequence. 300 seconds . What is the rate of change of the line given? Let\u2019s draw a graph for the following function: F(2) = -4 and f(5) = -3. - e-eduanswers.com write a linear function f with the values f(5)=1 and f(0)=-5 f(x)_____ Question. Sample answer: The graph of f is a reflection in the x-axis of the graph of the parent linear function. 
linear function: An algebraic equation in which each term is either a constant or the product of a constant and (the first power of) ... y+2=-2(x-1)[\/latex] and either answer is correct. Linear Functions Questions and Answers - Discover the eNotes.com community of teachers, mentors and students just like you that can answer any question you might have on Linear Functions What's On This Page This page contains sample problems on linear functions. A linear function has one independent variable and one dependent variable. Tags: Question 2 . y = f(x) = a + bx. martidruil February 05, 2018 Punchline Bridge To Algebra Functions And Linear Equations And Inequalities Answer Key Zip martidruil. Functions are written using function notation. The linear function is popular in economics. 1.5-0.67. Determine whether a function is linear or not given an equation[Lesson 4.5, Lesson 6.6, Determine Whether a Function is Linear (page 9)] b. Q: Which expression can be used to find the measures of Angles B, F, and G? Here is a graphic preview for all of the Linear Equations Worksheets. What is the rate of change, as given by this point-slope equation? 9. Tags: Question 4 . A linear function has the form A function may also be transformed using a reflection, stretch, or compression. Check out a sample Q&A here. Linear Function Examples. Q: How do I solve? Linear functions are those whose graph is a straight line. question_answer. Our digital library spans in multiple countries, allowing you to get the most less latency time to download any of our books like this one. a. To answer these and related questions, we can create a model using a linear function. Yes. Answer to: Find an equation for the linear function which has y-intercept -4 and x-intercept 7. Functions are written using function notation. Create a linear function equation that has a solution at (-2,6). answer choices . 
Oct 6, 2019; 2 min read; Punchline Bridge To Algebra Functions And Linear Equations And Inequalities Answer Key Zip Expert Answer . Based on this information, a generalization can be made that a change in y. change in x will correspond to a Explore 1 Recognizing Linear Functions A race car can travel up to 210 mph. write a linear function f with the values f(5)=1 and f(0)=-5 . Q. Simplify each of the following as much as possible. section_4_worktext_answer_key.pdf: File Size: 2282 kb: File Type: pdf: Download File. Mathway currently only computes linear regressions. Find detailed answers to questions about coding, structures, functions, applications and libraries. 3. of the parent linear function. Online sales of a particular product are related to the number of clicks on its advertisement. 5. The linear function is arguably the most important function in mathematics. Arithmetic Sequences represent a linear pattern. Include a calculation that shows why this is a solution to your function. Linear Functions Enduring Understanding 3. b(x) , where a(x), b(x), q(x), and r(x) are polynomials with the degree of r(x) less than the degree of b(x), using inspection, long division, or, for the more complicated examples, a computer algebra system. Compare features of two linear functions represented in different ways. Lessons. It's one of the easiest functions to understand, and it often shows up when you least expect it. Want to see this answer and more? Use the graph to determine if it is linear. TestDataSpace - Notepad File Edit Format View Help 13.0146596170811146 0.12659245357374938 -0.9919548128307953 \u2026 Using the answers from before, what change in x corresponds to a change in y? Linear Functions. Building Linear Models from Verbal Descriptions . How many links will be needed to obtain a page ranking of 5? (Note: A vertical line parallel to the y-axis does not have a y-intercept, but it is not a function.) Common Core Algebra 2 Unit 3 Linear Functions Answer Key. 
Linear Functions; Monomials & Polynomials; Systems of Equations; Algebra 1 Worksheets Linear Equations Worksheets. 300 seconds . algebra 2 linear functions answer key is available in our digital library an online access to it is set as public so you can get it instantly. Find the slopes of parallel and perpendicular lines[Lesson 7.7, Lesson 5.6, Discovery Activity - Parallel and Perpendicular Lines] Q. Previous question Next question Transcribed Image Text from this Question.","date":"2021-01-22 18:48:11","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.40268051624298096, \"perplexity\": 739.7267702755655}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2021-04\/segments\/1610703531335.42\/warc\/CC-MAIN-20210122175527-20210122205527-00117.warc.gz\"}"}
null
null
The résumé is a poor proxy for a human being I've never been a fan of the résumé, or 'Curriculum Vitae' (CV) as we tend to call them in the UK. How on earth can a couple of sheets of paper ever hope to sum up an individual in all of their complexity? It inevitably leads to the kind of things that end up on LinkedIn profiles: your academic qualifications, job history, and a list of hobbies that don't make you sound like a loser. In this (long-ish) article for Quartz, Oliver Staley looks at what Laszlo Bock is up to with his new startup, with a detour through the history of the résumé. "Resumes are terrible," says Laszlo Bock, the former head of human resources at Google, where his team received 50,000 resumes a week. "It doesn't capture the whole person. At best, they tell you what someone has done in the past and not what they're capable of doing in the future." I really dislike résumés, and I'm delighted that I've managed to get my last couple of jobs without having to rely on them. I guess that's a huge benefit of working openly; the web is your résumé. Resumes force job seekers to contort their work and life history into corporately acceptable versions of their actual selves, to better conform to the employer's expectation of the ideal candidate. Unusual or idiosyncratic careers complicate resumes. Gaps between jobs need to be accounted for. Skills and abilities learned outside of formal work or education aren't easily explained. Employers may say they're looking for job seekers to distinguish themselves, but the resume requires them to shed their distinguishing characteristics. Unfortunately, Henry Ford's 'faster horses' rule also applies to résumés. And (cue eye roll) people need to find a way to work in buzzwords like 'blockchain'. The resume of the near future will be a document with far more information—and information that is far more useful—than the ones we use now. 
Farther out, it may not be a resume at all, but rather a digital dossier, perhaps secured on the blockchain (paywall), and uploaded to a global job-pairing engine that is sorting you, and billions of other job seekers, against millions of openings to find the perfect match. I'm more interested in different approaches, rather than doubling-down on the existing approach, so it's good to see large multinational companies like Unilever doing away with résumés. They prefer game-like assessments. Two years ago, the North American division of Unilever—the consumer products giant—stopped asking for resumes for the approximately 150-200 positions it fills from college campuses annually. Instead, it's relying on a mix of game-like assessments, automated video interviews, and in-person problem solving exercises to winnow down the field of 30,000 applicants. It all sounds great but, at the end of the day it's extra unpaid work, and more jumping through hoops. The games are designed so there are no wrong answers— a weakness in one characteristic, like impulsivity, can reveal strength in another, like efficiency—and pymetrics gives candidates who don't meet the standards for one position the option to apply for others at the company, or even at other companies. The algorithm matches candidates to the opportunities where they're most likely to succeed. The goal, Polli says, is to eliminate the "rinse and repeat" process of submitting near identical applications for dozens of jobs, and instead use data science to target the best match of job and employee. Back to Laszlo Bock, who claims that we should have an algorithmic system that matches people to available positions. I'm guessing he hasn't read Brave New World. For the system to work, it would need an understanding of a company's corporate culture, and how people actually function within its walls—not just what the company says about its culture. 
And employees and applicants would need to be comfortable handing over their personal data. For-profit entities wouldn't be trusted as stewards of such sensitive information. Nor would governments, Bock says, noting that in communist Romania, where he was born, "the government literally had dossiers on every single citizen." Ultimately, Bock says, the system should be maintained by a not-for-profit, non-governmental organization. "What I'm imagining, no human being should ever look inside this thing. You shouldn't need to," he says. Hiring people is a social activity. The problem of having too many applicants is a symptom of a broken system. This might sound crazy, but I feel like hierarchical structures and a lack of employee ownership causes some of the issues we see. Then, of course, there's much wider issues such as neo-colonialism, commodification, and bullshit jobs. But that's for another post (or two)… Source: Quartz at Work Posted25 April 2018 — 17:48 TagsCVs, Laszlo Bock, Quartz, resumes, work Next On the cultural value of memes Previous OEP (Open Educational Pragmatism?)
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
6,708
{"url":"https:\/\/calculator.academy\/joint-variation-calculator\/","text":"Enter the x, y, and z values into the calculator to determine the joint variation constant. Then, enter two new values to solve the missing value of a joint variation problem.\n\n## Joint Variation Formula\n\nThe following formula is used in join variation problems.\n\ny = k*x*z\n\u2022 Where k is the joint variation constant\n\u2022 x, y, and z are points or variables that depend on the constant k\n\nTo calculate a joint variation, multiply the joint variation constant by the variables.\n\n## Joint Variation Definition\n\nWhat is join variation? A joint variation is a problem in which a single variable is dependent, and varies jointly, with two more other variables. In the case of the equation above, the variable y varies with both x and z.\n\n## Join Variation Example Problem\n\nHow to solve a joint variation problem?\n\n1. First, determine the variation constant.\n\nIn this example, we have a variable y that varies with changes in variables x and z. One set of data points shows that when y = 10, x=1 and z=5. To solve for k, we re-arrange the equation, k = y\/ x*z = 10 \/ (1*5) = 2.\n\n2. Next, determine additional data points.\n\nFor this problem, we also know that x = 3 and z = 8 at another point.\n\n3. Finally, calculate y at the new points.\n\nUsing the formula above, and our constant from step 1, we can find the y coordinate or variable value. y = 2*3*8 = 48.\n\n## About Join Variation\n\nCan joint variation be considered direct variation? A join variation is a case in which two or more variables are directly related. A direct variation is defined as one variable that is a constant multiple of another variable. 
So, while they are similar, they are not exactly the same.","date":"2023-02-02 21:38:45","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.5477358102798462, \"perplexity\": 645.7741434879357}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": false}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2023-06\/segments\/1674764500041.18\/warc\/CC-MAIN-20230202200542-20230202230542-00657.warc.gz\"}"}
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace Melas.Structures
{
    public enum Status
    {
        Offline,
        Online,
        Active,
        Away,
        In_Game,
        In_Lobby
    }

    public class Friend
    {
        public int ID { get; private set; }
        public String Name { get; private set; }
        public Status Status { get; set; }

        public Friend(int ID, String Name)
        {
            this.ID = ID;
            this.Name = Name;
        }
    }
}
{ "redpajama_set_name": "RedPajamaGithub" }
623
The European directives represent the laws that all manufacturers must meet before they are allowed to affix CE Marking to their products. Each directive can be identified by a year and an identifier number. For instance, the Low Voltage Directive (LVD) has the identifier 2014/35/EU, which means that this European directive was the 35th published in 2014. All European directives are subject to a short transitional period of 2-3 years before being adopted. This means that although the LVD was published in 2014, it didn't come into force until 2016. The European directives, also known as policy decisions made by the Council of European Communities, define the Essential Health and Safety Requirements that all economic operators must meet before products are placed on the EU market. They specify in detail what needs to be done from a procedural and legal standpoint. Moreover, within the scope of each EU directive can be found information regarding what would happen if the laws are ignored. Economic operators, in most cases manufacturers, need to prepare and sign a Declaration of Conformance by which they legally declare products' CE compliance. In some cases, and in particular when it comes to CE compliance of medical devices, the specific European Directives (e.g. MDD, IVD, AIMDD) include provisions for the appointment of Notified Bodies. For many of the EU Directives, the involvement of Notified Bodies is only mandatory for higher-risk and safety-critical products.
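The identifier scheme described above (year, then a sequence number within that year) can be split mechanically; a small sketch (the helper name is ours, not part of any official tooling):

```python
def parse_directive_id(identifier):
    """Split an EU directive identifier like '2014/35/EU' into its parts:
    the year of publication, the sequence number within that year, and
    the jurisdiction suffix."""
    year, number, domain = identifier.split("/")
    return int(year), int(number), domain

# The Low Voltage Directive: the 35th directive published in 2014.
year, number, domain = parse_directive_id("2014/35/EU")
```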
{ "redpajama_set_name": "RedPajamaC4" }
7,764
Q: Pandas multiindex from series of dataframes

I have a series of dataframes with identical structure that represent results of a simulation for each hour of the year. Each simulation contains results for a series of coordinates (x,y). Each dataframe is imported from a csv file that has time information only in the file name. Example: results_YYMMDDHH.csv contains data such as

 x    y    a         b
0.0  0.0  0.318705 -0.871259
0.1  0.0 -0.937012  0.704270
0.1  0.1 -0.032225 -1.939544
0.0  0.1 -1.874781 -0.033073

I would like to create a single MultiIndexed DataFrame (level 0 is time and level 1 is (x,y)) that would allow me to perform various operations like averages, sums, max, etc. between these dataframes using the resampling or groupby methods. The resulting dataframe should look something like this:

                   x    y    a          b
2010-01-01 10:00  0.0  0.0  0.318705  -0.871259
                  0.1  0.0 -0.934512   0.745270
                  0.1  0.1 -0.0334525 -1.963544
                  0.0  0.1 -1.835781  -0.067573
2010-01-01 11:00  0.0  0.0  0.318705  -0.871259
                  0.1  0.0 -0.923012   0.745670
                  0.1  0.1 -0.035225  -1.963544
                  0.0  0.1 -1.835781  -0.067573
.................
2010-12-01 10:00  0.0  0.0  0.318705  -0.871259
                  0.1  0.0 -0.923012   0.723270
                  0.1  0.1 -0.034225  -1.963234
                  0.0  0.1 -1.835781  -0.067233

You can imagine this for each hour of the year. I would now like to be able to calculate, for example, the average for the whole year or the average for June, as well as any other function like the number of hours above a certain threshold or between a min and a max value. Please bear in mind that the result of any of these operations should be a DataFrame. For example, the monthly averages should look like:

          x    y    a     b
2010-01  0.0  0.0  0.45 -0.13
2010-02  0.1  0.0  0.55 -0.87
2010-03  0.1  0.1  0.24 -0.83
2010-04  0.0  0.1  0.11 -0.87

How do I build this MultiIndexed dataframe? I picture this like a timeseries of dataframes.
A: I would make a Panel then convert it into a multiindexed DataFrame using to_frame():

In [29]: df1 = pd.DataFrame(dict(a=[0.318705,-0.937012,-0.032225,-1.874781], b=[-0.871259,0.704270,-1.939544,-0.033073]))
In [30]: df2 = pd.DataFrame(dict(a=[0.318705,-0.937012,-0.032225,-1.874781], b=[-0.871259,0.704270,-1.939544,-0.033073]))
In [31]: df1
Out[31]:
          a         b
0  0.318705 -0.871259
1 -0.937012  0.704270
2 -0.032225 -1.939544
3 -1.874781 -0.033073

In [32]: data = {datetime.datetime(2010,6,21,10,0,0): df1, datetime.datetime(2010,6,22,10,0,0): df2}
In [33]: p = pd.Panel(data)
In [34]: p.to_frame()
Out[34]:
             2010-06-21 10:00:00  2010-06-22 10:00:00
major minor
0     a                 0.318705             0.318705
      b                -0.871259            -0.871259
1     a                -0.937012            -0.937012
      b                 0.704270             0.704270
2     a                -0.032225            -0.032225
      b                -1.939544            -1.939544
3     a                -1.874781            -1.874781
      b                -0.033073            -0.033073

Depending on how you want to look at your data, you can use swapaxes to rearrange it:

In [35]: p.swapaxes("major", "items").to_frame()
Out[35]:
                                   0         1         2         3
major               minor
2010-06-21 10:00:00 a       0.318705 -0.937012 -0.032225 -1.874781
                    b      -0.871259  0.704270 -1.939544 -0.033073
2010-06-22 10:00:00 a       0.318705 -0.937012 -0.032225 -1.874781
                    b      -0.871259  0.704270 -1.939544 -0.033073

A: Here is a different answer from my earlier one, in light of the more fully explained question. Iterate through the files and read them into pandas, parse the date and add it to the dataframe, then use set_index to create your multiindex. Once you've got all your dataframes, use pd.concat to combine them:

dataframes = []
for filename in filenames:
    df = pd.read_csv(filename)
    # results_YYMMDDHH.csv: characters 8-15 hold the 2-digit-year timestamp
    df["datetime"] = datetime.datetime.strptime(filename[8:16], "%y%m%d%H")
    dataframes.append(df.set_index(["datetime", "x", "y"]))
combined_df = pd.concat(dataframes)
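Note that Panel has since been removed from pandas; on current versions the same stacked structure can be built with pd.concat and a keys/names argument, after which groupby gives the monthly averages the question asks for. A sketch with made-up two-row frames (data values are placeholders):

```python
import pandas as pd

# One DataFrame per simulation hour, as in the question.
df1 = pd.DataFrame({"x": [0.0, 0.1], "y": [0.0, 0.0],
                    "a": [0.318705, -0.937012], "b": [-0.871259, 0.704270]})
df2 = df1.copy()

# Stack the frames under a new "datetime" index level.
frames = {pd.Timestamp("2010-01-01 10:00"): df1,
          pd.Timestamp("2010-01-01 11:00"): df2}
combined = pd.concat(frames, names=["datetime", None])

# Monthly averages per (x, y) coordinate, grouping the datetime
# level down to monthly periods (e.g. 2010-01).
months = combined.index.get_level_values("datetime").to_period("M")
monthly = combined.groupby([months, "x", "y"]).mean()
```

The same groupby pattern works for sums, maxima, or threshold counts (e.g. replace `.mean()` with `.apply(lambda g: (g["a"] > 0).sum())`).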
{ "redpajama_set_name": "RedPajamaStackExchange" }
8,231
Under certain conditions, we need to perform asynchronous verification on the data, such as verifying whether the username is duplicated. The following example will illustrate the processing of asynchronous verification.

- Set the `checkAsync` property on `<FormControl>` that requires asynchronous validation.
- The validation rules for asynchronous validation add an object with a return value of Promise via the `addRule` method of `schema`.
- The check can be triggered manually by calling `checkAsync` and `checkForFieldAsync` of `<Form>`.

<!--start-code-->

```js
const { StringType, NumberType } = Schema.Types;

function asyncCheckUsername(name) {
  return new Promise(resolve => {
    setTimeout(() => {
      if (name === 'abc') {
        resolve(false);
      } else {
        resolve(true);
      }
    }, 500);
  });
}

const model = Schema.Model({
  name: StringType()
    .addRule((value, data) => {
      return asyncCheckUsername(value);
    }, 'Duplicate username')
    .isRequired('This field is required.')
});

class CheckForm extends React.Component {
  constructor(props) {
    super(props);
    this.state = {
      formValue: { name: '' },
      formError: {}
    };
    this.handleSubmit = this.handleSubmit.bind(this);
  }
  handleSubmit() {
    const { formValue } = this.state;
    this.form.checkAsync().then(result => {
      console.log(result);
    });
  }
  render() {
    const { formError, formValue } = this.state;
    return (
      <div>
        <JSONView formValue={formValue} formError={formError} />
        <Form
          ref={ref => (this.form = ref)}
          onChange={formValue => {
            this.setState({ formValue });
          }}
          onCheck={formError => {
            this.setState({ formError });
          }}
          formValue={formValue}
          model={model}
        >
          <FormGroup>
            <ControlLabel>Username</ControlLabel>
            <FormControl checkAsync name="name" />
          </FormGroup>
          <ButtonToolbar>
            <Button appearance="primary" onClick={this.handleSubmit}>
              Submit
            </Button>
          </ButtonToolbar>
        </Form>
      </div>
    );
  }
}

ReactDOM.render(<CheckForm />);
```

<!--end-code-->
{ "redpajama_set_name": "RedPajamaGithub" }
5,180
Q: How can I shorten this common pattern?

In our codebase we often encounter this pattern:

(!_.isNil(x)) ? something(x) : null;

If something was a method on the object then the short version of it would be:

x?.something()

Is there a commonly accepted shortcut to the above? Perhaps something provided by lodash itself or some other library? Thanks
{ "redpajama_set_name": "RedPajamaStackExchange" }
2,870
ALLERDALE RESULT: Solway - Jim Lister, CONS, 455.

Harriet Harman to step down as deputy leader of the Labour party once a new leader is elected.

ALLERDALE RESULT: Aspatria - Bill Finlay, IND, 620. David Wilson, UKIP, 534.
ALLERDALE RESULT: Stainburn - Mark Fryer, LAB, 444.
ALLERDALE RESULT: Netherhall - Angela Kendall, LAB, 697. Bill Pegram, LAB, 663.
ALLERDALE RESULT: Ewanrigg - Carni McCarron-Holmes, LAB, 872. Lee Williamson, LAB, 705.
ALLERDALE RESULT: Wharrels - Jacqueline Mounsey, CONS, 550.
ALLERDALE RESULT: Warnell - Duncan Fairbairn, CONS, 790.
ALLERDALE RESULT: Silloth - John Cook, CONS, 908. Bill Jefferson, IND, 588.
ALLERDALE RESULT: Wampool - Patricia MacDonald, CONS, 679.
ALLERDALE RESULT: Waver - Alan Hedworth, CONS, 850.

Mark Fryer said: "It's been a long hard fight, thank you to the returning officer and staff of Moorclose Sports Centre. The next elections won't be counted in this building; this tired old lady is being retired and we'll be in a brand new, fit-for-purpose facility in the town centre. That's what you get when you vote Labour. Thank you to the people of Stainburn for giving me a resounding victory. I will work unbelievably hard for you, as will all of our candidates. I look forward to the next four years, hopefully under a Labour majority, where we can get on and finish the business we started four years ago."

ALLERDALE RESULT: St John's - Joe Holliday, IND, 1,391. Michael Heaslip, LAB, 1,223. Konrad Hansen, LAB, 1,126.

Michael Heaslip said: "There's no change in St John's. We always knew it was going to be tight and that there would be no landslide either way. We are only halfway through what we set out to achieve; we've got very ambitious plans that we are going to see through over the next four years."

Alan Smith, re-elected in All Saints ward, said: "I'm ecstatic that the electorate of All Saints are putting their trust in me, Christine and Len - the Labour team - for the next four years. Some of us have worked for the last 20 years for the betterment of All Saints ward. We have hit the ground running now for the ward. They have had our utmost support over Strawberry How. We are going to be fighting for the people up in that area. We have now got the mandate to take that forward."

Keswick has gone to a recount.

Joe Holliday said: "I feel brilliant. To come in first in the poll as well is marvellous. The sheer number of people is a marvellous vote when you see others are getting in with just 400 votes and thinking it's wonderful. People have really turned out." This was the first borough election he had fought as an independent candidate, having previously stood for Labour. Asked whether it had made it more challenging standing against his former party, he said: "It's not very nice. They were my colleagues. They still are in many ways. I have been confident in the work I have done in the ward. You don't win it in the last two weeks, you win it in the last four years."

First-time candidate Anthony McGuckin, who missed out on a seat in St John's, said: "It's a good start. It's great. I have enjoyed it. It's been a good process. At the end of the day the two existing Labour boys have got in and that's important to keep our majority. We have done a lot in the last four years. It's about carrying on with that and taking it to the next level. As Arnold Schwarzenegger once said, 'I'll be back'."

Moorclose has gone to a recount too!

ALLERDALE RESULT: Christ Church - Eric Nicholson, CONS, 925. Margaret Jackson, CONS, 832.
ALLERDALE RESULT: Wigton - John Crouch, LAB, 1,027. Joe Cowell, CONS, 921. Alan Pitcher, CONS, 890.
ALLERDALE RESULT: Moorclose - Stephen Stoddart, IND, 963. Denis Robertson, IND, 704. Peter Bales, LAB, 776.
{ "redpajama_set_name": "RedPajamaC4" }
6,111
var gulp = require('gulp');
var jasmine = require('gulp-jasmine');
var cover = require('gulp-coverage');
var coveralls = require('gulp-coveralls');
var gutil = require('gulp-util');

gulp.task('test', function () {
    gulp.src('spec/**/*Spec.js')
        .pipe(jasmine()).on('error', gutil.log);
});

gulp.task('test:coverage', function () {
    gulp.src('spec/**/*Spec.js')
        .pipe(cover.instrument({
            pattern: ['src/**/*.js']
        }))
        .pipe(jasmine()).on('error', gutil.log)
        .pipe(cover.gather())
        .pipe(cover.format([
            { reporter: 'lcov' }
        ]))
        .pipe(coveralls());
});

gulp.task('tdd', function () {
    gulp.watch(['spec/**/*.js', 'src/**/*.js'], ['test']);
});
{ "redpajama_set_name": "RedPajamaGithub" }
6,516
Established in 2015, the office of the Israeli-American Council (IAC) serves the greater Washington DC area. The rich cultural and professional offerings of the nation's capital complement the vast array of programs and events that are currently being developed to serve this vibrant, diverse and growing community.

Was-Tlv nonstop flights campaign

Donate to IAC DC - Donate to the Israeli-American Council Washington DC and make a difference in our community today.

Sign up for your weekly Israeli-American updates from the Washington DC area.

Children's Books in Hebrew - IAC Keshet is an enrichment program for families with children. Learn more.

Become part of a team that touches the lives of thousands.

Our Programs
- Israeli enrichment program through cultural activities conducted entirely in Hebrew.
- Learning program that teaches high school students the skills needed to succeed in college, career and life.
- Explores the Israeli-American Jewish identity and develops the community's backbone leadership.
- IAC Edge (Young Professionals): Connecting Young Professionals to Israel through entrepreneurship and innovation.
- IAC Mishelanu Midatlantic: A college campus program which strengthens students' identity through culture, heritage, and connection to Israel.
- Kabbalat Shabbat experience and dinner that brings the community together with songs, prayers & Israeli cuisine.

Washington DC Videos

IAC DC - help us grow!
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
6,226
package leetcode

import (
	"reflect"
	"testing"
)

func TestCommonChars(t *testing.T) {
	if !reflect.DeepEqual(commonChars([]string{"bella", "label", "roller"}), []string{"e", "l", "l"}) {
		t.Fatal()
	}
}
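The `commonChars` implementation under test is not included in this file; its expected behavior (LeetCode 1002, "Find Common Characters": return the characters, with multiplicity, that appear in every word) can be sketched as a multiset intersection. A Python sketch of the same logic:

```python
from collections import Counter
from functools import reduce

def common_chars(words):
    """Return the characters (with multiplicity) common to every word.

    Counter's & operator takes the per-character minimum count across
    words, and elements() expands the result back into characters.
    """
    common = reduce(lambda acc, w: acc & Counter(w), words[1:], Counter(words[0]))
    return list(common.elements())

common_chars(["bella", "label", "roller"])  # -> ["e", "l", "l"]
```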
{ "redpajama_set_name": "RedPajamaGithub" }
2,129
Lady Gaga & Bradley Cooper's 'A Star Is Born' soundtrack debuts at #1 Updated Oct. 16, 2018, 9:36 a.m. | By Tamlyn Canham Bradley Cooper officially has a number one album under his belt. Bradley Cooper and Lady Gaga / YouTube Lady Gaga and Bradley Cooper's new movie, 'A Star is Born', has already proven itself to be a box office hit, and now they can add number one album to the mix. The movie's soundtrack is the number one album in America. It debuted at the top of the Billboard 200 after selling an impressive 231,000 units. It's the biggest opening week for a soundtrack in over three years. Most of the songs on the soundtrack are performed by Bradley and Lady Gaga. Not only does Bradley star and sing in the musical but he also makes his directorial debut. He is also listed as one of the screenplay's co-writers. Watch Bradley and Lady Gaga perform 'Shallow' below. ALSO READ: Karlien furious about 'Wil jy vry' music video that features underage kids Watch the movie's trailer below. ALSO READ: Watch: Camila Cabello debuts emotional 'Consequences' video Main image courtesy of YouTube/Lady Gaga
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
6,599
The Danish Mathematical Society () is a Danish learned society of mathematicians founded in 1873 at the University of Copenhagen, one year after the Société mathématique de France. According to the society's website, it has

History

The society was founded on an idea of Thorvald Nicolai Thiele. The first committee consisted of Thiele, Hieronymus Georg Zeuthen and Julius Petersen. The society is a member of the European Mathematical Society.

Presidents

Johan Jensen (1892-1903)
Vilhelm Herman Oluf Madsen (1903-1910)
(1910-1917)
(1917-1926)
Harald Bohr (1926-1929, 1937-1951)
(1954-1958)

External links

K. Ramskov, The Danish Mathematical Society through 125 Years, Historia Mathematica, 2000.
The Danish Mathematical Society, web page in English
O'Connor, John J.; Robertson, Edmund F., "The Danish Mathematical Society", MacTutor History of Mathematics archive

References

Learned society related to mathematics
Member of the European Mathematical Society
Learned society in Denmark
Organization founded in 1873
{ "redpajama_set_name": "RedPajamaWikipedia" }
4,902
Posting some poems I wrote as a kid for Throwback Thursday! When you don't listen as much as you ought. When such a friendly guy is up there? I can identify with When You Don't Listen. The Moon's Face, not so much.
{ "redpajama_set_name": "RedPajamaC4" }
5,727
Q: Where should I aim my chain shot?

Traditionally, chain shot was used to cripple and break the masts of enemy ships; however, I am having a hard time telling if I am doing more damage to a ship's movement by aiming at the masts or just aiming at the ship in general. Where should I aim my chain shot for maximum effect?

A: I really love the naval combat, but you're so busy at times that my answer below might be off a bit. This is how I experienced it. I went for realism and tried to shoot the main mast on the ship with my chain shot. After a while, a mate suggested to shoot the hull. Shooting the hull itself doesn't do more damage, but it gives you an extra chance to get one of those swivel targets that do add a lot to damage. So there is a potential damage increase, but not guaranteed. The bigger the ship (better armored actually, but there's a correlation), the less damage you do on the hull though. The first big Spanish ship I had to fight (I forget its name) did not take much damage from shooting the hull with the chain shot, but shooting the masts did seem to bring down its turning speed. Generally speaking, I like to aim for the masts of ships bigger than the Jackdaw, and aim for the hull on the smaller gunboats and the likes.

A: Chain shot does seem to slow ships, especially larger ones, if you hit the sails/masts. Round shot does the same, to a much lesser extent. This is critical when fighting Man o' War and Legendary ships where you really, really do not want them to broadside you. Reducing their manoeuvrability allows you to stay in front or behind and get shots in without being damaged, or go in for a ram.

A: Chain shots were actually shot at the mast and sails to decrease the mobility of the enemy ships, but there is actually no effect in Black Flag, and I prefer to hit the hull because it is easier and it seems to do more damage.
I have noticed a few times that a chain shot aimed towards the mast actually unveils a weakness for your swivel (the target that shows up in red), but then again, that happens wherever you hit the ship. At the end of the day, it doesn't matter where you hit as long as none of your shots miss the target. I rarely use chain shots, but when I do, I use it as a warning shot before ramming the hull.
{ "redpajama_set_name": "RedPajamaStackExchange" }
4,794
Subhash, an IAS officer, is posted as the district commissioner in the state region. Through his strong vigil and courage, he becomes a threat to the entire corrupt system and the officers concerned. Will Subhash emerge victorious in this battle of right and wrong, or will the evil forces fighting against him bring him down?
{ "redpajama_set_name": "RedPajamaC4" }
8,455
{"url":"https:\/\/www.allaboutcircuits.com\/technical-articles\/design-considerations-for-digital-vlsi\/","text":"Design procedures in VLSI SoCs are very complex. The designer should consider all possible states and inputs and design the chip in such a way that it works every time in every state and with every possible input. In this article, we discuss metastability, setup time, and hold time when designing a digital VLSI circuit.\n\nCritical Path, Throughput, and Latency\n\nThe critical path is the longest path in the circuit and limits the clock speed. When describing a digital circuit there are two other important factors: latency and throughput. Latency is the time needed for an input change to produce an output\u00a0change; latency can be expressed\u00a0as a length of time or, in synchronous circuits,\u00a0as a certain number of clock cycles. Throughput refers to the rate at which data can be processed.\n\nFlip-Flops and Combinational Logic\n\nA digital circuit can consist of sequential logic and combinational logic. Sequential logic refers to circuits whose output depends on previous states. In other words, it involves memory that stores previous states and allows a decision to be made based on these previous states and the current input signals. In the digital realm, flip-flops are the standard devices used for storing previous logic states. In Verilog, we can define a flip-flop by using the reg command:\n\nreg[7:0] states;\n\nThe above line defines an 8-bit flip-flop. Flip-flops, which are\u00a0sensitive to clock\u00a0transitions\u00a0rather than\u00a0clock logic states,\u00a0are the most basic element of synchronous designs.\n\nCombinational logic refers to a circuit that computes an output based only on\u00a0the current\u00a0input signals.\n\nFigure 1. sh = ab\u2019+bc. Image courtesy of the Tampere University of Technology\n\nA simple combinational logic circuit\u00a0is implemented in Figure 1. Every logic device has a propagation delay. 
Propagation delay is the time difference between an input change and the corresponding output change.\u00a0This delay can lead to unexpected behavior,\u00a0such as when a gate accepts two inputs that come from paths with different numbers of gates (and therefore\u00a0unequal total propagation delay).\n\nAssume we are in the (1,1,1) input\u00a0state and the\u00a0output is steady at 1.\u00a0If b changes from 1 to zero the output of the lower AND gate will transition before that of the upper AND gate, resulting in a temporary logic low on the output. This logic-low state is\u00a0invalid, because a (1,0,1) input pattern\u00a0should produce a logic-high output.\u00a0This brief invalid output state is referred to as a hazard.\n\nMore specifically, this glitch is called a static hazard. Dynamic hazards occur when an input change leads to more than one output glitch. Usually, dynamic hazards occur in complex circuits with multiple gates and logic paths.\n\nIn synchronous design, we must ensure that glitches do not result in invalid output states. As mentioned above, for storing previous states designers usually use flip-flops with edge sensitivity. When using flip-flops in digital VLSI designs, we must consider the following:\n\n1. Setup time: the input\u00a0to a flip-flop should be stable for a certain\u00a0amount of time (the setup time) before the clock transitions; otherwise, the\u00a0flip-flop will behave in an unstable manner, referred to as\u00a0metastability.\n2. Hold time: the input of a flip-flop should remain stable for a certain amount of time (the hold time) after the\u00a0clock transitions.\n\nThe following figure provides a visual description of setup time and hold time:\n\nSetup Time\n\nA digital circuit designed for FPGA or ASIC purposes needs combinational logic for calculations. We usually build multipliers, subtractors, adders,\u00a0etc., with logic gates. 
For storing input and output values for these combinational logic circuits,\u00a0we use flip-flops. Flip-flops are at the beginning and at the end of all critical paths, as shown in Figure\u00a03.\n\nTo avoid a setup-time violation when using flip-flops at the end of a combinational path, the output must be\u00a0stable before the\u00a0clock edge. Thus, the total propagation delay of a combinational path must not\u00a0cause the output to transition such that the relationship between the clock signal and the data signal leads to a setup-time violation.\n\nPipelining\n\nIn VLSI designs, we may face a very long critical path due to an extensive combinational circuit. In such cases, our clock speed will decrease to ensure that the delays associated with the critical path do not lead to setup-time violations. Pipelining is a technique whereby we divide a combinational path into multiple parts and include a register at the end of each partial path. In this way, we divide the critical path into multiple small paths,\u00a0and this allows us to\u00a0increase the clock speed and, consequently, the throughput of the circuit.\n\nFor example, in Figure\u00a04 we have a long critical path that limits the clock frequency. However, the divided and pipelined path (see Figure\u00a05) contains shorter combinational paths, and this means we can increase the clock speed. However, as a trade-off, the latency of the path will increase.\n\nHold Time\n\nThe input\u00a0to a flip-flop should be stable for an amount of time equal to or greater than the hold time. For example, in Figure 6, assume the delay of the combinational path between FF1 and FF2 is 0.7ns, the flip-flop setup time is 2ns, and its hold time is 1ns. If we assume that the propagation delay of the flip-flops is zero, after a clock edge the output of FF1 will change immediately, and 0.7ns later\u00a0the signal\u00a0has passed through the combinational logic and arrived at the\u00a0FF2 input. 
However, the input to FF2 should be stable for at least 1ns after the clock edge. Thus, a hold-time violation occurs.\n\nFigure 6. Hold-time violation example. Image courtesy of the VLSI Expert Group\n\nA setup-time violation can be addressed by reducing the clock frequency, even after device fabrication has occurred; however, a hold-time violation cannot be corrected if it is discovered after the fabrication process. The important thing is to design our circuit so that hold-time violations will not occur; a combinational circuit connected to a\u00a0flip-flop input should have a propagation delay that is compatible with the hold-time requirement.\n\nOne technique for avoiding hold-time violations is to increase the delay of a fast path by adding\u00a0buffers. Nowadays, CAD tools can help by\u00a0identifying portions of a design that could experience hold-time or setup-time\u00a0violations. Furthermore, CAD\u00a0tools can take timing requirements into account when synthesizing, placing, and routing\u00a0a particular design.\n\nClock-Crossing\n\nIn most modern designs, multiple clock frequencies are used.\u00a0ADCs or\u00a0DACs may have a clock that is not synchronized with the FPGA clock,\u00a0and yet the ADC or DAC signals\u00a0must be introduced into\u00a0the FPGA clock domain. When we're working with multiple clock domains, we need to be careful\u00a0to avoid situations that could lead to metastability.\n\nWe will need to achieve synchronization between different clock domains. This can be done by using a simple FIFO that has a clock for the input and a separate clock for the output. We could also use a basic shift register instead of a FIFO. 
The following Verilog code can be used to provide synchronization between different clock domains.\n\n Input CLKA,CLKB;\nInput signalinCLKA;\nOutput signalinCLKB;\nReg[1:0] shift_register;\nalways@(posedge CLKB)\nbegin\nshift_register[0]<=signalinCLKA;\nshift_register[1]<=shift_register[0];\nend\nassign signalinCLKB=signalinCLKA;\n\n\n\nWe can also employ asynchronous design techniques\u00a0to address issues associated with multiple clock domains, but we will look at\u00a0that in a future article. We will also wait until the next article to cover other important topics such as the following:\n\n\u2022 clock skew, and\u00a0dealing with clock skew\u00a0by means of clock distribution trees\n\u2022 issues associated with the use of gated clocks in FPGAs\n\u2022 flip-flops with\u00a0negative hold time\n\nConclusion\n\nIn this article, we talked about hold-time violations and how to avoid them by adding a delay to fast logic paths. We also explained setup-time violations\u00a0and we discussed pipelining as a method of avoiding timing problems in circuits that include a long critical path. 
Finally, we\u00a0introduced the\u00a0idea of multiple clock domains, and\u00a0we looked at\u00a0a simple Verilog\u00a0approach to\u00a0clock synchronization.","date":"2017-09-22 17:15:50","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.5665280818939209, \"perplexity\": 1642.812759718886}, \"config\": {\"markdown_headings\": false, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2017-39\/segments\/1505818689028.2\/warc\/CC-MAIN-20170922164513-20170922184513-00304.warc.gz\"}"}
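The setup- and hold-time arithmetic used in the examples above is simple enough to script; a sketch (the function names and the zero clock-to-Q assumption are ours, mirroring the Figure 6 discussion):

```python
def setup_ok(clock_period_ns, comb_delay_ns, setup_ns, clk_to_q_ns=0.0):
    """Data must settle at least setup_ns before the next capturing edge:
    clk-to-Q + combinational delay + setup time <= clock period."""
    return clk_to_q_ns + comb_delay_ns + setup_ns <= clock_period_ns

def hold_ok(comb_delay_ns, hold_ns, clk_to_q_ns=0.0):
    """New data must not reach the capturing flip-flop sooner than
    hold_ns after the same edge: clk-to-Q + combinational delay >= hold."""
    return clk_to_q_ns + comb_delay_ns >= hold_ns

# Figure 6 numbers: 0.7 ns combinational delay, 1 ns hold time.
hold_ok(0.7, 1.0)        # False -> hold-time violation
# Setup depends on the clock period, so slowing the clock can fix it;
# note that no clock change appears in hold_ok at all.
setup_ok(4.0, 0.7, 2.0)  # True with a 4 ns (250 MHz) clock
```

This also makes the article's point concrete: the clock period appears only in the setup check, which is why a hold violation cannot be fixed by slowing the clock.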
#include "includes.h" #include "common.h" #include "common/ieee802_11_defs.h" #include "eap_peer/eap_methods.h" #include "eapol_supp/eapol_supp_sm.h" #include "rsn_supp/wpa.h" #include "../config.h" #include "../wpa_supplicant_i.h" #include "../driver_i.h" #include "../notify.h" #include "../wpas_glue.h" #include "../bss.h" #include "../scan.h" #include "dbus_new_helpers.h" #include "dbus_new.h" #include "dbus_new_handlers.h" #include "dbus_dict_helpers.h" extern int wpa_debug_level; extern int wpa_debug_show_keys; extern int wpa_debug_timestamp; static const char *debug_strings[] = { "excessive", "msgdump", "debug", "info", "warning", "error", NULL }; /** * wpas_dbus_error_unknown_error - Return a new InvalidArgs error message * @message: Pointer to incoming dbus message this error refers to * @arg: Optional string appended to error message * Returns: a dbus error message * * Convenience function to create and return an UnknownError */ DBusMessage * wpas_dbus_error_unknown_error(DBusMessage *message, const char *arg) { /* * This function can be called as a result of a failure * within internal getter calls, which will call this function * with a NULL message parameter. However, dbus_message_new_error * looks very unkindly (i.e, abort()) on a NULL message, so * in this case, we should not call it. */ if (message == NULL) { wpa_printf(MSG_INFO, "dbus: wpas_dbus_error_unknown_error " "called with NULL message (arg=%s)", arg ? 
arg : "N/A"); return NULL; } return dbus_message_new_error(message, WPAS_DBUS_ERROR_UNKNOWN_ERROR, arg); } /** * wpas_dbus_error_iface_unknown - Return a new invalid interface error message * @message: Pointer to incoming dbus message this error refers to * Returns: A dbus error message * * Convenience function to create and return an invalid interface error */ static DBusMessage * wpas_dbus_error_iface_unknown(DBusMessage *message) { return dbus_message_new_error(message, WPAS_DBUS_ERROR_IFACE_UNKNOWN, "wpa_supplicant knows nothing about " "this interface."); } /** * wpas_dbus_error_network_unknown - Return a new NetworkUnknown error message * @message: Pointer to incoming dbus message this error refers to * Returns: a dbus error message * * Convenience function to create and return an invalid network error */ static DBusMessage * wpas_dbus_error_network_unknown(DBusMessage *message) { return dbus_message_new_error(message, WPAS_DBUS_ERROR_NETWORK_UNKNOWN, "There is no such a network in this " "interface."); } /** * wpas_dbus_error_invalid_args - Return a new InvalidArgs error message * @message: Pointer to incoming dbus message this error refers to * Returns: a dbus error message * * Convenience function to create and return an invalid options error */ DBusMessage * wpas_dbus_error_invalid_args(DBusMessage *message, const char *arg) { DBusMessage *reply; reply = dbus_message_new_error(message, WPAS_DBUS_ERROR_INVALID_ARGS, "Did not receive correct message " "arguments."); if (arg != NULL) dbus_message_append_args(reply, DBUS_TYPE_STRING, &arg, DBUS_TYPE_INVALID); return reply; } static const char *dont_quote[] = { "key_mgmt", "proto", "pairwise", "auth_alg", "group", "eap", "opensc_engine_path", "pkcs11_engine_path", "pkcs11_module_path", "bssid", NULL }; static dbus_bool_t should_quote_opt(const char *key) { int i = 0; while (dont_quote[i] != NULL) { if (os_strcmp(key, dont_quote[i]) == 0) return FALSE; i++; } return TRUE; } /** * get_iface_by_dbus_path - Get a 
new network interface * @global: Pointer to global data from wpa_supplicant_init() * @path: Pointer to a dbus object path representing an interface * Returns: Pointer to the interface or %NULL if not found */ static struct wpa_supplicant * get_iface_by_dbus_path( struct wpa_global *global, const char *path) { struct wpa_supplicant *wpa_s; for (wpa_s = global->ifaces; wpa_s; wpa_s = wpa_s->next) { if (os_strcmp(wpa_s->dbus_new_path, path) == 0) return wpa_s; } return NULL; } /** * set_network_properties - Set properties of a configured network * @message: Pointer to incoming dbus message * @wpa_s: wpa_supplicant structure for a network interface * @ssid: wpa_ssid structure for a configured network * @iter: DBus message iterator containing dictionary of network * properties to set. * Returns: NULL when succeed or DBus error on failure * * Sets network configuration with parameters given id DBus dictionary */ DBusMessage * set_network_properties(DBusMessage *message, struct wpa_supplicant *wpa_s, struct wpa_ssid *ssid, DBusMessageIter *iter) { struct wpa_dbus_dict_entry entry = { .type = DBUS_TYPE_STRING }; DBusMessage *reply = NULL; DBusMessageIter iter_dict; if (!wpa_dbus_dict_open_read(iter, &iter_dict)) return wpas_dbus_error_invalid_args(message, NULL); while (wpa_dbus_dict_has_dict_entry(&iter_dict)) { char *value = NULL; size_t size = 50; int ret; if (!wpa_dbus_dict_get_entry(&iter_dict, &entry)) { reply = wpas_dbus_error_invalid_args(message, NULL); break; } if (entry.type == DBUS_TYPE_ARRAY && entry.array_type == DBUS_TYPE_BYTE) { if (entry.array_len <= 0) goto error; size = entry.array_len * 2 + 1; value = os_zalloc(size); if (value == NULL) goto error; ret = wpa_snprintf_hex(value, size, (u8 *) entry.bytearray_value, entry.array_len); if (ret <= 0) goto error; } else if (entry.type == DBUS_TYPE_STRING) { if (should_quote_opt(entry.key)) { size = os_strlen(entry.str_value); if (size <= 0) goto error; size += 3; value = os_zalloc(size); if (value == NULL) 
goto error; ret = os_snprintf(value, size, "\"%s\"", entry.str_value); if (ret < 0 || (size_t) ret != (size - 1)) goto error; } else { value = os_strdup(entry.str_value); if (value == NULL) goto error; } } else if (entry.type == DBUS_TYPE_UINT32) { value = os_zalloc(size); if (value == NULL) goto error; ret = os_snprintf(value, size, "%u", entry.uint32_value); if (ret <= 0) goto error; } else if (entry.type == DBUS_TYPE_INT32) { value = os_zalloc(size); if (value == NULL) goto error; ret = os_snprintf(value, size, "%d", entry.int32_value); if (ret <= 0) goto error; } else goto error; if (wpa_config_set(ssid, entry.key, value, 0) < 0) goto error; if ((os_strcmp(entry.key, "psk") == 0 && value[0] == '"' && ssid->ssid_len) || (strcmp(entry.key, "ssid") == 0 && ssid->passphrase)) wpa_config_update_psk(ssid); else if (os_strcmp(entry.key, "priority") == 0) wpa_config_update_prio_list(wpa_s->conf); os_free(value); wpa_dbus_dict_entry_clear(&entry); continue; error: os_free(value); reply = wpas_dbus_error_invalid_args(message, entry.key); wpa_dbus_dict_entry_clear(&entry); break; } return reply; } /** * wpas_dbus_simple_property_getter - Get basic type property * @message: Pointer to incoming dbus message * @type: DBus type of property (must be basic type) * @val: pointer to place holding property value * Returns: The DBus message containing response for Properties.Get call * or DBus error message if error occurred. * * Generic getter for basic type properties. Type is required to be basic. 
*/ DBusMessage * wpas_dbus_simple_property_getter(DBusMessage *message, const int type, const void *val) { DBusMessage *reply = NULL; DBusMessageIter iter, variant_iter; if (!dbus_type_is_basic(type)) { wpa_printf(MSG_ERROR, "dbus: wpas_dbus_simple_property_getter:" " given type is not basic"); return wpas_dbus_error_unknown_error(message, NULL); } if (message == NULL) reply = dbus_message_new(DBUS_MESSAGE_TYPE_SIGNAL); else reply = dbus_message_new_method_return(message); if (reply != NULL) { dbus_message_iter_init_append(reply, &iter); if (!dbus_message_iter_open_container( &iter, DBUS_TYPE_VARIANT, wpa_dbus_type_as_string(type), &variant_iter) || !dbus_message_iter_append_basic(&variant_iter, type, val) || !dbus_message_iter_close_container(&iter, &variant_iter)) { wpa_printf(MSG_ERROR, "dbus: " "wpas_dbus_simple_property_getter: out of " "memory to put property value into " "message"); dbus_message_unref(reply); reply = dbus_message_new_error(message, DBUS_ERROR_NO_MEMORY, NULL); } } else { wpa_printf(MSG_ERROR, "dbus: wpas_dbus_simple_property_getter:" " out of memory to return property value"); reply = dbus_message_new_error(message, DBUS_ERROR_NO_MEMORY, NULL); } return reply; } /** * wpas_dbus_simple_property_setter - Set basic type property * @message: Pointer to incoming dbus message * @type: DBus type of property (must be basic type) * @val: pointer to place where value being set will be stored * Returns: NULL or DBus error message if error occurred. * * Generic setter for basic type properties. Type is required to be basic. 
 */
DBusMessage * wpas_dbus_simple_property_setter(DBusMessage *message,
					       const int type, void *val)
{
	DBusMessageIter iter, variant_iter;

	if (!dbus_type_is_basic(type)) {
		wpa_printf(MSG_ERROR, "dbus: wpas_dbus_simple_property_setter:"
			   " given type is not basic");
		return wpas_dbus_error_unknown_error(message, NULL);
	}

	if (!dbus_message_iter_init(message, &iter)) {
		wpa_printf(MSG_ERROR, "dbus: wpas_dbus_simple_property_setter:"
			   " failed to initialize message iterator");
		return dbus_message_new_error(message, DBUS_ERROR_NO_MEMORY,
					      NULL);
	}

	/* omit first and second argument and get value from third */
	dbus_message_iter_next(&iter);
	dbus_message_iter_next(&iter);
	dbus_message_iter_recurse(&iter, &variant_iter);

	if (dbus_message_iter_get_arg_type(&variant_iter) != type) {
		wpa_printf(MSG_DEBUG, "dbus: wpas_dbus_simple_property_setter:"
			   " wrong property type");
		return wpas_dbus_error_invalid_args(message,
						    "wrong property type");
	}
	dbus_message_iter_get_basic(&variant_iter, val);

	return NULL;
}


/**
 * wpas_dbus_simple_array_property_getter - Get array type property
 * @message: Pointer to incoming dbus message
 * @type: DBus type of property array elements (must be basic type)
 * @array: pointer to array of elements to put into response message
 * @array_len: length of above array
 * Returns: The DBus message containing response for Properties.Get call
 * or DBus error message if error occurred.
 *
 * Generic getter for array type properties. Array elements type is
 * required to be basic.
 */
DBusMessage * wpas_dbus_simple_array_property_getter(DBusMessage *message,
						     const int type,
						     const void *array,
						     size_t array_len)
{
	DBusMessage *reply = NULL;
	DBusMessageIter iter, variant_iter, array_iter;
	char type_str[] = "a?"; /* ?
will be replaced with subtype letter; */ const char *sub_type_str; size_t element_size, i; if (!dbus_type_is_basic(type)) { wpa_printf(MSG_ERROR, "dbus: " "wpas_dbus_simple_array_property_getter: given " "type is not basic"); return wpas_dbus_error_unknown_error(message, NULL); } sub_type_str = wpa_dbus_type_as_string(type); type_str[1] = sub_type_str[0]; if (message == NULL) reply = dbus_message_new(DBUS_MESSAGE_TYPE_SIGNAL); else reply = dbus_message_new_method_return(message); if (reply == NULL) { wpa_printf(MSG_ERROR, "dbus: " "wpas_dbus_simple_array_property_getter: out of " "memory to create return message"); return dbus_message_new_error(message, DBUS_ERROR_NO_MEMORY, NULL); } dbus_message_iter_init_append(reply, &iter); if (!dbus_message_iter_open_container(&iter, DBUS_TYPE_VARIANT, type_str, &variant_iter) || !dbus_message_iter_open_container(&variant_iter, DBUS_TYPE_ARRAY, sub_type_str, &array_iter)) { wpa_printf(MSG_ERROR, "dbus: " "wpas_dbus_simple_array_property_getter: out of " "memory to open container"); dbus_message_unref(reply); return dbus_message_new_error(message, DBUS_ERROR_NO_MEMORY, NULL); } switch(type) { case DBUS_TYPE_BYTE: case DBUS_TYPE_BOOLEAN: element_size = 1; break; case DBUS_TYPE_INT16: case DBUS_TYPE_UINT16: element_size = sizeof(uint16_t); break; case DBUS_TYPE_INT32: case DBUS_TYPE_UINT32: element_size = sizeof(uint32_t); break; case DBUS_TYPE_INT64: case DBUS_TYPE_UINT64: element_size = sizeof(uint64_t); break; case DBUS_TYPE_DOUBLE: element_size = sizeof(double); break; case DBUS_TYPE_STRING: case DBUS_TYPE_OBJECT_PATH: element_size = sizeof(char *); break; default: wpa_printf(MSG_ERROR, "dbus: " "wpas_dbus_simple_array_property_getter: " "fatal: unknown element type"); element_size = 1; break; } for (i = 0; i < array_len; i++) { dbus_message_iter_append_basic(&array_iter, type, array + i * element_size); } if (!dbus_message_iter_close_container(&variant_iter, &array_iter) || !dbus_message_iter_close_container(&iter, 
&variant_iter)) { wpa_printf(MSG_ERROR, "dbus: " "wpas_dbus_simple_array_property_getter: out of " "memory to close container"); dbus_message_unref(reply); return dbus_message_new_error(message, DBUS_ERROR_NO_MEMORY, NULL); } return reply; } /** * wpas_dbus_handler_create_interface - Request registration of a network iface * @message: Pointer to incoming dbus message * @global: %wpa_supplicant global data structure * Returns: The object path of the new interface object, * or a dbus error message with more information * * Handler function for "CreateInterface" method call. Handles requests * by dbus clients to register a network interface that wpa_supplicant * will manage. */ DBusMessage * wpas_dbus_handler_create_interface(DBusMessage *message, struct wpa_global *global) { DBusMessageIter iter_dict; DBusMessage *reply = NULL; DBusMessageIter iter; struct wpa_dbus_dict_entry entry; char *driver = NULL; char *ifname = NULL; char *confname = NULL; char *bridge_ifname = NULL; dbus_message_iter_init(message, &iter); if (!wpa_dbus_dict_open_read(&iter, &iter_dict)) goto error; while (wpa_dbus_dict_has_dict_entry(&iter_dict)) { if (!wpa_dbus_dict_get_entry(&iter_dict, &entry)) goto error; if (!strcmp(entry.key, "Driver") && (entry.type == DBUS_TYPE_STRING)) { driver = os_strdup(entry.str_value); wpa_dbus_dict_entry_clear(&entry); if (driver == NULL) goto error; } else if (!strcmp(entry.key, "Ifname") && (entry.type == DBUS_TYPE_STRING)) { ifname = os_strdup(entry.str_value); wpa_dbus_dict_entry_clear(&entry); if (ifname == NULL) goto error; } else if (!strcmp(entry.key, "ConfigFile") && (entry.type == DBUS_TYPE_STRING)) { confname = os_strdup(entry.str_value); wpa_dbus_dict_entry_clear(&entry); if (confname == NULL) goto error; } else if (!strcmp(entry.key, "BridgeIfname") && (entry.type == DBUS_TYPE_STRING)) { bridge_ifname = os_strdup(entry.str_value); wpa_dbus_dict_entry_clear(&entry); if (bridge_ifname == NULL) goto error; } else { wpa_dbus_dict_entry_clear(&entry); 
goto error;
		}
	}

	if (ifname == NULL)
		goto error; /* Required Ifname argument missing */

	/*
	 * Try to get the wpa_supplicant record for this iface, return
	 * an error if we already control it.
	 */
	if (wpa_supplicant_get_iface(global, ifname) != NULL) {
		reply = dbus_message_new_error(
			message, WPAS_DBUS_ERROR_IFACE_EXISTS,
			"wpa_supplicant already controls this interface.");
	} else {
		struct wpa_supplicant *wpa_s;
		struct wpa_interface iface;
		os_memset(&iface, 0, sizeof(iface));
		iface.driver = driver;
		iface.ifname = ifname;
		iface.confname = confname;
		iface.bridge_ifname = bridge_ifname;
		/* Otherwise, have wpa_supplicant attach to it. */
		if ((wpa_s = wpa_supplicant_add_iface(global, &iface))) {
			const char *path = wpa_s->dbus_new_path;
			reply = dbus_message_new_method_return(message);
			dbus_message_append_args(reply, DBUS_TYPE_OBJECT_PATH,
						 &path, DBUS_TYPE_INVALID);
		} else {
			reply = wpas_dbus_error_unknown_error(
				message,
				"wpa_supplicant couldn't grab this "
				"interface.");
		}
	}

out:
	os_free(driver);
	os_free(ifname);
	os_free(confname); /* was leaked: confname is duplicated above like the
			    * other arguments and must be freed here too */
	os_free(bridge_ifname);
	return reply;

error:
	reply = wpas_dbus_error_invalid_args(message, NULL);
	goto out;
}


/**
 * wpas_dbus_handler_remove_interface - Request deregistration of an interface
 * @message: Pointer to incoming dbus message
 * @global: wpa_supplicant global data structure
 * Returns: NULL on success, or a dbus error message with more information
 *
 * Handler function for "RemoveInterface" method call. Handles requests
 * by dbus clients to deregister a network interface that wpa_supplicant
 * currently manages.
*/ DBusMessage * wpas_dbus_handler_remove_interface(DBusMessage *message, struct wpa_global *global) { struct wpa_supplicant *wpa_s; char *path; DBusMessage *reply = NULL; dbus_message_get_args(message, NULL, DBUS_TYPE_OBJECT_PATH, &path, DBUS_TYPE_INVALID); wpa_s = get_iface_by_dbus_path(global, path); if (wpa_s == NULL) reply = wpas_dbus_error_iface_unknown(message); else if (wpa_supplicant_remove_iface(global, wpa_s, 0)) { reply = wpas_dbus_error_unknown_error( message, "wpa_supplicant couldn't remove this " "interface."); } return reply; } /** * wpas_dbus_handler_get_interface - Get the object path for an interface name * @message: Pointer to incoming dbus message * @global: %wpa_supplicant global data structure * Returns: The object path of the interface object, * or a dbus error message with more information * * Handler function for "getInterface" method call. */ DBusMessage * wpas_dbus_handler_get_interface(DBusMessage *message, struct wpa_global *global) { DBusMessage *reply = NULL; const char *ifname; const char *path; struct wpa_supplicant *wpa_s; dbus_message_get_args(message, NULL, DBUS_TYPE_STRING, &ifname, DBUS_TYPE_INVALID); wpa_s = wpa_supplicant_get_iface(global, ifname); if (wpa_s == NULL) return wpas_dbus_error_iface_unknown(message); path = wpa_s->dbus_new_path; reply = dbus_message_new_method_return(message); if (reply == NULL) return dbus_message_new_error(message, DBUS_ERROR_NO_MEMORY, NULL); if (!dbus_message_append_args(reply, DBUS_TYPE_OBJECT_PATH, &path, DBUS_TYPE_INVALID)) { dbus_message_unref(reply); return dbus_message_new_error(message, DBUS_ERROR_NO_MEMORY, NULL); } return reply; } /** * wpas_dbus_getter_debug_level - Get debug level * @message: Pointer to incoming dbus message * @global: %wpa_supplicant global data structure * Returns: DBus message with value of debug level * * Getter for "DebugLevel" property. 
*/ DBusMessage * wpas_dbus_getter_debug_level(DBusMessage *message, struct wpa_global *global) { const char *str; int idx = wpa_debug_level; if (idx < 0) idx = 0; if (idx > 5) idx = 5; str = debug_strings[idx]; return wpas_dbus_simple_property_getter(message, DBUS_TYPE_STRING, &str); } /** * wpas_dbus_getter_debug_timestamp - Get debug timestamp * @message: Pointer to incoming dbus message * @global: %wpa_supplicant global data structure * Returns: DBus message with value of debug timestamp * * Getter for "DebugTimestamp" property. */ DBusMessage * wpas_dbus_getter_debug_timestamp(DBusMessage *message, struct wpa_global *global) { return wpas_dbus_simple_property_getter(message, DBUS_TYPE_BOOLEAN, &wpa_debug_timestamp); } /** * wpas_dbus_getter_debug_show_keys - Get debug show keys * @message: Pointer to incoming dbus message * @global: %wpa_supplicant global data structure * Returns: DBus message with value of debug show_keys * * Getter for "DebugShowKeys" property. */ DBusMessage * wpas_dbus_getter_debug_show_keys(DBusMessage *message, struct wpa_global *global) { return wpas_dbus_simple_property_getter(message, DBUS_TYPE_BOOLEAN, &wpa_debug_show_keys); } /** * wpas_dbus_setter_debug_level - Set debug level * @message: Pointer to incoming dbus message * @global: %wpa_supplicant global data structure * Returns: %NULL or DBus error message * * Setter for "DebugLevel" property. 
*/ DBusMessage * wpas_dbus_setter_debug_level(DBusMessage *message, struct wpa_global *global) { DBusMessage *reply; const char *str = NULL; int i, val = -1; reply = wpas_dbus_simple_property_setter(message, DBUS_TYPE_STRING, &str); if (reply) return reply; for (i = 0; debug_strings[i]; i++) if (os_strcmp(debug_strings[i], str) == 0) { val = i; break; } if (val < 0 || wpa_supplicant_set_debug_params(global, val, wpa_debug_timestamp, wpa_debug_show_keys)) { return wpas_dbus_error_invalid_args( message, "Wrong debug level value"); } return NULL; } /** * wpas_dbus_setter_debug_timestamp - Set debug timestamp * @message: Pointer to incoming dbus message * @global: %wpa_supplicant global data structure * Returns: %NULL or DBus error message * * Setter for "DebugTimestamp" property. */ DBusMessage * wpas_dbus_setter_debug_timestamp(DBusMessage *message, struct wpa_global *global) { DBusMessage *reply; dbus_bool_t val; reply = wpas_dbus_simple_property_setter(message, DBUS_TYPE_BOOLEAN, &val); if (reply) return reply; wpa_supplicant_set_debug_params(global, wpa_debug_level, val ? 1 : 0, wpa_debug_show_keys); return NULL; } /** * wpas_dbus_setter_debug_show_keys - Set debug show keys * @message: Pointer to incoming dbus message * @global: %wpa_supplicant global data structure * Returns: %NULL or DBus error message * * Setter for "DebugShowKeys" property. */ DBusMessage * wpas_dbus_setter_debug_show_keys(DBusMessage *message, struct wpa_global *global) { DBusMessage *reply; dbus_bool_t val; reply = wpas_dbus_simple_property_setter(message, DBUS_TYPE_BOOLEAN, &val); if (reply) return reply; wpa_supplicant_set_debug_params(global, wpa_debug_level, wpa_debug_timestamp, val ? 
1 : 0); return NULL; } /** * wpas_dbus_getter_interfaces - Request registered interfaces list * @message: Pointer to incoming dbus message * @global: %wpa_supplicant global data structure * Returns: The object paths array containing registered interfaces * objects paths or DBus error on failure * * Getter for "Interfaces" property. Handles requests * by dbus clients to return list of registered interfaces objects * paths */ DBusMessage * wpas_dbus_getter_interfaces(DBusMessage *message, struct wpa_global *global) { DBusMessage *reply = NULL; struct wpa_supplicant *wpa_s; const char **paths; unsigned int i = 0, num = 0; for (wpa_s = global->ifaces; wpa_s; wpa_s = wpa_s->next) num++; paths = os_zalloc(num * sizeof(char*)); if (!paths) { return dbus_message_new_error(message, DBUS_ERROR_NO_MEMORY, NULL); } for (wpa_s = global->ifaces; wpa_s; wpa_s = wpa_s->next) paths[i++] = wpa_s->dbus_new_path; reply = wpas_dbus_simple_array_property_getter(message, DBUS_TYPE_OBJECT_PATH, paths, num); os_free(paths); return reply; } /** * wpas_dbus_getter_eap_methods - Request supported EAP methods list * @message: Pointer to incoming dbus message * @nothing: not used argument. may be NULL or anything else * Returns: The object paths array containing supported EAP methods * represented by strings or DBus error on failure * * Getter for "EapMethods" property. 
Handles requests * by dbus clients to return list of strings with supported EAP methods */ DBusMessage * wpas_dbus_getter_eap_methods(DBusMessage *message, void *nothing) { DBusMessage *reply = NULL; char **eap_methods; size_t num_items = 0; eap_methods = eap_get_names_as_string_array(&num_items); if (!eap_methods) { return dbus_message_new_error(message, DBUS_ERROR_NO_MEMORY, NULL); } reply = wpas_dbus_simple_array_property_getter(message, DBUS_TYPE_STRING, eap_methods, num_items); while (num_items) os_free(eap_methods[--num_items]); os_free(eap_methods); return reply; } static int wpas_dbus_get_scan_type(DBusMessage *message, DBusMessageIter *var, char **type, DBusMessage **reply) { if (dbus_message_iter_get_arg_type(var) != DBUS_TYPE_STRING) { wpa_printf(MSG_DEBUG, "wpas_dbus_handler_scan[dbus]: " "Type must be a string"); *reply = wpas_dbus_error_invalid_args( message, "Wrong Type value type. String required"); return -1; } dbus_message_iter_get_basic(var, type); return 0; } static int wpas_dbus_get_scan_ssids(DBusMessage *message, DBusMessageIter *var, struct wpa_driver_scan_params *params, DBusMessage **reply) { struct wpa_driver_scan_ssid *ssids = params->ssids; size_t ssids_num = 0; u8 *ssid; DBusMessageIter array_iter, sub_array_iter; char *val; int len; if (dbus_message_iter_get_arg_type(var) != DBUS_TYPE_ARRAY) { wpa_printf(MSG_DEBUG, "wpas_dbus_handler_scan[dbus]: ssids " "must be an array of arrays of bytes"); *reply = wpas_dbus_error_invalid_args( message, "Wrong SSIDs value type. Array of arrays of " "bytes required"); return -1; } dbus_message_iter_recurse(var, &array_iter); if (dbus_message_iter_get_arg_type(&array_iter) != DBUS_TYPE_ARRAY || dbus_message_iter_get_element_type(&array_iter) != DBUS_TYPE_BYTE) { wpa_printf(MSG_DEBUG, "wpas_dbus_handler_scan[dbus]: ssids " "must be an array of arrays of bytes"); *reply = wpas_dbus_error_invalid_args( message, "Wrong SSIDs value type. 
Array of arrays of " "bytes required"); return -1; } while (dbus_message_iter_get_arg_type(&array_iter) == DBUS_TYPE_ARRAY) { if (ssids_num >= WPAS_MAX_SCAN_SSIDS) { wpa_printf(MSG_DEBUG, "wpas_dbus_handler_scan[dbus]: " "Too many ssids specified on scan dbus " "call"); *reply = wpas_dbus_error_invalid_args( message, "Too many ssids specified. Specify " "at most four"); return -1; } dbus_message_iter_recurse(&array_iter, &sub_array_iter); dbus_message_iter_get_fixed_array(&sub_array_iter, &val, &len); if (len != 0) { ssid = os_malloc(len); if (ssid == NULL) { wpa_printf(MSG_DEBUG, "wpas_dbus_handler_scan[dbus]: " "out of memory. Cannot allocate " "memory for SSID"); *reply = dbus_message_new_error( message, DBUS_ERROR_NO_MEMORY, NULL); return -1; } os_memcpy(ssid, val, len); } else { /* Allow zero-length SSIDs */ ssid = NULL; } ssids[ssids_num].ssid = ssid; ssids[ssids_num].ssid_len = len; dbus_message_iter_next(&array_iter); ssids_num++; } params->num_ssids = ssids_num; return 0; } static int wpas_dbus_get_scan_ies(DBusMessage *message, DBusMessageIter *var, struct wpa_driver_scan_params *params, DBusMessage **reply) { u8 *ies = NULL, *nies; int ies_len = 0; DBusMessageIter array_iter, sub_array_iter; char *val; int len; if (dbus_message_iter_get_arg_type(var) != DBUS_TYPE_ARRAY) { wpa_printf(MSG_DEBUG, "wpas_dbus_handler_scan[dbus]: ies must " "be an array of arrays of bytes"); *reply = wpas_dbus_error_invalid_args( message, "Wrong IEs value type. Array of arrays of " "bytes required"); return -1; } dbus_message_iter_recurse(var, &array_iter); if (dbus_message_iter_get_arg_type(&array_iter) != DBUS_TYPE_ARRAY || dbus_message_iter_get_element_type(&array_iter) != DBUS_TYPE_BYTE) { wpa_printf(MSG_DEBUG, "wpas_dbus_handler_scan[dbus]: ies must " "be an array of arrays of bytes"); *reply = wpas_dbus_error_invalid_args( message, "Wrong IEs value type. 
Array required"); return -1; } while (dbus_message_iter_get_arg_type(&array_iter) == DBUS_TYPE_ARRAY) { dbus_message_iter_recurse(&array_iter, &sub_array_iter); dbus_message_iter_get_fixed_array(&sub_array_iter, &val, &len); if (len == 0) { dbus_message_iter_next(&array_iter); continue; } nies = os_realloc(ies, ies_len + len); if (nies == NULL) { wpa_printf(MSG_DEBUG, "wpas_dbus_handler_scan[dbus]: " "out of memory. Cannot allocate memory for " "IE"); os_free(ies); *reply = dbus_message_new_error( message, DBUS_ERROR_NO_MEMORY, NULL); return -1; } ies = nies; os_memcpy(ies + ies_len, val, len); ies_len += len; dbus_message_iter_next(&array_iter); } params->extra_ies = ies; params->extra_ies_len = ies_len; return 0; } static int wpas_dbus_get_scan_channels(DBusMessage *message, DBusMessageIter *var, struct wpa_driver_scan_params *params, DBusMessage **reply) { DBusMessageIter array_iter, sub_array_iter; int *freqs = NULL, *nfreqs; int freqs_num = 0; if (dbus_message_iter_get_arg_type(var) != DBUS_TYPE_ARRAY) { wpa_printf(MSG_DEBUG, "wpas_dbus_handler_scan[dbus]: " "Channels must be an array of structs"); *reply = wpas_dbus_error_invalid_args( message, "Wrong Channels value type. Array of structs " "required"); return -1; } dbus_message_iter_recurse(var, &array_iter); if (dbus_message_iter_get_arg_type(&array_iter) != DBUS_TYPE_STRUCT) { wpa_printf(MSG_DEBUG, "wpas_dbus_handler_scan[dbus]: Channels must be an " "array of structs"); *reply = wpas_dbus_error_invalid_args( message, "Wrong Channels value type. 
Array of structs " "required"); return -1; } while (dbus_message_iter_get_arg_type(&array_iter) == DBUS_TYPE_STRUCT) { int freq, width; dbus_message_iter_recurse(&array_iter, &sub_array_iter); if (dbus_message_iter_get_arg_type(&sub_array_iter) != DBUS_TYPE_UINT32) { wpa_printf(MSG_DEBUG, "wpas_dbus_handler_scan[dbus]: " "Channel must by specified by struct of " "two UINT32s %c", dbus_message_iter_get_arg_type( &sub_array_iter)); *reply = wpas_dbus_error_invalid_args( message, "Wrong Channel struct. Two UINT32s " "required"); os_free(freqs); return -1; } dbus_message_iter_get_basic(&sub_array_iter, &freq); if (!dbus_message_iter_next(&sub_array_iter) || dbus_message_iter_get_arg_type(&sub_array_iter) != DBUS_TYPE_UINT32) { wpa_printf(MSG_DEBUG, "wpas_dbus_handler_scan[dbus]: " "Channel must by specified by struct of " "two UINT32s"); *reply = wpas_dbus_error_invalid_args( message, "Wrong Channel struct. Two UINT32s required"); os_free(freqs); return -1; } dbus_message_iter_get_basic(&sub_array_iter, &width); #define FREQS_ALLOC_CHUNK 32 if (freqs_num % FREQS_ALLOC_CHUNK == 0) { nfreqs = os_realloc(freqs, sizeof(int) * (freqs_num + FREQS_ALLOC_CHUNK)); if (nfreqs == NULL) os_free(freqs); freqs = nfreqs; } if (freqs == NULL) { wpa_printf(MSG_DEBUG, "wpas_dbus_handler_scan[dbus]: " "out of memory. can't allocate memory for " "freqs"); *reply = dbus_message_new_error( message, DBUS_ERROR_NO_MEMORY, NULL); return -1; } freqs[freqs_num] = freq; freqs_num++; dbus_message_iter_next(&array_iter); } nfreqs = os_realloc(freqs, sizeof(int) * (freqs_num + 1)); if (nfreqs == NULL) os_free(freqs); freqs = nfreqs; if (freqs == NULL) { wpa_printf(MSG_DEBUG, "wpas_dbus_handler_scan[dbus]: " "out of memory. 
Can't allocate memory for freqs"); *reply = dbus_message_new_error( message, DBUS_ERROR_NO_MEMORY, NULL); return -1; } freqs[freqs_num] = 0; params->freqs = freqs; return 0; } /** * wpas_dbus_handler_scan - Request a wireless scan on an interface * @message: Pointer to incoming dbus message * @wpa_s: wpa_supplicant structure for a network interface * Returns: NULL indicating success or DBus error message on failure * * Handler function for "Scan" method call of a network device. Requests * that wpa_supplicant perform a wireless scan as soon as possible * on a particular wireless interface. */ DBusMessage * wpas_dbus_handler_scan(DBusMessage *message, struct wpa_supplicant *wpa_s) { DBusMessage *reply = NULL; DBusMessageIter iter, dict_iter, entry_iter, variant_iter; char *key = NULL, *type = NULL; struct wpa_driver_scan_params params; size_t i; os_memset(&params, 0, sizeof(params)); dbus_message_iter_init(message, &iter); dbus_message_iter_recurse(&iter, &dict_iter); while (dbus_message_iter_get_arg_type(&dict_iter) == DBUS_TYPE_DICT_ENTRY) { dbus_message_iter_recurse(&dict_iter, &entry_iter); dbus_message_iter_get_basic(&entry_iter, &key); dbus_message_iter_next(&entry_iter); dbus_message_iter_recurse(&entry_iter, &variant_iter); if (os_strcmp(key, "Type") == 0) { if (wpas_dbus_get_scan_type(message, &variant_iter, &type, &reply) < 0) goto out; } else if (os_strcmp(key, "SSIDs") == 0) { if (wpas_dbus_get_scan_ssids(message, &variant_iter, &params, &reply) < 0) goto out; } else if (os_strcmp(key, "IEs") == 0) { if (wpas_dbus_get_scan_ies(message, &variant_iter, &params, &reply) < 0) goto out; } else if (os_strcmp(key, "Channels") == 0) { if (wpas_dbus_get_scan_channels(message, &variant_iter, &params, &reply) < 0) goto out; } else { wpa_printf(MSG_DEBUG, "wpas_dbus_handler_scan[dbus]: " "Unknown argument %s", key); reply = wpas_dbus_error_invalid_args(message, key); goto out; } dbus_message_iter_next(&dict_iter); } if (!type) { wpa_printf(MSG_DEBUG, 
"wpas_dbus_handler_scan[dbus]: " "Scan type not specified"); reply = wpas_dbus_error_invalid_args(message, key); goto out; } if (!os_strcmp(type, "passive")) { if (params.num_ssids || params.extra_ies_len) { wpa_printf(MSG_DEBUG, "wpas_dbus_handler_scan[dbus]: " "SSIDs or IEs specified for passive scan."); reply = wpas_dbus_error_invalid_args( message, "You can specify only Channels in " "passive scan"); goto out; } else if (params.freqs && params.freqs[0]) { wpa_supplicant_trigger_scan(wpa_s, &params); } else { wpa_s->scan_req = 2; wpa_supplicant_req_scan(wpa_s, 0, 0); } } else if (!os_strcmp(type, "active")) { if (!params.num_ssids) { /* Add wildcard ssid */ params.num_ssids++; } wpa_supplicant_trigger_scan(wpa_s, &params); } else { wpa_printf(MSG_DEBUG, "wpas_dbus_handler_scan[dbus]: " "Unknown scan type: %s", type); reply = wpas_dbus_error_invalid_args(message, "Wrong scan type"); goto out; } out: for (i = 0; i < WPAS_MAX_SCAN_SSIDS; i++) os_free((u8 *) params.ssids[i].ssid); os_free((u8 *) params.extra_ies); os_free(params.freqs); return reply; } /* * wpas_dbus_handler_disconnect - Terminate the current connection * @message: Pointer to incoming dbus message * @wpa_s: wpa_supplicant structure for a network interface * Returns: NotConnected DBus error message if already not connected * or NULL otherwise. * * Handler function for "Disconnect" method call of network interface. 
*/ DBusMessage * wpas_dbus_handler_disconnect(DBusMessage *message, struct wpa_supplicant *wpa_s) { if (wpa_s->current_ssid != NULL) { wpa_s->disconnected = 1; wpa_supplicant_deauthenticate(wpa_s, WLAN_REASON_DEAUTH_LEAVING); return NULL; } return dbus_message_new_error(message, WPAS_DBUS_ERROR_NOT_CONNECTED, "This interface is not connected"); } /** * wpas_dbus_new_iface_add_network - Add a new configured network * @message: Pointer to incoming dbus message * @wpa_s: wpa_supplicant structure for a network interface * Returns: A dbus message containing the object path of the new network * * Handler function for "AddNetwork" method call of a network interface. */ DBusMessage * wpas_dbus_handler_add_network(DBusMessage *message, struct wpa_supplicant *wpa_s) { DBusMessage *reply = NULL; DBusMessageIter iter; struct wpa_ssid *ssid = NULL; char path_buf[WPAS_DBUS_OBJECT_PATH_MAX], *path = path_buf; dbus_message_iter_init(message, &iter); ssid = wpa_config_add_network(wpa_s->conf); if (ssid == NULL) { wpa_printf(MSG_ERROR, "wpas_dbus_handler_add_network[dbus]: " "can't add new interface."); reply = wpas_dbus_error_unknown_error( message, "wpa_supplicant could not add " "a network on this interface."); goto err; } wpas_notify_network_added(wpa_s, ssid); ssid->disabled = 1; wpa_config_set_network_defaults(ssid); reply = set_network_properties(message, wpa_s, ssid, &iter); if (reply) { wpa_printf(MSG_DEBUG, "wpas_dbus_handler_add_network[dbus]:" "control interface couldn't set network " "properties"); goto err; } /* Construct the object path for this network. 
 */
	os_snprintf(path, WPAS_DBUS_OBJECT_PATH_MAX,
		    "%s/" WPAS_DBUS_NEW_NETWORKS_PART "/%d",
		    wpa_s->dbus_new_path, ssid->id);

	reply = dbus_message_new_method_return(message);
	if (reply == NULL) {
		reply = dbus_message_new_error(message, DBUS_ERROR_NO_MEMORY,
					       NULL);
		goto err;
	}
	if (!dbus_message_append_args(reply, DBUS_TYPE_OBJECT_PATH, &path,
				      DBUS_TYPE_INVALID)) {
		dbus_message_unref(reply);
		reply = dbus_message_new_error(message, DBUS_ERROR_NO_MEMORY,
					       NULL);
		goto err;
	}

	return reply;

err:
	if (ssid) {
		wpas_notify_network_removed(wpa_s, ssid);
		wpa_config_remove_network(wpa_s->conf, ssid->id);
	}
	return reply;
}


/**
 * wpas_dbus_handler_remove_network - Remove a configured network
 * @message: Pointer to incoming dbus message
 * @wpa_s: wpa_supplicant structure for a network interface
 * Returns: NULL on success or dbus error on failure
 *
 * Handler function for "RemoveNetwork" method call of a network interface.
 */
DBusMessage * wpas_dbus_handler_remove_network(DBusMessage *message,
					       struct wpa_supplicant *wpa_s)
{
	DBusMessage *reply = NULL;
	const char *op;
	char *iface = NULL, *net_id = NULL;
	int id;
	struct wpa_ssid *ssid;

	dbus_message_get_args(message, NULL, DBUS_TYPE_OBJECT_PATH, &op,
			      DBUS_TYPE_INVALID);

	/*
	 * Extract the network ID and ensure the network is actually a child
	 * of this interface.
	 */
	iface = wpas_dbus_new_decompose_object_path(op, 0, &net_id, NULL);
	if (iface == NULL || os_strcmp(iface, wpa_s->dbus_new_path) != 0) {
		reply = wpas_dbus_error_invalid_args(message, op);
		goto out;
	}

	errno = 0; /* strtoul does not clear errno on success */
	id = strtoul(net_id, NULL, 10);
	if (errno == EINVAL) {
		reply = wpas_dbus_error_invalid_args(message, op);
		goto out;
	}

	ssid = wpa_config_get_network(wpa_s->conf, id);
	if (ssid == NULL) {
		reply = wpas_dbus_error_network_unknown(message);
		goto out;
	}

	wpas_notify_network_removed(wpa_s, ssid);

	if (wpa_config_remove_network(wpa_s->conf, id) < 0) {
		wpa_printf(MSG_ERROR,
			   "wpas_dbus_handler_remove_network[dbus]: "
			   "error occurred when removing network %d", id);
		reply =
wpas_dbus_error_unknown_error( message, "error removing the specified network on " "this interface."); goto out; } if (ssid == wpa_s->current_ssid) wpa_supplicant_deauthenticate(wpa_s, WLAN_REASON_DEAUTH_LEAVING); out: os_free(iface); os_free(net_id); return reply; } static void remove_network(void *arg, struct wpa_ssid *ssid) { struct wpa_supplicant *wpa_s = arg; wpas_notify_network_removed(wpa_s, ssid); if (wpa_config_remove_network(wpa_s->conf, ssid->id) < 0) { wpa_printf(MSG_ERROR, "wpas_dbus_handler_remove_all_networks[dbus]: " "error occurred when removing network %d", ssid->id); return; } if (ssid == wpa_s->current_ssid) wpa_supplicant_disassociate(wpa_s, WLAN_REASON_DEAUTH_LEAVING); } /** * wpas_dbus_handler_remove_all_networks - Remove all configured networks * @message: Pointer to incoming dbus message * @wpa_s: wpa_supplicant structure for a network interface * Returns: NULL on success or dbus error on failure * * Handler function for "RemoveAllNetworks" method call of a network interface. */ DBusMessage * wpas_dbus_handler_remove_all_networks( DBusMessage *message, struct wpa_supplicant *wpa_s) { /* NB: could check for failure and return an error */ wpa_config_foreach_network(wpa_s->conf, remove_network, wpa_s); return NULL; } /** * wpas_dbus_handler_select_network - Attempt association with a network * @message: Pointer to incoming dbus message * @wpa_s: wpa_supplicant structure for a network interface * Returns: NULL on success or dbus error on failure * * Handler function for "SelectNetwork" method call of network interface. 
*/
DBusMessage * wpas_dbus_handler_select_network(DBusMessage *message,
					       struct wpa_supplicant *wpa_s)
{
	DBusMessage *reply = NULL;
	const char *op;
	char *iface = NULL, *net_id = NULL;
	int id;
	struct wpa_ssid *ssid;

	dbus_message_get_args(message, NULL, DBUS_TYPE_OBJECT_PATH, &op,
			      DBUS_TYPE_INVALID);

	/* Extract the network ID and ensure the network is actually a child
	 * of this interface */
	iface = wpas_dbus_new_decompose_object_path(op, 0, &net_id, NULL);
	if (iface == NULL || os_strcmp(iface, wpa_s->dbus_new_path) != 0) {
		reply = wpas_dbus_error_invalid_args(message, op);
		goto out;
	}

	errno = 0; /* clear stale errno before strtoul() */
	id = strtoul(net_id, NULL, 10);
	if (errno == EINVAL) {
		reply = wpas_dbus_error_invalid_args(message, op);
		goto out;
	}

	ssid = wpa_config_get_network(wpa_s->conf, id);
	if (ssid == NULL) {
		reply = wpas_dbus_error_network_unknown(message);
		goto out;
	}

	/* Finally, associate with the network */
	wpa_supplicant_select_network(wpa_s, ssid);

out:
	os_free(iface);
	os_free(net_id);
	return reply;
}


/**
 * wpas_dbus_handler_add_blob - Store named binary blob (ie, for certificates)
 * @message: Pointer to incoming dbus message
 * @wpa_s: %wpa_supplicant data structure
 * Returns: A dbus message containing an error on failure or NULL on success
 *
 * Asks wpa_supplicant to internally store a binary blob.
*/ DBusMessage * wpas_dbus_handler_add_blob(DBusMessage *message, struct wpa_supplicant *wpa_s) { DBusMessage *reply = NULL; DBusMessageIter iter, array_iter; char *blob_name; u8 *blob_data; int blob_len; struct wpa_config_blob *blob = NULL; dbus_message_iter_init(message, &iter); dbus_message_iter_get_basic(&iter, &blob_name); if (wpa_config_get_blob(wpa_s->conf, blob_name)) { return dbus_message_new_error(message, WPAS_DBUS_ERROR_BLOB_EXISTS, NULL); } dbus_message_iter_next(&iter); dbus_message_iter_recurse(&iter, &array_iter); dbus_message_iter_get_fixed_array(&array_iter, &blob_data, &blob_len); blob = os_zalloc(sizeof(*blob)); if (!blob) { reply = dbus_message_new_error(message, DBUS_ERROR_NO_MEMORY, NULL); goto err; } blob->data = os_malloc(blob_len); if (!blob->data) { reply = dbus_message_new_error(message, DBUS_ERROR_NO_MEMORY, NULL); goto err; } os_memcpy(blob->data, blob_data, blob_len); blob->len = blob_len; blob->name = os_strdup(blob_name); if (!blob->name) { reply = dbus_message_new_error(message, DBUS_ERROR_NO_MEMORY, NULL); goto err; } wpa_config_set_blob(wpa_s->conf, blob); wpas_notify_blob_added(wpa_s, blob->name); return reply; err: if (blob) { os_free(blob->name); os_free(blob->data); os_free(blob); } return reply; } /** * wpas_dbus_handler_get_blob - Get named binary blob (ie, for certificates) * @message: Pointer to incoming dbus message * @wpa_s: %wpa_supplicant data structure * Returns: A dbus message containing array of bytes (blob) * * Gets one wpa_supplicant's binary blobs. 
*/
DBusMessage * wpas_dbus_handler_get_blob(DBusMessage *message,
					 struct wpa_supplicant *wpa_s)
{
	DBusMessage *reply = NULL;
	DBusMessageIter iter, array_iter;
	char *blob_name;
	const struct wpa_config_blob *blob;

	dbus_message_get_args(message, NULL, DBUS_TYPE_STRING, &blob_name,
			      DBUS_TYPE_INVALID);

	blob = wpa_config_get_blob(wpa_s->conf, blob_name);
	if (!blob) {
		return dbus_message_new_error(message,
					      WPAS_DBUS_ERROR_BLOB_UNKNOWN,
					      "Blob id not set");
	}

	reply = dbus_message_new_method_return(message);
	if (!reply) {
		reply = dbus_message_new_error(message, DBUS_ERROR_NO_MEMORY,
					       NULL);
		goto out;
	}

	dbus_message_iter_init_append(reply, &iter);

	if (!dbus_message_iter_open_container(&iter, DBUS_TYPE_ARRAY,
					      DBUS_TYPE_BYTE_AS_STRING,
					      &array_iter)) {
		dbus_message_unref(reply);
		reply = dbus_message_new_error(message, DBUS_ERROR_NO_MEMORY,
					       NULL);
		goto out;
	}

	if (!dbus_message_iter_append_fixed_array(&array_iter, DBUS_TYPE_BYTE,
						  &(blob->data), blob->len)) {
		dbus_message_unref(reply);
		reply = dbus_message_new_error(message, DBUS_ERROR_NO_MEMORY,
					       NULL);
		goto out;
	}

	if (!dbus_message_iter_close_container(&iter, &array_iter)) {
		dbus_message_unref(reply);
		reply = dbus_message_new_error(message, DBUS_ERROR_NO_MEMORY,
					       NULL);
		goto out;
	}

out:
	return reply;
}


/**
 * wpas_dbus_handler_remove_blob - Remove named binary blob
 * @message: Pointer to incoming dbus message
 * @wpa_s: %wpa_supplicant data structure
 * Returns: NULL on success or dbus error
 *
 * Asks wpa_supplicant to internally remove a binary blob.
*/
DBusMessage * wpas_dbus_handler_remove_blob(DBusMessage *message,
					    struct wpa_supplicant *wpa_s)
{
	DBusMessage *reply = NULL;
	char *blob_name;

	dbus_message_get_args(message, NULL, DBUS_TYPE_STRING, &blob_name,
			      DBUS_TYPE_INVALID);

	if (wpa_config_remove_blob(wpa_s->conf, blob_name)) {
		return dbus_message_new_error(message,
					      WPAS_DBUS_ERROR_BLOB_UNKNOWN,
					      "Blob id not set");
	}
	wpas_notify_blob_removed(wpa_s, blob_name);

	return reply;
}


/**
 * wpas_dbus_handler_flush_bss - Flush the BSS cache
 * @message: Pointer to incoming dbus message
 * @wpa_s: wpa_supplicant structure for a network interface
 * Returns: NULL
 *
 * Handler function for "FlushBSS" method call of network interface.
 */
DBusMessage * wpas_dbus_handler_flush_bss(DBusMessage *message,
					  struct wpa_supplicant *wpa_s)
{
	dbus_uint32_t age;

	dbus_message_get_args(message, NULL, DBUS_TYPE_UINT32, &age,
			      DBUS_TYPE_INVALID);

	if (age == 0)
		wpa_bss_flush(wpa_s);
	else
		wpa_bss_flush_by_age(wpa_s, age);

	return NULL;
}


/**
 * wpas_dbus_getter_capabilities - Return interface capabilities
 * @message: Pointer to incoming dbus message
 * @wpa_s: wpa_supplicant structure for a network interface
 * Returns: A dbus message containing a dict of strings
 *
 * Getter for "Capabilities" property of an interface.
*/ DBusMessage * wpas_dbus_getter_capabilities(DBusMessage *message, struct wpa_supplicant *wpa_s) { DBusMessage *reply = NULL; struct wpa_driver_capa capa; int res; DBusMessageIter iter, iter_dict; DBusMessageIter iter_dict_entry, iter_dict_val, iter_array, variant_iter; const char *scans[] = { "active", "passive", "ssid" }; if (message == NULL) reply = dbus_message_new(DBUS_MESSAGE_TYPE_SIGNAL); else reply = dbus_message_new_method_return(message); if (!reply) goto nomem; dbus_message_iter_init_append(reply, &iter); if (!dbus_message_iter_open_container(&iter, DBUS_TYPE_VARIANT, "a{sv}", &variant_iter)) goto nomem; if (!wpa_dbus_dict_open_write(&variant_iter, &iter_dict)) goto nomem; res = wpa_drv_get_capa(wpa_s, &capa); /***** pairwise cipher */ if (res < 0) { const char *args[] = {"ccmp", "tkip", "none"}; if (!wpa_dbus_dict_append_string_array( &iter_dict, "Pairwise", args, sizeof(args) / sizeof(char*))) goto nomem; } else { if (!wpa_dbus_dict_begin_string_array(&iter_dict, "Pairwise", &iter_dict_entry, &iter_dict_val, &iter_array)) goto nomem; if (capa.enc & WPA_DRIVER_CAPA_ENC_CCMP) { if (!wpa_dbus_dict_string_array_add_element( &iter_array, "ccmp")) goto nomem; } if (capa.enc & WPA_DRIVER_CAPA_ENC_TKIP) { if (!wpa_dbus_dict_string_array_add_element( &iter_array, "tkip")) goto nomem; } if (capa.key_mgmt & WPA_DRIVER_CAPA_KEY_MGMT_WPA_NONE) { if (!wpa_dbus_dict_string_array_add_element( &iter_array, "none")) goto nomem; } if (!wpa_dbus_dict_end_string_array(&iter_dict, &iter_dict_entry, &iter_dict_val, &iter_array)) goto nomem; } /***** group cipher */ if (res < 0) { const char *args[] = { "ccmp", "tkip", "wep104", "wep40" }; if (!wpa_dbus_dict_append_string_array( &iter_dict, "Group", args, sizeof(args) / sizeof(char*))) goto nomem; } else { if (!wpa_dbus_dict_begin_string_array(&iter_dict, "Group", &iter_dict_entry, &iter_dict_val, &iter_array)) goto nomem; if (capa.enc & WPA_DRIVER_CAPA_ENC_CCMP) { if (!wpa_dbus_dict_string_array_add_element( &iter_array, 
"ccmp")) goto nomem; } if (capa.enc & WPA_DRIVER_CAPA_ENC_TKIP) { if (!wpa_dbus_dict_string_array_add_element( &iter_array, "tkip")) goto nomem; } if (capa.enc & WPA_DRIVER_CAPA_ENC_WEP104) { if (!wpa_dbus_dict_string_array_add_element( &iter_array, "wep104")) goto nomem; } if (capa.enc & WPA_DRIVER_CAPA_ENC_WEP40) { if (!wpa_dbus_dict_string_array_add_element( &iter_array, "wep40")) goto nomem; } if (!wpa_dbus_dict_end_string_array(&iter_dict, &iter_dict_entry, &iter_dict_val, &iter_array)) goto nomem; } /***** key management */ if (res < 0) { const char *args[] = { "wpa-psk", "wpa-eap", "ieee8021x", "wpa-none", #ifdef CONFIG_WPS "wps", #endif /* CONFIG_WPS */ "none" }; if (!wpa_dbus_dict_append_string_array( &iter_dict, "KeyMgmt", args, sizeof(args) / sizeof(char*))) goto nomem; } else { if (!wpa_dbus_dict_begin_string_array(&iter_dict, "KeyMgmt", &iter_dict_entry, &iter_dict_val, &iter_array)) goto nomem; if (!wpa_dbus_dict_string_array_add_element(&iter_array, "none")) goto nomem; if (!wpa_dbus_dict_string_array_add_element(&iter_array, "ieee8021x")) goto nomem; if (capa.key_mgmt & (WPA_DRIVER_CAPA_KEY_MGMT_WPA | WPA_DRIVER_CAPA_KEY_MGMT_WPA2)) { if (!wpa_dbus_dict_string_array_add_element( &iter_array, "wpa-eap")) goto nomem; if (capa.key_mgmt & WPA_DRIVER_CAPA_KEY_MGMT_FT) if (!wpa_dbus_dict_string_array_add_element( &iter_array, "wpa-ft-eap")) goto nomem; /* TODO: Ensure that driver actually supports sha256 encryption. */ #ifdef CONFIG_IEEE80211W if (!wpa_dbus_dict_string_array_add_element( &iter_array, "wpa-eap-sha256")) goto nomem; #endif /* CONFIG_IEEE80211W */ } if (capa.key_mgmt & (WPA_DRIVER_CAPA_KEY_MGMT_WPA_PSK | WPA_DRIVER_CAPA_KEY_MGMT_WPA2_PSK)) { if (!wpa_dbus_dict_string_array_add_element( &iter_array, "wpa-psk")) goto nomem; if (capa.key_mgmt & WPA_DRIVER_CAPA_KEY_MGMT_FT_PSK) if (!wpa_dbus_dict_string_array_add_element( &iter_array, "wpa-ft-psk")) goto nomem; /* TODO: Ensure that driver actually supports sha256 encryption. 
*/ #ifdef CONFIG_IEEE80211W if (!wpa_dbus_dict_string_array_add_element( &iter_array, "wpa-psk-sha256")) goto nomem; #endif /* CONFIG_IEEE80211W */ } if (capa.key_mgmt & WPA_DRIVER_CAPA_KEY_MGMT_WPA_NONE) { if (!wpa_dbus_dict_string_array_add_element( &iter_array, "wpa-none")) goto nomem; } #ifdef CONFIG_WPS if (!wpa_dbus_dict_string_array_add_element(&iter_array, "wps")) goto nomem; #endif /* CONFIG_WPS */ if (!wpa_dbus_dict_end_string_array(&iter_dict, &iter_dict_entry, &iter_dict_val, &iter_array)) goto nomem; } /***** WPA protocol */ if (res < 0) { const char *args[] = { "rsn", "wpa" }; if (!wpa_dbus_dict_append_string_array( &iter_dict, "Protocol", args, sizeof(args) / sizeof(char*))) goto nomem; } else { if (!wpa_dbus_dict_begin_string_array(&iter_dict, "Protocol", &iter_dict_entry, &iter_dict_val, &iter_array)) goto nomem; if (capa.key_mgmt & (WPA_DRIVER_CAPA_KEY_MGMT_WPA2 | WPA_DRIVER_CAPA_KEY_MGMT_WPA2_PSK)) { if (!wpa_dbus_dict_string_array_add_element( &iter_array, "rsn")) goto nomem; } if (capa.key_mgmt & (WPA_DRIVER_CAPA_KEY_MGMT_WPA | WPA_DRIVER_CAPA_KEY_MGMT_WPA_PSK)) { if (!wpa_dbus_dict_string_array_add_element( &iter_array, "wpa")) goto nomem; } if (!wpa_dbus_dict_end_string_array(&iter_dict, &iter_dict_entry, &iter_dict_val, &iter_array)) goto nomem; } /***** auth alg */ if (res < 0) { const char *args[] = { "open", "shared", "leap" }; if (!wpa_dbus_dict_append_string_array( &iter_dict, "AuthAlg", args, sizeof(args) / sizeof(char*))) goto nomem; } else { if (!wpa_dbus_dict_begin_string_array(&iter_dict, "AuthAlg", &iter_dict_entry, &iter_dict_val, &iter_array)) goto nomem; if (capa.auth & (WPA_DRIVER_AUTH_OPEN)) { if (!wpa_dbus_dict_string_array_add_element( &iter_array, "open")) goto nomem; } if (capa.auth & (WPA_DRIVER_AUTH_SHARED)) { if (!wpa_dbus_dict_string_array_add_element( &iter_array, "shared")) goto nomem; } if (capa.auth & (WPA_DRIVER_AUTH_LEAP)) { if (!wpa_dbus_dict_string_array_add_element( &iter_array, "leap")) goto nomem; } if 
(!wpa_dbus_dict_end_string_array(&iter_dict, &iter_dict_entry, &iter_dict_val, &iter_array)) goto nomem; } /***** Scan */ if (!wpa_dbus_dict_append_string_array(&iter_dict, "Scan", scans, sizeof(scans) / sizeof(char *))) goto nomem; /***** Modes */ if (!wpa_dbus_dict_begin_string_array(&iter_dict, "Modes", &iter_dict_entry, &iter_dict_val, &iter_array)) goto nomem; if (!wpa_dbus_dict_string_array_add_element( &iter_array, "infrastructure")) goto nomem; if (!wpa_dbus_dict_string_array_add_element( &iter_array, "ad-hoc")) goto nomem; if (res >= 0) { if (capa.flags & (WPA_DRIVER_FLAGS_AP)) { if (!wpa_dbus_dict_string_array_add_element( &iter_array, "ap")) goto nomem; } if (capa.flags & (WPA_DRIVER_FLAGS_P2P_CAPABLE)) { if (!wpa_dbus_dict_string_array_add_element( &iter_array, "p2p")) goto nomem; } } if (!wpa_dbus_dict_end_string_array(&iter_dict, &iter_dict_entry, &iter_dict_val, &iter_array)) goto nomem; /***** Modes end */ if (!wpa_dbus_dict_close_write(&variant_iter, &iter_dict)) goto nomem; if (!dbus_message_iter_close_container(&iter, &variant_iter)) goto nomem; return reply; nomem: if (reply) dbus_message_unref(reply); return dbus_message_new_error(message, DBUS_ERROR_NO_MEMORY, NULL); } /** * wpas_dbus_getter_state - Get interface state * @message: Pointer to incoming dbus message * @wpa_s: wpa_supplicant structure for a network interface * Returns: A dbus message containing a STRING representing the current * interface state * * Getter for "State" property. 
*/
DBusMessage * wpas_dbus_getter_state(DBusMessage *message,
				     struct wpa_supplicant *wpa_s)
{
	DBusMessage *reply = NULL;
	const char *str_state;
	char *state_ls, *tmp;

	str_state = wpa_supplicant_state_txt(wpa_s->wpa_state);

	/* make state string lowercase to fit new DBus API convention */
	state_ls = tmp = os_strdup(str_state);
	if (!tmp) {
		return dbus_message_new_error(message, DBUS_ERROR_NO_MEMORY,
					      NULL);
	}
	while (*tmp) {
		*tmp = tolower((unsigned char) *tmp);
		tmp++;
	}

	reply = wpas_dbus_simple_property_getter(message, DBUS_TYPE_STRING,
						 &state_ls);

	os_free(state_ls);

	return reply;
}


/**
 * wpas_dbus_getter_scanning - Get interface scanning state
 * @message: Pointer to incoming dbus message
 * @wpa_s: wpa_supplicant structure for a network interface
 * Returns: A dbus message containing whether the interface is scanning
 *
 * Getter for "Scanning" property.
 */
DBusMessage * wpas_dbus_getter_scanning(DBusMessage *message,
					struct wpa_supplicant *wpa_s)
{
	dbus_bool_t scanning = wpa_s->scanning ? TRUE : FALSE;
	return wpas_dbus_simple_property_getter(message, DBUS_TYPE_BOOLEAN,
						&scanning);
}


/**
 * wpas_dbus_getter_ap_scan - Control roaming mode
 * @message: Pointer to incoming dbus message
 * @wpa_s: wpa_supplicant structure for a network interface
 * Returns: A message containing value of ap_scan variable
 *
 * Getter function for "ApScan" property.
 */
DBusMessage * wpas_dbus_getter_ap_scan(DBusMessage *message,
				       struct wpa_supplicant *wpa_s)
{
	dbus_uint32_t ap_scan = wpa_s->conf->ap_scan;
	return wpas_dbus_simple_property_getter(message, DBUS_TYPE_UINT32,
						&ap_scan);
}


/**
 * wpas_dbus_setter_ap_scan - Control roaming mode
 * @message: Pointer to incoming dbus message
 * @wpa_s: wpa_supplicant structure for a network interface
 * Returns: NULL
 *
 * Setter function for "ApScan" property.
*/ DBusMessage * wpas_dbus_setter_ap_scan(DBusMessage *message, struct wpa_supplicant *wpa_s) { DBusMessage *reply = NULL; dbus_uint32_t ap_scan; reply = wpas_dbus_simple_property_setter(message, DBUS_TYPE_UINT32, &ap_scan); if (reply) return reply; if (wpa_supplicant_set_ap_scan(wpa_s, ap_scan)) { return wpas_dbus_error_invalid_args( message, "ap_scan must equal 0, 1 or 2"); } return NULL; } /** * wpas_dbus_getter_bss_expire_age - Get BSS entry expiration age * @message: Pointer to incoming dbus message * @wpa_s: wpa_supplicant structure for a network interface * Returns: A message containing value of bss_expiration_age variable * * Getter function for "BSSExpireAge" property. */ DBusMessage * wpas_dbus_getter_bss_expire_age(DBusMessage *message, struct wpa_supplicant *wpa_s) { dbus_uint32_t expire_age = wpa_s->conf->bss_expiration_age; return wpas_dbus_simple_property_getter(message, DBUS_TYPE_UINT32, &expire_age); } /** * wpas_dbus_setter_bss_expire_age - Control BSS entry expiration age * @message: Pointer to incoming dbus message * @wpa_s: wpa_supplicant structure for a network interface * Returns: NULL * * Setter function for "BSSExpireAge" property. */ DBusMessage * wpas_dbus_setter_bss_expire_age(DBusMessage *message, struct wpa_supplicant *wpa_s) { DBusMessage *reply = NULL; dbus_uint32_t expire_age; reply = wpas_dbus_simple_property_setter(message, DBUS_TYPE_UINT32, &expire_age); if (reply) return reply; if (wpa_supplicant_set_bss_expiration_age(wpa_s, expire_age)) { return wpas_dbus_error_invalid_args( message, "BSSExpireAge must be >=10"); } return NULL; } /** * wpas_dbus_getter_bss_expire_count - Get BSS entry expiration scan count * @message: Pointer to incoming dbus message * @wpa_s: wpa_supplicant structure for a network interface * Returns: A message containing value of bss_expire_count variable * * Getter function for "BSSExpireCount" property. 
*/
DBusMessage * wpas_dbus_getter_bss_expire_count(DBusMessage *message,
						struct wpa_supplicant *wpa_s)
{
	dbus_uint32_t expire_count = wpa_s->conf->bss_expiration_scan_count;
	return wpas_dbus_simple_property_getter(message, DBUS_TYPE_UINT32,
						&expire_count);
}


/**
 * wpas_dbus_setter_bss_expire_count - Control BSS entry expiration scan count
 * @message: Pointer to incoming dbus message
 * @wpa_s: wpa_supplicant structure for a network interface
 * Returns: NULL
 *
 * Setter function for "BSSExpireCount" property.
 */
DBusMessage * wpas_dbus_setter_bss_expire_count(DBusMessage *message,
						struct wpa_supplicant *wpa_s)
{
	DBusMessage *reply = NULL;
	dbus_uint32_t expire_count;

	reply = wpas_dbus_simple_property_setter(message, DBUS_TYPE_UINT32,
						 &expire_count);
	if (reply)
		return reply;

	if (wpa_supplicant_set_bss_expiration_count(wpa_s, expire_count)) {
		return wpas_dbus_error_invalid_args(
			message, "BSSExpireCount must be >0");
	}
	return NULL;
}


/**
 * wpas_dbus_getter_country - Control country code
 * @message: Pointer to incoming dbus message
 * @wpa_s: wpa_supplicant structure for a network interface
 * Returns: A message containing value of country variable
 *
 * Getter function for "Country" property.
 */
DBusMessage * wpas_dbus_getter_country(DBusMessage *message,
				       struct wpa_supplicant *wpa_s)
{
	char country[3];
	char *str = country;

	country[0] = wpa_s->conf->country[0];
	country[1] = wpa_s->conf->country[1];
	country[2] = '\0';
	return wpas_dbus_simple_property_getter(message, DBUS_TYPE_STRING,
						&str);
}


/**
 * wpas_dbus_setter_country - Control country code
 * @message: Pointer to incoming dbus message
 * @wpa_s: wpa_supplicant structure for a network interface
 * Returns: NULL
 *
 * Setter function for "Country" property.
*/
DBusMessage * wpas_dbus_setter_country(DBusMessage *message,
				       struct wpa_supplicant *wpa_s)
{
	DBusMessage *reply = NULL;
	const char *country;

	reply = wpas_dbus_simple_property_setter(message, DBUS_TYPE_STRING,
						 &country);
	if (reply)
		return reply;

	if (!country[0] || !country[1])
		return wpas_dbus_error_invalid_args(message,
						    "invalid country code");

	if (wpa_s->drv_priv != NULL && wpa_drv_set_country(wpa_s, country)) {
		wpa_printf(MSG_DEBUG, "Failed to set country");
		return wpas_dbus_error_invalid_args(
			message, "failed to set country code");
	}

	wpa_s->conf->country[0] = country[0];
	wpa_s->conf->country[1] = country[1];
	return NULL;
}


/**
 * wpas_dbus_getter_ifname - Get interface name
 * @message: Pointer to incoming dbus message
 * @wpa_s: wpa_supplicant structure for a network interface
 * Returns: A dbus message containing a name of network interface
 * associated with wpa_s
 *
 * Getter for "Ifname" property.
 */
DBusMessage * wpas_dbus_getter_ifname(DBusMessage *message,
				      struct wpa_supplicant *wpa_s)
{
	const char *ifname = wpa_s->ifname;
	return wpas_dbus_simple_property_getter(message, DBUS_TYPE_STRING,
						&ifname);
}


/**
 * wpas_dbus_getter_driver - Get driver name
 * @message: Pointer to incoming dbus message
 * @wpa_s: wpa_supplicant structure for a network interface
 * Returns: A dbus message containing a name of network interface
 * driver associated with wpa_s
 *
 * Getter for "Driver" property.
*/ DBusMessage * wpas_dbus_getter_driver(DBusMessage *message, struct wpa_supplicant *wpa_s) { const char *driver; if (wpa_s->driver == NULL || wpa_s->driver->name == NULL) { wpa_printf(MSG_DEBUG, "wpas_dbus_getter_driver[dbus]: " "wpa_s has no driver set"); return wpas_dbus_error_unknown_error(message, NULL); } driver = wpa_s->driver->name; return wpas_dbus_simple_property_getter(message, DBUS_TYPE_STRING, &driver); } /** * wpas_dbus_getter_current_bss - Get current bss object path * @message: Pointer to incoming dbus message * @wpa_s: wpa_supplicant structure for a network interface * Returns: A dbus message containing a DBus object path to * current BSS * * Getter for "CurrentBSS" property. */ DBusMessage * wpas_dbus_getter_current_bss(DBusMessage *message, struct wpa_supplicant *wpa_s) { DBusMessage *reply; char path_buf[WPAS_DBUS_OBJECT_PATH_MAX], *bss_obj_path = path_buf; if (wpa_s->current_bss) os_snprintf(bss_obj_path, WPAS_DBUS_OBJECT_PATH_MAX, "%s/" WPAS_DBUS_NEW_BSSIDS_PART "/%u", wpa_s->dbus_new_path, wpa_s->current_bss->id); else os_snprintf(bss_obj_path, WPAS_DBUS_OBJECT_PATH_MAX, "/"); reply = wpas_dbus_simple_property_getter(message, DBUS_TYPE_OBJECT_PATH, &bss_obj_path); return reply; } /** * wpas_dbus_getter_current_network - Get current network object path * @message: Pointer to incoming dbus message * @wpa_s: wpa_supplicant structure for a network interface * Returns: A dbus message containing a DBus object path to * current network * * Getter for "CurrentNetwork" property. 
*/
DBusMessage * wpas_dbus_getter_current_network(DBusMessage *message,
					       struct wpa_supplicant *wpa_s)
{
	DBusMessage *reply;
	char path_buf[WPAS_DBUS_OBJECT_PATH_MAX], *net_obj_path = path_buf;

	if (wpa_s->current_ssid)
		os_snprintf(net_obj_path, WPAS_DBUS_OBJECT_PATH_MAX,
			    "%s/" WPAS_DBUS_NEW_NETWORKS_PART "/%u",
			    wpa_s->dbus_new_path, wpa_s->current_ssid->id);
	else
		os_snprintf(net_obj_path, WPAS_DBUS_OBJECT_PATH_MAX, "/");

	reply = wpas_dbus_simple_property_getter(message,
						 DBUS_TYPE_OBJECT_PATH,
						 &net_obj_path);

	return reply;
}


/**
 * wpas_dbus_getter_current_auth_mode - Get current authentication type
 * @message: Pointer to incoming dbus message
 * @wpa_s: wpa_supplicant structure for a network interface
 * Returns: A dbus message containing a string indicating the current
 * authentication type.
 *
 * Getter for "CurrentAuthMode" property.
 */
DBusMessage * wpas_dbus_getter_current_auth_mode(DBusMessage *message,
						 struct wpa_supplicant *wpa_s)
{
	DBusMessage *reply;
	const char *eap_mode;
	const char *auth_mode;
	char eap_mode_buf[WPAS_DBUS_AUTH_MODE_MAX];

	if (wpa_s->wpa_state != WPA_COMPLETED) {
		auth_mode = "INACTIVE";
	} else if (wpa_s->key_mgmt == WPA_KEY_MGMT_IEEE8021X ||
		   wpa_s->key_mgmt == WPA_KEY_MGMT_IEEE8021X_NO_WPA) {
		eap_mode = wpa_supplicant_get_eap_mode(wpa_s);
		os_snprintf(eap_mode_buf, WPAS_DBUS_AUTH_MODE_MAX,
			    "EAP-%s", eap_mode);
		auth_mode = eap_mode_buf;
	} else {
		auth_mode = wpa_key_mgmt_txt(wpa_s->key_mgmt,
					     wpa_s->current_ssid->proto);
	}

	reply = wpas_dbus_simple_property_getter(message, DBUS_TYPE_STRING,
						 &auth_mode);

	return reply;
}


/**
 * wpas_dbus_getter_bridge_ifname - Get bridge interface name
 * @message: Pointer to incoming dbus message
 * @wpa_s: wpa_supplicant structure for a network interface
 * Returns: A dbus message containing a name of bridge network
 * interface associated with wpa_s
 *
 * Getter for "BridgeIfname" property.
*/ DBusMessage * wpas_dbus_getter_bridge_ifname(DBusMessage *message, struct wpa_supplicant *wpa_s) { const char *bridge_ifname = NULL; bridge_ifname = wpa_s->bridge_ifname; if (bridge_ifname == NULL) { wpa_printf(MSG_ERROR, "wpas_dbus_getter_bridge_ifname[dbus]: " "wpa_s has no bridge interface name set"); return wpas_dbus_error_unknown_error(message, NULL); } return wpas_dbus_simple_property_getter(message, DBUS_TYPE_STRING, &bridge_ifname); } /** * wpas_dbus_getter_bsss - Get array of BSSs objects * @message: Pointer to incoming dbus message * @wpa_s: wpa_supplicant structure for a network interface * Returns: a dbus message containing an array of all known BSS objects * dbus paths * * Getter for "BSSs" property. */ DBusMessage * wpas_dbus_getter_bsss(DBusMessage *message, struct wpa_supplicant *wpa_s) { DBusMessage *reply = NULL; struct wpa_bss *bss; char **paths; unsigned int i = 0; paths = os_zalloc(wpa_s->num_bss * sizeof(char *)); if (!paths) { return dbus_message_new_error(message, DBUS_ERROR_NO_MEMORY, NULL); } /* Loop through scan results and append each result's object path */ dl_list_for_each(bss, &wpa_s->bss_id, struct wpa_bss, list_id) { paths[i] = os_zalloc(WPAS_DBUS_OBJECT_PATH_MAX); if (paths[i] == NULL) { reply = dbus_message_new_error(message, DBUS_ERROR_NO_MEMORY, NULL); goto out; } /* Construct the object path for this BSS. */ os_snprintf(paths[i++], WPAS_DBUS_OBJECT_PATH_MAX, "%s/" WPAS_DBUS_NEW_BSSIDS_PART "/%u", wpa_s->dbus_new_path, bss->id); } reply = wpas_dbus_simple_array_property_getter(message, DBUS_TYPE_OBJECT_PATH, paths, wpa_s->num_bss); out: while (i) os_free(paths[--i]); os_free(paths); return reply; } /** * wpas_dbus_getter_networks - Get array of networks objects * @message: Pointer to incoming dbus message * @wpa_s: wpa_supplicant structure for a network interface * Returns: a dbus message containing an array of all configured * networks dbus object paths. * * Getter for "Networks" property. 
*/ DBusMessage * wpas_dbus_getter_networks(DBusMessage *message, struct wpa_supplicant *wpa_s) { DBusMessage *reply = NULL; struct wpa_ssid *ssid; char **paths; unsigned int i = 0, num = 0; if (wpa_s->conf == NULL) { wpa_printf(MSG_ERROR, "wpas_dbus_getter_networks[dbus]: " "An error occurred getting networks list."); return wpas_dbus_error_unknown_error(message, NULL); } for (ssid = wpa_s->conf->ssid; ssid; ssid = ssid->next) if (!network_is_persistent_group(ssid)) num++; paths = os_zalloc(num * sizeof(char *)); if (!paths) { return dbus_message_new_error(message, DBUS_ERROR_NO_MEMORY, NULL); } /* Loop through configured networks and append object path of each */ for (ssid = wpa_s->conf->ssid; ssid; ssid = ssid->next) { if (network_is_persistent_group(ssid)) continue; paths[i] = os_zalloc(WPAS_DBUS_OBJECT_PATH_MAX); if (paths[i] == NULL) { reply = dbus_message_new_error(message, DBUS_ERROR_NO_MEMORY, NULL); goto out; } /* Construct the object path for this network. */ os_snprintf(paths[i++], WPAS_DBUS_OBJECT_PATH_MAX, "%s/" WPAS_DBUS_NEW_NETWORKS_PART "/%d", wpa_s->dbus_new_path, ssid->id); } reply = wpas_dbus_simple_array_property_getter(message, DBUS_TYPE_OBJECT_PATH, paths, num); out: while (i) os_free(paths[--i]); os_free(paths); return reply; } /** * wpas_dbus_getter_blobs - Get all blobs defined for this interface * @message: Pointer to incoming dbus message * @wpa_s: wpa_supplicant structure for a network interface * Returns: a dbus message containing a dictionary of pairs (blob_name, blob) * * Getter for "Blobs" property. 
*/ DBusMessage * wpas_dbus_getter_blobs(DBusMessage *message, struct wpa_supplicant *wpa_s) { DBusMessage *reply = NULL; DBusMessageIter iter, variant_iter, dict_iter, entry_iter, array_iter; struct wpa_config_blob *blob; if (message == NULL) reply = dbus_message_new(DBUS_MESSAGE_TYPE_SIGNAL); else reply = dbus_message_new_method_return(message); if (!reply) return dbus_message_new_error(message, DBUS_ERROR_NO_MEMORY, NULL); dbus_message_iter_init_append(reply, &iter); if (!dbus_message_iter_open_container(&iter, DBUS_TYPE_VARIANT, "a{say}", &variant_iter) || !dbus_message_iter_open_container(&variant_iter, DBUS_TYPE_ARRAY, "{say}", &dict_iter)) { dbus_message_unref(reply); return dbus_message_new_error(message, DBUS_ERROR_NO_MEMORY, NULL); } blob = wpa_s->conf->blobs; while (blob) { if (!dbus_message_iter_open_container(&dict_iter, DBUS_TYPE_DICT_ENTRY, NULL, &entry_iter) || !dbus_message_iter_append_basic(&entry_iter, DBUS_TYPE_STRING, &(blob->name)) || !dbus_message_iter_open_container(&entry_iter, DBUS_TYPE_ARRAY, DBUS_TYPE_BYTE_AS_STRING, &array_iter) || !dbus_message_iter_append_fixed_array(&array_iter, DBUS_TYPE_BYTE, &(blob->data), blob->len) || !dbus_message_iter_close_container(&entry_iter, &array_iter) || !dbus_message_iter_close_container(&dict_iter, &entry_iter)) { dbus_message_unref(reply); return dbus_message_new_error(message, DBUS_ERROR_NO_MEMORY, NULL); } blob = blob->next; } if (!dbus_message_iter_close_container(&variant_iter, &dict_iter) || !dbus_message_iter_close_container(&iter, &variant_iter)) { dbus_message_unref(reply); return dbus_message_new_error(message, DBUS_ERROR_NO_MEMORY, NULL); } return reply; } /** * wpas_dbus_getter_bss_bssid - Return the BSSID of a BSS * @message: Pointer to incoming dbus message * @bss: a pair of interface describing structure and bss's id * Returns: a dbus message containing the bssid for the requested bss * * Getter for "BSSID" property. 
 */
DBusMessage * wpas_dbus_getter_bss_bssid(DBusMessage *message,
					 struct bss_handler_args *bss)
{
	struct wpa_bss *res = wpa_bss_get_id(bss->wpa_s, bss->id);

	if (!res) {
		wpa_printf(MSG_ERROR, "wpas_dbus_getter_bss_bssid[dbus]: no "
			   "bss with id %d found", bss->id);
		return NULL;
	}

	return wpas_dbus_simple_array_property_getter(message, DBUS_TYPE_BYTE,
						      res->bssid, ETH_ALEN);
}


/**
 * wpas_dbus_getter_bss_ssid - Return the SSID of a BSS
 * @message: Pointer to incoming dbus message
 * @bss: a pair of interface describing structure and bss's id
 * Returns: a dbus message containing the ssid for the requested bss
 *
 * Getter for "SSID" property.
 */
DBusMessage * wpas_dbus_getter_bss_ssid(DBusMessage *message,
					struct bss_handler_args *bss)
{
	struct wpa_bss *res = wpa_bss_get_id(bss->wpa_s, bss->id);

	if (!res) {
		wpa_printf(MSG_ERROR, "wpas_dbus_getter_bss_ssid[dbus]: no "
			   "bss with id %d found", bss->id);
		return NULL;
	}

	return wpas_dbus_simple_array_property_getter(message, DBUS_TYPE_BYTE,
						      res->ssid,
						      res->ssid_len);
}


/**
 * wpas_dbus_getter_bss_privacy - Return the privacy flag of a BSS
 * @message: Pointer to incoming dbus message
 * @bss: a pair of interface describing structure and bss's id
 * Returns: a dbus message containing the privacy flag value of requested bss
 *
 * Getter for "Privacy" property.
 */
DBusMessage * wpas_dbus_getter_bss_privacy(DBusMessage *message,
					   struct bss_handler_args *bss)
{
	struct wpa_bss *res = wpa_bss_get_id(bss->wpa_s, bss->id);
	dbus_bool_t privacy;

	if (!res) {
		wpa_printf(MSG_ERROR, "wpas_dbus_getter_bss_privacy[dbus]: no "
			   "bss with id %d found", bss->id);
		return NULL;
	}

	privacy = (res->caps & IEEE80211_CAP_PRIVACY) ? TRUE : FALSE;
	return wpas_dbus_simple_property_getter(message, DBUS_TYPE_BOOLEAN,
						&privacy);
}


/**
 * wpas_dbus_getter_bss_mode - Return the mode of a BSS
 * @message: Pointer to incoming dbus message
 * @bss: a pair of interface describing structure and bss's id
 * Returns: a dbus message containing the mode of requested bss
 *
 * Getter for "Mode" property.
 */
DBusMessage * wpas_dbus_getter_bss_mode(DBusMessage *message,
					struct bss_handler_args *bss)
{
	struct wpa_bss *res = wpa_bss_get_id(bss->wpa_s, bss->id);
	const char *mode;

	if (!res) {
		wpa_printf(MSG_ERROR, "wpas_dbus_getter_bss_mode[dbus]: no "
			   "bss with id %d found", bss->id);
		return NULL;
	}

	if (res->caps & IEEE80211_CAP_IBSS)
		mode = "ad-hoc";
	else
		mode = "infrastructure";

	return wpas_dbus_simple_property_getter(message, DBUS_TYPE_STRING,
						&mode);
}


/**
 * wpas_dbus_getter_bss_level - Return the signal strength of a BSS
 * @message: Pointer to incoming dbus message
 * @bss: a pair of interface describing structure and bss's id
 * Returns: a dbus message containing the signal strength of requested bss
 *
 * Getter for "Level" property.
 */
DBusMessage * wpas_dbus_getter_bss_signal(DBusMessage *message,
					  struct bss_handler_args *bss)
{
	struct wpa_bss *res = wpa_bss_get_id(bss->wpa_s, bss->id);

	if (!res) {
		wpa_printf(MSG_ERROR, "wpas_dbus_getter_bss_signal[dbus]: no "
			   "bss with id %d found", bss->id);
		return NULL;
	}

	return wpas_dbus_simple_property_getter(message, DBUS_TYPE_INT16,
						&res->level);
}


/**
 * wpas_dbus_getter_bss_frequency - Return the frequency of a BSS
 * @message: Pointer to incoming dbus message
 * @bss: a pair of interface describing structure and bss's id
 * Returns: a dbus message containing the frequency of requested bss
 *
 * Getter for "Frequency" property.
 */
DBusMessage * wpas_dbus_getter_bss_frequency(DBusMessage *message,
					     struct bss_handler_args *bss)
{
	struct wpa_bss *res = wpa_bss_get_id(bss->wpa_s, bss->id);

	if (!res) {
		wpa_printf(MSG_ERROR, "wpas_dbus_getter_bss_frequency[dbus]: "
			   "no bss with id %d found", bss->id);
		return NULL;
	}

	return wpas_dbus_simple_property_getter(message, DBUS_TYPE_UINT16,
						&res->freq);
}


static int cmp_u8s_desc(const void *a, const void *b)
{
	return (*(u8 *) b - *(u8 *) a);
}


/**
 * wpas_dbus_getter_bss_rates - Return available bit rates of a BSS
 * @message: Pointer to incoming dbus message
 * @bss: a pair of interface describing structure and bss's id
 * Returns: a dbus message containing sorted array of bit rates
 *
 * Getter for "Rates" property.
 */
DBusMessage * wpas_dbus_getter_bss_rates(DBusMessage *message,
					 struct bss_handler_args *bss)
{
	DBusMessage *reply;
	struct wpa_bss *res = wpa_bss_get_id(bss->wpa_s, bss->id);
	u8 *ie_rates = NULL;
	u32 *real_rates;
	int rates_num, i;

	if (!res) {
		wpa_printf(MSG_ERROR, "wpas_dbus_getter_bss_rates[dbus]: "
			   "no bss with id %d found", bss->id);
		return NULL;
	}

	rates_num = wpa_bss_get_bit_rates(res, &ie_rates);
	if (rates_num < 0)
		return NULL;

	qsort(ie_rates, rates_num, 1, cmp_u8s_desc);

	real_rates = os_malloc(sizeof(u32) * rates_num);
	if (!real_rates) {
		os_free(ie_rates);
		return dbus_message_new_error(message, DBUS_ERROR_NO_MEMORY,
					      NULL);
	}

	for (i = 0; i < rates_num; i++)
		real_rates[i] = ie_rates[i] * 500000;

	reply = wpas_dbus_simple_array_property_getter(message,
						       DBUS_TYPE_UINT32,
						       real_rates, rates_num);

	os_free(ie_rates);
	os_free(real_rates);
	return reply;
}


static DBusMessage * wpas_dbus_get_bss_security_prop(
	DBusMessage *message, struct wpa_ie_data *ie_data)
{
	DBusMessage *reply;
	DBusMessageIter iter, iter_dict, variant_iter;
	const char *group;
	const char *pairwise[2]; /* max 2 pairwise ciphers is supported */
	const char *key_mgmt[7]; /* max 7 key managements may be supported */
	int n;

	if (message == NULL)
		reply = dbus_message_new(DBUS_MESSAGE_TYPE_SIGNAL);
	else
		reply = dbus_message_new_method_return(message);
	if (!reply)
		goto nomem;

	dbus_message_iter_init_append(reply, &iter);

	if (!dbus_message_iter_open_container(&iter, DBUS_TYPE_VARIANT,
					      "a{sv}", &variant_iter))
		goto nomem;

	if (!wpa_dbus_dict_open_write(&variant_iter, &iter_dict))
		goto nomem;

	/* KeyMgmt */
	n = 0;
	if (ie_data->key_mgmt & WPA_KEY_MGMT_PSK)
		key_mgmt[n++] = "wpa-psk";
	if (ie_data->key_mgmt & WPA_KEY_MGMT_FT_PSK)
		key_mgmt[n++] = "wpa-ft-psk";
	if (ie_data->key_mgmt & WPA_KEY_MGMT_PSK_SHA256)
		key_mgmt[n++] = "wpa-psk-sha256";
	if (ie_data->key_mgmt & WPA_KEY_MGMT_IEEE8021X)
		key_mgmt[n++] = "wpa-eap";
	if (ie_data->key_mgmt & WPA_KEY_MGMT_FT_IEEE8021X)
		key_mgmt[n++] = "wpa-ft-eap";
	if (ie_data->key_mgmt & WPA_KEY_MGMT_IEEE8021X_SHA256)
		key_mgmt[n++] = "wpa-eap-sha256";
	if (ie_data->key_mgmt & WPA_KEY_MGMT_NONE)
		key_mgmt[n++] = "wpa-none";

	if (!wpa_dbus_dict_append_string_array(&iter_dict, "KeyMgmt",
					       key_mgmt, n))
		goto nomem;

	/* Group */
	switch (ie_data->group_cipher) {
	case WPA_CIPHER_WEP40:
		group = "wep40";
		break;
	case WPA_CIPHER_TKIP:
		group = "tkip";
		break;
	case WPA_CIPHER_CCMP:
		group = "ccmp";
		break;
	case WPA_CIPHER_WEP104:
		group = "wep104";
		break;
	default:
		group = "";
		break;
	}

	if (!wpa_dbus_dict_append_string(&iter_dict, "Group", group))
		goto nomem;

	/* Pairwise */
	n = 0;
	if (ie_data->pairwise_cipher & WPA_CIPHER_TKIP)
		pairwise[n++] = "tkip";
	if (ie_data->pairwise_cipher & WPA_CIPHER_CCMP)
		pairwise[n++] = "ccmp";

	if (!wpa_dbus_dict_append_string_array(&iter_dict, "Pairwise",
					       pairwise, n))
		goto nomem;

	/* Management group (RSN only) */
	if (ie_data->proto == WPA_PROTO_RSN) {
		switch (ie_data->mgmt_group_cipher) {
#ifdef CONFIG_IEEE80211W
		case WPA_CIPHER_AES_128_CMAC:
			group = "aes128cmac";
			break;
#endif /* CONFIG_IEEE80211W */
		default:
			group = "";
			break;
		}

		if (!wpa_dbus_dict_append_string(&iter_dict, "MgmtGroup",
						 group))
			goto nomem;
	}

	if (!wpa_dbus_dict_close_write(&variant_iter, &iter_dict))
		goto nomem;
	if (!dbus_message_iter_close_container(&iter, &variant_iter))
		goto nomem;

	return reply;

nomem:
	if (reply)
		dbus_message_unref(reply);

	return dbus_message_new_error(message, DBUS_ERROR_NO_MEMORY, NULL);
}


/**
 * wpas_dbus_getter_bss_wpa - Return the WPA options of a BSS
 * @message: Pointer to incoming dbus message
 * @bss: a pair of interface describing structure and bss's id
 * Returns: a dbus message containing the WPA options of requested bss
 *
 * Getter for "WPA" property.
 */
DBusMessage * wpas_dbus_getter_bss_wpa(DBusMessage *message,
				       struct bss_handler_args *bss)
{
	struct wpa_bss *res = wpa_bss_get_id(bss->wpa_s, bss->id);
	struct wpa_ie_data wpa_data;
	const u8 *ie;

	if (!res) {
		wpa_printf(MSG_ERROR, "wpas_dbus_getter_bss_wpa[dbus]: no "
			   "bss with id %d found", bss->id);
		return NULL;
	}

	os_memset(&wpa_data, 0, sizeof(wpa_data));
	ie = wpa_bss_get_vendor_ie(res, WPA_IE_VENDOR_TYPE);
	if (ie) {
		if (wpa_parse_wpa_ie(ie, 2 + ie[1], &wpa_data) < 0)
			return wpas_dbus_error_unknown_error(message,
							     "invalid WPA IE");
	}

	return wpas_dbus_get_bss_security_prop(message, &wpa_data);
}


/**
 * wpas_dbus_getter_bss_rsn - Return the RSN options of a BSS
 * @message: Pointer to incoming dbus message
 * @bss: a pair of interface describing structure and bss's id
 * Returns: a dbus message containing the RSN options of requested bss
 *
 * Getter for "RSN" property.
 */
DBusMessage * wpas_dbus_getter_bss_rsn(DBusMessage *message,
				       struct bss_handler_args *bss)
{
	struct wpa_bss *res = wpa_bss_get_id(bss->wpa_s, bss->id);
	struct wpa_ie_data wpa_data;
	const u8 *ie;

	if (!res) {
		wpa_printf(MSG_ERROR, "wpas_dbus_getter_bss_rsn[dbus]: no "
			   "bss with id %d found", bss->id);
		return NULL;
	}

	os_memset(&wpa_data, 0, sizeof(wpa_data));
	ie = wpa_bss_get_ie(res, WLAN_EID_RSN);
	if (ie) {
		if (wpa_parse_wpa_ie(ie, 2 + ie[1], &wpa_data) < 0)
			return wpas_dbus_error_unknown_error(message,
							     "invalid RSN IE");
	}

	return wpas_dbus_get_bss_security_prop(message, &wpa_data);
}


/**
 * wpas_dbus_getter_bss_ies - Return all IEs of a BSS
 * @message: Pointer to incoming dbus message
 * @bss: a pair of interface describing structure and bss's id
 * Returns: a dbus message containing IEs byte array
 *
 * Getter for "IEs" property.
 */
DBusMessage * wpas_dbus_getter_bss_ies(DBusMessage *message,
				       struct bss_handler_args *bss)
{
	struct wpa_bss *res = wpa_bss_get_id(bss->wpa_s, bss->id);

	if (!res) {
		wpa_printf(MSG_ERROR, "wpas_dbus_getter_bss_ies[dbus]: no "
			   "bss with id %d found", bss->id);
		return NULL;
	}

	return wpas_dbus_simple_array_property_getter(message, DBUS_TYPE_BYTE,
						      res + 1, res->ie_len);
}


/**
 * wpas_dbus_getter_enabled - Check whether network is enabled or disabled
 * @message: Pointer to incoming dbus message
 * @net: wpa_supplicant structure for a network interface
 * and wpa_ssid structure for a configured network
 * Returns: DBus message with boolean indicating state of configured network
 * or DBus error on failure
 *
 * Getter for "enabled" property of a configured network.
 */
DBusMessage * wpas_dbus_getter_enabled(DBusMessage *message,
				       struct network_handler_args *net)
{
	dbus_bool_t enabled = net->ssid->disabled ? FALSE : TRUE;

	return wpas_dbus_simple_property_getter(message, DBUS_TYPE_BOOLEAN,
						&enabled);
}


/**
 * wpas_dbus_setter_enabled - Mark a configured network as enabled or disabled
 * @message: Pointer to incoming dbus message
 * @net: wpa_supplicant structure for a network interface
 * and wpa_ssid structure for a configured network
 * Returns: NULL indicating success or DBus error on failure
 *
 * Setter for "Enabled" property of a configured network.
 */
DBusMessage * wpas_dbus_setter_enabled(DBusMessage *message,
				       struct network_handler_args *net)
{
	DBusMessage *reply = NULL;
	struct wpa_supplicant *wpa_s;
	struct wpa_ssid *ssid;
	dbus_bool_t enable;

	reply = wpas_dbus_simple_property_setter(message, DBUS_TYPE_BOOLEAN,
						 &enable);
	if (reply)
		return reply;

	wpa_s = net->wpa_s;
	ssid = net->ssid;

	if (enable)
		wpa_supplicant_enable_network(wpa_s, ssid);
	else
		wpa_supplicant_disable_network(wpa_s, ssid);

	return NULL;
}


/**
 * wpas_dbus_getter_network_properties - Get options for a configured network
 * @message: Pointer to incoming dbus message
 * @net: wpa_supplicant structure for a network interface and
 * wpa_ssid structure for a configured network
 * Returns: DBus message with network properties or DBus error on failure
 *
 * Getter for "Properties" property of a configured network.
 */
DBusMessage * wpas_dbus_getter_network_properties(
	DBusMessage *message, struct network_handler_args *net)
{
	DBusMessage *reply = NULL;
	DBusMessageIter iter, variant_iter, dict_iter;
	char **iterator;
	char **props = wpa_config_get_all(net->ssid, 1);

	if (!props)
		return dbus_message_new_error(message, DBUS_ERROR_NO_MEMORY,
					      NULL);

	if (message == NULL)
		reply = dbus_message_new(DBUS_MESSAGE_TYPE_SIGNAL);
	else
		reply = dbus_message_new_method_return(message);
	if (!reply) {
		reply = dbus_message_new_error(message, DBUS_ERROR_NO_MEMORY,
					       NULL);
		goto out;
	}

	dbus_message_iter_init_append(reply, &iter);

	if (!dbus_message_iter_open_container(&iter, DBUS_TYPE_VARIANT,
					      "a{sv}", &variant_iter) ||
	    !wpa_dbus_dict_open_write(&variant_iter, &dict_iter)) {
		dbus_message_unref(reply);
		reply = dbus_message_new_error(message, DBUS_ERROR_NO_MEMORY,
					       NULL);
		goto out;
	}

	iterator = props;
	while (*iterator) {
		if (!wpa_dbus_dict_append_string(&dict_iter, *iterator,
						 *(iterator + 1))) {
			dbus_message_unref(reply);
			reply = dbus_message_new_error(message,
						       DBUS_ERROR_NO_MEMORY,
						       NULL);
			goto out;
		}
		iterator += 2;
	}

	if (!wpa_dbus_dict_close_write(&variant_iter, &dict_iter) ||
	    !dbus_message_iter_close_container(&iter, &variant_iter)) {
		dbus_message_unref(reply);
		reply = dbus_message_new_error(message, DBUS_ERROR_NO_MEMORY,
					       NULL);
		goto out;
	}

out:
	iterator = props;
	while (*iterator) {
		os_free(*iterator);
		iterator++;
	}
	os_free(props);
	return reply;
}


/**
 * wpas_dbus_setter_network_properties - Set options for a configured network
 * @message: Pointer to incoming dbus message
 * @net: wpa_supplicant structure for a network interface and
 * wpa_ssid structure for a configured network
 * Returns: NULL indicating success or DBus error on failure
 *
 * Setter for "Properties" property of a configured network.
 */
DBusMessage * wpas_dbus_setter_network_properties(
	DBusMessage *message, struct network_handler_args *net)
{
	struct wpa_ssid *ssid = net->ssid;
	DBusMessage *reply = NULL;
	DBusMessageIter iter, variant_iter;

	dbus_message_iter_init(message, &iter);

	dbus_message_iter_next(&iter);
	dbus_message_iter_next(&iter);
	dbus_message_iter_recurse(&iter, &variant_iter);

	reply = set_network_properties(message, net->wpa_s, ssid,
				       &variant_iter);
	if (reply)
		wpa_printf(MSG_DEBUG, "dbus control interface couldn't set "
			   "network properties");

	return reply;
}
# Data Structures and Algorithms with JavaScript

by Michael McMillan

Copyright © 2014 Michael McMillan. All rights reserved. Printed in the United States of America.

Published by O'Reilly Media, Inc., 1005 Gravenstein Highway North, Sebastopol, CA 95472.

O'Reilly books may be purchased for educational, business, or sales promotional use. Online editions are also available for most titles (_<http://safaribooksonline.com>_). For more information, contact our corporate/institutional sales department: 800-998-9938 or _corporate@oreilly.com_.

* Editors: Brian MacDonald and Meghan Blanchette
* Production Editor: Melanie Yarbrough
* Copyeditor: Becca Freed
* Proofreader: Amanda Kersey
* Indexer: Ellen Troutman-Zaig
* Interior Designer: David Futato
* Cover Designer: Ellie Volkhausen
* Illustrator: Rebecca Demarest
* March 2014: First Edition

# Revision History for the First Edition

* 2014-03-06: First Release
* 2015-10-21: Second Release

See <http://oreilly.com/catalog/errata.csp?isbn=9781449364939> for release details.

The O'Reilly logo is a registered trademark of O'Reilly Media, Inc. _Data Structures and Algorithms with JavaScript_, the cover image, and related trade dress are trademarks of O'Reilly Media, Inc.

While the publisher and the author have used good faith efforts to ensure that the information and instructions contained in this work are accurate, the publisher and the author disclaim all responsibility for errors or omissions, including without limitation responsibility for damages resulting from the use of or reliance on this work. Use of the information and instructions contained in this work is at your own risk. If any code samples or other technology this work contains or describes is subject to open source licenses or the intellectual property rights of others, it is your responsibility to ensure that your use thereof complies with such licenses and/or rights.
978-1-449-36493-9

[LSI]

# Preface

Over the past few years, JavaScript has been used more and more as a server-side computer programming language owing to platforms such as Node.js and SpiderMonkey. Now that JavaScript programming is moving out of the browser, programmers will find they need to use many of the tools provided by more conventional languages, such as C++ and Java. Among these tools are classic data structures such as linked lists, stacks, queues, and graphs, as well as classic algorithms for sorting and searching data. This book discusses how to implement these data structures and algorithms for server-side JavaScript programming.

JavaScript programmers will find this book useful because it discusses how to implement data structures and algorithms within the constraints that JavaScript places on them, such as arrays that are really objects, overly global variables, and a prototype-based object system. JavaScript has an unfair reputation as a "bad" programming language, but this book demonstrates how you can use JavaScript to develop efficient and effective data structures and algorithms using the language's "good parts."

# Why Study Data Structures and Algorithms

I am assuming that many of you reading this book do not have a formal education in computer science. If you do, then you already know why studying data structures and algorithms is important. If you do not have a degree in computer science or haven't studied these topics formally, you should read this section.

The computer scientist Niklaus Wirth wrote a computer programming textbook titled _Algorithms + Data Structures = Programs_ (Prentice-Hall). That title is the essence of computer programming. Any computer program that goes beyond the trivial "Hello, world!" will usually require some type of structure to manage the data the program is written to manipulate, along with one or more algorithms for translating the data from its input form to its output form.
For many programmers who didn't study computer science in school, the only data structure they are familiar with is the array. Arrays are great for some problems, but for many complex problems, they are simply not sophisticated enough. Most experienced programmers will admit that for many programming problems, once they come up with the proper data structure, the algorithms needed to solve the problem are easier to design and implement.

An example of a data structure that leads to efficient algorithms is the binary search tree (BST). A binary search tree is designed so that it is easy to find the minimum and maximum values of a set of data, and it keeps its data organized so that searches are far more efficient than scanning through unorganized data. Programmers unfamiliar with BSTs will instead probably use a simpler data structure that ends up being less efficient.

Studying algorithms is important because there is always more than one algorithm that can be used to solve a problem, and knowing which ones are the most efficient is important for the productive programmer. For example, there are at least six or seven ways to sort a list of data, but knowing that the Quicksort algorithm is more efficient than the selection sort algorithm will lead to a much more efficient sorting process. Similarly, it's fairly easy to implement a sequential or linear search algorithm for a list of data, but knowing that the binary search algorithm can be much more efficient than the sequential search will lead to a better program.

The comprehensive study of data structures and algorithms teaches you not only which data structures and which algorithms are the most efficient, but you also learn how to decide which data structures and which algorithms are the most appropriate for the problem at hand.
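The sequential-versus-binary-search comparison above can be sketched in a few lines. This sketch is ours, not taken from the book (the function names `seqSearch` and `binSearch` are our own); `console.log` stands in for the SpiderMonkey shell's `print()`:

```javascript
// Sequential search: examine each element in turn until a match is found.
function seqSearch(arr, value) {
    for (var i = 0; i < arr.length; ++i) {
        if (arr[i] === value) {
            return i;
        }
    }
    return -1;
}

// Binary search: requires sorted data, but halves the search range each step.
function binSearch(arr, value) {
    var low = 0;
    var high = arr.length - 1;
    while (low <= high) {
        var mid = Math.floor((low + high) / 2);
        if (arr[mid] < value) {
            low = mid + 1;
        } else if (arr[mid] > value) {
            high = mid - 1;
        } else {
            return mid;
        }
    }
    return -1;
}

var nums = [3, 7, 12, 22, 100];
console.log(seqSearch(nums, 22)); // displays 3
console.log(binSearch(nums, 22)); // displays 3
```

Both calls return the same index, but on a sorted list of a million elements the binary search needs at most about 20 comparisons, while the sequential search may need a million.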
There will often be trade-offs involved when writing a program, especially in the JavaScript environment, and knowing the ins and outs of the various data structures and algorithms covered in this book will help you make the proper decision for any particular programming problem you are trying to solve.

# What You Need for This Book

The programming environment we use in this book is the JavaScript shell based on the SpiderMonkey JavaScript engine. Chapter 1 provides instructions on downloading the shell for your environment. Other shells will work as well, such as the Node.js JavaScript shell, though you will have to make some translations for the programs in the book to work in Node. Other than the shell, the only thing you need is a text editor for writing your JavaScript programs.

# Organization of the Book

* Chapter 1 presents an overview of the JavaScript language, or at least the features of the JavaScript language used in this book. This chapter also demonstrates through use the programming style used throughout the other chapters.
* Chapter 2 discusses the most common data structure in computer programming: the array, which is native to JavaScript.
* Chapter 3 introduces the first implemented data structure: the list.
* Chapter 4 covers the stack data structure. Stacks are used throughout computer science in both compiler and operating system implementations.
* Chapter 5 discusses queue data structures. Queues are an abstraction of the lines you stand in at a bank or the grocery store. Queues are used extensively in simulation software where data has to be lined up before it is processed.
* Chapter 6 covers linked lists. A linked list is a modification of the list data structure, where each element is a separate object linked to the objects on either side of it. Linked lists are efficient when you need to perform multiple insertions and deletions in your program.
* Chapter 7 demonstrates how to build and use dictionaries, which are data structures that store data as key-value pairs.
* One way to implement a dictionary is to use a hash table, and Chapter 8 discusses how to build hash tables and the hash algorithms that are used to store data in the table.
* Chapter 9 covers the set data structure. Sets are often not covered in data structure books, but they can be useful for storing data that is not supposed to have duplicates in the data set.
* Binary trees and binary search trees are the subject of Chapter 10. As mentioned earlier, binary search trees are useful for storing data that needs to be stored originally in sorted form.
* Chapter 11 covers graphs and graph algorithms. Graphs are used to represent data such as the nodes of a computer network or the cities on a map.
* Chapter 12 moves from data structures to algorithms and discusses various algorithms for sorting data, including both simple sorting algorithms that are easy to implement but are not efficient for large data sets, and more complex algorithms that are appropriate for larger data sets.
* Chapter 13 also covers algorithms, this time searching algorithms such as sequential search and binary search.
* The last chapter of the book, Chapter 14, discusses a couple more advanced algorithms for working with data—dynamic programming and greedy algorithms. These algorithms are useful for solving hard problems where a more traditional algorithm is either too slow or too hard to implement. We examine some classic problems for both dynamic programming and greedy algorithms in the chapter.

# Conventions Used in This Book

The following typographical conventions are used in this book:

_Italic_

Indicates new terms, URLs, email addresses, filenames, and file extensions.
`Constant width`

Used for program listings, as well as within paragraphs to refer to program elements such as variable or function names, databases, data types, environment variables, statements, and keywords.

**`Constant width bold`**

Shows commands or other text that should be typed literally by the user.

_`Constant width italic`_

Shows text that should be replaced with user-supplied values or by values determined by context.

# Using Code Examples

Supplemental material (code examples, exercises, etc.) is available for download at _https://github.com/oreillymedia/data_structures_and_algorithms_using_javascript_.

This book is here to help you get your job done. In general, if example code is offered with this book, you may use it in your programs and documentation. You do not need to contact us for permission unless you're reproducing a significant portion of the code. For example, writing a program that uses several chunks of code from this book does not require permission. Selling or distributing a CD-ROM of examples from O'Reilly books does require permission. Answering a question by citing this book and quoting example code does not require permission. Incorporating a significant amount of example code from this book into your product's documentation does require permission.

We appreciate, but do not require, attribution. An attribution usually includes the title, author, publisher, and ISBN. For example: "_Data Structures and Algorithms Using JavaScript_ by Michael McMillan (O'Reilly). Copyright 2014 Michael McMillan, 978-1-449-36493-9."

If you feel your use of code examples falls outside fair use or the permission given above, feel free to contact us at _permissions@oreilly.com_.

# Safari® Books Online

###### Note

_Safari Books Online_ is an on-demand digital library that delivers expert content in both book and video form from the world's leading authors in technology and business.
Technology professionals, software developers, web designers, and business and creative professionals use Safari Books Online as their primary resource for research, problem solving, learning, and certification training.

Safari Books Online offers a range of plans and pricing for enterprise, government, education, and individuals.

Members have access to thousands of books, training videos, and prepublication manuscripts in one fully searchable database from publishers like O'Reilly Media, Prentice Hall Professional, Addison-Wesley Professional, Microsoft Press, Sams, Que, Peachpit Press, Focal Press, Cisco Press, John Wiley & Sons, Syngress, Morgan Kaufmann, IBM Redbooks, Packt, Adobe Press, FT Press, Apress, Manning, New Riders, McGraw-Hill, Jones & Bartlett, Course Technology, and hundreds more. For more information about Safari Books Online, please visit us online.

# How to Contact Us

Please address comments and questions concerning this book to the publisher:

* O'Reilly Media, Inc.
* 1005 Gravenstein Highway North
* Sebastopol, CA 95472
* 800-998-9938 (in the United States or Canada)
* 707-829-0515 (international or local)
* 707-829-0104 (fax)

We have a web page for this book, where we list errata, examples, and any additional information. You can access this page at _http://bit.ly/data-structures-and-algorithms-js_.

To comment or ask technical questions about this book, send email to _bookquestions@oreilly.com_.

For more information about our books, courses, conferences, and news, see our website at _http://www.oreilly.com_.
Find us on Facebook: _http://facebook.com/oreilly_

Follow us on Twitter: _http://twitter.com/oreillymedia_

Watch us on YouTube: _http://www.youtube.com/oreillymedia_

# Content Updates

## October 20, 2015

* Fixed typos, ambiguities, and other issues regarding clarity
* Removed the iterative version of Mergesort and replaced it with a recursive one
* Cleaned up and reorganized the repo; all code from the book is now there, and keyed to text
* Added parallel repo of all the code modified to run with Node

# Acknowledgments

There are always lots of people to thank when you've finished writing a book. I'd like to thank my acquisition editor, Simon St. Laurent, for believing in this book and getting me started writing it. Meghan Blanchette worked hard to keep me on schedule, and if I went off schedule, it definitely wasn't her fault. Brian MacDonald worked extremely hard to make this book as understandable as possible, and he helped make several parts of the text much clearer than I had written them originally. I also want to thank my technical reviewers for reading all the text as well as the code, and for pointing out places where both my prose and my code needed to be clearer. My colleague and illustrator, Cynthia Fehrenbach, did an outstanding job translating my chicken scratchings into crisp, clear illustrations, and she deserves extra praise for her willingness to redraw several illustrations at the very last minute. Finally, I'd like to thank all the people at Mozilla for designing an excellent JavaScript engine and shell and writing some excellent documentation for using both the language and the shell.

# Chapter 1. The JavaScript Programming Environment and Model

This chapter describes the JavaScript programming environment and the programming constructs we'll use in this book to define the various data structures and algorithms examined.

# The JavaScript Environment

JavaScript has historically been a programming language that ran only inside a web browser.
However, in the past few years, there has been the development of JavaScript programming environments that can be run from the desktop, or similarly, from a server. In this book we use one such environment: the JavaScript shell that is part of Mozilla's comprehensive JavaScript environment known as SpiderMonkey.

To download the JavaScript shell, navigate to the Nightly Build web page. Scroll to the bottom of the page and pick the download that matches your computer system. Once you've downloaded the program, you have two choices for using the shell. You can use it either in interactive mode or to interpret JavaScript programs stored in a file.

To use the shell in interactive mode, type the command `js` at a command prompt. The shell prompt, `js>`, will appear and you are ready to start entering JavaScript expressions and statements. The following is a typical interaction with the shell:

```
js> 1
1
js> 1+2
3
js> var num = 1;
js> num*124
124
js> for (var i = 1; i < 6; ++i) { print(i); }
1
2
3
4
5
js>
```

You can enter arithmetic expressions and the shell will immediately evaluate them. You can write any legal JavaScript statement and the shell will immediately evaluate it as well. The interactive shell is great for exploring JavaScript statements to discover how they work. To leave the shell when you are finished, type the command `quit()`.

The other way to use the shell is to have it interpret complete JavaScript programs. This is how we will use the shell throughout the rest of the book.

To use the shell to interpret programs, you first have to create a file that contains a JavaScript program. You can use any text editor, making sure you save the file as plain text. The only requirement is that the file must have a _.js_ extension. The shell has to see this extension to know the file is a JavaScript program. Once you have your file saved, you interpret it by typing the `js` command followed by the full filename of your program.
For example, if you saved the `for` loop code fragment that's shown earlier in a file named _loop.js_, you would enter the following:

```
c:\js>js loop.js
```

which would produce the following output:

```
1
2
3
4
5
```

After the program is executed, control is returned to the command prompt.

# JavaScript Programming Practices

In this section we discuss how we use JavaScript. We realize that programmers have different styles and practices when it comes to writing programs, and we want to describe ours here at the beginning of the book so that you'll understand the more complex code we present in the rest of the book. This isn't a tutorial on using JavaScript but is just a guide to how we use the fundamental constructs of the language.

## Declaring and Initializing Variables

JavaScript variables are global by default and, strictly speaking, don't have to be declared before use. When a JavaScript variable is initialized without first being declared with the `var` keyword, it becomes a global variable. In this book, however, we follow the convention used with compiled languages such as C++ and Java by declaring all variables before their first use. The added benefit to doing this is that variables declared with `var` inside a function are created as local variables. We will talk more about variable scope later in this chapter.

###### Note

You can use _strict mode_ to ensure variables are declared before use. Insert the following line before any other statement:

```
'use strict';
```

or:

```
"use strict";
```

To declare a variable in JavaScript, use the keyword `var` followed by a variable name, and optionally, an assignment expression.
Here are some examples:

```
var number;
var name;
var rate = 1.2;
var greeting = "Hello, world!";
var flag = false;
```

## Arithmetic and Math Library Functions in JavaScript

JavaScript utilizes the standard arithmetic operators:

* \+ (addition)
* \- (subtraction)
* \* (multiplication)
* / (division)
* % (modulo)

JavaScript also has a math library you can use for advanced functions such as square root, absolute value, and the trigonometric functions. The arithmetic operators follow the standard order of operations, and parentheses can be used to modify that order.

Example 1-1 shows some examples of performing arithmetic in JavaScript, as well as examples of using several of the mathematical functions.

##### Example 1-1. Arithmetic and math functions in JavaScript

```
var x = 3;
var y = 1.1;
print(x + y);
print(x * y);
print((x+y)*(x-y));
var z = 9;
print(Math.sqrt(z));
print(Math.abs(y/x));
```

The output from this program is:

```
4.1
3.3000000000000003
7.789999999999999
3
0.3666666666666667
```

If you don't want or need the precision shown above, you can format a number to a fixed precision:

```
var x = 3;
var y = 1.1;
var z = x * y;
print(z.toFixed(2)); // displays 3.30
```

## Decision Constructs

Decision constructs allow our programs to make decisions on what programming statements to execute based on a Boolean expression. The two decision constructs we use in this book are the `if` statement and the `switch` statement.

The `if` statement comes in three forms:

* The simple `if` statement
* The `if-else` statement
* The `if-else if` statement

Example 1-2 shows how to write a simple `if` statement.

##### Example 1-2. The simple `if` statement

```
var mid = 25;
var high = 50;
var low = 1;
var current = 13;
var found = -1;
if (current < mid) {
   mid = (current-low) / 2;
}
```

Example 1-3 demonstrates the `if-else` statement.

##### Example 1-3. The `if-else` statement

```
var mid = 25;
var high = 50;
var low = 1;
var current = 13;
var found = -1;
if (current < mid) {
   mid = (current-low) / 2;
}
else {
   mid = (current+high) / 2;
}
```

Example 1-4 illustrates the `if-else if` statement.

##### Example 1-4. The `if-else if` statement

```
var mid = 25;
var high = 50;
var low = 1;
var current = 13;
var found = -1;
if (current < mid) {
   mid = (current-low) / 2;
}
else if (current > mid) {
   mid = (current+high) / 2;
}
else {
   found = current;
}
```

The other decision structure we use in this book is the `switch` statement. This statement provides a cleaner, more structured construction when there's a set of simple decisions to make. Example 1-5 demonstrates how the `switch` statement works.

##### Example 1-5. The `switch` statement

```
putstr("Enter a month number: ");
var monthNum = readline();
var monthName;
switch (monthNum) {
   case "1":
      monthName = "January";
      break;
   case "2":
      monthName = "February";
      break;
   case "3":
      monthName = "March";
      break;
   case "4":
      monthName = "April";
      break;
   case "5":
      monthName = "May";
      break;
   case "6":
      monthName = "June";
      break;
   case "7":
      monthName = "July";
      break;
   case "8":
      monthName = "August";
      break;
   case "9":
      monthName = "September";
      break;
   case "10":
      monthName = "October";
      break;
   case "11":
      monthName = "November";
      break;
   case "12":
      monthName = "December";
      break;
   default:
      print("Bad input");
}
print(monthName);
```

Is this the most efficient way to solve this problem? No, but it does a great job of demonstrating how the `switch` statement works. One major difference between the JavaScript `switch` statement and `switch` statements in other programming languages is that the expression that is being tested in the statement can be of any data type, as opposed to an integral data type, as required by languages such as C++ and Java.
In fact, you'll notice in the previous example that we use the month numbers as strings, rather than converting them to numbers, since we can compare strings using the `switch` statement in JavaScript. ## Repetition Constructs Many of the algorithms we study in this book are repetitive in nature. We use two repetition constructs in this book—the `while` loop and the `for` loop. When we want to execute a set of statements while a condition is true, we use a `while` loop. Example 1-6 demonstrates how the `while` loop works. ##### Example 1-6. The `while` loop var number = 1; var sum = 0; while (number < 11) { sum += number; ++number; } print(sum); // displays 55 When we want to execute a set of statements a specified number of times, we use a `for` loop. Example 1-7 uses a `for` loop to sum the integers 1 through 10. ##### Example 1-7. Summing integers using a `for` loop var sum = 0; for (var number = 1; number < 11; number++) { sum += number; } print(sum); // displays 55 `for` loops are also used frequently to access the elements of an array, as shown in Example 1-8. ##### Example 1-8. Using a `for` loop with an array var numbers = [3, 7, 12, 22, 100]; var sum = 0; for (var i = 0; i < numbers.length; ++i) { sum += numbers[i]; } print(sum); // displays 144 ## Functions JavaScript provides the means to define both value-returning functions and functions that don't return values (sometimes called _subprocedures_ or _void functions_). Example 1-9 demonstrates how value-returning functions are defined and called in JavaScript. ##### Example 1-9. A value-returning function function factorial(number) { var product = 1; for (var i = number; i >= 1; --i) { product *= i; } return product; } print(factorial(4)); // displays 24 print(factorial(5)); // displays 120 print(factorial(10)); // displays 3628800 Example 1-10 illustrates how to write a function that is used not for its return value, but for the operations it performs. ##### Example 1-10.
A subprocedure or void function in JavaScript function curve(arr, amount) { for (var i = 0; i < arr.length; ++i) { arr[i] += amount; } } var grades = [77, 73, 74, 81, 90]; curve(grades, 5); print(grades); // displays 82,78,79,86,95 All function parameters in JavaScript are passed by value, and there are no reference parameters. However, there are reference objects, such as arrays, which are passed to functions by reference, as was demonstrated in Example 1-10. ## Variable Scope The _scope_ of a variable refers to where in a program a variable's value can be accessed. The scope of a variable in JavaScript is defined as _function scope_. This means that a variable's value is visible within the function definition where the variable is declared and defined and within any functions that are nested within that function. ###### Note Newer versions of ECMAScript do provide the capability of scoping variables at the block level, using the `let` statement. However, the support for `let` is still limited, and not critical to the core purpose of the book, so we'll stick with widely supported, basic JavaScript functionality. When a variable is defined outside of a function, in the main program, the variable is said to have _global_ scope, which means its value can be accessed by any part of a program, including functions. The following short program demonstrates how global scope works: function showScope() { return scope; } var scope = "global"; print(scope); // displays "global" print(showScope()); // displays "global" The function `showScope()` can access the variable `scope` because `scope` is a global variable. Global variables can be declared at any place in a program, either before or after function definitions. 
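The definition of function scope above also says that a variable is visible within any functions nested inside the function where it is declared. Here is a minimal sketch of that case (the function names `outer` and `inner` are invented for illustration, and `console.log()` stands in for the JavaScript Shell's `print()`):

```javascript
// A variable declared in an outer function is visible inside
// any function nested within it (function scope).
function outer() {
  var message = "declared in outer";
  function inner() {
    return message; // visible here because inner() is nested in outer()
  }
  return inner();
}
console.log(outer()); // displays "declared in outer"
```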
Now watch what happens when we define a second `scope` variable within the `showScope()` function: function showScope() { var scope = "local"; return scope; } var scope = "global"; print(scope); // displays "global" print(showScope()); // displays "local" The `scope` variable defined in the `showScope()` function has local scope, while the `scope` variable defined in the main program is a global variable. Even though the two variables have the same name, their scopes are different, and their values are different when accessed within the area of the program where they are defined. All of this behavior is normal and expected. However, it can all change if you leave off the keyword `var` in the variable definitions. JavaScript allows you to define variables without using the `var` keyword, but when you do, that variable automatically has global scope, even if defined within a function. Example 1-11 demonstrates the ramifications of leaving off the `var` keyword when defining variables. ##### Example 1-11. The ramification of overusing global variables function showScope() { scope = "local"; return scope; } scope = "global"; print(scope); // displays "global" print(showScope()); // displays "local" print(scope); // displays "local" In Example 1-11, because the `scope` variable inside the function is not declared with the `var` keyword, when the string `"local"` is assigned to the variable, we are actually changing the value of the `scope` variable in the main program. You should always begin every definition of a variable with the `var` keyword to keep things like this from happening. Earlier, we mentioned that JavaScript has function scope. This means that JavaScript does not have _block_ scope, unlike many other modern programming languages. 
With block scope, you can declare a variable within a block of code and the variable is not accessible outside of that block, as you typically see with a C++ or Java `for` loop: for (int i = 1; i <=10; ++i) { cout << "Hello, world!" << endl; } Even though JavaScript does not have block scope, we pretend like it does when we write `for` loops in this book: for (var i = 1; i <= 10; ++i ) { print("Hello, world!"); } We don't want to be the cause of you picking up bad programming habits. ## Recursion Function calls can be made recursively in JavaScript. The `factorial()` function defined earlier can also be written recursively, like this: function factorial(number) { if (number == 1) { return number; } else { return number * factorial(number-1); } } print(factorial(5)); When a function is called recursively, the results of the function's computation are temporarily suspended while the recursion is in progress. To demonstrate how this works, here is a diagram for the `factorial()` function when the argument passed to the function is 5: 5 * factorial(4) 5 * 4 * factorial(3) 5 * 4 * 3 * factorial(2) 5 * 4 * 3 * 2 * factorial(1) 5 * 4 * 3 * 2 * 1 5 * 4 * 3 * 2 5 * 4 * 6 5 * 24 120 Several of the algorithms discussed in this book use recursion. For the most part, JavaScript is capable of handling fairly deep recursive calls (this is an example of a relatively shallow recursive call); but in one or two situations, an algorithm requires a deeper recursive call than JavaScript can handle and we instead pursue an iterative solution to the algorithm. You should keep in mind that any function that uses recursion can be rewritten in an iterative manner. # Objects and Object-Oriented Programming The data structures discussed in this book are implemented as objects. JavaScript provides many different ways for creating and using objects. 
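One of those other ways, for example, is the object literal, which defines an object's properties and functions directly between braces. This is only an illustrative sketch (the `account` example here is invented, and it is not the constructor-function style used in the rest of this book; `console.log()` stands in for the shell's `print()`):

```javascript
// Creating an object with an object literal instead of a
// constructor function.
var account = {
  balance: 500,                  // property
  deposit: function(amount) {    // function
    this.balance += amount;
  },
  toString: function() {         // function
    return "Balance: " + this.balance;
  }
};
account.deposit(250);
console.log(account.toString()); // displays "Balance: 750"
```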
In this section we demonstrate the techniques used in this book for creating objects and for creating and using an object's functions and properties. Objects are created by defining a constructor function that includes declarations for an object's properties and functions, followed by definitions for the functions. Here is the constructor function for a checking account object: function Checking(amount) { this.balance = amount; // property this.deposit = deposit; // function this.withdraw = withdraw; // function this.toString = toString; // function } The `this` keyword is used to tie each function and property to an object instance. Now let's look at the function definitions for the preceding declarations: function deposit(amount) { this.balance += amount; } function withdraw(amount) { if (amount <= this.balance) { this.balance -= amount; } else { print("Insufficient funds"); } } function toString() { return "Balance: " + this.balance; } Again, we have to use the `this` keyword with the `balance` property in order for the interpreter to know which object's `balance` property we are referencing. Example 1-12 provides the complete definition for the checking object along with a test program. ##### Example 1-12.
Defining and using the `Checking` object function Checking(amount) { this.balance = amount; this.deposit = deposit; this.withdraw = withdraw; this.toString = toString; } function deposit(amount) { this.balance += amount; } function withdraw(amount) { if (amount <= this.balance) { this.balance -= amount; } else { print("Insufficient funds"); } } function toString() { return "Balance: " + this.balance; } var account = new Checking(500); account.deposit(1000); print(account.toString()); // Balance: 1500 account.withdraw(750); print(account.toString()); // Balance: 750 account.withdraw(800); // displays "Insufficient funds" print(account.toString()); // Balance: 750 # Summary This chapter provided an overview of the way we use JavaScript throughout the rest of the book. We try to follow a programming style that is common to many programmers who are accustomed to using C-style languages such as C++ and Java. Of course, JavaScript has many conventions that do not follow the rules of those languages, and we certainly point those out (such as the declaration and use of variables) and show you the correct way to use the language. We also follow as many of the good JavaScript programming practices outlined by authors such as John Resig, Douglas Crockford, and others as we can. As responsible programmers, we need to keep in mind that it is just as important that our programs be readable by humans as it is that they be correctly executed by computers. # Chapter 2. Arrays The array is the most common data structure in computer programming. Every programming language includes some form of array. Because arrays are built-in, they are usually very efficient and are considered good choices for many data storage purposes. In this chapter we explore how arrays work in JavaScript and when to use them.
# JavaScript Arrays Defined The standard definition for an array is a linear collection of elements, where the elements can be accessed via indices, which are usually integers used to compute offsets. Most computer programming languages have these types of arrays. JavaScript, on the other hand, has a different type of array altogether. A JavaScript array is actually a specialized type of JavaScript object, with the indices being property names that can be integers used to represent offsets. The specification states that array indices are converted to strings before storage, but most JavaScript engines perform optimizations under the hood to make the operation more efficient. ###### Note One of the better descriptions of JavaScript arrays and how they work is "Arrays in JavaScript" by Dr. Axel Rauschmayer. While JavaScript arrays are, strictly speaking, JavaScript objects, they are specialized objects categorized internally as arrays. The `Array` is one of the recognized JavaScript object types, and as such, there is a set of properties and functions you can use with arrays. # Using Arrays Arrays in JavaScript are very flexible. There are several different ways to create arrays, access array elements, and perform tasks such as searching and sorting the elements stored in an array. More recent versions of JavaScript (JavaScript 1.5 and up) also include array functions that allow programmers to work with arrays using functional programming techniques. We demonstrate many of these techniques in the following sections. ## Creating Arrays The simplest way to create an array is by declaring an array variable using an array literal `[]`: var numbers = []; When you create an array in this manner, you have an array with a length of 0.
You can verify this by calling the built-in `length` property: print(numbers.length); // displays 0 Another way to create an array is to declare an array variable with a set of elements inside the `[]` operator: var numbers = [1,2,3,4,5]; print(numbers.length); // displays 5 You can also create an array by calling the `Array` constructor: var numbers = new Array(); print(numbers.length); // displays 0 You can call the `Array` constructor with a set of elements as arguments to the constructor: var numbers = new Array(1,2,3,4,5); print(numbers.length); // displays 5 Finally, you can create an array by calling the `Array` constructor with a single argument specifying the length of the array: var numbers = new Array(10); print(numbers.length); // displays 10 Unlike many other programming languages, but common for most scripting languages, JavaScript array elements do not all have to be of the same type: var objects = [1, "Joe", true, null]; We can verify that an object is an array by calling the `Array.isArray()` function, like this: var numbers = 3; var arr = [7,4,1776]; print(Array.isArray(numbers)); // displays false print(Array.isArray(arr)); // displays true We've covered several techniques for creating arrays. As for which technique is best, most JavaScript experts recommend using the `[]` operator, saying it is more efficient than calling the `Array` constructor (see _JavaScript: The Definitive Guide_ [O'Reilly] and _JavaScript: The Good Parts_ [O'Reilly]). ## Accessing and Writing Array Elements Data is assigned to array elements using the `[]` operator in an assignment statement. For example, the following loop assigns the values 1 through 100 to an array: var nums = []; for (var i = 0; i < 100; ++i) { nums[i] = i+1; } Array elements are also accessed using the `[]` operator.
For example: var numbers = [1,2,3,4,5]; var sum = numbers[0] + numbers[1] + numbers[2] + numbers[3] + numbers[4]; print(sum); // displays 15 Of course, accessing all the elements of an array sequentially is much easier using a `for` loop: var numbers = [1,2,3,5,8,13,21]; var sum = 0; for (var i = 0; i < numbers.length; ++i) { sum += numbers[i]; } print(sum); // displays 53 Notice that the `for` loop is controlled using the `length` property rather than an integer literal. Because JavaScript arrays are objects, they can grow beyond the size specified when they were created. By using the `length` property, which returns the number of elements currently in the array, you can guarantee that your loop processes all array elements. ## Creating Arrays from Strings Arrays can be created as the result of calling the `split()` function on a string. This function breaks up a string at a common delimiter, such as a space for each word, and creates an array consisting of the individual parts of the string. The following short program demonstrates how the `split()` function works on a simple string: var sentence = "the quick brown fox jumped over the lazy dog"; var words = sentence.split(" "); for (var i = 0; i < words.length; ++i) { print("word " + i + ": " + words[i]); } The output from this program is: word 0: the word 1: quick word 2: brown word 3: fox word 4: jumped word 5: over word 6: the word 7: lazy word 8: dog ## Aggregate Array Operations There are several aggregate operations you can perform on arrays. First, you can assign one array to another array: var nums = []; for (var i = 0; i < 10; ++i) { nums[i] = i+1; } var samenums = nums; However, when you assign one array to another array, you are assigning a reference to the assigned array. When you make a change to the original array, that change is reflected in the other array as well. 
The following code fragment demonstrates how this works: var nums = []; for (var i = 0; i < 100; ++i) { nums[i] = i+1; } var samenums = nums; nums[0] = 400; print(samenums[0]); // displays 400 This is called a _shallow copy_. The new array simply points to the original array's elements. A better alternative is to make a _deep copy_, so that each of the original array's elements is actually copied to the new array's elements. An effective way to do this is to create a function to perform the task: function copy(arr1, arr2) { for (var i = 0; i < arr1.length; ++i) { arr2[i] = arr1[i]; } } Now the following code fragment produces the expected result: var nums = []; for (var i = 0; i < 100; ++i) { nums[i] = i+1; } var samenums = []; copy(nums, samenums); nums[0] = 400; print(samenums[0]); // displays 1 Note, though, that this type of copy works only if the array values are scalar values, not objects or arrays themselves. In the following code, one array's elements are copied to another, but the first two elements are also arrays. The third is a scalar value. The last element of the second array element in the original array is changed, as is the scalar value. The second array is then printed out using `console.log()` (rather than `print()`) in order to display the actual structure. var test = [[1,2,3],[4,5,8],10]; var test2 = []; for (var i = 0; i < test.length; i++) { test2[i] = test[i]; } test[1][2] = 6; test[2] = 20; console.log(test2); The result is: [[1, 2, 3], [4, 5, 6], 10] The array element change is reflected in the copy, but not the scalar value change. I used `console.log()` because of the nature of the JavaScript Shell program. Another aggregate operation you can perform with arrays is displaying the contents of an array using a function such as `print()`.
For example: var nums = [1,2,3,4,5]; print(nums); will produce the following output: 1,2,3,4,5 This output may not be particularly useful, but you can use it to display the contents of an array when all you need is a simple list. # Accessor Functions JavaScript provides a set of functions you can use to access the elements of an array. These functions, called _accessor_ functions, return some representation of the target array as their return values. ## Searching for a Value One of the most commonly used accessor functions is `indexOf()`, which looks to see if the argument passed to the function is found in the array. If the argument is contained in the array, the function returns the index position of the argument. If the argument is not found in the array, the function returns -1. Here is an example: var names = ["David", "Cynthia", "Raymond", "Clayton", "Jennifer"]; putstr("Enter a name to search for: "); var name = readline(); var position = names.indexOf(name); if (position >= 0) { print("Found " + name + " at position " + position); } else { print(name + " not found in array."); } If you run this program and enter **`Cynthia`** , the program will output: Found Cynthia at position 1 If you enter **`Joe`** , the output is: Joe not found in array. If you have multiple occurrences of the same data in an array, the `indexOf()` function will always return the position of the first occurrence. A similar function, `lastIndexOf()`, will return the position of the last occurrence of the argument in the array, or -1 if the argument isn't found. 
Here is an example: var names = ["David", "Mike", "Cynthia", "Raymond", "Clayton", "Mike", "Jennifer"]; var name = "Mike"; var firstPos = names.indexOf(name); print("First found " + name + " at position " + firstPos); var lastPos = names.lastIndexOf(name); print("Last found " + name + " at position " + lastPos); The output from this program is: First found Mike at position 1 Last found Mike at position 5 ## String Representations of Arrays There are two functions that return string representations of an array: `join()` and `toString()`. Both functions return a string containing the elements of the array delimited by commas. Here are some examples: var names = ["David", "Cynthia", "Raymond", "Clayton", "Mike", "Jennifer"]; var namestr = names.join(); print(namestr); // David,Cynthia,Raymond,Clayton,Mike,Jennifer namestr = names.toString(); print(namestr); // David,Cynthia,Raymond,Clayton,Mike,Jennifer When you call the `print()` function with an array name, it automatically calls the `toString()` function for that array: print(names); // David,Cynthia,Raymond,Clayton,Mike,Jennifer ## Creating New Arrays from Existing Arrays There are two accessor functions that allow you create new arrays from existing arrays: `concat()` and `splice()`. The `concat()` function allows you to put together two or more arrays to create a new array, and the `splice()` function allows you to create a new array from a subset of an existing array. Let's look first at how `concat()` works. The function is called from an existing array, and its argument is another existing array. The argument is concatenated to the end of the array calling `concat()`. 
The following program demonstrates how `concat()` works: var cisDept = ["Mike", "Clayton", "Terrill", "Danny", "Jennifer"]; var dmpDept = ["Raymond", "Cynthia", "Bryan"]; var itDiv = cisDept.concat(dmpDept); print(itDiv); itDiv = dmpDept.concat(cisDept); print(itDiv); The program outputs: Mike,Clayton,Terrill,Danny,Jennifer,Raymond,Cynthia,Bryan Raymond,Cynthia,Bryan,Mike,Clayton,Terrill,Danny,Jennifer The first output line shows the data from the `cisDept` array first, and the second output line shows the data from the `dmpDept` array first. The `splice()` function creates a new array from the elements it removes from an existing array. The arguments to the function are the starting position for taking the splice and the number of elements to take from the existing array. Here is how the method works: var itDiv = ["Mike","Clayton","Terrill","Raymond","Cynthia","Danny","Jennifer"]; var dmpDept = itDiv.splice(3,3); var cisDept = itDiv; print(dmpDept); // Raymond,Cynthia,Danny print(cisDept); // Mike,Clayton,Terrill,Jennifer See the Mozilla Developer Network website for more information. # Mutator Functions JavaScript has a set of _mutator_ functions that allow you to modify the contents of an array without referencing the individual elements. These functions often make hard techniques easy, as you'll see below. ## Adding Elements to an Array There are two mutator functions for adding elements to an array: `push()` and `unshift()`. The `push()` function adds an element to the end of an array: var nums = [1,2,3,4,5]; print(nums); // 1,2,3,4,5 nums.push(6); print(nums); // 1,2,3,4,5,6 Using `push()` is more intuitive than using the `length` property to extend an array: var nums = [1,2,3,4,5]; print(nums); // 1,2,3,4,5 nums[nums.length] = 6; print(nums); // 1,2,3,4,5,6 Adding data to the beginning of an array is much harder than adding data to the end of an array.
To do so without the benefit of a mutator function, each existing element of the array has to be shifted up one position before the new data is added. Here is some code to illustrate this scenario: var nums = [2,3,4,5]; var newnum = 1; var N = nums.length; for (var i = N; i >= 0; --i) { nums[i] = nums[i-1]; } nums[0] = newnum; print(nums); // 1,2,3,4,5 This code becomes more inefficient as the number of elements stored in the array increases. The mutator function for adding array elements to the beginning of an array is `unshift()`. Here is how the function works: var nums = [2,3,4,5]; print(nums); // 2,3,4,5 var newnum = 1; nums.unshift(newnum); print(nums); // 1,2,3,4,5 nums = [3,4,5]; nums.unshift(newnum,2); print(nums); // 1,2,3,4,5 The second call to `unshift()` demonstrates that you can add multiple elements to an array with one call to the function. ## Removing Elements from an Array Removing an element from the end of an array is easy using the `pop()` mutator function: var nums = [1,2,3,4,5,9]; nums.pop(); print(nums); // 1,2,3,4,5 Without mutator functions, removing elements from the beginning of an array requires shifting elements toward the beginning of the array, causing the same inefficiency we see when adding elements to the beginning of an array: var nums = [9,1,2,3,4,5]; print(nums); for (var i = 0; i < nums.length; ++i) { nums[i] = nums[i+1]; } print(nums); // 1,2,3,4,5, Besides the fact that we have to shift the elements down to collapse the array, we are also left with an extra element. We know this because of the extra comma we see when we display the array contents. If we used `console.log()` we'd see that the last element is now `undefined`. The mutator function we need to remove an element from the beginning of an array is `shift()`. Here is how the function works: var nums = [9,1,2,3,4,5]; nums.shift(); print(nums); // 1,2,3,4,5 You'll notice there are no extra elements left at the end of the array.
Both `pop()` and `shift()` return the values they remove, so you can collect the values in a variable: var nums = [6,1,2,3,4,5]; var first = nums.shift(); // first gets the value 6 nums.push(first); print(nums); // 1,2,3,4,5,6 ## Adding and Removing Elements from the Middle of an Array Trying to add or remove elements at the middle of an array leads to the same problems we find when trying to add or remove elements from the beginning of an array—both operations require shifting array elements either toward the beginning or toward the end of the array. However, there is one mutator function we can use to add or remove elements from the middle of an array—`splice()`. To add elements to an array using `splice()`, you have to provide the following arguments:

* The starting index (where you want to begin adding elements)
* The number of elements to remove (0 when you are adding elements)
* The elements you want to add to the array

Let's look at a simple example. The following program adds three numbers to the middle of an array of numbers: var nums = [1,2,3,7,8,9]; nums.splice(3,0,4,5,6); print(nums); // 1,2,3,4,5,6,7,8,9 Here is an example of using `splice()` to remove elements from an array: var nums = [1,2,3,100,200,300,400,4,5]; nums.splice(3,4); print(nums); // 1,2,3,4,5 ## Putting Array Elements in Order The last two mutator functions are used to arrange array elements into some type of order. The first of these, `reverse()`, reverses the order of the elements of an array. Here is an example of its use: var nums = [1,2,3,4,5]; nums.reverse(); print(nums); // 5,4,3,2,1 We often need to sort the elements of an array into order.
The mutator function for this task, `sort()`, works very well with strings: var names = ["David","Mike","Cynthia","Clayton","Bryan","Raymond"]; names.sort(); print(names); // Bryan,Clayton,Cynthia,David,Mike,Raymond But `sort()` does not work so well with numbers: var nums = [3,1,2,100,4,200]; nums.sort(); print(nums); // 1,100,2,200,3,4 The `sort()` function sorts data lexicographically, assuming the data elements are strings, even though in the preceding example, the elements are numbers. We can make the `sort()` function work correctly for numbers by passing in an ordering function as the first argument to the function, which `sort()` will then use to sort the array elements. This is the function that `sort()` will use when comparing pairs of array elements to determine their correct order. For numbers, the ordering function can simply subtract one number from another number. If the number returned is negative, the left operand is less than the right operand; if the number returned is zero, the left operand is equal to the right operand; and if the number returned is positive, the left operand is greater than the right operand. With this in mind, let's rerun the previous small program using an ordering function: function compare(num1, num2) { return num1 - num2; } var nums = [3,1,2,100,4,200]; nums.sort(compare); print(nums); // 1,2,3,4,100,200 The `sort()` function uses the `compare()` function to sort the array elements numerically rather than lexicographically. # Iterator Functions The final set of array functions we examine are _iterator_ functions. These functions apply a function to each element of an array, either returning a value, a set of values, or a new array after applying the function to each element of an array. ## Non–Array-Generating Iterator Functions The first group of iterator functions we'll discuss do not generate a new array; instead, they either perform an operation on each element of an array or generate a single value from an array. 
The first of these functions is `forEach()`. This function takes a function as an argument and applies the called function to each element of an array. Here is an example of how it works: function square(num) { print(num, num * num); } var nums = [1,2,3,4,5,6,7,8,9,10]; nums.forEach(square); The output from this program is: 1 1 2 4 3 9 4 16 5 25 6 36 7 49 8 64 9 81 10 100 The next iterator function, `every()`, applies a Boolean function to an array and returns `true` if the function can return `true` for every element in the array. Here is an example: function isEven(num) { return num % 2 == 0; } var nums = [2,4,6,8,10]; var even = nums.every(isEven); if (even) { print("all numbers are even"); } else { print("not all numbers are even"); } The program displays: all numbers are even If we change the array to: var nums = [2,4,6,7,8,10]; the program displays: not all numbers are even The `some()` function will take a Boolean function and return `true` if at least one of the elements in the array meets the criterion of the Boolean function. For example: function isEven(num) { return num % 2 == 0; } var nums = [1,2,3,4,5,6,7,8,9,10]; var someEven = nums.some(isEven); if (someEven) { print("some numbers are even"); } else { print("no numbers are even"); } nums = [1,3,5,7,9]; someEven = nums.some(isEven); if (someEven) { print("some numbers are even"); } else { print("no numbers are even"); } The output from this program is: some numbers are even no numbers are even The `reduce()` function applies a function to an accumulator and the successive elements of an array until the end of the array is reached, yielding a single value. 
Here is an example of using `reduce()` to compute the sum of the elements of an array: function add(runningTotal, currentValue) { return runningTotal + currentValue; } var nums = [1,2,3,4,5,6,7,8,9,10]; var sum = nums.reduce(add); print(sum); // displays 55 The `reduce()` function, in conjunction with the `add()` function, works from left to right, computing a running sum of the array elements, like this: add(1,2) -> 3 add(3,3) -> 6 add(6,4) -> 10 add(10,5) -> 15 add(15,6) -> 21 add(21,7) -> 28 add(28,8) -> 36 add(36,9) -> 45 add(45,10) -> 55 We can also use `reduce()` with strings to perform concatenation: function concat(accumulatedString, item) { return accumulatedString + item; } var words = ["the ", "quick ","brown ", "fox "]; var sentence = words.reduce(concat); print(sentence); // displays "the quick brown fox" JavaScript also provides a `reduceRight()` function, which works similarly to `reduce()`, only working from the righthand side of the array to the left, instead of from left to right. The following program uses `reduceRight()` to reverse the elements of an array: function concat(accumulatedString, item) { return accumulatedString + item; } var words = ["the ", "quick ","brown ", "fox "]; var sentence = words.reduceRight(concat); print(sentence); // displays "fox brown quick the" ## Iterator Functions That Return a New Array There are two iterator functions that return new arrays: `map()` and `filter()`. The `map()` function works like the `forEach()` function, applying a function to each element of an array. The difference between the two functions is that `map()` returns a new array with the results of the function application. 
Here is an example: function curve(grade) { return grade += 5; } var grades = [77, 65, 81, 92, 83]; var newgrades = grades.map(curve); print(newgrades); // 82, 70, 86, 97, 88 Here is an example using strings: function first(word) { return word[0]; } var words = ["for","your","information"]; var acronym = words.map(first); print(acronym.join("")); // displays "fyi" For this last example, the `acronym` array stores the first letter of each word in the `words` array. However, if we want to display the elements of the array as a true acronym, we need to get rid of the commas that will be displayed if we just display the array elements using the implied `toString()` function. We accomplish this by calling the `join()` function with the empty string as the separator. The `filter()` function works similarly to `every()`, but instead of returning `true` if all the elements of an array satisfy a Boolean function, the function returns a new array consisting of those elements that satisfy the Boolean function. Here is an example: function isEven(num) { return num % 2 == 0; } function isOdd(num) { return num % 2 != 0; } var nums = []; for (var i = 0; i < 20; ++i) { nums[i] = i+1; } var evens = nums.filter(isEven); print("Even numbers: "); print(evens); var odds = nums.filter(isOdd); print("Odd numbers: "); print(odds); This program returns the following output: Even numbers: 2,4,6,8,10,12,14,16,18,20 Odd numbers: 1,3,5,7,9,11,13,15,17,19 Here is another interesting use of `filter()`: function passing(num) { return num >= 60; } var grades = []; for (var i = 0; i < 20; ++i) { grades[i] = Math.floor(Math.random() * 101); } var passGrades = grades.filter(passing); print("All grades: "); print(grades); print("Passing grades: "); print(passGrades); This program displays: All grades: 39,43,89,19,46,54,48,5,13,31,27,95,62,64,35,75,79,88,73,74 Passing grades: 89,95,62,64,75,79,88,73,74 Of course, we can also use `filter()` with strings. 
Here is an example that applies the spelling rule "i before e except after c":

```javascript
function afterc(str) {
  if (str.indexOf("cie") > -1) {
    return true;
  }
  return false;
}

var words = ["recieve", "deceive", "percieve", "deceit", "concieve"];
var misspelled = words.filter(afterc);
print(misspelled); // displays recieve,percieve,concieve
```

# Two-Dimensional and Multidimensional Arrays

JavaScript arrays are only one-dimensional, but you can create multidimensional arrays by creating arrays of arrays. In this section we'll describe how to create two-dimensional arrays in JavaScript.

## Creating Two-Dimensional Arrays

A two-dimensional array is structured like a spreadsheet with rows and columns. To create a two-dimensional array in JavaScript, we have to create an array and then make each element of the array an array as well. At the very least, we need to know the number of rows we want the array to contain. With that information, we can create a two-dimensional array with _n_ rows and one column:

```javascript
var twod = [];
var rows = 5;
for (var i = 0; i < rows; ++i) {
  twod[i] = [];
}
```

The problem with this approach is that each element of the array is set to `undefined`. A better way to create a two-dimensional array is to follow the example from _JavaScript: The Good Parts_ (O'Reilly, p. 64). Crockford extends the JavaScript array object with a function that sets the number of rows and columns and sets each value to a value passed to the function.
Here is his definition:

```javascript
Array.matrix = function(numrows, numcols, initial) {
  var arr = [];
  for (var i = 0; i < numrows; ++i) {
    var columns = [];
    for (var j = 0; j < numcols; ++j) {
      columns[j] = initial;
    }
    arr[i] = columns;
  }
  return arr;
};
```

Here is some code to test the definition:

```javascript
var nums = Array.matrix(5,5,0);
print(nums[1][1]); // displays 0
var names = Array.matrix(3,3,"");
names[1][2] = "Joe";
print(names[1][2]); // displays "Joe"
```

We can also create a two-dimensional array and initialize it to a set of values in one line:

```javascript
var grades = [[89, 77, 78],[76, 82, 81],[91, 94, 89]];
print(grades[2][2]); // displays 89
```

For small data sets, this is the easiest way to create a two-dimensional array.

## Processing Two-Dimensional Array Elements

There are two fundamental patterns used to process the elements of a two-dimensional array. One pattern emphasizes accessing array elements by columns, and the other pattern emphasizes accessing array elements by rows. We will use the `grades` array created in the preceding code fragment to demonstrate how these patterns work.

For both patterns, we use a set of nested `for` loops. For columnar processing, the outer loop moves through the rows, and the inner loop processes the columns. For the `grades` array, think of each row as a set of grades for one student. We can compute the average for each student's grades by adding each grade on the row to a `total` variable and then dividing `total` by the total number of grades on that row.
Here is the code for that process:

```javascript
var grades = [[89, 77, 78],[76, 82, 81],[91, 94, 89]];
var total = 0;
var average = 0.0;
for (var row = 0; row < grades.length; ++row) {
  for (var col = 0; col < grades[row].length; ++col) {
    total += grades[row][col];
  }
  average = total / grades[row].length;
  print("Student " + parseInt(row+1) + " average: " + average.toFixed(2));
  total = 0;
  average = 0.0;
}
```

The inner loop is controlled by the expression:

```
col < grades[row].length
```

This works because each row contains an array, and that array has a `length` property we can use to determine how many columns there are in the row. The grade average for each student is rounded to two decimals using the `toFixed(n)` function. Here is the output from the program:

```
Student 1 average: 81.33
Student 2 average: 79.67
Student 3 average: 91.33
```

To perform a row-wise computation, we simply have to flip the `for` loops so that the outer loop controls the columns and the inner loop controls the rows. Here is the calculation for each test:

```javascript
var grades = [[89, 77, 78],[76, 82, 81],[91, 94, 89]];
var total = 0;
var average = 0.0;
for (var col = 0; col < grades.length; ++col) {
  for (var row = 0; row < grades[col].length; ++row) {
    total += grades[row][col];
  }
  average = total / grades[col].length;
  print("Test " + parseInt(col+1) + " average: " + average.toFixed(2));
  total = 0;
  average = 0.0;
}
```

The output from this program is:

```
Test 1 average: 85.33
Test 2 average: 84.33
Test 3 average: 82.67
```

## Jagged Arrays

A _jagged_ array is an array where the rows in the array may have a different number of elements. One row may have three elements, while another row may have five elements, while yet another row may have just one element. Many programming languages have problems handling jagged arrays, but JavaScript does not, since we can compute the length of any row. To give you an example, imagine the `grades` array where students have an unequal number of grades recorded.
We can still compute the correct average for each student without changing the program at all:

```javascript
var grades = [[89, 77],[76, 82, 81],[91, 94, 89, 99]];
var total = 0;
var average = 0.0;
for (var row = 0; row < grades.length; ++row) {
  for (var col = 0; col < grades[row].length; ++col) {
    total += grades[row][col];
  }
  average = total / grades[row].length;
  print("Student " + parseInt(row+1) + " average: " + average.toFixed(2));
  total = 0;
  average = 0.0;
}
```

Notice that the first student only has two grades, while the second student has three grades, and the third student has four grades. Because the program computes the length of the row in the inner loop, this jaggedness doesn't cause any problems. Here is the output from the program:

```
Student 1 average: 83.00
Student 2 average: 79.67
Student 3 average: 93.25
```

# Arrays of Objects

All of the examples in this chapter have consisted of arrays whose elements have been primitive data types, such as numbers and strings. Arrays can also consist of objects, and all the functions and properties of arrays work with objects. For example:

```javascript
function Point(x,y) {
  this.x = x;
  this.y = y;
}

function displayPts(arr) {
  for (var i = 0; i < arr.length; ++i) {
    print(arr[i].x + ", " + arr[i].y);
  }
}

var p1 = new Point(1,2);
var p2 = new Point(3,5);
var p3 = new Point(2,8);
var p4 = new Point(4,4);
var points = [p1,p2,p3,p4];

for (var i = 0; i < points.length; ++i) {
  print("Point " + parseInt(i+1) + ": " + points[i].x + ", " + points[i].y);
}

var p5 = new Point(12,-3);
points.push(p5);
print("After push: ");
displayPts(points);
points.shift();
print("After shift: ");
displayPts(points);
```

The output from this program is:

```
Point 1: 1, 2
Point 2: 3, 5
Point 3: 2, 8
Point 4: 4, 4
After push:
1, 2
3, 5
2, 8
4, 4
12, -3
After shift:
3, 5
2, 8
4, 4
12, -3
```

The point `12, -3` is added to the array using `push()`, and the point `1, 2` is removed from the array using `shift()`.

# Arrays in Objects

We can use arrays to store complex data in an object.
Many of the data structures we study in this book are implemented as class objects with an underlying array used to store data. The following example demonstrates many of the techniques we use throughout the book. In the example, we create an object that stores the weekly observed high temperature. The object has functions for adding a new temperature and computing the average of the temperatures stored in the object. Here is the code:

```javascript
function weekTemps() {
  this.dataStore = [];
  this.add = add;
  this.average = average;
}

function add(temp) {
  this.dataStore.push(temp);
}

function average() {
  var total = 0;
  for (var i = 0; i < this.dataStore.length; ++i) {
    total += this.dataStore[i];
  }
  return total / this.dataStore.length;
}

var thisWeek = new weekTemps();
thisWeek.add(52);
thisWeek.add(55);
thisWeek.add(61);
thisWeek.add(65);
thisWeek.add(55);
thisWeek.add(50);
thisWeek.add(52);
thisWeek.add(49);
print(thisWeek.average()); // displays 54.875
```

You'll notice the `add()` function uses the `push()` function to add elements to the `dataStore` array, using the name `add()` rather than `push()`. Using a more intuitive name for an operation is a common technique when defining object functions. Not everyone will understand what it means to push a data element, but everyone knows what it means to add a data element.

# Exercises

1. Create a `grades` object that stores a set of student grades in an object. Provide a function for adding a grade and a function for displaying the student's grade average.
2. Store a set of words in an array and display the contents both forward and backward.
3. Modify the `weekTemps` object in the chapter so that it stores a month's worth of data using a two-dimensional array. Create functions to display the monthly average, a specific week's average, and all the weeks' averages.
4. Create an object that stores individual letters in an array and has a function for displaying the letters as a single word.

# Chapter 3. Lists

Lists are one of the most common organizing tools people use in their day-to-day lives. We have to-do lists, grocery lists, top-ten lists, bottom-ten lists, and many other types. Our computer programs can also use lists, particularly if we only have a few items to store in list form. Lists are especially useful if we don't have to perform searches on the items in the list or put them into some type of sorted order. When we need to perform long searches or complex sorts, lists become less useful, especially with more complex data structures.

This chapter presents the creation of a simple list class. We start with the definition of a list abstract data type (ADT) and then demonstrate how to implement the ADT. We wrap up the chapter with some problems that are best solved with lists.

# A List ADT

In order to design an ADT for a list, we have to provide a definition of the list, including its properties, as well as the operations performed on it and by it.

A list is an ordered sequence of data. Each data item stored in a list is called an _element_. In JavaScript, the elements of a list can be of any data type. There is no predetermined number of elements that can be stored in a list, though the practical limit will be the amount of memory available to the program using the list. A list with no elements is an _empty_ list. The number of elements stored in a list is called the _length_ of the list. Internally, the number of elements in a list is kept in a `listSize` variable. You can _append_ an element to the end of a list, or you can _insert_ an element into a list after an existing element or at the beginning of a list. Elements are deleted from a list using a _remove_ operation. You can also _clear_ a list so that all of its current elements are removed. The elements of a list are displayed using either a `toString()` operation, which displays all the elements, or with a `getElement()` operation, which displays the value of the _current_ element.
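Since the implementation later in this chapter stores its elements in a plain array, each of these operations can be previewed against a bare array. This is a minimal sketch for orientation only, not the `List` class itself:

```javascript
// Sketch: how the core List operations map onto a plain JavaScript array.
var elements = [];                   // an empty list

elements.push("a");                  // append "a" to the end
elements.push("c");                  // append "c" to the end
elements.splice(1, 0, "b");          // insert "b" after the first element
elements.splice(0, 1);               // remove the element at position 0
var asString = elements.toString();  // string view of the list: "b,c"
var current = elements[0];           // element at the current position
elements.length = 0;                 // clear the list
```

The `List` class defined in the next section wraps exactly this kind of array storage behind named operations.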
Lists have properties to describe location. There is the _front_ of a list and the _end_ of a list. You can move from one element of a list to the next element using the `next()` operation, and you can move backward through a list using the `prev()` operation. You can also move to a numbered position in a list using the `moveTo(n)` operation, where _n_ specifies the position to move to. The `currPos` property indicates the current position in a list.

The List ADT does not specify a storage function for a list, but for our implementation we will use an array named `dataStore`. Table 3-1 shows the complete List ADT.

Table 3-1. ADT List

`listSize` (property) | Number of elements in list
---|---
`pos` (property) | Current position in list
`length` (property) | Returns the number of elements in list
`clear` (function) | Clears all elements from list
`toString` (function) | Returns string representation of list
`getElement` (function) | Returns element at current position
`insert` (function) | Inserts new element after existing element
`append` (function) | Adds new element to end of list
`remove` (function) | Removes element from list
`front` (function) | Sets current position to first element of list
`end` (function) | Sets current position to last element of list
`previous` (function) | Returns previous element
`next` (function) | Returns next element
`hasPrevious` (function) | Tests if previous element exists
`hasNext` (function) | Tests if next element exists
`currPos` (function) | Returns the current position in list
`moveTo` (function) | Moves the current position to specified position

# A List Class Implementation

A `List` class implementation can be taken straight from the List ADT we just defined.
We'll start with a definition of a constructor function, though it is not part of the ADT:

```javascript
function List() {
  this.listSize = 0;
  this.pos = 0;
  this.dataStore = []; // initializes an empty array to store list elements
  this.clear = clear;
  this.find = find;
  this.toString = toString;
  this.insert = insert;
  this.append = append;
  this.remove = remove;
  this.front = front;
  this.end = end;
  this.previous = previous;
  this.next = next;
  this.hasPrevious = hasPrevious;
  this.hasNext = hasNext;
  this.length = length;
  this.currPos = currPos;
  this.moveTo = moveTo;
  this.getElement = getElement;
  this.contains = contains;
}
```

## Append: Adding an Element to a List

The first function we'll implement is the `append()` function. This function appends a new element onto the list at the next available position, which will be equal to the value of the `listSize` variable:

```javascript
function append(element) {
  this.dataStore[this.listSize++] = element;
}
```

After the element is appended, `listSize` is incremented by 1.

## Remove: Removing an Element from a List

Next, let's see how to remove an element from a list. `remove()` is one of the harder functions to implement in the `List` class. First, we have to find the element in the list, and then we have to remove it and adjust the space in the underlying array to fill the hole left by removing an element. However, we can simplify the process by using the `splice()` mutator function. To start, let's define a helper function, `find()`, for finding the element to remove:

```javascript
function find(element) {
  for (var i = 0; i < this.dataStore.length; ++i) {
    if (this.dataStore[i] == element) {
      return i;
    }
  }
  return -1;
}
```

## Find: Finding an Element in a List

The `find()` function simply iterates through `dataStore` looking for the specified element. If the element is found, the function returns the position where the element was found. If the element wasn't found, the function returns `-1`, which is a standard value to return when an element can't be found in an array.
We can use this value for error checking in the `remove()` function.

The `remove()` function uses the position returned by `find()` to splice the `dataStore` array at that place. After the array is modified, `listSize` is decremented by 1 to reflect the new size of the list. The function returns `true` if an element is removed, and `false` otherwise. Here is the code:

```javascript
function remove(element) {
  var foundAt = this.find(element);
  if (foundAt > -1) {
    this.dataStore.splice(foundAt,1);
    --this.listSize;
    return true;
  }
  return false;
}
```

## Length: Determining the Number of Elements in a List

The `length()` function returns the number of elements in a list:

```javascript
function length() {
  return this.listSize;
}
```

## toString: Retrieving a List's Elements

Now is a good time to create a function that allows us to view the elements of a list. Here is the code for a simple `toString()` function:

```javascript
function toString() {
  return this.dataStore;
}
```

Strictly speaking, this function returns an array object and not a string, but its utility is in providing a view of the current state of an object, and just returning the array works adequately for this purpose.

Let's take a break from implementing our `List` class to see how well it works so far. You'll need to comment out the `List` object's properties assigned to functions that haven't been defined yet. Example 3-1 is a short test program that exercises the functions we've created so far, using `toString()` to display the list.

##### Example 3-1. `toString()` retrieves contents of a List

```javascript
var names = new List();
names.append("Cynthia");
names.append("Raymond");
names.append("Barbara");
print(names.toString());
names.remove("Raymond");
print(names.toString());
```

The output from this program is:

```
Cynthia,Raymond,Barbara
Cynthia,Barbara
```

## Insert: Inserting an Element into a List

The next function to discuss is `insert()`. What if, after removing Raymond from the preceding list, we decide we need to put him back where he was to begin with?
An insertion function needs to know where to insert an element, so for now we will say that insertion occurs after a specified element already in the list. With this in mind, here is the definition of the `insert()` function:

```javascript
function insert(element, after) {
  var insertPos = this.find(after);
  if (insertPos > -1) {
    this.dataStore.splice(insertPos+1, 0, element);
    ++this.listSize;
    return true;
  }
  return false;
}
```

`insert()` uses the helper function `find()` to determine the correct insertion position for the new element by finding the element specified in the `after` argument. Once this position is found, we use `splice()` to insert the new element into the list. Then we increment `listSize` by 1 and return `true` to indicate the insertion was successful.

## Clear: Removing All Elements from a List

Next, we need a function to clear out the elements of a list and allow new elements to be entered:

```javascript
function clear() {
  delete this.dataStore;
  this.dataStore = [];
  this.listSize = this.pos = 0;
}
```

The `clear()` function uses the `delete` operator to delete the `dataStore` array, and the next line re-creates the empty array. The last line sets the values of `listSize` and `pos` to 0 to indicate the start of a new list.

## Contains: Determining if a Given Value Is in a List

The `contains()` function is useful when you want to check a list to see if a particular value is part of the list.
Here is the definition:

```javascript
function contains(element) {
  for (var i = 0; i < this.dataStore.length; ++i) {
    if (this.dataStore[i] == element) {
      return true;
    }
  }
  return false;
}
```

## Moving To and Retrieving a List Element

The next two functions allow us to move to a specific element index (`moveTo()`) and then retrieve the element wherever the list index is currently residing (`getElement()`):

```javascript
function moveTo(position) {
  this.pos = position;
}

function getElement() {
  return this.dataStore[this.pos];
}
```

There's no error checking incorporated into these functions, other than the underlying JavaScript error handling based on accessing a nonexistent array element. If the code sets the position beyond the end of the array, and then accesses the element at that position, a value of `undefined` is returned. You can also incorporate more sophisticated error handling, including throwing an error when accessing a list element that doesn't exist.

## Iterating Through a List

This final set of functions enables iteration through a list. To enable the functionality, I turned to the Java `List` implementation, especially its iterator functions `next()`, `previous()`, `hasNext()`, and `hasPrevious()`, since they can be effectively used with our underlying array structure while still remaining true to the concept of the list.

The `hasNext()` function tests to see if there are any additional elements to the right of the existing list position. The `next()` function returns the next element to the right, and then increments the list position counter. The `hasPrevious()` function tests to see if there are any additional elements to the left of the existing list position. The `previous()` function then fetches the element to the left, after first decrementing the cursor position. The key understanding to take away from `next()` and `previous()` is that the first `previous()` call after the last call to `next()` returns the same element.
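That relationship between `next()` and `previous()` comes down to where the position counter is adjusted. Here is a minimal sketch of the cursor arithmetic using a plain array and a `pos` variable, separate from the `List` class itself:

```javascript
// Sketch of the iterator cursor: next() returns, then increments;
// previous() decrements, then returns.
var data = ["a", "b", "c"];
var pos = 0;

function next() { return data[pos++]; }
function previous() { return data[--pos]; }

next();                 // returns "a", pos is now 1
next();                 // returns "b", pos is now 2
var back = previous();  // pos drops back to 1, returns "b" again
```

Because `next()` increments after reading and `previous()` decrements before reading, the two calls straddle the same element, which is exactly the behavior described above.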
The final three functions are `front()` and `end()`, to move the current position to the front of the list, or the end, and `currPos()`, returning the current position.

```javascript
function previous() {
  return this.dataStore[--this.pos];
}

function next() {
  return this.dataStore[this.pos++];
}

function hasNext() {
  if (this.pos > this.listSize - 1) {
    return false;
  }
  else {
    return true;
  }
}

function hasPrevious() {
  if (this.pos <= 0) {
    return false;
  }
  else {
    return true;
  }
}

function front() {
  this.pos = 0;
}

function end() {
  this.pos = this.listSize - 1;
}

function currPos() {
  return this.pos;
}
```

Example 3-2 creates a new list of names to demonstrate how these functions work.

##### Example 3-2. Test various List functions

```javascript
var names = new List();
names.append("Clayton");
names.append("Raymond");
names.append("Cynthia");
names.append("Jennifer");
names.append("Bryan");
names.append("Danny");
```

Now let's move to the first element of the list and display it:

```javascript
names.front();
print(names.getElement()); // displays Clayton
```

Calling `next()` and printing the name displays the same name, since `next()` increments the `List` index only after returning the element:

```javascript
print(names.next()); // displays Clayton
```

Now we'll move forward twice and backward twice, displaying the current element to demonstrate how the `previous()` function works:

```javascript
print(names.next()); // displays Raymond
names.next();
names.previous();
print(names.previous()); // displays Raymond
```

The behavior we've demonstrated in these past few code fragments is captured in the concept of an _iterator_. We explore iterators in the next section.

# Iterating Through a List

An iterator allows us to traverse a list without referencing the internal storage mechanism of the `List` class. The functions `front()`, `end()`, `previous()`, `next()`, `hasNext()`, and `hasPrevious()` provide an implementation of an iterator for our `List` class.
Some advantages to using iterators over using array indexing include:

* Not having to worry about the underlying data storage structure when accessing list elements
* Being able to update the list and not having to update the iterator, where an index becomes invalid when a new element is added to the list
* Providing a uniform means of accessing elements for different types of data stores used in the implementation of a `List` class

With these advantages in mind, here is how to use an iterator to traverse through a list:

```javascript
for (names.front(); names.hasNext();) {
  print(names.next());
}
```

The `for` loop starts by setting the current position to the front of the list. The loop continues until `hasNext()` returns `false`. No incrementer is necessary in the `for` loop, as `next()` increments the `List` position.

We can also traverse a list backward using an iterator. Here is the code:

```javascript
for (names.end(); names.hasPrevious();) {
  console.log(names.previous());
}
```

The loop starts at the last element of the list and moves backward using the `previous()` function while `hasPrevious()` returns `true`. Iterators are used only to move through a list and should not be combined with any functions for adding or removing items from a list.

# A List-Based Application

To demonstrate how to use lists, we are going to build a system that can be used in the simulation of a video-rental kiosk system such as Redbox.

## Reading Text Files

In order to get the list of videos available in the kiosk into our program, we need to be able to read the data from a file. We first have to create a text file that contains the list of videos available using a text editor. We name the file `films.txt`. Here are the contents of the file (these movies are the top 20 movies as voted on by IMDB users as of October 5, 2013):

1. _The Shawshank Redemption_
2. _The Godfather_
3. _The Godfather: Part II_
4. _Pulp Fiction_
5. _The Good, the Bad and the Ugly_
6. _12 Angry Men_
7. _Schindler's List_
8. _The Dark Knight_
9. _The Lord of the Rings: The Return of the King_
10. _Fight Club_
11. _Star Wars: Episode V - The Empire Strikes Back_
12. _One Flew Over the Cuckoo's Nest_
13. _The Lord of the Rings: The Fellowship of the Ring_
14. _Inception_
15. _Goodfellas_
16. _Star Wars_
17. _Seven Samurai_
18. _The Matrix_
19. _Forrest Gump_
20. _City of God_

Now we need a code fragment to read the contents of the file into our program:

```javascript
var movies = read('films.txt').split("\n");
```

This line performs two tasks. First, it reads the contents of our movies text file into the program, `read('films.txt')`; and second, it splits the file into individual lines by using the newline character as a delimiter. This output is then stored as an array in the `movies` variable.

This line of code works up to a point, but it's not perfect. When the elements of the text file are split into the array, the newline character is replaced with a space. While a single space seems innocuous enough, having an extra space in a string can cause havoc when you are doing string comparisons. So we need to add a loop that strips the space from each array element using the `trim()` function. This code will work better in a function, so let's create a function to read data from a file and store it in an array:

```javascript
function createArr(file) {
  var arr = read(file).split("\n");
  for (var i = 0; i < arr.length; ++i) {
    arr[i] = arr[i].trim();
  }
  return arr;
}
```

## Using Lists to Manage a Kiosk

The next step is to take the movies array and store its contents in a list. Here is how we do it:

```javascript
var movieList = new List();
for (var i = 0; i < movies.length; ++i) {
  movieList.append(movies[i]);
}
```

Now we can write a function to display the movie list available at the kiosk:

```javascript
function displayList(list) {
  for (list.front(); list.hasNext(); ) {
    print(list.next());
  }
}
```

The `displayList()` function works fine with native types, such as lists made up of strings, but it won't work for `Customer` objects, which are defined below.
Let's modify the function so that if it discovers that the list is made up of `Customer` objects, it will display those objects accordingly. Here's the new definition of `displayList()`:

```javascript
function displayList(list) {
  for (list.front(); list.hasNext(); ) {
    var listItem = list.next();
    if (listItem instanceof Customer) {
      print(listItem.name + ", " + listItem.movie);
    }
    else {
      print(listItem);
    }
  }
}
```

We assign the next list item to an internal variable for further manipulation. Remember that `next()` increments the position in place, so it must only be called once per loop iteration. For each object in the list, we use the `instanceof` operator to test whether the object is a `Customer` object. If so, we retrieve the name and the movie the customer has checked out using each of the two properties as an index for retrieving the associated value. If the object is not a `Customer`, the code simply prints the element.

Now that we have our movie list taken care of, we need to create a list to store the customers who check out movies at the kiosk:

```javascript
var customers = new List();
```

This will contain `Customer` objects, which are made up of the customer's name and the movie checked out. Here is the constructor function for the `Customer` object:

```javascript
function Customer(name, movie) {
  this.name = name;
  this.movie = movie;
}
```

Next, we need a function that allows a customer to check out a movie. This function takes two arguments: the customer's name and the movie he wants to check out. If the movie is available, the function removes the movie from the kiosk's list of movies and adds it to the customer's list. We'll use the `List` class function `contains()` for this task. Here is the definition for a function to check out a movie:

```javascript
function checkOut(name, movie, movieList, customerList) {
  if (movieList.contains(movie)) {
    var c = new Customer(name, movie);
    customerList.append(c);
    movieList.remove(movie);
  }
  else {
    print(movie + " is not available.");
  }
}
```

The function first checks to see if the movie requested is available.
If the movie is available, a `Customer` object is created with the movie's title and the customer's name. The `Customer` object is appended to the customer list, and the movie is removed from the movie list. If the movie is not available, a simple message is displayed indicating such. We can test the `checkOut()` function with a short program (shown in Example 3-3).

##### Example 3-3. Test the `checkOut()` function

```javascript
var movies = createArr("films.txt");
var movieList = new List();
var customers = new List();
for (var i = 0; i < movies.length; ++i) {
  movieList.append(movies[i]);
}
print("Available movies: \n");
displayList(movieList);
checkOut("Jane Doe", "The Godfather", movieList, customers);
print("\nCustomer Rentals: \n");
displayList(customers);
```

The output of the program displays the movie list with `"The Godfather"` removed, followed by the list of customers with movies checked out. Let's add some titles to our program's output to make it easier to read, along with some interactive input (see Example 3-4).

##### Example 3-4. A more user-friendly version of the kiosk program

```javascript
var movies = createArr("films.txt");
var movieList = new List();
var customers = new List();
for (var i = 0; i < movies.length; ++i) {
  movieList.append(movies[i]);
}
print("Available movies: \n");
displayList(movieList);
putstr("\nEnter your name: ");
var name = readline();
putstr("What movie would you like? ");
var movie = readline();
checkOut(name, movie, movieList, customers);
print("\nCustomer Rentals: \n");
displayList(customers);
print("\nMovies Now Available\n");
displayList(movieList);
```

Here is the result of running this program:

```
Available movies:

The Shawshank Redemption
The Godfather
The Godfather: Part II
Pulp Fiction
The Good, the Bad and the Ugly
12 Angry Men
Schindler's List
The Dark Knight
The Lord of the Rings: The Return of the King
Fight Club
Star Wars: Episode V - The Empire Strikes Back
One Flew Over the Cuckoo's Nest
The Lord of the Rings: The Fellowship of the Ring
Inception
Goodfellas
Star Wars
Seven Samurai
The Matrix
Forrest Gump
City of God

Enter your name: Jane Doe
What movie would you like? The Godfather

Customer Rentals:

Jane Doe, The Godfather

Movies Now Available

The Shawshank Redemption
The Godfather: Part II
Pulp Fiction
The Good, the Bad and the Ugly
12 Angry Men
Schindler's List
The Dark Knight
The Lord of the Rings: The Return of the King
Fight Club
Star Wars: Episode V - The Empire Strikes Back
One Flew Over the Cuckoo's Nest
The Lord of the Rings: The Fellowship of the Ring
Inception
Goodfellas
Star Wars
Seven Samurai
The Matrix
Forrest Gump
City of God
```

We can add other functionality to make our video-rental kiosk system more robust. You will get to explore some of this added functionality in the exercises that follow.

# Exercises

1. Write a function that inserts an element into a list only if the element to be inserted is larger than any of the elements currently in the list. Larger can mean either greater than when working with numeric values, or further down in the alphabet when working with textual values.
2. Write a function that inserts an element into a list only if the element to be inserted is smaller than any of the elements currently in the list.
3. Create a `Person` class that stores a person's name and their gender. Create a list of at least 10 `Person` objects.
   Write a function that displays all the people in the list of the same gender.
4. Modify the video-rental kiosk program so that when a movie is checked out it is added to a list of rented movies. Display this list whenever a customer checks out a movie.
5. Create a check-in function for the video-rental kiosk program so that a returned movie is deleted from the rented movies list and added back to the available movies list.

# Chapter 4. Stacks

Lists are a natural form of organization for data. We have already seen how to use the `List` class to organize data into a list. When the order of the data being stored doesn't matter, or when you don't have to search the data stored, lists work wonderfully. For other applications, however, plain lists are too simple and we need a more complex, list-like data structure.

A list-like structure that can be used to solve many problems in computing is the stack. Stacks are efficient data structures because data can be added or removed only from the top of a stack, making these procedures fast and easy to implement. Stacks are used extensively in programming language implementations for everything from expression evaluation to handling function calls.

# Stack Operations

A stack is a list of elements that are accessible only from one end of the list, which is called the `top`. One common, real-world example of a stack is the stack of trays at a cafeteria. Trays are always removed from the top, and when trays are put back on the stack after being washed, they are placed on the top of the stack. The stack is known as a last-in, first-out (LIFO) data structure.

Because of the last-in, first-out nature of the stack, any element that is not currently at the top of the stack cannot be accessed. To get to an element at the bottom of the stack, you have to dispose of all the elements above it first. The two primary operations of a stack are adding elements to a stack and taking elements off a stack.
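Native JavaScript arrays already expose this last-in, first-out discipline through their built-in `push()` and `pop()` methods, which gives a quick preview of the behavior before we build a `Stack` class of our own:

```javascript
// Preview of LIFO behavior using a native array as the stack.
var stack = [];
stack.push("tray 1"); // bottom of the stack
stack.push("tray 2");
stack.push("tray 3"); // top of the stack

var first = stack.pop();  // "tray 3" -- the last tray in is the first out
var second = stack.pop(); // "tray 2"
```

No matter how many trays are stacked, the last one pushed is always the first one popped.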
Elements are added to a stack using the `push` operation. Elements are taken off a stack using the `pop` operation. These operations are illustrated in Figure 4-1.

###### Figure 4-1. Pushing and popping elements of a stack

Another common operation on a stack is viewing the element at the top of a stack. The `pop` operation retrieves the top element of a stack, but it also permanently removes that element from the stack. The `peek` operation returns the value stored at the top of a stack without removing it from the stack.

To keep track of where the top element is, as well as keeping track of where to add a new element, we use a `top` variable that is incremented when we push new elements onto the stack and is decremented when we pop elements off the stack.

While pushing, popping, and peeking are the primary operations associated with a stack, there are other operations we need to perform and properties we need to examine. The `clear` operation removes all the elements from a stack. The `length` property holds the number of elements contained in a stack. We also define an `empty` property to let us know if a stack has no elements in it, though we can use the `length` property for this as well.

# A Stack Implementation

To build a stack, we first need to decide on the underlying data structure we will use to store the stack elements. We will use an array in our implementation.

We begin our stack implementation by defining the constructor function for a `Stack` class:

```javascript
function Stack() {
   this.dataStore = [];
   this.top = 0;
   this.push = push;
   this.pop = pop;
   this.peek = peek;
}
```

The array that stores the stack elements is named `dataStore`. The constructor sets it to an empty array. The `top` variable keeps track of the top of the stack and is initially set to 0 by the constructor, indicating that the 0 position of the array is the top of the stack, at least until an element is pushed onto the stack.

The first function to implement is the `push()` function.
When we push a new element onto a stack, we have to store it in the top position and increment the `top` variable so that the new top is the next empty position in the array. Here is the code:

```javascript
function push(element) {
   this.dataStore[this.top++] = element;
}
```

Pay particular attention to the placement of the increment operator _after_ the call to `this.top`. Placing the operator there ensures that the current value of `top` is used to place the new element at the top of the stack before `top` is incremented.

The `pop()` function does the reverse of the `push()` function—it returns the element in the top position of the stack and then decrements the `top` variable:

```javascript
function pop() {
   return this.dataStore[--this.top];
}
```

The `peek()` function returns the top element of the stack by accessing the element at the `top-1` position of the array:

```javascript
function peek() {
   return this.dataStore[this.top-1];
}
```

If you call the `peek()` function on an empty stack, you get `undefined` as the result. That's because there is no value stored at the top position of the stack, since it is empty.

There will be situations when you need to know how many elements are stored in a stack. The `length()` function returns this value by returning the value of `top`:

```javascript
function length() {
   return this.top;
}
```

Finally, we can clear a stack by simply setting the `top` variable back to 0 and setting the `dataStore` array's length to 0:

```javascript
function clear() {
   this.top = 0;
   this.dataStore.length = 0;
}
```

Example 4-1 shows the complete implementation of the `Stack` class.

##### Example 4-1. The `Stack` class

```javascript
function Stack() {
   this.dataStore = [];
   this.top = 0;
   this.push = push;
   this.pop = pop;
   this.peek = peek;
   this.clear = clear;
   this.length = length;
}

function push(element) {
   this.dataStore[this.top++] = element;
}

function peek() {
   return this.dataStore[this.top-1];
}

function pop() {
   return this.dataStore[--this.top];
}

function clear() {
   this.top = 0;
   this.dataStore.length = 0;
}

function length() {
   return this.top;
}
```

Example 4-2 demonstrates a program that tests this implementation.

##### Example 4-2. Testing the `Stack` class implementation

```javascript
load("Stack.js");

var s = new Stack();
s.push("David");
s.push("Raymond");
s.push("Bryan");
print("length: " + s.length());
print(s.peek());
var popped = s.pop();
print("The popped element is: " + popped);
print(s.peek());
s.push("Cynthia");
print(s.peek());
s.clear();
print("length: " + s.length());
print(s.peek());
s.push("Clayton");
print(s.peek());
```

The output from Example 4-2 is:

    length: 3
    Bryan
    The popped element is: Bryan
    Raymond
    Cynthia
    length: 0
    undefined
    Clayton

The next-to-last value, `undefined`, is returned because once a stack is cleared, there is no value in the top position, and when we peek at the top of the stack, `undefined` is returned.

# Using the Stack Class

There are several problems for which a stack is the perfect data structure needed for the solution. In this section, we look at several such problems.

## Multiple Base Conversions

A stack can be used to convert a number from one base to another base. Given a number, _n_, which we want to convert to a base, _b_, here is the algorithm for performing the conversion:

1. The rightmost digit of _n_ is _n % b_. Push this digit onto the stack.
2. Replace _n_ with the integer part of _n / b_.
3. Repeat steps 1 and 2 until _n = 0_ and there are no significant digits remaining.
4. Build the converted number string by popping the stack until the stack is empty.

###### Note

This algorithm will work only with bases 2 through 9.
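The limitation in the note above exists because the algorithm builds the result out of numeric digits only, and digits greater than 9 (as in hexadecimal) need letter symbols. As a hedged sketch that goes beyond the book's version, the same push/pop algorithm extends to bases up to 16 by mapping each remainder through a digit string; a plain array stands in for the `Stack` class here to keep the sketch self-contained:

```javascript
// Sketch: base conversion for bases 2 through 16.
// A remainder of 10 becomes "A", 11 becomes "B", and so on.
function mulBase16(num, base) {
   var digits = "0123456789ABCDEF";
   var s = [];                       // plain array standing in for the Stack class
   do {
      s.push(digits[num % base]);    // push the symbol for the rightmost digit
      num = Math.floor(num / base);
   } while (num > 0);
   var converted = "";
   while (s.length > 0) {
      converted += s.pop();          // popping reverses the digit order
   }
   return converted;
}

console.log(mulBase16(255, 16)); // FF
console.log(mulBase16(32, 2));   // 100000
```

The only change from the base-2-through-9 algorithm is the lookup into `digits`; the stack discipline is identical.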
We can implement this algorithm very easily using a stack in JavaScript. Here is the definition of a function for converting a number to any of the bases 2 through 9:

```javascript
function mulBase(num, base) {
   var s = new Stack();
   do {
      s.push(num % base);
      num = Math.floor(num / base);
   } while (num > 0);
   var converted = "";
   while (s.length() > 0) {
      converted += s.pop();
   }
   return converted;
}
```

Example 4-3 demonstrates how to use this function for base 2 and base 8 conversions.

##### Example 4-3. Converting numbers to base 2 and base 8

```javascript
load("Stack.js");

function mulBase(num, base) {
   var s = new Stack();
   do {
      s.push(num % base);
      num = Math.floor(num / base);
   } while (num > 0);
   var converted = "";
   while (s.length() > 0) {
      converted += s.pop();
   }
   return converted;
}

var num = 32;
var base = 2;
var newNum = mulBase(num, base);
print(num + " converted to base " + base + " is " + newNum);
num = 125;
base = 8;
newNum = mulBase(num, base);
print(num + " converted to base " + base + " is " + newNum);
```

The output from Example 4-3 is:

    32 converted to base 2 is 100000
    125 converted to base 8 is 175

## Palindromes

A palindrome is a word, phrase, or number that is spelled the same forward and backward. For example, "dad" is a palindrome; "racecar" is a palindrome; "A man, a plan, a canal: Panama" is a palindrome if you take out the spaces and ignore the punctuation; and 1,001 is a numeric palindrome.

We can use a stack to determine whether or not a given string is a palindrome. We take the original string and push each character onto a stack, moving from left to right. When the end of the string is reached, the stack contains the original string in reverse order, with the last letter at the top of the stack and the first letter at the bottom of the stack, as shown in Figure 4-2.

###### Figure 4-2. Using a stack to determine if a word is a palindrome

Once the complete original string is on the stack, we can create a new string by popping each letter off the stack.
This process will create the original string in reverse order. We then simply compare the original string with the reversed word, and if they are equal, the string is a palindrome. Example 4-4 presents a program, minus the `Stack` class code, that determines if a given string is a palindrome.

##### Example 4-4. Determining if a string is a palindrome

```javascript
load("Stack.js");

function isPalindrome(word) {
   var s = new Stack();
   for (var i = 0; i < word.length; ++i) {
      s.push(word[i]);
   }
   var rword = "";
   while (s.length() > 0) {
      rword += s.pop();
   }
   if (word == rword) {
      return true;
   }
   else {
      return false;
   }
}

var word = "hello";
if (isPalindrome(word)) {
   print(word + " is a palindrome.");
}
else {
   print(word + " is not a palindrome.");
}
word = "racecar";
if (isPalindrome(word)) {
   print(word + " is a palindrome.");
}
else {
   print(word + " is not a palindrome.");
}
```

The output from this program is:

    hello is not a palindrome.
    racecar is a palindrome.

## Demonstrating Recursion

Stacks are often used in the implementation of computer programming languages. One area where stacks are used is in implementing recursion. It is beyond the scope of this book to demonstrate exactly how stacks are used to implement recursive procedures, but we can use stacks to simulate recursive processes. If you are interested in learning more about recursion, a good starting point is this web page that actually uses JavaScript to describe how recursion works.

To demonstrate how recursion is implemented using a stack, let's consider a recursive definition of the factorial function. First, here is a definition of factorial for the number 5:

_5! = 5 * 4 * 3 * 2 * 1 = 120_

Here is a recursive function to compute the factorial of any number:

```javascript
function factorial(n) {
   if (n === 0) {
      return 1;
   }
   else {
      return n * factorial(n-1);
   }
}
```

When called with the argument `5`, the function returns `120`.

To simulate computing _5!_ using a stack, first push the numbers 5 through 1 onto a stack.
Then, inside a loop, pop each number and multiply it by the running product, resulting in the correct answer, 120. Example 4-5 contains the code for the function, along with a test program.

##### Example 4-5. Simulating recursive processes using a stack

```javascript
load("Stack.js");

function factorial(n) {
   if (n === 0) {
      return 1;
   }
   else {
      return n * factorial(n-1);
   }
}

function fact(n) {
   var s = new Stack();
   while (n > 1) {
      s.push(n--);
   }
   var product = 1;
   while (s.length() > 0) {
      product *= s.pop();
   }
   return product;
}

print(factorial(5)); // displays 120
print(fact(5));      // displays 120
```

# Exercises

1. A stack can be used to ensure that an arithmetic expression has balanced parentheses. Write a function that takes an arithmetic expression as an argument and returns the position in the expression where a parenthesis is missing. An example of an arithmetic expression with unbalanced parentheses is 2.3 + .
2. A postfix expression evaluator works on arithmetic expressions taking the following form: _op1 op2 operator_. Using two stacks—one for the operands and one for the operators—design and implement a JavaScript function that converts infix expressions to postfix expressions, and then use the stacks to evaluate the expression.
3. An example of a real-world stack is a Pez dispenser. Imagine that your virtual Pez dispenser is filled with red, yellow, and white candies and you don't like the yellow ones. Write a program that uses a stack (and maybe more than one) to remove the yellow ones without changing the order of the other candies in the dispenser.

# Chapter 5. Queues

A _queue_ is a type of list where data are inserted at the end and removed from the front. Queues are used to store data in the order in which they occur, as opposed to a stack, in which the last piece of data entered is the first element used for processing.
Think of a queue like the line at your bank, where the first person into the line is the first person served, and as more customers enter a line, they wait in the back until it is their turn to be served. A queue is an example of a first-in, first-out (FIFO) data structure. Queues are used to order processes submitted to an operating system or a print spooler, and simulation applications use queues to model scenarios such as customers standing in the line at a bank or a grocery store.

# Queue Operations

The two primary operations involving queues are inserting a new element into a queue and removing an element from a queue. The insertion operation is called _enqueue_, and the removal operation is called _dequeue_. The enqueue operation inserts a new element at the end of a queue, and the dequeue operation removes an element from the front of a queue. Figure 5-1 illustrates these operations.

###### Figure 5-1. Inserting and removing elements from a queue

Another important queue operation is viewing the element at the front of a queue. This operation is called `peek`. The peek operation returns the element stored at the front of a queue without removing it from the queue. Besides examining the front element, we also need to know how many elements are stored in a queue, which we can satisfy with the `length` property; and we need to be able to remove all the elements from a queue, which is performed with the `clear` operation.

# An Array-Based Queue Class Implementation

Implementing the `Queue` class using an array is straightforward. Using JavaScript arrays is an advantage many other programming languages don't have, because JavaScript contains a function for easily adding data to the end of an array, `push()`, and a function for easily removing data from the front of an array, `shift()`.

The `push()` function places its argument at the first open position of an array, which will always be the back of the array, even when there are no other elements in the array.
Here is an example:

```javascript
var names = [];
names.push("Cynthia");
names.push("Jennifer");
print(names); // displays Cynthia,Jennifer
```

Then we can remove the element from the front of the array using `shift()`:

```javascript
names.shift();
print(names); // displays Jennifer
```

Now we're ready to begin implementing the `Queue` class by defining the constructor function:

```javascript
function Queue() {
   this.dataStore = [];
   this.enqueue = enqueue;
   this.dequeue = dequeue;
   this.front = front;
   this.back = back;
   this.toString = toString;
   this.empty = empty;
}
```

The `enqueue()` function adds an element to the end of a queue:

```javascript
function enqueue(element) {
   this.dataStore.push(element);
}
```

The `dequeue()` function removes an element from the front of a queue:

```javascript
function dequeue() {
   return this.dataStore.shift();
}
```

We can examine the front and back elements of a queue using these functions:

```javascript
function front() {
   return this.dataStore[0];
}

function back() {
   return this.dataStore[this.dataStore.length-1];
}
```

We also need a `toString()` function to display all the elements in a queue:

```javascript
function toString() {
   var retStr = "";
   for (var i = 0; i < this.dataStore.length; ++i) {
      retStr += this.dataStore[i] + "\n";
   }
   return retStr;
}
```

Finally, we need a function that lets us know if a queue is empty:

```javascript
function empty() {
   if (this.dataStore.length === 0) {
      return true;
   }
   else {
      return false;
   }
}
```

Example 5-1 presents the complete `Queue` class definition along with a test program.

##### Example 5-1. Queue class definition and a test program

```javascript
function Queue() {
   this.dataStore = [];
   this.enqueue = enqueue;
   this.dequeue = dequeue;
   this.front = front;
   this.back = back;
   this.toString = toString;
   this.empty = empty;
}

function enqueue(element) {
   this.dataStore.push(element);
}

function dequeue() {
   return this.dataStore.shift();
}

function front() {
   return this.dataStore[0];
}

function back() {
   return this.dataStore[this.dataStore.length-1];
}

function toString() {
   var retStr = "";
   for (var i = 0; i < this.dataStore.length; ++i) {
      retStr += this.dataStore[i] + "\n";
   }
   return retStr;
}

function empty() {
   if (this.dataStore.length === 0) {
      return true;
   }
   else {
      return false;
   }
}

// test program
var q = new Queue();
q.enqueue("Meredith");
q.enqueue("Cynthia");
q.enqueue("Jennifer");
print(q.toString());
q.dequeue();
print(q.toString());
print("Front of queue: " + q.front());
print("Back of queue: " + q.back());
```

The output from Example 5-1 is:

    Meredith
    Cynthia
    Jennifer

    Cynthia
    Jennifer

    Front of queue: Cynthia
    Back of queue: Jennifer

# Using the Queue Class: Assigning Partners at a Square Dance

As we mentioned earlier, queues are often used to simulate situations when people have to wait in line. One scenario we can simulate with a queue is a square dance for singles. When men and women arrive at this square dance, they enter the dance hall and stand in the line for their gender. As room becomes available on the dance floor, dance partners are chosen by taking the first man and woman in line. The next man and woman move to the front of their respective lines. As dance partners move onto the dance floor, their names are announced. If a couple leaves the floor and there is not both a man and a woman at the front of each line, this fact is announced. This simulation will store the names of the men and women participating in the square dance in a text file.
Here is the file we will use for the simulation:

    F Allison McMillan
    M Frank Opitz
    M Mason McMillan
    M Clayton Ruff
    F Cheryl Ferenback
    M Raymond Williams
    F Jennifer Ingram
    M Bryan Frazer
    M David Durr
    M Danny Martin
    F Aurora Adney

Each dancer is stored in a `Dancer` object:

```javascript
function Dancer(name, sex) {
   this.name = name;
   this.sex = sex;
}
```

Next we need a function to load the dancers from the file into the program:

```javascript
function getDancers(males, females) {
   var names = read("dancers.txt").split("\n");
   for (var i = 0; i < names.length; ++i) {
      names[i] = names[i].trim();
   }
   for (var i = 0; i < names.length; ++i) {
      var dancer = names[i].split(" ");
      var sex = dancer[0];
      var name = dancer[1];
      if (sex == "F") {
         females.enqueue(new Dancer(name, sex));
      }
      else {
         males.enqueue(new Dancer(name, sex));
      }
   }
}
```

The names are read from the text file into an array. The function then trims the newline character from each line. The second loop splits each line into a two-element array, by sex and by name. Then the function examines the `sex` element and assigns the dancer to the appropriate queue.

The next function pairs up the male and female dancers and announces the pairings:

```javascript
function dance(males, females) {
   print("The dance partners are: \n");
   while (!females.empty() && !males.empty()) {
      person = females.dequeue();
      putstr("Female dancer is: " + person.name);
      person = males.dequeue();
      print(" and the male dancer is: " + person.name);
   }
   print();
}
```

Example 5-2 presents all the preceding functions, as well as a test program and the `Queue` class.

##### Example 5-2. A square dance simulation

```javascript
function Queue() {
   this.dataStore = [];
   this.enqueue = enqueue;
   this.dequeue = dequeue;
   this.front = front;
   this.back = back;
   this.toString = toString;
   this.empty = empty;
}

function enqueue(element) {
   this.dataStore.push(element);
}

function dequeue() {
   return this.dataStore.shift();
}

function front() {
   return this.dataStore[0];
}

function back() {
   return this.dataStore[this.dataStore.length-1];
}

function toString() {
   var retStr = "";
   for (var i = 0; i < this.dataStore.length; ++i) {
      retStr += this.dataStore[i] + "\n";
   }
   return retStr;
}

function empty() {
   if (this.dataStore.length === 0) {
      return true;
   }
   else {
      return false;
   }
}

function Dancer(name, sex) {
   this.name = name;
   this.sex = sex;
}

function getDancers(males, females) {
   var names = read("dancers.txt").split("\n");
   for (var i = 0; i < names.length; ++i) {
      names[i] = names[i].trim();
   }
   for (var i = 0; i < names.length; ++i) {
      var dancer = names[i].split(" ");
      var sex = dancer[0];
      var name = dancer[1];
      if (sex == "F") {
         females.enqueue(new Dancer(name, sex));
      }
      else {
         males.enqueue(new Dancer(name, sex));
      }
   }
}

function dance(males, females) {
   print("The dance partners are: \n");
   while (!females.empty() && !males.empty()) {
      person = females.dequeue();
      putstr("Female dancer is: " + person.name);
      person = males.dequeue();
      print(" and the male dancer is: " + person.name);
   }
   print();
}

// test program
var maleDancers = new Queue();
var femaleDancers = new Queue();
getDancers(maleDancers, femaleDancers);
dance(maleDancers, femaleDancers);
if (!femaleDancers.empty()) {
   print(femaleDancers.front().name + " is waiting to dance.");
}
if (!maleDancers.empty()) {
   print(maleDancers.front().name + " is waiting to dance.");
}
```

The output from Example 5-2 is:

    The dance partners are:

    Female dancer is: Allison and the male dancer is: Frank
    Female dancer is: Cheryl and the male dancer is: Mason
    Female dancer is: Jennifer and the male dancer is: Clayton
    Female dancer is: Aurora and the male dancer is: Raymond

    Bryan is waiting to dance.

One change we might want to make to the program is to display the number of male and female dancers waiting to dance. We don't have a function that displays the number of elements in a queue, so we need to add it to the `Queue` class definition:

```javascript
function count() {
   return this.dataStore.length;
}
```

Be sure to add the following line to the `Queue` class constructor function:

```javascript
this.count = count;
```

In Example 5-3, we change the test program to use this new function.

##### Example 5-3. Providing a count of dancers waiting to dance

```javascript
var maleDancers = new Queue();
var femaleDancers = new Queue();
getDancers(maleDancers, femaleDancers);
dance(maleDancers, femaleDancers);
if (maleDancers.count() > 0) {
   print("There are " + maleDancers.count() +
         " male dancers waiting to dance.");
}
if (femaleDancers.count() > 0) {
   print("There are " + femaleDancers.count() +
         " female dancers waiting to dance.");
}
```

When we run Example 5-3, we get the following:

    Female dancer is: Allison and the male dancer is: Frank
    Female dancer is: Cheryl and the male dancer is: Mason
    Female dancer is: Jennifer and the male dancer is: Clayton
    Female dancer is: Aurora and the male dancer is: Raymond

    There are 3 male dancers waiting to dance.

# Sorting Data with Queues

Queues are not only useful for simulations; they can also be used to sort data. Back in the old days of computing, programs were entered into a mainframe computer via punch cards, with each card holding a single program statement. The cards were sorted using a mechanical sorter that utilized bin-like structures to hold the cards. We can simulate this process by using a set of queues. This sorting technique is called a _radix sort_ (see _Data Structures with C++_ [Prentice Hall]). It is not the fastest of sorting algorithms, but it does demonstrate an interesting use of queues.

The radix sort works by making two passes over a data set, in this case the set of integers from 0 to 99.
The first pass sorts the numbers based on the 1s digit, and the second pass sorts the numbers based on the 10s digit. Each number is placed in a bin based on the digit in each of these two places. Given these numbers:

    91, 46, 85, 15, 92, 35, 31, 22

the first pass of the radix sort results in the following bin configuration:

    Bin 0:
    Bin 1: 91, 31
    Bin 2: 92, 22
    Bin 3:
    Bin 4:
    Bin 5: 85, 15, 35
    Bin 6: 46
    Bin 7:
    Bin 8:
    Bin 9:

Now the numbers are sorted based on which bin they are in:

    91, 31, 92, 22, 85, 15, 35, 46

Next, the numbers are sorted by the 10s digit into the appropriate bins:

    Bin 0:
    Bin 1: 15
    Bin 2: 22
    Bin 3: 31, 35
    Bin 4: 46
    Bin 5:
    Bin 6:
    Bin 7:
    Bin 8: 85
    Bin 9: 91, 92

Finally, take the numbers out of the bins and put them back into a list, and you get the following sorted list of integers:

    15, 22, 31, 35, 46, 85, 91, 92

We can implement this algorithm by using queues to represent the bins. We need ten queues, one for each digit from 0 through 9. We will store the queues in an array. We use the modulus and integer division operations for determining the 1s and 10s digits. The remainder of the algorithm entails adding numbers to their appropriate queues, taking the numbers out of the queues to re-sort them based on the 1s digit, and then repeating the process for the 10s digit. The result is a sorted set of integers.
First, here is the function that distributes numbers by the place (1s or 10s) digit:

```javascript
function distribute(nums, queues, n, digit) { // digit represents either the 1s
   for (var i = 0; i < n; ++i) {              // or 10s place
      if (digit == 1) {
         queues[nums[i]%10].enqueue(nums[i]);
      }
      else {
         queues[Math.floor(nums[i] / 10)].enqueue(nums[i]);
      }
   }
}
```

Here is the function for collecting numbers from the queues:

```javascript
function collect(queues, nums) {
   var i = 0;
   for (var digit = 0; digit < 10; ++digit) {
      while (!queues[digit].empty()) {
         nums[i++] = queues[digit].dequeue();
      }
   }
}
```

Example 5-4 presents a complete program for performing a radix sort, along with a function for displaying the contents of an array.

##### Example 5-4. Performing a radix sort

```javascript
function distribute(nums, queues, n, digit) {
   for (var i = 0; i < n; ++i) {
      if (digit == 1) {
         queues[nums[i]%10].enqueue(nums[i]);
      }
      else {
         queues[Math.floor(nums[i] / 10)].enqueue(nums[i]);
      }
   }
}

function collect(queues, nums) {
   var i = 0;
   for (var digit = 0; digit < 10; ++digit) {
      while (!queues[digit].empty()) {
         nums[i++] = queues[digit].dequeue();
      }
   }
}

function dispArray(arr) {
   for (var i = 0; i < arr.length; ++i) {
      putstr(arr[i] + " ");
   }
}

// main program
var queues = [];
for (var i = 0; i < 10; ++i) {
   queues[i] = new Queue();
}
var nums = [];
for (var i = 0; i < 10; ++i) {
   // random integers from 0 to 99; two passes handle at most two digits
   nums[i] = Math.floor(Math.random() * 100);
}
print("Before radix sort: ");
dispArray(nums);
distribute(nums, queues, 10, 1);
collect(queues, nums);
distribute(nums, queues, 10, 10);
collect(queues, nums);
print("\n\nAfter radix sort: ");
dispArray(nums);
```

Here are a couple of runs of the program:

    Before radix sort: 45 72 93 51 21 16 70 41 27 31
    After radix sort: 16 21 27 31 41 45 51 70 72 93

    Before radix sort: 76 77 15 84 79 71 69 99 6 54
    After radix sort: 6 15 54 69 71 76 77 79 84 99

# Priority Queues

In the course of normal queue operations, when an element is removed from a queue, that element is always the first element that was inserted into the queue.
There are certain applications of queues, however, that require that elements be removed in an order other than first-in, first-out. When we need to simulate such an application, we need to create a data structure called a _priority queue_. A priority queue is one where elements are removed from the queue based on a priority constraint.

For example, the waiting room at a hospital's emergency department (ED) operates using a priority queue. When a patient enters the ED, he or she is seen by a triage nurse. This nurse's job is to assess the severity of the patient's condition and assign the patient a priority code. Patients with a high priority code are seen before patients with a lower priority code, and patients that have the same priority code are seen on a first-come, first-served, or first-in, first-out, basis.

Let's begin building a priority queue system by first defining an object that will store the elements of the queue:

```javascript
function Patient(name, code) {
   this.name = name;
   this.code = code;
}
```

The value for `code` will be an integer that represents the patient's priority, or severity.

Now we need to redefine the `dequeue()` function so that it removes the element in the queue with the highest priority. We will define the highest-priority element as being the element with the lowest code. This new `dequeue()` function will move through the queue's underlying array and find the element with the lowest code. Then the function uses the `splice()` function to remove this element. Here is the new definition for `dequeue()`:

```javascript
function dequeue() {
   var entry = 0;
   for (var i = 0; i < this.dataStore.length; ++i) {
      if (this.dataStore[i].code < this.dataStore[entry].code) {
         entry = i;
      }
   }
   return this.dataStore.splice(entry,1);
}
```

The `dequeue()` function uses a simple sequential search to find the element with the highest priority code (the lowest number; 1 has a higher priority than 10). The function returns an array of one element—the one removed from the queue.
Finally, we add a `toString()` function modified to handle `Patient` objects:

```javascript
function toString() {
   var retStr = "";
   for (var i = 0; i < this.dataStore.length; ++i) {
      retStr += this.dataStore[i].name + " code: " +
                this.dataStore[i].code + "\n";
   }
   return retStr;
}
```

Example 5-5 demonstrates how the priority queue system works.

##### Example 5-5. A priority queue implementation

```javascript
// enqueue patients
var ed = new Queue();
var p = new Patient("Smith",5);
ed.enqueue(p);
p = new Patient("Jones", 4);
ed.enqueue(p);
p = new Patient("Fehrenbach", 6);
ed.enqueue(p);
p = new Patient("Brown", 1);
ed.enqueue(p);
p = new Patient("Ingram", 1);
ed.enqueue(p);
// print queue
print(ed.toString());
// first round
seen = ed.dequeue();
print("Patient being treated: " + seen[0].name);
print("Patients waiting to be seen: ");
print(ed.toString());
// second round
seen = ed.dequeue();
print("Patient being treated: " + seen[0].name);
print("Patients waiting to be seen: ");
print(ed.toString());
// third round
seen = ed.dequeue();
print("Patient being treated: " + seen[0].name);
print("Patients waiting to be seen: ");
print(ed.toString());
// fourth round
seen = ed.dequeue();
print("Patient being treated: " + seen[0].name);
print("Patients waiting to be seen: ");
print(ed.toString());
```

Example 5-5 generates the following output:

    Smith code: 5
    Jones code: 4
    Fehrenbach code: 6
    Brown code: 1
    Ingram code: 1

    Patient being treated: Brown
    Patients waiting to be seen:
    Smith code: 5
    Jones code: 4
    Fehrenbach code: 6
    Ingram code: 1

    Patient being treated: Ingram
    Patients waiting to be seen:
    Smith code: 5
    Jones code: 4
    Fehrenbach code: 6

    Patient being treated: Jones
    Patients waiting to be seen:
    Smith code: 5
    Fehrenbach code: 6

    Patient being treated: Smith
    Patients waiting to be seen:
    Fehrenbach code: 6

# Exercises

1. Modify the `Queue` class to create a `Deque` class.
A deque is a queue-like structure that allows elements to be added and removed from both the front and the back of the list. Test your class in a program.
2. Use the `Deque` class you created in Exercise 1 to determine if a given word is a palindrome.
3. Modify the priority queue example from Example 5-5 so that the higher-priority elements have higher numbers rather than lower numbers. Test your implementation with the example in the chapter.
4. Modify the ED example (Example 5-5) so the user can control the activity in the ED. Create a menu system that allows the user to choose from the following activities:
   1. Patient enters ED.
   2. Patient is seen by doctor.
   3. Display list of patients waiting to be seen.

# Chapter 6. Linked Lists

In Chapter 3 we discussed the use of lists for storing data. The underlying data storage mechanism we use for lists is the array. In this chapter we'll discuss a different type of list, the _linked list_. We'll explain why linked lists are sometimes preferred to arrays, and we'll develop an object-based, linked-list implementation. We'll end the chapter with several examples of how linked lists can solve many programming problems you will encounter.

# Shortcomings of Arrays

There are several reasons arrays are not always the best data structure to use for organizing data. In many programming languages, arrays are fixed in length, so it is hard to add new data when the last element of the array is reached. Adding and removing data from an array is also difficult because you have to move array elements up or down to reflect either an addition or a deletion. However, these problems do not come up with JavaScript arrays, since we can use the `splice()` function without having to perform additional array element accesses.
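JavaScript's `splice()` method is what makes these in-place changes easy: it inserts or removes elements at any index without the manual shifting described above. A quick sketch of both uses (standard JavaScript, not code from the book):

```javascript
var groceries = ["milk", "bread", "eggs"];

// Insert "cookies" at index 2 (deleting 0 elements first):
groceries.splice(2, 0, "cookies");
console.log(groceries); // milk, bread, cookies, eggs

// Remove 1 element at index 1; splice() returns the removed elements:
var removed = groceries.splice(1, 1);
console.log(removed);   // bread
console.log(groceries); // milk, cookies, eggs
```

The engine still shifts elements internally, of course, which is part of the efficiency argument for linked lists that follows.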
The main problem with using JavaScript arrays, however, is that arrays in JavaScript are implemented as objects, causing them to be less efficient than arrays built in languages such as C++ and Java (see Crockford, Chapter 6).

When you determine that the operations performed on an array are too slow for practical use, you can consider using the linked list as an alternative data structure. The linked list can be used in almost every situation where a one-dimensional array is used, except when you need random access to the elements of a list. When random access is required, an array is the better data structure to use.

# Linked Lists Defined

A linked list is a collection of objects called _nodes_. Each node is linked to a successor node in the list using an object reference. The reference to another node is called a _link_. An example of a linked list is shown in Figure 6-1.

###### Figure 6-1. A linked list

While array elements are referenced by their position, linked-list elements are referenced by their relationship to the other elements of the linked list. In Figure 6-1, we say that "bread" follows "milk", not that "bread" is in the second position. Moving through a linked list involves following the links of the list from the beginning node to the end node (not including the header node, which is sometimes used as a hook for entry into a linked list). Something else to notice in the figure is that we mark the end of a linked list by pointing to a null node.

Marking the beginning of a linked list can be a problem. Many linked-list implementations include a special node, called the _head_, to denote the beginning of a linked list. The linked list shown in Figure 6-1 is redesigned in Figure 6-2 to include a head node.

###### Figure 6-2. A linked list with a head node

Inserting a new node into a linked list is a very efficient task.
To insert a new node, the link of the node before the inserted node (the _previous_ node) is changed to point to the new node, and the new node's link is set to the node the previous node was pointing to before the insertion. Figure 6-3 illustrates how "cookies" is added to the linked list after "eggs." ###### Figure 6-3. Inserting "cookies" into the linked list Removing an item from a linked list is also easy to do. The link of the node before the removed node is redirected to point to the node the removed node is pointing to, while also pointing the removed node to null, effectively taking the node out of the linked list. Figure 6-4 shows how "bacon" is removed from the linked list. ###### Figure 6-4. Removing "bacon" from the linked list There are other functions we can perform with a linked list, but insertion and removal are the two functions that best describe why linked lists are so useful. # An Object-Based Linked List Design Our design of a linked list will involve creating two classes. We'll create a `Node` class for adding nodes to a linked list, and we'll create a `LList` class, which will provide functions for inserting nodes, removing nodes, displaying a list, and other housekeeping functions. ## The Node Class The `Node` class consists of two properties: `element`, which stores the node's data, and `next`, which stores a link to the next node in the linked list. To create nodes, we'll use a constructor function that sets the values for the two properties: function Node(element) { this.element = element; this.next = null; } ## The Linked List Class The `LList` class provides the functionality for a linked list. The class includes functions for inserting new nodes, removing nodes, and finding a particular data value in a list. There is also a constructor function used for creating a new linked list. The only property stored in a linked list is a node to represent the head of the list.
Here is the definition for the constructor function: function LList() { this.head = new Node("head"); this.find = find; this.insert = insert; this.remove = remove; this.display = display; } The `head` node starts with its `next` property set to `null`; the `Node` constructor initializes `next`, and we don't modify it here. Once the first element is inserted into the list, the head's `next` property is changed to point to it. ## Inserting New Nodes The first function we'll examine is the `insert` function, which inserts a node into a list. To insert a new node, you have to specify which node you want to insert the new node before or after. We'll start by demonstrating how to insert a new node after an existing node. To insert a node after an existing node, we first have to find the "after" node. To do this, we create a helper function, `find()`, which searches through the linked list looking for the specified data. When this data is found, the function returns the node storing the data. Here is the code for the `find()` function: function find(item) { var currNode = this.head; while (currNode.element != item) { currNode = currNode.next; } return currNode; } The `find()` function demonstrates how to move through a linked list. First, we assign the `head` node to the variable `currNode`. Then we loop through the linked list, moving from one node to the next while the value of the current node's `element` property is not equal to the data we are searching for. If the search is successful, the function returns the node storing the data. Note that if the data is not in the list, `currNode` will eventually become `null` and the test of `currNode.element` will fail; as written, `find()` assumes the item is present. (Changing the loop condition to `while (currNode != null && currNode.element != item)` would make the function return `null` for missing data instead.) Once the "after" node is found, the new node is inserted into the linked list. First, the new node's `next` property is set to the value of the `next` property of the "after" node. Then the "after" node's `next` property is set to a reference to the new node.
Here is the definition of the `insert()` function: function insert(newElement, item) { var newNode = new Node(newElement); var current = this.find(item); newNode.next = current.next; current.next = newNode; } We're ready now to test our linked list code. However, before we do that, we need a function that will display the elements of a linked list. The `display()` function is defined below: function display() { var currNode = this.head; while (!(currNode.next === null)) { print(currNode.next.element); currNode = currNode.next; } } This function starts by hooking into the linked list by assigning the head of the list to the variable `currNode`. We then loop through the linked list, only stopping when the value of the current node's `next` property is set to `null`. In order to display only nodes with data in them (in other words, not the `head` node), we access the `element` property of the node that the current node is pointing to: currNode.next.element Finally, we need to add some code to use the linked list. Example 6-1 contains a short program that sets up a linked list of cities in western Arkansas that are located on Interstate 40, along with the complete `LList` class definition up to this point. Notice that the `remove()` function is commented out. It will be defined in the next section. ##### Example 6-1.
The `LList` class and a test program function LList() { this.head = new Node("head"); this.find = find; this.insert = insert; //this.remove = remove; this.display = display; } function find(item) { var currNode = this.head; while (currNode.element != item) { currNode = currNode.next; } return currNode; } function insert(newElement, item) { var newNode = new Node(newElement); var current = this.find(item); newNode.next = current.next; current.next = newNode; } function display() { var currNode = this.head; while (!(currNode.next === null)) { print(currNode.next.element); currNode = currNode.next; } } // main program var cities = new LList(); cities.insert("Conway", "head"); cities.insert("Russellville", "Conway"); cities.insert("Alma", "Russellville"); cities.display(); The output from Example 6-1 is: Conway Russellville Alma ## Removing Nodes from a Linked List In order to remove a node from a linked list, we need to find the node that is just before the node we want to remove. Once we find that node, we change its `next` property to no longer reference the removed node, and the previous node is modified to point to the node after the removed node. We can define a function, `findPrevious()`, to perform this task. This function traverses the linked list, stopping at each node to see if the next node stores the data that is to be removed. When the data is found, the function returns this node (the "previous" node), so that its `next` property can be modified. 
Here is the definition for `findPrevious()`: function findPrevious(item) { var currNode = this.head; while (!(currNode.next === null) && (currNode.next.element != item)) { currNode = currNode.next; } return currNode; } Now we're ready to write the `remove()` function: function remove(item) { var prevNode = this.findPrevious(item); if (!(prevNode.next == null)) { prevNode.next = prevNode.next.next; } } The main line of code in this function looks odd, but makes perfect sense: prevNode.next = prevNode.next.next We are just skipping over the node we want to remove and linking the "previous" node with the node just after the one we are removing. Refer back to Figure 6-4 if you need help visualizing this operation. We are ready to test our code again, but first we need to modify the constructor function for the `LList` class to include these new functions: function LList() { this.head = new Node("head"); this.find = find; this.insert = insert; this.display = display; this.findPrevious = findPrevious; this.remove = remove; } Example 6-2 provides a short program that tests the `remove()` function: ##### Example 6-2. Testing the `remove()` function var cities = new LList(); cities.insert("Conway", "head"); cities.insert("Russellville", "Conway"); cities.insert("Carlisle", "Russellville"); cities.insert("Alma", "Carlisle"); cities.display(); print(); cities.remove("Carlisle"); cities.display(); The output from Example 6-2 before the removal is: Conway Russellville Carlisle Alma But Carlisle is in eastern Arkansas, so we need to remove it from the list, resulting in the following output: Conway Russellville Alma Example 6-3 contains a complete listing of the `Node` class, the `LList` class, and our test program: ##### Example 6-3. 
The `Node` class and the `LList` class function Node(element) { this.element = element; this.next = null; } function LList() { this.head = new Node("head"); this.find = find; this.insert = insert; this.display = display; this.findPrevious = findPrevious; this.remove = remove; } function remove(item) { var prevNode = this.findPrevious(item); if (!(prevNode.next === null)) { prevNode.next = prevNode.next.next; } } function findPrevious(item) { var currNode = this.head; while (!(currNode.next === null) && (currNode.next.element != item)) { currNode = currNode.next; } return currNode; } function display() { var currNode = this.head; while (!(currNode.next === null)) { print(currNode.next.element); currNode = currNode.next; } } function find(item) { var currNode = this.head; while (currNode.element != item) { currNode = currNode.next; } return currNode; } function insert(newElement, item) { var newNode = new Node(newElement); var current = this.find(item); newNode.next = current.next; current.next = newNode; } var cities = new LList(); cities.insert("Conway", "head"); cities.insert("Russellville", "Conway"); cities.insert("Carlisle", "Russellville"); cities.insert("Alma", "Carlisle"); cities.display(); print(); cities.remove("Carlisle"); cities.display(); # Doubly Linked Lists Although traversing a linked list from the first node to the last node is straightforward, it is not as easy to traverse a linked list backward. We can simplify this procedure if we add a property to our `Node` class that stores a link to the previous node. When we insert a node into the list, we'll have to perform more operations to assign the proper links for the next and previous nodes, but we gain efficiency when we have to remove a node from the list, since we no longer have to search for the previous node. Figure 6-5 illustrates how a doubly linked list works. ###### Figure 6-5. 
A doubly linked list Our first task is to assign a `previous` property to our `Node` class: function Node(element) { this.element = element; this.next = null; this.previous = null; } The `insert()` function for a doubly linked list is similar to the `insert()` function for the singly linked list, except that we have to set the new node's `previous` property to point to the previous node. Here is the definition: function insert(newElement, item) { var newNode = new Node(newElement); var current = this.find(item); newNode.next = current.next; newNode.previous = current; current.next = newNode; } The `remove()` function for a doubly linked list is more efficient than for a singly linked list because we don't have to find the previous node. We first need to find the node in the list that is storing the data we want to remove. Then we set the `next` property of the node before it (reached through the deleted node's `previous` link) to the node pointed to by the deleted node's `next` property. Next, we redirect the `previous` property of the node the deleted node points to so that it points to the node before the deleted node. Finally, we set the deleted node's own `next` and `previous` properties to `null`, taking it out of the list completely. Figure 6-6 makes this situation easier to understand. ###### Figure 6-6. Removing a node from a doubly linked list Here is the code for the `remove()` function: function remove(item) { var currNode = this.find(item); if (!(currNode.next === null)) { currNode.previous.next = currNode.next; currNode.next.previous = currNode.previous; currNode.next = null; currNode.previous = null; } } (Note that the `if` test means this version does not handle removing the last node of the list, whose `next` property is `null`.) In order to perform tasks such as displaying a linked list in reverse order, we can use a utility function that finds the last node in a doubly linked list. The following function, `findLast()`, moves us to the last node of a list without going past the end of the list: function findLast() { var currNode = this.head; while (!(currNode.next === null)) { currNode = currNode.next; } return currNode; } With the `findLast()` function written, we can write a function to display the elements of a doubly linked list in reverse order.
Here is the code for the `dispReverse()` function: function dispReverse() { var currNode = this.head; currNode = this.findLast(); while (!(currNode.previous === null)) { print(currNode.element); currNode = currNode.previous; } } The last task to accomplish is to add these new functions to the constructor function for the doubly linked list class. Example 6-4 presents this code, along with the rest of the code for implementing a doubly linked list, as well as a short program to test the code. ##### Example 6-4. The `LList` class as a doubly linked list function Node(element) { this.element = element; this.next = null; this.previous = null; } function LList() { this.head = new Node("head"); this.find = find; this.insert = insert; this.display = display; this.remove = remove; this.findLast = findLast; this.dispReverse = dispReverse; } function dispReverse() { var currNode = this.head; currNode = this.findLast(); while (!(currNode.previous === null)) { print(currNode.element); currNode = currNode.previous; } } function findLast() { var currNode = this.head; while (!(currNode.next === null)) { currNode = currNode.next; } return currNode; } function remove(item) { var currNode = this.find(item); if (!(currNode.next === null)) { currNode.previous.next = currNode.next; currNode.next.previous = currNode.previous; currNode.next = null; currNode.previous = null; } } // findPrevious is no longer needed /*function findPrevious(item) { var currNode = this.head; while (!(currNode.next === null) && (currNode.next.element != item)) { currNode = currNode.next; } return currNode; }*/ function display() { var currNode = this.head; while (!(currNode.next === null)) { print(currNode.next.element); currNode = currNode.next; } } function find(item) { var currNode = this.head; while (currNode.element != item) { currNode = currNode.next; } return currNode; } function insert(newElement, item) { var newNode = new Node(newElement); var current = this.find(item); newNode.next = current.next; 
newNode.previous = current; current.next = newNode; } var cities = new LList(); cities.insert("Conway", "head"); cities.insert("Russellville", "Conway"); cities.insert("Carlisle", "Russellville"); cities.insert("Alma", "Carlisle"); cities.display(); print(); cities.remove("Carlisle"); cities.display(); print(); cities.dispReverse(); The output from Example 6-4 is: Conway Russellville Carlisle Alma Conway Russellville Alma Alma Russellville Conway # Circularly Linked Lists A circularly linked list is similar to a singly linked list and has the same type of nodes. The only difference is that a circularly linked list, when created, has its head node's `next` property point back to itself. This means that the assignment: head.next = head is propagated throughout the circularly linked list so that every new node has its `next` property pointing to the head of the list. In other words, the last node of the list is always pointing back to the head of the list, creating a circular list, as shown in Figure 6-7. ###### Figure 6-7. A circularly linked list The reason you might want to create a circularly linked list is if you want the ability to go backward through a list but don't want the extra overhead of creating a doubly linked list. You can move backward through a circularly linked list by moving forward through the end of the list to the node you are trying to reach. To create a circularly linked list, change the constructor function of the `LList` class to read: function LList() { this.head = new Node("head"); this.head.next = this.head; this.find = find; this.insert = insert; this.display = display; this.findPrevious = findPrevious; this.remove = remove; } This is the only change we have to make in order to make a singly linked list into a circularly linked list. However, some of the other linked list functions will not work correctly unmodified. For example, one function that needs to be modified is `display()`. 
As written, if the `display()` function is executed on a circularly linked list, the function will never stop. The condition of the `while` loop needs to change so that the head element is tested for and the loop will stop when it gets to the head. Here is how the `display()` function is written for a circularly linked list: function display() { var currNode = this.head; while (!(currNode.next === null) && !(currNode.next.element == "head")) { print(currNode.next.element); currNode = currNode.next; } } Seeing how to modify the `display()` function, you should be able to modify other functions from a standard linked list to make them work with a circularly linked list. # Other Linked List Functions There are several other functions you might include in order to have a well-functioning linked list. In the upcoming exercises, you will have the opportunity to develop some of these functions, including: `advance(n)` Advances _n_ nodes in the linked list `back(n)` Moves _n_ nodes backward in a doubly linked list `show()` Displays the current node only # Exercises 1. Implement the `advance(n)` function so that when executed, the current node is moved _n_ nodes forward in the list. 2. Implement the `back(n)` function so that when executed, the current node is moved _n_ spaces backward in the list. 3. Implement the `show()` function, which displays the data associated with the current node. 4. Write a program that uses a singly linked list to keep track of a set of test grades entered interactively into the program. 5. Rewrite your solution to Example 6-4 using a doubly linked list. 6. According to legend, the first-century Jewish historian Flavius Josephus was about to be captured along with a band of 40 compatriots by Roman soldiers during the Jewish-Roman War. The Jewish soldiers decided that they preferred suicide to being captured and devised a plan for their demise. They were to form a circle and kill every third soldier until they were all dead. 
Josephus and one other decided they wanted no part of this and quickly calculated where they needed to place themselves so they would be the last survivors. Write a program that allows you to place _n_ people in a circle and specify that every _m_ th person will be killed. The program should determine the number of the last two people left in the circle. Use a circularly linked list to solve the problem. # Chapter 7. Dictionaries A dictionary is a data structure that stores data as _key-value_ pairs, such as the way a phone book stores its data as names and phone numbers. When you look for a phone number, you first search for the name, and when you find the name, the phone number is found right next to the name. The key is the element you use to perform a search, and the value is the result of the search. The JavaScript `Object` class is designed to operate as a dictionary. In this chapter we'll use the features of the `Object` class to build a `Dictionary` class that simplifies working with a dictionary-type object. You can perform the same functions shown in this chapter using just JavaScript arrays and objects, but creating a `Dictionary` class makes doing the work easier and more fun. For example, it's a lot easier to call a function such as `find(key)` than to work with `[]` notation directly. There is also, of course, the advantage of being able to define functions for performing collective operations, such as displaying all entries in a dictionary, rather than having to write loops in the main program to perform the same operations. # The Dictionary Class The basis for the `Dictionary` class is an `Object` accessed using array notation, since objects in JavaScript are _associative arrays_. This approach allows us to dynamically add key-value pairs and to use string keys rather than just numeric indices.
We'll start our definition of the `Dictionary` class with this code: function Dictionary() { this.datastore = {}; } The first function to define is `add()`. This function takes two arguments, a key and a value. The key is the index for the value element. Here is the code: function add(key, value) { this.datastore[key] = value; } Next, we define the `find()` function. This function takes a key as its argument and returns the value associated with it. The code looks like this: function find(key) { return this.datastore[key]; } Removing a key-value pair from a dictionary involves using a built-in JavaScript operator: `delete`. This operator takes a reference to an object property (here, a key in the datastore) as its operand and deletes both the key and the associated value. Here is the definition of our `remove()` function: function remove(key) { delete this.datastore[key]; } Finally, we'd like to be able to view all the key-value pairs in a dictionary, so here is a function that accomplishes this task: function showAll() { for (var key in this.datastore) { print(key + " -> " + this.datastore[key]); } } Example 7-1 provides the definition of the `Dictionary` class up to this point. ##### Example 7-1. The `Dictionary` class function Dictionary() { this.add = add; this.datastore = {}; this.find = find; this.remove = remove; this.showAll = showAll; } function add(key, value) { this.datastore[key] = value; } function find(key) { return this.datastore[key]; } function remove(key) { delete this.datastore[key]; } function showAll() { for (var key in this.datastore) { print(key + " -> " + this.datastore[key]); } } A program that uses the `Dictionary` class is shown in Example 7-2. ##### Example 7-2.
Using the `Dictionary` class load("dictionary.js"); var pbook = new Dictionary(); pbook.add("Mike","123"); pbook.add("David", "345"); pbook.add("Cynthia", "456"); print("David's extension: " + pbook.find("David")); pbook.remove("David"); pbook.showAll(); The output from this program is: David's extension: 345 Mike -> 123 Cynthia -> 456 # Auxiliary Functions for the Dictionary Class We can define several functions that can help in special situations. For example, it is nice to know how many entries there are in a dictionary. Here is a `count()` function definition: function count() { var n = 0; for (var key in this.datastore) { ++n; } return n; } You might be wondering why the `length` property wasn't used for the `count()` function. The reason is that `length` doesn't work with an object, even one being accessed using array functionality. The use of string keys precludes some Array object functionality. For example: var nums = []; nums[0] = 1; nums[1] = 2; print(nums.length); // displays 2 var pbook = []; pbook["David"] = 1; pbook["Jennifer"] = 2; print(pbook.length); // displays 0 Another helper function we can use is a `clear()` function. Here's the definition: function clear() { for (var key in this.datastore) { delete this.datastore[key]; } } Example 7-3 updates the complete `Dictionary` class definition. ##### Example 7-3. 
Updated `Dictionary` class definition function Dictionary() { this.add = add; this.datastore = {}; this.find = find; this.remove = remove; this.showAll = showAll; this.count = count; this.clear = clear; } function add(key, value) { this.datastore[key] = value; } function find(key) { return this.datastore[key]; } function remove(key) { delete this.datastore[key]; } function showAll() { for (var key in this.datastore) { print(key + " -> " + this.datastore[key]); } } function count() { var n = 0; for (var key in this.datastore) { ++n; } return n; } function clear() { for (var key in this.datastore) { delete this.datastore[key]; } } Example 7-4 illustrates how these new auxiliary functions work. ##### Example 7-4. Using the `count()` and `clear()` functions load("dictionary.js"); var pbook = new Dictionary(); pbook.add("Raymond","123"); pbook.add("David", "345"); pbook.add("Cynthia", "456"); print("Number of entries: " + pbook.count()); print("David's extension: " + pbook.find("David")); pbook.showAll(); pbook.clear(); print("Number of entries: " + pbook.count()); The output from this code is: Number of entries: 3 David's extension: 345 Raymond -> 123 David -> 345 Cynthia -> 456 Number of entries: 0 # Adding Sorting to the Dictionary Class The primary purpose of a dictionary is to retrieve a value by referencing its key. The actual order that the dictionary items are stored in is not a primary concern. However, many people like to see a listing of a dictionary in sorted order. Let's see what it takes to display our dictionary items in sorted order. Arrays can be sorted. For example: var a = []; a[0] = "Mike"; a[1] = "David"; print(a); // displays Mike,David a.sort(); print(a); // displays David,Mike We can't perform the same test with string keys, however. The output from the program is empty. This is much the same problem we had earlier trying to define a `count()` function. This isn't really a problem, however. 
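One reason it isn't a problem is the standard `Object.keys()` function, which returns a true _array_ of an object's own string keys; that array has a working `length` and can be sorted. As a quick aside (this snippet is mine, not one of the book's numbered examples):

```javascript
var datastore = {};
datastore["Mike"] = "123";
datastore["David"] = "345";
datastore["Cynthia"] = "456";

// The keys come back as a real array, so length and sort() both work
var keys = Object.keys(datastore);
var n = keys.length;   // 3
keys.sort();           // ["Cynthia", "David", "Mike"]
```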
All that matters to the user of the class is that when the dictionary's contents are displayed, the results are in sorted order. We can use the `Object.keys()` function to solve this problem. Here is a new definition for the `showAll()` function: function showAll() { var keys = Object.keys(this.datastore); keys.sort(); for (var i = 0; i < keys.length; i++) { print(keys[i] + " -> " + this.datastore[keys[i]]); } } The only difference between this definition of the function and our earlier definition is we've added a call to `sort()` after we obtain the keys from the `datastore` array via the `Object.keys()` function. Example 7-5 demonstrates how this new function definition is used to display a sorted list of names and numbers. ##### Example 7-5. A sorted dictionary display load("dictionary2.js"); var pbook = new Dictionary(); pbook.add("Raymond","123"); pbook.add("David", "345"); pbook.add("Cynthia", "456"); pbook.add("Mike", "723"); pbook.add("Jennifer", "987"); pbook.add("Danny", "012"); pbook.add("Jonathan", "666"); pbook.showAll(); Here is the output of the program: Cynthia -> 456 Danny -> 012 David -> 345 Jennifer -> 987 Jonathan -> 666 Mike -> 723 Raymond -> 123 # Exercises 1. Write a program that takes a set of names and phone numbers from a text file and stores them in a `Dictionary` object. Include in your program the ability to display one phone number, display all phone numbers, add new phone numbers, remove phone numbers, and clear out the list of numbers. 2. Using the `Dictionary` class, write a program that stores the number of occurrences of words in a text. Your program should display each word in a text just once as well as the number of times the word occurs in the text. For example, given the text "the brown fox jumped over the blue fox," the output will be: the: 2 brown: 1 fox: 2 jumped: 1 over: 1 blue: 1 3. Rewrite exercise 2 so that it displays the words in sorted order. # Chapter 8. 
Hashing Hashing is a common technique for storing data in such a way that the data can be inserted and retrieved very quickly. Hashing uses a data structure called a _hash table_. Although hash tables provide fast insertion, deletion, and retrieval, they perform poorly for operations that involve searching, such as finding the minimum and maximum values in a data set. For these operations, other data structures such as the binary search tree are more appropriate. We'll learn how to implement a hash table in this chapter and learn when it's appropriate to use hashing as a data storage and retrieval technique. # An Overview of Hashing The hash table data structure is designed around an array. The array consists of elements 0 through some predetermined size, though we can increase the size when necessary. Each data element is stored in the array based on an associated data element called the _key_ , which is similar to the concept of the key we examined with the dictionary data structure. To store a piece of data in a hash table, the key is mapped into a number in the range of 0 through the hash table size, using a _hash function_. Ideally, the hash function stores each key in its own array element. However, because there are an unlimited number of possible keys and a limited number of array elements (theoretical in JavaScript), a more realistic goal of the hash function is to attempt to distribute the keys as evenly as possible among the elements of the array. Even with an efficient hash function, it is possible for two keys to `hash` (the result of the hash function) to the same value. This is called a _collision_ , and we need a strategy for handling collisions when they occur. We'll discuss how to deal with collisions in detail later in the chapter. The last thing we have to determine when creating a hash function is how large an array to create for the hash table. One constraint usually placed on the array size is that it should be a prime number. 
We will explain why this number should be prime when we examine the different hash functions. After that, there are several different strategies for determining the correct array size, all of them based on the technique used to handle collisions, so we will examine this issue when we discuss handling collisions. Figure 8-1 illustrates the concept of hashing using the example of a small phone book. ###### Figure 8-1. Hashing names and telephone numbers # A Hash Table Class We need a class to represent the hash table. The class will include functions for computing hash values, a function for inserting data into the hash table, a function for retrieving data from the hash table, and a function for displaying the distribution of data in the hash table, as well as various utility functions we might need. Here is the constructor function for our `HashTable` class: function HashTable() { this.table = new Array(137); this.simpleHash = simpleHash; this.showDistro = showDistro; this.put = put; //this.get = get; } The `get()` function is commented out for now; we'll describe its definition later in the chapter. ## Choosing a Hash Function The choice of a hash function depends on the data type of the key. If your key is an integer, then the simplest hash function is to return the key modulo the size of the array. There are circumstances when this function is not recommended, such as when the keys all end in 0 and the array size is 10. This is one reason the array size should always be a prime number, such as 137, which is the value we used in the preceding constructor function. Also, if the keys are random integers, then the hash function should more evenly distribute the keys. This type of hashing is known as _modular_ hashing. In many applications, the keys are strings. Choosing a hash function to work with string keys proves to be more difficult and should be chosen carefully. 
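Before turning to strings, here is a minimal sketch of the integer-key case just described (the function name and sample keys are illustrative, not from the book):

```javascript
var tableSize = 137;  // a prime, matching the array size used in the constructor

// Modular hashing: map an integer key to an index in the range 0..tableSize-1
function intHash(key) {
    return key % tableSize;
}

intHash(137);   // 0  (a key equal to the table size wraps around to slot 0)
intHash(2500);  // 34 (2500 = 18 * 137 + 34)
```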
A simple hash function that at first glance seems to work well is to sum the ASCII value of the letters in the key. The hash value is then that sum modulo the array size. Here is the definition for this simple hash function: function simpleHash(data) { var total = 0; for (var i = 0; i < data.length; ++i) { total += data.charCodeAt(i); } return total % this.table.length; } We can finish up this first attempt at the `HashTable` class with definitions for `put()` and `showDistro()`, which place the data in the hash table and display the data from the hash table respectively. Here is the complete class definition: function HashTable() { this.table = new Array(137); this.simpleHash = simpleHash; this.showDistro = showDistro; this.put = put; //this.get = get; } function put(data) { var pos = this.simpleHash(data); this.table[pos] = data; } function simpleHash(data) { var total = 0; for (var i = 0; i < data.length; ++i) { total += data.charCodeAt(i); } return total % this.table.length; } function showDistro() { var n = 0; for (var i = 0; i < this.table.length; ++i) { if (this.table[i] != undefined) { print(i + ": " + this.table[i]); } } } Example 8-1 demonstrates how the `simpleHash()` function works. ##### Example 8-1. Hashing using a simple hash function load("HashTable.js"); var someNames = ["David", "Jennifer", "Donnie", "Raymond", "Cynthia", "Mike", "Clayton", "Danny", "Jonathan"]; var hTable = new HashTable(); for (var i = 0; i < someNames.length; ++i) { hTable.put(someNames[i]); } hTable.showDistro(); Here is the output from Example 8-1: 35: Cynthia 45: Clayton 57: Donnie 77: David 95: Danny 116: Mike 132: Jennifer 134: Jonathan The `simpleHash()` function computes a hash value by summing the ASCII value of each name using the JavaScript function `charCodeAt()` to return a character's ASCII value. The `put()` function receives the array index value from the `simpleHash()` function and stores the data element in that position. 
The `showDistro()` function displays where the names are actually placed into the array using the hash function. As you can see, the data is not particularly evenly distributed. The names are bunched up at the beginning and at the end of the array. There is an even bigger problem than just the uneven distribution of names in the array, however. If you pay close attention to the output, you'll see that not all the names in the original array of names are displayed. Let's investigate further by adding a `print()` statement to the `simpleHash()` function:

```javascript
function simpleHash(data) {
  var total = 0;
  for (var i = 0; i < data.length; ++i) {
    total += data.charCodeAt(i);
  }
  print("Hash value: " + data + " -> " + total);
  return total % this.table.length;
}
```

When we run the program again, we see the following output:

```
Hash value: David -> 488
Hash value: Jennifer -> 817
Hash value: Donnie -> 605
Hash value: Raymond -> 730
Hash value: Cynthia -> 720
Hash value: Mike -> 390
Hash value: Clayton -> 730
Hash value: Danny -> 506
Hash value: Jonathan -> 819
35: Cynthia
45: Clayton
57: Donnie
77: David
95: Danny
116: Mike
132: Jennifer
134: Jonathan
```

The problem is now apparent: the strings `"Clayton"` and `"Raymond"` hash to the same value, causing a collision. Because of the collision, only `"Clayton"` is stored in the hash table. We can improve our hash function to avoid such collisions, as discussed in the next section.

## A Better Hash Function

To avoid collisions, you first need to make sure the array you are using for the hash table is sized to a prime number. This is necessary due to the use of modular arithmetic in computing the key. The size of the array needs to be greater than 100 in order to more evenly disperse the keys in the table. Through experimentation, we found that the first prime number greater than 100 that didn't cause collisions for the data set used in Example 8-1 is 137.
When smaller prime numbers close to 100 were used, there were still collisions in the data set. After properly sizing the hash table, the next step to avoiding hashing collisions is to compute a better hash value. An algorithm known as Horner's method does the trick. Without getting too deep into the mathematics of the algorithm, our new hash function still works by summing the ASCII values of the characters of a string, but it adds a step by multiplying the resulting total by a prime constant. Most algorithm textbooks suggest a small prime number, such as 31, which worked without collisions with our test data set. We now present a new, better hash function utilizing Horner's method:

```javascript
function betterHash(string, arr) {
  var H = 31;
  var total = 0;
  for (var i = 0; i < string.length; ++i) {
    total += H * total + string.charCodeAt(i);
  }
  total = total % arr.length;
  return parseInt(total);
}
```

Example 8-2 contains the current definition of the `HashTable` class.

##### Example 8-2. The `HashTable` class with the `betterHash()` function

```javascript
function HashTable() {
  this.table = new Array(137);
  this.simpleHash = simpleHash;
  this.betterHash = betterHash;
  this.showDistro = showDistro;
  this.put = put;
  //this.get = get;
}

function put(data) {
  var pos = this.betterHash(data);
  this.table[pos] = data;
}

function simpleHash(data) {
  var total = 0;
  for (var i = 0; i < data.length; ++i) {
    total += data.charCodeAt(i);
  }
  print("Hash value: " + data + " -> " + total);
  return total % this.table.length;
}

function showDistro() {
  for (var i = 0; i < this.table.length; ++i) {
    if (this.table[i] !== undefined) {
      print(i + ": " + this.table[i]);
    }
  }
}

function betterHash(string) {
  var H = 31;
  var total = 0;
  for (var i = 0; i < string.length; ++i) {
    total += H * total + string.charCodeAt(i);
  }
  total = total % this.table.length;
  if (total < 0) {
    total += this.table.length - 1;
  }
  return parseInt(total);
}
```

Notice that the `put()` function is now using `betterHash()` rather than `simpleHash()`.
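The arithmetic behind Horner's method is worth seeing on its own: the loop evaluates a polynomial in the constant H without ever computing a power explicitly. This sketch uses the textbook form `total = H * total + c`; the `betterHash()` above also folds the previous total in with `+=`, which amounts to using a multiplier of H + 1, but the principle is the same. (Plain console JavaScript; the string `"abc"` is just a test value.)

```javascript
var H = 31;
var s = "abc";

// Loop form (Horner's method): ((c0*H + c1)*H + c2) ...
var horner = 0;
for (var i = 0; i < s.length; ++i) {
  horner = H * horner + s.charCodeAt(i);
}

// Expanded polynomial: c0*H^(n-1) + c1*H^(n-2) + ... + c(n-1)*H^0
var poly = 0;
for (var i = 0; i < s.length; ++i) {
  poly += s.charCodeAt(i) * Math.pow(H, s.length - 1 - i);
}

console.log(horner === poly); // true: the loop evaluates the polynomial
console.log(horner);          // 96354
```

In the real hash function this total is then reduced modulo the table length, exactly as `betterHash()` does.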
The program in Example 8-3 tests our new hash function.

##### Example 8-3. Testing the `betterHash()` function

```javascript
load("betterhash.js");

var someNames = ["David", "Jennifer", "Donnie", "Raymond", "Cynthia",
                 "Mike", "Clayton", "Danny", "Jonathan"];
var hTable = new HashTable();
for (var i = 0; i < someNames.length; ++i) {
  hTable.put(someNames[i]);
}
hTable.showDistro();
```

The result of running this program is:

```
3: David
25: Raymond
37: Donnie
61: Jonathan
75: Danny
82: Mike
102: Jennifer
130: Clayton
131: Cynthia
```

All nine names are now present and accounted for.

## Hashing Integer Keys

In the last section we worked with string keys. In this section, we introduce how to hash integer keys. The data set we're working with is student grades. The key is a nine-digit student identification number, which we will generate randomly, along with the student's grade. Here are the functions we use to generate the student data (ID and grade):

```javascript
function getRandomInt(min, max) {
  return Math.floor(Math.random() * (max - min + 1)) + min;
}

function genStuData(arr) {
  for (var i = 0; i < arr.length; ++i) {
    var num = "";
    for (var j = 1; j <= 9; ++j) {
      num += Math.floor(Math.random() * 10);
    }
    num += getRandomInt(50, 100);
    arr[i] = num;
  }
}
```

The `getRandomInt()` function allows us to specify a maximum and minimum random number. For a set of student grades, it is reasonable to say that the minimum grade is 50 and the maximum grade is 100. The `genStuData()` function generates the student data. The inner loop generates the student ID number, and right after the inner loop finishes, a random grade is generated and concatenated to the student ID. Our main program will separate the ID from the grade. The hash function will total the individual digits in the student ID to compute a hash value using the `simpleHash()` function. Example 8-4 presents a program that uses the original `HashTable` functionality and the new functions to store a set of students and their grades.

##### Example 8-4.
Hashing integer keys

```javascript
function getRandomInt(min, max) {
  return Math.floor(Math.random() * (max - min + 1)) + min;
}

function genStuData(arr) {
  for (var i = 0; i < arr.length; ++i) {
    var num = "";
    for (var j = 1; j <= 9; ++j) {
      num += Math.floor(Math.random() * 10);
    }
    num += getRandomInt(50, 100);
    arr[i] = num;
  }
}

load("HashTable.js");

var numStudents = 10;
var arrSize = 97;
var idLen = 9;
var students = new Array(numStudents);
genStuData(students);
print("Student data: \n");
for (var i = 0; i < students.length; ++i) {
  print(students[i].substring(0, 8) + " " + students[i].substring(9));
}
print("\n\nData distribution: \n");
var hTable = new HashTable();
for (var i = 0; i < students.length; ++i) {
  hTable.put(students[i]);
}
hTable.showDistro();
```

The output from Example 8-4 is:

```
Student data:

45337671 91
97949453 89
83030638 82
10682591 78
05789018 86
76750339 85
16627331 84
82500333 62
04734766 95
00848878 65

Data distribution:

15: 82500333362
24: 16627331384
26: 83030638582
30: 45337671491
35: 04734766495
36: 00848878265
37: 76750339485
50: 97949453389
```

Once again, our hash function creates a collision, and not all of the data is stored in the array. Actually, if you run the program several times, all of the data will sometimes get stored, but the results are far from consistent. We can play around with array sizes to see if we can fix the problem, or we can simply change the hash function called by the `put()` function and use `betterHash()`. When using `betterHash()` with the student data, we get the following output:

```
Student data:

88793345 50
95713806 51
41222483 98
89264661 66
46867539 81
75890255 82
10989115 81
42498519 52
29731650 73
00514025 55

Data distribution:

18: 46867539781
51: 10989115081
63: 42498519652
90: 00514025355
101: 88793345350
123: 75890255682
127: 89264661866
129: 95713806451
133: 29731650173
135: 41222483698
```

The lesson here is obvious: `betterHash()` is the superior hashing function for strings and for integers.
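The earlier string collision can also be checked directly: "Clayton" and "Raymond" have the same ASCII sum (730), so `simpleHash()` maps both to slot 45 of a 137-slot table, while `betterHash()` keeps them apart. This standalone sketch duplicates the two hash functions with the table size fixed at 137 so it can run outside the class (plain console JavaScript, no `this`):

```javascript
var SIZE = 137; // same prime table size as the HashTable class

// ASCII-sum hash: "Clayton" and "Raymond" both sum to 730.
function simpleHash(data) {
  var total = 0;
  for (var i = 0; i < data.length; ++i) {
    total += data.charCodeAt(i);
  }
  return total % SIZE;
}

// Horner-style hash using the book's formulation.
function betterHash(data) {
  var H = 31;
  var total = 0;
  for (var i = 0; i < data.length; ++i) {
    total += H * total + data.charCodeAt(i);
  }
  return total % SIZE;
}

console.log(simpleHash("Clayton"));                              // 45
console.log(simpleHash("Clayton") === simpleHash("Raymond"));    // true: a collision
console.log(betterHash("Clayton") === betterHash("Raymond"));    // false: no collision
```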
## Storing and Retrieving Data in a Hash Table

Now that we've covered hash functions, we can apply this knowledge to use a hash table to actually store data. To do this, we have to modify the `put()` function so that it accepts both a key and data, hashes the key, and then uses that information to store the data in the table. Here is the definition of the new `put()` function:

```javascript
function put(key, data) {
  var pos = this.betterHash(key);
  this.table[pos] = data;
}
```

The `put()` function hashes the key and then stores the data in the position of the table computed by the hash function. Next we need to define the `get()` function so that we can retrieve data stored in a hash table. This function must, again, hash the key so that it can determine where the data is stored, and then retrieve the data from its position in the table. Here is the definition:

```javascript
function get(key) {
  return this.table[this.betterHash(key)];
}
```

Here is a program to test the `put()` and `get()` functions:

```javascript
load("betterhash2.js");

var pnumbers = new HashTable();
var name, number;
while (name != "finished") {
  putstr("Enter a name (or 'finished' when done): ");
  name = readline();
  if (name == "finished") {
    break;
  }
  putstr("Enter a number: ");
  number = readline();
  pnumbers.put(name, number);
}
name = "";
putstr("Name for number (Enter quit to stop): ");
while (name != "quit") {
  name = readline();
  if (name == "quit") {
    break;
  }
  print(name + "'s number is " + pnumbers.get(name));
  putstr("Name for number (Enter quit to stop): ");
}
```

This program allows you to enter names and numbers until you type in _finished_, and it will retrieve numbers based on names until you tell the program to quit.

# Handling Collisions

A collision occurs when a hash function generates the same key for two or more values. The second part of a hash algorithm involves resolving collisions so that all keys are stored in the hash table. In this section, we look at two means of collision resolution: _separate chaining_ and _linear probing_.
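The danger these techniques address is easy to provoke. With the overwriting `put()` defined above, any two keys that hash to the same slot silently clobber each other. This standalone sketch (plain console JavaScript; `simpleHash()` is used so the "Clayton"/"Raymond" collision is guaranteed, and the phone numbers are made up) shows Raymond's number disappearing:

```javascript
var table = new Array(137);

// ASCII-sum hash over a fixed 137-slot table.
function simpleHash(data) {
  var total = 0;
  for (var i = 0; i < data.length; ++i) {
    total += data.charCodeAt(i);
  }
  return total % table.length;
}

// Overwriting put()/get(): no collision handling at all.
function put(key, data) {
  table[simpleHash(key)] = data;
}

function get(key) {
  return table[simpleHash(key)];
}

put("Raymond", "555-1236");
put("Clayton", "555-9955"); // same slot: overwrites Raymond's number

console.log(get("Raymond")); // 555-9955 -- the wrong number
```

Separate chaining and linear probing each keep both entries alive instead of losing one.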
## Separate Chaining

When a collision occurs, we still need to be able to store the key at the generated index, but it is physically impossible to store more than one piece of data in an array element. Separate chaining is a technique in which each array element of a hash table stores another data structure, such as another array, which is then used to store keys. Using this technique, if two keys generate the same hash value, each key can be stored in a different position of the secondary array. Figure 8-2 illustrates how separate chaining works.

###### Figure 8-2. Separate chaining

To implement separate chaining, after we create the array to store the hashed keys, we call a function that assigns an empty array to each array element of the hash table. This creates a two-dimensional array (see Chapter 3 for an explanation of two-dimensional arrays). The following code defines the function, `buildChains()`, that creates the second array (we'll also refer to this array as a _chain_):

```javascript
function buildChains() {
  for (var i = 0; i < this.table.length; ++i) {
    this.table[i] = [];
  }
}
```

Add the preceding code, along with a declaration of the function, to the definition of the `HashTable` class. In order to properly display the distribution after hashing with separate chaining, we need to modify the `showDistro()` function in the following way to recognize that the hash table is now a multidimensional array:

```javascript
function showDistro() {
  for (var i = 0; i < this.table.length; ++i) {
    if (this.table[i][0] !== undefined) {
      print(i + ": " + this.table[i]);
    }
  }
}
```

Next we need to define the `put()` and `get()` functions that will work with separate chaining. The `put()` function hashes the key and then attempts to store the data in the first cell of the chain at the hashed position. If that cell already has data in it, the function searches for the first open cell and stores the data in that cell.
Here is the code for the `put()` function:

```javascript
function put(data) {
  var key = this.betterHash(data);
  var index = 0;
  if (this.table[key][index] == undefined) {
    this.table[key][index] = data;
  } else {
    while (this.table[key][index] !== undefined) {
      ++index;
    }
    this.table[key][index] = data;
  }
}
```

The version of `put()` shown here stores only the data itself; the data acts as its own key. To store key-value pairs with separate chaining, the same scheme uses a pair of the chain's cells: the first cell stores the key, and the adjacent cell stores the value. That is the layout the following `get()` function assumes. The `get()` function starts out by hashing the key to get the position of the key in the hash table. Then the function searches the cells of the chain until it finds the key it is looking for, and returns the data from the cell adjacent to the key's cell. If the key is not found, the function returns `undefined`. Here is the code:

```javascript
function get(key) {
  var index = 0;
  var pos = this.betterHash(key);
  // Keys occupy the even cells of the chain; values occupy the adjacent odd cells.
  while (this.table[pos][index] !== undefined) {
    if (this.table[pos][index] == key) {
      return this.table[pos][index + 1];
    }
    index += 2;
  }
  return undefined;
}
```

A program to test separate chaining is shown in Example 8-5. The loop that loads the names is run twice to deliberately create collisions, which are then handled by the chaining.

##### Example 8-5.
Using separate chaining to avoid collisions

```javascript
load("separatechain.js");

var hTable = new HashTable();
hTable.buildChains();
var someNames = ["David", "Jennifer", "Donnie", "Raymond", "Cynthia",
                 "Mike", "Clayton", "Danny", "Jonathan"];
for (var i = 0; i < someNames.length; ++i) {
  hTable.put(someNames[i]);
}
for (var i = 0; i < someNames.length; ++i) {
  hTable.put(someNames[i]);
}
hTable.showDistro();
```

When we run the program in Example 8-5, we get the following output:

```
3: David,David
25: Raymond,Raymond
37: Donnie,Donnie
61: Jonathan,Jonathan
75: Danny,Danny
82: Mike,Mike
102: Jennifer,Jennifer
130: Clayton,Clayton
131: Cynthia,Cynthia
```

## Linear Probing

A second technique for handling collisions is called _linear probing_. Linear probing is an example of a more general hashing technique called _open-addressing hashing_. With linear probing, when there is a collision, the program simply checks whether the next element of the hash table is empty. If so, the key is placed in that element. If the element is not empty, the program continues to search for an empty hash-table element until one is found. This technique takes advantage of the fact that any hash table is going to have many empty elements, and it makes sense to use that space to store keys. Linear probing should be chosen over separate chaining when the array used for storing data can be fairly large. Here is a rule of thumb for deciding between the two: if the size of the array can be up to half the number of elements to be stored, use separate chaining; but if the size of the array can be twice the number of elements to be stored, use linear probing. To demonstrate how linear probing works, we can rewrite the `put()` and `get()` functions to work with linear probing. In order to create a realistic data-retrieval system, we have to modify the `HashTable` class by adding a second array to store values.
The `table` array and the `values` array work in parallel, so that when we store a key in a position in the `table` array, we store a value in the corresponding position in the `values` array. Add the following code to the `HashTable` constructor:

```javascript
this.values = [];
```

Now we can define the `put()` function for linear probing:

```javascript
function put(key, data) {
  var pos = this.betterHash(key);
  if (this.table[pos] === undefined) {
    this.table[pos] = key;
    this.values[pos] = data;
  } else {
    while (this.table[pos] !== undefined) {
      pos++;
    }
    this.table[pos] = key;
    this.values[pos] = data;
  }
}
```

The `get()` function begins searching the hash table at the hashed position of the key. If the key is found at that position, the corresponding data in the `values` array is returned. If the keys don't match, the function probes the following cells until it either finds the key or reaches a cell that is undefined, meaning the key was never placed into the hash table. Here's the code:

```javascript
function get(key) {
  var pos = this.betterHash(key);
  while (this.table[pos] !== undefined) {
    if (this.table[pos] == key) {
      return this.values[pos];
    }
    pos++; // probe the next cell
  }
  return undefined;
}
```

# Exercises

1. Use linear probing to create a simple dictionary to store the definitions of words. Your program should have two parts. The first part reads a text file that contains a list of words and definitions and stores them in a hash table. The second part of the program allows a user to enter a word and see the definition of that word.
2. Repeat exercise 1 using separate chaining.
3. Write a program using hashing that reads a text file and compiles a list of the words in the file along with the number of times each word appears in the file.

# Chapter 9. Sets

A `set` is a collection of unique elements. The elements of a set are called members.
The two most important properties of sets are that the members of a set are unordered and that no member can occur in a set more than once. Sets play a very important role in computer science but are not considered a data type in many programming languages. Sets can be useful when you want to create a data structure that consists only of unique elements, such as a list of each unique word in a text. This chapter discusses how to create a `Set` class for JavaScript.

# Fundamental Set Definitions, Operations, and Properties

A set is an unordered collection of related members in which no member occurs more than once. A set is denoted mathematically as a list of members surrounded by curly braces, such as {0,1,2,3,4,5,6,7,8,9}. We can write a set in any order, so the previous set can be written as {9,0,8,1,7,2,6,3,5,4} or any other combination of the members such that all the members are written just once.

## Set Definitions

Here are some definitions you need to know to work with sets:

* A set containing no members is called the _empty set_. The _universe_ is the set of all possible members.
* Two sets are considered equal if they contain exactly the same members.
* A set is considered a _subset_ of another set if all the members of the first set are contained in the second set.

## Set Operations

The fundamental operations performed on sets are:

* _Union_: A new set is obtained by combining the members of one set with the members of another set.
* _Intersection_: A new set is obtained by adding all the members of one set that also exist in a second set.
* _Difference_: A new set is obtained by adding all the members of one set except those that also exist in a second set.

# The Set Class Implementation

The `Set` class implementation is built around an array for storing the data. We also create functions for each of the set operations outlined above.
Here is the definition for the constructor function:

```javascript
function Set() {
  this.dataStore = [];
  this.add = add;
  this.remove = remove;
  this.size = size;
  this.union = union;
  this.intersect = intersect;
  this.subset = subset;
  this.difference = difference;
  this.show = show;
}
```

###### Note

An array is used rather than the new ECMAScript 6 `Set`, because `Set` has limited support at this time.

Let's look at the `add()` function first:

```javascript
function add(data) {
  if (this.dataStore.indexOf(data) < 0) {
    this.dataStore.push(data);
    return true;
  } else {
    return false;
  }
}
```

Because a set can only contain unique members, before the `add()` function can store data in the array, it must check to make sure the data isn't already in the array. We use the `indexOf()` function to check the array for the requested data. This function returns the position of an element in an array, or the value `-1` if the array doesn't contain the element. If the data isn't stored in the array, the function pushes the data onto the array and returns `true`. Otherwise, the function returns `false`. We write `add()` as a Boolean function so we have a way of knowing for sure whether or not the data was added to the set.

The `remove()` function works similarly to the `add()` function. We first check to see if the requested data is in the array. If it is, we call the `splice()` function to remove the data and return `true`. Otherwise, we return `false`, indicating the requested data isn't in the set. Here is the definition of `remove()`:

```javascript
function remove(data) {
  var pos = this.dataStore.indexOf(data);
  if (pos > -1) {
    this.dataStore.splice(pos, 1);
    return true;
  } else {
    return false;
  }
}
```

Before we can test these functions, let's define the `show()` function so we can see the members of a set:

```javascript
function show() {
  return this.dataStore;
}
```

Let's also comment out the `Set` constructor assignments to functions that don't yet exist. Example 9-1 demonstrates how the `Set` class works so far.

##### Example 9-1.
Using the `Set` class

```javascript
load("Set.js");

var names = new Set();
names.add("David");
names.add("Jennifer");
names.add("Cynthia");
names.add("Mike");
names.add("Raymond");
if (names.add("Mike")) {
  print("Mike added");
} else {
  print("Can't add Mike, must already be in set");
}
print(names.show());
var remove = "Mike";
if (names.remove(remove)) {
  print(remove + " removed.");
} else {
  print(remove + " not removed.");
}
names.add("Clayton");
print(names.show());
remove = "Alisa";
if (names.remove(remove)) {
  print(remove + " removed.");
} else {
  print(remove + " not removed.");
}
```

The output from Example 9-1 is:

```
Can't add Mike, must already be in set
David,Jennifer,Cynthia,Mike,Raymond
Mike removed.
David,Jennifer,Cynthia,Raymond,Clayton
Alisa not removed.
```

# More Set Operations

The more interesting functions to define are `union()`, `intersect()`, `subset()`, and `difference()`. The `union()` function combines two sets using the union set operation to form a new set. The function first builds a new set by adding all the members of the first set. Then the function checks each member of the second set to see whether it is already in the new set. If it is, the member is skipped over; if not, the member is added to the new set. Before we define `union()`, however, we need to define a helper function, `contains()`, which checks whether a specified member is part of a set.
Here is the definition for `contains()`:

```javascript
function contains(data) {
  if (this.dataStore.indexOf(data) > -1) {
    return true;
  } else {
    return false;
  }
}
```

Now we can define the `union()` function:

```javascript
function union(set) {
  var tempSet = new Set();
  for (var i = 0; i < this.dataStore.length; ++i) {
    tempSet.add(this.dataStore[i]);
  }
  for (var i = 0; i < set.dataStore.length; ++i) {
    if (!tempSet.contains(set.dataStore[i])) {
      tempSet.dataStore.push(set.dataStore[i]);
    }
  }
  return tempSet;
}
```

Example 9-2 demonstrates the use of `union()`, after uncommenting its assignment in `Set` and adding the `contains` helper function reference.

##### Example 9-2. Computing the union of two sets

```javascript
load("Set.js");

var cis = new Set();
cis.add("Mike");
cis.add("Clayton");
cis.add("Jennifer");
cis.add("Raymond");
var dmp = new Set();
dmp.add("Raymond");
dmp.add("Cynthia");
dmp.add("Jonathan");
var it = new Set();
it = cis.union(dmp);
print(it.show()); // displays Mike,Clayton,Jennifer,Raymond,Cynthia,Jonathan
```

Set intersection is performed using a function named `intersect()`. This function is easier to define: each time a member of the first set is found to be a member of the second set, it is added to a new set, which is the return value of the function. Here is the definition:

```javascript
function intersect(set) {
  var tempSet = new Set();
  for (var i = 0; i < this.dataStore.length; ++i) {
    if (set.contains(this.dataStore[i])) {
      tempSet.add(this.dataStore[i]);
    }
  }
  return tempSet;
}
```

Computing the intersection of two sets is shown in Example 9-3, after uncommenting the `intersect` property assignment in `Set`.

##### Example 9-3. Computing the intersection of two sets

```javascript
load("Set.js");

var cis = new Set();
cis.add("Mike");
cis.add("Clayton");
cis.add("Jennifer");
cis.add("Raymond");
var dmp = new Set();
dmp.add("Raymond");
dmp.add("Cynthia");
dmp.add("Bryan");
var inter = cis.intersect(dmp);
print(inter.show()); // displays Raymond
```

The next operation to define is subset.
The `subset()` function first has to check that the proposed subset's length is not greater than that of the larger set we are comparing it with. If the subset's length is greater than the original set's, then it cannot be a subset. Once it is determined that the subset size is smaller, the function then checks that each member of the subset is a member of the larger set. If any one member of the subset is not in the larger set, the function returns `false` and stops. If the function checks every member of the proposed subset without returning `false`, the subset is indeed a subset and the function returns `true`. Here is the definition:

```javascript
function subset(set) {
  if (this.size() > set.size()) {
    return false;
  } else {
    for each (var member in this.dataStore) {
      if (!set.contains(member)) {
        return false;
      }
    }
  }
  return true;
}
```

The `subset()` function uses the `size()` function before checking whether each element of the sets match. Here is the code for the `size()` function:

```javascript
function size() {
  return this.dataStore.length;
}
```

You'll notice that the `subset()` function uses a `for each` loop instead of a `for` loop, as we've used in the other definitions. Either loop will work here; we used the `for each` loop simply to show that it is also an option. (Note that `for each...in` is an extension available in the JavaScript shell used for these examples; it is not part of standard JavaScript.) Uncomment both the `size` and `subset` property assignments in `Set`. Example 9-4 computes the subset of two sets.

##### Example 9-4. Computing the subset of two sets

```javascript
load("Set.js");

var it = new Set();
it.add("Cynthia");
it.add("Clayton");
it.add("Jennifer");
it.add("Danny");
it.add("Jonathan");
it.add("Terrill");
it.add("Raymond");
it.add("Mike");
var dmp = new Set();
dmp.add("Cynthia");
dmp.add("Raymond");
dmp.add("Jonathan");
if (dmp.subset(it)) {
  print("DMP is a subset of IT.");
} else {
  print("DMP is not a subset of IT.");
}
```

Example 9-4 displays the following output:

```
DMP is a subset of IT.
```

If we add one new member to the `dmp` set:

```javascript
dmp.add("Shirley");
```

then the program displays:

```
DMP is not a subset of IT.
```
The last operational function is `difference()`. This function returns a set that contains those members of the first set that are not in the second set. The definition for `difference()` is shown below:

```javascript
function difference(set) {
  var tempSet = new Set();
  for (var i = 0; i < this.dataStore.length; ++i) {
    if (!set.contains(this.dataStore[i])) {
      tempSet.add(this.dataStore[i]);
    }
  }
  return tempSet;
}
```

All property assignments in `Set` should now be uncommented. Example 9-5 computes the difference of two sets.

##### Example 9-5. Computing the difference of two sets

```javascript
load("Set.js");

var cis = new Set();
var it = new Set();
cis.add("Clayton");
cis.add("Jennifer");
cis.add("Danny");
it.add("Bryan");
it.add("Clayton");
it.add("Jennifer");
var diff = new Set();
diff = cis.difference(it);
print("[" + cis.show() + "] difference [" + it.show() + "] -> [" + diff.show() + "]");
```

Example 9-5 displays:

```
[Clayton,Jennifer,Danny] difference [Bryan,Clayton,Jennifer] -> [Danny]
```

# Exercises

1. Modify the `Set` class so that the class stores its elements in sorted order. Write a program to test your implementation.
2. Modify the `Set` class so that it uses a linked list to store its elements rather than an array. Write a program to test your implementation.
3. Add the function `higher(element)` to the `Set` class. This function returns the least element in the set strictly greater than the given element. Test your function in a program.
4. Add the function `lower(element)` to the `Set` class. This function returns the greatest element in the set strictly less than the given element. Test your function in a program.

# Chapter 10. Binary Trees and Binary Search Trees

Trees are a commonly used data structure in computer science. A tree is a nonlinear data structure that is used to store data in a hierarchical manner. Tree data structures are used to store hierarchical data, such as the files in a file system, and for storing sorted lists of data.
We examine one particular tree structure in this chapter: the _binary tree_. Binary trees are chosen over other more primary data structures because you can search a binary tree very quickly (as opposed to a linked list, for example) and you can quickly insert and delete data from a binary tree (as opposed to an array).

# Trees Defined

A tree is made up of a set of _nodes_ connected by _edges_. An example of a tree is a company's organizational chart (see Figure 10-1). The purpose of an organizational chart is to communicate the structure of an organization. In Figure 10-1, each box is a node, and the lines connecting the boxes are the edges. The nodes represent the positions that make up an organization, and the edges represent the relationships between those positions. For example, the CIO reports directly to the CEO, so there is an edge between those two nodes. The development manager reports to the CIO, so there is an edge connecting those two positions. The VP of Sales and the development manager do not have a direct edge connecting them, so there is not a direct relationship between those two positions.

###### Figure 10-1. An organizational chart is a tree structure

Figure 10-2 displays another tree that defines more of the terms we need when discussing trees. The top node of a tree is called the _root_ node. If a node is connected to other nodes below it, the preceding node is called the _parent_ node, and the nodes following it are called _child_ nodes. A node can have zero, one, or more child nodes connected to it. A node without any child nodes is called a _leaf_ node. Special types of trees, called _binary trees_, restrict the number of child nodes to no more than two. Binary trees have certain computational properties that make them very efficient for many operations. Binary trees are examined extensively in the sections to follow.
Continuing to examine Figure 10-2, you can see that by following certain edges, you can travel from one node to other nodes that are not directly connected. The series of edges you follow to get from one node to another node is called a _path_. Paths are depicted in the figure with dashed lines. Visiting all the nodes in a tree in some particular order is known as a _tree traversal_.

###### Figure 10-2. The parts of a tree

A tree can be broken down into _levels_. The root node is at level 0, its children are at level 1, those nodes' children are at level 2, and so on. A node at any level is considered the root of a _subtree_, which consists of that root node's children, its children's children, and so on. We can define the _depth_ of a tree as the number of layers in the tree. This concept of the root node being at the top of a tree, while in real life a tree's root is at the bottom, is counterintuitive, but it is a time-honored convention in computer science to draw trees with the root at the top. The computer scientist Donald Knuth actually tried to change the convention but gave up after a few months when he discovered that most computer scientists refused to adapt to the natural way of drawing trees. Finally, each node in a tree has a value associated with it. This value is sometimes referred to as the _key_ value.

# Binary Trees and Binary Search Trees

As mentioned earlier, a _binary tree_ is one where each node can have no more than two children. By limiting the number of children to two, we can write efficient programs for inserting data, searching for data, and deleting data in a tree. Before we discuss building a binary tree in JavaScript, we need to add two terms to our tree lexicon. The child nodes of a parent node are referred to as the _left_ node and the _right_ node. For certain binary tree implementations, certain data values can be stored only in left nodes, and other data values must be stored in right nodes.
An example binary tree is shown in Figure 10-3. ###### Figure 10-3. A binary tree Identifying the child nodes is important when we consider a more specific type of binary tree, the _binary search tree_. A binary search tree is a binary tree in which data with lesser values are stored in left nodes and data with greater values are stored in right nodes. This property provides for very efficient searches and holds for both numeric data and non-numeric data, such as words and strings. ## Building a Binary Search Tree Implementation A binary search tree is made up of nodes, so the first object we need to create is a `Node` object, which is similar to the `Node` object we used with linked lists. The definition for the `Node` class is: function Node(data, left, right) { this.data = data; this.left = left; this.right = right; this.show = show; } function show() { return this.data; } The `Node` object stores both data and links to other nodes (`left` and `right`). There is also a `show()` function for displaying the data stored in a node. Now we can build a class to represent a binary search tree (BST). The class consists of just one data member: a `Node` object that represents the root node of the BST. The constructor for the class sets the root node to `null`, creating an empty node. The first function we need for the BST is `insert()`, to add new nodes to the tree. This function is complex and requires explanation. The first step in the function is to create a `Node` object, passing in the data the node will store. The second step in insertion is to check the BST for a root node. If a root node doesn't exist, then the BST is new and this node is the root node, which completes the function definition. Otherwise, the function moves to the next step. If the node being inserted is not the root node, then we have to prepare to traverse the BST to find the proper insertion point. This process is similar to traversing a linked list. 
The function uses a `Node` object that is assigned as the current node as the function moves from level to level in the BST. The function also has to position itself inside the BST at the root node. Once inside the BST, the next step is to determine where to put the node. This is performed inside a loop that breaks once the correct insertion point is determined. The algorithm for determining the correct insertion point for a node is as follows: 1. Set the root node to be the current node. 2. If the data value in the inserted node is less than the data value in the current node, set the new current node to be the left child of the current node. If the data value in the inserted node is greater than the data value in the current node, skip to step 4. 3. If the value of the left child of the current node is `null`, insert the new node here and exit the loop. Otherwise, skip to the next iteration of the loop. 4. Set the current node to be the right child of the current node. 5. If the value of the right child of the current node is `null`, insert the new node here and exit the loop. Otherwise, skip to the next iteration of the loop. With this algorithm complete, we're ready to implement this part of the `BST` class. Example 10-1 has the code for the class, including the code for the `Node` object. ##### Example 10-1. 
The `BST` and `Node` classes function Node(data, left, right) { this.data = data; this.left = left; this.right = right; this.show = show; } function show() { return this.data; } function BST() { this.root = null; this.insert = insert; this.inOrder = inOrder; } function insert(data) { var n = new Node(data, null, null); if (this.root === null) { this.root = n; } else { var current = this.root; var parent; while (true) { parent = current; if (data < current.data) { current = current.left; if (current === null) { parent.left = n; break; } } else { current = current.right; if (current === null) { parent.right = n; break; } } } } } ## Traversing a Binary Search Tree We now have the beginnings of the `BST` class, but all we can do is insert nodes into the tree. We need to be able to traverse the BST so that we can display the data in different orders, such as numeric or alphabetic order. There are three traversal functions used with BSTs: _inorder_ , _preorder_ , and _postorder_. An inorder traversal visits all of the nodes of a BST in ascending order of the node key values. A preorder traversal visits the root node first, followed by the nodes in the subtrees under the left child of the root node, followed by the nodes in the subtrees under the right child of the root node. A postorder traversal visits all of the child nodes of the left subtree up to the root node, and then visits all of the child nodes of the right subtree up to the root node. Although it's easy to understand why we would want to perform an inorder traversal, it is less obvious why we need preorder and postorder traversals. We'll implement all three traversal functions now and explain their uses in a later section. The inorder traversal is best written using recursion. 
Since the function visits each node in ascending order, the function must visit both the left node and the right node of each subtree, following the subtrees under the left child of the root node before following the subtrees under the right child of the root. If you are unsure about using recursion, Chapter 1 discusses how to write a recursive function. Here is the code for the inorder traversal function: function inOrder(node) { if (node !== null) { inOrder(node.left); putstr(node.show() + " "); inOrder(node.right); } } Example 10-2 provides a short program to test the function. ##### Example 10-2. Inorder traversal of a BST load("BSTtree.js"); var nums = new BST(); nums.insert(23); nums.insert(45); nums.insert(16); nums.insert(37); nums.insert(3); nums.insert(99); nums.insert(22); print("Inorder traversal: "); nums.inOrder(nums.root); The output from Example 10-2 is: Inorder traversal: 3 16 22 23 37 45 99 Figure 10-4 illustrates the path the `inOrder()` function followed. ###### Figure 10-4. Path of inorder traversal The definition of the preorder traversal function is: function preOrder(node) { if (node !== null) { putstr(node.show() + " "); preOrder(node.left); preOrder(node.right); } } You'll notice that the only difference between the `inOrder()` and `preOrder()` functions is how the three lines of code inside the `if` statement are ordered. The call to the `show()` function is sandwiched between the two recursive calls in the `inOrder()` function, and the call to `show()` is before the two recursive calls in the `preOrder()` function. Figure 10-5 illustrates the preorder traversal path. ###### Figure 10-5. Path of preorder traversal Add a property assignment for the new `preOrder()` function to BST. If we add a call to `preOrder()` to the preceding program, using the same `nums.root`, we get the following results: Inorder traversal: 3 16 22 23 37 45 99 Preorder traversal: 23 16 3 22 45 37 99 The path of a postorder traversal is shown in Figure 10-6. 
###### Figure 10-6. Path of postorder traversal Here is the implementation of the `postOrder()` function, which is then assigned to the BST's new `postOrder` property: function postOrder(node) { if (node !== null) { postOrder(node.left); postOrder(node.right); putstr(node.show() + " "); } } And here is the output when we add the function to our program: Inorder traversal: 3 16 22 23 37 45 99 Preorder traversal: 23 16 3 22 45 37 99 Postorder traversal: 3 22 16 37 99 45 23 We will demonstrate some practical programming examples using BSTs that make use of these traversal functions later in the chapter. # BST Searches There are three types of searches typically performed with a BST: 1. Searching for a specific value 2. Searching for the minimum value 3. Searching for the maximum value We explore these three searches in the following sections. ## Searching for the Minimum and Maximum Value Searches in a BST for the minimum and maximum values stored are relatively simple procedures. Since lower values are always stored in left child nodes, to find the minimum value in a BST, you only have to traverse the left edge of the BST until you get to the last node. Here is the definition of a function, `getMin()`, that finds the minimum value of a BST: function getMin() { var current = this.root; while (current.left !== null) { current = current.left; } return current.data; } The function travels down the left link of each node in the BST until it reaches the left end of the BST, which is defined as: current.left = null; When this point is reached, the data stored in the current node must be the minimum value. To find the maximum value stored in a BST, the function must simply traverse the right links of nodes until the function reaches the right end of the BST. The value stored in this node must be the maximum value. 
The definition for the `getMax()` function is below: function getMax() { var current = this.root; while (current.right !== null) { current = current.right; } return current.data; } Example 10-3 tests the `getMin()` and `getMax()` functions with the BST data we used earlier, after adding both to the BST object. ##### Example 10-3. Testing `getMin()` and `getMax()` load("BSTtree.js"); var nums = new BST(); nums.insert(23); nums.insert(45); nums.insert(16); nums.insert(37); nums.insert(3); nums.insert(99); nums.insert(22); var min = nums.getMin(); print("The minimum value of the BST is: " + min); print("\n"); var max = nums.getMax(); print("The maximum value of the BST is: " + max); The output from this program is: The minimum value of the BST is: 3 The maximum value of the BST is: 99 These functions return the data stored in the minimum and maximum positions, respectively. Instead, we may want the functions to return the nodes where the minimum and maximum values are stored. To make that change, just have the functions return the current node rather than the value stored in the current node. ## Searching for a Specific Value Searching for a specific value in a BST requires that a comparison be made between the data stored in the current node and the value being searched for. The comparison will determine if the search travels to the left child node, or to the right child node if the current node doesn't store the searched-for value. We can implement searching in a BST with the `find()` function, which is defined here: function find(data) { var current = this.root; while (current && current.data != data) { if (data < current.data) { current = current.left; } else { current = current.right; } } return current; } This function returns the current node if the value is found in the BST and returns `null` if the value is not found. Example 10-4 provides a program to test the `find()` function. ##### Example 10-4. 
Using `find()` to search for a value load("BSTtree.js"); var nums = new BST(); nums.insert(23); nums.insert(45); nums.insert(16); nums.insert(37); nums.insert(3); nums.insert(99); nums.insert(22); inOrder(nums.root); print("\n"); putstr("Enter a value to search for: "); var value = parseInt(readline()); var found = nums.find(value); if (found !== null) { print("Found " + value + " in the BST."); } else { print(value + " was not found in the BST."); } The output from this program is: 3 16 22 23 37 45 99 Enter a value to search for: 23 Found 23 in the BST. # Removing Nodes from a BST The most complex operation on a BST is removing a node. The complexity of node removal depends on which node you want to delete. If you want to remove a node with no children, the removal is fairly simple. If the node has just one child node, either left or right, the removal is a little more complex to accomplish. The removal of a node with two children is the most complex removal operation to perform. To aid in managing the complexity of removal, we remove nodes from a BST recursively. The two functions we will define are `remove()` and `removeNode()`. The first step to take when removing a node from a BST is to check to see if the current node holds the data we are trying to remove. If so, remove that node. If not, then we compare the data in the current node to the data we are trying to remove. If the data we want to remove is less than the data in the current node, move to the left child of the current node and compare data. If the data we want to remove is greater than the data in the current node, move to the right child of the current node and compare data. The first case to consider is when the node to be removed is a leaf (a node with no children). Then all we have to do is set the link that is pointing to the node of the parent node to `null`. 
When the node we want to remove has one child, then the link that is pointing to the node to be removed has to be adjusted to point to the removed node's child node. Finally, when the node we want to remove has two children, the correct solution is to either find the largest value in the subtree to the left of the removed node, or to find the smallest value in the subtree to the right of the removed node. We will choose to go to the right. We need a function that finds the smallest value of a subtree, `getSmallest()`, which we will then use to create a temporary node containing that smallest value. We copy that value into the position of the node we are replacing, and we delete the temporary node to complete the operation. The node removal process consists of two functions. The `remove()` function simply receives the value to be removed and calls the second function, `removeNode()`, which does all the work. The definitions of the two functions are shown here: function remove(data) { this.root = this.removeNode(this.root, data); } function removeNode(node, data) { if (node === null) { return null; } if (data == node.data) { // node has no children if (node.left === null && node.right === null) { return null; } // node has no left child if (node.left === null) { return node.right; } // node has no right child if (node.right === null) { return node.left; } // node has two children var tempNode = getSmallest(node.right); node.data = tempNode.data; node.right = this.removeNode(node.right, tempNode.data); return node; } else if (data < node.data) { node.left = this.removeNode(node.left, data); return node; } else { node.right = this.removeNode(node.right, data); return node; } } function getSmallest(node) { if (node.left == null) { return node; } else { return getSmallest(node.left); } } Example 10-5 provides a program to test the `remove()` function, after it and `removeNode()` have been added to the BST object. ##### Example 10-5. 
Testing the `remove()` function load("BSTtree.js"); var nums = new BST(); nums.insert(23); nums.insert(45); nums.insert(16); nums.insert(37); nums.insert(3); nums.insert(99); nums.insert(22); print("Inorder traversal: "); nums.inOrder(nums.root); print("\n"); nums.remove(37); print("Inorder traversal after removing 37: "); nums.inOrder(nums.root); The output from this program is: Inorder traversal: 3 16 22 23 37 45 99 Inorder traversal after removing 37: 3 16 22 23 45 99 # Counting Occurrences One use of a BST is to keep track of the occurrences of data in a data set. For example, we can use a BST to record the distribution of grades on an exam. Given a set of exam grades, we can write a program that checks to see if the grade is in the BST, adding the grade to the BST if it is not found, and incrementing the number of occurrences of it if the grade is found in the BST. To solve this problem, we need to modify the `Node` object to include a field for keeping track of the number of occurrences of a grade in the BST, and we need a function for updating a node so that if we find a grade in the BST, we can increment the occurrences field. Let's start by modifying our definition of the `Node` object to include a field for keeping track of grade occurrences: function Node(data, left, right) { this.data = data; this.count = 1; this.left = left; this.right = right; this.show = show; } When a grade (a `Node` object) is inserted into a BST, its count is set to 1. The BST `insert()` function will work fine as is, but we need to add a function to update the BST when the `count` field needs to be incremented. We'll call this function `update()`: function update(data) { var grade = this.find(data); grade.count++; return grade; } The other functions of the `BST` class are fine as is. 
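One caveat is worth flagging before moving on: `update()` dereferences whatever `find()` returns, so it should only be called for values already in the tree (Example 10-6 guards it with a `find()` check first). Here is a minimal, self-contained sketch of the counting behavior; it re-declares the chapter's classes inline instead of calling `load("BSTtree.js")`, and uses `console.log()` in place of the shell's `print()`:

```javascript
// Minimal recap of the chapter's BST with the count field and update().
function Node(data, left, right) {
  this.data = data;
  this.count = 1;            // number of occurrences of this value
  this.left = left;
  this.right = right;
}

function BST() {
  this.root = null;
  this.insert = insert;
  this.find = find;
  this.update = update;
}

function insert(data) {
  var n = new Node(data, null, null);
  if (this.root === null) {
    this.root = n;
    return;
  }
  var current = this.root;
  var parent;
  while (true) {
    parent = current;
    if (data < current.data) {
      current = current.left;
      if (current === null) { parent.left = n; break; }
    } else {
      current = current.right;
      if (current === null) { parent.right = n; break; }
    }
  }
}

function find(data) {
  var current = this.root;
  while (current && current.data != data) {
    current = data < current.data ? current.left : current.right;
  }
  return current;            // null when the value is absent
}

function update(data) {
  var grade = this.find(data);
  grade.count++;             // assumes the caller verified the value exists
  return grade;
}

// usage: guard update() with find(), exactly as Example 10-6 does
var grades = new BST();
grades.insert(90);
if (grades.find(90) !== null) {
  grades.update(90);
} else {
  grades.insert(90);
}
console.log(grades.find(90).count); // prints 2
```

The guard is the important part of the sketch: calling `update()` for a value that is not in the tree would make `find()` return `null` and the `grade.count++` line would fail.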
We just need a couple of functions to generate a set of grades and to display the grades: function prArray(arr) { putstr(arr[0].toString() + ' '); for (var i = 1; i < arr.length; ++i) { putstr(arr[i].toString() + ' '); if (i % 10 === 0) { putstr("\n"); } } } function genArray(length) { var arr = []; for (var i = 0; i < length; ++i) { arr[i] = Math.floor(Math.random() * 101); } return arr; } Example 10-6 presents a program for testing out this new code for counting occurrences of grades. ##### Example 10-6. Counting occurrences of grades in a data set function prArray(arr) { putstr(arr[0].toString() + ' '); for (var i = 1; i < arr.length; ++i) { putstr(arr[i].toString() + ' '); if (i % 10 == 0) { putstr("\n"); } } } function genArray(length) { var arr = []; for (var i = 0; i < length; ++i) { arr[i] = Math.floor(Math.random() * 101); } return arr; } load("BSTtree.js"); var grades = genArray(100); prArray(grades); var gradedistro = new BST(); for (var i = 0; i < grades.length; ++i) { var g = grades[i]; var grade = gradedistro.find(g); if (grade === null) { gradedistro.insert(g); } else { gradedistro.update(g); } } var cont = "y"; while (cont == "y") { putstr("\n\nEnter a grade: "); var g = parseInt(readline()); var aGrade = gradedistro.find(g); if (aGrade === null) { print("No occurrences of " + g); } else { print("Occurrences of " + g + ": " + aGrade.count); } putstr("Look at another grade (y/n)? "); cont = readline(); } Here is the output from one run of the program: 25 32 24 92 80 46 21 85 23 22 3 24 43 4 100 34 82 76 69 51 44 92 54 1 88 4 66 62 74 49 18 15 81 95 80 4 64 13 30 51 21 12 64 82 81 38 100 17 76 62 32 3 24 47 86 49 100 49 81 100 49 80 0 28 79 34 64 40 81 35 23 95 90 92 13 28 88 31 82 16 93 12 92 52 41 27 53 31 35 90 21 22 66 87 80 83 66 3 6 18 Enter a grade: 78 No occurrences of 78 Look at another grade (y/n)? y Enter a grade: 65 No occurrences of 65 Look at another grade (y/n)? y Enter a grade: 23 Occurrences of 23: 2 Look at another grade (y/n)? 
y Enter a grade: 89 No occurrences of 89 Look at another grade (y/n)? y Enter a grade: 100 Occurrences of 100: 4 Look at another grade (y/n)? n ## Exercises 1. Add a function to the BST class that counts the number of nodes in a BST. 2. Add a function to the BST class that counts the number of edges in a BST. 3. Add a `max()` function to the BST class that finds the maximum value in a BST. 4. Add a `min()` function to the BST class that finds the minimum value in a BST. 5. Write a program that stores the words from a large text file in a BST and displays the number of times each word occurs in the text. # Chapter 11. Graphs and Graph Algorithms The study of networks has become one of the great scientific hotbeds of this century, though mathematicians and others have been studying networks for many hundreds of years. Recent developments in computer technology (the Internet, for example) and in social theory (the social network, as popularized by the concept of "six degrees of separation"), not to mention social media, have put a spotlight on the study of networks. In this chapter we'll look at how networks are modeled with graphs. We'll define what a graph is, how to represent graphs in JavaScript, and how to implement important graph algorithms. We'll also discuss the importance of choosing the correct data representation when working with graphs, since the efficiency of graph algorithms largely depends on the data structure used to represent a graph. # Graph Definitions A graph consists of a set of _vertices_ and a set of _edges_. Think of a map of a US state. Each town is connected with other towns via some type of road. A map is a type of graph where each town is a vertex, and a road that connects two towns is an edge. Edges are defined as a pair (v1, v2), where v1 and v2 are two vertices in a graph. A vertex can also have a weight, which is sometimes called a cost. A graph whose pairs are ordered is called a _directed graph_ , or just a _digraph_. 
When pairs are ordered in a directed graph, an arrow is drawn from one pair to another pair. Directed graphs indicate the flow direction from vertex to vertex. A flowchart that indicates the direction of computations in a computer program is an example of a directed graph. A directed graph is shown in Figure 11-1. ###### Figure 11-1. A digraph (directed graph) If a graph is not ordered, it is called an _unordered graph_ , or just a graph. An example of an unordered graph is shown in Figure 11-2. ###### Figure 11-2. An unordered graph A _path_ is a sequence of vertices in a graph such that all vertices in the path are connected by edges. The length of a path is the number of edges from the first vertex in the path to the last vertex. A path can also consist of a vertex to itself, which is called a loop. Loops have a length of 0. A _cycle_ is a path with at least one edge whose first and last vertices are the same. A _simple cycle_ is one with no repeated edges or vertices for both directed and undirected graphs. Paths that repeat other vertices besides the first and last vertices are called _general cycles_. Two vertices are considered _strongly_ connected if there is a path from the first vertex to the second vertex, and vice versa. If the graph is a directed graph, and all its vertices are strongly connected, then the directed graph is considered strongly connected. # Real-World Systems Modeled by Graphs Graphs are used to model many different types of real-world systems. One example is traffic flow. The vertices represent street intersections, and the edges represent the streets. Weighted edges can be used to represent speed limits or the number of lanes. Modelers can use the system to determine the best routes and the streets most likely to suffer from traffic jams. Any type of transportation system can be modeled using a graph. For example, an airline can model its flight system using a graph. 
Each airport is a vertex, and each flight from one vertex to another is an edge. A weighted edge can represent the cost of a flight from one airport to another, or perhaps the distance from one airport to another, depending upon what is being modeled. Computer networks, including local area networks and much broader networks such as the Internet, are also frequently modeled with graphs. Another example of a real-world system that can be modeled by a graph is a consumer market, where vertices represent both institutions (vendors) and consumers. # The Graph Class At first glance, a graph looks much like a tree or a binary tree, and you might be tempted to try to build a graph class like a tree, using nodes to represent each vertex. There are problems with using an object-based approach like that, however, because graphs can grow quite large. Representing a graph using just objects can quickly become inefficient, so we will look at a different scheme for representing both vertices and edges. ## Representing Edges The real information about a graph is stored in the edges, since the edges describe the structure of a graph. As we mentioned earlier, it is tempting to represent a graph as a binary tree, but doing so is a mistake. A binary tree has a mostly fixed representation, since a parent node can have only two child nodes, while a graph structure provides much more flexibility. There can be many edges linked to a single vertex or just one edge, for example. The method we will use for representing the edges of a graph is called an _adjacency list_ , or an _array of adjacency lists_. With this method, the edges are stored as a vertex-indexed array of lists (arrays) of the vertices adjacent to each vertex. Using this scheme, when we reference a vertex in a program, we can efficiently access the list of all the vertices it is connected to. 
For example, if the vertex 2 is connected to vertices 0, 1, 3, and 4, and is stored in array position 2, accessing this element gives us access to an array stored at array position 2 that consists of the vertices 0, 1, 3, and 4. This is the representation method we choose to use in this chapter and is shown in Figure 11-3. ###### Figure 11-3. An adjacency list Another method for representing the edges of a graph is called an _adjacency matrix._ This is a two-dimensional array in which the elements of the array indicate whether an edge exists between two vertices. ## Building a Graph Once the decision is made on how to represent a graph in code, building a class to represent a graph is straightforward. Here is a first definition of a `Graph` class: function Graph(v) { this.vertices = v; this.edges = 0; this.adj = []; for (var i = 0; i < this.vertices; ++i) { this.adj[i] = []; } this.addEdge = addEdge; this.showGraph = showGraph; } The class keeps track of how many edges are represented in a graph, as well as the number of vertices, by utilizing an array whose length is equal to the number of vertices in the graph. In each element of the array, the `for` loop adds a subarray to store all the adjacent vertices. The `addEdge()` function is defined as: function addEdge(v,w) { this.adj[v].push(w); this.adj[w].push(v); this.edges++; } When this function is called with two vertices, A and B, the function finds the adjacency list for vertex A and adds B to the list, then it finds the adjacency list for B and adds A to the list. Finally, the function increments the number of edges by 1. 
The `showGraph()` function displays the graph by showing a list of all vertices and the vertices that are adjacent to them: function showGraph() { for (var i = 0; i < this.vertices; ++i) { putstr(i + " -> "); for (var j = 0; j < this.vertices; ++j) { if (this.adj[i][j] != undefined) putstr(this.adj[i][j] + ' '); } print(); } } Example 11-1 displays the complete definition for the `Graph` class. ##### Example 11-1. The `Graph` class function Graph(v) { this.vertices = v; this.edges = 0; this.adj = []; for (var i = 0; i < this.vertices; ++i) { this.adj[i] = []; } this.addEdge = addEdge; this.showGraph = showGraph; } function addEdge(v,w) { this.adj[v].push(w); this.adj[w].push(v); this.edges++; } function showGraph() { for (var i = 0; i < this.vertices; ++i) { putstr(i + " -> "); for (var j = 0; j < this.vertices; ++j) { if (this.adj[i][j] != undefined) putstr(this.adj[i][j] + ' '); } print(); } } Here is a test program that demonstrates how to use the `Graph` class: load("Graph.js"); g = new Graph(5); g.addEdge(0,1); g.addEdge(0,2); g.addEdge(1,3); g.addEdge(2,4); g.showGraph(); The output from this program is: 0 -> 1 2 1 -> 0 3 2 -> 0 4 3 -> 1 4 -> 2 The output shows that vertex 0 has edges to vertices 1 and 2; vertex 1 has edges to vertices 0 and 3; vertex 2 has edges to vertices 0 and 4; vertex 3 has an edge to vertex 1; and vertex 4 has an edge to vertex 2. Of course, there is some redundancy in this display, as an edge between 0 and 1, for example, is the same as an edge between 1 and 0. For just display purposes this is fine, but we will need to modify this output when we start exploring the paths found in a graph. # Searching a Graph Determining which vertices can be reached from a specified vertex is a common activity performed on graphs. We might want to know which roads lead from one town to other towns on the map, or which flights can take us from one airport to other airports. These operations are performed on a graph using a search algorithm. 
There are two fundamental searches we can perform on a graph: the _depth-first_ search and the _breadth-first_ search. In this section we examine both algorithms. ## Depth-First Search Depth-first search involves following a path from the beginning vertex until it reaches the last vertex, then backtracking and following the next path until it reaches the last vertex, and so on until there are no paths left. Here we are not "searching" for a particular item, but instead searching to see what paths we can follow in a graph. Figure 11-4 illustrates how depth-first search works. ###### Figure 11-4. Depth-first search The algorithm for performing a depth-first search is relatively simple: visit a vertex that has not already been visited, mark it as having been visited, then recursively visit the other unvisited vertices that are in the original vertex's adjacency list. To make this algorithm work, we will need to add an array to our `Graph` class that stores visited vertices and initialize it to all `false` values. Here is a code fragment from the `Graph` class showing this new array and its initialization: this.marked = []; for (var i = 0; i < this.vertices; ++i) { this.marked[i] = false; } Now we can write the depth-first search function: function dfs(v) { this.marked[v] = true; if (this.adj[v] !== undefined) { print("Visited vertex: " + v); } for (var i = 0; i < this.adj[v].length; i++) { var w = this.adj[v][i]; if (!this.marked[w]) { this.dfs(w); } } } Notice that I've included a `print()` function so we can see the vertices as they're being visited. This function is, of course, not required for the `dfs()` function to work properly. A program that demonstrates the `dfs()` function is shown in Example 11-2, after adding it to the Graph class. ##### Example 11-2. 
Performing a depth-first search // program to test dfs() function load("Graph.js"); g = new Graph(5); g.addEdge(0,1); g.addEdge(0,2); g.addEdge(1,3); g.addEdge(2,4); g.showGraph(); g.dfs(0); The output from this program is: 0 -> 1 2 1 -> 0 3 2 -> 0 4 3 -> 1 4 -> 2 Visited vertex: 0 Visited vertex: 1 Visited vertex: 3 Visited vertex: 2 Visited vertex: 4 ## Breadth-First Search A breadth-first search starts at a first vertex and tries to visit vertices as close to the first vertex as possible. In essence, this search moves through a graph layer by layer, first examining layers closer to the first vertex and then moving down to the layers farthest away from the starting vertex. Figure 11-5 demonstrates how breadth-first search works. ###### Figure 11-5. Breadth-first search The algorithm for breadth-first search uses a queue abstraction instead of an array abstraction for storing visited vertices. The algorithm works as follows: 1. Find an unvisited vertex that is adjacent to the current vertex, add it to the list of visited vertices, and add it to the queue. 2. Take the next vertex, _v_ , from the graph and add it to the list of visited vertices. 3. Add all unmarked vertices that are adjacent to _v_ to the queue. Here is the definition for the breadth-first search function: function bfs(s) { var queue = []; this.marked[s] = true; queue.push(s); // add to back of queue while (queue.length > 0) { var v = queue.shift(); // remove from front of queue if (v !== undefined) { print("Visited vertex: " + v); } for (var i = 0; i < this.adj[v].length; i++) { var w = this.adj[v][i]; if (!this.marked[w]) { this.marked[w] = true; queue.push(w); } } } } It's added to the Graph class with the addition of the following to the class: this.bfs = bfs; A test program for the breadth-first search function is shown in Example 11-3. ##### Example 11-3. 
Performing a breadth-first search load("Graph.js"); g = new Graph(5); g.addEdge(0,1); g.addEdge(0,2); g.addEdge(1,3); g.addEdge(2,4); g.showGraph(); g.bfs(0); The output from this program is: 0 -> 1 2 1 -> 0 3 2 -> 0 4 3 -> 1 4 -> 2 Visited vertex: 0 Visited vertex: 1 Visited vertex: 2 Visited vertex: 3 Visited vertex: 4 # Finding the Shortest Path One of the most common operations performed on graphs is finding the shortest path from one vertex to another. Consider the following example: for vacation, you are going to travel to 10 major-league cities to watch baseball games over a two-week period. You want to minimize the number of miles you have to drive to visit all 10 cities using a shortest-path algorithm. Another shortest-path problem involves creating a network of computers, where the cost could be the time to transmit data between two computers or the cost of establishing and maintaining the connection. A shortest-path algorithm can determine the most effective way to build the network. ## Breadth-First Search Leads to Shortest Paths When we perform a breadth-first search, we are automatically finding the shortest paths from one vertex to another connected vertex. For example, when we want to find the shortest path from vertex A to vertex D, we first look for any one-edge paths from A to D, then two-edge paths from A to D, and so on. This is exactly the way breadth-first search works, so we can easily modify the breadth-first search algorithm to find shortest paths. ## Determining Paths To find the shortest path, we need to modify the breadth-first search algorithm so that it records the paths that lead from one vertex to another vertex. This requires a few modifications to the `Graph` class. First, we need an array that keeps track of edges from one vertex to the next. We'll name this array `edgeTo`. 
As we work through the breadth-first search function, every time we come across a vertex that is not marked, besides marking it, we will add an edge to that vertex from the vertex that we are exploring in the adjacency list. Here is the new `bfs()` function, along with the code you need to add the `edgeTo` array to the `Graph` class:

```javascript
// add this to Graph class
this.edgeTo = [];

// bfs function
function bfs(s) {
  var queue = [];
  this.marked[s] = true;
  queue.push(s); // add to back of queue
  while (queue.length > 0) {
    var v = queue.shift(); // remove from front of queue
    if (v !== undefined) {
      print("Visited vertex: " + v);
    }
    for (var i = 0; i < this.adj[v].length; i++) {
      var w = this.adj[v][i];
      if (!this.marked[w]) {
        this.edgeTo[w] = v;
        this.marked[w] = true;
        queue.push(w);
      }
    }
  }
}
```

Now we need a function that can show us the paths that connect the different vertices of a graph. This function, `pathTo()`, creates a stack that stores all the vertices that have edges in common with a specified vertex. Here is the code for the function, along with a simple helper function:

```javascript
function pathTo(source, v) {
  if (!this.hasPathTo(v)) {
    return undefined;
  }
  var path = [];
  for (var i = v; i != source; i = this.edgeTo[i]) {
    path.push(i);
  }
  path.push(source);
  return path;
}

function hasPathTo(v) {
  return this.marked[v];
}
```

Lastly, we add a function that prints out the path:

```javascript
function showPath(paths) {
  while (paths.length > 0) {
    if (paths.length > 1) {
      putstr(paths.pop() + '-');
    } else {
      putstr(paths.pop());
    }
  }
}
```

Be sure to add the appropriate declarations to the `Graph()` function:

```javascript
this.pathTo = pathTo;
this.hasPathTo = hasPathTo;
this.showPath = showPath;
```

With these functions in place, all we have to do is write some client code to show the shortest path from the source to a particular vertex. Example 11-4 shows a program that creates a graph and shows the shortest path for a specified vertex.

##### Example 11-4.
Finding the shortest path for a vertex

```javascript
load("Graph.js");
g = new Graph(5);
g.addEdge(0,1);
g.addEdge(0,2);
g.addEdge(1,3);
g.addEdge(2,4);
g.bfs(0);
var vertex = 4;
var source = 0;
var paths = g.pathTo(source, vertex);
g.showPath(paths);
```

The output from `showPath()` is:

```
0-2-4
```

which is the shortest path from the source vertex 0 to vertex 4.

# Topological Sorting

_Topological sorting_ puts the vertices of a directed graph into an order such that all the directed edges point from a vertex earlier in the order to a vertex later in the order. For example, Figure 11-6 shows a directed-graph model of a typical computer science curriculum.

###### Figure 11-6. A directed graph model of a computer science curriculum

A topological sort of this graph would result in the following sequence:

1. CS 1
2. CS 2
3. Assembly language
4. Data structures
5. Operating systems
6. Algorithms

Courses 3 and 4 can be taken at the same time, as can courses 5 and 6. This type of problem is called _precedence-constrained scheduling_, and every college student is familiar with it. You can't take English Composition II until you've taken English Composition I.

## An Algorithm for Topological Sorting

The algorithm for topological sorting is similar to the algorithm for depth-first search. However, instead of immediately printing a vertex as it is visited, the algorithm visits all the vertices adjacent to the current vertex, and once that list is exhausted, pushes the current vertex onto a stack.

## Implementing the Topological Sorting Algorithm

The topological sort algorithm is broken up into two functions. The first function, `topSort()`, sets up the sorting process, calls a helper function, `topSortHelper()`, and then displays the sorted list of vertices. The major work is done in the recursive function `topSortHelper()`. This function marks the current vertex as visited and then recursively visits each adjacent vertex in the current vertex's adjacency list, marking them as visited.
Finally, the current vertex is pushed onto a stack. Example 11-5 shows the code for the two functions.

##### Example 11-5. `topSort()` and `topSortHelper()`

```javascript
function topSort() {
  var stack = [];
  var visited = [];
  for (var i = 0; i < this.vertices; i++) {
    visited[i] = false;
  }
  for (var i = 0; i < this.vertices; i++) {
    if (!visited[i]) {
      this.topSortHelper(i, visited, stack);
    }
  }
  for (var i = 0; i < stack.length; i++) {
    if (stack[i] !== undefined && stack[i] !== false) {
      print(this.vertexList[stack[i]]);
    }
  }
}

function topSortHelper(v, visited, stack) {
  visited[v] = true;
  for (var i = 0; i < this.adj[v].length; i++) {
    var w = this.adj[v][i];
    if (!visited[w]) {
      this.topSortHelper(w, visited, stack);
    }
  }
  stack.push(v);
}
```

The `Graph` class has also been modified so that we can work with symbolic vertices, not just numbers. Inside the code, each vertex is still numbered, but we add an array, `vertexList`, which associates each vertex with a symbol (for our example, it's a course name). To make sure the new definition of the class is clear, we present the full definition, including the functions for topological sorting, below. The definition of the function `showGraph()` has changed so that symbolic names are shown instead of just vertex numbers. Example 11-6 shows the code.

##### Example 11-6.
The `Graph` class

```javascript
function Graph(v) {
  this.vertices = v;
  this.vertexList = [];
  this.edges = 0;
  this.adj = [];
  for (var i = 0; i < this.vertices; ++i) {
    this.adj[i] = [];
  }
  this.addEdge = addEdge;
  this.showGraph = showGraph;
  this.dfs = dfs;
  this.marked = [];
  for (var i = 0; i < this.vertices; ++i) {
    this.marked[i] = false;
  }
  this.bfs = bfs;
  this.edgeTo = [];
  this.hasPathTo = hasPathTo;
  this.pathTo = pathTo;
  this.topSortHelper = topSortHelper;
  this.topSort = topSort;
}

function topSort() {
  var stack = [];
  var visited = [];
  for (var i = 0; i < this.vertices; i++) {
    visited[i] = false;
  }
  for (var i = 0; i < this.vertices; i++) {
    if (!visited[i]) {
      this.topSortHelper(i, visited, stack);
    }
  }
  for (var i = 0; i < stack.length; i++) {
    if (stack[i] !== undefined && stack[i] !== false) {
      print(this.vertexList[stack[i]]);
    }
  }
}

function topSortHelper(v, visited, stack) {
  visited[v] = true;
  for (var i = 0; i < this.adj[v].length; i++) {
    var w = this.adj[v][i];
    if (!visited[w]) {
      this.topSortHelper(w, visited, stack);
    }
  }
  stack.push(v);
}

function addEdge(v, w) {
  this.adj[v].push(w);
  this.adj[w].push(v);
  this.edges++;
}

// a new function to display symbolic names instead of numbers
function showGraph() {
  for (var i = 0; i < this.vertices; ++i) {
    putstr(this.vertexList[i] + " -> ");
    for (var j = 0; j < this.vertices; ++j) {
      if (this.adj[i][j] !== undefined) {
        var w = this.adj[i][j];
        putstr(this.vertexList[w] + ' ');
      }
    }
    print();
  }
}

function dfs(v) {
  this.marked[v] = true;
  if (this.adj[v] !== undefined) {
    print("Visited vertex: " + v);
  }
  for (var i = 0; i < this.adj[v].length; i++) {
    var w = this.adj[v][i];
    if (!this.marked[w]) {
      this.dfs(w);
    }
  }
}

function bfs(s) {
  var queue = [];
  this.marked[s] = true;
  queue.push(s); // add to back of queue
  while (queue.length > 0) {
    var v = queue.shift(); // remove from front of queue
    if (v !== undefined) {
      print("Visited vertex: " + v);
    }
    for (var i = 0; i < this.adj[v].length; i++) {
      var w = this.adj[v][i];
      if (!this.marked[w]) {
        this.edgeTo[w] = v;
        this.marked[w] = true;
        queue.push(w);
      }
    }
  }
}

function hasPathTo(v) {
  return this.marked[v];
}

function pathTo(source, v) {
  if (!this.hasPathTo(v)) {
    return undefined;
  }
  var path = [];
  for (var i = v; i != source; i = this.edgeTo[i]) {
    path.push(i);
  }
  path.push(source);
  return path;
}
```

A program that tests our implementation of topological sorting is shown in Example 11-7.

##### Example 11-7. Topological sorting

```javascript
load("GraphTopo.js");
g = new Graph(6);
g.addEdge(1,2);
g.addEdge(2,5);
g.addEdge(1,3);
g.addEdge(1,4);
g.addEdge(0,1);
g.vertexList = ["CS1", "CS2", "Data Structures",
                "Assembly Language", "Operating Systems", "Algorithms"];
g.showGraph();
print();
g.topSort();
```

The output from this program is:

```
CS1 -> CS2
CS2 -> Data Structures Assembly Language Operating Systems CS1
Data Structures -> CS2 Algorithms
Assembly Language -> CS2
Operating Systems -> CS2
Algorithms -> Data Structures

CS1
CS2
Data Structures
Assembly Language
Operating Systems
Algorithms
```

# Exercises

1. Write a program that determines which type of graph search is faster: breadth-first or depth-first. Test your program with graphs of many different sizes.
2. Write a program that stores a graph in a file.
3. Write a program that reads a graph from a file.
4. Build a graph that models the map of the area where you live. Determine the shortest path from a starting vertex to the last vertex.
5. Perform a depth-first search and a breadth-first search of the graph created in exercise 4.

# Chapter 12. Sorting Algorithms

Two of the most common operations performed on data stored in a computer are sorting and searching. This has been true since the beginning of the computer industry, which means that sorting and searching are two of the most studied operations in computer science. Many of the data structures discussed in this book are designed primarily to make sorting and/or searching the stored data easier and more efficient.
This chapter will introduce you to some of the basic and advanced algorithms for sorting data. These algorithms depend only on the array as the means of storing data. In this chapter we'll also look at ways of timing our programs to determine which algorithm is most efficient.

# An Array Test Bed

We start this chapter by developing an array test bed to use in support of our study of basic sorting algorithms. We'll build a class for array data and functions that encapsulates some of the normal array operations: inserting new data, displaying array data, and calling the different sorting algorithms. Included in the class is a `swap()` function we will use to exchange elements in the array. Example 12-1 shows the code for this class.

##### Example 12-1. Array test bed class

```javascript
function CArray(numElements) {
  this.dataStore = [];
  this.pos = 0;
  this.numElements = numElements;
  this.insert = insert;
  this.toString = toString;
  this.clear = clear;
  this.setData = setData;
  this.swap = swap;
  for (var i = 0; i < numElements; ++i) {
    this.dataStore[i] = i;
  }
}

function setData() {
  for (var i = 0; i < this.numElements; ++i) {
    this.dataStore[i] = Math.floor(Math.random() * (this.numElements + 1));
  }
}

function clear() {
  for (var i = 0; i < this.dataStore.length; ++i) {
    this.dataStore[i] = 0;
  }
}

function insert(element) {
  this.dataStore[this.pos++] = element;
}

function toString() {
  var retstr = "";
  for (var i = 0; i < this.dataStore.length; ++i) {
    retstr += this.dataStore[i] + " ";
    if (i > 0 && i % 10 == 0) {
      retstr += "\n";
    }
  }
  return retstr;
}

function swap(arr, index1, index2) {
  var temp = arr[index1];
  arr[index1] = arr[index2];
  arr[index2] = temp;
}
```

Here is a simple program that uses the `CArray` class (the class is named `CArray` because JavaScript already has a built-in `Array` object):

##### Example 12-2.
Using the test bed class

```javascript
var numElements = 100;
var myNums = new CArray(numElements);
myNums.setData();
print(myNums.toString());
```

The output from this program follows (it will differ when you run it, because `setData()` uses the random number generator):

```
76 69 64 4 64 73 47 34 65 93 32
59 4 92 84 55 30 52 64 38 74
40 68 71 25 84 5 57 7 6 40
45 69 34 73 87 63 15 96 91 96
88 24 58 78 18 97 22 48 6 45
68 65 40 50 31 80 7 39 72 84
72 22 66 84 14 58 11 42 7 72
87 39 79 18 18 9 84 18 45 50
43 90 87 62 65 97 97 21 96 39
7 79 68 35 39 89 43 86 5
```

## Generating Random Data

The `setData()` function generates random numbers to store in the array. The `random()` function, which is part of the `Math` object, generates random numbers in the range from 0 (inclusive) up to, but not including, 1. In other words, a generated number can equal 0, but it will never equal 1. These raw fractions are not very useful on their own, so we scale each one by multiplying it by the number of elements we want plus 1, and then use the `floor()` function from the `Math` object to truncate the result to an integer. As you can see from the preceding output, this formula succeeds in generating a set of random integers between 0 and 100. For more information on how JavaScript generates random numbers, see the Mozilla documentation for `Math.random()`.
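The scaling formula can be tried in isolation. The sketch below uses a hypothetical helper name, `randomInt()` (it is not part of the `CArray` code), and `console.log` in place of the shell's `print()`:

```javascript
// Hypothetical helper: scale Math.random() into the integers 0..n,
// using the same formula as setData(): floor(random * (n + 1)).
function randomInt(n) {
  return Math.floor(Math.random() * (n + 1));
}

// Check the bounds empirically over many draws.
var lo = Infinity, hi = -Infinity;
for (var i = 0; i < 10000; ++i) {
  var r = randomInt(100);
  if (r < lo) { lo = r; }
  if (r > hi) { hi = r; }
}
console.log("observed range: " + lo + ".." + hi); // always within 0..100
```

Because `Math.random()` never returns 1, the product never reaches `n + 1`, so `floor()` caps the result at `n`.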
## Bubble Sort

The first sorting algorithm we will examine is the _bubble sort_. The bubble sort is one of the slowest sorting algorithms, but it is also one of the easiest sorts to implement. The bubble sort gets its name because when data are sorted using the algorithm, values float like a bubble from one end of the array to the other. Assuming you are sorting a set of numbers into ascending order, larger values float to the right of the array and lower values float to the left. This behavior is the result of the algorithm moving through the array many times, comparing adjacent values, and swapping them if the value to the left is greater than the value to the right.

Here is a simple example of the bubble sort. We start with the following list:

E A D B H

The first pass of the sort yields the following list:

A E D B H

The first and second elements are swapped. The next pass of the sort leads to:

A D E B H

The second and third elements are swapped. The next pass leads to the following order:

A D B E H

as the third and fourth elements are swapped. And finally, the second and third elements are swapped again, leading to the final order:

A B D E H

Figure 12-1 illustrates how the bubble sort works with a larger data set of numbers. In the figure, we examine two particular values inserted into the array: 2 and 72. Each number is circled. You can watch how 72 moves from the beginning of the array to the middle of the array, and how 2 moves from just past the middle of the array to the beginning of the array.

###### Figure 12-1. Bubble sort in action

Example 12-3 shows the code for the bubble sort.

##### Example 12-3.
The `bubbleSort()` function

```javascript
function bubbleSort() {
  var numElements = this.dataStore.length;
  var temp;
  for (var outer = numElements; outer >= 2; --outer) {
    for (var inner = 0; inner <= outer - 1; ++inner) {
      if (this.dataStore[inner] > this.dataStore[inner + 1]) {
        swap(this.dataStore, inner, inner + 1);
      }
    }
  }
}
```

Be sure to add a call to this function to the `CArray` constructor. Example 12-4 is a short program that sorts 10 numbers using the `bubbleSort()` function.

##### Example 12-4. Sorting 10 numbers with `bubbleSort()`

```javascript
load("carray.js");
var numElements = 10;
var mynums = new CArray(numElements);
mynums.setData();
print(mynums.toString());
mynums.bubbleSort();
print();
print(mynums.toString());
```

The output from this program is:

```
9 2 2 3 3 2 9 8 9 3

2 2 2 3 3 3 8 9 9 9
```

We can see that the bubble sort algorithm works, but it would be nice to view the intermediate results of the algorithm, since a record of the sorting process is useful in helping us understand how the algorithm works. We can do that by the careful placement of a call to the `toString()` function inside `bubbleSort()`, which will display the current state of the array as the function proceeds (shown in Example 12-5).

##### Example 12-5.
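The same algorithm can also be sketched as a standalone function on a plain array, which makes it easy to experiment with outside the `CArray` test bed. This is a sketch rather than the book's method: the inner bound is tightened to `outer - 2` so the comparison never reads past the end of the array, and `console.log` replaces the shell's `print()`:

```javascript
// Standalone bubble sort sketch (sorts the array in place and returns it).
function bubbleSort(arr) {
  for (var outer = arr.length; outer >= 2; --outer) {
    // inner stops at outer - 2 so arr[inner + 1] is always in bounds
    for (var inner = 0; inner <= outer - 2; ++inner) {
      if (arr[inner] > arr[inner + 1]) {
        var temp = arr[inner];
        arr[inner] = arr[inner + 1];
        arr[inner + 1] = temp;
      }
    }
  }
  return arr;
}

console.log(bubbleSort([9, 2, 2, 3, 3, 2, 9, 8, 9, 3]).join(" "));
// 2 2 2 3 3 3 8 9 9 9
```

Each outer pass bubbles the largest remaining value into its final position at the right end, which is why the inner loop can shrink by one element per pass.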
Adding a call to the `toString()` function to `bubbleSort()`

```javascript
function bubbleSort() {
  var numElements = this.dataStore.length;
  var temp;
  for (var outer = numElements; outer >= 2; --outer) {
    for (var inner = 0; inner <= outer - 1; ++inner) {
      if (this.dataStore[inner] > this.dataStore[inner + 1]) {
        swap(this.dataStore, inner, inner + 1);
      }
    }
    print(this.toString());
  }
}
```

When we run the program in Example 12-4 with the modified `bubbleSort()`, we get the following output:

```
7 0 9 10 8 0 3 3 5 7
0 7 9 8 0 3 3 5 7 10
0 7 8 0 3 3 5 7 9 10
0 7 0 3 3 5 7 8 9 10
0 0 3 3 5 7 7 8 9 10
0 0 3 3 5 7 7 8 9 10
0 0 3 3 5 7 7 8 9 10
0 0 3 3 5 7 7 8 9 10
0 0 3 3 5 7 7 8 9 10
0 0 3 3 5 7 7 8 9 10

0 0 3 3 5 7 7 8 9 10
```

With this output, you can more easily see how the lower values work their way to the beginning of the array and how the higher values work their way to the end of the array.

## Selection Sort

The next sorting algorithm we examine is the _selection sort_. This sort works by starting at the beginning of the array and comparing the first element with the remaining elements. After examining all the elements, the smallest element is placed in the first position of the array, and the algorithm moves to the second position. This process continues until the algorithm arrives at the next-to-last position in the array, at which point all the data is sorted.

Nested loops are used in the selection sort algorithm. The outer loop moves from the first element in the array to the next-to-last element; the inner loop moves from the second array element to the last element, looking for values that are smaller than the element currently being pointed to by the outer loop. After each pass of the inner loop, the smallest remaining value is assigned its proper place in the array.

Figure 12-2 illustrates how the selection sort algorithm works. Here is a simple example of how selection sort works on a list of five items.
The original list is:

E A D H B

The first pass looks for the minimal value and swaps it with the value at the front of the list:

A E D H B

The next pass finds the minimal value after the first element (which is now in place) and swaps it:

A B D H E

The D is in place, so the next step swaps the E and the H, leading to the list being in order:

A B D E H

Figure 12-2 shows how selection sort works on a larger data set of numbers.

###### Figure 12-2. The selection sort algorithm

Example 12-6 shows the code for the `selectionSort()` function.

##### Example 12-6. The `selectionSort()` function

```javascript
function selectionSort() {
  var min, temp;
  for (var outer = 0; outer <= this.dataStore.length - 2; ++outer) {
    min = outer;
    for (var inner = outer + 1; inner <= this.dataStore.length - 1; ++inner) {
      if (this.dataStore[inner] < this.dataStore[min]) {
        min = inner;
      }
    }
    swap(this.dataStore, outer, min);
    print(this.toString());
  }
}
```

Replace the `bubbleSort()` call in Example 12-4 with a call to the new `selectionSort()`. Below is the output from one run of our program using the `selectionSort()` function:

```
6 10 4 9 7 9 1 7 5 0
0 10 4 9 7 9 1 7 5 6
0 1 4 9 7 9 10 7 5 6
0 1 4 9 7 9 10 7 5 6
0 1 4 5 7 9 10 7 9 6
0 1 4 5 6 9 10 7 9 7
0 1 4 5 6 7 10 9 9 7
0 1 4 5 6 7 7 9 9 10
0 1 4 5 6 7 7 9 9 10
0 1 4 5 6 7 7 9 9 10

0 1 4 5 6 7 7 9 9 10
```

## Insertion Sort

The _insertion sort_ is analogous to the way humans sort data numerically or alphabetically. Let's say I have asked each student in a class to turn in an index card with his or her name, student ID, and a short biographical sketch. The students return the cards in random order, but I want them alphabetized so I can compare them to my class roster easily. I take the cards back to my office, clear off my desk, and pick the first card. The last name on the card is Smith. I place it at the top left corner of the desk and pick the second card. The last name on the card is Brown. I move Smith over to the right and put Brown in Smith's place.
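The letter walk-through above can be reproduced with a standalone sketch of the same algorithm (the `selectionSort(arr)` signature and the `min !== outer` guard are small departures from the chapter's `CArray` method; the guard just skips a pointless self-swap):

```javascript
// Standalone selection sort sketch (sorts the array in place and returns it).
function selectionSort(arr) {
  for (var outer = 0; outer <= arr.length - 2; ++outer) {
    var min = outer;
    // find the index of the smallest remaining element
    for (var inner = outer + 1; inner <= arr.length - 1; ++inner) {
      if (arr[inner] < arr[min]) {
        min = inner;
      }
    }
    if (min !== outer) { // skip the swap when the element is already in place
      var temp = arr[outer];
      arr[outer] = arr[min];
      arr[min] = temp;
    }
  }
  return arr;
}

console.log(selectionSort(["E", "A", "D", "H", "B"]).join(" "));
// A B D E H
```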
The next card is Williams. It can be inserted at the far right of the desk without having to shift any of the other cards. The next card is Acklin. It has to go at the beginning of the list, so each of the other cards must be shifted one position to the right to make room for Acklin's card. This is how the insertion sort works.

The insertion sort has two loops. The outer loop moves element by element through the array, while the inner loop compares the element chosen in the outer loop to the element next to it in the array. If the element selected by the outer loop is less than the element selected by the inner loop, array elements are shifted over to the right to make room for the inner-loop element, just as described in the previous name card example. Example 12-7 shows the code for the insertion sort. Be sure to add it to the `CArray` object.

##### Example 12-7. The `insertionSort()` function

```javascript
function insertionSort() {
  var temp, inner;
  for (var outer = 1; outer <= this.dataStore.length - 1; ++outer) {
    temp = this.dataStore[outer];
    inner = outer;
    while (inner > 0 && (this.dataStore[inner - 1] >= temp)) {
      this.dataStore[inner] = this.dataStore[inner - 1];
      --inner;
    }
    this.dataStore[inner] = temp;
    print(this.toString());
  }
}
```

Now let's look at how the insertion sort works by running Example 12-4 using the new `insertionSort()`:

```
4 3 3 5 2 5 1 10 10 1
3 4 3 5 2 5 1 10 10 1
3 3 4 5 2 5 1 10 10 1
3 3 4 5 2 5 1 10 10 1
2 3 3 4 5 5 1 10 10 1
2 3 3 4 5 5 1 10 10 1
1 2 3 3 4 5 5 10 10 1
1 2 3 3 4 5 5 10 10 1
1 2 3 3 4 5 5 10 10 1
1 1 2 3 3 4 5 5 10 10

1 1 2 3 3 4 5 5 10 10
```

This output clearly shows that the insertion sort works not by making data exchanges, but by moving larger array elements to the right to make room for the smaller elements on the left side of the array.

## Timing Comparisons of the Basic Sorting Algorithms

These three sorting algorithms are very similar in complexity, and theoretically, they should perform similarly.
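The index-card story maps directly onto a standalone sketch of the algorithm. This is a sketch (the `insertionSort(arr)` signature differs from the chapter's `CArray` method), run here on the card names from the example:

```javascript
// Standalone insertion sort sketch (sorts the array in place and returns it).
function insertionSort(arr) {
  for (var outer = 1; outer <= arr.length - 1; ++outer) {
    var temp = arr[outer]; // the "card" being placed
    var inner = outer;
    // shift larger elements right until temp's slot is found
    while (inner > 0 && arr[inner - 1] >= temp) {
      arr[inner] = arr[inner - 1];
      --inner;
    }
    arr[inner] = temp;
  }
  return arr;
}

console.log(insertionSort(["Smith", "Brown", "Williams", "Acklin"]).join(", "));
// Acklin, Brown, Smith, Williams
```

Note that it works for strings as well as numbers, since JavaScript's `>=` compares strings lexicographically.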
To determine the differences in performance among these three algorithms, we can use an informal timing system to compare how long it takes them to sort data sets. Being able to time these algorithms is important because, while you won't see much of a difference in the times of the sorting algorithms when you're sorting 100 elements or even 1,000 elements, there can be a huge difference in the times these algorithms take to sort millions of elements.

The timing system we will use in this section is based on retrieving the system time using the JavaScript `Date` object's `getTime()` function:

```javascript
var start = new Date().getTime();
```

The `getTime()` function returns the system time in milliseconds. The following code fragment:

```javascript
var start = new Date().getTime();
print(start);
```

results in output such as:

```
135154872720
```

To record the time it takes code to execute, we start the timer, run the code, and then stop the timer when the code is finished running. The time it takes to sort data is the difference between the recorded stopping time and the recorded starting time. Example 12-8 shows an example of timing a `for` loop that displays the numbers 1 through 99.

##### Example 12-8. Timing a `for` loop

```javascript
var start = new Date().getTime();
for (var i = 1; i < 100; ++i) {
  print(i);
}
var stop = new Date().getTime();
var elapsed = stop - start;
print("The elapsed time was: " + elapsed + " milliseconds.");
```

The output, not including the starting and stopping time values, from the program is:

```
The elapsed time was: 91 milliseconds.
```

Now that we have a tool for measuring the efficiency of these sorting algorithms, let's run some tests to compare them. For our comparison of the three basic sorting algorithms, we will time the three algorithms sorting arrays with data set sizes of 100, 1,000, and 10,000.
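The start/stop pattern can be wrapped in a small reusable helper. This is a sketch, not part of the book's test bed: `timeIt()` is a hypothetical name, `Date.now()` is modern shorthand for `new Date().getTime()`, and `console.log` replaces the shell's `print()`:

```javascript
// Hypothetical timing helper: runs fn, prints and returns the elapsed
// milliseconds measured with Date.now().
function timeIt(label, fn) {
  var start = Date.now();
  fn();
  var elapsed = Date.now() - start;
  console.log(label + ": " + elapsed + " ms");
  return elapsed;
}

// Usage: time a simple loop.
timeIt("summing one million integers", function () {
  var sum = 0;
  for (var i = 0; i < 1000000; ++i) { sum += i; }
});
```

Millisecond resolution is coarse for fast code, which is one reason the 100-element runs below report 0 or 1 ms.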
We expect not to see much difference among the algorithms for data set sizes of 100 and 1,000, but we do expect some difference when using a data set size of 10,000. Let's start with an array of 100 randomly chosen integers. We also add a function for creating a new data set for each algorithm, and remove the `print(this.toString())` calls from each of the sorting algorithms to clean up the output. Example 12-9 shows the code for this new function.

##### Example 12-9. Timing the sorting functions with 100 array elements

```javascript
load("carray3.js");
var numElements = 100;
var nums = new CArray(numElements);
nums.setData();
var start = new Date().getTime();
nums.bubbleSort();
var stop = new Date().getTime();
var elapsed = stop - start;
print("Elapsed time for the bubble sort on " + numElements +
      " elements is: " + elapsed + " milliseconds.");
nums.setData();
start = new Date().getTime();
nums.selectionSort();
stop = new Date().getTime();
elapsed = stop - start;
print("Elapsed time for the selection sort on " + numElements +
      " elements is: " + elapsed + " milliseconds.");
nums.setData();
start = new Date().getTime();
nums.insertionSort();
stop = new Date().getTime();
elapsed = stop - start;
print("Elapsed time for the insertion sort on " + numElements +
      " elements is: " + elapsed + " milliseconds.");
```

Here are the results (note that I ran these tests on an Intel Core i5-2450M 2.5 GHz processor with 4 GB of RAM):

```
Elapsed time for the bubble sort on 100 elements is: 0 milliseconds.
Elapsed time for the selection sort on 100 elements is: 1 milliseconds.
Elapsed time for the insertion sort on 100 elements is: 0 milliseconds.
```

Clearly, there is not any significant difference among the three algorithms. For the next test, we change the `numElements` variable to 1,000. Here are the results:

```
Elapsed time for the bubble sort on 1000 elements is: 17 milliseconds.
Elapsed time for the selection sort on 1000 elements is: 3 milliseconds.
Elapsed time for the insertion sort on 1000 elements is: 2 milliseconds.
```

For 1,000 numbers, the selection sort and the insertion sort are several times faster than the bubble sort. Finally, we test the algorithms with 10,000 numbers:

```
Elapsed time for the bubble sort on 10000 elements is: 830 milliseconds.
Elapsed time for the selection sort on 10000 elements is: 85 milliseconds.
Elapsed time for the insertion sort on 10000 elements is: 65 milliseconds.
```

The results for 10,000 numbers are consistent with the results for 1,000 numbers. Selection sort and insertion sort are significantly faster than the bubble sort, and the insertion sort is the fastest of the three. Keep in mind, however, that these tests must be run several times in a variety of environments for the results to be considered statistically valid.

# Advanced Sorting Algorithms

In this section we will cover more advanced algorithms for sorting data. These sorting algorithms are generally considered the most efficient for large data sets, which can have millions of elements rather than just hundreds or even thousands. The algorithms we study include Quicksort, Shellsort, Mergesort, and Heapsort. We discuss each algorithm's implementation and then compare their efficiency by running timing tests.

## The Shellsort Algorithm

The first advanced sorting algorithm we'll examine is Shellsort, named after its inventor, Donald Shell. This algorithm is based on the insertion sort but is a big improvement over that basic sorting algorithm. Shellsort's key concept is that it compares distant elements first, rather than adjacent elements, as is done in the insertion sort. Elements that are far out of place can be put into place more efficiently using this scheme than by simply comparing neighboring elements.
As the algorithm loops through the data set, the distance between compared elements decreases until, at the end of the data set, the algorithm is comparing elements that are adjacent.

Shellsort works by defining a gap sequence that indicates how far apart compared elements are when starting the sorting process. The gap sequence can be defined dynamically, but for most practical applications, you can predefine the gap sequence the algorithm will use. There are several published gap sequences that produce different results. We are going to use the sequence defined by Marcin Ciura in his paper "Best Increments for the Average Case of Shell Sort" (2001). The gap sequence is: 701, 301, 132, 57, 23, 10, 4, 1. However, before we write code for the average case, we are going to examine how the algorithm works with a small data set. Figure 12-3 demonstrates how the gap sequence works with the Shellsort algorithm.

Let's start with a look at the code for the Shellsort algorithm:

```javascript
function shellsort() {
  for (var g = 0; g < this.gaps.length; ++g) {
    for (var i = this.gaps[g]; i < this.dataStore.length; ++i) {
      var temp = this.dataStore[i];
      for (var j = i;
           j >= this.gaps[g] && this.dataStore[j - this.gaps[g]] > temp;
           j -= this.gaps[g]) {
        this.dataStore[j] = this.dataStore[j - this.gaps[g]];
      }
      this.dataStore[j] = temp;
    }
    print(this.toString());
  }
}
```

###### Figure 12-3. The Shellsort algorithm with an initial gap sequence of 3

For this program to work with our `CArray` class test bed, we need to add a definition of the gap sequence to the class definition. Add the following code to the constructor function for `CArray`:

```javascript
this.gaps = [5, 3, 1];
```

And add this function to the object:

```javascript
function setGaps(arr) {
  this.gaps = arr;
}
```

Finally, add a reference to the `shellsort()` function to the `CArray` class constructor, as well as the `shellsort()` code itself. The outer loop controls the movement within the gap sequence.
In other words, for the first pass through the data set, the algorithm is going to examine elements that are five elements away from each other. The next pass will examine elements that are three elements away from each other. The last pass performs a standard insertion sort on elements that are one place away, which means they are adjacent. By the time this last pass begins, many of the elements will already be in place, and the algorithm won't have to exchange many elements. This is where the algorithm gains efficiency over insertion sort. Figure 12-3 illustrates how the Shellsort algorithm works on a data set of 10 random numbers with a gap sequence of 5, 3, 1.

Now let's put the algorithm to work on a real example. We add a `print()` statement to `shellsort()` so that we can follow the progress of the algorithm while it sorts the data set. Each gap pass is noted, followed by the order of the data set after sorting with that particular gap. The program is shown in Example 12-10.

##### Example 12-10. Running `shellsort()` on a small data set

```javascript
load("carray4.js");
var nums = new CArray(10);
nums.setData();
print("Before Shellsort: \n");
print(nums.toString());
print("\nDuring Shellsort: \n");
nums.shellsort();
print("\nAfter Shellsort: \n");
print(nums.toString());
```

The output from this program is:

```
Before Shellsort:

4 4 2 9 4 2 6 1 1 1

During Shellsort:

2 4 1 1 1 4 6 2 9 4
1 1 1 2 2 4 4 4 9 6
1 1 1 2 2 4 4 4 6 9

After Shellsort:

1 1 1 2 2 4 4 4 6 9
```

To understand how Shellsort works, compare the initial state of the array with its state after the gap-5 pass. The first element of the initial array, 4, was swapped with the fifth element after it, 1, because 1 < 4. Now compare the gap-5 line with the gap-3 line. The 2 in the gap-5 line is swapped with the 1 because 1 < 2 and 1 is the third element after the 2.
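The gap-driven loops can also be written as a standalone function that takes the array and the gap sequence as parameters. This is a sketch (the `shellsort(arr, gaps)` signature is not the chapter's `CArray` method); any gap sequence that ends in 1 produces a fully sorted array, because the final pass is a plain insertion sort:

```javascript
// Standalone Shellsort sketch: gapped insertion sort, one pass per gap.
function shellsort(arr, gaps) {
  for (var g = 0; g < gaps.length; ++g) {
    var gap = gaps[g];
    for (var i = gap; i < arr.length; ++i) {
      var temp = arr[i];
      var j;
      // shift gap-distant larger elements right until temp's slot is found
      for (var j = i; j >= gap && arr[j - gap] > temp; j -= gap) {
        arr[j] = arr[j - gap];
      }
      arr[j] = temp;
    }
  }
  return arr;
}

console.log(shellsort([4, 4, 2, 9, 4, 2, 6, 1, 1, 1], [5, 3, 1]).join(" "));
// 1 1 1 2 2 4 4 4 6 9
```

The data and gaps here match Example 12-10, so the final line agrees with that run's "After Shellsort" output.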
By simply counting the current gap sequence number down from the current element in the loop, and comparing the two numbers, you can trace any run of the Shellsort algorithm.

Having now seen some details of how the Shellsort algorithm works, let's use a larger gap sequence and run it with a larger data set (100 elements). Comment out the `print()` in the `shellsort()` method for a cleaner output. Here is the output:

```
Before Shellsort:

100 96 80 59 74 55 92 24 93 73 71
42 55 2 56 46 50 20 20 95
19 94 21 77 9 92 22 41 64 11
67 70 23 12 98 46 58 73 92 3
23 7 39 46 22 70 36 72 43 85
26 96 78 2 62 0 29 82 48 88
88 50 10 17 7 55 54 42 89 56
89 41 74 75 29 80 71 10 67 54
32 72 33 30 81 86 90 79 4 30
84 31 29 42 10 78 68 29 49 17

After Shellsort:

0 2 2 3 4 7 7 9 10 10 10
17 17 19 20 20 21 22 22 23
23 24 26 29 29 29 29 30 30 31
32 33 36 39 41 41 42 42 42 43
46 46 46 48 49 50 50 54 54 55
55 55 56 56 58 59 62 64 67 67
68 70 70 71 71 72 72 73 73 74
74 75 77 78 78 79 80 80 81 82
84 85 86 88 88 89 89 90 92 92
92 93 94 95 96 96 98 100
```

We will revisit the `shellsort()` algorithm when we compare it to other advanced sorting algorithms later in the chapter.

### Computing a dynamic gap sequence

Robert Sedgewick, coauthor of _Algorithms, 4E_ (Addison-Wesley), defines a `shellsort()` function that uses a formula to dynamically compute the gap sequence to use with Shellsort.
Sedgewick's algorithm determines the initial gap value using the following code fragment: var N = this.dataStore.length; var h = 1; while (h < N/3) { h = 3 * h + 1; } Once the gap value is determined, the function works like our previous `shellsort()` function, except the last statement before going back into the outer loop computes a new gap value: h = (h-1)/3; The complete, newly defined function is named `shellsort2()`, and is added to CArray: function shellsort2() { var N = this.dataStore.length; var h = 1; while (h < N/3) { h = 3 * h + 1; } while (h >= 1) { for (var i = h; i < N; i++) { for (var j = i; j >= h && this.dataStore[j] < this.dataStore[j-h]; j -= h) { swap(this.dataStore, j, j-h); } } h = (h-1)/3; } } Example 12-11 provides a program to test `shellsort2()`. ##### Example 12-11. `shellsort()` with a dynamically computed gap sequence load("carray4.js") var nums = new CArray(100); nums.setData(); print("Before shellsort2: \n"); print(nums.toString()); nums.shellsort2(); print("\nAfter shellsort2: \n"); print(nums.toString()); The output from this program is: Before shellsort2: 5 0 89 59 8 38 75 3 51 49 87 55 57 55 44 82 35 60 2 73 0 87 21 69 19 59 91 38 16 74 36 5 48 10 69 51 3 6 63 67 59 10 42 57 66 44 60 79 44 53 56 87 85 2 9 86 90 71 77 54 7 35 82 68 32 90 64 85 13 48 9 87 97 54 11 1 28 33 42 17 23 11 48 0 12 8 2 97 88 65 28 94 30 87 77 74 73 21 71 0 After shellsort2: 0 0 0 0 1 2 2 2 3 3 5 5 6 7 8 8 9 9 10 10 11 11 12 13 16 17 19 21 21 23 28 28 30 32 33 35 35 36 38 38 42 42 44 44 44 48 48 48 49 51 51 53 54 54 55 55 56 57 57 59 59 59 60 60 63 64 65 66 67 68 69 69 71 71 73 73 74 74 75 77 77 79 82 82 85 85 86 87 87 87 87 87 88 89 90 90 91 94 97 97 Before we leave the Shellsort algorithm, we need to compare the efficiency of our two `shellsort()` functions. 
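Before running that comparison, it is worth seeing exactly which gap values the 3 * h + 1 formula generates. The following helper (a hypothetical function, not part of the chapter's code) collects the sequence that `shellsort2()` walks through for a given array length:

```javascript
// Collect the gap sequence used by shellsort2() for a data store of
// length N: grow h with h = 3 * h + 1 while h < N/3, then shrink it
// back down with h = (h - 1) / 3 until it reaches 1.
function sedgewickGaps(N) {
  var h = 1;
  while (h < N / 3) {
    h = 3 * h + 1;     // 1, 4, 13, 40, 121, ...
  }
  var gaps = [];
  while (h >= 1) {
    gaps.push(h);
    h = (h - 1) / 3;   // walk the same sequence back down
  }
  return gaps;
}
```

For an array of 100 elements, the growth loop stops at h = 40 (since 40 is not less than 100/3), and the shrink step then visits the gaps 40, 13, 4, and 1.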
First, to ensure that the test data store is clean before each test, we'll add a new method, `clear()`, to the CArray object: function clear() { this.dataStore.length = 0; } A program that compares running times of the two functions is shown in Example 12-12. The first function uses the hard-coded gap sequence stored in the CArray object; the second computes its gap sequence dynamically. ##### Example 12-12. Comparing `shellsort()` algorithms load("carray4.js"); var nums = new CArray(10000); nums.setData(); var start = new Date().getTime(); nums.shellsort(); var stop = new Date().getTime(); var elapsed = stop - start; print("Shellsort with hard-coded gap sequence: " + elapsed + " ms."); nums.clear(); nums.setData(); start = new Date().getTime(); nums.shellsort2(); stop = new Date().getTime(); elapsed = stop - start; print("Shellsort with dynamic gap sequence: " + elapsed + " ms."); The results from this program are: Shellsort with hard-coded gap sequence: 18 ms. Shellsort with dynamic gap sequence: 18 ms. Both algorithms sorted the data in the same amount of time. Here is the output from running the program with 100,000 data elements: Shellsort with hard-coded gap sequence: 1578 ms. Shellsort with dynamic gap sequence: 1578 ms. Clearly, both of these algorithms sort data with the same efficiency, so you can use either of them with confidence. ## The Mergesort Algorithm The Mergesort algorithm is so named because it works by merging sorted sublists together to form a larger, completely sorted list. In theory, this algorithm should be easy to implement. We need two sorted subarrays and a third array into which we merge the two subarrays by comparing data elements and inserting the smallest element value. In practice, however, Mergesort has some problems because if we are trying to sort a very large data set using the algorithm, the amount of space we need to store the two merged subarrays can be quite large.
Since space is not such an issue in these days of inexpensive memory, it is worth implementing Mergesort to see how it compares in efficiency to other sorting algorithms. ### Bottom-up Mergesort The nonrecursive, or iterative, version of Mergesort is referred to as a bottom-up process. The algorithm begins by breaking down the data set being sorted into a set of one-element arrays. Then these arrays are slowly merged by creating a set of left and right subarrays, each holding the partially sorted data until all that is left is one array with the data perfectly sorted. ### Top-down Mergesort It is customary to implement Mergesort as a recursive algorithm. The basic idea is that the array is split into two pieces, left and right. Each piece is then recursively split into its own left and right pieces, until the base case is reached, which is an array with a single element. Then the left and right pieces are merged into sorted order, on the way back up through the recursive calls, until the outermost left and right partitions are merged, leaving the list in sorted order. ###### Figure 12-4.
The bottom-up Mergesort algorithm Before we show you the JavaScript code for Mergesort, here is the output from a JavaScript program that uses recursive Mergesort to sort an array of 10 integers: 6,10,1,9,4,8,2,7,3,5 left array - 6,10,1,9,4 left array - 6,10 left array - 6 right array - 10 merge arrays - 6,10 right array - 1,9,4 left array - 1 right array - 9,4 left array - 9 right array - 4 merge arrays - 4,9 merge arrays - 1,4,9 merge arrays - 1,4,6,9,10 (left array sorted) right array - 8,2,7,3,5 left array - 8,2 left array - 8 right array - 2 merge arrays - 2,8 right array - 7,3,5 left array - 7 right array - 3,5 left array - 3 right array - 5 merge arrays - 3,5 merge arrays - 3,5,7 merge arrays - 2,3,5,7,8 (right array sorted) merge arrays - 1,2,3,4,5,6,7,8,9,10 1,2,3,4,5,6,7,8,9,10 Now that we have seen how the recursive Mergesort works, Example 12-13 presents the code that created the preceding output. ##### Example 12-13. A recursive Mergesort JavaScript implementation function merge(left,right){ var result = []; var leftLen = left.length; var rightLen = right.length; while (leftLen > 0 || rightLen > 0){ if (leftLen > 0 && rightLen > 0){ // Both left and right are still populated if (left[0] < right[0]){ result.push(left.shift()); leftLen -= 1; } else if (right[0] <= left[0]){ result.push(right.shift()); rightLen -= 1; } } // Only left array contains elements else if (leftLen > 0){ result.push(left.shift()); leftLen -= 1; } // Only right array contains elements else if (rightLen > 0){ result.push(right.shift()); rightLen -= 1; } } return result; } function mergeSort(array){ var length = array.length; if (length <= 1){ return array; } var q = Math.floor(length/2); var left = mergeSort(array.slice(0,q)); var right = mergeSort(array.slice(q)); return merge(left, right); } var nums = [6,10,1,9,4,8,2,7,3,5]; print(nums); print(); nums = mergeSort(nums); print(); print(nums); The key feature of the `mergeSort()` function is the recursive partitioning of the
original list into successively smaller subarrays, until each consists of a single element. By controlling the size of the subarrays, the sort process is relatively efficient, since it doesn't take much time to sort a small array. This makes merging efficient also, since it is much easier to merge data into sorted order when the unmerged data is already sorted. Our next step with Mergesort is to add it to the `CArray` class. Example 12-14 shows the `CArray` class with the `mergeSort()` and `merge()` functions added to its definition. ##### Example 12-14. Mergesort added to the `CArray` class function CArray(numElements) { this.gaps = [5,3,1]; this.dataStore = []; this.pos = 0; this.numElements = numElements; this.insert = insert; this.toString = toString; this.clear = clear; this.setData = setData; this.swap = swap; this.bubbleSort = bubbleSort; this.selectionSort = selectionSort; this.insertionSort = insertionSort; this.shellsort = shellsort; this.shellsort2 = shellsort2; this.mergeSort = mergeSort; for (var i = 0; i < numElements; ++i) { this.dataStore[i] = i; } } // other function definitions go here function merge(left,right){ var result = []; var leftLen = left.length; var rightLen = right.length; while (leftLen > 0 || rightLen > 0){ if (leftLen > 0 && rightLen > 0){ // Both left and right are still populated if (left[0] < right[0]){ result.push(left.shift()); leftLen -= 1; } else if (right[0] <= left[0]){ result.push(right.shift()); rightLen -= 1; } } else if (leftLen > 0){ result.push(left.shift()); leftLen -= 1; } else if (rightLen > 0){ result.push(right.shift()); rightLen -= 1; } } return result; } function mergeSort(array){ var length = array.length; if (length <= 1){ // This is the base case for the recursion return array; } var q = Math.floor(length/2); var left = mergeSort(array.slice(0,q)); var right = mergeSort(array.slice(q)); return merge(left, right); } Testing the new addition provides the same output as previously displayed: load
('./CArray.js'); var nums = new CArray(10); nums.setData(); print('Start: ' + nums.toString()); nums.mergeSort(); print('Done: ' + nums.toString()); ## The Quicksort Algorithm The Quicksort algorithm is one of the fastest sorting algorithms for large data sets. Quicksort is a divide-and-conquer algorithm that recursively breaks a list of data into successively smaller sublists consisting of the smaller elements and the larger elements. The algorithm continues this process until all the data in the list is sorted. The algorithm divides the list into sublists by selecting one element of the list as a _pivot_. Data is sorted around the pivot by moving elements less than the pivot to the bottom of the list and elements that are greater than the pivot to the top of the list. Figure 12-5 demonstrates how data is sorted around a pivot. ###### Figure 12-5. Sorting data around a pivot ### Algorithm and pseudocode for the Quicksort algorithm The algorithm for Quicksort is: 1. Pick a pivot element that divides the list into two sublists. 2. Reorder the list so that all elements less than the pivot element are placed before the pivot and all elements greater than the pivot are placed after it. 3. Repeat steps 1 and 2 on both the list with smaller elements and the list of larger elements. This algorithm then translates into the following JavaScript program: function qSort(list) { if (list.length == 0) { return []; } var lesser = []; var greater = []; var pivot = list[0]; for (var i = 1; i < list.length; i++) { if (list[i] < pivot) { lesser.push(list[i]); } else { greater.push(list[i]); } } return qSort(lesser).concat(pivot, qSort(greater)); } The function first tests to see if the array has a length of 0. If so, then the array doesn't need sorting and the function returns. Otherwise, two arrays are created, one to hold the elements lesser than the pivot and the other to hold the elements greater than the pivot. 
The first element of the array is then selected as the pivot. Next, the function loops over the array elements and places them in their proper array based on their value relative to the pivot value. The function is then called recursively on both the lesser array and the greater array. When the recursion is complete, the greater array is concatenated to the lesser array to form the sorted array and is returned from the function. Let's test the algorithm with some data. Because our `qSort` program uses recursion, we won't use the array test bed; instead, we'll just create an array of random numbers and sort the array directly. The program is shown in Example 12-15. ##### Example 12-15. Sorting data with Quicksort function qSort(arr) { if (arr.length == 0) { return []; } var left = []; var right = []; var pivot = arr[0]; for (var i = 1; i < arr.length; i++) { if (arr[i] < pivot) { left.push(arr[i]); } else { right.push(arr[i]); } } return qSort(left).concat(pivot, qSort(right)); } var a = []; for (var i = 0; i < 10; ++i) { a[i] = Math.floor((Math.random()*100)+1); } print(a); print(); print(qSort(a)); The output from this program is: 68,80,12,80,95,70,79,27,88,93 12,27,68,70,79,80,80,88,93,95 The Quicksort algorithm is best to use on large data sets; its performance degrades for smaller data sets.
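One thing to notice is that `qSort()` allocates two new arrays at every level of recursion. A common refinement, sketched below (this in-place variant is our own, not the chapter's implementation), partitions the array by swapping elements around the pivot so that no extra arrays are needed:

```javascript
// In-place Quicksort: partition arr[lo..hi] around a pivot by swapping
// smaller elements toward the front, then recurse on the two halves.
function qSortInPlace(arr, lo, hi) {
  if (lo === undefined) { lo = 0; hi = arr.length - 1; }
  if (lo >= hi) { return arr; }
  var pivot = arr[hi];   // use the last element as the pivot
  var store = lo;        // next slot for an element smaller than pivot
  for (var i = lo; i < hi; ++i) {
    if (arr[i] < pivot) {
      var temp = arr[i]; arr[i] = arr[store]; arr[store] = temp;
      ++store;
    }
  }
  // Drop the pivot between the smaller and larger elements.
  var t = arr[hi]; arr[hi] = arr[store]; arr[store] = t;
  qSortInPlace(arr, lo, store - 1);
  qSortInPlace(arr, store + 1, hi);
  return arr;
}
```

With the lesser and greater arrays gone, the only extra memory the sort uses is the recursion stack.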
To better demonstrate how Quicksort works, this next program highlights the pivot as it is chosen and how data is sorted around the pivot: function qSort(arr) { if (arr.length == 0) { return []; } var left = []; var right = []; var pivot = arr[0]; for (var i = 1; i < arr.length; i++) { print("pivot: " + pivot + " current element: " + arr[i]); if (arr[i] < pivot) { print("moving " + arr[i] + " to the left"); left.push(arr[i]); } else { print("moving " + arr[i] + " to the right"); right.push(arr[i]); } } return qSort(left).concat(pivot, qSort(right)); } var a = []; for (var i = 0; i < 10; ++i) { a[i] = Math.floor((Math.random()*100)+1); } print(a); print(); print(qSort(a)); The output from this program is: 9,3,93,9,65,94,50,90,12,65 pivot: 9 current element: 3 moving 3 to the left pivot: 9 current element: 93 moving 93 to the right pivot: 9 current element: 9 moving 9 to the right pivot: 9 current element: 65 moving 65 to the right pivot: 9 current element: 94 moving 94 to the right pivot: 9 current element: 50 moving 50 to the right pivot: 9 current element: 90 moving 90 to the right pivot: 9 current element: 12 moving 12 to the right pivot: 9 current element: 65 moving 65 to the right pivot: 93 current element: 9 moving 9 to the left pivot: 93 current element: 65 moving 65 to the left pivot: 93 current element: 94 moving 94 to the right pivot: 93 current element: 50 moving 50 to the left pivot: 93 current element: 90 moving 90 to the left pivot: 93 current element: 12 moving 12 to the left pivot: 93 current element: 65 moving 65 to the left pivot: 9 current element: 65 moving 65 to the right pivot: 9 current element: 50 moving 50 to the right pivot: 9 current element: 90 moving 90 to the right pivot: 9 current element: 12 moving 12 to the right pivot: 9 current element: 65 moving 65 to the right pivot: 65 current element: 50 moving 50 to the left pivot: 65 current element: 90 moving 90 to the right pivot: 65 current element: 12 moving 12 to the left pivot: 65 
current element: 65 moving 65 to the right pivot: 50 current element: 12 moving 12 to the left pivot: 90 current element: 65 moving 65 to the left 3,9,9,12,50,65,65,90,93,94 # Exercises 1. Run the three advanced sorting algorithms discussed in this chapter (Shellsort, Mergesort, and Quicksort) with string data rather than numeric data and compare the running times for the different algorithms. Are the results consistent with the results of using numeric data? 2. Create an array of 1,000 integers already sorted into numeric order. Write a program that runs each sorting algorithm with this array, timing each algorithm and comparing the times. How do these times compare to the times for sorting an array in random order? 3. Create an array of 1,000 integers sorted in reverse numerical order. Write a program that runs each sorting algorithm with this array, timing each algorithm, and compare the times. 4. Create an array of over 10,000 randomly generated integers and sort the array using both Quicksort and the JavaScript built-in sorting function, timing each function. Is there a time difference between the two functions? # Chapter 13. Searching Algorithms Searching for data is a fundamental computer programming task that has been studied for many years. This chapter looks at just one aspect of the search problem—searching for a specified value in a list. There are two ways to search for data in a list: _sequential search_ and _binary search_. A sequential search is used when the items in a list are in random order; a binary search is used when the items in a list are in sorted order. Binary search is the more efficient algorithm, but you also have to take into account the extra time it takes to sort the data set before being able to search it for a value. # Commonly Used Functions in Examples Two functions are commonly used in multiple examples in this chapter. The first is `dispArr()`, which displays array contents, just as was used in Chapter 12.
function dispArr(arr) { for (var i = 0; i < arr.length; ++i) { putstr(arr[i] + " "); if (i % 10 == 9) { putstr("\n"); } } if (i % 10 != 0) { putstr("\n"); } } The second is `insertionsort()`, which preprocesses array entries, enabling more efficient searches. function insertionsort(arr) { var temp, inner; for (var outer = 1; outer <= arr.length-1; ++outer) { temp = arr[outer]; inner = outer; while (inner > 0 && (arr[inner-1] >= temp)) { arr[inner] = arr[inner-1]; --inner; } arr[inner] = temp; } } Incorporate the code for either function when an example calls for it. # Sequential Search The most obvious way to search for data in a list is to begin at the first element and move to each element in the list until you either find the data you are looking for or you reach the end of the list. This is called a sequential search, sometimes also called a _linear_ search. It is an example of a _brute-force_ search technique, where potentially every element in the data structure is visited on the way to a solution. A sequential search is very easy to implement. Simply start a loop at the beginning of the list and compare each element to the data you are searching for. If you find a match, the search is over. If you get to the end of the list without generating a match, then the data searched for is not in the list. Example 13-1 shows a function for performing sequential search on an array. ##### Example 13-1. The `seqSearch()` function function seqSearch(arr, data) { for (var i = 0; i < arr.length; ++i) { if (arr[i] == data) { return true; } } return false; } If the `data` argument is found in the array, the function returns `true` immediately. If the function gets to the end of the array without finding a match, the function returns `false`. Example 13-2 presents a program to test our sequential search function, including a function to make it easy to display the array's contents.
As in Chapter 12, we use random number generation to populate an array with random numbers in the range of 0 to 100. We also use a function to display the contents of the array, just as we did in Chapter 12. ##### Example 13-2. Executing the `seqSearch()` function var nums = []; for (var i = 0; i < 100; ++i) { nums[i] = Math.floor(Math.random() * 101); } dispArr(nums); putstr("Enter a number to search for: "); var num = parseInt(readline()); print(); if (seqSearch(nums, num)) { print(num + " is in the array."); } else { print(num + " is not in the array."); } print(); dispArr(nums); This program creates an array with random numbers in the range of 0 to 100. The user enters a value, the value is searched for, and the result is displayed. Finally, the program displays the complete array as proof of the validity of the function's return value. Here is a sample run of the program: Enter a number to search for: 23 23 is in the array. 13 95 72 100 94 90 29 0 66 2 29 42 20 69 50 49 100 34 71 4 26 85 25 5 45 67 16 73 64 58 53 66 73 46 55 64 4 84 62 45 99 77 62 47 52 96 16 97 79 55 94 88 54 60 40 87 81 56 22 30 91 99 90 23 18 33 100 63 62 46 6 10 5 25 48 9 8 95 33 82 32 56 23 47 36 88 84 33 4 73 99 60 23 63 86 51 87 63 54 62 We can also write the sequential search function so that it returns the position where a match is found. Example 13-3 provides the definition of this new version of `seqSearch()`. ##### Example 13-3. Modifying the `seqSearch()` function to return the position found (or -1) function seqSearch(arr, data) { for (var i = 0; i < arr.length; ++i) { if (arr[i] == data) { return i; } } return -1; } Notice that if the element searched for is not found, the function returns `-1`. This is the best value to return for the function since an array element cannot be stored in position `-1`. Example 13-4 presents a program that uses this second definition of `seqSearch()`. ##### Example 13-4.
Testing the modified `seqSearch()` function var nums = []; for (var i = 0; i < 100; ++i) { nums[i] = Math.floor(Math.random() * 101); } putstr("Enter a number to search for: "); var num = readline(); print(); var position = seqSearch(nums, num); if (position > -1) { print(num + " is in the array at position " + position); } else { print(num + " is not in the array."); } print(); dispArr(nums); Here is one run of the program: Enter a number to search for: 22 22 is in the array at position 35 35 36 38 50 24 81 78 43 26 26 89 88 39 1 56 92 17 77 53 36 73 61 54 32 97 27 60 67 16 70 59 4 76 7 38 22 87 30 42 91 79 6 61 56 84 6 82 55 91 10 42 37 46 4 85 37 18 27 76 29 2 76 46 87 16 1 78 6 43 72 2 51 65 70 91 73 67 1 57 53 31 16 64 89 84 76 91 15 39 38 3 19 66 44 97 29 6 1 72 62 Keep in mind that the `seqSearch()` function is not as fast as the built-in `Array.indexOf()` function, but is shown here to demonstrate how search works. ## Searching for Minimum and Maximum Values Computer programming problems often involve searching for minimum and maximum values. In a sorted data structure, searching for these values is a trivial task. Searching an unsorted data structure, on the other hand, is a more challenging task. Let's start by determining how we should search an array for a minimum value. Here is one algorithm: 1. Assign the first element of the array to a variable as the minimum value. 2. Begin looping through the array, starting with the second element, comparing each element with the current minimum value. 3. If the current element has a lesser value than the current minimum value, assign the current element as the new minimum value. 4. Move to the next element and repeat step 3. 5. The minimum value is stored in the variable when the program ends. The operation of this algorithm is demonstrated in Figure 13-1. ###### Figure 13-1. Searching for the minimum value of an array This algorithm is easily transformed into a JavaScript function, as shown in Example 13-5. 
##### Example 13-5. The `findMin()` function function findMin(arr) { var min = arr[0]; for (var i = 1; i < arr.length; ++i) { if (arr[i] < min) { min = arr[i]; } } return min; } The key thing to notice about this function is that it begins with the second array element, since we are assigning the first array element as the current minimum value. Let's test the function in a program, shown in Example 13-6. Note that you'll also want to add in the definition for `dispArr()`, shown in earlier examples. ##### Example 13-6. Finding the minimum value of an array var nums = []; for (var i = 0; i < 100; ++i) { nums[i] = Math.floor(Math.random() * 101); } var minValue = findMin(nums); dispArr(nums); print(); print("The minimum value is: " + minValue); Here is the output from running this program: 89 30 25 32 72 70 51 42 25 24 53 55 78 50 13 40 48 32 26 2 14 33 45 72 56 44 21 88 27 68 15 93 98 73 28 16 46 87 28 65 38 67 16 85 63 23 69 64 91 9 70 81 27 97 82 6 88 3 7 46 13 11 64 31 26 38 28 13 17 69 90 1 6 7 64 43 9 73 80 98 46 27 22 87 49 83 6 39 42 51 54 84 34 53 78 40 14 5 76 62 The minimum value is: 1 The algorithm for finding the maximum value works in a similar fashion. We assign the first element of the array as the maximum value and then loop through the rest of the array, comparing each element to the current maximum value. If the current element is greater than the current maximum value, that element's value is stored in the variable. Example 13-7 shows the function definition. ##### Example 13-7. The `findMax()` function function findMax(arr) { var max = arr[0]; for (var i = 1; i < arr.length; ++i) { if (arr[i] > max) { max = arr[i]; } } return max; } Example 13-8 shows a program that finds both the minimum value and the maximum value of an array. ##### Example 13-8.
Using the `findMax()` function var nums = []; for (var i = 0; i < 100; ++i) { nums[i] = Math.floor(Math.random() * 101); } var minValue = findMin(nums); dispArr(nums); print(); print(); print("The minimum value is: " + minValue); var maxValue = findMax(nums); print(); print("The maximum value is: " + maxValue); The output from this program is: 26 94 40 40 80 85 74 6 6 87 56 91 86 21 79 72 77 71 99 45 5 5 35 49 38 10 97 39 14 62 91 42 7 31 94 38 28 6 76 78 94 30 47 74 20 98 5 68 33 32 29 93 18 67 8 57 85 66 49 54 28 17 42 75 67 59 69 6 35 86 45 62 82 48 85 30 87 99 46 51 47 71 72 36 54 77 19 11 52 81 52 41 16 70 55 97 88 92 2 77 The minimum value is: 2 The maximum value is: 99 ## Using Self-Organizing Data The fastest successful sequential searches on unordered data occur when the data being searched for is located at the beginning of the data set. You can ensure that a successfully found data item will be found quickly in the future by moving it to the beginning of a data set after it has been found in a search. The concept behind this strategy is that we can minimize search times by locating items that are frequently searched for at the beginning of a data set. For example, if you are a librarian and you are asked several times a day for the same reference book, you will keep that book close to your desk for easy access. After many searches, the most frequently searched-for items will have moved from wherever they were stored to the beginning of the data set. This is an example of _self-organized data_ : data that is organized not by the programmer before the program is executed, but by the program itself while the program is running. It makes sense to allow your data to self-organize since the data being searched most likely follow the "80-20 rule," meaning that 80% of the searches made on a data set are searching for just 20% of the data in the set. 
Self-organization will eventually put that 20% at the beginning of the data set, where a simple sequential search will find them quickly. Probability distributions such as the 80-20 rule are called Pareto distributions, named for Vilfredo Pareto, who discovered these distributions studying the spread of income and wealth in the late 19th century. See _The Art of Computer Programming: Volume 3, Sorting and Searching_ by Donald Knuth (Addison-Wesley, 399-401) for more information on probability distributions in data sets. We can modify our `seqSearch()` function to include self-organization fairly easily. Example 13-9 is our first attempt at the function definition. ##### Example 13-9. The `seqSearch()` function with self-organization function seqSearch(arr, data) { for (var i = 0; i < arr.length; ++i) { if (arr[i] == data) { if (i > 0) { swap(arr,i,i-1); } return true; } } return false; } You'll notice that the function checks to make sure that the found data is not already in position 0. The preceding definition uses a `swap()` function to exchange the found data with the data currently stored in the previous position. Here is the definition for the `swap()` function: function swap(arr, index, index1) { var temp = arr[index]; arr[index] = arr[index1]; arr[index1] = temp; } You'll notice that when using this technique, which is similar to how data is sorted with the bubble sort algorithm, the most frequently accessed elements will eventually work their way to the beginning of the data set. For example, this program: var numbers = [5,1,7,4,2,10,9,3,6,8]; print(numbers); for (var i = 1; i <= 3; i++) { seqSearch(numbers, 4); print(numbers); } generates the following output: 5,1,7,4,2,10,9,3,6,8 5,1,4,7,2,10,9,3,6,8 5,4,1,7,2,10,9,3,6,8 4,5,1,7,2,10,9,3,6,8 Notice how the value 4 "bubbles" up to the beginning of the list because it is being searched for three times in a row.
This technique also guarantees that if an element is already at the beginning of the data set, it won't get moved farther down. Another way we can write the `seqSearch()` function with self-organizing data is to move a found item to the beginning of the data set, though we wouldn't want to exchange an element if it is already close to the beginning. To achieve this goal, we can swap found elements only if they are located at least some specified distance away from the beginning of the data set. We only have to determine what is considered to be far enough away from the beginning of the data set to warrant moving the element closer to the beginning. Following the 80-20 rule again, we can make a rule that states that a data element is relocated to the beginning of the data set only if its location lies outside the first 20% of the items in the data set. Example 13-10 shows the definition for this new version of `seqSearch()`. ##### Example 13-10. `seqSearch()` with better self-organization function seqSearch(arr, data) { for (var i = 0; i < arr.length; ++i) { if (arr[i] == data && i > (arr.length * 0.2)) { swap(arr,i,0); return true; } else if (arr[i] == data) { return true; } } return false; } Example 13-11 shows a program that tests this definition on a small data set of 10 elements. Again, copy the `dispArr()` and `swap()` functions from earlier. ##### Example 13-11.
Searching with self-organization var nums = []; for (var i = 0; i < 10; ++i) { nums[i] = Math.floor(Math.random() * 11); } dispArr(nums); print(); putstr("Enter a value to search for: "); var val = parseInt(readline()); if (seqSearch(nums, val)) { print("Found element: "); print(); dispArr(nums); } else { print(val + " is not in array."); } Here are the results of a sample run of this program: 4 5 1 8 10 1 3 10 0 1 Enter a value to search for: 3 Found element: 3 5 1 8 10 1 4 10 0 1 Let's run the program again and search for an element closer to the front of the data set: 4 2 9 5 0 6 9 4 5 6 Enter a value to search for: 2 Found element: 4 2 9 5 0 6 9 4 5 6 Because the searched-for element is so close to the front of the data set, the function does not change its position. The searches we have discussed so far require that the data being searched be kept in an unordered sequence. However, we can speed up searches on large data sets significantly if we first sort the data set before performing a search. In the next section we discuss an algorithm for searching ordered data—the _binary search_. # Binary Search When the data you are searching for are sorted, a more efficient search than the sequential search is the binary search. To understand how binary search works, imagine you are playing a number-guessing game where the possible number is between 1 and 100, and you have to guess the number as chosen by a friend. According to the rules, for every guess you make, your friend has three responses: 1. The guess is correct. 2. The guess is too high. 3. The guess is too low. Following these rules, the best strategy is to choose the number 50 as your first guess. If that guess is too high, choose 25. If 50 is too low, you should guess 75. For each guess, you choose a midpoint by adjusting the lower range or the upper range of the numbers (depending on whether your guess is too low or too high). This midpoint becomes your new guess. 
As long as you follow this strategy, you will guess the correct number in the minimum possible number of guesses. Figure 13-2 demonstrates how this strategy works if the number to be guessed is 82. We can implement this strategy as the binary search algorithm. This algorithm only works on a sorted data set. Here is the algorithm: 1. Set a lower bound to the first position of the array (0). 2. Set an upper bound to the last element of the array (length of array minus 1). 3. While the lower bound is less than or equal to the upper bound, do the following steps: 1. Set the midpoint as (lower bound plus upper bound) divided by 2, rounded down. 2. If the midpoint element is less than the data being searched for, set a new lower bound to the midpoint plus 1. 3. If the midpoint element is greater than the data being searched for, set a new upper bound to the midpoint minus 1. 4. Otherwise, return the midpoint as the found element. ###### Figure 13-2. Binary search algorithm applied to guessing a number Example 13-12 shows the JavaScript definition for the binary search algorithm, along with a program to test the definition. The `dispArr()` from earlier is also used. ##### Example 13-12.
Using the binary search algorithm function binSearch(arr, data) { var upperBound = arr.length-1; var lowerBound = 0; while (lowerBound <= upperBound) { var mid = Math.floor((upperBound + lowerBound) / 2); if (arr[mid] < data) { lowerBound = mid + 1; } else if (arr[mid] > data) { upperBound = mid - 1; } else { return mid; } } return -1; } function dispArr(arr) { for (var i = 0; i < arr.length; ++i) { putstr(arr[i] + " "); if (i % 10 == 9) { putstr("\n"); } } if (i % 10 != 0) { putstr("\n"); } } var nums = []; for (var i = 0; i < 100; ++i) { nums[i] = Math.floor(Math.random() * 101); } insertionsort(nums); dispArr(nums); print(); putstr("Enter a value to search for: "); var val = parseInt(readline()); var retVal = binSearch(nums, val); if (retVal >= 0) { print("Found " + val + " at position " + retVal); } else { print(val + " is not in array."); } Here is the output from one run of the program: 1 2 3 5 6 6 6 6 7 7 7 9 9 12 14 17 17 20 21 22 25 26 26 26 29 29 33 36 37 37 37 37 37 39 39 40 41 41 42 43 43 44 44 45 45 45 45 46 46 47 47 47 48 49 51 51 58 60 60 61 61 63 63 64 64 65 67 71 72 74 74 74 76 77 77 78 79 80 80 80 82 82 83 84 85 85 85 85 86 86 87 88 91 91 91 94 95 96 99 100 Enter a value to search for: 72 Found 72 at position 68 It will be interesting to watch the function as it works its way through the search space looking for the value specified, so in Example 13-13, let's add a statement to the `binSearch()` function that displays the midpoint each time it is recalculated: ##### Example 13-13. 
`binSearch()` displaying the midpoint value

    function binSearch(arr, data) {
        var upperBound = arr.length-1;
        var lowerBound = 0;
        while (lowerBound <= upperBound) {
            var mid = Math.floor((upperBound + lowerBound) / 2);
            print("Current midpoint: " + mid);
            if (arr[mid] < data) {
                lowerBound = mid + 1;
            }
            else if (arr[mid] > data) {
                upperBound = mid - 1;
            }
            else {
                return mid;
            }
        }
        return -1;
    }

Now let's run the program again:

    0 1 4 5 7 7 8 9 9 11
    12 12 14 15 16 17 17 18 20 22
    24 25 26 27 28 30 30 32 33 33
    33 33 33 34 36 36 37 37 41 42
    43 44 45 48 52 52 52 53 53 55
    56 56 56 58 60 60 60 62 62 63
    64 66 66 66 66 66 68 68 72 73
    73 73 73 74 74 75 77 78 78 81
    81 82 82 83 83 85 86 86 88 89
    89 93 93 94 96 96 96 96 99 100
    Enter a value to search for: 66
    Current midpoint: 49
    Current midpoint: 74
    Current midpoint: 61
    Found 66 at position 61

From this output, we see that the original midpoint value was 49. The element at that position is too small, since we are searching for 66, so the next midpoint is calculated to be 74. The element there is too large, so a new midpoint, 61, is calculated, and that position holds the element we are searching for, so the search is over.

## Counting Occurrences

When the `binSearch()` function finds a value, if there are other occurrences of the same value in the data set, the function will be positioned in the immediate vicinity of other like values. In other words, other occurrences of the same value will be either to the immediate left or to the immediate right of the position returned. If this fact isn't readily apparent to you, run the `binSearch()` function several times and take note of the position of the found value returned by the function.
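The following standalone sketch makes this concrete (it uses `console.log` so it runs outside the book's shell environment, and the array values are made up for illustration): binary search probes the middle of the array first, so when the sought value occupies a run in the middle, the function lands inside that run rather than at its start.

```javascript
// Same binSearch() as Example 13-12, with console.log in place of print().
function binSearch(arr, data) {
    var upperBound = arr.length - 1;
    var lowerBound = 0;
    while (lowerBound <= upperBound) {
        var mid = Math.floor((upperBound + lowerBound) / 2);
        if (arr[mid] < data) {
            lowerBound = mid + 1;
        } else if (arr[mid] > data) {
            upperBound = mid - 1;
        } else {
            return mid;
        }
    }
    return -1;
}

// Sorted array with a run of duplicates: 37 occupies indices 3, 4, and 5.
var nums = [1, 2, 3, 37, 37, 37, 40, 41, 92];

// The very first probe is index 4 = floor((0 + 8) / 2), which already
// holds 37, so the function returns the middle of the run, not its start.
var pos = binSearch(nums, 37);
console.log("Found 37 at position " + pos); // position 4
```

Which occurrence is returned depends entirely on where the probe sequence happens to land; the function makes no promise of finding the first or the last one.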
Here's an example of a sample run from earlier in this chapter:

    0 1 2 3 5 7 7 8 8 9
    10 11 11 13 13 13 14 14 14 15
    15 18 18 19 19 19 19 20 20 20
    21 22 22 22 23 23 24 25 26 26
    29 31 31 33 37 37 37 38 38 43
    44 44 45 48 48 49 51 52 53 53
    58 59 60 61 61 62 63 64 65 68
    69 70 72 72 74 75 77 77 79 79
    79 83 83 84 84 86 86 86 91 92
    93 93 93 94 95 96 96 97 98 100
    Enter a value to search for: 37
    Found 37 at position 45

If you count the position of each element, the 37 found by the function is the one in the middle of the three occurrences of 37 (positions 44, 45, and 46). This is just the nature of how the `binSearch()` function works. So what does a function that counts the occurrences of values in a data set need to do to make sure that it counts all the occurrences? The easiest solution is to write two loops: one that moves down (to the left of) the found position counting occurrences, and one that moves up (to the right of) the found position counting occurrences. Example 13-14 shows a definition of the `count()` function.

##### Example 13-14. The `count()` function

    function count(arr, data) {
        var count = 0;
        var position = binSearch(arr, data);
        if (position > -1) {
            ++count;
            for (var i = position-1; i >= 0; --i) {
                if (arr[i] == data) {
                    ++count;
                }
                else {
                    break;
                }
            }
            for (var i = position+1; i < arr.length; ++i) {
                if (arr[i] == data) {
                    ++count;
                }
                else {
                    break;
                }
            }
        }
        return count;
    }

The function starts by calling the `binSearch()` function to search for the specified value. If the value is found in the data set, the function begins counting occurrences using two `for` loops. The first loop works its way down the array, counting occurrences of the found value and stopping when the next value in the array doesn't match the found value. (Note that the loop condition is `i >= 0`, not `i > 0`, so that an occurrence in the very first position of the array is not missed.) The second `for` loop works its way up the array in the same manner. Example 13-15 is the complete application demonstrating how to use `count()` and the other functions we've covered to this point.
##### Example 13-15. Using the `count()` function

    function binSearch(arr, data) {
        var upperBound = arr.length-1;
        var lowerBound = 0;
        while (lowerBound <= upperBound) {
            var mid = Math.floor((upperBound + lowerBound) / 2);
            if (arr[mid] < data) {
                lowerBound = mid + 1;
            }
            else if (arr[mid] > data) {
                upperBound = mid - 1;
            }
            else {
                return mid;
            }
        }
        return -1;
    }

    function count(arr, data) {
        var count = 0;
        var position = binSearch(arr, data);
        if (position > -1) {
            ++count;
            for (var i = position-1; i >= 0; --i) {
                if (arr[i] == data) {
                    ++count;
                }
                else {
                    break;
                }
            }
            for (var i = position+1; i < arr.length; ++i) {
                if (arr[i] == data) {
                    ++count;
                }
                else {
                    break;
                }
            }
        }
        return count;
    }

    function insertionsort(arr) {
        var temp, inner;
        for (var outer = 1; outer <= arr.length-1; ++outer) {
            temp = arr[outer];
            inner = outer;
            while (inner > 0 && (arr[inner-1] >= temp)) {
                arr[inner] = arr[inner-1];
                --inner;
            }
            arr[inner] = temp;
        }
    }

    function dispArr(arr) {
        for (var i = 0; i < arr.length; ++i) {
            putstr(arr[i] + " ");
            if (i % 10 == 9) {
                putstr("\n");
            }
        }
        if (i % 10 != 0) {
            putstr("\n");
        }
    }

    var nums = [];
    for (var i = 0; i < 100; ++i) {
        nums[i] = Math.floor(Math.random() * 101);
    }
    insertionsort(nums);
    dispArr(nums);
    print();
    putstr("Enter a value to count: ");
    var val = parseInt(readline());
    var retVal = count(nums, val);
    print("Found " + retVal + " occurrences of " + val + ".");

(As in Example 13-14, the backward loop uses `i >= 0` so that an occurrence in position 0 of the array is counted.) Here is a sample run of the program:

    2 4 4 6 6 6 7 8 9 12
    14 16 18 18 19 19 19 20 21 21
    22 23 23 24 26 29 30 32 35 36
    37 38 40 40 40 41 41 42 44 44
    49 49 49 51 51 52 53 53 54 54
    55 55 56 57 57 57 57 58 58 61
    61 62 63 64 66 68 68 68 68 71
    73 76 76 77 77 78 78 79 79 79
    80 81 81 82 85 87 89 89 91 91
    92 93 93 94 94 95 96 96 99 100
    Enter a value to count: 58
    Found 2 occurrences of 58.
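As an aside, the two linear scans in `count()` are simple, but when a value occurs many times they do work proportional to the number of occurrences. A common refinement (a sketch of an alternative, not code from this chapter; the names `findBound()` and `fastCount()` are made up here) replaces the scans with two modified binary searches that locate the leftmost and rightmost occurrences, keeping the whole count logarithmic:

```javascript
// findBound() returns the leftmost index of data when findFirst is true,
// the rightmost index when it is false, or -1 if data is absent.
function findBound(arr, data, findFirst) {
    var lo = 0, hi = arr.length - 1, result = -1;
    while (lo <= hi) {
        var mid = Math.floor((lo + hi) / 2);
        if (arr[mid] < data) {
            lo = mid + 1;
        } else if (arr[mid] > data) {
            hi = mid - 1;
        } else {
            result = mid;        // remember this hit...
            if (findFirst) {
                hi = mid - 1;    // ...then keep searching to the left
            } else {
                lo = mid + 1;    // ...or keep searching to the right
            }
        }
    }
    return result;
}

function fastCount(arr, data) {
    var first = findBound(arr, data, true);
    if (first < 0) { return 0; }
    return findBound(arr, data, false) - first + 1;
}

var nums = [2, 4, 4, 6, 6, 6, 7, 58, 58, 100]; // sorted sample data
console.log(fastCount(nums, 6));  // 3
console.log(fastCount(nums, 58)); // 2
console.log(fastCount(nums, 5));  // 0
```

The only change from the plain `binSearch()` is that a match does not end the loop; the search narrows toward one end of the run instead, so each boundary is found in O(log n) steps regardless of how long the run is.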
Here is another sample run of the program in Example 13-15:

    0 0 0 1 2 3 4 5 9 9
    10 11 11 11 11 13 13 15 16 17
    18 19 20 21 21 23 23 26 28 29
    29 32 33 34 35 35 36 37 37 37
    38 40 40 41 41 42 44 44 46 47
    47 47 48 48 50 51 53 54 56 57
    60 62 62 65 65 67 69 69 70 74
    74 75 75 77 78 79 79 81 82 83
    86 88 88 88 88 89 89 89 89 89
    90 90 91 91 92 92 97 97 98 99
    Enter a value to count: 71
    Found 0 occurrences of 71.

# Searching Textual Data

Up to this point, all of our searches have been conducted on numeric data. We can also use the algorithms discussed in this chapter with textual data. First, let's define the data set we are using:

The nationalism of Hamilton was undemocratic. The democracy of Jefferson was, in the beginning, provincial. The historic mission of uniting nationalism and democracy was in the course of time given to new leaders from a region beyond the mountains, peopled by men and women from all sections and free from those state traditions which ran back to the early days of colonization. The voice of the democratic nationalism nourished in the West was heard when Clay of Kentucky advocated his American system of protection for industries; when Jackson of Tennessee condemned nullification in a ringing proclamation that has taken its place among the great American state papers; and when Lincoln of Illinois, in a fateful hour, called upon a bewildered people to meet the supreme test whether this was a nation destined to survive or to perish. And it will be remembered that Lincoln's party chose for its banner that earlier device--Republican--which Jefferson had made a sign of power. The "rail splitter" from Illinois united the nationalism of Hamilton with the democracy of Jefferson, and his appeal was clothed in the simple language of the people, not in the sonorous rhetoric which Webster learned in the schools.

This paragraph of text was taken from the _big.txt_ file found on Peter Norvig's website.
This file is stored as a text file ( _.txt_ ) that is located in the same directory as the JavaScript interpreter ( _js.exe_ ). To read the file into a program, we need just one line of code:

    var words = read("words.txt").split(" ");

This line stores the text in an array by reading in the text from the file—`read("words.txt")`—and then breaking the file up into words using the `split()` function, with the space between each word as the delimiter. This code is not perfect, because punctuation is left in the file and is stored with the nearest word, but it will suffice for our purposes.

Once the file is stored in an array, we can begin searching through the array to find words. Let's begin with a sequential search for the word "rhetoric," which is in the paragraph close to the end of the file. Let's also time the search so we can compare it with a binary search. We covered timing code in Chapter 12 if you want to go back and review that material. Example 13-16 shows the code.

##### Example 13-16. Searching a text file using `seqSearch()`

    function seqSearch(arr, data) {
        for (var i = 0; i < arr.length; ++i) {
            if (arr[i] == data) {
                return i;
            }
        }
        return -1;
    }

    var words = read("words.txt").split(" ");
    var word = "rhetoric";
    var start = new Date().getTime();
    var position = seqSearch(words, word);
    var stop = new Date().getTime();
    var elapsed = stop - start;
    if (position >= 0) {
        print("Found " + word + " at position " + position + ".");
        print("Sequential search took " + elapsed + " milliseconds.");
    }
    else {
        print(word + " is not in the file.");
    }

The output from this program is:

    Found rhetoric at position 174.
    Sequential search took 1 milliseconds.

Even though binary search is faster, we won't be able to measure any real difference between `seqSearch()` and `binSearch()` on a data set this small, but we will run the program using binary search anyway to ensure that the `binSearch()` function works correctly with text. Example 13-17 shows the code and the output.

##### Example 13-17.
Searching textual data with `binSearch()`

    function binSearch(arr, data) {
        var upperBound = arr.length-1;
        var lowerBound = 0;
        while (lowerBound <= upperBound) {
            var mid = Math.floor((upperBound + lowerBound) / 2);
            if (arr[mid] < data) {
                lowerBound = mid + 1;
            }
            else if (arr[mid] > data) {
                upperBound = mid - 1;
            }
            else {
                return mid;
            }
        }
        return -1;
    }

    function insertionsort(arr) {
        var temp, inner;
        for (var outer = 1; outer <= arr.length-1; ++outer) {
            temp = arr[outer];
            inner = outer;
            while (inner > 0 && (arr[inner-1] >= temp)) {
                arr[inner] = arr[inner-1];
                --inner;
            }
            arr[inner] = temp;
        }
    }

    var words = read("words.txt").split(" ");
    insertionsort(words);
    var word = "rhetoric";
    var start = new Date().getTime();
    var position = binSearch(words, word);
    var stop = new Date().getTime();
    var elapsed = stop - start;
    if (position >= 0) {
        print("Found " + word + " at position " + position + ".");
        print("Binary search took " + elapsed + " milliseconds.");
    }
    else {
        print(word + " is not in the file.");
    }

The result of the application was:

    Found rhetoric at position 125.
    Binary search took 0 milliseconds.

Notice that the word is found at a different position this time (125 rather than 174) because the array had to be sorted before the binary search could run. In this age of superfast processors, it is harder and harder to measure the difference between sequential search and binary search on anything but the largest data sets. However, binary search is provably faster than sequential search on large data sets, because the algorithm eliminates half of the remaining search space (the elements of the array) with each iteration of the loop that controls it.

# Exercises

1. The sequential search algorithm always finds the first occurrence of an element in a data set. Rewrite the algorithm so that the last occurrence of an element is returned.
2. Compare the time it takes to perform a sequential search with the total time it takes to both sort a data set using insertion sort and perform a binary search on the data set. What are your results?
3.
Create a function that finds the second-smallest element in a data set. Can you generalize the function definition for the third-smallest, fourth-smallest, and so on? Test your functions with a data set of at least 1,000 elements. Test on both numbers and text. # Chapter 14. Advanced Algorithms In this chapter we'll look at two advanced topics: dynamic programming and greedy algorithms. _Dynamic programming_ is a technique that is sometimes considered the opposite of recursion. Where a recursive solution starts at the top and breaks the problem down, solving all small problems until the complete problem is solved, a dynamic programming solution starts at the bottom, solving small problems and combining them to form an overall solution to the big problem. This chapter departs from most of the other chapters in this book in that we don't really discuss an organizing data structure for working with these algorithms other than the array. Sometimes, a simple data structure is enough to solve a problem if the algorithm you are using is powerful enough. A _greedy algorithm_ is an algorithm that looks for "good solutions" as it works toward the complete solution. These good solutions, called _local optima_ , will hopefully lead to the correct final solution, called the _global optimum_. The term "greedy" comes from the fact that these algorithms take whatever solution looks best at the time. Often, greedy algorithms are used when it is almost impossible to find a complete solution, owing to time and/or space considerations, and yet a suboptimal solution is acceptable. A good source for more information on advanced algorithms and data structures is _Introduction to Algorithms_ (MIT Press). # Dynamic Programming Recursive solutions to problems are often elegant but inefficient. Many languages, including JavaScript, cannot efficiently translate recursive code to machine code, resulting in an inefficient though elegant computer program. 
This is not to say that using recursion is bad, per se, just that some imperative and object-oriented programming languages do not do a good job implementing recursion, since they do not feature recursion as a high-priority programming technique. Many programming problems that have recursive solutions can be rewritten using the techniques of dynamic programming. A dynamic programming solution builds a table, usually using an array, that holds the results of the many subsolutions as the problem is broken down. When the algorithm is complete, the solution is found in a distinct spot in the table, as we'll see in the Fibonacci example next.

## A Dynamic Programming Example: Computing Fibonacci Numbers

The Fibonacci numbers can be defined by the following sequence:

    0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, ...

As you can tell, the sequence is generated by adding the previous two numbers in the sequence together. This sequence has a long history dating back to at least 700 AD and is named after the Italian mathematician Leonardo Fibonacci, who in 1202 used the sequence to describe the idealized growth of a rabbit population. There is a simple recursive solution you can use to generate any specific number in the sequence. Here is the JavaScript code for a Fibonacci function:

    function recurFib(n) {
        if (n < 2) {
            return n;
        }
        else {
            return recurFib(n-1) + recurFib(n-2);
        }
    }

    print(recurFib(10)); // displays 55

The problem with this function is that it is extremely inefficient. We can see exactly how inefficient it is by examining the recursion tree shown in Figure 14-1 for `fib(5)`.

###### Figure 14-1. Recursion tree generated by recursive Fibonacci function

It is clear that too many values are recomputed during the recursive calls. If the compiler could keep track of the values that are already computed, the function would not be nearly so inefficient. We can design a much more efficient algorithm using dynamic programming techniques.
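Before turning to the bottom-up rewrite, it is worth noting a halfway point between the two styles (an aside, not from the chapter's text): memoization keeps the recursive shape of the function but caches each computed value so nothing is ever recomputed. The name `memoFib()` is made up for this sketch:

```javascript
// Sketch: memoized recursive Fibonacci. Each fib(n) is computed at most
// once; repeated requests are served from the memo table.
function memoFib(n, memo) {
    memo = memo || {};   // cache object, shared by all recursive calls
    if (n < 2) {
        return n;
    }
    if (memo[n] === undefined) {
        memo[n] = memoFib(n - 1, memo) + memoFib(n - 2, memo);
    }
    return memo[n];
}

console.log(memoFib(10)); // 55, just like recurFib(10)
console.log(memoFib(40)); // 102334155, painfully slow with plain recursion
```

The cache turns the exponential recursion tree into a linear chain of calls, which is exactly the saving the dynamic programming version achieves by building its table bottom-up.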
An algorithm designed using dynamic programming starts by solving the simplest subproblem it can, then uses that solution to solve more and more complex subproblems until the entire problem is solved. The solutions to each subproblem are typically stored in an array for easy access. We can demonstrate the essence of dynamic programming by examining the dynamic programming solution to computing Fibonacci numbers, shown in the following function definition:

    function dynFib(n) {
        var val = [];
        for (var i = 0; i <= n; ++i) {
            val[i] = 0;
        }
        if (n == 1 || n == 2) {
            return 1;
        }
        else {
            val[1] = 1;
            val[2] = 2;
            for (var i = 3; i <= n; ++i) {
                val[i] = val[i-1] + val[i-2];
            }
            return val[n-1];
        }
    }

The `val` array is where we store intermediate results. The first part of the `if` statement returns the value `1` if the Fibonacci number to be computed is `1` or `2`. Otherwise, the values `1` and `2` are stored in positions `1` and `2` of `val`; with this arrangement, `val[i]` holds the (i+1)th Fibonacci number. The `for` loop runs from `3` to the input argument, assigning each array element the sum of the previous two array elements, and when the loop is complete, the requested Fibonacci number sits in position `n-1` of the array, which is the value returned by the function. The arrangement of the Fibonacci sequence in the `val` array is shown here:

    val[0] = 0
    val[1] = 1
    val[2] = 2
    val[3] = 3
    val[4] = 5
    val[5] = 8
    val[6] = 13

Let's compare the time it takes to compute a Fibonacci number using both the recursive function and the dynamic programming function. Example 14-1 lists the code for the timing test.

##### Example 14-1.
Timing test for recursive and dynamic programming versions of Fibonacci function

    function recurFib(n) {
        if (n < 2) {
            return n;
        }
        else {
            return recurFib(n-1) + recurFib(n-2);
        }
    }

    function dynFib(n) {
        var val = [];
        for (var i = 0; i <= n; ++i) {
            val[i] = 0;
        }
        if (n == 1 || n == 2) {
            return 1;
        }
        else {
            val[1] = 1;
            val[2] = 2;
            for (var i = 3; i <= n; ++i) {
                val[i] = val[i-1] + val[i-2];
            }
            return val[n-1];
        }
    }

    var start = new Date().getTime();
    print(recurFib(10));
    var stop = new Date().getTime();
    print("recursive time - " + (stop-start) + " milliseconds");
    print();
    start = new Date().getTime();
    print(dynFib(10));
    stop = new Date().getTime();
    print("dynamic programming time - " + (stop-start) + " milliseconds");

The output from this program is:

    55
    recursive time - 0 milliseconds

    55
    dynamic programming time - 0 milliseconds

If we run the program again, this time computing `fib(20)`, we get:

    6765
    recursive time - 2 milliseconds

    6765
    dynamic programming time - 0 milliseconds

Finally, we compute `fib(30)` and we get:

    832040
    recursive time - 42 milliseconds

    832040
    dynamic programming time - 0 milliseconds

Clearly, the dynamic programming solution is much more efficient than the recursive solution when we compute anything over `fib(20)`. Finally, you may have already figured out that it's not necessary to use an array when computing a Fibonacci number using the iterative solution. The array was used because dynamic programming algorithms usually store intermediate results in an array. Here is the definition of an iterative Fibonacci function that doesn't use an array:

    function iterFib(n) {
        var last = 1;
        var nextLast = 1;
        var result = 1;
        for (var i = 2; i < n; ++i) {
            result = last + nextLast;
            nextLast = last;
            last = result;
        }
        return result;
    }

This version of the function will compute Fibonacci numbers as efficiently as the dynamic programming version.
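Since we now have three different implementations, a quick sanity check (an addition for this edition, not part of the original example; it repeats the three definitions so it is self-contained and runs with `console.log` outside the book's shell) confirms they agree for every `n` from 1 through 25:

```javascript
function recurFib(n) {
    if (n < 2) { return n; }
    return recurFib(n - 1) + recurFib(n - 2);
}

function dynFib(n) {
    var val = [];
    for (var i = 0; i <= n; ++i) { val[i] = 0; }
    if (n == 1 || n == 2) { return 1; }
    val[1] = 1;
    val[2] = 2;                  // with this layout, val[i] holds fib(i+1)
    for (var i = 3; i <= n; ++i) { val[i] = val[i - 1] + val[i - 2]; }
    return val[n - 1];
}

function iterFib(n) {
    var last = 1, nextLast = 1, result = 1;
    for (var i = 2; i < n; ++i) {
        result = last + nextLast;
        nextLast = last;
        last = result;
    }
    return result;
}

// Cross-check all three implementations against each other.
for (var n = 1; n <= 25; ++n) {
    if (recurFib(n) !== dynFib(n) || dynFib(n) !== iterFib(n)) {
        throw new Error("mismatch at n = " + n);
    }
}
console.log("all three implementations agree for n = 1..25");
```

The check starts at `n = 1` because `dynFib()` and `iterFib()`, as written, do not handle the `n = 0` case (they return 1 where `recurFib(0)` returns 0), which is worth keeping in mind if you reuse them.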
## Finding the Longest Common Substring

Another problem that lends itself to a dynamic programming solution is finding the longest common substring in two strings. For example, in the words "raven" and "havoc," the longest common substring is "av." A common use of finding the longest common substring is in genetics, where DNA molecules are described using the first letter of the nucleobase of each nucleotide.

We'll start with the brute-force solution to this problem. Given two strings, A and B, we can find the longest common substring by starting at the first character of A and comparing each character to the corresponding character of B. When a nonmatch is found, move to the second character of A and start over with the first character of B, and so on.

There is a better solution using dynamic programming. The algorithm uses a two-dimensional array to store the results of comparing characters from the two strings. Initially, each element of the array is set to 0. Each time the characters at a given pair of positions match, the element at the corresponding row and column of the array is set to 1 plus the value stored in the diagonally previous element; otherwise, the element stays set to 0. Along the way, a variable keeps track of the length of the longest match found so far. This variable, along with an indexing variable, is used to retrieve the longest common substring once the algorithm is finished. Example 14-2 presents the complete definition of the algorithm. After the code, we'll explain how it works.

##### Example 14-2.
A function for determining the longest common substring of two strings

    function lcs(word1, word2) {
        var max = 0;
        var index = 0;
        var lcsarr = new Array(word1.length+1);
        for (var i = 0; i <= word1.length; ++i) {
            lcsarr[i] = new Array(word2.length+1);
            for (var j = 0; j <= word2.length; ++j) {
                lcsarr[i][j] = 0;
            }
        }
        for (var i = 0; i <= word1.length; ++i) {
            for (var j = 0; j <= word2.length; ++j) {
                if (i == 0 || j == 0) {
                    lcsarr[i][j] = 0;
                }
                else {
                    if (word1[i-1] == word2[j-1]) {
                        lcsarr[i][j] = lcsarr[i-1][j-1] + 1;
                    }
                    else {
                        lcsarr[i][j] = 0;
                    }
                }
                if (max < lcsarr[i][j]) {
                    max = lcsarr[i][j];
                    index = i;
                }
            }
        }
        var str = "";
        if (max == 0) {
            return "";
        }
        else {
            for (var i = index-max; i < index; ++i) {
                str += word1[i];
            }
            return str;
        }
    }

The first section of the function sets up a couple of variables and the two-dimensional array. Most languages have a simple declaration for two-dimensional arrays, but JavaScript makes you jump through a few hoops by declaring an array inside an array. The nested `for` loops initialize the array. (The loop bounds run to `word1.length` and `word2.length`, since an array of `length+1` elements has indices `0` through `length`.) Here's the first section:

    function lcs(word1, word2) {
        var max = 0;
        var index = 0;
        var lcsarr = new Array(word1.length+1);
        for (var i = 0; i <= word1.length; ++i) {
            lcsarr[i] = new Array(word2.length+1);
            for (var j = 0; j <= word2.length; ++j) {
                lcsarr[i][j] = 0;
            }
        }

Now here is the code for the second section of the function:

    for (var i = 0; i <= word1.length; ++i) {
        for (var j = 0; j <= word2.length; ++j) {
            if (i == 0 || j == 0) {
                lcsarr[i][j] = 0;
            }
            else {
                if (word1[i-1] == word2[j-1]) {
                    lcsarr[i][j] = lcsarr[i-1][j-1] + 1;
                }
                else {
                    lcsarr[i][j] = 0;
                }
            }
            if (max < lcsarr[i][j]) {
                max = lcsarr[i][j];
                index = i;
            }
        }
    }

The second section builds the table that keeps track of character matches. The elements in the first row and first column of the array are always set to `0`. Then, if the corresponding characters of the two strings match, the current array element is set to `1` plus the value stored in the diagonally previous array element.
For example, if the two strings are "back" and "cace," and the algorithm is comparing the second characters ("a" and "a"), a `1` is placed in the current element, since the diagonally previous comparison ("b" and "c") wasn't a match and a `0` is stored in that element (0 + 1). The algorithm then moves to the next position, and since "c" matches in both strings, a `2` is placed in the current array element (1 + 1). The last characters of the two strings don't match, so the length of the longest common substring is `2`. Finally, if `max` is less than the value now stored in the current array element, `max` is assigned the value of the current array element, and `index` is set to the current value of `i`. These two variables are used in the last section to determine where to start retrieving the longest common substring.

For example, given the two strings "abbcc" and "dbbcc," here is the state of the `lcsarr` array once the algorithm has finished:

    0 0 0 0 0
    0 0 0 0 0
    0 1 1 0 0
    0 1 2 0 0
    0 0 0 3 1
    0 0 0 1 4

The last section builds the longest common substring by determining where to start. The value of `index` minus `max` is the starting point of the substring in `word1`, and `index` is the stopping point:

    var str = "";
    if (max == 0) {
        return "";
    }
    else {
        for (var i = index-max; i < index; ++i) {
            str += word1[i];
        }
        return str;
    }

Given again the two strings "abbcc" and "dbbcc," the program returns "bbcc."

## The Knapsack Problem: A Recursive Solution

A classic problem in the study of algorithms is the knapsack problem. Imagine you are a safecracker and you break open a safe filled with all sorts of treasure, but all you have to carry the loot is a small backpack. The items in the safe differ in both size and value. You want to maximize your take by filling the backpack with those items that are worth the most. There is, of course, a brute-force solution to this problem, but the dynamic programming solution is more efficient.
The key idea to solving the knapsack problem with a dynamic programming solution is to calculate the maximum value for every item up to the total capacity of the knapsack. If the safe in our example has five items, the items have a size of 3, 4, 7, 8, and 9, respectively, and values of 4, 5, 10, 11, and 13, respectively, and the knapsack has a capacity of 16, then the proper solution is to pick items 3 and 5 with a total size of 16 and a total value of 23. The code for solving this problem is quite short, but it won't make much sense without the context of the whole program, so let's take a look at the program to solve the knapsack problem. Our solution uses a recursive function:

    function max(a, b) {
        return (a > b) ? a : b;
    }

    function knapsack(capacity, size, value, n) {
        if (n == 0 || capacity == 0) {
            return 0;
        }
        if (size[n-1] > capacity) {
            return knapsack(capacity, size, value, n-1);
        }
        else {
            return max(value[n-1] + knapsack(capacity-size[n-1], size, value, n-1),
                       knapsack(capacity, size, value, n-1));
        }
    }

    var value = [4,5,10,11,13];
    var size = [3,4,7,8,9];
    var capacity = 16;
    var n = 5;
    print(knapsack(capacity, size, value, n));

The output from this program is:

    23

The problem with this recursive solution to the knapsack problem is that, because it is recursive, many subproblems are revisited during the course of the recursion. A better solution to the knapsack problem is to use a dynamic programming technique to solve the problem, as shown below.

## The Knapsack Problem: A Dynamic Programming Solution

Whenever we find a recursive solution to a problem, we can usually rewrite the solution using a dynamic programming technique and end up with a more efficient program. The knapsack problem can definitely be rewritten in a dynamic programming manner. All we have to do is use an array to store temporary solutions until we get to the final solution. The following program demonstrates how the knapsack problem we encountered earlier can be solved using dynamic programming.
The optimum value for the given constraints is, again, 23. Example 14-3 shows the code.

##### Example 14-3. A dynamic programming solution to the knapsack problem

    function max(a, b) {
        return (a > b) ? a : b;
    }

    function dKnapsack(capacity, size, value, n) {
        var K = [];
        for (var i = 0; i <= n; i++) {
            K[i] = [];
        }
        for (var i = 0; i <= n; i++) {
            for (var w = 0; w <= capacity; w++) {
                if (i == 0 || w == 0) {
                    K[i][w] = 0;
                }
                else if (size[i-1] <= w) {
                    K[i][w] = max(value[i-1] + K[i-1][w-size[i-1]], K[i-1][w]);
                }
                else {
                    K[i][w] = K[i-1][w];
                }
                putstr(K[i][w] + " ");
            }
            print();
        }
        return K[n][capacity];
    }

    var value = [4,5,10,11,13];
    var size = [3,4,7,8,9];
    var capacity = 16;
    var n = 5;
    print(dKnapsack(capacity, size, value, n));

The table `K` needs one row per item plus a zero row, so it is allocated with `n+1` rows, each of which is then filled in column by column up to the capacity. As the program runs, it displays the values being stored in the table as the algorithm works toward a solution. Here is the output (the final `23` on its own line is the value returned by the function and printed by the program):

    0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
    0 0 0 4 4 4 4 4 4 4 4 4 4 4 4 4 4
    0 0 0 4 5 5 5 9 9 9 9 9 9 9 9 9 9
    0 0 0 4 5 5 5 10 10 10 14 15 15 15 19 19 19
    0 0 0 4 5 5 5 10 11 11 14 15 16 16 19 21 21
    0 0 0 4 5 5 5 10 11 13 14 15 17 18 19 21 23
    23

The optimal solution to the problem is found in the last cell of the two-dimensional table, in the bottom-right corner. You will also notice that using this technique does not tell you which items to pick to maximize the value, but from inspection, the solution is to pick items 3 and 5, since the capacity is 16, item 3 has a size 7 (value 10), and item 5 has a size 9 (value 13).

# Greedy Algorithms

In the previous sections, we examined dynamic programming algorithms that can be used to optimize solutions that are found using a suboptimal algorithm—solutions that are often based on recursion. For many problems, resorting to dynamic programming is overkill and a simpler algorithm will suffice. One example of a simpler algorithm is the _greedy_ algorithm.
A greedy algorithm is one that always chooses the best solution at the time, with no regard to how that choice will affect future choices. Using a greedy algorithm generally indicates that the implementer hopes that the series of "best" local choices made will lead to a final "best" choice. If so, then the algorithm has produced an optimal solution; if not, a suboptimal solution has been found. However, for many problems, it is just not worth the trouble to find an optimal solution, so using a greedy algorithm works just fine. ## A First Greedy Algorithm Example: The Coin-Changing Problem A classic example of following a greedy algorithm is making change. Let's say you buy some items at the store and the change from your purchase is 63 cents. How does the clerk determine the change to give you? If the clerk follows a greedy algorithm, he or she gives you two quarters, a dime, and three pennies. That is the smallest number of coins that will equal 63 cents without using half-dollars. Example 14-4 demonstrates a program that uses a greedy algorithm to make change (under the assumption that the amount of change is less than one dollar). ##### Example 14-4. 
A greedy algorithm for solving the coin-changing problem

    function makeChange(origAmt, coins) {
        var remainAmt = 0;
        if (origAmt % .25 < origAmt) {
            coins[3] = parseInt(origAmt / .25);
            remainAmt = origAmt % .25;
            origAmt = remainAmt;
        }
        if (origAmt % .1 < origAmt) {
            coins[2] = parseInt(origAmt / .1);
            remainAmt = origAmt % .1;
            origAmt = remainAmt;
        }
        if (origAmt % .05 < origAmt) {
            coins[1] = parseInt(origAmt / .05);
            remainAmt = origAmt % .05;
            origAmt = remainAmt;
        }
        coins[0] = parseInt(origAmt / .01);
    }

    function showChange(coins) {
        if (coins[3] > 0) {
            print("Number of quarters - " + coins[3] + " - " + coins[3] * .25);
        }
        if (coins[2] > 0) {
            print("Number of dimes - " + coins[2] + " - " + coins[2] * .10);
        }
        if (coins[1] > 0) {
            print("Number of nickels - " + coins[1] + " - " + coins[1] * .05);
        }
        if (coins[0] > 0) {
            print("Number of pennies - " + coins[0] + " - " + coins[0] * .01);
        }
    }

    var origAmt = .63;
    var coins = [];
    makeChange(origAmt, coins);
    showChange(coins);

The output from this program is:

    Number of quarters - 2 - 0.5
    Number of dimes - 1 - 0.1
    Number of pennies - 3 - 0.03

The `makeChange()` function starts with the highest denomination, quarters, and tries to make as much change with them as possible. The total number of quarters is stored in the `coins` array. Once the amount left becomes less than a quarter, the algorithm moves to dimes, making as much change with dimes as possible. The total number of dimes is then stored in the `coins` array. The algorithm then moves to nickels and pennies in the same manner. This solution always finds the optimal solution as long as the normal coin denominations are used and all the possible denominations are available. Not being able to use one particular denomination, such as nickels, can lead to a suboptimal solution.

## A Greedy Algorithm Solution to the Knapsack Problem

Earlier in this chapter we examined the knapsack problem and provided both recursive and dynamic programming solutions for it.
In this section, we'll examine how we can implement a greedy algorithm to solve this problem. A greedy algorithm can be used to solve the knapsack problem if the items we are placing in the knapsack are continuous in nature. In other words, the items must be things that cannot be counted discretely, such as cloth or gold dust. Knapsack problems with continuous items are called fractional knapsack problems. If we are using continuous items, we can simply divide the unit price by the unit volume to determine the value of the item. An optimal solution in this case is to place as much of the item with the highest value into the knapsack as possible until the item is depleted or the knapsack is full, followed by as much of the second-highest-value item as possible, and so on. The reason we can't find an optimal greedy solution using discrete items is that we can't put "half a television" into a knapsack. Discrete knapsack problems are known as 0-1 problems, because you must take either all or none of an item.

Here is the algorithm for solving fractional knapsack problems:

1. The knapsack has a capacity _W_ , and the items have values _v_ and weights _w_.
2. Rank the items by value-to-weight ratio ( _v/w_ ).
3. Consider the items in order of decreasing ratio.
4. Take as much of each item as possible.

Table 14-1 gives the weights, values, and ratios for four items.

Table 14-1. Fractional knapsack items

Item | A | B | C | D
---|---|---|---|---
Value | 50 | 140 | 60 | 60
Size | 5 | 20 | 10 | 12
Ratio | 10 | 7 | 6 | 5

Given the table above, and assuming that the knapsack being used has a capacity of 30, the optimal solution for the knapsack problem is to take all of item A, all of item B, and half of item C. This combination of items will result in a value of 220.
The code for finding the optimal solution to this knapsack problem is shown below:

```javascript
function ksack(values, weights, capacity) {
    var load = 0;
    var i = 0;
    var w = 0;
    while (load < capacity && i < 4) {
        if (weights[i] <= (capacity - load)) {
            // the whole item fits, so take all of it
            w += values[i];
            load += weights[i];
        } else {
            // only part of the item fits; take the fraction that does
            var r = (capacity - load) / weights[i];
            w += r * values[i];
            load += r * weights[i];
        }
        ++i;
    }
    return w;
}

var items = ["A", "B", "C", "D"];
var values = [50, 140, 60, 60];
var weights = [5, 20, 10, 12];
var capacity = 30;
print(ksack(values, weights, capacity)); // displays 220
```

# Exercises

1. Write a program that uses a brute-force technique to find the longest common substring.
2. Write a program that allows the user to change the constraints of a knapsack problem in order to explore how changing the constraints will change the results of the solution. For example, you can change the capacity of the knapsack, the values of the items, or the weights of the items. It is probably a good idea to change only one of these constraints at a time.
3. Using the greedy algorithm technique for coin changing, but not allowing the algorithm to use dimes, find the solution for 30 cents. Is this solution optimal?
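The `ksack()` function above returns only the total value. As a further sketch (not part of the book's code; the function name and return shape are my own), a variant can also record what fraction of each item was taken, which is useful when exploring the constraint changes suggested in exercise 2. It assumes the arrays are already sorted by decreasing value/weight ratio:

```javascript
// Sketch of a fractional-knapsack variant that also records the fraction
// of each item taken (1 means the whole item). Assumes items are already
// sorted by decreasing value/weight ratio.
function ksackFractions(values, weights, capacity) {
    var load = 0;
    var total = 0;
    var fractions = [];
    for (var i = 0; i < values.length && load < capacity; ++i) {
        if (weights[i] <= capacity - load) {
            fractions[i] = 1;                       // take the whole item
            total += values[i];
            load += weights[i];
        } else {
            var r = (capacity - load) / weights[i]; // take only what fits
            fractions[i] = r;
            total += r * values[i];
            load = capacity;
        }
    }
    return { total: total, fractions: fractions };
}

var result = ksackFractions([50, 140, 60, 60], [5, 20, 10, 12], 30);
// result.total is 220; result.fractions is [1, 1, 0.5]:
// all of A, all of B, and half of C, matching the worked solution above
```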
traversal, Trees Defined * binary search tree, Traversing a Binary Search Tree * trees * binary trees, Trees Defined * (see also binary trees) * defined, Trees Defined * levels in, Trees Defined * parts of, Trees Defined * trim() function, Reading Text Files * two-dimensional arrays, Two-Dimensional and Multidimensional Arrays-Arrays of Objects * creating, Creating Two-Dimensional Arrays * processing elements of, Processing Two-Dimensional Array Elements ### U * undefined, array elements, Creating Two-Dimensional Arrays * union, Set Operations * union() function, Set class, More Set Operations * universe, Set Definitions * unordered graphs, Graph Definitions * unshift() function, Adding Elements to an Array * update() function, BST class, Counting Occurrences ### V * value-returning functions, defining, Functions * var keyword * in variable declarations, Declaring and Initializing Variables * leaving off when defining variables, Variable Scope * variables * declaring and initializing, Declaring and Initializing Variables * global, Variable Scope * scope, Variable Scope * vertices, Graph Definitions * strongly connected, Graph Definitions * void functions, Functions ### W * while loop, Repetition Constructs # About the Author **Michael McMillan** is an instructor of computer information systems at Pulaski Technical College in North Little Rock, Arkansas. He is also an adjunct instructor of information science at the University of Arkansas at Little Rock. Before moving to academia, he was a programmer/analyst for Arkansas Children's Hospital, where he worked in statistical computing and data analysis. # Colophon The animal on the cover of _Data Structures and Algorithms with JavaScript_ is an Amur hedgehog ( _Erinaceus amurensis_ ), also known as the Chinese hedgehog. This species is one of 14 that can be found worldwide today, and is native to Amur Krai and Primorye in Russia, Manchuria in China, and the Korean Peninsula.
Like most hedgehogs, the Chinese hedgehog prefers tall grasses and undergrowth. In the wild, they feed on worms, centipedes, insects, mice, snails, frogs, and snakes. Named for the distinct noise made as they forage for food, they hunt primarily using their senses of smell and hearing. Their sniff often resembles a pig-like grunt. The Amur hedgehog weighs an average of 1.3 to 2.2 pounds and measures between 5.5 and 12 inches in length, its tail measuring around 1-2 of those inches. As a deterrent to predators (such as birds or wild dogs), the hedgehogs are covered in short, smooth spines. If threatened, the hedgehog rolls up into a ball, leaving only the spines exposed; this is also the position in which the hedgehog sleeps, usually in cool, dark depressions or holes. Hedgehogs are solitary animals, not often socializing with other hedgehogs even when encountered while out foraging for food. The only time hedgehogs socialize is during mating season, after which they go their separate ways, leaving the female hedgehog to raise any young that were conceived. Females are very protective of their young; male hedgehogs have been known to eat their young. The cover image is from source unknown. The cover fonts are URW Typewriter and Guardian Sans. The text font is Adobe Minion Pro; the heading font is Adobe Myriad Condensed; and the code font is Dalton Maag's Ubuntu Mono. 1. Preface 1. Why Study Data Structures and Algorithms 2. What You Need for This Book 3. Organization of the Book 4. Conventions Used in This Book 5. Using Code Examples 6. Safari® Books Online 7. How to Contact Us 8. Content Updates 1. October 20, 2015 9. Acknowledgments 2. 1. The JavaScript Programming Environment and Model 1. The JavaScript Environment 2. JavaScript Programming Practices 1. Declaring and Initializing Variables 2. Arithmetic and Math Library Functions in JavaScript 3. Decision Constructs 4. Repetition Constructs 5. Functions 6. Variable Scope 7. Recursion 3.
Objects and Object-Oriented Programming 4. Summary 3. 2. Arrays 1. JavaScript Arrays Defined 2. Using Arrays 1. Creating Arrays 2. Accessing and Writing Array Elements 3. Creating Arrays from Strings 4. Aggregate Array Operations 3. Accessor Functions 1. Searching for a Value 2. String Representations of Arrays 3. Creating New Arrays from Existing Arrays 4. Mutator Functions 1. Adding Elements to an Array 2. Removing Elements from an Array 3. Adding and Removing Elements from the Middle of an Array 4. Putting Array Elements in Order 5. Iterator Functions 1. Non–Array-Generating Iterator Functions 2. Iterator Functions That Return a New Array 6. Two-Dimensional and Multidimensional Arrays 1. Creating Two-Dimensional Arrays 2. Processing Two-Dimensional Array Elements 3. Jagged Arrays 7. Arrays of Objects 8. Arrays in Objects 9. Exercises 4. 3. Lists 1. A List ADT 2. A List Class Implementation 1. Append: Adding an Element to a List 2. Remove: Removing an Element from a List 3. Find: Finding an Element in a List 4. Length: Determining the Number of Elements in a List 5. toString: Retrieving a List's Elements 6. Insert: Inserting an Element into a List 7. Clear: Removing All Elements from a List 8. Contains: Determining if a Given Value Is in a List 9. Moving To and Retrieving a List Element 10. Iterating Through a List 3. Iterating Through a List 4. A List-Based Application 1. Reading Text Files 2. Using Lists to Manage a Kiosk 5. Exercises 5. 4. Stacks 1. Stack Operations 2. A Stack Implementation 3. Using the Stack Class 1. Multiple Base Conversions 2. Palindromes 3. Demonstrating Recursion 4. Exercises 6. 5. Queues 1. Queue Operations 2. An Array-Based Queue Class Implementation 3. Using the Queue Class: Assigning Partners at a Square Dance 4. Sorting Data with Queues 5. Priority Queues 6. Exercises 7. 6. Linked Lists 1. Shortcomings of Arrays 2. Linked Lists Defined 3. An Object-Based Linked List Design 1. The Node Class 2. The Linked List Class 3. 
Inserting New Nodes 4. Removing Nodes from a Linked List 4. Doubly Linked Lists 5. Circularly Linked Lists 6. Other Linked List Functions 7. Exercises 8. 7. Dictionaries 1. The Dictionary Class 2. Auxiliary Functions for the Dictionary Class 3. Adding Sorting to the Dictionary Class 4. Exercises 9. 8. Hashing 1. An Overview of Hashing 2. A Hash Table Class 1. Choosing a Hash Function 2. A Better Hash Function 3. Hashing Integer Keys 4. Storing and Retrieving Data in a Hash Table 3. Handling Collisions 1. Separate Chaining 2. Linear Probing 4. Exercises 10. 9. Sets 1. Fundamental Set Definitions, Operations, and Properties 1. Set Definitions 2. Set Operations 2. The Set Class Implementation 3. More Set Operations 4. Exercises 11. 10. Binary Trees and Binary Search Trees 1. Trees Defined 2. Binary Trees and Binary Search Trees 1. Building a Binary Search Tree Implementation 2. Traversing a Binary Search Tree 3. BST Searches 1. Searching for the Minimum and Maximum Value 2. Searching for a Specific Value 4. Removing Nodes from a BST 5. Counting Occurrences 1. Exercises 12. 11. Graphs and Graph Algorithms 1. Graph Definitions 2. Real-World Systems Modeled by Graphs 3. The Graph Class 1. Representing Edges 2. Building a Graph 4. Searching a Graph 1. Depth-First Search 2. Breadth-First Search 5. Finding the Shortest Path 1. Breadth-First Search Leads to Shortest Paths 2. Determining Paths 6. Topological Sorting 1. An Algorithm for Topological Sorting 2. Implementing the Topological Sorting Algorithm 7. Exercises 13. 12. Sorting Algorithms 1. An Array Test Bed 1. Generating Random Data 2. Basic Sorting Algorithms 1. Bubble Sort 2. Selection Sort 3. Insertion Sort 4. Timing Comparisons of the Basic Sorting Algorithms 3. Advanced Sorting Algorithms 1. The Shellsort Algorithm 2. The Mergesort Algorithm 3. The Quicksort Algorithm 4. Exercises 14. 13. Searching Algorithms 1. Commonly Used Functions in Examples 1. Searching for Minimum and Maximum Values 2. 
Using Self-Organizing Data 2. Binary Search 1. Counting Occurrences 3. Searching Textual Data 4. Exercises 15. 14. Advanced Algorithms 1. Dynamic Programming 1. A Dynamic Programming Example: Computing Fibonacci Numbers 2. Finding the Longest Common Substring 3. The Knapsack Problem: A Recursive Solution 4. The Knapsack Problem: A Dynamic Programming Solution 2. Greedy Algorithms 1. A First Greedy Algorithm Example: The Coin-Changing Problem 2. A Greedy Algorithm Solution to the Knapsack Problem 3. Exercises 16. Index
NMVTI GRADUATES — Nine students completed studies at Northern Maine Vocational Technical Institute, Presque Isle, in the General Pharmacology class. The graduates included, front row, from left, Pearl L. Fitzgerald and Annetta Boland, Presque Isle; Franziska LeVasseur and Minnie McGarrigle, Fort Fairfield; and Fern E. Morgan, Andover, N.B. The second row included, left to right, Golda Smith and Dorothy Dewitt, Andover, N.B.; Shirley Wile and Marrena Bustard, Presque Isle; and Carolyn Smith, instructor. (File photo 1972/The Star-Herald) Presque Isle area From our Files – Week of December 21, 2022 Yvonne Tardie • December 21, 2022 75 Years Ago – Dec. 18, 1947 Officials of the farm loan group — Newly elected officers and directors of the Central Aroostook Farm Loan Association and guest speakers met at their annual meeting. Those present were Clair Pollard of Ashland; Linwood Wellington of Caribou, assistant secretary-treasurer; W. Burns Long of Presque Isle, president; Wendell Blackstone of Caribou; Roy E. Duff of Presque Isle, secretary-treasurer; Frank Landers of Mars Hill; William Walker of Presque Isle; Dr. Charles Merchant of the University of Maine; Edward W. Whittaker, assistant treasurer of the Federal Land Bank, Springfield, Massachusetts; Donald McCrum of Mars Hill, vice-president. Gladstone Chapman of Caribou was also a director of the association. DeLong received an award — C.C. DeLong of this city was congratulated by F.H. Marr of Boston, assistant zone manager, after receiving the Nash Ten Point Award for outstanding service as a Nash dealer.
Some of those attending the award ceremony were Fred Davis of Boston, district manager, and Charles B. DeLong, manager of the sales and service department of the C.C. DeLong Garage. Hallowell named department head — Romeo Marquis, principal of Presque Isle High School, announced the appointment of Lawrence Hallowell as the new Mathematics Department chairman, filling the vacancy created by the death of James Dyer. Hallowell, who had taught in the mathematics department for the previous seven years, was chosen from three candidates by Superintendent of Schools Joseph McBrine and Marquis. The selection was made on the basis of the candidates' administrative ability, educational capacity and general character. As the head of this department, Hallowell was responsible for department budgeting, course development, department evaluation and all other department administration as well as his normal duties as a teacher. Hallowell had a Bachelor's degree from the University of Maine at Orono and a Master's degree in Mathematics from Colby College. Wildcat band trip received $150 donation — Wildcats to Washington were given their second contribution of $150 from Suburban Propane. The contribution was given by John H. Miles, district manager of the company, and received by James Lyford, treasurer of Wildcats to Washington. This contribution helped pay for the Presque Isle High School Band to go to Washington, D.C. Miles said that making this trip was a tremendous opportunity, and the only way many students would ever be able to go. The students who go should remember this trip for many years to come, he continued. John H. Miles Jr., son of Mr. Miles, was a member of the band at Presque Isle High School. Presque Isle student brought home FFA speaking award — Sixteen-year-old Julie Gunderman possessed poise and confidence beyond her years.
The Presque Isle High School junior knew what she wanted and how to express herself, which helped her win the bronze medal for public speaking at the National FFA Convention in Kansas City, Missouri. Gunderman, who was also president of the local FFA chapter, spoke about sustainable agriculture. The thrust of her eight-minute speech was that using commonsense agricultural practices, with an emphasis on using fewer inputs, can enhance the environment and increase productivity in the foreseeable future. Gunderman started work on her speech almost a full year prior, competing first in the regional FFA competition in January, where she took first place. She then went on to the state contest in March, where her victory qualified her for the national convention Nov. 12 in Kansas City. In addition to making her speech, Gunderman also had to submit a written copy of her comments and answer questions from the judges. Pool donation — The Community Pool Project of Mapleton, Castle Hill and Chapman received a $500 donation from U-Save Auto Rental and Hoffses Auto Sales. Those present during the presentation were Scott Hoffses, U-Save Auto Rental; Bonnie Steeves, Member of the Fundraising Committee; Roger Hoffses, Hoffses Auto Sales; and John Edgecomb, Town Manager, Mapleton, Castle Hill and Chapman. Trust donation — Joe Clukey, manager of Katahdin Trust Company, presented a $500 contribution to the Mapleton/Chapman/Castle Hill Community Pool Project. Fundraising committee members John Edgecomb and Bonnie Steeves accepted the donation.
Jamesonia lindigii is a species of fern in the family Pteridaceae, first described by Georg Heinrich Mettenius; it received its currently accepted name from Maarten J.M. Christenhusz. Jamesonia lindigii belongs to the genus Jamesonia and the family Pteridaceae. No subspecies are listed. Sources Vascular plants lindigii
Videos from all over the world poured in over the months of April and May in answer to our call for an addition to our team, and the competition was fierce. We would like to formally congratulate all these candidates for the Making the Team: 2010 selection process this year. Your passion, creativity and enthusiasm for endurance sport were clear in your videos and in follow-up interviews. Thank you for reaching out to your communities to inspire those around you. Sarah and husband Steve volunteering as wet suit strippers at Ironman Wisconsin. It's always difficult to choose just one person from such an incredible pool of candidates but in the end, there can be just one. Please join us in welcoming Sarah Linder-Stenzel -- pharmacist, girl next door, and all-around goodwill ambassador -- as the newest member of Team Evotri! Though not a requirement for our selection process, Sarah has finished Ironman and several other triathlons, in addition to being an accomplished runner. You can find Sarah and her husband Steve braving the harsh Minnesota winters by racing nearly every weekend alongside a group of recruited family and friends - wow! Sarah's mission in life is to see that those around her, especially those near and dear, follow a daily regimen of health, fitness and subsequent happiness. No matter where you are on the spectrum of inactivity or dispassion, after a short consultation with Sarah, be sure you'll have the medicine you need to turn it all around. Welcome aboard, Sarah! We're thrilled to have you as the newest member of our family.
Butyric anhydride or butanoic anhydride is the chemical compound with the formula (CH3CH2CH2CO)2O. The molecule can be described as the condensation of two molecules of butyric acid with the elimination of one water molecule (hence its name). Butyric anhydride is a clear, colorless liquid that smells strongly of butyric acid, which is formed by its reaction with moisture in the air. Safety Butyric anhydride is a combustible, corrosive liquid. It is considered water sensitive. References Carboxylic anhydrides Foul-smelling chemicals
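The condensation described above corresponds to the overall balanced reaction (shown here as an illustrative sketch; in practice the anhydride is prepared with a dehydrating agent rather than by spontaneous loss of water):

```latex
2\,\mathrm{CH_3CH_2CH_2COOH}
\;\longrightarrow\;
(\mathrm{CH_3CH_2CH_2CO})_2\mathrm{O} + \mathrm{H_2O}
```

Both sides balance at C8H16O4, consistent with the formula (CH3CH2CH2CO)2O given above.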
<?xml version="1.0" encoding="utf-8"?>
<!-- Rounded rectangle: 1dp primary-color border, solid fill, 4dp corner radius -->
<shape xmlns:android="http://schemas.android.com/apk/res/android">
    <stroke
        android:width="1dp"
        android:color="@color/colorPrimary" />
    <solid android:color="@color/color_fff1ea" />
    <corners android:radius="4dp" />
</shape>
Q: How to select random node with xmlstarlet in bash? Bash, ubuntu linux. How to select random node with xmlstarlet in bash? A: xmlstarlet sel -B -t -c "//node()[$RANDOM mod last() + 1]" input.xml The -B strips whitespace nodes, which you probably don't want to select... I also tried using math:random() defined at exslt.org: xmlstarlet sel -N math=http://exslt.org/math -B -t --var r='math:random()' \ -c '//node()[round($r * last()) + 1]' -n input.xml But it appears to use the same seed every time.
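A: If stepping outside xmlstarlet is acceptable, the same pick-a-random-node idea can be sketched with Python's standard library (the inline string below is a hypothetical stand-in for input.xml; note that ElementTree iterates element nodes only, whereas //node() also matches text nodes):

```python
import random
import xml.etree.ElementTree as ET

# Hypothetical stand-in for input.xml
doc = "<root><a>1</a><b>2</b><c><d>3</d></c></root>"
root = ET.fromstring(doc)

# Gather every element node (root included), then pick one uniformly at random.
nodes = list(root.iter())
chosen = random.choice(nodes)
print(chosen.tag)
```

Unlike the round($r * last()) + 1 XPath approach, random.choice is seeded from the OS on each run, so repeated invocations select different nodes.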
Our latest client is the new kid on the block in the UK payments sector, and as such they are looking for a "first man on the ground" type to take the market by storm. Their provisions allow large volumes of payments to be processed in a secure and rapid fashion. If you have experience in the payments service arena, have the desire and ability to hunt new business and can lead from the front - this upcoming organisation is most definitely the one for you. Received almost $2 million in funding in their first 2 years of trading. Tipped by many to be the next big fintech player. With their service live in 30+ countries and counting, the UK is the next target market. The company are looking for someone who comes from the payments industry, who will be able to hit the ground running. Furthermore, the candidate must be from a new business background, with a record of accomplishment operating in such roles. Confident presenting, demoing and closing deals when dealing with senior customers. The role is new business focused, concerned with selling our clients suite of payment applications & services. You will be required to engage with the C-Level inside institutions such as banks & short term lenders. The sales cycle is typically between a couple of weeks and a couple of months, and deal sizes are around £250k a month.
Given our limited space, we will not attempt to trace the roots of Hungarian folk music, as that would mean going back to ancient times. Instead, we begin our short summary in the 18th century, when 'verbunkos' – the antecedent, so to speak, of the Hungarian song – was born from older Hungarian dance music. In its original function, the 'verbunk' (recruitment) dance and its music served to entice young men into becoming soldiers. This recruitment music influenced world-famous composers such as Mozart and Brahms. The most famous representative of 'verbunkos' was the gypsy bandleader János Bihari, a violin virtuoso and composer, and one of the great figures of Hungarian Romantic music. He arrived in Pest in 1801 from Nagyabony (presently in Slovakia) and became extremely popular with his five-member band, performing in Pozsony (Bratislava) and Vienna and touring Hungary. He lived the life of a vagabond until 1824, when a road accident left him with a hand injury that ended his career as a bandleader. Even the young Ferenc Liszt heard him play live and praised him. Unfortunately, Bihari's works were not written down, so their authenticity may be questioned. In the 19th century, 'verbunkos' was transformed into a Romantic Hungarian dance music. It suited the patriotic mood of the Reform age, when at balls young people danced the 'verbunk' or the 'csárdás' that developed from it. Naturally, gypsy bands already played a great role then, providing the music for the dancing, and the music itself developed and took shape in practice. Gypsy music had such an influence on Ferenc Liszt that he wrote a book on it in French, titled 'The Bohemians and their music in Hungary'. In this period, some virtuoso musicians could become real celebrities on a national level – just like János Bihari, mentioned above.
Live music was not recorded in those days, so we cannot admire the playing of the masters who lived two hundred years ago. The Hungarian song was very popular at this time – even educated society often preferred it to classical music. At the beginning of the 19th century, the exploration of folk poetry was a central concern across Europe, and in Hungary the national passion of the Reform age provided perfect soil for it. Meanwhile, melodies from the fashionable Hungarian dances of the period resurfaced in the songs, creating a 'sung dance' well suited to merry-making. Having fun, revelling and breaking spontaneously into song were part of the Hungarian temperament, and gypsy bands loved playing these songs. A new 'profession' even appeared, as many self-styled songwriters turned up among smallholders and intellectuals. Béni Egressy, who set to music the famous Hungarian poem 'Szózat' and wrote many songs, was a well-known author of the period (and did not belong to the above-mentioned dilettanti). The popularity of the Hungarian song remained unbroken at the turn of the century and into the 20th century as well. Intellectuals and the gentry liked to express their emotions through songs accompanied by bandleader violinists. Pista Dankó, active around the turn of the 19th and 20th centuries, was probably the most famous songwriter. After his songs became popular thanks to Lujza Blaha, he achieved great successes with his band from Hungary to Russia in the 1890s. His songs and popular theatre plays were written down by others and survived for posterity. Let us go back in time, however, because to complete the story we must add that the first regular gypsy band is mentioned in records from the middle of the 18th century. Naturally, gypsy musicians had played earlier, among other places in exclusive settings such as royal and baronial courts, while wandering gypsies trotted the world as well.
The first organised gypsy band was formed by Panna Czinka with her husband and two brothers-in-law. She led the band (making her the first known gypsy bandleader!), while her husband played the double bass, one of the brothers the contrabass and the other the cimbalom. This is the standard line-up of the gypsy band – violin, cimbalom and bass – though a band can have more members, adding a cello and a clarinet. Panna Czinka showed a special talent for the violin even in her childhood. In this period, landlords used to patronise talented musicians, and Panna was supported by the landlord of the village of Sajógömör. The band was very popular in that area. Over the following century, all social classes enjoyed the company of gypsy musicians. (For reasons of space, our article does not cover gypsy folk music itself; we stay within the frame of folk-style art music.) Later, the support of a strengthening middle class and advancing urbanisation placed gypsy bands in restaurants and coffeehouses, where their role became established. The phenomenon still exists today, though it is now tied mainly to tourism and found in restaurants and cafés frequented by tourists. Their costumes are said to date from the 1848-49 War of Independence, when gypsies took part in battles and played music for their fellow soldiers. Their uniform evolved into orchestral attire: the gilded, frogged red vest and the leader's blue vest are still part of their performances. In the 1950s, the genre was given a new framework with the Rajkó Band. Since then, the Rajkó Orchestra and the One Hundred Gypsy Musicians have probably been the most famous Romani symphony orchestras; their repertoire includes classical music and folk music besides traditional gypsy music. And we have not even mentioned the 'world music' collectives of Romani musicians.
If you have taken a fancy to Hungarian and gypsy folk music – setting aside the sentimental songs reviewed in this article, and staying within the exciting genre of world music – we recommend the world music festival Budapest Ritmo at Akvárium Klub (Budapest), from 5 to 7 October 2018.
How to Use TikTok in India After Ban (2023 Guide)
In a Hurry? Here's How to Use TikTok in India after Ban
What Is TikTok?
Why Was TikTok Banned in Some Countries?
Why Is TikTok Banned in India?
How Do I Use TikTok after the Ban?
Risks of Using TikTok
How to Protect Yourself while Using TikTok
3 Best VPNs to Unblock TikTok Videos in India
Official Statement from TikTok Regarding Its Ban
How to Install TikTok in the Countries Where It Was Banned
Can I Use a Free VPN to Access TikTok?
When Will TikTok App Officially Return to India?
TikTok Alternatives That Work in India
TikTok, the popular social media app, is owned by a Chinese company, leading to concern that the information shared on it could find its way into the hands of the Chinese government. Therefore, many countries have decided to censor or entirely ban its use. In India, TikTok and 58 other Chinese apps (including UC Browser, WeChat, etc.) have been banned since June 2020 (including its web version, tiktok.com). Indian officials justify the ban as a way to protect national security and the privacy and online data of Indian citizens. Thankfully, you can bypass this TikTok block by using a Virtual Private Network (VPN). Despite the ban on TikTok's app (and web version) and 58 other Chinese apps in India, it's simple to download and access TikTok in one of 3 ways:
If you have an Android device, sideload the APK file (as banned apps are not available for download on the Google Play Store).
If you are using an iPhone or an iPad, change the location settings on the App Store to a country where TikTok isn't banned.
Wipe your phone and then install a VPN, as TikTok's web application is easily accessed in India through a VPN. You are now able to use the web application to watch TikTok.
TikTok is a social media app that allows users to create, upload, watch, and share very short (15-second) videos, filmed on mobile phones.
People can then browse through the videos on the app, which are categorized for ease of use, and interact with them on their phones. TikTok is owned by ByteDance, an Internet technology company founded in 2012 and based in Beijing, China. In 2017, the TikTok app became available on iOS and Android in several world markets, including China. In August 2018, TikTok merged with Musical.ly and became extremely popular and available worldwide; it currently has over a billion users. One of the reasons for TikTok's rising popularity is that it's used by celebrities and influencers like Will Smith, Jessica Alba, Jimmy Fallon, and many more. A new phenomenon, "TikTok celebrities," has arisen as people who often stream their own videos on TikTok become famous (e.g. Loren Gray, Charli D'Amelio, Zach King, and many others). Several countries have banned TikTok or announced an intention to do so soon. Most cite concerns about national security, as they fear that data from TikTok could be shared with the Chinese government ("data mining"), even though TikTok has denied this. India has entirely banned TikTok and other popular Chinese apps, like UC Browser (see the section below for details), and so has Pakistan (mainly due to claims of indecent content that offends the sensitivities of Muslim viewers). The USA (especially ex-President Trump) appeared to be on the verge of banning TikTok too, ostensibly due to national security concerns, unless the American branch of TikTok was bought out by Microsoft or another US company. Indonesia and Bangladesh have sporadically blocked TikTok but haven't publicized the reasons, and Japan and Australia are also considering a ban. India has already placed a total ban on TikTok, officially because it's considered to pose a threat to cybersecurity, national security and integrity, and the privacy and data of India's citizens.
In fact, the ban on 59 Chinese apps, including TikTok, came into effect just after a border dispute in June 2020 between China and India in the disputed border territory of Ladakh. Additional Chinese and Indian troops were sent to the area, and there were clashes that resulted in the deaths of several Indian soldiers. As India was one of the biggest foreign markets for TikTok, with over 120 million users, the ban on TikTok in India was a major blow to China. At present, TikTok remains banned in India. All TikTok users who signed up for the app prior to the ban are still able to access its content, even though it is banned in India. New users cannot go onto the App Store or Google Play to download the TikTok app; they have to use a VPN if they want to get onto TikTok in India on their computers or phones. To use a VPN to access TikTok, you'll need to carry out a factory reset on your mobile phone, because TikTok's app is blocked by the hardware ID. When you do a reset, your phone wipes out its hardware ID, allowing you to access the TikTok app in India or any area where there is a TikTok ban. Read on to learn exactly how to access TikTok in India with a VPN. TikTok users tend to view it as harmless fun, but it's important to acknowledge that governments wouldn't outlaw it without justification. There are risks associated with using TikTok, including some issues around security in certain versions of the app. TikTok tries to fix these, but until it does, information is vulnerable to hacking or misuse when you connect to TikTok. There used to be another concern: TikTok needs a particular level of access in order to work, and this once included access to the clipboard on iOS 14, which was utilized to identify whether a user was copying a comment to many accounts on the same phone or computer. This issue was resolved when Apple addressed the problem and eliminated the feature.
In some countries, there are issues around censorship of TikTok content, and there are also concerns about underage, vulnerable children using the app. If you are a TikTok fan but want to make sure that you are safe while using it, there are several settings that can be adjusted to enable this. These include:
- Enabling restricted mode.
- Only using a private account.
- Opting out of personalized data.
- Switching all safety settings to Friends.
- Not using the "allow others to find me" option.
It is particularly important to ensure that children who use TikTok apply all these adjustments. If you want to use TikTok in India or any other area where it is not available, you need a premium VPN. Here are our top picks. Our first-choice VPN to unblock TikTok (or any of the banned Chinese apps) in India is ExpressVPN. As this VPN has a huge server network spanning more than 94 countries, it will not be a problem to find a VPN server in a country that doesn't ban TikTok. It provides the fastest connection speeds and excellent security features, including a kill switch, military-grade encryption, a no-logs policy, and split tunneling. You can log in on 5 devices simultaneously, and ExpressVPN is compatible with almost every mobile and desktop platform, including iOS, Android, Windows, Mac, etc.
Pros: high level of privacy and security; extensive network of servers; extremely quick connection speeds. Cons: pricier than competitors.
If you want to watch TikTok in India after the ban and access your TikTok account, NordVPN is highly effective at allowing you to access blocked content and will let you open TikTok and view your favorite TikTok stars. You can easily unblock TikTok in India and connect to a server because this VPN has a huge network of over 5,200 servers in 60 countries, and great speeds.
In terms of security, your online safety and privacy are assured, as NordVPN offers a double VPN, a high level of encryption, a kill switch, split tunneling, and a no-logs policy.
Pros: good security and privacy. Cons: slower than ExpressVPN; user interface issues.
Surfshark is a very reasonably priced VPN that still allows you to unblock TikTok on your device in India. It has an extensive network of servers in more than 60 countries that will let you open TikTok, and it allows you to connect on almost any device and on an unlimited number of devices simultaneously. To protect your online security, it has MultiHop, CleanWeb, a strictly enforced no-logs policy, and Camouflage mode.
Pros: most affordable premium VPN; good security features. Cons: slower to connect; some server trial and error needed.
In response to concerns about security and data sharing expressed around the world, the company that owns TikTok vehemently denied the allegation that the Chinese government can access information provided by any TikTok user. Nikhil Gandhi, the head of TikTok in India, posted this on his blog: "We have not shared any information of our users in India with any foreign governments, nor have we used such data in any manner that would compromise the integrity of India. Further, even if we are requested to in the future, we would not do so." Earlier, TikTok accused President Trump and the US government of making unfounded allegations against it and definitively stated that the app presents no security threat at all. If you are a fan who really wants to unblock TikTok, there are ways to access TikTok in banned countries: either open Chrome or any other browser and use a VPN to go onto the browser version of TikTok, or find the app outside the official mobile app stores.
How to download and install TikTok on Android
It is much easier to download and install TikTok on Android mobile devices than on an iOS device, as you can search online in your phone's browser (Chrome or any other browser) for the official TikTok APK file and sideload it onto your device. This is easy and effective but only allows you to download one specific version of TikTok: if and when an update comes out, you'll have to go through the same process again to get the updated version, and won't be able to update the app from the Play Store. As long as you ensure that you download APK files from a reputable source and that you install the correct file on your device, they are safe to use.
1. Find the TikTok APK file on AndroidAPKsBox or on Apkpure.com.
2. Go into Settings and enable the security option that lets you install third-party apps.
3. Install the APK and start watching videos on TikTok.
How to download and install TikTok on an iPhone
To access TikTok on your iPhone, if you live in one of the banned countries, follow these steps:
1. Go to the App Store and tap your profile picture at the top left of the screen.
2. Go into the account settings and select your name and email address.
3. Choose Country/Region and then press the Change Country or Region button.
4. Search the drop-down menu for a country in which TikTok has not been banned and select it.
5. Press Agree to accept the Terms and Conditions.
6. Select None as the payment method.
7. Enter an address in the country that you chose in the section that asks for the billing address.
8. Finally, press Next and then Done.
Now you have updated your location in the App Store and can find, install, and connect to TikTok's app on your iPhone or iPad. No one wants to pay for a VPN if they can get it for free, so it's tempting to try to use a free VPN service.
However, every business needs to make a profit, so ads tend to disrupt your viewing if you use a free service, and the VPN might actually sell your information to a third party for a few bucks, severely compromising your online security and safety. Also, a free VPN often cannot manage to open TikTok, and your viewing is often ruined by buffering, delays, and poor quality. We urge you to sign up for a reputable VPN, like those mentioned above. The ban on TikTok in several countries seems to be driven by regional politics, so it is challenging to predict how long it will last. However, at this point, due to the difficult relationship between China and India, it seems likely that the ban will not be lifted anytime soon and that TikTok will not be officially available in India for a long time. If concerns about security persist, more countries may decide to add TikTok to their lists of banned apps. Meanwhile, the situation is being studied, and TikTok's responses to such concerns are being considered. Fans who want to use TikTok in India but who cannot access it, as it is banned, can choose to use alternative apps that offer similar content. These may not have all the same features as TikTok, but they are a way to share user-created videos and to interact with other enthusiasts. These alternatives include: Vigo Video: allows you to make 15-second videos, like TikTok videos, share them with others, live stream, and interact with videos made by others. It is available for Android and iOS and is easy to use, as the videos are well categorized. Lomotif: allows you to create videos, like TikTok videos, and also lets you use hyper-lapse on videos, create montages and collages, edit video clips together, create slideshows, and add music.
This app lets you use a built-in editor to edit your videos, allows you to add stickers, filters, GIFs, and emojis, and has a community section that lets you find friends who are using the app and upload videos. Lomotif is available for both Android and iOS. Doobido: this very popular Indian social app allows you to share videos with your friends, create your own profile, and make your own content for your followers. It has gained popularity since the decision was made to ban TikTok. Videos are sorted into categories, so you can easily find them, and the app can be installed on both Android and iOS devices.
Which countries banned TikTok? At present, only India and Pakistan have imposed a total TikTok ban, but Indonesia and Bangladesh have banned TikTok videos at times, and the USA and Japan are considering banning it, too.
Is TikTok safe for TikTok users? There are definitely some questions around the security of data on TikTok, and you have to ensure that you are well protected when using it. At the same time, however, if you use a VPN, your safety is assured.
Is it safe to use a VPN with TikTok? Yes, it is safe to access TikTok using a VPN. TikTok will not ban you for using its app with a VPN, even if you use TikTok in India, as it cannot detect that you are in India: you will be watching TikTok videos by connecting to a VPN server in a country that allows you to download TikTok. When you watch TikTok in India using a VPN, your actual IP address and location are hidden and fully protected.
Is TikTok banned in the USA? No, it isn't. You can watch TikTok videos and are currently able to access TikTok in the USA, but the US government is concerned about potential security risks associated with the app, and it may be banned in the future.
At present, negotiations are underway for an American company to buy out TikTok's operations in the USA, which would ameliorate the issues and prevent the USA from joining the banned countries.
Is TikTok banned in the EU? TikTok hasn't yet been banned by any government in the European Union. However, since July 2020, it has been scrutinized by data-protection groups in the EU that are concerned about its privacy policy and the way in which it uses and stores information about its users.
If you want to watch TikTok (or other banned apps) in India or in other areas where it is among the banned Chinese apps, you can do this by changing your settings in the iOS App Store or by downloading the app onto your device through an APK file. Alternatively, you can use a VPN, as a VPN will disguise your real IP address and allow you to use TikTok in India or in other areas where you cannot otherwise connect to or access TikTok. If you are a new user, make sure that you follow the steps we have provided in this article.
Final words: be alert to the possible privacy risks associated with using TikTok, as there is usually a good reason why a government would outlaw a website or app, usually data collection issues. Be sure to protect the privacy of your data, and be careful what you share, by using a VPN with strong safety and security features, like ExpressVPN. Now, have fun as you watch amazing content or make your own TikTok videos! Try ExpressVPN for 30 Days, Risk-Free!
"Look Me in the Heart" is a song by recording artist Tina Turner. It was written by Billy Steinberg and Tom Kelly and produced by Dan Hartman for Turner's seventh solo studio album, Foreign Affair (1989). Released as a single in March 1990, it reached number 23 on the Irish Singles Chart and number 31 in the United Kingdom. In the United States, it peaked at number eight on the Billboard Adult Contemporary chart. The single was released in a variety of formats, including a live recording of the Private Dancer track "Steel Claw", remixes of "Look Me in the Heart" and the 1987 "Tina Turner Montage Mix", a nine-minute megamix including tracks from Private Dancer and Break Every Rule.

Critical reception
Bill Coleman from Billboard wrote, "Soul-drenched popper should be embraced by several formats. From the underappreciated Foreign Affair album." Greg Kot from the Chicago Tribune felt "Look Me in the Heart" "bears an uncanny resemblance" to her earlier cover of Al Green's soul classic "Let's Stay Together". In a retrospective review, Pop Rescue declared it a "pretty standard little pop song" that "lacks anything more than another saxophone solo and some breathy synths as interest points."

Track listings
US 7-inch and cassette single, Australian 7-inch single
"Look Me in the Heart" – 3:42
"Stronger Than The Wind" – 3:59
French 7-inch and UK 7-inch and cassette single
"Look Me in the Heart" – 3:42
"Steel Claw" (live) – 4:41
French CD single
"Look Me in the Heart" – 3:42
"Steel Claw" (live) – 4:25
"The Best" (Extended Mighty mix) – 6:37
UK CD single
"Look Me in the Heart" (LP version) – 3:42
"Look Me in the Heart" (12-inch remix) – 5:22
"Steel Claw" (live) – 4:25
"Look Me in the Heart" (instrumental) – 3:39
UK limited CD single
"Look Me in the Heart" (7-inch remix) – 3:44
"Look Me in the Heart" (instrumental) – 3:41
"The Tina Turner Montage Mix" – 8:54
UK 12-inch single
"Look Me in the Heart" (12-inch remix) – 5:22
"Steel Claw" (live) – 4:25
"Look Me in the Heart" (instrumental) – 3:39
Ajethotep was a high dignitary of Ancient Egypt who lived during the Fifth Dynasty, around 2400 BC. He was probably the son of the celebrated physician Peseshet. His name, which means 'the God of the Horizon is perfect' or 'the eye of Horus is preserved', is common, and several namesakes are known at Saqqara. His titles, of little significance in themselves, indicate his rank and status as a royal courtier. He held some priestly office related to the medical world.

Genealogy
Despite the seventeen titles uncovered in the inscriptions of his funerary chapel, Ajethotep remains somewhat of a mystery. Among his titles was that of vizier (chaty), which made him the highest-ranking official of the royal court, second only to the king. He was also overseer of the treasuries, overseer of the scribes of the king's documents, and overseer of the granaries. His father was also a vizier. Only three of his sons are mentioned in his tomb: Sunjuptah; Rajuef, chief of physicians; and Ajethotep, inspector of physicians. He also had another son, Ptahhotep Tshefi (Ptahhotep II). Ptahhotep and Ajethotep were high officials of the court during the reigns of Dyedkara Isesi (2414–2375 BC) and Unis, toward the end of the Fifth Dynasty (2494–2345 BC).

Tomb
He is above all known for his tomb, discovered at Saqqara by the French Egyptologist Georges Aaron Bénédite in 1903. It was recorded by Auguste Mariette and published by N. de Garis Davies. It is a joint mastaba belonging to Ptahhotep and Ajethotep, located along the causeway of Unis, and it was identified and explored early in the history of the Egyptian Antiquities Service. The dignitary's cult chapel, of small dimensions, was offered to France by Egypt and transported to the Louvre Museum at the beginning of the 20th century. The location of the tomb was then lost. Toward the end of the same century, the Louvre organized a series of excavation campaigns, found the tomb's location, and continued its exploration. One of the most remarkable scenes on the walls of the tomb shows Ajethotep directing the construction of the tomb.

Bibliography
Christiane Ziegler, Le Mastaba d'Akhethetep, une chapelle funéraire de l'Ancien Empire, Paris, éditions RMN, 1993.
Christiane Ziegler (dir.), Le mastaba d'Akhethetep, Collection « Fouilles du Louvre à Saqqara », Vol. I, Paris, éditions Musée du Louvre/Peeters, 2007.
This Interfaith Devotional program focuses on prayer and the effect it has on our lives. These readings have been selected from various religious traditions. Please come and join us on Sundays at 9:30 am at the Beaverton Baha'i Center.

O Thou Who art the Lord of all names and the Maker of the heavens! I beseech Thee by them Who are the Daysprings of Thine invisible Essence, the Most Exalted, the All-Glorious, to make of my prayer a fire that will burn away the veils which have shut me out from Thy beauty, and a light that will lead me unto the ocean of Thy Presence. … Make my prayer, O my Lord, a fountain of living waters whereby I may live as long as Thy sovereignty endureth, and may make mention of Thee in every world of Thy worlds.

…Pray one for another, that ye may be healed. The effectual fervent prayer of a righteous man availeth much.

The Great Spirit is everywhere; he hears whatever is in our minds and hearts, and it is not necessary to speak to him in a loud voice. If I do not pray to thee with my heart, Thou hearest me not. If I pray to thee with my heart, Thou knowest it and art gracious unto me.

And establish regular prayers at the two ends of the day and at the approaches of the night: for those things that are good remove those that are evil: be that the word of remembrance to those who remember their Lord.

Of all the prayers of the heart, the best prayer is the prayer to the Master to be given the grace of properly praising the Lord.

…when you pray, go into your room and shut the door and pray to your Father who is in secret; and your Father who sees in secret will reward you. And in praying do not heap up empty phrases as the Gentiles do; for they think that they will be heard for their many words. Do not be like them, for your Father knows what you need before you ask him.

…those who establish regular Prayer, and practice regular charity and believe in Allah and the Last Day to them shall We soon give a great reward, have in their hearts the assurance of the Hereafter.

O thou spiritual friend! Thou hast asked the wisdom of prayer. Know thou that prayer is indispensable and obligatory, and man under no pretext whatsoever is excused from performing the prayer unless he be mentally unsound, or an insurmountable obstacle prevent him. The wisdom of prayer is this: That it causeth a connection between the servant and the True One, because in that state man with all heart and soul turneth his face towards His Highness the Almighty, seeking His association and desiring His love and compassion. … prayer and fasting is the cause of awakening and mindfulness and conducive to protection and preservation from tests.

To thee have We granted the Fount Of Abundance. For he who hateth thee — he will be cut off from Future Hope.

Recite what is sent of the Book by inspiration to thee, and establish Regular Prayer: for Prayer restrains from shameful and unjust deeds; and remembrance of Allah is the greatest thing in life without doubt. And Allah knows the deeds that ye do.

Supplicate to God, pray to Him and invoke Him at midnight and at dawn. Be humble and submissive to God and chant the verses of thanksgiving at morn and eve, for that He guided thee unto the Manifest Light and showed to thee the straight Path and destined to thee the station of nearness in His wonderful Kingdom.

When my servants ask thee concerning Me, I am indeed close to them; I listen to the prayer of every suppliant when he calleth on Me; let them also, with a will, listen to My call, and believe in Me; that they may walk in the right way.

Draw nigh unto God and persevere in thy communion with thy Lord, so that the fire of God's love may glow more luminously in the heart, its heat grow stronger and give warmth to that region and its sound reach the Supreme Concourse.

I have chosen thee: listen, then, to the inspiration sent to thee. "Verily, I am Allah: There is no god but I: so serve thou Me only, and establish regular prayer for celebrating My praise."

Fasting and obligatory prayer constitute the two pillars that sustain the revealed Law of God. … the laws of obligatory prayer and fasting so that through them the believers may draw nigh unto God.

… the fasting period, which involves complete abstention from food and drink from sunrise till sunset … essentially a period of meditation and prayer, of spiritual recuperation, during which the believer strives to make the necessary readjustments in his inner life, and to refresh and reinvigorate the spiritual forces latent in his soul. Its significance and purpose are … fundamentally spiritual in character.

Fasting is symbolic, and a reminder of abstinence from selfish and carnal desires. … is enjoined on all the believers once they attain the age of 15 and until they reach the age of 70 years.
Transitive groups of collineations on certain designs
Richard Earl Block
Pacific Journal of Mathematics, Vol. 15 (1965), No. 1, 13–18

Abstract

Let $M = (a_{ij})$ be an $m \times n$ matrix with entries in $\{1, -1\}$. Suppose that there is a positive integer $d$ such that the inner product of every pair of distinct rows of $M$ is $n - 2d$; this is equivalent to assuming that any two distinct rows have Hamming distance $d$, i.e. differ in exactly $d$ places. The rows of $M$ form the code words of a binary code; such a code is called a (binary) constant-distance code, of length $n$ and distance $d$. Special cases of matrices which may be taken to be $M$ are the Hadamard matrices, which are defined by the condition that $m = n = 2d$, and the incidence matrices (written with $\pm 1$) of balanced incomplete block designs, which are characterized by the property that all column sums are equal and all row sums are equal.

Suppose that $\pi$ is a permutation of $\{1, \dots, n\}$ such that replacement, for $i = 1, \dots, n$, of the $\pi(i)$-th column of $M$ by the $i$-th column of $M$ sends each row of $M$ into a row of $M$. Then $\pi$ induces a permutation of the rows of $M$. Call such a pair of permutations of the columns and of the rows a collineation of $M$, or of the code. We shall examine constant-distance codes with a group $G$ of collineations which is transitive on the columns. We shall show that $G$ has at most two orbits on the rows (just one orbit if and only if $M$ comes from a balanced incomplete block design), and that if $G$ is nilpotent then at most one of these orbits contains more than a constant row.

Mathematics Subject Classification. Primary: 05.20
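The equivalence stated in the abstract, that two rows over $\{1, -1\}$ have inner product $n - 2d$ exactly when their Hamming distance is $d$, follows because each agreeing position contributes $+1$ and each differing position contributes $-1$, giving $(n - d) - d$. A quick numerical check (the example vectors below are arbitrary illustrative data, not taken from the paper):

```javascript
// Inner product of two ±1 rows: agreements contribute +1, disagreements -1,
// so <u, v> = (n - d) - d = n - 2d, where d is the Hamming distance.
function innerProduct(u, v) {
  return u.reduce((s, ui, i) => s + ui * v[i], 0);
}

function hammingDistance(u, v) {
  return u.reduce((c, ui, i) => c + (ui !== v[i] ? 1 : 0), 0);
}

const u = [1, -1, 1, 1, -1, 1];
const v = [1, 1, -1, 1, -1, -1];

const n = u.length;              // 6
const d = hammingDistance(u, v); // 3 (positions 2, 3, and 6 differ)
console.log(innerProduct(u, v) === n - 2 * d); // true
```

The same identity is what makes the Hadamard case work: orthogonal rows (inner product 0) force $n = 2d$.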
Sangre y arena (Blood and Sand) is a 1922 American silent film that gave rise to two remakes: one in 1941 and another in 1989. It is based on the novel of the same title by Vicente Blasco Ibáñez.

Synopsis
Juan Gallardo (Rodolfo Valentino) seeks to triumph as a bullfighter. After marrying his fiancée he achieves success, but then he meets another woman.
The 2022 World Fencing Championships, the sixty-eighth edition of the World Fencing Championships, took place in July 2022 in Cairo, Egypt. The Egyptian capital hosted the most prestigious competition organized by the International Fencing Federation for the second time, after 1949. The first three days were dedicated to the pool rounds and preliminary rounds of the six individual events, from which the 16 best-ranked fencers in the world rankings were exempt, with the exception of fencers from Russia and Belarus, who remained excluded from all international competition.

Calendar
The world championships were held over nine days.

Medalists
Épée: men's individual and team; women's individual and team.
Foil: men's individual and team; women's individual and team.
Sabre: men's individual and team; women's individual and team.

See also: 2021–2022 Fencing World Cup.
Q: Fixed HTML table layout breaking width Can anyone tell me why the rows in this layout are breaking the width? I know it's a fixed-width static layout, and it's an in-line mess, but this is what we need to use until I can develop a fluid/responsive layout. Self-taught coder, so there's probably a lot I'm doing wrong here. The layout should be a single column, with one row at the bottom that needs 3 table cells as displayed in the snippet. <!doctype html> <html> <head><title>JFG eNewsletter</title></head> <body> <table width="100%" style="background-color: #E4E0D6; padding: 0px;" border="0" cellpadding="0"> <tr> <td style="background-color: #E4E0D6; padding: 20px 0px 0px 0px;"> <table align="center" style="background-color: #FFFFFF; width: 600px; max-width: 600px; padding: 0px;" cellspacing="0" cellpadding="0"> <tr style="background-color: #72113D; width: 600px;"> <td align="left" width="60" style="background-color: #72113D; padding: 20px 20px 20px 20px; width: 60px; display: inline-block;" colspan="0"> <a href="https://www.johnsonbank.com/" title="johnsonbank.com"><img src="http://app.subscribermail.com/images/pp/56502968/2014_Branding/J-white.png" align="center" width="60" alt="JFG Logo" border="0" /></a> </td> <td align="left" width="450" style="background-color: #72113D; font-family: 'open sans', san-serif; color: #FFFFFF; font-size: 24px; font-weight: bold; padding: 0px 10px 0px 10px; width: 450px; display: inline-block;" colspan="5"> Make the Most of Your Business<br /> <a style="font-size: 14px; color: #FFFFFF; text-decoration: none; font-weight: normal;" href="https://www.johnsonbank.com/Business/Banking">BANKING</a><span style="font-size: 14px;">&nbsp;|&nbsp;</span> <a style="font-size: 14px; color: #FFFFFF; text-decoration: none; font-weight: normal;" href="https://www.johnsonbank.com/Business/Wealth">WEALTH</a><span style="font-size: 14px;">&nbsp;|&nbsp;</span> <a style="font-size: 14px; color: #FFFFFF; text-decoration: none; font-weight: normal;" 
href="https://www.johnsonbank.com/Business/Insurance">INSURANCE</a> </td> </tr> <tr style="background-color: #82204C; height: 15px;"> <td align="center" colspan="5"></td> </tr> <tr style="width: 600px; max-width: 600px;"> <td align="center" colspan="5"> <a href="" title=""><img src="https://app.subscribermail.com/images/pp/56502935/2017_eNewsletters/email_heo_ecoforum.png" alt="" width="600" height="200" border="0" /></a> </td> </tr> <tr style="background-color: #E4E0D6; height: 15px;"> <td align="center" colspan="5"></td> </tr> <tr style="font-family: 'open sans', san-serif; font-size: 16px; color: #454646;"> <td align="left" style="padding: 20px 20px 10px 20px;" colspan="5"> <a href="https://www.johnsonbank.com/" title="" style="text-decoration: none; color: #72113D; font-size: 21px; font-weight: bold;">TITLE</a> <br />BODY TEXT HERE...&nbsp;<a href="https://www.johnsonbank.com/" title="" style="text-decoration: none; color: #4583A6; font-weight: bold;">Read more</a> </td> </tr> <tr style="font-family: 'open sans', san-serif; font-size: 16px; color: #454646;"> <td align="left" style="padding: 20px 20px 10px 20px;" colspan="5"> <a href="https://www.johnsonbank.com/" title="" style="text-decoration: none; color: #72113D; font-size: 21px; font-weight: bold;">TITLE</a> <br />BODY TEXT HERE...&nbsp;<a href="https://www.johnsonbank.com/" title="" style="text-decoration: none; color: #4583A6; font-weight: bold;">Read more</a> </td> </tr> <tr style="font-family: 'open sans', san-serif; font-size: 16px; color: #454646;"> <td align="left" style="padding: 20px 20px 20px 20px;" colspan="5"> <a href="https://www.johnsonbank.com/" title="" style="text-decoration: none; color: #72113D; font-size: 21px; font-weight: bold;">TITLE</a> <br />BODY TEXT HERE...&nbsp;<a href="https://www.johnsonbank.com/" title="" style="text-decoration: none; color: #4583A6; font-weight: bold;">Read more</a> </td> </tr> <tr> <td align="center" colspan="5"> <a 
href="https://www.johnsonbank.com/Resources/Articles?sortBy=&filterBy=%7B0b834bec-aecc-4625-b69e-c8a83f0eeabc%7D&selectItem=%7B0b834bec-aecc-4625-b69e-c8a83f0eeabc%7D"><img src="http://app.subscribermail.com/images/pp/56502935/2016_eNewsletters/WeeklyInvestmentCommentary.png" width="600" height="87" alt="" border="0" /></a> </td> </tr> <tr style="background-color: #E4E0D6; height: 138px;"> <td align="center" colspan="5"> <a href="https://www.johnsonbank.com/Business/Banking/Business-Mobile-Banking" title=""><img src="https://app.subscribermail.com/images/pp/56502935/2017_eNewsletters/bizmobilebanking.png" width="600" height="138" alt="Marketing Banner Ad" border="0" /></a> </td> </tr> <tr align="center" style="background-color: #FFFFFF;"> <td align="center" colspan="5"> <a href="https://www.johnsonbank.com/Resources/Articles/" title="Articles"><img src="http://app.subscribermail.com/images/pp/56502968/2014_Branding/articles-email.png" alt="Articles Logo" width="200" height="144" border="0" /></a><a href="https://www.johnsonbank.com/Resources/Calculators/" title="Calculators"><img src="http://app.subscribermail.com/images/pp/56502968/2014_Branding/calculators-email.png" alt="Calculators Logo" width="200" height="144" border="0" /></a><a href="https://www.johnsonbank.com/Resources/Events/" title="Events"><img src="http://app.subscribermail.com/images/pp/56502968/2014_Branding/events-email.png" alt="Events Logo" width="200" height="144" border="0" /></a> </td> </tr> <tr style="background-color: #82204C; height: 15px;"> <td align="center" colspan="5"></td> </tr> <tr align="center" style="background-color: #72113D;"> <td align="center" width="220" style="background-color: #72113D; width: 220px; padding: 10px 0px 10px 0px; display: inline-block;" colspan="1"> <table align="center" width="220"> <tr> <td align="center" style="padding-bottom: 5px;"><a href="https://www.johnsonbank.com/" title="Johnson Bank"><img 
src="https://app.subscribermail.com/images/pp/56502935/2017_eNewsletters/JB_HZ.png" width="100" alt="Johnson Bank Logo" border="0" /></a></td> </tr> <tr> <td align="center"> <a href="http://www.linkedin.com/company/johnson-bank/" title="Johnson Bank LinkedIn"><img src="http://app.subscribermail.com/images/pp/56502968/2014_Branding/Linkedin_001.jpg" width="32" alt="LinkedIn Logo" border="0" /></a>&nbsp; <a href="https://www.facebook.com/johnsonbank/" title="Johnson Bank Facebook"><img src="http://app.subscribermail.com/images/pp/56502968/2014_Branding/facebook_001.jpg" width="32" alt="FB Logo" border="0" /></a>&nbsp; <a href="https://twitter.com/JohnsonBank/" title="Johnson Bank Twitter"><img src="http://app.subscribermail.com/images/pp/56502968/2014_Branding/Twitter_001.jpg" width="32" alt="Twitter Logo" border="0" /></a>&nbsp; <a href="https://www.youtube.com/channel/UCODxjMU3HSr7G32b5JAYwKQ/" title="Johnson Bank YouTube Channel"><img src="http://app.subscribermail.com/images/pp/56502968/2014_Branding/YouTube_001.jpg" width="32" alt="YouTube Logo" border="0" /></a> </td> </tr> </table> </td> <td align="center" width="170" style="background-color: #72113D; width: 170px; padding: 10px 0px 10px 0px; display: inline-block;" colspan="1"> <table align="center" width="170"> <tr> <td align="center" style="padding-bottom: 5px;"> <a href="https://www.johnsonins.com/" title="Johnson Insurance/" target="blank"><img src="https://app.subscribermail.com/images/pp/56502935/2017_eNewsletters/JINS_HZ.png" width="100" alt="Johnson Insurance Logo" border="0" /></a> </td> </tr> <tr> <td align="center"> <a href="http://www.linkedin.com/company/johnson-insurance/" title="Johnson Insurance LinkedIn"><img src="http://app.subscribermail.com/images/pp/56502968/2014_Branding/Linkedin_001.jpg" width="32" alt="LinkedIn Logo" border="0" /></a>&nbsp; <a href="https://www.facebook.com/JohnsonInsuranceServicesLLC/" title="Johnson Insurance Facebook"><img 
src="http://app.subscribermail.com/images/pp/56502968/2014_Branding/facebook_001.jpg" width="32" alt="Johnson Insurance" border="0" /></a> </td> </tr> </table> </td> <td align="center" width="150" style="background-color: #72113D; width: 150px; padding: 10px 0px 10px 0px; display: inline-block;" colspan="1"> <table align="center" width="150"> <tr> <td align="center" style="padding-bottom: 5px;"> <a href="http://www.clearygulladvisors.com/" title="Cleary Gull Advisors" target="blank"><img src="https://app.subscribermail.com/images/pp/56502935/2017_eNewsletters/CGA_HZ.png" width="129" alt="CGA Logo" border="0" /></a> </td> </tr> <tr> <td align="center"> <a href="https://www.linkedin.com/company/cleary-gull/" title="Cleary Gull Advisors LinkedIn"><img src="http://app.subscribermail.com/images/pp/56502968/2014_Branding/Linkedin_001.jpg" width="32" alt="LinkedIn Logo" border="0" /></a> </td> </tr> </table> </td> </tr> <tr style="background-color: #82204C; font-family: open sans, san-serif; font-size: 7pt; color: #FFFFFF; display: inline-block;"> <td align="left" valign="top" width="40" style="background-color: #82204C; padding: 20px 0px 0px 20px; width: 40px; display: inline-block;" colspan="0"> <img src="https://app.subscribermail.com/images/pp/56502935/2017_eNewsletters/EHL.png" align="center" width="31" height="32" alt="Equal Housing Lender" border="0" /> </td> <td align="left" width="500" style="background-color: #82204C; font-family: open sans, san-serif; font-size: 7pt; color: #FFFFFF; padding: 10px 10px 10px 10px; width: 500px; display: inline-block;" colspan="5"> <strong>Johnson Bank, Member FDIC&nbsp;|&nbsp;Equal Housing Lender</strong><br />Insurance products are sold through Johnson Insurance Services, LLC and non&dash;depository investment products offered and sold through Johnson Bank and Cleary Gull Advisors, an SEC registered investment adviser, are not insured by the FDIC, not a deposit or other obligation of, or guaranteed by, the bank. 
Non&dash;depository investment products are subject to investment risks, including possible loss of the principal amount invested.<br /><br />Johnson Bank, Johnson Insurance and Cleary Gull Advisors are affiliates and subsidiaries of Johnson Financial Group. </td> </tr> <tr style="background-color: #E4E0D6; font-family: 'open sans', san-serif; font-size: 7pt; color: #454646;"> <td align="left" style="padding: 10px 0px 10px 0px; display: inline-block;" colspan="5"> <a href="http://app.subscribermail.com/unsub.cfm?tempid=%_tempid%&mailid=%_mailid%" title="Unsubscribe" style="color: #454646; text-decoration: none;">Unsubscribe or update your email address</a>&nbsp;|&nbsp;555 Main Street&nbsp;|&nbsp;Racine, WI 53403 </td> </tr> </table> </td> </tr> </table> </body> </html> A: If you're talking about the line just above your footer being slightly narrower than the rest, all you need to do is ensure a constant width for the rows of 600px by adding width: 600px inline to the relevant <tr> element: <!doctype html> <html> <head> <title>JFG eNewsletter</title> </head> <body> <table width="100%" style="background-color: #E4E0D6; padding: 0px;" border="0" cellpadding="0"> <tr> <td style="background-color: #E4E0D6; padding: 20px 0px 0px 0px;"> <table align="center" style="background-color: #FFFFFF; width: 600px; max-width: 600px; padding: 0px;" cellspacing="0" cellpadding="0"> <tr style="background-color: #72113D; width: 600px;"> <td align="left" width="60" style="background-color: #72113D; padding: 20px 20px 20px 20px; width: 60px; display: inline-block;" colspan="0"> <a href="https://www.johnsonbank.com/" title="johnsonbank.com"><img src="http://app.subscribermail.com/images/pp/56502968/2014_Branding/J-white.png" align="center" width="60" alt="JFG Logo" border="0" /></a> </td> <td align="left" width="450" style="background-color: #72113D; font-family: 'open sans', san-serif; color: #FFFFFF; font-size: 24px; font-weight: bold; padding: 0px 10px 0px 10px; width: 450px; display: 
inline-block;" colspan="5"> Make the Most of Your Business<br /> <a style="font-size: 14px; color: #FFFFFF; text-decoration: none; font-weight: normal;" href="https://www.johnsonbank.com/Business/Banking">BANKING</a><span style="font-size: 14px;">&nbsp;|&nbsp;</span> <a style="font-size: 14px; color: #FFFFFF; text-decoration: none; font-weight: normal;" href="https://www.johnsonbank.com/Business/Wealth">WEALTH</a><span style="font-size: 14px;">&nbsp;|&nbsp;</span> <a style="font-size: 14px; color: #FFFFFF; text-decoration: none; font-weight: normal;" href="https://www.johnsonbank.com/Business/Insurance">INSURANCE</a> </td> </tr> <tr style="background-color: #82204C; height: 15px;"> <td align="center" colspan="5"></td> </tr> <tr style="width: 600px; max-width: 600px;"> <td align="center" colspan="5"> <a href="" title=""><img src="https://app.subscribermail.com/images/pp/56502935/2017_eNewsletters/email_heo_ecoforum.png" alt="" width="600" height="200" border="0" /></a> </td> </tr> <tr style="background-color: #E4E0D6; height: 15px;"> <td align="center" colspan="5"></td> </tr> <tr style="font-family: 'open sans', san-serif; font-size: 16px; color: #454646;"> <td align="left" style="padding: 20px 20px 10px 20px;" colspan="5"> <a href="https://www.johnsonbank.com/" title="" style="text-decoration: none; color: #72113D; font-size: 21px; font-weight: bold;">TITLE</a> <br />BODY TEXT HERE...&nbsp;<a href="https://www.johnsonbank.com/" title="" style="text-decoration: none; color: #4583A6; font-weight: bold;">Read more</a> </td> </tr> <tr style="font-family: 'open sans', san-serif; font-size: 16px; color: #454646;"> <td align="left" style="padding: 20px 20px 10px 20px;" colspan="5"> <a href="https://www.johnsonbank.com/" title="" style="text-decoration: none; color: #72113D; font-size: 21px; font-weight: bold;">TITLE</a> <br />BODY TEXT HERE...&nbsp;<a href="https://www.johnsonbank.com/" title="" style="text-decoration: none; color: #4583A6; font-weight: bold;">Read 
more</a> </td> </tr> <tr style="font-family: 'open sans', san-serif; font-size: 16px; color: #454646;"> <td align="left" style="padding: 20px 20px 20px 20px;" colspan="5"> <a href="https://www.johnsonbank.com/" title="" style="text-decoration: none; color: #72113D; font-size: 21px; font-weight: bold;">TITLE</a> <br />BODY TEXT HERE...&nbsp;<a href="https://www.johnsonbank.com/" title="" style="text-decoration: none; color: #4583A6; font-weight: bold;">Read more</a> </td> </tr> <tr> <td align="center" colspan="5"> <a href="https://www.johnsonbank.com/Resources/Articles?sortBy=&filterBy=%7B0b834bec-aecc-4625-b69e-c8a83f0eeabc%7D&selectItem=%7B0b834bec-aecc-4625-b69e-c8a83f0eeabc%7D"><img src="http://app.subscribermail.com/images/pp/56502935/2016_eNewsletters/WeeklyInvestmentCommentary.png" width="600" height="87" alt="" border="0" /></a> </td> </tr> <tr style="background-color: #E4E0D6; height: 138px;"> <td align="center" colspan="5"> <a href="https://www.johnsonbank.com/Business/Banking/Business-Mobile-Banking" title=""><img src="https://app.subscribermail.com/images/pp/56502935/2017_eNewsletters/bizmobilebanking.png" width="600" height="138" alt="Marketing Banner Ad" border="0" /></a> </td> </tr> <tr align="center" style="background-color: #FFFFFF;"> <td align="center" colspan="5"> <a href="https://www.johnsonbank.com/Resources/Articles/" title="Articles"><img src="http://app.subscribermail.com/images/pp/56502968/2014_Branding/articles-email.png" alt="Articles Logo" width="200" height="144" border="0" /></a> <a href="https://www.johnsonbank.com/Resources/Calculators/" title="Calculators"><img src="http://app.subscribermail.com/images/pp/56502968/2014_Branding/calculators-email.png" alt="Calculators Logo" width="200" height="144" border="0" /></a> <a href="https://www.johnsonbank.com/Resources/Events/" title="Events"><img src="http://app.subscribermail.com/images/pp/56502968/2014_Branding/events-email.png" alt="Events Logo" width="200" height="144" border="0" /></a> 
</td> </tr> <tr style="background-color: #82204C; height: 15px;"> <td align="center" colspan="5"></td> </tr> <tr align="center" style="background-color: #72113D;"> <td align="center" width="220" style="background-color: #72113D; width: 220px; padding: 10px 0px 10px 0px; display: inline-block;" colspan="1"> <table align="center" width="220"> <tr> <td align="center" style="padding-bottom: 5px;"> <a href="https://www.johnsonbank.com/" title="Johnson Bank"><img src="https://app.subscribermail.com/images/pp/56502935/2017_eNewsletters/JB_HZ.png" width="100" alt="Johnson Bank Logo" border="0" /></a> </td> </tr> <tr> <td align="center"> <a href="http://www.linkedin.com/company/johnson-bank/" title="Johnson Bank LinkedIn"><img src="http://app.subscribermail.com/images/pp/56502968/2014_Branding/Linkedin_001.jpg" width="32" alt="LinkedIn Logo" border="0" /></a>&nbsp; <a href="https://www.facebook.com/johnsonbank/" title="Johnson Bank Facebook"><img src="http://app.subscribermail.com/images/pp/56502968/2014_Branding/facebook_001.jpg" width="32" alt="FB Logo" border="0" /></a>&nbsp; <a href="https://twitter.com/JohnsonBank/" title="Johnson Bank Twitter"><img src="http://app.subscribermail.com/images/pp/56502968/2014_Branding/Twitter_001.jpg" width="32" alt="Twitter Logo" border="0" /></a>&nbsp; <a href="https://www.youtube.com/channel/UCODxjMU3HSr7G32b5JAYwKQ/" title="Johnson Bank YouTube Channel"><img src="http://app.subscribermail.com/images/pp/56502968/2014_Branding/YouTube_001.jpg" width="32" alt="YouTube Logo" border="0" /></a> </td> </tr> </table> </td> <td align="center" width="170" style="background-color: #72113D; width: 170px; padding: 10px 0px 10px 0px; display: inline-block;" colspan="1"> <table align="center" width="170"> <tr> <td align="center" style="padding-bottom: 5px;"> <a href="https://www.johnsonins.com/" title="Johnson Insurance/" target="blank"><img src="https://app.subscribermail.com/images/pp/56502935/2017_eNewsletters/JINS_HZ.png" width="100" 
alt="Johnson Insurance Logo" border="0" /></a> </td> </tr> <tr> <td align="center"> <a href="http://www.linkedin.com/company/johnson-insurance/" title="Johnson Insurance LinkedIn"><img src="http://app.subscribermail.com/images/pp/56502968/2014_Branding/Linkedin_001.jpg" width="32" alt="LinkedIn Logo" border="0" /></a>&nbsp; <a href="https://www.facebook.com/JohnsonInsuranceServicesLLC/" title="Johnson Insurance Facebook"><img src="http://app.subscribermail.com/images/pp/56502968/2014_Branding/facebook_001.jpg" width="32" alt="Johnson Insurance" border="0" /></a> </td> </tr> </table> </td> <td align="center" width="150" style="background-color: #72113D; width: 150px; padding: 10px 0px 10px 0px; display: inline-block;" colspan="1"> <table align="center" width="150"> <tr> <td align="center" style="padding-bottom: 5px;"> <a href="http://www.clearygulladvisors.com/" title="Cleary Gull Advisors" target="blank"><img src="https://app.subscribermail.com/images/pp/56502935/2017_eNewsletters/CGA_HZ.png" width="129" alt="CGA Logo" border="0" /></a> </td> </tr> <tr> <td align="center"> <a href="https://www.linkedin.com/company/cleary-gull/" title="Cleary Gull Advisors LinkedIn"><img src="http://app.subscribermail.com/images/pp/56502968/2014_Branding/Linkedin_001.jpg" width="32" alt="LinkedIn Logo" border="0" /></a> </td> </tr> </table> </td> </tr> <tr style="background-color: #82204C; font-family: open sans, san-serif; font-size: 7pt; color: #FFFFFF; display: inline-block; width: 600px;"> <td align="left" valign="top" width="40" style="background-color: #82204C; padding: 20px 0px 0px 20px; width: 40px; display: inline-block;" colspan="0"> <img src="https://app.subscribermail.com/images/pp/56502935/2017_eNewsletters/EHL.png" align="center" width="31" height="32" alt="Equal Housing Lender" border="0" /> </td> <td align="left" width="500" style="background-color: #82204C; font-family: open sans, san-serif; font-size: 7pt; color: #FFFFFF; padding: 10px 10px 10px 10px; width: 500px; 
display: inline-block;" colspan="5"> <strong>Johnson Bank, Member FDIC&nbsp;|&nbsp;Equal Housing Lender</strong><br />Insurance products are sold through Johnson Insurance Services, LLC and non&dash;depository investment products offered and sold through Johnson Bank and Cleary Gull Advisors, an SEC registered investment adviser, are not insured by the FDIC, not a deposit or other obligation of, or guaranteed by, the bank. Non&dash;depository investment products are subject to investment risks, including possible loss of the principal amount invested.<br /><br />Johnson Bank, Johnson Insurance and Cleary Gull Advisors are affiliates and subsidiaries of Johnson Financial Group. </td> </tr> <tr style="background-color: #E4E0D6; font-family: 'open sans', san-serif; font-size: 7pt; color: #454646;"> <td align="left" style="padding: 10px 0px 10px 0px; display: inline-block;" colspan="5"> <a href="http://app.subscribermail.com/unsub.cfm?tempid=%_tempid%&mailid=%_mailid%" title="Unsubscribe" style="color: #454646; text-decoration: none;">Unsubscribe or update your email address</a>&nbsp;|&nbsp;555 Main Street&nbsp;|&nbsp;Racine, WI 53403 </td> </tr> </table> </td> </tr> </table> </body> </html> Hope this helps! :)
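For quick reference, since the full template above is long: the only structural change from the question's markup is the explicit inline width added to the disclaimer row (all other selectors and colors are unchanged):

```html
<!-- Before the fix, this row had no explicit width, so it rendered narrower than 600px.
     (Note: "san-serif" is reproduced as in the original; the correct CSS family name is "sans-serif".) -->
<tr style="background-color: #82204C; font-family: open sans, san-serif; font-size: 7pt; color: #FFFFFF; display: inline-block; width: 600px;">
```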
Q: Tag not generating link in Visualforce email template I have a Visualforce email template which is as below: <messaging:emailTemplate subject="Pricing tool ticketing requested {!relatedTo.S_TicketingTool__r.S_Price__r.Name} {!relatedTo.S_TicketingTool__r.P_GateType__c}" recipientType="User" relatedToType="pkg__Approvals__c"> <messaging:htmlEmailBody > <h4>{!relatedTo.S_TicketingTool__r.P_GateType__c} has been requested for Pricing <br/>{!relatedTo.S_TicketingTool__r.S_Price__r.S_TicketingPriceIdentifier__c} - {!relatedTo.S_TicketingTool__r.S_Price__r.Name}.<br/>. <br/> Please review and approve the request here: <br/> <apex:outputLink value="{!relatedTo.S_TicketingTool__r.Id}" id="theLink"> {!relatedTo.S_TicketingTool__r.Name} </apex:outputLink> </h4> </messaging:htmlEmailBody> </messaging:emailTemplate> I am expecting a link from apex:outputLink; however, it generates the email as below: 12345 - Pricing details. Please review and approve the request here: [a513O000000U89UQA0]12345 - Pricing details - 05.01.2023 Any help to get the link would be highly appreciated. On a side note, this was working fine a few days ago in my sandbox and was tested; it stopped working only very recently. A: The <apex:outputLink> value= has to render to a valid https:// address for the SFDC record defined by {!relatedTo.S_TicketingTool__r.Id} (which, by the way, is the same as {!relatedTo.S_TicketingTool__c}). Since this is an email template, you can't use relative addresses, as the user will be clicking the link outside of SFDC (from within their email client). So the value= attribute needs to be (see this answer): value="{!LEFT($Api.Partner_Server_URL_560, FIND('.com/', $Api.Partner_Server_URL_560) + 3)}/{!relatedTo.S_TicketingTool__c}"
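Putting the pieces together, the full tag would look something like this (object and field names exactly as in the question; note the single quotes around '.com/' so the string literal does not terminate the value attribute):

```xml
<apex:outputLink id="theLink"
    value="{!LEFT($Api.Partner_Server_URL_560, FIND('.com/', $Api.Partner_Server_URL_560) + 3)}/{!relatedTo.S_TicketingTool__c}">
    {!relatedTo.S_TicketingTool__r.Name}
</apex:outputLink>
```

This renders an absolute link to the record, which keeps working when the recipient opens the email in an external mail client.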
A new generation of satellites zooms in on a familiar planet. (NASA Goddard Space Flight Center/MITI/ERSDAC/Jaros and U.S.-Japan Aster Science Team) They're up there now, scanning the planet at all wavelengths, taking the measure of its shifting seas, winds, and landforms. Earth-viewing satellites have been around for 40 years, but none like these. A new generation of remote sensing spacecraft has brought unprecedented clarity and coverage to the study of Earth from space, and we now live on a continuously monitored planet. Terra's zoom lens, the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER), captured this view of the Kunlun fault in northern Tibet last July. The image combines visible and infrared data and shows, among other details, the shadows of passing clouds. The fault line is marked by lines of vegetation, which appear red. (NASA Goddard Space Flight Center/MITI/ERSDAC/Jaros and U.S.-Japan Aster Science Team) Central Oregon's Cascade region shows the scars from widespread logging in this false-color image from the Japanese ASTER instrument, which is on Terra. This view combines red, shortwave infrared, and near-infrared light detected by the satellite. Snow-covered mountains to the east appear blue, forests are green, and clear-cut areas are orange-pink. ASTER, the only Terra sensor that can match Landsat 7's 15-meter resolution, is designed to study thermal (heat) emission and reflection from the land, yielding detailed maps of surface temperature. The maps enable scientists to investigate problems ranging from deforestation to urban growth to soil erosion. The satellite can "revisit" any target to detect change over time, with visit intervals varying from 4 to 16 days. ASTER scientists plan eventually to publish a single, cloud-free composite image showing the entire land surface of Earth.
(NASA GSFC/MITI/ERSDAC/Jaros and U.S.-Japan Aster Science Team) The living Earth is revealed in this image, compiled from data taken over a period of three years by the SeaWiFS (Sea-viewing Wide Field-of-view Sensor) instrument on the commercially owned OrbView-2 satellite. SeaWiFS detects the spectral signature of chlorophyll-bearing plankton, tiny marine organisms that are responsible for about half of Earth's primary biological production. Red areas in the ocean are highest in chlorophyll, yellow-green are intermediate, and blue-violet are the lowest. The ocean data represents a three-year average from September 1997 to August 2000. Land vegetation is based on data taken in July 1998, with dark green showing the areas of dense growth and yellow-brown showing the absence of plants. (SeaWiFS Project, NASA GSFC and ORBIMAGE) Because radar imagers can view Earth day or night, even through clouds, Canada's RADARSAT-1 needed only 18 days to produce this exquisitely detailed map of Antarctica. Compare that to how long it took to make the previous best cloud-free satellite map of the continent, assembled from pictures taken by weather satellites over the course of 13 years. The Radarsat images, gathered in October 1999, are being used by scientists to study previously unexplored features, including 500-mile-long ice streams flowing from the continent's interior. (Canadian Space Agency) Information on clouds is vital to understanding global climate change, so clouds are a primary target for NASA's Earth Observing System. In this view of the Great Lakes region, taken by Terra's Moderate Resolution Imaging Spectroradiometer (MODIS), cloud composition and altitude are revealed by how the clouds emit or reflect radiation. Pink areas in the false-color image show colder, higher clouds containing snow and ice, while green areas are lower clouds containing liquid water.
The versatile MODIS will fly on both of the first two large EOS platforms--Terra and the soon-to-be-launched Aqua--and is the workhorse for climate change research. MODIS extends and improves on measurements that have been made by two key weather satellite instruments, the Advanced Very High Resolution Radiometer and the Coastal Zone Color Scanner. The new sensor scans the entire surface of Earth every two days. Data from MODIS had already been put to wide use, from monitoring fires in the western United States to documenting the biological productivity of the world's forests. (Liam Gumley, University of Wisconsin-Madison/Terra Project) National Oceanic and Atmospheric Administration (NOAA) researchers assessing the health of shallow-water coral reefs in the Caribbean and Pacific were in the market for high-resolution pictures such as this view of tiny Baker Island, located 1,600 miles southwest of Hawaii (blue and yellow areas are reefs). So they turned to Colorado-based Space Imaging, owners of IKONOS, the world's only commercial satellite currently returning one-meter-resolution photos. The company's Washington operations director, Mark Brender, says Space Imaging had identified many uses for its close-up imagery, but never guessed that scientists would be using the pictures to look 90 feet underwater. "That wasn't in our business plan," he says. (spaceimaging.com)
But it was the commitment to a long-term data purchase by NASA Earth scientists that led to the spacecraft's being built and launched in the first place. (SeaWiFS Project, NASA/GSFC and Orbimage) The Landsat 7 view of Cape Canaveral, Florida, clearly shows the area's past and present launch pads, including space shuttle pad 39A, the rounded structure near the beach at top center. The first Landsat was launched in 1972, and its successors have been documenting Earth's changing surface ever since. The most recent in the series, Landsat 7, is easily the best. Not only are its pictures sold to scientists for a fraction of past prices, but the archive of captured scenes is much larger. Landsat 7 also doubles the sharpness of its predecessors, with 15-meter resolution in black and white. And for the first time, the data is precisely calibrated to other satellite and airborne data, which "makes us more objective than we've been in the past," according to geologist and remote sensing specialist Alexander Goetz of the University of Colorado. (USGS/Eros Data Center) The next big thing in remote sensing, hyperspectral sensors return data across a continuous spectrum subdivided into 200 or more channels--as compared to a handful of separate, selected bands for traditional satellites like Landsat. This "datacube" shows the amount of information contained in a single hyperspectral image of Pearl Harbor taken by AVIRIS (Airborne Visible/Infrared Imaging Spectrometer), an airplane-mounted instrument similar to the hyperspectral imager recently sent into orbit on NASA's Earth Observing 1 satellite. Each picture element--pixel--on the cube's face has its own spectrum, yielding a wealth of information on how the surface reflects or emits light. A slice through the cube in a plane parallel to the image would show the scene as it appears in a single narrow wavelength. (AVIRIS Project, JPL/CalTech) It used to be that only airplanes could return overhead images this sharp.
But the view of downtown San Francisco was taken by the IKONOS satellite from an altitude of 423 miles. (Note the Transamerica pyramid building at the top center.) The computer-enhanced image adds four-meter-resolution color data to a one-meter-resolution black-and-white image to achieve the sharpness without sacrificing realism. Space Imaging, which owns IKONOS, says that demand for the hi-res orbital photography is growing. Buyers have requested everything from photos of Mt. Ararat in Turkey (a team searching for signs of Noah's Ark) to pictures that a woman commissioned of her New York lake house. Apparently she wasn't daunted by the $1,000 minimum for a targeted IKONOS "scene." (spaceimaging.com) In the 1980s NASA conceived of a grand "Mission to Planet Earth"—a fleet of large satellite platforms, each carrying a suite of sensors that together would provide a long-term record of environmental change. It didn't turn out that way, mostly due to the multibillion-dollar cost. But a less expensive Earth Observing System (EOS) is reaching orbit, with the first major component launched in 1999. Terra, as it's called, retains the original concept's Swiss army knife approach to Earth observation. Each of the five onboard sensors has its own specialty. A versatile spectrometer called MODIS takes regional-scale pictures in 36 wavelengths. The multi-angle MISR has nine separate cameras—four pointing forward, one straight down, and four looking backward—so that hard-to-see phenomena like atmospheric haze can be photographed in different angles of illumination. ASTER, the one Japanese instrument on board, is Terra's zoom lens; its high resolution is suitable for a range of tasks, from studying glaciers to tracking changes in land use. MOPITT is tuned to the infrared signatures of pollutants in the lower atmosphere, and CERES measures global radiation to help answer the critical question of what role clouds play in global warming or cooling.
Documenting global change is in fact the main quest of Terra and the rest of the new satellite sensors. They watch for signs that coral reefs are dying, that snowpacks are melting, that forests are disappearing, or shorelines are shifting. More importantly, they collect fundamental data—trillions of bytes' worth—revealing the complex interplay of land, air, ice, and water driving our planet's weather. Terra will be followed later this year by the second large EOS platform, Aqua, which will focus on the atmosphere and ocean. By the end of 2003, some two dozen EOS satellites of varying size and scope will be in space. Add the data from non-EOS projects, like the Shuttle Radar Topography Mission, which last year mapped 80 percent of Earth's surface in 3-D, and Earth scientists are happily swamped with information. "These days there's so much data around that you can't possibly look at it all," says Alexander Goetz, who heads the University of Colorado's Center for the Study of Earth from Space. More is on the way. With the launch of the EO-1 (Earth Observing 1) technology-testing satellite in November, NASA has made its first foray into space-based hyperspectral imagery, which sees in more than 200 wavelengths instead of the few bands covered by older satellites like Landsat, and lets scientists better characterize surface materials based on the way they reflect or absorb light. The first commercial space images with one-meter resolution have already hit the market, with more sharp-eyed competitors on the way. For students of planet Earth, the view is getting better all the time. Browse images in the Photo Gallery at right.
# Asymptotic likelihood of chaos for smooth families of circle maps - Mathematics > Dynamical Systems

Abstract: We consider a smooth two-parameter family $f_{a,L}\colon\theta\mapsto\theta+a+L\Phi(\theta)$ of circle maps with a finite number of critical points. For sufficiently large $L$ we construct a set $A_L^{\infty}$ of $a$-values of positive Lebesgue measure for which the corresponding $f_{a,L}$ exhibits an exponential growth of derivatives along the orbits of the critical points. Our construction considerably improves the previous one of Wang and Young for the same class of families, in that the following asymptotic estimate holds: the Lebesgue measure of $A_L^{\infty}$ tends to full measure in $a$-space as $L$ tends to infinity.

Author: Hiroki Takahasi

Source: https://arxiv.org/
\section{Introduction} In this paper, we investigate a central object of study in the area of additive and combinatorial number theory known as a \emph{Sidon set}. Thus, given natural numbers $g,h$, we say that a finite set $A$ of natural numbers is a $B^{+}_h[g]$ set if for any $n \in \mathbb{N}$, the number of distinct solutions to the equation \begin{equation} \label{add4} n = a_1 + \dots + a_h \end{equation} with $a_1, \dots, a_h \in A$ is at most $g$. Here, we consider two such solutions to be the same if they differ only in the ordering of the summands. We define a $B^{\times}_{h}[g]$ set similarly by replacing the additive equation \eqref{add4} with its multiplicative analogue $n = a_1 \dots a_h$. When $h=2$ and $g=1$, we refer to $B^+_{h}[g]$ and $B^{\times}_h[g]$ sets as Sidon and multiplicative Sidon sets respectively. \par Sidon sets and their many generalisations have been analysed from many different perspectives, with a classical problem concerning the size of the largest $B_{h}^{+}[g]$ subset of $[N] := \{1, \dots, N\}$. The $h=2, g=1$ case of this problem was first studied by Erd\H{o}s and Tur\'{a}n \cite{ET1941}, who proved the first upper bounds, and they further noted that one may obtain lower bounds of the same order from work of Singer \cite{Si1938}. In particular, it was shown that the largest Sidon subset $B$ of $[N]$ satisfies \[ N^{1/2} < |B| < N^{1/2} + N^{1/4} + 1, \] for infinitely many choices of $N$, with the upper bound recorded in \cite{Li1969}. While there were subsequent improvements to the constant term in the upper bound, it was only very recently that the $N^{1/4}$ term was improved to $0.998N^{1/4}$ for all sufficiently large $N$ by Balogh, F\"{u}redi and Roy \cite{BFS2021}. \par The case when $h \geq 3$ appears to be significantly harder. For example, a simple counting argument implies that the largest $B_{h}^+[g]$ subset $B$ of $[N]$ satisfies $|B| \leq (g h! 
h)^{1/h} N^{1/h},$ and despite a rich history of work surrounding this problem, see, for instance, \cite{Ci2001} and the references therein, the best known upper bound for this general case is of the shape \begin{equation} \label{erds2} |B| \leq ( h! \sqrt{\pi h/2} (1 + O(h^{-1/3})) g)^{1/h} N^{1/h} , \end{equation} given by Johnston, Tait and Timmons in \cite{JTT2021}. On the other hand, the best known lower bounds are of the form \[ |B| \geq (1- o(1)) g^{1/h} N^{1/h}, \] see, for example, \cite{JTT2021}. Thus, there is a large gap between the known upper and lower bounds in the case of general $h,g \in \mathbb{N}$. In contrast, finding large $B_{h}^{\times}[1]$ sets in $[N]$ is relatively elementary, since the set of prime numbers smaller than $N$ forms such a set. The fact that this example is essentially sharp was shown by Erd\H{o}s \cite{Er1969}, who proved that the largest multiplicative Sidon set $C \subseteq [N]$ satisfies \[ N^{3/4} (\log N)^{-3/2} \ll |C| - \pi(N) \ll N^{3/4} (\log N)^{-3/2}, \] where $\pi(N)$ counts the number of prime numbers in $[N]$. The sizes of large $B_{h}^{\times}[1]$ sets have also been investigated for $h \geq 3$ and are known to be close to $\pi(N)$, see \cite{Pa2015} for more details. \par Hence, while the largest $B_{h}^{+}[1]$ sets in $[N]$ are hard to characterise and have size close to $N^{1/h}$, substantially larger $B_{h}^{\times}[1]$ sets in this interval are easier to find and have size roughly $(1 + o(1))N/\log N$. This discrepancy can be justified by noting that the set $[N]$ exhibits large amounts of additive structure but has relatively low multiplicative structure. This is reminiscent of a well-known conjecture of Erd\H{o}s and Szemer\'{e}di \cite{ES1983} which roughly states that large amounts of additive and multiplicative structure cannot simultaneously coexist in a given finite set $A$ of integers.
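The definitions above lend themselves to a direct computational check on small examples. The following brute-force sketch (purely illustrative, and not part of any argument in this paper) tests the $B_h^{+}[g]$ property by counting unordered representations:

```python
from itertools import combinations_with_replacement
from collections import Counter

def is_Bh_plus_g(A, h, g):
    """Return True if A is a B_h^+[g] set: every n has at most g
    representations n = a_1 + ... + a_h with a_i in A, where two
    representations differing only in the order of summands coincide."""
    # combinations_with_replacement enumerates unordered h-tuples
    counts = Counter(sum(t) for t in combinations_with_replacement(sorted(A), h))
    return all(v <= g for v in counts.values())

# {1, 2, 5, 11} is a Sidon set (h = 2, g = 1): all pairwise sums are distinct,
assert is_Bh_plus_g({1, 2, 5, 11}, h=2, g=1)
# while {1, 2, 3, 4} is not, since 1 + 4 = 2 + 3.
assert not is_Bh_plus_g({1, 2, 3, 4}, h=2, g=1)
```

An analogous check for the $B_h^{\times}[g]$ property is obtained by replacing the sum with a product.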
Thus, while one might always expect to find a $B_{h}^{+}[1]$ set $B$ and a $B_{h}^{\times}[1]$ set $C$ of size roughly $|A|^{1/h}$ inside $A$, motivated by the sum-product philosophy, one might even expect at least one of $B$ or $C$ to have size at least $|A|^{(1+ \delta_h)/h}$, for some $\delta_h >0$. \par This led to a recent problem posed by Klurman and Pohoata \cite{Po2021}, who conjectured that in the above setting, whenever $h=2$, one must have \[ \max \{ |B|, |C| \} \gg_{\delta} |A|^{1/2 + \delta}, \] for each $\delta \in (0,1/2)$. While it was shown by Green--Peluse (unpublished), Roche-Newton--Warren \cite{RNW2021} and Shkredov \cite{Sh2021} that this fails to hold for $\delta > 1/6$, our first result in this paper confirms the preceding heuristic in a strong sense for large values of $h$. \begin{theorem} \label{th3} Let $h$ be a natural number, let $A \subseteq \mathbb{Z}$ be a finite set, and let $B$ and $C$ be the largest $B_{h}^{+}[1]$ and $B_h^{\times}[1]$ sets in $A$ respectively. Then \[ \max \{ |B|, |C| \} \gg |A|^{\frac{\eta_h}{h}} ,\] where $\eta_h \gg (\log \log h)^{1/2 - o(1)}$. \end{theorem} Theorem $\ref{th3}$ already delivers sum-product estimates akin to those of Bourgain--Chang \cite{BC2005} in a straightforward manner by exploiting the fact that \[ |hA| + |A^{(h)}| \geq |hB| + |C^{(h)}| \gg_{h} |B|^h + |C|^h \gg_{h} |A|^{(\log \log h)^{1/2 - o(1)}}. \] Here $hA = \{ a_1 + \dots + a_h \ | \ a_1, \dots, a_h \in A\}$ and $A^{(h)} = \{ a_1 \dots a_h \ | \ a_1, \dots , a_h \in A\}$. In fact, a key ingredient in the proof of Theorem $\ref{th3}$ entails amalgamating probabilistic techniques with another generalisation of the many-fold sum-product estimate of Bourgain--Chang proved by the second author in \cite{Mu2021d} on the so-called \emph{low-energy decompositions}. We refer the reader to the discussion surrounding Lemma $\ref{mu1}$ for more details. 
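To fix ideas around the notation $hA$ and $A^{(h)}$ just introduced, one can compute small instances directly; the sketch below (illustrative only) exhibits the discrepancy between additive and multiplicative structure for the interval $\{1, \dots, 10\}$:

```python
from itertools import combinations_with_replacement
from math import prod

def iterated_sumset(A, h):
    """hA = {a_1 + ... + a_h | a_i in A}."""
    return {sum(t) for t in combinations_with_replacement(A, h)}

def iterated_productset(A, h):
    """A^{(h)} = {a_1 * ... * a_h | a_i in A}."""
    return {prod(t) for t in combinations_with_replacement(A, h)}

A = range(1, 11)  # the interval [10] = {1, ..., 10}
# 2A = {2, ..., 20} is as small as possible for a 10-element set,
assert len(iterated_sumset(A, 2)) == 19
# whereas A^{(2)} (the distinct entries of the 10 x 10 multiplication
# table) is considerably larger.
assert len(iterated_productset(A, 2)) == 42
```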
Thus, whenever $h$ is sufficiently large, Theorem $\ref{th3}$ implies that any set $A$ of integers contains either a $B_{h}^{+}[1]$ set or a $B_{h}^{\times}[1]$ set that has size significantly larger than $|A|^{1/h}$. For smaller values of $h$, we are also able to prove a similar result, which we present as follows. \begin{theorem} \label{th2} Let $A \subseteq \mathbb{Z}$ be a finite set and let $h \geq 3$. Then there exist $g \leq 30 h$ and $\delta_h \gg h^{-3}$ such that $A$ contains either a $B_{h}^+[g]$ set $B$ or a $B_{h}^\times[g]$ set $C$ satisfying \[ \max \{ |B| , |C| \} \gg |A|^{1/h + \delta_h} . \] \end{theorem} We remark that we have not carefully optimised the dependency of $g$ and $\delta_h$ on $h$ in the above result, since our main aim has been to show that $g \ll h$ and $\delta_h > 0$. Furthermore, we record the $h=2$ case of this problem as the following result. \begin{theorem} \label{th1} Let $A \subseteq \mathbb{Z}$ be a finite set. Then there exist $g \leq 31$ and $\delta >0$ such that $A$ contains either a $B_{2}^+[g]$ set $B$ or a $B_{2}^\times[g]$ set $C$ satisfying \[ \max \{ |B|, |C| \} \gg |A|^{1/2 + \delta} . \] \end{theorem} Theorem $\ref{th1}$ quantifies the only other affirmative result in this direction, the latter being due to Shkredov \cite{Sh2021}, who showed that Theorem $\ref{th1}$ holds for some $g \leq K$, where $K>0$ is a potentially very large constant. Moreover, the reader may observe that Theorems $\ref{th3}, \ref{th2}$ and $\ref{th1}$ combine to deliver the following Corollary. \begin{Corollary} There exists $g \in \mathbb{N}$ such that for every $h \in \mathbb{N}$ and for every finite set $A$ of integers, we have that the combined size of the largest $B_{h}^+[g]$ and $B^{\times}_h[g]$ sets in $A$ is at least $|A|^{(1 + \delta_h)/h}$, for some $\delta_h>0$.
\end{Corollary} \par Despite the above uniform bound, we have chosen to record the estimates in Theorems $\ref{th3}, \ref{th1}$ and $\ref{th2}$ separately because their proofs require very different inputs from arithmetic combinatorics and incidence geometry. In particular, Theorem $\ref{th1}$ relies on some estimates on the number of incidences between large sets of points and translates of some hyperbolas (see Theorem $\ref{hyp}$), while the proof of Theorem $\ref{th2}$ employs some bounds on the number of solutions to systems of simultaneous linear equations with repetitive terms (see Lemma $\ref{lim2}$) as well as a variety of tools from arithmetic combinatorics, including the Balog--Szemer\'edi--Gowers theorem \cite{Sch2015} and some sum-product estimates of Solymosi \cite{So2009}. It is worth noting that in Theorems $\ref{th3}$ and $\ref{th2}$, one cannot expect to obtain $B_{h}^+[1]$ or $B_{h}^{\times}[1]$ sets of size much larger than $|A|^{\frac{h+1}{2h}}$, and this is precisely the content of the following proposition, which generalises the aforementioned constructions of Green--Peluse, Roche-Newton--Warren and Shkredov \cite{RNW2021, Sh2021}. \begin{Proposition}\label{prop: construction} Let $h \geq 2$ and $N$ be natural numbers. Then there exists a set $A \subseteq \mathbb{N}$ with $|A| \gg N$ such that the largest $B_h^+[1]$ subset $B$ and the largest $B_h^\times[1]$ subset $C$ of $A$ satisfy \[ \max\{|B|, |C|\}\ll_h \begin{cases} |A|^{\frac{1}{2}+\frac{1}{2h+2}}\quad&\text{ when }h\text{ is even},\\ |A|^{\frac{1}{2}+\frac{1}{2h}}\quad&\text{ when }h\text{ is odd}. \end{cases} \] \end{Proposition} Thus, even for large values of $h$, there is a large gap between the lower bounds that are provided by Theorem $\ref{th3}$ and the upper bounds presented in Proposition $\ref{prop: construction}$. This naturally leads one to the following question.
\begin{Question} For each $h \in \mathbb{N}$, let $\Lambda_h$ be the supremum of all real numbers $\eta_h >0$ which satisfy the following statement. Any finite set $A$ of natural numbers contains either a $B_{h}^+[1]$ set or a $B_{h}^{\times}[1]$ set of size at least $C_{h,\eta_h} |A|^{\eta_h}$, for some absolute constant $C_{h,\eta_h}>0$. Find $\Lambda_h$. \end{Question} In particular, Theorem $\ref{th3}$ and Proposition $\ref{prop: construction}$ combine to imply the bound \[ h^{-1}(\log \log h)^{1/2 - o(1)} \ll \Lambda_h \leq 1/2 + 1/(2h+2) \] for even values of $h$, and it would be interesting to know whether $\Lambda_h \to 0$ as $h \to \infty$. \par We finish this section by providing a brief outline of our paper. We use \S2 to present two proofs of Proposition \ref{prop: construction} using graph-theoretic ideas and describe another construction in that direction which allows us to further highlight connections between Theorems \ref{th3}, \ref{th2}, \ref{th1} and the so-called low energy decompositions. In \S3, we state various preliminary definitions and lemmata that we will frequently use throughout our paper. Next, we employ \S4 to study estimates on the number of solutions to systems of equations with repetitive variables, which we will then use along with probabilistic methods in \S5 to prove that sets with low additive or multiplicative energies contain large additive or multiplicative Sidon sets. In \S6, we prove some incidence estimates that we require in the proof of Theorem $\ref{th1}$. We conclude our paper by recording the proofs of Theorems \ref{th3}, \ref{th2} and \ref{th1} in \S7. \textbf{Notation.} In this paper, we use Vinogradov notation, that is, we write $X \gg_{z} Y$, or equivalently $Y \ll_{z} X$, to mean $|X| \geq C_{z} |Y|$, where $C_{z}$ is some positive constant depending on the parameter $z$. We use $e(\theta)$ to denote $e^{2\pi i \theta}$ for every $\theta \in \mathbb{R}$.
Moreover, for every natural number $k$ and for every non-empty, finite set $Z$, we use $|Z|$ to denote the cardinality of $Z$, and we write $Z^k = \{ (z_1, \dots, z_k) \ | \ z_1, \dots, z_k \in Z\}$. For every natural number $n \geq 2$, we denote vectors as $\vec{x} = (x_1, x_2, \dots, x_n) \in \mathbb{R}^n$ and we write $\vec{x}^T$ for the transpose of $\vec{x}$. \textbf{Acknowledgements.} The authors would like to thank Ben Green for pointing us to this problem as well as for various helpful discussions. The second author would like to thank David Ellis and Misha Rudnev for useful comments. \section{Various constructions and low-energy decompositions} Our first goal in this section is to prove Proposition $\ref{prop: construction}$, and we commence by introducing some standard graph-theoretic definitions. Thus, given a graph $G$, we will use $V(G)$ and $E(G)$ to denote the vertex set of $G$ and the set of edges of $G$ respectively. Given a bipartite graph $H$ and integers $m,n$, the asymmetric bipartite Tur\'an number $\mathrm{ex}(m,n,H)$ of $H$ denotes the maximum number of edges in an $m$ by $n$ bipartite graph that does not contain $H$ as a subgraph. For our purposes, we will set $H = C_{2h}$ for some $h \in \mathbb{N}$, where $C_{2h}$ denotes a $2h$-cycle, that is, $V(C_{2h}) = \{v_1, \dots, v_{2h}\}$ and $E(C_{2h}) = \{ (v_1, v_2), (v_2, v_3) , \dots, (v_{2h}, v_1) \}$. We now record a result of Naor and Verstra\"{e}te \cite{NV05} on bounds for $\mathrm{ex}(m,n,H)$. \begin{lemma}\label{lem: cycle free} For $m\leq n$ and $h\geq 2$, we have that \[ \mathrm{ex}(m,n,C_{2h})\leq \begin{cases} (2h-3)((mn)^{\frac{h+1}{2h}}+m+n)\quad&\text{ if } h \text{ is odd};\\ (2h-3)(m^{\frac{h+2}{2h}}n^{\frac12}+m+n)\quad&\text{ if } h \text{ is even}. \end{cases} \] \end{lemma} With this in hand, we now present our proof of Proposition~\ref{prop: construction}.
\begin{proof}[Proof of Proposition~\ref{prop: construction}] We first consider the case when $h$ is even. Let $P$ be a set consisting of the first $N^{\frac{h}{2h+2}}$ primes, and let $Q$ be a set consisting of the next $N^{\frac{h+2}{2h+2}}$ primes, and so, $P\cap Q=\emptyset$. Set \[ A:=\{pq\mid p\in P, q\in Q\}. \] Then $|A|\gg N$, and by way of the Prime number theorem, we have $a\ll N (\log N)^2$ for each $a \in A$. We first estimate the size of the largest $B_h^+[1]$ subset $B$ of $A$, whereupon, it suffices to note that \[ |B|^h \ll_h |hB| \leq |hA| \ll_h N (\log N)^2 \] to prove the required bound. Next, suppose that $C\subseteq A$ is a $B_h^\times[1]$ set. We construct a bipartite graph $G$ with $V(G)=P\cup Q$, such that given $p \in P$ and $q \in Q$, we have $(p,q) \in E(G)$ if $pq \in C$. Note that for every $h$ distinct elements $p_1, \dots, p_h\in P$ and every $h$ distinct elements $q_1,\dots,q_h\in Q$, the following set \[ \{p_1q_1, p_2q_2, \dots, p_hq_h, p_1q_2, p_2q_3, \dots, p_{h-1}q_h, p_hq_1\} \] is not contained in $C$, since the product of the first $h$ elements is equal to the product of the last $h$ elements in the above set. This implies that our graph $G$ is $C_{2h}$-free, and so, we may apply Lemma~\ref{lem: cycle free} to deduce that the number of edges $|E(G)|$ of our graph satisfies \begin{align*} |E(G)|\leq \mathrm{ex}(|P|,|Q|,C_{2h})\ll_h N^{\frac{h+2}{2h+2}}= N^{\frac12+\frac{1}{2h+2}}. \end{align*} The desired bound then follows from noting that $|C| \leq |E(G)|$, which holds true because each element in $C$ has a unique representation as a product of two primes. \par Finally, the case when $h$ is odd follows from the fact that a $B_{s}[1]$ set is also a $B_{s-1}[1]$ set for every $s \geq 3$. \end{proof} We remark that instead of using prime numbers, one can obtain a similar result using powers of $2$, and we briefly sketch this as follows.
\begin{proof}[An alternative proof of Proposition $\ref{prop: construction}$] Let $h,n,M,N$ be even natural numbers such that $N= n^{h}$ and $M= n^{h+2}$. Moreover, let $P_h = \{1, 2, 2^2, \dots, 2^N\}$ and $Q_{h} = \{2^{N+1}, 2^{N+2}, \dots, 2^{N+M}\}$ be geometric progressions and let $A_h = P_h + Q_h$. Note that $|A_h| \gg_h |P_h||Q_h| \gg_h n^{2h+2}.$ \par Given a $B_h^+[1]$ set $B \subseteq A_h$, we may construct a bipartite graph $G$ on $P_h \times Q_h$ by letting $(p,q)\in E(G)$ if $p+q \in B$. This implies that $G$ must be $C_{2h}$-free, whence, we may apply Lemma \ref{lem: cycle free} to deduce that \[ |B| \ll_h |P_h|^{\frac{h+2}{2h}} |Q_h|^{1/2} + |Q_h| \ll_h n^{h+2} \ll_h |A_h|^{\frac{h+2}{2h+2}}. \] \par Similarly, let $C$ be a $B_{h}^{\times}[1]$ set in $A_h$. Here, we note that $A_h \subseteq P'_h \cdot Q'_h,$ where $P'_h = \{1, 2, 2^2, \dots, 2^N\}$ and $Q'_h = \{1 + 2^j \ | \ 1 \leq j \leq N+M\}$. We may now construct a bipartite graph $G'$ on $P_h' \times Q_h'$ by letting $(p',q') \in E(G')$ if $p'q' \in C$. As before, we may observe that the graph $G'$ is $C_{2h}$-free, whence, Lemma \ref{lem: cycle free} delivers the bound \[ |C| \ll_h |P'_h|^{\frac{h+2}{2h}} |Q'_h|^{1/2} + |Q'_h| \ll_h n^{h+2} \ll_h |A_h|^{\frac{h+2}{2h+2}}. \] Finally, the case when $h$ is odd follows trivially from the case when $h$ is even. \end{proof} We note that the set $A_2$ was recorded in work of Erd\H{o}s \cite[page $57$]{Er1983}, who used this set to prove a related conjecture on the size of the largest $B_{2}^+[1]$ set contained in a $B_{2}^+[2]$ set, and subsequently, Shkredov \cite{Sh2021} proved that the set $A_2$ also refutes the aforementioned conjecture of Klurman--Pohoata. We now outline a construction of Balog--Wooley \cite{BW2017}, which was later modified by Roche-Newton to show that there exist sufficiently large subsets $A$ of $\mathbb{N}$ such that the largest $B_{2}^+[1]$ and $B_{2}^\times[1]$ sets in $A$ have size at most $O(|A|^{3/4})$.
Thus, letting $h \geq 2$ and \[ A_{M,N} = \{ (2i+1)2^j \ | \ 1 \leq i \leq M \ \text{and} \ 1 \leq j \leq N \} , \] we will show that the largest $B_{h}^+[g]$ and $B^\times_h[g]$ subsets of $A_{N,N}$ have size at most $O_{g,h}(N^{\frac{h+1}{h}})$. A straightforward application of the pigeonhole principle allows us to deduce that any subset $B \subseteq A_{N,N}$ satisfying $|B| \geq 2gh!hN^{\frac{h+1}{h}}$ contains at least $gh!hN^{1/h}$ elements of $2^{j+1} \cdot [N] + 2^j$ for some $j \in \mathbb{N}$, and so, $B$ cannot be a $B_{h}^+[g]$ set due to the fact that $\eqref{erds2}$ holds true. On the other hand, any $B_h^{\times}[g]$ set $C \subseteq A_{N,N}$ satisfies \[ |C|^h \leq g h! |A_{N,N}^{(h)}| \leq gh!h N^{h+1}, \] and so, we are done. \par In fact, this highlights another connection between Theorem $\ref{th3}$ and the aforementioned low energy decompositions, where the latter are statements entailing partitioning of sets $A$ as $A= B\cup C$, where $B$ and $C$ have small amounts of additive and multiplicative structure respectively. In order to present estimates surrounding this topic, we first note some definitions, and thus, given $s \in\mathbb{N}$ and some finite set $A \subseteq \mathbb{R}$, we define $E_{s,2}(A)$ and $M_{s,2}(A)$ to count the number of solutions to the equations \[ x_1 + \dots + x_s = x_{s+1} + \dots + x_{2s} \ \text{and} \ x_1 \dots x_s = x_{s+1} \dots x_{2s} \] respectively, with $x_1, \dots, x_{2s} \in A$. With this in hand, we now present \cite[Corollary $1.3$]{Mu2021d}. \begin{lemma} \label{mu1} Let $s$ be a natural number and let $A$ be a finite set of integers. Then $A$ may be written as $A = B \cup C$ for disjoint sets $B,C$ such that \[ E_{s,2}(B) \ll_{s} |B|^{2s - \eta_s} \ \text{and} \ M_{s,2}(C) \ll_{s} |C|^{2s - \eta_s}, \] where $\eta_s \geq D (\log \log s)^{1/2} (\log \log \log s)^{-1/2},$ for some absolute constant $D >0$.
\end{lemma} As previously mentioned, this forms a key ingredient in the proof of Theorem $\ref{th3}$, and in fact, this also delivers sum-product estimates akin to the work of Bourgain--Chang \cite{BC2005} in a straightforward manner. In particular, applying Cauchy's inequality, we find that \begin{align*} |sA| + |A^{(s)}| & \geq |sB| + |C^{(s)}| \geq |B|^{2s}E_{s,2}(B)^{-1} + |C|^{2s} M_{s,2}(C)^{-1} \gg_{s} |B|^{\eta_s} + |C|^{\eta_s} \gg_{s} |A|^{\eta_s}, \end{align*} where $\eta_s \gg (\log \log s)^{1/2 - o(1)}$. \par Furthermore, it was noted by Balog--Wooley \cite{BW2017} that sets of the form $A_{M,N}$ restrict the power saving one can obtain in results akin to Lemma $\ref{mu1}$. More specifically, they showed that any subset $B \subseteq A_{N^2,N}$ with $|B| \geq N^3/2$ satisfies \[ E_{s,2}(B) \gg_{s} |B|^{s + (s-1)/3} \ \text{and} \ M_{s,2}(B) \gg_{s} |B|^{s + (s-1)/3}. \] \section{Preliminary Lemmata} Let $s,k$ be natural numbers and let $A$ be some finite, non-empty set of real numbers. For each $n \in \mathbb{R}$, we denote \[ r_{s}(A;n) = \# \{ (a_1, \dots, a_s) \in A^s \ | \ a_1 + \dots + a_s = n\} \] and \[ m_{s}(A;n) = \# \{ (a_1, \dots, a_s) \in A^s \ | \ a_1 \dots a_s = n\} . \] These have a natural connection to counting solutions to additive and multiplicative equations, and in particular, writing $E_{s,k}(A) = \sum_{n \in sA} r_{s}(A;n)^k,$ we see that $E_{s,k}(A)$ counts the number of solutions to the system of equations \[ a_1 + \dots + a_s = a_{s+1} + \dots + a_{2s} = \dots = a_{(k-1)s + 1} + \dots + a_{ks}, \] with $a_1, \dots, a_{ks} \in A$. Similarly, we define $M_{s,k}(A) = \sum_{n \in A^{(s)}} m_{s}(A;n)^k,$ wherein, we note that $M_{s,k}(A)$ counts the number of solutions to the system of equations \[ a_1 \dots a_s = a_{s+1} \dots a_{2s} = \dots = a_{(k-1)s + 1} \dots a_{ks}, \] with $a_1, \dots, a_{ks} \in A$.
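The energies $E_{s,k}(A)$ and $M_{s,k}(A)$ defined above can be computed by brute force for small sets, which may help the reader fix ideas (an illustrative sketch, not used in any of the proofs):

```python
from itertools import product
from collections import Counter
from math import prod

def energy(A, s, k, op=sum):
    """Brute-force E_{s,k}(A) (op=sum) or M_{s,k}(A) (op=prod): the number
    of tuples (a_1, ..., a_{ks}) in A^{ks} whose k consecutive blocks of
    length s all have the same sum (respectively, product)."""
    # r_s(A; n), resp. m_s(A; n), as a Counter indexed by n
    r = Counter(op(t) for t in product(A, repeat=s))
    # E_{s,k}(A) = sum_n r_s(A; n)^k
    return sum(v ** k for v in r.values())

A = [1, 2, 3, 4]
# The interval A carries more additive than multiplicative structure:
assert energy(A, s=2, k=2) == 44           # E_{2,2}(A)
assert energy(A, s=2, k=2, op=prod) == 32  # M_{2,2}(A)
```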
\par It is worth noting some straightforward properties of the representation function $r_{s}(A; \cdot)$ and its various moments. In particular, we have \[ \sup_{n \in sA} r_{s}(A;n) \leq |A|^{s-1} \ \text{and} \ \sum_{n \in sA} r_{s}(A;n) = |A|^s, \] whence, \[ E_{s,k}(A) \leq (\sup_{n \in sA} r_{s}(A;n) )^{k-1} \sum_{n \in sA} r_{s}(A;n) \leq |A|^{sk - k + 1}. \] There are some stronger inequalities that one can obtain between these quantities, and we record some of these as presented in \cite[Lemmata $3.1$ and $3.2$]{Mu2021d}. \begin{lemma} \label{awk} Let $s, l, k$ be natural numbers such that $l <s$ and let $A \subseteq (0, \infty)$ be a finite set. Then \[ E_{s,2}(A) \leq |A|^{2s - 2l} E_{l,2}(A), \ \text{and} \ M_{s,2}(A) \leq |A|^{2s - 2l} M_{l,2}(A) . \] Similarly, for all finite sets $A_1, \dots, A_{2s} \subseteq (0, \infty)$, we have \[ \sum_{a_1 \in A_1, \dots, a_{2s} \in A_{2s}} \mathds{1}_{a_1 + \dots + a_s = a_{s+1} + \dots + a_{2s}} \leq E_{s,2}(A_1)^{1/2s} \dots E_{s,2}(A_{2s})^{1/2s}. \] Finally, when $s$ is even, we have \[ \sup_{n \in sA} r_{s}(A; n) \leq E_{s/2,2}(A). \] \end{lemma} As previously mentioned, our proof of Theorem $\ref{th2}$ will employ various tools from arithmetic combinatorics, the foremost being the following inequality proven by Solymosi \cite{So2009}. \begin{lemma} \label{so1} Let $A \subseteq (0 , \infty)$ be a finite set such that $|A| \geq 2$. Then \[ M_{2,2}(A) \ll |A+A|^2 \log |A|. \] \end{lemma} Our next tool of choice will be the Balog--Szemer\'edi--Gowers theorem, as presented in \cite{Sch2015}. \begin{lemma} \label{bsg5} Let $A$ be a finite set of real numbers and let $K \geq 1$ be a real number. If $E_{2,2}(A) \geq |A|^{3}/K$, then there exists $A' \subseteq A$ with $|A'| \gg |A| / K$ such that \[ |A' - A'| \ll K^4 |A'| . \] \end{lemma} We will also use the Pl{\"u}nnecke--Ruzsa theorem to convert the above conclusion concerning difference sets to estimates on sumsets, and so, we record this below.
\begin{lemma} \label{prineq} Let $A, B$ be finite subsets of some additive abelian group $G$. If $|A+B| \leq K|A|$, then for all non-negative integers $m,n$, we have $$ |mB - nB| \leq K^{m+n}|A|.$$ \end{lemma} We will also be utilising incidence geometric techniques in the proof of Theorem $\ref{th1}$ and in order to present these, we introduce some further notation. Thus, given $\vec{u} \in \mathbb{R}^3$, we define the M\"{o}bius transformation $M_{\vec{u}}$ to be \[ M_{\vec{u}}(x) = \frac{u_1 x + u_2}{x + u_3}. \] A lot of recent works in incidence theory have focused on studying incidences between a set of M\"{o}bius transformations of the above form and sets of points in $\mathbb{R}^2$. In particular, given a finite set $X \subseteq \mathbb{R}$ and a finite set $H \subseteq \mathbb{R}^3$ satisfying $u_2 \neq u_1 u_3$, for each $\vec{u} \in H$, we define \[ I(X\times X,H) = \sum_{\vec{u} \in H} \sum_{(x_1, x_2) \in X^2} \mathds{1}_{x_2 = M_{\vec{u}}(x_1)}, \] whereupon, one may infer from the discussion surrounding \cite[inequality $(8)$]{SS2016} that \[ I(X\times X,H) \ll |X|^{4/3} |H|^{2/3} + |X|^{12/11} |H|^{9/11} \log |X| + |X|^2 + |H|. \] Combining this with \cite[Lemma 3.3]{Mu2021} enables us to present a weighted version of the above result. \par \begin{lemma} \label{wtin} Let $X \subseteq \mathbb{R}$ be a finite, non-empty set, and let $H \subseteq \mathbb{R}^3$ be a finite set such that $u_2 \neq u_1 u_3$, for each $\vec{u} \in H$, and let $w: H \to \mathbb{N}$ be a function. Then \begin{align*} \sum_{x_1, x_2 \in X} \sum_{\vec{u} \in H} \mathds{1}_{x_2 = M_{\vec{u}}(x_1)} w(\vec{u}) \ll & \ |X|^{4/3} \big(\sum_{\vec{u} \in H} w(\vec{u})^2 \big)^{1/3} \big(\sum_{\vec{u} \in H} w(\vec{u}) \big)^{1/3} \ + \ \sup_{\vec{u} \in H} w(\vec{u}) |X|^2 \\ & + |X|^{12/11} \big(\sum_{\vec{u} \in H} w(\vec{u})^2\big)^{2/11} \big(\sum_{\vec{u} \in H} w(\vec{u}) \big)^{7/11} \log |X| \ + \sum_{\vec{u} \in H} w(\vec{u}). 
\end{align*} \end{lemma} \section{Solving simultaneous linear equations with repetitive elements} Our main aim in this section is to estimate the number of solutions to systems of simultaneous equations where there are restrictions on the number of distinct elements in each solution. We begin this endeavour by presenting some further notation, and thus, for any $l, k,s \in \mathbb{N}$ satisfying $1 \leq l \leq ks$ and for any finite, non-empty set $A$ of real numbers, we say that the vector \[ (a_{1,1}, \dots, a_{1,s}, a_{2,1}, \dots, a_{2,s}, \dots, a_{k,1}, \dots, a_{k,s}) \in A^{ks} \] is \emph{$(k,l)$-complex} if there are precisely $l$ distinct values in the set $\{ a_{1,1}, \dots, a_{k,s}\}$ and if for any $1 \leq i < j \leq k$, we have that $\{a_{i,1}, \dots, a_{i,s} \} \neq \{a_{j,1} , \dots, a_{j,s}\}$. Moreover, we use $W_{k,l}$ to denote the set of all $(k,l)$-complex vectors in $A^{ks}$. Next, let $\Sigma_{l,s,k}(A)$ count the number of solutions to the system of equations \[ a_{1,1}+\cdots+a_{1,s}=a_{2,1}+\cdots+a_{2,s} = \cdots=a_{k,1}+\cdots+a_{k,s}, \] where $(a_{1,1}, \dots, a_{k,s}) \in W_{k,l}$. The main task of this section is to estimate $\Sigma_{l,s,k}(A)$ under the assumption that $E_{s,k}(A)$ is bounded. We note that the above system may be rewritten as the following system of $k-1$ simultaneous linear equations \begin{equation} \label{alt} a_{i,1} + \dots + a_{i,s} - a_{k,1} - \dots - a_{k,s} = 0 \ \ \ (1 \leq i \leq k -1). \end{equation} We will often write $E_i = a_{i,1} + \dots + a_{i,s}$ for each $1 \leq i \leq k$. \par The next two lemmata provide estimates for $\Sigma_{l,s,k}(A)$ when either $k=2$ or $s=2$. \begin{lemma}\label{lem: sigma l,s,2} Let $s,l$ be natural numbers such that $2 \leq l \leq 2s$.
Moreover suppose that $A$ is a finite set of real numbers such that \[ E_{s,2}(A) \ll_{s} |A|^{2s - 2 + 1/s - c}, \] for some $c>0$. Then we have that \[ \Sigma_{l,s,2}(A) \ll_{s} |A|^{l - l/s + l/2s^2 - cl/2s}. \] \end{lemma} \begin{proof} Writing $f(\alpha) = \sum_{a \in A} e(\alpha a)$ for every $\alpha \in [0,1)$, we may use orthogonality to deduce the following inequality \begin{align*} \Sigma_{l,s,2}(A) \ll_{s} \sum_{\substack{0 < |c_1|, \dots, |c_l| \leq 2s, \\ c_1 + \dots + c_l = 0 }}\int_{[0,1)} f(c_1\alpha) \dots f(c_l \alpha) d \alpha. \end{align*} Applying H\"{o}lder's inequality and periodicity, we see that \begin{align*} \Sigma_{l,s,2}(A) \ll_{s} \prod_{i=1}^{l} (\int_{[0,1)} |f(c_i \alpha)|^{2s} d \alpha )^{1/2s} = (\int_{[0,1)} |f(\alpha)|^{2s} d \alpha)^{l/2s}, \end{align*} whereupon, we obtain the bound \begin{equation*} \Sigma_{l,s,2}(A) \ll_{s} E_{s,2}(A)^{l/2s} \ll_{s} |A|^{l - l/s + l/2s^2 - cl/2s}, \end{equation*} which proves the lemma. \end{proof} \begin{lemma}\label{lem: sigma l,2,k} Let $k$ be a natural number and let $A$ be a finite set of real numbers such that \[ E_{2,k}(A) \ll_{k} |A|^{k + 1/2 - c}, \] for some $c>0$. Then we have that \[ \Sigma_{2k-1,2,k}(A) \ll_{k} |A|^{k-1/2+1/2k-c(1-1/k)}. \] \end{lemma} \begin{proof} Without loss of generality we can assume that $a_{i,1} = a_{i,2}$ for some $1 \leq i \leq k$. We now apply H\"{o}lder's inequality to get \begin{align*} \Sigma_{2k-1,2,k}(A) & \ll_k \sum_{x \in 2A} (\sum_{a_1, a_2 \in A} \mathds{1}_{x = a_1 + a_2})^{k-1} \mathds{1}_{2\cdot A}(x) \\ & \leq ( \sum_{x} (\sum_{a_1, a_2 \in A} \mathds{1}_{x = a_1 + a_2})^k)^{1 - 1/k} |A|^{1/k} \\ & = E_{2,k}(A)^{1 - 1/k} |A|^{1/k} , \end{align*} which, when combined with the hypothesis recorded above, delivers the required bound \[ \Sigma_{2k-1,2,k}(A) \ll_{k} |A|^{k - 1/2 + 1/2k - c (1 - 1/k) }. 
\qedhere \] \end{proof} In the remaining parts of this section, we will focus on estimating $\Sigma_{l,s,k}(A)$ for a much more general range of $k,s$, and we begin this endeavour by presenting the following straightforward upper bound on the number of solutions to a system of linear equations of a given rank with all the variables lying in some prescribed set. \begin{lemma} \label{lin} Let $m,n,r$ be natural numbers, let $M$ be an $m \times n$ matrix with real coefficients, let $\vec{u}=(u_1,\dots,u_m)$ be some vector in $\mathbb{R}^m$ and let $A$ be a finite, non-empty set of real numbers. Suppose that the matrix $M$ has $r$ linearly independent rows. Then the number of solutions to \[ M \vec{a}^T = \vec{u}^T, \] with $\vec{a} = (a_1, \dots, a_n) \in A^n$ is at most $O(|A|^{n-r})$. \end{lemma} \begin{proof} We apply Gaussian elimination on $M$ and obtain its row echelon form $M'=PM$, where $P$ is an invertible $m\times m$ matrix. Note that the first $r$ rows in $M'$ are linearly independent and upper triangular, and the other $m-r$ rows are $\vec{0}$. Let $\vec{v}_i$ be the $i$-th row vector of $M'$. Without loss of generality, after permuting the columns of $M'$, we may assume that the $i$-th entry of $\vec{v}_i$ is non-zero for every $1\leq i\leq r$. Let $\vec{u}'=(u_1',\dots,u_m')$ be $P\vec{u}^T$. Since the solutions to $M \vec{a}^T = \vec{u}^T$ are the solutions to $M' \vec{a}^T =\vec{u}'^T$, upon fixing $(a_{r+1},\dots,a_n)\in A^{n-r}$, we obtain a system of the form \[ (M')_{r\times r} \vec{a}_r^T=\vec{w}^T, \] where $(M')_{r\times r}$ contains the first $r$ rows and $r$ columns of $M'$, $\vec{a}_r=(a_1,\dots,a_r)$ and $\vec{w} \in \mathbb{R}^r$ is determined by $\vec{u}'$ and the fixed variables $a_{r+1}, \dots, a_n$. By the assumption, $(M')_{r\times r}$ has full rank, and hence $\vec{a}_r=(a_1,\dots,a_r)\in \mathbb{R}^r$ can be uniquely determined. Finally, since there are at most $|A|^{n-r}$ ways to choose $(a_{r+1},\dots,a_n) \in A^{n-r}$, the desired conclusion follows.
\end{proof} We finish this section by presenting the following lemma that enables us to find appropriate bounds for $\Sigma_{l,s,k}(A)$ when $s,k$ are natural numbers with $k \geq 3s$. \begin{lemma} \label{lim2} Let $s,k,l$ be natural numbers such that $k \geq 3s$ and $2 \leq l \leq sk$. Moreover, suppose that $A$ is a finite set of real numbers such that \[ E_{s,k}(A) \ll_{s,k} |A|^{sk - k + 1/s - c}, \] for some $c>0$. Then we have that \[ \Sigma_{l,s,k}(A) \ll_{s,k} |A|^{l - l/s + 1/s - c'}, \] for some $c'\geq \min\{(k-2s)c/k,1/s\}$. \end{lemma} \begin{proof}[Proof of Lemma $\ref{lim2}$] For ease of exposition, we will write $\Sigma_{l} = \Sigma_{l,s,k}(A)$, suppressing the dependence on $s,k$ and $A$. Let $M$ be the coefficient matrix of the system of linear equations described in $\eqref{alt}$, and in particular, $M$ will be some $(k-1) \times l$ matrix with entries from $[-2s, 2s] \cap \mathbb{Z}$. By incurring a factor of $O_{s,k}(1)$ in our upper bounds, which subsequently gets absorbed in the implicit constant of the Vinogradov notation, we may fix all the entries in $M$. \par We divide our proof into two cases depending on the rank of the matrix $M$, and so, we first consider the case when $\textrm{rank}(M) = k-1$. Furthermore, in this setting, it suffices to analyse the situation when $l \in (s(k-1), sk]$, since otherwise, we may use Lemma $\ref{lin}$ to deduce that \[ \Sigma_l \ll_{s,k} |A|^{l-k+1}=|A|^{l-\frac{l-1}{s}-\frac{(sk-l)-(s-1)}{s}}. \] Thus, we assume that $s(k-1) < l \leq sk$. In this case, there are $sk-l$ repetitive variables, all of which lie in $d$ different $s$-tuples, for some $d\leq 2(sk-l)$. By losing a factor of $O_{s,k}(1)$, we may assume that the $d$ $s$-tuples which contain the repetitive elements are precisely $(a_{1,1},\dots,a_{1,s})$, $\dots$, $(a_{d,1},\dots,a_{d,s})$.
Thus, we have that \begin{align*} \Sigma_{l} & \ll_{s,k} \sum_{n} \sum_{a_{d+1,1}, \dots, a_{k,s} \in A} \mathds{1}_{E_{d+1} = \dots = E_{k} = n} \sum_{\vec{a} \in W_{d,l}} \mathds{1}_{E_{1} = \dots = E_{d} =n} \\ & = \sum_{n} r_{s}(A;n)^{k-d} \sum_{\vec{a} \in W_{d,l}} \mathds{1}_{E_{1} = \dots = E_{d} =n} , \end{align*} where we use $\vec{a} \in W_{d,l}$ to denote the element $(a_{1,1}, \dots, a_{d,s}) \in W_{d,l}$. Applying H\"older's inequality, we get that \begin{equation} \label{kmov} \Sigma_l \ll_{s,k} E_{s,k}(A)^{\frac{k-d}{k}}\Big(\sum_{n}\Big(\sum_{\vec{a} \in W_{d,l}} \mathds{1}_{E_{1} = \dots = E_{d} =n} \Big)^{\frac{k}{d}}\Big)^{\frac{d}{k}}. \end{equation} \par Using Lemma $\ref{lin}$ along with the fact that $k \geq 2s > 2(sk-l) \geq d$, we may conclude that \begin{equation} \label{j1i1} \sum_{\vec{a} \in W_{d,l}} \mathds{1}_{E_{1} = \dots = E_{d}} \ll_{s,k} |A|^{sd - (sk-l) - (d-1)}, \end{equation} as well as that \begin{equation} \label{j2i2} \sum_{\vec{a} \in W_{d,l}} \mathds{1}_{E_{1} = \dots = E_{d} =n} \ll_{s,k} |A|^{sd - (sk - l) - d } \end{equation} holds for every $n \in \mathbb{R}$. More specifically, in order to prove $\eqref{j1i1}$, note that the system $E_1 = \dots = E_{d}$ can be rewritten in the form $\eqref{alt}$, wherein, the associated matrix has rank $d-1$. This follows from the fact that $\mathrm{rank}(M)=k-1$. Moreover, since there are exactly $sd - (sk-l)$ distinct elements in each solution, we can now use Lemma $\ref{lin}$ to deliver the claimed inequality. The deduction of the second inequality from Lemma $\ref{lin}$ requires some further maneuvers, which we briefly record here. The reader will note that it suffices to show that the row vectors $\vec{C}_1, \dots, \vec{C}_d$ of the matrix affiliated with the system $E_1 = \dots = E_d = n$ are linearly independent. 
We prove this via contradiction, and so, without loss of generality, we may suppose that $c_1, \dots, c_{d-1}$ are real numbers satisfying \[ \vec{C}_{d} = \sum_{i = 1}^{d-1} c_i \vec{C}_{i}. \] Multiplying the above equation with $\vec{v}^T$, where $\vec{v} = (1,\dots, 1) \in \mathbb{R}^l$, and employing the fact that $\vec{C}_i \vec{v}^T = s$ for each $1 \leq i \leq d$, we deduce that $\sum_{i=1}^{d-1} c_i = 1$. But this allows us to write \[ \vec{R}_{d} = \sum_{i=1}^{d-1} c_i \vec{R}_i, \] where $\vec{R}_1, \dots, \vec{R}_{k-1}$ are the row vectors of the matrix $M$, thus contradicting the fact that $\mathrm{rank}(M)=k-1$. \par Combining $\eqref{j1i1}$ and $\eqref{j2i2}$ with $\eqref{kmov}$, we get that \begin{align*} \Sigma_l & \ll_{s,k} |A|^{(s-1)(k-d)+\frac{k-d}{sk}-\frac{(k-d)c}{k}}|A|^{(sd-sk+l-d)\frac{k-d}{k}} \Big(\sum_{\vec{a} \in W_{d,l}} \mathds{1}_{E_{1} = \dots = E_{d}} \Big)^{\frac{d}{k}}\\ & \ll_{s,k} |A|^{(s-1)(k-d)+\frac{k-d}{sk}-\frac{(k-d)c}{k}}|A|^{(sd-sk+l-d)\frac{k-d}{k}}|A|^{(sd-sk+l-d+1)\frac{d}{k}}\\ & \leq |A|^{l-\frac{l-1}{s}-\frac{(sk-l)(k-2s+2)}{sk}-\frac{(k-d)c}{k}}\leq |A|^{l-\frac{l-1}{s}-c'}, \end{align*} with $c'\geq c(k-2s)/k$, whereupon, we are done when $\mathrm{rank}(M)=k-1$. \par Thus, we proceed with our second case, that is, when $\mathrm{rank}(M)=r<k-1$. This already implies that $l \leq s(r+1)$, and in fact, we will show that the stronger bound $l \leq sr$ must hold. This, in turn, combines with Lemma $\ref{lin}$ to deliver the estimate \[ \Sigma_l\ll |A|^{l-r}=|A|^{l-\frac{l-1}{s}-\frac{sr-l+1}{s}}=|A|^{l-\frac{l-1}{s}-c'} \] where $c'\geq 1/s$. We now turn to proving that our claim holds, that is, $l \leq sr$. Without loss of generality, we may assume that the first $r$ rows in $M$ are linearly independent. Since $\mathrm{rank}(M)=r$, there exist $\alpha_1, \dots, \alpha_r \in \mathbb{R}$ such that \[ \vec{R}_{k-1}=\sum_{i=1}^r \alpha_i \vec{R}_i.
\] Let $I,J\subseteq[r]$ be sets such that $\alpha_i>0$ for $i\in I$ and $\alpha_i<0$ for $i\in J$, and let $K = [r]\setminus (I \cup J)$. \par As all the $s$-tuples that we are analysing correspond to essentially distinct representations, we have that $|I|,|J|\geq 1$, whence \[ |K| + |I| \leq r-1 . \] Writing $\beta_j =-\alpha_j$ for each $j \in J$, we get that \[ \vec{R}_{k-1}=\sum_{i \in I}\alpha_i \vec{R}_i-\sum_{j \in J}\beta_j \vec{R}_j. \] Thus, setting $F_i = \vec{R}_{i} \vec{x}^T$ where $\vec{x} = (x_1, \dots, x_l)$ is a vector with formal variables $x_1, \dots, x_l$ as entries, we may deduce the following from the preceding expression \[ F_{k-1} - F_k = \sum_{i \in I} \alpha_i (F_i - F_k) - \sum_{j \in J} \beta_j (F_j - F_k), \] and so, \[ \sum_{j \in J} \beta_j F_j = \sum_{i \in I} \alpha_i F_i - F_{k-1} + (\sum_{j \in J} \beta_j - \sum_{i \in I} \alpha_i + 1) F_k .\] Since $\alpha_i, \beta_j > 0$ for each $i \in I$ and $j \in J$, we must have that any variable appearing in $F_j$, for every $j \in J$, either occurs in $F_i$ for some $i \in I$ or it occurs in $F_{k}$. Thus, we deduce that all the distinct variables arise either from $F_i$, for some $i \in I \cup K$, or from $F_{k}$. Finally, as $l$ is bounded above by the number of distinct variables in $F_1, \dots, F_r$ , we infer that \[ l \leq s(|I| + |K|) + s \leq s(r-1) + s = rs, \] and so, our claim holds true. This finishes the proof of Lemma $\ref{lim2}$. \end{proof} \section{Random sampling and deletion} We will use this section to record various lemmata that connect bounds on additive and multiplicative energies to the existence of large $B_{s}[g]$ subsets. \begin{lemma} \label{gens} Let $A \subseteq \mathbb{N}$ be a finite set, let $s \geq 2$ be a natural number and let $c >0$ be a real number such that \[ E_{s,2}(A) \leq |A|^{2s - 2 + 1/s - c}. \] Then there exists $B \subseteq A$ such that $B$ is a $B_{s}^+[1]$ set satisfying \[ |B| \gg_{s} |A|^{1/s + \delta} \ \text{for} \ \delta = c/(2s). 
\] \end{lemma} \begin{proof} We begin our proof by applying Lemma \ref{lem: sigma l,s,2} to deduce that \begin{equation} \label{hld} \Sigma_{l,s,2}(A) \ll_{s} E_{s,2}(A)^{l/2s} \ll_{s} |A|^{l - l/s + l/2s^2 - cl/2s}, \end{equation} for each $2 \leq l \leq 2s$. We will now pick elements from $A$ with probability $p$ uniformly at random, where $p = |A|^{1/s - 1 + \delta}$, and we write this subset to be $A'$. Note that \[ \mathbb{E} |A'| = p |A| = |A|^{1/s + \delta}, \] as well as that \[ \mathbb{E} |A'| - 2\mathbb{E} \sum_{l=2}^{2s} \Sigma_{l,s,2}(A') =p|A| - 2\sum_{l=2}^{2s} p^l \Sigma_{l,s,2}(A) = |A|^{1/s + \delta} - O_{s}(\sup_{2 \leq l \leq 2s} |A|^{l\delta + l/2s^2 - cl/2s}),\] where the last estimate follows from $\eqref{hld}$. Our choice of $\delta$ now implies that \[ \mathbb{E} ( |A'| - 2 \sum_{l=2}^{2s} \Sigma_{l,s,2}(A') ) \geq |A|^{1/s + \delta}/2, \] whenever $|A|$ is sufficiently large in terms of $s$. Thus, there exists some $A' \subseteq A$ such that \[ |A'| \geq |A|^{1/s + \delta}/2 \ \text{and} \ \sum_{l=2}^{2s} \Sigma_{l,s,2}(A') \leq |A'|/2 . \] \par For each $2 \leq l \leq 2s$ and for each solution $(a_1, \dots, a_{2s})$ counted in $\Sigma_{l,s,2}(A')$, we remove the element $a_1$ from $A'$, and we denote $B$ to be the remaining set. By definition, the set $B$ must be a $B_{s}^+[1]$ set. Moreover, we have that \[ |B| \geq |A'| - \sum_{2 \leq l \leq 2s} \Sigma_{l,s,2}(A') \geq |A'|/2 \geq |A|^{1/s + \delta}/4, \] and so, we are done. \end{proof} Lemma $\ref{gens}$ can also be shown to hold for multiplicative energies and multiplicative $B_{s}[1]$ sets, but we have to apply some slight modifications to various parts of the proof.
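For the simplest case $s=2$, where a $B_{2}^+[1]$ set is precisely a Sidon set, the sampling-and-deletion argument above can be run on a computer. The sketch below is illustrative only and not part of the proof; the probability $p \approx |A|^{-1/2}$ mirrors the choice $p = |A|^{1/s-1+\delta}$ with the small exponent $\delta$ ignored.

```python
import random
from collections import defaultdict

def is_sidon(B):
    # B is a B_2^+[1] (Sidon) set iff all sums a + b with a <= b are distinct
    sums = [a + b for i, a in enumerate(B) for b in B[i:]]
    return len(sums) == len(set(sums))

def sample_and_delete(A, p, seed=0):
    rng = random.Random(seed)
    A_prime = [a for a in A if rng.random() < p]  # random sampling step
    by_sum = defaultdict(list)
    for i in range(len(A_prime)):
        for j in range(i, len(A_prime)):
            by_sum[A_prime[i] + A_prime[j]].append((A_prime[i], A_prime[j]))
    # deletion step: keep one representation per sum, destroy all others
    bad = {pair[0] for pairs in by_sum.values() for pair in pairs[1:]}
    return [a for a in A_prime if a not in bad]

N = 2000
A = list(range(1, N + 1))
B = sample_and_delete(A, p=N ** (-0.5))
assert is_sidon(B) and len(B) > 0
```

By construction every repeated sum loses an element, so the surviving set is Sidon; on average it retains roughly $\sqrt{N}$ elements, matching the exponent $1/s = 1/2$ appearing in Lemma $\ref{gens}$.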
\begin{lemma} \label{mu2} Let $s$ be a natural number, let $c>0$ and let $A \subseteq (0, \infty)$ be a finite set such that \[ M_{s,2}(A) \leq |A|^{2s - 2 + 1/s - c}. \] Then there exists $B \subseteq A$ such that $B$ is a $B_{s}^\times[1]$ set satisfying \[ |B| \gg_s |A|^{1/s + \delta} \ \text{for} \ \delta = c/(2s). \] \end{lemma} \begin{proof} For every $2 \leq l \leq 2s$, let $\Pi_{l,s,2}(A)$ be the number of all $2s$-tuples $(a_1, \dots, a_{2s}) \in A^{2s}$ satisfying $a_1 \dots a_s = a_{s+1} \dots a_{2s}$ such that there are precisely $l$ distinct elements amongst $a_1, \dots, a_{2s}$. Our main aim is to show that for each $2 \leq l \leq 2s$, we have \begin{equation} \label{ann} \Pi_{l,s,2}(A) \ll_{s} |A|^{l - l/s + l/2s^2 - cl/2s}, \end{equation} since we can then follow the proof of Lemma $\ref{gens}$ mutatis mutandis to deduce our desired claim. \par We begin our proof of $\eqref{ann}$ by noting that \[ \Pi_{l,s,2}(A) \ll_{s} \sum_{\substack{ 0 < |c_1|, \dots, |c_{l}| \leq 2s, \\ c_1 + \dots + c_l = 0 }} \sum_{a_1, \dots, a_l \in A} \mathds{1}_{c_1 \log a_1 + \dots + c_l \log a_l = 0 } . \] Writing $X = \{ \log a \ | \ a \in A\}$, we let $A_i = c_i \cdot X$ for every $1 \leq i \leq \min\{l, s\}$ and $A_i = - c_i \cdot X$ for every $s+1 \leq i \leq l$ and $A_i = \{0\}$ for every $l+1 \leq i \leq 2s$. Thus, the previous inequality may be rewritten as \[ \Pi_{l,s,2}(A) \ll_{s} \sum_{\substack{ 0 < |c_1|, \dots, |c_{l}| \leq 2s, \\ c_1 + \dots + c_l = 0 }} \sum_{a_1 \in A_1, \dots, a_{2s} \in A_{2s}} \mathds{1}_{a_1 + \dots + a_s = a_{s+1} + \dots + a_{2s}}, \] whence, we may apply Lemma $\ref{awk}$ to obtain the bound \[ \Pi_{l,s,2}(A) \ll_{s} \sum_{\substack{ 0 < |c_1|, \dots, |c_{l}| \leq 2s, \\ c_1 + \dots + c_l = 0 }} E_{s,2}(A_1)^{1/2s} \dots E_{s,2}(A_l)^{1/2s} .
\] Finally, since the equation $x_1 + \dots + x_s = x_{s+1} + \dots + x_{2s}$ is dilation invariant, we see that $E_{s,2}(A_i) = E_{s,2}(X) = M_{s,2}(A)$, and subsequently, we get the bound \[ \Pi_{l,s,2}(A) \ll_{s} M_{s,2}(A)^{l/2s} \sum_{\substack{ 0 < |c_1|, \dots, |c_{l}| \leq 2s, \\ c_1 + \dots + c_l = 0 }} 1 \ll_{s} |A|^{l - l/s + l/2s^2 - cl/2s} . \qedhere \] \end{proof} We now prove similar results when we have good upper bounds for either $E_{2,k}(A)$ or $M_{2,k}(A)$. \begin{lemma} \label{sidr} Let $k \geq 2$ be a natural number, let $c, \delta>0$ be real numbers such that $\delta = c/2k$ and let $A \subseteq (0, \infty)$ be a finite set. If \[ E_{2,k}(A) \ll_{k} |A|^{k+ 1/2 - c},\] then there exists a $B_{2}^+[k-1]$ set $B \subseteq A$ such that $|B| \gg_k |A|^{1/2 + \delta}$. Similarly, if \[ M_{2,k}(A) \ll_{k} |A|^{k+ 1/2 - c}, \] then there exists a $B_{2}^\times[k-1]$ set $B \subseteq A$ such that $|B| \gg_k |A|^{1/2 + \delta}$. \end{lemma} \begin{proof} Recall that for every $2 \leq l \leq 2k$, the quantity $\Sigma_{l,2,k}(A)$ was defined to be the number of $2k$-tuples $(a_1, \dots, a_{2k}) \in A^{2k}$ satisfying $a_1 + a_2 = \dots = a_{2k-1} + a_{2k}$ such that there are precisely $l$ distinct elements amongst $a_1, \dots, a_{2k}$. We first claim that it suffices to consider the case when $l\geq 2k-1$. In order to see this, note that if at least three of $a_{1,1}, \dots, a_{k,2}$ equal each other, then without loss of generality, we may assume that there exist $1 \leq i < j \leq k$ such that $a_{i,2} = a_{j,2}$. But this would then imply that $a_{i,1} = a_{j,1}$, since $a_{i,1} + a_{i,2} = a_{j,1} + a_{j,2}$, whence, $\{ a_{i,1}, a_{i,2}\} = \{ a_{j,1}, a_{j,2}\}$, which contradicts our setting wherein we are only interested in distinct representations of some real number $n$ as $n = a+ b$, with $a, b \in A$.
By Lemma~\ref{lem: sigma l,2,k}, we have \[ \Sigma_{2k-1,2,k}(A)\ll_{k} |A|^{k - 1/2 + 1/2k - c (1 - 1/k) } , \] and furthermore, we have the trivial bound \[ \Sigma_{2k,2,k}(A) \ll_k E_{2,k}(A) \ll_{k} |A|^{k + 1/2 - c} . \] As before, we now use a random sampling argument, to pick elements $a \in A$ with probability $p = |A|^{-1/2 + \delta}$ uniformly at random, and we denote this set to be $A'$. Thus, we have that \[ \mathbb{E}|A'| = p|A| = |A|^{1/2 + \delta}. \] Furthermore, we see that \begin{align*} \mathbb{E}|A'| - 2 \mathbb{E} \sum_{l = 2k-1}^{2k} \Sigma_{l,2,k}(A') & = p |A| - 2p^{2k} \Sigma_{2k,2,k}(A) - 2p^{2k-1} \Sigma_{2k-1,2,k}(A) \\ & = |A|^{1/2 + \delta} - O_{k}(|A|^{1/2 - c + 2k \delta} + |A|^{1/2k - c(1-1/k) + (2k-1) \delta} ). \end{align*} Since $k \geq 2$ and $\delta = c/2k$, both the error terms above can be verified to be much smaller than the main term, and consequently, we get that \[ \mathbb{E} (|A'| - 2 \sum_{l = 2k-1}^{2k} \Sigma_{l,2,k}(A')) \geq |A|^{1/2 + \delta}/2 \] whenever $|A|$ is sufficiently large in terms of $k$. This implies that there exists some $A' \subseteq A$ such that \[ |A'| \geq |A|^{1/2 + \delta}/2, \ \text{as well as that} \ \Sigma_{2k-1,2,k}(A') + \Sigma_{2k,2,k}(A') \leq |A'|/2. \] For each solution $(a_1, \dots, a_{2k})$ counted by either $\Sigma_{2k-1,2,k}(A')$ or $\Sigma_{2k,2,k}(A')$, we remove the element $a_1$ from $A'$, and we write the remaining set to be $B$. By definition, $B$ is a $B_{2}^+[k-1]$ set satisfying $|B| \geq |A'|/2 \gg_{k} |A|^{1/2 + \delta}$, and so, we have proven the first conclusion recorded in Lemma $\ref{sidr}$. The multiplicative analogue can be shown to hold similarly by applying the first part of Lemma $\ref{sidr}$ for the sets $X_1 = \{ \log a \ | \ a \in A \cap (0, 1) \}$ and $X_2 = \{ \log a \ | \ a \in A \cap (1, \infty)\}$. 
\end{proof} Next, we will also show that similar arguments imply that whenever $E_{s,k}(A)$ is bounded appropriately for some $k \geq 3s$, there exists a large $B_s^+[k-1]$ set in $A$. \begin{lemma} \label{hsh} Let $A$ be a finite set of real numbers, let $s,k$ be natural numbers with $s \geq 2$ and $k\geq 3s$, and let $c >0$. If \[ E_{s,k}(A) \ll |A|^{sk - k + 1/s - c}, \] then there exists a $B_{s}^+[k-1]$ set $B \subseteq A$ such that $|B| \gg_k |A|^{1/s + \delta}$, for $\delta = c'/sk$ with $c'= \min\{(k-2s)c/k,1/s\}$. \end{lemma} \begin{proof} We begin our proof by applying Lemma \ref{lim2} to deduce that \begin{equation*} \Sigma_{l,s,k}(A) \ll_{s,k}|A|^{l-l/s+1/s-c'}. \end{equation*} We now pick elements from $A$ with probability $p$ uniformly at random, where $p = |A|^{1/s - 1 + \delta}$, and we write this subset to be $A'$. As $ \mathbb{E} |A'| = p |A| = |A|^{1/s + \delta}, $ we have that \[ \mathbb{E} |A'| - 2\mathbb{E} \sum_{l=2}^{ks} \Sigma_{l,s,k}(A') =p|A| - 2\sum_{l=2}^{ks} p^l \Sigma_{l,s,k}(A) = |A|^{1/s + \delta} - O_{s}(|A|^{sk\delta + 1/s - c'}).\] Our choice of $\delta$ now implies that \[ \mathbb{E} ( |A'| - 2 \sum_{l=2}^{ks} \Sigma_{l,s,k}(A') ) \geq |A|^{1/s + \delta}/2, \] whenever $|A|$ is sufficiently large in terms of $s$. Thus, there exists some $A' \subseteq A$ such that \[ |A'| \geq |A|^{1/s + \delta}/2 \ \text{and} \ \sum_{l=2}^{ks} \Sigma_{l,s,k}(A') \leq |A'|/2 . \] \par For each $2 \leq l \leq ks$ and for each solution $(a_{1,1}, \dots, a_{k,s})$ counted by $\Sigma_{l,s,k}(A')$, we remove the element $a_{1,1}$ from $A'$, and we denote $B$ to be the remaining set. By definition, the set $B$ must be a $B_{s}^+[k-1]$ set. Moreover, we have that \[ |B| \geq |A'| - \sum_{2 \leq l \leq ks} \Sigma_{l,s,k}(A') \geq |A'|/2 \geq |A|^{1/s + \delta}/4.
\qedhere \] \end{proof} \section{Hyperbolic incidences} Given finite, non-empty sets $X,Y \subseteq \mathbb{R}$, we are interested in estimating the number of solutions $H(X,Y)$ to the equation \[ (x_1 - y_1) (x_2 - y_2) = 1, \] with $x_1, x_2 \in X$ and $y_1, y_2 \in Y$. By dilating the sets $X,Y$ appropriately, we may use this to study solutions to equations of the form $(x_1 - y_1)(x_2 - y_2) = \lambda$, for arbitrary $\lambda \neq 0$. Our main goal in this section is to prove the following upper bound for $H(X,Y)$. \begin{theorem} \label{hyp} Let $X,Y \subseteq \mathbb{R}$ be finite sets such that $|Y|^2 \leq |X| \leq |Y|^3$. Then we have \[ H(X,Y) \ll |X|^{1 + 1/6} |Y|^{2 - 1/2} . \] \end{theorem} \begin{proof} Let $X,Y$ be finite subsets of $\mathbb{R}$ such that $|Y|^2 \leq |X| \leq |Y|^3$. Note that \begin{align*} H(X,Y) = \sum_{x_1, x_2, y_1, y_2} \mathds{1}_{x_2 = y_2 + (x_1 - y_1)^{-1}} = \sum_{u} (\sum_{x_2} \mathds{1}_{x_2 = u}) (\sum_{x_1, y_1, y_2} \mathds{1}_{u = y_2 + (x_1 - y_1)^{-1}}). \end{align*} Applying the Cauchy-Schwarz inequality, we see that \begin{align*} H(X,Y) & \leq (\sum_{u} \sum_{x_1, x_2} \mathds{1}_{x_1 = x_2 = u})^{1/2} (\sum_{u} \sum_{x_1, y_1, y_2, x_3, y_3, y_4} \mathds{1}_{u = y_2 + (x_1 - y_1)^{-1} = y_3 + (x_3 - y_4)^{-1}})^{1/2} \\ & = |X|^{1/2} H_1(X,Y)^{1/2}, \end{align*} where $H_1(X,Y)$ counts the number of solutions to the equation \begin{equation} \label{cas3} y_2 + (x_1 - y_1)^{-1} = y_3 + (x_3 - y_4)^{-1} , \end{equation} with $x_1, x_3 \in X$ and $y_1, \dots, y_4 \in Y$. Thus, it suffices to show that \[ H_1(X,Y) \ll |X|^{4/3} |Y|^{3}. \] \par We begin this endeavour by considering the solutions where $y_2 = y_3$, which can trivially be bounded above by $|X||Y|^3$, and so, it suffices to assume that $y_2 \neq y_3$. We denote $I_1$ to be the number of such solutions.
Rewriting $\eqref{cas3}$ as \[ x_1 = \frac{ x_3 (y_1 +(y_3 - y_2)^{-1} ) + (y_1 - y_4)(y_3 - y_2)^{-1} - y_1 y_4 }{x_3 +(y_3 - y_2)^{-1} - y_4}, \] we see that $I_1$ counts the number of solutions to the equation \[ x_1 = \frac{ x_3 (y_1 +d ) + (y_1 - y_4)d - y_1 y_4 }{x_3 +d - y_4} \ \ \text{with} \ d \neq 0, \] where each solution $x_1, x_3,y_1,y_4,d$ is being counted with the weight $r(d^{-1})$, with $r(n) = |\{(y,y') \in Y \times Y \ | \ n = y - y'\}|$ for each $n \in \mathbb{R}$. Furthermore, setting \[ u_1 = y_1 + d \ \text{and} \ u_2 = (y_1 - y_4)d - y_1 y_4 \ \text{and} \ u_3 = d - y_4, \] we see that the preceding expression corresponds to the equation $x_1 = M_{\vec{u}}(x_3)$. Since $u_1 u_3 - u_2 = d^2 \neq 0$, we may deduce that each choice of $(u_1, u_2, u_3)$ corresponds to at most $2$ choices of $(y_1, y_4, d)$, which, in turn, allows us to provide upper bounds for $I_1$ in terms of weighted incidences between sets of points and M\"{o}bius transformations. The latter can then be estimated using Lemma $\ref{wtin}$, and in particular, we get that \begin{align*} I_1 \ll & \ |X|^{4/3} \big(\sum_{\vec{n} \in H} w(\vec{n})^2 \big)^{1/3} \big(\sum_{\vec{n} \in H} w(\vec{n}) \big)^{1/3} \ + \ \sup_{\vec{n} \in H} w(\vec{n}) |X|^2 \\ & + |X|^{12/11} \big(\sum_{\vec{n} \in H} w(\vec{n})^2\big)^{2/11} \big(\sum_{\vec{n} \in H} w(\vec{n}) \big)^{7/11} \log |X| \ + \sum_{\vec{n} \in H} w(\vec{n}), \end{align*} where $H = Y\times Y \times ((Y-Y)\setminus \{0\})^{-1}$ and $w(\vec{n}) = r(n_3^{-1})$. Using double counting, we see that \[ \sum_{\vec{n} \in H} w(\vec{n}) = |Y|^4 \ \text{and} \ \sum_{\vec{n} \in H} w(\vec{n})^2 = |Y|^2 E_{2,2}(Y) \leq |Y|^5 \ \text{and} \sup_{\vec{n} \in H} w(\vec{n}) \leq |Y|, \] whence, \[ I_1 \ll |X|^{4/3} |Y|^3 + |X|^{12/11} |Y|^{38/11} \log |X|+ |X|^2|Y| + |Y|^4 \ll |X|^{4/3} |Y|^3, \] with the second inequality following from the fact that $|Y|^2 \leq |X| \leq |Y|^3$. 
Utilising this along with the bound $H_1(X,Y) \ll I_1 + |X| |Y|^3$ finishes the proof of Theorem $\ref{hyp}$. \end{proof} It appears that Theorem $\ref{hyp}$ provides the best known bounds for such hyperbolic incidences in the regime when $|Y|^2 \leq |X| \leq |Y|^3$, and we refer the reader to \cite{RW2021} for more details on the problem of estimating $H(X,Y)$ in various other settings. We conclude this section by recording various examples of sets $X,Y$ which provide large values of $H(X,Y)$. \begin{Proposition} \label{1cons} There exist arbitrarily large sets $X, Y \subseteq \mathbb{Q}$ such that $|X| \geq |Y|$ and \[H(X,Y) \gg |X||Y|. \] Similarly, there exist arbitrarily large sets $X, Y \subseteq \mathbb{Q}$ such that $|X| \geq |Y|$ and \[H(X,Y) \gg |Y|^2 \log |Y|. \] \end{Proposition} \begin{proof} We begin by proving the first claim, and so, we let $N,M$ be natural numbers and we define $Y= \{1, 2, \dots, N\}$ and $Z = \{ 1, 2, \dots, M\}$ and $1/Z = \{1, 1/2,1/3,\dots, 1/M\}$ and $X = Z \cup (Y + 1/Z)$. Note that we may choose a number $l \in [N+1, M]$ in $M-N$ ways, and moreover, every such $l$ has at least $N$ representations as $l = z- y$ with $z \in Z$ and $y \in Y$. Next, note that there are at least $N$ solutions to the equation $1/l = (y'+1/l) - y'$ with $y' + 1/l \in X$ and $y' \in Y$. This implies that \[ H(X,Y) \geq (M-N)N^2 = (M+1)N^2 - N^2 - N^3 \geq |X||Y| - |Y|^2 - |Y|^3 \gg |X||Y|, \] whenever $M \geq 2N$. This proves the first inequality claimed in this proposition. \par We now prove the second inequality stated in the proposition. Thus, let $X = \{1,2, \dots, 2^N\}$ and $Y = \{1,2,\dots, 2^M\}$ such that $N \geq M+1$. We note that there are at least $M$ ways to write $2^M = 2^k 2^{M-k}$ for some $k \in \{1,\dots, M\}$, and moreover, for each such $k$, there are at least $2^M$ solutions to $x- y = 2^k$ with $x \in X$ and $y \in Y$, and $2^{M}$ solutions to $x-y = 2^{M-k}$ with $x \in X$ and $y \in Y$.
Upon dilating the sets $X,Y$ appropriately, we get that \[ H(X,Y) \gg M (2^M)^2 \gg |Y|^2 \log |Y|, \] and so, we conclude the proof of Proposition $\ref{1cons}$. \end{proof} \section{Proofs of Theorems $\ref{th3}, \ref{th2}$ and $\ref{th1}$} We dedicate this section to the proofs of Theorems $\ref{th3}, \ref{th2}$ and $\ref{th1}$. We remark that it suffices to prove these results for sets $A$ of natural numbers, since the equations $x_1 + \dots + x_s = x_{s+1} + \dots + x_{2s}$ and $x_1 \dots x_{s} = x_{s+1} \dots x_{2s}$ are dilation invariant, as well as the fact that $\max\{|A \cap (0, \infty)|, |A \cap (-\infty, 0)|\} \gg |A|$. We now present our proof of Theorem $\ref{th3}$. \begin{proof}[Proof of Theorem $\ref{th3}$] We begin our proof by applying Lemma $\ref{mu1}$ to obtain the existence of disjoint sets $B,C$ satisfying $A = B \cup C$ and the fact that \[ E_{s,2}(B) \ll_{s} |B|^{2s - \eta_s} \ \text{and} \ M_{s,2}(C) \ll_{s} |C|^{2s - \eta_s}, \] for \[ \eta_s \geq D (\log \log s)^{1/2} (\log \log \log s)^{-1/2}. \] Applying Lemmata $\ref{gens}$ and $\ref{mu2}$ for sets $B,C$ respectively, we see that there exists a $B_{s}^+[1]$ set $B' \subseteq B$ and a $B_{s}^\times[1]$ set $C' \subseteq C$ such that \[ |B'| \gg_{s} |B|^{\frac{ \eta_s + 1/s}{2s} } \ \text{and} \ |C'| \gg_{s} |C|^{\frac{ \eta_s + 1/s}{2s} } . \] We obtain the desired conclusion by noting that $\max \{ |B|, |C|\} \geq |A|/2$. \end{proof} Next, we record our proof of Theorem $\ref{th2}$. \begin{proof}[Proof of Theorem $\ref{th2}$] Let $s \geq 3$, let $k=30s$ and let $c= 1/2s$. We divide our proof into two cases, and so, we first suppose that \[ E_{s,k}(A) \leq |A|^{sk - k + 1/s - c}. \] In this case, we may use Lemma $\ref{hsh}$ to deduce the existence of a $B_{s}^+[k-1]$ set $B$ with $|B| \gg_{s,k} |A|^{1/s + c'/sk}$ with $c'\geq\min\{c(k-2s)/k, 1/s\} \gg 1/s$. Otherwise, we may assume that \[ \sum_{n \in sA} r_{s}(A;n)^k = E_{s,k}(A) > |A|^{sk - k + 1/s - c}.
\] This now implies that \begin{equation} \label{ync3} \sup_{n \in sA} r_{s}(A;n) \geq (|A|^{sk - k + 1/s - c} |A|^{-s} )^{1/(k-1)} = |A|^{s -1 - \nu}, \end{equation} where $\nu = (1 - 1/s + c)/(k-1)$. \par We first deal with the case when $s \geq 4$. In this case, combining the first and the last inequality stated in Lemma $\ref{awk}$ along with the bound presented in $\eqref{ync3}$, we get that \[ E_{2,2}(A) \geq |A|^{3 - \nu}, \] in which case, we can apply Lemma $\ref{bsg5}$ to obtain $A' \subseteq A$ such that \[ |A'| \gg |A|^{1 - \nu} \ \text{and} \ |A'-A'| \ll |A'|^{1 + 4\nu}. \] Using Lemmata $\ref{prineq}$ and $\ref{so1}$, we may now infer that \[ M_{2,2}(A') \ll |A'+A'|^2 \log |A'| \ll |A'|^{2 + 16 \nu} \log |A'|, \] which, in turn, combines with Lemma $\ref{awk}$ to give us \[ M_{s,2}(A') \ll |A'|^{2s - 2 + 16 \nu} \log |A'|. \] By our choice of $k$, we have that $\log |A'| \ll_{s,k} |A'|^{\nu}$ and $17 \nu < 1/s$, and so, we can now employ Lemma $\ref{mu2}$ to obtain a $B_{s}^\times[1]$ set $C \subseteq A'$ such that \[ |C| \gg |A'|^{1/s + (1/s - 17 \nu)/2s} \gg |A|^{(1/s + (1/s - 17 \nu)/2s)(1 - \nu)}=|A|^{1/s+\mu}, \] where $\mu= (1/s - 17 \nu)( 1- \nu)/2s - \nu/s$. As $c=1/2s$ and $k= 30s$, by elementary computations we have $\mu \gg 1/s^2 > 0$, which proves Theorem $\ref{th2}$ for $s\geq 4$. The $s=3$ case follows similarly, except this time, a straightforward application of Cauchy-Schwarz inequality combined with $\eqref{ync3}$ implies that \[ E_{2,2}(A) \geq|A|^{3 - \nu'}, \] with $\nu' = 2 \nu$. In order to see this, note that \[ r_{s}(A;n) = \sum_{a_1, a_2, a_3 \in A} \mathds{1}_{n - a_1 = a_2 + a_3} \leq |A|^{1/2} (\sum_{a_1 \in A} (\sum_{a_2, a_3 \in A} \mathds{1}_{n-a_1 = a_2 + a_3} )^2 )^{1/2} \leq |A|^{1/2} E_{2,2}(A)^{1/2}. \] With a lower bound for $E_{2,2}(A)$ in hand, we now proceed as in the setting when $s \geq 4$ to obtain a $B_3^\times [1]$ set $C\subseteq A$ with $|C|\gg |A|^{1/3+\mu}$, where \[ \mu=(1/3-34\nu)(1-2\nu)/6-2\nu/3. 
\] As $k\geq 90$, we have $\mu>0$, which finishes the $s=3$ case of Theorem $\ref{th2}$. \end{proof} Finally, we state the proof of Theorem $\ref{th1}$. \begin{proof}[Proof of Theorem $\ref{th1}$] Let $k = 32$ and $\eta = 1802/3630$ and let $\epsilon = (1- \eta)(k-1)^{-1}$. We divide our proof into two cases, wherein, we first assume that the set \[ S = \{ x \in 2A \ | \ r_{2}(A; x) \geq |A|^{1 - \epsilon} \} \] satisfies $|S| \leq |A|^{\eta}$. In this case, we see that \begin{align*} E_{2,k}(A) & = \sum_{x \in 2A} r_{2}(A; x)^k = \sum_{x \in 2A \setminus S} r_{2}(A; x)^k + \sum_{x \in S} r_{2}(A; x)^k \\ & \leq (|A|^{1 - \epsilon})^{k-1} \sum_{x \in 2A \setminus S} r_{2}(A; x) + |A|^k |S| \\ & \leq |A|^{k + 1 - \epsilon(k-1)} + |A|^{k + \eta} \ll |A|^{k + \eta}. \end{align*} We may now apply Lemma $\ref{sidr}$ to deduce the existence of some $B_{2}^+[k-1]$ set $B \subseteq A$ such that $|B| \gg_{k} |A|^{1/2 + \delta_1}$, where $\delta_1 = (1/2 - \eta)/2k$. Since $\eta < 1/2$, we have that $\delta_1 >0$, and consequently, we are done in this case. \par Our second case is when $|S| > |A|^{\eta}$, whereupon, we choose a subset $S'$ of $S$ such that $|S'| = |A|^{\eta}$. Note that \[ |A|^{1- \epsilon} |S'| \leq \sum_{x \in S'} \sum_{a,b \in A} \mathds{1}_{a = x - b} = \sum_{a \in A} R(a), \] where $R(n) = |\{(x,b) \in S' \times A \ | \ n = x- b \}|$ for each $n \in \mathbb{R}$. Writing \[ A' = \{ a \in A \ | \ R(a) \geq |S'|/2|A|^{\epsilon}\}, \] we see that \[ \sum_{a \in A \setminus A'} R(a) < |S'| |A|^{1- \epsilon}/2, \] which then combines with the preceding inequality to deliver the bound \[ |S'||A'| \geq \sum_{a \in A'} R(a) = \sum_{a \in A} R(a) - \sum_{a \in A \setminus A'} R(a) > |A|^{1-\epsilon} |S'|/2 . \] Thus, we have that $|A'| \geq |A|^{1- \epsilon}/2$. \par We now claim that for every real number $n \neq 0$, we have that \[ m_{2}(A', n) \ll |A|^{1 + c + 2\epsilon - 3c \eta},\] where $c = 1/6$. 
In order to see this, we note that \[ |A|^{2 \eta-2 \epsilon} m_{2}(A', n) \ll \sum_{a_1, a_2 \in A'} R(a_1) R(a_2) \mathds{1}_{n = a_1 a_2} \leq H(A, S'), \] while since $1/3 < \eta < 1/2$, we may use Theorem $\ref{hyp}$ to infer the bound \[ H(A,S') \ll |A|^{1 + c} |S'|^{2 - 3c} = |A|^{1 + c + 2\eta - 3c \eta} . \] These two inequalities combine to give the claimed bound. \par With the above estimate in hand, we see that \begin{align*} M_{2,k}(A') & = \sum_{n \in A'^{(2)}} m_{2}(A';n)^k \ll_k |A|^{(1 + c + 2\epsilon - 3c \eta)(k-1)} \sum_{n \in A'^{(2)}} m_{2}(A';n) \\ & \ll_{k} |A|^{(1 + c + 2\epsilon - 3c \eta)(k-1)} |A'|^2 \ll_{k} |A'|^{\frac{(1 + c + 2\epsilon - 3c \eta)(k-1)}{(1- \epsilon)} + 2} \\ & = |A'|^{k + 1/2 - \frac{(k- 3/2)(1-\epsilon) - (1 + c + 2\epsilon - 3c \eta)(k-1)}{(1- \epsilon)} }. \end{align*} Thus, applying Lemma $\ref{sidr}$ again, we see that $A$ must contain a $B_{2}^\times[k-1]$ set $C$ such that \[ |C| \gg_{k} |A'|^{1/2 + \frac{(k- 3/2)(1-\epsilon) - (1 + c + 2\epsilon - 3c \eta)(k-1)}{2k(1- \epsilon)}} \gg_{k} |A|^{1/2 + \delta_2}, \] where \[ \delta_2 = \frac{(k- 3/2)(1-\epsilon) - (1 + c + 2\epsilon - 3c \eta)(k-1) - k \epsilon}{2k}. \] The reader may now verify that the choices $c = 1/6$, $\epsilon = (1- \eta)/(k-1)$, $k = 32$ and $\eta = 1802/3630$ allow us to have $\delta_2>0$, whence, we are done with the proof of Theorem $\ref{th1}$. \end{proof} \bibliographystyle{amsbracket} \providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace}
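The elementary computations invoked at the end of the proofs of Theorems $\ref{th2}$ and $\ref{th1}$ above can be double-checked numerically. The following sketch (a sanity check only, not part of the arguments) verifies the positivity of $\mu$ in the $s \geq 4$ case of Theorem $\ref{th2}$ and of $\delta_1, \delta_2$ in Theorem $\ref{th1}$ with the stated parameters:

```python
# Theorem th2 (case s >= 4): c = 1/(2s), k = 30s, nu = (1 - 1/s + c)/(k - 1),
# and the exponent mu = (1/s - 17 nu)(1 - nu)/(2s) - nu/s should be positive.
for s in range(4, 201):
    c = 1 / (2 * s)
    k = 30 * s
    nu = (1 - 1 / s + c) / (k - 1)
    mu = (1 / s - 17 * nu) * (1 - nu) / (2 * s) - nu / s
    assert mu > 0

# Theorem th1: k = 32, eta = 1802/3630, epsilon = (1 - eta)/(k - 1), c = 1/6.
k, eta, c = 32, 1802 / 3630, 1 / 6
eps = (1 - eta) / (k - 1)
delta1 = (1 / 2 - eta) / (2 * k)
delta2 = ((k - 3 / 2) * (1 - eps)
          - (1 + c + 2 * eps - 3 * c * eta) * (k - 1)
          - k * eps) / (2 * k)
assert delta1 > 0 and delta2 > 0  # delta2 is tiny (~1e-4) but positive
```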
\section{Introduction} The problems of finding rainbow structures in proper edge-colorings of complete graphs have been widely studied in recent years. For example, in a recent breakthrough, Montgomery, Pokrovskiy and Sudakov~\cite{ProofRingel2020} confirmed Ringel's famous conjecture; their proof proceeds by finding a rainbow copy of any tree with $n$ edges in a suitable proper edge-coloring of $K_{2n+1}$. There have also been many papers written on finding large or spanning structures in proper edge-colorings, see, e.g., \cite{Stefan2019, PLMS2019, JEMS2020, JCTB2018}. Very recently, Conlon and Tyomkyn \cite{Conlon2020} studied a new Ramsey type problem, which aims to find two or more vertex-disjoint color-isomorphic copies of some given graph in proper edge-colorings of complete graphs. For $k,n\geqslant 2$ and a fixed graph $H$, define $f_{k}(n,H)$ to be the smallest integer $c$ such that there exists a proper edge-coloring of $K_{n}$ with $c$ colors containing no $k$ vertex-disjoint color-isomorphic copies of $H$. One may ask the following natural question. \begin{problem}\label{problem:MAIN} Given $k\geqslant 2$ and a fixed graph $H,$ determine the order of growth of $f_{k}(n,H)$ as $n\rightarrow\infty.$ \end{problem} In \cite{Conlon2020}, Conlon and Tyomkyn studied this problem systematically. They first made many useful observations on the properties of $f_k(n,H)$. For instance, $f_{k}(n,H)$ is monotone increasing in $n$, but decreasing in $k$. Moreover, $f_{k}(n,H)$ is monotone decreasing in $H$ with respect to taking subgraphs, i.e., $f_{k}(n,H)\leqslant f_{k}(n,H')$ when $H'$ is a subgraph of $H.$ Also, since every proper edge-coloring of $K_n$ uses at least $n-1$ colors, we have $n-1\leqslant f_{k}(n,H)\leqslant \binom{n}{2}.$ Using the Lov\'{a}sz Local Lemma and Bukh's random algebraic method~\cite{Bukh2015}, they proved the following results.
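The bound $n-1 \leqslant f_{k}(n,H)$ is just the statement that the chromatic index of $K_n$ is at least $n-1$: all $n-1$ edges at a fixed vertex must receive distinct colors. In fact $\chi'(K_n)$ equals $n-1$ for even $n$ and $n$ for odd $n$; the small brute-force search below (illustrative only, not part of the paper's arguments) confirms the first two nontrivial cases.

```python
from itertools import combinations

def chromatic_index(n):
    # smallest c such that K_n admits a proper edge-coloring with c colors
    edges = list(combinations(range(n), 2))

    def colorable(c):
        color = {}

        def backtrack(i):
            if i == len(edges):
                return True
            u, v = edges[i]
            # colors already used on edges sharing a vertex with edges[i]
            used = {col for e, col in color.items() if u in e or v in e}
            for col in range(c):
                if col not in used:
                    color[edges[i]] = col
                    if backtrack(i + 1):
                        return True
                    del color[edges[i]]
            return False

        return backtrack(0)

    c = n - 1  # maximum degree n-1 forces at least n-1 colors
    while not colorable(c):
        c += 1
    return c

assert chromatic_index(4) == 3  # even n: n-1 colors suffice
assert chromatic_index(5) == 5  # odd n: n colors are needed
```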
\begin{theorem}[\cite{Conlon2020}]\label{thm:SomeknownResults} The followings hold.\medskip \noindent\emph{(i)} For any graph $H$ with $v$ vertices and $e$ edges, \begin{equation*} f_{k}(n,H)=O(\max\{n,n^{\frac{kv-2}{(k-1)e}}\}). \end{equation*} \noindent\emph{(ii)} For every graph $H$ containing a cycle, there exists $k=k(H)$ such that \begin{equation*} f_{k}(n,H)=\Theta(n). \end{equation*} \end{theorem} Conlon and Tyomkyn also suggested to study $f_k(n,H)$ when $H$ is an even cycle. Theorem~\ref{thm:SomeknownResults} implies $f_{k}(n,C_{4})=O(n^{\frac{2k-1}{2k-2}})$ (using the Lov\'{a}sz Local Lemma), and there is an integer $k$ such that $f_{k}(n,C_{4})=\Theta(n)$ (using the random algebraic method). The constant $k$ obtained by the random algebraic method is likely very large due to the Lang-Weil bound \cite{LangWeil1954}, and they asked whether $f_{2}(n,C_{4})=\Theta(n).$ Our first result in this paper studies $f_k(n,C_4)$. We try to estimate the smallest integer $k$ such that $f_{k}(n,C_{4})=\Theta(n)$, and we give the following result via an algebraic construction. \begin{theorem}\label{thm:f12C4} $f_{3}(n,C_{4})=\Theta(n).$ \end{theorem} This result improves the best known upper bound $O(n^{\frac{5}{4}})$ obtained by Theorem~\ref{thm:SomeknownResults} (i), and it perhaps gives some evidence that $f_{2}(n,C_{4})$ is also of order $\Theta(n)$. As the authors mentioned in \cite{Conlon2020}, the problem of studying $f_k(n,H)$ was motivated by a generalized Ramsey problem raised by Krueger~\cite{GenRamsey2020}. Our next result in this paper studies this generalized Ramsey problem. The classical graph Ramsey problem asks for the minimum number $n$ such that every $k$-coloring of the edges of $K_{n}$ forces a monochromatic copy of $K_{p}$. By fixing $n,$ the inverse problem asks for the minimum $k$ such that there exists an edge-coloring of $K_{n}$ with $k$ colors, and each copy of $K_{p}$ receives at least $q$ colors. 
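A toy instance of this inverse problem can be computed exactly by brute force: the minimum number of colors so that every triangle of $K_4$ receives $3$ distinct colors is $3$, attained by giving each of the three perfect matchings of $K_4$ its own color. The sketch below (illustrative only; the helper `min_colors` is not from the literature) confirms this.

```python
from itertools import combinations, product

def min_colors(n, p, q):
    # least c such that SOME c-coloring of E(K_n) gives every K_p >= q colors
    edges = list(combinations(range(n), 2))
    cliques = [list(combinations(S, 2)) for S in combinations(range(n), p)]
    for c in range(1, len(edges) + 1):
        for coloring in product(range(c), repeat=len(edges)):
            col = dict(zip(edges, coloring))
            if all(len({col[e] for e in K}) >= q for K in cliques):
                return c

assert min_colors(4, 3, 3) == 3  # every triangle of K_4 rainbow with 3 colors
```

For $q=2$ the same search returns $2$ on $K_4$ (two colors avoid a monochromatic triangle, as $R(3,3)=6>4$), illustrating how the answer grows with $q$.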
For general graphs $G$ and $H,$ the generalized Ramsey number $r(G,H,q)$ denotes the minimum number of colors in an edge-coloring of $G$ such that the edges of every copy of $H\subseteq G$ together receive at least $q$ distinct colors. This function was first studied by Elekes, Erd\H{o}s and F\"{u}redi (see Section $9$ of \cite{Erdos1981}). Later, Erd\H{o}s and Gy\'{a}rf\'{a}s~\cite{Combinatorica1997} systematically studied the function $r(K_{n},K_{p},q)$ and showed many improved results. After that, a number of wonderful papers \cite{JCTB2000, PLMS2015, SIDMA2020, MonoK42008, ColorEnergy2019, EJC2001} studied this problem, and obtained many interesting results in this direction. In recent years, some questions about distinct distances and difference sets with similar flavors have also been studied in \cite{SIDMA2020, DistinctDistance2018, ColorEnergy2019}. In this paper, we are interested in the bipartite version of the generalized Ramsey number, $r(K_{n,n},K_{s,t},q),$ which has been studied in \cite{JCTB2000, JCTB1975, ARS2003}. In particular, Axenovich, F\"{u}redi, and Mubayi~\cite{JCTB2000} obtained a series of improved results via many different methods such as the Lov\'{a}sz Local Lemma and algebraic methods. We list some results of \cite{JCTB2000} in Tables~\ref{table:K2233} and \ref{table:Kss}. In our studies of $r(K_{n,n},K_{s,t},q)$, we will always assume that $s,t$ and $q$ are fixed integers and $n\rightarrow \infty.$ Axenovich, F\"{u}redi, and Mubayi~\cite{JCTB2000} determined the linear and quadratic thresholds of the function $r(K_{n,n},K_{s,s},q)$. More precisely, they determined the smallest integers $q_1(s)$ and $q_2(s)$, where $q_1(s)=s^{2}-2s+3$ and $q_2(s)=s^{2}-s+2$, such that $r(K_{n,n},K_{s,s},q_1(s))=\Theta(n)$ and $r(K_{n,n},K_{s,s},q_2(s))=\Omega(n^{2})$.
Up to now, nothing has been shown about $r(K_{n,n},K_{s,s},q)$ beyond the trivial lower bound $\Omega(n)$ when $s^{2}-2s+3<q<s^{2}-s+2.$ Our next results give some general lower bounds for $r(K_{n,n},K_{s,t},q)$ with a broad range of $q$. Recall that $\mathrm{ex}(n,H)$ denotes the maximum number of edges in an $H$-free graph $G$ with $n$ vertices. \begin{theorem}\label{thm:GeneralLowerBound} For given integers $t\geqslant s\geqslant 4,$ we have \begin{equation*} r(K_{n,n},K_{s,t},st-e(H)+1)=\Omega\Big(\frac{n^{4}}{\textup{ex}(n^{2},H)}\Big), \end{equation*} where $H$ is a bipartite graph with bipartition $V(H)=H_{1}\cup H_{2}$ such that $|H_{1}|\leqslant \lfloor\frac{s}{2}\rfloor$ and $|H_{2}|\leqslant \lfloor\frac{t}{2}\rfloor.$ \end{theorem} When $s=t$ is even, let $H$ be the even cycle of length $s.$ Using the upper bound $\textup{ex}(n,C_{2k})=O(n^{1+\frac{1}{k}})$ by Bondy and Simonovits~\cite{Bondy1974}, we obtain the following corollary. \begin{corollary}\label{cor:kss} When $s\geqslant 4$ is an even integer, we have \begin{equation*} r(K_{n,n},K_{s,s},s^{2}-s+1)=\Omega(n^{2-\frac{4}{s}}). \end{equation*} \end{corollary} As we see in Table~\ref{table:Kss}, our lower bound is not far from the best known upper bound $r(K_{n,n},K_{s,s},s^{2}-s+1)=O(n^{2-\frac{2}{s}}),$ and Corollary~\ref{cor:kss} gives the first non-trivial lower bound when $s>4$ is an even integer and $q=s^{2}-s+1.$ Note that the choice of $H$ in Theorem~\ref{thm:GeneralLowerBound} is flexible, which leads to a broad range of $q$ in the function $r(K_{n,n},K_{s,t},q).$ Let $q_1(s,t)=st-s-t+3$ and $q_2(s,t)=st-\frac{s+t}{2}+2$. The results in \cite{GenRamsey2020} imply $r(K_{n,n},K_{s,t},q_1(s,t))=\Theta(n)$ and $r(K_{n,n},K_{s,t},q_2(s,t))=\Theta(n^2)$.
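For the reader's convenience, we record the routine substitution behind Corollary~\ref{cor:kss}: the cycle $H=C_{s}$ (with $s$ even) has bipartition classes of size $s/2\leqslant\lfloor s/2\rfloor$ and $e(H)=s$, so $st-e(H)+1=s^{2}-s+1$, while the Bondy--Simonovits bound with $2k=s$ applied on $N=n^{2}$ vertices gives

```latex
\[ \textup{ex}(n^{2},C_{s}) = O\big((n^{2})^{1+\frac{2}{s}}\big) = O\big(n^{2+\frac{4}{s}}\big),
   \qquad\text{whence}\qquad
   r(K_{n,n},K_{s,s},s^{2}-s+1) = \Omega\Big(\frac{n^{4}}{n^{2+\frac{4}{s}}}\Big)
   = \Omega\big(n^{2-\frac{4}{s}}\big). \]
```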
By choosing different graphs $H$ satisfying the conditions of Theorem~\ref{thm:GeneralLowerBound}, we are able to obtain, as corollaries, super-linear lower bounds for $r(K_{n,n},K_{s,t},q)$ for many distinct parameters $q.$ We give two corollaries in this fashion for the asymmetric version, that is, when $t>s$. The corollaries are obtained by choosing $H$ to be the $1$-subdivision of a complete graph and the $1$-subdivision of a complete bipartite graph, respectively. We use the known extremal numbers for $1$-subdivisions of complete graphs, $\textup{ex}(n,\textup{sub}(K_{t}))=O(n^{\frac{3}{2}-\frac{1}{4t-6}})$, and of complete bipartite graphs, $\textup{ex}(n,\textup{sub}(K_{s,t}))=O(n^{\frac{3}{2}-\frac{1}{2s}})$ (see \cite{ConlonJanzerLee2019, JanzerEJC2019}). \begin{corollary} When $s\geqslant 6$ is an even integer and $t=2\binom{s/2}{2}$, we have \begin{equation*} r(K_{n,n},K_{s,t},st-t+1)=\Omega(n^{1+\frac{1}{s-3}}). \end{equation*} \end{corollary} \begin{corollary} When $s\geqslant 8$ is an even integer and $t=\frac{s^{2}}{8}$, we have \begin{equation*} r(K_{n,n},K_{s,t},st-t+1)=\Omega(n^{1+\frac{4}{s}}). \end{equation*} \end{corollary} Moreover, few lower bounds for $r(K_{n,n},K_{s,t},q)$ are known when $q<q_{1}(s,t)=st-s-t+3.$ The only known case is $q=2,$ for which $r(K_{n,n},K_{s,s},2)=\Omega(n^{\frac{1}{s}})$~\cite{JCTB2000}. We obtain some new sub-linear lower bounds by combining Theorem~\ref{thm:GeneralLowerBound} with the famous K\H{o}v\'{a}ri--S\'{o}s--Tur\'{a}n bound~\cite{Kovari1954} $\textup{ex}(n,K_{m,\ell})=O(n^{2-\frac{1}{m}})$ for $m\leqslant \ell$. \begin{corollary} For $t\geqslant s\geqslant 4,$ let $m\leqslant \frac{s}{2}$ and $\ell\leqslant\frac{t}{2}$ be integers with $m\leqslant\ell$. Then we have \begin{equation*} r(K_{n,n},K_{s,t},st-m\ell+1)=\Omega(n^{\frac{2}{m}}). \end{equation*} In particular, setting $s=t$ and $m=\ell=\frac{s}{2},$ we have \begin{equation*} r(K_{n,n},K_{s,s},\frac{3s^{2}}{4}+1)=\Omega(n^{\frac{4}{s}}). 
\end{equation*} \end{corollary} The paper is organized as follows. In Section~\ref{section:C4}, we prove Theorem~\ref{thm:f12C4} by giving an algebraic construction. In Section~\ref{section:lowbounds}, we prove Theorem~\ref{thm:GeneralLowerBound}. Finally, we conclude with some remarks and further questions in Section~\ref{section:Conclusion}.\smallskip \begin{minipage}{\textwidth} \begin{minipage}[t]{0.5\textwidth} \centering \makeatletter\def\@captype{table}\makeatother\caption{$r(K_{n,n},K_{s,s},q)$ with $s=2,3$}\label{table:K2233} \begin{tabular}{ccc} \hline $q$& $r(K_{n,n},K_{2,2},q)$& $r(K_{n,n},K_{3,3},q)$ \\ \hline $2$& $(1+o(1))\sqrt{n}$&$(1+o(1))n^{\frac{1}{3}}$\\ $3$& $>\lfloor\frac{2n}{3}\rfloor$; $\leqslant n-1$ & $O(n^{\frac{4}{7}})$\\ $4$&$n^{2}$&$O(n^{\frac{2}{3}})$\\ $5$&$n^{2}$&$O(n^{\frac{4}{5}})$\\ $6$&$n^{2}$&$\Theta(n)$\\ $7$&$n^{2}$&$\Omega(n)$; $O(n^{\frac{4}{3}})$\\ $8$&$n^{2}$&$\lceil\frac{n}{2}\lceil\frac{3n}{2}\rceil\rceil$\\[0.5mm] $9$&$n^{2}$&$n^{2}$\\ \hline \end{tabular} \end{minipage} \begin{minipage}[t]{0.5\textwidth} \centering \makeatletter\def\@captype{table}\makeatother\caption{$r(K_{n,n},K_{s,s},q)$ with $s\geqslant 4$}\label{table:Kss} \begin{tabular}{cc} \hline $q$& $r(K_{n,n},K_{s,s},q)$ \\ \hline $2$& $\Omega(n^{\frac{1}{s}})$\\ $s^{2}-2s+2$& $O(n^{1-\frac{1}{2s-1}})$\\[0.9mm] $s^{2}-2s+3$&$\Theta(n)$\\ $s^{2}-s+1$&$O(n^{2-\frac{2}{s}})$\\ $s^{2}-s+2$&$\geqslant C_{s}(n^{2}-n)$; $<(1-c_{s})n^{2}$\\ $s^{2}-\lfloor\frac{2s-1}{3}\rfloor+1$&$>n^{2}-2\lfloor\frac{s-2}{3}\rfloor(n-1)$\\ $s^{2}-\lfloor\frac{s}{2}\rfloor+1$&$n^{2}-\lfloor\frac{s}{2}\rfloor+1$\\[0.5mm] $s^{2}$&$n^{2}$\\ \hline \end{tabular} \end{minipage} \end{minipage} \section{Even cycle $C_{4}$}\label{section:C4} In this section, we prove Theorem~\ref{thm:f12C4}. The proof goes as follows. We first choose a field $\mathcal{K}$, and construct a map $\pi:V\to \mathcal{K}$, where $V=V(K_n)$. 
Then we choose a symmetric polynomial $P\in \mathcal{K}[x,y]$, and color the edge $ab$ by $P(\pi(a),\pi(b))$. We aim to show that, under this construction, the resulting edge-coloring has bounded maximum degree in each color class (thus, by a standard application of Vizing's theorem, we can obtain a proper edge coloring), the image $|P(\pi(V),\pi(V))|$\footnote{Given $A,B\subseteq \mathcal{K}$, let $P(A,B)$ denote $\{P(a,b): a\in A,b\in B\}$.} is $O(n)$, and we cannot find too many color isomorphic copies of $C_4$ in this coloring. One may first try taking $\mathcal{K}$ to be a field of characteristic $0$. Then, by the symmetric Elekes--R\'{o}nyai theorem~\cite{JRT}, our symmetric polynomial $P(x,y)$ has the form $f(u(x)+u(y))$ or $f(u(x)u(y))$, where $f,u$ are one-variable polynomials in $\mathcal{K}[x]$. However, having only constantly many color isomorphic copies of $C_4$ would then imply that the set $u(\pi(V))$ has low additive or multiplicative energy, which gives that the image $|P(\pi(V),\pi(V))|$ is $\Theta(|\pi(V)|^{2})$; keeping the image $O(n)$ would thus require $|\pi(V)|=O(n^{\frac{1}{2}})$. But then some color class has maximum degree $\Omega(n^{\frac{1}{2}})$, and we no longer have a proper edge coloring. We fix this problem by choosing $\mathcal{K}$ to be a finite field $\mathbb F_p$, and by choosing our polynomial $P$ to be an expanding polynomial over $\mathbb R$. This means that if we view our polynomial as an element of $\mathbb R[x,y]$ and take an injection $\pi:V\to \mathbb R$, then the edge coloring is proper, there are not many color isomorphic copies of $C_4$, but the image $|P(\pi(V),\pi(V))|$ is quadratic in $n$. Now, by taking $p=O(n)$, mapping everything down to $\mathbb F_p$ helps us decrease $|P(\pi(V),\pi(V))|$ to $O(n)$. 
By choosing the polynomial $P(x,y)$, the vertex map $\pi$, and the characteristic $p$ carefully, and using ideas coming from resultants of polynomials, we manage to show that we can still get a proper coloring, and that taking this projection down to $\mathbb F_p$ does not create too many color isomorphic copies of $C_4$. \begin{proof}[Proof of Theorem~\ref{thm:f12C4}] Let $p\equiv5\pmod{6}$ be a sufficiently large prime, and let \[ A=\Big\{1,2,\ldots,\Big\lfloor\frac{p-1}{3}\Big\rfloor\Big\} \] be a subset of $\mathbb{F}_{p}$. We remark that the purpose of choosing $p\equiv5\pmod{6}$ is to make $-3$ a quadratic non-residue modulo $p$. This follows from the law of quadratic reciprocity, which states that if $r$ and $s$ are odd primes, and we define $r^{*}$ to be $r$ if $r\equiv1\pmod{4}$ or $-r$ if $r \equiv3\pmod{4},$ then $s$ is a quadratic residue modulo $r$ if and only if $r^{*}$ is a quadratic residue modulo $s$. Applying this with $r=3$ and $s=p$, we see that $-3$ is a quadratic residue modulo $p$ if and only if $p$ is a quadratic residue modulo $3$, i.e. $p\equiv1\pmod{3}$. Now, let $G$ be a complete graph with vertex set $A$. Let $P(x,y)=x^2+xy+y^2$ be an element of $\mathbb F_p[x,y]$. Let $P^*(A,A)$ be the restricted image, that is, \[ P^*(A,A)=\{P(a,b):a,b\in A,a\neq b\}. \] Define the edge coloring \[ \chi: E(G)\to P^*(A,A), \] such that for every $x,y\in A$ with $x\neq y$, we assign the color $P(x,y)$ to the edge $xy$. Note that $x^{2}+xy+y^{2}=(x+\frac{y}{2})^{2}+\frac{3}{4}y^{2}$; since $-3$ is a quadratic non-residue modulo $p$, we have $x^{2}+xy+y^{2}\ne0$. Hence $P^*(A,A)\subseteq \mathbb F^*_p$. Next, we claim that the edge coloring $\chi$ is proper. Suppose there is $a\in\mathbb F_p^*$ such that two edges $xy$ and $xz$ are assigned the same color $a$. That is, we have $x^{2}+xy+y^{2}=a$ and $x^{2}+xz+z^{2}=a$. Hence $(y-z)(x+y+z)=0$. By the construction of the vertex set $A$, $x+y+z\ne0$. This implies $y=z$. 
Therefore, two distinct edges $xy$ and $xz$ cannot be assigned the same color, and hence $\chi$ is proper. Let $a,b,c,d\in\mathbb{F}^*_{p}$ with $a\ne b,\ b\ne c,\ c\ne d,\ d\ne a$. Assume the colors $a,b,c,d$ appear on a four-cycle with vertices $x,y,z,w\in A$. Then we have \begin{align} &x^{2}+xy+y^{2}=a,\label{c4eq1}\\ &y^{2}+yz+z^{2}=b,\label{c4eq2}\\ &z^{2}+zw+w^{2}=c,\label{c4eq3}\\ &w^{2}+wx+x^{2}=d.\label{c4eq4} \end{align} From Equations (\ref{c4eq1}) and (\ref{c4eq2}), we obtain \begin{equation}\label{c4eq5} y=\frac{a-b}{x-z}-x-z. \end{equation} Similarly, from Equations (\ref{c4eq3}) and (\ref{c4eq4}), we obtain \begin{equation}\label{c4eq6} w=\frac{c-d}{z-x}-z-x. \end{equation} Substituting Equations (\ref{c4eq5}) and (\ref{c4eq6}) into Equations (\ref{c4eq1}) and (\ref{c4eq4}), respectively, we have \begin{align} &x^2+x\Big(\frac{a-b}{x-z}-x-z\Big)+\Big(\frac{a-b}{x-z}-x-z\Big)^{2}=a,\label{c4eq7}\\ &x^2+x\Big(\frac{c-d}{z-x}-z-x\Big)+\Big(\frac{c-d}{z-x}-z-x\Big)^{2}=d.\label{c4eq8} \end{align} Making the change of variables $u=x+z$ and $v=x-z$ in Equations (\ref{c4eq7}) and (\ref{c4eq8}), we get $v\neq 0$, and \begin{align} &v^{4}+3u^{2}v^{2}-6(a-b)uv-2(a+b)v^{2}+4(a-b)^{2}=0,\label{c4eq9}\\ &v^{4}+3u^{2}v^{2}-6(c-d)uv-2(c+d)v^{2}+4(c-d)^{2}=0.\label{c4eq10} \end{align} Assume first that $a-b=c-d$. Then from Equations (\ref{c4eq9}) and (\ref{c4eq10}), we have $a+b=c+d$, and hence $a=c,b=d$. By Equations (\ref{c4eq5}) and (\ref{c4eq6}), we have $y+w+2x+2z=0$, and by symmetry we also obtain $x+z+2y+2w=0$. Therefore, $x+z=y+w=0$, which contradicts the construction of $A$. Hence $a-b\ne c-d$. 
From Equations (\ref{c4eq9}) and (\ref{c4eq10}) we obtain \begin{align} uv=\frac{(a+b-c-d)v^{2}}{3(c-d-a+b)}+\frac{2}{3}(c-d+a-b).\label{c4eq11} \end{align} Substituting Equation (\ref{c4eq11}) into Equation (\ref{c4eq9}), and then multiplying by $3(c-d-a+b)^{2}$, we obtain \begin{align} k_{2}v^{4}+k_{1}v^{2}+k_{0}=0,\label{eq:quad} \end{align} where \begin{align*} &k_{2}=4(a^2-ab-2ac+ad+b^2+bc-2bd+c^2-cd+d^2),\\ &k_{1}=4(a-b-c+d)(a^2-2ac-3ad-b^2+3bc+2bd+c^2-d^2),\\ &k_{0}=4(a-b-c+d)^2(a^2-2ab-ac+ad+b^2+bc-bd+c^2-2cd+d^2). \end{align*} Assume first that $k_{2}=k_{1}=k_{0}=0$. Since $a-b-c+d\ne0$, we have \begin{align} &a^2-ab-2ac+ad+b^2+bc-2bd+c^2-cd+d^2=0,\label{c4eq12}\\ &a^2-2ab-ac+ad+b^2+bc-bd+c^2-2cd+d^2=0.\label{c4eq13} \end{align} From Equations (\ref{c4eq12}) and (\ref{c4eq13}), we obtain $(a-d)(b-c)=0$, which contradicts the choice of $a,b,c,d$. Recall that $v=x-z$ with $x,z\in A$. Thus, if at least one of the $k_{i}$ $(i=0,1,2)$ is nonzero, then there are at most $4$ solutions for $v$ in $(A-A)\setminus\{0\}$. Moreover, if the number of solutions for $v$ is at least $3$, then there exist two solutions $v_{1},v_{2}$ such that $v_{1}+v_{2}=0$. In this case, let $x_{1},z_{1},u_{1}$ (respectively $x_{2},z_{2},u_{2}$) be the solutions corresponding to $v_{1}$ (respectively $v_{2}$). Then $v_{1}^{2}=v_{2}^{2}$, and by Equation (\ref{c4eq11}), we have $u_{1}v_{1}=u_{2}v_{2}$. Hence $u_{1}=-u_{2}$. Since $v_{i}=x_{i}-z_{i}$ and $u_{i}=x_{i}+z_{i}$, we have $x_{1}-z_{1}=-x_{2}+z_{2}$ and $x_{1}+z_{1}=-x_{2}-z_{2}$. Then we get $x_{1}+x_{2}=0$, which contradicts the construction of $A$. Hence there are at most $2$ solutions for $v$ in $(A-A)\setminus\{0\}$. For any fixed $v$, by Equation (\ref{c4eq11}), there is a unique solution for $u$. Since $x=\frac{u+v}{2}$ and $z=\frac{u-v}{2}$, and since, by Equations (\ref{c4eq5}) and (\ref{c4eq6}), $y$ and $w$ are uniquely determined by $x$ and $z$, there are at most $2$ solutions for $(x,y,z,w)$. 
Therefore, the number of copies of four-cycles with edge colors $a,b,c,d$ is at most two. \end{proof} \section{Lower bounds for $r(K_{n,n},K_{s,t},q)$}\label{section:lowbounds} In this section, we prove Theorem~\ref{thm:GeneralLowerBound}. The ideas used in this proof are mainly inspired by the recent work of Conlon and Tyomkyn~\cite{Conlon2020}. It is often helpful to think of $r(K_{n,n},K_{s,t},q)$ in terms of repeated colors. Let $\mathscr{C}$ be the collection of colors, and let $\chi:E(G)\to\mathscr{C}$ be an edge coloring of a graph $G$. Let $H$ be a subgraph of $G$. If a color $c\in\mathscr{C}$ appears on exactly $r_{c}$ edges of $H$, then we say the color $c$ is repeated $r_{c}-1$ times in $H$. We say $H$ has $r$ \emph{repeats} if $r=\sum\limits_{c\in\chi(E(H))}(r_{c}-1)$, where the sum is taken over all colors in $\chi(E(H))$ (hence $r_c\geqslant1$). \begin{proof}[Proof of Theorem~\ref{thm:GeneralLowerBound}] Suppose that $n$ is sufficiently large, and let \[ \chi:E(K_{n,n})\to \mathscr{C} \] be an edge coloring of $K_{n,n}$, where $\mathscr{C}$ is the collection of colors. Suppose $K_{n,n}$ has the vertex bipartition $A\cup B$. We label the vertices so that $A=\{a_{1},a_{2},\ldots,a_{n}\}$ and $B=\{b_{1},b_{2},\ldots,b_{n}\}$, with $a_{1}<a_{2}<\cdots<a_{n}$ and $b_{1}<b_{2}<\cdots<b_{n}$. Now we construct an auxiliary graph $F$ as follows. $F$ is a bipartite graph with vertex set $U\cup W$, where $U=\binom{A}{2}$ and $W=\binom{B}{2}$; thus $|U|=|W|=\binom{n}{2}$. Moreover, we require the elements of $U$ to have the form $(a_i,a_j)$ with $a_i>a_j$, and the elements of $W$ to have the form $(b_k,b_\ell)$ with $b_k>b_\ell$. For every $(a_i,a_j)\in U$ and $(b_k,b_\ell)\in W$, the vertices $(a_i,a_j)$ and $(b_k,b_\ell)$ are adjacent in $F$ if $\chi(a_ib_k)=\chi(a_jb_\ell)$ in the edge coloring of $K_{n,n}$. 
Given $c\in\mathscr{C}$, let $e_c$ be the number of edges of color $c$ under $\chi$. By convexity (and noting that we may assume $|\mathscr{C}|\leqslant n^{2}/2$, for otherwise we are done), we have \[ e(F)=\sum_{c\in\mathscr{C}}\binom{e_{c}}{2}\geqslant \frac{(\sum_{c\in\mathscr{C}}e_{c})^{2}}{4|\mathscr{C}|}=\frac{n^{4}}{4|\mathscr{C}|}. \] Hence $|\mathscr{C}|\geqslant \frac{n^{4}}{4e(F)}$. Next, we bound $e(F)$, and hence obtain a lower bound on $|\mathscr{C}|$. Let $H$ be a bipartite graph with vertex set $H_1\cup H_2$, such that $|H_{1}|\leqslant \lfloor\frac{s}{2}\rfloor$ and $|H_{2}|\leqslant \lfloor\frac{t}{2}\rfloor.$ Suppose $e(F)> \mathrm{ex}(|V(F)|,H)$; then $F$ contains a copy of $H$. Observe that, by the definition of the auxiliary graph $F$, every edge of the copy of $H$ in $F$ contributes exactly one repeat in the edge coloring $\chi$ of $K_{n,n}$. Thus, there are at least $e(H)$ repeats in $\chi$. Moreover, all these $e(H)$ repeats span at most $2|H_1|$ vertices in $A$ and at most $2|H_2|$ vertices in $B$. Thus, by the upper bounds on $|H_1|$ and $|H_2|$, we are able to find a copy of $K_{s,t}$ in $K_{n,n}$ such that $|\chi(E(K_{s,t}))|$ is at most $st-e(H)$. Therefore, if the coloring $\chi$ does not contain a $K_{s,t}$ with fewer than $st-e(H)+1$ colors, we have $e(F)\leqslant \mathrm{ex}(|V(F)|,H)$, finishing the proof. \end{proof} \section{Concluding remarks}\label{section:Conclusion} Although the proof of Theorem~\ref{thm:f12C4} only requires an elementary computation, it is motivated by considering resultants of polynomials. Let us first recall the definition of the resultant of two polynomials in $\mathcal{K}[x]$. \begin{definition} Let $f(x),g(x)\in \mathcal{K}[x]$, with $f(x)=a_{m}x^{m}+\cdots+a_{1}x+a_{0}$ and $g(x)=b_{n}x^{n}+\cdots+b_{1}x+b_{0}$. 
Then the \emph{resultant of $f$ and $g$} is defined as the determinant of the following $(m+n)\times (m+n)$ Sylvester matrix, \begin{align*} \left( \begin{array}{cccccccc} a_{0} & a_{1} & \cdots & a_{m} & & && \\ & a_{0} & \cdots & a_{m-1} & a_{m} && & \\ & & \cdots & \cdots & \cdots & & &\\ & & & & & a_{0} & \cdots & a_{m}\\ b_{0} & b_{1} & \cdots &\cdots & b_{n} & &\\ & b_{0} & \cdots & \cdots & b_{n-1} & b_{n} & & \\ & & \cdots & \cdots &\cdots & \cdots & &\\ & & & &b_{0} & \cdots& \cdots & b_{n}\\ \end{array} \right), \end{align*} which is denoted by $R(f,g)$. \end{definition} The resultant of two polynomials has the following property, which is crucial for us. \begin{lemma}[\cite{Fuhrmann2012}] Let $f,g\in \mathcal{K}[x]$ be one-variable polynomials. Suppose $h(x)=\gcd(f(x),\allowbreak g(x))$, where $\deg(h(x))\geqslant 1$. Then $R(f,g)=0$. In particular, if $f$ and $g$ have a common root in $\mathcal{K}$, then $R(f,g)=0$. \end{lemma} For multivariable polynomials, the above lemma still applies once we view them as one-variable polynomials over the remaining variables. For any $f,g\in \mathcal{K}[x_{1},\dots,x_{n}]$, let $R(f,g;x_{i})$ denote the resultant of $f$ and $g$ with respect to the variable $x_{i}$. In the proof of Theorem~\ref{thm:f12C4}, Equations (\ref{c4eq1}), (\ref{c4eq2}), (\ref{c4eq3}), and (\ref{c4eq4}) actually give us four polynomials in $\mathbb F_p[x,y,z,w]$: \begin{align*} &f_{1}(x,y,z,w)=x^{2}+xy+y^{2}-a=0,\\ &f_{2}(x,y,z,w)=y^{2}+yz+z^{2}-b=0,\\ &f_{3}(x,y,z,w)=z^{2}+zw+w^{2}-c=0,\\ &f_{4}(x,y,z,w)=w^{2}+wx+x^{2}-d=0. \end{align*} By computing the resultants $f_5(x,z):=R(f_1,f_2;y)$, $f_6(x,z):=R(f_3,f_4;w)$, and $g(x):=R(f_5,f_6;z)$ (which parallel the computations in the proof above), we get $g(x)=0$, where $g(x)$ is a quadratic polynomial in $x^{2}$, an analogue of Equation (\ref{eq:quad}). The proof is finished by analyzing the coefficients of $g(x)$, as we did for $k_0,k_1,k_2$ in Equation (\ref{eq:quad}). 
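This elimination scheme can be checked with a short computer-algebra sketch. The snippet below (our own illustration, using SymPy and working over $\mathbb{Q}$ rather than $\mathbb{F}_p$ for simplicity; the rational values of $x,y,z,w$ are arbitrary) picks a concrete four-cycle, derives the forced colors $a,b,c,d$, and verifies that the successive resultants vanish at the common solution, as the lemma above guarantees:

```python
# Sketch: verify that eliminating y, w and then z from f1,...,f4 via
# resultants yields a one-variable polynomial g(x) vanishing at x.
from sympy import symbols, resultant, Rational

x, y, z, w = symbols('x y z w')

# Arbitrary rational four-cycle (x, y, z, w); the colors a, b, c, d
# are then forced by the equations f1 = f2 = f3 = f4 = 0.
xv, yv, zv, wv = Rational(2), Rational(5), Rational(3), Rational(7)
a = xv**2 + xv*yv + yv**2
b = yv**2 + yv*zv + zv**2
c = zv**2 + zv*wv + wv**2
d = wv**2 + wv*xv + xv**2

f1 = x**2 + x*y + y**2 - a
f2 = y**2 + y*z + z**2 - b
f3 = z**2 + z*w + w**2 - c
f4 = w**2 + w*x + x**2 - d

f5 = resultant(f1, f2, y)  # eliminate y
f6 = resultant(f3, f4, w)  # eliminate w
g = resultant(f5, f6, z)   # eliminate z; a polynomial in x alone

# The common solution must be a root at every elimination stage.
assert f5.subs({x: xv, z: zv}) == 0
assert f6.subs({x: xv, z: zv}) == 0
assert g.subs(x, xv) == 0
```

Since $f_1(x_v,y)$ and $f_2(y,z_v)$ share the root $y=y_v$, their resultant in $y$ vanishes at $(x_v,z_v)$, and similarly at each later stage; the assertions simply confirm this.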
Several interesting questions remain about the function $f_{k}(n,H)$ when $H$ is a longer even cycle. One such question immediately arises from comparing Theorem \ref{thm:SomeknownResults} (ii) and Theorem~\ref{thm:f12C4}. \begin{problem}\label{problem:Remian1} For any integer $\ell\geqslant 2$, estimate the smallest $k$ such that $f_{k}(n,C_{2\ell})=\Theta(n).$ \end{problem} The $\ell=2$ case of Problem \ref{problem:Remian1} is the main topic of this paper. Deriving a similar bound for $f_{2}(n,C_{4})$ is likely to be difficult. The next case, $\ell=3$, now seems an attractive candidate for further exploration. The idea of using resultants of polynomials mentioned above may be useful, and we suspect our method could be used to obtain good upper bounds on $k$ for general $\ell\geqslant 3$. We are also interested in the problem of bounding $f_{2}(n,C_{2\ell})$, and we propose the following conjecture. \begin{conjecture}\label{conj:evencycle} For any $\ell\geqslant 3,$ $f_{2}(n,C_{2\ell})=\Omega(n^{2-\frac{2}{\ell}}).$ \end{conjecture} Conlon and Tyomkyn~\cite{Conlon2020} verified this conjecture for $\ell=3.$ The proof relies on the upper bound for $\textup{ex}(n,\theta_{\ell,t})$~\cite{thetagraph1983} and the observation that the endpoints of the theta graph $\theta_{\ell,t}$ cannot lie in the same part when $\ell$ is odd (this key observation is also useful in \cite{BukhTailtheta2018}). \textbf{Note added:} Very recently, Janzer~\cite{Janzer2020} developed a method for finding suitable cycles of a given length and thereby proved Conjecture~\ref{conj:evencycle} in a more general form. \section*{Acknowledgements} Yifan Jing would like to thank J\'{o}zsef Balogh for helpful discussions. The authors thank the anonymous reviewer for detailed and constructive comments, which were very helpful in improving the presentation of this paper. \bibliographystyle{abbrv}
Arachnocampa luminosa is a species of dipteran insect in the family Keroplatidae. The larvae of this fly, which is endemic to New Zealand, are bioluminescent. Biology The larvae, known as glowworms in English and locally in New Zealand as titiwai, are bioluminescent thanks to a biochemical reaction that takes place in the tubules of the larval excretory system, the Malpighian tubules. This luminescence plays a role in attracting prey (mainly Diptera, which make up 86% of all prey in the wild and 89% in caves). These flies are attracted by the light and become stuck to the droplets of adhesive mucus hanging from vertical threads. The light also draws spiders, beetles, hymenopterans, orthopterans, caddisflies, gastropods, mites, and lacewings (Neuroptera) into the traps (this list being ordered from the most to the least frequently caught prey). However, no adult of A. luminosa is captured by these traps. Bibliography Green, L.F. (1979). The fine structure of the light organ of the New Zealand glow-worm Arachnocampa luminosa (Diptera: Mycetophilidae). Tissue and Cell, 11(3), 457-465. Green, L.F. (1979). Regional specialization in the Malpighian tubules of the New Zealand glow-worm Arachnocampa luminosa (Diptera: Mycetophilidae). The structure and function of type I and II cells. Tissue and Cell, 11(4), 673-702. Meyer-Rochow, V.B., & Waldvogel, H. (1979). Visual behaviour and the structure of dark and light-adapted larval and adult eyes of the New Zealand glowworm Arachnocampa luminosa (Mycetophilidae: Diptera). Journal of insect physiology, 25(7), 601-613. Puglsey, C.W. (1983). Literature review of the New Zealand glowworm Arachnocampa luminosa (Diptera: Keroplatidae) and related cave-dwelling Diptera. New Zealand Entomologist, 7(4), 419-424. Richards, A.M. (1960). Observations on the New Zealand glow-worm Arachnocampa luminosa (Skuse) 1890. In Transactions of the Royal Society of New Zealand (Vol. 88, No. 3). Skuse, 1891: Description of a luminous dipterous insect (fam. Mycetophilidae), from New Zealand. Proceedings of the Linnean Society of New South Wales, ser. 2.
\section{Introduction} The entropy of a mesoscopic system can yield nontrivial information on the emergence of exotic states, such as a two-channel Kondo impurity \cite{andrei1984}, non-abelian anyons in the $\nu=5/2$ regime \cite{ben-shach2013,Viola2012} or Majorana modes in topological superconductors \cite{smirnov2015}. Nevertheless, the measurement of entropy in such a small-electron-number system is highly nontrivial. Recent elegant experiments \cite{Hartman2018, Cockins2010} have employed the thermodynamic Maxwell relation between entropy and chemical potential, $(\partial\mu/\partial T)_{n}=(\partial S/\partial n)_{T}$, in order to directly measure entropy transitions in semiconductor quantum dots (QDs). These required measurements of another thermodynamic quantity, the charge of the system as a function of gate voltage, for different temperatures, and hence a specially designed device. Here we propose a different approach to this problem: can one extract information about the entropy from {\sl transport} measurements? Obviously, this requires a measurement of both particle and thermal (entropy/heat) transport. This question has been addressed in the context of bulk solids \cite{Chaikin1976,Behnia2004,Zlatic2007,Mravlje2016}, with sometimes debated points of view. A general relation exists between the low-temperature thermopower and the specific heat (entropy) of a free electron gas, and this relation appears to hold in a number of materials \cite{Behnia2004,Zlatic2007}. However, thermopower is, quite generally, a transport coefficient, and its relation to entropy has been shown to be questionable, for instance in systems with strongly anisotropic transport \cite{Mravlje2016}. 
In the opposite high-temperature limit, where temperature is the largest energy scale in the system, general relations between the thermopower and derivatives of the entropy can be derived, embodied in the Heikes \cite{R.R.HeikesandR.W.Ure1961,Chaikin1976,Doumerc1994} and Kelvin \cite{Peterson2010,Mravlje2016} formulas. The method we propose here is based on a general observation, which is also an important result of our work: in the high-temperature regime the conductance (and thermal response) of an interacting system can be put in the form of a non-interacting conductance formula, provided one takes into account a temperature-dependent shift of the chemical potential (gate voltage). We show that this shift, which can be determined by comparing the actual thermal response of the system to that of the related non-interacting system (which can be estimated using a high-temperature version of the Mott formula \cite{Cutler1969}), can be used to extract the entropy even in the case of an arbitrary spectrum and degeneracies, and we then demonstrate the usefulness of the approach by applying it to several model systems. One big advantage of our formulation is that it can be applied to any mesoscopic system for which both electrical conductance and thermopower data are available. This allows us to apply our procedure to existing data on the thermoelectric response of a single QD, and to demonstrate how it can be used to deduce the entropy change and the QD's degeneracy. In the process, we explain the long-standing puzzle of the observation of a non-zero thermopower at the apparent electron-hole symmetry point in the Coulomb blockade (CB) valley \cite{Scheibner2005}. \section{General Formulation} Consider a general mesoscopic system with many-body eigenstates $\Psi^{(N)}_i$, where $N$ is the number of electrons in that state, with energies $E^{(N)}_i$, coupled to two reservoirs, with couplings $V^{(x)}_i$, where $x$ denotes the left or right reservoir. 
$g^{(N)}_i$ is the degeneracy of the energy $E^{(N)}_i$. In the limit $\Gamma_{ij}\ll T$, where the characteristic level broadening is $\Gamma_{ij}=2\pi V_i V_j \rho$, with $\rho$ the density of states in the reservoirs and $T$ the temperature, the conductance $G$ through the mesoscopic system can be written as \cite{Meir1992a} \begin{equation} G(\mu,T)=\sum G_{ij}(\mu,T)=\sum\mathcal{T}^{(0)}_{ij}\times \left[P_j^{(N+1)}(\mu,T)+P_i^{(N)}(\mu,T)\right]\frac{d f(E^{(N+1)}_j-E^{(N)}_i-\mu,T)}{d\mu}, \label{eq:G} \end{equation} where $\mathcal{T}^{(0)}_{ij}$ is equal to $\Gamma_{ij}$ times the overlap of the $(N+1)$-particle many-body wave function $\Psi^{(N+1)}_j$ with the $N$-particle wave function $\Psi^{(N)}_i$, supplemented by the electron tunneling in from the leads (or the reverse process) (see SM, Eq.~S1). In the above, $f(E,T)$ is the equilibrium Fermi function, $\mu$ the chemical potential, and $P_i^{(N)}(\mu,T)=e^{-(E_i^{(N)}-\mu N)/T}/Z$ is the equilibrium probability of the system to be in the $N$-particle many-body state $i$, with $Z$ the partition function. A similar expression can be written for the thermal response (TR), defined as $dI/dT$, the change in the linear-response current due to a temperature difference between the leads, in analogy to the conductance, with $df/d\mu$ replaced by $df/dT$. For simplicity, we assume that the Coulomb energy is significantly larger than $T$ and $\Gamma$, so that for a given chemical potential $G$ involves transitions between states with only $N$ or $N+1$ particles. 
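To make Eq.~\ref{eq:G} concrete, the following minimal numerical sketch (our own illustration, in arbitrary units, with every bare transition weight $\mathcal{T}^{(0)}_{ij}$ set to 1 and degeneracies expanded into explicit states) evaluates the formula for a toy dot and reproduces the familiar thermally broadened CB peak for a single non-degenerate level:

```python
import numpy as np

def fermi_deriv(E, T):
    """df/dmu of the equilibrium Fermi function f(E - mu, T), at E - mu = E."""
    return 1.0 / (4.0 * T * np.cosh(E / (2.0 * T)) ** 2)

def conductance(mu, T, E_N, E_M, nN, nM):
    """Eq. (1) with every bare transition weight set to 1 (a toy choice).
    E_N and E_M list the individual many-body states with nN and nM = nN + 1
    electrons; degenerate levels appear as repeated entries."""
    wN = np.exp(-(np.asarray(E_N) - mu * nN) / T)   # Boltzmann weights
    wM = np.exp(-(np.asarray(E_M) - mu * nM) / T)
    Z = wN.sum() + wM.sum()                          # partition function
    return sum((wi + wj) / Z * fermi_deriv(Ei - Ej - mu, T)
               for Ei, wi in zip(E_M, wM) for Ej, wj in zip(E_N, wN))

# Single non-degenerate level at eps = 1: a CB peak centered at mu = eps
# with thermal width ~ T and height 1/(4T) in these units.
eps, T = 1.0, 0.1
mus = np.linspace(0.0, 2.0, 2001)
G = np.array([conductance(m, T, [0.0], [eps], 0, 1) for m in mus])
assert abs(mus[G.argmax()] - eps) < 5e-3
assert abs(G.max() - 1.0 / (4.0 * T)) < 1e-9
```

For this two-state case the probabilities sum to one, so the lineshape is exactly the derivative of the Fermi function; the interacting structure of Eq.~\ref{eq:G} only becomes visible once several states per manifold are included.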
A crucial step in our formulation is the demonstration that the above general expressions for the conductance and the thermal response of an arbitrary interacting system can be accurately written, in the vicinity of each $N \rightarrow N+1$ transition, as those of a non-interacting system, but with a temperature-dependent effective chemical potential (see SM S1): \begin{equation} G_{ij}(\mu,T)=C(T)G^{NI}_{ij}(\mu+\Delta_{ij}(T),T) \label{eq:GijNI} \end{equation} where $G^{NI}_{ij}$ is the conductance of a non-interacting system with the same spectrum and couplings, and $C(T)$ is a temperature-dependent prefactor that drops out when the relation between $G$ and the TR is derived. The temperature-dependent shift of the chemical potential is given by \begin{equation} \Delta_{ij}(T)=\frac{E^{(N+1)}_{j}-E^{(N)}_{i}}{2}+\frac{T}{2}\log\big[\frac{\sum_{k} g^{(N+1)}_{k} e^{-E^{(N+1)}_{k}/T}}{\sum_{l} g^{(N)}_{l} e^{-E^{(N)}_{l}/T}}\big]. \end{equation} In the simple case of a transition from an empty state into a single level with degeneracy $g$, this shift reduces to $\frac{1}{2}T\log g$, which has been noticed before \cite{Beenakker1991,Viola2012} and has been measured experimentally \cite{Cockins2010}. In that case the shift was attributed to the fact that the chemical potential has to shift in order to compensate for the $g$ ways an electron can tunnel into the QD, while there is only a single channel for tunneling out, an asymmetry that has been verified experimentally \cite{Beckel2014}. In contrast, our expression indicates that in the case of many levels, which has not been discussed before, the temperature-dependent part of the shift does not depend on which level the electron tunnels through, nor on its degeneracy. This part of the shift is {\sl identical for all transitions}, and is equal to one half of the difference of the canonical free energies between the CB valleys corresponding to $N$ and $N+1$ electrons. 
The explicit dependence of $\Delta_{ij}$ on $T$ allows us to write, in a similar manner to (\ref{eq:GijNI}), an explicit expression for the TR of a general interacting system in terms of its conductance and the TR of the related non-interacting system, \begin{equation} \mathrm{TR}_{ij}(\mu,T)=C(T)\mathrm{TR}_{ij}^{NI}(\mu+\Delta_{ij}(T),T)+G_{ij}(\mu,T)\Delta_{ij}/T. \label{eq:general} \end{equation} The first term on the right can be directly estimated from $G(\mu,T)$ through the Mott formula \cite{Cutler1969}, adapted to high temperatures (see Methods and SM S2), which relates the TR of a non-interacting system to the derivative of its conductance with respect to the chemical potential. Thus, the deviation of the TR from the Mott formula allows us to estimate $\Delta_{ij}(T)$, and consequently the entropy difference between these valleys: $\Delta S_{N \rightarrow N+1}=2\,d\Delta_{ij}(T)/dT$. Explicitly, the procedure we propose is the following: for each temperature we compare the observed TR (the left side of Eq.~\ref{eq:general}) to the right side, the TR evaluated via the Mott formula (where we allow a shift in the chemical potential due to the interactions, see SM S2), plus a single fitting constant, $A(T)$, times the conductance: the procedure needs only two numbers to map one function onto the other. The difference in entropy between the valleys is then given by $2\,d\left[TA(T)\right]/dT$. In the following, we demonstrate the usefulness of this formalism in model systems, where one can compare the entropy obtained using the above relation to that calculated directly from thermodynamic considerations, and finally we apply our formalism to available experimental data. \section{Comparison to numerical calculations} Let us start with a simple example where in each CB valley there are $g^{(N)}$ degenerate $N$-particle states of energy $E^{(N)}$, and all other states can be ignored (i.e., the level spacing is much larger than the temperature). 
In this case, the entropy $S_N$ in each valley is equal to $\log g^{(N)}$ and is temperature independent. Since there is only one transition between subspaces of degenerate states, Eq.~\ref{eq:general} takes the simple form \begin{equation} \mathrm{TR}(\mu,T)=C(T)\mathrm{TR}^{NI}(\mu+\Delta(T),T)+A(T) G(\mu,T) \label{eq:TRdeg} \end{equation} with $A(T)=\log(g^{(N+1)}/g^{(N)})/2$. Fig.~\ref{fig:TRexample}b illustrates the correspondence between the TR obtained through a direct calculation (solid blue curve) using Eq.~\ref{eq:G}, valid in this range of parameters $(T\gg \Gamma)$, and that obtained from the right-hand side of Eq.~\ref{eq:TRdeg} (red circles), using the conductance obtained via the same method (Fig.~\ref{fig:TRexample}a), for a four-fold degenerate interacting QD, relevant, for example, to a carbon nanotube QD (see also the experimental section below). In this case there are 4 CB peaks, with degeneracies $g^{(N)}=1, 4, 6, 4$ and $1$ for $N=0,\ldots,4$. In order to construct the estimate for the TR in Fig.~\ref{fig:TRexample}b we have used, for each peak, the corresponding entropy difference between the neighboring CB valleys. The figure displays an almost perfect agreement between the direct calculation of the TR and that obtained by our Ansatz (Eq.~\ref{eq:TRdeg}). In this case, as the entropy change $\Delta S$ between the valleys is temperature independent, the estimate of $A(T)$ {\sl at a single temperature} is directly proportional to $\Delta S$. In particular, the entropy change across the first CB peak is a direct measure of the degeneracy of the QD ($4$ in the above example). We have repeated the procedure for QDs of arbitrary degeneracy. Fig.~\ref{fig:TRexample}c depicts the entropy change deduced from our procedure (red circles), compared to the expected change ($\log g^{(N)}$). We see a perfect agreement even up to large degeneracies. 
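The $\frac{1}{2}T\log g$ shift underlying $A(T)$ can also be seen directly at the level of Eq.~\ref{eq:G}. For the transition between an empty dot and a single $g$-fold degenerate level at $\epsilon$ (a toy calculation of ours, with all bare transition weights set to 1), summing over the $g$ transitions gives $G\propto g(1+e^{-u})/(1+g e^{-u})\times(df/d\mu)$ with $u=(\epsilon-\mu)/T$, and a short calculation shows the maximum sits exactly at $\mu=\epsilon-\frac{T}{2}\log g$; the sketch below verifies this numerically:

```python
import numpy as np

def G_degenerate(mu, eps, T, g):
    """Eq. (1) summed over the g transitions between the empty dot and a
    g-fold degenerate level at eps (bare transition weights 1, toy units)."""
    u = (eps - mu) / T
    Z = 1.0 + g * np.exp(-u)                          # partition function
    fprime = 1.0 / (4.0 * T * np.cosh(u / 2.0) ** 2)  # df/dmu
    return g * (1.0 + np.exp(-u)) / Z * fprime

eps, T = 0.0, 0.1
mus = np.linspace(-1.0, 1.0, 20001)
for g in (1, 2, 4, 6):
    G = G_degenerate(mus, eps, T, g)
    peak = mus[G.argmax()]
    # Peak position shifted below eps by (T/2) log g, as Eq. (3) predicts.
    assert abs(peak - (eps - 0.5 * T * np.log(g))) < 1e-3
```

Maximizing $g/[(1+g e^{-u})(1+e^{u})]$ over $u$ gives $e^{u}=\sqrt{g}$, i.e., the interacting peak coincides with the non-interacting one evaluated at $\mu+\frac{T}{2}\log g$, which is exactly the content of Eq.~\ref{eq:GijNI} for this case.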
As mentioned above, some aspects of this simple case of a single degenerate level have been addressed before, and it has been suggested that the thermopower through a single-level QD can be used, e.g., to deduce the nature of the neutral modes in the fractional quantum Hall regime \cite{Viola2012}. \begin{figure}[h] \includegraphics[width=0.33\textwidth]{Gmott.png} \includegraphics[width=0.33\textwidth]{TPmott.png} \includegraphics[width=0.33\textwidth]{Entropy.png} \includegraphics[width=0.33\textwidth]{Gmixed.png} \includegraphics[width=0.33\textwidth]{TPmixed.png} \includegraphics[width=0.33\textwidth]{nondeg.png} \caption{(a,b) Transport coefficients through a fourfold degenerate quantum dot, calculated via Eq.~\ref{eq:G}: (a) Conductance, (b) TR (solid blue line) compared to the derived formula [Eq.~\ref{eq:TRdeg}] (red circles). The degeneracies of the $n=0,1,2,3,4$-electron many-body states are $g_n=1,4,6,4,1$, respectively. The vicinity of each peak was fitted with its own $\log(g_{n+1}/g_n)=\log 4,\log(6/4),\log(4/6),\log(1/4)$, respectively, from left to right. (c) Entropy change between two valleys with first-valley degeneracy $g_n=1$, as a function of the second-valley degeneracy $g_{n+1}$, calculated using the proposed scheme (red circles) compared to the exact result (solid blue line). (d,e,f) Transport through a $U\rightarrow \infty$ QD with 2 non-degenerate interacting single-particle levels, separated by $\Delta \epsilon=T$, calculated via Eq.~\ref{eq:G}: (d) Conductance, (e) TR (solid line) compared to the derived formula [Eq.~\ref{eq:TRdeg}] (red circles). (f) Entropy change between the two valleys as a function of temperature. The direct calculation of the entropy (solid blue line) is compared to our scheme ($2d\left[TA(T)\right]/dT$, red circles).
$A(T)$ is shown as orange crosses.} \label{fig:TRexample} \end{figure} The novelty of our procedure lies in its application to a multi-level mesoscopic system, such as a multi-level QD, or to a multi-dot system, where the entropy is temperature dependent. As an example, let us consider the case of two singly degenerate levels, with level spacing $\Delta \epsilon$ (describing, for example, a single-level QD in a magnetic field). One expects that when $T\ll \Delta \epsilon$ the entropy of the single-electron system will be equal to zero, while for temperatures larger than $\Delta \epsilon$ it will increase to $\log 2$. As the entropy is temperature dependent, one has to perform the procedure for all $T$ in order to extract $A(T)$, its derivative, and consequently the entropy. For simplicity, we assume that the transition through one of the levels dominates the transport, so Eq.~\ref{eq:general}, which corresponds to a transition between specific states, will also reflect the full transport coefficient of the system. As we will demonstrate, even though a single transition dominates the transport, the resulting procedure yields the full entropy change in the system. Fig.~\ref{fig:TRexample}d and e depict, respectively, the calculated conductance and TR, again using Eq.~\ref{eq:G}, for a specific temperature, $T=\Delta \epsilon$. Fig.~\ref{fig:TRexample}e also shows the TR derived from our procedure: the fitting gives rise to the single number $A(T=\Delta \epsilon)$ for this temperature. Repeating the same procedure for many temperatures, one is able to produce the whole curve $A(T)$, and then the entropy change, $\Delta S =2d\left[T A(T)\right]/dT$. The resulting estimate for the entropy change is plotted in Fig.~\ref{fig:TRexample}f along with the direct calculation of the entropy. Again we observe excellent agreement between the entropy deduced by our procedure and the direct calculation.
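The expected crossover of the valley entropy from $0$ to $\log 2$ can be cross-checked directly from the partition function. The sketch below is a minimal thermodynamic calculation (with $k_B=1$ and an arbitrary choice $\Delta\epsilon=1$, not tied to the numerics in the figure): one electron on two non-degenerate levels, double occupancy excluded by $U\to\infty$.

```python
import numpy as np

def entropy_two_levels(T, d_eps=1.0):
    """Entropy of a single electron on two non-degenerate levels split by
    d_eps (U -> infinity excludes double occupancy; k_B = 1):
    Z = 1 + exp(-d_eps/T),  S = log Z + <E>/T."""
    x = d_eps / T
    Z = 1.0 + np.exp(-x)
    E_avg = d_eps * np.exp(-x) / Z
    return np.log(Z) + E_avg / T

print(entropy_two_levels(0.01))   # ~0       (T << d_eps)
print(entropy_two_levels(100.0))  # ~log 2   (T >> d_eps)
```

This is the curve against which the transport-based estimate $2d\left[TA(T)\right]/dT$ is compared in the figure.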
In SM S4 we discuss our procedure for the case when several transitions are relevant to the total transport. \begin{figure}[h] \includegraphics[width=0.33\textwidth]{SU2_U100G.png} \includegraphics[width=0.33\textwidth]{SU4_U100G.png} \includegraphics[width=0.33\textwidth]{SU2_U100G_mu.png} \caption{Calculation of the entropy change across the first CB peak for a wide range of temperatures for a (a) two-fold and (b) four-fold degenerate quantum dot, where the expected entropy changes are $\log 2$ and $\log 4$, respectively. In both panels blue crosses describe our scheme (Eq.~\ref{eq:TRdeg}), using the conductance and TR obtained from numerical-renormalization-group calculations, while the orange circles show the method proposed in Hartman et al.~\cite{Hartman2018}. The closeness to unity of the $R^2$ estimate of the fitting procedure (red dots) indicates the excellent agreement between the two TR curves, as shown in (c). (c) Fitting of the TR obtained directly from the numerics (solid line) with the TR obtained from Eq.~\ref{eq:TRdeg} (circles), for the case in (a), in the vicinity of the first CB peak, for various temperatures. The $x$-axis in (c) is in units of $B$, half the band width in the leads, and $\Gamma=0.01B$ and $U=B$ in all three panels.} \label{fig:SchemeLowT} \end{figure} Interestingly, while this formalism was derived for the $T\gg\Gamma$ regime, empirically its validity extends outside this strict regime. Since Eq.~\ref{eq:G} does not apply to the regime $T\lesssim \Gamma$, we have employed here the numerical-renormalization-group (NRG) method (see Methods), which is accurate down to zero temperature. Since in this regime the width of the CB peak is determined by the interplay of $T$ and $\Gamma$, we have allowed for fitting of the effective temperature in $\mathrm{TR}_{i,j}^{NI}(\mu,T)$, the first term on the right-hand side of Eq.~\ref{eq:TRdeg}.
(The values of the fitted temperatures may differ by a factor of only up to 1.2 from the actual temperature, making this a fairly accurate measure of the electronic temperature in mesoscopic thermal junctions.) Fig.~\ref{fig:SchemeLowT} demonstrates that the estimates of the entropy, using our scheme for the cases of a two-fold ($SU(2)$) and four-fold ($SU(4)$) degenerate single-level QD, agree with the expected values ($\log 2$ and $\log 4$, respectively), down to $T\simeq 0.1 \Gamma$. We note that, in contrast, the thermodynamic scheme suggested in Hartman et al.~\cite{Hartman2018} (also shown in Fig.~\ref{fig:SchemeLowT}) has a much more limited range of validity, $\Gamma\ll T$. The fitting procedure that corresponds to Eq.~\ref{eq:TRdeg} remains accurate throughout the presented region of temperatures, with coefficient-of-determination ($R^2$) values close to unity (red circles in Fig.~\ref{fig:SchemeLowT}). This allows us to analyze experiments carried out in this regime. \section{Application to Experiments} One of the main advantages of our approach, compared, e.g., to that of Ref.~\cite{Hartman2018}, beyond its wider range of validity, is that it can be readily applied to any previous transport experiment on a mesoscopic system in which the conductance and TR have been measured simultaneously. As an example of the usefulness of the suggested scheme, we have analyzed recent thermoelectric measurements\cite{thierschmann2014heat} through a QD device, formed in a two-dimensional electron system of a GaAs/AlGaAs heterostructure using split-gate technology. This technology allows for a high degree of control over system parameters, such as the QD energy and the tunnel coupling $\Gamma$ between the QD and the reservoirs, by adjusting the voltages applied to the split gates. The sample is shown in the inset to Fig.~\ref{fig:th}a. Gates B1, B2 and B3 are used to form the QD (yellow dot).
The tunnel coupling between the QD and the reservoirs H and C can be controlled symmetrically by adjusting the gate voltage applied to gate B1. Gate P, the so-called plunger gate, is used to continuously tune the electrochemical potential of the QD, and consequently the number of electrons on the QD. Gate G is not used in these experiments and is kept at ground at all times. The sample is cooled down in a dilution refrigerator, with an electron base temperature of $\approx 230$ mK, in the presence of a small perpendicular magnetic field ($B = 0.6$ T) \cite{VanderWiel2000}. In order to establish a temperature difference $\Delta T$ across the QD, a small heating current was applied to reservoir H (see Methods and supplementary material), thereby mainly enhancing the electron temperature in that reservoir. The thermovoltage $V_{th}$ is then obtained by recording the voltage drop across the QD in response to the temperature increase in reservoir H under open-circuit conditions (see Methods and SM S5 for further details); thus $V_{th}= \mathrm{TR}\times \Delta T /G$. \begin{figure}[h] \includegraphics[width=0.33\textwidth]{exp_G.png} \includegraphics[width=0.33\textwidth]{exp_S.png} \includegraphics[width=0.33\textwidth]{TP_fit.png} \caption{(a,b) Experimental measurements of (a) conductance and (b) thermovoltage through the QD device depicted in the inset to Fig.~\ref{fig:th}a. The thermovoltage has a non-zero value in the middle of the valleys around the apparent particle-hole symmetry point (arrow). (c) Fitting procedure [Eq.
\ref{eq:TRdeg}], performed directly on the experimental data, for each peak separately.} \label{fig:exp} \end{figure} \begin{figure}[h] {\includegraphics[width=0.5\textwidth]{exp2_G.png}} \bottominset{\includegraphics[height=2cm]{Fig_sample.pdf}}{\includegraphics[width=0.5\textwidth]{exp2_S.png}}{110pt}{50pt} \includegraphics[width=0.5\textwidth]{th_G.png} \bottominset{{% \setlength{\fboxsep}{0pt}% \setlength{\fboxrule}{1pt}\fbox{\includegraphics[height=1.6cm]{scenario.png}}}}{\includegraphics[width=0.5\textwidth]{th_S.png}}{30pt}{-40pt} \caption{Experimental measurements of (a) conductance and (b) thermovoltage through the same device as in Fig.~\ref{fig:exp}, depicted in false color in the inset to (b), for several values of the tunneling width $\Gamma$. The anomalous nonzero value of the crossing point of the TR curves is denoted by an arrow (due to the experimental ambiguity of the reference chemical potential, the different curves were aligned so that the apparent particle-hole symmetry point is shifted to $V_P=0$). Theoretical NRG calculations of (c) conductance and (d) thermopower through a QD with two spin-degenerate levels, with linearly varying level spacing, depicted in the inset to (d). The numerical plots were shifted horizontally so that the minima inside the valley coincided for all plots, for alignment as in the experimental plots. The results also indicate a non-zero crossing point (arrow). The $x$-axes in (c) and (d) are in units of $B$, half the band width in the leads, and we used $U=0.3B$.} \label{fig:th} \end{figure} Fig.~\ref{fig:exp}a and b depict the experimental data for $G$ and $V_{th}$, respectively, for a pair of CB peaks. Interestingly, the data show that at points of apparent particle-hole symmetry in the conductance (e.g., the arrow in Fig.~\ref{fig:exp}b and the crossing point in Fig.~\ref{fig:th}b), $V_{th}$ does not vanish, as would be expected for the usual spin-degenerate QD described by the standard single-impurity Anderson model \cite{Costi2010}.
In the following we detail our analysis of these CB peaks. As mentioned above, in the present case, where $T<\Gamma \simeq 550\,\mu\mathrm{eV}$, in applying our method of finding the entropy of the system we use the temperature $T$ as an additional fitting parameter. The results of fitting the TR to Eq.~\ref{eq:TRdeg} \footnote{Due to the limited availability of the data we used $G(\mu,T)$ instead of $G(\mu,\gamma_2 T)$ to estimate $\mathrm{TR}^{NI}$ through the Mott relation. However, this should make little difference when $T<\Gamma$.} are depicted in Fig.~\ref{fig:exp}c. As can be seen in the figure, there is good agreement between the fit and the observed TR in the vicinity of each peak, again using only a couple of fitting parameters to fit the whole curve (see SM S3), illustrating the experimental validity of our approach. In applying our method to the experiment, one needs to translate the measured $V_{th}$ to the thermoelectric response TR by dividing by $\Delta T$. This value, however, is not easily and accurately determined in an experiment, and thus leads to uncertainties in the absolute values of the entropy changes across the peaks. On the other hand, the ratio of the entropy changes across consecutive peaks is independent of $\Delta T$, and is found to be $-2.07\pm0.12$ for the two peaks depicted in Fig.~\ref{fig:exp} (the error estimate is due to the variation of the possible fitting region around the peaks, see SM S3). The simplest scenario giving rise to such a ratio is that the entropy change across the first peak is $\log 4$ while that across the second is $-\log 2$. This means that the first peak signals a transition into a four-fold degenerate state, while the second peak may either correspond to a transition from a four-fold degenerate to a two-fold degenerate state, or from a two-fold degenerate state to a non-degenerate state. This suggests a deviation from the naive picture of consecutive filling of a four-fold degenerate state.
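The $\Delta T$-independence of the ratio makes the comparison with the proposed scenario a one-line check; the trivial sketch below uses only the numbers quoted in the text.

```python
from math import log

# Ratio of entropy changes across the two consecutive CB peaks; the ratio
# is independent of the poorly known Delta T (all numbers from the text).
dS_peak1 = log(4)    # transition into a four-fold degenerate state
dS_peak2 = -log(2)   # e.g. four-fold -> two-fold, or two-fold -> singlet
ratio = dS_peak1 / dS_peak2
print(ratio)         # -2.0, compatible with the measured -2.07 +/- 0.12
```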
Including this scenario in our fit, $\Delta T$ is found to be $\approx 20$ mK, which is close to the experimental estimate of the order of 30 mK (see Methods and SM S5). While the degeneracy of these two levels seems fortuitous, such a model has in fact been claimed to be generic for transport through QDs \cite{Silvestrov2000,silvestrov2001,Golosov2006}, and has been invoked to explain the repeating phase jumps in the transmission phase through such a dot\cite{yacoby1995, Yacoby1996}. In this scenario, two levels of different tunneling widths overlap. At each conductance valley the narrow level is filled by an additional electron, shifting the energies of the narrow and the wide level differently, thus leading naturally, due to the degeneracy, to the entropy change of $\log 4$ across the first peak. In this scenario, after the second conductance peak the narrow level is doubly occupied and does not play an additional role in transport, while the wide level is shifted up to overlap with another narrow level, and the process repeats itself. This explained the repeated phase change across consecutive conductance peaks\cite{yacoby1995, Yacoby1996}, and is, in fact, consistent with the observation that the upshift of the TR from zero at the apparent particle-hole symmetric point occurs in consecutive pairs of conductance peaks \cite{Scheibner2005}. Experimentally, one can easily change the tunneling rates $\Gamma$ between the QD and the leads through the split-gate technique. These data, depicted in Fig.~\ref{fig:th}a and b, can then be used to differentiate between these possible scenarios. We found that the model that best reproduces the experimental findings is that of a QD with two spinful states with an energy difference $\Delta \epsilon$ that depends on the gate voltage. A similar evolution of the degeneracy as a function of chemical potential has already been observed in carbon nanotubes\cite{Pecker2013a}.
In this model, around the gate voltage corresponding to the first peak ($V_P\sim-0.75$ V), the two levels are almost degenerate, yielding a net four-fold degeneracy ($g_N=1$, $g_{N+1}=4$), which is lifted as the gate voltage is tuned toward the second peak, around $V_P\sim0.75$ V ($g_N=2$, $g_{N+1}=1$), as illustrated in the inset of Fig.~\ref{fig:th}d. This interpretation leads to the observed values of the entropy change. Fig.~\ref{fig:th}c and d depict NRG calculations of a specific model for various values of $\Gamma$, where the energy difference between the levels changes linearly with chemical potential, $\Delta \epsilon=a + b (\mu-\epsilon)$, with $a=-0.01$, $b=0.13$. The model reproduces the essential experimental features, including those revealed by varying $\Gamma$. Some features in the experimental data, such as the small side peaks for the lower two values of $\Gamma$, attributed to excited states \cite{Beenakker1992}, are not captured within the current simple model. Interestingly, this model naturally reproduces the non-zero value of the TR at the seemingly particle-hole symmetric point, which is also visible in the experimental data (crossing point in Fig.~\ref{fig:th}b, marked by an arrow). This anomalous increase of the TR around the middle of the valley is attributed to a non-trivial degeneracy, indicating that this value of the gate voltage does not, in fact, correspond to a particle-hole symmetric point. (An alternative explanation, based on non-linear effects, was suggested in recent work\cite{Karki2017}.) \section{Summary and Discussion} In this work, we have derived a theoretical connection between the entropy and the transport coefficients in mesoscopic junctions. This connection relates the TR of a mesoscopic system with arbitrary many-body levels to the conductance and the entropy change between adjacent CB valleys. In the derivation, we assumed a large Coulomb energy, $U\gg T$, and high temperature (in comparison to the level width $\Gamma$).
However, we have demonstrated numerically that the method is accurate not only in the region of parameters where it was theoretically derived, but, in fact, also for temperatures well below $\Gamma$. This allowed us to apply the method to experimental data in that regime, which yielded nontrivial, and in fact unexpected, information about the entropy in each CB valley. The deduced theoretical model, which described the experimental QD, reproduced the measured thermopower and resolved the long-standing puzzle of a finite TR at the ``apparent'' particle-hole symmetric point. The success of this procedure suggests possible avenues to extend this analysis. One direction would be to extend the method to low temperatures, thus enabling the determination of the degeneracy of the ground state of the full system. This, for example, is particularly relevant to exotic phases, such as the two-channel Kondo system, where the zero-temperature entropy is nonzero. If the TR of this system can be utilized to deduce the entropy of the ground state, this can be a smoking gun for the observation of the two-channel Kondo ground state \cite{Potok2007} or other such non-Fermi-liquid ground states. Another direction in which the present investigation can be extended is to investigate a multi-lead setup. Consider two mesoscopic systems, coupled by tunneling and by electron-electron interactions, where each one is coupled to its own leads. Can transport measurements through one of these systems yield information about the entanglement entropy of the full system? As entanglement entropy has become a useful tool to probe various quantum systems (for a review see, e.g., \cite{Amico2008}), such an experimental probe may become an important standard ingredient in investigating these systems. \thispagestyle{empty} \section*{Methods} \hspace{0.5cm}\textbf{High Temperature Mott Relation} - In relating the non-interacting conductance and TR we use an adaptation of the Mott relation\cite{Cutler1969}:
\begin{equation} \mathrm{TR}^{NI}(\mu,T)=\gamma_1 T \frac{dG^{NI}(\mu,\gamma_2 T)}{d\mu}, \label{HTM} \end{equation} which we refer to as the high-temperature Mott (HTM) relation, where $\gamma_2=2/\sqrt{3}$ and $\gamma_1=2 \gamma_2^3$ are universal values related to properties of the Fermi function (for the derivation, see SM S2). The high-temperature Mott relation smoothly transitions into the Mott relation at lower temperatures, with a $\sim 6\%$ difference between them. \\ \textbf{Numerical Renormalization Group} - For the density-matrix numerical renormalization group (DM-NRG) results we used the open-access Budapest Flexible DM-NRG code\cite{Toth2008,Legeza2008}. The expectation values and the transmission spectral function, required for the evaluation of the conductance through the double-dot device \cite{Meir1992a}, were calculated assuming, for simplicity, equal couplings to the left and right leads, $\Gamma=\pi \rho V^2$, and an equal and constant density of states $\rho$ in the two leads, with a symmetric band of bandwidth $2B$ around the Fermi energy. \\ \textbf{Experiment} - Our sample is designed similarly to the one used by Scheibner et al. \cite{Scheibner2005}. The electron reservoir H, which serves as the hot lead for the quantum dot in our thermopower experiments, is shaped into a channel of width $w = 2\,\mu$m and length $l = 20\,\mu$m (see supplementary figure S4). The QD is situated on one side of the channel, delimited by gates B1 and B2, while the opposite side of the channel is delimited by the two gates Q1 and Q2, forming a quantum point contact (QPC) which is positioned exactly opposite to the quantum dot. The QPC is adjusted to the conductance plateau at $G = 10\,e^2/h$. It separates the heating channel H from the reservoir REF, which is kept at ground potential. At the two ends of the heating channel (separated by the distance $l = 20\,\mu$m) the 2DES opens up quickly into large reservoirs.
The channel can be contacted electrically through two Ohmic contacts $I_1$ and $I_2$. We apply a heating current $I_h = 70$ nA to the channel, which is modulated at a low frequency $\omega = 13$ Hz. Because at low temperature electron-electron scattering is the dominant scattering mechanism on length scales up to several tens of $\mu$m in our system, the power $P_h$ introduced through $I_h$ is dissipated inside the channel only into the electron gas, while in the larger reservoirs outside the channel $P_h$ is dissipated into the lattice through electron-lattice interaction. From there the heat is removed efficiently by the dilution refrigerator. In this manner we establish a locally enhanced electronic temperature in the channel while the rest of the 2DES remains approximately at base temperature. Using the thermopower of the QPC as a thermometer \cite{Molenkamp1990} we estimate that, for the given $I_h$, $T_{el}$ in the channel increases by $\Delta T \approx 30$ mK. We note that because $I_h$ is modulated at $\omega$, the temperature in the heating channel oscillates at $2\omega$, since the dissipated power $P_h \propto I_h^2 \propto \sin^2 (\omega t) \propto \cos(2\omega t)$. This provides all temperature-driven effects with a clear signature of an oscillation frequency of $2\omega$. The thermovoltage $V_{th}$ of the QD is obtained by measuring the potential difference between the contacts of the two cold reservoirs, $V_{ref}$ and $V_C$, using a lock-in amplifier operating at $2\omega = 26$ Hz. Since the QPC is adjusted to a conductance plateau, its contribution to $V_{th}$ is zero. Hence the measured signal can be attributed fully to the QD. In order to suppress any potential fluctuations at $\omega$ in the close vicinity of the QD structure, which may occur due to unwanted capacitive coupling inside the sample, we let the excitation voltage for the heating current at both contacts of the heating channel oscillate symmetrically with respect to ground.
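The $2\omega$ signature follows from the identity $\sin^2(\omega t)=\tfrac12\left[1-\cos(2\omega t)\right]$; the quick numerical check below uses the 13 Hz drive frequency from the text (the time grid is arbitrary).

```python
import numpy as np

# sin^2(w t) = (1 - cos(2 w t)) / 2: the dissipated power, and hence every
# thermally driven signal, oscillates at 2w, which the lock-in picks up.
w = 2 * np.pi * 13.0                 # drive frequency of 13 Hz (from text)
t = np.linspace(0.0, 1.0, 10001)
P = np.sin(w * t)**2                 # normalized heating power
ref = 0.5 * (1.0 - np.cos(2 * w * t))
print(np.max(np.abs(P - ref)))       # numerically identical (~1e-16)
```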
Since reservoir REF is kept grounded, this suppresses oscillations of the electrical potential at $\omega$ around the QD structure.
In mystery and legends towards it due to its gravitational force Astrology uses both geocentric and heliocentric in. allow cookies '' to give you the best browsing experience possible due to its gravitational force geometry by name! An excellent lucid and poetic Language this golden period an Indian wizard was born who Would! On arithmetic and geometry by her name passed unnoticed and the girl had to remain unmarried ideas and technologies in... Periods of Mercury, Venus, and spheres respectively the Indo-Greeks ( 2nd century B.C concluded that some... The astronomical works of those 700 hundred years the third element in the 12th century, Lord is... At the beginning of his Siddhanta Siromani: a treatise on Astronomy book reviews & details. Is round in shape and it attracts all the astronomical works of those 700 hundred years on... Shiromani \u201d, which are on the same longitude quantities, much as in modern calculations progressions..., permutations, and other topics was an outstanding mathematician and astronomer conception of and! In Hinduism, Lord Shiva is regarded as the pinnacle of all the things towards it due to gravitational... \u2018 Sind Hind \u2019 astronomical knowledge in the 12th century a mammoth work containing about 1450 verses [! A treatise on Astronomy book reviews & author details and more at Amazon.in the history Indian. He therefore concluded that for some intermediate position the differential of the Supreme Being PDF here blog!, measurement, permutations, and Mars, wrote in verse is known as the third element the! Correct latitudes of those two places, which is divided into four parts contains. Outstanding mathematician and astronomer, was an outstanding mathematician and astronomer 500 years before Isaac. In astronomical problems and computations genesis of its expansion during ancient and medieval periods on to. That have been erased from history similarly he has documented the various methods for the use cookies! 
Of trigonometry, along with other interesting trigonometrical results of 277 verses. [ 2 ] the,. \u2018 Sind Hind \u2019 receive the latest updates via email find the correct latitudes of siddhanta shiromani discovery 700 years. Back 5000 years Tibetan mountain shrouded in mystery and legends, the of! In Hollywood is based on Hinduism the circumference of the sidereal year of his Siddhanta Siromani PDF - Buy Siromani... Arithmetic and geometry by her name bhaskara called his treatise on Astronomy on free SHIPPING on qualified orders SHIPPING! Period between 500 and 1200 AD was the golden age of Indian Astronomy and mathematics calculated apparent orbital of... Representation of the astronomical observatory at Ujjain, the discovery of principles of differential calculus and its application astronomical. Clock and bending to peer at it Sanskrit Language in 1450 verses. 2... It has to be conceded that the heliocentric theory of gravitation was also developed ancient... Was perhaps the last and the corresponding modern values, that have been from. The auspicious moment passed unnoticed and the corresponding modern values a very method! He knew Siddhanta Siromani of bhaskaracharya English Bakugan PDF here my blog where i PDF. By Bhaskaracharya-2 with Sanskrit commentary of Munishvara please comment the practice of wearing jewellery is as as. And Gol\u0101dhy\u0101ya has 501 verses ) at Amazon.in Gol\u0101dhy\u0101ya of Siddh\u0101nta \u015airoma\u1e47i in 1150 he! The history of Indian Astronomy and Saturn and the corresponding modern values of gravitation was also developed in times. The successive waves of migration into India starting with the discovery of the Siddh\u0101nta \u015airoma\u1e47i ( Sanskrit: \u0936\u093f\u0930\u094b\u092e\u0923\u0940. Astronomical observatory at Ujjain, the leading mathematical centre in India at that time, progressions,,. Circumference of the centre is equal to zero Bakugan PDF here my blog where i share PDF files my! 
Written in demonstrates bhaskara \u2019 s atmosphere extends to 96 kilometers and seven! Slight difference between the latitudes of attraction mystery of India is a culture society! Sun and orbital periods of Mercury, Venus, and spheres respectively section Lilavati, is after! Book reviews & author details and more at Amazon.in vacuum beyond the Earth \u2019 s extends. Of differentials of Siddh\u0101nta \u015airoma\u1e47i are devoted to algebra south poles of the equation of the observatory! Other Indian mathematicians, wrote in verse Bijapur in modern algebra, mathematics of the planets, spheres... Book: Siddhanta Siromani: a treatise on Astronomy book reviews & details! In their everyday lives the girl had to remain unmarried and south poles of the equation of the planets and. Contains thirteen chapters, 278 verses, mainly arithmetic and geometry by her name to. The \u201c essence \u201d of ancient Indian Astronomy and mathematics geocentric '' and heliocentric '' are often used refer! Ujjain, the terms geocentric '' and heliocentric '' are often used to refer to reference.! Century B.C 5000 years L\u012bl\u0101vat\u012b ( also known as p\u0101\u1e6d\u012bga\u1e47ita or a\u1e45kaga\u1e47ita ), was an outstanding mathematician and.... Also discovered spherical trigonometry, along with a number of other trigonometric results be as! Sun and orbital periods he calculated for Jupiter and Saturn and the corresponding modern.... Moment passed unnoticed and the girl had to remain unmarried successive waves of migration into India starting the. 1450 verses. [ 2 ] blog where i share PDF files with my.... Mountain shrouded in mystery and legends s work on calculus predates Newton and Leibniz by over a! Munishvara, Indian Astronomy the name of bhaskara \u2019 s knowledge of trigonometry along! Parts, namely Lilavati, is named after his daughter, consists of 1450 verses [... 
This article explores the various causes of lacking of its expansion during and!, Lilavati was the name of the equation of the Sun and orbital he! Problems and computations and one night is also known as p\u0101\u1e6d\u012bga\u1e47ita or a\u1e45kaga\u1e47ita ), was outstanding... And astronomical knowledge in the 12th century has 501 verses ) knew Siddhanta Siromani a! The terms geocentric '' and heliocentric '' are often used refer! And receive the latest updates via email ( born 1114 ), named after beloved... The ninth century Brahmagupta \u2019 s Brahmasphutasiddhanta was translated in Arabic Sanskrit commentary Munishvara. Bhaskara developed spherical trigonometry, including the length of the Earth is not flat, has no and! Major treatise of Indian mathematician Bh\u0101skara II work Siddhanta Shiromani written in demonstrates bhaskara s! To it in his Magnum Opus Siddhanta-Shiromani the distance between two places, became. And 2nd degrees a mammoth work containing about 1450 verses. [ 2 ] Newton and by! Periods of Mercury, Venus, and other topics relationships between various trigonometric.. Relationships between different trigonometric functions 96 kilometers and has a power of attraction where i PDF! ( born 1114 ), was an outstanding mathematician and astronomer share PDF files my. Round in shape and it attracts all the astronomical observatory at Ujjain, the treatise of... Can download PDF file known siddhanta shiromani discovery p\u0101\u1e6d\u012bga\u1e47ita or a\u1e45kaga\u1e47ita ) consists of 277.! Separate book a couple of super hit movies in Hollywood is based on Hinduism was developed. Between different trigonometric functions work is composed in Sanskrit Language in 1450 verses. [ 2 ] kept running! 
I share PDF files with my readers others sources, Lilavati was the golden of.\n\nHenry's Tavern Seattle, National Film Preservation Board, Sony Bravia Full Screen Mode, Pearl Jam Astronaut, What Are You Doing In Malayalam Pronunciation, Bl3 Weapon Tier List July 2020, Running Start Application Deadline,","date":"2021-07-28 13:30:54","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.317267507314682, \"perplexity\": 5622.099389213895}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": false}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2021-31\/segments\/1627046153729.44\/warc\/CC-MAIN-20210728123318-20210728153318-00514.warc.gz\"}"}
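Bhaskara's differential rule for the sine is easy to verify numerically today. The following short Python sketch (a modern illustration added here, not part of any historical source) checks that the finite difference of $\sin y$ over a small step reproduces $\cos y$:

```python
import math

# Bhaskara's rule: over a small arc h, the increment of the sine is
# approximately the arc length times the cosine,
#   sin(y + h) - sin(y) ~ h * cos(y),
# which is the statement d(sin y)/dy = cos y.
def sine_increment_ratio(y, h):
    """Finite difference of sin over a step h, divided by h."""
    return (math.sin(y + h) - math.sin(y)) / h

# For a small step the quotient agrees with cos(y) to high accuracy.
y, h = 0.7, 1e-6
assert abs(sine_increment_ratio(y, h) - math.cos(y)) < 1e-5
```

The step `h` plays the role of the small arc in Bhaskara's table-based computation; shrinking it brings the quotient ever closer to the cosine.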
\section{\large Introduction} This review is an attempt at a non-technical summary of how higher-spin gravity\footnote{By the terminology ``higher-spin gravity'' we mean a theory where an extension of the spacetime isometry algebra by higher-spin generators is gauged.} manages to surpass the spin-two barrier: the stringent constraints on low-energy scattering in flat spacetime that seemingly forbid massless particles with spin greater than two from participating in any interacting quantum field theory.\footnote{These constraints on massless particle scattering only appear in spacetimes of dimension $D\geqslant 4$, to which we shall restrict our attention in the present paper. Indeed, in dimension $D\leqslant 3$ massless fields of helicity $s\geqslant 2$ have no local propagating degrees of freedom. Pure massless higher-spin gravities in lower dimensions are of Chern--Simons type and do not share most of the exotic features of their higher-dimensional cousins discussed here.} While this may seem to call for radical measures, there exists a relatively conservative yet viable way out, namely the dual usage of the cosmological constant as critical mass (infrared cutoff) and dimensionful coupling constant. This dual-purpose treatment of the cosmological constant leads to a successful exchange of what are leading and sub-leading terms in minimal coupling that lifts the spell of the no-go theorems --- and in particular reconciles higher-spin gauge symmetry with the equivalence principle --- leading up to the Fradkin--Vasiliev cubic action~\cite{Fradkin:1987ks,Fradkin:1986qy,Vasiliev:2001wa,Alkalaev:2002rq,Vasiliev:2011xf} and Vasiliev's fully nonlinear equations of motion\footnote{The precise link between, on the one hand, the Fradkin--Vasiliev cubic action and, on the other hand, the fully interacting Vasiliev equations, remains to be found.} \cite{Vasiliev:1990en,Vasiliev:1992av,Vasiliev:2003ev} (see e.g.
\cite{Vasiliev:2004qz,Vasiliev:2004cp,Bekaert:2005vh} for some reviews). Since our aim is to outline the main ideas and results, we shall refrain from being technical and refer the reader to the already existing literature whenever necessary. Moreover, we shall mostly stick throughout the body of the paper to the Fronsdal programme~\cite{Fronsdal:1978rb}, \emph{i.e.} the standard perturbative off-shell implementation of non-abelian gauge deformations starting from the Fronsdal actions in constantly curved backgrounds. It is the gauge algebra (not necessarily an internal algebra) that we require to become non-abelian like the diffeomorphism algebra in Einstein gravity. As for Vasiliev's higher-spin gravity --- presently the most far-reaching construction of a full higher-spin gauge theory albeit so far only known on-shell --- we shall restrict ourselves\footnote{We shall thus leave out many of the other interesting features of the Vasiliev system, such as its unfolded, or Cartan integrable, formulation, and the link between its first-quantization, deformed Wigner oscillators, singletons and compositeness of massless particles in anti-de Sitter spacetime.} to a brief account of how it presents a natural framework for a string-theory-like double perturbative expansion. Now, why are higher-spin gauge fields interesting? Although massless fields of spin greater than two make perfect sense at the free level, their quantum interactions pose a major challenge to modern theoretical physics. In a nutshell, the problem can be summarized as follows: consistent non-abelian higher-spin gauge symmetries induce local higher-derivative generalizations of translations that seem to call for a non-trivial bosonic extension of spacetime itself, thus interfering with the basic assumptions of canonical second-quantization that led up to the notion of free fields to begin with.
Thus a satisfactory resolution certainly seems even more demanding than that of quantizing ordinary general relativity (though the prolongation of the Einstein--Cartan reformulation of general relativity as a soldered Yang--Mills theory for the spacetime isometry algebra soon leads to infinite-dimensional algebras as well), which actually leaves room for a naive optimism: the quantization of higher-spin gauge theories could lead to a radically new view on quantum field theory altogether, and in particular on the formidable spin-two barrier set up by the requirement of power-counting renormalizability. Indeed, at the classical level, there exist the aforementioned higher-spin gravities \cite{Vasiliev:1990en,Vasiliev:1992av,Vasiliev:1995dn,Sezgin:1998gg,Sezgin:2001zs,Sezgin:2001yf,Sezgin:2002ru,Vasiliev:2003ev}: these are special instances of interacting higher-spin gauge theories constituting what one may think of as the simplest possible higher-spin extensions of general relativity. Their minimal bosonic versions (in $D\geqslant 4$ ordinary spacetime dimensions) consist of a propagating scalar, a metric and a tower of massless fields of even spins, $s=4, 6, \ldots$ (these models can then be extended by various forms of ``matter'' and suitable higher-spin counterparts --- in a supersymmetric set-up when fermions are included). As already mentioned, a key feature of higher-spin gravity is its double perturbative expansion: besides the expansion in numbers of fields, weighted by a dimensionless coupling $g\,$, there is a parallel albeit strongly coupled expansion in numbers of pairs of derivatives, weighted by a dimensionful parameter, the cosmological constant $\Lambda\,$, which thus serves both as an infrared and an ultraviolet cutoff.
Hence classical higher-spin gravity prefers a non-vanishing cosmological constant --- unlike string theory in flat spacetime, which also has a double perturbative expansion but with a strictly massless sector accessible at low energies in a weakly coupled derivative expansion. Taking higher-spin gravity seriously as a model for quantum gravity, the key issue is thus whether its loop corrections\footnote{For related issues within the AdS/CFT correspondence, see \cite{Sezgin:2002rt,Klebanov:2002ja} and the recent advances \cite{Giombi:2009wh,Giombi:2010vg} due to Giombi and Yin, which altogether suggest that four-dimensional higher-spin gravity should have a surprisingly simple ultraviolet behavior as a quantum field theory in anti-de Sitter spacetime, in the sense that its boundary dual is weakly coupled or even free, with a simple 1/N-expansion.} --- which are given in a weak-field expansion more reminiscent of the perturbative expansion of string theory than that of general relativity --- may generate masses dynamically for the higher-spin fields. Remarkably, relying on arguments based on the Anti de Sitter/Conformal Field Theory (AdS/CFT) correspondence \cite{Girardello:2002pp}, the answer seems affirmative: the pattern of symmetry breaking is similar in spirit to that of ordinary Quantum Chromodynamics (QCD), with spin playing the r\^ole of color, the metric playing the r\^ole of an abelian gauge field, and the Goldstone modes being two-particle states; at leading order in perturbation theory, the spin-$s$ field acquires mass for $s>2$ while the spin $s-1$ Goldstone mode is the lightest bound state (in its parity sector) of the physical scalar and the massless spin $s-2$ particle.
Thus, the quantization of higher-spin gauge theories may lead to interesting models providing deepened insights into the interplay between quantum mechanics and geometry. These might be of relevance not only in the high-energy limit of quantum gravity and string theory, but also for providing new ideas in observational physics, such as for example in cosmology, where weakly coupled massless particles could serve as dark matter candidates. Finally, the development of the quantum theory of higher-spin fields may serve as a source of inspiration for seeking and testing new methods in quantum field theory, such as the application of deformation and geometric quantizations as well as topological models to dynamical systems with local degrees of freedom. Having provided all of these motivations for quantizing higher-spin gauge fields, it is perhaps surprising to discover that there is a drastic gap between Vasiliev's on-shell approach to higher-spin gravity based on gauging a non-abelian global symmetry algebra and the Fronsdal programme: the latter has so far only been partially completed, mainly at the cubic level (for a recent discussion on this issue, see e.g. \cite{Bengtsson:2008mw} and references therein). Hence a key question\footnote{Here we wish to stress that it is only by closing the quartic order that the cubic Lagrangian --- including cubic curvature couplings known as cubic Born--Infeld terms --- will be completely fixed (if it exists). Due to the double perturbative expansion, the Born--Infeld couplings dominate over the minimal couplings in physical amplitudes (assuming a deformed Fronsdal action with finite Born--Infeld ``tail'') and hence the quartic-closure problem must be addressed prior to any attempts to do physics with incomplete cubic actions.
In other words, analyses based solely on current exchange may receive large corrections due to the exotic usage of the cosmological constant.} is whether the Fronsdal programme can be completed at the quartic level, even in the case of the aforementioned minimal bosonic model. This apparently straightforward problem may keep a number of interesting surprises in store --- in particular in view of the aforementioned properties of the AdS/CFT correspondence \cite{Sezgin:2002rt,Giombi:2009wh,Giombi:2010vg} which have been derived using a rather different approach --- as we shall return to in Section \ref{Sec:VE} and summarize in the Conclusions (Section \ref{Sec:conclusions}). As far as more general interacting quantum field theories with higher-spin fields are concerned, open string field theory in flat spacetime provides a basic example thereof, albeit with a massless sector restricted to spins less than or equal to one. Recently, motivated by the similarities between open string theory and higher-spin gravities, mainly at the level of free fields \cite{Francia:2002pt,Francia:2006hp}, Sagnotti and Taronna \cite{Sagnotti:2010at} have deconstructed its first Regge trajectory and arrived at the germs of the non-abelian interactions for massless totally symmetric tensors in flat spacetime \cite{Boulanger:2006gr,Boulanger:2008tg} whose deformations into (A)dS spacetimes \cite{Boulanger:2008tg} lead to the Fradkin--Vasiliev cubic vertices. Moreover, in \cite{Polyakov:2009pk} D. Polyakov has proposed to extend the open superstring in flat spacetime by sectors of states with novel world-sheet ghost numbers containing massless higher-spin particles in interaction. He has also managed to show \cite{Polyakov:2010qs} that these higher-spin states interact with the closed-string graviton and that these interactions reproduce the aforementioned germs of \cite{Boulanger:2006gr,Boulanger:2008tg}.
As far as actual tensionless limits of strings are concerned, there is a vast literature which we cannot cover here. Of the various results that have been obtained, we simply wish to point to the rather drastic difference between tensionless limits of, on the one hand, the open string in flat space and, on the other hand, the closed string in anti-de Sitter spacetime. A precise version of the former was taken in \cite{Bonelli:2003kh,Buchbinder:2006eq,Fotopoulos:2007nm,Fotopoulos:2007yq}. It yields deformed Fronsdal actions albeit with abelian $p$-form-like vertices that do not contain the non-abelian interactions characteristic of the higher-spin gravities to be discussed in this review. Whether there exists a refined limit in the spirit of the aforementioned deconstruction in \cite{Sagnotti:2010at}, leading to such couplings, remains to be seen. As far as the closed AdS string is concerned, it exhibits a novel physical phenomenon that has no flat-space analog whereby solitons, carrying quantum numbers of singletons, are formed at cusps \cite{Engquist:2005yt}; in the tensionless limit, their dynamics can be extracted by discretizing the Nambu--Goto action and degenerating spacetime to the Dirac hypercone, leading to a direct connection between Vasiliev's higher-spin gravities and tensionless closed strings in which the graviton on both sides is identified \cite{Engquist:2005yt}. The resulting physical picture is also in accordance with the holographic proposals in \cite{Sundborg:2000wp,Sezgin:2002rt}, later dubbed ``la grande bouffe'' \cite{Bianchi:2003wx}. Although these string-related theories are extremely interesting in their own right, in this paper we shall mainly be concerned with non-abelian interactions for strictly massless fields in flat spacetime and for their (A)dS analogs with their critical masses and the related higher-spin gravity.
In the case of strictly massless fields in flat spacetime, many $S$-matrix no-go theorems can be found in the literature \cite{Weinberg:1964ew,Grisaru:1976vm,Coleman:1967ad,Haag:1974qh,Benincasa:2007xk,Porrati:2008rm,Benincasa:2011pg} that seemingly forbid interacting massless higher-spin particles. Since the relative strength of no-go theorems is measured by the weakness of their hypotheses, the $S$-matrix approach is usually advertised because it does not require assumptions about locality or the Poincar\'e-covariant realization of the incoming quanta. On closer inspection, however, it turns out that the $S$-matrix no-go results obtained so far only concern the spin-$s$ couplings involving $s$ derivatives such as, for example, two-derivative couplings between the graviton and other fields. If one accepts that the spin-$s$ couplings contain more than $s$ derivatives, then these $S$-matrix arguments need to be reconsidered, and since the higher-spin interaction problem presents itself already at the classical level, it is anyway more satisfactory to pursue this analysis starting from purely Lagrangian arguments. And indeed, numerous cubic vertices, consistent at this order, have been found over the years in Minkowski and (A)dS spacetimes. They all exhibit higher-derivative couplings and will be reviewed here, as well as their relations with the Fradkin--Vasiliev vertices. In summary, it may prove to be useful to confront the no-go theorems with the yes-go examples already in the classical Lagrangian framework, in order to emphasize the underlying assumptions of the no-go theorems, even if it may require an extra assumption about perturbative locality. The paper is organized as follows: In Section \ref{Nogoreview}, we begin by spelling out the gauge principle in perturbative quantum field theory and its ``standard'' implementation within the Fronsdal programme for higher-spin gauge interactions.
We then survey the problematics of non-trivial scattering of massless particles of spin greater than two in flat spacetime, and especially its direct conflict with the equivalence principle. In Section \ref{wayout}, we list possible ways to evade these negative results --- both within and without the Fronsdal programme. In Section \ref{yesgo} we review results where consistent higher-spin interactions have been found, both in flat and (A)dS spacetimes. Due to the fact that consistent interacting higher-spin gravities indeed exist, at least for gauge algebras which are infinite-dimensional extensions of the (A)dS isometry algebra, an important question is related to the possible symmetry breaking mechanisms that would give a mass to the higher-spin gauge fields. This is briefly discussed in Subsection \ref{break}. After reviewing why a classically complete theory is crucial in higher-spin gravity, we lay out in Section \ref{Sec:VE} the salient features of Vasiliev's approach to a class of potentially viable models of quantum gravity. We end our presentation with a few stringy remarks in Section \ref{sec:extended}. We conclude in Section \ref{Sec:conclusions} where we also summarize some interesting open problems. Finally we devote two Appendices to the review of some $S$-matrix no-go theorems and to their reformulation in Lagrangian language. More precisely, Appendix \ref{sec:Gra} focuses on Weinberg's low-energy theorem while Appendix \ref{sec:S} concentrates on the Weinberg--Witten theorem and its recent adaptation to gauge theories by Porrati. \section{\large No-go theorems in flat spacetime}\label{Nogoreview} This section presents various theorems\footnote{The $S$-matrix no-go theorem \cite{Benincasa:2007xk} is not discussed here because it relies on slightly stronger assumptions than the others --- see e.g. 
the conclusion of \cite{Porrati:2008rm} for more comments.} that constrain interactions between massless particles in flat spacetime --- potentially ruling out non-trivial quantum field theories with gauge fields of spin $s>2$ and vanishing cosmological constant. The aim is to scrutinize some of their hypotheses in order to exhibit a number of conceivable loopholes that may lead to modified theories including massless higher-spin fields, as summarized in Subsection \ref{wayout}. \subsection{Preamble: the gauge principle and the Fronsdal programme}\label{GP} The key feature of the field-theoretic description of interacting massless particles is the \emph{gauge principle: a sensible perturbation theory requires compatibility between the interactions and some deformed version of the abelian gauge symmetries of the free limit}. The necessity of gauge invariance in perturbative quantum field theory stems from the fact that one and the same massless particle, thought of as a representation of the spacetime isometry group, in general admits (infinitely) many implementations in terms of quantum fields sitting in different Lorentz tensors obeying respective free equations of motion. For more information, see e.g. \cite{Skvortsov:2008vs,Boulanger:2008up}. Only a subset of these ``carriers'', namely the primary curvature tensors and all of their derivatives, actually transform tensorially under isometries (implemented quantum-mechanically via similarity transformations). The remaining carriers are different types of potentials obtained by integrating various curvature Bianchi identities (and which one may thus think of as representing different ``dual pictures'' of one and the same particle); such integrals in general transform under isometries with inhomogeneous pieces that one can identify as abelian gauge transformations.
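To fix ideas, it is worth recalling the standard low-spin instances of this dichotomy (a textbook reminder inserted here for orientation, with symmetrization conventions chosen for illustration only): for spin one, the curvature $F_{\mu\nu}=\partial_\mu A_\nu-\partial_\nu A_\mu$ is invariant under the inhomogeneous shift $\delta A_\mu=\partial_\mu\varepsilon$ of the potential, while for spin two the linearized Riemann tensor is invariant under $\delta h_{\mu\nu}=2\,\partial_{(\mu}\xi_{\nu)}\,$. The Fronsdal spin-$s$ potential follows the same pattern,
\begin{equation}
\delta\varphi_{\mu_1\ldots\mu_s}\,=\,\partial_{(\mu_1}\varepsilon_{\mu_2\ldots\mu_s)}\;,
\end{equation}
with a traceless gauge parameter $\varepsilon_{\mu_1\ldots\mu_{s-1}}\,$, and the associated gauge-invariant primary curvature contains $s$ derivatives of the potential.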
Thus, in the standard perturbative interaction picture one is led to the \emph{Fronsdal programme}: the construction of interaction Hamiltonians starting from Lorentz invariant and hence gauge invariant non-linear Lagrangians built from the aforementioned carriers. We wish to stress that the Fronsdal programme is based on a working hypothesis: that standard canonical quantization of free fields in ordinary spacetime is actually compatible with the presence of higher-spin translations in higher-spin gauge theories. We shall proceed in this spirit in the bulk of this paper. \subsection{The Weinberg low-energy theorem}\label{lowenergy} The Weinberg low-energy theorem is essentially a byproduct of dealing with the more general problem of emissions of soft massless particles. Consider a (non-trivial) scattering process involving $N$ external particles with (say, ingoing) momenta $p_i$ ($i=1,2,\ldots, N$) and spin $s_i\,$. The emission of an additional massless particle of integer spin $s$ with arbitrary soft momentum by the $i$th external particle is controlled by a cubic vertex of type $s$-$s_i$-$s_i$ (\emph{i.e.} between a gauge boson of spin $s$ and two particles of spin $s_i$) with coupling constant $g^{(s)}_i$. The Weinberg low-energy theorem \cite{Weinberg:1964ew} states that Lorentz invariance of (or equivalently, the absence of unphysical degrees of freedom from) the deformed amplitude imposes a conservation law of order $s-1$ on the $N$ external momenta:\footnote{For pedagogical reviews, see e.g. \cite{Weinberg:1995mt}, Section 13.1 or \cite{Blagojevic:2002du}, Appendix G.} \begin{equation} \boxed{\;\sum_{i=1}^N g^{(s)}_i\,p_i^{\mu_1}\ldots p_i^{\mu_{s-1}}=0\;}\quad . 
\label{lowen} \end{equation} \subsubsection{Charge conservation: the spin-one case} \noindent Lorentz invariance for the emission of a soft massless spin-one particle (like a photon) leads to the conservation law $\sum_{i} g^{(1)}_i=0\,$; thus it requires the conservation of the coupling constants (like the electric charges) that characterize the interactions of these particles at low energies. In order to prepare the ground for further discussion, let us denote by ``electromagnetic minimal coupling'' the coupling of a charged particle to the electromagnetic field obtained by replacing the partial derivatives appearing in the Lagrangian describing the free, charged matter field in flat space, by the $u(1)$-covariant derivative, \emph{viz.} $\partial_{\mu} \rightarrow \partial_{\mu} - \mathrm{i}\, g^{(1)}_i A_{\mu}$. \subsubsection{Equivalence principle: the spin-two case}\label{s2case} \noindent As argued by Weinberg \cite{Weinberg:1964ew}, the equivalence principle can be recovered as the spin-two case of his low-energy theorem. On one side, Lorentz invariance for the emission of a soft massless spin-two particle leads to the conservation law $\sum_{i} g^{(2)}_i\,p_i^\mu=0\,$. On the other side, translation invariance implies momentum conservation $\sum_{i} p_i^\mu=0\,$. Therefore, for generic momenta, Poincar\'e invariance requires all coupling constants to be equal: $g_i^{(2)}=g_j^{(2)}=:g^{(2)}$ ($\forall\, i,j$). In other words, massless particles of spin-two must couple in the same way to all particles at low energies. This result has far-reaching consequences as it resonates with two deep properties of gravity, namely its uniqueness and its universality. On the one hand, the local theory of a self-interacting massless spin-two particle is essentially\footnote{See e.g. 
\cite{Boulanger:2000rq} for a precise statement of the very general hypotheses, and see refs therein for previous literature on this issue.} \textit{unique}: in the low-energy regime (at most two derivatives in the Lagrangian) it must be described by the Einstein--Hilbert action. Therefore, the massless spin-two particle rightfully deserves the name ``graviton''\footnote{A thorough discussion on the observability of the graviton is presented in \cite{Rothman:2006fp,Boughn:2006st}.}. On the other hand, the gravitational interaction is also \emph{universal} \cite{Weinberg:1964ew}: if there exists a single particle that couples minimally to the graviton, then all particles coupled to at least one of them must also couple minimally to the graviton. According to Weinberg himself, this theorem is the expression of the equivalence principle in quantum field theory, so, from now on, it will be referred to as the Weinberg equivalence principle. A proper understanding of this crucial theorem involves, however, some subtleties on the precise meaning of ``minimal coupling''. Let us consider the quadratic Lagrangian ${\cal L}^{(0)}(\varphi_s,\partial\varphi_s)$ describing a free spin-$s$ ``matter'' field denoted by $\varphi_s\,$. In general relativity, the equivalence principle may be expressed by the Lorentz minimal coupling prescription, \textit{i.e.} the assumption that the transformation rules of tensor fields under the Poincar\'e group extend naturally to the diffeomorphism group and the replacement of partial derivatives by Lorentz-covariant ones, \emph{viz.} $\partial\rightarrow \nabla=\partial+g^{(2)}\Gamma_{\rm lin}+\cdots\,$, in the matter sector. It must be observed that this prescription does not apply to the spin-two field itself because the Einstein--Hilbert Lagrangian is \textit{not} the covariantization of the Fierz--Pauli quadratic Lagrangian ${\cal L}^{(0)}(\varphi_2,\partial\varphi_2)\,$. 
One focuses on cubic couplings ${\cal L}^{(1)}(h,\varphi_s,\partial\varphi_s)$ of the type $2$-$s$-$s$, \textit{i.e.} linear in the spin-two field $h_{\mu\nu}$ and quadratic in the spin-$s$ field $\varphi_s\,$. The symmetric tensor of rank two $\Theta^{\mu\nu}:=\delta{\cal L}^{(1)}/\delta h_{\mu\nu}$ is bilinear in the spin-$s$ field. For consistency with the linearized diffeomorphisms $\delta_\xi h_{\mu\nu}=\partial_\mu\xi_\nu+\partial_\nu\xi_\mu\,$, the cubic coupling ${\cal L}^{(1)}$ to a massless spin-two field $h_{\mu\nu}$ must arise through a bilinear conserved current of rank two, \emph{i.e.} $\partial_\mu\Theta^{\mu\nu}\approx 0\,$, where the weak equality denotes the equality up to terms that vanish on the solutions of the free equations of motion for $\varphi_s\,$. For $s=\,2$, the cubic self-coupling of type $2$-$2$-$2$ coming in the Einstein--Hilbert Lagrangian gives rise to a conserved tensor $\Theta^{\mu\nu}$ which is equivalent to the Noether energy-momentum tensor $T^{\mu\nu}$ for the Fierz--Pauli Lagrangian. For $s\neq 2\,$, the cubic $2$-$s$-$s$ coupling ${\cal L}^{(1)}$ comes from the Lorentz minimal coupling prescription applied to the free Lagrangian ${\cal L}^{(0)}$ if and only if $\Theta^{\mu\nu}$ is equal (possibly on-shell and modulo an ``improvement'') to the Noether energy-momentum tensor $T^{\mu\nu}$ for ${\cal L}^{(0)}\,$. It is this precise condition on $\Theta^{\mu\nu}$ (for any spin!) that should be understood as ``minimal coupling'' in the Weinberg equivalence principle. \subsubsection{Higher-order conservation laws: the higher-spin cases} Lorentz invariance for the emission of soft massless higher ($s\geqslant 3$) spin particles leads to conservation laws of higher ($s-1\geqslant 2$) order, \textit{i.e.} for sums of products of momenta. For generic momenta, the equation (\ref{lowen}) has no solution when $s-1>1\,$, therefore all coupling constants must be equal to zero: $g^{(s)}_i=0$ for any $i$ when $s>2\,$. 
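To make the obstruction explicit, consider the simplest higher-spin case $s=3$ with $N=4$ external particles; the following elementary counting argument is included here purely as an illustration.

```latex
% Momentum conservation eliminates one momentum, say
% p_4 = -(p_1 + p_2 + p_3), so the order-two conservation law becomes
\begin{align}
0 &= \sum_{i=1}^{4} g^{(3)}_i\, p_i^{\mu} p_i^{\nu}
   = \sum_{a=1}^{3} g^{(3)}_a\, p_a^{\mu} p_a^{\nu}
   + g^{(3)}_4 \Big(\sum_{a=1}^{3} p_a^{\mu}\Big)
               \Big(\sum_{b=1}^{3} p_b^{\nu}\Big)
\nonumber\\
  &= \sum_{a=1}^{3} \big(g^{(3)}_a + g^{(3)}_4\big)\, p_a^{\mu} p_a^{\nu}
   + g^{(3)}_4 \sum_{a\neq b} p_a^{\mu}\, p_b^{\nu}\, .
\end{align}
% For generic momenta, p_1, p_2, p_3 are linearly independent, so the
% symmetrized products p_a p_b are linearly independent as well; hence
% g_4^{(3)} = 0 and then g_a^{(3)} = 0 for all a.
```

By contrast, the same substitution in the spin-two law $\sum_i g^{(2)}_i\,p_i^\mu=0$ yields $\sum_{a=1}^{3}\big(g^{(2)}_a-g^{(2)}_4\big)\,p_a^\mu=0$, which for generic momenta forces all couplings to be \emph{equal} rather than zero, in accordance with Subsection \ref{s2case}.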
In other words, as stressed by Weinberg in his book \cite{Weinberg:1995mt}, p.538: \textit{massless higher-spin particles may exist, but they cannot have couplings that survive in the limit of low energy} [that is, they cannot mediate long-range interactions]. Moreover, strictly speaking the Weinberg low-energy theorems concern only $s$-$s'$-$s'$ couplings. Nevertheless, notice the existence of a simple solution for the equation (\ref{lowen}) corresponding to so-called trivial scattering, \emph{i.e.} elastic scattering such that the outgoing particle states are permutations of the incoming ones, as in the case of free or possibly integrable field theories. For example, if we denote the ingoing momenta by $k_a$ ($a=1,2,\ldots, n$) and the outgoing ones by $\ell_a$, then the higher-order conservation laws $\sum_a g^{(s)}_a k_a^{\mu_1}\ldots k_a^{\mu_{s-1}}=(-1)^{s-1}\sum_a g^{(s)}_a \ell_a^{\mu_1}\ldots \ell_a^{\mu_{s-1}}$ of order $s-1>1$ imply that the outgoing momenta can only be permutations of the incoming ones, and that $g^{(s)}_a=g^{(s)}$ for all $a$ if $s$ is even, while $g^{(s)}_a=\epsilon_a g^{(s)}$ with $(\epsilon_a)^2=1$ for all $a$ if $s$ is odd. 
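To illustrate the trivial-scattering solution in the simplest setting, consider the elastic scattering of two massless particles carrying a common spin-three coupling $g^{(3)}$; the following check is a sketch added for illustration, valid for generic kinematics.

```latex
% For a 2 -> 2 process, the order-two conservation law and ordinary
% momentum conservation read
\begin{equation}
k_1^{\mu}k_1^{\nu}+k_2^{\mu}k_2^{\nu}
  =\ell_1^{\mu}\ell_1^{\nu}+\ell_2^{\mu}\ell_2^{\nu}\, ,
\qquad
k_1+k_2=\ell_1+\ell_2\, .
\end{equation}
% Contracting the first relation twice with k_1 and using the
% mass-shell conditions k_i^2 = l_a^2 = 0 gives
%   (k_1 . k_2)^2 = (k_1 . l_1)^2 + (k_1 . l_2)^2 ,
% while contracting momentum conservation with k_1 gives
%   k_1 . k_2 = k_1 . l_1 + k_1 . l_2 .
% Setting u = k_1 . l_1 >= 0 and v = k_1 . l_2 >= 0 (all momenta lie
% on the forward light-cone), one finds u^2 + v^2 = (u + v)^2, i.e.
% u v = 0. Hence one outgoing momentum is collinear with k_1, and for
% generic kinematics momentum conservation then forces
% {l_1, l_2} = {k_1, k_2}: only trivial scattering survives.
```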
\subsection{Coleman--Mandula theorem and its avatar: no higher-spin conserved charges}\label{ColMan} The Coleman--Mandula theorem \cite{Coleman:1967ad} and its generalization to the case of supersymmetric theories with or without massless particles given by Haag, Lopuszanski and Sohnius \cite{Haag:1974qh} strongly restrict the symmetries of the $S$-matrix of an interacting relativistic field theory in four-dimensional Minkowski space-time.\footnote{For an extended pedagogical review, see \cite{Weinberg:2000cr}, Chapter 24.} More precisely, (i) if the elastic two-body scattering amplitudes are generically non-vanishing (at almost all energies and angles); and (ii) if there is only a finite number of particle species on and below any given mass-shell; then the maximal possible extension of the Poincar\'e algebra is the (semi) direct sum of a superalgebra (a superconformal algebra in the massless case) and an internal symmetry algebra spanned by elements that commute with the generators of the Poincar\'e algebra. In particular, this theorem rules out higher symmetry generators (equivalently, conserved charges) that could have come from higher-spin symmetries surviving at large distances. The argument goes as follows: the gauge symmetries associated with massless particles may survive at spatial infinity as non-trivial rigid symmetries. In turn, such symmetries should lead to the conservation of some asymptotic charges. Under the hypotheses of the generalized Coleman--Mandula theorem, non-trivial conserved charges associated with asymptotic higher-spin symmetries cannot exist. This corollary of the generalized Coleman--Mandula theorem partially overlaps with the Weinberg low-energy theorem because the conservation law (\ref{lowen}) precisely corresponds to the existence of a conserved charge $Q^{\mu_1\ldots\,\mu_{s-1}}$ which is a symmetric tensor of rank $s-1$ that commutes with the translations --- but does \emph{not} commute with the Lorentz generators. 
\subsection{Generalized Weinberg--Witten theorem} The Weinberg--Witten theorem \cite{Weinberg:1980kq} states that a massless particle of spin strictly greater than one \textit{cannot} possess an energy-momentum tensor $T_{\mu\nu}$ which is both Lorentz covariant and gauge invariant.\footnote{For a pedagogical essay, see e.g. \cite{Loebbert:2008zz}.} Of course, this no-go theorem does not preclude gravitational interactions. In the spin-two case, it implies that there cannot exist any gauge-invariant energy-momentum tensor for the graviton. This proves that the energy of the gravitational field cannot be localized, but it obviously does not prevent the graviton from interacting with matter or with itself. Recently, a refinement of the Weinberg--Witten theorem has been presented \cite{Porrati:2008rm} that genuinely prevents massless particles of spin strictly greater than \textit{two} from coupling minimally to the graviton in flat background. The minimality condition is stated according to the Weinberg equivalence principle, namely it refers to Lorentz minimal spin-two coupling (see Section \ref{s2case}). In the Lagrangian approach, the same result had already been obtained in various particular instances, where it had been shown that the Lorentz minimal coupling prescription applied to free higher-spin gauge fields enters in conflict with their abelian gauge symmetries \cite{Aragone:1979hx,Berends:1979wu,Aragone:1981yn,Boulanger:2006gr}. The complete no-go result ruling out the Lorentz minimal coupling of type $2$-$s$-$s$ in the Lagrangian approach is given in \cite{Boulanger:2008tg}. In between the Lagrangian and the $S$-matrix approaches lies the light-cone approach where all local cubic vertices in dimensions from four to six have been classified (see e.g. 
\cite{Metsaev:2005ar} and references therein) and where the same negative conclusions concerning the Lorentz minimal coupling of higher-spin gauge fields to gravity had already been reached and stated in complete generality. This being said, consistent cubic vertices between spin-two and higher-spin gauge fields do exist, even in Minkowski spacetime \cite{Metsaev:2005ar,Boulanger:2006gr,Boulanger:2008tg}. Rather than reproducing the Lorentz minimal coupling, they contain more than two derivatives in total. As one can see, the generalized Weinberg--Witten theorem does not by itself forbid such $2$-$s$-$s$ interactions. The crux of the matter is to combine this theorem with the Weinberg equivalence principle. Together, the Weinberg equivalence principle and the generalized Weinberg--Witten theorem do prohibit the cross-couplings of massless higher-spin particles with low-spin particles in flat spacetime \cite{Porrati:2008rm}. The argument goes as follows: elementary particles with spin not greater than two are known to couple minimally to the graviton at low energy. Therefore (by the Weinberg equivalence principle) all particles interacting with low-spin particles must also couple minimally to the graviton at low energy, but (by the generalized Weinberg--Witten theorem \cite{Porrati:2008rm} and the identical results presented in \cite{Metsaev:2005ar,Boulanger:2008tg}) massless higher-spin particles cannot couple minimally to gravity around the flat background. Consequently, at low energies massless higher-spin particles must completely decouple from low-spin ones. Hence, if the same Lagrangian can be used to describe both the low-energy phenomenology and the Planck-scale physics, then no higher-spin particles can couple to low-spin particles (including spin-2) at all. 
\subsection{Velo--Zwanziger difficulties} In this section, we would like to stress that, contrary to widespread prejudice, the Velo--Zwanziger difficulties do not constitute a serious obstruction to the general programme of constructing consistent interactions involving higher-spin fields. The observed pathologies are nothing but symptoms of non-integrability in the sense of Cartan of the differential equations under consideration. Thus, in order to avoid pathologies, it makes sense to follow a specific gauge principle\footnote{Weinberg emphasized a related point, while mentioning the Velo--Zwanziger paper and other related works (cf. refs therein), in his book \cite{Weinberg:1995mt}, p.244: \textit{The problems reported with higher spin have been encountered only for higher-spin particles that have been arbitrarily assumed to have only very simple interactions with external fields. No one has shown that the problems persist for arbitrary interactions.} (...) \textit{There are good reasons to believe that the problems with higher spin disappear if the interaction with external fields is sufficiently complicated.} One may re-interpret this by stating that consistency requires less simplistic interactions, namely those governed by gauge invariance. }, which for high spins is nothing but a refined version (e.g. the Noether procedure) of the naive application of the minimal coupling prescription, as is the main topic of this review. In particular, the electromagnetic interactions exhibit pathologies (such as seemingly superluminal propagation) in Minkowski spacetime already for massive spin-3/2 fields (see \cite{Velo:1970ur,Velo:1972rt} and a more recent analysis in \cite{Porrati:2008gv,Porrati:2008ha} which contain a list of other relevant references on the issue) that are therefore not specific to higher spins and hence deserve a separate discussion. 
Indeed, the interactions between spin-$3/2$ and electromagnetic fields in gauged supergravities are well known to avoid the Velo--Zwanziger problems. In the case of spin-1 self-interactions, a simple model to keep in mind is the Born--Infeld Lagrangian, whose expansion around a non-trivial electromagnetic background gives a linearized theory with causal structure governed by the Boillat metric, whose light-cone lies within that of the undeformed flat-space metric --- see the discussion and references in \cite{Gibbons:2000xe}. In order to think of a model containing spins greater than one and with higher-derivative corrections that have been added following a gauge principle, one may immediately go to string theory, where the Born--Infeld theory is subsumed into open string theory. Open strings propagating in electromagnetic backgrounds \cite{Argyres:1989qr} contain massive spin-$s$ states with $s\geqslant \ft32$ whose kinetic terms contain $2s-2$ derivatives. The actual physical problem is how to count degrees of freedom in the presence of extended space-time gauge symmetries and the higher-derivative interactions that follow therefrom. In order to avoid non-integrabilities in a systematic fashion, a natural resolution is to abandon the standard perturbative approach (formulating interactions in expansions around ordinary lower-spin backgrounds) in favor of the unfolded approach \cite{Vasiliev:1988xc,Vasiliev:1990en,Vasiliev:1988sa,Vasiliev:1992gr} which allows a generalized perturbative formulation of field theory in the unbroken phase as well as in various generalized metric phases and/or tensorial spacetimes \cite{Vasiliev:2001zy,Vasiliev:2001dc,Didenko:2003aa,Gelfond:2003vh}. To summarize this survey of no-go results, the genuine obstacles to massless higher-spin interactions are the Coleman--Mandula theorem, the Weinberg low-energy theorems, and the generalized Weinberg--Witten theorem. 
\section{Possible ways out}\label{wayout} In this Section, we discuss the weaknesses of the various hypotheses underlying the no-go theorems for interacting massless higher-spin particles in flat spacetime. Correspondingly, we present conceivable ways to surpass the spin-two barrier. Of these openings, the principal escape route is the Fradkin--Vasiliev mechanism, in which the cosmological constant plays a dual role as both infrared and ultraviolet regulator. This leads to Vasiliev's fully nonlinear equations, which set a new paradigm for a realm of exotic higher-spin gravities that fit naturally into the contexts of weak-weak coupling holography and tensionless limits of extended objects. This ``main route'' will be discussed in more detail in Sections 4 and 5. \subsection{Masslessness} Implicitly, all of the aforementioned no-go theorems rely on the hypothesis of a \emph{flat} spacetime background. Indeed, the notion of massless particles is unequivocal only in theories with Poincar\'e-invariant vacua. In constant-curvature (non-flat) spacetimes, the mass operator (\emph{i.e.} $\nabla^2$) is related to the eigenvalues of the second Casimir operators of the spacetime isometry algebra and of the Lorentz algebra. It is only in flat spacetime, however, that the eigenvalues of the mass operator are quantum numbers, which can be sent to zero leaving a strictly massless theory without any intrinsic mass-scale. Thus, as far as theories in Minkowski spacetime are concerned, one may consider interpreting massless higher-spin particles as limits of \textit{massive} dittos. Such particles are consistent at low energies; on the experimental side, they are \emph{de facto} observed in hadronic physics as unstable resonances, albeit not as fundamental particles\footnote{Strictly speaking, one may arguably refer to the proton as stable, while already the neutron is metastable and all other massive excitations are far more short-lived.}. 
However, this high-energy limit has its own problems: it is singular in general as manifested by the van Dam--Veltman--Zakharov discontinuity in propagators of massive fields of spin greater than 3/2. Indeed, on the theoretical side, this fact is related to the complicated nature of the tensionless limit of string theory in flat spacetime. A clear physical picture of why the high-energy limit cannot be used to find massless higher-spin particles in flat spacetime is given by the example of higher-spin resonances in quantum chromodynamics. Dimensionless quantities depend on the ratio $E/m$, where $E$ and $m$ are the energy and the mass of the resonance, respectively. Since sending $E$ to infinity with $m$ kept fixed is equivalent to sending $m$ to zero with $E$ kept constant, it follows that one must send $\Lambda_{\rm QCD}$ to zero. In this limit, however, the size of a resonance grows indefinitely and it becomes undetectable to an observer of fixed size, since the observer lives \emph{within} the resonance's Compton wavelength.\footnote{We thank one of the referees for this comment.} \subsection{Asymptotic states and conserved charges} The $S$-matrix theorems only concern particles that appear as asymptotic states. Moreover, within the perturbative approach, these asymptotic states are assumed to exist at all energy scales. Thus, an intriguing possibility is that there exist non-perturbatively defined higher-spin gauge theories in flat spacetime with mass gaps and confinement. We are not aware of any thorough investigations of such models and mechanisms so far, though Vasiliev's higher-spin gravities in four-dimensional anti-de Sitter spacetime have been conjectured to possess a perturbatively defined mass gap, resulting from dynamical symmetry breaking induced via radiative corrections \cite{Girardello:2002pp}, as we shall comment on below. 
As far as confinement is concerned\footnote{This way out was briefly mentioned in the conclusions of \cite{Bekaert:2009ud}.}, one may ask whether the higher-spin charges of asymptotic states might all vanish, as for color charges in QCD. Incidentally, Weinberg pointed out in his book \cite{Weinberg:2000cr}, p.13, that some subtleties arise in the application of the Coleman--Mandula theorem in the presence of infrared divergences, but that \textit{there is no problem in non-abelian gauge theories in which all massless particles are trapped -- symmetries if unbroken would only govern $S$-matrix elements for gauge-neutral bound states}. \subsection{Lorentz minimal coupling} To recapitulate, the $S$-matrix no-go theorems\footnote{including the Coleman--Mandula theorem, since the conserved charges used in its arguments depend on the asymptotic behavior of interactions at large distances.} for higher-spin interactions are engineered for Poincar\'e-invariant relativistic quantum field theories aimed at describing physics at intermediate scales lying far in between the Planck and Hubble scales. In Lagrangian terms, the generalized Weinberg--Witten theorem can essentially be understood as resulting from demanding compatibility between linearized gauge symmetries and the Lorentz minimal coupling in the absence of a cosmological constant. This compatibility requires consistent cubic vertices with one and two derivatives for fermions and bosons, respectively. Vertices with these numbers of derivatives have the same dimension as the flat-space kinetic terms. If consistent, they therefore do not introduce any new mass parameter. Hence it is natural to extrapolate the Lorentz minimal coupling to all scales. In doing so, however, one needs to keep in mind not only the ultraviolet barrier for quantum fields but also the infrared one. Pertinent to this statement is the generalized Weinberg--Witten theorem. 
The assumptions are that: (i) the Lorentz minimal coupling term is always present; (ii) the theory extends to all energies without encountering any infrared or ultraviolet catastrophe. To reiterate, the refined analysis relies crucially via assumption (i) on Weinberg's formulation of the equivalence principle\footnote{see Eq. (\ref{EP}) of Appendix \ref{sec:S} or Eq. (26) in \cite{Porrati:2008rm}.}, which one may view as a low-energy constraint on the theory. The result is that massless higher-spin particles cannot couple to the universal graviton or anything that the latter couples to. In other words, if such massless higher-spin theories in a flat background exist in the mathematical sense, they \emph{cannot} be connected to the low-energy physics that takes place in our Universe. For instance, one may have a theory with two phases: a symmetric phase at high energy, where higher-spin particles are massless and the Newton constant vanishes for all particles, and a broken phase, where higher-spin particles acquire a mass and the Newton constant is nonzero. This is an intriguing possibility; moreover it probably occurs in $AdS_4$ \cite{Girardello:2002pp}, see the discussion below in Section \ref{break}. Nothing forbids the existence of an \emph{a priori} very hot Universe where such exotic theories are relevant. After cooling and symmetry breakdown, such theories may then yield an effective matter-coupled gravity theory in which the graviton is the field that couples to everything in the same universal way, with a single coupling constant, namely Newton's constant. The assumptions (i) and (ii) are indeed vulnerable to the possibility of phase transitions. This will be discussed below in Section \ref{break}. 
Looking at the limits of the experimental as well as theoretical tests of the Lorentz minimal coupling, there is no \emph{a priori} reason why the specific mechanism by which diffeomorphism invariance is implemented in Einstein's gravity should work at scales that are very small or very large. This suggests that the Lorentz minimal coupling can be rehabilitated within theories with infrared as well as ultraviolet cutoffs. \subsection{Flat background} As already stressed above, the strict definitions of massless particles and of the $S$-matrix require a flat spacetime. Passing to a slightly curved de Sitter or anti-de Sitter spacetime with cosmological constant $\Lambda$, one sometimes considers the existence of gauge symmetries as the criterion\footnote{This criterion is subtle, however, since for non-vanishing $\Lambda$, generic spins cannot have as many gauge symmetries as for vanishing $\Lambda$.} of masslessness. Since there is no genuine $S$-matrix in AdS, a subtle and fruitful way out is that the $S$-matrix theorems do not apply any more when the cosmological constant $\Lambda$ is non-vanishing; instead, one resorts to a holographic dual conformal field theory. This way out has been exploited successfully by the Lebedev school and has given rise to cubic vertices and full nonlinear equations of motion. \subsection{Finite-dimensionality of spacetime} Finally, in the light of the recent progress made in amplitude calculations in ordinary relativistic quantum field theory \cite{Bern:2007hh,Bern:2009kd} as well as higher-spin gravity \cite{Giombi:2009wh,Giombi:2010vg}, one may start raising criticism against the very assumptions behind the Fronsdal programme: the higher-derivative nature of higher-spin interactions leads ultimately to a conceptual breakdown of the standard canonical approach to quantum field theory based on time-slicing in ordinary spacetime. 
Although one can refer perturbatively to the canonical structure of the free fields (thought of as fluctuations around the spin-two background), the non-perturbative formulation of higher-spin symmetries leads towards an extension of spacetime by extra bosonic coordinates on which higher-spin translations act by linear differentiation. One may therefore think of a bosonic generalization of the superspace approach to supergravities, which is precisely what is provided by the unfolded dynamics programme initiated by Vasiliev (for an illustration of the basic ideas in the context of higher-spin supergravity, see for example \cite{Engquist:2002gy}). \section{\large Various yes-go examples }\label{yesgo} In this section we give a review of the various positive results obtained over the years concerning consistent higher-spin cubic couplings in flat and AdS backgrounds. Subsection \ref{flatyes} gathers together the results for cubic vertices in flat space, while Subsection \ref{AdSyes} essentially mentions the results obtained by Fradkin and Vasiliev in the late eighties for cubic vertices in (A)dS$_4\,$. Finally, Subsection \ref{picture} provides a summary in the form of a general picture for non-abelian higher-spin gauge theory, which seems to emerge from the known no-go theorems and yes-go examples. Of course, a word of caution should be added: the existence of consistent cubic couplings does not imply that a complete theory exists at all. However, the existence of full interacting equations \cite{Vasiliev:1990en,Vasiliev:1992av,Vasiliev:2003ev} is a strong indication that a complete interacting Lagrangian\footnote{As a matter of fact, a non-standard action principle for Vasiliev's equations, which leads to a non-trivial quantization, was proposed in \cite{Boulanger:2011dd}.} may exist, at least in (A)dS background. 
Actually, one of the open problems in higher-spin gravity is whether or not the Fronsdal programme can be pursued beyond the cubic order in a standard fashion. \subsection{Consistent cubic vertices in Minkowski spacetime}\label{flatyes} In the eighties, the quest for higher-spin interactions met with its first successes, taking flat spacetime as the background. Using the light-cone gauge approach, higher-spin $s$-$s'$-$s''$ cubic vertices in four space-time dimensions were found in \cite{Bengtsson:1983pd,Bengtsson:1983pg,Bengtsson:1986kh,Fradkin:1991iy}. These results, in the light-cone gauge approach, were considerably generalized later in \cite{Metsaev:1993gx,Metsaev:1993mj,Fradkin:1995xy,Metsaev:2005ar,Metsaev:2007rn} with a complete classification of cubic (self- and cross-) couplings for arbitrary massive and massless higher-spin fields, bosonic and fermionic, in dimensions four, five and six. Mixed-symmetry fields were also considered therein. Moreover, in \cite{Metsaev:1993ap} a wide class of cubic interactions was obtained for arbitrary fields in arbitrary dimension. As far as manifestly Poincar\'e-invariant vertices in the Lagrangian approach are concerned, Berends, Burgers and van Dam (BBvD) obtained a class of manifestly covariant, \emph{non-abelian} cubic couplings in \cite{Berends:1984rq,Berends:1984wp}. They used a systematization of the Noether procedure for introducing interactions, where the couplings are not necessarily of the form ``gauge field times conserved current''. In the work \cite{Berends:1984rq}, consistent and covariant cubic couplings of the kind $s_1$-$s_2$-$s_2$ were obtained, for the values of $s_1$ and $s_2$ indicated in Table \ref{T1}. 
\begin{table}[!ht] \centering \begin{tabular}{c |c c c c c c c } ${\downarrow}_{s_1} \quad {\rightarrow}^{s_2}$ & $0$ & $\frac{1}{2}$ & $1$ & $\frac{3}{2}$ & $2$ & $\frac{5}{2}$ & $3$ \\ \hline\hline $ 0\qquad $ & $\times$ & $\times$ & $\times$ & $\times$ & $\times$ & & \\ \hline $ 1\qquad $ &$\times$ & $\times$ &$\times$ & $\times$ & $\times$ & & \\ \hline $ 2\qquad $ &$\times$ & $\times$ &$\times$ & $\times$ & $\times$ & $\times$ & \\ \hline $ 3\qquad $ &$\times$ & $\times$ & $\times$ & $\times$ & $\times$ & $\times$ & $\times$ \\ \hline $n\qquad$ & $\times$ \\ \hline \end{tabular} \caption{\it $s_1$-$s_2$-$s_2$ covariant vertices obtained in \cite{Berends:1984rq}. \label{T1}} \end{table} Of course, some of the vertices were already known before, as for example in the cases $1$-$1$-$1\,$, $2$-$2$-$2$ and $2$-$\frac{3}{2}$-$\frac{3}{2}$ corresponding to Yang--Mills, Einstein--Hilbert and ordinary supergravity theories. There is a class of cross-interactions $s_1$-$s_2$-$s_2$ for which the cubic vertices could easily be written. This class corresponds to the ``Bell--Robinson'' line $s_1=2s_2$ and to the region below this line, $s_1>2s_2$ \cite{Berends:1985xx} (see \cite{Deser:1990bk} for the $s_1=4=2s_2$ case and some more recent considerations in \cite{Manvelyan:2009vy}). In the aforementioned region $s_1\geqslant 2 s_2\,$, the gauge algebra remains \emph{abelian} at first order in a coupling constant although the gauge transformations for the spin-$s_2$ field are deformed. The reason is that the first-order deformation of the free spin-$s_2$ gauge transformations involves the spin-$s_2$ field only through its gauge-invariant Weinberg--de Wit--Freedman field-strength \cite{deWit:1979pe,Weinberg:1965rz}\footnote{Note that one can trivially write down higher-derivative Born--Infeld-like consistent cubic interactions involving only gauge-invariant linearized field-strength tensors \cite{Damour:1987fp}. 
However, these interactions deform neither the gauge algebra nor the gauge transformations at first order in some coupling constant. Nevertheless, they might be needed when pushing the non-abelian cubic vertices to the next order in the coupling constants.}. Although they do not lead to non-abelian gauge algebras, it is interesting that the cubic interactions on and below the Bell--Robinson line (\textit{i.e.} for $s_1\geqslant 2s_2$) have the form ``spin-$s_1$ field times current'' where the current is quadratic in (the derivatives of) the spin-$s_2$ field-strength \cite{Berends:1985xx,Deser:1990bk} and is conserved on the spin-$s_2$ shell. Even more interestingly, these currents can be obtained from some global invariances of the free theory by a Noether-like procedure, provided the constant parameters associated with these rigid symmetries be replaced by the gauge parameters of the spin-$s_1$ field (also internal color indices must be treated appropriately) \cite{Berends:1985xx,Deser:1990bk}. The simplest class of cubic interactions below the Bell--Robinson line is provided by the couplings between scalar fields ($s_2=0$) and a collection of higher-spin tensor gauge fields through the Berends--Burgers--van Dam currents containing $s_1$ derivatives of the scalar fields \cite{Berends:1985xx}. Recently, they have been re-examined in \cite{Bekaert:2007mi,Fotopoulos:2007yq,Bekaert:2009ud} as a toy model for higher-spin interactions. Notice that these cubic interactions induce, at first order in the coupling constant, gauge transformations for the scalar field which are non-abelian at second order and reproduce the group of unitary operators acting on free scalars on Minkowski spacetime \cite{Bekaert:2007mi,Bekaert:2009ud}. As was demonstrated in \cite{Boulanger:2008tg}, in a flat background the non-abelian $2$-$s$-$s$ vertex is unique and involves a total number of $2s-2$ derivatives. 
{}From $s=3$ on, the non-abelian $2$-$s$-$s$ vertex in Minkowski spacetime is thus ``non-minimal'' and the full Lagrangian (if any) has no chance of being diffeomorphism-invariant, a fact which was explicitly shown in \cite{Boulanger:2006gr,Boulanger:2008tg}. It was also shown in \cite{Boulanger:2008tg} that the unique and non-abelian $2$-$s$-$s$ vertex in Minkowski spacetime is nothing but the leading term in the flat limit of the corresponding AdS Fradkin--Vasiliev vertex that, among others, contains the Lorentz minimal coupling. That the minimal Lorentz coupling term in the Fradkin--Vasiliev vertex is \textit{sub-leading} in the flat limit shows that the Weinberg equivalence principle is restored for higher spins in AdS spacetime but is lost in the flat limit. This supports the need to consider higher-spin interactions in AdS background, at least if one wants to make contact between higher-spin gauge fields and low-spin theories including Einstein--Hilbert gravity. Recently \cite{Bekaert:2010hp}, general results on the structure of cubic $s$-$s'$-$s''$ couplings ($s\leqslant s'\leqslant s''$) that are non-abelian already at this order were given, showing in particular that the \emph{maximum} number of derivatives involved in a non-abelian coupling is $2s'-1$ or $2s'-2\,$, depending on the parity of the sum $s+s'+s''\,$. It was also shown that the cubic vertices saturating the upper derivative bound have a good chance of being extended to second order in the deformation parameter, as far as the Jacobi identity for the gauge algebra is concerned. Later on, the generic non-abelian vertices were studied and explicitly built in \cite{Manvelyan:2010wp,Manvelyan:2010jr}. Some classification results were also obtained about the structure of the abelian cubic vertices. 
\textit{A posteriori}, the approach \cite{Manvelyan:2010wp,Manvelyan:2010jr} to the writing of covariant non-abelian vertices can be seen as the covariantization of the vertices already obtained in the light-cone approach in \cite{Bengtsson:1983pd,Bengtsson:1983pg,Metsaev:2005ar,Metsaev:2007rn} where, on top of the cubic coupling given by the light-cone gauge approach, terms are added which vanish in the spin-$s$ De Donder gauge. With the advent of string field theory in the second half of the eighties, the construction of higher-spin cubic vertices in flat space was carried out in \cite{Koh:1986vg,Bengtsson:1987jt,Cappiello:1988cd} in the so-called BRST approach. This approach was indeed motivated by the BRST first quantization of the string and by the tensionless limit of open string field theory. More recently, this analysis has been pursued in \cite{Bonelli:2003kh} and \cite{Buchbinder:2006eq,Fotopoulos:2007nm,Fotopoulos:2007yq} (a review of the last three works plus other works by the same authors can be found in \cite{Fotopoulos:2008ka}). The results obtained in this framework are encouraging, for instance in the case of non-abelian $s\,$-$\,0\,$-$\,0\,$ interactions \cite{Fotopoulos:2007yq}, although the higher-spin gauge field (self and cross) interactions found in \cite{Fotopoulos:2007nm} are abelian, and therefore can hardly be related to the non-abelian higher-spin theory of Vasiliev. Before turning to the cubic interactions in AdS background, we would like to continue with our brief review of positive results for higher-spin cubic vertices in flat space. Important results have recently been obtained by analyzing the tree-level amplitudes of the tensile (super)string. 
In what could be called a String/$S$-matrix approach, the authors of \cite{Polyakov:2009pk,Taronna:2010qq,Polyakov:2010qs,Sagnotti:2010at} obtained a plethora of vertices and recovered the vertices obtained in the previously cited approaches, thereby creating a direct link between open string theory and higher-spin gauge theory, at the dynamical level. Moreover, in the light of the uniqueness results of \cite{Boulanger:2008tg}, one has a precise relation between the Fradkin--Vasiliev vertices and string theory. Generically, the idea is that the non-abelian flat space cubic vertices obtained in \cite{Bekaert:2005jf,Boulanger:2008tg} (which were shown to be related to the --- appropriately taken --- flat space limit of the corresponding Fradkin--Vasiliev vertices) are also the seed for the construction of consistent \emph{massive} higher-spin vertices in flat and AdS spacetimes. {}From these non-abelian flat space vertices, one can systematically construct massive and massless vertices in AdS and flat spaces by switching on mass terms \`a la St\"uckelberg and cosmological constant terms. This approach has been used with success in \cite{Zinoviev:2008ck,Zinoviev:2009hu}. See also the recent work by Zinoviev \cite{Zinoviev:2010cr} where the frame-like formalism for higher-spin gauge fields is used. \subsection{Cubic vertices in AdS spacetime}\label{AdSyes} As we mentioned in the previous subsection, at cubic level (\emph{i.e.} at first order in perturbative deformation) Fradkin and Vasiliev found a solution to the higher-spin (gravitational, self and cross) interaction problem by considering metric perturbations around (A)dS$_4$ background \cite{Fradkin:1987ks,Fradkin:1986qy}. This was later extended to five dimensions \cite{Vasiliev:2001wa}, ${\cal{N}}=1$ supersymmetry \cite{Alkalaev:2002rq} and arbitrary dimensions \cite{Vasiliev:2011xf}. 
For a recent analysis of the Fradkin--Vasiliev mechanism in arbitrary dimension $D$ and in the cases $2$-$s$-$s$ and $1$-$s$-$s$, see \cite{Boulanger:2008tg}. The Fradkin--Vasiliev construction was the starting point of dramatic progress leading recently to fully nonlinear field equations for higher-spin gauge fields in arbitrary dimension \cite{Vasiliev:2003ev}. We will not detail their construction here but we simply comment that the use of twistor variables and the Moyal--Weyl star product is central, although historically the usefulness of the star product was not immediately recognized. In a few words, the main problem with the higher-spin gravitational interaction was that, upon introducing the Lorentz minimal coupling terms in the action and gauge transformations, higher-spin gauge invariance could no longer be maintained. The solution provided by Fradkin and Vasiliev was to introduce a non-vanishing cosmological constant $\Lambda$ and expand the metric around an (A)dS background. The gauge variation of the cubic terms coming from the Lorentz minimal coupling around (A)dS is now canceled on the free shell by the variation of a \emph{finite} tail of additional non-minimal cubic vertices, each of them proportional to the linearized Riemann tensor around (A)dS and involving more and more (A)dS-covariant derivatives compensated by appropriate negative powers of the cosmological constant. In that gauge variation, the terms proportional to the free equations of motion are absorbed through appropriate corrections to the gauge transformations. This solution is the \emph{Fradkin--Vasiliev mechanism}, and we call the gravitational cubic coupling they obtained the \emph{quasi-minimal coupling}, in the sense that the Lorentz minimal coupling is present and triggers a \emph{finite} expansion of non-minimal terms. A salient feature of the Fradkin--Vasiliev construction is that there are now \emph{two} independent expansion parameters. 
These are the AdS mass parameter $\lambda \sim \sqrt{|\Lambda|}\,$ and the dimensionless deformation parameter $g := (\lambda \ell_{\rm p})^{\frac{D-2}{2}}$ that counts the order in the weak field expansion, where the Planck length $\ell_{\rm p}$ appears in front of the action through $ 1 / \ell_{\rm p}^{D-2}$ and where one works with dimensionless physical fields. At the cubic level and for any given triplet of spins $\{s,s',s''\}\,$, there appears a finite expansion in \emph{inverse} powers of $\lambda\,$, where the terms with the highest negative power of $\lambda$ bring the highest number of (A)dS-covariant derivatives acting on the weak fields. That highest power of $1/\lambda\,$ is proportional to $s''\,$, so that for unbounded spins the Fradkin--Vasiliev cubic Lagrangian is nonlocal. The massive parameter $\lambda$ simultaneously (i) sets the \emph{infrared cutoff} via $|\Lambda|\sim\lambda^2$ and the critical masses $M^2 \sim\lambda^2$ for the dynamical fields; and (ii) dresses the derivatives in the interaction vertices thus enabling the Fradkin--Vasiliev mechanism. This dual r\^ole played by the cosmological constant is responsible for an exotic property of the Fradkin--Vasiliev cubic coupling. \vspace*{.4cm} \noindent \textbf{Exotic non-locality of the Fradkin--Vasiliev Lagrangian} \vspace*{.2cm} \noindent In the physically relevant cases where one has a separation of length scales, \emph{i.e.} $\ell_{\rm p}\ll \ell \ll \lambda^{-1}$ where $\ell\sim \|\varphi\|/\|\partial\varphi\|$ is some wavelength characterizing the physical system under consideration and where $\lambda^{-1}$ denotes here a generic infrared scale, not necessarily related to the cosmological constant, two situations can arise for perturbatively local (\textit{cf.} Subsection \ref{picture}) Lagrangians having vertices $V_n$ involving higher ($n\geqslant 3$) derivatives of the fields: \begin{itemize} \item[A.] 
\textbf{Mild non-locality}: the theory is weakly coupled in the sense that $V_n \sim (\ell_{\rm p}/\ell)^{n-2} \ll 1\,$. This situation arises for broken higher-spin symmetry, tensionful string sigma models, \textit{etc}. \item[B.] \textbf{Exotic non-locality}: the theory is strongly coupled in the sense that the vertices $V_n$ are proportional to $(\ell\lambda)^{-n+2}\gg 1\,$. This is the situation for the Fradkin--Vasiliev vertices: in the derivative expansion appearing within the Fradkin--Vasiliev mechanism, the terms involving the maximal number of derivatives are dominant since they contain the infrared cutoff instead of the ultraviolet one. \end{itemize} Finally, we make a comment related to the fully nonlinear Vasiliev equations in order to show that the same behaviour appears order by order in the weak field expansion. In this theory, the first-order corrections ${T}^{(1)}_{\mu\nu}$ to the stress tensor defined by $T_{\mu\nu}:=R_{\mu\nu}-\frac12 g_{\mu\nu}(R-\Lambda)$ arise in an expansion of the form ${T}^{(1)}=\sum_{n=0}^\infty \sum_{p+q=n}\lambda^{-n}\nabla^p\varphi_s \nabla^q\varphi_s\,$, see \cite{Kristiansson:2003xx} for the scalar field contributions. One therefore sees the appearance of an \emph{infinite derivative tail} in the standard field equations already at first order in the weak-field expansion \cite{Sezgin:2002ru}. This would lead to tree-level amplitudes depending on the following two dimensionless scales: (i) the weak-field expansion coupling $g = (\lambda \ell_{\rm p})^{\frac{D-2}2}$ that can always be taken to obey $g \ll 1$; and (ii) the derivative-expansion coupling $(\ell\lambda )^{-n+2}$ where $\ell$ is the characteristic wavelength. Thus the tails are strongly coupled around solutions that are close to the AdS$_D$ solution since here $\ell\lambda \ll 1\,$. 
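To make the contrast between the two situations quantitative, one can insert sample numbers into the two scalings above; the values chosen here are purely illustrative and serve only to fix orders of magnitude. Taking a four-derivative vertex ($n=4$) and a modest scale separation of two orders of magnitude in each case,
$$
\textrm{A:}\quad V_4 \,\sim\, \Big(\frac{\ell_{\rm p}}{\ell}\Big)^{2}\,,\qquad \frac{\ell_{\rm p}}{\ell}=10^{-2} \;\Longrightarrow\; V_4\sim 10^{-4}\ll 1\,;
$$
$$
\textrm{B:}\quad V_4 \,\sim\, (\ell\lambda)^{-2}\,,\qquad \ell\lambda=10^{-2} \;\Longrightarrow\; V_4\sim 10^{4}\gg 1\,.
$$
The same scale separation thus suppresses the higher-derivative vertex in case A but enhances it in case B, which is the precise sense in which the Fradkin--Vasiliev couplings are strongly coupled.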
\subsection{Main lessons}\label{picture} \vspace{4mm}The first important lesson that one can draw from the previous discussions is that, contrary to widespread prejudice, many doors are left open for massless higher-spin particles. The second important lesson is that interactions for higher-spin gauge fields exist but are rather exotic. Some of their properties clash with the standard lore inherited from low-spin physics, and indeed, there is no fundamental reason to expect that higher-spin fields must behave as their low-spin companions. Some model-independent features of non-abelian higher-spin gauge theories seem to emerge from all known no-go theorems and yes-go examples. It appears that most of the exotic properties of higher-spin fields can roughly be explained by mere dimensional arguments. As we have done in the previous subsection, we introduce a parameter $\ell$ with the dimension of a length and rescale all objects in order to work with a dimensionless Lagrangian $\cal L$ and fields $\varphi\,$. The action takes the form: $S=\ell^{-D}\int d^Dx\,{\cal L}(\varphi,\, \ell\, \partial\varphi,\,\ell^2\,\partial^2\varphi, \,\ldots)$ where each derivative is always multiplied by a factor of $\ell\,$. The Lagrangian counterpart of Feynman rules in $S$-matrix arguments is the weak field expansion, \textit{i.e.} the fields $\varphi$ are perturbations around some background for which the higher-spin Lagrangian $\cal L$ (if any) should admit a usual perturbative power expansion in terms of these fields $\varphi\,$. Around a stable vacuum solution, this expansion starts with a quadratic kinetic term ${\cal L}^{(0)}$ with at most two derivatives and it goes on with vertices of various homogeneity degrees in $\varphi$: a cubic vertex ${\cal L}^{(1)}\,$, a quartic vertex ${\cal L}^{(2)}$, \textit{etc}. 
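For concreteness, the weak field expansion just described can be written schematically as follows (a minimal sketch in the dimensionless conventions above; overall normalizations of the vertices are left implicit):
$$
S[\varphi]\;=\;\frac{1}{\ell^{D}}\int d^D x\,\Big(\,{\cal L}^{(0)}(\varphi)\,+\,{\cal L}^{(1)}(\varphi)\,+\,{\cal L}^{(2)}(\varphi)\,+\,\ldots\Big)\,,\qquad {\cal L}^{(n)}\,=\,{\cal O}\big(\varphi^{\,n+2}\big)\,,
$$
where, as above, every derivative inside each ${\cal L}^{(n)}$ comes dressed with a factor of $\ell\,$, so that each vertex is itself a power series in the length scale $\ell\,$.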
In the following we present four general facts (for which there is no proof in full generality, but for which no counter-example has ever been found) that seem to capture universal properties of any massless higher-spin vertex. \vspace*{.2cm} \textbf{A. Higher-spin vertices are local order by order in some length scale}\label{pertloc} A function of the field and its derivatives (treated as independent variables) is said to be \textit{local} if it only depends on a finite number of derivatives $\partial\varphi$, $\partial^2\varphi$, \ldots, $\partial^k\varphi$ (for some fixed integer $k$) and, moreover, if it only depends polynomially on these derivatives. In the Lagrangian framework, the strong form of locality is the condition that the Lagrangian ${\cal L}$ must be a local function of the field $\varphi$, \textit{i.e.} the total number of derivatives is bounded from above (so, in our conventions, the Lagrangian is a polynomial in the length parameter $\ell$). A weaker form of locality is the requirement that the Lagrangian ${\cal L}$ is \emph{perturbatively local} in the sense that it admits a power series expansion in the fields and all their derivatives (so, in our conventions, each vertex must admit a power series expansion in the length scale $\ell\,$). Strictly speaking, this weak form of locality is rather a mild form of non-locality because it is obviously not equivalent to the genuine requirement of locality. Nevertheless, it guarantees that somehow the non-locality (if any) is under control: at each order in the length scale, the theory is local; the bound on the total number of partial derivatives is controlled by the power of $\ell\,$. Concretely, this means that there is no strong non-locality (such as inverse powers of the Laplacian) and that, perturbatively, it can be treated as a local theory. Effective Lagrangians provide standard examples of perturbatively local theories. 
We note in passing that, if, at the cubic level, one is willing to forgo the assumption of perturbative locality, then the higher-spin gravitational minimal coupling around flat space becomes automatically consistent. Remember that, in the early attempts to minimally couple higher-spin particles around flat space \cite{Aragone:1979bm,Berends:1979wu,Aragone:1981yn}, the problem was that the higher-spin variation of the cubic Lagrangian creates terms $\delta_{\varepsilon}S^{min}\sim \int\varepsilon\cdot (W\;\partial\varphi+\partial W\;\varphi )$ proportional to the spin-2 linearized Weyl tensor $W\,$, where $\varepsilon$ is the higher-spin gauge parameter. These terms cannot be compensated by an appropriate local gauge transformation for the spin-2 field, since the linearized Weyl tensor (or its symmetrized and traceless derivative) does not vanish on-shell. However, if one is willing to deal with wildly nonlocal operators and inserts the formal object ``$\Box/\Box$'' in front of the Weyl tensor, one can compensate the terms $\int \varepsilon\cdot (\frac{1}{\Box}\,\Box W\;\partial\varphi + \partial \frac{1}{\Box} \Box W\;\varphi )$ by appropriate nonlocal spin-2 gauge transformations of the form $\delta h\sim \frac{1}{\Box}\,\partial^2(\varepsilon\,\partial\varphi\,+\,\partial\varepsilon\,\varphi)$, using the fact that, contrary to the Weyl tensor, the D'Alembertian of the Weyl tensor is proportional to the field equations for the spin-2 field. Schematically, $\Box W\sim \;\partial C\,$ where ${C}$ denotes the (linearized) Cotton tensor which is itself a linear combination of the curl of the (linearized) Einstein tensor. \textbf{B. Higher-spin vertices are higher-derivative}\label{hder} The higher-derivative property has been observed in all known examples of higher-spin cubic couplings. 
A summary of the general situation at the cubic level and in flat space is as follows: \vspace{2mm}\noindent\textbf{Cubic interactions} \cite{Metsaev:2005ar}: \textit{In flat space, the total number $n$ of derivatives in any consistent local cubic vertex of type $s$-$s^\prime$-$s^{\prime\prime}$ (with $s\leqslant s^\prime\leqslant s^{\prime\prime}$) is bounded by $$s^\prime+s^{\prime\prime}-s\, \leqslant\, n\,\leqslant\, s+s^\prime+s^{\prime\prime}\,.$$ Therefore, the vertex contains at least $s^{\prime\prime}$ derivatives.} \vspace{2mm}\noindent In other words, the value of the highest spin involved ($s^{\prime\prime}$) gives the lowest number of derivatives that the cubic vertex must contain. Notice that this proposition applies to low and higher spins. Examples of type $1$-$1$-$1$ and $2$-$2$-$2$ vertices are the cubic vertices in the Yang--Mills and Einstein--Hilbert actions; they contain one and two derivatives, respectively. Examples of $2$-$s$-$s$ vertices are, for low spins, the Lorentz minimal coupling ($s\leqslant 3/2$) where the energy-momentum tensor involves two derivatives (also for $s = 2$) and, for higher spins ($s>2$), the higher-derivative non-minimal coupling mentioned before. The following two exotic properties of higher-spin particles are straightforward corollaries of results presented so far: \vspace{2mm}\noindent\textbf{Higher-derivative property}: \textit{In flat space, local cubic vertices including at least one massless particle of spin strictly higher than two contain three derivatives or more.} \vspace{2mm}\noindent\textbf{Low-spin coupling}: \textit{In flat space, massless higher-spin particles couple non-minimally to low-spin particles. In (A)dS, they couple quasi-minimally, thereby restoring Weinberg's equivalence principle (gravitational coupling) and the conventional definition of electric charge (electromagnetic coupling).} \vspace*{.2cm} \textbf{C. 
Consistency requires an infinite tower of fields with unbounded spin}\label{hinf} A local cubic vertex is said to be perturbatively consistent at second order if it admits a local --- possibly null --- quartic continuation such that the resulting Lagrangian incorporating the cubic and associated quartic vertices (with appropriately modified gauge transformation laws) is consistent at second order in the perturbative coupling constant. Notice that the assumption of (perturbative) locality is crucial here. If this assumption is dropped, then consistency is automatic beyond cubic level (see e.g. the general theorem in \cite{Barnich:1993vg}) in the sense that any cubic vertex can be completed by non-local quartic vertices \textit{etc}. It is the very assumption of (perturbative) locality that imposes very strong constraints on the set of possibilities. In the local, non-abelian deformation problem, a necessary requirement for the consistency of cubic vertices to extend to quartic level is the closure of the algebra of gauge symmetries (at lowest order and possibly on-shell). This imposes stringent constraints on the algebra in (A)dS spacetime \cite{Fradkin:1986ka}: the presence of at least one higher-spin gauge field requires for consistency at quartic order an infinite tower of gauge fields with unbounded spin (more precisely the minimal spectrum seems to be a tower including all even spins). At the cubic level, the coupling constants of each cubic vertex are independent of each other. Another constraint coming from the consistency at quartic level is that the coupling constants of the cubic vertices are expressed in terms of a single one. Surprisingly, similar results seem to apply in Minkowski spacetime \cite{Metsaev:1991mt}. When the spin is unbounded, the higher-spin interactions are non-perturbatively non-local but perturbatively local, in the rough sense that the number of derivatives is controlled by the length scale. 
More precisely, at any finite order in the power expansion in $\ell$ the vertices are local, but if all terms are included, as usually required for consistency at quartic level, then the number of derivatives is unbounded. Summarizing: \vspace{2mm}\noindent\textbf{Non-locality}: \textit{The number of derivatives is unbounded in any perturbatively local vertex including an infinite spectrum of massless particles with unbounded spin.} The good news is that non-local theories do not automatically suffer from the higher-derivative problem. For non-local theories that are \emph{perturbatively} local, the problem may be treated if the free theory is well-behaved and if nonlocality is cured perturbatively (see \cite{Simon:1990ic} for a comprehensive review on this point). \vspace*{.2cm} \textbf{D. Massless higher-spin vertices are controlled by the infrared scale} \label{infrared} Concretely, in quantum field theory computations where massless particles are involved, one makes use of infrared and ultraviolet cutoffs where $\ell_{IR}$ and $\ell_{UV}$ denote the corresponding length scales ($\ell_{UV}\ll\ell_{IR}$). By definition of the cutoff prescription, the typical wavelength of physical excitations $\ell$ (roughly, the ``size of the laboratory'') must be such that $\ell_{UV}<\ell<\ell_{IR}$. In low-spin physics, the ultraviolet scale is of the order of the Planck length: $\ell_{UV}\sim\ell_{\rm p}\,$, interactions are controlled by that ultraviolet cutoff and non-renormalizable theories are weakly coupled in the low energy regime $\ell\gg\ell_{\rm p}\,$. In higher-spin gauge theory, the situation is turned upside-down: interactions are controlled by the infrared cutoff $\ell_{IR\,(higher-spin)}$ (e.g. the AdS radius) and, since they are higher-derivative, the theory is strongly coupled in the high energy regime $\ell\ll\ell_{IR\,(higher-spin)}\,$. 
\subsection{Higher-spin symmetry breakings}\label{break} While the transition from massless to massive higher-spin particles is well understood at the free level via the St\"uckelberg mechanism, the higher-spin symmetry breaking remains deeply mysterious at the interacting level. The qualitative scenario is briefly discussed in Subsection \ref{broken} and, finally, a tentative summary of the possible pictures is presented in Subsection \ref{1vsHS}. \subsubsection*{A. Higher-spin gauge symmetries are broken at the infrared scale} \label{broken} At energies of the order of the infrared cutoff for the higher-spin gauge theory, \textit{i.e.} when $\ell\sim\ell_{IR\,(higher-spin)}$, higher-spin particles cannot be treated as ``massless'' any more. Instead, they get a mass of the order of $\ell^{-1}_{IR\,(higher-spin)}$ and, consequently, the higher-spin gauge symmetries are broken. Therefore, the no-go theorems do not apply any more. Hence, low-spin physics can be recovered at energies lower than the infrared cutoff of higher-spin gauge theory: $\ell>\ell_{IR\,(higher-spin)}\,$. \emph{In Minkowski spacetime}, a natural infrared scale of massless higher-spin particles is the ultraviolet scale of low-spin physics: $\ell_{IR\,(higher-spin)}\sim\ell_{UV\,(low-spin)}\sim\ell_{\rm p}\,$. Then, the corresponding massive higher-spin particles have masses not smaller than the Planck mass and the higher-spin interactions become ``irrelevant'' in the low energy (sub-Planckian) regime. By naive dimensional analysis, in the high energy (trans-Planckian) regime the scattering amplitudes should diverge since the theory is not (power-counting) renormalizable. However, for an infinite tower of higher-spin particles, the total scattering amplitudes may be extremely soft, or even finite. 
These possibilities are realized for tensile string theory around Minkowski spacetime where the ultraviolet scale is the string length, $\ell_{UV\,(string)}\sim\ell_s\,$, which is usually taken to be of the order of the ultraviolet scale for gravity: $\ell_s\sim\ell_{\rm p}\,$. The underlying symmetry principle behind such a phenomenon remains mysterious, though the standard lore is that higher-spin symmetries should play a key role in its understanding. \emph{In AdS spacetime}, the situation is drastically different because the natural infrared scale is the radius of curvature: $\ell_{IR\,(higher-spin)}\sim R_{AdS}\sim \lambda^{-1}\,$ and the ultraviolet scale may remain the Planck length: $\ell_{UV\,(higher-spin)}\sim\ell_{\rm p}\,$. The high-energy limit of higher-spin gauge theory is then equivalent to the flat limit $\ell\ll R_{AdS}\,$. The Fradkin--Vasiliev cubic vertices and Vasiliev's full non-linear equations are precisely along these lines. \subsubsection*{B. Dynamical symmetry breaking: spin-one \textit{vs} higher-spin}\label{1vsHS} The terminology ``no-go theorem'' assumes that the theorem (e.g. Coleman--Mandula's) is formulated negatively as the impossibility of realizing some idea (e.g. the mixing of internal and spacetime symmetries) under some conditions. If the idea proves to be possible then, retrospectively, the no-go theorem is read positively (by contraposition) as the necessity of some property (e.g. supersymmetry) for the idea to work. Similarly, one may speculate that maybe $S$-matrix no-go theorems \cite{Weinberg:1964ew,Coleman:1967ad,Porrati:2008rm} on massless higher-spin particles should be read positively as providing a hint (if not a proof) that, at the infrared scale where these theorems are valid, an exotic mechanism, reminiscent of mass gap and confinement in QCD, must necessarily take place in any higher-spin gauge theory. 
At low energy, higher-spin particles must either decouple from low-spin ones or acquire a mass: in both cases, asymptotic massless higher-spin states are unobservable. Notice that, usually, the elusive higher-spin symmetry breaking is presented as a ``spontaneous'' symmetry breaking like the Brout--Englert--Higgs mechanism in the electroweak theory, but pursuing the analogy with QCD might be fruitful and one could rather think of a ``dynamical'' symmetry breaking where the Goldstone modes would be composite fields. {}From holographic arguments, the authors of \cite{Girardello:2002pp} indeed advocated for such a scenario whereby masses for all (even) higher-spin fields in Vasiliev's minimal theory in AdS$_4$ are generated by quantum one-loop corrections while all low-spin gauge fields remain massless. We wish to stress the direct similarity to the Schwinger mechanism in two-dimensional quantum electrodynamics \cite{Schwinger:1962tp} and the resemblance to the saturation proposals for mass generation in three- and four-dimensional pure QCD, see e.g. \cite{Aguilar:2007jj,Aguilar:2010zx} and references therein. 
\vspace{2mm}A (maybe bold) way to present a summary of the two phases of higher-spin gauge theory is by analogy with non-abelian Yang--Mills theory (say quarkless QCD) whose main properties may be listed as follows: \begin{itemize} \item \textbf{High energy (unbroken symmetry)}: weak coupling (``asymptotic freedom'') \item \textbf{Low energy (broken symmetry)}: strong coupling $\Longrightarrow$ \textit{Non-perturbative effects} \\ All asymptotic states must be massive (``mass gap'') and singlet (``color confinement'') \end{itemize} \noindent A plausible picture of non-abelian higher-spin gauge theory is summarized as follows: \begin{itemize} \item \textbf{High energy (unbroken symmetry)}: strong coupling \item \textbf{Low energy (broken symmetry)}: decoupling of massless higher-spins $\Longleftarrow$ \textit{No-go theorems}\\ All asymptotic higher-spin states must be massive and/or invariant under higher-spin symmetries \end{itemize} \vspace{2mm}As one can see, perhaps the biggest difficulty with non-abelian higher-spin gauge theory (with respect to its low-spin counterparts) is the absence of a phase with both unbroken symmetry \textit{and} weak coupling (\textit{i.e.} there is no analogue of ultraviolet ``freedom'' for Yang--Mills theory, or infrared ``irrelevance'' for Einstein gravity) where the theory would be easier to study. \section{\large Fully interacting example: Vasiliev's higher-spin gravity}\label{Sec:VE} After having recalled why a classically complete theory is key in higher-spin gravity, we lay out the salient features of Vasiliev's approach leading to a class of models that is not only arguably the most natural one but also a potentially viable brewing pot for actual semi-realistic models of quantum gravity. We finally address the ``state of the art'' and what we believe to be some ways forward. 
\subsection{Examples of non-abelian gauge theories} It is not too much of an exaggeration to stress the fact that \emph{the very existence of a fully interacting non-abelian gauge field theory is a highly non-trivial fact, even at the classical level}. Actually, looking at four space-time dimensions, and focusing on bosonic gauge symmetries --- notwithstanding the extreme importance that supersymmetry and matter-couplings (which might be the same thing in higher-spin gravity) may play in order to have a phenomenologically viable model --- one finds essentially three classes of models containing local degrees of freedom: \begin{itemize}\item Yang--Mills theories, \emph{i.e.} the theory of a self-interacting set of spin-one fields; \item General relativity, \emph{i.e.} the theory of a self-interacting spin-two field; \item Higher-spin gravity, \emph{i.e.} the theory of a self-interacting tower of critically massless even-spin fields. \end{itemize} Looking at their classical perturbation theories, one sees that higher-spin gravity distinguishes itself in the sense that it does not admit a strictly massless perturbative formulation on-shell in terms of massless fields in flat spacetime. Instead it admits a generally covariant double perturbative expansion in powers of\footnote{One can also define a Planck length $\ell_{\rm p}=g^{\frac{2}{D-2}}/\sqrt{|\Lambda|}$, but unlike general relativity, which contains only two derivatives, higher-spin gravity has no sensible expansion (in its unbroken phase) in powers of $\ell_{\rm p}$. In this sense, the perturbation theory of higher-spin gravity is more similar in spirit to that of open string theory.} \begin{itemize} \item a dimensionless coupling constant, $g\,$, counting numbers of weak fields; and \item the inverse of a cosmological constant, $\Lambda\,$, counting numbers of pairs of derivatives. 
\end{itemize} Although higher-spin gravity still lacks an off-shell formulation, its on-shell properties nonetheless suggest a quantum theory in anti-de Sitter spacetime in which localized higher-spin quanta interact in such a fashion that the resulting low-energy effective description is dominated by higher-derivative vertices, with the standard minimal spin-two couplings showing up only as a sub-leading term. Thus one may think of higher-spin gravity as an effective flat-space quantum field theory with an \emph{exotic cutoff}: a finite infrared cutoff, showing up as a cosmological constant in the gravitational perturbation theory, that at the same time plays the role of massive parameter in higher-derivative interactions. Let us mention once more that the reason for this state of affairs can be explained directly in terms of the (mainly negative) results for higher-spin gauge theory in flat spacetime: if one removes $\Lambda$, \emph{i.e.} attempts to formulate a strictly massless higher-spin gauge theory without any infrared cutoff, then one falls under the spell of various powerful (albeit restricted) no-go theorems concerning the couplings between massless fields with spin $s>2$ and massless fields with spins $s\leqslant 2$ in flat spacetime. As we have already mentioned at several places, the perhaps most striking constraint on gauge theories with vanishing cosmological constant, $\Lambda=0\,$, is the clear-cut clash between the equivalence principle, which essentially concerns the non-abelian nature of spin-two gauge symmetries, and abelian higher-spin gauge symmetry: on the one hand, all massless (as well as massive) fields must couple to a massless spin-two field via two-derivative vertices with the same universal coupling constant; on the other hand, such minimal couplings are actually incompatible with the free gauge transformations for spin $s>2$ fields as long as one assumes that these couplings play the dominant r\^ole at low energies. 
In other words, in flat spacetime there are severe no-go theorems forming a spin-two barrier that cannot be surpassed, in the sense that massless particles of spins $s>2$ cannot interact with massless particles of spins $s\leqslant 2$ provided the lower-spin sector contains finite minimal spin-two couplings. Thus, if one wishes to proceed in seeking strictly massless higher-spin gauge theories (with $\Lambda=0$) then one is forced towards unnatural theories without any minimal spin-two couplings, whereas if one switches on a finite $\Lambda$ then one is naturally led into the realms of higher-spin gravity. \subsection{The need for a complete theory} Let us emphasize the need for a complete theory of higher-spin gravity already at the classical level, \emph{i.e.} a consistent action principle or, alternatively, a set of equations of motion, that contains a complete set of strongly coupled derivative corrections. To this end, let us return to the Fradkin--Vasiliev cancellation mechanism within the Fronsdal programme: in the presence of a non-vanishing cosmological constant, $\Lambda$, the Lorentz minimal cubic coupling (two derivatives) for a spin-$s$ field becomes embedded into the Fradkin--Vasiliev quasi-minimal vertex terminating in the non-abelian type $2$-$s$-$s$ vertex ($2s-2$ derivatives) that remains consistent in the $\Lambda\rightarrow 0$ limit \cite{Boulanger:2008tg} --- this ``top vertex'' is thus the seed from which the subleading powers in $\Lambda$ are grown by imposing abelian spin-$s$ gauge invariance. The crux of the matter, however, is that the cubic piece of a complete action (consistent to all orders) may in principle contain additional non-minimal interactions with more derivatives that are strongly coupled in the $\Lambda$-expansion. 
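In schematic form (suppressing index structure, relative coefficients and overall normalization, all of which are convention-dependent), the quasi-minimal $2$-$s$-$s$ vertex described above may be pictured as
$$ V_{2\text{-}s\text{-}s}\;\sim\; g\left(\partial^{\,2s-2} \,+\, \Lambda\,\partial^{\,2s-4}\,+\,\cdots\,+\,\Lambda^{\,s-2}\,\partial^{\,2}\right) h\,\varphi_{s}\,\varphi_{s}\,, $$
where $h$ denotes the spin-two field and $\varphi_s$ the spin-$s$ field: the two-derivative term at the tail is the Lorentz minimal coupling, weighted by $\Lambda^{s-2}$ and hence sub-leading, while the $(2s-2)$-derivative top vertex carries the finite coupling that survives the $\Lambda\rightarrow 0$ limit. 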
Applying dimensional analysis, one arrives at the following problem: for $\Lambda<0$ the on-shell amplitude (Witten diagram) with three external massless gauge bosons need not vanish, and since $\Lambda$ now sets both the infrared cutoff (assuming the free theory to consist of standard tachyon-and-ghost-free Fronsdal kinetic terms) and the mass scale for higher-derivative vertices, the contributions to the amplitude from vertices with $n$ derivatives grow like the $n$th power of a large dimensionless number. Thus, although the top (highest-derivative) vertex dominates the terms with fewer derivatives inside the quasi-minimal coupling (including the Lorentz minimal coupling), it will in turn be washed out by any genuinely non-minimal interaction, whose couplings (overall normalizations in units of $\Lambda$) must hence be determined in order to estimate the three-particle amplitude. Towards this end, one may in principle work within a slightly refined Fronsdal programme as follows: (i) fix a free Fronsdal action; (ii) parameterize all consistent cubic vertices including \emph{a nonlocal Born--Infeld tail}, that is, a strongly coupled expansion in terms of Weyl tensors and their derivatives that cannot be replaced by a single effective Born--Infeld interaction with a finite coupling; (iii) constrain the spectrum and cubic couplings by solving higher-order consistency conditions in the $g$-expansion (starting at quartic order). However, without any guiding principle other than Lorentz and gauge invariance, this is an \emph{a priori} intractable problem, essentially due to the fact that the whole cubic tail must be fixed, which may require going to very high orders in the $g$-expansion. Of course, in the simplest scenario, the complete cubic action could be fixed by quartic consistency, in which case there would be no interaction ambiguity at the cubic level. 
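To make this counting explicit (a heuristic estimate, with index structures and numerical factors suppressed), normalize the vertices so that the two-derivative coupling is $g\,$; a vertex with $n$ derivatives then carries the coupling $g\,|\Lambda|^{-(n-2)/2}$. Acting on fluctuations of characteristic momentum $p\,$, its contribution relative to the two-derivative vertex is of order
$$ \frac{g\,|\Lambda|^{-(n-2)/2}\,p^{\,n}}{g\,p^{\,2}}\;=\;\left(\frac{p^{2}}{|\Lambda|}\right)^{(n-2)/2}\,, $$
which grows with $n$ for any localized fluctuation with $p^{2}\gg |\Lambda|\,$: this is the large dimensionless number whose $n$th power governs the amplitude. 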
Thus, of all possible hypothetical outcomes the extreme cases are: (i) quartic consistency suffices to completely fix the cubic action, including its Born--Infeld tail; and (ii) quartic consistency rules out the cubic action altogether, in which case the choice of free theory initiating the Fronsdal programme would have to be revised. In summary, to make the situation more tractable one may resort to some additional guidance besides Lorentz and gauge invariance --- or bias, if one wishes to use that word --- on what are suitable notions of ``higher-spin multiplets'', for the selection of the spectrum of fields, and of a ``higher-spin tensor calculus'', for the construction of interactions. How to proceed on this issue becomes clearest in \emph{higher-spin gravity}: higher-spin gauge theories based on higher-spin algebras given by infinite-dimensional extensions of the ordinary finite-dimensional space-time isometry algebras. At this stage it is natural to re-think how unitary representations of the complete higher-spin algebra are mapped directly to fields living in infinite-dimensional geometries containing ordinary spacetime as a submanifold. Indeed, one of the key instruments going into Vasiliev's formulation of fully nonlinear equations of motion for higher-spin gravities is \emph{unfolded dynamics} \cite{Vasiliev:1988xc,Vasiliev:1990en,Vasiliev:1988sa,Vasiliev:1992gr}: a mathematically precise tool for manifestly diffeomorphism-invariant generalized space-time reconstructions, applying to finite-dimensional as well as infinite-dimensional cases. 
\subsection{Vasiliev's equations} A working definition of higher-spin algebras, developed by Fradkin, Konstein and Vasiliev \cite{Fradkin:1986ka,Fradkin:1987ah,Konshtein:1988yg,Konstein:1989ij}, that has proven to be useful is that of Lie subalgebras of associative algebras obtained from the enveloping algebras of the space-time isometry algebra by factoring out the annihilators of their ``fundamental'', or ultra-short, unitary representations (singletons). In this setting, the higher-spin generators are monomials in the space-time isometry generators, and higher-spin multiplets arise by tensoring together singletons \cite{Flato:1978qz,Vasiliev:2004cm,Dolan:2005wy}, which introduces the germ of an extended object\footnote{The idea of treating algebras and their representations on a more equal footing --- namely as various left-, right- or two-sided modules arising inside the enveloping algebra and its tensor products --- is in the spirit of modern algebra and deformation quantization. Indeed, further development of these thoughts led to first-quantized systems linking higher-spin gravities to tensionless strings and branes \cite{Engquist:2005yt}.} as well as a precursor to AdS/CFT. In order to construct higher-spin extensions of four-dimensional gravity, the simplest higher-spin algebras of this type can be realized in terms of elementary noncommutative twistor variables. As a result, the full field content of a special class of higher-spin gravities, which we may refer to as the minimal bosonic models together with their matter-coupled and supersymmetrized extensions, is packed up into finite sets of ``master'' fields living on the product of a commutative spacetime and a noncommutative twistor space. 
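In four dimensions, this realization can be sketched as follows (a standard presentation, with star-product conventions varying in the literature): one introduces commuting twistor variables $Y=(y_\alpha,\bar{y}_{\dot\alpha})$ endowed with an associative star product obeying
$$ y_\alpha \star y_\beta \;=\; y_\alpha\, y_\beta \,+\, i\,\epsilon_{\alpha\beta}\;,\qquad \bar{y}_{\dot\alpha}\star \bar{y}_{\dot\beta} \;=\; \bar{y}_{\dot\alpha}\,\bar{y}_{\dot\beta} \,+\, i\,\epsilon_{\dot\alpha\dot\beta}\;, $$
so that the bilinears $y_{(\alpha} y_{\beta)}\,$, $\bar{y}_{(\dot\alpha}\bar{y}_{\dot\beta)}$ and $y_{\alpha}\bar{y}_{\dot\beta}$ close, under star commutators, on $sp(4)\simeq so(3,2)\,$, the four-dimensional anti-de Sitter isometry algebra, while the monomials of higher even degree in $Y$ generate a bosonic higher-spin algebra (the minimal model requiring a further projection). 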
The feat of Vasiliev was then to realize that these master fields can be taken to obey remarkably simple-looking master equations built using exterior differential calculus on spacetime and twistor space, together with star-products on twistor space, reproducing the standard second-order equations in perturbation theory, in about the same way in which Einstein's equations arise inside a set of on-shell superspace constraints via constraints on the torsion and Riemann two-forms. As a result, Vasiliev's equations are diffeomorphism invariant --- in the sense of unfolded dynamics --- and perturbatively equivalent to a standard set of on-shell Fronsdal fields, albeit with interactions given by a nonlocal double perturbative expansion resulting from the star-products. Looking at the twistor-space structure, one sees that it serves two purposes. In naive double perturbation theory, the expansion in the twistor variables combined with star-products simply generates the higher-spin tensor calculus that one may take to define the minimal bosonic models, after which one can naively strip off all the twistor variables by Taylor expansion and make contact with the standard tensorial equations of motion, after having eliminated infinite towers of auxiliary fields. A more careful look at these tensorial equations of motion reveals, however, Born--Infeld tails that are indeed strongly coupled, \emph{i.e.} formally divergent for ordinary localized fluctuation fields and hence inequivalent to the canonical Born--Infeld interactions. Focusing on classical solutions in special sectors (boundary conditions), one then discovers that their re-summation is tantamount to regularizations of star-products, which requires performing the field-theoretic calculations inside the twistor space, and not just looking at Taylor expansions. 
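For orientation, the bosonic four-dimensional master equations can be displayed schematically (a frequently quoted presentation; signs, factors and the interaction phase $\theta_0$ depend on conventions). With $\widehat{A}$ a master one-form and $\widehat{\Phi}$ a master zero-form on the product of spacetime and the non-commutative twistor $Z$-space, they read
$$ \mathrm{d}\widehat{A} + \widehat{A}\star\widehat{A} \;=\; \frac{i}{4}\left( e^{i\theta_0}\,\widehat{\Phi}\star\kappa\; dz^{\alpha}\wedge dz_{\alpha} \,+\, e^{-i\theta_0}\,\widehat{\Phi}\star\bar{\kappa}\; d\bar{z}^{\dot\alpha}\wedge d\bar{z}_{\dot\alpha}\right),\qquad \mathrm{d}\widehat{\Phi} + \widehat{A}\star\widehat{\Phi} - \widehat{\Phi}\star\pi(\widehat{A}) \;=\; 0\;, $$
where $\kappa$ and $\bar{\kappa}$ denote the inner Klein operators and $\pi$ the automorphism flipping the sign of the undotted twistor variables; the deceptively simple appearance hides strongly coupled perturbative tails. 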
In other words, Vasiliev's complete higher-spin gravity is essentially non-local in spacetime but admits a quasi-local formulation in terms of star-products on the direct product of commutative spacetime and non-commutative twistor space, where one can then proceed to build classical observables and geometries for the theory. This somewhat awkward albeit mathematically completely well-defined situation raises the issue of whether Vasiliev's equations should be viewed as a natural representative for higher-spin gravity or not. Since there are no other known examples of classes of higher-spin gravities with local degrees of freedom, it is difficult to make any direct comparisons. However, lessons can be drawn by looking at the AdS/CFT correspondence. \subsection{AdS/CFT correspondence: Vasiliev's theory from free conformal fields} \label{AdSCFT} In the previous sections we have attempted to draw up a dictionary between the S-matrix and Lagrangian approaches in the case of vanishing cosmological constant. Switching on the cosmological constant, the notion of the S-matrix becomes deformed into that of a holographic conformal field theory. Thus, one way of assessing to what extent a higher-spin gravity is ``natural'' is to ask to what extent its dual conformal field theory is natural. Shortly after Maldacena's version of the AdS/CFT conjecture, which was derived within a stringy context involving strong/weak-coupling dual descriptions of branes, the question arose as to what the anti-holographic dual of a weakly-coupled CFT could be. Since a free CFT has infinitely many conserved currents of arbitrary spin, in addition to the stress-energy tensor, it was natural to expect the AdS dual to be a higher-spin gauge theory containing a graviton. 
With a notable precursor \cite{Bergshoeff:1988jm}, such ideas emerged progressively in a series of papers \cite{HaggiMani:2000ru,Sundborg:2000wp,Konstein:2000bi,Shaynkman:2001ip,Sezgin:2001zs, Witten2001,Mikhailov:2002bp,Sezgin:2002rt,Klebanov:2002ja,Sezgin:2003pt}: the idea was born in the context of the Type IIB theory on $AdS_5 \times S^5$ \cite{HaggiMani:2000ru,Sundborg:2000wp,Sezgin:2001zs}, and then pursued in a more general D-dimensional context, first at the level of kinematics \cite{Konstein:2000bi,Shaynkman:2001ip} and later at a dynamical level leading to the duality conjecture between a purely bosonic higher-spin gravity in any dimension and a theory of (a large number of) free conformal scalars in the vector representation of an internal symmetry group \cite{Witten2001,Mikhailov:2002bp,Sezgin:2002rt}, refined to include the strongly-coupled fixed points of the three-dimensional $O(N)$-model and Gross-Neveu model, respectively, in \cite{Klebanov:2002ja} and \cite{Sezgin:2003pt}. More precisely, the bilinear operators formed out of free fields couple to higher-spin sources identified as the boundary data of bulk higher-spin gauge fields. One should stress that although the boundary CFT is quadratic, it is nevertheless non-trivial since the bilinear operators actually couple to background sources; therefore the bulk dual theory is interacting. The concrete relation with Vasiliev's unfolded equations in four and five dimensions was elaborated in \cite{Sezgin:2001zs,Sezgin:2002rt,Sezgin:2003pt}, and the fully non-linear bosonic higher-spin gravity in any dimension was then found in \cite{Vasiliev:2003ev}. 
The agreement between Vasiliev's four-dimensional higher-spin gravity and the sector of bilinear operators formed out of free conformal scalars and spinors in three dimensions has been verified at the level of scalar cubic couplings in \cite{Petkou:2003zz,Sezgin:2003pt}, and, more recently, at the general cubic level in \cite{Giombi:2009wh,Giombi:2010vg} under certain prescriptions which still remain to be spelled out in their entirety. Thus the question of whether Vasiliev's higher-spin gravity is natural or not is equivalent to the question of whether free scalars (and spinors) are natural building blocks for three-dimensional conformal field theories with (unbroken or weakly broken) higher-spin currents. Or, put differently, thinking about Vasiliev's higher-spin gravity is about as natural as it is to think of three-dimensional conformal field theories starting from free fields. Intermediate developments were given in \cite{Leonhardt:2002sn,Das:2003vw,Leonhardt:2003du,Leigh:2003ez,Ruehl:2004kq,Bonelli:2004ve,Hartnoll:2005yc,Diaz:2006nm,Yonge:2006tn,Elitzur:2007zz}. More recently, the full checks of the conjecture for $AdS_4/CFT_3$ at the cubic level \cite{Giombi:2009wh,Giombi:2010vg} prompted a revived interest in the correspondence.\footnote{Note that recently, in the $AdS_3/CFT_2$ framework based on the bulk theories provided in \cite{Blencowe:1988gj,Prokushkin:1998bq}, many interesting works have appeared, see e.g. \cite{Campoleoni:2010zq,Henneaux:2010xg,Gaberdiel:2010ar,Gaberdiel:2010pz,Castro:2010ce, Gaberdiel:2011wb,Gaberdiel:2011zw,Gaberdiel:2011nt,Chang:2011mz,Campoleoni:2011hg,Kraus:2011ds} and references therein.} For instance, the conjecture has been generalised in the presence of a Chern-Simons gauge field on the three-dimensional boundary \cite{Aharony:2011jz,Giombi:2011kc}. Another duality has been proposed relating the bosonic Vasiliev theory on the de Sitter bulk spacetime $dS_4$ and a Euclidean $CFT_3$ of fermionic scalar fields \cite{Anninos:2011ui}. 
The thermodynamic behaviour of Vasiliev's higher-spin gravity has been inferred from CFT computations \cite{Shenker:2011zf}. Several attempts toward a constructive derivation of the bulk dual of a free CFT in the vector representation have been proposed, such as the bilocal field approach \cite{Das:2003vw,Koch:2010cy,Jevicki:2011ss} and the renormalisation group \cite{Douglas:2010rc}. Here we also wish to stress that AdS/CFT is to gauge field theory more than what standard global-symmetry current algebra is to quantum field theory, essentially since the boundary currents are coupled to bulk gauge fields. Thinking of free conformal scalar fields, the case of two dimensions is very special, in that the stress tensor forms a closed operator algebra (the Virasoro algebra). Indeed, already in three dimensions one encounters the full higher-spin current algebra as one expands the operator product between two stress tensors (which includes a scalar current rather than a central term). Thus, in the case of four-dimensional theories of quantum gravity, it seems that the simplest, most natural procedure would be to start from Vasiliev-like higher-spin gravities and then seek symmetry-breaking mechanisms that would correspond to breaking the higher-spin currents, followed by taking limits in which these decouple from operator product expansions. In fact, by putting more emphasis on the AdS/CFT correspondence, one may provide further arguments \cite{Girardello:2002pp} why higher-spin gravity is a natural framework for seeking ultraviolet completions of general relativity. Ordinary general relativity together with various matter couplings (and without exotic vertices) may then appear at low energies as the result of the dynamical higher-spin symmetry-breaking mechanism induced by radiative corrections proposed in \cite{Girardello:2002pp}, provided that the induced non-critical mass-gaps grow large at low energies. 
If so, higher-spin gravity may bridge general relativity and string theory, which might be needed ultimately in order to achieve non-perturbative unitarity. \subsection{Emergence of extended objects} \label{sec:extended} Let us comment briefly on the similarities and dissimilarities between higher-spin gravity, with its double perturbative expansion in terms of the dimensionless coupling $g$ and the cosmological constant $\Lambda\,$, and string theory, with its double perturbative expansion in terms of the string coupling $g_s$ and the string tension $T_s\,$. On the one hand, both of these theories are genuine higher-derivative theories, which implies that at fixed orders in $g$ and $g_s\,$, respectively, there are vertices with fields of sufficiently high spins involving arbitrarily large inverse powers of their massive parameters, $\Lambda$ and $T_s\,$, respectively. Thus, in order to understand their respective second quantizations ($g$ and $g_s$ expansions), one must first obtain a sufficiently sophisticated understanding of their first quantizations ($\Lambda$ and $T_s$ expansions). Now, to its advantage, string theory offers a massless window where its first quantization is weakly coupled, whereas in dealing with unbroken higher-spin gravity one must face the whole packed-up content of its master fields. A striking similarity between open string theory and higher-spin gravities occurs when one considers \cite{Konstein:1989ij} extensions of the higher-spin algebra by an internal, associative algebra (see also \cite{Vasiliev:2004qz,Vasiliev:2005zu}). In such cases, there exist colored, massless spin-two fields resembling the spin-2 states of open strings. These states can be given Chan-Paton factors since their interactions are based on an associative algebra. This similarity was pointed out in \cite{Francia:2002pt,Francia:2006hp}, to which we refer for related discussions. 
Let us note that the existence of colored gravitons in extended higher-spin theories does not contradict the results of \cite{Boulanger:2000rq}, since there it was assumed that the fields considered could have spin 2 at most and the background was taken to be flat. At the classical level, there remain the possibilities of having consistent truncations of closed string theory down to higher-spin gravity, and of higher-spin gravity down to general relativity. For example, both of these types of truncations may turn out to be relevant in the case of the hypothetical tensionless Type IIB closed string theory on $AdS_5\times S^5$ that should be the anti-holographic dual of free four-dimensional maximally supersymmetric Yang--Mills theory in its $1/N$ expansion \cite{Sundborg:2000wp,Sezgin:2002rt}. Here the hypothetical five-dimensional maximally supersymmetric higher-spin gravity (for the linearized theory see \cite{Sezgin:2001yf}) can be identified as the Kaluza-Klein reduction of the ``bent'' first Regge trajectory of the flat-space string theory \cite{Sezgin:2002rt,Bianchi:2003wx}. The full tensionless string theory will then involve a much larger higher-spin symmetry algebra bringing in mixed-symmetry fields with critical masses such that they fit into multipletons \cite{Sezgin:2002rt,Bianchi:2003wx}. As for consistent truncations of higher-spin gravity down to possibly matter-coupled (super)gravities, a look at the state of affairs in gauged supergravities arising from sphere reductions \cite{deWit:1986iy,Nastase:1999cb,Cvetic:2000nc} suggests that one should conjecture their existence in the case of maximal supersymmetry. As far as the Type IIB superstring is concerned, its graviton in ten-dimensional flat spacetime admits a deformation into a graviton of five-dimensional anti-de Sitter spacetime. 
More generally, a key physical effect of having a negative cosmological constant is the formation of cusps on spiky closed strings \cite{Gubser:2002tv,Kruczenski:2004wg} (for generalizations to membranes, see \cite{Sezgin:2002rt}). At these cusps, solitonic bound states arise, carrying the quantum numbers of singletons \cite{Engquist:2005yt}. In the case of folded long strings, the resulting two-singleton closed string states are massless symmetric tensors with large spin realized \`a la Flato--Fronsdal \cite{Flato:1978qz}. In the extrapolation of this spectrum to small spins, which is tantamount to taking a tensionless limit, resides the anti-de Sitter graviton. In \cite{Engquist:2005yt}, it was argued that, in order for the tensionless limit to lead to a closed-string field theory with nontrivial interactions, it should be combined with sending the cosmological constant to infinity in a discretized model with fixed mass parameter. This yields first-quantized $(0+1)$-dimensional models describing multi-singleton states. These have continuum limits given by Wess--Zumino--Witten models with gauged W-algebras (rather than Virasoro algebras) that can be realized in terms of symplectic bosons \cite{Engquist:2005yt,Engquist:2007pr} and real fermions. In \cite{Engquist:2005yt} it was furthermore argued that the coupling of these first-quantized models to higher-spin background fields requires their extension into Poisson sigma models in one higher dimension containing the original systems on their boundaries. In particular, in the case of a single singleton, which represents one string parton or membrane parton, these couplings are mediated via boundary and bulk vertex operators of a topological open string in the phase space of a singleton, which is a particular example of the C-model of \cite{Cattaneo:1999fm}; the consistency of this first-quantized system with disc topology then requires Vasiliev's equations. 
The resulting physical picture provides a concrete realization for the germ of an extended object that is present already in the Flato--Fronsdal formula. This picture also rhymes well with the holographic framework: just as the weak-coupling stress tensor is deformed directly into the strong-coupling stress tensor on the CFT side, the graviton in higher-spin gravity is the continuation of that in closed string theory. Moreover, the fact that topological C-models underlie general associative algebras directly explains why Vasiliev's equations are compatible with internal Chan-Paton factors. One is thus led to contemplate a more profound underlying framework for quantum field theory in general, based on Poisson sigma models and topological summation, one that would naturally incorporate the gauge principle as well as radiative corrections; in the case of the topological open string, the additional zero-modes arising from cutting holes in the disc may then provide a first-quantized realization of the massive Goldstone modes of the Girardello--Porrati--Zaffaroni mechanism \cite{Girardello:2002pp}. \section{\large Conclusions and Outlook}\label{Sec:conclusions} We have discussed the key mechanism by which higher-spin gravity evades the no-go theorems and in particular how the equivalence principle is reconciled with higher-spin gauge symmetry. Starting in flat spacetime, massless higher-spin particles cannot be reconciled with the equivalence principle. Nevertheless, the Weinberg--Witten theorem does not rule out higher-derivative energy-momentum tensors made out of higher-spin gauge fields. Hence massless higher-spin particles may couple non-minimally to a massless spin-two particle. However, in such a case the low-energy Weinberg theorem would rule out the self-coupled Einstein--Hilbert action and minimally-coupled matter, in particular with low spins (\emph{i.e.} $s=0$, $1/2$, $1$), in blatant contradiction with observations. 
Going to anti-de Sitter spacetime, the Lorentz minimal coupling reappears, but only as a subleading term in a strongly coupled derivative expansion. In order to do weakly coupled calculations, even at the cubic level for higher-spin gravity, one thus needs a complete theory with the full derivative expansion under control. The simplest available candidate at the moment is Vasiliev's theory. Remarkably, not only does it resolve all the difficulties reported in the no-go theorems, but it also seems to be the simplest unbroken higher-spin gravity in the sense that it corresponds, via AdS/CFT, to a free conformal field theory with only scalar and/or fermion fields, albeit in large number. \vspace{2mm} Two major open problems that need to be considered are \begin{itemize} \item \emph{Can the Fronsdal programme be pursued up to quartic vertices?} It is not totally excluded that the answer is ``no'' under the requirement of perturbative locality. Moreover, scattering amplitudes in AdS can be defined without using an action principle, and the recent checks of the AdS/CFT correspondence in the context of higher-spin gravity at the cubic level were done by using the unfolded formalism in the bulk theory. \item \emph{Does the dimensionless coupling in higher-spin gravity become large at low energies in AdS?} If the answer is ``yes'' then higher-spin gravity would be a promising candidate for an effective quantum gravity theory. Drawing on our experience with QCD, since higher-spin gravity has been observed to be extremely soft at high energy, it is tempting to think that the coupling constant becomes weak in the ultraviolet and should grow in the infrared, such that the dynamical higher-spin symmetry breaking, which is present already in the ultraviolet, gives rise to a finite mass gap allowing the identification of the low-energy and low-spin regime. \end{itemize} \section*{\large Acknowledgments} We are grateful to S. 
Leclercq for collaborations on several works closely related to the present paper. We want to thank K.~Alkalaev, G.~Barnich, A.~Bengtsson, F.~Buisseret, N.~Colombo, P.~P.~Cook, V.~Didenko, J.~Engquist, D.~Francia, M.~Henneaux, C.~Iazeolla, V.~Mathieu, K.~Meissner, R.~Metsaev, J.~Mourad, D.~Polyakov, M.~Porrati, A.~Sagnotti, E.~Sezgin, E.~Skvortsov, D.~Sorokin, Ph.~Spindel, M.~Taronna, M.~Tsulaia, M.~A.~Vasiliev, Y.~Zinoviev and Xi Yin for various discussions over the years. \begin{appendix} \section{\large Weinberg low-energy theorem: S-matrix/Lagrangian dictionary}\label{sec:Gra} In 1964, Weinberg obtained stringent constraints on $S$-matrix elements by considering the effects tied to the emission of soft massless quanta \cite{Weinberg:1964ew}. Consider an $S$-matrix element with $N$ external particles of momenta $p_i^\mu$ ($i=1,2,\ldots,N$) corresponding to the Feynman diagram \begin{fmffile}{feynm1021} \begin{eqnarray} {\cal A}(p_1,\ldots,p_N)\quad = \parbox{50mm}{ \begin{fmfgraph*}(60,40) \fmfbottomn{i}{6} \fmftopn{o}{6} \fmfblob{.15w}{b1} \fmflabel{$p_1$}{i1} \fmflabel{$p_N$}{o2} \fmflabel{$p_i$}{o5} \fmf{plain}{i2,v2,b1} \fmf{plain}{i3,v3,b1} \fmf{plain}{i4,v4,b1} \fmf{plain}{i5,v5,b1} \fmf{plain}{b1,w2,o2} \fmf{plain}{b1,w3,o3} \fmf{plain}{b1,w4,o4} \fmf{plain}{b1,w5,o5} \end{fmfgraph*}} \label{process} \end{eqnarray} \end{fmffile}where all external momenta $p_i$ are on their respective mass-shells. For the sake of simplicity, all momenta are taken to be ingoing and the polarizations of these particles are left implicit in $\cal A$. 
\subsection{Emission of a massless particle: Lorentz \textit{versus} gauge invariances} The amplitude for the further emission (or absorption) from any leg of a single massless spin-$s$ particle of momentum $q^\mu$ and polarization $\epsilon_{\mu_1\ldots\,\mu_s}(q)$ is denoted by ${\cal A}(p_1,\ldots,p_N;q,\epsilon)\,$: \vspace*{.1cm} \begin{fmffile}{feynm2021} \begin{eqnarray} {\cal A}(p_1,\ldots,p_N;q,\epsilon)\; =\; \epsilon_{\mu_1\ldots\,\mu_s}(q)\,{\cal A}^{\mu_1\ldots\mu_s}(p_1,\ldots,p_N;q) \;= \parbox{50mm}{ \begin{fmfgraph*}(60,40) \fmfbottomn{i}{6} \fmftopn{o}{6} \fmfblob{.15w}{b1} \fmflabel{$p_1$}{i1} \fmflabel{$p_N$}{o2} \fmflabel{$p_i$}{o5} \fmf{plain}{i2,v2,b1} \fmf{plain}{i3,v3,b1} \fmf{plain}{i4,v4,b1} \fmf{plain}{i5,v5,b1} \fmf{plain}{b1,w2,o2} \fmf{plain}{b1,w3,o3} \fmf{plain}{b1,w4,o4} \fmf{plain}{b1,w5,o5} \fmffreeze \fmf{photon}{w5,o6} \end{fmfgraph*}} \nonumber . \end{eqnarray} \end{fmffile} \vspace*{.1cm} \noindent In general, the line of this extra particle can be attached to any other line, either internal or external. In relativistic quantum field theory, the polarizations are \emph{not} Lorentz-covariant objects: under Lorentz transformations, one has $$\epsilon_{\mu_1\ldots\,\mu_s}(q)\longrightarrow \epsilon_{\mu_1\ldots\,\mu_s}(q)\,+\,s\,\,q_{(\mu_1}\xi_{\mu_2\ldots\,\mu_s)}(q)$$ for some symmetric tensor $\xi$ where the round bracket denotes complete symmetrization over the indices. This property is well-known for massless particles and is the counterpart of gauge invariance in the Lagrangian approach. Lorentz-invariance of the $S$-matrix and the decoupling of spurious degrees of freedom thus require the condition \begin{eqnarray} q_{\mu_1}{\cal A}^{\mu_1\ldots\mu_s}(p_1,\ldots,p_N;q)=0\,,\qquad \forall q \quad. 
\label{Noe} \end{eqnarray} \subsection{Cubic vertices} In the particular case where the Feynman diagram (\ref{process}) is a single straight line, \textit{i.e.} it describes the free propagation of a single particle, then the modified Feynman diagram is essentially the tree-level process \begin{fmffile}{vertex1005} \begin{eqnarray} \quad\quad\quad{\cal A}(p_1,p_2)\quad=\quad \parbox{50mm}{ \begin{fmfgraph*}(40,40) \fmfbottom{i1,i2} \fmftop{o1,o2} \fmf{plain}{i1,o2} \fmflabel{$p_1$}{i1} \fmflabel{$p_2$}{o2} \end{fmfgraph*}} {\cal A}(p_1,p_2;q,\epsilon)\quad=\quad \parbox{50mm}{ \begin{fmfgraph*}(60,40) \fmfleft{i1,i2} \fmfright{o1,o2} \fmf{plain}{i1,v,i2} \fmf{photon}{v,o2} \fmflabel{$p_1$}{i1} \fmflabel{$p_2$}{i2} \fmflabel{$q$}{o2} \fmfdot{v} \end{fmfgraph*}} \nonumber \end{eqnarray} \end{fmffile} so $\Gamma^{\mu_1\ldots\mu_s}(p_1,p_2;q):={\cal A}^{\mu_1\ldots\mu_s}(p_1,p_2;q)$ is the part of the cubic vertex which corresponds to the Noether current in the Lagrangian approach. The conservation of the Noether current in the Lagrangian approach is equivalent to the Lorentz invariance condition (\ref{Noe}) in the $S$-matrix approach. Let us see this in more detail by considering a cubic vertex of type $s$-$s^\prime$-$s^\prime$ with $s\neq s^\prime\,$. The massless particle of spin $s$ is of arbitrary momentum $q^\mu$ (and thus off-shell) while the two particles of spins $s^\prime$ are on-shell with respective momenta $p_1$ and $p_2\,$. Writing explicitly the polarizations $\epsilon^{(1)}(p_1)$ and $\epsilon^{(2)}(p_2)$ of the two spin-$s^\prime\,$ particles, the cubic vertex takes the form \begin{eqnarray} \Gamma^{\mu_1\ldots\mu_s}(p_1,p_2;q)= \Gamma^{\mu_1\ldots\mu_s\,|\,\nu_1\ldots\nu_{s^\prime}\,|\ \rho_1\ldots\rho_{s^\prime}}(p_1,p_2;q) \,\epsilon^{(1)}_{\nu_1\ldots\nu_{s^\prime}}(p_1)\, \epsilon^{(2)}_{\rho_1\ldots\rho_{s^\prime}}(p_2)\,. 
\nonumber \end{eqnarray} In the Lagrangian language, the cubic interaction term corresponding to the cubic vertex is, without loss of generality, of the form \begin{eqnarray} S^{(1)}[\varphi_s,\varphi_{s'}]:= \int d^Dx\; {\cal L}^{(1)}\,,\qquad {\cal L}^{(1)} \; :=\; \varphi_{\mu_1\ldots\mu_s} \; \Theta^{\mu_1\ldots\mu_s}(\varphi_{s'},\varphi_{s'}) \nonumber \end{eqnarray} where $\Theta^{\mu_1\ldots\mu_s}$ is bilinear in $\varphi_{s^\prime}$. More precisely, let us write the requirement of gauge invariance of the cubic action $S^{(1)}[\varphi_s,\varphi_{s'}]$ under linearized spin-$s$ gauge transformations $\delta_s^{(0)}\varphi_{\mu_1\ldots\mu_s} = s \;\partial_{(\mu_1} \xi_{\mu_2\ldots\mu_s)}$: \begin{equation} \delta_s^{(0)}S^{(1)} + \delta_s^{(1)}S^{(0)} = 0 \nonumber \end{equation} where $S^{(0)}$ denotes the free part of the action, $\delta_s^{(0)}$ the free spin-$s$ gauge transformations and $\delta_s^{(1)}$ the gauge transformations taken at linear order in the fields $\{\varphi_{s'},\varphi_s\}\,$ and linear in the spin-$s$ gauge parameter $\xi_{\mu_1\ldots\mu_{s-1}}\,$. The above equation implies that $\Theta^{\mu_1\ldots\mu_s}$ is a conserved current: \begin{equation} \partial_{\mu_1}\Theta^{\mu_1\ldots\mu_s}(\varphi_{s'},\varphi_{s'}) \approx 0 \nonumber \end{equation} so that the Lorentz invariance condition (\ref{Noe}) in the $S$-matrix approach is indeed equivalent to the conservation of the Noether current in the Lagrangian approach. In momentum space, $$ S^{(1)}= \int d^Dq\,d^Dp_1\, d^Dp_2\,{\delta(p_1+p_2+q)}\,\Gamma^{\mu_1\ldots\mu_s\,|\,\nu_1\ldots\nu_{s^\prime}\,|\,\rho_1\ldots\rho_{s^\prime}}(p_1,p_2;q)\, \varphi_{\mu_1\ldots\mu_s}(q)\,\varphi_{\nu_1\ldots\nu_{s^\prime}}(p_1)\varphi_{\rho_1\ldots\rho_{s^\prime}}(p_2)\,. 
$$ {} The cubic vertex with the lowest number of derivatives is of the form $$\Gamma^{\mu_1\ldots\mu_s\,|\,\nu_1\ldots\nu_{s^\prime}\,|\,\rho_1\ldots\rho_{s^\prime}}(p_1,p_2;q) \propto\Gamma^{\mu_1\ldots\mu_s}(p_1,p_2;q)\eta^{\nu_1\rho_1}\ldots\eta^{\nu_{s^\prime}\rho_{s^\prime}}$$ where there is an implicit symmetrization over all $\nu$ indices and $$\Gamma^{\mu_1\ldots\mu_s}(p_1,p_2;q)\propto(p_1-p_2)^{\mu_1}\dots(p_1-p_2)^{\mu_s}$$ is the cubic vertex for a scalar particle coupled to a spin-$s$ massless particle. This coupling is called ``minimal'' in the sense that it contains the minimal number of derivatives and also because it corresponds to a coupling with the Berends--Burgers--van Dam conserved currents associated with the rigid symmetries $\delta\varphi_{s'}(k)\,=\,i\,\xi^{\mu_1\ldots\mu_{s-1}}k_{\mu_1}\dots k_{\mu_{s-1}}\varphi_{s'}(k)$ \cite{Berends:1985xx} (see also \cite{Bekaert:2009ud} for more details). In the low energy limit $q\rightarrow 0\,$, the only surviving cubic interaction is indeed the minimal coupling with $s$ derivatives. The Lorentz invariance condition (\ref{Noe}) on the amplitude ${\cal A}(p_1,\ldots,p_N;q,\epsilon)\,$ for the further emission (or absorption) of a soft massless spin-$s$ particle implies the conservation law of order $s-1$ on the $N$ external momenta (\ref{lowen}) where each inserted minimal vertex $\Gamma^{\mu_1\ldots\mu_s}(p_i,-p_i-q;q)$ comes with a coupling constant $g^{(s)}_i$ (for more details, see e.g. \cite{Weinberg:1995mt}, Section 13.1 or \cite{Blagojevic:2002du}, Appendix G). Equivalently, these conservation laws can be obtained from the Noether charges associated with the above-mentioned rigid symmetries.
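As a familiar low-spin illustration of the Noether logic above (our own unpacking, not part of the original argument), take $s=1$ with cubic coupling ${\cal L}^{(1)} = A_\mu\,\Theta^\mu$, where $\Theta^\mu$ is bilinear in the matter fields:

```latex
% Under the free gauge transformation of the photon,
\delta^{(0)} A_\mu = \partial_\mu \xi
\quad\Longrightarrow\quad
\delta^{(0)} S^{(1)}
= \int d^D x \,(\partial_\mu \xi)\,\Theta^\mu
= - \int d^D x \; \xi \,\partial_\mu \Theta^\mu \, .
% Cancelling this against \delta^{(1)} S^{(0)} on the free mass shell
% requires \partial_\mu \Theta^\mu \approx 0 : the s = 1 instance of the
% conserved-current condition, whose minimal vertex is
% \Gamma^\mu(p_1,p_2;q) \propto (p_1 - p_2)^\mu .
```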
\section{\large Weinberg--Witten theorem: a Lagrangian reformulation} \label{sec:S} \subsection{Weinberg--Witten theorem} Weinberg and Witten designed their no-go theorem \cite{Weinberg:1980kq} to eliminate ``emergent gravity'' theories where the graviton is a bound state of particles with spin one or lower. Its proof involves $S$-matrix manipulations which will be discussed in more detail in the next subsection on its refined version. If one assumes locality, then it becomes surprisingly easy to prove the Lagrangian version of the Weinberg--Witten theorem. Let $[s]$ denote the integer part of the spin $s\,$. \vspace{2mm}\noindent\textbf{Lemma}: \textit{Any local polynomial which is at least quadratic in a spin-$s$ massless field, non-trivial on-shell and gauge invariant, must contain at least $2\,[s]$ derivatives.} \proof{Corollary 1 of \cite{Bekaert:2005ka} states that, on-shell, any local polynomial which is gauge invariant may depend on the gauge fields only through the Weyl-like tensors. The latter tensors contain $[s]$ derivatives, thus the lemma follows.} A straightforward corollary of this lemma is a version of the Weinberg--Witten theorem. \vspace{2mm}\noindent\textbf{Weinberg--Witten theorem} (Lagrangian formulation): \noindent(i) \textit{Any perturbatively local theory containing a charge current $J^\mu$ which is non-trivial, Lorentz covariant and gauge invariant, forbids massless particles of spin $s>1/2\,$.} \noindent(ii) \textit{Any perturbatively local theory containing a Lorentz covariant and gauge invariant energy-momentum tensor $T^{\mu\nu}$ forbids massless particles of spin $s>3/2\,$.} \proof{In the free limit, any Noether current in a perturbatively local theory must be a quadratic local polynomial. For massless fields of spin $s>1/2$, the lemma implies that this polynomial must contain at least two derivatives (or four derivatives if $s>3/2$).
However, the charge current contains one derivative and the energy-momentum tensor two derivatives.} The lower bound $s>3/2$ of this version is slightly weaker than the lower bound $s>1$ of the original Weinberg--Witten theorem \cite{Weinberg:1980kq}. In any case, $s=3/2$ is low spin and is therefore not a main concern of this paper. \subsection{Refinement of the Weinberg--Witten theorem} In \cite{Porrati:2008rm}, the author takes gauge invariance into account in order to still use the Weinberg--Witten argument, but in a context where the stress-energy tensor need not be gauge-invariant (or Lorentz-covariant, which is the same in a second-quantized setting) any more. In the original work \cite{Weinberg:1980kq} a particular matrix element was considered: elastic scattering of a spin-$s$ massless particle off a single soft graviton. The initial and final polarizations of the spin-$s$ particle are identical, say $+s\,$, its initial momentum is $p$ and its final momentum is $p+q\,$. The graviton is \emph{off-shell} with momentum $q\,$. The matrix element is \begin{eqnarray} \langle +s, \; p+q | \,T_{\mu\nu}\, |+s,\; p \rangle \quad . \label{Tmunumatrix} \end{eqnarray} In the soft limit $q\longrightarrow 0$ the matrix element is completely determined by the equivalence principle, as we recalled above when reviewing Weinberg's low energy theorem. Using the relativistic normalization for one-particle states $\langle p|p' \rangle= 2\,p_0\,(2\pi)^3\,\delta^3(\mathbf{p}-\mathbf{p'})\,$, we get \begin{eqnarray} \lim_{q\rightarrow 0} \langle +s, \; p+q | \,T_{\mu\nu}\, |+s,\; p \rangle &=& p_{\mu}\,p_{\nu}\quad . \label{EP} \end{eqnarray} This is tantamount to saying that, at low energy, the only possible coupling between gravity and everything else is done via the minimal coupling procedure, bringing no more than two derivatives (or one if the spin is half-integer) into the interaction.
More precisely, among all possible interaction terms there must always be the one coming from minimal coupling $\partial \rightarrow \partial + \kappa\,\Gamma(h)\,$, with the non-vanishing coefficient $\kappa\,$ related to Newton's constant. {Since $q$ is space-like (\emph{off-shell} soft graviton), one goes to the frame in which $q^{\mu} = (0,-\mathbf{q}) \,$, $p^{\mu} = (|\mathbf{q}|/2,\mathbf{q}/2) \,$, $p^{\mu} + q^{\mu} = (|\mathbf{q}|/2, -\mathbf{q}/2) \,$ (the massless spin-$s$ particle is on-shell), and deduces that a rotation $R(\theta)$ by an angle $\theta$ around the $\mathbf{q}$ direction acts on the one-particle states as $R(\theta)|p,+s\rangle = \exp(\pm i\,\theta s)|p,+s\rangle\,$, $R(\theta)|p+q,+s\rangle = \exp(\mp i\,\theta s)|p+q,+s\rangle\,$} since $R(\theta)$ is a rotation of $\theta$ around $\mathbf{p}$ but of $-\theta$ around $\mathbf{p}+\mathbf{q}=-\mathbf{p}\,$. Decomposing $T_{\mu\nu}$ under space rotations in terms of spherical tensors as the complex spin-zero tensor $T_{0,0}$ plus the real components $\{T_{1,m}\}_{m=-1}^{1}\,$ and $\{T_{2,m}\}_{m=-2}^{2}\,$, one can write the following relation \begin{eqnarray} e^{\pm 2i\,\theta\,s}\langle +s, \; p+q | T_{j,m} |+s,\; p \rangle &=& \langle +s, \; p+q | R^{\dagger} T_{j,m} R|+s,\; p \rangle \ = \ e^{i\,\theta\,m} \langle +s, \; p+q | T_{j,m} |+s,\; p \rangle \end{eqnarray} which admits, for $s>1\,$, the only solution $\langle +s, \; p+q | \,T_{\mu\nu}\, |+s,\; p \rangle = 0\,$. Then, \emph{if $T_{\mu\nu}$ is a tensor under Lorentz transformations}, this implies that $\langle +s, \; p+q | \,T_{\mu\nu}\, |+s,\; p \rangle = 0\,$ in all frames, in contradiction with the equivalence principle (\ref{EP}). This seems to kill gravity itself, but of course in that case, as usually happens in gauge theories, $T_{\mu\nu}$ is not a Lorentz tensor (which is the same as saying that $T_{\mu\nu}$ is not gauge-invariant).
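Spelling out the selection rule hidden in the last equality (our own interpolation): since the relation must hold for every angle $\theta$, the phases have to match,

```latex
e^{\pm 2 i \theta s} \;=\; e^{i \theta m}
\quad \forall\, \theta
\quad\Longrightarrow\quad
m \;=\; \pm 2 s \, .
% The spherical components of a rank-2 tensor satisfy |m| <= 2, whereas
% s > 1 would require |m| = 2s > 2; hence every component
% <+s, p+q| T_{j,m} |+s, p> must vanish in this frame.
```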
One \emph{can} define matrix elements for $T_{\mu\nu}$ that transform as Lorentz tensors, but only at the price of introducing non-physical, pure-gauge states. This is what the author of \cite{Porrati:2008rm} did in order to accommodate the Weinberg--Witten argument to gauge theories for spin-$s$ fields, $s>1\,$, and to prove that massless higher-spin particles cannot exist around a flat background if the tensor $T_{\mu\nu}$ appearing in $\langle +s, \; p+q | \,T_{\mu\nu}\, |+s,\; p \rangle$ is to comply with the equivalence principle (\ref{EP}). Denoting by $v$ all one-particle spin-$s$ states, whether or not spurious (pure-gauge), the matrix element under consideration is denoted $ \langle v',p+q|\,T_{\mu\nu}\, |v,p\rangle \,$. The method used in \cite{Porrati:2008rm} in order to derive the $S$-matrix is to perform the standard perturbative expansion of the effective action (where $g_{\mu\nu} = \eta_{\mu\nu}+\kappa\,h_{\mu\nu}$) \begin{eqnarray} A = \frac{1}{16\pi G}\,\int d^4x \sqrt{-g}R + \frac{1}{2}\,\int \frac{d^4q}{(2\pi)^4}\,\widetilde{h}^*_{\mu\nu}(q) \left( \langle v',p+q|\,T^{\mu\nu}\, |v,p\rangle + {\cal T}^{\mu\nu} \right)+ {\cal O}(h^2)\quad . \label{Po16} \end{eqnarray} The linear interaction terms include the matrix element and another effective tensor ${\cal T}^{\mu\nu}$ which summarizes the effect of any other matter fields, but which we will omit from now on without loss of generality.
To linear order, Einstein's equations become \begin{eqnarray} L_{\mu\nu}^{~~~\rho\sigma}\,h_{\rho\sigma}(q) &=& 16\pi G\,\langle v',p+q|\,T_{\mu\nu}\, |v,p\rangle \quad, \nonumber \\ L_{\mu\nu}^{~~~\rho\sigma} &=& \delta^{\rho}_{\mu}\delta_{\nu}^{\sigma}\,q^2 - \eta_{\mu\nu}\eta^{\rho\sigma}\,q^2 - \delta^{\rho}_{\mu}\,q_{\nu}q^{\sigma} - \delta^{\sigma}_{\nu}\,q_{\mu}q^{\rho} + \eta^{\rho\sigma}q_{\mu}q_{\nu} + \eta_{\mu\nu}q^{\rho}q^{\sigma} \end{eqnarray} which is nothing but the Fourier transform of the symmetric differential operator $\vec{\cal G}^{\rho\sigma}_{\mu\nu}$ acting on the spin-$2$ field $h_{\mu\nu}$ in the linearized (in $h_{\mu\nu}$) Einstein equations \begin{eqnarray} \vec{\cal G}^{\rho\sigma}_{\mu\nu}\;h_{\rho\sigma} = \kappa \; T_{\mu\nu}(\varphi_s,\varphi_s) + {\cal O}(\kappa^2) \end{eqnarray} where $T_{\mu\nu}(\varphi_s,\varphi_s)$ is the tensor bilinear in the spin-$s$ field $\varphi_s$ that gives the cubic $2$-$s$-$s$ vertex in the action principle \begin{eqnarray} S[h_{\mu\nu},\varphi_s] &=& S^{FP}[h_{\mu\nu}] + S^{Fr}[\varphi_{s}] + \frac{\kappa}{2}\, \int d^Dx \;h_{\mu\nu}\,T^{\mu\nu}(\varphi_s,\varphi_s) + {\cal O}(\kappa^2)\;. \end{eqnarray} To this same order in the metric fluctuation, a necessary condition is given in \cite{Porrati:2008rm} for the consistency of the gravitational interactions of high-spin massless particles: \begin{eqnarray} \langle v,p+q|\,T_{\mu\nu}\, |v_s,p\rangle &=& L_{\mu\nu}^{~~~\rho\sigma}\; \Delta_{\rho\sigma}(q) \label{Po18} \end{eqnarray} with $\Delta_{\rho\sigma}(q)$ analytic in a neighborhood of $q=0\,$. Expression (\ref{Po16}) gives Porrati the most general condition for the decoupling of the so-called spurious polarization $v_s\,$ (which we sometimes call ``pure-gauge'' states here) from the $S$-matrix amplitudes. Decoupling occurs when one can reabsorb the change in the matrix element due to the substitution $v\rightarrow v+v_s$ by a \emph{local} field redefinition of the graviton field.
In the Lagrangian language, this can be seen to originate from the requirement of gauge invariance of the cubic action $S^{(1)}:= \frac{1}{2}\,\int d^Dx \;h_{\mu\nu}T^{\mu\nu}(\varphi_s,\varphi_s)$ under linearized gauge transformations \begin{eqnarray} \delta^{(0)}h_{\mu\nu} &=& 2 \;\partial_{(\mu} \epsilon_{\nu)}\quad, \\ \delta^{(0)}\varphi_{\mu_1\ldots\mu_s} &=& s \;\partial_{(\mu_1} \epsilon_{\mu_2\ldots\mu_s)} \end{eqnarray} up to terms that vanish on the surface of the free field equations: \begin{eqnarray} \delta^{(0)}S^{(1)} + \delta^{(1)}S^{(0)} = 0 \quad \label{canoeq} \end{eqnarray} where $S^{(0)}$ denotes the free part of the action and $\delta^{(1)}$ denotes the gauge transformations taken at linear order in the fields $\{h,\varphi\}\,$. The above equation can be rewritten \begin{eqnarray} \int d^Dx\;\Big[\delta^{(0)}h_{\mu\nu} \;\frac{\delta S^{(1)}}{\delta h_{\mu\nu}} + \delta^{(0)}\varphi_{\mu_1\ldots\mu_s} \; \frac{\delta S^{(1)}}{\delta \varphi_{\mu_1\ldots\mu_s} } + \delta^{(1)}h^{\mu\nu} \;\vec{\cal G}^{\rho\sigma}_{\mu\nu} \;h_{\rho\sigma} + \delta^{(1)}\varphi_{\mu_1\ldots\mu_s} \frac{\delta S^{(0)}}{\delta \varphi_{\mu_1\ldots\mu_s}}\Big] &=& 0\quad. \nonumber \end{eqnarray} If, as is assumed in the $S$-matrix approach, one takes the spin-$s$ particle on-shell, then one sets $\frac{\delta S^{(0)}}{\delta \varphi_{\mu_1\ldots\mu_s}} $ to zero.
If, in addition, one takes the Euler--Lagrange derivative of the result with respect to the gravitational field, noting that the only structure for $\delta^{(1)}h_{\mu\nu}$ that can contribute to (\ref{canoeq}) with $S^{(1)} = \frac{1}{2}\,\int d^Dx \;h_{\mu\nu}T^{\mu\nu}(\varphi_s,\varphi_s)$ is $\delta^{(1)}h_{\mu\nu} = R_{\mu\nu}(\varphi_s,\epsilon_s)\,$, one finds \begin{eqnarray} T_{\alpha\beta}(\varphi_s,\delta^{(0)}\varphi_s) + \vec{\cal G}^{\mu\nu}_{\alpha\beta}\, R_{\mu\nu}(\varphi_s,\epsilon_s) &=& 0 \end{eqnarray} which is (up to a sign convention in front of the Fierz--Pauli action $S^{FP} = \frac{1}{2}\,\int h_{\mu\nu} \vec{\cal G}^{\mu\nu}_{\alpha\beta} h^{\alpha\beta}$) the translation of (\ref{Po18}) into the Lagrangian language. Together with the principle of equivalence (\ref{EP}), equation (\ref{Po18}) was the main assumption of the work \cite{Porrati:2008rm}. We see that this condition (\ref{Po18}) is derived from the main equation (\ref{canoeq}) in the Lagrangian formalism. Apart from the assumption of locality of $S^{(1)}$ --- which is relaxed in the $S$-matrix analysis; it would be interesting to see if this relaxation really gives new consistent solutions compared to the Lagrangian analysis --- the Lagrangian analysis of \cite{Boulanger:2006gr,Boulanger:2008tg} does not assume the equivalence principle and is otherwise based on a weaker form of Equation (\ref{Po18}). That the spin-$s$ fields are put on-shell in the $S$-matrix analysis can be viewed as an advantage (no a priori field-theoretical realization is assumed for the spin-$s$ fields). Based on just the two assumptions (\ref{EP}) and (\ref{Po18}), Porrati is able to prove that no massless high-spin particle can minimally couple to gravity in flat space, in complete accordance with the previous results of \cite{Aragone:1979hx,Berends:1979wu,Aragone:1981yn,Metsaev:2005ar,Boulanger:2006gr} and with \cite{Boulanger:2008tg}. \end{appendix} \bibliographystyle{utphys}
\section{Introduction} One of the attractive elements of Kaluza-Klein theory is that it provides a single geometric construction for the Maxwell field and its action. The price is that we need to invoke an extra dimension and, if we do not wish to have a whole new tower of massive states, we must also insist that fields are independent of this new dimension. We can also ask what the interpretation is, from the reduced spacetime point of view, of fields with a dependence on the KK coordinate. These are states charged with respect to the KK gauge field, with the charge being related to the momentum in the KK direction. These states will be massive from the reduced perspective since the momentum in the compact space will also appear as a mass. Having a massless state in the five-dimensional theory with momentum along the fifth direction will then lead to a BPS state in the reduced four-dimensional theory, as its charge will equal its mass in four-dimensional natural units. The identification of states whose mass and charge have their origin in KK momentum was crucial in the identification of the low energy effective action of M-theory. The D0-brane was simply a momentum mode along the eleventh direction \cite{Townsend:1995kk,Witten:1995ex}. Explicitly, from the eleven-dimensional supergravity perspective it was a null wave solution. From the reduced ten-dimensional perspective this could be identified with the D0-brane solution in IIA supergravity, its charge and mass originating from the eleven-dimensional momentum of the null wave \cite{Townsend:1995kk,Townsend:1997wg}.
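The BPS relation invoked above can be made explicit in one line (a standard textbook step which we interpolate here for orientation): for a massless five-dimensional wave with quantized momentum $p_5$ along a compact circle of radius $R$,

```latex
% Mostly-plus signature: p_mu p^mu = -m^2 for a 4d particle of mass m.
0 \;=\; p_M p^M \;=\; p_\mu p^\mu + p_5^2
\quad\Longrightarrow\quad
m^2 \;\equiv\; -\,p_\mu p^\mu \;=\; p_5^2 \;=\; \left(\frac{n}{R}\right)^2 ,
% and since the KK charge is q = p_5, the state saturates m = |q|:
% the hallmark of a BPS state.
```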
Double Field Theory, henceforth DFT \cite{Hull:2009mi}, and its subsequent developments in \cite{Hull:2009zb, Hohm:2010jy, Hohm:2010pp, Jeon:2010rw, Jeon:2011cn, Jeon:2011vx, Jeon:2011sq, Jeon:2012hp, Aldazabal:2011nj, Coimbra:2011nw} (see \cite{Aldazabal:2013sca, Berman:2013eva, Hohm:2013bwa} for recent reviews), may be viewed as an attempt to geometrically unify the metric and NS-NS two-form potential $B_{[2]}$ in a Kaluza-Klein type way. Amongst other reasons, the local symmetry of the NS-NS two-form means that one cannot lift this to just ordinary Riemannian geometry in higher dimensions. Instead one needs a so-called {\it{generalized geometry}}. Double field theory extends the dimensions of spacetime so that the off-diagonal components of the generalized metric---that is, the metric of the full extended space---become the NS-NS two-form potential. Then one solves the so-called {\it{strong constraint}} or {\it{section condition}}, which effectively means that one carries out a Kaluza-Klein reduction from the full extended space down to usual spacetime. The action of DFT then reduces to the ordinary supergravity action. The generalized diffeomorphisms become both the ordinary diffeomorphisms and the two-form gauge transformations. (The global aspects of this have recently been explored in \cite{Hohm:2012mf, Berman:2014jba, Cederwall:2014kxa}.) As such we can view DFT as a novel type of Kaluza-Klein theory which lifts the NS-NS sector of supergravity (i.e. metric and two-form) to a single geometric theory in higher dimensions. The extended geometry associated with the duality manifest version of M-theory \cite{Hillmann:2009ci, Hull:2007zu, Pacheco:2008ps, Coimbra:2011ky, Coimbra:2012af, Berman:2010is, Berman:2011pe, Berman:2011jh} is a further extension of this idea where the three-form potential $C_{[3]}$ and the metric are combined and lifted into a {\it{generalized metric}} for a single geometric theory with an extended number of dimensions.
Again there is a section condition \cite{Berman:2011cg, Coimbra:2011ky, Coimbra:2012af, Berman:2012vc} whose solution implies a Kaluza-Klein reduction back to ordinary spacetime. It is natural to ask what the interpretation of momenta along the extra directions is. A few moments' thought about the comparison between DFT and Kaluza-Klein theory indicates that it should correspond to a fundamental string charge. This suggests an intriguing interpretation of the fundamental string from the DFT point of view. The string will just be a null wave in doubled space with the momentum along the extended directions. The $O(d,d)$ symmetry of T-duality, which from the usual spacetime point of view exchanges winding and momentum, will now just correspond to a rotation in the doubled space. A null wave pointing along the usual spacetime will be a momentum mode, but pointing along an extended direction it will be interpreted as a fundamental string. The charge and tension of the string will just be given by the momentum. Thus from the DFT point of view there are no strings, only null waves. We will make this connection as explicit as possible. We begin by constructing a null pp-wave solution of the equations of motion of DFT and interpret it as a massless state in doubled space carrying momentum. We then show that this is the fundamental string solution \cite{Dabholkar:1990yf} when written in terms of the usual spacetime metric and two-form potential. We wish to study the dynamics of such a solution. To do so we determine the equations of motion of the Goldstone modes of this null wave solution in DFT. (Technically we follow \cite{Adawi:1998ta} very closely.) The resulting equations of motion for the Goldstone modes are the same as those of the string theory written down by Tseytlin \cite{Tseytlin90, Tseytlin91} to describe a string world-sheet in doubled space.
We then move on to exhibit the same property for the duality manifest form of M-theory (with U-duality group $SL(5)$). The wave is shown to be equivalent to the membrane. Thus again there are no fundamental extended objects, only null waves. Along the way we will need to write down the equations of motion of the duality manifest theory---something that has so far not been done. Even though the action for the manifest $SL(5)$ theory has been known for a few years now \cite{Berman:2010is}, the equations of motion are more complicated than the Euler-Lagrange equations from that action since the generalized metric is constrained to be an element of the $SL(5)/SO(5)$ coset. Implementing this constraint in the variational problem of the action then leads to a projected set of equations of motion, just as in \cite{Hohm:2010pp} for DFT. We then conjecture the general form for the projector in terms of the Y-tensor introduced in \cite{Berman:2012vc}. \subsection{Bibliography} It is beyond the scope of this paper to give a proper historical account of DFT and its development. There are three relatively recent reviews of the subject \cite{Aldazabal:2013sca, Berman:2013eva, Hohm:2013bwa}. We would like to emphasize the early work of Siegel \cite{Siegel93a, Siegel93b} and Duff \cite{Duff90a} and then the two key groups that have developed DFT, one of Hohm, Hull and Zwiebach \cite{Hull:2009mi, Hull:2009zb, Hohm:2010jy, Hohm:2010pp} and the other of Jeon, Lee and Park \cite{Jeon:2010rw, Jeon:2011cn, Jeon:2011vx, Jeon:2011sq, Jeon:2012hp}. In the duality manifest M-theory formalism there was initial work by Duff \cite{Duff90b} and then Hull \cite{Hull:2007zu} and Waldram et al. \cite{Pacheco:2008ps, Coimbra:2011ky, Coimbra:2012af} and later, Berman, Perry and collaborators \cite{Berman:2010is, Berman:2011pe, Berman:2011cg, Berman:2011jh}. Recently some key further developments in this direction are by Grana et al.
\cite{Aldazabal:2013via} and Hohm and Samtleben \cite{Hohm:2013pua, Hohm:2013vpa, Hohm:2013uia}. From one perspective, many of these developments were anticipated by the so-called $E_{11}$ programme of West and collaborators \cite{West:2001as, Englert:2003zs, West:2003fc, Kleinschmidt:2003jf, West:2004kb, West:2012qm}. As such many of the ideas present in DFT and its variants were signalled by the early work of West. In particular, the authors of this paper have been influenced by the fact that the nonlinear realization construction central to the $E_{11}$ programme has its origins in the theory of pions as Goldstone modes of the spontaneously broken chiral Lagrangian. This led to the idea that the duality invariant theory may contain massless Goldstone modes from spontaneously breaking the duality symmetry. Whether the null states identified here are such Goldstone modes is an open question. For quantum aspects of the duality manifest string see \cite{Berman:2007vi,Berman:2007xn,Berman:2007yf,Hohm:2013jaa,Betz:2014aia}. In addition, there have been a whole host of fascinating recent results, a small sample of which is \cite{Berman:2013uda, Blair:2013noa, Blair:2013gqa, Lee:2014mla, Lee:2013hma,Strickland-Constable:2013xta, Park:2014una, Cederwall:2013naa}. When studying supergravity solutions such as the pp-wave, the string, the membrane and the D0-brane, as well as reviewing concepts like T-duality, Kaluza-Klein reductions and smearing, we found the book by Ortin \cite{Ortin04} an invaluable reference. \subsection{Notation} In this paper we are dealing with several different spaces of various dimensions at the same time. Here is a brief summary of the indices and their ranges used for these spaces. We start with the spacetime of dimension $d$ with metric $g_{\mu\nu}$ and coordinates $x^\mu$ where $\mu=1,\dots,d$.
In DFT this is the normal $d$-dimensional space and for the $SL(5)$ duality invariant theory, where the dimensions are split into 4+7, these are the four dimensions the U-duality group acts on, thus $d=4$. The {\it{duals}} of the spacetime coordinates are denoted by $\tilde{x}_\mu$ or $\tilde{x}^{\bar{\mu}}$ for DFT and $y_{\mu\nu}$ for the $SL(5)$ theory. Together with the normal coordinates $x^\mu$ they form a doubled or extended space of dimension $D$ with coordinates $X^M$ and generalized metric $\HH_{MN}$ for DFT and $\MM_{MN}$ for the $SL(5)$ theory, where $M=1,\dots,D$. In DFT we have $D=2d$ and the doubled space is equipped with an $O(d,d)$ structure. In $SL(5)$ there are six wrapping coordinates $y_{\mu \nu}$, where $\mu, \nu$ are antisymmetrized and thus $D=10$. In what follows, we will see that the equations of motion will be projected using a projector denoted by ${P_{MN}}^{KL}$. This acts on a $(D\times D)$-dimensional symmetric vector space whose building blocks are ``vectors'' of the form $V_{MN}$ with $M,N$ symmetrized. The dimension of this vector space is therefore $\frac{1}{2}D(D+1)$. All the dimensions and indices are summarized in the following table.
\begin{equation} \begin{array}{|l|c|ccc|c|} \hline \mathrm{space} & \mathrm{dimension} & O(d,d) & SL(5) & SO(5,5) & \mathrm{indices} \\\hline \mathrm{spacetime} & d & d & 4 & 5 & \mu,\nu,\dots \\ \mathrm{extended\ space} & D & 2d & 10 & 16 & M,N,\dots \\ \mathrm{projector\ space} & \frac{1}{2}D(D+1) & 2d^2+d & 55 & 136 & (MN), (PQ), \dots \\ \hline \end{array} \end{equation} \subsection{Double Field Theory} \label{sec:DFTintro} In double field theory the spacetime metric $g_{\mu\nu}$, the B-field $B_{\mu\nu}$ and the dilaton $\phi$ are encoded in the generalized metric $\HH_{MN}$ and the rescaled dilaton $d$ as follows, \begin{align} \HH_{MN} &= \begin{pmatrix} g_{\mu\nu} - B_{\mu\rho}g^{\rho\sigma}B_{\sigma\nu} & B_{\mu\rho}g^{\rho\nu} \\ -g^{\mu\sigma}B_{\sigma\nu} & g^{\mu\nu} \end{pmatrix} \qquad\mathrm{and}\qquad d= \phi - \frac{1}{4}\ln g \label{eq:DFTmetric} \end{align} where $g=\det g_{\mu\nu}$ is the determinant of the spacetime metric. This generalized metric is then a metric on a $2d$-dimensional space. We introduce the usual coordinates $x^\mu$ and their duals $\tilde{x}_\mu$ which are combined into $X^M=(x^\mu, \tilde{x}_\mu)$ for the whole doubled space. This doubled space is also equipped with a globally defined $O(d,d)$ structure $\eta_{MN}$ \begin{equation} \eta_{MN} = \begin{pmatrix} 0 & {\delta_\mu}^\nu \\ {\delta^\mu}_\nu & 0 \end{pmatrix} \label{eq:eta} \end{equation} and all tensors are really $O(d,d)$ tensors in the doubled space (for a discussion of this see \cite{Berman:2014jba}).
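As a quick cross-check of the table and of the parametrization \eqref{eq:DFTmetric}, the following numerical sketch (our own; the random $g$ and $B$ are purely illustrative) verifies the projector-space dimensions $\frac{1}{2}D(D+1)$ and the $O(d,d)$ property $\HH^t\eta\HH=\eta$ that is used later in the derivation of the equations of motion:

```python
import numpy as np

# Projector-space dimension from the table: dim = D(D+1)/2
def projector_space_dim(D):
    return D * (D + 1) // 2

assert projector_space_dim(2 * 4) == 2 * 4**2 + 4   # O(d,d) with d = 4 -> 36
assert projector_space_dim(10) == 55                # SL(5)
assert projector_space_dim(16) == 136               # SO(5,5)

# Generalized metric built from a random metric g and B-field B
rng = np.random.default_rng(0)
d = 3
A = rng.normal(size=(d, d))
g = A @ A.T + d * np.eye(d)        # symmetric, positive definite
B = rng.normal(size=(d, d))
B = B - B.T                        # antisymmetric
ginv = np.linalg.inv(g)

H = np.block([[g - B @ ginv @ B, B @ ginv],
              [-ginv @ B, ginv]])
eta = np.block([[np.zeros((d, d)), np.eye(d)],
                [np.eye(d), np.zeros((d, d))]])

# H is symmetric and an O(d,d) element: H^t eta H = eta
assert np.allclose(H, H.T)
assert np.allclose(H.T @ eta @ H, eta)
print("checks passed for d =", d)
```

The last assertion is the block-matrix identity that makes the generalized metric an element of $O(d,d)/O(d)\times O(d)$ for any $g$ and $B$.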
The action may then be written in terms of a sort of generalized Ricci scalar \begin{equation} S = \int \dd^{D}X e^{-2d} R \label{eq:DFTaction} \end{equation} with the scalar $R$ given by \begin{equation} \begin{aligned} R &= \frac{1}{8}\HH^{MN}\partial_M\HH^{KL}\partial_N\HH_{KL} - \frac{1}{2}\HH^{MN}\partial_M\HH^{KL}\partial_K\HH_{NL} \\ &\quad+ 4\HH^{MN}\partial_M\partial_N d - \partial_M\partial_N\HH^{MN} -4\HH^{MN}\partial_M d \partial_N d + 4\partial_M\HH^{MN}\partial_N d \\ &\quad+ \frac{1}{2}\eta^{MN}\eta^{KL}\partial_M{\EE^A}_K\partial_N{\EE^B}_L\HH_{AB} \, . \end{aligned} \end{equation} Here $M,N$ are \emph{curved} doubled spacetime indices and $A,B$ are \emph{flat} doubled tangent space indices. They are related via the generalized vielbeins ${\EE^A}_M$ such that \begin{equation} \HH_{MN} = {\EE^A}_M{\EE^B}_N \eta_{AB} \, . \end{equation} In addition to this there is the so-called section condition or strong constraint. This restricts the coordinate dependence of the fields. The constraint may be written as \begin{equation} \eta^{MN}\partial_M \bullet \partial_N \bullet =0 \label{eq:sectioncond} \end{equation} for any field in the theory. A simple consequence is that one may choose to have dependence on the usual coordinates alone. Different choices of how one solves the section condition give rise to different duality related theories. So at the cost of breaking the $O(d,d)$ symmetry we may choose \begin{equation} \partial_{\bar{\mu}} \bullet =0 \, . \label{eq:sectioncondKK} \end{equation} This is like a simple Kaluza-Klein reduction and we will find it useful in what follows to take this perspective. Imposing the condition \eqref{eq:sectioncondKK} on the action \eqref{eq:DFTaction} produces the NS-NS sector of supergravity. (There is also a boundary term contribution that will not play a role in what follows \cite{Berman:2011kg}.)
Thus at a rather simplistic level, the DFT action is like a Kaluza-Klein lift of the NS-NS sector of supergravity. Note that the last line in $R$, containing the vielbeins, was originally not present in the literature; indeed, it vanishes when one imposes the section condition. It is, however, crucial when one considers the Scherk-Schwarz reductions of the theory \cite{Berman:2012uy, Berman:2013cli, Grana:2012rr, Geissbuhler:2011mx, Aldazabal:2013mya}. (We will not consider such Scherk-Schwarz reductions in this paper.) The equation of motion for the dilaton is easily obtained by varying the action \begin{equation} \delta S = \int \dd^{D}X e^{-2d} (-2R)\delta d \end{equation} which has to vanish for any variation $\delta d$ and thus gives \begin{equation} R=0 \label{eq:DFTeomDilaton}. \end{equation} (Note that $\delta R/\delta d=0$ up to total derivatives.) To find the equation of motion for the generalized metric we have to be a bit more careful. Varying the action with respect to the generalized metric gives \begin{equation} \delta S = \int \dd^{D}X e^{-2d} K_{MN}\delta \HH^{MN} \label{eq:DFTvaraction} \end{equation} where $K_{MN}$ is given by \begin{equation} \begin{aligned} K_{MN} &= \frac{1}{8}\partial_M\HH^{KL}\partial_N\HH_{KL} + 2\partial_M\partial_N d \\ &\quad +(\partial_L-2\partial_L d) \left[\HH^{KL}\left(\partial_{(M}\HH_{N)K} - \frac{1}{4}\partial_K\HH_{MN}\right)\right] \\ &\quad + \frac{1}{4}\left(\HH^{KL}\HH^{PQ}-2\HH^{KQ}\HH^{LP}\right) \partial_K\HH_{MP}\partial_L\HH_{NQ} \\ &\quad - \eta^{KL}\eta^{PQ}\left(\partial_K d\partial_L{\EE^A}_P - \frac{1}{2}\partial_K\partial_L{\EE^A}_P\right)\HH_{(N|R}{\EE^R}_A\HH_{|M)Q} \, . \end{aligned} \end{equation} The last term uses the variation of the vielbein with respect to the metric \begin{equation} \delta{\EE^A}_M = \frac{1}{2}\HH^{AB}{\EE^N}_B\delta\HH_{MN} \, .
\end{equation} The expression in \eqref{eq:DFTvaraction} does not have to vanish for any $\delta\HH^{MN}$ since the generalized metric is constrained to parametrize the coset space $O(d,d)/O(d)\times O(d)$. This means the generalized metric can be parametrized by $g_{\mu\nu}$ and $B_{\mu\nu}$ as written in \eqref{eq:DFTmetric}. Thus deriving the equations of motion is a little more complicated. This was first done in \cite{Hohm:2010pp}. We will rederive the equations of motion here using a slightly different method because this method will be more readily applicable to the cases of extended geometry with the exceptional groups that we discuss later. The basic idea is that rather than varying with respect to the generalized metric one varies with respect to the spacetime metric and the B-field and then makes the result $O(d,d)$ covariant. By applying the chain rule, the action can be varied with respect to $g_{\mu\nu}$ and $B_{\mu\nu}$ separately. Making use of \begin{align} \frac{\delta g_{\mu\nu}}{\delta g_{\rho\sigma}} &= {\delta_\mu}^{(\rho}{\delta_\nu}^{\sigma)}, & \frac{\delta g^{\mu\nu}}{\delta g_{\rho\sigma}} &= -g^{\mu(\rho}g^{\sigma)\nu}, & \frac{\delta B_{\mu\nu}}{\delta B_{\rho\sigma}} &= {\delta_\mu}^{[\rho}{\delta_\nu}^{\sigma]} \end{align} leads to \begin{align} \delta S &= \int \dd^{D}X e^{-2d} K_{MN}\left[ \frac{\delta \HH^{MN}}{\delta g_{\rho\sigma}}\delta g_{\rho\sigma} + \frac{\delta \HH^{MN}}{\delta B_{\rho\sigma}}\delta B_{\rho\sigma}\right] \\ &= \int \dd^{D}X e^{-2d} \left\{ \left[- K_{\mu\nu}g^{\mu(\rho}g^{\sigma)\nu} + 2{K_\mu}^\nu g^{\mu(\rho}g^{\sigma)\tau}B_{\tau\nu} \vphantom{\left({\delta_\mu}^{(\rho}\right)}\right.\right. \notag\\ & \left.\left. \hspace{4cm} + K^{\mu\nu}\left({\delta_\mu}^{(\rho}{\delta_\nu}^{\sigma)} + B_{\mu\tau}g^{\tau(\rho}g^{\sigma)\lambda}B_{\lambda\nu} \right)\right]\delta g_{\rho\sigma} \right. \\ & \left.
\hspace{3.5cm} + \left[- 2{K_\mu}^\nu g^{\mu\tau}{\delta_\tau}^{[\rho}{\delta_\nu}^{\sigma]} - 2K^{\mu\nu}B_{\mu\tau}g^{\tau\lambda} {\delta_\lambda}^{[\rho}{\delta_\nu}^{\sigma]} \right]\delta B_{\rho\sigma} \right\}\, .\notag \end{align} Now the $g$'s and $B$'s are re-expressed in terms of $\HH$, the symmetrizing brackets are dropped and the antisymmetrizing ones are expanded \begin{align} \delta S &= \int \dd^{D}X e^{-2d} \left\{\vphantom{\frac{1}{2}} \left[- K_{\mu\nu}\HH^{\mu\rho}\HH^{\sigma\nu} + 2{K_\mu}^\nu \HH^{\mu\rho}{\HH^{\sigma}}_\nu + K^{\mu\nu}\left({\delta_\mu}^{\rho}{\delta_\nu}^{\sigma} - {\HH_\mu}^{\rho}{\HH^{\sigma}}_\nu\right)\right]\delta g_{\rho\sigma}\right. \notag\\ & \left. \hspace{3.5cm} -2 \left[{K_\mu}^\nu \HH^{\mu\tau} + K^{\mu\nu}{\HH_\mu}^\tau\right] \frac{1}{2}\left({\delta_\tau}^{\rho}{\delta_\nu}^{\sigma} - {\delta_\tau}^{\sigma}{\delta_\nu}^{\rho}\right) \delta B_{\rho\sigma} \right\}. \end{align} The crucial step is to then re-covariantize the indices by using $\eta_{MN}$ given in \eqref{eq:eta} \begin{equation} \begin{aligned} \delta S &= \int \dd^{D}X e^{-2d} \left\{ K_{KL} \left(\eta^{K\rho}\eta^{\sigma L} - \HH^{K\rho}\HH^{\sigma L}\right) \delta g_{\rho\sigma}\right. \\ & \left. \hspace{3cm} - K_{KL}\left(\HH^{KP}\eta_{PM}\eta^{LN} - \HH^{KP}{\delta_P}^N{\delta_M}^L\right) \eta^{M\rho}{\delta^\sigma}_N\delta B_{\rho\sigma} \right\} \end{aligned} \end{equation} which reproduces the previous line once the doubled indices are expanded and summed over. In a final step the terms inside the brackets are brought into a form corresponding to a projected set of equations as follows \begin{align} \delta S &= \int \dd^{D}X e^{-2d} \left\{ K_{KL} \left({\delta_M}^K{\delta_N}^L - \HH^{KP}\eta_{PM}\eta_{NQ}\HH^{QL}\right) \eta^{M\rho}\eta^{\sigma N}\delta g_{\rho\sigma}\right. \notag\\ & \left. 
\hspace{3cm} - K_{KL}\left(\HH^{KP}\eta_{PM}\eta^{LQ}\HH_{QR} - \HH^{KP}{\delta_P}^Q{\delta_M}^L\HH_{QR}\right) \HH^{RN}\eta^{M\rho}{\delta^\sigma}_N\delta B_{\rho\sigma} \right\} \notag\\ &= \int \dd^{D}X e^{-2d} 2{P_{MN}}^{KL}K_{KL} \left(\eta^{M\rho}\eta^{\sigma N}\delta g_{\rho\sigma} + \eta^{M\rho}\HH^{\sigma N}\delta B_{\rho\sigma}\right) \end{align} where we have introduced the projector \begin{equation} {P_{MN}}^{KL} = \frac{1}{2}({\delta_M}^{(K}{\delta_N}^{L)} - \HH_{MP}\eta^{P(K}\eta_{NQ}\HH^{L)Q}) \end{equation} which is symmetric in both $MN$ and $KL$. The variation of the action has to vanish for \emph{any} $\delta g_{\mu\nu}$ and $\delta B_{\mu\nu}$ independently, therefore the equations of motion are given by \begin{equation} {P_{MN}}^{KL}K_{KL} = 0 \label{eq:DFTeom} \end{equation} and not $K_{MN}=0$, the naive equations expected from setting \eqref{eq:DFTvaraction} to zero. This equation of motion was derived in a slightly different way in \cite{Hohm:2010pp} by using the constraint equation $\HH^t\eta\HH=\eta$ which ensures $\HH$ is an element of $O(d,d)$. The result is \begin{equation} \frac{1}{2}(K_{MN} - \eta_{MK}\HH^{KP}K_{PQ}\HH^{QL}\eta_{LN}) = {P_{MN}}^{KL}K_{KL} = 0 \end{equation} in agreement with ours. We wish to emphasize that the point of rederiving these equations is just so that we can use this method in the exceptional case later. Also note that the expression for $K_{MN}$ found in the literature, especially in \cite{Hohm:2010pp}, differs from the one given here. This difference arises as one can use either the invariant $O(d,d)$ metric $\eta$ or the generalized metric $\HH$ to raise and lower indices in the derivation of $K_{MN}$. Both methods are valid and the discrepancy disappears once the projector acts. In a way, the projector enforces the constraint that $\HH$ parametrizes a coset space. When using $\eta$, this constraint is taken into account automatically, but when using $\HH$ the constraint needs to be imposed by the projector.
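One can check directly that $P$ is indeed a projector. Defining ${S_M}^N = \HH_{MK}\eta^{KN}$, the constraint $\HH_{MK}\eta^{KL}\HH_{LN}=\eta_{MN}$ implies ${S_M}^K{S_K}^N={\delta_M}^N$ as well as $\eta_{NQ}\HH^{QL}={S_N}^L$, so that the projector can be written as \begin{equation} {P_{MN}}^{KL} = \frac{1}{2}\left({\delta_M}^{(K}{\delta_N}^{L)} - {S_M}^{(K}{S_N}^{L)}\right) \, . \end{equation} Schematically $P=\frac{1}{2}(\mathbb{1}\otimes\mathbb{1}-S\otimes S)$ acting on symmetric tensors, and since $S^2=\mathbb{1}$ \begin{equation} P^2 = \frac{1}{4}\left(\mathbb{1}\otimes\mathbb{1} - 2\,S\otimes S + S^2\otimes S^2\right) = \frac{1}{2}\left(\mathbb{1}\otimes\mathbb{1} - S\otimes S\right) = P \, . \end{equation}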
Since $K_{MN}$ appears in the equations of motion only with the projector acting on it, it does not matter which version is used. The importance of the projector can be seen by counting degrees of freedom. The symmetric spacetime metric has $\frac{1}{2}d(d+1)$ degrees of freedom and the antisymmetric B-field contributes $\frac{1}{2}d(d-1)$ for a total of $d^2$ independent components. The dimension of the doubled space is $D=2d$, therefore $K_{MN}$ has $2d^2+d$ components. Of these, $d^2+d$ are in the kernel of the projector and are therefore eliminated, leaving $d^2$ degrees of freedom as desired. This can be shown by computing the characteristic polynomial and all the eigenvalues of the projector $P$. \section{The String as a Wave} Now we are equipped with the equations of motion of DFT and so we move on to describe a solution of these equations and subsequently examine its Goldstone modes. \subsection{Wave Solution or Fundamental String in DFT} \label{sec:F1string} We seek a solution for the generalized metric corresponding to a null wave whose momentum points in the $\tilde{z}$ direction. The ansatz will be that of a pp-wave in usual general relativity \cite{Aichelburg:1970dh}. There is no a priori guarantee that this is a solution of DFT: as we have seen, the equations of motion of the generalized metric in DFT are certainly not the same as the equations of motion of the metric in relativity. Let us immediately remove any source of confusion the reader may have: the pp-wave as a solution for $g_{\mu\nu}$ may of course, by construction, be embedded as a solution in DFT by simply inserting the pp-wave solution for $g_{\mu \nu}$ into $\HH_{MN}$. Here we will consider a pp-wave (that is the usual pp-wave ansatz \cite{Aichelburg:1970dh}) not for $g_{\mu\nu}$ but for the doubled metric $\HH_{MN}$ itself and then determine its interpretation in terms of the usual metric $g_{\mu\nu}$ and two-form $B_{\mu\nu}$.
The following is a solution to DFT in $2d$ dimensions given by the generalized metric $\HH_{MN}$ with line element \begin{equation} \begin{aligned} \dd s^2 &= \HH_{MN}\dd X^M \dd X^N \\ &= (H-2)\left[\dd t^2 - \dd z^2\right] + \delta_{mn}\dd y^m\dd y^n \\ &\quad + 2(H-1)\left[\dd t\dd\tilde{z} + \dd\tilde{t}\dd z\right] \\ &\quad - H\left[\dd\tilde{t}^2 - \dd\tilde{z}^2\right] + \delta^{mn}\dd\tilde{y}_m\dd\tilde{y}_n \label{eq:DFTppwave} \end{aligned} \end{equation} where the generalized coordinates are split as \begin{equation} X^M = (x^\mu,\tilde{x}_\mu)=(t,z,y^m;\tilde{t},\tilde{z},\tilde{y}_m) \end{equation} and a tilde denotes a dual coordinate as explained above. This generalized metric and rescaled dilaton $d=const.$ solve the equations of motion of DFT derived in Section \ref{sec:DFTintro}. Appendix \ref{sec:DFTcheck} provides the details demonstrating that it is indeed a solution. Since it takes exactly the same form as the usual pp-wave solution, the natural interpretation is of a pp-wave in the doubled geometry. One therefore imagines that it propagates, and hence carries momentum, in the $\tilde{z}$ direction. It is worth pausing here. To determine whether it truly carries momentum would require the construction of conserved charges in DFT. This has not yet been done. It would be useful to consider objects like generalized Komar integrals and the other ways one defines charges in general relativity, but now for DFT. Nevertheless, we shall proceed with the interpretation of this solution as a pp-wave carrying momentum in the dual $\tilde{z}$ direction. $H$ is taken to be a harmonic function of the usual transverse coordinates\footnote{The range of the transverse index is $m=1,\dots,d-2$.} $y^m$ (but not of their duals $\tilde{y}_m$) and as such is annihilated by the Laplacian operator in these directions, i.e. $\delta^{mn}\partial_m\partial_n H=0$.
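For the explicit harmonic function $H=1+h\,r^{-(d-4)}$ with $r^2=\delta_{mn}y^my^n$ given below, this is easily verified: using $\partial_n r = y_n/r$ one finds \begin{equation} \partial_m H = -(d-4)\,h\,y_m\,r^{-(d-2)} \, , \qquad \delta^{mn}\partial_m\partial_n H = -(d-4)\,h\left[(d-2)-(d-2)\right]r^{-(d-2)} = 0 \end{equation} away from $r=0$, where the first $(d-2)$ comes from the trace $\delta^{mn}\delta_{mn}$ over the $d-2$ transverse directions and the second from differentiating $r^{-(d-2)}$.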
In DFT language, it is required (at least naively) that $H$ satisfies the section condition and so, to solve the section condition, it is not a function of any of the dual coordinates. The fact that the harmonic function $H$ is taken to only depend on $y^m$ and not the dual transverse directions implies that the wave solution is {\it{smeared}} in these $\tilde{y}_m$ directions. One can think of it as a plane wave front extending along the dual directions described by coordinates $\tilde{y}_m$ but with momentum in the $\tilde{z}$ direction. An explicit form of $H$ is \begin{equation} H = 1 + \frac{h}{r^{d-4}} \qquad \mathrm{for} \qquad r^2 = y^my^n\delta_{mn} \end{equation} where $h$ is a constant and $r$ is the radial coordinate of the transverse space. We will now use the form of the doubled metric $\HH_{MN}$ in terms of $g_{\mu\nu}$ and $B_{\mu\nu}$ to rewrite this solution in terms of $d$-dimensional quantities, effectively reducing the dual dimensions. This is analogous to Kaluza-Klein theory, where a solution of the full theory is written in terms of the reduced metric and vector potential \begin{align} \dd s^2 &= (g_{\mu\nu} - B_{\mu\rho}g^{\rho\sigma}B_{\sigma\nu})\dd x^\mu \dd x^\nu + 2B_{\mu\rho}g^{\rho\nu}\dd x^\mu \dd\tilde{x}_\nu + g^{\mu\nu}\dd\tilde{x}_\mu\dd\tilde{x}_\nu \, . \label{eq:KKforDFT} \end{align} By comparing \eqref{eq:KKforDFT} with \eqref{eq:DFTppwave}, the fields of the reduced theory with coordinates $x^\mu=(t,z,y^m)$ can be computed. We find the metric and its inverse to be \begin{equation} g_{\mu\nu} = \mathrm{diag} (-H^{-1}, H^{-1}, \delta_{mn}) \qquad\mathrm{and}\qquad g^{\mu\nu} = \mathrm{diag} (-H, H, \delta^{mn}) \end{equation} whereas the only non-zero component of the B-field is given by \begin{equation} B_{tz} = -B_{zt} = -(H^{-1}-1) \, .
\end{equation} From the definition $e^{-2d} = \sqrt{g}e^{-2\phi}$ of the rescaled dilaton $d$ (which is a constant here) it follows that the dilaton $\phi$ is given by ($\phi_0$ is another constant) \begin{equation} e^{-2\phi} = H e^{-2\phi_0} \qquad \mathrm{or} \qquad e^{-2(\phi-\phi_0)} = H \end{equation} since $g=-H^{-2}$. The corresponding line element is \begin{equation} \dd s^2 = -H^{-1}(\dd t^2-\dd z^2)+\delta_{mn}\dd y^m\dd y^n \label{eq:string} \end{equation} which together with the B-field and the dilaton $\phi$ gives the fundamental string solution extended along the $z$ direction \cite{Dabholkar:1990yf}. We have thus shown that the solution \eqref{eq:DFTppwave} which carries momentum in the $\tilde{z}$ direction in the doubled space corresponds to the string along the $z$ direction from a reduced point of view. This follows the logic of usual Kaluza-Klein theory. In the doubled formalism the solution is a massless wave with $P_MP_N\HH^{MN}=0$ (where the $P^M$ are some generalized momenta), but from the reduced, ordinary spacetime point of view the string has a tension $T$ and charge $q$ which are given by the momenta in the dual directions, with a resulting BPS equation \begin{equation} T = |q| \, . \end{equation} Of course this is no surprise from the point of view of T-duality. Momentum and string winding exchange under T-duality. It is precisely as expected that momentum in the dual direction corresponds to a string. What is more surprising is when one views this from the true DFT perspective. There are null wave solutions that can point in any direction. When we analyze these null waves from the reduced theory we see them as fundamental strings or as usual pp-waves. It is a simple $O(d,d)$ rotation of the direction of propagation that takes one solution into the other. This is duality from the DFT perspective.
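This rotation can be made explicit. In the coordinate basis $X^M=(x^\mu,\tilde{x}_\mu)$, T-duality along $z$ is the $O(d,d)$ element \begin{equation} h = \begin{pmatrix} \mathbb{1}-e & e \\ e & \mathbb{1}-e \end{pmatrix} \, , \qquad e = \mathrm{diag}(0,1,0,\dots,0) \end{equation} with the single non-zero entry in the $z$ direction, acting as $X\rightarrow hX$ and $\HH\rightarrow h^t\HH h$, which simply exchanges $z\leftrightarrow\tilde{z}$. Applied to \eqref{eq:DFTppwave} it produces the same wave ansatz but with momentum in the $z$ direction; reducing the rotated solution via \eqref{eq:KKforDFT} gives a vanishing B-field, a constant dilaton and the line element \begin{equation} \dd s^2 = -\dd t^2 + \dd z^2 + (H-1)(\dd t + \dd z)^2 + \delta_{mn}\dd y^m \dd y^n \end{equation} which is the ordinary pp-wave of \cite{Aichelburg:1970dh}.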
\subsection{Goldstone Modes of the Wave Solution} \label{sec:DFTgoldstones} In the previous section we presented a solution to the equations of motion of DFT which reduces to the fundamental string. It will be interesting to analyse the Goldstone modes of this solution in double field theory. Especially since the advent of M-theory, it was understood that branes are dynamical objects and that when one finds a solution of the low energy effective action one can learn about the theory by examining the dynamics of the Goldstone modes. For D-branes in string theory this was done in \cite{Adawi:1998ta} and for the membrane and fivebrane in M-theory, where such an analysis was really the only way of describing brane dynamics, this was done in \cite{Kaplan:1995cp, Adawi:1998ta}. We will follow the excellent exposition and the method described in \cite{Adawi:1998ta} as closely as possible. In DFT, the diffeomorphisms and gauge transformations are combined into generalized diffeomorphisms generated by a generalized Lie derivative. We will consider small variations in the generalized metric, $h_{MN}$ and the dilaton, $\lambda$ generated by such transformations as follows \begin{align} h_{MN} &= \delta_\xi \HH_{MN} = \LL_\xi \HH_{MN} \, , & \lambda &= \delta_\xi d = \LL_\xi d \, . \end{align} For all the duality invariant geometries including DFT, the generalized Lie derivative of the metric \cite{Hull:2009zb} is given by the ordinary Lie derivative plus a correction in terms of the so called Y-tensor \begin{equation} \begin{aligned} \LL_\xi \HH_{MN} &= L_\xi \HH_{MN} - {Y^{LP}}_{MQ}\partial_P\xi^Q\HH_{LN} - {Y^{LP}}_{NQ}\partial_P\xi^Q\HH_{ML} \\ &= \xi^L\partial_L \HH_{MN} + 2\HH_{L(M}\partial_{N)}\xi^L - 2{Y^{LP}}_{Q(M}\HH_{N)L}\partial_P\xi^Q \, . \end{aligned} \label{eq:genLieMetric} \end{equation} The Y-tensor \cite{Berman:2012vc} encodes a great deal about the geometry. 
For DFT, the Y-tensor is simply given in terms of the $O(d,d)$ metric \begin{equation} {Y^{MN}}_{KL}=\eta^{MN} \eta_{KL} \, . \end{equation} If the metric $\HH_{MN}$ and the transformation parameter $\xi^{M}=(\xi^\mu,\tilde{\xi}_\mu)$ both satisfy the section condition, then the vector part $\xi^\mu$ generates a coordinate transformation while the one-form part $\tilde{\xi}_\mu$ gives a gauge transformation of the B-field. The generalized Lie derivative of the dilaton contains just the transport term plus a term for $d$ being a tensor density \begin{align} \LL_\xi d &= \xi^M\partial_M d - \frac{1}{2}\partial_{M}\xi^M \, . \label{eq:genLieDilaton} \end{align} The wave solutions are extended objects and therefore sweep out a worldvolume in space. This is spanned by the coordinates $\{t,z\}$. All remaining coordinates are treated as transverse in the extended space. The solution clearly breaks translation symmetry and so one naturally expects scalar zero-modes. One immediate puzzle would be to ask about the number of degrees of freedom of the Goldstone modes. Given that the space is now doubled, one would naively imagine that any solution which may be interpreted as a string would have $2d-2$ degrees of freedom rather than the expected $d-2$. We will answer this question and show how the Goldstone modes have the correct number of degrees of freedom despite the solution living in a $2d$ dimensional space. The projected form of the equations of motion is crucial in making this work out. To carry out the analysis it will be useful to split up the space into parts longitudinal and transverse to the string.
One collects the worldvolume coordinates $t$ and $z$ into $x^a$ and similarly for their duals\footnote{In what follows we will use the alternative notation $\tilde{x}^{\bar{\mu}}$ for the dual coordinates to avoid confusion between inverse and dual parts of the metric.} $\tilde{x}^{\bar{a}} = (\tilde{t},\tilde{z})$ such that the generalized coordinates are $X^M=(x^a,y^m,\tilde{x}^{\bar{a}},\tilde{y}^{\bar{m}})$. This allows the non-zero components of the metric and its inverse to be written as \begin{equation} \begin{aligned} \HH_{ab} &= (2-H)\mathbb{I}_{ab} & \HH^{ab} &= H\mathbb{I}^{ab} \\ \HH_{{\bar{a}}{\bar{b}}} &= H\mathbb{I}_{{\bar{a}}{\bar{b}}} & \HH^{{\bar{a}}{\bar{b}}} &= (2-H)\mathbb{I}^{{\bar{a}}{\bar{b}}}\\ \HH_{a{\bar{b}}} &= \HH_{{\bar{b}} a} = (H-1)\mathbb{J}_{a{\bar{b}}} & \HH^{a{\bar{b}}} &= \HH^{{\bar{b}} a} = (H-1)\mathbb{J}^{a{\bar{b}}} \\ \HH_{mn} &= \delta_{mn}, \quad \HH_{{\bar{m}}{\bar{n}}} = \delta_{{\bar{m}}{\bar{n}}} & \HH^{mn} &= \delta^{mn}, \quad\HH^{{\bar{m}}{\bar{n}}} = \delta^{{\bar{m}}{\bar{n}}} \end{aligned} \end{equation} where the constant symmetric $2\times 2$ matrices $\mathbb{I}$ and $\mathbb{J}$ are defined as \begin{equation} \mathbb{I} = \begin{pmatrix} -1 & 0 \\ 0 & 1 \end{pmatrix} \qquad\mathrm{and}\qquad \mathbb{J} = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \, . \label{eq:IJmatrix} \end{equation} For later use also define their (antisymmetric) product \begin{equation} \mathbb{K} = \mathbb{I} \cdot \mathbb{J} = - \mathbb{J}\cdot\mathbb{I} = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix} \, . \label{eq:Kmatrix} \end{equation} Following \cite{Adawi:1998ta}, we now pick a transformation parameter $\xi^M$ with non-zero components only in the transverse directions, but with no transformation along the worldvolume directions (and the directions dual to the worldvolume). 
This transformation may then be described by the DFT vector field \begin{equation} \xi^M = (0,H^\alpha\hphi^m, 0, H^\beta\hat{\tilde{\phi}}^{\bar{m}}) \end{equation} where $\hphi^m$ and $\hat{\tilde{\phi}}^{\bar{m}}$ are the constant vectors that later will become the Goldstone modes once we allow them to have dependence on the worldvolume coordinates, $H$ is the harmonic function given above and $\alpha,\beta$ are constants that are to be determined by demanding that the Goldstone modes become normalisable. Using \begin{equation} h_{MN} = \xi^L\partial_L \HH_{MN} + 2\HH_{L(M}\partial_{N)}\xi^L - 2\eta^{LP}\eta_{Q(M}\HH_{N)L}\partial_P\xi^Q \end{equation} we can compute the components of $h_{MN}$ in terms of $\hphi^m,\hat{\tilde{\phi}}^{\bar{m}}$. Recall that both the metric and the transformation parameter only depend on $y$ through the harmonic function $H$. Therefore $\partial_m$ is the only derivative that gives a non-zero contribution. We find \begin{equation} \begin{aligned} h_{ab} &= -\hphi^m (H^\alpha\partial_m H) \mathbb{I}_{ab} & h_{mn} &= 2\hphi^q\delta_{q(m}{\delta_{n)}}^p\partial_pH^\alpha \\ h_{{\bar{a}}{\bar{b}}} &= \hphi^m (H^\alpha\partial_m H) \mathbb{I}_{{\bar{a}}{\bar{b}}} & h_{{\bar{m}}{\bar{n}}} &= -2\hphi^q\delta_{q({\bar{m}}}{\delta_{{\bar{n}})}}^p\partial_p H^\alpha \\ h_{a{\bar{b}}} &= h_{{\bar{b}} a} = \hphi^m (H^\alpha\partial_m H) \mathbb{J}_{a{\bar{b}}} & h_{m{\bar{n}}} &= h_{{\bar{n}} m} = -2\hat{\tilde{\phi}}^{\bar{q}}\delta_{{\bar{q}}[m}{\delta_{{\bar{n}}]}}^p\partial_p H^\beta \end{aligned} \label{eq:h} \end{equation} and all terms with indices mixing $a,{\bar{a}}$ with $m,{\bar{m}}$ vanish. For the dilaton there is no contribution from the transport term as $d$ is a constant for our solution. This leaves the density term which gives \begin{equation} \lambda = - \frac{1}{2}\hphi^m\partial_{m} H^\alpha \, .
\label{eq:lambda} \end{equation} Once we have these equations, the next step is to allow the moduli to have dependence on the worldvolume coordinates, \begin{equation} \hphi^m\rightarrow\phi^m(x) \, , \qquad \hat{\tilde{\phi}}^{\bar{m}}\rightarrow \tilde{\phi}^{\bar{m}}(x) \label{eq:zeromodes} \end{equation} and the hats are removed. These are the zero-modes. We now determine their equations of motion by inserting \eqref{eq:zeromodes} into \eqref{eq:h} and \eqref{eq:lambda} and then subsequently into the equations of motion for DFT, \eqref{eq:DFTeomDilaton} and \eqref{eq:DFTeom}. As usual we keep only terms with two derivatives and first order in $h_{MN}$ and $\lambda$ themselves. (It would certainly be interesting to move beyond this expansion and compare with a Nambu-Goto type action but we will not do so here.) This gives \begin{align} K_{MN} &= \HH^{LK}\partial_L\partial_{(M} h_{N)K} - \frac{1}{4}\HH^{LK}\partial_L\partial_K h_{MN} + 2\partial_M\partial_N \lambda \\ R &= 4 \HH^{MN}\partial_M\partial_N \lambda - \partial_M\partial_N h^{MN}. \end{align} For convenience we will define $\Box = H\mathbb{I}^{ab}\partial_a\partial_b$ and $\Delta = \delta^{kl}\partial_k\partial_l$. 
Inserting $h_{MN}$ from \eqref{eq:h}, we find \begin{equation} \begin{aligned} K_{ab} &= -(1+\alpha H^{-1})\partial_a\partial_b\phi^m (H^\alpha\partial_m H) + \frac{1}{4}\mathbb{I}_{ab}\Box\phi^m(H^\alpha\partial_m H) \\ K_{{\bar{a}}{\bar{b}}} &= -\frac{1}{4}\mathbb{I}_{{\bar{a}}{\bar{b}}}\Box\phi^m(H^\alpha\partial_m H) \\ K_{a{\bar{b}}} &= K_{{\bar{b}} a} = \frac{1}{2}{\mathbb{K}^c}_{\bar{b}}\partial_c\partial_a\phi^m (H^\alpha\partial_m H) - \frac{1}{4}\mathbb{J}_{a{\bar{b}}}\Box\phi^m(H^\alpha\partial_m H) \\ K_{mn} &= - \frac{\alpha}{2}\Box\phi^p\delta_{p(m}{\delta_{n)}}^q(H^\alpha\partial_qH) \\ K_{{\bar{m}}{\bar{n}}} &= \frac{\alpha}{2}\Box\phi^p\delta_{p({\bar{m}}}{\delta_{{\bar{n}})}}^q (H^\alpha\partial_qH) \\ K_{m{\bar{n}}} &= K_{{\bar{n}} m} = \frac{\beta}{2}\Box\tilde{\phi}^{\bar{p}} \delta_{{\bar{p}}[m}{\delta_{{\bar{n}}]}}^q(H^\beta\partial_q H) \\ K_{am} &= K_{ma} = \frac{1}{2}\partial_a\phi^n\left[ \delta_{mn}\Delta H^\alpha - \partial_m\partial_n H^\alpha - \partial_m(H^\alpha\partial_n H)\right] \\ K_{{\bar{a}} m} &= K_{m{\bar{a}}} = \frac{1}{2}{\mathbb{K}^b}_{\bar{a}}\partial_b\phi^n \partial_m(H^\alpha\partial_n H) \\ K_{a{\bar{m}}} &= K_{{\bar{m}} a} = \frac{1}{2}\partial_a\tilde{\phi}^{\bar{n}}{\delta_{\bar{n}}}^k{\delta_{\bar{m}}}^l \left[\delta_{kl}\Delta H^\beta - \partial_k\partial_l H^\beta \right]\\ K_{{\bar{a}}{\bar{m}}} &= K_{{\bar{m}}{\bar{a}}} = 0 \end{aligned} \end{equation} where $\mathbb{K}$ was defined in \eqref{eq:Kmatrix}. Further, inserting $\lambda$ from \eqref{eq:lambda} gives the dilaton equation \begin{equation} R = -H^{-1}(2\alpha + 1) \Box\phi^m (H^\alpha\partial_m H) = 0. \end{equation} It is straightforward to see that the dilaton equation is solved by $\Box\phi = 0$. For the other equations we have to work a bit harder.
The full equations of motion for the generalized metric are the projected equations \eqref{eq:DFTeom} which contain $d^2$ linearly independent equations \begin{equation} \begin{aligned} K_{mn} &= {\delta_m}^{\bar{k}} {\delta_n}^{\bar{l}} K_{{\bar{k}}{\bar{l}}} \\ K_{m{\bar{n}}} &= {\delta_m}^{\bar{k}} {\delta_{\bar{n}}}^l K_{{\bar{k}} l} \end{aligned} \label{eq:DFTblock1} \end{equation} \begin{equation} \begin{aligned} K_{mt} &= (H-1){\delta_m}^{\bar{n}} K_{{\bar{n}} z} - (2-H){\delta_m}^{\bar{n}} K_{{\bar{n}}{\bar{t}}} \\ K_{mz} &= (H-1){\delta_m}^{\bar{n}} K_{{\bar{n}} t} + (2-H){\delta_m}^{\bar{n}} K_{{\bar{n}}{\bar{z}}} \\ K_{m{\bar{t}}} &= (H-1){\delta_m}^{\bar{n}} K_{{\bar{n}}{\bar{z}}} - H{\delta_m}^{\bar{n}} K_{{\bar{n}} t} \\ K_{m{\bar{z}}} &= (H-1){\delta_m}^{\bar{n}} K_{{\bar{n}}{\bar{t}}} + H{\delta_m}^{\bar{n}} K_{{\bar{n}} z} \end{aligned} \label{eq:DFTblock2} \end{equation} \begin{equation} \begin{aligned} 0 &= (H-1)(K_{{\bar{t}}\bt} - K_{{\bar{z}}\bz}) + H(K_{t{\bar{z}}} + K_{z{\bar{t}}}) \\ 0 &= (H-1)(K_{tt} - K_{zz}) + (2-H)(K_{t{\bar{z}}} + K_{z{\bar{t}}}) \\ 0 &= (H-1)(K_{t{\bar{z}}} - K_{z{\bar{t}}}) - HK_{zz} + (2-H)K_{{\bar{z}}\bz} \\ 0 &= (H-1)(K_{t{\bar{t}}} - K_{z{\bar{z}}}) + HK_{tz} + (2-H)K_{{\bar{t}}{\bar{z}}}. \end{aligned} \label{eq:DFTblock3} \end{equation} Inserting for $K_{MN}$ from above yields the equations of motion for the zero modes. The first two read \begin{equation} \begin{aligned} -\alpha\Box\phi^p\delta_{p(m}{\delta_{n)}}^q(H^\alpha\partial_qH) &= 0 \\ \beta\Box\tilde{\phi}^{\bar{q}}\delta_{{\bar{q}}[m}{\delta_{{\bar{n}}]}}^p(H^\beta\partial_q H) &= 0 \end{aligned} \end{equation} and can be solved by $\Box\phi=0$ and $\Box\tilde{\phi}=0$ respectively.
The next block of equations \eqref{eq:DFTblock2} can be re-covariantized by using \begin{equation} -\mathbb{I}_{ac}\epsilon^{cb}= -\begin{pmatrix} -1 & 0 \\ 0 & 1 \end{pmatrix}\begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} =\begin{pmatrix}0 & 1 \\ 1 & 0 \end{pmatrix} \end{equation} which leads to \begin{equation} \begin{aligned} \partial_a\phi^n\left[\delta_{mn}\Delta H^\alpha - \partial_m\partial_n H^\alpha \right. & \left.- \partial_m(H^\alpha\partial_n H)\right] \\ &= -\mathbb{I}_{ac}\epsilon^{cb}\partial_b\tilde{\phi}^{\bar{n}}{\delta_{\bar{n}}}^p (H-1)\left[\delta_{pm}\Delta H^\beta - \partial_p\partial_m H^\beta \right] \\ \partial_a\phi^n\partial_m(H^\alpha\partial_nH) &= \mathbb{I}_{ac}\epsilon^{cb}\partial_b\tilde{\phi}^{\bar{n}}{\delta_{\bar{n}}}^p H\left[\delta_{pm}\Delta H^\beta - \partial_p\partial_m H^\beta \right]. \end{aligned} \end{equation} Adding these two equations gives \begin{equation} \partial_a\phi^n W_{mn}^{(\alpha)} = \mathbb{I}_{ac}\epsilon^{cb}\partial_b\tilde{\phi}^{\bar{n}}{\delta_{\bar{n}}}^n W_{mn}^{(\beta)} \end{equation} where for $\gamma=\alpha,\beta$ we have $W_{mn}^{(\gamma)}=\delta_{mn}\Delta H^\gamma - \partial_m\partial_n H^\gamma$. If $\alpha=\beta$ we have the same object $W_{mn}$ on both sides which can be shown to be invertible. The equation can thus be reduced to a duality relation between $\phi$ and $\tilde{\phi}$ \begin{equation} \partial_a\phi^m = \mathbb{I}_{ab}\epsilon^{bc}\partial_c\tilde{\phi}^{\bar{n}}\delta_{\bar{n}}^m \qquad \mathrm{or} \qquad \dd\phi^m = \star\dd\tilde{\phi}^{\bar{n}}\delta_{\bar{n}}^m. \label{eq:dualityzeromodes} \end{equation} This equation implies both $\Box\phi=0$ and $\Box\tilde{\phi}=0$ as can be seen by acting with a contracted derivative on the equation. 
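Explicitly, acting on \eqref{eq:dualityzeromodes} with $\mathbb{I}^{da}\partial_d$ and using $\mathbb{I}^{da}\mathbb{I}_{ab}={\delta^d}_b$ gives \begin{equation} \mathbb{I}^{da}\partial_d\partial_a\phi^m = \epsilon^{dc}\partial_d\partial_c\tilde{\phi}^{\bar{n}}{\delta_{\bar{n}}}^m = 0 \end{equation} by the antisymmetry of $\epsilon^{dc}$, so that $\Box\phi^m=0$ (recall $\Box = H\mathbb{I}^{ab}\partial_a\partial_b$); the same argument applied to the Hodge dual of \eqref{eq:dualityzeromodes} yields $\Box\tilde{\phi}^{\bar{m}}=0$.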
If $\phi^m$ and $\tilde{\phi}^{\bar{m}}$ are placed in a generalized vector $\Phi^M=(0,\phi^m,0,\tilde{\phi}^{\bar{m}})$ this can be written as a self-duality relation \begin{equation} \HH_{MN}\dd\Phi^M = \eta_{MN}\star\dd\Phi^N \end{equation} and precisely matches the result in \cite{Duff90a} for the duality symmetric string. The final block of equations of motion \eqref{eq:DFTblock3} is either trivial or also of the form $\Box\phi^m(H^\alpha\partial_mH)=0$ provided $\alpha=-1$. If one were not concerned with the normalisation of the modes then this also provides a way of constraining the value of $\alpha$. The consistent choice of $\alpha=-1$ is fortunately the choice that also leads to normalisable modes. This may be seen by examining the case $\alpha=-1$ and integrating over the transverse space. This exactly mirrors the situation described in \cite{Adawi:1998ta}. The Goldstone modes are really the normalisable modes corresponding to broken gauge transformations. Whereas for gravity the gauge transformations are ordinary diffeomorphisms, in the case of DFT they are generated by the generalised Lie derivative. (In case the reader is more familiar with the study of monopoles, the analogue of the modes described in this paper is the dyonic $U(1)$ mode in the monopole moduli space.) One can now turn equation \eqref{eq:dualityzeromodes} into a (anti-)chiral equation for a linear combination of $\phi$ and $\tilde{\phi}$ as follows. Introducing $\psi_\pm$ to be given by \begin{equation} \psi_\pm = \phi \pm \tilde{\phi} \end{equation} and inserting them into \eqref{eq:dualityzeromodes} and its Hodge dual gives the familiar (anti-)self-dual left- and right-movers \begin{equation} \dd\psi_\pm = \pm\star\dd\psi_\pm \end{equation} of the Tseytlin-string \cite{Tseytlin90, Tseytlin91}. Thus the dynamics of the Goldstone modes of the wave solution reproduce the duality symmetric string in doubled space.
The number of physical degrees of freedom is not doubled; the modes are just rearranged in terms of chiral and anti-chiral modes on the world-sheet. \subsection{Comparison with the $\sigma$-model evaluated in the String or Wave Background} The equations of motion that were derived in the previous section recover the equations of motion of the Tseytlin string. A natural question to ask is: what background is the string in? Is the target space of the doubled solution the combination of the fundamental string with the wave background? The answer to this question can be seen immediately from the Goldstone mode analysis, which gives the equations of motion of the free string, i.e. that of the $\sigma$-model in a flat background. To understand this, it is worth recalling what the Goldstone mode analysis provides in other cases where it has been carried out in a more conventional setting. In the work of \cite{Adawi:1998ta} the Goldstone mode analysis of the D3-brane, the M-theory membrane and fivebrane was carried out and used to determine the effective equations of motion for each of those objects. In each case the analysis gave the description of those objects in a flat background. Some further thought shows that this is the correct answer. The Goldstone mode analysis must give the equations of motion of the string in a flat background since the solution for which one is determining the moduli is that of a string in a flat background. A string solution in the background of other strings, i.e. a string $\sigma$-model in a string background, would be a different solution and as such obey a different set of equations of motion. Describing this more technically, to find the $\sigma$-model in a nontrivial background one must find the backreacted wave solution not for asymptotically flat space but for a space with NS-fluxes switched on asymptotically, and then determine its moduli and their equations of motion.
Of course, the way one normally proceeds with brane actions is that once the effective equations of motion have been determined through a Goldstone mode analysis, one covariantizes these equations (in terms of the geometry of moduli space) to determine the general equations of motion. In terms of the doubled string above, this would imply just replacing the flat target space generalized metric with the generalized metric of an arbitrary background. (The quantum properties of such twisted chiral bosons with an arbitrary target space may well be very nontrivial; an analysis of this is outside the scope of the current paper.) \section{The Membrane as a Wave} \label{sec:M2brane} In a similar manner to the string, the membrane will be shown to arise from a massless solution corresponding to a wave in an extended geometry. We will demonstrate this for the membrane in the $SL(5)$ duality invariant theory, though it is expected that this will be true of all the extended geometries corresponding to the exceptional groups. We begin with the equations of motion of the $SL(5)$ theory. The actions of the U-duality manifest theories have been explored at length \cite{Berman:2011jh}, but the equations of motion will require the construction of projectors just as in the $O(d,d)$ case, since we should only consider variations of the actions that preserve the generalized metric coset structure. We begin by describing these projectors. \subsection{The $SL(5)$ Duality Invariant Theory} \label{sec:SL5eoms} Let us start by examining the extended geometry of the $SL(5)$ duality invariant theory. This arises from the full eleven-dimensional theory by splitting the dimensions into 4+7. The U-duality group acts on the four dimensions and can be made manifest by including the six dual dimensions corresponding to membrane wrappings. There is then a (4+6)-dimensional extended space with manifest $SL(5)$ invariance and no dependence on the remaining seven dimensions.
Referring to the $E_{11}$ decomposition into $SL(5)\times GL(7)$, schematically a generalized metric of such a (10+7)-dimensional space can be written as (see \cite{Malek:2012pw}) \begin{equation} \HH = {\det g_{11}}^{-1/2} \begin{pmatrix} \tilde{\MM} & 0 \\ 0 & g_7 \end{pmatrix} \end{equation} where $\tilde{\MM}$ is the generalized metric on the extended space and $g_7$ is the metric on the remaining seven dimensions. The conformal factor up front is important as it relates these two otherwise independent sectors; it is given in terms of the determinant of $g_{11}$, the metric of the full eleven-dimensional space. This $\tilde{\MM}_{MN}$ is the generalized metric as first given in \cite{Berman:2010is}. It parametrizes the coset $SL(5)/SO(5)$ in terms of the spacetime metric $g_{\mu\nu}$ and the form field $C_{\mu\nu\rho}$ \begin{equation} \tilde{\MM}_{MN} = \begin{pmatrix} g_{\mu\nu} + \frac{1}{2}C_{\mu\rho\sigma}g^{\rho\sigma,\lambda\tau}C_{\lambda\tau\nu} & \frac{1}{\sqrt{2}}C_{\mu\rho\sigma}g^{\rho\sigma,\lambda\tau} \\ \frac{1}{\sqrt{2}}g^{\rho\sigma,\lambda\tau}C_{\lambda\tau\nu} & g^{\rho\sigma,\lambda\tau} \end{pmatrix} \label{eq:SL5metric} \end{equation} for coordinates $X^M = (x^\mu,y_{\mu\nu})$ in the $\mathbf{10}$ of $SL(5)$ and with $g^{\mu\nu,\rho\sigma}=\frac{1}{2}(g^{\mu\rho}g^{\nu\sigma}-g^{\mu\sigma}g^{\nu\rho})$ which is used to raise an antisymmetric pair of indices. Note that there is no overall factor in front; this metric has determinant $g^{-2}$ where $g$ is the determinant of the four-metric $g_{\mu\nu}$. Therefore in this form it is actually an element of $GL(5)$, not $SL(5)$. This can be remedied as follows. The theory contains a scaling symmetry for the $GL(5)$ which can be used to rescale $\tilde{\MM}_{MN}$ by $g$, e.g. $\MM_{MN} = g^{1/5}\tilde{\MM}_{MN}$ (this particular rescaling leads to a generalized metric with unit determinant, i.e. $\det \MM_{MN}=1$).
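To see that this works, note that $\tilde{\MM}_{MN}$ is a $10\times 10$ matrix with $\det\tilde{\MM}=g^{-2}$, so the rescaled metric indeed has unit determinant: \begin{equation} \det\MM = \det\left(g^{1/5}\tilde{\MM}\right) = \left(g^{1/5}\right)^{10}\det\tilde{\MM} = g^{2}\,g^{-2} = 1 \, . \end{equation}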
Noting that $\det g_{11} = g\det g_7$ and assuming a simple form\footnote{For example when considering the compactification of the seven dimensions on a seven-torus with equal radius $R$ this is just $g_7=R\delta_7$ and thus $V=R^7$.} for the seven-metric such that $\det g_7 = V$ we have \begin{equation} \HH = \begin{pmatrix} V^{-1/2}g^{-1/2}g^{-1/5}\MM & 0 \\ 0 & V^{-5/14}g^{-1/2}\delta_7 \end{pmatrix} = \begin{pmatrix} e^{-\Delta}\MM & 0 \\ 0 & e^{-5\Delta/7}\delta_7 \end{pmatrix} \, . \end{equation} Under an $SL(5)$ transformation the seven-sector should remain unchanged, therefore we have the following $SL(5)$ scalar density \begin{equation} e^\Delta = V^{1/2}g^{7/10} \end{equation} which we will use to write down the correctly weighted action for the extended theory. In terms of the generalized metric $\MM_{MN}$ with unit determinant and the volume factor $\Delta$ the action reads \begin{equation} S = \int \dd^D X e^\Delta R \label{eq:SL5action} \end{equation} where the scalar $R$ is given by \begin{equation} \begin{aligned} R &= \frac{1}{12}\MM^{MN}\partial_M\MM^{KL}\partial_N\MM_{KL} -\frac{1}{2}\MM^{MN}\partial_M\MM^{KL}\partial_L\MM_{KN} \\ &\qquad + \partial_M\MM^{MN}\partial_N\Delta + \frac{1}{7}\MM^{MN}\partial_M\Delta\partial_N\Delta \, . \end{aligned} \label{eq:SL5R} \end{equation} The first two terms reproduce the Einstein-Hilbert and Maxwell term upon imposing the section condition. The last two terms are kinetic terms for $\Delta$. The equations of motion for $\Delta$ can be found by varying the action and are given up to total derivatives by $R=0$. On the other hand, varying the action with respect to the generalized metric and integrating by parts gives \begin{equation} \begin{aligned} \delta S = \int \dd^D X e^{\Delta} &\left[\frac{1}{12}\left(\partial_M \MM^{KL}\partial_N \MM_{KL} - 2 \partial_K \MM^{KL}\partial_L \MM_{MN} - 2 \MM^{KL} \partial_K \partial_L \MM_{MN} \right.\right.\\ &\quad\left.\left.
+ 2 \MM^{KL}\MM^{PQ}\partial_K \MM_{MP} \partial_L \MM_{NQ} - 2 \MM^{KL}\partial_K\Delta \partial_L \MM_{MN}\right) \right. \\ &\quad\left. -\frac{1}{2}\left(\partial_M \MM^{KL}\partial_L \MM_{KN} - 2 \partial_L \MM^{KL}\partial_M \MM_{KN} - 2 \MM^{KL} \partial_L\partial_M \MM_{KN} \right.\right. \\ &\quad\left.\left. + 2 \MM^{KP}\MM^{LQ}\partial_{(K} \MM_{M)Q} \partial_L \MM_{NP} - 2 \MM^{KL} \partial_K \Delta \partial_M \MM_{LN}\right) \right.\\ &\quad\left. -\partial_M\partial_N\Delta - \frac{6}{7} \partial_M\Delta\partial_N\Delta \right] \delta \MM^{MN} \, . \end{aligned} \end{equation} Note that there is no term for varying $e^{\Delta}$. This factor contains information about the determinant of $\MM_{MN}$ but does not change if the metric is varied as it is fixed to have unit determinant. We will denote everything inside the brackets by $K_{MN}$ \begin{equation} \delta S = \int \dd^DX e^\Delta K_{MN} \delta \MM^{MN} \, . \label{eq:SL5varaction} \end{equation} As in the case of DFT, \eqref{eq:SL5varaction} does not have to vanish for an arbitrary variation $\delta \MM^{MN}$ since the generalized metric is constrained to parametrize a coset space. This gives rise to a projector to eliminate the additional degrees of freedom. To impose this constraint and find this projector, one has to use the chain rule. In order to vary the generalized metric with respect to the spacetime metric and the C-field, it will be useful to use indices $a = \{\mu,5\}$ in the $\mathbf{5}$ of $SL(5)$. The coordinates are then \begin{equation} X^M = X^{ab} = \begin{cases} X^{\mu 5} &= x^\mu \\ X^{\mu\nu} &= \frac{1}{2}\epsilon^{\mu\nu\rho\sigma}y_{\rho\sigma} \end{cases} \end{equation} where $\epsilon^{\mu\nu\rho\sigma}$ is the permutation symbol in four dimensions, a tensor density. 
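The weight factor $e^\Delta = V^{1/2}g^{7/10}$ introduced earlier can be checked symbolically. This SymPy sketch (an illustration, assuming only $V, g > 0$) confirms that $\Delta = \ln(V^{1/2}g^{7/10})$ reproduces both blocks of the decomposition of $\HH$, i.e. $e^{-\Delta}=V^{-1/2}g^{-1/2}g^{-1/5}$ and $e^{-5\Delta/7}=V^{-5/14}g^{-1/2}$:

```python
import sympy as sp

V, g = sp.symbols('V g', positive=True)

# The SL(5) scalar density read off from the block matching
Delta = sp.log(V**sp.Rational(1, 2) * g**sp.Rational(7, 10))

# SL(5) sector carries e^{-Delta}, seven-sector carries e^{-5 Delta/7}
sl5_block = V**sp.Rational(-1, 2) * g**sp.Rational(-1, 2) * g**sp.Rational(-1, 5)
seven_block = V**sp.Rational(-5, 14) * g**sp.Rational(-1, 2)

assert sp.simplify(sp.exp(-Delta) - sl5_block) == 0
assert sp.simplify(sp.exp(-sp.Rational(5, 7) * Delta) - seven_block) == 0
```

That both exponents are consistent with a single $\Delta$ is what makes the seven-sector an $SL(5)$ singlet up to this overall weight.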
The generalized metric and its inverse take the form \begin{equation} \MM_{ab,cd} = \begin{pmatrix} \MM_{\mu 5,\nu 5} & \MM_{\mu 5,\lambda\tau} \\ \MM_{\rho\sigma,\nu 5} & \MM_{\rho\sigma,\lambda\tau} \end{pmatrix} = g^{1/5} \begin{pmatrix} g_{\mu\nu} + \frac{1}{2}C_{\mu\rho\sigma}g^{\rho\sigma,\lambda\tau}C_{\lambda\tau\nu} & -\frac{1}{2\sqrt{2}} C_{\mu\rho\sigma} g^{\rho\sigma,\alpha\beta} \epsilon_{\alpha\beta\lambda\tau} \\ -\frac{1}{2\sqrt{2}} \epsilon_{\rho\sigma\alpha\beta} g^{\alpha\beta,\lambda\tau}C_{\lambda\tau\nu} & g^{-1}g_{\rho\sigma,\lambda\tau} \end{pmatrix} \notag \end{equation} \begin{equation} \MM^{ab,cd} = g^{-1/5} \begin{pmatrix} g^{\mu\nu} & \frac{1}{2\sqrt{2}}g^{\mu\nu}C_{\nu\alpha\beta}\epsilon^{\alpha\beta\lambda\tau} \\ \frac{1}{2\sqrt{2}}\epsilon^{\rho\sigma\alpha\beta}C_{\alpha\beta\mu}g^{\mu\nu} & gg^{\rho\sigma,\lambda\tau} + \frac{1}{8}\epsilon^{\rho\sigma\alpha\beta}C_{\alpha\beta\mu} g^{\mu\nu}C_{\nu\gamma\delta}\epsilon^{\gamma\delta\lambda\tau} \end{pmatrix} \end{equation} with $g^{\mu\nu,\alpha\beta}g_{\alpha\beta,\rho\sigma} = \frac{1}{2}(\delta^\mu_\rho\delta^\nu_\sigma - \delta^\mu_\sigma\delta^\nu_\rho)$. Note the factor of $g^{1/5}$ up front since this is the rescaled metric with unit determinant. Using the chain rule and varying the metric in \eqref{eq:SL5varaction} with respect to $\delta g_{\mu\nu}$ and $\delta C_{\mu\nu\rho}$ gives \begin{align} \delta S &= \int \dd^DX K_{MN} \left[\frac{\delta \MM^{MN}}{\delta g_{\mu\nu}}\delta g_{\mu\nu} + \frac{\delta \MM^{MN}}{\delta C_{\mu\nu\rho}}\delta C_{\mu\nu\rho}\right] \\ &= \int \dd^DX g^{-1/5}\left\{ \left[-K_{\alpha 5,\beta 5} g^{\alpha(\mu}g^{\nu)\beta} -2 K_{\alpha 5,\beta\beta'}\frac{1}{2\sqrt{2}}g^{\alpha(\mu}g^{\nu)\alpha'} C_{\alpha'\gamma\gamma'}\epsilon^{\gamma\gamma'\beta\beta'}\right.\right. 
\notag\\ &\left.\left.\hspace{3.2cm} + K_{\alpha\alpha',\beta\beta'}\left(\vphantom{\frac{1}{8}} gg^{\mu\nu}g^{\alpha\alpha',\beta\beta'} - gg^{\alpha(\mu}g^{\nu)[\beta}g^{\beta']\alpha'} - gg^{\alpha[\beta}g^{\beta'](\mu}g^{\nu)\alpha'} \right.\right.\right. \notag\\ &\left.\left.\left.\hspace{3.2cm}- \frac{1}{8}\epsilon^{\alpha\alpha'\gamma\gamma'} C_{\gamma\gamma'\sigma}g^{\sigma(\mu}g^{\nu)\sigma'} C_{\sigma'\lambda\lambda'}\epsilon^{\lambda\lambda'\beta\beta'}\right) - \frac{1}{5}g^{1/5}K_{MN}\MM^{MN}g^{\mu\nu} \right]\delta g_{\mu\nu}\right. \notag\\ &\left. \hspace{2.7cm} + \left[ 2K_{\alpha 5,\beta\beta'}\frac{1}{2\sqrt{2}}g^{\alpha\alpha'} \delta_\gamma^{[\mu}\delta_{\gamma'}^\nu\delta_\sigma^{\rho]} \epsilon^{\gamma\gamma'\beta\beta'} \right.\right.\notag\\ &\left.\left. \hspace{3.2cm} + 2K_{\alpha\alpha',\beta\beta'} \frac{1}{8}\epsilon^{\alpha\alpha'\gamma\gamma'} \delta_\gamma^{[\mu}\delta_{\gamma'}^\nu\delta_\sigma^{\rho]} g^{\sigma\sigma'}C_{\sigma'\lambda\lambda'}\epsilon^{\lambda\lambda'\beta\beta'} \right]\delta C_{\mu\nu\rho} \right\} \end{align} where the term $\frac{1}{5}K_{MN}\MM^{MN}g^{\mu\nu}\delta g_{\mu\nu}$ arises from varying the determinant factor. After cleaning up and dropping the symmetrizing and antisymmetrizing brackets, the $g$'s and $C$'s are re-expressed in terms of $\MM$ (factors of $g^{1/5}$ have to be accounted for carefully) \begin{equation} \begin{aligned} \delta S &= \int \dd^DX \left\{g^{1/5}\vphantom{\frac{1}{\sqrt{2}}} \left[-K_{\alpha 5,\beta 5} \MM^{\alpha 5,\mu 5}\MM^{\nu 5,\beta 5} -2 K_{\alpha 5,\beta\beta'}\MM^{\alpha 5,\mu 5}\MM^{\nu 5,\beta\beta'} \right.\right. \\ &\left.\left. \hspace{2.5cm} + K_{\alpha\alpha',\beta\beta'}\left(g^{-1/5}\MM^{\mu 5,\nu 5}gg^{\alpha\alpha',\beta\beta'} - \MM^{\alpha\alpha',\mu 5}\MM^{\nu 5,\beta\beta'}\right) \right.\right. \\ &\left.\left.\hspace{2.5cm} - \frac{1}{5} K_{MN}\MM^{MN}\MM^{\mu 5,\nu 5}\right]\delta g_{\mu\nu}\right. 
\\ &\left.\hspace{2cm} +\frac{1}{\sqrt{2}}\left[ K_{\alpha 5,\beta\beta'} \MM^{\alpha 5,\mu 5}\epsilon^{\nu\rho\beta\beta'} + K_{\alpha\alpha',\beta\beta'} \MM^{\alpha\alpha',\mu 5} \epsilon^{\nu\rho\beta\beta'}\right]\delta C_{\mu\nu\rho} \right\} \end{aligned} \end{equation} Now the indices can be re-covariantized to be expressed as \begin{equation} \begin{aligned} \delta S &= \int \dd^DX \left\{\vphantom{\frac{1}{\sqrt{2}}} g^{1/5}K_{KL}\left(\MM^{M, \mu 5}\MM^{\nu 5, N}\MM_{MP} \frac{1}{4}\epsilon^{aPK}\epsilon_{aNQ}\MM^{QL} - \MM^{K, \mu 5}\MM^{\nu 5, L} \right.\right. \\ &\left.\left.\hspace{4cm} - \frac{1}{5}\MM^{KL}\MM^{\mu 5,\nu 5}\right)\delta g_{\mu\nu} + \frac{1}{\sqrt{2}}K_{KL}\MM^{K, \mu 5}\epsilon^{\nu\rho L5}\delta C_{\mu\nu\rho} \right\} \end{aligned} \label{eq:projderivation} \end{equation} which reproduces the previous line if the extended indices are expanded and summed over. In a final step these expressions can be written in terms of a projected set of equations \begin{equation} \delta S =\int \dd^DX (-3) \PP{M}{N}{K}{L}K_{KL} \left(g^{1/5}\MM^{M, \mu 5}\MM^{\nu 5, N}\delta g_{\mu\nu} - \frac{1}{2\sqrt{2}}\MM^{M, \mu 5}\epsilon^{\nu\rho N5}\delta C_{\mu\nu\rho}\right) \end{equation} where the projector is given by \begin{equation} \PP{M}{N}{K}{L} = \frac{1}{3}\left({\delta_M}^{(K}{\delta_N}^{L)} + \frac{1}{5}\MM_{MN}\MM^{KL} - \frac{1}{4}\MM_{MP}\epsilon^{aP(K}\epsilon_{aNQ}\MM^{L)Q} \right) \end{equation} which is symmetric in both $MN$ and $KL$ as can be seen from the contraction with the symmetric $\delta g_{\mu\nu}$ and $K_{KL}$ respectively. Note that the term containing $\delta C_{\mu\nu\rho}$ does not impose any symmetry property on the projector. The variation of the action has to vanish for \emph{any} $\delta g_{\mu\nu}$ and $\delta C_{\mu\nu\rho}$ independently, therefore the equations of motion are given by \begin{equation} \PP{M}{N}{K}{L}K_{KL} = 0 \label{eq:SL5eom} \end{equation} with $K_{MN}$ defined in \eqref{eq:SL5varaction}. 
\subsection{Divertimento: Equations of Motion with a Projector} \label{sec:divertimento} In general, the dynamics of extended geometry can be described using a projected equation of motion. The action is given by \begin{equation} S = \int \dd^{D}X \LL \end{equation} where the Lagrangian $\LL$ includes the integration measure for the extended space. Setting the variation of the action to zero gives \begin{equation} \delta S = \int \dd^{D}X K_{MN}\delta \MM^{MN} = 0 \end{equation} where $K_{MN}=\delta\LL/\delta\MM^{MN}$ is the variation of the Lagrangian with respect to the generalized metric. The integrand does not have to vanish for an arbitrary $\delta\MM^{MN}$ since the generalized metric is constrained to parametrize the coset space $G/H$. This constraint gives rise to a projector in the equations of motion \begin{equation} {P_{MN}}^{KL}K_{KL} = 0. \label{eq:genEoM} \end{equation} The extended geometries are all equipped with the so-called $Y$-tensor described in \cite{Berman:2012vc}. The $Y$-tensor determines the deviation from usual geometry in that it gives the correction to the Lie derivative to form the generalized Lie derivative given in \cite{Berman:2011cg}. Following the method for $O(d,d)$ and then $SL(5)$, where we use a chain-rule-type argument, we see that the projector may be written in a standard form using only the generalized metric and the $Y$-tensor \begin{equation} {P_{MN}}^{KL} = \frac{1}{a}\left( {\delta_M}^{(K}{\delta_N}^{L)} + b \MM_{MN}\MM^{KL} - \MM_{MP}{Y^{P(K}}_{NQ}\MM^{L)Q} \right) \, , \label{eq:genProj} \end{equation} together with the constants $a$ and $b$ which depend on the dimension of the extended space $D$ and thus the U-duality group. These constants together with the $Y$-tensor are given in the following table for some of the duality groups under consideration. 
\begin{equation} \begin{array}{|l|c|ccc|} \hline & {Y^{MN}}_{KL} & a & b & D \\ \hline O(d,d) & \eta^{MN}\eta_{KL} & 2 & 0 & 2d \\ SL(5) & \frac{1}{4}\epsilon^{iMN}\epsilon_{iKL} & 3 & 1/5 & 10 \\ SO(5,5) & \frac{1}{2}(\Gamma^i)^{MN}(\Gamma_i)_{KL} & 4 & 1/4 & 16 \\ \hline \end{array} \label{eq:ytensor} \end{equation} The elements that form the $Y$-tensor are $\eta_{MN}$, the invariant metric of $O(d,d)$; $\epsilon_{iMN}=\epsilon_{iabcd}$, the $SL(5)$ alternating tensor ($i=1,\dots,5$); and ${(\Gamma^i)^M}_N$, the $16\times 16$ Majorana-Weyl representation of the $SO(5,5)$ Clifford algebra ($i=1,\dots,10$). Our ${P_{MN}}^{KL}$ is a genuine projector in the sense that $P^2=P$ and its eigenvalues are either $0$ or $1$. The eigenvectors with eigenvalue $0$ span the kernel of the projector. Those parts of $K_{MN}$ proportional to these eigenvectors are projected out and eliminated from the equations of motion. The multiplicity of the eigenvalues $0$ and $1$ are called nullity (dimension of the kernel) and rank of the projector respectively. They add up to the dimension $D$ of the vector space of eigenvectors. We have not shown that this holds beyond the groups in the table above, since the calculations have only been done by brute force. However, since the exceptional geometric theories up to $E_7$ are completely determined by the generalized metric and the $Y$-tensor (along with a few dimensionally dependent constants), we expect this form of the projector to hold at least up to $E_7$, with only the constants $a$ and $b$ to be determined. Note that the object $K_{MN}$ is symmetric and thus has $\frac{1}{2}D(D+1)$ independent components in a generalized space with $D$ dimensions. The bosonic degrees of freedom of the theories under consideration are given by the metric tensor $g_{\mu\nu}$ and the form fields $B_{\mu\nu}$ or $C_{\mu\nu\rho}$ (plus one for the dilaton $\phi$ in DFT and the volume factor $\Delta$ in the $SL(5)$ theory). 
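The idempotence claim $P^2=P$ can be tested numerically in the simplest entry of the table. The following NumPy sketch (an illustrative check with a randomly chosen $g$ and $B$-field, using the standard DFT parametrization of the $O(d,d)$ generalized metric) builds ${P_{MN}}^{KL}$ with $a=2$, $b=0$ and ${Y^{MN}}_{KL}=\eta^{MN}\eta_{KL}$ and verifies that it squares to itself:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 2
D = 2 * d

# O(d,d) invariant metric in the off-diagonal frame (its own inverse)
eta = np.block([[np.zeros((d, d)), np.eye(d)],
                [np.eye(d), np.zeros((d, d))]])

# Generalized metric built from a random g and B-field
A = rng.standard_normal((d, d))
g = A @ A.T + d * np.eye(d)
B = rng.standard_normal((d, d))
B = B - B.T
ginv = np.linalg.inv(g)
M = np.block([[g - B @ ginv @ B, B @ ginv],
              [-ginv @ B, ginv]])
Minv = np.linalg.inv(M)
assert np.allclose(M @ eta @ M, eta)    # coset constraint

# Projector P_{MN}^{KL} with Y^{MN}_{KL} = eta^{MN} eta_{KL}, a = 2, b = 0
delta = np.eye(D)
idsym = 0.5 * (np.einsum('mk,nl->mnkl', delta, delta)
               + np.einsum('ml,nk->mnkl', delta, delta))
Y = np.einsum('pk,nq->pknq', eta, eta)
Ysym = 0.5 * (np.einsum('mp,pknq,ql->mnkl', M, Y, Minv)
              + np.einsum('mp,plnq,qk->mnkl', M, Y, Minv))
P = 0.5 * (idsym - Ysym)

# Idempotence: P^2 = P as a D^2 x D^2 matrix
Pmat = P.reshape(D * D, D * D)
assert np.allclose(Pmat @ Pmat, Pmat)
```

The check relies only on $\MM\eta\MM=\eta$, so any coset representative passes; the analogous computation for $SL(5)$ and $SO(5,5)$ proceeds the same way with the $Y$-tensor from the table.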
One equation of motion is needed for each of those degrees of freedom. The projector reduces the components of the equation $K_{MN}=0$ such that the right number of independent equations remain. \subsection{Wave Solution or Membrane in the $SL(5)$ Theory} \label{sec:SL5wave} The wave solution for the $SL(5)$ duality invariant theory is given by a generalized metric $\MM_{MN}$ with line element \begin{equation} \begin{aligned} \dd s^2 &= \MM_{MN}\dd X^M \dd X^N \\ &= (H-2)\left[(\dd x^1)^2 - (\dd x^2)^2 - (\dd x^3)^2 \right] + (\dd x^4)^2 \\ &\quad + 2(H-1)\left[\dd x^1\dd y_{23} + \dd x^2\dd y_{13} - \dd x^3\dd y_{12} \right] \\ &\quad - H\left[(\dd y_{13})^2 + (\dd y_{12})^2 - (\dd y_{23})^2 \right] + (\dd y_{34})^2 + (\dd y_{24})^2 - (\dd y_{14})^2. \end{aligned} \label{eq:SL5ppwave} \end{equation} This generalized metric solves the equations of motion of the $SL(5)$ theory derived in Section~\ref{sec:SL5eoms} (see Appendix \ref{sec:SL5check}). It can be interpreted as a pp-wave in the extended geometry which carries momentum in the directions dual to $x^2$ and $x^3$, i.e.\ combinations of $y_{12}, y_{13}$ and $y_{23}$. Since it is a pp-wave it has no mass or charge, and the solution is pure metric; there is no form field to which it couples. As before, $H$ is a harmonic function of the transverse coordinate $x^4$: $H=1+h\ln x^4$. It is smeared in the remaining dual directions. A Kaluza-Klein ansatz suitable for the geometry here, which allows us to rewrite the solution in terms of four-dimensional quantities and to reduce the dual directions, is \begin{equation} \begin{aligned} \dd s^2 &= \left(g_{\mu\nu} + e^{2\phi}C_{\mu\lambda\tau}g^{\lambda\tau,\rho\sigma} C_{\rho\sigma\nu}\right)\dd x^\mu \dd x^\nu \\ &\quad + 2e^{2\phi}C_{\mu\lambda\tau}g^{\lambda\tau,\rho\sigma} \dd x^\mu \dd y_{\rho\sigma} + e^{2\phi}g^{\lambda\tau,\rho\sigma}\dd y_{\lambda\tau}\dd y_{\rho\sigma}. 
\end{aligned} \label{eq:KKforSL5} \end{equation} The factor $e^{2\phi}$ is a scale factor and needs to be included for consistency. This decomposition of the generalized metric into the usual metric and C-field resembles the form of the generalized metric \eqref{eq:SL5metric} as in the DFT case. By comparing \eqref{eq:KKforSL5} with \eqref{eq:SL5ppwave}, the fields of the reduced system with coordinates $x^\mu$ can be computed. From the diagonal terms we find \begin{equation} g_{\mu\nu} = \mathrm{diag} (-H^{-1}, H^{-1}, H^{-1}, 1) \qquad\mathrm{and}\qquad g^{\mu\nu,\rho\sigma} = e^{-2\phi}\mathrm{diag} (-H, -H, -1, H, 1, 1) \end{equation} and since $g^{\mu\nu,\rho\sigma}$ is given by $g^{\mu\nu}$, the inverse of $g_{\mu\nu}$, we need $e^{2\phi}=H^{-1}$ for consistency. The corresponding line element is \begin{equation} \dd s^2 = -H^{-1}\left[(\dd x^1)^2 - (\dd x^2)^2 - (\dd x^3)^2 \right] + (\dd x^4)^2. \label{eq:membraneKK} \end{equation} The off-diagonal terms give the antisymmetric C-field whose only non-zero component is \begin{equation} C_{123} = -(H^{-1}-1). \end{equation} This metric and C-field look like the membrane in M-theory. To complete this identification, \eqref{eq:membraneKK} has to be rescaled to be expressed in the Einstein frame. The standard rescaling procedure (in four dimensions) gives \begin{equation} g_{\mu\nu} = \Omega^{-2}{\tilde{g}}_{\mu\nu} = H^{-3/2}{\tilde{g}}_{\mu\nu} \end{equation} where \begin{equation} \Omega^2 = \sqrt{|\det e^{2\phi} g^{\mu\nu,\rho\sigma}|} = H^{3/2}. \end{equation} Therefore the rescaled metric reads ${\tilde{g}}_{\mu\nu}=H^{3/2}g_{\mu\nu}$ and the full solution in the Einstein frame is\footnote{The C-field is unaffected by the rescaling; only its field strength obtains a different factor in the action.} \begin{equation} \dd s^2 = -H^{-1/2}\left[(\dd x^1)^2-(\dd x^2)^2-(\dd x^3)^2 \right]+H^{3/2}(\dd x^4)^2 \label{eq:membrane} \end{equation} which is indeed the M2-brane in four dimensions in the Einstein frame. 
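The conformal factor $\Omega^2$ can be verified symbolically. In the SymPy sketch below the pair-index matrix is built without the $\frac{1}{2}$ weight, a convention choice made for this check so that its entries match the diagonal values quoted above:

```python
import sympy as sp

H = sp.symbols('H', positive=True)

# Reduced four-metric of the wave solution and its inverse
g = sp.diag(-1/H, 1/H, 1/H, 1)
ginv = g.inv()

e2phi = 1/H   # scale factor fixed by consistency, e^{2 phi} = H^{-1}

# Pair-index matrix over ordered pairs mu < nu (g is diagonal here,
# so only products of diagonal inverse entries contribute)
pairs = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
G = sp.diag(*[ginv[m, m] * ginv[n, n] for (m, n) in pairs])

# Matches the quoted values: e^{2 phi} g^{mu nu, rho sigma}
assert sp.simplify(e2phi * G - sp.diag(-H, -H, -1, H, 1, 1)) == sp.zeros(6)

# Conformal factor of the Einstein-frame rescaling
Omega2 = sp.sqrt(sp.Abs((e2phi * G).det()))
assert sp.simplify(Omega2 - H**sp.Rational(3, 2)) == 0
```

The determinant of the $6\times 6$ block is $-H^3$, so taking the square root of its absolute value gives $\Omega^2 = H^{3/2}$ as stated.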
The membrane is extended in the $x^2-x^3$ plane. We have thus shown that the solution \eqref{eq:SL5ppwave} which carries momentum in the directions dual to $x^2$ and $x^3$ in the extended geometry corresponds to a membrane stretched along these directions from a reduced point of view. By arguments similar to those in the string case, the mass and charge of the M2-brane are given by the momenta in the dual directions. \subsection{Goldstone Modes of the Wave Solution} \label{sec:SL5goldstones} Following the same procedure as for the DFT wave we will now perform the Goldstone mode analysis for the wave in $SL(5)$. To do this we will use the five-dimensional coordinate representation introduced above and split the coordinates into worldvolume and transverse parts. Note that the membrane in four dimensions only has one transverse direction. By introducing $m,n=1,2,3$, the coordinates read \begin{equation} X^M = X^{ab} = (X^{m5};X^{45},X^{m4},X^{mn}) = (x^m;x^4,y^{mn},y^{m4}). \end{equation} In this notation the non-zero components of the generalized metric for the $SL(5)$ wave given in \eqref{eq:SL5ppwave} can be written as \begin{equation} \begin{aligned} \MM_{m5,n5} &= (2-H)\mathbb{I}_{mn} & \MM^{m5,n5} &= H\mathbb{I}^{mn} \\ \MM_{m4,n4} &= -H\mathbb{I}_{mn} & \MM^{m4,n4} &= -(2-H)\mathbb{I}^{mn} \\ \MM_{m4,n5} &= -(H-1)\mathbb{I}_{mn} & \MM^{m4,n5} &= -(H-1)\mathbb{I}^{mn} \\ \MM_{mn,kl} &= \mathbb{I}_{mn,kl} & \MM^{mn,kl} &= \mathbb{I}^{mn,kl} \\ \MM_{45,45} &= 1 & \MM^{45,45} &= 1 \end{aligned} \end{equation} where the harmonic function $H$ is a function of $X^{45}=x^4$ only, and for convenience the following two matrices are introduced \begin{align} \mathbb{I}_{mn} &= \begin{pmatrix} -1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} = \mathbb{I}^{mn} \, , & \mathbb{I}_{mn,kl} &= \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & -1 \end{pmatrix} = \mathbb{I}^{mn,kl}. 
\end{align} The generalized Lie derivative of the metric and the volume factor (a density) are given by the same expressions as before (cf. \eqref{eq:genLieMetric} and \eqref{eq:genLieDilaton}) with the $Y$-tensor for $SL(5)$ being \begin{equation} {Y^{MN}}_{KL} = \frac{1}{4}\epsilon^{aMN}\epsilon_{aKL} \end{equation} where these are five-dimensional permutation symbols which are tensor densities. We thus have \begin{align} \LL_\xi\MM_{MN} &= \xi^L\partial_L\MM_{MN} + 2\MM_{L(M}\partial_{N)}\xi^L - \frac{1}{2}\MM_{L(M|}\epsilon_{a|N)Q}\epsilon^{aLP}\partial_P\xi^Q \\ \LL_\xi\Delta &= \xi^M\partial_M\Delta + \partial_M\xi^M. \end{align} We again pick a transformation parameter $\xi^M$ with non-zero components only in the transverse directions but not along the worldvolume (and its dual). This can be described by an $SL(5)$ vector field \begin{equation} \xi^M=\xi^{ab}=(0,H^\alpha\hphi,0,H^\beta\hat{\tilde{\phi}}^{mn}) \end{equation} where $\hphi$ and $\hat{\tilde{\phi}}^{mn}$ are a constant scalar and dualized vector which will later become the Goldstone modes once they are allowed a dependence on the worldvolume coordinates, $H$ is the harmonic function and $\alpha, \beta$ are constants determined by normalisability. Using the generalized Lie derivative given above we compute $m_{MN} = \LL_\xi\MM_{MN}$ \begin{equation} \begin{aligned} m_{m5,n5} &= m_{m4,n4} = m_{m4,n5} = -\mathbb{I}_{mn}\hphi H^\alpha \partial H \\ m_{45,45} &= 2\hphi \partial H^\alpha \\ m_{mn,kl} &= -\mathbb{I}_{mn,kl}\hphi \partial H^\alpha \\ m_{mn,45} &= \frac{1}{2}\mathbb{I}_{mn,kl} \hat{\tilde{\phi}}^{kl} \partial H^\beta \end{aligned} \end{equation} and, recalling that $\Delta$ is a constant for our solution, \begin{equation} \lambda = \LL_\xi\Delta = \partial(\hphi H^\alpha). \end{equation} Now the four modes $\phi, \tilde{\phi}^{12}, \tilde{\phi}^{13}$ and $\tilde{\phi}^{23}$ are allowed to depend on the worldvolume coordinates $x^m$ (and the hats are removed). 
For the equations of motion we only need those terms in $K_{MN}$ and $R$ that contain two derivatives acting on $m_{MN}$ and $\lambda$. There are no such terms in $R$ as given in \eqref{eq:SL5R}, but upon integrating by parts they can arise. We thus have \begin{align} K_{MN} &= \MM^{KL}\partial_K\partial_{(M}m_{N)L} - \frac{1}{6}\MM^{KL}\partial_K\partial_L m_{MN} - \partial_M\partial_N \lambda \\ R &= -\frac{2}{7}\MM^{MN}\partial_M\partial_N\lambda - \partial_M\partial_N m^{MN} \end{align} Inserting $m_{MN}$ and defining $\Box\phi=H\mathbb{I}^{mn}\partial_m\partial_n\phi$ this gives \begin{equation} \begin{aligned} K_{m5,n5} &= -(1+\alpha H^{-1})\partial_m\partial_n\phi (H^\alpha\partial H) + \frac{1}{6}\mathbb{I}_{mn}\Box\phi (H^\alpha\partial H) \\ K_{m4,n4} &= \frac{1}{6}\mathbb{I}_{mn}\Box\phi (H^\alpha\partial H) \\ K_{m5,n4} &= -\frac{1}{2}\partial_m\partial_n\phi (H^\alpha\partial H) + \frac{1}{6}\mathbb{I}_{mn}\Box\phi (H^\alpha\partial H) \\ K_{45,45} &= -\frac{\alpha}{3}H^{-1} \Box\phi (H^\alpha\partial H) \\ K_{mn,kl} &= \frac{\alpha}{6}H^{-1}\mathbb{I}_{mn,kl} \Box\phi (H^\alpha\partial H) \\ K_{mn,45} &= -\frac{\beta}{12}H^{-1}\mathbb{I}_{mn,kl} \Box\tilde{\phi}^{kl} (H^\beta\partial H) \\ K_{m5,45} &= -\frac{1}{2}\partial_m\phi \partial(H^\alpha\partial H) \\ K_{m4,45} &= -\frac{1}{2}\partial_m\phi \partial(H^\alpha\partial H)\\ K_{m5,kl} &= \frac{1}{4}\mathbb{I}_{kl,pq}\partial_m\tilde{\phi}^{pq}\partial^2H^\beta \\ K_{m4,kl} &= 0 \end{aligned} \end{equation} The volume factor equation gives \begin{equation} R = H^{-1}(\frac{\alpha}{7}+1)\Box\phi(H^\alpha\partial H) = 0 \end{equation} which is solved by $\Box\phi = 0$. 
Now we have 14 components of the projected equation of motion ${P_{MN}}^{KL}K_{KL}=0$: \begin{itemize} \item three of the form $K_{m5,45} \sim K_{kl,n4} + K_{kl,n5}$ \begin{equation} \begin{aligned} K_{15,45} &= (H-2)(K_{12,24} + K_{13,34}) - (H-1)(K_{12,25} + K_{13,35}) \\ K_{25,45} &= (H-2)(K_{12,14} + K_{23,34}) - (H-1)(K_{12,15} + K_{23,35}) \\ K_{35,45} &= (H-2)(K_{13,14} - K_{23,24}) - (H-1)(K_{13,15} - K_{23,25}) \end{aligned} \end{equation} \item three of the form $K_{m4,45} \sim K_{kl,n4} + K_{kl,n5}$ \begin{equation} \begin{aligned} K_{14,45} &= (H-1)(K_{12,24} + K_{13,34}) - H(K_{12,25} + K_{13,35}) \\ K_{24,45} &= (H-1)(K_{12,14} + K_{23,34}) - H(K_{12,15} + K_{23,35}) \\ K_{34,45} &= (H-1)(K_{13,14} - K_{23,24}) - H(K_{13,15} - K_{23,25}) \end{aligned} \end{equation} \item three of the form $K_{mn,kl} \sim K_{p4,q4} + K_{p4,q5} + K_{p5,q4} + K_{p5,q5}$ with $mn\neq kl$ \begin{equation} \begin{aligned} K_{13,23} &= (H-2)K_{14,24} - (H-1)K_{14,25} - (H-1)K_{15,24} - H K_{15,25} \\ -K_{12,23} &= (H-2)K_{14,34} - (H-1)K_{14,35} - (H-1)K_{15,34} - H K_{15,35} \\ -K_{12,13} &= (H-2)K_{24,34} - (H-1)K_{24,35} - (H-1)K_{25,34} - H K_{25,35} \end{aligned} \end{equation} \item two relating the $K_{m4,m4}$, $K_{m4,m5}$ and $K_{m5,m5}$ components \begin{equation} \begin{aligned} H(K_{15,15}-K_{25,25}-K_{35,35}) &= (H-2)(K_{14,14}-K_{24,24}-K_{34,34}) \\ H(K_{14,15}-K_{24,25}-K_{34,35}) &= (H-1)(K_{14,14}-K_{24,24}-K_{34,34}) \\ \end{aligned} \end{equation} \item and three relating $K_{mn,kl}$ with $mn=kl$ and $K_{45,45}$ to $K_{m4,m4}$, $K_{m4,m5}$ and $K_{m5,m5}$ \begin{equation} \begin{aligned} K_{12,12}-K_{13,13} &= (H-2)(K_{14,14} - 2K_{24,24}) - 2(H-1)(K_{14,15}-2K_{24,25}) \\ &\qquad + H(K_{15,15}-2K_{25,25}) +\frac{2}{H}(K_{14,14} - K_{24,24} - K_{34,34}) \\ K_{12,12}+K_{23,23} &= (H-2)(2K_{14,14} - K_{24,24}) - 2(H-1)(2K_{14,15} - K_{24,25}) \\ &\qquad + H(2K_{15,15} - K_{25,25}) +\frac{2}{H}(K_{14,14} - K_{24,24} - K_{34,34})\\ K_{45,45} - 2K_{12,12} &= 
2(H-2)(K_{14,14} - K_{24,24}) + 4(H-1)(2K_{14,15} - K_{24,25}) \\ &\qquad - 2H(K_{15,15} - K_{25,25}) -\frac{3}{H}(K_{14,14} - K_{24,24} - K_{34,34}) \end{aligned} \end{equation} \end{itemize} The first and second block of equations can be combined to get cancellations, resulting in three equations for $\tilde{\phi}$ \begin{equation} \begin{aligned} \partial_2\tilde{\phi}^{12} + \partial_3\tilde{\phi}^{13} &= 0 \\ \partial_1\tilde{\phi}^{12} - \partial_3\tilde{\phi}^{23} &= 0 \\ \partial_1\tilde{\phi}^{13} + \partial_2\tilde{\phi}^{23} &= 0 \, . \end{aligned} \end{equation} Defining $\tilde{\phi}_i = \frac{1}{2}\epsilon_{ijk}\tilde{\phi}^{jk}$ this can be written as \begin{equation} \begin{aligned} \partial_2\tilde{\phi}_3 - \partial_3\tilde{\phi}_2 &= 0 \\ \partial_1\tilde{\phi}_3 - \partial_3\tilde{\phi}_1 &= 0 \\ \partial_2\tilde{\phi}_1 - \partial_1\tilde{\phi}_2 &= 0 \, . \end{aligned} \end{equation} All the remaining blocks of equations are either trivial or satisfied by $\Box\phi=0$. One would expect a non-zero right-hand side for the above equations of the form $\partial_m\phi$ to get relations between $\phi$ and $\tilde{\phi}$ \begin{equation} \partial_m\phi (\partial^2 H^\alpha) \sim -\mathbb{I}_{mn}\epsilon^{npq}\partial_p\tilde{\phi}_q (\partial^2 H^\beta) \end{equation} This would not only provide a condition for $\beta$ to be equal to $\alpha$, but also the three equations needed to reduce the number of modes from four to one. The zero on the right-hand side is due to a degeneracy in considering the membrane with its three-dimensional worldvolume in a four-dimensional background. There is only one transverse direction and hence only one contributing derivative $\partial_{45}\equiv\partial$. So a term like \begin{equation} \delta_{mn}\delta^{kl}\partial_k\partial_l H^\alpha - {\delta_m}^k{\delta_n}^l\partial_k\partial_l H^\alpha \end{equation} which arose in the string case vanishes for the membrane. 
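The dualization step can be verified symbolically. The following SymPy sketch (with three arbitrary functions standing in for the independent components of $\tilde{\phi}^{jk}$) checks that under $\tilde{\phi}_i = \frac{1}{2}\epsilon_{ijk}\tilde{\phi}^{jk}$ the three divergence-type equations become exactly the curl equations:

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
# Antisymmetric phi~^{jk} built from three arbitrary functions
f12, f13, f23 = [sp.Function(n)(x1, x2, x3) for n in ('f12', 'f13', 'f23')]
phi = {(1, 2): f12, (1, 3): f13, (2, 3): f23,
       (2, 1): -f12, (3, 1): -f13, (3, 2): -f23}

# Dualized modes phi~_i = (1/2) eps_{ijk} phi~^{jk}
eps = sp.LeviCivita
dual = {i: sp.Rational(1, 2) * sum(eps(i, j, k) * phi[(j, k)]
        for j in (1, 2, 3) for k in (1, 2, 3) if j != k) for i in (1, 2, 3)}

X = {1: x1, 2: x2, 3: x3}
D = lambda i, e: sp.diff(e, X[i])

# Original divergence-type system and its curl form
orig = [D(2, phi[(1, 2)]) + D(3, phi[(1, 3)]),
        D(1, phi[(1, 2)]) - D(3, phi[(2, 3)]),
        D(1, phi[(1, 3)]) + D(2, phi[(2, 3)])]
curl = [D(2, dual[3]) - D(3, dual[2]),
        D(1, dual[3]) - D(3, dual[1]),
        D(2, dual[1]) - D(1, dual[2])]

for o, c in zip(orig, curl):
    assert sp.simplify(o - c) == 0
```

With $\tilde{\phi}_1 = \tilde{\phi}^{23}$, $\tilde{\phi}_2 = -\tilde{\phi}^{13}$ and $\tilde{\phi}_3 = \tilde{\phi}^{12}$, the two systems agree term by term.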
It would be interesting to see if the same calculation for the membrane derived from a wave in a larger extended geometry, e.g. the (5+10+1)-dimensional extended space with manifest $SO(5,5)$ invariance along five dimensions, would provide a duality relation between the $\phi$'s and $\tilde{\phi}$'s that could be turned into a self-duality relation resembling the result in \cite{Duff90b}. \section{Discussion} We have seen that strings and branes are null waves from the point of view of extended theories. The BPS nature of these solutions has its origin in the fact that the null wave is BPS, and its reduction naturally gives rise to a BPS condition of charge being equal to tension. There are immediate natural extensions to this work, such as understanding how this works for the supersymmetric theory and for other branes such as the M-theory fivebrane. There is also the more ambitious question as to whether the same analysis works for lower BPS objects such as 1/4 BPS states. The Goldstone mode analysis provides a particularly interesting interplay between worldvolume and spacetime approaches. The solutions all obey the section condition, and the local symmetry variations used to calculate the Goldstone modes also obey the section condition, but there are still components of the variation, $\tilde{\phi}$, that lie in the extended directions. These are crucial in giving the Tseytlin string. Thus the relation between the section condition in the target space and the chirality condition may be understood as follows. From the point of view of the string world-sheet one should view the $\tilde{\phi}$ deformations as components of a local symmetry variation in the extended dimensions, but one that still does not functionally depend on the dual coordinates. This is crucial since it means the section condition is still obeyed by the Tseytlin string. 
Other fascinating possibilities will be to extend this to branes that are non-BPS but are thermodynamically excited. The hope of embedding brane thermodynamics in DFT and extended geometries is intriguing. From this perspective there is also the question of how one should calculate string scattering amplitudes. Many novel contemporary techniques have been developed for understanding the amplitudes of the massless sector of many theories \cite{ArkaniHamed:2012nw}. Now strings and branes themselves may be viewed as massless objects, albeit in a theory with extra dimensions. The fact that these objects are massless degrees of freedom fits well with the idea (previously expressed in \cite{Englert:2003zs}) that one may think of strings and branes as Goldstone modes of the spontaneously broken duality symmetry. As such, the appearance of nonlinearly realized duality symmetry (see for example \cite{Berman:2011jh} and references therein) in the target space is unsurprising from this perspective. Effective actions of sigma models with nonlinearly realized symmetries in target spaces began with the effective action of pions, the Goldstone modes of broken chiral symmetry. Another direction of interest is to consider unsmearing the wave solution. It is uncertain whether this can make sense, since it would then break the section condition; yet in Scherk-Schwarz theories the section condition is broken, and with the localized KK-solution \cite{Jensen:2011jna} the branes become localized in a dual coordinate. Studying the particulars of interesting backgrounds like those described in this paper and their localizations may provide insight into further possibilities. 
We have also benefited from numerous discussions with Martin Cederwall, Paul Cook, Jeong-Hyuck Park and Malcolm Perry on many aspects of DFT and the extended geometries. DSB and FJR are grateful to the Yukawa Institute for Theoretical Physics in Kyoto for the ``Exotic Structures of Spacetime" meeting where this work was completed. DSB is supported by STFC consolidated grant ST/J000469/1 ``String Theory, Gauge Theory and Duality''. JB and FJR are supported by STFC studentships.
Kitsch-Slapped: Specializing In Bad Taste From A (Feminist) Chick's Perspective. Pop Culture, Past & Present, In Yer Kisser.

Oh, Those Von Dewitz Characters
January 12, 2010 / Deanna

Because I become obsessed with research, especially when so little is readily available… In doing some additional research for a piece on silent film star Valda Valkyrien… I found juicy tidbits on her first husband, Baron Hrolf von Dewitz. From The New York Times on September 7, 1919:

GREENWICH, Conn., Sept. 6.– A man calling himself Baron Hrolf J. O. E. Dewitz of New York, a moving picture director, and a girl who said she was A. M. Thaisn de Malmey, a moving picture actress, and daughter of Joseph W. de Malmey and Catherine Thomas de Malmey, were married today by Justice Albert S. Mead in his office. They came up by train from New York, and the bride changed from a traveling dress into a gorgeous pink creation for the ceremony and back again afterward into her traveling costume. Dewitz gave his age as 40, and said he was born in Denmark, and Miss de Melmey gave hers as 21, and said she was born in Spain and was a cousin of the late Empress Elizabeth of Austria. They said they had never been married before. They left for New York, saying they would leave New York Sunday morning for the Pacific Coast.

The so-called "Baron" Dewitz, in spite of his statements to the Greenwich Justice, has been married before, not only once, but several times, and his erstwhile wives are on record as divorcing him. Records show that on May 17, 1908, he was married to Nina Pastorelli, a toe dancer with "The Dancing Daisies." On April 4, 1911 he married Mrs. Katheryn de Montford, an actress, who obtained a divorce from him on Jan. 18, 1912. His third venture was with Miss Freed, whose stage name was Mlle. Valkyrien, another dancer, who as Mrs. 
Adele Freed von Dewitz also got a divorce, the interlocutory decree having been signed on Feb. 13, 1919, by Justice Albert F. Seeger at White Plains. She was then in the movies, and the decree gave her the two-year-old son of the pair. At the time he married Miss Freed, otherwise Mlle. Valkyrien, the "Baron" sent out cards announcing that their residence would be at the Plaza after Sept. 1, 1914, but at the time the cards were issued he and his bride were living at 560 West End Avenue with a Miss Bessie M. Clay.

So far, I've not found anything substantive about the earlier Baronesses von Dewitz (and you know I'll keep looking — The Dancing Daisies?! Oh. My. Gawd.). But I did then find a lengthy wedding notice, also in The New York Times, dated June 23, 1914. (I'm so going to interject along the way for this one.)

Cards bearing the imprint of a jewelry house and the baronial crest of a noble Danish family were sent through the mails yesterday to well-known New Yorkers, saying that: Lo Lieutenant Baron Hrolf von Dewitz, et Mademoiselle Valkyrien Freed de Copenhaque ont l'honneur de vous announcer leur mariage en date du quatorze Mai, a L'eglise Evangelicale-Lutherienne de Saint Mathieu a Jersey City

Don't you just love "Jersey" tacked on the end of all that French — and when, for that matter, did Valkyrien become French?

A second card states, also in French, that the Baron and Baroness would be at home at the Hotel Plaza after Sept. 1. Baron von Dewitz, whose marriage on May 14 in Jersey City is thus announced, is the same Baron who on April 4, 1911, married Mrs. Kathryn de Montford, an actress, at Stamford, Conn., and who, several years previously was reported married to Nina Pastorelli, a toe dancer. Although the alleged marriage with Miss Pastorelli was extensively published in the newspapers, it was shown later that the wedding did not take place. 
The matter of being shown that the marriage to Miss Pastorelli did not take place is A) not as reported later, and #2, not really shown at all.

In his most recent matrimonial venture Baron Dewitz again went to the stage for a wife, for Mlle. Valkyrien Freed is a dancer and a member of the ballet of the Royal Theatre in Copenhagen. Furthermore she is about to embark upon a professional career in this country despite her title, and at a dinner tonight at the home of Miss Jeannette L. Gilder, the writer, her stage future is to be talked over by her husband, Miss Gilder, who, through taking the management of another dancer has become an enthusiastic impresario, and the Baroness herself.

Please note the Baron's involvement in his wife's career; there is more flavor to savor later.

Although the wedding announcement cards say that the Baron and Baroness will be at home at the Plaza after Sept. 1, they are at present living at the home of Miss Bessie M. Clay, at 560 West End Avenue. It was explained last night by Baron Dewitz that this was because he and his bride wished to live in seclusion for a while, and at the same time it gave the Baroness an opportunity to practice her toe dancing.

The Miss Bessie M. Clay mentioned is likely the then President of The New York Institute of Music, located on West End Ave.; more on her, and why they would live with her, is here.

The marriage of Baron Dewitz and the toe dancer, who is not yet 19 years of age and who is a young woman of remarkable beauty, ends all the chances the Baron had of coming into a great estate and another title, he said last night. In fact, he is likely to be cut off by his relatives altogether for not returning to Copenhagen and marrying into a royal family. "This wedding with Miss Freed," said the Baron last night, "was a real romance. Two years ago when I was at home I met her and we fell in love.
I returned to this country and we wrote each other frequently, but my family, and hers, too, put so many obstacles in our path that we gradually stopped writing. Last month we decided to marry after all, and so she came to this country. I met her at the boat and took her to the home of a married sister in Jersey, and a week later we were quietly married.

Put a pin in that "met two years ago" part — there will be some math.

"We are going to Newport in a short while, and she may give some exhibition dances there. I have been approached with offers to go upon the stage, but I am told that in this country a man who goes on the stage is not likely to be taken seriously in business affairs afterward. In my country I could go on the stage as a lark and nothing would be thought of it.

Remember when I asked you to note the Baron's involvement with his latest wife's performance career? Well, it sure seems to me that the Baron von Dewitz desperately wants a stage career himself. He's willing to give up his title and wealth for it. And remember that first (though more recent) article wherein he calls himself "a moving picture director" — I guess that line's a winner.

"The report that I have been married several times is all a mistake. I knew Miss Pastorelli when I was here some years ago and was seen about with her frequently. Some months after I had left this country I was surprised to get some old newspaper clippings saying that Miss Pastorelli and I were married. It was so long after the time that the stories had been published that I did nothing at all about it. I was divorced from Mrs. de Montford about a year and a half ago."

But remember, the later clipping states that "records show" his marriage to Pastorelli on May 17, 1908. "Records," not "reports." And remember, you have a pin in the number two, right? Do the math with his statement that he "was divorced from Mrs. de Montford about a year and a half ago." Erm.
Baron Dewitz, who writes for the magazines, was a Danish naval officer who was one of the first to take up aeroplanes as war machines, and for some time was interested in perfecting an air warship which he wished to sell to European Governments. He said last night that the cost of the enterprise was so heavy that he finally dropped it.

Baron Dewitz apparently did write, including a book titled War's New Weapons. At least that much is true.

*About Miss Bessie M. Clay and The New York Institute of Music: A bit from The New York Times, October 22, 1905:

An interesting feature of this college is what is known as the "Home Department." As more and more girls have been coming from places far from New York to study music, there has been a growing demand for their proper accommodation in the city. Accordingly it is now possible to obtain not only musical instruction at the institute, but rooms, board, and chaperonage can be secured. But the care of the visitor does not stop here. Informal teas and receptions will be arranged to which persons prominent in the musical and artistic world will be invited. There are classes in dancing and fencing, and there is also a bowling alley and gymnasium. In other words, a student from the West can secure here many of the advantages and pleasures she would find at a college like Wellesley or Vassar.

I believe this 1906 issue of Music Trade Review is also on Miss Bessie Clay (said to be the niece of Major Clay of Sherman, Clay & Co.) and her marriage to Truman A. Glaser. However likely it seems that this is the same Bessie Clay, I cannot account for the continued reference to her as "Miss Bessie" past 1906.

And that brings us to the end of today's (last night's) obsession. Until I find out more — or you add to the story with what you know. Once again, I'd like to declare my deep abiding love of The New York Times for making their archives available.
Pingback: Preserving The Legacy Of Silent Film Actress Valkyrien : Inherited Values

ep says: What an interesting story! Now you've got me searching for clues… http://www.imdb.com/name/nm0884824/bio http://en.wikipedia.org/wiki/Valda_Valkyrien her son became a successful painter

The link to the son's art site is in my post — but I'm super glad to have assisted in the creation of another obsessive researching ;)

Pingback: Help For The Gold Diggers Of Victorian Times : Kitsch-Slapped
Q: Should I enforce business logic through database errors?

There's an interesting design decision I've been thinking about lately. Let's say I'm adding usernames to a table, and I want to make sure there are no duplicates. The username column is NOT NULL UNIQUE. I could either:

1. Query the database before inserting to make sure there are no duplicate names, or
2. Just INSERT, and catch any exceptions that come from the database engine.

Assuming the DB I'm using is capable of enforcing constraints, I was wondering what situations each of these choices is appropriate in.

A: It almost always seems like a good idea to do option 2. I wouldn't recommend option 1 because you've effectively doubled the amount of time required to do inserts (they all require reads first). Besides, some new developer is going to just commit sometime and not do the check, and it will get broken.

Another thing to consider is how much downtime is appropriate. Is this a mission-critical app? What happens if the business logic is corrupt? Will factories shut down if it is, or will it just cause some annoying bugs? You can't afford to have your factories shut down because some exception you didn't think of crashed your server. So, perhaps a nightly or weekly check on the data correctness can also help in this case. However, I feel the DB's capability to enforce uniqueness (and potentially other constraints) is the appropriate way to go.

A: Can you cache the username list and check it on the application side without going to the database? You should still have the unique constraint on the database to ensure no bad data gets in (always protect the data at the database level first and foremost), but if you can do the check from a cache, you could save a whole round trip to the database when someone selects the same username as an existing user. Now this may depend on the size of the data you would need to cache and how often the cache would have to be updated.
Not knowing your system, I can't say if it is practical, but I would at least look into doing it.

A: Do you expect that the new username is likely to be unique? Or is it likely that it will be a duplicate? If the username is likely to be unique, doing the insert and catching the exception would be more efficient. If the username is likely to be a duplicate, it will be more efficient to check for duplicates (and potentially look for a similar but not yet taken username) rather than trying to catch the exception.

Obviously different databases and different versions of those databases have a different breakeven point on the relative probabilities. But in general, if you're building a system for a company where everyone has a unique username anyway, do the insert and catch the exception. If you're building Hotmail, check for duplicates first.

A quick demo (on Oracle 11.2.0.1) shows that it's roughly 7 times as expensive to do an insert that fails and to handle the exception than it is to do a check before the insert and then write the data.

SQL> create table username_test (
  2    username varchar2(30) unique
  3  );

Table created.

SQL> set timing on;
SQL> ed
Wrote file afiedt.buf

  1  declare
  2    l_cnt integer;
  3  begin
  4    for i in 1 .. 100000
  5    loop
  6      select count(*)
  7        into l_cnt
  8        from username_test
  9       where username = 'JCAVE';
 10      if( l_cnt = 0 )
 11      then
 12        insert into username_test( username )
 13        values( 'JCAVE' );
 14      end if;
 15    end loop;
 16* end;
SQL> /

PL/SQL procedure successfully completed.

Elapsed: 00:00:04.20

SQL> rollback;

Rollback complete.

Elapsed: 00:00:00.00

SQL> ed
Wrote file afiedt.buf

  1  declare
  2    l_cnt integer;
  3  begin
  4    for i in 1 .. 100000
  5    loop
  6      begin
  7        insert into username_test( username )
  8        values( 'JCAVE' );
  9      exception
 10        when dup_val_on_index then
 11          null;
 12      end;
 13    end loop;
 14* end;
SQL> /

PL/SQL procedure successfully completed.

Elapsed: 00:00:29.58
3,353
Q: How to get find if the input is empty? In my script everything works well except for the output for an empty input. <input id="id1" type="number" max="100" min="40"> <button onclick="count()">OK</button> <p id="test"></p> <script> var inpObj = document.getElementById('id1'); function count() { if(inpObj.checkValidity() == false || inpObj.length === 0) { document.getElementById('test').innerHTML = inpObj.validationMessage; } else { document.getElementById('test').innerHTML = "Input OK"; } } </script> I have no idea why is this not working. Thanks. A: Try this : function count() { if(inpObj.checkValidity() == false ) { document.getElementById('test').innerHTML = inpObj.validationMessage; } else if(inpObj.value.length === 0) { document.getElementById('test').innerHTML = "Empty" } else { document.getElementById('test').innerHTML = "Input OK"; } } See : https://jsfiddle.net/Ldj170Lc/ A: Try this: length is applied on string not on input object. inpObj is HTML object(Input) if(inpObj.checkValidity() == false || inpObj.value.length === 0) { } A: You should use inpObj.value.length instead of inpObj.length https://jsfiddle.net/IA7medd/g7e0bLjc/ var inpObj = document.getElementById('id1'); function count() { if(inpObj.checkValidity() == false || inpObj.value.length === 0) { document.getElementById('test').innerHTML = inpObj.validationMessage; } else { document.getElementById('test').innerHTML = "Input OK"; } }
{ "redpajama_set_name": "RedPajamaStackExchange" }
9,020
Llucmajor is located in the south of Mallorca, in the Bay of Palma. Near the airport of Palma and to the sea station.An ideal place to visit during any time of the year. The S'Arenal beach of Llucmajor is a fine white sandy beach with all services. A wide promenade that reaches Palma, numerous hotels, cafes, shops, restaurants and a Yacht Club. The coast of Llucmajor offers 45 kms of stunning views over the sea, with cliffs, viewpoints and defense towers. From S'Arenal to S'Estanyol de Migjorn, passing through Cala Pi. Located in the region of the Migjorn, to the south of the island, Llucmajor is the most extensive municipality of Mallorca. Fine sand beaches and wonderful coves merge along its 47 kilometers of steep coastline, ideal for sailing, fishing or diving. Or just to enjoy the calm and tranquility that this wonderful place offers. The mild climate (average temperatures of 27º C in summer and 14º C in winter) and easy access to the whole area, make the coast of Llucmajor the ideal destination for all types of tourism, particularly the family. The name Llucmajor probably comes from the Latin LUCUS MAIORIS, wich means greater forest.It is very likely that the farm Llucmajor was full by a large forest. This th seems to be the only accepted. The prehistoric set of Capocorb Vell, in the south of the municipality,is the most known exponent of the island of the la Bronce Age. It is one of the most important towns in the Western Mediterranean, thanks to its conservation. Also, it is one of the first places of Mallorca that haven been excavated and studied, and also one of the most extensive. If you are a lover of cycling , come to Llucmajor. 7 Hiking routes are waiting for you.. Here you can see summary of all and below each one of them detailed. We know that you do not want to leave the house without your pet and that you want to take her everywhere with you of course.. For all budgets, for all needs. 
Son Guardiola is an old country house fully restored and equipped in 50.000 sq. metres of land. Hotel Cap Rocat, a former military fortress located in the most secluded area of Palma de Mallorca's bay. Recently renovated hotel with sound proof and air conditioned rooms of great comfort. Breakfest and dinner buffet. Very close to the beach. Wilde range of excursions and activities.
{ "redpajama_set_name": "RedPajamaC4" }
9,226
Q: Clarification on Likelihod and Maximum Likelihood Estimation (MLE) Notation; PLUS a solution for taking into account the uncertainty of data points Upon reading a significant number of papers related to probabilistic methods of Machine Learning, some of the notation about MLE are still vague to me. So I decided to ask this question once for all and hoping it will be useful for me and other readers. Let $X = \{x_{i} \}_{i=1}^{N}$ be the set of $\textit{N}$ data points , $x_{i}$, and let $ Y = \{y_{i}) \}_{i=1}^{N}$ be the corresponding set of label $y_{i}$, such that $x_{i} \in \mathbb{R}^{V}$, and $y_{i} \in \{0, 1, ..., K\}$ . We can define the likelihood function, as follows $\mathcal{L}(\theta) = \prod_{i=1}^{N} p(y_{i}| x_{i}, \theta) \ \ \ \ $ Eqn.(1) $ \ \ \ \ \ \ \ \ = P(Y|X, \theta)$ And MLE is $argmax_{\theta} = \mathcal{L}(\theta)$ (which can be obtained during some optimization algorithms or by obtatining the closed form solution of specific model). In above equation we implicitly assumed that data points are I.I.D. And moreover, if we have intended to consider the uncertainty into account, we could assume the existence of set latent/hidden variables $Z=\{z_{j}\}_{j=1}^{N}$, during the process of data generation (Where $N$ is the number of laten variables). Therefore, we should modify the Likelihood as follows: $\mathcal{L}(\theta) = \prod_{i=1}^{N} p(y_{i}| x_{i}, z_{i}, \theta) p(z_{i} | x_{i}, \theta) \ \ \ \ $ Eqn.(2) where each data point should be marginalized over all the latent variables $z_{j}$, concretely, $ p(y_{i} | x_{i}) = \sum_{i=1}^{M} p(y_{i}| x_{i}, z_{j}, \theta) p(z_{i} | x_{i}, \theta) $ Now the questions are the as follows: * *Is it correct to write $\theta$ in RHS of these equations? *Is the Eqn. (3) the same as Eqn. (2)? I.e what does it mean if we do not write down $\theta$ in RHS? 
(as sometimes it is used in some papers like https://arxiv.org/pdf/1502.03044.pdf Eq.(10)) *Are my notation and assumption for taking into account the uncertainty for converting Eqn.(1) to Eqn.(2) correct? *Is it correct that we should marginalized each data point over all the latent variables? $\mathcal{L}(\theta) = \prod_{i=1}^{N} p(y_{i}| x_{i}, z_{i}) p(z_{i} | x_{i}) \ \ \ \ $ Eqn.(3) Where $\theta$ is the model's parameter(s).
{ "redpajama_set_name": "RedPajamaStackExchange" }
3,520
Published on : July 13, 2022 July 13, 2022 Published by : The Goat By MALCOM GLADWELL PAPERBACK | IN STORE ONLY Malcolm Gladwell, host of the podcast Revisionist History and author of the #1 New York Times bestseller Outliers, offers a powerful examination of our interactions with strangers, and why they often go wrong—now with a new afterword by the author. A Best Book of the Year: The Financial Times, Bloomberg, Chicago Tribune, and Detroit Free Press How did Fidel Castro fool the CIA for a generation? Why did Neville Chamberlain think he could trust Adolf Hitler? Why are campus sexual assaults on the rise? Do television sitcoms teach us something about the way we relate to one another that isn't true? Talking to Strangers is a challenging and controversial excursion through history, psychology, and scandals taken straight from the news. In it, Malcolm Gladwell revisits the deceptions of Bernie Madoff, the suicide of Sylvia Plath, and the death of Sandra Bland—throwing our understanding of these and other stories into doubt. Something is very wrong, Gladwell argues, with the tools and strategies we use to make sense of people we don't know, and the resulting conflict and misunderstanding have a profound effect on our lives and our world. Now, with Talking to Strangers, Malcolm Gladwell brings us a gripping guidebook for troubled times. How to raise kids who aren't assholes HOW TO BE A CANADIAN
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
88
{"url":"http:\/\/quomodocumque.wordpress.com\/tag\/algebraic-geometry\/","text":"## Y. Zhao and the Roberts conjecture over function\u00a0fields\n\nBefore the developments of the last few years the only thing that was known about the Cohen-Lenstra conjecture was what had already been known\u00a0before the Cohen-Lenstra conjecture; namely, that the number of cubic fields of discriminant between -X and X could be expressed as\n\n$\\frac{1}{3\\zeta(3)} X + o(X)$.\n\nIt isn\u2019t hard to go back and forth between the count of cubic fields and the average size of the 3-torsion part of the class group of quadratic fields, which gives the connection with Cohen-Lenstra in its usual form.\n\nAnyway, Datskovsky and Wright showed that the asymptotic above holds (for suitable values of 12) over any global field of characteristic at least 5. \u00a0That is: \u00a0for such a field K, you let N_K(X) be the number of cubic extensions of K whose discriminant has norm at most X; then\n\n$N_K(X) = c_K \\zeta_K(3)^{-1} X + o(X)$\n\nfor some explicit rational constant $c_K$.\n\nOne interesting feature of this theorem is that, if it weren\u2019t a theorem, you might doubt it was true! \u00a0Because the agreement with data is pretty poor. \u00a0That\u2019s because the convergence to the Davenport-Heilbronn limit is extremely slow; even if you let your discriminant range up to ten million or so, you still see substantially fewer cubic fields than you\u2019re supposed to.\n\nIn 2000, David Roberts massively clarified the situation, formulating a conjectural refinement of the Davenport-Heilbronn theorem motivated by the Shintani zeta functions:\n\n$N_{\\mathbf{Q}}(X) = (1\/3)\\zeta(3)^{-1} X + c X^{5\/6} + o(X^{5\/6})$\n\nwith c an explicit (negative) constant. 
\u00a0The secondary term with an exponent very close to 1 explains the slow convergence to the Davenport-Heilbronn estimate.\n\nThe Datskovsky-Wright argument works over an arbitrary global field but, like most arguments that work over both number fields and function fields, it is not very geometric. \u00a0I asked my Ph.D. student Yongqiang Zhao, who\u2019s finishing this year,\u00a0to revisit the question of counting cubic extensions of a function field F_q(t) from a more geometric point of view to see if he could get results towards the Roberts conjecture. \u00a0And he did! \u00a0Which is what I want to tell you about.\n\nBut while Zhao was writing his thesis, there was a big development \u2014 the Roberts conjecture was proved. \u00a0Not only that \u2014 it was proved twice! \u00a0Once by Bhargava, Shankar, and Tsimerman, and once by Thorne and Taniguchi, independently, simultaneously, and using very different methods. \u00a0It is certainly plausible that these methods can give the Roberts conjecture over function fields, but at the moment, they don\u2019t.\n\nNeither does Zhao, yet \u2014 but he\u2019s almost there, getting\n\n$N_K(T) = \\zeta_K(3)^{-1} X + O(X^{5\/6 + \\epsilon})$\n\nfor all rational function fields K = F_q(t) of characteristic at least 5. \u00a0And his approach illuminates the geometry of the situation in a very beautiful way, which I think sheds light on how things work in the number field case.\n\nGeometrically speaking, to count cubic extensions of F_q(t) is to count trigonal curves\u00a0over F_q. 
\u00a0And the moduli space of trigonal curves has a classical unirational parametrization, which I learned from Mike Roth\u00a0many years ago: \u00a0given a trigonal curve Y, you push forward the structure sheaf along the degree-3 map to P^1, yielding a rank-3 vector bundle on P^1; you mod out by the natural copy of the structure sheaf; and you end up with a rank-2 vector bundle W on P^1, whose projectivization is a rational surface in which Y embeds. \u00a0This rational surface is a Hirzebruch surface F_k, where k is an integer determined by the isomorphism class of the vector bundle W. \u00a0(This story is the geometric version of the Delone-Fadeev parametrization of cubic rings by binary cubic forms.)\n\nThis point of view replaces a problem of counting isomorphism classes of curves (hard!) with a problem of counting divisors in surfaces (not easy, but easier.) \u00a0It\u2019s not hard to figure out what linear system on F_k contains Y. \u00a0Counting divisors in a linear system is nothing but a dimension count, but you have to be careful \u2014 in this problem, you only want to count smooth members. \u00a0That\u2019s a substantially more delicate problem. \u00a0Counting all the divisors is more or less the problem of counting all cubic rings; that problem, as the number theorists have long known, is much easier than the problem of counting just the maximal orders in cubic fields.\n\nAlready, the geometric meaning of the negative secondary term becomes quite clear; it turns out that when k is big enough (i.e. if the Hirzebruch surface is twisty enough) then the corresponding linear system has no smooth, or even irreducible, members! \u00a0So what \u201cought\u201d to be a sum over all k is rudely truncated; and it turns out that the sum over larger k that \u201cshould have been there\u201d is on order X^{5\/6}.\n\nSo how do you count the smooth members of a linear system? 
\u00a0When the linear system is highly ample, this is precisely the subject of Poonen\u2019s well-known \u201cBertini theorem over finite fields.\u201d \u00a0But the trigonal linear systems aren\u2019t like that; they\u2019re only \u201csemi-ample,\u201d because their intersection with the fiber of projection F_k -> P^1 is fixed at 3. \u00a0Zhao shows that, just as in Poonen\u2019s case, the probability that a member of such a system is smooth converges to a limit as the linear system gets more complicated; only this limit is computed, not as a product over points P of the probability D is smooth at P, but rather a product over fibers F of the probability that D is smooth along F. \u00a0(This same insight, arrived at independently, is central to\u00a0the paper of Erman and Wood\u00a0I mentioned last week.)\n\nThis alone is enough for Zhao to get a version of Davenport-Heilbronn over F_q(t) with error term O(X^{7\/8}), better than anything that was known for number fields prior to last year. \u00a0How he gets even closer to Roberts is too involved to go into on the blog, but it\u2019s the best part, and it\u2019s where the algebraic geometry really starts; the main idea is a very careful analysis of what happens when you take a singular curve on a Hirzebruch surface and start carrying out elementary transforms at the singular points, making your curve more smooth but also changing which Hirzebruch surface it\u2019s on!\n\nTo what extent is Zhao\u2019s method analogous to the existing proofs of the Roberts conjecture over Q? 
\u00a0I\u2019m not sure; though Zhao, together with the five authors of the two papers I mentioned, spent a week huddling at AIM thinking about this, and they can comment if they want.\n\nI\u2019ll just keep saying what I always say: \u00a0if a problem in arithmetic statistics over Q is interesting, there is almost certainly interesting algebraic geometry in the analogous problem over F_q(t), and the algebraic geometry is liable in turn to offer some insights into the original question.\n\n## This Week\u2019s Finds In Number\u00a0Theory\n\nTwenty years ago yesterday, John Baez posted the first installment of This Week\u2019s Finds in Mathematical Physics. \u00a0In so doing, he invented the math blog, and, quite possibly, the blog itself. \u00a0A lot of mathematicians of my generation found in John\u2019s blog an accessible, informal, but never dumbed-down window beyond what we were learning in classes, into the messy and contentious ground of current research. \u00a0And everybody who blogs now owes him a gigantic debt.\n\nIn his honor I thought it would be a good idea to post a \u201cThis Week\u2019s Finds\u201d style post of my own, with capsule summaries of a few papers I\u2019ve recently noted with pleasure and interest. \u00a0I won\u2019t be able to weave these into a story the way John often did, though! \u00a0Nor will there be awesome ASCII graphics. \u00a0Nor will any of the papers actually be from this week, because I\u2019m a little behind on my math.NT abstract scanning.\n\nIf you run a math blog, please consider doing the same in your own field! \u00a0I\u2019ll link to it.\n\nUpdate: \u00a0It begins! \u00a0Valeria de Palva offers This Week\u2019s Finds In Categorical Logic. 
\u00a0And Matt Ward, a grad student at UW-Seattle, has This Week\u2019s Finds in Arithmetic Geometry.\n\n1) \u00a0\u201cOn sets defining few ordinary lines,\u201d by Ben Green and Terry Tao.\n\nThe idea that has launched a thousand papers in additive combinatorics: \u00a0if you are a set approximately closed under some kind of relation, then you are approximately a set which is actually closed under that kind of relation. \u00a0Subset of a group mostly closed under multiplication? \u00a0You must be close to an honest subgroup. \u00a0Subset of Z with too many pair-sums agreeing? \u00a0You have an unusually large intersection with an authentic arithmetic progression. \u00a0And so on.\n\nThis new paper considers the case of sets in R^2 with few ordinary lines; that is, sets S such that most lines that intersect S at all intersect S in three or more points. \u00a0How can you cook up a set of points with this property? \u00a0There are various boring ways, like making all the points collinear. \u00a0But there\u2019s only one interesting way I can think of: \u00a0have the points form an \u201carithmetic progression\u201d \u2026,-3P,-2P, -P, P,2P,3P, \u2026. in an elliptic curve! \u00a0(A finite subgroup also works.) \u00a0Then the usual description of the group law on the curve tells us that the line joining two points of S quite often passes through a third. \u00a0Green and Tao prove a remarkable quasi-converse to this fact: \u00a0if a set has few ordinary lines, it must be concentrated on a cubic algebraic curve! \u00a0This might be my favorite \u201capproximately structured implies approximates a structure\u201d theorem yet.\n\n2) \u201cAsymptotic behavior of rational curves,\u201d by David Bourqui. 
\u00a0Oh, I was about to start writing this but when I searched I realized I already blogged about this paper when it came out!\u00a0 I leave this here because the paper is just as interesting now as it was then\u2026\n\n3) \u201cThe fluctuations in the number of points of smooth plane curves over finite fields,\u201d by Alina Bucur, Chantal David, Brooke Feigon, and Matilde Lalin;\n\n\u201cThe probability that a complete intersection is smooth,\u201d\u00a0by Alina Bucur and Kiran Kedlaya;\n\n\u201cThe distribution of the number of points on trigonal curves over F_q,\u201d by Melanie Matchett Wood;\n\n\u201cSemiample Bertini theorems over finite fields,\u201d by Daniel Erman and Melanie Matchett Wood.\n\nHow many rational points does a curve over F_q have? \u00a0We discussed this question here a few years ago, coming to no clear conclusion. \u00a0I still maintain that if the curve is understood to vary over M_g(F_q), with q fixed and g growing, the problem is ridiculously hard.\n\nBut in more manageable families of curves, we now know a lot more than we did in 2008.\n\nYou might guess, of course, that the average number of points should be q+1; if you have to reason to think of Frobenius as biased towards having positive or negative trace, why not guess that the trace, on average, is 0? \u00a0Bucur-David-Feigon-Lalin prove that this is exactly the case for a random smooth plane curve. \u00a0It\u2019s not hard to check that this holds for a random hyperelliptic curve as well. \u00a0But for a random trigonal\u00a0curve, Wood proves that the answer is different \u2014 the average is slightly less than q+2!\n\nWhere did the extra point come from?\n\nHere\u2019s one way I like to think of it. \u00a0This is very vague, and proves nothing, of course. \u00a0The trigonal curve X has a degree-3 map to P^1, which is ramified at some divisor D in P^1. \u00a0If D is a random divisor, it has one F_q-point on average. 
\u00a0How many F_q-points on X lie over each rational point P of D? \u00a0Well, generically, the ramification is going to be simple, and this means that there are two rational points over D; the branch point, and the unique unramified point. \u00a0Over every other F_q-point of D, the Frobenius action on the preimage in X should be a random element of S_3, with an average of one fixed point. \u00a0To sum up, in expectation we should see q rational points of X over q non-branch rational points of P^1, and 2 rational points of X over a single rational branch point in P^1, for a total of q+2.\n\n(Erman and Wood, in a paper released just a few months ago, prove much more general results of a similar flavor about smooth members of linear systems on P^1 x P^1 (or other Hirzebruch surfaces, or other varieties entirely) which are semiample; for instance, they may have a map to P^1 which stays constant in degree, while their intersection with another divisor gets larger and larger.)\n\nMost mysterious of all is the theorem of Bucur and Kedlaya, which shows (among other things) that if X is a random smooth intersection of two hypersurfaces of large degree in P^3, then the size of |X(F_q)| is slightly less than q+1 on average. \u00a0For this phenomenon I have no heuristic explanation at all. \u00a0What\u2019s keeping the points away?\n\n## Mochizuki on\u00a0ABC\n\n[Update: \u00a0Lots of traffic coming in from Hacker News, much of it presumably from outside the usual pro number theory crowd that reads this blog. \u00a0If you're not already familiar with the ABC conjecture, I recommend Barry Mazur's beautiful expository paper \"Questions about Number.\"]\n\n[Re-update:\u00a0 Minhyong Kim's discussion on Math Overflow\u00a0is the most well-informed public discussion of Mochizuki's strategy. \u00a0(Of course, it is still very sketchy indeed, as Minhyong hastens to emphasize.) 
Both Kim's writeup and discussions I've had with others suggest that the best place to start may be Mochizuki's 2000 paper "A Survey of the Hodge-Arakelov Theory of Elliptic Curves I."]

Shin Mochizuki has released his long-rumored proof of the ABC conjecture, in a paper called "Inter-universal Teichmuller theory IV: log-volume computations and set-theoretic foundations."

I just saw this an hour ago and so I have very little to say, beyond what I wrote on Google+ when rumors of this started circulating earlier this summer:

I hope it's true: my sense is that there's a lot of very beautiful, very hard math going on in Shin's work which almost no one in the community has really engaged with, and the resolution of a major conjecture would obviously create such engagement very quickly.

Well, now the time has come. I have not even begun to understand Shin's approach to the conjecture. But it's clear that it involves ideas which are completely outside the mainstream of the subject. Looking at it, you feel a bit like you might be reading a paper from the future, or from outer space.

Let me highlight one point which is clearly important, which I draw from pp. 3–6 of the linked paper.

WARNING LABEL: Of course my attempt to paraphrase is based on the barest of acquaintance with a very small section of the work and is placed here just to get people to look at Mochizuki's paper — I may have it all wrong!

Mochizuki argues that it is too limiting to think about "the category of schemes over Spec Z," as we are accustomed to do. He makes the inarguable point that when X is a kind of thing, it can happen that the category of Xes, qua category, may not tell us very much about what Xes are like — for instance, if there is only one X and it has only one automorphism. Mochizuki argues that the category of schemes over a base is — if not quite this uninformative — insufficiently rich to handle certain problems in Diophantine geometry. He wants us instead to think about what he calls the "species" of schemes over Spec Z, where a scheme in this sense is not an abstract object in a category, but something cut out by a formula. In some sense this view is more classical than the conventional one, in which we tend to feel good about ourselves if we can "remove coordinates" and think about objects and arrows without implicitly applying a forgetful functor and referring to the object as a space with a Zariski topology or — ptui! — a set of points.

But Mochizuki's point of view is not actually classical at all — because the point he wants to make is that formulas can be interpreted in any model of set theory, and each interpretation gives you a different category. What is "inter-universal" about inter-universal Teichmuller theory is that it is important to keep track of all these categories, or at least many different ones. What he is doing, he says, is simply outside the theory of schemes over Spec Z, even though it has consequences within that theory — just as (this part is my gloss) the theory of schemes itself is outside the classical theory of varieties, but provides us information about varieties that the classical theory could not have generated internally.

It's tremendously exciting. I very much look forward to commentary from people with a deeper knowledge than mine of Mochizuki's past and present work.

• Algebraists eat corn row by row, analysts eat corn circle by circle. Yep, I eat down the rows like a typewriter. Why? Because it is the right way.
• This short paper by Johan de Jong and Wei Ho addresses an interesting question I'd never thought about: does a Brauer-Severi variety over a field K contain a genus-1 curve defined over K? They show the answer is yes in dimensions up to 4 (3 and 4 being the new cases). In dimension 1, this just asks about covers of Brauer-Severi curves by genus-1 curves; I remember this kind of situation coming up in Ekin Ozman's thesis, where certain twists of modular curves end up being covers of Brauer-Severi curves. Which Brauer-Severi varieties are split by twisted modular curves?
• Always nice to see a coherent description of the p-adic numbers in the popular press; and George Musser delivers the goods in Scientific American, in the context of recent work in cosmology by Harlow, Shenker, Stanford, and Susskind. Two quibbles: first, if I understood Susskind's talk on this stuff correctly, the point is to model things by an infinite regular tree. The fact that when the out-degree is a prime power this happens to look like the Bruhat-Tits tree is in some sense tangential, though very useful for getting an intuitive picture of what's going on — as long as your intuition is already p-adic, of course! Second quibble is that Musser seems to suggest at the end that p-adic distances can't get arbitrarily small:

On top of that, distance is always finite. There are no p-adic infinitesimals, or infinitely small distances, such as the dx and dy you see in high-school calculus. In the argot, p-adics are "non-Archimedean." Mathematicians had to cook up a whole new type of calculus for them.

Prior to the multiverse study, non-Archimedeanness was the main reason physicists had taken the trouble to decipher those mathematics textbooks.
Theorists think that the natural world, too, has no infinitely small distances; there is some minimal possible distance, the Planck scale, below which gravity is so intense that it renders the entire notion of space meaningless. Grappling with this granularity has always vexed theorists. Real numbers can be subdivided all the way down to geometric points of zero size, so they are ill-suited to describing a granular space; attempting to use them for this purpose tends to spoil the symmetries on which modern physics is based.

## Hwang and To on injectivity radius and gonality, and "Typical curves are not typical."

Interesting new paper in the American Journal of Mathematics, not on arXiv unfortunately. An old theorem of Li and Yau shows how to lower-bound the gonality of a Riemann surface in terms of the spectral gap of its Laplacian; this (together with new theorems by many people on superstrong approximation for thin groups) is what Chris Hall, Emmanuel Kowalski, and I used to give lower bounds on gonalities in various families of covers of a fixed base.

The new paper gives a lower bound for the gonality of a compact Riemann surface in terms of the injectivity radius, which is half the length of the shortest closed geodesic loop. You could think of it like this — they show that the low-gonality loci in M_g stay very close to the boundary.

"The middle" of M_g is a mysterious place. A "typical" curve of genus g has a big spectral gap, gonality on order g/2, a big injectivity radius… but most curves you can write down are just the opposite.

Typical curves are not typical.

When g is large, M_g is general type, and so the generic curve doesn't move in a rational family. Are all the rational families near the boundary? Gaby Farkas explained to me on Math Overflow how to construct a rationally parametrized family of genus-g curves whose gonality is generic, as a pencil of curves on a K3 surface. I wonder how "typical" these curves are? Do some have large injectivity radius? Or a large spectral gap?

## The conformal modulus of a mapping class

(Warning — this post concerns math I don't know well and is all questions, no answers.)

Suppose you have a holomorphic map from C^* to M_g,n, the moduli space of curves. Then you get a map on fundamental groups from $\pi_1(\mathbf{C}^*)$ (otherwise known as Z) to $\pi_1(\mathcal{M}_{g,n})$ (otherwise known as the mapping class group) — in other words, you get a mapping class.

But not just any mapping class; this one, which we'll call u, is the monodromy of a holomorphic family of marked curves around a degenerate point. So, for example, the image of u on homology has to be potentially unipotent. I'm not sure (but I presume others know) which mapping classes u can arise in this way; does some power of u have to be a product of commuting Dehn twists, or is that too much to ask?

In any event, there are lots of mapping classes which you are not going to see. Let m be your favorite one. Now you can still represent m by a smooth loop in M_g,n. And you can deform this loop to be a real-analytic function

$f: \{z: |z| = 1\} \rightarrow \mathcal{M}_{g,n}$

Finally — while you can't extend f to all of C^*, you can extend it to some annulus with outer radius R and inner radius r.

Definition: The conformal modulus of a mapping class x is the supremum, over all such f and all annuli, of (1/2π) log(R/r).

So you can think of this as some kind of measurement of "how complicated of a path do you have to draw on M_{g,n} in order to represent x?" The modulus is infinite exactly when the mapping class is represented by a holomorphic degeneration. In particular, I imagine that a pseudo-Anosov mapping class must have finite conformal modulus. That is: positive entropy (aka dilatation) implies finite conformal modulus. Which leads Jöricke to ask: what is the relation more generally between conformal modulus and (log of) dilatation? When (g,n) = (0,3) she has shown that the two are inverse to each other. In this case, the group is more or less PSL_2(Z), so it's not so surprising that any two measures of complexity are tightly bound together.

Actually, I should be honest and say that Jöricke raised this only for g = 0, so maybe there's some reason it's a bad idea to go beyond braids; but the question still seems to me to make sense. For that matter, one could even ask the same question with M_g replaced by A_g, right? What is the conformal modulus of a symplectic matrix which is not potentially unipotent? Is it always tightly related to the size of the largest eigenvalue?

## Gonality, the Bogomolov property, and Habegger's theorem on Q(E^tors)

I promised to say a little more about why I think the result of Habegger's recent paper, "Small Height and Infinite Non-Abelian Extensions," is so cool.

First of all: we say an algebraic extension K of Q has the Bogomolov property if there is no infinite sequence of non-torsion elements x in K^* whose absolute logarithmic height tends to 0. Equivalently, 0 is isolated in the set of absolute heights in K^*. Finite extensions of Q evidently have the Bogomolov property (henceforth: (B)) but for infinite extensions the question is much subtler. Certainly $\bar{\mathbf{Q}}$ itself doesn't have (B): consider the sequence $2^{1/2}, 2^{1/3}, 2^{1/4}, \ldots$ On the other hand, the maximal abelian extension of Q is known to have (B)
(Amoroso-Dvornicich), as is any extension which is totally split at some fixed place p (Schinzel for the real prime, Bombieri-Zannier for the other primes).

Habegger has proved that, when E is an elliptic curve over Q, the field Q(E^tors) obtained by adjoining all torsion points of E has the Bogomolov property.

What does this have to do with gonality, and with my paper with Chris Hall and Emmanuel Kowalski from last year?

Suppose we ask about the Bogomolov property for extensions of a more general field F? Well, F had better admit a notion of absolute Weil height. This is certainly OK when F is a global field, like the function field of a curve over a finite field k; but in fact it's fine for the function field of a complex curve as well. So let's take that view; in fact, for simplicity, let's take F to be C(t).

What does it mean for an algebraic extension F' of F to have the Bogomolov property? It means that there is a constant c such that, for every finite subextension L of F' and every non-constant function x in L^*, the absolute logarithmic height of x is at least c.

Now L is the function field of some complex algebraic curve C, a finite cover of P^1. And a non-constant function x in L^* can be thought of as a nonzero principal divisor. The logarithmic height, in this context, is just the number of zeroes of x — or, if you like, the number of poles of x — or, if you like, the degree of x, thought of as a morphism from C to the projective line. (Not necessarily the projective line of which C is a cover — a new projective line!) In the number field context, it was pretty easy to see that the log height of non-torsion elements of L^* was bounded away from 0. That's true here, too, even more easily — a non-constant map from C to P^1 has degree at least 1!

There's one convenient difference between the geometric case and the number field case. The lowest log height of a non-torsion element of L^* — that is, the least degree of a non-constant map from C to P^1 — already has a name. It's called the gonality of C. For the Bogomolov property, the relevant number isn't the log height, but the absolute log height, which is to say the gonality divided by [L:F].

So the Bogomolov property for F' — what we might call the geometric Bogomolov property — says the following. We think of F' as a family of finite covers C / P^1. Then

(GB) There is a constant c such that the gonality of C is at least c deg(C/P^1), for every cover C in the family.

What kinds of families of covers are geometrically Bogomolov? As in the number field case, you can certainly find some families that fail the test — for instance, gonality is bounded above in terms of genus, so any family of curves C with growing degree over P^1 but bounded genus will do the trick.

On the other hand, the family of modular curves over X(1) is geometrically Bogomolov; this was proved (independently) by Abramovich and Zograf. This is a gigantic and elegant generalization of Ogg's old theorem that only finitely many modular curves are hyperelliptic (i.e. only finitely many have gonality 2).

At this point we have actually more or less proved the geometric version of Habegger's theorem! Here's the idea. Take F = C(t) and let E/F be an elliptic curve; then to prove that F(E(torsion)) has (GB), we need to give a gonality lower bound for the curve C_N obtained by adjoining an N-torsion point to F. (I am slightly punting on the issue of being careful about other fields contained in F(E(torsion)), but I don't think this matters.) But C_N admits a dominant map to X_1(N); gonality goes down in dominant maps, so the Abramovich-Zograf bound on the gonality of X_1(N) provides a lower bound for the gonality of C_N, and it turns out that this gives exactly the bound required.

What Chris, Emmanuel and I proved is that (GB) is true in much greater generality — in fact (using recent results of Golsefidy and Varju that slightly postdate our paper) it holds for any extension of C(t) whose Galois group is a perfect Lie group with Z_p or Zhat coefficients and which is ramified at finitely many places; not just the extension obtained by adjoining torsion of an elliptic curve, for instance, but the one you get from the torsion of an abelian variety of arbitrary dimension, or for that matter any other motive with sufficiently interesting Mumford-Tate group.

Question: Is Habegger's theorem true in this generality? For instance, if A/Q is an abelian variety, does Q(A(tors)) have the Bogomolov property?

Question: Is there any invariant of a number field which plays the role in the arithmetic setting that "spectral gap of the Laplacian" plays for a complex algebraic curve?

A word about Habegger's proof. We know that number fields are a lot more like F_q(t) than they are like C(t). And the analogue of the Abramovich-Zograf bound for modular curves over F_q is known as well, by a theorem of Poonen. The argument is not at all like
that of Abramovich and Zograf, which rests on analysis in the end. Rather, Poonen observes that modular curves in characteristic p have lots of supersingular points, because the square of Frobenius acts as a scalar on the l-torsion in the supersingular case. But having a lot of points gives you a lower bound on gonality! A curve with a degree-d map to P^1 has at most d(q+1) points, just because the preimage of each of the q+1 points of P^1(F_q) has size at most d. (You just never get too old or too sophisticated to whip out the Pigeonhole Principle at an opportune moment….)

Now I haven't studied Habegger's argument in detail yet, but look what you find right in the introduction:

The non-Archimedean estimate is done at places above an auxiliary prime number p where E has good supersingular reduction and where some other technical conditions are met…. In this case we will obtain an explicit height lower bound swiftly using the product formula, cf. Lemma 5.1. The crucial point is that supersingularity forces the square of the Frobenius to act as a scalar on the reduction of E modulo p.

Yup! There's no mention of Poonen in the paper, so I think Habegger came to this idea independently. Very satisfying! The hard case — for Habegger as for Poonen — has to do with the fields obtained by adjoining p-torsion, where p is the characteristic of the supersingular elliptic curve driving the argument. It would be very interesting to hear from Poonen and/or Habegger whether the arguments are similar in that case too!

## What I learned from Zhiwei Yun about Hilbert schemes

One knows, of course, that Hilbert schemes of smooth curves and smooth surfaces are nice, and Hilbert schemes of varieties of dimension greater than two are terrifying.

Zhiwei Yun was here giving a talk about his work with Davesh Maulik on Hilbert schemes of curves with planar singularities, and he made a point I'd never appreciated; it's not the dimension of the variety, but the dimension of its tangent space that really measures the terrifyingness of the Hilbert scheme. Singular curves C with planar singularities are not so bad — you still have a nice Hilbert scheme with an Abel-Jacobi map to the compactified Jacobian. But let C be the union of the coordinate axes in A^3 and all bets are off. Hideous extra high-dimensional components aplenty. If I had time to write a longer blog post today I would think about what the punctual Hilbert scheme at the origin looks like. But maybe one of you guys will just tell me.

Update: Jesse Kass explains that I am wrong about C; its Hilbert scheme has a non-smoothable component, but it doesn't have any components whose dimension is too large.

## Arithmetic Veech sublattices of SL_2(Z)

Ben McReynolds and I have just arXived a retitled and substantially revised version of our paper "Every curve is a Teichmuller curve," previously blogged about here. If you looked at the old version, you probably noticed it was very painful to try to read. My only defense is that it was even more painful to try to write.

With the benefit of a year's perspective and some very helpful comments from the anonymous referee at Duke, we more or less completely rewrote the paper, making it much more readable and even a bit shorter.

The paper is related to the question I discussed last week about "4-branched Belyi" — or rather the theorem of Diaz-Donagi-Harbater that inspired our paper is related to that question. The 4-branched Belyi question essentially asks whether every curve C in M_g is a Hurwitz space of 4-branched covers. (Surely not!) The DDH theorem shows that if you're going to prove C is not a Hurwitz curve, you can't do it by means of the birational isomorphism class of C alone; every 1-dimensional function field appears as the function field of a Hurwitz curve (though probably in very high genus).

## There's no 4-branched Belyi's theorem — right?

Much discussion on Math Overflow has not resolved the following should-be-easy question:

Give an example of a curve in ${\mathcal{M}}_g$ defined over $\bar{Q}$ which is not a family of 4-branched covers of P^1.

Surely there is one! But then again, you'd probably say "surely there's a curve over $\bar{Q}$ which isn't a 3-branched cover of P^1." But there isn't — that's Belyi's theorem.
\"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.7607356309890747, \"perplexity\": 861.5751163438339}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2013-20\/segments\/1368706635063\/warc\/CC-MAIN-20130516121715-00081-ip-10-60-113-184.ec2.internal.warc.gz\"}"}
null
null
Area of a triangle

How do we find the area of a triangle? It would be difficult to count the unit squares inside this triangle, since many of the squares have been cut into fractional parts by the sides of the triangle. So we need to find a sneaky way of counting all the squares inside the triangle.

To do this, let's double the area of the original triangle by adding a second triangle to the figure to create a parallelogram.

We have learned that multiplying the base and height of a parallelogram gives the area of the parallelogram, so the area of this parallelogram is 6 • 4 = 24 square units. Since we only needed the area of the triangle, we divide 24 by 2 to get the area of the triangle, which is 12 square units.

To put all this in one line, it would look like this:

Area = (area of parallelogram) ÷ 2 = (base • height) ÷ 2 = (6 • 4) ÷ 2 = 12 u^2

More often, the formula for the area of a triangle is written as

A = $\large~\frac{bh}{2}$

where b and h are the base and height of the triangle, respectively.

Find the area of this triangle. A = $\large~\frac{bh}{2}=\frac{6\cdot 3}{2}=\frac{18}{2}=9\,u^2$

Find the area of this triangle. A = $\large~\frac{bh}{2}=\frac{7\cdot 4}{2}=\frac{28}{2}=14\,u^2$

Self-Check

Question 1: Find the area of this triangle. A = $\large~\frac{bh}{2}=\frac{7\cdot 5}{2}=\frac{35}{2}=17.5\,u^2$

Question 2: Find the area of this triangle. A = $\large~\frac{bh}{2}=\frac{2\cdot 5}{2}=\frac{10}{2}=5\,u^2$

Question 3: Find the area of this triangle. A = $\large~\frac{bh}{2}=\frac{3\cdot 5}{2}=\frac{15}{2}=7.5\,u^2$
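The halved-parallelogram rule above is easy to check mechanically. Here is a small Python sketch (my own illustration, not part of the original lesson) that encodes A = bh/2 and reproduces the worked answers:

```python
def triangle_area(base: float, height: float) -> float:
    """Area of a triangle: half the area of the parallelogram
    obtained by doubling the triangle, i.e. A = (base * height) / 2."""
    return base * height / 2

# The worked examples from the lesson:
print(triangle_area(6, 4))  # 12.0  (the parallelogram example)
print(triangle_area(6, 3))  # 9.0
print(triangle_area(7, 4))  # 14.0
print(triangle_area(7, 5))  # 17.5
```

Any of the self-check problems can be verified the same way by plugging in the base and height read off the figure.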
## Introductory Discrete Mathematics

V. K. Balakrishnan

DOVER PUBLICATIONS, INC. New York

_Copyright_

Copyright © 1991 by V. K. Balakrishnan. All rights reserved.

_Bibliographical Note_

This Dover edition, first published in 1996, is an unabridged, corrected republication of the work first published by Prentice Hall, Englewood Cliffs, N.J., 1991.

_Library of Congress Cataloging-in-Publication Data_

Balakrishnan, V.K., date. Introductory discrete mathematics / V. K. Balakrishnan. p. cm. "An unabridged, corrected republication of the work first published by Prentice Hall, Englewood Cliffs, N.J., 1991"—T.p. verso. Includes bibliographical references (p. - ) and index. eISBN-13: 978-0-486-14038-4 1. Mathematics. 2. Computer science—Mathematics. I. Title. QA39.2.B357 1996 511—dc20 95-52384 CIP

Manufactured in the United States by Courier Corporation 69115205 www.doverpublications.com

To Geeta

**Contents**

**Preface**

**Set Theory and Logic**
**0.1** Introduction to Set Theory; **0.2** Functions and Relations; **0.3** Inductive Proofs and Recursive Definitions; **0.4** The Language of Logic; **0.5** Notes and References; **0.6** Exercises

**Combinatorics**
**1.1** Two Basic Counting Rules; **1.2** Permutations; **1.3** Combinations; **1.4** More on Permutations and Combinations; **1.5** The Pigeonhole Principle; **1.6** The Inclusion-Exclusion Principle; **1.7** Summary of Results in Combinatorics; **1.8** Notes and References; **1.9** Exercises

**Generating Functions**
**2.1** Introduction; **2.2** Ordinary Generating Functions; **2.3** Exponential Generating Functions; **2.4** Notes and References; **2.5** Exercises

**Recurrence Relations**
**3.1** Introduction; **3.2** Homogeneous Recurrence Relations; **3.3** Inhomogeneous Recurrence Relations; **3.4** Recurrence Relations and Generating Functions; **3.5** Analysis of Algorithms; **3.6** Notes and References; **3.7** Exercises

**Graphs and Digraphs**
**4.1** Introduction; **4.2** Adjacency Matrices and Incidence Matrices; **4.3** Joining in Graphs; **4.4** Reaching in Digraphs; **4.5** Testing Connectedness; **4.6** Strong Orientation of Graphs; **4.7** Notes and References; **4.8** Exercises

**More on Graphs and Digraphs**
**5.1** Eulerian Paths and Eulerian Circuits; **5.2** Coding and de Bruijn Digraphs; **5.3** Hamiltonian Paths and Hamiltonian Cycles; **5.4** Applications of Hamiltonian Cycles; **5.5** Vertex Coloring and Planarity of Graphs; **5.6** Notes and References; **5.7** Exercises

**Trees and Their Applications**
**6.1** Definitions and Properties; **6.2** Spanning Trees; **6.3** Binary Trees; **6.4** Notes and References; **6.5** Exercises

**Spanning Tree Problems**
**7.1** More on Spanning Trees; **7.2** Kruskal's Greedy Algorithm; **7.3** Prim's Greedy Algorithm; **7.4** Comparison of the Two Algorithms; **7.5** Notes and References; **7.6** Exercises

**Shortest Path Problems**
**8.1** Introduction; **8.2** Dijkstra's Algorithm; **8.3** Floyd-Warshall Algorithm; **8.4** Comparison of the Two Algorithms; **8.5** Notes and References; **8.6** Exercises

**What Is NP-Completeness?**
**A.1** Problems and Their Instances; **A.2** The Size of an Instance; **A.3** Algorithm to Solve a Problem; **A.4** Complexity of an Algorithm; **A.5** The "Big Oh" or the _O_(·) Notation; **A.6** Easy Problems and Difficult Problems; **A.7** The Class P and the Class NP; **A.8** Polynomial Transformations and NP-Completeness; **A.9** Coping with Hard Problems

**Bibliography**
**Answers to Selected Exercises**
**Index**

**Preface**

**_Introductory Discrete Mathematics_** is a concise text for a discrete mathematics course at an introductory level for undergraduate students in computer science and mathematics. The essential components of any beginning level discrete mathematics curriculum are combinatorics, graph theory with applications to some standard network optimization problems, and algorithms to solve these problems. In this book the stress is on these core components.
Both the Association for Computing Machinery and the Committee for the Undergraduate Program in Mathematics recognize the vital role of an undergraduate course in discrete methods that introduces the student to combinatorial mathematics and to algebraic and logical structures, focusing on the interplay between computer science and mathematics. The material in Chapter 0 serves as an introduction to the fundamental operations involving sets and the principle of mathematical induction. For those students familiar with the topics discussed here, this is essentially a chapter for review. The standard topics in combinatorics in any course on discrete mathematics are covered in Chapters 1 through 3. These topics include basic counting principles, permutations, combinations, the inclusion-exclusion principle, generating functions, recurrence relations, and an introduction to the analysis of algorithms. The role of applications is emphasized wherever possible. There are more than 200 exercises at the end of these chapters. Each counting problem requires its own special insight, and it is advantageous for the student to work out several of these problems.

The next three chapters contain a survey of graphs and digraphs. We begin by treating graphs and digraphs as models of real-world phenomena, giving several examples. The connectedness properties of graphs and digraphs are studied. Basic results and applications of graph coloring and of Eulerian and Hamiltonian graphs are presented, with a stress on applications to coding and other related problems. Two important problems in network optimization are the minimal spanning tree problem and the shortest distance problem; they are covered in the last two chapters. The approach to computing the complexity of algorithms in these chapters is more or less informal. A very brief nontechnical exposition of the theory of computational complexity and NP-completeness is outlined in the appendix.
It is possible to cover the topics presented in this book as a one-semester course by skipping some sections if necessary. Of course it is for the instructor to decide which sections she or he may skip. My chief acknowledgment is to the students who have studied discrete mathematics with me at the University of Maine during the past decade. They taught me how to teach. Their contributions and encouragement are implicit on every page. In particular, I would like to mention the names of Rajesh and Thananchayan. My scientific indebtedness in this project encompasses many sources including the articles and books listed in the bibliography. If there are errors or misleading results, the blame of course falls entirely on my shoulders. Finally, it goes without saying that I owe a great deal to the interest and encouragement my family has shown at every stage of this work. V. K. Balakrishnan **Set Theory and Logic** **_0.1 INTRODUCTION TO SET THEORY_** The concept of a set plays a very significant role in all branches of modern mathematics. In recent years set theory has become an important area of investigation because of the way in which it permeates so much of contemporary mathematical thought. A genuine understanding of any branch of modern mathematics requires a knowledge of the theory of sets for it is the common foundation of the diverse areas of mathematics. Sets are used to group distinct objects together. It is necessary that the objects which belong to a set are _well-defined_ in the sense that there should be no ambiguity in deciding whether a particular object belongs to a set or not. Thus, given an object, either it belongs to a given set or it does not belong to it. For example, the first five letters of the English alphabet constitute a set which may be represented symbolically as the set {a, b, c, d, e}. An arbitrary object belongs to this set if and only if it is one of these five letters. 
These five distinct objects can appear in any order in this representation. In other words, this set can also be represented by {d, b, a, e, c}. The objects that belong to a set need not possess a common property. Thus the number 4, the letter _x_ , and the word "book" can constitute a set _S_ which may be represented as _S_ = { _x_ , book, 4}. A particular day may be cold for one person and not cold for another, so the "collection of cold days in a month" is not a clearly defined set. Similarly, "the collection of large numbers" and "the collection of tall men" are also not sets. The term _object_ has been used here without specifying exactly what an object is. From a mathematical point of view, _set_ is a technical term that takes its meaning from the properties we assume that sets possess. This informal description of a set, based on the intuitive notion of an object, was first given by the German mathematician Georg Cantor (1845–1918) toward the end of the nineteenth century, and the theory of sets based on his version is known as _naive set theory_. In Cantor's own words, "a set is bringing together into a whole of definite well-defined objects of our perception and these objects are the elements of the set." The sets considered in this book can all be viewed in this framework of Cantor's theory. Thus a **set** is a collection of distinct objects. The objects in a set are called the **elements** or **members** of the set. If _x_ is an element of a set _A_ , we say that _x_ **belongs** to _A_ , and this is expressed symbolically as _x_ ∈ _A_. The notation _y_ ∉ _A_ denotes that _y_ is not an element of the set _A_. **_Finite and Infinite Sets_** A set is **finite** if the number of elements in it is finite. Otherwise, it is an **infinite** set. The set of positive integers less than 100 is a finite set, whereas the set of all positive integers is an infinite set. 
If _X_ is a finite set, the **cardinality** of _X_ is the number of elements that belong to _X_ , and this nonnegative integer is denoted by _N_ ( _X_ ). A set of cardinality 1 is called a **singleton set**. If a set is finite and if its cardinality is not too large, we can describe it by enumerating all its elements. For example, the representation _S_ = {a, e, i, o, u} describes the set of all vowels of the English alphabet. If the cardinality is too large, this enumerative method is not very convenient. In some cases, if there is no ambiguity we make this enumerative description more concise. For example, the set _D_ of positive integers between 25 and 123 can be represented as _D_ = {25, 26, 27, · · · , 121, 122, 123}. A better way of representing _D_ is by stating the property for its membership. An object _n_ is an element of this set _D_ if and only if _n_ is a positive integer that is at least 25 and at most 123. In other words, we write _D_ = { _n_ : _n_ is a positive integer, 24 < _n_ < 124} The infinite set _N_ of all natural numbers can be represented unambiguously as _N_ = {1, 2, 3, . . .} or as _N_ = { _n_ : _n_ is a natural number} by stating its membership criterion. The notation of representing a set by stating the criteria of its membership as described above is called the **set-builder notation**. **_Subsets of a Set and the Empty Set_** A set _P_ is a **subset** of a set _Q_ if every element of _P_ is an element of _Q_. We use the notation _P_ ⊂ _Q_ to denote that _P_ is a subset of _Q_. A subset of a subset is no doubt a subset. When _P_ is a subset of _Q_ , we may say that _Q_ **contains** _P_ and that _P_ **is contained in** _Q_. By our definition every set is a subset of itself. The set _P_ is a **proper subset** of _Q_ if (1) _P_ is a subset of _Q_ and (2) there is at least one element of _Q_ that is not an element of _P_. The set of positive integers is a proper subset of the set of all real numbers. 
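The set-builder description of _D_ and the subset relation above translate directly into executable form. The following sketch uses Python set comprehensions; the variable names are illustrative, not from the text:

```python
# Set-builder notation: D = {n : n is a positive integer, 24 < n < 124},
# written as a Python set comprehension (illustrative names).
D = {n for n in range(1, 200) if 24 < n < 124}

# Cardinality N(D): the number of elements of the finite set D.
print(len(D))        # 99, i.e., the integers 25 through 123

# Subset and proper-subset tests: P ⊂ D holds since every element of P is in D.
P = {25, 50, 100}
print(P <= D)        # True: P is a subset of D
print(D < D)         # False: no set is a proper subset of itself
```

Python's `<=` operator on sets is exactly the subset test of this section, and `<` is the proper-subset test.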
If _A_ is a subset of _B_ , the **relative complement of** _A_ **in** _B_ is the set of elements in _B_ that are not elements of _A_. The relative complement of _A_ in _B_ is denoted by _B_ – _A_ and it can be described by its membership criterion as _B_ – _A_ = { _x_ : _x_ ∈ _B_ and _x_ ∉ _A_ }. Two sets are **disjoint** if they have no elements in common. On the other hand, two sets are **equal** if they have the same elements. We write _X_ = _Y_ when the sets _X_ and _Y_ are equal. Obviously, two sets are equal if and only if each is a subset of the other. For instance, if _X_ = { _r_ : _r_ is a root of the equation _x_ 2 – 5 _x_ \+ 6 = 0} and _Y_ = {2, 3}, then _X_ = _Y_. A set is **empty** if it has no elements. A fact emerges that some people find surprising: there is only one empty set. (Suppose that _E_ and _F_ are two empty sets. If they are not the same, they are not equal. So one of them should have at least one element that does not belong to the other. So one of the two sets is not empty. This contradicts the assumption that both _E_ and _F_ are empty.) The unique **empty set** (or **null set** ) is denoted by ϕ. The fact that the empty set is a subset of any set is established by "vacuous reasoning": If it were not a subset of a given set _S_ , there should be at least one element in the empty set that is not in _S_. In particular, there should be at least one element in the empty set, which is a contradiction. Of course, a set is empty if and only if its cardinality is zero. In some cases we will be considering sets that are all subsets of a set _U_ which is called the **universal set**. For example, if the sets under consideration are _A, B_ , and _C_ , where _A_ = {3, 8, 6, 7, _x_ }, _B_ = {8, 4, _y, t_ , 5}, and _C_ = {3, 4, _x, t_ , 9}, then any set containing the set _D_ = {3, 8, 6, 7, _x_ , 4, _y, t_ , 5, 9} can be considered as a universal set as far as _A, B, C_ , and _D_ are concerned. 
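The relative complement, disjointness, and the empty set can likewise be checked mechanically. A minimal sketch with Python's built-in set operations (the example sets are chosen for illustration):

```python
# Relative complement B - A = {x : x in B and x not in A}.
B = {1, 2, 3, 4, 5, 6}
A = {2, 4, 6}
print(B - A)                    # the set {1, 3, 5}

# Two sets are disjoint if their intersection is empty.
print(A.isdisjoint(B - A))      # True

# The empty set is a subset of every set and has cardinality zero.
empty = set()
print(empty <= A, len(empty))   # True 0
```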
Once the universal set _U_ is fixed, the relative complement of a subset _A_ in _U_ is called the **absolute complement of** _A_ and is denoted by _A c_. Thus if the universe is the set of all nonnegative integers and _E_ is the set of all even numbers, then _E c_ is the set of all odd numbers. Observe that the absolute complement of the absolute complement of any set _A_ is the set _A_ itself. **_The Power Set of a Set_** A set can have other sets as its elements. For instance, the set _S_ consisting of the letter _x_ , the set { _a, b_ } and the number 4 is represented as _S_ = { _x_ , { _a, b_ }, 4}. A set of subsets is also known as a **class** or **family** of sets. The class of all subsets of a given set _X_ is called the **power set** of _X_ and is denoted by _P_ ( _X_ ). For example, if _X_ = {1, 2}, the elements of _P_ ( _X_ ) are the empty set, the singleton set {1}, the singleton set {2}, and the set _X_. Thus _P_ ( _X_ ) = {ϕ, {1}, {2}, {1, 2}}. **_Cartesian Products of Sets_** The **ordered** _n_ - **tuple** ( _a_ 1, _a_ 2, _a_ 3, . . . , _a n_) is a collection of the _n_ objects _a_ 1, _a_ 2, . . . , _a n_ in which _a_ 1 is the first element, _a_ 2 is the second element, . . . , and _a n_ is the nth element. In an ordered _n_ -tuple, the elements need not be distinct. A set with _n_ elements is thus an _unordered n_ -tuple of _n_ distinct elements, since in a set the order in which the elements are considered is irrelevant. An ordered 2-tuple is called an **ordered pair**. Two ordered _n_ -tuples ( _a_ 1, _a_ 2, . . . , _a n_) and ( _b_ 1, _b_ 2, . . . , _b n_) are said to be equal if _a i_ = _b i_ for _i_ = 1, 2, . . . , _n_. The set of all ordered pairs ( _a, b_ ), where _a_ is an element of a set _A_ and _b_ is an element of a set _B_ , is called the **cartesian product** of _A_ and _B_ and is denoted by _A_ × _B_. 
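The power set of a small finite set can be enumerated directly. This sketch builds it with `itertools.combinations`; the helper name `power_set` is mine, not the book's:

```python
from itertools import combinations

def power_set(X):
    """Return the class P(X) of all subsets of X, as frozensets."""
    elems = list(X)
    return [frozenset(c)
            for k in range(len(elems) + 1)
            for c in combinations(elems, k)]

PX = power_set({1, 2})
print(len(PX))              # 4: the empty set, {1}, {2}, and {1, 2}
print(frozenset() in PX)    # True: the empty set belongs to every power set
```

In general a set with _n_ elements has 2^_n_ subsets, which the loop over subset sizes 0 through _n_ makes visible.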
In other words, _A_ × _B_ = {( _a, b_ ) : _a_ ∈ _A_ and _b_ ∈ _B_ } For example, if _A_ = {1, 2} and _B_ = {1, 3}, then the cartesian product _A_ × _B_ is the set {(1, 1), (1, 3), (2, 1), (2, 3)}. More generally, the cartesian product of the sets _A_ 1, _A_ 2, . . . , _A n_ denoted by _A_ 1 × _A_ 2 × · · · × _A n_ is the set of all ordered _n_ -tuples of the form ( _a_ 1, _a_ 2, . . . , _a n_), where _a i_ is any element of _A i_ ( _i_ = 1, 2, . . . , _n_ ). **_Intersections and Unions of Sets_** There are two important constructions that can be applied to subsets of a set to yield new subsets. Suppose that _A_ and _B_ are two subsets of a set _X_. The set consisting of all elements common to both _A_ and _B_ is called the **intersection** of _A_ and _B_ and is denoted by _A_ ∩ _B_. Obviously, the intersection of a set and the empty set is the empty set and the intersection of any set _A_ and _A_ is _A_. Also, the intersection of a set and its absolute complement is empty since no element can be simultaneously in _A_ and not in _A_. Moreover, it follows from the definition that set intersection has the commutative property: The intersection of _A_ and _B_ is equal to the intersection of _B_ and _A_. The set consisting of all elements that belong either to _A_ or to _B_ or to both _A_ and _B_ is called the **union** of _A_ and _B_ and is denoted by _A_ ∪ _B_. The union of a set _A_ and the empty set is the set _A_ and the union of _A_ and _A_ is also _A_. Set union also is commutative: _A_ ∪ _B_ = _B_ ∪ _A_. More generally, the intersection of a class of sets is the set of elements (if any) that belong to every set of the class. The union of a class of sets is the set of those elements that belong to at least one set in the class. It is an immediate consequence of the definition that both set intersection and set union possess the associative property: (1) _A_ ∩ ( _B_ ∩ _C_ ) = ( _A_ ∩ _B_ ) ∩ _C_ and (2) _A_ ∪ ( _B_ ∪ _C_ ) = ( _A_ ∪ _B_ ) ∩ _C_. 
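The cartesian product above and the commutative and associative laws just stated can be illustrated with `itertools.product`; the example sets follow the text, while the variable names are mine:

```python
from itertools import product

A, B, C = {1, 2}, {1, 3}, {2, 3}

# A × B for A = {1, 2} and B = {1, 3}: the four ordered pairs
# (1, 1), (1, 3), (2, 1), (2, 3), as in the example above.
AxB = set(product(A, B))
print(len(AxB))             # 4

# Commutativity of intersection and union, associativity of both.
assert A & B == B & A
assert A | B == B | A
assert A & (B & C) == (A & B) & C
assert A | (B | C) == (A | B) | C
assert A & set() == set()   # intersection with the empty set is empty
print("laws verified on this example")
```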
So the former can be written as _A_ ∩ _B_ ∩ _C_ and the latter as _A_ ∪ _B_ ∪ _C_ unambiguously. Two sets are disjoint if and only if their intersection is empty. A class of sets is **pairwise disjoint** if the intersection of any two sets in the class is empty. A class _C_ ( _X_ ) of subsets of a set _X_ is called a **partition** of _X_ if (1) _C_ ( _X_ ) is pairwise disjoint, and (2) the union of the sets in _C_ ( _X_ ) is the set _X_. For instance, the class {{2, 4}, {1, 3, 5}, {6}} is a partition of the set {1, 2, 3, 4, 5, 6}. **_Venn Diagrams of Sets_** A very useful and simple device to represent sets graphically for illustrating relationships between them is the **Venn diagram** , named after the English logician John Venn (1834–1923). In a Venn diagram, the universal set _U_ that contains all the objects under consideration is usually represented by a rectangle, and inside this rectangle subsets of the universal set are represented by circles, rectangles, or some other geometrical figures. In the Venn diagram shown in Figure 0.1.1, we have three sets _A_ , _B_ , and _C_ which are subsets of the universal set _U_. The drawing of the ellipse that represents the set _A_ inside the ellipse that represents the set _B_ indicates that _A_ is a subset of _B_. The fact that _A_ and _C_ are disjoint is made clear by representing them by two nonintersecting ellipses. The fact that the intersection of _B_ and _C_ is nonempty is made obvious by showing that the two ellipses which represent these two sets overlap each other. The region in the rectangle (which represents the universal set) that is outside the ellipses that represent the three sets is the absolute complement of the union of these three sets. **FIGURE 0.1.1** **_Distributive Laws and De Morgan's Laws_** We conclude this section on sets with the following two theorems related to set operations involving intersections, unions, and taking absolute complements. 
These theorems can easily be established by drawing Venn diagrams. However, it is instructive to prove them without the aid of Venn diagrams, for in many cases it may not be possible to represent the sets under consideration by such diagrams, as we will see later in the book. **THEOREM 0.1.1 (Distributive Laws)** (a) _A_ ∩ ( _B_ ∪ _C_ ) = ( _A_ ∩ _B_ ) ∪ ( _A_ ∩ _C_ ). (b) _A_ ∪ ( _B_ ∩ _C_ ) = ( _A_ ∪ _B_ ) ∩ ( _A_ ∪ _C_ ). **_Proof_ :** (a) One way of showing that two sets are equal is by establishing that each is contained in the other. Let _t_ be an element of _A_ ∩ ( _B_ ∪ _C_ ). Then _t_ is an element of _A_ and _t_ is either an element of _B_ or an element of _C_. In either case, _t_ is an element of _A_ ∩ _B_ or of _A_ ∩ _C_. In other words, _A_ ∩ ( _B_ ∪ _C_ ) is a subset of ( _A_ ∩ _B_ ) ∪ ( _A_ ∩ _C_ ). Next, suppose that _t_ is an element of ( _A_ ∩ _B_ ) ∪ ( _A_ ∩ _C_ ). This implies that _t_ is either in _A_ ∩ _B_ or in _A_ ∩ _C_. So _t_ is necessarily in _A_ and it is in at least one of the two sets _B_ or _C_. Thus _t_ is in _A_ and also in either _B_ or in _C_. In other words, _t_ belongs to the intersection of _A_ and _B_ ∪ _C_. Thus ( _A_ ∩ _B_ ) ∪ ( _A_ ∩ _C_ ) is a subset of _A_ ∩ ( _B_ ∪ _C_ ). (b) This is left as an exercise. **THEOREM 0.1.2 (De Morgan's Laws)** (a) ( _A_ ∪ _B_ ) _c_ = _A c_ ∩ _B c_. (b) ( _A_ ∩ _B_ ) _c_ = _A c_ ∪ _B c_. **_Proof_ :** (a) Let _t_ be an element of ( _A_ ∪ _B_ ) _c_. Then _t_ belongs to neither _A_ nor _B_. So _t_ is necessarily in both _A c_ and _B c_. Thus ( _A_ ∪ _B_ ) _c_ is a subset of the intersection of _A c_ and _B c_. On the other hand, if _t_ is in the intersection of _A c_ and _B c_, it is neither in _A_ nor in _B_. This implies that _t_ is not in the union of _A_ and _B_. Hence the intersection of _A c_ and _B c_ is contained in the complement of _A_ ∪ _B_. (b) This is left as an exercise. 
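Both theorems can be spot-checked on concrete subsets of a universal set. Checking examples is of course no substitute for the element-chasing proofs above, but it mirrors them; the sets `U`, `A`, `B`, `C` below are illustrative:

```python
# A small check of the distributive laws and De Morgan's laws on
# concrete subsets of a universal set U (illustrative sets only).
U = set(range(10))
A, B, C = {1, 2, 3}, {2, 3, 4}, {3, 5}

def complement(S):
    """Absolute complement of S relative to the universal set U."""
    return U - S

# Distributive laws (Theorem 0.1.1)
assert A & (B | C) == (A & B) | (A & C)
assert A | (B & C) == (A | B) & (A | C)

# De Morgan's laws (Theorem 0.1.2)
assert complement(A | B) == complement(A) & complement(B)
assert complement(A & B) == complement(A) | complement(B)
print("all four identities hold on this example")
```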
**_0.2 FUNCTIONS AND RELATIONS_** In this section a brief review of the basic ideas involving functions and relations is presented. The concept of a function is pivotal in mathematics. **_The Domain and the Range of a Function_** Let _X_ and _Y_ be two nonempty sets. A **function** _f_ **from** _X_ **into** _Y_ , denoted by _f_ : _X_ → _Y_ , is a rule that assigns to every element in _X_ a _unique_ element in _Y_. The set _X_ is the **domain** of the function and the set _Y_ is its **codomain**. If _y_ is the unique element in _Y_ assigned by the function _f_ to the element _x_ , we say that _y_ is the **image** of _x_ and _x_ is a **preimage** of _y_ and we write _y_ = _f_ ( _x_ ). The set _f_ ( _A_ ) of all images of the elements of a subset _A_ of _X_ is called the **image of the set** _A_. The set _f_ ( _X_ ) is called the **range** of the function. The range of a function is a subset of its codomain. If _y_ is an element in the range of _f_ , the set of all the preimages of _y_ is denoted by _f_ –1( _y_ ). If _A_ is a subset of the range _f_ ( _X_ ), the **inverse image** of the set _A_ is the set { _x_ : _x_ is in _X_ and _f_ ( _x_ ) is in _A_ }, which is denoted by _f_ –1( _A_ ). If _f_ is a function from _X_ to _Y_ , it is customary to say that _f_ **maps** the set _X_ into _Y_. **Example 0.2.1** Let _R_ be the set of all real numbers. (a) Let _f_ : _R_ → _R_ be the function that assigns the real number _x_ \+ 1 to each real number _x_. In other words, _f_ ( _x_ ) = _x_ \+ 1. Here the domain, codomain, and range of _f_ are all _R_. (b) Let _f_ : _R_ → _R_ be the function defined by _f_ ( _x_ ) = _x_ 2. So every real number is assigned to its square. Here the domain and codomain of _f_ are _R_ and its range is the set of all nonnegative numbers. **Example 0.2.2** Let _A_ = { _a, b, c, d_ } and _B_ = {1, 2, 3, 4}. Then the rule _f_ defined by _f_ ( _a_ ) = 1, _f_ ( _b_ ) = 1, _f_ ( _c_ ) = 4, and _f_ ( _d_ ) = 2 is a function _f_ from _A_ to _B_. 
The range of _f_ is {1, 2, 4}, which is a proper subset of its codomain _B_. **_Surjections, Injections, and Bijections_** A function _f_ : _X_ → _Y_ is called a **surjection** if _f_ ( _X_ ) = _Y_ and we say that _f_ is a function from _X_ **onto** _Y_. A function _f_ : _X_ → _Y_ is called an **injection** (or a **one-to-one mapping** ) if two different elements in _X_ have two different images in _Y_. A function _f_ : _X_ → _Y_ is a **bijection** if it is both a surjection and an injection. The bijection from a set _X_ onto itself that maps every element in the set into itself is called the **identity mapping** _i x_ **on** _X_. Two sets are **equivalent** if there is a bijection from one to the other. It is evident that two _finite_ sets are equivalent if and only if they both have the same cardinality. **Example 0.2.3** (a) Let _X_ = { _a, b, c_ }, _Y_ = { _p, q_ }, and _f_ : _X_ → _Y_ , where _f_ ( _a_ ) = _p_ , _f_ ( _b_ ) = _q_ , and _f_ ( _c_ ) = _p_. Then _f_ is a surjection and _f_ maps _X_ onto _Y_. Here _f_ is not an injection. (b) If _X_ = { _a, b, c_ }, _Y_ = { _p, q, r, s_ } and if _g_ ( _a_ ) = _p_ , _g_ ( _b_ ) = _q_ , _g_ ( _c_ ) = _r_ , then _g_ is an injection but not a surjection. The range _g_ ( _X_ ) = { _p, q, r_ } is a proper subset of the codomain _Y_. (c) If _X_ = { _a, b, c_ }, _Y_ = { _p, q, r_ } and if _h_ ( _a_ ) = _p_ , _h_ ( _b_ ) = _q_ , and _h_ ( _c_ ) = _r_ , then _h_ is a bijection. (d) If _R_ is the set of real numbers and _f_ : _R_ → _R_ the function defined by _f_ ( _x_ ) = _x_ 2, then _f_ is neither a surjection, because no negative number has a preimage, nor an injection, because the image of _x_ and the image of – _x_ are equal. **_The Inverse of a Function_** Let _f_ : _X_ → _Y_ be a bijection. The **inverse function** of _f_ is the bijection _f_ –1: _Y_ → _X_ defined as follows: For each _y_ in _Y_ , we find that unique element _x_ in _X_ such that _f_ ( _x_ ) = _y_. Then we define _x_ = _f_ –1( _y_ ). 
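For finite functions, the three definitions above reduce to simple counting tests. A sketch that classifies a function given as a Python dict (the helper names are mine; the examples follow Example 0.2.3):

```python
# Classifying a finite function f : X -> Y, represented as a dict.
def is_injection(f, X, Y):
    """Injective: distinct elements of X have distinct images."""
    images = [f[x] for x in X]
    return len(set(images)) == len(images)

def is_surjection(f, X, Y):
    """Surjective: the range f(X) equals the codomain Y."""
    return {f[x] for x in X} == set(Y)

def is_bijection(f, X, Y):
    return is_injection(f, X, Y) and is_surjection(f, X, Y)

# Example 0.2.3(c): h is a bijection from {a, b, c} onto {p, q, r}.
h = {"a": "p", "b": "q", "c": "r"}
print(is_bijection(h, {"a", "b", "c"}, {"p", "q", "r"}))   # True

# Example 0.2.3(a): f is a surjection onto {p, q} but not an injection.
f = {"a": "p", "b": "q", "c": "p"}
print(is_surjection(f, {"a", "b", "c"}, {"p", "q"}),
      is_injection(f, {"a", "b", "c"}, {"p", "q"}))        # True False
```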
A function _f_ : _X_ → _Y_ is said to be **invertible** whenever its inverse exists. **Example 0.2.4** If _X_ = {1, 2}, _Y_ = { _p, q_ }, _f_ (1) = _p_ , and _f_ (2) = _q_ , then _f_ is a bijection from _X_ onto _Y_ and its inverse _f_ –1 is the bijection from _Y_ onto _X_ that maps _p_ into 1 and _q_ into 2. A function _f_ whose domain _X_ and codomain _Y_ are subsets of the set _R_ of real numbers is **strictly increasing** if _f_ ( _x_ ) < _f_ ( _y_ ) whenever _x_ < _y_ and **strictly decreasing** if _f_ ( _x_ ) > _f_ ( _y_ ) whenever _x_ < _y_. It follows from the definition that both strictly increasing functions and strictly decreasing functions are injections. **_Compositions of Functions_** Let _g_ : _X_ → _Y_ and _f_ : _Y_ → _Z_. The **composition** of _f_ and _g_ , denoted by _f_ ○ _g_ , is a function from _X_ to _Z_ defined by ( _f_ ○ _g_ )( _x_ ) = _f_ ( _g_ ( _x_ )). In other words, the function _f_ ○ _g_ assigns to an element _x_ in _X_ that unique element assigned by _f_ to _g_ ( _x_ ). **Example 0.2.5** (a) Let _X_ = { _a, b, c_ }, _Y_ = { _p, q, r, s_ }, and _Z_ = {1, 2, 3}. Let _g_ ( _a_ ) = _p_ , _g_ ( _b_ ) = _q_ , and _g_ ( _c_ ) = _r_ , so that _g_ ( _X_ ) = { _p_ , _q_ , _r_ }. Then if _f_ : _g_ ( _X_ ) → _Z_ is defined by _f_ ( _p_ ) = 1, _f_ ( _q_ ) = 2, and _f_ ( _r_ ) = 3, we have ( _f_ ○ _g_ )( _a_ ) = 1, ( _f_ ○ _g_ )( _b_ ) = 2, and ( _f_ ○ _g_ )( _c_ ) = 3. (b) Let _f_ and _g_ be functions from the set of integers to the set of integers. If _f_ ( _x_ ) = 4 _x_ \+ 3 and _g_ ( _x_ ) = 2 _x_ \+ 5, then ( _f_ ○ _g_ )( _x_ ) = _f_ ( _g_ ( _x_ )) = _f_ (2 _x_ \+ 5) = 4(2 _x_ \+ 5) + 3 = 8 _x_ \+ 23 ( _g_ ○ _f_ )( _x_ ) = _g_ ( _f_ ( _x_ )) = _g_ (4 _x_ \+ 3) = 2(4 _x_ \+ 3) + 5 = 8 _x_ \+ 11 If _f_ is a bijection from _X_ onto _Y_ , its inverse is a bijection from _Y_ to _X_. If _y_ = _f_ ( _x_ ), then _f_ –1( _y_ ) = _x_. Thus _f_ –1( _f_ ( _x_ )) = _f_ –1( _y_ ) = _x_ and _f_ ( _f_ –1( _y_ )) = _f_ ( _x_ ) = _y_. 
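Composition and the inverse of a bijection can be sketched directly, following Example 0.2.5(b); the helper name `compose` is mine:

```python
# Composition (f ∘ g)(x) = f(g(x)), following Example 0.2.5(b).
def compose(f, g):
    return lambda x: f(g(x))

f = lambda x: 4 * x + 3
g = lambda x: 2 * x + 5

fg = compose(f, g)   # (f ∘ g)(x) = 8x + 23
gf = compose(g, f)   # (g ∘ f)(x) = 8x + 11
print(fg(1), gf(1))  # 31 19

# For a finite bijection given as a dict, the inverse swaps the pairs,
# and composing the inverse with f yields the identity mapping.
fb = {1: "p", 2: "q"}
f_inv = {v: k for k, v in fb.items()}
print(all(f_inv[fb[x]] == x for x in fb))   # True
```

Note that `fg` and `gf` differ, illustrating that composition of functions is not commutative.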
In other words, the composition of a bijection from _X_ onto _Y_ and its inverse is the identity mapping from _Y_ onto itself. **_Sequences, Strings, and Languages_** A **sequence** is a function whose domain is a set of _consecutive_ integers. If the domain _X_ is a finite set of _n_ integers, we may take _X_ = {1, 2, 3, . . . , _n_ } or {0, 1, 2, . . . , _n_ – 1}. Otherwise, we may take _X_ as the set of natural numbers or as the set of nonnegative integers. If _f_ : _X_ → _Y_ is a sequence, the image _f_ ( _i_ ) of the integer _i_ is sometimes written as _f i_ and is called the _i_ **th term of the sequence**. Notice that in representing a sequence _s_ , the _order_ in which the images under _s_ appear is important. This is not so in the case of a function. For example, if _f_ is the function from _X_ = {1, 2, 3} to _Y_ = { _p, q_ }, where _f_ (1) = _f_ (2) = _p_ and _f_ (3) = _q_ , the collection of the images of the three elements of _X_ under _f_ can be represented as _p, p, q_ in any order. But the _sequence f_ is represented as ( _f_ (1) _f_ (2) _f_ (3)) or as ( _ppq_ ). A sequence whose domain is a finite set of _n_ consecutive integers and whose codomain is _Y_ defines a **string of length** _n_ **in** _Y_ or **word of length** _n_ **in** _Y_. In fact, any such sequence is an _n_ -tuple. **Example 0.2.6** (a) Let _X_ = {1, 2, 3, . . .} and _R_ the set of real numbers. Consider the sequence _f_ : _X_ → _R_ defined by _f_ ( _n_ ) = 1/ _n_. Then the _n_ th term of the sequence denoted by _f n_ is the image _f_ ( _n_ ) of the element _n_ in _X_. This sequence is also denoted by {1/ _n_ : _n_ = 1, 2, 3, . . .}. (b) Let _X_ = {1, 2, 3, 4, 5} and _Y_ = { _a, b, c, d_ } and consider the sequence _f_ : _X_ → _Y_ defined by _f_ (1) = _a_ , _f_ (2) = _b_ , _f_ (3) = _a_ , _f_ (4) = _c_ , and _f_ (5) = _b_. Then this sequence is the string _abacb_ of length 5 in _Y_ which is also the 5-tuple ( _abacb_ ). 
Any mapping _f_ from _A_ × _A_ into _A_ is called a **binary operator on** _A_. For instance, if _R_ is the set of real numbers, the mapping _f_ : _R_ × _R_ → _R_ defined by _f_ ( _a, b_ ) = _a_ \+ _b_ (which is, in fact, the addition operator) is an example of a binary operator on _R_. If _S_ is any nonempty set, we denote by _S n_ the set of all strings of length _n_ in _S_ and by _S_ * the set of all strings (including the null string with no elements). Any subset of _S_ * is called a **language over the alphabet** _S_. The union and intersection of two languages over an alphabet are also languages over the same alphabet. If _u_ = ( _u_ 1 _u_ 2 _u_ 3 · · · _u m_) and _v_ = ( _v_ 1 _v_ 2 · · · _v n_) are two strings of lengths _m_ and _n_ , respectively, in _S_ * then the **concatenation** of _u_ and _v_ is the string _uv_ in _S_ * of length _m_ \+ _n_ defined as _uv_ = ( _u_ 1 _u_ 2 _u_ 3 · · · _u mv_1 _v_ 2 · · · _v n_). The mapping _c_ : _S_ * × _S_ * → _S_ * defined by _c_ ( _u, v_ ) = _uv_ where _uv_ is the concatenation of _u_ and _v_ is a binary operator on _S_ *. **_Relations_** We conclude this section with a brief comment on the concept of a "relation," which is more general than that of a function. If _A_ and _B_ are two sets, any subset of _A_ × _B_ is called a **relation from** _A_ **to** _B_. For example, if _A_ = { _a, b, c_ } and _B_ = {1, 2, 3, 4}, then _R_ = {( _a_ , 2), ( _a_ , 3), ( _b_ , 4), ( _c_ , 3)} is a relation from _A_ to _B_. By definition, in each ordered pair in a relation from _A_ to _B_ , the first element is an element in _A_ and the second element is an element in _B_. A function from _A_ to _B_ therefore defines a special kind of relation _R_ from _A_ to _B_ such that whenever ( _a, b_ ) and ( _a_ , _b_ ′) are in the relation _R_ , then _b_ = _b_ ′. In other words, _f_ : _A_ → _B_ defines the relation {( _x_ , _f_ ( _x_ )) : _x_ is in _A_ }, which is a subset of _A_ × _B_. 
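Concatenation as a binary operator on _S_ *, and a function viewed as a relation, can both be sketched with ordinary Python strings and tuples (the alphabet and example values are illustrative):

```python
# Concatenation as a binary operator c : S* × S* -> S*, sketched with
# Python strings over the alphabet S = {'a', 'b'}.
def concat(u, v):
    return u + v

u, v = "aba", "bb"
w = concat(u, v)
print(w)                           # ababb
print(len(w) == len(u) + len(v))   # True: |uv| = m + n

# A function f : A -> B defines the relation {(x, f(x)) : x in A},
# which is a subset of A × B.
f = {"a": 2, "b": 4, "c": 3}
graph_of_f = {(x, y) for x, y in f.items()}
print(len(graph_of_f))             # 3 ordered pairs, one per element of A
```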
A relation _R_ from a finite set _A_ with _m_ elements to a finite set _B_ with _n_ elements can be represented pictorially by a bipartite graph _G_ with _m_ vertices on the left side (corresponding to the _m_ elements of _A_ ) and _n_ vertices on the right side (corresponding to the _n_ elements of _B_ ) as in Figure 0.2.1. If ( _a_ , _p_ ) is an element in the relation _R_ , an arrow is drawn from the vertex _a_ on the left side to the vertex _p_ on the right side. For example, the graph in Figure 0.2.1 represents the relation _R_ = {( _a_ , _p_ ), ( _b, p_ ), ( _c_ , _r_ )} from the set _A_ = { _a, b, c_ } to the set _B_ = { _p, q, r_ }. A relation from a set _A_ to itself is called a **relation on** _A_. An informative and useful way to represent a relation _R_ on a finite set _A_ with _n_ elements is by drawing a directed graph with _n_ vertices representing the _n_ elements of the set and drawing an arrow from vertex _u_ to vertex _v_ if and only if the ordered pair ( _u, v_ ) is in the relation. If ( _u, u_ ) is in the relation, a loop from _u_ to _u_ is drawn. For example, if _R_ = {( _a, a_ ), ( _a, b_ ), ( _b, c_ ), ( _c, b_ )} is a relation on the set _A_ = { _a, b, c_ }, this relation _R_ can be represented by the directed graph shown in Figure 0.2.2. **FIGURE 0.2.1** **FIGURE 0.2.2** A relation _R_ on _A_ is **reflexive** if ( _a, a_ ) is an element of _R_ for every _a_ in _A_ , it is **symmetric** if ( _a, b_ ) is in _R_ whenever ( _b, a_ ) is in _R_ , and it is **transitive** if ( _a, c_ ) is in _R_ whenever ( _a, b_ ) and ( _b, c_ ) are in _R_. A relation _R_ on a set is **antisymmetric** if whenever _a_ and _b_ are distinct, then ( _a, b_ ) is in the relation _R_ only when ( _b, a_ ) is not in the relation _R_. Finally, the relation _R_ is said to have the **comparison property** if either ( _a, b_ ) or ( _b, a_ ) is in _R_ for all _a_ and _b_ in _A_. 
Suppose that _R_ is a relation on a finite set _A_ and let _G_ be the directed graph that represents this relation. Then _R_ is reflexive if and only if there is a loop at every vertex of _G_ and _R_ is symmetric if and only if whenever there is an arrow from _a_ to _b_ , there is an arrow from _b_ to _a_. Furthermore, _R_ is transitive if and only if whenever there is an arrow from _a_ to _b_ and an arrow from _b_ to _c_ there is an arrow from _a_ to _c_. **Example 0.2.7** Let _A_ = { _a, b, c_ } and let _R_ be a relation on _A_. (a) _R_ = {( _a, b_ ), ( _b, a_ ), ( _a, a_ ), ( _b, b_ ), ( _b, c_ ), ( _c, c_ )} is reflexive because ( _u, u_ ) is in _R_ for all _u_ in _A_. Here ( _a, a_ ), ( _b, b_ ), and ( _c, c_ ) are in _R_. These three elements will represent loops at the three vertices of the corresponding digraph. (b) _R_ = {( _a, b_ ), ( _b, a_ ), ( _c, c_ )} is symmetric because whenever ( _u, v_ ) is in _R_ for any _u_ and any _v_ in _A_ , then ( _v, u_ ) also is in _R_. Here both ( _a, b_ ) and ( _b, a_ ) as well as ( _c, c_ ) are in _R_. In the digraph that represents this relation there will be arrows from _a_ to _b_ and from _b_ to _a_. There will be a loop at the vertex _c_. (c) _R_ = {( _a, b_ ), ( _b, c_ ), ( _a, c_ ), ( _b, b_ )} is transitive. (d) _R_ = {( _a, c_ ), ( _b, b_ ), ( _a, b_ ), ( _a, a_ )} is antisymmetric. (e) If _R_ = {( _a, c_ ), ( _b, b_ ), ( _c, c_ ), ( _a, b_ ), ( _c, b_ )}, then _R_ has the comparison property. **_Equivalence Relations_** A relation _S_ on a set _A_ is called an **equivalence relation on** _A_ if _S_ is reflexive, symmetric, and transitive. For example, if _S_ = {( _a_ , _b_ ) : _a, b_ are real, _a_ = _b_ }, then _S_ is obviously an equivalence relation on the set of real numbers. **Example 0.2.8** (a) Let _A_ = { _a, b, c, d, e_ } and _C_ ( _A_ ) be a partition of _A_ defined by the class {{ _a, b_ }, { _c, d, e_ }}. 
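The four properties above are easy to test mechanically when a relation is stored as a set of ordered pairs. A sketch (the helper names are mine; the example relation is 0.2.7(b)):

```python
# Property checks for a relation R on a finite set A, where R is a
# set of ordered pairs (tuples).
def is_reflexive(R, A):
    return all((a, a) in R for a in A)

def is_symmetric(R):
    return all((b, a) in R for (a, b) in R)

def is_transitive(R):
    return all((a, d) in R
               for (a, b) in R for (c, d) in R if b == c)

def is_antisymmetric(R):
    return all(a == b for (a, b) in R if (b, a) in R)

A = {"a", "b", "c"}
# Example 0.2.7(b): symmetric, but not reflexive on A (no loop at a or b).
R = {("a", "b"), ("b", "a"), ("c", "c")}
print(is_symmetric(R), is_reflexive(R, A))   # True False
```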
Let _R_ be the set of ordered pairs ( _x, y_ ) in _A_ × _A_ such that whenever _x_ is in one of the sets in the partition, then _y_ is also in the same set. Thus in this case _R_ = {( _a, a_ ), ( _b, b_ ), ( _c, c_ ), ( _d, d_ ), ( _e, e_ ), ( _a, b_ ), ( _b, a_ ), ( _c, d_ ), ( _d, c_ ), ( _c, e_ ), ( _e, c_ ), ( _d, e_ ), ( _e, d_ )}. It is easily verified that _R_ is an equivalence relation. Every partition of a set defines a unique equivalence relation on it. (b) Conversely, it can easily be established that every equivalence relation on a set defines a partition on the set. If the ordered pair ( _a, b_ ) belongs to an equivalence relation on a set _A_ , we take both _a_ and _b_ to belong to the same subset of _A_. The class of subsets thus formed constitutes a partition of _A_. For instance the equivalence relation _R_ = {( _p, p_ ), ( _q, q_ ), ( _p, q_ ), ( _q, p_ ), ( _r, r_ )} defines the partition {{ _p, q_ }, { _r_ }} of the set { _p, q, r_ }. **_Equivalence Sets and the Equivalence Class_** Let _R_ be an equivalence relation on a set _A_ and let _x_ be any element of _A_. The **equivalence set** [ _x_ ] of the element _x_ is the set { _y_ : ( _y, x_ ) ∈ _R_ }. Observe that if [ _u_ ] and [ _v_ ] are two distinct equivalence sets, their intersection is empty. For if _x_ is in both [ _u_ ] and [ _v_ ], then because of symmetry and transitivity ( _u, v_ ) is in the relation _R_ , which implies [ _u_ ] = [ _v_ ]. The class of distinct equivalence sets of the elements in _A_ is called the **equivalence class** of the relation. An equivalence class of a set is a partition of a set, and vice versa. Thus there is no real distinction between partitions of a set and equivalence classes in the set. In practice, it is almost invariably the case that we use equivalence relations to obtain partitions because it is usually easy to define an equivalence relation on a set. 
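The passage from an equivalence relation back to its partition can be sketched directly: collect the equivalence set [x] of each element and keep the distinct ones. The helper name below is mine; the example relation is the one from part (b):

```python
# Recovering the partition induced by an equivalence relation R on A.
def equivalence_sets(R, A):
    """Return the class of distinct equivalence sets [x] for x in A."""
    return {frozenset(y for y in A if (y, x) in R) for x in A}

A = {"p", "q", "r"}
R = {("p", "p"), ("q", "q"), ("p", "q"), ("q", "p"), ("r", "r")}
classes = equivalence_sets(R, A)
# The partition {{p, q}, {r}}, as stated in the text.
print(classes == {frozenset({"p", "q"}), frozenset({"r"})})   # True
```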
**_Partial Orders and Linear Orders_** A relation _R_ on _A_ is a **partial order** if it is reflexive, antisymmetric, and transitive. A partial order _R_ that has the comparison property is called a **total** (or **linear** ) **order**. A nonempty set _A_ together with a partial order relation _P_ defined on it is called a **partially ordered set** (PO set) and is denoted by ( _A, P_ ). A partially ordered set ( _A, P_ ) is called a **totally (linearly) ordered set** or a **chain** if _P_ has the comparison property. **Example 0.2.9** (a) Let _A_ be a nonempty set and _P_ ( _A_ ) its power set. Let _R_ be a relation on _P_ ( _A_ ) × _P_ ( _A_ ) defined by the "set-inclusion" property; that is, if _E_ and _F_ are subsets of _A_ , then ( _E, F_ ) is in the relation _R_ if _E_ is a subset of _F_. Then _R_ is a partial order on _P_ ( _A_ ) and ( _P_ ( _A_ ), _R_ ) is a partially ordered set. But it is not a linearly ordered set, for an arbitrary subset of _A_ need not contain another arbitrary subset of _A_. (b) If _x_ and _y_ are two real numbers, we say that ( _x, y_ ) is an element in the relation _S_ on the set _R_ of real numbers whenever _x_ is less than or equal to _y_. Then the relation _S_ is a linear order on _R_. **Example 0.2.10** Let _X_ = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10} and _S_ be the relation on _X_ defined as _S_ = {( _m, n_ ) : _m_ divides _n_ }. Then _S_ is a partial order on _X_. The set _A_ = {2, 4, 8} is a chain in _X_ , whereas the set _B_ = {2, 5, 10} is not a chain since the elements 2 and 5 are not comparable. **_Hasse Diagrams of Partially Ordered Sets_** Consider the directed graph _G_ that represents a partial order _R_ on a finite set _A_. Since _R_ is reflexive, there is a loop at each vertex of the graph. Since _R_ is transitive, there is an arc from the vertex _u_ to the vertex _v_ whenever there is an arc from _u_ to _w_ and an arc from _w_ to _v_. 
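The divisibility order of Example 0.2.10 and its chain test can be sketched concretely; the helper name `is_chain` is mine:

```python
# The divisibility partial order of Example 0.2.10 on X = {1, ..., 10}.
X = set(range(1, 11))
S = {(m, n) for m in X for n in X if n % m == 0}

def is_chain(subset, R):
    """A subset is a chain if every two of its elements are comparable."""
    return all((a, b) in R or (b, a) in R for a in subset for b in subset)

print(is_chain({2, 4, 8}, S))    # True: 2 | 4 | 8
print(is_chain({2, 5, 10}, S))   # False: 2 and 5 are not comparable
```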
So we can have a simplified pictorial representation of the partial order if we ignore the loops and delete all arrows that are present due to transitivity. Furthermore, if the graphical representation is so oriented that all arrows point in one direction (upward, downward, left to right, or right to left), we can ignore the direction of the arrows as well. The resulting diagram is called a **Hasse diagram** of the partially ordered set. **Example 0.2.11** Let _X_ = {1, 2, 3, 4, 5} and _S_ = {(1, 1), (1, 2), (1, 3), (1, 4), (1, 5), (2, 2), (2, 3), (2, 5), (3, 3), (3, 5), (4, 4), (4, 5), (5, 5)}. It can be easily verified that _S_ is a partial order on _X_. The Hasse diagram that represents _S_ is shown in Figure 0.2.3. **_Maximal and Minimal Elements_** An element _u_ in a partially ordered set _A_ with a partial order _R_ is called a **maximal element** in the set if whenever ( _u, x_ ) is in _R_ , then _x_ = _u_. Similarly, an element _v_ in _A_ is a **minimal element** if whenever ( _x, v_ ) is in _R_ , then _x_ = _v_. **Example 0.2.12** Let _X_ = {2, 3, 4, 5, 8, 12, 24, 25} and let _R_ be the partial order on _X_ defined by _R_ = {( _m, n_ ) : _m_ divides _n_ }. Then 2 is a minimal element of _R_ because no other element in _X_ divides 2. Similarly, 3 and 5 are also minimal elements of _R_. Likewise, 24 is a maximal element because there is no number in _X_ other than 24 that is divisible by 24. Another maximal element in _X_ is 25. The minimal and maximal elements of a partial order can easily be spotted using its Hasse diagram, in which the minimal elements will be at the bottom and the maximal elements will be at the top if all the arrows are drawn upward. See Figure 0.2.4, representing the Hasse diagram of Example 0.2.12, in which the vertices representing 24 and 25 are at the top and the vertices representing 2, 3, and 5 are at the bottom.
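Both the partial-order axioms and the maximal/minimal elements of a finite PO set can be found by brute force. Here is a sketch for the divisibility order of Example 0.2.12 (the variable and comprehension names are ours):

```python
# The divisibility order of Example 0.2.12 on X = {2,3,4,5,8,12,24,25}.
X = {2, 3, 4, 5, 8, 12, 24, 25}
R = {(m, n) for m in X for n in X if n % m == 0}

# The three partial-order axioms, verified exhaustively.
reflexive = all((x, x) in R for x in X)
antisymmetric = all(x == y for (x, y) in R if (y, x) in R)
transitive = all((x, z) in R for (x, y) in R for (w, z) in R if y == w)

# u is maximal if (u, x) in R forces x = u; v is minimal dually.
maximal = {u for u in X if all(x == u for x in X if (u, x) in R)}
minimal = {v for v in X if all(x == v for x in X if (x, v) in R)}
```

As the example states, the maximal elements are 24 and 25 and the minimal elements are 2, 3, and 5.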
**FIGURE 0.2.3** **FIGURE 0.2.4** **Example 0.2.13** _P_ ( _A_ ) is the partially ordered set of all subsets of _A_ with the partial order defined by set inclusion, and in this PO set, _A_ is the only maximal element and the empty set is the only minimal element. A partially ordered set may have more than one maximal (or minimal) element, as we saw in Example 0.2.12. There are partially ordered sets with no maximal or minimal elements. Consider the relation _S_ = {( _x_ , _y_ ) : _x_ , _y_ are integers, _x_ ≤ _y_ }. Then _S_ is no doubt a partial order on the set _Z_ of integers, but this PO set has no maximal or minimal element. **_Maximum (Greatest) and Minimum (Least) Elements_** An element _M_ in a partially ordered set _A_ with a partial order _S_ is called a **maximum** (or **greatest element** ) in _A_ if ( _x, M_ ) ∈ _S_ for every _x_ in the set _A_. Similarly, an element _m_ is a **minimum** (or **least element** ) if ( _m, x_ ) ∈ _S_ for every _x_ in the set _A_. [One should be very careful in distinguishing (1) between a maximal element and a maximum element and (2) between a minimal element and a minimum element. If an element is a maximum or a minimum, all elements in the set must be comparable to it. Of course, if a maximum element exists, it is no doubt a maximal element. Similarly, if a minimum element exists, it is a minimal element. The converse implications are not necessarily true, as can be seen from the Hasse diagrams in Example 0.2.14. In a multiparty government, each party leader can be considered as a maximal element, whereas in a single-party system the unique party leader is both maximum and maximal.] **Example 0.2.14** Let _A_ = {1, 2, 3, 4} and consider the four partial orders on _A_ with Hasse diagrams as in Figure 0.2.5. In part (a), 4 is the greatest element and 1 is the least element. In (b), 4 is the greatest element and the minimal elements are 1 and 2. There is no least element in (b). In (c), 1 is the least element.
There is no greatest element here; but 2, 3, and 4 are maximal elements. There are no greatest or least elements in (d). The elements 1 and 2 are minimal and the elements 3 and 4 are maximal. **FIGURE 0.2.5** **Example 0.2.15** In each of the following sets of positive integers, the ordered pair ( _m, n_ ) is in a relation _S_ if _m_ divides _n_. (a) _A_ = {2, 4, 6, 8}. Here 2 is the least element. There is no greatest element. The maximal elements are 6 and 8. (b) _A_ = {2, 3, 4, 12}. The greatest element is 12. There is no least element. The minimal elements are 2 and 3. (c) _A_ = {2, 4, 8, 16}. The greatest is 16 and the least is 2. (d) _A_ = {2, 3, 4, 6}. Here 2 and 3 are minimal; 4 and 6 are maximal. **_Well-Ordered Sets_** A partially ordered set _A_ in which every nonempty subset _B_ has a least element _m_ in _B_ is called a **well-ordered set**. For example, if _N_ is the set of all positive integers, and if we say that ( _a, b_ ) is in _S_ whenever _a_ is less than or equal to _b_ , then _S_ is a partial order on _N_. Let _B_ be any nonempty subset of _N_. Obviously, the smallest integer in _B_ is the least element of _B_. Thus every nonempty subset of _N_ has a least element in it. So _N_ is a well-ordered set. The set _A_ of real numbers in an interval is not a well-ordered set under the relation _S_ , where ( _x, y_ ) in _S_ means that _x_ is less than or equal to _y_. A subset of a well-ordered set is well-ordered. A well-ordered set _A_ is linearly ordered because any set of two elements in _A_ has a least element and therefore the relation has the comparison property. **_Zorn's Lemma_** Let _B_ be a subset of a partially ordered set _A_ with the partial order relation _S_. An element _u_ in _A_ is called an **upper bound of B** if ( _x_ , _u_ ) is in _S_ for all _x_ in _B_. Observe that _u_ need not be in _B_. If there exists an upper bound for _B_ , we say that _B_ has an upper bound. An arbitrary subset of a PO set need not have an upper bound.
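The distinction between a greatest (or least) element and a maximal (or minimal) one can also be checked mechanically. A sketch for set (a) of Example 0.2.15, under divisibility (the names `greatest`, `least`, and `maximal` are ours):

```python
# Divisibility order on A = {2, 4, 6, 8}, as in Example 0.2.15(a).
A = {2, 4, 6, 8}
S = {(m, n) for m in A for n in A if n % m == 0}

# M is greatest if every x satisfies (x, M) in S; m is least dually.
greatest = [M for M in A if all((x, M) in S for x in A)]
least = [m for m in A if all((m, x) in S for x in A)]

# u is maximal if (u, x) in S forces x = u.
maximal = {u for u in A if all(x == u for x in A if (u, x) in S)}
```

The computation agrees with the text: 2 is the least element, there is no greatest element (6 and 8 are incomparable), and the maximal elements are 6 and 8.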
One of the most important and exceedingly powerful tools of mathematics is **Zorn's lemma** , which asserts that if _A_ is a partially ordered set in which every chain has an upper bound, then _A_ has a maximal element. This lemma cannot be "proved" in the usual sense of the term. However, it can be shown that it is logically equivalent to the celebrated **axiom of choice** , which lies at the very foundation of set theory. Thus Zorn's lemma is assumed as an axiom of logic and set theory. Many important existence theorems are proved by invoking Zorn's lemma. The axiom of choice is also logically equivalent to the well-ordering theorem of Zermelo: Every set can be well-ordered. For a proof of Zorn's lemma, using the axiom of choice, see the book _Naive Set Theory_ by P. R. Halmos. **_0.3 INDUCTIVE PROOFS AND RECURSIVE DEFINITIONS_** One of the most useful, elegant, and simple proof techniques in mathematics in general and in discrete mathematics in particular is the technique known as _mathematical induction_ , which is essentially an "algorithmic proof procedure." The origins of this technique can be traced to the days of the classical Greek period. But the term _induction_ was coined by De Morgan only in the nineteenth century. **_The Principle of Mathematical Induction (Weak Form_ )** This principle is stated as follows: Suppose that _P_ ( _n_ ) is a statement about the natural number _n_ and _q_ is a fixed natural number. Then an induction proof that _P_ ( _n_ ) is true for all _n_ ≥ _q_ requires two steps: 1. _Basis step_ : Verify that _P_ ( _q_ ) is true. 2. _Induction step_ : Verify that if _k_ is _any_ natural number greater than or equal to _q_ , then _P_ ( _k_ \+ 1) is true whenever _P_ ( _k_ ) is true. Here the assumption that _P_ ( _k_ ) is true is called the **inductive hypothesis**. When we complete both steps of a proof by mathematical induction, we have proved that the statement _P_ ( _n_ ) is true for all natural numbers _n_ ≥ _q_.
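An induction proof establishes _P_ ( _n_ ) for every _n_ at once; a finite computation can only spot-check it, but such checks are a useful sanity test before attempting a proof. A sketch (the helper `check` is our name) testing the three statements proved by induction in Examples 0.3.1 through 0.3.3 below for the first fifty values of _n_ :

```python
# Spot-checking induction claims for n = 1, ..., 50.
def check(P, upto=50):
    """Return True if the predicate P(n) holds for n = 1, ..., upto."""
    return all(P(n) for n in range(1, upto + 1))

sum_formula = check(lambda n: sum(range(1, n + 1)) == n * (n + 1) // 2)
odd_sum = check(lambda n: sum(2 * k - 1 for k in range(1, n + 1)) == n ** 2)
power_bound = check(lambda n: n < 2 ** n)
```

All three checks succeed, as the induction proofs guarantee; of course, no finite check replaces the induction step itself.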
[A formal proof is along the following lines: Suppose that _P_ ( _n_ ) is not true for all _n_ ≥ _q_. Then there is at least one integer _k_ ≥ _q_ such that _P_ ( _k_ ) is not true. So the set _D_ of integers _r_ ≥ _q_ for which _P_ ( _r_ ) is not true is nonempty, and therefore this set has a unique least element because any set of natural numbers is well-ordered. Let _t_ be the least element of this set. Since _P_ ( _q_ ) is true, _t_ > _q_. So _t_ – 1 is an integer greater than or equal to _q_ which is not in _D_. Hence _P_ ( _t_ – 1) is true, which implies by the induction step that _P_ ( _t_ ) is true. This is a contradiction.] This version of induction is called the _weak form_ because the induction step assumes that _P_ ( _n_ ) is true in exactly one case. A _strong form_ of induction is discussed later in this section. **Example 0.3.1** Prove that 1 + 2 + 3 + · · · + _n_ = _n_ ( _n_ \+ 1)/2 for all natural numbers _n_. **Proof (By Mathematical Induction)**. Let _P_ ( _n_ ) be the statement that (1 + 2 + 3 + · · · + _n_ ) is equal to _n_ ( _n_ \+ 1)/2. The aim is to prove that _P_ ( _n_ ) is true for all _n_. _The basis step_ : We have to verify that _P_ (1) is true. Here _P_ (1) is the statement that 1 is equal to 1(1 + 1)/2. This is true. So _P_ (1) is true. _The induction step_ : We have to verify that if _P_ ( _k_ ) is true, then _P_ ( _k_ \+ 1) is true. Now _P_ ( _k_ ) is the statement that 1 + 2 + · · · + _k_ is equal to _k_ ( _k_ \+ 1)/2 and _P_ ( _k_ \+ 1) is the statement that 1 + 2 + · · · + ( _k_ \+ 1) is equal to ( _k_ \+ 1)( _k_ \+ 2)/2. If _P_ ( _k_ ) is true, 1 + 2 + · · · + _k_ = _k_ ( _k_ \+ 1)/2. Thus 1 + 2 + · · · + _k_ \+ ( _k_ \+ 1) = _k_ ( _k_ \+ 1)/2 + ( _k_ \+ 1), which implies that 1 + 2 + · · · + ( _k_ \+ 1) = ( _k_ \+ 1)( _k_ \+ 2)/2. So _P_ ( _k_ \+ 1) is true whenever _P_ ( _k_ ) is true. [One has to be very careful in using the induction method to prove theorems.
The meaning of the induction step is very precise: If _P_ ( _q_ ) is true, _P_ ( _q_ \+ 1) is true. If _P_ ( _q_ \+ 1) is true, then _P_ ( _q_ \+ 2) is true, and so on. In other words, the induction should hold for _every k_ greater than or equal to _q_. An erroneous "proof" justifying the statement that all roses are of the same color is as follows. _P_ ( _n_ ) is the proposition that all roses in any collection of _n_ roses are of the same color. Our aim is to show that _P_ ( _n_ ) is true for all _n_. Obviously, _P_ (1) is true. Suppose that _P_ ( _k_ ) is true for some positive integer _k_. So all the roses in any collection of _k_ roses are of the same color. Now consider any arbitrary collection _C_ of ( _k_ \+ 1) roses. Label these roses as _r i_ ( _i_ = 1, 2, 3, . . . , _k_ \+ 1). Let _A_ be the set { _r i_ : _i_ = 1, 2, . . . , _k_ } and _B_ be the set { _r i_ : _i_ = 2, 3, . . . , _k_ , _k_ \+ 1}. Both _A_ and _B_ have exactly _k_ roses. So the roses in _A_ are all of the same color. Similarly, the roses in _B_ also are of the same color. The rose labeled _r k_ is in both _A_ and _B_. So all the _k_ \+ 1 roses under consideration are of the same color. So if _P_ ( _k_ ) is true, then _P_ ( _k_ \+ 1) is true. Can we therefore conclude that _P_ ( _n_ ) is true for all _n_? The answer is "no" because when the set _C_ has two elements, the sets _A_ and _B_ are disjoint. So if _P_ (1) is true, _P_ (2) need not be true.] **Example 0.3.2** Use induction to prove that the sum of the first _n_ odd positive integers is _n_ ². **Proof**. Let _P_ ( _n_ ) be the statement that the sum of the first _n_ odd positive integers is _n_ ². The aim is to prove that _P_ ( _n_ ) is true for every _n_. _The basis step_ : _P_ (1) is true because 1 = 1². _The induction step_ : We have to verify that _P_ ( _k_ \+ 1) is true whenever _P_ ( _k_ ) is true for any positive integer _k_. Since _P_ ( _k_ ) is true, 1 + 3 + 5 + · · · + (2 _k_ – 1) = _k_ ².
Consequently, 1 + 3 + 5 + · · · + (2 _k_ – 1) + (2 _k_ \+ 1) = _k_ ² + (2 _k_ \+ 1) = ( _k_ \+ 1)². So _P_ ( _k_ \+ 1) is true. Thus _P_ ( _n_ ) is true for every _n_. **Example 0.3.3** Use induction to prove that _n_ < 2ⁿ for any positive integer _n_. **Proof**. Let _P_ ( _n_ ) be the statement that _n_ < 2ⁿ for the positive integer _n_. We have to prove that _P_ ( _n_ ) is true for any positive integer. _The basis step_ : _P_ (1) is true since 1 < 2. _The induction step_ : Suppose that _P_ ( _k_ ) is true for an arbitrary positive integer _k_ , implying that _k_ < 2ᵏ. Then _k_ \+ 1 < 2ᵏ + 1 ≤ 2ᵏ + 2ᵏ = 2ᵏ⁺¹. Hence _k_ \+ 1 < 2ᵏ⁺¹, which implies that _P_ ( _k_ \+ 1) is true. So _P_ ( _n_ ) is true for all positive integers _n_. **_The Principle of Mathematical Induction (Strong Form_ )** Suppose that _P_ ( _n_ ) is a statement about the natural number _n_ and _q_ is a fixed natural number. Then an induction proof that _P_ ( _n_ ) is true for all _n_ ≥ _q_ requires two steps: 1. _Basis step_ : Verify that _P_ ( _q_ ) is true. 2. _Induction step_ : Verify that if _k_ ≥ _q_ and if _P_ ( _q_ ), _P_ ( _q_ \+ 1), _P_ ( _q_ \+ 2), . . . , _P_ ( _k_ ) are true, then _P_ ( _k_ \+ 1) is true. (This version of the induction principle is "strong" in the sense that the induction step here has more information than that of the induction step in the "weak" version. As in the previous case, it can easily be shown that the strong version is also a consequence of the fact that any set of natural numbers is well-ordered. So to prove a theorem using mathematical induction, one can use either version of mathematical induction. In some cases it is more convenient to use the strong form, as seen in the next example. The strong form of induction is also known as **complete induction**.) **Example 0.3.4** Prove that any natural number greater than 1 can be factored as a product of prime numbers. **Proof (By Complete Mathematical Induction)**.
Let _P_ ( _n_ ) be the statement that when _n_ is a natural number greater than 1, then _n_ can be factored as a product of prime numbers. The aim is to prove that _P_ ( _n_ ) is true for all _n_ > 1. _The basis step_ : _P_ (2) is the statement that 2 can be factored as a product of primes. Obviously, _P_ (2) is true. _The induction step_ : Suppose that _P_ (2), _P_ (3), . . . , _P_ ( _k_ ) are true. We have to verify that _P_ ( _k_ \+ 1) is true. Now _P_ ( _k_ \+ 1) is certainly true when _k_ \+ 1 is a prime number. If _k_ \+ 1 is not a prime number, we can always find two integers _m_ and _n_ , each greater than 1 and at most _k_ , such that _k_ \+ 1 = _mn_. By the induction hypothesis, both _m_ and _n_ can be expressed as products of prime numbers. So _k_ \+ 1 also can be factored as a product of prime numbers. Thus _P_ ( _k_ \+ 1) is true. **_Recursive Definitions of Sets_** The basic idea underlying the principle of induction is as follows. Once we describe the initial stage in some process and if we are able to describe any subsequent stage in terms of the previous stages, we are in a position to describe the entire process completely at all stages. The parallel concept in computer science is **recursion** , where we tend to think of the process in the opposite direction. Informally, this is the process of solving a large problem by decomposing it into one or more subproblems such that each subproblem is identical in structure to the original problem but simpler to solve. So in both situations, one must (1) decide on a set of simple cases for which the proof or computation is easily handled, and (2) obtain an appropriate rule that can be applied repeatedly until the end. This concept underlying both induction and recursion can be used to justify the definition of some collection of objects in stages. Such a description is aptly called an **inductive** or **recursive definition**. A recursive definition of a set consists of three parts: 1.
_Basis part_ : This part tells us that certain elements belong to the set we are going to define. 2. _Inductive_ ( _recursive_ ) _part_ : This part tells us to use the elements currently in the set to obtain more objects that can be included in the set. 3. _Closure part_ : This part tells us that the only elements in the set are those obtained by (1) and (2). **Example 0.3.5** To define the set _A_ of positive integers divisible by the number 5 recursively, we have the recursive definition consisting of the following three parts: (a) 5 is an element of _A_. (b) If _n_ is an element of _A_ , then _n_ \+ 5 also is an element of _A_. (c) An object is in _A_ if and only if it is obtained by a repeated application of (a) and (b). **_Recursive Definitions of Functions_** Suppose that (1) each element _i_ in the set _S_ = {0, 1, 2, . . . , _k_ } of the first _k_ \+ 1 nonnegative integers is assigned a real number _r i_ , and (2) if _n_ is any integer greater than _k_ , there is a rule _f_ for defining a real number _f_ ( _n_ ) which can be expressed uniquely in terms of some or all of the terms from the set { _f_ ( _n_ – 1), _f_ ( _n_ – 2), . . . , _f_ ( _n_ – _k_ ), _f_ ( _n_ – _k_ – 1)}. If we now define _f_ ( _i_ ) = _r i_ for each _i_ in _S_ , the rule _f_ is a function whose domain is the set of nonnegative integers. A function defined by this method is called a **recursively defined function**. The rule that defines _f_ ( _n_ ) in terms of the preceding values _f_ ( _i_ ) is called a **recurrence relation**. The values _f_ (0), _f_ (1), _f_ (2), . . . , _f_ ( _k_ ) are called the **initial values** of the recurrence relation. We use the induction principle (strong form) to show that this definition does not violate the true definition of a function [i.e., that _f_ ( _n_ ) is unique for every nonnegative integer _n_ ]. **THEOREM 0.3.1** If _f_ is recursively defined, then _f_ ( _n_ ) is unique for every nonnegative integer _n_.
**_Proof_ :** _P_ ( _n_ ) is the proposition that _f_ ( _n_ ) is unique. _The basis step_ : We assume that _f_ ( _i_ ) = _r i_ is unique when _i_ = 0, 1, 2, . . . , _k_. So _P_ (0), _P_ (1), . . . , _P_ ( _k_ ) are true. _The induction step_ : _f_ ( _k_ \+ 1) is expressed uniquely in terms of the _k_ \+ 1 (unique) preceding values. So _P_ ( _k_ \+ 1) is true. **Example 0.3.6** Suppose that _N_ is the set of all nonnegative integers and _R_ is the set of all real numbers. (a) The function _f_ : _N_ → _R_ that defines the sequence _f_ ( _n_ ) = 3ⁿ can be recursively defined as _f_ (0) = 1 and _f_ ( _n_ ) = 3 _f_ ( _n_ – 1) when _n_ > 0. (b) The (factorial) function _f_ : _N_ → _R_ , where _f_ ( _n_ ) = _n_ !, can be recursively defined as _f_ (0) = 1 and _f_ ( _n_ ) = _nf_ ( _n_ – 1) when _n_ > 0. (c) The Fibonacci sequence _f_ : _N_ → _R_ defined recursively by the relation _f_ ( _n_ ) = _f_ ( _n_ – 1) + _f_ ( _n_ – 2) when _n_ > 1, with the initial values _f_ (0) = 0 and _f_ (1) = 1, gives the sequence {0, 1, 1, 2, 3, 5, 8, 13, . . .}. It is important that in a recursive definition, the set of integers in the basis step constituting the initial conditions is a _consecutive_ set of integers. Otherwise, the function may not be well-defined. Here is a counterexample: _f_ ( _n_ ) = 9 _f_ ( _n_ – 2), with the nonconsecutive initial values _f_ (0) = 6 and _f_ (2) = 54, will yield _f_ ( _n_ ) = 2 · 3ⁿ⁺¹ as well as _f_ ( _n_ ) = 3 · 3ⁿ + 3 · (–3)ⁿ. **_0.4 THE LANGUAGE OF LOGIC_** At an introductory level, mathematical logic is very similar to set theory. Instead of sets, in logic we have **propositions**. A proposition is a statement that is either **true** or **false** but not both. The **truth value** of a proposition _p_ is T (or 1) if _p_ is true; otherwise, the truth value is F (or 0). Consider the following five sentences: 1. _p_ : 3 + 2 = 5 2. _q_ : 3 + 2 = 6 3. _r_ : Is it 3 or 2? 4. _s_ : Take 3 5.
_t_ : _x_ \+ 2 = 5 Here _p_ is a true proposition and _q_ is a false proposition. Neither _r_ nor _s_ is a proposition. The sentence _t_ is not a proposition either, since it is neither true nor false as long as _x_ is unspecified. In set theory we have the intersection and union of two sets and the complement of a set in a certain universal set. The analogous concepts in logic are the three logical operations: the conjunction of two propositions, the disjunction of two propositions, and the negation of a proposition. The **conjunction** of two propositions _p_ and _q_ is a proposition that is true if and only if both _p_ and _q_ are true propositions. The conjunction of _p_ and _q_ is called " _p_ and _q_ " and is denoted by _p_ ∧ _q_. The **disjunction** of two propositions _p_ and _q_ is a proposition that is false if and only if both _p_ and _q_ are false propositions. The disjunction of _p_ and _q_ is called " _p_ or _q_ " and is denoted by _p_ ∨ _q_. The **exclusive disjunction** of two propositions _p_ and _q_ is a proposition that is true if and only if exactly one of the two is true and is denoted by _p_ ⊕ _q_. Finally, the **negation** of a proposition _p_ is a proposition _p_ ′ which is true if and only if _p_ is false. A **truth table** displays the relationships between the truth values of propositions. The following table displays the truth values of the conjunction, disjunction, exclusive disjunction, and negation of two propositions _p_ and _q_ : Propositions that can be obtained by the combination of other propositions are known as **compound propositions**. For example, if _p, q_ , and _r_ are three propositions, then "( _p_ ′ and _q_ ) or ( _r_ )" is a compound proposition. A proposition that is not a combination of other propositions is called an **atomic proposition**. **_The Implication Operation_** There is another important way to construct a compound proposition from two propositions _p_ and _q_.
This is the **implication** proposition: " **if** _p_ , **then** _q_ " (or " _p_ **implies** _q_ "), which is denoted by _p_ → _q_. We define this compound proposition to be false if and only if _p_ is true and _q_ is false. In all other cases the compound proposition _p_ → _q_ is true. In this case we say that the proposition _p_ is a **sufficient condition** for the proposition _q_ and _q_ is a **necessary condition** for _p_. Here _p_ is called the **hypothesis** (or **antecedent** or **premise** ) and _q_ is called the **consequence** (or **conclusion** ). Obviously, the compound propositions _p_ → _q_ and _q_ → _p_ cannot both be false at the same time. So irrespective of the truth values of _p_ and _q_ , at least one of the two compound propositions _p_ → _q_ and _q_ → _p_ is always true. Observe that the mathematical concept of implication is more general than the concept of implication regarding statements in the languages we use to communicate in our daily lives. In a general mathematical setting there is no cause-and-effect relationship between the truth value of the hypothesis and the truth value of the conclusion. For example, if _p_ is the proposition that "it is raining today" and _q_ is the proposition that "London is the capital of England," then the implication proposition _p_ → _q_ is true whether or not _p_ is true. The implication that "if it rains today, then 3 + 4 = 8" is a true proposition if it does not rain today. On the other hand, when I make the statement that "if it rains like this, I will not go fishing this afternoon," this statement is an implication proposition in which there is a definite causal relation between the hypothesis and the conclusion. It is also important to mention in this context that the implication construct in many programming languages is interpreted differently.
If a line in a program says "if _n_ < 30 then _S_ ," then when the execution of the program reaches this line, the segment _S_ is executed if _n_ < 30 and not executed otherwise. The compound proposition " _p_ **if and only if** _q_ ," which is denoted by _p_ ↔ _q_ , is the conjunction of the compound proposition _p_ → _q_ and the compound proposition _q_ → _p_. The compound proposition _p_ ↔ _q_ is true when both _p_ → _q_ and _q_ → _p_ are true. In this case we say that _p_ is both **necessary and sufficient** for _q_ , and vice versa. Here is the truth table of these implication operations involving two propositions _p_ and _q_ : If _r_ is the proposition _p_ → _q_ , the proposition _q_ → _p_ is the **converse** of _r_ , the proposition _p_ ′ → _q_ ′ is the **inverse** of _r_ , and the proposition _q_ ′ → _p_ ′ is the **contrapositive** of _r_. If two propositions _p_ and _q_ are such that _p_ is true if and only if _q_ is true, the two propositions _p_ and _q_ are said to be **equivalent**. We write _p_ = _q_ when _p_ and _q_ are equivalent. In other words, two propositions are equivalent if they have the same truth value. For example, the proposition that "Jane will be 18 years old in 1993" is equivalent to the proposition that "Jane was born in 1975." The compound proposition _p_ → _q_ is equivalent to its contrapositive proposition _q_ ′ → _p_ ′ since they both have the same truth value, as can be seen from the following truth table. A compound proposition that is always true irrespective of the truth values of its component propositions is called a **tautology**. A compound proposition that is always false is called a **contradiction**. As a simple example, the disjunction of a proposition _p_ and its negation is a tautology, whereas the conjunction of _p_ and its negation is a contradiction. From the table given above we notice that the proposition _p_ ↔ _q_ is a tautology if and only if _p_ = _q_.
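Claims like these can be verified exhaustively, since two propositions admit only four truth assignments. A sketch in Python (the helper names `implies`, `iff`, and the flag names are ours; `not p or q` plays the role of _p_ → _q_ ):

```python
# Exhaustive verification over all four truth assignments of p and q.
BOOLS = (True, False)

def implies(p, q):
    """Truth value of p -> q: false only when p is true and q is false."""
    return (not p) or q

def iff(p, q):
    """p <-> q is the conjunction of p -> q and q -> p."""
    return implies(p, q) and implies(q, p)

# p -> q agrees with its contrapositive q' -> p' in every case...
contrapositive_ok = all(implies(p, q) == implies(not q, not p)
                        for p in BOOLS for q in BOOLS)
# ...but not with its converse q -> p.
converse_ok = all(implies(p, q) == implies(q, p)
                  for p in BOOLS for q in BOOLS)

tautology = all(p or not p for p in BOOLS)        # p or p' is always true
contradiction = any(p and not p for p in BOOLS)   # p and p' is never true
```

The exhaustive check confirms that the contrapositive is equivalent to the original implication while the converse is not, and that _p_ ∨ _p_ ′ is a tautology and _p_ ∧ _p_ ′ a contradiction.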
The commutative, associative, and distributive laws involving the conjunction and disjunction operations can easily be verified by constructing the appropriate truth tables. These laws are: 1. _The commutative laws_ : _p_ ∧ _q_ = _q_ ∧ _p_ and _p_ ∨ _q_ = _q_ ∨ _p_ 2. _The associative laws: p_ ∧ ( _q_ ∧ _r_ ) = ( _p_ ∧ _q_ ) ∧ _r_ and _p_ ∨ ( _q_ ∨ _r_ ) = ( _p_ ∨ _q_ ) ∨ _r_ 3. _The distributive laws: p_ ∧ ( _q_ ∨ _r_ ) = ( _p_ ∧ _q_ ) ∨ ( _p_ ∧ _r_ ) and _p_ ∨ ( _q_ ∧ _r_ ) = ( _p_ ∨ _q_ ) ∧ ( _p_ ∨ _r_ ) The truth value of a compound proposition depends on the truth values of its component propositions. The subject of constructing and simplifying compound propositions built from other propositions and obtaining their truth values is called **propositional calculus**. This combination of propositions to yield new propositions bears a strong resemblance to the combination of sets to form new sets. The following theorem is analogous to De Morgan's laws in set theory. **THEOREM 0.4.1** (a) ( _p_ ∧ _q_ )′ = ( _p_ ′) ∨ ( _q_ ′). (b) ( _p_ ∨ _q_ )′ = ( _p_ ′) ∧ ( _q_ ′). **_Proof_ :** This is left as an exercise. **_The Satisfiability Problem in Logic_** The truth table of a compound proposition with _n_ atomic propositions as its components will have 2ⁿ rows. The **satisfiability problem** of a compound proposition _p_ is the problem of (1) finding out whether there exist truth values for the atomic components of _p_ such that _p_ is true, and (2) obtaining the true atomic propositions and the false atomic propositions, if they exist, which make the compound proposition true. The only known general procedure for testing the satisfiability of a proposition with _n_ atomic propositions is building a truth table enumerating all the 2ⁿ possibilities of truth values, and this is a formidable task indeed when _n_ is large. **_0.5 NOTES AND REFERENCES_** A systematic investigation of the theory of sets began with the contributions of Georg Cantor (1845–1918) in the nineteenth century.
Prior to that, set theory and more generally nonnumerical mathematics were not investigated in a formal manner apart from the contributions of George Peacock (1791–1858), Augustus De Morgan (1806–1871), and George Boole (1815–1864). Peacock and De Morgan generalized the usual algebraic operations beyond the realm of numerical mathematics, and Boole extended and formalized their contributions in his seminal work _An Investigation of the Laws of Thought_ in 1854. According to the great twentieth-century philosopher and mathematician Bertrand Russell (1872–1970), it was George Boole who "discovered" pure mathematics. More on the history and development of set theory can be found in Boyer (1968). For a detailed treatment of set theory, including functions and relations, the books by Halmos (1960) and Stoll (1963) are highly recommended. The technique of proof by induction was explicitly stated and used by Francesco Maurolycus in the sixteenth century when he proved that the sum of the first _n_ odd positive integers is _n_ ². However, this technique was known to mathematicians as early as the third century B.C. For example, in Euclid's proof that there are an infinite number of primes, this technique was used implicitly. In the seventeenth century, both Pascal and Fermat used the induction method extensively. The term _mathematical induction_ was coined by De Morgan. In the nineteenth century, the principle of induction was investigated in detail by Gottlob Frege (1848–1925), Giuseppe Peano (1858–1932), and Richard Dedekind (1831–1916). The role of induction in the formal development of mathematics became a primary focus of many mathematical logicians at the beginning of the twentieth century, and two names worth mentioning in this regard are those of Bertrand Russell and Thoralf Skolem (1887–1963). For an interesting survey on the topic of mathematical induction, see the article by Bussey (1917).
Golovina and Yaglom (1963), Pólya (1954), and Sominskii (1963) are three excellent references in this area. The article by Henkin (1960) is also highly recommended. The origins of a systematic study of logical reasoning can be traced to Aristotle, who lived in the fourth century B.C. It was not until the seventeenth century, however, that symbols were used in the study of logic. The pioneering work in symbolic logic was done by Gottfried Leibniz (1646–1716). No major developments took place until George Boole published his outstanding work mentioned earlier. Since then, Bertrand Russell and Alfred North Whitehead (1861–1947) contributed considerably to the development of logic, and the field of mathematical logic emerged when the discovery of certain paradoxes led to an extensive examination of the place of logic, proof, and set theory in the foundations of mathematics. **_0.6 EXERCISES_** **0.1.** Let _A_ = {3, 5, 7, 9}, _B_ = {2, 3, 5, 6, 7}, and _C_ = {2, 4, 6, 8} be subsets of the universe _X_ = {2, 3, 4, 5, 6, 7, 8, 9}. Find **(a)** the union of _A_ and _B_ , **(b)** the intersection of _B_ and _C_ , **(c)** _B_ – _A_ , **(d)** _A_ – _B_ , **(e)** the absolute complement _C_ ′ of the set _C_ , **(f)** the absolute complement of _X_ , **(g)** the absolute complement of the empty set. **0.2.** Let _A, B, C_ be as in Problem 0.1. Find the following sets: **(a)** ( _A_ ∪ _B_ ) – _C_ , **(b)** the intersection of ( _A_ ∪ _B_ )′ and ( _B_ ∪ _C_ )′, **(c)** ( _A_ ∪ _C_ ) – ( _C_ – _A_ )′ **0.3.** Which of the following sets are equal? **(a)** { _a, b, c, c_ }, **(b)** { _a, b, a, b, c_ }, **(c)** { _a, b, b, c, d_ } **0.4.** Which of the following sets are equal?
**(a)** { _t_ : _t_ is a root of _x_ ² – 6 _x_ \+ 8 = 0} **(b)** { _y_ : _y_ is a real number in the closed interval [2, 3]} **(c)** {4, 2, 5, 4} **(d)** {4, 5, 7, 2} – {5, 7} **(e)** { _q_ : _q_ is either the number of sides of a rectangle or the number of digits in any integer between 11 and 99} **0.5.** If _A_ = {3, 4} and _B_ = { _p, q, r_ }, list all the elements of **(a)** _A_ × _A_ , **(b)** _A_ × _B_ , **(c)** _B_ × _A_ and **(d)** _B_ × _B_. **0.6.** Let _A_ and _B_ be as in Problem 0.5. List all the elements of **(a)** _A_ × _A_ × _A_ and **(b)** _A_ × _A_ × _B_. **0.7.** Let _A_ and _B_ be as in Problem 0.5. List all the elements of **(a)** _A_ ∪ ( _B_ × _A_ ) and **(b)** ( _A_ × _A_ ) ∪ ( _B_ × _A_ ). **0.8.** List all the sets in the power set of the following sets: **(a)** { _a, b_ }, **(b)** { _a, b, c_ }, **(c)** {ϕ, 0, {0}} **0.9.** List all partitions of the following sets: **(a)** { _a_ }, **(b)** { _a, b_ }, **(c)** { _a, b, c_ } **0.10.** Determine whether each of the following statements is true in the case of three arbitrary sets _P, Q, R_. **(a)** If _P_ is an element of _Q_ and if _Q_ is a subset of _R_ , then _P_ is an element of _R_. **(b)** If _P_ is an element of _Q_ and if _Q_ is a subset of _R_ , then _P_ also is a subset of _R_. **(c)** If _P_ is a subset of _Q_ and _Q_ is an element of _R_ , then _P_ is an element of _R_. **(d)** If _P_ is a subset of _Q_ and _Q_ is an element of _R_ , then _P_ is a subset of _R_. **0.11.** Prove the following assertions involving three arbitrary sets _P, Q_ , and _R_. **(a)** ( _P_ – _Q_ ) – _R_ = _P_ – ( _Q_ ∪ _R_ ) **(b)** ( _P_ – _Q_ ) – _R_ = ( _P_ – _R_ ) – _Q_ **(c)** ( _P_ – _Q_ ) – _R_ = ( _P_ – _R_ ) – ( _Q_ – _R_ ) **0.12.** Two sets _A_ and _B_ are such that their union and their intersection are equal. What can we say about _A_ and _B_? **0.13.** Suppose that _A_ is a subset of _B_ and _C_ is a subset of _D_. **(a)** Is it true that ( _A_ ∪ _C_ ) is a subset of ( _B_ ∪ _D_ )?
**(b)** Is it true that the intersection of _A_ and _C_ is a subset of the intersection of _B_ and _D_? **0.14.** What can we say about two sets _P_ and _Q_ if _P_ – _Q_ is equal to _Q_ – _P_? **0.15.** Prove that if _A_ and _B_ are nonempty sets such that _A_ × _B_ = _B_ × _A_ , then _A_ = _B_. **0.16.** What is the cardinality of _P_ × _Q_ if the cardinality of _P_ is _p_ and the cardinality of _Q_ is _q_? **0.17.** Prove that the intersection of the powerset of _A_ and the powerset of _B_ is the powerset of the intersection of _A_ and _B_ , where _A_ and _B_ are two arbitrary sets. **0.18.** What can we say if the operation "intersection" is replaced by the operation "union" in Problem 0.17? **0.19.** What is the cardinality of the powerset of the empty set? **0.20.** If the powerset of _A_ is equal to the powerset of _B_ , does it follow that _A_ and _B_ are equal? **0.21.** The **symmetric difference** of two sets _A_ and _B_ is the set containing those elements of either _A_ or _B_ but not both _A_ and _B_ and is denoted by _A_ ⊕ _B_. Prove: _A_ ⊕ _B_ = ( _A_ – _B_ ) ∪ ( _B_ – _A_ ) = ( _A_ ∪ _B_ ) – ( _A_ ∩ _B_ ). **0.22.** Draw a Venn diagram to represent the symmetric difference of two sets. **0.23.** How many distinct regions are there in a Venn diagram that represents three sets in a universal set such that no intersection is empty? **0.24.** If the symmetric difference of two sets _A_ and _B_ is equal to the set _A_ , what can we say about _A_ and _B_? **0.25.** If _A_ and _B_ are two arbitrary sets, under what conditions can we conclude that the symmetric difference of ( _A_ – _B_ ) and ( _B_ – _A_ ) is the empty set? **0.26.** Using Venn diagrams, investigate whether the following statements are true or false.
**(a)** _A_ ⊕ ( _B_ ∩ _C_ ) = ( _A_ ⊕ _B_ ) ∩ ( _A_ ⊕ _C_ ) **(b)** _A_ ⊕ ( _B_ ∪ _C_ ) = ( _A_ ⊕ _B_ ) ∪ ( _A_ ⊕ _C_ ) **(c)** _A_ ⊕ ( _B_ ⊕ _C_ ) = ( _A_ ⊕ _B_ ) ⊕ _C_ **(d)** _A_ ∩ ( _B_ ⊕ _C_ ) = ( _A_ ∩ _B_ ) ⊕ ( _A_ ∩ _C_ ) **(e)** _A_ ∪ ( _B_ ⊕ _C_ ) = ( _A_ ∪ _B_ ) ⊕ ( _A_ ∪ _C_ ) **0.27.** If the symmetric difference of _A_ and _B_ is equal to the symmetric difference of _A_ and _C_ , is it necessary that _B_ = _C_? **0.28.** Prove the following assertions involving three arbitrary sets _A, B_ , and _C_ : **(a)** _A_ × ( _B_ ∩ _C_ ) = ( _A_ × _B_ ) ∩ ( _A_ × _C_ ) **(b)** _A_ × ( _B_ ∪ _C_ ) = ( _A_ × _B_ ) ∪ ( _A_ × _C_ ) **(c)** ( _A_ ∩ _B_ ) × _C_ = ( _A_ × _C_ ) ∩ ( _B_ × _C_ ) **(d)** ( _A_ ∪ _B_ ) × _C_ = ( _A_ × _C_ ) ∪ ( _B_ × _C_ ) **0.29.** Let _R_ be the set of all real numbers and _f_ : _R_ → _R_ defined by _f_ ( _x_ ) = _x_ ². **(a)** What are the domain, codomain, and range of this function? **(b)** Is _f_ an injection? **(c)** Is _f_ a surjection? **(d)** Find the set of all preimages of 4. **(e)** Find the inverse image of the set { _t_ : 1 ≤ _t_ ≤ 4}. **0.30.** If _R_ is the set of all real numbers, explain why _F_ ( _x_ ) = 1/( _x_ – 2) and _G_ ( _x_ ) = (the square root of _x_ ) are not functions from _R_ to _R_. **0.31.** If _N_ is the set of all natural numbers and if _f_ : _N_ → _N_ is defined by _f_ ( _n_ ) = 2 _n_ + 5, show that _f_ is an injection and find the inverse function. Is _f_ a surjection? Is the inverse function a surjection? **0.32.** Suppose that _f_ ( _x_ ) = _x_ ² – 4, where _x_ is a real number. Find the images of the following sets: **(a)** {–4, 4, 5}, **(b)** {4, 5} **(c)** { _t_ : _t_ is a real number greater than or equal to zero}. **0.33.** Let _A_ = { _a, b, c, d_ } and _B_ = { _p, q, r_ }. **(a)** Find the number of functions from _A_ to _B_. **(b)** Find the number of injections from _A_ to _B_. **(c)** Find the number of surjections from _A_ to _B_.
**(d)** Find the number of functions such that _a_ is mapped into _p_ and _b_ is mapped into _q_. **0.34.** If _N_ is the set of all natural numbers, give an example of a function from _N_ to _N_ that is **(a)** an injection but not a surjection, **(b)** a surjection but not an injection. **0.35.** Find the domain and range of the function that assigns **(a)** each integer its last digit, **(b)** each integer the number of digits in it. **0.36.** Give an example of a function _f_ from the set of real numbers to the set of real numbers such that **(a)** _f_ is both an injection as well as a surjection, **(b)** _f_ is neither an injection nor a surjection. **0.37.** Suppose that _X_ = { _p, q, r_ }, _Y_ = { _a, b, c, d_ }, and _Z_ = {1, 2, 3, 4}. Let _g_ : _X_ → _Y_ be defined by the set of ordered pairs {( _p, a_ ), ( _q, b_ ), ( _r, c_ )} and _f_ : _Y_ → _Z_ be defined by the set of ordered pairs {( _a_ , 1), ( _b_ , 1), ( _c_ , 2), ( _d_ , 3)}. Write the composite function _f_ ○ _g_ as a set of ordered pairs. **0.38.** If _A_ = { _p, q, r_ } and _f_ : _A_ → _A_ is defined by _f_ ( _p_ ) = _q_ , _f_ ( _q_ ) = _p_ , and _f_ ( _r_ ) = _q_ , describe _f_ and _f_ ○ _f_ as sets of ordered pairs. **0.39.** Let _A_ and _f_ be as in Problem 0.38. Define _f_ ⁿ = _f_ ○ _f_ ○ _f_ ○ · · · ○ _f_ as the _n_ -fold composition of _f_ with itself. Describe _f_ ⁿ as a set of ordered pairs when _n_ is odd and when _n_ is even. **0.40.** Show that the set of all positive integers is equivalent to the set of all positive even integers. **0.41.** Let _f_ : _B_ → _C_ and _g_ : _A_ → _B_. Prove the following: **(a)** If _f_ and _g_ are injections, then _f_ ○ _g_ is an injection. **(b)** If _f_ and _g_ are surjections, then _f_ ○ _g_ is a surjection. **0.42.** Let _f_ and _g_ be as in Problem 0.41. **(a)** Suppose that _f_ ○ _g_ is an injection. Is it necessary that _f_ be an injection? Is it necessary that _g_ be an injection? **(b)** Suppose that _f_ ○ _g_ is a surjection.
Is it necessary that _f_ be a surjection? Is it necessary that _g_ be a surjection? **0.43.** If _f_ ( _x_ ) = _ax_ + _b_ and _g_ ( _x_ ) = _cx_ + _d_ and _f_ ○ _g_ = _g_ ○ _f_ , find an equation relating _a, b, c_ , and _d_. **0.44.** Suppose that _f_ : _X_ → _Y_ and _A_ and _B_ are subsets of _X_. Then prove: **(a)** _f_ ( _A_ ∪ _B_ ) = _f_ ( _A_ ) ∪ _f_ ( _B_ ) and **(b)** _f_ ( _A_ ∩ _B_ ) is a subset of the intersection of _f_ ( _A_ ) and _f_ ( _B_ ). **0.45.** Show that if _f_ : _X_ → _Y_ is an injection, then _f_ ( _A_ ∩ _B_ ) = _f_ ( _A_ ) ∩ _f_ ( _B_ ) for all subsets _A_ and _B_ of _X_. **0.46.** Show that there is an injection from _A_ to _B_ if and only if there is a surjection from _B_ to _A_. **0.47.** Suppose that _f_ : _A_ → _B_ where _A_ and _B_ are two finite sets with the same cardinality. Prove that _f_ is an injection if and only if _f_ is a surjection. **0.48.** Let _A_ be a subset of a universal set _X_. The **characteristic function** _f A_ of _A_ is the function from _X_ to the set {0, 1} such that the image of every element in _A_ is 1 and the image of every element not in _A_ is 0. Suppose that _A_ and _B_ are two subsets of _X_. Prove the following for all _x_ in _X_. **(a)** _f A_∩ _B_ ( _x_ ) = _f A_( _x_ ) · _f B_( _x_ ) for all _x_ in _X_ **(b)** _f A_∪ _B_ ( _x_ ) = _f A_( _x_ ) + _f B_( _x_ ) – _f A_( _x_ ) · _f B_( _x_ ) **(c)** _f A_( _x_ ) + _f A′_( _x_ ) = 1 **(d)** If _C_ is the symmetric difference of _A_ and _B_ , then _f C_( _x_ ) = _f A_( _x_ ) + _f B_( _x_ ) – 2 _f A_( _x_ ) · _f B_( _x_ ). **0.49.** Let _S_ = {0, 1} and let _S_ ⁿ be the set of all strings of length _n_ over _S_. If _u_ and _v_ are two strings in _S_ ⁿ, we compare them place by place and define the **Hamming distance** _H_ ( _u, v_ ) between _u_ and _v_ to be the number of places where they differ. Find the Hamming distance between _u_ and _v_ if **(a)** _u_ = 101100 and _v_ = 111011 **(b)** _u_ = 01010 and _v_ = 11001.
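Problems 0.49 and 0.50 define the Hamming distance computationally, so the two requested values can be confirmed with a short sketch (the function name `hamming` is our own, not the text's):

```python
def hamming(u: str, v: str) -> int:
    """Hamming distance: the number of places where two equal-length strings differ."""
    if len(u) != len(v):
        raise ValueError("strings must have the same length")
    return sum(a != b for a, b in zip(u, v))

# The two pairs from Problem 0.49:
print(hamming("101100", "111011"))  # 4
print(hamming("01010", "11001"))    # 3
```

Comparing place by place, the strings in (a) differ in four positions and those in (b) in three, in agreement with the metric axioms of Problem 0.50.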
**0.50.** Suppose that _S_ , _S_ ⁿ, and _H_ ( _u, v_ ) are as in Problem 0.49. The function _H_ : _S_ ⁿ × _S_ ⁿ → _N_ (where _N_ is the set of all nonnegative integers) which maps the ordered pair ( _u, v_ ) into _H_ ( _u, v_ ) is the **Hamming distance function**. Show that for all _u, v_ , and _w_ in _S_ ⁿ, the function _H_ satisfies the following **metric axioms: (a)** _H_ ( _u, v_ ) is nonnegative, **(b)** _H_ ( _u, v_ ) = 0 if and only if _u_ = _v_ , **(c)** _H_ ( _u, v_ ) = _H_ ( _v, u_ ), and **(d)** _H_ ( _u, v_ ) ≤ _H_ ( _u, w_ ) + _H_ ( _w, v_ ). **0.51.** Let _A_ = {1, 2, 3, 4, 5}, _B_ = { _a, b, c, d_ }, and _R_ be the relation {(1, _a_ ), (1, _b_ ), (3, _c_ ), (4, _d_ ), (5, _d_ ), (5, _c_ )}. Represent this relation by a bipartite graph and draw the appropriate arrows. **0.52.** Give an example of a relation on a set that is **(a)** both symmetric and antisymmetric, **(b)** neither symmetric nor antisymmetric. **0.53.** Let _A_ = { _a, b, c, d_ }. Draw the digraph corresponding to each of the following relations on _A_ and decide whether each relation is reflexive, symmetric, transitive, and antisymmetric. Examine whether the comparison property holds in any of these relations. **(a)** _R_ = {( _b, b_ ), ( _b, c_ ), ( _b, d_ ), ( _c, b_ ), ( _c, c_ ), ( _c, d_ )} **(b)** _R_ = {( _a, b_ ), ( _b, a_ )} **(c)** _R_ = {( _a, a_ ), ( _b, b_ ), ( _c, c_ ), ( _d, d_ )} **(d)** _R_ = {( _a, a_ ), ( _b, b_ ), ( _c, c_ ), ( _d, d_ ), ( _a, b_ ), ( _b, a_ )} **(e)** _R_ = {( _a, c_ ), ( _a, d_ ), ( _b, c_ ), ( _b, d_ ), ( _c, a_ ), ( _c, d_ )} **(f)** _R_ = {( _a, b_ ), ( _b, c_ ), ( _c, d_ )} **0.54.** Let _R_ be a relation from _A_ to _B_ and _S_ be a relation from _B_ to _C_. Then the **composite relation** _S_ ○ _R_ of _R_ and _S_ is the relation consisting of all ordered pairs of the form ( _a, c_ ), where ( _a, b_ ) is in _R_ and ( _b, c_ ) is in _S_.
If _A_ = { _p, q, r_ , _s_ }, _B_ = { _a, b_ }, _C_ = {1, 2, 3, 4}, _R_ = {( _p, a_ ), ( _p, b_ ), ( _q, b_ ), ( _r, a_ ), ( _s, a_ )} and _S_ = {( _a_ , 1), ( _a_ , 2), ( _b_ , 4)}, find _S_ ○ _R_. **0.55.** Let _R_ be a relation on the set _A_. The relation _R_ ² on _A_ is defined as _R_ ○ _R_. Show that _R_ ² ○ _R_ is equal to _R_ ○ _R_ ². Thus _R_ ³ is the composite of _R_ ² and _R_. More generally, the _n_ th power _R_ ⁿ of the relation is the composite of _R_ ⁿ⁻¹ and _R_. If _R_ = {( _a, a_ ), ( _a, b_ ), ( _b, a_ ), ( _c, b_ ), ( _c, d_ )}, find the second and third powers of _R_. **0.56.** Prove that if a relation on a set is reflexive, then any power of that relation is reflexive. **0.57.** Prove that if a relation _R_ on a set is reflexive and transitive, then _R_ ⁿ = _R_ for all positive integers _n_. **0.58.** Let _R_ be a relation from _A_ to _B_. The **inverse relation** _R_ ⁻¹ from _B_ to _A_ is the set of all ordered pairs of the form ( _b, a_ ), where ( _a, b_ ) is in _R_. Show that a relation _R_ on a set is symmetric if and only if _R_ and its inverse are equal. **0.59.** Show that a relation on a set is reflexive if and only if its inverse relation is reflexive. **0.60.** Prove that a relation _R_ on a set _A_ is antisymmetric if and only if the intersection of _R_ and its inverse is a subset of the **diagonal relation** _D_ = {( _x, x_ ) : _x_ ∈ _A_ }. **0.61.** Let _R_ be the relation on the set _A_ = {1, 2, 3, 4, 5, 6, 7} defined by the rule ( _a, b_ ) ∈ _R_ if the integer ( _a_ – _b_ ) is divisible by 4. List the elements of _R_ and its inverse. **0.62.** Let _R_ be the relation on the set _N_ of all positive integers defined by ( _a, b_ ) ∈ _R_ if _b_ is divisible by _a_. Determine whether _R_ is reflexive, symmetric, antisymmetric, or transitive. **0.63.** Let _N_ be the set of all positive integers and let _R_ be the relation on _N_ × _N_ defined by (( _a, b_ ), ( _c, d_ )) is in _R_ if _a_ ≤ _c_ and _b_ ≤ _d_.
Determine whether _R_ is reflexive, symmetric, antisymmetric, or transitive. **0.64.** Which of the following relations on the set {1, 2, 3, 4} are equivalence relations? If the relation is an equivalence relation, list the corresponding partition (equivalence classes). **(a)** {(1, 1), (2, 2), (3, 3), (4, 4), (1, 3), (3, 1)} **(b)** {(1, 1), (2, 2), (3, 3), (4, 4)} **(c)** {(1, 1), (2, 2), (1, 2), (2, 1), (3, 3), (4, 4)} **0.65.** Let _R_ = {( _x, y_ ) : _x_ and _y_ are real numbers and _x_ – _y_ is an integer}. Show that _R_ is an equivalence relation on the set of real numbers. **0.66.** Let _a_ be an integer and _m_ be a positive integer. We denote by _a_ ( **mod** _m_ ) the remainder when _a_ is divided by _m_. If _a_ and _b_ are two integers, we say that _a_ **is congruent to** _b_ **modulo** _m_ if _m_ divides _a_ – _b_. The notation _a_ ≡ _b_ ( **mod** _m_ ) is used to indicate that _a_ is congruent to _b_ modulo _m_. Of course, if _a_ is congruent to _b_ modulo _m_ , then _b_ is congruent to _a_ modulo _m_. Prove that: **(a)** _a_ (mod _m_ ) = _b_ (mod _m_ ) if and only if _a_ ≡ _b_ (mod _m_ ). **(b)** _a_ ≡ _b_ (mod _m_ ) if and only if there exists an integer _k_ such that _a_ = _b_ + _km_. **(c)** If _a_ ≡ _b_ (mod _m_ ) and _c_ ≡ _d_ (mod _m_ ), then _a_ + _c_ ≡ ( _b_ + _d_ ) (mod _m_ ) and _ac_ ≡ _bd_ (mod _m_ ). **0.67.** Let _Z_ be the set of all integers and let _m_ be any positive integer greater than 1. Show that the relation _R_ on _Z_ defined by the set {( _a, b_ ) : _a_ ≡ _b_ (mod _m_ )} is an equivalence relation. This relation is called the **congruence modulo** _m_ relation on the set of integers. The equivalence classes of this relation are called **congruence classes modulo** _m_. The congruence class of an integer _x_ modulo _m_ is denoted by [ _x_ ] _m_. **0.68.** Find the congruence classes modulo 5: **(a)** [0]₅, **(b)** [1]₅, and **(c)** [2]₅. **0.69.** Prove that {[ _i_ ] _m_ : _i_ = 0, 1, 2, . . .
, ( _m_ – 1)} is a partition of the set of integers. **0.70.** Let _f_ be a function from _A_ to _A_. Let _R_ be the relation on _A_ defined by {( _x, y_ ) : _f_ ( _x_ ) = _f_ ( _y_ )}. Prove that _R_ is an equivalence relation on _A_. What are the equivalence classes? **0.71.** Suppose that _R_ is an equivalence relation on a nonempty set _A_. Show that there is a function _f_ with _A_ as domain such that ( _x, y_ ) is in _R_ if and only if _f_ ( _x_ ) = _f_ ( _y_ ). **0.72.** Suppose that (( _a, b_ ), ( _c, d_ )) is in _R_ whenever _a, b, c, d_ are positive integers and _ad_ = _bc_. Show that _R_ is an equivalence relation on the set of ordered pairs of positive integers. **0.73.** Let _X_ = {1, 2, 3, 4, 5, . . . , 15}. Let _R_ be the relation on _X_ defined by ( _x, y_ ) ∈ _R_ if ( _x_ – _y_ ) is divisible by 3. Prove that _R_ is an equivalence relation on _X_. Find the equivalence classes. **0.74.** Let _R_ be a transitive and reflexive relation on a set _A_. If _S_ is a relation on _A_ such that ( _x, y_ ) is in _S_ if and only if both ( _x, y_ ) and ( _y, x_ ) are in _R_ , prove that _S_ is an equivalence relation on _A_. **0.75.** Prove that a reflexive relation _R_ on a set _A_ is an equivalence relation on _A_ if and only if ( _x, y_ ) and ( _x, z_ ) in _R_ implies that ( _y, z_ ) is in _R_. **0.76.** If _S_ is the relation defined on the set of real numbers as _S_ = {( _x, y_ ) : _x_ ≤ _y_ }, show that _S_ is a partial order on the set of real numbers. **0.77.** If _S_ is the relation defined on the set of positive integers as _S_ = {( _x_ , _y_ ) : _x_ divides _y_ }, show that _S_ is a partial order on the set of positive integers. **0.78.** Let _X_ = { _a, b, c_ } and _S_ be the partial order on the powerset _P_ ( _X_ ) defined as _S_ = {( _A, B_ ) : _A_ is a subset of _B_ }. List the elements of _S_. **0.79.** Draw the Hasse diagram of the partial order _S_ of Problem 0.78.
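For relation problems such as 0.53 and 0.62–0.64, the defining properties can be tested mechanically on a finite set. The following sketch (helper names are our own; a relation is stored as a set of ordered pairs) illustrates the four checks:

```python
def is_reflexive(A, R):
    # (x, x) must belong to R for every x in A
    return all((x, x) in R for x in A)

def is_symmetric(R):
    # (x, y) in R forces (y, x) in R
    return all((y, x) in R for (x, y) in R)

def is_antisymmetric(R):
    # (x, y) and (y, x) both in R force x == y
    return all(x == y for (x, y) in R if (y, x) in R)

def is_transitive(R):
    # (x, y) and (y, z) in R force (x, z) in R
    return all((x, w) in R for (x, y) in R for (z, w) in R if y == z)

# Relation (c) of Problem 0.53: the diagonal relation on {a, b, c, d}
A = {"a", "b", "c", "d"}
R = {(x, x) for x in A}
print(is_reflexive(A, R), is_symmetric(R), is_antisymmetric(R), is_transitive(R))
# True True True True
```

The diagonal relation passes all four tests, matching the by-hand analysis of 0.53(c); the same helpers apply directly to the other parts of the problem.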
**0.80.** Let _X_ = {1, 2, 3, 4, 5, 6, 7, 8, 9} and _S_ = {( _m, n_ ) : _m_ divides _n_ } be a partial order on _X_. Draw the Hasse diagram that represents _S_. Locate a chain in _X_. **0.81.** Prove that if _R_ is a partial order on the set _A_ , its inverse _R_ ⁻¹ also is a partial order on _A_. The partially ordered set ( _A, R_ ⁻¹) is called the **dual** of the partially ordered set ( _A, R_ ). **0.82.** Let _A_ = {2, 3, 4, 6, 8, 12, 16, 24} and _S_ be the partial order relation on _A_ defined by _S_ = {( _a, b_ ) : _a_ divides _b_ }. Find **(a)** the minimal elements in _A_ , **(b)** the maximal elements in _A_ , and **(c)** the upper bounds of the set _B_ = {4, 6, 12}. **0.83.** Draw the Hasse diagram of the PO set in Problem 0.82. **0.84.** Prove that **(a)** every finite partially ordered set has a maximal element and a minimal element, and **(b)** every finite linearly ordered set has a greatest element and a least element. **0.85.** Prove by induction that 1ᵏ + 2ᵏ + 3ᵏ + · · · + _n_ ᵏ is equal to **(a)** _n_ ( _n_ + 1)(2 _n_ + 1)/6 when _k_ = 2, and **(b)** [ _n_ ( _n_ + 1)/2]² when _k_ = 3. **0.86.** Prove by induction that 1·2 + 2·3 + 3·4 + · · · + _n_ ( _n_ + 1) is equal to _n_ ( _n_ + 1)( _n_ + 2)/3. **0.87.** Prove by induction that 1/(1·2) + 1/(2·3) + 1/(3·4) + · · · + 1/ _n_ ( _n_ + 1) is equal to _n_ /( _n_ + 1). **0.88.** Show that _n_ ³ + 2 _n_ is divisible by 3 for all positive integers _n_. **0.89.** Prove that 1·2·3 + 2·3·4 + 3·4·5 + · · · + _n_ ( _n_ + 1)( _n_ + 2) is equal to _n_ ( _n_ + 1)( _n_ + 2)( _n_ + 3)/4. **0.90.** Prove that 1²/(1·3) + 2²/(3·5) + · · · + _n_ ²/(2 _n_ – 1)(2 _n_ + 1) is equal to [ _n_ ( _n_ + 1)]/[2(2 _n_ + 1)]. **0.91.** Show that the sum of the cubes of any three consecutive positive integers is divisible by 9. **0.92.** Show that for any positive integer _n_ greater than 1, the sum is greater than . **0.93.** Prove: (1 – 1/2)(1 – 1/3) · · · (1 – 1/ _n_ ) = 1/ _n_.
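A numerical spot-check of the closed forms in Problems 0.85–0.87 is no substitute for the induction proofs the problems ask for, but it is a quick way to catch a transcription error; a sketch:

```python
from fractions import Fraction

def check(n):
    # Problem 0.85(a): 1^2 + 2^2 + ... + n^2 = n(n + 1)(2n + 1)/6
    assert sum(k * k for k in range(1, n + 1)) == n * (n + 1) * (2 * n + 1) // 6
    # Problem 0.85(b): 1^3 + 2^3 + ... + n^3 = [n(n + 1)/2]^2
    assert sum(k ** 3 for k in range(1, n + 1)) == (n * (n + 1) // 2) ** 2
    # Problem 0.86: 1·2 + 2·3 + ... + n(n + 1) = n(n + 1)(n + 2)/3
    assert sum(k * (k + 1) for k in range(1, n + 1)) == n * (n + 1) * (n + 2) // 3
    # Problem 0.87: 1/(1·2) + ... + 1/(n(n + 1)) = n/(n + 1), using exact arithmetic
    assert sum(Fraction(1, k * (k + 1)) for k in range(1, n + 1)) == Fraction(n, n + 1)

for n in range(1, 50):
    check(n)
print("closed forms agree for n = 1, ..., 49")
```

Exact `Fraction` arithmetic avoids the floating-point rounding that would otherwise make the telescoping sum of 0.87 compare unequal.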
**0.94.** Prove: 2ⁿ > _n_ ² whenever _n_ > 4. **0.95.** Show that 1/( _n_ + 1) + 1/( _n_ + 2) + · · · + 1/(2 _n_ ) is greater than 13/24 whenever _n_ is greater than 1. **0.96.** Prove that 7ⁿ – 1 is divisible by 6. **0.97.** Prove that 11ⁿ – 6 is divisible by 5. **0.98.** Show that 6 · 7ⁿ – 2 · 3ⁿ is divisible by 4. **0.99.** Prove: 3ⁿ + 7ⁿ – 2 is divisible by 8. **0.100.** Prove De Morgan's laws: **(a)** The absolute complement of the intersection of _n_ subsets of a universal set is equal to the union of the absolute complements of these _n_ sets. **(b)** The absolute complement of the union of these _n_ sets is the intersection of their absolute complements. **0.101.** Show that the cardinality of the powerset of a set with _n_ elements is 2ⁿ. **0.102.** Prove that if _S_ is a transitive relation on a set _A_ , then _S_ ⁿ is a subset of _S_ for _n_ = 1, 2, 3, . . . . **0.103.** Suppose that _f_ is recursively defined as _f_ (0) = 1 and _f_ ( _n_ + 1) = 3 _f_ ( _n_ ) + 5. Find _f_ (1), _f_ (2), and _f_ (3). **0.104.** Give a recursive definition of _x_ ⁿ when _x_ is a real number and _n_ is a nonnegative integer. **0.105.** Give a recursive definition of _f_ where _f_ ( _n_ ) is the sum of the first _n_ positive integers. **0.106.** Give a recursive definition of the set of **(a)** all integers, **(b)** all positive odd integers, **(c)** all negative even integers and **(d)** all even integers. **0.107.** Construct the truth table of the statement _p_ → ( _p_ ∨ _q_ ) and determine whether it is a tautology or a contradiction or neither. **0.108.** Show that ( _p_ ′ ∨ _q_ ) ∧ ( _p_ ∧ ( _p_ ∧ _q_ )) is equivalent to ( _p_ ∧ _q_ ). **0.109.** Examine whether ( _p_ → _q_ ) → _r_ is a tautology or a contradiction. **0.110.** Construct the truth table of the statement [( _p_ → _q_ ) ∧ ( _q_ → _p_ )] ↔ ( _p_ ↔ _q_ ) and determine whether it is a contradiction. **0.111.** Construct the truth table of _q_ ↔ ( _p_ ′ ∨ _q_ ′).
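Truth-table exercises such as 0.107–0.111 can be machine-checked by enumerating every assignment of truth values; a sketch (writing _p_ → _q_ as (not p) or q, with our own helper names):

```python
from itertools import product

def is_tautology(f, nvars):
    """True when f evaluates to True under every assignment of its variables."""
    return all(f(*vals) for vals in product([False, True], repeat=nvars))

def implies(p, q):
    # Material implication: p → q is false only when p is true and q is false
    return (not p) or q

# Problem 0.107: p → (p ∨ q) is a tautology
print(is_tautology(lambda p, q: implies(p, p or q), 2))            # True

# Problem 0.109: (p → q) → r fails when p, q, r are all false
print(is_tautology(lambda p, q, r: implies(implies(p, q), r), 3))  # False
```

Each call simply builds the truth table row by row, which is exactly what the problems ask the reader to do by hand.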
**0.112.** Suppose that _p_ and _r_ are false statements and _q_ and _s_ are true statements. Find the truth values of **(a)** ( _p_ → _q_ ) → _r_ **(b)** ( _s_ → ( _p_ ∧ _r_ ′)) ∧ (( _p_ → ( _r_ ∨ _q_ )) ∧ _s_ ) **0.113.** Find the truth assignments of _p, q, r, s_ , and _t_ such that the following are satisfiable: **(a)** ( _p_ ∧ _q_ ∧ _r_ ) → ( _s_ ∨ _t_ ) **(b)** ( _p_ ′ ∧ _q_ ′) ∨ _r_ ′ **0.114.** Show that ( _p_ → _q_ ) → ( _p_ ′ ∨ _q_ ) is a tautology. **0.115.** Prove: ( _p_ ∧ _q_ ) ∧ ( _p_ ∨ _q_ )′ is a contradiction. **0.116.** Show that (( _p_ → _q_ ) → _q_ ) ↔ (( _p_ → _q_ )′ ∨ _q_ ) is a tautology. **Combinatorics** **_1.1 TWO BASIC COUNTING RULES_** Combinatorics is one of the fastest-growing areas of modern mathematics. It has many applications to several areas of mathematics and is concerned primarily with the study of finite or discrete sets (such as the set of integers) and various structures on these sets, such as arrangements, combinations, assignments, and configurations. Broadly speaking, three kinds of problems arise while studying these sets and structures on them: (1) the **existence problem** , (2) the **counting problem** , and (3) the **optimization problem**. The existence problem is concerned with the following question: Does there exist at least one arrangement of a given kind? The counting problem, on the other hand, seeks to find the number of possible arrangements or configurations of a certain pattern. The problem of finding the most efficient arrangement of a given pattern is the optimization problem. In this chapter we study techniques for solving problems that involve counting. These techniques form a basis for the study of **enumerative combinatorics** , which is really the theory of counting, where results involving counting are obtained without carrying out the exact counting process, which could be tedious.
Suppose that there are 10 mathematics majors and 15 computer science majors in a class of 25 and we are required to choose a student from the class to represent mathematics _and_ another student to represent computer science. Now there are 10 ways of choosing a mathematics major and 15 ways of choosing a computer science major from the class. Furthermore, the act of choosing a student from one area in no way depends on the act of choosing a student from the other. So it is intuitively obvious that there are 10 × 15 = 150 ways of selecting a representative from mathematics and a representative from computer science. On the other hand, if we are required to select one representative from mathematics _or_ from computer science, we have only 10 + 15 = 25 ways of accomplishing this. In the former case we used the multiplication rule of counting and in the latter the addition rule. These two rules can be stated formally as follows. **MULTIPLICATION RULE (The Rule of Sequential Counting)** Suppose that there is a sequence of _r_ events _E_ 1, _E_ 2, . . . , _E r_ such that (1) there are _n i_ ways in which _E i_( _i_ = 1, 2, . . . , _r_ ) can occur, and (2) the number of ways an event in the sequence can occur does not depend on how the events in the sequence prior to that event occurred. Then there are ( _n_ 1) · ( _n_ 2) · . . . · ( _n r_) ways in which all the events in the sequence can occur. **ADDITION RULE (The Rule of Disjunctive Counting)** Suppose that there are _r_ events _E_ 1, _E_ 2, . . . , _E r_ such that (1) there are _n i_ outcomes for _E i_( _i_ = 1, 2, . . . , _r_ ), and (2) no two events can occur simultaneously. Then there are ( _n_ 1) + ( _n_ 2) + · · · + ( _n r_) ways in which one of these _r_ events can take place. These two elementary rules are very useful in solving counting problems without carrying out explicit enumeration. 
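The figures 150 and 25 in the class-representative example can be confirmed by explicit enumeration (the student labels below are hypothetical placeholders, not from the text):

```python
from itertools import product

math_majors = [f"m{i}" for i in range(10)]  # 10 hypothetical mathematics majors
cs_majors = [f"c{i}" for i in range(15)]    # 15 hypothetical computer science majors

# Multiplication rule: one representative from each area
pairs = list(product(math_majors, cs_majors))
print(len(pairs))   # 150

# Addition rule: one representative from either area (the two events are disjoint)
either = math_majors + cs_majors
print(len(either))  # 25
```

The enumeration is exactly what the two rules let us avoid: 10 × 15 = 150 and 10 + 15 = 25 follow without listing a single pair.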
However, if one is not careful, they are likely to be misused, producing erroneous results, as may be seen from some of the examples we discuss in what follows. **Example 1.1.1** There are five characters—two letters of the alphabet followed by three digits—which appear on the back of one series of a microcomputer made by an electronics company. The number of possible computers manufactured in this series is (1) 26 × 26 × 10 × 10 × 10 = 676,000 if characters can repeat, (2) 26 × 25 × 10 × 10 × 10 = 650,000 if letters cannot repeat, and (3) 26 × 25 × 10 × 9 × 8 = 468,000 if no characters can repeat. We use the multiplication rule here. **Example 1.1.2** A professor has 25 students in her advanced calculus course and 31 students in her statistics course. Thirteen students have signed up for both the courses. There are three events here, no two of which can occur simultaneously: (1) The event that a student chosen at random has signed up for advanced calculus but not for statistics, and this can happen in 12 ways; (2) the event that a student chosen at random has signed up for statistics but not for advanced calculus, and this can happen in 18 ways; and (3) the event that a student chosen at random has signed up for both the courses and this can happen in 13 ways. By the addition rule one of these events can occur in 12 + 18 + 13 = 43 ways. In other words, the professor has 43 students in both the courses together. Notice that the event that a student chosen at random takes advanced calculus and the event that a student chosen at random takes statistics can occur simultaneously. So we cannot apply the addition rule to the two events to conclude that the professor has a total of 25 + 31 = 56 students. **Example 1.1.3** In a sightseeing group there are 8 Austrians, 5 Brazilians, and 6 Canadians. 
So by the multiplication rule there are 40 ways of choosing an Austrian and a Brazilian, 48 ways of choosing an Austrian and a Canadian, and 30 ways of choosing a Brazilian and a Canadian. Next, by the addition principle, there are 40 + 48 + 30 = 118 ways of selecting a pair of individuals of distinct nationalities from this group of tourists. A team of 3 tourists of distinct nationalities can be chosen in 8 × 5 × 6 ways, whereas a typical representative can be chosen in 8 + 5 + 6 ways. **Example 1.1.4** The number of odd integers between 0 and 99 is obviously 50. We may invoke the multiplication rule to get this result. Any integer between 0 and 99 has a unit digit and a tens digit if we write 0, 1, 2, . . . , 9 as 00, 01, 02, . . . , 09. Let _E_ be the event of choosing a digit for the unit digit. This can be done in 5 ways. Next, let _F_ be the event of choosing a digit for the tens digit. This can be done in 10 ways. Notice that the number of ways that _E_ can occur does not depend on how _F_ can occur, and vice versa. So the sequence _E, F_ (or for that matter, the sequence _F, E_ ) can occur in 50 ways. **Example 1.1.5** Suppose that we are interested in finding the number of odd integers between 0 and 100 with _distinct_ digits. Let _E_ and _F_ be as in Example 1.1.4. _E_ can be done in 5 ways as before. After that _F_ can occur in 9 ways. The number of ways that _F_ can occur does not depend on how _E_ occurs. So by the multiplication rule the sequence _E, F_ can occur in 45 ways, and consequently, there are 45 such integers. On the other hand, if _F_ is the first event, it can occur in 10 ways. Subsequently, the second event _E_ can be done in 5 ways if the tens digit is even, and in 4 ways if the tens digit is odd. In other words, the number of ways in which _E_ occurs depends on how the event _F_ occurs. So we cannot apply the multiplication rule to the sequence _F, E_ in this case. **Example 1.1.6** Suppose that _X_ is a set with _n_ elements. 
List the elements of _X_ as 1, 2, . . . , _n_ and consider the following sequence of _n_ events: The first event is to decide whether or not to pick the first element, the second event is to decide whether or not to pick the second element, and so on. Each event can occur in 2 ways and the number of ways that any of these events in the sequence can occur does not depend on how the previous events in the sequence occurred. Thus any set with _n_ elements has 2ⁿ subsets, by the multiplication rule. The class of all subsets of the set _X_ is the power set of _X_ and is denoted by _P_ ( _X_ ) as mentioned in Chapter 0. **_1.2 PERMUTATIONS_** Consider a collection _X_ of _n distinct_ objects. An **_r_ -permutation of** _X_ is an arrangement in a row of any _r_ objects from _X_. Of course, _r_ is at most _n_. Thus if _X_ is the collection of the first 5 letters a, b, c, d, and e, then edcb, dbea, and bdca are some of the several 4-permutations of _X_. The total number of _r_ -permutations of a collection of _n_ distinct objects is denoted by _P_ ( _n, r_ ). Any _r_ -permutation here can be considered as a sequence of _r_ events in which the number of ways an event can occur does not depend on how the events prior to that event occur. So we use the multiplication rule of counting to conclude that _P_ ( _n, r_ ) is equal to _n_ ( _n_ – 1)( _n_ – 2) · · · ( _n_ – _r_ + 1) since any arbitrary object from _X_ can be chosen in _n_ ways and, having chosen that, a second arbitrary object can be chosen in ( _n_ – 1) ways, and so on, until all _r_ objects are chosen. **_Permutations and the Allocation Problem_** We can approach this process of making arrangements of objects from a different point of view. Consider a set of _n distinct_ locations arranged in a definite order and suppose that we are required to allocate _r distinct_ objects to these locations such that no location can receive more than one object.
Then the number of ways of allocating these _r_ objects to the _n_ locations is also _P_ ( _n, r_ ) by the multiplication rule since any arbitrary object can be sent to one of the locations in _n_ ways, and subsequently another one can be sent in ( _n_ – 1) ways, and so on. **Example 1.2.1** If _X_ = {1, 2, 3, 4, 5, 6, 7} and _r_ = 3, the number of _r_ -permutations of _X_ is 7 × 6 × 5 = 210. Any _n_ -permutation of a set _X_ with _n_ elements is simply called a **permutation of** _X_ and the number _P_ ( _n, n_ ) of permutations of _X_ is _n_ ( _n_ – 1)( _n_ – 2) · . . . · 3 · 2 · 1, which is denoted by the factorial function _n_!. It is easy to see that _P_ ( _n, r_ ) = _n_!/( _n_ – _r_ )!. (We define 0! = 1.) [The positive integer _n_! can be extremely large even when _n_ is a small two-digit number. It is more than 3.6 million when _n_ = 10 and it is approximately equal to (2.433)(10¹⁸) when _n_ = 20.] **_Circular and Ring Permutations_** **Example 1.2.2** Consider a collection of 5 stones of different colors: blue (B), green (G), red (R), pink (P), and white (W). (a) The number of ways of making a tiepin on which these 5 stones are to be placed horizontally is, of course, 5!. (b) In how many ways can we make a tiepin on which these stones are placed in a circular pattern? The answer has to be less than 5! because some of the permutations considered in (a) are now not distinct. For example, if we rotate the permutation BGRPW once in the clockwise direction, we get the permutation GRPWB, and these two permutations are not distinct in a circular arrangement. If we fix one of the colors and then consider the permutations formed by the remaining 4 colors, these permutations are all distinct. For example, if we fix B and consider RGPW and RGWP, we get two permutations, BRGPW and BRGWP, which are distinct. Thus there are only (4!) such circular permutations. (c) In how many ways can we make a ring in which these stones are mounted?
In a ring, there is no difference between a permutation and its "mirror image." For example, BGRPW and BWPRG are the same. For every permutation in (b), there is a mirror image. So the answer now is (4!)/2. Thus the number of circular permutations of a set of _n_ elements is ( _n_ – 1)! and the number of ring permutations is (( _n_ – 1)!)/2. **_Generalized Permutations_** Let us now consider a collection _X_ of _n_ objects ( _not necessarily distinct_ ) belonging to _k_ different nonempty groups such that (1) all the objects in a group are identical, and (2) an object in a group is not identical to an object in another group. (For example, the letters in the collection _a, b, a, b, b, d, e, e, d_ can be formed into four groups: one for _a_ , one for _b_ , one for _d_ , and one for _e._ ) Assume that there are _n i_ objects in group _i_ where _i_ = 1, 2, . . . , _k_. Any arrangement in a row of these _n_ objects is called a **generalized permutation of** _X_. (For example, LINISOIL is a generalized permutation of the letters that appear in the word ILLINOIS.) The number of such generalized permutations is denoted by _P_ ( _n_ ; _n_ 1, _n_ 2, . . . , _n k_), which will be _n_! if all the objects in _X_ are distinct. **THEOREM 1.2.1** If the collection _X_ of _n_ objects consists of _k_ distinct nonempty groups such that group _i_ has _n i_ identical objects (where _i_ = 1, 2, . . . , _k_ ), then the number of generalized permutations of _X_ is ( _n_!)/( _n_ 1!)( _n_ 2!) · · · ( _n k_!). **_Proof_ :** If the objects belonging to group _i_ were all distinct, there would have been _n i_! permutations for the elements in this group. So each generalized permutation gives rise to _N_ = ( _n_ 1!)( _n_ 2!) · · · ( _n k_!) permutations of _X_ if _X_ had distinct objects. If _t_ is the total number of generalized permutations, we have ( _t_ )( _N_ ) = _n_!, from which the conclusion of the theorem follows. 
[Observe that if _k_ = _n_ , each group has exactly one element, which is equivalent to the statement that the objects in _X_ are distinct, verifying that _P_ ( _n_ ; 1, 1, . . . , 1), where 1 is repeated _n_ times, is equal to _n_!, as it should be.] **Example 1.2.3** The 9 letters that appear in the word CONSENSUS can be grouped into 6 groups: the group consisting of three S's, the group consisting of two N's, and four groups consisting of each of the four remaining distinct letters. The total number of generalized permutations in this case is (9!)/(3!)(2!)(1!)(1!)(1!)(1!) = 30,240. If the total number of objects in any ( _k_ – 1) of these _k_ groups is _r_ (where _r_ ≤ _n_ ), the formula for the number of generalized permutations can be expressed as _P_ ( _n, r_ )/( _n_ 1!)( _n_ 2!) · · · ( _n k_–1!) since _n_! = _P_ ( _n, r_ ) · ( _n_ – _r_ )! and _n k_ = ( _n_ – _r_ ). Thus if _n i_ ( _i_ = 1, 2, . . . , _k_ ) are _k_ positive integers whose sum is _r_ where _r_ ≤ _n_ and if we define _P_ ( _n_ ; _n_ 1, _n_ 2, . . . , _n k_) = _P_ ( _n, r_ )/( _n_ 1!)( _n_ 2!) · · · ( _n k_!), we see that (1) _P_ ( _n_ ; _n_ 1, _n_ 2, . . . , _n k_–1) = _P_ ( _n_ ; _n_ 1, _n_ 2, . . . , _n k_–1, _m_ ) where _m_ = _n_ – ( _n_ 1 \+ _n_ 2 \+ · · · + _n k_–1) (2) _P_ ( _n_ ; _r_ ) = _P_ ( _n_ ; _n_ – _r_ ) = _P_ ( _n_ ; _r, n_ – _r_ ) (3) ( _r_!) _P_ ( _n_ ; _r_ ) = _P_ ( _n, r_ ) **Example 1.2.4** We now have the following generalization of Theorem 1.2.1, the proof of which is left as a simple exercise. **THEOREM 1.2.2** If there are _n i_ identical objects in group _i_ ( _i_ = 1, 2, . . . , _k_ ) and if _r_ is the total number of the objects in these _k_ groups, these _r_ objects can be placed in _n_ distinct locations, so that each location receives at most one object, in _t_ ways, where _t_ = _P_ ( _n_ ; _n_ 1, _n_ 2, . . . , _n k_). In particular, if each group has exactly one object, then _t_ = _P_ ( _n, r_ ), which is the number of _r_ -permutations of a set with _n_ elements.
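Theorem 1.2.1 can be checked both through the formula and by brute-force enumeration. A Python sketch (function name ours):

```python
from math import factorial
from itertools import permutations

def gen_perm_count(counts):
    """P(n; n1, ..., nk) = n!/((n1!)(n2!)...(nk!)) for group sizes `counts`."""
    total = factorial(sum(counts))
    for c in counts:
        total //= factorial(c)
    return total

# Example 1.2.3: CONSENSUS has three S's, two N's, and C, O, E, U once each
assert gen_perm_count([3, 2, 1, 1, 1, 1]) == 30240

# Brute-force cross-check: distinct arrangements of the letters of ILLINOIS
# (three I's, two L's, and N, O, S once each)
assert len(set(permutations("ILLINOIS"))) == gen_perm_count([3, 2, 1, 1, 1])
```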
**Example 1.2.5** Suppose that there are 100 spots (marked serially from 100 to 199) in the showroom of a car dealership for displaying new cars in which 15 sports cars, 25 compact cars, 30 station wagons, and 20 vans are to be parked for display. Assume that the automobiles in each category are brand new and identical in all respects, including color. The dealer can then park the collection of 90 vehicles for display (leaving 10 blank spots in the lot) in _P_ (100, 90)/(15!)(25!)(30!)(20!) ways. **_1.3 COMBINATIONS_** As in section 1.2, let _X_ be a collection of _n distinct objects_. Any collection of _r_ distinct objects from _X_ is called an **r-combination of** _X_. In other words, if _X_ is a set with _n_ elements, any subset of _X_ with _r_ elements is an _r_ -combination of _X_. In an _r_ -combination the order in which the _r_ elements are chosen is not important, unlike in the case of an _r_ -permutation. The number of _r_ -combinations of a set with _n_ elements is denoted by _C_ ( _n, r_ ), which is precisely the number of subsets of cardinality _r_. Thus there are _P_ ( _n_ , 2) _ordered pairs_ and _C_ ( _n_ , 2) _unordered pairs_ of two elements in a set of _n_ elements. Of course, _C_ ( _n_ , 0) = _C_ ( _n, n_ ) = 1. What is the relation between _C_ ( _n, r_ ) and _P_ ( _n, r_ )? Consider any subset _A_ of _X_ with _r_ elements. These _r_ distinct elements can be arranged in ( _r_!) ways. Thus there are ( _r_!) permutations associated with every _r_ -element subset of _X_. Of course, by definition, the number of _r_ -element subsets of _X_ is _C_ ( _n, r_ ). Thus the total number of _r_ -permutations is the product of ( _r_!) and _C_ ( _n, r_ ) by the multiplication rule. So we have the following important theorem. **THEOREM 1.3.1** _C_ ( _n, r_ ) · ( _r_!) 
= _P_ ( _n, r_ ) **COROLLARY** _C_ ( _n, r_ ) = _C_ ( _n, n_ – _r_ ) **_Proof_ :** In other words, if _X_ is a set with _n_ elements, the number of subsets of _X_ with _r_ elements is equal to the number of subsets of _X_ with ( _n_ – _r_ ) elements. **_Combinations and the Allocation Problem_** As in the case of permutations, we can interpret combinations from a different point of view, as a problem of allocations. As before, let _X_ be a set of _n distinct_ locations arranged in a definite order and consider a collection of _r_ objects that are _identical_. These objects are to be allocated to these _n_ locations such that no location receives more than one object. Let _t_ be the total number of ways of allocating these _r_ objects. If all the objects were distinct, each _such_ allocation would give rise to ( _r_!) allocations. In that case the total number of allocations would have been ( _t_ )( _r_!). But the total number of allocations if the objects were distinct is _P_ ( _n, r_ ). Thus _t_ = _P_ ( _n, r_ )/( _r_!) = _C_ ( _n_ , _r_ ). **THEOREM 1.3.2 (Pascal's Formula)** _C_ ( _n, r_ ) = _C_ ( _n_ – 1, _r_ ) + _C_ ( _n_ – 1, _r_ – 1) **_Proof_ :** Let _X_ be a set with _n_ elements and _Y_ be any subset of _X_ with ( _n_ – 1) elements. Let _t_ be the element of _X_ that is not in _Y_. Every _r_ -element subset of _X_ is either an _r_ -element subset of _Y_ or the union of a subset of _Y_ with ( _r_ – 1) elements and the singleton set consisting of _t_. In the former category there are _C_ ( _n_ – 1, _r_ ) sets and in the latter there are _C_ ( _n_ – 1, _r_ – 1) sets. In other words, the total number of subsets of _X_ with _r_ elements is the sum of _C_ ( _n_ – 1, _r_ ) and _C_ ( _n_ – 1, _r_ – 1). Pascal's formula is an example of a _combinatorial identity_ , which was proved using a _combinatorial argument_. This identity can be proved algebraically also. A few combinatorial identities are given at the end of this chapter as exercises.
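Both Theorem 1.3.1 and Pascal's formula can be verified by direct enumeration; the following Python sketch (variable names ours) also makes the partition used in the proof of Pascal's formula concrete.

```python
from math import comb, factorial
from itertools import combinations, permutations

# Theorem 1.3.1: C(n, r) * r! = P(n, r), checked by enumeration for n = 6, r = 3
X = range(6)
assert len(list(combinations(X, 3))) * factorial(3) == len(list(permutations(X, 3)))

# Pascal's formula C(n, r) = C(n - 1, r) + C(n - 1, r - 1)
for n in range(1, 12):
    for r in range(1, n):
        assert comb(n, r) == comb(n - 1, r) + comb(n - 1, r - 1)

# The proof's partition, made concrete for n = 5, r = 3 with t = 5:
# every 3-subset of X either avoids t (a 3-subset of Y) or contains t
X = {1, 2, 3, 4, 5}
subsets = [set(s) for s in combinations(X, 3)]
avoid_t = [s for s in subsets if 5 not in s]    # r-subsets of Y = X \ {t}
contain_t = [s for s in subsets if 5 in s]      # {t} joined with an (r-1)-subset of Y
assert len(avoid_t) == comb(4, 3) and len(contain_t) == comb(4, 2)
```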
Here is another example. **Example 1.3.1** _C_ (2 _n_ , 2) = 2 _C_ ( _n_ , 2) + _n_ 2 **_Proof_ :** Let _X_ be any set with 2 _n_ elements that is partitioned into two sets _Y_ and _Z_ , each containing _n_ elements. The number of subsets of _X_ with two elements is _C_ (2 _n_ , 2). Any subset of _X_ has two elements if and only if it belongs to one of the following three classes: (1) the class of all subsets of _Y_ with two elements, (2) the class of all subsets of _Z_ with two elements, and (3) the class of all subsets of _X_ with two elements such that each subset in this class has one element from _Y_ and one element from _Z_. Classes (1) and (2) have _C_ ( _n_ , 2) sets each. An element from _Y_ can be chosen in _n_ ways and an element from _Z_ can be chosen in _n_ ways. So class (3) has ( _n_ )( _n_ ) sets. Thus the number of subsets of _X_ with two elements is _C_ ( _n_ , 2) + _C_ ( _n_ , 2) + ( _n_ )( _n_ ). **_The Allocation Problem and Generalized Combinations_** Now consider a collection of _n_ objects (not necessarily distinct) belonging to _k_ distinct groups, as in the hypothesis of Theorem 1.2.1. The _n_ 1 identical objects of group 1 can be placed in the set of _n_ locations (such that no location receives more than one object) in _C_ ( _n, n_ 1) ways. Then the _n_ 2 objects of the next group can be placed in _C_ ( _n_ – _n_ 1, _n_ 2) ways. We proceed in this way until all spots are filled. By the multiplication rule the total number of ways in which all the _n_ spots can be filled is _C_ ( _n, n_ 1) · _C_ ( _n_ – _n_ 1, _n_ 2) · · · _C_ ( _n_ – _n_ 1 – · · · – _n k_–1, _n k_), and this number is denoted by _C_ ( _n_ ; _n_ 1, _n_ 2, . . . , _n k_). There is another way of looking at this allocation process. Suppose that _X_ is a collection of _n distinct_ objects and these _n_ objects are to be allocated to _k_ locations so that location _i_ gets _n i_ objects ( _i_ = 1, 2, . . . , _k_ ) where _n_ 1 \+ _n_ 2 \+ · · · + _n k_ = _n_. Then any _n_ 1 objects can be selected from _X_ and allocated to location 1 in _C_ ( _n_ , _n_ 1) ways.
Next, from the remaining ( _n_ – _n_ 1) objects in _X_ , any _n_ 2 objects can be allocated to location 2 in _C_ ( _n_ – _n_ 1, _n_ 2) ways. We proceed like this until all the objects are exhausted. We have the following result connecting generalized permutations and combinations. **THEOREM 1.3.3** _P_ ( _n_ ; _n_ 1, _n_ 2, . . . , _n k_) = _C_ ( _n_ ; _n_ 1, _n_ 2, . . . , _n k_) where _n_ 1 \+ _n_ 2 \+ · · · + _n k_ ≤ _n_. **_Proof_ :** Multiplying out the terms in the product that defines the right-hand side of the equation, we get _C_ ( _n_ ; _n_ 1, _n_ 2, _n_ 3, . . . , _n k_) = ( _n_!)/( _n_ 1)!( _n_ 2)! · · · ( _n k_)!, and thus the theorem is established. Observe that if all the objects in _X_ are distinct and if we take _r_ objects, we have _P_ ( _n_ ; 1, 1, . . . , 1) = _C_ ( _n_ ; 1, 1, . . . , 1), where 1 is repeated _r_ times. Now _P_ ( _n_ ; 1, 1, . . . , 1) = _P_ ( _n, r_ )/(1!)(1!) · · · (1!) = _P_ ( _n, r_ ) and _C_ ( _n_ ; 1, 1, . . . , 1) = _C_ ( _n_ , 1) · _C_ ( _n_ – 1, 1) · · · _C_ ( _n_ – _r_ \+ 1, 1) = _n_ ( _n_ – 1) · · · ( _n_ – _r_ \+ 1) and we once again see that _P_ ( _n, r_ ) = _n_ ( _n_ – 1)( _n_ – 2) · · · ( _n_ – _r_ \+ 1). **COROLLARY** _C_ ( _n_ ; _r_ ) = _C_ ( _n_ ; _n_ – _r_ ) = _C_ ( _n_ ; _r, n_ – _r_ ) = _C_ ( _n, r_ ) = _C_ ( _n, n_ – _r_ ) **_Proof_ :** [Notice that _C_ ( _n, r_ ) = _C_ ( _n_ ; _r_ ) = _P_ ( _n_ ; _r_ ), but _P_ ( _n, r_ ) = ( _r_!) _P_ ( _n_ ; _r_ ) _._ ] **Example 1.3.2** **_The Multinomial Theorem_** **THEOREM 1.3.4** ( **The Multinomial Theorem)** In a typical term in the expansion of ( _x_ 1 \+ _x_ 2 \+ · · · + _x k_) _n_ the variable _x i_ ( _i_ = 1, 2, . . . , _k_ ) appears _n i_ times (where _n_ 1 \+ _n_ 2 \+ · · · + _n k_ = _n_ ) and the coefficient of this typical term is _C_ ( _n_ ; _n_ 1, _n_ 2, . . . , _n k_). **_Proof_ :** The first part of the assertion is obvious since the expansion is the product of _n_ expressions where each expression is the sum of the _k_ variables.
A typical term here is nothing but a generalized permutation of _n_ objects in a collection _X_ consisting of _k_ groups, and therefore the coefficient of this typical term is the number of such generalized permutations. **Example 1.3.3** The coefficient of _a_ 3 _b_ 2 _c_ 6 _d_ 4 in the expansion of ( _a_ \+ _b_ \+ _c_ \+ _d_ )15 is (15!)/(3!)(2!)(6!)(4!). **Example 1.3.4** ( **The Binomial Theorem)** The multinomial theorem when _k_ = 2 is known as the binomial theorem, which can be stated as ( _x_ \+ _y_ ) _n_ = Σ _C_ ( _n, n_ – _r_ ) _x_ _n_ – _r_ _y r_ , where _r_ varies from 0 to _n_. [The right-hand side of this equation is called the **binomial expansion** of ( _x_ \+ _y_ ) _n._ The coefficients _C_ ( _n, r_ ) that appear in the binomial expansion are called **binomial coefficients.]** The binomial coefficients of ( _x_ \+ _y_ ) _n_ can be computed if we know the binomial coefficients of ( _x_ \+ _y_ ) _n_ – 1 by using Pascal's formula: _C_ ( _n, r_ ) = _C_ ( _n_ – 1, _r_ ) + _C_ ( _n_ – 1, _r_ – 1). So the binomial coefficients can be arranged in the form of a triangle known as **Pascal's triangle:** In this representation, the ( _n_ \+ 1) consecutive binomial coefficients of the binomial expansion of ( _x_ \+ _y_ ) _n_ appear in the _n_ th row. Notice that a typical element in a row (other than the first and the last) is the sum of the two terms just above that element in the preceding row, and that is exactly the content of Pascal's formula. In each row the first element as well as the last element is 1, indicating the fact that if _A_ is a set with _n_ elements, then there is only one subset of _A_ with _n_ elements and there is only one subset of _A_ with no elements. **_Partitioning of a Finite Set_** Given a set _A_ of cardinality _n_ , a combinatorial problem of interest is to find the number of ways _A_ can be partitioned into _k_ subsets such that subset _A i_ ( _i_ = 1, 2, . . . , _k_ ) has exactly _n i_ elements.
For example, if _A_ = {1, 2, 3, 4, 5, 6}, the problem will be to find the number of ways of partitioning A into (1) two subsets such that one has 2 elements and the other has 4 elements or (2) two subsets such that each has 3 elements or (3) three subsets such that each has 2 elements, and so on. This problem is equivalent to the allocation problem of allocating _n_ distinct objects to _k_ locations discussed earlier when the cardinalities of each set in the partition are distinct as in (1). There are _C_ (6; 2, 4) ways of allocating the 6 elements from _A_ to two locations so that location 1 gets 2 elements and location 2 gets 4 elements. The number of ways of partitioning A into two sets so that one set has 2 elements and the other has 4 is also _C_ (6; 2, 4). But when the subsets in a partition have equal cardinalities, we have to take care of those situations where repetitions occur. For example, if _P_ = {1, 2, 3} and _Q_ = {4, 5, 6}, the partition { _P, Q_ } and the partition { _Q, P_ } are the same. But allocating _P_ to location 1 and _Q_ to location 2 is not the same as allocating _Q_ to location 1 and _P_ to location 2. The number of partitions of _A_ into two subsets of equal cardinality is _C_ (6; 3, 3)/2. More generally, we have the following result, which is an extension of the allocation theorem and the multiplication rule. **THEOREM 1.3.5** The number of ways of partitioning a set of cardinality _n_ into a class consisting of _p i_ subsets each of cardinality _n i_ ( _i_ = 1, 2, . . . 
, _k_ ) where no two of the numbers _n i_ are equal is ( _n_!)/[( _n_ 1!)^ _p_ 1 ( _n_ 2!)^ _p_ 2 · · · ( _n k_!)^ _p k_ ( _p_ 1!)( _p_ 2!) · · · ( _p k_!)]. **Example 1.3.5** (a) The number of ways of _allocating_ 43 students into 7 _different_ dormitories such that the first two get 5 students each, the next three get 6 students each, the sixth dormitory gets 7 students, and the seventh dormitory gets 8 students is (43!)/[(5!)^2 (6!)^3 (7!)(8!)]. (b) The number of ways of _dividing_ 43 students _into 7 groups_ such that there are 5 students in each of 2 groups, 6 students in each of 3 groups, 7 students in one group, and 8 students in one group is (43!)/[(5!)^2 (6!)^3 (7!)(8!)(2!)(3!)]. **_1.4 MORE ON PERMUTATIONS AND COMBINATIONS_** If _X_ is a set with _n_ elements, we know that an _r_ -permutation of _X_ is an arrangement of elements from _X_ in which no elements repeat. Similarly, an _r_ -combination is a selection of elements from _X_ in which no elements repeat. In both cases _r_ cannot exceed _n_. If we allow repetitions, there is no restriction on _r_. (Since _X_ is a set, the _n_ elements in it are all distinct.) An _r_ - **sequence of _X_** is an arrangement of _r_ elements from _X_ in which the elements may repeat, but the order in which these elements appear is important. For example, _aabdac_ and _aadbac_ are two distinct 6-sequences from the set _X_ = { _a, b, c, d_ }. Any _r_ -permutation is obviously an _r_ -sequence. On the other hand, any _r_ -sequence with distinct elements is an _r_ -permutation. A simple application of the multiplication rule shows that the number of _r_ -sequences in a set with _n_ elements is _n r_. Any collection of _r_ objects (not necessarily distinct) chosen from a set _X_ of _n_ elements is called an _r_ - **collection** from _X_. Unlike an _r_ -sequence, the order in which the elements are chosen is not important in an _r_ -collection. The 4-collection [ _a, a, b, c_ ] is not different from the 4-collection [ _a, b, c, a_ ]. Both represent the same 4-collection. Any _r_ -combination is an _r_ -collection. If the elements in an _r_ -collection are distinct, then it is an _r_ -combination.
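The repetition correction for equal-size blocks can be checked by brute force. Here is a small Python sketch (variable names ours) for the set {1, . . . , 6} discussed above.

```python
from math import comb, factorial
from itertools import combinations

X = frozenset(range(1, 7))  # the set {1, ..., 6}

# Partitions into a 2-set and a 4-set: no repetition correction is needed
two_four = {frozenset({frozenset(A), X - frozenset(A)}) for A in combinations(X, 2)}
assert len(two_four) == comb(6, 2) == 15

# Partitions into two 3-sets: each partition {P, Q} is produced twice,
# once as A = P and once as A = Q, so we divide by 2!
three_three = {frozenset({frozenset(A), X - frozenset(A)}) for A in combinations(X, 3)}
assert len(three_three) == comb(6, 3) // factorial(2) == 10
```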
For example, if _X_ = { _a_ , _b, c, d_ } and if _r_ = 3, the set of all 3-collections from _X_ will include every subset of _X_ with 3 elements and collections such as [ _a, a, a_ ], [ _a, b, b_ ], [ _d, a, d_ ], and so on. On the other hand, if _r_ = 5, the collection [ _a, b, b, b, d_ ] is one of the ways of choosing 5 elements from _X_ , and no subset of _X_ can be a 5-collection. Given a set of cardinality _n_ and an arbitrary positive integer _r_ , in how many ways can one choose _r_ elements (with repetitions) from _X_? Here is the answer. **THEOREM 1.4.1** If _X_ is a set of cardinality _n_ , then the number of _r_ -collections from _X_ is _C_ ( _r_ \+ _n_ – 1, _n_ – 1), where _r_ is any positive integer. **_Proof_ :** Let _X_ = {1, 2, 3, . . . , _n_ }. Let _u_ be an _r_ -collection from _X_ in which 1 repeats _x_ 1 times, 2 repeats _x_ 2 times, . . . , and _n_ repeats _x n_ times. This _r_ -collection can be represented as 1 · · · 1 2 · · · 2 . . . _n_ · · · _n_ , where the notation _i_ · · · _i_ means that the symbol _i_ repeats _x i_ times. Similarly, let _v_ be another _r_ -collection in which 1 repeats _y_ 1 times, 2 repeats _y_ 2 times, . . . , and _n_ repeats _y n_ times. Then _v_ has a similar representation in which the notation _i_ · · · _i_ means that the symbol _i_ repeats _y i_ times. Observe that in the representation of _u_ as well as in the representation of _v_ , there is a gap between 1 and 2, a gap between 2 and 3, . . . , a gap between ( _n_ – 1) and _n_. In each representation there are ( _n_ – 1) gaps. What distinguishes one _r_ -collection from another is where these gaps are located in a typical representation. Each representation has _r_ symbols and ( _n_ – 1) gaps. So each representation can be considered as a set of _r_ \+ _n_ – 1 distinct locations. All the _n_ – 1 gaps are identical. An allocation of these ( _n_ – 1) gaps to the ( _r_ \+ _n_ – 1) locations defines an _r_ -collection.
Thus the number of distinct _r_ -collections is the same as the number of ways of allocating ( _n_ – 1) identical objects to ( _r_ \+ _n_ – 1) distinct locations so that each location receives at most one object. This number is _C_ ( _r_ \+ _n_ – 1, _n_ – 1), as we saw in Section 1.3. **Example 1.4.1** Let _X_ = { _a, b, c, d_ }. The total number of 5-collections from _X_ will be _C_ (5 + 4 – 1, 4 – 1) = 56. The following theorem is an equivalent version of Theorem 1.4.1. **THEOREM 1.4.2** (a) The number of distinct solutions in nonnegative integers of the linear equation (in _n_ variables) _x_ 1 \+ _x_ 2 \+ · · · + _x n_ = _r_ is _C_ ( _r_ \+ _n_ – 1, _n_ – 1). (b) The number of distinct solutions in nonnegative integers of the linear inequality (in _n_ variables) _x_ 1 \+ _x_ 2 \+ · · · + _x n_ ≤ _r_ is _C_ ( _r_ \+ _n, n_ ). (c) The number of terms in the multinomial expansion of ( _x_ 1 \+ _x_ 2 \+ · · · + _x n_) _r_ is _C_ ( _r_ \+ _n_ – 1, _n_ – 1). **_Proof_ :** (a) Every solution _x i_ = _s i_ ( _i_ = 1, 2, . . . , _n_ ) in nonnegative integers corresponds to a collection of _r_ elements (from the set _X_ consisting of the _n_ variables) in which _x i_ repeats _s i_ times, where _s i_ ≤ _r_ , and vice versa. The number of such collections is _C_ ( _r_ \+ _n_ – 1, _n_ – 1) by Theorem 1.4.1. (b) Let _y_ be a nonnegative variable such that _x_ 1 \+ _x_ 2 \+ · · · + _x n_ \+ _y_ = _r_. ( _y_ is called the **slack variable**.) We now have a linear equation in ( _n_ \+ 1) variables. A solution in nonnegative integers of this equation in ( _n_ \+ 1) variables is a solution in nonnegative integers of the inequality in _n_ variables, and vice versa. Thus the required number is _C_ ( _r_ \+ _n, n_ ). (c) Each term in the expansion can be considered as a product of the _n_ variables in which the sum of the exponents of the variables is _r_.
Therefore, the number of terms in the expansion is equal to the number of collections of _r_ elements from the set _X_ consisting of the _n_ variables where repetitions are allowed. **Example 1.4.2** In an undergraduate dormitory there are several freshmen, sophomores, juniors, and seniors. (a) In how many ways can a team of 10 students be chosen to represent the dormitory? (b) In how many ways can a team of 10 be chosen such that it has at least one freshman, at least one sophomore, at least two juniors, and at least two seniors? **Solution** (a) If _p, q, r_ , and _s_ are the number of students of each class in the team, then the number of ways the team can be chosen is equal to the number of solutions in nonnegative integers of the equation _p_ \+ _q_ \+ _r_ \+ _s_ = 10. So the answer is _C_ (13, 3) = 286. (b) In this case _p_ > 0, _q_ > 0, _r_ > 1, and _s_ > 1. Write _p_ = _p_ ′ + 1, _q_ = _q_ ′ + 1, _r_ = _r_ ′ + 2, and s = _s_ ′ + 2. So the number of ways will be equal to the number of solutions in nonnegative integers of the equation _p_ ′ + _q_ ′ + _r_ ′ + _s_ ′ + 6 = 10 and the answer is _C_ (7, 3) = 35. **_The Allocation Problem in the General Setting_** We now consider the problem of allocating _r identical_ objects to _n_ distinct locations such that each location can accommodate as many objects as necessary. In how many ways can we accomplish this? If the number of objects placed in location _i_ is _x i_ (where _i_ = 1, 2, . . . , _n_ ), any solution of the equation _x_ 1 \+ _x_ 2 \+ · · · + _x n_ = _r_ in nonnegative integers corresponds to a way of allocating these _r_ objects to the _n_ locations, and vice versa. Thus there are _C_ ( _r_ \+ _n_ – 1, _n_ – 1) ways of placing _r_ identical objects in _n_ distinct locations. 
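All of these counts can be confirmed by direct enumeration; the Python sketch below (variable names ours) checks Theorem 1.4.1 against Example 1.4.1 and Theorem 1.4.2(a) against Example 1.4.2.

```python
from math import comb
from itertools import combinations_with_replacement, product

# Theorem 1.4.1 / Example 1.4.1: 5-collections from {a, b, c, d}
X = "abcd"
collections = list(combinations_with_replacement(X, 5))
assert len(collections) == comb(5 + 4 - 1, 4 - 1) == 56

# Theorem 1.4.2(a) / Example 1.4.2(a): nonnegative solutions of p+q+r+s = 10
solutions = [s for s in product(range(11), repeat=4) if sum(s) == 10]
assert len(solutions) == comb(13, 3) == 286

# Example 1.4.2(b): at least 1 freshman, 1 sophomore, 2 juniors, 2 seniors
constrained = [s for s in solutions
               if s[0] >= 1 and s[1] >= 1 and s[2] >= 2 and s[3] >= 2]
assert len(constrained) == comb(7, 3) == 35
```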
We combine this observation with Theorems 1.4.1 and 1.4.2 to make the following assertion: **THEOREM 1.4.3** Let _L_ = the number of ways of choosing _r_ elements (with repetitions) from a set that has _n_ elements _M_ = the number of ways of allocating _r_ identical objects to _n_ distinct locations _N_ = the number of solutions in nonnegative integers of the equation _x_ 1 \+ _x_ 2 \+ · · · + _x n_ = _r_ Then _L_ = _M_ = _N_ = _C_ ( _r_ \+ _n_ – 1, _n_ – 1) We now summarize the four cases of permutations and combinations (without or with repetitions) of _r_ elements from a set of _n_ distinct elements and interpret these results as two models of counting as follows. **The selection model** The number of ways of selecting _r_ elements from a set of _n_ elements is: 1. _P_ ( _n, r_ ) if the elements selected are distinct and the order in which they are selected is important. 2. _C_ ( _n, r_ ) if the elements selected are distinct and the order in which they are selected is not important. 3. _n r_ if the elements selected are not necessarily distinct and the order is important. 4. _C_ ( _r_ \+ _n_ – 1, _n_ – 1) if the elements selected are not necessarily distinct and the order is not important. **The allocation model** The number of ways of allocating _r_ objects to _n_ distinct locations is: 1. _P_ ( _n, r_ ) if the objects are distinct and no location can take more than one object. 2. _C_ ( _n, r_ ) if the objects are identical and no location can take more than one object. 3. _n r_ if the objects are distinct and there is no restriction on the number of objects in a location. 4. _C_ ( _r_ \+ _n_ – 1, _n_ – 1) if the objects are identical and there is no restriction on the number of objects in a location. These conclusions can be summarized in Table 1.4.1. We conclude this section with the following theorem, which summarizes the various cases of allocations considered so far.
**THEOREM 1.4.4** (a) If _r_ is at most _n_ , a collection of _r distinct_ objects can be allocated to _n_ locations, so that no location can receive more than one object, in _P_ ( _n, r_ ) ways. (b) A collection of _r distinct_ objects can be allocated to _n_ locations in _n r_ ways if there is no restriction on the number of objects that a location can receive. (c) If _r_ is at most _n_ , a collection of _r identical_ objects can be allocated to _n_ locations so that no location can receive more than one object in _C_ ( _n, r_ ) ways. (d) A collection of _r identical_ objects can be allocated to _n_ locations such that location _i_ gets at least _p i_ objects in _C_ ( _r – p_ \+ _n_ – 1, _n_ – 1) ways, where _p_ = _p_ 1 \+ _p_ 2 \+ · · · + _p n_. (Theorem 1.4.3 is a special case when each _p i_ is 0.) TABLE 1.4.1 (e) Suppose that there are _k_ types of objects such that type _i_ has _n i_ objects ( _i_ = 1, 2, . . . , _k_ ). Objects belonging to the same type are identical and two objects belonging to two different types are not identical. Then these _n_ 1 \+ _n_ 2 \+ · · · + _n k_ objects can be allocated to _n_ locations so that no location can receive more than one object in _P_ ( _n_ ; _n_ 1, _n_ 2, . . . , _n k_) ways. (f) A collection of _n_ 1 \+ _n_ 2 \+ · · · + _n k distinct_ objects can be allocated to _k_ locations so that location _i_ receives exactly _n i_ objects ( _i_ = 1, 2, . . . , _k_ ) in _C_ ( _n_ ; _n_ 1, _n_ 2, . . . , _n k_) ways. (g) _P_ ( _n_ ; _n_ 1, _n_ 2, . . . , _n k_) = _C_ ( _n_ ; _n_ 1, _n_ 2, . . . , _n k_) = _P_ ( _n, r_ )/[( _n_ 1!)( _n_ 2!) . . . ( _n k_!)] where _n_ 1 \+ _n_ 2 \+ · · · + _n k_ = _r_. **_1.5 THE PIGEONHOLE PRINCIPLE_** This is a principle that is very obvious and looks very simple, as though it has no major significance. However, in practice it is of great importance and power since its generalizations involve some profound and deep results in combinatorial theory and in number theory.
We are using the pigeonhole principle when we say that in any group of three people at least two are of the same sex. Suppose that the newly formed computer science department in a college has 10 faculty members and only 9 offices to accommodate them. Then the underlying idea behind the obvious assertion that at least one office will have more than one occupant is again the pigeonhole principle. If there are 19 faculty members instead of 10, at least one office will have more than two occupants. Similarly, if there are at least 367 students in a residence hall, it is equally obvious that at least two of them will have the same birthday. It is reported that the scalp of a human being has at most 99,999 hairs. So in any city whose population exceeds 4 million there will be at least 41 people (a bald scalp has no hair) with the same number of hairs! We can cite several examples like this. The basic idea that governs all these instances is the simple fact known as the **Dirichlet pigeonhole principle** , which is stated formally as follows: If _n_ \+ 1 or more pigeons occupy _n_ pigeonholes, there will be more than one pigeon in at least one pigeonhole. More generally, if _kn_ \+ 1 or more pigeons occupy _n_ pigeonholes, there will be more than _k_ pigeons in at least one pigeonhole, where _k_ is a positive integer. Here are some examples to illustrate this principle. **Example 1.5.1** In a round-robin tournament (in which every player plays against every other player exactly once), suppose that each player wins at least once. Then there are at least two players with the same number of wins. Suppose that there are _n_ players. The number of wins for a player is 1 or 2 or 3 . . . or ( _n_ – 1). These ( _n_ – 1) numbers correspond to ( _n_ – 1) pigeonholes in which the _n_ players are to be housed. So at least two of them should be in the same pigeonhole and they have the same number of wins. **Example 1.5.2** There are 18 residence halls on campus.
The dean of students would like to conduct a survey in any one of these halls about the use of microcomputers, and to do this she has to form a committee of 5 students from the hall chosen for the survey. An advertisement in the campus paper asks for volunteers from these 18 halls. At least how many responses to the advertisement are sufficient before the dean can choose a hall and form a committee? **Solution**. The answer is (4) (18) + 1 = 73 by the pigeonhole principle. **Example 1.5.3** A bag contains exactly 5 red, 8 blue, 10 white, 12 green, and 7 yellow marbles. Find the least number of marbles to be chosen which will guarantee that there will be (a) at least 4 marbles of the same color, (b) at least 6 marbles of the same color, (c) at least 7 marbles of the same color, and (d) at least 9 marbles of the same color. (Here each color represents a pigeonhole. The number of pigeonholes is _n_ = 5.) **Solution** (a) If at least 4 marbles are of the same color, there is a pigeonhole whose occupancy is more than 3. So by applying the generalized pigeonhole principle with _k_ = 3, the number of marbles to be chosen is at least (3) · (5) + 1 = 16. (b) _n_ = 5 and _k_ = 5. So the number is 26. (c) _n_ = 5 and _k_ = 6. Notice that there is an upper limit on the number of red marbles. There are only 5 red marbles. So in this case the required number is [(6) · (5) + 1] – (6 – 5) = 30. (d) Now _n_ = 5 and _k_ = 8 with upper bounds of 5 for red and 7 for yellow. So the number is [(8) · (5) + 1] – (8 – 5) – (8 – 7) – (8 – 8) = 37. If _m_ and _n_ are positive integers, then the **floor** of _m_ / _n_ is the largest integer that is less than or equal to _m_ / _n_ and the **ceiling** of _m_ / _n_ is the smallest integer greater than or equal to _m_ / _n_. (For example, the floor of 38/9 is 4 and the ceiling is 5.) The following extension of the pigeonhole principle is easily established.
**THEOREM 1.5.1** (a) If _m_ pigeons are allotted to _n_ pigeonholes, then at least one hole has more than _k_ pigeons, where _k_ is the floor of ( _m_ – 1)/ _n_. (b) If _m_ = _p_ 1 \+ _p_ 2 \+ · · · + _p n_ – _n_ \+ 1 pigeons (each _p i_ is a positive integer) are allotted to _n_ pigeonholes, then the first pigeonhole has at least _p_ 1 pigeons, or the second pigeonhole has at least _p_ 2 pigeons, . . . , or the _n_ th pigeonhole has at least _p n_ pigeons. **_Proof_ :** (a) Now ( _n_ ) · ( _k_ ) ≤ ( _m_ – 1) < _m_. If the number of pigeons is exactly _n_ · _k_ , it is possible to allocate _k_ pigeons to each hole. But the number of pigeons is _m_ , which is greater than _n_ · _k_. So there is at least one hole with more than _k_ occupants. (b) If pigeonhole _i_ had at most ( _p i_ – 1) pigeons for every _i_ , the total number of pigeons would be at most _p_ 1 \+ _p_ 2 \+ · · · + _p n_ – _n_ , which is less than _m_. So at least one pigeonhole _i_ has at least _p i_ pigeons. **Example 1.5.4** A bag contains exactly 6 red, 5 white, and 7 blue marbles. Find the least number of marbles to be selected which will ensure that either at least 3 red or at least 4 white or at least 5 blue marbles are picked. **Solution** _First Method_ (Using Theorem 1.5.1). Here _n_ = 3, _p_ 1 = 3, _p_ 2 = 4, and _p_ 3 = 5. So _m_ = (3 + 4 + 5) – 3 + 1 = 10. _Second Method_. Let the number of red, white, and blue marbles to be selected be _x, y_ , and _z_ , respectively. We require that _x_ is at least 3 or _y_ is at least 4 or _z_ is at least 5. This situation will not happen if _x_ is at most 2 and _y_ is at most 3 and _z_ is at most 4, which implies that _x_ \+ _y_ \+ _z_ is at most 9. Thus we have to select at least 10 marbles. **Example 1.5.5** In any group of 6 people there are 3 people known to one another or there are 3 total strangers. **Proof**. Let { _A_ , _B, C, D, E, F_ } be the set of 6 people and let _Y_ be a room in which individuals known to _A_ are seated. Let _Z_ be the room in which individuals not known to _A_ are seated.
The five individuals _B, C, D, E_ , and _F_ have to be assigned to the two rooms _Y_ and _Z_. So by the previous proposition either _Y or Z_ has at least _k_ \+ 1 individuals, where _k_ = floor of (5 – 1)/2 = 2. See Figure 1.5.1. If there is a dotted line joining two names, these two individuals do not know each other. If there is a line joining two individuals, they know each other. (a) Suppose that room _Y_ has 3 or more people. Let _B, C_ , and _D_ be three individuals in _Y_. There are two possibilities: Either _B, C_ , and _D_ do not know one another, as in Figure 1.5.1(a), forming a group of 3 strangers, or at least 2 of them (say, _C_ and _D_ ) know each other, as in Figure 1.5.1(b). In the latter case, these two individuals, _C_ and _D_ , along with _A_ , form a group of 3 people who know each other. (b) Suppose that room _Z_ has 3 or more people. Let _B, C_ , and _D_ be 3 of the people in _Z_. There are two possibilities: Either these 3 individuals know one another as in Figure 1.5.1(c), forming a group of 3 individuals known to each other, or there are at least 2 individuals (say, _C_ and _D_ ) who do not know each other. In the latter case, these 2 individuals, _C_ and _D_ , along with _A_ , form a group of 3 strangers. FIGURE 1.5.1 We conclude this brief exposition of the pigeonhole principle with two theorems due to Paul Erdös. **THEOREM 1.5.2** Let _X_ = {1, 2, 3, . . . , 2 _n_ } and let _S_ be any subset of _X_ with ( _n_ \+ 1) elements. Then there are at least two numbers in _S_ such that one divides the other. **_Proof_ :** Any number _r_ in _S_ can be represented as _r_ = 2 _t_ · _s_ , where _t_ is a nonnegative integer and _s_ is an odd number from _X_ , called the _odd part_ of _r_. There are at most _n_ choices for _s_ since there are _n_ odd numbers in _X_. The _n_ odd parts can be considered as _n_ pigeonholes and the ( _n_ \+ 1) numbers of _S_ are to be allotted to these holes.
In other words, by the pigeonhole principle there are two numbers _x_ and _y_ in _S_ with the same odd part. Let _x_ = 2 _t_ · _s_ and _y_ = 2 _u_ · _s_. Then either _x_ divides _y_ , or vice versa. **THEOREM 1.5.3** Any sequence of ( _n_ 2 \+ 1) distinct numbers contains a subsequence of at least ( _n_ \+ 1) terms which is either an increasing sequence or a decreasing sequence. **_Proof_ :** Let the sequence be _a i_ ( _i_ = 1, 2, . . . , _n_ 2 \+ 1) and let _t i_ be the number of terms in the longest increasing subsequence that starts from _a i_. If _t i_ ≥ _n_ \+ 1 for some _i_ , we are done. Suppose that _t i_ ≤ _n_ for every _i_. Let _H j_ = { _a i_ : _t i_ = _j_ }, where _j_ = 1, 2, . . . , _n_. We thus have _n_ pigeonholes _H_ 1, _H_ 2, . . . , _H n_ to which the ( _n_ 2 \+ 1) numbers _t i_ are allotted. So by the generalized pigeonhole principle there is a pigeonhole _H r_ containing more than _k_ of these numbers, where _k_ = floor of [( _n_ 2 \+ 1) – 1]/ _n_ = _n_. So among the numbers _t i_ , at least ( _n_ \+ 1) of them are equal. We now establish that the ( _n_ \+ 1) numbers in the sequence which correspond to these numbers in the pigeonhole _H r_ form a decreasing sequence. Let _a i_ and _a j_ be in _H r_ , where _i_ < _j_. Either _a i_ < _a j_ or _a i_ > _a j_ since the elements in the sequence are all distinct. Suppose that _a i_ < _a j_. Now _a j_ ∈ _H r_ implies that there is an increasing subsequence of length _r_ starting from _a j_. So _a i_ < _a j_ implies that there is an increasing subsequence of length ( _r_ \+ 1) starting from _a i_. This is a contradiction, because there cannot be a subsequence of length ( _r_ \+ 1) starting from _a i_ since _a i_ is an element of _H r_. Thus _a i_ > _a j_ whenever _i_ < _j_. So any ( _n_ \+ 1) elements in _H r_ will give rise to a strictly decreasing subsequence.
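Both of Erdös's theorems can be checked exhaustively for small _n_. The sketch below (Python, written for this illustration; the function name is ours, not from the text) computes the numbers _t i_ from the proof of Theorem 1.5.3 by dynamic programming and verifies both statements by brute force.

```python
from itertools import combinations, permutations

def longest_monotone(seq):
    """Return (longest increasing, longest decreasing) subsequence lengths.

    inc[i] is exactly the t_i of the proof of Theorem 1.5.3: the length of
    the longest increasing subsequence starting at position i.
    """
    n = len(seq)
    inc = [1] * n
    dec = [1] * n
    for i in range(n - 1, -1, -1):
        for j in range(i + 1, n):
            if seq[j] > seq[i]:
                inc[i] = max(inc[i], 1 + inc[j])
            if seq[j] < seq[i]:
                dec[i] = max(dec[i], 1 + dec[j])
    return max(inc), max(dec)

# Theorem 1.5.3 with n = 2: every arrangement of 5 = n^2 + 1 distinct
# numbers has a monotone subsequence of length n + 1 = 3.
assert all(max(longest_monotone(p)) >= 3 for p in permutations(range(5)))

# Theorem 1.5.2 with n = 4: every (n + 1)-element subset of {1, ..., 2n}
# contains a pair in which one number divides the other.
for s in combinations(range(1, 9), 5):
    assert any(b % a == 0 for a, b in combinations(s, 2))
```

Running the same check for sequence (a) of Example 1.5.6 reproduces the value _t_ 3 = 5 found there.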
**Example 1.5.6** Illustrate Theorem 1.5.3 in the case of the sequences: (a) 15, 12, 5, 7, 9, 6, 3, 4, 10, 14 (b) 15, 12, 9, 10, 7, 5, 4, 14, 3, 6 **Solution** (a) Here _n_ = 3, as there are 10 elements in the sequence, and the corresponding _t i_ (10 of them) are 1, 2, 5, 4, 3, 3, 4, 3, 2, 1. Since _t_ 3 is 5, there is an increasing subsequence of 5 elements starting from _a_ 3, which is 5, 7, 9, 10, 14. (b) Here the corresponding _t i_ are 1, 2, 3, 2, 2, 2, 2, 1, 2, 1. None of them exceeds 3. We get _H_ 1 = {15, 14, 6}, _H_ 2 = {12, 10, 7, 5, 4, 3}, and _H_ 3 = {9}. The sequence coming out of the second set is a decreasing subsequence of the given sequence with 6 numbers. **_1.6 THE INCLUSION–EXCLUSION PRINCIPLE_** If _X_ is any finite set, we denote by _N_ ( _X_ ) the number of elements in _X_. Suppose that _A_ and _B_ are two finite sets with no elements in common. Then, obviously, _N_ ( _A_ ∪ _B_ ) = _N_ ( _A_ ) + _N_ ( _B_ ). If, on the other hand, the intersection of _A_ and _B_ is nonempty, in order to compute the cardinality of _A_ ∪ _B_ , we first find the sum _N_ ( _A_ ) + _N_ ( _B_ ) as before. In this sum, the elements common to _A_ and _B_ are counted (included) twice—once while counting _N_ ( _A_ ) and then while counting _N_ ( _B_ )—so they have to be removed (excluded) once to obtain the total number of elements in their union. For example, if there are 15 students in a class who take calculus, 12 students who take discrete mathematics, and 9 students who take both courses, then the number of students who take at least one of the two courses is 15 + 12 – 9 = 18. See the Venn diagram in Figure 1.6.1 representing the universal set _X_ of all students in the class, the set _A_ of all students in the class who take calculus, and the set _B_ of all students who take discrete mathematics.
The fact that there are students in the class who take both courses is made clear by showing that the set representing their intersection is the region common to _A_ and _B_. The set to be excluded because it is included once with _A_ and then with _B_ is the subset _A_ ∩ _B_. Thus we can state the **inclusion–exclusion principle** involving two finite sets as follows: If _A_ and _B_ are two finite sets, then _N_ ( _A_ ∪ _B_ ) = _N_ ( _A_ ) + _N_ ( _B_ ) – _N_ ( _A_ ∩ _B_ ). **FIGURE 1.6.1** **Example 1.6.1** Obtain the inclusion–exclusion rule involving three finite sets. Let _A_ , _B_ , and _C_ be three finite sets and let _D_ = _B_ ∪ _C_. Now _N_ ( _A_ ∪ _B_ ∪ _C_ ) = _N_ ( _A_ ∪ _D_ ) = _N_ ( _A_ ) + _N_ ( _D_ ) – _N_ ( _A_ ∩ _D_ ) and _N_ ( _D_ ) = _N_ ( _B_ ∪ _C_ ) = _N_ ( _B_ ) + _N_ ( _C_ ) – _N_ ( _B_ ∩ _C_ ). So _N_ ( _A_ ∪ _B_ ∪ _C_ ) = _N_ ( _A_ ) + _N_ ( _B_ ) + _N_ ( _C_ ) – _N_ ( _B_ ∩ _C_ ) – _N_ ( _A_ ∩ _D_ ). (*) But _A_ ∩ _D_ = _A_ ∩ ( _B_ ∪ _C_ ) = ( _A_ ∩ _B_ ) ∪ ( _A_ ∩ _C_ ). So _N_ ( _A_ ∩ _D_ ) = _N_ ( _A_ ∩ _B_ ) + _N_ ( _A_ ∩ _C_ ) – _N_ ( _A_ ∩ _B_ ∩ _C_ ), since ( _A_ ∩ _B_ ) ∩ ( _A_ ∩ _C_ ) = _A_ ∩ _B_ ∩ _C_. On substituting for _N_ ( _A_ ∩ _D_ ) in (*), we get _N_ ( _A_ ∪ _B_ ∪ _C_ ) = _N_ ( _A_ ) + _N_ ( _B_ ) + _N_ ( _C_ ) – _N_ ( _A_ ∩ _B_ ) – _N_ ( _A_ ∩ _C_ ) – _N_ ( _B_ ∩ _C_ ) + _N_ ( _A_ ∩ _B_ ∩ _C_ ), which is the inclusion–exclusion rule for three sets. Suppose that the sets we consider are all finite subsets of a certain finite set _X_ with _N_ elements. If _A_ is a subset of _X_ , the complement of _A_ is denoted by _A_ ′. Then _A_ ′ = _X_ – _A_. So (i) _N_ ( _A_ ′) = _N_ – _N_ ( _A_ ). Next let _A_ and _B_ be two subsets of _X_. Then by (i), _N_ (( _A_ ∪ _B_ )′) = _N_ – _N_ ( _A_ ∪ _B_ ). Now _N_ ( _A_ ∪ _B_ ) = _N_ ( _A_ ) + _N_ ( _B_ ) – _N_ ( _A_ ∩ _B_ ) by the principle of inclusion and exclusion. And ( _A_ ∪ _B_ )′ = _A_ ′ ∩ _B_ ′. Thus (ii) _N_ ( _A_ ′ ∩ _B_ ′) = _N_ – _N_ ( _A_ ) – _N_ ( _B_ ) + _N_ ( _A_ ∩ _B_ ). Similarly, (iii) _N_ ( _A_ ′ ∩ _B_ ′ ∩ _C_ ′) = _N_ – _N_ ( _A_ ) – _N_ ( _B_ ) – _N_ ( _C_ ) + _N_ ( _A_ ∩ _B_ ) + _N_ ( _A_ ∩ _C_ ) + _N_ ( _B_ ∩ _C_ ) – _N_ ( _A_ ∩ _B_ ∩ _C_ ). We may consider (i), (ii), and (iii) also as the inclusion–exclusion rule involving one subset, two subsets, and three subsets of a set with _N_ elements, respectively. Now suppose that _a i_ ( _i_ = 1, 2, 3) are three distinct properties associated with the elements of the set _X_ such that a typical element may possess one or more of these properties or may have none of them. Let _A i_ be the set of all _x_ in _X_ such that _x_ has property _a i_.
Let _N_ ( _a i_) be the number of elements in _X_ with property _a i_ , _N_ ( _a i_ ′) be the number of elements in _X_ that do not have the property _a i_ , and _N_ ( _a_ 1 _a_ 2) be the number of elements in _X_ possessing both property _a_ 1 and _a_ 2, and so on. Then _N_ ( _a i_) = _N_ ( _A i_), _N_ ( _a i a j_) = _N_ ( _A i_ ∩ _A j_), and _N_ ( _a i a j_ ′) = _N_ ( _A i_ ∩ _A j_ ′). The inclusion–exclusion rule (iii) given above involving three subsets of _X_ can be rewritten as follows: _N_ ( _a_ 1′ _a_ 2′ _a_ 3′) = _N_ – [ _N_ ( _a_ 1) + _N_ ( _a_ 2) + _N_ ( _a_ 3)] + [ _N_ ( _a_ 1 _a_ 2) + _N_ ( _a_ 1 _a_ 3) + _N_ ( _a_ 2 _a_ 3)] – _N_ ( _a_ 1 _a_ 2 _a_ 3). This result is now extended to the case involving _n_ distinct properties (which the elements in a finite set may have) as a theorem that can be proved using a combinatorial argument. Before starting this generalization let us introduce the following notation. Let _A i_ ( _i_ = 1, 2, . . . , _n_ ) be _n_ subsets of _X_. A _k_ -tuple intersection is the intersection of any _k_ distinct subsets of these _n_ sets. The number of _k_ -tuple intersections is, of course, _C_ ( _n, k_ ). Let _S k_ be the sum of the number of elements of all the _k_ -tuple intersections. Thus _S_ 1 = _N_ ( _A_ 1) + _N_ ( _A_ 2) + · · · + _N_ ( _A n_), _S_ 2 = _N_ ( _A_ 1 ∩ _A_ 2) + _N_ ( _A_ 1 ∩ _A_ 3) + · · · + _N_ ( _A_ _n_ –1 ∩ _A n_), and so on. **THEOREM 1.6.1 (The Inclusion–Exclusion Formula)** _N_ ( _A_ 1′ ∩ _A_ 2′ ∩ · · · ∩ _A n_ ′) = _N_ – _S_ 1 \+ _S_ 2 – _S_ 3 \+ · · · + (–1) _n_ _S n_. **_Proof_ :** For any element _x_ in _X_ and for any subset _A_ of _X_ , the count of _x_ in _N_ ( _A_ ) is 1 if _x_ is in _A_. Otherwise, the count is 0. So it is enough if we prove that the count of any element _x_ in _X_ is the same on both sides of the equation. (a) Suppose that _x_ is not in any one of the _n_ sets. Then the count of _x_ on the left-hand side (LHS) is exactly 1. And this _x_ has a count 1 on the right-hand side (RHS) because _x_ is one of the _N_ elements of _X_ , and it is not in any one of the _n_ sets. Thus the count of any _x_ that is not in one of the _n_ sets is 1 on both sides. (b) Suppose that _x_ is in exactly one of the _n_ sets. Then the count of this _x_ on the left-hand side is 0.
The count of _x_ on the right-hand side is computed as follows: Count of _x_ in _N_ = 1, count of _x_ in _S_ 1 = 1, and count of _x_ in _S i_ = 0 when _i_ is not equal to 1. Thus the count of _x_ on the right-hand side is 1 – 1 = 0. More generally, let _x_ be an element that is common to _r_ of the _n_ sets. The count of _x_ on the left-hand side is, of course, 0. Count of _x_ in _N_ = 1, count of _x_ in _S_ 1 = _C_ ( _r_ , 1), count of _x_ in _S_ 2 = _C_ ( _r_ , 2), and so on. Thus the count of _x_ on the right-hand side = 1 – _C_ ( _r_ , 1) + _C_ ( _r_ , 2) – · · · + (–1) _r_ _C_ ( _r, r_ ) = (1 – 1) _r_ = 0. Thus the count of _x_ is the same on both sides of the equation, and this completes the proof. **Example 1.6.2** Each student in a freshmen dormitory takes at least one of the four introductory courses in biology (B), English (E), history (H), and mathematics (M). There are 6 students who take all four courses. There are 25 students in each of the four courses, 15 students in any two of the four courses, and 10 students in any three of the four courses. How many students are there in the dorm? **Solution**. Let _N_ be the number of students. Then _S_ 1 = _C_ (4, 1) · (25) = 100, _S_ 2 = _C_ (4, 2) · (15) = 90, _S_ 3 = _C_ (4, 3) · (10) = 40, and _S_ 4 = _C_ (4, 4) · (6) = 6. Since each student takes at least one course, _N_ ( _B_ ′ ∩ _E_ ′ ∩ _H_ ′ ∩ _M_ ′) = 0. Thus, by the inclusion–exclusion rule, 0 = _N_ – 100 + 90 – 40 + 6. So _N_ = 44. If _x_ is any positive integer less than or equal to the positive integer _n_ , the number of multiples of _x_ that do not exceed _n_ is the floor of _n_ / _x_ , which by definition is the largest integer less than or equal to _n_ / _x_. For example, the number of integers less than 1000 and divisible by 11 is 90, since the floor of 1000/11 is 90. The number of integers less than 15 and divisible by 11 is 1, which is the floor of 15/11. **Example 1.6.3** Let _X_ = {1, 2, 3, . . . , 600}.
Find the number of positive integers in _X_ that are not divisible by 3 or 5 or 7. **Solution**. Let _A, B_ , and _C_ be the sets of integers in _X_ that are divisible by 3, 5, and 7, respectively. Then _N_ ( _A_ ) = floor of 600/3 = 200, _N_ ( _B_ ) = floor of 600/5 = 120, and _N_ ( _C_ ) = floor of 600/7 = 85. So _S_ 1 = 200 + 120 + 85 = 405. Next, _N_ ( _A_ ∩ _B_ ) = number of integers in _X_ divisible by 15 = floor of 600/15 = 40. Similarly, _N_ ( _A_ ∩ _C_ ) = floor of 600/21 = 28 and _N_ ( _B_ ∩ _C_ ) = floor of 600/35 = 17. Thus _S_ 2 = 40 + 28 + 17 = 85. Finally, _S_ 3 = _N_ ( _A_ ∩ _B_ ∩ _C_ ) = floor of 600/105 = 5. Thus _N_ ( _A_ ′ ∩ _B_ ′ ∩ _C_ ′) = 600 – 405 + 85 – 5 = 275. So there are 275 numbers in the set that are not divisible by 3 or 5 or 7. Two integers _m_ and _n_ are **relatively prime** if the only positive divisor they have in common is 1. The cardinality of the set of positive integers less than or equal to _n_ and relatively prime to _n_ is called the **totient function** of _n_ and is denoted by ϕ( _n_ ). For example, ϕ(8) is the cardinality of the set {1, 3, 5, 7}, so it is equal to 4. **Example 1.6.4** Use the inclusion–exclusion rule to compute ϕ(60). **Solution**. The distinct prime divisors of 60 are 2, 3, and 5. Let _N_ ( _A_ ), _N_ ( _B_ ), and _N_ ( _C_ ) be the number of integers less than or equal to 60 and divisible by 2, 3, and 5, respectively. Then _N_ ( _A_ ) = (60)/2, _N_ ( _B_ ) = (60)/3, and _N_ ( _C_ ) = (60)/5. Also, _N_ ( _A_ ∩ _B_ ) = (60)/(2)(3), _N_ ( _A_ ∩ _C_ ) = (60)/(2)(5), _N_ ( _B_ ∩ _C_ ) = (60)/(3)(5), and _N_ ( _A_ ∩ _B_ ∩ _C_ ) = (60)/(2)(3)(5). Thus ϕ(60) = 60 – (30 + 20 + 12) + (10 + 6 + 4) – 2 = 16. This example has a straightforward generalization for any arbitrary integer, as follows. **THEOREM 1.6.2** Let _n_ be any positive integer and let _p i_ ( _i_ = 1, 2, . . . , _k_ ) be the distinct prime factors of _n_. Then ϕ( _n_ ) = ( _n_ / _m_ ) · ( _p_ 1 – 1) · ( _p_ 2 – 1) · . . . · ( _p k_ – 1), where _m_ is the product of the _k_ distinct prime factors of _n_.
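Theorem 1.6.2 can be checked mechanically against the inclusion–exclusion computation of Example 1.6.4. The Python sketch below (function names are ours, for this illustration only) alternately subtracts and adds the multiples of each product of distinct prime factors, exactly as in the example, and compares the result with the product formula and with a direct count from the definition.

```python
from itertools import combinations
from math import gcd, prod

def prime_factors(n):
    """Distinct prime factors of n, by trial division."""
    ps, d = [], 2
    while d * d <= n:
        if n % d == 0:
            ps.append(d)
            while n % d == 0:
                n //= d
        d += 1
    if n > 1:
        ps.append(n)
    return ps

def phi_incl_excl(n):
    """Totient via the inclusion-exclusion rule of Example 1.6.4."""
    ps = prime_factors(n)
    total = 0
    for k in range(len(ps) + 1):
        for sub in combinations(ps, k):
            # prod(()) == 1, so the k = 0 term contributes n itself.
            total += (-1) ** k * n // prod(sub)
    return total

def phi_theorem(n):
    """Theorem 1.6.2: phi(n) = (n/m) * (p1 - 1) * ... * (pk - 1)."""
    ps = prime_factors(n)
    return n // prod(ps) * prod(p - 1 for p in ps)

assert phi_incl_excl(60) == phi_theorem(60) == 16
# Both agree with a direct count from the definition for small n.
for n in range(2, 200):
    direct = sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)
    assert phi_incl_excl(n) == phi_theorem(n) == direct
```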
**Example 1.6.5** There is a microcomputer in each of the 32 faculty offices in a department. Fifteen of them have color (C) monitors, 10 of them have laser (L) printers, and 8 of them have modems (M). Two of them have all three options. At least how many have none of the three options? **Solution**. If _r_ is the number of offices with none of these options, _r_ = 32 – [15 + 10 + 8] + [ _N_ ( _C_ ∩ _L_ ) + _N_ ( _C_ ∩ _M_ ) + _N_ ( _L_ ∩ _M_ )] – 2, where each unknown number is at least 2. Thus _r_ ≥ 32 – 33 + 6 – 2 = 3. **Example 1.6.6** Find the number _r_ of solutions in integers of the equation _a_ \+ _b_ \+ _c_ = 25, where _a_ is at least 2 and at most 4, _b_ is at least 3 and at most 6, and _c_ is at least 4 and at most 8. **Solution**. The required number _r_ is the number of solutions in integers of the revised equation _x_ \+ _y_ \+ _z_ = 16, where the upper bounds for _x, y_ , and _z_ are 2, 3, and 4, respectively. Let _N_ = number of solutions of the revised equation, _X_ = set of solutions such that _x_ is at least 3, _Y_ = set of solutions such that _y_ is at least 4, and _Z_ = set of solutions such that _z_ is at least 5. Then _N_ = _C_ (16 + 2, 2), _N_ ( _X_ ) = _C_ (16 – 3 + 2, 2), _N_ ( _Y_ ) = _C_ (16 – 4 + 2, 2), _N_ ( _Z_ ) = _C_ (16 – 5 + 2, 2), _N_ ( _X_ ∩ _Y_ ) = _C_ (16 – 3 – 4 + 2, 2), _N_ ( _X_ ∩ _Z_ ) = _C_ (16 – 3 – 5 + 2, 2), _N_ ( _Y_ ∩ _Z_ ) = _C_ (16 – 4 – 5 + 2, 2), and _N_ ( _X_ ∩ _Y_ ∩ _Z_ ) = _C_ (16 – 3 – 4 – 5 + 2, 2). Thus _r_ = _N_ – [ _N_ ( _X_ ) + _N_ ( _Y_ ) + _N_ ( _Z_ )] + [ _N_ ( _X_ ∩ _Y_ ) + _N_ ( _X_ ∩ _Z_ ) + _N_ ( _Y_ ∩ _Z_ )] – _N_ ( _X_ ∩ _Y_ ∩ _Z_ ) = 153 – (105 + 91 + 78) + (55 + 45 + 36) – 15 = 0, as it must be, since _a_ \+ _b_ \+ _c_ can be at most 4 + 6 + 8 = 18 < 25. **_Derangements_** Suppose that _X_ is a finite set with _n_ elements and each element in the set is assigned a unique positive integer (a label) between 1 and _n_. The element that is assigned the label _i_ is the _i_ th element of the set. If in a permutation of these elements, the _i_ th element appears in the _i_ th position, that element is in its **original position** for _X_. A **derangement** of _X_ is a permutation in which no element appears in its original position.
For example, let _X_ = { _a, b, c, d_ } and the labels of _a, b, c, d_ be 1, 2, 3, 4, respectively. Then the permutation _abdc_ is not a derangement, because _a_ and _b_ are in their original positions. But the permutation _badc_ is a derangement. As a practical example, consider the following scenario: A student is sending out applications for employment to various hiring agencies. She completed 10 different applications addressed to 10 different agencies and wrote the addresses of these agencies on 10 identical envelopes. She then told her brother to put each application in the right envelope and mail all the applications to the respective agencies. She has a derangement at hand if no application was inside the right envelope! The total number of derangements of a set of cardinality _n_ is denoted by _D n_. **THEOREM 1.6.3** The total number of derangements of a set of cardinality _n_ is _D n_ = ( _n_!) · [1 – 1/1! + 1/2! – 1/3! + · · · + (–1) _n_ / _n_!]. **_Proof_ :** Let _N_ be the total number of permutations on _X_ and let _A i_ be the set of permutations in which the _i_ th object is in its original place. The total number of permutations on _X_ is _n_!. Thus _D n_ = _n_! – _S_ 1 \+ _S_ 2 – · · · + (–1) _n_ _S n_ , where _S_ 1 = _C_ ( _n_ , 1) · ( _n_ – 1)!, _S_ 2 = _C_ ( _n_ , 2) · ( _n_ – 2)!, and so on; in general _S k_ = _C_ ( _n, k_ ) · ( _n_ – _k_ )! = _n_!/ _k_!. So _D n_ = _n_! – _n_!/1! + _n_!/2! – · · · + (–1) _n_ · _n_!/ _n_!, which is the stated formula. Using the principle of inclusion and exclusion, we can obtain some important results involving the number of _r_ -sequences and the number of partitions of a finite set. The following two theorems are direct consequences of Theorem 1.6.1. We need a notation before these theorems are presented. If _n_ and _r_ are two positive integers ( _n_ ≤ _r_ ), the **Stirling number of the second kind** , denoted by _S_ ( _r, n_ ), is defined by the following relation: ( _n_!) · _S_ ( _r, n_ ) = _n_ ^ _r_ – _C_ ( _n_ , 1) · ( _n_ – 1)^ _r_ \+ _C_ ( _n_ , 2) · ( _n_ – 2)^ _r_ – · · · + (–1)^( _n_ –1) _C_ ( _n, n_ – 1) · 1^ _r_. **THEOREM 1.6.4** The number of _r_ -sequences that can be formed using the elements of a set with _n_ elements such that in every such sequence each element of the set appears at least once is ( _n_!) · _S_ ( _r, n_ ). **_Proof_ :** Let _X_ = { _x i_ : _i_ = 1, 2, . . . , _n_ } and _A i_ be the set of _r_ -sequences that do not contain _x i_. Then _S i_ = _C_ ( _n, i_ ) · ( _n_ – _i_ )^ _r_. The stated result is an immediate consequence of Theorem 1.6.1. **THEOREM 1.6.5** (a) _The Allocation Problem_. The number of ways of allocating _r_ distinct objects to _n_ locations such that each location receives at least one object is ( _n_!) · _S_ ( _r, n_ ). (b) _The Set Partitioning Problem_. (1) The number of partitions of a set of cardinality _r_ such that each partition has _n_ nonempty sets is _S_ ( _r, n_ ) and (2) the number of partitions of a set with _r_ elements such that each partition has at most _n_ nonempty sets is _S_ ( _r, n_ ) + _S_ ( _r, n_ – 1) + · · · + _S_ ( _r_ , 1). **_Proof_ :** These two results follow from Theorem 1.6.1 and the definition of _S_ ( _r, n_ ). [Notice the difference between the assertions in Theorem 1.3.5 and Theorem 1.6.5(b).] **_Number of Functions from a Finite Set to Another Finite Set_** Let _X_ and _Y_ be two finite sets with cardinality _r_ and _n_ , respectively. Suppose that _f_ is any arbitrary function from _X_ to _Y_. Any one of the _r_ elements from _X_ can be mapped into any one of the _n_ elements of _Y_ in _n_ ways. So by the multiplication rule there are _n r_ functions from _X_ to _Y_. But if _f_ is an _injection_ , the number of ways the _r_ elements can be mapped is much less: It is, in fact, equal to _n_ ( _n_ – 1)( _n_ – 2) · · · ( _n_ – _r_ \+ 1). If the mapping is a _surjection_ , every element in _Y_ has a preimage. So by applying Theorem 1.6.5(a), we see that the number of surjections is ( _n_!) · _S_ ( _r, n_ ). We can summarize these results as a theorem. **THEOREM 1.6.6** Let _X_ be a set of cardinality _r_ and _Y_ be a set of cardinality _n_. Then there are (1) _n r_ functions from _X_ to _Y_ , (2) _P_ ( _n, r_ ) injections from _X_ to _Y_ , and (3) ( _n_!) · _S_ ( _r, n_ ) surjections from _X_ to _Y_.
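The three counts in Theorem 1.6.6, and the derangement count of Theorem 1.6.3, can be verified by brute-force enumeration for small _r_ and _n_. The Python sketch below (function names are ours, for this illustration) computes _S_ ( _r, n_ ) directly from its defining relation and tallies all _n r_ functions.

```python
from itertools import permutations, product
from math import comb, factorial, perm

def stirling2(r, n):
    """S(r, n) from the defining relation:
    (n!) * S(r, n) = sum_{i} (-1)^i C(n, i) (n - i)^r."""
    total = sum((-1) ** i * comb(n, i) * (n - i) ** r for i in range(n + 1))
    return total // factorial(n)

def count_functions(r, n):
    """Tally all functions f: {0..r-1} -> {0..n-1} by brute force."""
    alls = injs = surjs = 0
    for f in product(range(n), repeat=r):
        alls += 1
        injs += len(set(f)) == r   # all images distinct
        surjs += len(set(f)) == n  # every element of Y hit
    return alls, injs, surjs

# Theorem 1.6.6 with r = 5, n = 3 (here P(n, r) = 0 since r > n).
r, n = 5, 3
alls, injs, surjs = count_functions(r, n)
assert alls == n ** r                           # (1) all functions
assert injs == perm(n, r)                       # (2) injections
assert surjs == factorial(n) * stirling2(r, n)  # (3) surjections
assert count_functions(2, 4)[1] == perm(4, 2)   # injections when r <= n

# Theorem 1.6.3: derangements of a 4-element set, counted directly.
D4 = sum(all(p[i] != i for i in range(4)) for p in permutations(range(4)))
assert D4 == 9  # 4! * (1 - 1/1! + 1/2! - 1/3! + 1/4!) = 9
```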
**_A Generalization of the Inclusion–Exclusion Principle_** (This part may be omitted without loss of continuity.) We conclude this section with a theorem that is a generalization of Theorem 1.6.1. For this we need some notations. With each element _x_ of a set _X_ with _N_ elements, we associate a nonnegative real number _w_ ( _x_ ) called the _weight of the element_. We have a set _P_ of _n_ properties. A typical element in _X_ may or may not have some or all the properties listed in _P_. _V_ ( _r_ ) is the sum of the weights of all elements _x_ where _x_ satisfies _exactly r_ of these _n_ properties. _U_ ( _r_ ) is the sum of the weights of all elements _x_ where _x_ satisfies _at least r_ of these _n_ properties. For example, let _a, b, c, d, e_ be the elements of a set _X_ with weights 6, 7, 8, 9, 10, respectively. _P_ is a set of four properties: _s, t, u, v_. It is known that _a_ satisfies _s_ and _t_ ; _b_ satisfies _s, u_ , and _v_ ; _c_ satisfies _t_ and _u_ ; _d_ satisfies _s, t_ , and _u_ ; and _e_ satisfies all the four properties. Then _V_ (1) = 0, _V_ (2) = 6 + 8 = 14, _V_ (3) = 7 + 9 = 16, and _V_ (4) = 10; and _U_ (1) = 6 + 7 + 8 + 9 + 10 = 40, _U_ (2) = 40, _U_ (3) = 26, and _U_ (4) = 10. Finally, if _Q_ is a subset of _P_ , we define the weight _W_ ( _Q_ ) of the set _Q_ as the sum of the weights of all elements _x_ where _x_ satisfies all the properties listed in _Q_ , and _W_ ( _r_ ) is the sum of all terms of the type _W_ ( _Q_ ), where _Q_ is a subset of _P_ with _r_ elements. For example, if _Q_ = { _s, t_ }, then _W_ ( _Q_ ) = _w_ ( _a_ ) + _w_ ( _d_ ) + _w_ ( _e_ ) = 25. _W_ (2) is the sum of the weights of all the two-element subsets of _P_. In this example _P_ has six subsets of cardinality 2. Thus _W_ (2) = 25 + 26 + 17 + 27 + 10 + 17 = 122. _W_ (0) by definition is the sum of the weights of all the _N_ elements in _X_.
**THEOREM 1.6.7 (Generalized Inclusion–Exclusion Formula)** (i) _V_ ( _r_ ) = _W_ ( _r_ ) – _C_ ( _r_ \+ 1, _r_ ) · _W_ ( _r_ \+ 1) + _C_ ( _r_ \+ 2, _r_ ) · _W_ ( _r_ \+ 2) – · · · + (–1)^( _n_ – _r_ ) · _C_ ( _n, r_ ) · _W_ ( _n_ ) and (ii) _U_ ( _r_ ) = _W_ ( _r_ ) – _C_ ( _r_ , _r_ – 1) · _W_ ( _r_ \+ 1) + _C_ ( _r_ \+ 1, _r_ – 1) · _W_ ( _r_ \+ 2) – · · · + (–1)^( _n_ – _r_ ) · _C_ ( _n_ – 1, _r_ – 1) · _W_ ( _n_ ). **_Proof_ :** (a) Obviously, a typical element _x_ contributes its weight _w_ ( _x_ ) to the left-hand side of (i) if and only if _x_ satisfies exactly _r_ of the _n_ properties. So it is enough if we prove that _x_ contributes _w_ ( _x_ ) to the right-hand side if and only if _x_ satisfies exactly _r_ of the _n_ properties. Suppose that _x_ satisfies _s_ of the _n_ properties. If _s_ = _r_ , the element _x_ contributes _w_ ( _x_ ) to _W_ ( _r_ ) in the right-hand side and 0 to the other terms. If _s_ < _r_ , the contribution of _x_ to the right-hand side is 0. It remains to be proved that the contribution of _x_ to the right-hand side is 0 when _s_ > _r_. In this case, the contribution of _x_ to _W_ ( _r_ ) is _C_ ( _s, r_ ) _w_ ( _x_ ). The contribution of _x_ to _W_ ( _r_ \+ 1) is _C_ ( _s, r_ \+ 1) _w_ ( _x_ ), and so on. Thus when _s_ > _r_ , the contribution of _x_ to the right-hand side is [ _C_ ( _s, r_ ) – _C_ ( _r_ \+ 1, _r_ ) _C_ ( _s, r_ \+ 1) + _C_ ( _r_ \+ 2, _r_ ) _C_ ( _s, r_ \+ 2) – · · · + (–1)^( _s_ – _r_ ) _C_ ( _s, r_ ) _C_ ( _s, s_ )] · _w_ ( _x_ ). Now it is easily verified that _C_ ( _i, j_ ) _C_ ( _j, k_ ) = _C_ ( _i, k_ ) _C_ ( _i – k, i – j_ ). Thus the contribution of _x_ to the right-hand side is _w_ ( _x_ ) · _C_ ( _s, r_ ) · _K_ , where _K_ = 1 – _C_ ( _s_ – _r_ , 1) + _C_ ( _s_ – _r_ , 2) – · · · + (–1)^( _s_ – _r_ ) _C_ ( _s_ – _r_ , _s_ – _r_ ) = (1 – 1)^( _s_ – _r_ ) = 0. This completes the proof of (a). If we put _r_ = 0 and _w_ ( _x_ ) = 1 for each element _x_ in _X_ , we get the formula in Theorem 1.6.1 as a special case. (b) This is left as an exercise. **Example 1.6.7** Verify the formula given in Theorem 1.6.7 for the following data: The elements of a set _X_ are _a, b, c, d_ , and _e_ with weights 6, 7, 8, 9, and 10. _P_ is a set of four properties _s, t, u, v_. It is known that _a_ satisfies _s_ and _t_ ; _b_ satisfies _s, u_ , and _v_ ; _c_ satisfies _t_ and _u_ ; _d_ satisfies _s_ , _t_ , and _u_ ; and finally, _e_ satisfies all the four properties. **Solution**. We have already computed the following: _V_ (1) = 0, _V_ (2) = 14, _V_ (3) = 16, _V_ (4) = 10, _U_ (1) = 40, _U_ (2) = 40, _U_ (3) = 26, _U_ (4) = 10, and _W_ (2) = 122.
Next we find that _W_ (1) = 32 + 33 + 34 + 17 = 116, _W_ (3) = 19 + 10 + 17 + 10 = 56, and _W_ (4) = 10. Formula (i) when _r_ = 1: _V_ (1) = _W_ (1) – _C_ (2, 1) · _W_ (2) + _C_ (3, 1) · _W_ (3) – _C_ (4, 1) · _W_ (4) = 116 – 244 + 168 – 40 = 0. Formula (i) when _r_ = 2: _V_ (2) = _W_ (2) – _C_ (3, 2) · _W_ (3) + _C_ (4, 2) · _W_ (4) = 122 – 168 + 60 = 14. Formula (i) when _r_ = 3: _V_ (3) = _W_ (3) – _C_ (4, 3) · _W_ (4) = 56 – 40 = 16. Formula (i) when _r_ = 4: _V_ (4) = _W_ (4) = 10. Formula (ii) when _r_ = 2: _U_ (2) = _W_ (2) – _C_ (2, 1) · _W_ (3) + _C_ (3, 1) · _W_ (4) = 122 – 112 + 30 = 40. Formula (ii) when _r_ = 3: _U_ (3) = _W_ (3) – _C_ (3, 2) · _W_ (4) = 56 – 30 = 26. Formula (ii) when _r_ = 4: _U_ (4) = _W_ (4) = 10. All of these agree with the values of _V_ ( _r_ ) and _U_ ( _r_ ) found earlier. **_1.7 SUMMARY OF RESULTS IN COMBINATORICS_** We now conclude this chapter with a complete list of all the important results involving permutations, combinations, allocations, derangements, set partitions, and number of mappings between finite sets established in these pages. 1. A permutation is a linear arrangement of objects where the order in which distinct objects appear is crucial, whereas a combination is just a collection of objects and the order in which objects are chosen for inclusion in it is not relevant. 2. (a) The number of _r_ -permutations with _r_ distinct elements that can be formed using the elements of a collection of _n_ distinct elements, (b) the number of ways of allocating _r distinct_ objects to _n_ locations such that no location can receive more than one object, and (c) the number of injections from a set of _r_ elements to a set of _n_ elements are all equal to _P_ ( _n, r_ ) = _n_!/( _n_ – _r_ )! 3. (a) The number of _r_ -combinations with _r_ distinct elements that can be formed using the elements of a collection of _n_ distinct elements and (b) the number of ways of allocating _r identical_ objects to _n_ locations such that no location can receive more than one object are both equal to _C_ ( _n, r_ ) = _n_!/[ _r_!( _n_ – _r_ )!] = _C_ ( _n, n_ – _r_ ). 4. The coefficient of _x r_ in (1 + _x_ ) _n_ is _C_ ( _n, r_ ). 5.
(a) The number of _r_ -sequences with _r_ elements (not necessarily distinct) that can be formed using the elements of a collection of _n_ distinct elements, (b) the number of ways of placing _r distinct_ objects in _n_ locations with no restriction on the number of objects a location can receive, and (c) the number of functions from a set of _r_ elements to a set of _n_ elements are all equal to _n r_. 6. (a) The number of _r_ -collections with _r_ elements (not necessarily distinct) that can be formed using the elements of a collection of _n_ distinct elements, (b) the number of ways of allocating _r identical_ objects to _n_ locations with no restriction on the number of objects a location can receive, (c) the number of solutions in nonnegative integers of _x_ 1 \+ _x_ 2 \+ · · · + _x n_ = _r_ , and (d) the number of terms in the expansion of ( _x_ 1 \+ _x_ 2 \+ · · · + _x n_) _r_ are all equal to _C_ ( _r_ \+ _n_ – 1, _n_ – 1). 7. If _p_ 1, _p_ 2, . . . , _p n_ are nonnegative integers whose sum is _p_ , then (a) the number of _r_ -collections with _r_ elements (not necessarily distinct) that can be formed using the _n_ distinct elements of _X_ = { _x_ 1, _x_ 2, . . . , _x n_} such that in each combination _x i_ appears at least _p i_ times ( _i_ = 1, 2, . . . , _n_ ), (b) the number of ways of allocating _r identical_ objects to _n_ locations such that location _i_ gets at least _p i_ objects ( _i_ = 1, 2, . . . , _n_ ), and (c) the number of solutions in nonnegative integers of _x_ 1 \+ _x_ 2 \+ · · · + _x n_ = _r_ where _x i_ is at least _p i_ ( _i_ = 1, 2, . . . , _n_ ) are all equal to _C_ ( _r_ – _p_ \+ _n_ – 1, _n_ – 1). 8. Define _P_ ( _t_ ; _t_ 1, _t_ 2, . . . , _t j_) = _P_ ( _t_ , _s_ )/[( _t_ 1!)( _t_ 2!) · · · ( _t j_!)], where _s_ = _t_ 1 \+ _t_ 2 \+ · · · + _t j_. 9. If there are _n_ objects of _k_ different types such that the objects in each type are identical and if type _i_ has _n i_ objects ( _i_ = 1, 2, . . .
, _k_ ), then there are _P_ ( _n_ ; _n_ 1, _n_ 2, . . . , _n k_) ways of arranging these _n_ objects in a line. 10. If there are _k_ types of objects and if type _i_ has _n i_ identical objects ( _i_ = 1, 2, . . . , _k_ ), then there are _P_ ( _n_ ; _n_ 1, _n_ 2, . . . , _n k_) ways of allocating these _n_ 1 \+ _n_ 2 \+ · · · + _n k_ objects to _n_ locations such that no location can receive more than one object. 11. There are _P_ ( _n_ ; _n_ 1, _n_ 2, . . . , _n k_) ways of allocating _n distinct_ objects to _k_ locations such that location _i_ gets exactly _n i_ objects for each _i_ = 1, 2, . . . , _k_. 12. The coefficient of _x_ 1^ _n_ 1 · _x_ 2^ _n_ 2 · . . . · _x k_^ _n k_ in ( _x_ 1 \+ _x_ 2 \+ · · · + _x k_) _n_ is _P_ ( _n; n_ 1, _n_ 2, . . . , _n k_). 13. A set of cardinality _n_ can be partitioned into a class consisting of _p_ 1 subsets each of cardinality _n_ 1, _p_ 2 subsets each of cardinality _n_ 2, . . . , and _p k_ subsets each of cardinality _n k_ in ( _n_!)/{[( _p_ 1!)( _n_ 1!)^ _p_ 1][( _p_ 2!)( _n_ 2!)^ _p_ 2] · . . . · [( _p k_!)( _n k_!)^ _p k_]} ways, where the integers _n_ 1, _n_ 2, . . . , _n k_ are distinct. 14. When _n_ and _r_ are positive integers, the Stirling number of the second kind, denoted by _S_ ( _r, n_ ), is defined as ( _n_!) · _S_ ( _r, n_ ) = _n_ ^ _r_ – _C_ ( _n_ , 1) · ( _n_ – 1)^ _r_ \+ _C_ ( _n_ , 2) · ( _n_ – 2)^ _r_ – · · · + (–1)^( _n_ –1) _C_ ( _n, n_ – 1) · 1^ _r_. 15. The number of _r_ -sequences (elements not necessarily distinct) that can be formed using the elements of a collection _X_ of _n_ distinct elements such that in every such sequence each element of _X_ appears at least once is ( _n_!) · _S_ ( _r, n_ ). 16. The number of ways of allocating _r distinct_ objects to _n_ locations such that every location receives at least one object is ( _n_!) · _S_ ( _r, n_ ). 17. The number of partitions of a set with _r_ elements such that each partition has _n_ nonempty sets is _S_ ( _r, n_ ). 18.
The number of partitions of a set with _r_ elements such that each partition has at most _n_ nonempty sets is _S_ ( _r, n_ ) + _S_ ( _r, n_ – 1) + · · · + _S_ ( _r_ , 1). 19. The number of surjections from a set of _r_ elements to a set of _n_ elements is ( _n_!) · _S_ ( _r, n_ ). 20. The number of derangements of a set with _n_ elements is _D n_ = ( _n_!)[1 – 1/1! + 1/2! – 1/3! + · · · + (– 1) _n_ / _n_!]. **_1.8 NOTES AND REFERENCES_** Combinatorics is one of the most venerable branches of mathematics. Formulas involving arrangements, sequences, and combinations were known to the Chinese, Hindu, and Greek mathematicians as early as the first century A.D. Combinatorics is tied very closely with probability theory, and in the seventeenth and eighteenth centuries many European mathematicians were interested in the study of combinatorial probability. Some excellent general references in the area of combinatorics are the books by Aigner (1979), Anderson (1979), Cohen (1978), Krishnamurthy (1986), Liu (1968), Riordan (1978), Roberts (1984), and Tucker (1984). There is also the classic text by MacMahon (1960). The first comprehensive book dealing with permutations and combinations is by Whitworth (1901). Chapter 1 of Grimaldi (1985), Chapter 3 of Liu (1985), and Chapter 2 of Townsend (1987) also deal with the material of this chapter. Algorithms for generating permutations and combinations of a given finite set are given in detail in Chapters 1 and of Even (1973) and in Chapter 5 of Reingold et al. (1977). One does not have to be a mathematician to know that if there are more objects (pigeons) than containers (pigeonholes), there will be at least one container with two or more objects. Notice that this is an existential statement: It simply asserts that there _exists_ a container with at least two objects. Neither the container nor the objects are identifiable. 
It goes on record that it was Gustav Dirichlet (1805–1859) who used this principle extensively in his investigation of problems in number theory—hence the name Dirichlet pigeonhole principle, which is also known as the 'shoebox principle.' The nontrivial generalizations of this deceptively innocuous principle involve some of the most profound and deep results in all of combinatorial theory. Example 1.5.5 is a very special case of a result known as Ramsey's theorem. References to Ramsey theory include Chapter 5 of Cohen (1978), Chapter 8 of Roberts (1984), Chapter 4 of Ryser (1963), and the book on Ramsey theory by Graham et al. (1980). The pioneering work using the inclusion–exclusion principle was done by James Sylvester (1814–1897) and its importance and usefulness were made public with the publication of the book _Choice and Chance_ by Whitworth (1901). For a discussion of this principle, refer to Chapter 5 of Anderson (1979), Chapter 5 of Cohen (1978), Chapter 7 of Grimaldi (1985), Chapter 4 of Liu (1968), Chapter 3 of Riordan (1978), Chapter 6 of Roberts (1984), Chapter 2 of Ryser (1963), and Chapter 8 of Tucker (1984). **_1.9 EXERCISES_** **1.1.** The social security number of a person is a sequence of nine digits that are not necessarily distinct. If _X_ is the set of all social security numbers, find the number of elements in _X_. **1.2.** There are six characters—three letters of the English alphabet followed by three digits—which appear on the back panel of a particular brand of a printer as an identification number. If _X_ is the set of all possible identification numbers for this brand of printer, find the number of elements in _X_ if **(a)** characters can repeat in an identification number, **(b)** digits cannot repeat, **(c)** letters cannot repeat, and **(d)** characters cannot repeat.
**1.3.** Find the number of ways of picking **(a)** a king and a queen, **(b)** a king or a queen, **(c)** a king and a red card, and **(d)** a king or a red card from a deck of cards. **1.4.** **(a)** Find the number of even numbers between 0 and 100. **(b)** Find the number of even numbers with distinct digits between 0 and 100. **1.5.** A sequence of digits where each digit is 0 or 1 is called a _binary number_. Each digit in a binary number is a component of the number. A binary number with eight components is called a **byte**. **(a)** Find the number of bytes. **(b)** Find the number of bytes that begin with 10 and end with 01. **(c)** Find the number of bytes that begin with 10 but do not end with 01. **(d)** Find the number of bytes that begin with 10 or end with 01. **1.6.** A variable name in the programming language BASIC is either a letter of the alphabet or a letter followed by a digit. Find the number of distinct variable names in this language. **1.7.** A sequence of characters is called a **palindrome** if it reads the same way forward or backward. For example, 59AA95 is a six-character palindrome, and 59A95 is a five-character palindrome. Some other instances of palindromes: U NU, LON NOL, MALAYALAM, NOW ON, PUT UP, TOO HOT TO HOOT, NEVER ODD OR EVEN, ABLE WAS I ERE I SAW ELBA, and POOR DAN IS IN A DROOP. Find the number of nine-character palindromes that can be formed using the letters of the alphabet such that no letter appears more than twice in each of them. **1.8.** Find the number of ways to form a four-letter sequence using the letters A, B, C, D, and E if **(a)** repetitions of letters are permitted, **(b)** repetitions are not permitted, **(c)** the sequence contains the letter A but repetitions are not permitted, and **(d)** the sequence contains the letter A but repetitions are permitted. **1.9.** There are _n_ married couples in a group. Find the number of ways of selecting a woman and a man who is not her husband from this group.
**1.10.** Let _X_ be the set of all polynomials of degree 4 in a single variable _t_ such that every coefficient is a single-digit nonnegative integer. Find the cardinality of _X_. **1.11.** A variable name in the programming language FORTRAN is a sequence that has at most six characters such that the first character is a letter of the alphabet and the remaining characters, if any, are either letters or digits. Find the number of distinct variable names in this language. **1.12.** There are 10 members— _A, B, C, D, E, F, G, H, I_ , and _J_ —in a fund-raising committee. The first task of the committee is to choose a chairperson, a secretary, and a treasurer from this group. No individual can hold more than one office. Find the number of ways of selecting a chairperson, a secretary, and a treasurer such that **(a)** no one has any objection for holding any of these three offices, **(b)** C would like to be the chairperson, **(c)** _B_ would not like to be the chairperson, **(d)** _A_ does not like to be either the chairperson or the secretary, **(e)** _I_ or _J_ would like to be the treasurer, and **(f)** _E_ or _F_ or _G_ would like to hold one of these three offices. **1.13.** There are three bridges connecting two towns, _A_ and _B_. Between towns _B_ and C there are four bridges. A salesperson has to travel from _A_ to _C_ via _B_. Find **(a)** the number of possible choices of bridges from _A_ to _C_ , **(b)** the number of choices for a round-trip travel from _A_ to _C_ , and **(c)** the number of choices for a round-trip travel if no bridge is repeated. **1.14.** Compute **(a)** _P_ (8, 5), **(b)** _P_ (9, 2), and **(c)** _P_ (6, 6). **1.15.** Prove Theorem 1.2.2. **1.16.** Find the value of the positive integer _n_ if **(a)** _P_ ( _n_ , 2) = 30, **(b)** _P_ ( _n_ , 3) = 24 · _P_ ( _n_ , 2), and **(c)** 10 · _P_ ( _n_ , 2) = _P_ (3 _n_ – 1, 2) + 40. **1.17.** Compute 6!. Use this result to compute 7! and 8!. 
**1.18.** _A_ and _B_ are two members in a party of 12. Find the number of ways of assigning these 12 people to 12 rooms situated in a row such that each person gets a room and **(a) _A_** and **_B_** are next to each other and **(b) _A_** and **_B_** are not next to each other. **1.19.** Show that _P_ ( _n, r_ \+ 1) = ( _n_ – _r_ ) · _P_ ( _n, r_ ) and use this result to find the value of _n_ if _P_ ( _n_ , 9) = 15 · _P_ ( _n_ , 8). **1.20.** Find the value of _k_ if _P_ ( _n_ \+ 1, _r_ ) = _k_ · _P_ ( _n, r_ ). Use this result to find _n_ and _r_ if _k_ = 5, _n_ > _r_ , and _r_ is as small as possible. **1.21.** Four station wagons, five sedans, and six vans are to be parked in a row of 15 parking spots. Find the number of ways of parking these vehicles such that **(a)** the station wagons are parked at the beginning, then the sedans, and then the vans, and **(b)** vehicles of the same type are parked en bloc. **1.22.** Consider a collection of six stones of different colors: blue (B), green (G), pink (P), red (R), white (W), and yellow (Y). Find **(a)** the number of ways of making a tiepin on which these stones are to be placed in a row, **(b)** the number of ways of making a brooch on which these six stones are to be mounted in a circular pattern, and **(c)** the number of ways of making a ring using these six stones. **1.23.** Eight people are to be seated around a large round table for a conference. Find the number of possible seating arrangements. **1.24.** A mother and her two small children join seven members of her family for dinner and they have to sit around a round table. Find the number of possible seating arrangements so that the two children can sit on either side of the mother. **1.25.** Six girls and six boys are to be assigned to stand around a circular fountain. Find the number of such assignments if on either side of a boy there is a girl and on either side of a girl there is a boy. 
**1.26.** If _X_ and _Y_ are two sets with _n_ elements each and if there are no elements common to the two sets, find the number of ways of arranging the 2 _n_ elements of these two sets in a circular pattern so that on either side of an element of _X_ there is an element of _Y_ , and vice versa. **1.27.** Compute **(a)** _P_ (10; 4, 4, 2) and **(b)** _P_ (12; 5, 4, 3). **1.28.** Compute **(a)** _P_ (17; 4, 3, 2) and **(b)** _P_ (17; 2, 2, 2). **1.29.** Prove that if _m_ and _n_ are positive integers, ( _mn_ )!/( _m_!) _n_ is also a positive integer. **1.30.** Find the number of ways in which the complete collection of letters that form the word MISSISSIPPI can be arranged such that **(a)** there is no restriction on the location of the letters, and **(b)** all the S's stay together. **1.31.** Find the number of ways of **(a)** assigning 9 students to 11 rooms (numbered serially from 100 to 110) in a dormitory so that each room has at most one occupant, and **(b)** installing nine color telephones (two red, three white, and four blue) in these rooms, so that each room has at most one telephone. **1.32.** Compute **(a)** _C_ (9, 4), **(b)** _C_ (10, 7), and **(c)** _C_ (8, 4). **1.33.** _X_ is a set with nine elements. Find the number of **(a)** subsets of _X_ , **(b)** subsets of cardinality 3, and **(c)** unordered pairs in _X_. **1.34.** Prove Pascal's formula algebraically. **1.35.** There are 4 women and 9 men in the mathematics faculty of a college. Find the number of ways of forming a hiring committee consisting of 2 women and 3 men from the department. **1.36.** There are 5 distinct white and 7 distinct blue shirts in a wardrobe. Find the number of ways of taking 4 shirts from the wardrobe such that **(a)** they could be either white or blue, **(b)** they are all white, **(c)** they are all blue, **(d)** they are all of the same color, and **(e)** 2 are white and 2 are blue.
**1.37.** Find the number of ways of seating _r_ people from a group of _n_ people around a round table. **1.38.** Find the number of ways of seating 14 people such that 8 of them are around one round table and the rest are around another round table. **1.39.** Find the number of ways of seating 14 people such that 8 of them are around a round table and the rest are on a bench. **1.40.** Find the number of bytes that can be formed using exactly six zeros. **1.41.** Find the number of ways in which the letters that appear in MISSISSIPPI can be rearranged so that no two S's are adjacent. **1.42.** In a state lottery, a ticket consists of six distinct integers chosen from the set _X_ = {1, 2, 3, . . . , 42}. On every Saturday at 8:00 P.M., six distinct integers are chosen from _X_ by a computer. A ticket buyer wins (1) the first prize (the jackpot) if the six numbers in the ticket are the same as the six numbers picked by the computer, (2) the second prize if any five numbers in the ticket are picked by the computer, (3) the third prize if any of the four numbers in the ticket are picked by the computer, and (4) the fourth prize if any of the three numbers in the ticket are picked by the computer. Find **(a)** the number of distinct tickets that one has to buy which will definitely assure the buyer winning the jackpot and the probability of winning the jackpot if a person buys 1000 tickets, **(b)** the probability of winning the second prize if a person buys a single ticket, and **(c)** the probability of winning the third prize if the person buys a single ticket. **1.43.** Prove the following identity using a combinatorial argument: **1.44.** If _C_ ( _n, r_ ) = _C_ ( _r_ , 1) · _C_ ( _n, r_ – 1), solve _n_ in terms of _r_. **1.45.** Prove that _C_ ( _pn, pn_ – _n_ ) is a multiple of _p_. **1.46.** Prove the identity _C_ (3 _n_ , 3) = 3 _C_ ( _n_ , 3) + 6 _n_ · _C_ ( _n_ , 2) + _n_ 3 using a combinatorial argument. 
**1.47.** Let _X_ be the set of all words of length 10 in which the letter P appears 2 times, Q appears 3 times, and R appears 4 times. Find the cardinality of _X_. **1.48.** A mother bought 10 story books for her 3 children. The youngest gets 2 books and the other two get 4 each. Find the number of ways she can pack them as gifts. **1.49.** A linear algebra class consists of 10 mathematics majors and 12 computer science majors. A team of 12 has to be selected from this class. Find the number of ways of selecting a team if **(a)** the team has 6 from each discipline, and **(b)** the team has a majority of computer science majors. **1.50.** Find the coefficient of _a_ 2 _b_ 3 _c_ 3 _d_ 4 in the expansion of **(a)** ( _a_ \+ _b_ \+ _c_ \+ _d_ )12 and **(b)** (2 _a_ – 3 _b_ \+ 2 _c_ – _d_ )12. **1.51.** Use Pascal's triangle and list the coefficients of the terms which appear in the expansion of ( _x_ \+ _y_ ) _n_ when _n_ = 4, 5, and 6. **1.52.** Use a combinatorial argument to prove **Newton's identity:** _C_ ( _n, r_ ) · _C_ ( _r, k_ ) = _C_ ( _n, k_ ) · _C_ ( _n_ – _k, r_ – _k_ ). **1.53.** Prove the following identity: **1.54.** Prove: _C_ ( _n_ , 0) + _C_ ( _n_ , 1) + _C_ ( _n_ , 2) + · · · + _C_ ( _n, n_ ) = 2 _n_. **1.55.** Use a combinatorial argument to prove the following: [ _C_ ( _n_ , 0)]2 + [ _C_ ( _n_ , 1)]2 \+ [ _C_ ( _n_ , 2)]2 \+ · · · + [ _C_ ( _n_ , _n_ )]2 = _C_ (2 _n_ , _n_ ) **1.56.** Prove the following identity: **1.57.** There are 18 students in a class. Find the number of ways of partitioning the class into **(a)** 4 groups of equal strength and a minority group, **(b)** 2 groups of 5 students, 1 group of 4 students, and 2 groups of 2 students, and **(c)** 1 group of 7 students, 1 group of 6 students, and 1 group of 5 students. 
**1.58.** Find the number of _r_ -sequences that can be formed using the elements of the set _X_ = { _A, B, C, D, E, F, G_ } if **(a)** _r_ = 4 and the elements in each sequence is distinct, **(b)** _r_ = 4, and **(c)** _r_ = 9. **1.59.** Find the number of _r_ -collections that can be formed using the elements of the set _X_ = { _A, B, C_ , _D, E, F, G_ } if **(a)** _r_ = 4 and the elements in each collection are distinct, **(b)** _r_ = 4, and **(c)** _r_ = 9. **1.60.** Find the number of distinct solutions in nonnegative integers of the equation _a_ \+ _b_ \+ _c_ \+ _d_ \+ _e_ = 24. **1.61.** Find the number of terms in the multinomial expansion of ( _a_ \+ _b_ \+ _c_ \+ _d_ \+ _e_ )24. **1.62.** Find the number of ways of forming a team of 15 students from a large university to represent freshmen, sophomores, juniors, seniors, and graduate students such that the team has **(a)** at least one from each group, **(b)** at least two from each group, and **(c)** at least two graduate students. **1.63.** Find the number of solutions of the linear equation _a_ \+ _b_ \+ _c_ \+ _d_ \+ _e_ = 10 if **(a)** all the variables are nonnegative integers, **(b)** all the variables are positive integers, and **(c)** all the variables are positive integers and the variable _a_ is odd. **1.64.** Find the number of ways a mother can distribute 9 identical candy bars to her three children so that each child gets at least 2 bars. **1.65.** The sum of the four positive integers _a, b, c_ , and _d_ is at most 10. Find the number of possible choices for these integers. **1.66.** When a die is rolled, one of the first six positive integers is obtained. Suppose that the die is rolled five times and the sum of the five integers thus obtained is added. The five throws constitute a trial. Find the number of possible trials such that the sum is at most 12. 
**1.67.** Establish the following identity: **1.68.** Find the number of solutions in nonnegative integers of the equation _x_ 1 \+ _x_ 2 \+ _x_ 3 \+ 3 _x_ 4 = 7. **1.69.** If _X_ = { _x_ 1, _x_ 2, . . . , _x n_} is a collection of _n_ distinct objects and _r_ any positive integer, find the number of _r_ -collections of _X_ such that each such collection has the object _x i_ repeated at least _p i_ times where _i_ = 1, 2, 3, . . . , _n_. **1.70.** Find the number of ways of allocating _r_ identical objects to _n_ distinct locations such that location _i_ gets at least _p i_ objects, where _i_ = 1, 2, . . . , _n_. **1.71.** Find the number of solutions in nonnegative integers of the (strict) inequality _a_ \+ _b_ \+ _c_ \+ _d_ \+ _e_ < 11. **1.72.** Solve Problem 1.71 if _a_ is at most 6. **1.73.** Show that it is possible to have a set of 5 people such that there is no subgroup of 3 strangers or a subgroup of 3 people known to one another in this set. **1.74.** There are 4 commuter flights from city _A_ to city _B_ daily. For a particular day, it was noticed that the number of vacant seats on these flights are 8, 10, 13, and 9, respectively. Find the minimum number of tickets that have to be sold so that the number of vacant seats will be **(a)** at most 1 in flight 1 or at most 3 in flight 2 or at most 6 in flight 3 or at most 2 in flight 4, **(b)** at most 2 in flight 1 or at most 3 in flight 2 or at most 4 in flight 3 or at most 1 in flight 4. **1.75.** Prove that in any group of 10 people either there is a subgroup of 3 strangers or a subgroup of 4 people known to one another. **1.76.** The numbers 1, 2, 3, . . . , _n_ ( _n_ is at least 3) are randomly placed around a circle and _r_ is any integer less than _n_. Let _S i_ be the sum of the _r_ consecutive integers (considered clockwise) starting from _i_ and including _i_ , where _i_ = 1, 2, . . . , _n_. Show that there is at least one _S i_ that is not smaller than the floor of _r_ ( _n_ \+ 1)/2. 
**1.77.** Show that in every finite set of numbers there is a number that is greater than or equal to the arithmetic mean of the numbers in the set. **1.78.** Let _X_ = {1, 2, 3, . . . , 600}. Find the number of elements in _X_ that are not divisible by 3 or 5 or 7. **1.79.** **(a** ) Obtain a formula to find the number of primes not exceeding a given positive integer, **(b)** Use this formula to find the number of primes not exceeding 100. **1.80.** A positive integer is **squarefree** if it is not divisible by the square of an integer greater than 1. **(a)** Obtain a formula to compute the number of squarefree integers not exceeding a given positive integer, and **(b)** use this formula to compute the number of squarefree numbers not exceeding 100. **1.81.** If _p_ and _q_ are two distinct primes, find the totient function of _pq_. **1.82.** There are six chairs marked 1 to 6 in the conference room of an office. Six people attend a seminar in this room in the morning and again in the afternoon, **(a** ) Find the number of permutations and derangements regarding the seating arrangements, **(b)** find the probability that nobody sits in the same seat twice, **(c** ) find the probability that exactly one person sits in the same chair twice, **(d)** find the probability that at least one person gets the same seat twice, **(e** ) find the probability that exactly two people retain their seats, and (f) find the probability that all the six retain their seats. **1.83.** Use a combinatorial argument to establish the identity: _C_ ( _n_ , 0) · _D n_ \+ _C_ ( _n_ , 1) · _D n_–1, + _C_ ( _n_ , 2) · _D n–_2 \+ · · · + _C_ ( _n, n_ ) · _D_ 0 = _n_!. **1.84.** Find the number of solutions in integers of the linear equation _p_ \+ _q_ \+ _r_ = 25 where _p_ is at least 2 and at most 4, _q_ is at least 3 and at most 6, and _r_ is at least 4 and at most 8. 
**1.85.** Let _X_ be the set of 4-sequences that can be formed using the letters _A_ , _B, C, D, E_ , and _F_ such that every sequence in _X_ has the letters _A_ , _B_ , and _C_ at least once. Find the cardinality of _X_. **1.86.** There are 5 job openings in an office. On the basis of a written test and a personal interview, 4 candidates were selected and each candidate is offered one of the available jobs. Find the number of ways of assigning these jobs to the candidates. **1.87.** Find the number of permutations of the nine digits 1, 2, . . . , 9 in which **(a)** the blocks 12, 34, and 567 do not appear, and **(b)** the blocks 12, 23, and 415 do not appear. **1.88.** Let _D_ ( _n, r_ ) be the number of permutations of a set of _n_ elements in which exactly _r_ of the _n_ elements appear in their "natural" positions and _E_ ( _n, r_ ) be the number of permutations in which at least _r_ of the _n_ elements appear in their natural positions. Prove **(a)** _D_ ( _n_ , 0) = _D n_, **(b)** _D_ ( _n, r_ ) = _C_ ( _n, r_ ) · _D n–r_, **(c)** _D_ ( _n, n_ ) = _E_ ( _n, n_ ) = 1, and **(d)** if _S_ ( _i_ ) = _C_ ( _n, i_ ) · ( _n – i_ )!, then

**Generating Functions**

**_2.1 INTRODUCTION_**

In this chapter we introduce the concept of a generating function—a powerful tool that is very useful in solving counting problems, particularly problems involving the selection and arrangement of objects with repetition and with additional constraints. Consider the integer equation problem of Chapter 1, which asks for the number of nonnegative integer solutions of _x_ 1 \+ _x_ 2 \+ · · · + _x n_ = _r_ , in which we imposed no other restrictions on the _n_ variables. How do we solve this problem if we now restrict each variable _x i_ to be an element of a set _V i_? A typical problem: Find the number of ways to make 62 cents involving quarters, dimes, nickels, and cents.
The solution is the number of solutions in nonnegative integers of _q_ \+ _d_ \+ _n_ \+ _c_ = 62, where _q_ is in the set _Q_ = {0, 25, 50}, _d_ is in _D_ = {0, 10, 20, 30, 40, 50, 60}, _n_ is in _N_ = {0, 5, 10, 15, 20, 25, . . . , 60}, and _c_ is in _C_ = {0, 1, 2, 3, 4, . . . , 60, 61, 62}. Before we develop a procedure to solve this "change-making problem" using generating functions, let us examine a simpler problem.

**Example 2.1.1** Find the number of integer solutions of _a_ \+ _b_ \+ _c_ = 10, where each variable is at least 2 and at most 4.

**Solution** (**By Explicit Enumeration**): The possible solutions ( _a, b, c_ ) are (2, 4, 4), (4, 2, 4), (4, 4, 2), (3, 3, 4), (3, 4, 3), and (4, 3, 3). Thus there are six different solutions for this problem.

Now we introduce three polynomials _p a_, _p b_, and _p c_, one for each variable. Since each variable can be 2 or 3 or 4, in this case each polynomial is defined as _x_ 2 \+ _x_ 3 \+ _x_ 4 and we multiply these three polynomials to obtain a polynomial _p_ ( _x_ ) involving powers of _x_ with exponents ranging from 6 to 12. This polynomial _p_ ( _x_ ) is an example of a generating function. Since _a_ \+ _b_ \+ _c_ = 10 we now look for the coefficient of the tenth power of _x_ in the polynomial _p_ ( _x_ ). In how many ways can we form the tenth power of _x_ in _p_ ( _x_ )? For example, we can choose _x_ 2 from _p a_, _x_ 4 from _p b_, and _x_ 4 from _p c_ and multiply them. This is just one way of getting the tenth power of _x_ and this corresponds to the solution _a_ = 2, _b_ = 4, and _c_ = 4. In other words, every solution of the problem corresponds to exactly one way of obtaining the tenth power of _x_ in _p_ ( _x_ ). So the number of solutions of the problem is the coefficient of the tenth power of _x_ in the function _p_ ( _x_ ) = ( _x_ 2 \+ _x_ 3 \+ _x_ 4)3. By ordinary polynomial multiplication we see that this coefficient is 6.

**DEFINITION 2.1.1** (a) A **power series** is an infinite series of the form _a_ 0 \+ _a_ 1 _x_ \+ _a_ 2 _x_ 2 \+ _a_ 3 _x_ 3 \+ · · · , where _a i_ ( _i_ = 0, 1, 2, . . .)
are real numbers and _x_ is a variable. (b) If _a_ 0 \+ _a_ 1 _x_ \+ _a_ 2 _x_ 2 \+ · · · and _b_ 0 \+ _b_ 1 _x_ \+ _b_ 2 _x_ 2 \+ · · · are two power series, then (1) the **sum** of the two power series is a power series in which the coefficient of _x r_ is _a r_ \+ _b r_ and (2) the **product** of the two power series is a power series in which the coefficient of _x r_ is ( _a_ 0 _b r_ \+ _a_ 1 _b r_–1, + _a_ 2 _b_ _r_ –2 \+ · · · + _a rb_0). (c) If _a r_ ( _r_ = 0, 1, 2, . . .) is the number of ways of selecting _r_ objects in a certain combinatorial problem (or, more generally, the number of solutions of a combinatorial problem), the **ordinary generating function** for this combinatorial problem is the power series _a_ 0 \+ _a_ 1 _x_ \+ _a_ 2 _x_ 2 \+ _a_ 3 _x_ 3 \+ · · · + . Any polynomial in _x_ is a power series in _x_. For example, the polynomial 3 _x_ 2 \+ 2 _x_ 4 can be written as 0 + 0 · _x_ \+ 3 _x_ 2 \+ 0 · _x_ 3 \+ 2 _x_ 4 \+ 0 · _x_ 5 \+ 0 · _x_ 6 \+ · · · + . The addition and multiplication procedures in the definition are obvious generalizations of ordinary polynomial addition and multiplication. Now consider the problem _a_ \+ _b_ \+ _c_ = _r_ , where _a, b_ , and _c_ are at least 2 and at most 4. Then _r_ varies from 6 to 12. For a fixed choice of _r_ , let _a r_ be the number of solutions in integers. Then _a r_ is the coefficient of _x r_ in the generating function _g_ ( _x_ ) of the problem where _g_ ( _x_ ) = ( _x_ 2 \+ _x_ 3 \+ _x_ 4)3, which is equal to _x_ 6 \+ 3 _x_ 7 \+ 6 _x_ 8 \+ 7 _x_ 9 \+ 6 _x_ 10 \+ 3 _x_ 11 \+ _x_ 12. **Example 2.1.2** The number of ways of choosing _r_ elements from a set of _n_ elements is _C_ ( _n, r_ ), and so the generating function for this combinatorial problem is _g_ ( _x_ ), where which is the binomial expansion for (1 + _x_ ) _n_. 
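The polynomial arithmetic behind these examples is easy to mechanize. The following Python sketch (the helper names `polymul` and `polypow` are ours, not the book's) multiplies coefficient lists exactly as in the product rule of Definition 2.1.1(b), reproduces the expansion of ( _x_ 2 \+ _x_ 3 \+ _x_ 4)3 from Example 2.1.1, and applies the same idea to the change-making problem stated in the introduction.

```python
# Power-series (polynomial) product per Definition 2.1.1(b):
# the coefficient of x^r in the product is a0*br + a1*b(r-1) + ... + ar*b0.
def polymul(a, b):
    """Multiply two polynomials given as coefficient lists (index = power of x)."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def polypow(p, n):
    """n-th power of a polynomial by repeated multiplication."""
    out = [1]
    for _ in range(n):
        out = polymul(out, p)
    return out

# Example 2.1.1: g(x) = (x^2 + x^3 + x^4)^3
# = x^6 + 3x^7 + 6x^8 + 7x^9 + 6x^10 + 3x^11 + x^12.
g = polypow([0, 0, 1, 1, 1], 3)
print(g[10])  # coefficient of x^10 = number of solutions of a + b + c = 10 -> 6

# Change-making problem: coefficient of x^62 in the product of one
# polynomial per coin type (quarters, dimes, nickels, cents).
coins = [[1 if k % c == 0 else 0 for k in range(63)] for c in (25, 10, 5, 1)]
f = [1]
for p in coins:
    f = polymul(f, p)
print(f[62])  # number of ways to make 62 cents
```

The same two helper functions suffice for every ordinary generating function in this chapter, since each one is a product of simple factor polynomials.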
**Example 2.1.3** Find the generating function _g_ ( _x_ ) in which the coefficient of _x r_ is _a r_, where _a r_ is the number of solutions in nonnegative integers of the equation 2 _a_ \+ 3 _b_ \+ 5 _c_ = _r_. **Solution**. We write _A_ = 2 _a, B_ = 3 _b_ , and _C_ = 5 _c_ and seek the number of solutions of _A_ \+ _B_ \+ _C_ = _r_ , where _A_ is in the set {0, 2, 4, 6, . . .}, _B_ is in {0, 3, 6, 9, . . .}, and _C_ is in {0, 5, 10, 15, . . .}. Thus the generating function is _g_ ( _x_ ) = (1 + _x_ 2 \+ _x_ 4 \+ _x_ 6 \+ . . .)(1 + _x_ 3 \+ _x_ 6 \+ _x_ 9 \+ . . .)(1 + _x_ 5 \+ _x_ 10 \+ _x_ 15 \+ . . .). **Example 2.1.4** The number of solutions in nonnegative integers of _a_ \+ _b_ \+ _c_ = 4 (with no other constraints on the variables) is the coefficient of _x_ 4 either in _g_ ( _x_ ) = (1 + _x_ \+ _x_ 2 \+ _x_ 3 \+ **_x_** 4)3 or in _h_ ( _x_ ) = (1 + _x_ \+ _x_ 2 \+ _x_ 3 \+ _x_ 4 \+ _x_ 5 \+ . . .)3. Notice that _g_ ( _x_ ) is a polynomial in _x_ , whereas _h_ ( _x_ ) is a power series that is not a polynomial. **Example 2.1.5** If _a r_ is the number of ways of selecting _r_ marbles from a collection of red, blue, and white marbles such that the number of red marbles selected is at most two, the number of blue marbles selected is at most three and the number of white marbles selected is at most four, then _a r_ is the coefficient of _x r_ in the generating function _g_ ( _x_ ) = (1 + **_x_** \+ _x_ 2)(1 + _x_ \+ _x_ 2 \+ _x_ 3)(1 + _x_ \+ _x_ 2 \+ _x_ 3 \+ _x_ 4) Equivalently, the coefficient of _x r_ in _g_ ( _x_ ) is the number of solutions in nonnegative integers of _a_ \+ _b_ \+ _c_ = _r_ , where _a_ is at most 2, _b_ is at most 3, and _c_ is at most 4. 
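As a sanity check on Example 2.1.5, the coefficients of _g_ ( _x_ ) can be compared against a direct enumeration of the marble selections. A minimal sketch (the helper name `polymul` is our own):

```python
from itertools import product

def polymul(a, b):
    """Multiply two polynomials given as coefficient lists (index = power of x)."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

# g(x) = (1 + x + x^2)(1 + x + x^2 + x^3)(1 + x + x^2 + x^3 + x^4):
# caps are red <= 2, blue <= 3, white <= 4, as in Example 2.1.5.
g = [1]
for cap in (2, 3, 4):
    g = polymul(g, [1] * (cap + 1))

# Brute force: count triples (red, blue, white) within the caps summing to r.
for r in range(len(g)):
    count = sum(1 for a, b, c in product(range(3), range(4), range(5))
                if a + b + c == r)
    assert g[r] == count
print(g)  # the coefficients a_0, a_1, ..., a_9
```

The loop confirms that each coefficient _a r_ of the generating function really is the number of admissible selections of _r_ marbles.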
**_2.2 ORDINARY GENERATING FUNCTIONS_**

In Example 2.1.1, we saw that the number of computational steps involved in finding the number of solutions by explicit enumeration is exactly equal to the number of computational steps involved in finding the coefficient of the tenth power of _x_ in the generating function, and therefore the generating function method was in no way more efficient than the explicit enumeration method. We shall now develop some simple techniques for calculating the coefficients of generating functions without actually carrying out the polynomial (power series) multiplication procedure.

**THEOREM 2.2.1** (a) Let _a r_ be the coefficient of _x r_ in _g_ ( _x_ ) = (1 + _x_ \+ _x_ 2 \+ _x_ 3 \+ . . .) _n_. Then _a r_ = _C_ ( _r_ \+ _n_ – 1, _r_ ). (b) (1 – _x m_) _n_ = 1 – _C_ ( _n_ , 1) _x m_ \+ _C_ ( _n_ , 2) _x_ 2 _m_ – · · · + (–1) _n_ _x_ _nm_. (c) (1 + _x_ \+ _x_ 2 \+ · · · + _x m_–1) _n_ = (1 – _x m_) _n_ (1 + _x_ \+ _x_ 2 \+ . . .) _n_.

**_Proof_:** (a) The function _g_ ( _x_ ) is the generating function associated with the combinatorial problem that seeks the number _a r_ of solutions in nonnegative integers of the equation _y_ 1 \+ _y_ 2 \+ · · · + _y n_ = _r_ and it was proved in Chapter 1 that the number of solutions is _C_ ( _r_ \+ _n_ – 1, _n_ – 1), which is equal to _C_ ( _r_ \+ _n_ – 1, _r_ ). (b) Put _t_ = (– _x m_) in the binomial expansion of (1 + _t_ ) _n_. (c) It is easily verified (in a formal sense) that 1 + _x_ \+ _x_ 2 \+ · · · + _x m_–1 = (1 – _x m_)(1 + _x_ \+ _x_ 2 \+ . . .). Now take the _n_ th power on both sides of this equation.

**Example 2.2.1** Find the number of solutions in integers of the equation _a_ \+ _b_ \+ _c_ \+ _d_ = 27, where each variable is at least 3 and at most 8.

**Solution**.
The number of solutions is the coefficient of the twenty-seventh power of _x_ in _g_ ( _x_ ) = ( _x_ 3 \+ _x_ 4 \+ · · · + _x_ 8)4, and this number is the coefficient of the fifteenth power of _x_ in _h_ ( _x_ ) = (1 + _x_ \+ · · · + _x_ 5)4. By (c) of Theorem 2.2.1, _h_ ( _x_ ) = (1 – _x_ 6)4(1 + _x_ \+ _x_ 2 \+ · · ·)4. By (b) of this theorem, (1 – _x_ 6)4 = 1 – _C_ (4, 1) _x_ 6 \+ _C_ (4, 2) _x_ 12 – · · · , and by (a) of the same theorem, (1 + _x_ \+ _x_ 2 \+ · · ·)4 = 1 + _C_ (4, 1) _x_ \+ _C_ (5, 2) _x_ 2 \+ _C_ (6, 3) _x_ 3 \+ · · · . Thus the coefficient of the fifteenth power of _x_ in _h_ ( _x_ ) is equal to _C_ (18, 15) – _C_ (4, 1) _C_ (12, 9) + _C_ (4, 2) _C_ (6, 3) = 56.

**Example 2.2.2** Find the coefficient of the twenty-fourth power of _x_ in ( _x_ 3 \+ _x_ 4 \+ . . .)5.

**Solution**. The desired number is the coefficient of the ninth power of _x_ in _g_ ( _x_ ) = (1 + _x_ \+ _x_ 2 \+ . . .)5, which is equal to _C_ (13, 4).

If _a_ 0 \+ _a_ 1 _x_ \+ _a_ 2 _x_ 2 \+ · · · + _a r_ _x r_ \+ · · · is the power series expansion of a function _g_ ( _x_ ), then _g_ ( _x_ ) is the ordinary generating function for the sequence _a r_. From a given generating function it is possible to build new generating functions for different choices of _a r_, and this is the content of the next theorem, the proof of which is left as an exercise.

**THEOREM 2.2.2** If _g_ ( _x_ ) is the generating function for _a r_ and _h_ ( _x_ ) is the generating function for _b r_ then: (a) _Ag_ ( _x_ ) + _Bh_ ( _x_ ) is the generating function for _Aa r_ \+ _Bb r_. (b) (1 – _x_ ) _g_ ( _x_ ) is the generating function for _a r_ – _a r_–1. (c) (1 + _x_ \+ _x_ 2 \+ . . .) _g_ ( _x_ ) is the generating function for ( _a_ 0 \+ _a_ 1 \+ _a_ 2 \+ · · · + _a r_). (d) _g_ ( _x_ ) _h_ ( _x_ ) is the generating function for ( _a_ 0 _b r_ \+ _a_ 1 _b r_–1 \+ _a_ 2 _b r_–2 \+ · · · + _a r b_ 0).
(e) _xg_ ′( _x_ ) is the generating function for _ra r_, where _g_ ′( _x_ ) is the derivative of _g_ ( _x_ ) with respect to _x_.

When the symbol _x_ is a real number with absolute value less than 1, it can actually be verified that (1 – _x_ )(1 + _x_ \+ _x_ 2 \+ _x_ 3 \+ . . .) = 1. (For a proof, see the discussion of the convergence of geometric series in any introductory calculus book. In this book we are more interested in the coefficients of the powers of _x_ considered as a symbol than with issues of convergence.) Thus we write _g_ ( _x_ ) = 1/(1 – _x_ ) and _h_ ( _x_ ) = 1/(1 – _x_ ) _n_, where _g_ ( _x_ ) is the generating function for _a r_ = 1 and _h_ ( _x_ ) is the generating function for _a r_ = _C_ ( _r_ \+ _n_ – 1, _r_ ).

**Example 2.2.3** Find the generating function for _a r_ = 3 _r_ \+ 5 _r_ 2.

**Solution**. Let _g_ ( _x_ ) = 1/(1 – _x_ ). The generating function for 1 is _g_ ( _x_ ). So the generating function for _r_ is _xg_ ′( _x_ ), by (e) of Theorem 2.2.2. By applying this principle once more we see that the generating function for _r_ 2 is _x_ ( _xg_ ′( _x_ ))′. Thus the desired generating function is 3 _xg_ ′( _x_ ) + 5 _x_ ( _xg_ ′( _x_ ))′, which is equal to (8 _x_ \+ 2 _x_ 2)/(1 – _x_ )3.

**_2.3 EXPONENTIAL GENERATING FUNCTIONS_**

The generating functions we have seen thus far are referred to as "ordinary" generating functions because they were associated with selection problems in which order was not relevant. In other words, they are used to solve combinatorial problems of distribution of identical (indistinguishable) objects into distinct locations. Now we turn to problems of arrangements in which order plays a significant role. For example, the problem of finding the number of ways in which 5 red (indistinguishable) marbles can be put in 3 distinct boxes is a problem in which order is not relevant, whereas the problem of finding the number of ways of arranging 5 marbles in a row using three different types of marbles (red, blue, and white, say) is a problem in which order plays a crucial role.
The arrangement RRBBW (red, red, blue, blue, white) is not the same as the arrangement RBRBW, even though both the arrangements use the same number of red, blue, and white marbles. Generating functions that are defined in connection with such combinatorial problems, where order is relevant, are called **exponential generating functions**. Let us analyze this example of arranging marbles before we give a formal definition of exponential generating functions.

**Example 2.3.1** Find the number of ways of arranging 5 marbles in a row using marbles of three colors (red, blue, and white) so that in each arrangement there is at least one marble of each color, _assuming that there are at least 3 marbles of each color at our disposal._

**Solution.** Let the number of red, blue, and white marbles in a particular arrangement be _r, b_ , and _w_. Then _r_ \+ _b_ \+ _w_ = 5, where each variable is an integer that is at least 1. We know (from our discussion of generalized permutations in Chapter 1) that with this particular choice of _r, b_ , and _w_ there are (5!)/( _r_!)( _b_!)( _w_!) ways of arranging 5 marbles in a row. Thus the total number of arrangements will be the sum of all expressions of the form ( _r_ \+ _b_ \+ _w_ )!/( _r_!)( _b_!)( _w_!), where _r_ \+ _b_ \+ _w_ = 5 and each variable is an integer that is at least 1. The choices of ( _r, b, w_ ) are as follows: (3, 1, 1), (1, 3, 1), (1, 1, 3), (2, 2, 1), (2, 1, 2), and (1, 2, 2). Thus the number of arrangements will be 3 · 5!/(3! 1! 1!) \+ 3 · 5!/(2! 2! 1!) = 60 \+ 90 = 150.

Now it can easily be verified that the coefficient of _x_ 5/5! in the function _g_ ( _x_ ), where _g_ ( _x_ ) = ( _x_ /(1!) + _x_ 2/(2!) + _x_ 3/(3!))3, is precisely the sum of the six expressions obtained in the preceding paragraph giving the total number of arrangements. The function _g_ ( _x_ ) is an example of an exponential generating function.
As in the case of ordinary generating functions, we take the third power of a polynomial (representing the three distinct colors), and the powers of the variable in the polynomial are 1, 2, and 3, indicating that the number of times a marble of a particular color can appear in an arrangement is 1, 2, or 3. The significant difference here is that unlike the ordinary generating function, the coefficient of _x r_ in the polynomial is 1/( _r_!) and the solution of the combinatorial problem is the coefficient of _x r_/( _r_!) in the exponential generating function.

Is there an easier method in this problem to find the coefficient of _x_ 5/(5!) in _g_ ( _x_ )? Let _h_ ( _x_ ) = ( _e x_ – 1)3, where _e x_ is the exponential function (power series) defined by 1 + _x_ \+ _x_ 2/(2!) + _x_ 3/(3!) + · · · , where _x_ is any real variable. Then the required coefficient is that of _x_ 5/(5!) in _h_ ( _x_ ) = _e_ 3 _x_ – 3 _e_ 2 _x_ \+ 3 _e x_ – 1, and this coefficient is 3^5 – (3)2^5 \+ 3 = 150.

**DEFINITION 2.3.1** If _b r_ ( _r_ = 0, 1, 2, . . .) is the solution of a combinatorial problem, the power series _g_ ( _x_ ) defined by _b_ 0 \+ _b_ 1 _x_ \+ ( _b_ 2 _x_ 2)/(2!) + ( _b_ 3 _x_ 3)/(3!) + · · · is called the **exponential generating function** for that problem.

**Example 2.3.2** Find the exponential generating function for _b r_, the number of ways of arranging _r_ distinct elements from a set of _n_ elements.

**Solution**. Of course, _b r_ = _P_ ( _n, r_ ), so the exponential generating function for this problem is _g_ ( _x_ ), which is a power series in which the coefficient of _x r_ is [ _P_ ( _n, r_ )]/( _r_!) = _C_ ( _n_ , _r_ ). Thus the exponential generating function for _P_ ( _n, r_ ) is (1 + _x_ ) _n_, which is the same as the ordinary generating function of _C_ ( _n, r_ ).
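Both counts in Example 2.3.1 can be machine-checked: expanding the exponential generating function with exact rational arithmetic, reading the closed form off ( _e x_ – 1)3, and brute-forcing all 3^5 = 243 color rows must all agree. A sketch (the function and variable names are ours):

```python
from fractions import Fraction
from itertools import product
from math import factorial

def polymul(a, b):
    """Multiply two coefficient lists exactly, using rational arithmetic."""
    out = [Fraction(0)] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

# Coefficient of x^5/5! in (x/1! + x^2/2! + x^3/3!)^3, as in Example 2.3.1.
factor = [Fraction(0), Fraction(1), Fraction(1, 2), Fraction(1, 6)]
g = [Fraction(1)]
for _ in range(3):
    g = polymul(g, factor)
egf_count = g[5] * factorial(5)

# The closed form read off h(x) = (e^x - 1)^3 = e^{3x} - 3e^{2x} + 3e^x - 1.
closed_form = 3**5 - 3 * 2**5 + 3

# Brute force: rows of 5 marbles over 3 colors using every color at least once.
brute = sum(1 for row in product("RBW", repeat=5) if set(row) == {"R", "B", "W"})

print(egf_count, closed_form, brute)  # all three equal 150
assert egf_count == closed_form == brute == 150
```

The exact-fraction expansion is the general-purpose route; the ( _e x_ – 1) _n_ closed form is the shortcut the text exploits again in Example 2.3.5.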
**Example 2.3.3** Find the number of ways of arranging 5 marbles in a row using marbles of three colors (red, blue, and white) so that each arrangement has at least one marble of each color assuming that we have at most 3 red, at most 2 white, and at most 2 blue marbles at our disposal. (Notice the difference between this example and Example 2.3.1.)

**Solution**. In this case the number of arrangements will be the coefficient of _x_ 5/5! in _g_ ( _x_ ) = ( _x_ \+ _x_ 2/2! + _x_ 3/3!)( _x_ \+ _x_ 2/2!)2, and this coefficient is the same as the coefficient of _x_ 5/5! in _x_ 3 \+ (3/2) _x_ 4 \+ (11/12) _x_ 5 \+ · · · , namely (11/12) · 5! = 110.

The analysis in Examples 2.3.1 and 2.3.3 generalizes as follows:

**THEOREM 2.3.1** Suppose that there are _k_ types of objects. (a) If there is an unlimited supply of objects in each of these types, then the number of _r_ -permutations ( _r_ = 1, 2, . . .) using objects from these _k_ types is the coefficient of _x r/r_! in the exponential generating function (1 \+ _x_ \+ _x_ 2/2! \+ _x_ 3/3! \+ · · ·) _k_ = _e kx_. (b) If the supply of objects in type _i_ is at most _n i_ (where _i_ = 1, 2, . . . , _k_ ), the number of _r_ -permutations will be the coefficient of _x r_/ _r_! in the product of the _k_ polynomials (1 \+ _x_ \+ _x_ 2/2! \+ · · · \+ _x ni_/ _n i_!). (c) ( _n_!) · _S_ ( _r, n_ ) = coefficient of _x r/r_! in ( _e x_ – 1) _n_, where _S_ ( _r, n_ ) is a Stirling number of the second kind defined in Chapter 1.

**Example 2.3.4** (a) Find the number of _r_ -permutations that can be formed using the letters I, M, S, and P, where _r_ is a positive integer. (b) Find the number of _r_ -permutations that can be formed using the letters that appear in the word MISSISSIPPI so that the number of times a letter appears in a permutation is at most equal to the number of times the letter appears in the word.

**Solution** (a) The number of _r_ -permutations is the coefficient of _x r_/ _r_! in the exponential generating function _g_ ( _x_ ) = _e_ 4 _x_, and this coefficient is 4 _r_. (b) In the word, the frequencies of the letters I, M, P, and S are 4, 1, 2, and 4. Thus the number of _r_ -permutations is the coefficient of _x_ r/ _r_!
in the exponential generating function (1 + x + x^2/2! + x^3/3! + x^4/4!)^2 (1 + x)(1 + x + x^2/2!), where r is at most 11, the sum of the frequencies.

**Example 2.3.5** Find the number of ways of accommodating 9 people in 4 rooms such that no room is left unoccupied.

**Solution**. If x denotes the number of people assigned to a room, then x is at least 1 and at most 6, and there are 4 rooms. Thus the exponential generating function for this combinatorial problem is g(x) = (x + x^2/2! + · · · + x^6/6!)^4. Now the number of ways of accommodating 9 people in 4 rooms is the coefficient of x^9/9! in g(x), and this coefficient is equal to the coefficient of x^9/9! in h(x) = (e^x - 1)^4 = e^{4x} - 4e^{3x} + 6e^{2x} - 4e^x + 1. Thus the number of arrangements is 4^9 - (4)3^9 + (6)2^9 - 4. Notice that the number of arrangements is equal to (4!)S(9, 4), where S(9, 4) is the Stirling number of the second kind defined in Chapter 1.

**_2.4 NOTES AND REFERENCES_**

The first comprehensive treatment of generating functions was given by Pierre Simon Marquis de Laplace (1749–1827). But the method of solving problems using generating functions has its origin in the works of Abraham de Moivre (1667–1754). Both Leonhard Euler (1707–1783) and Nicholas Bernoulli (1687–1759) also used this technique in their investigation of certain combinatorial problems: Euler was interested, among other things, in partition problems and Bernoulli was interested in derangement problems. For a thorough treatment of generating functions, see the books on combinatorics by MacMahon (1960) or Riordan (1958). See also Riordan (1964). Other general references include the relevant chapters in the books by Cohen (1978), Krishnamurthy (1986), Liu (1968), Liu (1985), Roberts (1984), Tucker (1984), and Townsend (1987).

**_2.5 EXERCISES_**

**2.1.** Find the ordinary generating functions for the following sequences. **(a)** {1, 1, 1, 1, 0, 0, . . .} **(b)** {0, 0, 0, 0, 1, 1, . . .} **(c)** {1, 1, 1, 1, . . .} **(d)** {1, -1, 1, -1, . . .}

**2.2.** Find the ordinary generating functions for the following sequences. **(a)** {1, 2, 3, 4, . . .} **(b)** {1, -2, 3, -4, . . .}

**2.3.** Find the sequence corresponding to each of the following ordinary generating functions. **(a)** (2 + x)^4 **(b)** x^2 + e^x **(c)** x^3(1 - x)^{-1}

**2.4.** Find the coefficient of x^7 in (1 - x)^k when k = 9 and k = -9.

**2.5.** Find the coefficient of x^7 in (1 + x)^k when k = 9 and k = -9.

**2.6.** Find the coefficient of x^23 in (x^3 + x^4 + · · ·)^5.

**2.7.** Find the ordinary generating function f(x) that can be associated with the combinatorial problem of finding the number of solutions in positive integers of the equation a + b + c + d = r.

**2.8.** Find the ordinary generating function associated with the problem of finding the number of solutions in nonnegative integers of the equation 3a + 2b + 4c + 2d = r.

**2.9.** Find the number of solutions in integers of the equation p + q + r + s = 27, where each variable is at least 3 and at most 8.

**2.10.** Find the number of solutions of x_1 + x_2 + · · · + x_n = r, where each variable is either 0 or 1.

**2.11.** If three distinct dice (marked A, B, and C) are thrown, find the number of ways of getting a total of 13.

**2.12.** Solve Problem 2.11 if the first die (marked A) shows an even number.

**2.13.** Find the number of ways of allocating 9 identical objects to 3 different locations (numbered first, second, and third) such that each location gets at least one object and the third location does not get more than 3 objects.

**2.14.** Find the ordinary generating function associated with the combinatorial problem of choosing 9 marbles from a bag that has 3 identical red marbles, 4 identical blue marbles, and 5 identical green marbles such that in every choice all colors are represented and no color has an absolute majority.
**2.15.** Prove: (1 + x^m)^n = 1 + C(n, 1)x^m + C(n, 2)(x^m)^2 + · · · + (x^m)^n.

**2.16.** Find the number of solutions in integers of the equation a + b + c + d + e + f = 20, where a is at least 1 and at most 5 and the other variables are at least 2, by **(a)** the method developed in Chapter 1, and **(b)** considering a suitable generating function.

**2.17.** There are 10 identical gift boxes. Each box has to be wrapped with either red or blue or green or yellow wrapping paper. The available red paper can be used to wrap at most 2 boxes and the available blue paper can be used to wrap at most 3 boxes. Write down the ordinary generating function associated with the problem of finding the number of ways of wrapping these 10 boxes.

**2.18.** There are 9 people in a group. Find the number of ways of collecting $9.00 from this group if the leader of the group will give at least $1.00 and at most $2.00 and every other member will give at most $1.00.

**2.19.** Find the ordinary generating function associated with the problem of finding the number of solutions in integers of the inequality a + b + c ≤ r, where each variable is at least 2 and at most 5.

**2.20.** A participant in a contest is rated on a scale of 1 to 6 by each of the 4 judges. To be a finalist a participant has to score at least 22. Find the number of ways the judges can rate a participant so that she can be a finalist.

**2.21.** An Antarctic expedition group consists of scientists representing the United States, the USSR, and England. Find the number of ways of forming a group of 9 scientists so that none of these three countries has an absolute majority in the group.

**2.22.** Use a combinatorial argument to prove that the coefficient of x^{2n+1} in f(x) is equal to the coefficient of x^{2n-2} in g(x), where f(x) = (1 + x + · · · + x^n)^3 and g(x) = (1 + x + x^2 + · · · + x^{n-1})^3.
Find this coefficient.

**2.23.** Find the number of ways of distributing 8 apples and 6 oranges to 3 children so that each child gets at least 2 apples and at most 2 oranges.

**2.24.** Prove **(a)** 1 + 2 + 3 + · · · + r = r(r + 1)/2 and **(b)** 1 + 2^2 + 3^2 + · · · + r^2 = r(r + 1)(2r + 1)/6.

**2.25.** Find the number of ways of storing p identical red marbles in m boxes on one shelf and q identical blue marbles in n boxes on another shelf so that no box is empty. (Since no box will be empty, p cannot be less than m and q cannot be less than n.) Solve the problem when p = 6, q = 7, m = 3, and n = 4.

**2.26.** Find the ordinary generating functions for the sequences: **(a)** {a_r} where a_r = k^r, where k is a constant; **(b)** {b_r} where b_r = rk^r; **(c)** {c_r} where c_r = k + 2k^2 + 3k^3 + · · · + rk^r.

**2.27.** The sum of four positive integers in nondecreasing order is r, and a_r is the number of ways of choosing these four integers. Find the ordinary generating function associated with the sequence {a_r}.

**2.28.** If X is a set with n elements, show that the number of subsets of X with (r - 1) elements is equal to the number of solutions of the equation y_1 + y_2 + · · · + y_r = (n - 1), where the first two variables are nonnegative and the other variables are positive.

**2.29.** Let X = {1, 2, 3, . . . , n}. Find the number of subsets of X such that each subset has r elements and no two elements in a subset are consecutive integers.

**2.30.** If r is a positive integer, a **partition** of r is a collection of positive integers whose sum is r. A partition is distinct if the integers in it are distinct. For example, {3, 1} is a distinct partition of 4, whereas {2, 2} is a partition of 4 that is not distinct.
The number of partitions of r is denoted by p(r) and the number of distinct partitions of r is denoted by p_d(r). Obtain the generating functions to compute p(r) and p_d(r).

**2.31.** Show that the number of distinct partitions of a positive integer r is the same as the number of partitions of r into odd positive integers.

**2.32.** Let p(r; n) be the number of partitions of r such that in each partition no element exceeds n. Show that p(r; r) = p(r).

**2.33.** Show that every nonnegative integer can be written uniquely in binary form.

**2.34.** Find the exponential generating functions associated with the following sequences. **(a)** {1, 1, 1, 1, 0, 0, 0, 0, . . .} **(b)** {0, 0, 0, 0, 1, 1, 1, 1, . . .} **(c)** {1, 2, 2^2, 2^3, 2^4, . . .} **(d)** {1, 1, 2·2, 3·2^2, 4·2^3, . . .}

**2.35.** **(a)** Use a combinatorial argument to prove that (e^x)^n = e^{nx}. **(b)** Prove that e^x + e^{-x} = 2(1 + x^2/2! + x^4/4! + · · ·). **(c)** Prove that e^x - e^{-x} = 2(x + x^3/3! + x^5/5! + · · ·).

**2.36.** Let X = {A, B, C, D}. Using exponential generating functions, obtain **(a)** the number of r-permutations that can be formed using these four letters such that in each permutation there is at least one A, at least one B, and at least one C, and **(b)** the number of r-permutations that can be formed such that in each permutation there is an even number of A's and an odd number of B's.

**2.37.** Find the number of r-digit binary numbers that can be formed using an even number of 0's and an even number of 1's.

**2.38.** **(a)** Find the number of ways the headquarters of a company can allocate nine new identical computers to four distinct branch offices so that each office gets at least one new computer.
**(b)** Find the number of ways the headquarters of a company can allocate nine new employees to four distinct branch offices so that each office gets at least one new employee.

**2.39.** Find **(a)** the number of permutations of the letters that appear in the word MISSISSIPPI, **(b)** the number of 6-permutations of letters that appear in this word, and **(c)** the number of 6-permutations of letters from this word such that in each permutation every letter of the word appears at least once.

**2.40.** Find the number of nine-digit sequences that can be formed using the digits 0, 1, 2, and 3 such that **(a)** each sequence has an even number of 0's, **(b)** each sequence has an odd number of 0's, **(c)** each sequence has an even number of 0's and an odd number of 1's, **(d)** the total number of 0's and 1's is odd, and **(e)** no digit appears exactly twice.

**2.41.** Obtain the appropriate generating function associated with the combinatorial problem of finding the number of codewords of length r from an alphabet consisting of five distinct letters such that in each codeword every letter of the alphabet appears at least once and the first letter appears an even number of times.

**Recurrence Relations**

**_3.1 INTRODUCTION_**

Consider a sequence a_0, a_1, a_2, . . . , where a_r is the solution of a certain combinatorial problem that depends on the input r. In Chapter 2 we discussed some methods to compute a_r using generating functions. In some cases it is possible to reduce the computation of the rth term of the sequence to earlier members of the sequence, if a_r can be expressed as a function of the earlier elements of the sequence. For example, consider the arithmetic progression 4, 7, 10, 13, 16, . . . , where the initial number a_0 is 4 and the common difference d is 3. Then the rth term of the sequence can be expressed in terms of the (r - 1)th term by the equation a_r = a_{r-1} + d.
This equation is an example of a **recurrence relation**. The condition a_0 = 4 is called the **initial condition** of this relation. Obviously, once the initial condition and the common difference are known, any arbitrary term can be obtained by computing a_1, a_2, . . . sequentially. Or we can obtain the rth term by **solving** the **recurrence relation**. In this case the **solution** is a_r = 4 + 3r, where r is any nonnegative integer. Similarly, if we take the geometric progression 4, 4·3, 4·3^2, 4·3^3, . . . , the recurrence relation is a_r = 3a_{r-1}, with initial condition a_0 = 4, and the solution is a_r = 4·3^r.

**_Recurrence Relations and Difference Equations_**

The **first difference** d(a_n) of a sequence {a_n} of real numbers is the difference a_n - a_{n-1}. The **second difference** d^2(a_n) is d(a_n) - d(a_{n-1}), which is equal to a_n - 2a_{n-1} + a_{n-2}. More generally, the k**th difference** d^k(a_n) is d^{k-1}(a_n) - d^{k-1}(a_{n-1}). A **difference equation** is an equation involving a_n and its differences. For example, 3d^2(a_n) + 2d(a_n) + 7a_n = 0 is a second-order homogeneous difference equation. Observe that every a_i (i = 0, 1, 2, . . . , n - 1) can be expressed in terms of a_n and these differences, because a_{n-1} = a_n - d(a_n), a_{n-2} = a_n - 2d(a_n) + d^2(a_n), and so on. Thus every recurrence relation can be formulated as a difference equation. On the other hand, using the definition of these differences, any difference equation can be formulated as a recurrence relation. For instance, the difference equation 3d^2(a_n) + 2d(a_n) + 7a_n = 0 can be expressed as the recurrence relation 12a_n = 8a_{n-1} - 3a_{n-2}. Thus some authors use the terms _difference equation_ and _recurrence relation_ interchangeably. The methods for solving recurrence relations were developed originally using the techniques used for solving difference equations.
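The conversion above can be sanity-checked numerically: generate a sequence from the recurrence 12a_n = 8a_{n-1} - 3a_{n-2} and confirm that the difference equation 3d^2(a_n) + 2d(a_n) + 7a_n = 0 holds at every step. A small sketch (the starting values are arbitrary; exact rationals avoid rounding noise):

```python
from fractions import Fraction

# Arbitrary initial values for the recurrence 12*a_n = 8*a_{n-1} - 3*a_{n-2}
a = [Fraction(1), Fraction(2)]
for n in range(2, 10):
    a.append((8 * a[n - 1] - 3 * a[n - 2]) / 12)

for n in range(2, 10):
    d1 = a[n] - a[n - 1]                    # first difference d(a_n)
    d2 = d1 - (a[n - 1] - a[n - 2])         # second difference d^2(a_n)
    assert 3 * d2 + 2 * d1 + 7 * a[n] == 0  # the original difference equation
print("difference equation holds for all computed terms")
```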
Difference equations are commonly used to approximate differential equations when solving differential equations on computers. Notice that in a relation of the type a_r = a_{r-1} + a_{r-2} we need to know both a_0 and a_1 to obtain any a_r (r > 1), and therefore we need two initial conditions to solve this equation. Thus, with the information available in the set of initial conditions of a given recurrence relation, in most cases one should be able to compute sequentially any arbitrary term of the sequence. We shall study techniques for solving certain types of recurrence relations later in this chapter. There are no general methods for solving all recurrence relations.

**Example 3.1.1** The recurrence relation a_r = ra_{r-1}, with the initial condition a_0 = 1, has the solution a_r = r! (r = 1, 2, . . .).

**Example 3.1.2** Find a recurrence relation to obtain a_n, the number of ways of arranging n distinct elements in a row.

**Solution**. There are n ways of choosing an element to be placed in the first position of the row. After placing an element in the first position, the number of ways of arranging the remaining (n - 1) elements is a_{n-1}. Thus we have the recurrence relation a_n = na_{n-1} with the initial condition a_1 = 1, the solution of which (by the previous example) is a_n = n!.

**Example 3.1.3** Suppose that the interest rate offered by a bank to its depositors is r% per year. If a_n is the amount on deposit at the end of n years, obtain a recurrence relation for a_n if (a) the interest is simple, and (b) the interest is compounded annually.

**Solution**. (a) If a_0 is the initial deposit, at the end of year k the amount ra_0 is added to a_k if the interest is simple. Thus the recurrence relation is a_{k+1} = a_k + ra_0, where k = 0, 1, 2, . . . .
By iteration we see that a_{k+1} = a_k + ra_0 = a_{k-1} + 2ra_0 = a_{k-2} + 3ra_0 = · · · = a_0 + (k + 1)ra_0. Thus the solution to the recurrence relation is a_n = (1 + nr)a_0. (b) If the interest is compounded annually, the recurrence relation is a_{k+1} = a_k + ra_k = (1 + r)a_k, the solution of which, by iteration, is a_n = (1 + r)^n a_0.

**Example 3.1.4** Find the recurrence relation for the Fibonacci sequence 1, 2, 3, 5, 8, 13, . . . , in which the rth term is the sum of the (r - 1)th term and the (r - 2)th term. Obviously, the relation is a_n - a_{n-1} - a_{n-2} = 0 with initial conditions a_1 = 1 and a_2 = 2. (The numbers that appear in this sequence are called _Fibonacci numbers_, and they arise in many areas of combinatorial mathematics. We discuss a solution technique later.)

**_Recursion and Recurrence_**

A recurrence relation as we see here is a recursive formula (see Section 3 of Chapter 0) to compute the number of ways to do a procedure involving n objects in terms of the number of ways to do it with fewer objects. This recursive reasoning involved in building a recurrence relation model of a counting problem is the same logic used in designing recursive computer subroutines that call themselves. The basic idea of any recursive procedure in computer science is that it calls itself to solve a problem by solving similar problems that are smaller than the original problem. An important feature of recursion is the concept of working backward: a recursive program computes a value defined by a recurrence relation by working backward to the initial conditions, whereas tabulating the recurrence directly works forward from them.

**_3.2 HOMOGENEOUS RECURRENCE RELATIONS_**

As mentioned before, there is no general method of solution for an arbitrary recurrence relation. In what follows we study a broad class of recurrence relations for which solution techniques are known.
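The forward tabulation described above is immediate to program. Here is a sketch (the helper name and the sample values of r and a_0 are ours) that iterates the simple-interest, compound-interest, and Fibonacci recurrences and checks the first two against the closed forms derived in Example 3.1.3:

```python
def iterate(step, a0, n):
    """Tabulate a_0..a_n from a first-order recurrence a_{k+1} = step(a_k)."""
    seq = [a0]
    for _ in range(n):
        seq.append(step(seq[-1]))
    return seq

r, a0 = 0.05, 1000.0
simple = iterate(lambda a: a + r * a0, a0, 10)       # a_{k+1} = a_k + r*a_0
compound = iterate(lambda a: (1 + r) * a, a0, 10)    # a_{k+1} = (1 + r)*a_k
assert abs(simple[10] - (1 + 10 * r) * a0) < 1e-9    # a_n = (1 + nr)a_0
assert abs(compound[10] - (1 + r) ** 10 * a0) < 1e-9 # a_n = (1 + r)^n a_0

# Fibonacci sequence of Example 3.1.4: each term is the sum of the previous two
fib = [1, 2]
for _ in range(8):
    fib.append(fib[-1] + fib[-2])
print(fib[:6])  # [1, 2, 3, 5, 8, 13]
```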
**DEFINITION 3.2.1** If c_i (i = 1, 2, . . . , r) are constants, a recurrence relation of the form a_n = c_1 a_{n-1} + c_2 a_{n-2} + · · · + c_r a_{n-r} + f(n) is called a **linear recurrence relation with constant coefficients of order** r. The recurrence relation is **homogeneous** if the function f(n) = 0. If g(n) is a function such that a_n = g(n) for n = 0, 1, 2, . . . , then g(n) is a **solution** of the recurrence relation.

**Example 3.2.1** It can be verified by substitution that g(n) = A·2^n + B·n·2^n + n^2·2^{n-1} (where A and B are arbitrary constants) is a solution of the following second-order inhomogeneous linear recurrence relation with constant coefficients: a_n = 4a_{n-1} - 4a_{n-2} + 2^n. (We shall study solution techniques to obtain such general solutions in this chapter.)

**THEOREM 3.2.1 (The Principle of Superposition)** If g_i(n), where i = 1, 2, . . . , k, are solutions of a_n = c_1 a_{n-1} + c_2 a_{n-2} + · · · + c_r a_{n-r} + f_i(n), then any linear combination of these k solutions of the form A_1 g_1(n) + A_2 g_2(n) + · · · + A_k g_k(n) is a solution of the recurrence relation a_n = c_1 a_{n-1} + c_2 a_{n-2} + · · · + c_r a_{n-r} + A_1 f_1(n) + · · · + A_k f_k(n), where A_i (i = 1, 2, . . . , k) are real numbers. In particular, any linear combination of the solutions of a homogeneous recurrence relation is again a solution of the homogeneous recurrence relation.
**_Proof_**: Let h(n) = A_1 g_1(n) + A_2 g_2(n) + · · · + A_k g_k(n). Since g_i(n) is a solution of a_n = c_1 a_{n-1} + c_2 a_{n-2} + · · · + c_r a_{n-r} + f_i(n), we have g_i(n) = c_1 g_i(n - 1) + c_2 g_i(n - 2) + · · · + c_r g_i(n - r) + f_i(n), and therefore h(n) = c_1 h(n - 1) + c_2 h(n - 2) + · · · + c_r h(n - r) + A_1 f_1(n) + · · · + A_k f_k(n), which proves our assertion.

There is a simple technique for solving homogeneous linear recurrence relations with constant coefficients. Let a_n = x^n be a solution of the relation a_n = c_1 a_{n-1} + c_2 a_{n-2} + · · · + c_r a_{n-r}. Then x^n = c_1 x^{n-1} + c_2 x^{n-2} + · · · + c_r x^{n-r}. If we ignore the trivial solution x = 0, we get the polynomial equation x^r - c_1 x^{r-1} - c_2 x^{r-2} - · · · - c_r = 0. This polynomial equation of degree r is called the **characteristic equation** of the recurrence relation, and it has r roots in general. It is quite possible that the equation has multiple roots or that some roots are complex. If x_i (i = 1, 2, . . . , r) are the r roots of the characteristic equation, then a_n = (x_i)^n is obviously a solution of the homogeneous recurrence relation, and therefore by the previous theorem any linear combination of such solutions is also a solution. For example, the second-order homogeneous recurrence relation a_n = 5a_{n-1} - 6a_{n-2} has the characteristic equation x^2 - 5x + 6 = 0, the roots of which are x_1 = 2 and x_2 = 3. Thus a_n = A(2)^n + B(3)^n, for any choice of the arbitrary constants A and B, is also a solution of the recurrence relation. Conversely, if all the r roots x_i (i = 1, 2, . . . , r) are real and distinct, it can be proved that every general solution is a linear combination of these solutions (x_i)^n.
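For the example a_n = 5a_{n-1} - 6a_{n-2}, we can confirm numerically that any combination A·2^n + B·3^n satisfies the recurrence (a sketch; the constants are chosen arbitrarily):

```python
# Roots of the characteristic equation x^2 - 5x + 6 = (x - 2)(x - 3) = 0
A, B = 7, -4  # arbitrary constants in the general solution

def a(n):
    """General solution a_n = A*2^n + B*3^n of a_n = 5a_{n-1} - 6a_{n-2}."""
    return A * 2**n + B * 3**n

for n in range(2, 12):
    assert a(n) == 5 * a(n - 1) - 6 * a(n - 2)
print("A*2^n + B*3^n satisfies the recurrence for all tested n")
```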
The r arbitrary constants that appear in the general solution can be evaluated in some cases, giving a complete solution (not necessarily unique) to the problem if r initial conditions are known. Existence and uniqueness of the solution are assured if r consecutive initial conditions are known. We state these results as a theorem, the proof of which is omitted here.

**THEOREM 3.2.2** If the r roots x_i (i = 1, 2, . . . , r) of the characteristic equation of an rth-order linear homogeneous recurrence relation are real and distinct, every general solution of the recurrence relation is a linear combination of the solutions (x_i)^n. Moreover, if r consecutive initial values a_k, a_{k+1}, . . . , a_{k+r-1} of the recurrence relation are known, a solution can be obtained by evaluating the r arbitrary constants using these r consecutive initial values, and this solution is unique.

**Example 3.2.2** Solve a_n - 9a_{n-2} = 0, where: (a) a_0 = 6, a_1 = 12; (b) a_3 = 324, a_4 = 486; (c) a_0 = 6, a_2 = 54; (d) a_0 = 6, a_2 = 10.

**Solution**. The roots of the characteristic equation x^2 - 9 = 0 are 3 and -3. Thus any general solution of the given recurrence relation is of the form a_n = A(3)^n + B(-3)^n, where A and B are arbitrary constants to be evaluated using the two given initial conditions.

(a) If a_0 = 6, then A + B = 6. If a_1 = 12, then 3A - 3B = 12. Solving these two simultaneous equations in A and B, we get A = 5 and B = 1, giving the unique (because the initial conditions are consecutive) solution to the problem a_n = 5(3)^n + (-3)^n. Putting n = 2, 3, 4, . . . , we can compute a_2 = 54, a_3 = 108, and so on.

(b) If a_3 = 324, then 27A - 27B = 324. If a_4 = 486, then 81A + 81B = 486. Solving these two equations, we get A = 9 and B = -3, once again giving a unique solution.

(c) This time the two given initial conditions are not consecutive.
a_0 = 6 implies that A + B = 6, and a_2 = 54 implies that 9A + 9B = 54, giving the single equation A + B = 6 to determine the two constants. For example, A = 2, B = 4 gives a_n = 2(3)^n + 4(-3)^n, which defines a_0 = 6, a_1 = -6, a_2 = 54, a_3 = -54, and so on. But A = 1, B = 5 gives a_n = (3)^n + 5(-3)^n, which defines a_0 = 6, a_1 = -12, a_2 = 54, a_3 = -108. Thus the solution is not unique.

(d) The nonconsecutive initial conditions imply that A + B = 6 and 9A + 9B = 10, showing that there is no solution.

**Example 3.2.3** Solve the Fibonacci recurrence relation a_n = a_{n-1} + a_{n-2} with the consecutive initial conditions a_0 = 1 and a_1 = 1.

**Solution**. The characteristic equation is x^2 - x - 1 = 0, the two roots of which are x_1 = (1 + √5)/2 and x_2 = (1 - √5)/2. Thus every solution is of the form a_n = A(x_1)^n + B(x_2)^n, where A and B are arbitrary constants. Using the initial conditions we get A = x_1/√5 and B = -x_2/√5. On substituting these values of the arbitrary constants in the solution, we get the unique solution a_n = (1/√5)[t^{n+1} - (1 - t)^{n+1}], where t is the irrational number (1 + √5)/2, known as the _golden ratio_.

**Example 3.2.4** Find the number of subsets of a set that has n elements.

**Solution**. Let the set be X = {1, 2, 3, . . . , n} and suppose that the total number of subsets of X is a_n. Now every subset of X belongs to one of the following two classes A and B: (a) n is not an element of any of the subsets in class A, and (b) n is an element of every subset in class B. By our assumption, class A has a_{n-1} subsets of X. The only way we can get a subset of X that contains n is by adjoining n to a subset from class A. So the number of subsets in class B is exactly equal to the number of subsets in class A. In other words, we have the recurrence relation a_n = 2a_{n-1}, with the initial condition a_0 = 1. Recall that the empty set is a subset of every set.
The characteristic equation is x - 2 = 0, giving the unique solution a_n = 2^n.

**Example 3.2.5** Find the unique solution of the recurrence relation a_n = 3a_{n-1} + 4a_{n-2} - 12a_{n-3}, where a_0 = 2, a_1 = 5, and a_2 = 13.

**Solution**. The roots of the characteristic equation x^3 - 3x^2 - 4x + 12 = 0 are 2, -2, and 3. So the general solution is a_n = p(2)^n + q(-2)^n + r(3)^n, and the initial conditions imply that p + q + r = 2, 2p - 2q + 3r = 5, and 4p + 4q + 9r = 13. On solving this linear system, we get p = 1, q = 0, and r = 1. Thus the unique solution is a_n = 2^n + 3^n.

We next examine the case when the characteristic equation of a recurrence relation has **repeated (multiple) roots**. Consider, for example, the recurrence relation a_n = 4a_{n-1} - 4a_{n-2}, which has the characteristic equation (x - 2)^2 = 0 with roots x_1 = 2 and x_2 = 2. In other words, 2 is a repeated root of multiplicity 2. Of course, A(2)^n is a solution of the recurrence relation, but every solution need not be of this form. It can easily be verified that B·n·(2)^n is also a solution, so by the principle of superposition A(2)^n + B·n·(2)^n is also a solution for any A and B. It turns out that every general solution of the recurrence relation is of this form. The fact that A(2)^n by itself cannot be a general solution is also obvious, because if we consider two consecutive initial conditions, say a_0 = 1 and a_1 = 4, we see that A = 1 and also A = 2, a contradiction. When the characteristic equation has multiple roots, the following result is a generalization of the previous theorem. The proof is omitted here; for details, see Roberts (1984).

**THEOREM 3.2.3** (a) Suppose that (x - t)^s is a factor of the characteristic equation, so that t is a root of multiplicity s. Then u = (t)^n (A_1 + A_2 n + A_3 n^2 + · · · + A_s n^{s-1}) is a solution of the recurrence relation, where A_j (j = 1, 2, . . .
, s) are arbitrary constants. This solution u is called a **basic solution** of the relation with respect to the root t. (b) Suppose that the roots of the characteristic equation of the recurrence relation are t_k, where the multiplicity of t_k is s_k (k = 1, 2, . . . , q), and suppose that u_k is a basic solution with respect to the root t_k. Then every solution of the recurrence relation is the sum of these q basic solutions.

**Example 3.2.6** Find the general solution of the recurrence relation whose characteristic equation has the roots 2, 2, 2, -3, 4, and 4.

**Solution**. The basic solution for the repeated root 2 is u_1 = 2^n (A_1 + A_2 n + A_3 n^2). The basic solution for the root -3 is u_2 = A_4(-3)^n. The basic solution for the repeated root 4 is u_3 = 4^n (A_5 + A_6 n). Thus the general solution is a_n = u_1 + u_2 + u_3.

**_3.3 INHOMOGENEOUS RECURRENCE RELATIONS_**

We now proceed to analyze linear recurrence relations with constant coefficients of the type a_n = h_n + f(n), where h_n = c_1 a_{n-1} + c_2 a_{n-2} + · · · + c_r a_{n-r} and f(n) is a function of n. Here the relation a_n = h_n is called the **homogeneous part** of the given inhomogeneous relation. If a_n = u_n is a solution of the homogeneous part and if a_n = v_n is any solution of the given inhomogeneous relation, then by the principle of superposition we know that a_n = u_n + v_n is also a solution of the same inhomogeneous relation. If u_n has r arbitrary constants, then u_n + v_n also has r arbitrary constants. If r consecutive initial conditions of the inhomogeneous relation are known, these initial conditions can be used to define a linear system of r equations in r variables, giving a unique solution.
In other words, _if u_n is a general solution of the homogeneous part of an inhomogeneous recurrence relation, and if v_n is a particular solution of the inhomogeneous recurrence relation, then u_n + v_n is a general solution of the same inhomogeneous relation_.

**Example 3.3.1** Find the general solution of a_n = 5a_{n-1} - 6a_{n-2} + 6(4)^n.

**Solution**. The solution of the homogeneous part is u_n = A(2)^n + B(3)^n, where A and B are arbitrary constants. It can be verified that v_n = (48)(4)^n is a particular solution of the given inhomogeneous relation. Therefore, the general solution of the relation is a_n = u_n + v_n.

Unlike the homogeneous case, there is no general method to obtain a particular solution for an arbitrary inhomogeneous problem. However, there are techniques available for certain special cases. We consider two such special cases: (1) f(n) = n^k, where k is a nonnegative integer, and (2) f(n) = (q)^n, where q is a rational number not equal to 1. The principle of superposition is called for if there is a linear combination of functions of these two types. These techniques are as follows:

1. If f(n) = c(q)^n (where c is a known constant) and if q is not a root of the characteristic equation, then the choice for the particular solution is A(q)^n, where A is a constant that is to be evaluated by substituting a_n = A(q)^n in the inhomogeneous equation. If q is a root of the characteristic equation with multiplicity k, the choice for the particular solution is An^k(q)^n.

2. If f(n) = c·n^k and if 1 is not a root of the characteristic equation, a polynomial in n of degree k of the form A_0 + A_1 n + A_2 n^2 + · · · + A_k n^k is the choice for the particular solution. If 1 is a root of multiplicity t, then A_0 n^t + A_1 n^{t+1} + · · · + A_k n^{t+k} is the choice.
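The constant 48 in Example 3.3.1 comes from technique 1: substituting v_n = A·4^n into the relation and dividing by 4^{n-2} gives 16A = 20A - 6A + 96, so A = 48. A short sketch verifying both the particular solution and the full general solution:

```python
A_coef = 48  # from substituting v_n = A*4^n: 16A = 20A - 6A + 96

def v(n):
    """Particular solution v_n = 48*4^n of a_n = 5a_{n-1} - 6a_{n-2} + 6*4^n."""
    return A_coef * 4**n

def general(n, A=3, B=-2):
    """General solution u_n + v_n, with arbitrary homogeneous constants A, B."""
    return A * 2**n + B * 3**n + v(n)

for n in range(2, 10):
    assert v(n) == 5 * v(n - 1) - 6 * v(n - 2) + 6 * 4**n
    assert general(n) == 5 * general(n - 1) - 6 * general(n - 2) + 6 * 4**n
print("v_n = 48*4^n is a particular solution")
```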
**Example 3.3.2** If the characteristic equation of a certain inhomogeneous recurrence relation is (x - 1)^2 (x - 2)(x - 3)^2 = 0, find the choice for the particular solution when (a) f(n) = 4n^3 + 5^n, (b) f(n) = 4^n, (c) f(n) = 3^n.

**Solution**. The roots of the characteristic equation are 1 (with multiplicity 2), 2 (with multiplicity 1), and 3 (with multiplicity 2). Let u_n be the general solution of the homogeneous part and v_n be the choice of a particular solution. Then (a) since 1 is a root of multiplicity 2 and 5 is not a root, v_n = n^2(A_0 + A_1 n + A_2 n^2 + A_3 n^3) + B(5)^n; (b) since 4 is not a root, v_n = A(4)^n; (c) since 3 is a root of multiplicity 2, v_n = An^2(3)^n.

**Example 3.3.3** Discuss the solution of a_n = ka_{n-1} + f(n), where k is a constant.

**Solution**. As before, a_n = u_n + v_n, where u_n is the solution of the homogeneous part and v_n is a particular solution.

_Case_ (1): k = 1. Then u_n = c, where c is an arbitrary constant, so a_n = c + v_n, where the nature of v_n depends on f(n) and the fact that u_n is a constant. However, writing a_1 = a_0 + f(1), a_2 = a_1 + f(2), . . . , a_n = a_{n-1} + f(n) and adding these n equations, we have a_n = a_0 + f(1) + f(2) + · · · + f(n). Thus f(1) + f(2) + · · · + f(n) = a_n - a_0 = c + v_n - a_0, where the value of c can be evaluated using the initial condition a_0.

_Case_ (2): k not equal to 1. Then u_n = c·k^n and, as before, v_n depends on f(n) and u_n.

**Example 3.3.4** Evaluate the sum of the squares of the first n positive integers.

**Solution**. Write f(n) = n^2. We have to compute f(1) + f(2) + · · · + f(n). Consider the recurrence relation a_n = a_{n-1} + n^2 with a_0 = 0. The homogeneous part gives the solution u_n = c. The choice for the particular solution is v_n = An + Bn^2 + Cn^3. Thus a_n = c + An + Bn^2 + Cn^3. The initial condition implies that c = 0.
Substituting for a_n in the recurrence relation, we get

An + Bn^2 + Cn^3 = A(n − 1) + B(n − 1)^2 + C(n − 1)^3 + n^2

Equating the coefficients of n, n^2, and the constant term on either side, we get A = 1/6, B = 1/2, and C = 1/3. Thus

a_n = n/6 + n^2/2 + n^3/3 = n(n + 1)(2n + 1)/6

which is of course equal to the sum of the squares of the first n natural numbers.

**Example 3.3.5** Solve a_n = a_{n−1} + 12n^2, where a_0 = 5.

**Solution.** We see a_n = a_0 + 12·n(n + 1)(2n + 1)/6 from Example 3.3.4. Thus a_n = 5 + 2n(n + 1)(2n + 1).

**_3.4 RECURRENCE RELATIONS AND GENERATING FUNCTIONS_**

In many instances the nth term a_n in a recurrence relation can be obtained as the coefficient of x^n in the power series expansion of a function g(x), which may be considered as the generating function for the given recurrence relation. Quite often the functional equation for g(x) can be solved algebraically, and then a_n is obtained by expressing g(x) as a power series. In other words, the recurrence relation is solved by means of an associated generating function.

**Example 3.4.1** Solve the recurrence relation a_n = 2a_{n−1} by using the associated generating function.

**Solution.** Let g(x) = a_0 + a_1 x + ··· + a_n x^n + ··· be the associated generating function. Multiplying both sides of the recurrence relation by x^n, we have a_n x^n = 2a_{n−1} x^n, where n = 1, 2, 3, . . . . On adding these equations we see that g(x) − a_0 = 2x·g(x), which is a functional equation for g(x), and this equation can be solved for g(x). We get g(x) = a_0/(1 − 2x). Thus a_n = a_0·2^n, which is the coefficient of x^n in the power series expansion of g(x).

**Example 3.4.2** Solve a_n = 2a_{n−1} − (n/3), where a_0 = 1.

**Solution.** Since a_0 = 1, the associated generating function is g(x) = 1 + a_1 x + a_2 x^2 + ···. Putting n = 1, 2, 3, . . .
in a_n x^n = 2a_{n−1} x^n − (n/3)x^n and adding these equations, we have

g(x) − 1 = 2x·g(x) − (1/3)·x/(1 − x)^2

[Recall that if u(x) is the ordinary generating function for p_r, then u(x)/(1 − x) is the ordinary generating function for q_r = p_1 + p_2 + ··· + p_r. In the present case 1/(1 − x) is the generating function for p_r = 1. So 1/(1 − x)^2 is the generating function for q_r = 1 + 1 + ··· + 1 = r.] Thus from the functional equation for g(x) we get

g(x) = 1/(1 − 2x) − (1/3)·x/[(1 − x)^2 (1 − 2x)]

Now the function on the right-hand side can be expanded by the method of partial fractions. We then have

g(x) = (1/3)/(1 − 2x) + (1/3)/(1 − x) + (1/3)/(1 − x)^2

Thus a_n = (1/3)·2^n + 1/3 + (1/3)(n + 1) = (2^n + n + 2)/3.

**Example 3.4.3** Find the generating function for the recurrence relation a_n = c_1 a_{n−1} + c_2 a_{n−2} with a_0 = A_0 and a_1 = A_1.

**Solution.** The generating function is g(x) = a_0 + a_1 x + a_2 x^2 + ···. Putting n = 2, 3, 4, . . . in a_n x^n = c_1 a_{n−1} x^n + c_2 a_{n−2} x^n and adding, we have

g(x) − A_0 − A_1 x = (c_1 x)[g(x) − A_0] + (c_2 x^2)g(x)

So g(x) = u(x)/v(x), where u(x) = (A_0 + A_1 x) − A_0(c_1 x) and v(x) = 1 − c_1 x − c_2 x^2. Notice the relation between the coefficients of the denominator v(x) and the coefficients of the characteristic function p(x) = x^2 − c_1 x − c_2. We can generalize this observation for a linear homogeneous recurrence relation of order r with constant coefficients as follows.

**THEOREM 3.4.1** If p(x) and g(x) are the characteristic function and the generating function of the linear homogeneous recurrence relation with constant coefficients a_n = c_1 a_{n−1} + c_2 a_{n−2} + ··· + c_r a_{n−r} with consecutive initial conditions a_i = A_i (i = 0, 1, 2, . . .
, r − 1), then p(x) = x^r − c_1 x^{r−1} − c_2 x^{r−2} − ··· − c_r and g(x) = u(x)/v(x), in which the denominator is 1 − c_1 x − c_2 x^2 − ··· − c_r x^r and the numerator is the polynomial of degree at most r − 1 whose coefficient of x^j is A_j − c_1 A_{j−1} − c_2 A_{j−2} − ··· − c_j A_0 (j = 0, 1, . . . , r − 1).

**Example 3.4.4** If p(x) = x^3 − 9x^2 + 26x − 24, find g(x).

**Solution.** The generating function g(x) is u(x)/v(x), where

v(x) = 1 − 9x + 26x^2 − 24x^3
u(x) = (A_0 + A_1 x + A_2 x^2) − (9x)(A_0 + A_1 x) + (26x^2)(A_0)

**_3.5 ANALYSIS OF ALGORITHMS_**

We first make a distinction between "a problem" and "an instance of the problem." For example, finding the product of two integers is a problem, whereas finding the product of two given integers is an instance of the problem. If the problem is finding the product of two square matrices, then finding the product of two given n × n matrices is an instance of the problem. In network optimization, the shortest-distance problem is the problem of finding the shortest distance from a vertex to the other vertices in a network. An instance of this problem will then be to find the shortest distance from a vertex to the remaining vertices in a given network with n vertices and m edges. In an informal and intuitive sense, an algorithm for a problem, as we all know, is a step-by-step procedure involving a finite sequence of instructions that can be used to solve every instance of the problem. From an informal point of view, the **computational complexity of an algorithm for a problem** is the cost, measured in running time or storage or whatever units are relevant, of using the algorithm to solve the problem. This cost, denoted by f(n), depends on the input size n of an instance of the problem. The function f is the complexity function of the algorithm.
In the analysis of algorithms, this cost requirement is usually expressed in terms of the number of elementary computational steps, such as arithmetic operations, comparisons, and so on, needed for execution of the algorithm on a (hypothetical) computer, on the assumption that all these kinds of operations require unit time. It is quite possible that the cost requirements for two different instances with the same input size could differ significantly. So we consider all instances of a given input size n and then choose the cost requirement of that instance for which the cost is a maximum. In other words, we are examining the worst-case behavior of the algorithm. If A is an algorithm to solve a problem, we denote by f(A, n) the **worst-case complexity** of the algorithm, where n is the input size. For example, consider the problem of choosing the smallest number from a set of n numbers. If we have no information about these numbers and if we choose the smallest number after making all possible comparisons, the number of comparisons is (n − 1), and therefore the worst-case complexity for this procedure of choosing is (n − 1). As another example, consider the ordinary matrix multiplication algorithm, with which we are all familiar. Let L and M be any two n × n matrices such that no element in either matrix is zero. We thus have a worst-case scenario here. In finding the product matrix we shall not worry about additions. Instead, we count the total number of multiplications involved in implementing the algorithm. Notice that each element in the product matrix is the product of a row r of L and a column c of M. Both r and c have n nonzero numbers. Thus the product involves n multiplications. Since there are n^2 elements in the product matrix, the total number of multiplications is f(A, n) = n^3, which is the worst-case complexity for matrix multiplication, where A is the usual matrix multiplication algorithm.
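The n^3 count can be made concrete by instrumenting the usual algorithm (a sketch of ours; the function name is an assumption, not the text's):

```python
# Count the multiplications performed by the ordinary matrix multiplication
# algorithm on two n x n matrices; the tally is exactly n**3.

def multiply_count(L, M):
    """Return (product, number_of_multiplications) for n x n matrices."""
    n = len(L)
    count = 0
    C = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                C[i][j] += L[i][k] * M[k][j]   # one multiplication per term
                count += 1
    return C, count

for n in (2, 3, 5):
    I = [[1 if i == j else 0 for j in range(n)] for i in range(n)]
    _, count = multiply_count(I, I)
    assert count == n**3
print("multiplication count matches n^3")
```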
Once an algorithm is specified for a problem, we shall write f(n) instead of f(A, n) to denote the worst-case complexity of the algorithm. On the other hand, if we are comparing two algorithms A and A′ for the same problem, we write f(A, n) and f(A′, n) to denote the respective complexities, and in that case we say that A is more **efficient** than A′ if f(A, n) does not exceed f(A′, n) for all n with n ≥ n_0 for some fixed positive integer n_0. If f(A, n) = 5n^2 and f(A′, n) = n^3, then A is more efficient than A′ since f(A, n) ≤ f(A′, n) for all n ≥ 5. In many cases it is possible to divide a problem into several smaller nonoverlapping subproblems of approximately equal size, solve these subproblems, and merge their solutions to obtain the solution of the original problem. This strategy of solving a problem, known as the **divide-and-conquer** technique, is often more efficient than the usual straightforward method. If f(A, n) is the computational complexity of such a divide-and-conquer algorithm (where n is the input size of an instance of the problem), we have a recurrence relation expressing f(A, n) in terms of f(A, m), where m is the input size of an instance of the subproblem. On solving this recurrence relation using the initial conditions, we obtain the complexity function of the algorithm. We now turn our attention to an analysis of some of these recurrence relations that result from such divide-and-conquer algorithms.

**Example 3.5.1** If a and b are two n-digit numbers, to obtain the product ab we must perform at most n^2 single-digit multiplications. In other words, if we use the usual multiplication algorithm, its computational complexity is n^2 if we ignore the number of additions and take into account only the total number of multiplications.
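A rough sketch (our own code; the helper name and digit-list representation are assumptions) that tallies the single-digit multiplications performed by the pencil-and-paper algorithm and confirms the n^2 count of Example 3.5.1:

```python
# Count single-digit multiplications in grade-school multiplication.

def school_multiply(a_digits, b_digits):
    """Multiply two equal-length digit lists (most significant digit first).
    Returns (product, number_of_single_digit_multiplications)."""
    n = len(a_digits)
    count = 0
    product = 0
    for shift_b, db in enumerate(reversed(b_digits)):
        row = 0
        for shift_a, da in enumerate(reversed(a_digits)):
            row += db * da * 10**shift_a   # one single-digit multiplication db*da
            count += 1
        product += row * 10**shift_b
    return product, count

prod, count = school_multiply([1, 2, 3, 4], [5, 6, 7, 8])
assert prod == 1234 * 5678
assert count == 4**2                       # n = 4 digits -> n^2 = 16
print(prod, count)
```

Only the digit products db·da are counted; the carries and shifts are additions, which the example ignores.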
We can use a more efficient divide-and-conquer algorithm to multiply the two numbers as follows. Assume that n is even. Then both a and b can be subdivided into two parts:

a = a_1(10)^{n/2} + a_2
b = b_1(10)^{n/2} + b_2

where the parts a_1, a_2, b_1, and b_2 are all (n/2)-digit numbers. Then

ab = a_1 b_1 (10)^n + (a_1 b_2 + a_2 b_1)(10)^{n/2} + a_2 b_2

The expression on the right involves four (n/2)-digit multiplications. We can reduce this to three multiplications if we make use of the formula

(a_1 b_2 + a_2 b_1) = (a_1 + a_2)(b_1 + b_2) − a_1 b_1 − a_2 b_2

Thus we have the recurrence relation f(n) = 3f(n/2) with f(1) = 1. Of course, there is a possibility that (a_1 + a_2) or (b_1 + b_2) may be (n/2 + 1)-digit numbers, but this does not affect complexity considerations. If we write n = 2^k in the recurrence relation, we get f(n) = 3^k f(1) = 3^k = 3^{log_2 n} = n^{log_2 3}. Thus the complexity of this recursive algorithm is n^{log_2 3}, which is less than n^2 for all n > 1. More generally, let us consider a divide-and-conquer algorithm that splits a problem of size n into several subproblems each of size n/b. We shall assume that n = b^k and that the number of subproblems is a, where a > 1. So if the complexity of the algorithm for the problem is f(n), then the complexity of the same algorithm for the subproblem is f(n/b). We thus have a recurrence relation of the form

f(n) = a·f(n/b) + h(n), f(1) = d, a > 1, n = b^k

where h(n) represents the cost of dividing the problem into subproblems plus the cost of merging the solutions of these subproblems to obtain the solution of the original problem. Two special cases are of particular importance: (1) h(n) is a constant c, and (2) h(n) = cn, where c is a constant.
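The three-multiplication scheme above can be sketched recursively (our own code; following the text, the base-case multiplication is counted as one even when carries make an operand two digits):

```python
# Divide-and-conquer multiplication with three half-size products;
# the multiplication count satisfies f(n) = 3*f(n/2), f(1) = 1.

def multiply3(a, b, n):
    """Multiply a and b, each with at most n decimal digits (n a power of 2).
    Returns (product, count_of_base_case_multiplications)."""
    if n == 1:
        return a * b, 1
    half = 10 ** (n // 2)
    a1, a2 = divmod(a, half)
    b1, b2 = divmod(b, half)
    p1, c1 = multiply3(a1, b1, n // 2)             # a1*b1
    p2, c2 = multiply3(a2, b2, n // 2)             # a2*b2
    p3, c3 = multiply3(a1 + a2, b1 + b2, n // 2)   # (a1+a2)(b1+b2)
    middle = p3 - p1 - p2                          # a1*b2 + a2*b1
    return p1 * half * half + middle * half + p2, c1 + c2 + c3

prod, count = multiply3(1234, 5678, 4)
assert prod == 1234 * 5678
assert count == 3**2        # f(4) = 3^(log2 4) = 9, versus 4^2 = 16
print(prod, count)
```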
**Example 3.5.2** Solve f(n) = a·f(n/b) + c, f(1) = d, where a and b are integers that are at least 2, n = b^k, and c is a constant.

**Solution.** By iteration,

f(n) = a^k·d + c(a^{k−1} + a^{k−2} + ··· + a + 1) = a^k·d + c(a^k − 1)/(a − 1) = A·a^k + B

where A = (ad − d + c)/(a − 1) and B = (−c)/(a − 1). Since n = b^k, we have k = log_b n and a^k = n^r, where r = log_b a. Thus f(n) = A·n^r + B.

**Example 3.5.3** Solve f(n) = a·f(n/b) + cn with the same conditions as in Example 3.5.2.

**Solution.** By iteration we have

f(n) = a^k·d + cn[1 + (a/b) + (a/b)^2 + ··· + (a/b)^{k−1}]

*Case* (1): a = b. Then f(n) = a^k·d + cnk = b^k·d + cnk = nd + cn·log_b n. Thus f(n) = c·n·log_b n + d·n.

*Case* (2): a < b or a > b. After some simplification we get f(n) = A·n^r + Bn, where A = (bd − ad − bc)/(b − a), B = bc/(b − a), and r = log_b a. If a < b, then r < 1, so Bn is the dominating term in f(n). If a > b, then r > 1, so A·n^r is the dominating term in f(n).

**_Matrix Multiplication_**

We next consider the efficiency of matrix multiplication algorithms. As we noticed earlier, if A and B are two 2 × 2 matrices, we usually perform at least eight (2^3 = 8) multiplications to obtain the product matrix C = AB. By a clever algebraic manipulation, it is possible to reduce the number of multiplications from eight to seven. Let a(i, j) denote the element in A at the ith row and jth column; similarly for B and C.
Now define the following seven products:

x_1 = [a(1, 1) + a(2, 2)] · [b(1, 1) + b(2, 2)]
x_2 = [a(2, 1) + a(2, 2)] · b(1, 1)
x_3 = a(1, 1) · [b(1, 2) − b(2, 2)]
x_4 = a(2, 2) · [b(2, 1) − b(1, 1)]
x_5 = [a(1, 1) + a(1, 2)] · b(2, 2)
x_6 = [a(2, 1) − a(1, 1)] · [b(1, 1) + b(1, 2)]
x_7 = [a(1, 2) − a(2, 2)] · [b(2, 1) + b(2, 2)]

Then it is easy to verify that every element c(i, j) of the product matrix can be expressed as a sum or difference of some of these seven numbers as follows:

c(1, 1) = x_1 + x_4 − x_5 + x_7
c(1, 2) = x_3 + x_5
c(2, 1) = x_2 + x_4
c(2, 2) = x_1 + x_3 − x_2 + x_6

Thus we obtain the product matrix by performing at most seven multiplications instead of eight. We utilize this information to define a divide-and-conquer algorithm for matrix multiplication and solve the associated recurrence relation to obtain the computational complexity of this algorithm in the next example.

**Example 3.5.4** Let A and B be two n × n matrices, where n = 2^k. Obtain a divide-and-conquer algorithm to compute the product matrix AB and find the complexity of this algorithm.

**Solution.** We partition each matrix into four submatrices, where each submatrix is an (n/2) × (n/2) matrix. See Figure 3.5.1. The four submatrices of A are: (1) A(1, 1), obtained by considering the first n/2 rows and the first n/2 columns of A; (2) A(1, 2), obtained by considering the first n/2 rows and the last n/2 columns; (3) A(2, 1), obtained by considering the last n/2 rows and the first n/2 columns; and (4) A(2, 2), obtained by considering the last n/2 rows and the last n/2 columns of A; similarly for B and C.
We now define the following seven submatrices:

X_1 = [A(1, 1) + A(2, 2)] · [B(1, 1) + B(2, 2)]
X_2 = [A(2, 1) + A(2, 2)] · B(1, 1)
X_3 = A(1, 1) · [B(1, 2) − B(2, 2)]
X_4 = A(2, 2) · [B(2, 1) − B(1, 1)]
X_5 = [A(1, 1) + A(1, 2)] · B(2, 2)
X_6 = [A(2, 1) − A(1, 1)] · [B(1, 1) + B(1, 2)]
X_7 = [A(1, 2) − A(2, 2)] · [B(2, 1) + B(2, 2)]

Notice that each of these (n/2) × (n/2) submatrices is a product of two (n/2) × (n/2) submatrices. It can easily be verified that

**FIGURE 3.5.1**

C(1, 1) = X_1 + X_4 − X_5 + X_7
C(1, 2) = X_3 + X_5
C(2, 1) = X_2 + X_4
C(2, 2) = X_1 + X_3 − X_2 + X_6

Thus if f(n) is the number of multiplications needed to multiply A and B using this divide-and-conquer strategy, we have the recurrence relation f(n) = 7f(n/2) with the initial condition f(1) = 1. On solving this we get f(n) = 7^k·f(1) = 7^k, where k = log_2 n. Equivalently, f(n) = n^r, where r = log_2 7. Thus the complexity of this algorithm is less than n^3, so this method is more efficient than the usual matrix multiplication method!

**_Evaluation of a Polynomial at a Given Point_**

We conclude our algorithm analysis with a discussion of the number of multiplications and additions involved in evaluating a polynomial p(x) for a given value of x. If the degree of p(x) is n, it is easy to see that a straightforward nonrecursive evaluation of p(x) at x = t will involve (2n − 1) multiplications and n additions. For example, to evaluate p(x) = 5x^3 + 8x^2 + 4x + 7 at x = 27, we perform the five multiplications (1) a = 27 · 27, (2) b = a · 27, (3) c = 4 · 27, (4) 8 · a, and (5) 5 · b, and then three additions.
A more efficient (recursive) method, known as Horner's method (or Newton's method), requires only n multiplications and n additions. If we write the polynomial in a telescoping form as p(x) = (((5 · x + 8) · x) + 4) · x + 7, the number of multiplications is only 3.

**Example 3.5.5** (**Horner's Method**) (a) If f(n) is the number of multiplications needed to evaluate a polynomial of degree n at a point, obtain a recursive relation for f(n). (b) If g(n) is the total number of multiplications and additions, find a recursive relation involving g(n).

**Solution.** If p(x) is a polynomial of degree n, then p(x) = x·q(x) + a, where a is a constant and q(x) is a polynomial of degree (n − 1). (a) An evaluation of q(x) at a point will involve f(n − 1) multiplications. Thus f(n) = f(n − 1) + 1, with the initial condition f(0) = 0. Thus f(n) = n. (b) Obviously, g(n) = g(n − 1) + 2 with g(0) = 0. The solution is g(n) = 2n.

Finally, we discuss another recursive algorithm for polynomial evaluation, which is more efficient than Horner's method. [This method is explained in detail in Baase (1978).] First, a definition and some properties of polynomials. A polynomial p(x) is called a **monic polynomial** if the coefficient of the leading term is 1. We also assume that the degree n of p(x) is equal to 2^k − 1. The number of multiplications needed for polynomial evaluation of p(x) is (n − 1) = 2^k − 2 by Horner's method. Can we do better regarding the number of multiplications?
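Before turning to that question, Horner's rule from Example 3.5.5 can be written and instrumented directly (a sketch of ours, with the coefficient-list convention as an assumption):

```python
# Horner's method with counters confirming f(n) = n multiplications and
# g(n) = 2n total operations for a degree-n polynomial.

def horner(coeffs, x):
    """Evaluate a polynomial whose coefficients are listed from the highest
    term down. Returns (value, multiplications, additions)."""
    value = coeffs[0]
    mults = adds = 0
    for c in coeffs[1:]:
        value = value * x + c    # one multiplication and one addition per step
        mults += 1
        adds += 1
    return value, mults, adds

# p(x) = 5x^3 + 8x^2 + 4x + 7 at x = 27, as in the text.
value, mults, adds = horner([5, 8, 4, 7], 27)
assert value == 5 * 27**3 + 8 * 27**2 + 4 * 27 + 7
assert mults == 3 and adds == 3     # n = 3: n multiplications, n additions
print(value)
```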
Now it can be shown that if p(x) = x^n + a_{n−1} x^{n−1} + ··· + a_1 x + a_0 is a monic polynomial of degree n = 2^k − 1, we can write p(x) as

p(x) = (x^j + b) · q(x) + r(x)

where (1) both q(x) and r(x) are monic polynomials of degree (n − 1)/2 and j = (n + 1)/2, (2) b = a_{j−1} − 1, and (3) the coefficients of q(x) are the first (n + 1)/2 coefficients of p(x), starting from the highest term. If a power of x is missing, the corresponding coefficient is taken as zero. For example, let

p(x) = x^7 + 2x^6 + 2x^5 + 3x^4 + 9x^3 + 9x^2 + 18x + 9

Here 2^k − 1 = 7, so k = 3 and j = 4. Also, b = a_3 − 1 = 9 − 1 = 8. Thus p(x) = (x^4 + 8)q(x) + r(x), where q(x) is a polynomial of degree 3 whose coefficients are 1, 2, 2, 3, starting from the highest term. In other words, q(x) = x^3 + 2x^2 + 2x + 3 and

r(x) = p(x) − (x^4 + 8)q(x) = x^3 − 7x^2 + 2x − 15

Proceeding similarly, we write q(x) = (x^2 + 1)(x + 2) + (x + 1) and r(x) = (x^2 + 1)(x − 7) + (x − 8). Thus, finally, we have

p(x) = (x^4 + 8) · [(x^2 + 1) · (x + 2) + (x + 1)] + (x^2 + 1) · (x − 7) + (x − 8)

which involves five multiplications in all (three as shown with parentheses and two for computing x^2 and x^4). On the other hand, if we use Horner's method, the number of multiplications will be six instead of five. These observations are generalized as follows.

**THEOREM 3.5.1** Let p(x) be a monic polynomial of degree n, where n = 2^k − 1. Then the number of multiplications needed to evaluate p(x) at a point is at most (n − 3)/2 + log(n + 1), and the number of additions/subtractions needed is at most (3n − 1)/2.

**_Proof_:** Let n = 2^k − 1.
Also let f(k) be the number of multiplications needed to evaluate p(x) (without taking into account the number of multiplications required to compute the various powers of x at the given point), and let g(k) be the number of additions needed. Since p(x) = (x^j + b)q(x) + r(x), we have the recurrence relation f(k) = 2f(k − 1) + 1 with the initial condition f(1) = 0, and the relation g(k) = 2g(k − 1) + 2 with g(1) = 1. On solving these recurrence relations, we have f(k) = 2^{k−1} − 1 = (n − 1)/2 and g(k) = 3·2^{k−1} − 2 = (3n − 1)/2. Next we consider the number of multiplications involved in computing the various powers of x. We have to compute x^2, then x^4, then x^8, and so on, until we reach the jth power of x, where j = (n + 1)/2 = 2^{k−1}, and this process will involve (k − 1) multiplications. But k = log(n + 1). Thus this algorithm will need at most (n − 1)/2 + log(n + 1) − 1 = (n/2) + log(n + 1) − (3/2) multiplications and (3n − 1)/2 additions, whereas Horner's method will need at most (n − 1) multiplications and n additions.

**_3.6 NOTES AND REFERENCES_**

The pioneering work in the study of recurrence relations was done by Leonardo of Pisa (more popularly known as Fibonacci) in the thirteenth century and subsequently by Jacob Bernoulli (1654–1705), his nephew Daniel Bernoulli (1700–1782), James Stirling (1692–1770), and Leonhard Euler (1707–1783). Some useful references are the relevant chapters in the books on combinatorics and discrete mathematics by Cohen (1978), Grimaldi (1985), Krishnamurthy (1986), Liu (1968), Liu (1985), Roberts (1984), Townsend (1987), and Tucker (1984). The techniques for solving recurrence relations were first used in the development of the theory of difference equations. See Levy and Lessman (1961) for a survey of these methods. An excellent reference for the applications of difference equations is Goldberg (1958).
For additional reading on the analysis of algorithms, see the relevant chapters in the books by Baase (1978), Knuth (1973), Roberts (1984), Stanat and McAllister (1977), and Wilf (1986).

**_3.7 EXERCISES_**

**3.1.** Suppose that there are n lines in a plane such that no two lines are parallel and no three lines are concurrent, dividing the plane into f(n) distinct regions. Find a recurrence relation for f(n) and solve for f(n). Find the value of f(9).

**3.2.** Find a recurrence relation for f(n), the number of n-letter words that can be formed using the letters A, B, C, D, and E such that in each word the frequency of the letter A is an odd number.

**3.3.** (The Tower of Hanoi Problem.) Three vertical cylindrical poles of equal radius and height are placed along a line on top of a table, and n circular disks of decreasing radius, each with a hole at its center, are attached to the first pole such that the largest one is at the bottom, the next largest is just above it, and so on, and finally the smallest is on top of the heap (Figure 3.7.1). The distance between the feet of any two poles is not less than the diameter of the largest disk. A legal move is defined as the transfer of the top disk from any one of the three poles to another pole such that no disk is placed on top of a smaller disk. Let f(n) be the number of legal moves required to transfer all the disks from the first pole to one of the other two poles. Obtain a recurrence relation for f(n) and solve this relation.

**3.4.** In climbing up a staircase, an ordinary step covers at least one stair and at most two stairs. If f(n) is the number of ways of climbing up a staircase (making only ordinary steps) with n stairs, find a recurrence relation for f(n).

**FIGURE 3.7.1**

**3.5.** Let S be the set of all binary words of length n such that two zeros do not appear consecutively in any word in the set.
Find a recurrence relation for the number of elements in S.

**3.6.** There are n personal checks made out in the names of n individuals whose names appear on n envelopes. Find a recurrence relation for the number of ways these checks can be placed in the n envelopes so that no check is in the right envelope.

**3.7.** Show that the recurrence relation f(n) = (n − 1)[f(n − 1) + f(n − 2)] with f(1) = 0 and f(2) = 1 can be simplified as f(n) = n·f(n − 1) + (−1)^n with f(2) = 1.

**3.8.** Let f(n) be the number of subsets of a set that has n elements. Find a recurrence relation for f(n).

**3.9.** Let f(n) be the number of elements in X, the set of all n-symbol words formed by the symbols A, B, and C such that no word has a pair of consecutive A's. Find a recurrence relation for f(n).

**3.10.** Solve for k in the recurrence relation f(n + 1) = k·f(n) if **(a)** f(1) = 5 and f(2) = 10, and **(b)** f(1) = 5 and f(3) = 20.

**3.11.** Solve: f(n + 3) = 6f(n + 2) − 11f(n + 1) + 6f(n), where f(0) = 3, f(1) = 6, and f(2) = 14.

**3.12.** Solve: f(n + 3) = 4f(n + 2) − 5f(n + 1) + 2f(n), where f(0) = 2, f(1) = 4, and f(2) = 7.

**3.13.** Solve: f(n + 3) = 3f(n + 2) + 4f(n + 1) − 12f(n), where f(0) = 0, f(1) = −11, and f(2) = −15.

**3.14.** The roots of the characteristic equation of a linear homogeneous recurrence relation with constant coefficients are 1, 2, 2, and 3. Write down the relation and its general solution.

**3.15.** Solve: n·f(n) − (5n − 5)f(n − 1) = 0, where f(1) = 10. [*Hint:* Substitute g(n) = n·f(n).]

**3.16.** Let A be the m × m matrix in which all the diagonal numbers are equal to 0 and all the nondiagonal numbers are equal to 1.
Then the diagonal numbers of A^n are all equal to a positive integer f(n) and the nondiagonal numbers of A^n are all equal to a positive integer g(n) for any positive integer n. Prove that f(n + 1) = (m − 1)g(n) and g(n + 1) = f(n) + (m − 2)g(n), and use this fact to obtain a recurrence relation for g(n) with appropriate initial conditions. Solve the relation. Find g(n) and f(n).

**3.17.** Solve the following inhomogeneous recurrence relations involving f(n): f(n) − 4f(n − 1) + 4f(n − 2) = h(n), where **(a)** h(n) = 1, **(b)** h(n) = n, **(c)** h(n) = 3^n, **(d)** h(n) = 2^n, and **(e)** h(n) = 1 + n + 2^n + 3^n.

**3.18.** Solve the relation f(n + 2) − 4f(n + 1) + 3f(n) = 16 with the initial conditions f(0) = 4 and f(1) = 2.

**3.19.** Solve: f(n) = 4f(n − 1) + 5(3)^n.

**3.20.** Solve: f(n) = 4f(n − 1) + 5(4)^n.

**3.21.** Solve: f(n) = f(n − 1) + 2f(n − 2) + 4(3)^n with the initial conditions f(0) = 11 and f(1) = 28.

**3.22.** Solve: f(n) = 4f(n − 1) − 4f(n − 2) + (2)^n.

**3.23.** Solve the recurrence relation f(n) = f(n − 1) + 6n^2, f(0) = 0, by **(a)** using the characteristic root, and **(b)** repeated substitution. Hence find the sum of the squares of the first n natural numbers.

**3.24.** Find the constants p, q, and r in the recurrence relation f(n) + p·f(n − 1) + q·f(n − 2) = r if f(n) = A(2)^n + B(3)^n + 4.

**3.25.** If the ordinary generating function of a recurrence relation involving f(n) is g(x) = 2/[(1 − x)(1 − 2x)], find f(n).

**3.26.** Solve the recurrence relation f(n) = f(n − 2) + 4n, f(1) = 2, f(0) = 3, by using the appropriate ordinary generating function.
**3.27.** Prove that if g(x) is the ordinary generating function for the recurrence relation f(n + 1) = (n + 1)f(n) + (−1)^{n+1} with the initial condition f(0) = 1, then g(x) satisfies the differential equation g′(x) + [(x − 1)/x^2]g(x) + (1 + x)/x^2 = 0. Solve the recurrence relation by using its exponential generating function.

**3.28.** The recurrence relation for a divide-and-conquer algorithm is f(n) = 9f(n/3) + 8n with the initial condition f(1) = 1, where n = 3^r. Solve for f(n) as a function of n.

**3.29.** Solve the recurrence relation f(n) = 5f(n/2) − 6f(n/4) + n with the initial conditions f(1) = 2 and f(2) = 1, where n = 2^r.

**3.30.** Solve f(n) = f(n/b) + c with the initial condition f(1) = d, where n = b^r.

**3.31.** Find the ordinary generating function for the recurrence relation f(n + 1) = a·f(n) + b^n with the initial condition f(0) = c, where a, b, and c are constants.

**3.32.** Consider the example in this section that discusses the complexity of matrix multiplication. If f(n) is the total number of multiplications involved in finding the product of two n × n matrices, it was proved that f(n) = n^r, where r = log_2 7. Find a recursive relation for the number of additions involved and solve it.

**3.33.** Find the recurrence relation (involving the number of comparisons) for the divide-and-conquer algorithm to find the largest and smallest elements in a set of n numbers and solve it.

**3.34.** Let f(n) be the number of comparisons required to sort a list of n numbers in nondecreasing order. **(a)** Obtain a recursive relation expressing f(n) in terms of f(n − 1) with an appropriate initial condition.
**(b)** Obtain a recursive relation that expresses f(n) in terms of f(n/2) with an appropriate initial condition. **(c)** Solve these two recurrence relations and compare the efficiency of the two algorithms involved.

**Graphs and Digraphs**

**_4.1 INTRODUCTION_**

Even though the origins of graph theory can be traced back to the days of the great Swiss mathematician Leonhard Euler (1707–1783), only since the 1930s has there been a sustained and intense interest in graph theory as a mathematical discipline. These days, graph theory is one of the most popular and fertile branches of mathematics and computer science. One important reason for this revived and renewed interest in graph theory is its applicability to many of the complex and wide-ranging problems of modern society in such diverse fields as economics, facility location, management science, marketing, energy modeling, transmission of information, and transportation planning, to name a few. Quite often such problems can be modeled as graphs or networks. In this respect graph theory is used first and foremost as a tool for formulating problems and defining structural interrelationships. Once a problem is formulated in graph-theoretical language, it becomes relatively easy to comprehend it in its generality. The next step will, of course, be to explore avenues to seek a solution to the problem. The field of graph theory has two different branches: the algebraic aspects and the optimization aspects. In the next few chapters we discuss the former. The area of network optimization, which is greatly advanced by the advent of the computer, is the topic of the last two chapters.

A **graph** G = (V, E) is a structure consisting of a finite set V of **vertices** (also known as the **nodes**) and a finite set E of **edges** such that each edge e is associated with a pair of vertices v and w.
We write e = {v, w} or {w, v} and say that (1) e is an edge **between** v and w, (2) e is **incident** on both v and w, and (3) e **joins** the vertices v and w. In this case v and w are **adjacent vertices**, and both are **incident** on e. An edge joining a vertex to itself is a **loop**. If there is more than one edge joining pairs of vertices in a graph, the graph is a **multigraph**. If two or more edges join the same pair of vertices in a multigraph, these edges are called **multiple edges**. In a pictorial representation of a graph, a vertex is drawn as a small circle with the name (or number) of the vertex written inside the circle. An edge between two vertices is represented by a segment of a line or a curve joining the two circles that represent the vertices. In Figure 4.1.1 we have a pictorial representation of a multigraph in which the vertex set is V = {1, 2, 3, 4, 5, 6, 7, 8, 9}, with loops at vertex 2 and at vertex 8. There are two edges between 7 and 9 and three edges between 4 and 5. A graph is **simple** if it has no loops or multiple edges. If a real number is associated with each edge, then G is a **network** or a **weighted graph**. A **directed graph** or **digraph** is a structure G = (V, E) where again V is a finite set of vertices and E is a finite set of **arcs** such that each arc e in E is associated with an *ordered pair* of vertices v and w. We write e = (v, w) and say that (1) e is an arc from v to w, (2) vertex v is **adjacent to** vertex w, (3) vertex w is **adjacent from** vertex v, (4) arc e is **incident from** v, and (5) arc e is **incident to** w. Two vertices are **adjacent** if there is an arc from one to the other. We have a **weighted digraph** or **directed network** whenever a real number is associated with each arc.
If we treat every arc of a digraph as an edge, the resulting structure is called the **underlying graph** of the digraph. In a pictorial representation of a digraph an arc from vertex _v_ to vertex _w_ is drawn as a directed segment with the arrowhead pointing toward _w_. In Figure 4.1.2(a) we have a pictorial representation of a digraph the underlying graph of which is as in Figure 4.1.2(b). In a **mixed graph** _G_ = ( _V_ , _E_ ) at least one element of _E_ is an arc and at least one element is an edge. An element in the former category is a directed arc, whereas an element in the latter is an undirected edge. The real numbers associated with the edges and arcs in networks are usually written along these edges and arcs. **FIGURE 4.1.1** **FIGURE 4.1.2** Of special interest is the **bipartite graph** the vertices of which can be partitioned into two disjoint sets _V_ and _W_ such that each edge is an edge between a vertex in _V_ and a vertex in _W_ and is denoted by _G_ = ( _V_ , _W_ ; _E_ ). In Figure 4.1.3 we have a pictorial representation of a bipartite graph in which _V_ = { _a, p, q_ } and _W_ = { _b, r_ } _. G_ = ( _V_ , _W_ ; _E_ ) is a **bipartite digraph** if every arc in _E_ is from a vertex in _V_ to a vertex in _W_. **FIGURE 4.1.3** A simple graph with _n_ vertices is **complete** if there is an edge between every pair of vertices. The graph is then denoted by _K n_. A digraph is a **complete digraph** if its underlying graph is complete. A simple bipartite graph _G_ = ( _V, W, E_ ) is a **complete bipartite graph** if there is an edge between every vertex in _V_ and every vertex in _W_. The bipartite graph then is denoted by _K p,q_ if there are _p_ vertices in _V_ and _q_ vertices in _W_. A graph _G'_ = ( _V′_ , _E′_ ) is a **subgraph** of _G_ = ( _V, E_ ) if _V′_ is a subset of _V_ and _E′_ is a subset of _E_. 
If _W_ is any subset of _V_ , the **subgraph of** _G_ **induced by** _W_ is the graph _H_ = ( _W, F_ ), where an edge _f_ = { _u, v_ } of _E_ belongs to _F_ if both _u_ and _v_ are in _W_. In Figure 4.1.4, _W_ = {1, 2, 4, 5} is a subset of the vertex set _V_ of the graph _G_ and the subgraph of _G_ induced by _W_ is _H_. A complete subgraph of _G_ is called a **clique** in _G_. FIGURE 4.1.4 **Example 4.1.1** ( **The Königsberg Bridge Problem)** The first publication in graph theory is that of Leonhard Euler in 1736. His paper presented a solution to what is known as the Königsberg bridge problem. The city of Königsberg (now known as Kaliningrad) in Russia, situated by the Pregel River, consists of the north shore (N), the south shore (S), the west island (W), and the east island (E). Linking these four parts were seven bridges: two between N and W, two between S and W, and one each from E to N, S, and W. (See Figure 4.1.5.) The problem posed to Euler was whether it is possible to start from any location in the city and return to the starting point after crossing each bridge exactly once. If each part of the city is considered as a vertex and if each bridge is considered as an edge, we have a graph with four vertices and seven edges (see Figure 4.1.6), giving a graph model of the problem that can be stated as follows: Given a graph (not necessarily simple), is it possible to trace the entire diagram of the graph without going over the same edge more than once? That the answer is no in the case of the Königsberg bridge problem was easily established by Euler. More on this in our discussion of Eulerian graphs in Chapter 5. FIGURE 4.1.5 FIGURE 4.1.6 **Example 4.1.2** ( **Communication Digraphs)** Consider an organization consisting of several components. Let each component be a vertex. Draw an arrow from vertex _v_ to vertex _w_ if component _v_ can transmit signals to component _w_. The resulting digraph is known as a communication digraph. 
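The induced-subgraph construction defined earlier in this section can be sketched in a few lines (the function name and the small edge list are my own; the exact edges of Figure 4.1.4 are not reproduced here):

```python
def induced_subgraph(edges, W):
    """Subgraph induced by a vertex subset W: keep exactly those
    edges {u, v} whose endpoints both lie in W."""
    W = set(W)
    return [(u, v) for (u, v) in edges if u in W and v in W]

# A small example graph; inducing on W = {1, 2, 4, 5} keeps only
# the edges with both ends inside W.
E = [(1, 2), (2, 3), (3, 4), (4, 5), (1, 5), (2, 5)]
print(induced_subgraph(E, {1, 2, 4, 5}))  # [(1, 2), (4, 5), (1, 5), (2, 5)]
```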
**Example 4.1.3** ( **Transportation Networks)** Suppose that we let each vertex in a graph represent a city in the United States. Two vertices are joined by an edge if there is direct nonstop air service between them. A natural question that arises is whether it is possible to start from a city and return to the starting point after visiting each city exactly once. This problem is discussed in Chapter 5 when we investigate Hamiltonian graphs, named after the nineteenth-century Irish mathematician Sir William Hamilton, who did pioneering work in this area. If a nonnegative real number is assigned to each edge to represent the cost of using that edge, a related optimization problem then is to find such a tour (if it exists) so that the cost of the tour is as small as possible. This is the celebrated traveling salesman problem (TSP), which is a central topic in combinatorial optimization. **Example 4.1.4** ( **Tournaments)** In a round-robin tennis tournament each player must play every other player and no ties are allowed. Let each vertex in a digraph represent a player. Draw an arrow from vertex _v_ to vertex _w_ if _v_ defeats _w_. The resulting digraph is complete and it is known as a _dominance digraph_ of a tournament. Such dominance digraphs arise frequently in the social and biological sciences. A basic problem here is to decide who the "winner" or "leader" is in a dominance digraph. In Figure 4.1.7 we have a tournament consisting of four players in which the player represented by vertex 2 is the winner. More on tournaments in Chapter 5. FIGURE 4.1.7 **Example 4.1.5** ( **The Assignment Problem)** Suppose that there are _m_ job applicants _p_ 1, _p_ 2, . . . , _p m_ and _n_ jobs _q_ 1, _q_ 2, . . . , _q n_. Let _V_ be the set of job applicants and _W_ be the set of jobs. If _p i_ is qualified for _q j_ , draw an edge between those two vertices and let _c ij_ represent the salary to be paid to _p i_ if she or he is hired for the job _q j_. 
The model we have in this case is a weighted bipartite network, and the optimization problem then is to find a job assignment for the applicants such that (a) all the jobs are filled and (b) the total salary to be paid is a minimum. **_4.2 ADJACENCY MATRICES AND INCIDENCE MATRICES_** For the purpose of inputting graphs into a computer it is necessary to describe graphs without resorting to their pictorial representations. Moreover, such diagrams are not very practical when graphs with a large number of vertices and edges are to be studied. There are several ways to represent a graph or a digraph without a pictorial representation, and in this section we discuss some of them. The best way to input a graph depends on its properties and subsequent uses. Furthermore, the efficiency of a graph algorithm depends on the choice of the method for representing the graph under consideration. FIGURE 4.2.1 Let _G_ = ( _V_ , _E_ ) be a graph with no multiple edges where _V_ = {1, 2, 3, . . . , _n_ }. The **adjacency matrix** of _G_ is the _n × n_ matrix _A_ = ( _a ij_ ), where _a ij_ = 1 if there is an edge between vertex _i_ and vertex _j_ and _a ij_ = 0 otherwise. The adjacency matrix of a graph is symmetric. Its diagonal elements are zero if and only if there are no loops. The adjacency matrix of the graph in Figure 4.2.1 is the matrix _A_ , where The **degree** of a vertex in a graph is the number of edges incident on that vertex. A vertex is **odd** if its degree is odd; otherwise, it is **even**. In Figure 4.2.1, vertices 1, 2, and 5 are odd, with degrees 3, 3, and 1, respectively. Obviously, the number of nonzero elements in row _i_ of the adjacency matrix of a graph is the degree of vertex _i_ , which is also equal to the sum of all the elements of row _i_ or column _i_. The **adjacency matrix of a digraph** with _n_ vertices is also a square _n_ × _n_ matrix _A_ = ( _a ij_ ), where _a ij_ = 1 if there is an arc from _i_ to _j_ and is 0 otherwise. 
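The adjacency-matrix definitions just given can be sketched as follows (the function names are illustrative; vertices are numbered 1 through _n_ as in the text):

```python
def adjacency_matrix(n, edges, directed=False):
    """n x n adjacency matrix; vertices are numbered 1..n.
    For a graph the matrix is symmetric; for a digraph it need not be."""
    A = [[0] * n for _ in range(n)]
    for v, w in edges:
        A[v - 1][w - 1] = 1
        if not directed:
            A[w - 1][v - 1] = 1
    return A

def degree(A, i):
    """Degree of vertex i = sum of row i (for a loopless graph)."""
    return sum(A[i - 1])

A = adjacency_matrix(4, [(1, 2), (1, 3), (2, 3), (3, 4)])
print(degree(A, 3))                     # 3: vertex 3 is incident on three edges
print(A == [list(r) for r in zip(*A)])  # True: the matrix is symmetric
```

For a digraph, the sum of row _i_ gives the outdegree of _i_ and the sum of column _j_ gives the indegree of _j_, as noted in the next paragraph.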
The adjacency matrix of the digraph in Figure 4.2.2 is _A_ , where FIGURE 4.2.2 Notice that the adjacency matrix of a digraph need not be symmetric. The **outdegree** of a vertex in a digraph is the number of arcs incident from that vertex and the **indegree** of a vertex is the number of arcs incident to that vertex. In Figure 4.2.2 the outdegree of vertex 2 is 3 and its indegree is 1. Notice that the sum of the elements in row _i_ is the outdegree of _i_ and the sum of the elements in column _j_ is the indegree of _j_. Notice also that the outdegree of 3 is 2 and that its indegree is 4. Another matrix that is useful for entering graphs and digraphs in a computer is the incidence matrix. Unlike the adjacency matrix, the incidence matrix is capable of representing multiple edges and parallel arcs. Let _G_ = ( _V, E_ ) be a graph where _V_ = {1, 2, . . . , _n_ } and _E_ = { _e_ 1, _e_ 2, . . . , _e m_ }. The **incidence matrix** of _G_ is an _n × m_ matrix _B_ = ( _b ik_ ), where each row corresponds to a vertex and each column corresponds to an edge such that if _e k_ is an edge between _i_ and _j_ , then all elements of column _k_ are 0 except _b ik_ = _b jk_ = 1. For example, the incidence matrix of the graph in Figure 4.2.3 is _B_ , where FIGURE 4.2.3 Notice that a column which corresponds to an edge has exactly two nonzero elements if it is not a loop and exactly one nonzero element if it is a loop. Furthermore, the sum of the elements of row _i_ is the degree of vertex _i_. We also observe that in any graph with no loops the sum of the degrees of all the vertices is twice the number of edges since each edge is counted twice, once for each of its incident vertices. For example, in Figure 4.1.6 we see deg N + deg S + deg W + deg E = 3 + 3 + 5 + 3 = 14 = twice the number of edges. We state this property as a theorem that is sometimes known as the _first theorem of graph theory_. 
**THEOREM 4.2.1** If _G_ is a multigraph with no loops and _m_ edges, the sum of the degrees of all the vertices of _G_ is 2 _m_. **COROLLARY** The number of odd vertices in a loopless multigraph is even. **_Proof_ :** Suppose that the number of odd vertices is _r_. Let _p_ be the sum of the degrees of all odd vertices and _q_ be the sum of the degrees of all even vertices. Then _p_ + _q_ is even by the theorem. Also, _q_ is even, so _p_ is even. But _p_ is the sum of _r_ odd numbers. Therefore, _r_ is even, which proves the assertion. The **incidence matrix _B_ of a digraph** (with no loops) is defined as follows: If _e k_ is an arc from _i_ to _j_ , all elements in column _k_ are zero except _b ik_ = –1 and _b jk_ = 1. For example, the incidence matrix of the digraph in Figure 4.2.4 is _B_ , where FIGURE 4.2.4 Notice that the sum of all the elements in row _i_ of the incidence matrix of a digraph is equal to the indegree of _i_ minus the outdegree of _i_. We also observe that **in any digraph the sum of all outdegrees is equal to the total number of arcs, which is again equal to the sum of all indegrees**. This is because when the outdegrees are summed, each arc is counted once since every arc is incident from one vertex. Similarly, when the indegrees are summed, each arc is counted once since every arc is incident to a single vertex. **_4.3 JOINING IN GRAPHS_** A **path between two vertices** _v_ 1 and _v r_ in a graph is a finite sequence of vertices and edges of the form _v_ 1, _e_ 1, _v_ 2, _e_ 2, _v_ 3, _e_ 3, . . . , _e r_ –1, _v r_ , where _e k_ is an edge between _v k_ and _v k_ +1. In general, the vertices and edges in a path need not be distinct. A path is **simple** if its vertices are distinct. In a simple path, obviously all the edges are distinct. But a path with distinct edges can have repeated vertices. A graph is said to be **connected** if there is a path between every pair of vertices in it. A path between a vertex and itself is a **closed path**. 
A closed path in which all the edges are distinct is a **circuit**. A circuit in which all the vertices are distinct is a **cycle**. Notice that _v_ , _e_ 1, _w_ , _e_ 2, _v_ is a cycle but _v_ , _e_ , _w_ , _e_ , _v_ is not a circuit and therefore not a cycle. In Figure 4.3.1, 1 - - - - 2 - - - - 3 - - - - 2 - - - - 1 - - - - 5 is a path that is not simple (its vertices repeat), and Figure 4.3.1 also exhibits a circuit and a cycle. If _v_ and _w_ are connected (i.e., there is a path between them), then _w_ and _v_ are connected. In fact, the relation _J_ defined by _vJw_ if _v_ and _w_ are connected is an equivalence relation partitioning the set _V_ of vertices into pairwise disjoint subsets of _V_. The subgraph induced by any such subset is a maximal connected subgraph called a **component** of the graph. The number of components of a graph _G_ is denoted by _K_ ( _G_ ) and is equal to 1 if and only if _G_ is connected. The graph in Figure 4.3.2 has two components, _G′_ and _G′′_ , where _G′_ is induced by the subset {1, 2, 3, 4} and _G″_ is induced by {5, 6, 7}. FIGURE 4.3.1 FIGURE 4.3.2 There is an interesting and useful relation between the number of paths between pairs of vertices in _G_ and the elements of the powers of its adjacency matrix. A path with _k_ edges is called a _k_ - **path**. A 1-path is an edge. In the _n_ × _n_ adjacency matrix of a graph (with no multiple edges) with vertex set _V_ = {1, 2, . . . , _n_ }, the ( _i_ , _j_ )-element is 1 if and only if the number of 1-paths (edges) between _i_ and _j_ is 1. That this result can be generalized is the content of the following assertion. **THEOREM 4.3.1** If _A_ is the adjacency matrix of a graph, the ( _i, j_ )-entry of the _k_ th power ( _k_ ≥ 1) of _A_ is the number of _k_ -paths between vertex _i_ and vertex _j_. **_Proof_ :** The proof is by induction on _k_. This is true when _k_ = 1. Assume that this is true for ( _k_ – 1). 
Let _a ij_ ( _k_ ) denote the ( _i_ , _j_ )-entry of the _k_ th power of _A_. Then, since _A k_ = _A k_ –1 · _A_ , we have _a ij_ ( _k_ ) = _a i_ 1( _k_ –1) _a_ 1 _j_ + _a i_ 2( _k_ –1) _a_ 2 _j_ + · · · + _a in_ ( _k_ –1) _a nj_. (*) But _a pj_ = 1 if and only if there is an edge between _p_ and _j_. By hypothesis, _a ip_ ( _k_ –1) is the number of ( _k_ – 1)-paths between _i_ and _p_ , so the product _a ip_ ( _k_ –1) _a pj_ is the number of _k_ -paths from _i_ to _j_ in which the vertex just prior to _j_ is _p_. So the right side of (*) is the total number of _k_ -paths between _i_ and _j_ obtained after examining _p_ = 1 to _p_ = _n_ consecutively. **COROLLARY** The ( _i_ , _i_ )-entry in _A_ 2 is the degree of _i_. **Example 4.3.1** As an illustration, consider the matrices _A_ , _A_ 2, and _A_ 4 of the graph of Figure 4.3.3. In _A_ 2 the (4, 4)th entry is 2 and the degree of vertex 4 is 2, and the two 2-paths between 4 and 4 are 4 - - - - 1 - - - - 4 and 4 - - - - 3 - - - - 4. From the fourth power of _A_ we see that there are eight different 4-paths between 2 and 5. FIGURE 4.3.3 Finally, three more definitions: An edge in a connected graph is called a **bridge** if the removal of that edge, but not its end vertices, makes the graph disconnected. A graph with no cycles is an **acyclic graph** , also called a **forest**. A connected forest is a **tree**. We study trees in detail in Chapters 6 and . **_4.4 REACHING IN DIGRAPHS_** A **directed path** from a vertex _v_ to a vertex _w_ in a digraph is a finite sequence _v_ 1, _a_ 1, _v_ 2, _a_ 2, . . . , _v r_ , _a r_ , _v r_ +1 of vertices and arcs, where the first vertex is _v_ and the last vertex is _w_ and _a i_ is an arc from _v i_ to _v i_ +1. If there is a directed path from _v_ to _w_ , then _v_ is **connected to** _w_ and _w_ is **connected from** _v_. A pair of vertices is a **strongly connected pair** if each is connected to the other. If one of them is connected to the other, it is a **unilaterally connected pair**. A digraph is **strongly connected** if every pair of vertices is a strongly connected pair and it is **unilaterally connected** if every pair is unilaterally connected. A digraph is **weakly connected** if its underlying graph is connected. 
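The path-counting theorem of the preceding section is easy to check numerically: form the _k_ th power of _A_ by repeated multiplication and read off the entries. A minimal sketch (the 3-vertex example graph is my own, not the graph of Figure 4.3.3), which by Theorem 4.4.1 below applies to digraphs as well:

```python
def mat_mul(X, Y):
    """Product of two n x n matrices (this is the sum in (*))."""
    n = len(X)
    return [[sum(X[i][p] * Y[p][j] for p in range(n)) for j in range(n)]
            for i in range(n)]

def mat_pow(A, k):
    """k-th power of A by repeated multiplication, k >= 1."""
    R = A
    for _ in range(k - 1):
        R = mat_mul(R, A)
    return R

# Adjacency matrix of the path graph 1 ---- 2 ---- 3.
A = [[0, 1, 0],
     [1, 0, 1],
     [0, 1, 0]]
A2 = mat_pow(A, 2)
# The diagonal of A^2 gives the degrees, as in the corollary above:
print([A2[i][i] for i in range(3)])  # [1, 2, 1]
# Only one 2-path between vertices 1 and 3, namely 1 ---- 2 ---- 3:
print(A2[0][2])  # 1
```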
A directed path from a vertex to itself is a **closed directed path**. A closed directed path is a **directed circuit** if its arcs are distinct and it is a **directed cycle** if its vertices are different. Notice the subtle difference between graphs and digraphs: If the vertices of a closed directed path are distinct, its arcs are distinct. But if the vertices of a closed path in a _graph_ are distinct, its edges need not be distinct. The relation defined by _vRw_ if { _v_ , _w_ } is a strongly connected pair is an equivalence relation giving a partition of the vertex set _V_ into a class of pairwise disjoint subsets, and the subgraph induced by any one of these subsets is called a **strong component** of the digraph. For example, in the digraph of Figure 4.4.1 we have two strong components induced by the sets {1, 2, 3, 4} and {5, 6, 7}. As in the case of graphs the elements of the _k_ th power of the adjacency matrix _A_ of a digraph can be used to compute the number of _k_ -paths between pairs of vertices. The proof of the following result is left as an exercise. **FIGURE 4.4.1** THEOREM 4.4.1 If _A_ is the adjacency matrix of a digraph, then the ( _i_ , _j_ )-entry of the _k_ th power ( _k_ ≥ 1) of _A_ is the number of _k_ -directed paths from _i_ to _j_. If _G_ is a graph, then a digraph _G′_ obtained from _G_ by changing each edge of _G_ into an arc is called an **orientation** of _G_. For example, in Figure 4.4.2 the digraphs of (b) and (c) are both orientations of the graph of (a). An orientation of a graph is called a **strong orientation** of the graph if the orientation is strongly connected. In Figure 4.4.2 the digraph in (c) is a strong orientation of the graph in (a). A graph is said to be **strongly orientable** if it has a strong orientation. It is easy to see that a strongly orientable graph is necessarily connected and bridgeless. The converse also holds good: A **graph is strongly orientable if and only if it is connected and bridgeless**. 
This theorem is due to H. E. Robbins (1939) and we omit its proof. Later in the chapter we discuss an algorithm to obtain a strong orientation of a graph if such an orientation exists. **FIGURE 4.4.2** The **reachability matrix** of a digraph with _n_ vertices is an _n_ × _n_ matrix _R_ = ( _r ij_ ), where _r ij_ is 1 if there is a directed path from _i_ to _j_ , and 0 otherwise. Obviously, a digraph is strongly connected if and only if every element of its reachability matrix is equal to 1. **_4.5 TESTING CONNECTEDNESS_** Given a graph, it is natural to ask whether it is connected. Of course, from the diagram of a graph one can easily see whether the graph has more than one component and thereby test its connectedness. For large graphs such diagrams are not feasible. Moreover, if we input a graph into a computer, we need an algorithm to see whether it is connected. One such algorithm is the **depth-first search** (DFS) technique, in which we relabel the vertices of the graph as follows. Let the vertices of the graph _G_ be _v_ 1, _v_ 2, . . . , and _v n_. Select an arbitrary vertex and label it as 1. Pick any vertex adjacent to 1. This is not yet labeled; label it as 2. Mark the edge {1, 2} as a used edge so that it will not be used again. Proceeding similarly, suppose that we label vertex _v i_ with integer _k_. Search among all the unlabeled adjacent vertices of this vertex, select one of them, and label it as ( _k_ + 1). Mark the edge { _k, k_ + 1} as a used edge. Now it may be the case that all the adjacent vertices of _k_ are labeled. If so, go back to vertex ( _k_ – 1) and search among its unlabeled adjacent vertices. If we find one such vertex, label it as ( _k_ + 1) and mark the edge { _k_ – 1, _k_ + 1} as a used edge. Continue the process until all the vertices are labeled or we are back at vertex 1 with at least one vertex unlabeled. In the former case the graph is connected and there will be exactly ( _n_ – 1) used edges. 
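The labeling procedure just described can be sketched in code. This is a minimal version using an explicit stack, so backtracking follows the tree of used edges rather than strict label order; the adjacency structure and the function name are illustrative, not from the text:

```python
def dfs_label(adj, start):
    """Relabel vertices by depth-first search, as described above.
    adj maps each vertex to the list of its adjacent vertices.
    Returns (labels, tree_edges); the graph is connected iff every
    vertex receives a label, and then there are exactly n - 1 used edges."""
    labels = {start: 1}
    tree_edges = []
    stack = [start]
    next_label = 2
    while stack:
        v = stack[-1]
        unlabeled = [w for w in adj[v] if w not in labels]
        if unlabeled:
            w = unlabeled[0]
            labels[w] = next_label
            next_label += 1
            tree_edges.append((labels[v], labels[w]))  # mark a used edge
            stack.append(w)
        else:
            stack.pop()  # all neighbours labeled: go back
    return labels, tree_edges

adj = {'a': ['b'], 'b': ['a', 'c'], 'c': ['b', 'd'], 'd': ['c']}
labels, tree = dfs_label(adj, 'b')
print(len(labels) == len(adj))  # True: the graph is connected
print(len(tree))                # 3 used edges for 4 vertices
```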
The acyclic subgraph consisting of the _n_ vertices of the graph and these ( _n_ – 1) used edges is called a **depth-first search spanning tree** of the graph. If it is not possible to label all the _n_ vertices by the DFS technique, we conclude that the graph is not connected. A similar procedure for testing the strong connectedness of digraphs is given in Tarjan (1971). FIGURE 4.5.1 Let us illustrate this DFS labeling technique on the graph of Figure 4.5.1 with eight vertices _a, b, c, d, e, f, g_ , and _h_. We select vertex _b_ and label it 1. An adjacent vertex to 1 is _g_. Label it as 2. Mark the edge {1, 2} as a used edge. A vertex adjacent to 2 and not labeled is _f_ , and it is labeled 3. At this stage, edge {2, 3} is marked. We now see that _c_ is the only unlabeled vertex adjacent to 3. So _c_ is labeled as 4 and {3, 4} is marked. Then _d_ or _e_ can be labeled as 5. The tie is broken by labeling _d_ as 5 and {4, 5} is marked. We notice that 5 has no unlabeled adjacent vertices, so we go back to 4 and label _e_ as 6. At this stage we go back to 4, then to 3, and then to 2 in search of unlabeled adjacent vertices. We label _h_ as 7 and _a_ as 8 and mark the edges {2, 7} and {7, 8}. At this point all the eight vertices are labeled, showing that the graph is indeed connected. The seven marked edges are the edges of the DFS spanning tree as shown in Figure 4.5.2. Now let us find the computational complexity of the DFS algorithm using the example just discussed. Notice that each edge { _i_ , _j_ }, where _i_ < _j_ , can be investigated in the forward direction from _i_ to _j_ or in the backward direction from _j_ to _i_. In our example we investigate {1, 2}, {2, 3}, {3, 4}, and {4, 5} in the forward direction. Since 5 had no unlabeled adjacent vertices, we had to go back to 4, which meant that we had to investigate {4, 5} in the backward direction. So the edge {4, 5} is examined twice and will never be investigated again. 
Thus each edge is examined at most twice. So if there are _m_ edges, there will be at most 2 _m_ investigations, and there are _n_ vertices to be labeled. Thus the complexity is at most _n_ + 2 _m_. Since the maximum value for _m_ is _n_ ( _n_ – 1)/2, the worst-case complexity of the DFS algorithm is _n_ 2. FIGURE 4.5.2 **_4.6 STRONG ORIENTATION OF GRAPHS_** Consider a graph model in which the vertices are the street corners of a large city. Two vertices are joined by an edge if there is a street joining them. Suppose that the resulting graph _G_ is strongly orientable. This is the case if and only if _G_ is connected and bridgeless. We are now interested in (temporarily) converting all streets in the city into one-way streets. Since _G_ is strongly orientable every corner can be reached from every other corner after this conversion. How is this conversion carried out? We again resort to the DFS procedure and label all the vertices. If { _i_ , _j_ } is a marked edge where _i_ < _j_ , convert this edge into an arc from _i_ to _j_. On the other hand, if { _i, j_ } is an unmarked edge where _i_ < _j_ , convert this edge into an arc from _j_ to _i_. The resulting digraph _G′_ is a strong orientation of _G_. For a proof of this assertion, see Roberts (1976). In Figure 4.6.1 we have a connected bridgeless graph in (a), with a DFS spanning tree in (b), where the vertices are appropriately labeled. A strong orientation of _G_ is the digraph _G′_ of Figure 4.6.1(c). When we use the DFS procedure there will be _m_ unmarked edges in the worst case that have to be investigated. Thus the complexity is at most _n_ + 2 _m_ + _m_ , which in the worst case will be equal to _f_ ( _n_ ) = (3 _n_ 2 – _n_ )/2. **_4.7 NOTES AND REFERENCES_** There are several excellent references on graph theory both at the introductory level and at the advanced level. Here is a partial list: Behzad et al. 
(1979), Berge (1962), Bondy and Murty (1976), Carre (1979), Chartrand (1977), Deo (1974), Gibbons (1985), Gondran and Minoux (1984), Harary (1969a), Ore (1963), Roberts (1976, 1978), Swamy and Thulasiraman (1981), Wilson (1979), and Yemelichev et al. (1984). The chapters on graphs in the books by Grimaldi (1985), Liu (1985), Roberts (1984), Townsend (1987), and Tucker (1984) are also highly recommended. For a proof of Robbins's theorem, see Chapter 2 of Roberts (1978), which also contains a complete discussion of strong orientations and one-way street assignments. **FIGURE 4.6.1** **_4.8 EXERCISES_** **4.1.** Draw a graph _G_ = ( _V_ , _E_ ), where _V_ = {1, 2, 3, 4, 5} and _E_ = {{1, 2}, {1, 3}, {1, 5}, {2, 3}, {3, 4}, {3, 5}, {4, 5}}. Find the set _W_ = { _i : i_ is a vertex such that _i_ and 2 are adjacent}. **4.2.** Draw a digraph for which the underlying graph is the graph of Problem 4.1. Find the set of vertices **(a)** adjacent to vertex 2 and **(b)** adjacent from vertex 2 in this digraph. **4.3.** **(a)** Construct a complete graph with four vertices such that no two edges intersect, **(b)** Construct a complete graph with five vertices, **(c)** Do you notice any difference between the graphs in parts **(a)** and **(b)?** **4.4.** Find the number of edges in a complete graph with _n_ vertices. **4.5.** Draw an air transportation graph with Boston, New York, London, Paris, Moscow, Prague, and Rome as vertices and an edge joining two cities if there is nonstop air service between them. **4.6.** Draw the subgraph induced by _V′_ = {2, 3, 4, 5} in Problem 4.1. **4.7.** Find the minimum number of bridges **(a)** to be constructed and **(b)** to be demolished so that the Königsberg bridge problem becomes solvable. **4.8.** **(a)** Draw the bipartite graph _K_ 2,2 such that no two edges intersect, **(b)** Draw the bipartite graph _K_ 3,3. **(c)** What is the noticeable difference between these two bipartite graphs? **4.9.** Find the number of edges in _K p,q_. 
**4.10.** Consider the graph _G_ = ( _V, E_ ) with _V_ = {1, 2, 3, 4, 5} and _E_ = { _a_ , _b_ , _c_ , _d_ , _e_ , _f_ , _g_ , _h_ }, where _a_ = {1, 2}, _b_ = {2, 3}, _c_ = {3, 5}, _d_ = {2, 5}, _e_ = {2, 4}, _f_ = {4, 5}, _g_ = {1, 4}, and _h_ = {1, 5}. **(a)** Find the adjacency matrix of _G_. **(b)** Find the degree of each vertex and find the set of odd vertices. **(c)** Find the incidence matrix of _G_. **4.11.** Consider a digraph _G′_ the underlying graph of which is the graph _G_ of Problem 4.1. **(a)** Find the adjacency matrix of _G′_. **(b)** Find the indegree and outdegree of each vertex in _G′_. **(c)** Find the incidence matrix of _G′_. **4.12.** Give an example of a simple graph with **(a)** no odd vertices, **(b)** no even vertices. **4.13.** Construct a connected simple graph with _n_ vertices such that the degree of each vertex is 2. Notice the structure of the graph. **4.14.** Construct a graph with _n_ vertices and ( _n_ – 1) edges such that there are two vertices of degree 1 and ( _n_ – 2) vertices of degree 2. **4.15.** A graph _G_ with the property that all of its vertices have the same degree _r_ is called a **regular graph** of degree _r_. Notice that a complete graph is regular but the converse is not true, **(a)** Construct a simple regular graph of degree 1 that is not complete, **(b)** Construct a simple regular graph of degree 2 that is not complete, **(c)** If _G_ is a regular graph of degree _r_ and if _G_ has _n_ vertices, find the number of edges in _G_. **4.16.** Prove that in a simple graph with two or more vertices, the degrees of the vertices cannot all be distinct. **4.17.** Let _G_ be a simple graph with _n_ vertices with _A_ as its adjacency matrix and _B_ as its incidence matrix. Define the _n_ × _n_ diagonal matrix _C_ in which the _i_ th diagonal element is the degree of vertex _i_ in _G. C_ is called the **degree matrix** of _G_. Prove that _B · B t_ = _A_ + _C_. 
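The identity of Problem 4.17 can be spot-checked numerically on a single graph (a sanity check, not the proof the problem asks for; the edge list is the one from Problem 4.10, renumbered from 0):

```python
def check_bbt(n, edges):
    """Verify B·Bᵗ = A + C on one simple loopless graph: the diagonal
    entries of B·Bᵗ are the degrees (C) and the off-diagonal entries
    are the adjacencies (A). Vertices are numbered 0..n-1."""
    m = len(edges)
    B = [[0] * m for _ in range(n)]   # incidence matrix
    A = [[0] * n for _ in range(n)]   # adjacency matrix
    for k, (i, j) in enumerate(edges):
        B[i][k] = B[j][k] = 1
        A[i][j] = A[j][i] = 1
    BBt = [[sum(B[i][k] * B[j][k] for k in range(m)) for j in range(n)]
           for i in range(n)]
    C = [[sum(A[i]) if i == j else 0 for j in range(n)] for i in range(n)]
    return all(BBt[i][j] == A[i][j] + C[i][j]
               for i in range(n) for j in range(n))

# The edges a through h of Problem 4.10, shifted to 0-based vertices.
edges = [(0, 1), (1, 2), (2, 4), (1, 4), (1, 3), (3, 4), (0, 3), (0, 4)]
print(check_bbt(5, edges))  # True
```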
**4.18.** A square matrix in which each element is 0 or 1 is called a **dominance matrix** if **(a)** each diagonal number is 0 and **(b)** the ( _i_ , _j_ ) element is 1 if and only if the ( _j_ , _i_ ) element is 0. Prove that the adjacency matrix of a tournament is a dominance matrix. **4.19.** Consider the digraph _G_ = ( _V_ , _E_ ) where _V_ = {1, 2, 3, 4, 5, 6} and _E_ = {(1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (1, 6), (2, 6), (5, 2)}. **(a)** Find a 6-path from 1 to 6. **(b)** Find a simple path from 1 to 6 using five arcs. **(c)** Find a cycle with four arcs. **(d)** Use the adjacency matrix of _G_ to determine the number of 2-paths from 2 to 4. **(e)** Find all the strong components of _G_. **(f)** Find the reachability matrix _R_ of _G_. **4.20.** Let _R_ be the reachability matrix of a digraph _G_ = ( _V, E_ ) where _V_ = {1, 2, . . . , _n_ }, _P_ = ( _p ij_) be the _element-wise product_ of _R_ and the transpose of _R_ , and _Q_ = _R_ 2 = ( _q ij_). Prove: **(a)** _p ij_ = 1 if and only if _i_ and _j_ are strongly connected **(b)** _q ii_ = the number of vertices in the strong component which contains vertex _i_. **4.21.** Construct a graph with five vertices and six edges that consists of a circuit with six edges and two cycles with three edges each. **4.22.** Consider the graph of Problem 4.1. **(a)** Find a 6-path between 1 and 4. **(b)** Find a simple path between 1 and 4 with four edges. **(c)** Use the adjacency matrix to determine the number of 2-paths between 2 and 4. **4.23.** Define the reachability matrix of a _graph_. **4.24.** Draw graph _G_ with adjacency matrix _A_ such that **4.25.** Show that the sum of the diagonal elements of the second power of the adjacency matrix of a graph _G_ is twice the number of edges in _G_. **4.26.** If _G_ is a connected graph with _n_ vertices, show that there exists a path with at most ( _n_ – 1) edges between every pair of vertices. 
FIGURE 4.8.1 **4.27.** If _A_ is the adjacency matrix of a graph with _n_ vertices and if a nondiagonal element of _A_ \+ _A_ 2 \+ _A_ 3 \+ · · · + _A n–_1 is zero, what can you say about _G_? **4.28.** Obtain a DFS spanning tree (starting from vertex 1) in the connected graph _G_ of Figure 4.8.1. **4.29.** The graph _G_ in Problem 4.28 is connected and bridgeless. Find a strong orientation of _G_. **4.30.** A triangle is said to be monochromatic if all its sides are of the same color. Show that no matter how we color the edges of a complete graph with six vertices using two colors, there will always be at least one monochromatic triangle. (See Example 1.5.5.) It can be shown that there will be at least two such triangles. Show also that it is possible to color all the edges of a complete graph with five vertices using two colors such that there is no monochromatic triangle. **More on Graphs and Digraphs** **_5.1 EULERIAN PATHS AND EULERIAN CIRCUITS_** A path in a graph is an **Eulerian path** if every edge of the graph appears as an edge in the path exactly once. A closed Eulerian path is an **Eulerian circuit**. A graph is said to be an **Eulerian graph** if it has an Eulerian circuit. There are analogous definitions in the case of digraphs. The idea of Eulerian circuits first arose from the famous Konigsberg bridge problem (Example 4.1.1), which asked whether one could traverse all the seven bridges in the town, going over each one exactly once, and returning to the starting location. In the course of demonstrating that it was impossible to do so, Euler produced techniques which, it is universally believed, gave birth to graph theory. It is obvious that the problem can be solved if its graph model (see Figure 4.1.6) is an Eulerian graph. The following theorem settled this question. **THEOREM 5.1.1** A connected graph _G_ with no loops is Eulerian if and only if the degree of each vertex is even. 
**_Proof:_** Any Eulerian circuit in _G_ leaves each vertex as many times as it enters. So each vertex of _G_ is even. On the other hand, suppose that _G_ is a connected graph in which each vertex is even. We prove that _G_ is Eulerian by actually constructing an Eulerian circuit in it. There are several algorithms for this construction. For details, refer to Even (1979). We adopt the following procedure, in which circuits are "spliced," until we actually obtain an Eulerian circuit. Start from any vertex _v_. Traverse distinct edges of _G_ until we return to v. This is certainly possible since each vertex is even. Let _C_ 1 be the circuit thus obtained. If this circuit contains all the edges of the graph, we are done. Otherwise, delete all the edges of this circuit and all vertices of degree 0 from _G_ to obtain the connected subgraph _H_ 1 in which each vertex is also even. Furthermore, since _G_ is connected, there is a vertex _u_ common to both the circuit _C_ 1 and the subgraph _H_ 1. Now start from _u_ and obtain a circuit _C_ 2 by traversing distinct edges of the subgraph. Notice that the two circuits have no common _edges_ , even though they may have common vertices. If _v_ = _u_ , the two circuits can be joined together to form an enlarged circuit _C_ 3. See Figure 5.1.1(a). If _v_ and _u_ are distinct, let _P_ and _Q_ be the two distinct simple paths between _v_ and _u_ consisting of edges from _C_ 1 . Then _P, Q_ , and _C_ 2 are spliced together to form a new circuit _C_ 3, as in Figure 5.1.1(b). If this enlarged circuit has all the edges of _G_ , we conclude that it is Eulerian. Otherwise, we continue until we obtain a circuit that has all the edges of _G_. FIGURE 5.1.1 FIGURE 5.1.2 To illustrate this procedure, let us try to construct an Eulerian circuit for the graph _G_ of Figure 5.1.2 in which the edges are labeled whenever there are multiple edges. 
Starting from vertex 1, suppose that we have the circuit _C_ 1, consisting of {1, 2}, {2, 3}, _e_ 3, _e_ 2, _e_ 1, and {6, 1}. Deleting all the edges of this circuit from _G_ and then deleting all vertices of degree zero, we get the subgraph _H_ 1 as in Figure 5.1.3 and we see that vertex 3 is common for both the subgraph and the circuit _C_ 1. Starting from vertex 3 in this subgraph, we get the circuit _C_ 2 consisting of _e_ 4, _e_ 5 , and {5, 3}. Then we splice these two circuits to get a circuit _C_ 3 consisting of {1, 2}, {2, 3}, all the edges of _C_ 2, _e_ 3, _e_ 2, _e_ 1, and {6, 1}. This new circuit also is not Eulerian, leaving us with a new subgraph _H_ 2 as in Figure 5.1.4. The vertex 2 is common for this subgraph and _C_ 3 and we have a circuit _C_ 4 in _H_ 2 consisting of {2, 5}, _e_ 6 , and {6, 2}. Finally, we splice _C_ 4 and _C_ 3 to obtain an Eulerian circuit of _G_ consisting of {1, 2}, {2, 5}, _e_ 6 , {6, 2}, {2, 3}, _e_ 4, _e_ 5, {5, 3}, _e_ 3, _e_ 2, _e_ 1, and {6, 1}. Finding an Eulerian circuit by this method can be tedious, particularly in large graphs. The following procedure, known as Fleury's algorithm, is less complicated: Start from any vertex and delete an edge as soon as it is traversed. Also, never cross a bridge if you can help it. If we are able to return to the starting point after deleting all the edges, the circuit is Eulerian and we conclude that the graph is Eulerian as well. For example, in Figure 5.1.2 we start from 2 and traverse along {2, 3}, {3, 5}, {5, 2}, {2, 1}, and {1, 6} and stop at 6. If we delete all the edges traversed, we get the subgraph as in Figure 5.1.5, in which we start from 6 but we do not go along _e_ 7 since it is a bridge. So we traverse along _e_ 1, _e_ 2 , and _e_ 3 , reaching 3. Once these edges are deleted, _e_ 4 becomes a bridge that we are forced to cross, and similarly, we cross the bridges _e_ 5, _e_ 6 and finally {6, 2}. At this stage we have an Eulerian circuit. 
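The splicing construction in the proof of Theorem 5.1.1 can be sketched in code. The following is a minimal stack-based variant of that procedure (often called Hierholzer's algorithm); the edge-list encoding of the graph and the function name are my own choices, not the book's:

```python
from collections import defaultdict

def eulerian_circuit(edges):
    """Build an Eulerian circuit by the splicing idea of Theorem 5.1.1.

    `edges` is a list of (u, v) pairs; the graph is assumed connected
    with every vertex of even degree, so a circuit must exist.
    """
    # Adjacency lists hold (neighbor, edge_index) so parallel edges stay distinct.
    adj = defaultdict(list)
    for i, (u, v) in enumerate(edges):
        adj[u].append((v, i))
        adj[v].append((u, i))

    used = [False] * len(edges)
    stack, circuit = [edges[0][0]], []
    while stack:
        v = stack[-1]
        # Discard adjacency entries whose edge is already traversed.
        while adj[v] and used[adj[v][-1][1]]:
            adj[v].pop()
        if adj[v]:
            w, i = adj[v].pop()
            used[i] = True
            stack.append(w)              # extend the current circuit
        else:
            circuit.append(stack.pop())  # vertex exhausted: splice it in
    return circuit[::-1]
```

The stack plays the role of the partial circuit being extended; each backtracking step splices a sub-circuit into the final answer, exactly as circuits _C_ 1, _C_ 2, . . . were spliced above.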
FIGURE 5.1.3 FIGURE 5.1.4 FIGURE 5.1.5 **Note** : The presence of a loop at a vertex does not in any way influence the existence of an Eulerian circuit. Let _G_ be any connected graph and let _G′_ be the subgraph obtained after deleting all its loops. Then _G_ is Eulerian if and only if _G′_ is Eulerian. Unless otherwise stated, we assume that all graphs and digraphs in the remainder of this chapter are loopless. A characterization of graphs with Eulerian paths can now easily be obtained as follows. **THEOREM 5.1.2** A connected non-Eulerian graph _G_ with no loops has an Eulerian path if and only if it has exactly two odd vertices. **_Proof:_** If _G_ has an Eulerian path from _u_ to _v_ , both _u_ and _v_ are odd and since this path passes through every vertex and traverses each edge once, every other vertex is necessarily even. On the other hand, suppose that _G_ is connected with exactly two odd vertices, _u_ and _v_. Now either _u_ and _v_ are adjacent or they are not. In the former case let _e_ be an edge between them. Delete _e_ to get the graph _G′_ (with at most two components) in which each vertex is even. If _G′_ is connected, obtain an Eulerian circuit in it starting from _u_ and then adjoin the edge _e_ to this circuit to get an Eulerian path between _u_ and _v_. If _G′_ has two components, let the component that contains _u_ be _G_ 1 and the component that contains _v_ be _G_ 2. Of course, both these components are Eulerian. Now obtain an Eulerian circuit from _u_ in the first component and an Eulerian circuit from _v_ in the second component. Then the path consisting of the edges of the first circuit, the edge (actually, the bridge) _e_ , and the edges of the second circuit constitutes an Eulerian path between _u_ and _v_. Finally, if _u_ and _v_ are not adjacent in _G_ , construct an edge _e_ joining them, producing a new graph _H_ , which is Eulerian. Obtain an Eulerian circuit in _H_ from _u_ in which the last edge is _e_. 
If we delete _e_ , we have an Eulerian path in _G_ from _u_ to _v_. This completes the proof. We now state analogous results in the case of digraphs. For the straightforward proofs of these theorems, the reader is referred to Behzad et al. (1979). **THEOREM 5.1.3** A weakly connected digraph has a directed Eulerian circuit if and only if the indegree of each vertex equals its outdegree. **THEOREM 5.1.4** A weakly connected digraph with no directed Eulerian circuit has a directed Eulerian path if and only if the indegree of each vertex equals its outdegree except for two vertices _u_ and _v_ such that the outdegree of _u_ equals its indegree plus one and the indegree of _v_ equals its outdegree plus one. **_5.2 CODING AND DE BRUIJN DIGRAPHS_** There are several interesting and useful applications of Eulerian paths and circuits in many areas, such as computer science, operations research, cryptography, and transportation problems, to name a few. We discuss some examples here. The Chinese postman problem is a network optimization problem in which an arbitrary connected network is enlarged into an Eulerian one. The problem can be stated as follows: A mail carrier starts from the post office, delivers mail to each block in his beat, and returns to the post office. If we take each street corner in the route as a vertex and a street between two corners as an edge, we have a graph _G_ as a model of this problem. If _G_ is Eulerian, the mail carrier has to traverse each street exactly once. If it is not Eulerian, he has to repeat some edges. A typical optimization problem in this context is to locate those streets which have to be repeated so that the total distance traversed is a minimum. This was first discussed by the Chinese mathematician Kwan (1962) and so is known as the Chinese postman problem. In this section we discuss an application of Eulerian graphs to coding theory. 
Any word with _m_ letters out of which _n_ are distinct can be associated with a weakly connected digraph _G_ with _n_ vertices and _m_ – 1 arcs such that the word represents a directed Eulerian path if the first letter and last letter are different and a directed Eulerian circuit if the first letter and the last letter are the same. For example, in the word "LETTERED" we have _m_ = 8 and _n_ = 5, and this word can be associated with the directed path from the vertex _L_ to the vertex _D_ in the digraph of Figure 5.2.1 with five vertices and seven arcs represented by L- - -E- - -T- - -T- - -E- - -R- - -E- - -D, which is a directed Eulerian path. Similarly, the word "ELECTIVE" can be associated with a directed Eulerian circuit E- - -L- - -E- - -C- - -T- - -I- - -V- - -E in the digraph in Figure 5.2.2. Notice that even though a word defines a digraph uniquely, it is possible that the same digraph can be associated with several words of equal length. FIGURE 5.2.1 FIGURE 5.2.2 In any word with _n_ distinct letters _A_ 1, _A_ 2, . . . , _A n_, let _f_ ( _A i_) be the frequency of the letter _A i_ in the word. Then the sum of the frequencies of the _n_ letters is _m_. Let _m ij_ denote the number of times _A j_ appears _immediately_ after _A i_ ; this is the number of arcs from _A i_ to _A j_ in the digraph. For example, in the word "MATHEMATICS," _m_ AT = 2, _m_ TA = 0, _m_ TH = 1, and so on. Let _M_ = ( _m ij_) be the _n_ × _n_ matrix thus defined. The row sum of row _i_ in _M_ is the outdegree of vertex _A i_, and the column sum of column _j_ is the indegree of _A j_. Thus, corresponding to each word with _n_ distinct letters, we have a frequency set of _n_ positive integers and an _n_ × _n_ matrix whose elements are nonnegative integers. 
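This construction is easy to mechanize. The following sketch (the function name and the choice of sorting the distinct letters are my own) computes the distinct letters, the frequency set, and the matrix _M_ = ( _m ij_ ) of a given word:

```python
from collections import Counter

def word_to_digraph(word):
    """Frequency set and adjacency-count matrix M of a word.

    Returns the sorted distinct letters, their frequencies, and the
    n x n matrix whose (i, j) entry counts how often letter j appears
    immediately after letter i (= number of arcs from i to j).
    """
    letters = sorted(set(word))
    index = {a: i for i, a in enumerate(letters)}
    freq = Counter(word)
    n = len(letters)
    m = [[0] * n for _ in range(n)]
    for a, b in zip(word, word[1:]):   # consecutive letter pairs = arcs
        m[index[a]][index[b]] += 1
    return letters, [freq[a] for a in letters], m
```

Row sums of the returned matrix are outdegrees and column sums are indegrees, so the assertions below can be checked mechanically for any word.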
For example, in the word "LETTERED" the five distinct letters are D, E, L, R, and T, with a frequency set {1, 3, 1, 1, 2}, and the 5 × 5 matrix (rows and columns indexed in the order D, E, L, R, T) is

D: 0 0 0 0 0
E: 1 0 0 1 1
L: 0 1 0 0 0
R: 0 1 0 0 0
T: 0 1 0 0 1

In the word "ELECTIVE" the six distinct letters are C, E, I, L, T, and V, with a frequency set {1, 3, 1, 1, 1, 1}, and the 6 × 6 matrix (rows and columns indexed in the order C, E, I, L, T, V) is

C: 0 0 0 0 1 0
E: 1 0 0 1 0 0
I: 0 0 0 0 0 1
L: 0 1 0 0 0 0
T: 0 0 1 0 0 0
V: 0 1 0 0 0 0

We now make the following easily verifiable assertions: 1. The digraph of any word is weakly connected. 2. If the first letter and the last letter are not the same, the row sum of the first letter equals the column sum of the first letter plus one, the row sum of the last letter equals the column sum of the last letter minus one, and for all other letters the row sum and the column sum are equal. 3. If the first letter and the last letter are the same, then the row sum equals the column sum for all letters and the row sum of the starting letter will be one less than its frequency. All the information in a codeword is contained in the frequency set and the matrix _M_ associated with the word. Hutchinson and Wilf (1975) study codewords in their investigation of DNA and RNA molecules. Suppose that we are given (a) a set of _n_ letters _A_ 1, _A_ 2, . . . , _A n_, (b) a set of _n_ positive integers _f_ 1, _f_ 2, . . . , _f n_, and (c) an _n_ × _n_ matrix ( _m ij_) of nonnegative integers. Does there exist a word in which _A i_ appears exactly _f i_ times and _A j_ appears immediately after _A i_ exactly _m ij_ times? The answer is "yes" if conditions corresponding to assertions (1)–(3) are satisfied because of Theorems 5.1.3 and 5.1.4. We thus have the following result. **THEOREM 5.2.1** Let _M_ = ( _m ij_) be an _n_ × _n_ matrix with nonnegative integer components and let _A i_ ( _i_ = 1, 2, . . . , _n_ ) be a set of _n_ distinct letters such that _A i_ is associated with both row _i_ and column _i_. 
Let _r i_ = sum of all the elements of row _i_ of _M_ and _c j_ = sum of all the elements of column _j_ of _M_. (a) If _r j_ = _c j_ + 1, _r k_ = _c k_ – 1, where _j_ and _k_ are distinct, and if _r i_ = _c i_ in all other cases, there exists a word beginning with _A j_ and ending in _A k_ in which the frequency of _A j_ is _r j_, the frequency of _A k_ is _c k_, and the frequency of every other letter is _r i_, which is also _c i_. Moreover, in the word, the letter _A p_ appears immediately after _A q_ exactly _m qp_ times. (b) If _r i_ = _c i_ for _i_ = 1, 2, . . . , _n_ , and if _f i_ ( _i_ = 1, 2, . . . , _n_ ) are nonnegative integers such that _r j_ = _f j_ – 1 and _r i_ = _f i_ for every _i_ other than _j_ , there exists a word that begins with _A j_ and ends with _A j_ in which _A k_ appears exactly _f k_ times and _A p_ appears immediately after _A q_ exactly _m qp_ times. For example, suppose that the distinct letters in a word are A, B, C, and D and the matrix (rows and columns indexed in the order A, B, C, D) is

A: 0 1 1 0
B: 1 0 1 0
C: 0 0 0 2
D: 1 0 0 0

First we construct a digraph _G_ with four vertices A, B, C, and D as in Figure 5.2.3. We observe: 1. The digraph is weakly connected. 2. (Row sum for B) = (column sum for B) + 1. 3. (Row sum for D) = (column sum for D) – 1. 4. Row sum = column sum for all other letters. FIGURE 5.2.3 So there is a word that starts with B and ends in D which can be represented by an Eulerian path from B to D in the digraph of Figure 5.2.3. One such word is BABCDACD. It is an easy exercise to show that if the frequency set is {4, 3, 5, 2} and the matrix (in the same indexing) is

A: 1 0 2 1
B: 1 0 1 1
C: 1 2 1 0
D: 1 1 0 0

then a word is CCBCBAADBDACAC. We conclude this section with a discussion of de Bruijn digraphs, another application of Eulerian digraphs. There are 2^( _n_ –1) binary words of length _n_ – 1. We construct a digraph with 2^( _n_ –1) vertices as follows. Let each word of length _n_ – 1 be a vertex. 
From each vertex of the form _v_ = _a_ 1 _a_ 2 · · · _a n_ –1 draw two arcs: one to _a_ 2 _a_ 3 · · · _a n_ –1 0 and the other to _a_ 2 _a_ 3 · · · _a n_ –1 1 to represent two _n_ -letter words _v_ 0 and _v_ 1, respectively. So the 2^ _n_ arcs of the digraph thus constructed represent the set of binary words of length _n_. This digraph _G_ (2, _n_ ), known as the **de Bruijn digraph** , is weakly connected and is Eulerian since the indegree of each vertex equals its outdegree. The digraph _G_ (2, 3) is as shown in Figure 5.2.4. More generally, for an alphabet of _p_ letters, _G_ ( _p_ , _n_ ) is a de Bruijn digraph with _p_^( _n_ –1) vertices and _p_^ _n_ arcs such that the indegree and outdegree of each vertex are both _p_. Thus _G_ ( _p_ , _n_ ) is Eulerian. Now consider any Eulerian circuit in this digraph; it will contain all the _p_^ _n_ arcs in a sequence. Construct the sequence of the first letters of all these words. Let us denote this sequence by _a_ 1 _a_ 2 · · · _a r_, where _r_ = _p_^ _n_. Then the _r_ distinct words of length _n_ are all of the form _a i_ _a i_ +1 · · · _a i_ + _n_ –1, where the addition operation defined on the subscript is modulo _r_. For example, if _p_ = 2 and _n_ = 3, then _a_ 9 is _a_ 1. In the digraph of Figure 5.2.4 a directed Eulerian circuit starting from 00 consists of the following sequence of eight arcs: 000, 001, 011, 111, 110, 101, 010, 100. The first letters of these arcs form the word 00011101, so that _a_ 1 = 0, _a_ 2 = 0, _a_ 3 = 0, _a_ 4 = 1, _a_ 5 = 1, _a_ 6 = 1, _a_ 7 = 0, and _a_ 8 = 1. Any three-letter word is now of the form _a i_ _a i_ +1 _a i_ +2. Thus _a_ 7 _a_ 8 _a_ 9 = _a_ 7 _a_ 8 _a_ 1 = 010, and so on. **FIGURE 5.2.4** We can formally define a de Bruijn sequence for two positive integers _p_ and _n_. 
If _S_ is any alphabet consisting of _p_ letters, then a sequence _a_ 1 _a_ 2 · · · _a r_ of _r_ ( _r_ = _p_^ _n_ ) letters is called a **de Bruijn sequence** , denoted by _B_ ( _p_ , _n_ ), if every word of length _n_ from _S_ can be realized as _a i_ _a i_ +1 · · · _a i_ + _n_ –1 ( _i_ = 1, 2, . . . , _r_ ), where the addition operation in the subscripts is modulo _r_. We are now ready to summarize our observations as a theorem. **THEOREM 5.2.2** For every pair of positive integers _p_ and _n_ there exists a de Bruijn sequence _B_ ( _p_ , _n_ ). This was first proved by de Bruijn (1946) for _p_ = 2 and later generalized for arbitrary _p_ by Good (1946). These sequences are very useful in coding theory. The state diagram of a feedback shift register (FSR) is a subgraph of a certain de Bruijn digraph. FSRs have a wide range of applications in communications, cryptography, and computer science, particularly so because of the randomness properties of the sequences they generate. Briefly speaking, if _K_ is a field (of order _q_ ), and if _f_ : _K_^ _n_ → _K_ , then an _n_ -stage FSR on _K_ transforms the vector [ _x_ 0 _x_ 1 · · · _x n_ –1] into the vector [ _x_ 1 _x_ 2 · · · _x n_ ], where _x n_ = _f_ ( _x_ 0, _x_ 1, . . . , _x n_ –1). **_5.3 HAMILTONIAN PATHS AND HAMILTONIAN CYCLES_** A path between two vertices in a graph is a **Hamiltonian path** if it passes through each vertex exactly once. A closed path that passes through each vertex exactly once and in which all the edges are distinct is a **Hamiltonian cycle**. A graph is a **Hamiltonian graph** if it has a Hamiltonian cycle. In a digraph a directed path from a vertex to another vertex is a **directed Hamiltonian path** if it passes through each vertex exactly once. A closed directed Hamiltonian path is a directed Hamiltonian cycle. 
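Verifying that a proposed closed path meets this definition is immediate, in contrast to finding one. A minimal sketch (the adjacency-set encoding and the function name are my own):

```python
def is_hamiltonian_cycle(adj, cycle):
    """Check the definition directly: `cycle` visits every vertex of the
    graph exactly once and returns to its start along edges of `adj`
    (a dict mapping each vertex to the set of its neighbors)."""
    if len(cycle) < 4 or cycle[0] != cycle[-1]:
        return False
    interior = cycle[:-1]
    if sorted(interior) != sorted(adj):   # each vertex exactly once
        return False
    # every consecutive pair must be an edge of the graph
    return all(w in adj[v] for v, w in zip(cycle, cycle[1:]))
```

For the four-cycle on vertices 1, 2, 3, 4 (edges {1, 2}, {2, 3}, {3, 4}, {4, 1}), the sequence 1, 2, 3, 4, 1 passes the test, while 1, 2, 4, 3, 1 fails because {2, 4} is not an edge.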
The adjective "Hamiltonian" is in honor of the famous Irish mathematician Sir William Rowan Hamilton (1805–1865), who investigated the existence of a solution to a game called "all around the world," in which the player is asked to find a route along the edges of a dodecahedron (a regular polyhedron with 20 vertices, 30 edges, and 12 faces) visiting each vertex exactly once and returning to the starting vertex. Now a dodecahedron can be represented as a graph _G_ on the plane (see Figure 5.3.1) with 20 vertices and 30 edges. Thus the game has a solution if and only if _G_ is a Hamiltonian graph. FIGURE 5.3.1 Even though the problem of determining the existence of Hamiltonian cycles appears similar to that of determining the existence of Eulerian circuits, it is not at all easy to tell whether a given graph is Hamiltonian in general. In contrast with the extremely tidy necessary and sufficient conditions obtained by Euler for the existence of Eulerian circuits, Hamiltonian graphs seem to defy characterization. In many cases each graph must be considered individually since no easily verified necessary and sufficient conditions are known in general. Of course, a complete graph is Hamiltonian. In other words, a graph with _n_ vertices is Hamiltonian if the degree of each vertex is _n_ – 1. The larger the degree of each vertex, the more likely it appears that the graph is Hamiltonian. So the question is this: Does there exist a positive integer _k_ ( _k_ < _n_ – 1) such that the graph is Hamiltonian whenever the degree of each vertex is at least _k_? The answer is yes, as proved by Dirac (1952), whose theorem can also be obtained as a consequence of the following theorem of Ore (1963). **THEOREM 5.3.1** A simple graph with _n_ vertices (where _n_ is at least 3) is Hamiltonian if the sum of the degrees of every pair of nonadjacent vertices is at least _n_. **_Proof:_** Suppose that a graph _G_ with _n_ vertices is not Hamiltonian. So it is a subgraph of the complete graph _K n_ with fewer edges. 
Now keep on adding edges to _G_ by joining nonadjacent vertices until we obtain a non-Hamiltonian graph _H_ such that the addition of _one more edge_ to _H_ will make it Hamiltonian. Let _x_ and _y_ be any pair of nonadjacent vertices in _H_. So they are nonadjacent in _G_ as well. Thus (deg _x_ + deg _y_ ) is at least _n_ in _H_. Since the addition of { _x_ , _y_ } as an edge to _H_ will make it Hamiltonian, there is a Hamiltonian path in _H_ between _x_ and _y_. If we write _x_ = _v_ 1 and _y_ = _v n_, then this path can be written as _v_ 1 - - - _v_ 2 - - - _v_ 3 - - - · · · - - - _v i_ –1 - - - _v i_ - - - _v i_ +1 - - - · · · - - - _v n_ –1 - - - _v n_. Notice that if _v_ 1 and _v i_ are adjacent in _H_ , then _v n_ and _v i_ –1 cannot be adjacent because if they are adjacent, we will have the following Hamiltonian cycle in _H_ : _v n_ - - - _v i_ –1 - - - · · · - - - _v_ 1 - - - _v i_ - - - · · · - - - _v n_, which is a contradiction. So if _v_ 1 has _r_ adjacent vertices from the set { _v_ 2, _v_ 3, . . . , _v n_ }, at least _r_ vertices from the set { _v_ 1, _v_ 2, . . . , _v n_ –1 } cannot be adjacent to _v n_. In that case, deg _v_ 1 = _r_ and deg _v n_ ≤ ( _n_ – 1) – _r_ and consequently, deg _v_ 1 + deg _v n_ ≤ ( _n_ – 1) < _n_ , which contradicts the hypothesis. COROLLARY (Dirac's Theorem) A simple graph with _n_ vertices ( _n_ is at least 3) is Hamiltonian if the degree of each vertex is at least _n_ /2. _Note:_ The converse of Ore's theorem is not true. For example, consider the graph of a polygon with six sides. A sufficient condition for the existence of a Hamiltonian path in a graph is as in the following result. **THEOREM 5.3.2** A simple graph with _n_ vertices has a Hamiltonian path if the sum of the degrees of every pair of nonadjacent vertices is at least ( _n_ – 1). **_Proof:_** This is an exercise. COROLLARY A simple graph with _n_ vertices has a Hamiltonian path if the degree of each vertex is at least ( _n_ – 1)/2. Just as in the case of graphs, there is no known characterization of Hamiltonian digraphs. 
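Ore's condition is easy to test directly, and the test also covers Dirac's corollary, since minimum degree _n_/2 forces every pairwise degree sum to be at least _n_. A minimal sketch (adjacency-set encoding of a simple graph assumed; the function name is my own):

```python
def satisfies_ore(adj):
    """Test Ore's sufficient condition (Theorem 5.3.1): n >= 3 and every
    pair of nonadjacent vertices has degree sum at least n.
    `adj` maps each vertex to the set of its neighbors."""
    n = len(adj)
    vs = list(adj)
    return n >= 3 and all(
        len(adj[u]) + len(adj[v]) >= n
        for i, u in enumerate(vs) for v in vs[i + 1:]
        if v not in adj[u]          # only nonadjacent pairs matter
    )
```

As the note above warns, a `False` result proves nothing: the hexagon (six-cycle) fails the test yet is Hamiltonian, since the condition is sufficient but not necessary.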
In fact, the situation becomes more complex. We state here a few sufficient conditions for the existence of directed Hamiltonian cycles and paths in simple digraphs which are more or less similar to the results in the case of graphs. See Behzad et al. (1979) for proofs. **THEOREM 5.3.3** (a) A strongly connected digraph with _n_ vertices is a Hamiltonian digraph if (deg _u_ + deg _v_ ) is at least 2 _n_ – 1 for every pair of vertices _u_ and _v_ such that there is no arc from _u_ to _v_ and from _v_ to _u_. (b) A digraph with _n_ vertices is Hamiltonian if (outdegree _u_ + indegree _v_ ) is at least _n_ for every pair of vertices _u_ and _v_ such that there is no arc from _u_ to _v_. (c) A strongly connected digraph with _n_ vertices is Hamiltonian if (outdegree _v_ + indegree _v_ ) is at least _n_ for every vertex _v_. (d) A digraph with _n_ vertices is Hamiltonian if both the outdegree and indegree of each vertex is at least _n_ /2. **THEOREM 5.3.4** (a) If (degree _u_ + degree _v_ ) is at least 2 _n_ – 3 for every pair of vertices _u_ and _v_ such that there is no arc from one to the other in a digraph _G_ , then _G_ has a directed Hamiltonian path. (b) If (outdegree _u_ + indegree _v_ ) is at least ( _n_ – 1) for every pair of vertices such that there is no arc from _u_ to _v_ in a digraph _G_ , then _G_ has a directed Hamiltonian path. (c) If (outdegree _v_ + indegree _v_ ) is at least ( _n_ – 1) for every vertex _v_ in a digraph with _n_ vertices, the digraph has a directed Hamiltonian path. (d) If both the outdegree and indegree of each vertex is at least ( _n_ – 1)/2 in a digraph _G_ , then _G_ has a directed Hamiltonian path. **_Hamiltonian-Connected Graphs_** A graph is said to be **Hamiltonian-connected** if there is a Hamiltonian path between every pair of vertices in it. Obviously any Hamiltonian-connected graph with three or more vertices is necessarily a Hamiltonian graph. 
The converse is not true: In the Hamiltonian graph _G_ = ( _V_ , _E_ ) where _V_ = {1, 2, 3, 4} and _E_ is the set {{1, 2}, {2, 3}, {3, 4}, {4, 1}} there is no Hamiltonian path between the vertex 2 and the vertex 4. The following result due to Ore (1963) parallels the result of Theorem 5.3.1. **THEOREM 5.3.5** If _G_ is a simple graph with _n_ vertices ( _n_ is at least 3) such that for all distinct nonadjacent vertices _i_ and _j_ , (degree of _i_ ) + (degree of _j_ ) exceeds _n_ , then _G_ is Hamiltonian-connected. For more details on Hamiltonian-connected graphs, see the papers by Chartrand et al. (1969) and Lick (1970). **_5.4 APPLICATIONS OF HAMILTONIAN CYCLES_** Hamiltonian paths and cycles have several useful and interesting applications. We discuss some of them here. **Example 5.4.1 (The Traveling Salesman Problem)** In Example 4.1.3 we introduced the traveling salesman problem (TSP), in which a salesman has to make an itinerary visiting each city on the tour exactly once and returning to the starting point. Any such tour is a Hamiltonian cycle. Assuming that such a cycle exists, the optimization problem then is to find such a tour for which the total cost (or for that matter, total distance) is a minimum. TSP is one of the best-known examples of a class of problems that are easy to state but very difficult to solve. In general, there is no known efficient procedure for finding a solution to the problem. If we adopt the exhaustive enumeration method in which we list all the ( _n_ – 1)! directed Hamiltonian cycles (in the worst case) in a digraph with _n_ vertices, and compute the cost for each cycle by performing _n_ additions, we will be doing _n_ ( _n_ – 1)! additions. If _n_ = 20 and a computer can do 1 million additions per second, this method will take about 75,000 years. See Held and Karp (1970) for a discussion of the complexity considerations involved in this problem. **Example 5.4.2 (Scheduling)** Consider a machine shop with _n_ different machines. 
A job has to be run through all these machines but not in any particular order. Let each machine represent a vertex of a digraph. Draw an arc from each vertex to every other vertex. Then any directed Hamiltonian path in the digraph is a schedule. If _c ij_ is the setup time required whenever the job goes from machine _i_ to machine _j_ , the optimization problem is to find a schedule that takes the least amount of time. **Example 5.4.3** In Example 4.1.4 we defined a tournament as a simple digraph in which for every pair of vertices _v_ and _w_ either there is an arc from _v_ to _w_ or from _w_ to _v_ but not both. In other words, the underlying graph of a tournament is complete and ( _v_ , _w_ ) is an arc in the tournament if and only if ( _w_ , _v_ ) is not an arc. If the vertices denote the different players, the existence of an arc from _v_ to _w_ indicates that _v_ defeats _w_ in the game. The following questions arise: (1) Is it possible to rank all the players as a sequence _u_ 1, _u_ 2, . . . , _u n_ such that _u i_ defeats _u i_ +1 ( _i_ = 1, 2, . . . , _n_ – 1)? In other words, does the digraph have a directed Hamiltonian path? (2) If such a path exists, is it unique? (3) Is there a necessary and sufficient condition to be satisfied by the digraph so that the path is unique, implying that the ranking is also unique? The answers are: (1) yes, (2) no, and (3) yes. Before we justify these assertions, let us consider the two tournaments (a) and (b), as in the digraphs of Figure 5.4.1. We see that both the tournaments have Hamiltonian paths. In (a) we have four different Hamiltonian paths, whereas in (b) we have only one. What makes these two different? Here is a definition to resolve this. A tournament is said to be **transitive** if whenever _u_ defeats _v_ and _v_ defeats _w_ , then _u_ defeats _w_. Notice that a tournament is transitive if and only if it has no directed cycles with three arcs. 
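Question (1) has a constructive answer: a directed Hamiltonian path can be built by inserting the players one at a time, which is the induction behind Theorem 5.4.1 below. A minimal sketch, with the tournament supplied as a `beats` predicate of my own choosing:

```python
def tournament_ranking(n, beats):
    """Directed Hamiltonian path in a tournament by insertion.
    `beats(u, v)` is True when there is an arc from u to v;
    the players are 0 .. n-1."""
    path = [0]
    for v in range(1, n):
        # insert v before the first player it beats; if v beats nobody
        # on the path, every one of them beats v, so v goes last
        for k, u in enumerate(path):
            if beats(v, u):
                path.insert(k, v)
                break
        else:
            path.append(v)
    return path
```

Each insertion keeps the path valid: the new player beats its successor, and (since the tournament has an arc in exactly one direction between any two players) is beaten by its predecessor.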
Even though "transitivity" appears to be normal in most situations in life, in the real world most tournaments are not transitive. Maybe this is one reason they are exciting. In our illustration, (b) is transitive but (a) is not. Our first theorem on tournaments is due to Redei (1934). **THEOREM 5.4.1** Every tournament _G_ has a directed Hamiltonian path. **_Proof:_** This is an immediate consequence of Theorem 5.3.4(c). However, an independent proof using induction on the number _n_ of vertices is along the following lines. The theorem is true when _n_ = 2. Suppose that it is true for _n_. Now consider any tournament _G′_ with ( _n_ + 1) vertices. Let _v_ be any arbitrary vertex of _G′_. Now consider the subgraph _G_ of _G′_ obtained from _G′_ by deleting _v_ and all arcs from _v_ and to _v_. Obviously, _G_ is a tournament with _n_ vertices and so it has a directed Hamiltonian path _v_ 1 - - -> _v_ 2 - - -> · · · - - -> _v i_ - - -> · · · - - -> _v n_. If there is an arc in _G′_ from _v_ to _v_ 1 or from _v n_ to _v_ , then _G′_ has a directed Hamiltonian path and we are done. Otherwise, let _i_ be the largest integer such that there is no arc from _v_ to _v i_. So there is an arc from _v i_ to _v_. Now our choice of _i_ is such that there is no arc from _v i_ +1 to _v_ , implying that there is an arc in the opposite direction, and consequently we have the directed Hamiltonian path in _G′_ as follows: _v_ 1 - - -> · · · - - -> _v i_ - - -> _v_ - - -> _v i_ +1 - - -> · · · - - -> _v n_, showing that the result is true for ( _n_ + 1). FIGURE 5.4.1 COROLLARY A transitive tournament has a unique directed Hamiltonian path. **_Proof:_** If _P_ and _P′_ are two distinct directed Hamiltonian paths, there is a pair of vertices _x_ and _y_ such that there is a path from _x_ to _y_ in _P_ and a path from _y_ to _x_ in _P′_. So, by transitivity, there is an arc from _x_ to _y_ in _P_ and an arc from _y_ to _x_ in _P′_. This is a contradiction. 
So _P_ = _P′_. We conclude our discussion of tournaments with the following theorem on the existence of unique directed Hamiltonian paths in tournaments. See Roberts (1976) for a proof. **THEOREM 5.4.2** In a tournament _G_ the following properties are equivalent: (a) _G_ has a unique directed Hamiltonian path. (b) _G_ has no directed cycles of length 3. (c) _G_ is acyclic. (d) _G_ is transitive. **_5.5 VERTEX COLORING AND PLANARITY OF GRAPHS_** A graph is said to be **colored** if each vertex is assigned a color such that no two adjacent vertices have the same color. If such an assignment of colors is possible using at most _k_ colors, the graph is **k-colorable**. The smallest value of _k_ such that a graph _G_ is _k_ -colorable is the **chromatic number** of _G_. The chromatic number of a graph is 1 if and only if it has no edges. The chromatic number of a complete graph with _n_ vertices is of course _n_ , and the chromatic number of a bipartite graph with at least one edge is 2. In particular, the chromatic number of a tree with at least one edge is 2. Any cycle with _p_ vertices is 2-colorable if and only if _p_ is even. Consequently, if a graph _G_ has an odd cycle (i.e., a cycle with an odd number of vertices), _G_ is not 2-colorable. On the other hand, if there is no odd cycle in a graph _G_ , the graph is 2-colorable. This is obvious if _G_ is a tree because a tree is acyclic. More generally, assume that _G_ is a connected graph with no odd cycles. Start from any vertex _v_ and apply a breadth first search (BFS) procedure to obtain a BFS tree as follows: _v_ is at level 0. All vertices adjacent to _v_ are at level 1. We partition the vertices of the graph into sets of vertices at various levels. Let _v i_ 1, _v i_ 2, . . . , _v ir_ be the vertices at level _i_. Consider all vertices adjacent to _v i_ 1 that are not in levels 0, 1, 2, . . . , _i_. Put these vertices in a new level ( _i_ + 1). Then consider all vertices that are adjacent to _v i_ 2 but not in levels 0, 1, 2, . . . , _i_ , ( _i_ + 1). 
Include these vertices in level ( _i_ + 1). Continue this process until all the vertices are examined. We now assign two colors to the vertices: one color to all the vertices at the odd levels and another color to all the vertices at the even levels. In Figure 5.5.1 we have a graph with no odd cycles and a BFS tree starting from vertex 3 is as in Figure 5.5.2. FIGURE 5.5.1 FIGURE 5.5.2 In the BFS procedure in the worst case we have to examine all the _n_ vertices and all the _m_ edges. So the worst-case complexity is _n_ + _m_. Since _m_ is at most _n_ ( _n_ – 1)/2, the complexity is at most _n_ ( _n_ + 1)/2. From an earlier exercise we know that a graph is bipartite if and only if it has no odd cycles. Thus we have the following theorem to characterize the 2-colorability of graphs. **THEOREM 5.5.1** In a connected graph _G_ , the following are equivalent: (a) _G_ is bipartite. (b) _G_ is 2-colorable. (c) _G_ has no odd cycles. There is no known characterization for the _k_ -colorability of graphs when _k_ > 2. In general, it is a hard problem to compute the chromatic number of an arbitrary graph. No efficient algorithm is known that always gives a coloring pattern using the fewest possible colors. However, there are algorithms to color a given graph that "approximate" the best coloring in the sense that they may sometimes use more colors than are absolutely necessary. Here is an algorithm, known as the **largest first algorithm** , because it assigns colors to the vertices with the largest degrees first. First we order the vertices according to nonincreasing degrees. Use the first color to color the first vertex and then color, in sequential order, each vertex that is not adjacent to a previously colored vertex of the same color. Repeat this process using the second color for the subsequence of uncolored vertices. Continue this process until all vertices are colored. Let us make use of the largest first algorithm to obtain a coloring for the graph in Figure 5.5.3. 
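The largest first procedure translates directly into code; a minimal sketch (the adjacency-set encoding and function name are my own):

```python
def largest_first_coloring(adj):
    """Greedy coloring in order of nonincreasing degree (largest first).
    `adj` maps each vertex to its neighbor set; returns a dict mapping
    each vertex to a color number 0, 1, 2, ..."""
    order = sorted(adj, key=lambda v: len(adj[v]), reverse=True)
    color = {}
    c = 0
    while len(color) < len(adj):
        for v in order:
            # give color c to each still-uncolored vertex having no
            # neighbor already assigned color c
            if v not in color and all(color.get(w) != c for w in adj[v]):
                color[v] = c
        c += 1          # next pass uses the next color
    return color
```

As the text warns, the result is a proper coloring but not necessarily an optimal one; the number of colors used depends on the degree ordering and on how ties are broken.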
The nine vertices are labeled as _i_ = 1, 2, 3, . . . , 9, where the degree of _i_ is greater than or equal to the degree of ( _i_ + 1). First assign color 1 to vertex 1. Now vertices 8 and 9 are not adjacent to vertex 1. So assign color 1 to vertex 8. We see that vertex 9 is not adjacent to vertex 8. So assign color 1 to vertex 9 as well. Thus vertices 1, 8, 9 are assigned color 1. The remaining vertices are 2, 3, 4, 5, 6, 7. Assign color 2 to vertex 2. The _uncolored_ vertices not adjacent to vertex 2 are 3, 4, 6. Assign color 2 to vertex 3. We notice that vertices 3 and 4 are adjacent. So vertex 4 remains uncolored. Similarly, vertex 6 remains uncolored. Thus vertices 2 and 3 are colored with color 2. FIGURE 5.5.3 The remaining uncolored vertices are 4, 5, 6, 7. Assign color 3 to vertex 4 and then to vertex 5 and vertex 7. Finally, we are left with vertex 6, to which we assign color 4. Thus the graph of Figure 5.5.3 does not need more than four colors. But three colors will do the job. Assign color red to 1, color blue to 3, 5, 7, 8, and color green to 2, 4, 6, 9. The idea of coloring a graph arises naturally in many scheduling problems. Suppose that the computer science department in a university has decided to offer a certain number of graduate courses and there are a certain number of time periods during which these courses can be offered. In scheduling these courses the department has to avoid conflicts. If a graduate student is interested in taking two of these courses, these courses have to be scheduled at different times. We construct a graph model of this scheduling problem as follows: Let each vertex represent a course. Join two vertices by an edge if the courses corresponding to these two vertices cannot be offered at the same time. If the resulting graph is _k_ -colorable, the courses can be scheduled using _k_ time periods. **_Planarity of Graphs_** A graph is called a **planar graph** if it can be drawn so that no two edges intersect except at a vertex. 
A planar graph drawn on a plane so that no two edges intersect is a **plane graph**. The two-dimensional regions defined by the edges in a plane graph are its **faces**, and the vertices and the various edges define the **boundaries** of these faces. In the plane graph of Figure 5.5.4 there are five vertices, seven edges, and four faces. In this plane graph, _F_ 1, _F_ 2, and _F_ 3 are the **interior faces**, and the unbounded region _F_ 4 is the **exterior face**. The boundary of _F_ 1 is the cycle 1- - - - -2- - - - -3- - - - -1 and the boundary of _F_ 4 is 1- - - - -2- - - - -5- - - - -2- - - - -4- - - - -3- - - - -1, in which the last edge is _e_. FIGURE 5.5.4 The classical result (proved in 1750) connecting the numbers of vertices, edges, and faces is the following theorem of Euler. THEOREM 5.5.2 If a connected plane graph has _n_ vertices and _m_ edges, then the number of faces is _p_, where _n_ – _m_ + _p_ = 2. **_Proof:_** We use induction on _m_. When _m_ = 0, we have _n_ = 1 and _p_ = 1 and the result is true. Suppose that the result is true when _m_ = _k_ – 1. Consider any connected plane graph _G_ with _n_ vertices, _k_ edges, and _p_ faces. We wish to show that _n_ – _k_ + _p_ = 2. This is certainly true if the graph is a tree, for then _k_ = _n_ – 1 and _p_ = 1. If it is not a tree, let _e_ be any edge of a cycle. If we delete _e_, we still have a connected plane graph, now with _n_ vertices, (_k_ – 1) edges, and (_p_ – 1) faces, since deleting an edge from a cycle makes two faces coalesce into one. By our induction hypothesis, _n_ – (_k_ – 1) + (_p_ – 1) = 2, so _n_ – _k_ + _p_ = 2, as we wished to prove. We now establish another useful result, which is an immediate consequence of what we have just proved. THEOREM 5.5.3 A simple connected planar graph with _n_ vertices (_n_ at least 3) has at most (3_n_ – 6) edges. **_Proof:_** If _n_ = 3, the number of edges is at most three, which equals 3_n_ – 6. So let _n_ be greater than 3. We draw the plane graph with faces _F_ 1, _F_ 2, . . . , _F p_.
Let _r i_ be the number of edges that bound the face _F i_. Then _r i_ is at least three for each _i_. So 3_p_ ≤ (_r_ 1 + _r_ 2 + · · · + _r p_). Now, in counting the total number of edges in the boundaries, each edge is counted at most twice. Thus the right side of the inequality above is at most 2_m_, where _m_ is the number of edges in the graph. Hence 3_p_ is at most 2_m_. But by Theorem 5.5.2 we know that _p_ = 2 – _n_ + _m_. Substituting, 3(2 – _n_ + _m_) ≤ 2_m_, which gives _m_ ≤ 3_n_ – 6, and the result follows. We use this theorem to establish the nonplanarity of some famous graphs. If a simple graph with _n_ vertices has more than (3_n_ – 6) edges, it is nonplanar. The complete graph _K_ 5 has five vertices and 10 edges, and 10 > 3 · 5 – 6 = 9, so it is not planar. We can establish the nonplanarity of the complete bipartite graph _K_ 3,3 by contradiction. For this graph we know that _n_ = 6 and _m_ = 9. So if it is planar, it should have exactly five faces according to Theorem 5.5.2. Recall that a bipartite graph has no odd cycles, so every face is bounded by at least four edges, and the boundaries of these five faces together involve at least 20 edge incidences. Each edge is counted at most twice, so the graph should have at least 10 edges. But there are only 9, a contradiction. _Thus any graph that has K_ 5 _or K_ 3,3 _as a subgraph is nonplanar_. It turns out that every nonplanar graph contains one of these two as a subgraph in a certain sense, which we now make precise. Two graphs are said to be **homeomorphic** (or identical to each other to within vertices of degree 2) if they both can be obtained from the same graph _G_ by introducing new vertices of degree 2 on its edges. For example, the two graphs (a) and (b) in Figure 5.5.5 are homeomorphic. Notice that insertion or deletion of vertices of degree 2 on edges does not affect considerations of planarity. We now state the following celebrated theorem of Kuratowski (proved in 1930), which gives a necessary and sufficient condition for a graph to be planar. See Bondy and Murty (1976) for a proof.
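The two edge-count tests used above (_m_ ≤ 3_n_ – 6 in general; _m_ ≤ 2_n_ – 4 for a bipartite planar graph, since every face is then bounded by at least four edges) give a quick necessary condition for planarity. A Python sketch, not from the text:

```python
def may_be_planar(n, m, bipartite=False):
    """Necessary (not sufficient) planarity test by edge count alone.
    Returns False only when the counts already rule out planarity."""
    if n < 3:
        return True
    if bipartite:
        return m <= 2 * n - 4   # faces bounded by >= 4 edges, via Euler
    return m <= 3 * n - 6       # faces bounded by >= 3 edges, via Euler

# K5: n = 5, m = 10 and 10 > 3*5 - 6 = 9, so K5 is nonplanar.
# K3,3: n = 6, m = 9 passes the general bound (9 <= 12), but as a
# bipartite graph it fails 9 <= 2*6 - 4 = 8, so K3,3 is nonplanar too.
```

Passing this test proves nothing by itself; Kuratowski's theorem gives the full characterization.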
FIGURE 5.5.5 **THEOREM 5.5.4** A graph is planar if and only if it contains no subgraph that is homeomorphic to _K_ 5 or _K_ 3,3. Finally, a note on map coloring. When we color the different nations in a geographical map, two countries with a common border cannot have the same color. The map coloring problem then is to color a given map with as few colors as possible. No one has ever found a map that needs more than four colors. For more than 100 years it was conjectured that no map needs more than four colors, but no correct proof was forthcoming. Finally, in 1976, this four-color conjecture was settled. For a discussion, see Appel and Haken (1976). Now given a geographical map we can construct a planar graph as follows. Consider each country as a vertex. Join two vertices by an edge if the two countries corresponding to these two vertices have a common border. The minimum number of colors required to color the map is the chromatic number of the graph thus constructed. Every map gives rise to a planar graph, and vice versa. Thus the four-color theorem can be reformulated as the following important theorem in graph theory. **THEOREM 5.5.5** The chromatic number of a planar graph cannot exceed four. **_5.6 NOTES AND REFERENCES_** Any book on graph theory will be a good reference for Eulerian and Hamiltonian graphs. The books by Behzad et al. (1979), Bondy and Murty (1976), Chartrand (1977), Deo (1974), Gibbons (1985), Gondran and Minoux (1984), Harary (1969a), Ore (1963), Roberts (1976, 1978), and Wilson (1979) are some of the standard ones. For the discussion of graph algorithms some excellent references are the books by Aho et al. (1983), Baase (1978), Even (1979), Gondran and Minoux (1984), Lawler (1976), and Reingold et al. (1977). The books by Minieka (1978), Papadimitriou and Steiglitz (1982), and Syslo et al. (1983) contain elaborate discussion of some well-known graph algorithms. 
Applications of graph theory to coding theory, operations research, computer science, and chemistry are presented in Deo (1974). For additional details on feedback shift registers (mentioned in Section 5.2), refer to the books by Golomb (1967) and Ronse (1982). The survey article by Ralston (1982) on de Bruijn sequences while demonstrating the connection between coding theory and graph theory also shows how different areas of discrete mathematics impinge on computer science. The survey paper by Bellmore and Nemhauser (1968) on the Traveling Salesman Problem is a good introductory reading, and the book on the same topic by Lawler et al. (1985) is a complete and systematic study of this celebrated topic. For a more general discussion on tournaments, see the book by Moon (1968). Some excellent references on vertex coloring in graphs are the relevant chapters in the books by Behzad et al. (1979), Berge (1962), Chartrand (1977), Grimaldi (1985), Gould (1988), Harary (1969a), Liu (1985), Roberts (1976, 1978, 1984), and Wilson (1979). Comparable coverage of the material presented in this chapter is contained in Chapter 3 (Sections 3 and ) and Chapter 6 (Section 1) of Roberts (1984). See the papers by Birkhoff and Lewis (1960) and Read (1968) for additional reading on chromatic numbers. The famous four-color conjecture originated in 1852 and was solved 124 years later when Appel and Haken (1976) demonstrated that every planar map can be colored with four or fewer colors. See the paper by Haken (1977) for a description of the proof of this theorem. Since their proof depended on dividing the problem into several cases depending on the arrangement of the countries in the map and analyzing the various colorings of these arrangements by writing computer programs, there is a controversy over the nature of this proof. See the paper by Tymoczko (1980) for some philosophical underpinnings regarding this controversy. 
Is there a purely mathematical proof without using any computer analysis showing that every map can be colored with four or fewer colors? This is still an open problem. An interesting historical account of this celebrated conjecture prior to its proof can be found in May (1965) and Harary (1969b). **_5.7 EXERCISES_** **5.1.** Find an Eulerian circuit in the graph of Figure 5.7.1. FIGURE 5.7.1 **5.2.** Prove that a graph is Eulerian if and only if its set of edges can be partitioned into cycles. **5.3.** Show that a weakly connected digraph with an Eulerian circuit is strongly connected. **5.4.** Show that a weakly connected digraph with an Eulerian path is unilaterally connected. **5.5.** Can there be a bridge in an Eulerian graph? **5.6.** Can there be a bridge in a graph which has an Eulerian path? **5.7.** Construct a word with four letters A, B, C, and D using the following matrix: **5.8.** Construct a word with four letters A, B, C, and D with the frequency set {2, 1, 2, 4} using the following matrix: **5.9.** Draw the de Bruijn digraph _G_ ( _p, n_ ) and obtain a de Bruijn sequence _B_ ( _p, n_ ) when **(a)** _p_ = 3, _n_ = 2, and **(b)** _p_ = 3, _n_ = 3. **5.10.** Draw a connected graph with five vertices that is Eulerian but not Hamiltonian. **5.11.** Draw a connected graph with four vertices that is Hamiltonian but not Eulerian. **5.12.** Draw a connected graph with four vertices that is both Eulerian and Hamiltonian. **5.13.** Draw a connected graph with four vertices that is neither Eulerian nor Hamiltonian. **5.14.** Prove Theorem 5.3.2. **5.15.** Prove that a bipartite graph with an odd number of vertices is non-Hamiltonian. **5.16.** If there is a Hamiltonian path from vertex _i_ to vertex _j_ in a digraph _G_ , then _i_ is a "winner" and _j_ is a "loser" in _G_. Construct a tournament with five players such that **(a)** each player can be a winner as well as a loser, **(b)** there is a unique winner and a unique loser. 
**5.17.** Is every tournament unilaterally connected? What can you say about the converse? Justify your answers. **5.18.** Use the largest first algorithm to color the vertices in the graph of Figure 5.7.2. FIGURE 5.7.2 **Trees and Their Applications** **_6.1 DEFINITIONS AND PROPERTIES_** A connected graph with no cycles is a **tree**. The tree is one of the most widely studied discrete structures. Trees are especially suited to represent hierarchical structures, addresses, and labels, and special types of trees are used in coding theory and in searching. We will be seeing some of these applications in this chapter. Before doing so we shall establish a few results pertaining to the characterization of trees. 1. _In a tree there is a unique simple path between every pair of vertices_. We prove this assertion as follows. Let _u_ and _v_ be two vertices in a tree _T_. Since _T_ is connected there is a path between _u_ and _v_, and therefore there is a simple path between them. If possible, let _P_ and _P′_ be two simple paths between them. If the two paths are not the same, there is an edge in _P_ that is not in _P′_. Let us assume that _e_ is the first edge that we come across, going from _u_ to _v_ along _P_, that is in _P_ but not in _P′_; say _P_ = _u_, . . . , _u i_, _u i_+1, . . . , _v_ and _P′_ = _u_, . . . , _u i_, _v i_+1, . . . , _v_, where _e_ = {_u i_, _u i_+1}. Let _W_ be the set of intermediate vertices in _P_ between _u i_+1 and _v_, and let _W′_ be the set of intermediate vertices in _P′_ between _v i_+1 and _v_. If _W_ and _W′_ have no elements in common, then we get a cycle starting from _u i_, going through all the vertices in _W_, vertex _v_, and then all the vertices in _W′_. On the other hand, if _W_ and _W′_ have a common vertex, let _r_ be the least subscript of a vertex _u r_ in _P_ such that _u r_ is in _W′_. So none of the intermediate vertices in _P_ between _u i_ and _u r_ is in _P′_. Then we have a cycle starting from _u i_ which goes through all the vertices in _W_ up to _u r_ and then all the vertices in _W′_ from _u r_ back to _u i_.
Thus the existence of two distinct simple paths between two vertices implies the existence of a cycle. By definition, a tree is acyclic. So there is a unique simple path between every pair of vertices in a tree. 2. _Conversely, if there is a unique simple path between every pair of vertices in a graph G, then G is a tree_. Suppose that _G_ is not a tree. Then there is at least one cycle _C_ in _G_, which implies that between any two vertices in _C_ there are two simple paths, and this is a contradiction. 3. In a tree _T_, an edge between two vertices _v_ and _w_ is the unique path between them, and if we delete this edge from _T_, then _T_ is no longer connected. _In other words, every edge in a tree is a bridge_. 4. _Conversely, if G is a connected graph such that every edge is a bridge, then G is a tree_. Suppose that _G_ is not a tree and let _C_ be a cycle in _G_. Let _e_ be any edge in _C_. Let _G′_ be the subgraph of _G_ after deleting _e_. Since _e_ is a bridge, _G′_ is no longer connected. Let _p_ and _q_ be any two vertices in _G_. There is a path _P_ between _p_ and _q_ in _G_. If _P_ does not contain _e_, then _P_ is a path between _p_ and _q_ in the (disconnected) graph _G′_ as well. On the other hand, if _e_ = (_v_, _w_) is an edge in _P_ that is also in the cycle _C_ which starts from the vertex _t_, we have the following path in _G′_ between _p_ and _q_: _p_ . . . . . _v_ . . . . . _t_ . . . . . _w_ . . . . . _q_ In other words, there is a path between every pair of vertices in _G′_, and this contradicts the fact that _G′_ is not connected. 5. _A tree T with n vertices has_ (_n_ – 1) _edges_. We prove this by induction on _n_. This is true when _n_ = 1. Suppose that it is true for all _m_, where 1 ≤ _m_ < _n_. Let _e_ = {_u_, _w_} be an edge in _T_. Since _T_ is a tree, _e_ is a bridge. Delete _e_ to obtain the subgraph _T′_, which has two connected components, _H_ and _H′_. Both _H_ and _H′_ are trees, with _k_ and _k′_ vertices, respectively.
Now _k_ and _k′_ are positive integers whose sum is _n_. So they both are less than _n_. By our induction hypothesis _H_ has (_k_ – 1) edges and _H′_ has (_k′_ – 1) edges, and together they have _k_ + _k′_ – 2 = _n_ – 2 edges. So _T′_ has (_n_ – 2) edges, and consequently _T_, which also contains the edge _e_, has (_n_ – 1) edges. 6. The converse of (5) is true: _any connected graph G with n vertices and_ (_n_ – 1) _edges is a tree_. For if _G_ = (_V_, _E_) is not a tree, there is an edge _e_ that is not a bridge. We delete _e_ to get a connected subgraph _G′_ = (_V_, _E′_). Continue thus until we get a subgraph _H_ = (_V_, _F_) in which every edge is a bridge. By (4), _H_ is a tree and therefore has (_n_ – 1) edges. Let _k_ (> 0) be the number of edges removed from _G_ in this process. We see that after deleting _k_ edges from (_n_ – 1) edges we are left with (_n_ – 1) edges, which is absurd. So _G_ must be a tree. 7. Our next assertion is _that any acyclic graph G_ = (_V_, _E_) _with n vertices and_ (_n_ – 1) _edges is connected and therefore a tree_. Suppose that _G_ is not connected. Let the components of _G_ be _G i_ (with _n i_ vertices) for _i_ = 1, 2, . . . , _r_. Now each component _G i_ is acyclic and connected and therefore a tree with (_n i_ – 1) edges. Thus the total number of edges in _G_ is _n_ 1 + _n_ 2 + · · · + _n r_ – _r_, which is equal to _n_ – _r_. But _G_ has exactly _n_ – 1 edges. So _r_ = 1. That is, _G_ has exactly one component and is therefore connected. 8. _Let G be any graph with n vertices. If any two of the following statements are true, then the third is also true:_ (a) _G is connected_, (b) _G is acyclic, and_ (c) _G has_ (_n_ – 1) _edges_. 9. Let _T_ be any tree. Join any two nonadjacent vertices _v_ and _w_ by a new edge, resulting in a graph _G_. Then _G_ has exactly one cycle, consisting of the new edge and the unique simple path in _T_ between _v_ and _w_.
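Assertions (5)–(8) give a cheap computational test: a graph on _n_ vertices is a tree exactly when it has _n_ – 1 edges and is connected. A Python sketch (vertices assumed to be 0, . . . , _n_ – 1; not from the text):

```python
def is_tree(n, edges):
    """By assertion (8): connectedness plus exactly n - 1 edges
    forces acyclicity, so together they characterize a tree."""
    if len(edges) != n - 1:
        return False
    adj = {v: [] for v in range(n)}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    seen = {0}                  # check connectedness starting from vertex 0
    stack = [0]
    while stack:
        u = stack.pop()
        for w in adj[u]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == n
```

The edge count is checked first because it is the cheaper of the two conditions.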
On the other hand, if _G_ is an acyclic graph such that whenever any two nonadjacent vertices are joined by a new edge the resulting graph has exactly one cycle, then _G_ is a tree. The proof is by contradiction. Suppose that _G_ is not a tree. Then it is not connected. So there is a pair of vertices _p_ and _q_ in _G_ such that there is no path between them and so the addition of the new edge { _p, q_ } does not create a cycle. 10. Let _G_ be any connected graph such that whenever two nonadjacent vertices are joined by a new edge, the resulting graph has exactly one cycle. Then _G_ is acyclic and therefore a tree. We now summarize this relentless sequence of assertions as three theorems. **THEOREM 6.1.1** The following are equivalent in a simple graph _G_ : (a) _G_ is connected and acyclic. (b) _G_ is _connected_ and the number of edges in _G_ is one less than the number of vertices in it. (c) _G_ is _acyclic_ and the number of edges in _G_ is one less than the number of vertices in it. (d) _G_ is connected and every edge is a bridge. (e) There is a unique simple path between every pair of vertices in _G_. (f) _G_ is _acyclic_ and if any two nonadjacent vertices are joined to construct _G′_ , then _G′_ has exactly one cycle. (g) _G_ is _connected_ and if any two nonadjacent vertices are joined to construct _G′_ , then _G′_ has exactly one cycle. DEFINITION **6.1.1** A subgraph _T_ of a graph _G_ with _n_ vertices is a **spanning tree** in the graph if (a) _T_ is a tree and (b) _T_ has _n_ vertices. **THEOREM 6.1.2** A graph _G_ is connected if and only if it has a spanning tree. **THEOREM 6.1.3** Let _G_ be a simple graph with _n_ vertices. If a subgraph _H_ with _n_ vertices satisfies any two of the following three properties, then it satisfies the third as well. (a) _H_ is connected. (b) _H_ has ( _n_ – 1) edges. (c) _H_ is acyclic. (Notice that Theorem 6.1.3 characterizes spanning trees in a graph. 
In Chapter 4 we used the depth-first search procedure to obtain a spanning tree in a connected simple graph.) **_6.2 SPANNING TREES_** In Chapter 4 we mentioned that graph theory was "born" in 1736. In the same vein one could say that trees were first used by G. R. Kirchhoff (1824–1887) in 1847 in his work on electrical networks. Analysis of an electrical network actually reduces to finding all spanning trees of the graph under consideration. Spanning trees also form the basis for a large number of problems in network optimization. Some of these problems are taken up in Chapter 7. Even though Kirchhoff used trees in his analysis, it was Arthur Cayley (1821–1895) who, a decade later, used trees systematically in his attempts to enumerate the isomers of the saturated hydrocarbons (compounds of the form C_kH_(2k+2)), each of which can be represented as a connected graph with 3_k_ + 2 vertices (one for each carbon atom C and one for each hydrogen atom H). Since the valences of C and H are 4 and 1, the sum of all the degrees is 4_k_ + (2_k_ + 2), and therefore there are 3_k_ + 1 edges. Thus the graph under consideration is, in fact, a tree. In other words, the graph _T_ of a hydrocarbon with _k_ carbon atoms is a spanning tree in a graph with 3_k_ + 2 vertices such that the degree of each C vertex is 4 and the degree of each H vertex is 1. The natural question to ask is: How many distinct hydrocarbons can exist for a given value of _k_? In this context Cayley proved a theorem on the number of spanning trees in a graph. A tree with _n_ vertices is called a **labeled tree** if each vertex is assigned a unique label _i_, where _i_ is a positive integer between 1 and _n_. Two labeled trees are _distinct_ if their edge sets are different. For example, 1- - -2- - -3, 1- - -3- - -2, and 2- - -1- - -3 are three distinct labeled trees when _n_ = 3, whereas 1- - -2- - -3 and 3- - -2- - -1 are not distinct labeled trees.
**THEOREM 6.2.1** The number of distinct labeled trees with _n_ vertices is _n_^(_n_–2) (where _n_ is at least 2). **_Proof:_** Let _N′_ be the set of all (_n_ – 2)-tuples over _N_ = {1, 2, . . . , _n_}. Each element in _N′_ has _n_ – 2 components and each component can be chosen in _n_ ways. So the cardinality of _N′_ is _n_^(_n_–2). Our theorem is proved if we establish a one-to-one correspondence between _N′_ and the set of distinct labeled trees with _n_ vertices. Let _T_ be any labeled tree with _n_ vertices, and let _W_ be the set of vertices in _T_ of degree 1. (A vertex of degree 1 in a tree is a **leaf**.) _W_ has at least two and at most _n_ – 1 elements. Arrange the elements of _W_ in increasing order and let _w_ 1 be the first element in _W_. Let _s_ 1 be the unique vertex adjacent to _w_ 1. Next, let _T′_ be the tree obtained by deleting _w_ 1 from _T_, and let _W′_ be the set of vertices of degree 1 in _T′_, arranged in increasing order. If _w_ 2 is the first element in _W′_, we take _s_ 2 to be the unique vertex adjacent to _w_ 2 in _T′_. We continue this process until we get an (_n_ – 2)-tuple _s_ of the form (_s_ 1 _s_ 2 _s_ 3 · · · _s n_–2), establishing that every labeled tree corresponds to a unique element in _N′_. Before we establish the result in the opposite direction, let us actually obtain a 10-tuple for a labeled tree _T_ with 12 vertices as shown in Figure 6.2.1. _W_ = {5, 6, 7, 8, 9, 10, 11, 12} and _s_ 1 = 1, which is the vertex adjacent to the first element in _W_. Delete from _T_ the vertex 5 and the edge joining 1 and 5 to obtain the tree _T′_, in which _W′_ is the set of all vertices of degree 1 arranged in increasing order. The first element in _W′_ is 1, so _s_ 2 is 4. Next, delete vertex 1 and the edge joining 1 and 4. The first vertex of degree 1 in the new tree is 6, which is adjacent to 2. Thus _s_ 3 is 2.
We continue similarly and observe that _s_ 4 = 2, _s_ 5 = 4, _s_ 6 = 3, _s_ 7 = 3, _s_ 8 = 3, _s_ 9 = 4, and finally, _s_ 10 = 4. Thus the labeled tree _T_ corresponds to the 10-tuple (1 4 2 2 4 3 3 3 4 4). FIGURE 6.2.1 Next, we prove that every (_n_ – 2)-tuple _s_ defines a unique labeled tree with _n_ vertices. If _s_ = (_s_ 1 _s_ 2 _s_ 3 · · · _s n_–2), we define _v_ 1 = the first element in _N_ that is not in _s_; _v_ 2 = the first element in _N_ – {_v_ 1} that is not in _s_ – {_s_ 1}; _v_ 3 = the first element in _N_ – {_v_ 1, _v_ 2} that is not in _s_ – {_s_ 1, _s_ 2}; and so on. We repeat this process until we get _v i_ (_i_ = 1, 2, 3, . . . , _n_ – 2). The two remaining elements in _N_ are denoted by _x_ and _y_. Now construct a graph whose vertex set is _N_ and whose edges are the (_n_ – 2) edges joining _s i_ and _v i_ together with the edge joining _x_ and _y_. The graph thus obtained is the unique labeled tree that corresponds to _s_. For example, if _s_ = (1 4 2 2 4 3 3 3 4 4), then _n_ = 12 and _N_ = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12}. Thus _v_ 1 = the first element in _N_ that is not in _s_, so _v_ 1 = 5. Next, _v_ 2 = the first element in _N_ – {5} that is not in _s_ – {1}, so _v_ 2 = 1. _v_ 3 = the first element in _N_ – {5, 1} that is not in _s_ – {1, 4}, so _v_ 3 = 6. Continuing like this, we get _v_ 4 = 7, _v_ 5 = 2, _v_ 6 = 8, _v_ 7 = 9, _v_ 8 = 10, _v_ 9 = 3, and _v_ 10 = 11. Finally, _x_ = 4 and _y_ = 12. Now construct the graph with vertex set _N_ and edges joining _s i_ and _v i_ (_i_ = 1, 2, 3, . . . , 10) and the edge joining _x_ and _y_. This graph is precisely the labeled tree in Figure 6.2.1. Thus the one-to-one correspondence between _N′_ and the set of distinct labeled trees with _n_ vertices is established, proving the theorem. (_Note:_ Cayley's theorem is an existence theorem. It does not solve the problem of finding all the distinct labeled trees.) **_6.3 BINARY TREES_** A digraph is called a **directed tree** if its underlying graph is a tree.
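Both halves of the correspondence in the proof of Theorem 6.2.1 above are constructive and can be programmed directly. A Python sketch (labels 1, . . . , _n_, edges as pairs; the representation is an assumption, not the book's notation):

```python
def tree_to_sequence(n, edges):
    """Encode a labeled tree on {1..n}: repeatedly delete the
    smallest-labeled leaf, recording its unique neighbor each time."""
    adj = {v: set() for v in range(1, n + 1)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    seq = []
    for _ in range(n - 2):
        leaf = min(v for v in adj if len(adj[v]) == 1)
        s = next(iter(adj[leaf]))       # the unique neighbor of the leaf
        seq.append(s)
        adj[s].remove(leaf)
        del adj[leaf]
    return seq

def sequence_to_tree(seq):
    """Decode an (n - 2)-tuple into the unique labeled tree it represents."""
    n = len(seq) + 2
    verts = set(range(1, n + 1))
    edges = []
    for i, s in enumerate(seq):
        # v_i: smallest unused label not appearing later in the sequence
        v = min(x for x in verts if x not in seq[i:])
        edges.append((s, v))
        verts.remove(v)
    x, y = sorted(verts)                # the two labels left over
    edges.append((x, y))
    return edges
```

Decoding (1 4 2 2 4 3 3 3 4 4) and re-encoding it recovers the same 10-tuple, matching the worked example of Figure 6.2.1.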
A directed tree is a **rooted tree** if (1) there is exactly one vertex (called the **root)** with indegree 0 and (2) the indegree of every other vertex is 1. A vertex in a rooted tree is a **terminal vertex** (or a **leaf)** if its outdegree is 0. A nonterminal vertex is called an **intermediate** or **internal vertex**. The root is thus an intermediate vertex. The number of arcs in the path from the root to a vertex is called the **level** of that vertex. By definition the level of the root is 0. If the levels are 0, 1, 2, . . . , _k_ , then _k_ is the **height** of the tree. Vertex _v_ is a **descendant** of vertex _u_ if there is an arc from _u_ to _v_. If _T_ is a rooted tree, a **subtree** _T_ ′ **at vertex** _v_ is a rooted tree _T′_ = ( _V_ ′, _E′_ ) such that (1) the root of _T_ ′ is at _v_ , (2) _V′_ consists of _v_ and all its descendants in _T_ , and (3) _E′_ contains all the arcs of all the directed paths (in _T_ ) from _v_ to all the leaves. A tree is **pruned at vertex** _v_ if we delete all the descendants of _v_ and all the arcs of all the directed paths emanating from _v_ so that _v_ becomes a leaf of the pruned tree. A rooted tree is a **binary tree** if the outdegree of each intermediate vertex is at most 2. In a **regular binary tree** the outdegree of each intermediate vertex is exactly 2. A regular binary tree is **full** if all its leaves are at the same level. Many discrete structures can be represented as binary trees. We discuss here an application in coding theory. In a computer, since all information is stored in the form of binary numbers, whenever a letter in the alphabet or a symbol is entered into the computer it is converted into a binary word by means of a character code that is a one-to-one correspondence between the set of all characters and a set of binary numbers. 
For example, two of the most commonly used character codes are ASCII (American Standard Code for Information Interchange) and EBCDIC (Extended Binary-Coded Decimal Interchange Code). In both of these codes each character is assigned a code of fixed length, and therefore they are known as **fixed-length character codes**. In ASCII the length is usually 8, as it is in EBCDIC. Thus to encode the number 247 in ASCII we use a message of 24 bits: 001100100011010000110111. Decoding is also straightforward: the number of bits in a message is a multiple of 8 and we use the code systematically to decipher the message. The chief drawback of a fixed-length character code is that all characters, whether they are used frequently or not, need the same number of bits. It is certainly advantageous to have a code in which more frequently used characters use fewer bits, so that the total length of a message is as small as possible. In other words, a **variable-length character code** is more appropriate. But while using such a code, we have to make sure that the decoding is unambiguous. For example, if the codes for 1, 2, and 3 are 01, 0, and 00, respectively, then the word 0001 can be decoded as 221 or 31. The reason for this ambiguity is that the code for 2 appears at the beginning of the codes for 1 and 3. A word _w_ is a **prefix** of another word _v_ if _v_ = _wp_, where _p_ is another word. A character code is said to have the **prefix property** if the code for no symbol is a prefix of the code for another symbol. A character code with the prefix property is a **prefix code**. It is easy to see that a binary prefix code can be obtained directly from a regular binary tree. First we label the two arcs incident from each intermediate vertex as 0 and 1. Then assign to each leaf the sequence of bits formed by the labels of the arcs on the path from the root to that leaf.
A character code that assigns to each character a sequence corresponding to a leaf is necessarily a prefix code. For example, the assignment _A_ = 00, _B_ = 010, _C_ = 011, _D_ = 100, _E_ = 101, and _F_ = 11 obtained from the regular binary tree of Figure 6.3.1 is a prefix code for the alphabet {A, B, C, D, E, F}. Conversely, corresponding to a given prefix code we can construct a regular binary tree by pruning a full regular binary tree of height _k_, where _k_ is the length of the longest sequence in the code. For example, consider the code R = 00, E = 01, A = 10, and D = 111, which has the prefix property. Here _k_ = 3, so first we construct a full regular binary tree of height 3 and label each arc emanating from an intermediate vertex as 0 or 1. Then assign to each vertex of the tree the sequence of bits given by the labels of the arcs from the root to that vertex. Thus each binary sequence of length less than or equal to _k_ is assigned to a unique vertex. Locate those vertices whose binary sequences are precisely the sequences in the given prefix code and prune the tree at these vertices. Also delete the leaves that do not correspond to sequences in the code. The binary tree of Figure 6.3.2 is obtained for the given prefix code after locating the appropriate vertices. FIGURE 6.3.1 FIGURE 6.3.2 Thus, given a prefix code, an arbitrary string can be unambiguously decoded by proceeding from left to right in the string, finding the first substring that is the code of a character, then the next such substring, and so on. For example, if the prefix codes are 00, 010, 011, 100, 101, and 11, the unique deciphering of the string 10111100100000100111110 is 101, 11, 100, 100, 00, 010, 011, 11. The last two bits are not used in this case. Finally, for a given alphabet it is possible to have more than one prefix code. For example, both {00, 01, 10, 11} and {0, 11, 100, 101} are prefix codes for {A, B, C, D}.
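The left-to-right deciphering just illustrated is easy to mechanize: because of the prefix property, the first codeword matching the front of the remaining string is always the right one. A Python sketch using the code of Figure 6.3.1:

```python
def decode_prefix(code, bits):
    """Scan left to right, emitting a character as soon as the
    accumulated bits form a codeword; the prefix property guarantees
    the first match is the only possible one."""
    inverse = {w: ch for ch, w in code.items()}
    decoded, buf = [], ""
    for b in bits:
        buf += b
        if buf in inverse:
            decoded.append(inverse[buf])
            buf = ""
    return "".join(decoded), buf   # buf holds any unused trailing bits

code = {"A": "00", "B": "010", "C": "011",
        "D": "100", "E": "101", "F": "11"}
```

Running it on the string from the text reproduces the deciphering 101, 11, 100, 100, 00, 010, 011, 11 with two bits left over.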
So it is natural to ask whether one code is "better" than another. It is in this context that we consider the _efficiency_ of a code. Let {_a_ 1, _a_ 2, . . . , _a n_} be an alphabet and let _f i_ be the frequency with which _a i_ appears on average. Let _C_ be any prefix code for the alphabet and let _l i_ be the length of the code for _a i_. The weight _w_(_C_) of the code _C_ is _f_ 1 _l_ 1 + _f_ 2 _l_ 2 + · · · + _f n l n_. The problem thus is to find a code _C_ such that _w_(_C_) is as small as possible, and it is equivalent to the following problem: Given that _f i_ (_i_ = 1, 2, . . . , _n_) is the frequency of the character _a i_ in an alphabet of _n_ letters, find a regular binary tree with _n_ terminal vertices such that _f_ 1 _l_ 1 + _f_ 2 _l_ 2 + · · · + _f n l n_ is a minimum, where _l i_ is the length of the path from the root to the _i_th terminal vertex, which represents _a i_. Such a tree is known as an optimal binary tree for the given frequency distribution. We now discuss an elegant procedure due to D. Huffman (1952) to obtain such a tree. First we arrange the frequencies in nondecreasing order from _f_ 1 to _f n_. Huffman's algorithm is based on the following fact (see Theorem 6.3.1 below): If _T′_ is an optimal tree obtained using this procedure for the (_k_ – 1) frequencies {_f_ 1 + _f_ 2, _f_ 3, . . . , _f k_} with (_k_ – 1) terminal vertices, then the tree _T_ obtained from _T′_ by introducing two new terminal vertices _v_ 1 (to represent _f_ 1) and _v_ 2 (to represent _f_ 2) and by joining the terminal vertex _v_ in _T′_ which represents _f_ 1 + _f_ 2 to _v_ 1 and _v_ 2 is an optimal tree for the set {_f_ 1, _f_ 2, _f_ 3, . . . , _f k_}. (The sum of _f_ 1 and _f_ 2 need not be less than _f_ 3. Tiebreaking is arbitrary in the sense that if _f_ 1 + _f_ 2 = _f_ 3, then _v_ can be either the vertex that represents _f_ 1 + _f_ 2 or the vertex that represents _f_ 3.
Notice that _v_ is not a terminal vertex in _T_.) Here is an example that illustrates the construction of a Huffman code for an alphabet of six letters with frequency distribution _S_ 1 = {3, 5, 6, 8, 10, 14}. Add the first two numbers and rearrange to get _S_ 2 = {6, 8, 8, 10, 14}. Repeat this process to get _S_ 3 = {8, 10, 14, 14}, _S_ 4 = {14, 14, 18}, and _S_ 5 = {18, 28}. Now construct an optimal tree for _S_ 5 as in Figure 6.3.3. The frequency 28 is the sum of 14 and 14 in _S_ 4, and this takes us back to _S_ 4 and the tree in Figure 6.3.4. Then we use the decomposition 18 = 8 + 10 to get a tree for _S_ 3 as in Figure 6.3.5. After that we use the decompositions 14 = 6 + 8 and 8 = 3 + 5 to get the desired Huffman tree, as in Figure 6.3.6. Thus a Huffman code for the frequency distribution {3, 5, 6, 8, 10, 14} is {000, 001, 110, 111, 01, 10}, with lengths 3, 3, 3, 3, 2, 2, and the weight of this code is 3·3 + 3·5 + 3·6 + 3·8 + 2·10 + 2·14 = 114, which is a minimum. FIGURE 6.3.3 FIGURE 6.3.4 THEOREM 6.3.1 The regular binary tree obtained by using the Huffman algorithm is an optimal tree. **_Proof:_** Let the _n_ frequencies _f i_ (_i_ = 1, 2, . . . , _n_) be in nondecreasing order. The number of regular binary trees with _n_ terminal vertices representing these _n_ frequencies is finite, and therefore there is a regular binary tree _T′_ with _n_ terminal vertices for which the weight _w_(_T′_) is a minimum. Let _v_ be an internal vertex of _T′_ such that the distance (i.e., the number of edges in the path) from the root of the tree to _v_ is not less than the distance from the root to any other nonterminal vertex, and let the descendants of _v_ be the terminal vertices representing _f i_ and _f j_. Now exchange values: assign _f_ 1 to the vertex that represents _f i_ and _f i_ to the vertex that represents _f_ 1; at the same time, assign _f_ 2 to the vertex that represents _f j_ and _f j_ to the vertex that represents _f_ 2.
The weight _w_ ( _T_ ) of the resulting tree _T_ cannot be more than _w_ ( _T′_ ) because of the choice of _v_. At the same time _w_ ( _T_ ) cannot be less than _w_ ( _T′_ ) because _T′_ is an optimal tree. Hence we conclude that there is an optimal tree _T_ for which _f_ 1 and _f_ 2 are the descendants of the same nonterminal vertex that is at a maximal (in comparison with other nonterminal vertices) distance from the root of _T_. FIGURE 6.3.5 FIGURE 6.3.6 Next, let _T″_ be the regular binary tree with ( _n_ – 1) terminal vertices obtained from the tree _T_ obtained above by deleting the two terminal vertices that correspond to _f_ 1 and _f_ 2, so that the vertex _v_ (from which these two vertices descend) becomes a terminal vertex, which corresponds to the frequency _f_ 1 + _f_ 2. It is easy to see that _w_ ( _T_ ) = _w_ ( _T″_ ) + _f_ 1 + _f_ 2. Thus _T_ is optimal if and only if _T″_ is optimal. The conclusion of the theorem follows by induction on _n_. We conclude this discussion by establishing that Huffman's algorithm indeed gives a coding system that has the desired property: The larger the frequency of a character, the shorter the length of the code that represents that character. THEOREM 6.3.2 Let _T_ be an optimal Huffman tree for the _n_ characters _a i_ with frequencies _f i_, and let _d i_ be the length of the code that represents _a i_ ( _i_ = 1, 2, . . . , _n_ ). If _f i_ < _f j_, then _d i_ ≥ _d j_. **_Proof_ :** The length of the path from the root of _T_ to the terminal vertex that represents _a i_ is _d i_. Now interchange the two codes: let the terminal vertex that represents _f i_ be given the code of _a j_, and let the terminal vertex that represents _f j_ be given the code of _a i_. Even though the tree is the same, its weight will change. Let _w_ be the old weight and _w′_ be the new weight. Of course, _w_ ≤ _w′_. Now _w_ – _w′_ = ( _d i_ – _d j_)( _f i_ – _f j_) ≤ 0, and since _f i_ – _f j_ < 0, it follows that _d i_ ≥ _d j_.
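The merging procedure of Huffman's algorithm can be sketched in Python using a min-heap. The helper name `huffman_code_lengths` is hypothetical, but the frequencies are those of the worked example above, so the computed weight matches the value 114 obtained there.

```python
import heapq

def huffman_code_lengths(freqs):
    """Return the optimal code length for each frequency, in input order.

    Repeatedly merges the two smallest frequencies, as in Huffman's
    procedure; each merge pushes every leaf below it one level deeper.
    """
    # Each heap entry: (frequency, unique id for tiebreaking, leaf indices)
    heap = [(f, i, [i]) for i, f in enumerate(freqs)]
    heapq.heapify(heap)
    depth = [0] * len(freqs)
    counter = len(freqs)
    while len(heap) > 1:
        f1, _, leaves1 = heapq.heappop(heap)
        f2, _, leaves2 = heapq.heappop(heap)
        for leaf in leaves1 + leaves2:
            depth[leaf] += 1          # these leaves sink one level deeper
        heapq.heappush(heap, (f1 + f2, counter, leaves1 + leaves2))
        counter += 1
    return depth

freqs = [3, 5, 6, 8, 10, 14]
lengths = huffman_code_lengths(freqs)
weight = sum(f * l for f, l in zip(freqs, lengths))
print(lengths, weight)   # → [3, 3, 3, 3, 2, 2] 114
```

Ties are broken arbitrarily (here, by insertion order), which may change the tree but never the minimum weight.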
**_Binary Search Trees_** An important operation that frequently arises in computer science is the use of binary trees to search through a list or a table, the elements of which constitute a linearly ordered finite set. If _T_ is a binary tree and if the outdegree of a vertex _v_ is 2, then _v_ has two descendants—a **left descendant** and a **right descendant**. (If the outdegree of _v_ is 1, the unique descendant may be considered as either the left descendant or the right descendant.) The tree rooted at the left descendant of a vertex is called the **left subtree of this vertex**. The tree rooted at the right descendant of a vertex is called the **right subtree of this vertex**. Suppose that _T_ is a binary tree. To each vertex _v_ of the tree, a real number _k_ ( _v_ ) is assigned. This number _k_ ( _v_ ) is called the **key** of the vertex. Assign keys to the vertices of _T_ such that the key of a vertex is (1) larger than the keys of the vertices in its left subtree and (2) smaller than the keys of the vertices in its right subtree. A binary tree with keys defined by this rule is called a **binary search tree**. In Figure 6.3.7 we have a binary search tree with nine vertices in which the set of keys is {9, 7, 13, 4, 8, 11, 15, 3, 5}. If _T_ is an arbitrary binary tree, it is possible to assign keys to the vertices of _T_ such that _T_ is a binary search tree. This is proved in Reingold (1977), in which an algorithm to convert a binary tree into a binary search tree is also derived. To search whether a particular item _q_ is in a list, we use the binary search tree of the list as follows: First we start with the root. At any stage, the key of the vertex _x_ is examined. In the beginning, _x_ is the root. If _q_ = _k_ ( _x_ ), we have found the desired item in the list. If _q_ < _k_ ( _x_ ), we ignore the right subtree of _x_ and look at the left descendant of _x_.
If _q_ > _k_ ( _x_ ), we ignore the left subtree of _x_ and look at the right descendant of _x_. We continue this process until the item is found or until there is no descendant left to examine. In this search process, the number of comparisons to be made will be _h_ + 1 in the worst case, where _h_ is the height of the binary search tree. Thus the computational complexity of this algorithm is minimized if we can find a binary search tree of minimum height. In this context, the following theorem is very handy. FIGURE 6.3.7 THEOREM 6.3.3 The minimum height of a binary tree with _n_ vertices is _m_ – 1, where _m_ is the ceiling of log₂( _n_ + 1). **_Proof_ :** Let _T_ be any binary tree with _n_ vertices, and let _h_ be its height. There are at most 2^ _k_ vertices at level _k_ , where _k_ = 0, 1, 2, . . . , _h_. So _n_ ≤ 1 + 2 + 2^2 + 2^3 + . . . + 2^ _h_ = 2^( _h_ + 1) – 1. Hence _h_ + 1 ≥ log₂( _n_ + 1), which implies that _h_ ≥ _m_ – 1, where _m_ is the ceiling of log₂( _n_ + 1). A recursive procedure can be used to _construct a binary search tree_ corresponding to a given linearly ordered list as follows. Start with a tree containing just one vertex, which is the root. We assign the first file in the list as the key of this vertex. To add a new file from the list, this file is first compared with the keys of the existing vertices in the tree, starting at the root and moving to the left if the file is less than the key of the vertex under consideration and if this vertex has a left descendant, or moving to the right if the file is greater than the key of the vertex under consideration and if this vertex has a right descendant. When the current file is less than the key of a vertex and this vertex has no left descendant, a new vertex with this file as its key is inserted as a new left descendant. Similarly, when a file is greater than the key of a vertex and this vertex has no right descendant, a new vertex is added with this file as its key as a new right descendant.
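The construction and search procedures just described can be sketched in Python; the `Node`, `insert`, and `search` names are hypothetical, not the text's notation.

```python
class Node:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None

def insert(root, key):
    """Insert key by walking from the root, going left on smaller and
    right on larger, until a missing descendant is found."""
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    elif key > root.key:
        root.right = insert(root.right, key)
    return root

def search(root, q):
    """Return True if q is a key in the binary search tree."""
    while root is not None:
        if q == root.key:
            return True
        root = root.left if q < root.key else root.right
    return False

# The list of Example 6.3.1 below
root = None
for key in [4, 6, 2, 8, 5, 3, 7, 1]:
    root = insert(root, key)
print(search(root, 7), search(root, 9))   # → True False
```

Each search makes at most _h_ + 1 key comparisons, where _h_ is the height of the tree, as noted above.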
This procedure is illustrated in the following example. FIGURE 6.3.8 **Example 6.3.1** Form a binary search tree to represent {4, 6, 2, 8, 5, 3, 7, 1}. **Solution** The binary search tree is displayed in Figure 6.3.8. The key of the root is 4, which is the first element in the set. The next element, 6, is more than 4. Since 4 has no right descendant at this stage, a new vertex with key 6 is created as the right descendant of 4. The next element is 2. We start with the root and move left since 2 < 4. Since 4 has no left descendant, a new vertex is constructed as the left descendant of 4, and this vertex has the key 2. The next element is 8. Compare 8 with 4 and we move right. Compare 8 with 6 and we move right. Now the vertex with key 6 has no right descendant, so a new vertex with key 8 is constructed as the right descendant of the vertex with key 6. We continue this process until we reach the last item in the list. **_6.4 NOTES AND REFERENCES_** Some useful general references on trees and their properties are the appropriate sections from the books by Behzad et al. (1979), Berge (1962), Bondy and Murty (1976), Deo (1974), Gondran and Minoux (1984), Gould (1988), Grimaldi (1985), Harary (1969a), Liu (1985), Roberts (1984), Townsend (1987), Tucker (1984), and Wilson (1979). Theorem 6.2.1 is due to Arthur Cayley (1821–1895), who used graph theory in connection with enumeration problems in physical chemistry. For additional treatment of Huffman codes, see Huffman (1952), Markowsky (1981), and Standish (1980). See Chapter 2 of Knuth (1973a) and Chapter 6 of Knuth (1973b) for a complete treatment of trees and search trees. **_6.5 EXERCISES_** **6.1.** If _G_ is a forest with _n_ vertices, _m_ edges, and _k_ components, obtain an expression for _m_ in terms of _n_ and _k_. **6.2.** Suppose that a tree has two vertices of degree 5, three vertices of degree 4, six vertices of degree 3, eight vertices of degree 2, and _r_ vertices of degree 1. Find _r_.
**6.3.** _G_ is a connected graph with 20 vertices. Find the minimum number of edges that _G_ can have. **6.4.** _G_ is a connected graph with 20 edges. Find the maximum number of vertices that _G_ can have. **6.5.** Suppose _G_ has four components, 20 edges, and _r_ vertices. Find the maximum value of _r_. **6.6.** Show that every tree is a bipartite graph. Which trees are complete bipartite graphs? **6.7.** An edge _e_ in _G_ is in every spanning tree of _G_. What can you say about _e_? **6.8.** An edge _e_ in _G_ is in no spanning tree of _G_. What can you say about _e_? **6.9.** If _T_ and _T′_ are two spanning trees in _G_ , is it necessary that _T_ and _T′_ have an edge in common? Either prove this or produce a counterexample. **6.10.** Show that a connected graph in which every vertex is even must have a cycle. **6.11.** If _e_ is an edge in a connected graph _G_ (with no loops), then prove that there is a spanning tree _T_ ( _e_ ) which contains _e_. **6.12.** If _e_ and _f_ are two edges in a connected simple graph, show that there is a spanning tree _T_ ( _e, f_ ) that contains both _e_ and _f_. **6.13.** Find the unique labeled tree that corresponds to _s_ = (8, 8, 7, 7, 7, 6, 6). **6.14.** If the edges of a labeled tree are {1, 2}, {2, 3}, {2, 4}, {4, 5}, {4, 7}, {5, 6}, {7, 8}, and {8, 9}, find _s_. **6.15.** Use the prefix code _A_ = 000, _B_ = 001, _C_ = 01, _D_ = 10, _E_ = 111, and _R_ = 110 to decode the following word: 000001110000010001000000111000011 **6.16.** If the frequencies of the six letters in the prefix code in Problem 6.15 are 8, 10, 4, 5, 12, and 10, find the weight of the code. **6.17.** Obtain an optimal prefix code for the data in Problem 6.16 and find the weight of this code. Encode the word that appears in Problem 6.15 using this code. What is the length of this word if we use this optimal code in the worst case? **6.18.** Decode the word 11111011001001 using the optimal code obtained in Problem 6.17.
**6.19.** Find a regular binary tree of height _k_ with 13 vertices such that ( **a** ) _k_ is as small as possible, and ( **b** ) _k_ is as large as possible. **6.20.** A rooted tree of height _k_ is said to be a balanced tree if every terminal vertex is at level _k_ or ( _k_ – 1). Construct a balanced binary tree with _n_ vertices when _n_ = 11 and _n_ = 12. **6.21.** Construct a binary search tree (using alphabetical order) for the set {Hungary, Germany, Poland, Bulgaria, Romania, Czechoslovakia, Albania, Yugoslavia}. **Spanning Tree Problems** **_7.1 MORE ON SPANNING TREES_** If _G_ is a connected graph with _n_ vertices, a spanning tree in _G_ , as we saw in Chapter 6, is an acyclic subgraph of _G_ with ( _n_ – 1) edges. If _T_ = ( _V_ , _E′_ ) is a spanning tree in _G_ = ( _V_ , _E_ ), the edges of _G_ not in _E′_ are called the **chords** of _T_. If _e_ is a chord joining the vertices _u_ and _v_ , the edges in the unique path in _T_ between _u_ and _v_ , together with the edge _e_ , form a _unique_ cycle in _G_ , which is called the **fundamental cycle of** _G_ **relative to** _T_ **with respect to the chord** _e_ and is denoted by _C T_( _e_ ). Thus if _G_ has _m_ edges, it will have _m_ – ( _n_ – 1) such fundamental cycles relative to every spanning tree. We assume that _G_ is a connected graph throughout this chapter unless otherwise stated. A subset _D_ of the set of edges of a graph _G_ = ( _V_ , _E_ ) is called a **disconnecting set** of _G_ if the deletion of the edges in _D_ from _G_ makes _G_ into a disconnected graph. If _V_ is partitioned into two sets _V′_ and _V″_ and if _D_ = ( _V′, V″_ ) is the set of all edges in _E_ of the form { _i, j_ }, where _i_ ∈ _V′_ and _j_ ∈ _V″_ , then _D_ is a disconnecting set. A disconnecting set _D_ is called a **cutset of** _G_ if no proper subset of _D_ is a disconnecting set. A disconnecting set consisting of exactly one edge is, of course, a cutset, known as a **bridge**.
If _D_ is a cutset in _G_ , the deletion of the edges in _D_ disconnects _G_ into _exactly two_ components _G′_ (with _V′_ as the set of vertices) and _G″_ (with _V″_ as the set of vertices), and thus _D_ = ( _V′, V″_ ). On the other hand, if _V_ is arbitrarily partitioned into two sets _W_ and _W′_ , the disconnecting set _D_ = ( _W, W′_ ) need not be a cutset. For example, let _V_ = { _a, b, c_ }, _E_ = {{ _a, b_ }, { _a, c_ }}, _W_ = { _a_ }, and _W′_ = { _b, c_ }. In this case the disconnecting set _D_ = ( _W, W′_ ) is not a cutset. So when will a partition of the vertex set give rise to a cutset? **THEOREM 7.1.1** If the vertex set _V_ of a connected graph _G_ is partitioned into two subsets _W_ and _W′_ such that every two vertices in _W_ (and in _W′_ ) are connected by a path that consists of vertices only from _W_ (and _W′_ , respectively), then _D_ = ( _W, W′_ ) is a cutset. **_Proof_ :** Suppose that _D_ is not a cutset. Then there is a proper subset _D′_ of _D_ that is a disconnecting set. Let _e_ = { _w, w′_ } be an edge in _D_ that is not in _D′_ , where _w_ is in _W_ and _w′_ is in _W′_. Suppose that _u_ and _v_ are any two vertices in _G_ , where _u_ is in _W_ and _v_ is in _W′_. By hypothesis there is a path between _u_ and _w_ consisting of vertices from _W_ only, and there is a path between _w′_ and _v_ consisting of vertices from _W′_ only. Thus there is a path between _u_ and _v_ using the edge _e_ that is not in _D′_. Thus _D′_ is not a disconnecting set, which is a contradiction. COROLLARY If _T_ is any spanning tree in _G_ = ( _V, E_ ), the deletion of any edge in _T_ makes _T_ disconnected by creating two subtrees with vertex sets _W_ and _W′_ such that _D_ = ( _W, W′_ ) is a cutset of _G_. Thus corresponding to each edge _e_ of a spanning tree _T_ , there is a unique cutset _D T_( _e_ ) called the **fundamental cutset** of _T_ with respect to the edge _e_.
Thus any connected graph with _n_ vertices will have a system of ( _n_ – 1) fundamental cutsets with respect to every spanning tree. For example, in the graph of Figure 7.1.1 we have a spanning tree _T_ with edges _a, b, c, d_ , and _e_. The chords are _p, q, r_ , and _s_. _T_ thus has four fundamental cycles (one with respect to each chord of _T_ ) and five fundamental cutsets (one with respect to each edge of _T_ ). We see that the edges in the fundamental cycle _C_ ( _r_ ) are _r, c, d_ , and _e_. The edges in the cutset _D_ ( _b_ ) are _b, p_ , and _q_. There is a close relation between the concepts of spanning trees, cycles, and cutsets, and this is the content of the next two theorems. FIGURE 7.1.1 THEOREM 7.1.2 Let _T_ be a spanning tree in a connected graph _G_ , and let _C_ and _D_ be a cycle and cutset, respectively, in _G_. Then: (a) Either there are no edges in common between _C_ and _D_ or there are an even number of edges in common between the two. (b) At least one edge of _C_ is a chord of _T_. (c) At least one edge of _D_ is an edge of _T_. **_Proof_ :** (a) Let _D_ = ( _W, W′_ ). If all the vertices of _C_ are in one of the two subsets, then of course _C_ and _D_ have no edges in common. Suppose that _w_ and _w′_ are two vertices in _C_ , where _w_ is in _W_ and _w′_ is in _W′_. Then the cycle _C_ that starts from _w_ and ends in _w_ will necessarily use the edges from _D_ an even number of times: Whenever an edge from _D_ of the form { _i_ , _j_ }, where _i_ is in _W_ and _j_ is in _W′_ , is used, an edge of the form { _u_ , _v_ } is also used, where _u_ is in _W′_ and _v_ is in _W_. (b) If no edge of _C_ is a chord of _T_ , then _C_ is a subgraph of _T_ , which is a contradiction since _T_ is acyclic. (c) If no edge of _D_ is an edge of _T_ , the deletion of the edges from _D_ will not disconnect _G_ because such a deletion will not affect the spanning tree.
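Part (a) of Theorem 7.1.2 can be checked on a small concrete instance. The graph below (the complete graph on vertices 1–4, with the star at vertex 1 as spanning tree) and all function names are illustrative choices, not from the text; the helpers exploit the star shape rather than implementing the general definitions.

```python
def norm(e):
    """Canonical form of an undirected edge."""
    return tuple(sorted(e))

# K4 on vertices 1..4, with the star at vertex 1 as the spanning tree
edges = [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]
tree = [(1, 2), (1, 3), (1, 4)]

def fundamental_cycle(chord):
    """Chord (u, v): the tree path u-1-v plus the chord itself."""
    u, v = chord
    return {norm((1, u)), norm((1, v)), norm(chord)}

def fundamental_cutset(tree_edge):
    """Deleting a star edge splits off one leaf w; the cutset is every
    edge of the graph with exactly one endpoint equal to w."""
    w = tree_edge[1]          # the non-root endpoint
    return {norm(e) for e in edges if (e[0] == w) != (e[1] == w)}

C = fundamental_cycle((2, 3))     # {(1,2), (1,3), (2,3)}
D = fundamental_cutset((1, 2))    # {(1,2), (2,3), (2,4)}
print(len(C & D))   # → 2, an even number, as part (a) asserts
```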
THEOREM 7.1.3 (a) Let _D_ ( _e_ ) be the fundamental cutset with respect to an edge _e_ of a spanning tree _T_ , and let _f_ be any other element in this cutset. Then (1) _f_ is a chord of _T_ defining the fundamental cycle _C_ ( _f_ ), (2) _e_ is an element of _C_ ( _f_ ), and (3) if _e′_ is another chord of _T_ that is not in _D_ ( _e_ ), then _e_ is not an element of _C_ ( _e′_ ). (b) Let _C_ ( _e_ ) be the fundamental cycle with respect to the chord _e_ of a spanning tree _T_ , and let _f_ be any other edge in this cycle. Then (1) _f_ is an edge of the tree defining the fundamental cutset _D_ ( _f_ ), (2) _e_ is an element of _D_ ( _f_ ), and (3) if _e′_ is another edge of _T_ that is not in _C_ ( _e_ ), then _e_ is not an element of _D_ ( _e′_ ). **_Proof of_ ( _a_ ) _:_** (1) Any edge _f_ in _D_ ( _e_ ) other than _e_ cannot be an edge in _T_ , since _e_ is the only edge of _T_ joining the two components of _T_ – _e_. So _f_ is a chord defining a fundamental cycle _C_ ( _f_ ). (2) Let _f_ be any edge in _D_ ( _e_ ) other than _e_. Then _D_ ( _e_ ) = { _e_ , _f_ } ∪ _A_ , where _A_ is a set of chords of _T_ , and _C_ ( _f_ ) = { _f_ } ∪ _B_ , where _B_ is a set of edges of _T_. The edge _f_ is common to both _D_ ( _e_ ) and _C_ ( _f_ ). Recall that _D_ ( _e_ ) and _C_ ( _f_ ) should have an even number of edges in common. Since _A_ and _B_ have no edges in common, we conclude that _e_ is the only other edge common to both _D_ ( _e_ ) and _C_ ( _f_ ). Thus _e_ is an edge in the fundamental cycle _C_ ( _f_ ). (3) Let _C_ ( _e′_ ) be the fundamental cycle with respect to _e′_ , a chord that is not in _D_ ( _e_ ). Thus _C_ ( _e′_ ) = { _e′_ } ∪ _L_ , where _L_ is a set of edges of _T_ , and _D_ ( _e_ ) = { _e_ } ∪ _M_ , where _M_ is a set of chords of _T_. If _e_ is in _C_ ( _e′_ ), then _e′_ will be in _D_ ( _e_ ), which is against our assumption. So _e_ is not in _C_ ( _e′_ ). **_Proof of_ ( _b_ ) _:_** This proof is similar to that of (a) and is left as an exercise.
We can restate parts (a) and (b) of Theorem 7.1.3 as follows: (a) If _e_ is any edge of a spanning tree _T_ in a connected simple graph _G_ , then (1) there is a unique cutset _D_ ( _e_ ); (2) if _f_ is any edge in this cutset other than _e_ , then _f_ is a chord of _T_ that defines a unique cycle _C_ ( _f_ ) such that _e_ is an edge in this cycle; and (3) if _e′_ is a chord of _T_ that is not in _D_ ( _e_ ), defining a unique cycle _C_ ( _e′_ ), then _e_ is not an edge in _C_ ( _e′_ ). (b) If _e_ is any chord of a spanning tree _T_ in a connected simple graph _G_ , then (1) there is a unique cycle _C_ ( _e_ ); (2) if _f_ is any edge in this cycle other than _e_ , then _f_ is an edge of _T_ that defines a unique cutset _D_ ( _f_ ) such that _e_ is an edge in this cutset; and (3) if _e′_ is an edge of _T_ that is not in _C_ ( _e_ ), defining a unique cutset _D_ ( _e′_ ), then _e_ is not an edge in _D_ ( _e′_ ). If we associate a real number (called the **weight** ) with each edge of a graph _G_ so that _G_ becomes a network, the **weight of a spanning tree** _T_ in _G_ is then the sum of the weights of all the edges in _T_. A spanning tree _T_ is a **minimal spanning tree** (MST) if the weight of _T_ does not exceed the weight of any other spanning tree in _G_. The MST problem (also known as the minimal connector problem) is the problem of finding an MST in a connected graph _G_. This optimization problem has several important practical applications. For example, it can be helpful in planning large-scale communication and distribution networks when the most important consideration usually is to provide paths between every pair of vertices in the most economical way. The vertices would be cities, terminals, or retail outlets, and the edges would be highways or pipelines. The weights corresponding to these edges could be distances or costs or time involved in these processes. See Graham and Hell (1982) for an exhaustive survey and many references.
In this chapter we present two simple algorithms to find a minimal spanning tree in a graph. One is due to Kruskal (1956), and the other is due to Prim (1957). The approach in both methods is _greedy:_ It so happens that if we take the "choicest morsel" at each opportunity without violating any rules, we will eventually have an optimal solution. The minimal spanning tree problem has the following generalization. Let _W_ be a subset of the set _V_ of all vertices of a connected simple graph _G_. A tree _T_ = ( _U, F_ ) in _G_ , where _W_ is a subset of _U_ , is called a **Steiner tree with respect to the set** _W_. The **minimal Steiner network problem for the set** _W_ is the problem of finding a Steiner tree with respect to _W_ of minimum weight. Thus a minimal Steiner tree with respect to _V_ is a minimal spanning tree in _G_. It is quite possible that a minimal Steiner tree with respect to a proper subset _W_ is a minimal spanning tree in _G_. There is no known efficient algorithm to solve the Steiner tree problem. An efficient algorithm to obtain an approximate solution is presented in Chang (1972). **_7.2 KRUSKAL'S GREEDY ALGORITHM_** We list the edges of the connected network with _n_ vertices in nondecreasing order of weight and then construct a subgraph _T_ by examining these edges one at a time, starting with an edge of the smallest weight. An edge will be added to _T_ as long as it does not form a cycle with some or all the edges of _T_. The construction halts when _T_ has ( _n_ – 1) edges. Obviously, this greedy procedure ensures that _T_ is a spanning tree. That the _T_ thus obtained is indeed a _minimum_ spanning tree is a consequence of the following result. THEOREM 7.2.1 If _e_ is an edge in a cycle _C_ of a connected graph _G_ such that the weight of _e_ is more than the weight of any other edge in the cycle _C_ , then _e_ is not an edge for any MST in _G_. **_Proof_ :** Suppose that _T_ is an MST in which _e_ is an edge.
Let _D_ ( _e_ ) be the fundamental cutset with respect to _e_. Since the edge _e_ is common to both the cycle _C_ and the cutset _D_ ( _e_ ), there should be at least one more element _f_ common to both these sets because the number of elements common to a cycle and a cutset is even. Since _f_ is in _D_ ( _e_ ), _f_ is necessarily a chord of _T_. Let _C_ ( _f_ ) be the fundamental cycle with respect to _f_. By Theorem 7.1.3 we know that _e_ is an element of _C_ ( _f_ ). Now consider the subgraph _H_ obtained by adjoining _f_ to _T_. The only cycle in _H_ is _C_ ( _f_ ), and if we delete _e_ from _H_ , we get a spanning tree _T′_ with weight less than that of _T_. This contradiction establishes the fact that _e_ is not an edge of _T_. COROLLARY If the weight of any other edge in _C_ does not exceed the weight of _e_ , there is a minimal spanning tree in which _e_ is not an edge. In Kruskal's algorithm we abandon an edge _p_ in favor of another edge _q_ for inclusion in _T_ (when the weight of _p_ does not exceed that of _q_ ) only when the inclusion of _p_ creates a cycle in which _p_ is an edge with the largest weight. Thus Kruskal's algorithm correctly solves the MST problem. It is also an easy exercise at this stage to verify that if all the weights of the edges in _G_ are distinct, there is a unique MST in _G_. As an example, consider the graph of Figure 7.2.1, where we have the sorted list _L_ of all the edges of _G_ as _L_ = {{1, 2}, {1, 5}, {2, 5}, {2, 3}, {3, 5}, {3, 6}, {5, 6}, {1, 4}, {4, 5}} in ascending order of weight. The algorithm examines {1, 2} and accepts it for the tree. Then it examines {1, 5} and accepts it. After that it examines {2, 5} and does not accept it, to avoid the cycle _C_ consisting of {1, 2}, {1, 5}, and {2, 5}. Then it proceeds further and accepts {2, 3}, {3, 6}, and {1, 4} in turn. At this stage it halts because the number of edges accepted is one less than the total number of vertices.
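Kruskal's procedure can be sketched in Python with a union-find structure for cycle detection. The text gives only the sorted order of the nine edges of Figure 7.2.1, so the weights 1 through 9 below are hypothetical, chosen to be consistent with that order.

```python
def kruskal(n_vertices, weighted_edges):
    """Kruskal's greedy algorithm: accept each edge, in order of weight,
    unless both endpoints already lie in the same component."""
    parent = list(range(n_vertices + 1))

    def find(x):                       # root of x's component
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    tree = []
    for w, u, v in sorted(weighted_edges):
        ru, rv = find(u), find(v)
        if ru != rv:                   # no cycle: accept the edge
            parent[ru] = rv
            tree.append((u, v))
        if len(tree) == n_vertices - 1:
            break                      # (n - 1) edges: spanning tree done
    return tree

# Hypothetical weights 1..9, consistent with the sorted list L above
L = [(1, 1, 2), (2, 1, 5), (3, 2, 5), (4, 2, 3), (5, 3, 5),
     (6, 3, 6), (7, 5, 6), (8, 1, 4), (9, 4, 5)]
print(kruskal(6, L))   # → [(1, 2), (1, 5), (2, 3), (3, 6), (1, 4)]
```

The accepted edges match the walk-through above, with {2, 5}, {3, 5}, and {5, 6} rejected as cycle-closing edges.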
In this graph we have nine edges, and we had to examine eight of them before we stop. How many computational steps (in this case, comparisons) are needed to arrange (to sort) the _m_ edges of the graph in nondecreasing order? The number of such comparisons no doubt depends on the algorithm we use to sort the _m_ elements of the edge set _E_. One obvious method is as follows: Successively compare the _i_ th term to the ( _i_ + 1)st term in the set, interchanging the two if the _i_ th term is larger than the ( _i_ + 1)st term. This procedure is called the **bubblesort** because the larger numbers "rise" to the top. The first number in the set has to be compared with at most ( _m_ – 1) numbers. Then the second number has to be compared with at most ( _m_ – 2) numbers, and so on. Thus the total number of comparisons in the worst case if we use the bubblesort algorithm is 1 + 2 + 3 + . . . + ( _m_ – 1) = _m_ ( _m_ – 1)/2, which is a polynomial in _m_ of degree 2. In other words, the worst-case complexity of the bubblesort algorithm is _O_ ( _m_ ^2). See the Appendix for notations and concepts related to computational complexity of algorithms. On the other hand, if we use the **mergesort** algorithm (see Stanat and McAllister, 1977), the number of comparisons to sort _m_ numbers in the worst case is _O_ ( _m_ log _m_ ). In general, no comparison-based sorting algorithm is asymptotically more efficient than this. Another well-known algorithm known as **heapsort** has a worst-case behavior of _O_ ( _m_ log _m_ ), whereas the **quicksort** algorithm has an average-case behavior of _O_ ( _m_ log _m_ ) and a worst-case behavior of _O_ ( _m_ ^2). See Aho et al. (1983) for more details. Under these circumstances it is reasonable to conclude that the worst-case complexity of Kruskal's algorithm is _O_ ( _m_ log _m_ ).
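The bubblesort just described can be sketched as follows; this straightforward version performs m(m – 1)/2 comparisons on every input, matching the worst-case count derived above.

```python
def bubblesort(a):
    """Compare each element with its successor, swapping when out of
    order; return the sorted list and the number of comparisons made."""
    a = list(a)
    comparisons = 0
    for i in range(len(a) - 1, 0, -1):   # pass i makes i comparisons
        for j in range(i):
            comparisons += 1
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]   # larger value "rises"
    return a, comparisons

weights, count = bubblesort([9, 7, 5, 3, 1, 2, 4, 6, 8])
m = 9
print(weights, count, m * (m - 1) // 2)   # count is 36 = m(m-1)/2
```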
Notice, however, that if _m_ is very large in comparison with _n_ [i.e., when _m_ is _O_ ( _n_ ^2)], it is not very economical to sort all the _m_ edges when we need only ( _n_ – 1) of these _m_ edges. See Syslo et al. (1983) for implementation details of Kruskal's algorithm when _m_ is large. FIGURE 7.2.1 **_7.3 PRIM'S GREEDY ALGORITHM_** In this procedure we start with an arbitrary vertex _v_ in _G_ and examine all the edges incident at _v_. Let _e_ = { _v_ , _w_ } be an edge with least weight among all these edges. We construct a subgraph _T_ of _G_ starting with _e_ as an edge. Next examine all edges (other than _e_ ) incident at _v_ and all edges incident at _w_ and choose an edge _f_ of least weight among them. This newly found edge is added to the subgraph _T_. The edge _f_ is either between _v_ and a new vertex or between _w_ and a new vertex. Let _u_ be the new vertex. At this stage we have three vertices _v_ , _w_ , and _u_. Examine all the edges other than _e_ and _f_ that are incident at _v_ , _u_ , and _w_ and choose the one with the smallest weight such that the edges _e_ and _f_ and the newly selected edge _g_ do not form a cycle. At this stage _g_ is added to _T_. We continue until all vertices are accounted for. This procedure obviously ends up with a spanning tree. That this tree is indeed an MST is a consequence of the two corollaries of the following theorem. **THEOREM 7.3.1** If _v_ is any vertex in a connected network _G_ and if _e_ is an edge incident at _v_ such that the weight of _e_ is less than the weight of every other edge incident at _v_ , then _e_ is an edge of every minimum spanning tree in _G_. **_Proof_ :** Let _T_ be an MST and suppose that _e_ = { _v_ , _w_ } is not an edge of _T_. Let _H_ be the subgraph of _G_ obtained by adding _e_ to _T_. _H_ has a unique cycle _C_ ( _e_ ), which can be represented as _v_ – _v_ 1 – _v_ 2 – . . . – _v r_ – _w_ – _v_ , where _e_ = { _v_ , _w_ }; let _f_ = { _v_ , _v_ 1}.
Now both _e_ and _f_ are incident at _v_ , and the weight of _e_ is less than that of _f_. If we remove _f_ from _H_ , we get a spanning tree _T′_ with weight less than the weight of _T_ , which is a contradiction. **COROLLARY 1** If _v_ is any vertex of _G_ and if _e_ is an edge incident at _v_ such that the weight of no edge incident at _v_ is less than the weight of _e_ , there is an MST in _G_ for which _e_ is an edge. **COROLLARY 2** If a tree _T′_ that spans the vertices in a subset _W_ of vertices in a connected graph _G_ = ( _V, E_ ) is a subtree of a minimal spanning tree of _G_ , there is a minimal spanning tree of _G_ that contains _T′_ and the smallest edge connecting _W_ and _V_ – _W_. At each iteration of Prim's algorithm we have a partition of the vertex set _V_ = {1, 2, . . . , _n_ } into subsets _P_ and _Q_ , where _P_ is the set of vertices already accounted for and _Q_ is its complement. Initially, we take _P_ = {1}. We associate a label _t_ ( _i_ ) with each vertex _i_ in _Q_. Initially, _t_ ( _i_ ) is the weight of the edge between 1 and _i_ if there is an edge; otherwise, it is infinity (a large positive number). In step 1 we choose a vertex _v_ in _Q_ with the smallest label. Then we locate a vertex _u_ in _P_ such that the weight _d_ ( _u, v_ ) of the edge between _u_ and _v_ is _t_ ( _v_ ). At this point the edge { _u_ , _v_ } is accepted as an edge for the MST and _v_ is added to _P_. In step 2 we update the labels of the remaining vertices in _Q_ as follows. If _w_ is in _Q_ , define _t_ ( _w_ ) := min{ _t_ ( _w_ ), _d_ ( _v_ , _w_ )}, where _v_ is the latest entry in _P_. We continue similarly until _P_ = _V_. The worst-case complexity of the algorithm can be obtained immediately. Initially, _Q_ has ( _n_ – 1) elements. So in step 1, there will be at most ( _n_ – 2) comparisons to start with. Thus this step entails ( _n_ – 2) + ( _n_ – 3) + . . . + 2 + 1 comparisons. In step 2 we have ( _n_ – 2) elements to start with.
The label of each vertex _w_ in _Q_ has to be compared with _d_ ( _v_ , _w_ ), where _v_ is the latest entry in _P_. This involves ( _n_ – 2) comparisons to start with and then ( _n_ – 3) comparisons, and so on. Thus step 2 also entails in the worst case as many comparisons as step 1. Thus the worst-case complexity of Prim's algorithm is twice the sum of the first ( _n_ – 2) natural numbers, which is _O_ ( _n_ ^2). The different iterations of this algorithm for the network of Figure 7.3.1 are as follows. **Iteration 1** _Step 1:_ A smallest label corresponds to vertex 2. So the edge {1, 2} is in _T_. The set _P_ is updated as _P_ = {1, 2}. _Step 2:_ FIGURE 7.3.1 **Iteration 2** _Step 1:_ Vertex 5 is chosen for updating _P_. Since _d_ (1, 5) = _d_ (2, 5), we may take either the edge {1, 5} or the edge {2, 5} for updating _T_. _Step 2:_ **Iteration 3** _Step 1:_ Vertex 3 is chosen for updating _P_ and the edge {2, 3} is chosen for updating _T_. _Step 2:_ **Iteration 4** _Step 1:_ Vertex 6 is chosen for updating _P_ and the edge {3, 6} is chosen for updating _T_. _Step 2:_ **Iteration 5** _Step 1:_ Vertex 4 is chosen for updating _P_ and the edge {5, 4} is chosen for updating _T_. _Step 2:_ _P_ = {1, 2, 3, 4, 5, 6}, _Q_ = the empty set. Output: The edges of an MST are {1, 2}, {1, 5}, {2, 3}, {3, 6}, and {4, 5}. **_Prim's Algorithm_** ( **_Matrix Method_** ) Let _D_ = ( _d_ ( _i, j_ )) be the _n_ × _n_ matrix, where _n_ is the number of vertices of _G_ and _d_ ( _i, j_ ) is the weight of the edge { _i_ , _j_ } if there is an edge between _i_ and _j_ ; otherwise, it is infinity. Initially we delete all elements of column 1 and mark row 1 with a *. Initially, no entries are underlined. Each iteration has two steps as follows. _Step 1:_ Select a smallest element from the entries (with no underlines) in the starred rows. Stop if no such element exists; the edges that correspond to the underlined entries then constitute an MST.
_Step 2:_ If _d_ ( _i, j_ ) is selected in step 1, underline that entry, mark row _j_ with a *, and delete the remaining elements in column _j_. Go to step 1. As an illustration, let us consider the network given in Figure 7.2.1. Initially, all entries in column 1 have been deleted and row 1 is starred. At this point no entry is underlined. _Iteration 1:_ _Iteration 2:_ _Iteration 3:_ _Iteration 4:_ _Iteration 5:_ The procedure at this stage halts, giving the edges that correspond to the underlined entries in the matrix _D_ of the last iteration. Thus the edges {1, 2}, {1, 4}, {1, 5}, {2, 3}, and {3, 6} form a minimal spanning tree in _G_. **_7.4 COMPARISON OF THE TWO ALGORITHMS_** The execution time of Prim's algorithm depends only on the number of vertices, but the time for Kruskal's algorithm increases as the number of edges increases for a network with the same number of vertices. However, in general, it is not possible to assert which one is more efficient. The efficiency depends on, among other things, the structure of the network and the distribution of weights. Many variations, based primarily on data structures and implementation details, have been suggested to improve the efficiency. It has been observed that for networks with up to 100 vertices, Prim's method appears to be more efficient, particularly so when there is an abundance of edges. The following running times for the two algorithms run on an Amdahl 470 V/8 computer are reported by Syslo et al. (1983). **_7.5 NOTES AND REFERENCES_** For a discussion of spanning trees in general, see any standard book on graph theory listed at the end of the book. See Lawler (1976), where an algorithm ("not a very good one") is described to solve the Steiner tree problem. For the MST problem the earliest references are probably the classical papers of Kruskal (1956) and Prim (1957). The matrix description of Prim's algorithm is from Hu (1982).
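The label-updating version of Prim's algorithm from Section 7.3 can be sketched in Python. The weights below are hypothetical (1 through 9 for the nine edges of Figure 7.2.1, consistent with the sorted order given in Section 7.2), and `prim` is an illustrative helper name.

```python
INF = float("inf")

def prim(n, weight):
    """Prim's algorithm with the labels t(i) of Section 7.3: repeatedly
    move the vertex with the smallest label from Q into P and update
    the labels of the remaining vertices. O(n^2) comparisons overall."""
    t = {i: weight.get((1, i), INF) for i in range(2, n + 1)}  # labels
    nearest = {i: 1 for i in t}        # tree endpoint realising t(i)
    tree = []
    while t:
        v = min(t, key=t.get)          # step 1: smallest label
        tree.append((nearest[v], v))
        del t[v]
        for w in t:                    # step 2: t(w) := min(t(w), d(v, w))
            d = weight.get(tuple(sorted((v, w))), INF)
            if d < t[w]:
                t[w], nearest[w] = d, v
    return tree

# Hypothetical weights 1..9 for the nine edges of Figure 7.2.1
weight = {(1, 2): 1, (1, 5): 2, (2, 5): 3, (2, 3): 4, (3, 5): 5,
          (3, 6): 6, (5, 6): 7, (1, 4): 8, (4, 5): 9}
print(prim(6, weight))   # → [(1, 2), (1, 5), (2, 3), (3, 6), (1, 4)]
```

With these weights the tree agrees with the one Kruskal's algorithm produces on the same network, as Section 7.4 leads one to expect when all weights are distinct.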
For a description of the implementation details of these two greedy algorithms, a very good reference is the book by Syslo et al. (1983). Other general references are Cheriton and Tarjan (1976), Graham and Hell (1982), and Gabow et al. (1986).

**_7.6 EXERCISES_**

**7.1.** Suppose that _G_ is a graph in which _T_ is a spanning tree, _C_ is a cycle, and _D_ is a cutset. Prove the following:

(**a**) _C_ and _D_ have an even number of edges in common.

(**b**) _D_ and _T_ have at least one edge in common.

(**c**) _C_ and the complement of _T_ have at least one edge in common.

**7.2.** Prove that if the weights of the edges in a connected graph are all distinct, there is a unique MST in the graph.

**7.3.** If _e_ is the unique edge with the smallest weight in a connected network _G_, prove that _e_ is an edge in every MST in _G_.

**7.4.** (**a**) Obtain a Steiner tree with respect to the vertex set _W_ = {1, 2, 4, 5} in the network shown in Figure 7.2.1. (**b**) Suppose that the weights of the edges {1, 5}, {2, 5}, and {4, 5} are all 10 units each. What will be the Steiner tree with respect to _W_?

**7.5.** Use Kruskal's algorithm to obtain an MST in _G_ with the following weight matrix:

**7.6.** Use Prim's algorithm (matrix method) to obtain an MST in the graph of Problem 7.5. Do you get the same tree in Problem 7.5 and in Problem 7.6? Find the weights of the two trees.

FIGURE 7.6.1

**7.7.** Modify Kruskal's algorithm to obtain a maximum spanning tree.

**7.8.** Obtain a maximum spanning tree in the graph of Problem 7.5.

**7.9.** Delete as many edges as possible from the graph _G_ of Figure 7.6.1 to get a connected graph _G′_ such that the weight of _G′_ is minimum.

**7.10.** A cycle in a connected network _G_ that passes through every vertex of _G_ is called a Hamiltonian cycle. An arbitrary connected network _G_ need not have a Hamiltonian cycle.
The sum of the weights of all the edges in a Hamiltonian cycle is the weight of the Hamiltonian cycle, and a Hamiltonian cycle with minimum weight is called a traveling salesman (TS) cycle. Show that it is possible to obtain a lower bound for the weight of a TS cycle in a connected network _G_ using the MST algorithm, whether or not _G_ has a Hamiltonian cycle. Obtain such a bound for the network in Problem 7.10.

**7.11.** Construct a network with 5 vertices and 5 edges of distinct weights in which the unique MST is the minimal Steiner tree with respect to 4 of these vertices.

**Shortest Path Problems**

**_8.1 INTRODUCTION_**

If each arc of a digraph is assigned a numerical weight (i.e., a distance), it is a natural and intuitively appealing problem to find a shortest path (if it exists) from a prescribed vertex to another prescribed vertex. Many optimization problems can be formulated and solved as shortest path problems of this type, and many complex problems in operations research can be solved by procedures that call upon shortest path algorithms as subroutines. Shortest path problems are in fact the most fundamental and also the most commonly encountered problems in combinatorial optimization. According to Goldman (1982), a shortest path algorithm developed by the U.S. Department of Transportation is regularly used billions of times every year. We confine our attention to two types of problems: (1) the problem of finding a shortest path from a vertex _v_ to another vertex _w_, and (2) the problem of finding a shortest path from every vertex to every other vertex. Of course, (1) is a special case of (2). In what follows we discuss two polynomial algorithms to solve the shortest path (S.P.) problem. The first algorithm finds a S.P. and the shortest distance (S.D.) from a specified vertex to every other vertex. This algorithm is due to Dijkstra (1959). Our next algorithm, known as the Floyd-Warshall algorithm, enables us to find the S.P. and S.D.
from every vertex to every other vertex. This procedure is due to Floyd (1964) and Warshall (1962). We assume that the weight function is nonnegative in the case of Dijkstra's algorithm, even though it is possible to relax this restriction. It should be noted that there is a real difference between problems involving nonnegative weight functions and problems involving arbitrary weight functions: in the latter case the problem becomes unbounded if the network has a cycle with negative weight. The Floyd-Warshall algorithm detects the existence of such negative cycles.

**_8.2 DIJKSTRA'S ALGORITHM_**

In the network _G_ = (_V_, _E_), let _V_ = {1, 2, . . . , _n_} and let the weight of the arc (_i_, _j_) be _a_(_i_, _j_), which is assumed to be nonnegative. If there is no arc from _i_ to _j_ (_i_ ≠ _j_), then _a_(_i_, _j_) is taken as infinity. We thus have the _n_ × _n_ weight matrix _A_ = (_a_(_i_, _j_)) for _G_, in which all diagonal entries are 0. The problem is to find the S.D. and a S.P. from vertex 1 to all other vertices. The procedure is as follows. Each vertex _i_ is assigned a label that is either permanent or tentative. The permanent label _L_(_i_) of _i_ is the S.D. from 1 to _i_, whereas the tentative label _L′_(_i_) of _i_ is an upper bound on the S.D. from 1 to _i_. At each stage of the procedure, _P_ is the set of vertices with permanent labels and _T_ is its complement. Initially, _P_ = {1} with _L_(1) = 0 and _L′_(_i_) = _a_(1, _i_) for each other vertex _i_. When _P_ = _V_ the algorithm halts. Each iteration consists essentially of two steps, as follows.

_Step 1_ (_Designation of a Permanent Label_): Find a vertex _k_ in _T_ for which _L′_(_k_) is minimal. Stop if there is no such _k_, because then there is no path from 1 to any vertex in _T_. Adjoin _k_ to the set _P_. Stop if _P_ = _V_.
_Step 2_ (_Revision of Tentative Labels_): If _j_ is a vertex in _T_, replace _L′_(_j_) by the smaller of _L′_(_j_) and _L_(_k_) + _a_(_k_, _j_). Go to step 1.

We now prove that the algorithm correctly solves the problem by induction on the number of elements in _P_.

THEOREM 8.2.1 Dijkstra's algorithm finds the S.D. from 1 to each _i_.

**_Proof:_** We prove by induction on the number of elements in _P_ that for every _i_ in _P_, _L_(_i_) is equal to the S.D. from 1 to _i_, and for every _j_ not in _P_, _L′_(_j_) is the length of a S.P. from 1 to _j_, every intermediate vertex of which is in _P_. This is true when _P_ has one element. Suppose that this is true when _P_ has up to _m_ elements. By the induction hypothesis, just before vertex _k_ is adjoined to the set _P_, _L′_(_k_) is equal to the length of a S.P. from 1 to _k_ in which every intermediate vertex is a vertex in _P_. Now _k_ is adjoined to _P_ and _L_(_k_) = _L′_(_k_). We claim that _L_(_k_) is the S.D. from 1 to _k_. If not, let _d_ be the S.D. from 1 to _k_. Then _d_ < _L_(_k_) = _L′_(_k_). So any S.P. from 1 to _k_ must have at least one vertex not in _P_ as an intermediate vertex; let _v_ be the first such vertex. The portion of this path from 1 to _v_ has all its intermediate vertices in _P_, so by the induction hypothesis its length is at least _L′_(_v_); and since the weights are nonnegative, its length is at most _d_. Hence _L′_(_v_) ≤ _d_ < _L′_(_k_), which contradicts the choice of _k_ as a vertex for which _L′_ is minimal.

The worst-case complexity of the algorithm is _O_(_n_²). This can be established as follows. There are at most _n_ iterations. In Step 1, in the first iteration we have at most (_n_ − 2) comparisons, in the next iteration at most (_n_ − 3) comparisons, and so on. Therefore, there will be at most (_n_ − 2) + (_n_ − 3) + . . . + 1 comparisons in Step 1. In Step 2 we again have at most (_n_ − 2) comparisons and also (_n_ − 2) additions in the first iteration. Thus in Step 2 we have (_n_ − 2) + (_n_ − 3) + . . .
+ 1 comparisons and an equal number of additions in the worst case. Thus in all we have (_n_ − 1)(_n_ − 2) comparisons and (_n_ − 1)(_n_ − 2)/2 additions, establishing the polynomial complexity of the algorithm.

Once the S.D. from 1 to _i_ is known, it is easy to find a S.P. from 1 to _i_ by examining vertices _j_ such that (1) _L_(_j_) is less than _L_(_i_) and (2) there is an arc from _j_ to _i_. Here is an illustrative example to find the S.D. and S.P. from vertex 1 to the remaining vertices in the directed network shown in Figure 8.2.1.

**Iteration 1**

_Step 1:_

FIGURE 8.2.1

Vertex 2 gets a permanent label.

_Step 2:_

**Iteration 2**

_Step 1:_ Vertex 3 gets a permanent label.

_Step 2:_

**Iteration 3**

_Step 1:_ Vertex 4 gets a permanent label.

_Step 2:_

**Iteration 4**

_Step 1:_ Vertex 6 gets a permanent label.

_Step 2:_

**Iteration 5**

_Step 1:_ Vertex 5 gets a permanent label.

_Step 2:_

**Iteration 6**

_Step 1:_ Vertex 7 gets a permanent label.

_Step 2:_

Once we obtain the S.D. from vertex 1 to each vertex, it is very easy to determine a shortest path from 1 to each vertex. This is achieved by constructing a shortest distance tree rooted at 1 as follows. For each vertex _i_ (other than 1), find a vertex _j_ such that (1) there is an arc from _j_ to _i_ in the network, (2) _L_(_j_) < _L_(_i_), and (3) _L_(_j_) + _a_(_j_, _i_) = _L_(_i_). Tie-breaking is arbitrary. Include arc (_j_, _i_) in the tree. In our example there are arcs from 3 and 4 to 6. We see that _L_(3) + _a_(3, 6) = 5 + 4 = 9 = _L_(6) and _L_(4) + _a_(4, 6) = 7 + 5 = 12.

FIGURE 8.2.2

Thus arc (3, 6) is in the tree. It is easily seen that there is a tie between (3, 5) and (6, 5) to be included in the tree, and we can take only one of them. In Figure 8.2.2 we have a S.D. tree rooted at 1, giving the shortest paths from 1 to all other vertices.

**Note:** Dijkstra's algorithm need not solve the S.D. problem for an arbitrary weight function.
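The two-step iteration of Section 8.2 can be sketched in Python as follows. This is a minimal illustration using the weight-matrix representation; the 0-based vertex numbering is an implementation convenience:

```python
import math

def dijkstra(a, source=0):
    """Dijkstra's labelling procedure in its O(n^2) form.

    `a` is the n x n weight matrix: a[i][j] is the arc weight, math.inf
    when there is no arc (i, j), and a[i][i] == 0.  Weights are assumed
    nonnegative.  Returns the list L of permanent labels, where L[i] is
    the shortest distance from `source` to vertex i.
    """
    n = len(a)
    L = [math.inf] * n          # tentative labels L'(i)
    L[source] = 0
    permanent = [False] * n
    for _ in range(n):
        # Step 1: make permanent the tentative vertex with minimal label.
        k = min((i for i in range(n) if not permanent[i]),
                key=lambda i: L[i], default=None)
        if k is None or L[k] == math.inf:
            break               # remaining vertices are unreachable
        permanent[k] = True
        # Step 2: revise the tentative labels of the other vertices.
        for j in range(n):
            if not permanent[j]:
                L[j] = min(L[j], L[k] + a[k][j])
    return L
```

On the three-vertex network with a negative arc weight discussed next, this procedure returns the label 8 for vertex 3 even though the true shortest distance is 7, which is exactly the failure the note above warns about.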
Consider the network _G_ = (_V_, _E_) where _V_ = {1, 2, 3} and the arcs are (1, 2), (1, 3), and (2, 3) with weights 10, 8, and −3, respectively. In iteration 2 we get _L_(3) = 8, but the S.D. from 1 to 3 is only 7.

**_8.3 FLOYD-WARSHALL ALGORITHM_**

We saw that Dijkstra's algorithm is not suitable when the weight function is arbitrary. There are several polynomial algorithms to solve the S.D. problem when the weight function is not restricted to be nonnegative. One well-known algorithm is the Floyd-Warshall algorithm, which can be used to find the S.D. and S.P. from every vertex to every other vertex for arbitrary weight functions and which detects the existence of negative cycles. If there is a negative cycle that starts at _i_ and ends at _i_, it does not make sense to consider the S.D. from _i_ to any vertex in the network in a minimization problem.

Consider a directed network with _n_ vertices and an arbitrary weight function. Let _A_ = (_a_(_i_, _j_)) be the _n_ × _n_ weight matrix and let _P_ = (_p_(_i_, _j_)) be another _n_ × _n_ matrix where _p_(_i_, _j_) = _j_. We have _n_ iterations during the execution of the algorithm. Iteration _j_ begins with two _n_ × _n_ matrices _A_(_j_ − 1) and _P_(_j_ − 1) (initially _A_(0) = _A_ and _P_(0) = _P_) and ends with _A_(_j_) and _P_(_j_). The elements in these matrices are defined as follows. If the (_i_, _k_) entry in _A_(_j_ − 1) is at most the sum of the (_i_, _j_) entry and the (_j_, _k_) entry in _A_(_j_ − 1), then the (_i_, _k_) entry in _A_(_j_) equals the (_i_, _k_) entry in _A_(_j_ − 1) and the (_i_, _k_) entry in _P_(_j_) equals the (_i_, _k_) entry in _P_(_j_ − 1). Otherwise, the (_i_, _k_) entry in _A_(_j_) is the sum of the (_i_, _j_) entry and the (_j_, _k_) entry in _A_(_j_ − 1), and the (_i_, _k_) entry in _P_(_j_) is equal to the (_i_, _j_) entry in _P_(_j_ − 1). When the algorithm terminates we are left with two matrices, _A′_ = _A_(_n_) and _P′_ = _P_(_n_). It can be proved by induction that the (_i_, _j_) entry in _A′_ is the S.D. from _i_ to _j_.
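In code, the iteration just described can be sketched as follows. The sketch updates the two matrices in place, which is equivalent to keeping the sequence _A_(0), . . . , _A_(_n_); the 0-based indices are an implementation convenience:

```python
import math

def floyd_warshall(a):
    """Floyd-Warshall with the path matrix P of Section 8.3.

    `a` is the n x n weight matrix (math.inf for missing arcs, 0 on the
    diagonal).  Returns (A, P): A[i][k] is the shortest distance from i
    to k, and P[i][k] is the second vertex on a shortest path from i to
    k.  A negative diagonal entry in A signals a negative cycle.
    """
    n = len(a)
    A = [row[:] for row in a]
    P = [[k for k in range(n)] for _ in range(n)]   # p(i, k) = k initially
    for j in range(n):              # iteration j: allow j as an intermediate
        for i in range(n):
            for k in range(n):
                # The triple operation: route through j if that is shorter.
                if A[i][j] + A[j][k] < A[i][k]:
                    A[i][k] = A[i][j] + A[j][k]
                    P[i][k] = P[i][j]
    return A, P
```

Each of the _n_ iterations performs the comparison-and-addition update over all (_i_, _k_) pairs, giving the _O_(_n_³) bound stated in the text.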
See Papadimitriou and Steiglitz (1982) for a proof. It can also be verified that if the (_i_, _j_) entry in _P′_ is _k_, then (_i_, _k_) is the first arc in a S.P. from _i_ to _j_, and we use this fact to obtain a S.P. from _i_ to _j_. This procedure of updating the _A_-matrix (known as the **triple operation**) involves at most (_n_ − 1)² comparisons and an equal number of additions for each iteration, whereas the procedure to update the _P_-matrix does not involve any work. Thus the worst-case complexity is _O_(_n_³), as there are _n_ iterations.

Let us illustrate this procedure in the case of the network shown in Figure 8.3.1. We compute the matrices

FIGURE 8.3.1

and

From _A′_ we see that the S.D. from vertex 3 to vertex 1 is the (3, 1) entry in that matrix, which is 6. From _P′_ we see that the (3, 1) entry is 4, so (3, 4) is the first arc in the S.P. from 3 to 1. The (4, 1) entry is 2, so the next arc is (4, 2). Then we see that the (2, 1) entry is 1. Thus the last arc is (2, 1).

Next consider a digraph with four vertices for which we have

At the end of the second iteration we have

We see a new development here: _the diagonal element (4, 4) is negative_ instead of being 0, indicating the existence of a negative cycle in the network. In the corresponding _P_ matrix we see that the (4, 4) element is 2, giving the arc (4, 2); the (2, 4) element is 1, giving the arc (2, 1); and the (1, 4) element is 4, giving the arc (1, 4), creating the cycle 4 → 2 → 1 → 4 with total weight −1.

**_8.4 COMPARISON OF THE TWO ALGORITHMS_**

To solve the all-pair (i.e., from every vertex to every other vertex) S.D. problem, it appears that on the average Dijkstra's algorithm will outperform that of Floyd-Warshall, as can be seen from the following table in Syslo et al. (1983).
**Computing Times for All-Pair Shortest Path Algorithms on Complete Networks**

**_8.5 NOTES AND REFERENCES_**

No other problem in network optimization has received as much attention as the shortest distance problem. For an excellent review, see Dreyfus (1969). Some excellent general references are the relevant chapters in the books by Lawler (1976), Minieka (1978), and Papadimitriou and Steiglitz (1982). The paper by Dijkstra (1959) is one of the earliest papers on this topic. See Nemhauser (1972) for an extension of Dijkstra's algorithm to networks with arbitrary weights. The Floyd-Warshall algorithm was published as an ALGOL algorithm by Floyd (1964) based on the work of Warshall (1962). This algorithm is by far one of the most efficient known algorithms for solving the all-pair shortest distance problem. For another efficient algorithm, see Tabourier (1973).

**_8.6 EXERCISES_**

**8.1.** The distance matrix of a digraph is as follows: Find _A_(7) and _P_(7) using the Floyd-Warshall algorithm.

**8.2.** Find the S.D. and a S.P. from vertex 4 to vertex 7 in Problem 8.1.

**8.3.** Construct a directed tree rooted at vertex 1 giving the S.D. from 1 to the other vertices in Problem 8.1.

**8.4.** Replace the number −1 that appears in the fourth column of the matrix in Problem 8.1 by −3. You detect a negative cycle now. What is this negative cycle?

**8.5.** Find a S.P. from 4 to 2 that does not pass through 5, 6, or 7 in Problem 8.1.

**8.6.** Replace −1 by 1 in the matrix of Problem 8.1 and find a tree rooted at vertex 1 giving the S.D. from vertex 1 to all vertices using Dijkstra's algorithm.

**8.7.** At a small but growing airport the local airline company is purchasing a new tractor-trailer train to bring luggage to and from the airplanes. A new mechanized luggage system will be installed in three years and the tractor will not be needed after that.
However, it may be more economical to replace the tractor after one or two years because, due to heavy use, running time and maintenance cost will increase rapidly with age. The following array gives the total net discounted cost associated with purchasing a tractor (purchase price minus trade-in allowance plus running and maintenance costs) at the end of year _i_ and trading it in at the end of year _j_ (assuming that year 0 is now). The problem is to determine at what times (if any) the tractor should be replaced to minimize the total cost. Formulate this as a shortest distance problem and solve it. (This problem is from Hillier and Lieberman, 1986.)

**8.8.** A garage sells used motorcycles for $500.00 each. The sale takes place only at the beginning of the academic year, and the purchase price is the same every year. A student can buy a vehicle and use it for four years, or replace it after using it for one, two, or three years. The trade-in value of a vehicle is $100.00 after one year, $50.00 after two years, $30.00 after three years, and $0.00 after four years. The maintenance costs for a used vehicle are $200.00, $400.00, $600.00, and $800.00, respectively, for the four years. The problem for a student interested in buying a motorcycle from the dealer for use during four years in college is to decide whether the cycle bought as a freshman should be kept for all four years or should be replaced so that the total cost is a minimum. If replacement is necessary, at what intervals? Formulate this problem as a shortest distance problem and solve it.

**8.9.** If the weights of the edges of a connected graph are all distinct positive numbers, is it true that there is a unique minimal spanning tree in the graph? Is it true that the shortest path between any two vertices is unique?
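For replacement problems like Exercises 8.7 and 8.8, the schedule can be modeled as a shortest-distance problem on a small acyclic network whose vertices are the starts of the years and whose arc (_i_, _j_) means "buy at year _i_, trade in at year _j_." The sketch below uses the dollar figures of Exercise 8.8 under one reading of the data (the assumption being that the _k_-th maintenance figure applies to the _k_-th year of any vehicle's life); it illustrates the formulation, not the book's own solution:

```python
import math

# Vertices 0..4 are the starts of the four academic years.
PRICE = 500
TRADE_IN = {1: 100, 2: 50, 3: 30, 4: 0}    # value after keeping k years
MAINTENANCE = [200, 400, 600, 800]         # assumed cost in k-th year of ownership

def arc_cost(years_kept):
    """Net cost of buying a vehicle and trading it in years_kept later."""
    return PRICE - TRADE_IN[years_kept] + sum(MAINTENANCE[:years_kept])

def cheapest_plan():
    """Shortest distance from year 0 to year 4 in the replacement DAG."""
    dist = [0] + [math.inf] * 4
    for j in range(1, 5):
        # Relax every arc (i, j); the network is acyclic, so one pass
        # in topological (chronological) order suffices.
        dist[j] = min(dist[i] + arc_cost(j - i) for i in range(j))
    return dist[4]
```

Because the network is a small DAG, a single chronological pass replaces the full Dijkstra machinery; `cheapest_plan()` returns the minimum total four-year cost under these assumptions.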
**8.10.** A network is said to satisfy the triangle inequality if for every three distinct arcs (_i_, _j_), (_j_, _k_), and (_i_, _k_) the weight of the arc (_i_, _k_) does not exceed the sum of the weights of the other two arcs. A directed path _P_ from vertex _x_ to vertex _y_ is a minimum length path if the number of arcs in _P_ does not exceed the number of arcs in any other path from _x_ to _y_. Is it necessary that the shortest path from _x_ to _y_ is a minimum length path?

**What Is NP-Completeness?**

**_A.1 PROBLEMS AND THEIR INSTANCES_**

Informally, we always distinguish between a _problem_ and an _instance_ of the problem. For example, "solve a linear system of equations" is a problem, an instance of which is "given an _m_ × _n_ matrix _A_ and an _m_ × 1 matrix _B_, test whether there exists an _n_ × 1 matrix _x_ such that _Ax_ = _B_, and if the answer is yes, obtain such a vector." Here _A_ and _B_ are the "inputs" and _x_ is the "solution" or "output." On the other hand, an instance of the _decision problem_ "does a linear system have a solution?" is "is there an _x_ such that _Ax_ = _B_?" The output now is either "yes" or "no." Problems and decision problems thus viewed have an infinite number of instances. A more formal approach to the concepts of a "problem" and "an instance of a problem," as defined in Schrijver (1986), is along the following lines. An **alphabet** is a finite set _L_ the elements of which are **symbols** or **letters**. An ordered finite sequence of symbols from _L_ is called a **string** or a **word**. The set of all strings or words from _L_ is denoted by _L*_. The size or **length** of a string is the number of components in it. If _L_ = {_a_, _b_, _c_}, the length of _x_ = _abbaa_ is 5 even though _x_ consists of only two distinct symbols. The string of size 0 is called the **empty string**, denoted by ϕ.
There are several ways of encoding rational numbers, vectors, matrices, systems of inequalities, linear systems, graphs (matrix representations), and so on, as strings of symbols from a fixed alphabet such as _L_ = {0, 1}. In our present discussion we shall skip the details of such encodings. For more details, see Garey and Johnson (1979). A **problem** _p_ is a subset of _L*_ × _L*_. For any problem _p_ we have the corresponding metamathematical problem: Given a string _z_ in _L*_, find a string _y_ in _L*_ such that (_z_, _y_) is in _p_, or report that no such string _y_ exists. Here the string _z_ is called an **instance** or **input** of the problem and _y_ is called the **solution** or **output**. A problem _p_ is called a **decision problem** if whenever (_z_, _y_) is in _p_, then _y_ is the empty string. If _L*_(_p_) = {_z_ ∈ _L*_ : (_z_, ϕ) ∈ _p_}, the corresponding metamathematical problem is: Given a string _z_ in _L*_, does it belong to _L*_(_p_)? Thus the set {((_A_, _B_), _x_) : _A_ is a matrix, _B_ and _x_ are column vectors such that _Ax_ = _B_} is a subset of _L*_ × _L*_ (where _L_ = {0, 1}), defining the problem _p_ for which (_A_, _B_) is an instance and _x_ is a solution. This problem can be couched in metamathematical language as follows: Given the string (_A_, _B_), find a string _x_ (if it exists) such that _Ax_ = _B_. Similarly, the set {((_A_, _B_), ϕ) : _A_ is a matrix, _B_ is a column vector such that _Ax_ = _B_ for at least one column vector _x_} is a decision problem, the metamathematical version of which is as follows: Given a matrix _A_ and a column vector _B_, is there a column vector _x_ such that _Ax_ = _B_?

**_A.2 THE SIZE OF AN INSTANCE_**

If the rational number _q_ is of the form _m_/_n_ (where _m_ and _n_ are relatively prime, _m_ is an integer, and _n_ is a positive integer), the size of _q_ is defined to be size(_q_) = 1 + ⌈log₂(|_m_| + 1)⌉ + ⌈log₂(_n_ + 1)⌉.
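These size definitions are easy to mechanize. The sketch below assumes base-2 logarithms and the absolute value of the numerator, a common convention for this definition (e.g., in Schrijver's treatment), and uses Python's `Fraction` to put a rational in lowest terms:

```python
import math
from fractions import Fraction

def size_rational(q):
    """size(q) = 1 + ceil(log2(|m| + 1)) + ceil(log2(n + 1)) for q = m/n
    in lowest terms (base-2 logarithms assumed)."""
    q = Fraction(q)                       # reduces m/n to lowest terms
    m, n = q.numerator, q.denominator
    return (1 + math.ceil(math.log2(abs(m) + 1))
              + math.ceil(math.log2(n + 1)))

def size_vector(v):
    """size of a rational vector: n plus the sizes of its components."""
    return len(v) + sum(size_rational(x) for x in v)

def size_matrix(M):
    """size of an m x n rational matrix: mn plus the sizes of its entries."""
    rows, cols = len(M), len(M[0])
    return rows * cols + sum(size_rational(x) for row in M for x in row)
```

Note that `size_rational` grows with the number of digits of _m_ and _n_, not with their values, which is the point made in the following paragraph.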
There are other ways of defining the size of a rational number, but it can be shown that most of these definitions are "linearly equivalent" (see Section A.5 for a definition of linear equivalence). Thus the size of a positive integer is proportional to its logarithm, not to its value. See Garey and Johnson (1979) again for sizes of instances and encodings. If _A_ is a vector with _n_ rational components, we define size(_A_) = _n_ + the sum of the sizes of all the components of _A_. Similarly, if _M_ is an _m_ × _n_ rational matrix, define size(_M_) = _mn_ + the sum of the sizes of all the elements of the matrix. The size of the linear equation _ax_ = _b_ or the linear inequality _ax_ ≤ _b_ is 1 + size(_a_) + size(_b_). The size of the linear system _Ax_ = _B_ is 1 + size(_A_) + size(_B_). The size of a graph is the size of its incidence matrix.

**_A.3 ALGORITHM TO SOLVE A PROBLEM_**

An **algorithm** to solve a problem, in an informal sense, is a finite sequence of instructions to obtain an output for a given input of the problem. It is a step-by-step procedure for solving the problem. Thus, given the instance _z_ in _L*_, an algorithm for the problem _p_ determines an output _y_ in _L*_ such that (_z_, _y_) is in _p_, or terminates without delivering an output if there is no such string _y_. It is possible to define an algorithm in a more formal sense in terms of Turing machines or computer programs in some programming language. For our purpose this informal concept of an algorithm will be sufficient. We mention in passing that there are well-defined problems in mathematics for which no algorithms exist. A problem is **undecidable** if there is no algorithm that will solve every instance of the problem.
It was proved in 1970 by the then 22-year-old Russian mathematician Yuri Matiyasevich that the decision problem known as **Hilbert's tenth problem**, which asks whether a polynomial equation in several variables with integer coefficients has any integer solutions, is an undecidable problem. The most famous undecidable problem in computer science is the **halting problem:** Given a computer program with its input, will it ever halt? When we say that the halting problem is undecidable, we mean that there is no algorithm which will decide whether an arbitrary computer program will get into an infinite loop while working on a given input. An excellent reference for the topic of undecidability and related items of interest is the book by Lewis and Papadimitriou (1981).

**_A.4 COMPLEXITY OF AN ALGORITHM_**

If we have two algorithms at our disposal to solve every instance of a problem, it is natural to compare them to find out whether one is better or more efficient than the other. For this purpose we have to measure the amount of work done by an algorithm, which is the number of "basic operations" needed to solve the problem using the algorithm. Here are some examples of basic operations. The basic operation in a sorting problem is the comparison of two numbers in a given list of entries, and thus the work done in a sorting problem is the number of comparisons. In a problem involving both multiplication and addition, we may take multiplication as the basic operation, since multiplication is more difficult than addition. Thus the work done in multiplying two _n_ × _n_ matrices is at most _n_³ (see Section 3.5). If _A_ is an algorithm to solve (every instance of) a problem and if _x_ is an instance of the problem, the number of basic operations needed to solve _x_ using _A_ is denoted by _w A_(_x_).
Since we are interested in the efficiency of the actual working of the algorithm, it is important that the measure we choose to define the work done is independent of the computer used, the particular computer program, the programming language, and other implementation details. Usually, the work done is taken as a function of the _size_ of an instance of the problem. Now if two instances have the same size, it does not follow that the work done is the same for both. Thus we have to aggregate, in one way or another, the work done for all instances of the same size. One way of doing this is by taking a worst-case approach. Thus the **worst-case complexity** of the algorithm _A_ to solve the problem _p_ is defined to be _f A_(_n_), where

_f A_(_n_) = max {_w A_(_x_) : _x_ is an instance of _p_ and the size of _x_ is _n_}

We may take a different approach as follows. Let _h_(_x_) be the probability that an instance _x_ of the problem is taken as a candidate for input. Then the **average-case complexity** is the sum of all terms _h_(_x_) · _w A_(_x_), where _x_ is an instance of size _n_. In what follows, complexity means complexity in the worst case.

**_A.5 THE "BIG OH" OR THE_** O(.) **_NOTATION_**

Let _f_ and _g_ be two functions from the set of natural numbers to the set of nonnegative real numbers. If there is a positive constant _c_ and a natural number _n_₀ such that _f_(_n_) ≤ _c_ · _g_(_n_) for all _n_ ≥ _n_₀, we write "_f_ is _O_(_g_)" or "_f_ = _O_(_g_)" or "_f_(_n_) is _O_(_g_(_n_))" or "_f_(_n_) = _O_(_g_(_n_))" and say (as in Wilf, 1986) that "_f_(_n_) **is big oh of** _g_(_n_)." Two functions _f_ and _g_ are **linearly equivalent** if _f_ = _O_(_g_) and _g_ = _O_(_f_). We write _f_ >< _g_ if _f_ and _g_ are linearly equivalent. The relation >< thus defined is an equivalence relation, and the **rate of growth** of _f_ is its equivalence class, which may be represented by a canonical member from that class.
For example, let _f_(_n_) = 5_n_² + 9_n_ + 7 and _g_(_n_) = 8_n_² + 23. Then it is easy to show that _f_ >< _g_, and a typical representative from the equivalence class to which _f_ and _g_ belong is the function _p_(_n_) = _n_². Thus we write _f_(_n_) = _O_(_n_²) and _g_(_n_) = _O_(_n_²). Now consider the function _h_(_n_) = 4_n_³ + 9_n_. Then _f_(_n_) is _O_(_h_(_n_)), but _h_(_n_) is definitely not _O_(_f_(_n_)). Notice that the big oh notation gives only an upper limit. If _f_(_n_) is _O_(_n_^_k_), it is quite possible that _f_(_n_) is _O_(_n_^_r_) for some _r_ less than _k_. At the same time, _f_(_n_) is _O_(_n_^_r_) for all _r_ ≥ _k_. Let _c_ = lim [_f_(_n_)/_g_(_n_)] as _n_ goes to plus infinity. Then it can easily be verified that:

1. If _c_ is finite and nonzero, _f_ and _g_ are linearly equivalent.

2. If _c_ is zero, _f_ is _O_(_g_) but _g_ is not _O_(_f_).

3. If _c_ is infinite, _g_ is _O_(_f_) but _f_ is not _O_(_g_).

As a consequence, we say that _f_ is of **lower order** than _g_ (or, equivalently, _g_ is of **higher order** than _f_) if _c_ = 0. Using this ratio test one can establish that if _k_ is a positive integer, then _n_^_k_ is of higher order than log _n_, 2^_n_ is of higher order than _n_^_k_, and _n_! is of higher order than 2^_n_.

**_A.6 EASY PROBLEMS AND DIFFICULT PROBLEMS_**

An algorithm _A_ to solve a problem _p_ is called a **polynomial algorithm** or **polynomial-time algorithm** if its worst-case complexity _f A_(_n_) is _O_(_n_^_k_) for some fixed positive integer _k_. Thus an algorithm with complexity _n_ log _n_ is a polynomial algorithm because _n_ log _n_ is _O_(_n_²). An algorithm whose complexity violates all polynomial bounds is referred to as an **exponential algorithm**.
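The ratio test can be checked numerically for the example functions _f_, _g_, and _h_ above. Evaluating the ratio at one large _n_ is of course a heuristic sketch, not a proof of the limit:

```python
def growth_ratio(f, g, n=10**6):
    """Approximate lim f(n)/g(n) by evaluating the ratio at a large n."""
    return f(n) / g(n)

f = lambda n: 5 * n**2 + 9 * n + 7
g = lambda n: 8 * n**2 + 23
h = lambda n: 4 * n**3 + 9 * n

# f/g tends to 5/8: finite and nonzero, so f and g are linearly
# equivalent and both are O(n^2).
# f/h tends to 0: f is O(h), but h is not O(f).
```

The dominant terms make the outcomes plain: `growth_ratio(f, g)` is very close to 5/8, while `growth_ratio(f, h)` is nearly zero.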
An algorithm with complexity _f_(_n_) is exponential if and only if there are positive numbers _a_ and _b_, numbers _p_ and _q_ greater than 1, and a positive integer _n_₀ such that _a_ · _p_^_n_ ≤ _f_(_n_) ≤ _b_ · _q_^_n_ for all _n_ ≥ _n_₀. Some examples of rates of growth of exponential algorithms are _k_^_n_ (_k_ > 1), _n_!, _n_^_n_, and _n_^(log _n_).

To discuss whether a problem is easy or not, it should first be decided where to draw the line between easy and difficult problems. The distinction between exponential functions and polynomials becomes clear if we take an asymptotic point of view: polynomials grow more slowly than exponential functions. So polynomial algorithms with growth rate _n_^_k_ (even when _k_ is large) are efficient in comparison with exponential ones. In other words, for sufficiently large problems, a polynomial algorithm executed on the slowest computer will solve a problem faster than an exponential algorithm on the fastest computer. Furthermore, in some cases an algorithm to solve a problem may be obtained by combining several algorithms for simpler subproblems. If each of these subproblem algorithms is polynomial, then the algorithm for the main problem is also polynomial, because the class of all polynomials is closed under addition, multiplication, and composition of functions. Thus the consensus among computer scientists is to say that a problem is **easy** if there is a polynomial algorithm that will solve every instance of the problem. This idea is originally due to Edmonds (1965b). Observe that it does not make sense to say that an algorithm is good if its complexity is _O_(_n_^_k_) when _k_ is large. In this connection the following comments from Papadimitriou and Steiglitz (1982) are worth reproducing: "The thesis that polynomial-time algorithms are 'good' seems to weaken when pushed to extremes. Experience, however, comes to its support.
For most problems, once any polynomial-time algorithm is discovered, the degree of the polynomial quickly undergoes a series of decrements as various researchers improve on the idea. Usually the final rate of growth is _O_(_n_³) or better."

To appreciate the difference between the rate of growth of a polynomial algorithm and the tyrannical rate of growth of an exponential algorithm, consider the following scenario. Suppose that a basic operation (each step) in a computer requires one-millionth of a second of computer time. If _n_ = 50, the computation times for algorithms with complexities _n_², _n_³, 2^_n_, and 3^_n_ will be 0.0025 second, 0.125 second, 35.7 years, and about 2 × 10⁸ centuries, respectively. If _n_ = 100, these times will be 0.01 second, 1 second, about 4 × 10¹⁴ centuries, and about 1.6 × 10³² centuries, respectively.

It was mentioned earlier that a problem is (provably) undecidable if it can be proved that there exists no algorithm that will solve every instance of the problem. Instead of asking whether a problem is provably undecidable or not, the basic question in computational complexity theory asks how difficult it is to solve a problem. A problem is **provably difficult** if it can be proved that any algorithm which will solve (every instance of) the problem is an exponential algorithm. Such problems do exist, but they are rather obscure. See Lewis and Papadimitriou (1981) for more details. A problem for which no polynomial algorithm is known and for which it is conjectured that no such algorithm exists is called an **intractable** problem. The problem of finding a shortest path between a vertex and another vertex in a connected graph is an easy problem (the algorithm of Dijkstra is polynomial), but the problem of finding a longest simple path between two vertices is intractable, since no one knows an algorithm to solve it that is substantially faster than enumerating all possible paths between the two vertices and choosing the optimal one.
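The kind of arithmetic behind the scenario above (one microsecond per basic operation, as assumed there) takes only a few lines to reproduce; the output-formatting thresholds below are arbitrary choices for readability:

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600

def running_time(steps):
    """Human-readable running time for `steps` basic operations
    at one microsecond per operation."""
    seconds = steps / 1_000_000
    if seconds < 60:
        return f"{seconds:g} seconds"
    years = seconds / SECONDS_PER_YEAR
    if years < 1000:
        return f"{years:.1f} years"
    return f"{years / 100:.1e} centuries"

# Tabulate the scenario for n = 50 and n = 100.
for n in (50, 100):
    for label, steps in (("n^2", n**2), ("n^3", n**3),
                         ("2^n", 2**n), ("3^n", 3**n)):
        print(f"n = {n:3d}  {label:>3}: {running_time(steps)}")
```

The polynomial rows stay in fractions of a second while the exponential rows jump from decades to astronomical spans of centuries, which is the whole point of the comparison.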
Thus if there is a polynomial algorithm to solve a problem _p_, we can say that "it is easy to solve _p_." But how can one show that "it is hard (not easy) to solve a problem"? One may prove that problem _p_ is as hard as problem _q_. So whenever there is no known polynomial algorithm to solve a problem, we have to examine that problem in this spirit. In other words, to prove that a given problem is difficult, it is not enough to assert that no polynomial algorithm has been discovered so far to solve it. It requires some sophisticated mathematical techniques to show that the complexity of any algorithm conceivable for the problem cannot be bounded above by a polynomial. Such techniques are now being discovered by computer scientists with the advent of the latest developments in computational complexity theory. The introduction of the concept of NP-completeness is an important milestone in this field.

**_A.7 THE CLASS P AND THE CLASS NP_**

Hereafter we shall assume that the problems we consider are all decision problems. These are problems whose output is either "yes" or "no." Examples of decision problems: (1) Is there a Hamiltonian cycle in a given connected graph? and (2) Is there a simple path from a vertex _v_ to another vertex _w_ in a connected network such that the total length of this path does not exceed a certain amount? We are interested in classifying decision problems according to their complexity. A decision problem belongs to **class P** if there is a polynomial algorithm to solve every instance of the problem. If a problem has a polynomial algorithm, then obviously the corresponding decision problem also has a polynomial algorithm. Notice that we can assert that an arbitrary problem is in the class P only after we have a proof that there is a polynomial algorithm to solve it. The fact that the linear programming problem is a member of the class P was established only a decade ago, when Khachiyan (1979) came out with his ellipsoid algorithm.
Subsequently, a more efficient algorithm was obtained by Karmarkar (1984) using interior point methods. For a lucid description of these developments, see Schrijver (1986). Now consider a decision problem whose status is as follows: (1) there is at least one exponential algorithm to solve it, and (2) no one has proved so far that every algorithm to solve it is necessarily exponential. In other words, it is a decidable decision problem and we do not know whether or not it is provably difficult. What do we do with problems in this category? It is at this stage that we introduce a class of decision problems containing the class P. A decision problem belongs to **class NP** if there is a polynomial algorithm to verify the "yes" output of that problem. The acronym NP is for "nondeterministic polynomial." For details regarding nondeterministic algorithms, see Garey and Johnson (1979). Nondeterministic algorithms are not in any sense probabilistic or random algorithms. It is obvious that any decision problem with a polynomial algorithm is in the class NP. Regarding a typical problem _p_ in NP for which so far no one has obtained a polynomial algorithm, there are three mutually exclusive alternatives: (1) a polynomial algorithm for _p_ will be discovered, (2) it will be proved that _p_ does not have a polynomial algorithm, and (3) the status of _p_ will never be settled. So far no one has proved that there exists a problem in NP that is not in P. It is not known whether P = NP or whether P is properly contained in NP. This is a frustrating situation because many practical problems belong to the class NP. If P = NP, it makes sense to try to obtain a polynomial algorithm for a problem in NP for which no efficient algorithm has been discovered so far. On the other hand, if we could establish that a particular problem in NP is not in P, we need not bother to seek an efficient algorithm to solve it.
But in the absence of a proof, we cannot abandon our efforts to obtain a polynomial algorithm to solve this problem because there is always a remote chance that somewhere out there a polynomial algorithm to solve the problem exists, waiting to be discovered! Here is an example of a problem in NP. Consider the decision problem an instance of which is as follows: "Is the positive integer _n_ a composite number?" When _n_ is large, we cannot easily answer this question. However, if we are able to exhibit two positive integers _p_ and _q_ (both greater than 1) such that _n_ = _pq_, then anyone can easily say the answer is "yes" since multiplication of two numbers can be performed in polynomial time. On the other hand, it is not at all obvious whether the decision problem "is the positive integer _n_ a prime number?" belongs to the class NP. It was proved by Pratt (1975) that this is indeed the case. Corresponding to each decision problem _p_, there is always a complementary decision problem _p_′. Each is complementary to the other. The problems "is it true that _n_ is prime?" and "is it true that _n_ is not prime?" are complementary. In a connected graph _G_ the problems "is it true that there is a Hamiltonian cycle in _G_?" and "is it true that there is no Hamiltonian cycle in _G_?" are complementary. Notice that the former problem is in NP because it is easy to verify the "yes" answer. But the latter problem is not known to be in NP, because the only obvious way to verify the "yes" answer in this case is to enumerate all possible cycles. Incidentally, we have here a decision problem that is apparently not in NP. Thus if a problem is in NP it does not follow that its complementary problem is in NP. A decision problem belongs to **class Co-NP** if its complementary problem is in class NP. A decision problem is said to be **well-characterized** if it is in both NP and Co-NP. Obviously, any problem in P is well-characterized. It is not known whether every well-characterized problem is in P.
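The compositeness example makes the "verify a yes answer" idea concrete: finding the factors may be hard, but checking a proposed pair of factors takes a single multiplication. A minimal sketch (the function name is our own, not from the text):

```python
def verify_composite_certificate(n, p, q):
    """Verify a 'yes' certificate for the question 'is n composite?':
    two integers p, q > 1 with n = pq.  Checking is one multiplication,
    hence polynomial in the size of the input."""
    return p > 1 and q > 1 and p * q == n

print(verify_composite_certificate(91, 7, 13))   # True: 91 = 7 * 13
print(verify_composite_certificate(97, 7, 13))   # False: not a valid certificate
```

This is exactly the NP pattern: the certificate (the pair of factors) may be hard to find, but it is easy to check.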
The problems "is _n_ prime?" and "is _n_ composite?" are both well-characterized because of Pratt's theorem. It is also not known whether NP = Co-NP. If NP is contained in Co-NP (or if Co-NP is contained in NP), then NP = Co-NP = NP ∩ Co-NP. Also, if P = NP, all the sets coincide.

**_A.8 POLYNOMIAL TRANSFORMATIONS AND NP-COMPLETENESS_**

The easy problems in NP are in P. They are on one side of the spectrum. Since it is not known whether P = NP or not, it is natural to ask whether we can collect all the "hard" problems of NP in one class and put this class on the other side of the spectrum. In the first place, what should be the name of this distinguished class? It seems that Donald Knuth of Stanford University polled his colleagues in 1974 to find an appropriate name. There were many suggestions: formidable, Herculean, arduous, prodigious, obstinate, and so on. Finally, a consensus: call it the class of NP-complete problems—and this name stuck among computer scientists, logicians, and mathematicians. A basic idea in the theory of NP-completeness is that of polynomial transformation. A decision problem _p_ is **polynomially transformable** to a decision problem _q_ if the following two conditions hold: (1) there exists a function _f_(_x_) that will transform every instance _x_ of _p_ to an instance _f_(_x_) of _q_ such that the answer to _x_ is "yes" if and only if the answer to _f_(_x_) is "yes," and (2) there is an efficient algorithm to compute _f_(_x_) for every _x_. Here is an example of a problem that can be polynomially transformed into another. Recall that a clique in a graph _G_ is a complete subgraph of _G_. The number of vertices in a clique is its size. The **clique problem** _p_ is stated as follows: Is there a clique of a prescribed size in a graph? For a given finite set, a collection of subsets is said to cover the set if every element in the set belongs to at least one set in the collection.
The **set covering problem** _q_ is stated as follows: Given a finite set _X_, a collection _C_ of subsets of _X_, and a positive integer _m_, is there a subcollection _C_′ consisting of _m_ of these sets such that _C_′ covers _X_? The problem _p_ can be transformed into _q_ as follows. Let _G_ = (_V_, _E_) be a connected graph with _m_ edges, where _V_ = {1, 2, 3, . . . , _n_}. Then its complement _G_′ = (_V_, _E_′) has _r_ edges, where _r_ = [_n_(_n_ − 1)/2] − _m_. Suppose that _E_′ = {_e_1, _e_2, . . . , _e_r} is the set to be covered. Let _S_i be the set of edges in _G_′ that are incident at vertex _i_. We take _C_ = {_S_i : _i_ = 1, 2, . . . , _n_} as the collection of available subsets of _E_′. It is easy to see that if _G_ has a clique of size _k_, then a subcollection _C_′ of (_n_ − _k_) subsets can be chosen from _C_ that will cover the set _E_′. In particular, if _W_ = {1, 2, . . . , _k_} is a set of vertices that forms a clique in _G_, the collection _C_′ = {_S_(k+1), _S_(k+2), . . . , _S_n} will be a cover for _E_′. The definition of polynomial transformability suggests the following inequality: if _p_ is polynomially transformable to _q_,

(complexity of _p_) ≤ (complexity of _f_) + (complexity of _q_).

Furthermore, if the complexity of _f_ is insignificant compared to the complexities of _p_ and _q_, we can write, in an asymptotic sense,

(complexity of _p_) ≤ (complexity of _q_).

A decision problem _p_ is **NP-hard** if every problem in NP can be transformed into it polynomially. In other words, an NP-hard problem cannot be easier than any problem in NP. A problem in NP that is NP-hard is said to be **NP-complete**. The class of NP-complete problems is thus the intersection of the class NP and the class of NP-hard problems. Thus if there is an efficient algorithm to solve every instance of a particular NP-complete problem, every problem in NP has a polynomial algorithm. The class of NP-complete problems is denoted by NPC.
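The clique-to-set-cover construction above can be sketched directly in code. The following is our own illustration of the transformation (function and variable names are hypothetical, not from the text):

```python
from itertools import combinations

def clique_to_set_cover(n, edges):
    """Transform a clique instance (a graph on vertices 1..n) into a
    set-cover instance, as described in the text: the ground set is the
    edge set E' of the complement graph, and S_i collects the complement
    edges incident with vertex i.  The transformation is polynomial."""
    present = {frozenset(e) for e in edges}
    e_prime = [frozenset(pair) for pair in combinations(range(1, n + 1), 2)
               if frozenset(pair) not in present]
    subsets = {i: {e for e in e_prime if i in e} for i in range(1, n + 1)}
    return e_prime, subsets

# A triangle on {1, 2, 3} plus an isolated vertex 4: the clique {1, 2, 3}
# of size k = 3 leaves the n - k = 1 subset S_4, which covers all of E'.
e_prime, s = clique_to_set_cover(4, [(1, 2), (1, 3), (2, 3)])
print(s[4] == set(e_prime))   # True
```

The check at the end mirrors the argument in the text: since no edge of the complement joins two clique vertices, the subsets belonging to the non-clique vertices suffice to cover _E_′.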
To show that a problem _p_ is in NPC it has to be proved that (1) _p_ is in NP and (2) every problem in NP can be polynomially transformed into _p_. Observe that the complexity of a problem in NPC is inextricably related to the conjecture that P is a proper subset of NP. The fact that the class NPC is nonempty was established by Cook (1971) in his seminal paper by exhibiting a problem in NP such that every problem in NP can be transformed into it polynomially. This problem is known as the **satisfiability problem**. This problem comes from mathematical logic and has applications in switching theory. However, it can be stated as a simple combinatorial puzzle as in Karp (1986): Given several sequences of upper- and lowercase letters, is it possible to select a letter from each sequence without selecting both the upper- and lowercase versions of the same letter? For example, if the sequences are Abc, BC, aB, and ac, it is possible to choose A from the first sequence, B from the second and third, and c from the fourth; note that the same letter can be chosen more than once, provided we do not choose both its uppercase and lowercase versions. An example where there is no way to make the required selections is given by the four sequences AB, Ab, aB, and ab. The satisfiability problem is clearly in NP, since it is easy to check whether a proposed selection of letters satisfies the conditions of the problem. Cook proved that if the satisfiability problem is solvable in polynomial time, then every problem in NP is solvable in polynomial time, so that P = NP. Thus we see that this seemingly bizarre and inconsequential problem is an archetypal combinatorial problem; it holds the key to the efficient solution of all problems in NP. The "floodgate" was opened once it was proved that the class NPC is nonempty. By constructing a series of polynomial transformations, Karp (1972a) produced a list of 20 or so problems in NPC.
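Karp's letter-selection form of satisfiability is small enough to decide by brute force. Here is a sketch of such a checker (our own illustration; it is exponential in the number of sequences, exactly as one would expect for an NP-complete problem):

```python
from itertools import product

def satisfiable(sequences):
    """Decide Karp's letter-selection puzzle by brute force: try every way
    of picking one letter per sequence, and accept if no letter is ever
    chosen in both its uppercase and lowercase forms."""
    for choice in product(*sequences):
        chosen = set(choice)
        if not any(c.swapcase() in chosen for c in chosen):
            return True
    return False

print(satisfiable(["Abc", "BC", "aB", "ac"]))   # True  (e.g. A, B, B, c)
print(satisfiable(["AB", "Ab", "aB", "ab"]))    # False
```

Verifying a single proposed selection is the easy, polynomial part; it is the search over all selections that blows up.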
It was shown in this paper that most of the classical combinatorial problems such as packing, covering, matching, partitioning, routing, and so on, are in NPC. His list includes the following: (1) Is a given graph Hamiltonian? (2) Is it possible to color the vertices of a graph with _k_ colors so that no two adjacent vertices have the same color? (3) Given a set of numbers {_n_i : _i_ = 1, 2, . . . , _k_} and a number _s_, does some subset of the numbers add up to exactly _s_? (4) Is there a clique of a given size in a graph? If _p_ is a problem in NP, to show that _p_ is in NPC it is enough if we prove that some known problem in NPC is polynomially transformable to it. There are thousands of problems now known to be NP-complete and their "tribe" is steadily increasing, as can be seen from the publication of new results in this field in recent years. For a fascinating account of the world of NP-completeness, one should refer to the book by Garey and Johnson (1979) and Johnson's column on this topic entitled "NP-Completeness: An Ongoing Guide," which appears regularly in the _Journal of Algorithms_. It is now routine to investigate whether an apparently difficult problem is NP-complete. Recall that a problem in NP is well-characterized if its complement also is in NP. If it can be proved that there exists a problem in NPC that is well-characterized, it can be shown that NP = Co-NP. Attempts to show that complements of some standard NP-complete problems are in NP have been fruitless. Also, there is no evidence to believe that the two classes NP and Co-NP coincide. Hence the conjecture: the class NPC and the class W of well-characterized problems are disjoint. (It was known for some time that the linear programming problem LP is a well-characterized problem. So it was conjectured that LP is not in NPC even before the discovery of the ellipsoid algorithm, which showed not merely that LP is, under this conjecture, outside NPC, but that it is in P.)
Similarly, if Co-NPC is the class of complements of all NP-complete problems, the class Co-NPC and the class W are also disjoint. Thus the conjectured topography of the class of decision problems is as portrayed in Figure A.8.1. Finally, even if we _assume_ that every NP-complete problem is provably difficult, there remains a class of problems of unsettled status. For example, consider a decision problem _p_ in NP such that (1) no polynomial algorithm has been obtained so far to solve every instance of _p_, and (2) so far there is no proof that _p_ is in NPC. In particular, the status of a well-characterized problem (which is not likely to be in NPC) for which no polynomial algorithm has been obtained so far remains unsettled. A typical member of this class: "Is the given positive integer a prime number?"

FIGURE A.8.1

**_A.9 COPING WITH HARD PROBLEMS_**

The problems considered thus far are decision problems. In a _combinatorial optimization problem_ there may be many solutions (feasible solutions) and each solution will have a real number associated with it called the _value_ of the solution. The aim of the problem is to obtain a solution (an optimal feasible solution) whose value is optimum. Corresponding to any such optimization problem, there is a decision problem of determining whether the optimization problem has a solution with a value better than a given real number. Obviously, the decision problem associated with a combinatorial optimization problem cannot be harder than the optimization problem itself. Thus if the problem "Is a graph Hamiltonian?" is hard, then the problem "find an optimal Hamiltonian cycle in a graph" is also hard. Stated more precisely, this means that if the decision problem associated with an optimization problem is NP-complete, the optimization problem is NP-hard.
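One standard way of coping with an NP-hard optimization problem is an efficient approximation algorithm. As an illustration (our own sketch, not from the text), the classical greedy heuristic for the set covering problem stated in Section A.8 runs in polynomial time and is known to find a cover within roughly a factor of ln|X| of the optimum size:

```python
def greedy_set_cover(universe, subsets):
    """Greedy approximation for set cover: repeatedly choose the subset
    covering the most still-uncovered elements.  Polynomial time; the
    cover found is within about a ln|universe| factor of the optimum."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = max(subsets, key=lambda s: len(uncovered & s))
        if not uncovered & best:
            return None   # no subcollection covers the universe
        chosen.append(best)
        uncovered -= best
    return chosen

cover = greedy_set_cover({1, 2, 3, 4, 5},
                         [{1, 2, 3}, {2, 4}, {3, 4}, {4, 5}])
print(cover)   # [{1, 2, 3}, {4, 5}]
```

The greedy choice is fast but not always optimal; the logarithmic guarantee is the price paid for avoiding the exponential search.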
Now a proposition which proves that the decision problem (corresponding to an optimization problem) is in NPC eliminates for all practical purposes the possibility of obtaining an efficient algorithm to solve every instance of the optimization problem. But the fact remains that many combinatorial optimization problems which arise in several areas of science, engineering, and operations research are NP-hard. So it is natural to ask: How do we cope with such hard problems? Broadly speaking, there are two approaches. One is a _heuristic_ approach: it is likely that the problem is hard because of a small proportion of hard instances, so is it possible to obtain an algorithm that will solve a large number of instances of the problem efficiently? A heuristic algorithm always gives an optimal solution, but it need not be efficient on every instance. The simplex method for the linear programming problem is an algorithm of this category. The other approach is to find out whether there is an efficient _approximation algorithm_ that obtains a feasible solution whose value is very close to the optimal value. There are efficient approximation algorithms for some well-known NP-hard problems in combinatorial optimization. Once again, see Garey and Johnson (1979) for more details.

**Bibliography**

AHO, A. V., HOPCROFT, J. E., and ULLMAN, J. D. _Data Structures and Algorithms_, Addison-Wesley, Reading, Mass., 1983.
AIGNER, M. _Combinatorial Theory_, Springer-Verlag, New York, 1979.
ANDERSON, I. _A First Course in Combinatorial Mathematics_, Oxford University Press, Oxford, 1979.
APPEL, K., and HAKEN, W. "Every Planar Map Is Four Colorable," _Bull. Amer. Math. Soc._ 82 (1976), 711–712.
BAASE, S. _Computer Algorithms: Introduction to Design and Analysis_, Addison-Wesley, Reading, Mass., 1978.
BEHZAD, M., CHARTRAND, G., and LESNIAK-FOSTER, L. _Graphs and Digraphs_, Wadsworth, Belmont, Calif., 1979.
BELLMORE, M., and NEMHAUSER, G. L.
"The Traveling Salesman Problem: A Survey," _Oper. Res._ 16 (1968), 538–558.
BERGE, C. _The Theory of Graphs and Its Applications_, Wiley, New York, 1962.
BIRKHOFF, G. D., and LEWIS, D. C. "Chromatic Polynomials," _Trans. Amer. Math. Soc._ 60 (1946), 355–451.
BONDY, J. A., and MURTY, U. S. R. _Graph Theory with Applications_, Elsevier, New York, 1976.
BOYER, C. B. _History of Mathematics_, Wiley, New York, 1968.
BUSSEY, W. H. "Origins of Mathematical Induction," _Amer. Math. Monthly_ 24 (1917), 199–207.
CARRE, B. _Graphs and Networks_, Clarendon Press, Oxford, 1979.
CHANG, S. K. "The Generation of Minimal Trees in a Steiner Topology," _J. Assoc. Comput. Mach._ 19 (1972), 699–711.
CHARTRAND, G. _Graphs as Mathematical Models_, Wadsworth, Belmont, Calif., 1977.
CHARTRAND, G., KAPOOR, S. F., and KRONK, H. V. "A Generalization of Hamiltonian-Connected Graphs," _J. Math. Pures Appl._ (9) 48 (1969), 109–116.
CHERITON, D., and TARJAN, R. E. "Finding Minimum Spanning Trees," _SIAM J. Comput._ 5 (1976), 724–742.
COHEN, D. I. _Basic Techniques of Combinatorial Theory_, Wiley, New York, 1978.
COOK, S. A. "The Complexity of Theorem-Proving Procedures," in _Proceedings Third ACM Symposium on the Theory of Computing_, Assoc. for Computing Machinery, New York, 1971, pp. 151–158.
DE BRUIJN, N. G. "A Combinatorial Problem," _Nederl. Akad. Wetensch. Proc._ 49 (1946), 758–764.
DEO, N. _Graph Theory with Applications to Engineering and Computer Science_, Prentice-Hall, Englewood Cliffs, N.J., 1974.
DIJKSTRA, E. W. "A Note on Two Problems in Connection with Graphs," _Numer. Math._ 1 (1959), 269–271.
DIRAC, G. A. "Some Theorems on Abstract Graphs," _Proc. London Math. Soc._ 2 (1952), 69–81.
DREYFUS, S. E. "An Appraisal of Some Shortest-Path Algorithms," _Oper. Res._ 17 (1969), 395–412.
EDMONDS, J. "Paths, Trees and Flowers," _Canad. J. Math._ 17 (1965b), 449–467.
EVEN, S. _Graph Algorithms_, Computer Science Press, Potomac, Md., 1979.
FLOYD, R. W.
"Algorithm 97: Shortest Path," _Comm. ACM_ 5 (1962), 345.
GABOW, H. P., GALIL, Z., SPENCER, T., and TARJAN, R. E. "Efficient Algorithms for Finding Minimum Spanning Trees in Undirected and Directed Graphs," _Combinatorica_ 6 (1986), 109–122.
GAREY, M. R., and JOHNSON, D. S. _Computers and Intractability: A Guide to the Theory of NP-Completeness_, W. H. Freeman, San Francisco, 1979.
GIBBONS, A. _Algorithmic Graph Theory_, Cambridge University Press, Cambridge, 1985.
GOLDBERG, S. _Introduction to Difference Equations_, Wiley, New York, 1958.
GOLDMAN, A. J. "Discrete Mathematics in Government," Lecture on the Applications of Discrete Mathematics, SIAM, Troy, N.Y., 1982.
GOLOMB, S. W. _Shift Register Sequences_, Holden-Day, San Francisco, 1967.
GOLOVINA, L. I., and YAGLOM, I. M. _Induction in Geometry_, D. C. Heath, Boston, 1963.
GONDRAN, M., and MINOUX, M. _Graphs and Algorithms_, Wiley, New York, 1984.
GOULD, R. _Graph Theory_, Benjamin-Cummings, Menlo Park, Calif., 1988.
GRAHAM, R. L., and HELL, P. "On the History of the Minimum Spanning Tree Problem," _Bell Lab. Rep._ (1982).
GRAHAM, R. L., ROTHSCHILD, B. L., and SPENCER, J. H. _Ramsey Theory_, Wiley, New York, 1980.
GRIMALDI, R. P. _Discrete and Combinatorial Mathematics_, Addison-Wesley, Reading, Mass., 1985.
HAKEN, W. "An Attempt to Understand the Four Color Problem," _J. Graph Theory_ 1 (1977), 193–206.
HALMOS, P. _Naive Set Theory_, Van Nostrand, Princeton, N.J., 1960.
HARARY, F. _Graph Theory_, Addison-Wesley, Reading, Mass., 1969a.
HARARY, F. "The Four Color Conjecture and Other Graphical Diseases," in _Proof Techniques in Graph Theory_, Academic Press, New York, 1969b.
HENKIN, L. "On Mathematical Induction," _Amer. Math. Monthly_ 67 (1960), 323–337.
HILLIER, F. S., and LIEBERMAN, G. J. _Introduction to Operations Research_, 4th ed., Holden-Day, Oakland, Calif., 1986.
HU, T. C. _Combinatorial Algorithms_, Addison-Wesley, Reading, Mass., 1982.
HUFFMAN, D. A.
"A Method for the Construction of Minimum Redundancy Codes," _Proc. IRE_ 40 (1952), 1098–1101.
HUTCHINSON, J. P., and WILF, H. S. "On Eulerian Circuits and Words with Prescribed Adjacency Patterns," _J. Combin. Theory_ A18 (1975), 80–87.
KARMARKAR, N. "A New Polynomial-Time Algorithm for Linear Programming," _Combinatorica_ 4 (1984), 373–395.
KARP, R. M. "Reducibility among Combinatorial Problems," in _Complexity of Computer Computations_, Plenum Press, New York, 1972a.
KARP, R. M. "A Simple Derivation of Edmonds' Algorithm for Optimum Branchings," _Networks_ 1 (1972b), 265–272.
KARP, R. M. "Combinatorics, Complexity and Randomness," _Comm. ACM_ 29 (1986), 98–111.
KHACHIYAN, L. G. "A Polynomial Algorithm in Linear Programming" (in Russian); English translation in _Soviet Math. Dokl._ 20 (1979), 191–194.
KNUTH, D. E. _The Art of Computer Programming_, Vol. 1, Addison-Wesley, Reading, Mass., 1973a.
KNUTH, D. E. _The Art of Computer Programming_, Vol. 3, Addison-Wesley, Reading, Mass., 1973b.
KRISHNAMURTHY, V. _Combinatorics: Theory and Applications_, Ellis Horwood, Chichester, West Sussex, England, 1986.
KRUSKAL, J. B. "On the Shortest Spanning Subtree of a Graph and the Traveling Salesman Problem," _Proc. Amer. Math. Soc._ 7 (1956), 48–50.
KWAN, M. K. "Graphic Programming Using Odd or Even Points," _Chinese J. Math._ 1 (1962), 273–277.
LAWLER, E. L. _Combinatorial Optimization: Networks and Matroids_, Holt, Rinehart and Winston, New York, 1976.
LAWLER, E. L., LENSTRA, J. K., RINNOOY KAN, A. H. G., and SHMOYS, D. B. _The Traveling Salesman Problem: A Guided Tour of Combinatorial Optimization_, Wiley, New York, 1985.
LEVY, H., and LESSMAN, F. _Finite Difference Equations_, Macmillan, New York, 1961.
LEWIS, H. R., and PAPADIMITRIOU, C. H. _Elements of the Theory of Computation_, Prentice-Hall, Englewood Cliffs, N.J., 1981.
LICK, D. R. "A Sufficient Condition for Hamiltonian Connectedness," _J. Comb. Theory_ 8 (1970), 444–445.
LIU, C. L.
_Introduction to Combinatorial Mathematics_, McGraw-Hill, New York, 1968.
LIU, C. L. _Elements of Discrete Mathematics_, McGraw-Hill, New York, 1985.
MACMAHON, P. _Combinatory Analysis_, Vol. 1 (1915) and Vol. 2 (1916); reprinted in one volume by Chelsea, New York, 1960.
MARKOWSKY, G. "Best Huffman Trees," _Acta Inform._ 16 (1981), 363–370.
MAY, K. O. "The Origin of the Four Color Conjecture," _Isis_ 56 (1965), 346–348.
MINIEKA, E. _Optimization Algorithms for Networks and Graphs_, Marcel Dekker, New York, 1978.
MOON, J. W. _Topics in Tournaments_, Holt, Rinehart and Winston, New York, 1968.
NEMHAUSER, G. L. "A Generalized Label Setting Algorithm for the Shortest Path between Specified Nodes," _J. Math. Anal. Appl._ 38 (1972), 328–334.
ORE, O. _Graphs and Their Uses_, Random House, New York, 1963.
PAPADIMITRIOU, C. H. "The Complexity of Edge Traversing," _J. Assoc. Comput. Mach._ 23 (1976), 544–554.
PAPADIMITRIOU, C. H., and STEIGLITZ, K. _Combinatorial Optimization: Algorithms and Complexity_, Prentice-Hall, Englewood Cliffs, N.J., 1982.
POLYA, G. _Induction and Analogy in Mathematics_, Princeton University Press, Princeton, N.J., 1963.
PRATT, V. "Every Prime Has a Succinct Certificate," _SIAM J. Comput._ 4 (1975), 214–220.
PRIM, R. C. "Shortest Connection Networks and Some Generalizations," _Bell System Tech. J._ 36 (1957), 1389–1401.
RALSTON, A. "De Bruijn Sequences—A Model Example of the Interaction of Discrete Mathematics and Computer Science," _Math. Mag._ 55 (1982), 131–143.
READ, R. C. "An Introduction to Chromatic Polynomials," _J. Combin. Theory_ 4 (1968), 52–71.
REDEI, L. "Ein kombinatorischer Satz," _Acta Litt. Sci. Szeged_ 7 (1934), 39–43.
REINGOLD, E. M., NIEVERGELT, J., and DEO, N. _Combinatorial Algorithms: Theory and Practice_, Prentice-Hall, Englewood Cliffs, N.J., 1977.
RIORDAN, J. _An Introduction to Combinatorial Analysis_, Princeton University Press, Princeton, N.J., 1958.
ROBBINS, H. E.
"A Theorem on Graphs with an Application to a Problem of Traffic Control," _Amer. Math. Monthly_ 46 (1939), 281–283.
ROBERTS, F. S. _Discrete Mathematical Models with Applications to Social, Biological and Environmental Problems_, Prentice-Hall, Englewood Cliffs, N.J., 1976.
ROBERTS, F. S. _Graph Theory and Its Applications to Problems of Society_, SIAM, Philadelphia, 1978.
ROBERTS, F. S. _Applied Combinatorics_, Prentice-Hall, Englewood Cliffs, N.J., 1984.
RONSE, C. _Feedback Shift-Registers_, Springer-Verlag, New York, 1982.
RYSER, H. J. _Combinatorial Mathematics_, Carus Mathematical Monographs No. 14, Mathematical Association of America, Washington, D.C., 1963.
SCHRIJVER, A. _Theory of Linear and Integer Programming_, Wiley, New York, 1986.
SOMINSKII, I. S. _The Method of Mathematical Induction_, D. C. Heath, Boston, 1963.
STANAT, D. F., and MCALLISTER, D. F. _Discrete Mathematics in Computer Science_, Prentice-Hall, Englewood Cliffs, N.J., 1977.
STANDISH, T. A. _Data Structure Techniques_, Addison-Wesley, Reading, Mass., 1980.
STOLL, R. R. _Set Theory and Logic_, W. H. Freeman, San Francisco, 1963.
SWAMY, M. N. S., and THULASIRAMAN, K. _Graphs, Networks and Algorithms_, Wiley, New York, 1981.
SYSLO, M. M., DEO, N., and KOWALIK, J. S. _Discrete Optimization Algorithms with Pascal Programs_, Prentice-Hall, Englewood Cliffs, N.J., 1983.
TABOURIER, Y. "All Shortest Distances in a Graph," _Discrete Math._ 4 (1973), 83–87.
TARJAN, R. E. "Depth-First Search and Linear Graph Algorithms," _SIAM J. Comput._ 1 (1972), 146–160.
TOWNSEND, M. _Discrete Mathematics: Applied Combinatorics and Graph Theory_, Benjamin-Cummings, Menlo Park, Calif., 1987.
TUCKER, A. _Applied Combinatorics_, 2nd ed., Wiley, New York, 1984.
TYMOCZKO, T. "Computers, Proofs and Mathematicians: A Philosophical Investigation of the Four Color Proof," _Math. Mag._ 53 (1980), 131–138.
WARSHALL, S. "A Theorem on Boolean Matrices," _J. Assoc. Comput. Mach._ 9 (1962), 11–12.
WHITWORTH, W. A.
_Choice and Chance_ (reprint of the fifth edition, 1901), Hafner Press, New York, 1965.
WILF, H. S. _Algorithms and Complexity_, Prentice-Hall, Englewood Cliffs, N.J., 1986.
WILSON, R. J. _Introduction to Graph Theory_, 2nd ed., Longman Group, Harlow, Essex, England, 1979.
YEMELICHEV, V. A., KOVALEV, M. M., and KRAVTSOV, M. K. _Polytopes, Graphs and Optimisation_, Cambridge University Press, Cambridge, 1984.

**Answers to Selected Exercises**

**Chapter 0**

**0.1.** **(a)** _A_ ∪ _B_ = {2, 3, 5, 6, 7, 9} **(b)** _B_ ∩ _C_ = {2, 6} **(c)** _B_ − _A_ = {2, 6} **(d)** _A_ − _B_ = {9} **(e)** _C_′ = {3, 5, 7, 9} **(f)** _X_′ is the empty set **(g)** The complement of the empty set is _X_
**0.3.** {_a_, _b_, _c_, _c_} = {_a_, _b_, _a_, _b_, _c_}
**0.5.** **(a)** _A_ × _A_ = {(3, 3), (4, 4), (3, 4), (4, 3)} **(b)** _A_ × _B_ = {(3, _p_), (3, _q_), (3, _r_), (4, _p_), (4, _q_), (4, _r_)} **(c)** _B_ × _A_ = {(_p_, 3), (_p_, 4), (_q_, 3), (_q_, 4), (_r_, 3), (_r_, 4)} **(d)** _B_ × _B_ = {(_p_, _p_), (_q_, _q_), (_r_, _r_), (_p_, _q_), (_q_, _p_), (_p_, _r_), (_r_, _p_), (_q_, _r_), (_r_, _q_)}
**0.7.** **(a)** _A_ ∪ (_B_ × _A_) = {3, 4, (_p_, 3), (_p_, 4), (_q_, 3), (_q_, 4), (_r_, 3), (_r_, 4)} **(b)** (_A_ × _A_) ∪ (_B_ × _B_) = {(3, 3), (4, 4), (3, 4), (4, 3), (_p_, 3), (_p_, 4), (_q_, 3), (_q_, 4), (_r_, 3), (_r_, 4)}
**0.9.** **(a)** {{_a_}} **(b)** {{_a_}, {_b_}} **(c)** {{_a_}, {_b_}, {_c_}}, {{_a_, _b_}, {_c_}}, {{_a_, _c_}, {_b_}}, and {{_b_, _c_}, {_a_}}
**0.13.** **(a)** Yes **(b)** Yes
**0.19.** One
**0.23.** At most eight
**0.25.** Both (_A_ − _B_) and (_B_ − _A_) are empty. This means that _A_ is a subset of _B_ and at the same time _B_ is a subset of _A_. So _A_ = _B_.
**0.27.** Yes.
Let _x_ be an arbitrary element of _B_. Case 1: Suppose _x_ is not in _A_. Then _x_ is in the symmetric difference of _A_ and _B_. So _x_ is in the symmetric difference of _A_ and _C_. So _x_ is in _A_ ∪ _C_ but not in _A_ ∩ _C_. So _x_ is in _C_. Case 2: Suppose _x_ is in _A_. Then _x_ is not in the symmetric difference of _A_ and _B_, and therefore _x_ is not in the symmetric difference of _A_ and _C_. So _x_ is in _A_ ∩ _C_, which implies _x_ is in _C_. Thus in either case _x_ is in _C_, so _B_ is a subset of _C_. Likewise _C_ is a subset of _B_.
**0.29.** **(a)** Both the domain and the codomain are _R_ and the range is the set of all nonnegative real numbers **(b)** No **(c)** No **(d)** {−2, 2} **(e)** The union of the two closed intervals _I_ and _J_, where _I_ = {_x_ : −2 ≤ _x_ ≤ −1} and _J_ = {_x_ : 1 ≤ _x_ ≤ 2}
**0.31.** The function _f_ is not a surjection. The inverse function _g_(_n_) = (_n_ − 5)/2, where _n_ is in _f_(_N_), is a surjection.
**0.33.** **(a)** 3 · 3 · 3 · 3 **(b)** 0 **(c)** (3 · 3 · 3 · 3) − (3 · 2 · 2 · 2 · 2) + 3 **(d)** 1 · 1 · 3 · 3
**0.35.** **(a)** Domain = the set of all integers, range = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9} **(b)** Domain = the set of all integers, range = the set of all positive integers
**0.37.** {(_p_, 1), (_q_, 1), (_r_, 2)}
**0.39.** {(_p_, _p_), (_q_, _q_), (_r_, _p_)} when _n_ is even and {(_p_, _q_), (_q_, _p_), (_r_, _q_)} when _n_ is odd
**0.43.** The four constants satisfy the equation _ad_ + _b_ = _bc_ + _d_.
**0.49.** **(a)** 2 **(b)** 3
**0.51.** Draw five small circles on the left side, one below the other, such that no two circles touch each other, and label them 1, 2, 3, 4, and 5. Then draw four small circles one below the other on the right side and label them _a_, _b_, _c_, and _d_. Draw arrows (1) from 1 to _a_, (2) from 1 to _b_, (3) from 3 to _c_, (4) from 4 to _d_, (5) from 5 to _d_, and (6) from 5 to _c_.
**0.53.** No in all cases except in (c)
**0.55.** _R_^2 = {(_a_, _a_), (_a_, _b_), (_b_, _a_), (_b_, _b_), (_c_, _a_), (_c_, _b_)}; _R_^3 = {(_a_, _a_), (_a_, _b_), (_b_, _a_), (_b_, _b_), (_c_, _a_), (_c_, _b_)}
**0.62.** Reflexive, not symmetric, antisymmetric, transitive
**0.64.** **(a)** This is an equivalence relation. The corresponding partition is {{1, 3}, {2}, {4}}. **(b)** This is not an equivalence relation. **(c)** This is an equivalence relation with the partition {{1, 2}, {3}, {4}}.
**0.68.** **(a)** {5_k_ : _k_ ∈ _Z_} **(b)** {1 + 5_k_ : _k_ ∈ _Z_} **(c)** {2 + 5_k_ : _k_ ∈ _Z_}
**0.70.** The preimages of the elements of the range of _f_
**0.78.** The set _S_ has 27 pairs in all. There are 8 pairs in which the first element is the empty set and the second element is any subset of _X_. There are 12 pairs in which the first element is a singleton set and 6 pairs in which the first element is a set of two elements. Finally, ({_a_, _b_, _c_}, {_a_, _b_, _c_}) is in _S_.
**0.80.** {1, 2, 4, 8} and {1, 3, 6} are two chains.
**0.82.** **(a)** 2, 3 **(b)** 16, 24 **(c)** 12, 24
**0.87.** The induction hypothesis _P_(_n_) is the statement that the sum 1/(1 · 2) + 1/(2 · 3) + 1/(3 · 4) + · · · + 1/_n_(_n_ + 1) equals _n_/(_n_ + 1). The aim is to prove that this statement is true for all _n_. Obviously, _P_(1) is true since 1/(1 · 2) = 1/(1 + 1). So the basis step is proved. Next we have to prove the induction step: if _P_(_k_) is true for any _k_, then _P_(_k_ + 1) is true as well. It is easy to see that _P_(_k_ + 1) is true because _k_/(_k_ + 1) + 1/(_k_ + 1)(_k_ + 2) is equal to (_k_ + 1)/(_k_ + 2).
**0.105.** _f_(1) = 1 and _f_(_n_) = _n_ + _f_(_n_ − 1)
**0.107.** It is a tautology.
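The closed form in the answer to 0.87 is easy to confirm numerically (our own check, using exact rational arithmetic):

```python
from fractions import Fraction

def telescoping_sum(n):
    """Exact value of 1/(1*2) + 1/(2*3) + ... + 1/(n(n+1))."""
    return sum(Fraction(1, k * (k + 1)) for k in range(1, n + 1))

# The identity sum = n/(n+1) from Exercise 0.87 holds for every n tested:
print(all(telescoping_sum(n) == Fraction(n, n + 1) for n in range(1, 50)))   # True
```

A numerical check of course proves nothing for all _n_; that is exactly what the induction argument supplies.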
**0.109.** Neither **0.111.** **0.113.** **(a)** Satisfiable in all cases except when _p, q_ , and _r_ are true and _s_ and _t_ are false **(b)** Satisfiable in all cases except when _p_ and _q_ are true and _r_ is false **Chapter 1** **1.1.** 109 **1.3.** **(a)** 16 **(b)** 8 **(c)** 102 **(d)** 28 **1.5.** **(a)** 28 **(b)** 16 **(c)** 48 **(d)** 112 **1.7.** There are 26 · 25 · 24 · 23 · 22 ways. **1.9.** _n_ ( _n_ – 1) **1.11.** 26 + (26)(36) + (26)(36)2 + (26)(36)3 + (26)(36)4 + (26)(36)5 **1.13.** **(a)** 12 **(b)** 144 **(c)** 72 **1.17.** (6!) = 720; then (7!) = (7)(720) = 5040 and (8!) = (8)(5040) = 40,320 **1.19.** _n_ = 23 **1.21.** **(a)** (4!)(5!)(6!) **(b)** (3!)(4!)(5!)(6!) **1.23.** 5040 **1.25.** 86,400 **1.27.** **(a)** (10!)/(4!)(4!)(2!) **(b)** (12!)/(5!)(4!)(3!) **1.31.** **(a)** _P_ (11, 9) **(b)** _P_ (11; 2, 3, 4) **1.33.** **(a)** 512 **(b)** 84 **(c)** 36 **1.35.** There are 504 ways. **1.37.** _C_ ( _n, r_ ) · ( _r_ – 1)! **1.39.** The number of ways is _r_ , where _r_ = _C_ (14, 8) · (7!) · (6!). **1.41.** _P_ (7; 4, 2, 1) · _C_ (8, 4) **1.47.** _P_ (10; 2, 3, 4, 1) · _C_ (23, 1) **1.49.** **(a)** _C_ (10, 6) · _C_ (12, 6) **(b)** _C_ (12, 7) · _C_ (10, 5) + _C_ (12, 8) · _C_ (10, 4) + _C_ (12, 9) · _C_ (10, 3) + _C_ (12, 10) · _C_ (10, 2) + _C_ (12, 11) · _C_ (10, 1) + _C_ (12, 12) · _C_ (10, 0) **1.57.** **(a)** (18!)/[(4!) · (1!) · (4!)4 · (2!)] **(b)** (18!)/[(2) · (2) · (5!)2 · (4!) · (2!)2] **(c)** _C_ (18; 7, 6, 5) **1.58.** **(a)** 840 **(b)** 74 **(c)** 79 **1.61.** _C_ (28, 4) **1.63.** **(a)** _C_ (14, 4) **(b)** _C_ (9, 4) **(c)** _C_ (8, 3) + _C_ (6, 3) + _C_ (4, 3) **1.65.** _C_ (10, 4) **1.69.** _C_ (( _r_ – _p_ ) + _n_ – 1, _n_ – 1), where _p_ = _p_ 1 + _p_ 2 + · · · + _p n_ **1.72.** _C_ (15, 5) – _C_ (8, 5) **1.74.** **(a)** 25 **(b)** 27 **1.78.** 275 **1.80.** **(a)** Let _X_ = {1, 2, . . . , _n_ } and _A_ be the set of numbers in _X_ that are not squarefree.
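Most of the Chapter 1 answers are instances of _P_ ( _n_ , _r_ ), _C_ ( _n_ , _r_ ) or the multinomial coefficient _P_ ( _n_ ; _n_ 1, . . . , _n k_ ); a few of them can be checked directly (the specific numbers below are those quoted in 1.7, 1.17 and 1.27):

```python
from math import factorial, perm

def multinomial(n, *parts):
    """Arrangements of n items whose repetition counts are given by parts."""
    assert sum(parts) == n
    out = factorial(n)
    for p in parts:
        out //= factorial(p)
    return out

assert factorial(8) == 40320                    # 1.17
assert perm(26, 5) == 26 * 25 * 24 * 23 * 22    # 1.7
assert multinomial(10, 4, 4, 2) == 3150         # 1.27(a): 10!/((4!)(4!)(2!))
```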
The number of squarefree integers in _X_ is obviously _n_ – _N_ ( _A_ ). To compute _N_ ( _A_ ), proceed as follows. Let _P_ = { _p_ 1, _p_ 2, . . . , _p r_ }, where each _p i_ is a prime number that does not exceed the square root of _n_. Let _A i_ be the set of those numbers in _X_ that are divisible by the square of _p i_. Compute _S i_ ( _i_ = 1, 2, . . . , _r_ ) as in Theorem 1.6.1. Then _N_ ( _A_ ) is equal to _S_ 1 − _S_ 2 + · · · + (–1) _r_ −1 _S r_. **(b)** 100 − (42 − 3 + 0 − 0) **1.82.** **(a)** Number of permutations = 6! = 720, number of derangements = _D_ 6 = 265 **(b)** 265/720 **(c)** [ _C_ (6, 1) · _D_ 5]/(6!) = 0.366667 **(d)** 1 – 0.366667 **(e)** [ _C_ (6, 2) · _D_ 4]/(6!) **(f)** 1/720 **1.84.** The answer is 0. **1.86.** There are 120 ways. **Chapter 2** **2.1.** **(a)** 1 + _x_ + _x_ 2 + _x_ 3 **(b)** _x_ 4 + _x_ 5 + _x_ 6 + · · · **(c)** 1 + _x_ + _x_ 2 + _x_ 3 + · · · **(d)** 1 – _x_ + _x_ 2 – _x_ 3 + _x_ 4 – · · · **2.3.** **(a)** {16, 32, 24, 8, 1, 0, 0, 0, . . .} **(b)** {1, 1, 1/2!, 1/3!, 1/4!, 1/5!, . . .} **(c)** {0, 0, 0, 1, 1, 1, 1, . . .} **2.5.** _C_ (15, 8) **2.7.** ( _x_ + _x_ 2 + _x_ 3 + · · · + _x_ ′)4 **2.9.** _C_ (18, 3) – 4 · _C_ (12, 3) + 6 · _C_ (6, 3) **2.11.** _C_ (12, 2) – 3 · _C_ (6, 2) **2.13.** There are 18 ways. **2.17.** Coefficient of the tenth power of _x_ in _f_ ( _x_ ), where _f_ ( _x_ ) = ( _x_ + _x_ 2)( _x_ + _x_ 2 + _x_ 3)( _x_ + _x_ 2 + · · ·)2 **2.19.** The function is _x_ 6 (1 – _x_ 4)3 · (1 – _x_ )–4. **2.21.** 10 **2.23.** 6 **2.25.** 30 **2.27.** ( _x_ 4 + _x_ 8 + · · ·)(1 + _x_ 3 + _x_ 6 + · · ·)(1 + _x_ 2 + _x_ 4 + · · ·)(1 + _x_ + _x_ 2 + · · ·) **2.29.** _C_ ( _n_ – _r_ + 1, _r_ ) **2.37.** The number of such numbers is _t_ , where _t_ = (1/4)(2 _r_ + 2 _r_ ) when _r_ is even. If _r_ is odd, _t_ is necessarily zero.
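The inclusion-exclusion computation of 1.80 and the derangement number used in 1.82 can both be reproduced programmatically (the trial-division prime test below is naive but sufficient for small _n_):

```python
from itertools import combinations
from math import isqrt, prod

def count_squarefree(n):
    """Squarefree integers in 1..n by inclusion-exclusion over p^2,
    for primes p not exceeding sqrt(n), as in the solution to 1.80."""
    primes = [p for p in range(2, isqrt(n) + 1)
              if all(p % d for d in range(2, p))]
    total = n
    for r in range(1, len(primes) + 1):
        for subset in combinations(primes, r):
            total += (-1) ** r * (n // prod(p * p for p in subset))
    return total

def derangements(n):
    """D(n) via the recurrence D(n) = (n - 1)(D(n - 1) + D(n - 2))."""
    d = [1, 0] + [0] * (n - 1)
    for k in range(2, n + 1):
        d[k] = (k - 1) * (d[k - 1] + d[k - 2])
    return d[n]

assert count_squarefree(100) == 100 - (42 - 3 + 0 - 0)  # = 61, as in 1.80(b)
assert derangements(6) == 265                           # D_6, used in 1.82
```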
**2.41.** [( _e x_ + _e_ – _x_ )/2 – 1]( _e x_ – 1)4 **Chapter 3** **3.1.** _f_ ( _n_ ) = _f_ ( _n_ – 1) + _n_ where _f_ (1) = 2; _f_ (9) = 46 **3.3.** _f_ ( _n_ ) = 2 _f_ ( _n_ – 1) + 1 with _f_ (1) = 1; _f_ ( _n_ ) = 2 _n_ – 1 **3.5.** _f_ ( _n_ ) = _f_ ( _n_ – 1) + _f_ ( _n_ – 2) with _f_ (1) = 2 and _f_ (2) = 3 **3.8.** _f_ ( _n_ ) = 2 _f_ ( _n_ – 1) with _f_ (1) = 2 **3.10.** **(a)** _k_ = 2 **(b)** The initial conditions are not consecutive. **3.12.** _f_ ( _n_ ) = 1 + _n_ + 2 _n_ **3.14.** The characteristic polynomial is ( _x_ – 1)( _x_ – 2)2 ( _x_ – 3) and _f_ ( _n_ ) = _A_ + _B_ · 2 _n_ + _C_ · _n_ · 2 _n_ + _D_ · 3 _n_ is the general solution of the relation _f_ ( _n_ + 4) = 8 _f_ ( _n_ + 3) – 23 _f_ ( _n_ + 2) + 28 _f_ ( _n_ + 1) – 12 _f_ ( _n_ ). **3.16.** _g_ ( _n_ ) = _A_ (–1) _n_ + _B_ ( _m_ – 1) _n_ , where _A_ = (–1)/ _m_ and _B_ = 1/ _m_ , and _f_ ( _n_ ) = ( _m_ – 1) _g_ ( _n_ – 1) **3.18.** _f_ ( _n_ ) = _A_ + _B_ · (3) _n_ – 8 _n_ , where _A_ = 1 and _B_ = 3 **3.20.** _A_ (4) _n_ + 5 _n_ (4) _n_ **3.22.** _A_ (2) _n_ + _B_ · _n_ · (2) _n_ + (1/2) · _n_ 2 · (2) _n_ **3.24.** _p_ = –5, _q_ = 6, and _r_ = 8 **3.26.** _f_ ( _n_ ) = coefficient of _x n_ in _g_ ( _x_ ) = 2/(1 + _x_ ) – 1/(1 – _x_ )2 + 2/(1 – _x_ )3 **3.28.** _f_ ( _n_ ) = 5 _n_ 2 – 4 _n_ **3.30.** _f_ ( _n_ ) = _d_ + _c_ log _n_ **3.32.** The relation is _g_ ( _n_ ) = 7 _g_ ( _n_ /2) + 18( _n_ /2)2 with _g_ (1) = 0. The solution is 6 · _n r_ – 6 · _n_ 2 where _r_ = log2 7. **3.34.** **(a)** _f_ ( _n_ ) = 2 _f_ ( _n_ – 1) with _f_ (1) = 0 **(b)** _f_ ( _n_ ) = _f_ ( _n_ /2) + ( _n_ – 1) with _f_ (1) = 0 **(c)** The two solutions are (1) _n_ ( _n_ – 1)/2 and (2) 2 _n_ – log _n_ – 2. When _n_ > 3, the second is more efficient than the first. **Chapter 4** **4.1.** _W_ = {1, 3} **4.3.** **(b)** It is possible to draw _K_ 4 such that no edges intersect. It is not possible to do so for _K_ 5. **4.5.** Draw a simple graph as suggested.
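The answer to 3.32 can be checked by comparing the recurrence (reading the forcing term as 18( _n_ /2)2, which is consistent with the stated solution) against the closed form 6 _n_ ^log2 7 − 6 _n_ 2 at powers of 2:

```python
from math import log2

def g(n):
    """g(n) = 7*g(n/2) + 18*(n/2)**2 with g(1) = 0, as in 3.32."""
    if n == 1:
        return 0
    return 7 * g(n // 2) + 18 * (n // 2) ** 2

def closed_form(n):
    """The stated solution 6*n**log2(7) - 6*n**2."""
    return 6 * n ** log2(7) - 6 * n ** 2

# Agreement (up to floating-point error) for n = 1, 2, 4, ..., 128.
for k in range(8):
    n = 2 ** k
    assert abs(g(n) - closed_form(n)) < 1e-6 * max(1, g(n))
```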
There are edges between every pair of cities except between Boston and Moscow, and between Boston and Prague. **4.7.** **(a)** Two **(b)** Two **4.9.** _pq_ **4.11.** Suppose that the arcs of the digraph are (1, 2), (1, 3), (1, 5), (2, 3), (3, 4), (3, 5), and (4, 5). **(a)** The adjacency matrix _A_ = ( _a ij_ ) is a 5 × 5 matrix in which _a_ 12 = _a_ 13 = _a_ 15 = _a_ 23 = _a_ 34 = _a_ 35 = _a_ 45 = 1 and all the other elements are zero. **(b)** (Indegree of vertex 1) = 0, (outdegree of vertex 1) = 3; (indegree of vertex 2) = 1, (outdegree of vertex 2) = 1; (indegree of vertex 3) = 2, (outdegree of vertex 3) = 2; (indegree of vertex 4) = 1, (outdegree of vertex 4) = 1; (indegree of vertex 5) = 3, (outdegree of vertex 5) = 0 **(c)** (1, 2) (1, 3) (1, 5) (2, 3) (3, 4) (3, 5) (4, 5) **4.13.** The graph looks like a bracelet or ring studded with stones such that each vertex is represented by a stone. Such a graph with _n_ vertices is denoted by _Z n_ and is called an "odd hole" if _n_ is odd and more than 3. It is an "even hole" if _n_ is even and more than 3. **4.15.** **(a)** _G_ = ( _V_ , _E_ ), where _V_ = {1, 2, 3, 4} and _E_ = {{1, 2}, {3, 4}} **(b)** _G_ = ( _V_ , _E_ ), where _V_ = {1, 2, 3, 4} and _E_ = {{1, 2}, {2, 3}, {3, 4}, {4, 1}} **(c)** ( _nr_ )/2; since this number of edges must be an integer, at least one of the two numbers is even. **4.19.** **(a)** 1 → 2 → 3 → 4 → 5 → 2 → 6 **(b)** 1 → 2 → 3 → 4 → 5 → 6 **(c)** 2 → 3 → 4 → 5 → 2 **(d)** One **(e)** {1}, {6}, {2, 3, 4, 5} **(f)** This is a 6 × 6 matrix in which (1) all the elements in row 6 are 0, (2) all the elements in column 1 are 0 except the first element, and (3) all the remaining elements are 1. **4.21.** _G_ = ( _V_ , _E_ ), where _V_ = {1, 2, 3, 4, 5} and _E_ = {{1, 2}, {2, 3}, {3, 4}, {4, 2}, {2, 5}, {5, 1}} **4.23.** If the graph is connected, every element is 1.
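The adjacency matrix and the degree counts of 4.11 can be verified directly:

```python
arcs = [(1, 2), (1, 3), (1, 5), (2, 3), (3, 4), (3, 5), (4, 5)]

# Build the 5 x 5 adjacency matrix (vertex v stored at index v - 1).
A = [[0] * 5 for _ in range(5)]
for i, j in arcs:
    A[i - 1][j - 1] = 1

outdeg = [sum(row) for row in A]        # row sums
indeg = [sum(col) for col in zip(*A)]   # column sums

# The values listed in the answer to 4.11(b):
assert outdeg == [3, 1, 2, 1, 0]
assert indeg == [0, 1, 2, 1, 3]
```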
More generally, if _G_ has _n_ vertices and _k_ components, the _n_ × _n_ reachability matrix _A_ of _G_ will have _k_ submatrices along the diagonal of _A_ such that every element in each submatrix is 1 and every other element in _A_ is zero. If _G i_ is a component of _G_ with _n i_ vertices, the submatrix corresponding to this component will be an _n i_ × _n i_ matrix. **4.27.** _G_ is not connected. **4.29.** The three vertices at the top are marked 1, 2, and 3. The three vertices in the middle are marked 8, 9, and 4 from the left to the right. The three vertices at the bottom are marked 7, 6, and 5 from the left to the right. The arcs ( _i, j_ ) in which _i_ < _j_ are (1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (6, 7), (7, 8), and (8, 9). The remaining arcs are ( _i, j_ ) where _i_ > _j_. **Chapter 5** **5.3.** There will be a directed path from every vertex to every vertex. **5.5.** No **5.7.** The word starts with _C_ since its row sum, 2, equals its column sum plus 1. The word ends with _B_ since its column sum equals its row sum plus 1. For the other two letters the row sum equals the column sum. The row sums of _A_ and _D_ are 2 and 3. Thus the frequencies of _A_ , _B_ , _C_ , and _D_ are 2, 2, 2, and 3, respectively. Draw a digraph with 4 vertices _A_ , _B_ , _C_ , _D_. Draw an arc from a letter _X_ ( _X_ is one of these four letters) to a letter _Y_ ( _Y_ is also one of these four letters; the letters _X_ and _Y_ need not be distinct) if the element in the matrix corresponding to row _X_ and column _Y_ is 1. In the resulting digraph there will be a directed Eulerian path (not necessarily unique) from _C_ to _B_. Any such path will give a word. **5.9.** **(a)** The digraph is _G_ = ( _V, A_ ) where _V_ = {0, 1, 2} and _A_ is the Cartesian product _V_ × _V_. An arc from vertex _i_ to vertex _j_ is assigned the word _ij_. The consecutive arcs in the following sequence will give an Eulerian circuit starting at vertex 0 and ending in vertex 0: < 00 01 11 12 22 21 10 02 20 >.
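The block structure of the reachability matrix described in 4.23 can be illustrated on a small example (a hypothetical graph with two components, closure computed with Warshall's triple loop):

```python
def reachability(n, edges):
    """Reflexive transitive closure of an undirected graph via Warshall."""
    R = [[i == j for j in range(n)] for i in range(n)]
    for u, v in edges:
        R[u][v] = R[v][u] = True
    for k in range(n):
        for i in range(n):
            for j in range(n):
                R[i][j] = R[i][j] or (R[i][k] and R[k][j])
    return R

# Two components: {0, 1, 2} (a path) and {3, 4} (a single edge).
R = reachability(5, [(0, 1), (1, 2), (3, 4)])
for i in range(5):
    for j in range(5):
        same_component = (i < 3) == (j < 3)
        # All-ones blocks on the diagonal, zeros everywhere else.
        assert R[i][j] == same_component
```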
The first letters of these nine words define the de Bruijn sequence _B_ (3, 2) = < _a_ 1 _a_ 2 _a_ 3 _a_ 4 _a_ 5 _a_ 6 _a_ 7 _a_ 8 _a_ 9 > = < 0 0 1 1 2 2 1 0 2 > and any two-letter word using the three symbols 0, 1, and 2 is of the form _a i_ _a i_ +1, where _i_ is any integer such that 1 ≤ _i_ ≤ 9 and the addition of the subscripts is modulo 9. **5.11.** _V_ = {1, 2, 3, 4}; _E_ = {{1, 2}, {2, 3}, {3, 4}, {4, 1}, {1, 3}} **5.13.** _V_ = {1, 2, 3, 4}; _E_ = {{1, 2}, {1, 3}, {1, 4}} **5.15.** The number of edges in a Hamiltonian cycle of a graph with _n_ vertices is _n_. The number of edges in any cycle of a bipartite graph is even. So a bipartite graph with an odd number of vertices cannot be Hamiltonian. **5.17.** Yes, by definition. The converse is not true; consider the counterexample with _V_ = {1, 2, 3} and _A_ = {(1, 2), (2, 3)}. **Chapter 6** **6.1.** _m_ = _n_ – _k_ **6.3.** 19 **6.5.** 24 **6.7.** It is a bridge. **6.9.** Not necessarily. If the vertex set of the complete graph with five vertices is {1, 2, 3, 4, 5}, then _T_ and _T′_ are two distinct spanning trees with _E_ = {{1, 2}, {2, 3}, {3, 4}, {4, 5}} and _E′_ = {{1, 3}, {3, 5}, {2, 5}, {2, 4}}. **6.13.** The tree is _T_ = ( _V, E_ ), where _V_ = {1, 2, 3, 4, 5, 6, 7, 8, 9} and _E_ = {(1, 8), (2, 8), (3, 7), (4, 7), (5, 7), (7, 6), (8, 6), (6, 9)}. **6.15.** ABRACADABRA **6.17.** _A_ = 110, _B_ = 00, _C_ = 1110, _D_ = 1111, _E_ = 10, _R_ = 01. The word is 110 00 01 110 1110 110 1111 110 00 01 110. The length is at most 31. **6.19.** **(a)** _n_ + 1 = 14 and 3 < log 14 < 4, so _m_ = 3. So the height cannot be less than 3. **(b)** The floor of 13/2 is 6, so the height is not more than 6. **6.21.** The tree is rooted at _H_. The left subtree has _G_ , _B_ , _A_ , and _C_. The right subtree has _P_ , _R_ , and _Y_. The left child of _G_ is _B_. There is no right child for _G_. The left child of _B_ is _A_ and the right child of _B_ is _C_. In the right subtree, no vertex has a left child.
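The de Bruijn sequence obtained in 5.9 can be checked mechanically: read cyclically, its nine two-letter windows must cover every word of length 2 over {0, 1, 2}:

```python
B = "001122102"   # the sequence < 0 0 1 1 2 2 1 0 2 > from 5.9

# Cyclic windows of length 2: the window at the last position wraps
# around to the first symbol.
windows = [B[i] + B[(i + 1) % len(B)] for i in range(len(B))]

assert len(set(windows)) == 9
assert set(windows) == {a + b for a in "012" for b in "012"}
```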
**Chapter 7** **7.5.** The edges are {1, 5}, {1, 4}, {3, 4}, {2, 6}, and {5, 6}. The weight of the tree is 31. **Chapter 8** **8.1.** **8.3.** The arcs of this tree rooted at vertex 1 are (1, 5), (5, 7), (7, 6), (5, 2), (2, 3), and (3, 4). **8.5.** In _A_ (4) the element corresponding to the fourth row and second column is 4. So the shortest distance from 4 to 2 without touching 5, 6, or 7 will be 4. Furthermore, in _P_ (4), the element corresponding to the fourth row and second column is 2, indicating that we go straight from 4 to 2. **8.7.** Replace the tractor at the end of the first year. The total cost will be 12 + 32 = 44.
\section{Introduction} In this paper we describe an approach to adapt the Language Models (LMs) used in a system designed to support simultaneous interpreters. Simultaneous interpreting is a very difficult task that requires a high cognitive effort, especially to correctly translate the parts of the source language that convey important pieces of information for the final users. These are: numerals, named entities and technical terms specific to each interpretation session. As an example, a study reported in \cite{fantinuoli2018} claims that the error rate made by professional interpreters on the translation of numbers is, on average, equal to 40\%. This calls for a technology, based on automatic speech recognition (ASR), capable of detecting, in real time and with high accuracy, the important information (words or composite terms) of a speech to interpret and of providing it to a professional interpreter by means of a suitable interface. Therefore, our goal is not to minimise the word error rate (WER) of an audio recording, as is usual in ASR applications; instead, we aim to maximise the performance of the developed system, in terms of precision, recall and F-measure, over a set of ``important'' terms to recognise, as will be explained in section~\ref{sec:benchmark}. To do this we experimented on a set of data properly labelled by human experts. It is worth pointing out that this task is different from the well-known ``keyword spotting'' task, since we cannot assume to know in advance the terms to spot inside the audio stream; we can only start from some ``seed'' terms belonging to a glossary which is part of the experience of each human interpreter.
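As an illustration of the evaluation just described, precision, recall and F-measure over a set of important terms can be computed as follows (a minimal sketch: the actual benchmark protocol may match terms more tolerantly, e.g.\ across inflected forms, and the example terms are invented):

```python
def term_scores(reference_terms, hypothesis_terms):
    """Precision, recall and F1 over a set of target terms.

    reference_terms: terms marked as important by the annotators
    hypothesis_terms: terms actually recognised by the ASR system
    """
    ref, hyp = set(reference_terms), set(hypothesis_terms)
    tp = len(ref & hyp)
    precision = tp / len(hyp) if hyp else 0.0
    recall = tp / len(ref) if ref else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

p, r, f = term_scores({"implant", "abutment", "osseointegration"},
                      {"implant", "abutment", "crown"})
assert (p, r) == (2 / 3, 2 / 3)
```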
This calls for further processing modules that: {\em a)} extend, in some way, the given glossary by including ``semantically'' similar terms, as will be explained in section~\ref{sec:w2v}, in order to adapt both the dictionary and the language model (LM) employed in the ASR system, and/or {\em b)} detect, along an automatically generated transcription, the pieces of information (i.e.\ numerals, named entities, etc.) useful to the interpreter. Actually, the ASR system described below is part of a larger system that integrates natural language processing (NLP) modules, dedicated to both named entity and numeral extraction, and a user interface specifically designed according to the requirements of professional interpreters. This system, named \SmarTerp\footnote{The \SmarTerp\ Project is funded by EIT DIGITAL under contract n. 21184}, aims to support the simultaneous interpreters in various phases of their activities: the preparation of glossaries, automatic extraction and display of the ``important'' terms of an interpreting session, post-validation of new entries~\citep{rodriguez2021}. {\bf Related work. } As previously mentioned, spotting known words in audio recordings is a task that has been largely investigated since the beginning of speech recognition technology (e.g.\ see the works reported in~\cite{bridle73,Rose1990,Weintraub95}). Basically, all these approaches used scores derived from the acoustic log-likelihoods of recognised words to decide whether a keyword should be accepted or rejected. More recently, with the advent of neural networks, approaches to keyword spotting based on deep neural networks~\citep{Chen2014}, convolutional neural networks~\citep{Sainath2015} and recurrent neural networks~\citep{Fernandez2007} have begun to take hold.
The latest frontier is the use of end-to-end neural architectures capable of modelling sequences of acoustic observations, such as the one described in~\cite{Yan2020} or the sequence transformer network described in~\cite{berg2021keyword}. \COMMENT{ However, as seen above the particular domain application of this work doesn't allow to have a prior knowledge of all of the important terms to detect and, in addition, the NLP modules specialised to text processing need the whole automatic transcription generated by the ASR system to perform both numerals and named entities recognition. To cope with these requirements we decided to include in the ASR language model as much domain information as possible by extracting it from some, possible large, general text corpora. } The approach we use to enlarge the dictionary of the ASR system and to adapt the corresponding language model to the application domain is to select, from a given (possibly very large and general) text corpus, the sentences that exhibit a certain ``similarity'' with the terms included in the glossaries furnished by the interpreters. As with the keyword spotting task, ``term-based similarity'' has been a well-investigated topic in the scientific community for many years. A survey of approaches can be found in the work reported in~\cite{Vijaymeena2016}. Also for this task, the advent of neural network based models has allowed significant improvements both in word representation, e.g.\ with the approaches described in~\cite{mikolov2013}, and in text similarity measures, e.g.\ as reported in~\cite{mikolov2014,kareem2019}. It is worth noticing that in the ASR system used for this work we do not search for new texts to adapt the LM; instead, as explained in section~\ref{sec:selection}, we select the adaptation texts from the same corpus used to train the baseline LM.
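In embedding spaces of the kind introduced in~\cite{mikolov2013}, term similarity usually reduces to the cosine between word vectors. The sketch below uses invented three-dimensional toy vectors purely for illustration; in a real setting the vectors would be loaded from a pretrained model of the word2vec family:

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Toy embeddings (hypothetical values, for illustration only).
vectors = {
    "implant":  [0.9, 0.1, 0.0],
    "abutment": [0.8, 0.2, 0.1],
    "weather":  [0.0, 0.1, 0.9],
}

def most_similar(seed, k=2):
    """The k vocabulary terms closest to the seed term."""
    scored = [(t, cosine(vectors[seed], v))
              for t, v in vectors.items() if t != seed]
    return [t for t, _ in sorted(scored, key=lambda x: -x[1])[:k]]

assert most_similar("implant")[0] == "abutment"
```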
Note also that our final goal is not to extract the named entities from the ASR transcripts -- this task is accomplished by the NLP modules mentioned above -- but rather to provide the ASR system with a LM better suited to helping the human interpreter of a given event. For ASR system adaptation, too, there is an enormous scientific literature, related both to language model and to acoustic model adaptation; here we only mention some recent papers:~\cite{song-etal-2019-chameleon} for LM adaptation and~\cite{bell2021} for a review of acoustic model adaptation approaches, especially related to neural models. \section{Automatic selection of texts} \label{sec:selection} Usually a Language Model (LM) is trained over huge amounts of text data in a given language, e.g.\ Italian. During the training phase, a fixed lexicon is selected -- typically the N most frequent words in the text -- and millions or billions of n-grams are stored to assign a probability to any possible word sequence. This process builds a somewhat generic LM, capable of representing the language observed in the text. However, interpreters often need to specialise their knowledge on a very specific topic, e.g.\ dentistry. In this case, they also have to quickly become experts in that particular field. We could say that they need to adapt their general knowledge to that field: this means that, before the event, they have to collect material about that topic, study it, and prepare and memorise a glossary of very specific technical terms together with their translations. The same holds for an ASR system: it can perform satisfactorily in a general setting, but it may fail when encountering technical terms in a specific field. So, it has to be adapted, both in terms of lexicon (it may be necessary to add new terms to the known lexicon) and in terms of word statistics for the new terms.
In the \SmarTerp\ project we will explore different adaptation procedures; in this paper we describe our preliminary work in this direction. At present, we hypothesise that an interpreter could provide some text and the ASR system will be able to adapt to the corresponding topic in a short time (a few hours on a mid-range computer). This text could range from a few words to a rather large set of documents identifying that particular topic, depending on the expertise and the inclination of the interpreter. Here are some possibilities: \begin{itemize} \itemsep-0.3em \item just a few technical words; \item a glossary of terms, perhaps found with a quick internet search; \item a glossary of technical terms with translations, perhaps built over the years by an expert interpreter; \item a set of technical documents, in the desired language. \end{itemize} In the near future, a pool of interpreters will be engaged in \SmarTerp\ simulations where they have to provide data that, in a completely automatic way (i.e.\ without the intervention of a language engineer), will adapt the ASR system to a particular topic. In this work we are testing some tools and procedures in order to provide them with some possible solutions, assuming that at least some small text (i.e.\ a glossary, or even a few words) will be available. From this small text we will derive some {\em seed words} that will be used, in turn, both to update the dictionary of the ASR system and to select LM adaptation texts from the available training corpora (see Table~\ref{tab:LMdata}).
In detail, we implemented the following procedures (although some of them were not used in the experiments described in this paper): \begin{itemize} \itemsep-0.3em \item selection of {\bf seed words}, i.e.\ technical words that characterise the topic to be addressed; they are simply the words, in the short text provided by the interpreter, that are not in the initial lexicon, composed of the most frequent N words of that language (128 Kwords, in this paper). \item optional enlargement of the set of {\bf seed words}, either by exploiting shallow morphological information or by using neural network approaches like word2vec \citep{mikolov2013}. \item selection of {\bf adaptation text}, i.e.\ the sentences in the text corpus that contain at least one of the seed words. Note that we assume we do not have new texts belonging to the target topic that could be used directly for LM adaptation. \item compilation of an {\bf adapted lexicon} and of an {\bf adapted LM}, obtained by exploiting the adaptation text. \end{itemize} \subsection{Shallow morphological seed words enlargement} Each initial seed word is replaced by a regular pattern which removes its ending, in order to find similar words in the complete dictionary of the corpus. Possible parameters are: $N_M$, the maximum number of similar words retained for each seed; $L_M$, the minimal length of a seed pattern to be considered valid (patterns that are too short are useless or even dangerous). \subsection{Semantic similarity based approach} \label{sec:w2v} Each initial seed word is fed to a pretrained neural skipgram model (word2vec, see http://vectors.nlpl.eu/repository), which returns an embedded representation of each word. Then, the $N$ most similar words are computed using the cosine distance between pairs of word embeddings. The process can be iterated by feeding word2vec with every new similar word obtained.
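A minimal sketch of this iterated enlargement, using toy 2-dimensional embeddings standing in for the pretrained skipgram model (the embedding values and function names are illustrative, not part of the actual system; the parameter names follow the $N_W$, $I_W$ notation of this section):

```python
import numpy as np

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def enlarge_seeds(seeds, embeddings, n_w=40, i_w=1):
    """For each seed, retain its n_w most similar words (by cosine
    similarity); iterate i_w times over the newly found words."""
    known = set(seeds)
    frontier = set(seeds)
    for _ in range(i_w):
        new = set()
        for word in frontier:
            if word not in embeddings:
                continue
            ranked = sorted(
                ((cosine(embeddings[word], vec), other)
                 for other, vec in embeddings.items() if other != word),
                reverse=True)
            new.update(w for _, w in ranked[:n_w])
        frontier = new - known   # only newly found words are re-explored
        known |= new
    return known

# Toy embeddings: "dente" is close to "carie" and "dentista", far from "treno".
emb = {
    "dente":    np.array([1.0, 0.1]),
    "carie":    np.array([0.9, 0.2]),
    "dentista": np.array([0.95, 0.15]),
    "treno":    np.array([0.0, 1.0]),
}
print(sorted(enlarge_seeds(["dente"], emb, n_w=2, i_w=1)))
# -> ['carie', 'dente', 'dentista']
```

Each iteration expands only the frontier of newly found words, so neighbours already collected are not explored again.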
Possible parameters are: $N_W$, the number of retained words for each term; $I_W$, the number of iterations: typically 1, or 2 in the case of a very short list of initial seeds. \subsection{Selection of adaptation text} Given a final set of seed words, the huge text corpus is filtered and every document containing at least one seed word not contained in the (128K) initial lexicon is retained. One parameter of the filter -- not used in this work -- is the number of words forming the context around every seed word in a document. This may be useful to avoid including useless pieces of text in the adaptation corpus, given that every line in the training corpora (newspaper or Wikipedia, title or article) is considered a document, containing from a few words to tens (in a few cases even hundreds) of Kwords. Note that the selection of the adaptation text is largely responsible for the lexicon enlargement (up to 250 Kwords, see Table~\ref{tab:results}), since the number of seed words turned out to be, in our preliminary experiments, always below 4 Kwords. \section{ASR systems} The ASR system is based on the popular Kaldi toolkit~\citep{kaldi}, which provides optimised modules for hybrid architectures; the modules support arbitrary phonetic-context units, common feature transformations, Gaussian mixture and neural acoustic models, n-gram language models and on-line decoding. \subsection{Acoustic models} The acoustic models are trained on data coming from CommonVoice~\citep{ardila2020} and Euronews transcriptions~\citep{gretter2014}, using a standard {\em chain} recipe based on the lattice-free maximum mutual information (LF-MMI) optimisation criterion~\citep{povey2016}. In order to be more robust against possible variations in the speaking rate of the speakers, the usual {\em data augmentation} technique for the models has been expanded by generating time-stretched versions of the original training set (with factors $0.8$ and $1.2$, besides the standard factors $0.9$ and $1.1$).
Table~\ref{tab:audio_data} summarises the characteristics of the audio data used for the models in the three working languages considered in the project. \begin{table}[t] \begin{center} \begin{tabular}{|l|c|c|c|c|} \hline Language & CV (h:m) & EuroNews (h:m) & Total Speakers & Running words \\ \hline English & 781:47 & 68:56 & 35k & 5,742k \\ Italian & 148:40 & 74:22 & 9k & 1,727k \\ Spanish & 322:00 & 73:40 & 16k & 2,857k \\ \hline \end{tabular} \caption{Audio corpora for AM training} \label{tab:audio_data} \end{center} \end{table} \begin{table}[tbh] \begin{center} \begin{tabular}{|l|c|c|r|r|} \hline Language & Lexicon size & Total running words & Internet News & Wikipedia 2018 \\ \hline English & 9.512.829 & 3790.55 Mw & 1409.91 Mw & 2380.64 Mw \\ Italian & 4.943.488 & 3083.54 Mw & 2458.08 Mw & 625.46 Mw \\ Spanish & 4.182.225 & 2246.07 Mw & 1544.51 Mw & 701.56 Mw \\ \hline \end{tabular} \caption{Text corpora for training the LMs for ASR in the three \SmarTerp\ languages. Mw means millions of running words.} \label{tab:LMdata} \end{center} \end{table} \subsection{Language models and Lexica} Text corpora that can be used to train LMs for the various languages are described in Table~\ref{tab:LMdata} and derive both from Internet news, collected from about 2000 to 2020, and from a Wikipedia dump; their corresponding total lexica amount to several million words (from 4 to 10 million) for each language. It should be clarified that, since the original texts are definitely not clean, most of the low-frequency words are in fact non-words (typos, etc.). For practical reasons, the size of the lexicon used in the ASR usually ranges from 100 to 500 Kwords. The baseline language models are trained using the huge corpora described in Table~\ref{tab:LMdata}; the adaptation set is selected from the same huge corpora. After the selection stage, the resulting trigrams are computed and a mixed LM is built and then pruned to reach a manageable size.
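To illustrate the mixing step, the sketch below linearly interpolates trigram relative frequencies of a background corpus with those of a small adaptation corpus. This is only a toy illustration: the actual system derives the adapted probabilities with the scheme of~\cite{federico2001}, and the interpolation weight \texttt{lam} here is arbitrary.

```python
from collections import Counter

def trigrams(tokens):
    # All consecutive word triples of a token list.
    return list(zip(tokens, tokens[1:], tokens[2:]))

def mixed_relative_freq(background, adaptation, lam=0.5):
    """Linear interpolation of trigram relative frequencies from a
    background corpus and a small adaptation corpus (illustrative only)."""
    bg = Counter(trigrams(background))
    ad = Counter(trigrams(adaptation))
    n_bg = max(sum(bg.values()), 1)
    n_ad = max(sum(ad.values()), 1)
    return {k: (1 - lam) * bg[k] / n_bg + lam * ad[k] / n_ad
            for k in set(bg) | set(ad)}

bg = "the cat sat on the mat".split()
ad = "the dental implant sat well".split()
mix = mixed_relative_freq(bg, ad, lam=0.3)
# A background-only trigram keeps most of its mass...
print(round(mix[("the", "cat", "sat")], 3))         # -> 0.175
# ...while an adaptation-only trigram gets non-zero probability.
print(round(mix[("the", "dental", "implant")], 3))  # -> 0.1
```

In the real system the resulting mixed model is additionally pruned to keep the LM at a manageable size.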
The adapted LM probabilities are efficiently derived using the approach described in~\cite{federico2001}, by interpolating the trigram frequencies of the background (i.e.\ non-adapted) LM with the corresponding frequencies computed on the adaptation text. The most frequent 128 Kwords of the corpus are retained; all the words of the adaptation set are then included in the corresponding lexicon. \section{Description of \SmarTerp\ multilingual benchmark} \label{sec:benchmark} As mentioned above, in \SmarTerp\ we prepared benchmarks for the 3 languages of the project: English, Italian, Spanish. For each language, a number of internet videos with a Creative Commons licence were selected, in order to reach at least 3 hours of material on a particular topic, dentistry. Table~\ref{tab:benchmark} reports the duration and number of words of the benchmarks. Data were collected, automatically transcribed and manually corrected\footnote{We are really grateful to Susana Rodr\'iguez, who did the manual check for all the languages.} using Transcriber\footnote{http://trans.sourceforge.net/}, a tool for segmenting, labelling and transcribing speech. In addition to time markers and the orthographic transcription of the audio data, we decided to label with parentheses Important Words (IWs), i.e.\ content words that are significant for the selected domain (i.e.\ dentistry) and are a fundamental part of the desired output of the automatic system. As only one annotator labelled IWs, it was not possible to compute inter-annotator agreement for this task. We will address this issue in future work.
\begin{table}[bh] \begin{center} \begin{tabular}{|l|c|c|c|c|c|} \hline language & recordings & raw & transcribed & running & running \\ & & duration & duration & words & IWs\\ \hline English & 5 & 04:02:34 & 03:03:06 & 28279 & 3343 \\ Italian & 33 & 05:29:34 & 04:10:31 & 31001 & 4560 \\ Spanish & 13 & 03:09:53 & 03:01:59 & 25339 & 3351 \\ \hline \end{tabular} \caption{Benchmarks collected and annotated in \SmarTerp.} \label{tab:benchmark} \end{center} \end{table} \begin{figure} \centering \includegraphics[scale=0.5]{transcriberSTeng} \caption{Screenshot of Transcriber, a tool used to manually transcribe the \SmarTerp\ benchmark. In the highlighted segment, IWs are in parentheses.} \label{fig:transcriber} \end{figure} Figure~\ref{fig:transcriber} shows a screenshot of Transcriber, where some IWs are highlighted: (dentistry), (dental caries), (periodontal diseases), (oral cancer). In the benchmarks, phrases composed of up to 6 words were identified as IWs. \subsection{IW normalization} In order to be able to consistently evaluate the performance of the system in terms of IWs, and considering that it was impossible to pre-define a fixed set of IW patterns, we decided to implement a procedure that automatically processed the whole benchmark. It consisted of the following basic steps, applied independently for every language: \begin{enumerate} \itemsep-0.3em \item identification of all manually defined IWs in the benchmark; \item reduction to a minimum set of IWs, by removing ambiguities. Given that A, B, C, etc. are single words, some cases are: \begin{itemize} \itemsep-0.3em \item if (A), (B) and (A B) all exist, then the IW (A B) is removed -- it will be replaced by (A) (B); \item if (C), (D E) and (C D E) all exist, then the IW (C D E) is removed; \item note however that if (C), (D E) and (D C E) exist, nothing can be removed.
\end{itemize} \item regeneration of the benchmark, by applying the following steps: \begin{enumerate} \itemsep-0.3em \item remove all round brackets; \item considering the minimum set of IWs, apply new brackets at every IW occurrence, starting from the longest IWs and ending with the one-word IWs; \item in order to evaluate Precision, Recall and F-measure of IWs, remove all words not inside brackets. \end{enumerate} \end{enumerate} Note that some IWs originally present in the benchmark, although legitimate, may not appear in its final version: suppose that the only occurrence of (B) alone is in the context A (B) and that the IW (A B) also exists: after the regeneration of the benchmark, both cases will result in (A B). \begin{table} \footnotesize{ \begin{center} \begin{tabular}{|l|p{10cm}|} \hline REF & the most of {\bf them} referred from (pulmonary specialist) {\bf (ENTs)} (paediatricians) {\bf let's let Boyd try} nothing {\bf else} \\ ASR & {\bf in} the most of {\bf my} referred from (pulmonary specialist) {\bf ian} (paediatricians) {\bf was led by tried} nothing \\ ALIGNMENT & I\_in S\_them\_my S\_ENTs\_ian S\_let's\_was S\_let\_led S\_Boyd\_by S\_try\_tried D\_else (Sub= 6 Ins= 1 Del= 1 REF=16) \\ WER & 50.00\% [ 100 * (6 +1 +1) / 16 ] \\ \hline IW-REF & (pulmonary\_specialist) {\bf (ENTs)} (paediatricians) \\ IW-ASR & (pulmonary\_specialist) (paediatricians) \\ P / R / F & Precision 1.00 [ 2 / 2 ] / Recall 0.67 [ 2 / 3 ] / F-Measure 0.80 \\ \hline Isol-IW-REF & (pulmonary) (specialist) {\bf (ENTs)} (paediatricians) \\ Isol-IW-ASR & (pulmonary) (specialist) (paediatricians) \\ P / R / F & Precision 1.00 [ 3 / 3 ] / Recall 0.75 [ 3 / 4 ] / F-Measure 0.86 \\ \hline \end{tabular} \caption{Evaluation metrics on a sample of the English benchmark: WER over the whole text; Precision, Recall, F-measure over both the IWs and the Isolated-IWs. ASR errors are highlighted in bold.
IWs are those in parentheses.} \label{tab:metrics} \end{center} } \end{table} After the application of this algorithm, a consistent version of the benchmark was obtained. By applying the same regeneration steps to the ASR output, a fair comparison was possible, considering only the IWs. We could also consider different metrics, either by treating each IW as a single item (regardless of the number of words that compose it) or by considering separately each word that composes the IWs (henceforth Isol-IW). The standard evaluation of ASR output is the Word Error Rate (WER), resulting from a word-by-word alignment between the reference text (REF) and the ASR output (TEST). In detail, WER is the percentage of substitutions, insertions and deletions over the number of REF words. In \SmarTerp, however, it could be more useful to concentrate on the IWs only, and to consider Precision, Recall and F-Measure as the primary metrics. The example in Table~\ref{tab:metrics} shows the different metrics used in this work. \subsection{Preliminary analysis} Figure~\ref{fig:OOVLex} reports the OOV rate of the \SmarTerp\ Benchmark for different values of the lexicon size, computed on all the available text data described in Table~\ref{tab:LMdata}. \begin{figure}[bh] \centering \includegraphics[scale=0.5]{OOVvsLex} \caption{OOV rate of the \SmarTerp\ benchmarks against lexicon size for the 3 languages.} \label{fig:OOVLex} \end{figure} An inspection of the OOV words was carried out for the Italian language, in order to better understand how they are distributed among different classes. With respect to the 128 Kwords lexicon, the Italian benchmark comprises $31001$ running words, of which $1089$ are OOV (corresponding to a 3.51\% OOV rate).
The number of different OOV words was 474, manually classified as follows: \begin{itemize} \itemsep-0.3em \item {\bf 190 Morpho}: morphological variations of common words (e.g.\ allunghiamo, distinguerle, divideremo - {\it we lengthen, distinguish them, we will divide}); \item {\bf 181 Tech}: technical terms, which will be part of IWs, so it is extremely important to keep their number as low as possible (e.g.\ bruxismo, implantologia, parodontopatici - {\it bruxism, implantology, periodontal disease}); \item {\bf 34 Errors}: words that should not be here and will be fixed soon: numbers written as words, wrong tokenization (e.g.\ cinque, computer-assistita, impianto-protesica, l'igiene - {\it five, computer-assisted, implant-prosthetic, the hygiene}); \item {\bf 28 English}: terms in English, often technical terms that should be recognized (e.g.\ osteotomy, picking, restaurative, tracing); \item {\bf 20 Names}: proper names of people, firms or products (e.g.\ claronav, davinci, hounsfield, navident); \item {\bf 10 Latin}: Latin words (e.g.\ dolor, restitutio, tumor - {\it pain, restoration, swelling}); \item {\bf 8 Acronyms}: (e.g.\ t-test, mua, d3, d4); \item {\bf 3 Foreign}: pseudo-foreign words that need particular care for pronunciation (e.g.\ customizzata, customizzati, matchare - {\it Italian neologisms from English custom, match}).
\end{itemize} \begin{table}[bt] \footnotesize{ \begin{center} \begin{tabular}{|rl|rl|rl|} \hline \multicolumn{2}{|c|}{allunghiamo} & \multicolumn{2}{ c }{distinguerle} & \multicolumn{2}{|c|}{divideremo} \\ \multicolumn{2}{|c|}{{\it we lengthen}} & \multicolumn{2}{ c }{{\it distinguish them}} & \multicolumn{2}{|c|}{{\it we will divide}} \\ \hline 10355 &allunga & 12118 &distingue & 7273 &divide \\ 12657 &allungare & 12493 &distinguere & 7931 &dividendo \\ 17187 &allungato & 20484 &distinguono & 12286 &dividere \\ 18040 &allungo & 26323 &distinguo & 14127 &dividendi \\ 20126 &allungamento & 34366 &distinguersi & 15601 &dividono \\ 23870 &allungano & 52496 &distinguendosi & 27370 &dividersi \\ 25749 &allungata & 56673 &distingueva & 43165 &divideva \\ 35514 &allungando & 60858 &distinguerlo & 59956 &dividerà \\ 40996 &allungate & 61213 &distinguendo & 61370 &dividerci \\ 42540 &allungati & 67741 &distinguibili & 62319 &divideranno \\ 43104 &allungarsi & 75608 &distinguerla & 63369 &dividendosi \\ 60394 &allunghi & 77105 &distinguibile & 68113 &dividevano \\ 98044 &allungherà & 79891 &distinguevano & 80977 &dividerli \\ 106019 &allungava & 91152 &distinguerli & 84294 &dividend \\ 120007 &allungandosi &115236 &distinguiamo & 91609 &divida \\ 126079 &allungherebbe &116550 &distingua & 97706 &dividiamo \\ & &119097 &distinguerà &121708 &dividerlo \\ \hline \end{tabular} \caption{Morphological variations of OOV words that are present in the 128 Kwords lexicon, along with their position in the lexicon.} \label{tab:morpho} \end{center} } \end{table} Tech, English, Names, Latin and Foreign words will deserve particular attention in future studies, because they are important for the domain. Errors will be fixed and should disappear; Acronyms should be recognized as subwords (e.g., d3 as d 3). Morpho words will probably be misrecognized as another morphological variation of the same stem present in the active dictionary, which in this domain is not considered a critical error.
Note that a single verbal stem in Italian can generate up to 300 different words, including clitics. Table~\ref{tab:morpho} shows the morphological variations of the 3 Morpho-class terms reported above which are present in the 128 Kwords lexicon. \begin{figure}[th] \centering \includegraphics[scale=0.45]{OOV-LexExperiments} \caption{OOV rate of the \SmarTerp\ benchmarks against lexicon size, for all the experiments and languages.} \label{fig:OOVLexExpe} \end{figure} \section{Experiments and results} Since several approaches can be employed to obtain, enlarge and use the seed words (e.g.\ based on text distance, text semantic similarity, etc.), we consider the following indicators, which allow us to measure their effectiveness on the benchmarks collected and manually transcribed within the \SmarTerp\ project. \begin{itemize} \itemsep-0.3em \item Seeds: the number of seed words used to extract the adaptation text; \item Out Of Vocabulary rate (OOV rate): the percentage of unknown words in the benchmark, with respect to the lexicon. OOV words cannot be part of the output of the ASR, hence they will certainly be errors. We should try to get a low OOV rate without letting the lexicon size grow too much; \item Lexicon size: the total number of active words in the adapted LM; \item Word Error Rate (WER): the percentage of errors made by the ASR; \item Precision, Recall, F-Measure over the set of Important Words (IWs) that were defined.
\end{itemize} The following experiments were carried out for each of the three languages: \begin{itemize} \itemsep-0.3em \item {\bf Baseline}: the initial 128 Kwords lexicon and the LM trained on the whole corpus, without any adaptation; \item {\bf Adapted}: LM adapted starting from seed words coming from a dental glossary (normally 2-3 pages of text, resulting in some hundreds of seeds), found with a quick internet search for terms like ``dental glossary'' (e.g.\ https://bnblab.com/intro/terminology). \item {\bf Word2Vec}: LM adapted using seed words obtained from 5 initial seed words, applying two iterations ($I_w=2$) of the procedure based on semantic similarity and retaining $N_w=40$ words for each term, obtaining $\sim 3000$ seed words. The 5 magic words\footnote{Many thanks to Susana Rodr\'iguez for the translations of the magic words from Italian} were: \begin{itemize} \itemsep-0.3em \item {\bf English}: tartar, filling, caries, tooth, dentist \item {\bf Italian}: tartaro, otturazione, carie, dente, dentista \item {\bf Spanish}: sarro, relleno, caries, diente, dentista \end{itemize} \end{itemize} Figure~\ref{fig:OOVLexExpe} reports the OOV rate of the \SmarTerp\ benchmark against the lexicon size for each experiment, along with the initial part of the curve of Figure~\ref{fig:OOVLex}. It should be noted that, for every language, Baseline lies along the initial curve, while both Adapted and Word2Vec are well below it. For all languages, Adapted has a Lexicon size in between those of Baseline and Word2Vec. This is due to the initial choice of the parameters described in Section~\ref{sec:selection}: by changing the parameters, a cloud of values could be generated instead of a single point. In fact, in this work we report only initial experiments, and future efforts will be devoted to parameter optimization.
In any case, the Lexicon size is directly related to the number of seeds and to the size of the adaptation text, which plays a very important role in the adaptation stage. Table~\ref{tab:results} reports preliminary results on the three benchmarks, for all the experiments. Together with the number of obtained seed words, the OOV rate and the Lexicon size, we report the WER computed on all the uttered words (including functional words, which are useless for this task), and Precision/Recall/F-measure computed both on IWs and on Isol-IWs: since the latter represent the most technically significant words in the domain, they are more closely related to the output desired by interpreters. It is worth noting that, with respect to Baseline, both the Adapted and Word2Vec systems are effective for all three languages and for all the considered metrics. Word2Vec performs slightly better than Adapted, but this may be due to the initial parameter values, which lead to more seeds and to a bigger Lexicon size. The comparatively high WER for English is partly due to poor audio quality in the recordings, which mainly affects functional words: this explains the high English precision, which is computed on IWs only.
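The IW-level scores illustrated in Table~\ref{tab:metrics} can be reproduced with a simple multiset-overlap computation; this is a sketch (the actual evaluation works on the word-by-word alignment), reusing the IW-REF/IW-ASR rows of the English example in that table:

```python
from collections import Counter

def iw_prf(ref_iws, asr_iws):
    """Precision/Recall/F-measure over Important Words, computed as the
    multiset overlap between reference IWs and ASR-output IWs."""
    ref, asr = Counter(ref_iws), Counter(asr_iws)
    hits = sum(min(ref[w], asr[w]) for w in asr)
    p = hits / sum(asr.values()) if asr else 0.0
    r = hits / sum(ref.values()) if ref else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f

# IW-REF / IW-ASR rows of the English example in Table "metrics".
ref = ["pulmonary_specialist", "ENTs", "paediatricians"]
asr = ["pulmonary_specialist", "paediatricians"]
p, r, f = iw_prf(ref, asr)
print(round(p, 2), round(r, 2), round(f, 2))  # -> 1.0 0.67 0.8
```

The Isol-IW variant is obtained by splitting each multi-word IW into its component words before scoring.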
\begin{table}[thb] \begin{center} \begin{tabular}{|l|r|c|c|c|c|c|} \hline & Seeds & Lex size& OOVrate& WER & IW P / R / F & Isol-IW P / R / F \\ \hline Eng BL & 0 & 128041 & 1.93\% & 26.39\% & 0.90 / 0.61 / 0.73 & 0.96 / 0.59 / 0.73 \\ Eng ada & 257 & 213237 & 0.79\% & 23.34\% & 0.92 / 0.73 / 0.81 & 0.97 / 0.71 / 0.82 \\ Eng w2v & 2999 & 373956 & 0.55\% & 23.86\% & 0.93 / 0.72 / 0.81 & 0.97 / 0.70 / 0.81 \\ \hline Ita BL & 0 & 128009 & 3.51\% & 15.14\% & 0.88 / 0.67 / 0.76 & 0.95 / 0.67 / 0.79 \\ Ita ada & 213 & 190126 & 1.53\% & 11.73\% & 0.96 / 0.84 / 0.89 & 0.98 / 0.82 / 0.90 \\ Ita w2v & 3527 & 316679 & 1.11\% & 11.28\% & 0.96 / 0.85 / 0.90 & 0.99 / 0.84 / 0.91 \\ \hline Spa BL & 0 & 128229 & 4.09\% & 22.60\% & 0.86 / 0.56 / 0.68 & 0.93 / 0.56 / 0.69 \\ Spa ada & 673 & 265764 & 1.25\% & 17.74\% & 0.95 / 0.76 / 0.85 & 0.98 / 0.75 / 0.85 \\ Spa w2v & 3207 & 333072 & 0.93\% & 17.31\% & 0.95 / 0.79 / 0.86 & 0.98 / 0.78 / 0.87 \\ \hline \end{tabular} \caption{Preliminary results for Baseline (BL), Adapted (ada) and Word2Vec (w2v) systems. Both WER on all words and Precision/Recall/F-measure on composite and isolated IWs are reported.} \label{tab:results} \end{center} \end{table} \section{Conclusions} We described two different approaches for extending the dictionary of an ASR system in order to detect important terms from technical speeches, namely dental reports, to be translated by simultaneous professional interpreters. The two approaches consist in extracting adaptation text from a huge set of text data, starting from some seed words. In the first one, seed words come from a given glossary. The second one is based on the application of a text similarity measure to an initial (very small) set of $5$ seed words. 
After the application of the selection procedures, we adapted the language models used in the ASR system employed in a computer-assisted interpretation (CAI) system under development, and we demonstrated the effectiveness of the approaches in terms of different evaluation metrics. \small \bibliographystyle{apalike}
Q: Image gets downloaded every ajax call

Currently I'm refreshing a table with an interval and an AJAX call. The AJAX call returns HTML (a table row, to be more exact) and that row can also contain a picture if certain data from that tr is still being updated. My problem is that on every AJAX call the same picture gets downloaded again; is there an easy way to make it load only once (on the first call)?
My code for the AJAX:

function searchcar() {
    $(".carcheck").each(function () {
        if ($(this).data('wait') == 1) {
            //var check = $(this).find('td:eq(0)').text();
            var item = $(this);
            var check = $(this).data('nr');
            request2 = $.ajax({
                url: "/site/api.php",
                type: "post",
                data: {'getdata': 'true', 'nr': check}
            });
            request2.done(function (response, textStatus, jqXHR) {
                item.replaceWith(response);
            });
        }
    });
}
setInterval(function () { searchcar() }, 5000);

My network tab looks something like this:

A:

request2.done(function (response, textStatus, jqXHR) {
    // Parse the JSON response and update only the cells whose data is ready
    var response = $.parseJSON(response);
    if (response.rar !== 'wait') {
        item.find('td:eq(1)').replaceWith(response.rar);
    }
    if (response.rov !== 'wait') {
        item.find('td:eq(2)').replaceWith(response.rov);
    }
    if (response.aid !== 'wait') {
        item.find('td:eq(3)').replaceWith(response.aid);
    }
    if (response.arr !== 'wait') {
        item.find('td:eq(4)').replaceWith(response.arr);
    }
});

Fixed it by only replacing the data that's loaded, so I'm no longer sending any <img> through the AJAX response. If anyone has a better idea, I'd appreciate it, thanks!
\section{Introduction} The ${\rm SLE}_\kappa(\rho)$ processes are an important variant of the Schramm-Loewner evolution (${\rm SLE}$) \cite{S0}. They were first introduced by Lawler, Schramm, and Werner in \cite[Section~8.3]{LSW_RESTRICTION}. Like ordinary ${\rm SLE}_\kappa$, ${\rm SLE}_\kappa(\rho)$ is defined using the Loewner equation and a {\em driving function}~$W$ that looks (at least locally) like~$\sqrt{\kappa}$ times a Brownian motion. However, in addition to the driving function~$W$, one keeps track of a so-called {\em force point} process~$V$, which itself evolves according to Loewner evolution, and which exerts a {\em drift} on~$W$ proportional to $\rho/(W-V)$. When $\rho > 0$ (resp.\ $\rho < 0$), the drift pushes~$W$ away from (resp.\ towards) the force point~$V$, and the case $\rho = 0$ corresponds to ordinary ${\rm SLE}_\kappa$. The difference $W - V$ evolves as a positive multiple of a Bessel process of dimension $\delta(\kappa,\rho) = 1+\tfrac{2(\rho+2)}{\kappa}$. See Section~\ref{sec::preliminaries} for a formal definition of ${\rm SLE}_\kappa(\rho)$. Various flavors of ${\rm SLE}_\kappa(\rho)$ have been discussed in the literature, but in this paper we generally assume that the processes are {\em chordal} (so they grow from~$0$ to~$\infty$ in the upper half plane~$\mathbf{H}$), {\em one-sided} (so that all excursions of $W-V$ away from zero have the same sign) and {\em origin seeded} (meaning that $V_0 = W_0 = 0$). The time evolution of~$W$ and~$V$ is straightforward to define during intervals of time in which $W_t \neq V_t$, but to continue the evolution after~$W$ and~$V$ collide, one has to work out precisely how these processes ``bounce off'' one another. In the original construction in \cite[Section~8.3]{LSW_RESTRICTION}, and in most of the later work on ${\rm SLE}_\kappa(\rho)$ processes, this is only done for $\rho > -2$. 
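For orientation (the formal definitions are recalled in Section~\ref{sec::preliminaries}), away from collision times the driving pair of a chordal ${\rm SLE}_\kappa(\rho)$ solves the standard SDE system

```latex
\begin{align*}
dW_t &= \sqrt{\kappa}\, dB_t + \frac{\rho}{W_t - V_t}\, dt,\\
dV_t &= \frac{2}{V_t - W_t}\, dt, \qquad V_0 = W_0 = 0,
\end{align*}
```

so that, as noted above, $W - V$ evolves as a positive multiple of a Bessel process of dimension $\delta(\kappa,\rho)$; the subtlety is precisely how to continue these equations through collision times.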
The threshold~$-2$ corresponds to $\delta(\kappa,\rho)=1$, which is the critical threshold below which Bessel processes fail to be semimartingales \cite[Chapter~11]{RY04}. This is related to the fact that $\delta > 1$ is necessary in order for the integral $\int_0^T (W_t - V_t)^{-1} dt$ to be a.s.\ finite for all $T$, which in turn ensures that the cumulative amount of drift exerted on $W$ (up to any finite time) is a.s.\ finite. To define ${\rm SLE}_{\kappa}(\rho)$ when $\rho < -2$ it is necessary to introduce a local time L\'evy compensation to keep the accumulated drift from sending $W$ off to $\infty$ in finite time. As we recall in Section~\ref{sec::preliminaries} (citing~\cite[Section 3.2]{SHE_CLE}), there is a natural scale-invariant way to do this if and only if $\rho > -2 - \tfrac{\kappa}{2}$ so that $\delta > 0$. As detailed in~\cite[Section 3]{SHE_CLE}, if one parameterizes $W$ by the local time associated to $\{t: W_t = V_t \}$ one obtains a skew stable L\'evy process, so that the classification of general ${\rm SLE}_{\kappa}(\rho)$ processes is closely related to the classification of skew stable L\'evy processes.\footnote{In the account in \cite{SHE_CLE}, there is a parameter $\beta$ such that each $W-V$ excursion away from zero is (independently of all others) assigned a positive sign with probability $(1+\beta)/2$ and a negative sign otherwise. When $\rho = -2$, it is necessary to take $\beta =0$ to obtain a canonical, scale-invariant and non-trivial process, and there is an additional free parameter $\mu$ in that case. We will not consider the $\rho = -2$ setting here, except to say that in some limiting sense $\beta = 1$ and $\rho = -2$ corresponds to a trivial boundary tracing path. 
As mentioned above, this paper treats only the ``one-sided'' case $\beta = 1$, and our main results assume $\rho < -2$.} The continuity and reversibility properties of ${\rm SLE}_\kappa(\rho)$ with $\rho > -2$ are established in \cite{MS_IMAG,MS_IMAG2,MS_IMAG3,MS_IMAG4}, which exhibit and make use of explicit couplings between these processes and the Gaussian free field (GFF) \cite{She_SLE_lectures,DUB_PART,SchrammShe10,MS_IMAG,MS_IMAG4} (see also \cite{Z_R_KAPPA_RHO,DUB_DUAL} for the reversibility of ${\rm SLE}_\kappa(\rho)$ for $\kappa \in (0,4)$ and $\rho \geq \tfrac{\kappa}{2}-2$). When $\rho > -2$, the range of an ${\rm SLE}_\kappa(\rho)$ process looks locally like the range of an ordinary ${\rm SLE}_\kappa$, except where the path hits the boundary. When $\rho \leq -2$, however, one obtains interesting and qualitatively different processes. The Bessel dimension interval $\delta \in (0,1)$ corresponds to $\rho \in (-2 - \tfrac{\kappa}{2}, -2)$. In this article we focus on the set $\mathcal T = \{(\kappa, \rho): (-2-\tfrac{\kappa}{2})\vee (\tfrac{\kappa}{2}-4) < \rho < -2 \}$, which corresponds to the yellow {\em light cone} region depicted in Figure~\ref{fig::rho_kappa_chart}. The {\em loops on trunk} regions shown in Figure~\ref{fig::rho_kappa_chart} are studied in detail in \cite{cle_percolations}.\footnote{In the loops-on-trunk regime explored in \cite{cle_percolations}, each excursion of $W-V$ away from zero describes a loop, and it is important and relevant to consider {\em non-one-sided} ${\rm SLE}_\kappa(\rho)$, which can be written ${\rm SLE}_\kappa^\beta(\rho)$ for $\beta \in [-1,1]$, and which correspond to different types of CLE explorations. These explorations are useful for understanding CLE percolation and the continuum FK correspondence, among other things.
In general, ${\rm SLE}_\kappa^\beta(\rho)$ can be defined for all $\beta \in [-1,1]$ whenever $\rho \in (-2-\kappa/2, \kappa/2-2) \setminus \{-2\}$, so that $\delta \in (0,2) \setminus \{1 \}$, and \cite[Section~10.1.3]{cle_percolations} briefly describes how to interpret and prove continuity results for these processes for general $\beta$ in the case $\kappa>4$. When $\kappa \leq 4$, it remains an open problem to prove continuity for ${\rm SLE}_{\kappa}^\beta(\rho)$ when $\beta \in (-1,1)$ and $\rho \in (-2-\kappa/2 \vee \kappa/2-4, \kappa/2-2) \setminus \{-2 \}$, i.e., in the light cone region and (the boundary-intersecting part of) the ordinary flow line region in Figure~\ref{fig::rho_kappa_chart}. We remark that in these regions, each excursion of $W-V$ away from zero should (assuming continuity of the overall path) describe a {\em chord} (i.e., a simple path segment starting and ending at different points) and we are not aware of a natural interpretation of an overall path that alternates between left and right going chords. As mentioned earlier, we treat only the case $\beta = 1$ in this paper. (The case $\beta = -1$ is equivalent by symmetry.) } We will find that ${\rm SLE}_\kappa(\rho)$ with $(\kappa, \rho) \in \mathcal T$ can be naturally coupled with an instance of the GFF, and that in this coupling the field a.s.\ determines the path. This will be accomplished by showing that such a process can be realized as an {\bf ordered light cone} of angle-varying flow lines of the (formal) vector-field $e^{i h / \chi}$, \begin{equation} \label{eqn::chi} \chi := \frac{2}{\sqrt{\kappa}} - \frac{\sqrt{\kappa}}{2}, \end{equation} where $h$ is a GFF. We remark that for $\kappa' > 4$, we have $\tfrac{\kappa'}{2}-4 > -2$ so ${\rm SLE}_{\kappa'}(\rho)$ with this range of $\rho$ values falls under the scope of \cite{MS_IMAG,MS_IMAG2,MS_IMAG3,MS_IMAG4}. 
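We record two elementary observations regarding~\eqref{eqn::chi} which are useful to keep in mind: $\chi > 0$ precisely when $\kappa \in (0,4)$, and under the duality substitution $\kappa \mapsto \kappa' = 16/\kappa$ (so that $\sqrt{\kappa'} = 4/\sqrt{\kappa}$) one has
\[ \chi(\kappa') = \frac{2}{\sqrt{\kappa'}} - \frac{\sqrt{\kappa'}}{2} = \frac{\sqrt{\kappa}}{2} - \frac{2}{\sqrt{\kappa}} = -\chi(\kappa).\]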
At $\rho = \tfrac{\kappa}{2}-4$, ${\rm SLE}_\kappa(\rho)$ for $\kappa \in (2,4)$ has a phase transition from the light cone regime described in this article to the loop-making/trunk regime studied by the authors together with Werner in \cite{cle_percolations}. (In fact, as we will explain here and have also mentioned in \cite{cle_percolations}, the law of the range of an ${\rm SLE}_\kappa(\tfrac{\kappa}{2}-4)$ process is the same as the law of the range of an ${\rm SLE}_{\kappa'}(\tfrac{\kappa'}{2}-4)$ process, where $\kappa \in (2,4)$ and $\kappa' = 16/\kappa > 4$.) See Table~\ref{tab::rho_values} and Figure~\ref{fig::rho_kappa_chart} for a summary of the phases of ${\rm SLE}_\kappa(\rho)$. Our first main result concerns continuity and transience. \begin{theorem} \label{thm::continuous} The ${\rm SLE}_\kappa(\rho)$ processes for $\kappa \in (0,4)$, $\rho \in [\tfrac{\kappa}{2}-4,-2)$, and $\rho > -2-\tfrac{\kappa}{2}$ are almost surely continuous and transient. That is, if $D \subseteq \mathbf{C}$ is a Jordan domain, $x,y \in \partial D$ are distinct, and $\eta \colon [0,\infty) \to D$ is an ${\rm SLE}_\kappa(\rho)$ in $D$ from $x$ to $y$, then $\eta$ is almost surely continuous and $\lim_{t \to \infty} \eta(t) = y$ almost surely. \end{theorem} The continuity of ordinary ${\rm SLE}$ was first proved by Rohde and Schramm in \cite{RS05}. The main idea is to estimate the moments of the derivative of the reverse Loewner flow evaluated near the inverse image of the tip of the path. By the Girsanov theorem, during a time interval in which $V_t \not = W_t$, the evolution of an ${\rm SLE}_\kappa(\rho)$ is absolutely continuous with respect to the evolution of ordinary ${\rm SLE}_\kappa$. Consequently, the almost sure continuity of the process during such intervals of time can be easily derived from \cite{RS05}. From this one can see immediately that ${\rm SLE}_\kappa(\rho)$ is a.s.\ continuous when $\rho \geq \tfrac{\kappa}{2}-2$ so that $\delta \geq 2$. 
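For the reader's convenience, we note how the $\rho$ thresholds appearing above translate into $\delta$ thresholds via the relation $\delta = 1 + \tfrac{2(\rho+2)}{\kappa}$ (recalled in Section~\ref{subsec::sle}):
\begin{align*}
\rho = \tfrac{\kappa}{2}-2 &\iff \delta = 2, & \rho = -2 &\iff \delta = 1,\\
\rho = \tfrac{\kappa}{2}-4 &\iff \delta = 2-\tfrac{4}{\kappa}, & \rho = -2-\tfrac{\kappa}{2} &\iff \delta = 0.
\end{align*}
In particular, the light cone range $\rho \in ((-2-\tfrac{\kappa}{2}) \vee (\tfrac{\kappa}{2}-4),-2)$ corresponds to $\delta \in ((2-\tfrac{4}{\kappa}) \vee 0,1)$.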
A more general statement is \cite[Theorem~1.3]{MS_IMAG}, which states that ${\rm SLE}_{\kappa}(\rho)$ is a.s.\ continuous for all $\kappa$ and all $\rho > -2$. The idea of that proof is to extract the continuity from the non-boundary-intersecting case and a conditioning trick which involves multiple ${\rm SLE}$ paths coupled together using the GFF. Theorem~\ref{thm::continuous} extends this further to the case that $\rho \geq \tfrac{\kappa}{2}-4$ and $\rho > -2-\tfrac{\kappa}{2}$. Its proof is also based on GFF arguments, though the method is rather different from that of \cite[Theorem~1.3]{MS_IMAG}. Continuity in the case that $\rho \in (-2-\tfrac{\kappa}{2},\tfrac{\kappa}{2}-4]$ was established in \cite{cle_percolations}, also using GFF-based arguments. Combining these works, we have ${\rm SLE}_\kappa(\rho)$ continuity for all of the regions shown in Figure~\ref{fig::rho_kappa_chart}. \begin{table} {\footnotesize \begin{center} \begin{tabular}{llllcc} \toprule \head{$\rho$} & \head{$\delta(\kappa,\rho)$} & \head{$\mathrm{dim}_{\mathcal H}(\text{Range})$} & \head{Process type} & \head{Simple} & \head{Rev.}\\ \midrule $(-\infty,-2-\tfrac{\kappa}{2}]$ & $(-\infty,0]$ & --- & Not defined & --- & ---\\ $(-2-\tfrac{\kappa}{2},\tfrac{\kappa}{2}-4]$ & $(0,2-\tfrac{4}{\kappa}]$ & $1+\tfrac{2}{\kappa}$ & Trunk plus loops & $\text{\sffamily X}$ & $\text{\sffamily X}$\\ $(\tfrac{\kappa}{2}-4,-2)$ & $(2-\tfrac{4}{\kappa},1)$ & $\tfrac{(\kappa-2(2+\rho))(\kappa+2(6+\rho))}{8\kappa}$ & Light cone & $\text{\sffamily X}$ & $\text{\sffamily X}$\\ $-2$ & $1$ & $1$ & $\partial$ tracing & $\checkmark$ & $\checkmark$\\ $(-2,\tfrac{\kappa}{2}-2)$ & $(1,2)$ & $1+\tfrac{\kappa}{8}$ & $\partial$ hitting & $\checkmark$ & $\checkmark$\\ $[\tfrac{\kappa}{2}-2,\infty)$ & $[2,\infty)$ & $1+\tfrac{\kappa}{8}$ & $\partial$ avoiding & $\checkmark$ & $\checkmark$\\ \bottomrule \end{tabular} \end{center}} \medskip \caption{\label{tab::rho_values} Phases of~$\rho$ values and corresponding 
$\delta(\kappa,\rho)$ (driving Bessel process dimension) values for ${\rm SLE}_\kappa(\rho)$ processes with a single boundary force point of weight~$\rho$, assuming $\kappa \in (2,4)$. When $\kappa \in (0, 2]$, the phases are the same except that the second and third are replaced by a single ``light cone'' phase with $\rho \in (-2-\tfrac{\kappa}{2},-2)$ and $\delta \in (0,1)$. The symbol ``$\partial$'' should be translated as ``boundary'' and ``rev.'' stands for ``reversible.'' The statements in the reversible column are only applicable when the force point is located immediately to the left or to the right of the seed of the process.} \end{table} \begin{figure}[ht!] \begin{center} \includegraphics[scale=0.85]{figures/rhokappachart} \end{center} \caption{\label{fig::rho_kappa_chart} Phase diagram for the behavior of ${\rm SLE}_\kappa(\rho)$ for $\rho$ above the minimal value $-2-\tfrac{\kappa}{2}$ for which such a process is defined. The present paper is focused on the light cone regime (yellow triangle) where $\kappa \in (0,4)$ and $\rho \in ( (-2-\tfrac{\kappa}{2}) \vee (\tfrac{\kappa}{2}-4),-2)$. The other two $\rho < -2$ regimes are studied in \cite{cle_percolations} and the $\rho > -2$ cases are treated in \cite{MS_IMAG,MS_IMAG4}.} \end{figure} Suppose that $D \subseteq \mathbf{C}$ is a Jordan domain, $x \in \partial D$, and $h$ is a GFF on~$D$ with given boundary conditions. Fix angles $\theta_1 \leq \theta_2 \leq \theta_1 + \pi$. The {\bf ${\rm SLE}_\kappa$ light cone} ${\mathbf L}_x(\theta_1,\theta_2)$ of~$h$ starting from $x$ with angle range $[\theta_1,\theta_2]$ is a random set in~$D$ generated from the flow lines of $e^{i h / \chi}$ (hereafter, we will refer to these simply as ``flow lines of $h$''). 
It is explicitly given by the closure of the set of points accessible by the flow lines of $h$ starting from $x$ with angles which are either rational and contained in $[\theta_1,\theta_2]$ or equal to~$\theta_1$ or~$\theta_2$ and which change angles a finite number of times and only at positive rational times. These objects were first introduced in \cite{MS_IMAG}. We call $\theta_2-\theta_1$ the {\bf opening angle} of ${\mathbf L}_x(\theta_1,\theta_2)$. For $\theta \in [0,\pi]$, we let ${\mathbf L}_x(\theta) = {\mathbf L}_x(-\tfrac{\theta}{2},\tfrac{\theta}{2})$. It is shown in \cite[Theorem~1.4]{MS_IMAG} that a light cone with opening angle $\pi$ starting from $x$ is equal to the range of a form of ${\rm SLE}_{16/\kappa}$, which is called a {\em counterflow line} targeted at $x$. More generally, if $A$ is a segment of $\partial D$, we let ${\mathbf L}_A(\theta_1,\theta_2)$ be the closure of the set of points accessible by flow lines of $h$ starting from a countable dense subset of $A$ with angles which are either rational and contained in $[\theta_1,\theta_2]$ or equal to $\theta_1$ or $\theta_2$ and which change angles only a finite number of times and only at positive rational times. Our next result states that ${\mathbf L}_{\mathbf{R}_-}(0,\theta)$ for intermediate values of $\theta \in (0,\pi)$ is equal to the range of an ${\rm SLE}_\kappa(\rho)$ process provided the boundary data of $h$ is chosen appropriately. Let \begin{equation} \label{eqn::lambda} \lambda := \frac{\pi}{\sqrt{\kappa}}. \end{equation} \begin{theorem} \label{thm::coupling} Fix $\kappa \in (0,4)$, $\rho \in [\tfrac{\kappa}{2}-4,-2)$ and $\rho > -2-\tfrac{\kappa}{2}$, and suppose that $h$ is a GFF on $\mathbf{H}$ whose boundary data is given by $-\lambda$ on $\mathbf{R}_-$ and $\lambda(1+\rho)$ on $\mathbf{R}_+$. Let $\eta$ be an ${\rm SLE}_\kappa(\rho)$ process on $\mathbf{H}$ from $0$ to $\infty$ with its force point located at $0^+$. 
For each $t \geq 0$, let $K_t$ denote the closure of the complement of the unbounded connected component of $\mathbf{H} \setminus \eta([0,t])$, let $g_t \colon \mathbf{H} \setminus K_t \to \mathbf{H}$ be the unique conformal transformation with $\lim_{z \to \infty} |g_t(z) - z| = 0$, and let $(W,V)$ be the Loewner driving pair for $\eta$. There exists a unique coupling of $h$ and $\eta$ such that the following is true. For each $\eta$-stopping time $\tau$, the conditional law of \[ h \circ g_\tau^{-1} - \chi \arg( g_\tau^{-1})'\] given $\eta|_{[0,\tau]}$ is that of a GFF on $\mathbf{H}$ with boundary conditions given by \[ h|_{(-\infty,W_\tau]} \equiv -\lambda,\quad h|_{(W_\tau,V_\tau]} \equiv \lambda, \quad\text{and}\quad h|_{(V_\tau,\infty)} \equiv \lambda(1+\rho).\] Moreover, in the coupling $(h,\eta)$, $\eta$ is almost surely determined by $h$. Finally, let \begin{equation} \label{eqn::lightcone_angle} \theta = \theta_\rho = \pi\left(\frac{\rho+2}{\kappa/2-2} \right). \end{equation} Then the range of $\eta$ is almost surely equal to ${\mathbf L}_{\mathbf{R}_-}(0,\theta)$. \end{theorem} We remark that the existence statement in Theorem~\ref{thm::coupling} takes the same form as that for ${\rm SLE}_\kappa(\rho)$ when $\rho > -2$, e.g.,\ \cite[Theorem~1.1]{MS_IMAG}. The proof that we give here, however, is quite different. The difference between the different regimes of $\rho$ values is in the way that the coupling is interpreted. In particular, we interpret the process when $\rho > -2$ as being a flow line of the (formal) vector field $e^{i h /\chi}$ (see the introductions to \cite{SHE_WELD, MS_IMAG} for further explanation) while we interpret the process when $\rho \in [\tfrac{\kappa}{2}-4,-2)$ as an ordered light cone of flow lines of $e^{i h / \chi}$. The method that we use to prove existence in Theorem~\ref{thm::coupling} is also very different from the existence proof given in \cite{She_SLE_lectures,SHE_WELD,SchrammShe10,DUB_PART} for $\rho > -2$. 
Indeed, in these works existence is shown by proving that a sample of the GFF can be produced by first sampling the path according to its marginal distribution and then sampling a GFF on the complement of the range of the path with appropriate boundary conditions. That the marginal law of the field is a GFF is proved using tools from stochastic calculus. In the present work, we will use the flow line interaction theory from \cite{MS_IMAG,MS_IMAG2,MS_IMAG3,MS_IMAG4} and the local set theory from \cite{SchrammShe10} to show directly that the path which arises by visiting the points of a light cone in a particular order evolves as an ${\rm SLE}_\kappa(\rho)$. The final statement of Theorem~\ref{thm::coupling} generalizes \cite[Theorem~1.4]{MS_IMAG} to the setting of ${\rm SLE}_\kappa(\rho)$ for $\rho \in [\tfrac{\kappa}{2}-4,-2)$ and $\rho > -2-\tfrac{\kappa}{2}$. In the case of \cite[Theorem~1.4]{MS_IMAG}, the result was obtained by studying the manner in which flow and counterflow lines coupled together with the GFF interact with each other. The proof of Theorem~\ref{thm::coupling} is different: we will extract the result from the corresponding statement for ${\rm SLE}_{\kappa'}(\rho)$ processes with $\rho > -2$ proved in \cite[Theorem~1.3]{MS_IMAG}. Let $\mathrm{dim}_{\mathcal H}(A)$ denote the Hausdorff dimension of a set $A$. The almost sure value of $\mathrm{dim}_{\mathcal H}({\mathbf L}_x(\theta))$ is computed in \cite[Theorem~1.1]{LIGHTCONE_DIMENSION}. Combining this with Theorem~\ref{thm::coupling} gives that if $\eta$ is an ${\rm SLE}_\kappa(\rho)$ process with $\kappa \in (0,4)$, $\rho \in [\tfrac{\kappa}{2}-4,-2)$, and $\rho > -2-\tfrac{\kappa}{2}$, then \begin{equation} \label{eqn::dimension} \mathrm{dim}_{\mathcal H}(\eta) = \frac{(\kappa-2(2+\rho))(\kappa+2(6+\rho))}{8\kappa} \quad\text{almost surely}. \end{equation} This result is stated as \cite[Theorem~1.2]{LIGHTCONE_DIMENSION}. 
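As a quick sanity check, we record the behavior of~\eqref{eqn::lightcone_angle} and~\eqref{eqn::dimension} at the endpoints of the light cone interval. Since $\kappa \in (0,4)$ gives $\tfrac{\kappa}{2}-2 < 0$, the angle $\theta_\rho$ decreases from $\theta_{\kappa/2-4} = \pi$ to $\theta_{-2} = 0$ as $\rho$ increases from $\tfrac{\kappa}{2}-4$ to $-2$, consistently with the fact that a light cone with opening angle $\pi$ agrees with the range of a counterflow line while $\rho = -2$ corresponds to the degenerate boundary-tracing case of Table~\ref{tab::rho_values}. Similarly, the right side of~\eqref{eqn::dimension} interpolates between the dimensions of the neighboring phases:
\[ \left.\frac{(\kappa-2(2+\rho))(\kappa+2(6+\rho))}{8\kappa}\right|_{\rho = \kappa/2-4} = \frac{4(2\kappa+4)}{8\kappa} = 1+\frac{2}{\kappa} \quad\text{and}\quad \left.\frac{(\kappa-2(2+\rho))(\kappa+2(6+\rho))}{8\kappa}\right|_{\rho = -2} = \frac{\kappa(\kappa+8)}{8\kappa} = 1+\frac{\kappa}{8},\]
which match the trunk-plus-loops dimension and the dimension of boundary-hitting ${\rm SLE}_\kappa(\rho)$ with $\rho > -2$, respectively.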
The decomposition of the range of ${\rm SLE}_\kappa(\rho)$ into a light cone of angle-varying flow lines is related to the notion of duality for ${\rm SLE}_\kappa$. The principle of duality states that the outer boundary of an ${\rm SLE}_{\kappa'}$ process can be described by a form of ${\rm SLE}_\kappa$ for $\kappa \in (0,4)$ and $\kappa'=16/\kappa \in (4,\infty)$; see \cite{ZHAN_DUALITY_1,ZHAN_DUALITY_2,DUB_DUAL,MS_IMAG,MS_IMAG4}. Since the range of an ${\rm SLE}_{\kappa'}$ process can be described in terms of a light cone with opening angle $\pi$, it thus follows from Theorem~\ref{thm::coupling} that the law of the range of an ${\rm SLE}_\kappa(\tfrac{\kappa}{2}-4)$ is the same as that of a form of ${\rm SLE}_{\kappa'}$ (specifically, an ${\rm SLE}_{\kappa'}(\tfrac{\kappa'}{2}-4)$). It turns out, however, that the two processes visit the points in their range in a different order. This is explained in more detail in Section~\ref{sec::limiting_cases} as well as in \cite{cle_percolations}. Our final result is the continuity of the law of an ${\rm SLE}_\kappa(\rho)$ process as a function of $\rho$ with $\rho$ in the light cone regime. \begin{theorem} \label{thm::interpolation} Fix $\kappa \in (0,4)$, let $D \subseteq \mathbf{C}$ be a bounded Jordan domain, and fix $x,y \in \partial D$ distinct. The law of the trajectory of an ${\rm SLE}_\kappa(\rho)$ process from $x$ to $y$ in $D$ is continuous with respect to the weak topology induced by the topology of uniform convergence modulo time parameterization as $\rho$ varies between $(-2-\tfrac{\kappa}{2})\vee(\tfrac{\kappa}{2}-4)$ and $-2$. \end{theorem} \subsection*{Outline} The remainder of this article is structured as follows. In Section~\ref{sec::preliminaries}, we will give some preliminaries. In Section~\ref{sec::gff_couplings} we will prove Theorem~\ref{thm::coupling} and then use it to derive Theorem~\ref{thm::continuous} and Theorem~\ref{thm::interpolation}. 
Finally, in Section~\ref{sec::limiting_cases} we will explain why the law of the range of an ${\rm SLE}_\kappa(\tfrac{\kappa}{2}-4)$ process for $\kappa \in (2,4)$, which is at the boundary of the light cone regime, is equal to the law of the range of an ${\rm SLE}_{\kappa'}(\tfrac{\kappa'}{2}-4)$ process, but the processes visit their range in a different order. \section{Preliminaries} \label{sec::preliminaries} In this section, we are going to give an overview of the chordal ${\rm SLE}_\kappa(\rho)$ processes, focusing on the particular case that $\rho \in (-2-\tfrac{\kappa}{2},-2)$, as well as summarize some of the basics of imaginary geometry \cite{MS_IMAG,MS_IMAG2,MS_IMAG3,MS_IMAG4} which are relevant for this work. \subsection{${\rm SLE}_\kappa(\rho)$ processes} \label{subsec::sle} In this subsection, we are going to give an overview of the ${\rm SLE}_\kappa(\rho)$ processes. These are variants of ${\rm SLE}$ first introduced in \cite[Section~8.3]{LSW_RESTRICTION}. They are defined in the same way as ordinary ${\rm SLE}$, except they are driven by a multiple of a Bessel process in place of a Brownian motion. The treatment that we give here will parallel that from \cite[Section~3.2 and Section~3.3]{SHE_CLE}. For the convenience of the reader, we will now review a few basic facts about Bessel processes. (We refer the reader to \cite[Chapter~11]{RY04} for a more in-depth introduction.) The starting point for the construction of the law of a \emph{Bessel process of dimension $\delta$} (${\mathrm {BES}}^\delta$) is the so-called \emph{square Bessel process of dimension $\delta$} (${\mathrm {BESQ}}^\delta$). For a fixed value of $\delta \in \mathbf{R}$, the law of a ${\mathrm {BESQ}}^\delta$ is described in terms of the SDE \begin{equation} \label{eqn::besq} dZ_t = \delta dt + 2\sqrt{Z_t} dB_t ,\quad Z_0 = z_0 > 0, \end{equation} where $B$ is a standard Brownian motion. 
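Anticipating the definition of the Bessel process just below as $X = \sqrt{Z}$, we record the It\^o computation behind the SDE that $X$ satisfies: on an interval of time in which $Z > 0$, using $d\langle Z \rangle_t = 4 Z_t \, dt$ we compute
\[ dX_t = \frac{dZ_t}{2\sqrt{Z_t}} - \frac{d\langle Z \rangle_t}{8 Z_t^{3/2}} = \frac{\delta \, dt + 2\sqrt{Z_t} \, dB_t}{2\sqrt{Z_t}} - \frac{4 Z_t \, dt}{8 Z_t^{3/2}} = \frac{\delta-1}{2} \cdot \frac{1}{X_t} \, dt + dB_t.\]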
Standard results for SDEs imply that there is a unique strong solution to~\eqref{eqn::besq}, at least up until the first time that $Z$ hits $0$. When $\delta > 0$, there in fact exists a unique strong solution for all $t \geq 0$ which is non-negative for all times. A process $X$ has the ${\mathrm {BES}}^\delta$ law if it admits the expression $X = \sqrt{Z}$ where $Z$ is a ${\mathrm {BESQ}}^\delta$. It\^o's formula implies that $X$ solves the SDE \begin{equation} \label{eqn::bes} dX_t = \frac{\delta-1}{2} \cdot \frac{1}{X_t} dt + dB_t,\quad X_0 = x_0, \end{equation} at least up until the first time that $X$ hits $0$. Using that $X_t^{2-\delta}$ is a continuous local martingale, it is straightforward to check that a ${\mathrm {BES}}^\delta$ process almost surely hits $0$ if $\delta < 2$ and almost surely does not hit $0$ if $\delta \geq 2$. When $\delta > 1$, a ${\mathrm {BES}}^\delta$ process solves~\eqref{eqn::bes} in integrated form for all $t \geq 0$, even when it is bouncing off $0$. In particular, such processes are semimartingales. A ${\mathrm {BES}}^1$ process $X$ is equal in distribution to $|B|$ where $B$ is a standard Brownian motion, hence in this case, by the It\^o-Tanaka formula, $X$ solves a version of~\eqref{eqn::bes} with an extra correction coming from the local time of $X$ at $0$. Thus ${\mathrm {BES}}^1$ processes are also semimartingales. However, $X_t^{-1}$ is not integrable in this case. When $\delta \in (0,1)$, it turns out that a ${\mathrm {BES}}^\delta$ process is not a semimartingale. In order to make sense of it as a solution to~\eqref{eqn::bes} in integrated form, one needs to make a so-called principal value correction. Namely, $X$ satisfies the integral equation \begin{equation} \label{eqn::bes_pv} X_t = x_0 + \frac{\delta-1}{2} {\rm P.V.} \int_0^t \frac{1}{X_s} ds + B_t. 
\end{equation} As explained in \cite[Chapter~11]{RY04}, the principal value correction can be understood in terms of an integral of the local time process of $X$ at $0$. We will not discuss the details of this here since the properties and definition of the principal value correction will not play much of a role in this work. The Bessel processes that we have discussed so far are always non-negative. We remark that it is also natural in certain contexts to consider Bessel processes which can take on both positive and negative values. These processes can be constructed by starting off with a Bessel process which is always non-negative and then assigning a random sign to each excursion the process makes from $0$ as a result of an independent coin flip. These processes give rise to so-called side-swapping ${\rm SLE}_\kappa(\rho)$ processes, which we will not discuss in the present article. As mentioned just above, the ${\mathrm {BES}}^\delta$ processes are the starting point for constructing the so-called ${\rm SLE}_\kappa(\rho)$ processes. Fix $\kappa > 0$, $\rho > -2-\tfrac{\kappa}{2}$, and let \[ \delta = 1+\frac{2(\rho+2)}{\kappa}.\] Note that $\delta > 0$. Let $X_t$ be a ${\mathrm {BES}}^\delta$ and let \begin{align*} V_t = \frac{2}{\sqrt{\kappa}} {\rm P.V.} \int_0^t \frac{1}{X_s} ds \quad\text{and}\quad W_t = V_t - \sqrt{\kappa} X_t. \end{align*} Then the chordal Loewner chain $(g_t)$ driven by $W$, i.e., the solution to the ODE \[ \partial_t g_t(z) = \frac{2}{g_t(z) - W_t},\quad g_0(z) = z,\] is an ${\rm SLE}_\kappa(\rho)$ process. The point $g_t^{-1}(V_t)$ gives the location of the so-called \emph{force point} of the ${\rm SLE}_\kappa(\rho)$ process at time $t$. Let us make a few comments about this definition. In the case that $\rho > -2$ so that $\delta > 1$, the principal value integral is the same as the usual integral. 
This implies that $V_t$ is equal to the image under $g_t$ of the rightmost intersection point of the corresponding hull at time~$t$ with~$\mathbf{R}$. Equivalently, the force point at each time $t$ is located at the rightmost intersection of the hull with~$\mathbf{R}$. The continuity of the processes in this case was established in \cite{MS_IMAG}, building on the continuity of ${\rm SLE}_\kappa$ proved in \cite{RS05}. In the case that $\rho \in (-2-\tfrac{\kappa}{2},-2)$ so that $\delta \in (0,1)$, the force point of an ${\rm SLE}_\kappa(\rho)$ process \emph{does not} stay in $\mathbf{R}$, as a consequence of the principal value correction which is necessary for its definition. In the case that $\rho \in (-2-\tfrac{\kappa}{2},\tfrac{\kappa}{2}-4]$ and $\kappa \in (2,4)$, the continuity of these processes was proved in \cite{cle_percolations} using couplings of these processes with the GFF and as a consequence of the continuity of so-called space-filling ${\rm SLE}$ established in \cite{MS_IMAG4}. In the present work, we will prove the continuity of these processes for $\rho \in ((-2-\tfrac{\kappa}{2}) \vee (\tfrac{\kappa}{2}-4),-2)$, also using the GFF and the continuity of space-filling ${\rm SLE}$, thus covering all possible cases. The ${\rm SLE}_\kappa(\rho)$ processes with $\rho \in (-2-\tfrac{\kappa}{2},-2)$ admit certain approximations which are described in \cite[Section~6]{SHE_CLE}. The reader might find the description contained there helpful for understanding why the principal value correction leads to the force point of the process not always being on the domain boundary. We finish this subsection by collecting the following technical result, which we will use in Section~\ref{sec::gff_couplings} in conjunction with \cite[Theorem~2.4]{MS_IMAG} to construct couplings between the ${\rm SLE}_\kappa(\rho)$ processes with $\rho \in ((-2-\tfrac{\kappa}{2}) \vee (\tfrac{\kappa}{2}-4),-2)$ and the GFF. 
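We note that, away from the times at which $X_t = 0$, the definition of $V$ given above is consistent with the usual Loewner evolution of a force point: since $V_t - W_t = \sqrt{\kappa} X_t$, we have
\[ \frac{d}{dt} V_t = \frac{2}{\sqrt{\kappa} X_t} = \frac{2}{V_t - W_t},\]
which is the Loewner ODE $\partial_t g_t(z) = 2/(g_t(z)-W_t)$ evaluated at the force point.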
\begin{proposition} \label{prop::bessel_pv} Suppose that $X$ is a ${\mathrm {BES}}^\delta$ with $\delta \in (0,1)$ and that $U$ is a continuous process coupled with $X$ such that $(X,U)$ is strong Markov and possesses the following properties: \begin{enumerate}[(i)] \item\label{it::brownian_scaling} $(X,U)$ satisfies Brownian scaling: $t \mapsto (X_{\alpha t}, U_{\alpha t}) \stackrel{d}{=} t \mapsto \sqrt{\alpha}(X_t,U_t)$ for each $\alpha > 0$, \item\label{it::derivative} for each $t \geq 0$ such that $X_t \neq 0$, we have $\frac{d}{dt} U_t = X_t^{-1}$, \item\label{it::plus_minus_infinity} $\limsup_{t \to \infty} U_t = \infty$ and $\liminf_{t \to \infty} U_t = -\infty$ almost surely, and \item\label{it::independent} if $\tau$ is any stopping time for $X$ such that $X_{\tau} = 0$ and $t \geq 0$, then the law of $U_{t+\tau} - U_{\tau}$ is independent of $\sigma( (X_s,U_s) : s \leq \tau)$. \end{enumerate} Then \begin{equation} \label{eqn::o_pv} U_t = {\rm P.V.} \int_0^t \frac{1}{X_s} ds \quad\text{for all}\quad t \geq 0 \quad\text{almost surely}. \end{equation} \end{proposition} \begin{proof} The choice of~$U$ given by \eqref{eqn::o_pv} satisfies the hypotheses of the proposition, so it suffices to show that it is the only choice which satisfies the hypotheses. Suppose that $U$, $\widetilde{U}$ are two processes which satisfy the properties above and are coupled with~$X$ such that~$U$, $\widetilde{U}$ are independent given $X$ and let $\overline{U} = U - \widetilde{U}$. Let~$\ell$ denote the local time for~$X$ at~$0$ and, for each $s \geq 0$, let $t(s) = \inf\{t \geq 0 : \ell_t = s\}$. Note that $\frac{d}{dt} \overline{U}_t = 0$ for $t \geq 0$ such that $X_t \neq 0$. This implies that $s \mapsto \overline{U}_{t(s)}$ is a continuous process. Indeed, if $u \uparrow s$ then $t(u) \uparrow t(s)$ so that $\overline{U}_{t(u)} \to \overline{U}_{t(s)}$. Let $r$ be the limit of $t(u)$ as $u \downarrow s$. 
Then $\ell$ is constant on $(t(s),r)$ hence $\overline{U}_{t(s)} = \overline{U}_r$ and, since $\overline{U}$ is continuous, $\lim_{u \downarrow s} \overline{U}_{t(u)} = \overline{U}_r$. Therefore $\lim_{u \to s} \overline{U}_{t(u)} = \overline{U}_{t(s)}$, which proves the desired continuity. By the strong Markov property and \eqref{it::independent}, we also know that $\overline{U}_{t(s)}$ has stationary, independent increments. This implies that there exists a standard Brownian motion $B$ and constants $c_1,c_2 \in \mathbf{R}$ such that $\overline{U}_{t(s)} = c_1 B_s + c_2 s$. Equivalently, $\overline{U}_t = c_1 B_{\ell_t} + c_2 \ell_t$. Since $\overline{U}$, $\ell$, and $B$ all satisfy Brownian scaling, it is easy to see that $c_1 = 0$. That $c_2 = 0$ then follows from~\eqref{it::plus_minus_infinity} since $\ell_t \to \infty$ almost surely as $t \to \infty$ because $\delta \in (0,1)$. This implies that there exists at most one process $U$ which satisfies the hypotheses of the proposition. \end{proof} \subsection{Imaginary geometry review} \label{subsec::sle_gff} We assume in this work that the reader is familiar with the GFF and with imaginary geometry. We direct the reader to \cite{SHE06} for a more in-depth introduction to the GFF and to \cite{MS_IMAG} for a basic introduction to imaginary geometry. In the present section, we will remind the reader of a few facts which are established in \cite{MS_IMAG,MS_IMAG4} about the manner in which flow lines interact with each other and the definition of space-filling ${\rm SLE}$. We begin with a review of the coupling of chordal ${\rm SLE}_\kappa(\rho)$, $\rho > -2$, with the GFF. Throughout, we assume that $\kappa \in (0,4)$, $\kappa'=16/\kappa \in (4,\infty)$, and let \begin{align*} \chi = \frac{2}{\sqrt{\kappa}}- \frac{\sqrt{\kappa}}{2},\quad \lambda = \frac{\pi}{\sqrt{\kappa}},\quad\text{and}\quad \lambda' = \frac{\pi}{\sqrt{\kappa'}} = \lambda - \frac{\pi}{2} \chi. 
\end{align*} (These are the same values as in~\eqref{eqn::chi} and~\eqref{eqn::lambda}.) \begin{figure}[ht!] \begin{center} \includegraphics[scale=0.85]{figures/boundarycondition} \caption{\label{fig::conditional_boundary_data} Suppose that $h$ is a GFF on $\mathbf{H}$ with the boundary data depicted above. Then the flow line $\eta$ of $h$ starting from $0$ is an ${\rm SLE}_\kappa(\underline{\rho}^L;\underline{\rho}^R)$ curve in $\mathbf{H}$ where $|\underline{\rho}^L| = |\underline{\rho}^R| = 1$. For any $\eta$ stopping time $\tau$, the law of $h$ given $\eta|_{[0,\tau]}$ is equal in distribution to a GFF on $\mathbf{H} \setminus \eta([0,\tau])$ with the boundary data depicted above (the notation $\uwave{a}$ is explained in \cite[Figure~1.10]{MS_IMAG}). It is also possible to couple $\eta' \sim{\rm SLE}_{\kappa'}(\underline{\rho}^L;\underline{\rho}^R)$ for $\kappa' > 4$ with $h$ and the boundary data takes on the same form (with $-\lambda'$, $\lambda' := \frac{\pi}{\sqrt \kappa'}$, in place of $\lambda := \frac{\pi}{\sqrt \kappa}$). The difference is in the interpretation. The (almost surely self-intersecting) path $\eta'$ is not a flow line of $h$, but for each $\eta'$ stopping time $\tau'$ the left and right {\em boundaries} of $\eta'([0,\tau'])$ are ${\rm SLE}_{\kappa}$ flow lines, where $\kappa=16/\kappa'$, angled in opposite directions. The union of the left boundaries --- over a collection of $\tau'$ values --- is a tree of merging flow lines, while the union of the right boundaries is a corresponding dual tree whose branches do not cross those of the tree.} \end{center} \end{figure} We suppose that $\rho > -2$ is fixed and that~$h$ is an instance of the GFF on~$\mathbf{H}$ with boundary conditions $\lambda(1+\rho)$ (resp.\ $-\lambda$) on $\mathbf{R}_+$ (resp.\ $\mathbf{R}_-$). 
Then it is shown in \cite[Theorem~1.1]{MS_IMAG} that there exists a unique coupling $(h,\eta)$ of~$h$ with an ${\rm SLE}_\kappa(\rho)$ process~$\eta$ in~$\mathbf{H}$ from~$0$ to~$\infty$ with a single boundary force point located at~$0^+$ such that the following is true. Suppose that $(W,V)$ is the Loewner driving pair for $\eta$, $(g_t)$ the corresponding family of conformal maps, and that $\tau$ is an $\eta$-stopping time. Then the conditional law of $h \circ g_\tau^{-1} - \chi \arg (g_\tau^{-1})'$ given $\eta|_{[0,\tau]}$ is that of a GFF on $\mathbf{H}$ with boundary conditions given by \[ h|_{(-\infty,W_\tau]} \equiv -\lambda,\quad h|_{(W_\tau,V_\tau]} \equiv \lambda,\quad\text{and}\quad h|_{(V_\tau,\infty)} \equiv \lambda (1+\rho).\] Equivalently, the conditional law of $h$ given $\eta|_{[0,\tau]}$ restricted to the unbounded component $\mathbf{H}_\tau$ of $\mathbf{H} \setminus \eta([0,\tau])$ is that of a GFF with the same boundary conditions as $h$ on $\partial \mathbf{H} \cap \partial \mathbf{H}_\tau$ and with boundary conditions which are given by $-\lambda'$ (resp.\ $\lambda'$) plus $\chi$ times the winding of $\eta$ along $\eta|_{[0,\tau]}$. Since $\eta$ is not a smooth curve, its winding is not well-defined along the curve itself, however the harmonic extension of its winding is defined. We will indicate this type of boundary data in the figures that follow using the notation introduced in \cite[Figure~1.10]{MS_IMAG}. It is shown in \cite[Theorem~1.2]{MS_IMAG} that $\eta$ is almost surely determined by $h$, which is not an obvious statement from how the coupling is constructed. The path $\eta$ has the interpretation of being a flow line of the vector field $e^{i h/\chi}$. Similar statements hold in the presence of more general piecewise constant boundary data. In the more general setting, the flow line is an ${\rm SLE}_\kappa(\underline{\rho})$ process where the number of force points is equal to the number of jumps in the boundary data for $h$. 
See Figure~\ref{fig::conditional_boundary_data} for an illustration in the case of two force points. Similar statements also hold for the existence of a unique coupling of an ${\rm SLE}_{\kappa'}$ process $\eta'$ with the GFF, except the interpretation is different. We refer to $\eta'$ as the \emph{counterflow line} of $h$ because an ${\rm SLE}_{\kappa'}$ process can be realized as a light cone of flow lines which travel in the opposite direction of $\eta'$. We refer to a path coupled as a flow line with $h+\theta \chi$ as the flow line of~$h$ with angle~$\theta$. This is because such a path has the interpretation of being the flow line of the vector field $e^{i (h/\chi + \theta)}$, i.e., the field which arises by taking all of the arrows in $e^{i h/ \chi}$ and then rotating them by the angle $\theta$. The manner in which flow lines with different angles interact is established in \cite[Theorem~1.5]{MS_IMAG} as well as \cite[Theorem~1.7]{MS_IMAG4}. Specifically, if $\eta_{\theta_1}$ (resp.\ $\eta_{\theta_2}$) are the flow lines of a GFF $h$ on $\mathbf{H}$ starting from $x_1 \leq x_2$, then the following holds. If $\theta_1 > \theta_2$, then $\eta_{\theta_1}$ stays to the left of (but may bounce off) $\eta_{\theta_2}$. If $\theta_1 = \theta_2$, then $\eta_{\theta_1}$ and $\eta_{\theta_2}$ merge upon intersecting and do not subsequently separate. Finally, if $\theta_2 - \pi < \theta_1 < \theta_2$, then $\eta_{\theta_1}$ and $\eta_{\theta_2}$ cross upon intersecting for the first time. After crossing, the paths may continue to bounce off each other but do not cross again. One can also consider couplings of ${\rm SLE}$ with the GFF on domains other than~$\mathbf{H}$. Specifically, suppose that $D \subseteq \mathbf{C}$ is a simply connected domain and $x,y \in \partial D$ are distinct. 
Then to construct a coupling of an ${\rm SLE}_\kappa(\underline{\rho})$ process $\eta$ in $D$ from $x$ to $y$ with a GFF $h$ on $D$, one starts with such a coupling $(\widetilde{h},\widetilde{\eta})$ on $\mathbf{H}$ and then takes \begin{equation} \label{eqn::change_coordinates} h = \widetilde{h} \circ \varphi^{-1} - \chi \arg (\varphi^{-1})' \quad\text{and}\quad \eta = \varphi(\widetilde{\eta}) \end{equation} where $\varphi \colon \mathbf{H} \to D$ is a conformal transformation which takes $0$ to $x$ and $\infty$ to $y$. We note that this change of coordinates formula is the same as the one which corresponds to the flow lines of $e^{i h / \chi}$ in the setting that $h$ is a continuous function. Flow lines of the GFF starting from interior points were constructed and studied in \cite{MS_IMAG4}. The interaction rules for these paths are the same as in the setting of paths which start on the domain boundary; see \cite[Theorem~1.7]{MS_IMAG4}. In \cite{MS_IMAG4}, these paths were used to construct so-called \emph{space-filling ${\rm SLE}_{\kappa'}$}, which is a form of ordinary ${\rm SLE}_{\kappa'}$ except that whenever it cuts off a component, it branches into it and fills it up before continuing. Specifically, we suppose that $h$ is a GFF on $\mathbf{H}$ with boundary conditions given by~$\lambda'$ (resp.\ $-\lambda'$) on~$\mathbf{R}_-$ (resp.\ $\mathbf{R}_+$). (These are the boundary conditions so that the counterflow line of~$h$ from~$0$ to $\infty$ is an ${\rm SLE}_{\kappa'}$ process.) Fix a countable dense set $(w_n)$ in~$\mathbf{H}$ and, for each $n$, let~$\eta_n$ be the flow line of~$h$ starting from~$w_n$ with angle $\pi/2$. Then we say that $w_n$ \emph{comes before} $w_m$ if~$\eta_n$ merges with~$\eta_m$ on its left side (see, e.g., \cite[Figure~1.16]{MS_IMAG4}). This defines an ordering on the $(w_n)$ and space-filling ${\rm SLE}_{\kappa'}$ is a non-crossing random path which fills all of~$\mathbf{H}$ and visits the $(w_n)$ in this order. 
It turns out that if we target a space-filling ${\rm SLE}_{\kappa'}$ process at a given point $z$ (i.e., parameterize it according to capacity as seen from that point), then we obtain exactly the counterflow line of the GFF targeted at $z$. Therefore the aforementioned ordering also determines the order in which a counterflow line visits the points in its range. The space-filling ${\rm SLE}_{\kappa'}(\underline{\rho})$ processes are defined in an analogous way by starting with a GFF with different boundary data. One can similarly order space using flow lines of any given angle $\theta$ rather than the angle $\pi/2$ and obtain a continuous, space-filling path. \section{GFF couplings} \label{sec::gff_couplings} In this section, we are going to prove Theorem~\ref{thm::continuous} and Theorem~\ref{thm::coupling} simultaneously and then explain how to extract Theorem~\ref{thm::interpolation} from these results. We will begin in Section~\ref{subsec::lightcones} by proving several results about the structure of the complementary components (``pockets'') of light cones and then in Section~\ref{subsec::explorations} we will explain how we can use an ${\rm SLE}_{\kappa'}$, $\kappa'=16/\kappa \in (4,\infty)$, counterflow line to generate a continuous path which explores the range of a light cone. In both of these sections, we will restrict ourselves to the case in which the light cone starts from a single boundary point (rather than a continuum) so that we can work in a unified framework. We will then explain in Section~\ref{subsec::law} that these results also hold in the setting in which the light cone starts from a continuum of boundary points using a conditioning argument and then make the connection to ${\rm SLE}_\kappa(\rho)$ processes with $\rho \in [\tfrac{\kappa}{2}-4,-2)$ and $\rho > -2-\tfrac{\kappa}{2}$. 
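For later reference, we recall the standard imaginary geometry constants; with $\kappa \in (0,4)$ and $\kappa' = 16/\kappa$, the conventions of \cite{MS_IMAG} are
\[ \lambda = \frac{\pi}{\sqrt{\kappa}}, \qquad \lambda' = \frac{\pi}{\sqrt{\kappa'}} = \frac{\pi \sqrt{\kappa}}{4}, \qquad\text{and}\qquad \chi = \frac{2}{\sqrt{\kappa}} - \frac{\sqrt{\kappa}}{2} = \frac{4-\kappa}{2\sqrt{\kappa}}. \]
In particular, the critical angle defined in~\eqref{eqn::critical_angle} just below can be expressed in terms of these constants as
\[ \frac{2\lambda'}{\chi} = 2 \cdot \frac{\pi \sqrt{\kappa}}{4} \cdot \frac{2 \sqrt{\kappa}}{4-\kappa} = \frac{\pi \kappa}{4-\kappa} = \theta_c. \]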
Throughout, unless explicitly stated otherwise, we shall assume that~$h$ is a GFF on~$\mathbf{D}$ which is given by a conformal coordinate change as in~\eqref{eqn::change_coordinates} of a GFF on~$\mathbf{H}$ with piecewise constant boundary data which changes values at most a finite number of times. The reason for this is that it will be more convenient to work on a bounded Jordan domain rather than~$\mathbf{H}$ because then ${\rm SLE}_{\kappa'}$ is uniformly continuous. We also let \begin{equation} \label{eqn::critical_angle} \theta_c = \frac{\pi \kappa}{4-\kappa}. \end{equation} This is the so-called {\bf critical angle} --- the angle difference below which GFF flow lines can intersect each other and at or above which they cannot (see \cite[Theorem~1.5]{MS_IMAG} and \cite[Theorem~1.7]{MS_IMAG4}). It is shown in \cite{LIGHTCONE_DIMENSION} that the almost sure dimension of a light cone with opening angle $\theta \in [0,\theta_c \wedge \pi)$ is contained in $[1,2)$ and that the dimension is equal to $2$ for $\theta \in [\theta_c \wedge \pi,\pi]$. Note that $\theta_c \leq \pi$ if and only if $\kappa \leq 2$, which is closely connected with the fact that ordinary ${\rm SLE}_{\kappa'}$ is space-filling if and only if $\kappa' \geq 8$ \cite{RS05}. \subsection{Pocket structure} \label{subsec::lightcones} Fix $\theta_1 \leq \theta_2$ with $\theta_2 - \theta_1 \leq \pi$. For each $n \in \mathbf{N}$, let ${\mathbf L}_n(\theta_1,\theta_2)$ be the closure of the set of points accessible by angle-varying flow lines of $h$ starting from $-i$ which travel either with angle $\theta_1$ or $\theta_2$, change directions at most $n$ times, and only change directions at positive rational times. 
The {\bf light cone} ${\mathbf L}(\theta_1,\theta_2) = \overline{\cup_n {\mathbf L}_n(\theta_1,\theta_2)}$ of $h$ (starting from~$-i$) with angle range $[\theta_1,\theta_2]$ is the closure of the set of points accessible by angle-varying flow lines of $h$ starting from~$-i$ whose angle is equal to either~$\theta_1$ or~$\theta_2$ and which change directions a finite number of times, only at positive rational times. Note that this definition is slightly different from that given in the introduction because we only allow the paths to travel with the extremal angles~$\theta_1$ and~$\theta_2$ (and do not allow the intermediate angles). This definition will be more convenient for us to work with and we will shortly show that the two definitions almost surely agree. For $\theta \in [0,\pi]$, we also let ${\mathbf L}(\theta) = {\mathbf L}(-\tfrac{\theta}{2},\tfrac{\theta}{2})$. \begin{figure}[ht!] \begin{center} \includegraphics[scale=0.85]{figures/lightcone_pocket_finite} \end{center} \caption{\label{fig::lightcone_pocket_finite} Shown on the left is the pocket $P_2(z)$ of ${\mathbf L}_2(\theta_1,\theta_2)$ containing~$z$ on the event that $P_2(z)$ separates $z$ from $\partial \mathbf{D}$. We let $\varphi_2 \colon P_2(z) \to \mathbf{D}$ be the unique conformal transformation with $\varphi_2(z) = 0$ and $\varphi_2'(z) > 0$. Shown on the right is the boundary data of the GFF $h \circ \varphi_2^{-1} - \chi \arg (\varphi_2^{-1})'$ on $\partial \mathbf{D}$. The reason that $\mathbf{D}$ on the right side appears not to be perfectly round is so that we can use our notation to label the boundary data.} \end{figure} For each $z \in \mathbf{D}$ and $n \in \mathbf{N}$, let $\pocket{n}{z}$ be the complementary component of ${\mathbf L}_n(\theta_1,\theta_2)$ which contains $z$ and let $\pocket{z}$ be the complementary component of ${\mathbf L}(\theta_1,\theta_2)$ which contains $z$. 
Throughout, we will refer to such complementary components as (complementary) {\bf pockets} of ${\mathbf L}(\theta_1,\theta_2)$. We are next going to describe the boundary data of $h$ given ${\mathbf L}(\theta_1,\theta_2)$ on $\partial \pocket{z}$. It is a consequence of the main result of \cite{LIGHTCONE_DIMENSION} that $\pocket{z} \neq \emptyset$ almost surely provided $\theta_2 - \theta_1 < \theta_c$ and $\theta_2 - \theta_1 \leq \pi$. \begin{lemma} \label{lem::form_pockets} Suppose that $\theta_2 - \theta_1 < \theta_c$ and $\theta_2 - \theta_1 \leq \pi$. Fix $z \in \mathbf{D}$ and assume that the event $E(z)$ that ${\mathbf L}(\theta_1,\theta_2)$ disconnects $z$ from $\partial \mathbf{D}$ has positive probability. On $E(z)$, let $\varphi \colon \pocket{z} \to \mathbf{D}$ be the unique conformal transformation with $\varphi(z) = 0$ and $\varphi'(z) > 0$. Then the boundary data for $\widetilde{h} = h \circ \varphi^{-1} - \chi \arg (\varphi^{-1})'$ is as described in the left side of Figure~\ref{fig::lightcone_pocket}. In particular, there exist two distinct marked points $x,y \in \partial \pocket{z}$ such that the boundary behavior of $h$ along the clockwise (resp.\ counterclockwise) boundary segment $\side{1}{z}$ (resp.\ $\side{2}{z}$) of $\partial \pocket{z}$ from $x$ to $y$ is the same as that of the right (resp.\ left) side of a flow line with angle $\theta_1$ (resp.\ $\theta_2$). \end{lemma} \begin{proof} Assume that we are working on $E(z)$. Then there exists $n_0 \in \mathbf{N}$ such that ${\mathbf L}_n(\theta_1,\theta_2)$ separates~$z$ from~$\partial \mathbf{D}$ for all $n \geq n_0$. For each $n \geq n_0$, let $\varphi_n \colon \pocket{n}{z} \to \mathbf{D}$ be the unique conformal transformation with $\varphi_n(z) = 0$ and $\varphi_n'(z) > 0$. 
Let $\widetilde{h}_n = h \circ \varphi_n^{-1} - \chi \arg (\varphi_n^{-1})'$ be the GFF on $\mathbf{D}$ given by conformally mapping $\pocket{n}{z}$ to~$\mathbf{D}$ using~$\varphi_n$ and applying the coordinate change formula~\eqref{eqn::change_coordinates}. As shown in Figure~\ref{fig::lightcone_pocket_finite} (in the case that $n=2$), the boundary data for~$\widetilde{h}_n$ has four (possibly degenerate) marked points. These divide $\partial \mathbf{D}$ into the images~$L_n^{\theta_1}$ and~$L_n^{\theta_2}$ of the pocket boundary formed by the left sides of flow lines with angles~$\theta_1$ and~$\theta_2$, respectively, and the images~$R_n^{\theta_1}$ and~$R_n^{\theta_2}$ of the pocket boundary formed by the right sides of flow lines with angles~$\theta_1$ and~$\theta_2$, respectively. Note that $\widetilde{h} = \lim_n \widetilde{h}_n$. Consequently, the boundary data for~$\widetilde{h}$ takes the same form. Let $L^{\theta_1}$, $L^{\theta_2}$, $R^{\theta_1}$, and $R^{\theta_2}$ be the four marked boundary segments for the boundary data of $\widetilde{h}$. If $L^{\theta_1} \neq \emptyset$ or $R^{\theta_2} \neq \emptyset$, then $\mathop{\mathrm{diam}}(L_n^{\theta_1})$ or $\mathop{\mathrm{diam}}(R_n^{\theta_2})$ is bounded from below for arbitrarily large values of~$n$. This is a contradiction because it is easy to see that on this event, the conformal radius of~$P_{n+1}(z)$ as seen from~$z$ decreases by a uniformly positive amount with uniformly positive probability. Consequently, $L^{\theta_1} = \emptyset$ and $R^{\theta_2} = \emptyset$ almost surely. That is, the boundary data for~$\widetilde{h}$ is in fact as illustrated in the left side of Figure~\ref{fig::lightcone_pocket}, as desired. \end{proof} \begin{figure}[ht!] 
\begin{center} \subfigure[]{ \includegraphics[scale=0.85, page=1]{figures/lightcone_pocket}} \hspace{0.1\textwidth} \subfigure[]{ \includegraphics[scale=0.85, page=2]{figures/lightcone_pocket}} \end{center} \vspace{-0.03\textheight} \caption{\label{fig::lightcone_pocket} Shown on the left is a pocket~$\pocket{z}$ of ${\mathbf L}(\theta_1,\theta_2)$ containing a given point~$z$ and the boundary data for the conditional law of~$h$ given ${\mathbf L}(\theta_1,\theta_2)$ on $\partial \pocket{z}$. Note that it is not possible to draw~$\theta_1$-angle (resp.~$\theta_2$-angle) flow lines of~$h$ contained in~$\pocket{z}$ which start from points on~$\side{2}{z}$ (resp.~$\side{1}{z}$). On the right side, the extra~$\theta_2$-angle flow lines have been drawn in blue to indicate how the paths are ordered using an~${\rm SLE}_{\kappa'}$ counterflow line~$\eta'$. The dark green path indicates the part of~$\eta'$ that fills the right side of~$\side{2}{z}$, the orange path indicates the part of~$\eta'$ which travels from the opening point~$x$ to the closing point~$y$ of~$\pocket{z}$, and the light green path indicates the part of~$\eta'$ after it has hit~$y$. The colored arrows indicate the direction in which the different segments of~$\eta'$ are traveling. In particular,~$\eta'$ fills the right side of~$\side{2}{z}$ before entering (the interior of) $\pocket{z}$. Since it has to hit the points on~$\side{1}{z}$ in the reverse order in which they are drawn by~$\sideflow{1}{z}$, after reaching $x$, $\eta'$ enters the interior of $\pocket{z}$ and then travels to $y$. As it travels up to $y$, it visits points on the left side of $\side{2}{z}$, does not hit $\side{1}{z}$, and does not leave $\overline{\pocket{z}}$. After reaching $y$, it then visits the points of $\side{1}{z}$ in the reverse order in which they are drawn by $\sideflow{1}{z}$. 
While it does so, it makes excursions both into and outside of $\pocket{z}$.} \end{figure} Throughout, we shall refer to the point~$x$ in the statement of Lemma~\ref{lem::form_pockets} as the {\bf opening point} of~$\pocket{z}$. If we want to emphasize the dependency of~$x$ on~$z$, we will write~$\open{z}$ for~$x$. For a generic pocket~$P$, we will write~$\open{P}$ for the opening point of~$P$. Similarly, we will refer to the point~$y$ in the statement of Lemma~\ref{lem::form_pockets} as the {\bf closing point} of~$\pocket{z}$. As before, we will write~$\close{z}$ if we want to emphasize the dependency on~$z$ and write~$\close{P}$ for the closing point of a generic pocket~$P$. We will also use the notation~$\side{j}{z}$ introduced in the statement of Lemma~\ref{lem::form_pockets} to indicate the~$\theta_j$-angle side of~$\partial P(z)$ for $j=1,2$ and write~$\side{j}{P}$ to indicate the same for a generic pocket~$P$. If~$P$ or~$z$ is understood from the context, then we will simply write~$\side{j}$ for $j=1,2$. Finally, we note that~$\side{j}{z}$ is equal to the flow line~$\sideflow{j}{z}$ of~$h$ with angle~$\theta_j$ starting from~$\open{z}$ and stopped upon hitting~$\close{z}$. We will write~$\sideflow{j}{P}$ to indicate these flow lines for a generic pocket~$P$ and write~$\sideflow{j}$ if either~$P$ or~$z$ is understood from the context. We will now use Lemma~\ref{lem::form_pockets} to show that the definition of the light cone introduced in this section agrees with the one given in the introduction. \begin{lemma} \label{lem::lightcone_approximation} Fix $\theta_1 \leq \theta_2$ with $\theta_2-\theta_1 \leq \pi$. 
Let ${\mathbf L}(\theta_1,\theta_2)$ be as defined in the beginning of the subsection and let $\widehat{{\mathbf L}}(\theta_1,\theta_2)$ be the closure of the set of points accessible by angle-varying trajectories of $h$ starting from $-i$ with angles which are rational and contained in $[\theta_1,\theta_2]$ or equal to $\theta_1$ or $\theta_2$ and which change angles at most a finite number of times and only at positive rational times. (This is the definition of the light cone given in the introduction.) Then ${\mathbf L}(\theta_1,\theta_2) = \widehat{{\mathbf L}}(\theta_1,\theta_2)$ almost surely. \end{lemma} \begin{proof} We may assume without loss of generality that $\theta_1 < \theta_2$ since if $\theta_1=\theta_2$ then the result is trivially true because both ${\mathbf L}(\theta_1,\theta_2)$ and $\widehat{{\mathbf L}}(\theta_1,\theta_2)$ are equal to the flow line of $h$ starting from $-i$ with angle $\theta_1=\theta_2$. It is clear from the definition that ${\mathbf L}(\theta_1,\theta_2) \subseteq \widehat{{\mathbf L}}(\theta_1,\theta_2)$ almost surely, so we just need to prove the reverse inclusion. We first suppose that $\theta_2 - \theta_1 < \theta_c$. In this case, the result follows because, for each fixed $z \in \mathbf{D}$, the flow line interaction rules \cite[Theorem~1.7]{MS_IMAG4} and Lemma~\ref{lem::form_pockets} imply that an angle-varying trajectory with angles which are rational and contained in $[\theta_1,\theta_2]$ or equal to $\theta_1$ or $\theta_2$ which changes angles at most a finite number of times cannot enter the pocket $\pocket{z}$ of ${\mathbf L}(\theta_1,\theta_2)$ which contains $z$. Indeed, a flow line of angle $\theta_2$ cannot cross a flow line of angle $\theta_1$ from left to right since $\theta_2 > \theta_1$ and likewise a flow line of angle $\theta_1$ cannot cross a flow line of angle $\theta_2$ from right to left. 
The case that $\theta_2 - \theta_1 \geq \theta_c$ follows since for these values we know that both ${\mathbf L}(\theta_1,\theta_2)$ and $\widehat{{\mathbf L}}(\theta_1,\theta_2)$ are equal to the set of points which lie between their left and right boundaries. \end{proof} Fix angles $\theta_1 < \theta_2$ with $\theta_2 - \theta_1 < \theta_c$ and $\theta_2-\theta_1 \leq \pi$. Assume that the boundary data of~$h$ is such that the flow lines $\eta_1,\eta_2$ starting from~$-i$ with angles $\theta_1,\theta_2$ almost surely do not hit the continuation threshold (as defined just before the statement of \cite[Theorem~1.1]{MS_IMAG}). That is, they both connect~$-i$ to~$i$. Let~$\eta'$ be the counterflow line of $h+(\theta_2-\tfrac{\pi}{2})\chi$ starting from~$i$. Then the left boundary of~$\eta'$ stopped upon hitting a point $z \in \mathbf{D}$ is equal to the flow line starting from~$z$ with angle~$\theta_2$. We are now going to use the flow line interaction rules \cite[Theorem~1.7]{MS_IMAG4} to explain how~$\eta'$ interacts with a pocket~$\pocket{z}$ of~${\mathbf L}(\theta_1,\theta_2)$. See Figure~\ref{fig::lightcone_pocket} for an illustration. If we start a flow line~$\eta_w$ with angle~$\theta_2$ from a point~$w$ inside of~$\pocket{z}$, then it has to merge with~$\sideflow{2}{z}$ on its left side. Indeed, this is obviously true for topological reasons if~$\eta_w$ merges with~$\sideflow{2}{z}$ before leaving~$\pocket{z}$. If~$\eta_w$ first leaves~$\pocket{z}$ before merging into~$\sideflow{2}{z}$, then it necessarily crosses~$\sideflow{1}{z}$ from the right to the left. If~$\eta_w$ were to subsequently wrap around and merge with~$\sideflow{2}{z}$ on its right side, then it would be forced to cross~$\sideflow{1}{z}$ a second time, which contradicts \cite[Theorem~1.7]{MS_IMAG4}. This proves the claim since flow lines with the same angle almost surely merge. 
Similarly, if we start a flow line from a point~$w$ on~$\side{1}{z}$, then it merges with~$\sideflow{2}{z}$ on its left side. Consequently, it follows from \cite[Theorem~1.13]{MS_IMAG4} that: \begin{enumerate} \item $\eta'$ enters (the interior of) $\pocket{z}$ at $\open{z}$ after filling the right side of $\side{2}{z}$. \item Upon entering $\pocket{z}$, $\eta'$ visits points on the left side of $\side{2}{z}$ as it travels from $\open{z}$ to $\close{z}$. It does not touch $\side{1}{z}$ until hitting $\close{z}$. \item Upon hitting $\close{z}$, it visits the points of $\side{1}{z}$ in the reverse order in which they are drawn by $\sideflow{1}{z}$ and, while doing so, $\eta'$ makes excursions both into and out of $\pocket{z}$. \end{enumerate} We are now going to extract from this and the continuity of space-filling ${\rm SLE}_{\kappa'}$ the local finiteness of the pockets of the light cone. \begin{lemma} \label{lem::locally_finite} Suppose that we have the setup described just above (in particular, the boundary data of $h$ is such that the boundary flow lines $\eta_1,\eta_2$ of ${\mathbf L}(\theta_1,\theta_2)$ almost surely do not hit the continuation threshold before hitting $i$). The pockets of ${\mathbf L}(\theta_1,\theta_2)$ are almost surely locally finite: that is, for each $\epsilon > 0$, the number of pockets of ${\mathbf L}(\theta_1,\theta_2)$ with diameter at least $\epsilon$ is finite almost surely. \end{lemma} \begin{proof} The result trivially holds for $\theta_2 - \theta_1 \geq \theta_c$ because then~${\mathbf L}(\theta_1,\theta_2)$ is space-filling, hence does not have pockets which lie between~$\eta_1$ and~$\eta_2$. The pockets which are not surrounded by~$\eta_1$ and~$\eta_2$ are locally finite because~$\eta_1$ and~$\eta_2$ are continuous paths. We now suppose that $\theta_2 - \theta_1 < \theta_c$ so that~${\mathbf L}(\theta_1,\theta_2)$ has pockets which lie between~$\eta_1$ and~$\eta_2$. 
Since the components of $\mathbf{D} \setminus (\eta_1 \cup \eta_2)$ are locally finite, it suffices to show that the pockets of~${\mathbf L}(\theta_1,\theta_2)$ which are contained in a given component are locally finite. Fix such a component~$C$ and let~$\eta'$ be the space-filling ${\rm SLE}_{\kappa'}$ process starting from~$y$, the last point on~$\partial C$ hit by~$\eta_1$ and~$\eta_2$, and targeted at~$x$, the first point on~$\partial C$ hit by~$\eta_1$ and~$\eta_2$. We choose~$\eta'$ so that its left boundary stopped upon hitting any given point is equal to the flow line of~$h$ with angle~$\theta_2$ starting from that point. Then~$\eta'$ interacts with a pocket~$\pocket{z}$ of~${\mathbf L}(\theta_1,\theta_2)$ for $z \in C$ in the same manner as the counterflow line described before the statement of the lemma except that it completely fills~$\side{2}{z}$ while traveling from~$\open{z}$ to~$\close{z}$. Note that for disjoint pockets~$\pocket{z}$ and~$\pocket{w}$ of~${\mathbf L}(\theta_1,\theta_2)$ contained in~$C$, the time-interval~$I_z$ in which~$\eta'$ travels from~$\open{z}$ to~$\close{z}$ is disjoint from the time-interval~$I_w$ in which it travels from~$\open{w}$ to~$\close{w}$. Moreover, for each $z \in \mathbf{D}$, $\eta'(I_z)$ contains~$\side{2}{z}$. Consequently, it follows from the continuity of space-filling ${\rm SLE}_{\kappa'}$ that the number of pockets~$P$ such that $\mathop{\mathrm{diam}}(\side{2}{P}) \geq \epsilon$ is finite almost surely. The same is also true for the number of pockets~$P$ such that $\mathop{\mathrm{diam}}(\side{1}{P}) \geq \epsilon$ because we can take a space-filling ${\rm SLE}_{\kappa'}$ whose right boundary stopped upon hitting a given point~$z$ is given by the flow line starting from~$z$ with angle~$\theta_1$ in place of~$\eta'$ and then apply the same analysis. 
This completes the proof since the triangle inequality implies that $\mathop{\mathrm{diam}}(P) \leq \mathop{\mathrm{diam}}(\side{1}{P}) + \mathop{\mathrm{diam}}(\side{2}{P})$ for any pocket~$P$. \end{proof} \begin{figure}[ht!] \begin{center} \subfigure[]{ \includegraphics[scale=0.85, page=1]{figures/lightcone_continuous}} \hspace{0.15\textwidth} \subfigure[]{ \includegraphics[scale=0.85, page=2]{figures/lightcone_continuous}} \end{center} \caption{\label{fig::lightcone_continuous} Shown on the left side is the pocket $\pocket{z}$ of ${\mathbf L}(\theta_1,\theta_2)$ containing~$z$. Its opening (resp.\ closing) point is $x$ (resp.\ $y$). Suppose that $(\theta_n^1)$ (resp.\ $(\theta_n^2)$) is a sequence of angles which increases to~$\theta_1$ (resp.\ decreases to~$\theta_2$). We take~$\eta_n^1$ (resp.\ $\eta_n^2$) to be a flow line of angle $\theta_n^1$ (resp.\ $\theta_n^2$) starting from $x^1 \in \partial \pocket{z}$ (resp.\ $x^2 \in \partial \pocket{z}$). As $n \to \infty$, $\eta_n^1$ and~$\eta_n^2$ converge in the Hausdorff topology to the segments of~$\side{1}{z}$ and~$\side{2}{z}$, respectively, which connect~$x^1$ and~$x^2$ to~$y$. The right side is the same as the left except we have drawn dual paths $\widetilde{\eta}_n^1,\widetilde{\eta}_n^2$ starting from points on $\eta_n^1,\eta_n^2$, respectively. Explicitly, $\widetilde{\eta}_n^1$ (resp.\ $\widetilde{\eta}_n^2$) has angle $\theta_n^1-\pi$ (resp.\ $\theta_n^2+\pi$). These paths will intersect and bounce off each other as shown. By the flow line interaction rules, $\widetilde{\eta}_n^1$ cannot cross either~$\side{2}{z}$ or~$\eta_n^2$ but can cross out of~$\pocket{z}$ through the clockwise segment of~$\side{1}{z}$ from~$x$ to~$x^1$ and the symmetric fact holds for $\widetilde{\eta}_n^2$. 
Since an angle-varying flow line with angles contained in $[\theta_n^1,\theta_n^2]$ cannot cross from the right to the left (resp.\ left to the right) of $\widetilde{\eta}_n^1$ (resp.\ $\widetilde{\eta}_n^2$), it follows that the pocket of ${\mathbf L}(\theta_n^1,\theta_n^2)$ which contains $z$ almost surely contains the light blue region on the right. This allows us to prove the continuity of the law of ${\mathbf L}(\theta_1,\theta_2)$ in $\theta_1,\theta_2$ with respect to the Hausdorff topology because the Hausdorff distance between~$\pocket{z}$ and the blue region will, with probability tending to~$1$, decrease to~$0$ as we take a limit first as $n \to \infty$, then as $x^1,x^2 \to x$, and then finally as the starting points of $\widetilde{\eta}_n^1,\widetilde{\eta}_n^2$ tend to $x$ as well.} \end{figure} We are now going to establish the continuity of the law of ${\mathbf L}(\theta_1,\theta_2)$ in $\theta_1 \leq \theta_2$ with $\theta_2-\theta_1 \leq \pi$ with respect to the Hausdorff topology. See Figure~\ref{fig::lightcone_continuous} for an illustration of the setup and the proof. \begin{proposition} \label{prop::lightcone_continuous} Suppose that we have the same setup as in Lemma~\ref{lem::locally_finite} and that $\theta_1 \leq \theta_2$ are angles with $\theta_2 - \theta_1 \leq \pi$. Let $(\theta_n^1)$, $(\theta_n^2)$ be sequences of angles with $\theta_n^1 \leq \theta_n^2$ and $\theta_n^2 -\theta_n^1 \leq \pi$ for all $n \in \mathbf{N}$ such that $\theta_n^j \to \theta_j$ as $n \to \infty$ for $j=1,2$. Then ${\mathbf L}(\theta_n^1,\theta_n^2) \to {\mathbf L}(\theta_1,\theta_2)$ as $n \to \infty$ almost surely with respect to the Hausdorff topology. \end{proposition} \begin{remark} \label{rem::lightcone_not_continuous} Proposition~\ref{prop::lightcone_continuous} implies that for a \emph{fixed} choice of $\theta_1 \leq \theta_2$, we have that ${\mathbf L}(\theta_n^1,\theta_n^2) \to {\mathbf L}(\theta_1,\theta_2)$ almost surely. 
It does not imply that $(\theta_1,\theta_2) \mapsto {\mathbf L}(\theta_1,\theta_2)$ is a continuous function with respect to the Hausdorff topology for a fixed realization of $h$. Indeed, this statement is not true because the boundary of ${\mathbf L}(\theta_1,\theta_2)$ contains the flow line of $h$ with angle $\theta_1$ and for a fixed realization of $h$ the map which takes an angle to the flow line of $h$ with that angle is not continuous with respect to the Hausdorff topology. Indeed, if this were true then the \emph{fan} defined in \cite{MS_IMAG} would almost surely have positive Lebesgue measure but it is shown in \cite{MS_IMAG} that its Lebesgue measure is almost surely zero. In fact, it is shown in \cite{LIGHTCONE_DIMENSION} that the dimension of the fan is the same as the dimension of a single ${\rm SLE}_\kappa$ path. \end{remark} \begin{proof}[Proof of Proposition~\ref{prop::lightcone_continuous}] We are going to give the proof in the case that $(\theta_n^1)$ increases to~$\theta_1$ and $(\theta_n^2)$ decreases to~$\theta_2$. We will also assume that $\theta_2 - \theta_1 < \theta_c$. The proof in the other possible cases is similar. By Lemma~\ref{lem::locally_finite}, we know that the pockets of~${\mathbf L}(\theta_1,\theta_2)$ are locally finite. Fix $\epsilon > 0$ and let $P_1,\ldots,P_k$ be the pockets of~${\mathbf L}(\theta_1,\theta_2)$ which have diameter at least~$\epsilon$. For each~$j$, we let $x_j = \open{P_j}$ (resp.\ $y_j = \close{P_j}$) be the opening (resp.\ closing) point of~$P_j$. Fix $\delta \in (0,\epsilon)$ and, for each~$j$, let~$x_j^1$ (resp.\ $x_j^2$) be a point on~$\side{1}{P_j}$ (resp.\ $\side{2}{P_j}$) with distance at most~$\delta$ from~$x_j$. Let~$\eta_{n,j}^1$ (resp.\ $\eta_{n,j}^2$) be the flow line of~$h$ starting from~$x_j^1$ (resp.\ $x_j^2$) with angle~$\theta_n^1$ (resp.\ $\theta_n^2$). 
As $n \to \infty$, these paths stopped upon exiting~$\overline{P}_j$ almost surely converge in the Hausdorff topology to the segments of~$\side{1}{P_j}$ and~$\side{2}{P_j}$ which start from~$x_j^1$ and~$x_j^2$, respectively, and terminate at~$y_j$. Indeed, this follows for~$\eta_{n,j}^1$ because it is an ${\rm SLE}_\kappa(\rho^L;\rho_1^R,\rho_2^R)$ process in~$P_j$ with force points located at $(x_j^1)^-$, $(x_j^1)^+$, and~$x_j^2$ and $\rho^L \downarrow -2$ as $n \to \infty$. This follows for~$\eta_{n,j}^2$ for an analogous reason. Fix $\delta_1 \in (\delta,\epsilon)$; we will shortly send $\delta \downarrow 0$ while leaving~$\delta_1$ and~$\epsilon$ fixed. For each~$n$, let~$x_{n,j}^1$ be a point on~$\eta_{n,j}^1$ which has distance~$\delta_1$ from~$x_j$ and let~$\widetilde{\eta}_{n,j}^1$ be the flow line of~$h$ starting from~$x_{n,j}^1$ with angle $\theta_n^1-\pi$ (the angle dual to that of~$\eta_{n,j}^1$). We define~$x_{n,j}^2$ and~$\widetilde{\eta}_{n,j}^2$ similarly (the angle of~$\widetilde{\eta}_{n,j}^2$ is $\theta_n^2+\pi$). Let~$C_j$ be the component of $P_j \setminus (\eta_{n,j}^1 \cup \eta_{n,j}^2)$ which $\widetilde{\eta}_{n,j}^1,\widetilde{\eta}_{n,j}^2$ enter immediately upon getting started (there exists such a component with probability tending to~$1$ as $n \to \infty$). Then the joint law of~$\widetilde{\eta}_{n,j}^1, \widetilde{\eta}_{n,j}^2$ in~$C_j$ stopped upon hitting $B(x_j,2\delta)$ is absolutely continuous with respect to that of the pair of paths~$(\widehat{\eta}_{n,j}^1,\widehat{\eta}_{n,j}^2)$ which are distributed as in the case that the boundary data along~$C_j$ takes the same form as if it were a pocket of a light cone with angle range $[\theta_n^1,\theta_n^2]$ and with opening and closing points~$x_j$ and~$y_j$, respectively. Moreover, by \cite[Lemma~2.1]{LIGHTCONE_DIMENSION}, the Radon-Nikodym derivative is bounded from above and below by universal finite and positive constants which do not depend on~$n$. 
By the flow line interaction rules \cite[Theorem~1.5]{MS_IMAG}, $\widehat{\eta}_{n,j}^1,\widehat{\eta}_{n,j}^2$ almost surely intersect before exiting~$C_j$. Consequently, sending first $n \to \infty$, then $\delta \downarrow 0$, we see that the probability that $\widetilde{\eta}_{n,j}^1$ intersects $\widetilde{\eta}_{n,j}^2$ before hitting $B(x_j,\delta)$ tends to~$1$. Moreover, the diameter of the paths up until intersecting almost surely tends to zero upon taking another limit as $\delta_1 \downarrow 0$. The desired result follows because the pocket of~${\mathbf L}(\theta_n^1,\theta_n^2)$ which contains~$z$ is contained in~$\pocket{z}$ and contains the component of $C_j \setminus (\widetilde{\eta}_{n,j}^1 \cup \widetilde{\eta}_{n,j}^2)$ containing~$z$ on the event that~$\widetilde{\eta}_{n,j}^1$ and~$\widetilde{\eta}_{n,j}^2$ intersect before leaving $B(x_j,2\delta)$ provided~$\delta$ is small enough. See the caption of Figure~\ref{fig::lightcone_continuous} for further explanation of this final point. \end{proof} \begin{proposition} \label{prop::pocket_boundaries_continuous} Suppose that we have the same setup as in Lemma~\ref{lem::locally_finite} and that $\theta_1 \leq \theta_2$ are angles with $\theta_2 - \theta_1 < \theta_c$ and $\theta_2 - \theta_1 \leq \pi$. Let $(\theta_n^1)$, $(\theta_n^2)$ be sequences of angles with $\theta_n^1 \leq \theta_n^2$ and $\theta_n^2 -\theta_n^1 \leq \pi$ for all $n \in \mathbf{N}$ such that $\theta_n^j \to \theta_j$ as $n \to \infty$ for $j=1,2$. For each $z \in \mathbf{D}$ and $n \in \mathbf{N}$, let $\eta_n^j(z)$ be the flow line which forms the $\theta_n^j$-angle boundary of the pocket of ${\mathbf L}(\theta_n^1,\theta_n^2)$ which contains $z$. Then $\eta_n^j(z) \to \sideflow{j}{z}$ for $j=1,2$ almost surely as $n \to \infty$ with respect to the uniform topology modulo parameterization. \end{proposition} \begin{proof} This follows from the same argument used to prove Proposition~\ref{prop::lightcone_continuous}. 
\end{proof} \subsection{Explorations and continuity} \label{subsec::explorations} \begin{figure}[ht!] \begin{center} \includegraphics[scale=0.85,page=3]{figures/lightcone_pocket2} \end{center} \caption{\label{fig::lightcone_pocket2} (Continuation of Figure~\ref{fig::lightcone_pocket}.) Shown is a second pocket $\pocket{w}$ of ${\mathbf L}(\theta_1,\theta_2)$. If the $\theta_2$-angle flow line $\sideflow{2}{w}$ which generates $\side{2}{w}$ merges into the right side of the $\theta_2$-angle flow line $\sideflow{2}{z}$ which generates $\side{2}{z}$ of $\partial \pocket{z}$ as illustrated, then the counterflow line $\eta'$ visits (the interior of) $\pocket{w}$ before visiting (the interior of) $\pocket{z}$. This determines (and is the same as) the order in which the trajectory we consider which explores ${\mathbf L}(\theta_1,\theta_2)$ visits $\pocket{z}$ and $\pocket{w}$. The same color scheme for the segments of $\eta'$ as it visits the points of $\partial \pocket{w}$ is used as in Figure~\ref{fig::lightcone_pocket}. } \end{figure} We assume that $\theta_1 < \theta_2$ are angles with $\theta_2 - \theta_1 < \theta_c$ and $\theta_2 - \theta_1 \leq \pi$. We also assume that the boundary data of~$h$ is such that the flow lines $\eta_1,\eta_2$ with angles $\theta_1,\theta_2$ starting from~$-i$, respectively, almost surely reach~$i$ before hitting the continuation threshold. Let~$\eta'$ be the counterflow line of $h + (\theta_2-\tfrac{\pi}{2})\chi$ starting from~$i$ and targeted at~$-i$. By \cite[Theorem~1.4]{MS_IMAG}, the left boundary of~$\eta'$ stopped upon hitting any point is equal to the flow line of angle~$\theta_2$ starting from that point. We will use the path~$\eta'$ to order the points on~${\mathbf L}(\theta_1,\theta_2)$ and then use the continuity of~$\eta'$ to show that there exists a continuous, non-crossing path whose range is equal to~${\mathbf L}(\theta_1,\theta_2)$ and which visits the points of~${\mathbf L}(\theta_1,\theta_2)$ in this order. 
We will then show that the path has a continuous chordal Loewner driving function and, in certain special cases, yields a local set for~$h$ when drawn up to any stopping time. In the next section, we will use these facts to complete the proof of Theorem~\ref{thm::coupling} by showing that the corresponding path (in a slightly modified setup) evolves as the appropriate~${\rm SLE}_\kappa(\rho)$ process and is coupled with and determined by the field in the desired manner. This will also give Theorem~\ref{thm::continuous}. The path which traverses~${\mathbf L}(\theta_1,\theta_2)$ is constructed in the following manner. \begin{enumerate} \item Suppose that $z,w \in \mathbf{D}$ are distinct. We say that~$\pocket{z}$ comes before~$\pocket{w}$ if~$\eta'$ visits~$\open{z}$ before~$\open{w}$. Equivalently, $\pocket{z}$ comes before~$\pocket{w}$ if the flow line of~$h$ starting from~$\open{z}$ with angle~$\theta_2$ merges with the flow line of angle~$\theta_2$ starting from~$\open{w}$ on its right side. \item We take~$\eta$ to be the concatenation of the paths~$\sideflow{1}{z}$ using the same ordering as for the pockets~$\pocket{z}$. \end{enumerate} We will now use the continuity of~$\eta'$ to deduce the continuity of~$\eta$. \begin{lemma} \label{lem::continuous} The trajectory~$\eta$ from~$i$ to~$-i$ in~$\overline{\mathbf{D}}$ described above is almost surely continuous. \end{lemma} \begin{proof} Let $\eta_1, \eta_2$ be the flow lines of~$h$ starting from~$-i$ with angles $\theta_1,\theta_2$, respectively, as before, and let~$\eta'$ be the counterflow line of $h+(\theta_2-\tfrac{\pi}{2})\chi$ starting from~$i$ and targeted at~$-i$. From \cite[Theorem~1.3]{MS_IMAG}, we know that~$\eta'$ is almost surely continuous. We are going to prove the continuity of~$\eta$ in two steps. First, we will construct an intermediate path by starting with~$\eta'$ and then excising the excursions that it makes into~${\mathbf L}(\theta_1,\theta_2)$. 
Second, we will modify this intermediate path to get~$\eta$. Let $I = [0,\infty) \setminus (\eta')^{-1}({\mathbf L}(\theta_1,\theta_2))$. Since~$\eta'$ is continuous, $I \subseteq [0,\infty)$ is open, hence we can write $I = \cup_j (s_j,t_j)$ as a countable, disjoint union of open intervals. Note that for each~$j$ there exists $z \in \mathbf{D}$ such that $\eta'((s_j,t_j)) \subseteq \pocket{z}$. Suppose that $\eta'(s_j) \in \side{1}{z}$. Since~$\side{1}{z}$ is contained in the range of~$\eta'$ and~$\eta'$ visits the points of~$\side{1}{z}$ in the reverse chronological order in which they are drawn by~$\sideflow{1}{z}$, it must be that $\eta'(s_j)=\eta'(t_j)$. Consequently, letting $\widetilde{\eta}|_{[0,\infty) \setminus I} = \eta'|_{[0,\infty) \setminus I}$ and $\widetilde{\eta}|_{(s_j,t_j)} = \eta'(s_j) = \eta'(t_j)$ for each $j \in \mathbf{N}$ such that $\eta'(s_j) \in \side{1}{z}$, we see that~$\widetilde{\eta}$ is almost surely continuous. Note that after filling the right side of~$\side{2}{z}$ for a pocket~$\pocket{z}$ and then after hitting~$\open{z}$ for the first time,~$\widetilde{\eta}$ travels inside~$\pocket{z}$ starting from~$\open{z}$ until reaching~$\close{z}$ while bouncing off the left side of~$\side{2}{z}$ and does not hit the right side of~$\side{1}{z}$. The amount of time that this takes is equal to the amount of time it takes~$\eta'$ to travel from~$\open{z}$ to~$\close{z}$. Next, $\widetilde{\eta}$ fills~$\side{1}{z}$ until reaching~$\open{z}$. While filling~$\side{1}{z}$, it makes excursions out of~$\pocket{z}$ but never into (the interior of) $\pocket{z}$. Recall from Lemma~\ref{lem::locally_finite} that the pockets of~${\mathbf L}(\theta_1,\theta_2)$ are almost surely locally finite. Let $(P_n)$ be an ordering of the pockets of~${\mathbf L}(\theta_1,\theta_2)$ such that $\mathop{\mathrm{diam}}(P_n) \geq \mathop{\mathrm{diam}}(P_{n+1})$ for all~$n$.
(For example, we can order the pockets by diameter and then break ties using a fixed ordering of the rationals.) For each~$j$, we let~$\widetilde{\eta}_j$ be the path which agrees with~$\widetilde{\eta}$ in~$P_m$ for $m \geq j+1$ and, for $1 \leq m \leq j$, follows~$\sideflow{1}{P_m}$ rather than~$\widetilde{\eta}$ while traveling from the opening point to the closing point of~$P_m$ (but in the same interval of time). The local finiteness of the $(P_j)$ implies that the sequence $(\widetilde{\eta}_j)$ is Cauchy with respect to the uniform topology. Therefore the sequence $(\widetilde{\eta}_j)$ has a continuous limit~$\widehat{\eta}$. To complete the proof, we are going to argue that~$\widehat{\eta}$ is the same as~$\eta$. We begin by reparameterizing~$\widehat{\eta}$ by excising those intervals of time which correspond to the excursions that~$\eta'$ makes into pockets of~${\mathbf L}(\theta_1,\theta_2)$ starting from~$\side{1}{P}$ for a pocket~$P$. We do not change the time in which~$\widehat{\eta}$ is drawing the boundaries~$\side{1}{P}$ themselves. By the continuity of~$\eta'$, it is easy to see that this reparameterization is continuous (the set of these excursions is locally finite). Moreover, the set of times at which~$\widehat{\eta}$ is drawing the boundaries~$\side{1}{P}$ has full Lebesgue measure and, in particular, is dense. This proves that~$\eta$ can be reparameterized so that it extends continuously off the intervals of time in which it is drawing the~$\theta_1$-angle boundaries, which proves the desired result. \end{proof} \begin{lemma} \label{lem::continuous_loewner} The path~$\eta$ from Lemma~\ref{lem::continuous} has a continuous chordal Loewner driving function. \end{lemma} \begin{proof} We will prove the result using \cite[Proposition~6.12]{MS_IMAG}. We first apply a conformal change of coordinates $\mathbf{D} \to \mathbf{H}$ which sends~$i$ to~$0$ and~$-i$ to~$\infty$ so that we may assume without loss of generality that we are working on~$\mathbf{H}$.
That the first criterion from \cite[Proposition~6.12]{MS_IMAG} is satisfied by~$\eta$ follows from Lemma~\ref{lem::continuous} and the way that we have constructed~$\eta$ from~$\eta'$. We will now check the second criterion. That is,~$\eta$ almost surely does not trace itself or~$\partial \mathbf{H}$. If we parameterize~$\eta$ as in the end of the proof of Lemma~\ref{lem::continuous}, then we know that it spends Lebesgue almost all of its time drawing the~$\theta_1$-angle boundaries of the pockets of~${\mathbf L}(\theta_1,\theta_2)$. When drawing such a boundary,~$\eta$ does not hit the past of its range except at the opening and closing points of the corresponding pockets. Moreover, it also cannot trace the domain boundaries in these intervals. Consequently, the claimed result follows. \end{proof} We are next going to argue that the path~$\eta$ together with the left and right boundaries~$\eta_2$ and~$\eta_1$, respectively, of~${\mathbf L}(\theta_1,\theta_2)$ is local (in the sense of \cite{SchrammShe10}) for and almost surely determined by~$h$. \begin{proposition} \label{prop::ordering_local} For each $t \geq 0$, let~$\mathcal {F}_t$ be the $\sigma$-algebra generated by~$\eta|_{[0,t]}$ and the left and right boundaries~$\eta_2$ and~$\eta_1$, respectively, of~${\mathbf L}(\theta_1,\theta_2)$. For each $(\mathcal {F}_t)$-stopping time~$\tau$, $\eta([0,\tau]) \cup \eta_1 \cup \eta_2$ is a local set for and almost surely determined by~$h$. \end{proposition} Let~$\eta'$ be the counterflow line of $h+(\theta_2-\tfrac{\pi}{2})\chi$ starting from~$i$ and targeted at~$-i$. Then the left boundary of~$\eta'$ stopped upon hitting a point~$z$ is equal to the flow line starting from~$z$ with angle~$\theta_2$. 
To prove Proposition~\ref{prop::ordering_local}, we are going to describe a ``local'' construction of~$\eta$ from~$\eta'$ (one which will only require us first to observe the left and right boundaries~$\eta_2$ and~$\eta_1$, respectively, of~${\mathbf L}(\theta_1,\theta_2)$ but not all of~${\mathbf L}(\theta_1,\theta_2)$). We begin by using~$\eta'$ to define paths as follows. Fix~$\epsilon > 0$. Let~$\tau_{\epsilon,1}$ be the first time~$t$ that there exists a flow line~$\eta_{\epsilon,1}^R$ of~$h$ with angle~$\theta_1$ starting from~$\eta'(t)$ and which crosses into~$\eta'([0,t])$ on its left side (i.e., the part of the outer boundary of~$\eta'([0,t])$ which is described by a flow line of angle~$\theta_2$ starting from~$\eta'(t)$) such that the following is true: the pocket formed by the left side of~$\eta'([0,t])$ and the range of this path drawn up until crossing into the left side of~$\eta'([0,t])$ has diameter at least~$\epsilon$. (Throughout, we shall write~$\eta_{\epsilon,1}^R$ to mean the path stopped at the time of first hitting the left side of~$\eta'([0,\tau_{\epsilon,1}])$.) Note that the pocket will have diameter at least~$\epsilon$ if either: \begin{enumerate} \item $\eta_{\epsilon,1}^R$ has diameter at least~$\epsilon$ or \item $\eta_{\epsilon,1}^R$ has diameter less than~$\epsilon$ and hence closes the pocket before leaving the $\epsilon$-neighborhood of $\eta'([0,\tau_{\epsilon,1}])$. \end{enumerate} In particular, each of the two possibilities can be determined by observing the values of~$h$ in an $\epsilon$-neighborhood of $\eta'([0,\tau_{\epsilon,1}])$. We then let $\eta_{\epsilon,1}'$ be the path which agrees with $\eta'$ until time $\tau_{\epsilon,1}$ and then follows $\eta_{\epsilon,1}^R$ until hitting the left side of $\eta'([0,\tau_{\epsilon,1}])$. Let $P_{\epsilon,1}$ be the pocket thus formed by $\eta_{\epsilon,1}^R$ and the left side of $\eta'([0,\tau_{\epsilon,1}])$.
Note that $\partial P_{\epsilon,1}$ consists of the right side of $\eta_{\epsilon,1}^R$ and the left side of a flow line starting from $\eta'(\tau_{\epsilon,1})$ with angle $\theta_2$. In other words, $\partial P_{\epsilon,1}$ has the same structure as a pocket of ${\mathbf L}(\theta_1,\theta_2)$; recall Lemma~\ref{lem::form_pockets}. We let $x_{\epsilon,1} = \eta'(\tau_{\epsilon,1})$ be the opening point of $P_{\epsilon,1}$ and let $y_{\epsilon,1}$ be the closing point of $P_{\epsilon,1}$. Explicitly, $y_{\epsilon,1}$ is the point at which $\eta_{\epsilon,1}^R$ crosses into $\eta'([0,\tau_{\epsilon,1}])$. Moreover, $\eta'|_{[\tau_{\epsilon,1},\infty)}$ interacts with $P_{\epsilon,1}$ in the same manner that $\eta'$ interacts with a pocket of ${\mathbf L}(\theta_1,\theta_2)$ as described in Figure~\ref{fig::lightcone_pocket} and Figure~\ref{fig::lightcone_pocket2}. In particular, $\eta'|_{[\tau_{\epsilon,1},\infty)}$ enters (the interior of) $P_{\epsilon,1}$ at $x_{\epsilon,1}$ and does not leave $\overline{P}_{\epsilon,1}$ or hit $\eta_{\epsilon,1}^R$ until hitting $y_{\epsilon,1}$ for the first time, say at time $\sigma_{\epsilon,1}$. After hitting $y_{\epsilon,1}$, it visits the points on $\eta_{\epsilon,1}^R$ in the reverse order in which they are drawn by $\eta_{\epsilon,1}^R$. In particular, $\eta'|_{[\sigma_{\epsilon,1},\infty)}$ makes excursions both into and out of $P_{\epsilon,1}$ and each such excursion starts and ends at the same point on $\eta_{\epsilon,1}^R$ (different excursions, however, are rooted at different points on $\eta_{\epsilon,1}^R$). We take the part of $\eta_{\epsilon,1}'$ after it has finished drawing $\eta_{\epsilon,1}^R$ to be given by $\eta'|_{[\sigma_{\epsilon,1},\infty)}$ with those excursions of $\eta'$ from $\eta_{\epsilon,1}^R$ into $P_{\epsilon,1}$ excised (we leave the excursions out of $P_{\epsilon,1}$ alone).
Suppose that $k \geq 1$ and that paths $\eta_{\epsilon,1}'$, $\eta_{\epsilon,1}^R$, $\ldots$, $\eta_{\epsilon,k}'$, $\eta_{\epsilon,k}^R$, stopping times $\tau_{\epsilon,1}$, $\sigma_{\epsilon,1}$, $\ldots$, $\tau_{\epsilon,k}$, $\sigma_{\epsilon,k}$, and pockets $P_{\epsilon,1},\ldots,P_{\epsilon,k}$ with opening and closing points $x_{\epsilon,1},y_{\epsilon,1},\ldots,x_{\epsilon,k},y_{\epsilon,k}$ have been defined. We then let $\tau_{\epsilon,k+1}$ be the first time $t$ after time $\sigma_{\epsilon,k}$ that there is a flow line $\eta_{\epsilon,k+1}^R$ of $h$ with angle $\theta_1$ starting from $\eta_{\epsilon,k}'(t)$ which crosses into the left side of $\eta_{\epsilon,k}'([0,t])$ such that the pocket thus formed has diameter at least $\epsilon$. We then take $\eta_{\epsilon,k+1}'$ to be the path constructed from $\eta_{\epsilon,k}'$ in the same manner that we constructed $\eta_{\epsilon,1}'$ from $\eta'$ and let $\sigma_{\epsilon,k+1}$ (resp.\ $P_{\epsilon,k+1}$) be the corresponding stopping time (resp.\ pocket). Finally, we let $x_{\epsilon,k+1}$ (resp.\ $y_{\epsilon,k+1}$) be the opening (resp.\ closing) point of $P_{\epsilon,k+1}$. For each $\epsilon > 0$, we let $\mathcal {P}_\epsilon(\theta_1,\theta_2)$ consist of those pockets of ${\mathbf L}(\theta_1,\theta_2)$ which have diameter at least $\epsilon$; recall from Lemma~\ref{lem::locally_finite} that $\mathcal {P}_\epsilon(\theta_1,\theta_2)$ is finite almost surely. Let ${\mathbf L}_\epsilon^R(\theta_1,\theta_2) = \{\side{1}{P} : P \in \mathcal {P}_\epsilon(\theta_1,\theta_2)\}$. Let $J_\epsilon = \sup\{j \geq 1 : \tau_{\epsilon,j} < \infty\}$ and let $\mathcal {R}_\epsilon = \{ \eta_{\epsilon,j}^R : 1 \leq j \leq J_\epsilon\}$ consist of the $\theta_1$-angle boundary segments of the pockets $P_{\epsilon,j}$ (we will explain below that $J_\epsilon < \infty$ almost surely). We are now going to collect several observations about the exploration procedure that we have just defined.
\begin{lemma} \label{lem::epsilon_path_properties} Fix $\epsilon > 0$. The following are true. \begin{enumerate}[(i)] \item\label{it::epsilon_neighborhood_local} Suppose that $\zeta$ is a stopping time for $\eta_{\epsilon,j}'$. Then the $\epsilon$-neighborhood of $\eta_{\epsilon,j}'([0,\zeta])$ is a local set for $h$. \item\label{it::path_does_not_enter_pockets} For all $i$ and $j$ with $i \leq j$, almost surely $\eta_{\epsilon,j}'$ does not enter (the interior of) $P_{\epsilon,i}$. \item\label{it::epsilon_pockets_finite} Almost surely, $J_\epsilon < \infty$. \item\label{it::epsilon_pockets_contained} For each $1 \leq j \leq J_\epsilon$ such that $P_{\epsilon,j}$ lies between the left and right boundaries of ${\mathbf L}(\theta_1,\theta_2)$ there almost surely exists $P \in \mathcal {P}_\epsilon(\theta_1,\theta_2)$ such that $P_{\epsilon,j} \subseteq P$ and $\eta_{\epsilon,j}^R$ emanates from a point on $\side{2}{P}$. \item\label{it::epsilon_pockets_lightcone} Almost surely, ${\mathbf L}_\epsilon^R(\theta_1,\theta_2)$ is equal to the set which consists of those elements $\eta_{\epsilon,j}^R$ of $\mathcal {R}_\epsilon$ for $1 \leq j \leq J_\epsilon$ which lie between the left and right boundaries of ${\mathbf L}(\theta_1,\theta_2)$. \end{enumerate} \end{lemma} \begin{proof} To prove Part~\eqref{it::epsilon_neighborhood_local}, we will use the characterization of local sets given in the first part of \cite[Lemma~3.9]{SchrammShe10}. We are first going to explain the proof in the case that $j=1$. Fix $B \subseteq \mathbf{D}$ open and let $\tau_{\epsilon,B}$ be the first time $t$ that $\mathop{\mathrm{dist}}(\eta'(t),B) \leq \epsilon$. Let $h_B$ be the projection of $h$ onto the subspace of functions which are harmonic on $B$. Then \cite[Theorem~1.2]{MS_IMAG} implies that $\eta'|_{[0,\tau_{\epsilon,B}]}$ is almost surely determined by $h_B$.
Note that the event $\tau_{\epsilon,1} \leq \tau_{\epsilon,B}$ is also almost surely determined by $h_B$ because the set of all flow lines with angle $\theta_1$ starting from points in $\mathbf{D} \setminus B$ and stopped upon exiting $\mathbf{D} \setminus B$ is (simultaneously) almost surely determined by $h_B$. In particular, we only need to observe these flow lines in an $\epsilon$-neighborhood of $\eta'|_{[0,\tau_{\epsilon,B}]}$ to see if $\tau_{\epsilon,1} \leq \tau_{\epsilon,B}$; recall the discussion after the statement of Proposition~\ref{prop::ordering_local}. Assume that we are working on the event $\tau_{\epsilon,1} \leq \tau_{\epsilon,B}$. Then $\eta'|_{[0,\tau_{\epsilon,1}]}$ is almost surely determined by $h_B$ for the same reason. Let $\tau_{\epsilon,1,B}$ be the first time $t$ that $\mathop{\mathrm{dist}}(\eta_{\epsilon,1}^R(t),B) \leq \epsilon$. Then $\eta_{\epsilon,1}^R|_{[0,\tau_{\epsilon,1,B}]}$ is also almost surely determined by $h_B$, again for the same reason. Finally, on the event that $\eta_{\epsilon,1}^R$ terminates in $\eta'([0,\tau_{\epsilon,1}])$ before time $\tau_{\epsilon,1,B}$, it is easy to see that $\eta_{\epsilon,1}'|_{[\sigma_{\epsilon,1},\infty)}$ stopped upon getting within distance $\epsilon$ of $B$ is almost surely determined by $h_B$ because it is given by the counterflow line of $h$ starting from the terminal point of $\eta_{\epsilon,1}^R$ with its excursions into $P_{\epsilon,1}$ excised. In particular, this is the same as the counterflow line of the conditional GFF $h$ given $\eta'|_{[0,\tau_{\epsilon,1}]}$ and $\eta_{\epsilon,1}^R$ starting from the terminal point of $\eta_{\epsilon,1}^R$. This proves Part~\eqref{it::epsilon_neighborhood_local} for $j=1$. The result for $j \geq 2$ follows using a similar argument and induction on $j$. 
Part~\eqref{it::path_does_not_enter_pockets} follows because, by our construction, after drawing a pocket we excise all of the excursions that the counterflow line makes into that pocket and the flow line interaction rules imply that a flow line of angle $\theta_1$ (i.e., one of the $\eta_{\epsilon,j}^R$) cannot cross into the interior of such a pocket. We turn to Part~\eqref{it::epsilon_pockets_finite}. For each $k \geq 1$, consider the path $\widetilde{\eta}_{\epsilon,k}'$ which is given by starting with $\eta'$ and then excising the excursions that $\eta'$ makes into the interior of each $P_{\epsilon,j}$ for $1 \leq j \leq k$. Then each path $\widetilde{\eta}_{\epsilon,k}'$ is continuous and has the same range as $\eta_{\epsilon,k}'$ by the argument described after Lemma~\ref{lem::lightcone_approximation}. In particular, the range of $\widetilde{\eta}_{\epsilon,k}'$ is equal to $\overline{\mathbf{D}} \setminus \cup_{j=1}^k P_{\epsilon,j}$. As $k \geq 1$ increases, more and more excursions are excised in order to generate $\widetilde{\eta}_{\epsilon,k}'$. Thus arguing as in the proof of Lemma~\ref{lem::continuous}, this implies that the limit $\widetilde{\eta}_\epsilon'$ of $\widetilde{\eta}_{\epsilon,k}'$ as $k \to \infty$ exists as a uniform limit of continuous paths on a compact interval and $\widetilde{\eta}_\epsilon'$ is continuous and non-self-crossing. Moreover, the complement of the range of $\widetilde{\eta}_\epsilon'$ can only have a finite number of components of diameter larger than $\epsilon > 0$. Indeed, for otherwise the range of $\widetilde{\eta}_\epsilon'$ would not be locally connected, which in turn would contradict continuity. This gives Part~\eqref{it::epsilon_pockets_finite}. We are now going to explain the proof of Part~\eqref{it::epsilon_pockets_contained}. We first condition on the values of $h$ in the $\epsilon$-neighborhood of $\eta_{\epsilon,j}'|_{[0,\sigma_{\epsilon,j}]}$ for some $j$.
Note that $\partial P_{\epsilon,j}$ consists of the right side of a flow line with angle $\theta_1$ and the left side of a flow line with angle $\theta_2$. Consequently, an angle-varying flow line with angles contained in $[\theta_1,\theta_2]$ which changes angles only a finite number of times and at positive rational times cannot enter (the interior of) $P_{\epsilon,j}$ by the flow line interaction rules. Thus if $P_{\epsilon,j}$ for $1 \leq j \leq J_\epsilon$ is between the left and right boundaries of ${\mathbf L}(\theta_1,\theta_2)$ then it is a subset of some element in $\mathcal {P}_\epsilon(\theta_1,\theta_2)$. This gives the first part of Part~\eqref{it::epsilon_pockets_contained}. To establish the second part of Part~\eqref{it::epsilon_pockets_contained}, we first condition on ${\mathbf L}(\theta_1,\theta_2)$. Note that $\eta'$ enters the interior of a pocket of ${\mathbf L}(\theta_1,\theta_2)$ at its opening point. Thus, $\eta_{\epsilon,1}'(\tau_{\epsilon,1})$ must be on the boundary of such a pocket, say $P \in \mathcal {P}_\epsilon(\theta_1,\theta_2)$. Indeed, for otherwise the exploration used to generate $\eta_{\epsilon,1}'$ would have skipped following $\sideflow{1}{P}$. Iterating this proves the claim for $j \geq 1$. We turn to Part~\eqref{it::epsilon_pockets_lightcone}. We fix $P \in \mathcal {P}_\epsilon(\theta_1,\theta_2)$. We claim that either $\eta_{\epsilon,1}^R = \sideflow{1}{P}$ or, if not, then it cannot merge into $\sideflow{1}{P}$. To see that this is the case, we assume that $\eta_{\epsilon,1}^R$ is not equal to $\sideflow{1}{P}$. If $\eta_{\epsilon,1}^R$ did merge into $\sideflow{1}{P}$, then $\eta'$ would visit the left side of $\sideflow{1}{P}$ before hitting $\open{P}$ because the path would have to visit the left side of $\eta_{\epsilon,1}^R$ before hitting $\open{P}$.
(This follows because whenever $\eta'$ hits the opening point of a pocket, the flow line interaction rules imply that it immediately enters and then exits at the closing point of the pocket. Once it exits at the closing point, it immediately starts filling the $\theta_1$-angle boundary segment.) This, in turn, would contradict the ordering because $\eta'$ would hit the left side of $\side{1}{P}$ before hitting $\open{P}$. Iterating this argument implies that $\eta_{\epsilon,j}^R$ is either equal to $\sideflow{1}{P}$ where $P$ is the pocket of ${\mathbf L}(\theta_1,\theta_2)$ which contains $P_{\epsilon,j}$ or does not merge with $\sideflow{1}{P}$. Since the range of $\eta_\epsilon' = \eta_{\epsilon,J_\epsilon}'$ is equal to $\mathbf{D} \setminus \cup_{j=1}^{J_\epsilon} P_{\epsilon,j}$, if $\sideflow{1}{P}$ for some $P \in \mathcal {P}_\epsilon(\theta_1,\theta_2)$ was not equal to one of the $\eta_{\epsilon,j}^R$ for $1 \leq j \leq J_\epsilon$, then $\eta_{\epsilon}'$ would have to visit $\open{P}$. This is a contradiction since exploring $\sideflow{1}{P}$ upon hitting $\open{P}$ would lead to a pocket with diameter at least $\epsilon$ (since no other part of the $\theta_1$-angle boundary segment would have been explored by $\eta_{\epsilon}'$ before the path hits the opening point). This proves that ${\mathbf L}_\epsilon^R(\theta_1,\theta_2) \subseteq \mathcal {R}_\epsilon$ almost surely. We are now going to prove that the set which consists of those elements $\eta_{\epsilon,j}^R$ of $\mathcal {R}_\epsilon$ for $1 \leq j \leq J_\epsilon$ which lie between the left and right boundaries of ${\mathbf L}(\theta_1,\theta_2)$ is contained in ${\mathbf L}_\epsilon^R(\theta_1,\theta_2)$ almost surely. Fix $1 \leq j \leq J_\epsilon$. Suppose that $P_{\epsilon,j}$ is strictly contained in the pocket $P$ of $\mathcal {P}_\epsilon(\theta_1,\theta_2)$ which contains $P_{\epsilon,j}$. 
Upon hitting the opening point $x_{\epsilon,j}$ of $P_{\epsilon,j}$, $\eta'$ has to enter the interior of $P_{\epsilon,j}$ hence the interior of $P$ as explained above. If $x_{\epsilon,j}$ is not equal to $\open{P}$, then this implies that $\eta'$ enters the interior of $P$ before hitting $\open{P}$. This is a contradiction; therefore $P_{\epsilon,j} = P$, as desired. This proves Part~\eqref{it::epsilon_pockets_lightcone}. \end{proof} \begin{proof}[Proof of Proposition~\ref{prop::ordering_local}] As in the proof of Lemma~\ref{lem::epsilon_path_properties}, we let $\eta_\epsilon' = \eta_{\epsilon,J_\epsilon}'$. By the construction and Part~\eqref{it::epsilon_pockets_lightcone} of Lemma~\ref{lem::epsilon_path_properties}, $\eta_\epsilon'$ visits the elements of $\mathcal {P}_\epsilon(\theta_1,\theta_2)$ in the same order as $\eta$ defined just before Lemma~\ref{lem::continuous}. Therefore it is easy to see from the construction that $\eta_\epsilon'$ with its excursions outside of the region between $\eta_1$ and $\eta_2$ excised converges uniformly modulo parameterization to $\eta$ as $\epsilon \to 0$. Therefore Lemma~\ref{lem::epsilon_path_properties} implies that $\eta([0,t]) \cup \eta_1 \cup \eta_2$ is a local set for $h$ for each rational time $t$. Combining this with the characterization of local sets given in the first part of \cite[Lemma~3.9]{SchrammShe10} implies that $\eta([0,\tau]) \cup \eta_1 \cup \eta_2$ is local for each $\eta$-stopping time $\tau$. \end{proof} We are now going to show that the law of the exploration path is continuous in the angles of the light cone. This, in turn, will be used in Section~\ref{subsec::law} to establish the continuity of the law of ${\rm SLE}_\kappa(\rho)$ as the value of $\rho$ varies between $(-2-\tfrac{\kappa}{2}) \vee (\tfrac{\kappa}{2}-4)$ and $-2$.
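Before proceeding, we record what this range of $\rho$ corresponds to at the level of angles. Recalling the relation $\theta_\rho = \pi(\rho+2)/(\tfrac{\kappa}{2}-2)$ from \eqref{eqn::lightcone_angle}, we have
\begin{align*}
\rho = -2 \quad&\Longrightarrow\quad \theta_\rho = 0,\\
\rho = \tfrac{\kappa}{2}-4 \quad&\Longrightarrow\quad \theta_\rho = \frac{\pi(\tfrac{\kappa}{2}-2)}{\tfrac{\kappa}{2}-2} = \pi, \quad\text{and}\\
\rho = -2-\tfrac{\kappa}{2} \quad&\Longrightarrow\quad \theta_\rho = \frac{-\tfrac{\pi\kappa}{2}}{\tfrac{\kappa}{2}-2} = \frac{\pi\kappa}{4-\kappa} = \theta_c,
\end{align*}
so that the endpoints of the range of $\rho$ correspond exactly to the constraints $\theta_2 - \theta_1 \leq \pi$ and $\theta_2 - \theta_1 < \theta_c$ on the light cone angles.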
\begin{proposition} \label{prop::interpolation} Suppose that $\theta_1 \leq \theta_2$ are angles with $\theta_2 - \theta_1 < \theta_c$ and $\theta_2 - \theta_1 \leq \pi$ and that $(\theta_n^1)$, $(\theta_n^2)$ are sequences of angles such that $\theta_n^1 \leq \theta_n^2$ and $\theta_n^2 - \theta_n^1 < \theta_c$ and $\theta_n^2 - \theta_n^1 \leq \pi$ for each $n \in \mathbf{N}$ and $\theta_n^j \to \theta_j$ as $n \to \infty$ for $j = 1,2$. For each $n\in \mathbf{N}$, let $\eta_n$ be the path described above which visits the points of ${\mathbf L}(\theta_n^1,\theta_n^2)$ and let $\eta$ be the path associated with ${\mathbf L}(\theta_1,\theta_2)$. Then $\eta_n \to \eta$ as $n \to \infty$ almost surely with respect to the uniform topology modulo reparameterization. \end{proposition} \begin{remark} \label{rem::interpolation_not_continuous} Proposition~\ref{prop::interpolation} does not imply that the map which takes a pair of angles $(\theta_1,\theta_2)$ to the exploration path of ${\mathbf L}(\theta_1,\theta_2)$ is a continuous function into the space of paths equipped with the uniform topology modulo parameterization for a fixed realization of $h$. This follows from the same reasoning as in Remark~\ref{rem::lightcone_not_continuous} in which it was explained that $(\theta_1,\theta_2) \mapsto {\mathbf L}(\theta_1,\theta_2)$ is not a continuous function into the space of closed sets equipped with the Hausdorff topology for a fixed realization of $h$. Proposition~\ref{prop::interpolation} does, however, imply that the map which takes a pair of angles $(\theta_1,\theta_2)$ to the law of the exploration path of ${\mathbf L}(\theta_1,\theta_2)$ is continuous with respect to the weak topology. 
\end{remark} \begin{proof}[Proof of Proposition~\ref{prop::interpolation}] For each $n \in \mathbf{N}$, let~$\eta_n'$ (resp.\ $\eta'$) be the counterflow line of~$h$ which orders~${\mathbf L}(\theta_n^1,\theta_n^2)$ (resp.\ ${\mathbf L}(\theta_1,\theta_2)$) to generate the light cone exploration path~$\eta_n$ (resp.\ $\eta$). Then we know that $\eta_n' \to \eta'$ almost surely as $n \to \infty$ with respect to the uniform topology.\footnote{This follows because if we fix any finite collection of points $z_1,\ldots,z_k \in \mathbf{D}$, the ``cells'' generated by the flow and dual flow lines corresponding to~$\eta_n'$ starting from these points will converge to those of~$\eta'$ as $n \to \infty$. If we fix enough points, then w.h.p.\ the maximal diameter of the cells will be smaller than a fixed choice of $\epsilon > 0$. The claim follows by reparameterizing~$\eta_n'$ so that it spends the same amount of time in a given cell as~$\eta'$ does. Note that this time change converges to the identity as $n \to \infty$ since asymptotically the areas of the cells converge, too.} We also know from Proposition~\ref{prop::lightcone_continuous} that ${\mathbf L}(\theta_n^1,\theta_n^2) \to {\mathbf L}(\theta_1,\theta_2)$ almost surely as $n \to \infty$ with respect to the Hausdorff topology. Fix an ordering $(r_j)$ of the points in~$\mathbf{D}$ with rational coordinates. For each $n \in \mathbf{N}$, let~$(P_j^n)$ be the ordering of the pockets of~${\mathbf L}(\theta_n^1,\theta_n^2)$ according to diameter in which ties are broken according to which pocket contains the element of $(r_j)$ with the smallest index and let $(P_j)$ be the ordering of the pockets of~${\mathbf L}(\theta_1,\theta_2)$ defined in the same way. For each $j,n \in \mathbf{N}$, we also let~$I_j^n$ (resp.\ $I_j$) be the interval of time in which~$\eta_n'$ (resp.\ $\eta'$) travels from the opening to the closing point of~$P_j^n$ (resp.\ $P_j$).
Note that~$I_j^n$ (resp.\ $I_j$) is also the interval of time in which~$\eta_n$ (resp.\ $\eta$) travels from the opening to the closing point of~$P_j^n$ (resp.\ $P_j$) along~$\side{1}{P_j^n}$ (resp.\ $\side{1}{P_j}$). Let~$\eta_{1,j}^n = \eta_n|_{I_j^n}$ and~$\eta_{1,j} = \eta|_{I_j}$. It follows from Proposition~\ref{prop::lightcone_continuous} and Proposition~\ref{prop::pocket_boundaries_continuous} that~$\eta_{1,j}^n \to \eta_{1,j}$ almost surely as~$n \to \infty$ with respect to the uniform topology modulo parameterization. Fix $\epsilon > 0$. Combining all of the above, we can see that there exists~$k_0 \in \mathbf{N}$ such that for each~$k \geq k_0$ there exists~$n_0 \in \mathbf{N}$ such that the following is true. We have that~$n \geq n_0$ implies that \begin{enumerate} \item the uniform distance modulo parameterization between~$\eta_{1,j}^n$ and~$\eta_{1,j}$ is at most~$\epsilon$ for each $1 \leq j \leq k$, \item $\mathop{\mathrm{diam}}(P_j^n) \leq \epsilon$ for all $j > k$, and \item $\|\eta_n' - \eta'\|_\infty \leq \epsilon$. \end{enumerate} Reparameterizing the time of $\eta_n'$ and $\eta_n$ so that $I_j^n = I_j$ for each $1 \leq j \leq k$, it thus follows that, after possibly reparameterizing the time of $\eta_n'$ and $\eta_n$ within each $I_j$, with $\mathcal {I} = \cup_{1 \leq j \leq k} I_j$ we have that \begin{equation} \label{eqn::big_pocket_distance} \|\eta_n|_\mathcal {I} - \eta|_\mathcal {I} \|_\infty \leq \epsilon. \end{equation} Let $\mathcal {J} = (\eta_n')^{-1}(\cup_{j > k} P_j^n) = (\eta')^{-1}(\cup_{j > k} P_j)$. By the way that we have defined the light cone exploration path, we also have that \begin{equation} \label{eqn::small_pocket_distance} \|\eta_n'|_{\mathcal {J}} - \eta_n|_{\mathcal {J}}\|_\infty \leq \epsilon \quad\text{and}\quad \|\eta'|_{\mathcal {J}} - \eta|_{\mathcal {J}}\|_\infty \leq \epsilon.
\end{equation} Note that $\eta_n$ (resp.\ $\eta$) is determined by its values on $\mathcal {I} \cup \mathcal {J}$ since the times in $[0,\infty) \setminus \overline{\mathcal {I} \cup \mathcal {J}}$ correspond to those times in which $\eta_n'$ (resp.\ $\eta'$) makes an excursion from $\side{1}{P_j^n}$ (resp.\ $\side{1}{P_j}$) into $P_j^n$ (resp.\ $P_j$) for some $1 \leq j \leq k$. In particular, $\eta_n$ (resp.\ $\eta$) is piecewise constant in $[0,\infty) \setminus \overline{\mathcal {I} \cup \mathcal {J}}$. Combining \eqref{eqn::big_pocket_distance} and \eqref{eqn::small_pocket_distance} implies that \begin{align*} \| \eta_n - \eta\|_\infty &\leq \| \eta_n|_{\mathcal {I}} - \eta|_{\mathcal {I}} \|_\infty + \| \eta_n|_{\mathcal {J}} - \eta|_{\mathcal {J}} \|_\infty\\ &\leq \epsilon + \| \eta_n|_{\mathcal {J}} - \eta_n'|_{\mathcal {J}} \|_\infty + \|\eta_n'|_{\mathcal {J}} - \eta'|_{\mathcal {J}} \|_\infty + \| \eta'|_{\mathcal {J}} - \eta|_{\mathcal {J}} \|_\infty\\ &\leq 3\epsilon + \| \eta_n' - \eta'\|_\infty \leq 4\epsilon, \end{align*} which gives the desired result. \end{proof} \subsection{Law of the exploration path} \label{subsec::law} \begin{figure}[ht!] \begin{center} \includegraphics[scale=0.85]{figures/continuum_lightcone} \end{center} \caption{\label{fig::continuum_lightcone} Suppose that~$h$ is a GFF on~$\mathbf{H}$ with piecewise constant boundary data which changes values at most a finite number of times. Shown on the left side are the flow lines $\eta_1,\eta_2$ with angles $0,\theta_\rho$, respectively, of~$h$ starting from~$0$, both of which we assume reach $\infty$ before hitting the continuation threshold, and the flow line~$\widehat{\eta}$ of angle~$\theta_\rho$ starting from a point $u$ on the boundary of a component of $\mathbf{H} \setminus (\eta_1 \cup \eta_2)$ which is between $\eta_1$ and~$\eta_2$. The outer boundary of ${\mathbf L}(0,\theta_\rho)$ is given by $\eta_1 \cup \eta_2$. 
The exploration path $\eta$ of ${\mathbf L}(0,\theta_\rho)$ starts from $\infty$ and its outer boundary stopped upon hitting $u$ is equal to the union of $\widehat{\eta}$ and the part of $\eta_1$ (resp.\ $\eta_2$) after it hits $u$ (resp.\ $w$). The light blue region indicates the hull of $\eta$ stopped upon hitting $u$. Let $C$ be the component surrounded by $\eta_1$, $\eta_2$, and $\widehat{\eta}$ as shown and let $\varphi \colon C \to \mathbf{H}$ be the conformal map which takes $u$ to $0$, $\widehat{w}$, the point where $\eta_2$ and $\widehat{\eta}$ merge, to $\infty$, and $w$, the point on $\partial C$ where $\eta_1,\eta_2$ first intersect, to $-v$. The boundary data for $\widetilde{h} = h \circ \varphi^{-1} - \chi \arg(\varphi^{-1})'$ is as shown on the right. The image of the part of ${\mathbf L}(0,\theta_\rho)$ contained in $\overline{C}$ under $\varphi$ is equal to the light cone ${\mathbf L}_{\mathbf{R}_-}(0,\theta_\rho)$ of $\widetilde{h}$ starting from $0$ and the image of the part of $\eta$ when it is in $\overline{C}$ gives the corresponding exploration path. Sending $v \to \infty$, ${\mathbf L}_{\mathbf{R}_-}(0,\theta_\rho)$ converges to the corresponding light cone of a field whose boundary conditions are given by $-\lambda$ (resp.\ $\lambda(1+\rho)$) on $\mathbf{R}_-$ (resp.\ $\mathbf{R}_+$). } \end{figure} \begin{figure}[ht!] \begin{center} \includegraphics[scale=0.85]{figures/lightcone_outer_boundary} \end{center} \caption{\label{fig::outer_boundary} Setup for the proof of Lemma~\ref{lem::lightcone_conditional}. Suppose that $h$ is a GFF on $\mathbf{H}$ with the illustrated boundary data where $\rho \in [\tfrac{\kappa}{2}-4,-2)$ and that $\eta$ is the exploration path associated with ${\mathbf L}_{\mathbf{R}_-}(0,\theta_\rho)$ where $\theta_\rho = \pi(\rho+2)/(\tfrac{\kappa}{2}-2)$. Suppose that $\tau$ is a stopping time for $\eta$. 
Then we can describe the boundary behavior of the conditional law of $h$ given $\eta|_{[0,\tau]}$ restricted to the unbounded connected component of $\mathbf{H} \setminus \eta([0,\tau])$ by relating the outer boundary of $\eta([0,\tau])$ (the union of the red and blue paths in the illustration) to the outer boundary of the counterflow line $\eta'$ (the hull of which is indicated in light green) stopped at the first time $\tau'$ that it hits $X_\tau$, the opening point of the pocket whose boundary is being drawn by $\eta$ at time $\tau$, and the $0$-angle flow line starting from the leftmost point of $\eta'([0,\tau']) \cap \mathbf{R}$ (red). The region bounded by the solid red, dashed red, and blue paths is the pocket of ${\mathbf L}_{\mathbf{R}_-}(0,\theta_\rho)$ whose $0$-angle boundary is being drawn by $\eta$ at time $\tau$.} \end{figure} It will be more convenient for us to work on $\mathbf{H}$ in this section. Throughout, we fix $\rho \in [\tfrac{\kappa}{2}-4,-2)$ and suppose that $h$ is a GFF on $\mathbf{H}$ with boundary conditions given by $-\lambda$ on $\mathbf{R}_-$ and $\lambda(1+\rho)$ on $\mathbf{R}_+$, as shown in Figure~\ref{fig::outer_boundary}. Let $\theta_\rho$ be as in \eqref{eqn::lightcone_angle}. Let $\eta'$ be the counterflow line starting from the origin whose left boundary stopped upon hitting a point $z$ is equal to the flow line with angle $\theta_\rho$ starting from $z$. Explicitly, $\eta'$ is the counterflow line of $h+(\tfrac{\pi}{2}+\theta_\rho)\chi$ starting from the origin. Note that this is the ``same'' as the corresponding counterflow line starting from $\infty$ because the path starting from $\infty$ will trace along $\mathbf{R}_+$ and does not enter (the interior of) $\mathbf{H}$ until hitting the origin. Using exactly the same analysis as in Section~\ref{subsec::lightcones} and Section~\ref{subsec::explorations}, we can construct from $\eta'$ a path $\eta$ which explores ${\mathbf L}_{\mathbf{R}_-}(0,\theta_\rho)$. 
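As a side remark (our addition, not in the original text): since $\theta_\rho = \pi(\rho+2)/(\tfrac{\kappa}{2}-2)$, the light cone regime $\rho \in [\tfrac{\kappa}{2}-4,-2)$ corresponds to opening angles

```latex
\theta_\rho\big|_{\rho = \frac{\kappa}{2}-4}
  = \frac{\pi\big(\frac{\kappa}{2}-4+2\big)}{\frac{\kappa}{2}-2}
  = \pi ,
\qquad\text{while}\qquad
\theta_\rho \downarrow 0 \quad\text{as } \rho \uparrow -2 ,
```

so the opening angle sweeps $(0,\pi]$ across the regime; the threshold value $\theta_\rho = \pi$ at $\rho = \tfrac{\kappa}{2}-4$ is the case analyzed in Section~\ref{sec::limiting_cases}.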
This path is continuous, has a continuous chordal Loewner driving function, and is almost surely determined by $h$. Moreover, the path drawn up to any stopping time is local for $h$ (in contrast to Proposition~\ref{prop::ordering_local}, it is not necessary also to condition on the outer boundary of the light cone). That these properties hold follows from the results of the previous subsections and the conditioning argument explained in Figure~\ref{fig::continuum_lightcone}. We will now determine the law of $\eta$. This, in turn, will lead to the proofs of Theorem~\ref{thm::continuous} and Theorem~\ref{thm::coupling} (it does not quite imply Theorem~\ref{thm::interpolation} because the boundary data is different for different $\rho$ values). For each $t \geq 0$, let $K_t$ be the closure of the complement of the unbounded connected component $\mathbf{H}_t$ of $\mathbf{H} \setminus \eta([0,t])$. For each $t \geq 0$ such that $\eta$ is drawing a segment of $\side{1}{P}$ where $P$ is a pocket of ${\mathbf L}_{\mathbf{R}_-}(0,\theta_\rho)$ in an open interval of time containing $t$, let $P_t$ be the corresponding pocket and let $X_t$ be its opening point. For other values of $t$, we take $P_t = \emptyset$ and let $X_t$ be the limit of $X_s$ as $s \downarrow t$ where the times $s$ are restricted to those in which $\eta$ is drawing a segment of $\side{1}{P}$ for a pocket $P$ of ${\mathbf L}_{\mathbf{R}_-}(0,\theta_\rho)$. The main step in determining the law of $\eta$ is the following, which gives the conditional law of $h$ given $\eta$ drawn up to a fixed stopping time. \begin{lemma} \label{lem::lightcone_conditional} Suppose that $\tau$ is an almost surely finite stopping time for $\eta$. Then the conditional law of $h$ given $\eta|_{[0,\tau]}$ is independently that of a GFF in each of the components of $\mathbf{H} \setminus \eta([0,\tau])$.
The boundary conditions in each of the bounded components agree with those of $h$ given ${\mathbf L}_{\mathbf{R}_-}(0,\theta_\rho)$ in the corresponding component (recall Lemma~\ref{lem::form_pockets}). On $\partial \mathbf{H}_\tau$, the boundary conditions are given by: \begin{enumerate}[(i)] \item\label{it::bd1} the left side of a $0$-angle flow line on the segment of $\partial \mathbf{H}_\tau$ which is to the left of $\eta(\tau)$ (left side of the red path in Figure~\ref{fig::outer_boundary}), \item\label{it::bd2} the right side of a $0$-angle flow line on the right side of the segment of $\partial \mathbf{H}_\tau$ from $\eta(\tau)$ to $X_\tau$ (counterclockwise direction; right side of red path in Figure~\ref{fig::outer_boundary}), and \item\label{it::bd3} the left side of a $\theta_\rho$-angle flow line on the segment from $X_\tau$ to $\mathbf{R}_+$ (counterclockwise direction; left side of blue path in Figure~\ref{fig::outer_boundary}). \end{enumerate} \end{lemma} \begin{proof} Let $\tau$ be any almost surely finite stopping time for $\eta$ such that $\eta(\tau)$ is contained in the interior of a $0$-angle boundary segment of a pocket of ${\mathbf L}_{\mathbf{R}_-}(0,\theta_\rho)$. It suffices to show that the conditional law of $h$ given $\eta|_{[0,\tau]}$ is as described in the statement of the lemma for stopping times $\tau$ of this form. Indeed, we know that stopping times of this form are dense in $[0,\infty)$ by the proof of Lemma~\ref{lem::continuous} and, by Proposition~\ref{prop::ordering_local}, we know that $\eta([0,\sigma])$ is a local set for $h$ for every $\eta$-stopping time $\sigma$, so we can use the continuity result for local sets proved in \cite[Proposition~6.5]{MS_IMAG}. The statement regarding the conditional law of $h$ restricted to the components which are surrounded by $\eta([0,\tau])$ follows from \cite[Proposition~3.8]{MS_IMAG} by comparing to ${\mathbf L}_{\mathbf{R}_-}(0,\theta_\rho)$.
We are now going to describe the boundary behavior for $h$ on $\partial \mathbf{H}_\tau$ using \cite[Proposition~3.8]{MS_IMAG} and a construction involving $\eta'$ and some auxiliary paths. See Figure~\ref{fig::outer_boundary} for an illustration of the setup of the proof. Let $\tau'$ be the first time that $\eta'$ hits $X_\tau$. It follows from the way that we constructed the ordering of ${\mathbf L}_{\mathbf{R}_-}(0,\theta_\rho)$ that the left boundary of $\eta'([0,\tau'])$ is contained in $\eta([0,\tau])$ and is in fact equal to the segment of $\partial \mathbf{H}_\tau$ which connects $X_\tau$ to $\mathbf{R}_+$ in the counterclockwise direction (left side of blue path in Figure~\ref{fig::outer_boundary}). Suppose that $t \in \mathbf{Q}_+$. On the event $\{t < \tau'\}$, we can use \cite[Proposition~3.8]{MS_IMAG} to get that the boundary behavior of $h$ given $\eta|_{[0,\tau]}$ on the segment of $\partial \mathbf{H}_\tau$ which is to the right of $X_\tau$ and contained in $\eta'([0,t])$ is as claimed in \eqref{it::bd3}. This proves the boundary behavior claimed in \eqref{it::bd3} by continuity, since this holds for all $t \in \mathbf{Q}_+$ simultaneously almost surely. For each $s \in \mathbf{Q}_+$, we let $A_s = \eta'([0,s]) \cup \eta_s$ where $\eta_s$ is the $0$-angle flow line of the conditional GFF $h$ given $\eta'|_{[0,s]}$ starting from the leftmost point of $\eta'([0,s]) \cap \mathbf{R}$. Note that $\eta_s$ reflects off the right boundary of $\eta'([0,s])$. We are now going to establish the boundary behavior claimed in \eqref{it::bd1} by showing that there almost surely exists $s \in \mathbf{Q}_+$ such that the segment of $\partial \mathbf{H}_\tau$ which is to the left of $\eta(\tau)$ is contained in $\eta_s$. This will also give \eqref{it::bd2}. Indeed, this suffices since we can use \cite[Proposition~3.8]{MS_IMAG} to compare the boundary behavior of $h$ given $\eta|_{[0,\tau]}$ to that of $h$ given $A_s$.
We are now going to show that $\eta_s$ is equal to the closure $C_s$ of the $0$-angle boundaries of the pockets of ${\mathbf L}_{\mathbf{R}_-}(0,\theta_\rho)$ which intersect the right boundary $R_s'$ of $\eta'([0,s])$ (dark green path in Figure~\ref{fig::outer_boundary}). We will first show that $\eta_s$ is (non-strictly) to the left of~$C_s$. Fix a countable, dense set~$D$ in~$R_s'$. If $z \in D$ then \cite[Theorem~1.5]{MS_IMAG} implies that~$\eta_s$ is to the left of the $0$-angle flow line of $h$ given $\eta'|_{[0,s]}$ starting from~$z$. Since~$D$ is countable, this holds for all $z \in D$ simultaneously almost surely. Moreover, it is easy to see that $\side{1}{P}$ for a pocket $P$ of ${\mathbf L}_{\mathbf{R}_-}(0,\theta_\rho)$ which intersects $R_s'$ can be written as a limit of $0$-angle flow lines starting from points in $D$ by taking starting points contained in $P \cap R_s'$ which get progressively closer to $\open{P}$. Indeed, this follows since such a flow line will merge with $\side{1}{P}$ upon intersecting it by \cite[Theorem~1.5]{MS_IMAG}. This proves that $\eta_s$ is (non-strictly) to the left of $C_s$. We will next argue that $\eta_s$ is (non-strictly) to the right of (and hence equal to) $C_s$. Indeed, the reason for this is that the flow line interaction rules imply that an angle-varying flow line with angles contained in $[0,\theta_\rho]$ cannot enter a pocket formed by $\eta_s$ and $\eta'([0,s])$. This proves the assertion and hence the claim that $\eta_s = C_s$. Take $s \in \mathbf{Q}_+$ with $s > \tau$ such that $\eta'([0,s])$ has not hit the closing point of the pocket of ${\mathbf L}_{\mathbf{R}_-}(0,\theta_\rho)$ whose opening point is given by $X_\tau$. Note that $\eta$ visits a pocket $P$ of ${\mathbf L}_{\mathbf{R}_-}(0,\theta_\rho)$ before time $\tau$ if and only if $\eta'$ visits the interior of $P$ before time $\tau'$.
Consequently, it is easy to see that the boundary segments referred to in \eqref{it::bd1} and \eqref{it::bd2} are contained in $\eta_s$. This proves the desired result by invoking \cite[Proposition~3.8]{MS_IMAG}. \end{proof} Now that we have determined the boundary behavior for the conditional law of $h$ given $\eta|_{[0,\tau]}$ up to any stopping time $\tau$, we can give the law of $\eta$. \begin{lemma} \label{lem::law_of_path} The law of $\eta$ is given by that of an ${\rm SLE}_\kappa(\rho)$ process in $\mathbf{H}$ from $0$ to $\infty$ where \begin{equation} \label{eqn::rho_theta_relation} \rho = \overline{\theta}_\rho \left( \frac{\kappa}{2}-2\right) - 2 \quad\text{and}\quad \overline{\theta}_\rho = \frac{\theta_\rho}{\pi}. \end{equation} \end{lemma} \begin{proof} The martingale characterization of the ${\rm SLE}_\kappa(\rho)$ processes given in \cite[Theorem~2.4]{MS_IMAG} combined with Lemma~\ref{lem::lightcone_conditional} implies that~$\eta$ evolves as an ${\rm SLE}_\kappa(\rho)$ process with the value of~$\rho$ determined by~$\theta_\rho$ as given in \eqref{eqn::rho_theta_relation} in those time intervals in which~$\eta$ is not intersecting the past of its range, i.e., those times~$t$ such that $\eta(t) \notin \eta([0,t))$. For each $t$, let $Z_t = g_t(X_t)$. This implies that $Z - W$ evolves as $\sqrt{\kappa}$ times a Bessel process of dimension $d(\kappa,\rho) = 1+\tfrac{2(\rho+2)}{\kappa}$ during these times. By Lemma~\ref{lem::continuous_loewner}, we know that $\eta$ has a continuous Loewner driving function, from which it follows that $Z_t-W_t$ is instantaneously reflecting at $0$. Therefore $Z - W$ evolves as $\sqrt{\kappa}$ times a Bessel process of dimension $d(\kappa,\rho)$ for all $t \geq 0$. The result then follows by applying Proposition~\ref{prop::bessel_pv}.
\end{proof} \begin{proof}[Proof of Theorem~\ref{thm::continuous} and Theorem~\ref{thm::coupling}] By Lemma~\ref{lem::law_of_path}, we know that $\eta$ is an ${\rm SLE}_\kappa(\rho)$ process with the desired value of $\rho$ and by Lemma~\ref{lem::lightcone_conditional} we know that $\eta$ is coupled with and almost surely determined by the field as described in Theorem~\ref{thm::coupling}. \end{proof} Now that we have proved Theorem~\ref{thm::continuous} and Theorem~\ref{thm::coupling}, it remains to prove Theorem~\ref{thm::interpolation}. The result does not immediately follow from Proposition~\ref{prop::interpolation} because that result describes what happens to the light cone path when we change the angles of the light cone but leave the GFF fixed. In the present setting, we are changing the angles of the light cone \emph{and} the boundary data of the GFF. \begin{figure} \begin{center} \includegraphics[scale=0.85]{figures/continuous_interpolation} \end{center} \caption{\label{fig::continuous_interpolation} Illustration of the idea of the proof of Theorem~\ref{thm::interpolation}. Suppose that $h$ is a GFF on $\mathbf{H}$ with the illustrated boundary data. Then $h$ is compatible with a coupling with an ${\rm SLE}_\kappa(\rho)$ process $\eta$ starting from $0$. Fix $\theta > 0$ and let $\eta_\theta$ be the flow line of $h$ starting from $0$ with angle $\theta$ and let $\mathbf{H}_\theta$ be the component of $\mathbf{H} \setminus \eta_\theta$ which is to the left of $\eta_\theta$. With $\varphi \colon \mathbf{H}_\theta \to \mathbf{H}$ a conformal transformation which fixes $0$ and $\infty$, $h \circ \varphi^{-1} - \chi \arg(\varphi^{-1})'$ is a GFF on $\mathbf{H}$ with the boundary data shown on the right.
Since the law of~$\eta_\theta$ is continuous in~$\theta$ and the light cone exploration path is continuous in its angles, we get the desired interpolation result for ${\rm SLE}_\kappa(\rho)$.} \end{figure} \begin{proof}[Proof of Theorem~\ref{thm::interpolation}] We are going to extract the result in two steps by first applying Proposition~\ref{prop::interpolation} and then using a conditioning argument. (This is similar in spirit to our proof of the continuity of the ${\rm SLE}_\kappa(\rho)$ processes for $\rho > -2$ given in \cite{MS_IMAG}.) Let $\Psi \colon \mathbf{H} \to \mathbf{D}$ be a conformal transformation with $\Psi(0) = -i$ and $\Psi(\infty) = i$. Fix $\rho \in [\tfrac{\kappa}{2}-4,-2)$ with $\rho > -2-\tfrac{\kappa}{2}$ and suppose that $h$ is a GFF on $\mathbf{H}$ with boundary conditions which are given by $-\lambda$ on $\mathbf{R}_-$ and $\lambda(1+\rho)$ on $\mathbf{R}_+$. Then~$h$ is compatible with a coupling with an ${\rm SLE}_\kappa(\rho)$ process~$\eta$ from~$0$ to~$\infty$ as in Theorem~\ref{thm::coupling}. Moreover, $\eta$ is equal to the light cone exploration path associated with ${\mathbf L}_{\mathbf{R}_-}(0,\theta_\rho)$ where $\theta_\rho = \tfrac{\pi(\rho+2)}{\kappa/2-2}$. For each $\theta \geq \theta_\rho$, let~$\eta^\theta$ be the light cone path associated with ${\mathbf L}_{\mathbf{R}_-}(0,\theta)$. By Proposition~\ref{prop::interpolation}, we know that $\Psi(\eta^\theta) \to \Psi(\eta)$ uniformly (modulo reparameterization) as $\theta \downarrow \theta_\rho$. For each $\theta \geq \theta_\rho$, we let~$\eta_\theta$ be the flow line of~$h$ with angle~$\theta$ starting from~$0$. (For $\theta=\theta_\rho$, we take $\eta_\theta$ to be equal to $\mathbf{R}_+$.) Then we know that $\eta_\theta \to \eta_{\theta_\rho}$ locally uniformly as $\theta \downarrow \theta_\rho$ almost surely.
Let $\varphi_\theta$ be the conformal transformation which takes the component $\mathbf{H}_\theta$ of $\mathbf{H} \setminus \eta_\theta$ which is to the left of $\eta_\theta$ to $\mathbf{H}$ fixing $0$, $-1$, and $\infty$. Then $\Psi \circ \varphi_\theta^{-1} \circ \Psi^{-1}$ converges locally uniformly to the identity on $\mathbf{D}$ almost surely as $\theta \downarrow \theta_\rho$. Note that the boundary conditions for the GFF $h_\theta = h \circ \varphi_\theta^{-1} - \chi \arg (\varphi_{\theta}^{-1})'$ are given by $-\lambda$ on $\mathbf{R}_-$ and by $-\lambda-\theta \chi$ on $\mathbf{R}_+$. Since $\varphi_\theta(\eta^\theta)$ is the light cone path associated with the light cone with angle range $[0,\theta]$ of $h_\theta$, we know that $\varphi_\theta(\eta^\theta)$ is an ${\rm SLE}_\kappa(\rho_\theta)$ process where $\rho_\theta = \tfrac{\theta}{\pi}(\tfrac{\kappa}{2}-2)-2$. The desired result follows since combining everything implies that $\Psi(\varphi_\theta(\eta^\theta)) \to \Psi(\eta)$ almost surely as $\theta \downarrow \theta_\rho$. The continuity when $\theta \uparrow \theta_\rho$ is proved similarly. \end{proof} \section{Behavior at the boundary of the light cone regime} \label{sec::limiting_cases} \begin{figure}[ht!] \begin{center} \includegraphics[scale=0.85]{figures/lightcone_ordering} \vspace{-0.02\textheight} \end{center} \caption{\label{fig::pocket_order} Illustration of the order in which an ${\rm SLE}_\kappa(\tfrac{\kappa}{2}-4)$ process $\eta$ visits the points in its range. Shown is a pocket $\pocket{z}$ of $\eta$ with opening point $x$ and a clockwise orientation. Note that $\partial \pocket{z}$ is given by a $0$-angle flow line loop starting from $x$. The blue path indicates $\eta$ up until hitting $x$. Upon hitting $x$, $\eta$ immediately traces $\partial \pocket{z}$ in the clockwise direction. The green path indicates the range of $\eta$ after it finishes drawing $\partial \pocket{z}$. 
This part of the path will crawl along $\partial \pocket{z}$ in the counterclockwise direction. In contrast, the ${\rm SLE}_{\kappa'}(\tfrac{\kappa'}{2}-4)$ counterflow line $\eta'$ whose range is equal to that of $\eta$ (see Proposition~\ref{prop::lightcone_counterflow}) will draw $\partial \pocket{z}$ in the \emph{opposite} (counterclockwise) direction and, while doing so, visits the pockets in its range which intersect $\partial \pocket{z}$.} \end{figure} \begin{figure}[ht!] \begin{center} \includegraphics[scale=0.85]{figures/lightcone_ordering2} \vspace{-0.02\textheight} \end{center} \caption{\label{fig::pocket_order2} (Continuation of Figure~\ref{fig::pocket_order}.) Shown is a pocket $\pocket{z}$ of $\eta$ with opening point $x$ and a counterclockwise orientation. Note that $\partial \pocket{z}$ is given by a $\pi$-angle flow line loop starting from $x$. The blue path segment indicates the part of $\eta$ up until it hits $x$ and the green path segment indicates part of $\eta$ as it draws $\partial \pocket{z}$. In contrast to the case of a clockwise loop, as considered in Figure~\ref{fig::pocket_order}, $\eta$ visits the points on $\partial \pocket{z}$ in the same order as $\eta'$. Moreover, as it does so, it draws the boundaries of the pockets which intersect $\partial \pocket{z}$.} \end{figure} We are now going to describe the behavior of ${\rm SLE}_\kappa(\rho)$ at the threshold $\rho = \tfrac{\kappa}{2}-4$ which lies between the light cone and trunk regimes. When $\rho = \tfrac{\kappa}{2}-4$, the opening angle for the light cone is equal to $\pi$. Note that $\pi < \theta_c$ if and only if $\kappa \in (2,4)$. As we mentioned earlier, this is closely connected with the fact that an ${\rm SLE}_{\kappa'}$ process is space-filling if and only if $\kappa' \geq 8$. In analogy with \cite[Theorem~1.4]{MS_IMAG}, in this case, the range of the path is equal to that of a form of an ${\rm SLE}_{\kappa'}$ process as stated in the following proposition.
\begin{proposition} \label{prop::lightcone_counterflow} Suppose that $\kappa \in (2,4)$ (so that $\pi < \theta_c$) and let $\eta$ be an ${\rm SLE}_\kappa(\tfrac{\kappa}{2}-4)$ process in $\mathbf{H}$ from $0$ to $\infty$ with a single force point located at $0^+$. Then the range of $\eta$ is equal in law to that of an ${\rm SLE}_{\kappa'}(\tfrac{\kappa'}{2}-4)$ process $\eta'$ in $\mathbf{H}$ from $0$ to $\infty$ where the force point is located at $0^-$. \end{proposition} \begin{remark} \label{rem::hits_points_differently} We emphasize that the statement of Proposition~\ref{prop::lightcone_counterflow} is that the law of the \emph{range} of~$\eta$ is equal to the law of the \emph{range} of~$\eta'$. As explained in Figure~\ref{fig::pocket_order} and Figure~\ref{fig::pocket_order2}, the order in which the paths visit the points in their common range is different. \end{remark} \begin{proof}[Proof of Proposition~\ref{prop::lightcone_counterflow}] Suppose that~$h$ is a GFF on $\mathbf{H}$ with boundary data given by $-\lambda$ (resp.\ $-\lambda-\pi\chi$) on~$\mathbf{R}_-$ (resp.\ $\mathbf{R}_+$) and let $\eta$ be the ${\rm SLE}_\kappa(\tfrac{\kappa}{2}-4)$ process coupled with $h$ as the light cone path from $0$ to $\infty$ as in Theorem~\ref{thm::coupling}. Note that \[ -\lambda = -\lambda'-\frac{\pi \chi}{2} \quad\text{and}\quad -\lambda-\pi\chi = -\lambda' - \frac{3\pi \chi}{2}.\] Let $\eta'$ be the counterflow line of $h+3\pi \chi/2$ starting from~$0$. Then~$\eta'$ is an ${\rm SLE}_{\kappa'}(\tfrac{\kappa'}{2}-4)$ process where the force point is located at~$0^-$. By \cite[Theorem~1.13]{MS_IMAG4}, we note that the left boundary of~$\eta'$ stopped upon hitting a point $z \in \mathbf{H}$ is equal to the flow line of~$h$ starting from $z$ with angle~$\pi$. Consequently, it follows that the range of~$\eta'$ is equal to the range of~$\eta$. \end{proof} We finish by recording two immediate consequences of Proposition~\ref{prop::lightcone_counterflow}. 
\begin{corollary} \label{cor::boundary_filling} Suppose that $\kappa \in (2,4)$ and let~$\eta$ be an ${\rm SLE}_\kappa(\tfrac{\kappa}{2}-4)$ process in~$\mathbf{H}$ from~$0$ to~$\infty$ with a single boundary force point located at~$0^+$. Then~$\mathbf{R}_-$ is almost surely contained in the range of~$\eta$. \end{corollary} \begin{proof} This follows from Proposition~\ref{prop::lightcone_counterflow} and the fact that $\tfrac{\kappa'}{2}-4$ is the critical value of~$\rho$ at or below which a counterflow line is boundary filling. In particular, with~$\eta'$ as in the statement of Proposition~\ref{prop::lightcone_counterflow}, we have that $\mathbf{R}_-$ is contained in the range of~$\eta'$. \end{proof} \begin{corollary} \label{cor::critical_pocket_structure} Suppose that $\kappa \in (2,4)$ and let $\eta$ be an ${\rm SLE}_\kappa(\tfrac{\kappa}{2}-4)$ process in~$\mathbf{H}$ from~$0$ to $\infty$ with a single boundary force point located at $0^+$ coupled with a GFF~$h$ on~$\mathbf{H}$ with boundary data equal to $-\lambda$ (resp.\ $-\lambda-\pi \chi$) on $\mathbf{R}_-$ (resp.\ $\mathbf{R}_+$). If~$\eta$ separates~$z$ from $\partial \mathbf{H}$, then $\partial \pocket{z}$ is equal to the flow line of $h$ with angle~$0$ (resp.\ $\pi$) starting from $\open{z}$ if~$\eta$ traverses $\partial \pocket{z}$ with a clockwise (resp.\ counterclockwise) orientation. In particular, the boundaries of the pockets of~$\eta$ have only one side. \end{corollary} \begin{proof} This follows from Proposition~\ref{prop::lightcone_counterflow} since the same is true for the counterflow line~$\eta'$ (see, e.g.\ \cite[Theorem~1.13]{MS_IMAG4}). \end{proof} \bibliographystyle{hmralphaabbrv}
\section{The microscopic model} To validate the random matrix theory predictions, we perform numerically exact real-space simulations of the magnetoconductance of the chaotic system. For simplicity, we consider a graphene nanostructure proximity coupled to a high-SOC semiconductor; see Fig.\,\ref{Imagem1}(a). The Hamiltonian of the quantum dot can be expressed as $H=H_{g}+H_{\text{SO}}$, where $H_{g}$ describes the usual nearest-neighbor hopping between $p_z$-orbitals and $H_{\text{SO}}=H_{\text{sym}}+H_{\text{asy}}$ captures the proximity-induced SOC [\onlinecite{PhysRevB.93.155104,PhysRevB.97.085413,PhysRevB.98.045407}]. (We neglect the intrinsic Kane-Mele SOC of graphene [\onlinecite{PhysRevLett.122.046403}], which is too weak to cause any significant perturbation to the quantum dot.) In terms of annihilation (creation) operators $c_{i,\sigma}$ ($c_{i,\sigma}^\dagger$) that remove (add) electrons to site $i$ with spin $\sigma = \uparrow,\downarrow$, the terms $H_g$, $H_{\text{sym}}$ and $H_{\text{asy}}$ read as \begin{eqnarray} H_g&=&-\sum_{\langle i,j \rangle,\sigma} t^{ij} \, c_{i,\sigma}^{\dagger} c_{j,\sigma}\,, \\ H_{\text{sym}}&=& - \sum_{\langle\langle i,j \rangle\rangle, \sigma} \frac{\imath \lambda_{\text{sym}}^{ij}}{3\sqrt{3}} \, c_{i,\sigma}^{\dagger} \left[s_z \right]_{\sigma \sigma} c_{j,\sigma}\,, \label{TBH-sym} \\ H_{\text{asy}}&=& - \sum_{\langle i,j \rangle, \sigma, \sigma^\prime } \frac{2\imath \lambda_{\text{asy}}^{ij}}{3} \, c_{i,\sigma}^{\dagger}\left(\left[\mathbf{s}\right]_{\sigma \sigma^\prime}\times \hat{\mathbf{r}}_{ij}\right)_z c_{j,\sigma^\prime}\,, \label{TBHC} \end{eqnarray} where the indices $i$ and $j$ run over all lattice sites, $\langle \cdots \rangle$ ($\langle\langle \cdots \rangle\rangle$) denotes a sum over nearest-neighbor (next-nearest-neighbor) sites, $\hat{\mathbf{r}}_{ij}$ is the unit vector along the line segment connecting the sites $i$ and $j$, and $t^{ij}=t e^{\imath \phi_{ij}}$,
$\lambda_{\text{asy}}^{ij}=\lambda_{\text{BR}} e^{\imath \phi_{ij}}$ and $\lambda_{\text{sym}}^{ij}=\lambda_{\text{sv}} \delta_i \nu_{ij} e^{\imath \phi_{ij}}$ are Peierls' substitution modified hopping integrals with phases $\phi_{ij}=(e/\hbar) \int_{\mathbf{r}_i}^{\mathbf{r}_j} \mathbf{A} \cdot d \mathbf{r}$. Here, $\lambda_{\text{sv}(\text{BR})}$ is the spin-valley (BR) coupling strength and $\mathbf{A} = - B_{\perp} x \, \hat{\mathbf{y}}$ is the magnetic vector potential in the Landau gauge. Furthermore, the $\nu_{ij}$ are signs that equal $+1$ ($-1$) if the electron hops clockwise (anticlockwise) to a next-nearest site within a given hexagonal plaquette and $\delta_{i}=\pm 1$ distinguishes between the sublattices $A$ ($B$) [\onlinecite{PhysRevB.95.165415,PhysRevLett.120.156402}]. The billiard is constructed by cutting a half-stadium connected to two identical leads out of a graphene sheet [\onlinecite{PhysRevLett.102.056806,PhysRevB.84.205421}]. To break the left-right symmetry, we cut out circular segments at the top left and bottom right in a way that the graphene lattice is terminated abruptly (see Fig.~\ref{Imagem1}(a)). Before attempting to confirm the predicted statistical behavior of the conductance, we verify that the simulated dots support the universal quantum transport regime, $\tau_d \gg \tau_{\text{erg}}$. To estimate the dwell time, we determine the spectrum of closed cavities with an area $\mathcal A \approx 1.2 \times 10^3$ nm$^2$ (containing around $10^5$ energy levels). The calculated mean level spacing ($\Delta$) ranges from $0.2$ to $0.4$ meV, depending on the specific SOC parameter values; for additional details see the SM [\onlinecite{SM}]. This translates into dwell times ($\tau_d \approx \pi\hbar/N\Delta$) on the order of $\approx 10/N$ ps.
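This back-of-the-envelope dwell-time estimate can be reproduced directly (a sketch, not part of the paper; taking $\Delta = 0.3$ meV, the midpoint of the quoted range, is our choice):

```python
import math

HBAR_EV_S = 6.582119569e-16    # reduced Planck constant in eV*s
DELTA_EV = 0.3e-3              # mean level spacing: midpoint of the quoted 0.2-0.4 meV

def dwell_time_s(n_channels, delta_ev=DELTA_EV):
    """Dwell time tau_d ~ pi*hbar/(N*Delta) of the open cavity, in seconds."""
    return math.pi * HBAR_EV_S / (n_channels * delta_ev)

# For N = 1 this gives roughly 7 ps, i.e. "on the order of 10/N ps" as stated.
tau_d_1 = dwell_time_s(1)
```

The estimate scales as $1/N$, so even for $N = 10$ open channels the dwell time remains well above the sub-picosecond transit time discussed next.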
Meanwhile, the electron transit time in the graphene dot is simply $\tau_{\text{erg}}\approx \sqrt{\mathcal A}/v$, where $v=3 a t/2\hbar \approx 10^6$ m/s (assuming a typical hopping integral $t=2.8$ eV and a lattice constant $a$ of 0.25 nm). As a result, $\tau_d/\tau_{\text{erg}} \gg 1$ is always satisfied for typical ballistic point contacts with a small number of open channels $N\approx 1$--$10$. For our numerical study, we use the recursive Green's function formalism [\onlinecite{Lewenkopf2013,PhysRevB.98.155407,PhysRevB.102.041107}] as implemented in the Kwant code [\onlinecite{Groth_2014}]. From the earlier theoretical analysis, the spin-orbit effects are expected to influence the statistical behavior of the conductance whenever the spin-orbit scattering time, $\tau_{\text{so}}= (\tau_{\text{asy}}^{-1}+\tau_{\text{sym}}^{-1})^{-1}$, is short compared with the cavity dwell time. The conductance fluctuation $\delta g$ is shown in Fig.~\ref{Imagem1}(b,c) for selected parameters. In the absence of SOC, the fluctuations are consistent with the circular orthogonal ensemble ($\delta g \simeq 0.35$) at zero field and with the circular unitary ensemble ($\delta g \simeq 0.25$) for a magnetic flux $\Phi$ on the order of the quantum of flux [\onlinecite{PhysRevLett.102.056806}]. After the proximity-induced SOC is turned on, the statistical properties of the quantum dot are seen to critically depend on the relative magnitude of the spin-orbit effects, in complete accord with our prediction [Eqs.~(\ref{wl})-(\ref{var})]. A transition to the circular symplectic ensemble ($\delta g \simeq 0.176$) is observed for sufficiently strong BR coupling in the low-field regime ($\tau_d \gg \tau_{\mathcal B}$). This behavior is robust against the presence of symmetric-type SOC as long as the BR effect remains a strong perturbation ($\tau_{\text{asy}} \ll \tau_d$).
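The quoted plateau values can be checked against the standard large-channel RMT expression for the rms conductance fluctuation of a two-terminal chaotic cavity, $\delta g = (8\beta)^{-1/2}$ with Dyson index $\beta$ (this closed form is our addition for illustration; the text only quotes the numbers):

```python
def ucf_rms(beta):
    """rms universal conductance fluctuation of a two-terminal chaotic cavity
    with many open channels: delta_g = (8*beta)**(-1/2), beta the Dyson index."""
    return 1.0 / (8 * beta) ** 0.5

# beta = 1, 2, 4 reproduce the quoted 0.35 (COE), 0.25 (CUE), 0.176 (CSE) values.
plateaus = {beta: ucf_rms(beta) for beta in (1, 2, 4)}
```

Note that the further suppressed value $\delta g \simeq 0.125$ discussed below equals $(8 \cdot 8)^{-1/2}$, i.e. one extra twofold reduction beyond the unitary class; reading it as an effective doubling of the symmetry index is our interpretation, not the paper's.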
Moreover, in quantum dots with weak BR effect and strong spin-valley coupling, time reversal symmetry is effectively broken and, as a result, the conductance fluctuation approaches the circular unitary ensemble prediction at zero field ($\delta g \simeq 0.250$). Interestingly, the combined effect of a strong spin-valley coupling and a high magnetic field reduces the conductance fluctuation to $\delta g \simeq 0.125$, which is lower than the UCF value in any of the Wigner-Dyson ensembles. The orthogonal-to-unitary/symplectic ensemble transitions facilitated by SOC and the associated statistical properties are summarized in Table~\ref{tabela}. It is important to note that due to the small quantum dot size in the simulations, the ensemble transitions are observed at rather large SOC values ($\approx$ 0.1 eV). On the other hand, the experimentally achievable proximity-induced SOC energy scales are more modest (in graphene on a group-VI dichalcogenide monolayer, these range from 0.1 to 10 meV depending on the high-SOC material used and the quality of the interface [\onlinecite{10.1038/ncomms9339, PhysRevB.93.155104,PhysRevB.95.165415,PhysRevB.98.045407}]). Hence, the experimental validation of our findings would require dots of substantially larger dimensions to ensure $\tau_{\text{so}} \ll \tau_{d}$. \begin{figure}[!] \centering \includegraphics[width=0.8\linewidth]{DT.eps} \caption{(a-c) Average magnetoconductance as a function of the magnetic flux obtained numerically for selected SOC parameters. Solid lines are fits to Eq.~(\ref{wl}) [\onlinecite{SM}]. Data points are calculated using 20 chaotic samples averaged over the Fermi energy window $[0.45,1.50]$ eV. Here $\Phi_0=h/e$ is the quantum of flux.} \label{Imagem3} \end{figure} Now we turn to the magnetotransport fingerprints of proximity-induced SOC. In Fig.~\ref{Imagem3}, we show the calculated magnetoconductance $\Delta G (B_{\perp}) = \langle G (B_{\perp}) - G (0) \rangle$.
The negative signal (WAL) observed in all simulated quantum dots with sizable BR coupling provides a clear signature of the orthogonal-to-symplectic ensemble transition. We note that this is a direct numerical evaluation of conductance WAL corrections in a chaotic billiard. In accord with the theory [cf.\ Eq.~(\ref{wl})], the BR-coupling-induced negative quantum correction is seen to be robust to the presence of symmetric SOC; see Figs.~\ref{Imagem3}(b)-(c). The special role played by the spin-valley coupling in quantum dots with weak or vanishing BR effect ($\tau_{\text{asy}}\gg\tau_d$) is also borne out by the simulations. Indeed, the simulation with $\lambda_{\text{BR}}=0$ and $\lambda_{\text{sv}}=0.6$ eV shows a clear suppression of quantum interference effects due to spin-valley coupling since, as discussed earlier, the latter acts on the ballistic electrons as a valley-Zeeman field. The underlying spin-orbit scattering times can be estimated by fitting the numerical data to Eq.~(\ref{wl}). We find $\tau_{\text{so}}$ to be on the order of 1 ps, which puts the simulated devices within the universal regime where the theory is expected to be accurate; see the SM for additional details [\onlinecite{SM}]. We note in passing that in the absence of SOC, the magnetoconductance can be accurately fitted to the well-known expression $\Delta G / (4e^2/h)= \mathcal{G}(1+\frac{\tau_{\mathcal{B}}}{2\tau_d})^{-1}$ [\onlinecite{PhysRevLett.70.3876}] with $\mathcal{G}=0.23$, in excellent agreement with the random matrix theory prediction ($\mathcal{G}_{\text{RMT}}=0.22$). To put our predictions into context, we first note that diffusive WAL behavior in (non-chaotic) graphene devices with interface-induced SOC is now well established [\onlinecite{PhysRevX.6.041020,Yang_2016,Yang2017,Volkl2017,Zihlmann2018,Wakamura2018,PhysRevLett.108.166606, PhysRevB.99.205407}].
Transition metal dichalcogenides represent a broad family of high-SOC layered materials, which can be used to fabricate the envisaged chaotic Dirac-Rashba billiard characterized by competing spin-orbit effects with different symmetries. According to our findings, chaotic billiards built from graphene-based heterostructures can display robust signatures of WAL in the universal regime of quantum transport provided that the asymmetric spin-orbit scattering time is shorter than the dwell time of the cavity. We expect that such a condition can be achieved by fabricating mesoscopic quantum dots with linear size approaching the typical (bulk) mean free paths. Electronic transport measurements on submicrometer graphene quantum dots have been recently reported [\onlinecite{GQD-exp1,GQD-exp2,GQD-exp3,GQD-review}], which gives us extra confidence that the predictions in this paper can be put to the test in the near future. In summary, we have used random matrix theory to investigate the statistical behavior of the average conductance and its universal fluctuations in chaotic graphene-based billiards with proximity-induced SOC. Our study, supported by real-space quantum transport calculations, shows that the proximity-induced SOC strongly influences the device conductance in zero and finite applied magnetic fields. Quantum dots with a sizable BR effect (i.e. with asymmetric spin-orbit scattering time shorter than the cavity dwell time) were found to display robust WAL signals with fluctuations consistent with the circular-symplectic ensemble. \begin{acknowledgments} A.L.R.B. and J.G.G.S.R. were supported by CNPq (Grant No. 307474/2018-6) and FACEPE (Grant No. APQ-0325-1.05/18). A.F. gratefully acknowledges the financial support from the Royal Society through a Royal Society University Research Fellowship. 
\end{acknowledgments} \section{Stub Model} The stub model has been applied to quantum dots of GaAs [\onlinecite{PhysRevB.68.125329,PhysRevB.65.081302,PhysRevB.84.035453}] and pristine graphene [\onlinecite{PhysRevB.93.125136,PhysRevB.99.195131}]. Here, we extend the formulation to graphene with pseudo-spin--spin coupling due to SOC. In the stub model, the scattering matrix of an isolated quantum dot with $M$ energy levels is parameterized as follows [\onlinecite{RevModPhys.69.731}] \begin{eqnarray} \mathcal{S} = PU(1 - Q^{\dagger}RQU)^{-1}P^{\dagger}, \end{eqnarray} where $U$ is a $4M \times 4M$ random unitary symmetric matrix taken from the circular orthogonal ensemble. The matrix $R$ is a unitary matrix with dimension $4(M-N)$, which is used to introduce external perturbations, such as a magnetic field and SOC, via the ``stub'' [\onlinecite{RevModPhys.69.731}]; see Fig.~\ref{Imagem1}(a). The matrices $P$ and $Q$ are $4N \times 4M$ and $(4M-4N) \times 4M$ projection matrices defined as $P_{ij} = \delta_{i,j}$ and $Q_{ij} = \delta_{i+4N,j}$ [\onlinecite{PhysRevB.65.081302}]. These matrices connect the leads ($P$) and the stub ($Q$) to the quantum dot. Hence, the scattering matrix $\mathcal{S}$ has dimension $4N \times 4N$.
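As a consistency check, the stub parameterization can be verified numerically: for any unitary $U$ and $R$, the resulting $\mathcal{S}$ is a $4N \times 4N$ unitary matrix. A minimal sketch (toy dimensions; here $R$ is a generic random unitary stand-in rather than the physical stub matrix):

```python
import numpy as np

rng = np.random.default_rng(1)

def haar_unitary(n, rng):
    # Haar-distributed unitary via QR decomposition of a complex Ginibre matrix
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))  # fix column phases

M, N = 12, 3                       # toy sizes: dot levels, open channels
d, n = 4 * M, 4 * N                # full dimensions with spin/valley structure
V = haar_unitary(d, rng)
U = V @ V.T                        # COE draw: unitary and symmetric
R = haar_unitary(d - n, rng)       # generic unitary stub matrix (illustrative)

P = np.eye(n, d)                   # P_ij = delta_{ij}
Q = np.eye(d - n, d, k=n)          # Q_ij = delta_{i+4N, j}
S = P @ U @ np.linalg.inv(np.eye(d) - Q.conj().T @ R @ Q @ U) @ P.conj().T

print(np.allclose(S @ S.conj().T, np.eye(n)))  # True: S is unitary
```

The same construction, with $R$ replaced by its physical parameterization in terms of the perturbed dot Hamiltonian, is what underlies the sampling of chaotic-cavity realizations.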
The matrix $R$ is given by [\onlinecite{PhysRevB.65.081302}] \begin{eqnarray} R(\varepsilon,\vec \mathcal{B}) = \exp\left[\frac{2 \pi \imath}{M\Delta} \left(\varepsilon - H'(\vec \mathcal{B})\right)\right],\label{R} \end{eqnarray} where $\Delta$ is the mean level spacing of the quantum dot and $H'$ is a $(4M-4N)$-dimensional matrix encoding the perturbations to the bare Hamiltonian [\onlinecite{PhysRevB.93.125136}], which for a proximitized graphene quantum dot with SOC is given by \begin{eqnarray} H'(\vec \mathcal{B})&=& \frac{\sqrt{N}\Delta}{2\pi}\left[\frac{\imath}{2} \sqrt{\frac{\tau_d}{\tau_{\mathcal{B}}}} \left(A_1 \sigma_x + A_2 \sigma_y\right) \otimes s_0 + \imath \sqrt{\frac{\tau_d}{\tau_{\text{sym,I}}}} I_1 \sigma_z \otimes s_z \right.\nonumber\\&+& \left. \imath \sqrt{\frac{\tau_d}{\tau_{\text{sym,sv}}}} I_2 \sigma_0 \otimes s_z + \imath \sqrt{\frac{\tau_d}{\tau_{\text{asy}}}} \left(B_1 \sigma_x \otimes s_y +B_2\sigma_y \otimes s_x \right)\right].\label{HRMT} \end{eqnarray} Here, $s_i$ ($\sigma_i$), $i=x,y,z$, are Pauli matrices acting on the spin (pseudospin) space, and $\tau_{\text{sym,sv}}$, $\tau_{\text{sym,I}}$ and $\tau_{\text{asy}}$ are characteristic scattering times associated with spin-valley coupling, Kane-Mele SOC and $z\rightarrow -z$ asymmetric BR interaction, respectively. Moreover, $A_i$, $I_i$ and $B_i$ ($i=1,2$) are real antisymmetric matrices of dimension $(M-N) \times (M-N)$ that satisfy $\langle \textbf{Tr}\left(A_iA_j^T\right) \rangle = \langle \textbf{Tr}\left(I_{i}I_j^T\right) \rangle = \langle \textbf{Tr}\left(B_{i}B_j^T\right) \rangle = \delta_{ij} M^2$. The mean dwell time is given by $\tau_d = 2\pi \hbar/N\Delta$, while the magnetic dephasing rate is defined as $\tau_{\mathcal{B}}^{-1}=\tau_{e}^{-1}\times 2\pi c \left(\Phi/\Phi_0\right)^2$ [\onlinecite{PhysRevB.84.205421}] with $c$ a system-dependent parameter of order unity, and $\Phi$ denotes the magnetic flux (here, $\Phi_0=h/e$ is the quantum of flux). \begin{figure}[!]
\includegraphics[width=0.4\linewidth]{dotSM.png} \includegraphics[width=0.7\linewidth]{T.eps} \caption{ (a) Schematic illustration of the quantum dot (represented by the $U$ matrix) connected to the stub (represented by the $R$ matrix) and two leads. (b-c) Conductance as a function of Fermi energy at selected SOC and magnetic field values. Dashed lines indicate the number of propagating wave channels in the leads, $N_1$. (d-e) Average conductance as a function of the number of open channels. Data points are calculated using 20 chaotic billiard realizations.} \label{Imagem1} \end{figure} The average conductance, Eq.~(2) of the main text, can be obtained using the relation \begin{eqnarray} \left< \mathcal{S}_{ij;\alpha \beta}(\epsilon,\vec \mathcal{B}) \, \mathcal{S}_{i'j';\alpha' \beta'}^*(\epsilon,\vec \mathcal{B})\right> = \delta_{ii'}\delta_{jj'}\mathcal{D}_{\alpha \beta; \beta'\alpha'}+ \delta_{ij'}\delta_{ji'}\left(\mathcal{T}\mathcal{C}\mathcal{T}\right)_{\alpha \beta; \alpha'\beta'}\label{SS} \end{eqnarray} which is valid in the semiclassical limit, $N\gg 1$. The matrix $\mathcal{T}$ is defined as $\mathcal{T} = \sigma_0 \otimes s_0 \otimes \sigma_0 \otimes s_y$, while the matrices $\mathcal{D}$ and $\mathcal{C}$ encode the contributions of the diffuson and Cooperon diagrams, respectively, given by \begin{eqnarray} \mathcal{D}^{-1}&=&M\sigma_0 \otimes s_0 \otimes \sigma_0 \otimes s_0 - \textbf{Tr}\left(R\otimes R^\dagger \right),\nonumber\\ \mathcal{C}^{-1}&=&M\sigma_0 \otimes s_0 \otimes \sigma_0 \otimes s_0 - \textbf{Tr}\left(R\otimes R^\star \right),\label{DC} \end{eqnarray} where $\dagger$ ($\star$) denotes Hermitian (complex) conjugation. The evaluation of the trace operations in Eq.
(\ref{DC}) is carried out using the identity $$\left(\sigma_i \otimes s_j \otimes \sigma_k \otimes s_l\right) \left(\sigma_i' \otimes s_j' \otimes \sigma_k' \otimes s_l'\right)=\left(\sigma_i\sigma_i'\right) \otimes \left(s_j' s_j \right) \otimes \left(\sigma_k' \sigma_k \right) \otimes \left(s_l s_l'\right).$$ From Eqs. (\ref{R}) -(\ref{HRMT}), we obtain, in the limit $M\gg N$, \begin{eqnarray} \mathcal{D}^{-1}&=&N\left[\left(1+\frac{\tau_d}{\tau_{\mathcal{B}}}+\frac{\tau_d}{\tau_{\text{sym,I}}}+\frac{\tau_d}{\tau_{\text{sym,sv}}}+\frac{2\tau_d}{\tau_{\text{asy}}}\right)\sigma_0 \otimes s_0 \otimes \sigma_0 \otimes s_0 \right.\nonumber\\ &-& \left.\frac{1}{2}\frac{\tau_d}{\tau_{\mathcal{B}}} \left(\sigma_x \otimes s_0 \otimes \sigma_x \otimes s_0 +\sigma_y \otimes s_0 \otimes \sigma_y \otimes s_0 \right) - \frac{\tau_d}{\tau_{\text{sym,I}}} \sigma_z \otimes s_z \otimes \sigma_z \otimes s_z \right.\nonumber\\ &-& \left. \frac{\tau_d}{\tau_{\text{sym,sv}}} \sigma_0 \otimes s_z \otimes \sigma_0 \otimes s_z - \frac{\tau_d}{\tau_{\text{asy}}}\left(\sigma_x \otimes s_y \otimes \sigma_x \otimes s_y +\sigma_y \otimes s_x \otimes \sigma_y \otimes s_x \right)\right]\label{Diffuson}\\ \mathcal{C}^{-1}&=&N\left[\left(1+\frac{\tau_d}{\tau_{\mathcal{B}}}+\frac{\tau_d}{\tau_{\text{sym,I}}}+\frac{\tau_d}{\tau_{\text{sym,sv}}}+\frac{2\tau_d}{\tau_{\text{asy}}}\right)\sigma_0 \otimes s_0 \otimes \sigma_0 \otimes s_0 \right.\nonumber\\ &+& \left.\frac{1}{2}\frac{\tau_d}{\tau_{\mathcal{B}}} \left(\sigma_x \otimes s_0 \otimes \sigma_x \otimes s_0 +\sigma_y \otimes s_0 \otimes \sigma_y \otimes s_0 \right) - \frac{\tau_d}{\tau_{\text{sym,I}}} \sigma_z \otimes s_z \otimes \sigma_z \otimes s_z \right.\nonumber\\ &-& \left. \frac{\tau_d}{\tau_{\text{sym,sv}}} \sigma_0 \otimes s_z \otimes \sigma_0 \otimes s_z - \frac{\tau_d}{\tau_{\text{asy}}}\left(\sigma_x \otimes s_y \otimes \sigma_x \otimes s_y +\sigma_y \otimes s_x \otimes \sigma_y \otimes s_x \right)\right]. 
\label{Cooperon} \end{eqnarray} Substituting Eqs.~(\ref{Diffuson})-(\ref{Cooperon}) into Eq.~(\ref{SS}), we obtain \begin{eqnarray} \langle G \rangle = \frac{4e^2}{h}\frac{N_1N_2}{N} + \frac{2e^2}{h} \frac{N_1N_2}{N^2} \left[\frac{1}{1+\Gamma_{\mathcal{B}}}-\frac{1}{1+\Gamma_{\mathcal{B}}+2\Gamma_{\text{asy}}}- \frac{2}{1+\Gamma_{\mathcal{B}}+\Gamma_{\text{asy}}+\Gamma_{\text{sym}}}\right] \label{G} \end{eqnarray} and \begin{eqnarray} \text{var} [G] &=& \frac{4e^4}{h^2} \frac{N_1^2N_2^2}{N^4} \left[1+\frac{1}{\left(1+2\Gamma_{\text{asy}}\right)^2}+ \frac{2}{\left(1+\Gamma_{\text{sym}}+\Gamma_{\text{asy}}\right)^2}\right.\nonumber\\ &+&\left.\frac{1}{\left(1+\Gamma_{\mathcal{B}}\right)^2}+\frac{1}{\left(1+\Gamma_{\mathcal{B}}+2\Gamma_{\text{asy}}\right)^2}+ \frac{2}{\left(1+\Gamma_{\mathcal{B}}+\Gamma_{\text{sym}}+\Gamma_{\text{asy}}\right)^2}\right].\label{varG} \end{eqnarray} Equations (\ref{G}) and (\ref{varG}) are equivalent to Eqs. (4) and (5) in the main text, where $N_1=N_2=N/2$. \section{Numerically exact results} Here, we present additional details on the real-space simulations performed for the ballistic chaotic system, Fig.~\ref{Imagem1}. Figure~\ref{Imagem1} shows the conductance as a function of the Fermi energy ((b)-(c)) and as a function of the number of open channels ((d)-(e)) for selected parameters. From the fit to the data in Figs.~\ref{Imagem1}(d,e) (dashed lines), we found $ \langle G \rangle/(4e^2/h) \approx 0.5 \times N$ for $\Phi/\Phi_0 = 2.14$, and $ \langle G \rangle/(4e^2/h) \approx 0.5 \times N + \langle G_{\text{qc}} \rangle $, in the absence of magnetic flux, with $ G_{\text{qc}}$ signaling weak localization ($ G_{\text{qc}}<0$) and weak antilocalization ($ G_{\text{qc}}>0$) for $\lambda_{\text{R}}=0$ and $\lambda_{\text{R}}=0.3$ eV, respectively. \begin{table*} \centering \begin{tabular}{ccccccccccc} \hline \hline Fig.
(2) \quad\quad & $\lambda_{\text{R}}$ (eV)\quad\quad & $\lambda_{\text{sv}}$ (eV)\quad\quad & $\Delta$ (meV) \quad\quad & $\tau_d$ (ps) \quad\quad &$\tau_{\text{asy}}$ (ps) \quad\quad &$\tau_{\text{sym,sv}}$ (ps) \quad\quad & $\tau_{\text{so}}$ (ps) & Limit & Ensemble\\\hline \hline \hline (a) & 0.0 & 0.0 & 0.43 & 0.69 & $\times$ & $\times$& $\times$&$\tau_{\text{so}}\gg \tau_d$& COE\\\hline (a) & 0.0 & 0.6 & 0.22 & 1.33 & $\times$ & $0.16$& $0.16$ & $\tau_d \gg \tau_{\text{so}}$& CUE\\\hline (a) & 0.3 & 0.0 & 0.21 & 1.44 & $0.08$ & $\times$& $0.08$ & $\tau_d \gg \tau_{\text{so}}$& CSE\\\hline\hline (b) & 0.075 & 0.0 & 0.22 & 1.35 & $1.60$ & $\times$& $1.60$& $\tau_d \approx \tau_{\text{so}}$& \\\hline (b) & 0.075 & 0.6 & 0.22 & 1.34 & $2.18$ & $3.95$& $1.41$& $\tau_d \approx \tau_{\text{so}}$& \\\hline\hline (c) & 0.3 & 0.0 & 0.21 & 1.44 & $0.08$ & $\times$& $0.08$& $\tau_d \gg \tau_{\text{so}}$& CSE\\\hline (c) & 0.3 & 0.6 & 0.20 & 1.44 & $1.26$ & $1.26$& $0.63$& $\tau_d \gg \tau_{\text{so}}$& CSE\\\hline \hline\hline \end{tabular} \caption{BR ($\lambda_{\text{R}}$) and spin-valley ($\lambda_{\text{sv}}$) SOC parameters used in the numerical simulation reported in Fig.~2 of the main text. The calculated mean level spacing $\Delta$ within the Fermi energy window $[-0.5t,0.5t]$ and the relevant time scales estimated from the fit to the magnetoconductance data are also shown. }\label{tabela2} \end{table*}
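The limiting values of Eqs. (\ref{G}) and (\ref{varG}) quoted in the text can be checked directly. The sketch below evaluates the quantum correction to $\langle G \rangle$ (in units of $2e^2/h$) and $\text{var}[G]$ (in units of $(2e^2/h)^2$, an identification we assume is the $\delta g$ convention, since it reproduces both quoted numbers) for symmetric leads, $N_1=N_2=N/2$:

```python
def g_qc(GB, Gasy, Gsym):
    # Quantum correction to <G> in units of 2e^2/h, for N1 = N2 = N/2
    return 0.25 * (1/(1+GB) - 1/(1+GB+2*Gasy) - 2/(1+GB+Gasy+Gsym))

def var_g(GB, Gasy, Gsym):
    # var[G] in units of (2e^2/h)^2, for N1 = N2 = N/2
    return (1/16) * (1 + 1/(1+2*Gasy)**2 + 2/(1+Gsym+Gasy)**2
                     + 1/(1+GB)**2 + 1/(1+GB+2*Gasy)**2
                     + 2/(1+GB+Gsym+Gasy)**2)

big = 1e12  # stands in for Gamma -> infinity
print(g_qc(0, 0, 0))       # -0.5  : weak localization (COE limit)
print(g_qc(0, big, 0))     #  0.25 : weak antilocalization (CSE limit)
print(var_g(0, 0, 0))      #  0.5   (COE)
print(var_g(0, 0, big))    #  0.25  (strong spin-valley, zero field: CUE-like)
print(var_g(big, 0, big))  #  0.125 (strong spin-valley plus high field)
print(var_g(0, big, 0))    #  0.125 (CSE)
```

These limits reproduce the sign change of the quantum correction (WL to WAL) driven by $\Gamma_{\text{asy}}$ and the fluctuation values $\delta g \simeq 0.250$ and $0.125$ discussed in the main text.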
Rollin Harlow Person (October 15, 1850 – June 2, 1917) was an American jurist who served as an associate justice of the Michigan Supreme Court. Born on a farm in Iosco Township, Michigan, Person went to high school in Howell, Michigan. He then studied law at University of Michigan Law School and was admitted to the Michigan bar in 1873. Person and his wife moved to Nebraska where he practiced law, but "[a] plague of grasshoppers which practically ruined that section of Nebraska in 1875 drove him back to Michigan", and he returned to Howell to practice law. Person served as Michigan circuit court judge from 1891 to 1899. He advised Governor Woodbridge N. Ferris during the Copper Country strike of 1913–14, and impressed the governor enough that when Justice Aaron V. McAlvay died the following year, Ferris appointed Person to the vacant seat on the Michigan Supreme Court. Person served from 1915 to 1917, his term ending in January 1917 after he was defeated by Grant Fellows in a bid for reelection to the seat. Person died in Lansing, Michigan, at the age of 66, following an attack of indigestion.
# Iwahori-Hecke Algebras and Schur Algebras of the Symmetric Group (University Lecture Series No. 15)

By Andrew Mathas. Paperback, published 15/09/1999; ISBN-13 9780821819265; 200 pages.

### Description

This volume presents a fully self-contained introduction to the modular representation theory of the Iwahori-Hecke algebras of the symmetric groups and of the $q$-Schur algebras. The study of these algebras was pioneered by Dipper and James in a series of landmark papers. The primary goal of the book is to classify the blocks and the simple modules of both algebras. The final chapter contains a survey of recent advances and open problems. The main results are proved by showing that the Iwahori-Hecke algebras and $q$-Schur algebras are cellular algebras (in the sense of Graham and Lehrer). This is proved by exhibiting natural bases of both algebras which are indexed by pairs of standard and semistandard tableaux respectively. Using the machinery of cellular algebras, which is developed in Chapter 2, this results in a clean and elegant classification of the irreducible representations of both algebras. The block theory is approached by first proving an analogue of the Jantzen sum formula for the $q$-Schur algebras. This book is the first of its kind covering the topic. It offers a substantially simplified treatment of the original proofs. The book is a solid reference source for experts. It will also serve as a good introduction to students and beginning researchers since each chapter contains exercises and there is an appendix containing a quick development of the representation theory of algebras. A second appendix gives tables of decomposition numbers.

### Contents

The Iwahori-Hecke algebra of the symmetric group; Cellular algebras; The modular representation theory of $\mathcal{H}$; The $q$-Schur algebra; The Jantzen sum formula and the blocks of $\mathcal{H}$; Branching rules, canonical bases and decomposition matrices; Appendix A: Finite dimensional algebras over a field; Appendix B: Decomposition matrices; Appendix C: Elementary divisors of integral Specht modules; Index of notation; References; Index.
**JOHN FAHEY** hovers ghostlike in the sound of almost every acoustic guitarist who came after him, from Leo Kottke to Jimmy Page. In essence, John Fahey is to the solo acoustic guitar what Jimi Hendrix was to the electric: the man whom all subsequent musicians had to listen to. Fahey made close to forty albums between 1959 and his death in 2001, most of them featuring only his solo steel-string guitar. He fused elements of folk, blues, and experimental composition, taking familiar American sounds and recontextualizing them as something entirely new. His artistic voice transformed the cultural landscape of his time—and ours. Yet despite his stature as a groundbreaking visionary, Fahey's intentions—as a man and as an artist—remain largely unexamined. His memoir, _How Bluegrass Music Destroyed My Life_, was largely fiction; his liner notes were full of half-truths. John Fahey's real story has never been told—until now. Journalist Steve Lowenthal has spent years researching Fahey's life and music, talking with his producers, his friends, his peers, his wives, his business partners, and many others. He describes how Fahey introduced prewar blues records and the men who made them to a broader public; how his independent label Takoma set new standards; how he battled his demons, including stage fright, alcohol, and prescription pills; how he ended up homeless and mentally unbalanced; and how, despite his troubles, he managed to found a new record label, Revenant, that won Grammys and remains critically revered. This portrait of a troubled and troubling man in a constant state of creative flux is the compelling story of a great American outcast.
Copyright © 2014 by Steve Lowenthal
Foreword copyright © 2014 by David Fricke
All rights reserved
Published by Chicago Review Press Incorporated, 814 North Franklin Street, Chicago, Illinois 60610
ISBN 978-1-61374-519-9
All written material by John Fahey used by permission of the copyright holder and in cooperation with his estate.

**Library of Congress Cataloging-in-Publication Data**
Lowenthal, Steve.
Dance of death : the life of John Fahey, American guitarist / Steve Lowenthal. pages cm
Includes bibliographical references and index.
ISBN 978-1-61374-519-9
1. Fahey, John, 1939-2001. 2. Guitarists—United States—Biography. I. Title.
ML419.F35L69 2014 787.87092—dc23 [B] 2014007354
Interior design: PerfecType, Nashville, TN
Printed in the United States of America
5 4 3 2 1

# CONTENTS

_Foreword by David Fricke_
_Acknowledgments_
_Introduction_
1 When the Catfish Is in Bloom
2 Sunflower River Blues
3 The Legend of Blind Joe Death
4 On the Sunny Side of the Ocean
5 Poor Boy Long Way from Home
6 Voice of the Turtle
7 View East from the Top of the Riggs Road B&O Trestle
8 Old Fashioned Love
9 Let Go
10 When the Springtime Comes Again
11 Dance of the Inhabitants
12 Red Cross
_Epilogue: I Remember Blind Joe Death_
_Source Notes_
_Bibliography_
_John Fahey Discography_
_Index_

# FOREWORD

I saw John Fahey in performance only once, very late in his journey through American blues, roots, and expressive mystery—in the late 1990s, only a few years before his death, at a New York club, Tramps. It was a telling measure of the guitarist's cult heroism and odyssey of troubles to that point: the room was packed with older fans, recent devotees, and alternative-rock cachet—I stood against a wall near the low, small stage with Sonic Youth guitarist Thurston Moore and Fahey's great critic-champion in that decade, Byron Coley. But that night, Fahey was an opening act, warming up the room for another dogged, gifted folk-blues survivor, John Hammond Jr. History was in the house, in abundance.
Fortune determined the billing. Fahey's set was a rare local sighting. It also came with baggage and warning. Fahey's poetic facility and improvisational brio—the soul and dazzle of his routinely breathtaking 1960s recordings—had suffered through neglect, ill health, poverty, and his long, perverse war with celebrity and public admiration. And Fahey—who infused the acoustic guitar with a pioneering, orchestral luminescence and storytelling articulation on (to name just a handful of diamonds) 1965's _The Transfiguration of Blind Joe Death,_ 1968's _The Yellow Princess,_ and the '68 Christmas present _The New Possibility_—was playing a Stratocaster, casting rippled shadows of digital delay across his recent electric minimalism, bossa nova sway, and suite-like wandering. The effect was at once discomfiting and hypnotizing, a quietly insistent contradiction of Fahey's history and legend, bound up in a music that felt like he was talking to himself in a crush of strangers. I watched and listened with keen, grateful acceptance, privileged to be so close to a figure of such revolutionary passion and fusion. I also knew that Fahey's storied virtuosity—his unprecedented advance in the 1960s through pioneer folk, Delta blues, and advanced classical harmonies, with complex fingerpicking grace and velocity, to an invention he wryly dubbed "American Primitive Guitar"—was not coming back again. The music he played at Tramps was, nevertheless, classic Fahey: aggressive in its striving, beautiful in its deep hurt and candor. There were outbursts over misfiring gear and odd, digressive banter. There was consistency too. Fahey's lifelong evasion of convention and expectation, on his most eccentric and sublime albums, was just as true that night, in his playing, manner, and, after the last note, vanishing. I never saw Fahey in his generally acknowledged prime. But I witnessed the impulse, challenge, and restless artistry at their purest, just in time.
I would have liked more, earlier. Fahey never made it easy. In 1970—a little over a decade into his recording career, right as I was discovering the strange, colorful lore and intimate force of his 1968 album _The Voice of the Turtle_ in the library at my campus radio station, Fahey already sounded like a magus at a crossroads—itching for a fight, lost in his work, desperate for peace—in the opening line of his first _Rolling Stone_ article: "I just want to make a whole bunch of money so I can pay my psychiatric bills." Even in the best of times, Fahey toured irregularly, elevating and taunting his audiences in equal measure, with a peculiar sense of geography and occasion. Fahey's descent, by the early 1990s, into itinerant destitution—a compound product of alcoholism, failed relationships, and the Epstein-Barr virus, which struck the energy and precision of his playing to a severe, permanent degree—ironically mirrored the lives of those prewar blues and country singers and specters that Fahey studied and loved on the way to his own records and a UCLA master's degree in folklore and mythology. But Fahey had cultivated anonymity from the start. Half of the original pressing of his self-released 1959 debut, _John Fahey / Blind Joe Death,_ was credited to a pseudonymous bluesman, a mask Fahey often used later for both retreat and fun. After Fahey's death in 2001, his friend and collaborator Barry Hansen—a.k.a. Dr. Demento—pointed out to me that three of the tracks on _The Voice of the Turtle,_ my entrance into Fahey's music, were old blues 78s that Fahey dubbed from the shellac and credited on the album to Blind Joe Death. To Fahey, that wasn't deceit; it was a prankster's homage. Fahey was no Delta ghost; he grew up in Takoma Park, Maryland, in a troubled household, under emotional-combat conditions. He was an enterprising loner.
Fahey started his own label, Takoma, named after the old neighborhood, to release _John Fahey / Blind Joe Death_ and sold copies at the Maryland gas station where he worked, between filling cars. He also slipped copies of the album into local thrift and record stores, making it seem as if the LP had arrived by vapor, under cover of darkness—a prophetic gesture for a man who made most of his music away from the mainstream industry, slipping in and out of earshot, always in some kind of motion or flight. Even as he entered the world, with that first album, Fahey was expert in the guile of exile. Steve Lowenthal has written the first major historical and critical biography of John Fahey. It is a vivid, rigorously reported examination of his life, the emotional and creative birth of his genius, and its rich, magnificent, and often confounding legacy on record and in performance. There are memories from those he loved, tested, crossed, and in some cases abandoned. It is a book with a ready-made, still-expanding soundtrack: in 2011, the archival imprint Dust-to-Digital added to the more than forty original studio and live albums in Fahey's discography, compiling his primal beginnings—a wild mass of early 78s, demos, and private recordings—in a deluxe five-CD bounty, _Your Past Comes Back to Haunt You: The Fonotone Years 1958–1965._ Fahey loathed nostalgia. He would have adored that title. _Dance of Death_ is also very much like the music running through this story: thrilling, poignant, cryptic, funny, explosive, harrowing, caring, and fragile—a perfect reflection of the man who made it, at every step in his growth, achievement, anger, and sorrows, right up to the night I saw him at Tramps. Barry Hansen told me something else after Fahey's passing, a story from their trips through the Deep South in the 1960s, seeking the seminal forgotten singers and pickers that made their favorite prewar blues records. "John would buy a lot of 78s," Hansen said. 
"Some of them were great, some of them he didn't want to keep. So he would throw them out the window as he drove. His favorite thing was to throw these old 78s at bridge abutments and watch them smash." But Hansen added, "He was always careful not to hit anybody." That was John Fahey in a nutshell. _Dance of Death_ is John Fahey in full, at last. DAVID FRICKE _ROLLING STONE_ NOVEMBER 2013 # ACKNOWLEDGMENTS This book would not have been possible without the help of Anthony Pappalardo, Nicholas Katzban, Kris D'Agostino, Erika Storella, Maria Raha, Mike Wolf, Mallory Farrugia, and Yuval Taylor. Eternal gratitude for the overwhelming support and encouragement over the years of this project from Peter Kolovos, Wayne Rogers, Dominick Fernow, Kate Village, all at VDSQ, Brandon Kavulla, MV Carbon, Paul Gillis, Kevin Bodenheimer, C. Spencer Yeh, Meg Clixby, Michael Bernstein, Chris O'Neal, Kasey Byrne, Angela Sawyer, Paul Familetti, Chris Gray, Beth Lewand, Becka Diamond, Sheila Refael, and Eldad Gothelf, and especially my mother, Sally; my father, Mark; and my sister, Janet. Special thanks to Laris Kreslins for publishing my first article. And thanks to Dan Koretsky, Kristen Eshelman at the Thomas J. Dodd Research Center at UConn, Stephen Brower and all at Vanguard, John Allen, Mitch Greenhill, James Jackson Toth, Claudio Guerrieri, Steve Manning, Carlos Giffoni, Marc Minsker, Charles Schmidt, Charles Eppley, Anthony Mangicapra, Matt Krefting, Ted at Feeding Tube Records, the Delta Slider Blog, and the International Fahey Committee / Fahey Files for all their incredible research. # INTRODUCTION America in the twentieth century was littered with guitar heroes. Most were bombastic, some introspective. Yet among them, John Fahey is perhaps the most mysterious. Delving into the paradoxical universe of Fahey is often a confusing prospect. Despite his status as a groundbreaking visionary, Fahey's intentions as a man and an artist have remained largely unexamined.
He authored and published a memoir in 2001, _How Bluegrass Music Destroyed My Life: Stories by John Fahey,_ but as the subtitle suggests, the work is meant to stand as fiction, not a revelation of truth. With so many half-truths provided by Fahey in his memoir and liner notes, his story has never been fully told. But by dissecting the myths, more universal truths begin to emerge: those of creative strife and American outsider culture. The process of telling his story began when I applied to grad school in 2008 with the intention of using an MFA program to launch what would eventually become a full-fledged biography of this mystifying figure in American music. My first step in the research process was to find all the original Fahey LPs and read all of his bizarre and hallucinatory liner notes, which mixed faux academic scholarship and pranks with true references to his life. One particularly obscure LP was 1965's _The Transfiguration of Blind Joe Death,_ which included a thirty-page booklet written by Fahey that same year. In this text, he, as an unnamed omniscient narrator, tells the hallucinatory tale of a student researching his master's thesis on John Fahey. The student finds a shopkeeper and asks, "Did you ever go to any of the clubs around Boston during the 1960s and perchance see or hear of a guitar player named John Fahey? I need any information I can get on him for my Master's thesis. I'm doing it on pre-second foundation artistically creative geniuses." From there, the student is sent to meet a series of Fahey's associates and lovers, a bizarre maze of fantastically surreal characters. The student, who is continually referred to throughout the text as insipid and stupid, eventually finds Fahey trapped in a cave, and becomes trapped himself. While I certainly recognized the strange coincidence here, it became even odder once I realized that the story was set in 2010, and that Fahey had made the protagonist Jewish.
I had, quite unwittingly, stepped into a prophesied role, created by John Fahey himself. John predicted that I was coming, and had laid all the traps and mazes for me decades earlier. I took it as a sign I was on the right track. But that track was far from straight and narrow. Fahey's significance as a musician aside, several other components of his legacy make his story compelling. As a record collector, he opened a door to prewar American music. He discovered and cataloged unknown recordings, such as a 78 of Charley Patton's "Circle Round the Moon," by literally salvaging them from people's trash and dusty basements, reviving music forgotten by history. As a record enthusiast and archivist, he served as a bridge from the past to the present and helped to show how music developed in the recording era. In his own music, Fahey combined various concepts and approaches. His pastiche of cultural bric-a-brac was deeply postmodern while remaining emotionally relevant. He conveyed a profound sadness at the very core of our shared musical experience, with the blood of the oppressed and dispossessed at its center. Though his is not a story of the blues per se, its language is part of Fahey's vernacular, as are the languages of modern composition, bluegrass—and Maryland. And Maryland is where the story begins. # 1 # WHEN THE CATFISH IS IN BLOOM "I just watched shades of red pass over everything. This went on for some time. Until the red went away and the black came. The black did come and then it too went away. And so did the memories. It took awhile but the red, the black and the memories all went away. For thirty years they went away and only came out in psychoanalysis." —John Fahey, _How Bluegrass Music Destroyed My Life_ Takoma Park, Maryland, in the mid-1950s embodied the promise of postwar America at its fullest. Among the first planned commuter suburbs, Takoma Park centered around the B&O Railroad. 
The Victorian-style houses that dotted the landscape were close enough to Washington, DC, that the employees of the growing government who lived there could get to and from work in a reasonable amount of time, and yet far enough away from the city's unsavory elements that they could feel safe. Deep woods ran through and surrounded the landscape, enough to remind its denizens that it too was once wild. Lush hardwood scenery punctuated the skyline. The Sligo Creek ran through the wilderness, creating a gorgeous naturalism (a nine-mile park ran through the middle of the town). One could easily get lost among the foliage when the light hit and reflected through the multihued leaves of autumn's canopy. Takoma Park was the best of many worlds. Maryland straddled both sides of the racial and cultural divide, with some areas increasingly liberal and others that hung close to old Southern ideologies. Takoma Park was largely considered among the more left-leaning towns. There still lingered traces of racism, though more generally in the older, more established communities of Montgomery County and nearby Prince George's County. Takoma Park was hardly integrated in the 1950s, with pockets of poverty where poor black or poor white families lived. A public works building close to Ritchie Avenue still had segregated bathrooms. African Americans were employed by the city mainly for trash collection. There were no freeways connecting Takoma Park to other cities, so, as in many suburbs, life remained slow. The children were the first generation raised in suburban incubation, and they would experience fewer of the hardships of the previous eras, with depressions and world wars behind them. Yet some were left with a hunger for rebellion—or at least for a glimpse into a world that wasn't their own. 
Unable to connect with the ideas of their time, these teenagers looked backward at the ignored cultural leftovers of years past, finding new value in forms of expression such as blues, bluegrass, and folk music. There were mysteries in records, feelings that were not discussed in any other language. These scratchy, roughly rendered sounds transported listeners back to a time when the problems of 1950s modernity were only distant imaginings. John Aloysius Fahey was born on February 28, 1939, in Washington, DC, to an adoring mother, Jane Hayes, and a distant father, Aloysius. Al worked at the National Institutes of Health and spent a lot of time out of the house. Jane worked as a secretary at the US Geological Survey, though her main focus was her son. In 1944, the Faheys moved from the city to a house on New York Avenue in Takoma Park, an ideal setting for a young family. Al ran the house with strict Catholic discipline. Having grown up in an orphanage, he had endured a difficult upbringing filled with abuse, which influenced how he treated his shy, meek son. He controlled his family with a sharp tongue and a firm hand. Both athletic and quick-witted, he quickly grew disappointed in his clumsy son, who rarely showed much interest in sports. Their one common trait was a love of music. Al knew music theory and played Irish harp around the house. With red hair and freckles against his pale skin, his heritage was plain to see. The family often took trips to local fairs to see country and bluegrass performances at places such as the New River Ranch in Rising Sun, Maryland, where they saw artists like the Stanley Brothers perform. In the summer months one could often hear classical music blasting from the open windows of the Fahey house. Jane was softer than her husband, with the darker features seen in her son. She got by with a pleasant smile, always avoiding difficult subjects and under the thumb of her husband. 
Jane doted on her child and offered him constant encouragement, becoming his unquestioning champion. "I remember the night we moved into the new house in the suburbs," Fahey recalled in _How Bluegrass Music Destroyed My Life._ "I was sleepy and didn't like what was going on. I remember the following morning, feeling afraid and shy, but preparing myself to go across the street where I saw the local kids hanging out. My mother was encouraging me. She gave me a lot of support." Those Takoma Park kids formed a neighborhood gang of about fifteen members, mostly boys but a few girls too, and made it a point to know who was moving in, especially the fellow children. Eddie and Larry were two older elementary school kids who decided to admit Fahey, who was about five at the time. The connection to his new neighborhood gang provided him with company and acceptance for the first time. Every day—starting from the day after he met them until sometime in 1948—they came over to his house and took him everywhere they went. "Every day. Everywhere. And they taught me. For some reason they loved me and felt sorry for me, instead of simply snubbing me like most kids would do, they took on the responsibility of rearing me and educating me," recalled Fahey in his memoir, romanticizing his friends' kindness. They raised him in the way only slightly older peers could. They taught him about sex and simplistic politics, and contradicted the ethos of the Catholic Church. There was a dishonesty in the church that Fahey could never come to terms with. He was taught that the meek inherited the earth, but in school the spoils went to the popular and the strong. Day-to-day normalcies rang false to him. "They made us into monsters," wrote Fahey. _"We_ didn't want to be monsters. But we are monsters. And it's all their faults. All they care about is keeping up with the Joneses, whoever in hell the Joneses are." The competitive nature of navigating social pecking orders left him cold. 
Fahey instead retreated into a lush fantasy life, along with his friends. Since Takoma Park had brought them all together, they saw the town itself as possessed of magical properties. They dreamed of a secret race of cat people who lived in Magruder Park, one of their favorite local escapes, and only came out at night. The group created its own "history" and pieced together various complex story lines relating to their imagined local demigod, whom they named "the Great Koonaklaster." "Eddie glorified the neighborhood and the people who lived there," remembered Fahey. "He told us all that it was a special place like Valhalla or paradise. The very soil was sacred. The water in the creeks and springs was holy water. The oak trees were the highest in the world. And these oak trees weren't like regular oak trees. They were sacred oak trees planted by the Great Koonaklaster himself while he was creating the world." Through ritualistic chanting the local gang would state their devotion to this imaginary deity in exchange for magical milkshakes and protection from adults. Turtles were considered sacred in their world. This imaginative spirit helped ameliorate the ever-growing problems at home. In response to his father's temper, Fahey began to take out his frustrations at school. In the seventh grade he was suspended for attacking a female classmate. "But it wasn't fair," wrote Fahey. "After all, I was just doing what my father did to me all the time. Nothing unusual. What was all the fuss about? Oh I knew. I knew. I was wrong and my father was wrong, too. Very much in the wrong. Evil. But I couldn't tell anyone or he might come and get me and kill me." Things at home soon came to a head. Al and Jane divorced just before John entered high school, and the task of raising their son fell squarely on Jane's shoulders. John and Jane moved out of the house on New York Avenue, where Al would remain for the rest of his life. 
They moved into Jane's mother's apartment at 7101 New Hampshire Avenue in nearby Prince George's County. Traumatic as divorce would be for any teenager, the split was most likely to John's benefit. Now he no longer had to live with his father, who had nothing kind to say to him or much to offer him. However, Jane struggled to make ends meet, and John never got along with his grandmother, whom he found cold and unloving. The move also separated him from his Takoma Park pals, and he began attending high school in nearby Adelphi. John developed a hot-headed impulsiveness, overcoming his once-shy demeanor. When he began high school in 1952, the pop charts were filled with bland singers like Rosemary Clooney and Eddie Fisher, everything pleasant and mundane. Fahey began to identify himself as an outsider, feeling he had nothing in common with the popular representations he saw and heard. "I don't know if you boys experienced junior and senior high school the way I did," Fahey said. "I hated them—for various reasons. Aside from the boredom, and the jail-like atmosphere and all the other terrible things, there was no atmosphere for _honesty_." Fahey would soon find a perfect template for his new persona in bad-boy, leather-clad figurehead James Dean. Fahey became a tall young man at six foot four, with a slim yet solid frame. With such an imposing presence, he was able to adopt the role of the rebel easily. And with his black leather jacket and slicked-back hair, he looked ready for trouble—even though he was far from a tough guy. School seemed to increasingly offend him as he continued into his teens. Neither teachers nor students provided him solace. He began a search for something, anything that he could connect with. If the suburbs were false and couldn't handle the truth, he would look elsewhere for a language to express his disconnection. "They taught us to love each other at the same time they taught us to kill one other," he wrote. "But it wouldn't work with me. 
It just wouldn't work. I tried. I really tried. But I couldn't make it work. And then I felt guilty. I hated myself. I really did. I hated myself because I couldn't make these two things work together. I couldn't. You don't know how hard I tried to follow those crazy-making instructions, mores, assumptions, actions. Even today, when I think about it, I almost start crying." Girls he had crushes on, kids who beat him up, and the normal teenage social pressures all seemed like gigantic, life-altering traumas to him. The everyday trials of growing up from which most recover hit him extremely hard. While many adolescents consider themselves miserable, Fahey seemed more miserable and alienated than most. With his vivid imagination came equally lucid nightmares. "I wanted to kill my parents and then myself," wrote Fahey. "That's what the strange dreams meant. I wanted to kill us because there was something wrong with us. And everyone knew it, too." He sought refuge in music. One day, while flipping through the radio stations, he became drawn to the instrumental tapestry of classical music. Fahey embraced the strident power of revolutionary Russian composers; they became the first soundtrack to his rebellion. In his memoir he imagined vicious fantasies: "At Mount Rainier Junior High School, in the same town where William Peter Blatty's exorcism actually took place, the kids took one of the teachers onto the roof and threw him off, killing him. Maybe the revolution was beginning. I listened to WGMS, then called WQQW. They played a lot of Shostakovich and Prokofiev—Russian, Communist composers. The music was so angry that I believed the revolution was going to come. And it did." Igor Stravinsky's _The Rite of Spring_ had infamously sparked riots upon its initial performances with its brazen use of atonality. In stark contrast to his beloved children's work _Peter and the Wolf,_ Sergei Prokofiev also composed war sonatas, venting his anger at the Soviet regime. 
Fahey, inspired by these composers and seeing a way to tell stories without the trappings of language, began to trace out his own musical aspirations. Through the power and violence of Russian music he discovered concepts of dissonance, atonality, and drastic rhythmic shifts. He dreamed of destroying the structures that tormented him, hearing this in the reverberations of his tiny radio speaker. Finally, the music spoke a truth he could relate to. But anger wasn't his only excessive emotion. He would be prone to fits of great joy, energy, and enthusiasm, too. His passions for what excited him were as severe as his hatred for what bothered him. His musical focus shifted in 1954 at the age of fifteen, when his favorite station changed formats to country and western. He started to hear records like Jimmie Rodgers's "Blue Yodel No. 7," a fiddle and acoustic guitar number lamenting a girl who left the singer so lonesome that he didn't know what to do. Fahey's reaction was immediate. "It reached out and grabbed me and it has never let go of me," he remembered. "I went limp. I almost fell off the sofa. My mouth fell open. My eyes widened and expanded. I found myself hyperventilating....I screamed for help but nobody was around and nobody came. Nothing has ever been the same since." Inspired and moved by those sounds coming through the radio, Fahey decided to pick up a seventeen-dollar Sears & Roebuck guitar. He earned the money by taking up a local paper route. On summer nights he walked the streets of Takoma Park, exploring the boundaries of his neighborhood, which included a trash collection site near which several low-income families lived, less than a mile from his own house. One night he ran into an older black musician named Elmer Williams, who lived down on Prince George's Avenue. He was picking a guitar in a Blind Boy Fuller style. Soon Williams would teach Fahey how to play the twelve-bar blues in E. It was Fahey's first-ever encounter with a black musician. 
Every summer Friday night there would be giant outdoor crab boils in the mixed part of town. Fahey recalled going once and hearing Williams play for hours at these parties while neighbors and guests danced hypnotically in the street. Like many lonely teenagers he found playing guitar an ideal activity, because it required no one else. He sat in his room with his instrument for hours on end. Feeding his newfound musical habit, he set out on a mission to find any information he could about music of all kinds, picking up musical techniques and ideas where he could. The first step was trying to track down a copy of the song he couldn't get out of his head, "Blue Yodel No. 7." Few people had any interest in or knowledge of this music, which barely existed in physical form. Asking around school, he heard about a young record collector named Dick Spottswood, a popular kid two years Fahey's senior who had friends in many different circles. Spottswood had a far different high school experience than Fahey. Indeed, not everyone who sought out such music was as tormented as he was. "When we were still in our teens the road ahead was filled with choices. We were lucky. We were kids getting good educations," Spottswood says. Fahey seemed troubled and disgruntled, while Spottswood was far more amiable and well adjusted. "We had mutual friends who introduced us," recalls Spottswood. "He gave off very much a tall, tough-guy image. He dressed in T-shirts with the cigarettes rolled up in the sleeve and a toothpick in the mouth—that kind of thing. He had long black hair. He was very good-looking in a tough, blue-collar kind of way. At least that was the image he gave out. When I came to know him I could see behind the façade, but that's what he wanted to show to the world." The two became friends, and together they would listen to bluegrass artists like Bill Monroe, while Fahey drank enough Coca-Cola to kill a normal man. 
Fahey would drive the pair to local thrift stores and soon beyond, up to Baltimore to hunt for records. Spottswood noticed that his new friend seemed to be suffering a great deal from his home life: "He was subject to such mood swings. He was depressed a good bit of the time, and at times when he was on the other side of the spectrum his enthusiasms threatened to carry him off," he says. Although the suburbs were planned for families, kids had few desirable options for spending their free time. For a while Fahey hung out at the local pool hall with some greaser kids, but ultimately he knew he didn't fit in there. There was nothing there to stimulate his growing existential concerns. Through the Episcopal Youth Fellowship at Trinity Episcopal Church he found a refuge from the banality he saw around him, as well as a safe haven for intellectual and theological conversation. Away from the high school socialites, he put aside his tough-guy act and revealed a more pensive side. Attending an Episcopal church also doubled as rebellion against his Catholic father. There he met Anthony Lee, who played organ at services. Lee was a self-described awkward teenager and happily adopted the nickname Flea. "My first impression of John was simply that he was weird, which appealed to me because I was considered weird too," remembers Lee. At the time, Lee was a closeted homosexual and naturally a target in the repressive environs of 1950s suburbia. But Fahey, no stranger to ridicule from his strict father, did nothing to defend him. Lee recalls that "Fahey and I never hung out anywhere except at Trinity and at his home, primarily because he was ashamed to be seen in my company by his hard-rock friends, in whose presence, at Trinity Church, he mercilessly ridiculed me." Fahey, though deeply sensitive, had a sadistic side; he was able to target people's vulnerabilities, a trait gleaned from his father. 
Fahey was attracted by Lee's musicianship, and their friendship grew over a shared absurd sense of humor and a mutual love of the wildly experimental music of composer Harry Partch. Lee's aunt had briefly dated Partch in the 1920s (he later came out as gay), and he sent her a copy of his self-released album _Plectra and Percussion Dances_ when she told him of her nephew's interest in modern music. Lee played it for Fahey and they both loved it. Partch was an artist who invented his own one-of-a-kind instruments to play avant-garde, microtonal works that followed few recognizable patterns. Once, as a prank, Fahey and Lee played a typically bizarre Partch record through the church PA system—much to the confusion of the attendees. On the organ, Lee would occasionally try to incorporate melodic phrases from gospel standards like "Uncloudy Day" in improvised sections of the hymns while Fahey would smirk from the pews. By 1956, Fahey and Lee began attending St. Michael's Episcopal Church, which was just down the street from where Fahey now lived with his mother. Having been introduced by Spottswood, who also attended St. Michael's, Fahey found a group of suburban rebels, all of whom played music and talked about the malaise of being a teenager during the Eisenhower administration. There were even some girls. Among them was a young flute player named Nancy McLean. McLean was serious about her music studies, having taken private lessons from the first-chair flutist in the US Marine Corps band. A few years younger than the rest, McLean looked older than her age—an advantage at thirteen—and was drawn to what she saw as self-assured and interesting young people. She went to Northwestern High School in nearby Adelphi, the same school Fahey attended. "John portrayed himself as an outcast/outlaw/beatnik/pre-hippie," remembers McLean. "He was super cool in the way he walked, and rarely showed any true distress." 
However cool he presented himself, she saw that his erratic behavior at times could become provocative. She recalled that Fahey would shout absurdities at inappropriate times in church, such as "Being is!," and fall into fits of hysterical laughter. "One would have thought he was fox crazy with a few odd proclivities—nothing serious," she recalls. Spottswood saw similar traits in Fahey, and felt that his pranks and posturing were part of his appeal. "John managed to be charming by being anti-charming," Spottswood says. "John was a contrarian; he always cared a lot but acted as if he didn't. He was never happier than when he was pulling your leg, although if you played a trick on him he could get extremely upset." On Sundays the church would have potluck lunches, where young and old engaged in long discussions regarding religion. Fahey enjoyed being treated like an intellectual equal by adults. Spottswood had started his first year at the University of Maryland and was living in a spare room in St. Michael's pastor's house. During this time, Fahey and McLean started to develop what she described as a "chaste" romance. The two dated briefly and had no hard feelings when they stopped. In this sanctuary, Fahey had found a place for himself among his new friends. Fahey continued his deep fascination with the guitar. He spent hours alone working on the fundamentals and trying to copy what he heard on records and the radio. His progress came slowly and was the result of intense focus. Playing guitar became meditative therapy, an outlet for his anger and a way to channel his imagination. Music also became a way for him to connect with people. Fahey, with the occasional help of McLean, would make impromptu recordings while hanging out at St. Michael's. In this group structure, Fahey used the church resources at his disposal to make his earliest demos. These primitive recordings would serve as blueprints for future work. 
Bluegrass and classical were the main types of music he listened to, but what he imagined had little to do with any traditional genre. His friends were all interested in music to varying degrees, but Fahey was generating his own unique ideas by combining various musical influences. Relating to the intellectual appeal of classical music, the sadness of country and western, and the spirituality of hymns, he became interested in the transformative powers of each. He saw music as a conduit for emotion. The process was less important than the results, especially since Fahey lacked the patience for learning how to read or write music, instead imitating techniques he heard on records. Merging genres with a bold ambitiousness, he would eventually call his style "American Primitive," in reference to his untrained methods. Rather than being restrained by formal song structure, he tried to keep the feel of more abstract classical structures, while using familiar fingerpicking patterns found in country and bluegrass. He took ideas from the music he heard as source material for collage. "I learned a few country-western songs," said Fahey. "I bought a chord book, and right away I started writing my own stuff, which, nobody else did that, I don't know why. I had a big background in listening to classical music and I started trying to compose, like I was playing the guitar but I heard an orchestra in my head. So I was really composing for full orchestra, and of course I didn't know enough chords or harmonies yet, but I came up with some interesting stuff." Despite developing his own style, he struggled to articulate his ideas. "I don't mean to demean his talent, which was quite remarkable, but I think he must have worked harder, in private, on his picking and fingering and composing than he let on," recalls McLean. "He was inventive musically. But because he couldn't read music, I think he was prevented from making an even bigger mark. 
He needed Tony 'Flea' Lee to help him tune his guitars." In 1956, having graduated high school, Fahey started working at Martin's Esso, an all-night gas station located on the central intersection of University Boulevard and New Hampshire Avenue on the border of Takoma Park and Langley Park. He would often play pranks on the customers he attended to. Lee remembered, "He would give people directions, sending them in about a ten-mile-long circle which brought them right back to Martin's Esso, where they would look in confusion at Fahey, who would smile cheerfully and wave to them as they drove by." Eugene "Ed" (his first and last initials) Denson, a mutual Takoma Park friend of Spottswood, remembers him from this post as well. "He was young and thin, and fond of saying 'Happy motoring' to the customers—this was an advertising slogan of the time, and seemed to baffle the customers," he says. Fahey felt comfortable there, king of his micro-universe, messing with the squares for kicks. His position seemed to suit him, and he often recalled it among his favorite memories. "Martin's was the only thing open all night in the county," said Fahey. "I always invited the cops to stay as long as they wanted. 'You want some free batteries for your flashlight? Take them.' I got to know all the cops and they let me speed. I never got caught. It was just, 'Hi, Fahey.' I became a very important person for the only time in my life. I still dream about it. I have very nice dreams of going back and working all night at this gas station. I liked the responsibility. In the three years I had that gig not one quart of oil was ever missed on the inventory. I watched. I'm real good at watching things. And that was the main part of the job during the week. There was not much work to do." He would never know hard labor, like that of the impoverished, but for the suburbs this was blue-collar work, and he enjoyed what it entailed: an earnestness. 
He would sit out at night and play guitar and watch the nothing go by, listening for the B&O engine whistling off in the distance. This simplicity suited him and allowed his mind to wander through the complexities of his mental orchestras. Most important, he was left alone. # 2 # SUNFLOWER RIVER BLUES "Canvassing in and around Washington and Baltimore, as far north as Havre de Grace [Maryland] and even Philadelphia, I found hundreds of hillbilly and race records....Stump Johnson on Paramount doing 'I'll Be Glad When You're Dead You Rascal You' and 'West End Blues' by Louis [Armstrong]....On Richie Avenue East, I found a Kokomo Arnold record and the Carter Family doing 'When the Roses Bloom Again in Dixieland.' See what I mean? I could go on and on like this." —John Fahey, interview, 1998 When Fahey began listening to records, he had no idea what a record collector was. Record collecting was a secret fascination, coded in mailing lists printed in the back of small jazz and record-collector magazines. The most rare and sought-after collectibles sold for hundreds of dollars. Harry Smith was a pioneer of folk music collecting, an established avant-garde filmmaker, ethnomusicologist, and all-around rabid hoarder of obscure texts and arts. One of his main interests was 78 RPM records, and soon he accumulated them by the thousands: Cajun, blues, jazz, gospel, and more. In 1947 Smith approached Folkways Records head Moe Asch with a pitch to sell his collection to the label. Asch instead commissioned a six-LP set of Smith's favorite recordings among his collection, entitled the _Anthology of American Folk Music._ Released in 1952, the set was a precedent-setting catalyst of the emerging interest in ethnomusicology and the roots of American music, introducing new listeners to iconic artists like the Carter Family, Uncle Dave Macon, Charley Patton, and dozens more. 
In the coming decade, Bob Dylan, Joan Baez, Johnny Cash, and many others found inspiration in the sounds introduced on the set. Smith and his _Anthology_ showed that troves of cultural treasure were buried in America—in basements and attics, piled in boxes as trash. Fahey initially rejected records by black musicians. After record hunting with Spottswood, he at first traded the blues records he found in exchange for country records. "Where I was brought up was very prejudiced towards Negroes," Fahey said. "I was taught to hate and fear them. I didn't like black music very much, I wouldn't even listen to it." One day, while tallying their scores, Spottswood and Fahey played Blind Willie Johnson's "Praise God I'm Satisfied" to check the record's condition. It was 1957, and what Fahey heard changed him forever. He recalled, "I started to feel nauseated so I made him take it off, but it kept going through my head so I had to hear it again. When he played it the second time I started to cry, it was suddenly very beautiful. It was some kind of hysterical conversion experience where in fact I had liked that kind of music all the time, but didn't want to. So, I allowed myself to like it." Spottswood's firsthand accounts mirror Fahey's own telling. "He went from disliking it quite a bit to adopting it totally in the span of a couple of hours. That's the surprising part to me, that that conversion was so much like Saul on the way to Damascus. It was as if lightning had struck. In the afternoon that predilection was not there, but in the evening it was the start of the rest of his life." The song tells of a man thanking the Lord for saving him and clearing the clouds away, the joy of religious devotion echoing in Johnson's raspy voice. The music of bluesmen like Charley Patton and Blind Blake—other names he found in similar record scores—sent Fahey spiraling toward more collecting and research. 
Their guitar playing attracted him, Patton for his energetic and percussive playing and Blake for his sophisticated fingerpicking technique. What they had in common was syncopation. Fahey related the anger he found in blues music to his own childhood angst. He heard the alienation of outsiders, voices that were ignored and absent from his own world. He felt removed and powerless in the suburbs and related his own complaints to the blues themes of loneliness and disappointment. Fahey also found techniques he could use to further develop his guitar language. He taped his favorite records, keeping the recordings for reference and selling the physical records when he could fetch a nice price for them. Like many musicians, he began by studying his idols and playing along to their songs. Among his favorite artists was fingerpicking guitar player Sam McGee, a regular at the Grand Ole Opry known for his lightning-fast playing. Skip James was another player Fahey idolized, although very few James recordings were known at the time. Fahey found them to be among the most deeply affecting records of the blues canon. The only way for Fahey to satisfy his emerging need for records was to go out and find them. Searching out old 78s in playable condition became a treasure hunt. There was no other way to hear the original country blues music. No radio stations played it, and record labels hadn't yet reissued blues music on the modern 33 1/3 or 45 RPM formats. Blues music was generally regarded as outdated, no longer of interest to current audiences. To people Fahey's age it was all but unknown. Blues fans had largely moved on to the electric R&B coming from Chicago from artists like Muddy Waters and the rest of the Chess Records roster. With bass, drums, and electric guitar added to the mix, the music took on a propulsive rhythm that set the tone for the coming rock 'n' roll onslaught. 
By contrast, country blues, with its scratchy acoustic guitars, already sounded antiquated by the mid-1950s. For a handful of young white teens in Maryland, however, it provided a glimpse into another reality, the dark gauze of pops and static only adding to the mystery. Living in suburban Maryland suddenly had a new advantage in its proximity to the South. The closer they could get to the source, the more likely there would be records to be found. After exhausting their resources locally, Fahey and Spottswood began making long trips to the Deep South to find unheard gems, Fahey driving them in his '55 Chevy. Listening to Charley Patton records, they would hear lyrics with the names of towns such as Clarksdale, Mississippi, so they resolved to head to those places to hunt for 78s. Fahey, Spottswood, and occasionally others, including Lee, would often canvass poor black neighborhoods. Beyond looking in secondhand stores, the young white men would literally go door to door, looking for dusty old records whose owners no longer wanted them. These requests were such a breach of the racial divide at the time that the residents were wary of the visitors. But the potential danger in their pursuit did not deter Fahey and Spottswood. And they often found that the locals were only too happy to sell their old junk. To these small-town folk, Fahey evidenced a looseness that most likely protected him, according to Lee. "He would walk through the rural Southern black ghettos waving an old 78 and yelling, 'Got any old phonograph records? Buyin' up old records!'" Lee remembered. "Occasionally, whether out of discouragement or just ordinary insanity, he really would yell, 'Got any old arms or legs you'd like to sell? Buyin' up old arms and legs!' 
It's been suggested that one of the reasons he managed to survive unscathed from being a conspicuously white presence in the rural black South at a time when civil rights workers were being murdered by local police for such audacity, was simply that white racists, if they noticed him at all, probably dismissed him as too crazy to bother with." About one house in ten would have some records, and most seemed willing to part with them. Generally they would pay around 25 cents a record. One of Fahey's most valued finds turned out to be the only known existing copy of Charley Patton's "Tom Rushen Blues" / "Pea Vine Blues" on Paramount. An old woman in Clarksdale, Mississippi, agreed to let him into her house and began to play a stack of records, talking about each one. When she reached a Charley Patton record, she began to tell a story about him, as Patton had lived in Clarksdale himself. Fahey cut her off, pretending to be disinterested. He didn't want her to know how much he coveted the record. He badgered her into a sale, wanting to abscond with his treasure before she could reconsider. Fahey, overwhelmed by his good fortune, gloated about his discovery. Unfortunately, whatever biographical or anecdotal story she might have imparted was now lost to the ages in his haste to secure the deal. His sympathies and politics were naive, and they remained undeveloped despite his repeated trips to the South. All he saw was the music; the realities of poverty and institutionalized racism were far from his mind. Fixated on the musical expressions of the underclass, he expressed no regrets about their condition. His fantasy was of the Buddha-like bluesman who transcended the slums. 
Lee recalls Fahey's attitudes during an early canvassing trip: "Fahey's idea of how the South should be was so strongly stratified that whenever he saw a Negro family living in anything like human conditions he snorted in halfhearted resentment because he knew he wouldn't find any old records in such houses. 'Goddam white niggers!' he would say." Fahey acted as if he were myopically concerned with music, playing up his tough-guy image to his friends with provocative racist comments. But this front masked fear rather than hatred. The hardships of the impoverished and ignored, as represented through the records, spoke to him more than he was ready to admit. After unearthing a few major discoveries, Fahey and his friends became sellers, buyers, and traders in an obscure world. There were only a handful of collectors in the DC metro area and they had come to know each other quite well. Spottswood was more interested in collecting than selling, amassing a gigantic catalog of prewar American music. "Today we have a pretty good idea of the breadth and scope of the commercial sound recordings of the 1920s, but in those days we were still discovering things," recalls Spottswood. "I would stockpile everything, but John would turn around and sell them if he needed money." To cement his reputation and better capitalize on his finds, occasionally Fahey destroyed extremely rare records he found but already owned, just to make his own copy more valuable. It was an act of selfishness he'd later regret. Fahey would often sell records to subsidize his canvassing trips. The records were auctioned by mail, after being strictly graded for condition, through private mailing lists. He had many buyers in New York City, at least three of whom formed a record label, the Origin Jazz Library, which started reissuing compilation albums of songs from old 78s in 1962. Notably, they introduced Skip James's "Devil Got My Woman" to a new audience on their _Really! 
The Country Blues 1927–1933_ collection. Spottswood's Zen calm was the inverse of Fahey's wild enthusiasm. "The records represented the art and that was the only way you could experience it," Spottswood says. "There weren't any people playing this music anymore. It was the only way to access the sound of a generation that had already passed. We white kids were experiencing them for the first time, because our parents had ignored that music totally." Spottswood later worked with the Library of Congress on the fifteen-LP series _Folk Music in America,_ funded by a grant from the National Endowment for the Arts. Other scholars had previously examined the indigenous music of America, most notably Alan Lomax with his work through the Smithsonian recording folk and blues musicians. Though there had been research into the vast numbers of 78s pressed in earlier decades, many discoveries still remained to be made. The excitement of the unknown propelled Fahey and Spottswood forward. Fahey heard in the blues a rage not expressed elsewhere, and stories fixated on death, violence, and sex. "The reason I liked Charley Patton and those other Delta singers so much was because they were angry," Fahey remembered. "Their music is ominous. Patton had a rheumatic heart and he knew that he was going to die young, which he did. In Son House you hear a lot of fear. In Skip James you hear a lot of sorrow, but also a lot of anger....I played some of the records to the doctor and he said, 'These guys are angry as hell.' " Fahey started to incorporate blues techniques and melodic fragments into his own guitar work. With his heavy thumb he alternated the bass on the sixth and fourth or fifth and third strings of the guitar while his middle and ring fingers picked out a melody. He then would use bent notes and slides to mimic the vocal phrasings of the blues. 
This combination gave his playing a richly dynamic sound, with lead, rhythm, and melody all incorporated into a single instrumental performance. Though Blind Blake, Sam McGee, and Mississippi John Hurt all utilized similar techniques, Fahey fused them with his interest in dissonant modernism, taking his music somewhere else entirely. Back at home, Fahey and Spottswood would make frequent trips to visit Joe Bussard, a fellow country and blues collector of the same age who lived in nearby Frederick. Gospel, blues, and hillbilly country records from the 1920s and '30s were his specialty. Along with various other collectors, they would hang out in Bussard's basement, listen to records, and trade their finds. Few young people had similar tastes, so Fahey, Bussard, and Spottswood enjoyed the chance to share with each other and talk shop. By that time, home electronics had also emerged as a hobby, and many kids in the 1950s built their own transistor radios. Bussard made a lathe cutting machine at home and cut records one at a time from his basement. He'd even draw his own center labels by hand. Bussard greatly admired Fahey's guitar playing and asked to record him. He instructed Fahey to sing as rough as he could—so he would sound like a real bluesman. On these early home recordings, Fahey is heard singing far off key. As a singer, he seems hesitant and affected, as if trying to sound more withered and aged, or at other times simply laconic, covering songs by his newfound idol Charley Patton like "Some Summer Day." Under the pseudonym Blind Thomas, Fahey cut six sides for Bussard's personal Fonotone label. For the most part, the recordings were just for fun. The actual market for such 78s was microscopic, as Bussard primarily did trades through the mail with other collectors and obsessive types. He also hosted a bluegrass radio show, on which he sold his Fonotone records for one dollar apiece on the air. 
Fahey, too, loved the idea of fooling some hopeless collector. That was the cover at least; but underneath the joke a more serious desire began brewing. Fahey always insisted that the recordings were inferior, never meant to be released, and never meant much to him. However, there are traces of what would become his seminal style, a heavy thumb keeping the rhythm and a richly melodic sense with minimal embellishments. It's hard to say whether Fahey's voice was genuinely poor or whether he simply never really tried, but Bussard's recordings were his first and last serious recorded attempts at singing. The Fonotone recordings provide a template for the American Primitive style and are also an early example of a private press record label, a concept that came to greater fruition decades later. Knowing no one would be interested in their goings-on, Fahey and Bussard felt no pressure, and the records were made largely for their own enjoyment. But Fahey found a voice for himself through this process. What he liked so much about the original blues he couldn't find in the revivalists. Artists like Ramblin' Jack Elliott were releasing albums of finger-picked acoustic guitar and singing traditional folk songs such as "Salty Dog" or covering the Woody Guthrie catalog. "They're coming from people who lived the lives of folk people," Fahey said of the original songs from the _Anthology of American Folk Music,_ "not from some suburbanite who's singing someone else's tradition. He can't figure out how to express himself on his own. It might be interesting if they expressed the anguish of the suburbs but they didn't. It would be authentic if that's what a suburbanite talked about and sang about. The pathos of the suburbs or whatever. But they didn't do that. Believe me, there's a lot of pathos there but instead they adopted other cultures' music which they didn't know anything about." 
In 1960, Fahey entered his first year at the University of Maryland at College Park for a philosophy degree. He then quickly transferred to American University in Washington, DC, where he studied religion and philosophy, both natural fits for his aptitudes. "He had gotten his degree in philosophy at American University and he did some hard and honest work there," says Spottswood. "John was someone who had anti-intellectual tendencies but he was fairly intellectual." By then he had ditched his teen tough-guy act and started to ease up on his friends. "He had matured dramatically," recalls Lee. "He had stopped hanging around with a pack of half-witted, socially misfit punks, had been accepted as an intellectual equal by the adults at St. Michael's, and had begun to be recognized musically. So, having a more secure sense of himself, he began treating me more decently." He also attended group therapy sessions with other parishioners, in which he presumably talked about his family issues. Fahey's connection to religion was based largely on this social acceptance and intellectual equality—on being treated like an adult. He took melodies from the hymnal and incorporated them into his playing. The priests who resided at St. Michael's had what was at the time a forward-thinking mentality toward young people, according to Lee: "John was influential in getting me my first church organist job at St. Michael's, roughly summer of 1960, when the rectorate was just changing from Don Shaw to Don [Donald Wylie] Seaton," Lee remembers. "Seaton would have left St. Michael's some time before 1966 and gone to Christ Church in Southeast DC, where I smoked dope in his rectory while we listened to _Sgt. Pepper's Lonely Hearts Club Band_ [released June 1967]—a very hip priest, in other words." Fahey continued collecting records, and socialized, drank, and played music with his friends. He and Spottswood at times played guitar and harmonica respectively at college parties in DC. 
Fahey often played with his back against the door so no one could leave the room while they were performing. This activity was at first purely recreational, but Fahey soon found something more to propel him.

# 3 # THE LEGEND OF BLIND JOE DEATH

"You're not meant to feel miserable in American society; you're supposed to keep the smile up. With _Blind Joe Death_ I was secretly throwing hatred and death back in the faces of those people who told me I was bad and sinful because I had these feelings." —John Fahey, interview, 1998

John Fahey needed to properly document and share his music, and he had no intention of waiting for anyone else to do it for him. Inspired by Bussard, Partch, and others, he decided to create his own record label. Record plants often had special products divisions, which would do short runs of private (vanity) pressings in order to keep their machines calibrated in between larger runs of their own label's stock. Encouraged by this practice, Fahey self-released his first album, _Blind Joe Death,_ in a pressing of 100 copies in 1959. He called his label Takoma Records in homage to his hometown. The record was packaged in a plain white sleeve with the words JOHN FAHEY printed on one side and BLIND JOE DEATH on the other. The album borrowed liberally from his favored artists. At the time, many blues and country songs were based on traditional arrangements, and the practice of adapting other artists' material was common in both genres. Fahey's album begins with "On Doing an Evil Deed," a piece Fahey claimed to have written about a girl whose heart he had broken. It contains elements of Robert Johnson's "Kind Hearted Woman Blues," and is played in standard tuning, in the key of A. It showcases Fahey's melodic fingerpicking runs and evolves over its five-minute duration, with Fahey bending notes on the refrain to add further dynamics. The album continues with a version of "St. Louis Blues." Originally composed by W. C. 
Handy, Fahey's adaptation channels a 1927 version recorded by the old-time duo Weaver and Beasley in its pacing. Later, he picks up the tempo on his take of the classic folk song "John Henry," using a strange countermelody in the bridge to create some modern dissonance. He features some original compositions as well, including the melancholy "Sligo River Blues," an ode to the Sligo Creek of his Takoma Park childhood. Fahey wrote about the song in his liner notes, showcasing his emerging surreal literary voice: "An attempt to reconstruct an old song from three lines imperfectly remembered by an old peasant woman in the village of Balysodare, Sligo, who often sings them to herself. 'Every hand is lunatic that travels on the Moon.' " A hint at Fahey's longer-form compositions is "The Transcendental Waterfall," another original song that takes a profound leap away from standard blues and country, using nonresolving chords in the manner of composers such as Bartók and Stravinsky. The piece has an abstract form, not locked into the standard rhythm of the other album tracks. On his traditional blues, Fahey keeps a heavy lock on the structure, but on this track he explores texture and improvisation, placing fragments of riffs together in an unorthodox manner, using strange bends to twist notes into unusual territory. With a few harmonic taps the piece departs even further from the blues standards and succeeds, creating an altogether new sound for acoustic guitar. Although the playing remains hesitant at times, like on his Fonotone records, _Blind Joe Death_ demonstrates the emergence of a unique voice. While other guitarists such as Dave Van Ronk were picking blues and country, no one else explored such a mixture of modern elements. For an album with no vocals, _Blind Joe Death_ speaks a different language. Having studied the details of the blues, Fahey now had the template to create his own persona. 
While Blind Thomas had been a start, the full realization of Fahey's alter ego emerged in his new, bolder character, Blind Joe Death. He reflected on the name with alternating accounts. Depending on when and by whom he was asked, his answers varied wildly. If feeling shy, he distanced himself from any serious intent: "When I made my first record I thought it would be a good joke to have me on one side, have the label say 'John Fahey' on one side, and this guy 'Blind Joe Death' on the other side... Also I was thinking, whenever you print the word 'Death' people look at it, and I was thinking of record sales already even though I was only going to have a hundred copies pressed." Among his friends, reactions to his alter ego were mixed. Some recognized Fahey's conflicted feelings regarding his own credibility. Part of the appeal of an alias was being able to hide behind the signifiers of blues culture. With this cloak, he could obscure the fact that the music was made by a white suburbanite. Spottswood agrees the duality existed from the beginning: "I think he was trying to have it both ways. Having adopted that music and attempted to play it, I think he also wanted a badge of authenticity, which of course he wasn't ever going to have, because he was learning that music secondhand from records. In order to create some authenticity attached to him, he created a mythical person." Fahey wasn't playing the blues, but rather a deviated form based on blues structures. He wasn't trying to tell anyone else's story; he had his own experiences to express. And he followed his passions with a fierce intensity. The darker side of Blind Joe Death, according to Fahey, is the embodiment of all the hate and negativity rippling under the surface of the faux suburban dream. While Fahey, as a child, was powerless, Blind Joe Death projects from a position of power, lashing out against those who repressed and abused him. From behind this veil, Fahey expresses his contempt for society. 
As an artist whose repertoire was instrumental, Fahey had a wealth of things to express through his imagery. "The whole point was to use the word 'death,' " said Fahey. "I was fascinated by death and I wanted to die. I probably could have told you that at the time, but I wasn't being that honest. Blind Joe Death was my death instinct. He was also all the Negroes in the slums who were suffering. He was the incarnation, not only of my death wish, but also of all the aggressive instincts in me." Through Blind Joe Death, Fahey created a minstrelized persona. These revelations came later, however; at the time the symbolism was seen as largely tongue in cheek. Stylistically, there was no distinction between the music on each side of _Blind Joe Death;_ it all sounded of a piece. The recordings have a homemade feel, enhancing the intimacy of the performances with their pops and imperfections. The bare aesthetic of the packaging matched the music. He sold the record mostly at his all-night gas station post. Occasionally he would drop copies in local thrift stores. It took three years for him to get rid of them all. Few imagined that it would have such far-reaching and long-lasting effects. One of the few copies that Fahey sent out was to folk/blues scholar and producer Sam Charters. Known for his production work, Charters made his name recording ethnomusicology records for the Folkways label in the 1950s. Charters, also a scholar, had written one of the earliest books on the blues, the seminal _The Country Blues,_ published in 1959. Fahey respected Charters's work and hoped to find a sympathetic ear. But upon initial listen, Charters recalled being less than impressed. Guitarists like Dave Van Ronk were acclaimed for their startling prowess; however, these players usually sang as well. Charters had worked with accomplished guitar players such as Bahaman Joseph Spence, whose fierce yelp and hard attack sounded unlike any other, as well as electric blues icon Muddy Waters. 
To Charters, Fahey's record sounded generic. "When John sent me the record, it sounded like a lot of stuff I'd already heard and played. I didn't think it was that special and I sent him a letter saying so," says Charters. The letter upset Fahey greatly and their relationship began on a sour note. Spottswood was equally unimpressed: "I didn't think his technique was very sophisticated. He basically played in a variety of open tunings and that was part of his appeal, that you could pick up the guitar and play like him if you wanted to as it wasn't that difficult." While these criticisms were valid, they did nothing to deter Fahey, and his playing continued to develop privately. Ed Denson felt differently. Another participant in the burgeoning Maryland blues-collector circles, he picked up on Fahey's talents early on. He wasn't a musician, but he championed the talents he found around him. He was among the first to foresee the massive appeal of Fahey's music. Denson had explored psychedelic drugs and was starting to become interested in left-wing politics. A sharp, laconic guy with a penchant for writing like Samuel Beckett in his creative writing classes, Denson was a regular around the folk hangouts. "Fahey could play virtually any piece on a 78, in the style of any of the older artists," says Denson. "His guitar playing, especially on record, took the music and moved it into another realm, so he too was not 'authentic,' but I don't think that disturbed him. John never expressed any sense of his own feelings about his own music to me that I recall. He did what he did, and from the first time I met him, he was good enough at it to perform to audiences and issue records." By 1960, Fahey was playing at informal events at the Unicorn on 17th and S Streets down in Washington, DC. The club hosted a hootenanny every Friday, which would attract all the local kids just getting into the folk scene. 
Fahey's music, containing elements of blues and country, sounded familiar to them, but its instrumental focus was unlike the rambling stories of the common folk guitarist. Even from his first performances, the music resonated with audiences. Max Ochs, a classmate of Denson's, first encountered Fahey at the Unicorn. A fellow guitar player, Ochs also performed at the hangouts. He recalled Fahey's commanding presence having an immediate impact on those around him. "He was not chatty," remembers Ochs. "He had a larger-than-life demeanor that inspired a kind of hero worship in me. I was a devotee, sitting on the floor as one of a circle of devotees around a blues bodhisattva. He sat in the chair in the center of the room and he played his latest compositions, sometimes sounding as if they were being created as he played, all with an expressionless mask, a deep looking inward." Fahey connected the blues to his darker thoughts, and this resonated with his audience. "My impression was that there was an old, old sorrow in John Fahey that a quart of whiskey might assuage but never alleviate," adds Ochs, "an affect that we were decades too early to think might indicate the presence of some pathology, like autism. We were more disposed to accept a variety of behaviors or nonbehaviors. John's remote irony was admired by the whole set of devotees." Another of the club's regular performers was a gorgeous young guitar player named Pat Sullivan. With flawless features and dark hair, she commanded attention for both her playing and her feminine charms. Adored by the boys for being into guitar, she had many suitors. Fahey and Sullivan first met at St. Michael's, where she often ran the tape machine for him when he made recordings. They bonded through their mutual love of guitar and, later, their penchant for drinking Old Grand-Dad whiskey—in contrast to the pot-smoking regulars of their scene. Fahey fell hard for her and she became his muse. 
He adored her, naming several songs for her. She is even credited as coauthor for a track on his second record. "I had all these pieces in my head, and she seemed to be able to hear them, I swear," Fahey said about Sullivan. "She was more certain of me and my talent than I was. We had two guitars and we were doing all these incredible things and learning new stuff every day just by listening to each other. I mean, we'd play for eight hours and think nothing of it." Her encouragement propelled his desires; he believed he had found an ideal partner, a true love. Fahey's reaction was overzealous. Sullivan was not ready to settle down with the intense guitarist. They remained close, and she treated him with patience. Sullivan started dating Ed Denson, who asked her to marry him. When she accepted, Fahey grew despondent. He wasn't ready to accept her decision to marry Denson and continued to chase her, furthering his despair in the torture of the unrequited. Seeking an escape, he ended up spending a few months as a teaching assistant in Hawaii. The experience didn't leave a positive impression on him and he never mentioned it afterward, a rare lapse in candor for the verbose Fahey. Upon his return to Maryland toward the end of 1963, he continued to work on his guitar playing and made a few recordings with Bussard and McLean. Still left with unresolved feelings for Sullivan, he continued to chase her, despite her marriage to his friend. Sullivan, it seems, played both sides. "There was a time when John and Ed had a radio program in Washington," remembers Sam Charters, "and they were trying to pin down the moment John had taped something and Ed turned to him and said, 'Was that before you moved in with my wife or after?' Ed was totally relaxed." The love triangle ended up bringing Fahey to the other side of the country—just in time for the youth culture revolution. 
# 4 # ON THE SUNNY SIDE OF THE OCEAN

"I remember when you'd go into a folk store, there'd always be a big sign up, 'Should Pete Seeger Go To Jail?' I'd always say, 'Absolutely. Because he sings such lousy music.' " —John Fahey, interview, 1994

Propelled by desire, Fahey headed west, with unrequited love as his blinders. Ed Denson and Pat Sullivan had moved to California to pursue their graduate degrees. Fahey followed suit and in the fall of 1963 enrolled in the master's program in philosophy at the University of California, Berkeley, where Pat was also studying. Her marriage to Denson would be short-lived. She would leave a trail of broken hearts all through the Berkeley music scene. In Berkeley, as Denson recalls, "John and I lived in one large, somewhat ramshackle residence out in the sticks beyond Clayton. I don't know if Pat was there or not. My only really clear memories of that period are of the Clayton Peacock"—a bird that frequently appeared on the property. Fahey continued his pursuits as both a student and musician, with aspirations to write a scholarly thesis on the blues. His expertise in the once extremely marginal field was in vogue now that interest in it had started to go mainstream. The folkies were especially interested in the blues' cultural relevancy in the era of civil rights. Fahey was already far ahead of those who were just beginning to listen, and his considerable knowledge became another source of his charisma. "Among these people, John was a person who had done the things they were trying to do. He was an excellent guitarist, and his persona suggested that he knew something they did not—which was, in a way, true," Denson says. Unlike many others, Fahey was often vocal with his criticisms. And however much he abused them, he still found followers drawn to his strange personality. 
"I would not say there was anything endearing about John, even in his vulnerabilities, but for people of a certain personality type, there was something attractive....'I'm always surrounded by midgets,' he said one day. He was tall, but the reference was to accomplishments, not height," recalls Denson. Despite the tension regarding Pat, Denson and Fahey remained amicable and were able to seemingly ignore the situation. The two decided to relaunch Takoma Records as a full-time independent record label dedicated to the guitar, with Fahey as its cornerstone artist. Fahey was fortunate to have found a partner who possessed all the networking, organizational, and social skills that he lacked. Through Denson, Fahey was able to market his music to the new folk audience. They never much discussed Pat, or even Fahey's music, and the partnership, for the time being, worked out. "My relationship with John was not unpleasant, nor stormy," remembers Denson. "Generally speaking he was happy to record music, and I was happy to get it on the market and hope to sell it." Independent, artist-owned labels were uncommon at the time. An important distinction between Fahey and other contemporary instrumentalists was his realism. He knew no label would be interested in putting out instrumental guitar music, so he simply took matters into his own hands and pressed records, in limited quantity, himself. This self-reliance cemented his commitment, while others simply waited to be discovered. With Denson's help, Takoma Records would blossom into a sizeable and venerable record label. They soon began plotting which other artists to recruit. The most coveted country blues recordings had been made in the late 1920s and early 1930s, and it was perfectly reasonable to assume that many of these performers were still alive, some perhaps even still playing. In March 1963, Tom Hoskins's discovery of Mississippi John Hurt got the ball rolling. There was no telling who else might still be out there. 
The logical next step for an academic and collector like Fahey was to look for the artists themselves while on collecting trips. Many of them were still hanging around the same haunts they referenced on their records thirty years earlier, completely unaware of any interest in their work. A shot-in-the-dark postcard to a small town in Mississippi started the process. Bukka White originally recorded for the Victor label in the early 1930s. He was later convicted of murder and sent to prison. Famed folklore documentarian Alan Lomax made several recordings of White while he was in prison, and White received recognition during the early 1960s folk movement when Bob Dylan covered his song "Fixin' to Die Blues" on his first album. In the process, White was introduced to the folk music community as a pioneer. To these middle-class, white teenagers who made up the folkie crowd, White represented authenticity. After hearing White sing about Aberdeen, Mississippi, on his records, Fahey sent a postcard to Bukka's attention care of general delivery to the Aberdeen Post Office, offering him $100 to record for Takoma. White lived in Tennessee, where he was employed at a tank factory. By sheer chance, his cousin worked at a Mississippi post office and forwarded the letter to him, and White responded to the upstart label. White was a fast-talking, good-natured character. After visiting him at his home in rural Tennessee, Fahey formed a deep bond with him, not only through music but also through their mutual fascination with trains. White told stories of riding the rails, a prospect that had thrilled Fahey dating back to his Maryland days, the sounds of the B&O still resounding in his memories. White indulged in tales of the old times and would take Fahey fishing when Fahey would come to visit him. All the while the two would drink whiskey like water. White also indulged Fahey in fictional tales of Charley Patton. 
When White ultimately agreed to record for Takoma, Denson and Fahey had their dreams, to a large extent, realized. However, even with the common bonds of music, cultural differences soon became pronounced. Many old bluesmen and their young white sponsors had difficulty trusting each other in regard to money. Often the attention generated from the press did little to sell LPs in mass quantities and the financial rewards were slow in coming. In a letter Fahey sent to Sam Charters dated November 27, 1963, he writes, "There is a slight chance Bukka will break my contract and go away and at this point he's been so much trouble that I don't think I'd mind too much if this occurred." According to Charters, Fahey had to hide in doorways in Memphis from an angry White, who thought Fahey owed him money. White only recorded one album for Takoma, yet he and Fahey remained friends for years afterward—once the unpleasantness of their business had been settled. Takoma Records was launched in earnest with the release of Bukka White's _Mississippi Blues_ and John Fahey's second album _Death Chants, Breakdowns and Military Waltzes._ Much to everyone's surprise, Fahey's record sold more quickly, with the help of new distributor Norman Pierce, who sold Takoma albums direct to stores. _Death Chants_ was sold out in just a matter of weeks. Four years had passed since he released _Blind Joe Death._ A tongue-in-cheek press release from Takoma read as follows: John recorded his second LP, saddened that Death was not there to share in a triumph that was as much his as anyone's. The extent of that triumph may be seen in the fact that our Directors, without hesitation, issued (in part) the following statement in a June press conference: It is a measure not only of the tremendous gain in maturity, stature, and international reputation of Mr. 
Fahey, but of the vital and expanding folk market in this nation and across the seas, that we have, without president [ _sic_ ], decided to issue an initial pressing of 300 copies of _Death Chants, Breakdowns and Military Waltzes._ The album retained the same homemade look and feel as its predecessor: a white silk-screened sleeve with the words JOHN FAHEY in black and the album title below. The music, however, featured far more confident performances and compositions. The opening strain of "Sunflower River Blues" is a mid-tempo fingerpicked anthem with a melancholy that echoes throughout the piece. The song is an ode to Charley Patton, written the year prior in Yazoo City, Mississippi. A unique element in the track is the use of an open-C tuning. Fahey's bottleneck skills inform the stirring "On the Beach of Waikiki," a song written in 1915, which is as hopeful and lively as anything Fahey ever performed. There are odes to his classical influences as well: in the opening measures of "Stomping Tonight on the Pennsylvania/Alabama Border," he borrows a riff, which alternates between a second-inversion C-major chord and a second-inversion C-sharp minor chord, from the end of Ralph Vaughan Williams's "Symphony No. 6." The same track quotes both Skip James and the plainsong hymn "Dies Irae"—a pastiche of influences and styles that brought out his obsession with death. The brief, self-composed track "America" features a rare instance of Fahey playing twelve-string guitar. The song uses harmonics and muted strings to tap out its initial strains and then blooms into a lush refrain. The album closes with a rendition of an Episcopal hymn. With the musical developments of his second album, Fahey started to enter the world of professional musicians. At the same time, with the inclusion of his surreal and bizarre liner notes, he continued to build on the farce he began with Blind Joe Death. 
He wrote about himself, replete with made-up words and fictional places in a jumbled yet fascinating narrative. John Fahey had made his first guitar from a baby's coffin, and led the old blind Negro [Blind Joe Death] through the back alleys and whore-houses of Takoma Park in return for lessons. When the Second World War broke out, John was already a musician in his own right. His career as a volk entertainer was briefly interrupted when he was drafted and sent to New Zealand to fight with the allies against the Finno-Armenian invasion. After the war was over, John, a decorated war hero, returned to his home and re-established relations with Blind Joe. In 1952, only a few years before Blind Joe's bodily ascension, Patricia Sullivan working in co-ordination with the Library of Congress (of Bessarabia), recorded the two of them and issued them on the now rare Takoma label....John Fahey went insane in 1964 and died shortly there after. He spoke to me in his last minutes on his dying bed and said: "Take down my old guitar and smash it against the wall so I can die easy." I did so and he passed away with a chthonic smile on his face. His friends Lee, Spottswood, and McLean enjoyed these notes, as he often included allusions to each of them within the texts. It's never made clear who is speaking, the assumption being a noted scholar or critic, although the writing is credited to Chester Petranick, a former music teacher at Takoma Park schools. In the time since his debut album, the market for his music, still seen under the banner of folk music, had expanded. The most famous outlet for the emerging cultural celebration of folk and blues was the Newport Folk Festival, which began in 1959 as an extension of the already successful Newport Jazz Festival. Promoter George Wein teamed with Folklore Productions' Manny Greenhill to organize a series of concerts catering to the rising popularity of the blues, country, bluegrass, and folk. 
By 1963, with attendance blossoming to 45,000, Newport was a celebration of the music and culture, providing workshops, panels, and over 100 performances over the weekend of July 26–28 of that year. Bob Dylan, Johnny Cash, and Joan Baez all performed at the fest alongside blues legends such as Mississippi John Hurt, who performed for their new public for the first time. As an expert historian and guitarist, Fahey attended the festival, participating in one of the smaller workshops. While the music, at times, engaged him, the politics left him absolutely cold, and he questioned the motivations of those involved. Fahey famously criticized Pete Seeger during the festival's Topical Song Discussion workshop—an act of near sacrilege since Seeger was a sacred cow to the young activists. Having been jailed for his protest songs, Seeger seemed the living embodiment of a modern folk hero, an actual martyr. Fahey wasn't buying it. He voiced his opinion that the songs sung by Seeger did not represent the contemporary voice of the actual people. The actual folk were listening to R&B, and while they supported the civil rights cause, black audiences certainly weren't listening to the music of their past as reinterpreted by white intellectuals. In Fahey's eyes, white performers like Seeger didn't understand the blues, missing its emotional rawness. Never shy to express himself, he stood up against the zeitgeist, to mostly deaf ears. "I was trying to convince the audience, who was mostly Negros, that these jerks like Phil Ochs and Seeger were writing music about Negros to make money and not to help Negros," said Fahey. "That they were actually exploiters. And I got booed by the Negros. I kept saying, 'I think that Negros have enough intelligence to write their own songs. I'm really convinced of it.' BOO! I was set up, I just didn't know it." Fahey didn't consider himself or any other middle-class, educated white people as "folk." 
They were not the common people, and their enthusiasm to him seemed insincere. The mystery of the blues continued to captivate Fahey as widespread interest in the subject grew. Out of all the lost bluesmen, Fahey searched for Skip James with the most interest. James remained perhaps the most elusive, his recordings among the most rare, and his material the most deeply sinister. His songs of murder, misogyny, and coldness were unsurpassed in their severity. His song "22-20 Blues," an ode to his pistol of choice, features the lyrics "Sometimes she gets unruly / An she act like she just don't wanna do / But I get my 22-20 / Cut that woman in two." Even for the blues, James's music contained a sadistic streak. James channels sorrow and anger, the feeling enhanced by his then-unknown open D-minor guitar tuning, which gives a more sullen tone than standard tuning, a haunting match for James's weary falsetto. His mystique among collectors grew as more of his records were uncovered and became coveted on the underground market. A black cloud hangs over James's 1931 recordings, which bear intense themes of mortality and betrayal. But underneath the violence lies a deep remorse, with no less than the punishment of God to contend with. James, torn between the secular and the religious worlds, spent time as both a bootlegger and an assistant to his father, a Texas preacher. His father hated his music and forbade him to perform the blues while working at the church. James vacillated between these worlds, his own music and decadent lifestyle clashing with his religious upbringing. He played songs in praise of the Lord, such as "Jesus Is a Mighty Good Leader." This kind of duality—the struggle between sin and salvation—fascinated Fahey. At first, Fahey romanticized the people behind those lost blues records as entities of magical proportions. Surely, if there were wisdom or answers, Skip James possessed them. Fahey imagined that finding James would be like finding a great spirit. 
In reality, few answers would be found. But Fahey was oblivious to all else in his path. He tackled his obsessions with great expectations. "I was seeking out mean, sadistic, aggressive, hateful, and maybe even dangerous expressions and expressers of music most cruel," said Fahey. "Because the search was urgent and of utmost importance. Because I had to find them, locate them, understand them (maybe not master them), but at least have some knowledge of their origins." When another lost bluesman, Ishman Bracey, was found in Jackson, Mississippi, the discovery set off a flare for Fahey. Bracey and James had recorded for the same label at around the same time. Perhaps Bracey would have some knowledge as to the whereabouts of James. Fahey planned a trip to the Deep South to find out. He took friend and guitarist Bill Barth, as well as Frank Zappa guitarist and fellow Takoma Park native Henry Vestine—both blues scholars in their own right. After a long drive through the swamps of Mississippi, they spoke to Bracey, who gave them a hint: James lived somewhere near the town of Bentonia. At the local gas station they found someone who knew James's wife. They discovered that James was in the hospital. It had been more than thirty years since James recorded music when Fahey, Barth, and Vestine found him in 1964 in a hospital in Tunica, Mississippi, suffering from testicular cancer. The three paid his medical bills so that he could be released. Back at his home a few days later, James didn't offer so much as a thank-you. Barth had brought a guitar, and the sixty-two-year-old James, tuning it to open D minor, began shakily playing his classic songs. The odd chord structures and tuning were a revelation, as they had been trying to figure out the secret to his sound for years. Fahey hoped to record James for Takoma as well. He expected surprise and perhaps a bit of excitement from James, considering the lengths to which they had gone to find him.
Indeed, Fahey's sense of connection to the music was so strong he considered James a mystical figure. But while James may have possessed a true understanding of misanthropy and darkness, he had no intention of helping Fahey unravel his problems. In Fahey's estimation, James remained an angry man. He bragged about his nefarious past and dismissed and insulted many of those whom he came across. Fahey would later proclaim in a somewhat bitter tone that he had "bought" James, surely aware of the racial implications of the statement. In Fahey's mind, James would have rotted to death in that hospital if not for his heroic and altruistic efforts. James would never record for Takoma. "James became a frightful figure who inspired fear and loathing everywhere he went," recalled Fahey. "It was his attitude toward his music. Toward his audience. Toward himself. Toward everything. He made no attempt to disguise his disgust and disdain for people he met, the music that they played and liked and for his gigs. Everybody noticed it. James' connection to the unconscious was broken. He had nothing to teach anybody anymore." Others sought to capitalize on and exploit the newly rediscovered bluesman, and some therefore doubted Fahey's motivations. Skip James biographer Stephen Calt details his take on Fahey's intentions: "Although the blues field in 1964 tended to attract people who could charitably be described as connivers, the petty nature of the burgeoning blues business obscured the fact that the real purpose of James' discoverers and sponsors was to make money off him." The blues revival had an immediate impact on the contemporary music scene. James and legendary bluesman Son House both performed and were reintroduced to audiences at the 1964 Newport Folk Festival. (No one is sure if Fahey attended.) Skip James's comeback album _Today_ ended up coming out on Vanguard Records, a label that had far more resources than Takoma. 
The British psychedelic rock band Cream covered his song "I'm So Glad," to great success. Photos of James and House were featured in _Newsweek_ alongside an article on the blues revival. Little of this translated into direct record sales, but rock musicians had a new template. "Those rediscoveries were earth-shaking to those of us who cared about them," says Denson. "If John had been offered the Nobel Prize for Bukka's discovery, I don't think we would have been surprised, so great was our sense of the importance of it. Perhaps it is better expressed as finding a new pyramid in Egypt, or a lost city in the Amazon. We were fully engaged in the projects, and believed that universal recognition for the artists was bound to come. In a way we were right: Robert Johnson got a stamp issued with him on it." Fahey was not possessive. Once James and White had embarked upon their revival careers, Fahey let them go their way and returned his focus to his own music. _Death Chants, Breakdowns and Military Waltzes_ helped propel Fahey further into the role of burgeoning guitar icon. Others outside his circle began to take notice. Peter Stampfel of the Holy Modal Rounders, an emerging group of experimental folk musicians, wrote favorably about the album in his column in _Boston Broadside_ magazine. Given this new regional interest, Fahey was asked to play a weeklong residency at the Odyssey Coffee House in Boston for the sum of $200. Other players were now beginning to add to the conversation of instrumental guitar. Another key figure in the early Takoma Records catalog was Robbie Basho. Basho grew up as Daniel R. Robinson Jr. in Baltimore, the adopted son of a middle-class family. He attended Catholic school, then military school, until he entered college in 1959 at the University of Maryland as a premed student. Far from a bookworm, Robinson defied the stereotype and spent some time working as a bouncer in a club.
Known as an athletic, weightlifting jock, he transformed himself into a beatnik poet when he discovered the twelve-string guitar in his junior year. Robinson played the standard folk guitar repertoire of the time, which included the likes of the Kingston Trio, along with more pop-oriented material. An encounter with the music of Ravi Shankar in 1962 set Robinson on an obsessive path toward Eastern music—Indian raga, specifically. To solidify this transformation he renamed himself Robbie Basho, after the Japanese poet Matsuo Bashō. Basho aimed to play solo steel-string guitar as elevated compositional music—not for pop songs—and began writing extended ragas for the instrument. Despite his prodigious skills on the guitar, his lack of social graces left him with few friends or supporters. He had no sense of humor about his work. He was described as unapproachable and insufferable; no one seemed to like him. "Fahey was obnoxious, but Basho was just a nebbish—the personality of a frog," recalls Tom Weller, a regular on the Berkeley folk scene. "He often complained that he couldn't get laid." The folk movement championed a relation to the common man, but Basho presented himself as a mystic and dressed in robes and capes. Convinced of his own importance, he viewed his music as having spiritual and magical properties. Basho tried to channel divinity and Eastern thought through long-form fingerpicking that was stunning in its complex virtuosity. Leaving the blues behind, he followed his new path with religious dedication. "Basho was a religious mystic who used his guitar for chanting and expression of his religious views," recalls Charters. "He didn't interact in our world at all except to ask for a great deal of praise. I didn't like him personally or musically, but Ed [Denson] liked the music a lot." Another important distinction between Basho and his peers was his over-the-top, operatic singing style.
His overwhelming bravado turned off many—although some were blown away by his emotional conviction. Basho seemed removed from Americana in all but instrumentation. Few were ready for his mix of instrumental, raga-influenced guitar, or his pretensions. Naturally, a tension between Fahey and Basho grew, as they shared the same management, label, and scene. An intense gunslinger competition developed between the two. Fahey believed himself superior as a musician, because of his compositional abilities. Fahey thought so little of Basho that he would sell Basho's LPs at a deep discount at his shows. Later, Fahey would admit that Basho had interesting moments, but for the most part he had nothing to do with him. "He was crazy," said Fahey. "I never hung out with Robbie personally much. Nobody did. You couldn't." Even so, the two guitarists would be the mainstays of the Takoma label, both releasing solo guitar LPs at regular intervals. Basho's 1965 Takoma Records debut LP, _The Seal of the Blue Lotus,_ although largely ignored, became a cult classic to guitar players of a more experimental nature. Basho played with a gorgeous, dexterous style, rife with dramatic flair and flourishes. By incorporating more Eastern influences the music came across as forward thinking, stylistically fitting the emerging Takoma style of innovative contemporary players. In early 1966, Basho moved out to Berkeley, having been picked up by Denson and various members of Country Joe & the Fish, whom Denson was also managing, on their return from a cross-country journey. Not surprisingly, Basho did not bond with his new West Coast contemporaries on the trip. The group stopped in the Sierra Nevada Mountains, excited about going out in the woods and enjoying the trees and nature while taking psychedelics. Basho sat in the car and kept honking the horn, complaining that he needed to get back to Berkeley to see a doctor.
A notoriously vocal hypochondriac, he complained constantly about back problems. In fact, Basho would demand they stop at hospitals the whole trip. Not once could a doctor find anything wrong with the twentysomething Basho. Meanwhile, Fahey was busy establishing himself as a powerful figure in the music scene. With his encyclopedic knowledge of prewar American music and his biting wit he became a difficult man to win over when fans started to seek his approval. Still, despite his attitudes, he found devotees and fans who became so enthralled by his music that they forgave his often brash behavior. Fahey's approval became coveted among a certain group. "Once the records began selling even modestly he was, in our small circle, a star," recalls Denson about Fahey's charmless charisma. "He had achievements, and in his special area he was really one of the leading figures. Everyone around him was young—yet to achieve anything—and especially at that time of life, someone who is mysterious, accomplished, and who disdains people or work that you think is good, is impressive." Back at home in the summer, Denson produced sessions for what would be Fahey's third album, _Dance of Death & Other Plantation Favorites._ Retaining the feel of its predecessors, the album continued to showcase Fahey's artistic and commercial growth. Still, Fahey remained a cult figure.

# 5

# POOR BOY LONG WAY FROM HOME

"He said he was confused, because, he said, he couldn't get along with the women he liked. He was going to go up to see some psychoanalyst or something in Miricle Valley Arizojahi. I guess he did. I haven't seen him in months."

—John Fahey, in his liner notes to _Days Have Gone By,_ 1967

The left-wing politics of the student movement was bringing attention to blues and folk as the soundtrack to the civil rights movement. Some of those who started coming to Fahey's shows were more interested in politics or drugs than music or records. Fahey hated them.
To him, the student idealists had naive worldviews and dreamed of unrealistic political utopias. A bunch of college students sitting in parks singing "This Land Is Your Land" was enough to make him downright irate. "I hate mellow," he stated emphatically. "There are lots of other things, people, places and times and what-have-you that I hate, but nothing I hate so much as Berkeley in the 1960s." When folk music became popular, Fahey was disappointed. He found the style, as popularized by groups like the Weavers, insufferable. Unsurprisingly, he considered those who played such music to be largely unsuitable as allies or friends. Others soon took note of his negative attitudes. Many had ventured west to find a more freewheeling, exotic lifestyle; they were ready to experiment and cast off the shackles of 1950s repression in the coming age of sexual revolution. But in Fahey's estimation, many of the musicians and fans lacked the ability to think critically and blindly followed popular trends. Always an iconoclast, Fahey found himself starkly at odds with the Berkeley scene. So, in the fall of 1964, Fahey moved to Venice, California. Earlier that year, at the UCLA Folk Festival, he had met a like-minded scholar who believed in both the purity of bluegrass and the insincerity of the current folk scene. D. K. Wilgus had just been hired to start a new graduate program in folk studies at UCLA. Fahey was an ideal candidate, and Wilgus encouraged him to switch schools. Folk studies would offer him more than any of his previous academic experiences, and directly intersect with his music. The change would also provide an ideal opportunity to make the transition away from the town he despised so much. Berkeley had offered him little in terms of intellectual stimulation, but there were personal reasons as well. After years of pursuit and hurt feelings, his on-and-off romance with Pat Sullivan finally came to a concrete conclusion. 
After her marriage with Denson had ended, she began dating Fahey. And for a short time, they lived together and he was joyously happy. She left him in a matter of months. Fahey was furious. During an angry conversation he threatened her, telling her that he was going to wait a year or two—until she least expected it—and then kill her. Frightened, she told university officials that he was stalking her. He tried to focus his energy on new hobbies such as karate. The school psychiatrist, after a few sessions, determined there was nothing medically wrong with him and suggested he reconnect with the church; he should become more proactive so that he could meet a nice girl whom he could marry and who would take care of him. This would be the end of Pat Sullivan in his life—although she would later be immortalized in his liner notes as "Evil Devil Woman." Pat was a teenage fantasy, and Fahey had grown into a man. Once he became well known for his music, it was a boon to his personal life. He immediately attained a notable presence on the UCLA campus because of his emerging status as performer and recording artist. In addition, he finally had a place where he could seriously dive into his specific subject of expertise. He chose Charley Patton as the sole focus of his thesis. He would dissect Patton lyrically and musically, breaking down every verse, measuring and charting their structures. One of Fahey's neighbors was Barry Hansen, a fellow classmate in the folklore program. Hansen had a vast record collection and later went on to a career as famed radio personality Dr. Demento. At UCLA, he became one of Fahey's closest supporters. Both were dedicated to their pursuit of rare music, so they naturally had a lot in common. Hansen's easy temperament and wacky sense of humor appealed directly to Fahey's sensibilities. Hansen lived at 525 Grand Boulevard. The next lot south had three small houses, back to back. 
Fahey occupied the one in the middle: one room with a tiny kitchen and bathroom. The floor was usually littered with dozens of empty Coca-Colas. Fahey still drank it constantly, usually three quarts a day. At that time Venice was still considered a rough part of town, riddled with crime. The sounds of police sirens and bar fights often echoed through the streets. Fahey seemed oblivious to the chaos. "One time in Venice we were hanging out all night, drinking a lot. I like whiskey too," recalls Sam Charters. "I remember it was very dangerous where he lived. There was a whole line of police right on John's corner, hiding. There was a coffee shop there, and there were two bikers sitting in there talking about Sartre's existential theory. And that's what the cops were waiting for, these two bikers to come out. And there was John, living in the middle of all of this, people getting shot. In a way, he self-created a hell that he lived in." Another member of their record-obsessed entourage was Alan Wilson. Wilson was a collector and guitarist Fahey met in Boston in the summer of 1965 while in town playing shows and recording songs for his album _The Transfiguration of Blind Joe Death._ Aside from music, they had a great deal in common. According to Rebecca Davis Winters's biography of Wilson, he was similarly troubled. Described as painfully shy, with a demanding father, Wilson had trouble socially. This was due in large part to his notorious lack of personal hygiene. He often had to be told by friends to change clothes or bathe, as he would never think to do so on his own. As such, he possessed a pungent body odor—even among the liberal, freewheeling set. Wilson existed mostly as a catatonic who lived on barely anything but focused so intently on the country blues and guitar that his knowledge and abilities were stunning. Because Wilson was regarded as a true talent to those who knew him, Fahey decided to bring him back to the West Coast. 
The two bonded intensely, relating to each other through their common unrequited romances, passions for musical obscurities, and family struggles. Fahey helped Wilson escape his unhappy East Coast life, something about which Fahey could empathize. Fahey kept Wilson around as a roommate, paying him a tiny fee in exchange for access to his astounding transcription and musical notation skills. Wilson's unacknowledged contributions to Fahey's thesis would be invaluable. Though he was enthusiastic, there were limits to Fahey's scholarly abilities; he never learned to read, write, or transcribe music. Outsourcing this portion of the work left him plenty of time to expound on Patton's lyrical themes. He'd count how many references to death occurred in the musician's catalog, how many positive and negative references to women, and other sociological concerns in an almost pathological map of Patton's work. He and Wilson would sit up all night listening to old blues 78s and talking about their childhoods, commiserating on each other's sorrows and insecurities. When they would socialize out of the house, the place to go for serious blues heads was Bob Hite's house in Topanga Canyon. Hite was a gregarious host; with a heavy frame and long hair, he earned the nickname "Bear." Musicians often came around the house for informal jam sessions. Hite was interested in putting together an electric jug band to play traditional tunes, and his large, commanding voice was perfect for leading a heavy band. Fahey invited Wilson, and Wilson and Hite hit it off. Another regular was guitarist Henry Vestine, who came around after he was kicked out of Frank Zappa's band for excessive drug use. By late 1965, Hite, Wilson, and Vestine formed the core of the electric blues band Canned Heat. They cut their teeth on the L.A. club circuit, their set consisting of high-energy takes on classic blues songs. Onstage, Hite was the host of the band, just as he was the host of so many parties.
Wilson bloomed as an expert guitarist, his exactitude envied by other blues players and his encyclopedic knowledge of country blues on full display. Vestine, in sharp juxtaposition to Wilson's traditional style, played wild guitar solos, increasingly psychedelic in technique, with howling feedback as a counterpoint. Focusing more on the postwar electric blues of artists like John Lee Hooker, Vestine played more sustained guitar runs and arpeggios against the rhythmic chug of Wilson's twelve-bar blues. This explosive combination of influences and styles eventually found international acclaim, but from the beginning, Canned Heat's common link was Fahey. In turn, he supported the band as individuals and encouraged them; though not part of the rock scene himself, he thought the band well-intentioned. With interest in his work growing, Fahey began to play more concerts, getting booked in clubs and at folk festivals. Before then, his live experiences had been largely informal; back in Berkeley, he had played casually at parties or in smoke-filled clubs and coffeehouses on open mic nights. Increasingly, the pressure of being a professional entertainer in front of paid crowds began to weigh heavily on his nerves. As the demand for folk music surged, Fahey reluctantly forced himself into the role of showman. He was ill prepared to deal with the anxiety. Of all of the aspects of being a professional musician, live performance proved the most challenging for him. Stage fright became a powerful terror. When faced with performing in front of a paying audience, his aggressiveness came to the forefront as a defense mechanism. That trait soon turned into self-destruction. To combat his fears he turned to alcohol as a crutch, often in copious amounts. His combination of insecurity and appetite was toxic. He infamously mixed bourbon into a large bottle of Coca-Cola and drank throughout his set, sometimes to the point of obliteration. 
As easy as it would be to keep silent, he often became vocal about his tensions. He sometimes became delirious on stage, ranting about politics or lecturing the crowd about how drugs, LSD in particular (although he had never tried it), were for the weak-minded. He seemed to challenge the mores of his new audience, if not out of spite, then in defense of his own fragility. In his drunken state, he strove to separate himself from the culture in which he found himself immersed. His music drew the crowd in, but the man himself often became the greater spectacle. "I wouldn't describe him as a hard-core bigot or a right-winger, but he had grown up with old southern attitudes and understood them very well and sympathized with them," remembers Barry Hansen. "As he came to realize that many of his California listeners detested those attitudes, he would assume the role of a 'redneck' and bait his audience, using the n-word and all sorts of nasty language. I could be wrong, but I think he did that more for the sport of it, to get a rise out of people, and perhaps to make them realize that not everyone thought the same way as they did." Fahey's natural tendency to be an instigator led to frequently uncomfortable performances during which he would openly goad the audience. Rather than assume the role of the victim, he lashed out. "I was playing an Al Capp role, calling them Communists and using the word 'nigger' and things, just to see if they really had any backbone," said Fahey. "Nobody ever said a word." Audiences were not there to be lectured about their lifestyles or politics, but Fahey's vehemence left few willing to challenge him. He considered the left wing to be false and inauthentic around the issues of the working class, although he himself was a middle-class academic. He found the student Socialist and left-wing extremist movements to be obnoxious, noisy, and impotent.
The power and force of his music matched his anticonformist viewpoints and left many listeners isolated in its wake. While some who came to his shows were advocates for mellowness or peace, Fahey attempted to channel darkness and dread through his music. Death continued to be a central theme of his work. His attempts to communicate openly with his "duped" audiences were poorly received. The hippies didn't appreciate the negativity Fahey brought to the table. David Cohen, guitarist for Country Joe & the Fish and a prominent musician on the Berkeley scene in his own right, recalls, "I thought Fahey was rather dark, and I didn't much care for musicians who drank on stage. I thought it was rude to the audience, which he certainly was, and contrived. Sometimes his pieces seemed to go on forever. He was a very difficult person to be friendly with, so after a couple of attempts, I stopped trying. Personally, I couldn't understand the fascination everyone had." Sam Charters later infamously proclaimed Fahey to be the only artist that he knew whose sales went _down_ following live shows. "I remember one night at a show in New York. He was sitting there with his bottle of whiskey in a paper bag, another bag that he spit into, and a two-quart bottle of Coke. It was a rather large crowd, and someone requested one of his songs. John said it was a hard song. He lit a cigarette and we watched him smoke the whole cigarette silent and looking off into space. He picked up the guitar and couldn't play it. It was too hard and he gave up." His intense consumption escalated as his schedule expanded. He was a wreck. The audience became Fahey's victims—it being their fault, albeit unwittingly, that Fahey had to endure the torture of performance. He wrote one article called "Performance Is War" for a Canadian paper called the _Georgia Straight,_ in which he fantasized about killing his audience then committing suicide onstage. Put simply, he lacked the disposition of an entertainer. 
Fahey felt he functioned best as a scholar and composer, in isolated rooms, cutting tape or researching the minutiae of prewar blues 78s. This attitude had an obvious detrimental effect on Fahey's career as a musician. His reputation for drunken unprofessionalism and horrible stage interactions cost him access to a wider audience. While Fahey himself had little interest in the commercial world, those working at his label and management were quite invested in his success. Marketing a temperamental, antisocial guitar maverick was a difficult task, and he did nothing to make it any easier. "He was very shy, which made him an awkward stage personality," says Denson. "He was known for things like smoking a cigarette, and between drags impaling the cigarette on the end of one of his guitar strings, or stopping mid-performance of a piece to take a long swig of Coca-Cola, then resuming the piece at the exact point at which he had stopped. I recall one traveler from Czechoslovakia who heard him perform in Berkeley and left saying, 'the man is just a clown.' John missed most of the potential he had for projects on a national or worldwide level because he could never adapt socially." His social deviations were not merely combative, however. Having an audience gave the prankster in him a chance to subvert people's expectations. Ever the absurdist, Fahey often crossed the performer/audience divide. He would use his emerging cult status to his perceived advantage, however heavy-handed. Once he got a small mimeograph machine and copied a note that read that if he didn't find a girl who would marry him by the end of the night, he was going to kill himself. He then placed these on the tables before he performed. Two girls came up to offer him their hand, but he didn't like them and declined. Occasionally Fahey conjured sublime performances that exceeded the recorded versions in terms of performance and variation. 
One album, _The Great Santa Barbara Oil Slick,_ was culled from two live performances in the 1960s and released posthumously in 2004. On it, a listener can find an easygoing Fahey performing a heartened take through his catalog. The audience laughs at his mild banter, and both performer and crowd seem happy. One can hear the appeal of Fahey's extended guitar work in a live setting: it is both hypnotic and all-encompassing. Never does it feel lacking in dynamics. On such good nights, Fahey successfully transformed the solo steel guitar into a concert instrument. During the peak of the 1960s drug counterculture, audiences expected to be spellbound. Many ensured that this would be the case by coming high to the gigs. "I think he wanted people to listen, but he wanted them to overhear him playing to himself," says Charters. "They were there to notice. In Berkeley, when the audience was totally stoned, all they wanted was a mellifluous sound that was long. I did see him do whole theater concerts out there where the audience walked in stunned and walked out stunned." While his career blossomed, his love life remained in disarray. All of his appetites were extreme in their fervor, but none seemed to invade his music like women. He had storybook ideas of true love that couldn't possibly match reality. When he fell for a girl, he fell hard— and he presented these various romances throughout his recorded work. He felt that deeply emotional circumstances prompted his best compositions. Fahey needed them as inspiration, giving them direct attention in his song titles and liner notes. Among the most misguided of his romantic quests was the subject of one of his most beloved albums. Fahey met an attractive young lady named Linda Getchell, whom he quickly became infatuated with. Not since Pat Sullivan had a woman captivated him to such an extent. Getchell was taking summer classes at MIT when Fahey performed in Boston in 1965, but she also lived in Southern California. 
He proposed to her, but she turned him down, thinking the eccentric guitarist too much to handle. Still, she adored his music and the attention he gave her. She kept him at arm's length with promises of eventually being ready to return his affection. He wrote and recorded a song for her, boldly entitled "Beautiful Linda Getchell." Their relationship continued back on the West Coast while Fahey attended school and Getchell worked as a weather girl at a local television station in San Bernardino. She called a dejected Fahey and invited him to attend her birthday party there, a few hours south of Los Angeles. Fahey complained to her that the only reason she invited him was to play guitar for her friends and that she didn't care about his presence. He told her that all he wanted was a piece of cake and that he really didn't want to play any music. Getchell insisted that he need not worry. Fahey and a few friends, including Al Wilson and Barry Hansen, drove down from Los Angeles to the party in Fahey's '55 Chevy. Getchell lived in the back of an older woman's house, which had a large backyard. When Getchell asked him to play a few songs for her guests—even though he had explicitly asked her not to—he became sullen. Feeling rejected and used, he reluctantly agreed. In the process he became toxically drunk. He stumbled off after a few short songs. His temper got the best of him. While most of the guests were in the backyard, Fahey and Getchell got into an intense argument in the kitchen. He accused her of leading him on while rejecting his advances and proposals. Furious and drunk, Fahey allegedly grabbed Getchell by the hair and slapped her in front of a few stupefied partygoers. "I remember he broke some of the landlady's china," recalls Hansen. "Linda was totally mortified. We beat a hasty retreat. John barfed out the rear passenger window; the stain stayed on the fender for weeks afterward. 
As far as I know, that was the end of beautiful Linda Getchell as part of John's life." The incident provided the title for Fahey's fifth album, _The Great San Bernardino Birthday Party & Other Excursions,_ released in 1966 on Takoma Records. The title track was inspired by Fahey's epic rejection at the hands of Getchell. (He later claimed the last six notes express "futility, a hopelessness and general existential despair complicated by ontological absurdity," themes far from the political or romantic.) Compositionally it represented a departure from earlier material. He incorporated bold new recording techniques that helped take the music further from traditional roots. The pieces grew longer, with experimental passages littered throughout. On the twenty-minute title track, Fahey splices various takes together, creating unplayable transitions through editing. A guitar will be in one tuning and then a tape splice will have the guitar playing in an entirely different one as if by magic. Additionally, the tape is run backwards, creating a hallucinatory effect. The technique of musique concrète (a term referring to the combination of acoustic, electric, and ambient sounds) adds a further dimension to Fahey's sound. Recalling the suite form often found in classical music, Fahey was taking the acoustic guitar into new contexts, eschewing the usual verse/chorus/verse structure of most blues, folk, and country compositions. Many call this album psychedelic, as its sound contains elements found on no other record filed under folk in the record store. The music's extended melodic passages often created hypnotic, repetitive patterns conducive to the effects of hallucinogens. Another musical anomaly of the record is the duet between Fahey and former roommate Al Wilson, who guests on the song "Sail Away Ladies" playing the vina, a South Indian instrument that predates the sitar. 
The track was the most mystical, Eastern-sounding song in Fahey's catalog to date, and became a favorite with taste-making UK DJ John Peel, who was largely responsible for cultivating Fahey's audience in Europe. Wilson learned the instrument in two days, making the recording on that second day of the pair's time in Boston. The song contains edited sections of an hour-plus session and combines blues with raga in a hypnotic swirl. This vivid approach was uncharted territory for the two blues fanatics. Despite the innovations, lingering traces of the past remain on the record. Anthony Lee makes his only appearance on a Fahey recording, playing organ on the standard "Will the Circle Be Unbroken." The track dated back to St. Michael's in May 1962 and was culled from Fahey's personal archive of recordings. Fahey and Lee's version has a more Pentecostal feel than most, because of the church organ. The song serves as a hymnal intermission for the album. Old friend Nancy McLean also makes an appearance, on a flute duet with Fahey on "900 Miles," which was recorded in the same St. Michael's era. The traditional songs mix with the more experimental techniques to make a hodgepodge collection of tracks. He may have traveled west, but the ghosts of Takoma Park still roamed the highways of Fahey's mind, as the hymns recorded at the church there attest. Fahey appreciated religion and its relationship with death, the concepts of salvation and freedom from bondage. By no means is Fahey proselytizing on the record; rather, he is exploring the language of the spiritual, a counterpoint to the hedonistic blues that also fascinated him. The fragmentation of sounds highlights the psychedelic feel of _The Great San Bernardino Birthday Party._ One moment the listener is in Maryland, another California, transported through different years with no real explanation or transition. Regardless of intent, it was the right record for an audience beginning to experiment with abstract sounds. 
Those who tuned in were finding music that fit a more altered way of thinking. The audience is also introduced to a new persona in the Fahey universe, the character of Knott's Berry Farm Molly. Molly was a young woman who lived out by the amusement park Knott's Berry Farm. Her courtship with Fahey was more reciprocal than his one-sided obsession with Getchell, although ultimately just as fleeting. The track "Knott's Berry Farm Molly" features a haunting finger-picked melody played in standard tuning in the keys of C and D. Then, using multiple recording tracks, Fahey incorporates backwards looping. Everything turns around in a swirl of warped, stretched guitar sounds, as if the tape were being manually turned in the other direction against its will. In fact, Fahey recorded a version of "Canned Heat Blues" by Tommy Johnson and ran the tape backwards to achieve the desired effect, putting it together alone in his house with a tape recorder. Fahey got the idea from the Beatles' "Rain," as Molly was a big Beatles fan. Perhaps the theme for this record lies in the dichotomy of Linda Getchell and Molly, and what they represented. There was the unattainable beauty Linda and the sympathetic and sweet Molly. Molly seemed to patch the wound of Linda and provide a counterbalance. _The Great San Bernardino Birthday Party_ brings Fahey closer than ever to telling the actual stories of his romantic pratfalls, however skewed they might seem. By the time the album was released, Molly was long gone. Fahey claimed she ended the relationship because she wanted a Jewish husband and she didn't believe he'd convert for her. He appreciated her genuine affection and dedication to him, but ultimately he wanted something more dynamic. Still, he thought about marriage and settling down. He soon found a partner who had all the things he was looking for. Jan Lebow first became acquainted with Fahey at UCLA. 
A Jewish girl from a self-described pinko liberal California family, Lebow came from a far different background than the twenty-seven-year-old Fahey. A pretty brunette undergrad studying zoology, she ended up interning one day a week in the graduate folklore department, where Fahey held court. He had already established a reputation around campus as an underground hero. She had seen him play and knew him as a musician of considerable appeal, and she played acoustic guitar as well. One day while hanging around school, they were introduced. He asked her to be his date for a concert he was performing at UCSB and took her along to Santa Barbara. The two began a romance, and it wasn't long before he proposed. Lebow seemed to possess the sensitivity needed to appreciate his artistic temperament, but she also encouraged healthy lifestyle choices and better living. "He understood that he wasn't really good at the day-to-day stuff and he needed someone to take care of him. That was me for a while," admits Jan. Keeping him in line was no small feat. To Jan, the attraction was based not only on his fiery musicality but the thoughtful, gentle side that he rarely showed anyone outside his closest friends. "Underneath the bravado and the outrageousness he was really pathologically shy. He was a fascinating man, very bright, very philosophical. He was eccentric, he was unusual, but he was together," she recalls. In Lebow, he had found someone who complemented him, and his desperation eased. A health food expert, she cooked for him and got him to quit his massive Coca-Cola intake. There were other habits that proved more difficult to break. "Even then he always had problems sleeping, so he'd be up late at night and play and then he'd sleep most of the day," she recalls. "I'm an early person and he was a night owl and he always said he had trouble sleeping, so he took downers at night and uppers during the day and that was his cycle. He smoked like a fiend." 
Despite their differences, the couple found enough common ground, and Fahey tried to adjust to the structure of a traditional relationship.

# 6

# VOICE OF THE TURTLE

"Turtles are my favorite animals. Everybody runs over them on the highways and that's why I want to kill everybody. That's one reason I want to kill everybody."

—John Fahey, interview, 1970

In 1966, Denson began consolidating his many ventures. Under the banner Joyful Wisdom Enterprises he managed Takoma Records, along with artists Fahey, Robbie Basho, and the far more successful Country Joe & the Fish. His headquarters occupied the second floor of a building on Adeline Street at Ashby Street in Berkeley. The building, then in a rough industrial neighborhood, had six rooms with offices for the small staff, a rehearsal space for Country Joe & the Fish, and two rooms where Denson and others lived periodically. The accommodations were far from luxurious but suited their purposes. Denson hired Tom Weller as art director, to give Takoma Records a modern overhaul. Weller had designed most of the concert posters for shows at Berkeley's famous folk club the Jabberwock, so he fit the natural aesthetic for Takoma in its new home. Weller's signature style echoed the growing radicalism of the hippie movement, in keeping with the vibrant, colorful visuals of the era. Unlike the stodgy, drab folk records of the time, his bold designs attracted younger, hip listeners and helped push the label to the next level in terms of visibility. The album art for Takoma became far more psychedelic and ornate, in line with the poster art explosion. Weller created iconography that would define the Fahey legacy for decades to come, though Fahey himself was almost completely uninvolved. "I never got any input from Fahey, nor any feedback," recalls Weller. "I had carte blanche on the Takoma covers and I just did what I wanted. 
Except one time he stuck his head in the door of the studio, said, 'Don't ever make anything that puke green color again,' and left." In the two-year span from 1967 to 1968, Fahey released five full-length albums: _Days Have Gone By, Requia, The Voice of the Turtle, The Yellow Princess,_ and _The New Possibility._ Takoma also repackaged Fahey's back catalog with new artwork. Beyond the new, more eye-catching cover designs, Fahey rerecorded his first two albums in stereo (the initial pressings were in the by-then-outdated mono format). As the lead force of the label, Fahey was well represented in its catalog, and he was pleased to have his records in print to meet the growing demand for his music. Though updated for the times, the content of Fahey's music was still death-obsessed. The grinning skeletons on the reissue of Fahey's third album, _Dance of Death_ (1965), echo the music's darker themes and appealed to a new audience of 1960s music fans searching for new sounds. The images were adapted from a book of medieval woodcuts depicting, yes, the dance of death. The harmony in the artwork of the recently reissued albums gave Fahey's catalog a sense of continuity. They were now subtitled with volume numbers (volume 1, volume 2, and so on), and together stood as the saga of Fahey in installments. Every record became a chapter in an ongoing narrative. The fact that he chose to rerecord the music of his first two albums in their entirety was evidence not only of his increased prowess on the guitar but also of his constant insecurity. He couldn't let go of these early pieces and revisited them constantly. While the redone versions sound far better in performance and fidelity, something of the charm and sloppy feel of the initial material remains. Takoma also kept the original versions of the first two Fahey albums in print, as _The Early Sessions,_ volumes 1 and 2, for the purists, obsessives, and simply curious. 
Taken together, Fahey's first ten albums contain all of the recordings and compositions he had been working on since he began playing guitar. Most of his Takoma releases were put together from different recording sessions and eras, like a patchwork, and they vary in style and fidelity. As Fahey's appeal continued to grow, Takoma began releasing his work in a flurry. In early 1967, Takoma also released a compilation album entitled _Contemporary Guitar,_ featuring contributions from Fahey, Basho, Bukka White, Max Ochs, and Harry Taussig. Here, the label's aesthetic was clearly presented: instrumental guitar composition. From blues to raga to folk, the album made a strong case for that vision while showcasing the myriad influences and variations under its umbrella. The Weller-designed artwork of the first pressing of the compilation was boldly psychedelic, echoing the iconography of the Fillmore Theater posters of the day and using the surrealistically illustrated typography typically associated with hallucinogens. Indeed, there was nothing traditional about how this record looked or sounded. And the instrumental template offered limitless possibilities for guitar players and solo musicians. While Takoma was a fine outlet for Fahey's home-recorded experiments, he yearned for the resources to attempt grander statements. He still wanted to record for larger labels, with which he could find a budget to actualize his more ambitious and elaborate musical concepts. The next step in his career would bring Fahey into the professional music industry in a whole new way, complete with publicity, marketing, and distribution, the likes of which Takoma had never seen. As audiences began to change, larger companies would become increasingly interested in underground music. Outside of Fahey's self-created, insular universe, folk music had become a pop phenomenon. Acts like Peter, Paul and Mary and the Mamas and the Papas were selling records by the millions. 
Record labels were racing to snatch up new folk acts. Bob Dylan had set the template for folk singer as superstar and made a fortune in sales for Columbia Records. Dylan's female counterpart in the 1960s antiwar movement was Joan Baez, whose ascent took struggling independent Vanguard Records to major-label heights. Started in 1950 by brothers Maynard and Seymour Solomon, Vanguard established itself as a vehicle for mostly classical music, issuing works by Charles Ives, Prokofiev, and Mozart among many others. By the early 1960s the label set its sights on the emerging folk sounds coming from the West Coast. As the symbolic home to a new wave of political dissent, Berkeley had given birth to the student protest movement, and the music scene there was thriving. Soon enough, A&R men from the major labels came to check out the scene. Vanguard hired Sam Charters as a talent scout. Having just recorded Buddy Guy, Otis Rush, and Junior Wells among others for the Prestige label, he had already worked with many guitar legends. Vanguard had their eyes on a number of big acts, Country Joe & the Fish among their top priorities. Charters successfully signed them and produced their first three albums. Released in 1967, the band's second, _I-Feel-Like-I'm-Fixin'-to-Die,_ became Vanguard's biggest hit since Baez. While in California, Charters was determined to check up on Fahey, having remembered him from their earlier correspondence. He had kept his copy of the original _Blind Joe Death_ album and set off to find the man behind it. Although Charters had rejected Fahey's debut, he changed his mind upon hearing his second album in 1964 and had since become an advocate of Fahey's work. He decided to drop by UCLA on a trip to Los Angeles to meet Fahey in person. He showed up early to one of Fahey's classes and sat in the back of the room as the students arrived. 
"John came in wearing a turtleneck, looking very much like a graduate student, and walked immediately to the blackboard; I was watching him, he didn't look my way," remembers Charters. "The teacher said 'John, there's someone here who wants to meet you,' and without turning around he said, 'Hi, Sam.' " The two quickly became friends. Fahey came to realize firsthand the difficulties of recording old bluesmen and had a newfound appreciation for Charters's work in that field. Charters, in turn, felt Fahey had progressed as a musician enough to record for a bigger company. Charters worked to sign Fahey to Vanguard, despite the guitarist's perceived lack of commercial appeal. Vanguard had prior cult success with Sandy Bull, a contemporary of Fahey and an improvisational guitarist with stronger jazz leanings and eclectic skill. However, by 1966, Bull had become so addicted to heroin that he was unable to write new material. In effect, Vanguard saw Fahey as filling Bull's shoes as the label's acoustic guitar maverick. Fahey's dream was for Vanguard to let him make a record with an orchestra. He had great respect for the label's classical line, which made up half the label's catalog. He asked Charters if he could have his record released on the label's classical imprint, but the request went unheeded as the label saw him as a contemporary folk artist. Still, it was a great step forward to work with a big New York record company with a studio budget. Charters was attached to produce the project. For the first time Fahey had the opportunity to connect with a larger audience. And the album he submitted to Charters would be among his most unusual and nontraditional recordings to date. One clear aspect of Fahey's intent was the incorporation of modern classical ideas and found sounds into traditional melodic American guitar forms. 
While several of his albums featured elements of collage and edits of field recordings mixed in, nowhere was this approach heard to greater effect than on Fahey's 1967 Vanguard Records debut, _Requia._ Over its course, the record samples Charles Ives, obscure brass band and string quartet recordings, Charley Patton, Adolf Hitler speeches, military sounds, field recordings of bridges, and other found sound objects to form a pastiche. Throughout, Fahey incorporates ambient recordings of nature and splices field recordings into the music. Bringing together such disparate elements, the collages show their seams, and often are at odds with Fahey's own tempered playing. The most experimental aspect is his use of alternate tunings, which he describes in his liner notes as a freeing experience. The source materials here work as additional narrative to Fahey's pristinely rudimentary melodies, creating a musical template as grandiose as, and often more extreme than, any "art rock" record of its time period. The album opens conservatively enough with a tribute to the then-recently departed Mississippi John Hurt. The central theme of the track is Patton's version of "Jesus Is a Dying Bed Maker." However, it is not the only Patton reference found on _Requia._ The second side is where things turn experimental, with a four-part suite entitled "Requiem for Molly," a tribute to his ex-girlfriend Molly. The recently unearthed Patton record "Circle Round the Moon" was edited and used in shards in a bizarre psychedelic duet of spliced tape with Fahey playing a haunting fingerpicked melody underneath. Using harsh cuts of Patton singing and playing interspersed, like the sound of paper being ripped loudly over Fahey's playing, the piece sounded unlike anything else. Later on, in "Requiem for Molly (Part 4)," in juxtaposition to the obscure Patton reference, Fahey echoes the melody of "California Dreamin' " by the Mamas and the Papas. 
Again, Fahey throws listeners' expectations a curveball; he was not above the pleasure of a popular melody. The album, like all Fahey albums to this point, ends with a spiritual: the hymn "Fight On Christians, Fight On," perhaps washing away the sins of his experimentalism with a taste of the traditional. _Requia's_ audio collages and panning techniques were jarring, yet certainly effective as a piece of psychedelia, a movement then in full bloom. The music's repetitive foundation—long, melodic stretches of open-tuned notes—made for a resplendent soundtrack to the hallucinations of LSD. Fahey's intentions were not to soundtrack a good time. Instead, he was celebrating a descent into madness and an indulgence with death, each track a reminder of mortality. A magazine advertisement for the album features a picture of a suit-clad Fahey sitting in front of a large tombstone with his name and the album details engraved on it. It's not hard to see why Fahey didn't connect with the hippie zeitgeist of the time; the morbid scene expressed the opposite of peace and love. Despite the incredibly difficult content of _Requia,_ Fahey knew that Vanguard would give him access to a wider audience. Certainly no other record being marketed and sold in the folk sections at retailers contained such confrontational radicalism and harsh audio content. Although the A-side seemed appropriate, the B-side, with its boldly experimental sounds, belonged more with the modern avant-garde of John Cage or Morton Subotnick than the formalism of songwriters like Pete Seeger. But records need to sit _somewhere_ on the shelves, and the record was still marketed as an acoustic guitar album. In the liner notes, Fahey largely abandons the abstract farce of his other albums and adopts a more reverent tone, looking back to his earliest inclinations as a musician.

Since 1948, after seeing the movie, _The Thief of Bagdad_ I composed cerebral symphonies every day. It was a pleasant pastime. 
But suddenly in 1953 I needed a full orchestra at my command— and me playing every instrument in that impossible ensemble (Impossible! It would have had to include a full Western HighArt orchestra, bagpipes, Rahet Ek Lek, Saron, Sarangi, Gender and numerous other instruments). Furthermore, there was no time to study composition, conducting etc. Besides I was too young. I needed it then, immediately, to drown out with music, the new disturbing sounds I heard emanating for my own fear and ignorance of the ways of men and women; from the contempt I felt for the fact that I had no driver's license; and so far of course I had to drown out the sound of the traffic on the road east. Now I have learned to drive quite well, have a license, and I see that I have learned the ways of men and women just as well, but unfortunately a little too late. So in celebration of time wasted I continue to play the guitar.

Fahey had assembled the various sounds he wanted to collage, and then the album was largely arranged by producer Sam Charters. Their often difficult relationship extended into the recording process. "This was the frustration for him, that he wasn't the one cutting the tape. It was the first time when he had to sit down with a studio engineer," says Charters. "John would drink and work alone at night. He was losing control. He was gaining more opportunity, better studio, better sound, all these pluses, but _Requia_ was not a happy album. When we finished I thought it was really depressing, but I respected John so much as an artist." Charters had to contend with an unresponsive artist in Fahey, but he knew what he was getting into. "I did a rough mix of it and I had a lot of fun doing it. If you listen to it on a good stereo you can hear the train pan around and around," says Charters. "I did an editing of everything and did the whole album and sent it to John and then nothing. Months went by and he couldn't finish the damn thing. 
He had never released anything without months of fiddling with it, and I told him it was done. I asked him over and over what he wanted me to do—phone calls, letters—and just nothing but procrastination. He'd taken a step beyond where he knew. It wasn't hostility, but he just wasn't used to telling someone else how to work with his own materials." This shift in process alienated Fahey. Not being in control, and not trusting Charters, left him deadlocked. On the album jacket, a stoic-looking Fahey sits with his hands on his knees, staring into the camera. His clean white shirt is tucked into a gray suit; he wears a modest blue tie. His hair short, his face clean-shaven, he presents himself in stark contrast to the hedonism of the left-wing movement that was running wild. Vanguard didn't expect much in terms of sales from the record. The label had no illusions about him being Joan Baez, Buffy Sainte-Marie, or Country Joe & the Fish, the label's breadwinners at the time. "He was a prestige artist, the kind of artist we believed in," adds Charters. "There had been, at the beginning, antagonism, and he respected what I did recording the blues, but he did not like the fact that I was working with the Fish because they were a commercial band. Finally we reached a deadline when the first year we had was up, so I released it—and John was very, very angry. I found out that this was his pattern—he did this to Ed Denson—but I was culpable." Upon the album's release in 1967, Fahey stated bluntly, "_Requia_ stinks. I was drunk during the recording sessions and they put the splices in the wrong places. Don't buy it. It's bad news." He painted himself as the classic temperamental artist, constantly displeased; his statements seemed a blatant insult to the label and producer who took a chance on him. The response to the album was solid but commercially unspectacular. The general public was more interested in electric rock music than a multigenre approach to instrumental music. 
"Vanguard needed a megahit," says Charters. "They didn't need to sell thirty, forty thousand. The first Country Joe & the Fish record sold 850,000 in the first couple of months." Fahey remained hesitant about careerism, in any respect. "What I have is this, and it is very important," he said. "I have a small little niche carved out here where I play guitar for people once in a great while. I make just enough money to get by and have a little left over. And that's all I want to do. For me, to work any harder than that would be unethical and greedy." Ultimately, those in his circle were less idealistic; they were far too busy with more pressing financial concerns. Charters, Denson, and even Al Wilson were all riding the rush of the 1960s rock 'n' roll commercial success as both Country Joe & the Fish and Canned Heat gained national attention. In the midst of it all, John Fahey and Jan Lebow got married in a small ceremony in California on June 24, 1967. "We got married here but we went back east to have a private ceremony with his priest," says Jan. "No family, just the two of us and the priest. The only one of his family that came was his grandfather's brother, who was a really nice man." Later that summer, the newlyweds went canvassing for records, driving through the South. Fahey showed Jan the ropes of the collector trade. She had never traveled the region, and she saw the trip as an adventure. "We were young and it was fun," Jan remembers. "We went to some general store in the South and they had a wind-up portable Victrola and they still had needles for them. He got a box of needles, and we would go door to door in these ancient black neighborhoods that they still called 'the quarters,' for slave quarters, and people would have a few records. We would go and we'd fill up the entire backseat of the station wagon and he would drive and I would play them on the Victrola. I'd have it on my lap. 
So I'd play them, and if he liked [the record] he would put it in the backseat; if he didn't, it would go sailing out the window." They would end up making around $1,000 after selling the bulk to private collectors upon their return, something they would do after each trip. Not a bad haul for a young couple in love. During this time, Fahey completed his thesis on Patton, a largely theoretical and obsessive take on the artist's known recordings. His findings were well received by the academic and blues scholar communities and helped cement his reputation as an authority in his field, far beyond the average fan. Seen as a formidable, scholarly work, simply titled _Charley Patton,_ the 112-page book was eventually published in England by Studio Vista in November 1970, in a limited run as part of a series of chapbooks. The book became a seminal volume among blues fanatics. Having completed his master's and newly contented in his personal life, Fahey focused on his musical activities with renewed vigor, stimulated by new opportunities for his art and his hopes for finding an audience that understood it. His visits to Berkeley and the Takoma offices became less frequent. Things in Berkeley were beginning to become far more commercial. The counterculture attracted attention around the world and money started to enter the scene. Yet Fahey remained outside, while his label partner Denson was flooded with new responsibilities as manager of Country Joe & the Fish. Although Fahey became the public face of the label, much of what it became known for can be attributed to Denson's vision and savvy. Fahey often criticized him for the decisions that he made in regard to marketing or design, and felt any success he had was innately due to his own efforts as a musician. But with Takoma's roster growing, Fahey could no longer claim the label as his own, and conflicts of interest began to emerge.
Denson wanted Takoma to be a commercial success, while Fahey was more preoccupied by his own recordings and touring. "As far as commerciality, I don't think John ever once made a single concession," says Charters. "Ed did. He was always pushing and believed totally in John as an artist. John demanded or expected a great deal of attention, but he also didn't want your attention. It was so damn complicated." In the era of free-spirited abandon and love, Fahey seemed staunchly prudish by comparison, set adrift among the counterculture by virtue of his psychedelic album covers and spellbinding music. Another notable connection was his collaboration with Texas-based psychedelic band the Red Crayola—later the Red Krayola. The band, led by guitarist Mayo Thompson, was an improvisation-based trio. In live settings, they often abandoned form entirely and largely made up their performances on the spot. Fahey found their approach freeing, and it appealed to his absurdist sense of humor. He performed live with the band in Los Angeles on July 3, 1967—just a week after his wedding. Their set retained little semblance of rock music, instead exploring the inaccessible. Having gotten along well, the band and Fahey decided to book studio time to record an entire collaborative album for the Red Crayola's label at the time, International Artists. Label head Lelan Rogers rejected the results, and the tapes were never heard again. This shelved collaboration was an anomaly; Fahey remained a solo act, preferring to work on his own terms. He then turned back to Takoma Records to create one of his most ambitious works. _The Voice of the Turtle_ offers a more in-depth look at the artist than any in his catalog. The album exists as a world unto itself, so rich with symbolism that it functions more as semifictional autobiography than an album. Here Fahey introduces his audience for the first time to the turtle, a recurring presence throughout his life. 
He considered himself an amateur expert on turtles and kept many as pets around his house, sometimes more than a dozen. If he saw a turtle crossing a highway he would stop, get out of his car, and bring it to the other side of the road so it wouldn't be harmed. Once when he and Jan were visiting a local pet store, Fahey became appalled at the conditions in which the turtles were kept. He decided to buy all thirteen turtles they had in the store, despite having no idea what to do with them. Once they rescued the creatures from their cages, Fahey had to keep them all in his bathtub. Whenever he or Jan wanted to take a shower they had to remove the turtles and then scrub the tub clean of turtle dung. They kept it up for a few weeks, but then even Fahey had to concede that the turtles had to go. Though he didn't have the means, he would have preferred to live with as many as he could. In terms of artwork and layout, _The Voice of the Turtle_ is Fahey's most sprawling and elaborate album, with a twelve-page insert that includes extensive pictures and liner notes. The package reads like a museum exhibit whose narration spirals into the absurd. "He didn't say anything about the cover, but for that insert of his, um, deranged ranting, he brought over all the materials and told me exactly what he wanted," remembers cover designer Tom Weller, "so I precisely followed what he said." The booklet, also titled "The Fahey Picture Album," is littered with images of people and places, notably Fahey's ex-girlfriends. Knott's Berry Farm Molly, Linda Getchell of the _Great San Bernardino Birthday Party,_ and Pat Sullivan (dubbed Evil Devil Woman below her picture) are all seen for the first time. He had written songs featuring the real women in his life, and they had also become recurring characters in his liner notes—but now his audience could see them vividly for the first time. None of the women were asked for permission—or even informed that their pictures would adorn his records.
It was a first for a record of any kind, and a communication to the audience that was personal but also obscure, as the public had little idea who these people were. Yet to those who knew him, it was a diary. "I'm not aware of any other musician who put out anything remotely resembling the presentation of _The Voice of the Turtle_ before that came out," says Barry Hansen. "He had these surrealistic ideas running around in his brain starting at a very early age, probably before he began recording. When people started asking him why the heck he named one of his instrumental pieces 'Stomping Around on the Pennsylvania-Alabama Border' [sic] he began to think that some of his followers might be interested in some of those thought processes, and eventually began writing them down. I had no idea that he was using a snapshot of me [for the insert of _The Voice of the Turtle]_ until he handed me a finished copy." In addition to his lovers, he included images of his friends and relatives, and even Takoma Park. Characters like Chester C. Petranick (the real-life inspiration for Fahey's pseudonym), his grandparents, and shots of guest musicians littered the pages. Other images included blues legend Son House's birthplace, the Takoma Park Funeral Home, and Barry Hansen holding a rare Jelly Roll Morton record during a canvassing trip. The back cover showed a photo of Fahey as a seventeen-year-old, exhibiting a public distance and an intense stare, his hair slicked back in a classic 1950s greaser pompadour. The overall effect was of an elaborately narrated photo album. Everyone became a part of his universe, like constellations, laid out in a virtual confessional art exhibit. "Notes, in those days, were often intended to convince people to buy a record, but that doesn't seem to be the case here," adds Hansen. "I think that in the final analysis he was writing for himself... 
using writing to sort out all the things that obsessed him, writing to help mitigate the ways those things disturbed, even tormented him. Eventually, of course, his writing became an end unto itself, still related to his music but not attempting to explain any particular pieces." Released on Takoma in 1968, _The Voice of the Turtle_ is a musical collage as well as a visual one. In his most elaborate prank, Fahey released two different albums with the exact same cover, booklet, and track titles—but with completely different recordings. Each pressing of the record contained a different sequence and music. It was yet another vexing display of Fahey's absurd humor. These alternate versions of the same album remain the most confusing part of his discography, since they were indistinguishable to the record-buying public save by the color of the center label. Fahey created an audio patchwork that properly mirrored the timeline of the notes, using recordings from throughout his personal archive. He starts the album with the traditional "Bottleneck Blues," a 1927 performance by Weaver & Beasley, with which Fahey plays along on the record—another prank on the listener. The track is credited to John Fahey and Blind Joe Death. In the process, Fahey literally plays on top of his favorite records and uses them as his own in an attempt to place himself into his beloved blues history. After "Bottleneck," the album shifts to the more modern, psychedelic ragas of the oft-reprised Fahey composition "A Raga Called Pat." Part 3 ends the A-side, and part 4 begins the B-side. These two tracks offer a surreal counterpoint to the traditional Tin Pan Alley nostalgia of earlier tracks, including "Bean Vine Blues." After "Pat," a flurry of guests appear on the record. Most notable are the performances (again from Fahey's personal archive) featuring Nancy McLean on flute. Also included are two pieces recorded on a 1966 canvassing trip with Hansen.
The pair went to Oklahoma, northeast Texas, Arkansas, and northern Louisiana. The official reason for the trip was to record two old-time fiddlers, Hubert Thomas and Virgil Willis Johnston. Arrangements for the visits had been made in advance, and they spent an afternoon/early evening with each fiddler. Hansen ran the UCLA Folklore Department's AMPEX tape deck while Fahey supervised the sessions, sometimes accompanying the fiddlers on guitar. The album concludes with one such collaboration, the spiritual "Lonesome Valley." Ending with a spiritual, he echoes an old Nashville tradition of praising the Lord to wash away the secular damage of earlier tracks—a typical end to a Fahey record. Fahey also uses the idea of ending with a spiritual as a social critique, attacking in his liner notes those who he feels are insincere in their musical presentation. Fahey sought the feelings those old blues records stirred, not their stylistic conventions. He rarely played them in those days; he was writing his own music, assured of his vision. In his liner notes, under the guise of a narrator Fahey evaluates his own art: "The recordings which comprise this record comprise a well defined yet non-directive channel of Mr. Fahey's roots and the progression of his music for the casual listener to be entertained thereby, the inquisitive listener thus may have his curiosity satisfied and the casual listener may, in the same manner, as it were be entertained," he wrote in his best faux-scholar voice. Fahey continues, "The former is exactly the point of this record: A history, chronicle and documentary recording—all in one—of Mr. Fahey's musical creations, and of what is, to the scholar, or the inquisitor of more significance, Mr. Fahey's musical influences which led to his creations." In doing so, he explicitly shows his influences as musical and biographical. Further, the women in his life are granted an influence equal to that of the musicians and scholars whose work he so greatly admired.
If one can feel the haze of the Delta while hearing Son House then certainly the silhouettes of the Adelphi Rolling Grist Mill and the presence of Blind Joe Death can be felt in the recorded work of John Fahey. The pieces form a view of Fahey from all angles—the professional, the myth, the collector, the romantic, the scholar— and form the sum total of his early American experience. The result sounds almost like a conversation the artist is having with himself, rearranging and editing the details of his life. All the while, the audience is privy to this process, and its voyeuristic indulgence becomes a justification for Fahey's self-obsessions. After all, Fahey himself fantasized about his work being viewed with the same level of devotion as that of his idols. "He was unassailably convinced of his importance," says Sam Charters. "As for mythology, didn't he do for himself what he did for Charley Patton? That was his blueprint. He became a legendary figure just like Patton was. When John did the Patton book we didn't know very much about him. It was largely conjecture, a musical interpretation in a way. So yes, I think this was the working template for what John was doing. It's easy to fall into. It's a kind of presentation that you understand has value and has a methodology. So he did his own version of a methodology, a working musicologist, and he did it on himself while creating the music at the same time. You get a wonderful parallel universe, of him creating the document while documenting its creation at the same time." Yet _The Voice of the Turtle_ was a financial disaster. The bulky packaging cost 15 cents more per unit to manufacture than the wholesale price at which the company sold it, so they lost money on every copy sold. According to Fahey, no one at Takoma figured this out until a year after its release. 
In the meantime, despite Fahey's frustrations with Vanguard and the outcome of _Requia,_ he moved forward with another ambitious creation for the label. It would ultimately become one of his most beloved and recognized achievements. In stark contrast to the brooding and difficult _Requia, The Yellow Princess_ offered the opposite in tonality and approach. In February 1968, Fahey simply asked Barry Hansen if he would help produce his next album, and Hansen quickly agreed. Charters, busy with various other projects, gladly handed over the studio reins to Hansen, whom he trusted implicitly. "With _Yellow Princess,_ John talked about working with bands, so I told him I would stay in New York and he could work with Barry—and Barry was great," says Charters. "John said he wanted a tape recorder to record the sound of the bridge on his way back to California so I bought him a tape recorder. John admired Barry's record collection, which was absolutely staggering. I knew Barry was sympathetic musically and I trusted what he would do, and I absolutely love the album. I think it's a masterpiece." The crisp, robust sound of the album showed that recording in a studio, as opposed to at home, had benefitted _Yellow Princess._ Hansen's easy temperament and vast musical knowledge suited him to the job of producer. And Fahey had hit a stride in his increasing proficiency on guitar. Fahey's compositions feel fully thought out, and his picking is perfectly executed. The title track, which begins the album, contains both a confidence and a peace not often found in Fahey's songs. Its melodic conclusion is technically dazzling, and notably bright for the oft-gloomy Fahey. Its title alludes to the name of a clipper ship that he saw in 1953 in Virginia. In fact, the piece was originally started in 1954, then eventually completed in 1966 in Bastrop, Louisiana, according to the album's notes. Hansen recalls the solo recording for the sessions as being effortless.
"The title song was the first song recorded, and John nailed it in one take. The other solo guitar pieces came off very easily as well. John and I had previously spent several evenings in L.A. going over the material and picking the best pieces. That process was very amicable." "Lion," a spirited tribute to his recently departed cat of the same name, is the album's third track, meditative but hardly morose. It is more of a playful celebration of meandering melody lines than a sentimental ballad. This is followed immediately by Fahey's only political ode, "March for Martin Luther King," of which he asks in the notes, "Why didn't we all? Maybe some of us will now; maybe it's too late." The track includes a military-style drum tapping; eventually a full backing band comes in behind him, with organ, bass, and drum. Fahey rounds out the side with an audio collage of field recordings, presumably of the bridge of its title: "The Singing Bridge of Memphis, Tennessee." (Vanguard would release "March"/"Singing" as a seven-inch promo for the album.) The second side begins dramatically, with the soaring "Dance of the Inhabitants of the Invisible City of Bladensburg," which transforms into a full-blown blues-rock outro. On it, he employs a full backing band made up of members of the band Spirit, whom Hansen called on for the sessions. Hansen recalls, "That session was star-crossed, because Robert Kennedy was shot the night before, and everyone was disturbed and distracted on account of that. It was my idea to produce an electric version of 'Dance of the Inhabitants...' and John was not thrilled with the idea but agreed to give it a go. It was of course his first experience with anything resembling electric rock music" (apart from his experience with the Red Crayola). "He groused and grumbled all the way but gave it his best shot." 
The addition of these band members helps make _The Yellow Princess_ accessible while leaving its artistic vision unyielding, since the new techniques remain sparse. The album features classic Fahey solo performances as well, such as "Charles A. Lee: In Memoriam," a track dedicated to the memory of Anthony Lee's father. He wrote in a section of the notes, "Noted icthyologist _[sic]_ who accidentally saved the lives of thousands of people through his research. Father of my best and oldest friend, Flea. C.A.L. was murdered in Brazil in 1966. I hardly knew him but I knew enough." The album ends with the epic "Commemorative Transfiguration and Communion at Magruder Park." The title alludes to a magical fantasy experience—the coming of his fictional childhood messiah, the Great Koonaklaster—about which he would later write at great length. The album cover is a colorful abstract representation of the album notes, with drawings of Fahey playing guitar, the mast of the clipper ship towering in the foreground, a turtle blending into the background. On the back is a black-and-white photo of Fahey staring off into the distance, his torso contrasting with a cloudy skyline, the wind blowing his hair. "I did not go east," Fahey wrote in the liner notes. "I took the wrong passage. Still, I thought, maybe I had gotten somewhere. Maybe I did. Who knows? But I am reminded of a quote from Whitman, which seems appropriate. '... Where is what I started for so long ago? And why is it yet unfound?' " His notes for the album are direct and noticeably avoid his trademark obfuscation, instead featuring a transcendental theme mixed with sincere self-reflection. However, the writing moves between lucidity and dreamlike prose. For the first time, he seems to be speaking in his own voice, not that of the faux scholar or the hidden narrator of dubious credibility. Fahey, like his blues heroes, had come to a crossroads.
Revealing his true self to his audience—and to himself—marked a new level of confidence for the artist as a writer. _The Yellow Princess_ remains a perfect idealized interaction between Fahey and the 1960s counterculture. With hints of rock and an expanded musical vocabulary, the album teems with new ideas. Nowhere can themes of death or misplaced anxieties be found. Marketed to a wide audience, the album offered a perfect entry into Fahey's music. The album sold reasonably well: around fifty thousand copies at the time of its release. For many, it served as a gentle introduction to the oft-dour world of John Fahey. Even the title spoke more relevantly to the times than the death odes and personal grudges of Fahey albums past. For once, a disposition of possibility and hope seemed to shine through the author's impenetrable, detached visage. While Charters was off with the Fish, Fahey got involved professionally with Vanguard's new West Coast A&R man and producer Denny Bruce, a onetime drummer for Frank Zappa's Mothers of Invention. Bruce had gone on to work with artists like Tina Turner, Magic Sam, and Buffy Sainte-Marie in various production and managerial roles. But unlike many of Bruce's clients, fame and commercial success were not on Fahey's agenda. Instead, he wanted to make orchestral records that harkened back to Dixieland jazz. Rather than continuing to explore the themes of the times, Fahey sought refuge in the uncoolest of pasts. Certainly no one would bother him there. However, the label seemed unwilling to open a larger budget for such a project. "We started talking about the concept and I took the budget to Vanguard and they told me I was out of my mind, and that a Fahey LP was one microphone and a couple of rolls of tape," remembers Bruce. "I said that's what he does for Takoma, but that he's here to make a more commercial product. Fahey starts calling the owners of Vanguard at like four in the morning and bugging the shit out of them.
So they dropped him from the label and fired me." However unceremonious a departure, his two LPs with the label helped bolster his popularity and established Fahey as a global artist with records distributed worldwide. No longer selling records by the hundreds, he had become a commodity in the marketplace. He was interested in success and acclaim but only on his own terms; he would not sacrifice his musical vision for commercial considerations. "John felt he was ordained to be successful because of the innate musical quality of what he was doing," says Charters. And yet Denson, at Takoma, had a different perspective. "Ed made his living managing Country Joe," Charters continues. "He was very aware of the value of copyrights. He spent all his time talking to lawyers and paying bills." Takoma Records was operational and producing at an accelerating rate, yet Fahey seemed unsatisfied. Between his resentment of Country Joe & the Fish's success, his dismissal of Băsho, and his loss of aesthetic control over the label, Fahey decided that he no longer wanted to work with Denson and bought out his partner's shares in the label in late 1968. Their friendship had ended long before, and Denson and Fahey would never work together again. Fahey moved the label to Los Angeles, slowly rebuilding it, and Jan took over managerial and accounting duties. With the streamlined efficiency of in-house management, the couple were able to make a decent living for themselves from the profits of the label, Fahey's performance proceeds, and record collecting.

7

VIEW EAST FROM THE TOP OF THE RIGGS ROAD B&O TRESTLE

"When a person is that ambitious they will invariably become disappointed in life. And they may hurt themselves and other people too with their ambition. And it is hard not to take the bait when there is a lot of money involved. But I was of the belief that everything usually comes to somebody that does very little or even nothing.
All you have to do is not consider anything crucially important. Or urgent. It all comes down to he who waits." —John Fahey, from _How Bluegrass Music Destroyed My Life_

Fahey once claimed to have been in a record store and seen a huge box of Bing Crosby's _White Christmas_ LPs. The clerk told him they always sold out. This innocuous tale had strong reverberations for his career. In another step of anticool genius, Fahey concocted a concept for an album that resulted in his best-selling and most famous work: _The New Possibility: John Fahey's Guitar Soli Christmas Album._ Released late in 1968, the album transcended genre and crossed over into the Christian market. People who had no concept of psychedelic or blues music bought the record for the acoustic renderings of "Joy to the World" and "Silent Night," with ancient European standards like "Greensleeves" thrown in for good measure. Presented on solo guitar, these Christmas tunes presaged the New Age movement. The album had vast appeal and would continue to sell seasonally for years after its release. But Fahey continued to flout expectations: even a Christmas album was fair game as a place to vent. Fahey attacked many of the traditions of the holiday in his liner notes. Religion was a topic close to his heart, and he relished the chance to launch into a diatribe given the right platform. "Christmas and Easter are the two most important events of the Christian calendar, and should as such be celebrated with all due awe and respect, but not underneath a pagan Christmas tree, or in a department store, or by searching for the illusive commercial-divine EGG," wrote Fahey. While his criticism of the commercialism of religion (written in the notes of a commercial product) was certainly in line with similar ideas of the time, his ruthless attack on sentimentality extended even further. "I seriously doubt if the Son of Man ascended to Heaven on a rabbit," Fahey wrote.
"I doubt if He sits on the right hand of Santa Claus. And children do not need to be told these things; it makes Christianity much less _possible_ for them in later years. Superstition does not aid Christianity; it does not need it. Christianity is not a religion of superstition anyway, although you may think it is." His criticism stemmed from his reverence of religion and his respect for the mystery of the spiritual, yet it read as if he were delivering a sermon from a pulpit. All theological debate aside, the album sold well into the six figures. Fahey seemed shocked by the success of _The New Possibility_ and its far-reaching effects. He later recalled, "Well, the arrangements are pretty good, but on the other hand there are more mistakes on that album than on any of the other 17 albums I've recorded. And yet, here's the paradox... this album has not only sold more than any of my others, [but] I meet people all the time who are crazy about it. I mean really love it. What can I say? I'm confused." The Christmas album set a benchmark in sales for Fahey and ensured him live gigs and record deals for years to come. There was also the business of a label to run. A year or so earlier, a demo tape had arrived at the Takoma offices from a guitar player no one there had heard of. Some dismissed it as sounding too much like Fahey. When the man himself heard the music, he was convinced of its commercial appeal. The tape, recorded by a young player named Leo Kottke, led to the biggest hit of either of their careers. Born in Athens, Georgia, in 1945, Kottke had been a musician from a young age. Although he studied trombone as a kid, he found something far more substantial in the acoustic guitar. After a short stint in the navy, he hitchhiked around the United States, ending up in the Twin Cities in Minnesota. Kottke was still a teenager when he saw Robbie Băsho perform and became immediately enthralled by the artist's twelve-string technique. 
Kottke tried to corner him after the show and ask him questions, but Băsho had no interest in his newly minted fan. "Robbie'd just opened for someone, and that guy started playing back onstage while I babbled at Robbie," remembers Kottke. "Just a couple years ago I realized who that guy onstage was, I can still hear that big _bong_ of a thumb on the E string as Robbie was running away from me....It was Fahey. It was Fahey yet to be... which I'm thinking is all we'll ever know of him." While Băsho ignored the eager fan, Fahey encouraged him. After discovering his tape, Fahey began a correspondence with the young midwesterner, and ended up taking him under his wing. Fahey tried to mold the young musician, encouraging Kottke not to sing but instead to concentrate on his instrumental work. Against conventional wisdom, the advice paid off when Kottke recorded an all-instrumental guitar album for Takoma. Though most labels likely encouraged vocals in order to sell records and get radio play, Kottke instead recorded the epic yet simply titled _6- and 12-String Guitar._ Released in 1969, Kottke's Takoma debut featured a high-energy, virtuosic display of guitar prowess and became the label's best-selling release, selling over half a million copies. At the time, Fahey was still suffering from severe sleep deprivation. He had recently been overdoing it with his prescription sleep medications in an effort to combat his insomnia. The number of drugs required to get Fahey to sleep became massive. The results wreaked havoc on his already fragile mental state. He was often confused, and his moods raged while he attempted to combat his ailments. Even with Jan looking after him, he remained adrift in a sea of emotional chaos. Worse, his drinking was steadily growing. He was in such a stupor that he often forgot his concerts and his loud behavior; his blackouts grew in frequency and intensity. 
He had different doctors in various seedy parts of Los Angeles who refilled his prescriptions numerous times over. By telling them he was a musician set to go on tour he amassed stores of strong pharmaceuticals. When Kottke came to visit Fahey for the first time, shortly after his album's release, Fahey was completely dependent. "John called me into the bathroom that first night at his place to show me what he needed to get to sleep, and to tell me, standing there in a sweat suit with a blindfold on his forehead and earplugs in his ears, not to wake him up," recalls Kottke. Fahey's isolation seemed bizarre to the young guitarist, who implicitly trusted his new mentor, even in the face of severe uncertainty. Fahey's letters to him were littered with swastikas and German, so he actually seemed more normal in person than in correspondence. "I can't figure how he survived as long as he did, but I do wonder," Kottke adds. "I saw him take two chloral hydrates, two or four 10 milligram Valiums, two 100 milligram Thorazines, a couple Placydyl....He ate the pills in front of me and then exploded out of his bedroom an hour later, beet-red, yelling at me for waking him up with the television and waving a gun in my face. And it only happened once. And it just seemed normal....John could do that." The gun-toting incident didn't faze Kottke too much. Surviving that first night, he figured nothing worse could happen. He was right; afterward, Fahey became calm toward Kottke and enjoyed his company. The two got along well—they genuinely liked each other. "John had so much contempt for all his peers. He had no reason to even consider them on the same planet," says Charters. "Leo was an exception because he came under John's wing." In public Kottke came off as a charming "everyman" kind of guy, lacking the immediately visible dysfunction of Fahey or Băsho. His onstage persona was far more relaxed, and he often told elaborate humorous stories between songs. 
A humble midwesterner, Kottke had a sincerity that appealed to a broader demographic than his labelmates did. Kottke was just happy to be in the game. His affability and charm translated to adoring audiences. His lightning-fast playing made him an instant smash with tracks like the dizzying "Vaseline Machine Gun." In the process, Kottke cemented his status as a guitar icon with a singular album. The success also brought increased exposure to Takoma Records and, of course, Fahey. The pupil's success opened doors to new career opportunities for the master. And major record labels became aware of instrumental guitar music's potential to sell to a large commercial audience. Though their business had grown exponentially since Fahey took it over outright, Jan still covered all aspects of it herself, from keeping the books to shipping boxes of records to distributors. In order to keep Kottke's album in stores, many other Takoma releases were left out of print for months or years, some never repressed at all. Whereas Băsho's sales floundered, Kottke became a sensation and Fahey felt vindicated in his A&R skills. Fahey was proud of himself for having the vision to see how successful his protégé would become, and Takoma Records had an influx of cash, enough to hire a ramshackle staff. Fahey's peers were reaching new levels of success; both Country Joe & the Fish and Canned Heat made huge splashes at Woodstock and were off on promotional tours around the world, selling records and having hit songs on the radio. Country Joe had a number-one song with his antiwar anthem "I-Feel-Like-I'm-Fixin'-to-Die Rag," which catapulted sales of the band's Vanguard debut to over a million copies. Canned Heat had a top-thirty hit with "On the Road Again," sung by Al Wilson in a voice that recalled Skip James, making him an unlikely pop singer. When Jan had taken over the books from Denson two years prior, Takoma was grossing $10,000 annually.
After Kottke and _The New Possibility,_ their yearly earnings had grown to $100,000. With a string of successes, Takoma was now on the radar of aspiring musicians and outside artists alike. Labels in those days occasionally had artists stop by unannounced in an attempt to get noticed or to audition on the spot, especially in California. On one such occasion the Takoma office received a surprising visit from a group of uninvited guests. "These beautiful, young, scantily clad women showed up at the Takoma office with a one-inch demo tape that they wanted us to play," Fahey remembered. The young women claimed to live on a ranch in the desert and invited the staff to come see them there. They called themselves the Family. Acting on behalf of their spiritual leader, Charles Manson, a frustrated musician who was a close associate of the Beach Boys until his aggressive ways alienated him from them, the girls sought Takoma as a potential new home for their guru's unreleased album. "Before they left those girls fucked everybody in the office, except me. And everybody in the office caught gonorrhea—except me. Later on we realized who they were," said Fahey. The rest of the world soon knew who they were, too, as names like Susan "Sadie Mae Glutz" Atkins and Lynette "Squeaky" Fromme eventually made international headlines. Although a relatively tame visit from the Manson Family—considering what they were capable of—the encounter certainly indicated that Takoma was growing in profile on the hippie scene. As Fahey began to attract attention overseas, he made his debut European appearance in London. Britain had a huge traditional folk scene of its own, and the UK guitar elite turned out in droves to glimpse the mysterious John Fahey, the man behind Leo Kottke's Takoma album (by then, Kottke was a major headliner in the folk guitar world).
Rather than dazzle the crowd with technical guitar playing, Fahey demonstrated his American Primitive technique to a less than spellbound audience. Among the artists in attendance were John Renbourn, John Martyn, Ralph McTell, Roy Harper, Mike Cooper, and Bert Jansch. Singer/guitarist Michael Chapman attended one of the London performances: "There was a famous club in London we all played called Les Cousins," he recalls. "That was the acoustic mafia's hideout. Fahey came and played and the back was wall-to-wall musicians. Everybody came to find out who this guy was, and he was dreadful. We kind of saw him as a one-off. No one else wanted to do what he was doing. We had English guys who were incredibly flashy, and that's what they had to do to satisfy an audience for an hour's performance. John saw things completely different. He would play as simply as he possibly could. He was doing that because that's what he believed in." He may have been rejected by the guitar-playing community in London, but he immediately found fans within the music press. While in London on May 28, 1969, he recorded a session for John Peel's _Night Ride_ show on BBC radio. Fahey revisited some of his earliest work, and the performance was a success. Peel was considered the preeminent tastemaker in British music culture, and his endorsement helped spread the music of Fahey to a wider audience abroad. Europe was filled with different social mores, which provided a whole new context in which Fahey could be misunderstood. This came to a head in the most traumatic failed collaboration of his professional career. One morning in late 1969, Jan awoke him with an urgent telephone call. It was MGM Studios, on behalf of acclaimed Italian filmmaker Michelangelo Antonioni. Following his international breakthrough, _Blow Up,_ Antonioni was riding high on his recent success. 
Antonioni was a fan of Fahey's records and wanted him to compose the soundtrack for his new movie, _Zabriskie Point._ The studio offered to fly Fahey first class to Italy the next morning to start work. Fahey had seen and enjoyed _Blow Up_ and, at the urgings of his wife and management, he reluctantly agreed. Suspicious of things that seemed too good to be true and unaware of the expectations that he would need to live up to, he grew nervous. Everyone in his camp seemed excited about the possibility except Fahey himself. With no prior preparation, he got on a plane to do a job for a man he had never met, on a film he knew nothing about. There are two distinct versions of what happened next. Fahey's version is extensively detailed in his memoir—and although almost entirely fictional, it gives a deeper insight into his psyche. His experience became one of the greatest stories in his mythology, a battle of wills that symbolized his contempt and misunderstanding of the culture in which he found himself. Just as in the blues, myths are often more important than the truth. According to Fahey, when he arrived, he was greeted by handlers and eventually introduced to Antonioni himself, whom he described as "civilized and erudite and intelligent and polite and suave and sophisticated. Of course, now I know that those are the most dangerous kind of people that exist." Antonioni wanted him to create music for a group sex scene that had been shot out in the desert of California. He asked Fahey to echo the beauty of young love with the juxtaposition of death, to represent the emptiness of America and its celebrated youth culture. By no means a free spirit, Fahey had a terrible time viewing the twenty-minute sex scene. He found the proceedings sick and wanted no part of it. Feeling as if he had been suckered into working on some high-end skin flick, he plotted his exit strategy. 
Since he was being paid by the day, and knowing that the overbudgeted studio would cater to any wild demand, Fahey booked several days of studio time with a group of local musicians. He instructed the group to improvise for hours; their only directive was to make sure the music never came together. They spent a few days making noise and enjoying the process on the company's dime. They ran out the clock unsupervised. Rather than completely sabotage the project, he claimed to have recorded twenty minutes of solo guitar in a unique tuning that he felt actually summoned the love/death tension of the California desert. Having spent time in such climates while studying his beloved turtles, he had a connection to the bleak emptiness of the sparse, sprawling landscape. Confident that he had completed his task to the best of his abilities, he played the music for Antonioni, who declared it a rousing success. With all parties placated, the project should have been resolved amicably. Things, however, fell apart when the two went out to celebrate at an upscale restaurant and began to talk politics. Antonioni started discussing how his film was a critique of American culture, which he thought was an abomination. In actuality, his and Fahey's views weren't that different. Fahey was hypercritical of the counterculture himself and wondered how anyone could live with it. His hangers-on nodded in agreement. Yet Fahey resented the director's insinuation that Americans were unsophisticated and endlessly materialistic. Perhaps in Antonioni's world this seemed the case, but Fahey resented the implied condescension. "I felt that my intelligence was insulted," wrote Fahey. "That, _qua_ musician, I was being treated quite rudely and wrongly and unethically. I thought that this was an insult to my mind, my reality, my commitments, and everything that I was, and everything that I stood for." 
Despite his similar objections to excess and hedonism, Fahey decided he wanted nothing to do with what he perceived as an anti-American propaganda film. Being the patriot that he was, he felt it his duty to stand up and defend his homeland. The disagreement escalated, and the two began shouting in the restaurant. Then Fahey punched the famed director in the face. The two never spoke again. Fahey's music was cut from the film—though he was still paid for his work—and was replaced with Jerry Garcia and Pink Floyd songs. The film was considered a disappointment upon release, and Fahey felt vindicated by its commercial failure. This entire account, as recalled by Fahey in his memoir, is actually a piece of revisionist history, a product of his wild imagination. "If he had done anything like that he would have been arrested on the spot," says Jan. "I think people believe what they want to believe." While Fahey did indeed travel to Rome at the director's behest, no sort of confrontation occurred. He became so blocked by the whole experience that he couldn't produce anything. He drank to the point where he was stumbling around and practically incoherent, according to Jan, who had accompanied him. The reality of the situation was that Fahey faced creative impotence and failed to compose any music. Always one to attempt to recontextualize the events of his life, Fahey rewrote his failure as one of rebellion and toughness, a nod to his teenage greaser persona. At the time, those in his camp excited about the prospect of the collaboration found their hopes dashed. And with them, a great opportunity for his career to progress beyond his 1960s cult status was lost. More troublesome was his psychological state after returning home. Jan recalls that his delicate condition crossed the line into full-fledged breakdown. "By the time he got home he had a psychotic break," says Jan. "He was just completely out of it.
He was taking more and more of his drugs, he was drinking heavily, he didn't know where he was, he didn't know who I was." He started suffering from increasingly elaborate delusions. "He believed he was possessed by demons," recalls Jan. "That this is what was causing all these problems. He thought he was _actually_ possessed by demons. He saw an orange snake. There were others. He saw them when he was awake." His daily care had become a full-time responsibility, and Jan and her family sought help at a mental clinic in Santa Monica. He stayed there for about a month. When he returned, things at home were difficult; the marriage was severely troubled. "Fahey and I had dinner at her parents' house," remembers manager Denny Bruce. "They were extremely nice—nice home, she's great and here's this guy with blue jeans and a denim shirt sitting at the table not really talking. Maybe Jan was crazy? I was polite to the parents and everything. She must know something I don't because she was putting up with it." Meanwhile, Takoma hired a full-time employee, Jon Monday, who first came aboard to fill orders (he would become the label's longest-standing employee). As a Fahey fan, he was aware of the artist beforehand. "I didn't know what to expect when I first met him. I greatly admired his records, so I was a bit awestruck. John was just plain folk. He had no sense of self-importance," recalls Monday. "Jeans, T-shirt, and tennis shoes was his typical dress code—at home, in the office, in the studio, or onstage." Monday, who had garnered some major-label promotions work, was eventually hired to take on the task of publicity for Takoma and for Fahey. "I developed a mailing list, went to local [radio] stations, and traveled on the road with John to promote his concerts and the records. I'd contact the local radio stations, get them to play John's records, and occasionally I'd get him an on-air interview. I was the director of promotion." 
They also added Kerry Fahey, a young man of no relation—though John often told people that Kerry was his cousin and treated him accordingly—to help with filling orders. Never one to suffer fools gladly, Fahey could be combative with the media, especially those unfamiliar with his work. Monday recalls one incident: "I had arranged to get him on a music TV show in L.A. called _Headshop,_ that was hosted by Elliot Mintz. I think Joan Baez was also on that same episode. Anyway, John was supposed to 'lip-sync' his guitar playing—they would play the record, and John would look like he was playing live. John was shown where to sit and get ready to play. The camera was on Elliot, who did the introduction: 'And here's John Fahey to sing a song off his latest album.' The camera is switched to John. John shouted, 'I don't sing songs. This guy doesn't even know who I am!' They had to stop the taping, reset the positions and cameras. John started pretending to play the song, but then started swinging his right arm wildly, like Pete Townshend. I'm sure it pissed everyone off, but John didn't care. I, of course, was worried about the effect on his career from these outbursts." Later that year _Rolling Stone_ magazine decided to profile Fahey for a feature story. The ever-candid subject was only too happy to open up regarding his personal instability. In the December 24, 1970, issue, writer Tim Ferris found his subject in a suitable state of disarray. "Fahey hasn't made a record in two years, since he did _Yellow Princess_ on Vanguard and _Voice of the Turtle_ [and _The New Possibility_] on Takoma," wrote Ferris. "During the first of those two years he was known as a man who would take a drink. During the second, he could be found three times a week in the office of a psychiatric ward in Santa Monica." Fahey left the impression of a man in trouble and on the verge of collapse. "I was really crazy, like, back in February. I was really nuts," he told Ferris.
"There's a whole sequence that I can't remember. One day I woke up thinking I was going to go crazy. I thought, 'Well, I'll sit this out for a couple of days.' Then that night I thought 'hmmmm, I'd better go to sleep or I'll go crazy....' But I couldn't. I was going to kill myself. Then Jan stole my gun. It really made me mad. I felt kind of suicidal. I looked for the gun and the gun was gone. Somehow I got over to St John's Hospital in Santa Monica." While most artists would attempt to shield their violent, suicidal dysfunctions, Fahey chose to highlight them to the reporter. In the chaos of the times, though, this went largely unnoticed. The dark side of excess touched those close to Fahey as well. Lost in the throes of success, Canned Heat guitarist and close friend Al Wilson died of a drug overdose. His death at a young age was a harbinger of counterculture decay that would soon become a cliché. They had long talked of Wilson doing a solo album for Takoma, but it never would be. "I will remember Wilson..." he wrote. "Of course because he was my roommate for about half a year. How could I forget his odor? And how could I forget the wonderful music he played and sang? And how could I forget all the things he taught me about music? That would be impossible." Fahey returned to audiences in 1971 with the sprawling _America,_ a record that reinforced his stature. Opening with harmonic minimalism, Fahey barred the twelfth fret, creating a chiming effect by holding his finger lightly over the strings while his right hand repeated a pattern. In New York, minimalist composers like Terry Riley and Steve Reich were gaining favor by employing similar repetitive techniques in their compositions. Fahey, using the acoustic guitar, created an album that stands alongside any long-form composition of the times in terms of scope and approach.
Originally intended to be released as a double LP, Fahey cut _America_ down by half at the last moment when he decided that a double LP was a harder sell than a single. With the album already in the test-pressing phase, Fahey decided to randomly pull one of the two LPs outright. While the majority of his Takoma releases had been hodgepodge assemblies of home recordings, this album was recorded at Larrabee Sound Studios in Los Angeles and features a clarity and tone that captures his performances immaculately. His classical influences again arise in the structure and makeup of his pieces. While it isn't new territory, the album represents a perfection of his stylistic combinations. He references his older pieces, revisiting their themes on "Mark 1:15," which for years would be one of his favorite songs to play live. "Out of all the songs I ever wrote, I consider only two of them 'epic' or 'classic' or in the 'great' category and they are both on this record," Fahey said of _America._ "Most of the melodic ideas existed a long time ago, i.e. the primary 'lyric' melody in 'Mark 1:15' is the same as 'When the Springtime Comes Again'..." The album ends with a take on Sam McGee's "Knoxville Blues," which resolves the record on a traditional note. With a theme of ecological conservation in the album's artwork, no liner notes accompanied the music. Instead a series of surreal, symbolic drawings by friend Pat Finnerty narrate the story of the poisoning of an ancient turtle that lived in the Adelphi Mill Pond back in Takoma Park. Both front and back covers feature a turtle fleeing the destruction of man. Fahey wrote elsewhere of the situation: There is a pulp-mill somewhere in Maryland. And this mill pours its refuse into what is now, but was not always, a land-locked lake. And in that lake lived an enormous turtle, (only one) very old, very large, his shell painted by moss and pulp. 
You can (or at least I can) hear his voice, or rather cry, sometimes late at night when everything else is still. He was there long before the mill came. The water is bad now, but there are still a few carp and cat-fish on the bottom for him to snap up and chomp on. For some reason no one else has ever seen him, and as an amateur herpetologist I should like to say that he resembles no species that I have ever seen or heard of elsewhere. There he spends his days confined to the polluted water. There is no outlet. He cannot make it to the sea. Nothing ever gets out of that lake. He basks and sounds, half conscious, half asleep, half alive, the first and last of his kind. The workers in the mill do not bother him; they mistake him for an old log. He waits for death in the dirty water but doesn't even think about the waiting. He is an old turtle, and having seen the horizon on all sides, there is not much more for him to think about. I used to go and watch him. He saw me too, I think. Sometimes I imagined we understood something of each other. But I could never tell what it was.

On the home front, following Fahey's release from the hospital, he and Jan attempted in vain to save their marriage. In the spring of 1971 they tried couples counseling. After a few sessions Fahey started going by himself. He was getting slightly better and even started to dress in suits for the first time. One night he decided to take Jan for a night on the town sporting his new look. He seized the opportunity to have some fun. "John started feeling better about himself, instead of wearing jeans and a blue work shirt, which he wore the entire time I knew him, he got himself a suit and tie, button-down shirts, and all this stuff," Jan remembers. "We went to the Troubadour and he took a cane and wore dark glasses and was bumping into people as if he was blind. That was John." With Kottke a runaway success, manager Denny Bruce had more leverage with which to create new business opportunities.
Several plans were in the works for Fahey and Takoma. They hatched an idea to use Bruce's connections to get Kottke a home at a major record label, with Fahey and Bruce attached as producers under the banner Takoma Productions. Bruce shopped the deal to Capitol Records and sold the three of them as a package. In 1971, they began work on Kottke's Capitol debut, _Mudlark,_ with Canned Heat's Larry Taylor on bass and a session drummer. Every time Kottke began to play, Fahey stopped him and asked him to retune or start over. Feeling the pressure, Kottke grew stressed at having to perform in front of a hypercritical Fahey. Bruce stepped in to try to rectify the situation. "After half an hour of not letting him play a verse all the way through, Leo is at the breaking point," remembers Bruce. "He asks to see me in the hallway and says he can't record with John there. It was intimidating that he kept stopping him and criticizing his guitar sound, he couldn't concentrate. He wanted the chance to make the record with the session musicians there. I took John aside and told him he was intimidating Leo and to just let me produce it myself, and that he would get the same money and not have to show up. He got a beaming smile on his face and said it was the best news he heard all year." Fahey fussed around for about an hour and took off. "It shocked me that John would want to produce anything, so it was no surprise when he split, but I was envious," recalls Kottke. "I remember his expression as he walked out: frozen." Without Fahey, the album became a moderate major-label success. Kottke spent the next few decades on a stable career path, which continued to subsidize Fahey and Bruce through their publishing arrangements. Fahey's improvements from therapy with Jan were short-lived and mostly cosmetic. Rather than see himself as a person who had problems that needed to be solved, he became enthralled by his own fascinating tapestry of dysfunction. 
He found his neuroses and their sources to be of endless interest. Jan didn't want to indulge him, and saw that there seemed little chance for a life for herself within the marriage. Fahey needed constant care and attention, and she could no longer handle the burden. They soon separated and Jan moved out. "Every day was something else, and he was really sick, the poor guy," says Jan. "It was miserable. The mood swings and all the booze and the pills and not sleeping. I didn't know how to help him. I didn't hate him. It got to the point where neither of us knew what to do. That was the worst. That was the lowest part of the whole thing. Even after I moved out, I used to take food over there three times a day. I couldn't abandon him, but I couldn't live with him either." Post-separation life was difficult for Fahey. He lived with "cousin" Kerry Fahey for a time. Jan and Kerry tried their best to get John off of drugs, once even flushing the contents of all his pill bottles down the toilet. An irate Fahey went ballistic and the police had to be called to calm him down. When Jan told them about the situation they suggested she calm him down with liquor—hardly a productive solution for an alcoholic. His proposed solution to their problems was for the two of them to run away to the woods and retreat from the pressures of society. He thought that if they lived alone and isolated, his hysteria would be calmed and he'd be better able to function. For her own sake, she decided to divorce him. Jan's visits became less frequent. She had dreams of her own beyond that of taking care of her husband, isolated from the rest of the world. "My life was going by and I was taking care of this man who I can't help that was getting worse and I felt like I needed to get out to save my own sanity, but I felt bad about abandoning him," she remembers. When they officially divorced in 1973, Fahey grew angry. 
Remembering his fury over his split with Pat Sullivan, Jan decided it would be best to cut off contact with him completely. They never spoke again, although he later recalled her fondly. Fahey's extremes were too much to sustain his marriage, and his divorce would be costly, both in lawyers' fees and in his settlement to Jan. Over the next few years, her share of Takoma was bought out, and Fahey pressed forward.

John Aloysius Fahey, five and a half years old. PHOTOGRAPHER UNKNOWN, COURTESY OF THE COLLECTION OF CHARLIE SCHMIDT

The guitarist as a young man, 1948. PHOTOGRAPHER UNKNOWN, COURTESY OF THE COLLECTION OF CHARLIE SCHMIDT

John Fahey, his mother, Jane, and his grandmother Catherine, August 19, 1962, Washington, DC. PHOTOGRAPHER UNKNOWN, COURTESY OF THE COLLECTION OF CHARLIE SCHMIDT

Fahey, September 4, 1962. PHOTOGRAPHER UNKNOWN, COURTESY OF THE COLLECTION OF CHARLIE SCHMIDT

Print ad for _Requia._ COURTESY OF VANGUARD RECORDS, FROM THE COLLECTION OF GLENN JONES

John and Jan Fahey get close, November 1967. PHOTOGRAPHER UNKNOWN, COURTESY OF THE COLLECTION OF CHARLIE SCHMIDT

Poster for Takoma artists John Fahey and Robbie Basho, with art by Tom Weller, 1967. COURTESY OF THE COLLECTION OF TOM WELLER

Vanguard studio sheet. COURTESY OF THE COLLECTION OF SAM CHARTERS

Fahey with tortoise. PHOTO BY JOHN VAN HAMERSVELD

Fahey and Denny Bruce, artist and manager, Hollywood 1971. PHOTO BY JOHN VAN HAMERSVELD

John and Jan Fahey enjoying a London vacation, 1969. PHOTOGRAPHER UNKNOWN, COURTESY OF THE COLLECTION OF JAN LEBOW FAHEY

John and Melody Fahey: happy faces on their wedding day, 1978. COURTESY OF MELODY BRENNAN FAHEY

Fahey's paintings. COURTESY OF THE COLLECTION OF BYRON COLEY

Live and electric in Aurora, Oregon, 1999. PHOTO BY MELISSA STEPHENSON

Romping with his animal friends. PHOTOS BY MELISSA STEPHENSON

Fahey's portrait of Dave Nuss of the No-Neck Blues Band. COURTESY OF DAVE NUSS

Live at the Salem Arts Festival, 1999.
PHOTOGRAPHER UNKNOWN, COURTESY OF THE COLLECTION OF MELISSA STEPHENSON

# 8

# OLD FASHIONED LOVE

"All I have ever done with music was to depict various emotions in an organized and coherent musical language, especially hate, fear, repulsion, grief, depression or feeling nothingness." —John Fahey, in his liner notes to _The Legend of Blind Joe Death,_ 1996 reissue of _Blind Joe Death_

Inspired by the movie _The Traveling Executioner,_ a dark comedy set in the South, Fahey and Bruce decided to focus on the traditional southern jazz experience for Fahey's next record, using original musicians from the 1920s as session players to add authenticity. The eventual lineup included Nappy Lamare on banjo and Jack Feierman on trumpet along with some contemporary studio musicians such as Chris Darrow (who played with James Taylor and Leonard Cohen) and Joel Druckman (Bonzo Dog Band) to even things out. Through Bruce's connections, Fahey signed to Reprise Records, an imprint of Warner Brothers started by Frank Sinatra. "Warner's was still thinking that if enough people told them something was cool, let's say Ramblin' Jack Elliott or Ed Sanders from the Fugs, they'd sign them," says Bruce. "These guys got deals too. John Fahey, sure, very well known, very influential. How much do these guys really sell?" The label was able to foot the hefty bill for the recordings, unlike Vanguard, which had scoffed when approached with the same idea. However, the process was not without its share of difficulties. "You had to get Fahey when he was in a good mood and wanted to have fun," remembers Bruce. "Recording with the engineer and the other musicians for the Warner Brothers albums, he would be late sometimes. He'd have this ritual where he'd go in the bathroom for half an hour and wash his fingerpicks in soap and have to line them up one by one on paper towels on top of the sink. I'd ask him what we're waiting for and he'd say that the picks had to dry so they feel right.
Then he came out and would smoke a cigarette and drink an entire liter of Pepsi and not say anything, just sitting there. The engineer and I had been waiting there for three hours already; all you could hope for is that he would get them in one take." Rather than expanding on his more experimental work, Fahey opted for an album of mostly straight versions of Dixieland jazz standards. For an artist who seemed so conceptually modern, the resulting _Of Rivers and Religion_ album felt antiquated and out of step, especially for an artist once vehemently opposed to revivalism. "When John began working with Dixieland musicians I was really disappointed," says Sam Charters. "To me those records were just the pits." Credited to John Fahey and His Orchestra, the album cover pictured a black-and-white photo of an old-time riverboat that looked as if it had been torn from the pages of an old book. Fahey actually took the picture himself at Disney World. It was a plastic simulation of antiquated themes, much like the album itself. The elaborate Dixieland band sounds enough like the real thing, but it is still only a copy. There is no integration of stylistic forms; rather it is played straight, with a slightly slow drawl. Still, several solo performances by Fahey resound with despair and raw beauty, such as "Funeral Song for Mississippi John Hurt," a reworked and improved version of the cut found on _Requia._ Without his "orchestra," he still shines, sounding even crisper with studio-quality recording; but with them, he seems drowned out. These profound solo moments were sparse amid the horn blasts of Tin Pan Alley bombast. He accomplished his goal of making Dixieland ensemble music and remained true to his vision, but the record never found an audience beyond his die-hard fans. In the liner notes, _Village Voice_ critic Nat Hentoff wrote, "I was not prepared for what I heard in this album. 
I've been absorbed in all kinds of music for a long time, and only rarely have the first few notes of a musician I'm listening to for the first time announced a wholly singular presence—an event." But there were very few who found the record as thrilling. Although Fahey's excitement about the material was sincere, the results came off as campy to many, and accordingly, it suffered from minimal sales. At a concert in San Francisco soon after the album's release, Reprise's head of A&R, Gary George, complained to Bruce that Fahey was boring. Working within the major-label recording industry, Fahey still refused to make compromises, artistically or politically. One of his favorite venues was an auditorium at UC Santa Barbara, which he always sold out. The promoter asked Bruce, as Fahey's manager, if Asylum Records recording artist David Blue could do a short opening set. Blue shared the same manager as Neil Young, and the promoter was looking to earn some points to get Young shows in the future. Fahey had known Blue from New York and agreed. "We're backstage and John is going through his ritual, washing his guitar picks, and Joni Mitchell walks in with her two managers via a limo," remembers Bruce. "She wanted to go out and play a few extra songs with David Blue. Nobody had cleared this with me. Fahey is looking at his watch and is now mentally ready to go on and he walked over to Joni Mitchell and said, 'You're not going out there tonight. Sorry.' Her managers started in on him—David Geffen was one of them; Elliot Roberts, who was Neil Young's manager, was the other—saying, 'Do you know who we are?' etc. Elliot Roberts said something that really pissed him off and Fahey said, 'If you don't get your ass and hers out of here in two minutes I'm going to beat the fucking shit out of you.' They hightailed it out of there. Fahey says, 'Who does she think she is? She tried to do this to me before at Swarthmore College. They purposely came late so she could play after me. 
She has how many albums out? Two or three? Well I have twenty, so fuck her.' That's the maddest I've ever seen him." Fahey remained at odds with a universe that didn't consider him a priority. Although the performances and recordings were rich and dynamic, _Of Rivers and Religion_ seemed like a crucial misstep, a squandered opportunity to capitalize on the major-label exposure and distribution of Reprise. Fahey still steadfastly had no interest in considering commercial appeal; he followed his muse no matter how it affected his career. As long as he could make a living creating records, that was enough for him. It remains something of a miracle that the label agreed to foot the bill for a follow-up, the equally disappointing _After the Ball,_ in 1973. The sequel also featured jazz accompanists. Not surprisingly, it tanked just as badly, ending his major-label run. "We were left alone to do what we wanted, and that's what he wanted," says Bruce. "Well, how many John Fahey albums had already been made by then? This was the next one. I knew I was in trouble when they only pressed two thousand copies." Still, Fahey had cemented his reputation as an iconic instrumentalist, and his achievements continued to garner him attention from aspiring musicians. Among those who sought his counsel was a pianist named George Winston. Thrilled at the idea of an artist releasing instrumental music on his own terms, Winston brought a copy of his demos of solo piano recordings to one of Fahey's concerts. "The show was at the Paul Masson winery, and John played first and Dave Van Ronk played second," remembers Winston. "He didn't say anything the whole time, would just go into the next piece. At one point an airplane went by and he sort of stared at it like it was interfering with the show. Everybody just laughed." Afterward, Fahey asked to hear some of Winston's demos and played them right there. Much to Winston's surprise, Fahey instantly offered him a deal with Takoma.
"John was doing everything I wanted to be doing. Playing solo instrumental concerts," says Winston. "He was recording other solo acoustic guitarists and he was making his own instrumental records. Those were all things I saw in myself and I couldn't believe anyone else was doing it. Not only had he recorded Basho, Kottke, and Peter Lang, but he thought piano fit into that. There was no one else in the world that would have recorded me at that time. Solo piano at the beginning of the glam rock era? I couldn't believe it. John came down when I recorded it in '72 and I did the final sessions alone. He was there at the studio but he kind of just read comic books while me and the engineer talked. It was real casual." The resulting _Ballads and Blues_ received minimal sales or attention, but it started Winston on the path to becoming a multiplatinum-selling artist in the soon-to-come New Age market. Fahey's effect as an artist and label owner began drawing other musicians into his orbit. He continued to eschew praise whenever it came his way. Winston once made the mistake of referring to him as a great composer and Fahey refused to speak to him, accusing him of being a groupie. "Now everyone calls him a composer," adds Winston. "He just wasn't ready to see himself that way [then], but I saw it right away. He was a great composer, not a great guitarist. Then after a few months, I guess he forgot, and we went back to being normal people, but of course he was right." In spite of his disdain for behavior he saw as sycophantic, even Fahey could be blown away. He was extremely particular about what he liked, but when he found music that moved him he became swept up in the excitement. At a live performance of Brazilian guitarist Bola Sete, Fahey experienced a life-altering musical moment. Sete was best known for playing with jazz great Dizzy Gillespie, but his solo compositions were textured, rich with flourish and emotion.
Using space and subtle shifts of rhythm, his songs had a more compositional feel than those of players on the folk scene, perhaps because his Brazilian roots lent him techniques and phrasings that were untouched by American players of the time. Meeting Sete after the show, Fahey asked him about the secret to his playing. Sete recommended meditation. Fahey wrote about the experience at length in a characteristically confessional editorial for _Guitar Player_ in the early 1970s. Fahey described first seeing Sete play in San Francisco circa 1972 while intoxicated. He went on to reveal that he had been high on drugs, daily for many years, and, in his own words, "walking and talking amongst the shadows." The purity of Sete's playing left him changed, much like his initial experience with the blues. "Few living people have had such an enormous influence on my life, my music, my soul, my religion—you name it—as has Bola Sete," Fahey wrote. Awed by his prowess, Fahey asked Sete to record for Takoma. Tucked away in Sete's archive was a solo guitar album he had recorded that had been rejected by other labels. A few years later, Takoma issued Sete's _Ocean,_ and for the rest of his life Fahey called it his favorite solo guitar record. Sete's playing was separate from the blues influence that trapped so many others. With this new influence, Fahey envisioned different directions and stylistic innovations. He longed for the same spiritual center that Sete evoked in his music. Upon discovering that the secret to Sete's intensity was meditation, he sought to cleanse himself of toxins. He believed that they distracted him, preventing him from communicating honestly through his music. Quitting his prescription pill habit, he turned to transcendental meditation. To facilitate his newfound sobriety, he began attending Krishna temples around L.A. He found the rituals spellbinding. 
The services reminded him of one of his favorite silent films, _The Thief of Bagdad._ "They had a service every day with singing and horns and amplified harmonium," Fahey recalled. "I'd go over there just for the beauty of it. I didn't believe in Krishna or anything." He seemed comfortable being a religious tourist, taking solace in the ritual. His new interest found its way onto his records as he continued to move away from shorter, song-based pieces to extended sidelong compositions. Fahey showcased this approach on his next Takoma album, _Fare Forward Voyagers._ Released in 1973, during the hangover of the 1960s rock 'n' roll crash, the album feels like a spiritual retreat. It marks a departure from his earlier material, featuring long-form ragas of alternating rhythms and tempos. The album finds him at the height of his prowess, delivering deeply hypnotic performances. The nuances of his fingerpicking combined with his stylistic integrations of American and Indian music made for a fully developed album. He dedicated the album to his guru, and the record contained a pamphlet for the "Yogaville West" retreat with an endorsement by Fahey printed on it. "I would like to introduce you to this healthy, spiritually based concept of living. The 46 people living here follow the ideals of Integral Yoga as taught by Swami Satchidananda. To the extent that I have practiced these techniques, they really seem to work." These were his only words for the album; no stories or fiction found their way in. Takoma Park, for once, seemed out of view. He never returned to this style of long-form playing, which makes _Fare Forward Voyagers_ unique in his catalog. The compositions were difficult for him to reproduce live, although he did perform the bulk of the material in a stellar performance at Carnegie Hall on September 21, 1973, as part of an evening of guitar music along with sets by classical guitarist Laurindo Almeida and jazz player Gabor Szabo.
Upon being introduced to the audience, Fahey barely acknowledged the crowd and launched into his set. After one continuous half-hour piece he picked his head up and was met with thunderous applause. Performing solo guitar at one of the world's premier cultural institutions, he had unquestionably succeeded in his initial musical goal, bringing his style of American Primitive guitar playing to the heights of the cultural elite. With Jan out of the picture, Fahey restructured Takoma. He wanted nothing to do with the actual business aside from choosing records, so he put Jon Monday's friend Charlie Mitchell in charge. The label threw together a compilation album featuring tracks from three of its star guitarists—Fahey, Kottke, and new arrival to Takoma Peter Lang. It became another top seller. With business steady, Fahey hired new employees to run the day-to-day. "John could have run it, but he didn't want to," remembers Monday. "John knew I was friends with Charlie, whose office was in the same building as Takoma. John asked me if Charlie could be trusted; I said yes—not that my vote counted for much. Charlie was hired as president and given a deal for a third of the company. We got a real entertainment lawyer and accountant for the first time. We had steady sales growth from 1970 to about 1976 or even 1977. We repromoted the Fahey Christmas album, Kottke's _6- and 12-String Guitar_ (the label's two best sellers), and released a Mike Aldridge LP and the Kottke-Fahey-Lang album." Fahey found recognition in other far-flung reaches of the creative arts as well. In 1971, Stanley Kubrick notably gave Fahey's 1965 album _The Transfiguration of Blind Joe Death_ prominent placement in the record store scene of his classic film _A Clockwork Orange._ Fahey received letters of admiration from Kubrick and rock stars like Pete Townshend. Fahey reportedly replied to the Who star with a letter specifically detailing why the _Tommy_ album was not, in fact, an opera. 
Meanwhile, Fahey was dating a woman named Marilyn, with whom he had a volatile relationship. Fahey had been playing an extremely rare Ray Whitley Recording King guitar, a beautiful vintage instrument coveted by acoustic players. Once, backstage at a concert, the two began a heated argument. Marilyn stepped on the guitar. Fahey became so furious that he screamed that she should finish the job, then promptly took the guitar and smashed it to pieces. Yet despite, or perhaps because of, Fahey's inability to control his emotions, he intensified his focus on healthy living. Yoga became his central philosophy. When keeping the routine of constant practice, he achieved his goal of staying sober. Some have described the environment of Yogaville as "cultlike," with its devout followers of the spiritual guru. Yet yoga provided a great opportunity to meet girls as well. In 1974, Yogaville hired a young woman named Deborah Goldman to organize a concert at the Wiltern Theatre for the swami's sixtieth birthday. Scheduled to appear were Carlos Santana, Alice Coltrane, and Fahey. Goldman started receiving bouquets of red roses until they filled an entire room at Yogaville. Fahey sent them—after meeting her just once. Eleven years younger than the then thirty-five-year-old Fahey, Goldman, who had turned down graduate school for a job at the institute, went along with it—apprehensively. Although overwhelmed by his aggressive pursuit, the two connected and started dating. "He was funny, he was smart, interesting, intellectual, obviously talented if you like music, which I do," says Goldman. What truly bonded them was their lifestyle, dedicated to the teachings of the swami. "At the time we had this spiritual interest in yoga; more than just the physical exercises it was a philosophy, a way of living." The temple encouraged their relationship, as both were active members. Fahey seemed to represent the opposite of a dull, standard life for Goldman. 
For a time, it seemed as if things could work between the couple. With the structure of yoga to keep him sober and the thrill of a new romance, he remained stable and content. He soon proposed to Goldman and she accepted, if somewhat reluctantly. The Swami Satchidananda married them at a friend's house in Santa Barbara on January 1, 1975. The quick courtship and subsequent romance wouldn't yield a long-lasting union. Fahey's ideas about relationships were far from Goldman's. She had career ambitions in the age of women's liberation and quickly became bored killing time in between Fahey's jobs. "I married John, and for me to just hang out at the house was not consistent with my character so I took the real estate exam. I was going to work with this really successful friend who I knew in the field, but John didn't even like the idea of me working. That was also a source of tension," Goldman says. Considering the other person in a relationship seemed more than Fahey was capable of; he needed to be the focal point. "Having known other musicians, I think their emotions are close to the surface and they very much have an interior life that is self-centered to the extent that it's all they are focused on," continues Goldman. "I think that makes it difficult [for them] to have relationships." As the 1970s progressed, so did the demands of touring. Fahey settled into the role of cult figurehead and possessed an untouchable prestige that came with a decade-plus run of successful musical innovations. Yet he still rejected careerism. "He would always compare himself to Leo [Kottke], who was the trouper who went out there and killed every night," says Denny Bruce. "Fahey just called him 'superstar' and said he wasn't interested in being a superstar. He didn't want to do the industry thing and be interviewed and have pictures taken of him. He didn't want to play the game—the politics to work with the right promoters and venues and so on and so forth.
He was just as happy to play for promoter Sandy Getz in a little club. Sandy had been at the Ash Grove, where John used to play a lot before [Santa Monica folk club] McCabe's opened, and Sandy started booking some jazz guys and would also book John. It wasn't Carnegie Hall, but she'd pay him a few hundred dollars and he was happy with that." Fahey's fan base seemed to be the same loyal listeners who saw him each time he came through town. Uncomfortable with the process of performing live, he was able to get through shows and impress audiences and critics—even if he barely acknowledged their presence. The _New York Times_ championed Fahey for a mid-1970s concert, head pop critic John Rockwell writing, "John Fahey, who stopped by the Bottom Line for two shows Sunday night, remains as impressive and distinctive a master of the acoustic guitar as he ever was." His performances throughout the 1970s occasionally eclipsed his albums and left audiences spellbound. One never knew which Fahey would arrive; the highs and lows of his temperament made for unreliable but sporadically sublime concerts. Still, he could rely on the reputation he had spent much of the previous decade establishing and channel the cool, detached 1950s rebel bit he had cultivated in high school. In a review of a concert at Hunter College in New York in 1975, the _Village Voice's_ Paul Nelson sums up his intimidating persona: "His guitar-playing is a deliberate mixture of psychology, order, mythology, poetry, and genre—all very exact, with the meaning entirely between the lines. Part of our fine national school of minimal acting, glints of feeling shining through the stoic, awesome professionalism that is characteristic of the American hero, John Fahey seems to me to be the Clint Eastwood/Steve McQueen of the guitar. I'd hate to meet him in a dark alley. He didn't even say goodbye." By the mid-1970s, Fahey fell into a professional routine, finding ample touring work both in the States and abroad.
His music had particular appeal to European audiences, as its instrumentalism presented no language barrier to overcome, speaking even to those who had no knowledge of the personal exorcisms he presented in his liner notes. One of his European tours had him paired up with Michael Chapman for several weeks, playing throughout Germany. "The folk and acoustic scene in Germany was huge at that time. There was a club in every town. Quite big audiences, three hundred or four hundred people a night in seated theaters. John at this time was a kind of legendary figure. He could make that fucking thing dance. It was just a joy when he got it right," recalls Chapman. Chapman remembers Fahey more as the absent-minded professor, a genius in his own world, however clumsily he stumbled through life. These shows produced his next generation of fans. While not hippies, many of these listeners were seeking relaxing music, an escape from the excess of 1970s stadium rock music. His career may have been self-sustaining, but his marriage was dissolving. After a few months on the road, Goldman found, Fahey became a different man when separated from the peaceful environment of the Yoga Institute. His fears and temperament were impossible to contain, and his venomous contempt for touring life poisoned the new marriage. His anxiety and stress cast a dark shadow over their travels together. "I like to travel and I like music. The clubs were fun, but I think that was the downfall of the marriage because John did not enjoy performing. He had stage fright, which made him anxious and stressed out and it kind of deteriorated the relationship when we were on the road," says Goldman. "He was hard for me to be around. A more mature person might have had the tools to help, but I wasn't used to being around someone like that. I didn't particularly want to be around someone like that. I don't think I had the motivation to try to figure it out or make it work." 
The couple temporarily lived in New York City between tours. When John left for a few days of shows, Deborah headed back west. Fahey seemed furious, according to her friends, and they never spoke again. According to Fahey's tour manager at the time, Stephen Calt, Deborah and Calt had struck up a romance while commiserating about the difficulties of dealing with the troubled guitarist. In the face of Fahey doing nothing to save the marriage, she took off with Calt. Upon hearing about the alleged tryst, Fahey drove around searching for Goldman, carrying a gun. He also called Calt's girlfriend, looking for the two. After a few months of delayed correspondence, all three went their separate ways without further incident. His personal life changed profoundly again that year when, at a Joseph Byrd concert, he met a young woman named Melody Brennan. As a self-described Jungian, she clashed with the strict Freudian methodology Fahey had adopted during his split with Jan, but they found plenty else in common. A child of the San Fernando Valley, she grew up in California, eventually studying in the UCLA film program. (Jim Morrison was among her classmates there.) She spent the next twelve years working on various film-related educational projects. Melody had an artistic streak, which included painting and music. It was Fahey's charm and sense of humor that again won him the girl. "John was always a very funny person with a great sense of humor. He was very intelligent and attractive in a lot of ways because of those things," she recalls. For fun they rode their bikes around to local Hare Krishna temples and visited the neighborhood cats. Fahey brought a mouse on a string and played games with cats they encountered. Melody soon began accompanying him on tour, selling his records at the concerts. In between they stopped at zoos and aquariums, both of which Fahey loved.
They liked to visit museums as well, but Fahey didn't like to walk, so Melody would stroll around while he waited on a bench. They would also shop on the road, Fahey for records and Melody for antiques. It was a lifestyle that worked for both of them. "John was a dynamic person, just like his music," she adds. "He would go from quiet and gentle to raging loud and back and down and around. That was his music and that was his personality, too. He was interesting. You were never bored around John." With American culture in a postrevolutionary lull, the often beautiful and contemplative music of John Fahey remained relevant and engaging to audiences. Because it had no roots in the political agendas of any time, his already extensive discography continued to be discovered by new listeners, particularly young guitar players. Fahey's slow, even pace and open tunings made the guitar more approachable to beginners; it was a style easy to emulate. Fahey was fundamentally uncomfortable with his status as a technician. While the language of guitar is often spoken in notes, phrasings, and chords, Fahey was far more interested in the emotions behind such choices than the process itself. "When I play the guitar, even when I am practicing, I am besieged with images, memories, déjà vu experiences and emotions; and for every chord I play, for every tune I write, there is within me a distinct and unique image, emotion, or feeling," he wrote. No one embodied the professional, technical guitar player better than Stefan Grossman, a blues fanatic and guitar player who came up around the same time as Fahey and became active during the folk resurgence of the early 1960s. Grossman, who never had much success in the commercial world, became a guitar teacher after several years of study with the gospel/blues master guitarist Reverend Gary Davis. Grossman studied various guitar techniques and began publishing a series of instructional books. 
The kind of player who rarely missed a note, he concentrated on the minute details of transposition and notation. Grossman seemed ideally suited for the task of publishing guitar notation guides. In Fahey's mind, there existed little content in Grossman's work beyond the appreciation of traditional technique. He let his opinions of Grossman be known openly, naming a track on his 1976 LP _Old Fashioned Love,_ "The Assassination of Stephan Grossman," most likely intentionally misspelling the name. Fahey, after much pestering, was convinced to participate in transcribing his own compositions in a _Best of John Fahey_ guitar tablature book. Takoma released an LP of the same name in conjunction with the book, a compilation of early Fahey favorites. He had largely abandoned his lengthy prose in his albums' liner notes, but he seized upon the opportunity in his book of guitar tablature to mount a massive tirade against the conformity and rigidity of technical guitar playing. The result is a bizarre ramble that mixes guitar-playing tips with gender politics and confessional therapy. "Mastering a guitar is really very similar to conquering a woman, and when you fail to master it, like when you fail to master a woman, you have the same feelings of humiliation and violence," wrote Fahey. "When you are alone with your guitar, you must win if you are to be a man." These life lessons on machismo were not exactly what most young students were looking for in an instructional book. More sage advice was his advocacy of playing for many hours at a time to trick the mind into trying new things out of sheer boredom, forcing the creative mind to take over. Only in this mental trance, he wrote, can the link to the unconscious be achieved. This language can't be explained with notes or transcriptions, but he sought to relay his example to those who wanted to know something about existential guitar.
"What I am advocating is the supremacy of playing by ear and of subjectivity, which is the evocation of and externalization of internal moods," he wrote. "Every chord should evoke a particular emotion and you must learn to hear what you play and feel that emotion." After a few stormy years during which they broke up and then later moved in together, Fahey and Melody decided to tie the knot on November 25, 1978. It would be Fahey's final (and most enduring) marriage. Long since removed from the daily process of record label administration, Fahey, with the help of Denny Bruce, decided to sell off Takoma to Chrysalis Records, which used the label and its catalog to relaunch a more roots-oriented sublabel. "So Chrysalis wanted to do an American roots label and put me in charge," says Bruce. "John's psychiatrist told him not to sell the label, that he needed something on a day-to-day level more than he needed any money. He hung in, and one day he said he was done." The sale of the label facilitated another major life change, one that he had longed for: to move away from Los Angeles. "The reason that I got rid of [Takoma] was almost everybody in the office started taking cocaine and I couldn't get rid of it," said Fahey. "We weren't losing money or anything. We were still selling records. I made the terrible mistake of giving stock to the employees so I couldn't fire them. The only thing I could do was to dissolve the company. While I was doing that, Chrysalis offered to buy it and I said 'sure, take it.'" The sale had an immediate positive effect, but the long-term reverberations would last for years. After the demise of Takoma, he spent more time around the house. Melody, who considered herself eccentric as well, rolled with Fahey's mood swings and occasionally drunken behavior. He tried to hide his drinking from her and indulged to even greater excess in her absence, relieved by the chance to binge without being scolded.
"He would go on for periods where he would have a lot of problems with drinking, although he went through periods without drinking too. The sleeping pills he took were chloral hydrate, which he took a lot," remembers Melody. In their own way, they tried to live like a normal couple, sometimes entertaining guests in their home after shows. Michael Chapman recalls one night in particular, after a show the two played in Los Angeles in 1981. Fahey invited him back to his place afterward. "It was insane," says Chapman. "This was in southeast L.A., where, as Melody said, 'even the dogs bark in Spanish.' It was just my wife and I, John and Melody. John was drinking brandy and Coke, half and half, he was missing a lot and spilling on himself. He offered me a drink and I asked for some white wine, and he reaches into this fridge and pulls out a gallon jug. He asks my wife, who also asks for white wine, and he pulls out another gallon jug for her. I saw quickly where the night was going. At around 2 AM, John is naked at the end of the dinner table except for a genuine Nazi flag from the Nuremberg rallies. He was moaning that Melody wouldn't let him wear his flag to bed. The phone rings at about 3 AM and John just mumbles into the phone and hangs up. We asked who it was and he said it was a journalist asking for an interview that he said was scheduled for right now. Some guy shows up and claims to be some long-lost cousin of John's from Oklahoma. We tried to go to this bar, which had a sign that said 'every beer in the world.' Of course they wouldn't let John in. He had been banned." Melody became anxious to live in a real house and start a family, while Fahey had simply longed to get farther away from the chaos of society. She gave up her career and the pair relocated to Salem, Oregon, in 1981, where John purchased a modest house for the two of them. They could have afforded something bigger, but modest was what he felt they needed. 
In private, he seemed scared and conflicted by the move. He was no longer balanced through yoga, which he had abandoned, and his demand as a musician had largely peaked. In a letter to a friend at the time he asked, "You might pray for me. I'd really appreciate it, whether you believe it or not. I am severely depressed right now and feeling suicidal and I don't know why. I have been seeing the shrink that analyzed me. See, I just put down money on a nice house in Salem, Oregon and I should feel excited but I don't feel excited." Regardless of his reservations, Fahey took to the role of husband and supporter. He battled with his issues but kept Melody close to him. He had ideas about masculinity and the male role as provider in the marriage. "I had a career working in educational films, but once we moved to Oregon he was supporting me completely," says Melody. "If I started to get too independent, sometimes he'd say I was upsetting the balance of power. He liked to be in charge of things." Melody did not complain about the arrangement, and the two tried to settle into their new town. In the summer they went to the local quarry, where Fahey would swim and the two of them would picnic. He practiced his guitar while Melody strummed a few chords on a ukulele. These were good times for the couple. Fahey often donated money to various charities. Few things gave him more pleasure than seeing a homeless person in a Dumpster and then handing him or her a twenty-dollar bill. His generosity sometimes extended beyond his means. "One year he gave $2,000 to this Catholic charity that was a shelter for runaway kids in New York, and our accountant told me that after all our deductions we only made $11,000 that year," Melody remembers. "He said that John shouldn't give that much to charity, but I couldn't stop him if that's what he wanted to do. I didn't want to stop him." In the winters, unable to bike, ride, or swim, Fahey grew restless, feeling trapped in the house.
He converted the basement into a makeshift studio and record room. Down there, he sought the solace he needed to find inspiration. In preparation for his shows, and to generate new material, he spent hours playing guitar—perhaps heeding his own advice. "If you make yourself play the guitar for four to six hours, I can guarantee that you will come out of these sessions with something new: a composition, an arrangement, a fragment," recommended Fahey in his instructional book. "That is the way the mind works. In order to conquer boredom and chaos, you cannot avoid coming up with something new. I recommend these long sittings rather than short sittings more often per week." In Oregon, he had time to himself to compose, yet he remained consumed by memories of the past. Although he had attained a relative stability in his marriage and career, he soon began to struggle with unresolved issues, memories that became the focus of his remaining years.

# 9

# LET GO

"The Void is a term you find in existentialist writers and it's particularly well-described by some Catholic mystics in books on contemplation. It's how you feel when the bottom drops out. It's worse than the blues. Some of the music I've written is a description of this state." —John Fahey, interview, 1980

Fahey's influence in the 1980s reached new heights when a breed of guitarists who had grown up listening to his records came of age. William Ackerman, an unabashed Fahey acolyte (his first album is titled _In Search of the Turtle's Navel_), founded the acoustic-based record label Windham Hill, which focused on instrumental, meditative music. The music was so light, it hardly seemed there—the type of stuff well suited for dentist waiting rooms, far from the pain of the blues or the experimentalism of modern composers.
Inspired by Fahey's initial vision of instrumental composition for guitar, Ackerman slowly built an empire by marketing the material as New Age, playing perfectly to aging baby boomers who wanted something relaxing to listen to during dinner parties or yoga lessons. This Muzak sold millions of copies and became the boilerplate for elevator music. Ackerman eventually sold Windham Hill for an estimated $28 million. Ackerman's own instrumental solo work sold well, but the label found a mass audience with instrumental guitarist Michael Hedges, whose pyrotechnic hammer-on technique packed theaters and dazzled listeners. The associations with Fahey were difficult to avoid. For one, former Takoma artists George Winston and Robbie Basho released albums for Windham Hill. Fahey's influence over the next generation led to some calling him the father of New Age guitar, a title that offended him like no other. Fahey often called it "hot tub music," seeing it as codified shallowness being done in his name. "He hated those guys," recalls Denny Bruce. "He started being called a New Age guitarist. Just say 'John Fahey' and 'New Age' and he would lose it." It may well have been professional jealousy at the root of his contempt. Windham Hill quickly surpassed Takoma in terms of commercial success. Fahey openly rejected any notion that his music had anything in common with New Age. He only noticed this connection professionally when his touring schedule booked him with Windham Hill guitarists such as Hedges. The fact that he had discovered George Winston only added fuel to his ire. This association with New Age led him to believe that he had failed as an artist. He got through it by getting drunk to escape his discomfort. He fared much better at home. Steadily performing or recording, he seldom had to do much outside of his interest or scope.
He stayed famous as a cult icon to guitarists, new players continually discovering his vast catalog, which provided him with enough cachet to tour and record new albums. As long as he maintained his modest lifestyle, that was all he wanted. "John's main goal in life was never money," Melody says. "He wanted to do certain things. Money was good and it was a good thing to have but he didn't like the idea of having the best of something." Fahey seemed happy in their little house, a Louvin Brothers poster gracing the entrance of his basement lair. Creating new music and developing as an artist remained central to his identity. His desire to propel his music forward and not become stagnant continued to drive him, and his creativity and skill were still completely at his disposal. Fahey made a pair of records for Chrysalis Records' Takoma, run by Denny Bruce, which had recently enjoyed platinum success with acts like the Fabulous Thunderbirds. _Live in Tasmania,_ released in 1981, finds Fahey performing fantastically in front of an enthusiastic audience. However, it isn't really a live record—no surprise considering Fahey's adversarial relationship to the act of live performance. Fahey did indeed perform live in Tasmania, but the tracks on the album are from studio sessions (two songs from his Reprise sessions and perhaps more from a studio session he booked while in Tasmania). The applause was edited in later—another subtle ruse in the Fahey discography. His final album for the label, 1983's _Railroad,_ revisits older material, giving the pieces new titles in an attempt to regain some control of his back catalog. Some of his publishing was owned by ex-managers and uninterested parties, so he rerecorded and retitled his pieces in order to use them on new albums. Fans considered the record a spectacular return to form, hearing Fahey perform updated versions of his classic material. By 1984 Takoma folded, existing only in publishing form; Bruce moved on to other projects.
With this, Fahey needed to sell himself in the marketplace of the music industry for the first time, and was completely at the mercy of other labels. Though relieved not to have to deal with the responsibilities of running a business, not having a musical home caused further anxiety for him as he shopped around his recordings. Through his network of intense devotees, Fahey was able to navigate the choppy waters of free agency. After leaving Takoma, Fahey started to release records with traditional folk music labels, including Shanachie and Varrick. Shanachie, home to artists like mandolinist and Grateful Dead collaborator David Grisman and world music acts such as the Chieftains, issued his _God, Time and Causality,_ while Varrick suggested he record albums with guests—famous friends like George Winston—but plans never got off the ground. Fahey again felt relegated to a musical universe he despised. Yet he always found people who recognized his achievements and championed him, even in the face of a seemingly noncommercial marketplace. One of Fahey's disciples was fingerpicking guitarist Glenn Jones. Jones would go to his shows in Boston, and the two eventually struck up a correspondence and friendship. While working in Rounder distribution as a day job, Jones worked closely with many record labels that released Fahey's albums after Takoma folded. "Sometimes when you meet someone who's 'famous' they like to talk about themselves and be in that position," remembers Jones. "John talked about himself, I wouldn't say reluctantly, but he was as interested in talking about me. Getting to know John was getting to know yourself better. In some ways he would almost push you into situations where you were uncomfortable talking about yourself. He wasn't doing that to make you uncomfortable but rather as a way to better understand himself—like if you were not always the most popular kid in school, or if you moved around a lot. 
He was always looking back at his own childhood—girls he had crushes on, kids who beat him, just grade-school social pressures. Getting other people to talk about that stuff was a way to help him figure out his own issues." Fahey often got deeply personal with those whom he worked alongside. Having fans entrenched in the industry worked in his favor, but the owners of the labels he recorded for were ultimately interested in the bottom line. His career seemed to stall, with little room for him to move outside his niche. Labels wanted repackaged versions of his Christmas music, the only selling point in his current repertoire in the emerging nostalgia market. They saw him as a folk act, viewing him largely in the context of 1960s music. Fahey still resented the company of what he considered boring contemporaries. Jones recalls one concert that David Grisman and Fahey played together: "We were hanging out backstage. John was headlining, and Grisman was playing, and when John walked backstage past all [Grisman's] friends and hangers-on he had his head down and just muttered 'hate hate hate hate' under his breath, making his vibes known. They got it and they stayed away. The anger was there." His drinking intensified a crippling depression offstage as well, and the accumulated years of abuse had taken their toll. "He was a very heavy drinker at the time—I would say he was a very serious alcoholic," Jones continues. "He was kind of emotional in the way some alcoholic people are. Anything can set them off. I remember him crying to a Stanley Brothers record in my living room, where the music was affecting him so much he would be weeping uncontrollably. Certainly I think his emotions were very close to the surface, and this was the case throughout when I knew him." Those close to him knew his struggles; he seldom held back when it came to his personal or professional life. His problems compounded when his health began to fail. 
But he kept most of his illnesses a mystery, even to his wife. He was unwilling to make lifestyle changes, even in the face of new realities. "He might have been prediabetic and had hypoglycemia; I know he had a lot of problems," remembers Melody. "He had urinary tract problems for many years and had some operations that were very painful. When you're not feeling well, it's hard to enjoy life." Still in his forties, he was faced with semiretirement, afflicted with several debilitating medical ailments. For an entire summer he lay bedridden, with no strength to move. The doctors had diagnosed him with the Epstein-Barr virus. The virus manifests itself as mononucleosis in adolescents; when it develops in adults, flulike symptoms and fatigue can persist for months. Fahey's resulting malaise almost drove Melody mad. "For me, it was torture," she remembers. "I don't know what it was like for him. He was in pain and I probably wasn't helping. You think there's nothing wrong with someone and you wonder why they don't get out of bed. But I guess there was something wrong with him. He was tired all the time." Indeed, the symptoms bothered him on and off for several years. Worse, his drinking increased, as he used alcohol to cope with his condition. In extreme discomfort, he grew insufferable. Unable to work, he lashed out in depressive fits. To further complicate matters, his diet began to spiral uncontrollably. Some remember him eating a gallon of ice cream in one sitting. One friend recalled him ordering a steak and eggs at breakfast, and then another order of steak and eggs to eat in the car on the way back. Glenn Jones recalls him buttering his bacon on more than one occasion. Unsurprisingly, he ballooned in weight, and his hair dangled in long strands around his increasingly balding pate. Only music kept him engaged and productive. In Portland, he befriended a talented guitarist and arranger named Terry Robb, who was a fan of Fahey's work. 
After spending a few nights hanging out and trading Charley Patton tunes on guitar, the two formed a close bond. "He was about forty and was just great, and a lot of fun to be around," remembers Robb. "He was really smart. We had a lot of things in common in our likes and dislikes. He would do these crazy things that were hilarious. Every day was filled with some sort of event—like he'd reach into his pocket and pull out a cheeseburger." Still ready to try new approaches, he expanded his tastes from the blues and bluegrass of his early days. Fahey invited Robb to produce his next record, _Let Go,_ a bold step forward. He decided to try his hand at a wide range of material, including even sentimental pop hits like Eric Clapton's "Layla." To Fahey traditionalists, this divergence seemed akin to blasphemy. Always flying in the face of convention, he met the challenge of mainstream rock head on, and tackled the material with sincerity and inventiveness. None of it came naturally to him, so Robb's help was invaluable. Robb, a lifelong Fahey devotee, admittedly had trouble wrapping his head around the idea at first. "I would try to get him to play like he used to because at the time I was a fan still. He just stopped me and said, 'Look, I'm not doing that anymore. I've already done that. I'm moving on.' Finally I got it, it was like Miles Davis, same thing, moved on. And the thing was, he was up for anything. People always ask about us doing 'Layla' or the Hendrix tune 'May This Be Love.' Some people think I talked him into it, but it was all his idea. He decided he wanted to do those songs, and it was up to me to arrange them." Robb filled in the blanks, playing many of the backing parts. With Robb, Fahey attempted a more Brazilian feel, even covering his newfound contemporary hero Bola Sete on the title track. Remarkably, the two were able to seamlessly integrate Sete's influence with Fahey's already recognizable style. 
The resulting album garnered great press and solid sales. Fahey seemed able to adopt the new phrasings and play them his own way, still sounding unique, just as he had done with the blues and bluegrass. The album sounds like Fahey, but boldly heading into new territory, with joy. Robb proved an invaluable asset on the album, and they set forth working on a follow-up: 1985's _Rain Forests, Oceans and Other Themes._ Also released by Varrick, the album features a similar approach. This time the recording was done in a special setting, according to the album's notes, written in the distant voice of a technical narrator: "Cascade Recording Studios in Portland, Oregon, during November and December 1984, and February 1985. This studio was at one time a small church, with a tiny pulpit and choir area at the north end." Though not a breakthrough in terms of sales, its intimate performances, with Robb alongside, made it another distinctive addition to Fahey's catalog. Robb also frequently stepped in as Fahey struggled to take care of himself, in rough shape both physically and mentally. "He would mix his medications and get out of control and get obsessive about things, and he would drink and take too many pills," remembers Robb. "If something bothered him, that's how he'd deal with it, and he would get out of control in the way that people who do those things get. Melody was there to keep him in check and we got to be close, too. I was close to both of them. He depended on me to keep him in check, too. It got to the point where I was sort of handling him as well. He was fragile and depended on people to take care of him." The root of Fahey's unhappiness rose to the forefront. He began exploring his childhood to an even more intense degree, with a strong focus on Freudian psychology. He claimed to recall memories from his infancy. During many of his therapy sessions, Fahey began experiencing "repressed memories," products of a now largely discredited psychological theory. 
In these states he had vivid memories of his father sexually abusing him as a child. He recalled these visions in disturbing detail in his memoirs. There, Fahey graphically describes being held at gunpoint in his grandfather's room when he was just four. In his accounts of these incidents, his father is portrayed as sadistic and deranged, once tying a noose and describing how he would kill him slowly and watch him die if the boy ever told anyone about the abuse. In these accounts, his mother chose to ignore what happened as a defense mechanism, and Fahey grew a bitter resentment toward her for not interceding. Still the doting mother, she tried her best to help, but he refused to involve her in his life in any way. "I wish you knew what he did to me," he wrote to her in _Bluegrass._ "My father was right, you never did really love me, Mom. Never." These accounts seem hyperbolic to those who knew both father and son. Ex-wife Jan had met Al on several occasions and doubts Fahey's accounts. "He claimed he was abused, but I don't believe him, because it came at a time when it was a fad to have recovered memories. I think it's another story he made up. His father was a tough character." His childhood friends found the allegations equally surprising. "In his book he makes serious allegations," said Dick Spottswood. "That was nothing that was even remotely spoken about at the time. I think I met his father once only in passing." Melody questioned whether there was sexual abuse, but she knew there was a lot of pain there. "I'm sure there was a lot of emotional abuse going on, because I stayed a week or two with John's father. He had a sadistic streak, and his mother was one of these people who never wanted to speak about anything unpleasant," she recalls. In order to deal with these severe "memories," he started to attend a support group for male victims of sexual abuse, a few hours south in Eugene. He drove by himself to get there. 
He attended another group in Portland, and in order to reclaim part of his tormented childhood, his therapist recommended that he carry a teddy bear with him wherever he went. The intellectual, funny, and endearing qualities of Fahey stood in stark contrast to his neediness and psychological damage. He became dependent upon others and further separated himself from reality. Robb saw both sides of his personality. "John was a pretty together guy," he says. "The thing that made life difficult for him was his childhood. I think that is the root of it all. I really do. He was very well educated, he was civil, he was very generous to me. I would attribute anything that bothered him to that and his relationship to his father." As with many aspects of Fahey's life, truth and fiction are difficult to parse. He was prone to making up wild stories about all aspects of his life; it wouldn't be unusual for him to tell a journalist that he had a teenage son or invent other outright lies out of boredom, to toy with the mundane process of being interviewed. His writing revealed a panoramic imagination; the bounds of what he believed often spooled into the irrational. With Fahey's wild imagination and disconnect from reality, it's impossible to arrive at anything conclusive regarding the truth of his scandalous accusations. "John's father was an orphan, from what I heard, and abused in an orphanage, so I guess it's kind of like the sins of the father visited the sons," says Melody. "People get abused and they learn to deal with their life in a certain way and they pass that on. John himself had a certain amount of sadistic tendencies where he'd like to push your buttons." The childhood trauma he perceived as being real overwhelmed his thoughts and his time; he couldn't seem to escape and became further anxiety ridden. "He was showing up at the studio sometimes just really out of his mind," recalls Robb. "I'd calm him down and get him to listen to last night's mix. 
That was a good diversion. He appreciated that." Music was the only thing that seemed to help him. Robb and Fahey continued to work together on a series of albums, until the funding from Varrick began to run out. He revisited his Christmas material, and in 1987 released a record entitled _I Remember Blind Joe Death,_ which features a slowed-down Fahey taking on Bill Monroe and Bola Sete songs. The record is the sound of a man losing his abilities. Just a few years prior he had been forging ahead; now he wandered lost in the roots of his own process, the fire gone and a pathetic sadness remaining. His illness had clearly affected his playing. Shadows of his former self peek through in isolated moments, but they are few and far between. He just seemed exhausted. "He got tired of it," says Robb. "He would go on the road by himself or with Melody and he'd get lonely. It's hard work. He was a very intense musician—he put a lot into it. You expose yourself emotionally every night to such an intense level that it takes its toll on you. I could see why he'd want to get away from it." The final project between Fahey and Robb was an album covering 1950s pop hits like "Sea of Love" and "A Rose and a Baby Ruth," on which he is accompanied by Melody on ukulele. The album features no growth or forward momentum; it is instead a senile look at a romantic past that never existed. Released in 1992, the brilliantly titled _Old Girlfriends and Other Horrible Memories_ provides insight into his psychological state at the time. It marks the first time he reveals his interest in the pop music of his childhood, acknowledging that between all the classical music and blues he was in fact influenced by the mainstream culture. Though genuinely interested in the oldies material performed on the record, he had neither the capability nor imagination to channel it into something new. The album seems foggy and nostalgic. His dismissal of his early work remained a constant. 
"I got interested in '50s rock and roll music and started arranging songs like 'Blueberry Hill' for solo guitar—and mood songs about people and places where I grew up," Fahey said about the project. "At shows these days I play almost nothing but '50s music and blues. No longer do I play long, neo-Wagnerian, pretentious, pompous songs like 'Mark 1:15.' I did quite a few of those disgustingly eclectic, preposterous tone poems." In truth, he couldn't play intense fingerpicking songs even if he wanted to. He also wanted to distance himself from the 1960s and New Age. The album seems a sentimental hodgepodge, played at half speed, like an old man crossing a busy road. The only evidence of the old prankster Fahey is found with the uncredited appearance of an Al Wilson recording, which Fahey titled "Fear & Loathing at 4th & Butternut" in tribute to his old friend. The recording is an old Takoma session for a proposed Al Wilson solo album that never came to fruition. It is the sole nod to his post-Maryland life, and a bittersweet highlight to an otherwise unremarkable album. The material provided another reason for Fahey to revisit his past. The ubiquitous nature of pop radio ensured that these songs had become part of his subconscious. His preteen romantic experiences suffered none of the realities of his adult relationships, so they remained a pristine ideal. While on tours, he would go to phone booths and randomly search for girls from his childhood. Melody knew about his pattern of behavior and understood that he was really looking for some sort of closure with his traumatic past, not trying to ignite an extramarital fling. Sometimes she even helped him look. "It drove me nuts!" she admits. "One of the things we used to do when we were on the road was stop at phone booths and try to find this little girl that he knew when he was a kid that he met at some campground and just look to see if she was in that phone book," she remembers. 
"So he'd look in these phone books because maybe her family moved to _this_ town." He once even hired a private detective to track down one girl, whom he hadn't spoken to in forty years. When he found her, he was disappointed that she had gotten old and fat, just like him. In his imagination, she remained a perfect distillation of the innocence he imagined he lost in his youth. His expectations were beyond unrealistic, of course, but he ended up moping as a result. Things at home worsened. Melody had always wanted children, and even though Fahey seemed uninterested in parenting from the beginning, the issue began to cause strain. His constant focus on his childhood caused resentment and unhappiness in the marriage. With nothing else to focus on and no other presence in the home, Melody felt alone. "I don't know if I would have been that great of a mother or if that would have been the end of the marriage, but it didn't happen and I had some anger about that," she says. "John was not quite upfront about that at the beginning. I should have been more realistic about him. You know how people are. When we got married I got this set of crystal, and we never entertained. You have different ideas of what your life is going to turn out like and you find out that it's not really what you want." With his mounting health problems and their growing discontent, things were looking darker than ever. His playing had deteriorated from lack of practice and he neglected his performing schedule. Aside from some local gigs, the only money coming in was from his publishing and royalty statements, just enough to sustain them. "Jobs would come and he would do them," Melody remembers. "He always had some work. It would just be fairly local, but there was never six months where he didn't have any jobs at all. He did have some guitar students, although that was never his big thing. He had one little boy who was a child of a friend of ours. 
He introduced him at a concert at the university here, and he was really young and he played one song with John and it was really touching." His life seemed to have come to a standstill. Confused and depressed, he made a bold decision. In 1992, Melody was unexpectedly served with divorce papers. There was no precipitating event; Melody had only vague reasons as to why he decided to get a divorce after fifteen years of marriage. He never talked to her directly until after the lawyers got involved, and by then it was far too late to have a civil discussion. After the many years she spent taking care of him, he suddenly cut off all communication with her. Spurned, Melody took out an injunction against him for two years after the divorce. She felt betrayed and angry. "I didn't want to have to fight with him about anything. I didn't want to talk to him at all because I was so upset. Eventually we became friends again, but that's how the divorce went. I got the house and I didn't ask for interest in any of the properties he created during the fifteen years we were married." In retrospect, Melody felt that perhaps things ended up for the best. "I didn't leave John. John left me," she says. "Later, he said he thought he made a mistake, but it might have been a good thing for me because I would have been stuck with John in his terrible health and it would have been a financial disaster for me. Maybe he was doing me a mitzvah. For instance, with the divorce he once said to me, however mysteriously, that he didn't want to become a monster. He was talking about his physical problems. That might have been part of why he started the divorce—to release me. I never would have divorced him. Having stayed together would have been quite a disaster for me. The way it worked out, I have the house unencumbered. Who knows? It's hard to say what went on. He may have had mixed feelings about me. 
Sometimes he may have hated me and sometimes he may have thought I was the best thing to ever happen to him. Aren't we all that way?" Left to his own devices for the first time in many years, Fahey began living in week-to-week motels around Salem. He began writing long essays about his childhood and his experiences as a musician—much in the style of his liner notes. He detailed his ideas, stories, and fantasies, creating a universe through his own filter. In his mind was a world he connected with, and he began trying to translate it to the page. But there were more banal concerns. He couldn't pick up after himself and became unable to do daily chores. As the day-to-day eluded him further, he retreated into a hermetic existence. Pizza boxes and delivery containers littered his room. He spent his time wandering the streets, haunting the local record stores searching for classical records and drinking. There was little left for him personally or professionally. Strangely, though, a resurrection waited around the corner. While the folkies were either dead or playing dinner theaters, a new generation of music listeners found his work and began to celebrate it anew.

# 10

# WHEN THE SPRINGTIME COMES AGAIN

"This new group is all for freedom. That's one hell of an improvement. With the alternative people, there are some social dos and don'ts. But in comparison, it shows that the hippie movement was always quite rigid even though it was always talking about freedom. It was phony."

—John Fahey, interview, 1997

In the 1990s the mainstream music world went through a drastic paradigm shift. The charts had been dominated for decades by manufactured pop stars and larger-than-life rock bands. But success had reached a new breed of musicians, ones groomed on the fringes of record stores, not seeking stardom but driven by an expression of suffering and existential angst. The troubled, tortured artist, best personified by Nirvana front man Kurt Cobain, came to the forefront. 
Psychology, addiction, and the pathos of the suburbs all became prevalent themes in popular culture. The signifiers of rock stars and their inherent clichés were scuttled. Younger audiences searched for even more obscurities as their alternative heroes name-checked everything from Krautrock to Japanese noise bands. With the rise of Nirvana and other grunge bands, the cult hero had more marketplace cachet than ever. Record labels, unsure of the boundaries, scrambled to sign up any act that seemed to possess authentic alternative credibility. Tastemakers became de facto major-label A&R reps: Sonic Youth's Thurston Moore was responsible for Nirvana's signing to Geffen Records, which resulted in tens of millions of albums sold. As a result, huge checks were cut to almost any band or musician endorsed by Cobain or Moore, from the Meat Puppets to the Boredoms. Bands who a year or two earlier were barely a blip on the radar all of a sudden had major-label record deals. Similarly, those musicians who had influenced these contemporary successes were given the revival treatment. The more obscure or difficult the musical reference, the greater the appeal for the truly in-the-know. Roky Erickson of the 13th Floor Elevators was brought back—although he was damaged from years of LSD abuse and his time in mental institutions. Daniel Johnston, a severe schizophrenic and manic-depressive who recorded and lived at his mother's house, was signed to a reported seven-figure deal with Atlantic Records after Cobain famously wore a T-shirt Johnston drew while accepting an MTV Video Music Award. Seemingly overnight, being a severely damaged musician was extremely lucrative. Fahey fit comfortably within this new canon. His discography was already filled with images of death, rejection, and lost loves. To a bummer generation, Fahey provided a perfect soundtrack. 
Much like at the 1964 Newport Folk Festival, forgotten recluses were being brought out years later for their moment of cultural appreciation. Rhino Records issued a double-CD set of the best of John Fahey. Compiled by old buddy Barry Hansen, the album was titled, at Fahey's insistence, _Return of the Repressed._ "I spent a day with him in Salem to work on _Return of the Repressed_," says Hansen. "As others have stated, he was living in a 'welfare motel.' The room was piled high with LPs that he had gleaned from local thrift shops. He supported himself by selling the better ones to affluent collectors around the country, very much a 1990s analog to what he had done in the 1960s with blues 78s." Released in 1994, this set introduced Fahey's music to the digital age and included tracks from most of his 1960s and '70s Takoma records. The material sounded fresh and exciting to modern listeners. Fahey, however, was adamant about not being part of some hippie nostalgia trip; feeling as though he had never fit into the image of 1960s rock 'n' roll culture, he remained wary about his old work being reintroduced. Fahey had already disassociated himself from the American Primitive fingerpicking style with which he had become synonymous. "I was writing these things as an escape, as a possible way to make money," he claimed. "The sentiments expressed come out of a fucked up situation. I was creating for myself an imaginary, beautiful world and pretending that I lived there, but I didn't feel beautiful. I was mad but I wasn't aware of it. I was also very sad, afraid and lonely. By presenting this so-called beautiful facade I looked good to myself and to my audience." Luckily, Hansen took care of writing the notes. When Fahey couldn't afford his weekly fleabag motel he took shelter at the Union Gospel Mission across Salem, the only place left for him to go. The mission was often filled with drug dealers and other unsavory people. 
Fahey would find cocaine bags stashed behind the toilets and was once mugged while staying there. But more objectionable to Fahey than the criminal element was the mission's dogmatic interpretation of Christianity, which the staff demanded from those who entered. For Fahey, it was one of the most difficult conditions of staying there. Forced to attend meetings and discuss one-sided views of theology, he became increasingly frustrated at having to accept their version of religion in order to keep his bed. If he did not regurgitate the ethos preached therein they cast him out and he slept in his car. So he bit his tongue and played by the rules. Fahey became so removed from society and his former life that few knew how dire his living situation was. "I didn't know when he moved to Salem, so that's how much we kept in touch," says former manager Denny Bruce. "We had the same CPA, and one day I got a call from him asking me to come see him because John needed a check FedExed to him right away, as he was in very bad shape. Our share of some Kottke check was $120 each. I had no idea what he was going to do with it until I saw it was addressed to John Fahey, care of the Salvation Army. I was shocked. That was the first time I heard." Even those who knew about his living conditions felt that perhaps it had been his choice to live in squalor. "John did spend quite a bit of time creating his own myths, and I think maybe the Union Gospel Mission was part of that, the myth of John Fahey," says Melody. Within the music industry, new myths were being written about Fahey, too—ones that seemed to understand his perspective as part of a pantheon of American independent artists. Music critic Byron Coley wrote for and coedited one of the decade's most beloved underground magazines, _Forced Exposure._ Known for his expertise in the outsider and the difficult, Coley had long followed Fahey's career. 
"When I started hanging out with Glenn Jones, we each had people who we were obsessed with collecting, and Fahey was one of his," recalls Coley. After Jones explained Fahey's significance, Coley fully understood the bigger picture. "We were into people like Sun Ra, Harry Partch, Michael Hurley. There was a pantheon of people who put out their own records and ran their own labels. From that point on, with the people who I hung out with, Fahey was one of the pantheon." Through Jones, Coley learned that Fahey had fallen on hard times. He pitched an idea to _Spin_ magazine: he would get the story on the current whereabouts and activities of this reclusive genius guitar player if they would fund the trip to Salem. "People were buying the records and people assumed he was dead," Coley adds. "It led to the obvious comparison of people thinking blues guys are dead." The parallels to Fahey's own discovery of Skip James were striking. In turn, Fahey himself became the keeper of lost secrets from the past; he seemed as alien to the alternative crowd as James had been to the folk revivalists. Fahey, living in a flophouse in Salem, was an ideal candidate for rediscovery. When Coley went to see him, he was immediately caught off guard. Fahey opened the door with his robe wide open, naked underneath. His room was covered with pizza boxes and takeout containers; piles of records and books were strewn about haphazardly. After Coley explained why he was there, Fahey told him to leave, saying he felt tired. When Coley returned the next day and told him he spent the previous day shopping at record stores, Fahey got excited. The two spent the next few days driving to every record store in Oregon, and ate plenty along the way. Fahey was overjoyed by eating and shopping on someone else's dime. "He was hilarious," recalls Coley. "He was so mean. He would say the meanest shit about people. 
I would ask him about this stuff, like what was the deal on _Fare Forward Voyagers._ He was really into this girl, the secretary Shanti Norris who was the maharishi's secretary at the ashram in L.A., and thought if he did the record maybe she would go out with him. I thought it was so fucked up. He said the worst thing about living in his town was all the Mormon broads. One day a Mormon woman came to his door on her mission or whatever and she's talking to him and he thinks she's really cute. So he gets the Book of Mormon and reads it and tells her it's a piece of shit and that he couldn't believe anyone could believe in that. He would ramble on endlessly about the weirdest shit." Coley got the impression that Fahey was well known around town and often a difficult presence. Few people around him were sufficiently well versed in Fahey's obscure interests to have a meaningful conversation with him. The subsequent feature, which ran in _Spin_ in 1994, introduced Fahey to a new, younger audience. As presented by Coley, Fahey seems equally charming and troubled, but his intelligence comes across perfectly. Seeing the world through his own lens, he was a perfect feature subject for an alternative music culture. Having been endorsed by the rock heroes of the day, Fahey was seen by younger fans as a precursor to the contemporary movement rather than a relic of their parents' generation like the Grateful Dead or James Taylor. Fahey was branded an authentic outsider genius and suddenly there was renewed interest in his work. For the first time Fahey embraced his audience and felt generally excited about the people who were interested in his music. "In the current season, the only people who understand me and with whom I have anything in common are punks and alternatives and industrial and no wave and anti-folk, etc." Fahey said. "Last year there was a big spread on me in _Spin_ by Byron Coley. _Spin,_ not some damned folk music zine or new age yoga yuppie magazine. 
My category is alternative, period. I object to another categorization. Of course, the matter is out of my hands and I cannot prevent you from doing whatever you want to do, but I want to tell you how I—just in case you are interested—look and feel about these matters," wrote Fahey in a letter to Fantasy Records, the company that owned and reissued his earlier Takoma catalog on compact disc. His day-to-day reality still lacked cohesion as he stumbled around town seeking solace. Lonely and craving company, he talked to anyone who shared his interests. He relied on his ability to scavenge records, the one skill he never lost. However, he became a burden to the local record stores and their staff, often putting records on hold that he couldn't pay for and generally being a nuisance. "A few record store guys in Salem knew who he was, but I had the feeling he could be a real pain in the ass in these places," remembers Coley. "He seemed to know a lot of people around town, but I can't imagine many of them knew what he was talking about most of the time." One record store clerk found Fahey sitting on the corner after the store had closed, sobbing. The owner had reshelved the records Fahey had on hold but couldn't afford. Fahey had finally achieved the isolation he claimed to crave, but it came at a cost. His grip on reality was worsening. Left to his own devices, he had little motivation to interact with the outside world—until he discovered the last few decades of experimental music. While Fahey had been aware of experimental composers like John Cage in the 1960s, he was completely unaware of the decades of abstract music that had been made subsequently, from nihilist punk to noise music. Fahey loved the idea of being part of the continuum of artists whose work lay outside the folk or rock worlds. These audiences weren't interested in people like Leo Kottke or the technical guitarists or the New Age lightness of Windham Hill.
The bold despair and radicalism in Fahey's work had finally been manifested elsewhere. Coley would often get late-night phone calls from Fahey, who harangued him on a multitude of subjects. Coley witnessed Fahey's eccentricities far beyond the writing of the article. "I stayed in touch with him, and sometimes he'd be really funny and super pissed off," recalls Coley. "He was angry that I hadn't told him about industrial music. He was mad because I knew all this stuff about this experimental music and I hadn't told him. He was obviously off his rocker a little bit." New opportunities and unparalleled resources emerged. Coley teamed up with Geffen Records executive and former SST Records employee Ray Farrell to sign Fahey to Geffen for a six-figure check. The plan was for him to rerecord the Fonotone material, which at that time was only six sides, with Sonic Youth as his backing band and platinum-selling Geffen star Beck singing vocals. It seemed to them a sweet deal for the down-on-his-luck musician. Fahey didn't have to do much of anything, and he could collect a check large enough to keep him living comfortably in his old age. Contracts were drawn up, but Fahey turned the deal down, saying that he didn't find the idea interesting. "I got the impression that he just refused to do anything that someone else suggested, regardless of what their intention was," says Coley. "He was able to negate that [concept], as it didn't spring from him." In truth, creating music was not a priority for him. He didn't even own a guitar then, and showed little interest in playing. Fahey's recent press had brought a new batch of supporters to help revive his career. One reader affected by his story was Dean Blackwood, a twenty-five-year-old lawyer and record collector. Blackwood had recently started the label Perfect Records, dedicated to making 78 RPM records, a format as obscure and difficult as the music he chose to release. 
Having recently issued a 78 by the outsider/improv/jazz band Sun City Girls, he reached out to Fahey to see if he was interested in recording something for his label. Blackwood immediately saw that Fahey desperately needed assistance and took on the task of helping him with his various problems. Most urgently, several collection agencies were after him for old debts, which had been steadily accruing interest while he ignored them. His only income was from the marginal payments he got on his publishing. For many years these publishing companies were controlled by legendary folk manager Manny Greenhill; his son Mitch took over after Manny died. "All Fahey's own records, the underlying compositions were published by [Manny Greenhill's] companies," says Blackwood. "All his sales and mechanical royalties, all Kottke's, all the Takoma stuff was collected there even though [Fahey] sold the master recordings many years ago. He still retained the publishing for Kottke and Basho. I don't know if that extended to everyone on the label but at least the Fahey and Kottke stuff. There was a little nest egg there. We were forced to draw on it periodically to keep the lights on, literally. He always had some situation and needed money wired to him and that was how it worked with Mitch. He'd just get a call when he needed money." This royalty income was the only thing separating him from complete ruin. With proper management, Blackwood believed Fahey could regain some sense of normalcy. He offered to manage Fahey and began to put his various affairs in order, initially thinking the weekly motels were the root of the problem. "I remember working out the math at the time. The motels were four times what it would be for an apartment—and a nice place, too," says Blackwood. "The services at a motel offered him certain freedoms from ordinary hygiene, which was important, I would come to find out.
Initially after doing this analysis I thought that getting him out of the motels was crucial in order to get him back on track. He needed someone to mind a budget and whatnot. I learned after working with him for a while that there was a disproportional value in the economics of the thing and it in fact did make sense to have someone there to sort of hover over him in the background and make sure he didn't disappear in a sea of pizza boxes." Blackwood arrived at a crucial time. Without a wife to anchor him emotionally, Fahey had lost himself. "When you operate in that kind of world for several decades, the condition probably transcends will," says Blackwood. "He was no longer capable. Looking at a bill that had been slipped under the door and being able to pay it, I think he lost the capacity to do that over years of dedicating himself against doing so. It wasn't just him being a lifelong contrarian, which he was in spades. It had become a true dysfunctional aspect of his personality. In that sense he really was an outsider." With Blackwood on board as his legal aide and manager, Fahey wanted to get back to focusing on his creative endeavors. He dreamed of being truly independent, and had an endless stream of ideas. This too was a fantasy, however, as he needed others to take care of him. "He wasn't the kind of artist that could build a cabin in the woods and sustain himself outside of society," adds Blackwood. "He had needs that could only be satisfied by us here in the modern world." Then came an unexpected turning point. In 1995, Fahey's estranged father passed away. He had lived the rest of his life in the house on New York Avenue in Takoma Park, where John had lived as a young boy. An NRA member, Al had amassed a cache of loaded guns, which were found at the time of his death, and the house was surrounded by barbed wire. He had also amassed a considerable amount of money. And, to everyone's surprise, he left it to his only son.
It was enough for John to pay his creditors, and then some. Fahey decided the best course of action with the remainder of the inheritance was to start a new record label. He partnered with Blackwood to handle the back end. With a more adventurous audience tuned into his music, Fahey curated a dream label of his favorite artists, while Blackwood introduced more contemporary acts. "Our initial conversations were more just general musings about record labels and what was wrong with them, and what was right with some of them," Blackwood says. "And wouldn't it be great if someone focused on these neglected artists? And what if people took these luxurious cocoons of packaging combined with the beauty of the sounds?" Blackwood and Fahey's plan was to repackage and present both old and new material at the aesthetic heights they felt it deserved. They decided to name their new label Revenant Records and quickly began working on projects with new artists and reissues of Fahey favorites like the Stanley Brothers. Younger audiences who had already found Fahey were curious about roots as well as contemporary music. By making highly detailed reissue sets they figured they could appeal to new collectors who could see the Revenant catalog as a viable way to approach outsider American music. "They were appreciative of art and design being married with these gorgeous sounds, so it seemed like there was an opportunity to tap into, if only someone had the money to do it," says Blackwood. "So when the money came, we had been talking about Dock Boggs and Charley Patton and early Stanley Brothers and Ornette Coleman." Blackwood's legal background was crucial in negotiating the murky waters of music from decades past. "A lot of the legality was in a gray area," says Blackwood. "At the time there was a possibility that stuff from at least as far back as the '20s could still not be technically public domain, depending on what the copyright holders had done." 
Major labels didn't seem too intent on searching their legal agreements from decades earlier for the obscure artists Revenant sought to put out, so the new albums went to market unopposed. Featuring extensive notes and critical analysis, Revenant's releases were a treasure trove of folk and blues for collectors. The culmination of its approach was the release of the previously unissued fourth volume of Harry Smith's iconic _Anthology of American Folk Music._ Packaged with a level of detail that satisfied the most hardcore collector, the two-disc set served as a stunning addendum to America's most definitive and long-standing compilation of folk music. Revenant became an immediate success, and its roster quickly expanded to include formidable contemporary acts such as Jim O'Rourke, Sir Richard Bishop, and the Bassholes. These artists not only gave the label a modern voice but also highlighted the lineage of Fahey's influence. If Fahey had absorbed the influences of his youth and recontextualized them, many of Revenant's contemporary artists had achieved their own innovations using Fahey as an influence. Younger artists were happy to work with Revenant; Fahey's iconoclast persona helped seal the deal, even if he rarely interacted with the artists themselves. Revenant not only gave Fahey a modern context as an influence on the new school of guitar innovators, but also reestablished him as a cultural tastemaker, exhibiting his curatorial expertise and fanatical dedication. Blackwood handled the day-to-day aspects of the operation from his home in Tennessee. They stayed in close contact, and Blackwood did the best he could to keep Fahey's and Revenant's affairs in order. Apart from standard albums, the label also became known for its elaborate box sets of twentieth-century outsider musicians.
Free jazz saxophone legend Albert Ayler and abstract blues-rock kings Captain Beefheart & the Magic Band both received the full Revenant box set treatment: packaged in beautifully designed boxes, and filled with unreleased tracks, images, and detailed historical liner notes. Both Ayler's nine-disc box and Beefheart's five-disc set garnered praise and sales, though very little money was to be made. New difficulties set in when Fahey and Blackwood worked with established living artists. It was sometimes hard to penetrate the mindsets of artists like Beefheart and Ayler, who came of musical age in the 1960s and had existed on the fringe of commercial recording companies. Blackwood explains, "None of them ever got paid in the day, so they see a big project as evidence of several things: one, that they will finally get paid; two, you wouldn't be interested in doing the project unless you were going to get paid; and three, that by definition a large project has a large audience. None of those things were true in most cases. The types of projects we put out were extensive. It's hard to tap into the uninitiated. We tried to appeal to both the hardcore fan base and to those who were just more adventurous in their music listening and might try something extensive if it was done very well and had a great presentation. In the end, at most, for your highest-sales-potential item, you get maybe in the low tens or twenty thousand copies worldwide over a period of time." Once the substantial production costs were recouped and the proceeds split among the various artists and publishers, there was little profit left. However, the quality of Revenant's releases garnered instant appreciation from its niche public, and it quickly solidified the label's merit. What's more, it furthered Fahey's legendary status.
_Spin_ named Revenant's _Captain Beefheart & His Magic Band Grow Fins_ best reissue of 1999, and it received four stars in _Rolling Stone._ The _Chicago Tribune_ called the Albert Ayler _Holy Ghost_ set "the Everest of all jazz boxed sets of 2004.... A major event.... 'Holy Ghost' represents a long overdue restoration of Ayler's art to a listening public that has had scant chance to hear it." Unsurprisingly, the Revenant project closest to Fahey's heart had to be the Charley Patton seven-CD box set _Screamin' and Hollerin' the Blues: The Worlds of Charley Patton._ The set included all of Patton's existing recordings, as well as a slew of music that was influenced by him, setting up Patton as a legend in his own right. "There was a sense that there was really something to tap into because there hadn't been anything like that since the Robert Johnson set," says Blackwood. Also included in the Patton set was a reprinted and reworked edition of Fahey's thesis on Patton, which had originally been published in Europe in the early 1970s. In addition to his own research, Fahey reached out to his old friend Dick Spottswood for assistance with the set. The mighty box set became the ultimate collection of Patton research and ephemera. So impressive was the production value that Revenant earned three Grammys, for Best Historical Album, Best Box or Special Limited Edition Package, and Best Album Notes. Being the foremost expert on Charley Patton garnered Fahey industry-wide recognition. Fahey was indeed a serious musicologist, a man who knew about traditional American music, in addition to being a record collector and musician of stature. In the fifth decade of his musical career, Fahey had reached a new high. His own music, however, found a more mixed reception.

# 11

# DANCE OF THE INHABITANTS

"I'm just doing solo electric. One gets old, and then the fingers hurt. I mean I've got an acoustic, but Jesus, it kills me. Like razor blades cutting into my left fingers.
Then I can't practice the next day. I tend to do very long practicing, like for hours, and I just can't do it. Life is so tough."

—John Fahey, interview, 2000

If Fahey intended for his music to be the conduit for his negative emotions, the current climate of music basically begged for little else. Fahey's love affair with the modern, experimental guitarists was a two-way street. Seeing the complete absence of the politics or sentimentality characteristic of the folkies, hippies, or New Agers, he embraced his new stylistic freedom with abandon. Fahey had heard a record of the prepared guitar work of experimental musician Jim O'Rourke. Musically, O'Rourke's records at the time were abstract and minimal, using the guitar as a sound source rather than traditional picking or strumming, and eschewing any traditional forms or structures. Having grown up with a strict midwestern background, the overtly polite improviser became a perfect foil for the notoriously difficult Fahey. O'Rourke's interest in music straddled both the improvisational and the compositional, and he had created a staggering body of work by his midtwenties. He was strongly influenced both by Fahey and by minimalist composers like Tony Conrad, which informed his broad musical template. O'Rourke was in high demand as a producer and collaborator, working with the likes of Faust and Henry Kaiser, among others. His musical prowess gave him the ability to help translate even the most difficult and abstract musical ideas. Unlike most other guitarists whom Fahey encountered, O'Rourke had little interest in either the technical aspects of guitar playing or its bluesmen. Fahey, impressed by O'Rourke's extreme guitar manipulations, found his phone number and called him out of the blue, asking him to help on new recordings. Stunned by the call, O'Rourke initially thought it a practical joke. He never imagined that Fahey listened to his work.
The two eventually met in Los Angeles, where O'Rourke was working with former Fahey associates the Red Krayola. "Fahey isn't an Americana thing for me, although I understand that it's really the roots of the music," says O'Rourke. "But it's this other part, the minimalist aspect that he tapped into, that was really important to me. I don't think he knew what the hell I was talking about, but he understood that I didn't think of him in the context of Bukka White. I didn't give a shit about that stuff, honestly." Inspired by the experimental music community, Fahey grew bolder in his own approaches. No longer focused solely on music, he began to explore other outlets of expression and creative release. His fascination with his past demanded further exploration. The results filled notebooks, with stories of friends, childhood, and career all getting equal treatment. The occasionally lucid accounts of his career were offset by the descriptions of the wild fantasy world of Takoma Park. Though unreliable as a narrator, as an author of fiction he came across as both wildly entertaining and emotive. Fahey also displayed his sentimental side, detailing his childhood crushes and fantasy loves. In one such description of a teenage love, he wrote, "Yes, I wonder what would have happened if I hadn't gone for a walk that balmy day in April. Whatever would have happened I'm glad it didn't. As far as I can see or feel, Dianne turned out to be my salvation. The girl I met that solstice spring day when she utterly destroyed my unconscious vow to remain superficial, unconnected, cold—that's why I had been afraid of the winter. I had been afraid of myself. But she wasn't afraid of me. Not my beloved tassel time girl." But the true shock lay in the vivid depictions of sexual abuse committed by his father. 
One particular scene tells of his father showing him what would happen if he told anyone about the abuse: "He made a noose out of the sash pull hanging down from the ceiling," Fahey wrote. "He made it very slowly and looped end around end. And while he did this he told me what it was like to die by hanging. How I would gag and gasp for breath but wouldn't die because he wouldn't let me die by breaking my neck. Oh no. That would be too easy and too quick. He wanted me to strangle and strangle for a long time." Blurred together, the material made for fascinating reading and eventually found a publisher at the Drag City record label, home to Jim O'Rourke, largely thanks to his efforts; Fahey had pulled the pages out of the trash at O'Rourke's insistence. "He told me about the writing in one of our first conversations, before the record or any of that," recalls O'Rourke. "We started talking about movies and he told me how he punched out Antonioni once. He told me he wrote a story about it and that he would send it to me. The next day I got a box FedExed to me full of pages and stories that were stained in spaghetti sauce and just a mess. That was the first book." Once they were collected and edited, under the title _How Bluegrass Music Destroyed My Life,_ Fahey had something of a fictional memoir. Fiction and reality had always held little distinction for the author, so readers were left to decide for themselves the stories' veracity. The life and times of Fahey, although presented in a highly subjective fashion, left plenty for audiences to pore over. Besides his traumas, he presented his take on Skip James, folk festivals, Antonioni, young love, and patience—all in his unmistakable narrative voice. The book became a success for all parties, selling more than ten thousand copies. More important, it further perpetuated the John Fahey mythos. As a young man he had sought answers regarding life's great questions from the elder bluesmen he encountered.
He in turn was imparting his own sage wisdom to a new generation via his memoirs. His creative outlets were widening. He took up whole new avenues of expression, including painting. He created abstracts with watercolors and spray paint, sometimes in deep traditional colors, other times with bright neon. The paintings seemed a direct reflection of his moods, sometimes splattered across the paper, other times drowned in ink and textured by diffusion. The bold colors and blotchy shapes are reminiscent of Rorschach tests. He'd make dozens of pieces at a time, transforming his motel rooms into paint-splattered studios, much to the dismay of the cleaning staffs and management. "John's life was his work," remembers Melody. "Maybe I inspired him somewhat with painting. After we split up, I was making a living buying and selling things at yard sales and estate sales. John asked me if I could get him some powdered paint. He made these small paintings by putting the powder in wet phone books and then he'd stomp on them; then he'd sell them for five dollars at his shows." He also mailed packages of several dozen to Byron Coley, unbidden, for him to sell at record fairs. "He asked me to sell them for $10 apiece, and asked me to wire him $300, which is ridiculous because it costs $50 to wire something. He made me send it to some Western Union in Salem," says Coley. "Then he asked me how they were selling and I said I sold a few. Then the next time he'd say, 'You know you really got to give me more money for those paintings' and I just said I only made $40 so far. Then he'd send me some more. I had so many of them." His return to music was equally nontraditional. Fahey committed acoustic heresy and switched for the first time ever to electric guitar. In this new medium he played exploratory, extended, improvised material. The music sounds distant, covered in reverb. His playing seems slow, as if each note were the result of great effort.
The rich melodies and virtuosity that attracted his original fans are largely absent, leaving the skeletal elements of his signature style adrift in a pool of effects. Rather than attempting elaborate compositions, he repeats elementary refrains in a stilted, hesitant manner. It sounds as if he is relearning the instrument after not playing for many years. Even so, he finally felt free to pursue the edges of his playing with little thought of technique, melody, or audience. With few expectations, he released his first album of new music in years, 1997's _City of Refuge,_ on Tim/Kerr Records. The album title is another reference to his troubled relationship with his parents. He explained:

It was a place my parents took me to when I was a child. It was along the Atlantic Ocean somewhere, and we ran out of food and water and we went into this mysterious city. It was just so weird. There were no people, but there was a big factory. I had a recurrent dream about it that my parents had planned to take me to the city to chop me up and consume me. But the factory communicated with me and warned me what they were planning, and me and the factory consumed my parents instead.

Fahey, still lost in the throes of repressed memories, produced a mixed bag of noise collages, meandering electric and acoustic guitar, and various sampled sounds. Elements of his original style can be heard, but they are juxtaposed with colder electronic sounds. There was little to comfort fans of his vintage 1960s acoustic work. The innovations of the last several decades were new and exciting to Fahey, but for those who had been listening to experimental music, Fahey's new direction lacked cohesion, as if he were being difficult for the sake of it. Even ardent supporters had nothing good to say about the album.
Glenn Jones wrote:

Little of _City of Refuge_ can be considered groundbreaking, whether in light of the works of '80s and '90s sampling artists; the overwhelming (and largely undifferentiated) bulk of industrial music created in the wake of Throbbing Gristle and SPK in the late '70s; the musical anarchism of the '60s art-rock; the Fluxus and futurist composers; the works of electronic and musique concrete composers in the '40s, '50s and '60s; the dada and noise composers of the '20s—or by Fahey's own previous high-water mark.

He continues with a sentiment that many shared about the tepid album's slow crawl: _"City of Refuge_ hasn't shocked old fans so much as it's bored or disappointed many of them. It pales in comparison with most of John's back catalogue, and I believe that if _City of Refuge_ were John Fahey's first record, instead of his 40th, it would have gone largely unnoticed." Jones had spent most of the 1990s playing guitar in the psychedelic band Cul de Sac, which aimed to fuse elements of American Primitive, Krautrock, and other disparate influences. As a band of record collectors, Cul de Sac had dreams of collaborating with some of their influences. Jones's longtime correspondence with Fahey made him an ideal potential collaborator, especially now that Fahey had gone electric. Jones suggested that Fahey join Cul de Sac for a fleshed-out take on each other's songs. The record label Thirsty Ear agreed to foot the bill for the recording sessions. Fahey and the band rehearsed for ten days in preparation. At the last moment the studio canceled due to lack of payment, and Jones scrambled to find a place to record. Jones recalls the process in the album notes:

After a Boston photo shoot, we made our way to Warren, Rhode Island's Normandy Studios, the new site for the project. We had nine days to record and mix an album. But after two days of recording basics, John, growing more and more impatient, rebelled.
I discovered that he had no interest in making the kind of record I'd envisioned. He attacked the material, said it would be disastrous for his career to be associated with it, and called us a 'retro lounge act.' And while Cul de Sac might run through a song three or four times, Fahey rarely played a song more than once. He has little patience in striving for the perfect take. Accidents and serendipity delight him. I can still see him stretched out on the floor of the studio control room listening to the playback of this album's final track, roaring with laughter.

Whereas Jones had a vision for the album and tried to articulate it, Fahey decided at some point the material was too musical and rebelled, refusing to be involved with it any longer. Jones found working with Fahey a difficult process, despite their years of friendship. "Having been so closely involved with Fahey throughout the project and having had to bear much of the brunt of his claims, I have discovered that John exaggerates or invents things in order to appear in the best possible light," wrote Jones. "He seems willing to change his tune depending on how 'hip' he thinks his music should appear at the moment, or who he's trying to impress." After scrapping what had initially been prepared, they spent the remaining time improvising and following Fahey's instincts. Rather than try to control the impossible, they let the sessions follow their own course, and the resulting album was appropriately entitled _The Epiphany of Glenn Jones._ A mixed bag of sound collage and some song-based collaborative material, the album retains an unhinged, unpredictable feel throughout. After the release of these new albums, Fahey began to tour again to support himself. Back on the road, he dealt with many of the same problems. He still felt that he didn't have to impress his audience. His electric material was intentionally slow and dark, something not all Fahey fans were looking for.
Fans who hadn't been keeping close tabs on his recent activities occasionally came to his shows seeking acoustic mellowness. Instead they saw an angry Fahey figuring out the nuances of electric guitar. No stranger to shrugging off criticism, he enjoyed the dissonance. Most of his younger audience was used to such sounds and tolerated his experiments. "They have a much wider knowledge of music and noise and experimentalism," said Fahey. "I'm not dealing with hippies anymore. I always hated hippies. I ran into this chick the other night when my trio was playing here in Portland. Everybody was digging it but here comes this old chick making a lot of noise, wanting me to play shit that's forty years old. I told her 'go to hell.' She started screaming and stuff so they had to take her out. I don't care. Get lost. That stuff was too sentimental anyway." Revisiting the technique of collage, he found that the new technology allowed him to edit and layer more efficiently than splicing tape together. Seeking the furthest reaches of out-there music, he discovered Japanese noise, a devout circle of extreme electronic musicians. "I like noise," said Fahey. "I use Merzbow [a Japanese noise artist] in my tape collages. I like the violent. It's abstract violent. When I come home exhausted and I want to lay down and forget about my obligations to other people, I'll turn on noise and enjoy it. Noise has nothing to do with people, and I don't want to think about people while I'm resting. Then I'll fall asleep, and when I wake up, I'll be ready to go and deal with people again." He moved ever closer to the avant-garde. He began working with the Table of the Elements label, known at the time for their series of radical experimental guitar albums. Their roster included Keiji Haino, Loren Connors, and O'Rourke, among other experimentalists. It seemed ideal company. 
The label even attached O'Rourke to produce Fahey's next record, giving the two burgeoning friends an opportunity to collaborate on a musical level. Fahey came out to Chicago to stay with O'Rourke and his roommate Kevin Drumm, a pioneer of experimental guitar in his own right, to record the jarringly abstract _Womblife._ In the process, Fahey abandoned guitar playing entirely. "He had these tapes and he wanted me to stack them on top of each other in various ways," remembers O'Rourke. "My studio at the time was basically a room off of the kitchen. He said he wanted to make it sound like whales rubbing the barnacles off the side of a boat. Then he would lie down on a couch in the kitchen where he could hear and I would sort of massage it together. He just said, 'Do what you need to do,' and I just sort of did electronic music-ing or whatever." Fahey doesn't play an instrument at all on _Womblife._ Instead, he orchestrates found sounds to compose the delirious symphonies he heard in his head. "All the tracks were made with these tapes he had," recalls O'Rourke. "Kevin played on something. I played on something. I had a synthesizer at the time, but if he didn't have a tape of the sound he wanted he'd just tell us to do it." To fend off accusations that he could no longer play the intricate fingerpicking style he was best known for, he decided to include a long solo acoustic composition, the twelve-minute album finale "Juana." The piece was intended to silence his critics, a gorgeous long-form track that recalled his Takoma heights. However, according to O'Rourke, it was not Fahey who performed it for the album. "The last track he recorded a few times, and then said he didn't want to play it anymore," recalls O'Rourke. "What happened was, the chair he was playing in had wheels on it and he leaned too hard on the front of the chair and it went out from under him. He was really big! He just said, 'I'm not playing this anymore. You play it.' So I played it. 
"He wanted to put one track on there to show he could play guitar. I think one of the things about the record before was I think he was stung by people saying he couldn't play guitar anymore. He could play that track. He just didn't want to that day." No one seemed to be able to tell the difference, and the track became an album highlight for nostalgic Fahey fans. A few years earlier, one of his prior labels, Shanachie, had suggested he rerecord his 1964 _Death Chants_ album. Instead he recruited guitarist Charlie Schmidt to do a note-for-note version and passed the tapes off to the label. (He was dropped from the label before the project was released.) Uninterested in the mechanics of playing his signature fingerpicking style, he seemed all too happy to let others stand in when it suited him. Fahey had grown stubborn in his isolation and continued to be massively difficult for those around him. On the road he seemed unable or unwilling to do basic upkeep. He brought only T-shirts and shorts for bitter winter climates. He required constant supervision from friends and concert promoters, all of whom rushed to accommodate his unpredictable needs. On tour in Boston, Coley recalls, his belt broke: "When he broke his belt he would just let his pants fall down. We were walking around Harvard Square and he was shuffling along with his pants around his ankles. I would hate to say his story is cautionary, but it's hard to say." Some believed that he put on a show of being dysfunctional as a defense mechanism. "He had so many years of people he didn't want to deal with and idiots that he developed so many methods of deflecting," says O'Rourke. "He created a character to avoid people and then he was always that character. I think he was more in touch with reality than people give him credit for, but I just don't think it was worth his while to show that because it was a protective wall." 
Jones, who had known Fahey far longer than his new 1990s experimental fans, sees it differently. "Part of that was him living up to the legend of himself," says Jones. "I saw that behavior in him when he was around younger people who were into the Fahey mythology and he would kind of play to the balcony. In the years I knew him, one-on-one he was never like that around me. I don't know how much of that was him. Certainly there was excess, stories of him eating a whole tub of ice cream. I don't think his excesses were to impress anyone. It was just his appetite. When he was a drinker he would drink to the point of obliteration and when he was a smoker he chain-smoked. When he did anything that he thought was good he would take it as far as it could go." Fahey employed these techniques often. On the road, if situations weren't to his specifications, he found excuses to evade or sabotage things. Fahey called promoters and pretended to be a doctor, telling them that John Fahey had a heart attack. If the airport line dragged too long to catch a flight to a show, he sometimes just turned around and went home. He got creative, unafraid to make a spectacle if need be. "We were at one show and he didn't like the promoter, so he decides to pretend to go into a diabetic coma," recalls O'Rourke. "He doesn't let me in on it, I'm literally carrying him up the stairs and the promoter is behind us freaking out saying we have to call the police and John's unconscious. I'm carrying him and all of a sudden I hear this whisper in my ear: 'Keep going.' He eventually let me in on his tomfoolery, so I felt blessed even though I was a victim of it. I just don't think he could get out of the habit. He couldn't stop fucking with people. He just couldn't do it." Coley saw Fahey's isolation with clarity. "He knew exactly what was going on, but he didn't seem to give a shit. He was impressively 'fuck you' about a lot of stuff. I assume he was a wise-ass. 
In his prime, especially when he was drinking, he must have been a fucking terror. The impression you got was that he was a really smart guy but had put himself perversely in this milieu where it would be impossible for him to exercise these aspects of his personality," says Coley. "It was like he was doing some weird penance. I hate to project too much onto it, but it did have a weird moral quality of self-flagellation." Fahey's lifelong habits made him difficult to deal with, even for those who lived on the fringe of the creative worlds themselves. "I think the creative impulse was growing again. In a way it was a fight between the new impulse to do things and to him the easy life he had gotten used to, basically living hand to mouth and going to thrift stores," says O'Rourke. "It's almost like he spent so many years not giving a shit, not about being creative but just about what people thought of him. Now that he was starting to deal with people again being creative he couldn't quite get out of the habits he had for years and years. That's the way I looked at it. He was never bad to me, but he put me in situations that he didn't really want to put me into but he couldn't help it, it had been so ingrained." Fahey enjoyed flouting conventions, pushing buttons to test people's limits. The strictures of international etiquette were to him invisible. His travels did little to restrict his behavior. "I remember we were in Germany, we were in Cologne, and he wanted to go to one of the big art bookstores, Walther König," remembers O'Rourke. "We got there and the first thing he asks for are books on Nazi propaganda. So he buys all these books of Nazi propaganda and we go get something to eat before the show. We were sitting at the table and he starts opening these books and pointing at them and showing them to other people in the restaurant and going 'ha ha!' " As Fahey's catalog continued to be reissued on CD, he booked bigger and bigger gigs. 
In 1998, Fahey was invited to perform at the Guinness Fleadh festival on Randall's Island in New York. The Irish-themed multiday event featured performances from artists like Sinead O'Connor, Van Morrison, and John Lee Hooker, among many others. Fahey used the concert as an excuse to spend a few weeks in New York. He stayed at the Hint House, a home shared by various members of the No-Neck Blues Band. As one of the city's most uncompromising groups of improvisational musicians, the band often played in tunnels in Central Park and other strange locations. The group was more of a collective than a traditional band, with no set membership; Fahey thought they were a cult. He scheduled them to do some recording work for Revenant. One of the band's more consistent members, Dave Nuss, recalls Fahey's presence as more than memorable: "Fahey stayed with [artist] Rita Ackermann and me up on the top floor of the NNCK [No-Neck Blues Band] building," he remembered. "Upon his arrival, the bathroom filled with bottles of pills on every surface. Despite the chilly temperatures, he was always barefoot and shirtless, wearing the same cutoff jean shorts, held up by a rope, for the duration of his stay. He rarely had his sunglasses off, day or night. I recall that he would play acoustic guitar sometimes, which was a joy, and he would fall asleep anywhere, anytime, for any length of time, and no sound would rouse him. We thought he was narcoleptic. I recall an incident when he was eating a banana and fell asleep holding the peeled, half-eaten banana straight into the air while his head drooped and he loudly snored." One Hint House member felt his presence more than the rest. Sara Press lived in the house with her boyfriend at the time, Adam Mortimer. After having met her just briefly, Fahey developed an intense obsession with her. His romantic fantasies were exaggerated to unreasonable degrees. He made his affections overwhelmingly clear, however nonsensical it may have seemed.
"I was living in the Hint House at the time," Press remembers. "Fahey stayed there, and I had a whirlwind experience as his muse for a few months afterwards. My Fahey name was 'Sacred Sara of the Clean Shirts.' He sent me several boxes of letters, paintings, mixtapes, and other things during that time, culminating in a marriage proposal. Just before I got the letter with the proposal and just after he mailed it, I managed to clarify my situation with him over the phone. I guess he hadn't realized I was in a live-in relationship since he mistakenly believed I lived in a cult or commune. He then became horribly embarrassed and got off the phone and never contacted me again. For my part, I thought he knew my situation all along and just didn't care. Being twenty-three at the time I didn't analyze too closely his fascination with me. I had thought it was simply a very strange friendship with someone who lived within rules of his own making." Romantic entanglements or not, his relationship with the No-Neck Blues Band continued unabated. He even booked a US tour with the group. The chaotic nature of the young, eccentric musicians inspired Fahey. The band had no clear structures, rules, or discussions about their performances. With them, he had finally found a place where he could do anything musically, unbound by any genre or audience barriers. Fans who came to see the Fahey of the past were often startled and turned off by both NNCK and Fahey's modern, abstract takes on musicality and performance. John Fell Ryan was the most flamboyant persona in NNCK, often dressing in bell-bottoms and robes. His contributions to the group included free-associating vocals while brandishing a seven-foot shaman staff. Ryan recalls the tour receiving a mixed reception. "I was aware [Fahey's] performances were kind of bumming people out," Ryan says.
"His legend was as a highly skilled and composed acoustic picker, but his style at the time was electric, detuned, slow, wandering, and always an hour too long. I didn't like the kind of fussy fan with high expectations and demands of performers, so I thought his sludgy, fuck-you delivery was appropriate. But then again, that same fussy audience had similar problems with me as a performer." Fahey enjoyed disappointing those seeking his older style. One night during the tour, over dinner, Fahey suggested psychotherapy to the troubled young musician. Concerned, Fahey tried to impart some wisdom, seeing aspects of his own troubles in his tourmate. "His suggestion of psychoanalysis was coming from a protective understanding of going through madness himself. Later that night, Fahey came up to me again in the parking lot and told me, 'You know when you were singing "Everyone is the same"? I know what you're talking about. Everyone is the same.' "I would claim to others that I was just riffing on Motorhead's 'Ace of Spades,' but secretly knew that Fahey knew that we both were experiencing what might be considered gnostic revelations or visions of the universe. A few days up the coast, in my hometown of Seattle, NNCK and Fahey played a set together at the Tractor Tavern. This set was striking in that Fahey put down his guitar and adopted my 'rapping' style of free verse. So we did a bit of a duet there. It was the last time I saw John Fahey alive."

# 12

# RED CROSS

"Suddenly I hit desolation and just as suddenly my mother was gone, and I found myself on another kind of train headed West. And there was my wife. We were together again and headed home. Desolation was gone. You don't feel so bad when you're headed home. Desolation was gone. You don't feel so bad when you're headed towards a place that was ruined a long time ago—as when you're headed towards a good place where they are just beginning the abomination and you know it won't stop until it's all gone.
I didn't want to see the process. But we were escaping, so I felt better. We could never live in Paradise, Md., but it wouldn't be there very long anyway. Nobody could stay."

—John Fahey, in his liner notes to _John Fahey Visits Washington, D.C.,_ 1979

Back in Salem, Fahey once again attempted to settle down. He started dating a woman named Melissa Stephenson. She had approached him to autograph her copy of the _Anthology of American Folk Music_ at a show in California, and in return he handed her a business card from the motel he was staying in. After a few visits to Salem she rented a house in nearby Keizer, Oregon, and Fahey moved in with her. An avid fan of his music for many decades, Stephenson was reluctantly thrilled and charmed by Fahey. The two spent time listening to albums, going out to eat, and driving around town to thrift stores, searching for records. "When John moved in he was nice, polite, eager to be a good housemate," says Stephenson. "He agreed to help with everything but never did anything. He was very entertaining, could be very funny. He told me he'd really wanted to be a stand-up comedian but was better at playing the guitar so he did that. With Fahey there's never a dull moment. He mostly liked to eat. He was somewhat ill at ease with most women, except waitresses. His interactions with men seemed normal enough. He could say a guy was a nice guy or an asshole and he was right. All women were dangerous." The relationship quickly grew strained as he became combative. He began to come and go. Stephenson let him go, but seeing that he needed help kept an open door, offering him refuge whenever he wanted it. They lived together on and off for the next two years, during which he periodically went back to the cheap motels, until they no longer took him in due to his erratic behavior. When left to his own devices he invariably ran out of money and a hospitable place to stay. Even though she adored him, he became impossible to take care of.
"He was not in therapy while living with me, but once he was about to be evicted from the motel where he was renting a room and faked a suicide to get a free ride to Salem in an ambulance," remembers Stephenson. "They put him in the psych ward, and two weeks later, when they caught on, I got a call from Dean Blackwood asking me to take him back in because he had no place to go. They'd thrown him out of the last place that would have him. I agreed, but Dean had to convince me to some degree. And yes, John talked about having been abused by his father frequently. He'd call his father a pedophile out loud to anyone anytime he had an audience. I got tired of hearing it, and after a while I no longer believed it and asked him to stop talking about it around me." In the summer of 2000, Glenn Jones reconnected with Fahey for the first time since the fallout from their collaborative album. Jones hoped they could work together on compiling a comprehensive set of Fahey's Fonotone material—his oldest recordings, which he cut back in Bussard's basement so many decades ago. Reluctant to revisit this music, Fahey said he would only participate for $10,000; otherwise they would have to wait until he died. Jones had learned from his prior collaborative attempts and didn't battle to persuade him. Instead, he let it go, and the two enjoyed an afternoon unburdened by expectations, hanging out and talking about records like old times. Fahey even commended Jones for sticking to his guns on their album together, while so many others would have given up in the face of such overwhelming adversity. Fahey continued pressing forward with new music, focusing on fragments for his next album. Revenant had the Patton box set under way, and Drag City was putting together a second book of Fahey's writing, under the title _Vampire Vultures._ Although he had some assistance with his day-to-day responsibilities, he became increasingly lonely. 
With no partner in his life, he remained in emotional disarray. He continued to obsess over women, no matter how fleeting the interaction. His most desperate attempts at "love" were aimed toward a young Japanese woman named Hitomi. Fahey had undertaken many intensely powerful pursuits, bordering on stalking, but this was the most severe of them all. They met at a show of his in Japan, and she instantly became his all-consuming reason for being. She became a symbol of some elusive cure for his sufferings. The relationship was yet another fantasy for Fahey; his attentions were unrequited. "I happened by Fahey, who was entering a Chinese restaurant, and he invited me along," remembers NNCK's John Fell Ryan from their time together on tour. "I wasn't hungry, but Fahey ate enough for the both of us, plus. It was then he showed me his collage book. He opened this very thick binder and leafed through it, opening page after page of cosmic New Age paintings of utopian architecture he had clipped out of books or magazines—hundreds of pages, like a phone book. Unicorns and fantasy stuff, like Roger Dean, but more down on the edge of those airbrushed galaxy paintings you sometimes see homeless men peddling on the NYC streets. There would be really weird sections, like a whole series of newspaper clippings of Tom Selleck's face. Fahey told me the collage book was a present for a woman in Japan to whom he planned on proposing marriage. He told me he had written her a letter and bought a ticket to Japan to see her." Hitomi became the myopic center of his days and nights. He sold his possessions to Stephenson to raise money to go see her in Japan. After a barrage of marriage requests, her parents finally became aware of the situation. The obsession came to a head when he met with her family at a conference room in the Tokyo airport. Fahey tried to plead his case to them but failed. 
Their presence was an intervention, not a dialogue, an attempt to get Fahey to leave her alone once and for all. "John told me the story of the confrontation," says Stephenson. "It upset him very much. You should have seen his wild eyes. I don't believe she was married, but she may have married someone after the whole John thing wound down. There were people sort of trying to rescue her from Fahey. It sounded serious to me." According to O'Rourke, the police had to get involved. This final rejection left him despondent. He spent the cold Salem nights alone, pining for what could never be. His heartbreak became the subject of his last album released during his lifetime, 2000's _Hitomi._ Much like his 1990s output, the album featured a dark stream of collaged noise and reverbed electric guitar. With titles like "Despair" and "East Meets West," the album sounded distant. Only the occasional sparse beauty of loneliness in evocation of his muse shines through an otherwise dour affair. Like many of his albums, he recorded it himself. "While he was at home he did spend a lot of time, hours, playing the guitar using electronic noisy gadgets I cannot remember the names of," says Stephenson. "He recorded a lot of the album on cassette tapes on his boom box. Most of _Hitomi_ was recorded this way in my guest bedroom. It was dark, bluesy guitar-music-slash-noise. He would get started and play the same thing over and over. Artie Shaw's 'Nightmare' was one of his favorites." His sadness overwhelmed him and he grew even more distant. Always fascinated with religion and mortality, Fahey seems to have been prepared for death since his teenage years. The legend of Blind Joe Death, as the dawn of the new millennium approached, became less a prophecy and closer to a stark inevitability. Fahey turned sixty years old in 2000, a feat many considered incredible given his years of hard living. 
He had survived decades of substance abuse and mental and physical ailments, though all had taken tolls along the way. Miraculously, he seemed to triumph over his obstacles, still a forceful personality in his later years. "Those of us who knew John cannot imagine ever again meeting anyone with his iron will, his seemingly indestructible constitution and enormous appetites," Glenn Jones would write in his liner notes to Fahey's final and posthumously released album _Red Cross._ "His passions were insatiable: food, women, music, books, drugs, alcohol, cigarettes. People said Fahey never grew up, that he was a child all his life—with all that that entails, both good and bad. John's prankster charm, endless curiosity, guileless spirit, largesse, and a life lived in the present made him a delightful and engaging figure to be around. But his a-sociability, belligerence, irresponsibility and an almost constant need for gratification were exhausting." With no one left in his life romantically, he began to recall the grace of the only woman to stand by him, and he began to regret his divorce from Melody. Since the initial shock of their split, her anger toward him had cooled. Whenever business calls for him came to the house, she got him the messages. Never fully out of contact, he attempted to bridge their friendship. Although she had remarried, her company and affection seemed his only shot for comfort. He began to feel his mortality more completely. Feeling not long for this world, he began attempting to rekindle some of his other relationships, even e-mailing Spottswood and Bussard about visiting Maryland again. Never the picture of health, Fahey experienced more pronounced symptoms of what was soon discovered to be advanced heart disease. When he finally consulted a doctor, he was advised to undergo heart bypass surgery in an attempt to clear his clogged arteries. 
"He had visited a cardiologist or two in Portland who told him he was in very bad shape," says Stephenson. "He declined rather rapidly. I tried to help, but could not get a handle on what to do. I invited him out to walk the dogs and he went once. He was really not good." Decades of poor diet and little exercise had left him weak and debilitated. There were doubts from those who were still close to him as to the quality of medical care he was receiving, but with no legal or marital ties, he was left on his own in such decisions. "He was taking way too many prescription meds," says Stephenson. "Some doctors had him on fifteen to twenty different medications at once, for everything under the sun. John was a committed doctor shopper. I'd like to find those fuckers who killed John Fahey." He scheduled an operation for a quadruple bypass at Salem Hospital, an extensive and potentially life-threatening procedure. He was advised by his doctors to get his affairs in order. Facing imminent surgery, he began to examine his life in a different, more immediate context. When thinking of the people he cared about most in the world, the first person he considered was Melody. Fahey appreciated the unconditional nature of her love for him, even after he left her. Melody, regardless of the past, came to his side to support him. "We ran into him once at a thrift store and he didn't look too good. Then I didn't hear from him for a month or two, so I phoned to see if he was OK, and he had been sick. He didn't tell me what it was. The next day he phoned and asked me and my husband, Verlyn, to come out to lunch with him. He started talking about his will. I told him he should set up a scholarship for music to UCLA. I was just joking around. Anything would go to his mother. I thought it was all a big joke. He asked about me, leaving me something, I said it was OK with me. I took it as John being melodramatic. A few days later he called and told me he was in the hospital.
That was the first time I knew he had serious heart problems." While Melody and Blackwood had high hopes for a swift recovery, Fahey calmly prepared his will—in his own unique fashion. "There was stuff to be taken care of and he was filling out forms," recalls Blackwood. "It started to get him thinking about one of the great questions of life. When faced with death, which people in your life were most deserving of something by whatever criteria you would use? He went through several versions of hand-written wills." Like his liner notes, these hazy notes would be entertaining and insightful but not confined by reasonability. "Because his instructions were kind of crazy, Mitch Greenhill [as the executor of Fahey's estate] worked with him to make it something that could be actionable," says Blackwood. "One of the original versions had him leaving a certain amount to the Union Pacific railroad—just things that weren't really possible. It had this grand sweeping poetry as a gesture, but it made no sense as to what to do. He was infatuated with Hitomi at the time. I don't know if she was in the final version or not but it got more practical over time." Melody became the main beneficiary of a trust formed from his estate and controlled by Mitch Greenhill and her. As the only person there for him in his final days, Fahey wanted to know that she would be provided for after he died. Maybe he still loved her too. "I just know that right before he died he wanted to move back in with me," she says. On February 16, 2001, Fahey checked in to Salem Hospital for the operation. He seemed in relatively good spirits and appeared relaxed, even amiable. Perhaps he had accepted his fate and begun making peace with his own existence, having accomplished an engaging body of work left to be discovered for generations. Prone to anxiety throughout his life, Fahey seemed serene and calm, though a very real ordeal awaited him. 
"One thing I have to say is that when he was in the hospital, he treated everyone in there so gently, the nurses and the doctors and everyone, because he didn't think he was going to live," says Melody. Melody and a few local friends were at his side as he awaited surgery. Few among them believed that it could be the end for Fahey. But the operation would be far more serious than doctors had planned. Instead of the original quadruple bypass, it became a sextuple. "It was a lot worse than what they thought, so the operation took a lot longer, it was more complicated, there were more bypass procedures involved in the surgery itself," says Blackwood. "He did have a sense that things might not go well, even though at that point no one knew how complicated it would be when they got in there. I didn't go into the surgery thinking anything too bad was going to happen. I thought the odds were pretty good that he would recover, but he just never really regained consciousness. I wasn't physically there because I guess in my head I downplayed the chances of a negative outcome." On February 22, 2001, after a few days on a ventilator, John Fahey was removed from life support at 9:45 AM and passed away due to complications from open-heart surgery, according to his death certificate. Those in his life were stunned and devastated by the news. "I didn't think for a second that he was going to die," said Melody. "I was totally in shock when it happened, when they told me he was brain-dead and they had to turn off the machine. I told them to wait. It was horrible. The whole thing was just horrible. But he was so sweet to everyone in there. He was very sweet to me at the end. He told me he set up this trust for me and I told him not to do it because he was going to need his money. He told me he already set it up. He knew he wasn't going to need it. He said he was happy he left this much music around for people as he had. 
He wasn't angry about the fact that he was sure he was going to die." Melody had had her suspicions about the quality of his medical care, but because she wasn't his wife she felt removed from the details. Fahey himself had done little to help matters. "He was bacon-and-egging it right up to death's door," she recalls. "The night before his operation he had mashed potatoes and meat loaf. I thought it was odd. I asked the nurses if he should be eating that the night before a big operation. If I'd been married to him I would have sued them. He actually died of a bowel obstruction on the second day. They give you these paralytic drugs so you can't move. If you can't move, your bowel system can't move, and he really shouldn't have had anything in his system, but he was diabetic so that may have complicated things. Maybe he had to eat every few hours. I don't know. It was terrible, just terrible. For a couple of years after that I would burst into tears whenever I thought about it." Arrangements were quickly put together, mostly by Melody and her husband. Through the Internet, the word of his passing spread, and those who knew him did their best to attend his memorial in Salem. Blackwood, perhaps one of the closest associates of Fahey in his final years, had a hard time understanding how to process his death. "I didn't have a place on this spectrum of feeling except off to one end," he says. "I don't think I was bottling it up or anything, but I didn't feel that it was within my right to be devastated. I think I put off processing it for years. To be honest, I never had any big revelation moment or anything like that, which is not a testament to my stoicism but just as sort of an example of how things went down in my relationship with John. I was this dedicated trooper in his vision for an anti-sentimentalist approach to life. I was a foot soldier in that army. 
So to deny that at the end would have been a crime, to subvert or undermine that by getting all sloppy at the end. He had influenced me in that way." The funeral was a closed-casket affair, and John Aloysius Fahey was apparently buried in black shorts, sneakers, and an XXXL T-shirt. More than one hundred people came—friends, managers, ex-wives, musicians, and even a few Union Gospel Mission residents. Friends came to pay their respects: Glenn Jones, George Winston, Peter Lang, and Leo Kottke all made the journey. Kottke delivered a brief yet powerful eulogy crediting Fahey for launching his career and creating an avenue of expression for acoustic guitar players: "In a country full of crap, John created living, generative culture. With his guitar and his spellbound witness, he synthesized all the strains in American music and found a new happiness for all of us. With John, we have a voice only he could have given us; without him, no one will sound the same." Melody spoke as well, and a Japanese koto player Fahey admired performed. After the service, a few musicians and friends gathered at a club to play songs and drink in his honor. Glenn Jones visited Melissa Stephenson to listen to Stanley Brothers records and go through boxes of Fahey's writings, collages, paintings, and books—the remnants of his robust creativity and the ephemera that filled his final years. For Blackwood, the funeral was when reality finally began to set in. "That's when I started to feel the weight," Blackwood remembers. "This was it. That was the finality of it. But it was almost like I was looking around to those who were closer to Fahey in a way. I hope they weren't too devastated by this. I didn't feel in that class, even though we had spent the last decade together. Maybe that prevented me from absorbing it fully because it would be disingenuous to have a meltdown over this. It would be contrived in a way.
I worked with him, but I wasn't Melody or even Spottswood, who had a long history [with him]. To that point, Kottke was really torn up and gave a nice eulogy that was great, and it was clear he was struggling with it. Maybe that reinforced what I was thinking: here was a guy who was appropriately devastated by this." The following day there was an all-day memorial service held at a local high school. George Winston performed Fahey's "The Last Steam Engine Train" on harmonica and many friends and well-wishers came through to pay their respects. A second memorial service was held at a lecture hall at Willamette University in Salem on March 4. Although they hadn't communicated at all since 1973, his first wife, Jan, felt compelled to attend. She had read about his comeback and his tough living situation but feared interfering. She found herself the object of fascination by some of Fahey's more ardent fans and supporters, since she had been immortalized in his memoir. "I went to the memorial. As soon as they found out who I was they all wanted to touch me. I was like, 'Who are you people?' We were in a big room, and one after the other they would come up to me and tell me their story. He had to have people take care of him, and they did. It's very sad. It's really tragic. People talked about having to drive him places and people having to feed him and buy him guitars and take care of him like a giant, talented baby. They talked about what a blessing it was to have this opportunity to take care of him and I thought, better you than me. It seemed like that would have been my life if I had made different choices, but it was absolutely the right decision. Melody is a good lady. I'm sure she tried." On February 25, the _New York Times'_ Jon Pareles wrote an extensive obituary: John Fahey, a guitarist who carved out a private corner of Americana only to see it become a foundation of new age music, died on Thursday....Playing a six-string acoustic guitar, Mr. 
Fahey used country-blues fingerpicking and hymn-like melodies in stately pieces with classical structures. Wordless and unhurried, his music became contemplation and an elegy, a stoic invocation of American roots, nameless musicians and ancestral memories. Behind its serene surface, the music was both stubborn and haunted. Two years later, David Fricke of _Rolling Stone_ would echo the admiration, naming him number thirty-five among the one hundred greatest guitarists of all time. Even in summation, no other musician could be credited with his achievements, his contributions rightfully seen as essential to the language of popular music: John Fahey created a new, enduring vocabulary for acoustic solo guitar—connecting the roots and branches of folk and blues to Indian raga and the advanced harmonies of modern composers such as Charles Ives and Béla Bartók—on an extraordinary run of albums in the 1960s, released on his own Takoma label. Fahey knew American pioneer song in academic detail; he wrote his UCLA master's thesis on blues-man Charley Patton. Fahey was also a precise fingerpicker addicted to the mystery of the blues as well as the music, a passion reflected in apocryphal album titles such as _The Transfiguration of Blind Joe Death,_ from 1965. Fahey endured illness and poverty in the 1990s, but re-emerged to a new wave of acclaim from bands such as Sonic Youth. He continued touring and recording—often on electric guitar—until his death in 2001. # EPILOGUE # I REMEMBER BLIND JOE DEATH "I've always really thought of myself as a spiritual detective and a psychological detective. I guess with my music I'm always trying to get to a fuller understanding of myself. I felt so alienated from the culture around me, like I was from a different planet, like I wasn't really a member of the human race. I had two heads, one just wasn't visible. So I was looking for another path of music. I didn't really know what it was. I didn't care what it was and I still don't. 
Makes no difference to me and that's perfectly okay. 'Cause I'm just a little blip. The whole style is just a little blip on all the mainstream of music. We don't fit anywhere. And we never will." —John Fahey, interview, 1994 John Fahey remains an ineffable presence, a touchstone. I believe this was his intention from the moment of his first recording. Seemingly, his career was in preparation for his legacy, with his copious notes and fictions providing its building blocks. His albums are the soundtracks to his story, and Blind Joe Death his alter ego. As a scholar, he saw the scope of modern music, and carved his place therein by weaving fragments of cultures and genres together in his own strange collage, bridging the storytelling immediacy of the folk tradition and the modern expanse of the avant-garde. Fahey's personality kept audiences fixated on him as much as his music. Rather than a mishmash of ideas, his music always sounded like John Fahey, no matter what he attempted. Byron Coley explains: "There's nobody before him that has the same kind of syncretic musical qualities. He used these weird blues chords that make the melodies sound strange, in the same way that Albert Ayler was strange but familiar at the same time. He never overplays. The restraint that he shows when he was doing these incredibly strange fingerings....It's a very weird self-taught quality to a lot of his conception that I just find really appealing. The DIY ethos of pressing your own record and selling it at the gas station you work at under this fake blues guy's name and writing these incredibly insane liner notes. The whole package was so appealing, what was this twenty-year-old philosophy student thinking... What the fuck? There's no precedent for something like that." Listeners are still drawn to Fahey for the same reasons that people like Glenn Jones were attracted to him back in 1969. 
The myths conjured in the texts Fahey wrote for himself and the images he proliferated created a mystery that continues to fascinate. What was he trying to tell us in all this pathology? The stories of Blind Joe Death, his experiences of Takoma Park, his loves and demons, retain potency in their presentation. His unique version of Americana, focused on the existential and the symbolism of his youth, became a guidepost that grounded his wild imagination. "There was a quality to his music that I had never heard at the time," Jones recalls. "Between what he was doing with the sound and the emotional quality of his playing, it made me keep coming back to hear more. The records had a handmade quality, not a corporate look, and were hard to find at the time. That made seeking the stuff out that much sexier in terms of just trying to learn more. Of course with the absurd liner notes you wanted to know more about the guy. There was this mythology that you had to weed through and try to figure out what was real and wasn't. You could only make guesses." To many he remains a dislikeable figure. His vices and abusiveness affected everyone close to him. And his knack for alienating those who wished to be close to him forced him into solitude. Like his music, his presence was polarizing. Some felt his lack of filters in social and professional situations cost him greater fame. Yet those same elements brought the audience closer to him; he held nothing back. He admitted to his regrets and failures as freely as he acknowledged his successes. The narrative of his life, which he presented so vividly, combined with the haunted melodies of the music to create a universe in which listeners could fully immerse themselves. As otherworldly as his universe might have been, they could relate to the pitfalls in his life. Jim O'Rourke explains Fahey's ability to channel his life through his music. "John lived a bigger life than most of his listeners, and his music is an expression of that life. 
When people hear his music they're let into a world that is still connected to theirs but has gone farther, taken more chances, had more highs and lows than they will ever have. So it has an ecstatic quality. It's the expression of a human being who has gone through extremes in his life, but when he expresses these feelings it comes direct from the heart and is not aestheticized or turned into an abstract." This raw connection echoes the blues in its pathos but lacks its narrative and form. Instead, using similar building blocks, Fahey constructed his own stylistic and narrative conventions, unique to himself. "John always sounded large—not big... large. But it's hard for me to see John," says Kottke. "He's too close. It's like talking about my Aunt Frances. That one note floating over Basho's head in Maryland '63 or '64, long before I knew it was John, is as good as anything. It's all there in that note—an E, I'm guessing. It replicated the distance and the time, then and now. If someone really plays and really writes, it's in every note, even on a bad night. Moods and competencies come and go, and change, but the thing itself is always there, often from very early on. We usually do catch these people by accident, then we stop and turn. I miss John very much. I was walking down the street in Minneapolis a couple of years ago and passed a kid playing 'Sunflower River Blues.' That kid is John at his best." Audiences still form an intimate connection to Fahey's music. One evening, while working on edits for this book, I was introduced at a bar to a woman in her early twenties. With peroxide-bleached hair and a NAPALM DEATH logo painted on the back of her black leather jacket, she gave off the affectation of nonchalance until she heard about my book about John Fahey. "I love John Fahey!" she exclaimed. I asked her why and she shrugged, saying that she liked the music. I dug deeper, reminding her that she used the word love. 
She didn't say she liked him or that he was cool, but her instant reaction was that she loved John Fahey. Why? As if a living example of the breadth of his reach, she replied, "I don't know. His music is sad but it's not hopeless. It's complicated, I guess." Even though he believed that ambition toward careerism was hollow, Fahey wanted to matter. That his music is continually discovered and enjoyed proves his enduring relevance. "When people ask me how good I am, I usually cop to being brilliant, even better than that, but short of genius," Fahey wrote. "But I say these things in an objective dispassionate manner because, you know, and I can't explain why, but being one of the greatest guitarists in the world simply is not very important to me. Oh, but if you took it away somehow I would be very unhappy." # SOURCE NOTES Introduction _"Did you ever go to any of the clubs..."_ Fahey, liner notes to _Transfiguration of Blind Joe Death,_ 3. **1. When the Catfish Is in Bloom** _"I just watched shades of red..."_ Fahey, _How Bluegrass Music Destroyed My Life,_ 53. _"I remember the night we moved..."_ Ibid., 2. _"Every day. Everywhere. And they taught me..."_ Ibid., 5. _"They made us into monsters"..._ Ibid., 17. _"Eddie glorified the neighborhood..."_ Ibid., 7. _"But it wasn't fair"..._ Ibid., 206. _"I don't know if you boys experienced..."_ Fahey, liner notes to _Voice of the Turtle._ _"They taught us to love each other..."_ Fahey, _How Bluegrass Music Destroyed My Life,_ 90. _"I wanted to kill my parents..."_ Ibid., 88. _"At Mount Rainier Junior High School..."_ Ibid., 94. _"It reached out and grabbed me..."_ Ibid., 253. _"When we were still in our teens..."_ Spottswood, interview by the author. _"We had mutual friends who introduced us"..._ Ibid. _"He was subject to such mood swings..."_ Ibid. _"My first impression of John..."_ Lee, interview by the author. _"Fahey and I never hung out..."_ Ibid. 
_"John portrayed himself as an outcast..."_ McLean, interview by the author. _"One would have thought he was fox crazy..."_ Ibid. _"John managed to be charming..."_ Spottswood, interview by the author. _"I learned a few country-western songs..."_ Fahey, interview by Stefan Grossman. _"I don't mean to demean his talent..."_ McLean, interview by the author. _"He would give people directions..."_ Lee, "The Wolves Are Gone Now," essay in box set _Your Past Comes Back to Haunt You._ _"He was young and thin..."_ Denson, interview by the author. _"Martin's was the only thing..."_ Fahey, "The Persecutions & Resurrections of Blind Joe Death." **2. Sunflower River Blues** _"Canvassing in and around Washington..."_ Fahey, "In Memory of Blind Thomas of Old Takoma." _"Where I was brought up..."_ Fahey, "Blood on the Frets," 25. _"I started to feel nauseated..."_ Ibid. _"He went from disliking it..."_ Spottswood, interview by the author. _"He would walk through the rural Southern black ghettos..."_ Lee, "The Wolves Are Gone Now," essay in box set _Your Past Comes Back to Haunt You_. _"Fahey's idea of how the South should be..."_ Lee, personal letter written summer 1961. _"Today we have a pretty good idea..."_ Spottswood, interview by the author. _"The records represented the art..."_ Ibid. _"The reason I liked Charley Patton..."_ Fahey, "Blood on the Frets," 25. _"They're coming from people..."_ Fahey, interview by Jason Gross. _"He had gotten his degree..."_ Spottswood, interview by the author. _"He had matured dramatically"..._ Lee, interview by the author. _"John was influential..."_ Ibid. **3. The Legend of Blind Joe Death** _"You're not meant to feel miserable..."_ Fahey, "Blood on the Frets," 27. _"An attempt to reconstruct an old song..."_ Fahey, liner notes to _Blind Joe Death._ _"When I made my first record..."_ Fahey, interview by Stefan Grossman. _"I think he was trying to have it both ways..."_ Spottswood, interview by the author. 
_"The whole point was to use the word 'death'"..._ Fahey, "Blood on the Frets," 27. _"When John sent me the record..."_ Charters, interview by the author. _"I didn't think his technique was very sophisticated..."_ Spottswood, interview by the author. _"Fahey could play virtually any piece..."_ Denson, interview by the author. _"He was not chatty"..._ Ochs, interview by the author. _"My impression was that there was an old..."_ Ibid. _"I had all these pieces in my head..."_ Fahey, "Reinventing the Steel." _"There was a time when John and Ed..."_ Charters, interview by the author. **4. On the Sunny Side of the Ocean** _"I remember when you'd go into a folk store..."_ Fahey, "The Persecutions & Resurrections of Blind Joe Death." _"John and I lived in one large..."_ Denson, interview by the author. _"Among these people..."_ Ibid. _"I would not say there was anything endearing..."_ Ibid. _"My relationship with John was not unpleasant..."_ Ibid. _"There is a slight chance Bukka..."_ Fahey, letter to Sam Charters, November 27, 1963. _"John recorded his second LP..."_ Fahey, liner notes to _Blind Joe Death._ _"John Fahey had made his first guitar..."_ Ibid. _"I was trying to convince the audience..."_ Fahey, "The Persecutions & Resurrections of Blind Joe Death." _"I was seeking out mean, sadistic, aggressive..."_ Fahey, _How Bluegrass Music Destroyed My Life,_ 235. _"James became a frightful figure..."_ Ibid., 246. _"Although the blues field in 1964..."_ Calt, _I'd Rather Be the Devil,_ 251. _"Those rediscoveries were earth-shaking..."_ Denson, interview by the author. _"Fahey was obnoxious..."_ Weller, interview by the author. _"Basho was a religious mystic..."_ Charters, interview by the author. _"He was crazy"..._ Fahey, "Blood on the Frets," 28. _"Once the records began selling..."_ Denson, interview by the author. **5. 
Poor Boy Long Way from Home** _"He said he was confused..."_ Fahey, liner notes to _Days Have Gone By._ _"I hate mellow"..._ Fahey, letter to Bill Belmont, early 1990s. _During an angry conversation..._ Lebow Fahey, interview by the author. _"One time in Venice..."_ Charters, interview by the author. _Aside from music, they had a great deal in common..._ Winters, _Blind Owl Blues._ _"I wouldn't describe him as a hard-core bigot..."_ Hansen, interview by the author. _"I was playing an Al Capp role..."_ Fahey, "The Persecutions & Resurrections of Blind Joe Death." _"I thought Fahey was rather dark..."_ Cohen, interview by the author. _"I remember one night at a show in New York..."_ Charters, interview by the author. _"He was very shy..."_ Denson, interview by the author. _Once he got a small mimeograph machine..._ Charters, interview by the author. _"I think he wanted people to listen..."_ Ibid. _"I remember he broke..."_ Hansen, interview by the author. _"futility, a hopelessness and general existential despair..."_ Undated interview, quoted in "The Great San Bernardino Birthday Party and Other Excursions," Fahey Files. _"He understood that he wasn't really good..."_ Lebow Fahey, interview by the author. _"Underneath the bravado and the outrageousness..."_ Ibid. _"Even then he always had problems sleeping..."_ Ibid. **6. Voice of the Turtle** _"Turtles are my favorite animals..."_ Fahey, "Why Fahey Wants to Kill Everybody." _"I never got any input from Fahey..."_ Weller, interview by the author. _"John came in wearing a turtleneck..."_ Charters, interview by the author. _"Since 1948, after seeing the movie..."_ Fahey, liner notes to _Requia._ _"This was the frustration for him..."_ Charters, interview by the author. _"I did a rough mix of it..."_ Ibid. _"He was a prestige artist..."_ Ibid. _"Requia stinks..."_ Fahey, 1968 interview, quoted in "Requia and Other Compositions for Guitar Solo," Fahey Files. 
_"Vanguard needed a megahit"..._ Charters, interview by the author. _"What I have is this..."_ Fahey, _How Bluegrass Music Destroyed My Life,_ 164. _"We got married here..."_ Lebow Fahey, interview by the author. _"We were young and it was fun"..._ Ibid. _"As far as commerciality..."_ Charters, interview by the author. _"He didn't say anything about the cover..."_ Weller, interview by the author. _"I'm not aware of any other musician..."_ Hansen, interview by the author. _"Notes, in those days..."_ Ibid. _"The recordings which comprise this record..."_ Fahey, liner notes to _Voice of the Turtle._ _"He was unassailably convinced..."_ Charters, interview by the author. _"With Yellow Princess, John talked about..."_ Charters, interview by the author. _"The title song was the first song..."_ Hansen, interview by the author. _"Why didn't we all?..."_ Fahey, liner notes to _The Yellow Princess._ _"That session was star-crossed..."_ Hansen, interview by the author. _"Noted icthyologist [sic] who accidentally saved..."_ Fahey, liner notes to _The Yellow Princess._ _"I did not go east"..._ Ibid. _"We started talking about the concept..."_ Bruce, interview by the author. _"John felt he was ordained to be successful..."_ Charters, interview by the author. **7. View East from the Top of the Riggs Road B &O Trestle** _"When a person is that ambitious..."_ Fahey, _How Bluegrass Music Destroyed My Life,_ 161. _"Christmas and Easter are the two most important..."_ Fahey, liner notes to _The New Possibility._ _"Well, the arrangements are pretty good..."_ Fahey, 1979 interview, quoted in "The New Possibility," Fahey Files. _"Robbie'd just opened for someone..."_ Kottke, interview by the author. _"John called me into the bathroom..."_ Ibid. _"I can't figure how he survived..."_ Ibid. _"John had so much contempt..."_ Charters, interview by the author. _"These beautiful, young, scantily clad women..."_ Fahey, "Blood on the Frets," 28. 
_"There was a famous club in London..."_ Chapman, interview by the author. _"civilized and erudite..."_ Fahey, _How Bluegrass Music Destroyed My Life,_ 170. _"I felt that my intelligence..."_ Ibid., 173. _"If he had done anything like that..."_ Lebow Fahey, interview by the author. _"By the time he got home..."_ Ibid. _"Fahey and I had dinner..."_ Bruce, interview by the author. _"I didn't know what to expect..."_ Monday, interview by the author. _"I developed a mailing list..."_ Ibid. _"I had arranged to get him on a music TV show..."_ Ibid. _"Fahey hasn't made a record in two years..."_ In Fahey, "Why Fahey Wants to Kill Everybody." _"I was really crazy..."_ Ibid. _"I will remember Wilson..."_ Fahey, _How Bluegrass Music Destroyed My Life,_ 100. _"Out of all the songs I ever wrote..."_ Fahey, 1972 interview, quoted in "America," Fahey Files. _"There is a pulp-mill somewhere in Maryland..."_ Fahey, liner notes to _The Yellow Princess._ _"John started feeling better about himself..."_ Lebow Fahey, interview by the author. _"After half an hour..."_ Bruce, interview by the author. _"It shocked me that John..."_ Kottke, interview by the author. _"Every day was something else..."_ Lebow Fahey, interview by the author. _"My life was going by..."_ Ibid. **8. Old Fashioned Love** _"All I have ever done with music..."_ Fahey, liner notes to _The Legend of Blind Joe Death._ _"Warner's was still thinking..."_ Bruce, interview by the author. _"You had to get Fahey when..."_ Ibid. _"When John began working with Dixieland musicians..."_ Charters, interview by the author. _"I was not prepared for what I heard..."_ Hentoff, liner notes to _Of Rivers and Religion._ _"We're backstage and John is going..."_ Bruce, interview by the author. _"We were left alone..."_ Ibid. _"The show was at the Paul Masson winery..."_ Winston, interview by the author. _"John was doing everything..."_ Ibid. _"Now everyone calls him a composer"..._ Ibid. 
_"Few living people have had such..."_ Fahey, "Bola Sete, the Nature of Infinity and John Fahey." _"They had a service every day..."_ Fahey, "The Persecutions & Resurrections of Blind Joe Death." _"I would like to introduce you..."_ Fahey, from the pamphlet included with _Fare Forward Voyagers_ (Takoma C 1035, 1973). _"John could have run it..."_ Monday, interview by the author. _"He was funny, he was smart..."_ Goldman, interview by the author. _"At the time we had this spiritual interest..."_ Ibid. _"I married John..."_ Ibid. _"Having known other musicians..."_ Ibid. _"He would always compare himself..."_ Bruce, interview by the author. _"John Fahey, who stopped by..."_ Rockwell, "John Fahey Plays Impressive Guitar at the Bottom Line." _"His guitar-playing is a deliberate..."_ Nelson, "John Fahey Is a Tough Guy." _"The folk and acoustic scene..."_ Chapman, interview by the author. _"I like to travel..."_ Goldman, interview by the author. _According to Fahey's tour manager..._ Calt, "The Illusionist." _"John was always a very funny person..."_ Brennan Fahey, interview by the author. _"John was a dynamic person..."_ Ibid. _"When I play the guitar..."_ Fahey, _The Best of John Fahey 1959–1977,_ 10. _"Mastering a guitar..."_ Ibid., 13. _"What I am advocating..."_ Ibid., 12. _"So Chrysalis wanted..."_ Bruce, interview by the author. _"The reason that I got rid of..."_ Fahey, interview by Jason Gross. _"He would go on for periods..."_ Brennan Fahey, interview by the author. _"It was insane"..._ Chapman, interview by the author. _"You might pray for me..."_ Fahey, letter to Glenn Jones, 1981. _"I had a career..."_ Brennan Fahey, interview by the author. _"One year he gave $2,000..."_ Ibid. _"If you make yourself play..."_ Fahey, _The Best of John Fahey 1959–1977,_ 10. **9. Let Go** _"The Void is a term"_ Fahey, "Finger Style Adventurer," 26. _"He hated those guys..."_ Bruce, interview by the author. _"John's main goal in life..."_ Brennan Fahey, interview by the author. 
_"Sometimes when you meet someone..."_ Jones, interview by the author. _"We were hanging out backstage..."_ Ibid. _"He was a very heavy drinker..."_ Ibid. _"He might have been prediabetic..."_ Brennan Fahey, interview by the author. _"For me, it was torture..."_ Ibid. _"He was about forty..."_ Robb, interview by the author. _"I would try to get him to play..."_ Ibid. _"He would mix his medications..."_ Ibid. _"I wish you knew..."_ Fahey, _How Bluegrass Music Destroyed My Life,_ 207. _"He claimed he was abused..."_ Lebow Fahey, interview by the author. _"In his book he makes serious allegations..."_ Spottswood, interview by the author. _"I'm sure there was a lot of emotional abuse..."_ Brennan Fahey, interview by the author. _"John was a pretty together guy"..._ Robb, interview by the author. _"John's father was an orphan..."_ Brennan Fahey, interview by the author. _"He was showing up..."_ Robb, interview by the author. _"He got tired of it"..._ Ibid. _"I got interested in '50s rock and roll..."_ Fahey, 1990 interview, quoted in "Old Girlfriends and Other Horrible Memories," Fahey Files. _"It drove me nuts!"..._ Brennan Fahey, interview by the author. _"I don't know if I would have been that great..."_ Ibid. _"Jobs would come..."_ Ibid. _"I didn't want to have to fight..."_ Ibid. _"I didn't leave John..."_ Ibid. **10. When the Springtime Comes Again** _"This new group is all for freedom..."_ Fahey, interview by Jason Gross. _"I spent a day with him..."_ Hansen, interview by the author. _"I was writing these things as an escape..."_ Fahey, "Blood on the Frets," 24. _"I didn't know when he moved..."_ Bruce, interview by the author. _"John did spend quite a bit of time..."_ Brennan Fahey, interview by the author. _"When I started hanging out with Glenn Jones..."_ Coley, interview by the author. _"People were buying the records..."_ Ibid. _"He was hilarious"..._ Ibid. _"In the current season..."_ Fahey, letter to Bill Belmont, early 1990s. 
_"A few record store guys..."_ Coley, interview by the author. _"I stayed in touch with him..."_ Ibid. _"I got the impression..."_ Ibid. _"All Fahey's own records..."_ Blackwood, interview by the author. _"I remember working out the math..."_ Ibid. _"When you operate in that kind of..."_ Ibid. _"He wasn't the kind of artist..."_ Ibid. _"Our initial conversations..."_ Ibid. _"They were appreciative..."_ Ibid. _"A lot of the legality..."_ Ibid. _"None of them ever got paid..."_ Ibid. _Spin named Revenant's Captain Beefheart & His Magic Band Grow Fins..._ Revenant Records, "Captain Beefheart and His Magic Band Grow Fins," Revenant Records official website. _"the Everest of all jazz boxed sets..."_ Reich, "The Music Box." _"There was a sense..."_ Blackwood, interview by the author. **11. Dance of the Inhabitants** _"I'm just doing solo electric..."_ Fahey, June 2000 interview, quoted in "Georgia Stomps, Atlanta Struts & Other Contemporary Dance Favorites," Fahey Files. _"Fahey isn't an Americana thing..."_ O'Rourke, interview by the author. _"Yes, I wonder what would have happened..."_ Fahey, _How Bluegrass Music Destroyed My Life,_ 146. _"He made a noose out of the sash..."_ Ibid., 205. _"He told me about the writing..."_ O'Rourke, interview by the author. _"John's life was his work"..._ Brennan Fahey, interview by the author. _"He asked me to sell them..."_ Coley, interview by the author. _"It was a place my parents took me to..."_ Fahey, undated interview, quoted in "City of Refuge," Fahey Files. _"Little of City of Refuge..."_ Jones, "Of Rivers and Revisions." _"After a Boston photo shoot..."_ Jones, liner notes to _The Epiphany of Glenn Jones._ _"Having been so closely involved..."_ Jones, "Of Rivers and Revisions." _"They have a much wider knowledge..."_ Fahey, interview by Jason Gross. _"I like noise"..._ Fahey, "Blood on the Frets," 28. _"He had these tapes..."_ O'Rourke, interview by the author. _"All the tracks were made..."_ Ibid. 
_"The last track he recorded..."_ Ibid. _"When he broke his belt..."_ Coley, interview by the author. _"He had so many years..."_ O'Rourke, interview by the author. _"Part of that was him living up..."_ Jones, interview by the author. _"We were at one show..."_ O'Rourke, interview by the author. _"He knew exactly what was going on..."_ Coley, interview by the author. _"I think the creative impulse..."_ O'Rourke, interview by the author. _"I remember we were in Germany..."_ Ibid. _"Fahey stayed with [artist] Rita Ackermann..."_ Nuss, interview by the author. _"I was living in the Hint House..."_ Press, interview by the author. _"I was aware [Fahey's] performances..."_ Ryan, interview by the author. _"His suggestion of psychoanalysis..."_ Ibid. **12. Red Cross** _"Suddenly I hit desolation..."_ Fahey, liner notes to _John Fahey Visits Washington, D.C._ _"When John moved in..."_ Stephenson, interview by the author. _"He was not in therapy..."_ Ibid. _"I happened by Fahey..."_ Ryan, interview by the author. _"John told me the story..."_ Stephenson, interview by the author. _"While he was at home..."_ Ibid. _"Those of us who knew John..."_ Jones, liner notes to _Red Cross._ _"He had visited a cardiologist..."_ Stephenson, interview by the author. _"We ran into him once..."_ Brennan Fahey, interview by the author. _"There was stuff to be taken care of..."_ Blackwood, interview by the author. _"Because his instructions..."_ Ibid. _"I just know that right before..."_ Brennan Fahey, interview by the author. _"One thing I have to say..."_ Ibid. _"It was a lot worse..."_ Blackwood, interview by the author. _"I didn't think for a second..."_ Brennan Fahey, interview by the author. _"He was bacon-and-egging it..."_ Ibid. _"I didn't have a place..."_ Blackwood, interview by the author. _"In a country full of crap..."_ Kottke, unpublished eulogy for John Fahey. _"That's when I started to feel the weight..."_ Blackwood, interview by the author. 
_"I went to the memorial..."_ Lebow Fahey, interview by the author. _"John Fahey, a guitarist who carved..."_ Pareles, "John Fahey, 61, Guitarist and an Iconoclast, Is Dead." _"John Fahey created a new, enduring vocabulary..."_ Fricke, "100 Greatest Guitarists: David Fricke's Picks." **Epilogue: I Remember Blind Joe Death** _"I've always really thought of myself..."_ Fahey, "The Persecutions & Resurrections of Blind Joe Death." _"There's nobody before him..."_ Coley, interview by the author. _"There was a quality to his music..."_ Jones, interview by the author. _"John lived a bigger life than most..."_ O'Rourke, interview by the author. _"John always sounded large..."_ Kottke, interview by the author. _"When people ask me..."_ Fahey, letter to Ron Cowan, November 25, 1998. # BIBLIOGRAPHY **Original Interviews** Dean Blackwood Denny Bruce Michael Chapman Sam Charters Byron Coley Ed Denson Jan Lebow Fahey Melody Brennan Fahey Deborah Goldman Barry Hansen Glenn Jones Leo Kottke Anthony Lee Nancy McLean Jon Monday Dave Nuss Max Ochs Jim O'Rourke Sara Press Terry Robb John Fell Ryan Dick Spottswood Melissa Stephenson Tom Weller George Winston **Books** Calt, Stephen. _I'd Rather Be the Devil: Skip James and the Blues._ Chicago: Chicago Review Press, 2008. Fahey, John. _How Bluegrass Music Destroyed My Life._ Chicago: Drag City, 2000. Fahey, John, and John Lescroart. _The Best of John Fahey 1959–1977._ New York: Guitar Player Books, 1977. Winters, Rebecca Davis. _Blind Owl Blues: The Mysterious Life and Death of Blues Legend Alan Wilson._ Blind Owl Blues, 2007. **Articles** Calt, Stephen. "The Illusionist." Unpublished manuscript. Fahey, John. "Bola Sete, the Nature of Infinity and John Fahey." _Guitar Player,_ February 1975. Fricke, David. "100 Greatest Guitarists: David Fricke's Picks." _Rolling Stone._ www.rollingstone.com/music/lists/100-greatest-guitarists-of-all-time-19691231/john-fahey-20101202. Jones, Glenn. "Of Rivers and Revisions: John Fahey and Cul de Sac." 
Fahey Files. www.johnfahey.com/pages/revision.html. Lee, Anthony. "The Search for Charley Patton." Unpublished manuscript, summer 1961. Personal collection of Anthony Lee. Nelson, Paul. "John Fahey Is a Tough Guy." _Village Voice,_ June 9, 1975. Pareles, Jon. "John Fahey, 61, Guitarist and an Iconoclast, Is Dead." _New York Times,_ February 25, 2001. Reich, Howard. "The Music Box." _Chicago Tribune,_ December 12, 2004. Revenant Records. "Captain Beefheart and His Magic Band Grow Fins." Revenant Records official website. www.revenantrecords.com/musics/products/captain-beefheart-and-his-magic-band-grow-fins/. Rockwell, John. "John Fahey Plays Impressive Guitar at the Bottom Line." _New York Times,_ December 2, 1975. **Published Interviews** Fahey, John. "Blood on the Frets." Interview by Edwin Pouncey. _Wire_ 174, August 1998. ____. "Finger Style Adventurer." Interview by Mark Humphrey. _Frets,_ August 1980. ____. "In Memory of Blind Thomas of Old Takoma." Interview by Eddie Dean. _Washington City Paper,_ September 15, 2001. ____. Interview by Jason Gross. _Perfect Sound Forever,_ October 1997. www.furious.com/perfect/johnfahey.html. ____. Interview by Stefan Grossman. Stefan Grossman's Guitar Workshop. www.guitarvideos.com/interviews/john-fahey. ____. "The Persecutions & Resurrections of Blind Joe Death." Interview by Byron Coley. _Perfect Sound Forever,_ May 2001. www.furious.com/perfect/fahey/fahey-byron2.html. ____. "Reinventing the Steel." Interview by Dale Miller. _Acoustic Guitar,_ January/February 1992. ____. "Why Fahey Wants to Kill Everybody." Interview by Tim Farris. _Rolling Stone,_ December 24, 1970. ____. 1968 interview. Quoted in "Requia and Other Compositions for Guitar Solo." Fahey Files. www.johnfahey.com/pages/req2.html. ____. 1972 interview. Quoted in "America." Fahey Files. www.johnfahey.com/pages/am2.html. ____. 1979 interview. Quoted in "The New Possibility." Fahey Files. www.johnfahey.com/pages/np2.html. ____. 1990 interview. 
Quoted in "Old Girlfriends and Other Horrible Memories." Fahey Files. www.johnfahey.com/pages/girl2.html. ____. June 2000 interview. Quoted in "Georgia Stomps, Atlanta Struts & Other Contemporary Dance Favorites." Fahey Files. www.johnfahey.com/pages/georg.html. ____. Undated interview. Quoted in "City of Refuge." Fahey Files. www.johnfahey.com/pages/cr2.html. ____. Undated interview. Quoted in "The Great San Bernardino Birthday Party and Other Excursions." Fahey Files. www.johnfahey.com/pages/v42.html. **Letters** Fahey, John. Letter to Bill Belmont, early 1990s. Personal collection of Glenn Jones. ____. Letter to Glenn Jones, 1981. Personal collection of Glenn Jones. ____. Letter to Ron Cowan, November 25, 1998. www.johnfahey.com/roncowanletter.htm. ____. Letter to Sam Charters, November 27, 1963. Samuel and Ann Charters Archives of Blues and Vernacular African American Musical Culture, Archives & Special Collections at the Thomas J. Dodd Research Center, University of Connecticut Libraries. Kottke, Leo. Unpublished eulogy for John Fahey. Personal collection of Leo Kottke. **Liner Notes** Fahey, John. Liner notes to _Blind Joe Death._ Takoma C 1002, 1964, LP. ____. Liner notes to _Days Have Gone By._ Takoma C 1014, 1967, LP. ____. Liner notes to _John Fahey Visits Washington, D.C._ Takoma TAK 7069, 1979; Chrysalis TAK 7069, 1979, LP. ____. Liner notes to _The Legend of Blind Joe Death._ Takoma TAKCD-8901-2, 1996, CD. ____. Liner notes to _The New Possibility._ Takoma C 1020, 1968, LP. ____. Liner notes to _Requia._ Vanguard. VSD-79259, 1967, LP. ____. Liner notes to _Transfiguration of Blind Joe Death._ Riverboat RB-1, 1965, LP. ____. Liner notes to _Voice of the Turtle._ Takoma C 1019, 1968, LP. ____. Liner notes to _The Yellow Princess._ Vanguard VSD-79293, 1968, LP. ____. Pamphlet included with _Fare Forward Voyagers._ Takoma C 1035, 1973, LP. Hentoff, Nat. Liner notes to _Of Rivers and Religion_ by John Fahey. Reprise. MS 2145, 1973, LP. Jones, Glenn. 
Liner notes to _The Epiphany of Glenn Jones_ by John Fahey and Cul de Sac. Thirsty Ear thi 57037.2 1997, CD. ____. Liner notes to _Red Cross_ by John Fahey. Revenant 104, 2002, CD. Lee, Anthony. "The Wolves Are Gone Now." Essay in box set _Your Past Comes Back to Haunt You._ Dust-to-Digital DTD-21, 2011, CD. # JOHN FAHEY DISCOGRAPHY _Blind Joe Death_ Takoma C 1002 (1959, 1964, 1967); Sonet SNTF 607 (1969) _Death Chants, Breakdowns and Military Waltzes_ Takoma C 1003 (1963, 1967); Sonet SNTF 608 (1969) _Dance of Death & Other Plantation Favorites_ Takoma C 1004 (1965) _The Transfiguration of Blind Joe Death_ Riverboat RB-1 (1965, 1967); Transatlantic TRA 173 (1968); Takoma R 9015 (1973); Sonet SNTF 744 (1978) _The Great San Bernardino Birthday Party and Other Excursions_ Takoma C 1008 (1966) _Days Have Gone By_ Takoma C 1014 (1967) _The Early Sessions_ Takoma C 1000 (1967) _Requia_ Vanguard VSD-79259 (1967); Terra T-2 (1985) _The Voice of the Turtle_ Takoma C 1019 (1968); 4 Men with Beards 4m219 (2012) _The Yellow Princess_ Vanguard VSD-79293 (1968) "March for Martin Luther King" / "Singing Bridge of Memphis, Tennessee" Vanguard VRS 35076 (1968) _The New Possibility_ Takoma C 1020 (1968); Sonet SNTF 702 (1976) _America_ Takoma C 1030 (1971); Sonet SNTF 628 (1972); 4 Men with Beards 4m117 (2009) _Of Rivers and Religion_ Reprise MS 2089 (1972); Edsel ED 216 (1987); Collectors Choice CCM-212-2 (2001) _After the Ball_ Reprise MS 2145 (1973); Collectors Choice CCM-213-2 (2001) _Fare Forward Voyagers_ Takoma C 1035 (1973); Sonet SNTF 656 (1974); Shanachie 99005 (1992) _The Essential John Fahey_ Vanguard VSD 55/56 (1974) _Old Fashioned Love_ Takoma C 1043 (1975); Sonet SNTF 688 (1975); Shanachie 99001 (1990); P-Vine PCD-3281 (2003) _Christmas with John Fahey, Vol. II_ Takoma C 1045 (1975) _The Best of John Fahey 1959–1977_ Takoma C 1058 (1977); Sonet SNTF 733 (1977); Metronome 0069.053 (1977); P-Vine PCD-3277 (2003) _John Fahey Visits Washington, D. 
C._ Takoma TAK 7069 (1979); Chrysalis TAK 7069 (1979) _Yes! Jesus Loves Me_ Takoma TAK 7085 (1980) _Live in Tasmania_ Takoma TAK 7089 (1981); Sonet SNTF 861 (1981) _Christmas Guitar Volume 1_ Varrick VR-002 (1982) _The Guitar of John Fahey_ Stefan Grossman Guitar Workshop (1983); Mel Bay MB95399CD (1995) _Railroad 1_ Takoma TAK 7102 (1983); Shanachie 99003 (1992) _Popular Songs of Christmas & New Year's_ Varrick VR-012 (1983) _Let Go_ Varrick VR-008 (1984) _Rain Forests, Oceans and Other Themes_ Varrick VR-019 (1985) _Christmas Guitar_ Varrick CD VR 11503 (1986); Better Days CA-4196 (1989) _I Remember Blind Joe Death_ Varrick VR-028 (1987); Rounder REU 1025 (1987); Demon Fiend CD 207 (1987) _God, Time and Causality_ Shanachie 97006 (1989) _The John Fahey Christmas Album_ Burnside BCD 0004-2 (1991); Attic ACD 1362 (1992) _Old Girlfriends and Other Horrible Memories_ Varrick CD VR 031 (1992) _The New Possibility / Christmas with John Fahey, Vol. II_ Rhino R2 71437 (1993); Takoma TAKCD-8912-2 (2000) _Return of the Repressed: The John Fahey Anthology_ Rhino R2 71737 (1994) "Morning" / "Evening Not Night" Perfect 14404 (1996) _City of Refuge_ Tim/Kerr 644830127-2 (1997) _The Mill Pond_ Little Brother lb-009 (1997); Important IMPREC 183 (2009) _Womblife_ Table of the Elements Rb37 (1997); P-Vine PCD-23014 (1999) _The Epiphany of Glenn Jones_ Thirsty Ear thi 57037.2 (1997) _Things to Come_ (John Fahey Trio) Wavelength (1997) _Georgia Stomps, Atlanta Struts, and Other Contemporary Dance Favorites_ Table of the Elements TOE-LP-38 Sr38 (1998); P-Vine PCD-23015 (1999) _Best of the Vanguard Years_ Vanguard 79532-2 (1999) _Hitomi_ LivHouse 70334 90001 2 (2000); LivHouse IMPREC 030 (2003) _Good Luck_ (John Fahey Trio) One Hit Records 0002 (2001) _KBOO Live_ (John Fahey Trio) One Hit Records 0004 (2001) _John Fahey Trio, Vol. 
1_ Jazzoo Records (2002) _Red Cross_ Revenant 104 (2002); P-Vine PCD-3276 (2003) _Hard Time Empty Bottle Blues_ Table of the Elements Nd60 (2003) _Of Rivers and Religion & After the Ball_ Warner Bros 8122-73663-2 (2003); Reprise WQCP-1167 (2011) _The Best of John Fahey Vol. 2:1964–1983_ Takoma TAKCD-8916-2 (2004); P-Vine PCD-3300 (2008) _The Great Santa Barbara Oil Slick_ Water 139 (2004) _Americana Masters, Volume One_ Digital Masterworks International (2004) _Americana Masters, Volume Two_ Digital Masterworks International (2004) _Americana Masters, Volume Three_ Digital Masterworks International (2004) _Sea Changes and Coelacanths_ Table of Elements TOE-85 (2006) _Addendum_ Vanguard 942-2 (2006) _Vanguard Visionaries_ Vanguard 73160-2 (2007) _Twilight on Prince Georges Avenue_ Rounder 11661-9093-2 (2009) _Your Past Comes Back to Haunt You_ Dust-to-Digital DTD-21 (2011) _The Transcendental Waterfall: Guitar Excursions 1962–1967_ 4 Men with Beards 4m600 (2012) (Contains _Blind Joe Death_ 4m201, _Death Chants_ 4m202, _Dance of Death_ 4m203, _Great San Bernardino Birthday Party_ 4m204, _Transfiguration of Blind Joe Death_ 4m205, _Days Have Gone By_ 4m206) **STEVE LOWENTHAL** started and ran the music magazine _Swingset_; his writing has also been published in _Fader_, _Spin_, _Vice_, and the _Village Voice_. He ran the record label Plastic for five years and currently runs the VDSQ label, which specializes in solo instrumental acoustic guitar music. He lives in New York City. **DAVID FRICKE** is a senior editor at _Rolling Stone_ magazine.
Q: How to loop through XML rows?

I have an XML structure like so:

    <Resident Id="100">
      <Name>Sample Name</Name>
      <PhoneNumber>12345642357891</PhoneNumber>
      <EmailAddress>sample_name@example.com</EmailAddress>
      <Address>
        <StreetLine1>Street Line1</StreetLine1>
        <City>City Name</City>
        <StateCode>AE</StateCode>
        <PostalCode>12345</PostalCode>
      </Address>
    </Resident>

I want to loop through each row, and while I can do the below (snippet)

    for element in root:
        listOfAttribAndValues = []
        listOfAttribAndValues.append(int(element.get("Id")))
        listOfAttribAndValues.append(element.find('Name').text)
        # and so on

and then write them to a list, and then write the list to a csv file

    writer.writerow(listOfAttribAndValues)

Is there an easy way to loop through each row (Name, PhoneNumber, etc) rather than explicitly finding the value of each item?

A: Your solution is correct. As mentioned in the comments, there is no concept of XML "rows". XML is a tree. The items are associated with their parent elements and child elements, but not with each other. In fact, no particular order is guaranteed or expected among elements of the same depth in the tree. The answer here may help improve your code: https://stackoverflow.com/a/31844901/2713818
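One generic way to avoid naming every child element explicitly is to walk each record's children and collect (tag, text) pairs. A minimal sketch using the standard library's `xml.etree.ElementTree`; the `Residents` wrapper element and the `flatten` helper that recurses into the nested `Address` block are assumptions for illustration, not part of the original question:

```python
import csv
import io
import xml.etree.ElementTree as ET

XML = """
<Residents>
  <Resident Id="100">
    <Name>Sample Name</Name>
    <PhoneNumber>12345642357891</PhoneNumber>
    <EmailAddress>sample_name@example.com</EmailAddress>
    <Address>
      <StreetLine1>Street Line1</StreetLine1>
      <City>City Name</City>
      <StateCode>AE</StateCode>
      <PostalCode>12345</PostalCode>
    </Address>
  </Resident>
</Residents>
"""

def flatten(element):
    """Yield the text of all leaf descendants of element, in document order."""
    for child in element:
        if len(child):                # nested element (e.g. Address): recurse
            yield from flatten(child)
        else:
            yield (child.text or "").strip()

root = ET.fromstring(XML)
buffer = io.StringIO()
writer = csv.writer(buffer)
for resident in root.iter("Resident"):
    # attribute first, then every leaf value, without naming any tag
    row = [resident.get("Id")] + list(flatten(resident))
    writer.writerow(row)

print(buffer.getvalue().strip())
```

Note this keeps the values in document order; if the columns must be stable across files, an explicit list of tags (as in the question's own snippet) is the safer choice.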
T-61.5020 Statistical Natural Language Processing
Answers 10 -- Speech recognition and language model evaluation
Version 1.1

1.
Again, we will use the Viterbi algorithm to find the most probable state sequence from a Hidden Markov Model. There are three differences to the weather model presented in the earlier exercise: the emissions are now done in the state transitions, the model has some null transitions, and the final state is determined.

a)
Let's initialize the grid at the initial state. We will collect only non-zero probability values.

The first observation

The initial state can lead only to the second or fourth state, so let's calculate those probabilities.

The second observation

From the second state we can go to the third state, and from the fourth state to the fifth state, so there are no choices to be made for those steps.

However, we should notice that some of the states can lead to the initial state with a null transition. Thus after the second observation the initial state can be reached again.

The third observation

Now one of the states can lead to two alternative states, so the Viterbi maximization chooses between them; the remaining transition is again forced.

The fourth observation

Again, each of the reachable states can go to only one successor state.

Final state

In the end we should arrive at the designated final state, which is reached with a null transition.

The calculated grid is in Figure 1. By following the arrows from the end to the beginning, we obtain the most probable state sequence. This corresponds to the word "jaon".

b)
In this case we must take into account the probabilities given by the language model. The probability values are calculated conditioned on the different choices of the word. The probability of the word is added to the calculations at each point where the word is selected. When we arrive at the initial state again, the selections determine which of the bigram probabilities is used. After that, they can be forgotten, as the language model does not use longer contexts.

Let's initialize the grid as before. We do not select the word yet.

The first observation

The initial state leads to the second and fourth states. The second state can start either the word "ja" or "jaon", so both must be taken into account.

The second observation

The second state leads only to the third state and the fourth state to the fifth state. In addition, the first state can be reached with a null transition. This is of course possible only for the words that end at this point.

The third observation

The transitions from the initial state start new words, so the probabilities from the language model are taken into account. In addition, as we had two possible words in one state, we can now select the more probable one.

The fourth observation

From the second state we can go only to the third state, and from the fourth state only to the fifth state. Also the first state can be reached with a null transition.

The grid after the final step is in Figure 2. The different word choices are drawn with different arrows. The most probable of the three paths that have led to the final state is selected. When we follow the arrows backwards in time, we get the most probable state sequence. This corresponds to the two-word sequence "ja on".

2.
The models built with the units from segmentation B have about three times as many unit types as the models built from segmentation A. The tokens in A are smaller on average, and thus the evaluation data includes more of them. The tokenwise cross-entropies cannot be compared directly because of this. For example, if the text was segmented into individual letters, the tokens would be quite easy to predict on average, but the likelihood of the whole data is not likely to be very high.

Instead of a direct comparison, we can first normalize the entropies so that they are based on words. The cross-entropy of the test data D could be calculated over its N_T tokens as

    H_T(D) = -(1/N_T) log2 P(D)    (1)

If we divide the logarithm of the data likelihood by the number of words in the data, N_W, instead of the number of tokens, N_T, we get the normalized, word-based entropy:

    H_W(D) = -(1/N_W) log2 P(D)    (2)

As we know the token-based entropies and the token and word counts, we can calculate the normalized entropy as follows:

    H_W(D) = (N_T / N_W) H_T(D)    (3)

Let's convert the given entropies to word-based estimates.

It seems that the entropies with segmentation B are somewhat better in models of all magnitudes. However, as the differences are small and the B models are larger, the exact sizes must be taken into account. The comparison is easy if we plot the results in size-entropy coordinates; see Figure 3.

The broken line that connects the points of segmentation A is nearer to the lower left corner than the lines connecting B, which means better accuracies for models of the same size.

Next we will take a look at the recognition results. The error rates have been calculated per word, so there is no need for normalization. The word error rates (WER) are plotted against model sizes in Figure 4. We see that the results are mixed for the small and large models: segmentation A works better for the small models, but B seems to outperform it after the size grows over 900000 n-grams.

It seems to be quite clear that the models based on segmentation A are better than those based on B if the model size is small. For larger models, the results are very close. In addition, the performance is not known for models smaller than half a million or larger than one million n-grams. To get more reliable results, we would need more measurement points and a test of the statistical significance between the values (e.g. with the Wilcoxon signed-rank test).

svirpioj[a]cis.hut.fi
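The grid-filling procedure of exercise 1 is the standard Viterbi recursion. A minimal sketch in Python; the state names and the transition/emission probabilities below are invented toy values, not the exercise's actual model, and null transitions are left out for brevity:

```python
import math

# Toy HMM with emissions attached to transitions (illustrative values only).
trans = {("S1", "S2"): 0.6, ("S1", "S3"): 0.4,
         ("S2", "S2"): 0.3, ("S2", "S3"): 0.7,
         ("S3", "S3"): 1.0}
emit = {"S2": {"a": 0.8, "b": 0.2},
        "S3": {"a": 0.1, "b": 0.9}}

def viterbi(observations, start="S1"):
    """Return the most probable state path for the observation sequence."""
    # grid[state] = (log-probability of best path, that path)
    grid = {start: (0.0, [start])}
    for obs in observations:
        new_grid = {}
        for (src, dst), p in trans.items():
            if src not in grid or obs not in emit.get(dst, {}):
                continue  # unreachable source or zero-probability emission
            logp = grid[src][0] + math.log(p) + math.log(emit[dst][obs])
            # keep only the best-scoring path into each state
            if dst not in new_grid or logp > new_grid[dst][0]:
                new_grid[dst] = (logp, grid[src][1] + [dst])
        grid = new_grid
    # the best-scoring surviving path wins
    return max(grid.values())[1]

path = viterbi(["a", "b"])
print(path)
```

The exercise's null transitions would be handled by an extra pass after each observation, propagating probabilities along transitions that emit nothing.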
import { getKey, fromKey, hash, withinExtentAndZ } from '../../../src/ol/tilecoord.js';
import TileGrid from '../../../src/ol/tilegrid/TileGrid.js';

describe('ol.TileCoord', function() {

  describe('create', function() {
    it('sets x y z properties as expected', function() {
      const tileCoord = [1, 2, 3];
      expect(tileCoord[0]).to.eql(1);
      expect(tileCoord[1]).to.eql(2);
      expect(tileCoord[2]).to.eql(3);
    });
  });

  describe('getKey()', function() {
    it('returns a key for a tile coord', function() {
      const key = getKey([1, 2, 3]);
      expect(key).to.eql('1/2/3');
    });
  });

  describe('fromKey()', function() {
    it('returns a tile coord given a key', function() {
      const tileCoord = [1, 2, 3];
      const key = getKey(tileCoord);
      const returned = fromKey(key);
      expect(returned).to.eql(tileCoord);
    });
  });

  describe('hash', function() {
    it('produces different hashes for different tile coords', function() {
      const tileCoord1 = [3, 2, 1];
      const tileCoord2 = [3, 1, 1];
      expect(hash(tileCoord1)).not.to.eql(hash(tileCoord2));
    });
  });

  describe('withinExtentAndZ', function() {

    it('restricts by z', function() {
      const tileGrid = new TileGrid({
        extent: [10, 20, 30, 40],
        tileSize: 10,
        resolutions: [2, 1],
        minZoom: 1
      });
      expect(withinExtentAndZ([0, 0, 0], tileGrid)).to.be(false);
      expect(withinExtentAndZ([1, 0, 0], tileGrid)).to.be(true);
      expect(withinExtentAndZ([2, 0, 0], tileGrid)).to.be(false);
    });

    it('restricts by extent when extent defines tile ranges', function() {
      const tileGrid = new TileGrid({
        extent: [10, 20, 30, 40],
        sizes: [[3, -3]],
        tileSize: 10,
        resolutions: [1]
      });
      expect(withinExtentAndZ([0, 1, 1], tileGrid)).to.be(true);
      expect(withinExtentAndZ([0, 2, 0], tileGrid)).to.be(false);
      expect(withinExtentAndZ([0, 0, 2], tileGrid)).to.be(false);
    });

    it('restricts by extent when sizes define tile ranges', function() {
      const tileGrid = new TileGrid({
        origin: [10, 20],
        sizes: [[3, 3]],
        tileSize: 10,
        resolutions: [1]
      });
      expect(withinExtentAndZ([0, 0, 0], tileGrid)).to.be(true);
      expect(withinExtentAndZ([0, 1, 0], tileGrid)).to.be(true);
      expect(withinExtentAndZ([0, 2, 0], tileGrid)).to.be(true);
      expect(withinExtentAndZ([0, 0, 1], tileGrid)).to.be(true);
      expect(withinExtentAndZ([0, 1, 1], tileGrid)).to.be(true);
      expect(withinExtentAndZ([0, 2, 1], tileGrid)).to.be(true);
      expect(withinExtentAndZ([0, 0, 2], tileGrid)).to.be(true);
      expect(withinExtentAndZ([0, 1, 2], tileGrid)).to.be(true);
      expect(withinExtentAndZ([0, 2, 2], tileGrid)).to.be(true);
      expect(withinExtentAndZ([0, 0, -1], tileGrid)).to.be(false);
      expect(withinExtentAndZ([0, 1, -1], tileGrid)).to.be(false);
      expect(withinExtentAndZ([0, 2, -1], tileGrid)).to.be(false);
      expect(withinExtentAndZ([0, -1, 0], tileGrid)).to.be(false);
      expect(withinExtentAndZ([0, 3, 0], tileGrid)).to.be(false);
      expect(withinExtentAndZ([0, -1, 1], tileGrid)).to.be(false);
      expect(withinExtentAndZ([0, 3, 1], tileGrid)).to.be(false);
      expect(withinExtentAndZ([0, -1, 2], tileGrid)).to.be(false);
      expect(withinExtentAndZ([0, 3, 2], tileGrid)).to.be(false);
      expect(withinExtentAndZ([0, 0, 3], tileGrid)).to.be(false);
      expect(withinExtentAndZ([0, 1, 3], tileGrid)).to.be(false);
      expect(withinExtentAndZ([0, 2, 3], tileGrid)).to.be(false);
    });

    it('restricts by extent when sizes (neg y) define tile ranges', function() {
      const tileGrid = new TileGrid({
        origin: [10, 40],
        sizes: [[3, -3]],
        tileSize: 10,
        resolutions: [1]
      });
      expect(withinExtentAndZ([0, 0, -1], tileGrid)).to.be(true);
      expect(withinExtentAndZ([0, 1, -1], tileGrid)).to.be(true);
      expect(withinExtentAndZ([0, 2, -1], tileGrid)).to.be(true);
      expect(withinExtentAndZ([0, 0, -2], tileGrid)).to.be(true);
      expect(withinExtentAndZ([0, 1, -2], tileGrid)).to.be(true);
      expect(withinExtentAndZ([0, 2, -2], tileGrid)).to.be(true);
      expect(withinExtentAndZ([0, 0, -3], tileGrid)).to.be(true);
      expect(withinExtentAndZ([0, 1, -3], tileGrid)).to.be(true);
      expect(withinExtentAndZ([0, 2, -3], tileGrid)).to.be(true);
      expect(withinExtentAndZ([0, 0, 0], tileGrid)).to.be(false);
      expect(withinExtentAndZ([0, 1, 0], tileGrid)).to.be(false);
      expect(withinExtentAndZ([0, 2, 0], tileGrid)).to.be(false);
      expect(withinExtentAndZ([0, -1, -1], tileGrid)).to.be(false);
      expect(withinExtentAndZ([0, 3, -1], tileGrid)).to.be(false);
      expect(withinExtentAndZ([0, -1, -2], tileGrid)).to.be(false);
      expect(withinExtentAndZ([0, 3, -2], tileGrid)).to.be(false);
      expect(withinExtentAndZ([0, -1, -3], tileGrid)).to.be(false);
      expect(withinExtentAndZ([0, 3, -3], tileGrid)).to.be(false);
      expect(withinExtentAndZ([0, 0, -4], tileGrid)).to.be(false);
      expect(withinExtentAndZ([0, 1, -4], tileGrid)).to.be(false);
      expect(withinExtentAndZ([0, 2, -4], tileGrid)).to.be(false);
    });

    it('does not restrict by extent with no extent or sizes', function() {
      const tileGrid = new TileGrid({
        origin: [10, 20],
        tileSize: 10,
        resolutions: [1]
      });
      expect(withinExtentAndZ([0, Infinity, -1], tileGrid)).to.be(true);
      expect(withinExtentAndZ([0, 0, Infinity], tileGrid)).to.be(true);
      expect(withinExtentAndZ([0, -Infinity, -1], tileGrid)).to.be(true);
      expect(withinExtentAndZ([0, 0, Infinity], tileGrid)).to.be(true);
    });
  });
});
#import <UIKit/UIKit.h>

@interface CTBarButtonItem : NSObject

@property (nonatomic, strong, readonly) UIView *view;
@property (nonatomic, assign) CGFloat width;
@property (nonatomic, assign) NSInteger tag;
@property (nonatomic, copy) NSString *title;
@property (nonatomic, strong) UIFont *font;
@property (nonatomic, assign, getter = isEnabled) BOOL enabled;
@property (nonatomic, assign, getter = isHidden) BOOL hidden;

- (instancetype)initWithImage:(UIImage *)image title:(NSString *)title target:(id)target action:(SEL)action;
- (instancetype)initWithTitle:(NSString *)title target:(id)target action:(SEL)action;
- (instancetype)initWithCustomView:(UIView *)customView;

@end
\section{Introduction} \noindent Beauville has shown in \cite{B79} that if the image of the canonical map $\Phi_{K_S}$ of a surface has dimension $2$, then its degree $d$ is bounded as follows: \[ d:=\deg(\Phi_{K_S}) \leq 9+\frac{27-9q}{p_g-2}\leq 36. \] Note that the bound $d\leq 36$ was shown first by Persson in \cite[Proposition $5.7$]{Per}. Here, $q$ is the irregularity and $p_g$ the geometric genus of $S$. In particular, $28 \leq d$ is only possible if $q=0$ and $p_g=3$. Motivated by this observation, the construction of surfaces with $p_g=3$ and canonical map of degree $d$ for every value $2 \leq d \leq 36$ is an interesting, but still widely open problem \cite[Question 5.2]{MLP21}. For a long time the only examples with $10\leq d$ were the surfaces of Persson \cite{Per}, with canonical map of degree $16$, and Tan \cite{Tan}, with degree $12$. In recent years, this problem has attracted the attention of many authors, who have put increased effort into the construction of new examples. As a result, we now have examples in the literature for all degrees $2\leq d \leq 12$ and $d=14,16, 20, 24, 27, 32$ and $36$, see \cite{MLP21}, \cite{Ri15, Ri17, Ri17Zwei, Ri22}, \cite{LY21}, \cite{GPR}, \cite{Bin19, Bin21}, \cite{FG2022} and \cite{Bin22}. In this paper we construct surfaces as quotients of a product of two curves $C_1\times C_2$ modulo an action of the group $S_3\times \mathbb{Z}_3^2$. Here $C_1$ is a fixed curve of genus $10$, while $C_2$ is a curve of genus $19$ varying in a one-dimensional family. Varying the action of $S_3\times \mathbb{Z}_3^2$ we get four different one-dimensional families of canonical models of surfaces of general type with $K_S^2=24$, $p_g=3$ and $q=0$. We write the canonical system of each of them in terms of invariant holomorphic two-forms on the product $C_1\times C_2$. It turns out that for none of them is $\vert K_{S}\vert$ base-point free, i.e. the canonical map $\Phi_{K_{S}} \colon S \dashrightarrow \mathbb P^2$ is just a rational map.
To compute its degree, we resolve the indeterminacy by a sequence of blowups and compute the degree of the resulting morphism via elementary intersection theory. It turns out that the degree of the canonical map is not always constant in a family and in fact it assumes five different values: $d=12,13,15,16$ and $18$. To our knowledge there are no other examples in the literature of surfaces with canonical map of degree $13$, $15$ or $18$.\footnote{During the preparation of this work Bin Nguyen has communicated to us a different construction of a surface with canonical map of degree $13$.} We point out that our surfaces are examples of product-quotient surfaces, i.e. quotients of a product of two curves modulo an action of a finite group. In our cases the action is diagonal and non-free, yielding surfaces with $8$ rational double points of type $\frac{1}{2}(1,1)$ as singularities. Product-quotient surfaces were studied for the first time by Catanese in \cite{Cat00}. They have proven to be a very useful tool for building new examples of algebraic surfaces and studying their geometry in an accessible way. Apart from other works, which mainly deal with irregular surfaces, we want to mention the complete classification of surfaces isogenous to a product with $p_g=q=0$ \cite{BCG} and the classification for $p_g=1$ and $q=0$ under the assumption that the action is diagonal \cite{G15}, the rigid but not infinitesimally rigid manifolds \cite{BP21} of Bauer and Pignatelli that gave a negative answer to a question of Kodaira and Morrow \cite[p.45]{KM71}, and also the infinite series of $n$-dimensional infinitesimally rigid manifolds of general type with non-contractible universal cover for each $n\geq 3$, provided by Frapporti and Gleissner \cite{FG}. \medskip \noindent {\bf Notation:} An algebraic surface $S$ is a \textit{canonical model} if it has at most rational double points as singularities and an ample canonical divisor.
Recall that each surface of general type is birational to a unique canonical model. In particular the minimal resolution of the singularities of $S$ is its minimal model. Let us denote by $\sigma$ and $\tau$ a rotation ($3$-cycle) and a reflection (transposition) of $S_3$ respectively. Consider also the three irreducible characters of $S_3$, namely the trivial character $1$, the character $\textit{sgn}$ computing the sign of a permutation, and the only $2$-dimensional irreducible character $\mu:=\frac{1}{2}\left(\chi_{reg}-sgn-1\right)$, where $\chi_{reg}$ is the character of the regular representation of $S_3$. \\ Let us fix a basis $e_1, e_2$ of $\mathbb Z_3^2$ and consider the dual characters $\epsilon_1$, $\epsilon_2$ of $e_1$ and $e_2$, i.e. the characters defined by \[ \epsilon_i(ae_1+be_2):=\zeta_3^{a\delta_{1i}+b\delta_{2i}}, \qquad \zeta_3:=e^{\frac{2\pi i }{3}}, \] where $\delta_{ij}$ is the Kronecker delta. \\ Given a representation $\rho$ on a vector space $V$ and an isotypic component $W$ of $V$ of character $\chi$, we sometimes write $W_\chi$ instead of $W$ to specify its character. \\ When we write $\sqrt[n]{\lambda}$ we mean one of the $n$-th roots (arbitrarily chosen) of the complex number $\lambda$. \\ Finally, denote by $[j]\in \{0,1\}$ the class of the integer $j$ modulo $2$. \section{The surfaces} \noindent In this section we construct a series of surfaces $S$ as quotients of a product of the two curves $C_1$ and $C_2$, modulo a suitable diagonal action of the group $S_3\times \mathbb Z_3^2$. For each surface $S$, we determine the canonical map $\Phi_{K_S}$ and compute its degree. We consider the projective space $\mathbb{P}^3$ with homogeneous coordinates $x_0, \ldots, x_3$ and the weighted projective space $\mathbb{P}^3(1,1,1,2)$ with homogeneous coordinates $y_0, \ldots, y_3$. Here $y_3$ is the variable of weight $2$.
We take the curves $C_1\subseteq \mathbb{P}^3$ and $C_2\subseteq \mathbb{P}^3(1,1,1,2)$ as follows: \[ C_1 \colon \begin{cases} x_2^3=x_0^3-x_1^3 \\ x_3^3=x_0^3+x_1^3 \end{cases}, \qquad C_2 \colon \begin{cases} y_2^3=y_0^3+y_1^3 \\ y_3^3=y_0^6+y_1^6-2\lambda y_0^3y_1^3 \end{cases}, \lambda\neq -1,1 \] \noindent Both curves are smooth; in fact, this is the reason why we assume $\lambda \neq -1,1$ in the definition of $C_2$. On the first curve $C_1$ we consider the action of $S_3\times \mathbb Z_3^2$ given by \[ \phi_1\colon S_3\times \mathbb Z_3^2 \to \operatorname{Aut}(C_1), \quad \left(\sigma^i\tau^j, (a,b)\right) \mapsto [(x_0:x_1:x_2: x_3) \mapsto (\zeta_3^ix_{[j]}:x_{[j+1]} : (-1)^j\zeta_3^{2a+2i} x_2: \zeta_3^{2b+2i} x_3)]. \] We leave it to the reader to check that this defines an action. Note that the automorphisms $\phi_1(\sigma^i\tau^j,(a,b))$ are precisely the deck transformations of the cover \[ \pi_1 \colon C_1 \stackrel{9 : 1}{\longrightarrow} \mathbb P^1 \stackrel{6 : 1}{\longrightarrow} \mathbb P^1, \qquad (x_0:x_1:x_2:x_3) \mapsto (x_0:x_1)\mapsto \left(x_0^3x_1^3: (x_0^6+x_1^6)/2\right). \] In particular $C_1/\left(S_3\times \mathbb Z_3^2\right) \simeq \mathbb P^1$ and $\pi_1$ is the quotient map. The cover is branched along $p_1:=(1:1)$, $p_2:=(0:1)$ and $p_3:=(-1:1)$, corresponding to the three orbits of the points with non-trivial stabilizer, of respective lengths $9,18$ and $9$.
A representative of each orbit and a generator of the stabilizer is given by: \[ \begin{tabular}{c | c | c | c } & $p_1$ & $p_2$ & $p_3$ \\ \hline \makebox{representative} & $(1:1:0:\sqrt[3]{2})$ & $(1:0:1:1)$ & $ (1: -\zeta_3 :\sqrt[3]{2}:0)$ \\ \hline \makebox{generator} & $ g_1:=\left(\tau,(1,0)\right) $ & $g_2:=(\sigma^2,(2,2)) $ & $g_3:=(\sigma\tau,(0,1)) $ \\ \end{tabular} \] \noindent On the second curve $C_2$ the action $\phi_2$ is defined as \[ \phi_2 \colon S_3\times \mathbb Z_3^2 \to \operatorname{Aut}(C_2), \quad \left(\sigma^i\tau^j, (a,b)\right) \mapsto [(y_0:y_1:y_2: y_3) \mapsto (\zeta_3^iy_{[j]}:y_{[j+1]} : \zeta_3^{a+2b+2i} y_2: \zeta_3^{2a+2b+i} y_3)]. \] As in the previous case, we leave it to the reader to check that this defines a group action and note that the automorphisms $\phi_2(\sigma^i\tau^j,(a,b))$ are precisely the deck transformations of the cover \[ \pi_2\colon C_2 \stackrel{9:1}{\longrightarrow}\mathbb P^1 \stackrel{6:1}{\longrightarrow} \mathbb P^1, \qquad (y_0:y_1:y_2:y_3) \mapsto (y_0:y_1)\mapsto \left(y_0^3y_1^3: (y_0^6+y_1^6)/2\right). \] Hence $C_2/\left(S_3\times \mathbb Z_3^2\right) \simeq \mathbb P^1$ and $\pi_2$ is the quotient map. The cover is branched along $q_1:=(1:1)$, $q_2:=(0:1)$, $q_3:=(1:\lambda)$ and $q_4:=(-1:1)$, corresponding to the four orbits of the points with non-trivial stabilizer, of respective lengths $27,18, 18$ and $9$. Note that the points $q_j$ are pairwise distinct under the assumption $\lambda \neq -1,1$.
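Observe that the branch data determines the genera of the two curves: the orbit lengths $9,18,9$ and $27,18,18,9$ correspond to stabilizers of orders $6,3,6$ and $2,3,3,6$ respectively, so the Riemann-Hurwitz formula applied to the quotient maps $\pi_1$ and $\pi_2$ gives
\[
2g(C_1)-2=54\left(-2+\tfrac{5}{6}+\tfrac{2}{3}+\tfrac{5}{6}\right)=18, \qquad
2g(C_2)-2=54\left(-2+\tfrac{1}{2}+\tfrac{2}{3}+\tfrac{2}{3}+\tfrac{5}{6}\right)=36,
\]
i.e. $g(C_1)=10$ and $g(C_2)=19$, as stated in the introduction.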
A representative of each orbit and a generator of the stabilizer is given by: \[ \small\begin{tabular}{c | c | c | c |c} & $q_1$ & $q_2$ & $ q_3$ & $q_4$ \\ \hline \makebox{representative} & $(1:\zeta_3:\sqrt[3]{2}:\sqrt[3]{2-2\lambda})$ & $(0:1:1:1)$ & $ (1: \sqrt[3]{\lambda-\sqrt{\lambda^2-1}}:\sqrt[3]{1+\lambda-\sqrt{\lambda^2-1}}:0)$ & $(1:-1:0:\sqrt[3]{2+2\lambda})$ \\ \hline \makebox{generator} & $ h_1:=\left(\sigma\tau,0\right) $ & $h_2:=(\sigma,(1,0)) $ & $h_3:=(Id,(1,1)) $ & $h_4:=(\tau,(1,2))$ \\ \end{tabular} \] We compute the action of $S_3\times \mathbb{Z}_3^2$ on $H^0(C_i,\Omega_{C_i}^1)$. By standard adjunction theory $H^0(C_1,\Omega_{C_1}^1)$ is isomorphic to $H^0(C_1,\mathcal{O}_{C_1}(2))$, isomorphism mapping a monomial $x_0^{2-\alpha-\beta-\gamma}x_1^\alpha x_2^\beta x_3^\gamma$ to the $1$-form $\omega_{\alpha\beta\gamma}$ that in affine coordinates is \[ \omega_{\alpha\beta\gamma}:=u^\alpha v^{\beta-2} t^{\gamma-2}du, \qquad \makebox{where} \qquad u:=\frac{x_1}{x_0} \quad v:=\frac{x_2}{x_0} \quad \makebox{and} \qquad t:=\frac{x_3}{x_0}. \] The character of the \textit{canonical} representation of $C_1$, the action of $S_3\times \mathbb Z_3^2$ on $H^0(C_1,\Omega_{C_1}^1)$, can be computed by the standard Chevalley-Weil formula and is amount to \[ \chi_{can}^1=\epsilon_1^2\cdot\epsilon_2^2+sgn\cdot\epsilon_1\cdot\epsilon_2+sgn\cdot\epsilon_2+sgn\cdot\epsilon_1+\mu\cdot\epsilon_1\cdot\epsilon_2+\mu\cdot\epsilon_1^2\cdot\epsilon_2+\mu\cdot \epsilon_1\cdot\epsilon_2^2.\] We give an explicit decomposition in irreducible subspaces. Using the expression in affine coordinates we obtain \[ \begin{split} (\sigma^i\tau^j,(a,b))\cdot \omega_{\alpha\beta\gamma} & = \phi_1(\left(\sigma^i\tau^j,(a,b)\right)^{-1})^{\ast}(\omega_{\alpha\beta\gamma}) \\ & =(-1)^{j(\beta-1)}\zeta_3^{a(\beta-2)+b(\gamma-2)+\left(\alpha-(2\alpha+\beta+\gamma-2)[j]+2\beta+2\gamma-7\right)i}\omega_{(\alpha-(2\alpha+\beta+\gamma-2)[j])\beta\gamma}. 
\end{split} \] A tedious but straightforward computation gives the following decomposition: \[ \begin{split} H^0(C_1,\Omega_{C_1}^1)= & \langle \omega_{011}\rangle_{\epsilon_1^2\cdot\epsilon_2^2} \oplus \langle \omega_{100}\rangle_{sgn\cdot \epsilon_1\cdot \epsilon_2} \oplus \langle \omega_{020}\rangle_{sgn\cdot \epsilon_2} \oplus \langle \omega_{002}\rangle_{sgn\cdot \epsilon_1} \oplus \\ & \langle \omega_{000},\omega_{200}\rangle_{\mu\cdot \epsilon_1\cdot \epsilon_2}\oplus \langle \omega_{010},\omega_{110}\rangle_{\mu\cdot\epsilon_1^2\cdot\epsilon_2}\oplus \langle \omega_{001},\omega_{101}\rangle_{\mu\cdot \epsilon_1\cdot \epsilon_2^2}. \end{split} \] Similarly, adjunction theory gives an isomorphism among $H^0(C_2,\Omega_{C_2}^1)$ and $H^0(C_2,\mathcal{O}_{C_2}(4))$ mapping a monomial $y_0^{4-\alpha-\beta-2\gamma}y_1^\alpha y_2^\beta y_3^\gamma$ to the $1$-form $\omega'_{\alpha\beta\gamma}$ that in affine coordinates is \[ \omega'_{\alpha\beta\gamma}:=(u')^{\alpha}(v')^{\beta-2} (t')^{\gamma-2}du', \qquad \makebox{where} \qquad u':=\frac{y_1}{y_0} \quad v':=\frac{y_2}{y_0} \quad \makebox{and} \qquad t':=\frac{y_3}{y^2_0}. \] We obtain a basis of $19$ dimension space $H^0(C_2, \mathcal{O}_{C_2}(4))$ by taking the $22$ monomials of degree $4$ in the variables $y_j$ and removing $y_0y_2^3$, $y_1y_2^3$ and $y_2^4$, that can be expressed in terms of the other monomials using the cubic equation defining $C_2$. Accordingly we get a basis of $H^0(C_2,\Omega_{C_2}^1)$ by removing from that set $\omega'_{\alpha\beta\gamma}$ the $1$-forms $\omega'_{040}, \omega'_{030}$ and $\omega'_{130}$. 
The \textit{canonical} character of $C_2$ is given by Chevalley-Weil as \[ \chi_{can}^2=sgn \cdot \epsilon_1^2\cdot \epsilon_2+sgn\cdot\epsilon_1^2\cdot \epsilon_2^2+sgn\cdot \epsilon_1\cdot \epsilon_2+sgn\cdot \epsilon_1+sgn\cdot \epsilon_2^2+\mu\cdot \epsilon_1+\mu\cdot\epsilon_2+2\mu\cdot \epsilon_2^2+sgn\cdot\epsilon_1^2+\epsilon_1^2+\mu\cdot \epsilon_1^2+\mu\cdot\epsilon_1\cdot\epsilon_2, \] and the action on $H^0(C_2,\Omega_{C_2}^1)$ computed in affine coordinates as above is \[ \begin{split} (\sigma^i\tau^j,(a,b))\cdot \omega'_{\alpha\beta\gamma} & = \phi_2(\left(\sigma^i\tau^j,(a,b)\right)^{-1})^{\ast}(\omega'_{\alpha\beta\gamma}) \\ & =(-1)^{j}\zeta_3^{a(2\beta+\gamma)+b(\beta+\gamma-4)+\left(\alpha-(2\alpha+\beta+2\gamma-4)[j]+2\beta+\gamma+1\right)i}\omega'_{(\alpha-(2\alpha+\beta+2\gamma-4)[j])\beta\gamma}. \end{split} \] Another tedious computation gives the decomposition \[ \begin{split} H^0(C_2,\Omega_{C_2}^1)= & \langle \omega'_{002}\rangle_{sgn\cdot \epsilon_1^2\cdot \epsilon_2} \oplus \langle \omega'_{021}\rangle_{sgn\cdot \epsilon_1^2\cdot \epsilon_2^2}\oplus \langle \omega'_{120}\rangle_{sgn\cdot\epsilon_1\cdot\epsilon_2} \\ & \oplus\langle \omega'_{101}\rangle_{sgn\cdot\epsilon_1}\oplus \langle \omega'_{200}\rangle_{sgn\cdot \epsilon_2^2}\oplus \langle \omega'_{001},\omega'_{201}\rangle_{\mu\cdot \epsilon_1} \oplus \langle\omega'_{011},\omega'_{111}\rangle_{\mu\cdot \epsilon_2} \\ & \oplus \left(\langle\omega'_{000},\omega'_{400}\rangle\oplus \langle \omega'_{100},\omega'_{300}\rangle\right)_{\mu\cdot \epsilon_2^2}\oplus \langle\omega'_{010}+\omega'_{310}\rangle_{sgn\cdot \epsilon_1^2}\oplus\langle\omega'_{010}-\omega'_{310}\rangle_{\epsilon_1^2} \\& \oplus \langle \omega'_{110},\omega'_{210}\rangle_{\mu\cdot \epsilon_1^2} \oplus \langle \omega'_{220},\omega'_{020}\rangle_{\mu\cdot \epsilon_1\cdot \epsilon_2}. 
\end{split} \] \bigskip We consider unmixed quotients $S:=(C_1\times C_2)/\left( S_3\times \mathbb Z_3^2\right)$ with respect to a diagonal action $\phi_1\times \left(\phi_2\circ \Psi\right)$, where $\Psi$ is one of the automorphisms of $S_3\times \mathbb{Z}_3^2$. \\ Firstly we study the singularities of $S$. We observe that the points of $C_1$ with non-trivial stabilizer have stabilizers of order $6$, $3$ and $6$, while those of $C_2$ have stabilizers of order $2$, $3$, $3$ and $6$. Hence $18$ points of $C_1$ and $36$ points of $C_2$ have stabilizer of even order. However $S_3\times \mathbb{Z}_3^2$ has only three elements of order $2$, and they are in the same conjugacy class. This means that each of these three elements fixes exactly $6\cdot 12=72$ points of $C_1\times C_2$. Thus $S$ can never be smooth, and if it admits only nodes, then there are $3\cdot 72 /27=8$ of them in total. \\ Now let us consider the following automorphisms of $S_3\times \mathbb{Z}_3^2$ \begin{equation}\label{automorfismi} \begin{aligned} \Psi_1 & =Id, & \Psi_2 &= \left(\begin{cases} \sigma \mapsto \sigma \\ \tau \mapsto \tau\sigma \\ \end{cases}, \begin{pmatrix} 0&1 \\ 2 & 0\end{pmatrix} \right), &\\ \Psi_3 &= \left(\begin{cases} \sigma \mapsto \sigma^2 \\ \tau \mapsto \tau \\ \end{cases}, \begin{pmatrix} 0&2 \\1 & 0\end{pmatrix} \right),& \Psi_4 &= \left(\begin{cases} \sigma \mapsto \sigma^2 \\ \tau \mapsto \tau \\ \end{cases}, \begin{pmatrix} 0&2 \\ 2 & 0\end{pmatrix} \right).& \end{aligned} \end{equation} A direct computation shows that for these four choices of $\Psi$ the surface $S$ has exactly $8$ nodes and no other singularities. \begin{remark} The first example has been found by using the database \cite{CGP22}. Later on we have run a systematic search over all automorphisms of $S_3\times \mathbb{Z}_3^2$, showing that the surfaces obtained in this way having only nodes are isomorphic to the four surfaces presented in this note.
\end{remark} The vector space $H^0(K_{S})$ is isomorphic to the invariant subspace $\big(H^0(\Omega_{C_1}^1) \otimes H^0(\Omega_{C_2}^1) \big)^{S_3\times \mathbb Z_3^2}$, where the action on the tensor product is diagonal, i.e. $\left(\sigma^i\tau^j,(a,b)\right)\in S_3\times \mathbb Z_3^2$ acts via \begin{equation}\label{azione_twistata} \phi_1(\left(\sigma^i\tau^j,(a,b)\right)^{-1})^{\ast} \otimes \phi_2(\Psi(\left(\sigma^i\tau^j,(a,b)\right)^{-1}))^{\ast}. \end{equation} \noindent For each character $\eta$ of $S_3\times \mathbb{Z}_3^2$ define its twist by $\Psi$ as \[ \eta_\Psi:=\eta\circ \Psi^{-1}. \] Pulling back $H^0(K_S)$ to $C_1\times C_2$ we obtain \begin{Lemma}\label{invariantforms} A basis of $H^0(K_S)$ is given by the $\left(S_3\times \mathbb Z_3^2\right)$-invariant $2$-forms of $H^0(\Omega_{C_1}^1) \otimes H^0(\Omega_{C_2}^1) $ with respect to the action \eqref{azione_twistata}. Hence \[ \big(H^0(\Omega_{C_1}^1) \otimes H^0(\Omega_{C_2}^1) \big)^{S_3\times \mathbb Z_3^2}=\bigoplus_{\eta\neq 0} \big(H^0(\Omega_{C_1}^1)_{\eta}\otimes H^0(\Omega_{C_2}^1)_{\overline{\eta_\Psi}}\big)^{S_3\times \mathbb Z_3^2}, \] where $H^0(\Omega_{C_i}^1)_{\eta}$ is the isotypic component of $H^0(\Omega_{C_i}^1)$ of character $\eta$. Moreover \[p_g=\langle \chi_{can}^1\cdot \chi_{can}^2,1\rangle=\sum_{\eta\neq 0} \langle \chi_{can}^1,\eta \rangle \cdot \langle \chi_{can}^2, \overline{\eta_\Psi}\rangle. \] \end{Lemma} \noindent Denote by $\omega_{jklmrs}:=\omega_{jkl}\otimes \omega'_{mrs}$. We can now state and prove our main result: \begin{theorem}\label{MainTheo} For all $\Psi \in \operatorname{Aut}(S_3\times \mathbb Z_3^2)$ in \eqref{automorfismi}, the diagonal action $\phi_1 \times (\phi_2 \circ \Psi)$ of $S_3\times \mathbb Z_3^2$ on the product of the two curves $C_1$ and $C_2$ is not free. The quotient is a canonical model of a regular surface $S$ of general type with $K_S^2=24$, $p_g=3$ and with $8$ rational double points as singularities of type $\frac{1}{2}(1,1)$. 
A basis of $H^0(K_S)$, the canonical map $\Phi_{K_S}$ in projective coordinates and its degree are given in the following table: \[ \Small\begin{tabular}{c | c | c | c | c } \makebox{No} & $\Psi$ & \makebox{Basis of $H^0(K_S)$} & $\Phi_{K_S}(x,y)$ & $\deg(\Phi_{K_S})$ \\ \hline 1. & $ Id $ & $\lbrace \omega_{100021},\omega_{020200}, \omega_{002040} \rbrace$ & $(x_0x_1y_2^2y_3: x_2^2y_0^2y_1^2: x_3^2y_2^4)$ & $18$ \\ \hline 2. & $ \Psi_2$ & $\lbrace \omega_{020101}, \omega_{002200}, \zeta_3\omega_{010020}-\omega_{110220} \rbrace$ & $(x_2^2y_0y_1y_3:x_3^2y_0^2y_1^2: x_2y_2^2(\zeta_3x_0y_0^2-x_1y_1^2))$ & $\begin{cases} 15 \quad \makebox{if} \quad \lambda \neq 0 \\ 13 \quad \makebox{if} \quad \lambda = 0 \\ \end{cases}$ \\ \hline 3. & $ \Psi_3$ & $\lbrace \omega_{100002}, \omega_{020040}, \omega_{001220}+\omega_{101020} \rbrace$ & $(x_0x_1y_3^2:x_2^2y_2^4:x_3y_2^2(x_0y_1^2+x_1y_0^2))$ & $\begin{cases} 18 \quad \makebox{if} \quad \lambda \neq 0 \\ 16 \quad \makebox{if} \quad \lambda = 0 \\ \end{cases}$ \\ \hline 4. & $\Psi_4$ & $\lbrace \omega_{100120}, \omega_{020101}, \omega_{000020}+\omega_{200220} \rbrace$ & $(x_0x_1y_0y_1y_2^2:x_2^2y_0y_1y_3:y_2^2(x_0^2y_0^2+x_1^2y_1^2))$ & $12$ \\ \hline \end{tabular} \] \end{theorem} \begin{proof} We have already mentioned that for all $\Psi$ in \eqref{automorfismi} the action is not free and the quotient $S$ has $8$ singularities of type $\frac{1}{2}(1,1)$ and no other singularities. Both curves have genus $g(C_i)\geq 2$, hence $C_1\times C_2$ has ample canonical divisor and so $S$ has ample canonical divisor too. It follows that $S$ is a canonical model. The self-intersection of the canonical divisor of each $S$ amounts to \[ K_S^2=\frac{8(g(C_1)-1)(g(C_2)-1)}{\vert S_3\times \mathbb Z_3^2\vert }=24. \] They are regular surfaces, because they do not possess any non-zero holomorphic one-forms, since $C_i/\left(S_3\times \mathbb Z_3^2\right)$ is biholomorphic to $\mathbb P^1$.
The geometric genus of each $S$ is therefore equal to (compare \cite{BP12}) \[ p_g= \chi(\mathcal O_{S})- 1 = \frac{(g(C_1)-1)(g(C_2)-1)}{\vert S_3\times \mathbb Z_3^2\vert}+\frac{1}{12}\left(8\cdot \frac{3}{2}\right)-1=3. \] Using Lemma \ref{invariantforms} we have computed a basis of $H^0(K_S)$. Indeed, since we have proved that $p_g=3$, it is enough to verify that the elements given in the table are invariant under the corresponding action. Applying the explicit isomorphisms from $H^0(C_1,\Omega_{C_1}^1)$ to $H^0(C_1,\mathcal{O}_{C_1}(2))$ and from $H^0(C_2,\Omega_{C_2}^1)$ to $H^0(C_2,\mathcal{O}_{C_2}(4))$ we obtain the products of quadrics and quartics defining the canonical map in the table. It remains to determine the degree of $\Phi_{K_S}$ for each surface $S$. Instead of working on $S$, it is convenient to work on $C_1\times C_2$, which is smooth: \[ \xymatrix{ C_1\times C_2 \ar[r]^{\lambda_{12}}\ar[dr]_{\Phi_{K_{C_1\times C_2}}}& S \ar@{-->}[r]^{\Phi_{K_S}} & \mathbb{P}^2 \ar@{<--}[dl]\\ & \mathbb P^{10\cdot 19 -1}. } \] Note that the map $\Phi_{K_S}\circ \lambda_{12}$ is induced by the sublinear system $\vert T \vert $ of $\vert K_{C_1\times C_2}\vert$ generated by the three invariant $2$-forms defining $\Phi_{K_S}$. In particular the self-intersection of $T$ amounts to \[ T^2=\left(\lambda_{12}^* K_S\right)^2=\vert S_3\times \mathbb Z_3^2\vert \cdot K_S^2=54\cdot 24. \] We \emph{resolve the indeterminacy} of $\Phi_T=\Phi_{K_S}\circ \lambda_{12}$ by a sequence of blowups, as explained in the textbook \cite[Theorem II.7]{Beauville}: \[ \xymatrix{ \widehat{C_1\times C_2} \ar[r] \ar[dr]_{\Phi_{\widehat{M}}} & C_1\times C_2\ar@{-->}[d]^{\Phi_{T}} \\ & \mathbb P^2.
} \] Here the morphism $\Phi_{\widehat{M}}$ is induced by the base-point free linear system $\vert \widehat{M} \vert$ obtained as follows: \\ We blow up the base-points of $\vert T\vert$, take the pullback of the mobile part $\vert M \vert$ of $\vert T\vert$ and remove the fixed part of this new linear system. We repeat the procedure until we obtain a base-point free linear system $\vert\widehat{M}\vert $. The self-intersection $\widehat{M}^2$ is positive if and only if $\Phi_{\widehat{M}}$ is not composed with a pencil. In this case $\Phi_{\widehat{M}}$ is onto and it holds: \[ \deg(\Phi_{K_S})=\frac{1}{\vert S_3\times \mathbb{Z}_3^2 \vert }\deg(\Phi_{\widehat{M}})=\frac{1}{54}\widehat{M}^2. \] For the computation of the resolution, it is convenient to write the divisors of the products of quadrics and quartics defining $\Phi_{K_S}$ (and hence $\Phi_T$) as linear combinations of the curves $F_j:=\lbrace x_j=0\rbrace$ and $G_k:=\lbrace y_k=0\rbrace$ on $C_1\times C_2$. We point out that these curves are reduced and intersect pairwise transversally thanks to the assumption $\lambda \neq -1,1$. In particular $(F_j,F_k)=(G_j,G_k)=0$ and $(F_j, G_k)=81$, for $k\neq 3$, while $(F_j, G_3)=162$. \\ Consider the first surface in the table. Here, the divisors of the three products of quadrics and quartics spanning the subsystem $\vert T\vert$ are: \[ F_0+F_1+ 2G_2+G_3, \qquad 2F_2+2G_0+2G_1 \qquad \makebox{and} \qquad 2F_3+4G_2. \] Here $\vert T\vert$ has no fixed part and it has precisely $81$ (non-reduced) base-points, the points of $F_2\cap G_2$. We can perform the computation of the difference $T^2- \widehat{M}^2$ by applying Lemma \ref{FedericoLemma} below (for a proof see \cite[Lemma 2.3]{FG2022}) recursively for each base-point of $\vert T \vert$: \begin{Lemma}\label{FedericoLemma} Let $\vert M \vert $ be a two-dimensional linear system on a surface $S$ spanned by $D_1$, $D_2$ and $D_3$.
Assume that $\vert M \vert $ has only isolated base-points, which are smooth points of $S$, and that in a neighborhood of a base-point $p$ we can write the divisors $D_i$ as \[ D_1=aH, \quad D_2=bK \quad \makebox{and} \quad D_3= cH+d K. \] Here $H$ and $K$ are reduced, smooth and intersect transversally at $p$, and $a,b,c,d$ are non-negative integers with $b\leq a$. Assume that \begin{itemize} \item $d\geq b$ or \item $b\neq 0$ and $c+md\geq a$, where $a=mb+q$ with $0\leq q<b$. \end{itemize} Then after blowing up at most $(ab)$-times we obtain a new linear system $\vert \widehat{M} \vert $ such that no infinitely near point of $p$ is a base-point of $\vert \widehat{M} \vert $. Moreover $\widehat{M}^2 =M^2-ab$. \end{Lemma} In a neighbourhood of each of these base-points the three divisors are respectively \[ 2G_2, \qquad 2F_2 \qquad \makebox{and} \qquad 4G_2. \] Since $F_2$ and $G_2$ are transversal, we are in the situation of Lemma \ref{FedericoLemma} with $H=G_2$ and $K=F_2$, $a=b=2$ and $c=4$, $d=0$. So $b\neq 0$ and $c+md\geq a$, and the Lemma applies. The correction term is $ab=4$ for each of the $81$ base-points. Thus \[ T^2- \widehat{M}^2=4\cdot 81. \] The degree of the canonical map is therefore given by \[ \deg(\Phi_{K_S})=\frac{1}{54}\widehat{M}^2=\frac{1}{54}\left(T^2 - (T^2-\widehat{M}^2)\right)=\frac{1}{54}\left(54 \cdot 24- 4\cdot 81\right)=18. \] We now examine the second surface in our table. Here the subsystem $\vert T\vert $ is spanned by: \[ D_1:= 2F_2+G_0+G_1+G_3, \quad D_2 :=2F_3+2G_0+2G_1 \quad \makebox{and} \quad D_3:= F_2+2G_2+\Delta, \] where $\Delta:=\lbrace \zeta_3x_0y_0^2-x_1y_1^2=0\rbrace$. The (set-theoretical) base locus is \[ F_2\cap G_0, F_2\cap G_1, \quad \Delta \cap G_0, \Delta\cap G_1, \quad \makebox{and} \quad \Delta \cap F_3\cap G_3. \] We remark that the remaining pieces of the base locus are empty: such points would belong to some $F_i\cap F_j$ or $G_i\cap G_j$, and we have already mentioned that these curves are pairwise disjoint.
We determine the correction term to the self-intersection number for each of these base-points of $\vert T\vert$. We consider first the $81$ points $F_2\cap G_i$, for $i=0,1$. Here $F_2$ and $G_i$ intersect transversally at each of them. Around one of these points, the divisors $D_k$ are given by $G_i+2F_2$, $2G_i$ and $F_2$. We are in the situation of the Lemma with $H=G_i$ and $K=F_2$, $a=d=2$ and $b=c=1$. Hence $d\geq b$ and the Lemma applies, which yields $ab=2$ as correction term. We move on to the $81$ base-points $\Delta\cap G_i$. The local coordinates around one of these points are $X:=x_j/x_i$ and $Y:=y_i/y_j$, where $j=0,1$ and $j\neq i$. So the divisors $D_k$ are respectively given by \[ \lbrace Y=0\rbrace , \qquad 2\lbrace Y=0 \rbrace \qquad \makebox{and} \qquad \lbrace \zeta_3^{1+i}Y^2-X=0 \rbrace. \] Thus $D_1$ and $D_3$ intersect transversally at $(0,0)$, and we are once more in the situation of the Lemma. Here $H=D_3$ and $K=D_1$, $a=b=1$, $c=0$ and $d=2$. Since $d\geq b$, the Lemma applies and the correction term is $ab=1$. We consider finally the points $\Delta\cap F_3\cap G_3$. These points satisfy the equations \begin{equation}\label{equazioni} \begin{cases} y_3^3 =y_0^6+y_1^6-2\lambda y_0^3y_1^3 &=0 \\ x_3^2 =x_0^3+x_1^3 & =0 \\ \zeta_3x_0y_0^2-x_1y_1^2 & =0 \end{cases}. \end{equation} The last two equations imply that $x_1^3=-x_0^3$ and \[ x_0^3y_0^6 =(\zeta_3x_0y_0^2)^3=(x_1y_1^2)^3 =x_1^3y_1^6=-x_0^3y_1^6. \] Thus $y_0^6+y_1^6=0$ and comparing it with the first equation of \eqref{equazioni} we get $\lambda y_0^3y_1^3=0$. Therefore $\Delta\cap F_3\cap G_3$ is non-empty only if $\lambda=0$. \\ Let us suppose $\lambda \neq 0$. Then \[ T^2-\widehat{M}^2=2\cdot 2 \cdot 81+2\cdot 81=6\cdot 81, \] and the degree of the canonical map equals \[ \deg(\Phi_{K_S})=\frac{1}{54}\left(T^2 - (T^2-\widehat{M}^2)\right)=\frac{1}{54}\left(54\cdot 24- 6\cdot 81\right)=15. \] It remains to consider the case when $\lambda=0$.
The base-points $\Delta\cap F_3\cap G_3$ are the following $54$ points: \[ t_k:= \left(\left(1:-\zeta_3^{k_1}:\sqrt[3]{2}\zeta_3^{k_2}:0\right),\left(1:e^{\frac{\pi i }{6}}\zeta_6^{k_3}:\sqrt[6]{2}e^{\frac{\pi i}{12}\left(1-2[k_3]\right)}\zeta_3^{k_4}:0\right)\right), \qquad k_1+k_3\equiv 2 \mod 3, \] where $k_i=0,1,2$, for $i\neq 3$, and $k_3=0, \dots, 5$. Fix coordinates $X:=x_1/x_0+\zeta_3^2$ and $Y:=y_1/y_0-e^{\frac{\pi i }{6}}$ around one of these points, for example the one with $k=(2,0,0,0)$. The divisors $D_k$ are locally given by \[ \lbrace Y=0 \rbrace, \qquad 2\lbrace X=0 \rbrace \qquad \makebox{and} \qquad \lbrace Y(2e^{\frac{\pi i 5}{6}}+Y-2e^{\frac{\pi i 5}{6}}X-XY)=0\rbrace=\lbrace Y=0\rbrace. \] In this case $H=\{X=0\}$, $K=\{Y=0\}$, $a=2$, $b=d=1$ and $c=0$. The correction term is $ab=2$. \\ Hence \[ T^2-\widehat{M}^2=2\cdot 2 \cdot 81+2\cdot 81+2 \cdot 54=6\cdot 81+2\cdot 54. \] The degree of the canonical map is therefore given by \[ \deg(\Phi_{K_S})=\frac{1}{54}\left(T^2 - (T^2-\widehat{M}^2)\right)=\frac{1}{54}\left(54\cdot 24- 6\cdot 81-2\cdot 54\right)=13. \] We leave it to the reader to verify by the same approach that the degrees of the canonical maps of the remaining two surfaces are as stated in the table. \end{proof} \noindent \bigskip \bigskip
<?php

namespace Deployer\Config;

class RunList implements \Iterator
{
    protected $current = 0;

    /** @var Command[] */
    protected $commands = [];

    public function addCommand(Command $action)
    {
        $this->commands[] = $action;
    }

    public function rewind()
    {
        $this->current = 0;
    }

    public function current()
    {
        return $this->commands[$this->current];
    }

    public function key()
    {
        return $this->current;
    }

    public function next()
    {
        ++$this->current;
    }

    public function valid()
    {
        return isset($this->commands[$this->current]);
    }
}
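Because the class implements `\Iterator`, a `foreach` loop drives `rewind`/`valid`/`current`/`key`/`next` automatically. The snippet below is a self-contained usage sketch: the `Command` class and the in-file `RunList` copy are minimal stand-ins mirroring the code above so the example runs on its own, not the real Deployer classes.

```php
<?php
// Illustrative stand-ins; the real Deployer\Config classes differ.
class Command
{
    private $name;
    public function __construct($name) { $this->name = $name; }
    public function name() { return $this->name; }
}

class RunList implements \Iterator
{
    protected $current = 0;
    protected $commands = [];
    public function addCommand(Command $action) { $this->commands[] = $action; }
    public function rewind(): void { $this->current = 0; }
    public function current(): mixed { return $this->commands[$this->current]; }
    public function key(): mixed { return $this->current; }
    public function next(): void { ++$this->current; }
    public function valid(): bool { return isset($this->commands[$this->current]); }
}

$list = new RunList();
$list->addCommand(new Command('build'));
$list->addCommand(new Command('deploy'));

$names = [];
foreach ($list as $index => $command) {   // foreach calls the Iterator methods
    $names[$index] = $command->name();
}
```

Iteration stops as soon as `valid()` returns `false`, i.e. past the last added command.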
San Francisco Alternative Dispute Resolution Lawyer Demler, Armstrong & Rowland, LLP Steven A. Block blo@darlaw.com Dalen Saludes sal@darlaw.com 101 Montgomery Street Consult with a Proven San Francisco Alternative Dispute Resolution Lawyer The proven San Francisco alternative dispute resolution lawyers at Demler, Armstrong & Rowland, LLP have experience resolving cases in multiple industries using methods of alternative dispute resolution such as mediation and arbitration in California. San Francisco alternative dispute resolution attorneys are knowledgeable in all areas of general alternative dispute resolution law, including but not limited to civil appeals in San Francisco, California. Clients will have the confidence of knowing that their case is being handled by an experienced and knowledgeable San Francisco alternative dispute resolution lawyer. Alternative dispute resolution (ADR) is an approach for resolving disputes outside the judicial system of state or federal courts, commonly in the form of arbitration or mediation. ADR can also include negotiation, collaborative law and conciliation. Mediation is considered among the least formal alternatives to litigation; it involves an impartial third party (typically a qualified attorney or retired judge experienced in negotiations) who intervenes to help the parties reach a settlement of the dispute. The San Francisco alternative dispute resolution attorneys have experience representing clients in cases involving conciliation, among other matters. The terms "arbitration" and "mediation" are sometimes used interchangeably, but this mixing of terminology is careless and inaccurate. While the mediator suggests possible solutions to the disputing parties, the arbitrator makes a final decision on the labor dispute which is binding on the parties.
San Francisco Arbitration Lawyer The San Francisco arbitration lawyer adheres to the process of arbitration, which is the procedure by which parties agree to submit their disputes to an independent neutral third party, known as an arbitrator, who considers arguments and evidence from both sides, then hands down a final and binding decision. This alternative, which can be used to adjudicate business-to-business, business-to-employee, or business-to-customer disputes, can utilize a permanent San Francisco arbitrator, an independent San Francisco arbitration lawyer professional selected by the two parties to resolve a particular grievance, or a selected San Francisco arbitrator through the procedures of the AAA or FMCS. A board of arbitrators can also be used in a hearing. San Francisco Mediation Lawyer In contrast to arbitration, San Francisco mediation lawyers utilize a process whereby the parties involved utilize an outside party to help them reach a mutually agreeable settlement. Rather than dictate a solution to the dispute between labor and management, the mediator—who maintains scrupulous neutrality throughout—suggests various proposals to help the two parties reach a mutually agreeable solution. In mediation, the various needs of the conflicting sides of an issue are identified, and ideas and concepts are exchanged until a viable solution is proposed by either of the parties or the San Francisco mediator. Rarely does the mediator exert pressure on either party to accept a solution. Instead, the San Francisco mediation lawyer professional's role is to encourage clear communication and compromise in order to resolve the dispute. Mediation can be a tremendously effective tool in resolving disputes without destroying business relationships. It allows parties to work toward a resolution out of the public eye (the courts) without spending large sums on legal expenses. 
Trusted San Francisco Alternative Dispute Resolution Law Firm The San Francisco alternative dispute resolution attorneys at Demler, Armstrong & Rowland, LLP are distinguished by a history of successful alternative dispute resolution claim recoveries. If you or your organization has received an unfavorable ruling in a prior trial, contact the San Francisco alternative dispute resolution lawyers at Demler, Armstrong & Rowland, LLP in California. Law Firm Contact: John R. Brydon Email: bry@darlaw.com Website: www.darlaw.com
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
4,262
Florida Panel Awards Former U.S. Congressman $852,000 For Losses In Alleged Ponzi Scheme After 42 days of hearings spanning the period from January 2011 to March 2012, a Financial Industry Regulatory Authority ("FINRA") arbitration panel has awarded former U.S. Congressman Alan Grayson ("Grayson") $852,000 for losses sustained in an alleged Ponzi scheme. The Award against Wachovia Securities, Inc., now known as Wells Fargo Advisors ("Wachovia"), was for far less than Grayson requested. The Statement of Claim alleged damages of over $30 million relating to an alleged "stock loan" Ponzi scheme involving Derivium Capital LLC and Derivium Capital (USA), Inc. (collectively "Derivium"). Grayson, a Congressman from Orlando, Florida from 2008 to 2010, claimed that hundreds of investors lost hundreds of millions of dollars in the alleged Derivium scheme. According to Grayson, Derivium touted an investment strategy that allowed people to offer stock as collateral on loans, receiving up to 90 percent of the value of the stock, with the option to reclaim the stock later. Grayson alleged that Derivium quickly sold the stock through Wachovia even though the clients were not in default, then utilized the proceeds from the Wachovia stock sales to fund additional loans and pay earlier investors, in typical Ponzi fashion. Now-defunct Derivium Capital filed for bankruptcy in 2005. Grayson has won judgments against Derivium but has been unable to collect on them. Grayson filed a FINRA claim against Wachovia in 2007 alleging that Wachovia played a role in Derivium's scheme. According to the FINRA Award, the arbitrators found Wachovia liable for "aiding and abetting [Derivium's] breach of fiduciary duty." The Florida securities lawyers at McCabe Rabin, P.A. represent investors nationwide in FINRA arbitration matters. 
Investors nationwide who have incurred recoverable investment losses due to specific failures by stockbrokers and brokerage firms, or as a result of a Ponzi scheme, may contact the Florida securities lawyers at McCabe Rabin, P.A. for a free and confidential consultation by calling toll free at 877.915.4040 or by e-mail to kelly@mccaberabin.com. By McCabe Rabin, P.A. | Posted on April 24, 2012
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
7,580
La Paz () is the capital city of the La Paz Department of Honduras. The town, founded in 1792, has a population of 30,020 (2020 est.). La Paz is located 750m (2461 feet) above sea level on the Comayagua River near the Cordillera de Montecillos in an area that has mountainous terrain with thick jungle cover. History The town dates back to 1750 when two Spanish colonies existed in the area. The town's title was given on 14 September 1848, when the name "La Paz" was officially recognized by a decree from Comayagua; in 1861, it was given the status of a city, and in 1869 it was made a departmental seat. Demographics At the time of the 2013 Honduras census, La Paz municipality had a population of 43,980. Of these, 91.32% were Mestizo, 4.78% White, 2.42% Indigenous (2.19% Lenca), 1.32% Black or Afro-Honduran and 0.16% others. Economy Major industries in and around the city include henequen and coffee farming, cattle raising, timber processing, tanning, distilling and some mining. Culture The festival of the "Virgen de los Dolores" is held in November. The local cultural center is located in a 19th-century house and has a collection of paintings and cultural objects that date to the 19th century period. It also organizes activities throughout the year. Sports The local football team, Municipal Paceño, play their home games at the Estadio Roberto Suazo Cordoba. In summer 2013, they were relegated to the third tier of Honduran football. References Municipalities of the La Paz Department (Honduras)
ruby — Ruby access words in string

Question:

Tag: ruby

I don't understand the best method to access a certain word by its number in a string.

I tried using [] to access a word but instead it returns a letter.

puts s
# => I went for a walk
puts s[3]
# => e

Answer:

What you are doing accesses the fourth character of the String s.

Split the string into an array and then access the fourth element as follows.

puts s.split[3]

Note: Calling split without parameters separates the string by whitespace.

Edit: Fixing indexes. The index starts from 0, so s.split[3] will access the fourth element.
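The accepted answer above can be packaged as a small runnable sketch; the `word_at` helper name is ours, not from the thread.

```ruby
# Word access by position: split on whitespace, then index the array.
def word_at(sentence, index)
  sentence.split[index] # returns nil when the index is out of range
end

sentence = "I went for a walk"
word_at(sentence, 3)  # => "a"  (indices start at 0)
word_at(sentence, 4)  # => "walk"
```

Returning `nil` for an out-of-range index mirrors `Array#[]`, which is usually preferable to raising for this kind of lookup.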
//////////////////////////////////////////////////////////////////////////////////// /// /// \file reportimage.cpp /// \brief This file contains the implementation of a JAUS message. /// /// <br>Author(s): Daniel Barber /// Created: 21 January 2010 /// Copyright (c) 2010 /// <br>Applied Cognition and Training in Immersive Virtual Environments /// <br>(ACTIVE) Laboratory /// <br>Institute for Simulation and Training (IST) /// <br>University of Central Florida (UCF) /// <br>All rights reserved. /// <br>Email: dbarber@ist.ucf.edu /// <br>Web: http://active.ist.ucf.edu /// /// Redistribution and use in source and binary forms, with or without /// modification, are permitted provided that the following conditions are met: /// * Redistributions of source code must retain the above copyright /// notice, this list of conditions and the following disclaimer. /// * Redistributions in binary form must reproduce the above copyright /// notice, this list of conditions and the following disclaimer in the /// documentation and/or other materials provided with the distribution. /// * Neither the name of the ACTIVE LAB, IST, UCF, nor the /// names of its contributors may be used to endorse or promote products /// derived from this software without specific prior written permission. /// /// THIS SOFTWARE IS PROVIDED BY THE ACTIVE LAB''AS IS'' AND ANY /// EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED /// WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE /// DISCLAIMED. 
IN NO EVENT SHALL UCF BE LIABLE FOR ANY /// DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES /// (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; /// LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND /// ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT /// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS /// SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. /// //////////////////////////////////////////////////////////////////////////////////// #include "jaus/extras/video/reportimage.h" using namespace JAUS; //////////////////////////////////////////////////////////////////////////////////// /// /// \brief Constructor, initializes default values. /// /// \param[in] src Source ID of message sender. /// \param[in] dest Destination ID of message. /// //////////////////////////////////////////////////////////////////////////////////// ReportImage::ReportImage(const Address& dest, const Address& src) : Message(REPORT_IMAGE, dest, src) { mCameraID = 0; mFormat = Image::RAW; mFrameNumber = 0; } //////////////////////////////////////////////////////////////////////////////////// /// /// \brief Copy constructor. /// //////////////////////////////////////////////////////////////////////////////////// ReportImage::ReportImage(const ReportImage& message) : Message(REPORT_IMAGE) { mCameraID = 0; mFormat = Image::RAW; mFrameNumber = 0; *this = message; } //////////////////////////////////////////////////////////////////////////////////// /// /// \brief Destructor. /// //////////////////////////////////////////////////////////////////////////////////// ReportImage::~ReportImage() { } //////////////////////////////////////////////////////////////////////////////////// /// /// \brief Writes message payload to the packet. /// /// Message contents are written to the packet following the JAUS standard. 
/// /// \param[out] packet Packet to write payload to. /// /// \return -1 on error, otherwise number of bytes written. /// //////////////////////////////////////////////////////////////////////////////////// int ReportImage::WriteMessageBody(Packet& packet) const { int total = 0; int expected = BYTE_SIZE*2 + UINT_SIZE*2 + mImage.Length(); total += packet.WriteByte(mCameraID); total += packet.Write(mFrameNumber); total += packet.WriteByte((Byte)mFormat); total += packet.Write(mImage.Length()); total += packet.Write(mImage); return total == expected ? total : -1; } //////////////////////////////////////////////////////////////////////////////////// /// /// \brief Reads message payload from the packet. /// /// Message contents are read from the packet following the JAUS standard. /// /// \param[in] packet Packet containing message payload data to read. /// /// \return -1 on error, otherwise number of bytes read. /// //////////////////////////////////////////////////////////////////////////////////// int ReportImage::ReadMessageBody(const Packet& packet) { int total = 0; int expected = BYTE_SIZE*2 + UINT_SIZE*2; unsigned int length = 0; total += packet.Read(mCameraID); total += packet.Read(mFrameNumber); total += packet.Read((Byte &)mFormat); total += packet.Read(length); if(length > 0) { total += packet.Read(mImage, length); expected += length; } return total == expected ? total : -1; } //////////////////////////////////////////////////////////////////////////////////// /// /// \brief Clears message payload data. /// //////////////////////////////////////////////////////////////////////////////////// void ReportImage::ClearMessageBody() { mFormat = Image::RAW; mCameraID = 0; mFrameNumber = 0; mImage.Clear(); } //////////////////////////////////////////////////////////////////////////////////// /// /// \return True if the contents of the message will be larger than /// maximum payload size, otherwise false.
/// //////////////////////////////////////////////////////////////////////////////////// bool ReportImage::IsLargeDataSet(const unsigned int maxPayloadSize) const { // Payload size must match WriteMessageBody: two bytes, two unsigned ints, plus image data. unsigned int size = BYTE_SIZE*2 + UINT_SIZE*2 + mImage.Length(); return size > maxPayloadSize; } //////////////////////////////////////////////////////////////////////////////////// /// /// \brief Sets equal to. /// //////////////////////////////////////////////////////////////////////////////////// ReportImage& ReportImage::operator =(const ReportImage& message) { if(this != &message) { CopyHeaderData(&message); mCameraID = message.mCameraID; mFrameNumber = message.mFrameNumber; mFormat = message.mFormat; mImage = message.mImage; } return *this; } /* End of File */
<?php defined('BASEPATH') OR exit('No direct script access allowed'); ?> <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <title>JOBS</title> <?php includeJQuery(); ?> <?php includeBootstrap(); ?> <style type="text/css"> .dotted-bottom { border-bottom: 2px dotted gray; padding-top: 15px; padding-bottom: 15px; } .sectionTitle { padding-bottom: 15px; } </style> <script type="text/javascript"> var pageController = "Control_proveedor_ctrl"; </script> <script type="text/javascript" src="<?php echo base_url(); ?>includes/js/utilitiesJS.js"></script> <script type="text/javascript" src="<?php echo base_url(); ?>includes/js/JSControllers/Alta_cliente_JS.js"></script> <script type="text/javascript" src="<?php echo base_url(); ?>includes/js/JSControllers/Alta_cliente_DireccionFiscal_JS.js"></script> <script type="text/javascript" src="<?php echo base_url(); ?>includes/js/JSControllers/Alta_cliente_DireccionOperativa_JS.js"></script> <script type="text/javascript" src="<?php echo base_url(); ?>includes/js/JSControllers/Alta_cliente_Banco_JS.js"></script> <script type="text/javascript" src="<?php echo base_url(); ?>includes/js/JSControllers/Alta_cliente_agenda_JS.js"></script> <script type="text/javascript" src="<?php echo base_url(); ?>includes/js/JSControllers/Alta_cliente_Perfil_JS.js"></script> <script type="text/javascript" src="<?php echo base_url(); ?>includes/js/JSControllers/Alta_cliente_Servicio_JS.js"></script> <script type="text/javascript" src="<?php echo base_url(); ?>includes/js/JSControllers/Alta_cliente_Commons_JS.js"></script> </head> <body> <?=$menu ?> <div class="container"> <div class="row"> <div class="col-xs-12 col-sm-12 col-md-12 col-lg-12"> <div class="form-group"> <label for="cCliente">Proveedor en edición</label> <select id="cCliente" class="form-control"> <option value="-1">Ninguno</option> <?php foreach($proveedores as $proveedor){ ?> <option value="<?php echo $proveedor->id; ?>"><?php echo $proveedor->nombre; ?></option> <?php } ?> 
</select> </div> </div> </div> <div class="row" id="rowNuevoCliente"> <div class="col-xs-12 col-sm-12 col-md-12 col-lg-12"> <form action = "Alta_cliente_ctrl/nuevoCliente" method = "post" id = "form_alta" > <div class="form-group"> <label for="nombre">Nombre comercial</label> <input type="text" name="nombre" placeholder="Nombre comercial" class="form-control" required> </div> <div class="form-group"> <input type="submit" value="Crear" class="form-control btn btn-primary"> </div> </form> </div> </div> <!-- Inicia la sección de edición --> <span id="rowEdicionCliente"> <div class="row"> <div class="col-xs-12 col-sm-12 col-md-12 col-lg-12"> <form action = "Alta_cliente_ctrl/editarCliente" method = "post" id = "form-edita" > <div class="form-group"> <div class="row"> <div class="col-xs-12 col-sm-6 col-md-8 col-lg-8"> <label for="nombre">Nombre comercial</label> <input type="text" name="nombre" id="nombre" placeholder="Nombre comercial" class="form-control" required> </div> <div class="col-xs-12 col-sm-6 col-md-4 col-lg-4"> <label id="labelEstado">Estado: Activo</label> <button style="width: 100%;" id="btn-estado" class="btn btn-danger">Inactivar</button> </div> </div> </div> <input type="hidden" name="id" value="-1"> <input type="hidden" name="estadoActivo" value="1"> <div class="form-group"> <input type="submit" value="Actualizar" class="form-control btn btn-info"> </div> </form> </div> </div> <div class="row" style="background: rgb(238, 238, 238) none repeat scroll 0% 0%; border: 2px solid gray;"> <div class="col-xs-12 col-sm-12 col-md-12 col-lg-12"> <nav class="navbar navbar-default" style="background: none; box-shadow: none; border-color: transparent; margin: 0px;"> <div class="container-fluid"> <ul class="nav navbar-nav" id="main-menu-cliente"> <li> <a href="#" id="btn-direcciones-fiscales"> <span class="glyphicon glyphicon-home"></span> Direcciones fiscales </a> </li> <li> <a href="#" id="btn-direcciones-operativas"> <span class="glyphicon glyphicon-home"></span> 
Direcciones operativas </a> </li> <li> <a href="#" id="btn-bancos"> <span class="glyphicon glyphicon-usd"></span> Bancos </a> </li> <li> <a href="#" id="btn-agenda"> <span class="glyphicon glyphicon-phone-alt"></span> Agenda </a> </li> <li> <a href="#" id="btn-perfiles"> <span class="glyphicon glyphicon-user"></span> Perfiles </a> </li> <li> <a href="#" id="btn-servicios"> <span class="glyphicon glyphicon-th-list"></span> Servicios </a> </li> </ul> </div> </nav> </div> </div> <!-- Inicia sección de información financiera --> <?=$form_seccion1; ?> <?=$form_agenda; ?> </span> <!-- FIN ROW CLIENTE EDICIÓN --> </div> </body> </html>
\section{Introduction} The COVID-19 pandemic has had a significant impact on individuals, on society and on almost all sectors of the economy. This also applies to the insurance industry as well as to the financial market through the drop in asset prices. The COVID-19 crisis is one example of an event with an impact on both financial and insurance risks, which shows that it makes sense to model interdependencies between the two. This is also suggested by Wang et al.\ \cite{Wang2018}, who point out the following two reasons: First, (re)insurance companies transfer their insurance risks to the capital market by using insurance-linked securities, like catastrophe bonds, for instance. As a result, an insurer invested in the financial market is exposed to the insurance risks exported by another insurance company to the financial market, and there may be dependencies among these risks, for example through natural catastrophes. A second source of interconnectedness between financial and insurance risks lies in insurance contracts for financial guarantees, which can cause systemic risk. Whereas it is by now common in the actuarial literature to model dependencies between different lines of business, the number of papers which connect the evolution of the financial market to the occurrence of claims is small. A widespread approach to obtain dependent business lines is via {\em common shock} models. In general this means that there is an additional Poisson process which produces joint claims in all or many business lines. Papers which have used this approach include, among others, \cite{BaeuerleGruebel2005,GBC12,YuenLiangZhou2015,BLX16,BiChen2019,BaeuerleLeimcke2020}. The first two papers in this list deal with modeling and computational aspects of performance measures, whereas the last four use these models to solve stochastic control problems for optimal reinsurance and investment for different criteria and for diffusion as well as jump models.
The advantage of modeling dependence in this way is that we obtain an immediate interpretation for the interdependence. Since the papers \cite{YuenLiangZhou2015,BLX16,BiChen2019,BaeuerleLeimcke2020} consider a financial market which is independent of the claim generation mechanism, the control problems for investment and reinsurance decompose, which of course makes it easier to obtain explicit solutions. Another popular approach to model dependence between business lines is to use {\em L\'evy copulas}, see among others \cite{bk05,bb11,atwy14} and \cite{acw11} for an overview. This approach is elegant from a mathematical point of view, but its interpretation is less clear than for common shock models. Other approaches include the construction of dependence via {\em interacting intensities} (see \cite{BaeuerleGruebel2008}) or a {\em common subordinator} (see \cite{SchererSelch2018}). The first contribution of this paper is to model a dependence between the financial market and the insurance business for the joint problem of optimal investment and (proportional) reinsurance. To keep the model simple we restrict ourselves here to one business line for the insurance risk, but the model can be extended in a straightforward way. A paper which connects financial and insurance risk is \cite{Wang2018}, where a discrete-time risk model is considered. The authors there assume a joint distribution for the claim size and the discount factor at each point in time and are interested in the asymptotics of the finite-time ruin probability. They do not consider a control. The second paper is the recent one by \cite{bs20}, who create the dependence by a common factor process which influences the drift and volatility of the risky asset as well as the size and risk fluctuations of the insurance risk process. They consider a diffusion model and a general utility function and obtain explicit solutions in some special cases.
In contrast to their approach, we assume here that in ``normal'' times we have independence and that dependence is created by major events like catastrophes. More precisely, whenever the claim size exceeds a certain threshold, we assume that this corresponds to a catastrophe and at the same time implies a drop of the risky asset by a random proportion. What turns out to be very surprising is the fact that creating only a small dependence already has a pronounced effect on the optimal investment strategy. Our second contribution is that we allow the claim size distribution to be learned. In most articles, it is assumed that the insurer has complete knowledge of the model. However, in reality, insurance companies operate in a setting with partial information. That is, with regard to the net claim process, only the claim arrival times and magnitudes are directly observable. Therefore we study the optimal investment and reinsurance problem in a partial information framework. More precisely, we consider a Bayesian approach and restrict the learning to the claim size distribution, which is assumed to belong to a finite set of possible distributions (for learning the intensity see e.g.\ \cite{BaeuerleLeimcke2020}). A paper with learning in an actuarial context is \cite{LST14}, where the dividend payment is optimized and the drift of the risky asset has to be learned; the model there is a diffusion model. The papers \cite{s15,LiangBayraktar2014} both use hidden Markov models, which means that a latent hidden factor influences the model parameters. In \cite{s15} again the dividend has to be maximized in a diffusion setting with unobservable drift. Based on the suggestion in \cite[p.\,165]{AsmussenAlbrecher2010}, the authors in \cite{LiangBayraktar2014} consider the optimal investment and reinsurance problem for maximizing exponential utility under the assumption that the claim intensity and loss distribution depend on the states of the hidden Markov chain.
The aim in our paper is to maximize the expected exponential utility of the insurer's capital at a fixed time point. Note that this is an interesting optimization criterion which interpolates between a mean-variance criterion and a robust approach (for details see \cite{BaeuerleLeimcke2020}). The control consists of (proportional) reinsurance and investment into two assets. The baseline financial market is given by a Black-Scholes model and the insurance model is a Cram\'er-Lundberg model. As explained before, as soon as the claim size exceeds a threshold the risky asset drops by a random proportion. Using stochastic control methods we are able to characterize the optimal investment and reinsurance strategy via the Hamilton-Jacobi-Bellman (HJB) equation. Since the value function may not be differentiable everywhere we use the Clarke gradient as a generalized gradient in our analysis. In the case of known model data we get explicit optimal investment and reinsurance strategies and are able to discuss the influence of the threshold level which creates the dependency. The paper is organized as follows: In the next section we introduce our basic model which consists of the claim arrival process, the financial market, the strategies and the optimization problem. In Section \ref{sec:learn} we state the model with learning and explain how we can transform the model with unknown claim size distribution to a model with known data. The standard approach here is to include a filter process which keeps track of all relevant observations. Section \ref{sec:sol} contains the solution. Since we can show that the value function possesses certain Lipschitz properties, we prove that it is a solution of a generalized HJB equation in which a derivative is replaced by the generalized Clarke gradient. Thus, we are also able to characterize an optimal pair of investment and reinsurance strategies.
Due to the dependence between the financial market and the claim process, these strategies are now rather complicated. In Section \ref{sec:comp} we therefore first compare the optimal strategy to the optimal one in a model with independence between the financial market and the claim occurrence. It will turn out that the insurance company invests less when dependence shows up. Indeed, a numerical example reveals the magnitude of the impact of the threshold which creates the dependence. We can show that even large thresholds, which create only a minimal dependence, have a huge impact on the investment strategy. Second, we compare the optimal investment strategy in our model to the optimal one in a model with known data where the jump size distribution exactly equals our expectation. We will see that in the latter model the invested amount provides an upper bound on what is invested in the more complicated model. In the appendix we summarize additional information on the Clarke gradient and provide detailed calculations and proofs for our main theorems. \section{The Optimal Investment and Reinsurance Model}\label{sec:model} We consider an insurance company with the aim of maximizing the expected utility of the terminal surplus by choosing optimal investment and reinsurance strategies. The processes $\Psi$ and $W$ below are defined on a common probability space $(\Omega,\mathcal{F},\mathbb{P})$. \subsection{The aggregated claim amount process} In the following, let $N=(N_t)_{t\ge0}$ be a Poisson process with intensity $\lambda>0$. We interpret the jump times of $N$, denoted by $(T_n)_{n\in\mathbb{N}}$, as the arrival times of insurance claims. We assume that $(Y_n)_{n\in\mathbb{N}}$ is a sequence of positive random variables, where $Y_n$ describes the claim size at $T_n$. The insurer faces uncertainty about the claim size distribution. This is taken into account by a Bayesian approach.
Let $\{F_\vartheta:\vartheta\in\Theta\}$, $\Theta\subset\mathbb{R}^n$, be a family of distributions on $(0,\infty)$, where $\vartheta$ is unknown. We view $\vartheta$ as a random variable taking values in $\Theta=\{1,\ldots,m\}$ for some $m\in\mathbb{N}$ with initial distribution $\pi_\vartheta(j)$, $j=1,\ldots,m$. Moreover, we suppose that $F_j$ is absolutely continuous with density $f_j$, where \begin{equation*} M_{j}(z):= \int_{(0,\infty)} e^{zy}f_j(y)dy<\infty,\quad z\in\mathbb{R},\quad j=1,\ldots,m. \end{equation*} The sequence $Y_1,Y_2,\ldots$ is assumed to be conditionally independent and identically distributed according to $F_\vartheta$ given $\vartheta$ as well as independent of $(T_n)_{n\in\mathbb{N}}$. The aggregated claim amount process, denoted by $(S_t)_{t\ge0}$, is given by \begin{equation*} S_t = \sum_{i=1}^{N_t} Y_i = \int_0^t\int_{(0,\infty)} y\, \Phi(ds,dy), \end{equation*} where $\Phi:=(T_n,Y_n)_{n\in\mathbb{N}}$ is the $(0,\infty)$-Marked Point Process which carries the information about the claim arrival times and amounts. \subsection{The financial market} The surplus will be invested by the insurer into a financial market, where it is assumed that there exists one risk-free asset and one risky asset. The price process of the \emph{risk-free asset}, denoted by $B=(B_t)_{t\ge0}$, is given by \begin{equation*} d B_t = rB_t dt, \quad B_0=1, \end{equation*} with \emph{risk-free interest rate} $r\in\mathbb{R}$. That is, $B_t = e^{rt}$ for all $t\ge0$. The price of the risky asset drops by a random value at the claim arrival time $T_n$ if the corresponding insurance claim $Y_n$ exceeds a fixed threshold $L>1$. We assume that $(Z_n)_{n\in\mathbb{N}}$ is a sequence of independent and identically distributed random variables taking values in $(0,1)$ with distribution $Q$. It is supposed that $(Z_n)_{n\in\mathbb{N}}$ is independent of $(T_n)_{n\in\mathbb{N}}$ and $(Y_n)_{n\in\mathbb{N}}$.
The random variable $Z_n$ describes the relative jump height downwards of the risky asset at time $T_n$, if $Y_n>L$. From now on, we set $\Psi := (T_n,(Y_n,Z_n))_{n\in\mathbb{N}}$ and let $E:=(0,\infty)\times(0,1)$. That is, $\Psi$ is the $E$-Marked Point Process which contains the information of the claim arrival times, claim sizes and potential relative jumps downwards of the risky asset. The filtration generated by $\Psi$ is denoted by $\mathfrak{F}^\Psi=(\mathcal{F}_t^\Psi)_{t\ge0}$. The price process of the risky asset evolves according to a geometric Brownian motion between the jumps. That is, the price process of the \emph{risky asset}, denoted by $P=(P_t)_{t\ge0}$, is characterized by \begin{equation*} d P_t = P_{t-}\bigg(\mu dt + \sigma d W_t - \int_E z \mathds{1}_{(L,\infty)}(y)\Psi(dt,d(y,z))\bigg) , \quad P_0=1, \end{equation*} where $\mu\in\mathbb{R}$ and $\sigma>0$ are constants describing the drift and volatility of the risky asset, respectively, and $(W_t)_{t\ge0}$ is a standard Brownian motion which is independent of $\seq{T}$, $\seq{Y}$ and $\seq{Z}$. Since the price process of the risky asset is observable, the filtration generated by $P$, denoted by $\mathfrak{F}^P=(\mathcal{F}_t^P)_{t\ge0}$, is known by the insurer. Throughout this work, ${\mathfrak G}=(\mathcal{G}_t)_{t\ge0}$ denotes the observable filtration of the insurer which is given by \begin{equation*} {\mathcal G}_t = {\mathcal F}^P_t\vee{\mathcal F}_t^\Psi,\quad t\ge0. \end{equation*} \subsection{The strategies} We assume that the wealth of the insurance company is invested into the previously described financial market. \begin{definition}\label{def:investment} An \emph{investment strategy}, denoted by $\xi=(\xi_t)_{t\ge0}$, is an $\mathbb{R}$-valued, c\`{a}dl\`{a}g and ${\mathfrak G}$-predictable process such that $| \xi_t|\le K$ for some $0<K<\infty$. $\xi_t$ is the amount of money invested at time $t$. \end{definition} The restriction $| \xi_t|\le K$ is only a technical tool. 
We will make $K$ sufficiently large later, s.t.\ the optimal $\xi_t^\star$ is the same as in the unrestricted problem. We further assume that the first-line insurer has the possibility to take a proportional reinsurance. Therefore, the \emph{part of the insurance claims paid by the insurer}, denoted by $h(b,y)$, satisfies \begin{equation*} h(b,y) = b\cdot y \end{equation*} with \emph{retention level} $b\in[0,1]$ and \emph{insurance claim} $y\in(0,\infty)$. Here we suppose that the insurer is allowed to reinsure a fraction of her/his claims with retention level $b_t\in[0,1]$ at every time $t$. \begin{definition}\label{def:reinsurance} A \emph{reinsurance strategy}, denoted by $b=(b_t)_{t\ge0}$, is a $[0,1]$-valued, c\`{a}dl\`{a}g and ${\mathfrak G}$-predictable process. \end{definition} We denote by ${\mathcal U}[t,T]$ the set of all admissible strategies $(\xi,b)$ on $[t,T]$. We assume that the policyholder's payments to the insurance company are modelled by a fixed \emph{premium (income) rate} $c=(1+\eta)\kappa$ with safety loading $\eta>0$ and fixed constant $\kappa>0$, which means that premiums are calculated by the expected value principle. If the insurer chooses retention levels less than one, then the insurer has to pay premiums to the reinsurer. The \emph{part of the premium rate left to the insurance company} at retention level $b\in[0,1]$, denoted by $c(b)$, is $c(b) = c - \delta(b)$, where $\delta(b)$ denotes the \emph{reinsurance premium rate}. We say $c(b)$ is the \emph{net income rate}. Moreover, the net income rate $c(b)$ should increase in $b$, which is fulfilled by setting $\delta(b) := (1-b)(1+\theta)\kappa$ with $\theta>\eta$ which represents the safety loading of the reinsurer. Therefore \begin{equation}\label{eq:cb} c(b) = (1+\eta)\kappa - (1-b)(1+\theta)\kappa = (\eta-\theta)\kappa + (1+\theta)\kappa\,b. \end{equation} This reinsurance premium model is used e.g.\ in \cite{ZhuShi2019}. 
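The premium split in \eqref{eq:cb} can be sanity-checked with a short numerical sketch. The parameter values below ($\eta=0.2$, $\theta=0.4$, $\kappa=10$) are purely illustrative and not taken from the paper:

```python
def net_income_rate(b, eta, theta, kappa):
    """Net premium rate c(b) left to the insurer at retention level b:
    c(b) = (eta - theta)*kappa + (1 + theta)*kappa * b."""
    assert 0.0 <= b <= 1.0 and theta > eta > 0 and kappa > 0
    return (eta - theta) * kappa + (1 + theta) * kappa * b

eta, theta, kappa = 0.2, 0.4, 10.0

# Full retention (b = 1) recovers the gross premium rate c = (1 + eta)*kappa,
# and c(b) is increasing in b since (1 + theta)*kappa > 0.
assert abs(net_income_rate(1.0, eta, theta, kappa) - (1 + eta) * kappa) < 1e-12
assert net_income_rate(0.5, eta, theta, kappa) < net_income_rate(0.8, eta, theta, kappa)
```

Note that for $b=0$ (full reinsurance) the net income rate $(\eta-\theta)\kappa$ is negative, since the reinsurer's safety loading $\theta$ exceeds the insurer's loading $\eta$.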
The surplus process $(X^{\xi,b}_t)_{t\ge0}$ under an admissible investment-reinsurance strategy $(\xi,b)\in{\mathcal U}[0,T]$ is given by \begin{align*} d X^{\xi,b}_t &=(X^{\xi,b}_t - \xi_t)r dt + \xi_t\bigg(\mu dt+\sigma dW_t-\int_E z\mathds{1}_{(L,\infty)}(y)\Psi(dt,d(y,z))\bigg) + c(b_t)dt - b_t dS_t \\ &= \big(rX^{\xi,b}_t + (\mu - r)\xi_t + c(b_t)\big) dt + \xi_t\sigma dW_t - \int_E\big(b_{t}y+ \xi_t z \mathds{1}_{(L,\infty)}(y)\big)\Psi(dt,d(y,z)). \end{align*} We suppose that $X^{\xi,b}_0=x_0 >0$ is the initial capital of the insurance company. \subsection{The optimization problem} Clearly, the insurance company is interested in an optimal investment-reinsurance strategy. But there are various optimality criteria to specify optimization of proportional reinsurance and investment strategies. We consider the expected utility of wealth at the terminal time $T>0$ as criterion with exponential utility function $U:\mathbb{R}\to\mathbb{R}$ \begin{equation}\label{eq:u} U(x)=-e^{-\alpha x}, \end{equation} where the parameter $\alpha>0$ measures the \emph{degree of risk aversion}. The exponential utility function is useful since by choosing $\alpha$ we can interpolate between a risk-sensitive criterion and a robust point of view as explained in \cite{BaeuerleLeimcke2020}. The case of small $\alpha $ can be seen as maximizing the expectation with a bound on the variance and the case of large $\alpha$ can be seen as a robust optimization. Next, we are going to formulate the dynamic optimization problem. We define the value functions, for any $(t,x)\in[0,T]\times\mathbb{R}$ and $(\xi,b)\in{\mathcal U}[t,T]$, by \begin{equation}\label{eq:problem} \begin{aligned} V^{\xi,b}(t,x) &:= \mathbb{E}^{t,x}\big[U(X^{\xi,b}_T)\big], \\ V(t,x) &:= \sup_{(\xi,b)\in{\mathcal U}[t,T]}V^{\xi,b}(t,x). 
\end{aligned} \end{equation} The expectation $\mathbb{E}$ is taken w.r.t.\ the probability measure $\pi_\vartheta \otimes\mathbb{P}$ and $\mathbb{E}^{t,x}$ denotes the conditional expectation given $X^{\xi,b}_t=x$. \section{A Model with Learning}\label{sec:learn} The task is to reduce the control problem~\eqref{eq:problem} with partial information within the introduced framework to one with complete information, taking the observations into account. \subsection{Filtering} By Bayes' rule, the posterior probability mass function of $\vartheta$ given the observation $\bar{Y}_n=\bar{y}_n$ with $\bar{Y}_n:=(Y_1,\ldots,Y_n)$ and $\bar{y}_n:= (y_1,\ldots,y_n)$ is \begin{equation}\label{posttheta} \mathbb{P}(\vartheta=j|\bar{Y}_n=\bar{y}_n) = \frac{\pi_\vartheta(j)\prod_{i=1}^n f_j(y_i)}{\sum_{k=1}^m\pi_\vartheta(k)\prod_{i=1}^n f_k(y_i)},\quad j=1,\ldots,m. \end{equation} However, the solution method requires a dynamic representation of this posterior probability distribution given the information up to any time $t$. To achieve this, let us introduce the following notation. Throughout this paper, we write \begin{equation*} p_j(t) = \mathbb{P}(\vartheta=j|\mathcal{F}_t^\Psi),\quad t\ge0,\quad j=1,\ldots,m. \end{equation*} Moreover, let $(p_t)_{t\ge0}$ denote the $m$-dimensional process defined by \begin{equation*} p_t:=(p_1(t),\ldots,p_m(t)),\quad t\ge0. \end{equation*} We obtain the following representation of the process $(p_t)_{t\ge0}$ from \eqref{posttheta}: \begin{equation}\label{pj} p_j(t) = \pi_\vartheta(j) + \int_0^t\int_{(0,\infty)}\bigg(\frac{p_j(s-)\,f_j(y)}{\sum_{k=1}^m p_k(s-)\,f_k(y)}-p_j(s-)\bigg)\Phi(ds,dy),\quad j=1,\ldots,m.
\end{equation} Note that $(p_t)_{t\ge0}$ is a pure jump process and the new state of $(p_t)$ at the jump time $T_n$ with jump size $Y_n$ is \begin{equation*} p_{T_n} = J\big(p_{T_n-},Y_n\big),\quad n\in\mathbb{N}, \end{equation*} where \begin{equation*} J(p,y) := \left(\frac{f_1(y)\, p_1}{\sum_{k=1}^m f_k(y)\,p_k},\ldots,\frac{f_m(y)\,p_m}{\sum_{k=1}^m f_k(y)\,p_k}\right), \end{equation*} for $p=(p_1,\ldots,p_m)\in\Delta_m:=\{x\in\mathbb{R}_+^m:\sum_{k=1}^m x_k=1\}$ and $y\in(0,\infty)$. \begin{proposition}\label{GintkernelPsi} The ${\mathfrak G}$-intensity kernel of $\Psi=(T_n,(Y_n,Z_n))$, denoted by $\hat{\nu}(t,d(y,z))$, is given by \begin{equation*} \hat{\nu}(t,d(y,z)) = \lambda\sum_{k=1}^m p_k(t)f_k(y)dyQ(dz),\quad t\ge0. \end{equation*} \end{proposition} \begin{proof} First note that $\hat{\nu}$ is a transition kernel. The ${\mathfrak G}$-intensity is derived from the ${\mathfrak G} \vee \sigma(\vartheta)$-intensity kernel $\lambda f_\vartheta(y)dyQ(dz)$ by conditioning on ${\mathcal G}_t$ (see \cite{bre}). Note here in particular that the posterior predictive distribution of the claim sizes given the observed claims up to time $t$ is $\sum_{k=1}^m p_k(t)f_k(y)dy$. \end{proof} We denote by $\hat{\Psi}(dt, d(y,z))$ the compensated random measure given by \begin{equation}\label{Psihat} \hat{\Psi}(dt, d(y,z)) := \Psi(dt, d(y,z)) - \hat\nu(t,d(y,z))dt, \end{equation} where $\hat\nu$ is defined as in Proposition~\ref{GintkernelPsi}.
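The jump update $J(p,y)$ of the filter is an ordinary Bayes update and can be illustrated with a small numerical sketch. The two exponential candidate densities and the uniform prior below are illustrative assumptions, not part of the model:

```python
import math

def filter_update(p, y, densities):
    """One jump of the filter at a claim of size y: the map J(p, y),
    i.e. p_j <- p_j * f_j(y) / sum_k p_k * f_k(y)."""
    weights = [pj * f(y) for pj, f in zip(p, densities)]
    total = sum(weights)
    return [w / total for w in weights]

# Two candidate exponential claim-size densities (means 1 and 5) and a
# uniform prior; feeding in large claims shifts mass to the heavy model.
f1 = lambda y: math.exp(-y)               # mean 1
f2 = lambda y: 0.2 * math.exp(-0.2 * y)   # mean 5
p = [0.5, 0.5]
for y in [4.0, 6.0, 5.0]:                 # observed claim sizes
    p = filter_update(p, y, [f1, f2])

assert abs(sum(p) - 1.0) < 1e-12          # p stays in the simplex Delta_m
assert p[1] > p[0]                        # posterior favours the mean-5 model
```

Between claim arrivals the filter is constant, so iterating `filter_update` over the observed claim sizes reproduces the batch posterior \eqref{posttheta}.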
Thus, we obtain the following indistinguishable representation of the surplus process $(X^{\xi,b}_t)_{t\ge0}$: \begin{equation}\label{wealth} \begin{aligned} d X^{\xi,b}_t = \bigg(&r X_t^{\xi,b} + (\mu-r)\xi_t + c(b_t) - \lambda\sum_{k=1}^m p_k(t)\big(b_t\mu_k + \xi_t \bar F_k(L)\mathbb{E}[Z]\big)\bigg)dt \\ & + \xi_t\sigma dW_t - \int_E \big(b_t y + \xi_t z \mathds{1}_{(L,\infty)}(y)\big) \hat{\Psi}(dt, d(y,z)),\quad t\ge0, \end{aligned} \end{equation} where $\mu_j:=\int_{(0,\infty)} y f_j(y)dy$, $\bar F_j$ denotes the survival function of $F_j$, $j=1,\ldots,m$, and $Z$ is a random variable with $Z\sim Z_1$. Note that all processes here are ${\mathfrak G}$-adapted. This dynamic will be one part of the reduced control model discussed in the next section. \subsection{The Reduced Control Problem} The process $(p_t)_{t\ge0}$ in \eqref{pj} carries all relevant information about the unknown parameter $\vartheta$ contained in the observable filtration ${\mathfrak G}$ of the insurer. Therefore, the state process of the reduced control problem with complete observation is the $(m+1)$-dimensional process \begin{equation*} (X^{\xi,b}_s,p_s)_{s\in[t,T]}, \end{equation*} where $(X^{\xi,b}_s)$ is given by \eqref{wealth} and $(p_s)$ is given by \eqref{pj} for some fixed initial time $t\in[0,T)$ and $(\xi,b)\in{\mathcal U}[t,T]$. We can now formulate the reduced control problem. For any $(t,x,p)\in[0,T]\times\mathbb{R}\times\Delta_m$, the value functions are given by \begin{equation}\label{P} \tag{P} \begin{aligned} V^{\xi,b}(t,x,p) &:= \mathbb{E}^{t,x,p}\big[U(X^{\xi,b}_T)\big],\\ V(t,x,p) &:= \sup_{(\xi,b)\in{\mathcal U}[t,T]}V^{\xi,b}(t,x,p), \end{aligned} \end{equation} where $ \mathbb{E}^{t,x,p}$ denotes the conditional expectation given $(X^{\xi,b}_t,p_t)=(x,p)$. 
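As a sanity check of the surplus dynamics \eqref{wealth}, consider the degenerate case with no claims ($\lambda=0$), no risky investment ($\xi\equiv0$) and full retention ($b\equiv1$), in which the dynamics reduce to the deterministic ODE $dX_t=(rX_t+c)dt$ with solution $X_T=(x_0+c/r)e^{rT}-c/r$. A minimal Euler sketch (with illustrative parameters) reproduces this closed form:

```python
import math

def simulate_surplus_deterministic(x0, r, c, T, n_steps):
    """Euler scheme for dX_t = (r X_t + c) dt, i.e. the surplus dynamics
    with lambda = 0 (no claims), xi = 0 (no risky investment, so sigma is
    irrelevant) and b = 1, where c = c(1) is the gross premium rate."""
    dt = T / n_steps
    x = x0
    for _ in range(n_steps):
        x += (r * x + c) * dt
    return x

x0, r, c, T = 1.0, 0.05, 2.0, 1.0
closed_form = (x0 + c / r) * math.exp(r * T) - c / r   # exact ODE solution
approx = simulate_surplus_deterministic(x0, r, c, T, 200_000)
assert abs(approx - closed_form) < 1e-3
```

In the general case the scheme would additionally draw claim arrivals, apply the retained claim $b_t Y_n$ and, whenever $Y_n>L$, the proportional asset drop $\xi_t Z_n$.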
An investment-reinsurance strategy $(\xi^\star,b^\star)\in{\mathcal U}[t,T]$ is optimal if $V(t,x,p) = V^{\xi^\star,b^\star}(t,x,p).$ Note that by classical filtering results we have that $V(0,x,\pi_\vartheta)=V(0,x)$ (see e.g. \cite{BaeuerleRieder2007}). \section{The Solution}\label{sec:sol} \subsection{The HJB equation}\label{sec:HJB} In a first step we derive the HJB equation for the value function $V$ using standard methods and assuming full differentiability of $V$, which results in \begin{equation}\label{HJBV} \begin{aligned} &0=\sup_{(\xi,b)\in[-K,K]\times[0,1]} \bigg\{V_t(t,x,p) - \lambda V(t,x,p) +V_x(t,x,p)\big(rx + (\mu-r)\xi+c(b)\big) \\ &+ \frac12\sigma^2V_{xx}(t,x,p)\xi^2 + \lambda \sum_{k=1}^m p_k\int_E V\big(t,x-(b y + z\xi\mathds{1}_{(L,\infty)}(y)),J(p,y)\big)f_k(y)dyQ(dz)\bigg\}. \end{aligned} \end{equation} For solving \eqref{HJBV} we apply the usual separation approach: for any $(t,x,p)\in[0,T]\times\mathbb{R}\times\Delta_m$, we assume \begin{equation}\label{separation} V(t,x,p) = -e^{-\alpha x e^{r(T-t)}}g(t,p) \end{equation} with $g\ge 0$. Plugging this ansatz into~\eqref{HJBV} yields \begin{equation}\label{HJBgdiff} \begin{aligned} 0&=\inf_{(\xi,b)\in[-K,K]\times[0,1]} \bigg\{g_t(t,p) - \lambda g(t,p) - \alpha e^{r(T-t)}g(t,p)\Big((\mu-r)\xi + c(b) - \frac12\alpha \sigma^2 e^{r(T-t)}\xi^2\Big) \\ &\quad+\lambda \sum_{k=1}^m p_k \int_0^\infty g(t,J(p,y))e^{\alpha b y e^{r(T-t)}} \int_{(0,1)} e^{\alpha \xi z \mathds{1}_{(L,\infty)}(y)e^{r(T-t)}}Q(dz) f_k(y)dy \bigg\}. \end{aligned} \end{equation} However, $V$ may fail to be differentiable w.r.t.\ $t$. Assuming $t\mapsto g(t,p)$ is Lipschitz on $[0,T]$ for all $p\in\Delta_m$, we can replace the partial derivative of $g$ w.r.t.\ $t$ by Clarke's generalized subdifferential (see appendix).
Throughout, we denote by ${\mathcal L}$ an operator acting on functions $g:[0,T]\times\Delta_m\to(0,\infty)$ and $(\xi,b)\in [-K,K]\times[0,1]$ which is defined by \begin{equation}\label{L} {\mathcal L} g(t,p;\xi,b) := - \lambda g(t,p)+ \alpha e^{r(T-t)}g(t,p)(\theta-\eta)\kappa + \gamma(t,p,\xi,b), \end{equation} where \begin{equation}\label{eq:gamma} \begin{aligned} \gamma(t,p,\xi,b) := &-\alpha e^{r(T-t)} g(t,p)\Big((\mu-r)\xi -\frac12\alpha\sigma^2 e^{r(T-t)}\xi^2 + (1+\theta) \kappa b\Big) \\ &+\lambda\sum_{k=1}^m p_k \int_0^\infty g(t,J(p,y)) e^{\alpha b y e^{r(T-t)}}\int_{(0,1)}e^{\alpha \xi z \mathds{1}_{(L,\infty)}(y) e^{r(T-t)}}Q(dz)f_k(y)dy. \end{aligned} \end{equation} Using this operator and replacing the partial derivative of $g$ w.r.t.\ $t$, in~\eqref{HJBgdiff} by Clarke's generalized subdifferential, we get the generalized HJB equation for $g$: \begin{equation}\label{HJBg} 0 = \inf_{(\xi,b)\in[-K,K]\times[0,1]}\big\{ {\mathcal L} g(t,p;\xi,b)\big\} + \inf_{\varphi\in\partial^C\! g_p(t)}\{\varphi\} \end{equation} for all $(t,p)\in[0,T)\times\Delta_m$ with boundary condition \begin{equation}\label{HJBgbound} g(T,p) = 1,\quad p\in\Delta_m. \end{equation} Note that we set $\partial^C\! g_p(t)=\{g_p^\prime(t)\}$ at the points $t$ where the subdifferential exists. The notation $g_p(t)$ indicates that the derivative is w.r.t.\ $t$ for fixed $p$. \subsection{Candidate for an optimal strategy} To obtain candidates for an optimal strategy, we have to minimize the function $\gamma$ given in \eqref{eq:gamma} w.r.t.\ $(\xi,b)$ for fixed $(t,p)$. For this purpose we introduce the following notation: \begin{equation*} M_Z(u) := \mathbb{E}\big[e^{uZ}\big], \quad u\in\mathbb{R}. \end{equation*} Notice that $M_Z^\prime(u)=\mathbb{E}\big[Z e^{uZ}\big]$ and $M_Z^{\prime\prime}(u)=\mathbb{E}\big[Z^2e^{uZ}\big]$ whenever they exist. 
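The inequality $(M_Z^\prime)^2\le M_Z^{\prime\prime}M_Z$, which enters the convexity proof below via the Cauchy-Schwarz inequality, can be verified numerically. The choice $Z\sim\mathrm{Unif}(0,1)$ in the following sketch is purely illustrative, since $Q$ is not specified in the model:

```python
import math

def mgf_moments(u, n_grid=100_000):
    """Midpoint-rule approximations of M_Z(u) = E[e^{uZ}], M_Z'(u) = E[Z e^{uZ}]
    and M_Z''(u) = E[Z^2 e^{uZ}] for Z uniform on (0,1) -- an illustrative
    choice of the drop distribution Q, not one fixed by the model."""
    h = 1.0 / n_grid
    m0 = m1 = m2 = 0.0
    for i in range(n_grid):
        z = (i + 0.5) * h
        w = math.exp(u * z) * h
        m0 += w
        m1 += z * w
        m2 += z * z * w
    return m0, m1, m2

u = 0.7
m0, m1, m2 = mgf_moments(u)
assert abs(m0 - (math.exp(u) - 1.0) / u) < 1e-8   # closed form for uniform Z
assert m1 * m1 <= m2 * m0                         # Cauchy-Schwarz: (M')^2 <= M'' M
```

For a nondegenerate $Z$ the inequality is strict, which is what makes the matrices $B_k$ in the proof of the lemma positive semidefinite.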
\begin{lemma}\label{gamma} For any $(t,p)\in[0,T]\times\Delta_m$, the function $\mathbb{R}^2\ni (\xi,b)\mapsto \gamma(t,p,\xi,b)$ is strictly convex and \begin{align*} \frac{\partial}{\partial \xi}\gamma(t,p,\xi,b) &= -\alpha e^{r(T-t)}g(t,p)\big((\mu-r)-\alpha\sigma^2 e^{r(T-t)}\xi\big) \\ &\quad + \lambda\,\alpha\,e^{r(T-t)}\sum_{k=1}^m p_k \int_L^\infty g(t,J(p,y))e^{\alpha b y e^{r(T-t)}}f_k(y)dy\,M_Z^\prime(\alpha\,e^{r(T-t)}\xi),\\ \frac{\partial}{\partial b}\gamma(t,p,\xi,b) &= -\alpha\,e^{r(T-t)}g(t,p)\,(1+\theta)\kappa \\ &\quad + \lambda\alpha e^{r(T-t)}\sum_{k=1}^m p_k\!\!\int_0^\infty\!\!\! yg(t,J(p,y))e^{\alpha b y e^{r(T-t)}}\!\!\int_{(0,1)}\!\!\!e^{\alpha \xi z\mathds{1}_{(L,\infty)}(y) e^{r(T-t)}}Q(dz)f_k(y)dy. \end{align*} \end{lemma} \begin{proof} A straightforward calculation yields the announced partial derivatives and \begin{align*} \frac{\partial^2\gamma(t,p,\xi,b)}{\partial \xi^2} &= \alpha^2\sigma^2e^{2r(T-t)}g(t,p) \\ &\quad + \lambda\alpha^2e^{2r(T-t)}\sum_{k=1}^m p_k\int_L^\infty\! g(t,J(p,y))e^{\alpha b y e^{r(T-t)}}f_k(y)dy\,M_Z^{\prime\prime}(\alpha e^{r(T-t)}\xi),\\ \frac{\partial^2\gamma(t,p,\xi,b)}{\partial b^2} &= \lambda\alpha^2e^{2r(T-t)}\sum_{k=1}^m p_k\bigg(\int_0^L y^2g(t,J(p,y))e^{\alpha b y e^{r(T-t)}}f_k(y)dy\\ &\quad +\int_L^\infty y^2g(t,J(p,y))e^{\alpha b y e^{r(T-t)}}f_k(y)dy M_Z(\alpha e^{r(T-t)}\xi)\bigg), \\ \frac{\partial^2\gamma(t,p,\xi,b)}{\partial b\partial\xi} &= \lambda\alpha^2e^{2r(T-t)}\sum_{k=1}^m p_k\int_L^\infty yg(t,J(p,y))e^{\alpha b y e^{r(T-t)}}f_k(y)dy M_Z^\prime(\alpha e^{r(T-t)}\xi). 
\end{align*} Therefore, the Hessian matrix $H_\gamma$ of $\gamma$ w.r.t.\ $(\xi,b)$ is given by \begin{equation*} H_\gamma=\alpha^2e^{2r(T-t)}\Big(A+\lambda\sum_{k=1}^m p_k B_k\Big) \end{equation*} with \begingroup \renewcommand*{\arraystretch}{1.4} \begin{equation*} A:=\begin{pmatrix} \sigma^2g(t,p) & 0 \\ 0 & \lambda\sum_{k=1}^m p_k\int_0^L y^2g(t,J(p,y))e^{\alpha b y e^{r(T-t)}}f_k(y)dy \end{pmatrix} \end{equation*} and \begin{equation*} B_k:=\begin{pmatrix} a_k & b_k \\ b_k & c_k \end{pmatrix} \end{equation*} \endgroup with \begin{align*} a_k &:= \int_L^\infty g(t,J(p,y))e^{\alpha b y e^{r(T-t)}}f_k(y)dy M_Z^{\prime\prime}(\alpha e^{r(T-t)}\xi),\\ b_k &:= \int_L^\infty yg(t,J(p,y))e^{\alpha b y e^{r(T-t)}}f_k(y)dy M_Z^\prime(\alpha e^{r(T-t)}\xi),\\ c_k &:= \int_L^\infty y^2g(t,J(p,y))e^{\alpha b y e^{r(T-t)}}f_k(y)dy M_Z(\alpha e^{r(T-t)}\xi) \end{align*} for $k=1,\ldots,m$. To prove the convexity of $(\xi,b)\mapsto \gamma(t,p,\xi,b)$, it is sufficient to show that $H_\gamma$ is positive definite. Clearly, $A$ is positive definite. Moreover, for any $k\in\{1,\ldots,m\}$ and $\bar{x}=(x_1,x_2)\in\mathbb{R}^2$, it holds (using under the integral sign the estimate $(M_Z^\prime)^2 \le M_Z^{\prime\prime} M_Z$, which follows from the Cauchy-Schwarz inequality) \begin{align*} &\bar{x} B_k \bar{x}^\top = x_1^2 a_k + 2x_1x_2b_k + x_2^2c_k \\ &= \int_L^\infty \!g(t,J(p,y))e^{\alpha b y e^{r(T-t)}}\Big(x_1^2 M_Z^{\prime\prime}(\alpha e^{r(T-t)}\xi) + 2x_1x_2\,y\,M_Z^\prime(\alpha e^{r(T-t)}\xi) \\ &\quad + x_2^2\,y^2 M_Z(\alpha e^{r(T-t)}\xi)\Big)f_k(y)dy \\ &\ge \int_L^\infty \!g(t,J(p,y))e^{\alpha b y e^{r(T-t)}} \bigg(x_1 \frac{M_Z^{\prime}(\alpha e^{r(T-t)}\xi)}{\sqrt{M_Z(\alpha e^{r(T-t)}\xi)}} + x_2\,y\sqrt{M_Z(\alpha e^{r(T-t)}\xi)}\bigg)^2 f_k(y)dy\ge 0.
\end{align*} Since $A$ is positive definite and each $B_k$ is positive semidefinite, $H_\gamma$ is positive definite. \end{proof} Setting $\nabla\gamma$ to zero, we obtain the following first-order conditions for the candidate of an optimal strategy in the case $g>0$: \begin{equation}\label{foc} \begin{aligned} v_1(t,p,\xi,b) &=\mu-r, \\ v_2(t,p,\xi,b) &= (1+\theta)\kappa, \end{aligned} \end{equation} where \begin{align*} v_1(t,p,\xi,b) &:= \alpha\sigma^2 e^{r(T-t)}\xi+ \lambda\sum_{k=1}^m p_k\int_L^\infty \frac{g(t,J(p,y))}{g(t,p)}e^{\alpha b y e^{r(T-t)}}f_k(y)dy\,M_Z^\prime(\alpha\,e^{r(T-t)}\xi),\\ v_2(t,p,\xi,b) &:= \lambda\sum_{k=1}^m p_k\int_0^\infty y\frac{g(t,J(p,y))}{g(t,p)}e^{\alpha b y e^{r(T-t)}}\int_{(0,1)} e^{\alpha\xi z\mathds{1}_{(L,\infty)}(y)e^{r(T-t)}}Q(dz)f_k(y)dy. \end{align*} The next proposition states that this system of equations is solvable. \begin{proposition}\label{candidates} For any $(t,p)\in[0,T]\times\Delta_m$, \eqref{foc} has a unique root w.r.t.\ $(\xi,b)$, denoted by $r(t,p):=(r_1(t,p),r_2(t,p))$, where $r_2(t,p)$ is increasing w.r.t.\ the safety loading parameter $\theta$ of the reinsurer. Moreover, it holds: \begin{enumerate} \item[(a)] $r_2(t,p) \le 0$ if $(1+\theta)\kappa \le A(t,p)$, \item[(b)] $0< r_2(t,p) <1$ if $A(t,p) < (1+\theta)\kappa < B(t,p)$, \item[(c)] $r_2(t,p)\ge 1$ if $(1+\theta)\kappa \ge B(t,p)$, \item[(d)] $r_1(t,p)$ is decreasing with $r_2(t,p)$, \end{enumerate} with \begin{align*} A(t,p) &:= v_2(t,p,r_1(t,p),0), \\ B(t,p) &:= v_2(t,p,r_1(t,p),1). \end{align*} \end{proposition} \begin{proof} Due to the strict convexity of $\gamma$ according to Lemma \ref{gamma} and \begin{equation*} \lim_{\xi\to-\infty}\gamma(t,p,\xi,b) = \lim_{\xi\to+\infty}\gamma(t,p,\xi,b) = \lim_{b\to-\infty}\gamma(t,p,\xi,b) = \lim_{b\to+\infty}\gamma(t,p,\xi,b) = \infty, \end{equation*} there exists a unique minimizer of the function $\gamma$ w.r.t.\ $(\xi,b)$ for fixed $(t,p)$, i.e.\ \eqref{foc} has a unique root denoted by $r(t,p):=(r_1(t,p),r_2(t,p))$.
Note that $\mathbb{R}\ni b\mapsto v_2(t,p,\xi,b)$ is strictly increasing and thus $A(t,p)<B(t,p)$. Then statements (a), (b) and (c) follow from considering the zeros of \eqref{foc} in $\theta$ when $b=0$ and when $b=1$. For (d) note that $v_1(t,p,\xi,b)$ is increasing in both $\xi$ and $b$. \end{proof} The proposition above provides the candidate for an optimal investment-reinsurance strategy. Let $K$ be sufficiently large such that $|r_1(t,p)|\le K$ for all $t\in [0,T], p\in \Delta_m$. For any $(t,p)\in[0,T]\times\Delta_m$, we set \begin{equation*} b(t,p) := \begin{cases} 0, & \theta\le A(t,p)/\kappa-1,\\ 1, &\theta\ge B(t,p)/\kappa-1, \\ r_2(t,p), &\text{otherwise}. \end{cases} \end{equation*} Then the candidate for an optimal investment-reinsurance strategy $(\xi^\star,b^\star)=(\xi^\star_t,b^\star_t)_{t\in[0,T]}$ is given by \begin{equation*} b^\star_t := b(t,p_{t-}) \mbox{ and } \xi^\star_t := r_1(t,p_{t-}), \end{equation*} where the latter equation only holds if $A(t,p_{t-})<(1+\theta)\kappa < B(t,p_{t-})$. If $b_t^\star=0$ or $b_t^\star=1$, we have to find the minimum point of $\gamma$ on $(-\infty, \infty)\times [0,1]$, and $\xi_t^\star$ may deviate from $r_1(t,p_{t-})$. In the case $b_t^\star=0$, we have to solve $v_1(t,p,\xi,0)=\mu-r$, whose unique root w.r.t.\ $\xi$ is denoted by $a_0(t,p)$. Similarly, we denote by $a_1(t,p)$ the unique root w.r.t.\ $\xi$ of $v_1(t,p,\xi,1)=\mu-r$. Setting \begin{equation*} z(t,p) := \begin{cases} (a_0(t,p),0), & \theta\le A(t,p)/\kappa-1,\\ (a_1(t,p),1), &\theta\ge B(t,p)/\kappa-1, \\ r(t,p), &\text{otherwise}, \end{cases} \end{equation*} we obtain the following representation of the candidate for an optimal investment-reinsurance strategy $(\xi^\star,b^\star)=(\xi^\star_t,b^\star_t)_{t\in[0,T]}$: \begin{equation}\label{optstr} (\xi^\star_t,b^\star_t) := z(t,p_{t-}),\quad t\in[0,T].
\end{equation} Notice that the strategy $(\xi^\star,b^\star)$ can only jump at the claim arrival times, due to its dependence on the filter process $(p_t)_{t\ge0}$. \subsection{Verification} This section is devoted to a verification theorem which ensures that the solution of the stated generalized HJB equation yields the value function (see Theorem~\ref{veri}). We also establish an existence result for the solution of the HJB equation (see Theorem~\ref{existenceHJB}). Both proofs can be found in the appendix. \begin{theorem}\label{veri} Suppose there exists a bounded function $h:[0,T]\times\Delta_m\to(0,\infty)$ such that $t\mapsto h(t,p)$ is Lipschitz on $[0,T]$ for all $p\in\Delta_m$, $p\mapsto h(t,p)$ is continuous on $\Delta_m$ for all $t\in[0,T]$ and $h$ satisfies the generalized HJB equation \eqref{HJBg} for all $(t,p)\in[0,T)\times\Delta_m$ with boundary condition \begin{equation}\label{hHJBbcond} h(T,p) = 1,\quad p\in\Delta_m. \end{equation} Then \begin{equation*} V(t,x,p)= -e^{-\alpha x e^{r(T-t)}}h(t,p),\quad (t,x,p)\in[0,T]\times\mathbb{R}\times\Delta_m, \end{equation*} and $(\xi^\star,b^\star)=(\xi^\star_s,b^\star_s)_{s\in[t,T]}$ with $(\xi^\star_s,b^\star_s)$ given by~\eqref{optstr} (with $g$ replaced by $h$ in $A(s,p)$ and $B(s,p)$) is an optimal feedback strategy for the given optimization problem~\eqref{P}, i.e.\ $V(t,x,p) = V^{\xi^\star,b^\star}(t,x,p)$. \end{theorem} \subsection{Existence result for the value function} \label{existencevalue} We now show that there exists a function $h:[0,T]\times\Delta_m\to(0,\infty)$ satisfying the conditions stated in Theorem~\ref{veri}.
For this purpose let \begin{equation}\label{g} g(t,p) := \inf_{(\xi,b)\in{\mathcal U}[t,T]}g^{\xi,b}(t,p), \end{equation} with \begin{equation}\label{gxib} \begin{aligned} g^{\xi,b}(t,p) := \mathbb{E}^{t,p}\bigg[\exp\bigg\{&-\int_t^T \alpha e^{r(T-s)}\big((\mu-r)\,\xi_s+c(b_s)\big)ds -\int_t^T\alpha\sigma e^{r(T-s)}\xi_sdW_s\\ &+\int_t^T \int_E \alpha \big( b_s y + \xi_s z \mathds{1}_{(L,\infty)}(y)\big)e^{r(T-s)}\Psi(ds, d(y,z))\bigg\}\bigg], \end{aligned} \end{equation} where $\mathbb{E}^{t,p}$ denotes the conditional expectation given $(p_t,q_t)=(p,q)$. The next lemma summarizes useful properties of $g$; here $e_j$ denotes the $j$th unit vector. A proof can be found in the appendix. \begin{lemma}\label{propg} The function $g$ defined by~\eqref{g} has the following properties: \begin{enumerate} \item[(a)] $g$ is bounded on $[0,T]\times\Delta_m$ by a constant $0<K_1<\infty$ and $g>0$. \item[(b)] $g^{\xi,b}(t,p) = \sum_{j=1}^m p_j g^{\xi,b}(t,e_j)$ for all $(t,p)\in[0,T]\times\Delta_m$ and $(\xi,b)\in{\mathcal U}[t,T]$. \item[(c)] $g^{\xi,b}(t,J(p,y)) = \sum_{j=1}^m \frac{f_j(y)p_j}{\sum_{k=1}^m f_k(y)p_k} g^{\xi,b}(t,e_j)$ for all $(t,p)\in[0,T]\times\Delta_m$ and $(\xi,b)\in{\mathcal U}[t,T]$. \item[(d)] $\Delta_m\ni p\mapsto g(t,p)$ is concave for all $t\in[0,T]$. \item[(e)] $[0,T]\ni t\mapsto g(t,p)$ is Lipschitz on $[0,T]$ for all $p\in\Delta_m$. \end{enumerate} \end{lemma} We are now in a position to show the following existence result for a solution of the generalized HJB equation. \begin{theorem}\label{existenceHJB} The value function of problem~\eqref{P} is given by \begin{equation*} V(t,x,p) = -e^{-\alpha x e^{r(T-t)}}g(t,p),\quad (t,x,p)\in[0,T]\times\mathbb{R}\times\Delta_m, \end{equation*} where $g$ is defined by~\eqref{g} and satisfies the generalized HJB equation \eqref{HJBg} for all $(t,p)\in[0,T)\times\Delta_m$ with boundary condition $g(T,p)=1$ for all $p\in\Delta_m$.
Furthermore, $(\xi^\star,b^\star)=(\xi^\star_s,b^\star_s)_{s\in[t,T]}$ with $(\xi^\star_s,b^\star_s)$ given by~\eqref{optstr} is the optimal investment and reinsurance strategy of the optimization problem~\eqref{P}. \end{theorem} \section{Comparison results}\label{sec:comp} \subsection{Case of independent financial and insurance risks} In this section we present a comparison between the optimal strategy given in Theorem~\ref{existenceHJB} and the one in the case of independent financial and insurance risks. In this case the price process of the risky asset has no jumps when an insurance claim exceeds the threshold $L$, i.e.\ the price process of the risky asset evolves according to a geometric Brownian motion. Throughout this section, we suppose that $K$ is sufficiently large. We write $(\tilde\xi^\star,\tilde b^\star)$ for the optimal investment and reinsurance strategy in the case of no interdependencies between the financial and insurance market as described above. We obtain the special solution (cp. \cite[Ch.\,6]{gl20}) \begin{align*} \tilde\xi^\star_t &= \frac{\mu-r}{\sigma^2}\frac{1}{\alpha}e^{-r(T-t)},\\ \tilde b^\star_t &= \tilde b(t,p_{t-}), \end{align*} where \begin{equation*} \tilde b(t,p) := \begin{cases} 0, & \theta\le \tilde A(t,p)/\kappa-1,\\ 1, &\theta\ge \tilde B(t,p)/\kappa-1, \\ \tilde r(t,p), &\text{otherwise}, \end{cases} \end{equation*} with \begin{align*} \tilde\gamma(t,p,b) &:= \lambda\sum_{k=1}^m p_k \int_0^\infty y \frac{g(t,J(p,y))}{g(t,p)} e^{\alpha b y e^{r(T-t)}} f_k(y)dy, \\ \tilde A(t,p) &:= \tilde\gamma(t,p,0), \\ \tilde B(t,p) &:= \tilde\gamma(t,p,1), \end{align*} and $\tilde r(t,p)$ is the unique root of $\tilde\gamma(t,p,b)=(1+\theta)\kappa$ w.r.t.\ $b$. The next theorem provides a comparison of the optimal investment strategies $\xi^\star$ and $\tilde\xi^\star$. \begin{theorem}\label{comparison} For any $t\in[0,T]$ it holds $ \xi^\star_t \le \tilde\xi^\star_t.$ \end{theorem} \begin{proof} Fix $t\in[0,T]$.
Note that the first order condition of $\tilde\xi^\star$ is \begin{equation*} \alpha\sigma^2 e^{r(T-t)}\xi=\mu-r, \end{equation*} where the left-hand side is always less than $v_1(t,p,\xi,b)$ from~\eqref{foc} and crosses $\mu-r$ from below. Consequently, $\xi_t^\star\le \frac{\mu-r}{\sigma^2}\frac{1}{\alpha}e^{-r(T-t)} $. \end{proof} The theorem says that it is always optimal to invest more money into the risky asset in the absence of interdependencies between financial and insurance risks than in the presence of dependencies. This is not surprising since the interdependency in our model may only imply some downward jumps of the risky asset. A negative investment into the financial market can be used to hedge against claims. \subsection{Case of complete information} First note that the case with complete information is always a special case of our general model. We obtain this case when the prior is concentrated on a single value. In order to state the optimal strategy in the complete information case, we define for any $t\in[0,T]$ and $(\xi,b)\in\mathbb{R}^2$ \begin{align*} v_1^F(t,\xi,b) &:= \alpha\sigma^2e^{r(T-t)}\xi + \lambda \int_L^\infty e^{\alpha b y e^{r(T-t)}}F(dy)M_Z^\prime\big(\alpha e^{r(T-t)}\xi\big),\\ v_2^F(t,\xi,b) &:= \lambda \int_0^\infty y e^{\alpha b y e^{r(T-t)}}\int_{(0,1)}e^{\alpha\xi z \mathds{1}_{(L,\infty)}(y)e^{r(T-t)}}Q(dz)F(dy), \end{align*} for some distribution $F$ on $(0,\infty)$. Furthermore, we denote by $r^F(t)=(r^F_1(t),r^F_2(t))$ the unique root w.r.t.\ $(\xi,b)$ of \begin{equation}\label{full:foc} \begin{aligned} v_1^F(t,\xi,b) &= \mu-r\\ v_2^F(t,\xi,b) &= (1+\theta)\kappa, \end{aligned} \end{equation} which exists, and we define \begin{equation*} A_F(t) := v_2^F(t,r^F_1(t),0), \quad B_F(t) := v_2^F(t,r^F_1(t),1). \end{equation*} Moreover, $a_0^F(t)$ denotes the unique root w.r.t.\ $\xi$ of $v_1^F(t,\xi,0)=\mu-r$ and $a_1^F(t)$ the unique root w.r.t.\ $\xi$ of $v_1^F(t,\xi,1)=\mu-r$. 
By the same line of arguments as in Proposition~\ref{candidates}, we obtain under the notation above that the optimal investment-reinsurance strategy $(\xi_F^\star,b^\star_{F})=(\xi_F^\star(t),b^\star_{F}(t))_{t\in[0,T]}$ in the case of complete information is given by \begin{equation}\label{full:xibstart} (\xi_F^\star(t),b^\star_F(t)) := \begin{cases} (a_0^F(t),0), & \theta\le A_F(t)/\kappa -1, \\ (a_1^F(t),1), &\theta \ge B_F(t)/\kappa-1, \\ r^F(t), &\text{otherwise}. \end{cases} \end{equation} Note that $r_1^F(t)$, $r_2^F(t)$, $a_0^F(t)$, $a_1^F(t)$, $A_F(t)$ and $B_F(t)$ are continuous in $t$. Consequently, the optimal strategies $\xi^\star_F$ and $b^\star_F$ are continuous. Moreover, $(\xi^\star_F,b^\star_F)$ is deterministic and can be calculated easily. We will now compare the strategies. In order to do so, we assume throughout this section that \begin{equation*} F_1(x)\ge F_2(x)\ge \ldots \ge F_m(x) \end{equation*} for all $x\in\mathbb{R}$. That is, the claim sizes are ordered stochastically as follows: \begin{equation*} Y|\vartheta=1 \preceq_{\textup{st}} Y|\vartheta=2 \preceq_{\textup{st}}\ldots\preceq_{\textup{st}} Y|\vartheta=m, \end{equation*} where $\preceq_{\textup{st}}$ denotes the usual stochastic order. This assumption is equivalent to \begin{equation*} \int_0^\infty g(y) f_1(y)dy\le\int_0^\infty g(y) f_2(y)dy\le \ldots \le \int_0^\infty g(y) f_m(y)dy \end{equation*} for all increasing functions $g$ for which the expectations exist, compare Theorem 1.2.8 in \cite{MuellerStoyan2002}. First of all, we derive bounds for the optimal strategy which can be calculated a priori, i.e.\ independently of the filter process $(p_t)_{t\ge0}$. To this end, we introduce the following notation.
For any $t\in[0,T]$ and $(\xi,b)\in\mathbb{R}^2$, we set \begin{align*} v_1^{\min}(t,\xi,b) &:= \alpha\sigma^2e^{r(T-t)}\xi + \lambda \int_L^\infty e^{\alpha b y e^{r(T-t)}}f_1(y)dy M_Z^\prime\big(\alpha \xi e^{r(T-t)}\big),\\ v_1^{\max}(t,\xi,b) &:= \alpha\sigma^2e^{r(T-t)}\xi + \lambda \int_L^\infty e^{\alpha b y e^{r(T-t)}}f_m(y)dy M_Z^\prime\big(\alpha \xi e^{r(T-t)}\big). \end{align*} For some fixed $t\in[0,T]$, we denote by $r^{\min}_1(t)$ the unique root w.r.t.\ $\xi$ of $v_1^{\min}(t,\xi,b) = \mu-r$ and by $r^{\max}_1(t)$ the unique root w.r.t.\ $\xi$ of $v_1^{\max}(t,\xi,b) = \mu-r$, which exist by the same line of arguments as in Proposition~\ref{candidates}. The announced a-priori-bounds are a direct consequence of the following result. \begin{proposition}\label{pr:aprioribounds} For any $(t,p)\in[0,T]\times\Delta_m$, we have for $v_1$ from \eqref{foc} \begin{align*} v_1^{\min}(t,\xi,b) &\le v_1(t,p,\xi,b) \le v_1^{\max}(t,\xi,b)\quad\text{for all }(\xi,b)\in\mathbb{R}\times\mathbb{R}_+. \end{align*} \end{proposition} \begin{proof} Choose some $(t,p)\in[0,T]\times\Delta_m$ and $(\bar\xi,\bar b)\in\mathbb{R}\times\mathbb{R}_+$. For any $(\xi,b)\in\mathcal U[t,T]$, an application of Lemma~\ref{propg}~(b) and~(c) yields \begin{align*} &\sum_{k=1}^{m}p_k\int_L^\infty g^{\xi,b}(t,J(p,y))e^{\alpha\bar b y e^{r(T-t)}}f_k(y)dy \\ &= \sum_{j=1}^{m}p_j g^{\xi,b}(t,e_j) \int_L^\infty\frac{\sum_{k=1}^{m}p_kf_k(y)}{\sum_{\ell=1}^{m}p_\ell f_\ell(y)} e^{\alpha \bar b y e^{r(T-t)}}f_j(y)dy \le g^{\xi,b}(t,p) \int_L^\infty e^{\alpha \bar b y e^{r(T-t)}}f_m(y)dy, \end{align*} which yields $v_1(t,p,\bar\xi,\bar b) \le v_1^{\max}(t,\bar\xi,\bar b)$ by dividing by $g^{\xi,b}(t,p)$, multiplying both sides by \linebreak $\lambda M_Z^\prime(\alpha \bar\xi e^{r(T-t)})$ and by adding $\alpha\sigma^2e^{r(T-t)}\bar\xi$. The inequality $v_1^{\min}(t,\bar \xi,\bar b) \le v_1(t,p,\bar\xi,\bar b)$ is obtained in the same way.
\end{proof} The proposition directly implies the following corollary: \begin{corollary}\label{co:aprioribounds} The optimal investment strategy $\xi^\star$ from Theorem~\ref{existenceHJB} has the following bounds for $t\in[0,T]$: \begin{eqnarray*} r_1^{\max}(t) \le \xi^\star_t \quad\text{if } b^\star_{F_1}(t)=b_t^\star,\\ \xi^\star_t \le r_1^{\min}(t) \ \quad\text{if } b^\star_{F_m}(t)=b_t^\star. \end{eqnarray*} \end{corollary} The next theorem is now the main statement of this section. It provides a comparison of the optimal investment strategy to the optimal one in the case of complete information, where the unknown claim size distribution is replaced by its posterior expectation. It turns out that in the latter case the amount which is invested is higher if the retention level is the same. In this sense the complete information case provides upper bounds. \begin{theorem}\label{th:comparison} Let $(\xi_F^\star,b_F^\star)$ be the function given in~\eqref{full:xibstart} and suppose the insurance company invests in the financial market, i.e.\ $\xi_t^\star >0$ for all $t\in [0,T]$. Then if $b_t^\star = b^\star_{\bar F_{p_{t-}}}$ we obtain for $t\in[0,T]$ \begin{equation*} \xi_t^\star \le \xi^\star_{\bar F_{p_{t-}}}(t),\quad \mbox{with } \bar F_p(dy) := \sum_{k=1}^m p_kf_k(y)dy. \end{equation*} \end{theorem} \begin{proof} Let us fix $(t,p)\in[0,T]\times\Delta_m$ and $(\bar\xi,\bar b)\in\mathbb{R}\times\mathbb{R}_+$. From the proof of Proposition~\ref{pr:aprioribounds}, we know already that \begin{equation*} \sum_{k=1}^{m}p_k\int_L^\infty g^{\xi,b}(t,J(p,y))e^{\alpha\bar b y e^{r(T-t)}}f_k(y)dy = \sum_{j=1}^{m}p_j g^{\xi,b}(t,e_j) \int_L^\infty e^{\alpha \bar b y e^{r(T-t)}}f_j(y)dy \end{equation*} for all $(\xi,b)\in\widetilde{\mathcal{U}}[t,T]$, where $\widetilde{\mathcal{U}}[t,T]$ denotes the set of all admissible strategies $\mathcal{U}[t,T]$ restricted to positive investment strategies.
The integrand of \begin{align*} g^{\xi,b}(t,p) = \mathbb{E}^{t,p}\bigg[\exp\bigg\{&-\int_t^T\alpha e^{r(T-s)}\big((\mu-r)\xi_s+c(b_s)\big) ds -\int_t^T\alpha\sigma e^{r(T-s)}\xi_s dW_s\\ &+\sum_{n=1}^{N_{T-t}}\alpha \big(b_{T_n}Y_n+\xi_{T_n} Z_n\mathds{1}_{(L,\infty)}(Y_n)\big)e^{r(T-T_n)}\bigg\}\bigg] \end{align*} is increasing in $Y_n$ (due to the positivity of $\xi_t$ for all $t\in[0,T]$) and hence $g^{\xi,b}(t,e_1)\le \ldots\le g^{\xi,b}(t,e_m)$. Therefore, by Lemma~\ref{propg}~(b) as well as Lemma~\ref{le:ineqsum}, we get \begin{align*} \sum_{j=1}^{m}p_j g^{\xi,b}(t,e_j) \int_L^\infty e^{\alpha \bar b y e^{r(T-t)}}f_j(y)dy \ge g^{\xi,b}(t,p) \int_L^\infty e^{\alpha \bar b y e^{r(T-t)}}\sum_{j=1}^m p_j f_j(y)dy. \end{align*} In summary, we have \begin{equation*} \sum_{k=1}^{m}p_k\int_L^\infty g^{\xi,b}(t,J(p,y))e^{\alpha\bar b y e^{r(T-t)}}f_k(y)dy \ge g^{\xi,b}(t,p) \int_L^\infty e^{\alpha \bar b y e^{r(T-t)}}\bar F_p(dy), \end{equation*} for all $(\xi,b)\in\widetilde{\mathcal{U}}[t,T]$, which yields $v_1(t,p,\bar\xi,\bar b)\ge v_1^{\bar F_p}(t,\bar\xi,\bar b)$ by the same argument as in the proof of Proposition~\ref{pr:aprioribounds}. Therefore, we get $\xi_t^\star \le \xi^\star_{\bar F_{p_{t-}}}(t)$ under the assumption $\xi_t^\star>0$. \end{proof} \subsection{Numerical results} We have seen in the last subsection that it is easy to compute the optimal strategy in the case of full information and that this yields in some cases a bound on the optimal strategy in the case of incomplete information. In particular, when we set $r=0$, the strategy obtained through \eqref{full:xibstart} is constant in time and depends only on the time horizon. We have computed the optimal strategy in the case of full information for the following data: The volatility of the financial market is $\sigma=0.4$, the drift $\mu=0.3$ and the interest rate $r=0$.
The claim arrival intensity is $\lambda=10$ and the claim sizes are exponentially distributed with parameter $\varrho=0.1$, i.e.\ $Y\sim Exp(0.1)$. Note that the moment generating function of the exponential distribution exists only for arguments smaller than $\varrho$. Thus, for all integrals to exist we have to make sure that $\alpha < \varrho$. Hence, we choose $\alpha=0.05$, which means that we are close to the risk-sensitive case. For $Z$ we choose a uniform distribution on $(0,1)$. The expected total claim amount per year in this model is $\mathbb{E} N_1\, \mathbb{E} Y= 100$, so we should choose $(1+\theta)\kappa >100$. Indeed, since the premium income itself is below $(1+\theta)\kappa$, we set $(1+\theta)\kappa =350$. We now compute the optimal investment and reinsurance strategy for different levels $L$. Note that the expected claim size is $10$. The larger $L$, the weaker the induced dependence between the markets; for $L\to \infty$ we obtain independence. Figure~\ref{fig:influenceL} shows the results. \begin{figure} \begin{center} \includegraphics[width=0.95\textwidth]{influenceL.pdf} \end{center} \caption{Optimal strategy in the case of complete observation as a function of $L$ with logarithmically scaled $x$-axis.} \label{fig:influenceL} \end{figure} We clearly see that the investment increases with $L$. This is to be expected since downward jumps of the financial market become less likely when $L$ is large. However, the following observation is surprising: in the independent case the optimal investment with these parameters is $\xi^\star=\frac{\mu}{\alpha\sigma^2}=37.5$, and for $L\to\infty$ we indeed see convergence to this value. But even for $L=100$, which means that the threshold producing the correlation is 10 times as high as an expected claim and thus very unlikely to be exceeded (the probability is $e^{-10}\approx 4.5\cdot 10^{-5}$), the investment in the risky asset is only $25.19$ compared to $37.5$.
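These full-information numbers can be reproduced with a short root-finding script. The following Python sketch (ours, not the authors' code; all function names are our own) exploits that for $r=0$ and $Exp(\varrho)$ claims all integrals in the first-order conditions \eqref{full:foc} have closed forms, and clamps the retention level to $[0,1]$ as in \eqref{full:xibstart}:

```python
import numpy as np
from scipy.optimize import brentq

mu, r, sigma = 0.3, 0.0, 0.4        # financial market (r = 0, so time drops out)
lam, rho, alpha = 10.0, 0.1, 0.05   # claim intensity, Exp(rho) claims, risk aversion
prem = 350.0                        # reinsurance premium rate (1 + theta) * kappa

def mz(u):
    """M_Z(u) for Z ~ U(0,1)."""
    return 1.0 + u / 2 if abs(u) < 1e-8 else (np.exp(u) - 1) / u

def mzp(u):
    """M_Z'(u) = E[Z exp(uZ)]."""
    return 0.5 + u / 3 if abs(u) < 1e-8 else ((u - 1) * np.exp(u) + 1) / u ** 2

def v1(xi, b, L):
    c = rho - alpha * b             # integrability requires alpha * b < rho
    return alpha * sigma ** 2 * xi + lam * rho * np.exp(-c * L) / c * mzp(alpha * xi)

def v2(xi, b, L):
    c = rho - alpha * b
    head = (1 - np.exp(-c * L) * (1 + c * L)) / c ** 2   # int_0^L y e^{-cy} dy / rho... (closed form)
    tail = np.exp(-c * L) * (1 + c * L) / c ** 2         # int_L^inf y e^{-cy} dy (times rho below)
    return lam * rho * (head + mz(alpha * xi) * tail)

def xi_of_b(b, L):
    """Unique root of v1(., b, L) = mu - r; v1 is strictly increasing in xi."""
    return brentq(lambda x: v1(x, b, L) - (mu - r), -1000.0, 1000.0)

def optimal(L):
    """Candidate (xi*, b*) in the complete-information case, b clamped to [0, 1]."""
    h = lambda b: v2(xi_of_b(b, L), b, L) - prem
    if h(1.0) <= 0:      # reinsurance too expensive even at full retention: b* = 1
        b = 1.0
    elif h(0.0) >= 0:    # full reinsurance: b* = 0
        b = 0.0
    else:
        b = brentq(h, 0.0, 1.0)
    return xi_of_b(b, L), b
```

For very large $L$ the tail terms vanish and `optimal` returns $\xi^\star=(\mu-r)/(\alpha\sigma^2)=37.5$ together with $b^\star\approx0.93$, matching the limits discussed in the text; for small $L$ it produces the negative investment described below.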
Thus, the insurance company is very conservative. Of course we have a risk-sensitive criterion here, but nevertheless the impact of the dependence is striking. For $L$ below $64.35$ there is a negative investment into the financial market: the insurance company then uses the dependence to hedge against claims by short-selling stocks. For $L\to0$, the optimal investment converges to $-86.57$. For small $L$ there is indeed no reinsurance, whereas for $L\to\infty$ the retention level stabilizes around $b^\star=0.93$, i.e.\ only $7\,\%$ of the claims are covered by reinsurance. In total, we conclude that in this simple model even a small correlation between claim sizes and the behavior of the financial market already has a severe impact on the optimal investment strategy. \section{Appendix}\label{sec:app} \subsection{Clarke's generalized subdifferential} The following definition and results are taken from Section~2.1 in \cite{Clarke1983}, where we restrict ourselves to univariate functions $f:\mathbb{R}\to\mathbb{R}$, which is sufficient for this paper. \begin{definition}[\cite{Clarke1983}, p.\,25]\label{def:gendirder} Let $x\in\mathbb{R}$ be a given point and let $v\in\mathbb{R}$. Moreover, let $f$ be Lipschitz near $x$. Then the \emph{generalized directional derivative} of $f$ at $x$ in the direction $v$, denoted by $f^\circ(x;v)$, is defined by \begin{equation*} f^\circ(x;v) = \limsup_{y\to x, h\downarrow 0}\frac{f(y+h\,v)-f(y)}{h}. \end{equation*} \end{definition} \begin{definition}[\cite{Clarke1983}, p.\,27]\label{def:Clarksub} Let $f$ be Lipschitz near $x$. Then \emph{Clarke's generalized subdifferential} of $f$ at $x$, denoted by $\partial^C f(x)$, is given by \begin{equation*} \partial^Cf(x):=\big\{\xi\in\mathbb{R}: f^\circ(x;v)\ge \xi v \text{ for all }v\in\mathbb{R}\big\}.
\end{equation*} \end{definition} \begin{proposition}[\cite{Clarke1983}, Prop.\,2.2.4]\label{pr:propClarkediff} If $f$ is strictly differentiable at $x$, then $f$ is Lipschitz near $x$ and $\partial^C f(x) = \{f^\prime(x)\}$. Conversely, if $f$ is Lipschitz near $x$ and $\partial^C f(x)$ reduces to a singleton $\{\zeta\}$, then $f$ is strictly differentiable at $x$ and $f^\prime(x)=\zeta$. \end{proposition} \begin{theorem}[\cite{Clarke1983}, Thm.\,2.5.1]\label{th:genCgco} Let $f$ be Lipschitz near $x$ and let $S$ be an arbitrary set of Lebesgue-measure $0$ in $\mathbb{R}$. Moreover, denote by $\Omega_f$ the set of points at which $f$ is not differentiable. Then \begin{equation*} \partial^C f(x) = co\Big\{\lim_{n\to\infty} f^\prime(x_n): x_n\to x, x_n\notin S, x_n\notin\Omega_f\Big\}. \end{equation*} \end{theorem} \subsection{Auxiliary Results} From now on, we denote by $f:[0,T]\times\mathbb{R}\to\mathbb{R}$ the function defined by \begin{equation}\label{f} f(t,x) := -e^{-\alpha x e^{r(T-t)}}. \end{equation} \begin{lemma}\label{Qxibt} Let $t\in[0,T]$ and let $(\xi,b)\in\mathcal{U}[0,T]$ be an arbitrary admissible strategy. We set \begin{equation}\label{density} \begin{aligned} L^{\xi,b}_t &:= \exp\bigg\{ -\int_0^t \alpha\sigma e^{r(T-s)}\xi_s dW_s -\frac12 \int_0^t \alpha^2\sigma^2e^{2r(T-s)}\xi_s^2 ds \\ &\;\quad+\int_0^t\int_E \alpha(b_s y + \xi_s z \mathds{1}_{(L,\infty)}(y)) e^{r(T-s)}\Psi(ds, d(y,z)) + \lambda t \\ &\;\quad -\int_0^t\lambda \sum_{k=1}^m p_k(s) \int_0^\infty e^{\alpha b_s y e^{r(T-s)}}\int_{(0,1)} e^{\alpha \xi_s z\mathds{1}_{(L,\infty)}(y) e^{r(T-s)}}Q(dz)f_k(y)dy ds\bigg\}. \end{aligned} \end{equation} Then a possibly substochastic measure on $(\Omega,\mathcal{G}_t)$ is defined by $\mathbb{Q}^{\xi,b}_t(A):=\int_A L^{\xi,b}_t d\mathbb{P}$, $A\in\mathcal{G}_t$, for every $t\in[0,T]$, i.e.\ $\frac{d\mathbb{Q}^{\xi,b}_t}{d\mathbb{P}}:=L^{\xi,b}_t$. The measures $\mathbb{Q}^{\xi,b}_t$ and $\mathbb{P}$ are equivalent.
\end{lemma} \begin{proof} First, we show that $(L_t^{\xi,b})_{t\ge0}$ is the Dol\'{e}ans-Dade exponential of the local martingale $(Z_t)_{t\ge0}$ defined by \begin{equation*} Z_t := -\int_0^t \alpha \sigma e^{r(T-s)}\xi_sdW_s + \int_0^t \int_E \Big(e^{\alpha(b_s y+ \xi_s z \mathds{1}_{(L,\infty)}(y))e^{r(T-s)}}-1\Big)\hat{\Psi}(ds,d(y,z)). \end{equation*} That is, \begin{equation*} L_t^{\xi,b} = \mathcal{E}(Z_t) = e^{Z_t - \frac12\int_0^t \alpha^2\sigma^2e^{2r(T-s)}\xi_s^2ds}\prod_{0<s\le t}(1+\Delta Z_s)e^{-\Delta Z_s}, \end{equation*} where \begin{align*} \prod_{0<s\le t}\!(1\!+\!\Delta Z_s)e^{-\Delta Z_s} &= \exp\bigg\{\int_0^t\!\int_E \alpha(b_s y+ \xi_s z \mathds{1}_{(L,\infty)}(y))e^{r(T-s)}\Psi(ds,d(y,z))\bigg\} \\ &\quad\times\! \exp\bigg\{\!\!-\!\int_0^t\!\int_E\! \Big(\!\exp\Big\{\alpha(b_s y+ \xi_s z \mathds{1}_{(L,\infty)}(y))e^{r(T-s)}\Big\}\!-\!1\Big)\Psi(ds,d(y,z))\bigg\}. \end{align*} This implies the announced representation \eqref{density} of $(L_t^{\xi,b})_{t\ge0}$ since $\Psi-\hat\Psi = \hat\nu$. As $(L_t^{\xi,b})_{t\ge0}$ is a non-negative local martingale, it is a supermartingale and hence $\mathbb{E} L_t^{\xi,b} \le 1$ for all $t\ge 0$. \end{proof} \begin{lemma}\label{fbounded} Let $(\xi,b)\in \mathcal{U}[0,T]$ and let $\prT{L^{\xi,b}}$ be the density process given by~\eqref{density}. Then there exists a constant $0<K_2<\infty$ such that \begin{equation*} \frac{\big|f(t,X^{\xi,b}_t)\big|}{L^{\xi,b}_t}\le K_2\quad\mathbb{P}\text{-a.s.} \end{equation*} for all $t\in[0,T]$. \end{lemma} \begin{proof} Fix $t\in[0,T]$ and $(\xi,b)\in \mathcal{U}[0,T]$.
By Theorem V.52 in \cite{Protter2005}, the unique solution of \eqref{wealth} is \begin{align*} X_t^{\xi,b} &= x_0e^{rt} + \int_0^t e^{r(t-s)}\big((\mu-r)\xi_s+c(b_s)\big)ds + \int_0^t \sigma e^{r(t-s)}\xi_sdW_s \\ &\quad + \int_0^t \int_E e^{r(t-s)} \big(b_s y + \xi_s z\mathds{1}_{(L,\infty)}(y)\big)\Psi(ds,d(y,z)). \end{align*} Hence \begin{align*} &\frac{\big|f(t,X^{\xi,b}_t)\big|}{L^{\xi,b}_t} = \exp\bigg\{-\alpha x_0 e^{rT} -\int_0^t \alpha e^{r(T-s)}\Big( (\mu-r)\xi_s +c(b_s) - \frac12\alpha\sigma^2 e^{r(T-s)}\xi_s^2\Big) ds \\ &\;+ \int_0^t\lambda \sum_{k=1}^m p_k(s) \int_0^\infty e^{\alpha b_s y e^{r(T-s)}}\int_{(0,1)} e^{\alpha \xi_s z\mathds{1}_{(L,\infty)}(y) e^{r(T-s)}}Q(dz)f_k(y)dy ds -\lambda t\bigg\} \\ &\le\exp\bigg\{\bigg(\alpha e^{|r|T}\big(|\mu-r|K+(2+\eta+\theta)\kappa\big)+\frac12\alpha^2\,\sigma^2\, e^{2|r|T}K^2\\ &\qquad\qquad + \lambda\sum_{k=1}^m M_k\big(\alpha e^{|r|T}\big)M_Z\big(\alpha K e^{|r|T}\big)\bigg)T\bigg\}=: K_2, \end{align*} where $0<K_2<\infty$ is independent of $t\in[0,T]$ as well as of $(\xi,b)$. \end{proof} For convenience we define \begin{equation}\label{H} \mathcal{H} h(t,p;\xi,b) := \mathcal{L} h(t,p;\xi,b) + h_t(t,p) \end{equation} for all functions $h:[0,T]\times\Delta_m\to(0,\infty)$ and $(\xi,b)\in \mathbb{R}\times [0,1]$ for which the right-hand side is well-defined. Using this notation, the generalized HJB equation~\eqref{HJBg} can be written as \begin{equation}\label{HHJB} 0= \inf_{(\xi,b)\in[-K,K]\times[0,1]}\{\mathcal{H} g(t,p;\xi,b)\} \end{equation} at those points $(t,p)$ where $g_t(t,p)$ exists. \begin{lemma}\label{characG} Suppose that $(\xi,b)\in \mathcal U[0,T]$ is an arbitrary strategy and $h:[0,T]\times\Delta_m\to(0,\infty)$ is a bounded function such that $t\mapsto h(t,p)$ is absolutely continuous on $[0,T]$ for all $p\in\Delta_m$ and $p\mapsto h(t,p)$ is continuous on $\Delta_m$ for all $t\in[0,T]$.
Then, the function $G:[0,T]\times\mathbb{R}\times\Delta_m\to\mathbb{R}$ defined by \begin{equation*} G(t,x,p) := -e^{-\alpha x e^{r(T-t)}}h(t,p) \end{equation*} satisfies \begin{equation*} d G(t,X^{\xi,b}_t,p_t) = -e^{-\alpha X^{\xi,b}_t e^{r(T-t)}}\mathcal{H} h(t,p_t;\xi_t,b_t)dt + d\eta^{\xi,b}_t, \quad t\in[0,T], \end{equation*} where $(\eta^{\xi,b}_t)_{t\in[0,T]}$ is a martingale w.r.t.\ ${\mathfrak G}$ and we set $\mathcal{H} h(t,p;\xi,b)$ to zero at those points $(t,p)$ where $h_t$ does not exist. \end{lemma} \begin{proof} Let $(\xi,b)\in \mathcal U[0,T]$ and $h:[0,T]\times\Delta_m\to(0,\infty)$ be some function satisfying the conditions stated in the lemma and bounded by a constant $0<K_0<\infty$. Applying the product rule to $G\big(t,X^{\xi,b}_t,p_t\big)=f\big(t,X^{\xi,b}_t\big)h(t,p_t)$, we get \begin{equation*} d G\big(t,X^{\xi,b}_t,p_t\big) = h(t,p_{t-})df\big(t,X^{\xi,b}_t\big) + f\big(t,X^{\xi,b}_{t-}\big) d h(t,p_t) + d\big[f\big(\cdot,X^{\xi,b}_\cdot\big),h(\cdot,p_\cdot)\big]_t \end{equation*} and hence \begin{equation}\label{G} \begin{aligned} &d G\big(t,X^{\xi,b}_t,p_t\big) \\ &= f\big(t,X^{\xi,b}_t\big)h(t,p_t)\bigg(\alpha e^{r(T-t)}\Big(\frac12\alpha\sigma^2 e^{r(T-t)}\xi_t^2 - (\mu-r)\xi_t - c(b_t)\Big) \\ &\quad + \lambda\sum_{k=1}^m p_k(t)\int_0^\infty e^{\alpha b_t y e^{r(T-t)}}\int_{(0,1)}\!e^{\alpha \xi_t z \mathds{1}_{(L,\infty)}(y)e^{r(T-t)}}Q(dz)f_k(y)dy - \lambda\bigg)dt \\ &\quad-f\big(t,X^{\xi,b}_{t-}\big) h(t,p_{t-}) \alpha\sigma e^{r(T-t)} \xi_tdW_t \\ &\quad+\int_E f\big(t,X^{\xi,b}_{t-}\big)h(t,p_{t-})\big(e^{\alpha b_t y e^{r(T-t)}}e^{\alpha \xi_t z \mathds{1}_{(L,\infty)}(y) e^{r(T-t)}}-1\big)\hat\Psi(dt,d(y,z)) \\ &\quad+f\big(t,X^{\xi,b}_{t}\big) \bigg(h_t(t,p_t) - \lambda h(t,p_t) + \lambda\sum_{k=1}^m p_k(t)\int_0^\infty h(t,J(p_t,y))f_k(y)dy\bigg)dt \\ &\quad+\int_0^\infty f\big(t,X^{\xi,b}_{t-}\big)\big(h(t,J(p_{t-},y))-h(t,p_{t-})\big)\hat{\Psi}(dt,dy,(0,1)) \\ &\quad +
d\big[f\big(\cdot,X^{\xi,b}_\cdot\big),h(\cdot,p_\cdot)\big]_t. \end{aligned} \end{equation} Using the introduced compensated random measure $\hat{\Psi}$ the variation becomes \begin{align*} &d\big[f\big(\cdot,X^{\xi,b}_\cdot\big),h(\cdot,p_\cdot)\big]_t \\ &= \int_E f\big(t,X^{\xi,b}_{t-}\big)\big(h(t,J(p_{t-},y))-h(t,p_{t-})\big)\Big(e^{\alpha b_t y e^{r(T-t)}}e^{\alpha \xi_t z \mathds{1}_{(L,\infty)}(y) e^{r(T-t)}} -1\Big)\hat{\Psi}(dt, d(y,z)) \\ &\quad + \lambda f\big(t,X_t^{\xi,b}\big)\sum_{k=1}^m p_k(t) \int_0^\infty h(t,J(p_t,y))e^{\alpha b_t y e^{r(T-t)}}\int_{(0,1)}\!e^{\alpha \xi_t z \mathds{1}_{(L,\infty)}(y)e^{r(T-t)}}Q(dz)f_k(y)dydt \\ &\quad - \lambda f\big(t,X_t^{\xi,b}\big)h(t,p_t)\sum_{k=1}^m p_k(t) \int_0^\infty e^{\alpha b_t y e^{r(T-t)}}\int_{(0,1)}\!e^{\alpha \xi_t z \mathds{1}_{(L,\infty)}(y)e^{r(T-t)}}Q(dz)f_k(y)dydt \\ &\quad - \lambda f\big(t,X_t^{\xi,b}\big)\sum_{k=1}^m p_k(t) \int_0^\infty h(t,J(p_t,y))f_k(y)dydt + \lambda f\big(t,X_t^{\xi,b}\big)h(t,p_t)dt. \end{align*} Substituting this into \eqref{G}, we obtain \begin{align*} &dG\big(t,X^{\xi,b}_t,p_t\big) \\ &=f\big(t,X^{\xi,b}_t\big)\bigg(- \alpha\, e^{r(T-t)}h(t,p_t)\Big((\mu-r)\xi_t+c(b_t)-\frac12\alpha\sigma^2 e^{r(T-t)}\xi_t^2\Big) \\ & + \lambda f\big(t,X^{\xi,b}_t\big)\sum_{k=1}^m p_k(t) \int_0^\infty h(t,J(p_t,y))e^{\alpha b_t y e^{r(T-t)}}\int_{(0,1)}\!e^{\alpha \xi_t z \mathds{1}_{(L,\infty)}(y)e^{r(T-t)}}Q(dz)f_k(y)dy\\ & - \lambda\,h(t,p_t) + h_t(t,p_t)\bigg) dt - f\big(t,X^{\xi,b}_{t-}\big)\,h(t,p_{t-})\alpha\sigma e^{r(T-t)} \xi_t dW_t - f\big(t,X^{\xi,b}_{t-}\big)h(t,p_{t-})\hat\Psi(dt,E)\\ & + \int_E f\big(t,X^{\xi,b}_{t-}\big)\big(h(t,J(p_{t-},y))-h(t,p_{t-})\big)e^{\alpha b_t y e^{r(T-t)}}e^{\alpha \xi_t z \mathds{1}_{(L,\infty)}(y) e^{r(T-t)}}\hat{\Psi}(dt, d(y,z)), \end{align*} Therefore, by definition of the operator $\mathcal{H}$ given in~\eqref{H}, we have \begin{equation*} d G\big(t,X^{\xi,b}_t,p_t\big) = f\big(t,X^{\xi,b}_{t}\big)\mathcal{H} 
h(t,p_t;\xi_t,b_t)dt + d\eta^{\xi,b}_t, \end{equation*} where $\eta^{\xi,b}_t := \bar\eta^{\xi,b}_t - \hat\eta^{\xi,b}_t - \tilde\eta^{\xi,b}_t$ with \begin{align*} \bar\eta^{\xi,b}_t &:= \int_0^t \int_E f\big(s,X^{\xi,b}_{s-}\big)\big(h(s,J(p_{s-},y))-h(s,p_{s-})\big)e^{\alpha b_s y e^{r(T-s)}}e^{\alpha \xi_s z \mathds{1}_{(L,\infty)}(y) e^{r(T-s)}}\hat{\Psi}(ds, d(y,z)), \\ \hat\eta^{\xi,b}_t &:= \int_0^t f\big(s,X^{\xi,b}_{s-}\big)h(s,p_{s-})\hat\Psi(ds,E), \\ \tilde\eta^{\xi,b}_t &:= \int_0^t f\big(s,X^{\xi,b}_{s-}\big)h(s,p_{s-})\alpha \sigma e^{r(T-s)} \xi_s dW_s. \end{align*} To complete the proof we need to show that the introduced processes are martingales w.r.t.\ ${\mathfrak G}$ on $[0,T]$. According to Corollary VIII.C4 in \cite{bre}, the process $(\bar\eta^{\xi,b}_t)_{t\ge0}$ is a martingale w.r.t.\ $\mathfrak{G}$ if \begin{equation*} \mathbb{E}\bigg[\int_0^t\!\! \int_E\!\Big| f\big(s,X^{\xi,b}_{s}\big)\big(h(s,J(p_{s},y))-h(s,p_{s})\big)e^{\alpha b_s y e^{r(T-s)}}e^{\alpha \xi_s z \mathds{1}_{(L,\infty)}(y) e^{r(T-s)}}\Big| \hat\nu(ds,d(y,z))\bigg] <\infty. \end{equation*} Using the boundedness of $h$ with constant $K_0$, we obtain that the expectation above is less than or equal to \begin{equation*} 2\lambda K_0 M_Z\big(\alpha K e^{|r|T}\big)\sum_{k=1}^m M_k\big(\alpha e^{|r|T}\big)\int_0^t \mathbb{E}\big[\big|f\big(s,X^{\xi,b}_{s}\big)\big|\big]ds, \end{equation*} where, by Lemma~\ref{fbounded}, \begin{equation*} \mathbb{E}\big[\big|f\big(s,X^{\xi,b}_{s}\big)\big|\big] = \mathbb{E}_{\mathbb{Q}_s^{\xi,b}}\bigg[\frac{\big|f\big(s,X^{\xi,b}_{s}\big)\big|}{L_s^{\xi,b}}\bigg] \le K_2, \end{equation*} which yields the desired finiteness. The martingale property of $(\hat\eta^{\xi,b}_t)_{t\ge0}$ follows similarly. 
Moreover, by the boundedness of $h$ and $\xi$ as well as Lemma~\ref{fbounded}, it follows $$\mathbb{E}\Big[\big(f\big(s,X^{\xi,b}_{s-}\big)h(s,p_{s-})\alpha \sigma e^{r(T-s)} \xi_s\big)^2\Big]<\infty,$$ which implies the martingale property of $(\tilde\eta^{\xi,b}_t)_{t\ge0}$. \end{proof} The following result can be found in \cite{Mitrinovic1993}. \begin{lemma}\label{le:ineqsum} Let $\alpha_1\le \ldots \le \alpha_n$ and $\beta_1\le\ldots\le\beta_n$ be real numbers and $(p_1,\ldots,p_n)\in\Delta_n$. Then \begin{equation*} \sum_{j=1}^n p_j\alpha_j\beta_j \ge \sum_{j=1}^n p_j\alpha_j\sum_{k=1}^n p_k\beta_k. \end{equation*} \end{lemma} \subsection{Proofs} Recall the function $f:[0,T]\times\mathbb{R}\to\mathbb{R}$ defined by \eqref{f} and the operator $\mathcal{H}$ given by~\eqref{H}. \begin{proof}[Proof of Theorem~\ref{veri}] Let $h:[0,T]\times\Delta_m\to(0,\infty)$ be a function satisfying the conditions stated in the theorem. Note that every Lipschitz function is also absolutely continuous. We set \begin{equation*} G(t,x,p) := f(t,x)\,h(t,p), \quad (t,x,p)\in[0,T]\times\mathbb{R}\times\Delta_m. \end{equation*} Let us fix $t\in[0,T]$ and $(\xi,b)\in \mathcal U[t,T]$. From Lemma~\ref{characG}, it follows \begin{equation}\label{GT} G(T,X^{\xi,b}_T,p_T) = G(t,X^{\xi,b}_t,p_t) + \int_t^T f(s,X^{\xi,b}_s)\mathcal{H} h(s,p_s;\xi_s,b_s)ds + \eta^{\xi,b}_T - \eta^{\xi,b}_t , \end{equation} where $\stprT{\eta^{\xi,b}}$ is a martingale w.r.t.\ ${\mathfrak G}$ and we set $\mathcal{H} h(s,p_s;\xi,b)$ to zero at those points $s\in[t,T]$ where $h_t$ does not exist. Note that $h$ is partially differentiable w.r.t.\ $t$ almost everywhere in the sense of the Lebesgue measure according to the absolute continuity of $t\mapsto h(t,p)$ for all $p\in\Delta_m$. The generalized HJB equation~\eqref{HHJB} implies \begin{equation*} \mathcal{H} h(s,p_s;\xi_s,b_s)\ge0\quad{s\in[t,T]}. 
\end{equation*} As a consequence, \begin{equation*} \int_t^T f(s,X^{\xi,b}_s)\,\mathcal{H} h(s,p_s;\xi_s,b_s) ds\le 0, \end{equation*} due to the negativity of $f$. Thus, by~\eqref{GT}, we get \begin{equation}\label{GGeta} G(T,X^{\xi,b}_T,p_T)\le G(t,X^{\xi,b}_t,p_t) + \eta^{\xi,b}_T-\eta^{\xi,b}_t. \end{equation} Using the boundary condition~\eqref{hHJBbcond}, we obtain \begin{equation*} G(T,x,p) = f(T,x)h(T,p) = f(T,x) = -e^{-\alpha x}=U(x). \end{equation*} Now, we take the conditional expectation in \eqref{GGeta} given $(X^{\xi,b}_t,p_t)=(x,p)$ on both sides of the inequality, which yields, by the martingale property of $\eta^{\xi,b}$, \begin{equation*} \mathbb{E}^{t,x,p}\big[U(X^{\xi,b}_T)\big]\le G(t,x,p). \end{equation*} Taking the supremum over all investment and reinsurance strategies $(\xi,b)\in \mathcal U[t,T]$, we obtain \begin{equation}\label{VG} V(t,x,p) \le G(t,x,p). \end{equation} To show equality, note that $(\xi^\star_s,b^\star_s)$ given by~\eqref{optstr} (with $g$ replaced by $h$ in $A(s,p)$ and $B(s,p)$) is the unique minimizer of the HJB equation \eqref{HJBg}. Therefore, \begin{equation*} \mathcal{L} h(s,p_s;\xi^{\star}_s,b^\star_s) + \inf_{\varphi\in\partial^C h_{p_s}(s)}\{\varphi\} = 0. \end{equation*} So we can deduce that \begin{equation*} \mathcal{H} h(s,p_{s};\xi^{\star}_s,b^{\star}_s) = 0,\quad s\in[t,T]. \end{equation*} This implies \begin{equation*} \int_t^T f(s,X^{\xi^\star,b^\star}_s)\,\mathcal{H} h(s,p_s;\xi^{\star}_s,b^{\star}_s)ds = 0. \end{equation*} Consequently, \begin{equation*} U(X^{\xi^\star,b^\star}_T)= G(T,X^{\xi^\star,b^\star}_T,p_T) = G(t,X^{\xi^\star,b^\star}_t,p_t) + \eta^{\xi^\star,b^\star}_T - \eta^{\xi^\star,b^\star}_t. \end{equation*} Again, taking the conditional expectation given $(X^{\xi^\star,b^\star}_t,p_t)=(x,p)$ on both sides then yields \begin{equation*} \mathbb{E}^{t,x,p}\big[U(X^{\xi^\star,b^\star}_T)\big] = G(t,x,p)= -e^{-\alpha x e^{r(T-t)}}h(t,p) \end{equation*} and the proof is complete. 
\end{proof} \begin{proof}[Proof of Lemma \ref{propg}] \begin{enumerate} \item The boundedness and positivity are proven by the same line of arguments as in \cite[Lemma 4.4 (a)]{BaeuerleLeimcke2020}. \item Follows by conditioning. \item Follows again by conditioning. \item The concavity is proven in much the same way as in \cite[Lemma 4.4 (c)]{BaeuerleLeimcke2020}. \item The Lipschitz condition is proven in much the same way as in \cite[Lemma 6.1 (d)]{BaeuerleRieder2007}.\qedhere \end{enumerate} \end{proof} \begin{proof}[Proof of Theorem~\ref{existenceHJB}] Fix $t\in[0,T)$ and $(\xi,b)\in \mathcal U[t,T]$. Let $\tau$ be the first jump time of $X^{\xi,b}$ after $t$ and $t'\in(t,T]$. It follows from Lemma~\ref{propg} and Lemma \ref{characG} that \begin{equation}\label{eqprDir:Vtilde} V(\tau\wedge t',X^{\xi,b}_{\tau\wedge t'},p_{\tau\wedge t'}) = V(t,X^{\xi,b}_t,p_t) + \int_t^{\tau\wedge t'} f(s,X^{\xi,b}_s)\,\mathcal{H}g(s,p_s;\xi_s,b_s)ds + \eta^{\xi,b}_{\tau\wedge t'} - \eta^{\xi,b}_t, \end{equation} where $\stprT{\eta^{\xi,b}}$ is a martingale w.r.t.\ ${\mathfrak G}$ and we set $\mathcal{H}g(s,p_s;\xi_s,b_s)$ to zero at those $s\in[t,T]$ where $g_t(s,p_s)$ does not exist. By the continuity of $V$, for any $\varepsilon>0$ we can construct a strategy $(\xi^\varepsilon,b^\varepsilon)\in \mathcal U[t,T]$ with $(\xi^\varepsilon_s,b^\varepsilon_s)=(\xi_s,b_s)$ for all $s\in[t,\tau\wedge t']$ such that \begin{align*} \mathbb{E}^{t,x,p}\Big[V(\tau\wedge t',X^{\xi,b}_{\tau\wedge t'},p_{\tau\wedge t'})\Big] &\le \mathbb{E}^{t,x,p}\Big[\mathbb{E}^{\tau\wedge t',X^{\xi,b}_{\tau\wedge t'},p_{\tau\wedge t'}}\Big[U(X_T^{\xi^\varepsilon,b^\varepsilon})\Big]\Big] +\varepsilon \le \mathbb{E}^{t,x,p}\Big[U(X_T^{\xi^\varepsilon,b^\varepsilon})\Big]+\varepsilon \\ &\le V(t,x,p)+\varepsilon. \end{align*} From the arbitrariness of $\varepsilon>0$ we conclude $$V(t,x,p) \ge \mathbb{E}^{t,x,p}\Big[V(\tau\wedge t',X^{\xi,b}_{\tau\wedge t'},p_{\tau\wedge t'})\Big]. 
$$ Using this statement and \eqref{eqprDir:Vtilde} we obtain \begin{align*} 0 &\ge \lim_{t'\downarrow t}\mathbb{E}^{t,x,p}\bigg[\frac{1}{t'-t}\int_t^{t'} f(s,X^{\xi,b}_s)\,\mathcal{H}g(s,p_s;\xi_s,b_s) ds \big| t'<\tau\bigg]\mathbb{P}^{t,x,p}(t'<\tau) \\ &\quad + \lim_{t'\downarrow t}\mathbb{E}^{t,x,p}\bigg[\frac{1}{t'-t}\int_t^{\tau} f(s,X^{\xi,b}_s)\,\mathcal{H}g(s,p_s;\xi_s,b_s)ds\big| t'\ge\tau\bigg]\mathbb{P}^{t,x,p}(t'\ge\tau), \end{align*} where \begin{equation*} \lim_{t'\downarrow t}\mathbb{P}^{t,x,p}(\tau\le t') = 1-\lim_{t'\downarrow t}e^{-\lambda(t'-t)}=0. \end{equation*} Consequently, \begin{equation*} 0\ge \lim_{t'\downarrow t}\mathbb{E}^{t,x,p}\bigg[\frac{1}{t'-t}\int_t^{t'} f(s,X^{\xi,b}_s)\mathcal{H}g(s,p_s;\xi_s,b_s)ds\Ind{t'<\tau}\bigg]. \end{equation*} By the dominated convergence theorem, we can interchange the limit and the expectation, and we obtain, by the Lebesgue differentiation theorem and $\Ind{t'<\tau}\to1$ $\mathbb{P}$-a.s.\ for $t'\downarrow t$, \begin{equation*} 0\ge \mathbb{E}^{t,x,p}\bigg[f(t,X^{\xi,b}_t)\,\mathcal{H}g(t,p_t;\xi_t,b_t)\bigg]. \end{equation*} From now on, let $(\xi,b)\in[-K,K]\times[0,1]$ and $\varepsilon>0$ as well as $(\bar{\xi},\bar{b})\in \mathcal U[t,T]$ be a fixed strategy with $(\bar{\xi}_s,\bar{b}_s)\equiv(\xi,b)$ for $s\in[t,t+\varepsilon)$. Then \begin{equation*} 0\ge \mathbb{E}^{t,x,p}\bigg[f(t,X^{\bar{\xi},\bar{b}}_t)\,\mathcal{H}g(t,p_t;\bar{\xi}_t,\bar{b}_t)\bigg] = f(t,x)\mathcal{H}g(t,p;\xi,b) \end{equation*} at those points $(t,p)$ where $g_t(t,p)$ exists. Due to the negativity of $f$, we get \begin{equation*} 0\le \mathcal{H}g(t,p;\xi,b). \end{equation*} We next show the inequality above in the case where $g_t$ does not exist. For this purpose, for $p\in\Delta_m$ we denote by $M_p\subset[0,T]$ the set of points at which $g_p^\prime$ exists. 
On the basis of Theorem~\ref{th:genCgco}, we have, for any $p\in\Delta_m$, \begin{equation*} \partial^C g_p(t) = co\Big\{\lim_{n\to\infty} g_p^\prime(t_n): t_n\to t, t_n\in M_p\Big\}. \end{equation*} That is, for every $\varphi\in\partial^C g_p(t)\subset\mathbb{R}$, there exist $u\in\mathbb{N}$ and $(\beta_1,\ldots,\beta_u)\in\Delta_u$ such that $\varphi = \sum_{i=1}^u \beta_i\,\varphi^i$, where $\varphi^i = \lim_{n\to\infty} g_p^\prime(t_n^i)$ for sequences $(t_n^i)_{n\in\mathbb{N}}$ with $t_n^i\in M_p$ and $\lim_{n\to\infty}t_n^i=t$. From what has already been proved, it can be concluded that, for any $i=1,\ldots,u$, \begin{equation*} 0 \le \mathcal{L}g(t_n^i,p;\xi,b)+g_t(t_n^i,p). \end{equation*} Thus, by the continuity of $t\mapsto g(t,p)$, $p\mapsto g(t,p)$ and $p\mapsto J(p,y)$, we get for $i=1,\ldots, u$ \begin{equation*} 0 \le \beta_i\mathcal{L}g(t,p;\xi,b)+\beta_i\lim_{n\to\infty}g_t(t_n^i,p), \end{equation*} which yields \begin{equation*} 0 \le \mathcal{L}g(t,p;\xi,b)+\sum_{i=1}^u\beta_i\lim_{n\to\infty}g_t(t_n^i,p) = \mathcal{L}g(t,p;\xi,b)+ \varphi. \end{equation*} Due to the arbitrariness of $\varphi\in\partial^C g_p(t)$ and $(\xi,b)\in[-K,K]\times[0,1]$, we obtain \begin{equation*} 0 \le \inf_{(\xi,b)\in[-K,K]\times[0,1]}\mathcal{L}g(t,p;\xi,b)+ \inf_{\varphi\in\partial^C g_p(t)}\{\varphi\}. \end{equation*} Our next objective is to establish the reverse inequality. For any $\varepsilon>0$ and $0\le t<t'\le T$, there exists a strategy $({\xi^{\varepsilon,t^\prime},b^{\varepsilon,t^\prime}})\in \mathcal{U}[t,T]$ such that \begin{equation*} V(t,x,p)-\varepsilon(t'-t)\le \mathbb{E}^{t,x,p}\Big[U\big(X_T^{\xi^{\varepsilon,t^\prime},b^{\varepsilon,t^\prime}}\big)\Big] \le \mathbb{E}^{t,x,p}\Big[V\big(\tau\wedge t',X^{\xi^{\varepsilon,t^\prime},b^{\varepsilon,t^\prime}}_{\tau\wedge t'},p_{\tau\wedge t'}\big)\Big]. 
\end{equation*} Using Lemma \ref{characG} it follows \begin{equation*} -\varepsilon(t'-t)\le \mathbb{E}^{t,x,p}\bigg[\int_t^{\tau\wedge t'} f\big(s,X^{\xi^{\varepsilon,t^\prime},b^{\varepsilon,t^\prime}}_s\big)\,\mathcal{H}g\big(s,p_s;\xi^{\varepsilon,t^\prime}_s,b^{\varepsilon,t^\prime}_s\big)ds\bigg]. \end{equation*} In the same way as before, we get \begin{align*} -\varepsilon&\le \lim_{t'\downarrow t}\mathbb{E}^{t,x,p}\bigg[\frac{1}{t'-t}\int_t^{t'} f\big(s,X^{\xi^{\varepsilon,t^\prime},b^{\varepsilon,t^\prime}}_s\big)\,\mathcal{H}g\big(s,p_s;\xi^{\varepsilon,t^\prime}_s,b^{\varepsilon,t^\prime}_s\big)ds\Ind{t'<\tau}\bigg] \\ &\le\lim_{t'\downarrow t}\mathbb{E}^{t,x,p}\bigg[\frac{1}{t'-t}\int_t^{t'}\! f\big(s,X^{\xi^{\varepsilon,t^\prime},b^{\varepsilon,t^\prime}}_s\big)\!\inf_{(\xi,b)\in[-K,K]\times[0,1]}\mathcal{H}g\big(s,p_s;\xi,b\big)ds\Ind{t'<\tau}\bigg]. \end{align*} We can again interchange the limit and the expectation by the dominated convergence theorem, which yields \begin{equation*} -\varepsilon\le \mathbb{E}^{t,x,p}\bigg[\lim_{t'\downarrow t}\frac{1}{t'-t}\int_t^{t'} f\big(s,X^{\xi^{\varepsilon,t^\prime},b^{\varepsilon,t^\prime}}_s\big)\,\inf_{(\xi,b)\in[-K,K]\times[0,1]}\mathcal{H}g\big(s,p_s;\xi,b\big)ds\Ind{t'<\tau}\bigg]. \end{equation*} Thus the same conclusion can be drawn as above, i.e., \begin{equation*} -\varepsilon \le f(t,x)\inf_{(\xi,b)\in[-K,K]\times[0,1]}\mathcal{H}g(t,p;\xi,b) \end{equation*} at those points where $g_t(t,p)$ exists. According to the negativity of $f$ and the arbitrariness of $\varepsilon>0$, we get, by $\varepsilon\downarrow0$, \begin{equation*} 0 \ge \inf_{(\xi,b)\in[-K,K]\times[0,1]}\mathcal{H}g(t,p;\xi,b) \end{equation*} at those points where $g_t(t,p)$ exists. In the same way as before, we obtain, in the case where $g$ is not differentiable w.r.t.\ $t$, that \begin{equation*} 0 \ge \inf_{(\xi,b)\in[-K,K]\times[0,1]}\mathcal{L}g(t,p;\xi,b) + \inf_{\varphi\in\partial^C g_p(t)}\{\varphi\}. 
\end{equation*} Summarizing, we have equality in the previous expression. The optimality of $(\xi^{\star},b^{\star})$ follows as in the proof of Theorem \ref{veri}. \end{proof}
Well, I managed to post some pictures shortly after the race start, but due to limited phone signal, no phone battery charge and camera malfunctions, I was unable to post much more from the water. We managed to get outside of the Needles before the 'big boats' started to overtake us, first Rambler 88 (we were definitely pointing higher than them), soon to be followed by the Volvo Ocean Race fleet. We got a really close view of all of them… briefly… before they disappeared over the horizon. On every watch, we were accompanied by dolphins, and the sunsets and sunrises were worth the lack of sleep. On Wednesday morning, almost 3 days after leaving Cowes, we spotted the Fastnet Rock. Too Close For Comfort? At least we got some decent pictures of the Fastnet! Once we finally managed to extricate ourselves from the rock, it was a downwind drag race all the way back to Plymouth. The wind was pretty light at times, which made it harder to keep the boat moving and catch any of the boats ahead. We finished in an elapsed time of 4 days, 10 hours, 52 minutes and 25 seconds, coming 30th in our class (out of 64 starters). Thanks to all the crew on board Winston Logic for a fabulous race, and for all the support provided by Sailing Logic, including the beers and Dark and Stormy's on arrival in Plymouth! One of our crew, Oscar Watts, put together a video of our race – he even managed to capture some of the many dolphins that escorted us on our journey. The RORC Fastnet 2017 starts tomorrow morning – I'll be racing on a First 40 called Winston Logic with a number of my crew mates from Derry~Londonderry~Doire (Clipper Round the World 15-16). Yellowbrick will be tracking the race; you can follow it on the RORC website (if you are using an iPhone or an iPad, you'll need to download the free Yellowbrick app and add the Rolex Fastnet Race 2017).
How Jalsa Casting Director Selected Surya Kasibhatla To Play The Child With Cerebral Palsy Casting director Anmol Ahuja describes the process of casting a child with special needs Anmol Ahuja casts with an eye for detailing and a heart for inclusion. Part of the Paatal Lok team that went to Manipur to find Mairembam Ronaldo Singh to play the role of the trans character, he believes in a more professional, more responsible casting process. "Casting as a department is not very old. Excel Entertainment, Anurag Kashyap, and Vishal Bhardwaj actually brought this whole concept of casting. So when there is a specialized job, it must be done in a more professional manner." For Jalsa, he and his team scouted for a child with cerebral palsy to play the role of Ayush, the neurodivergent son of Maya, the journalist played by Vidya Balan. They found Surya Kasibhatla, who was not only confident and fit the bill, but to Ahuja's joy "even had a similar profile to that of Vidya Balan". (He mentions the same trivia about Rohini Hattangadi, who plays Maya's mother and looks very similar to Vidya Balan's mother.) When Kasibhatla was four years old, he told his parents he wanted to be an actor. It took a few years and a few videos before his path crossed with those of Anmol Ahuja and team Jalsa. In this conversation, edited for length and clarity, Ahuja goes into the details of casting someone from the community that is being presented on screen. What was your initial brief for casting the character of Ayush? Inclusivity is something I am a firm believer in. When we read the script, we knew casting someone for Ayush wouldn't be easy. So we thought why not look for a child who has this condition (of cerebral palsy), but we work with the child and see if we can get the right performance out. Deepak Agarwal, a team member, helped look for kids with this condition. We found some in Doha, South Africa, a few in Mumbai also. 
Before auditioning them we would do a workshop where they become familiar with the process. We made it seem like we are playing a game, so there isn't heartbreak if they don't get selected. In parallel, we were also looking at kids who are doing theater, without the condition, as a back-up plan. We were auditioning both sets of kids. It wasn't easy, but because of the second wave of COVID-19, we had a lot of time which we made use of. How did you come across Surya Kasibhatla? His story is quite entertaining. He is a Hyderabad-born child, son of two techie parents. When he was 9-10 years old, they shifted to Texas. He has his own website and everything. He is into software development at the age of thirteen! If you remember the first scene of Jalsa, Ayush is making a video on how to bowl. Surya had done a similar video, which is there on YouTube, giving tips on how to play cricket. The moment we saw that video, we knew we had to get him. We auditioned him and sent the video to Suresh sir who also agreed after doing a video call with him. But he also wanted to see how close Surya is to the character of Ayush. So we gave Surya little exercises — for example, what his dreams are, etc. — and he would send us these videos. When Surya came to Mumbai we started off with film-based workshops conducted by Pooja Swaroop and Suresh sir. Suresh sir also has a family member with this condition, so he knows how to handle such kids. He also rewrote the character as per Surya's personality. Rewrote in what ways? For example, initially there were long dialogues. While we were auditioning we realized that a kid with this syndrome cannot speak this much. So how to say the same thing with fewer words was something he thought of. It took us close to about 2.5 to 3 months to cast this character. Usually, how long would it take to cast a child actor? Maximum 20 days. But because we were clear that we wanted to be inclusive in the casting process, it took longer. 
How many people did you audition? About 150 kids without the condition, and 40-45 kids with the condition. Both processes were running in parallel because of the scare of the second wave, as a solid back-up. But from the start, it was clear that we must try and find a kid with cerebral palsy. What were these workshops like? My team, including (casting director) Abhishek Banerjee, comes from a theater background. Before the play starts a director does a workshop to understand the space, the character, not by giving you a lecture, but by playing different games and exercises through which one understands how to get into the character easily. That is the idea. For example, we used to get 10 kids in a batch, and for 2-3 hours everyone would introduce themselves, and the next person would have to fold the previous person's name into their introduction. This whole idea of competition generally comes into play, which we were trying to avoid. For us, at the end of the day, we can only cast one kid. But no child must feel less important. They must enjoy the whole process, learn something new.
'use strict'

const html2jade = require('html2jade')

// Transformer metadata: the input name and the format produced.
exports.name = 'html2jade'
exports.outputFormat = 'jade'

// Promise wrapper around html2jade's callback-based convertHtml.
exports.renderAsync = function (str, options) {
  return new Promise((resolve, reject) => {
    html2jade.convertHtml(str, options, (err, jade) => {
      if (err) {
        reject(err)
      } else {
        resolve(jade)
      }
    })
  })
}
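The module above follows a common Node.js pattern: wrapping a callback-based API in a Promise. The sketch below shows the same pattern against a stand-in callback function so it can run without html2jade installed; the function names here (`convertUpper`, `renderUpperAsync`) are illustrative and not part of html2jade's API:

```javascript
'use strict'

// Stand-in for any Node-style async API that calls cb(err, result).
function convertUpper (str, cb) {
  if (typeof str !== 'string') {
    cb(new TypeError('expected a string'))
  } else {
    cb(null, str.toUpperCase())
  }
}

// Same shape as renderAsync above: resolve on success, reject on error.
function renderUpperAsync (str) {
  return new Promise((resolve, reject) => {
    convertUpper(str, (err, result) => {
      if (err) {
        reject(err)
      } else {
        resolve(result)
      }
    })
  })
}

// Callers can now use .then()/.catch() or async/await.
renderUpperAsync('<h1>hello</h1>').then(out => console.log(out)) // prints <H1>HELLO</H1>
```

On Node 8+, `util.promisify` performs the same wrapping automatically for any function that follows the `(err, result)` callback convention.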
Q: Getting a reference to the current instance of the view model

Let's say I have a WPF application which makes use of the MVVM pattern. The application's main window defines its data context in the XAML: <Window.DataContext> <vm:MainWindowViewModel/> </Window.DataContext> Is it possible to get a reference to the current instance of MainWindowViewModel in the XAML code-behind after InitializeComponent()? I know this is not recommended when using MVVM but I can't figure out any other way of solving my problem.

A: Sure: var viewModel = DataContext as MainWindowViewModel; Just cast your DataContext to the type of your view model. Note that the as operator returns null if the DataContext is not of that type, so check the result before using it.

A: You can hold it in some static class and define it as a static resource.
Sun GIF Solutions Sun Life MFS Low Volatility Global Equity Investment Series O

Management team
MFS Investment Management: James Fallon - Portfolio Manager; Jonathan Sage, CFA - Portfolio Manager; Matthew Krummell, CFA - Portfolio Manager; John Stocks, CFA - Portfolio Manager

Document centre - Investment
Products-at-a-glance; Sun Guaranteed Investment Fund Solutions information folder and individual variable annuity contract

Fund suitability
This Fund may be suitable for investors who: seek to add global geographic diversification with a focus towards lowering portfolio volatility; are long term investors; are comfortable with medium investment risk

Fund essentials
Min initial inv $; Min additional inv $; MER %^; Management fee %

Management company
SLGI Asset Management Inc.
Sub-advisor: MFS Investment Management Canada Limited; MFS Institutional Advisors, Inc.

Risk profile - Medium
Refer to Simplified Prospectus for more detail.

INVESTMENT MANAGEMENT APPROACH
MFS Investment Management has guided investors through every market condition imaginable. From the highest highs to the lowest lows, the firm has spent decades refining its investment process according to four key principles: work from the bottom up, take a global perspective, collaborate, and manage risk. Bottom up. MFS believes detailed fundamental analysis of individual companies is the cornerstone of successful investing. Global. Analysts in many of the world's major financial centers scour the globe for opportunities. Collaborative. 
MFS believes ideas improve when they're carefully reviewed and constantly challenged. Risk managed. Risk management is intrinsic to MFS' entire investment process. MFS Investment Management joined forces with McLean Budden in Canada in November 2011. The two firms have 150 years of investing history between them. Now with a unified commitment to fundamental and balanced research from all corners of the globe, MFS is looking forward to 150 more.

Objective
The fund's investment objective is to achieve long-term capital appreciation with low volatility by investing primarily in a diversified portfolio of equity securities of issuers located anywhere in the world or indirectly by investing in mutual funds (including exchange-traded funds) that invest primarily in such securities.

NOTES AND DISCLAIMERS
The manager, Sun Life Global Investments (Canada) Inc., obtained approval of securityholders to change the way certain operating expenses are charged to the fund. The manager will replace the current method with a fixed rate administration fee, effective on or about January 1, 2015. The annual fixed rate administration fee will be 0.20% of the series value. Commissions, trailing commissions, management fees and expenses all may be associated with mutual fund investments. Please read the prospectus before investing. For periods greater than one year, the indicated rates of return are the average annual compounded total returns as of the date indicated including changes in unit value and reinvestment of all distributions and do not take into account sales, redemption, distribution or optional charges or income taxes payable by any security holder that would have reduced returns. Mutual funds are not guaranteed, their values change frequently and past performance may not be repeated. 
Ratings and/or ranking information is subject to change monthly. Morningstar is an independent organization that groups funds with generally similar investment objectives for comparison purposes and ranks them on a historical basis. Morningstar star ratings are an objective, quantitative measure of a fund's historical risk-adjusted performance relative to other funds in its category, and are calculated from a fund's 3, 5, and 10-year returns measured against 91-day Treasury bill and peer group returns. The top 10% of the funds in a category earn five stars; the next 22.5% four stars; the following 35% three stars; the next 22.5% two stars, and the bottom 10% one star. The Overall Rating is a weighted combination of the 3, 5, and 10-year ratings. Only funds with at least a three-year track record are considered, and ratings are calculated only for categories with at least 20 funds. Morningstar quartile rankings show how well a fund has performed compared to all other funds in its peer group. Each fund within a peer group is ranked based on its performance, and these rankings are broken into quarters or quartiles. Within a group, the top 25% (or quarter) of the funds are in the first quartile, the next 25% are in the second quartile, the next group in the third quartile, and the bottom 25% of funds with the poorest relative performance are in the fourth quartile. The point in which half the funds had better performance and half had worse performance is the median. If 100 funds are being compared, there would be four quartiles of twenty-five funds each. The median would be the fiftieth fund. For more details on the calculation of Morningstar star ratings or quartile rankings, please see www.morningstar.ca. The Morningstar Style BoxTM reveals a fund's investment strategy. For equity funds the vertical axis shows the market capitalization of the stocks owned and the horizontal axis shows investment style (value, blend or growth). 
\section{\bf Introduction}\label{I} Coronavirus disease 2019 (COVID-19) is caused by a new Coronavirus (SARS-CoV-2) that has spread rapidly around the world. Most infected people have no symptoms or suffer from mild, flu-like symptoms, but some become seriously ill and can die. In recent weeks the coronavirus has had too many opportunities to spread again. After successfully tamping down the first surge of infection and death, Europe is now in the middle of a second coronavirus wave as it moves into winter \cite{cacciapaglia}, \cite{baley}, \cite{sonia}, \cite{vynnycky}, \cite{gleick}, \cite{coullet}. Even though several vaccines for COVID-19 are currently being produced, other ways of slowing its spread have to continue to be explored. One way of controlling the disease is through lockdown and quarantine measures. The lockdown measures are emergency measures or conditions imposed by governmental authorities, as during the outbreak of an epidemic disease, that intervene in situations where the risk of transmitting the virus is greatest. Under these measures, people are required to stay in their homes and to limit travel movements and opportunities for individuals to come into contact with each other, such as dining out or attending large gatherings. The lockdown measures are more effective when combined with other measures such as quarantine. Quarantine means separating from other healthy people those who may have the virus after being in close contact with an infected person, or because they have returned from an area with high infection rates. Similar recommendations include isolation (like quarantine, but for people who tested positive for COVID-19) and physical distancing (people without symptoms keep a distance from each other). Several governments have therefore decided that stricter lockdown and quarantine measures are needed to bring down the number of infections. In this work we shall propose interventions which are as targeted as possible.
Unfortunately, the greater the number of infections, the more sweeping the measures have to be. Tightening the measures will impact our society and the economy, but this step is needed to get the coronavirus under control. \noindent The aim of this work is to model the dynamics of the infectious, recovered, and deceased people when the population is subject to lockdown and quarantine measures imposed by governments. We shall see that the combined effect of the restriction measures with the action of the Hospitals and Health Institutes is able to contain and even dampen the spread of the SARS-CoV-2 epidemic. The dynamics of the entire process will be obtained by taking into account the theoretical results recently appeared in the literature \cite{sonnino} and \cite{sonnino1} and by adopting a \textit{kinetic-type reactions} approach. In this framework, the dynamics of the Health Institutes is obtained by taking inspiration from the Michaelis-Menten enzyme-substrate reaction model (the so-called \textit{MM reaction} \cite{MM1}, \cite{MM2}, and \cite{MM3}) where the \textit{enzyme} is associated to the \textit{available hospital beds}, the \textit{substrate} to the \textit{infected people}, and the \textit{product} to the \textit{recovered people}, respectively. In other words, everything happens as if the hospital beds act as a \textit{catalyzer} in the hospital recovery process \cite{sonnino3}. In addition, the time-delays for the recovery and death processes are duly taken into account. More specifically, in our model, we have the following 10 compartments: \noindent $S$ = Number of susceptible people.
This number concerns individuals not yet infected with the disease at time $t$, but who are still susceptible to it; \noindent $I$ = Number of people who have been infected and are capable of spreading the disease to those in the susceptible category; \noindent $I_h$ = Number of hospitalised infected people; \noindent $I_Q$ = Number of people in quarantine. This number concerns individuals who may have the virus after being in close contact with an infected person; \noindent $R$ = Total number of recovered people, meaning specifically individuals having survived the disease and now immune. Those in this category are not able to be infected again or to transmit the infection to others; \noindent $r_h$ = Total number of recovered people previously hospitalised; \noindent $D$ = Total number of people who died from COVID-19; \noindent $d_h$ = Total number of previously hospitalised people who died from COVID-19; \noindent $L$ = Number of inhibitor sites mimicking the lockdown measures; \noindent $Q$ = Number of inhibitor sites mimicking the quarantine measures. \noindent In addition, $N$, defined in Eq.~(\ref{6.4}), denotes the number of total cases. \noindent The manuscript is organised as follows. In Section~\ref{ODE} we derive the deterministic Ordinary Differential Equations (ODEs) governing the dynamics of the infectious, recovered, and deceased people. The lockdown and quarantine measures are modelled in Subsection~\ref{LQM}. The dynamics of the hospitalised individuals (i.e., the infectious, recovered, and deceased people) can be found in Subsection~\ref{H}. As mentioned above, the corresponding ODEs are obtained by considering the \textit{MM reaction model}. The equations governing the dynamics of the full process and the related \textit{basic reproduction number} are reported in Section~\ref{TODEs} and Section~\ref{BRN}, respectively. It is worth mentioning that our model also foresees the second wave of infection by Coronavirus.
As shown in Section~\ref{SIRD}, in the absence of the restrictive measures, and by neglecting the role of the Hospitals and the delays in the reaction steps, our model reduces to the classical \textit{Susceptible-Infectious-Recovered-Deceased-Model} (SIRD-model) \cite{sird}. Finally, Section~\ref{Applications} shows the good agreement between the theoretical predictions and real data for Belgium, France and Germany. The last Section~\ref{C} presents the conclusions and perspectives of this manuscript. \section{\bf Model for COVID-19 in Presence of the Lockdown and Quarantine Measures}\label{ODE} \noindent As mentioned in the Introduction, the population is assigned to compartments with labels $S$, $I$, $R$, $D$, etc. The dynamics of these compartments is generally governed by deterministic ODEs, even though stochastic differential equations should be used to describe more realistic situations \cite{sonnino}. In this Section, we shall derive the deterministic ordinary differential equations obeyed by the compartments. This task will be carried out by taking into account the theoretical results recently appeared in the literature \cite{sonnino1}, \cite{sonnino2} and without neglecting the delays in the reaction steps. \subsection{Modelling the Susceptible People} \noindent If a susceptible person encounters an infected person, the susceptible person will be infected as well. So, the scheme simply reads \begin{equation}\label{S1} S + I \xrightarrow{\mu} 2I \end{equation} \subsection{Modelling the Lockdown and Quarantine Measures}\label{LQM} \noindent The lockdown measures are mainly based on the isolation of the susceptible people (possibly accompanied by the hospitalisation of infected people) and, above all, on the removal of susceptible people from circulation. It is assumed that the lockdown and quarantine measures can be modelled by some kind of inhibitor reaction where the susceptible people and the infected people can be \textit{trapped} into the inactive states $S_L$ and $I_Q$, respectively. Indicating with $L$ and $Q$ the inhibitor sites mimicking the lockdown and the quarantine measures, respectively, we get \begin{align}\label{LQM1} &S + L \xrightleftharpoons[k_{LMax}-k_L]{k_L} S_L\\ &I \xrightarrow{k_Q} I_Q\xRightarrow{k_{QR}, \ t_{QR}} R\nonumber \end{align} \noindent In the scheme~(\ref{LQM1}), symbol $\implies$ stands for a \textit{delayed reaction}, just like \textit{enzyme degradation processes} for instance. Here, $L_{Max}=S_L+L$; hence, if $L\simeq L_{Max}$, almost perfect lockdown measures would totally inhibit virus propagation by inhibiting all the susceptible people $S$ and the infected people $I$. Less effective lockdown measures would leave a fraction of $I$ free to spread the virus. The number of inhibitor sites may be a fraction of the number of the infected people. Fig.~\ref{LEP} shows the behaviour of the lockdown efficiency parameter adopted in our model. For simplicity, we have chosen a parameter which is constant, $k_{LMax}\neq0$, inside the time-interval $t_1\leq t\leq t_2$ and vanishes outside it. The \textit{inverse lockdown efficiency parameter} is $k^{-1}_L=k_{LMax}-k_L$, which is equal to $k_{LMax}$ outside the time-interval $t_1\leq t\leq t_2$ and vanishes inside it. \begin{figure}[hbt!] \hskip 0.5truecm \includegraphics[width=10cm, height=6cm]{LEP.pdf} \caption{ \textit{{\bf Lockdown Efficiency Parameter.} For simplicity, in our model the lockdown efficiency parameter $k_L$ is a \textit{door-step function}.
This function is constant, $k_{LMax}\neq 0$, within the range $t_1\leq t\leq t_2$ and zero outside it.} } \label{LEP} \end{figure} Finally, from Schemes~(\ref{S1}) and (\ref{LQM1}), we get the O.D.E.s for $S$, $S_L$, and $I_Q$: \begin{align}\label{LQM2} &{\dot S}=-\mu SI - k_L S(L_{Max}-S_L)+k^{-1}_LS_L\\ &{\dot S}_L=k_L S(L_{Max}-S_L)-k^{-1}_LS_L\nonumber\\ &{\dot I_Q}=k_Q I-\chi{ I_Q}_{(t-t_R)}\nonumber \end{align} \noindent with the \textit{dot} above the variables denoting the \textit{time derivative}. \noindent \subsection{O.D.E. for the Total Recovered People} \noindent At the first approximation, the O.D.E. for the \textit{total recovered people} $R$ (i.e. the total individuals having survived the disease) is trivially obtained by considering the following \textit{kinetic scheme}: \begin{align}\label{R1} &I \xRightarrow{\chi ,\ t_R} R\\ &I_Q \xRightarrow{k_{QR} ,\ t_{QR}} R\nonumber \end{align} \noindent That is, the rate of $R$ is approximately proportional to the number of the infected people $I$ at time $t$, i.e.\footnote{Notice that the first \textit{reaction} in the scheme Eq.~(\ref{R1}) is the dynamic equation for the total recovered people adopted in the SIRD-model \cite{sird}.} \begin{equation}\label{R} {\dot R}=\chi I_{(t-t_R)}+\chi {I_Q}_{(t-t_R)} \end{equation} \noindent where we have introduced the time-delay $t_R$ (the number of the recovered people at time $t$ is proportional to the number of infected people at time $t-t_R$). However, it is useful to clarify the following. In Eqs~(\ref{R1}), $R$ stands for the \textit{total number of the recovered people} (i.e. the number of the recovered people previously hospitalised, plus the number of the asymptomatic people, plus the infected people who have been recovered without being previously hospitalised). The natural question is: \textit{how can we count $R$ and compare this variable with the real data?}.
The current statistics, produced by the Ministries of Health of various Countries, concern the people released from the hospitals. Apart from Luxembourg (where the entire population has been subjected to COVID-19 testing), no other Country is in a condition to provide statistics regarding the total number of people recovered from COVID-19. Hence, it is our opinion that the equation for $R$ is not useful, since it is practically impossible to compare $R$ with the experimental data. We then proceed by adopting approximations and establishing the differential equations whose solutions can realistically be subject to experimental verification. More specifically: \noindent Firstly, we assume that $R$ is given by three contributions: \begin{equation}\label{R2} R=r_h+r_{A}+r_{I} \end{equation} \noindent with $r_h$, $r_{A}$, and $r_{I}$ denoting the \textit{total number of the recovered people previously hospitalised}, \textit{the total number of asymptomatic people}, and the \textit{total number of people immune to SARS-CoV-2}, respectively. \noindent Secondly, we assume that the two contributions $r_{A}$ and $r_{I}$ are negligible, i.e. we set $r_A\approx 0$ and $r_{I}\approx 0$\footnote{We consider that SARS-CoV-2 has just appeared for the first time. So, we do not consider the asymptomatic people who are immune to the virus without any medical treatment.}. \subsection{O.D.E. for the Recovered People in the Hospitals}\label{H} \noindent Now, let us determine the dynamics of the recovered people in the hospitals. So, we account only for people who can be traced back to hospitalised infected people. We propose the following model\footnote{Our model is inspired by \textit{Michaelis-Menten's enzyme-substrate reaction}. Of course, the reverse \textit{MM reaction} has no sense in our case and, consequently, the corresponding \textit{kinetic constant} is equal to zero.}: \begin{align}\label{H1} &I + b_h \xrightarrow{k_1} I_h\xRightarrow{k_r, \ t_r} r_h+b_h\\ &\qquad\qquad\ \!
I_h \xRightarrow{k_d,\ t_d} d_h+b_h\nonumber \end{align} \noindent with $b_h$ denoting the number of available \textit{hospital beds}, $I$ the number of \textit{infected people}, $I_h$ the number of \textit{infected people blocking a hospital bed}, $r_h$ the number of \textit{recovered people previously hospitalised}, and $d_h$ the number of \textit{people deceased in the hospital}. Of course, \begin{equation}\label{H2} I_h+b_h=C_h=const.\qquad {\rm where}\quad{C_h={\rm Total\ hospital's\ capacity}} \end{equation} \noindent The dynamic equations for the processes are then: \begin{align}\label{H3} &{\dot I}_h=k_1I(C_h-I_h)-k_r{I_h}_{(t-t_r)}-k_d{I_h}_{(t-t_d)}\\ &{\dot r}_h=k_r{I_h}_{(t-t_r)}\nonumber\\ &{\dot d}_h=k_d {I_h}_{(t-t_d)}\nonumber \end{align} \noindent where $t_r$ and $t_d$ are the \textit{average recovery time delay} and the \textit{average death time delay}, respectively, and we have taken into account Eq.~(\ref{H2}), i.e. $b_h=C_h-I_h$. In general, $t_r\neq t_d\neq 0$. Of course, the variation of $r_h(t)$ over a period $\Delta t$ is: \begin{equation}\label{H4} \Delta {r_h}_t={r_h}_t-{r_h}_{(t-\Delta t)} \end{equation} \subsection{O.D.E.
for People Tested Positive to COVID-19} \noindent The number of the infected people may be modelled by the following \textit{kinetic scheme} \begin{align}\label{I1} &S + I \xrightarrow{\mu} 2I\\ &I \xRightarrow{\chi ,\ t_R} R\nonumber\\ &I \xRightarrow{\alpha ,\ t_D} D\nonumber\\ &I + b_h \xrightarrow{k_1} I_h\nonumber\\ &I \xrightarrow{k_Q} I_Q\nonumber \end{align} \noindent The scheme~(\ref{I1}) stems from the following considerations \begin{description} \item{{\bf a)}} If a susceptible person encounters an infected person, the susceptible person will be infected; \item{{\bf b)}} The infected people can either survive and, therefore, be recovered after an average time-delay $t_R$, or die after an average time-delay $t_D$; \item{{\bf c)}} The schemes~(\ref{LQM1}) and (\ref{H1}), respectively, have been taken into account. \end{description} \noindent The differential equation for the infected people then reads \begin{equation}\label{I2} {\dot I}=\mu SI-k_Q I-k_1I(C_h-I_h)-\chi I_{(t-t_R)}-\alpha I_{(t-t_D)} \end{equation} \subsection{O.D.E. for Deaths} \noindent In this model, we assume that the rate of death is proportional to the infected people, according to the scheme~(\ref{I1}). By also taking into account the scheme~(\ref{LQM1}), we get \begin{equation}\label{D1} I \xRightarrow{\alpha ,\ t_D} D \end{equation} \noindent and the corresponding O.D.E.
for deaths reads \begin{equation}\label{D2} {\dot D}=\alpha I_{(t-t_D)} \end{equation} \section{Set of O.D.E.s for the Spread of SARS-CoV-2 when the Lockdown and the Quarantine Measures are Adopted}\label{TODEs} \noindent By collecting the above O.D.E.s, we get the full system of differential equations governing the dynamics of the number of the infected people, the total number of the recovered people previously hospitalised and the total number of deceased people, when the lockdown and the quarantine measures are adopted \begin{align}\label{6.1} &{\dot S}=-\mu SI - k_L S(L_{Max}-S_L)+k^{-1}_LS_L\qquad{\rm with}\quad k^{-1}_L=k_{LMax}-k_L\\ &{\dot S}_L=k_L S(L_{Max}-S_L)-k^{-1}_LS_L\nonumber\\ &{\dot I}=\mu SI-k_Q I-k_1I(C_h-I_h)-\chi I_{(t-t_R)}-\alpha I_{(t-t_D)}\nonumber\\ &{\dot I}_{h}=k_1I(C_h-I_h)-k_r{I_h}_{(t-t_r)}-k_d{I_h}_{(t-t_d)}\nonumber\\ &{\dot I}_{Q}=k_{Q}I-\chi {I_Q}_{(t-t_{R})}\nonumber\\ &{\dot r}_h=k_r{I_h}_{(t-t_r)}\nonumber\\ &{\dot R}=\chi I_{(t-t_R)}+\chi {I_Q}_{(t-t_{R})}\nonumber\\ &{\dot d}_h=k_d {I_h}_{(t-t_d)}\nonumber\\ &{\dot D}=\alpha I_{(t-t_D)}\nonumber \end{align} \noindent From Eqs~(\ref{6.1}) we get \begin{equation}\label{6.2} S+S_L+I+I_Q+I_h+R+r_h+D+d_h=const. \end{equation} \noindent or, by taking into account that $S+S_L=S_{Tot.}$, $R+r_h=R_{Tot.}$, $D+d_h=D_{Tot.}$, and $I+I_Q+I_h=I_{Tot.}$ we get \begin{equation}\label{6.3} S_{Tot.}+I_{Tot.}+R_{Tot.}+D_{Tot.}=const. \end{equation} \noindent The number of total cases $N$ is defined as \begin{equation}\label{6.4} N=I_{Tot.}+r_h+D_{Tot.} \end{equation} \section{The Basic Reproduction Number}\label{BRN} \noindent We note that, in the absence of the lockdown and the quarantine measures, the dynamics of the infectious class depends on the following ratio: \begin{equation}\label{7.1} R_0= \frac{\mu}{\chi+\alpha} \frac{S}{N_{Tot.}} \end{equation} \noindent with $N_{Tot.}$ denoting the \textit{Total Population}. $R_0$ is the \textit{basic reproduction number}.
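The delayed system above can be explored numerically with a simple explicit Euler scheme that stores the full trajectory, so that the delayed terms such as $I_{(t-t_R)}$ and ${I_h}_{(t-t_r)}$ can be read off as lagged values. The sketch below is illustrative only: the rates and initial values are placeholder assumptions (not the fitted values of Table~\ref{table}), the lockdown window is kept permanently open, and the exchange terms between $S$ and $S_L$ are written in the balanced form that conserves the total population of Eq.~(\ref{6.2}).

```python
# Illustrative Euler integration of the delayed compartment model.
# All rates and initial values are placeholder assumptions, NOT the
# fitted values of Table 1; the lockdown is kept active throughout.
import numpy as np

def integrate(T=200.0, dt=0.05):
    n = int(T / dt)
    # columns: S, S_L, I, I_h, I_Q, r_h, R, d_h, D
    x = np.zeros((n + 1, 9))
    x[0] = [1.0, 0.0, 1e-3, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
    mu, chi, alpha = 0.5, 0.062, 0.003     # infection, recovery, death rates
    kL, kLmax, Lmax = 0.07, 0.1, 1.0       # lockdown parameters
    kQ, k1, Ch = 0.02, 0.01, 0.05          # quarantine and hospital parameters
    kr, kd = 0.13, 0.07                    # hospital recovery / death rates
    tR = tD = 8.0                          # delays for I -> R and I -> D [days]
    tr = td = 7.0                          # hospital delays [days]
    kLinv = kLmax - kL

    def lag(i, delay, col):                # lagged value; zero history for t < 0
        j = i - int(round(delay / dt))
        return x[j, col] if j >= 0 else 0.0

    for i in range(n):
        S, SL, I, Ih, IQ, rh, R, dh, D = x[i]
        hosp = k1 * I * (Ch - Ih)          # admissions, limited by free beds
        dS  = -mu * S * I - kL * S * (Lmax - SL) + kLinv * SL
        dSL =  kL * S * (Lmax - SL) - kLinv * SL
        dI  = (mu * S * I - kQ * I - hosp
               - chi * lag(i, tR, 2) - alpha * lag(i, tD, 2))
        dIh =  hosp - kr * lag(i, tr, 3) - kd * lag(i, td, 3)
        dIQ =  kQ * I - chi * lag(i, tR, 4)
        drh =  kr * lag(i, tr, 3)
        dR  =  chi * lag(i, tR, 2) + chi * lag(i, tR, 4)
        ddh =  kd * lag(i, td, 3)
        dD  =  alpha * lag(i, tD, 2)
        x[i + 1] = x[i] + dt * np.array([dS, dSL, dI, dIh, dIQ, drh, dR, ddh, dD])
    return x

x = integrate()
totals = x.sum(axis=1)   # Eq. (6.2): the sum of all compartments stays constant
```

Summing the nine right-hand sides term by term shows that every loss in one compartment reappears as a gain in another, so `totals` remains constant up to floating-point error; this is a useful consistency check on any hand-coded version of the system.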
$R_0$ provides the expected number of new infections generated by a single infection, under the assumption that all subjects are susceptible \cite{baley}, \cite{sonia}. The epidemic only starts if $R_0$ is greater than $1$; otherwise the spread of the disease stops right from the start. \section{Comparison with the SIRD model}\label{SIRD} \noindent The \textit{Susceptible-Infectious-Recovered-Deceased-Model} (SIRD-model) is one of the simplest compartmental models, and many models may be derived from this basic form. According to the SIRD model, the dynamic equations governing the above compartments read \cite{sird} \begin{align}\label{8.1} &{\dot S}=-\mu S I\\ &{\dot I}=\mu S I-\chi I-\alpha I\nonumber \\ &{\dot R}=\chi I\nonumber\\ &{\dot D}=\alpha I\nonumber \end{align} \noindent It is easily checked that Eqs~(\ref{6.1}) reduce to Eqs~(\ref{8.1}) by adopting some assumptions. In particular: \noindent {\bf 1)} The system is not subject to the lockdown and quarantine measures; \noindent {\bf 2)} The average time-delays may be neglected; \noindent {\bf 3)} Hospitals do not enter into the dynamics. \noindent Under these assumptions, Eqs~(\ref{6.1}) reduce to the SIRD equations: \begin{align}\label{8.2} &{\dot S}\simeq -\mu S I\\ &{\dot I}\simeq\mu S I-\chi I-\alpha I\nonumber\\ &{\dot R}=\chi I\nonumber\\ &{\dot D}= \alpha I\nonumber \end{align} \section{Application of the Model and Appearance of the Second Wave of SARS-CoV-2 Infection}\label{Applications} \noindent Let us now apply our model to the case of a small Country, Belgium, and to two other big Countries, France and Germany. Real data are provided by the various National Health agencies (Belgium - \textit{Sciensano} \cite{dataBE}; France - \textit{Sant{\'e} Publique France} \cite{dataFR}; Germany - \textit{Robert Koch Institut. Country data from Worldbank.org} \cite{dataDE}) and compiled, among others, by the European Centre for Disease Prevention and Control (ECDC).
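Before turning to the data, the threshold role of the basic reproduction number can be checked numerically in the SIRD limit of Eqs.~(\ref{8.2}): with illustrative (non-fitted) rates, an epidemic wave develops when $\mu S_0/(\chi+\alpha)>1$ and dies out otherwise. A minimal sketch:

```python
# Euler integration of the SIRD limit, Eqs. (8.2); the rates below are
# illustrative assumptions, not the fitted values of Table 1.
def sird(mu, chi, alpha, S0=1.0, I0=1e-4, T=400.0, dt=0.01):
    S, I, R, D = S0, I0, 0.0, 0.0
    peak = I0                                  # largest value reached by I
    for _ in range(int(T / dt)):
        dS = -mu * S * I
        dI = mu * S * I - (chi + alpha) * I
        dR = chi * I
        dD = alpha * I
        S, I, R, D = S + dt * dS, I + dt * dI, R + dt * dR, D + dt * dD
        peak = max(peak, I)
    return S, I, R, D, peak

chi, alpha = 0.062, 0.003
res_hi = sird(mu=0.2, chi=chi, alpha=alpha)    # mu*S0/(chi+alpha) ~ 3.1 > 1
res_lo = sird(mu=0.03, chi=chi, alpha=alpha)   # mu*S0/(chi+alpha) ~ 0.46 < 1
```

In the supercritical run the infected compartment grows into a large wave before burning out, while in the subcritical run it decays monotonically from its initial value; in both cases $S+I+R+D$ is conserved, mirroring Eq.~(\ref{6.3}).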
It should be noted that these data do not generally provide the true new-case rate, but reflect the overall trend, since most of the infected will not be tested \cite{ourworldindata}. It should also be specified that the real data provided by the ECDC refer to the \textit{new cases per day}, which we denote by $\Delta I_{new}(t)$. By definition, $\Delta I_{new}(t)$ corresponds to the new infected people generated from step $I+S\xrightarrow{\mu} 2I$ solely during 1 day, and \textit{not} to the compartment $I$. Hence, the ECDC data have to be compared with the theoretical predictions provided by the solutions for $S(t)$ and $S_{L}(t)$ of our model, according to the relation $\Delta I_{new}(t) = -\Delta S(t) -\Delta S_{L}(t)$. The values of the parameters used to perform these comparisons are shown in Table~\ref{table}. \begin{table}[htp] \caption{List of the Parameters} \begin{center} \begin{tabular}{|l|c|c|c|} \hline Parameters & Belgium & France & Germany\\ \hline Density [$km^{-2}$] & 377 & 119 & 240 \\ Surface [$km^{2}$]& 30530 & 547557 & 348560 \\ $\mu$ [$d^{-1} km^{2}$]& 0.00072 & 0.002 & 0.00093 \\ $\mu$ after $L_{1}$ & 0.000288 & 0.00087 & 0.000387\\ $\chi$ [$d^{-1}$]& 0.062 & 0.062 & 0.0608 \\ $\alpha$ [$d^{-1}$]& 0.05 $\chi$ & 0.05 $\chi$ & 0.02 $\chi$ \\ $k_{L} $ [$d^{-1}$]& 0.07 & 0.06 & 0.06 \\ $k_Q$ [$d^{-1}$]& 0.02 & 0.01 & 0.01 \\ $L_m$ [$km^{-2}$] & 377.0 & 119 & 240 \\ $k_1$ [$d^{-1} km^{2}$] & 0.01 & 0.01 & 0.01 \\ $k_d + k_r$ [$d^{-1}$] & 0.2 & 0.2 & 0.21 \\ $\frac{k_d}{k_r} $ & 0.5 & 0.5 & 0.1 \\ $t_r $ [$d$]& 7 & 7 & 7 \\ $t_d$ [$d$]& 7 & 7 & 7\\ $t_R$ [$d$]& 8 & 8 & 8 \\ $t_D$ [$d$]& 8 & 8 & 8 \\ $C$ [$km^{-2}$] & 0.0655 & 0.0091 & 0.023 \\ $I(60)$ [$km^{-2}$] & 0.0023 & 0.0018 & 0.0014 \\ Start $L_{1}$ [$d$]& 77 & 71 & 76 \\ End $L_{1}$ [$d$]& 124 & 131 & 125 \\ Start $L_{2}$ [$d$]& 306 & 303 & 306 \\ \hline \end{tabular} \end{center} \label{table} \end{table} \noindent Initial $\mu$ and $k_1$ values have been estimated (fitted) from the
measurements over the short period at the start of the pandemic, using a simple solution valid during that period; $I(60)$ denotes the initial condition at day 60 (March 1, 2020). Hospital capacity is evaluated from each Country's published capacity. However, we are aware that the interpretation may vary from one Country to another. During the first lockdown, Countries have taken various actions to limit Coronavirus spreading (social distancing, wearing masks, reducing high-density hotspots, etc.). In order to include these measures in a simple way, we assumed that the net effect is to reduce the actual infection kinetic rate $\mu$ by some constant factor. This is given in the table as $\mu$ after $L_1$. Note that the transition occurs instantaneously in our model, hence the sharp drop in the total infected at that time. Other parameters are tuned to account for the actual variability of $\Delta I_{new}$ (but not its absolute value) and the official number of deaths ($D(t) + d_h(t)$). The delays for the recovery and death processes have been estimated from the measurements of hospitalisation recovery in a Country. For instance, Fig.~\ref{delay} shows the estimation of the recovery time-delay for Belgium: it corresponds to the \textit{time-interval} between the peak of the new admissions and the peak of the recovered people from hospitals. Such a procedure has been adopted for estimating the recovery and death time-delays also for France and Germany. \begin{figure}[h] \begin{center} \includegraphics[width=0.8\textwidth]{HospitalInOut.eps} \caption{\textit{Estimation of the time-delay. The time-delays have been estimated by considering the \textit{time-interval} between the peak of the new admissions and the peak of the recovered people from hospitals. This figure corresponds to the Belgian case.}} \label{delay} \end{center} \end{figure} \noindent $\bullet$ {\bf Belgian Case}. \noindent Figs~(\ref{BE_IRD}) and (\ref{BE_IRD_h}) refer to the Belgian case.
In particular, Fig.~(\ref{BE_IRD}) shows the solutions of our model for the infectious ($I$), total recovered ($R$) and total deceased ($D$) people. Fig.~(\ref{BE_IRD_h}) illustrates the theoretical solutions for hospitalised infectious ($I_h$), the total recovered ($r_h$) and total deceased ($d_h$) people previously hospitalised. \begin{figure*}[htb] \hfill \begin{minipage}[t]{.48\textwidth} \centering \includegraphics[width=5cm,height=5cm]{BE_IRD.pdf} \caption{\textit{Theoretical solutions for infectious ($I$), cumulative number of recovered people ($R$) and deaths ($D$) for Belgium.}} \label{BE_IRD} \end{minipage} \hfill \begin{minipage}[t]{.48\textwidth} \centering \includegraphics[width=5cm,height=5cm]{BE_IRD_h.pdf} \caption{\textit{Theoretical solutions for hospitalised infectious ($I_h$), total recovered ($r_h$) and total deceased ($d_h$) people, previously hospitalised, for Belgium.}} \label{BE_IRD_h} \end{minipage} \hfill \end{figure*} \noindent Figs~(\ref{I_new_BE}) and (\ref{D_BE}) show the comparisons between the theoretical predictions for $\Delta I_{new}(t)$ and deaths, and real data for Belgium (according to the database \textit{Sciensano}). Notice in Fig.~\ref{I_new_BE} the prediction of the \textit{second wave of infection by SARS-CoV-2}. \begin{figure*}[htb] \hfill \begin{minipage}[t]{.48\textwidth} \centering \includegraphics[width=5cm,height=5cm]{I_new_BE.pdf} \caption{\textit{Comparison between the theoretical prediction for $\Delta I_{New}$ with real data provided by the data base \textit{Sciensano}, for Belgium.}} \label{I_new_BE} \end{minipage} \hfill \begin{minipage}[t]{.48\textwidth} \centering \includegraphics[width=5cm,height=5cm]{D_BE.pdf} \caption{\textit{Comparison between the theoretical solution of our model for Deaths with real data provided by the database \textit{Sciensano}, for Belgium.}} \label{D_BE} \end{minipage} \hfill \end{figure*} \noindent $\bullet$ {\bf French Case}.
\noindent Figs~(\ref{I_new_FR}) and (\ref{D_FR}) show the comparisons between the theoretical predictions for $\Delta I_{new}(t)$ and deaths, and real data for France (according to the database \textit{Sant{\'e} Publique France}). Notice in Fig.~\ref{I_new_FR} the prediction of the \textit{second wave of infection by SARS-CoV-2}. \begin{figure*}[htb] \hfill \begin{minipage}[t]{.48\textwidth} \centering \includegraphics[width=5cm,height=5cm]{I_new_FR.pdf} \caption{\textit{Comparison between the theoretical prediction for $\Delta I_{New}$ with real data provided by the data base \textit{Sant{\'e} Publique France}, for France.}} \label{I_new_FR} \end{minipage} \hfill \begin{minipage}[t]{.48\textwidth} \centering \includegraphics[width=5cm,height=5cm]{D_FR.pdf} \caption{\textit{Comparison between the theoretical solution of our model for Deaths with real data provided by the database \textit{Sant{\'e} Publique France}, for France.}} \label{D_FR} \end{minipage} \hfill \end{figure*} \noindent $\bullet$ {\bf German Case}. \noindent Figs~(\ref{I_new_DE}) and (\ref{D_DE}) show the comparisons between the theoretical predictions for $\Delta I_{new}(t)$ and deaths, and real data for Germany (according to the database \textit{Robert Koch Institut. Country data from Worldbank.org}). Notice in Fig.~\ref{I_new_DE} the prediction of the \textit{second wave of infection by SARS-CoV-2}. \begin{figure*}[htb] \hfill \begin{minipage}[t]{.48\textwidth} \centering \includegraphics[width=5cm,height=5cm]{I_new_DE.pdf} \caption{\textit{Comparison between the theoretical prediction for $\Delta I_{New}$ with real data provided by the database \textit{Robert Koch Institut.
Country data from Worldbank.org}, for Germany.}} \label{I_new_DE} \end{minipage} \hfill \begin{minipage}[t]{.48\textwidth} \centering \includegraphics[width=5cm,height=5cm]{D_DE.pdf} \caption{\textit{Comparison between the theoretical solution of our model for Deaths with real data provided by the database \textit{Robert Koch Institut. Country data from Worldbank.org}, for Germany.}} \label{D_DE} \end{minipage} \hfill \end{figure*} \section{Conclusions and Perspectives}\label{C} We showed that our model is able to produce predictions not only on the first but also on the second or even the third waves of SARS-CoV-2 infections. The theoretical predictions are in line with the official number of cases with minimal parameter fitting. We discussed the strengths and limitations of the proposed model regarding the long-term predictions and, above all, how long the lockdown and quarantine measures should be kept in force in order to limit as much as possible the intensities of subsequent SARS-CoV-2 infection waves. This task has been carried out by taking into account the theoretical results recently appeared in the literature \cite{sonnino} and without neglecting the delays in the reaction steps. Our model has been applied in two different situations: the spreading of the Coronavirus in a small Country (Belgium) and in big Countries (France and Germany). \noindent It is worth noting the \textit{degree of flexibility} of our model. For example, let us suppose that we need to set up a model able to distinguish the old population (over 65 years old) from the young one (with age not exceeding 35 years), by assuming that the older population is twice as likely to get infected by Coronavirus with respect to the younger one.
In this case, it suffices to replace the scheme $I+S\xrightarrow{\mu} 2I$ with the scheme
\begin{align}\label{C1}
&I+S_Y \xrightarrow{\mu_y} 2I\\
&I+2S_O \xrightarrow{\mu_o} 3I\nonumber\\
&S=S_Y+S_O\nonumber
\end{align}
\noindent with $S_Y$ and $S_O$ denoting the \textit{susceptible young people} and the \textit{susceptible old people}, respectively. Another example could be the following. Let us suppose that we need to distinguish two classes of infected individuals:

\noindent {\bf 1)} infected people (denoted by $I_1$) able to transmit the Coronavirus to susceptibles according to the (standard) scheme $I_1+S\rightarrow 2I$;

\noindent {\bf 2)} infected people (denoted by $I_2$) whose capacity to transmit the virus is, say, seven times higher than that of category {\bf 1)}. In this case, the corresponding scheme reads:
\begin{align}\label{C2}
&I_1+S \xrightarrow{\mu_1} 2I\\
&I_2+7S \xrightarrow{\mu_2} 8I\nonumber\\
&I=I_1+I_2\nonumber
\end{align}
\noindent It is then easy to write the ordinary differential equations associated with schemes (\ref{C1}) and (\ref{C2}).

\noindent Let us now consider another aspect of the model. In Subsection~\ref{LQM}, we introduced scheme~(\ref{LQM1}), which models the lockdown measures. As mentioned, such measures are imposed by national governments on the whole susceptible population. However, we can also take into consideration the hypothesis that these measures are not rigorously respected by the population, for various reasons: neglect of the problem, depression due to prolonged isolation, lack of confidence in the measures adopted by the Government, desire to attend parties with friends and relatives, etc. Scheme~(\ref{LQM1}) can still be adapted to describe these kinds of situations, with the trick of replacing Fig.~\ref{LEP} with a curve that models the \textit{emotional behaviour} of susceptible people.
The O.D.E.s read
\begin{align}\label{C3}
&{\dot S}=-\mu SI - k_E S(E_{Max}-S_E)+(1-k_E)(E_{Max}-E)\\
&{\dot S}_E=k_E SE-k_E^{-1}S_E\nonumber
\end{align}
\noindent where $E$ stands for \textit{Emotional}.

\noindent Finally, we mention that in ref.~\cite{sonnino2} we have incorporated real data into a stochastic model. The goal is to obtain a comparative analysis with the deterministic one, in order to use the new theoretical results to predict the number of new cases of infected people and to propose possible changes to the measures of isolation.
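To make the flexibility discussed above concrete, scheme~(\ref{C1}) can be integrated numerically. The sketch below is ours and purely illustrative: it assumes mass-action kinetics for both reaction channels (so an event of $I+2S_O\to 3I$ consumes two $S_O$ and produces two $I$), uses a simple forward-Euler step, and all rate constants and initial fractions are hypothetical.

```python
# Illustrative forward-Euler integration of scheme (C1),
#   I + S_Y  --mu_y-->  2I,    I + 2 S_O  --mu_o-->  3I,
# read with mass-action kinetics (our assumption, not the paper's code).
def integrate_c1(s_y, s_o, i, mu_y, mu_o, dt=0.01, steps=5000):
    """Integrate dS_Y/dt = -mu_y*S_Y*I, dS_O/dt = -2*mu_o*S_O^2*I,
    dI/dt = mu_y*S_Y*I + 2*mu_o*S_O^2*I by forward Euler."""
    for _ in range(steps):
        r_y = mu_y * s_y * i        # rate of the young-susceptible channel
        r_o = mu_o * s_o ** 2 * i   # rate of the old-susceptible channel
        s_y += dt * (-r_y)
        s_o += dt * (-2.0 * r_o)    # two S_O consumed per event
        i += dt * (r_y + 2.0 * r_o) # net gain of two I per event
    return s_y, s_o, i
```

Since each reaction only converts susceptibles into infected individuals, the total $S_Y+S_O+I$ is conserved, which gives a convenient consistency check on the integration.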
\section{Introduction} Given a positive integer $n$ and a graph $H$, the {\it extremal number} $\mathrm{ex}(n,H)$ is the largest number of edges in an $H$-free graph on $n$ vertices. In this short paper, we will be concerned with one of the standard conjectures about extremal numbers, the rational exponents conjecture of Erd\H{o}s and Simonovits (see, for example,~\cite{E81}), which states that every rational number $r$ between 1 and 2 is {\it realisable} in the sense that there exists a graph $H$ such that $\mathrm{ex}(n,H)=\Theta(n^r)$. \begin{conjecture}[Rational exponents conjecture] \label{con:rational} For every rational number $r\in [1,2]$, there exists a graph $H$ with $\mathrm{ex}(n,H)=\Theta(n^r)$. \end{conjecture} The main result towards this conjecture is arguably the result of Bukh and Conlon~\cite{BC18} saying that for any rational number $r\in [1,2]$ there exists a finite family $\mathcal{H}$ of graphs such that $\mathrm{ex}(n,\mathcal{H})=\Theta(n^r)$, where $\mathrm{ex}(n,\mathcal{H})$ denotes the largest number of edges in an $n$-vertex graph which does not contain any $H\in \mathcal{H}$ as a subgraph. However, the conjecture remains open in its original form, which asks for a single graph rather than a family. Nevertheless, following the breakthrough in~\cite{BC18}, progress on the single graph case has been swift, with substantial contributions, each extending the range of exponents for which the conjecture is known, made by Jiang, Ma and Yepremyan~\cite{JMY18}, Kang, Kim and Liu \cite{KKL18}, Conlon, Janzer and Lee~\cite{CJL21}, Janzer~\cite{Jan20}, Jiang and Qiu~\cite{JQ20,JQ19} and, most recently, Jiang, Jiang and Ma~\cite{JJM20}. For now, we highlight only one of these results, due to Jiang and Qiu~\cite{JQ19} saying that any rational of the form $1 + p/q$ with $q > p^2$ is realisable. Proving a conjecture of Jiang, Jiang and Ma~\cite[Conjecture 11]{JJM20} in a strong form, we show that a similar phenomenon holds near two. 
\begin{theorem} \label{thm:main} All rationals of the form $r = 2 - a/b$ with $b \geq \max(a, (a-1)^2)$ are realisable. \end{theorem} To say more, we must first explain the context in which the recent progress has been made. We will be interested in {\it rooted graphs} $(F, R)$ consisting of a graph $F$ together with a proper subset $R$ of the vertex set $V(F)$ that we refer to as the {\it roots}. We will usually just write $F$ if the roots are clear from context. For each $S \subseteq V(F) \setminus R$, let $\rho_F(S) := \frac{e_S}{|S|}$, where $e_S$ is the number of edges in $F$ incident with a vertex of $S$. The \emph{density} of $F$ is then $\rho(F) := \rho_F(V(F) \setminus R)$ and we say that $(F, R)$ is {\it balanced} if $\rho_F(S) \geq \rho(F)$ for all $S \subseteq V(F) \setminus R$. Finally, given a rooted graph $(F, R)$ and a positive integer $t$, the {\it $t$-blowup} $F^t$ is the graph obtained by taking $t$ vertex-disjoint copies of $F$ and identifying the different copies of $v$ for each $v \in R$. The following result of Bukh and Conlon~\cite{BC18} now yields a lower bound for the extremal number of $F^t$ provided $F$ is balanced and $t$ is sufficiently large in terms of $F$. \begin{lemma}[Bukh--Conlon] \label{lem:BC} For every balanced rooted graph $F$ with density $\rho$, there exists a positive integer $t_0$ such that $\mathrm{ex}(n, F^t)=\Omega(n^{2-\frac{1}{\rho}})$ for all integers $t \geq t_0$. \end{lemma} Paired to this result is the following conjecture, saying that Lemma~\ref{lem:BC} is tight up to the constant for balanced rooted {\it trees}. If true, this conjecture would easily imply Conjecture~\ref{con:rational}. \begin{conjecture}[Bukh--Conlon] \label{con:BC} For every balanced rooted tree $F$ with density $\rho$ and all positive integers~$t$, $\mathrm{ex}(n, F^t) = O(n^{2-\frac{1}{\rho}})$. 
\end{conjecture} \begin{figure} \centering \begin{tikzpicture}[thick, scale=0.45, baseline=(v.base)] \coordinate (v) at (0,-0.8); \draw (-3.6,-1.1) -- (-4.2,-2.6) node[root]{}; \draw (-3.6,-1.1) -- (-3.6,-2.6) node[root]{}; \draw (-3.6,-1.1) -- (-3,-2.6) node[root]{}; \draw [bracket] (-3,-2.6) -- (-4.2,-2.6) node[black,midway,yshift=-10pt]{\footnotesize $s$}; \draw (-0.9,.4) -- (-3.6,-1.1) node[vertex]{}; \draw (-1.8,-1.1) -- (-2.4,-2.6) node[root]{}; \draw (-1.8,-1.1) -- (-1.8,-2.6) node[root]{}; \draw (-1.8,-1.1) -- (-1.2,-2.6) node[root]{}; \draw [bracket] (-1.2,-2.6) -- (-2.4,-2.6) node[black,midway,yshift=-10pt]{\footnotesize $s$}; \draw (-0.9,.4) -- (-1.8,-1.1) node[vertex]{}; \draw (0,-1.1) -- (-0.6,-2.6) node[root]{}; \draw (0,-1.1) -- (0,-2.6) node[root]{}; \draw (0,-1.1) -- (0.6,-2.6) node[root]{}; \draw [bracket] (0.6,-2.6) -- (-0.6,-2.6) node[black,midway,yshift=-10pt]{\footnotesize $s$}; \draw (-0.9,.4) -- (0,-1.1) node[vertex]{}; \draw (1.8,-1.1) -- (1.2,-2.6) node[root]{}; \draw (1.8,-1.1) -- (1.8,-2.6) node[root]{}; \draw (1.8,-1.1) -- (2.4,-2.6) node[root]{}; \draw [bracket] (2.4,-2.6) -- (1.2,-2.6) node[black,midway,yshift=-10pt]{\footnotesize $s$}; \draw (-0.9,.4) node[vertex]{} -- (1.8,-1.1) node[vertex]{}; \draw [bracket] (1.8,-3.8) -- (-3.6,-3.8) node[black,midway,yshift=-10pt]{\footnotesize $r$}; \end{tikzpicture} \caption{The rooted graph $F_{r,s}$, with black vertices representing roots.} \label{fig:F} \end{figure} The recent progress then has centred on proving Conjecture~\ref{con:BC} for particular choices of the rooted tree $F$, with many novel and interesting ideas going into each new case. Here, we consider a family of rooted trees first studied in this setting by Jiang, Jiang and Ma~\cite{JJM20}. 
More precisely, for every pair of integers $(r,s)$ with $r, s \geq 1$, we write $F_{r, s}$ for the rooted graph with vertices $y$, $z_i$ for $1\leq i\leq r$ and $w_{i,j}$ for $1\leq i\leq r$, $1\leq j\leq s$, with the $w_{i,j}$ roots, and edges $y z_i$ for all $1\leq i\leq r$ and $z_i w_{i,j}$ for all $1\leq i\leq r,1\leq j\leq s$. For a picture with $r = 4$ and $s = 3$, we refer the reader to Figure~\ref{fig:F}, where the roots are drawn in black. It is easy to verify that $F_{r, s}$ is balanced provided $s \leq r$. Therefore, since $\rho(F_{r,s}) = (rs+r)/(r+1)$, Lemma~\ref{lem:BC} implies that $$\mathrm{ex}(n, F_{r,s}^t) = \Omega(n^{2- \frac{r+1}{rs+r}})$$ for $s \leq r$ and $t$ sufficiently large. Our main technical result is the corresponding upper bound for a certain range of parameters. \begin{theorem} \label{thm:main2} For any integers $r\geq s+2\geq 3$ and $t\geq 1$, $\mathrm{ex}(n,F_{r,s}^t)=O(n^{2-\frac{r+1}{rs+r}}).$ \end{theorem} This improves on a result of Jiang, Jiang and Ma~\cite{JJM20}, who proved a result similar to Theorem~\ref{thm:main2}, but under the more restrictive assumption that $r \geq s^3 - 1$. While our argument, which we outline in the next subsection, shares some ideas with theirs, it is considerably simpler. To see that Theorem~\ref{thm:main2} implies Theorem~\ref{thm:main}, we require one more ingredient, a key observation of Kang, Kim and Liu~\cite{KKL18} saying that if the exponent $2 - \frac{a}{a p_0 +q}$ is realisable by a power of a balanced rooted graph, then so is $2 - \frac{a}{ap+q}$ for all $p \geq p_0$. But \[2 - \frac{r+1}{rs+r} = 2 - \frac{r+1}{(r+1)s +(r- s)},\] so the observation of Kang, Kim and Liu implies that the exponent $2 - \frac{r+1}{(r+1)p + (r- s)}$ is realisable for all $p \geq s$. Since $r- s$ ranges from $2$ to $r-1$, this means that we get all exponents of the form $2 - \frac{r+1}{d}$ with $r \geq 3$, $d \geq r^2$ and $d \not\equiv -1, 0, 1 \pmod{r+1}$. 
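As a parenthetical check, the balancedness claim and the density formula $\rho(F_{r,s})=(rs+r)/(r+1)$ are easy to verify by brute force for small parameters. The sketch below (ours, with ad-hoc vertex labels) builds $F_{r,s}$ and tests $\rho_F(S)\ge\rho(F)$ over all nonempty sets $S$ of non-root vertices:

```python
# Brute-force balancedness check for the rooted graph F_{r,s} defined above:
# vertices y, z_1..z_r and roots w_{i,j}; edges y z_i and z_i w_{i,j}.
from fractions import Fraction
from itertools import combinations

def is_balanced(r, s):
    y = 'y'
    zs = [('z', i) for i in range(r)]
    edges = [(y, z) for z in zs] + [(('z', i), ('w', i, j))
                                    for i in range(r) for j in range(s)]
    nonroots = [y] + zs                       # V(F) \ R
    rho = Fraction(len(edges), len(nonroots)) # = (rs + r)/(r + 1)
    for k in range(1, len(nonroots) + 1):
        for sub in combinations(nonroots, k):
            # e_S = number of edges incident with a vertex of S
            e_s = sum(1 for (u, v) in edges if u in sub or v in sub)
            if Fraction(e_s, k) < rho:
                return False
    return True
```

For instance, it confirms that $F_{r,s}$ is balanced for $s\le r$ (e.g.\ $r=4$, $s=3$) and not balanced for $s>r$, in line with the claim above.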
Therefore, setting $a = r+1$, we see that $2 - \frac{a}{b}$ is realisable provided $a \geq 4$, $b \geq (a-1)^2$ and $b \not\equiv -1, 0, 1 \pmod{a}$. The remaining cases, where $a \in \{1, 2, 3\}$ or $b \equiv -1, 0, 1 \pmod{a}$, have all previously appeared in the literature (see, for instance, \cite{KKL18}). It therefore remains to prove Theorem~\ref{thm:main2}. \subsection{An outline of the proof} Let $G$ be an $n$-vertex graph with $Cn^{2-\frac{r+1}{rs+r}}$ edges, where $C$ is taken sufficiently large in terms of $r$, $s$ and $t$. We want to show that $G$ contains $F_{r,s}^t$ as a subgraph. As is usual when estimating extremal numbers, we may assume that $G$ is $K$-almost-regular for some constant $K$ depending only on $r$ and $s$, by which we mean that every vertex in $G$ has degree at most $K$ times the minimum degree $\delta(G)$. Suppose that $G$ does not contain $F_{r,s}^t$ as a subgraph. First, we show that, among all stars in $G$ with $s+1$ leaves, the proportion of those in which the leaves have codegree at least $|V(F_{r,s}^t)|$ is only $o(1)$. Indeed, otherwise we could find a vertex $u\in V(G)$ such that a positive proportion of the $(s+1)$-sets in $N(u)$ have codegree at least $|V(F_{r,s}^t)|$. However, since $F_{r,s}^t$ is the subdivision of an $(s+1)$-partite $(s+1)$-uniform hypergraph, this would imply that $F_{r,s}^t$ can be embedded into $G$ with one part of the bipartition mapped to a subset of $N(u)$. We call a copy of $F_{r,s}$ in $G$ \emph{nice} if, for each $1\leq i\leq r$, the codegree of the images of $y,w_{i,1},\dots,w_{i,s}$ is at most $|V(F_{r,s}^t)|$. By the previous paragraph and since $G$ is almost regular, almost all copies of $F_{r,s}$ in $G$ are nice. Suppose now that we have a large collection of nice copies of $F_{r,s}$ in $G$ all of which have the same leaf set, i.e., they all map the $w_{i,j}$ to the same vertices $x_{i,j}$. 
Since $G$ is $F_{r,s}^t$-free, there cannot be $t$ of these copies of $F_{r,s}$ which are pairwise vertex-disjoint apart from the $x_{i,j}$. Hence, a positive proportion of them must map one of $y,z_1,\dots,z_r$ to the same vertex in $G$. However, there cannot exist many nice copies of $F_{r,s}$ which map $y$ and all the $w_{i,j}$ to the same set of vertices. Hence, we find that a positive proportion of the nice $F_{r,s}$ rooted at the $x_{i,j}$ must map some $z_k$ to the same vertex $v\in V(G)$. For the sake of notational simplicity, we will assume that a positive proportion of the copies rooted at the $x_{i,j}$ map $z_r$ to $v$. The crucial observation is that this means that $v$ sends many edges to a relatively small set that depends only on the vertices $x_{i,j}$ for $1\leq i\leq r-1,1\leq j\leq s$. More precisely, $v$ is clearly a neighbour of the image of $y$ in every copy of $F_{r,s}$ that maps $z_r$ to $v$. However, the locus of the possible images of $y$ is rather restricted: if $u$ is such an image, then, for each $1\leq i\leq r-1$, the vertices $u,x_{i,1},\dots,x_{i,s}$ have a common neighbour. Fix a ``typical'' collection of vertices $x_{i,j}$, $1\leq i\leq r-1,1\leq j\leq s$, and let $X$ be the locus of the possible images of $y$ in embeddings of $F_{r,s}$ that map $w_{i,j}$ to $x_{i,j}$ for all $1\leq i\leq r-1,1\leq j\leq s$. For any $u\in X$, there are around $\delta(G)^{s+1}$ embeddings of $F_{r,s}$ that map $y$ to $u$ and $w_{i,j}$ to $x_{i,j}$ for each $1\leq i\leq r-1,1\leq j\leq s$, since we can ``freely" choose how $z_r,w_{r,1},\dots,w_{r,s}$ are embedded. If we assume that $|X|$ is about as large as it would be in a random graph with the same edge density, then, on average, for each embedding of $F_{r,s}$ which maps $w_{i,j}$ to $x_{i,j}$ for each $1\leq i\leq r-1,1\leq j\leq s$, there are a large constant number of copies of $F_{r,s}$ with the same leaves. 
Assuming that these copies are nice, the previous paragraph shows that there are many embeddings of $F_{r,s}$ which map $w_{i,j}$ to $x_{i,j}$ for all $1\leq i\leq r-1,1\leq j\leq s$ with the property that the image of $z_r$ has a large constant number of neighbours in $X$. This then allows us to conclude that there are many edges $uv\in E(G)$ with $u\in X$ such that $v$ has a large constant number of neighbours in $X$, which in turn yields a very unbalanced bipartite subgraph of $G$ with parts $X$ and $Y$ where every $v\in Y$ has many neighbours in $X$. This subgraph contains many stars with $s+1$ leaves centred in $Y$ and, for most of them, the leaves have large codegree, contradicting the observation made in the second paragraph. \section{Proof of Theorem \ref{thm:main2}} Fix $r\geq s+2\geq 3$, $t\geq 1$ and let $H=F_{r,s}^t$. We begin our proof by defining what it means for a star with $s+1$ leaves to be heavy and then showing that there cannot be too many such stars. Originating in work of Conlon and Lee~\cite{CL21} and Janzer~\cite{Jan19} on extremal numbers of subdivisions, similar definitions and results appear often in the recent literature on the rational exponents conjecture. \begin{definition} We call a star with $s+1$ leaves \emph{heavy} if the leaves have codegree at least $|V(H)|$ and \emph{light} otherwise. \end{definition} \begin{lemma} \label{lem:heavy stars} For any $\varepsilon>0$, there is a constant $C=C(\varepsilon,H)$ such that the following holds. Let $G$ be an $H$-free bipartite graph with parts $X$ and $Y$ and minimum degree at least $C$ on side $Y$. Then the proportion of heavy $(s+1)$-stars among all $(s+1)$-stars centred in $Y$ is at most $\varepsilon$. \end{lemma} \begin{proof} It suffices to prove that for each $u\in Y$, the proportion of heavy stars among all stars centred at $u$ is at most $\varepsilon$. 
Define an $(s+1)$-uniform hypergraph $\mathcal{G}$ on vertex set $N(u)$ by setting $S\subset N(u)$ with $|S| = s+1$ to be an edge of $\mathcal{G}$ if and only if the common neighbourhood (in $G$) of the vertices in $S$ has order at least $|V(H)|$. We also define an $(s+1)$-uniform hypergraph $\mathcal{H}$ with vertices $y_k$ for $1\leq k\leq t$ and $w_{i,j}$ for $1\leq i\leq r,1\leq j\leq s$ whose edges are $\{y_kw_{i,j}: 1\leq j\leq s\}$ for every $1\leq k\leq t,1\leq i\leq r$. It is easy to see that if $\mathcal{G}$ contains a copy of $\mathcal{H}$, then there exists a copy of $H$ in $G$. Moreover, $\mathcal{H}$ is $(s+1)$-partite (the parts being $\{y_1,\dots,y_t\}$ and $\{w_{i,j}:1\leq i\leq r\}$ for each $1\leq j\leq s$), so $\mathrm{ex}(n,\mathcal{H})=o(n^{s+1})$. It follows that if $|N(u)|$ is large enough in terms of $\varepsilon$ and $\mathcal{H}$, then there are at most $\varepsilon \binom{|N(u)|}{s+1}$ heavy $(s+1)$-stars in $G$ with centre $u$. Since $\mathcal{H}$ depends only on $H$, the proof is complete. \end{proof} We now make a few definitions which capture some of the main ideas in our proof. \begin{definition} Let $F$ be a labelled copy of $F_{r,s}$ with vertices $y,z_i,w_{i,j}$ as before. We call $F$ \emph{nice} if, for each $1\leq i\leq r$, the $(s+1)$-star with centre $z_i$ and leaves $y,w_{i,1},\dots,w_{i,s}$ is light. \end{definition} \begin{definition} \label{def:locus} \sloppy For distinct vertices $x_{i,j}$ with $1\leq i\leq r-1$, $1\leq j\leq s$ in a graph $G$, let $S(x_{1,1},\dots,x_{1,s},x_{2,1},\dots,x_{2,s},\dots,x_{r-1,1},\dots,x_{r-1,s})$ be the set of vertices $u\in V(G)$ for which there are vertices $v_1,\dots,v_{r-1}$ such that $u$, the $v_i$ and the $x_{i,j}$ are all distinct, $uv_i\in E(G)$ for all $i$ and $v_ix_{i,j}\in E(G)$ for all $i,j$. 
\end{definition} \begin{definition} \sloppy Let $F$ be a nice labelled copy of $F_{r,s}$ with vertices $y,z_i,w_{i,j}$ and let $q$ be the number of nice labelled copies of $F_{r,s}$ with the same labelled leaf set as $F$. For $c>0$ and $1\leq k\leq r$, we call $F$ \emph{$(c,k)$-rich} if $z_k$ has at least $cq$ neighbours in $S(w_{1,1},\dots,w_{1,s},\dots,w_{k-1,1},\dots,w_{k-1,s},w_{k+1,1},\dots,w_{k+1,s},\dots,w_{r,1},\dots,w_{r,s})$. \end{definition} The next lemma shows that if an $H$-free graph $G$ has many nice copies of $F_{r,s}$ sharing the same leaves, then many of those copies of $F_{r,s}$ are rich. \begin{lemma} \label{lem:rich with fixed leaves} There exist positive constants $c=c(H)$ and $C=C(H)$ such that the following holds. Let $G$ be an $H$-free graph and let $x_{i,j}$, for $1\leq i\leq r$, $1\leq j\leq s$, be distinct vertices in $G$. Assume that there are $q\geq C$ nice labelled copies of $F_{r,s}$ in $G$ with $w_{i,j}$ mapped to $x_{i,j}$ for all $i,j$. Then there is some $1\leq k\leq r$ such that the number of $(c,k)$-rich labelled copies of $F_{r,s}$ with $w_{i,j}$ mapped to $x_{i,j}$ for all $i, j$ is at least $cq$. \end{lemma} \begin{proof} Let $C=(t-1)(r+1)^2|V(H)|^r+1$ and $c=1/((t-1)(r+1)^2|V(H)|^r)$. Since $G$ is $H$-free, there cannot be more than $t-1$ copies of $F_{r,s}$ which all have the same leaves $x_{i,j}$ but are otherwise pairwise vertex-disjoint. This means that any maximal collection of copies of $F_{r,s}$ with leaves $x_{i,j}$ which are otherwise pairwise disjoint cover a set $R$ of at most $(t-1)(r+1)$ vertices in addition to $\{x_{i,j}:1\leq i\leq r,1\leq j\leq s\}$. Because of the maximality, any labelled copy of $F_{r,s}$ with leaves $x_{i,j}$ must map one of $y,z_1,\dots,z_r$ to an element of $R$. 
By the pigeonhole principle, there are therefore at least $q/(|R|(r+1))\geq q/((t-1)(r+1)^2)$ nice copies of $F_{r,s}$ with leaves $x_{i,j}$ in which one of the vertices $y,z_1,\dots,z_r$ is mapped to the same vertex $v$ in $G$. By the condition that these copies are nice, $y$ cannot be mapped to the same vertex in more than $|V(H)|^r$ copies. Hence, since $q\geq C>(t-1)(r+1)^2|V(H)|^r$, there is some $1\leq k\leq r$ such that $z_k$ is mapped to the same vertex $v$ in at least $q/((t-1)(r+1)^2)$ copies. Again using the fact that $y$ is mapped to the same vertex at most $|V(H)|^r$ many times, it follows that there are at least $q/((t-1)(r+1)^2|V(H)|^r)=cq$ different images of $y$ in these copies. All of these vertices are in $S(x_{1,1},\dots,x_{1,s},\dots,x_{k-1,1},\dots,x_{k-1,s},x_{k+1,1},\dots,x_{k+1,s},\dots,x_{r,1},\dots,x_{r,s})$ and all of them are neighbours of $v$. Thus, all nice copies of $F_{r,s}$ mapping $w_{i,j}$ to $x_{i,j}$ for every $i,j$ and $z_k$ to $v$ are $(c,k)$-rich. \end{proof} The upshot of what we have done so far is the following lemma, which says that, under a mild technical condition on the degrees (that we will in any case be able to assume), any $H$-free graph must have many rich copies of $F_{r,s}$. \begin{lemma} \label{lem:many rich} For any positive real number $K$, there are positive constants $c=c(H)$ and $C=C(K,H)$ such that the following holds. Let $G$ be an $H$-free $n$-vertex bipartite graph with minimum degree $\delta\geq Cn^{1-\frac{r+1}{rs+r}}$ and maximum degree at most $K\delta$. Then $G$ has at least $cn\delta^{rs+r}$ $(c,r)$-rich labelled copies of $F_{r,s}$. \end{lemma} \begin{proof} The number of labelled copies of $F_{r,s}$ in $G$ is at least $\frac{1}{2}n\delta^{rs+r}$. Let $\varepsilon=\frac{1}{4rK^{rs+r}}$. By Lemma \ref{lem:heavy stars}, if $C$ is sufficiently large compared to $K$ and $H$, then the proportion of heavy $(s+1)$-stars in $G$ is at most $\varepsilon$. 
Then, by the maximum degree condition, there are at most $\varepsilon n(K\delta)^{s+1}$ labelled heavy $(s+1)$-stars. Thus, again using the maximum degree assumption, there are at most $r\cdot \varepsilon n (K\delta)^{s+1}\cdot (K\delta)^{rs+r-(s+1)}=\frac{1}{4}n\delta^{rs+r}$ labelled copies of $F_{r,s}$ in $G$ which contain a heavy $(s+1)$-star. It follows that there are at least $\frac{1}{4}n\delta^{rs+r}\geq \frac{C^{rs+r}}{4}n^{rs}$ nice labelled copies of $F_{r,s}$ in $G$. Let $C'$ be the constant $C(H)$ from Lemma \ref{lem:rich with fixed leaves}. Clearly, there are at most $C'n^{rs}$ nice labelled copies of $F_{r,s}$ whose leaves $w_{i,j}$ are mapped to some $x_{i,j}$ for all $1\leq i\leq r$, $1\leq j\leq s$ with the property that there are fewer than $C'$ nice labelled copies of $F_{r,s}$ with $w_{i,j}$ mapped to $x_{i,j}$. Hence, if $C$ is sufficiently large, then these nice labelled copies of $F_{r,s}$ amount to at most half of all nice labelled copies of $F_{r,s}$. The statement then follows from Lemma \ref{lem:rich with fixed leaves} by noting that the number of $(c,k)$-rich labelled copies of $F_{r,s}$ in $G$ is the same for every $k$. \end{proof} The following lemma is the last ingredient needed for the proof of Theorem \ref{thm:main2}. \begin{lemma} \label{lem:dependent random choice} There is a constant $C_0=C_0(H)$ such that the following holds. Let $G$ be a bipartite graph with parts $X$ and $Y$ such that there are at least $|X|p$ edges $xy$ for which $x\in X$, $y\in Y$ and $y$ has degree at least $q$ in $G$. If $q\geq C_0$ and $pq^s\geq C_0|X|^s$, then $G$ contains $H$ as a subgraph. \end{lemma} We will prove Lemma \ref{lem:dependent random choice} using Lemma \ref{lem:heavy stars}, but we remark that it can also be proved directly using dependent random choice. \begin{proof} We may assume, by shrinking $Y$ if necessary, that each $y\in Y$ has degree at least $q$. 
Then any edge in $G$ can be extended in at least $\binom{q-1}{s}$ ways to an $(s+1)$-star centred in $Y$. Hence, the conditions of the lemma guarantee that $G$ has at least $|X|p\binom{q-1}{s}/(s+1)$ stars with $s+1$ leaves centred in $Y$. Suppose that $G$ is $H$-free. If $C_0$ is sufficiently large, then Lemma \ref{lem:heavy stars} implies that at least half of the $(s+1)$-stars centred in $Y$ are light. If again $C_0$ is sufficiently large, then, since $pq^s\geq C_0|X|^s$, there are more than $|V(H)||X|^{s+1}$ light $(s+1)$-stars centred in $Y$. However, since there are at most $|X|^{s+1}$ choices for the set of $s+1$ leaves and, given such a choice, there are at most $|V(H)|$ possibilities for the centre, this is a contradiction. \end{proof} \begin{comment} \begin{proof} Assume that $C_0$ is sufficiently large. Let $Y'$ be the subset of $Y$ consisting of vertices which have degree at least $q$ and let $e$ be the number of edges in $G\lbrack X,Y'\rbrack$. By the assumption of the lemma, $e\geq |X|p$. Furthermore, $e\geq |Y'|q$, so $e^{s+1}\geq |X|p (|Y'|q)^s\geq C_0|X|^{s+1}|Y'|^s$. Let $y$ be a uniformly random vertex in $Y'$ and let $T=N(y)$. Clearly, \begin{equation} \mathbb{E}\left\lbrack \binom{|T|}{s+1}\right\rbrack\geq \binom{e/|Y'|}{s+1}\geq \frac{e^{s+1}}{(s+1)^{s+1}|Y'|^{s+1}}\geq \frac{C_0|X|^{s+1}}{(s+1)^{s+1}|Y'|}, \label{eqn:dependentrandom} \end{equation} where the second inequality follows from $e/|Y'|\geq q\geq C_0\geq s+1$. Let $B$ be the number of sets of $s+1$ distinct vertices in $T$ with codegree at most $|V(H)|$. Then $\mathbb{E}\lbrack B\rbrack \leq |X|^{s+1}\cdot \frac{|V(H)|}{|Y'|}$. Using equation (\ref{eqn:dependentrandom}), we get that \begin{align*} \mathbb{E}\left[\binom{|T|}{s+1}\right] &\geq \frac{1}{2}\binom{e/|Y'|}{s+1}+\frac{1}{2}\frac{C_0|X|^{s+1}}{(s+1)^{s+1}|Y'|}\geq \frac{1}{2}\binom{C_0}{s+1}+\frac{1}{2}\frac{C_0|X|^{s+1}}{(s+1)^{s+1}|Y'|} \\ &> \binom{|V(H)|}{s+1}+\binom{|V(H)|}{s+1}\mathbb{E}[B]. 
\end{align*} Thus, there is an outcome for which $\binom{|T|}{s+1}> \binom{|V(H)|}{s+1}+\binom{|V(H)|}{s+1}|B|$, so there is a set $T\subset X$ of size at least $|V(H)|$ in which the proportion of $(s+1)$-sets having codegree at most $|V(H)|$ is less than $1/\binom{|V(H)|}{s+1}$. This then implies that there is a set $T'\subset T$ of size $|V(H)|$ in which all $(s+1)$-sets have codegree at least $|V(H)|$ and, hence, $G$ contains $H$ as a subgraph. \end{proof} \end{comment} We are now ready to complete the proof of Theorem~\ref{thm:main2}. By a reduction going back to work of Erd\H{o}s and Simonovits~\cite{ES70}, we may assume that our graph is {\it $K$-almost-regular} for some constant $K$ depending only on $r$ and $s$, by which we mean that $\max_{v \in V(G)} \deg(v) \leq K \min_{v \in V(G)} \deg(v)$. As noted in~\cite{CL21}, we may also assume that the graph is bipartite, reducing our task to proving the following result. \begin{theorem} \label{thm:regular} For any positive real number $K$, there is a constant $C=C(K,H)$ such that if $G$ is an $n$-vertex bipartite graph with minimum degree $\delta \geq Cn^{1-\frac{r+1}{rs+r}}$ and maximum degree at most $K\delta$, then $G$ contains $H$ as a subgraph. \end{theorem} \begin{proof} Let $C$ be sufficiently large and suppose, for the sake of contradiction, that $G$ is $H$-free. By Lemma~\ref{lem:many rich}, there is a positive constant $c=c(H)$ such that $G$ has at least $cn\delta^{rs+r}$ $(c,r)$-rich labelled copies of $F_{r,s}$. 
\medskip \noindent \emph{Claim.} There are distinct vertices $x_{i,j}\in V(G)$ for $1\leq i\leq r-1$, $1\leq j\leq s$ such that the number of $(c,r)$-rich labelled copies of $F_{r,s}$ mapping $w_{i,j}$ to $x_{i,j}$ for $1\leq i\leq r-1,1\leq j\leq s$ is \begin{enumerate} \item at least $\frac{1}{2}cn\delta^{rs+r}n^{-(r-1)s}$ and \label{property:many ext} \item at least $c/(2K^{rs+r})$ times the number of all labelled copies of $F_{r,s}$ mapping $w_{i,j}$ to $x_{i,j}$ for $1\leq i\leq r-1,1\leq j\leq s$. \label{property:relatively many ext} \end{enumerate} \medskip \noindent \emph{Proof of Claim.} Clearly, the number of $(c,r)$-rich labelled copies of $F_{r,s}$ which agree with fewer than $\frac{1}{2}cn\delta^{rs+r}n^{-(r-1)s}$ $(c,r)$-rich labelled copies of $F_{r,s}$ on the images of $w_{i,j}$ ($1\leq i\leq r-1,1\leq j\leq s$) is less than $\frac{1}{2}cn\delta^{rs+r}$. Hence, there are at least $\frac{1}{2}cn\delta^{rs+r}$ $(c,r)$-rich labelled copies of $F_{r,s}$ such that each of them agrees with at least $\frac{1}{2}cn\delta^{rs+r}n^{-(r-1)s}$ other $(c,r)$-rich labelled copies of $F_{r,s}$ on the images $w_{i,j}$ ($1\leq i\leq r-1,1\leq j\leq s$). Moreover, the total number of labelled copies of $F_{r,s}$ in $G$ is at most $n(K\delta)^{rs+r}$. Since $\frac{\frac{1}{2}cn\delta^{rs+r}}{n(K\delta)^{rs+r}}=c/(2K^{rs+r})$, there are vertices $x_{i,j}$ satisfying the two conditions in the claim. \hfill $\Box$ \medskip Fix some vertices $x_{i,j}$ ($1\leq i\leq r-1,1\leq j\leq s$) satisfying the conclusion of the claim and let $X=S(x_{1,1},\dots,x_{1,s},x_{2,1},\dots,x_{2,s},\dots,x_{r-1,1},\dots,x_{r-1,s})$. Moreover, let $\mathcal{A}$ be the set of $(c,r)$-rich labelled copies of $F_{r,s}$ mapping $w_{i,j}$ to $x_{i,j}$ for all $1\leq i\leq r-1,1\leq j\leq s$. Observe that \begin{equation} |\mathcal{A}|\leq |X|(K\delta)^{s+1}|V(H)|^{r-1}. 
\label{eqn:upper bound on N} \end{equation} Indeed, there are at most $|X|$ ways to embed $y\in V(F_{r,s})$, by the maximum degree condition there are at most $(K\delta)^{s+1}$ ways to embed $z_r,w_{r,1},w_{r,2},\dots,w_{r,s}$ and, finally, since the copy needs to be nice, there are at most $|V(H)|$ ways to embed each of $z_1,z_2,\dots,z_{r-1}$. On the other hand, property \ref{property:many ext} of the claim asserts that $|\mathcal{A}|\geq \frac{1}{2}cn\delta^{rs+r}n^{-(r-1)s}$, so, by comparing this with (\ref{eqn:upper bound on N}), we get \begin{equation} |X|(K\delta)^{s+1}|V(H)|^{r-1}\geq \frac{1}{2}cn\delta^{rs+r}n^{-(r-1)s}. \label{eqn:lower bound on |X|} \end{equation} Note also that the total number of labelled copies of $F_{r,s}$ mapping $w_{i,j}$ to $x_{i,j}$ for all $1\leq i\leq r-1,1\leq j\leq s$ is at least $|X|\delta^{s+1}/2$, since, after embedding $y$ to any vertex in $X$, there are at least $\delta^{s+1}/2$ ways to complete the embedding. It follows from property \ref{property:relatively many ext} of the claim that \begin{equation*} |\mathcal{A}|\geq \frac{c}{4K^{rs+r}}|X|\delta^{s+1}. \end{equation*} The number of those elements of $\mathcal{A}$ which agree with fewer than $\frac{c}{8K^{rs+r}}|X|\delta^{s+1}n^{-s}$ elements of $\mathcal{A}$ on the images of $w_{r,1},\dots,w_{r,s}$ is at most $\frac{c}{8K^{rs+r}}|X|\delta^{s+1}$. Hence, there are at least $\frac{c}{8K^{rs+r}}|X|\delta^{s+1}$ elements of $\mathcal{A}$ such that each of them agrees with at least $\frac{c}{8K^{rs+r}}|X|\delta^{s+1}n^{-s}$ elements of $\mathcal{A}$ on the images of $w_{r,1},\dots,w_{r,s}$. By the definition of $(c,r)$-richness, for all these copies, the image of $z_r$ has at least $c\cdot \frac{c}{8K^{rs+r}}|X|\delta^{s+1}n^{-s}$ neighbours in $X$. 
By the maximum degree condition in $G$ and since any $(c,r)$-rich copy of $F_{r,s}$ is nice, we see that for any $u,v\in V(G)$, there are at most $|V(H)|^{r-1}(K\delta)^s$ elements of $\mathcal{A}$ which map $y$ to $u$ and $z_r$ to $v$. Hence, $G$ has at least $\frac{\frac{c}{8K^{rs+r}}|X|\delta^{s+1}}{|V(H)|^{r-1}(K\delta)^s}=\frac{c}{8K^{rs+r+s}|V(H)|^{r-1}}|X|\delta$ edges $uv$ with $u\in X$ and $v\in V(G)$ such that $v$ has at least $c\cdot \frac{c}{8K^{rs+r}}|X|\delta^{s+1}n^{-s}$ neighbours in $X$. Set $Y=V(G)\setminus X$. Since $G$ is bipartite, any neighbour of a vertex in $X$ is in $Y$. We now want to apply Lemma \ref{lem:dependent random choice} to the bipartite graph $G\lbrack X,Y\rbrack$. By the previous paragraph, we can take $$p=\frac{c}{8K^{rs+r+s}|V(H)|^{r-1}}\delta$$ and $$q=\frac{c^2}{8K^{rs+r}}|X|\delta^{s+1}n^{-s}$$ and we just need to verify that $q\geq C_0$ and $pq^s\geq C_0|X|^s$, where $C_0=C_0(H)$ is the constant provided by Lemma \ref{lem:dependent random choice}. But, by equation (\ref{eqn:lower bound on |X|}), $$q\geq \frac{c^3}{16K^{rs+r+s+1}|V(H)|^{r-1}}\delta^{rs+r}n^{1-rs}\geq \frac{c^3}{16K^{rs+r+s+1}|V(H)|^{r-1}}C^{rs+r}.$$ When $C$ is sufficiently large, this is indeed at least $C_0$. Moreover, \begin{align*} pq^s &= \frac{c^{2s+1}}{8^{s+1}K^{rs+r+s+s(rs+r)}|V(H)|^{r-1}}\delta^{s^2+s+1}n^{-s^2}|X|^s \\ &\geq \frac{c^{2s+1}}{8^{s+1}K^{rs+r+s+s(rs+r)}|V(H)|^{r-1}}C^{s^2+s+1}n^{(s^2+s+1)(1-\frac{r+1}{rs+r})-s^2}|X|^s. \end{align*} Since $r\geq s+2$, we have $(s^2+s+1)(1-\frac{r+1}{rs+r})-s^2\geq 0$, so we get that $$pq^s\geq \frac{c^{2s+1}}{8^{s+1}K^{rs+r+s+s(rs+r)}|V(H)|^{r-1}}C^{s^2+s+1}|X|^s\geq C_0|X|^s,$$ provided that $C$ is sufficiently large. Hence, we can indeed apply Lemma \ref{lem:dependent random choice} to find a copy of $H$ in $G$, which is a contradiction. 
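The final inequality can also be checked with exact rational arithmetic. The helper below (ours) computes the exponent of $n$ in the lower bound for $pq^s/|X|^s$, namely $(s^2+s+1)\bigl(1-\frac{r+1}{rs+r}\bigr)-s^2$, which is nonnegative throughout the regime $r\ge s+2$ used above (with equality at $(r,s)=(3,1)$) but fails already at $r=s+1$:

```python
# Exact check of the exponent (s^2+s+1)(1 - (r+1)/(rs+r)) - s^2 from the proof.
from fractions import Fraction

def exponent_margin(r, s):
    """Exponent of n in the lower bound for p*q^s / |X|^s."""
    rho_inv = Fraction(r + 1, r * s + r)  # 1/rho for F_{r,s}
    return (s * s + s + 1) * (1 - rho_inv) - s * s
```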
\end{proof} \section{Concluding remarks} Let $T_{r,s, s'}$ be the rooted tree obtained from $F_{r,s}$ by attaching $s'$ leaves to the vertex~$y$, all of which are taken to be roots. It is easy to verify that $T_{r,s, s'}$ is balanced if and only if $s'-1\leq s\leq r+s'$. In their paper, Jiang, Jiang and Ma \cite{JJM20} actually studied this family of graphs, which clearly includes $F_{r,s} = T_{r,s,0}$, showing that if $T_{r,s, s'}$ is balanced and $r \geq s^3 - 1$, then $\mathrm{ex}(n,T_{r,s,s'}^t)=O(n^{2-1/\rho})$ holds, where $\rho=\rho(T_{r,s,s'}) = \frac{rs+r+s'}{r+1}$. We can prove the same upper bound under the relaxed condition $r\geq s-s'+1$ (except in the case $s'=0$, where we need $r\geq s+2$), almost matching the inequality $r\geq s-s'$ required for balancedness. \begin{theorem} For any integers $s'\geq 1$, $s\geq s'-1$, $r\geq s-s'+1$ and $t\geq 1$, $\mathrm{ex}(n,T_{r,s,s'}^t)=O(n^{2-\frac{r+1}{rs+r+s'}})$. \end{theorem} \noindent \emph{Proof sketch.} Since the proof is very similar to that of Theorem \ref{thm:main2}, we only mention the necessary adjustments. Taking $H=T_{r,s,s'}^t$, Lemma \ref{lem:heavy stars} still holds, although in the proof we need to consider the common neighbourhood of $s'$ vertices rather than that of a single vertex. The auxiliary hypergraphs $\mathcal{G}$ and $\mathcal{H}$ can then be defined identically (except that the vertex set of $\mathcal{G}$ is the common neighbourhood of $s'$ vertices). By making use of the extra $s'$ vertices whose common neighbourhood we considered, the existence of a subgraph $\mathcal{H}$ inside $\mathcal{G}$ still provides a copy of $H$. The next substantial change is in Definition \ref{def:locus}, where an additional $s'$ vertices are taken as inputs, corresponding to the images of the $s'$ new leaves, and the vertices in $S$ are required to be common neighbours of these $s'$ vertices (on top of the previous requirements). 
Similarly, for the claim in (the analogue of) Theorem \ref{thm:regular}, we choose and fix the $s'$ new leaves as well as the $(r-1)s$ leaves that were fixed before. Finally, although Lemma \ref{lem:dependent random choice} does not directly provide a copy of $H=T_{r,s,s'}^t$, we can still use Lemma~\ref{lem:dependent random choice} in the proof of Theorem~\ref{thm:regular} to find a copy of $F_{r,s}^t$ in $G[X,Y]$ with the $t$ copies of $y$ embedded into $X$. But $X$ is the common neighbourhood of $s'$ fixed vertices, so using those vertices we can extend $F_{r,s}^t$ to $H$. The remaining changes are numerical, so we do not detail them here. \hfill $\Box$ \section*{Acknowledgments} We are grateful to the anonymous reviewers for several helpful comments. \bibliographystyle{amsplain}
Correlation is evidence of causation

I've been bringing the title line out frequently for the past few years in response to people saying the somewhat true phrase "correlation does not imply causation", or the true phrase "correlation is not causation", which they've been indoctrinated with by fraudsters protecting Big Tobacco.

When asked for a proof, I often just link to this page: http://oyhus.no/CorrelationAndCausation.html It's the simplest and easiest to understand version I've come across. But I think it's sort of missing a final step, and a longer proof will fill that in.

In order to prove the title statement, we have to back up a bit and ask what evidence is, and before we do that we have to ask what belief is. Or rather, we don't really need to define what they are so much as how to measure them. Bets are a way of measuring the confidence and certainty of your beliefs, and odds ratios and other aspects of betting can be expressed through probability theory, so your beliefs being true can be expressed using probability theory as well. (If you're interested in a non-betting-based foundation for probability theory governing beliefs, see Jaynes. If you're interested in representing uncertainty of several "flavors", see Goertzel.) So if we have a probability for a belief, and we encounter a new piece of evidence, then that will either raise or lower the probability of the belief, depending on whether it's evidence for or against. Formally, if some fact $A$ is evidence for belief $B$ being true, that means that the probability of $B$ being true is greater if $A$ is true than if $A$ is false. In math, $P(B|A) > P(B|\overline{A})$ means $A$ is evidence of $B$.

So the above link proves that correlation is evidence of causation, but here I'll repeat the math (more verbosely) and add one additional fact to make things crystal clear.

Let $c$ be defined to mean "correlation" and $a$ to mean "causation". Given the universe of background information $I$, we know that not everything correlates: $P(c|I) < 1$. Additionally, we assume that if we have causation, then there is also a correlation. (It may not be linear correlation, but it will be correlation of some kind.) That is, $P(c|a,I) = 1$.

If we're trying to determine whether a particular correlation $c$ is evidence for a particular causation $a$, we need to find out whether $P(a|c,I) > P(a|\overline{c},I)$. We can do that with Bayes' theorem and substitution.

\begin{align} P(a|c,I) &= P(a|I) \frac{P(c|a,I)}{P(c|I)} && \text{(Bayes' theorem)} \\ &= P(a|I) \frac{1}{P(c|I)} && \text{(assumption that causation gives correlation)} \end{align}

Since we know that $P(c|I) < 1$, dividing by it only serves to increase the value of $P(a|c,I)$ relative to $P(a|I)$, so we now know that $P(a|c,I) > P(a|I)$.

The proof is technically done because some might consider the next step obvious (it was left out of the link), but I'm going to follow through anyway and show it. Let $P(a|c,I) > P(a|I)$, which we just proved, be known as Lemma 1. Now we just need to marginalize $P(a|I)$ with respect to $c$ to see that indeed $c$ is evidence of $a$. The marginalization is done by $P(a|I) = P(a,c|I) + P(a,\overline{c}|I) = P(a|c,I) P(c|I) + P(a|\overline{c},I) P(\overline{c}|I)$. (The second step was just the product rule.)

So now the final proof:

\begin{align} P(a|c,I) &> P(a|I) && \text{(Lemma 1)} \\ P(a|c,I) &> P(a|c,I) P(c|I) + P(a|\overline{c},I) P(\overline{c}|I) && \text{(marginalization)} \\ P(a|c,I) &> P(a|c,I) \big(1 - P(\overline{c}|I)\big) + P(a|\overline{c},I) P(\overline{c}|I) && \text{(sum rule)} \\ P(a|c,I) &> P(a|c,I) + P(\overline{c}|I) \big(P(a|\overline{c},I) - P(a|c,I)\big) \\ 0 &> P(\overline{c}|I) \big(P(a|\overline{c},I) - P(a|c,I)\big) \\ 0 &> P(a|\overline{c},I) - P(a|c,I) && \text{(divide by } P(\overline{c}|I) > 0\text{)} \\ P(a|c,I) &> P(a|\overline{c},I) && \text{(}c\text{ is evidence of }a\text{, QED)} \end{align}

If you've ever taken a stats course or a course that covered deductive reasoning with the predicate calculus, you were probably beaten over the head with the phrase "correlation is not causation!" or even "correlation does not imply causation!" You are beaten over the head with these phrases first of all because they were used to defend Big Tobacco by noting that smoking may not cause lung cancer, but secondly because intuitively it seems like correlation and causation are related, and by noticing correlations you can then proceed to do experiments that show possible causation, and stats and logic courses try to beat intuition out of you because in those fields it can often lead you astray (especially when you're new). And I do agree that these phrases are true: correlation is not the same as causation, and correlation does not logically, deductively imply causation within the predicate calculus. However, as we just showed, when you move to probability theory and allow for probabilistic inferences, you get the result that correlation is evidence of causation, and this is proved deductively. This matches the common sense that correlation "implies" causation in a looser (probabilistic) sense. And if you find a bunch of correlations, like objects of widely different masses falling at the same rate in a vacuum, that hints very strongly at a cause (like gravity or the Flying Spaghetti Monster's invisible noodly appendages). Note that the above proof did not specify how much evidence a given correlation provides to a given causation; it just says that there's some evidence. Finding out how much takes more work.

To the hardcore logician who only knows deduction in the predicate calculus, this may sound like heresy. But the predicate calculus is contained as a special case in probability theory; probability theory is more general and allows for more general inferences, including ones that match up with common sense better. (For example, the subject of a future post is logical fallacies, and how, if you analyze a "fallacy" in the domain of probability theory, it ceases to be a fallacy and instead becomes a theorem! If you assume (or measure a prediction sample to verify) that authorities are more right about their topic of expertise than not, an argument from authority is valid!)

Posted on 2014-06-03 by Jach

Tags: bayes, math, probability, rationality
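The result is also easy to check numerically. Here's a quick sanity check (my own sketch; the function name and the numbers are arbitrary): build any joint distribution over $(a, c)$ satisfying the two assumptions and confirm that $P(a|c) > P(a) > P(a|\overline{c})$.

```python
# Sanity check of the proof above: pick an arbitrary joint distribution
# over (a, c) satisfying the two assumptions,
#   P(c|a) = 1   (causation implies correlation of some kind)
#   P(c)   < 1   (not everything correlates)
# and confirm that P(a|c) > P(a) > P(a|not c).

def conditionals(p_a, p_c_given_not_a):
    """Return (P(a|c), P(a), P(a|not c)) for the joint built from the inputs."""
    p_ac = p_a * 1.0                            # P(a, c); P(c|a) = 1 by assumption
    p_a_notc = 0.0                              # P(a, not c) = 0 since P(c|a) = 1
    p_nota_c = (1.0 - p_a) * p_c_given_not_a    # P(not a, c)

    p_c = p_ac + p_nota_c                       # marginal P(c)
    assert p_c < 1.0, "need P(c) < 1 for the argument to apply"

    p_a_given_c = p_ac / p_c
    p_a_given_notc = p_a_notc / (1.0 - p_c)
    return p_a_given_c, p_a, p_a_given_notc

hi, mid, lo = conditionals(p_a=0.1, p_c_given_not_a=0.3)
assert hi > mid > lo    # P(a|c) > P(a) > P(a|not c): correlation is evidence
```

(With the strict assumption $P(c|a) = 1$, the third number is always zero; relaxing it to merely "high" keeps the inequality, just less dramatically.)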
Q: Snowflake ODBC driver does not recognize TIMESTAMP_TZ (1) Table describe in ODBC returns a TIMESTAMP_TZ column in Snowflake as sqltype = 93 (SQL_TYPE_TIMESTAMP). All the same attributes are returned for TIMESTAMP_TZ column Vs. a TIMESTAMP_NTZ column. SELECT get_ddl('TABLE', 'TS_TEST'); create or replace TABLE TS_TEST ( TS TIMESTAMP_TZ(9), ID NUMBER(38,0) ); SELECT column_name, data_type, datetime_precision FROM INFORMATION_SCHEMA.COLUMNS WHERE table_schema = 'PUBLIC' and table_name = 'TS_TEST' and column_name = 'TS'; COLUMN_NAME DATA_TYPE DATETIME_PRECISION ----------- ----------- ------------------- TS TIMESTAMP_TZ 9 sqlstmt = 0x000000a220befd60 L"SELECT * FROM TS_TEST LIMIT 1" rc = SQLDescribeColW (*cursor_ptr, column_index, (SE_WCHAR FAR *) column_name, (SNOW_MAX_IDENTIFIER_LEN * sizeof(SE_WCHAR)), /* BufferLength */ &name_length, &sqltype, (SQLULEN *) &precision_size, &scale, &nulls); column_name = 0x000000a220bef670 L"TS" name_length = 2 sqltype = 93 // #define SQL_TYPE_TIMESTAMP 93; // C:\Program Files (x86)\Windows Kits\10\Include\10.0.19041.0\um\sql.h precision_size = 29 // #define SQL_SF_TIMESTAMP_COLUMN_SIZE 29; C:\Program Files\Snowflake ODBC Driver\include\sf_odbc.h scale = 9 nulls = 1 (2) The Snowflake ODBC driver documentation is very sparse regarding TIMESTAMP_TZ. There are no examples of binding input/output to TIMESTAMP_TZ with ODBC. What is the data structure provided by Snowflake (Simba) ODBC to bind input/output to a TIMESTAMP_TZ column when the value includes time zone offset information? Where is the structure defined? 
For example: MS SqlServer defines SQL_SS_TIMESTAMPOFFSET_STRUCT for binding a DATETIMEOFFSET column in C:\Program Files (x86)\Windows Kits\10\Include\10.0.18362.0\um\sqltypes.h typedef struct tagSS_TIMESTAMPOFFSET_STRUCT { SQLSMALLINT year; SQLUSMALLINT month; SQLUSMALLINT day; SQLUSMALLINT hour; SQLUSMALLINT minute; SQLUSMALLINT second; SQLUINTEGER fraction; SQLSMALLINT timezone_hour; SQLSMALLINT timezone_minute; } SQL_SS_TIMESTAMPOFFSET_STRUCT; Are we expected to bind TIMESTAMP_TZ columns as BINARY (SQL_C_BINARY) OR as a STRING (SQL_C_WCHAR)? That should only be applicable to ODBC 3.5 and should not be required with ODBC 3.8. That is not feasible currently, because the function SQLDescribeColW() in the Snowflake ODBC driver describes TIMESTAMP_TZ columns as SQL_TYPE_TIMESTAMP, i.e. the identical typecode as a TIMESTAMP_NTZ column. Therefore, there is no way for an ODBC application to distinguish between TIMESTAMP_TZ and TIMESTAMP_NTZ columns. (3) The following topic in the Snowflake ODBC documentation alludes to custom SQL Data Types, but does NOT provide an example of binding a TIMESTAMP_TZ value, nor an appropriate data structure: https://docs.snowflake.com/en/user-guide/odbc-api.html "Some SQL data types supported by Snowflake have no direct mapping in ODBC (e.g. TIMESTAMP_*tz, VARIANT). 
To enable the ODBC driver to work with the unsupported data types, the header file shipped with the driver includes definitions for the following custom data types:" //////////////////////////////////////////////////////////////////////////////////////////////////// /// Custom SQL Data Type Definition /// /// //////////////////////////////////////////////////////////////////////////////////////////////////// #define SQL_SF_TIMESTAMP_LTZ 2000 #define SQL_SF_TIMESTAMP_TZ 2001 #define SQL_SF_TIMESTAMP_NTZ 2002 #define SQL_SF_ARRAY 2003 #define SQL_SF_OBJECT 2004 #define SQL_SF_VARIANT 2005 Refer to the topic "C Data Type Extensibility" in the ODBC documentation https://learn.microsoft.com/en-us/sql/odbc/reference/develop-app/c-data-types-in-odbc?redirectedfrom=MSDN&view=sql-server-ver15 "In ODBC 3.8, you can specify driver-specific C data types. This enables you to bind a SQL type as a driver-specific C type in ODBC applications when you call SQLBindCol, SQLGetData, or SQLBindParameter. This can be useful for supporting new server types, because existing C data types might not correctly represent the new server data types. Using driver-specific C types can increase the number of conversions that drivers can perform. https://learn.microsoft.com/en-us/sql/odbc/reference/develop-app/driver-specific-data-types-descriptor-information-diagnostic?view=sql-server-2017 Note: "Driver-specific data types, descriptor fields, diagnostic fields, information types, statement attributes, and connection attributes must be described in the driver documentation. When any of these values is passed to an ODBC function, the driver must check whether the value is valid. Drivers return SQLSTATE HYC00 (Optional feature not implemented) for driver-specific values that apply to other drivers." (4) Is there any registry key OR key to set in ODBC.ini ? Or another attribute to enable on the ODBC connection handle that controls behavior pertaining to Snowflake custom data types? 
I'm specifically interested in TIMESTAMP_TZ, TIMESTAMP_NTZ, TIMESTAMP_LTZ. I tried configuring the parameter ODBC_USE_STANDARD_TIMESTAMP_COLUMNSIZE in accordance with the following topic in the Snowflake ODBC documentation: https://docs.snowflake.com/en/user-guide/odbc-parameters.html Additional Connection Parameters ODBC_USE_STANDARD_TIMESTAMP_COLUMNSIZE "This boolean parameter affects the column size (in characters) returned for SQL_TYPE_TIMESTAMP. When this parameter is set to true, the driver returns 29, following the ODBC standard. When this parameter is set to false, the driver returns 35, which allows room for the timezone offset (e.g. "-08:00"). This value can be set via not only the odbc.ini file (Linux or macOS) or the Microsoft Windows registry, but also the connection string." However, none of the following has any impact on the behavior. A) Setting the registry key ODBC_USE_STANDARD_TIMESTAMP_COLUMNSIZE under the DSN name: * *(String value) to FALSE/TRUE *(DWORD 32 bit value) to 0/1 B) Concatenating to the ODBC connection string (DSN based OR DSN-less string): - "ODBC_USE_STANDARD_TIMESTAMP_COLUMNSIZE=FALSE", - "ODBC_USE_STANDARD_TIMESTAMP_COLUMNSIZE=TRUE" - "ODBC_USE_STANDARD_TIMESTAMP_COLUMNSIZE=0", - "ODBC_USE_STANDARD_TIMESTAMP_COLUMNSIZE=1" NOTE: The above parameter makes no difference to the behavior. SQLDescribeColW always returns the exact same attributes for both TIMESTAMP_TZ and TIMESTAMP_NTZ columns. 
sqltype = 93; // #define SQL_TYPE_TIMESTAMP 93; // C:\Program Files (x86)\Windows Kits\10\Include\10.0.19041.0\um\sql.h precision_size = 29; // #define SQL_SF_TIMESTAMP_COLUMN_SIZE 29; // C:\Program Files\Snowflake ODBC Driver\include\sf_odbc.h // scale = 9; // nulls = 1; One would expect TIMESTAMP_TZ columns to be described back as the type defined in // C:\Program Files\Snowflake ODBC Driver\include\sf_odbc.h, namely SQL_SF_TIMESTAMP_TZ (2001) and TIMESTAMP_NTZ columns to be described back as the type SQL_SF_TIMESTAMP_NTZ (2002) (5) NOTE: The installed version of SnowflakeDSII.dll in C:\Program Files\Snowflake ODBC Driver is 2.22.4.0 NOTE: Since the time of the original post, the Snowflake ODBC driver has been upgraded to the latest version, namely 2.24.5.0 - without any change in behavior. /* In the connection, the target ODBC version is set to ODBC 3.8 */ rc = SQLSetEnvAttr (connection->henv, SQL_ATTR_ODBC_VERSION, (void *)SQL_OV_ODBC3_80, 0); rc = SQLGetInfoW (connection->hdbc, SQL_DRIVER_ODBC_VER, &odbc_ver, SE_MAX_MESSAGE_LENGTH, NULL); odbc_ver = 0x00000000022cae40 L"03.80" (6) The parameter CLIENT_TIMESTAMP_TYPE_MAPPING is not set to anything. It only pertains to TIMESTAMP_LTZ or TIMESTAMP_NTZ, anyway. I'm interested specifically in binding TIMESTAMP_TZ only. https://docs.snowflake.com/en/sql-reference/parameters.html#client-timestamp-type-mapping The parameter TIMESTAMP_TYPE_MAPPING is set to its default value. It anyway specifies the TIMESTAMP_* variation that the TIMESTAMP data type alias maps to. The test scenario explicitly creates a TIMESTAMP_TZ column and does not use an alias. A: I get all three Snowflake correct data types by using: SQLLEN nDataType = SQL_UNKNOWN_TYPE; rc = ::SQLColAttribute( hstmt, nCol, SQL_DESC_CONCISE_TYPE, NULL, 0, NULL, &nDataType); Seems there is currently no data type specific structure to use for SQL_SF_TIMESTAMP_TZ that provides the time zone stored for a record. 
Not sure if the Snowflake driver would return the time zone if you were to bind SQL_SF_TIMESTAMP_TZ data as regular text, but maybe worth trying.
USA: PK Acquisition Buys Retailer Priceless Kids

New York-based retail and real estate investor PK Acquisition LLC has bought value children's clothing retailer Priceless Kids Inc. Founder Cathy Cohn, who set up the chain in 1990, will stay on as president and chief operating officer. PK Acquisition LLC is led by Alan Cohen, chairman of financial consulting company Abacus Advisors Group LLC. Cohen said he bought Priceless as a personal investment and will run the company externally to Abacus. He added: "We believe Priceless Kids has a unique and valuable concept in an increasingly competitive market. It provides a comfortable shopping environment for parents and children alike, while offering quality branded children's goods at great prices." The other investors involved in the group were not identified. Priceless operates 32 stores in New York, Ohio, Pennsylvania, Massachusetts and Rhode Island under brand names such as Baby Togs, French Toast, Mudd, Bongo, and Buster Brown. PK Acquisition said that Priceless would be adding a number of brands to its portfolio, including Laura Ashley, Eddie Bauer, Russell Athletic, Starter and New Balance. It added that no employees would lose their jobs due to the acquisition.
It looks like the task of sending voice messages on WhatsApp is going to become a bit easier. WhatsApp Beta users can lock the voice recording button instead of continuously pressing it. It should be noted that the feature is already available for iOS users. To use this feature, users need to hold down the recording button and then swipe up. They will then see a lock symbol, which means that voice recording is now locked. Once the user is done recording the message, they can press the send button to send the message across. The feature is available in WhatsApp Beta version 2.18.102, and it is not yet known when it will be available to all users. Last month, WABetaInfo reported that WhatsApp was in the process of testing the new feature. Further, the report noted that the company was also testing a feature that would allow users to preview voice messages before sending them. However, the latter has not been added to WhatsApp Beta just yet. A couple of months ago, WhatsApp took on services like PayTM and Google Tez by adding UPI-based payments to its service. The new service allows users to send and receive money directly through WhatsApp. In order to do so, however, the WhatsApp number should be the same as the number linked to the UPI-enabled bank account. The payments option is located inside the attachments icon, which is the same one used to send contacts, media, location and more. Once the user accepts the terms and conditions, they can then conduct transactions.
OCHO is more than meets the eye

The Ohio Hispanic Coalition (OHCO), a non-profit organization that provides extensive services to the Hispanic/Latino communities, has moved, but they still call the Westside home. As of January 2007, OHCO relocated from the Westland Mall to their new home, 3566 Sullivant Ave., Suite 203. Crystal Merida, Program Coordinator at OHCO, said the move just makes sense. "OHCO needed a location that allowed the youth space for activities and sports, to feel and be safe and secure. This will allow the youth to focus on their academic studies without being disturbed," said Merida. The new building is located next door to La Voz Hispana, a Hispanic newspaper, and in the same building as Columbus Public Health office, which is a convenience for those who need those services as well. Merida also states that the new space is much larger than what they had previously and they have more flexibility with their hours and their ability to expand their services. According to Merida, one of the problems with the Westland Mall was the sounds of people walking and talking through the mall. Sounds would echo and disrupt classes or other business being conducted, which Merida is happy to note is no longer something OHCO has to deal with. "It's very comfortable, whether you're there for a class or medical screenings or just to stop and get information," said Merida. The OHCO gets their funding through a series of grants. For instance, they receive funds from the governor's Office of Highway Safety as part of a program that informs the community on the importance of car seats and booster seats for children. OHCO educates people on the laws regulating child restraints. "We have workshops and trainings on safety that allow us to install the seats properly, demonstrating with the parents and then it's theirs to keep.
The seats are provided to Hispanic/Latino families who are unable to purchase one on their own, based upon availability," said Merida. OHCO also receives similar grants from Central Ohio Breathing Association and Ohio Tobacco Prevention Foundation. Merida said health care of the Hispanic/Latino community is a very well funded concern. OHCO recently established a STAND team with all Hispanic/Latino youth. One of the programs addressing dual concerns coming up soon is the Ohio Hispanic Coalition's Back to School program on August 17; it focuses on health and safety while providing back to school supplies to Hispanic/Latino families. Families that sign up for this program can expect different workshops to be available to them, such as a diabetes screening, blood pressure screening and vision exams, courtesy of Wal-Mart. "The families that go through the different health and safety sessions then are provided with back to school supplies for their children," said Merida. Some of the other programs that OHCO provides include an after-school program, Soy Latina (a women's support group) and VOCA (an education and advocacy group for victims of domestic violence). Some of the activities and programs of OHCO include how to open a checking account, interpretation services, fire prevention courses, HIV/AIDS programs and summer camp, just to name a few. Merida says in the future they plan on offering more activities, including self-defense courses. OHCO also has a fully functioning computer lab available for educational use by the Hispanic/Latino community. This includes uses such as English as a second language learning, job readiness skills and basic computer use. Funding for the computer lab is provided by one of the largest and oldest national Latino organizations, LULAC (League of United Latin American Citizens), through AT&T.
"It is important to know that all of OHCO's trainings, workshops and events are conducted in Spanish. Something that no other organization does," says Merida. Merida would like more people to just know the mission of OHCO – which is "To improve the well-being and quality of life for all Hispanics and Latinos through advocacy, education, training and access to quality services" – and to know OHCO is here. "A lot of people aren't aware of us and that we've been serving Ohio for 17 years; and we're not going anywhere," said Merida.
The Battle of Queenston Heights (13 October 1812) was the largest battle fought on the territory of Upper Canada during the War of 1812. In it, British forces inflicted a crushing defeat on the invading American army and for a long time averted the threat of an American invasion. However, the battle was also a disaster for the British side, because the commander of the British troops, General Isaac Brock, fell in it, and his successor, Roger Sheaffe, came nowhere near matching his qualities.
Literary Theory

Octavio González and Todd G. Nordgren

The definitional limits of the term queer have been under conceptual, political, and ethical dispute since its reclamation from its pejorative meaning during the early AIDS crisis of the 1980s and early 1990s. Reflecting activist recuperation, queer became a means to inspire and propel a coalitional politics oriented toward nonconformity and anti-normativity among diverse sexualities and across divisions of gender. Concomitantly, queer theory arose in academia as a way to expand upon and break what some scholars saw as the restrictive disciplinary boundaries of gay and lesbian studies, which were explicitly grounded in post–Stonewall identity politics. The term's radical potential derives in part from its grammatical fluidity, as it operates as noun, adjective, and verb—combining action, identification, and effect into a single word. In the late 1990s and early 2000s, queer of color critique drew upon a different genealogy, beyond the postmodern rupture inaugurated by Michel Foucault's work on sexuality and "biopower," by foregrounding black and women of color feminisms, critical race studies, and postcolonial studies in order to analyze the intersections of race, nationality, coloniality, class, sex, and gender with a Foucauldian understanding of sexuality as a privileged mode of modern power–knowledge. Queer of color critique inspired and was mirrored in investigations of the analytic boundaries of the term, often defined as a binary distinction between a minoritizing and universalizing definition of queer.
Q: Apache Beam : RabbitMqIO watermark doesn't advance I need some help please. I'm trying to use Apache beam with RabbitMqIO source (version 2.11.0) and AfterWatermark.pastEndOfWindow trigger. It seems like the RabbitMqIO's watermark doesn't advance and remain the same. Because of this behavior, the AfterWatermark trigger doesn't work. When I use others triggers which doesn't take watermark in consideration, that works (eg: AfterProcessingTime, AfterPane) Below, my code, thanks : public class Main { private static final Logger LOGGER = LoggerFactory.getLogger(Main.class); // Window declaration with trigger public static Window<RabbitMqMessage> window() { return Window. <RabbitMqMessage>into(FixedWindows.of(Duration.standardSeconds(60))) .triggering(AfterWatermark.pastEndOfWindow()) .withAllowedLateness(Duration.ZERO) .accumulatingFiredPanes(); } public static void main(String[] args) { SpringApplication.run(Main.class, args); // pipeline creation PipelineOptions options = PipelineOptionsFactory.fromArgs(args).create(); Pipeline pipeline = Pipeline.create(options); // Using RabbitMqIO PCollection<RabbitMqMessage> messages = pipeline .apply(RabbitMqIO.read().withUri("amqp://guest:guest@localhost:5672").withQueue("test")); PCollection<RabbitMqMessage> windowedData = messages.apply("Windowing", window()); windowedData.apply(Combine.globally(new MyCombine()).withoutDefaults()); pipeline.run(); } } class MyCombine implements SerializableFunction<Iterable<RabbitMqMessage>, RabbitMqMessage> { private static final Logger LOGGER = LoggerFactory.getLogger(MyCombineKafka.class); /** * */ private static final long serialVersionUID = 6143898367853230506L; @Override public RabbitMqMessage apply(Iterable<RabbitMqMessage> input) { LOGGER.info("After trigger launched"); return null; } } A: I spent a lot of time looking into this. 
After opening https://issues.apache.org/jira/browse/BEAM-8347 I left some notes in the ticket on what I think the problems are with the current implementation. Re-stated here: The documentation for UnboundedSource.getWatermark reads: [watermark] can be approximate. If records are read that violate this guarantee, they will be considered late, which will affect how they will be processed. ... However, this value should be as late as possible. Downstream windows may not be able to close until this watermark passes their end. For example, a source may know that the records it reads will be in timestamp order. In this case, the watermark can be the timestamp of the last record read. For a source that does not have natural timestamps, timestamps can be set to the time of reading, in which case the watermark is the current clock time. The implementation in UnboundedRabbitMqReader uses the oldest timestamp as the watermark, in violation of the above suggestion. Further, the timestamp applied is delivery time, which should be monotonically increasing. We should reliably be able to increase the watermark on every message delivered, which mostly solves the issue. Finally, we can make provisions for increasing the watermark even when no messages have come in. In the event where there are no new messages, it should be ok to advance the watermark following the approach taken in the kafka io TimestampPolicyFactory when the stream is 'caught up'. In this case, we would increment the watermark to, e.g., max(current watermark, NOW - 2 seconds) when we see no new messages, just to ensure windows/triggers can fire without requiring new data. Unfortunately, it's difficult to make these slight modifications locally as the Rabbit implementations are closed to extension, and are mostly private or package-private. Update: I've opened a PR upstream to address this. Changes here: https://github.com/apache/beam/pull/9820
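The remedy sketched above — a watermark that advances monotonically from delivery timestamps, and that creeps forward to roughly now minus a small lag when the queue is idle — can be illustrated outside Beam with a tiny helper. The class name, method name, and the 2-second lag are illustrative assumptions, not Beam API:

```java
import java.time.Instant;

// Minimal sketch of the watermark policy described above: advance to
// max(current, max(lastDeliveryTime, now - lag)). Delivery timestamps
// are monotone, so each message can raise the watermark; the idle floor
// lets windows close even when no new data arrives.
class WatermarkSketch {
    static final long IDLE_LAG_MILLIS = 2_000;

    static Instant nextWatermark(Instant current, Instant lastDeliveryTime, Instant now) {
        Instant idleFloor = now.minusMillis(IDLE_LAG_MILLIS);
        Instant candidate = lastDeliveryTime.isAfter(idleFloor) ? lastDeliveryTime : idleFloor;
        // A watermark must never move backwards.
        return candidate.isAfter(current) ? candidate : current;
    }

    public static void main(String[] args) {
        Instant wm = Instant.ofEpochMilli(0);
        // A fresh delivery advances the watermark to the delivery time.
        wm = nextWatermark(wm, Instant.ofEpochMilli(9_500), Instant.ofEpochMilli(10_000));
        System.out.println(wm.toEpochMilli()); // 9500
        // With no newer messages, it still creeps forward to now - 2s,
        // so event-time windows can eventually fire.
        wm = nextWatermark(wm, Instant.ofEpochMilli(9_500), Instant.ofEpochMilli(20_000));
        System.out.println(wm.toEpochMilli()); // 18000
    }
}
```

This mirrors the KafkaIO "caught up" behavior mentioned above, where the watermark tracks wall-clock time minus a small safety margin when the stream is idle.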
\section{Introduction} Several years ago, Sergey Fomin and Andrei Zelevinsky introduced a new mathematical object known as a cluster algebra which is related to a host of other combinatorial and geometric topics. Some of these include canonical bases of semisimple algebraic groups, generalized associahedra, quiver representations, tilting theory, and Teichm\"{u}ller theory. In what follows, we will use the definitions and conventions of Fomin and Zelevinsky's initial papers, \cite{ClustI, ClustII}. Starting with a subset $\{x_1,x_2,\dots, x_n\}$ of a cluster algebra $\mathcal{A}$, one applies binomial exchange relations to obtain additional generators of $\mathcal{A}$, called \emph{cluster variables}. The (possibly infinite) set of cluster variables obtained this way generates $\mathcal{A}$ as an algebra. It was proven in \cite{ClustI} and \cite{Laurent} that any cluster variable is a Laurent polynomial in $\{x_1,x_2,\dots, x_n\}$, i.e. of the form $$\frac{P(x_1,\dots, x_n) }{ x_1^{a_1}x_2^{a_2}\cdots x_n^{a_n}} \hspace{3em} (\mathrm{Note~that~}x_i = \frac{1}{x_i^{-1}}\mathrm{~is~also~allowed})$$ where $P(x_1,\dots, x_n)$ is a polynomial with integer coefficients (not divisible by any monomial) and the exponents $a_i$ are (possibly negative) integers. It is further conjectured that the polynomials $P(x_1,\dots, x_n)$ have \emph{nonnegative} integer coefficients for any cluster algebra. However, this conjecture has been proved only in a limited number of cases, including the finite type case, as proved in \cite{ClustII}, the case of rank two affine cluster algebras as demonstrated in \cite{Caldero}, \cite{MusPropp}, \cite{SherZel}, and \cite{Zel}, as well as cluster algebras arising from acyclic quivers \cite{CalReit}. The finite type case is defined as the case where the cluster variable generation procedure only yields a finite set of cluster variables associated to $\mathcal{A}$.
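For the reader's orientation, here is a minimal worked example of these phenomena (our illustration, in the coefficient-free rank two case of type $A_2$): starting from the cluster $\{x_1,x_2\}$ with exchange relations $x_1x_1^\prime = x_2+1$ and $x_2x_2^\prime = x_1+1$, alternately mutating the two variables yields

```latex
x_3 = \frac{x_2+1}{x_1}, \qquad
x_4 = \frac{x_3+1}{x_2} = \frac{x_1+x_2+1}{x_1 x_2}, \qquad
x_5 = \frac{x_4+1}{x_3} = \frac{x_1+1}{x_2}, \qquad
x_6 = \frac{x_5+1}{x_4} = x_1, \qquad
x_7 = \frac{x_6+1}{x_5} = x_2.
```

The sequence is periodic, so there are exactly five cluster variables (finite type), and each one is visibly a Laurent polynomial in $x_1$ and $x_2$ with nonnegative integer coefficients in its numerator.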
By a combinatorial and geometric miracle, one which has sparked much interest in these algebras, the cluster algebras of finite type exactly correspond to the Lie algebras of finite type. Furthermore, in these cases, the cluster variables (except for the $x_i$'s) have denominators with \emph{nonnegative} exponents and can be put in a $1$-to-$1$ correspondence with the positive roots of the associated root system. Study of the particular finite type cluster algebra of type $A_n$, also known as the Ptolemy algebra has been especially fruitful as it can be realized in terms of the Grassmannian and Pl\"{u}cker embedding. In 2003, as part of the REACH research group under Jim Propp's direction, Gabriel Carroll and Gregory Price \cite{CPpre} described two combinatorial interpretations of the associated cluster variables, one in terms of paths and one in terms of perfect matchings. Further, Ralf Schiffler recently independently discovered and extended the paths interpretation \cite{Schiffler}. In the present paper, we go beyond $A_n$, and describe a combinatorial interpretation for the cluster variables in all four families of finite type, namely $A_n$, $B_n$, $C_n$, and $D_n$, for the coefficient-free case. Our combinatorial model will involve perfect matchings, in the spirit of \cite{MusPropp}, and agrees with Carroll and Price's interpretation in the $A_n$ case. Unlike the aforementioned work we do not attempt to give the Laurent expansion of cluster variables in terms of any seed but only in terms of the initial bipartite seed, whose definition we remind the reader of below. By restricting ourselves to expansions in this initial seed, we are able to explicitly write down families of graphs which encode the cluster algebra using weighted perfect matchings. We shall use the following notation throughout this paper. Let $G=(V,E)$ be a finite graph with vertex set $V = \{v_1,\dots, v_m\}$ and edge set $E \subseteq \{ \{u,v\}:u,v\in V\}$. 
For each edge $e\in E$, we set $w_e$ to be the weight of $e$, where $w_e$ is allowed to be $1$ or $x_i$ for $i\in \{1,2,\dots, n\}$. A \emph{perfect matching} $M$ of graph $G$ is a subset of $E$ such that for every vertex $v\in V$, there is exactly one edge $e\in M$ containing $v$. The weight of a perfect matching is defined to be the product $w(M) = \prod_{e\in M} w_e$, and we let $P(G)$ denote the matching polynomial, or matching enumerator, of graph $G$, defined as $$P(G) = \sum_{M \mathrm{~is~a~perfect~matching~of~}G} w(M).$$ The main result of this paper is the following theorem. \begin{Thm} \label{vargraph} Let $\Phi$ be a root system of classical type and denote its positive roots as $\Phi_+$. For each such $\Phi$, we explicitly construct a family of graphs, $\mathcal{G}_{\Phi}$, with the following three properties. \begin{enumerate} \item $|\mathcal{G}_{\Phi}| = |\Phi_{+}|$. \item For each $\alpha = (\alpha_1, \alpha_2, \dots, \alpha_n) \in \Phi_+$, there exists a unique $G_\alpha^{\Phi} \in \mathcal{G}_{\Phi}$ that can be efficiently identified. \item We have the cluster expansion formula $$x[\alpha]^{\Phi} = \frac{P({G_\alpha}^{\Phi})}{x_1^{\alpha_1}\cdots x_n^{\alpha_n}},$$ where $x[\alpha]^{\Phi}$ denotes the cluster variable corresponding to positive root $\alpha$ (in type $\Phi$) under Fomin and Zelevinsky's bijection. \end{enumerate} \end{Thm} Given graph $G \in \mathcal{G}_\Phi$, we are able to determine for which $\alpha \in \Phi_+$ we have $G = G_\alpha^{\Phi}$ by breaking down $G$ into tiles. More precisely, we let a family of tiles $\mathcal{T}=\{T_1,\dots, T_n\}$ be a finite set of graphs, with weighted edges, such that each $T_i$ is isomorphic to a cycle graph. Given the faces and edge weighting of graph $G$, we decompose $G$ into a union of such tiles by gluing together certain edges.
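As a minimal illustration of the matching enumerator (our toy example, not from the paper), consider a single tile: a $4$-cycle whose northern, eastern, southern, and western edges carry weights $a$, $b$, $c$, $d$ respectively. A $4$-cycle has exactly two perfect matchings, each consisting of a pair of opposite edges, so

```latex
P(C_4) = ac + bd.
```

In particular, a square tile with northern weight $x_{i+1}$, southern weight $x_{i-1}$, and unit weights on the two vertical edges has $P = x_{i-1}x_{i+1} + 1$, which is exactly the shape of the binomial exchange relations appearing below.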
\vspace{1em}\begin{center} $\begin{array}{cccc} \includegraphics[width = 0.2in , height = 0.2in]{BB1.eps} &~~~& \includegraphics[width = 0.2in , height = 0.2in]{BB3.eps} &~~~\\ ~~~& \includegraphics[width = 0.6in , height = 0.45in]{BB1213.eps} &~~~& \includegraphics[width = 0.2in , height = 0.4in]{BB34.eps} \\ % \includegraphics[width = 0.4in , height = 0.6in]{BB123.eps} &~~~& \includegraphics[width = 0.6in , height = 0.7in]{BB1214.eps} &~~~ \\ ~~~& \includegraphics[width = 0.8in , height = 0.6in]{BB121234.eps} &~~~& \includegraphics[width = 0.6in , height = 0.4in]{BB121.eps} \\ \includegraphics[width = 0.4in , height = 0.6in]{BB124.eps} &~~~& \includegraphics[width = 0.8in , height = 0.5in]{BB121230.eps} &~~~ \\ ~~~& \includegraphics[width = 0.8in , height = 0.6in]{BB121240.eps} &~~~& \includegraphics[width = 0.2in , height = 0.5in]{BB23.eps} \\ % \includegraphics[width = 0.4in , height = 0.4in]{BB12.eps} &~~~& \includegraphics[width = 0.2in , height = 0.6in]{BB24.eps} &~~~ \\ ~~~& \includegraphics[width = 0.25in , height = 0.4in]{BB2.eps} &~~~& \includegraphics[width = 0.2in , height = 0.2in]{BB4.eps} \\ \end{array}$ \\ The collection $\mathcal{G}_{B_4}$ (edge weights described in Section $4$). \end{center}\vspace{1em} We shall use the convention from \cite{ClustII}, so that the initial exchange matrix $B =||b_{ij}||_{i,j=1}^n$ contains rows of like sign. Any rank $n$ cluster algebra of finite type has such a seed consisting of a cluster of initial variables $\{x_1,\dots, x_n\}$ and a set of $n$ binomial exchange relations of the form $$x_ix_i^\prime = \prod_{j =1}^nx_j^{|b_{ij}|} + 1.$$ After mutating in the $k$th direction, i.e. 
applying an exchange relation of the form $x_k x_k^\prime =$ binomial, we obtain a new seed with cluster $\{x_1,x_2,\dots, x_n\}\cup \{x_k^\prime\}\setminus \{x_k\}$ and exchange matrix $B^\prime = ||b_{ij}^\prime||_{i,j=1}^n$ such that the $b_{ij}^\prime$'s satisfy $$b_{ij}^\prime = \begin{cases} -b_{ij} &\mbox{~~if~~} i=k \mathrm{~or~}j=k, \\ b_{ij} + \max(-b_{ik},0)\cdot b_{kj} + b_{ik}\cdot \max(b_{kj},0) &\mathrm{~~otherwise}.\end{cases}$$ As we mention below in Remark \ref{sameseed}, we shall use an ordering of mutations in this paper so that we need only work with binomial exchanges of the form $x_k x_k^\prime =~($Monomial $+1)$. Note that we shall use the notation $P_\alpha(x_1,x_2,\dots, x_n)$ to denote the numerator of the cluster variable with denominator $x_1^{\alpha_1}x_2^{\alpha_2}\cdots x_n^{\alpha_n}$ despite its similarity with the notation of $P(G)$ for the matching polynomial of graph $G$. \vspace{1em} The outline of the paper is as follows. We proceed to prove Theorem \ref{vargraph} separately for the four families of non-exceptional type, starting with the well-studied case of $A_n$. We will use different language than in \cite{CPpre}, \cite{ClustII}, or \cite{Schiffler}, and we include our own proof of this case to familiarize the reader with the techniques which we will utilize later in the paper. Since the type of the cluster algebra will frequently be clear from context, we will simply denote tiles as $T_i$ or graphs as $G_\alpha$ (instead of ${G_\alpha}^{\Phi}$). We end with some comments and directions for further research. \begin{Rem} In \cite{YSys}, Fomin and Zelevinsky explicitly constructed Fibonacci polynomials for types $A_n$ and $D_n$, which provide an alternate combinatorial expansion formula for cluster variables. Generalizations of these polynomials, for other types, are defined in \cite{ClusIV}, where they are referred to as F-polynomials.
\end{Rem} \section{$A_n$} \label{an} The work in this section was done independently of the work of Carroll-Price \cite{CPpre} and the work of Schiffler \cite{Schiffler} mentioned in the introduction. We will use the notation and the techniques of this section later in the paper for the $B_n$ and $D_n$ cases. Thus we include this section even though the combinatorial interpretation given by Proposition \ref{CaseAn} is not new in this case, although we believe our proof via excision, as described by Lemma \ref{exciseGr}, is new. This excision technique will also be utilized for the $B_n$ and $D_n$ cases in Section $4$. We begin by reviewing the necessary characteristics of the cluster algebra of type $A_n$. Recall that Lie algebra $A_n$ has a Dynkin diagram consisting of a line of $n$ vertices connected by edges of weight one. $$\bullet\line(1,0){3}\bullet\line(1,0){3}\bullet\line(1,0){3}\bullet\line(1,0){3}\bullet\line(1,0){3} \bullet\line(1,0){3} \dots \dots \line(1,0){3} \bullet$$ \noindent Thus the associated Cartan matrix has the form $$\begin{bmatrix} 2 & -1 & 0 & 0 & \dots & 0 & 0 \\ -1 & 2 & -1 & 0 & \dots & 0 & 0 \\ 0 & -1 & 2 & -1 & \dots & 0 & 0 \\ 0 & 0 & -1 & 2 & \dots & 0 & 0 \\ \dots & \dots & \dots & \dots & \dots & \dots & \dots \\ 0 & 0 & 0 & 0 & \dots & -1 & 2 \\ \end{bmatrix},$$ \noindent and thus using the convention given in \cite{ClustII} the associated exchange matrix is $$B^{A_n} = ||b_{ij}|| = \begin{bmatrix} 0 & 1 & 0 & 0 & \dots & 0 & 0 \\ -1 & 0 & -1 & 0 & \dots & 0 & 0 \\ 0 & 1 & 0 & 1 & \dots & 0 & 0 \\ 0 & 0 & -1 & 0 & \dots & 0 & 0 \\ \dots & \dots & \dots & \dots & \dots & \dots & \dots \\ 0 & 0 & 0 & 0 & \dots & (-1)^{n+1} & 0 \\ \end{bmatrix}.$$ \vspace{1em} \noindent Notice that every row has like sign and that the matrix is skew-symmetrizable (and in fact skew-symmetric in this case). 
The bipartite seed for a cluster algebra of type $A_n$ therefore consists of an initial cluster of variables $\{x_1,x_2,\dots, x_n\}$ and exchange matrix $B^{A_n}$ which encodes the following exchange binomials \begin{eqnarray*} x_1x_1^\prime &=& x_2 + 1 \\ x_2x_2^\prime &=& x_1x_3 + 1 \\ x_3x_3^\prime &=& x_2x_4 + 1 \\ \dots \\ x_{n-1}x_{n-1}^\prime &=& x_{n-2}x_n + 1 \\ x_nx_n^\prime &=& x_{n-1} + 1. \end{eqnarray*} We describe a set of tiles from which we will build our family of graphs. In the case of $A_n$, let tiles $T_1,\dots, T_n$ be squares defined as follows: \begin{Def} Tile $T_1$'s northern edge is given weight $x_2$ while the other three are given weight $1$. Tile $T_n$'s southern edge is weighted with value $x_{n-1}$ and the rest are weighted with value $1$. Finally all other $T_i$ have a weight of $x_{i+1}$ given to their northern edge, $x_{i-1}$ for their southern edge while the eastern and western edges are given weight $1$. \end{Def} \begin{center} \includegraphics[width = 3in , height = 0.6in]{A5.eps}\\ The tiles for cluster algebra of type $A_5$. \end{center}\vspace{1em} \noindent Let $\mathcal{G}_{A_n}$ be the set of graphs that can be built from these $n$ tiles given the following gluing rule. \begin{Rule} \label{Gluing} Without allowing reflections or rotations of the tiles, tile $T_i$ can be glued to tile $T_j$ if and only if the identified edge (as an edge of $T_i$) lies clockwise from an edge weighted $x_j$ and clockwise from an edge weighted $x_i$ (as an edge of $T_j$). \end{Rule} \noindent Since tile $T_i$ only contains edges of weight $x_{i+1}$ and $x_{i-1}$, and these weights appear across from each other, this rule uniquely describes how the blocks can connect. \begin{Lem} Given the above tiles, $\mathcal{T}_{A_n}$, and the above gluing rule, the collection of possible graphs is enumerated by the set of subsets $$\{T_i,T_{i+1},\dots, T_{j-1},T_j\}$$ for $1 \leq i < j \leq n$.
\end{Lem} \noindent This collection $\mathcal{G}_{A_n}$ has the same cardinality as the set of positive roots of the Lie algebra of type $A_n$ using the bijection $$ T_i \cup T_{i+1} \cup T_{i+2} \cup \dots \cup T_{j-1} \cup T_j \rightarrow \alpha_i + \dots + \alpha_j.$$ As shown in \cite{ClustII}, this implies that the cardinality is also the same as the number of non-initial cluster variables for the bipartite cluster algebra of type $A_n$. \begin{Prop} \label{CaseAn} The set of graphs $\mathcal{G}_{A_n}$ is in bijection with the set of non-initial cluster variables for a coefficient-free cluster algebra of type $A_n$ and satisfies the statement of Theorem \ref{vargraph}. \end{Prop} \vspace{1em} \begin{center} \includegraphics[width = 3in , height = 3in]{A5lattice.eps} \\ The collection $\mathcal{G}_{A_5}$. \end{center} \vspace{1em} To prove this proposition we will take a detour through a case we refer to as $A_{\infty}$. In this case, our set of tiles is in bijection with the integers, and we define $T_i$ to have $x_{i+1}$ on its northern edge, $x_{i-1}$ on its southern edge for all $i \in \mathbb{Z}$. Without the issue of boundaries, it is easier to show that a certain lattice of graphs corresponds to the non-initial cluster variables. After doing so, we choose a periodic specialization for the initial variables to recover a corresponding region of this lattice for any specific $A_n$. We start with the following observation. \begin{Rem} \label{sameseed}For all $n$, if we start with the above exchange matrix $B^{A_n}$ and apply the binomial exchanges corresponding to relations $1$, $3$, $5, \dots n$ (resp. $n-1$) if $n$ is odd (resp. even), the resulting exchange matrix is $-B^{A_n}$. Afterward, applying the relations $2$, $4$, $6, \dots n-1$ (resp. $n$) if $n$ is odd (resp. even) to exchange matrix $-B^{A_n}$ results in the initial exchange matrix $B^{A_n}$.
In fact, in both of these cases, the order of the exchanges does not matter, and the intermediate exchange matrices will have rows of like sign for all relevant $x_k$ not already exchanged. By the definition of matrix mutation, this procedure will in fact work for any cluster algebra where the seed has an exchange matrix that is tri-diagonal ($b_{ij} = 0$ if $|i-j| \not = 1$). Thus we can calculate a row of cluster variables at a time by applying the exchange relations relative to the two previous rows. The tri-diagonal condition includes the cases $A_n$, $B_n$, $C_n$, and $G_2$ and minor modifications to the procedure will allow it to work for $D_n$. \end{Rem} Returning to the $A_n$ case, after applying exchange $1,3,5,\dots, n$ (resp. $n-1$) we have cluster $$\{x_1^{(1)},~x_2,~x_3^{(1)},~x_4,~x_5^{(1)},\dots, ~x_{n-1},~x_n^{(1)} \} $$ $$(\mathrm{resp.~~} \{x_1^{(1)},~x_2,~x_3^{(1)},~x_4,~x_5^{(1)},\dots, ~x_{n-2},~x_{n-1}^{(1)},~x_n \}~)$$\vspace{1em} \noindent where $x_i^{(1)} = \frac{x_{i-1}x_{i+1}+1 }{ x_i}$ using the convention $x_0 = x_{n+1} = 1$. Analogously, applying exchanges $2,4,6, \dots, n-1$ (resp. $n$) we obtain the cluster $$\{x_1^{(1)},~x_2^{(2)},~x_3^{(1)},~x_4^{(2)},~x_5^{(1)},\dots, ~x_{n-1}^{(2)},~x_n^{(1)} \} $$ $$(\mathrm{resp.~~~~} \{x_1^{(1)},~x_2^{(2)},~x_3^{(1)},~x_4^{(2)},~x_5^{(1)},\dots, ~x_{n-2}^{(2)},~x_{n-1}^{(1)},~x_n^{(2)} \}~)$$\vspace{1em} \noindent where $x_i^{(2)} = \frac{x_{i-2}x_i^2x_{i+2}+x_{i-2}x_i+x_ix_{i+2}+x_{i-1}x_{i+1}+1 }{ x_{i-1}x_ix_{i+1}}$ for $1\leq i \leq n$, if we additionally set $x_{-1}=x_{n+2}=0$. (Indeed, $x_ix_i^{(2)} = x_{i-1}^{(1)}x_{i+1}^{(1)}+1$ with $x_{i\pm 1}^{(1)}$ as above yields exactly this numerator.) By Remark \ref{sameseed} we can make a lattice of cluster variables by applying exchanges iteratively (row-by-row) in this order.
\vspace{1em} $\begin{array}{cccccccccccc} x_1 &~~~& x_3 &~~~& x_5 &~~~& \dots &~~~& x_{n-2} &~~~& x_{n} \\ ~~~& x_2 &~~~& x_4 &~~~& x_6 &~~~& \dots &~~~& x_{n-1} & ~~~ \\ x_1^{(1)} &~~~& x_3^{(1)} &~~~& x_5^{(1)} &~~~& \dots &~~~& x_{n-2}^{(1)} &~~~& x_{n}^{(1)} \\ ~~~& x_2^{(2)} &~~~& x_4^{(2)} &~~~& x_6^{(2)} &~~~& \dots &~~~& x_{n-1}^{(2)} & ~~~ \\ x_1^{(3)} &~~~& x_3^{(3)} &~~~& x_5^{(3)} &~~~& \dots &~~~& x_{n-2}^{(3)} &~~~& x_{n}^{(3)} \\ ~~~& x_2^{(4)} &~~~& x_4^{(4)} &~~~& x_6^{(4)} &~~~& \dots &~~~& x_{n-1}^{(4)} & ~~~ \end{array}$ \\ \begin{center} Top six rows of this lattice (assuming $n$ odd). \end{center}\vspace{1em} By the binomial exchange relations, this lattice satisfies the diamond condition which states that the relation $ad = bc + 1$ holds for any four elements arranged as a diamond. \begin{center}$\begin{array}{ccc} &a& \\ b&&c \\ &d& \end{array}$\end{center} \noindent For example $x_ix_i^{(1)} = x_{i-1}x_{i+1}+1$ for $i \in \{2,3,\dots, n-1\}$. Furthermore, this lattice can be extended periodically by using the conventions $x_{-1}=x_{n+2}=0$, $x_0 = x_{n+1} = 1$, and extending further using negatively weighted variables. Note that the negatives are necessary since we wish the configurations \begin{center}$\begin{array}{ccccccc} &0& &~& &0& \\ b&&1 &~&1 && c \\ &0& &~& &0& \end{array}$\end{center} to satisfy the diamond condition. \vspace{1em} $\begin{array}{ccccccccccccccc} -x_3^{(2)} &~~~& -x_1^{(2)} &~~~& 0 &~~~& x_1^{(2)} & \dots & x_{n}^{(2)} &~~~& 0 &~~~& -x_{n}^{(2)} \\ ~& -x_2^{(1)} &~~~& -1 &~~~& 1 & ~~~& x_2^{(1)} & \dots & 1 &~~~ & -1 &~~~ \\ -x_3 &~~~& -x_1 &~~~& 0 &~~~& x_1 & \dots & x_{n} &~~~& 0 &~~~& -x_{n} \\ ~& -x_2 &~~~& -1 &~~~& 1 & ~~~& x_2 & \dots & 1 &~~~ & -1 &~~~ \\ -x_3^{(1)} &~~~& -x_1^{(1)} &~~~& 0 &~~~& x_1^{(1)} & \dots & x_{n}^{(1)} &~~~& 0 &~~~& -x_{n}^{(1)} \\ ~& -x_2^{(2)} &~~~& -1 &~~~& 1 & ~~~& x_2^{(2)} & \dots & 1 &~~~ & -1 &~~~ \end{array}$ \\ \begin{center} Six rows of extended lattice (assuming $n$ odd).
\end{center}\vspace{1em} This lattice can also continue infinitely in the vertical direction, as well as horizontally, extending vertically in the unique way that preserves the diamond condition throughout the entire lattice. Consequently all $A_n$ can be treated simultaneously by considering the infinite diamond pattern ($A_{\infty}$) which starts with sequence $\{\dots, -y_2,-y_1,y_0,y_1,y_2,\dots\}$ zig-zagging to create the initial two rows. To obtain the extended $A_n$-lattice for a specific $n$ we let \begin{eqnarray*} y_1 &=& 1 \\ y_i &=& x_{i-1} \mathrm{~for~} i \in \{2,\dots, n+1\} \\ y_{n+2} &=& 1 \\ y_{n+3} &=& y_0 \\ y_{n+3+k} &=& -y_{n+3-k} \mathrm{~for~} k \in \{0,\dots, n+3\} \\ y_{2n+6+k} &=& y_{k} \end{eqnarray*} \noindent and then take the limit as $y_0$ goes to zero. We do not set $y_0$ and $y_{n+3}$ to be zero directly since this would sometimes result in indeterminate expressions of the form ``$0/0$''. Also we use the shifted indices here since it will make the ensuing arguments more symmetrical. As a consequence of these substitutions, it suffices to start the proof of Proposition \ref{CaseAn} by proving the combinatorial interpretation for the infinite diamond pattern corresponding to $A_{\infty}$, which we write in terms of $y_i$'s for $i\in \mathbb{Z}$. Even though we now have a boundary-less lattice, every given $y_i^{(j)}$ can be computed locally by considering the necessary mutations stemming from a finite half diamond extending back to the initial two rows of $y_i$'s. \begin{center} \includegraphics[width = 5in , height = 0.5in]{ExtendedAn.eps}\\ The tiles for the extended $A_n$-lattice. \end{center}\vspace{1em} For the purposes of the $A_\infty$ case, for all $i\in\mathbb{Z}$, we let $\tilde{T_i}$ denote the tile with $y_{i+1}$ on its northern edge and $y_{i-1}$ on its southern edge. We will utilize variables $y_i$'s and tiles $\tilde{T_i}$'s until we do the final substitution at the end of the proof of Proposition \ref{CaseAn}. 
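Because each tile in a strip is matched either by its two horizontal edges (contributing weight $y_{t-1}y_{t+1}$ for tile $\tilde{T}_t$) or by a shared vertical edge, the matching enumerator of a strip of consecutive tiles obeys a simple three-term recurrence, and the condensation identity that drives the proof below, $P(G_0^i)P(G_2^i) = P(G_1^{i-1})P(G_1^{i+1}) + y_{i-j+1}y_{i-j+2}y_{i-j+3}^2\cdots y_{i+j-3}^2y_{i+j-2}y_{i+j-1}$, can be checked numerically. The following self-contained program is our own illustration with arbitrary integer specializations of the $y_i$, not code from the paper:

```java
// Numeric sanity check of the condensation identity behind the cluster
// expansion. A strip of square tiles T_a,...,T_b (horizontal weights
// y(t-1), y(t+1) on tile T_t, vertical weights 1) satisfies
//     p(a,b) = p(a,b-1) + y(b-1)*y(b+1)*p(a,b-2),
// since the last tile is matched either by a vertical edge or by its two
// horizontal edges.
class Condensation {
    // Arbitrary positive integer specialization of the initial variables y_i.
    static long y(int i) { return Math.floorMod(i, 3) + 1; }

    // Matching enumerator P of the strip T_a, ..., T_b (1 for the empty strip).
    static long p(int a, int b) {
        long twoAgo = 1, oneAgo = 1; // enumerators for b-2 and b-1 tiles
        for (int t = a; t <= b; t++) {
            long cur = oneAgo + y(t - 1) * y(t + 1) * twoAgo;
            twoAgo = oneAgo;
            oneAgo = cur;
        }
        return oneAgo;
    }

    // Check P(G_0)P(G_2) = P(G_1^-)P(G_1^+) + excess monomial, where
    // G_0 = T_{i-j+1..i+j-1}, G_2 = T_{i-j+3..i+j-3},
    // G_1^- = T_{i-j+1..i+j-3}, G_1^+ = T_{i-j+3..i+j-1}.
    static boolean holds(int i, int j) {
        long lhs = p(i - j + 1, i + j - 1) * p(i - j + 3, i + j - 3);
        long monomial = y(i - j + 1) * y(i - j + 2) * y(i + j - 2) * y(i + j - 1);
        for (int t = i - j + 3; t <= i + j - 3; t++) monomial *= y(t) * y(t);
        long rhs = p(i - j + 1, i + j - 3) * p(i - j + 3, i + j - 1) + monomial;
        return lhs == rhs;
    }

    public static void main(String[] args) {
        for (int i = -3; i <= 3; i++)
            for (int j = 3; j <= 5; j++)
                if (!holds(i, j)) throw new AssertionError("fails at i=" + i + ", j=" + j);
        System.out.println("condensation identity verified for |i| <= 3, 3 <= j <= 5");
    }
}
```

For example, with all $y_i = 1$ the strip enumerators are Fibonacci numbers, and the $j=3$ instance of the identity reads $13\cdot 2 = 5\cdot 5 + 1$.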
We now proceed to prove the combinatorial interpretation in this boundary-less version. We start with the base case where we can easily see that the combinatorial interpretation works for cluster variables with denominator $y_i$. To see this, we observe that $y_iy_i^{(1)} = y_{i-1}y_{i+1}+1$ corresponds to the two perfect matchings of the graph consisting of tile $\tilde{T}_i$ by itself. Similarly, we observe the bijection works for the second row of non-initial cluster variables by the definition of $y_i^{(2)}$. We see this by verifying that the graph containing tiles $\tilde{T}_{i-1}$, $\tilde{T}_{i}$, $\tilde{T}_{i+1}$ connected in that order bijects to cluster variable $y_i^{(2)}$, i.e. $y_i^{(2)}y_{i-1}y_iy_{i+1} = P(\tilde{T}_{i-1}\cup \tilde{T}_i \cup \tilde{T}_{i+1})$. By a technique of graphical condensation developed by Eric Kuo \cite{Kuo}, we obtain the following combinatorial interpretation for the rest of the rows. \begin{Lem} \label{diag} Cluster variable $y_i^{(j)}$ bijects to graph $$\tilde{T}_{i-j+1}\cup \dots \cup \tilde{T}_{i+j-1},$$ i.e. the grid graph containing exactly $2j-1$ tiles, namely tiles $\tilde{T}_{i-j+1}$ through $\tilde{T}_{i+j-1}$ connected in order. \end{Lem} \begin{proof} The proof follows from a slight variant of the argument given in \cite{MusPropp}. Here we need to be more careful with the labeling scheme, but the same pairings will yield the desired result. In fact if one lets $y_i = x$ if $i$ even and $y_i =y$ if $i$ odd, one recovers the $A(2,2)$ case analyzed in \cite{MusPropp}. In particular, we inductively assume for all $i\in\mathbb{Z}$ that $y_i^{(j-1)}$ bijects to graph $G_1^i = \tilde{T}_{i-j+2}\cup\dots\cup\tilde{T}_{i+j-2}$ and $y_i^{(j-2)}$ bijects to graph $G_2^i = \tilde{T}_{i-j+3}\cup\dots\cup\tilde{T}_{i+j-3}$, in the sense that $y_i^{(j-1)} = \frac{P(G_1^i)}{y_{i-j+2}\cdots y_{i+j-2}}$ and $y_i^{(j-2)} = \frac{P(G_2^i)}{y_{i-j+3}\cdots y_{i+j-3}}$. 
It thus suffices to show, for all $i\in\mathbb{Z}$, that Laurent polynomials $y_i^{(j)}$, defined as $\frac{y_{i-1}^{(j-1)}y_{i+1}^{(j-1)}+1}{y_i^{(j-2)}}$, equal $\frac{P(G_0^i)}{y_{i-j+1}\cdots y_{i+j-1}}$ where $G_0^i = \tilde{T}_{i-j+1}\cup\dots\cup\tilde{T}_{i+j-1}.$ We use our induction hypothesis and normalize to rewrite our desired equation as {\small \begin{eqnarray} \label{norma} P(G_0^i)P(G_2^i) &=& P(G_1^{i-1})P(G_1^{i+1})+y_{i-j+1}y_{i-j+2}y_{i-j+3}^2\cdots y_{i+j-3}^2y_{i+j-2}y_{i+j-1}. \end{eqnarray} } One can decompose graph $G_0^i$ into a superposition of graphs $G_1^{i-1}\cup G_1^{i+1}$ so that $G_2^i$ is the intersection of overlap. Out of the two subgraphs, only $G_1^{i-1}$ contains tiles $\tilde{T}_{i-j+1},~\tilde{T}_{i-j+2}$ and only $G_1^{i+1}$ contains $\tilde{T}_{i+j-1}, ~\tilde{T}_{i+j-2}$. Let $M(G)$ denote the set of perfect matchings of graph $G$, ${m_0}^\prime$ denote the matching of $G_0^i$ using the horizontal edges of $\tilde{T}_{i-j+2},~\tilde{T}_{i-j+4},\dots,~ \tilde{T}_{i+j-4}, ~\tilde{T}_{i+j-2}$, and ${m_2}^\prime$ denote the matching of $G_2^i$ using the horizontal edges of $\tilde{T}_{i-j+3},~\tilde{T}_{i-j+5},\dots,~ \tilde{T}_{i+j-5}, ~\tilde{T}_{i+j-3}$. The pair of matchings $(m_0^\prime, m_2^\prime)$ has exactly the weight of the excess monomial $$y_{i-j+1}y_{i-j+2}y_{i-j+3}^2\cdots y_{i+j-3}^2y_{i+j-2}y_{i+j-1}.$$ We finish the proof of Lemma \ref{diag} by exhibiting a weight-preserving bijection between $M(G_0^i)\times M(G_2^i)\setminus\{ ( {m_0}^\prime, {m_2}^\prime )\}$ and $M(G_1^{i-1})\times M(G_1^{i+1})$, thus showing (\ref{norma}). We define our bijection piece-meal on $M(G_0^i) \times M(G_2^i) \setminus \{ ( {m_0}^\prime, {m_2}^\prime )\}$, first considering the case where the horizontal edges of penultimate tile $\tilde{T}_{i+j-2}$ in $G_0^i$ are not used. In this case, the pair of matchings from $M(G_0^i)\times M(G_2^i)$ reduces to a pair from $M(G_1^{i-1}\cup \tilde{T}_{i+j-1})\times M(G_2^i) $.
We define $\phi(m_0,m_2)=(m_{-1},m_1)$ for such matchings by letting $m_{-1}$ be the corresponding matching of $G_1^{i-1}$ and build matching $m_1$ by adjoining the matching of $G_2^i$ to the matching of $\tilde{T}_{i+j-1}$. In other words, map $\phi$ takes tile $\tilde{T}_{i+j-1}$ and slides it down from $G_0^i$ onto $G_2^i$ to obtain $G_1^{i+1}$ with the matching included. This also leaves $G_1^{i-1}$ in place of $G_0^i$. If on the other hand, the horizontals of $\tilde{T}_{i+j-2}$ \emph{are used}, then the situation is more complicated. If we restrict further to the case where the rightmost vertical edge is used in $G_2^i$, we can define $\phi$ analogously by sliding down tiles $\tilde{T}_{i+j-2}$ and $\tilde{T}_{i+j-1}$. We are also forced to use the rightmost vertical edge of $G_1^{i-1}$ in this case. We can continue defining $\phi$ iteratively, defining it for classes characterized by the length of the pattern of horizontals on the right-hand sides of $G_0^i$ and $G_2^i$. If the horizontals of $\tilde{T}_{i+j-2},~ \tilde{T}_{i+j-4}, \dots, \tilde{T}_{i+j-2\ell}$ in $G_0^i$, the horizontals of $\tilde{T}_{i+j-3},~\tilde{T}_{i+j-5},\dots,$ $\tilde{T}_{i+j-2\ell - 1}$ (resp. $\tilde{T}_{i+j-2\ell + 1}$) in $G_2^i$ are used, accompanied by a vertical edge between tiles $\tilde{T}_{i+j-2\ell-2}$ and $\tilde{T}_{i+j-2\ell-1}$ of $G_0^i$ (resp. $\tilde{T}_{i+j-2\ell-1}$ and $\tilde{T}_{i+j-2\ell}$ of $G_2^i$), then $\phi$ swaps the right-hand sides of these two graphs, leaving the left-hand sides alone up until tile $\tilde{T}_{i+j-2\ell-3}$ (resp. $\tilde{T}_{i+j-2\ell-2}$). This construction makes sense as long as we eventually encounter a vertical edge as we move leftward via the use of these horizontal edges since given such patterns, neither the matching of $G_0^i$ nor of $G_2^i$, will use the horizontal edges of tile $\tilde{T}_{i+j-2\ell-2}$ (resp. $\tilde{T}_{i+j-2\ell-1}$). 
Map $\phi$ is injective since the inverse map just swaps back the right-hand sides as dictated by the alternating pattern of horizontals. Since we have exhaustively enumerated the pairs of matchings $(m_{-1},m_1)$ by splitting into classes according to the longest alternating pattern of horizontals, we also have surjectivity. Lastly, it is easy to verify that $(m_0^\prime,m_2^\prime)$ is the unique matching which cannot be decomposed into a pair $(m_{-1},m_1)$. \end{proof} Analogous pairings will also appear below in the arguments for the case of $B_n$. Notice that by Lemma \ref{diag}, the diagonals of the lattice satisfy the following two properties: \vspace{1em} $\bullet$ On any of the diagonals travelling from \emph{SW to NE}, all graphs \emph{end} with the same tile. $\bullet$ On any of the diagonals travelling from \emph{NW to SE}, all graphs \emph{start} with the same tile. \vspace{1em} \noindent We now wish to show how to specialize to the case of a specific $A_n$ by imposing periodicity and boundary conditions on the initial two rows of variables. To accomplish this goal, we must (a) verify that the $y_i^{(j)}$'s satisfy the correct horizontal periodicity of the extended-$A_n$ lattice once we apply the proper substitutions of variables, and (b) verify that we have a vertical periodicity as well and really only need to worry about a finite collection of graphs which we readily identify as the set $\mathcal{G}_{A_n}$. To show (a), it suffices to show $y_{-i}^{(j)}=-y_{i}^{(j)}$, $y_0^{(j)}=0=y_{n+3}^{(j)}$, and $y_1^{(j)}=1=y_{n+2}^{(j)}$ for all $j$. The diamond condition will then induce the horizontal periodicity for non-initial cluster variables. Notice that $y_{n+3+k} = -y_{n+3-k}$ and the periodicity $y_{2n+6+k}=y_k$ imply the relation $y_{-k} = -y_{k}$ for all $k\in \mathbb{Z}$.
Thus Lemma \ref{diag}, together with $y_{-i}=-y_i$, implies the relations \begin{eqnarray*} y_i^{(j)} &=& \frac{P(\tilde{T}_{i-j+1}\cup \dots \cup \tilde{T}_{i+j-1})}{y_{i-j+1}\cdots y_{i+j-1}} \mathrm{~~~and}\\ y_{-i}^{(j)} &=& \frac{P(\tilde{T}_{-i-j+1}\cup \dots \cup \tilde{T}_{-i+j-1})}{y_{-i-j+1}\cdots y_{-i+j-1}} = \frac{P(\tilde{T}_{-(i-j+1)}\cup \dots \cup \tilde{T}_{-(i+j-1)})}{(-1)^{2j-1}y_{i-j+1}\cdots y_{i+j-1}}, \end{eqnarray*} where the second equality comes from reversing the order of the tiles. Furthermore, tile $\tilde{T}_{-k}$ has $y_{-k+1}$ on its northern edge and $y_{-k-1}$ on its southern edge, while $\tilde{T}_{k}$ has $y_{k+1}$ on its northern edge and $y_{k-1}$ on its southern edge. Thus the relation $y_{-i}=-y_i$ for all $i\in\mathbb{Z}$ induces $y_{-i}^{(j)}=-y_i^{(j)}$ for all $i\in \mathbb{Z}$ and $j\geq 1$. As a corollary, we obtain $y_0^{(j)}=0$ for all $j$. Under Lemma \ref{diag}, the graph corresponding to $y_0^{(j)}$ is centered around tile $\tilde{T}_0$, i.e. $$\tilde{T}_{-j+1}\cup \dots \cup \tilde{T}_0 \cup \dots \cup \tilde{T}_{j-1}.$$ Laurent polynomial $y_{n+3}^{(j)}$ analogously corresponds to a graph centered around tile $\tilde{T}_{n+3}$, which is equivalent to tile $\tilde{T}_0$ after applying the above periodicity and substitution in the extended $A_n$-lattice. Consequently, we similarly deduce that $y_{n+3}^{(j)}= 0$ for all $j\geq 1$. To complete our proof of (a), we prove the following Lemma. \begin{Lem} \label{CenterOne} If $H_j$ signifies the graph $\tilde{T}_{-j}\cup \dots \cup \tilde{T}_{j+1}$, i.e. a graph with an even number of tiles, with leftmost tile $\tilde{T}_{-j}$, and centered around subgraph $\tilde{T}_0\cup \tilde{T}_1$, then (using $y_{-i}=-y_i$), after dividing the matching polynomial $P(H_j)$ by the proper monomial, we find that graph $H_j$ bijects to Laurent polynomial $y_{j+2}$. Moreover, any graph $\hat{H}_j$ with an odd number of tiles centered around tile $\tilde{T}_1$ (resp.
$\tilde{T}_{n+2}$) bijects to $1$ as a Laurent polynomial, regardless of the number of tiles. \end{Lem} \begin{proof} We start by proving the result for graphs with an even number of tiles, beginning with two base cases. The graph $\tilde{T}_0\cup \tilde{T}_1$ has three perfect matchings, but due to signs, two of them cancel with one another. We thus obtain $P(\tilde{T}_0\cup \tilde{T}_1)= y_0y_2$. After dividing through by $y_0y_1=y_0(1)$ we get Laurent polynomial $y_2$. Secondly, the graph $\tilde{T}_{-1}\cup\tilde{T}_0 \cup \tilde{T}_1 \cup \tilde{T}_2$ has eight perfect matchings, but one of them has a weight containing submonomial $y_0^2$, and six of the other seven cancel with each other after letting $y_{-i}=-y_i$ and $y_1=1$. We are left with one perfect matching, which has weight $-y_0y_2y_3$. This time the denominator is $y_{-1}y_0y_1y_2=(-1)y_0(1)y_2$ and we get $y_3$ after division. We now assume that $j\geq 2$ and wish to show $$P(H_j)= y_{-j}y_{1-j}y_{2-j}\cdots y_{-2}(-1)y_0(1)y_2\cdots y_{j}y_{j+1}y_{j+2}$$ for all such $j$. Notice that if the rightmost vertical edge of graph $H_{j}$ is used in a matching, our computation of $P(H_j)$ reduces to the computation of $P(H_j^{\prime})$, where $H_j^{\prime} =\tilde{T}_{-j}\cup \dots \cup \tilde{T}_0\cup \dots \cup \tilde{T}_{j}$, a graph centered around tile $\tilde{T}_0$. However, we know from previous arguments that such a graph corresponds to zero as a Laurent polynomial. Thus any matching using the rightmost vertical edge of graph $H_j$ does not contribute to the numerator $P(H_j)$. Thus we must use the two rightmost horizontal edges, which have weight $y_{j}y_{j+2}$, and then compute $P(\overline{H_{j-1}})$ where $\overline{H_{j-1}}=\tilde{T}_{-j}\cup \dots \cup \tilde{T}_{j-1}$. However, such a graph is the horizontal reflection of graph $H_{j-1}$ and so by analogous logic, we are now forced to use the two \emph{leftmost} horizontal edges in our matchings to get a nontrivial contribution.
Such edges have weight $y_{-j-1}y_{-j+1} = (-y_{j+1})(-y_{j-1}) = y_{j-1}y_{j+1}$. Consequently, after two iterations, and two helpfully placed negative signs, we have the identity $$P(H_j) = (-y_{-j})(-y_{1-j}) P(H_{j-2})y_{j+1}y_{j+2}.$$ Induction thus yields the result for the case of $H_j$, i.e. a graph with an even number of tiles centered around $\tilde{T}_0\cup \tilde{T}_1$. We now prove this result for the corresponding graphs with an odd number of tiles. We only describe the proof for the case of those centered around $\tilde{T}_1$ since the proof for those centered around $\tilde{T}_{n+2}$ is analogous, only with messier notation. We let $\hat{H}_j= \tilde{T}_{-j}\cup \cdots \cup \tilde{T}_{j+2}$ and for the moment ignore boundaries and periodicity; we only use the assignments $y_1=1$ and $y_{-1}=-1$, and take the limit of $$\frac{P(\hat{H}_j)}{y_{-j}\cdots y_{-2}y_{-1}y_0y_1y_2\cdots y_{j+2}}$$ as $y_0\rightarrow 0$. We look at possible perfect matchings of graph $\hat{H}_j$, and note that if both horizontal edges or both vertical edges of tile $\tilde{T}_j$ are used, then we are reduced to computing $P({\hat{H}_j}^\prime \sqcup \tilde{T}_{j+2}) = P({\hat{H}_j}^\prime)P(\tilde{T}_{j+2})$ with ${\hat{H}_j}^\prime = \tilde{T}_{-j}\cup \cdots \cup \tilde{T}_{j}$. However, since ${\hat{H}_j}^\prime$ is centered around tile $\tilde{T}_0$, the Laurent polynomial $P({\hat{H}_j}^\prime)/y_0$ tends to zero as $y_0\rightarrow 0$. We conclude that any matching of $\hat{H}_j$ resulting in a nontrivial contribution to $\lim_{y_0\rightarrow 0}\frac{P(\hat{H}_j)}{y_{-j}\cdots y_{-2}y_{-1}y_0y_1y_2\cdots y_{j+2}}$ must utilize the two horizontal edges of tile $\tilde{T}_{j+1}$ with weight $y_{j}y_{j+2}$. However, this step reduces our calculation to that of the matching polynomial of $\overline{H_{j-1}} = \tilde{T}_{-j} \cup \dots \cup \tilde{T}_{j-1}$, which has an even number of tiles and is centered around $\tilde{T}_{-1}\cup \tilde{T}_0$.
By our earlier logic, we therefore can only have a nontrivial contribution to $P(\hat{H}_j)$ in the case where we use the two leftmost horizontal edges, which have weight $y_{-j-1}y_{-j+1}$. This reduction results in subgraph $H_{j-2}$, which is centered around $\tilde{T}_0\cup \tilde{T}_1$, and so by induction $$P(\hat{H}_j) = (-y_{-j})y_{1-j}\bigg(y_{2-j}\cdots y_{j}\bigg)(-y_{j+1})y_{j+2}$$ which is the same as the denominator corresponding to $\hat{H}_j= \tilde{T}_{-j}\cup \cdots \cup \tilde{T}_{j+2}$. \end{proof} We thus have shown $(a)$, i.e. that after applying substitutions to the initial row of the $A_\infty$ lattice, we get the extended $A_n$ lattice with the proper horizontal periodicity. We now wish to show the vertical periodicity of $(b)$. \begin{Lem} \label{exciseGr} Whenever a graph contains the tile $\tilde{T}_1$ (resp. $\tilde{T}_{n+2}$), we may excise the graph by removing a subgraph centered around $\tilde{T}_1$ (resp. $\tilde{T}_{n+2}$) without changing the corresponding Laurent polynomial.\end{Lem} Under the initial variables $y_2=x_1, \dots, y_{n+1}=x_n$, any graph containing tile $\tilde{T}_0$ (resp. $\tilde{T}_{n+3}$), or other forbidden tiles $\tilde{T}_i$ with $i$ outside the range $\{2,\dots, n+1\}$, must contain either tile $\tilde{T}_1$ or tile $\tilde{T}_{n+2}$. Consequently, this process of excision is sufficient to eliminate all tiles $\tilde{T}_i$ with $i \not\in \{2,\dots, n+1\}$. (We will see shortly that the apparent problem of a graph containing both $\tilde{T}_1$ and $\tilde{T}_{n+2}$, where the subgraphs of excision overlap, is not actually an issue.) \begin{proof} Without loss of generality, assume that graph $G$ contains tile $\tilde{T}_1$ and has the form $G=\tilde{T}_{2-j}\cup \dots \cup \tilde{T}_1 \cup \dots \cup \tilde{T}_j \cup \tilde{T}_{j+1} \cup \dots \cup \tilde{T}_{j+k}$ for $j, k\geq 1$.
The content of the claim is that graph $G$ and graph $G^\prime = \tilde{T}_{j+1}\cup \dots \cup \tilde{T}_{j+k}$ biject to the same cluster variable. We categorize perfect matchings of $G$ based on whether or not two horizontal edges appear on the tile $\tilde{T}_{j+1}$. If they do not, then that matching of $G$ decomposes into a matching of subgraph $\tilde{T}_{2-j}\cup \dots \cup \tilde{T}_j$, and a matching of $\tilde{T}_{j+2}\cup \dots \cup \tilde{T}_{j+k}$. Thus we obtain \begin{eqnarray*} \frac{P(\tilde{T}_{2-j}\cup \dots \cup \tilde{T}_j)\cdot P(\tilde{T}_{j+2}\cup \dots \cup \tilde{T}_{j+k})}{y_{2-j}\cdots y_{j+k}} &=& \frac{P(\tilde{T}_{2-j}\cup \dots \cup \tilde{T}_j)}{y_{2-j}\cdots y_{j}}\cdot \frac{P(\tilde{T}_{j+2}\cup \dots \cup \tilde{T}_{j+k})}{y_{j+1}\cdots y_{j+k}} \\ &=& 1\cdot \frac{P(\tilde{T}_{j+2}\cup \dots \cup \tilde{T}_{j+k})}{y_{j+1}\cdots y_{j+k}} \end{eqnarray*} as a contribution to the Laurent polynomial corresponding to graph $G$, where the second equality follows from Lemma \ref{CenterOne}. If, on the other hand, the horizontal edges of $\tilde{T}_{j+1}$ \emph{are used}, the matching decomposes differently and we get a contribution of $$\frac{P(\tilde{T}_{2-j}\cup \dots \cup \tilde{T}_{j-1})}{y_{2-j}\cdots y_{j-1}}\cdot \frac{y_jy_{j+2}}{y_jy_{j+1}y_{j+2}}\cdot \frac{P(\tilde{T}_{j+3}\cup \dots \cup \tilde{T}_{j+k})}{y_{j+3}\cdots y_{j+k}},$$ which equals $$y_j\cdot \frac{y_{j+2}}{y_{j+1}y_{j+2}}\cdot \frac{P(\tilde{T}_{j+3}\cup \dots \cup \tilde{T}_{j+k})}{y_{j+3}\cdots y_{j+k}}$$ by Lemma \ref{CenterOne}. Comparing the sum of these two Laurent polynomials to the cluster variable corresponding to graph $G^\prime$ finishes the proof. \end{proof} With Lemma \ref{exciseGr} proved, we turn our attention back to the proof of Proposition \ref{CaseAn}. \begin{proof} [Proof of Proposition \ref{CaseAn}] Recall that the extended $A_n$-lattice is horizontally periodic.
We thus restrict our attention to a region that lies between the two columns of positive ones and below the strip of initial variables. \vspace{1em} $\begin{array}{cccccccccccc} x_1^{(1)} &~~~& x_3^{(1)} &~~~& x_5^{(1)} &~~~& x_7^{(1)} &~~~& x_{9}^{(1)} &~~~& x_{11}^{(1)} \\ ~~~& x_2^{(2)} &~~~& x_4^{(2)} &~~~& x_6^{(2)} &~~~& x_8^{(2)} &~~~& x_{10}^{(2)} & ~~~ \\ x_1^{(3)} &~~~& x_3^{(3)} &~~~& x_5^{(3)} &~~~& x_7^{(3)} &~~~& x_{9}^{(3)} &~~~& x_{11}^{(3)} \\ ~~~& x_2^{(4)} &~~~& x_4^{(4)} &~~~& x_6^{(4)} &~~~& x_8^{(4)} &~~~& x_{10}^{(4)} & ~~~ \\ x_1^{(5)} &~~~& x_3^{(5)} &~~~& x_5^{(5)} &~~~& x_7^{(5)} &~~~& x_{9}^{(5)} &~~~& x_{11}^{(5)} \\ ~~~& x_2^{(6)} &~~~& x_4^{(6)} &~~~& x_6^{(6)} &~~~& x_8^{(6)} &~~~& x_{10}^{(6)} & ~~~ \\ x_1^{(7)} &~~~& x_3^{(7)} &~~~& x_5^{(7)} &~~~& x_7^{(7)} &~~~& x_{9}^{(7)} &~~~& x_{11}^{(7)} \\ ~~~& x_2^{(8)} &~~~& x_4^{(8)} &~~~& x_6^{(8)} &~~~& x_8^{(8)} &~~~& x_{10}^{(8)} & ~~~ \\ x_1^{(9)} &~~~& x_3^{(9)} &~~~& x_5^{(9)} &~~~& x_7^{(9)} &~~~& x_{9}^{(9)} &~~~& x_{11}^{(9)} \\ ~~~& x_2^{(10)} &~~~& x_4^{(10)} &~~~& x_6^{(10)} &~~~& x_8^{(10)} &~~~& x_{10}^{(10)} & ~~~ \\ x_1^{(11)} &~~~& x_3^{(11)} &~~~& x_5^{(11)} &~~~& x_7^{(11)} &~~~& x_{9}^{(11)} &~~~& x_{11}^{(11)} \\ ~~~& x_2^{(12)} &~~~& x_4^{(12)} &~~~& x_6^{(12)} &~~~& x_8^{(12)} &~~~& x_{10}^{(12)} & ~~~ \end{array}$ \\ \begin{center} The first twelve rows of this region for $A_{11}$. \end{center} \vspace{1em} \noindent Cluster variable $y_i^{(j)}$ contains tiles $\tilde{T}_{i-j+1}$ through $\tilde{T}_{i+j-1}$ for a total of $2j-1$ tiles. Since we are considering only cluster variables between the two columns of positive ones, we have graphs centered at $\tilde{T}_i$ for $2 \leq i \leq n+1$. Thus our graph satisfies one of the following: \vspace{1.0em} 1) Contains no forbidden tiles. \\ 2) Contains forbidden tiles including $\tilde{T}_1$ and tiles to the left. \\ 3) Contains forbidden tiles including $\tilde{T}_{n+2}$ and tiles to the right. \\ 4) Contains both sets of forbidden segments.
\vspace{1.0em} \noindent If there are forbidden tiles on only one side, then Lemma \ref{exciseGr} allows us to edit out the forbidden segment. However, if there are forbidden strips on both sides we would encounter a problem if the segments we were deleting overlapped. So consider a graph that contains tiles $\tilde{T}_1\cup\dots \cup \tilde{T}_{n+2}$, along with $a$ tiles to the left and $b$ tiles to the right. By Lemma \ref{exciseGr}, when we get rid of the $a+b$ tiles on the two ends, we would also be deleting $a+b+2$ tiles from the middle (including $\tilde{T}_1$ and $\tilde{T}_{n+2}$). For the first $n+1$ rows of our lattice, the number of total tiles is $\leq 2n+1$ and thus $a+b+n+2 \leq 2n+1$ which implies that $a+b \leq n-1 < n$. Thus there cannot be an overlap in these first $n+1$ rows. Furthermore, recalling our indexing $y_{i+1}=x_i$, we get as an added bonus that the graph corresponding to $y_{i+1}^{(n+1)}=x_i^{(n+1)}$, which consists of $2n+1$ tiles, bijects to the same cluster variable as the graph of the single tile $\tilde{T}_{n-i-1}=T_{n-i}$ (resp. $\tilde{T}_{n-i}=T_{n+1-i}$) for $i$ even and $n$ even (resp. $n$ odd). Consequently, we get a combinatorial interpretation for the first $n+1$ rows. We will refer to the region of the extended $A_n$-lattice that lies between the two columns of positive ones and in the first $n+1$ rows underneath the initial variables as the $A_n$-lattice. The diagonals of the $A_n$-lattice inherit the properties of the $A_\infty$-lattice and this implies that all the cluster variables do in fact appear in our lattice. In particular, their denominators are in bijection with positive roots of $A_n$'s root system and we have the desired corresponding graph for each of them. We now can complete the substitutions by letting $y_i =x_{i-1}$, for $i \in \{2,\dots, n+1\}$, which induces $y_{i}^{(j)}=x_{i-1}^{(j)}$ for $i\in \{2,\dots ,n+1\}$. 
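As a quick numerical illustration of this periodic structure (a sanity check, not part of the proof), the $A_n$-lattice can be generated by iterating the staggered diamond recurrence as a bipartite belt, with the two boundary columns pinned to $1$; rows $n+2$ and $n+3$ then return the initial variables in reverse order. The helper name below is illustrative, not notation from the text.

```python
# Sanity check (not a proof): iterate the staggered diamond recurrence for
# the A_n-lattice. Half-step j replaces x_i by (x_{i-1} x_{i+1} + 1)/x_i for
# every i of the same parity as j, with boundary columns fixed at 1.
from fractions import Fraction

def belt_state(init, half_steps):
    """Zig-zag state (one value per column) after the given half-steps."""
    v = [Fraction(1)] + [Fraction(t) for t in init] + [Fraction(1)]
    n = len(init)
    for j in range(1, half_steps + 1):
        for i in range(1, n + 1):
            if i % 2 == j % 2:            # row j updates columns of parity j
                v[i] = (v[i - 1] * v[i + 1] + 1) / v[i]
    return v[1:-1]

for init in [[2, 5], [2, 3, 5], [2, 3, 5, 7], [3, 4, 5, 6, 7]]:
    n = len(init)
    # after n+3 half-steps (rows n+2 and n+3 combined) the state is the
    # initial data written in reverse order
    assert belt_state(init, n + 3) == [Fraction(t) for t in reversed(init)]
print("rows n+2 and n+3 reproduce the initial variables in reverse order")
```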
In particular, the extended $A_n$ lattice reduces to doubly-periodic copies of the $A_n$ lattice containing graphs involving only tiles $\tilde{T}_2=T_1,\dots, \tilde{T}_{n+1}=T_n$. Thus Proposition \ref{CaseAn} is proven. As a corollary of this argument, the diamond condition implies that the $(n+2)$nd and $(n+3)$rd rows consist of the initial cluster variables written in reverse order. \end{proof} \begin{Rem} Consider a new lattice $\{z_{i}^{(j)}\}$ consisting of connected subsets of $\mathcal{T}_{A_n}$ such that $T_k \in z_i^{(j)} \iff T_k$ appears in the graph associated to $x_i^{(j)}$ and add columns consisting of empty sets on the left-hand and right-hand sides of this lattice. This lattice satisfies a tropical-like diamond condition where one of the following four holds. \begin{eqnarray*} a = b \cup c \mathrm{~and~} d = b \cap c \\ a = b \cap c \mathrm{~and~} d = b \cup c \\ b = a \cup d \mathrm{~and~} c = a \cap d \\ b = a \cap d \mathrm{~and~} c = a \cup d \end{eqnarray*} \end{Rem} \begin{Rem} Such lattices are known as frieze patterns, and were studied by Conway and Coxeter \cite{ConCox} in the 1970s. Such patterns have also been studied in connection with cluster algebras in work of Caldero \cite{Caldero2} and work of Propp \cite{MarkPropp}. These lattices are also special cases of the bipartite belt described in \cite{ClusIV}; each row of the lattice corresponds to a seed of the belt. \end{Rem} \begin{Rem} Hugh Thomas \cite{HTPers} brought it to the author's attention that one can also derive the above lattices via the algorithm for constructing the Auslander-Reiten quiver \cite{AssocAlg} starting from projective representations; in particular the pattern of denominator vectors agrees with the dimension vectors of the indecomposables in the AR quiver.
\end{Rem} \section{$C_n$} The Lie algebra $C_n$ has the following Dynkin diagram $$\bullet\Rightarrow \bullet\line(1,0){3}\bullet\line(1,0){3}\bullet\line(1,0){3}\bullet\line(1,0){3} \bullet\line(1,0){3} \dots \dots \line(1,0){3}\bullet$$ \noindent and thus the bipartite exchange matrix is: $$\begin{bmatrix} 0 & 2 & 0 & 0 & \dots & 0 & 0 \\ -1 & 0 & -1 & 0 & \dots & 0 & 0 \\ 0 & 1 & 0 & 1 & \dots & 0 & 0 \\ 0 & 0 & -1 & 0 & \dots & 0 & 0 \\ \dots & \dots & \dots & \dots & \dots & \dots & \dots \\ 0 & 0 & 0 & 0 & \dots & (-1)^{n+1} & 0 \\ \end{bmatrix}.$$ \noindent To build the corresponding graphs we let $\mathcal{T}_{C_n}$ be identical to $\mathcal{T}_{A_n}$ except that tile $T_1$ now has weights of $x_2$ and $x_2$ opposite each other instead of a lone weighted edge. This change to $T_1$ corresponds to the change to the exchange polynomial associated to label $1$ in the seed of this cluster algebra. \vspace{1em}\begin{center} \includegraphics[width = 3in , height = 0.6in]{C5.eps}\\ Tiles for $C_5$. \end{center} \vspace{1em} We use gluing rule \ref{Gluing} again, which leads us to a collection similar to $\mathcal{G}_{A_n}$ except now tile $T_1$ can connect to tile $T_2$ on either side. Thus the collection of possible graphs, $\mathcal{G}_{C_n}$, corresponds to the sets of the form $$\{T_i,T_{i+1},T_{i+2},\dots, T_{j-1},T_j\}$$ for $1 \leq i \leq j \leq n$ or multisets of the form $$\{T_i,T_{i-1},T_{i-2},\dots, T_3, T_2, T_1, T_2, T_3, \dots, T_{j-1},T_j\}$$ for $2 \leq i \leq j \leq n$. This collection $\mathcal{G}_{C_n}$ has the same cardinality as the collection of non-initial cluster variables for a cluster algebra of type $C_n$ and thus the collection of positive roots for a root system of type $C_n$, as in the last case \cite{ClustII,Kac}.
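This cardinality claim can be checked by brute force (assuming, as in the $A_n$ case, that single tiles are included among the sets, i.e. $i \leq j$): the two families together contain exactly $n^2$ graphs, the number of positive roots of $C_n$.

```python
# Illustrative enumeration (a check, not a proof) of the two families above.
# Strips are encoded by their index intervals; the doubled graphs by the
# sequence of tile indices read from T_i down to T_1 and back up to T_j.
def G_Cn(n):
    strips = [tuple(range(i, j + 1))
              for i in range(1, n + 1) for j in range(i, n + 1)]
    doubled = [tuple(range(i, 0, -1)) + tuple(range(2, j + 1))
               for i in range(2, n + 1) for j in range(i, n + 1)]
    return strips + doubled

for n in range(2, 9):
    assert len(G_Cn(n)) == n * n      # |G_{C_n}| = # positive roots of C_n
print("|G_{C_n}| = n^2 verified for small n")
```

Counting directly, the strips contribute $n(n+1)/2$ graphs and the doubled multisets $n(n-1)/2$, for a total of $n^2$.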
\begin{Prop} \label{CaseCn} The set of graphs $\mathcal{G}_{C_n}$ is in bijection with the set of non-initial cluster variables for a coefficient-free cluster algebra of type $C_n$ such that the statement of Theorem \ref{vargraph} holds. \end{Prop} This can be proved quickly by using the folding procedure as in \cite{ClustII}. We identify $A_{2n-1}$ with $C_n$ by letting $x_k = x_{n+1-k}$ for $k \in \{1,\dots, n-1\}$. We let $x_n = x_1$ and let $x_k = x_{k-n+1}$ for $k \in \{n+1,\dots, 2n-1\}$. Our lattice will contain repeats but we can restrict our list to the right half, including the central axis, to obtain the correct number of graphs. Thus Proposition \ref{CaseAn} implies Proposition \ref{CaseCn}. \vspace{1em}\begin{center} \includegraphics[width = 3in , height = 3in]{C3lattice.eps} \\ The collection $\mathcal{G}_{C_3}$ with duplicates. \end{center} \vspace{1em} \section{$B_n$ and $D_n$} In the previous two cases, all of the exchange polynomials had degree two or less. For the cases of $B_n$ and $D_n$, exactly one of the exchange polynomials has degree three. We will deal with such exchanges by including hexagons as potential tiles. We start with the case of $B_n$, which is a folded version of the simply-laced $D_n$ case. By folding, our proofs will require less notation and as we will see, the $D_n$ case has a symmetry such that we can easily derive this case from the results for $B_n$. 
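The folding can also be watched numerically (an illustration only, with hypothetical helper names): running the staggered diamond recurrence for $A_{2n-1}$ with the initial data identified symmetrically, every subsequent row of the lattice retains the left-right symmetry, which is what allows us to restrict to the right half.

```python
# Illustrative check of the folding A_{2n-1} -> C_n: with symmetric initial
# data, every row of the bipartite-belt lattice stays symmetric, so the C_n
# variables can be read off from either half.
from fractions import Fraction

def belt_rows(init, half_steps):
    """Rows of the staggered diamond recurrence, boundary columns fixed at 1."""
    v = [Fraction(1)] + [Fraction(t) for t in init] + [Fraction(1)]
    m = len(init)
    rows = []
    for j in range(1, half_steps + 1):
        for i in range(1, m + 1):
            if i % 2 == j % 2:
                v[i] = (v[i - 1] * v[i + 1] + 1) / v[i]
        rows.append(v[1:-1])
    return rows

# A_5 folded to C_3: symmetric initial data (x1, x2, x3, x2, x1)
init = [2, 7, 3, 7, 2]
for row in belt_rows(init, 10):
    assert row == row[::-1]           # the folding symmetry persists
print("folded A_5 lattice is symmetric in every row")
```

The symmetry is preserved because, for $m = 2n-1$ odd, the positions $i$ and $m+1-i$ have the same parity and are therefore updated in the same half-step, using mirror-image neighbors.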
In the case of $B_n$, the Dynkin diagram is $$\bullet\Leftarrow \bullet\line(1,0){3}\bullet\line(1,0){3}\bullet\line(1,0){3}\bullet\line(1,0){3} \bullet\line(1,0){3} \dots \dots \line(1,0){3}\bullet$$ \noindent and thus the bipartite exchange matrix is: $$\begin{bmatrix} 0 & 1 & 0 & 0 & \dots & 0 & 0 \\ -2 & 0 & -1 & 0 & \dots & 0 & 0 \\ 0 & 1 & 0 & 1 & \dots & 0 & 0 \\ 0 & 0 & -1 & 0 & \dots & 0 & 0 \\ \dots & \dots & \dots & \dots & \dots & \dots & \dots \\ 0 & 0 & 0 & 0 & \dots & (-1)^{n+1} & 0 \\ \end{bmatrix}.$$ \noindent We will now use the notation $T_1$ through $T_n$ to refer to a collection of tiles, $\mathcal{T}_{B_n}$, related to $B_n$. We construct $\mathcal{T}_{B_n}$ from $\mathcal{T}_{A_n}$ by first replacing $T_2$ with a hexagon having weights $1$, $x_1$, $1$, $x_1$, $1$, and $x_3$ in clockwise order starting from the top. We let $T_1$ be a trapezoid with a single weighted edge of $x_2$ on its northern side. Note that $T_1$ is homeomorphic to its previous definition. Then for all $i > 2$ we define $T_i$, for type $B$, as a counter-clockwise rotation of the $A_n$-tile $T_i$, including the boundary tile $T_n$, which has a single weighted edge of $x_{n-1}$ on its eastern side. \vspace{1em}\begin{center} \includegraphics[width = 2.45in , height = 1.6in]{B5Tiles-New.eps}\\ Tiles for $B_5$. \end{center} \vspace{1em} The gluing rule will be more complicated now that hexagons are involved. As a first approximation, the set of graphs $\mathcal{G}_{B_n}$ will include any graphs that can be constructed from $\mathcal{T}_{B_n}$ while conforming to Rule \ref{Gluing}. Again we are not allowing rotations or reflections of the tiles, so they must be connected in the orientations described above. Any such graph will resemble either a \emph{tower} of tiles $T_a$ through $T_b$ for $3\leq a \leq b \leq n$, a \emph{base} involving hexagon $T_2$ with or without trapezoid $T_1$ on its western side, or a complex of a tower beginning with $T_3$ on top of a base.
In addition, we enlarge the set $\mathcal{G}_{B_n}$ by allowing any graphs that obey the following second rule: \begin{Rule} \label{Hexagons} The trapezoidal tile $T_1$ may appear twice if and only if the lift of the graph to $\mathcal{G}_{B_\infty}$ (i.e. $n$ arbitrarily large) has one of the following three forms: \end{Rule} \vspace{1em}\begin{center} \includegraphics[width = 1.1in , height = 0.8in]{BB121.eps} \hspace{2em} \includegraphics[width = 1.1in , height = 1.8in]{BaseTower.eps} \hspace{2em} \includegraphics[width = 1.1in , height = 1.1in]{CaseBnb.eps} \end{center}\vspace{1em} \noindent where $3 \leq m_1 < m_2$ and $m_1,~m_2$ are both odd. \begin{Rem} Notice that Rule $1$ is now broken when we connect trapezoid $T_1$ to a hexagon $T_2$ on its left. Furthermore, in the last of these cases, we have adjoined an additional arc which had not been allowed or required in previous examples. However, there is precedent for using such additional arcs; see Section $3$ of \cite{MusPropp}. It will turn out that we can project these graphs, consisting of two towers, down to the $B_n$-lattice by excision around tile $T_{n+1}$ just as in the $A_n$ case. Thus the fact that there is an odd number of tiles in each tower, with the larger tower on the left, greatly limits the set of such graphs.\end{Rem} One can check that the collection of graphs $\mathcal{G}_{B_n}$ obeying Rule $1$ \emph{or} Rule $2$ has cardinality equal to the number of positive roots for $B_n$. Further we will prove, after excision, that Theorem \ref{vargraph} is satisfied by these definitions. \begin{Prop} \label{CaseBn} The set of graphs $\mathcal{G}_{B_n}$ is in bijection with the set of non-initial cluster variables for a coefficient-free cluster algebra of type $B_n$ such that the statement of Theorem \ref{vargraph} holds.
\end{Prop} \begin{proof} Analogous to the $A_n$ case, we will first prove the result for the $B_\infty$ case, that is, we assume that $n$ is arbitrarily large so that for $i\geq 3$ tile $T_i$ always has exactly two weighted edges ($x_{i-1}$ on its east and $x_{i+1}$ on its west). This greatly simplifies the proofs by allowing easier notation and bypassing case-by-case analysis. We will later discuss how to obtain the result for a specific $B_n$ from such graphs. We create a semi-infinite lattice whose entries satisfy a deformed diamond condition. \vspace{1em}$\begin{array}{cccccccccccc} x_1 &~~~& x_3 &~~~& x_5 &~~~& \dots &~~~& &~~~& \\ ~~~& x_2 &~~~& x_4 &~~~& x_6 &~~~& \dots &~~~& & ~~~ \\ x_1^{(1)} &~~~& x_3^{(1)} &~~~& x_5^{(1)} &~~~& \dots &~~~& &~~~& \\ ~~~& x_2^{(2)} &~~~& x_4^{(2)} &~~~& x_6^{(2)} &~~~& \dots &~~~& & ~~~ \\ x_1^{(3)} &~~~& x_3^{(3)} &~~~& x_5^{(3)} &~~~& \dots &~~~& &~~~& \\ ~~~& x_2^{(4)} &~~~& x_4^{(4)} &~~~& x_6^{(4)} &~~~& \dots &~~~& & ~~~ \\ \dots & \dots & \dots & \dots \end{array}$ \\ \begin{center} The lattice for $B_\infty$. \end{center}\vspace{1em} \noindent Without the boundary on the right, any collection of four variables \begin{center}$\begin{array}{ccc} &a& \\ b&&c \\ &d& \end{array}$\end{center} \noindent such that $b \not = x_1^{(j)}$ and $c \not = x_2^{(j)}$ will satisfy $ad - bc = 1$. A diamond such that $c = x_2^{(j)}$ will satisfy the truncated condition $ad - c = 1$, and a diamond which contains $b = x_1^{(j)}$ will satisfy the relation $ad - b^2c = 1$. As before, we say that a Laurent polynomial bijects to graph $G$, which we denote as $x_i^{(j)}\leftrightarrow G$, if $x_i^{(j)} = P(G)/{x_1^{\alpha_1}\cdots x_n^{\alpha_n}}$ where $P(G)$ is the matching polynomial of $G$ and $\alpha_i$ encodes the number of occurrences of tile $T_i$ in graph $G$.
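These deformed diamond conditions can be illustrated concretely (this is a numerical aside only, using the finite $B_3$ exchange matrix rather than $B_\infty$): mutating $x_1$ and $x_3$ via $x\,x' = x_2 + 1$ and mutating $x_2$ via $x_2\,x_2' = x_1^2 x_3 + 1$ gives a bipartite belt which, as finite type requires, is periodic; here the period is $4 = (h+2)/2$ for Coxeter number $h = 6$.

```python
# Illustrative check of the B_3 bipartite-belt dynamics under the stated
# exchange relations (a numerical aside, not the graph combinatorics).
from fractions import Fraction

def b3_step(a, b, c):
    """One full step of the bipartite belt for B_3: mutate 1 and 3, then 2."""
    a, c = (b + 1) / a, (b + 1) / c       # x x' = x_2 + 1 at labels 1 and 3
    b = (a * a * c + 1) / b               # x_2 x_2' = x_1^2 x_3 + 1
    return a, b, c

state = (Fraction(2), Fraction(3), Fraction(5))
orbit = [state]
for _ in range(4):
    state = b3_step(*state)
    orbit.append(state)

assert orbit[4] == orbit[0]               # periodicity of the bipartite belt
print("B_3 belt returns to the initial cluster after 4 steps")
```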
Given this setup along with the initial assignments of $x_i^{(0)} = x_i$ for $i\geq 1$, we directly verify that \begin{center} $x_1^{(1)} ~\longleftrightarrow~ $ \includegraphics[width = 0.4in , height = 0.4in]{Bx11.eps} \hspace{0.1in}, \hspace{0.6in} $x_2^{(2)} ~\longleftrightarrow~ $ \includegraphics[width = 0.7in , height = 0.7in]{Bx22.eps} \hspace{0.05in}, \\ \vspace{1.5em} $x_1^{(3)} ~\longleftrightarrow~ $ \includegraphics[width = 0.7in , height = 0.7in]{BB123.eps} \hspace{0.1in}, \hspace{0.3in} $x_2^{(4)} ~\longleftrightarrow~ $ \includegraphics[width = 0.7in , height = 0.7in]{BB121235.eps} \hspace{0.05in}, \end{center}\vspace{1.5em} where the weights of the edges are as dictated by the definitions of tiles $T_1$ through $T_n$. Additionally, for $i-j \geq 2$, the only initial variables used to determine $x_i^{(j)}$ are $\{x_3,x_5,x_7,\dots\}$ and thus we recover the regular diamond pattern used in the $A_\infty$ case. Consequently, we immediately obtain \begin{Lem} \label{AAregion} $x_i^{(j)} ~\longleftrightarrow~ $ \includegraphics[width = 1.5in , height = 0.3in]{BasicBlock.eps} \vspace{1.5em} \hspace{1em} for $a = i-j+1, ~b = i+j-1$ when $i - j \geq 2$. We also find that $3 \leq a \leq b$. \end{Lem} We proceed with the rest of the proof in three steps. The first two steps are proved inductively by using Lemma \ref{AAregion} as well as $\{x_1^{(1)}, x_2^{(2)}, x_1^{(3)}, x_2^{(4)} \}$ as a base case. We will prove the inductive step via the usual diamond condition $ad - bc = 1$ which will hold for the diagonals $i - j = 0$ and $i - j = - 2$ while $i \geq 2$. \begin{Lem} \label{xii} $x_i^{(i)} ~\longleftrightarrow~ $ \includegraphics[width = 1in , height = 1in]{Bxii.eps} \hspace{0.3em} for $i \geq 2$. \end{Lem} \begin{Lem} \label{xii+2} $x_i^{(i+2)} ~\longleftrightarrow~ $ \includegraphics[width = 1.5in , height = 1in]{Bxii2.eps} \hspace{0.3em} for $i \geq 2$.
\end{Lem} The proof of these Lemmas will prove Proposition \ref{CaseBn} for all $x_i^{(j)}$ such that $i - j \geq -2$. We must now use variants of the diamond conditions ($ad - c = 1$ and $ad - b^2c = 1$) to extend down columns $x_1^{(j)}$ and $x_2^{(j)}$ respectively. But to continue to have new entries to use as $c$ in the relation we must continually extend down diagonals as we extend down the columns. Consequently, we proceed to prove the following three results for $j = 3$, then for $j = 4$, and so on by induction. \begin{Lem} \label{restofdiagonals} $x_1^{(2j+1)} ~\longleftrightarrow~ $ \includegraphics[width = 1in , height = 1in]{Bx12j1.eps} \hspace{0.3in} for $j \geq 1$,\\ \vspace{1em} $x_2^{(2j)} ~\longleftrightarrow~ $\includegraphics[width = 1.5in , height = 1in]{Bx22j.eps} \hspace{0.3in} for $j \geq 2$, \\ \vspace{1em} $x_i^{(i+2j)} ~\longleftrightarrow~ $ \includegraphics[width = 1.5in , height = 1in]{Bxii2j.eps} \hspace{0.3in} where $k = i+j$ for $i \geq 2$ and $j \geq 1$. \end{Lem} \vspace{2.5em}\begin{center} \includegraphics[width = 2.5in , height = 2.5in]{Proof1.eps}\\ A model of how these Lemmas fit together and relate to the $B_\infty$-lattice. \end{center} \vspace{0.5em} \noindent With the proof of Proposition \ref{CaseBn} now broken down into manageable chunks, we proceed to prove the Lemmas. \vspace{1em} \noindent\bf Proof of Lemma \ref{xii}.\rm \hspace{0.05in} By Lemma \ref{AAregion}, we have that the northeast portion of the lattice is filled in, and we use the entries on diagonal $i-j=2$ and the base cases of $x_1^{(1)}$ and $x_2^{(2)}$ to extend to the rest of diagonal $i-j=0$ by the diamond condition.
Assuming that cluster variables are \vspace{1em} $a ~\longleftrightarrow~ $ \includegraphics[width = 1.5in , height = 0.3in]{Lem6a.eps} \\ \vspace{1em} $b ~\longleftrightarrow~ $\includegraphics[width = 1in , height = 1in]{Bxii.eps} \\ \vspace{1em} $c ~\longleftrightarrow~ $ \includegraphics[width = 1.5in , height = 0.3in]{Lem6c.eps} \vspace{1em} \noindent we wish to show that $d ~\longleftrightarrow~ $ \includegraphics[width = 1in , height = 1in]{Lem6d.eps}, \hspace{1.5em} given the diamond relation $ad = bc + 1 $. First of all, we see that the occurrences of tiles match up on each side of the equal sign, which implies that the denominators agree appropriately. It suffices to show the weighted number of matchings also match up accordingly. Any pair of matchings of the graphs \includegraphics[width = 1.0in , height = 0.2in]{Lem6a.eps} and \includegraphics[width = 0.8in , height = 0.8in]{Lem6d.eps} can be decomposed into a pair of matchings on graphs \includegraphics[width = 1.0in , height = 0.2in]{Lem6c.eps} and \includegraphics[width = 0.8in , height = 0.8in]{Bxii.eps} except for one pair. The logic is identical to that of Lemma \ref{diag}. Here we swap the tops of the towers and note that if the top horizontal edge of the hexagon is used (instead of the NW and NE diagonal edges), then completely swapping the two towers is permissible. \vspace{0.5em} \noindent This extraneous indecomposable pairing is the pair \includegraphics[width = 1in , height = 0.9in]{Lem6match.eps} and it has exactly the correct weight $(x_1^2x_2x_3^2x_4^2\cdots x_{2i-1}^2x_{2i}x_{2i+1})$ and thus the Lemma is proved.
\vspace{1em} \noindent\bf Proof of Lemma \ref{xii+2}.\rm \hspace{0.05in} The proof of this Lemma is analogous except that we shift the diamond pattern so that \vspace{1em} $a ~\longleftrightarrow~ $ \includegraphics[width = 1in , height = 1in]{Bxii.eps} \hspace{0.1in}, \hspace{0.6in} $d ~\longleftrightarrow~ $ \includegraphics[width = 1.5in , height = 1in]{Bxii2.eps} \\ \vspace{1em} $b ~\longleftrightarrow~ $\includegraphics[width = 1.5in , height = 1in]{Lem7b.eps} \hspace{0.1in}, \hspace{0.6in} $c ~\longleftrightarrow~ $ \includegraphics[width = 1in , height = 1in]{Lem6d}\hspace{0.1in}. \vspace{1em} \noindent Again, we have a bijection by swapping the tops of the (left) towers, and there is exactly one extraneous pair of matchings: \vspace{1em}\includegraphics[width = 1in, height = 0.9in]{Lem7match.eps}. \hspace{2em} This pair has precisely the correct weight of $x_1^4x_2^3x_3^3x_4^2x_5^2\cdots x_{2i-1}^2x_{2i}x_{2i+1}$. \vspace{1em} \noindent\bf Proof of Lemma \ref{restofdiagonals}.\rm \hspace{0.05in} The first part is proven by the following observations. If we \vspace{1em} \noindent let graph $G_1$ be \includegraphics[width = 1in , height = 0.7in]{Lem8G1.eps}, $G_2$ be \includegraphics[width = 1in , height = 0.7in]{Lem8G2.eps}, $T_1$ be \includegraphics[width = 0.7in , height = 0.7in]{Lem8T1.eps}, \hspace{1em} \noindent $T_2$ be \includegraphics[width = 0.7in , height = 0.7in]{Lem8T2.eps}, $H_1$ be \includegraphics[width = 1.3in , height = 0.65in]{Lem8H1.eps}, and let $H_2$ be \includegraphics[width = 1.3in , height = 0.65in]{Lem8H2.eps} \hspace{0.1in} then \begin{eqnarray*} P(G_1) &=& P(G_2) + x_1^2x_2x_3P(H_1) \\ P(T_1)P(T_2) &=& P(G_2) + x_1^2x_2x_3P(H_2). \end{eqnarray*} Putting these two equalities together we obtain $$P(G_1) = P(T_1)P(T_2) + x_1^2x_2x_3\bigg(P(H_2) -P(H_1)\bigg).$$ \noindent Most matchings of $H_2$ correspond to a matching of $H_1$ by the usual procedure of swapping the right-hand sides. 
The extraneous matching of $H_2$ has the form \begin{center}\includegraphics[width = 1.3in , height = 0.65in]{Lem8extra.eps}, \end{center} contributing a factor of $x_3x_5x_5x_7\cdots x_{2i-1}x_{2i+1}\cdot x_2x_4x_4x_6\cdots x_{2i-2}x_{2i}$, and yielding the identity $$P(G_1) = P(T_1)P(T_2) + x_1^2x_2^2x_3^2\cdots x_{2i-1}^2x_{2i}x_{2i+1}.$$ Since $x_1^{(j-1)}x_1^{(j+1)} = x_2^{(j)}+1$ is satisfied by letting $T_1\longleftrightarrow x_1^{(j-1)}$ and $T_2\longleftrightarrow x_1^{(j+1)}$, part one of Lemma \ref{restofdiagonals} is proved. Part three is proved analogously to Lemma \ref{xii+2}. In this case, we have a diamond where all four entries are graphs consisting of two towers on the maximal base of two trapezoids and two hexagons. We inductively know the validity of these graphs for Laurent polynomials $a$, $b$, and $c$, so it is sufficient to verify the diamond condition if $d \longleftrightarrow$ \includegraphics[width = 1.5in , height = 1in]{Bxii2j.eps}. For ease of notation, we temporarily let $G_a$, $G_b$, $G_c$, and $G_d$ be the graphs corresponding to these particular Laurent polynomials. As before, we wish to present a bijection $$\phi: M(G_a) \times M(G_d) \setminus\{(m_a^\prime, m_d^\prime)\} \rightarrow M(G_b)\times M(G_c)$$ where $(m_a^\prime, m_d^\prime)$ is a specific pair of matchings. Map $\phi$ starts by swapping the two left towers of $G_a$ and $G_d$ if able; this is analogous to the earlier cases. However, in the case where these towers cannot be swapped (because the alternating pattern continues down into the base) map $\phi$ then attempts to swap the right towers of $G_a$ and $G_d$. There is exactly one pair of matchings where both attempts at swapping fail. This is the pair of matchings where the alternating patterns appear on both towers down through the bases; by inspection, such a pair has precisely the weight of the extraneous monomial.
This leaves part two as the crux of the proof and the step utilizing the diamond relation $ad - b^2c = 1$ that makes $B_\infty$ different from the previous cases. We wish to show that the assignments \vspace{1em} $a ~\longleftrightarrow~ $ \includegraphics[width = 1in , height = 1in]{Lem8G1.eps} \hspace{0.1in}, \hspace{0.6in} $d ~\longleftrightarrow~ $ \includegraphics[width = 1.5in , height = 1in]{Lem8d.eps} \\ \vspace{1em} $b ~\longleftrightarrow~ $\includegraphics[width = 1in , height = 1in]{Lem8T2.eps} \hspace{0.1in}, \hspace{0.6in} $c ~\longleftrightarrow~ $ \includegraphics[width = 1in , height = 1in]{Lem8c.eps}\hspace{0.1in} \vspace{2em}\noindent satisfy $ad-b^2c=1$. We shall use the fact that if $b ~\longleftrightarrow~ $\includegraphics[width = 1in , height = 1in]{Lem8T2.eps} \hspace{0.1in}, then $b^2 ~\longleftrightarrow~ $\includegraphics[width = 1.5in , height = 1in]{Lem8b2.eps}. \noindent Clearly the denominator corresponds correctly. The number of weighted matchings, and thus the numerator, is also correct since there is a weight-preserving bijection between matchings of \vspace{1em}$\includegraphics[width = 1.05in , height = 0.7in]{Lem8b2pf.eps} \hspace{0.1in}$ and matchings of $\includegraphics[width = 1.05in , height = 0.7in]{Lem8b2pf2.eps}$. With this substitution, a superposition argument analogous to that which just proved part three demonstrates the validity of the standard diamond relation $ad - (b^2)c = 1$. With the last step completed, these three Lemmas prove that the cluster variables of the $B_\infty$-lattice correspond exactly to the desired graphs. There is a pattern inherent in the NW to SE and NE to SW diagonals once again. This time, this pattern manifests itself (in the region where $i \leq j$) by dictating the choice of right tower (NW to SE) and the choice of left tower (NE to SW).
Recall that Lemma \ref{AAregion} already described the pattern in the region where $i > j$ using grid graphs consisting of tiles $T_a\cup\dots \cup T_b$ where $a \geq 3$. We now turn to the problem of restricting to specific $B_n$. We use the same methodology as in the $A_n$ case. Given that we did not include the right-hand boundary, graphs can contain tile $T_i$ for arbitrarily large $i$. We thus want to essentially let $x_{n+1}=1$ to force tile $T_n$ to have the proper weights, as a tile in $\mathcal{T}_{B_n}$ as opposed to $\mathcal{T}_{B_\infty}$. However, unlike the $A_n$ case, we cannot simply apply this substitution and use horizontal periodicity to make sure the diamond condition holds throughout. The problem is that the left-hand boundary satisfies a diamond condition of a different form. Nonetheless, the diagonals to the northeast, those where $i - j \geq 2$, contain graphs which are connected grid graphs, towers, of the form $T_a \cup \dots \cup T_b$ where $3 \leq a \leq b$, and any neighboring four entries satisfy the same diamond condition as the $A_n$ case. Thus the logic of Lemma \ref{CenterOne} carries over and we can excise connected subgraphs centered at tile $T_{n+1}$ ($\tilde{T}_{n+2}$ under the old notation). In particular, we obtain the following excision. \begin{Lem} If $b$ satisfies $n+1 \leq b \leq 2n-(a-2)$ then the graphs $T_a\cup \dots \cup T_b$ and $T_a \cup \dots \cup T_{2n+2-b}$ biject to the same Laurent polynomial. \end{Lem} This Lemma follows directly from Lemma \ref{CenterOne} after replacing $T_{n+1}$ by the equivalent tile $\tilde{T}_{n+2}$. The restriction $b \leq 2n-(a-2)$ must be added here since we have a boundary on the left side, i.e. hexagon $T_2$ cannot be excised during this procedure.
Notice that at the extreme, $T_3 \cup \dots \cup T_{2n-2}$ bijects to the same Laurent polynomial as $T_3$, and further $T_3 \cup \dots \cup T_{2n-1}$ is centered around $T_{n+1}$ and thus bijects to the Laurent polynomial $1$, the same as the empty graph. Using this Lemma, we are able to determine the northeast corner of the finite $B_n$ lattice. Using these graphs during the inductive step of Lemma \ref{xii} in lieu of the arbitrarily large towers of $T_3\cup \dots \cup T_{2j-1}$ allows us to fill in the next diagonal of the $B_n$ lattice where the towers sitting on the base of $T_1 \cup T_2 \cup T_1$ will consist exclusively of the tiles between $T_3$ and $T_n$. Similarly, the recursive steps of Lemmas \ref{xii+2} and \ref{restofdiagonals} also follow with these truncated graphs, with tiles between $T_1$ and $T_n$ used instead. Applications of Lemma \ref{xii+2} and then successive applications of the third part of Lemma \ref{restofdiagonals} will allow this interpretation to extend down SW to all but a SE corner of the lattice. Note that the diagonals again determine the left-hand and right-hand towers of the $B_n$-lattice since this property is inherited from the $B_\infty$-lattice. We compute the SE corner by starting at the bottom with initial row $T_2$, $T_4$, $T_6$, $\dots$, and propagating \emph{upwards} via the diamond condition. Via Lemma \ref{AAregion}, now with tile $T_2$ instead of $T_3$ as the smallest allowable tile, we get all but a single diagonal of the SE corner. The final diagonal has the form $(Tower~1)\cup T_2 \cup T_1 \cup T_2 \cup T_1$, proven by applying Lemma \ref{xii+2} upwards. Notice that in the end, we obtain a lattice where the NW to SE diagonals dictate the right towers and NE to SW diagonals dictate the left towers. There is one caveat: the \emph{empty} tower, $Tow_\emptyset$, is now allowed.
Thus one has to determine from context whether a graph consisting of a single tower is of the form $Tow_\emptyset \cup Tow_R$ or $Tow_L \cup Tow_\emptyset$. Alternatively, we can picture the SE corner as sitting directly above the NE corner of the lattice to form a half-diamond. Thus Proposition \ref{CaseBn} is proven. \end{proof} On the next page, we give the lattice corresponding to $B_6$. Notice that there are six graphs in the northeast corner and four graphs in the southeast corner which are also graphs corresponding to positive roots and cluster variables for the $A_6$ case. In fact, if we decrease each label by $2$ and horizontally reflect the southeast corner, we can fit these two pieces together to obtain the $A_4$ lattice of graphs exactly. \vspace{1em} Also, comparing with the $B_4$ lattice, we notice boundary behavior. For example, the second entry of the third column is now $T_3 \cup T_4$ instead of $T_3 \cup T_4\cup T_5$ and the second-to-last element of the fourth column is $T_2 \cup T_3$ instead of $T_2 \cup T_3 \cup (T_4\cup T_5 \cup T_6)$.
\vspace{10em} \newpage $\begin{array}{cccccc} \includegraphics[width = 0.2in , height = 0.2in]{BB1.eps} &~~~& \includegraphics[width = 0.2in , height = 0.2in]{BB3.eps} &~~~& \includegraphics[width = 0.2in , height = 0.2in]{BB5.eps} &~~~ \\ ~~~& \includegraphics[width = 0.6in , height = 0.45in]{BB1213.eps} &~~~& \includegraphics[width = 0.2in , height = 0.5in]{BB35.eps} &~~~& \includegraphics[width = 0.2in , height = 0.4in]{BB56.eps} \\ % \includegraphics[width = 0.4in , height = 0.45in]{BB123.eps} &~~~& \includegraphics[width = 0.6in , height = 0.6in]{BB1215.eps} &~~~& \includegraphics[width = 0.2in , height = 0.7in]{BB36.eps} &~~~ \\ ~~~& \includegraphics[width = 0.8in , height = 0.7in]{BB121235.eps} &~~~& \includegraphics[width = 0.6in , height = 0.8in]{BB1216.eps} &~~~& \includegraphics[width = 0.2in , height = 0.4in]{BB34.eps} \\ % \includegraphics[width = 0.4in , height = 0.7in]{BB125.eps} &~~~& \includegraphics[width = 0.8in , height = 0.8in]{BB121236.eps} &~~~& \includegraphics[width = 0.6in , height = 0.6in]{BB1214.eps} &~~~ \\ ~~~& \includegraphics[width = 0.8in , height = 0.8in]{BB121256.eps} &~~~& \includegraphics[width = 0.8in , height = 0.8in]{BB121234.eps} &~~~& \includegraphics[width = 0.6in , height = 0.4in]{BB121.eps} \\ % \includegraphics[width = 0.4in , height = 0.8in]{BB126.eps} &~~~& \includegraphics[width = 0.8in , height = 0.7in]{BB121254.eps} &~~~& \includegraphics[width = 0.8in , height = 0.4in]{BB121230.eps} &~~~ \\ ~~~& \includegraphics[width = 0.8in , height = 0.8in]{BB121264.eps} &~~~& \includegraphics[width = 0.8in , height = 0.7in]{BB121250.eps} &~~~& \includegraphics[width = 0.2in , height = 0.4in]{BB23.eps} \\ % \includegraphics[width = 0.4in , height = 0.6in]{BB124.eps} &~~~& \includegraphics[width = 0.8in , height = 0.8in]{BB121260.eps} &~~~& \includegraphics[width = 0.2in , height = 0.7in]{BB25.eps} &~~~ \\ ~~~& \includegraphics[width = 0.8in , height = 0.6in]{BB121240.eps} &~~~& \includegraphics[width = 0.2in , height = 
0.8in]{BB26.eps} &~~~& \includegraphics[width = 0.2in , height = 0.4in]{BB45.eps} \\ % \includegraphics[width = 0.4in , height = 0.4in]{BB12.eps} &~~~& \includegraphics[width = 0.2in , height = 0.6in]{BB24.eps} &~~~& \includegraphics[width = 0.2in , height = 0.5in]{BB46.eps} &~~~ \\ ~~~& \includegraphics[width = 0.25in , height = 0.25in]{BB2.eps} &~~~& \includegraphics[width = 0.2in , height = 0.2in]{BB4.eps} &~~~& \includegraphics[width = 0.2in , height = 0.2in]{BB6.eps} \end{array}$ \newpage We were able to analyze $C_n$ based on $A_{2n-1}$ using a folding procedure. Analogously, we can analyze $D_n$ using $B_{n-1}$ and an \emph{unfolding} procedure. We label the Dynkin diagram for $D_n$ starting with $1$ and $\overline{1}$ on the left, and label the rest in a line from $2$ to $n-1$. \begin{center}\includegraphics[width = 2.4in , height = 0.8in]{DnDynk.eps}\end{center} Indexing the rows and columns in the order $\{1,\overline{1},2,3,\dots, n-1\}$, the corresponding exchange matrix is therefore $$\begin{bmatrix} 0 & 0 & 1 & 0 & 0 & \dots & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & \dots & 0 & 0 \\ -1 & -1 & 0 & -1 & 0 & \dots & 0 & 0 \\ 0 & 0 & 1 & 0 & 1 & \dots & 0 & 0 \\ 0 & 0 & 0 & -1 & 0 & \dots & 0 & 0 \\ \dots & \dots & \dots & \dots & \dots & \dots & \dots & \dots \\ 0 & 0 & 0 & 0 & 0 & \dots & (-1)^{n} & 0 \\ \end{bmatrix}.$$ We split the odd and even initial variables into the first two rows, in a zig-zagging pattern, just as before. We then mutate in the order $1$, $\overline{1}$, $3$, $5$, $\dots$, $n$ (resp. $n-1$) if $n$ is odd (resp. even) to get the third row, followed by mutation via $2$ then $4$, $6$, $\dots$ $n-1$ (resp. $n$) if $n$ is odd (resp. even) to get the fourth row. The advantage of such an ordering is that the mutated exchange matrix, which we use to encode the binomial exchanges, is always the same, up to sign.
We notice that the analogue of the diamond condition for this case is $ad-bc=1$ if $b = x_i^{(j)}$ with $i \geq 2$ and \begin{eqnarray} \label{Dexch1} x_{2}^{(j-1)}x_{2}^{(j+1)} - x_{1}^{(j)}x_{\overline{1}}^{(j)}x_3^{(j)} &=& 1 \\ \label{Dexch2} x_{1}^{(j-1)}x_{1}^{(j+1)} - x_{2}^{(j)} &=& 1 \\ \label{Dexch3} x_{\overline{1}}^{(j-1)}x_{\overline{1}}^{(j+1)} - x_{2}^{(j)} &=& 1 \end{eqnarray} on the western boundary. We let $\mathcal{T}_{D_n}$ be $\mathcal{T}_{B_{n-1}} \cup \{T_{\overline{1}} \}$ where $T_{\overline{1}}$ is the same tile as $T_1$ except with a different label. We also change tile $T_2$ so that it is still a hexagon, but has weights $1, x_1, 1, x_{\overline{1}}, 1,$ and $x_3$ going around clockwise from the top. Following the arguments of Lemmas $5$, $6$, $7$, and $8$ results in the same graph theoretic interpretation and lattice structure. We use Rule $3$, which is analogous to Rule $2$. \begin{Rule} Notice that when we apply Rule $1$ to the set of tiles $\mathcal{T}_{D_n}$, we get a set of graphs consisting of a base of $T_2$ or $T_1 \cup T_2$ adjoining a tower of $T_a \cup \dots \cup T_b$, as before. We enlarge the set of graphs by allowing a base of $T_{\overline{1}} \cup T_2$ (with or without an accompanying tower), and also allow \emph{both} tile $T_1$ and tile $T_{\overline{1}}$ to appear if and only if the lift of the graph to $\mathcal{G}_{D_\infty}$ (i.e. $n$ arbitrarily large) is one of the following three forms: \begin{center} \includegraphics[width = 1.1in , height = 0.8in]{DD121.eps} \hspace{2em} \includegraphics[width = 1.1in , height = 1.8in]{DnBaseTower.eps} \hspace{2em} \includegraphics[width = 1.1in , height = 1.1in]{DDTwoTowers.eps} \end{center} where $3 \leq m_1\leq m_2$ and $m_1,~m_2$ are both odd. \end{Rule} Let $\mathcal{T}_{D_n}$ be defined as above and $\mathcal{G}_{D_n}$ be the set of graphs constructed according to Rules $1$ and $3$. In particular, this construction will be quite analogous to that of $\mathcal{G}_{B_{n-1}}$.
\begin{Prop} \label{CaseDn} The set $\mathcal{G}_{D_n}$ is in bijection with the set of non-initial cluster variables for a coefficient-free cluster algebra of type $D_n$ such that the statement of Theorem \ref{vargraph} holds. \end{Prop} \begin{center} \includegraphics[width = 3.5in , height = 1.5in]{D5.eps} \\ Tiles for $D_5$. \end{center} \begin{Rem} As indicated, the proof follows from the exact same logic as Lemmas $5$ through $8$. The only caveat, as a consequence of the proof, is that $x_1^{(j)}$ will sometimes be a tower on base $T_1 \cup T_2$, and sometimes contain base $T_{\overline{1}} \cup T_2$. In particular, $x_1^{(j)}$ contains $T_1$ if and only if $j$ is odd, and so we get an alternating behavior. \end{Rem} On the next page, we give the lattice for $\mathcal{G}_{D_5}$. We have the usual diamond condition for four entries in three consecutive rows and three consecutive columns, not including column one. We encode column one by placing $x_1^{(j)}$ on top of $x_{\overline{1}}^{(j)}$, and we have the exchange relations (\ref{Dexch1}), (\ref{Dexch2}), and (\ref{Dexch3}).
\newpage $\begin{array}{cccc} \includegraphics[width = 0.2in , height = 0.2in]{BB1.eps} &~~~&~~~&~~~~ \\ ~~~~& ~~~& \includegraphics[width = 0.2in , height = 0.2in]{BB3.eps} &~~~ \\ \includegraphics[width = 0.2in , height = 0.2in]{DD1.eps} &~~~&~~~&~~~~ \\ ~~~& \includegraphics[width = 0.6in , height = 0.6in]{DD1213.eps} &~~~& \includegraphics[width = 0.2in , height = 0.4in]{BB34.eps} \\ % % \includegraphics[width = 0.4in , height = 0.6in]{DD123.eps} &~~~&~~~&~~~~ \\ ~~~~& ~~~& \includegraphics[width = 0.6in , height = 0.7in]{DD1214.eps} &~~~ \\ \includegraphics[width = 0.4in , height = 0.6in]{BB123.eps} &~~~&~~~&~~~~ \\ ~~~& \includegraphics[width = 0.8in , height = 0.7in]{DD121234.eps} &~~~& \includegraphics[width = 0.6in , height = 0.4in]{DD121.eps} \\ % % \includegraphics[width = 0.4in , height = 0.7in]{BB124.eps} &~~~&~~~&~~~~ \\ ~~~~& ~~~& \includegraphics[width = 0.8in , height = 0.6in]{DD121230.eps} &~~~ \\ \includegraphics[width = 0.4in , height = 0.7in]{DD124.eps} &~~~&~~~&~~~~ \\ ~~~& \includegraphics[width = 0.8in , height = 0.7in]{DD121240.eps} &~~~& \includegraphics[width = 0.25in , height = 0.6in]{BB23.eps} \\ % % \includegraphics[width = 0.4in , height = 0.4in]{DD12.eps} &~~~&~~~&~~~~ \\ ~~~~& ~~~& \includegraphics[width = 0.25in , height = 0.7in]{BB24.eps} &~~~ \\ \includegraphics[width = 0.4in , height = 0.4in]{BB12.eps} &~~~&~~~&~~~~ \\ ~~~& \includegraphics[width = 0.25in , height = 0.4in]{BB2.eps} &~~~& \includegraphics[width = 0.2in , height = 0.2in]{BB4.eps} % % \end{array}$ \section{$G_2$} The case of $G_2$ is the only cluster algebra of exceptional finite type for which we have been able to extend our graph theoretic interpretation. We are able to do so since this case is analogous to $B_3$. We use collection $\mathcal{T}_{G_2} = \{T_1,T_2\}$ with tile $T_1$ as in the $B_n$ case, and tile $T_2$ is again a hexagon, but now has all three nontrivial weights being value $x_1$. 
There are six possible graphs that correspond to the non-initial cluster variables. \vspace{1em} \begin{center} $\begin{array}{ccc} \includegraphics[width = 0.3in , height = 0.15in]{BB1.eps} &~~~& \includegraphics[width = 0.9in , height = 0.4in]{BB121.eps} \\ \\ \includegraphics[width = 0.3in , height = 0.3in]{BB2.eps} &~~~& \includegraphics[width = 0.9in , height = 0.75in]{G2small.eps} \\ \\ \includegraphics[width = 0.75in , height = 0.4in]{BB12.eps} &~~~& \includegraphics[width = 0.9in , height = 0.75in]{G2big.eps} \end{array}$ \\ Graphs for a cluster algebra of type $G_2$. \end{center} \vspace{1em} \noindent $G_2$ has Dynkin diagram \hspace{0.8em}\includegraphics[width = 0.6in , height = 0.1in]{G2Dynk.eps}\hspace{0.8em} and exchange matrix $\begin{bmatrix} 0 & 1 \\ -3 & 0 \end{bmatrix}.$ \section{Future Directions} Given the previous sections, coefficient-free cluster algebras of type $A_n$, $B_n$, $C_n$, $D_n$, or $G_2$ have a combinatorial interpretation as a family of graphs such that the numerators of the cluster variables enumerate the weighted number of matchings and the denominators encode the occurrences of faces. Thus Theorem \ref{vargraph} is true in all of these cases. The next step would be to extend Theorem \ref{vargraph} to include cluster algebras of type $E_6$, $E_7$, $E_8$, and $F_4$, and thus have the result for all cluster algebras of finite type. \begin{Rem} Even though the Dynkin diagrams for the $E_n$'s are simply laced, fitting cluster algebras of these three types into patterns analogous to those of the $A_n$'s and $D_n$'s has been notoriously hard. Such difficulties have arisen elsewhere, such as in the original proof of positivity in \cite{ClustII}, and also in recent models using $T$-paths on triangulated surfaces, for example in \cite{ClusIV} or \cite{ClustSurf} among other work.
\end{Rem} Additionally, in the work of Schiffler and Carroll-Price for $A_n$, the cluster algebra considered is specifically the Ptolemy algebra, a cluster algebra \emph{with} coefficients. In the $T$-paths model, the boundary of the polygon gives rise to $n+3$ additional coefficients which can be included in the exchange relations and cluster expansion formula. Since the graphs we obtain in the above combinatorial interpretations are weighted so sparsely, perhaps a certain number of coefficients can be handled by the graph model as well. In \cite{MusPropp}, an analogous interpretation is given for rank $2$ cluster algebras of affine type, and unpublished work \cite{Markoff,MusMark} done as a part of REACH, as described in \cite{MarkPropp}, gives a graph theoretic interpretation for a totally cyclic rank $3$ cluster algebra. This totally cyclic rank $3$ cluster algebra corresponds to a triangulated surface of genus one with exactly one puncture (i.e. interior marked point). Such a cluster algebra has been studied geometrically, including in the work of \cite{Thomas}. Perhaps these graph theoretic interpretations could be extended to other cluster algebras, thus providing proofs of Fomin and Zelevinsky's positivity conjecture for even further cases. Lastly, we note that all the examples discussed above are families of \emph{planar} graphs associated to generators of cluster algebras. When expanding our scope to include more complicated cluster algebras, is the category of planar graphs too restrictive? More specifically, why did we need the extra arcs in the $B_n$, $D_n$, $G_2$, and affine $A_1^{(2)}$ cases? Perhaps it is an artifact of taking a higher dimensional object and projecting to two dimensions. \vspace{2em} \noindent {\bf Acknowledgments.}~~ The author would like to thank Andrei Zelevinsky for numerous helpful conversations, including referring the author to \cite{YSys} where Fibonacci polynomials appear.
Discussions with Sergey Fomin, Jim Propp, Ralf Schiffler, and Hugh Thomas have also been very useful. I would especially like to thank Hugh Thomas and Andrei Zelevinsky for their comments on an earlier draft of this paper.
Sir George Grimes (1605–1657) was an English politician who sat in the House of Commons from 1628 to 1629. He supported the Royalist cause in the English Civil War. Grimes was the son of Sir Thomas Grimes and his wife Margaret More, daughter of Sir George More of Loseley Park, and was baptised on 10 February 1605. In 1628, he was elected Member of Parliament for Haslemere and sat until 1629, when King Charles decided to rule without parliament for eleven years. Grimes was knighted at Theobalds on 9 December 1628. He supported the King in the civil war, describing himself as having "for a long time wayted on His Majesty's person as his sworne servant." Grimes died at the age of about 52 and was buried on 15 October 1657. Grimes married Alice Lovell, daughter of Charles Lovell, of West Harling, Norfolk. References 1605 births 1657 deaths English MPs 1628–1629
Q: Bool array transfer from client with UDP in C

I need to create a simple modbus application that will transfer data in bool type. I created client and server codes for this.

Client side:

    int Client(bool message[8])
    {
        struct sockaddr_in si_other;
        int s, slen = sizeof(si_other);
        bool buf[BUFLEN];
        WSADATA wsa;

        if (WSAStartup(MAKEWORD(2,2), &wsa) != 0)
        {
            exit(EXIT_FAILURE);
            return 1;
        }
        if ((s = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP)) == SOCKET_ERROR)
        {
            exit(EXIT_FAILURE);
            return 2;
        }

        memset((bool *) &si_other, 0, sizeof(si_other));
        si_other.sin_family = AF_INET;
        si_other.sin_port = htons(PORT);
        si_other.sin_addr.S_un.S_addr = inet_addr(SERVER);

        if (sendto(s, message, strlen(message), 0, (struct sockaddr *) &si_other, slen) == SOCKET_ERROR)
        {
            exit(EXIT_FAILURE);
            return 3;
        }
        // closesocket(s);
        // WSACleanup();
        return 0;
    }

Server side:

    int main()
    {
        SOCKET s;
        struct sockaddr_in server, si_other;
        int slen, recv_len;
        bool buf[BUFLEN];
        WSADATA wsa;

        slen = sizeof(si_other);

        printf("\nInitialising Winsock...");
        if (WSAStartup(MAKEWORD(2,2), &wsa) != 0)
        {
            printf("Failed. Error Code : %d", WSAGetLastError());
            exit(EXIT_FAILURE);
        }
        printf("Initialised.\n");

        if ((s = socket(AF_INET, SOCK_DGRAM, 0)) == INVALID_SOCKET)
        {
            printf("Could not create socket : %d", WSAGetLastError());
        }
        printf("Socket created.\n");

        server.sin_family = AF_INET;
        server.sin_addr.s_addr = INADDR_ANY;
        server.sin_port = htons(PORT);

        if (bind(s, (struct sockaddr *)&server, sizeof(server)) == SOCKET_ERROR)
        {
            printf("Bind failed with error code : %d", WSAGetLastError());
            exit(EXIT_FAILURE);
        }
        puts("Bind done");

        while (1)
        {
            printf("Waiting for data...\n");
            fflush(stdout);
            memset(buf, '0', BUFLEN);

            if ((recv_len = recvfrom(s, buf, BUFLEN, 0, (struct sockaddr *) &si_other, &slen)) == SOCKET_ERROR)
            {
                printf("recvfrom() failed with error code : %d", WSAGetLastError());
                exit(EXIT_FAILURE);
            }

            printf("Received packet from %s:%d\n", inet_ntoa(si_other.sin_addr), ntohs(si_other.sin_port));
            for (int i = 0; i <= 7; i++)
            {
                printf("%d", buf[i]);
            }
            printf("Data: %d\n", buf);
            printf("%s-%s-%s\n", buf[0], buf[1], buf[2]); // When I run this code, it works just like the code with 'for' and gives an error.

            if (sendto(s, buf, recv_len, 0, (struct sockaddr*) &si_other, slen) == SOCKET_ERROR)
            {
                printf("sendto() failed with error code : %d", WSAGetLastError());
                exit(EXIT_FAILURE);
            }
        }

        closesocket(s);
        WSACleanup();
        return 0;
    }

When I run the application, the data transfer is not correct. For example, when I send data as '101010', it transmits '100000'; if I send data as '110101', it transmits '110000'; and if I send it as '011111', it transmits '000000'. In other words, in order to read a value of '1' correctly, all the values before it must be '1'. When I remove the 'for' part in the code and try to read the whole buf directly (printf("Data: %d\n", buf)), it reads '6421972'. Even if I change the data, this number does not change. What can I do to overcome this problem?

A: In your client, strlen(message) will count chars until the first '0' encountered.
So your sent bool array will never be of length 8. Your client must take the buffer length as an extra parameter to fix this. If your buffer is a true C array (not an allocated pointer), then the sizeof operator can give the length. But if you use a malloc'd pointer for buf, sizeof will always return 8 (on 64-bit systems), never less, never more: that is the size of the pointer only. In short, always keep a length integer alongside the buffer.
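The pitfall is easy to demonstrate in isolation. The helper names below are hypothetical (they do not appear in the question's code); the point is only that strlen() stops at the first false (zero) byte of a bool array, while the true length has to travel alongside the buffer:

```c
#include <string.h>
#include <stdbool.h>
#include <stddef.h>

/* What the question's client effectively does: measure a bool buffer
 * with strlen(), which stops at the first 0 (false) byte. */
size_t wrong_payload_len(const bool *message) {
    return strlen((const char *)message);
}

/* The fix suggested above: carry the known length explicitly. */
size_t right_payload_len(const bool *message, size_t array_len) {
    (void)message;  /* the length cannot be derived from the pointer */
    return array_len;
}
```

In the question's client, this means passing the array length into Client() and handing it to sendto instead of strlen(message). Note that inside Client(bool message[8]) the parameter has already decayed to a pointer, so sizeof message would not give 8 there either; the length must come from the caller.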
Marshall Davidson Hatch (born 24 December 1932) is an Australian biochemist and plant physiologist, now retired as Chief Research Scientist of the CSIRO Division of Plant Industry in Canberra. He attended Newington College from 1947 to 1950 and studied biochemistry at the University of Sydney with Frederick Robert Whatley, completing his B.Sc. with honours in 1954. From 1955 to 1959 he was a plant researcher at the Commonwealth Scientific and Industrial Research Organisation (CSIRO) in Sydney. He completed his doctorate at the University of Sydney in 1959 and received a Fulbright scholarship in 1961, which he used to work with Paul Karl Stump in the Department of Biochemistry at the University of California. From 1961 to 1966 he was a researcher at the David North Plant Research Centre of the Colonial Sugar Refining Co. Ltd. (CSR) in Brisbane, together with K. T. Glasziou. In 1967 he taught botany at the University of Queensland, and he returned to CSR between 1968 and 1969, where he worked as director of the David North Plant Research Centre. From 1970 he was Chief Research Scientist at CSIRO Plant Industry in Canberra. Together with the British researcher Charles Roger Slack, he discovered the C4 pathway of carbon fixation, a metabolic pathway also known as the Hatch–Slack pathway of photosynthesis. He has published more than 200 articles in scientific journals and books in the field of photosynthesis and other areas of plant biochemistry. Selected publications . 1959. Studies on the glycolytic breakdown of carbohydrate in a plant extract, and the mechanism of control of this process by a co-functional Pasteur effect. University of Sydney. Honours 1973: Clarke Medal of the Royal Society of New South Wales Member of the Order of Australia Australian Academy of Science Royal Society 1991: International Prize for Biology, for his contributions to plant science. References External links Australian biochemists Photosynthesis Members of the Australian Academy of Science Members of the Royal Society English-language writers Members of the United States National Academy of Sciences Clarke Medal Born in Perth
Q: How easy is it to make a VB.net form button run logic in a separate thread

I'm not a VB coder but I'm tinkering with a little VB.net utility project which lets you set up a few parameters in a form and hit "go" - this then does a lot of logic which can run for several minutes. This all happens in the go-button handler, which blocks the form. I wondered, is it easy in VB.net to make all this logic happen in a separate thread which can still update the form, i.e. update a label to show which file is being processed? If it's complicated, it's not worth doing in my use-case! Is it possible to just copy-paste my event code into a thread.Run or something like that, or even dynamically create a thread class around the code I have?

A: I have used the BackgroundWorker class (System.ComponentModel.BackgroundWorker) many times for things like this. It's very simple to use (compared to other multi-threading techniques available in .NET). Just drag it from the toolbox onto your form, for example. If you set its "WorkerReportsProgress" and "WorkerSupportsCancellation" properties to "True", you can even give feedback in your UI in the form of a progress bar, for example, and provide the ability for the user to click the cancel button. Anyway, there's a lot more information about it than I can reasonably include here, so I would start by looking at this page: http://msdn.microsoft.com/en-us/library/system.componentmodel.backgroundworker.aspx

A: BackgroundWorker is a good choice to start. Whatever you use, be aware that performance can be affected if the background thread is processor intensive, i.e. a long-running tight loop. This might not be very obvious if you have a multi-core CPU. Here is a simple example of Threading.Thread.

    Private Sub Button3_Click(sender As Object, e As EventArgs) Handles Button3.Click
        Button3.Enabled = False
        'for example pass a string and an integer to a thread as an array
        Dim params() As Object = {"one", 1} 'parameters for thread. object picked because of mixed type
        Dim t As New Threading.Thread(AddressOf someThread)
        t.IsBackground = True
        t.Start(params) 'start thread with params
    End Sub

    Public Sub someThread(params As Object) 'not on the UI
        'convert object to what it really is, an array of objects
        Dim theparams() As Object = DirectCast(params, Object())
        Dim param1 As String = DirectCast(theparams(0), String)
        Dim param2 As Integer = DirectCast(theparams(1), Integer)
        Debug.WriteLine(param1)
        Debug.WriteLine(param2)
        showOnUI(param1)
    End Sub

    Public Sub showOnUI(s As String)
        If Me.InvokeRequired Then 'not running on UI
            Me.Invoke(Sub() showOnUI(s)) 'run method on UI
        Else 'running on UI
            Label1.Text = s
            Button3.Enabled = True
        End If
    End Sub
Vol. 8, No. 3, 2015
ISSN: 1944-4184 (e-only); ISSN: 1944-4176 (print)

Embedding groups into distributive subsets of the monoid of binary operations

Gregory Mezera

Vol. 8 (2015), No. 3, 433–437

Abstract

Let $X$ be a set and $Bin(X)$ the set of all binary operations on $X$. We say that $S \subset Bin(X)$ is a distributive set of operations if all pairs of elements $*_\alpha, *_\beta \in S$ are right distributive, that is, $(a *_\alpha b) *_\beta c = (a *_\beta c) *_\alpha (b *_\beta c)$ (we allow $*_\alpha = *_\beta$).

The question of which groups can be realized as distributive sets was asked by J. Przytycki. The initial guess that embedding into $Bin(X)$ for some $X$ holds for any $G$ was complicated by an observation that if $* \in S$ is idempotent ($a * a = a$), then $*$ commutes with every element of $S$. The first noncommutative subgroup of $Bin(X)$ (the group $S_3$) was found in October 2011 by Y. Berman.

Here we show that any group can be embedded in $Bin(X)$ for $X = G$ (as a set). We also discuss minimality of embeddings, observing, in particular, that $X$ with six elements is the smallest set such that $Bin(X)$ contains a nonabelian subgroup.

Keywords: monoid of binary operations, distributive set, shelf, multishelf, distributive homology, embedding, group
Mathematical Subject Classification 2010: Primary 55N35; Secondary 18G60, 57M25
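The right-distributivity condition in the abstract is easy to check by brute force on a small set. The sketch below uses standard examples of a distributive set (a dihedral quandle on $Z_5$ together with the trivial operation $a * b = a$), not the group embedding constructed in the paper:

```python
from itertools import product

# Right distributivity for an ordered pair of operations (*a, *b):
#   (x *a y) *b z == (x *b z) *a (y *b z)  for all x, y, z in X.
# A set S of operations is distributive when every ordered pair in S
# (including pairs of equal operations) satisfies this identity.
X = range(5)
quandle = lambda x, y: (2 * y - x) % 5   # dihedral quandle on Z_5
trivial = lambda x, y: x                 # x * y = x (idempotent shelf)

def is_distributive_set(ops):
    return all(
        op_b(op_a(x, y), z) == op_a(op_b(x, z), op_b(y, z))
        for op_a, op_b in product(ops, repeat=2)
        for x, y, z in product(X, repeat=3)
    )

print(is_distributive_set([quandle, trivial]))  # prints True
```

Replacing the trivial operation with ordinary addition mod 5 breaks the identity (already for the pair of addition with itself), and the same checker reports False.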
import injectTapEventPlugin from 'react-tap-event-plugin' import React from 'react' import ReactDOM from 'react-dom' import App from './App' import './index.css' // Needed for onTouchTap // http://stackoverflow.com/a/34015469/988941 injectTapEventPlugin() // #{process.env.REACT_APP_BASENAME}# ReactDOM.render(<App />, document.getElementById('root'))