\section{Introduction} \label{sec1} Anomaly detection (AD) is the problem of identifying observations that substantially deviate from the expected distribution of normal data in a particular domain. It can be considered a binary classification task where examples of one class (anomalies) are absent or rare, whereas the other class (normal) is well represented with a sufficient number of samples. Since anomalous samples are not available, powerful supervised classifiers cannot be used. Instead, so-called one-class classifiers are employed. They can be either discriminative, which try to find a compact support for normal data, e.g.\ the one-class Support Vector Machine (SVM) \citep{ScholkopfPSSW01} and Support Vector Data Description (SVDD) \citep{TaxD04}, or generative, which model the distribution of normal data \citep{abs-1909-11786, RippelMM20}. Outlier samples that are outside of the support or have low probability according to the modeled distribution are detected as anomalies. Anomaly detectors have numerous applications. In the computer vision domain, AD was used in industrial inspection for the detection of manufacturing defects \citep{roth2022towards, BergmannFSS20}, in surveillance for the detection of suspicious behavior in videos \citep{SabokrouFFMK18}, and in healthcare for the detection of pathologies in medical images \citep{FernandoGDSF22, ShvetsovaBFSD21}. In this paper we approach anomaly detection in histological images captured and digitized with an optical microscope (typically so-called whole slide scanners). Such images are obtained from stained tissue samples, which allows visualization of the different tissue structures. Changes in the distribution and morphology of these structures help pathologists to identify abnormal conditions. In the domain of histology, as in many other domains in medicine, images of healthy tissue are usually available in abundance, while images with abnormal conditions are hard to gather and therefore scarce or completely absent. 
Moreover, some abnormal conditions might have never been registered before and are therefore unknown. Under such conditions, with a lack of labeled pathological data, supervised classification methods are not applicable; instead, AD methods can be employed. Deep learning algorithms, and particularly Convolutional Neural Networks (CNN), have become the most successful approaches in numerous computer vision tasks, including the domain of computational pathology \citep{SrinidhiCM21} and the AD challenge. Features generated by pre-trained CNNs (also called embeddings or representations) were successfully used in conjunction with one-class classification for AD tasks \citep{abs-1909-11786, RippelMM20, defard2021padim, abs-2002-10445, roth2022towards}. The CNNs are usually pre-trained on large datasets of natural images such as ImageNet \citep{DengDSLL009}. Such feature representations might be less effective in healthcare-related tasks that deal with images coming from very different imaging modalities. ImageNet pre-trained CNN representations of histological images might be sensitive to normal variations of tissue structures or staining procedures and, on the other hand, insensitive to abnormal changes in tissue patterns. Therefore, \cite{hoefling2021histonet} hypothesized that adaptation of CNNs to the specific domain of histopathology may benefit various applications that make use of deep CNN image representations. In \cite{riasatian2021fine} it was demonstrated that an SVM classifier fed with image features from a DenseNet CNN \citep{HuangLMW17} fine-tuned on a large dataset of histopathological images achieves improved classification performance. In \cite{KoohbananiUKKR21} the authors show that a specially designed self-supervision scheme that uses unlabeled histopathological images helps to achieve better performance for the classification of data (from the same domain) with a limited amount of labels. The gain is due to better feature representations generated by a backbone CNN. 
In our work we present a recipe for training a system for the detection of abnormal tiles from Whole Slide Images (WSI) of tissue samples. Particularly, we aim at the detection of anomalies in mouse liver tissue. We use an approach consisting of a CNN-based feature generator, which outputs image representations, followed by a one-class SVM classifier, see Fig. \ref{Fig:1}. We propose the techniques listed below to improve image representations for histopathological data and study their influence on the overall performance of the AD system. \begin{enumerate} \item \textbf{Auxiliary supervised classification task:} Since anomaly examples are not available or unknown we cannot train a classifier to distinguish anomalies from normal samples. However, we gathered a large number of healthy tissue samples from different organs of two animal species, mouse and rat, which were stained with two different procedures, hematoxylin \& eosin (H\&E) staining and Masson's trichrome (MT) staining. We define an auxiliary supervised classification task that learns to recognize tissue samples belonging to one of the combinations of organ, species, and stain mentioned above. The target classes of healthy mouse liver tissue stained with H\&E and Masson's Trichrome are also included. The classifier is built from a CNN pre-trained on ImageNet followed by fully connected neural network (FC-NN) layers (see Fig. \ref{Fig:1} A). We expect that CNN representations trained on the auxiliary task will better suit the final task of AD in images of mouse liver tissue, since the AD and auxiliary tasks share the same image domain. Note that gathering the labels for our auxiliary task requires almost no effort. \item \textbf{Compact feature representations:} While training a supervised classifier on the auxiliary task we enforce compactness of representations in the feature space for the normal class of interest (healthy mouse liver), which is thought to be beneficial for later one-class classification. 
For this purpose we use a center-loss term in the objective function \citep{WenZL016}. \item \textbf{Class mix-up color augmentation:} Tissue samples collected for the auxiliary classification task were prepared at different times under slightly different conditions of staining and image acquisition. This may result in a trained classifier that is sensitive to differences in stain concentrations or acquisition settings instead of differences in inherent tissue structures. To prevent this unwanted effect we randomly transfer color distributions between the tissue classes used for training. We call this procedure class mix-up color augmentation. Additionally, we apply a standard augmentation procedure, where we randomly vary image saturation, hue, brightness, and contrast. \end{enumerate} Once the supervised classifier is trained on the auxiliary task, we use its CNN backbone for the generation of image representations and train a one-class classifier on the target class of healthy mouse liver tissue (see Fig. \ref{Fig:1} B). In our paper we used a one-class SVM \citep{ScholkopfPSSW01}, which was configured in such a way that its negative output scores correspond to anomalous detections outside of the estimated normal data support. \begin{figure*}[!htb] \centering \includegraphics[scale=1]{Scheme_extended} \caption{\label{Fig:1} Anomaly detection approach. A: Learning image representations with an auxiliary supervised classification task on a set of tissue categories. The figure on the right shows a t-SNE plot of image features after the training. Clusters marked with arrows correspond to target classes for the anomaly detection task in our experiments. Each color corresponds to a particular category (a combination of species, organ, and staining). B: Training a one-class classifier on tissue of the category of interest. C: Anomaly detection in tissue of the chosen (target) category. 
Anomaly score $\alpha$ can be thresholded to output a binary decision.} \end{figure*} To evaluate the performance of our AD approach, we collected mouse liver tissue samples exhibiting the features of Non-Alcoholic Fatty Liver Disease (NAFLD). We made this dataset, along with a set of healthy liver tissue, publicly available \citep{dataset}. The published tissue samples were stained with both H\&E and Masson's Trichrome and may be used as a benchmark dataset for anomaly detection in histopathological tissue. We provide a comparison with a set of state-of-the-art methods for anomaly detection \citep{AkcayAB18, roth2022towards,abs-1909-11786,defard2021padim,WangHD021,GudovskiyIK22} that were implemented as part of the Anomalib benchmark \citep{abs-2202-08341web}. In addition, we provide an ablation study to show how different parts of our system influence the AD performance. The comparative evaluation has shown that our approach outperforms all the other tested methods. Moreover, we show that our AD approach behaves similarly to approaches tailored for NAFLD quantification: immunohistochemical staining and a pathologist's assessment using the NAFLD activity score (NAS). Recently, deep learning supervised classification methods were also developed for NAS prediction \citep{heinemann2019deep}. The underlying motivation for this work is the ability to identify toxicological effects of candidate drugs at early pre-clinical development stages. Toxicological findings are a leading cause of high rates of drug attrition at pre-clinical phases, which is a major cost driver in drug development \citep{waring2015analysis}. We therefore test our AD system on a drug with known toxicological effects and show that we were able to detect and quantify tissue anomalies (degeneration, necrosis and vacuolation in liver tissue) caused by increasing doses of the drug. 
To summarize, our paper makes the following major contributions: \begin{itemize} \item We introduce a method for anomaly detection in tissue, which is mainly based on three techniques for learning effective feature representations of histopathological images: an auxiliary supervised classification task, compact feature representations, and class mix-up color augmentation. \item We collected and published a dataset for the evaluation of anomaly detection in histopathological images. The dataset was used to study the importance of each of the proposed techniques and to compare the performance of our approach with established and recent AD methods. \item We show that the anomaly detection approach may support the identification of toxicological effects of drug candidates and thereby potentially reduce expensive late-stage drug attrition. \end{itemize} \section{Related work} The AD methods mentioned in the previous section consist of two stages, feature generation and one-class classification. Such a two-stage design allows the application of successful approaches for training feature representations, e.g.\ self-supervised learning \citep{SohnLYJP21} or transfer learning with abundant data sources not directly related to the task at hand \citep{perera2019learning}. Learning feature representations usually does not involve an optimization of the AD objective. However, \cite{perera2019learning} suggested using a compactness loss to reduce the variance of normal data in the feature space, which may increase the space of possible anomalies that can be detected. Although attempts were made to combine the two stages such that a single training procedure directly optimizes a single AD loss function \citep{RuffGDSVBMK18, OzaP19}, two-stage methods appear to be more powerful for AD \citep{SohnLYJP21}. In our work we adopted several ideas described in \cite{perera2019learning} and applied them to the domain of histopathology with an application in drug discovery. 
Similarly to \cite{perera2019learning}, we tune image representations using an auxiliary dataset and a compactness loss. In our case the dataset is built from healthy tissue of different organs and species. The difference is that \cite{perera2019learning} used the target (one-class) set to optimize the compactness loss and an external (multi-class) ImageNet set to optimize the descriptiveness loss (cross-entropy). In that case, image representations do not directly learn to distinguish the target class from the rest of the (auxiliary) data. In contrast, we use a single dataset of normal histological data (without pathological alterations) that includes the target class (mouse liver). This forces image representations to distinguish the target class from other tissue categories and simultaneously to be compact for the target as well as the auxiliary classes. We also suppose that the gain from tuning image representations is larger in our case, since pre-trained CNNs are re-purposed from ImageNet to histopathological images, whereas in \cite{perera2019learning} a pre-trained CNN was forced to only keep the discriminative ability of features for ImageNet categories while improving their compactness for the target class. Another subtle difference to \cite{perera2019learning} is that we used an objective function with a center-loss term similar to the one proposed in \cite{WenZL016}. One-class classification approaches detect anomalies in a feature (latent) space. An alternative approach is based on image reconstruction methods, such as auto-encoders, where an anomaly is detected in image space by means of measuring a reconstruction error, the deviation of a reconstructed image from the original one. Since these methods learn to generate images based on a dataset of normal images, anomalies are supposed to have high reconstruction errors. 
Recently, with the development of variational auto-encoders \citep{KingmaW13} and generative adversarial networks \citep{RadfordMC15}, image reconstruction based methods became more popular for anomaly detection, see for example \cite{SchleglSWLS19, AkcayAB18}. They, however, usually require large amounts of data and are harder to train and control. For example, these methods might be prone to reconstruct anomalous regions in an image with similar success as normal regions. On the other hand, they are known to lose fine details in reconstructed images, details that may be characteristic of the anomalies of interest. Additionally, reconstruction models tend to learn low-level image statistics \citep{NalisnickMTGL19}, perhaps due to the pixel-based reconstruction loss used. Advances in the development of reconstruction-based methods gradually improve their weak points, see for example \citep{ShvetsovaBFSD21}. Until now, however, the best-performing AD approaches are based on outlier detection in a feature space \citep{roth2022towards}. Though some benchmarks \citep{abs-2202-08341web} support this general statement, a comparative evaluation of AD performance is very dependent on the dataset used \citep{ruff2021unifying} and particularly on the types of provided anomalies. Larger, more diverse, and more challenging AD datasets are required to reliably rank AD methods and thereby direct research in the field. \section{Anomaly detection in histological images} \subsection{AD system overview} We design a patch-based anomaly detection system. First, whole slide images (WSI) are tiled into patches ($256 \times 256$ pix., 0.44 \textmu m/{pix.} in our experiments in Sec. \ref{Sec:Experiments}). Each patch is then processed by the system, generating an anomaly score, see Fig. \ref{Fig:1} C. The AD system consists of two blocks. 
The first, a CNN encoder, encodes an input image patch into a feature vector representation; the second, a one-class classifier, generates an anomaly score $\alpha$ for every feature vector (see Fig. \ref{Fig:1}). All anomaly scores from a WSI can then be integrated into a single WSI anomaly score. To aggregate patch scores into a single WSI-level score we use a simple strategy. We threshold each patch score to decide whether it is anomalous and compute the fraction of anomalous patches in the WSI. The system (namely, the one-class classifier, see Fig. \ref{Fig:1} B) is trained in such a way that the abnormality of a patch is determined with a threshold of zero. Other simple aggregation strategies are possible. For example, patch scores can first be passed through a logistic function with a particular growth coefficient and then summed into a single WSI score. These simple methods do not take into account the spatial correlation between patches, which can carry additional diagnostic information. This topic, however, is beyond the scope of this paper. A reader interested in patch aggregation strategies is referred to \citet{SrinidhiCM21, ilse2018attention, kraus2016classifying}. \subsubsection{CNN encoder: learning image representations} \label{Sec:LearningRepresentations} The critical part of the AD system is the CNN encoder, which should generate feature vectors with high representational power, allowing discrimination between anomalous and normal tissue structures. On the other hand, representation vectors should be short enough to generalize to unseen data. Preferably, to facilitate the operation of a one-class classifier, representations should be invariant to variations in image appearance due to differences in staining procedure and image acquisition. CNNs that were pre-trained on the ImageNet dataset may generate feature vectors that do not have the properties mentioned above. 
We therefore propose a few techniques for training the CNN to adapt image representations to histopathological images. \subsubsection{Auxiliary tissue classification task} \label{Sec:Auxiliary} We define an auxiliary classification task, which is depicted in the top of Fig. \ref{Fig:1} A, with 16 tissue categories, each of which corresponds to a combination of a particular species, organ, and staining protocol. This incorporates mouse and rat species, H\&E and Masson's Trichrome staining protocols, and the \textit{liver, brain, kidney, heart, lung, pancreas, spleen} organs. Rat tissue was available only for liver, so not all combinations of animal species, organ, and staining were used. Note that no special effort is needed to gather the labels (species, organ, staining) for our training dataset of healthy tissue, since the labels come automatically with the images. Instead of defining a multi-class classification task, we could use a multi-task learning approach \citep{ThungW18} and define three separate tasks for the detection of species, organ, and staining type. In that case we would need to define an objective function composed of three weighted terms, with the best weights to be found empirically. To effectively use such a multi-task approach, it would also require gathering images of rat tissue for organs other than liver. We, however, did not experiment with such an approach in this paper and leave this for future work. We train a fully convolutional neural network followed by a fully connected neural network (NN) classifier on the auxiliary task with $256\times 256$ tissue tiles. 1-D feature vectors were obtained with global averaging of the last $8\times 8$ activation map of the fully convolutional network. This allows the potential usage of input images of any size. Instead of global average pooling, an additional convolutional layer could be used that generates a final activation map of $1\times 1$ size. 
However, this would imply an unequal contribution of entries from the last activation map (each of which corresponds to a small patch in the original image). This in turn would contradict the underlying textural structure of histopathological images. In contrast to natural images with objects, all positions in a histological image are semantically equal. We expect that after training on the auxiliary task, the CNN will be able to provide high-quality image representations that can discriminate between histological structures. We also expect that the learned representations are to some degree insensitive to variations in the appearance of histopathological images due to variations in staining and image acquisition settings. Fig. \ref{Fig:2} shows a t-SNE visualization \citep{van2008visualizing} of image representations of the training data from our auxiliary task and test data from the anomaly detection task within a single plot. Note that most of the anomalous data (Liver, Mouse, Masson's Trichrome), marked in dark blue, falls outside of the normal data (marked in bright blue and green). However, the anomalous data is much closer to the normal data than the other categories, except for the Liver-Rat-Masson's Trichrome category. This shows that, as expected, rat and mouse tissue of the same organ are much more similar to each other than tissues of different organs. \begin{figure}[!htb] \centering \includegraphics[scale=1]{TSNE} \caption{\label{Fig:2} t-SNE visualization of feature representations of images after the CNN was trained on the auxiliary task (see Sec. \ref{Sec:Auxiliary}). The first two clusters in the legend correspond to the test data we use in the experiments in Sec. \ref{Sec:Experiments}, while all the others correspond to the training data (healthy tissue) used for the auxiliary task. 
} \end{figure} \subsubsection{Architecture of the neural network} After experimenting with architectures for the CNN encoder, we chose the EfficientNet-B0 deep neural network \citep{TanL19} (pre-trained on ImageNet), where we remove the last ($9^{th}$) stage with its $1\times1$ convolutional layer, such that the number of output channels is $d=320$. After batch normalization, the Sigmoid Linear Unit (SiLU) activation function, and global averaging of the $d$ activation maps of size $8\times 8$, the CNN encoder outputs a 320-dimensional feature vector, which is fed to the FC-NN classifier as shown in the top of Fig. \ref{Fig:1}. In Sec. \ref{Sec:evaluationNAFLD} we will compare different CNN architectures within our AD system, including EfficientNet-B0 with nine stages that outputs a 1280-dimensional feature vector. The NN classifier consists of two fully connected linear layers with a ReLU activation function in between and a softmax activation function at the output. The number of hidden neurons was set to 64, while the number of output neurons equals 16, which is the number of tissue categories to be classified. Initial weights of the NN classifier were randomly sampled from a zero-mean normal distribution $N(0, \sigma^2)$ with standard deviation $\sigma=10^{-2}$, while biases were initialized with zeros according to \cite{SimonyanZ14a}. \subsubsection{Objective function} \label{Sec:objective_function} Training the CNN on the auxiliary task with a cross-entropy loss objective function forces the learning of image representations that are discriminative for the histological structures inherent to the species, organs, and stainings present in the used dataset. We cannot, however, train them to be discriminative for anomalies, since those are not available or even not yet known. One of the approaches for AD is to minimize the volume of the hypersphere in the feature space where the majority of normal data lives \citep{TaxD04, RuffGDSVBMK18}. 
In a conceptually similar manner, a compactness loss was used in \cite{perera2019learning} to keep a low intra-class variance in the feature space for normal data. We also enforce image representations to be compact while training them on the auxiliary task (Fig. \ref{Fig:1} Top). The compact representations should increase the sensitivity of the system to anomalies. For this purpose we use a center-loss term $\mathcal{L}_{CL}$ \citep{WenZL016} in the objective function, along with the multi-class cross-entropy term $\mathcal{L}_{CE}$ \begin{equation} \label{Eq:1} \mathcal{L} = \mathcal{L}_{CE} + \lambda\mathcal{L}_{CL}, \end{equation} where $\lambda$ controls the influence of the center-loss. We would like to have the class of healthy tissue as compact as possible; however, too large a value of $\lambda$ may decrease the discriminative properties of the learned representations. In our experiments we set $\lambda=1$. The multi-class cross entropy is computed as \begin{equation} \mathcal{L}_{CE} = - \sum_{i=1}^{m} \log P(x_i \in C_{y_i}), \end{equation} where $C_{y_i}$ denotes the $y_i$-th class, $y_i \in [1,n]$ are the true class labels for image representations (feature vectors) $x_i$ within a mini-batch of size $m$. In our case the number of classes is $n=16$. The probabilities $P(x_i \in C_{y_i})$ are computed via the softmax function \begin{equation} P(x_i \in C_{y_i}) = \frac{e^{ z_{y_i}(x_i)}}{\sum_{j=1}^{n} e^{z_j(x_i)}} \end{equation} with $z_{j}$ being the $j$-th entry of the $n$-dimensional output of the last fully connected layer of the NN classifier. 
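As an illustration, a minimal PyTorch sketch of this objective (our own illustrative code with our naming, not the authors' released implementation) combines the FC-NN head described earlier with the cross-entropy term and a center-loss term following Eq. \ref{Eq:4}, here with all classes in $K$, $\lambda=1$, and the batch mean used instead of the sum of Eq. 2 (a common convention):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FCHead(nn.Module):
    """Two-layer FC-NN head: 320-d pooled CNN features -> 64 hidden -> 16 classes.
    Weights sampled from N(0, sigma^2) with sigma = 1e-2, biases zero, as in the text."""
    def __init__(self, d=320, hidden=64, n_classes=16):
        super().__init__()
        self.fc1 = nn.Linear(d, hidden)
        self.fc2 = nn.Linear(hidden, n_classes)
        for fc in (self.fc1, self.fc2):
            nn.init.normal_(fc.weight, std=1e-2)
            nn.init.zeros_(fc.bias)

    def forward(self, x):                      # x: (m, d) feature vectors
        return self.fc2(F.relu(self.fc1(x)))   # logits z_j; softmax is applied inside the CE loss

def center_loss(x, y, centers, K):
    """Eq. (4): per-class mean of half squared distances to the class centers a_k."""
    loss = x.new_zeros(())
    for k in K:
        mask = y == k
        if mask.any():
            loss = loss + 0.5 * ((x[mask] - centers[k]) ** 2).sum(dim=1).mean()
    return loss

m, d, n = 32, 320, 16
feats = torch.randn(m, d)                      # stand-in for pooled CNN features
labels = torch.randint(0, n, (m,))
centers = torch.zeros(n, d)                    # class centers a_k, updated via Eq. (7)

head = FCHead(d, 64, n)
lam = 1.0                                      # lambda in Eq. (1)
loss = F.cross_entropy(head(feats), labels) + lam * center_loss(feats, labels, centers, range(n))
```

In training, `loss` would be backpropagated as in Eq. \ref{Eq:6}, with the centers updated by the moving average of Eq. \ref{Eq:7} at each forward pass.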
We define the center-loss as follows \begin{equation} \label{Eq:4} \mathcal{L}_{CL} = \sum_{k \in K \subseteq[1, n]} \frac{\frac{1}{2} \sum_{i=1}^{m}\|x_i-a_k\|^2 \delta(y_i-k)}{\sum_{i=1}^{m}\delta(y_i - k)}, \end{equation} where the delta function $\delta$ equals $0$ everywhere except for an argument of zero, where it equals $1$, and $a_k$ is the average feature vector for class $C_k$ within a mini-batch \begin{equation} \label{Eq:5} a_k = \frac{\sum_{i=1}^m x_i \delta(y_i-k)}{\sum_{i=1}^{m} \delta(y_i-k)}. \end{equation} With the center-loss in Eq. \ref{Eq:4} we can enforce the compactness of only a particular class or of all the classes, depending on the subset $K$ over which the summation runs. We optimize Eq. \ref{Eq:1} by means of Stochastic Gradient Descent (SGD) using backpropagation that updates the weights $W$ of the CNN and NN classifier at iteration $t$ (backward pass) according to \begin{equation} \label{Eq:6} W^{t+1} = W^t -\mu\frac{\partial \mathcal{L}^t}{\partial W^t}, \end{equation} where $\mu$ is the learning rate. We do not directly use Eq. \ref{Eq:5}, to avoid perturbations caused by mislabeled samples, as was suggested in \cite{WenZL016}. Instead, at each forward pass of iteration $t$ we update the class centers according to \begin{equation} \label{Eq:7} a_k^{t+1} = (1-\beta)a_k^t + \beta \hat{a}_k^{t+1}, \end{equation} where $\hat{a}_k^{t+1}$ are the new class centers computed with Eq. \ref{Eq:5} after all weights were updated with Eq. \ref{Eq:6}\footnote{Eq. 4 with the $1$ in the denominator neglected, together with step 6 of Algorithm 1 in \cite{WenZL016}, boils down to Eq. \ref{Eq:7}.}. The $a_k^0$ are initialized with Eq. \ref{Eq:5} computed with $x_i(W)$ for the initial weights $W$. In our experiments in Sec. \ref{Sec:Experiments} we used $\beta=0.5$, even though we have seen only negligible differences in results when using $\beta=1$. \subsection{Class mix-up augmentation} \label{Sec:mixup_augmentation} Histological images carry both textural and color information. 
While textural information describes the tissue structures that characterize histopathological features, color information is influenced by the acquisition environment and the concentration of staining reagents, which depend on the particular laboratory where the histological slides were prepared and the digital images were acquired. Moreover, color distribution differences can be striking even between different batches of data prepared in the same laboratory at different times. There is therefore a danger that the classifier learns color patterns common to a particular batch of data instead of diagnostically relevant stained tissue structures. Normalization and augmentation approaches were developed to overcome this problem \citep{TellezLBBBCL19}. The former standardizes the distribution of pixel values at inference time, while the latter increases the variability of the pixel distribution at training time. For example, the hue and/or saturation of pixel values can be randomly shifted. Usually, aggressive augmentation is required to improve the robustness of the trained classifier to variability between batches of image data \citep{StackeEUL21}. However, since such augmentation is not based on real data, it is hard to tune it to gain robustness and simultaneously keep a high performance of the classifier \citep{TellezLBBBCL19}. Here we propose a different type of simple augmentation that does not need tuning. Since our auxiliary task is a multi-class classification, we can transfer color patterns between different classes (Fig. \ref{Fig:1}). In this way we increase the emphasis of the trained classifier on the morphology of tissue structures and decrease the importance of color, which is dependent on the laboratory settings at the time of tissue preparation and image acquisition. We do this with the histogram matching technique \citep{GonzalezW18} applied to each of the three color channels separately. 
We do not match histograms between images, but rather find a mapping that matches histograms between sets of images belonging to classes in the training data. Prior to training the multi-class classifier, we build the histogram $H_k$ for each class $k \in [1, n]$ in the training dataset with $n=16$ classes. For pairs of source $k_1$ and target $k_2$ classes we then find a mapping $z=M_{k_1 \rightarrow k_2}(x)$ that maps pixel intensity values $x$ to $z$, such that the histogram of mapped pixel intensities of class $k_1$ matches the histogram of class $k_2$. Such mappings can be found with the use of Cumulative Distribution Functions (CDF), which in the discrete case are computed as $CDF(x)=\sum_{y \leq x} H(y)$. Since we want to equalize the CDFs of the source and target classes \begin{equation} \label{Eq:8} CDF_{k1}(x) = CDF_{k2}(z), \end{equation} the desired mapping becomes \begin{equation} \label{Eq:9} z = CDF_{k2}^{-1}(CDF_{k1}(x)) = M_{k_1 \rightarrow k_2}(x). \end{equation} In practice, since we have only a finite discrete set of intensity values, we build a look-up table that, according to Eq. \ref{Eq:8}, gives the closest $z$ for every $x$ from the source class. We use histograms and corresponding CDFs discretized to 256 values. At training time, given an image that belongs to class $k_1$, we randomly choose a destination class $k_2$ and transform the pixel intensities of the image with the appropriate mapping $M_{k_1 \rightarrow k_2}$. The class $k_2$ is uniformly sampled from $[1, n]$, which also allows a mapping from a particular class to itself ($k_2=k_1$). It is important to note that we avoid mapping between classes of different staining (H\&E and Masson's Trichrome), since our classifier is trained to be sensitive to the staining type. Fig. \ref{Fig:3} shows examples of the color pattern transformation for images from different classes. 
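One way to realize the look-up table of Eqs. \ref{Eq:8}-\ref{Eq:9} for a single color channel is sketched below (our illustrative code under the assumption of 8-bit intensities; function names are ours):

```python
import numpy as np

rng = np.random.default_rng(0)

def class_cdf(pixels, n_bins=256):
    """Discretized CDF (256 bins) of pixel intensities pooled over one class, one channel."""
    hist = np.bincount(np.asarray(pixels).ravel(), minlength=n_bins).astype(np.float64)
    return np.cumsum(hist) / hist.sum()

def build_lut(cdf_src, cdf_dst):
    """256-entry look-up table realizing z = CDF_dst^{-1}(CDF_src(x)) of Eq. (9),
    taking the closest z per Eq. (8)."""
    return np.searchsorted(cdf_dst, cdf_src).clip(0, 255).astype(np.uint8)

# Toy usage: transfer the intensity distribution of a brighter "class" onto a darker one.
src_class = rng.integers(0, 128, size=(4, 64, 64), dtype=np.uint8)   # images of class k1
dst_class = rng.integers(64, 256, size=(4, 64, 64), dtype=np.uint8)  # images of class k2
lut = build_lut(class_cdf(src_class), class_cdf(dst_class))
augmented = lut[src_class[0]]   # class mix-up augmentation of one image channel
```

In the described setup the per-class CDFs would be pre-computed once per channel before training, so that each augmentation reduces to a table look-up.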
\begin{figure}[!htb] \centering \includegraphics[scale=.29]{mixup_augmentation} \caption{\label{Fig:3} Examples of Masson's Trichrome stained tissue images transformed to match color patterns of different tissue classes.} \end{figure} \subsection{One-class classification} For one-class classification we have used the standard one-class SVM \citep{ScholkopfPSSW01} with the Radial Basis Function (RBF) kernel and margin error $\nu=0.1$, which bounds the fraction of outliers that lie on the wrong side of the SVM margin. Feature vectors that do not lie within the support region of normal healthy data generate anomaly scores with negative values. Once the classifier is trained on image representations of normal tissue of a particular class (e.g.\ Masson's Trichrome stained mouse liver), it serves as a detector of abnormal patches for that class by means of simple thresholding of the output scores. In Sec. \ref{Sec:Experiments} we will evaluate our AD system on finding pathologies in mouse liver tissue. \section{Experiments} \label{Sec:Experiments} In this section we conduct a series of experiments to evaluate the performance of our approach (\ref{Sec:evaluationNAFLD}), show the importance of learning image representations (\ref{Sec:performance_pretrained}), compare our method to state-of-the-art anomaly detection approaches (\ref{Sec:SOTAcomparison}), and show the contribution of the introduced techniques to the resulting performance (\ref{Sec:ablation_study}). In Sec. \ref{Sec:comparison_with_NAS} we study the behavior of our AD method when assessing Non-Alcoholic Fatty Liver Disease (NAFLD) as an example of a tissue anomaly and compare it with methods specifically tailored to NAFLD. We also provide a case study that outlines the ability of our approach to detect abnormal alterations in tissue caused by adverse effects of candidate drugs. 
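Before turning to the data, the one-class classification stage described above, together with the WSI-level aggregation by zero-thresholding, can be sketched with scikit-learn (a sketch only; the 320-dimensional feature vectors here are synthetic stand-ins for CNN encoder outputs, not real tissue features):

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
d = 320                                          # dimensionality of the CNN features
normal_feats = rng.normal(0.0, 1.0, (500, d))    # stand-in for healthy-tissue representations

# One-class SVM with RBF kernel; nu = 0.1 bounds the fraction of margin errors
ocsvm = OneClassSVM(kernel="rbf", nu=0.1).fit(normal_feats)

# Patch-level scores for one simulated WSI: negative = outside the normal support
patches = np.vstack([rng.normal(0.0, 1.0, (45, d)),   # normal-looking patches
                     rng.normal(6.0, 1.0, (5, d))])   # far outliers
scores = ocsvm.decision_function(patches)

# WSI-level score: fraction of patches thresholded as anomalous at zero
wsi_score = np.mean(scores < 0.0)
```

Note that scikit-learn's `decision_function` already follows the sign convention used in the text: positive inside the estimated support, negative outside.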
We train and evaluate our AD method using $256 \times 256$ pixel tiles extracted from Whole Slide Images (WSI). WSIs were acquired with a Zeiss AxioScan scanner (Carl Zeiss, Jena, Germany) with a $20 \times$ objective at a resolution of 0.221 \textmu m/{pixel} from mouse and rat tissue samples\footnote{Only historic data was used. No new animal experiments were done. For historic studies, animals were maintained in accordance with German national guidelines, legal regulations and the guidelines of the Association for Accreditation of Laboratory Animal Care. Experiments were performed after permission from the Regierungspräsidium Tübingen, Germany.} stained with either H\&E or Masson's Trichrome stains according to established protocols. The WSIs were then subsampled by a factor of 1:2, which resulted in a resolution of 0.442 \textmu m/{pixel}. To test and compare the performance of AD methods we used a dataset with samples of NAFLD (see details in Sec. \ref{Sec:NAFLD_dataset}), which serves as a particular example of anomalous tissue developments. We implemented all the experiments in the PyTorch deep learning framework \citep{PaszkeGMLBCKLGA19}. Trained models and code will be available online\footnote{\url{https://github.com/Boehringer-Ingelheim/anomaly-detection-in-histology}}. \subsection{Performance measures} To evaluate the detection performance of our system we prefer the balanced accuracy measure. \textit{Balanced accuracy} is the average of the detection sensitivity for positives (true positive rate or sensitivity) and for negatives (true negative rate or specificity). This measure is not sensitive to the proportion of positives and negatives in the test dataset.
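As a concrete illustration of this insensitivity (hypothetical labels, not data from our experiments), the measure can be computed as:

```python
import numpy as np

def balanced_accuracy(y_true, y_pred):
    """Average of sensitivity (recall on positives) and specificity (recall on negatives)."""
    pos, neg = y_true == 1, y_true == 0
    sensitivity = (y_pred[pos] == 1).mean()
    specificity = (y_pred[neg] == 0).mean()
    return (sensitivity + specificity) / 2

# a detector catching 90% of anomalies and 80% of normals scores 0.85
# regardless of how few anomalies the test set contains
y_true = np.array([1] * 10 + [0] * 1000)
y_pred = np.array([1] * 9 + [0] * 1 + [0] * 800 + [1] * 200)
print(round(balanced_accuracy(y_true, y_pred), 4))  # 0.85
```

With the same per-class recalls, plain accuracy would instead be dominated by the 1000 negatives.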
It is therefore a preferable measure when data might be imbalanced, as in the case of anomaly detection with a small number of anomalous and a large number of normal samples\footnote{For evaluation of our method in Sec. \ref{Sec:evaluationNAFLD} we have collected a comparable number of anomalous and normal samples.}. We prefer balanced accuracy over other commonly used measures, such as AUROC and the $F_1$ score \citep{SantafeIL15}. One of the disadvantages of AUROC is that it summarizes the detector's performance over all score thresholds, including inappropriate ones, while the disadvantages of $F_1$ are that it ignores the correct classification of negatives and is sensitive to the imbalance between negative and positive samples in the test set \citep{BrabecKFM20}. However, we will use AUROC and the $F_1$ score to compare our approach with SOTA AD methods in Sec. \ref{Sec:SOTAcomparison}. For this purpose we use the Anomalib benchmark \citep{AkcayAB18}, where results are reported with AUROC and $F_1$ measures. \subsection{Training image representations} \label{Sec:dataset_auxiliary} To train image representations using an auxiliary task as described in Sec. \ref{Sec:Auxiliary} (see top of Fig. \ref{Fig:1}) we collected a large dataset of histological $256 \times 256$, 0.442 \textmu m/{pix.} tiles extracted from Whole Slide Images (WSI) of healthy animal tissue. Seven organs (liver, brain, kidney, heart, lung, pancreas, spleen), two animal species (rat and mouse), and two types of staining (Masson's Trichrome and H\&E) comprise the collected dataset, giving 16 categories overall. Note that not all combinations of species, organ, and staining were available to us (rat tissue was available only for liver). Around 7000 tiles from each of the 16 categories were sampled from the dataset. For training the FC-NN and CNN models we use 6 different random seeds that randomize the choice of the 7000 sampled tiles and the order in which they are fed to the network.
At test time we provide mean and standard error values over the 6 trained models, which allows more reliable performance comparisons. Note that the tiles extracted from WSIs were not curated by a human and may occasionally contain various artifacts or wrong tissue. We made this dataset of healthy tissue publicly available \citep{dataset} so that training representations of histological images for our AD approach can be reproduced. We train FC-NN and CNN models over 15 epochs using stochastic gradient descent optimization with learning rate 1e-3, momentum 0.9, and batches of size 64. We choose the best model obtained during training based on a validation set (around 700 images per class) withheld from the dataset. The CNN encoder was initialized with weights obtained from pre-training on ImageNet, while the weights of the fully connected layers of the classifier were randomly sampled from a zero-mean normal distribution $N(0, \sigma^2)$ with standard deviation $\sigma=10^{-2}$ and the biases were initialized with zeros. We used the categorical cross-entropy objective function with a center-loss term (see Sec. \ref{Sec:objective_function}) that forces compactness of a single class, namely the tissue class of interest for the AD system (either H\&E or Masson's Trichrome stained mouse liver tissue). This implies that the subset $K$ in Eq. \ref{Eq:4} reduces to the single class of interest. All images were normalized to have zero mean over the training dataset for each color channel. No normalization of the standard deviation of image values was done. After class-mixup augmentation (see Sec. \ref{Sec:mixup_augmentation} for details), brightness and contrast augmentations were additionally applied\footnote{\label{FN:ColorJitter}We have used the Torchvision extension of PyTorch for this purpose, namely \href{https://pytorch.org/vision/stable/generated/torchvision.transforms.ColorJitter.html}{ColorJitter.}}.
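The combined objective just described (cross-entropy plus a center-loss term restricted to the single class of interest) can be sketched in PyTorch; the weighting factor `lam`, the function name, and the tensor shapes are our assumptions for illustration, not the exact form of Eq. \ref{Eq:4}:

```python
import torch
import torch.nn.functional as F

def objective(features, logits, labels, center, class_of_interest, lam=0.01):
    """Categorical cross-entropy plus a center-loss term that pulls the
    feature vectors of the single class of interest towards a learnable center."""
    ce = F.cross_entropy(logits, labels)
    mask = labels == class_of_interest
    center_loss = (
        ((features[mask] - center) ** 2).sum(dim=1).mean()
        if mask.any() else features.new_zeros(())
    )
    return ce + lam * center_loss

# toy batch: 320-d feature vectors, 16 auxiliary classes
feats = torch.randn(8, 320, requires_grad=True)
logits = torch.randn(8, 16, requires_grad=True)
labels = torch.randint(0, 16, (8,))
center = torch.zeros(320, requires_grad=True)  # learnable center of the class of interest
loss = objective(feats, logits, labels, center, class_of_interest=0)
loss.backward()
```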
Brightness and contrast were randomly altered with a factor uniformly chosen in the range [0.8, 1.2]. \subsection{Training one-class classifier} For one-class classification we use the standard one-class SVM \citep{ScholkopfPSSW01} with the Radial Basis Function (RBF) kernel and a margin error $\nu=0.1$. During training of the one-class classifier (see middle of Fig. \ref{Fig:1}) we perform brightness, contrast, saturation and mild hue augmentations\footnotemark[5] with factors uniformly sampled from [0.8, 1.2], [0.8, 1.2], [0.4, 1.6], and [-0.05, 0.05], respectively. All images were normalized to have zero mean over the training dataset for each color channel. No normalization of the standard deviation of image values was done. We carried out experiments and report results for two types of staining, Masson's Trichrome and H\&E. For each type of staining the one-class classifier was trained on the corresponding dataset of Masson's Trichrome or H\&E stained mouse liver tissue, which was also used for training image representations (see Sec. \ref{Sec:dataset_auxiliary}). \subsection{The dataset for performance evaluation} \label{Sec:NAFLD_dataset} To evaluate the performance of our approach for the detection of abnormal tissues, we collected $256 \times 256$, 0.442 \textmu m/{pix.} tiles taken from histological WSIs of mouse liver tissue stained with Masson's Trichrome and H\&E. We created three datasets, which we made publicly available \citep{dataset}. The first is the training dataset with samples of healthy mouse liver tissue, which is used to train the one-class classifier. It consists of approximately 7000 tiles for each type of staining. The tiles were extracted from 204 WSIs for Masson's Trichrome staining and 347 WSIs for H\&E staining. The second is a test dataset of healthy mouse liver tissue samples that consists of around 2200 tiles with H\&E stained tissue and about 2400 tiles with Masson's Trichrome stained tissue.
The tiles were extracted from 17 WSIs for each of the two stainings. The third is a test dataset of anomalous tissue from mice with the NAFLD condition. The tissue can have a few different anomalous alterations, namely inflammation, ballooning, steatosis, and fibrosis. It contains about 2200 tiles with H\&E stained tissue and about 2400 tiles with Masson's Trichrome stained tissue. The tiles were extracted from 58 WSIs for Masson's Trichrome staining and 57 WSIs for H\&E staining. The majority of the slides with tissue stained with one method (Masson's Trichrome or H\&E) have a matching slide with a close tissue cut (3-4 \textmu m distance) stained with the other method. The labels (anomalous NAFLD or healthy) for all tiles were carefully verified by an experienced pathologist. The test datasets share no WSIs with either the one-class classifier training dataset described above or the dataset for training image representations on the auxiliary task (see Sec. \ref{Sec:dataset_auxiliary}). \subsection{Performance evaluation based on NAFLD anomalies} \renewcommand{\arraystretch}{1.3} \label{Sec:evaluationNAFLD} We evaluate the performance of our AD system (see Fig. \ref{Fig:1}C) using encoders based on a few different backbone CNN architectures. Here we report the performance using balanced accuracy and provide the mean over the results obtained from six CNN encoders trained with six seeds (0, 100, 200, 300, 400, 500). We also provide the standard error of the estimated means. The results are summarized in Table \ref{Tab:1} for both Masson's Trichrome and H\&E staining. The average of these performances is reported in the last row. The DenseNet-121, EfficientNet-B0, and EfficientNet-B2 backbone CNNs are reported in two versions, the full CNN and a truncated CNN with a smaller output feature vector, which may improve generalization of the trained network to unseen data and reduce overfitting to the training dataset.
Additionally, it makes training faster. The truncated versions of EfficientNet-B0 and EfficientNet-B2 are obtained by removing the last ($9^{th}$) stage with its $1\times1$ convolutional layer; the truncated version of DenseNet-121 is obtained by removing the fourth (last) dense block, such that the number of output channels is reduced to the number shown in the corresponding entry of the row referred to as Feature Vector (FV) size. From Table \ref{Tab:1} one can see that the best (average) performance of the AD system, $95.86\%$, was obtained with the truncated EfficientNet-B0 with 320 output features. The truncated versions of all three aforementioned networks performed better than their full versions. This points to possible overfitting to the auxiliary task, occurring mostly in the learned weights of the last layers of the CNN encoder, which reduces the performance of the AD system. Note that Table \ref{Tab:1} does not report the performance of the supervised classifier on the auxiliary task (Fig. \ref{Fig:1}A), which typically reaches accuracy above $97\%$, but the performance of the AD system (Fig. \ref{Fig:1}C). In addition to the CNN architectures shown in Tab. \ref{Tab:1} we experimented with the visual transformer ViT-B-32 \citep{DosovitskiyB0WZ21}, which outputs 768-dimensional feature vectors\footnote{In contrast to the tested CNN architectures with $256 \times 256$ input images, the visual transformer takes $224 \times 224$ pixel input images.}. However, it performed worse than the majority of the CNN architectures, giving an average performance of $93.44 \pm 0.44$. In our further experiments, beginning from Sec. \ref{Sec:SOTAcomparison}, we use our best performing architecture, EfficientNet-B0-320.
\begin{table*}[!htb] \caption{\label{Tab:1}Performance of the proposed AD method for different CNN backbones, measured with Balanced Accuracy in $\%$} \scriptsize \centering \begin{tabularx}{\textwidth}{XXXXXXXXXXXX} \hline & VGG-11 & VGG-16 & VGG-19 & ResNet-18 & DenseNet-121 & DenseNet-121 & EfficientNet-B0 & EfficientNet-B0(BIHN$^@$) & EfficientNet-B2 & EfficientNet-B2 & ConvNeXt-T \\ \hline FV$^+$ size & $512$ & $512$ & $512$ & $512$ & $512$ & $1024$ & $1280$ & $320$ & $1480$ & $352$& $768$\\ Learned param. & 9,254,352 & 14,748,560 & 20,058,256 & 11,723,384 & 4,828,624 & 7,020,496 & 4,090,572 & 3,617,612 & 7,792,210 & 7,226,898 & 27,868,848 \\ \hline \hline MT$^*$ & \mbox{$94.60 \pm 0.24$} & \mbox{$92.08 \pm 0.76$} & \mbox{$90.03 \pm 0.95$} & \mbox{$93.89 \pm 0.80$} & \mbox{$96.12 \pm 0.41$} & \mbox{$96.82 \pm 0.22$} & \mbox{$97.17 \pm 0.26$} & \mbox{$\textbf{97.51} \pm 0.12$} & \mbox{$96.36 \pm 0.25$} & \mbox{$95.82 \pm 0.20$} & \mbox{$96.52 \pm 0.19$}\\ \hline H\&E$^\#$ & \mbox{$93.59 \pm 0.54$} & \mbox{$92.67 \pm 1.46$} & \mbox{$91.83 \pm 0.93$} & \mbox{$\textbf{95.35} \pm 0.29$} & \mbox{$94.37 \pm 0.21$} & \mbox{$93.96 \pm 0.39$} & \mbox{$93.67 \pm 0.26$} & \mbox{$94.20 \pm 0.12$} & \mbox{$93.45 \pm 0.12$} & \mbox{$94.95 \pm 0.23$} & \mbox{$93.81 \pm 1.05$}\\ \hline Average & \mbox{$94.10 \pm 0.39$} & \mbox{$92.37 \pm 1.12$} & \mbox{$90.93 \pm 0.94$} & \mbox{$94.62 \pm 0.54$} & \mbox{$95.25 \pm 0.31$} & \mbox{$95.37 \pm 0.30$} & \mbox{$95.42 \pm 0.26$} & \mbox{$\textbf{95.86} \pm 0.12$} & \mbox{$94.91 \pm 0.19$} & \mbox{$95.39 \pm 0.22$} & \mbox{$95.17 \pm 0.62$}\\ \hline \multicolumn{12}{@{}l}{$^*$MT -- Masson's Trichrome staining \qquad $^{\#}$ H\&E -- Hematoxylin and Eosin staining \quad $^+$ FV -- Feature Vector \quad $^@$ BIHN -- Boehringer Ingelheim Histological Network} \end{tabularx} \end{table*} \subsubsection{Advantage of learning representations for histological images} \label{Sec:performance_pretrained} Here we provide a comparison of our AD
system, which is composed of a CNN encoder and a one-class classifier, with a system of similar architecture but with the CNN encoder trained on ImageNet without further adaptation to histopathological data using our auxiliary task. Table \ref{Tab:2} summarizes the AD performance for a set of CNN architectures (the same set as in Sec. \ref{Sec:evaluationNAFLD}) trained on the ImageNet dataset. Note that in this table we report balanced accuracy with no standard error, unlike in Sec. \ref{Sec:evaluationNAFLD}, where we also reported the standard error of the estimated mean over a few CNN encoders trained on random subsets of histological images. Comparing Tables \ref{Tab:1} and \ref{Tab:2} one can see a substantial increase in performance when the CNN encoders were trained on histological data. For example, for the best performing architecture, EfficientNet-B0-320, the balanced accuracy increases by $26\%$. The results emphasize the importance of tuning neural networks to a particular target domain. It was recently shown that the transfer learning concept for AD \citep{BergmannFSS20, RippelMM20}, where ImageNet-trained neural networks encode images as feature vector representations, can achieve state-of-the-art results. However, in many cases where the target domain diverges from natural images, as in our case of histopathological images, tuning neural networks to the target domain is critical.
\begin{table*}[!htb] \caption{\label{Tab:2}One-class SVM AD performance when ImageNet trained CNNs were used as an image encoder, measured with Balanced Accuracy in $\%$} \scriptsize \centering \begin{tabularx}{\textwidth}{XXXXXXXXXXXX} \hline & VGG-11 & VGG-16 & VGG-19 & ResNet-18 & DenseNet-121 & DenseNet-121 & EfficientNet-B0 & EfficientNet-B0 & EfficientNet-B2 & EfficientNet-B2 & ConvNeXt-T \\ \hline FV$^+$ size & $512$ & $512$ & $512$ & $512$ & $512$ & $1024$ & $1280$ & $320$ & $1480$ & $352$& $768$\\ \hline \hline MT$^*$ & $71.64$ & $68.99$ & $69.46$ & $70.68$ & $71.64$ & $76.10$ & $73.99$ & $\textbf{77.21}$ & $73.65$ & $73.75$ & $64.89$\\ \hline H\&E$^\#$ & $73.91$ & $72.91$ & $73.96$ & $72.02$ & $73.42$ & $\textbf{75.76}$ & $72.17$ & $74.90$ & $72.28$ & $71.78$ & $65.51$\\ \hline Average & $72.77$ & $70.95$ & $71.71$ & $71.35$ & $72.53$ & $75.93$ & $73.08$ & $\textbf{76.05}$ & $72.97$ & $72.76$ & $65.20$\\ \hline \multicolumn{12}{@{}l}{$^*$MT -- Masson's Trichrome staining \qquad $^{\#}$ H\&E -- Hematoxylin and Eosin staining \quad $^+$ FV -- Feature Vector} \end{tabularx} \end{table*} \subsubsection{Comparison with AD methods} \label{Sec:SOTAcomparison} We compare the developed AD method with recent anomaly detection methods whose implementations are available from the Anomalib \citep{abs-2202-08341web} benchmark. We also compare the developed AD method with the classical one-class SVM \citep{ScholkopfPSSW01} that uses feature representations obtained with a CNN pre-trained on ImageNet \citep{DengDSLL009}. The results are summarized in Tab. \ref{Tab:3}. The table shows the detection performance of the approaches measured with the Area Under the ROC curve (AUROC) and the $F_1$ score\footnote{The optimal $F_1$ score is calculated for the methods from Anomalib, while for our method and for the one-class SVM with an ImageNet pre-trained CNN, the standard $F_1$ score is calculated on the thresholded output of the one-class classifier.} on the NAFLD dataset (see Sec.
\ref{Sec:NAFLD_dataset}). The table summarizes the experiments with H\&E and Masson's Trichrome stained tissues and shows the average performance over the experiments with the two stains. Columns 1-6 in Tab. \ref{Tab:3} show the performance of anomaly detection algorithms from the Anomalib benchmark. For the methods implemented in Anomalib we used the hyperparameters reported in the corresponding papers (usually set as default parameters in the configuration files of the Anomalib benchmark). Below we explicitly mention only a few important configuration parameters when they differ from the optimal configurations reported in the corresponding papers or when several configuration options were reported. In the PatchCore algorithm \citep{roth2022towards} we used a $10\%$ subsampling rate and features from the third block of the WideResNet50 backbone. We were not able to use features from three blocks, which gave slightly better performance in the original paper, because PatchCore is memory intensive and the Anomalib implementation encounters a run-time issue in this case. In the PaDiM algorithm \citep{defard2021padim} we used features from the first three blocks of the ResNet18 backbone. For DFM \citep{abs-1909-11786} we used features from the third block of the ResNet50 backbone, and feature dimensionality was reduced keeping $99.5\%$ of the original variance. For CFLOW \citep{GudovskiyIK22} we used features from the second and third blocks of WideResNet50. We did not use features from three blocks, because training becomes extremely slow. For STFPM \citep{WangHD021} we used features from the first to third blocks\footnote{The authors of the corresponding paper count the first plain convolutional layer as the first block; therefore these are the second to fourth blocks as reported in the corresponding paper.}. We trained image representations on an auxiliary task as proposed in our paper using the EfficientNet-B0-320 CNN architecture, since it gave us the best results (see Sec.
\ref{Sec:performance_pretrained}). We call our trained model the Boehringer Ingelheim Histological Network (BIHN). The results for AD with BIHN models are reported in the $8^{th}$ (last) column. The same architecture, EfficientNet-B0-320, trained on ImageNet and used together with the one-class SVM gives the results reported in the $7^{th}$ column\footnote{The performance of the one-class SVM with EfficientNet-B0-320 features is also reported in the $8^{th}$ column of Tab. \ref{Tab:2}, but using the balanced accuracy measure.}. Note that in this column we do not report standard error values, as in all other columns, because we used a single pre-trained network (no different seeds were used to run different training trials; see Sec. \ref{Sec:evaluationNAFLD} for details). As Tab. \ref{Tab:3} shows, our approach outperforms all the other tested approaches for detection of NAFLD by a large margin. Our BIHN model followed by a standard one-class SVM (see Fig. \ref{Fig:1}) achieves $5.7\%$ and $10.6\%$ higher performance than the closest method, DFM \citep{abs-1909-11786}, measured with the AUROC and $F_1$ measures, respectively. One can also see that the same architecture, an EfficientNet-B0-320 CNN encoder followed by a one-class SVM, achieves substantially lower performance when trained on ImageNet, which emphasizes the critical value of learning image representations. \begin{table*}[!htb] \caption{\label{Tab:3}Comparison of anomaly detection methods} \scriptsize \centering \begin{tabularx}{\textwidth}{XXXXXXXXXX} \hline & & GANomaly & CFLOW & PaDiM & STFPM & PatchCore & DFM & OC-SVM, Eff.Net feat.
& BIHN \\ & & {\scriptsize \citep{AkcayAB18}} & {\scriptsize \citep{GudovskiyIK22}} & {\scriptsize \citep{defard2021padim}} & {\scriptsize\citep{WangHD021}} & {\scriptsize \citep{roth2022towards}}& {\scriptsize \citep{abs-1909-11786}} & {\scriptsize\citep{ScholkopfPSSW01}} & {\scriptsize(ours)}\\ \hline \hline MT$^*$ & AUROC $\%$ & $84.26 \pm 2.63$ & $75.36 \pm 1.24$ & $79.63 \pm 0.45$ & $80.99 \pm 0.70$ & $85.46 \pm 0.02$ & $92.28 \pm 0.18$ & $85.53$ & $\textbf{99.03} \pm 0.12$ \\ & F$_1$ score $\%$ & $77.53 \pm 3.27$ & $71.74 \pm 0.53$ & $73.81 \pm 0.34$ & $77.92 \pm 0.38$ & $79.50 \pm 0.03$ & $85.02 \pm 0.17$ & $72.13$ & $\textbf{97.51} \pm 0.12$ \\ \hline H\&E$^\#$ & AUROC $\%$ & $59.59 \pm 2.03$ & $74.29 \pm3.51 $ & $86.33 \pm 0.38$ & $83.24 \pm 0.88$ & $86.93 \pm 0.03$ & $92.69 \pm 0.0$ & $78.63$ & $\textbf{97.33} \pm 0.14$ \\ & F$_1$ score $\%$ & $66.45 \pm 0.0$ & $71.80 \pm 1.1$ & $78.30 \pm 0.30$ & $80.03 \pm 0.50$ & $79.85 \pm 0.06$ & $85.40 \pm 0.0$ & $69.61$ & $\textbf{94.09} \pm 0.13$ \\ \hline Average & AUROC $\%$ & $71.92 \pm 2.33$ & $74.83 \pm 2.37$ & $82.98 \pm 0.42$ & $82.12 \pm 0.79$ & $86.20 \pm 0.03$ & $92.49 \pm 0.09$ & $82.08$ & $\textbf{98.18} \pm 0.10$ \\ & F$_1$ score $\%$ & $71.99 \pm 1.64$ & $71.77 \pm 0.82$ & $76.05 \pm 0.32$ & $78.97 \pm 0.44$ & $79.67 \pm 0.04$ & $85.21 \pm 0.09$ & $70.87$ & $\textbf{95.80} \pm 0.13$ \\ \hline \multicolumn{9}{@{}l}{$^*$MT -- Masson's Trichrome staining \qquad $^{\#}$ H\&E -- Hematoxylin and Eosin staining} \end{tabularx} \end{table*} \subsubsection{Ablation Study} \label{Sec:ablation_study} Here we will study the importance of the techniques that we proposed to learn feature representations for histological images. 
Namely, we will study the influence on AD performance of the class mix-up augmentation (Sec. \ref{Sec:mixup_augmentation}), the center-loss term in the objective function (Sec. \ref{Sec:objective_function}), and the complexity of the auxiliary classification task (Sec. \ref{Sec:Auxiliary}). The ablation study results are summarized in Tab. \ref{Tab:4}. The $1^{st}$ column shows the performance of our system (when all the proposed techniques are implemented). Switching off the class mix-up augmentation decreases the (average) AD performance by $3.3\%$ ($2^{nd}$ column), or increases the classification error (1 - balanced accuracy) by $78\%$. Using, instead of the class mix-up, the standard way to augment color images by means of random variations in hue and saturation results in performance even slightly worse than without any augmentation ($3^{rd}$ column). Excluding the center-loss term from the objective function results in a $3.4\%$ decrease in performance ($4^{th}$ column), or a $78\%$ increase in classification error. We implemented the center-loss so that it forces compactness of the single class of interest (mouse liver with a particular staining) out of the 16 classes in the auxiliary task. We also checked the performance of AD when the center-loss forces compactness of all the classes in the auxiliary task ($5^{th}$ column). Though the performance in this case is higher than without the center-loss term, it is inferior to the case with the center-loss applied to the single class of interest. The $6^{th}$ and $7^{th}$ columns show that the performance of AD drops as the number of categories to be classified in the auxiliary task is reduced, i.e. the performance of the AD grows with the complexity of the auxiliary task. This suggests that increasing the complexity of the auxiliary task may further increase the performance of our AD method. \begin{table*}[!htb] \scriptsize \caption{\label{Tab:4} Performance of the proposed AD when the auxiliary task training procedure was altered.
EfficientNet-B0 with 320 output features is used as a backbone. The performance is measured with balanced accuracy in $\%$} \scriptsize \centering \begin{tabularx}{\textwidth}{XXXXXXXX} \hline & Proposed training procedure & No mix-up & \mbox{Hue-sat.} augmentation, no mix-up & No center-loss & Center-loss for all categories & Number of categories reduced to 14& Number of categories reduced to 7\\ \hline \hline MT$^*$ & \mbox{$\textbf{97.51} \pm 0.12$} & \mbox{$94.39 \pm 0.21$} & \mbox{$88.87 \pm 0.38$} & \mbox{$94.84 \pm 0.21$} & \mbox{$95.32 \pm 0.24$} & \mbox{$95.75 \pm 0.14$} & \mbox{$93.26 \pm 0.23$} \\ \hline H\&E$^\#$ & \mbox{$\textbf{94.20} \pm 0.12$} & \mbox{$90.92 \pm 0.15$} & \mbox{$90.51 \pm 0.26$} & \mbox{$90.40 \pm 0.61$} & \mbox{$92.04 \pm 0.42$} & \mbox{$92.98 \pm 0.13$} & \mbox{$92.69 \pm 0.22$} \\ \hline Average & \mbox{$\textbf{95.86} \pm 0.12$} & \mbox{$92.65 \pm 0.18$} & \mbox{$89.69 \pm 0.32$} & \mbox{$92.62 \pm 0.41$} & \mbox{$93.68 \pm 0.33$} & \mbox{$94.36 \pm 0.14$} & \mbox{$92.97 \pm 0.22$} \\ \hline \multicolumn{8}{@{}l}{$^*$MT -- Masson's Trichrome staining \qquad $^{\#}$ H\&E -- Hematoxylin and Eosin staining } \end{tabularx} \end{table*} \begin{table*}[!htb] \scriptsize \caption{\label{Tab:5} Performance of the proposed AD when auxiliary task training procedure was altered. DenseNet-121 with 512 output features is used as a backbone. 
The performance is measured with balanced accuracy in $\%$} \scriptsize \centering \begin{tabularx}{\textwidth}{XXXXXXXX} \hline & Proposed training procedure & No mix-up & \mbox{Hue-sat.} augmentation, no mix-up & No center-loss & Center-loss for all categories & Number of categories reduced to 14& Number of categories reduced to 7\\ \hline \hline MT$^*$ & \mbox{$\textbf{96.12} \pm 0.41$} & \mbox{$94.38 \pm 0.26$} & \mbox{$89.65 \pm 0.76$} & \mbox{$90.92 \pm 0.38$} & \mbox{$94.67 \pm 0.36$} & \mbox{$93.38 \pm 0.31$} & \mbox{$93.69 \pm 0.21$} \\ \hline H\&E$^\#$ & \mbox{$\textbf{94.37} \pm 0.21$} & \mbox{$88.08 \pm 0.73$} & \mbox{$91.04 \pm 1.11$} & \mbox{$88.36 \pm 1.01$} & \mbox{$92.21 \pm 0.49$} & \mbox{$93.03 \pm 0.36$} & \mbox{$88.14 \pm 0.55$} \\ \hline Average & \mbox{$\textbf{95.25} \pm 0.31$} & \mbox{$91.23 \pm 0.50$} & \mbox{$90.34 \pm 0.93$} & \mbox{$89.64 \pm 0.69$} & \mbox{$93.44 \pm 0.43$} & \mbox{$93.21 \pm 0.34$} & \mbox{$90.91 \pm 0.38$} \\ \hline \multicolumn{8}{@{}l}{$^*$MT -- Masson's Trichrome staining \qquad $^{\#}$ H\&E -- Hematoxylin and Eosin staining } \end{tabularx} \end{table*} \subsection{Comparison with methods tailored for assessment of NAFLD} \label{Sec:comparison_with_NAS} We analyzed liver samples from mice with non-alcoholic fatty liver disease (NAFLD). Here, the mice were treated with $CCl_4$ (carbon tetrachloride), which leads to the development of liver pathology similar to NAFLD. The livers of the mice were analyzed immediately after $CCl_4$ administration and after a recovery period of 4, 8, and 12 weeks. Two methods tailored for the quantification of the NAFLD pathology are compared with our anomaly detector. First, the immunohistochemical staining of alpha smooth muscle actin ($\alpha$SMA) is quantified by measuring positively stained tissue area. $\alpha$SMA is known to increase in NAFLD \citep{Munsterman_Histopathology_2018}. 
Second, a pathologist's assessment of the severity of NAFLD using the NAFLD activity score (NAS) was performed \citep{Kleiner_Hepatology_2005}. When assigning the NAS, the pathologist combines three predefined histologic features known to change in NAFLD (steatosis, i.e., fat accumulation, inflammation, and ballooning, i.e., a form of hepatocyte cell death). In contrast to the methods above, the anomaly detector, which is trained on healthy tissue only, is neither tailored to NAFLD nor to any other specific disease. Fig. \ref{Fig:4}A shows the results of the analysis of NAFLD pathology with the two tailored methods, IHC and pathologist's scoring (NAS), compared to our anomaly detector. Immunohistochemical analysis of $\alpha$SMA showed a significant increase in the area of $\alpha$SMA-positive liver tissue immediately after $CCl_4$ administration ($***, p \leq 0.001$, Mann-Whitney test, two-sided). No increased signal of $\alpha$SMA was observed in the recovery groups after 4, 8, and 12 weeks. The NAS also showed an increase immediately after $CCl_4$ administration ($p \leq 0.001$). Moreover, the NAS remained slightly but significantly increased in the recovery groups at 4 and 8 weeks ($***, p \leq 0.001$) and after 12 weeks ($*, p \leq 0.05$). This increase in NAS after recovery was primarily caused by the liver inflammation component of the NAS. For quantification of the response of our AD method, we computed the percentage of detected anomalous tiles. For liver tissue immediately after $CCl_4$ administration the AD method showed the same increase in signal ($p \leq 0.001$) as the targeted methods. In the recovery groups, the signal did not differ from that of the healthy control. Here, the pathologist could still detect a low level of inflammation as shown by the NAS score.
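The significance tests used above (two-sided Mann-Whitney) can be reproduced with SciPy; the per-group values below are synthetic stand-ins for per-WSI anomaly fractions, not our measured data:

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
# synthetic stand-ins for per-WSI anomaly fractions in two groups
control = rng.normal(0.02, 0.01, size=20).clip(0, 1)
treated = rng.normal(0.40, 0.10, size=20).clip(0, 1)

stat, p = mannwhitneyu(control, treated, alternative="two-sided")
print(p <= 0.001)  # True for such clearly separated groups
```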
Even though our AD method was not developed for detection of specific pathological changes in the tissue, it closely reproduces the response of the methods tailored to detection of NAFLD. Fig. \ref{Fig:4}B shows tissue examples from the healthy (control) and $CCl_4$ groups, as well as detected abnormal tiles (yellow). In the $CCl_4$ treated liver, NAFLD features visible in the Masson's Trichrome stain include fibrosis (blue) and hepatocellular changes with micro- and macrosteatosis and ballooning, as well as inflammatory infiltrates. The tiles with these tissue changes were correctly classified as abnormal. The false positive detections in the healthy liver include processing artifacts (folded tissue) and dilated blood vessels containing erythrocytes. Gathering additional training images of healthy tissue with these and a variety of other histological features could reduce such false positives. \begin{figure*}[!htb] \centering \includegraphics[scale=0.98]{NAS_IHC_anomaly} \caption{\label{Fig:4} Detection of Non-Alcoholic Fatty Liver Disease (NAFLD) in liver tissue. A: Comparison of the behavior of the BIHN based anomaly detection method with quantification methods specifically tailored to NAFLD, alpha Smooth Muscle Actin ($\alpha$SMA) immunohistochemical staining and the NAFLD Activity Score (NAS) from pathologist examination. Each dot corresponds to a single WSI of the corresponding tissue type. Stars on the top of the graphs show the statistical significance of the change compared to the mean of the control (healthy) group. B: Examples of detected anomalies in a control and a diseased liver.} \end{figure*} \subsection{Detection of toxicological drug effects} \label{Sec:detection_adverse_events} We used a pre-clinical toxicity study to verify whether our method is capable of detecting pathological changes in liver tissue previously diagnosed by an expert pathologist.
In this study, mice received different doses of a substance in development for treating a particular (not liver related) disease. The investigation by pathologists revealed that livers from treated mice showed dose-dependent hepatocellular cytoplasmic vacuolation. In addition, hepatocellular degeneration and necrosis with inflammation and mineralization, as well as pigment deposition without a clear dose-relationship, were seen in a significant number of treated animals from all dose groups. Our method reliably detected the highly variable abnormal morphological changes described above (Fig. \ref{Fig:5}B, low dose: perivascular inflammatory infiltrations and single cell necrosis; Fig. \ref{Fig:5}B, mid dose: widespread necrosis with mineralization) and was able to discriminate them from the surrounding normal tissue. As can be seen in Fig. \ref{Fig:5}A, the AD output is proportional to the dose of the studied substance and to the severity of the lesions as diagnosed by the expert pathologist. The results underline the value of AD tools as a potential pre-screening modality for large scale toxicological studies. AD can be integrated into a digital pathology slide viewer to enable convenient pathologist review. Such AD tools may help in the early detection of adverse findings and support pathologists' decision making. \begin{figure*}[!htb] \centering \includegraphics[scale=0.98]{tox_pattern} \caption{\label{Fig:5} Detection of adverse drug reactions by the BIHN based anomaly detection. A: The developed AD method detects induced tissue alterations in mouse liver after administration of an experimental compound. The fraction of abnormal tiles increases with the dosage of the compound. The compound was previously found to have toxic side effects in toxicological screening by pathologists. Each dot corresponds to a single WSI. Three arrows correspond to the three WSI examples given in B.
Stars at the top of the graph show statistical significance of the change compared to the mean of the control group. B: Examples of detected anomalies. In the control group (left image) blood and a few other non-pathological structures result in a low level of false positives. Detections in compound treated groups (two right images) correspond to pathological alterations and were confirmed by a pathologist.} \end{figure*} \subsection{Conclusion} We developed an approach enabling detection of abnormal structures in histopathological images in a real-world environment. We introduced a recipe for training image representations that adapts them to the domain of histopathology, which allowed a standard anomaly detection method to achieve strong performance. We collected and published a dataset of histopathological images of tissue with non-alcoholic fatty liver disease (NAFLD) that exhibits different abnormal variations and can be used as a benchmark for anomaly detection in histological images. Using this dataset we have shown that our approach outperforms recent state-of-the-art and classical methods for anomaly detection. Moreover, we have shown that our method behaves similarly to established methods designed specifically to quantify the NAFLD condition. We have also demonstrated that our approach was able to reveal tissue alterations in the liver resulting from side effects of a drug under development. We conclude that the developed approach is capable of detecting unknown adverse drug effects and has the potential to reduce attrition rates in drug development. Currently, our system cannot use anomaly examples to improve its performance (also not for a particular type of anomaly). Moreover, re-training the one-class classifier on continuously gathered new normal data will not be efficient until the amount of new data is relatively large.
In the future we therefore aim at extending our AD approach to be able to gradually improve performance using both anomalous and normal data that occasionally become available, i.e., moving to a lifelong learning regime. \section*{Acknowledgments} We would like to thank Dr. Tanja Schönberger (Drug Discovery Sciences, Boehringer Ingelheim, Biberach, Germany) for help with the data curation and Dr. Martin Lenter (Drug Discovery Sciences, Boehringer Ingelheim, Biberach, Germany) for helpful discussions and project support. We would also like to thank Florian Rottach (IT EDS Central Data Sciences, Boehringer Ingelheim, Biberach, Germany) for his comments, which helped to improve the quality of the paper. \bibliographystyle{model2-names.bst}\biboptions{authoryear}
Q: Fastest way for an HTML page to redirect to a script I am looking for fast methods/ways I can redirect an HTML page to another page. Currently I use meta-tags to redirect an HTML page to a python script that grabs the website HTML from a SQL database (grabs different HTML based on whether the user is on a mobile device) then sends that HTML back. But this is visibly slow: it shows a blank white page (the redirect HTML page), then shows the python script page. I know that it's not the python script that's slow, because if I go straight to the python script (no redirect from the HTML page) then it loads very fast and with no white screen. So for example, someone accesses http://mywebsite.com/contactUs.html which contains only the following HTML: <HTML> <HEAD><META HTTP-EQUIV=Refresh CONTENT="0; url=cgi-bin/loadWebPage.py?webpage=50002"> </HEAD> </HTML> <!-- This redirects to my python script that grabs the contact us HTML & prints it out (shows the webpage HTML) --> Are there other ways an HTML file can redirect to a python script that will be faster? Maybe I can use HTTP messages? Would the following text/code work in a HTML file?: Status: 301 Moved Location: /cgi-bin/loadWebPage.py?webpage=50002 Would my last resort be AJAX or javascript to get the fastest result? <html> <head> <script type="text/javascript"> <!-- window.location = 'http://mywebsite.com/cgi-bin/loadWebpage.py?webPage=50001'; --> </script> </head> <body> </body> </html> A: Place a file called .htaccess in the same directory as your HTML file. If the file is called "http://www.example.com/foo/bar.html" then .htaccess would contain: Redirect permanent /foo/bar.html http://www.example.com/script.py Note that the file that gets redirected does not have http://www.example.com, while the file you redirect it to needs the full URL. If it's your own server, it's a little faster to disable .htaccess files and instead put the Redirect command in your httpd.conf file.
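On the Status/Location part of the question: Status: and Location: are CGI response headers, not HTML, so placing them in a .html file will not work — but the python script itself can print them (and so redirect, or serve the page directly, skipping the meta-refresh hop). A rough sketch, using the URL from the question:

```python
#!/usr/bin/env python
# Minimal CGI sketch: emit a 301 redirect via response headers.
# The target URL below is the example path from the question.

def redirect_headers(url):
    # CGI scripts signal a redirect by printing these headers,
    # followed by the blank line that ends the header section.
    return "Status: 301 Moved Permanently\r\nLocation: %s\r\n\r\n" % url

if __name__ == "__main__":
    print(redirect_headers("/cgi-bin/loadWebPage.py?webpage=50002"))
```

The .htaccess Redirect is still faster, since the server answers before any script is started.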
A: Don Quixote is correct about the .htaccess. I would just like to point out that http://progtuts.info/198/the-anatomy-of-a-htaccess-file-hints-and-tips/ provides some examples and a whole lot of material on how to edit your .htaccess files etc. Redirect will definitely work.
Q: ANTLR4 target file names For the TypeScript ANTLR target that Sam and I have been working on, I would like to have the code generation tool create a single TypeScript file to hold all the classes generated from a named grammar input. Is this output file structure going to be hard to achieve? So for example, I'd like Expr.g4 -> Expr.g4.ts. That one TypeScript file could contain named exports for {ExprLexer, ExprParser, and ExprListener} classes, visitor code if requested, maybe even some loose factory functions etc. I've been looking into the source code under tool/src/org/antlr/v4/codegen to find out how the number and names of the output files are determined, in particular CodeGenPipeline.java. This class works in conjunction with the language-specific target class, but the pipeline has a lot (perhaps too much) knowledge of possible output files built into it. None of what I see in CodeGenPipeline.java seems well matched to my 1:1 input-to-output file model. It seems like the knowledge of what files should be generated for a given language target should come from the language .stg file if possible, but I can't find any evidence that this approach has been implemented. Can anyone fill me in on any reasons this approach can't work or hasn't been tried?
\section{Introduction} \noindent Statistical learning from data is strongly linked to optimization of stochastic representation models. The traditional approach consists of learning an optimal hypothesis from a reasonable amount of data and further aiming to generalize its decision properties over the entire population. In general, this generalization is achieved by minimizing the population risk which, unlike empirical risk minimization, aims to compute the optimal predictor with the smallest generalization error. Thus, in this paper we consider the following stochastic composite optimization problem: \begin{align} \label{problem_intro} \min\limits_{w \in \mathbb{R}^n} & \;\; F(w) := \mathbb{E}_{\xi \in \Omega} [f(w;\xi)] + \mathbb{E}_{\xi \in \Omega} [h(w;\xi)], \end{align} where $\xi$ is a random variable associated with the probability space $(\mathbb{P},\Omega)$, $f(w) := \mathbb{E}_{\xi \in \Omega} [f(w;\xi)]$ is smooth and $h(w) := \mathbb{E}_{\xi \in \Omega} [h(w;\xi)]$ is convex and nonsmooth. Most convex learning models in population can be formulated following the structure of \eqref{problem_intro} using a proper decomposition dictated by the nature of the prediction loss and regularization. \noindent Let $f(\cdot;\xi) \equiv \ell(\cdot;\xi)$ be a convex smooth loss function, such as the quadratic loss, and $h \equiv r$ a ``simple'' convex regularization, such as $\norm{w}_1$; then the resulting model: \begin{align*} \min\limits_{w \in \mathbb{R}^n} & \;\; \mathbb{E}_{\xi \in \Omega} [\ell(w;\xi)] + r(w) \end{align*} has been considered in several previous works \cite{MouBac:11,RosVil:14,NiuRec:11,ShaSin:11}, which analyzed the iteration complexity of stochastic proximal gradient algorithms. Here, the proximal map of $r$ is typically assumed to be computable in closed form or in linear time, as appears in $\ell_1/\ell_2$ Support Vector Machines (SVMs).
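As a concrete instance of such a ``simple'' regularizer, for $r(w) = \lambda \norm{w}_1$ (with the stepsize absorbed into $\lambda$) the proximal map is available coordinate-wise in closed form, the standard soft-thresholding operator:

```latex
\left[\operatorname{prox}_{\lambda \norm{\cdot}_1}(w)\right]_i
  = \operatorname{sign}(w_i)\,\max\{|w_i| - \lambda,\ 0\},
  \qquad i = 1,\dots,n.
```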
In order to be able to approach more complicated regularizers, expressed as a sum of simple convex terms, as required by machine learning models such as group lasso \cite{ZhoKwo:14, ShiLin:15, HaLes:15}, CUR-like factorization \cite{WanWan17_prox}, graph trend filtering \cite{SalBia:17,VarLee:19}, dictionary learning \cite{DumIro:18}, and parametric sparse representation \cite{StoIro:19}, one has to be able to handle stochastic optimization problems with stochastic nonsmooth regularizations $r(w) = \mathbb{E}[r(w;\xi)]$. For instance, the grouped lasso regularization $\sum\limits_{j=1}^m \norm{D_j w}_2$ might be expressed as an expectation by considering $r(w;\xi) = \norm{D_{\xi}w}_2$. In this paper we analyze extensions of stochastic proximal gradient for this type of models. Nonsmooth (convex) prediction losses (e.g. hinge loss, absolute value loss, $\epsilon-$insensitive loss) are also covered by \eqref{problem_intro} through taking $h(\cdot;\xi) = \ell(\cdot;\xi)$. We will use this approach with $f(w) = \frac{\lambda}{2}\norm{w}^2_2$ for solving the hinge-loss-$\ell_2$-SVM model. \vspace{5pt} \noindent \textbf{Contributions}. $(i)$ We derive sublinear convergence rates of SPG with minibatches for stochastic composite convex optimization, under a strong convexity assumption. \noindent $(ii)$ Besides the sublinear rates, we provide a computational complexity analysis, which takes into account the complexity of each iteration, for the stochastic proximal gradient algorithm with minibatches. We obtain $\mathcal{O}\left( \frac{1}{N \epsilon}\right)$ complexity, which highlights the optimal dependency on the minibatch size $N$ and accuracy $\epsilon$. \noindent $(iii)$ We confirm empirically our theoretical findings through tests over $\ell_2-$SVMs (with hinge loss) on real data, and parametric sparse representation models on random data.
\vspace{5pt} \noindent Following our analysis, reductions of the complexity per iteration using multiple machines/processors would guarantee direct improvements in the optimal number of iterations. The superiority of distributed variants of SGD schemes for smooth optimization is clear (see \cite{NiuRec:11}), but our results set up the theoretical foundations for distributing the algorithms of the proximal gradient class. \noindent Further, we briefly recall the milestone results from the stochastic optimization literature, with a focus on the complexity of stochastic first-order methods. \subsection{Previous work} The natural tendency of using minibatches in order to accelerate stochastic algorithms and to obtain better generalization bounds is not new \cite{FriSch:12,GhaLan:16,WanWan17_minibatch,WanSre:19}. Empirical advantages have been observed in most convex and nonconvex models, although clear theoretical complexity reductions with the minibatch size are still under development for structured nonsmooth models. Great attention has been given in the last decade to the behaviour of the stochastic gradient descent (SGD) with minibatches, see \cite{FriSch:12,GhaLan:16, NemJud:09,MouBac:11,NguNgu:18,Ned:10,Ned:11, GhaLan:16,NedNec:19}. In short, an SGD iteration computes the average of the gradients on a small number of samples and takes a step in the negative gradient direction. Although more samples in the minibatch imply smaller variance in the direction and, for moderate minibatches, bring a significant acceleration, recent evidence shows that by increasing the minibatch size over a certain threshold the acceleration vanishes or deteriorates the training performance \cite{GoyDol:17,HoffHub:17}. \noindent Since the analysis of SGD naturally requires various smoothness conditions, proper modifications are necessary to attack nonsmooth models.
The stochastic proximal point (SPP) algorithm has been recently analyzed using various differentiability assumptions, see \cite{TouTra:16,RyuBoy:16,Bia:16,PatNec:18,KosNed:13,WanBer:16,AsiDuc:18}, and has shown surprising analytical and empirical performance. The works of \cite{WanWan17_minibatch,WanSre:19} analyzed minibatch SPP schemes with variable stepsizes and obtained $\mathcal{O}\left( \frac{1}{k N}\right)$ convergence rates under proper assumptions. For strongly convex problems, notice that they require multiple assumptions that we avoid using in our analysis: strong convexity of each stochastic component, knowledge of the strong convexity constant, and Lipschitz continuity of the objective function. Our analysis is based on strong convexity of the smooth component $f$ and only convexity of the nonsmooth component $h$. \vspace{5pt} \noindent A common generalization of SGD and SPP is given by the stochastic splitting methods. Splitting first-order schemes received significant attention due to their natural insight and simplicity in contexts where a sum of two components is minimized (see \cite{Nes:13,BecTeb:09}). Only recently have the full stochastic composite models with stochastic regularizers been properly tackled \cite{SalBia:17}, where almost sure asymptotic convergence is established for a stochastic splitting scheme in which each iteration represents a proximal gradient update using stochastic samples of $f$ and $h$. The stochastic splitting schemes are also related to the model-based methods developed in \cite{DavDru:19}. \vspace{5pt} \subsection{Preliminaries and notations} For $w,v \in \mathbb{R}^n$ denote the scalar product $\langle w,v \rangle = w^T v$ and the Euclidean norm by $\|w\|=\sqrt{w^T w}$. We use the notation $\partial h(w;\xi)$ for the subdifferential set and $g_h(w;\xi)$ for a subgradient of $h(\cdot;\xi)$ at $w$. In the differentiable case we use the gradient notation $\nabla f(\cdot;\xi)$.
We denote the set of optimal solutions by $W^*$ and by $w^*$ any optimal point of \eqref{problem_intro}. \begin{assumption} \label{assump_basic} The central problem \eqref{problem_intro} has nonempty optimal set $W^*$ and satisfies: \noindent $(i)$ The function $f(\cdot;\xi)$ has $L_f$-Lipschitz gradient, i.e. there exists $L_f > 0$ such that: \begin{align*} \norm{\nabla f(w;\xi) - \nabla f(v;\xi)} \le L_f \norm{w-v}, \qquad \forall w,v \in \mathbb{R}^n, \xi \in \Omega, \end{align*} and $f$ is $\sigma_f-$strongly convex, i.e. there exists $\sigma_f \ge 0$ satisfying: \begin{align} f(w) \ge f(v) + \langle \nabla f(v), w-v\rangle + \frac{\sigma_f}{2}\norm{w-v}^2 \qquad \forall w,v \in \mathbb{R}^n. \end{align} \noindent $(ii)$ There exists a subgradient mapping $g_h: \mathbb{R}^n \times \Omega \mapsto \mathbb{R}^n$ such that $g_h(w;\xi) \in \partial h(w;\xi)$ and $\mathbb{E}[g_h(w;\xi)] \in \partial h(w).$ \noindent $(iii)$ $h(\cdot;\xi)$ has bounded subgradients on the optimal set: there exists $\mathcal{S} < \infty $ such that $\mathbb{E}\left[\norm{g_h(w^*;\xi)}^2\right] \le \mathcal{S}$ for all $w^* \in W^*$. \end{assumption} \noindent Condition $(i)$ of the above assumption is natural in composite (stochastic) optimization \cite{Nes:13,BecTeb:09,PatNec:18}. Condition $(ii)$ of Assumption \ref{assump_basic} guarantees the existence of a subgradient mapping for the functions $h(\cdot;\xi)$. Denote $\partial F(w;\xi) = \nabla f(w;\xi) + \partial h(w;\xi)$. Moreover, since $0 \in \partial F(w^*)$ for any $w^*\in W^*$, we assume in the sequel that $g_F(w^*):=\mathbb{E}[g_F(w^*;\xi)] = 0$. Also condition $(iii)$ of Assumption \ref{assump_basic} is standard in the literature related to stochastic algorithms.
\noindent Given some smoothing parameter $\mu>0$ and $I \subset [m]$, we define the prox operator: \begin{align*} \text{prox}_{h,\mu}(w;I) = \arg\min\limits_{z \in \mathbb{R}^n} \frac{1}{|I|}\sum\limits_{i \in I} \; h(z;i) + \frac{1}{2\mu} \norm{z - w}^2. \end{align*} In particular, when $h(w;\xi) = \mathbb{I}_{X_{\xi}}(w)$ the prox operator becomes the projection operator $\text{prox}_{h,\mu}(w;\xi) = \pi_{X_{\xi}}(w)$. Further we denote $[m] =\{1,\cdots, m\}$. Given the sequence $\mu_k = \frac{\mu_0}{k}$ and $\gamma \in [0,1)$, a useful inequality for the sequel is: \begin{align}\label{mu_bound} \sum\limits_{i=0}^{T} \mu_i^{\gamma} \le \mu_0\left( 1+ \frac{T^{1-\gamma}}{1-\gamma} \right). \end{align} \section{Stochastic Proximal Gradient with Minibatches} \noindent In the following section we present the Stochastic Proximal Gradient with Minibatches (SPG-M) and analyze the complexity of a single iteration under Assumption \ref{assump_basic}. Let $w^0 \in \mathbb{R}^n$ be a starting point and $\{\mu_k\}_{k \ge 0}$ be a nonincreasing positive sequence of stepsizes. \begin{flushleft} \textbf{ Stochastic Proximal Gradient with Minibatches (SPG-M)}: \quad \\ For $k\geq 0$ compute: \\ 1. Choose randomly i.i.d. $N-$tuple $I^k \subset \Omega$ w.r.t. probability distribution $\mathbb{P}$\\ 2. Update: \begin{align*} v^k &= w^k - \frac{\mu_k}{N}\sum\limits_{i \in I^k} \nabla f(w^k;i) \\ w^{k+1} &= \arg\min\limits_{z \in \mathbb{R}^n} \frac{1}{N}\sum\limits_{i \in I^k} \; h(z;i) + \frac{1}{2\mu_k} \norm{z - v^k}^2 \end{align*} 3. If the stopping criterion holds, then \textbf{STOP}, otherwise $k = k+1$. \end{flushleft} Computing $v^k$ requires an effort equivalent to $N$ vanilla SGD iterations. However, to obtain $w^{k+1}$, a strongly convex inner problem has to be solved, and the linear scaling in $N$ holds only in structured cases. In fact, we consider using specific inner schemes to generate a sufficiently accurate suboptimal solution of the inner subproblem.
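A minimal one-dimensional sketch of a single SPG-M iteration may clarify the two steps; the inner prox subproblem is solved here by a subgradient scheme with diminishing stepsizes (all function choices, stepsizes and iteration counts below are illustrative assumptions, not the setup analyzed in the paper):

```python
def spgm_step(w, grads, h_subgrad, mu, inner_iters=200):
    # One SPG-M iteration in dimension n = 1.
    # grads[i](w) returns nabla f(w; i) for the i-th sampled index;
    # h_subgrad(z, i) returns a subgradient of h(z; i).
    N = len(grads)
    # forward (gradient) step on the sampled smooth losses
    v = w - mu * sum(g(w) for g in grads) / N
    # backward step: approximately solve the strongly convex subproblem
    # min_z (1/N) sum_i h(z; i) + (z - v)^2 / (2 mu)
    # by subgradient descent with diminishing stepsizes mu / t
    z = v
    for t in range(1, inner_iters + 1):
        sub = sum(h_subgrad(z, i) for i in range(N)) / N + (z - v) / mu
        z = z - (mu / t) * sub
    return z
```

For example, with $\nabla f(w;\xi) = w - \xi$ and $h(z;i) = |z|$ the inner solution is the soft-thresholding of $v$ at level $\mu$, which the sketch recovers to good accuracy.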
We provide more details in Section \ref{sec:com_per_iter}. In the particular scenario when $f=\norm{w}^2_2$ and $h$ represents the nonsmooth prediction loss, SPG-M completely learns, at each iteration $k$, a predictor $w^{k+1}$ for the minibatch of data samples $I^k$, while maintaining a small distance from the previous predictor. \noindent For $N = 1$, the SPG-M iteration reduces to $w^{k+1} = \text{prox}_{h,\mu_k} \left(w^k - \mu_k \nabla f(w^k;\xi_k);\xi_k \right)$, being mainly a Stochastic Proximal Gradient iteration based on stochastic proximal maps \cite{SalBia:17}. The asymptotic convergence of vanishing stepsize non-minibatch SPG (a single sample per iteration) has been analyzed in \cite{SalBia:17} with application to trend filtering. Moreover, a sublinear $\mathcal{O}(1/k)$ convergence rate for non-minibatch SPG has been provided in \cite{PatIro:20}. However, deriving the sample complexity of SPG-M with arbitrary minibatches is not trivial, since it requires a proper estimation of the computational effort required by a single iteration. In the smooth case ($h = 0$), SPG-M reduces to vanilla minibatch SGD \cite{MouBac:11}: $$w^{k+1} = w^k - \frac{\mu_k}{N} \sum\limits_{i \in I^k}\; \nabla f(w^k;i).$$ On the other hand, for nonsmooth objective functions, when $f = 0$, SPG-M is equivalent to a minibatch variant of the SPP analyzed in \cite{PatNec:18, AsiDuc:18,TouTra:16,WanWan17_minibatch,WanSre:19}: $$w^{k+1} = \text{prox}_{h,\mu_k}(w^{k};I^k).$$ Next we analyze the computational (sample) complexity of the SPG-M iteration and suggest concrete situations when a minibatch size $N > 1$ is advantageous over the single-sample $N=1$ scheme. \subsection{Complexity per iteration} \label{sec:com_per_iter} In this section we estimate bounds on the sample complexity $T_v(N)$ of computing $v^{k}$ and $T_w(N)$ of computing $w^{k+1}$.
Let $I \subset [m], |I| = N$; then it is obvious that the sample complexity of computing $v^k$ increases linearly with $N$, thus $$T_v(N) = \mathcal{O}(N).$$ A more attentive analysis is needed for the proximal step: \begin{align}\label{prox_step} \arg\min_{z \in \mathbb{R}^n}\;\; \frac{1}{N}\sum\limits_{i \in I} h(z;\xi_i) + \frac{1}{2\mu}\norm{z - w}^2. \end{align} Even for small $N > 0$ the solution of the above problem does not have a closed form and an auxiliary iterative algorithm must be used to obtain an approximation of the optimal solution. For the above primal form, stochastic variance-reduction schemes are typically called upon to approach this finite-sum minimization, when $h$ obeys certain smoothness assumptions. However, to the best of our knowledge, variance-reduction methods are of limited use for our general convex nonsmooth regularizers $h(\cdot;\xi)$. SGD attains a $\delta-$suboptimal point $\norm{\tilde{z} - \text{prox}_{\mu}(w;I)}^2 \le \delta$ at a sublinear rate, in $\mathcal{O}\left(\frac{\mu}{\delta}\right)$ iterations. This sample complexity is independent of $N$, but to obtain high accuracy a large number of iterations have to be performed.
The following dual form is more natural: \begin{align} & \min_{z \in \mathbb{R}^n} \;\; \frac{1}{N}\sum\limits_{i \in I} h(z;\xi_i) + \frac{1}{2\mu}\norm{z - w}^2 \nonumber \\ & =\min_{z \in \mathbb{R}^n}\;\; \frac{1}{N}\sum\limits_{i \in I} \max\limits_{v_i} \langle v_i,z\rangle - h^*(v_i;\xi_i) + \frac{1}{2\mu}\norm{z - w}^2 \nonumber \\ & = \max\limits_{v} \min_{z \in \mathbb{R}^n}\;\; \left \langle \frac{1}{N}\sum\limits_{i \in I} v_i,z \right\rangle - \frac{1}{N}\sum\limits_{i \in I} h^*(v_i;\xi_i) + \frac{1}{2\mu}\norm{z - w}^2 \nonumber \\ & =\max_{v \in \mathbb{R}^{Nn}}\;\; -\frac{\mu}{2N^2} \norm{\sum_{j=1}^N v_j}^2 + \left\langle \frac{1}{N}\sum\limits_{j = 1}^N v_j,w \right\rangle - \frac{1}{N}\sum\limits_{j=1}^N \; h^*(v_j;\xi_{i_j}) \label{dual_subproblem} \end{align} Moreover, in the interesting particular scenarios when the regularizer $h(\cdot;\xi)$ results from the composition of a convex function with a linear operator, $h(w;\xi) = l(a_{\xi}^Tw)$, the dual variable reduces from $Nn$ to $N$ dimensions. In this case \begin{align} & \min_{z \in \mathbb{R}^n} \;\; \frac{1}{N}\sum\limits_{i \in I} l(a_{\xi_i}^T z) + \frac{1}{2\mu}\norm{z - w}^2 \nonumber \\ & = \frac{1}{N}\max_{v \in \mathbb{R}^{N}}\;\; -\frac{\mu}{2N} \norm{\sum_{j=1}^N a_{\xi_j} v_j}^2 + \left\langle \sum\limits_{j = 1}^N a_{\xi_j}v_j,w \right\rangle - \sum\limits_{j=1}^N \; l^*(v_j) \label{dual_composition_subproblem} \end{align} Once the dual solution $v^*$ is computed, the primal one is recovered by $z(w) = w - \frac{\mu}{N}\sum\limits_{i=1}^N v_i^*$ for \eqref{dual_subproblem} or $z(w) = w - \frac{\mu}{N}\sum\limits_{i=1}^N a_{\xi_i} v_i^*$ for \eqref{dual_composition_subproblem}. In the rest of this section we will analyze only the general subproblem \eqref{dual_subproblem}, since the sample complexity estimates are easily translated to the particular instance \eqref{dual_composition_subproblem}.
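The primal recovery from the dual solution of \eqref{dual_composition_subproblem} can be checked numerically on a toy scalar instance; the choice $l(t) = t^2/2$ (so that $l^*(v) = v^2/2$ and the primal prox also has a closed form to compare against) is an illustrative assumption, not one made in the paper:

```python
def prox_via_dual(w, a, mu, steps=500, lr=0.1):
    # Solve the dual of min_z (1/N) sum_i (a_i z)^2 / 2 + (z - w)^2 / (2 mu)
    # by gradient ascent, then recover the primal point
    # z(w) = w - (mu / N) * sum_j a_j v_j*.
    N = len(a)
    v = [0.0] * N
    for _ in range(steps):
        s = sum(ai * vi for ai, vi in zip(a, v))
        # partial derivative of the concave dual w.r.t. v_j
        v = [vi + lr * (ai * (w - mu * s / N) - vi) for ai, vi in zip(a, v)]
    s = sum(ai * vi for ai, vi in zip(a, v))
    return w - mu * s / N  # primal recovery

def prox_closed_form(w, a, mu):
    # Direct minimizer of the quadratic primal subproblem.
    A = sum(ai * ai for ai in a)
    return w / (1.0 + mu * A / len(a))
```

On random data the two routines agree to high precision, which is a quick sanity check of the dual derivation above.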
For a suboptimal $\tilde{v}$ satisfying $\norm{\tilde{v} - v^*} \le \delta$, a primal point of corresponding accuracy is obtained as follows: let $\tilde{z}(w) = w - \frac{\mu}{N}\sum\limits_{i=1}^N \tilde{v}_i$, then \begin{align}\label{primal_dual_subopt} \norm{\tilde{z}(w) - z(w)} & = \mu \left\|\frac{1}{N} \left(\sum\limits_{i=1}^N \tilde{v}_i - \sum\limits_{i=1}^N v_i^*\right) \right\| \nonumber \\ & \le \frac{\mu}{N} \sum\limits_{i=1}^N \left\| \tilde{v}_i - v_i^* \right\| \le \frac{\mu}{\sqrt{N}} \norm{v^* - \tilde{v}} \le \frac{\mu \delta}{\sqrt{N}}. \end{align} Further we provide several sample complexity estimations of various dual algorithms to attain primal $\delta-$suboptimality, for general and particular regularizers $h(\cdot;\xi)$. Notice that the Hessian of the smooth component $\frac{\mu}{2N} \norm{\sum_{j =1}^N v_j}^2$ is upper bounded by $\mathcal{O}(\mu)$. Without any growth properties on $h^*(\cdot;\xi)$, one would be able to solve \eqref{dual_subproblem}, using Dual Fast Gradient schemes with $\mathcal{O}(Nn)$ iteration cost, in $\mathcal{O}\left(\max\left\{Nn, Nn \sqrt{\frac{\mu R_d^2}{\delta}} \right\} \right)$ sample evaluations to get a $\delta$ accurate dual solution. This implies, by \eqref{primal_dual_subopt}, that $$T_w^{in} (N;\delta) = \mathcal{O}\left( \max \left\{ Nn, N^{3/4}n \frac{\mu R_d}{\delta^{1/2}} \right\} \right)$$ sample evaluations are necessary to obtain primal $\delta-$suboptimality. For polyhedral $h^*(\cdot;\xi)$, there are many first-order algorithms that attain linear convergence on the above composite quadratic problem \cite{HsiSi:17}.
For instance, the Proximal Gradient algorithm has $\mathcal{O}(Nn)$ arithmetical complexity per iteration and attains a $\delta-$suboptimal dual solution in $\mathcal{O}\left(\frac{L}{\hat{\sigma}(I)}\log\left( \frac{1}{\delta} \right)\right)$ iterations, where $L$ is the Lipschitz gradient constant and $\hat{\sigma}(I)$ represents the quadratic growth constant of the dual objective function \eqref{dual_subproblem}. Therefore $\mathcal{O}\left(\frac{N \mu}{\hat{\sigma}(I)}\log\left( \frac{1}{\delta} \right)\right)$ sample evaluations are necessary for $\delta-$dual suboptimality, and: $$ T_{w,poly}^{in}(N;\delta) = \mathcal{O}\left(\max \left\{ Nn,\frac{N \mu}{\hat{\sigma}(I)}\log\left( \frac{\mu}{N^{1/2}\delta} \right) \right\}\right)$$ sample evaluations to attain a primal $\delta-$suboptimal solution. \section{Iteration complexity in expectation} \label{sec:iteration_com} \noindent Further, in this section, we estimate the number of SPG-M iterations that is necessary to get an $\epsilon-$suboptimal solution of \eqref{problem_intro}. We will use the following elementary relations: for any $a,b \in \mathbb{R}^n$ and $\beta > 0$ we have \begin{align} \langle a,b \rangle & \le \frac{1}{2\beta}\norm{a}^2 + \frac{\beta}{2}\norm{b}^2 \label{elem_bound_norm1}\\ \norm{a+b}^2 &\le \left(1 + \frac{1}{\beta}\right)\norm{a}^2 + (1 + \beta)\norm{b}^2.\label{elem_bound_norm2} \end{align} The main recurrences which will finally generate our sublinear convergence rates are presented below. \begin{theorem}\label{th_reccurence} Let Assumption \ref{assump_basic} hold and $\mu_k \le \frac{1}{4L_f}$.
Assume $ \norm{w^{k+1} - \text{prox}_{h,\mu_k}(v^k;I^k)} \le \delta_k$; then the sequence $\{w^k\}_{k \ge 0}$ generated by SPG-M satisfies: \begin{align*} \mathbb{E}[\norm{w^{k+1} - w^*}^2] \le & \left(1 - \frac{\sigma_f \mu_k}{2} \right) \mathbb{E}[\norm{w^k-w^*}^2] \\ & \hspace{2cm} + \mu_k^2 \frac{\mathbb{E} \left[ \norm{g_F(w^*;\xi)}^2\right]}{N} + \left( 3 + \frac{2}{\sigma_f \mu_k}\right)\delta_k^2 \end{align*} \end{theorem} \noindent In the sequel we denote $\Sigma^2 := \mathbb{E} \left[ \norm{g_F(w^*;\xi)}^2\right]$, so that the middle term above reads $\mu_k^2 \Sigma^2/N$. \begin{proof} Denote $\bar{w}^{k+1} = \text{prox}_{h,\mu_k}(v^k;I^k)$ and recall that $\frac{1}{\mu_k}\left(v^k - \bar{w}^{k+1} \right) \in \partial h(\bar{w}^{k+1};I^k)$, which implies that there exists a subgradient $g_h(\bar{w}^{k+1};I^k)$ such that \begin{align}\label{subproblem_optcond} g_h(\bar{w}^{k+1};I^k) + \frac{1}{\mu_k}\left(\bar{w}^{k+1} -v^k \right) = 0. \end{align} Using these optimality conditions we have: \begin{align} &\norm{w^{k+1} - w^*}^2 = \norm{w^k - w^*}^2 + 2 \langle w^{k+1} - w^k, w^k-w^* \rangle + \norm{w^{k+1} - w^k}^2 \nonumber\\ & \overset{\eqref{elem_bound_norm2}}{\le} \norm{w^k - w^*}^2 + 2 \langle \bar{w}^{k+1} - w^k, w^k-w^* \rangle + 2 \langle w^{k+1} - \bar{w}^{k+1}, w^k-w^* \rangle \nonumber\\ & \hspace{3cm} + \frac{3}{2}\norm{\bar{w}^{k+1} - w^k}^2 + 3\norm{\bar{w}^{k+1} - w^{k+1}}^2 \nonumber\\ & = \norm{w^k - w^*}^2 + 2 \langle \bar{w}^{k+1} - w^k, \bar{w}^{k+1}-w^* \rangle + 2 \langle w^{k+1} - \bar{w}^{k+1}, w^k-w^* \rangle \nonumber\\ & \hspace{3cm} - \frac{1}{2}\norm{\bar{w}^{k+1} - w^k}^2 + 3\norm{\bar{w}^{k+1} - w^{k+1}}^2 \nonumber\\ & \overset{\eqref{elem_bound_norm1}}{\le} \left(1 + \frac{\sigma_f\mu_k}{2}\right)\norm{w^k-w^*}^2 + 2 \langle \mu_k \nabla f(w^k;I_k) + \mu_k g_h(\bar{w}^{k+1};I_k), w^* - \bar{w}^{k+1} \rangle \nonumber\\ & \hspace{3cm} - \frac{1}{2}\norm{\bar{w}^{k+1} - w^k}^2 + \left(3 + \frac{2}{\sigma_f\mu_k}\right)\delta_k^2.
\label{start_reccurence} \end{align} Now by using the convexity of $h$ and the Lipschitz continuity of $\nabla f(\cdot;I_k)$, we can further derive: \begin{align*} &2 \mu_k\langle \nabla f(w^k;I_k) + g_h(\bar{w}^{k+1};I_k), w^* - \bar{w}^{k+1} \rangle - \frac{1}{2}\norm{\bar{w}^{k+1} - w^k}^2 \\ & \le 2\mu_k \langle \nabla f(w^k;I_k), w^* - \bar{w}^{k+1} \rangle - \frac{1}{2}\norm{\bar{w}^{k+1} - w^k}^2 \nonumber\\ & \hspace{5cm} + 2\mu_k(h(w^* ;I_k) - h(\bar{w}^{k+1};I_k))\\ & \le - 2\mu_k \bigg( \langle \nabla f(w^k;I_k), \bar{w}^{k+1} - w^k \rangle + \frac{1}{8\mu_k}\norm{\bar{w}^{k+1} - w^k}^2 \\ & + h(\bar{w}^{k+1};I_k)\bigg) + 2\mu_k \langle \nabla f(w^k;I_k), w^* - w^{k}\rangle - \frac{1}{4}\norm{\bar{w}^{k+1} - w^k}^2 + 2\mu_k h(w^*;I_k). \end{align*} By taking expectation w.r.t. $I_k$ on both sides, we obtain: \begin{align} & - 2\mu_k \mathbb{E} \bigg[ \langle \nabla f(w^k;I_k), \bar{w}^{k+1} - w^k \rangle + \frac{1}{8\mu_k}\norm{\bar{w}^{k+1} - w^k}^2 + h(\bar{w}^{k+1};I_k)\bigg] \nonumber\\ & + 2\mu_k \langle \nabla f(w^k), w^* - w^{k}\rangle - \frac{1}{4}\mathbb{E}\left[\norm{\bar{w}^{k+1} - w^k}^2\right] + 2\mu_k h(w^*) \nonumber \\ &\overset{\mu_k < \frac{1}{4L_f}}{\le} 2\mu_k \mathbb{E}[\left( f(w^k;I_k) - F(\bar{w}^{k+1};I_k) \right)] + 2\mu_k \langle \nabla f(w^k), w^* - w^{k}\rangle \nonumber\\ & \hspace{3cm} - \frac{1}{4}\mathbb{E}[\norm{\bar{w}^{k+1} - w^k}^2] + 2\mu_k h(w^*) \nonumber \\ & \le 2\mu_k \left(f(w^k) - \mathbb{E}\left[F(\bar{w}^{k+1};I_k) \right] \right) \nonumber \\ & \hspace{2cm} + 2\mu_k \left( F(w^*) - f(w^k) \!-\! \frac{\sigma_f}{2}\norm{w^k-w^*}^2 \right) \!-\! \frac{1}{4}\norm{\bar{w}^{k+1} - w^k}^2 \nonumber \\ & \!= - \sigma_f \mu_k\norm{w^k-w^*}^2 + 2\mu_k \mathbb{E}\left[ F(w^*) \!-\! F(\bar{w}^{k+1};I_k) - \frac{1}{8\mu_k} \norm{\bar{w}^{k+1} - w^k}^2 \right]\!\!.
\label{rel_reccurence} \end{align} By combining \eqref{rel_reccurence} with \eqref{start_reccurence} and by taking the expectation over the entire index history we obtain: \begin{align} \mathbb{E}[\norm{w^{k+1} - w^*}^2] & \le \left(1 - \frac{\sigma_f \mu_k}{2}\right)\mathbb{E}\left[\norm{w^k-w^*}^2\right] \nonumber \\ & \hspace{0cm} + 2\mu_k \mathbb{E}\left[ F(w^*) - F(\bar{w}^{k+1};I_k) - \frac{1}{8\mu_k} \norm{\bar{w}^{k+1} - w^k}^2 \right].\label{almostend_recurrence} \end{align} It remains to upper bound the second term on the right-hand side: let $w^* \in W^*$, then \begin{align} & \mathbb{E}\left[ F(\bar{w}^{k+1};\xi_k) - F^* + \frac{1}{8\mu_k}\norm{\bar{w}^{k+1} - w^k}^2 \right] \nonumber\\ & \ge \mathbb{E}\left[\langle g_F(w^*;\xi_k), \bar{w}^{k+1} - w^* \rangle + \frac{1}{8\mu_k} \norm{\bar{w}^{k+1} - w^k}^2\right] \nonumber\\ & \ge \mathbb{E}\left[\langle g_F(w^*;\xi_k), w^k - w^* \rangle + \langle g_F(w^*;\xi_k), \bar{w}^{k+1} - w^k \rangle + \frac{1}{8\mu_k} \norm{\bar{w}^{k+1} - w^k}^2\right] \nonumber\\ & \ge \mathbb{E}\left[\langle g_F(w^*;\xi), w^k - w^* \rangle + \min_{z} \; \langle g_F(w^*;\xi), z - w^k \rangle + \frac{1}{8\mu_k}\norm{z - w^k}^2\right] \nonumber\\ & \ge \langle g_F(w^*), w^k - w^* \rangle - 2\mu_k \mathbb{E} \left[ \norm{g_F(w^*;\xi)}^2\right] = - 2\mu_k \mathbb{E} \left[ \norm{g_F(w^*;\xi)}^2\right], \label{end_recurence} \end{align} where we recall that we consider $g_F(w^*) = \mathbb{E}\left[ g_F(w^*;\xi) \right] = 0 $. Finally, from \eqref{almostend_recurrence} and \eqref{end_recurence} we obtain the above result. \end{proof} \begin{remark} Consider the deterministic setting $F(\cdot;\xi) = F(\cdot)$ and $\mu_k = \frac{1}{2L_f}$; then SPG-M becomes the proximal gradient algorithm and Theorem \ref{th_reccurence} holds with $ g_F(w^*;\xi) = g_F(w^*) = 0$, implying that $\Sigma = 0$.
Thus the well-known iteration complexity estimate $\mathcal{O}\left(\frac{L_f}{\sigma_f} \log(1/\epsilon) \right)$ \cite{Nes:13,BecTeb:09} of proximal gradient algorithm is recovered up to a constant. \end{remark} \begin{theorem}\label{th_iteration_complexity} Let assumptions of Theorem \ref{th_reccurence} hold. Also let $\delta_k = \mu_k^{3/2}/N^{1/2}$. Then the sequence $\{w^k\}_{k\ge 0}$ generated by SPG-M attains $ \mathbb{E}[\norm{w^T-w^*}^2] \le \epsilon$ within the following number of iterations: \begin{enumerate} \item[] [\textbf{Constant stepsize:}] \; Let $K > \mathcal{O}\left( \frac{4\mu_0}{\sigma_f^2 N}\right)$ and $ \mu_k := \frac{2\mu_0}{K} \in \left(0, \frac{1}{4L_f} \right]$, then $$ T \le T^{out}_c := \mathcal{O}\left(\max\left\{\frac{\max\{\Sigma^2, 2/\sigma_f\} \log (2r_0^2 /\epsilon)}{\epsilon \sigma_f^2 N}, \sqrt{\frac{ 72 \log^2 (2r_0^2 /\epsilon)}{\epsilon \sigma_f^3 N}} \right\}\right)$$ \item[][\textbf{Nonincreasing stepsize:}] \; For $\mu_k = \frac{2\mu_0}{k}$, then \begin{align*} T \le T^{out}_v = \mathcal{O}\left(\max\left\{\frac{\norm{w^0-w^*}^2}{\epsilon},\frac{\Sigma^2+\mu_0 + 1/\sigma_f}{N\epsilon} \right\} \right) \end{align*} \item[] [\textbf{Mixed stepsize:}] \; Let $\mu_k = \begin{cases} \frac{\mu_0}{4L_f} & k < T_1 \\ \frac{2\mu_0}{k}, & T_1 \le k \le T_2 \end{cases}$, then \begin{align*} T \le T^{out}_m: = \underbrace{\frac{4L_f}{\mu_0\sigma_f}\log\left( \frac{2\norm{w^0-w^*}^2}{\epsilon}\right)}_{T_1} + \underbrace{\mathcal{O}\left( \frac{C}{\epsilon N} \right)}_{T_2}, \end{align*} where $C = \frac{\mu_0\Sigma^2L_f\sigma_f + \mu_0^2\sigma_f + \mu_0L_f}{L_f^2\sigma_f^2} + \Sigma^2+\mu_0 + 1/\sigma_f.$ \end{enumerate} \end{theorem} \begin{proof} \noindent \textbf{Constant step-size}. Interesting convergence rates arise for proper constant stepsize $\mu_k$. 
Let $\mu_k := \mu \in \left(0, \frac{1}{4L_f} \right]$ and $\delta_k = \mu^{3/2}/N^{1/2}$; then Theorem \ref{th_reccurence} states that \begin{align} \mathbb{E}[\norm{w^k-w^*}^2] & \le \left(1 - \frac{\sigma_f\mu}{2} \right)^k r_0^2 + \frac{1 - (1-\sigma_f\mu/2)^{k}}{\sigma_f\mu/2}\frac{\mu^2}{N}\left(\Sigma^2 + 3\mu + \frac{2}{\sigma_f} \right) \nonumber\\ & \le \left(1 - \frac{\sigma_f\mu}{2} \right)^k r_0^2 + \frac{2\mu}{\sigma_f N}\left(\Sigma^2 + 3\mu + \frac{2}{\sigma_f} \right), \label{conststep_linconv} \end{align} which implies a linear decrease of the initial residual and, at the same time, linear convergence of $w^k$ towards a neighborhood of the optimum of radius $\frac{2\mu}{\sigma_f N}\left(\Sigma^2 + 3\mu + \frac{2}{\sigma_f} \right)$. The radius decreases linearly with the minibatch size $N$. With a more careful choice of the constant $\mu$ we can obtain a similar decrease in the SPG-M convergence rate. Given $K > 0$, let $\mu = \frac{2\mu_0}{K}$; then after \begin{align}\label{linear_resreduction} T = \frac{K}{\sigma_f \mu_0}\log\left( \frac{2r_0^2}{\epsilon}\right) \end{align} iterations, \eqref{conststep_linconv} leads to \begin{align} \mathbb{E}[\norm{w^T-w^*}^2] & \le \frac{2\mu_0}{K\sigma_f N}\left(\Sigma^2 + \frac{6\mu_0}{K} + \frac{2}{\sigma_f} \right) + \frac{\epsilon}{2} \label{no_opterror}\\ & \le \frac{2\log (2r_0^2 /\epsilon)}{T \sigma_f^2 N}\left(\Sigma^2 + \frac{6\log (2r_0^2 /\epsilon)}{\sigma_f T} + \frac{2}{\sigma_f} \right) + \frac{\epsilon}{2}. \nonumber \end{align} Overall, to obtain $\mathbb{E}[\norm{w^T-w^*}^2] \le \epsilon,$ SPG-M has to perform $$ \mathcal{O}\left(\max\left\{\frac{\max\{\Sigma^2, 2/\sigma_f\} \log (2r_0^2 /\epsilon)}{\epsilon \sigma_f^2 N}, \sqrt{\frac{ 72 \log^2 (2r_0^2 /\epsilon)}{\epsilon \sigma_f^3 N}} \right\}\right)$$ iterations. \vspace{5pt} \noindent \textbf{Variable stepsize}.
Now let $\mu_k = \frac{2\mu_0}{k}, \delta_k = \frac{\mu_k^{3/2}}{N^{1/2}}$; then Theorem \ref{th_reccurence} leads to: \begin{align}\label{varstep_rate1_pre} \mathbb{E}[\norm{w^k-w^*}^2] & \le \prod\limits_{j = 1}^k \left(1 - \frac{\sigma_f\mu_j}{2} \right) r_0^2 + \sum\limits_{i = 1}^k \frac{\mu_i^2}{N}\left(\Sigma^2 + 3\mu_i + \frac{2}{\sigma_f} \right) \prod\limits_{j = i+1}^k \left(1 - \frac{\sigma_f\mu_j}{2} \right) \end{align} Using the same (standard) analysis as in \cite{PatIro:20,PatNec:18}, we obtain: \begin{align}\label{varstep_rate1} \mathbb{E}[\norm{w^k-w^*}^2] & \le \underbrace{\mathcal{O}\left(\frac{r_0^2}{k}\right)}_{\text{optimization error}} + \underbrace{\mathcal{O}\left(\frac{\Sigma^2+\mu_0 + 1/\sigma_f}{Nk} \right)}_{\text{sample error}} \end{align} Notice that, for our stepsize choice, the optimization rate $\mathcal{O}(r_0^2/k)$ is optimal (for strongly convex stochastic optimization) and it is not affected by the variation of the minibatch size. Intuitively, the stochastic component within the optimization model \eqref{problem_intro} is not eliminated by increasing $N$; only a variance reduction is attained. In \cite{WanWan17_prox}, for objective functions with bounded gradients, an $\mathcal{O}\left( \frac{L^2}{\sigma_f N k} \right)$ convergence rate is stated for the Minibatch-Prox algorithm on the averaged sequence, using classical arguments. However, this rate is based on knowledge of $\sigma_f$, used in the stepsize sequence $\mu_k = \frac{2}{\sigma_f(k-1)}$. Moreover, in the first step the algorithm has to compute $\text{prox}_{h,+\infty}(\cdot;I^0) = \arg\min\limits_{z} \; \frac{1}{N} \sum\limits_{ i \in I^0}\; h(z;i)$, which for small $\sigma_f$ might be computationally expensive. Notice that, under knowledge of $\sigma_f$, we could obtain a similar sublinear rate in $\mathbb{E}[\norm{w^k-w^*}^2]$ using the same stepsize sequence.
Returning to \eqref{varstep_rate1}, it implies the following iteration complexity: \begin{align*} \mathcal{O}\left(\max\left\{\frac{r_0^2}{\epsilon},\frac{\Sigma^2+\mu_0 + 1/\sigma_f}{N\epsilon} \right\} \right). \end{align*} \vspace{5pt} \noindent \textbf{Mixed stepsize.} By combining the constant and variable stepsize policies, we aim to get a better ``optimization error'' and, overall, a better iteration complexity for SPG-M. Inspired by \eqref{linear_resreduction}-\eqref{no_opterror}, we are able to use a constant stepsize policy to bring $w^k$ into a small neighborhood of $w^*$ whose radius is inversely proportional to $N$. Let $\mu_k = \frac{\mu_0}{4L_f}$; using arguments similar to \eqref{linear_resreduction}-\eqref{no_opterror}, we have that after \begin{align*} T_1 \ge \frac{4L_f}{\mu_0\sigma_f}\log\left( \frac{2r_0^2}{\epsilon}\right) \end{align*} iterations the expected residual is bounded by: \begin{align} \mathbb{E}[\norm{w^{T_1}-w^*}^2] & \le \frac{\mu_0}{4L_f\sigma_f N}\left(\Sigma^2 + \frac{3\mu_0}{4L_f} + \frac{2}{\sigma_f} \right) + \frac{\epsilon}{2}. \end{align} Now, restarting SPG-M from $w^{T_1}$, we have from \eqref{varstep_rate1} that after $k$ further iterations: \begin{align} \mathbb{E}&[\norm{w^k-w^*}^2] \le \mathcal{O}\left(\frac{\mathbb{E}[\norm{w^{T_1}-w^*}^2]}{k}\right) + \mathcal{O}\left(\frac{\Sigma^2+\mu_0 + 1/\sigma_f}{Nk} \right)\nonumber \\ & \le \mathcal{O}\left(\frac{\mu_0\Sigma^2L_f\sigma_f + \mu_0^2\sigma_f + \mu_0L_f}{L_f^2\sigma_f^2 kN}\right) + \mathcal{O}\left(\frac{\epsilon}{2k}\right) + \mathcal{O}\left(\frac{\Sigma^2+\mu_0 + 1/\sigma_f}{Nk} \right). \label{mixed_step_convrate} \end{align}
Overall, SPG-M computes $w^{T_2}$ such that $\mathbb{E}[\norm{w^{T_2}-w^*}^2] \le \epsilon$ within a number of iterations bounded by \begin{align*} T_1 + T_2 = \frac{4L_f}{\mu_0\sigma_f}\log\left( \frac{2r_0^2}{\epsilon}\right) + \mathcal{O}\left( \frac{C}{\epsilon N} \right), \end{align*} where $C = \frac{\mu_0\Sigma^2L_f\sigma_f + \mu_0^2\sigma_f + \mu_0L_f}{L_f^2\sigma_f^2} + \Sigma^2+\mu_0 + 1/\sigma_f.$ \end{proof} We make a few observations about $T^{out}_m$. For a small condition number $\frac{L_f}{\sigma_f}$ the constant stage performs few iterations and the total complexity is dominated by $\mathcal{O}\left( \frac{C}{\epsilon N} \right)$. This bound (of the same order as \cite{WanWan17_prox}) presents some advantages: it does not require knowledge of $\sigma_f$, it is evaluated at the last iterate, and it makes no uniformly bounded gradient assumptions. On the other hand, for a sufficiently large minibatch size $N = \mathcal{O}(1/\epsilon)$ and a proper choice of $\mu_0$, one could perform a constant number of SPG-M iterations. In this case, the mixed stepsize convergence rate provides a link between the population risk and the empirical risk. \subsection{Total complexity} In this section we couple the complexity per iteration estimates from Section \ref{sec:com_per_iter} and Section \ref{sec:iteration_com} and provide upper bounds on the total complexity of SPG-M. Often the measure of sample complexity is used for stochastic algorithms, which refers to the entire number of data samples that are used during all iterations of a given algorithmic scheme. In our case, given the minibatch size $N$ and the total number of outer SPG-M iterations $T^{out}$, the sample complexity is given by $ NT^{out}$. In the best case $NT^{out}$ is upper bounded by $\mathcal{O}(1/\epsilon)$. We consider the dependency on the minibatch size $N$ and the accuracy $\epsilon$ to be of high importance; thus we will present below simplified upper bounds of our estimates.
\noindent In Section \ref{sec:com_per_iter}, we analyzed the complexity of a single SPG-M iteration for convex components $h(\cdot;\xi)$, denoted by $T^{in}_v + T^{in}_w$. Summing the inner effort $T^{in}_v + T^{in}_w$ over the number of outer iterations provided by Theorem \ref{th_iteration_complexity} leads us to the total computational complexity of SPG-M. We further derive the total complexity for SPG-M with the mixed stepsize policy and use the same notations as in Theorem \ref{th_iteration_complexity}: \begin{align*} T^{total} & = \sum\limits_{i=0}^{T^{out}_m} (T^{in}_v(N) + T^{in}_w(N;\delta)) \\ & \le \sum\limits_{i=0}^{T^{out}_m} \left( \mathcal{O}(Nn) + \mathcal{O}\left( \max \left\{ Nn, N^{3/4}n \frac{\mu_i R_d}{\delta_i^{1/2}} \right\} \right) \right) \\ & \le \sum\limits_{i=0}^{T^{out}_m} \left[ \mathcal{O}(Nn) + \mathcal{O}\left( Nn \mu_i^{1/4} R_d \right) \right] \\ & \overset{\eqref{mu_bound}}{\le} \mathcal{O}(T^{out}_m Nn) + \mathcal{O}\left( Nn (T^{out}_m)^{3/4} R_d \right) \\ & \le \mathcal{O}\left(\left(\frac{L_f}{\sigma_f}\log\left(1/\epsilon \right) + \frac{C}{N\epsilon} \right) Nn \right) + \mathcal{O}\left( Nn \left(\frac{L_f}{\sigma_f}\log\left(1/\epsilon \right) + \frac{C}{N\epsilon} \right)^{3/4} R_d \right) \end{align*} Extracting the dominant terms from the right-hand side we finally obtain: \begin{align*} T^{total} \le \mathcal{O}\left(\frac{L_fNn}{\sigma_f}\log\left(1/\epsilon \right) + \frac{Cn}{\epsilon} \right) + \mathcal{O}\left( N^{1/4}n \left(\frac{C}{\epsilon} \right)^{3/4} R_d \right). \end{align*} The first term $\mathcal{O}\left(\left(\frac{L_f}{\sigma_f}\log\left(1/\epsilon \right) + \frac{C}{N\epsilon} \right) Nn \right)$ is the total cost of the minibatch gradient steps $v^k$ and is highly dependent on the condition number $\frac{L_f}{\sigma_f}$. The second term comes from solving the proximal subproblem by the Dual Fast Gradient method and shows a stronger dependence on $N$ and $\epsilon$ than the first.
Although the complexity order is $\mathcal{O}\left( \frac{Cn}{\epsilon}\right)$, comparable with the optimal performance of single-sample stochastic schemes (with $N = 1$), the above estimate paves the way towards acceleration techniques based on distributed stochastic iterations. Reduction of the complexity per iteration to $\frac{1}{\tau}(T^{in}_v(N) + T^{in}_w(N;\delta))$, using multiple machines/processors, would guarantee direct improvements in the optimal number of iterations $\mathcal{O}(\frac{Cn}{\tau\epsilon})$. The superiority of distributed variants of SGD schemes for smooth optimization is clear (see \cite{NiuRec:11}), but our results set up the theoretical foundations for distributing the algorithms in the class of proximal gradient algorithms. \section{Numerical simulations} \subsection{Large Scale Support Vector Machine} To validate the theoretical implications of the previous sections, we first choose a well-known convex optimization problem in the machine learning field: training a binary Support Vector Machine (SVM). To test several metrics and methods, a spam-detection application is chosen, using the dataset from \cite{spamDetectionDataset}. The dataset contains about 32000 emails that are classified as either spam or non-spam. In our evaluation it was split into the classical $80\%$ for training and $20\%$ for testing. To build the feature space in a numerically understandable way, three preprocessing steps were made: \begin{enumerate} \item A dictionary of all possible words in the entire dataset, and how many times each occurred, is constructed. \item The indices of the top $n = 200$ most-used words in the dictionary are stored ($n$ is the number of features used in our classifier).
\item Each email entry $i$ then counts how many of each of the $n$ words appear in the $i$-th email's text. Thus, if $X_{i}$ is the numerical entry characteristic of email ${i}$, then $X_{ij}$ contains how many words with index $j$ in the top most-used words are in the email's text. \end{enumerate} The pseudocode for the optimization process is shown below: \begin{flushleft} \quad For $k\geq 0$ compute \\ 1. Choose randomly i.i.d. $N-$tuple $I^k \subset \Omega$ \\ 2. Update: \begin{align*} v^{k} & = \left(1 - \lambda\mu_k \right) w^k \\ u^{k} & = \arg\max_{u \in [0,1]^N} \;\; -\frac{\mu_k}{2N}\norm{\tilde{X}_{I^k} u}^2_2 + u^T(e - \tilde{X}_{I^k}^Tv^k) \\ w^{k+1} & = v^k + \frac{\mu_k}{N} \tilde{X}_{I^k} u^k \end{align*} 3. If the stopping criterion holds, then \textbf{STOP}, otherwise $k = k+1$. \end{flushleft} To compute the optimal solution $w^*$, we ran the state-of-the-art binary SVM method on the dataset, SGD with hinge loss \cite{spamDetectionPaper}, for a long time, until we reached the top accuracy of the model ($\approx 93.2\%$). Taking this as a performance baseline, we compare the training efficiency of SPG-M against SGD with minibatches. The comparison is made w.r.t. three metrics: \begin{enumerate} \item \textbf{Accuracy}: how well the current set of trained weights performs at classifying spam versus non-spam.
\item \textbf{Loss}: the hinge-loss value on the entire set of data. \item \textbf{Error} (or optimality measure): how far the current trained set of weights $w^k$, at any step $k$, is from the optimal one, i.e. $\norm{w^k - w^*}^2$. \end{enumerate} The comparative results between the two methods for each of the metrics defined above are shown in Fig. \ref{fig:acc}, \ref{fig:loss}, \ref{fig:error}. These were obtained by averaging several executions on the same machine, each with a different starting point. Overall, the results show the advantage of the SPG-M method over SGD: while both methods converge to the same optimal results after some time, SPG-M is capable of obtaining better results on all three metrics in a shorter time, regardless of the batch size being used. One interesting observation can be made about the SGD-Const results, where the loss metric tends to perform better (Fig. \ref{fig:loss}). This is due to a constant learning rate highly tuned to get the best possible result; however, such tuning is not a robust strategy in practice.
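The three-step update from the pseudocode above can be sketched in NumPy. This is an illustrative reimplementation on synthetic data, not the code used for the reported experiments; the dual maximization over $u \in [0,1]^N$ is carried out here by projected gradient ascent (function names and the data generator are our own):

```python
import numpy as np

def spgm_svm_step(w, Xt, mu, lam, inner_iters=200):
    """One SPG-M step for the hinge-loss l2-SVM model
    min_w (lam/2)||w||^2 + E[max(0, 1 - y_i x_i^T w)].

    Xt is the (n, N) minibatch matrix whose columns are the
    label-scaled features y_i * x_i (the tilde-X of the pseudocode).
    """
    n, N = Xt.shape
    # 1) gradient step on the smooth part f(w) = (lam/2)||w||^2
    v = (1.0 - lam * mu) * w
    # 2) dual maximization over u in [0,1]^N by projected gradient ascent
    G = Xt.T @ Xt                                   # minibatch Gram matrix
    step = 1.0 / ((mu / N) * np.linalg.norm(G, 2) + 1e-12)
    u = np.zeros(N)
    for _ in range(inner_iters):
        grad = (1.0 - Xt.T @ v) - (mu / N) * (G @ u)
        u = np.clip(u + step * grad, 0.0, 1.0)      # project onto [0,1]^N
    # 3) primal recovery
    return v + (mu / N) * (Xt @ u)
```

On a separable synthetic problem, iterating this step with a decreasing stepsize $\mu_k$ drives the classifier towards the SVM solution; the inner iteration count trades accuracy of the proximal step for per-iteration cost.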
\begin{figure}[htp] \centering \subfloat[batchsize 32]{% \includegraphics[width=0.5\textwidth]{svmbin_accuracy_32}% \label{fig:acc32}% }% \hfill% \subfloat[batchsize 128]{% \includegraphics[width=0.5\textwidth]{svmbin_accuracy_128}% \label{fig:acc128}% }% \caption{Comparative results between SPG-M and SGD for the \textbf{Accuracy} metric, using different batchsizes and functions for choosing the stepsize.} \label{fig:acc} \end{figure} \begin{figure}[htp] \centering \subfloat[batchsize 32]{% \includegraphics[width=0.5\textwidth]{svmbin_loss_32}% \label{fig:loss32}% }% \hfill% \subfloat[batchsize 128]{% \includegraphics[width=0.5\textwidth]{svmbin_loss_128}% \label{fig:loss128}% }% \caption{Comparative results between SPG-M and SGD for the \textbf{Loss} metric, using different batchsizes and functions for choosing the stepsize.} \label{fig:loss} \end{figure} \begin{figure}[htp] \centering \subfloat[batchsize 32]{% \includegraphics[width=0.5\textwidth]{svmbin_error_32}% \label{fig:error32}% }% \hfill% \subfloat[batchsize 128]{% \includegraphics[width=0.5\textwidth]{svmbin_error_128}% \label{fig:error128}% }% \caption{Comparative results between SPG-M and SGD for the \textbf{Error} metric, using different batchsizes and functions for choosing the stepsize.} \label{fig:error} \end{figure} \newpage \subsection{Parametric Sparse Representation} \input{parametric} \section{Conclusion} \noindent In this chapter we presented preliminary guarantees for minibatch stochastic proximal gradient schemes, which extend some well-known schemes in the literature. For future work, it would be interesting to analyze the behaviour of the SPG-M scheme on nonconvex learning models. We provided significant improvements in iteration complexity that future work can further reduce using distributed and parallel techniques, as hinted by the distributed variants of SGD schemes~\cite{NiuRec:11}. \bibliographystyle{plain} \section{Introduction} \noindent Statistical learning from data is strongly linked to the optimization of stochastic representation models. The traditional approach consists of learning an optimal hypothesis from a reasonable amount of data and further aiming to generalize its decision properties over the entire population. In general, this generalization is achieved by minimizing the population risk which, unlike empirical risk minimization, aims to compute the optimal predictor with the smallest generalization error.
Thus, in this paper we consider the following stochastic composite optimization problem: \begin{align} \label{problem_intro} \min\limits_{w \in \mathbb{R}^n} & \;\; F(w) := \mathbb{E}_{\xi \in \Omega} [f(w;\xi)] + \mathbb{E}_{\xi \in \Omega} [h(w;\xi)], \end{align} where $\xi$ is a random variable associated with the probability space $(\mathbb{P},\Omega)$, $f(w) := \mathbb{E}_{\xi \in \Omega} [f(w;\xi)]$ is smooth and $h(w) := \mathbb{E}_{\xi \in \Omega} [h(w;\xi)]$ is convex and nonsmooth. Most convex learning models over a population can be formulated following the structure of \eqref{problem_intro}, using a proper decomposition dictated by the nature of the prediction loss and the regularization. \noindent Let $f(\cdot;\xi) \equiv \ell(\cdot;\xi)$ be a convex smooth loss function, such as the quadratic loss, and $h \equiv r$ a ``simple'' convex regularization, such as $\norm{w}_1$; then the resulting model \begin{align*} \min\limits_{w \in \mathbb{R}^n} & \;\; \mathbb{E}_{\xi \in \Omega} [\ell(w;\xi)] + r(w) \end{align*} has been considered in several previous works \cite{MouBac:11,RosVil:14,NiuRec:11,ShaSin:11}, which analyzed the iteration complexity of stochastic proximal gradient algorithms. Here, the proximal map of $r$ is typically assumed to be computable in closed form or in linear time, as is the case for $\ell_1/\ell_2$ Support Vector Machines (SVMs). In order to approach more complicated regularizers, expressed as a sum of simple convex terms, as required by machine learning models such as group lasso \cite{ZhoKwo:14, ShiLin:15, HaLes:15}, CUR-like factorization \cite{WanWan17_prox}, graph trend filtering \cite{SalBia:17,VarLee:19}, dictionary learning \cite{DumIro:18}, and parametric sparse representation \cite{StoIro:19}, one has to be able to handle stochastic optimization problems with stochastic nonsmooth regularizations $r(w) = \mathbb{E}[r(w;\xi)]$.
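For the simple-regularizer model above, e.g. $r(w) = \lambda\norm{w}_1$, the proximal map is indeed available in closed form (soft-thresholding), so one stochastic proximal gradient step is cheap. The following is an illustrative sketch with our own naming, not code from the cited works:

```python
import numpy as np

def prox_l1(w, t):
    # closed-form proximal map of z -> t*||z||_1 (soft-thresholding)
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

def spg_step(w, a, b, mu, reg):
    # one stochastic proximal gradient step for the model
    #   min_w E[ (a^T w - b)^2 / 2 ] + reg * ||w||_1,
    # on a single sampled pair (a, b)
    grad = a * (a @ w - b)
    return prox_l1(w - mu * grad, mu * reg)
```

The point of the sketch is that the nonsmooth term never needs to be differentiated: the gradient step touches only the sampled loss, and the regularizer enters solely through its (here closed-form) proximal map.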
For instance, the grouped lasso regularization $\sum\limits_{j=1}^m \norm{D_j w}_2$ might be expressed as an expectation by considering $r(w;\xi) = \norm{D_{\xi}w}_2$. In this chapter we analyze extensions of the stochastic proximal gradient method for this type of models. Nonsmooth (convex) prediction losses (e.g. hinge loss, absolute value loss, $\epsilon-$insensitive loss) are also covered by \eqref{problem_intro} by taking $h(\cdot;\xi) = \ell(\cdot;\xi)$. We will use this approach with $f(w) = \frac{\lambda}{2}\norm{w}^2_2$ for solving the hinge-loss-$\ell_2$-SVM model. \vspace{5pt} \noindent \textbf{Contributions}. $(i)$ We derive sublinear convergence rates of SPG with minibatches for stochastic composite convex optimization, under a strong convexity assumption. \noindent $(ii)$ Besides the sublinear rates, we provide a computational complexity analysis, which takes into account the complexity of each iteration, for the stochastic proximal gradient algorithm with minibatches. We obtained $\mathcal{O}\left( \frac{1}{N \epsilon}\right)$, which highlights the optimal dependency on the minibatch size $N$ and the accuracy $\epsilon$. \noindent $(iii)$ We confirm empirically our theoretical findings through tests over $\ell_2-$SVMs (with hinge loss) on real data, and parametric sparse representation models on random data. \vspace{5pt} \noindent Following our analysis, reductions of the complexity per iteration, using multiple machines/processors, would guarantee direct improvements in the optimal number of iterations. The superiority of distributed variants of SGD schemes for smooth optimization is clear (see \cite{NiuRec:11}), but our results set up the theoretical foundations for distributing the algorithms in the class of proximal gradient algorithms. \noindent Further, we briefly recall the milestone results from the stochastic optimization literature, with focus on the complexity of stochastic first-order methods.
\subsection{Previous work} The natural tendency of using minibatches in order to accelerate stochastic algorithms and to obtain better generalization bounds is not new \cite{FriSch:12,GhaLan:16,WanWan17_minibatch,WanSre:19}. Empirical advantages have been observed in most convex and nonconvex models, although clear theoretical complexity reductions with the minibatch size are still under development for structured nonsmooth models. Great attention has been given in the last decade to the behaviour of stochastic gradient descent (SGD) with minibatches, see \cite{FriSch:12,GhaLan:16, NemJud:09,MouBac:11,NguNgu:18,Ned:10,Ned:11, GhaLan:16,NedNec:19}. In short, the SGD iteration computes the average of gradients on a small number of samples and takes a step in the negative direction. Although more samples in the minibatch imply smaller variance in the direction and, for moderate minibatches, bring a significant acceleration, recent evidence shows that by increasing the minibatch size over a certain threshold the acceleration vanishes or deteriorates the training performance \cite{GoyDol:17,HoffHub:17}. \noindent Since the analysis of SGD naturally requires various smoothness conditions, proper modifications are necessary to attack nonsmooth models. The stochastic proximal point (SPP) algorithm has been recently analyzed using various differentiability assumptions, see \cite{TouTra:16,RyuBoy:16,Bia:16,PatNec:18,KosNed:13,WanBer:16,AsiDuc:18}, and has shown surprising analytical and empirical performances. The works of \cite{WanWan17_minibatch,WanSre:19} analyzed minibatch SPP schemes with variable stepsizes and obtained $\mathcal{O}\left( \frac{1}{k N}\right)$ convergence rates under proper assumptions. For strongly convex problems, notice that they require multiple assumptions that we avoid in our analysis: strong convexity of each stochastic component, knowledge of the strong convexity constant, and Lipschitz continuity of the objective function.
Our analysis is based on strong convexity of the smooth component $f$ and only convexity of the nonsmooth component $h$. \vspace{5pt} \noindent A common generalization of SGD and SPP is given by the stochastic splitting methods. Splitting first-order schemes received significant attention due to their natural insight and simplicity in contexts where a sum of two components is minimized (see \cite{Nes:13,BecTeb:09}). Only recently have full stochastic composite models with stochastic regularizers been properly tackled in \cite{SalBia:17}, where almost sure asymptotic convergence is established for a stochastic splitting scheme in which each iteration represents a proximal gradient update using stochastic samples of $f$ and $h$. The stochastic splitting schemes are also related to the model-based methods developed in \cite{DavDru:19}. \vspace{5pt} \subsection{Preliminaries and notations} For $w,v \in \mathbb{R}^n$ we denote the scalar product $\langle w,v \rangle = w^T v$ and the Euclidean norm by $\|w\|=\sqrt{w^T w}$. We use the notations $\partial h(w;\xi)$ for the subdifferential set and $g_h(w;\xi)$ for a subgradient of $h(\cdot;\xi)$ at $w$. In the differentiable case we use the gradient notation $\nabla f(\cdot;\xi)$. We denote by $W^*$ the set of optimal solutions and by $w^*$ any optimal point of \eqref{problem_intro}. \begin{assumption} \label{assump_basic} The central problem \eqref{problem_intro} has a nonempty optimal set $W^*$ and satisfies: \noindent $(i)$ The function $f(\cdot;\xi)$ has $L_f$-Lipschitz gradient, i.e. there exists $L_f > 0$ such that: \begin{align*} \norm{\nabla f(w;\xi) - \nabla f(v;\xi)} \le L_f \norm{w-v}, \qquad \forall w,v \in \mathbb{R}^n, \xi \in \Omega, \end{align*} and $f$ is $\sigma_f-$strongly convex, i.e. there exists $\sigma_f \ge 0$ satisfying: \begin{align} f(w) \ge f(v) + \langle \nabla f(v), w-v\rangle + \frac{\sigma_f}{2}\norm{w-v}^2 \qquad \forall w,v \in \mathbb{R}^n.
\end{align} \noindent $(ii)$ There exists a subgradient mapping $g_h: \mathbb{R}^n \times \Omega \mapsto \mathbb{R}^n$ such that $g_h(w;\xi) \in \partial h(w;\xi)$ and $\mathbb{E}[g_h(w;\xi)] \in \partial h(w).$ \noindent $(iii)$ $h(\cdot;\xi)$ has bounded gradients on the optimal set: there exists $\mathcal{S} < \infty $ such that $\mathbb{E}\left[\norm{g_h(w^*;\xi)}^2\right] \le \mathcal{S}$ for all $w^* \in W^*$. \end{assumption} \noindent Condition $(i)$ of the above assumption is natural in composite (stochastic) optimization \cite{Nes:13,BecTeb:09,PatNec:18}. Condition $(ii)$ of Assumption \ref{assump_basic} guarantees the existence of a subgradient mapping for the functions $h(\cdot;\xi)$. Denote $\partial F(w;\xi) = \nabla f(w;\xi) + \partial h(w;\xi)$. Moreover, since $0 \in \partial F(w^*)$ for any $w^*\in W^*$, we assume in the sequel that $g_F(w^*):=\mathbb{E}[g_F(w^*;\xi)] = 0$. Condition $(iii)$ of Assumption \ref{assump_basic} is also standard in the literature on stochastic algorithms. \noindent We denote $[m] =\{1,\cdots, m\}$. Given a smoothing parameter $\mu>0$ and $I \subset [m]$, we define the prox operator: \begin{align*} \text{prox}_{h,\mu}(w;I) = \arg\min\limits_{z \in \mathbb{R}^n} \frac{1}{|I|}\sum\limits_{i \in I} \; h(z;i) + \frac{1}{2\mu} \norm{z - w}^2. \end{align*} In particular, when $h(w;\xi) = \mathbb{I}_{X_{\xi}}(w)$ the prox operator becomes the projection operator $\text{prox}_{h,\mu}(w;\xi) = \pi_{X_{\xi}}(w)$. Given the sequence $\mu_k = \frac{\mu_0}{k}$, a useful inequality for the sequel is: \begin{align}\label{mu_bound} \sum\limits_{i=1}^{T} \mu_i^{\gamma} \le \mu_0^{\gamma}\left( 1+ \frac{T^{1-\gamma}}{1-\gamma} \right), \qquad \gamma \in (0,1), \end{align} which follows from the integral comparison $\sum_{i=1}^{T} i^{-\gamma} \le 1 + \int_{1}^{T} x^{-\gamma} \mathrm{d}x$. \section{Stochastic Proximal Gradient with Minibatches} \noindent In the following section we present the Stochastic Proximal Gradient with Minibatches (SPG-M) and analyze the complexity of a single iteration under Assumption \ref{assump_basic}.
Let $w^0 \in \mathbb{R}^n$ be a starting point and $\{\mu_k\}_{k \ge 0}$ be a nonincreasing positive sequence of stepsizes. \begin{flushleft} \textbf{ Stochastic Proximal Gradient with Minibatches (SPG-M)}: \quad \\ For $k\geq 0$ compute: \\ 1. Choose randomly i.i.d. $N-$tuple $I^k \subset \Omega$ w.r.t. probability distribution $\mathbb{P}$\\ 2. Update: \begin{align*} v^k &= w^k - \frac{\mu_k}{N}\sum\limits_{i \in I^k} \nabla f(w^k;i) \\ w^{k+1} &= \arg\min\limits_{z \in \mathbb{R}^n} \frac{1}{N}\sum\limits_{i \in I^k} \; h(z;i) + \frac{1}{2\mu_k} \norm{z - v^k}^2 \end{align*} 3. If the stopping criterion holds, then \textbf{STOP}, otherwise $k = k+1$. \end{flushleft} Computing $v^k$ requires an effort equivalent to $N$ vanilla SGD iterations. However, to obtain $w^{k+1}$, a strongly convex inner problem has to be solved, and linear scaling in $N$ holds only in structured cases. In fact, we consider using specific inner schemes to generate a sufficiently accurate suboptimal solution of the inner subproblem. We provide more details in Section \ref{sec:com_per_iter}. In the particular scenario when $f=\norm{w}^2_2 $ and $h$ represents the nonsmooth prediction loss, SPG-M learns completely, at each iteration $k$, a predictor $w^{k+1}$ for the minibatch of data samples $I^k$, while maintaining a small distance from the previous predictor. \noindent For $N = 1$, the SPG-M iteration reduces to $w^{k+1} = \text{prox}_{h,\mu_k} \left(w^k - \mu_k \nabla f(w^k;\xi_k);\xi_k \right)$, which is mainly a Stochastic Proximal Gradient iteration based on stochastic proximal maps \cite{SalBia:17}. The asymptotic convergence of vanishing stepsize non-minibatch SPG (a single sample per iteration) has been analyzed in \cite{SalBia:17} with application to trend filtering. Moreover, a sublinear $\mathcal{O}(1/k)$ convergence rate for non-minibatch SPG has been provided in \cite{PatIro:20}.
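The two-step structure of the SPG-M update can be sketched as follows. This is an illustrative sketch with our own naming: the proximal map of the minibatch average of $h$ is passed in as a callable, which is closed-form in the example below but in general hides an inner iterative solver:

```python
import numpy as np

def spgm_iteration(w, batch, grad_f, prox_h_batch, mu):
    # step 1: minibatch gradient step on the smooth component f
    v = w - mu * np.mean([grad_f(w, i) for i in batch], axis=0)
    # step 2: proximal step on the minibatch average of h
    # (a callable here; in general it requires an inner solver)
    return prox_h_batch(v, batch, mu)
```

With the identity as `prox_h_batch` ($h = 0$), the update collapses to a plain minibatch SGD step; with a soft-thresholding prox ($h = \lambda\norm{\cdot}_1$, sample-independent), each iterate is additionally shrunk coordinatewise.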
However, deriving the sample complexity of SPG-M with arbitrary minibatches is not trivial, since it requires a proper estimation of the computational effort required by a single iteration. In the smooth case ($h = 0$), SPG-M reduces to vanilla minibatch SGD \cite{MouBac:11}: $$w^{k+1} = w^k - \frac{\mu_k}{N} \sum\limits_{i \in I^k}\; \nabla f(w^k;i).$$ On the other hand, for nonsmooth objective functions, when $f = 0$, SPG-M is equivalent to a minibatch variant of the SPP analyzed in \cite{PatNec:18, AsiDuc:18,TouTra:16,WanWan17_minibatch,WanSre:19}: $$w^{k+1} = \text{prox}_{h,\mu_k}(w^{k};I^k).$$ Next we analyze the computational (sample) complexity of the SPG-M iteration and suggest concrete situations when a minibatch size $N > 1$ is advantageous over the single-sample $N=1$ scheme. \subsection{Complexity per iteration} \label{sec:com_per_iter} In this section we estimate bounds on the sample complexity $T_v(N)$ of computing $v^{k}$ and $T_w(N)$ of computing $w^{k+1}$. Let $I \subset [m], |I| = N$; then it is obvious that the sample complexity of computing $v^k$ increases linearly with $N$, thus $$T_v(N) = \mathcal{O}(N).$$ A more attentive analysis is needed for the proximal step: \begin{align}\label{prox_step} \arg\min_{z \in \mathbb{R}^n}\;\; \frac{1}{N}\sum\limits_{i \in I} h(z;\xi_i) + \frac{1}{2\mu}\norm{z - w}^2. \end{align} Even for small $N > 0$ the solution of the above problem does not have a closed form, and an auxiliary iterative algorithm must be used to obtain an approximation of the optimal solution. For the above primal form, stochastic variance-reduction schemes are typically called upon to approach this finite-sum minimization, when $h$ obeys certain smoothness assumptions. However, to our knowledge, variance-reduction methods are limited for our general convex nonsmooth regularizers $h(\cdot;\xi)$.
SGD attains a $\delta-$suboptimal point $\norm{\tilde{z} - \text{prox}_{\mu}(w;I)}^2 \le \delta$ at a sublinear rate, in $\mathcal{O}\left(\frac{\mu}{\delta}\right)$ iterations. This sample complexity is independent of $N$, but to obtain high accuracy a large number of iterations has to be performed. The following dual form is more natural: \begin{align} & \min_{z \in \mathbb{R}^n} \;\; \frac{1}{N}\sum\limits_{i \in I} h(z;\xi_i) + \frac{1}{2\mu}\norm{z - w}^2 \nonumber \\ & =\min_{z \in \mathbb{R}^n}\;\; \frac{1}{N}\sum\limits_{i \in I} \max\limits_{v_i} \langle v_i,z\rangle - h^*(v_i;\xi_i) + \frac{1}{2\mu}\norm{z - w}^2 \nonumber \\ & = \max\limits_{v} \min_{z \in \mathbb{R}^n}\;\; \left \langle \frac{1}{N}\sum\limits_{i \in I} v_i,z \right\rangle - \frac{1}{N}\sum\limits_{i \in I} h^*(v_i;\xi_i) + \frac{1}{2\mu}\norm{z - w}^2 \nonumber \\ & =\max_{v \in \mathbb{R}^{Nn}}\;\; -\frac{\mu}{2N^2} \norm{\sum_{j=1}^N v_j}^2 + \left\langle \frac{1}{N}\sum\limits_{j = 1}^N v_j,w \right\rangle - \frac{1}{N}\sum\limits_{j=1}^N \; h^*(v_j;\xi_{i_j}) \label{dual_subproblem} \end{align} Moreover, in the interesting particular scenario when the regularizer $h(\cdot;\xi)$ results from the composition of a convex function with a linear operator, $h(w;\xi) = l(a_{\xi}^Tw)$, the dual variable reduces from $Nn$ to $N$ dimensions. In this case \begin{align} & \min_{z \in \mathbb{R}^n} \;\; \frac{1}{N}\sum\limits_{i \in I} l(a_{\xi_i}^T z) + \frac{1}{2\mu}\norm{z - w}^2 \nonumber \\ & = \frac{1}{N}\max_{v \in \mathbb{R}^{N}}\;\; -\frac{\mu}{2N} \norm{\sum_{j=1}^N a_{\xi_j} v_j}^2 + \left\langle \sum\limits_{j = 1}^N a_{\xi_j}v_j,w \right\rangle - \sum\limits_{j=1}^N \; l^*(v_j) \label{dual_composition_subproblem} \end{align} Once the dual solution $v^*$ is computed, the primal one is recovered by $z(w) = w - \frac{\mu}{N}\sum\limits_{i=1}^N v_i^*$ for \eqref{dual_subproblem} or $z(w) = w - \frac{\mu}{N}\sum\limits_{i=1}^N a_{\xi_i} v_i^*$ for \eqref{dual_composition_subproblem}.
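As a numerical sanity check of the dual reformulation and of the primal recovery formula, the following sketch treats the particular choice $l = |\cdot|$, whose conjugate $l^*$ is the indicator of $[-1,1]$: it solves the dual by projected gradient ascent (one of several possible inner solvers; the code and naming are our own) and reports the duality gap, which vanishes at the optimum:

```python
import numpy as np

def prox_via_dual(w, A, mu, iters=4000):
    """Approximate the prox of (1/N) sum_i |a_i^T z| + ||z - w||^2/(2*mu)
    via projected gradient ascent on its dual; here l = |.|, whose
    conjugate l* is the indicator of [-1, 1].  Rows of A are the a_i."""
    N = A.shape[0]
    L = (mu / N**2) * np.linalg.norm(A @ A.T, 2) + 1e-12   # dual gradient Lipschitz const.
    v = np.zeros(N)
    for _ in range(iters):
        grad = (A @ w) / N - (mu / N**2) * (A @ (A.T @ v))
        v = np.clip(v + grad / L, -1.0, 1.0)               # project onto [-1,1]^N
    z = w - (mu / N) * (A.T @ v)                           # primal recovery
    s = A.T @ v
    dual_val = (s @ w) / N - (mu / (2 * N**2)) * (s @ s)
    primal_val = np.mean(np.abs(A @ z)) + (z - w) @ (z - w) / (2 * mu)
    return z, primal_val - dual_val                        # gap ~ 0 at the optimum
```

Since the recovered primal point is the exact minimizer of the Lagrangian at the current dual iterate, weak duality keeps the reported gap nonnegative, and it shrinks to zero as the dual iterate approaches $v^*$.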
In the rest of this section we analyze only the general subproblem \eqref{dual_subproblem}, since the sample complexity estimates translate easily to the particular instance \eqref{dual_composition_subproblem}. For a suboptimal $\tilde{v}$ satisfying $\norm{\tilde{v} - v^*} \le \delta$, primal suboptimality of order $\delta$ is obtained as follows: let $\tilde{z}(w) = w - \frac{\mu}{N}\sum\limits_{i=1}^N \tilde{v}_i$, then \begin{align}\label{primal_dual_subopt} \norm{\tilde{z}(w) - z(w)} & = \mu \left\|\frac{1}{N} \left(\sum\limits_{i=1}^N \tilde{v}_i - \sum\limits_{i=1}^N v_i^*\right) \right\| \nonumber \\ & \le \frac{\mu}{N} \sum\limits_{i=1}^N \left\| \tilde{v}_i - v_i^* \right\| \le \frac{\mu}{\sqrt{N}} \norm{v^* - \tilde{v}} \le \frac{\mu \delta}{\sqrt{N}} \end{align} Further we provide several sample complexity estimates for various dual algorithms to attain primal $\delta$-suboptimality, for general and particular regularizers $h(\cdot;\xi)$. Notice that the Hessian of the smooth component $\frac{\mu}{2N} \norm{\sum_{j =1}^N v_j}^2$ is upper bounded by $\mathcal{O}(\mu)$. Without any growth properties on $h^*(\cdot;\xi)$, one is able to solve \eqref{dual_subproblem} using Dual Fast Gradient schemes, with $\mathcal{O}(Nn)$ iteration cost, in $\mathcal{O}\left(\max\left\{Nn, Nn \sqrt{\frac{\mu R_d^2}{\delta}} \right\} \right)$ sample evaluations to get a $\delta$-accurate dual solution. This implies, by \eqref{primal_dual_subopt}, that $$T_w^{in} (N;\delta) = \mathcal{O}\left( \max \left\{ Nn, N^{3/4}n \frac{\mu R_d}{\delta^{1/2}} \right\} \right)$$ sample evaluations are needed to obtain primal $\delta$-suboptimality. For polyhedral $h^*(\cdot;\xi)$, there are many first-order algorithms that attain linear convergence on the above composite quadratic problem \cite{HsiSi:17}.
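The chain \eqref{primal_dual_subopt} is just the triangle inequality followed by Cauchy--Schwarz over the $N$ blocks. A quick numeric illustration (with arbitrary vectors standing in for $\tilde{v}$ and $v^*$; the constants are our own) confirms the $\mu/\sqrt{N}$ factor:

```python
import math
import random

random.seed(1)
mu, N, n = 0.3, 8, 5
v_star = [[random.gauss(0, 1) for _ in range(n)] for _ in range(N)]
v_tilde = [[x + random.gauss(0, 0.05) for x in block] for block in v_star]

# ||z~(w) - z(w)|| = (mu/N) * || sum_i (v~_i - v*_i) ||   (the w terms cancel)
diff_sum = [sum(v_tilde[i][j] - v_star[i][j] for i in range(N)) for j in range(n)]
lhs = mu / N * math.sqrt(sum(d*d for d in diff_sum))

# (mu/sqrt(N)) * ||v~ - v*|| over the stacked N*n-dimensional dual vector
stacked = math.sqrt(sum((v_tilde[i][j] - v_star[i][j])**2
                        for i in range(N) for j in range(n)))
rhs = mu / math.sqrt(N) * stacked
print(lhs <= rhs + 1e-12)  # the bound holds
```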
For instance, the Proximal Gradient algorithm has $\mathcal{O}(Nn)$ arithmetic complexity per iteration and attains a $\delta$-suboptimal dual solution in $\mathcal{O}\left(\frac{L}{\hat{\sigma}(I)}\log\left( \frac{1}{\delta} \right)\right)$ iterations, where $L$ is the Lipschitz gradient constant and $\hat{\sigma}(I)$ represents the quadratic growth constant of the dual objective function \eqref{dual_subproblem}. Therefore $\delta$-dual suboptimality requires $\mathcal{O}\left(\frac{N \mu}{\hat{\sigma}(I)}\log\left( \frac{1}{\delta} \right)\right)$ sample evaluations, and $$ T_{w,poly}^{in}(N;\delta) = \mathcal{O}\left(\max \left\{ Nn,\frac{N \mu}{\hat{\sigma}(I)}\log\left( \frac{\mu}{N^{1/2}\delta} \right) \right\}\right)$$ sample evaluations suffice to attain a primal $\delta$-suboptimal solution. \section{Iteration complexity in expectation} \label{sec:iteration_com} \noindent Further, in this section, we estimate the number of SPG-M iterations needed to get an $\epsilon$-suboptimal solution of \eqref{problem_intro}. We will use the following elementary relations: for any $a,b \in \mathbb{R}^n$ and $\beta > 0$ we have \begin{align} \langle a,b \rangle & \le \frac{1}{2\beta}\norm{a}^2 + \frac{\beta}{2}\norm{b}^2 \label{elem_bound_norm1}\\ \norm{a+b}^2 &\le \left(1 + \frac{1}{\beta}\right)\norm{a}^2 + (1 + \beta)\norm{b}^2.\label{elem_bound_norm2} \end{align} The main recurrences that finally generate our sublinear convergence rates are presented below. \begin{theorem}\label{th_reccurence} Let Assumptions \ref{assump_basic} hold and $\mu_k \le \frac{1}{4L_f}$.
Assume $ \norm{w^{k+1} - \text{prox}_{h,\mu_k}(v^k;I^k)} \le \delta_k$; then the sequence $\{w^k\}_{k \ge 0}$ generated by SPG-M satisfies: \begin{align*} \mathbb{E}[\norm{w^{k+1} - w^*}^2] \le & \left(1 - \frac{\sigma_f \mu_k}{2} \right) \mathbb{E}[\norm{w^k-w^*}^2] \\ & \hspace{2cm} + \mu_k^2 \frac{\mathbb{E} \left[ \norm{g_F(w^*;\xi)}^2\right]}{N} + \left( 3 + \frac{2}{\sigma_f \mu_k}\right)\delta_k^2 \end{align*} \end{theorem} \begin{proof} Denote $\bar{w}^{k+1} = \text{prox}_{h,\mu_k}(v^k;I^k)$ and recall that $\frac{1}{\mu_k}\left(v^k - \bar{w}^{k+1} \right) \in \partial h(\bar{w}^{k+1};I^k)$, which implies that there exists a subgradient $g_h(\bar{w}^{k+1};I^k)$ such that \begin{align}\label{subproblem_optcond} g_h(\bar{w}^{k+1};I^k) + \frac{1}{\mu_k}\left(\bar{w}^{k+1} -v^k \right) = 0. \end{align} Using these optimality conditions we have: \begin{align} &\norm{w^{k+1} - w^*}^2 = \norm{w^k - w^*}^2 + 2 \langle w^{k+1} - w^k, w^k-w^* \rangle + \norm{w^{k+1} - w^k}^2 \nonumber\\ & \overset{\eqref{elem_bound_norm2}}{\le} \norm{w^k - w^*}^2 + 2 \langle \bar{w}^{k+1} - w^k, w^k-w^* \rangle + 2 \langle w^{k+1} - \bar{w}^{k+1}, w^k-w^* \rangle \nonumber\\ & \hspace{3cm} + \frac{3}{2}\norm{\bar{w}^{k+1} - w^k}^2 + 3\norm{\bar{w}^{k+1} - w^{k+1}}^2 \nonumber\\ & = \norm{w^k - w^*}^2 + 2 \langle \bar{w}^{k+1} - w^k, \bar{w}^{k+1}-w^* \rangle + 2 \langle w^{k+1} - \bar{w}^{k+1}, w^k-w^* \rangle \nonumber\\ & \hspace{3cm} - \frac{1}{2}\norm{\bar{w}^{k+1} - w^k}^2 + 3\norm{\bar{w}^{k+1} - w^{k+1}}^2 \nonumber\\ & \overset{\eqref{elem_bound_norm1}}{\le} \left(1 + \frac{\sigma_f\mu_k}{2}\right)\norm{w^k-w^*}^2 + 2 \langle \mu_k \nabla f(w^k;I_k) + \mu_k g_h(\bar{w}^{k+1};I_k), w^* - \bar{w}^{k+1} \rangle \nonumber\\ & \hspace{3cm} - \frac{1}{2}\norm{\bar{w}^{k+1} - w^k}^2 + \left(3 + \frac{2}{\sigma_f\mu_k}\right)\delta_k^2.
\label{start_reccurence} \end{align} Now, by using the convexity of $h$ and the Lipschitz continuity of $\nabla f(\cdot;I_k)$, we can further derive: \begin{align*} &2 \mu_k\langle \nabla f(w^k;I_k) + g_h(\bar{w}^{k+1};I_k), w^* - \bar{w}^{k+1} \rangle - \frac{1}{2}\norm{\bar{w}^{k+1} - w^k}^2 \\ & \le 2\mu_k \langle \nabla f(w^k;I_k), w^* - \bar{w}^{k+1} \rangle - \frac{1}{2}\norm{\bar{w}^{k+1} - w^k}^2 \nonumber\\ & \hspace{5cm} + 2\mu_k(h(w^* ;I_k) - h(\bar{w}^{k+1};I_k))\\ & \le - 2\mu_k \bigg( \langle \nabla f(w^k;I_k), \bar{w}^{k+1} - w^k \rangle + \frac{1}{8\mu_k}\norm{\bar{w}^{k+1} - w^k}^2 \\ & + h(\bar{w}^{k+1};I_k)\bigg) + 2\mu_k \langle \nabla f(w^k;I_k), w^* - w^{k}\rangle - \frac{1}{4}\norm{\bar{w}^{k+1} - w^k}^2 + 2\mu_k h(w^*;I_k). \end{align*} By taking expectations w.r.t. $\xi_k$ on both sides, we obtain: \begin{align} & - 2\mu_k \mathbb{E} \bigg[ \langle \nabla f(w^k;I_k), \bar{w}^{k+1} - w^k \rangle + \frac{1}{8\mu_k}\norm{\bar{w}^{k+1} - w^k}^2 + h(\bar{w}^{k+1};I_k)\bigg] \nonumber\\ & + 2\mu_k \langle \nabla f(w^k), w^* - w^{k}\rangle - \frac{1}{4}\mathbb{E}\left[\norm{\bar{w}^{k+1} - w^k}^2\right] + 2\mu_k h(w^*) \nonumber \\ &\overset{\mu_k < \frac{1}{4L_f}}{\le} 2\mu_k \mathbb{E}\left[ f(w^k;I_k) - F(\bar{w}^{k+1};I_k) \right] + 2\mu_k \langle \nabla f(w^k), w^* - w^{k}\rangle \nonumber\\ & \hspace{3cm} - \frac{1}{4}\mathbb{E}[\norm{\bar{w}^{k+1} - w^k}^2] + 2\mu_k h(w^*) \nonumber \\ & \le 2\mu_k \left(f(w^k) - \mathbb{E}\left[F(\bar{w}^{k+1};I_k) \right] \right) \nonumber \\ & \hspace{2cm} + 2\mu_k \left( F(w^*) - f(w^k) \!-\! \frac{\sigma_f}{2}\norm{w^k-w^*}^2 \right) \!-\! \frac{1}{4}\norm{\bar{w}^{k+1} - w^k}^2 \nonumber \\ & \!= - \sigma_f \mu_k\norm{w^k-w^*}^2 + 2\mu_k \mathbb{E}\left[ F(w^*) \!-\! F(\bar{w}^{k+1};I_k) - \frac{1}{8\mu_k} \norm{\bar{w}^{k+1} - w^k}^2 \right]\!\!.
\label{rel_reccurence} \end{align} By combining \eqref{rel_reccurence} with \eqref{start_reccurence} and by taking the expectation over the entire index history we obtain: \begin{align} \mathbb{E}[\norm{w^{k+1} - w^*}^2] & \le \left(1 - \frac{\sigma_f \mu_k}{2}\right)\mathbb{E}\left[\norm{w^k-w^*}^2\right] \nonumber \\ & \hspace{0cm} + 2\mu_k \mathbb{E}\left[ F(w^*) - F(\bar{w}^{k+1};I_k) - \frac{1}{8\mu_k} \norm{\bar{w}^{k+1} - w^k}^2 \right].\label{almostend_recurrence} \end{align} It remains to upper bound the second term on the right-hand side: let $w^* \in W^*$, then \begin{align} & \mathbb{E}\left[ F(\bar{w}^{k+1};\xi_k) - F^* + \frac{1}{8\mu_k}\norm{\bar{w}^{k+1} - w^k}^2 \right] \nonumber\\ & \ge \mathbb{E}\left[\langle g_F(w^*;\xi_k), \bar{w}^{k+1} - w^* \rangle + \frac{1}{8\mu_k} \norm{\bar{w}^{k+1} - w^k}^2\right] \nonumber\\ & \ge \mathbb{E}\left[\langle g_F(w^*;\xi_k), w^k - w^* \rangle + \langle g_F(w^*;\xi_k), \bar{w}^{k+1} - w^k \rangle + \frac{1}{8\mu_k} \norm{\bar{w}^{k+1} - w^k}^2\right] \nonumber\\ & \ge \mathbb{E}\left[\langle g_F(w^*;\xi), w^k - w^* \rangle + \min_{z} \; \langle g_F(w^*;\xi), z - w^k \rangle + \frac{1}{8\mu_k}\norm{z - w^k}^2\right] \nonumber\\ & \ge \langle g_F(w^*), w^k - w^* \rangle - 2\mu_k \mathbb{E} \left[ \norm{g_F(w^*;\xi)}^2\right] = - 2\mu_k \mathbb{E} \left[ \norm{g_F(w^*;\xi)}^2\right], \label{end_recurence} \end{align} where we recall that we consider $g_F(w^*) = \mathbb{E}\left[ g_F(w^*;\xi) \right] = 0 $. Finally, from \eqref{almostend_recurrence} and \eqref{end_recurence} we obtain the above result. \end{proof} \begin{remark} Consider the deterministic setting $F(\cdot;\xi) = F(\cdot)$ and $\mu_k = \frac{1}{2L_f}$; then SPG-M becomes the proximal gradient algorithm and Theorem \ref{th_reccurence} holds with $ g_F(w^*;\xi) = g_F(w^*) = 0$, implying that $\Sigma = 0$.
Thus the well-known iteration complexity estimate $\mathcal{O}\left(\frac{L_f}{\sigma_f} \log(1/\epsilon) \right)$ \cite{Nes:13,BecTeb:09} of the proximal gradient algorithm is recovered up to a constant. \end{remark} \begin{theorem}\label{th_iteration_complexity} Let the assumptions of Theorem \ref{th_reccurence} hold and let $\delta_k = \mu_k^{3/2}/N^{1/2}$. Then the sequence $\{w^k\}_{k\ge 0}$ generated by SPG-M attains $ \mathbb{E}[\norm{w^T-w^*}^2] \le \epsilon$ within the following number of iterations: \begin{enumerate} \item[] [\textbf{Constant stepsize:}] \; Let $K > \mathcal{O}\left( \frac{4\mu_0}{\sigma_f^2 N}\right)$ and $ \mu_k := \frac{2\mu_0}{K} \in \left(0, \frac{1}{4L_f} \right]$; then $$ T \le T^{out}_c := \mathcal{O}\left(\max\left\{\frac{\max\{\Sigma^2, 2/\sigma_f\} \log (2r_0^2 /\epsilon)}{\epsilon \sigma_f^2 N}, \sqrt{\frac{ 72 \log^2 (2r_0^2 /\epsilon)}{\epsilon \sigma_f^3 N}} \right\}\right)$$ \item[][\textbf{Nonincreasing stepsize:}] \; Let $\mu_k = \frac{2\mu_0}{k}$; then \begin{align*} T \le T^{out}_v = \mathcal{O}\left(\max\left\{\frac{\norm{w^0-w^*}^2}{\epsilon},\frac{\Sigma^2+\mu_0 + 1/\sigma_f}{N\epsilon} \right\} \right) \end{align*} \item[] [\textbf{Mixed stepsize:}] \; Let $\mu_k = \begin{cases} \frac{\mu_0}{4L_f} & k < T_1 \\ \frac{2\mu_0}{k}, & T_1 \le k \le T_2 \end{cases}$; then \begin{align*} T \le T^{out}_m: = \underbrace{\frac{4L_f}{\mu_0\sigma_f}\log\left( \frac{2\norm{w^0-w^*}^2}{\epsilon}\right)}_{T_1} + \underbrace{\mathcal{O}\left( \frac{C}{\epsilon N} \right)}_{T_2}, \end{align*} where $C = \frac{\mu_0\Sigma^2L_f\sigma_f + \mu_0^2\sigma_f + \mu_0L_f}{L_f^2\sigma_f^2} + \Sigma^2+\mu_0 + 1/\sigma_f.$ \end{enumerate} \end{theorem} \begin{proof} \noindent \textbf{Constant step-size}. Interesting convergence rates arise for a properly chosen constant stepsize $\mu_k$.
Let $\mu_k := \mu \in \left(0, \frac{1}{4L_f} \right]$ and $\delta_k = \mu^{3/2}/N^{1/2}$; then Theorem \ref{th_reccurence} states that \begin{align} \mathbb{E}[\norm{w^k-w^*}^2] & \le \left(1 - \frac{\sigma_f\mu}{2} \right)^k r_0^2 + \frac{1 - (1-\sigma_f\mu/2)^{k}}{\sigma_f\mu/2}\frac{\mu^2}{N}\left(\Sigma^2 + 3\mu + \frac{2}{\sigma_f} \right) \nonumber\\ & \le \left(1 - \frac{\sigma_f\mu}{2} \right)^k r_0^2 + \frac{2\mu}{\sigma_f N}\left(\Sigma^2 + 3\mu + \frac{2}{\sigma_f} \right), \label{conststep_linconv} \end{align} which implies a linear decrease of the initial residual and, at the same time, linear convergence of $w^k$ towards a neighborhood of the optimum of radius $\frac{2\mu}{\sigma_f N}\left(\Sigma^2 + 3\mu + \frac{2}{\sigma_f} \right)$. The radius decreases linearly with the minibatch size $N$. With a more careful choice of the constant $\mu$ we can obtain the same decrease in the SPG-M convergence rate. Given $K > 0$, let $\mu = \frac{2\mu_0}{K}$; then after \begin{align}\label{linear_resreduction} T = \frac{K}{\sigma_f \mu_0}\log\left( \frac{2r_0^2}{\epsilon}\right) \end{align} iterations, \eqref{conststep_linconv} leads to \begin{align} \mathbb{E}[\norm{w^T-w^*}^2] & \le \frac{2\mu_0}{K\sigma_f N}\left(\Sigma^2 + \frac{6\mu_0}{K} + \frac{2}{\sigma_f} \right) + \frac{\epsilon}{2} \label{no_opterror}\\ & \le \frac{2\log (2r_0^2 /\epsilon)}{T \sigma_f^2 N}\left(\Sigma^2 + \frac{6\log (2r_0^2 /\epsilon)}{\sigma_f T} + \frac{2}{\sigma_f} \right) + \frac{\epsilon}{2}. \nonumber \end{align} Overall, to obtain $\mathbb{E}[\norm{w^T-w^*}^2] \le \epsilon,$ SPG-M has to perform $$ \mathcal{O}\left(\max\left\{\frac{\max\{\Sigma^2, 2/\sigma_f\} \log (2r_0^2 /\epsilon)}{\epsilon \sigma_f^2 N}, \sqrt{\frac{ 72 \log^2 (2r_0^2 /\epsilon)}{\epsilon \sigma_f^3 N}} \right\}\right)$$ iterations. \vspace{5pt} \noindent \textbf{Variable stepsize}.
Now let $\mu_k = \frac{2\mu_0}{k}, \delta_k = \frac{\mu_k^{3/2}}{N^{1/2}}$; then Theorem \ref{th_reccurence} leads to: \begin{align}\label{varstep_rate1_pre} \mathbb{E}[\norm{w^k-w^*}^2] & \le \prod\limits_{j = 1}^k \left(1 - \frac{\sigma_f\mu_j}{2} \right) r_0^2 + \sum\limits_{i = 1}^k \frac{\mu_i^2}{N}\left(\Sigma^2 + 3\mu_i + \frac{2}{\sigma_f} \right) \prod\limits_{j = i+1}^k \left(1 - \frac{\sigma_f\mu_j}{2} \right) \end{align} By further using the same (standard) analysis from \cite{PatIro:20,PatNec:18}, we obtain: \begin{align}\label{varstep_rate1} \mathbb{E}[\norm{w^k-w^*}^2] & \le \underbrace{\mathcal{O}\left(\frac{r_0^2}{k}\right)}_{\text{optimization error}} + \underbrace{\mathcal{O}\left(\frac{\Sigma^2+\mu_0 + 1/\sigma_f}{Nk} \right)}_{\text{sample error}} \end{align} Notice that, for our stepsize choice, the optimization rate $\mathcal{O}(r_0^2/k)$ is optimal (for strongly convex stochastic optimization) and is not affected by the variation of the minibatch size. Intuitively, the stochastic component within the optimization model \eqref{problem_intro} is not eliminated by increasing $N$; only a variance reduction is attained. In \cite{WanWan17_prox}, for objective functions with bounded gradients, an $\mathcal{O}\left( \frac{L^2}{\sigma_f N k} \right)$ convergence rate is stated for the Minibatch-Prox algorithm in the average sequence, using classical arguments. However, this rate is based on knowledge of $\sigma_f$, which is used in the stepsize sequence $\mu_k = \frac{2}{\sigma_f(k-1)}$. Moreover, in the first step the algorithm has to compute $\text{prox}_{h,+\infty}(\cdot;I^0) = \arg\min\limits_{z} \; \frac{1}{N} \sum\limits_{ i \in I}\; h(z;i)$, which for small $\sigma_f$ might be computationally expensive. Notice that, under knowledge of $\sigma_f$, we could obtain a similar sublinear rate in $\mathbb{E}[\norm{w^k-w^*}^2]$ using the same stepsize sequence.
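The stepsize policies analyzed in Theorem \ref{th_iteration_complexity} are easy to express as simple generators; the concrete constants passed below ($\mu_0$, $L_f$, $T_1$) are illustrative placeholders only.

```python
import itertools

def constant_steps(mu0, K):
    # mu_k = 2*mu0/K, fixed for the whole run (requires the horizon K in advance)
    while True:
        yield 2.0 * mu0 / K

def decreasing_steps(mu0):
    # mu_k = 2*mu0/k, the nonincreasing policy
    k = 1
    while True:
        yield 2.0 * mu0 / k
        k += 1

def mixed_steps(mu0, L_f, T1):
    # constant mu0/(4*L_f) while k < T1, then switch to the decreasing 2*mu0/k
    k = 1
    while True:
        yield mu0 / (4.0 * L_f) if k < T1 else 2.0 * mu0 / k
        k += 1

print([round(s, 3) for s in itertools.islice(mixed_steps(1.0, 2.0, 4), 6)])
# -> [0.125, 0.125, 0.125, 0.5, 0.4, 0.333]
```

The mixed policy first contracts the residual linearly at a constant step, then hands over to the $\mathcal{O}(1/k)$ decay once the iterate is in a small neighborhood of $w^*$, mirroring the $T_1/T_2$ split in the theorem.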
Returning to \eqref{varstep_rate1}, it implies the following iteration complexity: \begin{align*} \mathcal{O}\left(\max\left\{\frac{r_0^2}{\epsilon},\frac{\Sigma^2+\mu_0 + 1/\sigma_f}{N\epsilon} \right\} \right). \end{align*} \vspace{5pt} \noindent \textbf{Mixed stepsize.} By combining the constant and variable stepsize policies, we aim to get a better ``optimization error'' and, overall, a better iteration complexity for SPG-M. Inspired by \eqref{linear_resreduction}-\eqref{no_opterror}, we are able to use a constant stepsize policy to bring $w^k$ into a small neighborhood of $w^*$ whose radius is inversely proportional to $N$. Let $\mu_k = \frac{\mu_0}{4L_f}$; using similar arguments as in \eqref{linear_resreduction}-\eqref{no_opterror}, we have that after \begin{align*} T_1 \ge \frac{4L_f}{\mu_0\sigma_f}\log\left( \frac{2r_0^2}{\epsilon}\right) \end{align*} iterations the expected residual is bounded by: \begin{align} \mathbb{E}[\norm{w^{T_1}-w^*}^2] & \le \frac{\mu_0}{4L_f\sigma_f N}\left(\Sigma^2 + \frac{3\mu_0}{4L_f} + \frac{2}{\sigma_f} \right) + \frac{\epsilon}{2}. \end{align} Now, restarting SPG-M from $w^{T_1}$, we have from \eqref{varstep_rate1} that: \begin{align} \mathbb{E}&[\norm{w^k-w^*}^2] \le \mathcal{O}\left(\frac{\mathbb{E}[\norm{w^{T_1}-w^*}^2]}{k}\right) + \mathcal{O}\left(\frac{\Sigma^2+\mu_0 + 1/\sigma_f}{Nk} \right)\nonumber \\ & \le \mathcal{O}\left(\frac{\mu_0\Sigma^2L_f\sigma_f + \mu_0^2\sigma_f + \mu_0L_f}{L_f^2\sigma_f^2 kN}\right) + \mathcal{O}\left(\frac{\epsilon}{2k}\right) + \mathcal{O}\left(\frac{\Sigma^2+\mu_0 + 1/\sigma_f}{Nk} \right). \label{mixed_step_convrate} \end{align}
Overall, SPG-M computes $w^{T_2}$ such that $\mathbb{E}[\norm{w^{T_2}-w^*}^2] \le \epsilon$ within a number of iterations bounded by \begin{align*} T_1 + T_2 = \frac{4L_f}{\mu_0\sigma_f}\log\left( \frac{2r_0^2}{\epsilon}\right) + \mathcal{O}\left( \frac{C}{\epsilon N} \right), \end{align*} where $C = \frac{\mu_0\Sigma^2L_f\sigma_f + \mu_0^2\sigma_f + \mu_0L_f}{L_f^2\sigma_f^2} + \Sigma^2+\mu_0 + 1/\sigma_f.$ \end{proof} We make a few observations about $T^{out}_m$. For a small condition number $\frac{L_f}{\sigma_f}$ the constant stage performs few iterations and the total complexity is dominated by $\mathcal{O}\left( \frac{C}{\epsilon N} \right)$. This bound (of the same order as \cite{WanWan17_prox}) presents some advantages: it does not require knowledge of $\sigma_f$, it is evaluated at the last iterate, and it avoids uniformly bounded gradient assumptions. On the other hand, for a sufficiently large minibatch size $N = \mathcal{O}(1/\epsilon)$ and a proper choice of $\mu_0$, one could perform a constant number of SPG-M iterations. In this case, the mixed stepsize convergence rate provides a link between the population risk and the empirical risk. \subsection{Total complexity} In this section we couple the complexity-per-iteration estimates from Section \ref{sec:com_per_iter} with the iteration complexity estimates from Section \ref{sec:iteration_com} and provide upper bounds on the total complexity of SPG-M. The measure of sample complexity is often used for stochastic algorithms; it refers to the entire number of data samples that are used during all iterations of a given algorithmic scheme. In our case, given the minibatch size $N$ and the total number of outer SPG-M iterations $T^{out}$, the sample complexity is given by $NT^{out}$. In the best case $NT^{out}$ is upper bounded by $\mathcal{O}(1/\epsilon)$. We consider the dependency on the minibatch size $N$ and the accuracy $\epsilon$ of high importance, thus we present below simplified upper bounds of our estimates.
\noindent In Section \ref{sec:com_per_iter} we analyzed the complexity of a single SPG-M iteration for convex components $h(\cdot;\xi)$, denoted by $T^{in}_v + T^{in}_w$. Summing the inner effort $T^{in}_v + T^{in}_w$ over the outer number of iterations provided by Theorem \ref{th_iteration_complexity} leads us to the total computational complexity of SPG-M. We further derive the total complexity for SPG-M with the mixed stepsize policy and use the same notations as in Theorem \ref{th_iteration_complexity}: \begin{align*} T^{total} & = \sum\limits_{i=0}^{T^{out}_m} (T^{in}_v(N) + T^{in}_w(N;\delta)) \\ & \le \sum\limits_{i=0}^{T^{out}_m} \left( \mathcal{O}(Nn) + \mathcal{O}\left( \max \left\{ Nn, N^{3/4}n \frac{\mu_i R_d}{\delta_i^{1/2}} \right\} \right) \right) \\ & \le \sum\limits_{i=0}^{T^{out}_m} \left[ \mathcal{O}(Nn) + \mathcal{O}\left( Nn \mu_i^{1/4} R_d \right) \right] \\ & \overset{\eqref{mu_bound}}{\le} \mathcal{O}(T^{out}_m Nn) + \mathcal{O}\left( Nn (T^{out}_m)^{3/4} R_d \right) \\ & \le \mathcal{O}\left(\left(\frac{L_f}{\sigma_f}\log\left(1/\epsilon \right) + \frac{C}{N\epsilon} \right) Nn \right) + \mathcal{O}\left( Nn \left(\frac{L_f}{\sigma_f}\log\left(1/\epsilon \right) + \frac{C}{N\epsilon} \right)^{3/4} R_d \right) \end{align*} Extracting the dominant terms from the right-hand side, we finally obtain: \begin{align*} T^{total} \le \mathcal{O}\left(\frac{L_fNn}{\sigma_f}\log\left(1/\epsilon \right) + \frac{Cn}{\epsilon} \right) + \mathcal{O}\left( N^{1/4}n \left(\frac{C}{\epsilon} \right)^{3/4} R_d \right). \end{align*} The first term $\mathcal{O}\left(\left(\frac{L_f}{\sigma_f}\log\left(1/\epsilon \right) + \frac{C}{N\epsilon} \right) Nn \right)$ is the total cost of the minibatch gradient step $v^k$ and is highly dependent on the condition number $\frac{L_f}{\sigma_f}$. The second term is brought by solving the proximal subproblem with the Dual Fast Gradient method, showing a stronger dependence on $N$ and $\epsilon$ than the first.
Although the complexity order is $\mathcal{O}\left( \frac{Cn}{\epsilon}\right)$, comparable with the optimal performance of single-sample stochastic schemes (with $N = 1$), the above estimate paves the way towards acceleration techniques based on distributed stochastic iterations. Reducing the complexity per iteration to $\frac{1}{\tau}(T^{in}_v(N) + T^{in}_w(N;\delta))$, using multiple machines/processors, would guarantee direct improvements in the optimal number of iterations $\mathcal{O}(\frac{Cn}{\tau\epsilon})$. The superiority of distributed variants of SGD schemes for smooth optimization is clear (see \cite{NiuRec:11}), and our results set up the theoretical foundations for distributing algorithms in the proximal gradient class. \section{Numerical simulations} \subsection{Large Scale Support Vector Machine} To validate the theoretical implications of the previous sections, we first choose a well-known convex optimization problem in the machine learning field: training a binary Support Vector Machine (SVM). To test several metrics and methods, a spam-detection application is chosen, using the dataset from \cite{spamDetectionDataset}. The dataset contains about 32000 emails, each classified as spam or non-spam. In our evaluation it was split into the classical $80\%$ for training and $20\%$ for testing. To build the feature space in a numerically tractable way, three preprocessing steps were made: \begin{enumerate} \item A dictionary of all words in the entire dataset, together with the number of occurrences of each, is constructed. \item The indices of the top $200$ ($= n$, the number of features used in our classifier) most-used words in the dictionary are stored.
\item For each email entry $i$, we count how many of each of the $n$ words occur in that email's text. Thus, if $X_{i}$ is the numerical entry characteristic of email ${i}$, then $X_{ij}$ contains how many words with index $j$ in the top most-used words are in the email's text. \end{enumerate} The pseudocode for the optimization process is shown below: \begin{flushleft} \quad For $k\geq 0$ compute \\ 1. Choose randomly i.i.d. $N-$tuple $I^k \subset \Omega$ \\ 2. Update: \begin{align*} v^{k} & = \left(1 - \lambda\mu_k \right) w^k \\ u^{k} & = \arg\max_{u \in [0,1]^N} \;\; -\frac{\mu_k}{2N}\norm{\tilde{X}_{I^k} u}^2_2 + u^T(e - \tilde{X}_{I^k}^Tv^k) \\ w^{k+1} & = v^k + \frac{\mu_k}{N} \tilde{X}_{I^k} u^k \end{align*} 3. If the stopping criterion holds, then \textbf{STOP}, otherwise $k = k+1$. \end{flushleft} To compute the optimal solution $w^*$, we ran the state-of-the-art binary SVM method for this dataset, SGD with hinge loss \cite{spamDetectionPaper}, for a long time, until we obtained the top accuracy of the model ($\approx 93.2\%$). Considering this as a performance baseline, we compare the training efficiency of SPG-M against SGD with minibatches. The comparison is made w.r.t. three metrics: \begin{enumerate} \item \textbf{Accuracy}: how well the current set of trained weights performs at classification between spam and non-spam.
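The pseudocode above can be turned into a short runnable sketch. Everything concrete here is our own illustrative choice, not the experimental setup of the paper: the 2-D synthetic data, the projected-gradient inner solver for the dual variable $u$, and all constants. $\tilde{X}_{I^k}$ denotes the minibatch feature matrix with columns scaled by their $\pm 1$ labels, as in the standard SVM dual.

```python
import random

random.seed(0)
# Toy separable data in R^2: label +1 near (2,2), label -1 near (-2,-2).
X, y = [], []
for _ in range(100):
    s = random.choice([1, -1])
    X.append([2*s + random.gauss(0, 0.5), 2*s + random.gauss(0, 0.5)])
    y.append(s)
m, lam = len(X), 0.1

def hinge_loss(w):
    # regularized hinge loss (lam/2)*||w||^2 + (1/m) * sum max(0, 1 - y_i x_i.w)
    return sum(max(0.0, 1.0 - y[i]*(X[i][0]*w[0] + X[i][1]*w[1]))
               for i in range(m)) / m + 0.5*lam*(w[0]**2 + w[1]**2)

def spgm_step(w, batch, mu):
    xt = [[y[i]*X[i][0], y[i]*X[i][1]] for i in batch]   # rows of X~_{I^k}
    N = len(batch)
    v = [(1.0 - lam*mu)*w[0], (1.0 - lam*mu)*w[1]]       # gradient step on f
    u = [0.0]*N
    for _ in range(100):                                  # projected gradient ascent on the dual
        Xu = [sum(xt[i][j]*u[i] for i in range(N)) for j in range(2)]
        for i in range(N):
            g = (1.0 - xt[i][0]*v[0] - xt[i][1]*v[1]) - mu/N*(xt[i][0]*Xu[0] + xt[i][1]*Xu[1])
            u[i] = min(1.0, max(0.0, u[i] + 0.05*g))      # ascent step + projection on [0,1]
    # primal recovery w^{k+1} = v^k + (mu/N) * X~ u
    return [v[j] + mu/N*sum(xt[i][j]*u[i] for i in range(N)) for j in range(2)]

w = [0.0, 0.0]
for k in range(1, 101):
    batch = random.sample(range(m), 16)
    w = spgm_step(w, batch, mu=1.0/k)
acc = sum(1 for i in range(m)
          if (X[i][0]*w[0] + X[i][1]*w[1]) * y[i] > 0) / m
print(acc, round(hinge_loss(w), 3))
```

On this easily separable toy problem the iterate quickly aligns with the separating direction, and the regularized hinge loss drops well below its value at $w = 0$.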
\item \textbf{Loss}: the hinge-loss value on the entire set of data. \item \textbf{Error} (or optimality measure): how far the current set of trained weights $w^k$ is from the optimal one, i.e. $\norm{w^k - w^*}^2$. \end{enumerate} The comparative results between the two methods for each of the metrics defined above are shown in Fig. \ref{fig:acc}, \ref{fig:loss}, \ref{fig:error}. These were obtained by averaging several executions on the same machine, each with a different starting point. Overall, the results show the advantage of the SPG-M method over SGD: while both methods converge to the same optimal results after some time, SPG-M is capable of obtaining better results on all three metrics in a shorter time, regardless of the batch size being used. One interesting observation can be made for the SGD-Const results, where the loss metric tends to perform better (Fig. \ref{fig:loss}). This is due to a highly tuned constant learning rate chosen to get the best possible result; however, such tuning is not a robust approach in practice.
\begin{figure}[htp] \centering \subfloat[batchsize 32]{% \includegraphics[width=0.5\textwidth]{svmbin_accuracy_32}% \label{fig:acc32}% }% \hfill% \subfloat[batchsize 128]{% \includegraphics[width=0.5\textwidth]{svmbin_accuracy_128}% \label{fig:acc128}% }% \caption{Comparative results between SPG-M and SGD for the \textbf{Accuracy} metric, using different batchsizes and functions for choosing the stepsize.} \label{fig:acc} \end{figure} \begin{figure}[htp] \centering \subfloat[batchsize 32]{% \includegraphics[width=0.5\textwidth]{svmbin_loss_32}% \label{fig:loss32}% }% \hfill% \subfloat[batchsize 128]{% \includegraphics[width=0.5\textwidth]{svmbin_loss_128}% \label{fig:loss128}% }% \caption{Comparative results between SPG-M and SGD for the \textbf{Loss} metric, using different batchsizes and functions for choosing the stepsize.} \label{fig:loss} \end{figure} \begin{figure}[htp] \centering \subfloat[batchsize 32]{% \includegraphics[width=0.5\textwidth]{svmbin_error_32}% \label{fig:error32}% }% \hfill% \subfloat[batchsize 128]{% \includegraphics[width=0.5\textwidth]{svmbin_error_128}% \label{fig:error128}% }% \caption{Comparative results between SPG-M and SGD for the \textbf{Error} metric, using different batchsizes and functions for choosing the stepsize.} \label{fig:error} \end{figure} \newpage \subsection{Parametric Sparse Representation} \input{parametric} \section{Conclusion} \noindent In this chapter we presented preliminary guarantees for minibatch stochastic proximal gradient schemes, which extend some well-known schemes in the literature. For future work, it would be interesting to analyze the behaviour of the SPG-M scheme on nonconvex learning models. We provided significant improvements in iteration complexity, which future work can further reduce using distributed and parallel techniques, as hinted by the distributed variants of SGD schemes~\cite{NiuRec:11}. \bibliographystyle{plain}
Het vertelperspectief (ook vertelinstantie of verteller) is in de narratologie het antwoord op de vraag "Wie vertelt?". Het heeft dus te maken met de positie van waaruit de lezer een verhaal waarneemt. Het verhaal spreekt (indirect) tot de lezer, het vertelt hem iets. Er kan daarom van een verteller gesproken worden. Deze fictieve verteller richt zich tot een fictief publiek. De verteller in een verhaal ziet men bijvoorbeeld duidelijk naar voren komen als een omgeving wordt omschreven. Geen van de personages is dan aan het woord, maar de verteller. De verteller is dus geen in de reële wereld levende auteur, met wie de lezer zou kunnen communiceren. Als een verteller, openlijk naar zichzelf verwijst, naar zijn meningen of zijn waardeoordelen, dan spreekt men van een gedramatiseerde verteller. Het kan een homodiëgetische of een heterodiëgetische verteller zijn. Onlosmakelijk verbonden met het gebruik van perspectief (het gezichtspunt) is dat van het bijbehorende register (taalgebruik). Daarnaast wordt dit begrip uit de narratologie soms verward met focalisatie, dat echter meer te maken heeft met point of view (standpunt). Perspectieven Eerst komen de meest gebruikte perspectieven aan bod, waarna de nieuwe terminologie die de Franse structuralistische narratologen introduceerden wordt besproken. Auctoriaal In de auctoriale vertelsituatie (soms ook auctorieel genoemd) is de verteller alwetend, maar hij speelt niet mee. Hij staat als het ware 'boven' het verhaal: hij ziet neer op alles wat gebeurt en weet alles van het verhaal, de personages, hun acties, motieven en gedachten. Zo krijgt de lezer een compleet overzicht van alle gebeurtenissen en het waarom en hoe daarvan. De auctoriale verteller kan eventueel een ik-standpunt in plaats van een hij/zij-standpunt aannemen, maar dat hoeft niet. Een auctoriële verteller onderscheidt zich van een personele verteller doordat hij meer weet dan de personages. 
Dat de auctoriale verteller boven het verhaal staat, is terug te zien in de onderstaande voorbeelden. De verteller kan hier naar believen in het verhaal ingrijpen of op de gebeurtenissen vooruitlopen: En kwam ze nú niet dus, dan was 't ook uit, zei juffrouw Pieterse. Ja, dan zou 't uit wezen met de juffrouw van achter-onder. Ik zal nu maar terstond zeggen dat ze niet gekomen is, en dat 't dus met de juffrouw uit was. Of hij kiest ervoor om de afloop nog even voor zich te houden: Later zullen we zien of 't waar was wat moeder zei, dat ie thuis alles ontvindt of nodig had. Dit perspectief is een van de oudst gehanteerde; de meeste mythen en heilige boeken bedienen zich ervan. Het voordeel ligt in de soort absolute kwaliteit: het staat er, dus het is waar. Een groot deel van de Bijbel is bijvoorbeeld geschreven in dit perspectief. Het gebruik ervan is tegenwoordig echter vaak een zwaktebod; vooral wanneer er tegelijkertijd gespeeld wordt met andere perspectieven kan het een hinken op twee of meer gedachten zijn. Het lijkt het meest voor de hand liggende perspectief, maar met de literaire verworvenheden van de afgelopen anderhalve eeuw is het waarschijnlijk inmiddels het moeilijkst hanteerbare (of minst geloofwaardige). De alwetende verteller kan er verder ook nog voor opteren, de gedachten van een personage niet naar de derde persoon om te vormen, maar die gedachten in hun ik-vorm te laten staan. Analoog met de manier waarop uitspraken worden weergegeven, maar dan zonder aanhalingstekens. Soms spreekt men dan weleens van een alwetende verteller, die op schouderhoogte met een personage blijft en zo met hem meekijkt. Een voorbeeld kan zijn: Wat moet ik doen als hij sterft? panikeert Frank. Hij kijkt om zich heen, maar er is niemand te zien in het ziekenhuis. Ik heb toch echt om een dokter gebeld? denkt hij. 
Het voordeel van deze techniek is dat het de mogelijkheden en flexibiliteit (je kan makkelijk de verteller laten verspringen naar een ander personage) van de derde persoon combineert met de intensiteit van de ik-vorm. Het nadeel is dat sommige lezers het nogal rommelig vinden, omdat een ik- en een hij-vorm constant naast elkaar bestaan. Ik-perspectief Bij het ik-perspectief wordt onderscheid gemaakt tussen de vertellend ik en het belevend ik. De vertellend ik gaat in op gebeurtenissen die achter de rug zijn, terwijl het belevende ik die gebeurtenissen meemaakt. Het belevende ik speelt bijna altijd een rol in het verhaal, vaak is het zelfs de hoofdpersoon. Omdat de lezer alles slechts vanuit zijn of haar oogpunt ziet, krijgt hij geen compleet overzicht. Het ik-perspectief werkt doorgaans confronterend, vooral in combinatie met de o.t.t., als iemand die tegenover je zit en verslag doet. Paradoxaal genoeg is het dus moeilijker je te identificeren met de verteller, maar het effect van de confrontatie is groter dan bij een perspectief in de derde persoon. Een mooi voorbeeld is de roman Portret van een verloren jongeman (Portrait of a Young Man Drowning) van Charles Perry, waarin elke zin zich afspeelt in een absoluut 'nu'. Essentieel bij het gebruik van het ik-perspectief is de keuze voor de adressaat: aan wie vertelt de ik-figuur zijn verhaal? De schrijver kan zich direct of in het algemeen tot de lezer richten, maar er kan ook worden gekozen voor een specifieke adressaat, een 'jij' aan wie het verhaal wordt gericht, vaak iemand die al niet meer leeft. Voorbeelden zijn Ariëlla Kornmehls Huize Goldwasser (de gestorven geliefde) en Threes Anna's De stille stad (de gestorven broer). Personaal perspectief Het personele perspectief (soms ook personaal perspectief genoemd) is eigenlijk een combinatie van de twee andere. Hoewel de verteller in dit geval niet zelf in het verhaal betrokken is, wordt toch één persoon gevolgd. 
The story is told in the third person, but there is no complete overview as with the authorial perspective. A figural perspective can often fairly easily be converted into a first-person perspective: all third-person references to the character being followed are changed into the first person. In this way a figural perspective, and the character it attaches to, can be demonstrated. Writing in the third person singular, especially in combination with the simple past tense, is the 'soft method', in which the reader easily identifies with the character. Converting it into the first-person perspective usually enlivens the prose, but also makes it more distancing. In both cases it is easy to give shape to the character's world of thought and experience.

Other perspectives

The absolute second-person ('you') form also occurs, for example as a disguised first-person perspective (to be distinguished from the first-person perspective with a specific addressee). A rarely used perspective is the we-form: in Tristan Egolf's debut novel Lord of the Barnyard (Heer onder het gepeupel) it is the city's garbage men who give the account. In Gerard Walschap's novel Houtekiet it is the children of Jan Houtekiet who tell the story. In Ágota Kristóf's debut novel The Notebook (Le grand cahier), the first part of her twins trilogy, a pair of twins report on their life in the we-form; in the third part, The Third Lie (Le troisième mensonge), however, this turns out to have been an unreliable narrator. Even objects can be used as a perspective: The Collector Collector by Tibor Fischer is written from the perspective of an antique Sumerian vase, one of the perspectives of Wu Ming's 54 is a television set, and in Specht en Zoon by Willem Jan Otten the story is told from the perspective of a painting.
Alternating perspectives

The best-known example of the use of alternating perspectives (also called multiple perspective) is As I Lay Dying by William Faulkner: several first-person figures tell the story together, so that the patchwork of different camera positions gives a picture of the whole. The final effect is thus greater than if an omniscient narrator had been used: it is intimate and universal at the same time. Because of its great wealth of possibilities the technique is applied frequently. Other good examples of novels in which this narrative mode appears are Bonita Avenue by Peter Buwalda and Greet Beukenkamp's young-adult book Al het water van de zee, which is about bullying at school: each chapter is written from the perspective of a different pupil, so that the bullying problem is, as it were, laid out on the dissection table.

Narrating instance

Out of dissatisfaction with the existing terminology of 'first-person narration' and 'third-person narration', the French structuralist narratologists (Gérard Genette, François Jost, and others) introduced the new concepts of narrative level, narrating instance and focalization. Their objection was that the earlier terms confused the concepts of narrating instance and focalization. A narrating instance may appear as a character in a story, or not at all. In narratology a distinction is therefore made between the following two narrating instances, based on the 'degree of involvement': the homodiegetic narrating instance, in which the narrator has himself also experienced the events he recounts, as a direct participant or as a witness. In most cases this is a first-person narrator. If the homodiegetic narrating instance is the protagonist (main character) of the story, one speaks of an autodiegetic narrating instance.
The heterodiegetic narrating instance, in which the narrator has not himself experienced the events he relates. Often (but not always) this is an authorial or a figural narrator. The term 'first-person narration' is no longer much used; according to Genette a better term for it is homodiegetic narration/narrator. The reasoning behind this is that every narrator is a 'first-person narrator', because if you ask 'Who is telling?', the narrating instance must answer 'I'. Besides the above distinction based on the narrator's involvement, Genette makes another distinction based on the narrative level: the extradiegetic narrator stands outside or 'above' the story he tells; the intradiegetic narrator is a character in the story he himself tells, in which he is, as it were, 'embedded' (so this is not an authorial or figural narrator). The intra- and extradiegetic narrator can each, at the same time, also be either homo- or heterodiegetic, which can sometimes cause some confusion. If the narrator is, for example, simultaneously homo- and extradiegetic, then on the one hand, as extradiegetic narrator, he stands outside the story he tells; on the other hand he nevertheless describes certain events directly from his own observation, but at the moment he does so he is not part of the story. If, conversely, the narrator is both hetero- and intradiegetic, he reports as a character (as an intradiegetic narrator he does not, after all, stand above the story) on events he has not directly experienced himself, but which as such form part of the same story of which the narrator himself is also a part.
An example of the latter is the closing scene of Het behouden huis (by Willem Frederik Hermans), in which the protagonist, a Dutch soldier in the Second World War, describes how he finds the old German colonel he had earlier encountered in the house where he had holed up after fleeing the army: 'They had hanged him from the plane tree, and on his belly they had pinned the paper that I had written for him.' In a film adaptation of a book that uses homodiegetic narration, the characters are sometimes first introduced by letting a heterodiegetic narrating instance (a voice-over) speak, after which the homodiegetic narrating instance, for example the main character, takes over the story. Examples of this are the films noirs Lady in the Lake and Double Indemnity. The latter film begins with an anonymous, heterodiegetic narrating instance introducing Walter Neff, while the viewer sees the character racing through the streets of Los Angeles in his car at night. After this introduction the character Walter Neff himself takes the floor (in the insurance office he has stormed into, he grabs a dictaphone and begins to tell his story).
Q: C++ function behaves strangely

The program takes the coordinates and the length of a line. For some reason x=6, y=3, length 3 does not work.

#include <iostream>
using namespace std;

const int S = 10;
char MAP[S][S];

// prints the map to the screen
void display() {
    system("cls");
    for (int i = 0; i < S; i++) {
        for (int j = 0; j < S; j++) {
            cout << MAP[i][j] << " ";
        }
        cout << endl;
    }
    cout << endl << endl;
}

// fills a single pixel
void fillP(int x, int y) {
    MAP[y - 1][x - 1] = '#';
}

// fills several pixels
void fillP(int x, int y, int counter) {
    for (int i = x; i <= counter; i++) {
        MAP[y - 1][i - 1] = '#';
    }
}

int main() {
    fillP(6, 3, 3);
    fillP(2, 7, 8);
    display();
    system("pause");
    return 0;
}

A: The condition in your loop is strange and meaningless:

for (int i = x; i <= counter; i++)

How is this supposed to work when x is initially 6 and counter is 3? You are comparing a coordinate with a count, which is roughly like comparing kilometres with litres. Decide how exactly you want to implement this loop. Either the loop runs in units of count:

for (int i = 0; i < counter; i++)
    MAP[y - 1][x + i - 1] = '#';

or in units of coordinates:

for (int i = x; i < x + counter; i++)
    MAP[y - 1][i - 1] = '#';

What you have is a meaningless 'hybrid' of those two variants.
Q: configure error: cannot find ssl library (gtmess), updated

I have tried to install the gtmess messenger according to their instructions, but when I try to configure it the following error appears:

checking for SSL_library_init in -lssl... no
checking if -lssl exists in alternate location... no
configure: error: cannot find ssl library

I have tried the following solutions, but they did not work:

1. sudo apt-get install libssl-dev
2. apt-get install apt-file
   apt-file update
   apt-file search libssl | grep libssl-dev

and also:

./configure --with-ssl=/usr/bin/openssl --with-ssl-lib=/usr/lib/x86_64-linux-gnu

I should add that libssl was not present on my system, but I installed it with step 1 above. I am running Linux (18.04) on a virtual machine. Here is the source code if you want to check the configure man page: https://github.com/geotz/gtmess

A: You should install the build dependencies, clone the repository, then compile and install your application as follows:

sudo apt-get install git build-essential libncurses5-dev libssl1.0-dev \
    libncursesw5-dev
git clone https://github.com/geotz/gtmess
cd gtmess
./configure
make
sudo make install

And then run the application with gtmess.
Attijariwafa Bank is a Moroccan company headquartered in Casablanca. The company offers various financial services to its customers. Attijariwafa Bank is the largest private credit institution in Morocco. The bank has branches in Morocco, Algeria, Tunisia and Mauritania, as well as 62 branches in Europe, including three in Germany (Dortmund, Düsseldorf, Frankfurt am Main). Through its European branches, Attijariwafa pursues the goal of "becoming the partner bank of Moroccans and Tunisians living abroad". The company was created through a merger of Banque Commerciale du Maroc and Wafabank. Founded: 1911. See also: List of the largest companies in Africa.
Neuroleptanalgesia (NLA) is a form of anaesthesia achieved by intravenous administration of a short-acting narcotic analgesic (e.g. fentanyl) together with a potent neuroleptic (e.g. droperidol), producing analgesia (absence of pain) and strong sedation (calming) without loss of consciousness. The patient feels neither anxiety nor pain, is mentally and physically calmed, devoid of emotion and indifferent to external stimuli. At the same time, the patient remains conscious and maintains contact with the surroundings.
Rupert Murdoch

RODNEY TIFFEN is emeritus professor in Government and International Relations at the University of Sydney. A leading international scholar of media, his books include _News and Power_ (1989); _Scandals: Media, Politics and Corruption in Contemporary Australia_ (1999); _Diplomatic Deceits: Government, Media and East Timor_ (2001), and numerous other publications on mass media and Australian politics. His most recent book, with Ross Gittins, is _How Australia Compares_ (2nd ed. 2009). He worked with the Media Monitoring Project as an observer during the 1994 South African election, conducted three reviews of Radio Australia, and worked with the independent Finkelstein Inquiry into the media in 2011–12.

Rupert Murdoch _A Reassessment_ RODNEY TIFFEN

**A NewSouth book**

_Published by_ NewSouth Publishing, University of New South Wales Press Ltd, University of New South Wales, Sydney NSW 2052, AUSTRALIA. newsouthpublishing.com

© Rodney Tiffen 2014. First published 2014.

This book is copyright. Apart from any fair dealing for the purpose of private study, research, criticism or review, as permitted under the Copyright Act, no part of this book may be reproduced by any process without written permission. Inquiries should be addressed to the publisher.

National Library of Australia Cataloguing-in-Publication entry
Author: Tiffen, Rodney, author.
Title: Rupert Murdoch: a reassessment / Rodney Tiffen.
ISBN: 9781742233567 (paperback); 9781742241494 (ePub/Kindle); 9781742246420 (ePDF)
Notes: Includes bibliographical references and index.
Subjects: Murdoch, Rupert, 1931– – Influence. Directors of corporations. Newspaper publishing. Mass media – Influence. Corporate power. Scandals.
Dewey Number: 070.92

_Design_ Josephine Pajor-Markus
_Cover design_ Xou Creative
_Cover image_ Peter Macdiarmid/Getty Images

All reasonable efforts were taken to obtain permission to use copyright material reproduced in this book, but in some cases copyright could not be traced. The author welcomes information in this regard.

Contents

Acknowledgements
Murdoch family tree
Murdoch company names
1 The passing of the Murdoch era?
2 Building the empire: From Adelaide to Hollywood and almost to Beijing
3 Midas of the media: Murdoch's business strategy
4 Midas's lost touch: The business case against Murdoch
5 From Lenin to Palin: The making of a radical conservative
6 The enthusiastic player: Murdoch's early political involvements
7 The passionate player: Thatcher, Reagan and beyond
8 The dominant player: Murdoch ascendant
9 Reaping the rewards: Murdoch and government action
10 The market for truth
11 The Republic of Fox
12 Those who live by scandal
13 The roots of scandal
Notes
References
Index

Acknowledgements

Douglas Adams begins one of his comic science fiction novels with the observation that in no known language is there the phrase 'as pretty as an airport'. It is also likely that in no known language is there the phrase 'as exciting as living with a writer'. My deepest thanks in the writing of this book are, as always, to my wife Kathryn, who not only read the whole draft, but lived uncomplainingly, indeed cheerfully and supportively, through its long gestation. My next greatest thanks are to my friends Peter Browne and Ross Gittins, who also read the whole manuscript and gave me careful and constructive feedback, which improved the book greatly. Mark McDonnell, a leading financial analyst, will not agree with all the judgements in the book, but generously read and gave helpful advice on the business chapters. David McKnight, Chris Masters and Nick Davies were kind enough to give me feedback on individual chapters.
I am grateful to Murdoch watchers in Sydney, Melbourne and London who helped with insights and information, including Eric Beecher, Brian Cathcart, Neil Chenoweth, Nick Davies, Bruce Dover, Roy Greenslade, Bruce Guthrie, Charlotte Harris, David Hayes, Martin Hickman, Brian MacArthur, David McKnight, Stephen Mayne and Dimity Torbett. I made a research visit to New York, but my period there coincided exactly with Hurricane Sandy, and all the meetings I had arranged fell through. I do not blame Rupert Murdoch for this, however. Again I am grateful for the collegiality of the departments of Media and Communications and Government and International Relations at the University of Sydney, especially Graeme Gill, and of media scholars more generally, especially David Rowe, Paul Jones, James Curran, Howard Tumber and Jeremy Tunstall. I would also like to thank Sarah Shrubb, Emma Driver, Geraldine Suter and especially Phillipa McGuinness at NewSouth Publishing. All these people helped improve the book, but any remaining errors are, of course, mine, except that all complaints about punctuation should be directed to Ross Gittins. Finally, I would like to thank my parents, Gladys and Leslie Tiffen, for an inheritance beyond riches, and our children, Paul and Ruth, who are unlikely to inherit a family company.

Murdoch family tree

Murdoch company names

Rupert Murdoch's first company in Australia was called News Limited. His British operations from 1968 on went under the name News International. His American and his global company went under the name News Corp. In 2013, News Corp split its operations in two, with one part operating as Twenty-First Century Fox and the other retaining the name News Corp, often referred to as 'the new News Corp'. The Australian operations are now called News Corp Australia.

1 The passing of the Murdoch era?

16 October 2013 marked Rupert Murdoch's 60th anniversary as a director of News Limited.
Since 1953, although his formal titles have changed at various times, he has been in charge. Such business longevity may be unique. It is hard to think of any other corporate head who has had such a long tenure. Murdoch's record is extraordinary at both ends of the age spectrum. He is still running a company in his eighties, a time when most chief executives have long since retired. Likewise, he gained control at the tender age of 22 as a result of the death of his father Sir Keith Murdoch, who had been the dominant figure in Australian journalism for three decades. Although Sir Keith had been head of Australia's largest newspaper group, the Herald and Weekly Times, his actual ownership of newspapers was much more limited. Rupert's inheritance was restricted to one afternoon newspaper in the South Australian capital, Adelaide. When Rupert arrived at the Adelaide _News_ , television had not yet begun in Australia, geostationary communication satellites did not exist, and of course no one could even envisage the internet. The movement from a single newspaper in Australia's fourth (now fifth) largest city to a multi-media empire with global reach is by any measure a remarkable business success story. According to the _Financial Times_ Global 500, in June 2012 News Corp ranked 120th among the world's corporations by market value, with a total of $54.2 billion, one ahead of the National Australia Bank. It was the second-ranked media company, behind Walt Disney (ranked 57, value $86.7 billion), and a long way ahead of the third-ranked Time Warner (183), although some other corporations, particularly those classified as IT and telecommunications, have expanded into media areas. News Corp differed from the other leading global media corporations in two crucial respects. The first was that its roots – and still much of its public profile – lay in newspapers, and so in a medium where politics and the potential for political bias and conflict were ever-present. 
The second was that, far more than any of the others, News Corp was the personification of its principal owner, its actions inevitably associated both in the public mind and in reality with Murdoch himself. 'For better or worse, [News Corp] is a reflection of my thinking, my character, my values,' said Murdoch in 1996. He stands in direct descent from the most controversial press barons in Anglo-American democracies, such as Britain's Beaverbrook and Northcliffe and America's Hearst and Pulitzer, relishing political power as much as commercial success, ruling internally with an iron fist and externally exciting controversy and gossip. Murdoch, however, has a presence in several countries and across different media that Northcliffe and Hearst could never have imagined. He is the largest press proprietor in both Britain and Australia. In Australia his titles comprise around two-thirds of daily metropolitan circulation, a concentration of control not matched by any proprietor in any other democratic country. In Britain, he has both the biggest-selling daily paper, the _Sun_ , and the most famous quality title, _The Times_. In both countries, he is the key player in the pay television market. In the United States his media assets include the _Wall St Journal_ and the _New York Post_ newspapers, one of the four free-to-air television networks, a major movie studio, and a large presence in cable TV channels, including Fox News. In Asia, he has the Star satellite television service (now split into four companies), and he has various holdings in Italy and other European countries. His companies go beyond newspapers, television and film into magazines, book publishing (HarperCollins), pay TV decoders and supermarket inserts. He renounced his Australian citizenship in 1985 to become a US citizen, prompting _New York Times_ columnist William Safire to refer to him as a symbol of something new: global man, equally at home in Sydney, London and New York. 
Ironically, at the same time, he is regarded as a foreigner everywhere, and is perhaps in that way as well the ultimate embodiment of globalisation. Most Americans still refer to him as Australian, while in Australia his domination of the country's newspaper industry is even more controversial because he is a foreigner. In England, he was dubbed the 'dirty digger' in the early 1970s, and his Australian-ness is a recurring theme in commentary. British journalist Michael Leapman, for example, thought that Harry Evans, when editing the _Times_ in 1981–82, 'was trying to show Murdoch that he could be as ruthless and spiteful as any Australian'. Had Murdoch retired a decade ago, he would have been one of the media's most controversial figures because of his journalism and his political entanglements, but seen as an outstanding business success. These themes had been fairly constant since the 1970s, when author Thomas Kiernan judged that his great success in building the London _Sun_ 'cemented his reputation as a brilliant international business and financial manager. At the same time, however, it increasingly drenched him in a self-perpetuating odour of moral and ethical disrepute.' Since then there have been frequent invocations of both themes. The business achievements are praised: 'no other Australian has had a greater impact on the world business stage' thought former Australian Prime Minister John Howard; 'without doubt the most remarkable Western businessman since the Second World War', judged Channel Four investigative journalists Robert Belfield, Christopher Hird and Sharon Kelly. But the criticisms of his papers' journalism have been equally strong: most spectacularly, the _Columbia Journalism Review_ editorialised that his _New York Post_ appealed 'to the basest passions and appetites of the hour', and thought the matter was so grave that the paper 'is no longer merely a journalistic problem. It is a social problem – a force for evil.' 
Most bitingly, the British playwright Dennis Potter said 'no man [was] more responsible for polluting the press and, in turn, polluting political life', and famously named the cancerous tumour that was killing him Rupert. Most humorously, _Chicago Sun-Times_ columnist Mike Royko, who defected to the _Chicago Tribune_ when Murdoch bought his former paper, thought that 'no self-respecting fish would be seen dead wrapped in a Murdoch newspaper'. So, until recently, the typical judgement might have echoed that of Theodore Kheel, a New York lawyer, who acted both against and for Murdoch. Kheel famously said, 'Rupert Murdoch is very good at what he does. The question is: is what he does any good?' Now, however, the question will also be how good has he been at it? Now the journalistic and political critiques are even stronger, and judgements on business criteria are also more mixed. Several factors have fed the change, but the single most important one was the UK phone hacking scandal. Rupert Murdoch's world changed forever on 4 July 2011. On that day the _Guardian_ 's Nick Davies published an article saying that the _News of the World_ had tapped teenage murder victim Milly Dowler's phone. The scandal had been building – very slowly and far from surely – for almost five years, since August 2006, when _News of the World_ reporter Clive Goodman and private investigator Glenn Mulcaire were arrested for having tapped the phones of members of the Royal Family and their staff. Goodman and Mulcaire pleaded guilty, issued apologies, and in January 2007 were sentenced to prison. But News International portrayed it as the work of a single rogue reporter. 
However, the investigative work of Davies and the editorial courage of the _Guardian_ ; the work by the lawyers for civil litigants, who had been victims of _News of the World_ phone taps; the efforts of some members of a House of Commons committee; and eventually – in contrast to their culpably shoddy work early – the investigations by the London police, all destroyed the single-rogue-reporter fiction. The Milly Dowler story opened the floodgates. Since then, News Corp has been engulfed in the biggest media scandal in any English-speaking country in living memory. It triggered what veteran journalist and newspaper historian Roy Greenslade called 'the most astonishing 14 days in British press history, with daily shock heaped upon daily shock'. Politicians competed with each other in the ferocity of their denunciations. News International closed the _News of the World_ , and in the face of opposition from all three major political parties abandoned its attempt to raise its ownership of satellite broadcaster BSkyB from 39 to 100 per cent, which would have been the largest deal in Murdoch's history. On successive days, London's chief police officer and one of his deputies resigned. Rupert and James Murdoch were forced to appear before a parliamentary committee, televised live, in what Rupert called the most humble day of his life. The scandal revealed that Murdoch's London tabloid papers had engaged in phone tapping on an unprecedented scale, had bribed police, were at the centre of a web of political patronage and punishment, and had engaged in a systematic cover-up in which many senior executives lied. By late 2012, when the Leveson Report was published, 90 people had been arrested and were awaiting criminal prosecution, and News International had paid damages in at least 72 civil cases. 
The Leveson Inquiry, instituted by the Cameron Government to examine the scandal and the issues it raised, held oral hearings for around nine months, and heard from 337 witnesses, including the current prime minister and three of his predecessors, and other political and media figures, before publishing a 2000 page report. Events are still unfolding, and will affect the future of the Murdoch empire. The scandal will now be central in defining Murdoch's career and legacy. It sharpened previous critiques of Murdoch's journalism and political influence. British Labour Party MP Tom Watson said the scandal showed how Murdoch's company 'came to exert a poisonous, secretive influence on public life in Britain, how it used its huge power to bully, intimidate and to cover up, and how its exposure has changed the way we look at our politicians, our police service and our press'. The grubbiness and amoral cynicism of the journalism, the scale of the illegality and invasions of privacy, the timidity of the politicians, police and others in the face of Murdoch's power more than confirmed what his fiercest critics had believed. In addition, it has thrown a sharp new light on the governance and organisational culture of News Corp, and encouraged a more critical perspective on some of its business strategies. The continuing fallout from the scandal suggests that in some sense the height of the Murdoch empire has passed. The personal power exercised by Murdoch may have peaked as well, and it is still to be seen how the corporation will adjust to a possible post-mogul phase. This moment of transition offers an opportunity to reassess Rupert Murdoch. Unlike most books on Murdoch, this one is structured analytically rather than chronologically, to explore major themes. The second chapter traces the building of his empire, and later chapters analyse his business strategies, his politics, his journalism, his relations with governments, and his road to scandal. 
There is no shortage of information about Rupert Murdoch. There are at least a dozen books, many of them excellent, of which Murdoch, or some part of his career, is the central subject. There are also several memoirs and journalistic accounts where he figures substantially. Moreover, there are tens of thousands, probably hundreds of thousands, of newspaper and magazine stories, as well as radio and TV programs, which contain material on Murdoch. Unlike some other Murdoch books, this one is not based on interviews or close acquaintance with central figures; it is a distillation of the abundant material already on the public record. This book aims to examine all of Murdoch's long and varied career: to give due attention to all three countries – Australia, the UK and the US – where he is a major journalistic player, to go up to the British phone hacking scandals and their aftermath, but also to go back decades to probe formative and interesting episodes. It seeks to trace how his political attitudes, his business strategies and his attitudes to journalism have developed. The first challenge for anyone seeking to analyse Murdoch is the sheer length and complexity of his career. He has packed into one lifetime more conflicts and controversies than a dozen other media proprietors might manage. Journalist James Fallows, reviewing Murdoch's career in 2003, remarked, 'I was surprised to be reminded of how many dustups Murdoch has been involved in.' Any book must therefore be selective. This one concentrates on his politics and journalism, rather than, for example, his entertainment businesses. Even here the range is impossibly large, as Murdoch's journalistic outlets cover many types in several countries. This book focuses on where Murdoch has been most directly involved, and on the areas that best illustrate his priorities and worldview, or have had the greatest impact. 
A second difficulty is that despite the richness of what is publicly available, there are gaps, because Murdoch's _modus operandi_ is secrecy. Even though the primary democratic purpose of news organisations is increasing public transparency, and News Corp is a public company, Murdoch prefers to operate beyond public view. He exercises personal control over his empire through telephone and face-to-face conversations, usually without any documentary record, so no outsider has access to these interactions. Both he and the politicians he deals with are loath to put their dealings on the public record even though these politicians have placed great importance on their relations with Murdoch. He was the first media proprietor to visit David Cameron after he became Prime Minister in 2010, but at Cameron's request he came and went by the back door. Margaret Thatcher often expressed in private her great admiration for Murdoch and gratitude to him: 'Rupert is magnificent.' After she was ousted from the Tory leadership, Murdoch played a central role in the publication of her memoir, through his company, HarperCollins. Despite this, her memoir contained 'not a single reference to Rupert Murdoch'. Similarly, the memoirs of Bob Hawke, John Howard and most other leading Australian politicians make only the most minimal references to their governments' dealings with Murdoch. When Lance Price, a spin doctor for Tony Blair, wrote his memoir about the experience, the book had to get Cabinet clearance: 'The real surprise was that no fewer than a third of [the government's] objections related to one man – not Tony Blair or even Gordon Brown, as I might have expected, but Rupert Murdoch.' In this book, when important conversations occurred with only Murdoch and one or two other people present that is indicated in the text. Occasionally, however, it is impossible to decide between contradictory accounts. 
Murdoch had told several people, and affirmed under oath before the Leveson Inquiry, that after the _Sun_ very publicly withdrew support from the Brown Labour Government on 30 September 2009, the prime minister telephoned him and said, 'Your company has declared war on my government and we have no alternative but to make war on your company.' Brown denied that any such conversation occurred, and supported this with the log of his telephone calls from the Cabinet Office. Leveson declined to try to resolve these contradictory claims, and no outsider can do so with certainty. The single most prolific source of information on Murdoch is his own public statements. But these need to be treated with caution. Murdoch's statements about his intentions or directions have proved an unreliable guide to his actions, although this can be the case with any business figure engaged in takeover activities and keen to confuse his competitors. Neither are his general sentiments a good guide. In 1977, he told _More_ magazine that:

it would be a pity if I grew any bigger in Australia... If I were to grow bigger and take over one of the other groups... that would be against the public interest... The fewer there are, the worse it is.

This noble sentiment did not prevent him in the next two years from mounting an abortive takeover bid on the Herald and Weekly Times company, or from successfully taking over that company in 1987, raising his share of national daily metropolitan newspaper circulation from around one-quarter to almost two-thirds. In November 1977 he said that he didn't 'think that a newspaper should own outside interests'. But in 1979 he became a half owner of an Australian airline. In 1979, he said that 'to buy the _Times_ would be a highly irresponsible thing to do for your shareholders', but within another couple of years he had bought it. He said that he disapproved of Britain introducing a national lottery, because 'it offends my Presbyterian instincts'.
He had managed to overcome such instincts, however, when the Wran Labor Government in New South Wales approached him in 1979 to take part in a consortium to market Lotto. The government's rationale was that these groups' superior marketing skills would make Lotto more successful. In return, the companies received a government-guaranteed high profit, a prospect which no Presbyterian could resist. Similarly, Murdoch's career is littered with what others have called broken promises or commitments that were not honoured. As early as 1960, Robert Falkingham, the Fairfax company treasurer, was warning his boss, Rupert Henderson, not to sell Murdoch the Sydney _Daily Mirror_ , because Murdoch 'has proved that he is not a man to honour his agreements'. Many of his major acquisitions – from the _News of the World_ in 1969 to the _Wall St Journal_ in 2007 and several in between – have been followed by claims of betrayal and broken promises. Murdoch has often counter-charged, claiming bad faith on the part of his critics and the necessity of acting as he did, or pointing to the loopholes and fine print he had crafted that allowed him to do as he had done. There are also cases of Murdoch lying about the past. In 2012, such a case was put clearly on the public record during the Leveson Inquiry. In 1981, in the lead-up to his takeover of the _Times_ and _Sunday Times_ , Murdoch was very keen that the British Government not refer his planned takeover to the Monopolies Commission. He denied, including to the official historian of the _Times_ , that there had been any direct contact between himself and Prime Minister Thatcher. However, the Thatcher papers, released in 2012, show that they met over lunch in the crucial period, and that Murdoch followed up with correspondence. At the crucial Cabinet committee meeting, Thatcher argued that Murdoch's takeover did not require a referral under the Fair Trading Act, and the committee agreed.
At the Inquiry, the News International barrister argued against the suggestion that Murdoch had suffered a conveniently selective amnesia, and said it was simply that Murdoch did not remember events of 31 years ago. In 2007, while acquiring the _Wall St Journal_ , he indignantly denied that in 1994 he had removed the BBC from his Asian Star satellite service in order to please the Chinese Government. 'I don't know how many times I have to state that I did not take the BBC off Star TV for political reasons; nor have I ever given any sort of political instructions, or even guidance, to one editor of the _Times_ or the _Sunday Times_ ,' he said. Murdoch's move in 1994 was preceded by considerable speculation that he was about to do so. One Australian newspaper, for example, reported in March that Guo Baoxing, of China's Ministry of Radio, Film and TV, had told Star to drop the BBC, while Murdoch had told the _Economist_ that the BBC caused him a lot of headaches with the Beijing Government because of its critical coverage, and that he had threatened to drop it. Afterwards Star TV's chief executive, Gary Davey, said the decision was taken for purely commercial reasons. Three months later, however, Murdoch told his biographer, William Shawcross, that he had pulled the BBC in the hope of soothing bad relations with Beijing: 'I was well aware that the freedom fighters of the world would abuse me for it.' The Chinese leaders 'hate the BBC', Murdoch said. In 1995 he told US journalist Ken Auletta, 'the BBC was driving [the Chinese leaders] nuts... It's not worth it. We're not proud of that decision [but] it was the only way.' US journalist Jack Shafer concluded that it was only in 2007 that Murdoch returned to promoting the fiction that removing the BBC wasn't for political reasons, and indignantly denying what in the 1990s he had readily admitted. The role of political expedience in these public statements is clear. 
But what are we to make of the following revealing example from the diary of Woodrow Wyatt, a confidant of both Murdoch and Thatcher? Thatcher rang Wyatt in 1986 to say she was about to announce that Marmaduke Hussey would be the next Chairman of the BBC. Wyatt was shattered, because he had a low opinion of Hussey. Murdoch's reaction was just as strong: 'Has she gone mad? What a disastrous appointment.' But when Wyatt raised the topic with Thatcher, she said, 'I wouldn't have done it if I hadn't had a strong recommendation from Rupert', and was amazed that Murdoch was now criticising it privately. Wyatt then went back to Murdoch, who denied recommending Hussey; however, when pressed, 'he seemed evasive and giggled a bit'. Wyatt concluded that 'she is telling the truth and not Rupert'. Here Murdoch is engaging in a pastime he seems to relish: venting his low opinion of people. According to Leapman, 'Murdoch is never happier than when running down journalists and businessmen – equally those who work for him and those who do not – in outspoken and sometimes vulgar terms.' Here he is apparently lying – in private to a close friend – because he enjoys the game, enjoys playing both sides of the street. A further difficulty in weighing evidence lies in the mythmaking about Murdoch by both his admirers and his critics. Many Murdoch employees loudly proclaim his abilities, but sometimes their tales are unreliable. For example, Vic Giles, who was brought over from the London _Sun_ to New York in early 1974 to help Murdoch launch his sensational weekly newspaper, the _National Star_ , was very impressed by the quality of Murdoch's contacts. He said that while he was doing a headline for a story on Nixon, Murdoch said it was wrong, and rang the White House. Nixon immediately rang back, and confirmed Murdoch's account. Murdoch then told his journalist: 'You're wrong, Dickie says it's this way.' 
According to Giles, although Murdoch had only been in the States a couple of months, 'he knew everybody. He was talking to Carter and LBJ as if they were his bosom buddies.' Murdoch may have been talking to former President Johnson in early 1974 as if he were his bosom buddy, but it would be worrying if he heard anything in reply, as LBJ had died in January 1973. Equally, it would have taken improbable prescience for Murdoch to be bosom buddies with the future president Jimmy Carter, as the then Governor of Georgia still had almost no national profile. While not questioning the strength of the 'Rupert-Dickie' relationship that Giles attests to, it seems unusual for the President to personally answer a query from a national magazine which still had negligible circulation and no political credibility. The closest the _National Star_ , which specialised in UFOs and stories of the bizarre, had come to a political scoop in its early days was its revelation that 'if all the Chinese jumped up and down in unison, the vibrations would cause a tidal wave that could engulf America'.

The legacy of Ozymandias

And on the pedestal these words appear:
'My name is Ozymandias, King of Kings:
Look on my works, ye mighty, and despair!'
Nothing beside remains. Round the decay
Of that colossal wreck, boundless and bare,
The lone and level sands stretch far away.

Percy Bysshe Shelley, 1817

No corporate figure in the contemporary world has been as intent on securing his business legacy as Rupert Murdoch. Despite News Corp being a public company in which the family has only a minority of the shares, he has long been intent on a dynastic succession. The phone hacking scandal has probably destroyed this fantasy. However, the corporation, per se, has recovered spectacularly well from the scandals. In July 2011, its share price was $14.96; in April 2013 it was up to $31.54.
It has been buoyed by several factors, including a share buyback, and an increasing confidence that the scandal would not cross the Atlantic. One of the scandal's more bizarre consequences was the News Corp Board negotiating a settlement of $139 million with a group of American shareholders who sued it. The entire cost was covered by an insurance policy that 'protects corporate boards from this type of litigation', and the money was distributed to all shareholders, including the Murdoch family. The scandal crystallised a sentiment already strong among investors that, in the words of the _Economist_ , 'newspapers are not central to what News Corporation does... but newspapers are central to who Mr Murdoch is'. The scandal reinforced the view, made ever stronger by the impact of the internet, that the now financially marginal newspaper tail was wagging the entertainment dog. When considering earnings per share, Bloomberg data showed News Corp, on an adjusted basis, increasing from $0.71 in Financial Year 2005 to $1.41 in Financial Year 2012 – a very respectable performance. However, when compared with two other American media giants, it is rather less so: across the same timeframe, Time Warner increased from $1.05 to $3.29 and Walt Disney from $1.21 to $3.08. In June 2012, Murdoch announced that News Corp would split into two – a large and profitable entertainment and media company and a small, much less profitable, and more politically contentious publishing company. Although Murdoch vigorously denied that the split had anything to do with the scandal, US writer Michael Wolff judged that 'until the phone hacking scandal came to dominate every aspect of News Corp's corporate consciousness, hell would have had to freeze over before Rupert would have let his papers go'. Nevertheless, Murdoch's email to News Corp's 50,000 employees, headed 'we will wow the world as two', was decidedly upbeat. 
Murdoch declared that the company had decided to restructure because of its 'increasingly complex' asset portfolio:

We must realign and reorganise in this moment of opportunity. Over the years, I have become accustomed to the noise of critics and naysayers ... and pretty thick-skinned! Remember what they said when we started Fox Network, Sky, Fox News and the _Sun_? These experiences have made me more resilient.

News Corp shares reached their highest levels for five years on the news. The reason most business analysts applauded the demerger was their belief that, in _New York Times_ reporter Amy Chozick's words, 'in effect, News Corporation had evolved into a successful entertainment company with a newspaper problem'. New York stockbrokers were referring to the two parts as GoodCo (the entertainment part) and BadCo (the publishing side). David Carr, another reporter from the _New York Times_ , said that creating a separate division to allow newspaper businesses to grow and reach their full potential 'is a little like the engineer of a locomotive unhitching the caboose and telling the people marooned there that they were now free to travel toward any destination they desire'. A cartoon in the _Economist_ was the most succinct – it pictured the media wing as an eagle and the publishing company as a turkey. In December 2012 it was revealed that if the publishing arm had been a stand-alone company that year it would have lost $2.08 billion. In 2013 it was announced that the publishing arm would keep the name News Corporation, with Robert Thomson as CEO and Murdoch as Executive Chairman, while in the entertainment division Murdoch would be both Chairman and Chief Executive, with Chase Carey as Chief Operating Officer. The split was not as neat as the headlines suggested. The publishing division included not only newspapers, book publishing and educational products, but also the pay TV service Foxtel.
So the Australian operation will continue as a single entity, on the curious grounds that Australia is so far away. Many of the newspapers were currently losing money (the _Times_ and _Sunday Times_ were losing £50 million a year, the _New York Post_ $100 million a year, and the Australian newspapers were in a steep decline). However, unlike most of its publishing competitors, the new company would begin life with a $2.6 billion cash balance. Nevertheless it is clear that News Corp newspapers have been insulated from the effects of the internet in recent years by cross-subsidy from what will in future be the entertainment division. At each stage of his career Murdoch has had observers guessing, often wrongly, about his next move. No one would have predicted a few years ago that News Corp would split as it has. This move doubles the questions about Murdoch's future strategies, influence and legacy. As Ozymandias testifies, few people, no matter how mighty they once seem, bequeath quite the future they intend.

2 Building the empire

From Adelaide to Hollywood and almost to Beijing

Bob Hawke's prime ministerial memoirs recall a 1983 dinner in Geneva he had with Rupert Murdoch and Paul Keating. Murdoch was regaling them with his plans for making a fortune from satellite television in Europe. 'As the dollar signs continued to dance in [Murdoch's] eyes', Hawke became bored, and asked, 'Rupert, will there ever come a stage in your life when you reckon you've made enough money and got enough power [and instead enjoy life and your family and our] beautiful, fascinating globe?' 'Murdoch looked at me as if I was slightly deranged' and resumed talking about his business plans. Indeed, when this conversation occurred, Murdoch's empire had not yet reached half its eventual size. This chapter outlines the major moments in its expansion.

Adelaide inheritance to Australian national player

Appropriately, the beginning of Murdoch's career was surrounded by conflict.
His father, Sir Keith, died in October 1952, at the age of 67. On Friday, 3 October, at the Herald and Weekly Times Board meeting, he had survived a showdown following growing tension with his deputy (and designated successor) John Williams. Williams had hoped Sir Keith would be dismissed but instead _he_ was; this was followed by a heated exchange between the two. Sir Keith died of a heart attack the following night, and the Murdoch family never forgave Williams. The company reinstated Williams. On the following day, Sunday, he used a blowtorch to open the safe in Sir Keith's office, apparently to see what private transactions his former boss had been engaging in. In an admirable display of multi-tasking, Williams was at the same time preparing the Monday papers' memorial tribute to Sir Keith. On Tuesday he delivered the eulogy at Sir Keith's funeral. The settlement of the estate took several months, and, because of a combination of debt and death duties, it amounted to considerably less than expected. However, Keith had held a controlling interest in two newspapers. His widow, Elisabeth, felt compelled – much against Rupert's wishes – to sell the _Courier-Mail_ shares to the Herald and Weekly Times, but insisted – against the company's and the trustees' wishes – that Rupert be given the chance to be publisher of the Adelaide _News_ and the _Sunday Mail_. By the time Rupert arrived in Adelaide to take up the reins of his new, but small, operation, he had graduated from Oxford with third class honours, and then worked as a subeditor at the _Daily Express_ (while staying at the London Savoy). The Herald and Weekly Times immediately declared war by starting a Sunday newspaper to compete with the _Sunday Mail_ , the most profitable part of Rupert's operation. Rupert's pugnacity was immediately on display in a defiant front page editorial that denounced the 'interlopers'.
The two Sunday papers competed fiercely for two years before a truce in December 1955 re-established a Sunday monopoly: ownership of the _Sunday Mail_ was split in half, but crucially, the name, management control and the printing contract all remained with Murdoch – 'Not bad for a 24-year-old who was supposed to be a babe in the woods,' he told author Tom Kiernan with some pride a quarter of a century later. The _News_ , meanwhile, was prospering. The editor, Rohan Rivett, had been brought in by Sir Keith Murdoch, and had greatly improved the paper. Rivett had been captured in Singapore, and his book _Behind Bamboo_ , describing the experience of being a prisoner of war under the Japanese, had been a bestseller. Later he had been the London correspondent for the Herald and Weekly Times, and had acted as a friend and mentor to Rupert during his early years at Oxford. At first Rohan and Rupert were close allies, and for a time Rupert was content to leave the journalistic side to Rivett, while he learnt the commercial ropes. Murdoch, however, was already impatient to expand:

I'd never lived in a city as small as Adelaide... I don't mean that against Adelaide, but if you were brought up in media, in a city the size of Melbourne, and you saw the life ahead of you in Adelaide, it was hard not to be ambitious, to think of a wider frame.

The only newspaper on the market was a Sunday paper in Perth. Murdoch bought it, paying more than most thought it was worth, and then used a team from his Adelaide paper to revive it. Freed from the professional presence of Rivett, Murdoch had a chance to exercise his tabloid tastes, and he quickly increased the paper's circulation. Then Murdoch won one of the two commercial TV licences in Adelaide and set about learning the TV industry, but his ambitions in this field were largely frustrated for the next few decades.
His next big step, in 1960, was into Australia's largest city, and here his entry was aided in an extraordinary way by one of his competitors. In the 1950s, the Sydney newspaper market had two competing morning papers – the _Sydney Morning Herald_ (owned by Fairfax) and the _Daily Telegraph_ (owned by Frank Packer) – and two competing evening papers – the _Sun_ (Fairfax) and the _Daily Mirror_ (owned by the failing Norton company). Fairfax bought the _Daily Mirror_ from Norton for fear that it would fall into Packer's hands, or give the Melbourne-based Herald and Weekly Times a Sydney base, but could not bring itself to close the paper. In the first of several deals in which established media owners underestimated him, Fairfax chief executive Rupert Henderson then sold the _Daily Mirror_ to Murdoch. Henderson expected Murdoch to fail, but within half a dozen years the _Mirror_ 's circulation was overtaking the _Sun_ 's. The move into Sydney had brought Murdoch into direct competition with Packer and Fairfax. He had established a beachhead earlier by buying a suburban newspaper chain, and the three companies competed fiercely for a short time before profitably dividing the contested territory between them. The first showdown between the Packers and Murdoch, however, was resolved by a bloody and spectacular brawl, involving an improbable cast of characters. The Packers, denied access to the _Daily Mirror_ 's printing press, sought to fill the gap by acquiring the Anglican Press, which was in financial trouble. Impatient with the legal processes, Sir Frank's sons, Clyde and Kerry, both of whom had been amateur boxers, gathered several others and forcibly took possession of the building. The Anglican Press was run by one of Sydney's great eccentrics, Francis James: the son of a clergyman, he had been shot down over Germany in World War II, and now used the back seat of a Rolls Royce as his office. 
When told of the Packers' occupation of the building, he immediately rang Murdoch. While James organised some of his friends, Murdoch rang the sports editor of the _Sunday Mirror_ , Frank Browne, a former professional boxer, who had the dubious distinction, in 1955, of being one of only two Australians ever jailed for contempt of parliament. Browne gathered four big 'bruisers', to use James's description. The party set about recapturing the Press building, at about 1.00am, and an almighty brawl followed. The Packers lost, and fled, bruised and battered, while James led the victors in a prayer of thanks. Murdoch had also dispatched a photographer to the scene and the front page of the next day's _Daily Mirror_ recorded the Packers' humiliation. Murdoch's next encounter with the Packers did not go so successfully. The Menzies Government decided to issue a third commercial TV licence in the major cities, and Murdoch lobbied publicly for the new Sydney channel. When he failed to get the Sydney licence, he embarked on a plan that was audacious, but that surely would have failed if allowed to run its course. He took control of the channel in Wollongong, a city south of Sydney. He outbid the Sydney channels and bought several American programs, and urged two million people in Sydney's suburbs to redirect their antennas away from Sydney and towards Wollongong. Packer thought the best course was to make a deal with Murdoch, so he sold him shares in Channel Nine. However, a couple of years later, in April 1967, he did a 'reverse share transaction', so that instead of his newspaper company having control of the TV station, the opposite was the case. This severely diluted both the value and the voting strength of Murdoch's shares: their strength went from 25 per cent to 10 per cent. 
He was quoted as saying privately that 'while I was honeymooning in London [with second wife Anna], I was raped in Sydney', and that Packer 'must be the biggest crook in Australian newspapers, but equally he is the cleverest'. While he was stalled in television, Murdoch's next move in newspapers was revolutionary. In 1964 he began what was possibly the most pioneering journalistic enterprise of his whole career, one which greatly improved the standard of Australian political journalism. Murdoch began Australia's first general national newspaper, the _Australian_. It was an idea whose time had almost come. Because of the problems of timely distribution across such a vast continent, and because in earlier generations advertising markets and perhaps readers' news interests were so strongly local, no one had ever envisaged a national newspaper. Australia's first daily business newspaper, the _Australian Financial Review_ , had also begun, and the effect of both it and the _Australian_ contributed greatly to a reinvigoration of Australian journalism. As usual, Murdoch's motives were mixed and hard to determine. No doubt there was some genuine idealism. According to Kiernan, his mother had told him that she didn't care how much money he made, but he 'must publish something decent for a change'. The paper also gave him considerable kudos and political leverage, and it is notable that he has maintained his commitment to it even though it has rarely made a profit. Characteristically, however, Murdoch's own role was more problematic than this indicates. It was imperative for the new paper to establish a strong Canberra base, but Murdoch's premature boasting to the owner of the _Canberra Times_ , Arthur Shakespeare, resulted in the latter selling that title to Fairfax, which meant that when the _Australian_ was launched, it faced a much more formidable local competitor. 
Although the _Australian_ was one of Murdoch's most visionary enterprises, several of his decisions compromised its quality journalism. Murdoch completed this phase of his Australian newspaper expansion in 1972, when he purchased the _Daily Telegraph_ , the only newspaper owned by the Packer family. The Packer sons, Clyde and Kerry, finally persuaded their father that the company's future lay in television and magazines. Although the price was high relative to the market value, the purchase gave Murdoch parity with Fairfax in Sydney, both having a morning and an afternoon paper. Now there were only three proprietors of Australian metropolitan daily newspapers: the Herald and Weekly Times owned papers accounting for around half the circulation; the Fairfax company, the most patrician of the proprietors, owned most of the nation's quality newspapers, and accounted for a quarter of the circulation; and the newcomer Murdoch now also had papers accounting for about a quarter of the circulation. Although Murdoch had become a major media figure in Australia, in the late 1970s his position in television was still peripheral. Australian television was structured as a series of metropolitan markets, and in the biggest centres there were three competing commercial channels. Government policy stipulated that one company could only own two TV stations, but the reality was that ownership of the Melbourne and Sydney channels was crucial and that the provincial stations, while having some locally originated programs, overwhelmingly acted as subordinate members of a de facto national network. Murdoch still lacked a Melbourne or Sydney channel. The Ten network, the late starter (in 1964 compared with 1956 for the others), was the perennial third place getter in commercial TV ratings and advertising revenue. In early 1979, Murdoch, in a series of stockmarket raids, moved to become the major shareholder in Sydney's Channel Ten. 
By this time, because of the virulent anti-Labor bias of his newspapers since 1975, everything Murdoch did was controversial. The Labor Party opposed allowing Murdoch to own the licence on the grounds that he did not fulfil the requirement that an Australian TV licence holder had to be an Australian resident: Murdoch spent only around 40 days a year in Australia. Murdoch was telling the tax authorities that he was a US resident, while telling the broadcasting authorities that he was an Australian resident. Although he told the Australian Broadcasting Tribunal that he had no plans to acquire Ten's sister station in Melbourne, within three months he was making moves in that direction. The channel was then owned by Ansett airlines, Australia's second domestic carrier. Murdoch's move led to a prolonged conflict, as Reginald Ansett resisted, and others, including Robert Holmes à Court, the Australian oil company Ampol, and Peter Abeles of TNT, also circled, buying small packets of shares which they hoped to increase. Eventually Holmes à Court, having been double-crossed by Ansett, dropped out, ceding control to a partnership between Murdoch and Abeles. The complications continued, with revelations about the warehousing of shares, secret deals, and major groups having no regard for the law. Eventually, after an appeal to the Administrative Appeals Tribunal, Murdoch emerged as a co-owner with Abeles of an airline as well as of Melbourne's Channel Ten. Over less than two decades Murdoch had moved from being a minnow to owning one of the four largest media companies in Australia. There was little immediate scope to expand further. The country was now too small for his ambitions. His next targets had long been international, specifically in Britain.

The most famous newspaper publisher in the world 1968–81

In 1968, Murdoch had no significant holdings outside Australia. By 1981, he was 'the best-known newspaper publisher in the world'.
In between, he had transformed the London newspaper market, first by purchasing the Sunday _News of the World_ , then by making the _Sun_ the biggest-selling newspaper in the country, and finally by purchasing the _Sunday Times_ and the _Times_. His notoriety in the US had become almost as great, most importantly in 1976, when he acquired the _New York Post_ and _New York_ magazine in quick succession. His first British acquisition was the biggest-selling newspaper in the English-speaking world. The Sunday _News of the World_ had a circulation of 6.2 million, and with its mix of sensationalism and Tory prejudices, had long been a British institution. It 'had been more or less controlled by the Carr family since 1891', although in an increasingly self-indulgent way by Sir William Carr. Conflict inside the Carr family made it vulnerable to takeover. When Murdoch was alerted to this, around October 1968, the most likely outcome looked to be a takeover by Robert Maxwell, a Czech-born British citizen. The _News of the World_ editorialised against him in racist terms:

It would not be a good thing for Mr Maxwell, formerly Jan Ludwig Hoch, to gain control of this newspaper... which is as British as roast beef and Yorkshire pudding... This is a British newspaper, run by British people. Let's keep it that way.

'Who is Rupert Murdoch?' Maxwell asked when told of the new player entering the field. He was soon to find out, as the two media tycoons fought several battles over the next two decades. Murdoch entered the fray publicly as an ally of the Chairman, William Carr, and with the help of a shrewd adviser, Lord Catto. In January 1969 Maxwell was defeated, partly because his financial position was in reality much weaker than was publicly apparent. Murdoch was later to say, 'I could tell that the establishment wouldn't let Maxwell in.' However, William Carr's rescuer quickly became his hunter.
Within six months Murdoch had forced him to resign and replaced him as Chairman, and a year later, after a prolonged trial of strength, he fired the editor, Stafford Somerfield – and had to pay out the last several years of his contract. This kind of skilful jujitsu stuck 'to him as a kind of signature', and charges of broken promises have often been brought up in his later dealings. Murdoch did not change the nature of the _News of the World_. It was already nicknamed 'News of the Screws', and his conflicts with Somerfield had nothing to do with journalism. Years later he joked: 'I sacked the best editor of the _News of the World_. He was too nasty even for me.' It remained the biggest-selling paper in Britain, until it was closed ignominiously in July 2011 because of the phone hacking scandal. Murdoch was determined to break into the daily newspaper market as well, particularly because he owned a press that could be used to print one, and was contemplating starting a new title. A takeover opportunity soon arose, however. The IPC company, the only pro-Labour newspaper proprietor and owner of the biggest-selling tabloid, the _Daily Mirror_ , had acquired a failing trade union-owned newspaper, the _Daily Herald_ , and had re-launched it as the _Sun_ in 1964. But it did not find a market. IPC had promised to keep it alive for seven years, but it was proving a major drain on profits: its sales kept falling, probably to around 800,000 in 1969. Again Maxwell was the frontrunner as purchaser, but the unions preferred Murdoch and the company didn't believe Maxwell had the necessary funds. Murdoch solved IPC's problem at what for him was a bargain price. The first edition of the new _Sun_ appeared on 17 November 1969. It became Murdoch's greatest newspaper success. Starting from a base circulation of less than 1 million, it reached 3.5 million by the end of 1975 and at the beginning of 1978 it passed the _Mirror_ , which had been the country's top-selling newspaper since 1949.
It has maintained that position ever since. Its circulation peaked at over 4 million during the 1980s and into the 1990s, but by 2012 it had declined to just above 2.5 million. While Murdoch was carving out this success in British tabloids, he was becoming increasingly disillusioned with British society. His affection for it had never been strong, even in his student days. But now he saw it as a country in decay, weighed down by its several establishments: not only the traditional hierarchies of class, but also the 'liberal' establishments of the universities, the BBC and quality press, which made life difficult for him. He found dealing with the trade unions increasingly frustrating. Such feelings reached a peak with some personal tragedies. When the Murdochs were out of the country in January 1970, one of their friends was kidnapped while she was driving their car. She was then killed. The criminals' target was Rupert's wife Anna Murdoch. Later that year she was (blamelessly) involved in a fatal car accident. The Murdochs moved to New York in 1973. New York, he sensed, was 'his kind of town' – without snobbery, fast moving, deal oriented – and it has been his home, except for a brief unhappy period in Los Angeles, ever since. In that year he made his first American acquisitions, the two daily newspapers in San Antonio, Texas – because they were available and he could afford them. The Hearst company had the biggest newspaper in the city. The monopoly morning paper continued much as it had been, but he took the afternoon paper down-market, improving its circulation to some extent. Its most infamous headline was 'Killer Bees Head North'. In 1974 he started a weekly sensational tabloid newspaper, the _National Star_ , to compete with the _National Enquirer_. It was sold at supermarkets, and had a style that paid only lip service to actual news reporting.
At first it failed to make an impact, but some years later, after it first dropped the 'National' from the title and later introduced colour, it did better. By the late 1980s it was Murdoch's most profitable American publication. Murdoch remained virtually unnoticed in America until he bought the daily _New York Post_ newspaper and the weekly magazine _New York_. These gave him a strong presence in the country's largest city. 'The cover of _Time_ showed King Kong with the face of Rupert Murdoch bestriding the rooftops of Manhattan: "Extra!!! Aussie Press Lord Terrifies New York".' _Newsweek_ 's cover had a mock newspaper front page with a picture of Murdoch, and the headline 'Inside: Aussie Tycoon's Amazing Story! – Press Lord Takes City'. _New York_ magazine's editor and publisher, Clay Felker, was a famous New York identity, and had befriended Murdoch when he came to live in the city. However, at the moment when Felker was vulnerable to losing his magazine because of over-spending, his friend Murdoch, whom he thought might help, instead moved to take control. Felker, with the anger of the betrayed, told Murdoch he would fight him tooth and nail. Murdoch replied, 'Teeth and nails are fine but it's money that wins this kind of scrap.' Felker mobilised the staff against Murdoch, and 40 of them left, amid much fanfare, but after the pyrotechnics ended, Murdoch was in control. As a result he also acquired the profitable weekly newspaper the _Village Voice_. Despite his forceful interventions elsewhere, Murdoch, after an initial rebuff, never again tried to interfere at the _Village Voice_ , whose whole appeal was to a left-liberal and counter-culture constituency he had no sympathy for. In 1985 he sold it at a handsome profit. The far more important move was the purchase of the afternoon daily newspaper, the _New York Post_ , for just over $30 million from its long-time owner, Dorothy Schiff.
He radically repositioned the previously staid and liberal paper into a sensationalist and editorially right-wing one. This was very different from any newspaper New York had known for some generations, and it attracted a huge amount of critical comment. The _Columbia Journalism Review_ called him 'a sinister force in our lives', and for the chief editor of the _New York Times_ he was 'an evil element'. The _New York Post_ still figures centrally in Murdoch's affections, but he never made it a commercial success. In 1988 he was forced to sell it because of the cross-media laws, and in the 11 or so years he had owned it, biographer Jerome Tuccille estimates that he lost around $150 million. According to his publicist, he wept real tears in February 1988 when the sale was completed. Less sentimentally, _Village Voice_ journalist Alexander Cockburn likened it to Dracula selling his coffin. Although Murdoch told Tuccille that he had put the _Post_ completely behind him, as 'a nightmare chapter of my life', when the paper was on the verge of closing in 1993 he lobbied strenuously to get it back. Supported by the urgings of the journalists, Democrat Mario Cuomo and others, he was given a waiver from the cross-media laws, because of the jobs that would be lost with closure, and rebought it. He then moved to again put his ideological stamp on the paper and his name appeared on the masthead as editor in chief. By 2007, according to Wolff, the paper was losing $50 million a year, and by 2013 one report said it was losing $100 million annually. Murdoch never became the force in American newspapers that he was in Australia and Britain. But he did make two further important purchases, although both were later sold. The Boston-based _Herald American_ , owned by Hearst newspapers, was failing and about to close when Murdoch negotiated to buy it in December 1982. Although he drove hard bargains with the unions, he was broadly welcomed as the paper's saviour.
He changed the name back to Boston _Herald_ , and increased its circulation, finally selling it in February 1994 – because of cross-media ownership rules – when he wanted to acquire a Boston TV station. His 1984 purchase of the _Chicago Sun-Times_ was far more controversial, with several journalists leaving in very public protest. He sold it in 1986 as part of his move into American television, making a pre-tax profit of around $70 million. Murdoch already owned the biggest-selling newspaper in Britain. Now he set his sights on the country's most famous quality newspaper. The history of the _Times_ is central to the rise of an independent, quality press in England, and it retained a unique prestige. However, for decades it had been running at a loss, kept afloat by the generosity of Lord Thomson. The losses were compounded when the typesetters went on strike in late 1978; the closure lasted almost a year. In late 1980 Thomson put the loss-making _Times_ on the market, threatening to close it completely if no deal were reached. Its stablemate, the _Sunday Times_ , was operating profitably. Eventually Murdoch was deemed the preferred buyer, and as we shall see in Chapter 9, the Thatcher Government eased his passage. One factor which made the deal sweeter – but of which, according to Murdoch, both he and the Thomsons were unaware – was that purchase of the _Times_ would bring with it a 5 per cent holding in the news agency Reuters. Reuters, begun as a privately owned venture by Julius Reuter in 1851, had been constituted as a trust between its member news organisations during World War II, and all the British national newspapers had a share in it. It had gradually been transforming itself, and its huge investment in infrastructure and its international base had seen it develop into a much bigger and potentially more profitable enterprise, with financial data eventually dominating in terms of revenue. Its members were increasingly keen to realise its financial value.
By buying the _Times_ Murdoch doubled his shares in Reuters. Reuters was later floated, very profitably for its member organisations.

Press baron to multi-media mogul – the 1980s explosion

When Murdoch gained control of the _Times_ he was a few weeks shy of his 50th birthday, and he had been head of his company for just over a quarter of a century. Over-simplifying, the first half of this period he had spent becoming a major player in Australia, and the second half he had devoted to becoming a major newspaper publisher in Britain and America. However, far from being content with his lot, Murdoch was about to undertake some of the most dramatic moves in media history. His most spectacular were into film and television, but he also made bold moves in newspapers. For good measure he expanded in magazines (unsuccessfully) and book publishing (successfully). 'The value of Murdoch's empire – measured in terms of assets owned – was placed at $1.52 billion in 1984' and five years later was more than six times that figure. In what was, throughout, a manic career, this is the single most manic period. In this decade he transformed his empire again: from one based principally in the UK and Australia to one with its centre of gravity in the US; from a company that was principally a newspaper publisher to one centred in broadcasting and film. But his dynamism almost brought him undone: he accumulated such debt that the company almost went bust. In the end he prevailed, and became an even bigger player in the world's media. Although nearly all the other moves considered below contributed to Murdoch's later debt crisis, one contributed substantially to his bottom line. From a contemporary vantage point, the destructive power which trade unions used against Fleet Street newspapers, an industry that was far from buoyant, is difficult to believe.
It stemmed from the inherent vulnerability of newspapers, where even a short industrial action could disrupt the tight production and distribution schedules and so inflict great financial damage. Printers were often earning more than journalists, and the efficiencies offered by new technologies were simply banned by some unions. Over-manning and 'Spanish practices' (i.e. rorts) were rife. Such defiance of economic gravity could not last forever, but it was Murdoch's ruthless and skilful moving of his Fleet Street papers to Wapping that decisively ended it. Like the other Fleet Street proprietors, Murdoch loathed the unions, a feeling which became more acute with the financial burdens of acquiring the _Times_. 'If it's the last thing I do, I'll make the goddamn unions pay dearly for their idiocy,' he told Kiernan. Murdoch bided his time. The plant being built at Wapping was said to be for a new newspaper title, the _Post_ , which of course never eventuated, and the secrecy regarding its true purpose was maintained. According to Roy Greenslade, there were four conditions that allowed Murdoch to move when he did. The first was the money which had come from the Reuters float. The second was the huge prices which central London real estate was commanding, which made it an attractive proposition for newspapers to develop 'green fields' production sites further out and profitably dispose of their London real estate. Within a decade no newspaper had its headquarters in Fleet Street. The third was that new press proprietor Eddie Shah was planning to launch a non-union paper outside London, which threatened to bring the cheap competition the existing papers had long feared. Fourth was the Thatcher Government's 1984 Trade Union Act, which made strike action subject to secret ballots and made it more difficult for unions to support other unions on strike.
Moreover, through his friend Woodrow Wyatt, Murdoch had secretly secured the agreement of an outside union, the electricians' union (the EEPTU), to produce the papers. When everything was ready, he became completely intransigent in his negotiations with the Fleet Street unions. They then did exactly what he wanted and went on strike, which allowed him to sack them. With a mixture of carrot and stick, he persuaded the journalists to move to Wapping. Although many felt conflicted, and some had qualms about crossing a picket line, overwhelmingly they chose to go. On 25 January 1986 four million newspapers were produced at Wapping. 'Fortress Wapping' was a newspaper factory surrounded by 4 metre-high spiked steel railings topped by barbed wire. For a year, pickets, often violent and abusive, remained, and there were many TV news stories on the clashes between picketers and police. On the first anniversary of the move, the largest-ever crowd – 13,000 – gathered. Of 1000 police on duty, 168 were reported injured, as were many demonstrators. But this was the protesters' last gasp. Murdoch – seizing the moment offered by technological change and favourable political and financial environments – had won. Murdoch's Wapping victory took a severe internal toll, although Murdoch remained largely oblivious to this. _Sunday Times_ editor Andrew Neil felt that his relations with his staff 'were strained almost to breaking point'. In the aftermath of victory, morale plummeted. Bruce Matthews, Murdoch's most senior manager in London, found Murdoch 'a very sour man' who made matters much worse whenever he visited London. Despite doing such good service for Murdoch in the move, Matthews was soon dispatched. Other senior managers involved, John Dux and Gus Fischer, also thought Murdoch's visits were acutely counter-productive, that he was 'a lousy manager', and felt burnt by their assistance to him during Wapping.
The EEPTU head, Eric Hammond, had assisted the move to Wapping in the expectation that his union would be the main representative of the workers there, but instead Murdoch saw his chance to keep it entirely non-union, and once victory was achieved, Hammond too was dispatched, disappointed. Murdoch's main interest in his British newspapers now was to support his American film and television ventures. A rival newspaper executive calculated that Murdoch's costs were reduced by £80 million a year by the move to Wapping; some analysts calculated that as a result the _Sun_ was making a profit of £1 million a week. Murdoch did not use this huge boost in his cash flow to gain a great competitive advantage over the competing British newspapers; instead, with one exception, he channelled it across the Atlantic. The exception was that in 1987 he purchased Eddie Shah's _Today_ title, a mid-market tabloid which pioneered the use of colour printing and computerised editing. But it never made a profit and he closed it in November 1995. At the same time, Murdoch was radically restructuring his empire in Australia, but unexpectedly this almost proved to be a financially ruinous move. In September 1985, in order to hold a TV licence in the US, he had taken American citizenship, but in so doing – at least according to any straightforward reading of the law – he became ineligible to hold a TV licence in Australia. Murdoch's timing was good, as changes to media ownership laws by the Hawke-Keating Government made his exit from Australian television much more profitable (see Chapter 8). (Remembering Murdoch's debt crisis a few years later, it should be noted that this was the third large, unexpected cash injection he received in the 1980s, after the Reuters float and the post-Wapping profits.) 
The sale of the Ten network helped to fund his audacious move to buy the Herald and Weekly Times (HWT), Australia's largest newspaper publisher, the company where his father had once been managing director. Since that purchase, Murdoch has had a dominating position in Australian newspapers. Murdoch thought that his high pre-emptive bid for the Herald group would be decisive, and ultimately it was, but en route there were very expensive complications he had not anticipated. In 1979, he had bid for the company but failed (profitably). At that time the HWT Board had resisted, and the Melbourne _Herald_ editorialised:

'Mr Murdoch's papers always respond in unison – as though to some unseen divine wind – as they pursue their relentless campaigns in favour of current Murdoch objectives – particularly his political ones. Every journalist in Australia knows that.'

This time the HWT Board welcomed Murdoch's bid, and what 'every journalist in Australia knows' did not rate a mention. However, as a defensive response to Murdoch in 1979, HWT and its subsidiaries in Queensland, South Australia, Western Australia and Tasmania had developed a series of cross shareholdings. Robert Holmes à Court took some strategic shareholdings, especially in Queensland Press, sufficient to give him veto power. Belfield et al. estimated that Holmes à Court cost Murdoch A$1 billion, while Neil Chenoweth estimated that Holmes à Court's flanking move doubled Murdoch's cost to $3.6 billion. The financial disaster for Murdoch had ongoing complications, because 'A subsidiary company cannot hold shares in its parent.' This trapped Murdoch in his quest for control of the whole group. He came up with 'his most desperate plan yet'. His family company, Cruden Investments, paid $600 million for 56 per cent of Queensland Press. But Cruden had no means of repaying this, and even servicing the interest was going to be a problem – 'it had become a debt problem waiting to happen'.
The problems might have been manageable, if not for the October 1987 stockmarket crash. That month _Forbes_ magazine, in its annual rich list, had put Murdoch as the 8th richest person in America, valued at $2.1 billion, mainly based on his shares in News Corp. But by the time the magazine hit the streets Murdoch was no longer a billionaire. On Wednesday, 21 October, the Commonwealth Bank, worried about the loan to Cruden Investments, even took a lien over the Murdochs' penthouse in New York. Conveniently, Queensland Press (its new Murdoch family board members absent) bought 42 million News Corp shares from Cruden Investments. The Queensland Press Board decided to buy the Cruden stock at $16 a share, what they said it was worth, even though on the open market they could have paid around $12.80, so paying around $93 million more than the market price – a better deal for Cruden than for other Queensland Press shareholders. Australian regulators looked hard at these decisions over some years, but in the end decided not to prosecute, the strong recovery of the News Corp share price in later years taking the heat out of the issue. This was Murdoch's first big debt crisis, and it passed completely beyond public view. At the end, the American citizen had moved out of Australian television, and instead acquired a dominant position in Australian newspapers, his titles accounting for almost two-thirds of daily metropolitan circulation. Murdoch's first move into entertainment in America was in films. He had helped to finance _Gallipoli_ , a film about Australia's first major military involvement in World War I, the disaster of which Murdoch's father had been centrally involved in disclosing. Murdoch's involvement in the movie had been partly for sentimental reasons, but he was impressed by the financial riches which followed. 'Do you realise,' he said to Kiernan, '[that] a single picture like this can clear more than a year's worth of _Suns_?'
As Tuccille notes, 'The artistic and commercial success of _Gallipoli_ sharpened Murdoch's appetite for more of the same.' In a preliminary sortie, he had become a shareholder in Warner Brothers in 1983, but Warner's defensive moves effectively blocked him from going further. It led to litigation, which Warner won, but which Murdoch and his team fought with such 'extraordinary legal invective' that one judge described it as 'a corporate form of feudal warfare', and Warner decided to settle on generous terms. Murdoch walked away bitter, but richer by more than $40 million. Twentieth Century Fox proved a more affordable and more welcoming proposition. It had been bought by Marvin Davis and Marc Rich in 1981, but Rich had then fled the country to avoid arrest on fraud and tax evasion charges (he was pardoned by Bill Clinton in one of his last acts as President in January 2001). Davis bought out Rich's half for $116 million, and in 1984 sold it to Murdoch for $250 million, which included a capital injection for the ailing company. The company was, however, starting to improve under the leadership of Barry Diller. This was but a prelude to an even more dramatic move. Murdoch had known John Kluge, the principal figure in Metromedia, since the mid-1970s. Metromedia owned and operated small but profitable independent TV stations in seven major cities, which made up around 22 per cent of the American viewing audience. Murdoch paid $650 million in cash and assumed Metromedia's $1.3 billion of debt. Originally the purchase of Metromedia was to be with his partner in Twentieth Century Fox, Marvin Davis. But at a crucial moment Davis pulled out, and the acrimony between them was such that they could not continue together in the movie venture either, and eventually Murdoch bought him out, for $325 million. This severing of an unsatisfactory partnership was beneficial long term, but it added very considerably to Murdoch's short-term outlays.
Murdoch has often proclaimed himself a gambler, and said he is energised by risk. But now he had effectively 'gambled his entire media empire on his 20th Century-Fox-Metromedia acquisition'. Although in the mid-1980s Murdoch's empire was still dominated by newspapers, he already had a considerable history in television. He had been a player in Australian television since the late 1950s, at first a minor one and from 1980 a central one, and had grounded himself thoroughly in the industry's economics. Although often overlooked, he had a short but successful period running London Weekend Television (LWT). Murdoch had been humiliated in an interview on London Weekend TV with David Frost, and a year later he bought a large holding in the struggling company. Murdoch enjoyed the 'challenge of reviving failing institutions', and despite official rules to the contrary, he was very directly involved in managing the TV company. He 'believed that if he could pump some "really commercial thinking" into LWT' and overcome 'its highbrow approach to television programming', 'he could save it', and indeed he soon returned it to the path of profitability. He sold out at a profit in 1980 in order to finance his Australian TV purchase of the Ten network, which under his direction was also operating profitably. Nevertheless it was a huge step from his previous experience to the ambition he publicly proclaimed in October 1985: to start a new network of affiliated stations, and in the long term to become the fourth network in American television. The essential logic of US television had remained the same for decades. There were three major networks – the youngest, ABC, had been formed in 1948 – and the relationship between networks and their affiliates was the key to competitive position. However, the number of TV channels in most markets exceeded three, so there was scope for competitive manoeuvring beyond these three. 
There were technological, regulatory and commercial factors that made a challenge more possible in the second half of the 1980s, but it was still a challenge from which most businesses would have shrunk. Murdoch moved to integrate his TV operations and Twentieth Century Fox under a new corporate umbrella, Fox Inc., with Barry Diller as CEO. For Murdoch, the films that Twentieth Century Fox had already made were as important as the ones it would make in future. The library was a source of programming for his own channels and he could sell from it to others. The Fox TV network was slow to develop, beginning with programming just for weekend evenings, and expanding a night at a time over the next few years. For several years it made a loss, and there was considerable instability in its programming and management. Eventually, however, it had a run of programs that made an impact: _Married – with Children_ ; _The X Files_ ; _Beverly Hills 90210_ ; _Melrose Place_ ; and above all, _The Simpsons_. (In the 2000s, its biggest hit was _American Idol_.) The turning point, the 'deal [which] validated Fox', was when it bought the rights to the US National Football League in 1993, outbidding CBS, and winning a range of new affiliates, so that its reach to the US market expanded greatly. In 1994, Murdoch claimed Fox was the top network; a more accurate claim would have been that by then it made sense to talk of four rather than three networks, with Fox fourth by most measures – but this was still a very considerable achievement. The success of Fox was a classic Murdoch triumph. In less than a decade he had broken the mould of what seemed the eternal and inevitable three-network competition. His risk-taking had been vindicated. Some programming was original, such as the animated _Simpsons_ ; all of it was cheap to produce. Then came the knock-out blow – the football rights.
Fox's current affairs and 'reality' programs, such as _A Current Affair_ , _Cops_ and _America's Most Wanted_ , were dubbed 'tabloid television' by their critics. But they were the epitome of the Fox approach: cheap to make and aiming for an audience not usually attracted to the 'high-brow' current affairs on the other networks. No one could accuse Fox of being elitist. Not nearly as revolutionary were Murdoch's moves into book publishing. The book industry had consisted of a series of independent publishers, but their amalgamation into larger companies was widespread through the 1980s. Murdoch was a prominent player in this trend. He already owned Angus & Robertson books in Australia; he had started to acquire Collins Books in the UK; and then in March 1987 he bought Harper & Row in the US, paying far more than others had thus far ventured, three times the value of Harper's assets and over 50 times its annual profits. Then he launched a full bid for Collins Books. Although the bid was resisted by the then director, Ian Chapman, who felt personally double-crossed by Murdoch's move, it succeeded. In early 1989, he merged the two companies. He had paid more than the market expected, but the new entity, HarperCollins, has been a central player in English-language book publishing ever since. Murdoch's even more expensive moves into American magazine publishing were much less successful. There were two transactions, one before and one after his move into American film and broadcasting. The first came in 1984 when he acquired Ziff-Davis magazines for the considerable sum of $350 million. In the end, as part of his debt reduction program in the 1990s, he disposed of most of this stable profitably. The second move was in October 1988, when he shocked American media circles by purchasing Triangle Publications for $2.8 billion. News had debts of over $4 billion, and most analysts thought it was already over-stretched. He disposed of nearly all the titles in the early 1990s, at a loss.
He kept the biggest, and most well known, _TV Guide_ , which he retained for a decade: it had the advantage of giving Fox the prominence due to a fourth network. So, though Murdoch had been a major player in American magazine publishing for a moment, by the end of the 1990s he was completely out of the industry. The purchase of Triangle Publications was, simply, a disastrous failure. In June 1988 Murdoch announced that he would be setting up a satellite venture in Britain called Sky. The launch of Sky is explored further below, but it was an important element in the developing debt crisis that almost undid Murdoch's whole empire. So in a tumultuous few years, and surrounded by sharp political conflict, Murdoch's boldness had fortified his position in British newspapers and given him dominance in Australian newspapers. Also, he had established a substantial presence in US television and film and in book publishing, and was poised to enter the new industry of satellite broadcasting. Much of this had been not simply risky, but reckless. Murdoch had acquired debts that endangered his whole empire. In Chenoweth's view, 'Murdoch had never been able to afford his great move in 1985–86 to buy Twentieth Century Fox, the Metromedia television stations and to launch the Fox network.' In addition, his Australian acquisitions had proved far more expensive than he had envisaged; his book publishing acquisitions were expensive in the short term; his 1988 magazine purchases were expensive and ill-advised; and any profits from his satellite ventures seemed at best far in the future. By late 1990, life for News Corporation 'had become a mesmerizing sequence of near-death experiences'. In August Standard & Poor's gave News Corp's debt rating a Triple C plus, the equivalent of junk bond status. When a Channel Four documentary (by Belfield, Hird and Kelly) on its problems was broadcast, the share price fell 20 per cent in a day. 
A series of short-term debts were coming due, and News was unable to pay. On 6 December, a Pittsburgh bank owed $10 million by News at first refused to roll over the loan. When told that meant News would go out of business, the loan officer persisted in his refusal. When Murdoch spoke to the officer again, other banks had interceded and the loan was rolled over. In December 1990 'News needed to reschedule $7.6 billion of debt held by 146 institutions around the world.' This was a staggering figure, and Murdoch's empire came very close to collapsing in ignominy. The key factor that saved Murdoch was the difficulty and cost to his creditors of letting him collapse. The structure of News Corp was so complicated, spread through so many countries, including several tax havens, each with its own laws and procedures, and the structure of the debt was so complicated, with so many competing claims against the company, that no creditor could be assured of receiving proceeds from the sale of any particular assets, and all would face years in court very expensively sorting out the mess. The problem was so large that one Citibank leader, William Rhodes, even thought there was 'systemic risk if the [rescue] deal fell through' and he talked to financial authorities in three countries about the problem. Two banks, Citibank and Samuel Montagu, led the consortium of major lenders that effectively saved Murdoch, and in the end also rescued their investments, with what they called a Debt Override Plan. Nevertheless there were still a couple of months of uncertainty, until all the banks signed on – 'the domino effect of even one tiny default... would trigger cross defaults which would overwhelm the group'. The leading banks enforced two lines: 'We are where we are' and 'Nobody gets out.' By 10 January 1991, the most important 27 banks had signed an agreement running to more than 300 pages.
There was to be an extra bridging loan of $600 million; an upfront fee of 1 per cent of the amount outstanding; interest rates were to be raised by one percentage point; and News had to reduce its debts by $2 billion over the next two and a half years. Murdoch was now on a tighter leash than ever before. For example, in November 1991 it was reported that News Corp would 'need to raise about US$1 billion (a year) for the next nine years to keep up with its revised schedule of loan repayments'. Nevertheless, within 28 months, he had replaced all the earlier debt with long-term debt and some capital raisings, and had disposed of some assets. However, whereas in 1990 the Murdoch family effectively owned 46 per cent of the shares, by 1993 their share was closer to 33 per cent. The Debt Override Plan had given Murdoch certainty, and he had survived. He had, said Canadian newspaper publisher Conrad Black in a description which Murdoch liked, bet the company and won, even if that victory had ultimately been delivered by his bankers.

1990s – Positioning in a global, multi-channel television industry

In 1990, the dominant mode of receiving television in most countries was via an analog, terrestrial signal, which meant there was relatively limited spectrum available. By 2000, the dominant modes were digital, and via cable or satellite. The multi-channel environment, the growth of subscription television, and much more crossing of national borders, either directly by satellite, or through international trade in programs, all marked substantial differences between the beginning and the end of the decade. By 2000, Hollywood made more than half its movie earnings from international sales. For Murdoch, the key challenge of the 1990s revolved around satellite and subscription television. This is the only area where Murdoch was in the vanguard. He had been interested in the potential of satellites since the early 1980s.
Although the reality never matched his probably impossible ambitions, by the end of the decade Murdoch's first-mover advantage had added considerably to his empire. None of Murdoch's early satellite investments – in South America, the US or the European one that he was enthusing about to Bob Hawke over dinner – became substantial or profitable operations: 'As with other Murdoch ventures, enthusiasm and ambition ran ahead of the product.' He took a higher risk and made a more determined commitment in 1988 with Sky, to operate on the Astra satellite from Luxembourg. He gave up his hopes of pan-European profitability, recognising that people want to be entertained in their native language, and focused his efforts on Britain. In June 1988 in London he announced his intention to launch a four-channel service in grand terms: 'We are witnessing the dawn of an age of freedom for viewing and freedom for advertising.' As discussed in Chapter 9, this brought him into direct conflict with British Satellite Broadcasting, which in December 1986 had won a heavily contested tender to become the country's first – and, it expected, only – satellite broadcaster. Both companies bled money at an amazing rate. According to Chenoweth, News spent around £550 million on the venture and the BSB syndicate spent around £850 million. In November 1990, the two companies merged. Both desperately needed the merger, but if the BSB partners had learnt of News's debt crisis, they may have been tempted to make fewer concessions, or even not do the deal at all. No cash changed hands, and in theory the merger was on equal terms. But several aspects favoured Sky. The merged operation used the Astra satellite. Moreover, the CEO was Sky's, New Zealander Sam Chisholm. Over the next several months, as he folded two loss-making organisations into one leaner operation, the casualty toll may have been as high as 3000 employees. 
The post-merger civil war soon had a clear victor: 'Initially Sky occupied 17 out of the top 30 positions. Within a short time they occupied all but one.' Another lasting advantage to News was that encryption was to be performed by the company NDS, of which News was a part owner. The film companies had refused to supply their programming to a satellite service unless the signal was encrypted. NDS had solved these problems for Murdoch, and a close association developed, despite one of its key figures being a fraudster, and the Israeli-based company having its own turbulent internal life. Decades later the company became the focus of charges that it had acted as a pirate outfit against Murdoch's competitors. The big turning point occurred in 1992, when BSkyB secured exclusive rights to televise the English Premier League. Viewers now needed to subscribe to watch football live, and they did so in their hundreds of thousands. 'By 1996, BSkyB's grip on pay TV in Britain had become unshakable. It was Britain's third largest media business, bigger than many film studios.' BSkyB saw further enormous growth late in the decade when it gave away digital set-top boxes for free, turning the technical change into a marketing triumph. BSkyB was to be the forerunner of other delivery platforms and control of programming around the world, but Murdoch's global ambitions have met with only mixed success. He has been active in South America, while in Australia – amid the chaotic policy making on pay TV – he has stayed close to the incumbent telecommunications carrier Telstra, succeeding in the end by creating a Foxtel monopoly. Murdoch's biggest and most important such venture was in Asia. In 1993 he bought almost two-thirds of the Star satellite, for over $500 million. This was by far his biggest deal since the rescue by the banks, which is perhaps why he sold the very profitable Hong Kong newspaper, the _South China Morning Post_ , to pay for it.
The Hong Kong-based service had been started by Richard Li in 1990. According to William Shawcross, 'the footprint of Star's satellites covered all Asia and the Middle East... about three billion people'. Its audience was much less impressive than its footprint. Of its five channels, four were in English, and probably only a couple of per cent of people within its reach had English as a first language. The service was also of limited appeal to advertisers, who had only the sketchiest data on who was watching. In 1996 Star lost $100 million, but in 2005 it reported a profit 'well over $100 million', with 70 per cent of its revenues coming from India. Star was helped greatly by the coming of digitisation, which meant that the number of channels expanded enormously: 'By 2000, it comprised 30 broadcast services in seven languages potentially reaching 300 million viewers in 53 Asian countries.' In 2012 it broadcast 60 services in 13 languages, and claimed a daily audience of 120 million viewers. However, Murdoch never realised the dream he pursued with such vigour: to penetrate the Chinese domestic market via Star. Although the 1990s began with Murdoch in such deep debt that his corporate survival was partly a matter of luck, he not only consolidated his spectacular 1980s expansion, but with a combination of first-mover advantages, promising long-term strategy and a willingness to sustain short-term losses, he had navigated his way into the more global and multi-channel TV environment as well as anyone. Typically, his almost infinite ambitions had not been realised, and he had some important setbacks and failures. However, in News Corp's annual report for 1997 he could accurately proclaim that 'no company in the world can match News Corporation in its ability to maximize its own product across multiple distribution platforms around the world'. 
American fireworks 1997–2003

On 24 February 1997 Murdoch started a fight with the US cable operators, proclaiming satellite the way of the future. Next day, 'investors dumped cable stocks in droves,... wiping out more than $1 billion in market value for the cable TV market'. This declaration of war, a war that Murdoch lost, had two sources. First, Murdoch saw an American satellite TV service as a key to his global satellite plans and corporate development. Only the American market offered the economies of scale that would allow him to undertake huge global enterprises. The second source was his launching of Fox News the previous year. Murdoch knew that the only way for Fox News to succeed was to get sufficient carriage on the cable systems around the country, most particularly on the Time Warner system in New York. Normally cable operators paid a small fee per subscriber to channels they featured, but very occasionally payment went in the other direction. The most that any new channel had thus far offered a cable company for carriage had been $1.20 for every subscriber. Murdoch offered an unprecedented $11 per subscriber to carry his new channel. Murdoch had shaken hands with Time Warner chairman Gerald Levin on a deal to carry Fox News, but after Ted Turner protested, Levin reneged. Murdoch responded in style. In October 1996 News Corp launched a $2 billion lawsuit against Time Warner. New York Mayor Rudy Giuliani leapt into action on Fox's behalf. The conflict also brought on a spectacular slanging match between Murdoch and Turner, which raged for years. Turner likened Murdoch to Hitler, called him slimy and a disgrace to journalism, and promised to squish him 'like a bug'. Murdoch's _New York Post_ headline asked readers 'Is Ted Turner nuts? – You decide', and said he must have gone off his lithium. Eventually Time Warner carried Fox.
In February 1997, when he announced that cable was a declining platform and the future was satellite, Murdoch thought he was speaking from a position of strength, about to conclude a deal with Charlie Ergen and EchoStar. Murdoch's view reflected his own immediate position, and his long-term preference for satellite over cable. But it probably had particular validity at that time in the US, as cable had developed there earlier than in most countries, and its many local monopolies still relied on old analog systems whose capacity was much less than the latest digital technologies could provide. The 'Cable King' John Malone, for example, in his local monopolies, had 'spent as little money as possible upgrading his cable systems'. All this was occurring in a corporate environment marked by unprecedented mergers, as large corporations sought synergies in the more global and digital era. These media behemoths, or megamedia corporations, seemed to be the only way to thrive, perhaps even to survive, in the future, and leading players in previously distinct industries – film, television, cable, telecommunications, manufacturing, computing – came together: 'It was as if no single company wanted to be left without a partner in this new and uncertain age.' All three of the main US networks entered into merged entities: ABC Disney; NBC Universal; CBS Viacom. The biggest of them all came on 10 January 2000 when 'AOL paid a stunning $165 billion to buy Time Warner', 'the largest merger in history and the most striking testament to the influence – and inflated values – of the internet'. For News Corp this was a period of 'almosts' – they almost joined with MCI; they almost joined with EchoStar; they almost joined with General Motors (GM) and Hughes Electronics; they almost joined with Microsoft. The single most important attempted partnership was with GM and its subsidiary Hughes Electronics, to gain control of the DirecTV satellite service. 
In 2000, GM announced its intention to divest itself of its satellite business, and on 6 February 2001, Murdoch thought he had an agreement: 'The Sky Global/DirecTV combination represented Murdoch's bid to create the largest global media platform of all time.' This deal took, he said, 'the first 90 per cent of the [weekly] meeting time' of his top executives, and until the last moment he was confident it would happen. It failed largely because his potential partners were deterred by Murdoch's approach: News Corp has one of the most aggressive corporate cultures in the world. For five decades Murdoch had run News as a one-man show. In that time he had never had a successful partnership... The News execs had been bagging their opposite numbers at Hughes for weeks. Hughes expected a partnership. What they got was a boarding party. In October 2001, when GM sold DirecTV to EchoStar, it looked as if Murdoch had been locked out of the US satellite market. But after this deal was shot down by the Federal Communications Commission (FCC) on anti-monopoly grounds, Murdoch was finally able to realise his ambition. Other potential buyers having disappeared, Murdoch became the largest shareholder in DirecTV at a much reduced price in April 2003. He was rhapsodic: the benefits will be felt almost immediately – in the competition it will offer cable, [and] the richer services it will provide to American viewers... We are forging what we believe will be the premier diversified entertainment company in America today... [The] completion of the DirecTV transaction will mark the culmination of a long-time pursuit by our company of providing the missing link in a global satellite television platform that will span four continents and encompass 23 million subscribers. While the satellite ventures grabbed the most attention, Murdoch had also laid the foundations for what would continue to be a growth area for his empire – the development of specialist channels. 
While Fox News has probably generated most controversy, for its rabidly pro-Republican bias, it has also become a major profit centre, the single most important driver of the substantial growth in News Corp's income from cable networks. With the sporting rights he already owned, Murdoch was well placed to set up a sporting network, and Fox Sports has become one of the two biggest sportscasters globally. Other specialist channels followed – National Geographic, Fox Family Channel – as well as movie and entertainment channels. These are now large contributors to News Corp profits.

Empire in transition – the 2000s

In the last two decades of the 20th century, News Corp grew spectacularly. Its assets increased from $1 billion in 1980 to $20 billion in 1990 and to $38 billion by 2000 – one of the most dramatic corporate expansions in media history. In the 21st century this rate of growth slowed, but was still substantial, to $54 billion by 2010. Similarly, the structural trends inside News Corp have been towards incremental change, rather than the revolutionary upheaval of previous decades. The relative share of newspapers and magazines in News Corp's operating income declined steadily from 90 per cent in 1980 to 71 per cent in 1990, to 45 per cent in 2000 and to only 11 per cent in 2010. According to the _Financial Times_ , broadcast television was only 1 per cent in 1990, shot up to 39 per cent by 2000 and then shrank back down to 5 per cent in 2010. Films have become ever more important, from around 8 per cent in 2000 up to 29 per cent in 2010, although the share of films shows more year-to-year volatility. The main growth area has been in cable networks, which on the _FT_ 's figures grew from 4 per cent in 2000 to 48 per cent in 2010. News Corp – at least since the end of Murdoch's US satellite wars – has been a more orthodox large corporation. It has shown greater financial caution.
It was sufficiently cashed up that in 2011–12 it carried out one $5 billion buyback of shares, and then announced another. While its overall business performance has been solid, there are three areas where Murdoch either retreated from his driving ambitions or saw new ventures fail (all explored at length in Chapter 4). From 1993, Murdoch invested a lot of money and effort in China, not only via Star, but in a whole series of enterprises. He assiduously wooed key figures in the Chinese Government. But at a media conference in New York on 16 September 2005, he admitted that News Corp had 'hit a brick wall in China'. The trend towards opening up had gone into reverse, he said, with the authorities now 'quite paranoid about what gets through'. The next two years saw News selling off or greatly reducing its stake in its Chinese properties. The world's most powerful media mogul's time in China appeared to have 'come and gone'. The second retreat was at least as great. After striving for almost a decade to gain DirecTV, within three and a half years he surrendered the great prize. The central reason for this reversal lay in the consequences of another key move Murdoch had made. In April 2004, Murdoch announced that he wanted to change the domicile of the company from Adelaide to Delaware. Senior executives thought that relocating the main company from the Australian stock exchange to the NASDAQ was logical because the bulk of the company's activity was in the US and the move might help raise the price of News Corp shares. There were some protests in Australia, as it was felt that Delaware's less stringent regulatory requirements offered fewer protections to shareholders, but in October 2004 Australian shareholders approved the move. One complication was that once the company was listed on the US S&P 500, making it more possible for US index fund investors to buy, it would be delisted from the S&P ASX 200.
This would cause many Australian fund managers to sell their holdings. This created a price dip, as Australian institutions sold out in a rush before US institutions bought in. John Malone seized the moment. By November 2004 he had 19 per cent of the voting shares, second only to the Murdoch family's 30 per cent. In order to remove Malone from News Corp's share register, Murdoch sold him DirecTV. Elsewhere, News Corp continued to see satellite as central to its strategy. It kept developing Star in Asia. In Australia in 2012, Foxtel became a 50/50 split between News and Telstra when News bought out the quarter share owned by James Packer's company. It also made a de facto national monopoly official when it bought out Austar – with which it had amicably split territories and programming agreements for some years. In Britain, it sought to raise its stake from 38 per cent to 100 per cent ownership of BSkyB, which, in dollar terms, would have been the single biggest transaction in its corporate history. However, it was thwarted by the phone hacking scandal. In contrast, in order to remove what he saw as the continuing threat of Malone, Murdoch gave up his US satellite ambitions. Murdoch's third failure – an area in which he was far from alone – was in the internet, the great force transforming media businesses in this period. News Corp, like all other media companies, is still strenuously devoting its energies to the challenges of the internet, its revolutionising of consumers' media habits and its threat to intellectual property values. But on the way it made some bad investments, most obviously MySpace, which it bought for $580 million in 2005 and sold for $35 million in 2011. Murdoch's last big acquisition of the period was of Dow Jones and the _Wall St Journal_ in 2007. He offered $60 a share, a 67 per cent premium on Dow Jones's share price. 
After three months of agonising, the Bancroft family, owners of the _Journal_ over the last century, sold it to him for a total of $5.83 billion. Although this gave Murdoch the most authoritative financial newspaper in America, and so perhaps helped him position for business journalism more generally, the decision made little business sense. Less than two years later, News Corp wrote down its value by more than half, to $2.8 billion. So, while the 21st century has not been a period of dramatic new acquisitions and radical growth for News Corp, Murdoch's huge and complicated media empire continued to expand incrementally. By 2013 – excluding services that operate only on a regional basis, such as those in Asia and Latin America – News had operations in more than 20 countries. Its most important markets, since its retreat from China, were in the US, Britain and Australia, but it also had operations in Italy, Germany, in around 10 East European countries, in Papua New Guinea and Fiji. For decades some analysts had seen weakness, not strength, in News Corp's diversity, thought the whole lacked coherence and argued that without Rupert's energy holding it together, News Corp would not survive. In the mid-1990s, finance journalist Kevin Maney put this view strongly: 'There is only one true global media baron. Too bad his company could be destined for oblivion.' When Murdoch 'is no longer there, the company will most likely be no longer there either'. It was difficult to predict what would happen to this 'worldwide media hodgepodge' when it is 'Murdoch-less'. Since the 2013 split in the company, these questions have been more often asked about the publishing company, News Corp, rather than the entertainment company, Twenty-First Century Fox. In an era of rapid change, no forecasts about the future of media companies can be made with any certainty, although the two companies are probably as well placed as any of their competitors to meet the coming business challenges. 
Whatever the future, the six-decade journey from a single afternoon newspaper in Adelaide to a global multi-media power is one of the most remarkable in corporate history. In the process, one individual, starting with a modest inheritance in 1953, achieved a net worth of $11.2 billion by 2013, to become the 91st ranked billionaire on the planet.

3 Midas of the media

Murdoch's business strategy

Murdoch was depressed when his parents 'ordered him off to Oxford'. He had assumed he would follow his father into the Australian newspaper business, although he had not yet developed strong ambitions to do so. He was enrolled in philosophy, politics and economics, but his real focus of study was tabloid newspapers. After graduating, he spent some months subediting on the _Daily Express_ , under chief sub Ted Pickering, whom he later praised as 'my first great mentor'. His months in what he called Beaverbrook's brothel left him with 'an abiding admiration for the craft of the London subeditor'. By the time Murdoch left England in 1953 he had developed strong ambitions – he wanted to be a publisher of tabloid newspapers, to return to Fleet Street, and one day to own the _Daily Mirror_ , then at the peak of its powers. He spent the next decade and a half developing his Australian empire, and learning all aspects of the newspaper business. He loved the subediting side, and would spend Saturday ripping apart his Perth _Sunday Times_ , reorganising it, and 'ordering stories to be rewritten with more urgency, color, and exaggeration; thinking up sensational, blaring headlines; cleaning up the paper's layout and dirtying its tone'. Then, through the 1960s, he honed his skills in Sydney, one of the few cities in the English-speaking world to have competing afternoon newspapers. It was not only news presentation, however, that interested him; it was the marketing and production aspects as well.
When he returned to Fleet Street and the _News of the World_ , the printers told him it 'was impossible to print a tabloid on presses configured for a broadsheet, [but] he dumbfounded them by showing how it could be done'. Murdoch was ready – and the situation was propitious – for his first and most important success, the one on which his future scope for expansion rested. The London _Sun_ , which he acquired in 1969, became 'the single largest source of Murdoch's publishing profits and the pillar of his capacity to borrow money'. As author Piers Brendon noted, 'Murdoch loved the _Sun_ , which tripled its circulation [to 3 million] within four years and in 1978 overtook the _Mirror_.' Ever since, it has been Britain's top-selling newspaper. 'Murdoch knew precisely what kind of paper he wanted the _Sun_ to be.' It was aimed at what he saw as the gap that had opened in the market because the _Mirror_ 'was getting above its readers, trying to push them up-market against their will'. He would provide 'a tabloid modeled on the _Mirror_ of the 1940s and 1950s, before it had become seized by ideas above its station', had become 'preachy and teachy', bogged down by liberal earnestness, gentrified. He was determined not to have 'any of that up-market shit in my paper'. The formula by which Murdoch and his first editor Larry Lamb pursued this has become famous. The most notorious ingredient was the emphasis on sex; in particular the regular topless page 3 girl, who first appeared on the paper's first anniversary, in November 1970. The paper looked for any opportunity to be risqué, such as having pussy week, which was about cats: The _Sun_ was saucy, even naughty, but it was all good clean fun. The sex material was informative and amusing and, unlike porn, was not designed to induce the 'leer and the snigger'. The slogan was 'make it breezy, not sleazy'.
When Roy Greenslade, a subeditor on the paper, re-read the files decades later, he found a 'relentless use of sex': There was a sexy book of some kind serialized every week, such as _The Sensuous Woman_. The women's department, Pacesetters, churned out endless features on the theme of how to have better sex lives. There was a radical change in news priorities, with traditional notions of news being downgraded, and areas Murdoch and Lamb believed were of greatest audience appeal highlighted. In 1968, coverage of sport and show business made up 33 per cent of the _Sun_ 's editorial space, but 30 years later that proportion had nearly doubled, to 63 per cent. (For the _Mirror_ the corresponding percentages were 31 and 52.) While sport and crime had long been staples of tabloid journalism, the new ingredients were celebrity and television. In Piers Brendon's view: The tabloid dealt in fantasy as much as fact, especially when conjuring up melodramas about the lives of the stars of television soap operas. But, Murdoch made the _Sun_ snappy, pithy, sexy. He got what he wanted, a 'tear-away paper with a lot of tit'. In Lamb's view, the readers would get the headlines and basic facts from the television and 'look in their _Sun_ for entertaining gossip'. This was also a mantra of Murdoch at this time: because of television, newspapers were more and more about entertainment. This allowed them to invest much less in journalists than their competitors. As authors Peter Chippindale and Chris Horrie note, 'With only 100 hacks, the new _Sun_ was ludicrously lightly staffed compared to papers like the _Express_ and the _Mirror_ , which had more than 400.' As well as employing fewer reporters, Murdoch has always been notoriously stingy with news-gathering expenses. John Lisners remembered 'Rupert's penny pinching ways' on the Adelaide _News_ : 'Go nowhere. Cover everywhere. Pay nothing.' 
He was able to do this because there was over-staffing at other papers, and because he took reporting responsibilities less seriously than they did. He also economised on accommodation: the staff of the _Sun_ was squeezed into the _News of the World_ 's Bouverie Street premises. While Murdoch was frugal with many expenses, he increased the expenditure on promotion and marketing enormously. Indeed the _Sun_ was a pioneer in advertising newspapers on television. This was already a common practice in Australia, but was still virtually unknown in Britain, and Murdoch's big investment in marketing brought large circulation gains. Beyond this, however, the _Sun_ connected with its readers. Its humour and cheekiness were a large part of its appeal, and the paper was good at self-promotion. When a library in Sowerby Bridge decided not to have the _Sun_ on display any more, the paper played it up. It offered to pay for new library rods and provide free copies. It also put on a competition where first prize was a free weekend in Sowerby Bridge, and second prize was a whole week. When an article in the _Financial Times_ about the paper's circulation success referred to the Soaraway _Sun_ , the paper adopted it as its marketing slogan. According to writer Michael Wolff: The _Sun_ , with profit margins as high as 60 or 70 per cent, [became] the most significant part of his business and [remained] so for nearly 20 years... the _Sun_ 's success has transformed even Murdoch's idea of the tabloid. He feels he has found the secret. It is a secret that did not travel as well as he hoped. In particular, he was unable to repeat his British success in America. He did increase circulation in some of his papers, when he substantially increased their promotional budgets. American newspapers derive an even higher proportion of their income from advertising than do British tabloids, but his down-market approach was less attractive to advertisers. 
At one stage the _New York Post_ , for example, had 20 per cent of newspaper sales in New York, but only 7 per cent of the advertising. There is a famous but probably apocryphal story that when Murdoch tried to get Bloomingdales to increase its advertising, an executive replied, 'But Rupert, your readers are our shoplifters.' Indeed sometimes the Murdoch formula was counter-productive. When he bought _TV Guide_ , he thought the magazine's mid-market journalism was 'far too cerebral', and took it down-market. During the 10 years that News Corp owned it, circulation fell from over 17 million to below 13 million. Nevertheless, the rise of Murdoch's _Sun_ is perhaps the most stunning business success in the history of English-language newspapers in the last several generations. There is no doubt that Murdoch had a special feeling for tabloid newspapers, and enjoyed being part of them: 'The energy you feel in a good newsroom comes from speed, good reflexes, and that highest Murdoch standard, a lack of pretense.' Today, as newspapers are under such threat from the internet, Murdoch is seen by many as their champion. It would be wrong to over-romanticise this, however. His liking for newspapers has always co-existed with an ambivalent attitude to journalists; indeed his career has been peppered with frequent expressions of dislike – 'I despise all journalists'; 'You know what journos are like, particularly journos overseas. They're bone idle.' When he was told that a _News of the World_ journalist had dropped dead, Murdoch is said to have responded, 'Well, it wasn't from overwork.' To use the journalistic cliché, Murdoch may have ink in his veins, but it should be stressed that it is subeditors' rather than reporters' ink. He has consistently under-invested in reporting. Just as the _Sun_ employed far fewer journalists than its competitors, Murdoch's Sky News was similarly lean. 
In 1989 it broadcast 14.5 hours a day with a staff of 260 and a budget of £30 million, while ITN did 6 hours of news each day with a staff of 930 and a budget of £64 million. His Fox News Channel, for example, employed fewer than half as many journalists as its rival, CNN. At the time of the Iraq War, Fox News had 6 overseas bureaux and CNN had 31. Where Fox News had 1250 employees, CNN had 4000. While competition can be a constructive force, it also generates destructive emotions, exemplified by some of the sentiments expressed by Murdoch and his employees. From the mid-1990s onwards, the _News of the World_ newsroom 'was extreme, even by Murdoch's standards', as he exhorted them to 'smash its closest competitors, the _People_ and the _Sunday Mirror_ '. Former _Sun_ editor Kelvin MacKenzie said that what he likes about his _Daily Mail_ rival Paul Dacre is that 'each day he arrives at work determined to crush the life out of his rivals'. In a 2007 visit to Melbourne, Murdoch complained that the _Sunday Herald Sun_ , which was selling around 600,000, 'should be selling a million by now'. When his executives said Melbourne could be a one-newspaper town in five years, Murdoch replied through gritted teeth, 'That has to be our goal.'

Television, movies and sport – just other businesses

As Murdoch moved into television, he took similar approaches. He had thrived in situations of circumscribed competition, where inefficient, financially self-indulgent practices had developed, where there was thus scope for cutting costs, and where, at least in his eyes, the competitors' product had lost touch with its audience. This was the case not only with his greatest success, the _Sun_ , but with his assault on the three-network system in US television. In both cases he had the advantages of an outsider. He was not a prisoner of the past, inhibited by ingrained assumptions which prevented established competitors from seeing alternatives.
In both cases he was able to introduce cheaper and more audience-oriented products. Many people directly involved in media are focused primarily on the intrinsic merits of the content – they want to make a great movie or TV program or produce great news reporting. Many of Murdoch's contemporaries at Oxford were interested in journalism, but most were probably interested in reporting and writing, and in quality papers. Murdoch, by contrast, was most interested in tabloids, and in production and presentation, in what made a paper attractive to an audience. His interest in film and television programs was similar: less in the intrinsic merits of what was on the screen and more in the total value chain and how it would appeal to an audience. 'Murdoch's idea of a successful TV businessman was his friend, the cable king, John Malone. Malone understood that the real media business was about distribution, leverage, monopoly.' Murdoch was always on the search for the consumer tipping points – what will be the critical factor that will make affiliated television stations change their network allegiance? What will entice consumers to subscribe to a pay television service? During the 1990s, his answer was sport. In 1992, as noted in Chapter 2, BSkyB won the rights to English Premier League, outlaying 'what then seemed the astronomical sum of £304 million'. It was Sam Chisholm's decision, and many had doubts, but within weeks BSkyB had signed up one million new subscribers. It went from a £47 million loss in 1992 to a £62 million profit in 1993. The next year brought the single most important turning point for Fox TV, as it outbid CBS for the rights to American football, the NFL. 'The NFL deal validated Fox' and enabled it to win a wide range of other sports broadcasting rights. In October 1996 Murdoch told shareholders at the News Corp annual meeting: 'We intend to... use sports as a battering ram and a lead offering in all our pay-television operations.' 
'Sports programming commands unparalleled viewer loyalty in all markets,' he said. By 1997, the company was paying more than $1 billion a year for sporting rights. Murdoch's approach to sport was simple – you need to have what the audience wants, and then you monopolise it. Of course, not all of these deals paid off so dramatically, and as competitors caught up the cost of the 'battering rams' escalated greatly, so that in 2002, News Corp's chief operating officer, Peter Chernin, admitted that they had 'clearly... overpaid' for sporting contracts that year. Murdoch had produced a massive 'redistribution of wealth from media groups to professional sports', and in the process, television income had become central to sports budgets. In his review of the rise of live sports broadcasting, Michael R. Real says the two most influential figures were Roone Arledge and Rupert Murdoch. Their contributions could hardly have been more different. Arledge, the head of sports broadcasting at the American ABC network, was intent on bringing the drama, struggle and human emotion of sport to TV viewers, and pioneered, for example, the instant replay. Murdoch's interest lay in the fact that 'sports had the greatest marketing power'. Just as 'Murdoch may be the first person to come to Hollywood who has no interest in the movies or stars or show business itself', so he is also, among people centrally involved in the sports-media complex, the one least interested in sport. The adolescent who hated organised sport, the adult who disliked being a spectator, let alone a participant, except when he was betting at horse races, had become the media businessman who saw sport as an unparalleled commercial opportunity. When he bought the Los Angeles Dodgers baseball team in 1998 (sold five years later without great on-field success), it was reported that 'he had never been to a game of baseball'. Nor does he take a great interest in the other sports he is involved in.
When he visited the News Corp-owned rugby league team, the Melbourne Storm, he asked coach Craig Bellamy whether the coach was allowed to talk to the players at half-time, perhaps thinking it had rules like rugby union had had in his youth, when such interaction was not allowed. Murdoch's lack of interest in a product has sometimes led to miscalculations. News Limited in Australia did great damage to both its finances and its reputation with its attempt to go outside the established rugby league competition and start its own 'Super League'. The attempt cost it at least $500 million in the short term. In the long term it led to Murdoch having to share Fox Sports with Packer; another major financial penalty. When News Limited became co-owner of the new, merged competition, its lack of appreciation of the traditions and emotional attachments led it into new blunders. It excluded the South Sydney Rabbitohs, who had won 20 premierships but struggled in recent seasons. The result was a series of legal cases, eventually won by the Rabbitohs. The club had a march of support, which the _Financial Review_ estimated involved 80,000 fans, at the time the biggest demonstration in Sydney since the end of the Vietnam War. Murdoch's _Daily Telegraph_ added its own insult by reporting the march on page 65. The belief that supporters' loyalties and emotions can be easily overridden and redirected shows a degree of contempt, as well as a lack of interest and knowledge.

A unique mix of temperament and skills

Murdoch told his biographer, Tom Kiernan, that from the beginning he knew that 'newspapering was like any other business: Expand or perish!' A policy of expansion may seem a commonplace of business strategy, but Murdoch's appetite for expansion seemed infinite. Each achievement was but the forerunner to the next: at a party celebrating the first anniversary of the London _Sun_ , he amazed Roy Greenslade, then a subeditor on the paper, by asking, 'Where should I expand next?'
'The guy is just... insatiable,' exclaimed Clay Felker, the former publisher of _New York_ magazine, still angry more than a decade after being one of Murdoch's victims. One of Murdoch's competitors, Lord Stevens of Express Newspapers, thinks, 'He's got the appetite of a snake. He'll swallow a sheep if it's passing.' To an anonymous admirer, 'trying to stop him being predatory is like trying to turn a vulture into a vegetarian'. Murdoch is unusually impervious to the personal vanities and status rewards that so many other media moguls value. He is not interested in recognition by what he calls the Establishment. It is unthinkable that he would have done what Conrad Black did – reorganise his empire in order to become a member of the House of Lords. In Wolff's view, Murdoch: has no interests outside his work: not sport (he may be the only Australian man not interested in sports), not culture, not reading, not movies. He has no social aspirations either. Money itself isn't even that compelling to him. He's eerie, or scary, in his lack of lifestyle desires and need for approval... His penuriousness, his aversion to pretense, his disdain for grandness or affectation or – his worst, most damning word – elitism, is the DNA of his company. For Wolff, this gives him a clarity of focus and perception in business: Most, perhaps all, of the entrepreneurs attracted to Britain and rising in it are looking for a broader kind of approval – they have major social aspirations – whereas Murdoch is only market driven... Britain is something to take advantage of, not to be a part of. Similarly, 'It's seldom happened, perhaps never happened, that Hollywood, with its charms and blandishments, fails to seduce (usually fleece and seduce)', but Murdoch has remained immune. However else he is remembered, Murdoch will always be known as a deal-maker extraordinaire. 
Some of the most celebrated of today's entrepreneurs have built their corporations around one or a few major innovations, such as Bill Gates at Microsoft and Steve Jobs at Apple. Murdoch has overwhelmingly grown his empire through acquisitions. Deal making fits Murdoch's strengths and temperament. He loves the game; the thrill of the chase; the buzz of the deal. On topics he is interested in, Murdoch is an information sponge, willing to listen to the advice of anyone. He is also a lover of gossip. His 'intelligence service is the best in Fleet Street'; he 'would talk to anybody who had a good piece of information for him;' 'Murdoch without a telephone was like an alcoholic without a drink'; 'Murdoch, a reluctant socializer, is a brilliant networker. He's one of the early geniuses of the form.' Wolff found, during many months of interviewing Murdoch, that: the most reliable way to hold his interest was to bring him a rich nugget. His entire demeanor would change. He'd instantly light up. He'd go from distracted to absolutely focused. He not only scans the business horizon for opportunities, but for information about the vulnerabilities of his competitors and potential targets: 'Assessing his competitors is the one place where Murdoch is systematically analytical rather than reflexive and instinctive. He loves to analyse other people's weaknesses.' In Murdoch's first big deal in Britain, the purchase of the _News of the World_ , a bitter Robert Maxwell correctly observed that Murdoch had caught 'a big fish with a very small hook'. Murdoch manoeuvred brilliantly, first to join with Carr to see off Maxwell, then to dispose of Carr. Similarly, he was able to purchase the _Sun_ at a bargain price because he saw and exploited the impossible situation that Cudlipp's IPC was in, committed to a paper that could only be a drain on their finances. Murdoch had a possible course of action – to compete with their major title, the _Daily Mirror_ – that was not available to them. 
Wolff observed that one of Murdoch's strengths as a deal-maker is 'the courtship'. 'Whereas other businessmen run the numbers, Murdoch deals with personalities.' This was particularly the case with his purchase of _New York_ magazine. When Felker offered Murdoch a stake, Murdoch immediately guessed that Felker would be doing the same to others, and thus that the company was in play. Murdoch prevailed through his decisiveness and through his money, but in addition: The striking thing is not the disputatiousness, irascibility and egocentricity of the _New York_ magazine board – although, even for a media company board, it is rather extreme – but that Murdoch is able to corral and manage them. This points to another of his less obvious deal-making strengths. Takeover situations are often charged with high personal drama. Murdoch has been able: to deal with, and prevail in, highly personal, profoundly emotional transactions in which the stakes are not just financial but deeply related to ego, turf, and family. This impatient man has an extraordinary tolerance for the ambivalence of nonrational players – and a keen eye for their weaknesses. Of course Murdoch prevailed in many deal-making situations simply by being the highest bidder. This was sometimes the result of his competitiveness and his gambler's love of the adrenalin of risk. But equally, he had a keen grasp of what was the main game and what merely a sideshow, of what would bring value in the long run. At his best and most visionary, this means that Murdoch has seen potential that others haven't, has seen how what seems an isolated transaction fits into his larger jigsaw. So, by most accounts, News: had overpaid for Star TV, but did so because Murdoch was convinced he had at least a two-year head start over his rivals, who would have to wait until new satellites were launched before they could begin broadcasting.
Similarly, he was willing to pay a 'transformation premium' in 1985 for Kluge's six television stations because they were all in major markets and could be the foundation for a fourth network. When he was acquiring Twentieth Century Fox, he was buying not just a movie studio, but a film library and production facilities that would aid his television ambitions. On the other hand, it is easy to exaggerate how strategic Murdoch's actions are. He once claimed: A lot of people claim they have ten-year plans or five-year plans or something. But basically, the most successful businesses are opportunistic and you take your opportunities when they come. Murdoch has often had the capacity to move more quickly than his competitors, making decisions aided only by a small team of advisers. This is a testament to his ability to hold all the key factors in his head, but also to a governance structure where all power is vested in him. When News Corp and Pearson were both interested in acquiring Star TV, the Pearson negotiators needed time to consult their board, but Murdoch made the deal to purchase 64 per cent for US$525 million 'in a matter of hours. Murdoch didn't bother to inform his own board until later that the deal had been done.' As well as an unerring eye for the weaknesses of his adversaries, and for what will clinch a battle, he also has a keen grasp of the locus of control; of when he moves from being supplicant to supremo. Then he displays an unashamed willingness to use that control as he wishes, unencumbered by any promises he has previously made. During negotiations he distinguishes clearly between what can and what cannot be enforced later, what has lasting importance and what has only temporary nuisance value. Although he has often been angered by journalists protesting at the prospect of his takeover, he has also known that such protests are not usually going to affect either the outcome or the long-term viability of the acquisition. 
Deal-making involves a particular set of skills: early and accurate intelligence; clarity of perception and priorities; suppleness in negotiations across both material details and the emotional, nonmaterial aspects of the deal; and decisiveness. Finally, it involves a clear perception of what counts now and what counts next, not what used to count. As Murdoch advised one of his Fox executives after a show was cancelled, 'cut your losses, and don't look back'. Beyond his skill at the transactions themselves, what distinguishes Murdoch's decisions and deals is his willingness to take on risks that most businesses and individuals would shrink from. He has often described himself as a gambler. Others have talked of 'his addiction to risk'. It is easy in retrospect to underplay what radical leaps into the unknown some of his decisions have represented. Partly, of course, this was because of the sums of money involved – some of these ventures could have sunk his whole empire. In other cases it involved a step into the unknown: most obviously with his satellite ventures in Britain and Asia. Sometimes it involved taking on seemingly all-powerful opponents or breaking a long-established mould. Sometimes what seems obvious in retrospect was very uncertain at the time. The moment was ripe for embattled British newspapers to confront the industrial problems that were costing them so much money, but Murdoch alone 'possessed the resources and ruthlessness – or as admirers said, "the guts and tenacity" – to spearhead a revolution of inestimable benefit to the newspaper industry'. And if things had gone wrong in Murdoch's move to Wapping, at the very least it would have severely damaged his cash flow at a critical period. As important as his willingness to take large risks has been his 'sometimes startling persistence'. 
Either because he does not wish to admit defeat or he does not want to give satisfaction to his competitors (whom he often refers to as enemies), Murdoch has kept going when it seemed he must give up, and eventually has prevailed. He did this in the 1960s with the _Australian_ , when even he felt at times that it was within weeks of closing. In 1988, the Fox television network was hemorrhaging money so badly that News Corp's financial director told reporters that if the losses continued News would have to dump the new network, although the following day he claimed he had been misreported. Murdoch also kept going with his Sky satellite service to the UK. If his competitor, BSB, had known the full extent of his problems, they probably could have driven him out of business. But a combination of bluff and courage meant he was able to secure a compromise on favourable terms. While courage and persistence are universally admired traits, Murdoch's distinctive flavour also comes from his combativeness. Competition and conflict are central in his motivations. Bruce Dover found, when working with Murdoch, that he 'loved nothing better than proving the naysayers wrong'. He is often the target of fierce criticism, but usually Murdoch is 'energised' by 'blatant declarations of hostility'. Any conflict also becomes a precedent: 'when someone's trying to run you down you try to protect yourself, so that, the next time, someone else doesn't try to run you down'. Indeed conflict is so central to Murdoch's world view that Wolff concluded: 'If you are against him, then you are his enemy and he fights you – it becomes binary. Sometimes it seems that he creates enemies just because it simplifies the world.' After his takeover of the _Wall St Journal_ , he and central executives, thinking about their place in history, were toying with branding for News Corp. Murdoch's 'preferred branding statements for News were about kicking dirt in people's faces. 
His true message was that he was the winner.' The competitiveness can help focus organisational targets and energies. As long ago as 1959 Murdoch was determined that his Southern TV would beat rival Channel 7 to be the first to broadcast in Adelaide, and he did whatever it took to win that race. In a similar scenario, he won the race to be the first satellite broadcaster in Britain, even though it involved going to air with many unsolved technical problems, and in the short term led to horrendous financial losses. So he took on 'his weak-willed establishment competitors', confident that he could bear more pain than they could. But combativeness can be counter-productive. As Chapter 2 showed, Murdoch's attempted partnership with GM collapsed because of their well-founded wariness of his style and intentions. 'Rupert is so aggressive that he doesn't really make a good partner,' said John Malone when trying to help him during a dispute in 1997. Moreover, declarations of conflict energise opponents as well as one's own side. Murdoch is always suspicious, even paranoid, about what his competitors are doing and whether or not they can be trusted. He broke ranks with the other two New York publishers during an industrial dispute, hoping to gain an advantage, but it merely cemented their hostility towards him. So when, in April 1982, he announced that he wanted to buy his main New York competitor, the _Daily News_ , and said that the two papers were engaged in 'a dance of death which must end with the disappearance of one or both newspapers', the response was predictably defiant. Its then owner, the Chicago Tribune company, denounced the bid as 'a transparent attempt to destroy and shut down' the _News_ , and the newspaper published the statement under the headline 'Trib to Rupert: Drop Dead'. Murdoch's risk-taking, determination and combativeness have often fuelled his success, but they have also often fuelled opposition to him.
Monopoly muscle and vertical integration

In Murdoch's own view, as told to a US congressional committee in 2003, 'my life has been built, and my business, [on] starting competition and starting up against other people and providing diversity'. In 1979, when the Australian Broadcasting Tribunal had initially blocked his application for the Channel Ten licence, Murdoch wisely decided to come to Sydney and appear himself. I was in the Sydney hearing room that day, and remember vividly the power of Murdoch's anger. The Chairman, Bruce Gyngell, was often left floundering as Murdoch made his accusations. His anger served his strategic purpose, and if it was feigned, it was brilliant acting. He proclaimed his patriotism, among other claimed virtues: Who else has risked his every penny, his reputation and his career in fighting for what he believes is right for this country? My life has been spent fighting them [the other major media companies], starting with a very small newspaper. A previous conservative government brought in a special law to protect those monopolies [Fairfax and the Herald and Weekly Times] and to make certain that outsiders like myself could never get as much. He attacked the 'gutter campaign' against him by the Fairfax press, 'trying to paint me as a crazed tycoon who cannot be trusted with a TV company'. As late as 2008, when he was addressing a Washington public meeting about his purchase of the _Wall St Journal_ , he disarmed the audience by saying: Is all media in one hand bad for democracy? Absolutely. We are a tiny fraction of the media landscape. There are millions of voices out there. We certainly don't have any monopolistic effect. Everything I have done in my life has been to create competition... We want to give people choices. The more choices, the better it is. To think that the media is concentrating is ignoring the facts. It is being fragmented in a thousand ways. I would agree with you that that's good.
It doesn't suit my business, but... Murdoch's self-portrayal – as the challenger of establishments, who achieves success with lean and mean structures while building market share by giving audiences what they really want and not what 'elites' think they ought to want – has long outlived its reality. It's 'yesterday's cliché'. Now he has elite connections and support, deeper pockets than his competitors, and all the advantages of a big company over smaller companies. In the early part of his career, he was often the entrepreneurial outsider who deployed novel strategies to build market share; the later part has been more about leveraging size and monopoly power. The size and diversity of the Murdoch empire means that he can cross-subsidise (and cross-promote). News Corp can sustain losses for a period – or forever – in order to drive competitors out of business. In September 1993, Murdoch cut the price of the _Times_ from 45p to 30p. Its circulation increased at the expense of that of its rivals, and when the _Daily Telegraph_ followed it down to 30p in May 1994, Murdoch cut the price again, to 20p. Viewed in isolation, this made no business sense. About 15p of the price went to cover distribution, and production cost 16p a copy, so despite advertising income, every issue it sold was costing money. The paper lifted its circulation from 354,000 before the cut to 670,000 at the end of 1995. But the first nine months of the price war cost the company £45 million. Over the five years to 1998, Chenoweth estimates, the price cuts cost the _Times_ £150 million. However, unlike its competitors, such losses did not 'imperil its existence'. This was a classic case of squeezing competitors through predatory pricing; survival depended not on the merits of the particular products, but on total corporate size. An important aspect of Murdoch's later business strategy has been the concentration on vertical integration. 
Dover notes that 'Murdoch was a total believer in vertical integration, by which you produced and owned not just the content, but the means to distribute and redistribute in every territory in which you operated.' In subscription television he increasingly aimed to control the delivery platform (the hardware, either satellite or cable), the individual channels shown on the service, and finally the production facilities that make the programs. The key advantage in vertical integration is that pricing can be discretionary: one can charge a greater or lesser amount to different suppliers to skew the choices available to consumers – supermarkets do this when they accept a smaller margin on their own brands. In the extreme, it can allow a company to exclude competitors, and thus deny consumers the opportunity to choose. In pay television, the negotiation between channels and delivery platforms is rather arcane and, importantly, the public has no direct involvement. In 2010, there was a fierce dispute between Cablevision and News Corp. Cablevision ran an advertisement in the _New York Times_ accusing News of demanding extortionate amounts for some channels and threatening to pull all its channels off the air. A few weeks later, Cablevision surrendered, but not gracefully. Its public statement complained: In the absence of any meaningful action from the Federal Communications Commission, Cablevision has agreed to pay Fox an unfair price for multiple channels of its programming, including many in which our customers have little or no interest. In this case, because News had channels that Cablevision wanted, it could pressure Cablevision to accept and pay for others that it did not want. Murdoch found himself in a much more precarious position in 1996, when Time Warner initially refused to carry his new Fox News service because Ted Turner did not like the idea of the network carrying a competitor to his CNN. 
This was a public humiliation for Murdoch, but even more importantly, he had already spent $300 million and had committed to hundreds of millions more in the coming months. He thought he needed 25 million subscribers to be viable and had less than half that. After some fiery public controversies and lawsuits, Time Warner agreed to carry the new channel. Soon after, Murdoch again found himself vulnerable, when his satellite plans collapsed. Though he had insulted the cable operators, he had to come crawling back to them. The consequence was that 'most of the large cable companies were giving Murdoch a slow no on carrying his channels'. These were years of great stress and crisis for Murdoch. The master manipulator of the process was John Malone, who, by the 1990s, had grown into the biggest cable operator in America. Al Gore had dubbed him the Darth Vader of the industry. Malone hadn't 'mustered some of the most impressive profit margins in the business by coddling the subscriber base'. As new cable channels were launched, they needed access to subscribers and Malone 'drove a tough bargain. He demanded that cable networks either allow [him] to invest in them directly, or they had to give [him] deep discounts on price.' He made a series of strategic partnerships, with both cable operators and channels. Malone's idea 'was to collaborate even with your enemies – especially with your enemies – to avoid the large and costly fight of real competition'. There were accusations that Malone went beyond tough business practices into shutting out rivals. The Home Shopping Network charged that it could not get access on his network because it was a direct competitor with the QVC network, part-owned by Malone. Murdoch has also – since his late 1990s mishaps – become better at either being the gatekeeper himself or allying with one, as in DirecTV and BSkyB, or at having channels so valuable that no cable operator will be willing to be without them.
The subscription television industry rewards bigness. The difficulties in competing with a corporation with the financial strength and attitude of News International were vividly demonstrated in a case that dragged on through the final years of the Blair and Brown Governments. In 2006, the newly formed Virgin Media, growing out of the cable company NTL plus Richard Branson's Virgin group, was negotiating a merger with ITV. The new group's cable operations would have the potential to provide much tougher competition for BSkyB's satellite service. Before these negotiations were consummated, 'James Murdoch swooped in... and bought 17.9 per cent of ITV, seriously lousing up the Virgin Media deal.' This made BSkyB ITV's largest shareholder. Murdoch paid 135p a share, for a total cost of £940 million. Over the next three and a bit years, first the Office of Fair Trading, then the media regulator Ofcom, the Competition Commission, then the Competition Appeal Tribunal and finally, in January 2010, the Court of Appeal all ruled that BSkyB would have to substantially reduce its stake in ITV. ITV shares were then trading at 51p a share, so when reducing its stake to 7.5 per cent, News made a loss of around £340 million. Although a partnership between BSkyB and ITV would have been beneficial to News, nearly all observers thought that News would not be allowed to keep the shareholding it had purchased. However, the key was not the outcome but the time it took. By normal business criteria, it was a bad investment. News had bought at a premium, and eventually had to sell at a substantial loss. But its key aim had been to prevent the emergence of a more powerful competitor, and in this it succeeded. It was, almost literally, buying time.
When interviewed in Australia during this protracted process, Rupert Murdoch: said concern about media ownership was an issue '10 years out of date' and described as 'paranoid' a decision by regulatory authorities to investigate the 17.9 per cent stake he took in commercial television network ITV. However, this dismissal of ownership as irrelevant was belied by the very fact that News had made such a large purchase with the principal aim of stopping ITV and Virgin Media from merging. It showed that he still viewed ownership issues very seriously indeed. Perhaps News thought it would be able to retain its stake. Possibly it did not foresee the fall in the ITV share price. Either way we have a corporation willing to risk losing a couple of hundred million pounds, and deliberately putting itself in breach of the law, in the view of most observers, in the knowledge that by the time the legal processes were exhausted it would have thwarted the plans of its competitors. This is a long way from Murdoch's self-portrayal as the entrepreneurial outsider taking on the establishment.

4 Midas's lost touch

The business case against Murdoch

In March 2000, Murdoch was diagnosed with prostate cancer. The following month, the news leaked, and News shares fell $10 billion. In contrast, in May 2012, when a British Parliamentary Committee judged that he was not a fit person to run BSkyB, 'the market's immediate response was to mark up News Corp stock', because it signalled that Murdoch's hold on the corporation was weakening. Indeed some analysts then talked of the Murdoch discount; most spectacularly, Bloomberg analysts asserted that News Corp, before the split, was worth at least 50 per cent more without Rupert Murdoch. For the first several decades of his career, most people in business and the media had underestimated Murdoch.
When he made his first foray into London, taking a share in the _News of the World_ in 1968, the owners, the Carr family, according to Murdoch, expected that he would be 'a Sancho Panza to Carr's Don Quixote'. According to Wolff, in New York in the mid-1970s, most assumed Murdoch was 'easy pickings' and 'classic dumb money'. His extraordinary success then created an aura of genius and a sense that he had the Midas touch – not least among his own publications and employees. He became 'the best businessman Australia has ever produced', according to the _Australian_ 's media columnist Mark Day in 2011, with 'the super-human energy of a 35-year-old', according to a colleague. When Wolff interviewed Rebekah Wade (now Brooks), then editor of the _Sun_ , she told him with great intensity that she had considered Murdoch from all angles, and her conclusion was that he was 'a genius'. However, this energetic genius essentially failed to conquer the last two big business frontiers he confronted – China and the internet. These failures are understandable given the difficulties involved, but they undermine any idea of Murdoch infallibility, and in each case aspects of Murdoch's style and decisions contributed to the failure. When Murdoch bought Star TV in 1993, his vision was for Direct to Home (DTH) satellite broadcasting, paid for by subscription and advertising, and his primary target was to capture viewers in the world's largest emerging market: China. China was not only the most populous country in the world, but one in the midst of unparalleled economic growth, fuelling an ever-expanding consumer market. Star has proven to be a very good long-term investment, especially in India, but it failed miserably to deliver on Murdoch's Chinese ambitions. Famously, Murdoch began badly. 
In a speech in London, on 1 September 1993, five months after purchasing Star, Murdoch proclaimed that 'advances in the technology of telecommunications have proved an unambiguous threat to totalitarian regimes everywhere'. Coming a few years after the violent suppression of demonstrators at Tiananmen Square, this comment seemed directly aimed at the Chinese Government. According to Dover, who was then working for Star, when the reports of Murdoch's speech reached Beijing, Premier Li Peng was incandescent with rage. Dover and Star's CEO, Gary Davey, joked that 'word for word, Murdoch's remarks were probably the costliest ever uttered by an individual'. Within months, the Chinese Government had banned the sale and use of satellite dishes anywhere in China. As Dover observed, 'Star TV was a business without a revenue stream.' Advertisers, not wanting to displease the government, stayed away. In many ways the main narrative in the next decade was Murdoch trying to woo and reassure the Chinese Government that allowing his company, a foreign broadcaster, unfiltered access to the Chinese population would not be a threat to political control. This led him into several actions which properly attracted condemnation in the West. He banned the BBC from the Star satellite. He reneged on a HarperCollins contract to publish a book by Chris Patten, the last British governor of Hong Kong, and it became a bestseller for another company. He spent close to $1 million to secure the rights to Deng Rong's boring and poor-selling hagiography of her father, Deng Xiaoping. He disparaged the Dalai Lama, and maintained a determined agnosticism about human rights abuses. He told business journalist Wendy Rohm in 2001: I don't know. It's the popular thing to say that China abuses human rights. Their answer is there are plenty of human rights abuses in this country. And in every country.
Murdoch sought to build goodwill and strategic relationships through a variety of avenues, including a pioneering joint internet venture with the _People's Daily_ and extensive television co-productions with Chinese companies. In Dover's view, Murdoch was 'an extraordinary catalyst of change in China'. But News Corp's investment in local television production did not lead to the strategic advantage it had anticipated. News had 'underestimated how quick the Chinese were at cloning anything that looked like a successful format and just how deep were their pockets'. At the time, in 2003, there were over 600 terrestrial TV stations, plus many cable operators, in China, and no effective protection of copyright. One reality dating program, for example, generated 200 copy-cat versions. But free access for households to DTH satellite did not materialise. To the extent that Murdoch failed in China because of factors under his own control, it was that he was too impatient and aggressive. By 1998, he was hopeful of a meeting with President Jiang Zemin, but in a meeting with the Education Minister, he lost patience and demanded the Chinese Government lift the ban on DTH satellite television. That meeting quickly ended, and the meeting with Jiang did not eventuate. Murdoch then removed Davey and Dover from Star's leadership and installed Gareth Chang, a Chinese-American, who seemed to offer hope of more immediate and tangible progress. But 18 months later, Chang was also gone. Dover felt that the Chinese like continuity of relationships, a slow building of trust and understanding, and that Murdoch's changes of personnel put his cause back substantially. In particular, navigating towards business opportunities in China involved operating in the grey area, the nowhere land between what Chinese authorities would deem to be according to the law and what would obviously constitute a breach. 
In 1996, the President of the _People's Daily_, Shao Huaze, in a friendly conversation, advised Murdoch that: 'Giants who seek to walk in China need to learn to tread lightly.' This was advice that Murdoch seemed congenitally incapable of following. As Dover notes, 'When operating in the grey sector it also helps if you have what the Chinese call an umbrella, a person in authority who can provide a degree of protection.' In the last years of Jiang's reign, Murdoch and his executives were increasingly confident that they had the highest umbrella in the land, and became increasingly brazen. Instead of stepping lightly, 'Murdoch was stepping on toes everywhere.' Star was distributing its channel beyond what had been officially approved, and was illegally collecting subscription revenue. Even worse, it was bragging about it. This prompted competitors to complain about the company being given special treatment and about the breaches it was committing. In early 2005, Murdoch believed 'he was on the brink of a spectacular breakthrough in China'. Instead, by the middle of the year he was subject to an official crackdown. He admitted defeat and retreated from his Chinese ambitions. These ambitions were probably unrealistic anyway. His 'approach to China... was steeped in a sort of colonial mindset – [he seemed to feel] that somehow the Chinese would welcome a white man bearing gifts'. He was focused on his international competitors and on the issues of state control and official access, but Chinese commercial interests had other plans: they captured the market for themselves. In the end, in terms of its own business interests, 'News Corp's China adventure [was] a huge disaster.' Murdoch took the initiative in trying to advance his Star interests in China. With the internet, on the other hand, he had no choice but to react.
Early on, he was an internet sceptic, believing it was a passing fad; that stocks were overvalued and that it would destroy more businesses than it created. In 1996, he had the chance to scoop up AOL for $5 billion, but Microsoft head Bill Gates advised him against it. It later went up to $150 billion – 'It was a pretty serious opportunity we missed there,' Murdoch commented later. As the internet frenzy increased, he started to fear that he was missing out. In 1999–2000, at the very peak of the internet bubble, the company, led by James, made a series of purchases in the US. By the end of 2000 they were all 'failing or defunct'. James and Wendi then spent $150 million on 20 internet businesses in Asia, but 'within a year, none of these investments would be worth a fraction of the value paid for them' and 'within two years all but a couple had disappeared completely'. Rupert was unrepentant, telling his October 2000 AGM that unlike his media rivals, he had never been taken in by the dot com revolution in the first place, and that News Corp was less hurt by the bursting of the bubble than others. The bursting of the bubble was temporary, and internet developments quickly gathered pace again. Through the early 21st century, the Murdoch rhetoric about the coming challenge was well pitched:

The winners will be those who capitalize quickly on changing opportunities. The challenge is to move early and innovate often. This is not always a comfortable path, but it is the only one which will lead News Corp to success in the global market of the 21st century.

Similarly, he made the right noises about transforming established media, saying that too many newspaper executives thought the business was only about print:

In the coming century, the form of delivery may change, but the potential audience for our content will multiply many times over... Our real business isn't printing on dead trees. It's giving our readers great journalism and great judgement.
His leading executives proclaimed similar ambitions. News Limited CEO John Hartigan thought: 'We have the opportunity to move from setting the agenda each morning... to actually owning the agenda. All day. Every day.' But the achievements have not matched the rhetoric. All established media have found it hard to deal with the destruction of intellectual property values which the internet threatens. But Murdoch entered the new century with some particular and idiosyncratic misconceptions, which made News Corp's task more difficult. Murdoch was convinced that:

the television is going to play a bigger part in people's lives than the PC... The fact is, particularly with the satellite, you can build a very intelligent set-top box with a TiVo recorder or whatever; you can do a lot of things.

He also thought that 'the most profitable use of television advertising so far' would be when viewers could purchase products by just clicking on the screen. It is easy to make fun of these expectations, but few saw the nature of the transformations clearly at that time. In 2005, News Corp started to again make large internet-related investments. Characteristically, Murdoch proclaimed that News Corp must transform itself into a 'digital powerhouse', and forecast that News Corp could rival Yahoo and Google, 'given time'. He reiterated this grandiose claim in 2008. He formed Fox Interactive Media, and three days later it bought the company that owned the social networking site MySpace for $580 million. A couple of months later Fox Interactive Media entered the video game market, buying IGN for $650 million. After spending $1.3 billion, Murdoch declared, 'we now have the most potent combination of relevant content and critical audience mass to forge a real and profitable presence on the Web'. But these investments did not prosper. MySpace looked to be a success for some time. In 2007, Murdoch told an audience:

You were all laughing at me for buying MySpace. What's that worth today?
It's worth more than twenty times what we paid for it. In the same year, he scornfully observed that 'Facebook is infinitely smaller than MySpace.' But in fact MySpace fell steadily behind Facebook, and News Corp sold it for $35 million in 2011, a $540 million loss. The noteworthy feature here is not only the losses that News Corp made, but the triumphalism that accompanied them. News Corp continued to experiment. It launched _The Daily_, the first newspaper made specifically for the iPad, in February 2011. Its cost structure appealed to Murdoch – 'no paper, no presses, no trucks'. But _The Daily_ closed in late 2012 – in Wolff's judgement, it had been 'a tone deaf, middle-market paper on a tablet'. News Corp has some profitable websites, and some of its media products have made at least partially successful adaptations to the internet, but these do not come close to compensating for its losses thus far. Leading Australian business journalist Alan Kohler was scathing about Murdoch's internet performance:

Poor News Corp. At the front line of one of the greatest revolutions in history, in which great opportunity wrestles daily with horrendous value destruction, and the company is cursed with a leader who was born 79 years ago, in the Great Depression.

But perhaps the problems run deeper than just age. Murdoch's business strategies are not well suited to the speed and fluidity of the new environment. The established Murdoch model was one of expensive acquisitions – newspapers, TV, major film studios, satellite – which then offered a strategic stronghold in industries where the parameters were well known and stable. In the digital world start-up enterprises are easier, and the environment – in terms of technology, consumption patterns and profitability – changes much more quickly. Murdoch tried to conquer the internet by throwing money at it, but News had no consistent strategic vision.
Murdoch sought to penetrate the world's biggest emerging market, China, and to cope with the media's biggest challenge, the internet. He devoted great sums of money and corporate energy to them, but in both cases the result was mediocre. Although in both cases failure can be partly attributed to his managerial shortcomings, both presented very difficult and complex challenges, and many companies besides News Corp similarly failed. However, there are two areas – personal and political indulgences – where Murdoch failed his shareholders, and these failures are much less excusable. Murdoch's personal indulgences involve placing his immediate control and dynastic ambitions for his family above the interests of other shareholders; his political indulgences have centred on his support for right-wing causes, and especially on keeping newspapers for this purpose rather than because of their shareholder value. Both types of indulgence have become ever more expensive. In his attitudes to the British monarchy, Murdoch has always been a republican, but in corporate life he has remained a monarchist. In 1998 he announced that his oldest son, Lachlan, would take over from him – 'The children selected him. It was their vote,' he said. In the eyes of the _Economist_, and many others, 'the interests of the family and those of the firm [we]re diverging'. The least expensive aspect of this has been direct nepotism. 'I'm a great believer in nepotism,' Murdoch told a business associate, and he has certainly put his belief into practice. There have been transactions and decisions involving each of Murdoch's three children by his second wife, Anna. James had dropped out of Harvard to start a hip-hop record label, Rawkus Records, with two friends. News Corp bought it for an undisclosed sum in 1998 (and James has worked for News Corp ever since). The label closed in 2004.
Whether James received an appropriate sum for Rawkus Records cannot be judged because of the secrecy, but News Corp's previous interest in hip-hop had been limited. That transaction passed with little notice, unlike the much larger purchase in February 2011 of his daughter Elisabeth's TV production company Shine, for A$628 million, 16 times its most recent annual profit. Shine was a much more obviously successful company, making programs such as _MasterChef_, and had programming that News Corp wanted. Nevertheless, a lawsuit by shareholders accused Murdoch of 'rampant nepotism', and of treating News Corp 'as a wholly-owned family candy store'. Lachlan left the company in 2005, but still receives a pension payment, which was $2.42 million in 2012. News Corp has not made any acquisition from him, but according to the _Australian Financial Review_, it refrained from one in 2010 because of him. The Australian office, led by Hartigan, had done all the preparatory work to bid for the Ten network, and was about to do so when the deal was stopped dead by the New York office: Ten was off limits, because Lachlan was interested in buying into it. In each of these instances it could be argued that Murdoch family interests and News Corp shareholder interests diverged, but none of them involved a substantial (by the measure of News Corp's total size) cost. The much more expensive and important conflicts between family control and shareholder value occurred because of the way in which family ownership of shares declined as a percentage of total shareholding after the debt crises of 1987 and 1990. One strand of this involved Queensland Press (see Chapter 2). Murdoch, in order to rescue himself from the crisis following the purchase of the Herald and Weekly Times in 1987, had the Murdoch family directly invest in Queensland Press. This produced some transactions where the main beneficiary was unclear.
The first was when Queensland Press purchased some News Corp shares at higher than the then market rate, which helped News Corp in its debt crisis. The Australian Securities Commission investigated the transaction, but did not mount any prosecutions. The last occurred as part of the 2004 garage sale in which the various cross-holdings around News Corp and the Murdoch family investments were cleaned up as Murdoch shifted the corporation's headquarters from Australia to the US. When Queensland Press was bought by News Corp in 2004, Murdoch family interests were the main beneficiaries, and there was some criticism of the price paid, 'pitched more than $500 million higher than comparable multiples for listed Australian newspaper companies'. As a final twist, just before the purchase, the Murdoch family companies were transferred from Canberra to Bermuda, a move that would save tax and stamp duty payments. But part of the deal specified that any variation in stamp duty payable would be met by News Corp. Thus other News Corp shareholders finally said a fond farewell to the complications arising from Queensland Press in 2010, when News Corp made a $77.6 million payment to cover the costs of the extra taxation due when Australian tax authorities ruled against the Bermuda contrivance. The other main strand was the introduction of tiered shareholdings. All existing shareholders retained their voting rights, but Murdoch now raised new capital without diluting control by offering non-voting shares. Such dual classes of shares are unpopular with shareholder associations and others who think that they depart from the basic principle of the stock exchange, that control of a public company should be proportionate to the share of capital invested. But Murdoch defended it: 'It has led to long-term stability in management, which has enabled us to take bigger risks than other people,' he said in 2007. In 2004, an even more controversial move compounded the dual shareholding setup. 
John Malone took advantage of the temporary dip in the share price, when Australian institutions sold off their holdings following the corporation's move to the US, raising his stake to 19 per cent of the voting stock. Although this was still considerably below Murdoch's 30 per cent, Murdoch immediately imposed a 'poison pill' provision. Poison pill schemes typically:

allow existing shareholders to buy more stock at a discounted rate, thereby diluting the value of individual shares and the predator's holdings. They effectively eliminate the takeover premium in a company's share price... They are hated by shareholders.

This made it difficult for Malone to raise his stake above 19 per cent and for any outsider to build a stake larger than 15 per cent. There was a strong reaction among institutions, and an even stronger one when, in 2005, despite an initial, explicit promise not to do so, the board extended the poison pill beyond its original 12 months. A group of superannuation funds sued Murdoch and News Corp over the extension. Alan Kohler declared that 'News Corp is nothing but a badly performing business run by untrustworthy people.' As Chenoweth observed, Malone and Murdoch had 'been the closest and most dangerous of friends' for 20 years. Malone was 'probably the only man in the world... [Murdoch] fear[ed]'. After making the desperate poison pill move, Murdoch made an even more desperate one to get Malone off his share register. In December 2006 the two made a deal whose essence was that Malone gave up his shares in return for ownership of the US satellite broadcaster DirecTV. The need to do such a deal existed principally in Murdoch's mind. From what is publicly available, Malone made no threatening or disruptive moves during this period, except at one point where he opined that it would be beneficial for all shareholders if Murdoch focused a 'little bit more on shareholder returns and a little less on empire building'.
This seems insufficient grounds for author Paul La Monica to assert that 'perhaps most importantly, the deal meant that Murdoch and other News Corp executives no longer needed to worry about meddling from Malone'. That single statement seems to be the sum total of Malone's 'meddling'. Murdoch had always said that winning control of DirecTV would be 'a company-transforming deal'. For six years, Murdoch's major priority had been to control it, and finally, in 2003, after many setbacks and a long lawsuit, he achieved his goal. In Wolff's words, 'It is, he [said], the absolute necessary synthesis of all that he's worked for – and it has finally all come together.' And then, a mere three years later, in his 'marvellous, bravura way, Murdoch [declared] DirecTV to be irrelevant to his interests and goals and [gave] Malone a good deal on it... No looking back.' Malone got a better deal than other News Corp shareholders. In the years since, DirecTV has been a great success. Its revenue grew from $14.8 billion in 2006 to $27.2 billion in 2011; its subscriber base went from 16 million to 20 million; and its operating profit margin has stayed at around 15 per cent. Assuming that News could have managed DirecTV as well as Malone did, News Corp shareholders were denied a growing and lucrative revenue stream. The deal had been in Murdoch's interests, not theirs. His political indulgences have also become more expensive. The least expensive have been his direct political donations, extensive as these have been, and usually towards the far right of the political spectrum. The most expensive have involved his subsidising of newspapers to advance his political aims. Murdoch has often claimed that his 'newspapers are run to make profits. Full stop.' Some have defended him on these grounds:

Murdoch's motivation for owning newspapers is profits first and politics second...
When people go off about him bringing in ideological brethren and saying that he's an ogre, I think that's completely unfair. He's there to make money. But there is abundant evidence to the contrary. Clearly newspapers mean more to him than just their dollar value. He told Ken Auletta that the thing in his business empire that gives him most pleasure is 'being involved with the editor of a paper in a day-to-day campaign. Trying to influence people.' After he had to dispose of the _New York Post_ in 1988, he said, 'I feel depressed. This will be the first time I've lived in a city with no newspaper going on around me.' After he regained the chronically loss-making paper in 1993, it was clear he would never voluntarily dispose of it. He told the staff that 'I am not here as some fairy godmother to pour money into the paper.' But in fact he did precisely that. The paper was losing $50 million a year by 2007, according to Wolff, and in 2011, reports put its losses at around $80 million a year. 'It is almost impossible to exaggerate how determined and how wrong-headed Murdoch is with the _New York Post_,' says Wolff. In London, the _Times_ and the _Sunday Times_ have also run at a loss for a number of years. When Wolff suggested to Robert Thomson, then editor of the _Times_, that the _New York Post_ may have lost more money than 'any other media enterprise in history', Thomson responded 'I would not underestimate the _Times_ in that regard.' Then, in 2007, Murdoch made an acquisition which defied business logic. He bought Dow Jones, publisher of the _Wall St Journal_, for $5.6 billion. The company had been trading at around $35 per share; Murdoch offered $60. None of the Dow Jones advisers 'had ever seen quite such an overvalued offer'. In February 2009, News Corp wrote down half of Dow Jones's value. Nor were there any grounds to be surprised by this disaster.
Shortly before, the attempt to sell the Tribune company, publisher of the _Chicago Tribune_, 'had dragged on for almost a year', a 'painful warning sign for would-be newspaper investors', and what had started as an auction degenerated into a fire sale, securing far less than any of the bankers involved had originally thought it would. News Corp shareholders should be doubly angry about the sale of DirecTV to Malone, because part of the urgency surrounding that deal arose from Murdoch's thinking that if Malone were still a shareholder when Murdoch bid for Dow Jones, he would vigorously oppose it. In other words, Murdoch had to do one bad deal for shareholders (DirecTV) to free him to do what turned out to be another one (Dow Jones). Wolff notes:

Part of the curiousness of the takeover was not just that Murdoch seemed to have no immediate Murdoch stamp to put on the _Journal_, but also that he seemed in fact to have no specific plan at all for it.

Whatever Murdoch hoped to gain by ownership of the _Journal_, it was not profits. It may have been ego, the satisfaction of winning a prize he had long wanted. It may have been a platform from which to attack what he saw as the citadel of the liberal media, the _New York Times_. When the _Wall St Journal_ launched a New York metropolitan section in 2010, Murdoch told the editors he wanted to damage the _New York Times_. So part of Murdoch's business epitaph now will be that he probably produced the greatest loss-making newspapers in history, and with Dow Jones produced perhaps the most dramatic write-down in US media history. Midas had certainly lost his touch.

5 From Lenin to Palin

The making of a radical conservative

I'm worried about my son Rupert. He's at Oxford and developing the most alarming left-wing views.
Sir Keith Murdoch, letter to Hugh Cudlipp, 1952

Yesterday was the 29th anniversary of the Great Teacher [Lenin].
We stood to attention for one minute in front of THE BUST on the mantelpiece and drank several toasts – and then settled down to some good reading of adulatory Russian poetry.
Rupert Murdoch, letter to Rohan Rivett, Christmas 1952

Today, Murdoch's views are predictable – those of the extreme right wing of the US Republican Party. His social anchors are with people who share those beliefs, and his political preferences are overwhelmingly for right-wing parties, and within those parties for the more right-wing leaders and factions. It was not always thus. He has moved across the political spectrum, from Vladimir Lenin to Sarah Palin. As is well known, during his Oxford years he kept a bust of Lenin on his mantelpiece. For the next two decades, his political views ranged widely, often inconsistently, but from the mid-1970s he has consistently manifested a right-wing ideology – although sometimes tempered by commercial and political pragmatism. How red was Red Rupe, the nickname given to Murdoch by his fellow Oxford students? According to Tom Kiernan, fellow students recalled Murdoch as a 'wild-eyed pretend radical', and felt that the nickname signified more about his political style than his substance, but Murdoch insists that he matured into a sincere and deep-thinking radical during his Oxford years. Although any sign of incipient Bolshevism disappeared with his departure from Oxford, in the following years Murdoch's political beliefs were an inchoate mix of rebelliousness, an admiration for movers and shakers, and a concern for the underdog. There was an anti-establishment attitude, especially against what he saw as lazy, undynamic establishments, and most especially against establishments such as Robert Menzies' Australian Government and Thomas Playford's South Australian Government, both of which combined conservatism with a refusal to embrace him and his ambitions.
But there was also a fluidity about his views which meant that he was often influenced by strong individuals. There was also a keen eye for the gap in the market. From the early 1950s until the early 1970s he found himself on the Left side of the political spectrum as often as the Right. One strong strand during these decades was his nationalism. This was no doubt fanned by his early years in England and whatever slights – real or imagined, deserved or undeserved – he suffered there. He disliked Australians 'who embraced everything that was English and rejected everything that was Australian'. As late as 2005 his passionate anti-colonialism was still evident:

A long time ago, the British always thought in terms of their empire and were pretty patronizing toward us Australians, pat you on your head and say, 'You'll do well', and when you do do well they kick you to death.

In 1972, in the lead-up to Gough Whitlam's Labor Party victory, the first for the party in 23 years, Murdoch gave voice to the emerging nationalist mood in Australia:

Australians are no longer content that their country will go on being inevitably, irreversibly, or without protest, a metal quarry for Japan, a pastoral lease for distant investors, a province for Madison Avenue, or a South Pacific windbreak for French nuclear scientists... They are no longer content to be a pale echo of great and powerful friends, to be a secondhand society, a reflection of another hemisphere and sometimes another age. They are seeking a fresh, vigorous Australian identity of their own.

Whitlam could not have put it more eloquently. Perhaps his nationalism helps to explain why, around 1970, Murdoch said, 'I tended to be more on the conservative side when I was in England, more in the liberal camp when in Australia.'
Nevertheless, in the British election in 1970 the Murdoch papers supported the re-election of Harold Wilson's Labour Government, and in Australia in 1972 they strongly supported the election of the Whitlam Labor Government. However, by 1975, having relocated to New York from London in 1973, his views had solidified into the strongly conservative ideology he has maintained ever since. According to Murdoch, he 'turned into a pretty strong free market type conservative' because of 'the most searing experience of my life': dealing with the Fleet Street unions. The changes go much further than this, though. Murdoch's attitudes to the Vietnam War illustrate his changing views. He was an opponent early on. Influenced by senior editorial staff at the _Australian_, particularly Douglas Brass, the national paper was the sole voice among morning papers editorialising against the Government's commitment of troops in April 1965:

The Menzies Government has made a reckless decision on Vietnam... That decision is wrong... It could be that our historians will recall this day with tears.

But later, Murdoch's views, in George Munster's words, 'followed the line of his federal political allies'. In the 1969 election, endorsing the Gorton conservative Government, the paper argued that 'thoughtful voters' would support its approach to the 'over-riding problem' of defence. Soon afterwards though, the revelation of the My Lai massacre had the _Australian_ powerfully covering its horrors, and editorialising in favour of withdrawal from Vietnam. Through the 1970s, as Murdoch's views became more conservative, Denis Cryle concluded that the _Australian_ 'gradually forsook its earlier liberalism for Cold War rhetoric'. As the Australian public moved from hawkish to dovish views, Murdoch moved in the opposite direction.
By the end of the war, in 1975, Murdoch was mobilising against the Whitlam Government, and domestic political controversies dominated the _Australian_'s coverage, which castigated the government and 'the Left', for example, over their lack of planning for the exodus accompanying the end of the war. An indication of the paper's reduced intellectual capacity was its failure even to attempt to reflect on the course of the war; perhaps this was because by then it was a topic that Murdoch, increasingly adopting the foreign policy views of US Republicans, was reluctant to consider. Perhaps the sharpest change in Murdoch's thinking was in relation to social class and inequality. He continued to view 'all inherited privilege with suspicion', but his earlier professed sympathies for the underdog and underprivileged all but disappeared in his increasing hostility to the welfare state. By 1974, Murdoch – in Kiernan's words – had concluded that:

The more a capitalist nation 'gave' to its populace, the more the succeeding generations expected and demanded, until eventually there was nothing left to give without bankrupting the nation's capital base. It was an insidiously destructive psychology that had to be stopped before it did to the rest of the English-speaking world what it was doing to England itself. Richard Nixon and his Administration represented America's last best hope to reverse the trend.

For Murdoch in the 1990s, it was 'a sorry fact that the media as a whole... unquestioningly embrace a welfare state which divides and embitters our society without helping the truly poor and needy'. In the late 1980s, as some of Thatcher's more determined supporters wanted to extend their 'revolution', Murdoch's _Times_ and _Sunday Times_ gave considerable prominence to people arguing for a 'return to stigmatizing the poor who were responsible for their own situation' and about the monster of the underclass being created by the 'poverty industry'.
Such views appeared as late as the 'Britain is broken' themes of the _Sun_ in the lead-up to the 2010 election. Once, when _Sunday Times_ editor Andrew Neil and Murdoch were discussing the 'need for the radical reform of welfare', Neil said that there still had to be some sort of basic provision, safety net, for everyone. 'Yeah, yeah, maybe,' growled Murdoch, 'but it should be very low.' Neil concluded, from his countless discussions with Murdoch, that he 'is much more right-wing than is generally thought but will curb his ideology for commercial reasons'. Part of this is his whole-hearted embrace of the virtues of the market – 'I love the free market. It's certainly been very good to me. I think you'd have to admit, it's been very good to the world.' But Murdoch has never been a simple conservative. He once said, 'the greatest compliment that was ever paid to me was by Ian MacGregor, the Chairman of British Steel, who said: "There are only two radicals in this country – Margaret Thatcher and Rupert Murdoch."' Wolff found that Murdoch became much more animated and forthcoming in interviews when asked about being a change agent, a phrase Murdoch then kept repeating. One of his closest confidants, US economist Irwin Stelzer, opined, 'I hate to say that money doesn't matter, because money always matters, but I think Rupert mainly sees himself as a kind of anti-Establishment revolutionary.' Conveniently, all groups opposing Murdoch soon become re-cast as 'establishments' – soon there were not just the royal establishment and the banking and business establishments, but also the Left establishment, the journalist establishment, and the trade union establishment.
The 'conservatism' that Murdoch embraces has little to do with the conservatism coming down from Edmund Burke, celebrating the wisdom of the traditions and institutions handed on through generations, cautious that change will have unintended consequences, that subtle balances will be upset, leading to things becoming worse rather than better. Rather, conservative politics becomes redefined as advocacy for change, and 'radical' is a term of praise.

The crudity of Murdoch's politics

Not even its most ardent advocates would claim that Twitter is a medium that invites expansiveness and the considered appraisal of competing views. When the evergreen octogenarian Murdoch embraced Twitter in 2012, his tweets offered insights into his attitudes that did not enhance his reputation for political sophistication. He began the year by insulting the British – 'maybe Brits have too many holidays for broke country!', and later denounced their government for supporting the euro, 'must be mad', and for being about to wreck the English countryside with 'uneconomic bird killing windmills. Mad.' His views on Australian politics were equally unflattering: 'Gillard once good education minister, now prisoner of minority greenies. Rudd still delusional, who nobody could work with. Nobody else?' 'Weird place mucking up great future.' In American politics, as the 2012 Republican primaries began he was an enthusiast for the Christian evangelical candidate Rick Santorum, as the 'only candidate with genuine big vision for country'. His judgements on President Obama were consistently negative. 'Just saw [the anti-Obama documentary] _2016_. Truly scary if no answer. Every voter should see and decide for self what future they want for America.' And it would be a 'Nightmare for Israel if Obama wins...', while the 'White House [was] still lying about Benghazi.'
Moreover, referring to a bill about regulating the internet, Murdoch charged that 'Obama has thrown in his lot with Silicon Valley paymasters who threaten all software creators with piracy, plain thievery', and responded to the resulting controversy by saying that 'Piracy leader is Google who streams movies free, sells advts around them.' Murdoch was despairing of politicians everywhere – 'To hell with politicians! When are we going to find some to tell the truth in any country? Don't hold your breath.' Murdoch had told a British parliamentary committee during the phone hacking scandal in July 2011 that it was the most humble day of his life. By 2012, he had recovered. When some people acting for phone hacking victims pushed for reforms, he tweeted: 'Told UK's Cameron receiving scumbag celebrities pushing for even more privacy laws. Trust the toffs! Transparency under attack. Bad.' At the same time, he thought 'Any shareholders with complaints should take profits and sell!' When the scandal surrounding the pay TV security company NDS broke in March, he thought 'every competitor and enemy piling on with lies and libels'. 'Enemies have many different agendas, but worst old toffs and right wingers who still want last century's status quo with their monopolies.' The biggest controversy was to greet his commentaries on the Middle East. When, late in the year, there were conflicts between Israel and Hamas, Murdoch asked, 'Can't Obama stop his friends in Egypt shelling Israel?' The question was built on a series of incorrect assumptions, but an even bigger critical storm greeted his next venture: 'Why is Jewish-owned press so consistently anti-Israel in every crisis?' Peter Beinart commented, 'Give Murdoch credit: He's packed a remarkable amount of idiocy and nastiness into 140 characters.' Even allowing for the constraints of the medium, Murdoch's tweets have been notable as much for their style as for their content. 
He has first class access to inside information and gossip grapevines, and frequent contact with leading figures in government, business and media. Yet his tweets contain blatant errors and simplifications, and rarely acknowledge any shades of grey. What is most striking is the crudity of his views, and his dismissive contempt for others, evident in the intolerant way he constructs opponents' opinions and motives. The way he expresses his beliefs has consequences for the way his tabloids, in particular, frame political conflicts, and for the way he makes his views known internally to his editors and senior journalists. The most complete account of how Murdoch dealt with a strong editor who had political views different from his is from Harold Evans, editor of the _Times_ in 1981–82. Murdoch had an arsenal of unflattering labels for his critics and for the contributors he disapproved of: 'the chattering classes', 'bloody pinko Islington liberals', 'pissing liberals', 'limp-wristed left-wing layabouts and stuck-up, self-important, expense-padding Trotskyites'. 'Murdoch would jab his finger at the paper and say, "What do you want to print rubbish like that for?"' A journalist accurately reporting evidence given to the inquiry into the Brixton riots in 1981 was labelled 'a commie'. At that time, neither Thatcher's nor Reagan's economic policies were going well. After Professor James Tobin of Yale University won the Nobel Prize for economics, Evans commissioned an article from him, which talked about the risky experiment of monetarism. At dinner Murdoch berated him: 'Why d'ya run that stuff?' 'Well, it's timely.' 'It's timely and it's wrong. What does he know anyway?' 'He won the Nobel Prize.' 'Intellectual bullshit.' Evans's wife, Tina Brown, commented that 'she had never seen anyone so hunched up with anger' as Murdoch. 
Evans said that he would like to have engaged Murdoch in a political discussion, but found that he 'had no relish for anything more than a couple of colliding assertions... He got restless or tetchy with any attempt to engage him further.' According to Evans, Murdoch's 'tone was assertive and hostile to debate'. Murdoch's simplistic approach is seen most clearly in his views on international conflicts. In 1982, after Israel's invasion of Lebanon, he told _Sunday Times_ editor Frank Giles that Ariel Sharon was 'a man of action'. But, said Giles, what if the action is wrong or misguided? 'That's not the point,' said Murdoch. 'He gets things done.' On every international conflict that has arisen since the end of the Vietnam War, Murdoch has been, usually vociferously, on the hawkish side. Moreover, he has typically expressed contempt for those who even consider other options. Witness the calculated insult he gave to Australian Prime Minister Malcolm Fraser at a dinner following the 1981 budget. Fraser's Cabinet had been hesitating over sending an Australian contingent to the Middle East in a peacekeeping role. Murdoch seized Fraser's upper arm and exclaimed for all to hear: 'Show a bit of muscle, Malcolm, show a bit of muscle.' In the 1980s, when Thatcher was negotiating with Beijing over the future of Hong Kong, which was to revert to China in 1997, Murdoch told Neil that he wanted his papers to take a stronger line: She should hold out, make no concessions and tell the Chinese that there's a Trident submarine off their coast: If the Red Army moves into Hong Kong they should be left in no doubt that we'll nuke Beijing... though I suppose we could fire a warning nuke into a desert first. Just as ludicrously, Murdoch has twice favoured the idea that Australia should acquire nuclear weapons. He did so first in 1967, reflecting an aggressive Australian nationalism, and then again in 1984, embracing the Reagan agenda against the Soviet Union. 
Murdoch was a late convert to Cold War views, but an extremely zealous one. Hugo Young, then deputy editor of Murdoch's _Sunday Times_ but soon to leave the paper, said that any 'reports from El Salvador which allowed for any possibility that US foreign policy was in error were clearly potent evidence that the Commies had the _Sunday Times_ in their grip... a term of abuse that I personally heard more than once'. During the 1980s Murdoch was slow to understand the changes that Mikhail Gorbachev was making in the Soviet Union. He tried to convince Collins book publishers (not yet fully owned by him) not to publish Gorbachev's memoirs, which had been considered a publishing coup. 'He's still a communist, you know,' he told company head Ian Chapman, and when Chapman said he was going ahead with the contract, Murdoch replied, 'Well Ian, if you're content to be an arm of Soviet propaganda, go ahead and publish.' Murdoch's _New York Post_ had proclaimed Gorbachev 'a worthy heir to Stalin', and _Post_ columnists wrote as if both glasnost (the domestic loosening of government control) and rapprochement with the West were fakes. As David McKnight commented, 'neo-conservatives seemed utterly unable to understand events in the Soviet Union'. The end of the Cold War did nothing to temper Murdoch's approach to international conflicts, especially in the Middle East, where his unwavering support for Israel is matched by his stereotyped views of Islam. One day over lunch he ventured to Wolff that 'Muslims have an inordinate incidence of birth defects because they so often marry their cousins.' This kind of simplistic and one-sided view matches his view of race relations in America in the 1970s, where he thought that 'the energy and prosperity of industrious white America was being drained by the tremendous black problem'. On the eve of George W. Bush's invasion of Iraq in 2003, Murdoch opined that 'We can't back down now where you hand over the whole of the Middle East to Saddam.' 
This nonsensical claim – one not even advanced by any member of the Bush administration – was followed by one of his worst predictions: 'The greatest thing to come of this to the world economy... would be US$20 a barrel for oil.' He was not worried about the protests around the world against the coming war: We worry about what people think about us too much in this country [the US]. We have an inferiority complex, it seems. I think what's important is that the world respects us, much more important than they love us. However, he didn't produce any evidence of how the invasion was increasing the world's respect for America. As the war bogged down in 2004, Murdoch remained upbeat, contending that the 'only real problem in Iraq' was confined to 'one small part where the Sunnis are, which were the people who supported Saddam'. (Sunnis comprise around one-third of Iraq's population and form the majority in the capital, Baghdad.) And even in November 2006, he had no regrets: 'The death toll, certainly of Americans there, by the terms of any previous war, are [sic] quite minute.' The unwillingness to acknowledge past errors or to admit the complexity of some of the issues is consistent with Wolff's very critical view of Murdoch's political approach: A vital element in understanding his political consciousness is understanding its shallowness. [He is] compulsively drawn to... action [and] opportunity. He's a poor debater (although he can raise his voice and pound the table). Neither is he 'conventionally smart', which Wolff considers part of the reason he seeks to belittle and demonise intellectuals: 'If you're not with Murdoch, even on a modest point, you're a liberal'; in the past, you were a 'commie'. This picture is consistent with the recollections of one of his student contemporaries: He was grossly impatient and brooked no disagreement... 
He took a dictatorial approach to his politics, and if you dared to disagree with him on some minor point he would banish you from his small circle. He was a debating bully and would often resort to ad hominem attacks when his logic and clarity of language faltered... He had all the answers and if you didn't go along with him you were nothing. You were a 'putrid little shit', or some such expletive. The big difference between the student and the media proprietor is that the latter has power. It suggests he gets his way less by seeking to persuade than by imposing his opinion. The cognitive style is similar, though. He does not engage with contrary views; instead he applies generic, dismissive labels both to those holding them (commie, anti-American, liberal) and to their arguments (mad), and simply does not deal with uncomfortable contrary evidence. It is the mindset of someone with a largely closed worldview. He puts his views in a way designed to shut down discussion rather than open or permit debate. Murdoch's former Australian CEO, John Menadue, said he 'was, and still is, a frustrated politician. He can't leave politics alone.' While it is no doubt true that he relishes being a player in the political game, he would not survive life as a politician, with its collective discipline and its public accountability. Murdoch's political ambitions would be undone not only by his radical conservatism – which is a long way from the mainstream of at least Australian and British politics – but by the crude way he expresses his views. As a democratic politician, he would be much more accountable for the poor predictions and outrageous comments he has made than he is as a media proprietor.

6 The enthusiastic player

Murdoch's early political involvements

While Murdoch's ideological dispositions place him at the far right of the political mainstream, it is wrong to explore his politics exclusively as an abstract set of beliefs.
Their expression has been tempered not only by political and commercial pragmatism, but also, and even more, by his wish to be a player. Murdoch enjoys wielding power and being seen to wield it. Menadue, who worked with Murdoch for seven years, thought that what drove him 'was not making money, as useful as that was, but gaining acceptance by and then influence with people in positions of power'. He likes being an insider, privy to the gossip around the personal dramas of politics. He relishes the sense of shaping events, and his publications have cultivated his image as a king-maker. No other company has ever run a headline like 'It was the _Sun_ wot won it', as News International did after John Major's 1992 election victory. Typically, however, Murdoch's aspirations are marked by double talk. When the _New York Times_ put it to him at the time of the Iraq War in 2003 that he was the most powerful person in English-language media, for example, he replied, 'that is flattering but it's just not true. Maybe it would be if I tried to impose my views but I just don't do it.' 'You can demonise me by using the word "power",' he said to Shawcross, 'but that's the fun of it, isn't it? Having a smidgeon of power.' He told prominent journalist Trevor Kennedy, 'you wouldn't be human if you didn't enjoy the influence. I should say that one's influence is greatly over-rated', especially by politicians, 'who are forever calling and looking for favours and looking for one's approval'. In private, however, he is less proper and less modest. When dealing with politicians, Munster thought, Murdoch likes to foster the impression that he could deliver them an advantage they could not get otherwise. When Labour's Ken Livingstone 'said on television that his party's defeat in the 1987 general election had been caused by media lies and smears, Murdoch cried out delightedly: "That's me!"'. 
After the _New York Post_ splashed a story that the parents of the 1984 Democratic nominee for Vice President, Geraldine Ferraro, had been arrested but never convicted for illegal gambling in the 1940s, Ferraro spoke emotionally of her mother's pain at the smear, and angrily denounced Murdoch as a rich and well-connected powerful man 'who does not have the worth to wipe the dirt under [my mother's] shoes'. When this was read out to a party at Murdoch's house, the 'editors whooped and laughed, and Murdoch poured more champagne'. In 1993, during a dispute with Buckingham Palace over the _Sun_ 's having published the Queen's Christmas letter two days early, his confidant Woodrow Wyatt was urging on him the need to make a rapprochement. But Murdoch became angry. Wyatt, exasperated, recorded in his diary: I am beginning to wonder whether [Rupert] hasn't gone absolutely Citizen Kane. He was shouting away telling me how important he was and how he has a great media empire... and telling me he had more power than the government in England. Murdoch's aspiration to be a player for its own sake was on clear display in his first big campaign in America, supporting Ed Koch to become mayor of New York in 1977. Within a year of owning the _New York Post_ , according to Wolff, Murdoch: entirely [altered] the political landscape in New York. In a precise calculation, he [decided] to use the _Post_ as an instrument to elect somebody – he [understood] that it doesn't really matter whom, just that the _Post_ be responsible. Some of Murdoch's editorial mobilisations have been driven by his political beliefs or strong passions – such as for Thatcher and Reagan – but the ideological stakes in the New York mayoral election were not great. Murdoch decided to support Koch, an underdog, first for the Democratic nomination and then in the election, and the _Post_ went all-out in his cause.
Academic and journalist Mitchell Stephens found that once the _Post_ had endorsed Koch (in a front page editorial), there were no unfavourable stories about him at all from then until after the election. Mario Cuomo, a more liberal Democrat and earlier the favourite, lamented: The _New York Times_ is perhaps the single most credible newspaper in the world. But when they endorse you, you get one column on the editorial page. With Rupert he turns the whole paper over to you. Another politician on the receiving end, Mayor Abraham Beame, was more graphic: 'No New Yorker should take Rupert Murdoch's _New York Post_ seriously any longer. It makes _Hustler_ magazine look like the _Harvard Law Review_.' On 27 September 1977, citing a lack of freedom to express any opinions contrary to the paper's policy, 50 of the paper's 60 reporters walked off the job, indicating their disquiet over slanted news coverage. That coverage, recalled former _Post_ journalist Frank Rich, was: tilted in favour of Ed Koch and Carol Bellamy, both then unabashed liberal Democrats, running for mayor and City Council president. It was the _Post_ 's journalistic corruption that enraged those reporters, the editorials run as news stories (including on page one), the endless parade of fawning features on the favoured candidates, not the fungible ideology of Murdoch's opportunistic partisanship. However, playing such a role in Koch's victory established Murdoch as a king-maker. Koch remained grateful, and according to Murdoch, seldom made a move without consulting him. Murdoch had established himself as a central player in the city's politics. From very early, Murdoch had a thirst for direct political involvement. 
A year after the 1955 Labor Party Split, a great Australian political cataclysm that saw the formation of a right-wing anti-communist group, the Democratic Labor Party, Murdoch and his editor Rohan Rivett secretly sought to entice Labor's most promising rising star in South Australia, Don Dunstan, to join the breakaway party with the promise of favourable publicity. Wisely, Dunstan refused; he later became one of the great reforming Labor premiers. In the first decade or so of his career, Murdoch's political antennae were often poorly attuned. In 1961, Menzies had been prime minister for 12 years and his political dominance seemed assured, but, confounding initial expectations, that election was one of the closest ever. In the end the Liberal/Country Party Coalition hung on by one seat, and that final seat was close to a dead heat. Murdoch maintained his papers' support for the conservative government, and was so bored that he departed for the US during the campaign. His greatest coup was to get a photo of Labor leader Arthur Calwell leaving the Fairfax newspapers' building. In a cheeky story, the _Mirror_ reported the meeting between Calwell and Fairfax chief Rupert Henderson as 'the strangest and least attractive wedding of the season... the flowers were pink, by request'. Then, in 1963, when Menzies had recovered his political dominance, Murdoch swung his support to Labor, giving them financial as well as editorial support. He had developed a relationship with Calwell, who commented that had Murdoch given him money in 1961, he wouldn't have needed to in 1963. During the campaign, Murdoch's _Daily Mirror_ had a front page headline 'Top poll tips Labor'. Fortunately for its reputation, the 'top' polling organisation was not named – a substantial swing against Labor brought a comfortable victory to the Coalition. Murdoch's most substantial political relationship thus far then developed: with John McEwen, the deputy prime minister and leader of the Country Party.
Nicknamed Black Jack, McEwen was a dynamic but controversial figure in Australian politics: his strategy for promoting the country's economic development centred on high tariffs, with large public subsidies going to agriculture and mining. Murdoch was still out of favour with Menzies, however. One of Murdoch's favourite stories was 'how McEwen had smuggled him out of his parliamentary suite when Menzies' imminent arrival was announced'. Following the death by drowning of Menzies' designated successor, Harold Holt, in December 1967, there was a leadership vacuum in the Liberal Party. Its deputy leader, William McMahon, was an enemy of McEwen, and represented the more free trade, free market tendency within the government. The Coalition was in danger of splitting when McEwen, then acting prime minister, made a public statement that he would not accept McMahon as prime minister. Murdoch wanted to support McEwen as the new prime minister – 'the best man for Australia!' – but the editor of the _Australian_ , Adrian Deamer, refused to endorse such a political impossibility: the Liberals would never have accepted a Country Party member as leader of the government. However, Murdoch and McEwen had more tricks in store before the 9 January leadership vote. The _Australian_ 's first editor, Max Newton, was now a publisher of industry newsletters, a lobbyist in Canberra, and very close to McMahon. Newton was estranged from Murdoch, and had referred to him as 'a whippersnapper from Adelaide'. One of Newton's clients was the Japanese trade organisation JETRO. On the evening of 5 January 1968, McEwen met with Murdoch, and gave him a package of material from the Australian Security Intelligence Organisation (ASIO) about McMahon. Afterwards, Murdoch rang Newton, and said 'this is the whippersnapper from Adelaide. I suggest you read my paper tomorrow.' Next morning the _Australian_ 's headline read 'Why McEwen Vetoes McMahon: Foreign Agent is the Man between the Leaders'.
Based on the ASIO file, the newspaper had conjured Newton's perfectly normal commercial relationship with the Japanese trade organisation into a threat to national security, and used this to damage McMahon. A couple of days later it published – again presumably via ASIO – the contract between Newton and JETRO in full. McMahon withdrew from the leadership ballot and an underdog candidate, John Gorton, became prime minister. This was Murdoch's first big political coup. The 1969 election saw a strong swing to Labor under its new leader, Gough Whitlam, but not a win. However, the problems inside the government mounted to such a degree that in March 1971 Gorton was overthrown and replaced by McMahon. Gorton and his supporters were far from reconciled to the new leadership, and their continuing hostility provided a stream of stories. The Coalition Government was in disarray and decay.

The oddest couple

By late 1971 Whitlam already felt sure he would win the next election. While foreign policy had been an electoral minus for Labor in previous elections, it now looked to be a plus. First, the Vietnam War was opposed by an increasing number of Australians, and second, Whitlam had taken a major political risk by going to China as opposition leader and promising to establish diplomatic relations. His risk was rewarded because, as he left, it was revealed that US National Security Advisor Henry Kissinger was making a secret trip to China, presaging a coming historic visit by Nixon, which took place the following March. Murdoch was attracted to the prospect of a left of centre government ending 23 years of conservative rule. Backing Whitlam would be supporting an almost certain winner, and his victory would help modernise Australian politics. Laurie Oakes and David Solomon argued that 'by the time the [1972] campaign got under way, Murdoch was an integral part of the "It's Time" [Labor's slogan] machine'.
Murdoch himself said 'I was close to [Whitlam] at the time, or certainly very friendly to him.' His support went considerably beyond editorials – in Menadue's words, 'Murdoch was up to his ears in the campaign', and was in very frequent contact with Labor's primary tacticians, Mick Young and Eric Walsh. He contributed around A$75,000 to Labor, although almost four-fifths of this was via advertisements in his own publications. Whitlam enthusiastically adopted some of Murdoch's suggestions, such as appointing Dr 'Nugget' Coombs as a personal economic adviser, and having a referendum to allow Australians to choose a new national anthem to replace 'God Save the Queen'. But not all his suggestions were so welcome. Murdoch took it on himself to draft what he thought should be Whitlam's final campaign speech, and 'eventually some ideas and lines were used, and Young got Whitlam to thank Murdoch for his input'. Three years later, in 1975, Murdoch was determined to get rid of the government he had been so keen to see elected in 1972. This was one of the most dramatic years in Australian political history. A series of scandals resulted in the resignations of two senior ministers, and in October the opposition blocked the supply of money to the government for the first time in Australian history. They were able to do this only because convention had not been followed in replacing two Labor senators with senators from the same party. Whitlam refused to call an election, and a deadlock and constitutional crisis followed. Eventually the Governor-General intervened, withdrawing Whitlam's commission and swearing in opposition leader Malcolm Fraser as caretaker prime minister. In the subsequent election, the Coalition parties won a smashing victory. A small industry has developed around explaining the rupture between Murdoch and Whitlam. The Labor side posits two ulterior motives for Murdoch's changed allegiance. 
The first is that Murdoch wanted to be appointed High Commissioner in London, and Whitlam refused. To appoint the 'dirty digger', as the magazine _Private Eye_ had dubbed Murdoch, as Australia's ambassador to the Court of St James certainly would have invited ridicule, and it is hard to imagine that Murdoch would have remained patient in the position for very long. Murdoch has since denied ever seeking it, or says it was a joke. But Menadue – who was directly involved – confirms it, although he adds that as far as he could tell, Murdoch 'carried no grudge for the knockback'. The second concerns the refusal of the government to allow the development of a bauxite mine in Western Australia owned by a consortium in which Murdoch was a prominent member. Cabinet rejected their proposal for development in March 1974, but Murdoch's papers were strictly impartial in the election held two months later, in May. On the Murdoch side, three explanations have been proffered but none is satisfactory. According to Kiernan, 'Murdoch would later claim that Whitlam's inept handling of the budget deadlock... caused him to turn against the Labor prime minister.' But this fails the test of timing, because his anti-Labor attitudes predated that deadlock by many months. In 2013 a US embassy cable released by the US National Archives and published by WikiLeaks reported that in a November 1974 meeting between Murdoch and US Ambassador Marshall Green, Murdoch's disillusion with Whitlam was already clear. He told Green that 'he expects to support the opposition in the next election', which he thought would occur within a year because the opposition would block supply; others of his predictions proved less accurate. He also criticised the government's economic management, especially its 1973 tariff cut. Murdoch claimed later that he had come to vehemently oppose Whitlam because the government was introducing a 'European type of socialism which caused ruin and misery in other countries'.
This is a very imaginative construction of the Whitlam Government's policies, and overlooks the fact that during the 1970s many Western European social democracies were doing considerably better economically than Anglo-Saxon countries. Murdoch gave a third reason, which Kiernan seems to accept. Soon after Labor was elected on 2 December 1972, the Nixon administration began carpet bombing Hanoi and Haiphong harbour, and in 11 days US planes 'dropped more bombs over North Vietnam than in all of the previous three years'. This followed the proclamation by Henry Kissinger, days before the US election six weeks earlier, that 'peace [was] at hand', prompting euphoria in the US and around the world. Such heavy and indiscriminate bombing when peace was supposed to have been achieved prompted deep disillusion and protest around the world, not least in the US itself. The Labor Party had long been opposed to the war, and some of the most prominent anti-war ministers publicly denounced the US in strong terms. Whitlam remained publicly silent, but wrote a letter to Nixon which departed from the traditional Australian stance of unquestioning compliance, and caused great indignation in Washington. There was also a ban placed by maritime trade unions on US shipping in protest. According to Kiernan, the US State Department and the US embassy in Canberra – whose competence was illustrated by its forecast that McMahon would win the 1972 election – thought that Murdoch 'had been instrumental' in Whitlam's victory and called on him to get Whitlam to end the shipping boycott. However, Whitlam refused Murdoch's urgings. In Kiernan's view, 'Murdoch felt both betrayed and embarrassed – betrayed by Whitlam's refusal to do his bidding and embarrassed that the refusal had damaged his standing with the Nixon Administration.' 
If this is an accurate reflection of Murdoch's thinking, it betrays a failure to understand the depth of anti-war feeling in both the Labor Party and the trade union movement. Whitlam knew it would have been a fool's errand to intervene in the shipping ban as he would have simply been rebuffed and this would have damaged his credibility. Nor does Murdoch seem to appreciate the reasons why the bombing brought such disgust. Finally, it fails the test of timing: if this episode did have such an effect on Murdoch, it did not show in his newspapers until a couple of years later. Three reasons together seem to explain the depth of Murdoch's falling out with Whitlam and Labor. The first is that in these years, the now New York-domiciled Murdoch's politics were solidifying into a very right-wing view of the world. The second is that like many business figures, he was extremely critical of the Whitlam Government's economic management. While Whitlam had the misfortune to be in government as stagflation struck the Western world, there are valid claims that his government did not respond well or coherently. The third reason concerns their awkward personal relationship. Although Murdoch had been close to the Labor campaigners in 1972, he and Whitlam were never personally close. Murdoch later said he 'never really liked Whitlam personally. I always thought he had a dreadful intellectual arrogance about him.' 'The chemistry was never there,' judged Menadue, who tried to bring them together. At a July 1971 dinner the conversation was 'polite but cool', although Whitlam was surprised by Murdoch's opening gambit – 'How do we get rid of this government at the next elections?' In September 1971 Gough and Margaret Whitlam spent a night at Murdoch's farm Cavan, which Whitlam described as one of the most 'excruciatingly boring' nights of his life. Menadue thought 'The relationship stuttered forward in a fairly desultory way.' 
In 1972, with Murdoch still keen and Whitlam's staff equally keen, a cruise on Sydney Harbour was arranged – 'Do I have to come?' asked Whitlam, but he did, and the evening went well. After the election, Murdoch hosted a victory dinner whose guests included Katherine Graham of the _Washington Post_. Although the evening was a great success, again it had been difficult to persuade Whitlam to attend – 'Comrade, I am not a national exhibit,' he protested. Thereafter personal meetings were few and far between. After seeing Murdoch in London during Easter 1973, Whitlam 'peremptorily cancelled several planned meetings', finding Murdoch's expectations distasteful, and asserting that he was too busy to spend time with Rupert. But his biggest snub to the publisher came in September 1974 in New York. Whitlam's press secretary, Eric Walsh, always keen to bring the two together, had arranged a dinner. But that morning, Whitlam bumped into David Frost, the English TV personality. Whitlam and Frost had done some interviews together and had developed a warm personal relationship. Murdoch, in contrast, had in 1969 been subjected to the most humiliating TV interview of his life by Frost, and had vowed to take his revenge. The interview occurred after Murdoch's _News of the World_ had published some new memoirs by Christine Keeler, reviving the Profumo scandals of the early 1960s, and Frost attacked Murdoch's motives in reviving the affair. Murdoch had been expecting an easy chat, but instead was subjected to an onslaught. According to Wolff, Frost adopted a 'sarcastic, prosecutorial, and sanctimonious' tone without any pretence to impartiality, in front of an audience that was predominantly against Murdoch. However, Frost's motives were as base as Murdoch's – he wanted to make his 3-week-old show the centre of attention, and his grilling of Murdoch helped it 'catch fire'.
In an insult which Murdoch would have found doubly galling, Whitlam cancelled his dinner with Murdoch and dined with Frost instead. A breakfast meeting for the next morning was hastily arranged, but Murdoch was understandably angry. Whitlam was carelessly allowing a political enmity to assume great and damaging dimensions.

1975 and all that

In 1975 Murdoch mounted what veteran Canberra correspondent Mungo MacCallum called 'the most extraordinarily ruthless and one-sided political coverage I think any of us can remember', and what former editor and writer Donald Horne called vendetta journalism. Murdoch's editorial mobilisation was very crudely exercised. During the election campaign, when unemployment figures were released the paper used raw instead of the usual seasonally adjusted figures to make the government look worse. The _Daily Mirror_ headline of the first edition of 26 November had 'Gough's Promise: Cheap Rents'; the second edition was changed to 'Gough Panics: Cheap Rents'. Reporting on the affair of leading Labor figure Jim Cairns and Junie Morosi, the paper published a photo of the two having breakfast on a hotel balcony, but it did not show Mrs Cairns, who was also present. One News Limited journalist told me in the early 1980s that: [T]he real rot set in in August 1975. It was very rugged on the _Australian_. They changed a lot of my stuff. Bruce Rothwell was sent out to take charge of the paper. He was mad, mad, mad. It was all totally unsubtle; very ham-fisted. Working for Murdoch is a good exercise in understanding the politics of the media. I knew I had to put my own professional standards above my love for the job... Before I took the job, I said the only way I can take this job is that I'm ready to be sacked that night. If not, then they've got me. During the campaign Murdoch journalists staged the first strike in Australian history over editorial issues.
On 28 October 1975, 76 members of the Australian Journalists' Association (AJA) who worked on the _Australian_ expressed concern that the paper had become a laughing stock. They protested against the 'blind, biased, tunnel-visioned, ad hoc, logically confused and relentless' way policy was affecting news coverage: We can be loyal to the _Australian_ , no matter how much its style, thrust and readership changes, as long as it retains the traditions, principles and integrity of a responsible newspaper. We cannot be loyal to a propaganda sheet. Murdoch wrote back: 'If you insist on providing ammunition for our competitors and enemies who are intent on destroying all our livelihoods, then go ahead.' In a meeting with the striking journalists Murdoch asked what was wrong with that day's paper. Robert Duffield praised the _Sydney Morning Herald_ 's better treatment of a story. 'You dare to show this paper to me?' said Murdoch. 'These people are our enemies, trying to destroy us.' Political stories by Canberra journalists were being rewritten in Sydney and appearing under the byline 'Our political staff'. Several of the paper's senior journalists resigned in the following months. The exodus was so large that one used to joke that they wanted to have a reunion of the former political correspondents on the _Australian_ , but the Melbourne Cricket Ground was booked that day. Many critics, such as MacCallum and Horne, were Labor sympathisers, and Murdoch's standard response was to say that the charges of bias simply reflected the bias of the critics. One of his editorial managers, Brian Hogben, complained about the lack of professionalism of the journalists at the _Australian_ , saying that they were all Whitlam fans. In contrast, one political journalist felt that the 'lack of professionalism rested with the editorial executives in Sydney', who 'were not prepared to maintain editorial independence' and were 'wilful and reckless in their determination to slant the news'. 
Years later, when Murdoch was attempting to take over the Herald and Weekly Times, the same Brian Hogben now felt that journalistic professionalism made concerns about concentration of media ownership irrelevant: If, in the case of Rupert Murdoch, who is the popular villain of the piece, every word which appears in his papers is slanted, twisted and corrupted, you are saying by implication that every one of those hundreds and hundreds of journalists [in News Limited] is dishonest and a coward, that they're no better in fact than Adolf Eichmann, the Nazi who was just obeying orders when he gassed millions of Jews. And I am damned sick of that sort of slander perpetrated against hundreds of people who I know and respect. In fact, Hogben's comments in 1975 highlighted how the slanting of news can occur even while most journalists are still seeking to properly carry out their role.

Playing to win

Although politicians seem to live in fear of Murdoch and so curry favour with him, it may be that they, and Murdoch, and his critics, have an exaggerated view of his power. One can understand the feelings of Murdoch's targets and the humiliation they feel from negative coverage. US journalist Richard Cohen wrote of Murdoch's _New York Post_ , 'The paper mugs its enemies for the sheer fun of it – over and over again. This repetition gives it a sort of torque that no politician can ignore.' This is true, but it is another matter to extrapolate from, say, a battered reputation, even from an individual career being ruined, to an election result. The _Economist_ 's Bagehot column noted in 2003 that: The _Sun_ can boast that since 1979 the party it has supported has always won. But that probably says more about Mr Murdoch's readiness to jump ship at the right time than about the _Sun_ 's ability to influence the votes of its readers.
According to Wolff, writing in 2008, Murdoch had been reluctant to jump to the Tories, 'but knew he was, however begrudgingly, going to have to', and later he did. Perhaps this suggests a need to be on what was expected to be the winning side. Political leaders are always keen to make a sure thing even surer, and have remained keen for his support in elections they almost certainly would have won anyway. In the 1970s Murdoch claimed that 'we single-handedly put the [Whitlam] Government in office', and his American biographers, Kiernan and Wolff, seem to accept this boast. However, the claim collapses as soon as the electoral data is examined. Whitlam achieved the biggest swing in 1969, when Murdoch opposed him. In two-party preferred terms (a measure in Australian elections that takes account of swings to the two major parties after the distribution of preferences from minor parties and independents), there was a huge swing, of 7.1 per cent, in 1969, and a further swing of 2.4 per cent in 1972. At this time, Murdoch's papers accounted for around a quarter of daily metropolitan circulation. He had a strong presence in New South Wales and a negligible presence in Victoria. In 1972, Whitlam secured a bigger swing in Victoria (5.5%) than in New South Wales (3.8%). As noted above, Whitlam was already supremely confident of winning before Murdoch came on board. It is impossible to quantify what Murdoch's support contributed in votes, but it clearly was not a central factor. After his alienation from the Whitlam Government, Murdoch also claimed, 'But now we're not happy with them, and if they don't straighten up we'll bloody well get rid of them.' Labor supporters have always been bitter about their treatment by Murdoch in 1975. Moreover, in his biography of Murdoch, which is largely sympathetic, Tuccille is of the view that: in 1975 his newspapers' attacks on Gough Whitlam... were instrumental in toppling the Labor Party government from power...
_The Australian_ , a money loser since its inception, had finally paid Murdoch back in political capital. If he had had any doubts about its influence prior to the campaign of 1975, they were permanently laid to rest from that moment on. Again the electoral data does little to support the claim. The swing against Whitlam was strong everywhere (7.4% in two-party preferred terms, the biggest in any election since World War II, with Whitlam's 1969 swing being the second largest). But again it is at least as large in states where Murdoch had a negligible presence as in those where his newspapers had high circulations. It is important to note also that whatever political capital the _Australian_ earned in 1975, it also alienated a large part of its readership, and did itself considerable commercial damage. The _Australian_ 's circulation had risen to 153,000 in 1974, but with its anti-Labor mobilisation, it dropped to 118,000 by 1977, and as late as 1982 had barely risen (119,000). There are many levels of political effect, beyond affecting voters' opinions. One level involves changing the perceptions and decisions of the main participants, and here, including by his own account, Murdoch may have been more influential. Twenty years later, Murdoch told journalist and author Paul Kelly: [M]y concern at the time was that Malcolm Fraser, having taken the country to the brink, might lose courage and back off. Maybe if the _Australian_ hadn't been so firm on the constitutional issue then Fraser might have lost courage. In the following years Murdoch's papers remained stridently anti-Labor. Although not as celebrated as 1975, the next plausible candidate for an election where the Murdoch press may have affected the outcome was the 1980 federal election. This was a very close election, with Labor winning 49.6 per cent of the two-party preferred vote. The polls had been close ever since 1978, with Labor more often ahead. 
Against the expectations of many observers, Labor's lead seemed to increase during the campaign, and as late as the last week polls showed Labor ahead. It was an election where it was said that 'an unusually large number of voters made up their minds very late'. This time Labor failed to secure as big a swing in New South Wales (2.8%) as in Victoria (5.8%), although of course other factors may also have been at work. Murdoch journalists told me in interviews that 'an element of hysteria crept into the organisation' as Labor looked likely to win, and 'people in the media like myself were under a fair amount of pressure to find some bullets'. Especially in the last week of the campaign, the news priorities and orientations of the News Limited papers were very much in line with what the Coalition Government would have wished. Again, it is impossible to have any certainty or to quantify impacts, but in such a finely balanced and uncertain election, there was more scope for a media organisation to affect the result. There have been many later occasions when the Murdoch press has been on the losing side. It supported Hewson against Keating in 1993, and the day after the election, according to Keating, Murdoch apologised, saying, 'we got it wrong; correction – Ken [Cowley] got it wrong'. At the risk of reading too much into this apology, it seems to imply that their news coverage and support were coloured by their expectations about the result. In 1999, the Murdoch press, plus public statements by Rupert and son Lachlan Murdoch, supported Australia becoming a republic, but the referendum failed. It is impossible to quantify the impact of Murdoch's editorial positions on public opinion, let alone on election results. The absence of definitive evidence of direct effects means that views about how press coverage interacted with other factors must remain tentative. 
Murdoch's fantasies about his political prowess should not be accepted, but neither does this mean his support is irrelevant.

7 The passionate player

Thatcher, Reagan and beyond

Of all the political leaders he met before and since, none inspired in Murdoch the attachment he had to Reagan and Thatcher. Their leaderships coincided with the most dramatic growth in his empire, and both were right-wing leaders whose governments, in both rhetoric and practice, marked sharp departures from previous conservative governments. His commitment to them coloured the reporting by all his news outlets. Murdoch's entry into British politics was less than spectacular. When he bought the _Sun_ in 1969, one of his promises to Hugh Cudlipp, the former owner, was that it would continue to support Labour. He stuck to this position initially, and supported Harold Wilson's re-election in 1970, although the paper adjusted with no evident disappointment to the Tories' surprise victory under Ted Heath. At the election in February 1974, it abandoned its commitment to Labour and very tepidly endorsed Heath. Its editorial, 'The Devil and the Deep Blue Sea', concluded that 'in spite of the record, Ted's Tories look the better bet'. Instead Wilson formed a minority government in a hung parliament, and then called another election for October, which he won. This time the _Sun_ ducked a choice between the two major parties, saying, 'We're sick of the Ted and Harold show' and advised readers to vote for the best candidates. So, at each of its first three elections, Murdoch's _Sun_ had failed to support a winner. Politics was not a big part of the _Sun_ as it grew to become the biggest circulating paper in the country. Its appeal lay in sex, human interest, crime, sport and show business. But readership of the _Sun_ overlapped significantly with what has often been a strategically important part of the British electorate: people who are working class but socially conservative.
When Britain was a much more working-class country than it is now, the great Conservative Prime Minister Benjamin Disraeli saw the importance of wooing these people, whom he called 'angels in marble' – those whose political support, by economic position, would be inclined to the left, but for whom other appeals, such as nation, race and order, were of greater concern and led them to vote conservative. In the 1970s the somewhat more affluent members of the working class, for example those working in skilled manual occupations, were seen as the key to electoral success; they were heavily represented among _Sun_ readers. The newspaper only became stridently political in the late 1970s, although editorially it had largely been on the Tories' side since Thatcher became leader in February 1975. This to some extent reflected Murdoch's own shift to the right, but the paper also responded to and amplified the increasing public frustration with strikes and disorder. It rode and fanned a populist wave in campaigning against the decaying Callaghan Labour Government. _Sun_ headlines such as 'Winter of Discontent' and 'Crisis? What Crisis?', making fun of (and distorting) a statement by Callaghan, fed straight into the Conservatives' electoral strategy. In the lead-up to the election, _Sun_ editor Larry Lamb became a regular visitor to Mrs Thatcher's house to offer informal advice, and he was knighted afterwards. On polling day, the _Sun_ published an unprecedented 1700-word editorial. The headline read: 'Vote Tory This Time. It's the Only Way to Stop the Rot'. However, it was not just 'this time'. Murdoch continued his support for the Conservatives until after the 1992 election. His support was very much for Thatcher personally and he took her side against the 'wets' inside her own party as well as against Labour. 
Former Murdoch editor and Tory supporter Andrew Neil later wrote that: news stories were told from a Thatcherite perspective, features geared to further Thatcherite ideas... At election time, Tory tabloids turned themselves into the publishing arm of Tory Central office. The newspapers were full of powerful, skillful, constant propaganda for the Tory cause. Through these years, the _Sun_ 's support was often manifested less in positive coverage of Thatcher's domestic policies than in negative coverage of her opponents: critical coverage of 'loony Left councils', trade unions and the Labour Party was a recurring feature. In the 1983 election, the _Sun_ had a headline 'Do you seriously want this old man to run Britain?' with a photo of Labour leader Michael Foot, and warned that he was a 'willing dupe' of the ruthless Left. The Tory press generally, but the Murdoch press most of all, embraced Thatcher's re-conquest of the Falkland Islands from the Argentineans, and in the process made her a conquering heroine. During the 1987 campaign, according to Woodrow Wyatt, 'Rupert said they are doing two shock issues in the _Sun_ about what it would be like under Labour and about Britain being great again.' When Wyatt told Thatcher she said, 'Rupert is marvellous.' Murdoch approached the election more as a campaigner than a journalist. This was also true of his approach to Thatcher's greatest crisis up to that time.

Westland – managing appearances and realities

'We've got to get her out of this jam somehow. It's looking very bad.' Thus Murdoch expressed his worries to Wyatt as the Westland crisis was building to its climax. Ten days later, on 24 January 1986, as Thatcher faced a House of Commons urgency motion, she said, 'I may not be Prime Minister by six o'clock this evening.' 'That was the measure,' said Hugo Young, 'of the disillusionment she knew to be present in the Conservative Party.'
The previous Friday her Industry Secretary, Leon Brittan, had resigned, saying he had made regrettable errors, and Thatcher said those errors were made without her knowledge. An emergency debate was scheduled for the Monday; the prime minister survived, but with her credibility severely dented. When it began, there was no indication that Westland would develop into such a crisis. The company, Britain's last helicopter manufacturer, was in great financial trouble. Sir John Cuckney was installed to rescue something for Westland's bankers, and favoured selling it to a partnership of the American company, Sikorsky, and the Italian company, Fiat. The British Defence Secretary, Michael Heseltine, one of the government's highest profile ministers, met Sikorsky representatives in September 1985 and 'concluded that the Sikorsky deal, though nice for the banks, did nothing for Westland's shareholders, and nothing for the British taxpayer who had put large sums into Westland'. He lined up a consortium of three European companies, and his Cabinet colleagues seemed favourably disposed. Believing he had Thatcher's support, Heseltine publicly endorsed the plan. In December, Heseltine professed great shock when it seemed to him that Thatcher had 'suddenly, and unaccountably, reversed herself... [and was now] vigorously promoting' the Sikorsky-Fiat bid. Tension between them mounted. In Heseltine's view, denied by Thatcher, a meeting for a key group of ministers had been scheduled for 13 December and then not held. The failure to meet convinced Heseltine that 'something very wrong had happened'. Early in the New Year, Heseltine thought Thatcher had made a misleading public comment, and countered with his own version in a letter to the European stakeholders. 
The Department of Trade and Industry, on the authority of the minister, Brittan – and, many suspected, with the knowledge of Thatcher and her office – then leaked parts of a government legal opinion that claimed there were 'material inaccuracies' in Heseltine's letter. Thatcher's office gave anti-Heseltine briefings to journalists. Murdoch's _Sun_ did the leakers proud, with its headline, 'YOU LIAR', and its report that 'Battling Maggie' had caught Heseltine in a devious Euro-scam. The Cabinet's law officers, Sir Patrick Mayhew and Sir Michael Havers, were incensed that their advice had been leaked and distorted for factional purposes, and that they were now embroiled in scandal. At the subsequent Cabinet meeting Heseltine resigned after a clash with Thatcher, throwing the government into a deeper crisis. He darkly hinted that there was 'something extremely fishy about all this'. Brittan put in a series of clumsy performances both in public and in party meetings, and he also soon resigned. Murdoch's involvement went far beyond colouring the reporting to put a pro-Thatcher view. Sikorsky was an arm of United Technologies Inc., and since 1984, Murdoch had been a member of that company's Board, recruited by his friend, its head, Harry Gray. At that time Gray was hoping to sell its Sikorsky helicopters to the Australian Government, and within a year he succeeded. On Sunday 19 January 1986, in the midst of the crisis and five days before Murdoch planned to begin publishing at Wapping, Murdoch had lunch with Thatcher at the prime minister's country residence, Chequers. After the lunch, Murdoch rang Wyatt with the idea that he would get United Technologies to buy shares in Westland to fix the vote in favour of the Sikorsky-Fiat option – in the end, the future of Westland would be decided by its shareholders. Just before the crucial meeting, buyers from Switzerland, Panama and Australia bought Westland shares and all voted for that bid.
The Australian buyer, later revealed as Peter Abeles' TNT, bought almost 5 per cent of Westland's shares, just below the threshold at which shareholdings had to be publicly declared. Abeles was not only Murdoch's partner in Ansett Airlines, but had just 'signed a £1-million-a-week contract with Murdoch to handle the distribution of all News International's papers out of Wapping'. The new shareholders sealed Sikorsky's victory. As Bruce Page observed, this sudden rush of share buying was barely explored in the press. In 2006, on the 20th anniversary of the crisis, Heseltine demanded a fresh investigation. He said that despite a House of Commons Committee recommending it in 1986, no investigation had been carried out. The bottom line was that Murdoch never wavered in his support for Thatcher over Westland, and Thatcher never wavered in her support for Murdoch over Wapping.

The end of the Thatcher era

In 1990, Thatcher suffered a series of political disasters leading to a mounting sense of dissatisfaction inside the Conservative Party, but Murdoch and the _Sun_ remained staunch supporters. As the Conservative MPs took the vote that resulted in her demise, a _Sun_ columnist wrote: So the back stabbers have won. The tin pot Judases and two-bob traitors of the Tory party turned on the woman who had kept them in power for 11 years. What a gutless rabble. The editor, Kelvin MacKenzie, wrote an open letter to Tory MPs to support 'Maggie the Lionheart', and reminded them that 'it was OUR readers who put YOU in office'. After her loss, the _Sun_ devoted 24 of its 48 pages to her. The papers soon rallied, and set about the task of getting John Major re-elected against Labour's Neil Kinnock in 1992. This was the election where the paper itself proclaimed 'It was the _Sun_ wot won it', and it is often seen as the campaign where the tabloids' partisan bias was at its peak.
The _Sun_ 's campaign climaxed in their famous election day front page with Kinnock's head in a light bulb, and the rest of the page taken by the headline 'If Neil Kinnock wins today, will the last person to leave Britain please turn out the lights'. The day before it had run nine pages on the election, each headed with the slogan 'Nightmare on Kinnock Street'. Earlier it had pursued such themes as Kinnock being too influenced by his wife – 'Is Glenys a Red in Neil's Bed?' After the election, according to Wyatt's diary, 'Rupert rang in a great state. What an appalling campaign it had been and how they were lucky to have got in and how they had been helped by the _Sun_.' Thatcher and, most rhapsodically, Tory Treasurer Lord McAlpine, thought the role of the press was central: The heroes of this campaign were Sir David English, Sir Nicholas Lloyd, Kelvin MacKenzie and the other editors of the grander Tory press. Never in the past nine elections have they come out so strongly in favour of the Conservatives. Never has the attack on the Labour Party been so comprehensive. In a rare bipartisan consensus, Neil Kinnock and many on the Labour side shared this view. It should be remembered, though, that all these sources had an interest in denying credit to Major for his victory. But 1992 was the high tide of Murdoch's (and the other Tory tabloids') support for Major. Once the threat of Labour had been seen off, the _Sun_ 's nostalgia quickly came to the fore: 'Quit now Major and bring back Maggie', it urged before the 1993 Conservative conference. Major's promise to uphold family values was taken as a licence to explore the sexual peccadillos of Tory MPs, while a series of financial scandals, including 'cash for questions', meant that Major suffered at least 34 separate instances of so-called sleaze stories. The _Sun_ 's columnist Richard Littlejohn called it 'a sleazy dishonest administration led by a political pygmy'. 
Reagan

Murdoch moved to New York in 1973, and his politics gravitated very quickly towards the right wing of the Republican Party. In particular, Murdoch 'was at once fascinated and astonished by Watergate. In his view, the affair constituted a perverse abuse of the American news media's power.' For most people in the media, Watergate 'is a myth of David and Goliath, of powerless individuals overturning an institution of overwhelming might', of news media making power accountable by revealing truth. On the 40th anniversary, the two key reporters, Bob Woodward and Carl Bernstein, affirmed the importance of holding Nixon responsible for his misdeeds, but Murdoch was 'aghast at the press's politicking' against Nixon. He told a friend, 'the American press might get their pleasure in successfully crucifying Nixon, but the last laugh could be on them. See how they like it when the Commies take over the West.' He thought the news media's behaviour in the Watergate hearings was 'reckless, irresponsible and self-defeating', and believed that: the new cult of adversarial journalism has sometimes been taken to the point of subversion... It's a disgrace that we can and do read thousands upon thousands of words about our national defence and our foreign policy every day without so much as a nod of recognition to the enormous risks to our freedom that exist today. Murdoch thought Nixon devious and unsavoury, but felt that that was what politicians needed to be in the real world. 'If anything brings about the downfall of this country, it'll be the Democrats acting in league with the press,' he said. In 1980 he was firmly in the Reagan camp. Prominent Republican Jack Kemp enthused that 'Rupert Murdoch used the editorial page, the front page and every other page necessary to elect Ronald Reagan president.' Murdoch 'took as much credit for [Reagan's victory] as he possibly could', and claimed that the _Post_ had 'delivered New York state to Reagan'.
Perhaps there was a kernel of truth in the claim, but it should also be remembered that since 1993, when Murdoch regained control of the _Post_ , New York has gone Democratic in every presidential election despite the _Post_ 's strident support for the Republicans, in 2012 voting 63%–36% for Obama over Romney. When Murdoch 'bought the _Chicago Sun Times_ in 1983, the journalists were told that they could not criticise Reagan directly. If there had to be criticism, it had to be directed at his aides.' Even towards the end of Reagan's presidency, when his political fortunes were at a low ebb, and he was under pressure for both illegality and incompetence in the Iran-Contra scandal, Murdoch did not falter in his support. Eric Beecher was then editing the Melbourne _Herald_ for Murdoch, and Murdoch wanted its Washington correspondent, Geoffrey Barker, replaced because Murdoch considered him anti-Reagan. Afterwards, Murdoch served as a trustee of the Ronald Reagan Presidential Foundation. According to Murdoch's editor of the _Sunday Times_ , Andrew Neil: I was able to regularly criticize Margaret Thatcher, even though he adored her. Criticizing Ronald Reagan was a more risky business: Reagan was Rupert's first love. On the only occasion when his two loves were in conflict – when Reagan invaded the tiny Commonwealth island of Grenada following a coup there – there was no doubt where Murdoch stood. He told Wyatt that Thatcher was 'just childish', that she was 'out of her mind', 'desperately over-tired' and 'not listening to her friends'. He told Kiernan: Reagan's Grenada action has to do with the freedom of the Western world, including England's. Thatcher had no business opening her mouth. I'll see she pays for it. Apart from that episode there was room to embrace both. The invasion of Grenada also exemplified just how fully Murdoch had embraced Reagan's foreign policies. 
The _New York Post_ claimed that the coup in Grenada posed a 'clear threat to peace in the entire Caribbean area – and by extension, in all of Central America'. Grenada, it said, was 'a Soviet forward base'. Murdoch's passionate anti-Communism and his support for Reagan led him to join a project that most would have considered a conflict of interest. Reagan appointed Charlie Wick, a friend of Murdoch who later became a member of the News Corp Board, to head the United States Information Agency. Wick recruited an informal group, including Murdoch, to 'Project Democracy', a campaign against Soviet 'disinformation and propaganda' in Europe, using slush funds raised outside Congress. Countering Soviet disinformation was easily stretched to opposing any views critical of Reagan's policies (such as his wish to deploy nuclear weapons in Western Europe). Participating in a government's black propaganda exercise is something most media figures would baulk at. When it came to conflicts between Reagan and the media, Murdoch knew which side he was on. His negative views of the American press, which had started to crystallise during Watergate, solidified under Reagan. He told Kiernan: Reaganism represents a positive change in American thinking... It has the support of the people, but not the national press. The press here is sitting around doing its usual thing, sneering at Reagan and waiting to pounce on him the moment he stumbles... The whole Reagan package needs much more support by the press. If no one else will provide it, we'll bloody well have to do the job. He accused the press of 'attempting to change the political agenda' and of ignoring 'the traditional values of the great masses of this country'. It is notable just how readily and scathingly Murdoch attributes bias to other news organisations. It is also notable just how frequently he invokes the refrain about liberal media bias without ever offering any supporting evidence. 
From this time on, Murdoch has frequently made offhand comments denigrating elite US media, especially the _New York Times_. Sometimes it is flippant. He joked in front of a crowd of admiring CEOs that 'I'd love to buy the _Times_ one day. And the next day shut it down, as a public service.' And another: I think that Arthur Sulzberger [publisher of the _Times_ ], over the years, has made it very clear that he wants a very liberal paper, and that he wants a staff that reflects that community. For five years, he didn't want any white heterosexual men hired. At other times, it is deadly serious. After taking control of the _Wall St Journal_ , Murdoch told a meeting of its senior editors that they had to 'figure out how to cripple, really cripple, the _New York Times_.'

Adrift

Through the 1980s, Murdoch was riding high, with his heroes in power. But he was much less enamoured of their two successors. George H.W. Bush and John Major were the type of managerial conservatives he disliked. Bush was never strong on what he dismissively called 'the vision thing'. Murdoch was politically adrift. He tended to frame this as a search for worthy leaders. But the larger truth is that the Reagan and Thatcher projects were politically exhausted. They had collided with socio-political realities and with public opinion. At this time, Murdoch's move into US television meant that he was also adrift in terms of having no press presence, and so he did not have an overtly political profile. This alienation led him into bizarre positions. In 1988, when George H.W. Bush was certain to win the Republican nomination, Murdoch supported the tele-evangelist Pat Robertson. Robertson's 1988 candidacy was itself a minor miracle, given that earlier he had predicted that the world would end in 1982. God made several revelations to Robertson denied to other mortals: for example, that the Russians again had missiles in Cuba.
Robertson also argued that feminism encouraged women to 'leave their husbands, kill their children, practise witchcraft, destroy capitalism and become lesbians'. When a deal he was doing in Scotland collapsed, he attributed it to how strong homosexuals were in that 'dark land'. Murdoch, however, claimed Robertson 'was right on all the issues'.

In 1992, Murdoch again could not bring himself to support the orthodox conservative George H.W. Bush, and instead embraced the third-party candidate Ross Perot, whose populist economics included high protection (in contrast to Murdoch's free trade views) but also – more appealingly to Murdoch – large cuts in government spending. Murdoch told Wyatt not to 'underestimate [Perot]. I know him quite well. He is a very tough businessman. He has got bags of money... If there were an election tomorrow, he'd win it.' Ironically, the 18.9 per cent of the popular vote Perot won came mainly from people who would otherwise have supported Bush, and so helped Bill Clinton win the presidency.

So after Reagan and Thatcher, in both the US and the UK, Murdoch found himself estranged from their mainstream conservative successors. He solved this problem in two very different ways. He resolved his post-Thatcher doldrums by a return to the political mainstream, supporting the election of Blair's Labour Party in 1997. He resolved his post-Reagan doldrums by becoming more sectarian. In 1995, he started the _Weekly Standard_, which became the standard bearer for the 'neo-Cons' and their foreign policy views. Most importantly, in 1996 he started Fox News which, as we will see in Chapter 11, became an important player in Republican politics. In the US Murdoch cast his lot with the far right elements in the Republican Party.

With varying degrees of fear and favour

Quality journalism depends on an ecumenical scepticism and a willingness to report newsworthy developments no matter who may be helped or hurt.
Murdoch – at least regarding Reagan and Thatcher – has a partisan mentality, quite the reverse of this. He could rarely, if ever, see any validity in critical news stories about either. 'You're always getting at her!' he said to Harold Evans, who thought that Murdoch 'hated "balance" and "objectivity" and kept calling for more "conviction"', which, Evans thought, meant more Tory cheerleading. _Times_ editorial writer Richard Davy recalled that Murdoch always complained that the _Times_ 'didn't stand for anything, whereas we always thought it stood for reasoned argument and liberal values'. Wolff put it most strongly: 'The entire rationale of modern, objective, arm's length, editor-driven journalism... [Murdoch] regarded as artifice if not an outright sham.'

One of the tactics Murdoch and his representatives use, when confronted with accusations of bias, is to talk about the biases of those making the charges, to change a discussion about journalism into a discussion about the politics of their critics. But such a strategy cannot account for the _Wall St Journal_ journalists who, against their own career interests, mobilised against Murdoch becoming their owner. The _Journal_ had perhaps the most conservative editorial line and op-ed pages of the major US newspapers. The journalists' fear was not that Murdoch would make the paper's editorial outlook more conservative, but that editorial views would impinge directly on the reporting and selection of news. Seven journalists, responsible for coverage of China, wrote a joint letter to the then controlling shareholders, the Bancroft family, urging them not to sell to Murdoch:

> Our China team won the Pulitzer Prize for international reporting this year for a series of stories detailing the consequences of China's unbridled pursuit of capitalism... It is an important example of the coverage that we fear would suffer if News Corp takes control.
> News Corp Chairman Rupert Murdoch has a well-documented history of making editorial decisions in order to advance his business interests in China.

The problematic aspect of Murdoch's political journalism is not his preferences, although the degree of uniformity among his publications is greater than it is in many other companies, and in Australia and Britain his share of voice presents a problem for democratic pluralism. What has made Murdoch so controversial is the way his editorial support has been manifested: it has reached crudely and deeply into news judgements and the presentation of stories.

Of course there has been considerable variation between news outlets and countries. Sky News in the UK and Australia is much more balanced than Fox News in the US, for instance. The opinion pages of the _Times_ and the _Australian_ show much more diversity than do those of his tabloid newspapers. There are also periods of greater or lesser editorial mobilisation, and there are issues where most Murdoch papers take a strong view and those where they do not. This book is 'biased' towards periods and issues where Murdoch has been most actively and directly involved, where editorial mobilisation has been strongest, and has led to the most one-sided reporting, and thus it under-represents the amount of good journalism carried out in titles he owns or has owned.

One key when considering these issues is to move beyond a dichotomous – completely unbiased vs totally biased – discussion, to degrees of partisanship. As with other areas of life, there is a tendency to obliterate crucial distinctions. Take corruption: no society is completely free from corruption, but this should not obscure the real and important differences in the degree and nature of corruption in different societies and at different times. Nevertheless, some things are clear.
Murdoch's perceptions of media bias – from Watergate through the Reagan years and beyond – are very different from those of most journalists. And in terms of almost any professional criterion – accuracy, balancing competing views, presentations of news, judgements of newsworthiness – the _New York Post_ was more biased in the way it covered the news during the Reagan era than its New York competitors, and subsequently Fox News has performed at a less professional level than its competitors.

In the UK, from the arrival of Macmillan in 1957 to the departure of Wilson in 1976, the general trend was towards more muted press partisanship, but from the late 1970s, this was put into sharp reverse, and Murdoch was a central factor in the reversal. According to scholar Jeremy Tunstall, Thatcher 'probably received more press adulation from more national newspapers than did any other British Prime Minister after Winston Churchill's wartime premiership ended in 1945'. At the end of the Thatcher period, driven by many factors, not least Murdoch's _Sun_, the quantitative advantage of the Conservatives was at a peak – with pro-Conservative papers having a three to one circulation advantage – and, according to prominent British journalist and former News International editor Brian MacArthur, in 1992, 'the popular press have never been quite so biased as they are today, nor so potent a threat to standards of political debate'.

8 The dominant player

Murdoch ascendant

When John Howard was elected Australian prime minister in March 1996, Murdoch had just celebrated his 65th birthday. He was at a very different stage of life from when he had been an enthusiastic participant in Gough Whitlam's 1972 'It's Time' campaign. He was head of a global media empire, and determined that politicians should court him; it was long past the time when he would ingratiate himself with them.
The most spectacular wooing was by Tony Blair when he and Alastair Campbell went halfway round the world, to Hayman Island, off the Queensland coast, to attend a gathering of senior Murdoch executives. Irrespective of what Blair said at the conference, this was a public political courtship, a demonstration of a willingness to bury the Labour resentments of past mistreatment. For Piers Brendon, it was 'the most humiliating odyssey... since Henry IV abased himself before the Pope at Canossa'. Campbell, however, 'was never in doubt that it was a good thing to do'.

In August 2008, David Cameron emulated Blair when Murdoch's son-in-law Matthew Freud flew him to Greece to join Murdoch for drinks on his yacht _Rosehearty_. According to Peter Oborne, 'Cameron's first meetings with Murdoch went poorly.' He quoted a 'leading News International figure' who said, 'We told David exactly what to say and how to say it in order to please Rupert. But Cameron wouldn't play ball. I can't understand it.' So the upper echelons of both the Conservative Party and News International thought this meeting on Murdoch's yacht crucial; in contrast, in testimony to the Leveson Inquiry, Murdoch professed not even to remember it. Leveson concluded that Murdoch:

> regarded the lengths to which Mr Cameron had gone to meet him as not unusual and one got the feeling that Mr Murdoch was well used to political leaders seeking him out: a telling indicator of the power and importance of one of the biggest media proprietors.

Murdoch would never admire any new leaders as much as he had Thatcher and Reagan. Partly it was that Murdoch tended to judge new leaders by how they measured up to his idealised versions of those two, and most were found wanting. Also, he was now dealing with politicians considerably younger than himself. Tony Blair was 22 years younger; David Cameron 35 years younger – indeed Cameron was roughly an age peer of Murdoch's children, Elisabeth and James, and moved in their social circles.
Murdoch's more recent approach to being a political player reflects his greater power and his range of experience, but also differs in the three countries. It is likely that President Obama spent much less time worrying about Murdoch than prime ministers Cameron and Julia Gillard. This is partly because Murdoch's share of news media is much less significant in the US than in Britain and Australia. It is also because the stance of his American outlets rarely varies: Fox News, the _New York Post_ and the _Wall St Journal_ will never support the Democrats against the Republicans.

There is a paradox here. Murdoch's centre of gravity – in business, social relationships and political attitudes – is now more than ever in the US. Commercially and emotionally, Britain and Australia mean less to him, but they are the two countries in which he wielded greatest power. Perhaps this explains his more high-handed attitude towards Australian and British political leaders.

Most Murdoch papers strongly supported Howard's election in 1996, but Murdoch then related to Howard with a degree of coolness, bordering on arrogance. In the years up to 2001, his papers often treated the prime minister with disdain. One source of tension was Murdoch's wish to change media policies. After profiting enormously from the cross-media 'reforms' introduced by the Hawke Government (see Chapter 9), Murdoch was now keen to re-enter Australian television, but there were two policy obstacles – the cross-media rules, which Murdoch had supported at the time, and the ban on foreign companies owning Australian television licences. There was no prospect of the Howard Government being able to change either policy, as the government did not have control of the Senate, but this did not lessen Murdoch's impatience.
In 1998, Murdoch criticised media regulations, saying, 'I think the individuals involved in making decisions are overawed by some of the existing players', implying that the government was too keen to placate Packer. Then he likened the Howard Government's approval of 'monopolies' in television to the recently fallen corrupt regime of President Suharto in Indonesia.

The one political issue where Murdoch was at odds with Howard was the referendum to replace the Queen with an Australian head of state. The proposal was rejected, but Murdoch's role in supporting the case for change attracted criticism from the monarchists. Senior Liberal Nick Minchin said that Murdoch, a US citizen, should understand that this is a matter for Australians to determine; this is a point he never made when Murdoch was supporting the Liberals in election campaigns. He thought that 'Mr Murdoch should be embarrassed by his newspapers' outrageous propaganda campaign.' Lachlan Murdoch, then 28, speaking from the US, criticised Howard's conduct of the issue, and seemed to endorse Costello as Howard's successor. 'Howard was absolutely appalled,' said one senior minister, who also noted that, with decisions about digital television about to be made, this was a bad time for the Murdochs to annoy the prime minister.

Indeed Murdoch saw the opening up of the digital spectrum as an opportunity to re-enter television. News Corp was not eligible for a normal TV channel, so the Murdoch press focused on the possibility of gaining licences to permit 'datacasting', a concept that gained prominence for a couple of years – when it suited Murdoch interests – and then disappeared without trace. A Murdoch business journalist, Mark Westfield, thought that 'Howard... singled out News Limited as a demonic force seeking to ravish the Australian public in pursuit of its commercial interests.'
In 1999, Murdoch again 'let fly at the Prime Minister over what he [believed was] unfair treatment being dished out to News Ltd'. According to journalist Steve Lewis, this explained 'why the government has been copping it in the neck from the Murdoch press... The ferocity of the News Ltd campaign cannot be ignored.'

From September 2001, Howard's and Murdoch's agendas coincided. As Howard's electoral fortunes improved, and as he strongly supported US actions in the 'war on terror', the Murdoch press settled into solid support. Even then there was a degree of coolness in Murdoch's public comments that the prime minister would have found galling. After Howard gained a Senate majority in 2005, media policy changes were again politically possible. Murdoch, for whom Australia was now a much lower priority, was less than gracious in his public advice: 'Tear up everything, and make it an open go for everybody, otherwise leave it alone,' he dismissively commented. When Howard was making a triumphal American tour in 2006, Murdoch made the unwelcome prediction that he would retire 'on the top of his game' later in the year. Given that Murdoch himself was nine years older than Howard, raising the topic of retirement was dangerous territory. Former Victorian premier and dedicated Costello foe Jeff Kennett issued the obvious riposte: 'He should practise what he preaches.'

Although the Murdoch press has overwhelmingly supported the conservative side of politics, in both Britain and Australia, sometimes it has not. Perhaps that is why Australian Labor leaders have remained keen to woo Murdoch. No contemporary Labor leader would repeat Whitlam's sentiment that he was too busy to bother with Rupert. A meal with Murdoch – with a few platitudes offered to waiting reporters afterwards – has become one of the staples of Australian political leaders' visits to the US. Murdoch has only supported the non-conservative side when it was clear that they were going to win.
So each time this has served Murdoch's goal: being, and being seen to be, on the winner's side. The Blair and Hawke-Keating Governments were led by centre-left, dynamic leaders who were economic modernisers, and who, crucially, had shown they were amenable to dealing with Murdoch. Murdoch's editorial support for UK Labour was always qualified and tepid; it endorsed a conservative agenda on most policy issues. In contrast, his support for conservatives has sometimes been whole-hearted and embracing. As David McKnight notes, 'His support for Blair did not involve the vicious slurs and raucous jeers against the Tories that the _Sun_ had directed against Labour.' Murdoch was always much keener on Margaret Thatcher than he was on the Conservative Party generally, and often his papers were critical of other Conservative politicians, but he expressed his disapproval of the Labour Party much more stridently. Thus in 2003 he praised Blair's 'courageous and strong' support for the Iraq War, because 'it's not easy doing that in a party which is largely composed of people who have a knee-jerk anti-Americanism and are sort of pacifist'. While the _Sun_ and the _Times_ supported Labour in 1997, 2001 and 2005, the _Sunday Times_ supported the Conservatives at each election.

There has been an interesting contrast between Murdoch's British papers and his Australian ones. In Britain, announcements of Murdoch editorial support seem designed to further the Murdoch mystique, to proclaim that his media are a political force which it is important to have on-side. In the 2005 election, as Roy Greenslade notes:

> _The Sun_, which has never been noted for its reticence, presented its decision to support Tony Blair as if it were a matter of supreme national import. What was infinitely more fascinating was the fact that the rest of the media treated it in similar fashion.
Then, at a most strategic moment, designed to maximise the impact of the announcement, when Gordon Brown was to deliver his speech to the Labour Party conference, the _Sun_ declared on the front page that it was backing David Cameron.

Similarly, Murdoch's annual London summer party was an ostentatious show of his power. Most prominent people in Britain wanted an invitation and felt obliged to attend, whatever they felt about Murdoch and his papers. When Murdoch was asked at the Leveson Inquiry whether Prime Minister Gordon Brown attended the previous summer party, Murdoch replied: 'Yes, I think so. Most people did.'

In contrast, in Australia his papers have taken different stances in different elections. When asked about Australian politics, he sometimes defers to his editors – 'Ask my editors,' he modestly replies, even though this suggests, to veteran journalist Alan Ramsey, 'a more tolerant Rupert Murdoch than any of us remember and most of us would ever believe'. However, when a Liberal victory has been likely, they have often unanimously been on the winning side. Clearly, there is political value in the empire appearing to display some internal diversity. It makes it harder for critics who are describing the dangers to democracy in such a media behemoth, and provides useful exceptions to situations when all editorials sing in unison, such as in support of the Iraq War.

In the months leading up to the 2007 election, it appeared certain that Kevin Rudd would defeat Howard. This presented a dilemma for Murdoch and his editorial upper echelons. As Eric Beecher has observed, Murdoch rules by phone and clone, and senior editors overwhelmingly share his conservative ideologies. On the other hand, they did not want to be on the losing side. According to the editor of the _Australian_, Chris Mitchell, it was still difficult to persuade Murdoch to endorse Rudd.
The _Australian_, _Daily Telegraph_ and _Courier-Mail_ all advocated a vote for change, while the _Herald-Sun_ and the _Advertiser_ stuck with the Coalition. Some of the pro-Labor editorials were among the gentlest ever emanating from a Murdoch publication. The _Daily Telegraph_ began: 'This is an unusual editorial in that it praises the leadership and legacy of our current prime minister – and calls for him to be removed.' (Contrast this with its front page anti-Labor editorial on the first day of the 2013 campaign – 'Kick this mob out'.) Interestingly, in 2007 the news coverage in these papers showed very little correlation with their editorial support for Labor. It was more balanced than in some other election campaigns, but the last-minute coverage of the polls in all of them seemed to be showing a wish for a swing back to Howard.

Whatever party is editorially endorsed, the news priorities and commentary remain consistently skewed towards conservative agendas. This is partly because Murdoch's political ideology was now more consistent and stable. In 1999, he described himself to William Shawcross as leaning towards a libertarian outlook, meaning 'as much individual responsibility as possible, as little government as possible, as few rules as possible. But I'm not saying it should be taken to the absolute limit.' Sometimes this finds coherent and eloquent expression as a vision for a better society, such as in his Boyer Lectures for the ABC. At other times it simply becomes a default position against government intervention and a basis for anti-politician populism.

If anything, Murdoch's attachments to political leaders have become more contingent, his political ideologies more entrenched. Whichever party is in power in Britain, there are unlikely to be any stories in the Murdoch press, particularly the _Sun_, praising the European Union.
John Major called the _Sun_ 'the house magazine of England-against-the-world', and Alastair Campbell, formerly Blair's director of communications and strategy, described a lunch he and Blair had there as being like a British National Party meeting. An article by Blair publicly distancing himself from a pro-European stance – headlined 'I'm a British patriot' and promising that Labour would not allow Britain to be absorbed into a 'European superstate' – was the precursor to what the _Sun_ called its 'historic announcement' that 'the _Sun_ backs Blair'. If Blair seemed to waver on the issue, the Murdoch papers were quick to bring him into line. In 1998, when the paper thought Blair might have desired to scrap the pound and join the euro, its headline asked, 'Is this the most dangerous man in Britain?'

Murdoch has been a constant voice for the view that Britain should look more towards the US and less towards Europe. In the US, he has been a constant voice for hawkish foreign policies, and in Australia and Britain his publications have been a constant voice for supporting US actions. This was most prominently and importantly on display in the Iraq War.

Operation Iraqi Freedom

The three English-speaking countries comprising the 'Coalition of the Willing', which invaded Saddam Hussein's Iraq in 2003, are the three countries where Murdoch has the largest journalistic presence. Andrew Neil told a House of Lords inquiry that 'there were more discordant voices [on Iraq] in the Bush administration than there were in the Murdoch empire'. According to author Robert Manne, in 2002 the Hobart _Mercury_ had initially deviated from the company's hard line on Iraq, but later conformed. A journalist on the paper told him that the newspaper had been instructed by head office to alter its position.
While McKnight and McNair found that Murdoch's editorial writers in all three countries marched in unison, the papers they studied showed more variation in how much diversity was found on the opinion pages. The _New York Post_ and the _Sun_ had almost none, while the _Times_, the _Australian_ and the _Herald-Sun_ displayed a variety of viewpoints.

More than a decade later, it is hard to recapture the belligerence and military over-confidence in the US in 2002–03. The shock and anger following the 9/11 attacks and the apparent success of the military action in Afghanistan fed a thirst for more. 'Neo-Con' T-shirts in Washington proclaimed, 'Wimps go to Baghdad; real men go to Tehran'. When Fox News's most popular presenter, Bill O'Reilly, was talking to an anti-war academic, O'Reilly said, 'We'll take care of [North Korea] after Iraq.' The academic started to reply, 'Really, and when the Saudis... ?' 'They're after that,' interrupted O'Reilly. CNN's leading correspondent, Christiane Amanpour, said the news media generally, including her own organisation, had been intimidated by the Bush administration and 'its foot soldiers at Fox News', and this had led to self-censorship and a reluctance to ask hard questions.

In 2002–03, the Murdoch press reported with absolute certainty that Iraq had weapons of mass destruction (WMD) and that Saddam was linked with Al Qaeda's terrorism. 'He's got them. We know he's got them,' declared the _Sun_. Nine days after September 11, the _New York Post_ reported that 'Saddam's fingerprints' were all over the attacks. As McKnight notes, 'Over the next 18 months, the paper continually called for an attack on Iraq, its editorials twice urging that the Palestinian leader Yasser Arafat also be included as a target.' On at least four occasions, the _Sun_ falsely claimed that Iraq had a nuclear bomb or was making one.
After the British released a dossier on Iraqi WMD in September 2002, the foreign editor of the _Australian_, Greg Sheridan, proclaimed: 'The Blair dossier should transform the debate over the Iraq threat. Either Tony Blair is a monstrous liar or Saddam Hussein is. Take your pick.' In the week leading up to the war, the _Sun_ told its readers that 'a huge chemical weapons factory has been discovered'.

Apart from the certainty about the evidence and the stridency of the calls for military action, the other notable aspect of the Murdoch press was the scathing disparagement of anyone who expressed doubts or caution or warnings about a pre-emptive strike. When France and Germany opposed military action, the _New York Post_ branded them the 'Axis of Weasels', a phrase that became a refrain on Fox News. The _Post_ also had targets closer to home: Pentagon officials expressing caution were the 'surrender lobby', there were the 'predictable defeatists from the State Department and the _New York Times_', and the senior American politicians, including George H.W. Bush's Secretary of State James Baker, who formed the Iraq Study Group, were 'Surrender Monkeys'. The _Post_ also charged that the chief of the UN weapons inspectors, Hans Blix, had deliberately covered up convincing evidence of Iraq's WMD program.

Murdoch's London _Sun_ was equally strident. It mocked the 'UN weasels' for going 'soft on Iraq' and said that 'anti-war critics were "traitors" and "naïve pawns of the men who struck America"'. In March 2003 it attacked 'the weasel voices of the wobblers' who should 'belt up'.

While there was more variety on the opinion pages of the _Australian_ and _Herald-Sun_, their chief staff columnists were among the most hawkish. The _Herald-Sun_'s Andrew Bolt thought that opponents of the war were, 'in effect, pro-terrorist'. On 9 September 2002, Bolt wrote sarcastically: 'Let me spell it out slowly for [Labor politicians] Crean and Rudd: Saddam. Won't. Let. In. Inspectors.'
When inspectors were admitted, two weeks later, he did not apologise. More than three months after the invasion, Sheridan still thought 'WMD doubts are ludicrous' and said that hawkish US official John Bolton 'had provided him "almost as an afterthought" with the "sensational" evidence that would prove the existence of Saddam's WMD'. As Manne noted, throughout the period leading up to the invasion, Sheridan – like many other Murdoch columnists – used adjectives such as 'bizarre', 'absurd' and 'preposterous' to describe opposing views. Similarly, Manne thought the _Australian_'s editorials were an 'attempt to create an atmosphere where cautious considerations of facts and arguments were seen as examples of stupidity, or as the betrayal of the national interest, or as ideological blindness'.

When it looked as if the defeat of Saddam would bring a quick triumph, the Murdoch press was rhapsodic. McKnight observed: 'The _Sun_'s columnist Richard Littlejohn gloated that "the Not in My Name" crowd and the Starbucks Strategists got it hopelessly, ridiculously wrong.' The _Australian_, with the headline 'Coalition of the Whining Got it Wrong', editorialised:

> Never underestimate the power of ideology and myth – in this case anti-Americanism – to trump reality. But at least we know it is not love, but being a left-wing intellectual, that means never having to say you're sorry.

It particularly criticised former WA Labor premier Carmen Lawrence, who predicted a possible three million refugees and perhaps half a million Iraqis killed. This was wrong 'by a factor of 400' mocked the paper, although since then Lawrence's forecast has proved tragically nearer the mark. For Sheridan:

> The eagle is soaring. The bald eagle of American power is aloft, high above the humble earth, and everything it sees is splendid. For as it soars and swoops it sees victory, power, opportunity.
Against the initial expectations of those supporting the invasion, the easy defeat of Saddam did not lead to peace in Iraq, but to a disastrous, draining and bloody series of conflicts. Honest, intelligent publications such as the _Economist_, which had supported the invasion, later conceded that the war had taken a course they had not predicted. The magazine continued to give the war qualified support, but openly acknowledged its political and humanitarian costs.

Neither Murdoch nor his papers indulged in much retrospective musing about the fact that Saddam's WMD were never found – this had been the public rationale for the war, after all – nor about why the original optimism about post-Saddam Iraq had been proven so wrong. One might have thought that this was particularly required given the intolerant way they had treated all the war's critics and doubters, who had actually been correct. Indeed, failure did not dim the ferocity of their rhetoric: when the British Parliament voted down the Blair Government's attempt in November 2005 to acquire powers to detain 'terrorist suspects' for 90 days without trial, the _Sun_ reported it under the headline 'TRAITORS'. Its report began 'Treacherous MPs betrayed the British people last night by rejecting new laws to combat terror.'

In 1852, in the lead-up to the Crimean War, the editor of the _Times_, John Delane, made one of the great declarations about the democratic role of the press. The press, he said:

> can enter into no close or binding alliances with the statesmen of the day... [Rather,] the first duty of the press is to obtain the earliest and most correct intelligence of the events of the time, and instantly, by disclosing them, to make them the common property of the nation... The press lives by disclosures.
The parade of false claims in 2002–03 regarding Saddam's possession of WMD – Iraq importing aluminium tubes for nuclear purposes, importing uranium from Niger, the defector 'Curveball's' claims of biological weapons, meetings between Iraq and Al Qaeda – and the alleged race against time for the West to forestall his imminent aggression constitute one of the most remarkable propaganda blitzes in modern democracies. It was difficult for even the most vigilant news media to penetrate the falsehoods. The result was that the US and its allies mounted a pre-emptive war against Iraq and their stated reasons for doing so proved to be a fiction.

The gravity of this has rarely been apparent in the Murdoch press. Responsible newspapers such as the _Washington Post_ and _New York Times_ later reflected publicly on their journalistic failings during the period. The Murdoch press did not. But they had failed Delane's first test for journalism. Murdoch's close alliance with 'the statesmen of the day' had interfered with his papers' commitment to accurate disclosure. They were part of the noise rather than part of the signal. They had served their proprietor better than their readers.

Agenda journalism

In one of the last issues of the _Sun_ edited by Rebekah Brooks, the front page consisted of the faces of the 207 British soldiers killed in Afghanistan, with a large headline across the middle, reading 'Don't you know there's a bloody war on?' The 'strap' at the top said 'Message to Politicians Failing Our Heroes', and 'The _Sun_ Says' editorial began near the bottom of the page. The multi-page splash was accompanied by a cartoon of a wounded soldier with the caption 'Abandoned'. The article was peppered with extreme, totalistic claims:

> Our leaders are pretending the war isn't happening. Mr Brown and his ministers are missing in action. [The Ministry of Defence is] groaning with third-rate penpushers, riddled with petty turf wars and empire building and paralysed by indecision.
The _Sun_ assumed for itself the mantle of speaking for Britain's war dead, even though its attention to the war in Afghanistan since 2001 had hardly been steadfast. Most of the claims in the story would be strongly contested, but the paper did not even pretend to report alternative views. Instead those in authority were subjected to blanket condemnation. The fact that the sheer arrogance of the headline and story is not necessarily seen as shocking is indicative of just how much expectations have changed. In the 1970s, US scholar Bernard Roshco argued that 'the history of the American press can be seen as an account of how it continually enlarged its conception of the information it could properly publish'. This is an optimistic view of how the scope of official secrecy has been reduced, and, sometimes more problematically, how what used to be considered private has been opened to public scrutiny. While in many ways this trend has continued since then, perhaps equally important have been the news media's increasing sense of their own rights and decreasing sense of inhibition about how they should cover the news. The idea of presenting both sides and allowing the reader to decide is observed less and less in the tabloid press. Increasingly, stories are presented in ways that cue the reader as to how they should respond. On 2 August 2013, much of the press anticipated that the Rudd Government was about to announce a plan to impose a levy on banks for savings deposit insurance, to replace the free insurance the banks received from the government on all savings accounts up to $250,000, itself introduced to maintain confidence during the global financial crisis. The _Herald-Sun_ 's front page announced 'Rudd bank tax will hit you', illustrated by a woman staring forlornly into her empty wallet, bulging shopping bags hanging off her arms. 
The _Daily Telegraph_ photoshopped a picture of Rudd and Treasurer Chris Bowen dressed as bank robbers, with a headline 'Rudd plans to pinch Aussies' savings'. The costs and benefits of such a policy move could be debated, but what is evident here is that neither paper's presentation gives any credence to arguments for such a decision. It is this type of presentation that made the Labor Government feel it could get no oxygen for its policies in the Murdoch press. Sometimes one-sided presentations were compounded by sheer misreporting. There were reports that before the 2011 federal budget Murdoch had called his senior Australian editors to a meeting at his ranch in Carmel, California, to workshop their coverage of the Gillard Government. Whether or not this was true, and whatever was said, the _Daily Telegraph_ 's coverage of the 2011 federal budget was particularly misleading. It pictured a caricature of Treasurer Wayne Swan as a pickpocket 'reaching into the pockets of an honest, unsuspecting family', 'victims of Labor's war on middle-class welfare... mercilessly assaulted by yesterday's Federal Budget'. It then calculated how much the 'carbon tax' would add to their annual bills: power (A$300), food (A$390) and petrol (A$150). One major problem with this was that there was no 'carbon tax' in the 2011 budget. It was to be introduced in the following budget, and no one knew what its level would be. In each case the paper filled this vacuum not by using Treasury estimates of what costs would be with different levels of taxation, but by using figures from lobby groups. The paper's typical family, the Grays, was said to have an income just over $150,000 per year. The paper indulged in a classic trick to make the hit look greater: they listed the family expenses (food, mortgage, petrol etc.) on a monthly basis and the extra tax increases on an annual one.
(Actually the family's annualised expenses only add up to around $60,000, which suggests this family 'adrift in an ocean of debt and despair' was, after tax, clearing at least $40,000 or more in savings or discretionary spending.) Moreover, although the size of the carbon tax was unknown it was already known that the government intended to accompany it with matching income tax cuts, but there is no mention of that in the story. In addition, the Grays' biggest expense was mortgage repayments, and there was no mention of how falling interest rates should have helped them. This was effective populist reporting, mining an always rich vein of governments ripping off innocent taxpayers. A casual reader would be unlikely to probe the provenance or validity of the statistics or think about what figures were missing. Equally, it was the type of dishonest and inaccurate reporting which would, justifiably, have angered the government. These trends towards one-sided news presentations – not at all confined to the Murdoch press – have been compounded by Murdoch's increasing willingness to impose his own political views on judgements of newsworthiness. In 1987, a year before Murdoch endorsed the tele-evangelist Pat Robertson as Republican presidential candidate (see Chapter 7), the _Australian_ embraced the 'Joh for Canberra' campaign. Queensland National Party Premier Joh Bjelke-Petersen had won a series of election victories, routing not only the Labor Party but also the Liberals. After his victory in the 1986 election, he turned his sights on Canberra, and to leading a national conservative movement. This was a direct threat to both the Labor government and Opposition Leader John Howard. As Howard wrote, 'the _Australian_ newspaper became a prominent vehicle for the propagation of the Joh cause. The editor at the time, the late Les Hollings, gave huge coverage to anything that Bjelke-Petersen said or did.' 
Hollings, disappointed by what he considered a lost opportunity for conservative politics under the Fraser Government and by the current electoral dominance of the Hawke Labor Government, was drawn to the Queensland Nationals. In February 1985, the paper ran a front page editorial proclaiming that 'Australia is now at the crossroads' and must choose between prosperity and unproductiveness. It agreed with Queensland Nationals that union power was 'the heart of the problem'. The _Australian_ broke the Joh for Canberra story in January 1987. _Australian_ columnists Des Keegan and Katharine West mounted what author Denis Cryle called a 'vendetta' against the weakness of the Fraser legacy, as they argued that there was a need for a new national conservative force. The strength of this sentiment spooked the Opposition, and they adopted much more radical programs than electoral prudence would have suggested. Eventually, as the election was getting under way, the Joh campaign collapsed. Hawke won, and many Liberals blamed the Joh campaign. It was not only Joh's Canberra ambitions that collapsed. ABC's _Four Corners_ program 'The Moonlight State', aided by some belated but welcome investigative reporting by the _Courier-Mail_ , led to the Fitzgerald Inquiry, whose searing searchlight led to important reforms against police corruption. Joh's career ended in ignominy, and eventually he faced – but was controversially acquitted of – criminal charges. And Labor won its first victory in Queensland since the 1950s. It is often asserted that journalism is the first rough draft of history. Contemporary historians often see this period as one where the Hawke Government internationalised the Australian economy, especially by floating the dollar, exercised tight control over budgetary spending, had some, although limited, success in reducing inflation, reduced the number of days lost in labour disputes, all while managing to keep wage rises down to less than cost of living increases. 
Both sides of politics see it as a period of substantial reform. If one read the pages of the _Australian_ of the period, however, especially its opinion pages, a radically different and quite misleading picture emerges. Some aspects of the paper's campaigning style during the Joh for Canberra episode have been revived in more recent times: in particular, the use of columnists who give no credence to alternative views, and idiosyncratic and coloured judgements of newsworthiness. It is not only Labor that has found itself on the wrong side of such campaigning. In 2005, Treasurer Peter Costello, who had cut income taxes in three successive budgets, was angered by the _Australian_ 's campaigning for larger tax cuts. According to Costello, 'Journalists wrote story after story on the issue and no amount of explanation could get a fair hearing', and the paper invented a 'ginger group' of backbenchers campaigning on the issue. But it was under the Rudd and Gillard Governments that the relationship developed into intense antagonism. Rudd privately called the _Australian_ 'Fox News in print', while Labor Minister for Communications Stephen Conroy said the _Daily Telegraph_ was mounting a campaign for 'regime change'. Independent MP Tony Windsor singled out News Limited at his farewell news conference for trying to dictate terms to parliament, and former Greens leader Bob Brown referred to the Murdoch press and others as the 'hate media'. Academics Robert Manne and David McKnight mounted scathing critiques. Even Fairfax CEO Greg Hywood referred to 'the Murdoch empire's rejection of internal dissent and insistence on groupthink'. These accusations are met with blanket rejection by senior Murdoch figures. Former News Limited CEO John Hartigan said in July 2011, 'We are the only organisation that really takes it up to the government'; earlier he thought that the fact that Rudd was critical of the _Australian_ 'suggests to me that it is doing its job'. 
His successor, Kim Williams, said the government was showing a glass jaw. Although hardly convincing refutations, they make a valid point: governments are far from disinterested arbiters of the press coverage they receive. But whatever the truth of specific criticisms, the key point is how much has changed. Twenty years ago, there was some optimism among journalists about trends in News Limited. One prominent Canberra press gallery member, Christine Wallace, wrote in 1994:

The trouble with the Senate inquiry into foreign ownership of the print media is that its members assume the bad old days of proprietor impropriety and omnipotence still exist, when the truth is they have long gone... The last time a proprietor sought in any profound way to impose an editorial line was in 1975 when Murdoch triggered a strike... News Limited has spent the last 20 years trying to live the episode down and restore an image of objectivity to its flagship, the _Australian_.

It would be much harder now to make such claims. The trend is towards more opinionated and one-sided journalism, and not only in the Murdoch press. It would be a brave employee who told Rupert Murdoch that proprietorial power was dead. After half a century of political involvements, his relish in using his news media to have a political impact is undiminished.

9 Reaping the rewards

Murdoch and government action

I bet if I was going to be shot at dawn, I could get out of it.

Rupert Murdoch

'I've never asked a prime minister for anything.' Rupert Murdoch's testimony to the Leveson Inquiry was emphatic. 'I take a particularly strong pride in the fact that we have never pushed our commercial interests in our newspapers... I never let my commercial interests, whatever they are, enter into any consideration of elections.'
Leveson was probing – and Murdoch was denying – the trading of editorial support for policy or regulatory favours, the transaction, either explicit or implicit, that many suspected was the key to Murdoch's power in Britain. The news media are unique in that their output affects the political fortunes of policy-makers directly. While media owners share with other large corporations the ability to donate money and lobby in traditional ways, they have this crucial added leverage. Murdoch's reputation for wielding great power had been promoted by both his supporters and his opponents. Former _News of the World_ journalist Paul McMullan declared that 'every political leader since Margaret Thatcher in the 1970s has had to "jump in bed with Murdoch"'. Charles Douglas-Home, editor of the _Times_ , said in 1984:

Rupert and Mrs Thatcher consult regularly on every important matter of policy... especially as they relate to his economic and political interests. Around here, he's often jokingly referred to as 'Mr Prime Minister'. Except that it's no longer all that much of a joke. In many respects, he is the phantom prime minister of this country.

A spin doctor in the Blair Government, Lance Price, said:

I have never met Mr Murdoch, but at times when I worked at Downing Street he seemed like the 24th member of the Cabinet. His voice was rarely heard... but his presence was always felt.

Similar claims have been made about Murdoch's influence in Australia, and his denial seemed more geographic than political: 'It's wrong to say I'm the most powerful man in Australia. I'm not even there.' In America, Murdoch asked, 'Power? What power? I have no power. No more than any American. This myth that I have some influence up there on Capitol Hill is baloney.' In contrast to this picture of innocence and powerlessness, Bruce Page concluded that on 'most of the critical steps' in his expansion Murdoch has 'sought and received political favours' and that his success has depended on these.
But this is perhaps too sweeping in the other direction. Rather, two conclusions stand out:

• The 1980s were the key period where policy decisions were important to Murdoch's growth. Murdoch became a giant in newspaper publishing without any special help from governments, and there were no important cases of governments assisting him before the 1980s. In that decade, as he changed his citizenship, launched into TV in the US, took over the _Times_ in Britain, became the dominant player in Australian newspapers, and launched the Sky TV satellite service, his relations with governments were crucial to his ability to develop as he wished.

• Since the 1990s, Murdoch's veto power has been more important than his initiating power. Over this long period, Murdoch has almost constantly been a controversial figure, and there have been calls for his power to be curbed – by stronger anti-monopoly measures, for example – but there are few cases of governments adopting policies that ran strongly counter to his interests. 'No government,' said Leveson, 'addressed the issue of press regulation, nor of concentration of ownership.'

On the other hand, he has not always succeeded in gaining legislative measures he wanted. For example, as a partner in the pay TV operator Foxtel, he has long fought the Australian anti-siphoning rules that give the free-to-air stations first choice at major sporting events, but the power of the free-to-air networks and of public opinion have protected the status quo. Murdoch's relations with three governments – Thatcher, Hawke-Keating and Blair – have been particularly crucial, and are considered below.

Thatcher and Murdoch

As Shawcross commented, after the _Sun_ swung its support to Thatcher in 1979, for the next decade and a half it 'remained astonishingly loyal to her, and that loyalty was rewarded'.
It was in the Murdoch-Thatcher relationship that the politics of mutual patronage reached its strongest expression, and produced the worst abuses in policy-making. In 1959 Roy Thomson bought the _Sunday Times_ and in 1966 he bought the _Times_ from the Astor family. Under Thomson there was a high degree of editorial independence. The _Sunday Times_ thrived both commercially and editorially, maintaining high standards of journalism and attracting the best talent, especially under the editorship of Harold Evans. However, by 1980 Thomson's ownership of Times Newspapers had 'become commercially disastrous', due principally to industrial problems. Publication had been suspended for 11 months in 1978–79. The journalists, who had been paid for that entire period, then went on strike in August 1980. In the face of continuing unrest and large losses, the Thomson organisation decided to sell; if no buyer could be found, it would cease publication. It was keen to find a buyer, however, as closing the titles would result in severance payments of around £36 million. There were several potential bidders, including groups organised by the editors of the two papers. Lord Rothermere offered £20 million, but Thomson was concerned that his company intended to close the _Times_ once he got control, so Murdoch's lower bid, of £12 million, prevailed. The policy issue was whether the acquisition would be referred to the Monopolies Commission, as Murdoch already owned one national daily and one Sunday national. Any seller would prefer a purchase to be consummated without having to face an inquiry, and this was clearly Thomson's wish. Equally, any buyer would also prefer an immediate decision. Murdoch, quite reasonably, told the Secretary of State, John Biffen, that if there was an inquiry he reserved the right to renegotiate the price – the paper could well be bleeding money in the interim. However, there is little reason to believe that an inquiry would have endangered the sale. 
It is also unlikely that Thomson would have closed the titles: he would then forfeit the purchase price, and face the large severance payments bill. At first Biffen indicated that he would refer Murdoch's purchase to an inquiry, but then he announced that he would not. There was immediate controversy. John Smith, later leader of the Labour Party, said the acquisition would produce a concentration of newspaper power probably 'unprecedented in our history'. Some Conservative backbenchers were also opposed to the lack of due process. As Belfield and his Channel Four colleagues observed, 'Yet again, Murdoch had demonstrated his brilliance in waltzing past the regulators. The key to his fancy footwork was that he was not dancing alone.' Murdoch and Thatcher were 'ideological soulmates'. Murdoch described himself as 'a great admirer' of her and on the 'same page politically', and she felt she owed 'a real debt of gratitude to him'. When one _Sunday Times_ journalist spoke to a Thatcher adviser about blocking the sale, he was told to stop wasting his time – 'You don't realize, she likes the guy.' Murdoch's friend and confidant, Woodrow Wyatt, was also keen to claim credit. Twice in later years, after he had started keeping a diary, entries referred to having arranged 'through Margaret' that 'the deal didn't go to the Monopolies Commission which almost certainly would have blocked it'. The law required that there must be a reference unless both papers were making a loss. In justifying his decision, Biffen said he was satisfied that neither the _Times_ nor the _Sunday Times_ was a going concern, and therefore a referral was not necessary. He also presented figures which showed both papers were making a loss. The impact of industrial disputes would have given him some backing for such calculations. 
However, according to Thomson's finance group director, and most other analysts, the _Sunday Times_ was profitable, and if the two titles were taken together the company overall was still profitable, even though the _Times_ was making a substantial loss. Murdoch did have to give a series of guarantees. The most important related to editorial independence. This would be enforced by there being a group of national directors who would have sole power over the appointment and dismissal of editors. Murdoch's guarantees were widely applauded. The outgoing editor of the _Times_ , William Rees-Mogg, said they are 'very far reaching and there is no reason to doubt that he will abide by them'. The incoming editor of the _Times_ , Harold Evans, whom Murdoch had lured across from the _Sunday Times_ , and who was a journalist whose stature gave comfort to all who wanted to believe the new ownership arrangements would work, said, 'No editor or journalist could ask for wider guarantees of editorial independence on news and policy than those Mr Murdoch has accepted.' The guarantees seemed empty a year later, on 9 March 1982, when Murdoch demanded Evans's resignation. Murdoch has continued to assert his observance of the editorial independence guarantees. He told the Leveson Inquiry in 2012, 'I never gave instructions to the editor of the _Times_ or the _Sunday Times_ ', and in 2007, when reassuring the owners of the _Wall St Journal_ that he would observe guarantees given to them, declared, 'I have [never] given any sort of political instructions, or even guidance to one editor of the _Times_ or the _Sunday Times_.' He described Evans as the only _Times_ editor 'we have ever asked to leave'. However, only months after telling Leveson this, he dismissed another, James Harding. He must also have forgotten that he forced Charles Wilson to resign in 1990. As well as the editorial dismissals, there was other Murdoch interference in both papers. This became more acute from early 1982. 
An obvious case was on the _Sunday Times_ , when he told the editor, Frank Giles, that he wanted to appoint two new deputy editors. There was no pretence of consulting Giles, let alone getting his agreement. Giles complied and announced the new appointments, even though he knew that 'what [Murdoch] was now demanding was in complete breach of [his] undertaking'. One of the former deputy editors, Hugo Young, wrote, after he left the paper, that Murdoch did not believe in neutrality:

Indeed, rather like politicians themselves, he had difficulty comprehending it. As far as he was concerned, journalistic detachment was a mask for anti-Thatcherism.

According to Evans, 'It soon became obvious that nothing less than unquestioning backing of Mrs Thatcher on every issue would satisfy Rupert.' These editors' views were supported by Tom Kiernan, who recalled a New York dinner party, where the papers' criticism of Thatcher was galling to Murdoch. He:

characterized Giles and his wife as Communists. Evans, he called worse. Then he went on to blast the two papers as being 'lilylivered' and 'straining my patience'. No one at the dinner who heard Murdoch's diatribe had any doubt of what was about to happen.

The pressure to force Evans out gathered pace. On 8 February 1982:

Murdoch provoked an atmosphere of crisis by sending a letter to all employees of Times Newspapers, warning them that the daily and Sunday would close within days rather than weeks, unless 600 jobs were shed.

One of Evans's ongoing frustrations was that Murdoch never gave him a budget: he was told he was exceeding a budget whose content was never revealed to him. The prospect of closure and of hundreds of jobs being lost heightened the insecurity and discontent among all employees. Evans 'was under pressure from his proprietor above and an increasingly discontented staff below'.
Soon afterwards it was revealed that – contrary to the guarantees he had given – Murdoch had transferred the titles of the _Times_ newspapers to News International. If Murdoch closed the papers, he would retain the titles, and be free to reopen them later. Former editor Rees-Mogg initiated the public outcry. Evans also denounced the move, observing that the national directors had not been informed. Once the issue became public, Murdoch transferred the titles out again. As events moved to a climax, Murdoch offered the editorship to Evans's deputy, Charles Douglas-Home, who accepted. However, Evans refused to resign immediately – to show that 'editors of the _Times_ were not to be as casually discarded as had the 13 editors in 15 years at the _Australian_ '. The standoff continued for some days, but then he resigned: 'I was so absolutely disgusted, dismayed and demoralized by living in a vindictive atmosphere.' 'Nothing in my experience,' he wrote, 'compared to the atmosphere of intrigue, fear and spite inflicted on the paper by Murdoch's lieutenants.' The home editor of the _Times_ , Fred Emery, said he was told by Murdoch, 'I give instructions to my editors all around the world; why shouldn't I in London?' When Emery reminded him of his undertakings to the Secretary of State, Murdoch replied, 'They're not worth the paper they're written on.' (Two decades later, Murdoch denied to Leveson that he said this.) Evans later came to:

agree with Murdoch that editorial guarantees are not worth the paper they are written on... [In reality,] the national directors are incapable of monitoring the daily turmoil of a newspaper... Arbitration is impossible on the innumerable issues which may arise in many different ways every day between editor and proprietor.

There was a further case where the Thatcher Government failed to refer a Murdoch purchase to the Monopolies Commission even though the purchase gave him a third daily title.
Murdoch bought the loss-making _Today_ for £38 million in July 1987. The daily, launched by Eddie Shah, had pioneered the use of colour and more efficient printing techniques, but had failed to make a profit. Shah first went into partnership with another prominent British businessman, Tiny Rowland, but then both became interested in selling the paper. Maxwell and Murdoch were the two interested buyers. Maxwell thought he had succeeded, and in a typical case of ego beating strategic judgement, telephoned Murdoch in the US to tell him. Murdoch realised that Maxwell in fact did not yet have crucial signatures; he moved quickly, and beat Maxwell. Both seller Rowland and buyer Murdoch were keen to avoid an inquiry, and advised the government that if there was not immediate approval, the paper would close. The government agreed. The outcry was only a small fraction of the public controversy surrounding the sale of the _Times_. First, _Today_ , only a few years old, lacked the _Thunderer_ 's iconic status. Second, Prime Minister Thatcher had much more political latitude at this time. She had just won her third election, in June 1987, and was in undisputed command of her own party and of a parliamentary majority. Third, the main alternative to Murdoch was the unscrupulous Maxwell. Fourth, the title was clearly making a loss, which meant that it was not mandatory to refer it. Finally, there was perhaps a sense of fatigue and inevitability that the Murdoch-Thatcher axis would prevail. Nevertheless, none of this seems sufficient grounds for not referring an unprecedented acquisition of a third national daily newspaper to the Monopolies Commission. Wyatt's diaries were quite open about the political nature of the decision. In 1986, he and Murdoch had wanted a referral because, as he said to Thatcher, 'we don't want _Today_ to fall into the hands of our enemy Maxwell'. In 1987, with Murdoch as the buyer, they successfully lobbied to prevent a referral.
Almost a decade after the purchase of the _Times_ , Thatcher allowed Murdoch to, in effect, sabotage the satellite policies her own government had adopted. In the first half of the 1980s, the government announced that it wanted to see the rapid development of satellite broadcasting and was keen to achieve industrial advantages from the emerging technology. But equally, it expected the capital cost of providing the satellite system to be met within the private sector. The first attempts to put together a consortium to do this failed. The BBC, for example, pulled out because it could not meet the high costs, as did others who were also wary of the very large financial risks they would be incurring, with high immediate costs and at best deferred income streams. Eventually, in December 1986, the Independent Broadcasting Authority (IBA) awarded the franchise to British Satellite Broadcasting (BSB), a consortium of five media companies (not including News International, which was part of a competing tender). Under the cross-media rules, each newspaper was limited to 20 per cent of the satellite broadcaster. The government adopted 'a high cost and self-consciously "quality" approach to satellite broadcasting'. Shawcross explained: 'Under the terms of its franchise, BSB was compelled to use a new, untried and very expensive transmission system, D-MAC, which was expected to produce a better picture than the old PAL [colour encoding] system.' BSB had to use a technology with more initial problems, plus a greater initial expense to potential consumers, but that would have greater long-term value. According to scholar Peter Goodwin's definitive account, 'right from the start the IBA, the government and BSB were clear that its chances of success depended on a clear run free of new competition'. In June 1988 Murdoch announced plans for Sky, a service aimed entirely at British audiences, but operating on the Luxembourg-based Astra satellite, and using the established PAL technology.
This was a direct challenge to all the government's assumptions, but the government made no response. As Goodwin notes, 'Murdoch also scored a marketing goal, creating an image of Sky television as the cheap and quick route into the world of satellite television.' Sky launched its four-channel service in February 1989. The bulk of the programming was repeats and US material. The directors of BSB were horrified. They 'had agreed to use an untested and expensive new technology, D-MAC, and an unlaunched satellite, Marco Polo' in return for a monopoly. When they met Mrs Thatcher to voice their objections, they 'received a short lecture on the virtues of competition. She told them to stop whingeing and sent them away.' To meet the challenge, the BSB partners pledged a further £900 million in January 1990, in addition to the £423 million already committed. These were unprecedented amounts in British broadcasting. Finally, BSB launched, in April 1990, 15 months after Sky, which already had 600,000 dishes in place. Both sides were scrambling to buy exclusive programming rights for movies. Sky signed Fox, Orion, Touchstone and Warner Bros for an estimated £60 million, while BSB contracted with Paramount, Universal, MGM/United Artists and Columbia for £85 million. Shawcross writes, 'During the summer of 1990, the Sky-BSB battle to sell dishes increased in ferocity. BSB was thought to be losing £8 million a week and Sky £2 million.' As both sides bled money, they became increasingly desperate to reach an agreement. Finally, on 2 November 1990 it was announced that Sky and BSB would merge to create BSkyB, operating on the Astra satellite and using PAL. Murdoch had alerted 'Margaret Thatcher to the merger deal a few days before it was publicly announced. The Prime Minister did not see fit to warn her Cabinet colleagues.' So 'Peter Lloyd, the Broadcasting Minister, only learned about it when he read his morning newspaper.' 
In theory, the IBA was in control of the satellite licence, so BSB was not in a position to dispose of it, or to make the merger with Sky the way it had. When news of the merger was announced, the IBA was furious. But it soon felt compelled to give in to the commercial reality, especially in the absence of any strong counterindications from the government. This was very much a shotgun marriage. The two sides detested each other. Murdoch thought BSB deserved to die. For Murdoch:

the Sky team was lean, young and dedicated. By contrast, BSB was burdened with a big, highly paid management, [who] behaved like established fat cats.

From the BSB and IBA point of view, Murdoch had, in effect, 'seized control of a British television station, BSB, and was now daring the authorities to deny it to him'. Former IBA Chairman Lord Thomson called it a 'brutal Wapping in outer space'. The Labour Party charged that the merger made a mockery of the new Broadcasting Act. Murdoch responded:

They hate the idea of a competitive society, and it is only companies like ours that have the guts and strength to risk everything in building a competitor to the existing monopoly. That's what we are all about.

He sought to portray Sky as the under-resourced but entrepreneurial outsider overcoming the establishment organisation. The truth is almost the opposite of Murdoch's formulation. BSB received no public subsidy, but had to meet onerous publicly imposed obligations, which put it at a severe commercial disadvantage. The only justification for this was protection of its monopoly status, at least until its services were established. It had accepted the franchise on one set of conditions only to find the government had allowed that position to be completely undercut. Allowing the merger to proceed on terms very favourable to Sky was not the last piece of assistance the Thatcher Government gave to Murdoch.
Newspaper owners were limited to 20 per cent of domestic satellite broadcasters, but Sky and then BSkyB were judged not to be domestic, so Murdoch was allowed to exceed that. This helped ensure that he would always be by far the biggest shareholder. The government helped again over the 1988 European Broadcasting Directive, which required channels to broadcast at least 50 per cent European programming: the UK Home Office decided this did not apply to BSkyB's movie channels, and so they were able to continue with their overwhelmingly US fare. Lastly, Thatcher decreed that the BBC must pay £10 million a year to be transmitted on the Sky platform, although across the rest of Europe commercial broadcasters paid public broadcasters for the privilege of using their content. The British Government portrayed itself as a bystander, allowing market forces to play out. It claimed that although BSkyB's programming operated out of London and was directed at a British audience, its satellite rights were based in Luxembourg, and therefore the government lacked direct jurisdiction. In subsequent years the Tory Government 'acted successfully to drive [European-based] UK-directed pornographic channels out of business', but such powers were never used to curb Sky or BSkyB. It is hard to exaggerate just how far the outcome differed from the satellite policies the Conservative Government had been proclaiming. BSkyB did not use or advance UK technology, or contribute to the industrial goals the government had embraced. It went back to PAL television, and so 'put high-definition television in Britain... on hold for a decade'. It was dominated by a non-UK-controlled company and it broadcast heavily non-UK programming. The Thatcher Government had willingly connived in making a mockery of its own policies. In doing so, it laid the groundwork for what eventually would become a powerful monopoly, as 'the real strength of News in Britain lay in the astonishing success of BSkyB'.
Murdoch was speaking accurately when he told Andrew Neil, 'we owe Thatcher a lot as a company'.

Hawke-Keating Labor and Murdoch

When Murdoch renounced his Australian citizenship in September 1985, he became ineligible to hold an Australian television licence. However, as he told Kiernan, he 'was sure he would figure out some way to get around' the Australian law: 'Perhaps the government would make him an "honorary citizen".' Murdoch entered into a protracted period of negotiation with the Australian Broadcasting Tribunal (ABT) as he attempted to restructure arrangements so that he would remain the major shareholder and continue to reap profits from the Ten stations, but no longer 'control' the stations. His voting stock would be quarantined below the permissible 15 per cent, for example. Eventually the ABT referred Murdoch's proposal to the Federal Court to rule on its legality. This long delay profited Murdoch greatly, because in the interim the Hawke Government introduced a new media policy, which sparked a scramble for media assets. When Hawke came to power in March 1983, the media structure in Australia was one of entrenched, stable oligopoly. Four companies – Murdoch, Fairfax, Packer and the Herald and Weekly Times (HWT) – dominated. The minister, Michael Duffy, was consulting various stakeholders about two changes in TV policy. There were proposals for aggregation of rural areas to bring competition where there was only a single commercial channel, and there were proposals about reforming the 'two station rule'. This limited any company to owning two TV stations, but it took no account of the size of the population the channels reached. So stations in, say, North Queensland, counted the same as channels in Sydney and Melbourne, which together reached 43 per cent of the TV market. At that time, Packer and Murdoch both had a Sydney-Melbourne axis. The third network, Seven, was split between Fairfax in Sydney and HWT in Melbourne.
Duffy was in favour of redefining the ownership limits so one company could own channels able to reach 43 per cent of the population, but was opposed by Hawke and Keating. The media issue became, according to media analyst Paul Chadwick, 'the most internally divisive of the Labor Government's first five years'. During one tense standoff, another reform-minded minister, John Button, challenged the prime minister: 'Why don't you just tell us what your mates [Murdoch and Packer] want?' 'It's nothing to do with my fucking mates,' an angry Hawke is said to have replied, 'they're the only ones we've got.' The deadlock was resolved when Treasurer Paul Keating convinced the others to go with a much larger limit (initially 75 per cent), but also to introduce a ban on cross-media ownership, making it impossible to own a newspaper and a TV channel in the same market. In Keating's phrase, an owner could be a prince of print or a queen of the screen, but not both. Existing arrangements would be unaffected ('grandfathered'), but the law would apply to all future acquisitions. Banning cross-media ownership was a principle consistent with a longstanding Labor view that media ownership was too concentrated. This won assent in Cabinet, and the new policy was announced by press release the day after parliament had risen for the long summer recess. The effect, and probably the intent, was to advantage Packer and Murdoch and to disadvantage Fairfax and HWT. Packer had no newspapers, so was not affected by the cross-media change. Eventually Murdoch would sell out of television, and so likewise would be unaffected. Fairfax and HWT's pattern of newspaper and television ownership made it very hard for either to expand. It transpired that Keating had consulted the first two but not the last two before the changes were made public. 
As Peter Bowers put it in the _Sydney Morning Herald_ , 'Keating sees the cross-media rule as historic because it looks after Labor's long-term interests first, looks after Labor's mates second, and pays back Labor's enemies third.' Many media players thought that the changes represented the last chance to gain entry into television, and it triggered a scramble for position, with unprecedented prices. Even though the changes did not become law until the following May, there was immediate action. In the 12 months between November 1986 and November 1987, 13 of Australia's 19 metropolitan daily newspapers changed ownership, three of them twice, and 11 of the 17 metropolitan commercial TV channels changed owners, two of them twice. None of the four companies which had dominated Australian television in November 1986 had a single channel by November 1987, and the three major players who dominated the networks in late 1987 all exited the industry within the next five years, with the Ten network going through two ownership changes. The first, biggest and most controversial single transaction was Murdoch's purchase of the Herald and Weekly Times. Murdoch described the bid he made on 3 December 1986 as the biggest newspaper takeover in the English-speaking world. As it stood then, if Murdoch did not, or was not forced to, dispose of anything, the purchase would have given him more than three-quarters of the daily press, and a great number of television stations. Murdoch journalists wrote as if the deal was already a _fait accompli_ , with Brian Frith in the _Australian_ saying the 'Flinders Street fortress (HWT) fell in a single day', and that Murdoch's generous offer had shattered its 'long supposed impregnable takeover defence'. In fact the 'mind-boggling number of cross shareholdings' proved much harder and more expensive to penetrate than Murdoch had anticipated, especially as there were counter-bids from Holmes à Court and later Fairfax for parts of the group. 
Indeed the 'takeover battle for the Herald and Weekly Times was probably the longest, most involved and most litigious of any action in Australia'. Rather than being over in one day, it dragged on through many complications for nine weeks. From the beginning, Murdoch was confident of the government's support. He told HWT chief executive John D'Arcy at the beginning of November, 'There will be no trouble with the government.' Before then both D'Arcy and Murdoch had believed that neither the government nor the Trade Practices Commission would allow any merger that gave one publisher more than 50 per cent of the metropolitan newspapers, but now Murdoch's 'attitude was completely different, and it became apparent to me that he had a deal with' Bob Hawke. D'Arcy thought 'that Hawke and Keating had an incredible hatred of HWT and Fairfax' and affirmed journalist Geoff Kitney's observation that Hawke's reaction to Murdoch's takeover bid was 'almost joyful'. Former Fairfax editor Vic Carroll wrote: 'Hawke told some senior ministers in the week before the key Cabinet meeting [in late 1986] that if Cabinet approved the new 75 per cent ownership rule for the Packer and Murdoch groups then his government would win the next election.' Hawke called Fairfax 'the natural enemy of Labor' and HWT 'a violent, virulent anti-Labor journal'. He also said the old HWT management was 'viciously anti-Labor, so if Mr Murdoch were to fire a few salvoes at us, it couldn't be worse than what we have been enduring'. Keating told the right faction of the Labor caucus that 'Hawke was confident Packer and Murdoch were on Labor's side.' These views were widely shared among senior ALP politicians. Labor national secretary Bob McMullan told Labor MPs they should be 'dancing in the streets'. Former premier of New South Wales Neville Wran said that he'd 'like to see Murdoch own 95 per cent of the papers in Australia'.
Victorian Premier John Cain, after 'friendly and cordial' talks with Murdoch, said, 'Mr Murdoch is a newspaperman – I have no worry about him owning the Herald and Weekly Times.' When John Menadue expressed dismay to his close friend, senior government minister Mick Young, Young replied, 'The Herald and the Fairfax people – they're always against us. But you know, sometimes Rupert is for us.' Another minister put it more pessimistically: there is no way 'we can fuck Rupert Murdoch without fucking ourselves'. Quite apart from the way that none of these politicians seemed to see any democratic problem in this unprecedented media concentration, their pragmatism was based on problematic assumptions. Murdoch's (and Packer's) _quid_ was far more obvious than Labor's _quo_. Packer exited the industry for some years, and even after his return it would be hard to make a case that the Nine network showed any partiality towards Labor. He made one public statement praising the Hawke Government, but after the 1993 election, as the electoral tide was changing, he very publicly swung his support behind Howard, much to Keating's disgust. Nor did the Murdoch press campaign strongly for Labor in 1987: according to Chadwick, 'Murdoch indicated to some of his journalists that he wanted election editorials to steer a middle course, although they could lean slightly to the party they thought looked the best.' Given that his concentration of ownership was politically sensitive, he took the prudent course, and for the first time papers he owned editorialised in favour of different parties. In 1987, Fairfax papers were more editorially supportive of Labor than Murdoch ones. From 1993 and for the next several elections, News Limited newspapers were predominantly on the Coalition side. 
On the other side of the ledger, one leading analyst estimated the combined selling price of the Nine and Ten networks before the government's policy change at about $800 million; after the change they commanded $1.9 billion. So in return for some temporary, tepid, qualified support, the two proprietors enjoyed a windfall of $1.1 billion. Equally puzzling is the perception of the Murdoch and Packer groups as 'mates' and of Fairfax and the HWT as enemies. In Australia, Murdoch's support for the Hawke-Keating Labor Government came only after it was elected. In 1983 the only metropolitan paper editorially endorsing Labor was the Fairfax-owned _Age_. In 1984 every metropolitan newspaper that expressed a preference, except the Hobart _Mercury_ , editorialised for the re-election of the Hawke Government, so it had the support of all three major groups. Some HWT papers, particularly the Melbourne _Herald_ , had campaigned strongly against the government's proposals on superannuation taxation, but so had most Murdoch papers. On the basis of interviews with 223 journalists around Australia in the early 1980s, I concluded that among the three companies, journalists' accounts of political intervention and direction were very roughly in the ratio of News Ltd 10; HWT 4; Fairfax 1. While there were pieces of reporting in both Fairfax and HWT papers (and on the ABC) that angered government leaders, the differences seem to have been more in attitude than content. It was the lack of direction from the top in the ABC and Fairfax that the Labor deal-makers seemed to find difficult. They thought that some of Fairfax's journalism was 'out of control and dangerous'. Malcolm Fraser recognised the differences between the groups when he was seeking their support in 1975. He spoke to Murdoch, Packer and James Fairfax, 'but in the knowledge that the Fairfax papers ran differently to News Ltd – the views of the proprietor were not necessarily reflected in the copy the reporters wrote'. 
As for Packer and Murdoch, Fraser said, 'We did not believe the fiction that media barons do not control the policies of their papers.' Keating also clearly enjoyed being a participant, helping to shape the big moves that were remaking the Australian political landscape. He had given Murdoch and Packer, but not Fairfax, advance notice of the proposed changes. When Fairfax general manager Greg Gardiner telephoned Holmes à Court, Keating was with the latter, and – unbeknownst to Gardiner – listened in. Keating then warned Murdoch that Holmes à Court was serious, and that he [Murdoch] would have to negotiate; he thought his intervention with Murdoch was crucial in this happening. Afterwards he gloated to Fairfax executives, 'I hurt you more than you hurt me.' There were three regulatory hurdles Murdoch had to clear. The first was his proposal to keep the Ten TV licence by reorganising the control structure of the channels. The second was Foreign Investment Review Board approval for foreign ownership of the HWT newspapers. The last was Trade Practices Commission approval: the level of concentration in newspaper ownership did not breach its policies. Murdoch failed the first – causing him then to sell his Channel Ten stations – and passed the other two. On 20 January 1987, Murdoch's hopes had to be radically scaled back after the full bench of the Federal Court ruled that he could not own an Australian television licence because he was not an Australian citizen. One judge said that the talk of restructuring News Limited was a 'sham'. This had implications not only for the disposal of the Ten network, but for News's wish to acquire HWT, as News, as a foreign company, would not be allowed to own its broadcasting assets either. A few days later the ABT indeed announced an inquiry into Murdoch's status as a foreign person. The ABT made no specific finding on the matter at this stage. 
In order to pre-empt potential legal difficulties, the HWT Board itself auctioned off its broadcasting assets, and they were all disposed of before the company passed into foreign hands. The decision also brought an extraordinary response: on 22 January, News Limited issued a public statement disowning Rupert Murdoch: 'A number of statements have recently appeared in the press and elsewhere attributed to Mr K.R. Murdoch relating to News Limited and in particular its takeover bid for the Herald and Weekly Times... The board wishes, however, to point out the following. 1. Although Mr Murdoch was formerly a director of News Ltd, he is no longer a director and he holds no office in the company. 2. Mr Murdoch has no authority to speak on behalf of or to bind News Ltd.' This fiction required one to ignore recent history. On the day the bid was launched, Murdoch had been photographed with his mother, with much talk about how proud his father would have been, and commentary on him reclaiming his 'birth right'. The chair of the ABT, Deirdre O'Connor, pointed out that when Murdoch had talked about the imminent moves, he had talked of 'his' takeover of HWT and 'his' plans to sell some of its assets. The charade did not last long. In the coming days and weeks, Murdoch certainly acted as if he was in charge, and no protest from the Board ever became public. The Foreign Takeovers Act gave the treasurer power to prohibit a purchase of a corporation by foreign persons if that control was deemed contrary to the national interest. A four-member Foreign Investment Review Board (FIRB) would advise the treasurer, who would then make a public announcement of his decision, often without giving any grounds. The FIRB's advice in this case has never been made public. Treasurer Keating approved Murdoch's purchase of HWT, and so a majority of the nation's papers passed into foreign hands.
On the three other occasions during Keating's period in government when decisions involving foreign takeovers of newspapers arose, he rejected the applications. He prevented Robert Maxwell buying the _Age_ and stopped a Malaysian company buying half of the afternoon paper, the _Perth Daily News_. Famously, he refused to allow Conrad Black to lift his stake in Fairfax, until he saw how balanced their coverage during the 1993 election was. Keating also rejected Murdoch's attempted purchase of the majority of the news agency Australian Associated Press, but approved his acquisition of AAP's share of Reuters, and also of half of Australian Newsprint Mills. When the Murdoch decision was being made, leading figures tended to appeal to sentiment rather than law, to argue that Murdoch was 'really' an Australian. John Singleton's advertising on Murdoch's behalf had a double-page spread headed 'Is the greatest Living Aussie a Yank?' The Trade Practices Commission (since replaced by the Australian Competition and Consumer Commission [ACCC]) had a mandate to weigh the competition impact of the takeover. The TPC said it was 'satisfied that the acquisition of HWT by News Ltd, viewed overall, has not increased concentration of ownership of the print media in Australia. Rather has ownership become more widespread.' It argued that while the Act precludes one company's domination, it does not preclude duopolies. The TPC head, Bob McComas, argued: 'There isn't any doubt in my mind that a large part of the comment was due to the person behind News rather than the acquisition itself. It is important to recognize that there was a particular feeling about the takeover which in no way related to the law.' It would have been a politically hazardous course for any regulator to align itself against such powerful forces, though there were some, albeit limited, grounds to support the optimistic conclusion the TPC reached.
In no metropolitan market was the number of competitors immediately reduced. Ownership of afternoon papers – much weaker financially than morning papers – was more dispersed. None of this suffices, however, to deal with the elephant in the room: since these deals, the largest company accounted for around two-thirds of metropolitan newspaper circulation, a much bigger share than in any other democratic country. Unfortunately for the TPC, almost immediately some arrangements began to fall apart. As Frank Lowy's Northern Star company extended its TV reach, it wanted to offload the Brisbane and Adelaide papers it had bought from Murdoch. In August 1987 control of the two papers passed to local managements – who had previously worked for News Limited, whose financing was arranged by News Limited, and whose printing and distribution were negotiated with Murdoch. It would be hard to argue that such arrangements constituted them as independent entities, let alone strong competition. The newspaper casualties were quick to come: _Business Daily_ , a new national business newspaper backed by HWT, did begin, as planned, in July 1987, but in the face of News Limited's hostility it closed after only six weeks. In the early 1980s, Murdoch had launched a new daily in Brisbane, the _Daily Sun_ , to compete with the _Courier-Mail_. Within a year of gaining control of HWT in 1987, he closed the Brisbane afternoon paper the _Telegraph_ ; the _Sun_ switched to afternoons, and so Brisbane's newspapers were reduced from three to two. Third, Holmes à Court closed his weekly _Western Mail_ in December 1987, as he now owned Perth's daily newspapers. On 9 February 1987, Murdoch, having achieved his goals in newspapers, finally bowed to legal necessity (and perhaps to his own financial needs) and sold his TV channels to Lowy. The delay had been financially rewarding. 
He was paid over $800 million, which some estimate at more than double what he would have received if he had sold in September 1985. Cross-media ownership between newspapers and television had disappeared, but concentration within both had increased markedly. In addition, the upheavals had weakened both industries. Nevertheless, Treasurer Paul Keating was pleased with his handiwork. In 1990, he said the result was 'a beautiful position compared with what we did have'. Between 1988 and 1991, 1200 journalist jobs disappeared, the biggest loss in the industry's history, as seven of 19 metropolitan daily papers closed, and commercial TV staff declined from 7745 to 6316. Meanwhile, Murdoch had, as he proclaimed, completed the biggest newspaper takeover in the English-speaking world, and achieved an unassailable position in the Australian press. Hawke's Labor successors have had ample opportunity to regret the misdirected pragmatism of his government, which allowed one company to dominate press ownership in a way that is clearly detrimental to democracy.

Blair and Murdoch

For everyone in the British Labour Party, the savagery of the tabloids during the 1992 election was a pivotal experience. Even the political beneficiary of it all, Conservative Prime Minister John Major, described the campaign against Labour leader Neil Kinnock as 'pretty crude' and 'over the top'. Alastair Campbell had no doubt 'that the systematic undermining of Labour and its leader and policies through these papers... was a factor in Labour's inability properly to connect with the public, and [its] ultimate defeat'. Blair himself resolved: 'I was absolutely determined that we should not be subject to the same onslaught.' He told tabloid editor Piers Morgan that 'I had to court [Murdoch]... It is better to be riding the tiger's back than let it rip your throat out. Look what Murdoch did to Kinnock.'
Just as the experience made Blair determined to avoid any repetition, it meant others in the Labour Party were likely to be affronted by any dealings he had with Murdoch. Kinnock vented his anger one night at dinner with Campbell: 'You imagine what it's like having your head stuck inside a fucking light bulb' [referring to the _Sun_'s infamous front page on election day], he raged at me, 'then you tell me how I'm supposed to feel when I see you set off halfway round the world to grease him up.' Blair's approach was more attuned to strategy than to moral judgement. In government, on one occasion when Campbell was indignant over some mistreatment by the Murdoch press, Blair advised that 'he was worried my [Campbell's] sense of injustice about what they did was clouding my judgement about how to deal with them'. Blair described his early period as opposition leader, from 1994, as one of 'courting, assuaging and persuading the media'. Part of what he wanted was to abandon the party's platform on media reform. He told the Leveson Inquiry that Labour made a strategic decision not to tackle the problem of media power: '[I'm] being open about the fact that, frankly, I decided as a political leader that I was going to manage that and not confront it.' He felt that any policy on changing the law on the media 'would have been an absolute confrontation. You would have had virtually every part of the media against you in doing it, and I felt that the price you would pay for that would actually push out a lot of the things I cared about...' Blair went beyond making a strategic decision about choosing which battles to fight, however. He sought political advantage when the Major Government moved towards some limits on media ownership which would have disadvantaged Murdoch.
Responding to pressures to alleviate the previous total ban on newspapers being allowed to have a commercial TV licence, Major moved towards the then fashionable view of framing ownership limits by defining a 'share of voice' across media. In 1995, his government proposed prohibiting newspaper companies with more than 20 per cent of national circulation applying for ITV licences. This would cut out the Mirror group and Murdoch. McKnight notes that 'A witness to Murdoch's reaction saw him driven into a "furious rage".' Blair moved to exploit the proprietor's discontent. 'It's not a question of Murdoch being too powerful,' he commented. Labour wanted the 20 per cent limit raised, so instead of being on the more regulatory side, Labour was now on the more deregulatory side. The minister, Virginia Bottomley, accused Labour of 'lurching from paranoid terror of large media groups to sycophantic devotion to them'. The stance was not driven by policy merits, she claimed. Rather 'it was a carefully calculated political stratagem designed to curry favour'. Labour had Murdoch's support in 1997, and won, although as Campbell observes, 'The _Sun_ backed us because they knew we were going to win. We did not win because they backed us.' This set the scene for one of the more extraordinary prime minister-press proprietor relationships in British history. On the one hand, the government was at pains to please Murdoch. For example, Blair ensured that Murdoch sat next to Chinese President Jiang Zemin at a state dinner in London, according him status and the chance to advance his business aspirations in China. Labour favoured the Murdoch press with interviews and leaks of important announcements, and the likely reaction of Murdoch was in the forefront of government thinking when deciding policy: 'No big decision could ever be made inside No. 10 without taking account of the likely reaction of three men – Gordon Brown, John Prescott and Rupert Murdoch.' 
This was most obvious in policy on Europe, but as former Blair staffer Lance Price observed, 'the influence of the Murdoch press on immigration and asylum policy would make a fascinating PhD thesis'. Yet at the same time the government was paranoid about its dealings with Murdoch becoming public: 'In the past week both Murdoch and the new editor of the _Sun_ , David Yelland, were in Number Ten for dinner – not something we've been advertising,' recorded Price in his diary. This confluence of an eagerness to please Murdoch, and a reluctance to acknowledge any relationship with him, produced an unnecessary embarrassment for the government. During a phone call, British Prime Minister Tony Blair – at Murdoch's request – asked Italian Prime Minister Romano Prodi what his attitude was to Murdoch's wish to acquire Berlusconi's Mediaset company, and Prodi replied that he would prefer an Italian company. This episode blew up into a momentary controversy after the conversation became public. When the story first surfaced in March 1998, Campbell extravagantly denounced it as 'a joke, C-R-A-P, balls'. But uncomfortable backtracking quickly followed, mainly centring on the meaning of the word 'intervene'. Blair then made public statements about his willingness to help 'any business with British interests', but as one Labour MP observed, the government's handling of the issue was a 'rather unedifying spectacle of half-truths and non-denial denials'. As Piers Brendon notes, 'When revealed, this piece of lobbying embarrassed Blair as much as it delighted Murdoch [who] bragged about his access.' Did this translate into tangible policy favours? Campbell and Blair both emphasised to the Leveson Inquiry that several of their media policies ran contrary to News International's interests. For example, they increased the BBC licence fee; they blocked Murdoch's proposed takeover of Manchester United; and they expanded and strengthened the role of the regulator, Ofcom. 
In each of these cases they were responding also to larger pressures in society and in the Labour Party. On other occasions, they clearly resisted wider currents for reform. For example, they refused to back moves in the House of Lords against predatory pricing in newspapers in February 1998, when the _Times_ was seeking to drive its competitors out of business using exactly this tactic. The government also opposed – in contrast to previous Labour policy and many other voices – any attempts to tighten anti-monopoly provisions in the media. The occasion when the Labour Government ruled most directly against Murdoch's interests was over his bid to acquire Manchester United. Murdoch bid £623 million, which the club accepted in September 1998. Supporters' groups and others immediately opposed the takeover. In October, the Trade Secretary, Peter Mandelson, referred the bid to the Monopolies and Mergers Commission. Murdoch was unhappy about the referral. The following April, after a six-month investigation, the government ruled against the bid. The _Sun_ and the _Times_ both criticised the decision, and said football would be the loser. The _Guardian_ took the contrary view, with its editorial, 'Murdoch 0, Football 1'. The Commission's report was 'highly suspicious of BSkyB and United, concluding that promises offered by the two companies to help the deal go through were unlikely to be kept'. Although there were and are other privately owned clubs, there were particular issues with BSkyB owning Manchester United. It would make it very difficult for any other broadcaster ever to win the rights for the English Premier League. It would give BSkyB an incentive to give preference to Manchester United over other clubs. At worst, it could be the basis for BSkyB to support a breakaway competition, à la SuperLeague in Australian rugby league. 
Price gave an insight into how worried Blair was by the decision: 'He is, of course, totally preoccupied by Kosovo at the moment, but also very exercised by the decision yesterday to block Sky's bid for Manchester United. No matter what we say publicly, he's very concerned to keep Murdoch on board... He was furious that the DTI [Department of Trade and Industry] let it be reported that the government had blocked the deal, rather than the Monopolies and Mergers Commission.' Apart from observing 'no-go areas', the one area where the Blair Government adopted a policy that clearly advantaged News International was in the Communications Act of 2003, which for the first time withdrew all foreign ownership restrictions on British broadcasting, and allowed major newspaper proprietors to own the new Channel 5 terrestrial licence, but not the original commercial Channel 3 licences. As Brendon puts it, 'Official denials merely convinced critics that this did not so much create a "level playing field as a landing strip for Rupert Murdoch".' Neil testified before the Leveson Inquiry that Murdoch had lobbied Blair for changes in media laws that would end the ban on foreign ownership of TV licences. Leveson concluded, perhaps generously, that 'the evidence does not support an inference of an agreement between Mr Murdoch and Mr Blair. Not only did Mr Blair flatly deny any such deal but the contemporary papers... reveal very considerable thought, genuine debate and reasoned decision making during the development of the policy underpinning the 2003 Act.'

Murdoch's lobbying style

The news media pride themselves on their ability to penetrate official secrecy and maintain public accountability. It is somewhat ironic then that News Corp shows such a strong preference for closed policy processes and a tendency to evade public commitments.
Murdoch's dealings with governments and regulatory agencies show just how much access, at the very top levels of all governments, he and his representatives have enjoyed. They also show his preference for closed and informal decision-making processes. As Leveson commented: There is a very powerful incentive and momentum precisely for the lobbyists of the press to guide their political relationships into the private sphere of friendships... Such friendships not only intensify the influence of the lobbyist, they pull the relationship (including its lobbying dimension) out of the sphere of accountability. Probably Murdoch's closest relationship with a regulator was with the Reagan-appointed chair of the US Federal Communications Commission (FCC), Mark Fowler. Fowler's Reaganite views led him towards radical deregulation: 'Over four years he got rid of 70 per cent of the rules and regulations that governed American broadcasting.' Beyond this Murdoch and Fowler had a close personal relationship: they were 'virtual soul mates, proponents of the free market and determined to do away with regulation at all cost... [Murdoch described Fowler] as one of the great pioneers of the communication revolution.' Fowler 'did everything he could to ease Murdoch's passage'. The FCC gave Murdoch a charmed run in the 1980s, although its most outrageous decision was of little commercial help. When Murdoch acquired six US TV stations in 1985, he also sought a waiver to keep his newspapers in New York and Chicago, despite owning TV licences in those cities. The US had had, since 1975, restrictions on cross-media ownership forbidding ownership of a TV station and a newspaper in the same city, and had never granted an exemption or even a temporary waiver to a new entrant to television. News Corp argued that it should not be forced into a 'fire sale', and so have to receive a lesser price for its papers. It even argued that selling at a lower price would be bad for media diversity. 
The FCC granted Murdoch an unprecedented two-year period of grace during which he could keep the newspapers. This decision seems impossible to justify. It was not an act of God that had put Murdoch in breach of the law, after all; it was his own deliberate actions. However, at least in the case of the _New York Post_ , this regulatory favour essentially allowed Murdoch to keep losing money. One recurring theme throughout Murdoch's career has been his failure to keep commitments. In 1968, Murdoch agreed not to buy any more shares in _News of the World_ , but did so within months, the moment they became available. When he took a stake in London Weekend TV in 1970, he had to give undertakings to the Independent Television Authority that he would not exercise executive power, but immediately did so. When reminded of this by the CEO he fired soon after, he simply replied, 'Yes, but that was before I came.' He told the publisher of the _New York Post_ , Dolly Schiff, in 1976 that he would retain the paper's liberal, progressive character and keep its top editorial staff. In front of the Australian Broadcasting Tribunal, as he took over Channel Ten in 1979, he made a series of spectacularly inaccurate statements. He declared, 'Channel Ten will continue exactly as it is today', but two weeks after he gained the Tribunal's approval, the general manager was gone, and within two months so was the chairman. He also said that although he held an American green card, it was not his desire to apply for US citizenship, and he could not imagine doing so. Six years later he did just that. When asked whether he would seek to own Ten's sister station in Melbourne, he answered, 'There is no substance to that rumour, and I do not see why I should give up a very profitable station in Adelaide for a loser in Melbourne.' But three months later he did exactly that. 
The most controversial broken commitments – and ones which in theory were legally binding – were the guarantees of editorial independence he gave when buying the _Times_. Amazingly, the Bancroft family sought to repeat the process when selling Murdoch the _Wall St Journal_. Murdoch found the process insulting but was 'willing to sign on to an artificial set of rules he would inevitably circumvent', knowing that they were 'more about other people's need for a fig leaf than about any reasonable idea of governance'. Then editor Marcus Brauchli thought: 'We're all trying to put Murdoch in a straitjacket, wrap him in chains, put him inside a lead box, padlock it shut, and drop it into the East River... and five minutes later he will be standing on the bank, smiling.' When it comes to complying with obligations to government, Murdoch commented to Kiernan in the early 1980s: 'One thing you must understand, Tom. You tell these bloody politicians whatever they want to hear, and once the deal is done you don't worry about it. They're not going to chase after you later if they suddenly decide what you said wasn't what they wanted to hear. Otherwise they're made to look bad, and they can't abide that. So they just stick their heads up their asses and wait for the blow to pass.' If, as former Murdoch editor David Montgomery observed, 'Rupert has contempt for the rules, contempt even for governments', it is not surprising that his record is one of regulatory brinksmanship. He takes the view that a rule only applies if it can be enforced. Indeed, Murdoch's capacity to affect how regulations are enforced has probably been more important than his capacity to change policies through legislation.

10 The market for truth

'A newspaper has two sides to it. It is a business like any other... but it is much more than a business... it has a moral as well as material existence and its character and influence are determined by the balance of these forces.' C.P.
Scott, the owner and editor of the _Guardian_, made this famous statement on that paper's centenary in 1921. It opens the 1982 book by eminent Australian author and journalist Les Carlyon on the Norris Inquiry into Victorian newspaper ownership. It is often used by those seeking to inspire journalism towards its highest ideals, to rise above commercial pressures. There have always been problems with Scott's formulation, primarily about who decides the 'balance' and how. What sustains the 'moral', and when and how does the 'material' enhance it or inhibit it? Must the public simply rely on the _noblesse oblige_ of journalists and proprietors, or are there institutional forces inside newspapers that make the achievement of the 'moral' more likely? Perhaps most interesting is how much the balance has changed. It is plausible to claim that the balance has swung more towards the material in the 30 years since Carlyon's book than it did in the 60 years separating him and Scott. In the early 1980s, the notion of editorial independence – as a bulwark against proprietors being too commercial or too propagandistic – was still taken sufficiently seriously for it to be enshrined in the guarantees Murdoch had to make when he bought the _Times_. Similarly, quality newspapers still talked of a separation between church (editorial) and state (advertising), believing that the two must remain independent of each other. In the decades since, corporations have become ever more intent on maximising their profits, and the internet has increased the financial pressures on newspapers. Murdoch has never engaged in vague talk of a balance between moral and material. For him, market success is all but synonymous with democratic virtue. For him, there are not 'quality' papers and popular papers, but simply popular and unpopular papers. From 1969, when he acquired the _Sun_, his most constant target has been 'elitist' pretensions in journalism.
He took to publicly criticising British journalism in general as 'dull', 'trite', 'long-winded' and 'incestuous'. As late as 2007, when he was taking over the _Wall St Journal_ , the refrain was similar. American newspapers 'have become monotonous'. Many are pretentious and suffer from a sort of tyranny of journalism schools so often run by failed editors. In between, he pondered 'whether there is any other industry in this country which presumes so completely to give the customer what he does not want'. Murdoch biographer, Tom Kiernan, felt that he 'despised the _Times_ and every other paper in America that strived for journalistic quality as "snobby enterprises"'. 'There's nothing wrong with talking to the masses,' Murdoch told an American TV interviewer. 'You know, William Shakespeare wrote for the masses.' Steve Dunleavy, a former editor of the _New York Post_ , said: Rupe doesn't dictate public tastes, you know. He has lots of bosses out there. Millions of them. The public tells him what they want to read and Rupe gives it to them. This initially sounds egalitarian and democratic, but for Murdoch, the constant risk was of getting 'above' the public. The London _Daily Mirror_ , for example, was vulnerable because it was 'flying above the heads of [its] readers'. He said that after launching the _Sun_ , 'People wrote to say that they hadn't had a paper they could understand until we came along.' But his view of the tabloid audience was not very flattering. When he bought the _News of the World_ in 1969, he told his mother: Look, Mum, in Britain there are hundreds of thousands of people who are living in miserable postwar tenement places. They have nothing in the world but the Pools and, you know, this is the sort of thing that they [want]. Years later, he told an American interviewer: 'You have in Britain a society that is becoming extremely decadent. You don't have the underlying puritanical history that this country's got.' 
His private views of his American audience, expressed over a long dinner with Kiernan in 1982, were no more flattering: the _Post_ 's readership was 'basically a poorly educated, narrowly experienced' group, which 'craved guidance' and 'flourished on simple black and white answers'. Presumed knowledge of the audience – usually in the absence of tangible evidence – is one source of editorial authority over mere reporters. When _Sun_ editor Kelvin MacKenzie was rebuking a journalist, he spelt out a somewhat alarming view of its readership: You just don't understand the readers, do you, eh? He's the bloke you see in the pub – a right old fascist, wants to send the wogs back, afraid of the Russians, hates the queers and weirdoes and drug dealers. So news judgements are coloured by how stories relate to the presumed readership. The _Sun_ used to celebrate prominently the lucky people who won _Sun_ bingo. But on one occasion, when it was won by an Asian family, the acting editor declared: 'I'm not having pictures of darkies on the front page... That's the last thing our readers want – pictures of blacks raking it in.' Also excluded were homosexuals. 'What's all this crap about poofters?' Murdoch would complain if there was a fleeting reference to homosexuality. He greeted an early feature article on homosexuality with, 'Do you really think our readers are interested in poofters?' It is thus not surprising that, according to Shawcross, he 'loathes the... gay rights lobby'. Framing news to flatter the presumed audience easily slides into denigrating out-groups in line with audience prejudices. The _Sun_ specialised in ethnocentrism that was amusing but also cutting. Its response to a 1984 campaign by French farmers to reduce imports of British lamb was 'Hop Off You Frogs'. But the tabloids' repertoire in their attempts at such humour is limited and clichéd. In 1996 England were to play Germany in the European football championship. 
The three national tabloids marched in step: The _Mirror_ 's headline was 'Achtung! Surrender! For you Fritz ze Euro 96 Championship is over'; the _Sun_ had 'Let's blitz Fritz' and the _Star_ 'Herr we Go – Bring on the Krauts'. This was just 51 years after the end of World War II, a conflict in which most of the players' fathers would have been too young to fight. It is another thing, though, to treat war as if it were a football match. When the Argentinean regime of General Galtieri seized the British colony, the Falkland Islands, in 1982, the _Sun_ was in the rhetorical front line of Thatcher's determination to take it back by force. As William Shawcross reported: Falklands War fever gripped the _Sun_ as nowhere else. _Sun_ reporters gave themselves military ranks, a picture of Winston Churchill was hung up in the newsroom and, when the British military task force set sail for the South Atlantic, a new slogan was coined: 'The paper that supports our boys'. Jingoism and journalistic gimmickry reached new heights: The first missile to hit Galtieri's gauchos will come with love from the _Sun_. And just in case he doesn't get the message, the weapon will have painted on the side, 'Up Yours Galtieri'... The copy explained that the paper was sponsoring the missile by paying towards the ship's victory party once the war was over. Promoting the paper went hand in hand with promoting the war: _Sun_ readers were encouraged to send in for free 'The _Sun_ says Good Luck Lads' badges, and T-shirts advertising the _Sun_ , and saying 'Stick it up your Junta!' The _Sun_ was also keen to fight the enemy at home, especially among its media competitors. Its editorial 'Dare Call it Treason' underlined its first sentence: 'There are traitors in our midst.' It then named a BBC journalist, the _Guardian_ and the _Daily Mirror_ as treacherous. The most infamous incident occurred when the British Navy sank the Argentine cruiser _General Belgrano_ , with 320 lives lost. 
The _Sun_ headline in its first edition was 'Gotcha!', but the office had second thoughts about such rejoicing in the loss of human lives, and changed it for the second edition. Murdoch, however, liked the 'Gotcha!' headline, and told MacKenzie they should have kept it. The _Sun_'s thirst for war angered many of those it was so lavishly praising – the servicemen and women involved. It also made life difficult for its journalist, David Graves, who was reporting from Buenos Aires. At one stage, he was called in by the Argentine military. 'We have been looking at your reports to London,' an admiral said. 'They do not always appear in quite the same way in the paper as you have written them, do they?' The admiral smiled and walked off. It may be unique in the annals of war reporting that an enemy military censor felt sympathy for a reporter because of the way his editors were distorting his copy. Murdoch and MacKenzie were probably satisfied with the paper's role. Their centre of gravity was the home audience, and the _Sun_'s polarising performance had kept it at the centre of attention in Britain. The war also cemented Thatcher's domestic political dominance.

Demand versus supply

In 1962, the two Sydney afternoon papers, in the face of a looming crisis as Indonesia sought to complete its decolonisation by taking Dutch West New Guinea by force, simultaneously discovered the importance of having a reporter on the spot. Murdoch's _Daily Mirror_ sent Brian Hogben to the capital, Hollandia, to send back first-hand reports. As the days passed, the paper became increasingly worried that the rival _Sun_ would beat it with the first story from the conflict area. Finally, the _Daily Mirror_ newsroom concocted its own story. As Kiernan observed, 'Written in suitably purple... prose, and riddled with jungle-warfare clichés, it told a tale, among other things, of cannibals and shrunken heads.'
When he heard, the furious Hogben telegraphed his home office, saying the 'nearest shrunken heads are in Sydney'. The story illustrates a more general problem in news: supply and demand do not necessarily go neatly together. _Sun_ editor MacKenzie exhorted his journalists to 'Shock and Amaze on Every Page', but this takes no account of the fact that the supply of shocking and amazing material may be limited. When reality is too tame to meet the demand, the temptation is to enhance it. One of the most infamous cases of Murdoch manufacturing supply to meet demand occurred with the serial killer 'Son of Sam' in New York in 1976–77. From July 1976 there was a series of motiveless murders of young men and women in middle-class areas; all were shot with a .44 calibre revolver late at night as they sat in the back of parked cars. In all, there were eight attacks, during which six people were killed and seven wounded. At first, the _Post_ had devoted little attention to the story, but 'Murdoch was determined to catch up.' He decided that 'there's only one game in town and that's Son of Sam', and insisted that there had to be a new angle every day. Unfortunately, the paper 'had no real information to offer – a regular difficulty with murder stories'. So what followed was not only hysterical but often simply baseless. There were several false stories, such as one about the cops letting Sam escape and another about a witness who saw him change into a wig. The paper's most fanciful invention was a story, attributed to Mafia sources, that Mafia families were out hunting the killer: 'The story was pure fantasy,' Murdoch told Kiernan with a laugh a few days later. Perhaps invention was better than intrusion. Murdoch's star reporter, Steve Dunleavy, donned a doctor's smock and entered the hospital room of a wounded victim, posing as a bereavement counsellor, and so secured an exclusive interview with her parents. 
After the arrest of the murderer, David Berkowitz, the _Post_ had a simple but effective banner, 'Caught!' The paper sold 400,000 copies more than normal that day, carrying it over the million mark. But it was not finished. Several days later, it published 'How I became a Mass Killer by David Berkowitz'. It was later revealed that this was not an article by the killer, and not based on an interview with him, but was a concoction made from letters his ex-girlfriend had sold the paper, and which contained almost nothing of any relevance. Murdoch's main competitor, the New York _Daily News_ , was equally culpable. Its star columnist, Jimmy Breslin, wrote some open letters to the serial killer. Giving such prominence, with the implicit promise of more to come, to a psychopath is of course grossly irresponsible. Murdoch later sought to justify his paper's behaviour by claiming that the _Post_ had 'encouraged the police to get off their tails and go catch [the killer]'. This is simply ludicrous: the police were already highly motivated to catch the serial killer. A further complicating factor in the mismatch between the demand for and supply of news is that demand is not constant. In particular, in the 'market for "shock, voyeurism and scandal", the consumer's expectations are constantly rising'. Looking back four decades later, the _Sun_ , although very controversial at the time, seems relatively harmless. It looked for every opportunity to have sexual content, but rarely in a nasty way. It sought to make itself the centre of attention, and to make the news, not just report it, and it was singularly unencumbered by any sense of professional obligation. What was not apparent then was that – as others sought to emulate it and as it became a prisoner of the expectations it had set for itself – it was setting up an unsustainable dynamic. 
One sign of the problems this would create was the reception _Sun_ journalists said they got from readers when doing soft features in the late 1980s compared with the 1970s: 'Previously it had mostly been a pleasure to ring ordinary people who had popped up in the news... But now they were detecting a new guarded note', and their requests were being turned down more often. Greater nastiness was particularly apparent in human interest stories, where the main focus often became the grotesque, especially in the 1980s, with the rise of 'Yuck journalism'. The _Sun_ did a story on a horrifically burnt Falklands veteran, and instead of focusing, as the man had hoped, on his recovery from 26 skin grafts and suicidal depression, it concentrated on how he was 'hideously scarred', including misleading photos. The rest of Fleet Street was disgusted. Similarly, the mother of a 5-year-old disabled boy, who had suffered from both septicemia and meningitis, with the result that he had no sense of fear or danger, and was often involved in accidents, had arranged with a freelance journalist to publicise these problems. This was transformed by the _Sun_ into a story headlined 'The Worst Brat in Britain'. One of the paper's tricks was to get the boy to sing a song he had learnt at school, which involved pulling faces. It published one of these silly faces but made no mention of the song. The mother described it as a 'hatchet job on my little boy's already sad little life'. In the frenetic competition between the tabloids in crime news, there was a constant pushing of boundaries. 'I've got a story about someone who's confessed to 17 rapes,' said MacKenzie. 'If it's a record, I want it on the front page.' On another day, he shouted, 'A story of a blind rapist. We've got our front page.' Eventually MacKenzie broke an agreed taboo, and published a photo of a rape victim on the front page. The dynamic was similar in celebrity news.
The _Sun_ had always been preoccupied with television, and their stories had not typically been subjected to stringent tests of accuracy. As Peter Chippindale and Chris Horrie note, 'The effect of the _Sun_'s coverage was to get two soap operas running at once – the fictional on-screen lives of the characters, and the _Sun_'s parallel stories of their real lives.' This 'endemic confusion of fiction and reality' did not worry MacKenzie: 'It's all crap anyway.' The coverage gradually took on a nastier edge. Sexual affairs became fodder even if no conceivable public interest was involved. 'What I want to know is who's fucking who,' Murdoch editor Wendy Henry would say. The end result was the obliteration of any notion of privacy. Leveson judged: 'There is ample evidence that parts of the press have taken the view that actors, footballers, writers, pop stars – anyone in whom the public might take an interest – are fair game, public property with little, if any, entitlement to any sort of private life or respect for dignity, whether or not there is a true public interest in knowing how they spend their lives. Their families, including their children, are pursued and important personal moments are destroyed.' This absence of boundaries was accompanied by an increasing emphasis on 'trashing [the] reputations' of those covered, and an emotional tone where 'anger and vindictiveness [were] the default settings'.

Taste versus truth

In the early years of Murdoch owning the _New York Post_, its two most famous headlines were 'Headless Body in Topless Bar' and 'No One is Safe from Son of Sam'. There is a crucial difference between these two headlines. The first was true; the second was not. The increase in the risk of any New Yorker being murdered during the Son of Sam episode was all but zero, because there was a pattern to his victims that excluded most people. There are many confused debates about popular, 'tabloid' journalism.
Some centre on issues of taste, preferences for some types of news – such as sport, crime, 'human interest' and celebrity – over the main topics of more upmarket journalism, such as political, international and business news. Some centre on styles of presentation, with tabloid journalism marked by large headlines, more pictures, and shorter stories. But there is no reason for tabloid newspapers to be less accurate than 'quality' papers, or held to a lesser standard of truth. Perhaps the most common issue comes from the search for sensation, the wish to make the story more dramatic than reality. In the early years, Murdoch was unhappy with the _New York Post_ : he felt it was insufficiently dramatic. He brought his own people into senior positions, overlapping their authority with that of the existing editors. In January 1977, murderer Gary Gilmore became the first person to be executed in the US for nine years. There was a peaceful candlelight vigil outside the prison, but the Murdoch editor had a headline with the protesters 'storming' the prison. When the editor, Paul Sann, insisted that the inaccurate headline be dropped, it 'left no doubt in Murdoch's mind of what he already suspected – that the incumbent staff would resist his every move'. It is symptomatic that Murdoch saw it as a test of will and authority, and was seemingly uninterested in the merits of the headline's accuracy. Just as the mismatch between demand and supply can lead to problematic news coverage, so inaccuracies are more likely when there is no market punishment for them: when the errors will be invisible to the audience, or when they confirm audience expectations, or when the sources are unlikely to complain. International news has thus often been more error prone than domestic reporting. Kiernan was witness to the _New York Post_ 's coverage of Israeli invasions of Lebanon in the summer of 1982. 
He claimed: Throughout that period, the paper was without a single reporter on the scene, yet its stories were laced with unattributed 'eyewitness' descriptions of Arab atrocities and Israeli heroics, many of them invented in its New York city newsroom. Meanwhile, the paper's conservative columnists regularly attacked what they took to be the anti-Israeli coverage of the US TV networks. In 1986, after the Chernobyl nuclear disaster in the Soviet Union, it reported, without any basis, that there were 'Mass graves for 15,000 N-Victims'. It never admitted that this was a wild exaggeration. In 2003, the _Sun_ carried a story on its front page headlined 'Swan Bake', accusing asylum seekers of eating the queen's swans. The story proved to be baseless, and the paper later carried a half-hearted retraction – on page 41. At the other end of the social scale, but similarly often lacking a direct public voice, _Sun_ editor MacKenzie told reporters on stories about the royals: 'Don't worry if it's not true – so long as there's not too much of a fuss about it afterwards.' A later _Sun_ editor, David Yelland, had a front page exclusive, headlined 'Queen has rubber duck in her bath... and it wears a crown' with photos of both the queen and a rubber duck. The _Sun_ under MacKenzie made many errors, but he believed in never correcting anything unless he absolutely had to. His attitude was that if something wrong got in the paper it was 'hard fucking luck'. He'd bollock the person who made the mistake, but he never believed in owning up to mistakes in public, saying critics would attack anyway, so why give them ammunition? However, he made two errors that proved very expensive. The first came after the Hillsborough disaster, the worst stadium-related fatal incident in British history. In April 1989, at an FA Cup semi-final between Liverpool and Nottingham Forest, after a crush of spectators against fences, 96 people were killed and over 700 injured.
It was an outdated venue, and crowd control procedures were archaic. To the horror of the reporter who wrote the story, the paper had a bald headline – 'The Truth' – above claims that accused some fans of picking the pockets of victims, and urinating on 'brave cops'. It claimed that some drunken Liverpool fans had 'viciously attacked rescue workers'. Most of the claims were shown to be false. The paper's sales in Liverpool dropped by 40 per cent, and remained lower than elsewhere for more than a decade afterwards. The other financially disastrous story stemmed directly from MacKenzie's 'appalled fascination' with homosexuality. On 25 February 1987, the _Sun_ had a headline story 'Elton in Vice Boys Scandal/Star's lust for bondage'. This was the start of an 18-month saga that ended in complete ignominy for the paper. In recent years, MacKenzie has said that this was the only time in his 13 years as editor he checked a story, and it turned out to be wrong. He told the Leveson Inquiry: 'I never did it again. Basically my view was that if it sounded right it was probably right and therefore we should lob it in.' This is a cute line, but fundamentally misleading. Other _Sun_ journalists recalled that MacKenzie was feeling desperate for a sensational story, and became determined to publish the story, which wrongly claimed that rock star Elton John had paid for sex with underage 'rent boys'. Nearly everyone who saw the story could see its basis was far too slight – the uncorroborated word of a male prostitute who had been paid for the story. The legal department objected, and when their advice was ignored took the very unusual step of putting that advice in writing. But it was still ignored. When Elton John saw the story, he was determined to sue because the story wasn't true. His friends advised otherwise. Mick Jagger, for example, said it was 'not worth fighting because they'll try to rake up so much muck'. 
Elton did sue, and the paper did rake up as much muck as it could. The second day's headline was 'Elton's kinky kinks/Elton's drug capers'. This resulted in a second writ. On the third day the _Sun_ had 'You're a liar Elton', with a strap over 'the story they're all suing over', and its competitor, the _Daily Mirror_ , reported that Elton was in New York on the day the 'rent boy' testified they had been together. The paper was becoming desperate. It started a massive trawl through the singer's past, and pursued him wherever he went. The pop star testified that he was 'mega-depressed' by all the _Sun_ 's attention. The paper entered into an arrangement with a Scottish man, who provided them with a series of affidavits (he was paid £1750 for each) of rent boys saying they had had sex with Elton, until the paper realised it was being ripped off and the affidavits were sheer invention. The _Sun_ 's last error came on 28 September 1987, when it splashed with 'Mystery of Elton's Silent Dogs', a story that claimed Elton had had his vicious Rottweiler dogs silenced by a horrific operation and they were now silent assassins. It was easily refuted – Elton did not have Rottweilers and his dogs were manifestly loud barkers. Elton's lawyer issued writ number 17. Six weeks later, the _Daily Mirror_ tracked down the original 'rent boy', whom the _Sun_ had put into hiding, and he admitted it had all been 'a pack of lies'. A year later, in December 1988, the inevitable finally happened. On the day the court hearing was to begin, the paper published a front page headline: 'Sorry Elton!' It was the first libel apology to lead the paper, and also explicitly admitted that the singer would receive £1 million, double the previous highest settlement. The paper had lost massively in every way. It had been exposed as having published a baseless story, and then compounding this with prolonged bullying and persecution of the person it had wronged. 
The root of the problem was not only sloppiness and arrogance, but the editor's apparent inability to understand that a gay singer might actually be popular with the paper's readers. Murdoch's most famous mistake, however, was a profitable one. His name will forever be linked with contemporary journalism's greatest monument to gullibility – the publication of the 'Hitler diaries'. The German magazine _Stern_ , which had a respectable journalistic reputation, had paid $3.1 million for material allegedly written by the Führer. Even though there had never been any hint that Hitler kept a diary, the magazine believed the volumes were authentic, and started an auction for English language rights. Several organisations were interested, but they had to take the diaries on trust, as _Stern_ refused access to the originals. Murdoch became involved in an atmosphere marked by competitiveness and urgency, and won the bidding contest. Some journalists at the _Sunday Times_ were very worried, because the paper had been taken in by fake Mussolini diaries in 1966. Investigative reporter Phillip Knightley, remembering that incident, wrote a memo which concluded 'that secrecy and speed work for the con man'. It was a warning that disappeared amid the mounting excitement of what the paper thought would be a world scoop. On the News International Board was one of Britain's leading historians on World War II, Hugh Trevor-Roper, Lord Dacre. He read some excerpts and initially thought they were genuine. On the Saturday evening when the presses were about to roll with the first excerpts, the journalists involved were having a celebratory drink in editor Frank Giles's office, feeling satisfied and excited at the splash they were about to make, when Giles spoke to Trevor-Roper, who told the horrified editor that he now had substantial doubts about the material's authenticity. Giles phoned Murdoch, who in the most famous moment in the saga, ordered, 'Fuck Dacre. Publish!' 
The instruction has been cited to show Murdoch's disregard for truth, but at that moment he had almost no choice. The real faults had come earlier. _Stern_ itself was the victim of one of its journalists, a con man who was in partnership with a forger, but the secrecy and limits it imposed on the bidders made proper testing impossible. Harold Evans, by then the former _Times_ editor, put the blame squarely at the top: 'Once Murdoch himself became involved in the excitement of negotiation and of sensational scoops, all serious journalistic standards were swept aside.' However, the unhealthy hierarchy within News International, which produced a willingness among key participants to deflect responsibility upwards, was equally important. When journalists expressed suspicion, editor Giles reassured them that 'the proprietor has no doubts'. This lack of responsibility, or perhaps reluctance to bring unwelcome news to the proprietor, was also at the heart of the very late warning from Trevor-Roper. He had told the _Times_ editor, Charles Douglas-Home, of his doubts on the Saturday morning, but – amazingly – Douglas-Home told no one else, and allowed his _Sunday Times_ colleagues to continue on their road to professional disaster. After publication, _Stern_ at last released copies for examination, and the German Federal Archive declared that all seven of the volumes they inspected were forgeries: 'Chemical analysis had shown that the paper, the binding, the glue and the thread were all of postwar manufacture.' The journalists felt humiliated. Murdoch was sanguine. Evans noted, 'When he was told that the diaries were fake, he reassured the worried editorial men at Times Newspapers who feared for their credibility: "After all," Murdoch said, "we are in the entertainment business."' The Hitler diaries episode increased the circulation of the paper by 60,000, and the revelation that they were forgeries of course meant Murdoch did not have to pay.
He did not have the intense feeling of shame the journalists did. In Bruce Page's words, 'a bet had simply gone wrong'. One trivial falsehood that appears in every Murdoch newspaper is an astrology column. When the _Australian_ began he insisted that there be one, and syndicated it from the UK; the different constellations visible in the northern and southern hemispheres apparently made no difference to the forecasts. When he took over _TV Guide_ magazine it hired an astrologer who was given the job of charting the likely fortunes of new shows. At one stage, the _Sun_ 's astrologer was found to have been recycling predictions, and was sacked by the editor, MacKenzie, who was said to have begun the letter of dismissal, 'As you will no doubt have foreseen...' Murdoch editors take their astrology columns seriously. One of the few times in a research interview where I had trouble keeping a straight face was with Peter Wylie, Murdoch's very able and energetic editor of the Sydney _Daily Mirror_ in the early 1980s. Wylie was totally preoccupied with beating the _Sun_ , and he was going through a very long catalogue of ways in which the _Mirror_ was superior, when he said, 'We have better stars than the _Sun_.' Puzzled, I asked what he meant: sports stars, TV stars? 'Stars stars,' he replied. 'The _Sun_ has one astrology column and we have three!' No doubt Murdoch was just trying to add to reader appeal, but some have claimed that he was himself superstitious and, like many gamblers, looked for omens. In the 1970s, the London _Sun_ editor, Larry Lamb, used to 'lean on the paper's astrologer to doctor the Pisces entry to assure [Murdoch] that he would have a good day, and urge him to be full of good will'. Rather more seriously, Murdoch publications have not had a good record in covering scientific disputes. At the moment there is much contention over their coverage of climate change, but there were also problems with earlier disputes.
Murdoch has been a long-term friend of the tobacco industry. He served on the board of Philip Morris, and leading members of that company have served on the News Corp board. An internal Philip Morris document named Murdoch as a media proprietor sympathetic to their position, said that 'Murdoch's papers rarely publish anti-smoking articles these days', described them as 'our natural allies', and planned to build 'similar relationships to those we now have with Murdoch's News Limited with other newspaper proprietors'. When New York City banned smoking in bars in 2001, the _Post_ fulminated against 'Nicotine Nazis' and claimed that mayor Michael Bloomberg was an elitist zealot who would like to 'ban smoldering incense at St Paul's Cathedral'. On the issue of HIV/AIDS, Murdoch's two most famous tabloids made diametrically opposed errors. In 1989, the London _Sun_ had a headline story 'Straight Sex cannot give you AIDS – Official'. This was accompanied by an editorial that said 'forget the idea that ordinary heterosexual people can contract AIDS. They can't... Anything else is just homosexual propaganda.' It quoted _Sun_ doctor Vernon Coleman as claiming that the AIDS scare was 'the biggest hoax of the century'. In 1985, it had publicised someone it said was a 'psychologist' saying that homosexuals should be exterminated to stop the spread of AIDS. Meanwhile, _New York Post_ editor Steve Dunleavy briefed a reporter to do a story that AIDS could be spread by kissing. When the reporter protested that it was not proven, Dunleavy replied: 'Let's not be too technical, mate – it's a good yarn.' But poor coverage was not confined to Murdoch's tabloids. In the early 1990s, Andrew Neil's _Sunday Times_ ran what Bruce Page described as a 'crackpot assault' on the 'AIDS establishment'. The paper gave publicity to dissident Berkeley academic Peter Duesberg, who argued that HIV was not associated with AIDS, and that AIDS, in the sense of a pandemic, did not exist.
Under a headline 'The emperor's clothes', came the claim that 'AIDS was sustaining a vested interest of great power'. In an October 1993 article titled 'the plague that never was' it gave publicity to a French couple in Africa who used to believe in AIDS, but had concluded that 'it is something that has been invented'. When the paper's coverage was criticised, for example in an editorial in _Nature_ magazine, its response was to sound themes that would become even more familiar in the climate change debate. It said it was challenging 'orthodoxy' and standing up to the medical and scientific 'establishments', and accused critics of representing 'vested interests' and of 'censorship'.

Friends and enemies

When he was a rising Labor MP in 1998, Mark Latham had lunch with three Sydney _Daily Telegraph_ senior staff members: Col Allan, Malcolm Farr and Piers Akerman. According to Latham's diary:

The lunch conversation is a long way from policy debate. Running the _Tele_ is about good food, good wine and good hatchet jobs. These blokes have scores of public figures they hate, and the purpose of the paper is to do them in.

Later, when he became Labor leader, Latham became one of the public figures the paper was keen to 'do in'. On Saturday, 19 June 2004 the paper's front page headline was 'The Dismissal: how Mark Latham's temper ended a proud 115 year cricketing tradition'. The story referred to an incident 25 years earlier, when Latham was 18 and playing cricket for Sydney University. After an umpire gave him out LBW, Latham had given the umpire 'the bird', and been suspended by the club for one week. According to the _Telegraph_ , this was the first ever suspension in the club's 115-year history. Latham commented:

[I]f you were standing in the bar of the uni pavilion in 1979 and said to someone, 'you know, Latho's blue with the umpire today, that will be front page news in 25 years' time', they would have locked you up in the madhouse.
This dividing of the world into friends and enemies seems to be widespread in News Corp publications. A former editor of the _New York Post_ said that Paul Newman, a popular movie star who often expressed liberal political views, was their Public Enemy Number One. 'He topped our permanent, ineradicable hate list.' The _Post_ simply banned Newman from its pages: 'The exception was bad news. Otherwise no Newman. We banned him from the TV listings.' British Labour MP Tom Watson, who had supported Gordon Brown and publicly said Tony Blair should set a date for retiring, was told at the Labour Party conference by a _Sun_ political correspondent, 'My editor [Rebekah Wade (now Brooks)] will pursue you for the rest of your life. She will never forgive you for what you did to her Tony.' Sometimes the original reason for such enmity was commercial. One of News Limited's worst moves in Australia was its attempt to circumvent the existing rugby league competition and set up a rival SuperLeague. Eventually the two competing leagues came together, but in making it one competition again, some clubs were omitted, most notably the South Sydney Rabbitohs. Popular Australian TV personality Andrew Denton was a passionate Rabbitohs supporter and thus supported the fight against News Limited's exclusion of the club from the competition. On ABC TV Denton said, 'I wish I could take Lachlan Murdoch [and]... Ken Cowley by their smug little jowls... [and explain to them that] tradition in sport is a very, very powerful thing.' Bruce Guthrie discovered this after he ran a feature on Denton in the _Australian_ magazine, which he was editing. An assistant whispered to him, 'You know that Denton is _persona non grata_ around here.' Three weeks later, according to Guthrie, the _Australian_ 's editor, Chris Mitchell, reported that Lachlan Murdoch had just rung from New York, 'very pissed off' that Denton had received favourable publicity. 
Guthrie also discovered that some individuals were protected at News. Many years earlier, when he was Beecher's deputy editor at the Melbourne _Herald_ , the paper's gossip columnist had gently chided _New Idea_ boss and Murdoch executive Dulcie Boling for her icy demeanour. He received a faxed rebuke from Rupert in New York within a matter of hours: 'If you think this sort of garbage sells newspapers, you are sadly mistaken.' Similarly, when, in a commercial case involving anti-competitive practices, Justice Sackville criticised News Limited solicitor Ian Philip, Australian News Ltd CEO John Hartigan rang Guthrie, seeking to minimise the way it would be reported. Individuals can move from one category to the other. Possibly one of the reasons for News Limited firing Guthrie from the editorship of the _Herald-Sun_ , despite the fact that by almost all measures he was succeeding at the paper, was that he had run a story about Victorian Police Commissioner Christine Nixon taking a free flight with Qantas. Nixon was a good friend of Murdoch's sister and News Limited Board member Janet Calvert-Jones. Guthrie was told the story had gone too far; soon after that the company tried to dismiss him. After Guthrie made these concerns public – when he brought his case for unfair dismissal – the paper started attacking Nixon much more strongly, particularly over her handling of the tragic Victorian bushfires in early 2009. She continued to be a target during the Royal Commission inquiring into official responses to those massive fires. In her memoirs, Nixon criticised the paper's coverage of her; around the time the book was to be published the paper again carried several negative stories about her.
Amusingly, the publisher, Melbourne University Press's Louise Adler, revealed that she had her 'own insight into the tactics of tabloid journalism': someone at a _Herald-Sun_ news conference inadvertently left his phone on after speaking to her, and she heard a discussion about putting MUP higher up in the story. The resulting story on page 4 was headlined 'Uni in Nixon book row'. It is only News Corp enemies, not its friends, whose past lives are pursued for damaging material. In 2011 the independent Tasmanian MP Andrew Wilkie was pursuing an issue close to his heart: legislation to limit the damaging impact of poker machines on problem gamblers and their families. According to _Crikey_ journalist Stephen Mayne, the pokies industry had 'just signed contracts to spend millions of dollars on paid advertising demonizing pokies reform'. During the controversy, the _Herald-Sun_ splashed with a story about Wilkie's behaviour during officer training at Duntroon 28 years earlier. At least this was a story about Wilkie's own actions. Relatives and associates of unfavoured leaders can also find themselves in a negative news spotlight. As discussed in Chapter 6, the _New York Post_ ran a banner and front page headline claiming that the long dead father of 1984 Democratic vice-presidential candidate Geraldine Ferraro had 40 years earlier been arrested for illegal gambling, although the charge was not proceeded with and no conviction was recorded. The search through people's closets for skeletons is not always successful, though. When left-wing politician Ken Livingstone, 'Red Ken', became leader of the Greater London Council in 1981, _Sun_ editor Kelvin MacKenzie sent journalists looking for material to discredit him, but all they came back with was that he kept newts. 'All you can find is fucking newts!' MacKenzie raged at them.
When the paper was pursuing another of its targets, the left-wing homosexual Labour candidate Peter Tatchell, again reporters came back empty-handed, only to be admonished by one of the editorial staff, 'When will you lot get it through your heads that Kelvin's not interested in whether things are true or not. What you've got to do is give him what he wants.' In 1998, in another story about relatives of enemies, the _New York Post_ wrote a front-page story about Chelsea Clinton's visit to a university counsellor after a failed romance. After News International turned against Conservative Prime Minister John Major, the love life of his son was deemed newsworthy. Once, according to Major's testimony to the Leveson Inquiry, the _News of the World_ , trying to get information on his son's relationship with his girlfriend, misrepresented themselves as a hospital needing to carry out an emergency operation, but first having to find out if she was pregnant. Then a motorcyclist 'had been instructed to follow my son "day and night" in the hope of providing a story'. Major's son became concerned that the person following him may have been a terrorist, and pulled into a police station, where the pursuer had to confess to the police that he was from the _News of the World_. In contrast to the invasion of the privacy of unfavoured leaders' children, even the principals of the favoured side may be protected. The _News of the World_ had a story in the lead-up to the 1997 election about shadow foreign minister Robin Cook having an affair, but as Murdoch was then on Labour's side, it was not used. When Murdoch met Reagan in person, he 'was surprised by the president's age and frailty'. Murdoch attended a lunch, with several other guests, where 'Reagan actually fell asleep during the meal.' Murdoch described the experience as 'awful', but clearly it was not something that Murdoch readers needed to know. 
Murdoch publications often seem to enjoy conflict for its own sake, keen not just to attack others, but also to make themselves the centre of attention. For Kiernan:

a favourite tactic of Murdochian journalism is to attack rival newspapers in print for the very sins – factual distortion and invention – that Murdoch's papers are regularly accused of committing. The tactic is partly a circulation ploy based on the theory that there is nothing the reading public enjoys more than a nasty word-brawl between competing newspapers. But it has another purpose, too, which is that 'the best defence is a good offense'.

The combative nature of Murdoch's publications in Australia is clearest in the responses of the _Australian_ and its editor, Chris Mitchell, to criticism. Journalist Jonathan Holmes commented on 'the _Australian_ 's habit of launching vitriolic personal attacks, which can sometimes last years, against anyone bold enough to criticize it'. During one conflict, Mitchell threatened to 'use every journalistic and legal measure available' against Victoria's Office of Police Integrity because of a clash over the paper's coverage of a police raid. In another, the paper editorialised, 'it is now clear that senior members of the media... have embarked on a concerted campaign to delegitimize tough reporting and this newspaper'. Journalist Elisabeth Wynhausen, who was sacked by Mitchell in 2009, thought:

He was a tireless strategist whose best and worst instincts were filtered through the same tendency to turn almost any subject into an excuse for an argument with a bunch of imagined enemies. He treated the paper like the spoils of war, routinely using its pages to campaign against people who had ever dared to take him on.

Whether or not this is an accurate characterisation of Mitchell, it is almost a job description for Murdoch editors.

Rewards and punishments

The key aspect in defining a workplace culture is what gets rewarded and what gets punished.
Murdoch discovered the perils of placing the moral over the material early in his career, and it led to his first firing of an editor: Rohan Rivett, his friend and mentor, who had put the Adelaide _News_ back on a sound basis both journalistically and commercially. Rivett had started to question the way the police had secured a murder conviction against an Indigenous man, Max Stuart. This grew into a campaign that pitted the paper against not only the police, but also the South Australian legal establishment and the government of the long-serving South Australian premier, Thomas Playford. It led to a Royal Commission, which concluded that justice had been done, and to a very unusual prosecution of the newspaper – for criminal libel – which could have resulted in Murdoch and Rivett going to jail. The decisions which had led to the criminal libel case had been Murdoch's, made when Rivett was on holiday. It was a very anxious period for both of them. Soon after the case was over, Murdoch, in a brief letter, terminated Rivett's editorship. Over the years, Murdoch has given several specious reasons for this decision. He has always denied that the Stuart case was behind the sacking, but Shawcross's explanation is probably close to the mark:

Murdoch had had enough of advocacy journalism. He was expanding his empire and was more interested in cash than in confrontation, in profits than in political positions. He wanted editors who were safe rather than scintillating, whom he could rely upon however far away he might be.

Rivett's successor, Ron Boland, stayed as editor for almost two decades, and achieved the type of paper Murdoch was hoping for. On the other hand, the paper was, according to John Lisners, risk averse. Lisners spent some weeks in 1968 putting together a campaigning story about the large number of fatalities discovered among workers inhaling asbestos dust; it included extensive research made available to him from Adelaide University.
However, 'Boland refused to print the story to avoid offending major advertisers.' Although the reputation of Murdoch's newspapers emphasises their outrageousness, especially in monopoly and semi-monopoly situations, they are often very particular about what fights they pick. As we saw above, Paul Sann's editorship of the _New York Post_ became doomed when he made a stand for accuracy in reporting the vigil outside an execution. When Matt Driscoll, a sports reporter, was rebuked by _News of the World_ editor Andy Coulson for having only taken a shorthand note rather than taping an interview, and he replied that he would accept the warning, but didn't feel he had done anything wrong, Coulson responded that 'In my view your actions on this matter merited dismissal.' Eventually Driscoll won a huge payout for unfair dismissal, but before the final verdict, News International fought the case hard, three times appealing the verdict. So taking a shorthand note rather than making a tape, and then not wholeheartedly accepting the editor's reprimand, were thought grounds for dismissal, and the sacked employee's claims were vigorously and expensively contested. In contrast, the paper had run a story on Max Mosley – a former president of the international federation governing Formula 1 and other motor racing events, a millionaire, and the son of Britain's most famous Nazi, Oswald Mosley. Max Mosley was photographed in a sadomasochistic orgy with five prostitutes. The paper reported it under the headline 'F1 boss in sick Nazi orgy with 5 hookers'. Mosley sued for breach of privacy, and the trial exposed the paper's methods. It had paid one of the prostitutes, and offered her more money if she did an interview, which consisted of agreeing to a script already written by a reporter, and which was later changed without her knowledge. It told two others it would name them in print if they did not co-operate.
It used the word 'Nazi' to describe the event, and thus linked it to the Holocaust, because one participant, who was in fact German, spoke some German. The paper 'lost the case, refused to apologise and claimed publicly that it deserved a journalism award for the story'. Though the judge criticised the 'erratic and changeable' testimony of the paper's principal reporter, Neville Thurlbeck, and said his emails to potential interviewees were tantamount to blackmail, no action was taken against him. During the phone hacking scandal, scholar Brian Cathcart observed that even with cases involving adverse legal findings and large financial penalties: there was no introspection... afterwards. The damages were paid, and the books were closed, and they moved on, a state of affairs almost guaranteed to deliver more mistakes and failures. Justice Leveson similarly observed that despite defamation actions and complaints upheld by the Press Complaints Commission, 'the inquiry has been given no evidence of disciplinary action having been taken in response to those breaches of the PCC code'. Guthrie pointed to the lack of consequences for mistakes in Australian News Limited papers. The Sydney _Sunday Telegraph_ and Melbourne _Sunday Herald-Sun_ ran front-page photos purporting to be of a semi-naked Pauline Hanson, the right-wing populist politician. News allegedly had paid $15,000 for the pictures. Not that publication would have been justified if the photos had been genuine, but the papers looked very bad when they turned out to be fakes. 'Despite the immensity of the error,' said Guthrie, 'no one had paid with their job.' Similarly, Guthrie thought that 'Mitchell had done a good job at the _Courier-Mail_ during his seven years there' but had made 'one very serious mis-step'. Australia's best-known historian, Manning Clark, had died in 1991. He had become a hate figure for many on the right. 
A Cold War conspiracist, Peter Kelly, became convinced that Clark had been awarded an Order of Lenin by the Soviet Union. Mitchell assigned journalist Wayne Smith to work with Kelly, and in August 1996 published eight pages of commentary and background to accompany the lead story 'By Order of Lenin'. The story quoted one KGB expert saying that such a secret award indicated that Clark was not only an extremely significant agent of influence, but possibly also a very important agent. The _Courier-Mail_ 's case 'quickly began to collapse'. Apart from the fact that they thought Clark had left-wing views, their only actual piece of evidence was that a couple of people said they had seen Clark wearing the award at a function at the Soviet embassy. Wearing such an award in public would have been bad tradecraft if Clark had been a secret agent. However, it seemed that he might have been wearing a ceremonial ribbon he received when he gave a lecture in the Soviet Union in 1970. The Soviet archives were open by this time, but no archival evidence has ever been produced. The Australian Press Council, after a complaint by 15 eminent public figures, found that the paper was not justified in publishing its key assertion and the conclusions flowing from it. The paper's appeal was dismissed, but it refused to publish a retraction. Mitchell was unrepentant: he called Clark 'a David Irving of the Left', a reference to the famous Nazi apologist. As Guthrie noted, 'the episode didn't derail Mitchell's progress through News Limited. As I would later learn, such mistakes rarely do.' In 2002 Mitchell became editor of the _Australian_ , a position he still occupies. Perhaps there are more likely to be internal consequences if there are external consequences, but even that relationship – between market rewards and punishments and professional performance – is at best erratic. Indeed some observers of the contemporary media think that market rewards run counter to professional standards. 
For former prime minister Sir John Major, 'across Fleet Street, sensational and exclusive stories sold extra copies – straight reporting did not. Accuracy suffered.' For Tony Blair's spin doctor, Alastair Campbell, 'Speed now comes ahead of accuracy, impact comes ahead of fairness, and in parts of the press anything goes.' Some practitioners think along similar lines – a senior producer on the Fox TV network's _A Current Affair_ thought that what made the program 'different was that it never tried to be fair'. Such changes are beyond the control of any individual. Murdoch has been but one of many drivers. Moreover, there is great variation between Murdoch's titles in different markets and countries and periods. However, the contemporary balance between the material and the moral is very different from anything C.P. Scott envisaged.

11 The Republic of Fox

I challenge anybody to show me an example of bias on Fox News Channel.

Rupert Murdoch, 2001

Fox News was Murdoch's last great win against the odds. It has been for him a triumph, a fusion of political mission with commercial success. After a press conference in early 1996 announcing the launch of Fox News, its CEO, Roger Ailes, said to Murdoch, 'They're laughing at us.' Murdoch replied, 'They always laugh in the beginning. That never bothers me.' It was far from clear that there was a market for a further cable news channel. CNN, then 20 years old, had, after a shaky start, become profitable and the well-established leader. It was about to be joined by MSNBC, an all-news channel run by one of the three major networks. As it turned out, Fox News was breaking even by the time of the 2000 election, much earlier than expected. In 2002 it overtook CNN as the highest rating cable news channel, a position it has held for most of the time since.
By 2010, 'Fox News reaped an estimated profit of $816 million – nearly a fifth of Murdoch's global haul', and rivalling that of News Corp's entire film division, including 20th Century Fox. Murdoch laughed last, and longest. Murdoch's triumph is shared by Ailes, the only CEO Fox News has ever had, and it is the crowning achievement of Ailes' long career in politics and television. The success of Fox has made him rich; he earns $23 million a year. According to Michael Wolff, when Murdoch and Ailes decided to start Fox News, Ailes insisted that he should have sole control of the new network, that 'Murdoch, the great meddler, would not interfere.' Fox's commercial success has strengthened Ailes's claims to independence. Ailes's first job after graduating in 1962 was as a 'gofer' on the Mike Douglas TV chat show. He was a 'TV wunderkind', with an 'uncanny feel for stagecraft' and how to give viewer appeal to conversational performances on live television. By the time he was 25 he had become executive producer. He impressed one of the guests, Richard Nixon, with his straight talking about how Nixon needed to embrace TV if he wished to become president. Soon afterwards he joined the Nixon campaign. Tim Dickinson notes:

It was while working for Nixon that Ailes first experimented with blurring the distinction between journalism and politics... To bypass journalists [whom he thought hostile to Republicans] Ailes made Nixon the star of his own traveling roadshow – a series of contrived newslike events that the campaign paid to broadcast in local markets across the country.

Ailes called this the 'man in the arena concept'. Nixon fielded friendly questions in front of live audiences. The format played to Nixon's strengths and avoided his televisual weaknesses, allowing him to feed off the interaction with the audience while limiting the scope for journalists to interrogate him.
Ailes's political mistake was to co-operate with a young journalist, Joe McGinniss, whose book _The Selling of the President_ became a bestseller. Ailes admitted talking to McGinniss; Nixon was angry; and Ailes's work for the White House ended. He then threw himself into a variety of projects, including one which was an uncanny prefiguring of Fox News. In 1972, a wealthy right-wing Republican, Joseph Coors, started a conservative TV news operation. Its marketing slogans included 'traditional network news is slanted to the liberal left [and] not objective', whereas 'we play it straight down the middle' and 'tell both sides'. Its idea of straight down the middle can be seen in a memo to staff that 'Martin Luther King was an avowed communist revolutionary', and so it was not 'necessary for us to cover him... just because the other networks do'. It suffered almost non-stop internal drama – at one stage most of its journalists were purged, and it went through three news directors; Ailes became its fourth. The Coors people trusted Ailes because of 'his affiliation with the Republicans', and because he was 'not a newsman'. The channel lasted only a few years. Ailes's most successful and most infamous campaign as a political consultant was the election of President George H.W. Bush in 1988. Although in the end Bush won comfortably, before the campaign it was felt he faced two major obstacles: majority opinion tended to favour the Democrats on a range of issues, and Bush 'needed to overcome his "wimp" image'. The campaign marked an escalation in professionalism, ruthlessness and negativity that became a reference point for all future campaigns. The Democratic candidate, Michael Dukakis, had been Governor of Massachusetts and was considered a political liberal.
The Republicans had several clever advertising themes – a Do Not Swim in Boston Harbor ad, to attack his environmental credentials; Dukakis looking goofy while travelling in a tank, to attack his defence credentials. The most controversial, however, zeroed in on law and order. During Dukakis's governorship, one prisoner, an African-American called Willie Horton, had escaped while on weekend leave and had raped a white woman and terrorised her husband. Ailes's partner in the campaign, Lee Atwater, said, 'We are going to make it look like Willie Horton is Michael Dukakis's running mate.' A front group was set up and made a graphic ad featuring Horton. Then the official Bush campaign, without mentioning Horton by name, ran an advertisement featuring revolving doors with prisoners, many of them African-American, slowly moving in and out, while a voiceover said Dukakis had vetoed the death penalty and given leave to first-degree murderers not eligible for parole. The ads made law and order a more prominent issue, and the percentage agreeing that Bush was 'tough enough' on crime rose from 23 per cent in July to 61 per cent in October. Atwater said of Ailes, admiringly, that he had two speeds: attack and destroy. However, despite the dramatic success of the 1988 campaign, Ailes had sown the seeds of his own destruction as a political operative. When he got involved in later campaigns, Democrats made Ailes himself the issue, calling him 'Mudslinger in Chief'. Even Republican colleagues expressed their reservations: 'If I were in a knife fight, I would want Roger on my side, but that doesn't mean I have to like a lot of the stuff he is doing.' His next prominent political campaigns, although marked by his characteristic aggression and negativity, failed to deliver victory, and his last, at a special election in 1991, saw his Republican candidate begin with a 45-point lead, and end up losing. In December that year, he announced he was giving up politics.
His next major career step was to become director of CNBC, NBC's business channel. He was very successful there, also starting a new channel, titled America's Talking. His strategy was personality-centred: 'One of the biggest mistakes producers make today is that they think they are living in a totally topic-driven world. But what's "The Oprah Winfrey Show" without Oprah?' However, NBC axed the America's Talking channel to make room for a new news network, created with Microsoft, called MSNBC. Ailes decided he had lost out in an internal power struggle, and rang Rupert Murdoch.

The birth of Fox News

The union of Murdoch and Ailes was the perfect partnership for Fox News. They shared their conservatism, their extensive contacts in American right-wing politics, their disdain for mainstream journalism and their combative competitiveness. Murdoch brought his capital, his financial courage and his sense of business strategy. Ailes brought energy and decisiveness, and deep expertise in both television and propaganda. Murdoch had, for some years, wanted to establish his own news channel, one, he said, that would reflect 'American ideals'. In 1992, Murdoch said, 'I honestly cannot distinguish one [TV news] program from another... It's like every news director in the marketplace graduated from the same dumb journalism class.' Murdoch declared in 1996, 'We think it's about time CNN was challenged, especially as it tends to drift further and further to the left. We think it's time for a truly objective news channel.' He asserted that there was a 'growing disconnect' between those who made the news and those who watched it. A recent poll had found 40 per cent of Americans, but only 5 per cent of journalists, considered themselves conservative. 'In that gap, there is opportunity,' thought Murdoch. According to Ailes, 'Rupert Murdoch and I and, by the way, the vast majority of the American people, believe that most of the news tilts to the left.' 
They came together at a propitious moment. The cable industry had grown and developed, and there was now a demand for more specialised channels. In the 18 months to the end of 1995, 15 new cable channels were launched. According to Nielsen Media Research, the proportion of TV households with cable television grew from 17 per cent in 1977 to 67 per cent in 1996. For the first time in television, there was hardware chasing software, rather than the reverse. Moreover, talk radio had shown the way. The Reagan administration had abolished the Fairness Doctrine, which had required radio broadcasters to attempt to be impartial and balanced. It was probably unworkable in an increasingly complex and pluralistic political, not to say litigious, environment. Its disappearance released talk radio from all inhibitions, and 'was a gift to... stations in search of a new identity and audience'. Soon the phrase 'shock jock' was born. Talk radio became almost entirely Republican-supporting. One study by a left-wing think tank judged that among the radio stations most listened to, 91 per cent of total weekday radio programming was conservative and 9 per cent progressive. The brightest star in this new conservative firmament was Rush Limbaugh. When the Republicans won a majority in the 1994 House of Representatives election, the Republican leaders called Limbaugh 'the majority maker' and named him an honorary member of the freshman class of the 104th Congress. One said that after House leader Newt Gingrich, 'Rush was the single most important person in securing a Republican majority.' Ailes's early recipe for success with the Fox News Channel 'was to take conservative talk radio and move it onto cable television'. He scoured the tabloid TV shows for talent; his model for Fox programming was in many ways closer to the America's Talking network he had pioneered at NBC than to a news network like CNN. 
For 40 years, Ailes had been producing 'compelling, winning television', whether for political clients or for Murdoch. He hired his old mentor on the _Mike Douglas Show_, Chet Collier, as a senior vice-president. Like Ailes, Collier was first and foremost an entertainer, and looked with 'amused detachment' at those he called 'newsies'. His aim was to present the news 'with the most excitement', and to get the best possible people 'because people watch television because of the individuals they see on the screen'. One of the first shows planned was an 'aggressive, opinionated news program' fronted by TV host and commentator Bill O'Reilly. As David Brock and Ari Rabin-Havt note, 'O'Reilly's acerbic style was an unknown quantity. Instead of conducting standard interviews, he was willing to throw red meat at his audience. And it worked.' He became the network's star, and much of its early popularity was based on his appeal. The result, in Wolff's view, was that:

Fox is self-consciously down-market, rude, loud, opinionated. This not only defines a lower-end, secondary news market, but does it in a way that's much cheaper than how the other guy does it... Fox News is original. It has taken the News Corp. formula of the on-the-cheap and the third rate and turned it into a culture-changing, paradigm-altering, often jaw-dropping spectacle. About this, Murdoch is proud.

Ailes coined two snappy and wildly successful slogans for the new channel: 'Fox News. Fair and Balanced' and 'We report. You decide.' They have provoked derision – how can the network which is 'the most biased name in news' call itself fair and balanced? However, they should be seen not as descriptions of content or even of aspirations; they represent a branding strategy the aim of which is to imply that everyone else is unfair and biased. There has always been a conservative constituency that is suspicious of the news media. 
A member of _Time_ magazine's letters department told sociologist Herbert Gans in the 1970s that the pattern of complaints had not changed since the 1950s: 'Even when _Time_ was conservative (under Henry Luce), most of the letters came from conservatives, and, for them, no one can ever be conservative enough.' The size of this constituency and the intensity of its feelings varies over time, but it often includes older people resentful of, or at least anxious about, the pace of change. Calling the news media biased is one way of avoiding troubling complexities and controversies. Moreover, niche audiences were becoming financially viable. News Corp's chief operating officer, Peter Chernin, told a company conference in 1998 that sometimes intense media coverage of major sporting or cultural events makes them even bigger, bringing global audiences. But at the other end of the spectrum, technology has also made niche audiences much more the norm. So what happens to the middle? For Chernin:

The answer is that choice and fragmentation are killing the middle, which lacks the grabbing power of the big event or the custom tailoring of the niche. The general interest magazine – dead. The variety show – dead. The all-purpose department store – dead.

Chernin argued that bland programming was 'death warmed over'. News's programming must 'seize the edge', must 'leap out with brand identity'. This deftly catches the Fox News strategy. The audience is almost exclusively white (just 1.4 per cent are African-Americans). The typical viewer of one of its main programs, that of Sean Hannity, was pro-business (86%), Christian conservative (78%), a Tea Party backer (75%) with no college degree (66%), aged over 50 (65%), who supports the National Rifle Association (73%), doesn't back gay rights (78%) and thinks government does too much (84%). A colleague commented, Hannity's 'got a niche audience and he's programmed to it beautifully. He feeds them exactly what they want to hear.' 
A former Fox correspondent said it was common to hear Fox producers whisper, 'We have to feed the core.' Earlier major innovations in the business of US journalism were largely politically centrist and brought professional advances as well as commercial success. The rise of Associated Press – and then of news agency journalism – in the 19th century brought a news service that allowed newspapers to gain news from outside their local area. This meant they had a variety of clients, with different political outlooks, who wanted agency news to be safe and reliable. The agencies' growth brought an emphasis on strong leads, brevity, 'hard' sourcing (where all claims were attributed to a definite source) and the forms of news presentation that came to be associated with 'objectivity'. The 'penny press' and 'yellow press' that developed in the late 19th century brought a stronger orientation towards the audience, and the cultivation of a new urban working-class audience. While it was sensationalist, often irresponsibly so, it brought new life to reporting, enlarging the scope of public information; in some cases this led to 'muckraking' – socially progressive investigations of corruption. Around the same time, the _New York Times_ and others were developing serious reporting – priding themselves on being independent of any parties, and covering the news without fear or favour. In the 1920s and 1930s _Time_ magazine arrived. While its politics were skewed very much to the right under its owner, Henry Luce, it pioneered interpretive journalism, which eventually also enriched the journalistic mix. Finally, from the early 1960s on, the three US TV networks brought TV news reporting of national affairs, with a strong emphasis on balancing sources and partisan neutrality, to a larger audience. In each case, the market leaders provided exemplars of professional standards, which their competitors respected and emulated. 
Fox News may or may not be an innovation as important as these, but unlike them it has been calculatingly sectarian in its appeal, and retrograde in its professional standards. CNN pioneered the multi-channel environment, allowing people to access news whenever they wanted, not just when networks scheduled it. Fox has seen that a fragmented audience allows market success to follow from a much lower market share – which means that a more opinionated, less centrist news appeal can still be profitable. It is part of a trend in journalism where the ferocity of opinions rather than the strength of evidence, and partisanship rather than impartiality, are rewarded with market success.

The growth of Fox News

Fox News began on 7 October 1996, just under a month before President Bill Clinton's re-election. It kept up a relentless drumbeat against [his] administration. A reporter who joined the network from ABC left in horror after a producer approached him, rubbing her hands together and saying 'let's have something on Whitewater [some real estate deals in Arkansas around which some Republicans were trying to weave a scandal] today'. 'Fox News found its perfect story in President Clinton's affair with a White House intern, Monica Lewinsky.' Talk radio had started to 'go crazy' with the story, but the main TV networks were not initially devoting great coverage to it. Fox decided they would 'grab this pent-up anger' on talk radio and 'put it on television'. It was a story that viewers 'just couldn't take their eyes off'. Ailes mined it relentlessly. He brought Matt Drudge, the conservative blogger who had initially reported the scandal, aboard as a host, and the network floated rumours that there was 'a second intern who was sexually involved with the President'. As Kerwin Swint says, 'Nothing galvanized this core audience group more than Fox's coverage after the September 11 attacks.' 
Fox introduced a ticker across the bottom of the screen, and the other cable news channels quickly followed. It added the US flag to the Fox News logo. 'From then on, the flag became our trademark,' said one executive. Presenters wore US flags on their lapels. More problematically, Geraldo Rivera, a Fox News correspondent, 'armed himself with a pistol and proclaimed that he would be honoured to kill Osama bin Laden'. This ostentatious patriotism was a marketing strategy. The failure of the war in Iraq did not seem to dent the Fox audience's enthusiasm for the network, but as Bush's domestic troubles mounted, ratings slipped. Hurricane Katrina, which hit New Orleans in August 2005, marked a turning point in the Bush presidency. During the following year, Fox's audience decreased by more than 30 per cent. As the Democrat surge continued in the lead-up to 2008, Fox was increasingly out of step with the public mood. Ailes personally got into trouble during the lead-up to the primaries, making a weak joke based on the similarity of Obama and Osama: 'It is true that Barack Obama is on the move. I don't know if it is true that President Bush called [Pakistani President] Musharraf and said "Why can't we catch this guy?"' The other Democrats then withdrew from a planned debate on Fox, a decision which prompted Bill O'Reilly to call them Nazis. As it became clear that Obama was likely to win, Murdoch, ever willing to catch a populist wave and be on the winning side, was tempted to back him. But once Ailes learned that the _New York Post_ was thinking of endorsing Obama he confronted Murdoch and threatened to quit. Murdoch relented. Fox mounted various character attacks on Obama but had no impact on his political momentum. With Obama's victory, the country was in transition, and so was Fox News. But Ailes had the audacity not to be hopeful. 
From the beginning he was planning a strategy of unrelenting opposition: 'By the time Obama defeated McCain, Ailes had hired former Bush aide Karl Rove and Mike Huckabee and went on to assemble a whole lineup of prospective 2012 contenders: Palin, Gingrich, Santorum, and John Bolton.' Apart from his 'candidate-hiring binge', Ailes's most important move was to hire Glenn Beck. Beck was hired to solve the five o'clock problem. As journalist Gabriel Sherman writes:

[His] debut was the day before Obama's inauguration. Within a month, Beck became a phenomenon. He doubled the time slot's viewership, providing a powerful boost that carried into the prime-time hours. He not only drew viewers; he also created controversies across the media that drew attention to Fox.

The hiring of Beck was also testimony to how politically embattled Ailes felt. At their meeting to discuss the deal, Ailes said to him, 'I see this as the Alamo... If I just had somebody who was willing to sit on the other side of the camera until the last shot is fired, we'd be fine.' A week after the November election, two months before Obama assumed control, Sean Hannity said: 'This is really the Obama recession in this sense: the people that have money are looking at this, [saying,] "Look, if he is true to his word, you know what? I'm getting out now."' Days before Obama's inauguration, Rush Limbaugh declared: 'I hope he fails.' On Day 2 of Obama's presidency, Hannity declared: '[H]e is not going to succeed... Socialism has failed.' On Day 3 Fox News contributor Laura Ingraham declared: '[O]ur country is less safe today.' On Day 4 Beck said Obama had 'declared the end to the war on terror'. By Day 11 he had concluded that the country was on a march towards socialism; Hannity said it was the end of capitalism. On Day 17 Bret Baier unsurprisingly observed that 'some are wondering if the honeymoon is already over'. 
It is little wonder that _Media Matters for America_ concluded that by 'early 2009, we noticed a marked increase in politically motivated misinformation coming from Fox News'. But by resisting rather than embracing the dominant political mood, Ailes was soon able to build ratings again. Fox News mobilised against Obama's planned health reform. 'Fox gave the Tea Party the oxygen to prosper,' one admiring conservative said. This period – in the fortunes of both Fox and the Republicans – was crowned by the 2010 Congressional elections. The Republicans won 63 seats in the House of Representatives, the largest gain by any party since 1948, to become the majority party. As _Washington Post_ columnist Dana Milbank noted:

At Rupert Murdoch's cable network, the entity that birthed and nurtured the Tea Party movement, Election Day was the culmination of two years of hard work to bring down Barack Obama – and it was time for an on-air celebration of a job well done.

Milbank also quotes Sarah Palin exulting, 'That's an earthquake. It's a darn big deal.' After the election, Fox News hosted a televised victory party, full of celebrating Republican candidates and party officials. Expectations were high that this momentum would continue into the 2012 election season, but problems came for both Fox and the Republicans. By this time, Fox had deals with all the potential Republican presidential candidates not then in elected office – except Mitt Romney. However, while the candidates had been united against the Democrats in 2010, now they were competing against each other. Increasing tensions between high-profile hirings – notably Sarah Palin and Karl Rove – became more disruptive. Moreover, the star of the first two years of the Obama presidency – Glenn Beck – was becoming an embarrassment. In July 2009, Beck called Obama a racist, with a 'deep-seated hatred for white people'. It was the beginning of a theme that Fox nurtured – it was always on the lookout for racism against whites. 
'You know what this president is doing right now?' asked Beck. 'He is addicting this country to heroin – the heroin that is government slavery.' Beck actively campaigned for the Tea Party, and made himself the centre of attention. In August 2010 he held a 'Restoring Honor' rally in Washington. Brock and Rabin-Havt write that, 'The event was supposed to be part of Beck's "100-year plan" to stop the "ticking time bomb" that progressives had set in motion a century earlier to create a "socialist utopia".' A list of Beck's '10 worst quotes' in 2010 included: God will wash this nation with blood if he has to; Putting the common good first leads to death camps; Women are psychos; We have been sold a lie that the poor in America are suffering; Charles Darwin is the father of the Holocaust. He likened himself to the Israeli Nazi-hunters, and vowed, 'to the day I die, I am going to be a progressive-hunter'. This is clearly a graduation from mere vitriol up into insanity. His show was leaking viewers – almost a million – and the network had had to manage a boycott by more than 300 sponsors. Beck was no longer a plus. In April 2011, Fox and Beck announced that Beck was leaving. This was part of what Ailes called a 'course correction'. Nevertheless right up to (and well into) election night, Fox was expecting Obama's defeat. Indeed its presenters seemed to be going through a grieving process as the results came in – Ed Henry, 'stone-faced from Obama headquarters as it erupts in jubilation', reported, 'The crowd here is near pandemonium now, despite the fact that unemployment is hovering near 8 per cent.' The result was a blow not only to the Republicans, but at least in the short term also to Fox. According to Reed Richardson, Fox's viewership at the time of Obama's second inauguration in 2013 was only half what it had been at his first. Generally, however, it remains comfortably ahead of MSNBC and CNN, with an average night-time viewing figure of just under two million. 
Command and control

At a Fox News party after the network overtook CNN in the ratings, in 2002, a television set was mounted high on the wall – first the MSNBC logo came on and was greeted by boos; then the CNN logo prompted much more booing. Then the third slide appeared – the face of Roger Ailes – and 'the Foxistas went wild' with prolonged loud cheering. According to star presenter Bill O'Reilly, 'Roger Ailes is the general. And the general sets the tone of the army. Our army is very George Patton-esque. We charge. We roll.' For talk radio star Rush Limbaugh:

One man has established a culture for 1700 people who believe in it, who follow it, who execute it. Roger Ailes cannot do everything. Roger Ailes is not on the air... and yet everybody who [is] is a reflection of him.

What his admirers praise, his critics deplore. There has never been a television operation in the English-speaking democracies with a more centralised and hierarchical control of opinion. The process began with a political filter placed on the selection of staff. When Murdoch was asked in the 1990s whether it would be a problem that Fox would have to rely on the liberal journalists from the existing networks, he replied, 'I don't think we employ those sorts of people. Roger Ailes keeps a very close watch on that.' And he did. One of Ailes's first acts was to ask each of the 40 existing journalists at the Fox TV network 'if they were liberal or not'; he was going to try to get rid of the liberals. In 1996, Andrew Kirtzman, a respected New York City cable news reporter, was interviewed for a job with Fox, and they asked what his political affiliation was. When he refused to tell them, 'all employment discussion ended'. In the early years there were occasional problems until the pattern was firmly established. One producer resigned after being ordered to change a story to play down statistics showing a lack of social progress among African-Americans. 
Several former employees complained of management sticking their fingers into the writing and editing of stories. One said, 'I've worked at a lot of news organisations and never found that kind of manipulation.' An early directive was to 'seek out stories that cater to angry, middle-aged white men who listen to talk radio and yell at their television'. Fox News may be unique in that every morning it distributes an executive memo 'addressing what stories will be covered and, often, suggesting how they should be covered'. Former producer Charlie Reina thought this the root of its bias. For many years, these memos were written by John Moody, a senior assistant to Ailes. Many memos have been leaked over the years, and they give a direct insight into the politically loaded approach to story selection and treatment. The memos reveal the approach to reporting the Iraq War. It was, perhaps, best captured by a memo quoted in the film _Outfoxed_ : 'Remember when you're writing about this, it's all good. Don't write about the number of dead... Keep it positive. Emphasise all the good we're doing.' After the exposure of atrocities by members of the US military against Iraqi prisoners at Abu Ghraib, Moody wrote: 'It is important that we keep the Abu Ghraib situation in perspective.' Apart from ensuring proper 'perspective', Moody was keen that viewers not develop misplaced sympathies or worries about violence and suffering: 'It won't be long before some people decry the use of "excessive force". We won't be among that group'; 'Do not fall into the easy trap of mourning the loss of US lives and asking out loud why we are there'; and as the coalition forces prepared to attack Fallujah, 'let's not get lost in breast-beating about the sadness of the loss of life. They had a chance'; 'Whatever happens, it is richly deserved.' In November 2006, Moody wrote: the [congressional] elections and Rumsfeld's resignation were a major event, but not the end of the world. 
The war on terror goes on without interruption... and let's be on the lookout for any statements from the Iraqi insurgents, who must be thrilled at the prospect of a Dem-controlled Congress. The memos were just as directive about how to cover US domestic politics. They showed the orchestrated use of the term 'flip-flopping' to describe John Kerry during the 2004 election, a line that dovetailed neatly with the attacks coming out of the White House. In fact, Fox News was working directly with the Bush administration. Similarly, in coverage of Obama's health reform proposals, Fox journalists were told to use the term 'government option' (favoured by the opponents of reform) rather than 'public option' (used by the administration). Its commentators had no qualms about using the phrase 'death panels', coined by Sarah Palin. The network still practises news management by memo. After the shootings at Sandy Hook Elementary School in Connecticut, in which 20 children and six adults were murdered, Rupert Murdoch took to Twitter and urged strong action by President Obama on gun control. At Fox News, however, producers were instructed that 'this network is not going there'. One Fox News source said, 'We were expressly forbidden from discussing gun control.' If, despite all this, a slip occurred, correction was swift and clear to all. A young producer covering Palin's campaign in 2008 accurately described how McCain's staff were preventing reporters from asking Palin any questions. Ailes immediately barred the producer from making any more on-air appearances. Beyond formal memos, Ailes, like Murdoch, makes his general views known in strong, indeed crude, statements. Howard Kurtz offered two examples he observed:

The talk turns to terrorism. Ailes is angry about an AP report that 29 worshippers were killed by a suicide bomber in Baghdad's largest Sunni mosque during prayers. 'How do we know they were worshipping?' he demands. 
I think the AP is so far over the hill, they've become left wing, anti-war. Gotta watch their copy.

Fox was developing a series on Regulation Nation. Ailes said 'regulations are totally out of control'. Bureaucrats hire PhDs to 'sit in the basement and draw up regulations to try to ruin your life'. As is also apparent from these examples, Ailes has very strong preconceptions which run a long way ahead of supporting evidence. Ken Auletta reported on Ailes working 'himself into a lather' over what he saw as a weak CBS interview by Dan Rather with President Chirac in the lead-up to the Iraq War. Ailes thought the interview was 'anti-American'. He was angered by the questions he thought Rather should have asked but didn't, such as 'How about the seven million Muslims down the street that are going to blow up the Eiffel Tower? Does that bother you?' Insofar as this makes sense at all, it seems to imply that invading Iraq will help protect France against the Muslims who live in that country. Ailes's stereotypical thinking was on display one day in the Fox offices when he observed a dark-skinned man in what he perceived to be Muslim garb. Ailes put the whole of Fox News into lockdown. 'This guy could be bombing me,' he shouted. The suspected terrorist turned out to be a janitor. But the incident demonstrated Ailes's fears about his personal safety (he has a personal security contingent) and his particular paranoia about people who are Muslim. He is quick to leap to conclusions about Democrats as well. In 1994, before Fox News began, Ailes was producing Rush Limbaugh's weekly television program. At the time, there was considerable speculation in right-wing circles following the suicide, in a park, of Vince Foster, who had been suffering from depression. Foster was a Clinton appointee, brought to Washington from Arkansas. Limbaugh claimed that Foster had actually been murdered in an apartment owned by Hillary Clinton and his body then taken to a park. 
Ailes supported this exclusive, saying he didn't have any evidence because 'These people [the Clintons] are very good at hiding or destroying evidence.' So Limbaugh and Ailes, wrongly, indeed baselessly, accused the Clintons of being complicit in a murder. It is remarkable that neither of their careers suffered from these irresponsible false accusations. It clearly presents problems of quality control when someone so prone to leaping to wrong conclusions is the unchallengeable arbiter of 'fair and balanced'.

The politics of polarisation

The Tet offensive, launched by the communist forces on the lunar new year, 30 January 1968, is often cited as the turning point of the Vietnam War, even though that war continued for another seven years. The US had had combat troops there since 1965, and by early 1968 had around half a million troops in the country. Nevertheless the scale and ferocity of the Communist attacks shocked them. Four weeks in, on 27 February, as the fighting continued, Walter Cronkite, the news anchor of CBS, after much anguish, delivered a broadcast which was pessimistic about future victory. When President Lyndon Johnson watched the broadcast, he told his press secretary that if he had lost Walter Cronkite, he had lost Mr Average Citizen. It fed directly into his decision not to seek re-election. Cronkite's statement generated great controversy, but what is most interesting from today's perspective is that it is impossible to imagine any US president making a similar statement about any media figure today. It is impossible to imagine President Obama saying, 'If I've lost Bill O'Reilly [or, more interestingly, Jon Stewart] I've lost Mr Average Citizen.' No one has, or could have, the political authority Cronkite once did. The audience is more fragmented; the public is more polarised. This sort of authority came from 'the era of the captive mass public, from the 1950s through the '70s – when people had access to only a few TV channels'. 
In the 1970s, the audience for the three networks' news programs was 46 million, or 75% of people watching TV at the time. As writer Paul Starr notes, 'For a time, this seemed to be the permanent structure of the news and national politics in the age of electronic media. In retrospect, it was the peaking of the unified national public.' By 2005, their total audience was down to 30 million, or around one-third of TV viewers. By 2013, it had further declined, to 22 million. While some other sources of news – notably cable – have grown, the net impact has been a sharp decline in total news consumption. By 2008, the number of Americans who say they don't get the news from any medium on an average day was 19 per cent, and among 18 to 24-year-olds it was 34 per cent. Attitudes have changed as much as audience sizes. Several polls suggest declining confidence in the news media. For example, the proportion agreeing that stories are often inaccurate has all but doubled, from 34 per cent in 1985 to 66 per cent in 2011. A Pew study found that positive ratings of news organisations' believability declined from 71 per cent in 2002 to 56 per cent in 2012. That series of Pew studies also showed a growing polarisation in judgements. In 2002, 76 per cent of Republicans and 67 per cent of Democrats thought Fox News was believable. By 2012, both figures had declined, but the fall was much sharper among Democrats: 67 per cent of Republicans, but only 37 per cent of Democrats, thought Fox was believable. But the polarisation in judgements has grown for all news media, especially as Republicans have become more distrusting of most other news organisations. So in 2002 there was a 12 per cent gap on CNN (Democrats 84: Republicans 72), but by 2012 that had grown to 36 (Democrats 76: Republicans 40). For the _New York Times_ , the 2004 figure already showed a substantial 20 per cent gap (D 70%: R 50%) but by 2012 it had grown to 28 per cent (D 65%: R 37%). 
So in 2012, almost twice as many Republicans found Fox News believable as found the _New York Times_ believable. The other best US data set on trust in TV news comes from Public Policy Polling, which began doing an annual survey in 2010. The only TV news which more of the public trusted than distrusted was PBS. Apart from this, there were stark partisan differences: 'Democrats trust everything except Fox, and Republicans don't trust anything other than Fox.' For Fox, among Democrats the net rating is –44 (22–66 per cent trust/distrust), while among Republicans it is +55 (70–15). On CNN, for example, the net rating among Democrats is +36 (57–21), but among Republicans it is –49 (17–66), with the other non-Fox networks having broadly similar profiles. The polarisation in the public is matched by attitudes among political activists. 'Watch Fox News! Watch Fox News!' chanted Republican delegates at CNN's floor set during the Republicans' 2004 convention. In contrast, in 2011, as Fox reporters began live feeds of a protest by public sector employees in Wisconsin, the crowd started chanting 'Fox News lies' and 'Tell the truth.' The growing partisan trust division for different news sources, and especially the Republican distrust of the main TV network news services, indicates that they have finally caught up with the views put forward by Ailes and Murdoch over a much longer period. 'I really believed there was no fairness or balance' elsewhere in the media, Ailes said in 2011. He used to refer to CNN as the Clinton News Network, although that was better than CBS, which he called the Communist Broadcasting System. During the Republican primaries, Ailes criticised the 'mainstream media' for calling Romney a weak frontrunner:

'Weak' is a word the mainstream press will give to all Republicans always, as a precursor to killing them off... It saddens me. America used to be able to get straight journalism. 
It is not clear when he thinks America got this 'straight journalism', as in the 1960s he already thought it was anti-Republican. Nevertheless he is insistent that the situation is getting worse: 'The hardest part of my job now is to maintain any kind of journalistic standards, because they're being weakened all over the country by newspapers and magazines.' Sometimes these criticisms of the liberal media are just a game for Ailes: The _New York Times_ used to be the paper with all the news fit to put into the bottom of your dog kennel... Now when the owners leave, the dogs are calling up and canceling their subscriptions... It's a dying asset. There was no humour intended, however, when in a speech at Ohio University he described _New York Times_ reporters as 'a bunch of lying scum'. He also said that 'one thing that qualifies me to run a journalism organization is the fact that I don't have a journalism degree', and advised journalism students to 'change your major'. He also described National Public Radio as Nazis, as noted earlier, but publicly apologised for this after complaints from Jewish organisations. It is common for competitors to disparage each other. What is unusual with Murdoch and Ailes is that they seem to hold everyone else in the industry in contempt, to dislike the whole profession they are involved in: What surprises colleagues is that Ailes appears actually to disdain journalism; Ailes says that he detests what he thinks of as 'elite' journalists with 'a pick up their ass' who treat journalism as 'a from-the-Mount profession'. Some senior executives at Fox express private puzzlement that Ailes seems, in the words of one, to 'hate journalists so much... I've never seen him use the word "journalism" and smile at the same time'. The polarisation in attitudes to the media has been driven particularly by declining trust among Republicans towards the main news services. 
It is paralleled by the increasing polarisation in political rhetoric, again principally driven by the conservative side: There is no 'after the Cold War' for me. So far from having ended, my cold war has increased in intensity, as sector after sector of American life has been ruthlessly corrupted by the liberal ethos... We have, I do believe, reached a turning point in American democracy. Now that the other Cold War is over, the real cold war has begun. This 1993 statement by Irving Kristol, dubbed the godfather of neoconservatism, encapsulates the rhetorical escalation. We saw above the statement by Glenn Beck that he was – on the model of Israeli Nazi-hunters – fearlessly declaring that he would be a progressive-hunter until he dies. On his Fox News program in the 18 months after Obama's inauguration, Beck made 202 mentions of Nazis or Nazism, 147 mentions of Hitler and 193 mentions of 'fascism' or 'fascist', and 'most of these were directed in some form at Obama'. For the most prominent voice in talk radio, Rush Limbaugh, 'Democrats are the enemy.' He has also said, 'I don't know how a real man – I mean, a real man – could even be a liberal, much less vote for one.' In 2004 he said that Democratic candidate Kerry's base voters 'hate God [and] they hate people of religion'. 'There is a culture of death with liberalism,' said Limbaugh in 2007. 'They own that as well as they own defeat in Iraq.' Later came the creation of the Tea Party, which took its name from the Boston Tea Party of 1773, a significant event in the lead-up to the American War of Independence. The Tea Party calls on 'American patriots' to 'take back' their country. Sometimes Tea Party members pump fists and yell 'USA, USA'. Their talk of patriotism rather obscures the fact that their targets are other Americans. When the struggle is viewed as being between patriots and nonpatriots, it is a short step to picturing opponents as enemies. 
So groups such as liberals and progressives, who fall well within the democratic consensus and have always been a central part of American political history, observing democratic processes, are now equated with violent, totalitarian extremists. And democratically elected government members from an opposing political party are equated with foreign colonialists. Politics is always subject to rhetorical inflation, but the way the lines are drawn now is a narrowing of pluralism and a denial of legitimacy to competing groups. Fox News has both crystallised and amplified these political currents. It has been a driver towards more rancorous and polarised political discourse, and towards a journalism less disciplined by evidence, especially complex and politically inconvenient evidence. It is better seen as part of the right-wing noise machine than as a normal news service. For Kathleen Hall Jamieson and Joseph Cappella, it is one of the three parts of a right-wing 'echo chamber': Fox, conservative talk radio, especially Rush Limbaugh, and the intellectual fibre of the op-ed pages of the _Wall St Journal_, which all reinforce each other. One of Fox's tools is spectacular misrepresentation, an ability to conjure a threat or an outrage from seemingly straightforward statements. When Obama said to 'Joe the plumber', in Ohio during the 2008 campaign, that we should 'spread the wealth around', Fox News framed it as Obama advocating socialism. After the election, Washington editor Bill Sammon said it was 'a premise that privately I found rather far-fetched', but at the time internal memos showed that focusing on it was part of the Fox strategy. When President Obama told the Turkish Parliament that 'the United States is not... at war with Islam', Hannity accused him of 'seemingly apologizing for our engagement in the war on terror'.
When the president proposed creating a civilian humanitarian force, Beck said it was 'about building some kind of thugocracy', and that 'this is what Hitler did with the SS'. Democratic congressional leader Nancy Pelosi criticised attempts to disrupt public meetings and said that 'drowning out opposing views [was] simply un-American'. One Fox commentator said she had said that anyone who speaks out is un-American; a second said that Pelosi said that opposing her view is un-American; a third claimed that she was calling hard-working Americans 'Nazis' and 'brownshirts' and 'un-American'. When Obama, again in the 2012 campaign, talked of how business thrived when government helped by providing good infrastructure and a well-educated workforce, he was said to have insulted business owners.

A second tool is its promotion of its own agendas, and especially of the 'culture wars'. Culture wars are appealing to the media because they provide easy copy with few demands on gathering and verifying evidence. From a conservative viewpoint, they help to reframe issues in a simple way. The concept of elite, for example, is now dissociated from the wealthy and attached to those who embrace liberal social values; the issue is thus no longer about economic inequality, but about conflicting values. 'Culture wars' provide fodder to maintain a sense of outrage, which is especially easy when it is the audience's values that appear to be under attack. One annual target is the 'war on Christmas'. In December 2010 Fox reported that an elementary school in Florida had banned 'traditional Christmas colours'. Several programs covered the story, but no one called the school district – the entire story was a lie; all the bluster and outrage had no basis. In December 2012, _The O'Reilly Factor_ devoted more than three times as much airtime to the 'war on Christmas' as it did to actual wars in Iraq, Afghanistan, Syria, Libya and Gaza.
Reframing political controversies in a way that favours one's own side is a common strategy in all democracies. One of the techniques Fox News often uses to avoid debating issues is to label opponents instead. When allegations were raised about mistreatment of prisoners at Guantanamo, Brit Hume, then the Washington editor for Fox, commented: I think that these kinds of problems and accusations and so forth grow out of a community that stretches from the American left through much of Europe to enemies across the world from which terrorism springs, who want the world to believe that America is what's wrong with the world. The same process has been applied to immediate policy debates. For E.J. Dionne of the _Washington Post_, the genius of American conservatives has been to change the terms of political debate: Sensible regulation was cast as a dangerous quest for government control. Modest measures to alleviate poverty became schemes to lock the poor into 'dependency'. Advocates of social insurance were condemned as socialists. Government was said to be under the sway of a distant 'them', even though in a democracy, government is the realm of 'us'. And attempts to achieve a bit more economic equality were pronounced as assaults on liberty. During the first 18 months of the Obama presidency, Beck not only made the hundreds of references to Nazi themes cited above; he also made 802 references to socialism.

Such deployment of each side's rhetorical arsenal – done with varying degrees of honesty – is part of democratic manoeuvring. Less legitimate has been the way Fox and some Republicans have promoted the idea that Obama is in some sense inherently unfit to be president, in a determined attempt to turn the exotic elements of Obama's background into something alien. 'What if [Obama] is so outside our comprehension that only if you understand Kenyan, anti-colonial behavior can you begin to piece it together?' asked prominent Republican Newt Gingrich.
Another Republican, Mike Huckabee, was forced to backpedal after suggesting that Obama grew up in Kenya. In 2008, Fox promoted a series of allegations about Obama – 'he of the terrorist fist bump and uncertain ancestry and socialist leanings'. One was the claim that Obama had 'spent at least four years in a so-called Madrassa, or Muslim seminary, in Indonesia'. Fox ran with the story, and asked viewers their reaction, and, in one of their typical ploys to cover a lack of evidence, said there were questions Obama had to answer. Fox News presenter Steve Doocy baldly asserted that Obama was 'raised as a Muslim'. In contrast, a CNN reporter visited the school Obama had attended in Jakarta, and simply reported that it was not a madrassa. The most persistent false claim was that Obama was not born in the US, and so not eligible to be president. Such a claim involved not only a complex conspiracy, but conspirators with amazing prescience. In 2011, TV personality and momentary pretender to the Republican candidacy Donald Trump revived 'birtherism'. In March and April, Fox devoted 52 segments to the subject; in 44 of these, the false charges went completely unchallenged. In the prevailing atmosphere, allegations of conspiracy were easily made. When unemployment dropped slightly, during the campaign, from 8.1 per cent to 7.8 per cent, Jack Welch and some other conservatives immediately labelled the figures 'unbelievable'. As the 2012 polls turned against Romney, some posited a conspiracy in which pollsters and the media were deliberately oversampling Democrats to inflate numbers for Obama and so discourage Republicans from voting. Some conservative commentators believed that Romney would be ahead in 'unskewed polls' ('Everything – except the polls – points to a Romney landslide,' said Rush Limbaugh), and right up to election day, some on Fox were predicting that landslide. The impact of all this on viewers' knowledge and attitudes is difficult to ascertain. 
The legendary Democratic senator Daniel Patrick Moynihan once said that every man is entitled to his own opinions but not to his own facts. One 2003 study found that while misperceptions about the Iraq War were widely held, they were higher among Fox News viewers than those who got their news from other sources. While 48 per cent of those surveyed thought the US had found clear evidence that Saddam Hussein was working closely with Al Qaeda, 67 per cent of Fox viewers believed this. While 22 per cent overall thought that the US had found Weapons of Mass Destruction in Iraq, 33 per cent of Fox News viewers thought this. Following the 2010 congressional election, the University of Maryland released a study that showed that Fox News viewers were 31 percentage points more likely than non-Fox viewers to agree that 'it is not clear that Obama was born in the United States'; and 30 points more likely to agree that 'most scientists do not agree that climate change is occurring'. A 2012 survey by Fairleigh Dickinson University found that those whose only news consumption is Fox News were less likely to be able to answer knowledge questions about domestic and international politics correctly than those who had no news exposure at all. On domestic affairs, for example, respondents overall averaged 1.6 correct answers out of five questions. Those who had 'no news exposure' answered 1.22 correctly, while those who 'only watched Fox News' answered 1.04 correctly; these differences are statistically significant.

Fox has responded to all such academic findings with its usual combativeness. A Fox executive reacted to the 2010 study by attacking the University of Maryland's rankings on various measures. In 2012, an anonymous Fox spokesperson similarly dismissed the Fairleigh Dickinson University findings by attacking the university's rankings among American universities.
A Pew Project for Excellence in Journalism study in 2005 showed that when covering the Iraq War, 73 per cent of Fox stories included opinion from anchors and reporters; on MSNBC the figure was 29 per cent and on CNN it was just 2 per cent. Ailes dismissed the Pew Foundation as a 'liberal lobbying organization'.

Playing Republican politics

For most of Ailes's tenure, 'the roles of network chief and GOP kingmaker have been in perfect synergy'. Fox ratings have risen and fallen partly in line with Republican fortunes. For the first decade and a bit of its existence it was best seen as just part of the right-wing noise machine, its coverage distorted in a conservative direction. The guest list of the network's flagship program, _Special Report_, had Republicans outnumbering Democrats 8:1. Double standards abounded: the past guilty secrets of opponents, but not allies, were pursued relentlessly; the dubious relatives and associates of opponents, but not allies, were denounced. Heads of agencies in the Bush administration were simply referred to by their title, but under Obama their counterparts were 'czars'. Interviews with President Bush were conducted respectfully, even deferentially. In contrast, when Bill O'Reilly interviewed Obama, he interrupted the president 48 times. Some misreporting was just due to incompetence. For example, in one Fox graphic, the 'real' unemployment rate for 2009 was given as 7.8 per cent, but for 2012 as 14.7 per cent. The 2009 figure is simply the official unemployment rate, and the official rate for that month in 2012 was 8.1 per cent, not 14.7. _Media Matters for America_ regularly features Fox's misleading graphics. The scale of the incompetence is sometimes puzzling, and the errors always seem to be in the same ideological direction. However, according to Brock and Rabin-Havt, by mid-2009 it became clear that Fox was changing: 'No longer was it simply a conservative news network. It had morphed into a political campaign.'
It increasingly stepped outside what had been the accepted limits of news media political activity. It waged a relentless assault on Obama's health care reforms. The most damaging impact came from the idea that health care reform would create 'death panels' with the power to decide who receives treatment and who is left to die. One commentator, David Bromwich, felt that: Looking back, one feels it was an astonishing negligence for the Obama White House to embark on a campaign for national health care without a solid strategy for fighting the tenacious opposition it could expect at the hands of Fox radio and TV. Month by month the jeering hosts ate away Obama's popularity and cast doubt on his plans. The network openly proclaimed its stance. One headline boasted: 'Fox Nation Victory! Congress delays health care rationing bill'. Then it started promoting Tea Party events. In the lead-up to the Iraq War, it had ignored or attacked anti-war rallies, which sometimes numbered in the tens of thousands. Now it 'rewarded [Tea Party] meetings attended by as few as a dozen people with hours of air time and live satellite feeds'. It not only found these events extraordinarily newsworthy, but it started to exhort people to attend, and its own employees participated. It aired 'at least 107 commercials for its coverage of the April 15 Tea Parties'. Fox Business anchor Cody Willard yelled at one rally: 'When are we going to wake up and start fighting the fascism that seems to be permeating this country?' Then the network took an unprecedented step: it started playing a role in fundraising for Republican candidates. It became common for Republican candidates to ask viewers for funds. Sharron Angle, a Republican candidate, asked a million Fox viewers to each send her $25 for her campaign. (Hannity praised one of her advertisements, which claimed that Democratic Senator Harry Reid had 'voted to use taxpayer dollars to pay for Viagra for convicted child molesters and sex offenders'.) 
During an interview, Glenn Beck asked Tea Party candidate Michele Bachmann, 'How can I help you raise money?... We should have a fund raiser for you, Michele.' Brock and Rabin-Havt found that altogether, 'Over the course of the 2010 election cycle, more than 30 Fox News employees endorsed, raised funds for, or campaigned for over 300 Republican candidates and organisations.'

Much of the financing of election campaigns in the US is done through Political Action Committees (PACs) rather than directly to candidates and parties. Thus on a candidate's declaration, the PAC is listed as donor rather than the people who gave money to the PAC. Two regular Fox contributors, Karl Rove and Dick Morris, ran their own PACs, and sometimes they sought to raise money for them on Fox. Both also sometimes discussed electoral races in which their PACs had a large stake, without disclosing that. No other network did this. Fox was increasingly oblivious to the professional constraints and obligations most other news organisations follow.

But Ailes and Murdoch went even further. Not content with reporting events, they sought to shape them. Ailes was unimpressed by the range of likely Republican candidates, and approached Chris Christie, governor of New Jersey, to run for president. However, Christie refused. Ailes and Murdoch then approached General David Petraeus, still commander in Afghanistan and on the way to becoming head of the CIA. According to a tape obtained by investigative reporter Bob Woodward, Fox correspondent K.T. McFarland, before beginning an interview, told Petraeus: 'The big boss is bankrolling it. Roger's going to run it. And the rest of us are going to be your in-house.' (Ailes also had a staff member ask Petraeus if there was anything Fox was doing right or wrong or should do differently.)
According to Woodward's partner in Watergate reporting, Carl Bernstein: Murdoch's goal seems to have been nothing less than using his media empire – notably Fox News – to stealthily recruit, bankroll and support the presidential candidacy of General David Petraeus in the 2012 election. Bernstein cannot understand: the ho-hum response to the story by the American press and the country's political establishment, whether out of fear of Murdoch, Ailes and Fox – or, perhaps, lack of surprise at Murdoch's, Ailes' and Fox's contempt for decent journalistic values or a transparent electoral process. For Simon Maloy of _Media Matters for America_ , 'it's a perverse sort of dynamic in which the president of a news organization is shielded from revelations of unethical behaviour by his long-established record of unethical behaviour'. It is clear that the senior controllers of Fox News seek to shape political outcomes, not just report or analyse them. Although there is rarely authoritative evidence for the political impacts of media, it is plausible that in many ways Fox News has directly assisted the conservative side against progressives – Republicans against Democrats. It has heightened the conservatives' capacity to mobilise outrage and oppose social reforms. It has made it more difficult for governments to tackle health care issues or global warming, for example. It has probably made it easier for Republicans to mobilise their base, thus helping them to win in the mid-term 2010 congressional elections. But there is a need to examine the paradoxical and unintended impacts of media partisanship as well. In some ways Fox News is now more of a problem for the Republicans than for the Democrats. Most of its viewers are already Republicans, so its main impact is inside the Republican Party: it has strengthened the Tea Party and other extreme views, and so made it more difficult for moderate Republicans within the party. 
'There would not have been a Tea Party without Fox,' said Sal Russo, one of its founding leaders. But the Tea Party, and Fox News, which acted as its midwife, have been a decidedly mixed blessing for Republicans. Jamieson and Cappella's 'echo chamber' has made a target of RINOs (Republicans in Name Only) and so helped to change the balance inside the party. While after their successes in the 2010 congressional elections they proclaimed themselves the wave of the future, in 2012 they provided many embarrassments. Perhaps the worst were Todd Akin (with his memorable phrase 'legitimate rape') and Richard Mourdock ('pregnancy resulting from rape is God's will'), who both lost their races. For the Dutch academic Cas Mudde, the lesson to be learned is that 'there is no Tea Party without extreme social conservatism, but there is no GOP national majority with extreme social conservatism'.

Many analysts think that the dominance of conservative voices has made it difficult for the Republicans to be anything but intransigent in Washington politics. Two leading congressional scholars in the US, Thomas Mann and Norman Ornstein, have called the current GOP: an insurgent outlier in American politics... It is ideologically extreme; scornful of compromise... and dismissive of the legitimacy of its political opposition. For some, the Republican Party 'has been infected by a faction that is more of a psychological protest than a practical, governing alternative'. The Fox network played a central role – as did the party's conservative base – in shaping the primaries in 2012: No other network head was so actively sought out by candidates for advice on their campaigns. The network chief [functioned] as a kind of proxy kingmaker within the [Republican] party, frequently meeting with Republican politicians to offer strategic advice. You can't run for the Republican nomination without talking to Roger. Every single candidate has consulted with Roger.
But in the process, David Frum thinks, 'Conservatism has evolved from a political philosophy into a market segment.' In 2008, Sarah Palin was credited with electrifying the Republican convention and the party's base; to others she seemed less qualified for national leadership than any major party candidate in living memory. She boasted that 'everything I need to know, I learned on the basketball court', but what she knew did not include the difference between North and South Korea or the fact that America did not go to war in Iraq because Saddam Hussein had attacked the US on September 11. But, as journalist Roger Cohen comments, after Palin has come a 'deluge of dysfunctional presidential candidates.... Palin is no longer an anomaly... Experience, knowledge, accomplishment – these no longer may matter.'

Ailes turned 'the GOP race into a political X-Factor', with much of the action happening on his network. However, partly because of the lack of strong candidates, it was one of the more bizarre primary seasons of recent times. As Cohen notes, there was a procession of presidential hopefuls. Michele Bachmann and Mike Huckabee disappeared fairly early. Then came Texas Governor Rick Perry, who entered the race, immediately became a front-runner, then collapsed after poor debate performances. He was followed by Herman Cain, who was very prominent on Fox News in the second half of 2011: 'For a while, he was a front-runner. He had a nonsensical tax plan, zero knowledge of foreign affairs, and had never held elective office.' All these early candidates attempted to appeal, via Fox News, to the far right of the party. Two other high-profile figures – Sarah Palin and Donald Trump – flirted with the same constituency, but then declared they would not run.
Candidates who lasted longer, such as Rick Santorum and Newt Gingrich, fought a series of damaging and unedifying primary struggles, which pushed 'Romney to make very right-wing promises on issues like immigration, which would haunt him during the actual campaign.' The influence of Murdoch and Ailes did not finish with Romney's nomination. Murdoch was openly unenthusiastic about Romney, who he thought lacked heart and stomach. On the two occasions when Romney visited the editorial board of the _Journal_, 'Mr Murdoch did not work very hard to conceal his lack of excitement'. Someone at the meetings commented, 'there was zero enthusiasm, no engagement'. Murdoch and the _Wall St Journal_ were, however, very keen on the nomination of Paul Ryan as the vice presidential candidate: he was an 'almost perfect choice' tweeted Murdoch about the deficit hawk, who wanted to radically cut government spending. A _Wall St Journal_ editorial dismissed 'every Beltway bedwetter' who warned that Ryan would be too risky.

How much did all this help Obama to win? After all the suspense, and the predictions of a Romney victory or of a cliffhanger, Obama won by a comfortable and decisive margin. In the popular vote, he beat Romney by 3.2 million votes, 51–48 per cent, and he won the electoral college by 332 to 206 votes. Although it is more common than not for incumbent presidents to win re-election – there are many factors assisting their cause – given the state of the US economy the result was far from inevitable. Obama won with unemployment around 8 per cent, the highest for a president winning re-election since FDR in 1936: Pew Research Center polling found that 46% of Americans say they're worse off since late 2007 and only 31% say they're better off; the rest see no change. By 2009, the US male median wage had dropped 28 per cent in real terms since 1970. Since 2007, median household income has fallen by almost 10 per cent.
Although Obama inherited the financial crisis, the sluggish and uneven recovery since has meant that many have blamed Obama rather than Bush for today's economic problems. Economic management is tricky in the US because of the combination of a long-term structural deficit with the need for immediate economic stimulus. Many blame Obama for failing to reach consensus with the Republicans in Congress to solve the problems surrounding the US deficit and the way it has ballooned in recent years. Obama's soaring rhetoric of 2008 was ground down by this gridlock. Such economic discontents and political problems are hardly a recipe for easy re-election. Possibly, trends in the Republican Party, aided and abetted by Fox News, are part of the explanation for his victory. After the election, there were the usual recriminations on the losing side, with many blaming Romney as a weak candidate. But some post-mortems were unusual. Some prominent conservatives charged that major players had allowed their pursuit of personal wealth and ego to take precedence over political goals. As Eric Boehlert reported: The nasty 'racket' accusation highlights what's happened as Republicans have handed over more and more of their branding and marketing to media personalities whose ultimate barometers of success (ratings and personal income) differ from those who run political parties (getting candidates elected to office). And as Brock and Rabin-Havt put it: Fox gives political exiles and marginal political figures the opportunity to compete without the party machinery. After Huckabee disappeared from the Republican race, for example, he got his own show. As scholar Nicole Hemmer notes: Yet far from being an oddity, Huckabee's twin tracks – candidate and commentator – have become a standard feature of Republican Party politics. These days, a revolving door exists between conservative media and Republican candidates. 
The problem is that the two spheres involve different skills and have different measures of success. Extremism and conflict make for bad politics but great TV. As media analyst Andrew Beaujon notes, Fox News and the talk radio shock jocks across the country win whether or not conservatives are in power; these purveyors of political entertainment thrive under a Democratic president, perhaps even more than under their preferred candidates. David Frum concluded, 'Republicans originally thought that Fox worked for us and now we're discovering we work for Fox.'

A fair and balanced future?

An episode of _The Simpsons_ made fun of Fox News. A Fox-style rolling ticker across the bottom of the screen had a series of headlines: 'Pointless news crawls up 37 per cent... Do Democrats cause cancer?... Rupert Murdoch: Terrific dancer... Study: 92 per cent of Democrats are gay... JFK posthumously joins Republican Party... Oil slicks found to keep seals young, supple.' According to the creator of _The Simpsons_, Matt Groening, Ailes threatened to sue Fox Entertainment, which makes _The Simpsons_. Groening said he thought Murdoch would not be impressed with one arm of Fox suing another. Ailes denied threatening to sue. Being satirised in a leading television program is perhaps an ironic indicator of cultural impact. But one group rumoured not to be amused by Fox was the circle around Murdoch. In what appeared to be a calculated public criticism, in early 2010, Matthew Freud, Murdoch's son-in-law, vehemently criticised Fox News to the _New York Times_: I am by no means alone within the family or the company in being ashamed and sickened by Roger Ailes's horrendous and sustained disregard of the journalistic standards that News Corp, its founder, and every other global media business aspires to. A spokesman for Murdoch replied that his son-in-law had been speaking for himself, and that Murdoch was 'proud of Roger Ailes and Fox News'.
The possibility that Murdoch is unhappy with Ailes and Fox gained some credence from Wolff's biography. Not only is Ailes said to be 'the one person in News Corp whom Murdoch will not cross', but 'it should not be underestimated how much Murdoch does not want himself or News Corp, in his or its legacy, forever yoked to Ailes and Fox News'. Later, in the magazine _GQ_ , Wolff made the startling revelation that one of the reasons he was invited in 2007 to write a biography of Murdoch, with his full cooperation, was to be a 'weapon in the increasing war against Ailes'. He also revealed that in return for all the access to Murdoch he wanted, he had to make 'a devil's bargain not to talk to Ailes'. It is impossible from the outside to know the extent of the 'war' inside News Corp against Ailes. In November 2010, a time when Fox was riding high (and when Murdoch thought that Obama would lose to any strong Republican candidate), Murdoch gave an interview to veteran Australian journalist Max Suich. He didn't hide 'his pleasure at the controversy surrounding Fox News' and it was clear 'how much Murdoch enjoys conflict and competition for its own sake, and the consequent fun of deploying his power and influence to make mischief for competitors and enemies'. Asked if he was worried about attacks on Fox News for bias, Murdoch replied: 'Nooo... people love Fox News.' Murdoch's public statements have never wavered from this stance. Indeed he has expressed frustration that BSkyB's Sky News was 'not more like Fox News... He concluded that the only reason that Sky News was not more like Fox News was that "nobody at Sky listens to me".' Many others would rate Sky more highly than Fox: it is the 'only Newscorp organ to gain a reputation for objectivity: its staff privately attribute their success to the [British] regulatory system's protection'. It covered the Iraq War, for example, in a much more balanced way than Fox News. This brought no gratitude from its proprietor. 
According to a _Guardian_ editorial: 'The billionaire (Murdoch) is reported to consider Sky's output as having a "liberal bias" and being a version of "BBC Lite".'

If there was a war, Ailes won. Murdoch 'yoked' himself to Ailes for four more years, when he extended his contract in October 2012. Nevertheless, the medium-term fortunes of Fox News are far from certain. It is owned by an octogenarian, and run with an iron fist by a septuagenarian who is the only CEO it has ever had. The average age of its viewers is 65 and of its on-air presenters is 57. The era of a steadily expanding pay TV market is over. While 90 per cent of US TV households paid for a subscription service in 2012, the number subscribing has started to decline, and is expected to keep on doing so as internet TV becomes more common.

But Fox News has particular problems, beyond those of the cable industry in general. It has already been engaged in controversies that would have sunk a normal news organisation. Its owner, head of a huge corporation, with a lifetime spent in political controversy, is able to absorb pressures in a way few other business executives could. Its chief executive, similarly with decades of partisan political involvement and strong networks among right-wing politicians, plus the prestige that comes from developing the network from nothing, has accumulated authority as well as a thick skin. It is hard to imagine that their successors will have the political will, or the sangfroid, to carry such a problematic and crisis-prone genre into a successful future.
12 Those who live by scandal

It is a scandal rich in ironies – a scandal sheet brought down by scandal; journalists always quick to accuse others of hypocrisy having their own lies exposed; newspapers which demand transparency of others caught in a cover-up; newspaper executives who routinely condoned the invasion of others' privacy indignant when their own actions are scrutinised; and members of an institution whose democratic rationale is to hold the powerful to account shown to have themselves become an unaccountable power which was detrimental rather than beneficial to democracy. This chapter outlines the public development of the _News of the World_ phone hacking scandal; the next puts it in the context of the organisational culture of News Corp.

The dogs that didn't bark – containment 2007–11

One of the most astonishing aspects of the scandal is how long it took to develop. News International's success in containing it for so long – even though by early 2011 its strategy was coming under ever-increasing strain – is a testament to its power and to the reluctance of key institutions in British society to confront it.

News Corp's involvement in phone tapping first came into the public domain in August 2006, when _News of the World_ Royal reporter Clive Goodman and private investigator Glenn Mulcaire were arrested. The arrests related to two stories in late 2005 which made the trivial revelations that Prince William had hurt his knee and had borrowed some audiovisual equipment. The prince's entourage was convinced the paper had obtained the information illegally, and a subsequent Scotland Yard investigation uncovered the phone tapping. Goodman and Mulcaire pleaded guilty and apologised. In January 2007, both were sentenced to jail. The paper's editor, Andy Coulson, resigned, taking formal responsibility while denying any knowledge of their actions and asserting the company line that it was all the work of a single rogue reporter and no one else knew anything.
In June, Coulson, whose journalistic career had begun in show business reporting, and whose previous main contribution to political journalism had been to ask Tony Blair whether he and wife Cherie were members of the 'mile high' club, went to work for Conservative leader David Cameron as chief spin doctor. News International's 'single rogue reporter' defence should not have survived at all. The court proceedings themselves showed it was not true. Counts 16 to 20 against Mulcaire, to which he pleaded guilty, recorded the names of five other people whose phones he had tapped – and none of them had anything to do with royalty. In the judge's summing up of these five counts he noted that Mulcaire 'had not dealt with Goodman but with others at News International'. In early 2007, Manchester-based solicitor Mark Lewis wrote to the five non-Royal hacking victims named. Only Gordon Taylor, chief executive of the Professional Footballers' Association, wanted to pursue a civil claim for invasion of privacy. During the negotiations it became clear to Lewis just how keen News was to keep the matter out of court. In January 2008, Scotland Yard eventually gave the legal team some of the material they had collected from Mulcaire, and the team immediately realised that it was 'dynamite'. It included what became called the 'For Neville' email: the transcripts of 35 voicemail messages involving Taylor. Lewis raised his demand from what was then an ambit claim of £200,000 to £1 million. In June 2008 News agreed to pay Taylor £700,000 including legal fees, an unprecedented amount for such a claim. Soon after, the corporation also agreed to pay two associates of Taylor sums totalling around £300,000. A key part of what seemed a wildly generous settlement was that it be kept confidential. 
This secrecy held until July 2009, when, after months of work, the _Guardian_ investigative reporter Nick Davies revealed that News International had made these payments to Taylor and his two associates. The story also said that phone hacking was rife at _News of the World_. Within hours, the Assistant Commissioner of the London Metropolitan Police, John Yates, announced that there was no reason for any further inquiry. News International stridently denied the story: 'all of these irresponsible and unsubstantiated allegations against the _News of the World_... and its journalists are false'. Rebekah Brooks, head of News International, said, 'The _Guardian_ coverage has, we believe, substantially and likely deliberately misled the British public.' In August, the new editor of the _News of the World_, Colin Myler, announced that internal inquiries had uncovered no further evidence of phone hacking at the paper. In November, the Press Complaints Commission found no evidence against the Sunday paper and indeed concluded that the _Guardian_ report 'did not quite live up to the dramatic billing'.

For _Guardian_ editor Alan Rusbridger:

the most interesting period in the story... was the 18 month period following the _Guardian_'s original revelation of the Gordon Taylor settlement [in 2009]... it was interesting precisely because nothing happened... fascinating in what it said about Britain, [and] the settlement so many people in public life had made, over two generations or more, with Rupert Murdoch.

Despite the continued denials and lack of public action, the _Guardian_ story had one important consequence. Another of the five named, Max Clifford, got in touch with lawyer Charlotte Harris and mounted his own law suit. In March 2010, he and News International settled. On 9 March the headline story in the _Guardian_ reported 'Max Clifford drops _News of the World_ phone hacking action in £1m deal'.
The settlement was meant to be secret, but Clifford himself went public, charging that the _News of the World_ lawyers had leaked the news. However, as Watson and Hickman note, 'Despite disclosing the payment of yet more hush money, the _Guardian_ 's story was not followed up by any other national newspaper.' In April, the _Guardian_ , based on Freedom of Information material, had another Nick Davies exclusive. So far only a small number of hacking victims had been named in court, and Assistant Commissioner John Yates said there were provable offences against only a handful of people. But Davies' story revealed that Mulcaire's notes, which had been in police custody since 2006, contained 4332 names and 2978 phone numbers. (In February Davies had disclosed in the _Guardian_ that three mobile phone companies had discovered that more than 100 of their customers had had their inboxes accessed by Mulcaire.) News of the settlements and about the scale of the hacking was starting to prompt more litigants to come forward. In spring 2010, another of the five named in the trial, sports agent Sky Andrew, began legal action, and others, including actor Sienna Miller and MP Chris Bryant, were making inquiries about how they were covered in Mulcaire's notes. In February 2010, the _Guardian_ published a Davies story which, although one of the figures could not be named because of a trial in progress, was still astounding. It revealed that Coulson, as editor of _News of the World,_ had immediately rehired a private detective once he had been released from jail, and that that private detective was now on trial for murder. There was zero reaction. Rusbridger said, 'This was quite a moment for me. It did seem to me that there was an almost wilful blindness in British police, press, regulatory and political circles to acknowledge what was becoming increasingly difficult to ignore.' 
Rusbridger arranged for all three party leaders – Brown, Cameron and Clegg – to be briefed about what the _Guardian_ had found. He had also become so desperate at the lack of other news media following up the _Guardian_'s investigative reporting that he contacted Bill Keller, the editor of the _New York Times_. The _Times_ sent three reporters to England, and they spent some months working on the story. On 1 September 2010 they published a series of articles that confirmed the _Guardian_'s claims, and included material from some other _News of the World_ journalists prepared to speak publicly about what had gone on. The _Times_ stories were again met with blanket denials at News, and the issue still had little public traction. As Leveson observed:

The next steps required the persistence of civil litigants who secured significant admissions, and, in particular, Sienna Miller, who pursued the litigation both systematically and thoroughly; as a result, far greater wrongdoing was exposed than had hitherto been uncovered.

On 15 December 2010, her lawyer, Mark Thomson, revealed the name of the journalist who had commissioned Mulcaire's intercepts. In early January 2011, _News of the World_ suspended the paper's news editor, Ian Edmondson. In May, News admitted liability for the entirety of Sienna Miller's claim; in a statement read in open court it admitted invasion of her privacy and a campaign of harassment lasting over 12 months. She had charged that _News of the World_ had published 11 articles about her and her then boyfriend, Jude Law, based on phone hacking.

As the civil cases mounted, attitudes changed inside the police and prosecution authorities. The police began Operation Weeting, under Deputy Assistant Commissioner Sue Akers: 45 officers were assigned to it. In March 2011, the murder charge against Jonathan Rees, the private detective with a criminal record whom Coulson had rehired, collapsed. It was for the 1987 murder of his partner, Daniel Morgan.
The case had gained notoriety because of the laxity of the police investigation and the corruption thus revealed. The end of the trial allowed the _Guardian_ to publish Nick Davies' story: 'Jonathan Rees: private investigator who ran an empire of tabloid corruption'. So by June 2011 News International's defences were increasingly vulnerable, but still no one foresaw the storm to come.

The July 2011 explosion

The _Guardian_'s revelation on 5 July that murder victim Milly Dowler's mobile phone had been tapped created a furore that grew into an unstoppable scandal which has engulfed News International ever since. This action was so grotesquely cynical, so telling about the tabloid's sense of entitlement, and so devoid of any legitimate public purpose, let alone normal human compassion, that it became the focus of sustained outrage. Politicians of all parties and other news media rushed to catch up with public anger.

The following day, 6 July, police made an equally terrible disclosure: Glenn Mulcaire had also targeted the parents of two small girls horrifically murdered in Soham in 2002. Police had known this since February, when it was revealed to them by Charlotte Harris, one of the main solicitors in the civil cases. Harris had gone to police headquarters to inspect the transcripts regarding two of her clients, footballer Lee Chapman and his wife, actor Leslie Ash, and noticed that she happened to have the same name as the father of one of the murdered girls. Harris saw Mulcaire's handwritten note of 'Soham', made the connection and alerted Akers. The police announcement heightened the impact of the previous day's disclosures.
A stream of sponsors announced that they were withdrawing their advertising from _News of the World._ Nevertheless it was a shock when on Thursday (7 July), James Murdoch emailed staff to announce that the following Sunday's edition would be the last for the 168-year-old paper, which was still profitable, and still Britain's biggest-selling title. Rebekah Brooks explained to staff that the paper had become a 'toxic brand' with advertisers. Some staff may have been partly mollified by the heavy hint that the paper would eventually be replaced by the launch of the _Sun on Sunday_ (which happened in March 2012). In the final edition of _News of the World_ , there were no advertisements, and News International announced that all the money from sales was going to charity. A senior executive said he had to beg to find charities that would accept the money. On the Thursday, Andy Coulson was arrested. Opposition Leader Ed Miliband and Prime Minister David Cameron issued strong statements, although Cameron was subjected to awkward questioning about his relationship with News and what type of investigation he would launch. On 13 July, an inquiry was announced. It was to be conducted by Lord Leveson, to have wide-ranging terms of reference, and to have powers to compel witnesses. The broad remit of the inquiry and the all-party support was partly due to the strong campaigning by the interest group HackedOff, which had kept in close touch with the hacking victims, including the Dowler family. The timing of the scandal's eruption was particularly bad for News, because it had just secured in-principle approval from Culture Secretary Jeremy Hunt for the single biggest takeover in its history, extending its ownership of BSkyB from 39 per cent to 100 per cent. During the time allowed for public consultation, the Milly Dowler story was published. The takeover became the main focus of political anger. More than 300,000 people signed a petition opposing it. 
Soon the government was also backing away. Hunt, who had said since February that the scandal was irrelevant to the bid, now reversed his stance. On 13 July Miliband moved a motion in parliament requesting Rupert Murdoch and News International to withdraw their bid for BSkyB, in the public interest. The Liberal Democrats agreed, and then so did the Conservatives – in the face of their own government's previous approval: 'The motion was agreed to without a vote.' News International withdrew the bid. News Corp swung into a public show of remorse, although it started shakily. Rupert Murdoch arrived on Sunday, 10 July, clearly out of touch with the public mood in Britain. When filmed by journalists on his way out to dinner with James Murdoch and Rebekah Brooks, he was asked what his priority was. 'This one,' he replied, pointing to Brooks. Murdoch's commitment was not sufficient to save Brooks, who resigned the following Friday, and was arrested on Sunday, 17 July. Les Hinton, the previous head of News International, now in New York as head of Murdoch's Dow Jones, resigned at the same time. That day, News International ran full-page ads in the national newspapers apologising, and in another act of public contrition Rupert and James went to visit the Dowlers to personally apologise. The following Tuesday, 19 July, they appeared before a parliamentary committee – 'the most humble day of my life,' said Rupert – but the total abjectness of the apologies was matched by the equally total denial of responsibility. On the Sunday before, the _Sunday Times_ had revealed that the head of the London Metropolitan Police Force, Sir Paul Stephenson, had received thousands of dollars' worth of free hospitality at a health spa, and he resigned later that day. The following day, 18 July, the Assistant Commissioner, John Yates, also resigned. Cameron, who had departed on a tour of Africa, cut it short in order to address the growing scandal. 
Eventually he spent 163 minutes in parliament fielding 136 questions from MPs, a virtuoso performance which in some ways signalled the end of this climactic period, marked as it had been by dramatic disclosures, arrests, forced resignations, the closure of a newspaper, and the withdrawal of a bid for BSkyB. This was the moment at which closeness to Murdoch suddenly switched from political asset to political liability.

Sustaining attention

Although the drama and publicity peaks of July inevitably dissipated, the scandal was then sustained by several other sources. The civil cases – which had been so important in breaking down News International's defences – continued to proliferate. News was keen to settle them quickly, with the least damaging publicity, and in January consented to the assessment of aggravated damages. So on a single day, 19 January 2012, it settled 37 cases, with former deputy prime minister John Prescott, for example, receiving £40,000 and Jude Law £130,000. On 9 February another batch of cases was settled. By April 2012, when Rupert Murdoch testified before the Leveson Inquiry, 72 cases had been settled, although more were still pending: the company estimated in May 2012 that it could face up to 520 further claims. These cases did not produce any startling revelations, but the parade of celebrities and others and their anger at News brought a steady stream of headlines, and together underlined yet further the scale of the company's wrongdoing.

In addition, there was a continuing stream of criminal charges, especially against News International employees. Operation Weeting, which had been established in January 2011 to investigate phone hacking, was later supplemented by Operation Elveden, set up to investigate payments to public officials, and Operation Tuleta, set up to investigate allegations of computer hacking.
By late 2012, around 90 people had been arrested in connection with these operations, with the prospect of criminal trials in the future. It is not yet clear whether all these will go to trial, nor whether there will be still others charged, as investigations continue. Another source of news was the House of Commons' Culture, Media and Sport Committee, which had played an important early role. Its peak attention came on 19 July, when it had required Rupert and James to testify. They had at first refused. The headlines that day were hijacked by a 'comedian' who tried to put a plate of shaving cream in Rupert's face but was impressively intercepted by his alert wife, Wendi. Although the Murdochs suffered no great damage from the day, neither were their public appearances a great success, with Rupert often appearing somewhat doddery and out of touch, and James described as 'half Harry Potter; half Hannibal Lecter'. But their defences set up two longer-term problems. A crucial moment came when Tom Watson asked James, 'When you signed off the Taylor payment, did you see or were you made aware of the "For Neville" email, the transcript of the hacked voicemail messages?' James Murdoch answered unequivocally: 'No, I was not aware of that at the time.' Within days, lawyer Tom Crone and former _News of the World_ editor Colin Myler had contradicted James's account. The second problem came with their claim that the law firm Harbottle and Lewis had done an extensive review and failed to find wrongdoing. In mid-August the committee made two more important revelations. It published the letter sent by Clive Goodman from prison appealing against his dismissal, because, it said, the actions which had landed him in prison had been carried out with the full knowledge of senior editors. 
The letter included the claim that:

Tom Crone and the editor [Andy Coulson] promised me on many occasions that I could come back to a job at the newspaper if I did not implicate the paper or any of its staff in my mitigation plea.

The law firm Harbottle and Lewis, after being freed from its obligation of client confidentiality by News, said that it had had only a very limited and focused brief: namely, to determine whether or not Goodman had a valid claim for unfair dismissal. It concluded he did not, but noted that News International had paid him £250,000 – unnecessarily, in its legal view. The law firm found it 'hard to credit' James Murdoch's claim that News International rested on its work as part of its defence, and said it was 'inaccurate and misleading' to suggest that the law firm had had a wider investigatory brief. It also noted that some of the most crucial emails it had obtained in 2007 now appeared to be missing from those submitted to the police by News, and stated again that News's public claims about the law firm's findings had gone far beyond what the firm had actually found.

The parliamentary committee had become less prominent, as the Leveson Inquiry assumed centre stage, but it finished with a climax. In its report on 1 May 2012, the majority concluded that Rupert Murdoch was not a fit person to exercise stewardship of a major international company. This critical conclusion made headlines around the world. News Corp seized on the fact that the main finding against Rupert was not unanimous, but passed along party lines. The Conservatives voted against it, partly, some said, because that conclusion went beyond the committee's terms of reference. But tellingly for Murdoch, their coalition partner, the Liberal Democrats, voted with Labour, to produce a majority against him. The fact that all Labour MPs would vote for such a strong conclusion shows just how far Murdoch's political fortunes had diminished.
There were many damning parts of the report which all parties agreed on. MPs concluded nine to one that it was 'simply astonishing' that Rupert and James did not realise the one-rogue-reporter line was untrue until December 2010. James Murdoch was deemed to have shown wilful ignorance of the extent of phone hacking, and three senior News International employees – Hinton, Myler and Crone – were all found to have misled the committee in 2009, when News International still had hopes that the scandal could be contained. By far the most important source of continuing attention to the scandal, however, was the Leveson Inquiry. On 13 July 2011, as noted, Prime Minister David Cameron commissioned Lord Justice Leveson to conduct an inquiry, in two parts. The first part included making recommendations for a new and more effective regulatory regime that would support the integrity and freedom of the press. The second part was to inquire into the extent of unlawful and improper conduct within News International, and in the institutions it was dealing with, including the police and politicians. As the inquiry noted in its Executive Summary, 'in nearly nine months of oral hearings, 337 witnesses gave evidence in person' and the statements of nearly 300 others were read into evidence: 'It has become the most public and the most concentrated look at the press that this country has seen.' The victims of phone hacking were given a public voice, and a range of politicians and – more unusually – leading figures in the news media were cross-examined under oath. Leveson had to avoid all matters that might impinge on the impending criminal trials, which means that some of the most interesting questions have not yet been publicly pursued, and the second part of his inquiry has been indefinitely delayed. The hearings began in November 2011, and were roughly divided into three modules. 
The first, on the behaviour of the press in relation to the public, had several witnesses who had been victims of distorted or intrusive press reporting. Thus on the first day, the parents of Milly Dowler, and film star Hugh Grant, appeared. Module Two, on the relationships between the press and the police, began in February 2012, and included interviews with many who had been involved in investigating phone taps, as well as a more general probing of the relations between individual officers and tabloid journalists and the expensive hospitality they lavished on each other. In late April several editors and proprietors – whose relevance spanned all modules – appeared, climaxing with James Murdoch on 24 April and Rupert on the following two days. Rupert gave a good performance on the first day, but on the second 'his recollection seemed noticeably absent when discussing his own wrongdoing, but entirely there if someone else was at fault'. Rupert used 'cannot remember' or some similar phrase 30 times, fewer than James (41) or David Cameron (59). Such a simple amnesia scoreboard is misleading, however: James Murdoch and Cameron often could not remember details, while Rupert said he could not remember whole meetings with people or whether he had ever met particular people. Module Three, on the press and politicians, began in early May. Three former prime ministers – Gordon Brown, Tony Blair and John Major – appeared, as did members of the present and former governments, and other important players, such as Blair's chief spin doctor, Alastair Campbell. Rebekah Brooks and Andy Coulson were quizzed about their relationships with politicians. Later, Culture Secretary Jeremy Hunt was questioned about his dealings with News International on the BSkyB bid. This module climaxed with David Cameron appearing for a whole day in mid-June. The last round of public hearings, much less spectacular than the first three, involved proposals for more effective regulation. 
A new stage began when the first part of the Leveson Inquiry's results – four volumes totalling 2000 pages – was published in November 2012. Its key proposal was to replace the Press Complaints Commission with a body that had statutory force, could impose fines as well as demand publication of corrections, and would be independent of both government and publishers. Prime Minister Cameron had earlier promised to follow the recommendations unless they were 'bonkers', and had said the key test would be whether the victims of unethical press coverage, such as the Dowlers and others, were satisfied with the result. But within hours of the report's public release he rejected its key recommendations, incurring the wrath of exactly those people. His response also led immediately to political debate, with the Liberal Democrats and Labour endorsing the recommendations.

Before and after – contrasting responses

For three and a half years until July 2011, the most notable aspect of the responses of most politicians, police and media to the allegations was their passivity, which revealed the strength of the previous relationships. However, once the scandal gained unstoppable momentum, the power of this earlier web of self-interest, patronage and fear collapsed. The contrast was stark. Labour MP Tom Watson, who played such an important role in uncovering the scandal, said that in the summer of 2009:

every single MP I know thought the campaign [to expose phone hacking] was bordering on insane. No one wanted to know. It was simply career suicide to challenge the powerful people that ran News International.

All three major party leaders told Leveson – rather euphemistically – that politicians had become 'too close' to the press. Lord Mandelson put it more honestly: 'We were cowed.' Historian Timothy Garton Ash called it:

a fear that dared not speak its name; a self-deceiving cowardice that cloaked itself in silence, euphemism and excuse.
Inwardly, politicians, spin doctors, PR men, public figures and, it now emerges, even senior police officers, said to themselves: don't take on Murdoch. Never go up against the tabloids. The threat that the tabloids would go after you 'was always there'. But in July 2011, 'startlingly, a wave of openness spread over politics'. Even though Murdoch had turned so strongly against the Labour government, in opposition the party was still keen to woo him back. One of new Opposition Leader Ed Miliband's early appointments was of a former News journalist, Tom Baldwin, as his spin doctor. Baldwin sent an email to Labour MPs instructing them to go easy on the scandal, and in particular not to link it to the impending takeover decision on BSkyB. Before July 2011, Miliband had told confidants that he had no choice but to ignore the scandal, because the alternative would be 'three years of hell' at the hands of the Murdoch press. On 5 July 2011 Miliband and his team had what they dubbed their 'sod it' meeting, and decided that the political calculus had changed decisively. After that, Labour became a consistent voice of criticism and supporter of reform. The Conservatives, and especially Prime Minister David Cameron, were in a much more vulnerable position. Before the scandal erupted in July 2011, they had consistently dismissed the charges as Labour propaganda. In September 2010, for example, London mayor Boris Johnson denounced them as 'codswallop', and a 'politically motivated put up job by the Labour Party', and a staff member had advised the police not to fall for the 'political media hysteria'. There were charges that key Conservatives were too close to News International. Documents tabled to the Leveson Inquiry showed that Cameron's ministers met with News Corp executives more often than with any other media organisation: 107 meetings in just over a year. Some ministers were particularly close. 
Education Secretary Michael Gove's wife worked for the _Times_, as had he previously. Between 2005 and 2009 he had received at least £60,000 a year for articles he wrote for the _Times_, and an unspecified advance from HarperCollins for a biography of 18th century politician Viscount Bolingbroke. The Foreign Secretary, William Hague, in opposition had received £195,000 a year for a column in _News of the World_, and around £300,000 from HarperCollins for biographies of William Pitt the Younger and William Wilberforce.

Cameron himself was particularly vulnerable for three reasons. The first involved charges of misjudgement: in hiring Andy Coulson in 2007 and, after he won the 2010 election, in appointing him as the government's Director of Communications. The 2007 hiring was designed to overcome Cameron's apparent problem with the tabloids in general and Murdoch in particular – at first Cameron seemed too much of a toff and too moderate to appeal to the Murdoch tabloids. Murdoch's papers indeed began to warm to Cameron after he hired Coulson; also, Cameron was increasingly eager to please. In 2009, while the Conservatives were still in opposition, and following the papers' disapproval of Cameron's shadow home secretary Dominic Grieve for being too soft on crime, Grieve was moved to another portfolio. Cameron soon adopted a much tougher stance on social issues, embracing the _Sun_'s rhetoric of 'Broken Britain'. After an increasingly ardent courtship, the Murdoch press strongly supported the Conservatives in the lead-up to the 2010 election.

Cameron's argument was that Coulson deserved a second chance, having taken formal responsibility for, but not having himself been involved in, the Goodman-Mulcaire Royal phone hacking. But other embarrassments kept appearing. One was an unusually large payout, £800,000, to former _News of the World_ sports reporter Matt Driscoll for being bullied while working under Coulson.
Later it was revealed that – at least technically – Coulson had broken House of Commons rules by not declaring that he was still receiving considerable income from News International through his severance settlement. Before appointing Coulson as Director of Communications in 2010, Cameron and other senior figures had been privately warned by the _Guardian_ about Coulson's links with Jonathan Rees, the ex-criminal and private investigator. There had been five police inquiries into the 1987 murder of Rees's partner Daniel Morgan, and eventually this all became public, to the government's embarrassment. Whether out of loyalty or over-confidence, Cameron ignored the warnings. Coulson resigned from his position in January 2011.

Cameron's second problem involved 'his appearance of coziness' with Murdoch's News International executives. His social circle included Rebekah Brooks and her husband, Charlie, and Elisabeth Murdoch and her husband Matthew Freud. Peter Oborne described this 'Chipping Norton set' as 'a group of louche, affluent, power-hungry and amoral Londoners'. As is often the case with scandals, the exposure of backstage behaviour brought its own embarrassments, namely the tabling of texts between Cameron and Brooks. Brooks texted Cameron at the 2009 Conservative Party Conference: 'I am so rooting for you tomorrow not just as a proud friend but because, professionally, we're definitely in this together! Speech of your life! Yes he Cam!' And the world learnt that Cameron used to sign LOL, until Brooks told him it meant Laugh Out Loud rather than Lots of Love.

There was also a humorous sequel. At the height of the scandal in July, Cameron had talked of cleaning out the stables. Later it was disclosed that the Metropolitan Police had lent a retired police horse to Brooks, and Cameron had ridden it, with some linking the 'Horsegate' episode back to Cameron's stables declaration. It all looked very un-prime ministerial.
But the third and most important problem for Cameron was that his government had just approved Murdoch's bid to increase his ownership of the very profitable satellite broadcaster BSkyB from 39 per cent to 100 per cent. This was potentially a major moment in redrawing the British media map. BSkyB had annual revenues of £6.8 billion, not far below the combined revenues of the BBC, ITV, Channel 4 and Channel 5. On 30 June the Cameron Government announced its intention to wave through the takeover, which would become final on 8 July, following a brief public 'consultation period'. Success looked assured. But then came the _Guardian_ story, on 5 July. The BSkyB decision 'was firing the scandal' and provided a tangible target for the public anger.

The bid had always been politically fraught. It was not coincidental that Murdoch launched it immediately after Cameron's election. The first minister to have carriage of the decision, Liberal Democrat Vince Cable, was dramatically removed from that responsibility after it was leaked that he had said to two people he thought were constituents, but who were, in fact, journalists, that he had declared war on Murdoch. The prime minister handed responsibility to Jeremy Hunt, whose earlier public statements had seemed friendly to the bid, to the point where critics had dubbed him the Minister for Murdoch. Hunt had consistently maintained that any criminal charges against Murdoch's _News of the World_ were irrelevant to the BSkyB decision, but the intensity of the scandal swept away such niceties; moreover, it shone a searching new light on the process of approval. It was revealed, for example, that Hunt had had a meeting with James Murdoch, with no public servants present and no minutes taken, and then another with the BSkyB chief executive, despite the public service warning him that media regulation issues were likely to arise as a result of holding such a meeting.
When James Murdoch testified, the Leveson Inquiry published 163 pages of emails which News International had given it on interactions between the company and Hunt's office. An immediate casualty was an unfortunate Hunt staff member, Adam Smith, who was required to resign for allegedly exceeding his authority in dealings with News, although as the News International lobbyist Fred Michel – who was texting Smith an average of five times every working day during the crucial period – stated, 'His [Hunt's] advisers were there to assist and advise Jeremy Hunt and it was my understanding that when they told me something, it was always on behalf of the minister and after having conferred with him.'

Hunt's ministerial role in the decision was meant to be – in the British political lexicon – 'quasi-judicial'. But his actions showed how politically charged the process was. In November 2010 he had sent an urgent memo to the prime minister saying James Murdoch was 'pretty furious' that Cable had referred the bid to Ofcom, the official regulator: Cable had merely followed due process. After the phone hacking scandal blew up in July 2011, according to News International lobbyist Michel, Hunt 'asked me to advise him privately in the coming weeks and guide his and No. 10's positioning'. The emails from Michel show that he was thoroughly informed of every move the government was about to make. He told Brooks in early June that Hunt would approve the takeover late that month, and that Hunt believed that 'phone hacking has nothing to do with media plurality issues'. In December 2010, after the merger cleared a regulatory hurdle in the EU, Hunt texted James Murdoch, 'Congrats on Brussels. Just Ofcom to go!'

The barrister assisting, Robert Jay, put it to Rupert Murdoch at the Leveson Inquiry that Hunt had acted as a cheerleader for the bid, rather than as an impartial arbiter. Rupert denied this, and Hunt maintained that he had acted 'scrupulously fairly'.
Cameron resisted opposition demands that Hunt resign, and after a heated debate the government defeated a House of Commons motion to hold an inquiry into his behaviour. Eventually Cameron moved Hunt out of the firing line, by promoting him to health minister.

Perhaps more surprising than the failure of the politicians was the equally abject failure of the police – 'pathetic', judged lawyer Charlotte Harris; 'meek and mild' rather than a 'fearless, impartial investigator' thought Watson and Hickman; and one MP thought the efforts of senior officer Andy Hayman were more Clouseau than Columbo. John Whittingdale, a Conservative MP, verbalised what many suspected, that: the only reason... that the hacking enquiry was not fully pursued was that it was a story that the police did not wish to uncover. They did not want to spoil their relationships with News International.

Leveson judged that the decision to limit the 2006 prosecution was 'entirely justifiable'. Several reasons supported that decision – to spare the prince(s) the experience of being cross-examined in court, to set up a straightforward case with a very high prospect of success, and because it was necessary to prioritise scarce police resources, especially given London's recent experience of terrorism.

The police failures began immediately, though. They pledged to notify all those whose phones Mulcaire had hacked, but inexplicably failed to do so. The material they had collected from Mulcaire, which was later revealed to be 11,000 pages, sat stored in garbage bags for the next several years. Leveson saw no reason to challenge the integrity of the police, including senior officers, but he noted that they were responsible for 'poor decisions, poorly executed'. The response to the first _Guardian_ article, in July 2009 – on the Gordon Taylor financial settlement – had set the tone.
Assistant Commissioner John Yates took only hours to dismiss the _Guardian_ report, and falsely claimed that the individuals affected had all been notified; as noted, he also did not undertake any review of the Mulcaire evidence. The Leveson Inquiry found that Yates 'not only failed to require a more measured review'; 'he positively refused to allow it to happen'. 'Mr Yates was right to conclude that the _Guardian_ had not revealed anything that would be new to the police, but that was precisely the point' – the police already had the material, but had not examined it. One of those originally involved in 2006, Andy Hayman, had left the force – in two years as Assistant Commissioner he had run up £19,000 on his Scotland Yard credit card – and in response to _Guardian_ reports he wrote an article for the _Times_, claiming that the original investigation had 'left no stone unturned' and that if there had been the 'slightest hint others were involved... they would have been investigated'. So a decision not to pursue because of other priorities had now become an exhaustive investigation. Leveson was properly critical of these 'extraordinary assertions'. He judged that the police response to this first _Guardian_ article set up 'a defensive mindset' which affected all that followed.

Defence quickly turned to counter-aggression. After the _New York Times_, in September 2010, quoted former show business reporter Sean Hoare confirming phone hacking, Yates instructed that Hoare be interviewed under caution, meaning that anything he said could be used in court: 'It looked as if Yates's first moves... had been [designed] to scare off other whistleblowers.' The police's public responses were consistently dismissive. For example, Yates on several occasions said there was no evidence that Mulcaire had tapped Deputy Prime Minister John Prescott's phone, but it turned out he had never looked, and that Prescott's phone had indeed been hacked.
The police were also very slow to respond to requests. MP Chris Bryant, for example, had to wait eight months to find out if he figured in the files, and Charlotte Harris found that 'the police would only provide documents under court order and when you did get the documents they would be redacted in a random way'. Senior police officers also sought to dissuade the _Guardian_ from pursuing the story.

The phone hacking scandal highlighted the close links – mutual hospitality plus mutual patronage – between the London police and News International. The Murdoch tabloids and the police had a shared interest in promoting law and order stories. Of the 45 press officers in Scotland Yard, 10 were former News International journalists. There seemed to be a revolving door of appointments and consultancies, and lucrative post-retirement publishing options. Throughout, senior Scotland Yard figures continued to socialise with senior editorial figures in News International, the organisation whose misdeeds they were meant to be investigating. At this stage, there was still no public knowledge of the extensive bribery by News of police officers.

Prompted by the Crown Prosecution Service, police attitudes changed decisively in late 2010, with the establishment of Operation Weeting. Now the police had a determination born not just of the crimes themselves, but of the way they had allowed themselves to be played for fools. The head of Operation Weeting, Deputy Assistant Commissioner Sue Akers, testified that, 'I think it is everybody's analysis that (public) confidence has been damaged in the Metropolitan Police. If we do not get this right, it will continue to be damaged.'

The third initially passive group was the rest of the media, apart from the _Guardian_. A study by Judith Townend and Daniel Bennett found that from 2006 to the end of 2010 the _Guardian_ published 237 articles on hacking.
The tally for the _Mail_ and _Mail on Sunday_ was just 38, and for the _Mirror_ and _Sunday Mirror_ a mere 11 – that is, fewer than three a year. In January 2011, as public interest in the scandal was mounting, Rupert Murdoch visited London. The following morning the nine national newspapers devoted a total of 12,585 words to Murdoch and the scandal, with the _Guardian_ at the top (4703) and the _Sun_ at the bottom (41).

_Guardian_ editor Alan Rusbridger was perplexed by the media's lack of interest. The Tuesday following the _Guardian_ article in July 2009, Rusbridger and Davies had appeared before the Commons Select Committee on Culture, Media and Sport. Davies flourished hard copies of the 'For Neville' emails. Experienced reporter Andrew Sparrow blogged: 'Wow! I've been covering Commons committees for 15 years and I've never heard such a dramatic opening statement.' But there was barely any coverage in the next day's papers.

In February 2010, that parliamentary inquiry concluded that it was 'inconceivable' that knowledge of the _News of the World_'s phone hacking was limited to Goodman and Mulcaire, and strongly criticised that paper's witnesses for their collective amnesia. There was very brief and limited coverage of the Committee's report, with the main theme being that Coulson had been 'cleared'. None of the major media felt the need to publicise the findings at length. Similarly, when a tribunal found that _News of the World_ sports reporter Matt Driscoll had suffered from bullying by Andy Coulson, and was awarded an 'astonishing' £800,000, only the _Guardian_ found it newsworthy. As Rusbridger notes, 'The nearer Cameron edged to the door of 10 Downing Street, the less appetite there was to run anything negative about Coulson.' The reasons for the press lacking its normal sense of newsworthiness can only be speculated on.
It may have been a general dog-doesn't-eat-dog attitude, or perhaps other tabloids knew their own news-gathering methods had been less than pure. One reason to believe mutual protection may have been a motive had been dramatically illustrated by Operation Motorman, begun in 2003. Here the UK Information Commissioner looked at the files of one private detective, Steve Whittamore, and found that 305 journalists and most national newspapers had used his services in tracing confidential personal information. The targets included eight England footballers and the head of the intelligence agency MI6. His biggest client was the _Daily Mail_, which had paid him £143,150 in connection with 1728 requests. Despite the scale of a single detective's illicit activities, and the disclosure of this information black market, coverage of the Operation Motorman report in the major news media was strangely muted.

Later press responses were equally revealing. In contrast to their usual approach to law and order issues, they lobbied strongly against harsher punishments for data protection offences. Leveson found 'the extent to which Mr Whittamore's services continued to be used by some titles after his conviction' revealing. He also observed that 'none of these revelations led to any newspaper conducting an investigation either into its own practices or into those of other titles'; neither did they lead to any 'in-depth look to examine who had been paid for what and why or to review compliance requirements'.

The cover-up

At News Corp's annual meeting in New York on 15 October 2010, Rupert Murdoch told investors: We have very strict rules. There was an incident more than five years ago. The person who bought a bugged phone conversation was immediately fired and in fact he subsequently went to jail. There have been two parliamentary inquiries, which have found no further evidence of anything at all.
If anything was to come to light, we challenge people to give us evidence, and no one has been able to. Even then many of these claims were clearly wrong; by the 2011 meeting they were indefensible. Instead he told that meeting: 'We could not be taking this more seriously or listening more intently to criticisms.' Then he limited contributions from the investors present to a maximum of one minute, having already made shareholders register for a ticket four days before the meeting – the most unfriendly environment that shareholder activist Stephen Mayne had ever seen at an AGM. The following year repentance was no longer on the agenda. Just before the 2012 meeting, Murdoch tweeted: 'Signs pretty peaceful, but any shareholders with complaints should take profits and sell.' The whole AGM was wound up 'in a slick hour and 21 minutes'.

Through the early years of the scandal, News International denials could not have been more emphatic. There was an unqualified and total dismissal of any and all claims of possible wrongdoing. Harold Evans thought there was 'a pattern to the Murdoch sagas. What is denied most furiously turns out to be irrefutably true.' Just as revealing is the lack of any internal soul-searching, or even curiosity, about what might have occurred. To deny, to cover up was the automatic response. Brian Cathcart observed, 'It is striking that not a single hint has emerged in all these years to suggest that there was internal debate about this choice.'

Public declarations of cooperation were coupled with private obstruction. Justice Leveson observed that 'most responsible corporate entities would be appalled that employees were or could be involved in the commission of crime in order to further their business. Not so at the _News of the World_.'
From the beginning, in 2006, when police sought to execute a warrant to search Clive Goodman's desk and computer in connection with the phone hacking, 'they were confronted and driven off by staff at the newspaper', and their attempt was 'substantially thwarted'. After a 'tense standoff', many officers were prevented from entering the building. No subsequent search was attempted as police believed any relevant evidence would have been destroyed.

In late 2009, a management memo headed 'Opportunity' recommended that 'subject to compliance with legal and regulatory requirements', they should delete emails 'that could be unhelpful in the context of future litigation in which NI [News International] is a defendant'. The barrister representing the phone hacking victims, in his concluding address to the Leveson Inquiry, talked of News's cover-up, including the destruction of millions of emails, saying that these occurred at strategic moments as public revelations threatened, while News publicly professed cooperation.

Rupert Murdoch told the Leveson Inquiry that the new _News of the World_ editor, Colin Myler, brought in to replace Andy Coulson, was appointed to 'find out what the hell was going on', but Myler immediately rebutted this, saying he was given no such brief, and that his role was simply to edit the paper. Similarly, News International had brought in the legal firm Harbottle and Lewis for the strictly circumscribed purpose of answering Clive Goodman's appeal against his dismissal. The lawyers decided in the company's favour on that score, but were subsequently angered by News International's referring to their advice when making much broader claims relating to phone hacking in general. As noted earlier, the firm finally achieved a waiver of client confidentiality, allowing them to make this clear.
As Leveson says about the hiring of Harbottle and Lewis: It is revealing that the [News] concern was to identify material that would cause the company further embarrassment or damage their prospects in an Employment Tribunal rather than ascertain whether the allegations made by Mr Goodman [about phone hacking being widespread and approved at the paper] were true.

Harbottle and Lewis had discovered evidence of News International bribes to police, totalling more than £100,000. This knowledge had first been given to News International management in 2007. When someone asked for a copy of their report again – in March 2011 – the company gave it to Lord Macdonald, previously a Director of Public Prosecutions, now in private practice, who immediately recommended that it should be handed to the police. Finally, three months later, on 20 June, the company gave police the evidence it had first received four years earlier. Perhaps it had hoped the BSkyB approval would be finalised before any action was taken.

A week after Clive Goodman was sentenced to prison, Les Hinton sacked him. Goodman then wrote a letter of appeal which 'in 360 precise words' threatened 'to explode the company's defence: he was claiming phone hacking had been carried out routinely, with management's full knowledge'. In response, News International lifted Goodman's severance pay considerably, to almost £250,000. Goodman's letter did not prevent Hinton from later telling the parliamentary committee that 'there was never any evidence given to me that suggested that the conduct of Clive Goodman spread beyond him'. The strength and totality of News International denials (_News of the World_ managing editor Stuart Kuttner, for example, maintained, 'It happened once at the _News of the World_... The reporter was fired') make strange reading given subsequent revelations, and especially in light of the large sums of money involved.
News International had earned the nickname 'Shoestring International' because of its stinginess, and yet here were very large sums of money flowing out. As former _Sun_ editor David Yelland said in December 2010, it was inconceivable that the editor did not know that Mulcaire was on a contract that paid him over £100,000 a year. However, Rebekah Brooks told the parliamentary committee that the first time she had heard Mulcaire's name was in 2006, after the arrests. Similarly, as Akers testified to Leveson, 'there appears to have been a culture at the _Sun_ of illegal payments, and systems have been created to facilitate such payments while hiding the identity of the official receiving the money'. Yet senior editors want to know, especially on problematic or sensitive stories, how a reporter knows a story is true.

Apart from such large cash flows requiring institutional processes, there was anecdotal evidence of occasions on which editors had referred to such activities. Most famously Rebekah Wade (later Brooks): in March 2003, 'a supremely confident and striking figure' appeared before the House of Commons committee, and looking 'unabashed and unperturbed (declared) "we have paid the police for information in the past"'. Piers Morgan, in his published diary, wrote, 'Rebekah excelled herself by virtually admitting she's been illegally paying police for information. I called her to thank her for dropping the tabloid baton.' Dominic Mohan, now editor of the _Sun_, had once jokingly thanked Vodafone for its lax security.

While it is clear that senior editorial management knew what was happening, it is less clear what James and Rupert Murdoch knew and when. Asserting Rupert's ignorance runs against the Murdoch mythology, which has him in close command of all his operations, a 'detail man' who 'speaks with his top media executives several times a day'.
When the _Guardian_ reported the payout to Gordon Taylor, Rupert, then in the United States, denied it to a media conference: 'If that had happened, I would have known about it.' James said that Rupert had not been advised of the payment. This seems strange, given the importance of the matter. What is doubly strange is that even after Rupert made a public statement that was wrong, no one bothered to enlighten him.

James's knowledge was much more directly in dispute. Crone and Myler were adamant that he had known about the 'For Neville' chain of emails. Both sides agree on the lead-up events: Myler and Crone wanted a meeting with James because they had to settle with Gordon Taylor – as Myler said in his email of 7 June 2008 requesting the meeting, because of 'Taylor's vindictiveness' and because the email chain that Taylor and his solicitors had was 'as bad as we feared'. Crone said, 'We went to see Mr Murdoch and it was explained to him what this document was and what it meant.'

While the primary concern when investigating cover-ups is the honesty and frankness of the responses, how the respondents draw the lines of guilt and innocence is certainly also an issue. The Murdochs may find that this returns to haunt them. Before the parliamentary committee, James, Crone and Myler just claimed discrepant memories. But at the Leveson Inquiry, James and Rupert hardened the line of difference into deliberate deception. James claimed that News International executives had withheld the information about phone hacking from him because they feared he would 'cut out the cancer'. The email chain that Myler sent James on 7 June 2008 showed that evidence of widespread phone hacking was the reason they had to pay Taylor such a large sum. To be credible, James's assertion requires that Crone and Myler knew he wouldn't read what they had sent him. Nor can they be blamed for James failing to ask any questions about why such an unprecedented payout was necessary.
Not surprisingly, Leveson preferred Crone and Myler's evidence in this regard. Rupert also blamed Myler for the internal cover-up about phone hacking and seemed to place primary blame on Crone: someone [who] took charge of a cover-up which we were victim to [sic]... the person I'm thinking of was a friend of the journalists and a drinking pal and a clever lawyer... and he forbade people to go and report to Mrs Brooks or to James. In this version, Crone deliberately misled his superiors. Crone, who worked as News International's lawyer for more than two decades, immediately issued a statement saying that Rupert's claim was a 'shameful lie'.

Myler and Crone were in a difficult position. They had upheld the company line for several years. Watson and Hickman observed that at the parliamentary committee, 'Myler and Crone's tactics soon emerged: confusion, obfuscation and spectacular memory loss.' Both had at various times publicly committed themselves to the single rogue reporter defence, although Crone later recanted, saying he always knew it was wrong, and that it would 'one day probably come back to bite the company'. Myler made several unequivocal statements: for example, he told the parliamentary committee in 2009, 'I have never worked or been associated with a newspaper that has been so forensically examined.'

Another source of alienation, and hence a possible stimulus to defection, is the very different amounts of money people had received via redundancy and severance arrangements. Watson and Hickman report that both former news editor Ian Edmondson and chief reporter Neville Thurlbeck, of 'For Neville' email fame, have said they are suing _News of the World_ for wrongful dismissal. 'There is much I could have said publicly to the detriment of News International but so far have chosen not to,' said Thurlbeck, furious that he had been sacked without a payoff after working there for 21 years.
News International's payment of £10.8 million to Rebekah Brooks has no doubt set the bar very high for the others.

Scale and consequences

One of the most breathtaking aspects of the scandal was its sheer scale. The extensiveness of the surveillance operations is most immediately evident in the use of private detectives. _News of the World_ employed at least four. The best-known was Glenn Mulcaire, who had a £102,000-a-year contract with the paper, supplemented by extra cash payments for particular jobs, and was paid at least £850,000 over eight years. Mulcaire's notes, which were seized by the police in 2006, and then sat unexamined for years, totalled 11,000 pages. He probably tapped the phones of over 1000 people, and caught conversations involving around 5000 people. In Robert Jay's opening statement on 14 November 2011, he said Mulcaire's notes showed 2266 'taskings' by the paper, of which 95 per cent came from four journalists, and that his work for the paper probably continued until 2009.

While Mulcaire worked almost exclusively for News International newspapers, the second private investigator, whose work was first revealed in Operation Motorman, Stephen Whittamore, worked for many titles; the _Daily Mail_ was his largest contract. Most of his work seemed to be penetrating data security – finding telephone numbers, addresses, car registrations and so on – and an unknown amount of it was illegal. The total amount paid by newspapers for these items of information may have been as high as £500,000, which 'gives an indication that the supply of personal information was, for those involved, a lucrative business'.

The third, and perhaps most notorious, private detective used by _News of the World_ was Jonathan Rees, whose service for the paper was interrupted by a seven-year prison sentence for conspiring to pervert the course of justice. Upon his release in 2005, he was immediately re-engaged by the paper, for £150,000 a year.
Neither that conviction nor the fact that he was on several occasions implicated in investigations into the 1987 murder of his partner dimmed the paper's enthusiasm for his services. It is interesting that the paper was paying him 50 per cent more than Mulcaire's retainer, but it is less clear what services he was providing. Watson and Hickman speculate that it may have been that Rees delivered information on criminal investigations using corrupt associates in the police force.

The fourth was Derek Webb and his firm Silent Shadow. While 'Mr Whittamore's _métier_ was to obtain personal data', Mr Webb 'was an expert in surveillance', and some of Mulcaire's data may have been used to assist Webb's surveillance tasks. He revealed on 7 November 2011 that _News of the World_ had ordered him to follow more than 100 high-profile individuals over eight years, running right up to the paper's closure in July. Webb said, 'I don't feel ashamed. I know to a certain extent people's lives have been ruined... but if I wasn't doing it, somebody else would have been.' Angered that the paper hadn't treated him as generously in severance pay as other freelance contributors, Webb threatened to act as a witness against News International in court cases that concerned his activities.

The scale of these activities makes it clear that 'large parts of the press had been engaged in a widespread trade in private and confidential information, apparently with little regard to the public interest', and that newspapers 'felt they had a right to know whatever they wanted to know about whoever crossed their paths, no matter how vulnerable and powerless those people were'. The determination with which they carried out their task is illustrated by the fact that David Beckham had 13 sim cards for his mobile telephones and Mulcaire had penetrated all of them. It was not only celebrities and their associates who were targeted, but victims of crime and others caught up in the news.
It seems that at least five Cabinet ministers in the Labour government had had their phones tapped, while the details of the head of the intelligence service MI6 were in Whittamore's files, and Mulcaire's files included information about several people who had been placed under witness protection. Former Prime Minister Gordon Brown's bank accounts had been illegally accessed. David Sherborne, counsel to the Inquiry for 50 victims of phone hacking, was not exaggerating when he said that over the last nine months, the public had witnessed the unravelling of 'possibly the most outrageous and largest criminal malpractice this country's press has ever known'.

In terms of custodial sentences, the most important criminal charges relate to the payment of bribes to police officers and public servants. As of late 2012, 21 _Sun_ journalists had been arrested in relation to bribery charges, including Coulson and Brooks. Operation Elveden, relating to bribery of officials, had resulted in 52 arrests – 25 of those were journalists.

Before the Leveson Inquiry began, one line of defence by News International did not question the accuracy of the claims, but sought to deflect them as not deserving the attention they were receiving. As the police had said, pursuing phone hacking was less important than working against terrorism, or solving more serious crimes. Roger Alton, executive editor of the _Times_, said, 'For me it's roughly on a par with parking in a resident's parking bay in terms of interest.' When the scandal was at its height, on 20 July 2011, the _Sun_ had a story about a UNICEF official urging the media to focus on the drought in Africa, headlined 'UN: forget hacking, kids are starving'. This indicated a welcome discovery of the importance of global poverty by the _Sun_, and perhaps Murdoch.
Earlier, when a senior journalist had wanted to write a story about the Third World, then editor Kelvin MacKenzie leant over his desk, and 'speaking slowly to emphasise each word' told him, 'Get this through your fucking head. Nobody gives a fuck about the Third World.' Woodrow Wyatt claims that he once suggested that in penance for the _Sun_ having leaked the Queen's Christmas message Murdoch should give £200,000 to Princess Anne's favourite charity, Save the Children, and that Murdoch replied, 'What, all those Africans?' For a few, invasion of privacy continued to be a minor offence or even a public service. The most notorious was the unrepentant former _News of the World_ journalist Paul McMullan, who proclaimed to the Leveson Inquiry: In 21 years of invading people's privacy, I've never actually come across anyone who's been doing any good. The only people I think need privacy are people who do bad things... Privacy is particularly good for paedos... privacy is evil... In contrast, Alan Rusbridger commented on 'how sickened people feel when their privacy is invaded'. Victims of phone hacking 'tell you how deeply repulsive it was to think of a stranger listening into private communications with loved ones or family', and he cited former _Sun_ editor MacKenzie saying how violated he had felt when shown transcripts of his own intercepted phone messages. Not only did these stories serve no conceivable public interest, but any stories which resulted were typically hurtful and demeaning to those written about. But their impact reached beyond the content of stories, to these people's personal relationships. The testimony of many witnesses confirmed the psychological impact of feeling that they had no private life at all. 
Sienna Miller couldn't understand why photographers and reporters always knew where she would be, and she felt 'constantly very scared and intensely paranoid', that 'every area of my life was under constant surveillance': I remember one occasion when I sat my family and friends down in a room and I accused them of leaking stories to the press as a story had come out that only they had known about. Looking back, it makes me extremely angry that I was forced into being so suspicious of people that I love and care for, and that I had to suffer such feelings of betrayal, especially by those who had done nothing wrong. The author of the Harry Potter books, J.K. Rowling, became a focus of surveillance simply because of her huge success. In an effort to gain access, a reporter put a note in her young daughter's school bag. As Rowling said, 'It's very difficult to say how angry I felt that my 5-year-old daughter's school was no longer a place of, you know, complete security from journalists.' Sometimes the surveillance – usually by persons unknown, for purposes only guessed at – had a sense of menace. Labour MP and former minister Tom Watson would change his route home in case someone might be in pursuit, and still obsessively memorises the number plates of unfamiliar vehicles parked outside his house: 'That's what it does to you when you're at the receiving end of the Murdoch fear machine – the threats, bullying, covert surveillance, hacking, aggressive reporting and personal abuse make you permanently wary.' The stake-outs and harassment have an impact on the targets irrespective of what gets published. Tinglan Hong achieved newsworthiness as the mother of Hugh Grant's baby, and then she and her baby were besieged in their home by photographers and reporters. When her 61-year-old mother took photos of one photographer, he accelerated his car towards her, forcing her to jump out of the way. 
These outrageous assaults on privacy were on people who were in – or connected to people who were in – the public eye because of their own activities. The most obviously outrageous acts of surveillance were on people who became newsworthy as victims of crime, or, for example, as relatives of soldiers or victims of terrorism. Here the invasion of privacy, in Leveson's words: has caused real hardship and, on occasion, wreaked havoc with the lives of innocent people whose rights and liberties have been disdained. This is not just the famous but ordinary members of the public, caught up in events (many of them truly tragic) far larger than they could cope with but made much, much worse by press behaviour that, at times, can only be described as outrageous. It is hard to argue that such offences are not worthy of the intense attention they eventually received. The bribery charges potentially carry greater punishment, but former _Sun_ heavyweights, such as political editor Trevor Kavanagh and editor Kelvin MacKenzie, defended them as being in the public interest. MacKenzie believed that 'tip fees' served a public purpose, and equated the police officers receiving bribes with whistleblowers. For MacKenzie the arrests were 'the real scandal'. Deputy Assistant Commissioner Akers told the Leveson Inquiry that one _Sun_ journalist had been given £150,000 to distribute to sources, a bit more than the Christmas bottle of whiskey that MacKenzie was likening it to. She also judged that most of the resulting stories involved 'salacious gossip rather than anything that could be remotely regarded as in the public interest'. The consequences for the 90 people facing criminal charges, several of which potentially involve custodial sentences, are yet to be seen. The wider consequences for the Murdoch empire are even harder to discern. Michael Wolff's dramatic claim in February 2012 that 'an extraordinary corporate death is taking place' may turn out to be true in some respects. 
Nevertheless, News Corp as a major corporation with interests in television, satellite television, films, pay TV channels and newspapers will surely survive, embodied in one or two or more corporate entities. There are more questions about its governance arrangements, and the continuing role of Rupert and his children. If Murdoch were a holder of public office, there is little doubt he would have already been forced to resign. But no such democratic process can force him out, as he still controls by far the biggest bloc of voting shares in News Corp. Nevertheless he is not immune from shareholder sentiment. In 2012, the Anglican Church in Australia sold its holdings in News Corp in response to the phone hacking scandal, and the prominent superannuation fund, First Super, also sold out, citing concerns about the company's poor governance and bloated executive salaries. In July 2012, a letter calling on Murdoch to resign was signed by 18 major shareholders, including Connecticut's state pension fund and the UK's Legal and General. Also, there were strong protest votes against James and Lachlan being on the board of News Corp at the October 2011 AGM, with a majority of independent shareholders voting against them. Because of the scandal, the Murdoch model of governance – a docile board, acquiescent management in thrall to the 'genius' of their CEO, a vision of hereditary succession, and a 'whatever it takes' ethic – is coming under continuing challenge.

13 The roots of scandal

Sometimes the accounts of the internal workings of British tabloids challenge credulity as much as do the stories they publish. A _News of the World_ journalist, Charles Begley, was ordered by his editor, Rebekah Wade (later Brooks), to officially change his name to Harry Potter and dress up as the fictional wizard at news conferences.
In Watson and Hickman's words: A few hours after the September 11 attacks on the Twin Towers, Begley was rebuked for not wearing his robes and being 'in character'... Begley went home and later rang in sick with stress. He stayed home for the next few days, undecided about his future. News editor Greg Miskiw rang him and Begley told him of his disillusion. In a conversation which Begley taped, Miskiw sought to convince him to return with the apparently persuasive line – 'Charles, this is what we do – we go out and destroy other people's lives.' Even when the key facts are agreed on, scandals remain a political battleground. There are disputes over procedures, over proposed penalties and over proposed policy reforms to prevent future abuses. There are also conflicts over explanations and interpretations. Were the offences aberrations, the work of a few rotten apples (or a single rogue reporter), or did they reflect institutionalised corruption? Unsurprisingly, Murdoch himself claimed, in relation to the phone hacking scandals, that 'the problems were isolated to one part of the company'. He emphasised the relative insignificance of _News of the World_ in the total scale of the empire: 'News employs 52,000 people across four continents and generates annual revenues of US$33 billion.' In many ways, the phone hacking scandals were a product of the particular subculture of the British tabloids, of that peculiar world where an editor, just after the 9/11 tragedies, could demand a journalist act as Harry Potter. In other ways, though, they are an outgrowth of wider forces in News Corp's structure and culture. As one of Murdoch's most successful editors, Andrew Neil, put it: You create a climate in which people think it's alright to do certain things.
And I would argue that Rupert Murdoch with his take-no-prisoners attitude to journalism – the end will justify the means, do whatever it takes – created the kind of newsroom climate in which hacking and other things were done with impunity on an industrial scale. This chapter examines the roots of scandal both in the particular milieu of the Murdoch British tabloids and in the more general organisational ethos created by Murdoch.

The power of one

Michael Wolff said of News Corp: 'No other public company of its size [has ever been] such a singular reflection of one man.' In Murdoch's view, that has been a key to his success: 'You can't build a strong corporation with a lot of committees and a board that has to be consulted at every turn. You have to be able to make decisions on your own.' Murdoch had this attitude right from the beginning. He became a director of News Limited in October 1953, and, according to one of the other directors, Sir Norman Young, 'by the end of 1954, [the 23-year-old's] ascendancy over the News Board was complete'. By 1959 it was clear that Rupert was 'determined to run the company without any prior consultation with other Directors'. He would tell them what he had done after the event and 'expect them to endorse his decisions'. In Young's judgement he had already shown great ability as a business manager, but equally had shown that he 'was not prepared to work in committee style', and had 'a touch of arrogance'. He wanted 'good henchmen who would obey his orders on major issues without question'. Ever since, News Corp boards have been notable for their subservience. Peter Gladwin, a long-serving London editor, for example, said he 'was made a director simply to agree with whatever Rupert wanted'.
In Michael Wolff's opinion, 'The board – all men, partly because Murdoch believes that women talk too much – may not be the most docile in corporate America, but it is certainly among the most reverential', and as David Carr notes, 'Being a board member of News Corp is not a bad gig; it pays over $200,000 a year and requires lifting nothing heavier than a rubber stamp.' The directors' passivity became more of an issue in recent years. With the commercially problematic purchase of the _Wall St Journal_ and the phone hacking scandals, the directors have been criticised for lack of proper oversight. On the move to buy the _Wall St Journal_ , for example, 'As was typical, none [of the directors] objected... and the following Tuesday the board unanimously approved the offer.' In 2011, a group of News Corp shareholders comprising prominent US banks and investment funds brought a law suit accusing the board of allowing Murdoch to use News Corp as his 'own personal fiefdom'. News Corp considers nine of its 16 directors independent, yet several of these nine 'owe their careers to' Murdoch, or have 'made millions of dollars making him richer'. The monitoring group Governance Metrics International (GMI) gave News Corp its lowest ESG (Environment, Social and Governance) rating of F, placing it in the bottom 5 per cent of the 5400 corporations it examined. (GMI showed more courage than the Australian institution, AMP Capital. It compiled an extensive post-scandal report on News as part of a report on corporate governance which was sent to clients. However, 'a nervous public relations department' removed all reference to News two days before the report was made public.) Murdoch's domination of his managers and employees is at least as great as of his boards of directors.
Bruce Guthrie found that 'in five years at News, I had learned that most senior executives don't do anything without first asking themselves: "What would Rupert think about this?"' Former London _Sun_ editor David Yelland said much the same thing in an interview: All Murdoch editors... end up agreeing with everything Rupert says but you don't admit to yourself that you're being influenced. Most Murdoch editors wake up in the morning, switch on the radio, hear that something has happened and think: what would Rupert think about this? It's like a mantra inside your head, it's like a prism. You look at the world through Rupert's eyes. When Alastair Campbell accompanied Tony Blair to Queensland's Hayman Island, and watched Murdoch's speech at the Leveson Inquiry, he found it: fascinating, but a bit chilling, to watch all these grown men, and some women, hanging on every word, and knowing that an inflection here or there would influence them one way or the other. Later, after a meeting with _Sun_ senior journalists, who were very right-wing, he was reassured that despite their own proclivities they 'would do what they were told' by Murdoch, and found them 'all a bit Moonie-fied'. Andrew Neil, who worked for Murdoch for more than a decade, put it most dramatically: When you work for Rupert Murdoch you do not work for a company chairman or chief executive: you work for a Sun King... All life revolves around the Sun King: all authority comes from him. He is the only one to whom allegiance must be owed and he expects his remit to run everywhere, his word to be final. A Sun King has a highly personal management style, marked by the weakness of formal processes and formal structures. Murdoch detested lengthy meetings and formal strategy sessions. Rather, thought _Wall St Journal_ reporter Sarah Ellison, he 'ran News Corp like a small club'. 
As Bruce Dover notes: If you were asked to draw an organisational chart of News Corp, it would have Murdoch on top and under him a single straight line to everyone else. As one senior News Ltd insider told Paul Barry in 2012: 'It's a family company. No one thinks they don't work for Rupert.' Whatever their formal position of the moment, they know that this is their fundamental role. Bruce Rothwell – who orchestrated Murdoch's coverage of Australian politics in 1975 – described himself as 'Rupert's handyman'. His 'compliant lieutenants' call one another 'henchmen'. One means by which Murdoch maintains this centralised control is by moving people around frequently. After a visit from Murdoch there is always considerable 'job shuffling'. Such mobility spread 'the Murdoch values', which included the mantra that no one is indispensable. By uprooting employees from their previous social supports and attachments, they became more dependent on him. A strategy of managerial instability is often accompanied by destabilisation of those who fall out of favour, and prudent employees know to avoid this. After Guthrie mounted his suit for wrongful dismissal, former Herald and Weekly Times colleagues rarely contacted him, and when they did 'it was usually... from a public telephone box'. When HWT management heard that one colleague who had left the company talked to Guthrie about the paper in unflattering terms, they initially ordered him to refund his payout; they retreated when he employed a lawyer. According to Harry Evans's account, Murdoch would fire mock pistol shots into the back of the editor of the _Sunday Times_ , Frank Giles, through his office window. Once, 'he told Evans with a sidelong grin: "I'm just going over to terrorise Frank."' After his own forced resignation, Evans reflected that 'Nothing in my experience compared to the atmosphere of intrigue, fear and spite inflicted on the paper by Murdoch's lieutenants.' 
Eric Beecher's career with Murdoch had some parallels with Evans's. When Murdoch had taken over his father's old paper, the afternoon Melbourne _Herald_ , in 1987, he lured Beecher away from the editorship of the Fairfax _Sydney Morning Herald_ to become its editor-in-chief and editor, saying how he wanted to restore the paper's quality, and praising his new editor's ability. Beecher hired several good staff from Fairfax, so Murdoch was also benefiting from destabilising a competitor. The paper's circulation did not rise in the way Murdoch had wanted, and Beecher soon found himself out of favour. On a visit to Melbourne, Murdoch reviewed that day's paper with Beecher and Guthrie: We had, said Murdoch, created a paper that was intellectual when he wanted only intelligent, and literary when he only wanted well written... After about 20 minutes, our bollocking was complete. Murdoch had never raised his voice, and was unfailingly polite. But he had flattened us... It was obvious what had just happened... Sure enough, we were gone by the following March. When he first took the job, Beecher thought that Murdoch had changed, but he later discovered that he hadn't. The basic problem, he thought, was that 'Rupert has contempt for those who work for him, and total contempt for those whom he can bend.' Given the contemporary view of Murdoch's management style as 'calculated terror', it is important to remember that originally many Australian journalists found him a breath of fresh air. In contrast to the stuffiness of the Fairfaxes, the conformity of HWT management and the aggressive bullying of the Packers, Murdoch, affectionately known as the 'boy publisher', had a much more egalitarian and friendly style. David Bowman, later to become a strong critic, recalled that on his first Christmas in Sydney, in 1961, when he knew no one, Rupert invited him to share Christmas dinner with the Murdoch family; he did the same for other employees in a similar situation. 
When Rod Lever, an executive from Murdoch's early years in Australia, stayed at Cavan, Murdoch's farm, he was astounded one morning when Rupert cooked breakfast and brought it in to him and his wife in their bedroom. He was particularly good at creating a sense of adventure and teamwork. Murdoch would be there, in his shirt-sleeves, helping launch whatever new against-the-odds venture it was. Moreover, some people then moved with Murdoch to new challenges. Some went from Adelaide to work on his Perth Sunday paper, and then some went on to Canberra or Sydney: Out of his Adelaide and Perth newspaper cadres Murdoch had formed a small, loyal circle of editorial cohorts who understood and agreed with his inflammatory journalistic style, and whom he could trust to carry it out under his constant goading and supervision. Later, some moved from one country to another: in accounts from the employees they were displacing, 'Australian' often became synonymous with 'barbarian'. Murdoch's expansion offered opportunities for his cadres, and many in middle management felt they were also a part of the News success story. As Guthrie put it, 'Sitting at that lunch [in 1987], it was hard not to think for a moment at least that you were at the epicentre of modern media', and could bask in Murdoch's 'reflected glory'. But this sense of loyalty and belonging has a downside. For Wolff, Murdoch 'tends to hire people who are grateful for the chance, who feel they're getting more from life because of him than they would have without him'. Some of these 'lifers' (as Wolff calls them), who have had no significant professional experience except working for Murdoch, are strongly loyal. Guthrie recounted how Rocky Miller, who had been with the company for almost half a century, including eight years as editor of the Sydney _Sunday Telegraph_ , was to introduce Murdoch at a Cancún gathering of News Corp executives. 
Miller became emotional: 'Just before you come up here, Rupert, I want to say one thing, though: you're my fucking hero.' He repeated, 'No, no, you are my fucking hero' before Murdoch gently took the microphone from him. While good for control, Murdoch's strategies have their costs in organisational quality and functioning. Neil commented that: During the 11 years I was editor, Rupert fired or eased out every chief executive of real talent or independent mind-set. As a result, there is no historic memory at the top... This reflects Rupert's general disdain for individuals. He has never expressed regret about those he has axed and has repeatedly said that every individual can be replaced. Wolff, in particular, has judged that 'News Corp has been, for most of its history, distinguished by its self-effacing, if not weak, executives', and that Murdoch's 'little band [is] not ready for prime time'. Whether or not this judgement is too sweeping, Murdoch himself has said, 'There may be more brilliant people wandering outside sometimes, [but] you have to favour the people that are prepared and ready to give themselves to you.' But rating deference above competence can have a substantial cost. Perhaps the outstanding case was when Murdoch fired Adrian Deamer as editor of the _Australian_ in 1971. Murdoch's original choice as editor, Maxwell Newton, had failed, and he filled the post-Newton vacuum with Walter Kommer, but Kommer was more inclined to the business side, and asked Deamer to take over as editor. This was an inspired move. Deamer's tenure at the _Australian_ is among the most successful editorships, both professionally and commercially, in Australian journalism. Circulation grew substantially, while David Bowman, then at the _Sydney Morning Herald_ , recalled how frequently _Herald_ 'news conferences began with a sad admission that the _Australian_ had found the dimensions in the day's news that everyone else had missed'. 
Having rescued a paper that seemed doomed to close did not earn Deamer Murdoch's gratitude, however. Instead he was full of criticisms: Deamer's fatal fault was that he was not sufficiently acquiescent. Murdoch finally said, 'You're not producing the sort of paper I want.' Deamer replied, 'Rupert, I don't think you know what sort of paper you want. So until you do I'll go on producing the paper I want.' Deamer later said that: the gist of Murdoch's complaint was that the paper had become too intellectual and too political: 'It was anti-Australian, it preferred black people to white people, it wanted to flood the country with Asians. He complained it took up every "bleeding heart" cause that was fashionable among the long-haired left. It was not interested in the development and progress of Australia. It criticised the political leaders he supported. It was dull, it was a knocking paper, and it stood for everything he opposed and opposed everything he stood for.' According to Shawcross: [Murdoch] began to worry that he could not depend on Deamer. He was too strong, too independent. As his empire grew, Murdoch felt increasingly that he needed men on whom he could rely... And thus, over time, he came more and more to appoint rather colorless editors who would not disturb the outposts of empire. The _Australian_ was Murdoch's most visionary gamble, but in firing Deamer for being insufficiently deferential, he inflicted almost terminal damage upon it. The dismissal of Deamer demonstrated that however egalitarian and friendly Murdoch's personal style often was, it did not betoken any willingness to share power. Rather his approach was uncompromisingly hierarchical. 'When one of my editors tries to prevent me from exercising my rightful domain over a paper, he's gone,' he told Kiernan. When _New York Post_ journalists went on strike the year after he bought the paper, he was furious.
Their letter to him protested about 'insidious' editorial bias in story selection, and their fear that the _Post_ 'is gaining the reputation of being a sheet where facts don't matter'. In a meeting, one senior journalist said to him: 'It's our paper too.' 'Oh no, it's not,' Murdoch snarled. 'When you pay the losses you can say it's your paper too. It's my newspaper. You just work here and don't you forget it.' Murdoch's view of the contrasting rights and needs of capital and labour is almost feudal. An employee, no matter how good, cannot rise into the ranks of the owners. Barry Diller was already wealthy when he started to work for Murdoch. He had been the architect of News Corp's acquisition of Fox Studio and the launch of the Fox Network, had turned both into strong enterprises, and ran them with relatively little interference from Murdoch (although sufficient to irritate Diller). By 1991 he was increasingly determined to become a principal, not just an employee. He asked Murdoch if he could become a principal in a real sense, but Murdoch replied that there was only one principal in his company. According to Neil, 'the semi-autonomous nature of Diller's Fox empire – almost a state within a state – had infuriated Rupert for some time'. Murdoch and Diller, acknowledging the insuperable nature of their conflict, worked out the terms of Diller's departure: a financial settlement of approximately $34 million. Shawcross notes, 'Murdoch was delighted that he would now have Fox to himself.' Later that year other senior executives left, as Murdoch exercised more day-to-day control. The great irony in this episode is that Diller's experience with Murdoch mirrored that of Sir Keith Murdoch with the owners of the Herald and Weekly Times two generations earlier. In the view of the Murdoch family, Keith built the fortune of HWT, but they never allowed him to acquire a proper share of it.
Now Murdoch was doing to Diller exactly what the Murdoch family resented about HWT and Sir Keith. Neil also felt that Murdoch was prone to jealousy of any employee who built too much of an independent reputation. 'Do not underestimate how much Rupert resents you becoming a public figure in your own right,' one of Murdoch's closest associates warned Neil in 1994. Several years earlier a senior Murdoch manager, Bruce Matthews, when Neil's relationship with his employer was going very well, warned him: 'Don't fall in love with Rupert. He turns on lovers and chops them off.' Neil's view was that 'Rupert does not allow himself to make lasting friendships.' Gus Fischer, another senior News International executive, said 'he cannot afford friends. He has built his empire by using people then discarding them when they have passed their sell-by date.'

A hierarchical and conforming culture

Murdoch asserted his dominance using this combination of concentrated control and a willingness to wield power ruthlessly. Rod Lever, the beneficiary of breakfast in bed at Rupert's farm, was also witness to Murdoch's rituals of domination. He recalled conferences with editors at which Murdoch would go through the pages of a newspaper uttering 'Bullshit' and 'Crap', or 'I've told you over and over and over again not to run this sort of rubbish but you never listen to me. No one ever listens to me.' These conferences, thought Lever, 'were generally a theatrical performance which a psychologist would say demonstrated a desire to dominate. He wanted to rattle people, keep them off balance.' Neil Chenoweth has similarly commented: The voice was alternately seductive, endlessly persuasive, or frigidly dismissive. Murdoch was the master of the little politeness, the long silence, the questioning pause, the icy rage. He could inspire remarkable loyalty as well as enormous bitterness.
Perhaps strangely, the people who probably most frequently bore the brunt of Murdoch's bullying were the editors of the _Sun_ who contributed so much to building his fortune – Larry Lamb and Kelvin MacKenzie. MacKenzie: endured almost daily 'bollockings' from the man he always referred to as 'the boss' – a steady stream of transatlantic vituperation and four-letter words was his regular diet for over 12 years, even though he ran a paper which netted his proprietor £70–90m a year. 'He treats the tabloid editors like dirt,' [confirmed] John Dux, who was managing director at Wapping in the early nineties. There was an element of the bully in this: Rupert ranted at Kelvin and others because he knew he could get away with it – they were prepared to put up with it. MacKenzie once said that if the boss told him to print the paper in Sanskrit he would readily do so. In public, Murdoch always spoke well of the man he occasionally referred to in private as 'the young Hitler'. He told one interviewer: 'MacKenzie is what he is. He's out there screaming, and he's good. Somehow it works.' But in private sometimes, Murdoch's calls were the most terrifying of all: 'You're losing your touch, Kelvin. (Pause) Your paper is pathetic. (Pause) You're losing your touch, Kelvin.' Then the phone would go down. After a call like this, life in the office would be hell for everyone. Although there are substantial cultural differences between the various Murdoch papers in various countries, Murdoch clearly values editors in the MacKenzie mould, such as Col Allan, nicknamed 'Col Pot' in Sydney, whom Murdoch moved to the _New York Post_ : Lack of restraint and decorum is also [Col] Allan's newsroom management style. Not only is he a legendary screamer – the morning news meeting is a daily and by now ritualistic drama of reporters and editors having the shit screamed out of them – he's a deeply disorganised one.
The Leveson Inquiry produced considerable evidence of a very hierarchical workplace where the editor's word was law, and management practices less than enlightened. Ian Edmondson, news editor at _News of the World_ , described 'an environment where anyone in the newsroom had to comply with an instruction from the editor'. The National Union of Journalists' submission cited an anonymous journalist who said, 'the _News of the World_ was an incredibly tough and unforgiving workplace', and described seeing three or four members of staff collapse in the office at least in part from stress. This view was supported by others such as Sean Hoare (a 'culture of intimidation and bullying'), the defender of phone hacking Paul McMullan ('ruthless') and Matt Driscoll ('anyone on the floor who complained too much [about the techniques] would find themselves pushed out'). Nor is this just a recent phenomenon. Peter Chippindale and Chris Horrie's account of the _Sun_ in the 1980s under MacKenzie said that some journalists were told to their face that they had no future in the place, and were then given impossible jobs and huge workloads, and were personally insulted. MacKenzie would 'come up to their desks, put his face close to theirs and say, "You still here then, eh? Haven't you gone yet, eh?"' But the result was that when MacKenzie made probably his most commercially disastrous decision, in the coverage of the Hillsborough tragedy (see Chapter 10), his 'dominance was so total there was nobody left in the organisation who could rein him in, except Murdoch'. This hierarchical culture, plus the lack of internal checks and balances, and the valuing of outcome over process, made it hard for dissent to be voiced even as the papers adopted increasingly scandalous practices.

Hubris

After its re-election in 1992, the Major Government was humiliatingly forced to withdraw from the European exchange rate mechanism.
The prime minister called _Sun_ editor MacKenzie to tell him and to ask how the tabloid planned to cover the story. MacKenzie replied: 'Well, John, let me put it this way: I've got a large bucket of shit lying on my desk, and tomorrow morning I'm going to pour it all over your head.' Clearly the editor of the largest-selling newspaper felt no need to show excessive deference to the elected leader of the country. As the Murdoch empire grew, its members felt increasingly empowered in their dealings with others. Murdoch has always encouraged a journalism that seeks to place itself at the centre of events: 'Both Fox News and the _New York Post_ took a manic delight in their influence.' Paul McMullan gave a sense of the satisfaction that came with apparent influence: 'In a bizarre way, I felt slightly proud that I'd written something that created a riot and got a paediatrician beaten up, or whatever was the case.' He was referring to an infamous episode where the _News of the World_ , under Wade, decided to name and shame paedophiles. This was discontinued after some ugly incidents, including one where a paediatrician (confused by a reader with a paedophile) was beaten up. Matt Driscoll observed the same sense of power during his years at News International: As a result of this aggressive and grotesque arrogance, those in charge – the proprietors and the editors – came to believe that they could do and say whatever they wanted and remain untouchable... they felt that they were almost beyond the reach of the law...[and] could leave their morals and their respect for ethics at the door when they clocked in each morning. The next front page was all that mattered, however it was obtained. Perhaps that tendency was heightened by the backgrounds and rapid rise of those with editorial authority. The change in news priorities inevitably brought a shift in the types of people who achieved internal success.
Those covering celebrities often became celebrities themselves, and editors of the _Sun_ 's Bizarre column, such as Andy Coulson, Piers Morgan and Dominic Mohan, frequently appeared in pictures with the stars they were reporting on. More seriously, as show business became 'ever more important in shifting copies', journalists from that area were promoted into positions of editorial responsibility: In their late twenties and early thirties, [Piers Morgan, Rebekah Wade and Andy Coulson] were pitched into a lucrative, adrenaline-charged whirl, the backseat of chauffeur-driven limousines and – to their delight and surprise – the dining chairs of Downing Street. They had proved themselves by their energy and enterprise, and frequently also by their ruthlessness and inventiveness, and while their experience with soap operas might seem to equip them for political coverage, it had not prepared them for its real-world consequences. Moreover, the speed of their success and their ambition easily led to arrogance. So, during the disappearance of Milly Dowler, _News of the World_ , on the basis of a wrong number on the victim's telephone, thought it knew better than the police what was happening, and wouldn't be convinced otherwise. Equally, during the first long period of the scandal, when the _Guardian_ 's investigative reporting was not evoking any response, Rebekah Brooks remained confident. In public she charged that the _Guardian_ was probably deliberately misleading the British public; in private she told colleagues that the story was going to end with _Guardian_ editor Alan Rusbridger 'on his knees, begging for mercy'. The most remarkable example of the hubris at the top of News International came before the 2010 election. The _Independent_ had produced a series of innocuous ads, saying 'Rupert Murdoch won't decide this election. You will.' James Murdoch and Rebekah Brooks took umbrage. 
They arrived unannounced at the newspaper, and James yelled at the editor, Simon Kelner – 'What are you fucking playing at?' Kelner took them into his office, and they left 20 minutes later. Journalists were shocked at the 'bizarre' scene.

My company right or wrong

News Corp has always been 'a company with a chip on its shoulder', aggressively counter-attacking and maligning its critics. This mentality stems directly from Murdoch himself, who has always nursed vendettas and – perhaps more puzzlingly for someone born into such privilege – has always had a strong sense of resentment. Neil Chenoweth thinks that Murdoch has 'a deep wellspring of anger... looking for a target'. He quotes Clay Felker, the former head of _New York_ magazine and for a time a Murdoch friend, who felt that Murdoch 'feels the world is out to get him'. According to Kiernan, who was close to Murdoch for over 10 years, 'he is quick to duck blame' when accused of questionable or untoward things. Instead he usually engages in 'an often obscenity-filled diatribe' against 'his absent adversary of the moment'. Typically this focuses not on the point at issue, but 'on the alleged wrongs, stupidities, and character defects of the other person', and is often accompanied by 'promises of retribution'. Such retribution was directed at well-known New York columnist Pete Hamill when he criticised the _Post_ 's coverage in 1977. Murdoch retaliated by running excerpts from an unflattering story Hamill had written about Jackie Kennedy many years earlier, knowing that in 1977 Hamill was engaged in a romance with the former First Lady. It was 'self-indulgent on my part', Murdoch told a journalist, 'but I don't apologise for it. People have got to learn.' Journalism's key ethic is to increase transparency. It is a view publicly endorsed by Murdoch, who testified to the Leveson Inquiry that 'I don't think [politicians are] entitled to the same privacy as the ordinary men in the street.
If we're going to have a transparent society, a transparent democracy, let's have everything out in the open.' However, the standard response of News Corp is to cloak its own actions in secrecy. Not only was this its initial response during the phone hacking scandal, but during 2003, when Roy Greenslade was seeking to investigate the unanimity among the Murdoch press on the issue of the Iraq War, no Murdoch editors responded to his calls and emails, producing the paradoxical situation that 'journalists [were] more tight-lipped than soldiers'. An amusing case of the company's attempts at secrecy came after September 2009, when _New York Post_ associate editor, Sandra Guzman, was fired. She claimed that her work environment had changed irreparably the previous February when she criticised a strange cartoon that some said likened President Obama to a chimpanzee. The cartoon prompted a public outcry about racism, and Murdoch made a formal apology. After her dismissal, she filed a complaint alleging harassment and making many embarrassing claims, including that the Washington Bureau head had told her that their aim was to destroy Obama's presidency. She also maintained that editor Col Allan and others encouraged an internal culture that was sexist, offensive and domineering, and that various decisions were driven by racial prejudice. As part of the larger action, she sought to find out what Murdoch and Allan had said to each other about the public apology for the cartoon. Allan made the novel claim that such conversations should be covered by 'editorial privilege', but the court did not accept this new doctrine. Confronted by criticism, Murdoch's response is often not to debate its merits, but to recast the issue as 'us versus them'. Throughout the phone hacking scandal, there was the standard Murdoch/News Corp ascription of ulterior motives to all external critics.
In a November 2010 speech to British editors, _News of the World_ managing editor, Bill Akass, defended his brand of tabloid journalism against the 'snobbish elite', saying that their complaints were 'a kind of proxy for sneering at the working class'. A month earlier, when the _New York Times_ had reported its investigation of the phone hacking scandal, he 'sent an indignant letter arguing that the paper had attacked Murdoch's publication because of the _Times_ ' rivalry with the _Wall St Journal_ '. The _Wall St Journal_ urged its readers to 'see through the commercial and ideological motives of our competitor-critics'. Murdoch told the parliamentary committee: A lot of people had different agendas, I think, in trying to build this hysteria... All our competitors in this country formally announced a consortium to try and stop us [in the BSkyB bid]. They caught us with dirty hands and they built the hysteria around it. Behind the public statements had come much more strenuous private efforts. As late as 5 July 2011, when Ed Miliband took what Watson and Hickman judged to be the biggest risk of his political career – calling the phone hacking truly immoral and calling for Brooks to examine her conscience and consider her position – 'senior Murdoch journalists were furious'. Tom Newton Dunn, the _Sun_ 's political editor, said: 'We do take it personally and we're going to make it personal to you. We won't forget.' The phrase 'would not be forgotten' had also been used by News International representatives to journalists inquiring about the scandal, and to Chris Bryant when he raised the issue in July 2010 in the House of Commons. When Alastair Campbell did a series of interviews following the _Guardian_ story of July 2009, he 'received a series of what [he said could] only be termed threatening text and phone messages from both Rebekah and the office of James Murdoch'. Politicians were also targeted.
In May 2012, Neville Thurlbeck, by then disillusioned with his former employer, said that _News of the World_ reporters had spied around the clock on members of the House of Commons' Culture, Media and Sport Committee: 'The objective was to find as much embarrassing sleaze on as many members as possible in order to blackmail them into backing off from its highly forensic inquiry into phone hacking.' It was a plan hatched by News International executives – 'It wasn't journalism. It was corporate espionage.' An earlier member of the committee, Plaid Cymru MP Adam Price, said that after Brooks had refused to appear in early 2010, 'the committee's members had been warned that if they had called Brooks their private lives would be raked over'. The chief parliamentary critic of phone hacking, Labour MP Tom Watson, was placed under surveillance by _News of the World_ journalist Mazher Mahmood (famous as the 'fake sheikh' in some of the paper's 'stings') in the hope of finding him having an affair. After Watson threatened to sue the _Sun_ for libel over a series of articles: [My] neighbour caught people rummaging through my bins and rustling through papers in my garage. He got one of them in a headlock and demanded to know what he was doing, and the person told him he was working for the _Sun_. An equally crude approach was taken towards solicitors Mark Lewis and Charlotte Harris, who were working on the civil cases for phone hacking victims. They were placed under surveillance during two periods, with the aim of discrediting them and so damaging their ability to pursue the cases. In early 2010, the News solicitors, Crone and Pike, arranged for this surveillance, wrongly believing them to be having an affair. Pike told Leveson he would do the same again in the same circumstances. They also checked the birth certificates of Harris's two sons to see who was listed as their father. 
Perhaps the most brazen and tellingly tribal of all the News International surveillance episodes was of police officer David Cook and his wife Jacqui Hames, a former police officer who now presented the BBC's Crimewatch program. Cook appeared on the program making a public appeal for information in 2002 when they were reinvestigating the 1987 murder of Daniel Morgan, the business partner of Jonathan Rees, who often acted as a private detective for the _News of the World_. The next day Scotland Yard warned Cook that it had picked up intelligence that Rees's current partner, Sid Fillery, had been in touch with Rees's main contact at _News of the World_ , who had agreed 'to sort Cook out'. Mulcaire then put them under surveillance. Cook and Hames found vans parked outside their house; both were leased to News International. At a press social event at Scotland Yard in January 2003, Cook and his superior officer approached _News of the World_ editor, Rebekah Wade, about the surveillance. Wade replied that the paper was trying to discover whether Hames and Cook were having an affair (they were married). Scotland Yard took no further action either. Hames said the surveillance 'left me distressed, anxious and needing counselling and contributed to the breakdown of my marriage'. She told the Leveson Inquiry she believed that _News of the World_ had put her and her husband under surveillance because 'suspects in the Daniel Morgan murder inquiry were using their association with a powerful and well-resourced newspaper to intimidate us and try to... subvert the investigation'. These cases are testimony to News International's ruthlessness. But equally they show its _modus operandi_ – the best way to refute criticism is to discredit the critic; the substance of what they have said can be ignored if, for example, you can charge that they are having an extra-marital affair. It gives credence to actor Steve Coogan's dramatic comparison of the company to a 'protection racket'. 
It employs the prospect of negative coverage, he charged, 'as a weapon against those who get in [its] way.... Be nasty to us... and you will feel our wrath.' For Wolff: the fundamental currency of the company has always been reward and punishment. Both the _New York Post_ and Fox News maintain enemy lists. Threats pervade the company's basic view of the world. 'We have stuff on him,' Murdoch would mutter about various individuals who I mentioned during my interviews with him. 'We have pictures.' Leveson commented that: a general defensive approach has led to some newspapers resorting to high volume, extremely personal attacks on those who challenge them... There is a cultural tendency within parts of the press vigorously to resist or dismiss complainants almost as a matter of course. Securing an apology, a correction or other appropriate redress, even when there can be no argument, becomes drawn out and difficult. One amusing example of just how reluctant Murdoch journalists are to admit error came with the testimony of the _Sun_ showbiz correspondent Gordon Smart. In 2009, Chris Atkins made a feature documentary _Starsuckers_ , part of which involved feeding fake celebrity stories to the daily tabloids, some of which were published without having been checked. The _Sun_ had published two stories concocted by Atkins. One claimed that Guy Ritchie had given himself a black eye while juggling cutlery. But at the inquiry Smart contended, 'Well, I would disagree that they weren't true.' For some time, against all the evidence, he claimed that the two stories, even though they had been made up by the _Starsuckers_ team, were factually true, and that he had checked them. Eventually Leveson intervened, 'It would be quite a remarkable coincidence if Mr Atkins invented a story that sounds bizarre and it happened to be true.' The Leveson Inquiry was unusual in that Murdoch himself was subjected to public cross-examination under oath. 
Even here he showed his reluctance to acknowledge wrongdoing. Some aspects were predictable: his refusal to give any credit to the _Guardian_ for its investigative reporting of the scandal, for example. Similarly, his _ad hominem_ dismissal of critics came to the fore when he was asked to respond to the quotation from David Yelland (earlier in this chapter) about Murdoch editors taking on Murdoch's views. He said it was 'nonsense', and should be taken 'in the context of Mr Yelland's very strange autobiography, when he said he was drunk all the time he was at the _Sun_ , which we didn't notice'. When it was put to him that in a legal case, the Mosley prostitutes case, Justice Eady had used the word 'blackmail' in relation to a _News of the World_ journalist, Murdoch said he didn't know about it, but then seemed to dismiss the incident by saying, 'It's a common thing in life, way beyond journalism, for people to say, "I'll scratch your back if you scratch my back."' When Murdoch was asked about a tribunal finding that Andy Coulson had presided over a culture of bullying, he replied he had never heard of it, and shrugged it off with the comment, 'They always strike me as a very happy crowd.' When further questioned over one journalist's testimony of a bullying culture at the paper, he responded by asking why the complainant hadn't resigned if she had not liked working for his newspaper, which prompted Justice Leveson to intervene: 'I think the problem with that might be that she needs a job.'

An ethics-free zone

Bruce Guthrie discovered that raising questions about ethics was not a wise career move at News Corp. Guthrie, as an associate editor of the Melbourne _Herald_ , attended 'a confab in Aspen, Colorado, of [Murdoch's] best and brightest' in June 1988. The news editor of the London _Sun_ , Tom Petrie, gave an exuberant presentation on how: we don't report the news; we make it...
His presentation was wildly entertaining with its stories of chequebook journalism, general skullduggery and, ultimately, 'heavy lifting' of rival papers' stories if they were unable to match them. For anyone who took journalism seriously, it was appalling. At the end, Guthrie asked from the floor, 'Do you have any ethical framework at all at the London _Sun_?' Eric Beecher told Guthrie that Murdoch turned red with anger at the question. Amid the resulting hubbub, Murdoch weighed in directly with, 'I would have thought it's news if the captain of the England cricket team is taking barmaids up to his room the night before a Test match', referring to a recent _Sun_ story. When the session finished, Guthrie saw Murdoch and his Australian head Ken Cowley in conversation, and Cowley later told him Murdoch said, 'I see we have a Fairfax wanker in our midst.' The following year, after Beecher's departure, Cowley offered Guthrie the editorship of the about-to-be-launched _Sunday Herald_ , and foreshadowed a visit to New York, saying, 'It would be good for you to get together with Rupert again because you upset him a bit with that question of yours at Aspen.' More typical was the response of former _Sun_ editor, Kelvin MacKenzie, to the Leveson Inquiry: ethical questions 'were not issues I bothered with. I do hope that this inquiry is not seeking to impose them on print journalists – that would be bloody funny to watch.' He is also alleged to have joked, 'Ethics? As far as I'm concerned that's that place to the east of London where people wear white socks.' Reading some of the dealings in the Murdoch and other English tabloids, one passes through a looking glass in which none of the normal rules of human decency apply.
Kate McCann, mother of a girl who was abducted while the family was on holiday in Portugal, had to endure not only the deep grief of that but also the English tabloid feeding frenzy when the Portuguese police decided for a period that the missing girl's family were persons of interest in the investigation. The police had taken – and then returned – Kate McCann's diary, in which she kept her personal thoughts and feelings, which she had shown to no one, not even her husband. The _News of the World_ , under Colin Myler, obtained McCann's diary, and then without her permission or knowledge published long extracts. She spoke later of the sense of violation she felt as she read her private thoughts and feelings in the tabloid. Another _News of the World_ reporter, Paul McMullan, swindled the source of a story about Robert de Niro sharing a bubblebath with two girls. 'One of [the girls] was foolish [enough] to tell me all about it and give me all the pictures without signing a contract.' The normal fee for such a spread would have been 'ten grand'... '[But] we didn't pay her. She was on my back for ages.' However, McMullan got a '750 quid bonus for ripping off the source of the story'. Former tabloid reporter Richard Peppiatt wrote: I cannot remember a single discussion over content that included empathetic consideration on the subject of the coverage... Consider the _Sun_ 's front page fascination with 'Britain's Youngest Dad' in the spring of 2009. For weeks a boy of 13 and a girl of 15 were subjected to the full glare of a tabloid feeding frenzy, frothing comment passed on their upbringing, education and morality. Was the welfare of these vulnerable teens... given consideration? He told the Leveson Inquiry that he 'would have been laughed out the door' of his newspaper if he had tried to use the Press Complaints Commission editors' code to raise an ethical issue.
The escalation of the dark arts

As Roy Greenslade came to understand at the beginning of his career: We [journalists] operated to our own rules... Living inside the journalistic bubble, especially at a time of even greater official secrecy and bureaucratic opacity than exists today, inured us to criticism. So, in the belief that they were serving the greater good, adopting dubious means was justified. Within their own subculture, informal norms evolved, and although these often ran counter to the official rules in the wider society, there were recognised limits. However, suspect means were celebrated even when they were not serving a democratic purpose: 'The great and the good of popular journalism... liked nothing better than to tell stories of ethically suspect escapades.' Murdoch's most celebrated reporter, Steve Dunleavy, started as a journalist on the Sydney _Daily Mirror_ : The Sydney competition was so fierce that reporters would do anything to get a story – literally anything. I lost count of the number of times I posed as a cop, a public servant or a funeral director. According to Australian journalistic folklore, he once punctured the tyres of his father's car in order to beat him in sending photos back from a story in the Blue Mountains. Dunleavy's defence is that he knew the car belonged to the rival _Sun_ newspaper, but didn't know his father was at the scene. Murdoch assigned his star reporter to boost the _New York Post_ 's coverage of Son of Sam. When Dunleavy gained access to one victim's family in a hospital ward – dressed as a doctor and pretending to be a bereavement counsellor – this clearly met with Murdoch's approval: he 'chuckled' that it was this kind of thing that 'gives the young reporters confidence'.
Similarly, according to Chippindale and Horrie's well-informed account of life at the London _Sun_ , MacKenzie liked to be surrounded by 'made men', journalists: who had proved themselves by pulling off some outrageous stunt at the expense of the opposition... Hacks refusing to get involved in this sort of behavior [such as stealing from a competitor] were suspect – falling into the category of those who were not fully with him, and could therefore be presumed to be against him. Sharon Marshall, in her book _Tabloid Girl_ , declared her tribal loyalty: I'm not proud of everything we did, but I loved the tabloid journalists I worked with. Every single, double-crossing, devious, scheming, ruthless, messed up, brilliantly evil one of them. The problem with informal systems is that they allow private competitive advantage to be equated with larger public purpose. One consequence of this is that the norms of acceptability do not remain stable. As the stakes become higher, or as rule-breakers become more brazen, there are fewer and fewer constraints. Thus, for example, Watson and Hickman note that: [from] the mid-1990s onwards, the _News of the World_ newsroom... was extreme, even by Murdoch standards. Exhorted by him to smash its closest competitors, the _People_ and the _Sunday Mirror_ , its management fostered an ultra-competitive atmosphere. Reporter was set against reporter and executive against executive... Fearful about the consequences of carrying out their orders, reporters sometimes illicitly taped the briefings they received from news editors. So, as Greenslade observed, the phone hacking scandal was not an isolated incident or an aberration, but the 'culmination of a historical process stretching back many years'. Equally, although ethically suspect shortcuts may be as old as reporting itself, their scale and prevalence escalated in response to competitive pressures and personal ambitions. 
When Jeremy Tunstall did his pioneering survey research on the specialist reporters in Britain in the late 1960s, he found that payments by journalists to sources were largely circumscribed. Of the nine specialist areas of reporting he covered, payments were common only in the fields of crime and football. Payments were small, mainly for tips, and only rarely to police officers or football players themselves. They were most frequently made by the popular papers. Elsewhere there were emphatic denials that paying sources was ever a means of obtaining information. By the early 1990s, however, when Tunstall did another round of interviews with newspaper journalists, he was struck by how much more prevalent talk of payments to sources was. Others have sought to trace trends in, for example, the number and size of payments to paparazzi. According to Watson and Hickman: In the early 2000s, demand soared for stories delving into the private lives of actors, pop stars and TV presenters. Paparazzi pictures which would have fetched £5000 in 2000 made £100,000 in 2005. The police told Hugh Grant that: the paparazzi were increasingly recruited 'from the criminal classes' – who would 'show no mercy, no ethics', because the bounty on some of these pictures is very high. Publicity agents who mediated between the media and their clients for a fee were another growing industry. In Australia, leading figures have been Harry M. Miller and Max Markson. In Britain, the dominant figure was Max Clifford, who later became a victim of phone hacking. Clifford had been supplying stories to the press for 30 years. He noted that as the competition got fiercer and circulations started to slide, 'the methods became more and more creative'. Even in political coverage, as Nicholas Jones notes, 'the cheque book reigned supreme and the going rate was escalating as a result of the opportunism of the publicist Max Clifford, who dominated the market in kiss-and-tell disclosures'. 
One of the largest deals involved the _News of the World_ paying Rebecca Loos £300,000 in April 2004 for a story on 'Beckham's Secret Affair'. The first time Coulson's _News of the World_ won newspaper of the year, 'there was a lot of criticism that the stories were Max Clifford jobs'. But after the paper did a nasty story on one of his clients, Clifford cut his ties with it. The breaking of rules became more calculated. For example, under Piers Morgan, _News of the World_ 'bribed staff on the rival _Sunday Mirror_ and the _People_ to obtain their news lists', and tried to steal their exclusives: On 15 October 1994, [Morgan] sent his cunning new features editor, Rebekah Wade, to hide in a toilet dressed as a cleaner so she could run back to the _News of the World_ from Wapping's print works with a copy of [their sister publication] the _Sunday Times_ 's serialisation of Jonathan Dimbleby's new book on Prince Charles. Morgan made decisions about breaking copyright solely on a financial basis, not on a legal or moral one. So he stole a copyrighted interview with rugby player Will Carling and his wife. The _Mail on Sunday_ explicitly warned him not to, but he 'laughed at the warnings from the _Mail_ '. He made the calculation – '£50,000 maximum damages – well worth a front page and two spreads inside.' So the step from paying moles at competing newspapers or for scripts of _EastEnders_ in advance to bribing public officials and police officers was more easily made. In 1998 MacKenzie had said, 'If a policeman receives a tip fee for revealing a break-in that should have been reported anyway, that's fine.' In 2003 Rebekah Wade admitted to the House of Commons committee that the tabloid had been paying police officers for stories: It was a perfunctory response from a media executive who saw no moral dilemma nor problem with the idea of paying and therefore potentially corrupting a police officer.
Journalism has always encountered closed doors and yet found its way to publicly important information. There has always been some professional admiration for those who, through cultivating sources or other means, served the public's right to know. Sometimes investigative journalism has engaged in breaches of ethics in the cause of the public interest. However, Richard Thomas, the British Government's Information Commissioner, declared that he hadn't 'seen a whiff of public interest' in the breaches of privacy laws he was investigating. Indeed, many of these stories increased rather than reduced the sum of human suffering. Moreover, unethical means of gathering information, which in matters of great public importance might have been a course of last resort, became instead a means of first resort on matters of no public interest, or in general fishing expeditions. What was illegal and, indeed, abhorrent to most of the public had become the norm inside the tabloid journalistic bubble.

The pirate king and his crew

Murdoch's Chief Operating Officer, Peter Chernin, told a company meeting in 1998, alluding to Nike advertisements featuring basketball star Michael Jordan: You know those 'Be Like Mike' commercials?...We have to be like Rupert. We have to institutionalise the imagination, nerve and vision he represents. It is an interesting question how long Rupert Murdoch, as an employee, would last working for Rupert Murdoch. Two senior employees, for a time Murdoch favourites, made employment-ending miscalculations when seeking to be like Rupert. Stephen Chao, a 36-year-old Harvard MBA, had reached the top of Fox Television in nine years. His success at Fox, including several programs that pushed the boundaries of taste, such as the recently launched _Studs_ , had given him prestige inside News Corp. He rejoiced in a reputation for outrageousness: for example, he decorated his office 'with [make-believe] stained diapers and excrement'.
At a company gathering in Aspen in June 1992, he was to debate others, including Lynne Cheney, wife of the then US Defense Secretary Dick Cheney, about the 'threat to democratic capitalism posed by modern culture'. Chao livened up his talk by having a male stripper nearby taking his clothes off as Chao spoke. Mrs Cheney turned her back. Murdoch was not amused. He sacked Chao that day, and told the gathering, 'It's a terrible thing to see a brilliant young career self-destruct. And it's a bitter loss. But the point is that there are limits.' Another executive who had a rapid rise but then a rapid departure was Judith Regan: 'Murdoch, who met her in early 1993, took to her right away – and hired her, not least of all, to annoy the people at HarperCollins.' Her imprint, ReganBooks, quickly established itself, earning up to one quarter of HarperCollins's annual revenue, with her mix of titles showing tabloid tastes mixed with right-wing authors. 'She was the perfect demonstration that inside News Corp, if you have Murdoch's nod, you have vast powers of your own.' She pioneered a new genre, 'fictional biography', in which, for example, it was planned that baseball legend Mickey Mantle would have an affair with Marilyn Monroe. This created some controversy, but much bigger was to come when it became public that she was planning to publish a 'fictional' book by O.J. Simpson about the murder of his wife, to be titled _If I Did It_. The senior levels of News Corp decided not to publish the book, which led to sharp internal conflicts, during which Regan was dismissed. While her departure was as rapid as Chao's, the aftermath was much messier. Both sides made charge and counter-charge. She was accused of making comments about the 'Jewish cabal' at the company, and she counter-charged that Roger Ailes had wanted her to lie in a court case to protect Rudy Giuliani – mayor of New York, Republican presidential hopeful, and close to Murdoch and Ailes.
Eventually she secured an out-of-court settlement of US$10.75 million. Both Chao and Regan thrived by testing boundaries but eventually overstepped, causing Murdoch great embarrassment. On the other hand, Murdoch has often stood by employees who either breach social mores or who are caught in unethical behaviour. Wolff calls these 'Rupert's reprobates'. The alcohol consumption of _New York Post_ columnist Steve Dunleavy became legendary, but did not bother Murdoch. Eric Breindel had a career as editor and executive with News Corp, and his hawkish views and strong right-wing connections in Washington helped to make him 'a favourite of Murdoch's' in spite of a background that included a 1983 bust for buying heroin. Perhaps the most spectacular single incident came in late 2005. The then editor of the _Sun_, Rebekah Wade, aged 37, had attended a party with Murdoch at the home of Elisabeth Murdoch and her husband, Matthew Freud. After leaving the party she got into a drunken brawl with her husband, Ross Kemp, an actor who played a tough guy in a TV soapie. At 4am, Kemp called the police, terrified of his wife. The police arrested her and she spent eight hours in a cell sleeping it off, while Murdoch waited for her in the office. The other tabloids had fun with the story, especially given that Wade's _Sun_ had been campaigning against domestic violence at the time. Less amusing was when Fox News star Bill O'Reilly was accused of 'brutal sexual stalking and bullying'. The lawsuit, supported by phone transcripts, was settled out of court. Murdoch stood by O'Reilly, who kept his starring role at the network. Richard Johnson, the editor of the _New York Post_ 'Page Six' gossip page, was exposed as accepting bribes and having other ethical shortcomings, as a result of lawsuits mounted by two disgruntled former journalists, both of whom had been dismissed for ethical derelictions.
In their suits for wrongful dismissal, they claimed that editor Col Allan 'was regularly provided with liquor and sex' at a particular strip club. In a clever PR ploy, News Corp used Page Six to air the ex-employees' allegations, admitting some (that Allan did frequent the club) while denying others (saying that his behaviour was 'above reproach'). Allan was convinced he would be fired but he survived. Johnson also remained in his job. Indeed, Wolff felt that in some sense it 'actually [seemed] to confirm Johnson's status for Murdoch as an old-time, walk-on-the-wild-side, dangerous, rule-bucking, proudly cynical newspaperman'. Murdoch is often forgiving to those who have personal weaknesses or commit offences. Apart from any other considerations, those who survive probably become even more psychologically indebted to him. The danger, however, is that an organisational culture condoning unethical practices develops. A dramatic case of this comes not from journalism, but from what one would imagine would be the sedate world of supermarket advertising. News America Marketing, owned by News Corp, the 'obscure but profitable in-store and newspaper insert marketing business', 'has paid out about $655 million to make embarrassing charges of corporate espionage and anticompetitive behaviour go away'. The cheapest, but perhaps most interesting, involved a small company, Floorgraphics. This was 'a classic American start-up', begun in 1996 to pioneer the idea of decal ads on supermarket floors. It grew quickly. The founders, the Rebh brothers, met the head of News America Marketing, Paul Carlucci, in 1999, and according to their account in court, he told them that if they wandered into his territory he would destroy them: 'I work for a man who wants it all and doesn't understand anybody telling him he can't have it all.'
Years later, investigating what they thought was an underhand campaign against them, the Rebh brothers discovered that News America had hacked into their computer 11 times between October 2003 and January 2004. News America admitted this, but blamed it on a rogue individual whom they could not identify. Floorgraphics mounted a legal case, and eventually News settled for $29.5 million; days later News bought the company. In another case, at the beginning of 2011, News America 'paid out $125 million to Insignia Systems to settle allegations of anticompetitive behavior and violations of antitrust laws'. Its most costly payout was to Valassis Communications in 2010 – $500 million. This settlement was agreed just before the case was due to go to court. Carlucci used to show the sales staff the scene from the movie _The Untouchables_ where Al Capone beats a man to death with a baseball bat. According to testimony by a former employee, Robert Emmel, he told employees uncomfortable with the company's philosophy – 'bed-wetting liberals' – that he could arrange to have them 'outplaced from the company'. Murdoch promoted him in 2005, adding publisher of the _New York Post_ to his role as CEO of News America Marketing. Emmel worked for News America Marketing for seven years, from 1999 to 2006, but, increasingly troubled by some of its practices, he turned whistleblower in his last year there, and became a witness in the Floorgraphics case. He alleged that News America Marketing was engaging in 'criminal conduct against competitors' and using 'deceptive and illegal business practices' to defraud its retailer customers out of money owed. After dismissing him, News filed a lawsuit against him. The court found in Emmel's favour on all counts but one, and even that finding was overturned on appeal.
Originally Emmel's legal costs were paid by plaintiffs in other cases, but this stopped when those cases were settled: News Corp has devoted the efforts of up to 29 lawyers to pursuing Emmel personally, at a cost estimated at more than $2 million. Emmel, by contrast, has relied on two lawyers... working for no pay since January 2009. A claim by News to cover its legal fees for the breach of contract element of the lawsuit forced Emmel into bankruptcy. Then News demanded further investigations to make sure the bankruptcy was in order. According to Emmel's lawyer, News America has engaged in 'Rambo litigation tactics. They have a scorched earth policy, and it's taken a huge toll on [Emmel].' News Limited in Australia has lobbied in favour of whistleblower protection for public servants, but when it came to a whistleblower in its own company, News Corp opted for the legal equivalent of Al Capone's baseball bat. The phone hacking and bribery scandal engulfing News Corp and some of its executives and employees is the biggest media-related scandal in the history of English-speaking democracies. Yet there are still some who minimise its gravity. Former _Sun_ editor MacKenzie thought the Leveson Inquiry 'should decide there is nothing wrong with the press'. In public, Murdoch himself has been correct and contrite about the abuses. But in a meeting with _Sun_ journalists, secretly taped by three of them, he deplored how they had been 'picked on' by the 'old right-wing establishment' and 'even worse, the left-wing get-even crowd'. Then he denounced the 'incompetent', 'disgraceful' cops for 'the biggest inquiry ever, over next to nothing'. Whatever his personal resentments, however, the scandal will forever colour how Murdoch is remembered and how his career is interpreted. This wilful blindness about the immorality of what was done is one of the roots of the scandal. 
It was not inevitable that these scandals occurred in News Corp, but neither is it simply an unfortunate coincidence. These outrages were not the product of a few rogue individuals so much as of a rogue corporation. Of course the great majority of News Corp's 50,000+ employees – and the overwhelming majority of its journalists – are as repelled as the rest of the population by the abuses that have been revealed. However, the scandals were the product of a corporation where power is, perhaps uniquely, concentrated, and where a conforming hierarchical culture makes it difficult for instructions to be questioned or challenged. This is a corporation impatient with any ethical impediments to achieving the results it wants, and which greets external criticism with blanket denial and, often, aggression. And as Murdoch has said, 'For better or worse, [News Corp] is a reflection of my thinking, my character, my values.' There are still many ready to pay homage. Australian Prime Minister Tony Abbott, on the eve of his election, praised Murdoch fulsomely: 'I've got a lot of time for Rupert Murdoch because, whether you like his papers or don't like his papers, he's one of the most influential Australians of all time. Aussies should support our hometown heroes, and that's what I think, in his own way, Rupert Murdoch is.' Murdoch's businesses teetered on the brink of disaster a few times, and his record includes important errors that are sometimes glossed over, but the growth of his empire, from a small Adelaide company to a global giant, is still one of the most fascinating corporate stories ever. After the corporate split of 2013, Twenty-First Century Fox is as well placed to meet the challenges of the internet age as any other established media company. It is much less clear how the new News Corp, which includes the newspaper and publishing side, will fare. Business success is not the only measure of Murdoch's impact, however.
The scandal is evidence that media power corrupts as much as any other power. It is an ingrained habit of mind for us to think of the press as a protector of democracy rather than a threat to it. It is just as much a part of making democracy work better to make media power accountable as it is to make government power accountable. For American journalist Carl Bernstein, 'no other story eluded' the American press as much as that 'of Murdoch's destructive march across our democratic landscape'. Murdoch is the largest employer of journalists in the English-speaking democracies but in many ways lacks sympathy for their professional ideals of impartiality and independent disclosure. He has been more intent on being a political player, and has often wielded power impressively to help his favoured politicians and his own commercial interests. His power, though, has more often diminished rather than benefited the quality of our democratic life. Notes Abbreviations AAP | Australian Associated Press ---|--- ABC | Australian Broadcasting Corporation AFR | _Australian Financial Review_ _Aust_ | _Australian_ BBC | British Broadcasting Corporation _DT_ | _Daily Telegraph_ (Sydney) _G_ | _Guardian_ _I_ | _Independent_ _MMFA_ | _Media Matters for America_ _NYRB_ | _New York Review of Books_ _NYT_ | _New York Times_ _WP_ | _Washington Post_ 1 The passing of the Murdoch era? . Young p. 146 . www.ft.com/intl/companies/ft500. . Shawcross 1997, p. 398 . Page 2004, p. 329 . Leapman p. 222 . Kiernan p. 123 . Howard p. 378 . Belfiel et al. p. 1 . Leapman p. 107 . Greenslade 2004, p. 587; Brendon 2012, p. 62 . Belfield et al., p. 34 . Kiernan p. ix . Greenslade 2011 . Leveson Executive Summary p. 3 . Leveson p. 510 . Leveson Executive Summary p. 3 . Watson & Hickman p. xvii . Fallows p. 88 . McKnight p. 6 . Curtis 1998, p. 359 . McKnight p. 118 . Hawke 1994 . Howard 2010 . Price 2012 . Leveson transcript Murdoch 26-4-2012 . Sparrow _G_ 15-6-2012 . Leveson p. 34 . Belfield et al. p. 132 . 
Leapman p. 124 . Munster p. 193–4 . Leapman p. 150 . Curtis 2000, p. 359 . Munster p. 186 . Tiffen 1984, p. 164 . Kiernan p. 64 . Munster 1985 . Ellison 2010 . Glover 2012; Travis 2012; Wallis 2012 . O'Carroll 2012 . Ellison p. xxvii . Shafer 2008 . Curtis 1998, pp. 200–03 . Leapman p. 202 . Belfield et al. p. 48 . Kiernan p. 153 . Gillette _Businessweek_ 18-4-2013 . Chozick _NYT_ 22-4-2013 . Dyer _Crikey_ 23-4-2013 . _Economist_ 14-7-2011 . Myers _Poynter_ 28-6-2012 . Wolff _G_ 28-6-2012 . Dyer _Crikey_ 29-6-2012 . Beaujon _Poynter_ 28-6-2012 . Greenslade _G_ 2-7-2012 . Ghosh & Baker _AFR_ 24-12-2012 . Chenoweth _AFR_ 30-6-2012 . Kruger _Age_ 18-5-2013 2 Building the empire . Hawke 1994, p. 210 . Younger 2003, p. 344 . Chenoweth 2002, p. 24 . Chenoweth 2002, p. 26; Younger p. 348 . Kiernan p. 47 . Kennedy pp. 288–9 . Goot 1979, p. 6 . Munster p. 60 . Munster p. 67 . Hall 1976, p. 43 . Hall 1976, pp. 44–5 . Griffen-Foley 1999, pp. 280–1 . Kiernan p. 81 . Page 2004 . Page 2004; Cryle 2008 . Munster p. 74; Shawcross 1997 p. 59 . Munster 1985 . Barry 1993 . Kiernan p. 139 . Tiffen 1994 . Belfield et al. pp. 55–64 . Chenoweth 2002, p. 44 . Leapman p. 42 . Leapman p. 44 . Kiernan p. 97 . Munster p. 131 . Greenslade 2004, p. 213 . Wolff 2008, p. 124 . Greenslade 2004, p. 214 . Greenslade 2004, pp. 157, 215f . Kiernan pp. 120, 122 . Greenslade 2004, p. 250 . Greenslade 2004, p. 357; Seymour-Ure 1991, pp. 28–9 . Tunstall 1996, p. 41 . Halliday 2012 . Kiernan p. 144 . Wolff 2008, p. 16 . Wolff 2008, p. 19 . Tuccille 1989, p. 46; Kiernan pp. 147–8 . Tuccille 1989, p. 49 . Munster p. 160 . Kiernan p. 194 . Leapman p. 95 . Bowman p. 175; Shawcross 1997, p. 98 . Kiernan p. 303 . Kiernan p. vii . Tuccille 1989, pp. 56, 190 . McKnight p. 93 . Shawcross 1997, p. 212 . Tuccille p. 194 . McKnight p. 147 . Wolff 2008, p. 209 . Kruger _Age_ 18-5-2013 . Tuccille p. 162 . Walker 1982 . Leapman pp. 182–3 . Leapman p. 189 . Shawcross 1997, p. 132 . Read 1992 . Lawrenson & Barber 1985, p. 
18; Belfield et al. p. 284f; Greenslade 2004, pp. 376–7 . Tuccille p. 121 . Kiernan p. 243 . Greenslade 2004, p. 470 . Greenslade 2004, p. 472 . Crainer p. 12 . Greenslade 2004, p. 475 . Chippindale & Horrie p. 205 . Neil p. 140 . Neil p. 154 . Neil p. 185 . Marjoribanks p. 123 . Neil p. 186 . Greenslade 2004, p. 477 . Leveson Report 2012, p. 101 . _Herald_ 22-11-1979 . Belfield et al. p. 137 . Chenoweth 2002, p. 66 . Chenoweth 2002, pp. 66–7 . Chenoweth 2002, pp. 80–6 . Kiernan p. 270 . Tuccille p. 90 . Chenoweth 2002, p. 42 . Munster pp. 256–7; Kiernan pp. 276–9 . Tuccille pp. 132–3; Wolff p. 186 . Kiernan p. 272 . Tuccille p. 142 . Tuccille pp. 142–3 . Kiernan p. 304 . Kiernan p. 133 . Leapman pp. 60–1 . Kiernan pp. 134–5 . Munster p. 142 . Tuccille pp. 141–4 . La Monica p. 55 . Hay _AFR_ 15-7-1994 . Kimmel 2004 . Tuccille p. 174 . Tuccille pp. 170–1 . Belfield et al. pp. 276–7 . Kiernan p. 298 . Tuccille p. 223 . Tuccille p. 198; Belfield et al. p. 232 . Chenoweth 2002, p. 87 . Chenoweth 2002, p. 70 . Belfield et al. p. 291 . McIlwraith _AFR_ 21-12-1990; Hack p. 288 . Chenoweth 2002, pp. 69–70; Shawcross 1997, p. xx . Chenoweth 2002, p. 71 . Fidler _AFR_ 16-4-1991 . Chenoweth 2002, pp. 71-74 . Belfield et al. p. 307 . Peers _AFR_ 25-11-1991 . Chenoweth 2002, p. 120 . Belfield et al. p. 3 . Greenslade 2004, p. 559 . Chenoweth 2002, p. 146 . Belfield et al. p. 167 . Chenoweth 2004, p. 94 . Belfield et al. p. 184 . Chenoweth 2002, p. 98 . Chenoweth 2002, p. 98 . Chenoweth 2002, p. 99 . Belfield et al. p. 217 . Rohm pp. 92–7; Wolff 2008, p. 310 . Chenoweth 2012 . Shawcross 1997, p. 402 . Chenoweth _AFR_ 30-8-1999 . Tiffen 2007 . Belfield et al. p. 326; Dover pp. 6, 26 . Shawcross 1997, p. 403 . Shawcross 1997, p. 404 . Hyland _AFR_ 8-3-2005 . Dover p. 182 . 'STAR (Greater China)' Wikipedia . Crainer p. 15 . Robichaux p. 183 . Rohm p. 46 . Swint p. 172; Robichaux p. 186; Chenoweth 2002, p. 147–50 . Robichaux p. 94 . Robichaux p. 127 . Blumenthal & Goodenough pp. 
129ff . Robichaux p. 265 . Hack p. 390 . Rohm p. x . Rohm p. 257 . Rohm p. 234 . Chenoweth 2002, pp. xi–xii . Sorkin & Schiesel _NYT_ 29-10-2001 . Labaton _NYT_ 11-10-2002 . La Monica pp. 130–3 . La Monica p. 72 . Kirk & Edgecliffe-Johnson _FT_ 22-7-2011 . Kirk & Edgecliffe-Johnson _FT_ 22-7-2011 . Potter _AFR_ 5-5-2011 . Holgate _AFR_ 11-5-2012; Chozick _AFR_ 24-1-2012 . Dover pp. 271–2 . Dover p. 299 . Wolff p. 38 . Sykes _AFR_ 27-10-2004 . Wolff p. 39 . Wolff pp. 119–20 . Chessell _AFR_ 20-6-2012 . _BBC News_ 11-8-2011 . Watson & Hickman p. 64 . Dyer _Crikey_ 9-8-2012; Chenoweth _AFR_ 25-2-2009; Chessell 2011b . Crainer pp. 97–8 . Maney pp. 173, 177, 179 . Beaujon 4-3-2013 3 Midas of the media . Kiernan p. 21 . Shawcross 1997, p. 40 . Kiernan p. 51 . Greenslade 2004, p. 216 . Munster p. 3 . Brendon 2012, p. 30 . Greenslade 2004, p. 337 . Leapman p.57 . Leapman p. 57 . Wolff 2008, p. 132 . Shawcross 1997, p. 79 12. Page 2003, p. 83 . Chippindale & Horrie p. 24 . Chippindale & Horrie p. 41 . Chippindale & Horrie p. 29 . Greenslade 2004, p. 250 . Greenslade 2004, p. 250 . Rooney 2000, p. 103 . Brendon 2012, p. 30 . Chippindale & Horrie p. 32 . Menadue p. 95 . Chippindale & Horrie p. 16 . Lisners p. 92 . Leapman p. 59 . Wolff 2008, p. 133; Chippindale & Horrie p. 12 . Brendon 2012, p. 31 . Chippindale & Horrie p. 40 . Greenslade 2004, p. 251 . Wolff 2008, p. 205 . Leapman p. 121 . Belfield et al. p. 247 . Perez-Pena _NYT_ 8-5-2008 . Wolff 2008, p. 200 . Bowman p. 100 . Leapman p. 213 . Brendon 2012, p. 27 . Shawcross 1997, p. 304 . Auletta 2003, p. 275 . Watson & Hickman p. 13 . MacKenzie 2005, p. 73 . Guthrie p. 11 . Guthrie p. 12 . Wolff 2008, p. 306 . Chenoweth 2002, pp. 232–3 . Conn _G_ 15-6-2012 . Wolff 2008, p. 305 . La Monica p. 55 . Crainer p. 42 . Andrews p. 240 . Chenoweth 2002, p. 234 . Wolff 2008, p. 304 . Collins _AFR_ 14-2-2002 . Chenoweth 2002, p. 235 . Real p. 352 . Real p. 340 . Andrews p. 240 . Wolff 2008, p. 186 . Young 1991, p. 143 . 
La Monica pp. 123–4 . Chenoweth 2002, p. 231 . Chenoweth _AFR_ 7-7-2001 . Kiernan p. 50 . Greenslade 2004, p. 219 . Tuccille p. 58 . Leapman p. 126 . Crainer p. 2 . Wolff 2008, pp. 16, 34 . Wolff 2008, pp. 130, 126 . Wolff 2008, p. 298 . Leapman p. 14 . Ellison p. 63 . Dover p. 127 . Wolff 2008, p. 151 . Wolff 2008, p. 390 . Wolff 2008, p. 180 . Crainer p. 9 . Wolff 2008, p. 148 . Wolff 2008, pp. 155, 157 . Wolff 2008, p. 143 . Dover p. 40 . Wolff 2008, p. 187 . La Monica p. 49 . Crainer p. 134 . Dover p. 12 . Kimmel p. 151 . Kennedy p. 287; Tuccille p. 42 . Brendon 2012, p. 45 . Thomas & Litman 1991 . Brendon 2012, p. 38 . Crainer p. 66 . Chenoweth 2002, p. 93 . Dover p. 3 . Leapman p. 240 . Tuccille p. 118 . Wolff 2008, p. 115 . Wolff 2008, p. 5 . Crainer p. 93 . Chenoweth 2002, p. 204 . Kiernan p. 219 . Leapman p. 247 . Tuccille 1989, p. 92 . Fallows 2003, p. 85 . Leapman p. 127 . Munster p. 188 . Munster p. 188 . Bowden _Atlantic_ July/August 2008 . Ellison 2010, p. xviii . Greenslade 2004, p. 590 . Chenoweth 2002, p. 279 . Greenslade 2004, pp. 558–62 . Dover p. 122 . Ahmed & Thompson _AFR_ 14-10-2010 . Riddell _AFR_ 1-11-2010 . Chenoweth 2002, pp. 147–50 . Chenoweth 2002, p. 203 . Robichaux p. 102 . Robichaux p. 94 . Robichaux pp. 86, 182 . Robichaux pp. 93, 111 . Wolff 2008, p. 375 . Osborne _Daily Telegraph_ (UK) 22-1-2010; Dyer _Crikey_ 22-1-2010 . Deans & Tryhorn _G_ 8-2-2010 . Ricketson _Age_ 30-7-2008 4 Midas's lost touch . Wolff 2008, p. 351 . Chenoweth _AFR_ 3-5-2012 . Lachapelle et al. _Bloomberg News_ 18-7-2011 . Kiernan p. 94 . Wolff 2008, p. 151 . Mayne _Crikey_ 18-1-2011 . Crainer p. 115 . Wolff 2008, p. 8 . Dover p. 18 . Dover p. 22 . Dover p. 21 . Dover p. 53 . Dover pp. 31–2 . Dover p. 178 . Rohm p. 261 . Dover pp. 62ff . Dover p. 291 . Dover p. 224 . Dover p. 161 . Dover p. 171 . Dover p. 165 . Dover pp. 44, 243 . Dover p. 243 . Dover p. 244 . Dover p. 257 . Dover pp. 261–2 . Dover p. 253 . Dover p. 181 . Dover p. 179 . Wolff 2008, p. 329 . 
Dover p. 70 . Crainer p. viii; Rohm p. 245 . Hack p. 378 . Rohm p. 235 . Dover p. 188 . Dover pp. 194–5 . Rohm pp. 244–5 . Crainer p. 95 . Luft _G_ 17-11-2008 . Crooke _Crikey_ 8-11-2011 . Rohm p. 257 . Rohm p. 108 . Aylmer _AFR_ 12-11-2005 . La Monica p. 166 . La Monica pp. 6–7 . La Monica p. 163 . _BBC News_ Business 11-8-2011; Shoebridge _AFR_ 30-6-2011 . Chenoweth _AFR_ 5-2-2011 . Wolff _AFR_ 7-1-2013 . Kohler _Crikey_ 9-4-2010 . Ellison p. 24 . _Economist_ 21-7-2011 . Chenoweth 2002, p. 107 . Sloan _CNN Money_ 24-2-2011 . Rushton _AFR_ 25-8-2012; Watson & Hickman p. 162 . Sweney _G_ 5-9-2012 . Chessell _AFR_ 30-6-2012 . Chenoweth _AFR_ 16-9-2004 . Chenoweth _AFR_ 13-4-2004 . Chenoweth _AFR_ 26-8-2005 . Chenoweth _AFR_ 18-2-2011 . Chenoweth _AFR_ 3-11-1993 . Guy _AFR_ 22-10-2007 . Aylmer _AFR_ 13-11-2004 . Wolff 2008, p. 39 . Toohey _AFR_ 8-10-2005 . Kohler _The Age_ 13-8-2005 . Chenoweth _AFR_ 1-8-2005 . Chenoweth _AFR_ 31-1-2004 . La Monica pp. 140–1 . Chenoweth _AFR_ 13-8-2005 . La Monica p. 142 . Chenoweth _AFR_ 27-1-2004 . Wolff 2008, p. 119 . Wolff 2008, p. 120 . www.statista.com/statistics/195726/revenue-of-directv-since-2006/ . McKnight pp. 31, 39, 136, 149, 191ff . McKnight p. 26 . La Monica p. 23 . McKnight p. 34 . Tuccille p. 6 . Chenoweth 2002, p. 144 . Wolff 2008, p. 209; Chenoweth _AFR_ 5-2-2011 . Wolff 2008, p. 208 . Leveson 2012, p. 107 . Wolff 2008, p. 57 . Ellison p. 107 . MacMillan _Reuters_ 6-2-2009 . Ellison pp. 108, 72 . Wolff 2008, p. 120 . Wolff 2008, p. 392 . Brendon 2012, p. 57 5 From Lenin to Palin . Chenoweth 2002, p. 21 . Shawcross 1997, p. 78 . Kiernan pp. 22–7 . McKnight pp. 52, 47 . Kiernan p. 79 . Kiernan p. 132 . McKnight p. 70 . Cryle 2008, pp. 185–6; Munster p. 88 . _Australian_ 30-4-1965 . Cryle p. 190 . Cryle p. 206 . Cryle p. 218; Freudenberg pp. 327ff . Greenslade 2004, p. 457 . Kiernan pp. 167–8 . McKnight p. 119 . McKnight pp. 132–6 . Neil p. 166 . Neil p. 166 . Crainer p. 117 . McKnight p. 119 . Wolff 2008, p. 2 . 
Crainer p. 119 . Wolff 2008, p. 122 . McKnight p. 128 . Welch _Age_ 2-1-2012 . Griffiths _Reuters_ 21-4-2012 . Griffiths _Reuters_ 21-4-2012 . Mayne _Crikey_ 6-2-2012 . Mayne _Crikey_ 6-2-2012 . _Hollywood Reporter_ 10-1-2013 . _Hollywood Reporter_ 10-1-2013 . Macmillan _SMH_ 14-10-2012 . Macmillan _SMH_ 14-10-2012 . _Hollywood Reporter_ 10-1-2013 . AAP _Daily Telegraph_ 31-1-2012 . Wheatcroft _NYRB blog_ 14-3-2012 . _Hollywood Reporter_ 10-1-2013 . Ellis _The Global Mail_ 17-10-2012 . Simpson _SMH_ 29-3-2012 . Simpson _SMH_ 29-3-2012 . Fickies _The Daily Beast_ 18-11-2012 . Murphy _Christian Science Monitor_ 18-11-2012 . _Hollywood Reporter_ 10-1-2013 . Beinart T _he Daily Beast_ 18-12-2012 . Brendon 2012, pp. 33, 35 . Brendon 2012, p. 35; Chippindale & Horrie p. 72 . Evans 1983, p. 288 . Shawcross 1997, p. 141 . Belfield et al. p. 43 . Munster p. 203 . Neil p. 169 . Cryle pp. 217–18 . McKnight p. 77 . Neil p. 168 . McKnight p. 91 . McKnight p. 90 . Wolff 2008, p. 271 . Brendon 2012, p. 41 . McKnight pp. 190, 207, 210; Greenslade 2004 . Wolff 2008, pp. 271, 309, 259 . Kiernan pp. 27–8 . Menadue p. 89 6 The enthusiastic player . Menadue p. 90 . Chenoweth _AFR_ 12-4-2003 . Crainer p. 119 . Kennedy p. 294 . Munster pp. 2–3 . Brendon 2012, p. 39 . McKnight p. 80 . Curtis 2000, p. 177 . Wolff 2008, p. 269 . Stephens 1982, p. 44 . Hack p. 133 . Hack p. 133 . Rich _New York_ 31-7-2011 . Kennedy p. 295 . Kiernan p. 209 . Dunstan 1981, p. 61 . Munster p. 64 . Souter 1981, p. 380 . Calwell 1972, p. 93 . Munster p. 71 . McKnight p. 57 . Munster pp. 82–3 . McKnight p. 58 . Ramsey pp. 338ff . Page 2003, pp. 116–19 . Oakes & Solomon p. 275 . Munster p. 99 . Menadue pp. 110–11 . Freudenberg 1977, p. 236 . Page 2003, p. 160 . Menadue pp. 113–14 . Munster pp. 104–5 . Kiernan p. 168 . Dorling _Age_ 20-5-2013 . McKnight p. 65 . Hocking pp. 46–7 . Hocking pp. 47–8 . Kiernan p. 142 . Griffen-Foley 2003, p. 188 . Menadue p. 108 . Munster p. 95; Menadue p. 109 . Menadue p. 109 . 
Munster p. 101 . Menadue p. 112 . Hocking p. 187 . Leapman pp. 50–3 . Wolff 2008, p. 127 . Auletta PBS documentary . Horne p. 64 . Munster p. 111 . Leapman p. 67 . Kiernan p. 174 . Lloyd p. 291 . Kiernan p. 172 . Munster p. 110 . Leapman p. 70 . Munster p. 108 . Chadwick p. 217 . Cohen _WP_ 18-7-2011 . Bagehot _Economist_ 22-11-2003 . Wolff 2008, p. 396 . Kiernan p. 140; Wolff 2008, p. 267 . Australian Electoral Commission . Leapman p. 141 . Tuccille pp. 7, 42 . Goot 1978; Tiffen 1994, p. 333 . Kelly 1995, p. 244 . McAllister et al. pp. 277–8 . Goot 1983, pp. 206, 205 . Tiffen 1989, p. 150 . Ramsey p. 80 7 The passionate player . Munster pp. 136–7 . Munster p. 147 . Munster p. 148 . McKenzie & Silver 1968 . Chippindale & Horrie pp. 48–51 . Greenslade 2004, p. 252 . Chippindale & Horrie p. 60 . McKnight p. 169 . Greenslade 2004, p. 543 . Chippindale & Horrie p. 140 . Curtis 1998, p. 359 . Page 2003, p. 358 . Page 2003, p. 373 . Young 1989, p. 454 . Young 1989, p. 432 . Page 2003, p. 377 . Page 2003, p. 378 . Kiernan p. 319 . Young 1989, p. 437 . Page 2003, p. 382 . Kiernan p. 320 . Kiernan pp. 285–7 . Page 2003, p. 387 . Belfield et al. p. 105 . Page 2003, pp. 390–1 . Tempest _G_ 12-1-2006 . Greenslade 2004, p. 550 . Chippindale & Horrie p. 359 . Greenslade 2004, pp. 548–9 . Greenslade 2004, pp. 606–611 . Curtis 1999, p. 3 . Greenslade 2004, p. 612 . Greenslade 2004, pp. 616–8; . Tumber 2004 . Kiernan p. 144 . Schudson 1993, p. 104 . Bernstein & Woodward _WP_ 9-6-2013 . Wolff 2008, p. 266 . Wolff 2008, p. 267 . McKnight p. 68 . Kiernan p. 145 . McKnight p. 76 . Kiernan p. 260 . Belfield et al. p. 85 . McKnight & Hobbs p. 846 . Neil p. 172 . Young 1989, p. 350 . Kiernan p. 287 . McKnight p. 78 . McKnight pp. 73–5 . Kiernan p. 288 . McKnight p. 82 . Ellison p. 23 . McKnight p. 42 . Ellison p. 173 . Chenoweth 2002, pp. 209–211; McKnight p. 144; _Wikipedia_ 'Pat Robertson controversies' . McKnight p. 144 . Curtis 2000, p. 36 . McKnight pp. 145, 190–197 . 
Shawcross 1997, p. 141 . McKnight p. 106 . Wolff 2008, p. 250 . Ellison p. 116 . Ellison p. 139 . Tunstall 1996, pp. 240, 247 . McNair 2000, pp. 141–2; Tunstall 1996 . McNair p. 142 8 The dominant player . Brendon 2012, p. 55 . Adley _G_ 14-5-2012 . Watson & Hickman p. 78 . Massie _Foreign Policy_ 13-7-2011; O'Carroll _G_ 17-5-2012 . Leveson 2012, p. 1188 . Hack pp. 375–6 . Ramsey p. 353 . Williams _AFR_ 7-10-2012 . Littlemore 2003 . Lewis _AFR_ 4-12-1999 . Chenoweth _AFR_ 14-7-2006 . Grattan 2006 . McKnight p. 178 . Tunstall 1996, p. 253 . Greenslade 2004 . Leveson p. 105 . Greenslade _G_ 7-3-2005 . Leveson transcript 25-4-2012 . Ramsey p. 357 . Neighbour p. 24 . _Daily Telegraph_ editorial 5-8-2013 . Tiffen 2007 . Shawcross _Time_ 3-11-1999 . Murdoch 2008 . Greenslade 2004, p. 620 . Price 2005, p. 86 . McSmith 2005 . McKnight pp. 176, 182 . Greenslade 2004, p. 662 . McKnight & McNair 2012, p. 7 . Manne 2005, p. 76 . Lusetich _Australian_ 10-4-2003 . McKnight pp. 36-7 . McKnight & McNair p. 12 . McKnight p. 198 . McKnight p. 199 . McKnight p. 202 . Manne 2011, p. 18 . McKnight p. 189 . Auletta 2003, p. 260 . McKnight p. 200 . Wolff 2008, p. 213 . McKnight & McNair p. 9 . McKnight p. 202 . McKnight & McNair p. 12 . McKnight & McNair p. 16 . Manne 2005, p. 86 . Manne 2011, p. 23 . Manne 2005, p. 82 . Manne 2011, p. 18 . McKnight p. 203 . Manne 2011, p. 22 . Sheridan _Australian_ 26-4-2003 . Wheatcroft _NYRB Blog_ 21-3-2012 . Schultz 1998, p. 25 . Leveson 2012, p. 1196 . _The Sun_ 28-8-2009 . Roshco p. 23 . Wright _Age_ 15-7-2011 . Keane _Crikey_ 11-5-2011 . Holmes _ABC Media Watch_ , May 2011 . Carswell _DT_ 11-5-2011 . Howard p. 157 . Cryle p. 313 . Tiffen 1999 . Costello with Coleman p. 281 . Hartcher _Age_ 21-11-2009 . Thompson _ABC News_ 18-7-2011 . Crook _Crikey_ 26-6-2013 . Thompson and staff _ABC News_ 20-7-2011 . Hywood _SMH_ 29-7-2013 . Barker _Inside Story_ 18-7-2011 . Hartcher _SMH_ 21-11-2009 . Wallace _AFR_ 10-6-1994 9 Reaping the rewards . 
Shawcross 1997, p. 62–3 . Leveson transcript 25-4-2012 . Leveson transcript 25-4-2012 . Watson & Hickman p. 172 . Kiernan p. 311 . Price _G_ 1-7-2006 . Leapman p. 239 . Hack p. 329 . Page 2003, p. 1 . Leveson p. 1432 . Shawcross 1997, p. 113 . Leveson p. 1233 . Shawcross 1997, p. 124 . Leveson transcript 25-4-2012 . Leveson 2012, p. 1241 . Belfield et al. p. 81 . Leveson 2012, p. 1121 . Belfield et al. p. 81 . Leveson 2012, p. 1243 . Page 2003, p. 271 . Shawcross 1997, p. 129 . Page 2003, p. 262 . Brendon 1982, p. 247 . Page 2003, p. 273 . Leveson transcript 25-4-2012 . Ellison p. xxvii . Leveson transcript 25-4-2012 . Greenslade _G_ 13-12-2012; O'Carroll _G_ 12-12-2012 . Shawcross 1997, p. 341 . Giles pp. 215–6 . Belfield et al. p. 85 . Evans 2009, p. 437 . Kiernan p. 241 . Munster p. 235 . Page 2003, p. 314 . Leapman p. 224 . Leapman p. 226 . Kiernan p. 244 . Evans 1983, p. 395 . Sabbagh & O'Carroll _G_ 17-5-2012 . Evans 2009, p. 440 . Evans 1983, p. 401 . Evans 1983, pp. 441, 402 . Belfield et al. p. 37 . Shawcross 1997, p. 254 . Greenslade 2004, p. 482 . Page 2003, p. 269 . Goodwin pp. 46–7 . Goodwin p. 44 . Goodwin p. 48 . Shawcross 1997, p. 301 . Goodwin p. 48 . Goodwin p. 51 . Goodwin p. 51 . Belfield et al. p. 186 . Belfield et al. p. 189 . Shawcross 1997, p. 342 . Goodwin p. 51 . Shawcross 1997, p. 303 . Shawcross 1997, p. 344 . Goodwin p. 52 . Belfield et al. p. 211 . Goodwin p. 51 . Belfield et al. p. 212 . Shawcross 1997, p. 354 . Shawcross 1997, p. 343 . Shawcross 1997, p. 355 . Shawcross 1997, p. 355 . Goodwin p. 52 . Toynbee _G_ 3-1-2012 . Goodwin p. 52 . Chenoweth 2002, p. 343 . Goodwin p. 53 . Shawcross 1997, p. 401 . Neil p. 248 . Kiernan p. 301 . Shawcross 1997, p. 241 . Chadwick p. 11 . Barry 1993, pp. 322–3 . Bowman p. 24 . Tiffen 1994, pp. 330–5 . Bowman p. 21 . D'Arcy p. 94 . D'Arcy p. 40 . D'Arcy p. 125 . D'Arcy pp. 1–2 . D'Arcy pp. 97–8 . Carroll 1990, p. 68 . Chadwick p. 36 . Shawcross 1997, p. 244 . Chadwick p. 23 . Chadwick p. 36 . 
Bowman p. 26 . Chadwick p. 47 . Page 2003, p. 408 . Brendon 2012, p. 25 . Tiffen 1995, pp. 29–31 . Chadwick p. 204 . Chadwick p. 206; Lee Report pp. 258–9 . Hywood _AFR_ 13-12-1985 . Lee Report pp. 258–9 . Tiffen 1987 . Tiffen 1995, p. 25 . Carroll 1990, p. 61 . Fraser & Simons, pp. 291–2 . Belfield et al. p. 143 . Carroll p. 93 . Belfield et al. p. 154 . 'Announcement by the Board of Directors of News Limited' 22-1-1987 . Australian Broadcasting Tribunal 27-1-1987, pp. 5–6 . Chadwick p. 98 . Senate Select Committee 1994 . Given 2002, p. 253 . Chadwick p. 97 . Chadwick p. 95 . Chadwick p. 92 . Belfield et al. p. 141 . Noam, Eli (ed.) 2013 forthcoming . Belfield et al. p. 160 . Chadwick p. 176 . Chadwick p. 188 . Chadwick p. 117 . Walsh _SMH_ 23-4-1987 . Tiffen 1994, pp. 335–6 . Leveson 2012, p. 1134 . Leveson 2012, p. 1135 . Leveson 2012, p. 1137 . Watson & Hickman p. 7 . Campbell p. 79 . Campbell p. 369 . Leveson 2012, p. 1139 . Lister _I_ 28-5-2012 . Leveson 2012, p. 1137 . Goodwin p. 143 . Goodwin p. 148 . McKnight p. 172 . McKnight p. 174 . Goodwin p. 152 . Barnett 2012, p. 352 . O'Carroll _G_ 14-5-2012 . Dover p. 178 . Watson & Hickman p. 7 . Price _G_ 1-7-2006 . Price _G_ 1-7-2006 . Price 2005, p. 13 . Chenoweth 2002, pp. 271–3; Leveson Murdoch witness statement p. 25; Leveson transcripts 25-4-2012 . Brendon 2012, p. 55 . Leveson 2012, pp. 1146–7 . Watson & Hickman p. 7 . Larsen & Grice _I_ 10-4-1999; Gibson & Watt _G_ 10-4-1999 . 'Murdoch 0, Football 1' _G_ 10-4-1999 . Larsen & Grice _I_ 10-4-1999 . Price 2005, p. 95 . Barnett 2012, p. 353 . Brendon 2012, p. 56 . Barry _Crikey_ 12-7-2012 . Leveson Report p. 1450 . Belfield et al. p. 115 . Rohm p. 30 . Shawcross 1997, p. 216 . Wolff 2008, p. 125 . Page 2003, pp. 126–7; Munster pp. 139–41 . Kiernan p. 189 . Kiernan p. 237 . Page 2003, pp. 343–4 . Leapman p. 128; Munster p. 189 . Ellison p. 206 . Wolff 2008, p. 233 . Ellison p. 145 . Kiernan, p. 238 . McKnight p. 4 10 The market for truth . Carlyon p. 1 . 
Schultz 1994 . Kiernan p. 122 . Ellison p. 186 . Belfield et al. p. 254 . Kiernan p. 254 . Shawcross 1997,p. 193 . Tuccille p. 29 . Leapman p. 77 . Shawcross 1997, p. 82 . Bowman p. 88 . Shawcross 1997, p. 279 . Kiernan pp. 254–5 . Chippindale & Horrie p. 148 . Chippindale & Horrie p. 179 . Page 2003, p. 153 . Page 2003, p. 135 . Shawcross 1997, p. 421 . Greenslade 2004, p. 422 . Greenslade 2004, p. 657 . Shawcross 1997, p. 151 . Chippindale & Horrie p. 116 . Chippindale & Horrie p. 114 . Chippindale & Horrie p. 115 . Munster p. 245 . Chippindale & Horrie pp. 118–9; Munster p. 333 . Chippindale & Horrie p. 121 . Kiernan p. 68 . Leapman p. 30 . Chippindale & Horrie p. 332 . Leapman p. 105 . Leapman p. 106 . Shawcross 1997, p. 102 . Page 2003, p. 218 . Kiernan p. 202 . Page 2003, p. 217 . Shawcross 1997, p. 102 . Page 2003, p. 221 . Kiernan p. 204 . Davies _Open Democracy_ 7-7-2011 . Chippindale & Horrie p. 178 . Chippindale & Horrie pp. 166–7 . Chippindale & Horrie pp. 321–2 . Chippindale & Horrie p. 179 . Chippindale & Horrie p. 302 . Chippindale & Horrie pp. 149, 152 . Chippindale & Horrie p. 150 . Leveson Executive Summary p. 10 . Marsh 2012 . Page 2003, p. 215 . Leapman p. 92 . Kiernan p. 200 . Kiernan p. 262 . Kiernan p. 263 . Lewis 2011 . Chippindale & Horrie p. 106 . Yelland _Mail Online_ 27-3-2010 . Chippindale & Horrie p. 181 . Greenslade _G_ 7-9-2007 . Chippindale & Horrie p. 255 . Bloomberg _Age_ 10-1-2012 . Chippindale & Horrie p. 260 . Brendon 2012, p. 31 . Chippindale & Horrie p. 269 . Chippindale & Horrie p. 268 . Munster pp. 248–9 . Knightley p. 143 . Evans 1983, p. 403 . Kiernan p. 247 . Kiernan p. 247 . Giles p. 241 . Evans 1983, p. 404 . Giles p. 242 . Page 2003, p. 77 . Brendon 2012, p. 22 . Belfield et al. p. 252 . Chippindale & Horrie p. 170 . Tiffen 1989 . Chippindale & Horrie p. 46 . Chenoweth 2002, p. 199 . Brookes _Mother Jones_ 24-8-1998 . Gutstein _rabble.ca_ 26-7-2011 . Chippindale & Horrie p. 325; _Sun_ 17-11-1989 . 
Chippindale & Horrie p. 181 . Page 2003, p. 206 . Page 2003, p. 472 . Page 2003, pp. 454–5 . McKnight p. 122 . McKnight pp. 44, 123 . Latham 2005, p. 91 . Latham 2005, p. 299 . McKnight p. 84 . Watson & Hickman p. 10 . Guthrie p. 224 . Guthrie p. 225 . Guthrie p. 244 . Guthrie p. 14 . Crook _Crikey_ 3-8-2011 . Mayne _Crikey_ 18-4-2011 . Kiernan p. 290 . Chippindale & Horrie p. 131 . Chippindale & Horrie pp. 136–7 . Chenoweth 2002, p. 318 . Leveson p. 1130 . McKnight p. 177 . Shawcross 1997, p. 155 . Kiernan p. 161 . Holmes _The Drum_ 21-12-2012 . Dodd, Andrew _Crikey_ 22-6-2011 . _Australian_ editorial 14-9-2010 . Neighbour p. 20 . Inglis 2002 . Page 2003, pp. 102ff . Young 1991, p. 151 . Shawcross 1997, p. 51 . Lisners p. XII . Watson & Hickman pp. 26, 191 . Cathcart pp. 42–3 . Leveson Report p. 502; Cathcart pp. 42–3 . Cathcart p. 45 . Leveson Report p. 503 . Guthrie pp. 279–82 . Guthrie p. 215 . Macintyre & Clark p. 68 . Australian Press Council Adjudication No. 890 (November 1996) . McIntyre & Clark p. 71 . Guthrie p. 215 . Leveson Report p. 471 . Mair 2012, p. 51 . Shawcross 1997, p. 288 .
11 The Republic of Fox
Ackerman 2001 . Swint p. 155 . Brock & Rabin-Havt p. 51 . Auletta 2003, p. 257 . Auletta 2003, p. 251 . Dickinson . Kurtz 2011 . Wolff _GQ Magazine_ 5-1-2012 . Dickinson 2011 . Dickinson 2011 . Auletta 2003, p. 265 . Swint pp. 24–5 . Swint p. 59 . Brock & Rabin-Havt p. 29 . Brock & Rabin-Havt p. 31 . Auletta 2003, p. 265 . Swint p. 91 . Swint p. 38 . Swint pp. 91–3 . Brock & Rabin-Havt p. 32 . Swint p. 99 . Swint p. 101 . Swint pp. 108–9 . Swint p. 113 . Auletta 2003, p. 253 . Hack p. 300 . McKnight p. 154 . Hack p. 342 . Hack p. 344 . Swint p. 156 . Swint p. 114 . Jamieson & Capella p. 45 . Swint p. 137 . Jamieson & Capella p. xiv . Brock & Rabin-Havt p. 36 . Swint p. 166 . Swint p. 164 . Swint p. 164 . Swint p. 165 . Brock & Rabin-Havt pp. 52–3 . Hack p. 386 . Wolff 2008, pp. 282, 346 . Ackerman p. 1 . Gans p. 229 . Day _Aust_ 10-4-2003 .
Day _Aust_ 10-4-2003 . Dickinson 2011 . Auletta 2003, p. 258 . Schudson 1978 . Dickinson 2011 . Auletta 2003, p. 256 . Auletta 2003, p. 257 . Dickinson 2011 . Swint p. 168 . Brock & Rabin-Havt p. 57 . Auletta 2003, p. 258 . Sherman p. 23 . Swint p. 209 . Sherman p. 23 . Sherman p. 24 . Sherman p. 87 . Brock & Rabin-Havt p. 41 . Sherman p. 24 . Brock & Rabin-Havt p. 83 . Brock & Rabin-Havt p. 80 . Brock & Rabin-Havt pp. 92–3 . Brock & Rabin-Havt p. 11 . Sherman p. 23 . Brock & Rabin-Havt p. 241 . Milbank _WP_ 3-11-2010 . Milbank _WP_ 3-11-2010 . Brock & Rabin-Havt p. 243 . Brock & Rabin-Havt p. 243 . Brock & Rabin-Havt p. 143 . Brock & Rabin-Havt p. 98 . Brock & Rabin-Havt p. 220 . _Crikey_ 14-1-2011 . Brock & Rabin-Havt p. 252 . Sherman p. 22 . Kurtz 2011 . Oremus _Slate_ 7-11-2012 . Richardson p. 12 . Holcomb et al. . Dickinson 2011 . Dickinson 2011 . Brock & Rabin-Havt p. 38 . McKnight p. 157 . McKnight p. 159 . Ackerman 2001 . Ackerman 2001 . _SourceWatch_ . _SourceWatch_ . _Outfoxed_ video . _Media Matters for America_ 14-07-2004 . Swint p. 68 . Dickinson 2011 . Brock & Rabin-Havt pp. 86–7 . Sherman _NY_ 17-12-2012 . Sherman 2011, p. 24 . Kurtz 2011 . Kurtz 2011 . Auletta 2003, p. 273 . Dickinson 2011 . Swint p. 120 . Halberstam p. 716 . Starr 2010 . Vaughn 2007 . Starr 2010 . Vaughn 2007 . Knox _Newser_ 30-4-2013 . Starr 2010 . Silverman _Poynter_ 19-4-2012 . Pew Research Center 16-8-2012 . Public Policy Polling . Public Policy Polling . McKnight p. 21 . Brock & Rabin-Havt p. 286 . Moore _Yahoo News_ 5-10-2011 . Swint p. 176 . Dickinson 2011 . Kurtz 2011 . Auletta 2003, p. 279 . Swint p. 187 . Kurtz _Daily Beast_ 22-5-2012 . Moos _Poynter_ 22-5-2012 . Adams _G_ 18-11-2010 . Auletta 2003, p. 277 . McKnight p. 141 . Milbank _WP_ 18-7-2010 . Jamieson & Capella p. 184 . Bromwich _NYRB_ 25-11-2010 . Jamieson & Capella p. 101 . Jamieson & Capella p. 65 . Brock 2004 . Jamieson & Capella . Brock & Rabin-Havt pp. 4–5 . Brock & Rabin-Havt p. 100 . Brock & Rabin-Havt p. 
145 . Brock & Rabin-Havt p. 126 . Johnson _MM_ 26-7-2012 . Jamieson & Capella p. 64 . Brock & Rabin-Havt p. 165 . Dimiero et al. _MM_ 19-12-2012 . Swint p. 161 . Dionne _WP_ 26-4-2010 . Frum _NY_ 2011 . Brock & Rabin-Havt p. 252 . Sherman 2011, p. 22 . Dickinson 2011 . Brock & Rabin-Havt pp. 65–6 . Brock & Rabin-Havt p. 253 . Rugaber & Mayerowitz _Portland Press Herald_ 6-10-2012 . Maloy _MMFA_ 26-9-2012 . Boehlert _Media Matters_ 7-11-2012 . Kull pp. 13, 14 . Beaujon _Poynter_ 23-5-2012 . Brock & Rabin-Havt pp. 13–4 . Beaujon _Poynter_ 25-5-2012 . Swint p. 220 . Sherman p. 22 . _SourceWatch_ . _Crikey_ 10-2-2011 . Pleat _MM_ 11-9-2012 . Groch-Begley & Shere _MM_ 1-10-2012 . Brock & Rabin-Havt p. 285 . Brock & Rabin-Havt p. 117 . Bromwich _NYRB_ 25-11-2010 . Brock & Rabin-Havt p. 122 . Brock & Rabin-Havt pp. 106–7 . Brock & Rabin-Havt p. 110 . Brock & Rabin-Havt p. 111 . Brock & Rabin-Havt p. 285 . Brock & Rabin-Havt p. 199 . Brock & Rabin-Havt p. 228 . Brock & Rabin-Havt p. 228 . Brock & Rabin-Havt p. 215 . Brock & Rabin-Havt p. 213 . Brock & Rabin-Havt p. 216 . Hananoki 2012 . Sherman p. 23 . Brock & Rabin-Havt p. 255 . Bernstein _WP_ 20-12-2012 . Bernstein _WP_ 20-12-2012 . Maloy _MMFA_ 4-12-2012 . Sherman p. 25 . Muddle _Open Democracy_ 9-11-2012 . Keller, Bill _NYT_ 12-8-2012 . Dionne _WP_ 24-12-2012 . Lieven _Open Democracy_ 15-10-2012 . Brock & Rabin-Havt p. 281 . Richardson p. 11 . Sherman 2011 . Frum _NY_ 20-11-2011 . Lilla _NYRB_ 12-1-2012 . Cohen _WP_ 13-3-2012 . Cohen _WP_ 13-3-2012 . Kurtz _Daily Beast_ 25-9-2011 . Brock & Rabin-Havt p. 282 . Brock & Rabin-Havt p. 282 . Muddle _Open Democracy_ 9-11-2012 . Peters _NYT_ 5-7-2012 . Kurtz _Daily Beast_ 22-5-2012 . Samuelson _WP_ 7-9-2012 . Lieven _Open Democracy_ 15-10-2012 . Boehlert 11-12-2012 . Brock & Rabin-Havt p. 278 . Hemmer _The Conversation_ 23-4-2012 . Frum _NY_ 20-11-2011 . Beaujon _Poynter_ 8-11-2012 . _Media Matters for America_ 23-3-2010 . _SourceWatch_ . Chavets _Vanity Fair_ 5-3-2013 .
Chavets _Vanity Fair_ 5-3-2013 . Wolff 2008, p. 346 . Wolff 2008, p. 347 . Wolff 2012 . Suich _AFR_ 5-11-2010 . La Monica p. 87 . Page _Open Democracy_ 23-4-2012; Jones 2003 . _Guardian_ editorial 8-5-2006 . Moos _Poynter_ 22-5-2012 . Richardson p. 12 . Nielsen 'Cross-Platform Report Q3 2011' Reports and Insights .
12 Those who live by scandal
Gilligan 2011 . Leveson 2012, p. 308 . Watson & Hickman p. 85 . 'Phone-hacking denials: what Murdoch executives said' _Guardian_ 16-8-2011 . Rusbridger p. 154 . Watson & Hickman p. 114 . Rusbridger 2012, p. 156 . Leveson Executive Summary p. 8 . Leveson 2012, p. 510 . Tiffen _The Conversation_ 22-11-2012 . Cathcart 2012a, p. 103 . Deans & Sweeney _G_ 30-6-2011 . Watson & Hickman p. 217 . UK Parliament 13-7-2011 . Gerson _WP_ 22-7-2011 . Watson & Hickman pp. 306–7 . Leveson 2012, p. 510 . O'Carroll _G_ 1-6-2012 . Leveson Executive Summary p. 3 . Myers _Poynter_ 14-7-2011 . Watson & Hickman p. 4 . Watson & Hickman p. 241 . Watson & Hickman p. 5 . Watson & Hickman p. 55 . Davies _G_ 16-8-2011 . Sabbagh & Halliday _G_ 1-5-2012 . Sabbagh _G_ 1-5-2012 . Leveson Executive Summary p. 3 . Hopkins p. 14 . Mair 2012a, p. 48 . Sabbagh _G_ 27-11-2012; Mulholland _G_ 7-10-2012 . Bennett & Townend p. 176 . Leveson Executive Summary p. 24 . Leveson Executive Summary p. 26 . Ash _G_ 14-7-2011 . Watson & Hickman p. 210 . Oborne _Spectator_ 7-7-2011; Watson & Hickman p. 162 . Barnett _Open Democracy_ 17-7-2011 . Hill _G_ 15-9-2010 . Watson & Hickman p. 176 . Watson & Hickman pp. 258–9 . Cohen 2011 . Editorial _G_ 19-1-2009 . Watson & Hickman pp. 63–4 . Peston _BBC News Business_ 22-8-2011; Curtis & Robinson _G_ 24-8-2011 . Leveson 2012, p. 27 . Watson & Hickman pp. 107, 121 . Gerson _WP_ 22-7-2011 . Tiffen 1999, p. 80ff . Mair 2012b, p. 51 . Massie _Foreign Policy_ 13-7-2011 . Beaujon _Poynter_ 29-2-2012 . Barnett 2013, p. 357 . Watson & Hickman p. 217 . Sparrow _G_ 6-5-2012 . Watson & Hickman p. 144 . Plunkett & O'Carroll _G_ 31-5-2012 .
Wintour & Sabbagh _G_ 24-4-2012 . Weir _Open Democracy_ 27-5-2012 . Sabbagh & Wintour _G_ 11-5-2012 . Sabbagh & Wintour _G_ 11-5-2012 . Plunkett & O'Carroll _G_ 31-5-2012 . Leveson transcripts 26-4-2012 . Plunkett & O'Carroll _G_ 31-5-2012 . Wintour _G_ 30-4-2012 . Burns _NYT_ 4-9-2012 . Tiffen _The Conversation_ 22-11-2012 . Watson & Hickman p. 313 . McGurran _BBC News_ 13-7-2011 . Lisners p. 143 . Leveson Executive Summary p. 19 . Leveson Executive Summary p. 18 . Watson & Hickman pp. 84–5, 165 . Leveson 2012, p. 361 . Leveson 2012, p. 367 . Watson & Hickman p. 86 . Leveson 2012, p. 374 . Leveson Executive Summary p. 19 . Watson & Hickman p. 133 . Leveson 2012, p. 369; Watson & Hickman p. 308 . Watson & Hickman pp. 88–9 . Watson & Hickman p. 113 . Watson & Hickman pp. 102, 235; Rusbridger 2012 . Cathcart 2012a, p. 26 . Arango _NYT_ 19-2-2011 . Rusbridger 2012, p. 155 . Hind _Open Democracy_ 18-7-2011; Bennett & Townend p. 171; Watson & Hickman p. 105 . Rusbridger 2012, p. 155 . Cathcart 2012a, pp. 30–1; Watson & Hickman p. 30; Oborne _Spectator_ 7-7-2011 . Watson & Hickman p. 40 . Leveson 2012, p. 269 . Leveson Executive Summary p. 7 . Leveson Executive Summary p. 9 . Watson & Hickman p. 138 . Watson & Hickman p. 238 . Mayne _Crikey_ 21-10-2011 . West _SMH_ 17-10-2012 . Mair 2012b, p. 52 . Cathcart 2012a, p. 25 . Leveson Executive Summary p. 7 . Leveson Executive Summary pp. 7, 8; Leveson Report pp. 288, 305–6 . Watson & Hickman p. 124, see also pp. 133, 150 . Barry _Crikey_ 26-7-2012 . Leveson 2012, p. 338 . Leveson 2012, p. 341 . Watson & Hickman pp. 170, 185 . Watson & Hickman p. 56 . Watson & Hickman p. 98 . 'Phone-hacking denials: what Murdoch executives said' _G_ 16-8-2011 . Brendon 2012, p. 32 . Mair 2012a, p. 221 . Lisners p. 61 . Lyall & Becker _NYT_ 7-7-2011 . Watson & Hickman p. 8 . Watson & Hickman p. 285 . Ignatius _WP_ 14-7-2011 . Watson & Hickman pp. 83, 243 . Watson & Hickman p. 72 . Watson & Hickman pp. 276–7 . Kissane _SMH_ 25-4-2012 .
Leveson 2012, pp. 347–8 . Leveson transcript 25-4-2012 . Watson & Hickman p. 90 . Watson & Hickman p. 302 . Cathcart 2012a, p. 24 . Watson & Hickman p. 280 . Sabbagh & O'Carroll _G_ 12-12-2012 . Cathcart 2012a, pp. 33, 12 . Watson & Hickman pp. 289–90 . Leveson 2012, p. 266 . Cathcart 2012a, p. 33; Watson & Hickman pp. 108–12 . Leveson 2012, pp. 512, 413 . Watson & Hickman pp. 284–5 . Doward & Jiminez _G_ 19-11-2011 . Leveson Executive Summary p. 7; Cathcart 2012, p. 34 . Jones 2012, p. 132 . Barry _Crikey_ 21-11-2012 . Leveson 2012, p. 433 . Watson & Hickman p. 173 . Watson & Hickman p. 262 . Chippindale & Horrie p. 223 . Curtis 2000, p. 176 . Leveson 2012, p. 505 . Rusbridger 2012, p. 158 . Watson & Hickman p. 294 . Cathcart 2012a, p. 16 . Cathcart 2012a, p. 61 . Watson & Hickman p. xviii . Marsh p. 103 . Leveson Executive Summary p. 4 . Jones, Paul pp. 127–8 . MacKenzie _Daily Mail Online_ 20-2-2012 . Jones, Paul pp. 128–9 . Watson & Hickman p. 317 . _ABC News_ 8-8-2012 . Crook _Crikey_ 10-1-2013 . Rushe _G_ 19-7-2012 . Mayne _Crikey_ 25-10-2011; Mayne _Crikey_ 1-3-2012 .
13 Murdoch's road to scandal
Watson & Hickman p. 19 . Flint & James _LA Times_ 9-8-2011 . Chessell _Aust_ 23-7-2011 . Watson & Hickman p. 315 . Wolff 2008, p. 402 . Crainer p. 98 . Young 1991, p. 146 . Young 1991, p. 149 . Young 1991, p. 150 . Lisners p. xiv . Wolff 2008, p. 141 . Carr _NYT_ 6-5-2012 . Ellison p. 85 . Pilkington _G_ 13-9-2011 . Peters _NYT_ 9-8-2011 . GMI Ratings October 2011 . Grigg _AFR_ 5-10-2011 . Guthrie p. 7 . Leveson transcripts 26-4-2012 . Campbell p. 76 . Campbell p. 111 . Neil p. 160 . Ellison p. 178 . Dover p. 136 . Barry _Crikey_ 22-6-2012 . Kiernan p. 140 . Wolff 2008, p. 6 . Guthrie p. 6 . Guthrie pp. 284–5 . Brendon 2012, p. 35 . Evans 2009, p. 440 . Guthrie p. 41 . Shawcross 1997, p. 246 . Guthrie p. 9 . Shawcross 1997, p. 297 . Shawcross 1997, pp. 296–7 . Neil p. 183 . Bowman p. 93 . McKnight p. 57 . Kiernan p. 66 . Guthrie p. 81 . Wolff 2008, p. 18 .
Wolff 2008, p. 246 . Guthrie pp. 221–2 . Neil p. 192 . Wolff 2008, p. 42 . Wolff 2008, p. 18 . Guthrie p. 49 . Page 2003, p. 154 . Page 2003, p. 155 . Page 2003, p. 157 . Leapman p. 39 . Shawcross 1997, p. 84 . Kiernan p. 78 . Leapman p. 110 . Leapman p. 110 . Belfield et al. p. 131 . Neil p. 416 . Shawcross 1997, p. 389 . Belfield et al. p. 130 . Neil p. 412 . Neil p. 182 . McKnight p. 57 . Chenoweth 2002, p. 321 . Neil p. 175 . Neil p. 176 . Page 2003, p. 367 . Greenslade 2004, p. 501 . Chippindale & Horrie p. 329 . Wolff 2008, p. 212 . Leveson 2012, p. 501 . Leveson 2012, p. 498 . Watson & Hickman pp. 130, 198; Leveson 2012, p. 526 . Chippindale & Horrie p. 206 . Chippindale & Horrie p. 283 . Massie 2011 . Wolff 2008, p. 7 . Leveson 2012, p. 496 . Watson & Hickman p. 304 . McConville & Smith p. 302 . Watson & Hickman p. 14 . Leveson 2012, p. 546; Cathcart 2012a, pp. 14–5 . Rusbridger 2011 . Muir & Martinson _G_ 22-4-2010 . Ignatius _WP_ 14-7-2011 . Chenoweth 2002, p. 41 . Kiernan p. 161 . Kiernan p. 222 . Leveson transcripts 25-4-2012 . Leveson 2012, p. 306 . Page 2003, p. 480 . Stein _Huffington Post_ 25-11-2011 . Greenslade _G_ 4-7-2012 . Stoeffel _Observer.com_ 4-7-2012 . Ignatius _WP_ 14-7-2011 . Bennett & Townend p. 178 . Watson & Hickman p. 244 . Watson & Hickman p. 194 . Watson & Hickman pp. 175, 163 . Watson & Hickman p. 95 . Leveson 2012, p. 517; Watson & Hickman p. 94 . Watson & Hickman pp. 103–4 . Leigh & Davies _G_ 22-5-2012 . Lyall & Becker _NYT_ 7-7-2011 . Leveson 2012, p. 516 . Watson & Hickman pp. 118–9; Leveson 2012, p. 516; Tiffen _The Conversation_ 22-11-2012 . Cathcart 2012a, p. 66; Watson & Hickman pp. 110–2 . Robinson _G_ 18-11-2011 . Wolff _Adweek_ 8-8-2011 . Leveson Executive Summary p. 11 . Atkins p. 28ff . Leveson transcripts 26-4-2012 . Leveson transcripts 26-4-2012; Leveson 2012, p. 521 . Leveson 2012, pp. 498–9 . Harcup p. 278 . Guthrie p. 89 . Chippindale & Horrie p. 246 . Guthrie pp. 87–9, 97 . Turner p. 346 . 
Chippindale & Horrie p. 350 . Cathcart 2012a, p. 36 . Watson & Hickman p. 16 . Peppiatt p. 22 . Harcup p. 279 . Greenslade 2012a, p. 419 . Greenslade 2012a, p. 419 . Shawcross 1997, p. 54 . Arango _NYT_ 29-9-2008 . Page 2003, p. 217 . Chippindale & Horrie p. 94 . Watson & Hickman p. 301 . Watson & Hickman p. 13 . Greenslade 2012a, p. 417 . Tunstall 1971, pp. 168–71 . Tunstall 1996 . Watson & Hickman p. 22 . Watson & Hickman p. 293 . Jones 2012, p. 131 . Jones 2012, p. 127 . Watson & Hickman p. 23 . Robinson _G_ 19-2-2006 . Watson & Hickman p. 15 . Cathcart 2012a, p. 40 . Chippindale & Horrie p. 152 . Jones 2012, p. 127 . McConville & Smith p. 300 . Leveson 2012, p. 263 . Rohm p. 61 . McKnight p. 142 . Hack p. 302 . Shawcross 1997, p. 395 . Wolff 2008, p. 215 . McKnight & Hobbs p. 844 . Wolff 2008, p. 215 . Crainer pp. 231–3 . McKnight & Hobbs p. 844 . Rich _New York_ 31-7-2011 . Wolff 2008, p. 210 . McKnight p. 92 . Wolff 2008, p. 210 . Wolff 2008, p. 211 . Wolff 2008, pp. 198–9 . Wolff 2008, p. 211 . Carr _NYT_ 17-7-2011 . Benjamin & Calabresi _Time_ 15-8-2011 . Benjamin & Calabresi _Time_ 15-8-2011 . Carr _NYT_ 17-7-2011 . Carr _NYT_ 17-7-2011 . Carr _NYT_ 17-7-2011 . Reuters 2-9-2011 . Carr _NYT_ 17-7-2011 . Pilkington _G_ 17-8-2011 . Knott _Crikey_ 1-5-2013 . Barry _Crikey_ 13-10-2011 . Chenoweth _AFR_ 13-7-2013 . Jukes _Daily Beast_ 4-7-2013 . Greenslade _G_ 4-7-2013 . Shawcross 1997, p. 398 . _ninemsn_ 6-9-2013 .
Bernstein _G_ 20-12-2012 .
References
AAP 'Rupert Murdoch tells Twitter fans Google is piracy leader' _DT_ 31-1-2012 ABC 'Church of England dumps News Corp shares' _ABC News_ 8-8-2012 Ackerman, Seth 'The most biased name in news' FAIR (Fairness and Accuracy in Reporting) July/August 2001 Adams, Richard 'Fox News chief Roger Ailes apologises after describing NPR as "Nazis"' _G_ 18-11-2012 Adley, Esther 'Alastair Campbell back at the Leveson inquiry – and with great clunking balls' _G_ 14-5-2012 Ahmed, Nabila and Thompson, Sarah 'Another day, another fight for Murdoch' _AFR_ 14-10-2010 Andrews, David L. 'Sport and the transnationalising media corporation' _Journal of Media Economics_ , 16(4), 2003 Arango, Tim 'A "Tabloid Guy" calls it a night after 41 years with Murdoch' _NYT_ 29-9-2008 —— 'The Murdoch in Waiting' _NYT_ 19-2-2011 Ash, Timothy Garton 'Britain should seize this chance to break the culture of fear at its heart' _G_ 14-7-2011 Atkins, Chris 'How the tabloids obfuscated and misled in the face of Starsuckers evidence' in Richard Lance Keeble and John Mair (eds) _The Phone Hacking Scandal. Journalism on Trial_ (Bury St Edmunds, Arima, 2012) Auletta, Ken _Backstory. Inside the Business of News_ (NY, Penguin, 2003) —— 'Who's afraid of Rupert Murdoch?' (PBS documentary, _Frontline_ , 2005) —— 'What Murdoch faces now' _New Yorker_ 17-7-2011 Australian Broadcasting Tribunal. Inquiry into Foreign Control of the Herald and Weekly Times Limited. Brief for Counsel Assisting. 27-1-1987 _Australian_ editorial 'Yes, we will keep reporting' 14-9-2010 _Australian_ editorial 30-4-1965 Australian Electoral Commission 'House of Representatives – Two Party Preferred Results 1949 – Present' Australian Press Council _Adjudication No.
890_ November 1996 Aylmer, Sean 'Bidders aren't the only ones feeling toxic about' _AFR_ 13-11-2004 —— 'Murdoch talks up web, cuts forecasts' _AFR_ 12-11-2005 Bagehot 'Peer today, gone tomorrow' _The Economist_ 22-11-2003 Barker, Geoffrey 'Is this News Limited's defence?' _Inside Story_ 18-7-2011 Barnett, Anthony 'After Murdoch' _Open Democracy_ 17-7-2011 Barnett, Steven 'It's ownership, stupid: why plurality lies at the heart of media policy reform – and how to achieve it' in Richard Lance Keeble and John Mair (eds) _The Phone Hacking Scandal. Journalism on Trial_ (2012) Barry, Paul _The Rise and Rise of Kerry Packer_ (Sydney, Bantam Books, 1993) —— '"Arse-kicking" pollies cause media woes, says ex-Sun ed' _Crikey_ 13-10-2011 —— 'The Power Index: biz bosses, Rupert Murdoch at #4' _Crikey_ 22-6-2012 —— 'Murdoch and ex-editor trade blows' _Crikey_ 12-7-2012 —— 'So whodunit? The great phone hacking cover up' _Crikey_ 26-7-2012 —— 'Charges against Brooks, Coulson show Murdoch can't escape' _Crikey_ 21-11-2012 _BBC News_ 'News Corp profits fall on sale of MySpace website' Business 11-8-2011 Beaujon, Andrew 'Morning media roundup: How to borrow a horse from Scotland Yard' _Poynter_ 29-2-2012 —— 'Survey: NPR's listeners best-informed, Fox viewers worst-informed' _Poynter_ 23-5-2012 —— 'Anonymous Fox spokesperson bravely talks trash about school behind news habits survey' _Poynter_ 25-5-2012 —— 'News Corp. announces split, "will wow the world"' _Poynter_ 28-6-2012 —— 'Did conservative media fail its audience election night?' _Poynter_ 8-11-2012 —— 'Buffett, Newhouses, Murdoch on Forbes list of billionaires' _Poynter_ 4-3-2013 Beinart, Peter 'Rupert Murdoch's tweet about the "Jewish Owned Press" is dumb and offensive' _The Daily Beast_ 18-12-2012 Belfield, Robert, Hird, Christopher and Kelly, Sharon _Murdoch.
The Great Escape_ (London, Macdonald & Co., 1991) Benjamin, Mark and Calabresi, Massimo 'News Corp's US hacking problem' _Time_ 15-8-2011 Bennett, Daniel and Townend, Judith 'Press "Omerta": How newspapers' failure to report the phone hacking scandal exposed the limitations of media accountability' in Richard Lance Keeble and John Mair (eds) _The Phone Hacking Scandal. Journalism on Trial_ (2012) Bernstein, Carl 'Murdoch's Watergate?' _Newsweek_ 9-7-2011 —— 'Why the US media ignored Murdoch's brazen attempt to hijack the presidency' _G_ 20-12-2012 Bernstein, Carl and Woodward, Bob '40 years after Watergate, Nixon was far worse than we thought' _WP_ 9-6-2013 Bloomberg 'Murdoch "angry" over Elton John settlement' _Age_ 10-1-2012 Blumenthal, Howard J. and Goodenough, Oliver R. _This Business of Television_ (3rd edn, NY, Billboard Books, 2006) Boehlert, Eric 'The Romney "landslide" that wasn't' _Media Matters for America_ ( _MMFA_ ) 7-11-2012 —— 'Is conservative media one big "racket"?' _MMFA_ 11-12-2012 Bowden, Mark 'Mr Murdoch goes to war' _Atlantic_ July/August 2008 Bowman, David _The Captive Press_ (Melbourne, Penguin, 1988) Brendon, Piers _The Life and Death of the Press Barons_ (London, Secker & Warburg, 1982) —— _Eminent Elizabethans_ (London, Jonathan Cape, 2012) Brock, David _The Republican Noise Machine. Right-wing Media and How it Corrupts Democracy_ (NY, Random House, 2004) Brock, David, Rabin-Havt, Ari and MMFA _The Fox Effect. How Roger Ailes turned a network into a propaganda machine_ (New York, Random House, 2012) Bromwich, David 'The rebel germ' _NYRB_ 25-11-2010 Brookes, Julian 'Tobacco and Rupe' _Mother Jones_ 24-8-1998 Burns, John F. 'British Premier reshuffles Cabinet, promoting official linked to Murdoch case' _NYT_ 4-9-2012 Calwell, A.A. _Be Just and Fear Not_ (Melbourne, Lloyd O'Neil Pty Ltd, 1972) Campbell, Alastair _The Blair Years. Extracts from the Alastair Campbell Diaries_ (London, Hutchinson, 2007) Carlyon, Les _Paper Chase.
The Press under Examination_ (Melbourne, Herald & Weekly Times, 1982) Carr, David 'Troubles that money can't dispel' _NYT_ 17-7-2011 —— 'The cozy compliance of the News Corp board' _NYT_ 6-5-2012 Carroll, V.J. _The Man who Couldn't Wait. Warwick Fairfax's Folly and the Bankers who Backed_ _Him_ (Melbourne, William Heinemann Australia, 1990) Carswell, Andrew 'New rich adrift in ocean of debt and despair/Federal budget: Special edition' _DT_ 11-5-2011 Cathcart, Brian 'The press, the Leveson Inquiry and the Hacked Off campaign' in Richard Lance Keeble and John Mair (eds) _The Phone Hacking Scandal. Journalism on_ _Trial_ (2012a) —— _Everybody's Hacked Off. Why we don't have the press we deserve and what to do about it_ (London, Penguin, 2012) Chadwick, Paul _Media Mates: Carving up Australia's Media_ (Melbourne, Sun Books, 1989) Chavets, Zev 'Exclusive excerpt: Roger Ailes off camera' _Vanity Fair_ 5-3-2013 Chenoweth, Neil 'The story behind News' super shares' _AFR_ 3-11-1993 —— 'Murdoch's millennium gamble: a titanic tussle for eyeballs' _AFR_ 30-8-1999 —— 'Rabbitohs teach News a lesson in losing' _AFR_ 7-7-2001 —— _Virtual Murdoch. 
Reality Wars on the Information Highway_ (London, Vintage, 2002) —— 'How I won the war' _AFR_ 12-4-2003 —— 'Malone, Murdoch's new best friend' _AFR_ 27-1-2004 —— 'The man the Murdoch children fear' _AFR_ 31-1-2004 —— 'QPL buy-out pitched $500m above rivals' _AFR_ 13-4-2004 —— 'Lot of stuff wrapped up in News' _AFR_ 16-9-2004 —— 'Murdoch, Malone play corporate cat and mouse' _AFR_ 1-8-2005 —— 'Murdoch may be losing grip on News' _AFR_ 13-8-2005 —— 'Murdoch family reaps a cool $97m from News' _AFR_ 26-8-2005 —— 'There's a new game in town' _AFR_ 14-7-2006 —— 'There's always the son' _AFR_ 25-2-2009 —— 'A second coming, again' _AFR_ 5-2-2011 —— 'Mystery over Murdoch family payout' _AFR_ 18-2-2011 —— 'Control is embedded in Murdoch's DNA' _AFR_ 3-5-2012 —— 'Abandon ship' _AFR_ 30-6-2012 —— 'The stalking of Rupert Murdoch' _AFR_ 13-7-2013 Chessell, James 'Shocked market contemplates the future of News' _Australian_ 23-7-2011 —— 'Rupert Murdoch reaffirms his faith in newspapers' _Australian_ 12-8-2011 —— 'Foxtel's clear reception' _AFR_ 20-6-2012 —— 'Failed News deal points to future' _AFR_ 30-6-2012 Chippindale, Peter and Horrie, Chris _Stick it up your Punter! The Rise and Fall of the Sun_ (London, Mandarin, 1990) Chozick, Amy 'Star rises for News' Mr Fixit' _AFR_ 24-1-2012 —— 'News Corp in $139 million settlement with shareholders' _NYT_ 22-4-2013 Cohen, Richard 'Just desserts for "Citizen Murdoch"' _WP_ 18-7-2011 —— 'Sarah Palin's foolishness ruined US politics' _WP_ 13-3-2012 Cohen, Roger 'The Cameron collapse' _NYT_ 19-7-2011 Collins, Luke 'Murdoch's vision fades to a blur' _AFR_ 14-2-2002 Conn, David 'How football had kept the Murdoch empire afloat' _G_ 15-6-2012 Cook, John 'What happened to climate change? Fox News and the US elections' _The_ _Conversation_ 1-10-2012 Costello, Peter with Coleman, Peter _The Costello Memoirs_ (Melbourne, Melbourne University Press [MUP], 2008) Crainer, Stuart _Business. The Rupert Murdoch Way. 
Ten Secrets of the World's Greatest Deal Maker_ (Oxford, Capstone Publishing Company, 2002) _Crikey_ 'How many times did Bill O'Reilly interrupt Obama?' _Crikey_ 10-2-2011 _Crikey_ Comment 14-1-2011 Crook, Andrew 'Forget news media diversity, the internet has tightened News' squirrel grip' _Crikey_ 8-11-2011 —— 'Nixon book launch: Adler eavesdrops on a revealing Hun news conference' _Crikey_ 3-8-2011 —— 'Super fund takes Rupert's advice, sells out of News Corp' _Crikey_ 10-1-2013 Cryle, Denis _Murdoch's Flagship. The First Twenty-five Years of the Australian Newspaper_ (Melbourne, MUP, 2008) Curtis, Polly and Robinson, James 'Andy Coulson "broke" Commons pass rules by failing to declare NI payments' _G_ 24-8-2011 Curtis, Sarah (ed.) _The Journals of Woodrow Wyatt Vol. One_ (London, Macmillan, 1998) —— _The Journals of Woodrow Wyatt Vol. Two_ (London, Pan, 1999) —— _The Journals of Woodrow Wyatt Vol. Three_ (London, Macmillan, 2000) D'Alpuget, Blanche _Robert J. Hawke: A Biography_ (Melbourne, Schwartz, 1982) D'Arcy, John _Media Mayhem. Playing with the Big Boys in Media_ (Melbourne, Brolga Publishing, 2005) Daley, Gemma and Chessell, James 'Media law reforms to hit News Corp' _AFR_ 25-2-2013 Davies, Nick 'Phone hacking: News of the World reporter's letter reveals cover-up' _G_ 16-8-2011 Davies, William 'Hack-gate: the latest cultural contradiction of British conservatism?' _Open Democracy_ 7-7-2011 Day, Mark 'Bias all part of Fox's battle plan' _Australian_ 10-4-2003 Deans, Jason and Tryhorn, Chris 'BSkyB to sell 10% stake in ITV' _G_ 8-2-2010 Deans, Jason and Sweeney, Mark 'News Corp's BSkyB bid: Jeremy Hunt gives green light for takeover' _G_ 30-6-2011 Dickinson, Tim 'How Roger Ailes built the Fox News fear factory' _Rolling Stone_ 9-6-2011 Dimiero, Ben, Gertz, Matt and Savillo, Rob 'Bill O'Reilly covers the "War on Christmas" more than actual wars, again' _MMFA_ 19-12-2012 Dionne, E.J.
'The right court fight' _WP_ 26-4-2010 —— 'Paul Ryan and the triumph of theory' _WP_ 13-8-2012 —— 'It's our system on the cliff' _WP_ 24-12-2012 Dodd, Vikram 'News International "deliberately" blocked investigation' _G_ 20-7-2011 Dorling, Philip 'Whitlam radical, Fraser arrogant, Hawke moderate: secret cables reveal Murdoch insights' _Age_ 20-5-2013 Dover, Bruce _Rupert's Adventure in China. How Murdoch Lost a Fortune and Found a Wife_ (Melbourne, Viking, 2008) Doward, Jamie and Suarez Jiminez, Silvia ' _News of the World_ private detective vows to expose tabloid stalker culture' _G_ 19-11-2011 Dunstan, Don _Felicia, the political memoirs of Don Dunstan_ (Melbourne, Macmillan, 1981) Dyer, Glenn 'Court rules against Murdochs: stake in ITV is pie in the sky' _Crikey_ 22-1-2010 —— 'News split: print losses not tolerated anywhere, says Murdoch', _Crikey_ 29-6-2012 —— 'News takes axe to assets with $2.9b cut in value' _Crikey_ 9-8-2012 —— 'News Corp settles' _Crikey_ 23-4-2013 _Economist_ 'How to lose friends and alienate people' 14-7-2011 _Economist_ 'Britain's phone-hacking scandal: wider still and wider' 21-7-2011 Ellis, Eric 'Calling a scumbag a scumbag: Rupert Murdoch's revealing twitter habit' _The Global Mail_ 17-10-2012 Ellison, Sarah _War at the Wall Street Journal. How Rupert Murdoch Bought an American Icon_ (Melbourne, Text Publishing, 2010) Evans, Harold _Good Times, Bad Times_ (London, Weidenfeld & Nicolson, 1983) —— _My Paper Chase. True Stories of Vanished Times. 
An autobiography_ (London, Little Brown, 2009) Fallows, James 'The age of Murdoch' _Atlantic_ September 2003 Fickies, Jonathan 'Rupert Murdoch's most offensive tweets' _Daily Beast_ 18-11-2012 Fidler, Stephen 'Murdoch rescued from the brink' _AFR_ 16-4-1991 Fitzsimmons, Jill and Fong, Jocelyn 'The _Wall Street Journal_ : dismissing environmental threats since 1976' _MMFA_ 2-8-2012 Flint, Joe and James, Meg 'Rupert Murdoch's LA dinner featured steak – but not hacking' _LA Times_ 9-8-2011 Fraser, Malcolm and Simons, Margaret _Malcolm Fraser. The Political Memoirs_ (Melbourne, Miegunyah Press, 2010) Freudenberg, Graham _A Certain Grandeur. Gough Whitlam in Politics_ (Melbourne, Sun Books, 1977) Frum, David 'When did the GOP lose touch with reality?' _New York_ 20-11-2011 Gans, Herbert _Deciding What's News. A Study of CBS Evening News, NBC Nightly News, Newsweek and Time_ (New York, Pantheon Books, 1979) Gerson, Michael 'The Murdoch mess complicates life for David Cameron' _WP_ 22-7-2011 Ghosh, Sayanti and Baker, Liana B. 'News Corp publishing wing deep in red' _AFR_ 24-12-2012 Gibson, Janine and Watt, Nicholas 'BSkyB bid for United blocked' _G_ 10-4-1999 Giles, Frank _Sundry Times_ (London, John Murray, 1986) Gillette, Felix 'Rupert Murdoch, News Corp dodge phone-hacking ruin' _Bloomberg_ _Businessweek_ 18-4-2013 Gilligan, Andrew 'Scandal could sink Cameron' _Age_ 20-7-2011 Given, Jock _Turning off the Television. Broadcasting's Uncertain Future_ (Sydney, UNSW Press, 2003) Glover, Stephen 'Thatcher, Murdoch and the meeting that was erased from history' _I_ 19-3-2012 Goodwin, Peter _Television under the Tories. Broadcasting Policy 1979–1997_ (London, BFI Publishing, 1998) Goot, Murray 'The media and the campaign' in Howard R. Penniman (ed.) _Australia at_ _the Polls. 
The National Elections of 1980 and 1983_ (Sydney, Allen & Unwin, 1983) —— _Newspaper Circulation in Australia 1932–1977_ (Melbourne, La Trobe University Media Centre Papers, 1978) Governance Metrics International 'GMI Ratings' Risk List: News Corp' _GMIRatings_ October 2011 Greenslade, Roy 'Their master's voice' _G_ 17-2-2003 —— _Press Gang. How Newspapers make Profits from Propaganda_ (London, Pan Books, 2004) —— 'Why the _Sun_ is anti-Labour again' _G_ 7-3-2005 —— 'Sun reporter was "aghast" at MacKenzie's Hillsborough headline' Greenslade blog, _G_ 7-9-2007 —— 'How Murdoch's philosophy created a climate of misbehaviour' _G_ 18-7-2011 —— 'News Corporation split – it's GoodCo versus BadCo' _G_ 2-7-2012 —— 'What did Murdoch say to _New York Post_ editor about "racist" cartoon?' _G_ 4-7-2012 —— 'James Harding gets a terrific send-off as staff signal their support for him' _G_ 13-12-2012 —— 'Hacking's Disreputable History' in Richard Lance Keeble and John Mair (eds) _The Phone Hacking Scandal. Journalism on Trial_ (2012) —— 'Rupert Murdoch revealed – tape exposes the media mogul's real opinions' _G_ 4-7-2013 Greenwald, Robert _Outfoxed: Murdoch's War on Journalism_ (Brave New Films, MoveOn.org, 2004) Griffen-Foley, Bridget _The House of Packer. The Making of a Media Empire_ (Sydney, Allen & Unwin, 1999) —— _Party Games. Australian Politicians and the Media from War to Dismissal_ (Melbourne, Text Publishing, 2003) Griffiths, Peter 'Murdoch mocks UK government as he arrives for ethics inquiry' _Reuters_ 21-4-2012 Grigg, Angus 'No News good news for AMP' _AFR_ 5-10-2011 Groch-Begley, Hannah and Shere, David 'A history of dishonest Fox charts' _MMFA_ 1-10-2012 _Guardian_ 'Phone-hacking denials: what Murdoch executives said' 16-8-2011 _Guardian_ editorial 'Murdoch 0, Football 1' 10-4-1999 —— 'Sky News boss to leave' 8-5-2006 —— 'This is Andy Coulson's reshuffle' 19-1-2009 Guthrie, Bruce _Man Bites Murdoch.
Four decades in Print, Six Days in Court_ (Melbourne, MUP, 2011) Gutstein, Donald 'Murdoch's ties to Big Tobacco' _rabble.ca_ 26-7-2011 Guy, Robert 'Murdoch struggles with rebellious investors' _AFR_ 22-10-2007 Hack, Richard _Clash of the Titans. How the Unbridled Ambition of Ted Turner and Rupert_ _Murdoch has created global empires that control what we read and watch_ (Beverley Hills CA, New Millenium Press, 2003) Halberstam, David _The Powers that Be_ (New York, Dell, 1979) Hall, Susan _Supertoy: 20 Years of Australian Television_ (Melbourne, Sun Books, 1976) Halliday, Josh ' _Guardian, Daily Mail_ and _Daily Express_ circulations rise month on month' _G_ 8-12-2012 Hananoki, Eric 'Report: 30+ Fox News hosts and contributors who are campaigning for Republicans' _MMFA_ 1-11-2012 —— 'Roger Ailes' hypocrisy on "dividing people into groups"' _MMFA_ 11-2-2013 Harcup, Tony 'The "conscience clause": coming soon to a newsroom near you?' in Richard Lance Keeble and John Mair (eds) _The Phone Hacking Scandal. Journalism on_ _Trial_ (2012) Hawke, Bob _The Hawke Memoirs_ (Melbourne, William Heinemann Australia, 1994) Hay, David 'Murdoch claims it is fourth network' _AFR_ 15-7-1994 Hemmer, Nicole 'Obama is a Muslim Black Panther! Republican candidates and the conservative media' _The Conversation_ 23-4-2012 Herman, Edward S. and Chomsky, Noam _Manufacturing Consent: the political economy of_ _the mass media_ (London, Vintage, 1994) Hill, Dave 'Boris Johnson dismisses concerns over _News of the World_ phone hacking as "codswallop"' _G_ 15-9-2010 Hind, Dan 'The BBC investigates' _Open Democracy_ 18-7-2011 Hocking, Jenny _Gough Whitlam: His time. The Biography Vol. 
II_ (Melbourne, Miegunyah Press, 2012)
Holcomb, Jesse, Mitchell, Amy and Rosenstiel, Tom 'Cable: by the numbers' _The State of the News Media 2012_ (An Annual Report, The Pew Research Center's Project for Excellence in Journalism)
Holgate, Ben 'News profit jumps 47pc in March quarter' _AFR_ 11-5-2012
_Hollywood Reporter_ 'Rupert Murdoch's 12 best tweets of 2012' 10-1-2013
Holmes, Jonathan 'Beware the unsourced figure' _Media Watch_, ABC 16-5-2011
—— 'Trivial pursuit: when the _Australian_ gets personal' _The Drum_ 21-12-2012
—— 'The unwritten rule of the drop' _Media Watch_, ABC 15-4-2013
Hopkins, Huw L. 'The rotten apple drops, bounces, rolls, settles and grows, and its seeds spread far and wide: an updated hackgate timeline' in Richard Lance Keeble and John Mair (eds) _The Phone Hacking Scandal. Journalism on Trial_ (2012)
Horne, Donald _Death of the Lucky Country_ (Melbourne, Penguin, 1976)
House of Representatives Select Committee on the Print Media _News and Fair Facts. The Australian Print Media Industry_ (Canberra, AGPS, March 1992) (The Lee Report)
Howard, John _Lazarus Rising. A personal and political autobiography_ (Sydney, HarperCollins, 2010)
Hyland, Anne 'Murdoch to raise his Star in the East' _AFR_ 8-3-2005
Hywood, Greg '"Mates" and others: Hawke's view of the media groups' _AFR_ 13-12-1985
Ignatius, David 'The world according to Rupert Murdoch' _WP_ 14-7-2011
Inglis, K.S. _The Stuart Case_ (2nd edn, Melbourne, Black Inc., 2002)
Jamieson, Kathleen Hall and Capella, Joseph N. _Echo Chamber. Rush Limbaugh and the Conservative Media Establishment_ (New York, Oxford University Press [OUP], 2008)
Johnson, Melody 'Fox "doubling down" on deceptively edited comments' _MMFA_ 25-7-2012
Jones, Nicholas 'Good political theatre Mr Jay, a shame about the questions: Why the Leveson public hearings were a missed opportunity' in Richard Lance Keeble and John Mair (eds) _The Phone Hacking Scandal.
Journalism on Trial_ (2012)
Jones, Paul 'Regulating for freedom: media lessons from Australia' _Open Democracy_ 18-9-2003
Jukes, Peter 'Rupert Murdoch admits "mistakes" and "panic" in leaked tape' _Daily Beast_ 4-7-2013
Kain, Erik 'The Republican Party needs to ditch Fox News if it wants to win' _Mother Jones_ 7-1-2012
Keane, Bernard 'A partisan paper now wants to silence dissenters' _Crikey_ 29-11-2010
—— 'War on the middle class? More a war on our kids' _Crikey_ 11-5-2011
Keller, Bill 'The Romney package' _NYT_ 12-8-2012
Kelly, Paul _November 1975. The Inside Story of Australia's Greatest Political Crisis_ (Sydney, Allen & Unwin, 1995)
Kennedy, Trevor _Top Guns_ (Melbourne, Sun Books, 1988)
Kiernan, Thomas _Citizen Murdoch_ (New York, Dodd, Mead & Company, 1986)
Kimmel, Daniel M. _The Fourth Network. How Fox Broke the Rules and Reinvented Television_ (Chicago, Ivan R. Dee, 2004)
Kirk, Stuart and Edgecliffe-Johnson, Andrew 'From headlines to bottom line' _FT_ 22-7-2011
Kissane, Karen 'James Murdoch kept in dark over phone hacking because he would "cut out cancer"' _SMH_ 25-4-2012
Knightley, Phillip _A Hack's Progress_ (London, Jonathan Cape, 1997)
Knott, Matthew 'Media giants unite to blow the whistle on source protection' _Crikey_ 1-5-2013
Knox, Merrill 'Evening news ratings' _Newser_ 30-4-2013
Kohler, Alan 'Murdoch's poison pill leaves a bad taste' _Age_ 13-8-2005
—— 'Rupert's wrong: distribution, not content, is king' _Crikey_ 9-4-2010
Kruger, Colin 'Break-up could breathe new life into News' _Age_ 18-5-2013
Kull, Steven 'Misperceptions, the media and the Iraq war' The PIP/Knowledge Networks Poll (Program on International Policy Attitudes, University of Maryland, 2003)
Kurtz, Howard 'Roger's reality show' _Daily Beast_ 25-9-2011
—— 'Ailes regrets "scum" attack on _NYT_' _Daily Beast_ 22-5-2012
—— 'Rupert Murdoch gets his man as Mitt Romney picks Paul Ryan' _Daily Beast_ 12-8-2012
La Monica, Paul R.
_Inside Rupert's Brain_ (London, Portfolio, 2009)
Labaton, Stephen 'FCC blocks EchoStar deal with DirecTV' _NYT_ 11-10-2002
Lachapelle, Tara, Kucera, Danielle and Sherman, Alex 'News Corp. at 50% discount shows diminishing Murdoch: real M&A' _Bloomberg News_ 18-7-2011
Larsen, Peter Thai and Grice, Andrew 'Murdoch's Man Utd bid blocked' _I_ 10-4-1999
Latham, Mark _The Latham Diaries_ (Melbourne, MUP, 2005)
Lawrenson, John and Barber, Lionel _The Price of Truth. The Story of the Reuters Millions_ (London, Sphere Books, 1985)
Leapman, Michael _Barefaced Cheek. The apotheosis of Rupert Murdoch_ (London, Hodder & Stoughton, 1983)
Leigh, David and Davies, Nick '_News of the World_'s "fake sheikh" had Tom Watson followed, emails show' _G_ 22-5-2012
Leveson Inquiry _The Report_ (www.leveson.org.uk) 29-11-2012
—— _Executive Summary_ (www.leveson.org.uk) 29-11-2012
—— Transcripts (www.levesoninquiry.org.uk/hearings) 2011–12
Lewis, Justin 'A different kind of plurality: securing diverse media' _Open Democracy_ 16-7-2011
Lewis, Steve 'What a difference a mogul makes' _AFR_ 4-12-1999
Lieven, Anatol 'The future of democracy in America' _Open Democracy_ 15-10-2012
Lilla, Mark 'Republicans for revolution' _NYRB_ 12-1-2012
Lisners, John _The Rise and Fall of the Murdoch Empire_ (London, John Blake, 2012)
Lister, Sam et al. 'No pact with Rupert Murdoch, says Tony Blair' _I_ 28-5-2012
Littlemore, Stuart 'Doing favours: the Murdoch _Telegraph_, Tony Abbott and Pauline Hanson' (The Bruce Allen Memorial Lecture, Macquarie University, 29-9-2003)
Lloyd, Clem _Profession Journalist.
A History of the Australian Journalists' Association_ (Sydney, Hale & Iremonger, 1985)
Luft, Oliver 'Rupert Murdoch: the internet won't destroy newspapers' _G_ 17-11-2008
Lusetich, Robert 'Battle stations for the hearts and minds of middle America' _Australian_ 10-4-2003
Lyall, Sarah and Becker, Jo 'A tenacious rise to the top in the brutal men's world of tabloids' _NYT_ 7-7-2011
Macintyre, Stuart and Clark, Anna _The History Wars_ (Melbourne, MUP, 2003)
MacKenzie, Kelvin 'Why Dacre's worth his million' _British Journalism Review_ 16(1), 2005
—— 'Arrests are the real scandal' _Daily Mail Online_ 20-2-2012
Macmillan, Arthur 'Biden attacked in Murdoch's twitter storm' _SMH_ 14-10-2012
MacMillan, Robert 'Update 1 – Dow Jones costs News Corp $2.8 bn in writedown' _Reuters_ 6-2-2009
Mair, John 'A peep into the tabloid world – courtesy of Leveson' in Richard Lance Keeble and John Mair (eds) _The Phone Hacking Scandal. Journalism on Trial_ (2012)
—— 'TOWIE: the only way is ethics (not)' in Richard Lance Keeble and John Mair (eds) _The Phone Hacking Scandal. Journalism on Trial_ (2012)
Maloy, Simon 'Conservative media embrace poll trutherism in face of Romney decline' _MMFA_ 26-9-2012
—— 'Roger Ailes and Fox News' lack of accountability' _MMFA_ 4-12-2012
Maney, Kevin _Megamedia Shakeout. The Inside Story of the Leaders and the Losers in the Exploding Communications Industry_ (New York, John Wiley & Sons, Inc., 1995)
Manne, Robert 'Murdoch and the War on Iraq' in Robert Manne (ed.) _Do Not Disturb. Is the media failing Australia_ (Melbourne, Black Inc., 2005)
—— 'Bad news: Murdoch's _Australian_ and the shaping of the nation' _Quarterly Essay_ 43, 2011
Marjoribanks, Timothy _News Corporation, Technology and the Workplace. Global strategies, Local Change_ (Cambridge, Cambridge University Press [CUP], 2000)
Marsh, Kevin '... but what comes after?' in Richard Lance Keeble and John Mair (eds) _The Phone Hacking Scandal.
Journalism on Trial_ (2012)
Massie, Alex 'Revenge of the MPs' _Foreign Policy_ 13-7-2011
Mayne, Stephen 'After under-performing decade, is it time for Rupert to go?' _Crikey_ 18-1-2011
—— 'Rupert, Hitler, 1983, beat ups... doesn't all this sound familiar?' _Crikey_ 18-4-2011
—— 'The most dramatic News Corp AGM since Maxwell came to town' _Crikey_ 21-10-2011
—— 'Record protests as News Corp shareholders get rankings dead right' _Crikey_ 25-10-2011
—— 'How about a Rupert tweet on Sir Rod's surprise gong?' _Crikey_ 6-2-2012
—— 'James Murdoch's resignation means nothing' _Crikey_ 1-3-2012
McAllister, Ian, Mackerras, Malcolm and Brown Boldiston, Carolyn _Australian Political Facts_ (2nd edn, Melbourne, Longman Cheshire, 1990)
McConville, Ben and Smith, Kate 'Crossing the thin blue line' in Richard Lance Keeble and John Mair (eds) _The Phone Hacking Scandal. Journalism on Trial_ (2012)
McGurran, Deborah 'Hacking inquiry "more Clouseau than Columbo" jibe' _BBC News_ 13-7-2011
McIlwraith, Ian 'TV report rocks News shares' _AFR_ 21-12-1990
McKenzie, Robert Trelford and Silver, Allan _Angels in marble: Working class Conservatives in urban England_ (London, Heinemann, 1968)
McKnight, David _Rupert Murdoch. An Investigation of Political Power_ (Sydney, Allen & Unwin, 2012)
McKnight, David and McNair, Brian 'The empire goes to war: News Corporation and Iraq' _Australian Journalism Review_ 34(2), 2012
McKnight, David and Hobbs, Mitchell '"You're all a bunch of pinkos": Rupert Murdoch and the politics of HarperCollins' _Media, Culture and Society_ 33, 2011
McNair, Brian _Journalism and Democracy.
An Evaluation of the Political Public Sphere_ (London, Routledge, 2000)
McSmith, Andy 'What Tony said to Rupert – and why it says volumes about his friends and enemies' _Independent on Sunday_ 18-9-2005
_Media Matters for America_ 23-3-2010
—— '33 internal Fox editorial memos reviewed by MMFA reveal Fox News Channel's inner workings' (mediamatters.org) 14-7-2004
Menadue, John _Things you Learn Along the Way_ (Melbourne, David Lovell Publishing, 1999)
Milbank, Dana 'The Tea Party makes trouble with a capital T' _WP_ 18-7-2010
—— 'On Fox News, election 2010 is cause for cheer' _WP_ 3-11-2010
Moore, Frazier 'Roger Ailes looks back on 15 years of Fox News' _Yahoo News_ 5-10-2011
Moos, Julie 'Roger Ailes criticizes _New York Times_, AP during Ohio University talk' _Poynter_ 22-5-2012
—— 'Report: Roger Ailes signs 4 year deal to run Fox News and more' _Poynter_ 19-10-2012
Mudde, Cas 'America's election and the Tea Party' _Open Democracy_ 9-11-2012
Muir, Hugh and Martinson, Jane 'James Murdoch at the _Independent_: "like a scene out of Dodge City"' _G_ 22-4-2010
Mulholland, Helene 'David Cameron tells hacking victims he still has an open mind over Leveson' _G_ 7-10-2012
Munster, George _A Paper Prince_ (Melbourne, Viking, 1985)
Murdoch, Rupert _A Golden Age of Freedom_ Boyer Lectures (Sydney, ABC, 2008)
Murphy, Dan 'Rupert Murdoch's Jewish problem.
And his Egyptian one' _Christian Science Monitor_ 18-11-2012
Myers, Steve 'Rupert, James Murdoch refuse to testify for phone-hacking inquiry next week' _Poynter_ 14-7-2011
—— 'Murdoch: split of News Corp into two companies unrelated to phone hacking scandal' _Poynter_ 28-6-2012
Neighbour, Sally 'The United States of Chris Mitchell: The power of a Murdoch man' _The Monthly_, August 2011
Neil, Andrew _Full Disclosure_ (London, Macmillan, 1996)
News Limited Board 'Announcement by the Board of Directors of News Limited' 22-1-1987
Nielsen 'Cross-Platform Report Q3 2011' _Reports and Insights_
_ninemsn_ 'I've got a lot of time for Rupert: Abbott' (news.ninemsn.com.au) 6-9-2013
Noam, Eli (ed.) _Media Concentration_ (Oxford, OUP, 2014, forthcoming)
Oakes, Laurie and Solomon, David _The Making of an Australian Prime Minister_ (Melbourne, Cheshire, 1973)
Oborne, Peter 'What the papers won't say' _Spectator_ 7-7-2011
O'Carroll, Lisa 'Jeremy Hunt criticized for failure to oversee adviser' _G_ 14-5-2012
—— 'NI denies Murdoch had "selective amnesia" about Thatcher meeting' _G_ 14-5-2012
—— 'Leveson inquiry: make political lying a criminal offence, says Peter Oborne' _G_ 17-5-2012
—— 'Phone hacking: News International could face more than 500 claims' _G_ 1-6-2012
—— 'Rupert Murdoch hits out at Andrew Neil over lobbying of Tony Blair' _G_ 11-7-2012
—— '_Times_ editor James Harding resigns' _G_ 12-12-2012
Oremus, Will 'The five stages of Fox News grief' _Slate_ 7-11-2012
Osborne, Alistair 'Sky's ITV gamble may cost it £500m – but was it worth it?' _Daily Telegraph_ (UK) 22-1-2010
Page, Bruce _The Murdoch Archipelago_ (London, Simon & Schuster, 2003)
—— 'The end of the Murdoch Archipelago' _Open Democracy_ 23-4-2012
Peers, Martin 'News' annual pay-out $US1bn for nine years' _AFR_ 25-11-1991
Peppiatt, Richard 'The story factory: infotainment and the tabloid newsroom' in Richard Lance Keeble and John Mair (eds) _The Phone Hacking Scandal.
Journalism on Trial_ (2012)
Perez-Pena, Richard 'TV Guide, having just been bought, is bracing to be sold' _NYT_ 8-5-2008
Peston, Robert 'Coulson got hundreds of thousands of pounds from News Intl' _BBC News Business_ 22-8-2011
Peters, Jeremy 'For Murdoch, a board meeting with friendly faces' _NYT_ 9-8-2011
—— 'Shots by Murdoch at Romney play out to conservative core' _NYT_ 5-7-2012
Pew Research Center 'Further decline in credibility ratings for most news organisations' (Pew Research Center for the People and the Press, 16-8-2012)
Pilkington, Ed 'A life unraveled... whistleblower who incurred wrath of the Murdoch empire' _G_ 17-8-2011
—— 'News Corp shareholders lodge complaint against Rupert Murdoch' _G_ 13-9-2011
Pleat, Zachary 'Updated: today in dishonest Fox News graphics' _MMFA_ 11-9-2012
Plunkett, John and O'Carroll, Lisa '"Congrats on Brussels!" Texts reveal Hunt's close alliance with Murdoch' _G_ 31-5-2012
Potter, Ben 'News Corp suffers post-Avatar hangover' _AFR_ 5-5-2011
Price, Lance _The Spin Doctor's Diary. Inside Number 10 with New Labour_ (London, Hodder & Stoughton, 2005)
—— 'Rupert Murdoch is effectively a member of Blair's cabinet' _G_ 1-7-2006
Public Policy Polling 'Fox News' credibility declines' (Raleigh, University of North Carolina, 6-2-2013)
Ramsey, Alan _The Way They Were. The View from the Hill of the 25 Years that Remade Australia_ (Sydney, UNSW Press, 2011)
Read, Donald _The Power of News. The History of Reuters 1849–1989_ (Oxford, OUP, 1992)
Real, Michael R. 'Television and sports' in Janet Wasko (ed.) _A Companion to Television_ (Oxford, Blackwell, 2005)
_Reuters_ 'Murdoch's tough guy Carlucci under pressure' 2-9-2011
Rich, Frank 'Murdoch hacked us too' _New York_ 31-7-2011
Richardson, Reed 'GOP-Fox circus act' _Nation_ 29-4-2013
Ricketson, Matthew 'Ownership not the burning issue' _Age_ 30-7-2008
Riddell, Kelly 'US network pays to stay in the game' _AFR_ 1-11-2010
Robichaux, Mark _Cable Cowboy.
John Malone and the rise of the modern cable business_ (Hoboken NJ, John Wiley & Sons, 2002)
Robinson, James 'Scoops spur Coulson on to a red top renaissance' _G_ 19-2-2006
—— 'Phone hacking: Steve Coogan compares NI to a "protection racket"' _G_ 18-11-2011
Rohm, Wendy Goldman _The Murdoch Mission. The digital transformation of a media empire_ (New York, John Wiley & Sons, 2002)
Rooney, D. 'Thirty years of competition in the British tabloid press. The _Mirror_ and the _Sun_ 1968–1998' in Colin Sparks and John Tulloch (eds) _Tabloid Tales. Global Debates over media standards_ (Oxford, Rowman & Littlefield, 2000)
Roshco, Bernard _Newsmaking_ (Chicago, University of Chicago Press, 1975)
Rosston, Gregory L. 'Antitrust implications of EchoStar-DirecTV Proposed Merger' _Policy Brief_ (Stanford Institute for Economic Policy Research, November 2001)
Rugaber, Christopher and Mayerowitz, Scott 'US unemployment: Politics, statistics morph into conspiracy' _Portland Press Herald_ 6-10-2012
Rusbridger, Alan 'Introduction' in The Guardian.
_How the Guardian broke the story_ (Guardian Shorts, Kindle Edition, 2011)
—— 'Hackgate "reveals failure of normal checks and balances to hold power to account"' in Richard Lance Keeble and John Mair (eds) _The Phone Hacking Scandal. Journalism on Trial_ (2012)
Rushe, Dominic 'News Corp shareholders renew push for Rupert Murdoch to resign' _G_ 19-7-2012
Rushton, Katherine 'Big sister tell the bad News' _AFR_ 25-8-2012
Sabbagh, Dan 'MPs' News Corp report will be hard to dismiss' _G_ 1-5-2012
—— '12 things the Leveson inquiry has taught us' _G_ 27-11-2012
Sabbagh, Dan and Halliday, Josh 'Rupert Murdoch deemed "not a fit person" to run international company' _G_ 1-5-2012
Sabbagh, Dan and O'Carroll, Lisa 'Harold Evans tells Leveson of conflict and "vindictive" atmosphere at _Times_' _G_ 17-5-2012
—— 'Rebekah Brooks took £10.8m compensation from News Corp' _G_ 12-12-2012
Sabbagh, Dan and Wintour, Patrick 'Rebekah Brooks turns screw on Jeremy Hunt with "hacking advice" email' _G_ 11-5-2012
Samuelson, Robert J. 'Are you better off now than four years ago?' _WP_ 7-9-2012
Schudson, Michael _Discovering the News: A Social History of American Newspapers_ (New York, Basic Books, 1978)
—— _Watergate in American Memory. How We Remember, Forget and Reconstruct the Past_ (New York, Basic Books, 1993)
Schultz, Julianne _Reviving the Fourth Estate. Democracy, Accountability and the Media_ (Melbourne, CUP, 1998)
—— (ed.) _Not just another business. Journalists, Citizens and the Media_ (Sydney, Pluto Press, 1994)
Senate Select Committee on Certain Aspects of Foreign Ownership Decisions in Relation to the Print Media _Percentage Players. The 1991 and 1993 Fairfax Ownership Decisions_ (Canberra, Commonwealth of Australia, 1994)
Seymour-Ure, Colin _The British Press and Broadcasting Since 1945_ (Oxford, Basil Blackwell, 1991)
Shafer, Jack 'Rupert Murdoch's favourite lie' _Slate_ 24-4-2008
Shawcross, William _Rupert Murdoch.
The Making of a Media Empire_ (New York, Simon & Schuster, 1997)
—— 'Rupert Murdoch' _Time_ 3-11-1999
Sheridan, Greg 'The power of one' _Australian_ 26-4-2003
Sherman, Gabriel 'The elephant in the green room' _New York_ 30-5-2011
—— 'Rupert Murdoch wants stricter gun laws after Newtown, but Fox News doesn't get the memo' _New York_ 17-12-2012
Shoebridge, Neil 'News Corp sells MySpace at $US540m loss' _AFR_ 30-6-2011
Silverman, Craig 'Connecting the dots: why doesn't the public trust the press any more?' _Poynter_ 19-4-2012
Simpson, Kirsty 'Let's have it on: Murdoch hits back' _SMH_ 29-3-2012
Sloan, Allan 'Missing from Murdoch's family deals: News Corp shareholders' _CNN Money_ (finance.fortune.cnn.com) 24-2-2011
Smith, Anthony _The Age of Behemoths: The Globalisation of Mass Media Firms_ (New York, Priority Press Publications, 1991)
Sorkin, Andrew Ross and Schiesel, Seth 'GM Agrees to sell its satellite TV unit in $26 billion deal' _NYT_ 29-10-2001
SourceWatch 'Fox News' (sourcewatch.org)
Souter, Gavin _Company of Heralds.
A Century and a Half of Australian Publishing_ (Melbourne, MUP, 1981)
Sparrow, Andrew 'Vince Cable feels "vindicated" over handling of News Corp bid for BSkyB' _G_ 6-5-2012
—— 'Cabinet Office backs Gordon Brown over Murdoch phone call' _G_ 15-6-2012
Starr, Paul 'Governing in the age of Fox News' _The Atlantic Online_, Jan-Feb 2010
Statista 'DirecTV's revenue from 2006 to 2012 (in billion US dollars)' _Statista_ (www.statista.com/statistics/195726/revenue-of-directv-since-2006/) February 2013
Stein, Sam '_New York Post_ lawsuit: shocking allegations made by fired employee Sandra Guzman' _Huffington Post_ 25-11-2011
Stephens, Mitchell 'Clout: Murdoch's political _Post_' _Columbia Journalism Review_ July/August 1982
Stoeffel, Kat 'Col Allan denied "editorial privilege", ordered to dish on boss Rupert Murdoch and racist cartoon' _Observer.com_ 4-7-2012
Suich, Max 'Uncut: the thoughts of Chairman Murdoch' _AFR_ 5-11-2010
_Sun_ editorial 'Don't you know there's a bloody war on?' 28-8-2009
Sweney, Mark 'News Corp paid Elisabeth Murdoch almost $4m for running Shine' _G_ 5-9-2012
Swint, Kerwin _Dark Genius. The Influential Career of Legendary Political Operative and Fox News Founder Roger Ailes_ (New York, Union Square Press, 2008)
Sykes, Trevor 'Murdoch bows out... but he'll still visit' _AFR_ 27-10-2004
Tempest, Matthew 'Heseltine demands fresh inquiry into Westland affair' _G_ 12-1-2006
Thatcher, Margaret _Margaret Thatcher. The Autobiography_ (London, HarperCollins, 1993)
Theel, Shauna '10 dumbest things Fox said about climate change in 2012' _MMFA_ 31-12-2012
Thomas, Laurie and Litman, Barry R. 'Fox Broadcasting Company, why now? An economic study of the rise of the fourth broadcast "network"' _Journal of Broadcasting and Electronic Media_ 35(2), 1991
Thompson, Jeremy 'Conroy steps up attack on _Daily Tele_' _ABC News_ 18-7-2011
Tiffen, Rodney 'The Dynamics of Dominance.
Wran and the media, 1981' in Ernest Chaples, Helen Nelson and Ken Turner (eds) _The Wran Model_ (Melbourne, OUP, 1985)
—— 'Quality and bias in the Australian Press: News Limited, Fairfax and the Herald and Weekly Times' _The Australian Quarterly_ 59(3–4), 1987
—— 'The _Sun_ also sets. _Mirror_ monopoly shock!' _Media Information Australia_ 52, May 1989
—— _News and Power_ (Sydney, Allen & Unwin, 1989)
—— 'Media Policy' in Judith Brett, James Gillespie and Murray Goot (eds) _Developments in Australian Politics_ (Melbourne, Macmillan, 1994)
—— 'The Labor-Packer Alliance, 1978–1995. RIP' _Media International Australia_ 77, 1995
—— _Scandals. Media, Politics and Corruption in Contemporary Australia_ (Sydney, UNSW Press, 1999)
—— 'From technological abundance to commercial monopoly in Australian pay TV: key relationships in institutionalising subscription television' in Andrew Kenyon (ed.) _TV Futures. Digital Television Policy in Australia_ (Melbourne, MUP, 2007)
—— 'UK phone hacking victims' lawyer Charlotte Harris in conversation: full transcript' _The Conversation_ 22-11-2012
Toohey, Brian 'Super funds sue Murdoch over poison pill' _AFR_ 8-10-2005
Toynbee, Polly 'How the badly maimed BBC can stand up to parasitic Sky' _G_ 3-1-2012
Travis, Alan 'Murdoch did meet Thatcher before _Times_ takeover, memo reveals' _G_ 17-3-2012
Tuccille, Jerome _Murdoch: A Biography_ (London, Judy Piatkus Publishers, 1989)
Tumber, Howard 'Scandal and media in the United Kingdom: From Major to Blair' _American Behavioural Scientist_ 47, 2004
Tunstall, Jeremy _Journalists at Work_ (London, Constable, 1971)
—— _Newspaper Power: The New National Press in Britain_ (Oxford, Clarendon Press, 1996)
Turner, Barry 'The key questions: have they been answered?' in Richard Lance Keeble and John Mair (eds) _The Phone Hacking Scandal. Journalism on Trial_ (2012)
UK Parliament 'MPs debate News Corporation bid for BSkyB' _UK Parliament_ (www.parliament.uk) 13-7-2011
Vander Hook, Sue _Rupert Murdoch.
News Corporation Magnate_ (North Mankato MN, ABDO Publishing, 2011)
Vaughn, Stephen L. _Encyclopaedia of American Journalism_ (London, Routledge, 2007)
Walker, Martin _Powers of the Press: The World's Great Newspapers_ (London, Quartet Books, 1982)
Wallace, Christine 'The bad old days of webs of influence are long gone' _AFR_ 10-6-1994
Wallis, Holly 'Archived papers reveal Thatcher secrets' _BBC News_ 17-3-2012
Walsh, Maximilian '"Dries" think that media are just business' _SMH_ 23-4-1987
Watson, Tom 'News International – ruthless, without conscience or morality' _Open Democracy_ 8-7-2011
Watson, Tom and Hickman, Martin _Dial M for Murdoch. News Corporation and the Corruption of Britain_ (London, Allen Lane, 2012)
Weir, Stuart 'Government by corporate text messages: what is left of the British constitution after Leveson?' _Open Democracy_ 27-5-2012
Welch, Dylan 'New to twitter: the tweet Murdoch took down... fast' _Age_ 2-1-2012
West, Michael 'Like it or lump it – Murdoch bats away calls for reform' _SMH_ 17-10-2012
Wheatcroft, Geoffrey 'The truth about Murdoch' _NYRB blog_ 14-3-2012
Whitbourn, Michaela 'How to cook up a carbon tax story' _AFR_ 14-9-2011
Williams, Pamela 'Canberra bristles as media moguls stir' _AFR_ 7-10-2012
Wintour, Patrick 'David Cameron forced to answer Jeremy Hunt questions' _G_ 30-4-2012
Wintour, Patrick and Sabbagh, Dan 'News Corp dossier appears to show contacts with minister over BSkyB bid' _G_ 24-4-2012
Wolff, Michael _The Man Who Owns the News. Inside the Secret World of Rupert Murdoch_ (New York, Knopf, 2008)
—— 'How bad is News Corp?' _Adweek_ 8-8-2011
—— 'Why I love Fox News' _GQ_ 5-1-2012
—— 'The News Corp split and Rupert Murdoch's rearguard action' _G_ 28-6-2012
—— 'Rupert's challenge: the new News Corp' _AFR_ 7-1-2013
Yelland, David 'I was drunk every night for nearly 24 years but I was saved by the love of my son' _Mail Online_ 27-3-2010
Young, Hugo _One of Us.
A biography of Margaret Thatcher_ (London, Macmillan, 1989)
Young, Sir Norman _Figuratively Speaking. The Reminiscences, Experiences and Observations of Sir Norman Young_ (Adelaide, published by the author, 1991)
Younger, Ronald _Keith Murdoch: founder of an empire_ (Sydney, HarperCollins, 2003)
–, –, –, Murdoch's testimony , , , , , , , , , , –, , –, – purpose and structure , – Levin, Gerald – Lewinsky, Monica Lewis, Mark –, Lewis, Steve Li, Richard Li Peng Liberal Democrats , , Liberal Party of Australia – Libya Limbaugh, Rush , , , , – Lisners, John –, Littlejohn, Richard , Livingstone, Ken , Lloyd, Sir Nicholas Lloyd, Peter London Metropolitan Police , – corruption , , , , , – responses to phone hacking –, –, , , , – _passim_ , –, –, London Weekend Television (LWT) , Loos, Rebecca Los Angeles Dodgers Lotto Lowy, Frank – Luce, Henry , MacArthur, Brian MacCallum, Mungo , – Macdonald, Lord MacKenzie, Kelvin , –, , –, , –, , , –, , –, , –, –, Mahmood, Mazher _Mail on Sunday_ (London) , Major, John , –, , , –, , , , – Major Government , Malone, John , , , , , –, Maloy, Simon Manchester United – Mandelson, Lord Mandelson, Peter Maney, Kevin Mann, Thomas – Manne, Robert –, , Mantle, Mickey – Markson, Max Marshall, Sharon Matthews, Bruce , Maxwell, Robert –, , –, Mayhew, Sir Patrick Mayne, Stephen , McAlpine, Lord McCain, John , McCann, Kate – McComas, Bob McEwen, John – McFarland, K.T. McGregor, Ian McGuiness, Joe MCI McKnight, David , , , , , McMahon, William –, McMullan, Bob McMullan, Paul , , , , McNair, Brian _Media Matters_ , , Mediaset Melbourne Storm Melbourne University Press – Menadue, John –, , –, –, Menzies, Robert – Menzies Government , , , – _Mercury_ (Hobart) –, Metromedia , MI6 , Michel, Frederic – Microsoft , , , Milbank, Dana Miliband, Ed –, , Miller, Harry M. 
Miller, Rocky Miller, Sienna –, Minchin, Nick Miskiw, Greg Mitchell, Chris , , , – Mohan, Dominic , Monopolies Commission (UK) , –, –, – Monroe, Marilyn – Montgomery, David Moody, John – _More_ magazine Morgan, Daniel , –, – Morgan, Piers , , , Morosi, Junie Morris, Dick Mosley, Max , Mosley, Oswald Mourdock, Richard Moynihan, Daniel Patrick MSNBC , , , Muddle, Cas Mulcaire, Glenn , –, , , –, , – Munster, George , Murdoch, Anna , , Murdoch, Elisabeth (mother) , , , Murdoch, Elisabeth (daughter) , , , Murdoch, James , , , , , –, phone hacking scandal , –, –, –, – Murdoch, Sir Keith , , , , , , Murdoch, Lachlan –, , , –, Murdoch family , , , , –, , Musharraf, Pervez 'Mussolini diaries' My Lai massacre Myler, Colin , , , –, MySpace , NASDAQ National Australia Bank _National Enquirer_ (US) National Football League (NFL) (US) , National Public Radio National Rifle Association _National Star_ (US) –, National Union of Journalists _Nature_ Nazis , , , , , –, , 'Hitler diaries' – NBC , NBC Universal NDS , Neil, Andrew , , , , , , , , , , , , New Guinea _New Idea_ _New York_ magazine , , , , _New York Post_ acquisition , –, – culture , , –, , – editorial independence –, , enemies , , , –, – factual accuracy –, –, , – Iraq War – NY mayoral election, 1977: – pro-Republican bias , , –, , , , readership , – revenue , , , –, _New York Times_ –, , , , , , , , , , , , , , , – Newman, Paul _News_ (Adelaide) , –, –, – News America Marketing – _News of the World_ (London) _see also_ phone hacking scandal acquisition , –, , , audience business strategy , – closure , , – culture –, –, –, , , , , surveilliance activities , –, _Newsweek_ Newton, Max Newton, Maxwell – Newton Dunn, Tom Nielsen Media Research Nike – Nine network – Nixon, Christine – Nixon, Richard , , , , Nixon Administration , – Nobel Prize – Norris Inquiry into Ownership and Control of Newspapers in Victoria Northcliffe, Lord Northern Star Norton, Ezra NTV – Oakes, Laurie Obama, Barack –, , , and Fox News –, , , –, –, –, 
Oborne, Peter , O'Connor, Deirdre – Ofcom (UK) –, , Office of Fair Trading (UK) Office of Police Integrity (Victoria) Operation Elveden , Operation Motorman , Operation Weeting –, , O'Reilly, Bill , –, , , , , _The O'Reilly Factor_ Orion Ornstein, Norman – _Outfoxed_ , film Oxford University , , , – Packer, Clyde –, Packer, Sir Frank –, Packer, James Packer, Kerry –, , , , – Packer family –, , , –, –, Page, Bruce , , , Palin, Sarah , , , , , , Patten, Chris PBS Pearson Pelosi, Nancy _People_ (UK) , , , _People's Daily_ (China) , Peppiatt, Richard Perot, Ross Perry, Rick Petraeus, General David Petrie, Tom – Pew Foundation –, Pew Research Center Philip, Ian Philip Morris phone hacking scandal _see also_ Culture, Media and Sport Select Committee (UK); Leveson Inquiry –, – and BSkyB bid , , –, , , –, and News culture –, , –, –, – events of July 2011: – _Guardian_ investigations –, –, –, –, , , , – lead-up and containment, 2007–11: –, – News International cover-up –, police corruption , , , , , – responses of media –, , –, – responses of police –, –, , , , – _passim_ , –, –, responses of politicians –, –, – responses of victims – Pickering, Ted Pike, Julian Playford, Thomas Playford Government (South Australia) , Political Action Committees (PACs) Potter, Dennis Prescott, John , , Press Complaints Commission (PCC) (UK) , , , Price, Adam Price, Lance , , , _Private Eye_ Prodi, Romano Professional Footballers' Association Profumo scandals 'Project Democracy' Public Policy Polling Pulitzer, Joseph Pulitzer Prize Qantas Queensland Nationals – Queensland Press –, – QVC network Rabin-Havt, Ari –, , –, Ramsey, Alan Rather, Dan Rawkus Records Reagan, Ronald , , , –, –, , , Reagan Administration –, – Real, Michael R. 
Rebh brothers Rees, Jonathan , –, –, – Rees-Mogg, William , Regan, Judith – ReganBooks Reid, Harry Reina, Charlie Republican Party (US) , , , –, – _passim_ , –, , –, –, and Fox News , , , , – supported by News media Reuter, Julius Reuters –, , , Rhodes, William Rich, Frank – Rich, Marc Richardson, Reed Ritchie, Guy Rivera, Geraldo Rivett, Rohan , , , – Robertson, Pat Rohm, Wendy Romney, Mitt , , , , – Ronald Reagan Presidential Foundation _Rosehearty_ , yacht – Roshco, Bernard Rothermere, Lord Rothwell, Bruce , Rove, Karl , , Rowland, Tiny – Rowling, J. K. Royal Family , , , , , , , , Royko, Mike Rudd, Kevin , , Rudd Government –, Rusbridger, Alan , –, –, , Russo, Sal Ryan, Paul Sackville, Justice Safire, William Sammon, Bill Samuel Montagu Sandy Hook Elementary School, Connecticut Sann, Paul , – Santorum, Rick , , Save the Children Schiff, Dorothy , – Scotland Yard , , , Scott, C.P. , September 11 terrorist attacks, 2001: , , , , , Seven network Shafer, Jack Shah, Eddie , , – Shakespeare, Arthur Shao Huaze Sharon, Ariel Shawcross, William , , , , , –, , , – Sherborne, David Sheridan, Greg – Sherman, Gabriel Shine Sikorsky – Silent Shadow Simpson, O. J. _The Simpsons_ , Singleton, John Sky , –, –, , , – Sky News –, , Smart, Gordon Smith, Adam Smith, John Smith, Wayne Solomon, David Somerfield, Stafford 'Son of Sam' –, , _South China Morning Post_ (Hong Kong) South Sydney Rabbitohs , – Southern TV Soviet Union –, , , , , – Sparrow, Andrew _Special Report_ (Fox News) Standard & Poor's (S&P) , Star TV , –, , , , , – Starr, Paul _Starsuckers,_ documentary Stelzer, Irwin Stephens, Mitchell Stephenson, Sir Paul _Stern_ – Stevens, Lord – Stuart, Max – Stuart Royal Commission (1959) Suharto Suich, Max Sulzberger, Arthur _Sun_ (London) , , , , –, , acquisition , , backs Blair –, , circulation , , , , – culture , – _passim_ , –, –, – Elton John – factual accuracy , –, , , , illegal payments , , Leveson Inquiry –, news supply v. 
demand – on Afghanistan – on WMD –, phone hacking –, – pro-Thatcher , –, , , – readership , – stances on elections , , , –, –, –, , tabloid formula – _Sun_ (Sydney) –, , , _Sun on Sunday_ (UK) _Sunday Herald Sun_ (Melbourne) , , _Sunday Mail_ (Adelaide) _Sunday Mirror_ (London) , , , _Sunday Mirror_ (Sydney) – _Sunday Telegraph_ (Sydney) , _Sunday Times_ (London) , , , , acquisition , , – AIDS – and Murdoch politics , , editorial guarantees , , – 'Hitler diaries' – profitability , , , , Wapping impact _Sunday Times_ (Perth) , , Super League (Australia) , Swan, Wayne Swint, Kerwin _Sydney Morning Herald_ –, , , , Tatchell, Peter Taylor, Gordon –, , Tea Party , –, , –, _Telegraph (Brisbane)_ Telstra , Ten network –, , , , , , –, Thatcher, Margaret – demise – Hussey BBC appointment – relationship with Murdoch , , , , –, –, –, –, , , – Sky/BSB actions – supported by _Sun_ , –, , –, , , – Westland affair – Thatcher Government – satellite policies – _Times_ acquisition , , , – _Today_ acquisition – Westland affair – _The Untouchables,_ film Thomas, Richard Thomson, Lord , Thomson, Mark Thomson, Robert , Thomson, Roy – Thomson Corporation Thurlbeck, Neville , , – _Time_ magazine , , Time Warner , , – _Times_ (London) , , , –, , , , , –, , , acquisition , , , –, , 157–63 business strategy –, editorial guarantees –, , – editorial independence –, , , losses , , , , Times Newspapers –, – TNT , – Tobin, James – _Today_ (UK) , – Touchstone Townend, Judith Trade Practices Commission (TPC) –, , – Trade Union Act 1984 (UK) – Trevor-Roper, Hugh – Triangle Publications Tribune Company , Trump, Donald –, Tuccille, Jerome , , Tunstall, Jeremy , Turner, Ted –, _TV Guide_ (US) , , Twentieth Century Fox , , , , Twenty-First Century Fox , Twitter –, , UNICEF United Nations , United Technologies Inc University of Maryland – US Department of State –, US Embassy, Canberra – US Information Agency Valassic Communications Victorian Bushfires Royal Commission, 2009: Vietnam War –, , –, – 
_Village Voice_ Virgin Group – Virgin Media – Wade, Rebekah _see_ Brooks, Rebekah (née Wade) _Wall Street Journal_ , , , , , acquisition –, , , –, –, , –, , , , editorial guarantees , Wallace, Christine – Walsh, Eric , Walt Disney Company –, Wapping dispute –, , – Warner Brothers –, _Washington Post_ , , , – Watergate , , Watson, Tom , , , , , , , , , , –, – Webb, Derek _Weekly Standard_ (US) Welch, Jack West, Katharine – _Western Mail_ (Perth) Westfield, Mark Westland affair – Whitlam, Gough –, Whitlam, Margaret – Whitlam Government 1972 election , –, , –, 1975 election –, –, –, – electoral data analysis – Whitlam–Murdoch relations –, Whittamore, Steve , –, Whittingdale, John Wick, Charlie WikiLeaks Wilkie, Andrew Willard, Cody William, Prince , Williams, John – Williams, Kim Wilson, Charles Wilson, Harold Wilson Goverment (UK) Windsor, Tony Winfrey, Oprah Wolff, Michael , , , –, , –, , –, –, –, –, , , , , , –, , –, –, –, – Woodward, Bob , Wran, Neville Wran Government (NSW) Wyatt, Woodrow , , , , , , , , , , Wylie, Peter Wynhausen, Elisabeth Yahoo Yates, John , , , – Yelland, David , , , , Young, Hugo , , Young, Mick , Young, Sir Norman – Ziff-Davis
{ "redpajama_set_name": "RedPajamaBook" }
6,270
\section{Introduction}
\vspace{-0.1cm}
\input{2.introduction}
\section{Problem Formulation}
\input{3.problem}
\vspace{-0.3cm}
\section{Methodology}
\input{4.method}
\section{Experiment}
\input{5.experiment}
\section{Related Works}
\input{6.related}
\section{Conclusion}
\input{7.conclusion}
\vspace{-0.1cm}
\section{Acknowledgement}
This work is partially supported by IIS-2152030, IIS-2045567, and IIS-2006889.
\vspace{-0.4cm}
\bibliographystyle{siam}
\subsection{Model Overview}
We present an overview of the \ul{A}ctor-Critic based \ul{F}eature \ul{G}eneration (AFG) model in Figure~\ref{model_overall} and give a detailed description of each component in the following sections.
\vspace{-0.3cm}
\subsection{Feature Space Clustering}
As a pre-step to facilitate model performance, we propose the feature space clustering component. In the following, we first introduce a distance function to measure the relevance of two clusters, and then describe the Features-Group Clustering (FG-Clustering) method.
\noindent\textbf{Features-Group Distance Function:} We propose this function to measure the similarity between two clusters of features. Suppose we have two clusters $c_i$ and $c_j$; the formal definition of the features-group distance function is given by:
\begin{equation}
\begin{aligned}
\small
\label{dis}
d_{m}(c_i, c_j)= \frac{1}{|c_i|\cdot|c_j|} \sum_{f_i\in c_i}\sum_{f_j\in c_j} \frac{|\text{dis}(f_i,y)-\text{dis}(f_j,y)|}{\text{dis}(f_i,f_j)+\epsilon},
\end{aligned}
\vspace{-0.2cm}
\end{equation}
where $\text{dis}(\cdot)$ is a pair-wise distance function (e.g., the Euclidean distance, cosine similarity, etc.). In Equation~\ref{dis}, the numerator (i.e., $|\text{dis}(f_i,y)-\text{dis}(f_j,y)|$) aims to quantify the relevancy between the feature columns and the target. The denominator (i.e., $\text{dis}(f_i,f_j)+\epsilon$) quantifies the redundancy between $f_i$ and $f_j$. The $\epsilon$ is a small constant to avoid the divide-by-zero error.
If the numerator is small, $f_i$ and $f_j$ have a similar influence on the classification of $y$; if the denominator is large, $f_i$ and $f_j$ share more information. Both cases lead the function to assign a smaller distance to the two feature columns.
\iffalse
\noindent1) \textit{Euclidean Distance Based:} The first method tries to measure the similarity between two clusters by the Euclidean distance. We assume that the clustering method should merge the two clusters of features with the smallest absolute numerical difference into one cluster. The Euclidean distance function between two clusters can be defined as:
\vspace{-0.2cm}
\begin{equation}
\begin{aligned}
\small
d_{e}(c_i, c_j)= \frac{1}{|c_i|\cdot|c_j|}\sum_{f_i\in c_i}\sum_{f_j\in c_j} \sqrt{\sum(f_i-f_j)^2},
\end{aligned}
\vspace{-0.2cm}
\end{equation}
\noindent2) \textit{Cosine Similarity Based:} Another idea is to utilize the cosine similarity. Compared to the Euclidean distance, the cosine similarity is more sensitive to feature columns with little numeric difference. The cosine similarity based features-group distance function is defined by:
\vspace{-0.2cm}
\begin{equation}
\begin{aligned}
\small
d_{c}(&c_i, c_j)= \frac{1}{|c_i|\cdot|c_j|}\sum_{f_i\in c_i}\sum_{f_j\in c_j} (1 - \frac{f_i\cdot f_j}{||f_i||\cdot||f_j||}),
\end{aligned}
\vspace{-0.2cm}
\end{equation}
\noindent3) \textit{Mutual Information Based:} The last method utilizes the mutual information (MI) between a feature column and its related label. We assume that two clusters with similar relevance to the target should have a smaller distance.
Thereby, the MI based features-group distance function $d_{m}(\cdot)$ is defined as:
\begin{equation}
\begin{aligned}
\small
\label{mi_dis}
d_{m}(c_i, c_j)= \frac{1}{|c_i|\cdot|c_j|} \sum_{f_i\in c_i}\sum_{f_j\in c_j} \frac{|\text{MI}(f_i,y)-\text{MI}(f_j,y)|}{\text{MI}(f_i,f_j)+\epsilon},
\end{aligned}
\vspace{-0.2cm}
\end{equation}
where $\text{MI}(\cdot)$ is the function to obtain the mutual information between two vectors, and $|c_i|$ and $|c_j|$ are the sizes of the two clusters. In Equation~\ref{mi_dis}, the numerator (i.e., $|\text{MI}(f_i,y)-\text{MI}(f_j,y)|$) aims to quantify the relevancy between the feature columns and the target. The denominator (i.e., $\text{MI}(f_i,f_j)+\epsilon$) quantifies the redundancy between $f_i$ and $f_j$. The $\epsilon$ is a small constant to avoid the divide-by-zero error. If the numerator is small, $f_i$ and $f_j$ have a similar influence on the classification of $y$; if the denominator is large, $f_i$ and $f_j$ share more information. Both cases lead the function to assign a smaller distance to the two feature columns.
\fi
\noindent\textbf{FG-Clustering:} Because the number of feature columns varies during feature generation, it is not appropriate to use K-means or density-based methods. Inspired by agglomerative clustering, given a feature set $\mathcal{F}$, we propose FG-Clustering in two steps. Specifically, \ul{\textit{Step-1}}: we initialize each column in $\mathcal{F}$ as a cluster. \ul{\textit{Step-2}}: we use the features-group distance function $d_{m}(\cdot)$ to obtain the distance between any two feature clusters. Then, we merge the two closest clusters to generate a new cluster and remove the former ones. We repeat this step until the smallest distance between any two clusters exceeds a certain threshold. \ul{\textit{Output}}: we cluster the columns of $\mathcal{F}$ and form the cluster set $C=\{c_i\}_{i=1}^{|C|}$.
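The distance function and the two-step merging procedure above can be sketched in Python. This is a minimal sketch, not the paper's implementation: $\text{dis}(\cdot)$ is instantiated as the Euclidean distance (one of the allowed choices), the merge loop is the naive quadratic variant, and all function names are illustrative.

```python
import numpy as np

def fg_distance(ci, cj, y, eps=1e-8):
    """Features-group distance between clusters ci and cj (lists of 1-D columns):
    mean over column pairs of |dis(f_i, y) - dis(f_j, y)| / (dis(f_i, f_j) + eps),
    with dis(.) instantiated here as the Euclidean distance."""
    total = 0.0
    for fi in ci:
        for fj in cj:
            relevancy = abs(np.linalg.norm(fi - y) - np.linalg.norm(fj - y))
            redundancy = np.linalg.norm(fi - fj) + eps
            total += relevancy / redundancy
    return total / (len(ci) * len(cj))

def fg_clustering(F, y, threshold):
    """Step-1: one cluster per feature column.  Step-2: repeatedly merge the
    two closest clusters until the smallest distance exceeds the threshold."""
    clusters = [[F[:, j]] for j in range(F.shape[1])]
    while len(clusters) > 1:
        best = None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                dist = fg_distance(clusters[a], clusters[b], y)
                if best is None or dist < best[0]:
                    best = (dist, a, b)
        dist, a, b = best
        if dist > threshold:  # stopping condition on the closest pair
            break
        merged = clusters[a] + clusters[b]
        clusters = [c for k, c in enumerate(clusters) if k not in (a, b)]
        clusters.append(merged)
    return clusters
```

Because columns are merged rather than re-assigned, the output clusters always partition the current feature columns, whatever their number at that generation step.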
\vspace{-0.3cm}
\subsection{Feature State Representation}
To describe the state of the feature space, our idea is to propose a function $\mathcal{R}(\cdot) \to s_{(\cdot)}$ whose output $s_{(\cdot)}$ is a fixed-size vector representing the state of a variable-length input feature set. In this section, we introduce three methods that capture different aspects of the feature space. To simplify the description, in the following parts, suppose we are given the feature set $\mathcal{F}\in \mathbb{R}^{M\times N}$, where $M$ is the number of samples and $N$ is the number of feature columns.
\noindent\textbf{Statistic Information (si):} The first idea is to utilize the statistical information (\textit{i.e.,} count, standard deviation, minimum, maximum, and the first, second, and third quartiles) of the input feature cluster. Specifically, \ul{\textit{Step-1}}: we calculate the descriptive statistics matrix of $\mathcal{F}$ column by column. \ul{\textit{Step-2}}: we calculate the descriptive statistics of the outcome matrix of the previous step row by row to obtain the meta descriptive statistics matrix, whose shape is $\mathbb{R}^{7\times 7}$. \ul{\textit{Step-3}}: we obtain the state representation $\mathcal{R}_{si}(\mathcal{F})\in \mathbb{R}^{1\times 49}$ by flattening the descriptive matrix from the former step.
\noindent\textbf{Autoencoder (ae):} Another idea is to utilize an autoencoder to convert the feature set $\mathcal{F}$ into a latent embedding vector and regard it as the state representation. Specifically, \ul{\textit{Step-1:}} we apply an autoencoder to transform each column of $\mathcal{F}$ into a latent matrix $Z \in \mathbb{R}^{k\times N}$, where $k$ is the dimension of the latent representation of each column. \ul{\textit{Step-2:}} we apply another autoencoder to transform each row of $Z$ into another matrix $Z' \in \mathbb{R}^{k\times d}$, where $d$ is the dimension of the latent representation of each row.
\ul{\textit{Step-3:}} we obtain the state representation vector $\mathcal{R}_{ae}(\mathcal{F}) \in \mathbb{R}^{1 \times kd}$ by flattening the matrix $Z'$.
\noindent\textbf{Graph Autoencoder (gae):} The last idea is to train a graph autoencoder~\cite{https://doi.org/10.48550/arxiv.1611.07308} to reconstruct the correlation graph between the feature columns. Specifically, \ul{\textit{Step-1}}: we build a complete correlation graph $\mathcal{G}$ by calculating the similarity matrix between each pair of feature columns. The adjacency matrix of $\mathcal{G}$ is $\mathcal{A} \in \mathbb{R}^{N\times N}$, where a node is a feature column in $\mathcal{F}$ and an edge reflects the similarity between two nodes. \ul{\textit{Step-2}}: we adopt a one-layer GCN~\cite{kipf2016semi} to aggregate an enhanced feature embedding $Z\in \mathbb{R}^{N\times k}$ based on $\mathcal{A}$, where $k$ is the dimension of the latent embedding. The calculation process is defined as: $ Z = ReLU(\mathbf{D}^{-\frac{1}{2}}\mathcal{A}\mathbf{D}^{-\frac{1}{2}}\mathcal{F}^{\top}\mathbf{W}), $ where $\mathbf{D}$ is the diagonal node degree matrix of $\mathcal{A}$, and $\mathbf{W}\in\mathbb{R}^{N\times k}$ is the weight matrix of the GCN. \ul{\textit{Step-3}}: we average $Z$ column-wise to obtain the state representation $\mathcal{R}_{gae}(\mathcal{F}) \in \mathbb{R}^{1\times k}$. To obtain the state representation of an operation $o \in \mathcal{O}$, we use a one-hot lookup table, denoted as $\mathcal{R}_{o}(o)$.
\vspace{-0.2cm}
\subsection{Dataset Evaluator}
We introduce two utility metrics, from the supervised and unsupervised perspectives, to evaluate the generated features and obtain the reward for the cascading agents.
\noindent\textbf{Downstream Task:} We utilize the improvement of a downstream task (e.g., regression, classification) as the metric.
In detail, we train a random forest based model and use a task-specific indicator $P_A(\mathcal{F}, y)$ (e.g., 1-RAE, Precision, Recall, F1) as the utility metric of a feature set.
\noindent\textbf{Feature Quality:} We should also consider the quality of the generated features when evaluating the dataset. In detail, we expect the generated features to contain less redundancy and more relevance to the target. Thus, we apply mutual information to measure the feature quality:
\begin{equation}
\small
U(\mathcal{F}|y) = -\frac{1}{|\mathcal{F}|^2}\sum_{f_i, f_j \in \mathcal{F}} \text{MI}(f_i, f_j) + \frac{1}{|\mathcal{F}|}\sum_{f\in \mathcal{F}}\text{MI}(f,y),
\end{equation}
where $f_i, f_j, f$ are features in $\mathcal{F}$ and $|\mathcal{F}|$ is the size of the feature set $\mathcal{F}$.
\vspace{-0.2cm}
\subsection{Cascading Agents}
We model the process of choosing one or two cluster(s) and the most suitable operation as three Markov decision processes (MDPs) and develop three agents.
\noindent\textbf{Head Agent:} The head cluster selection agent $\text{Agent}_h$ considers two factors: 1) the overall state $s_f$ of the input feature set $\mathcal{F}$; 2) the state $s_{c_i}$ of each cluster $c_i$.
\noindent\ul{\textit{Actor}}: The formal definition of the actor network is:
\vspace{-0.1cm}
\begin{equation}
c_h \sim \pi_h(\left[s_f\oplus s_{c_i} \right]_{i=1}^{|C|}; \theta_{\pi_h}),
\vspace{-0.1cm}
\end{equation}
where $\pi_h(\cdot)$ is a two-layer feed-forward network with a softmax activation that outputs a probability distribution over the actions. $\theta_{\pi_h}$ is the learnable parameter. The $\oplus$ operation concatenates the overall state with each specific cluster state. We use $\sim$ to denote that we sample the cluster $c_h$ from the output distribution of the policy network $\pi_h(\cdot)$.
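To make the actor concrete, the sampling step above can be sketched in NumPy. This is a minimal sketch under stated assumptions: the paper only specifies a two-layer feed-forward network with softmax, so the ReLU hidden layer, the toy dimensions, and the random initialization below are illustrative choices, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def two_layer_scores(X, W1, b1, W2, b2):
    """Shared two-layer feed-forward network: one scalar logit per input row."""
    h = np.maximum(0.0, X @ W1 + b1)  # ReLU hidden layer (illustrative choice)
    return (h @ W2 + b2).ravel()

def head_actor(s_f, cluster_states, params):
    """pi_h: score every concatenation [s_f (+) s_ci], softmax over clusters,
    then sample the head cluster index from the resulting distribution."""
    X = np.stack([np.concatenate([s_f, s_c]) for s_c in cluster_states])
    logits = two_layer_scores(X, *params)
    probs = np.exp(logits - logits.max())  # numerically stable softmax
    probs /= probs.sum()
    return rng.choice(len(cluster_states), p=probs), probs

# toy sizes: state dimension 8, hidden width 16, three candidate clusters
d, H = 8, 16
params = (rng.normal(size=(2 * d, H)) * 0.1, np.zeros(H),
          rng.normal(size=(H, 1)) * 0.1, np.zeros(1))
s_f = rng.normal(size=d)
cluster_states = [rng.normal(size=d) for _ in range(3)]
c_h, probs = head_actor(s_f, cluster_states, params)
```

Scoring each $s_f\oplus s_{c_i}$ with a shared network lets the same parameters handle a varying number of clusters, which matters because $|C|$ changes across generation steps.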
\noindent\ul{\textit{Critic}}: The critic considers the overall state $s_f$, defined as: $v_h = V_h(s_f; \theta_{v_h})$, where $V_h(\cdot)$ is a two-layer feed-forward network with parameter $\theta_{v_h}$. The output $v_h$ is the value of the state.
\noindent\ul{\textit{Reward}}: The reward of $\text{Agent}_h$ is the generated feature quality, denoted by: $r_h = U(\mathcal{F}'|y)$.
\noindent\textbf{Operation Agent:} $\text{Agent}_o$ picks an operation $o \in \mathcal{O}$ according to the state $s_{h}$ of the head cluster.
\noindent\ul{\textit{Actor}}: The definition of the actor network is:
\vspace{-0.2cm}
\begin{equation}
op \sim \pi_o(s_{h}; \theta_{\pi_o}),
\vspace{-0.2cm}
\end{equation}
where $\pi_o(\cdot)$ is a two-layer feed-forward network with a softmax activation. $\theta_{\pi_o}$ is a learnable parameter. $s_{h}$ is the state representation of the selected head cluster $c_h$. The agent samples an operation $op$ from the output distribution of $\pi_o(s_{h})$.
\noindent\ul{\textit{Critic}}: We adopt a two-layer feed-forward network with parameter $\theta_{v_o}$, denoted as $v_o = V_o(s_{h}; \theta_{v_o})$, to evaluate the value of $s_{h}$.
\noindent\ul{\textit{Reward}}: The reward of $\text{Agent}_o$ considers both the generated feature quality and the downstream task performance, denoted by: $r_o = U(\mathcal{F}'|y) + P_A(\mathcal{F}', y) - P_A(\mathcal{F}, y)$.
\noindent\textbf{Tail Agent:} For the tail cluster selection agent, we consider the overall state $s_f$, the state of the head cluster $s_{h}$, the state of the operation $s_o$, and each cluster's state $\{s_{c_i}\}_{i=1}^{|C|}$ to decide the tail cluster selection.
\noindent\ul{\textit{Actor}}: The formal definition of the policy network is:
\vspace{-0.2cm}
\begin{equation}
c_t \sim \pi_t(\left[s_f\oplus s_o\oplus s_h\oplus s_{c_i} \right]_{i=1}^{|C|};\theta_{\pi_t}),
\vspace{-0.2cm}
\end{equation}
where $\pi_t(\cdot)$ is a two-layer feed-forward network with a softmax activation.
$\theta_{\pi_t}$ is the learnable parameter. During decision-making, the agent samples the tail cluster $c_t$ from the output distribution.
\noindent\ul{\textit{Critic}}: The critic considers the overall state $s_f$, the head cluster state $s_h$, and the chosen operation state $s_o$, defined as: $v_t = V_t(s_f \oplus s_h \oplus s_o; \theta_{v_t})$, where $V_t(\cdot)$ is a two-layer feed-forward network with parameter $\theta_{v_t}$. $v_t$ is the value of the current state.
\noindent\ul{\textit{Reward}}: The reward of $\text{Agent}_t$ is the generated feature quality, denoted by: $r_t = U(\mathcal{F}'|y)$.
\vspace{-0.2cm}
\subsection{Model Training}
Suppose that during exploration, for each agent, we obtain a memory $\mathcal{M} = (a, s, s', r)$, where $a$ is the selected action (e.g., $c_h$ for $\text{Agent}_h$, $op$ for $\text{Agent}_o$, and $c_t$ for $\text{Agent}_t$), $s$ and $s'$ are the states of the current step and the next step, and $r$ is the reward for adopting action $a$. We minimize the temporal difference error $L$, given by:
\vspace{-0.1cm}
\begin{equation}
\begin{aligned}
\label{Q_update}
\delta =& r + \gamma V(s'; \theta_v) - V(s; \theta_v)\\
L(\mathcal{M}; \theta_\pi, \theta_v)& = \log\pi (a|s;\theta_\pi) \cdot \delta + \beta H(\pi (s; \theta_\pi))
\vspace{-0.1cm}
\end{aligned}
\end{equation}
where $\gamma \in [0, 1]$ is the discount factor, $\delta$ is the advantage value, and $H(\cdot)$ is an entropy regularization term whose strength is controlled by $\beta$. During model exploration, we obtain the memories $\mathcal{M}_h$, $\mathcal{M}_o$, $\mathcal{M}_t$ for the cascading agents. Thus, we can formulate the loss function for the cascading agents as:
\begin{equation}
\begin{aligned}
\small
\mathcal{L} = \sum_{i=\{h,o,t\}} L(\mathcal{M}_i; \theta_{\pi_i}, \theta_{v_i})
\end{aligned}
\end{equation}
\vspace{-0.7cm}
\subsection{Model Overview}
We present the proposed framework RAFT as illustrated in Figure~\ref{model_overall}.
We demonstrate the technical details of each component in the following sections.
\vspace{-0.3cm}
\subsection{Feature Space Clustering}
To efficiently reconstruct the feature space and provide strong reward signals to the agents, we propose a feature space clustering component. It divides the feature set into different feature groups, which builds a foundation for conducting group-wise feature generation.
\smallskip
\noindent\textbf{Features-Group Distance Function:} We propose this function to measure the similarity between two clusters of features. Suppose we have two clusters $c_i$ and $c_j$; the formal definition of the features-group distance function is given by:
\begin{equation}
\begin{aligned}
\small
\label{dis}
\mathcal{D}&(c_i, c_j)= \\
&\frac{1}{|c_i|\cdot|c_j|} \sum_{f_i\in c_i}\sum_{f_j\in c_j}d(f_i,f_j)|I(f_i,y)-I(f_j,y)|,
\end{aligned}
\vspace{-0.2cm}
\end{equation}
where $d(\cdot)$ is a generic pair-wise distance function (e.g., the Euclidean distance, cosine similarity, etc.) and $I(\cdot)$ is the pairwise mutual information (PMI). The left part of Equation~\ref{dis} (i.e., $d(f_i,f_j)$) aims to quantify the numeric difference between $f_i$ and $f_j$. The right part (i.e., $|I(f_i,y)-I(f_j,y)|$) aims to quantify the relevance difference between the distinct features $f_i, f_j$ and the target $y$. This function seeks to aggregate features with similar information and the same contribution to differentiating the target label, because our assumption is that highly (lowly) informative features are generated by crossing more distinct (similar) features.
\iffalse
\noindent1) \textit{Euclidean Distance Based:} The first method tries to measure the similarity between two clusters by the Euclidean distance. We assume that the clustering method should merge the two clusters of features with the smallest absolute numerical difference into one cluster.
The euclidean distance function between two clusters can be defined as: \vspace{-0.2cm} \begin{equation} \begin{aligned} \small d_{e}(c_i, c_j)= \frac{1}{|c_i|\cdot|c_j|}\sum_{f_i\in c_i}\sum_{f_j\in c_j} \sqrt{\sum(f_i-f_j)^2}, \end{aligned} \vspace{-0.2cm} \end{equation} \noindent2) \textit{Cosine Similarity Based:} Another idea to measure the distance is to utilize the cosine similarity. Compared to the euclidean distance, the cosine similarity will be more sensitive to the feature columns with little numeric difference. The definition of cosine similarity based features-group distance functions is defined by: \vspace{-0.2cm} \begin{equation} \begin{aligned} \small d_{c}(&c_i, c_j)= \frac{1}{|c_i|\cdot|c_j|}\sum_{f_i\in c_i}\sum_{f_j\in c_j} (1 - \frac{f_i\cdot f_j}{||f_i||\cdot||f_j||}), \end{aligned} \vspace{-0.2cm} \end{equation} \noindent3) \textit{Mutual Information Based:} The last method is to utilize the mutual information (MI) between the column of the feature and its related label. We assume that two clusters with similar relevance to the target should have a smaller distance. Thereby, the MI based features-group distance functions $d_{m}(\cdot)$ is defined as: \begin{equation} \begin{aligned} \small \label{mi_dis} d_{m}(c_i, c_j)= \frac{1}{|c_i|\cdot|c_j|} \sum_{f_i\in c_i}\sum_{f_j\in c_j} \frac{|\text{MI}(f_i,y)-\text{MI}(f_j,y)|}{\text{MI}(f_i,f_j)+\epsilon}, \end{aligned} \vspace{-0.2cm} \end{equation} where the $\text{MI}(\cdot)$ is the function to obtain the mutual information between two vectors. $|c_i|$ and $|c_j|$ are the total number of two clusters. In Equation~\ref{mi_dis}, the numerator (i.e., $|\text{MI}(f_i,y)-\text{MI}(f_j,y)|$) aims to quantify the relevancy between the feature columns and the target. The denominator (i.e., $\text{MI}(f_i,f_j)+\epsilon$) quantifies the redundancy between $f_i$ and $f_j$. The $\epsilon$ is a small constant to avoid the divide by zero error. 
If the numerator is small, $f_i$ and $f_j$ have a similar influence on the classification of $y$; if the denominator is large, $f_i$ and $f_j$ share more information. Both cases lead the function to assign a smaller distance to the two feature columns.
\fi
\smallskip
\noindent\textbf{Feature-Group (FG) Clustering:} Variable feature space sizes make it inappropriate to employ K-means or density-based clustering techniques during feature generation. We propose an FG-Clustering algorithm inspired by agglomerative clustering. Specifically, given a feature set $\mathcal{F}_t$ at the $t$-th step, we first initialize each feature column in $\mathcal{F}_t$ as a cluster. Then, we use the features-group distance function to calculate the distance between any two feature clusters. After that, we merge the two closest clusters to generate a new cluster and remove the former ones. We reiterate this process until the smallest distance between any two clusters exceeds a certain threshold. Finally, we cluster $\mathcal{F}_t$ into different feature groups, defined as $C_t=\{c_i\}_{i=1}^{|C|}$.
\vspace{-0.3cm}
\subsection{State Representation for Feature Space and Operation}
To help the cascading agents understand the current feature space for effective policy learning, we need to extract meaningful information from the space and use it as the state representation. The assumption is that an effective state representation must not only capture the knowledge of the feature space but also comprehend the correlations between features. To achieve this goal, we introduce three state representation methods from different perspectives. To ease description, in the following parts, suppose we are given the feature set $\mathcal{F}\in \mathbb{R}^{M\times N}$, where $M$ is the number of samples and $N$ is the number of feature columns.
\smallskip \noindent\textbf{Statistical Information (si):} We utilize statistical information (\textit{i.e.,} count, standard deviation, minimum, maximum, and the first, second, and third quartiles) of the feature space as the state representation. Specifically, we first obtain the descriptive statistics matrix of $\mathcal{F}$ column by column. Then, we calculate the descriptive statistics of the resulting matrix row by row to obtain the meta-descriptive matrix of shape $\mathbb{R}^{7\times 7}$. Finally, we obtain the state representation by flattening the descriptive matrix from the former step. The state representation is defined as $\mathcal{Z}_{si}(\mathcal{F})\in \mathbb{R}^{1\times 49}$. \smallskip \noindent\textbf{Autoencoder (ae):} We propose an autoencoder-based state representation approach, based on the assumption that an effective state representation should be able to reconstruct the original feature space. Specifically, we first apply an autoencoder to transform each column of $\mathcal{F}$ into a latent matrix $Z \in \mathbb{R}^{k\times N}$, where $k$ is the dimension of the latent representation of each column. Then, we apply another autoencoder to transform each row of $Z$ into another matrix $Z' \in \mathbb{R}^{k\times d}$, where $d$ is the dimension of the latent representation of each row. After that, we use $Z'$ to reconstruct the original feature space $\mathcal{F}$. When the model converges, we flatten $Z'$ into a one-dimensional vector and regard it as the state representation, denoted by $\mathcal{Z}_{ae}(\mathcal{F}) \in \mathbb{R}^{1 \times kd}$. \smallskip \noindent\textbf{Graph Autoencoder (gae):} In addition to reconstructing the feature space, we expect to preserve feature-feature correlations in the state representation. Thus, we propose a graph autoencoder~\cite{https://doi.org/10.48550/arxiv.1611.07308} based state representation approach.
Specifically, we first build a complete correlation graph $\mathcal{G}$ by calculating the similarity between each pair of feature columns. The adjacency matrix of $\mathcal{G}$ is $\mathcal{A} \in \mathbb{R}^{N\times N}$, where a node is a feature column in $\mathcal{F}$ and an edge reflects the similarity between two nodes. Then, we adopt a one-layer GCN~\cite{kipf2016semi} to aggregate the feature knowledge of $\mathcal{F}$ based on $\mathcal{A}$ and produce an enhanced feature embedding $Z\in \mathbb{R}^{N\times k}$, where $k$ is the dimension of the latent embedding. The calculation is defined as follows: $ Z = ReLU(\mathbf{D}^{-\frac{1}{2}}\mathcal{A}\mathbf{D}^{-\frac{1}{2}}\mathcal{F}^{\top}\mathbf{W}), $ where $\mathbf{D}$ is the diagonal degree matrix of $\mathcal{A}$, and $\mathbf{W}\in\mathbb{R}^{M\times k}$ is the weight matrix of the GCN. Finally, we average $Z$ column-wise to obtain the state representation, denoted by $\mathcal{Z}_{gae}(\mathcal{F}) \in \mathbb{R}^{1\times k}$. For the selected mathematical operation, we use its one-hot vector as the state representation, denoted by $\mathcal{Z}_{o}(op)\in \mathbb{R}^{1\times |\mathcal{O}|}$. \vspace{-0.2cm} \subsection{Feature Space Evaluator} We evaluate the quality of the feature space and provide reward signals to the reinforced agents so that they learn better feature transformation policies. We assess the feature space from the following two feature utility perspectives: \smallskip \noindent\textbf{Downstream Task Evaluation:} We utilize the improvement of a downstream task (e.g., regression, classification, outlier detection) as one feature utility measurement. In detail, we run a downstream ML task with a task-specific indicator (e.g., 1-RAE, Precision, Recall, F1) to obtain the downstream task performance on the feature space, denoted by $P_A(\mathcal{F}, y)$.
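The gae state representation can be sketched as follows. This illustration makes several assumptions: the pairwise similarity is taken as the absolute Pearson correlation, the GCN weight matrix is random (untrained), and its shape is $M \times k$ so that the product $\mathcal{A}\mathcal{F}^{\top}\mathbf{W}$ is well-defined.

```python
import numpy as np

def state_gae(F, k, seed=0):
    # Build the correlation graph over the N feature columns, apply a
    # single (untrained) GCN layer with ReLU, then average column-wise.
    M, N = F.shape
    A = np.abs(np.corrcoef(F, rowvar=False))            # (N, N) similarity
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(axis=1)))  # D^(-1/2)
    A_hat = d_inv_sqrt @ A @ d_inv_sqrt
    W = np.random.default_rng(seed).standard_normal((M, k))  # GCN weight
    Z = np.maximum(A_hat @ F.T @ W, 0.0)                # ReLU, shape (N, k)
    return Z.mean(axis=0)                               # state, shape (k,)
```

The resulting $k$-dimensional vector summarizes both the feature values and their pairwise correlation structure, which is what the agents consume as state.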
\smallskip \noindent\textbf{Feature Space Quality:} We also expect the generated feature space to contain less redundant information and be more relevant to the target label. Thus, we customize a feature space quality metric based on mutual information, defined as: \begin{equation} \small U(\mathcal{F}|y) = -\frac{1}{|\mathcal{F}|^2}\sum_{f_i, f_j \in \mathcal{F}} I(f_i, f_j) + \frac{1}{|\mathcal{F}|}\sum_{f\in \mathcal{F}}I(f,y), \end{equation} where $f_i, f_j, f$ are distinct features in $\mathcal{F}$, $I$ refers to the mutual information function, and $|\mathcal{F}|$ is the size of the feature set $\mathcal{F}$. \subsection{Cascading Agents} To intelligently select suitable features and operations for feature crossing, we decompose the selection process into three Markov Decision Processes (MDPs) that cascade to sequentially select the first feature cluster, the mathematical operation, and the second feature cluster. We develop a cascading actor-critic agent structure to ensure that the three agents collaborate with each other. Figure~\ref{model_overall}(c) shows the model structure. To ease the description, we use the $t$-th iteration as an example to illustrate the calculation process. Assuming the feature set is $\mathcal{F}_t$ with feature clusters $\mathcal{C}_t$, we aim to obtain the next feature space $\mathcal{F}_{t+1}$ by generating new features $g_t$. \smallskip \noindent\textbf{First Feature Cluster Agent:} $\text{Agent}_1$ selects the first candidate feature group. Its learning system includes the following: \ul{\textit{State}:} the state is the embedding vector of the current feature space $\mathcal{F}_t$, denoted by $\mathcal{S}^1_t=s^f_t$, where $s^f_t=\mathcal{Z}(\mathcal{F}_t)$. \ul{\textit{Action}:} the action is the first candidate feature group $c^h_t$ selected by $\text{Agent}_1$ from $\mathcal{C}_t$, denoted by $a^1_t=c^h_t$.
\ul{\textit{Reward}:} the reward is the feature space quality score of the selected first feature group, denoted by $r^1_t = U(c^h_t|y)$. \smallskip \noindent\textbf{Operation Agent:} $\text{Agent}_o$ selects a candidate mathematical operation. Its learning system includes: \ul{\textit{State}:} the state is the combination of the current feature space $\mathcal{F}_t$ and the selected first feature group $c^h_t$, denoted by $\mathcal{S}^o_t=s^f_t\oplus s^{1}_t$, where $\oplus$ denotes row-wise concatenation and $s^{1}_t = \mathcal{Z}(c^h_t)$. \ul{\textit{Action}:} the action is the candidate operation $op_t$ selected by $\text{Agent}_o$ from the operation set $\mathcal{O}$, denoted by $a^o_t=op_t$. \ul{\textit{Reward}:} the reward combines the performance improvement of the downstream task and the quality score of the newly generated feature space, denoted by $r^o_t = U(\mathcal{F}_{t+1}|y) + P_A(\mathcal{F}_{t+1}, y) - P_A(\mathcal{F}_t, y)$. \smallskip \noindent\textbf{Second Feature Cluster Agent:} $\text{Agent}_2$ selects the second candidate feature group. \ul{\textit{State}:} the state is the combination of the embedding of the current feature space $s^f_t$, the first candidate feature group $s^{1}_t$, and the selected operation $s^o_t$, denoted by $\mathcal{S}^2_t=s^f_t\oplus s^{1}_t\oplus s^o_t$, where $s^o_t=\mathcal{Z}_o(op)$. \ul{\textit{Action}:} the action is the second candidate feature group selected by $\text{Agent}_2$ from $\mathcal{C}_t$, denoted by $a^2_t=c^l_t$. \ul{\textit{Reward}:} the reward is the quality score of the newly generated feature space $\mathcal{F}_{t+1}$, denoted by $r^2_t = U(\mathcal{F}_{t+1}|y)$. \smallskip \noindent\textbf{Feature Group(s) Crossing:} Given the two candidate feature groups and the selected operation, we cross the feature groups to create new features that refine the feature space. Based on the type of operation $op_t$, we propose two feature generation strategies to produce the new features $g_t$:
\vspace{-0.3cm} \begin{equation} \vspace{-0.2cm} g_t = \begin{cases} op_t(c^h_t) : \text{if }op_t\text{ is unary}\\ op_t(c^h_t, c^l_t) : \text{if }op_t\text{ is binary} \end{cases}. \end{equation} Specifically, if $op_t$ is unary (\textit{e.g.,} square, sqrt), we apply it to the first selected feature group; if $op_t$ is binary (\textit{e.g.,} plus, divide), we apply it to both candidate feature groups. Then, $g_t$ is added to $\mathcal{F}_t$ to form the new feature set $\mathcal{F}_{t+1}$. If the feature space size exceeds a maximum threshold, redundant features are eliminated via feature selection to control the feature space size. We reiterate the feature transformation process until the optimal feature set $\mathcal{F}^*$ is found or the maximum number of iterations is reached. \vspace{-0.2cm} \subsection{Actor-Critic Optimization Strategy} We adopt the same actor-critic training strategy for all three agents so that they learn effective feature transformation policies. \smallskip \noindent The actor-critic paradigm consists of two components: \smallskip \noindent\ul{\textit{Actor}}: The actor learns the selection policy $\pi(\cdot)$ based on the current state in order to select suitable candidate feature groups or operations. At the $t$-th iteration, given state $\mathcal{S}_t$, the agent picks an action $a_t$, defined as: \begin{equation} \label{actor} a_t \sim \pi_\theta(\mathcal{S}_{t}), \end{equation} where $\theta$ is the parameter of the policy network $\pi$, the output of $\pi_\theta(\cdot)$ is a probability distribution over candidate actions, and $\sim$ denotes sampling. \smallskip \noindent\ul{\textit{Critic}}: The critic estimates the potential reward of an input state, given by: \begin{equation} v_t = V(\mathcal{S}_t), \end{equation} where $V(\cdot)$ is the state-value function and $v_t$ is the estimated value.
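As a numerical illustration of this actor-critic optimization, the following sketch computes the critic's temporal-difference loss and the advantage-weighted actor objective used later in this subsection, in plain NumPy. The array names, discount factor, and entropy weight are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def critic_loss(rewards, values, next_values, gamma=0.9):
    # L_c: mean squared temporal-difference error of the critic's
    # state-value estimates over the n explored steps.
    td_target = rewards + gamma * next_values
    return float(np.mean((td_target - values) ** 2))

def actor_objective(probs, actions, rewards, values, next_values,
                    gamma=0.9, beta=0.01):
    # L_a: advantage-weighted log-probability of the chosen actions
    # plus an entropy bonus; in practice its negative is minimized.
    delta = rewards + gamma * next_values - values      # advantage
    log_p = np.log(probs[np.arange(len(actions)), actions])
    entropy = -np.sum(probs * np.log(probs), axis=1)    # H(pi)
    return float(np.mean(log_p * delta + beta * entropy))
```

Each of the three cascading agents would maintain its own policy and value networks; the sketch only shows how the two loss terms are assembled from a batch of collected memories.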
\smallskip \noindent We update the \textit{Actor} and \textit{Critic} of the cascading agents after each iteration of feature transformation. At the $t$-th iteration, each agent stores a memory $\mathcal{M}_t = (a_t, \mathcal{S}_t, \mathcal{S}_{t+1}, r_t)$. The policy gradient is then given by: \begin{equation} \nabla J(\theta)_t = \nabla_\theta \log\pi_\theta(a_t|\mathcal{S}_t)(Q(\mathcal{S}_t, a_t) - V(\mathcal{S}_t)), \label{pg} \end{equation} where $(Q(\mathcal{S}_t, a_t) - V(\mathcal{S}_t))$ is the advantage function ($\delta$) and $\pi_\theta(a_t|\mathcal{S}_t)$ denotes the probability of the selected action $a_t$. $Q(\mathcal{S}_t, a_t)$ can be estimated from the state-value function (i.e., the \textit{Critic}) and the reward of the current step: \begin{equation} Q(\mathcal{S}_t, a_t)\approx r_t + \gamma V(\mathcal{S}_{t+1}), \end{equation} where $\gamma \in [0, 1]$ is the discount factor. During the training phase, suppose RAFT has explored the feature transformation graph for $n$ steps and collected the corresponding memories. We first optimize the parameters of the \textit{Critic} to provide a more precise state-value estimation by minimizing: \begin{equation} \mathcal{L}_c = \frac{1}{n}\sum_{i=1}^n(r_i + \gamma V(\mathcal{S}_{i+1}) - V(\mathcal{S}_i))^2. \end{equation} After that, we optimize the policy of the \textit{Actor} based on Equation~\ref{pg}: \begin{equation} \mathcal{L}_a = \frac{1}{n}\sum_{i=1}^n ( \log\pi_\theta (a_i|\mathcal{S}_i) \cdot \delta + \beta H(\pi_\theta (\mathcal{S}_i))), \end{equation} where $\delta$ is the advantage function and $H(\cdot)$ is an entropy regularization term that increases the randomness of exploration, with $\beta$ controlling its strength. The overall loss function for each agent is: \begin{equation} \mathcal{L} = \mathcal{L}_c + \mathcal{L}_a.
\end{equation} After the agents converge, we expect to discover the optimal policy $\pi^*$ that chooses the most appropriate action (\textit{i.e.,} feature group or operation). \subsection{Data Description} \vspace{-0.1cm} We used 17 publicly available datasets from UCI~\cite{uci}, LibSVM~\cite{libsvm}, Kaggle~\cite{kaggle}, and OpenML~\cite{openml} to conduct experiments. The 17 datasets cover 6 classification tasks, 7 regression tasks, and 4 outlier detection tasks. Table \ref{table_overall_perf} shows the statistical information of these datasets. We also categorized the datasets into \textit{High} (above 5), \textit{Mid} (0.01 to 5), and \textit{Low} (0 to 0.01) based on the standard deviation of the feature set; a larger standard deviation indicates a wider value range, and vice versa. \subsection{Evaluation Metrics} \vspace{-0.3cm} We used F1-score, Precision, Recall, and ROC/AUC to evaluate classification tasks. We used 1-Relative Absolute Error (1-RAE)~\cite{wang2022group}, 1-Mean Average Error (1-MAE), 1-Mean Square Error (1-MSE), and 1-Root Mean Square Error (1-RMSE) to evaluate regression tasks. We adopted ROC/AUC, Mean Average Precision (MAP), F1-score, and Recall to assess outlier detection tasks. \vspace{-0.3cm} \subsection{Baseline Algorithms} \label{baseline} We compared RAFT with seven widely used feature engineering methods: (1) \textbf{RFG} randomly selects candidate features and operations to generate new features, without any policy learning; (2) \textbf{ERG} is an expansion-reduction method that applies operations to all features to expand the feature space and then selects critical features to form the new feature space; (3) \textbf{LDA}~\cite{blei2003latent} extracts latent features from the feature set via matrix factorization; (4) \textbf{AFT}~\cite{horn2019autofeat} is an enhanced ERG implementation that iteratively explores the feature space and adopts multi-step feature selection to reduce redundant features.
(5) \textbf{NFS}~\cite{chen2019neural} mimics a feature transformation path for each feature and optimizes the entire transformation process via reinforcement learning; (6) \textbf{TTG}~\cite{khurana2018feature} records the feature transformation process in a transformation graph and then uses reinforcement learning to explore the graph for the best feature set; (7) \textbf{GRFG}~\cite{wang2022group} is an automatic feature generation method optimized through DQN. \begin{figure*}[!h] \centering \subfigure[PimaIndian]{ \includegraphics[width=5.2cm]{fig/state/pima.pdf} } \hspace{-3mm} \subfigure[OpenML\_586]{ \includegraphics[width=5.2cm]{fig/state/open586.pdf} } \hspace{-3mm} \vspace{-3mm} \subfigure[WBC]{ \includegraphics[width=5.2cm]{fig/state/wbc.pdf} } \hspace{-3mm} \subfigure[Wine Quality Red]{ \includegraphics[width=5.2cm]{fig/state/Wineqr.pdf} } \hspace{-3mm} \subfigure[OpenML\_618]{ \includegraphics[width=5.2cm]{fig/state/open618.pdf} } \hspace{-3mm} \subfigure[Thyroid]{ \includegraphics[width=5.2cm]{fig/state/thyroid.pdf} } \vspace{-0.35cm} \caption{Comparison of different state representation methods.} \label{state_study} \vspace{-0.45cm} \end{figure*} \begin{figure*}[!h] \centering \subfigure[SVMGuide3]{ \includegraphics[width=5.2cm]{fig/method/SVMGuide3.pdf} } \hspace{-3mm} \subfigure[OpenML\_586]{ \includegraphics[width=5.2cm]{fig/method/open586.pdf} } \hspace{-3mm} \subfigure[Thyroid]{ \includegraphics[width=5.2cm]{fig/method/Thyroid.pdf} } \vspace{-0.35cm} \caption{Model convergence with other reinforcement learning methods as the backbone.} \label{method_study} \vspace{-0.7cm} \end{figure*} \vspace{-0.3cm} \subsection{RQ1: Overall Performance} This experiment aims to answer: \textit{Can RAFT effectively improve the quality of the original feature space?} Table~\ref{table_overall_perf} shows the overall performance of all models on all datasets. We observe that RAFT significantly outperforms the other baselines.
This observation indicates the effectiveness of our work in feature space reconstruction. Another interesting observation is that RAFT beats RFG in most cases, which validates that reinforced agents can model feature engineering knowledge and learn better transformation policies than a random generation strategy. We also find that RAFT is superior to the non-group-wise feature generation frameworks (\textit{i.e.,} NFS and TTG). The underlying driver is that group-wise feature generation can efficiently refine the feature space and provide strong reward signals for the reinforced agents to learn more intelligent policies. Moreover, the superiority of RAFT over GRFG indicates that the actor-critic training strategy learns more robust and effective policies than DQN-based agents. \vspace{-0.3cm} \subsection{RQ2: Study of the Distance Function} This experiment aims to answer: \textit{How do different distance functions affect the quality of the reconstructed feature space?} We adopted the Euclidean distance and the cosine distance in the feature clustering component and observed the difference in model performance. Figure~\ref{dis_study} shows the performance comparison on different datasets. We find that the Euclidean distance outperforms the cosine distance on datasets with a high standard deviation, while the observation is the opposite on low-standard-deviation datasets. A possible reason is that the cosine distance is bounded, whereas the Euclidean distance lies in $[0, +\infty)$. Thus, the Euclidean distance may amplify the distances between feature groups on a high-standard-deviation dataset, producing more informative features to refine the feature space. This experiment therefore provides a strategy for customizing the feature distance for different datasets.
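The scale-sensitivity argument above can be checked numerically: scaling a feature column inflates the Euclidean distance proportionally while leaving the bounded cosine distance unchanged. The vectors below are arbitrary illustrative values, not taken from any of the benchmark datasets.

```python
import numpy as np

def euclid(a, b):
    # Euclidean distance between two feature columns.
    return float(np.sqrt(np.sum((a - b) ** 2)))

def cosine_dist(a, b):
    # Cosine distance (1 - cosine similarity), bounded by [0, 2].
    return float(1.0 - a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

a = np.array([1.0, 2.0, 3.0])
b = np.array([2.0, 1.0, 4.0])
# Scaling both columns by 10 multiplies the Euclidean distance by 10
# but leaves the cosine distance untouched.
```

This is consistent with the experimental finding: on high-standard-deviation datasets the Euclidean distance spreads the feature groups further apart, while the cosine distance is insensitive to the magnitude of the columns.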
\subsection{RQ3: Study of the State Representation Methods} \vspace{-0.4cm} This experiment aims to answer: \textit{How do different state representation approaches affect the reconstructed feature space?} Apart from the introduced state representation methods (i.e., \textit{si}, \textit{ae}, and \textit{gae}), we also tried combinations of them, namely \textit{si+ae}, \textit{si+gae}, \textit{ae+gae}, and \textit{all}. For each combined method, we concatenated the state representations produced by the individual approaches. Figure~\ref{state_study} shows the comparison results. We notice that \textit{gae} outperforms the other methods on most tasks. A possible reason is that \textit{gae} captures not only the knowledge of the feature set but also feature-feature correlations; it preserves more knowledge of the feature set in the state, which enables the agents to learn effective policies. Another interesting observation is that although the combination-based methods capture more characteristics of the feature space, their performance still does not surpass the others. A potential interpretation is that directly concatenating different states may introduce redundant and noisy information, leading the reinforced agents to learn suboptimal policies. \subsection{RQ4: Comparison with Value-based Approaches} \vspace{-0.4cm} This experiment aims to answer: \textit{How does the training strategy affect the quality of the refined feature space?} We developed three variants of RAFT, namely RAFT$_\text{DQN}$, RAFT$_\text{DDQN}$, and RAFT$_\text{DuelingDQN}$, by replacing the actor-critic agents with Deep Q-Network (DQN), Double DQN (DDQN), and Dueling DQN, respectively. Figure~\ref{method_study} shows the comparison results. We observe that RAFT has convergence efficiency comparable to $\text{RAFT}_{\text{DDQN}}$ and $\text{RAFT}_{\text{DuelingDQN}}$. Moreover, RAFT significantly outperforms the other model variants.
A possible reason is that actor-critic agents directly optimize the transformation policies and thus explore high-dimensional feature space transformation tasks more extensively than the other baselines. \vspace{-0.3cm} \subsection{RQ5: The Traceability of Automatic Feature Generation} This experiment aims to answer: \textit{Is the feature space generated by RAFT traceable?} We selected the ``Wine Quality Red'' dataset as an example to demonstrate traceability. We visualized the original and generated features in Figure~\ref{traceable_study}, where the size of each sector represents the importance of the corresponding feature. We find that `alcohol' in the original dataset is far more important than the other features, whereas the generated feature set has a much more balanced importance distribution. Meanwhile, we can easily trace the transformation process of each generated feature from its name. For instance, the most important column in the generated feature set is ``alcohol$-$residual sugar'', which is produced from the two original features ``alcohol'' and ``residual sugar''. \begin{figure}[!h] \centering \subfigure[The Original Feature]{ \includegraphics[width=0.42\linewidth]{fig/traceable/winerBefore.pdf} } \subfigure[The Generated Feature]{ \includegraphics[width=0.42\linewidth]{fig/traceable/winer.pdf} } \vspace{-0.3cm} \caption{The illustration of model traceability.} \label{traceable_study} \vspace{-0.7cm} \end{figure} \vspace{-0.3cm} \section{Related Works} \noindent\textbf{Reinforcement Learning (RL)} is the study of how intelligent agents should act in a given environment in order to maximize the expectation of cumulative rewards~\cite{sutton2018reinforcement}. According to the learned policy, reinforcement learning algorithms can be classified into two categories: value-based and policy-based.
Value-based algorithms (\textit{e.g.,} DQN~\cite{mnih2013playing}, Double DQN~\cite{van2016deep}) estimate the value of a state or state-action pair for action selection. Policy-based algorithms (\textit{e.g.,} PG~\cite{sutton2000policy}) learn a probability distribution mapping states to actions. Additionally, the actor-critic reinforcement learning framework was proposed to combine the advantages of value-based and policy-based algorithms~\cite{schulman2017proximal}. In recent years, RL has been applied to many domains (e.g., spatio-temporal data mining, recommender systems) and has achieved great success~\cite{wang2022reinforced,wang2022multi}. In this paper, we adopt an actor-critic-based method to construct the cascading agents. \noindent\textbf{Automated Feature Engineering} aims to enhance the feature space through feature generation and feature selection in order to improve the performance of machine learning models~\cite{chen2021techniques}. Feature selection removes redundant features and retains important ones, whereas feature generation creates and adds meaningful variables. \ul{\textit{Feature Selection}} approaches include: (i) filter methods (\textit{e.g}., univariate selection \cite{forman2003extensive}, correlation-based selection \cite{yu2003feature}), in which features are ranked by a specific score such as redundancy or relevance; (ii) wrapper methods (\textit{e.g.}, reinforcement learning~\cite{liu2021efficient}, branch and bound~\cite{kohavi1997wrappers}), in which the optimal feature subset is identified by a search strategy under a predictive task; (iii) embedded methods (\textit{e.g.}, LASSO \cite{tibshirani1996regression}, decision tree \cite{sugumaran2007feature}), in which selection is part of the optimization objective of a predictive task. \ul{\textit{Feature Generation}} methods include: (i) latent representation learning based methods, e.g.
deep factorization machines~\cite{guo2017deepfm} and deep representation learning~\cite{bengio2013representation}; because these methods generate a latent feature space, the extraction process is hard to trace and explain; (ii) feature transformation based methods, which use column-wise~\cite{khurana2018feature,chen2019neural} or group-wise~\cite{wang2022group, xiao2022self} arithmetic operations to generate new features. \vspace{-0.3cm}
The sexiest Mini ever made April 12, 2012 southcapenetRegular Columns For the first time in its long history the iconic Mini has chopped off its roof, thrown away its back seats, acquired a fold-down cloth top and adopted the name of Roadster – a metamorphosis which has turned a cute and trendy city car into a fully-fledged little sports car. Price and performance will attract a range of buyers. True to the sporty heritage of its BMW parent company the Roadster is far from just a little swanker with cosmetically enhanced go-faster looks. Powerful engines and sophisticated chassis technology, a low centre of gravity and a rigid body across the three-model range ensure go-kart cling and sizzling performance, all delightfully multiplied by the compactness of this little el fresco beauty. At the media introduction, I drove all three the new Roadster variants on sweeping, scenic and speedy stretches of Western Cape country roads, starting off in the Mini Cooper Roadster, then moving on into the S version until eventually sliding in behind the wheel of the John Cooper Works two-seater pocket-rocket. Better and better… Each step up on the wind-in-your-hair performance ladder became better and better and it was soon clear that the difference in price and performance will attract a range of buyers varying from young, trendy and probably singles, to slighter older, slightly more performance-minded and status-conscious drivers topping out with those drivers for whom nothing less than the hard-core, tyre-smoking best will do. One thing I am pleased about is that I went up the performance ladder and not down, because compared to the JCW model the 'standard' Cooper feels decidedly pedestrian with its 90kW/160Nm 1.6 engine and performance figures of 0-100km/h in 9.2 seconds and a top speed of 199km/h. 
Having said that, it is the least expensive of the three cars, it is just as cute as the other two and, except for the out of sight oily bits, has much of the same fancy kit as them. It handles, as you would expect from a Mini… sharp, predictable and confident. What's more, it sips only about 5.7 litres/100km and keeps emissions down to 133g/km. This model is perfectly suited to the city commuter; the price is right, it's fairly frugal and much too trendy and cute-looking to be ignored. I reckon it is likely to appeal to the young and slightly older markets alike. Add an S behind the Cooper's name and you get a four-cylinder petrol engine with MINI TwinPower Turbo technology that powers out 135kW and 240Nm (260Nm with overboost) that will do the 0-100km/h dash in 7 seconds and top out at about 227km/h. Meet 'Big Daddy' Then there is the Big Daddy of the Roadster gang, the Mini John Cooper Works Roadster (to give its full title) that comes armed with TwinPower turbo technology and race track-style tuning which gives it 155kW and 260Nm (280 with overboost from 2000rpm upwards), a 0-100km/h squirt time of 6.5 seconds and a top speed of 237km/h. This car is boutique-made for drivers who get their kicks when the pleasure pedal is right down on the metal, preferably on track days. Yet, as hot as it is, driving this Roadster on a public road is also genuinely a pleasure. It handles beautifully and unless the tarmac is badly decayed it is not even as jumpy or bumpy as some of its parents' other offspring. When just cruising along the JCW's sporty engine gurgles gently but feed it right-foot sole and it snarls and growls, warning you that this tiny beast wants to do what it does best – go like the clappers. But more than just run, it sticks, it drives like a go-kart and it howls with delight when you whip it. 
As a price-performance package it is one of the sexiest, most exciting cars I have driven in recent times and that says a lot because the day before taking the wheel of the new Mini Roadster I did some low flying in one of the world's ultimate track tamers, Nissan's awesome GT-R. The Roadster certainly stands out in the looks department, whether its soft top is up or down. It looks remarkably elegant for such a short, stubby car but mainly it shouts out FUN in capital letters. Its funky sportiness is emphasised by a low, crouching stance, attractive sloping windscreen, polished stainless steel roll-over bars, wraparound chrome trim strip at the base of the windows and roof, black border around the lower part of the body, shiny chrome strips and sporty alloys (in different sizes for the different models). Some nice eye candy But the centre of attraction undoubtedly is the classic British roadster-style black cloth roof that cascades down to fold flat behind the seats. Nice eye candy, along with a range of special Roadster black or silver Sport Stripes (Red is reserved exclusively for the JCW model). On the hoof, the Roadster's active rear spoiler extends automatically at 80km/h to add to the fun, cling and sporty appearance. The interior is jazzed up in typically Mini style, complete with the centrally mounted large speedo and a rev counter positioned directly behind the steering wheel. A wide range of interior sports seats is available; again, much depends on the model you opt for. The same goes for trim strips in different colours and even a Chrome Line Interior package. As with the BMW mother brand, there are numerous optional extras available for the Roadster and much depends on personal taste, bragging rights and degree of credit card meltdown. 
Safe, my mate… sorry, safe my Mini Along with peppy performance, spot-on make-up and pleasant living quarters, the Roadster looks after the occupants very well with a wide range of safety protection, comfort and convenience as well as driver-focused aids such as Electric Power Steering with speed-sensitive power assistance, Dynamic Stability Control as well as Dynamic Traction Control (DTC) with Electronic Differential Lock Control (EDLC), ABS/EBD and Cornering Brake Control, Hill Start Assistance and Park Distance Control. The MINI Cooper Roadster and MINI Cooper S Roadster can be specified with an optional six-speed automatic gearbox with Steptronic function as an alternative to the six-speed manual item fitted as standard across the MINI Roadster range. Also available as an option is a sports steering wheel with shift paddles, which enable the driver to change gears manually while keeping both hands on the steering wheel. A Sport Button on the centre console – standard in the case of the MINI John Cooper Works Roadster and an option on the other models – allows the driver to adjust the car's steering characteristics and accelerator responses. If the optional six-speed automatic gearbox is specified, pressing the Sport Button also shortens shift times. Here's a surprise… a boot that fits In terms of practicality the Roadster, rather unusually for cars of this nature, offers quite an impressive boot with a 240-litre luggage area which can accommodate a lot more than a week's shopping bags or naughty weekend away holdalls. The boot is surprisingly large for such a sporty little car. Also of huge appeal to today's always-connected generation will be the pulling powers of a classy audio system with MP3-compatible CD player and AUX IN connection plus a Harman Kardon hi-fi loudspeaker system, MINI navigation system with features such as a Driving Excitement app, web radio, Google services, RSS news feeds, Mission Control and the in-car use of Facebook and Twitter. 
A cool cat, this new Roadster: fast, lots of fun, impressive handler, damn pretty, safe, well-connected and ultimately, very, very desirable, even at a price range of between R300 000 and R400 000, give or take a few optional extras. Come on you darn Lotto bloodsuckers I want a new JCW Roadster very, very badly. Twice a week, for years, I have been buying tickets and all I ask you to do now is to draw my six numbers. Now how difficult can that be?
Q: Chrome extension XHR not working :) I am loading a Chrome extension into Facebook and I do a simple jQuery GET request to my own website. I get the following error in the console when the GET request is called... "Refused to connect to 'https://www.istyla.com/Popup/t2.php' because it violates the following Content Security Policy directive: "connect-src https://*.facebook.com http://*.facebook.com https://*.fbcdn.net http://*.fbcdn.net *.facebook.net *.spotilocal.com:* https://*.akamaihd.net ws://*.facebook.com:* http://*.akamaihd.net"." This happened all of a sudden. It worked yesterday... Here is part of my Chrome Extension Manifest with the CSP definition: "content_security_policy": "default-src 'self'; script-src 'self'; object-src 'self'; connect-src *" Here is my GET request (loaded via a content script - jQuery is also loaded as a separate content script): $.get("https://www.istyla.com/Popup/t2.php" + c, function (d) { //do my other stuff here } By the way... t2.php does allow all origins. Is it Facebook that set a CSP on their site?? What can I do to connect to my URL via jQuery GET? Thanks for any advice... :) A: I had the same issue with my script. I moved all my AJAX calls to a background script. A: I think a direct .get would not work cross-domain. Can you try using $.jsonp. http://www.jquery4u.com/json/jsonp-examples/
In episode 65, we welcome guest host Abner Brown (/AbnerBrown) to bring a true ode to the past era of hip-hop. We travel back in time to 8-tracks and breaks, which was truly the birthplace of this industry. We also discuss the controversial topic revolving around the father of trap music; Tip, Rick Ross, or Young Jeezy. Plus, we review the latest album from the iconic Kool G Rap: Return of the Don. This and a whole lot more on that #BoomBap.
Virgo is a constellation visible from the northern hemisphere. It is part of the zodiac and has the astrological sign ♍. It lies between Leo to the east and Libra to the west. Virgo is the second largest constellation after Hydra and can easily be found by its brightest star, Spica. The name of the constellation is a translation of its Latin designation, Virgo. Mythology It is not clear exactly what the Virgin represents. She is associated with almost every prominent goddess, including Ishtar, Isis, Cybele, the Virgin Mary (the mother of Jesus) and Athena. Virgo is also linked with Ursa Major and Ursa Minor, as part of the myth of Callisto, either as Callisto herself or as Hera. Persephone (who in some mythologies, most notably the Eleusinian Mysteries, is known as Demeter) is also counted among them, because the constellation Virgo is visible mainly in spring, when the goddess is believed to emerge from the underworld. According to some interpretations the constellation depicts Astraea, a daughter of Zeus and Themis. Astraea is known as the goddess of justice and is associated with this constellation through the scales of the neighbouring constellation Libra, with which she ruled the world with her wisdom, until humanity grew hard-hearted and she, indignant, returned to the heavens.
\section{Introduction} There has been a strong interest in correlated dijet production in various hadronic collisions~\cite{Abazov:2004hm,Abelev:2007ii,Khachatryan:2011zj,daCosta:2011ni,Aad:2010bu,Chatrchyan:2011sx,Adamczyk:2013jei,Adamczyk:2017yhe,Aaboud:2019oop}, where the two jets are produced mainly in the back-to-back configuration in the transverse plane, \begin{equation} A+B\to {\rm Jet}_1+{\rm Jet}_2+X \ . \label{dijet} \end{equation} Here $A$ and $B$ represent the two incoming hadrons with momenta $P_A$ and $P_B$, respectively. The azimuthal angle between the two jets is defined as $\phi=\phi_1-\phi_2$ with $\phi_{1,2}$ being the azimuthal angles of the two jets. In the leading order naive parton picture, the Born diagram yields a delta function at $\phi=\pi$. One-loop gluon radiation will lead to a singular distribution around $\phi=\pi$. This divergence arises when the total transverse momentum of the dijet (imbalance) is much smaller than the individual jet momentum, $q_\perp=|\vec{k}_{1\perp}+\vec{k}_{2\perp}|\ll |k_{1\perp}|\sim |k_{2\perp}|\sim P_T$, where large logarithms appear at every order of the perturbative calculation. In the kinematic region $q_\perp\ll P_T$, the appropriate resummation method that needs to be applied is the so-called transverse momentum dependent (TMD) resummation or the Collins-Soper-Sterman (CSS) resummation~\cite{Collins:1984kg}. There have been several theoretical efforts to resum the large logarithms for this process~\cite{Banfi:2003jj,Banfi:2008qs,Hautmann:2008vd,Chiu:2012ir,Mueller:2013wwa,Sun:2014gfa,Sun:2015doa,Chien:2020hzh}. The differential cross section can be written as, \begin{eqnarray} \frac{d^4\sigma} {d\Omega d^2q_{\perp}}=\sum_{abcd}\sigma_0\left[\int\frac{d^2\vec{b}_\perp}{(2\pi)^2} e^{i\vec{q}_\perp\cdot \vec{b}_\perp}W_{ab\to cd}(x_1,x_2,b_\perp)+Y_{ab\to cd}\right] \ , \end{eqnarray} where $d\Omega=dy_1 dy_2 d P_T^2$ represents the phase space of dijet production. 
Here $y_1$ and $y_2$ are the rapidities of the two jets, $P_T$ is the leading jet transverse momentum, and $q_\perp$ the imbalance transverse momentum between the two jets as defined above. Moreover, $\sigma_0$ is the overall normalization of the differential cross section. The first term on the right hand side, $W_{ab\to cd}$, contains the all order resummation and the second term, $Y_{ab\to cd}$, takes into account fixed order corrections. At next-to-leading logarithmic (NLL) order, the resummation for $W$ was conjectured to take the following form~\cite{Sun:2014gfa,Sun:2015doa} \begin{eqnarray} W_{ab\to cd}\left(x_1,x_2,b\right)&=&x_1\,f_a(x_1,\mu=b_0/b_\perp) x_2\, f_b(x_2,\mu=b_0/b_\perp) e^{-S_{\rm Sud}(Q^2,b_\perp)} \nonumber\\ &\times& \textmd{Tr}\left[\mathbf{H}_{ab\to cd} \mathrm{exp}\left[-\int_{b_0/b_\perp}^{Q}\frac{d \mu}{\mu}\mathbf{\gamma}_{ab\to cd}^{s\dag}\right]\mathbf{S}_{ab\to cd} \mathrm{exp}\left[-\int_{b_0/b_\perp}^{Q}\frac{d \mu}{\mu}\mathbf{\gamma}_{ab\to cd}^{s}\right]\right]\ ,\label{resum} \end{eqnarray} for each partonic channel $ab\to cd$, where $Q^2=\hat s=x_1x_2S$, representing the hard momentum scale. In addition, we have $b_0=2e^{-\gamma_E}$, with $\gamma_E$ being the Euler constant. The $f_{a,b}(x,\mu)$ are the parton distributions for the incoming partons $a,b$, and $x_{1,2}=P_T\left(e^{\pm y_1}+e^{\pm y_2}\right)/\sqrt{S}$ are the fractions of the incoming hadrons' momenta carried by the partons. In the above equation, the hard and soft factors $\mathbf{H}$ and $\mathbf{S}$ are expressed as matrices in the color space of the partonic channel $ab\to cd$, and $\gamma_{ab\to cd}^s$ are the associated anomalous dimensions for the soft factor. 
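As a quick numerical illustration of the kinematics entering the formulas above — a toy sketch in which the jet transverse momentum, rapidities, and collider energy are chosen arbitrarily — the momentum fractions $x_{1,2}$ and the hard scale $Q^2=x_1x_2S$ can be evaluated as follows:

```python
import math

def dijet_kinematics(pt, y1, y2, sqrt_s):
    """Momentum fractions x_{1,2} = P_T (e^{+/-y_1} + e^{+/-y_2}) / sqrt(S)
    carried by the incoming partons, and the hard scale Q^2 = x_1 x_2 S."""
    x1 = pt * (math.exp(y1) + math.exp(y2)) / sqrt_s
    x2 = pt * (math.exp(-y1) + math.exp(-y2)) / sqrt_s
    q2 = x1 * x2 * sqrt_s**2
    return x1, x2, q2

# Toy input: P_T = 20 GeV jets at y_{1,2} = +/-0.5, sqrt(S) = 500 GeV
x1, x2, q2 = dijet_kinematics(20.0, 0.5, -0.5, 500.0)
```

For symmetric rapidities $y_2=-y_1$ one finds $x_1=x_2$ and $\hat s = 4P_T^2\cosh^2 y_1$, which provides a simple consistency check.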
The Sudakov form factor ${\cal S}_{\textrm{Sud}}$ resums the leading double logarithms and the universal sub-leading logarithms, \begin{eqnarray} S_{\rm Sud}(Q^2,b_\perp)=\int^{Q^2}_{b_0^2/b_\perp^2}\frac{d\mu^2}{\mu^2} \left[\ln\left(\frac{Q^2}{\mu^2}\right)A+B+D_1\ln\frac{Q^2}{P_T^2R_1^2}+ D_2\ln\frac{Q^2}{P_T^2R_2^2}\right]\ , \label{su} \end{eqnarray} where $R_{1,2}$ are the jet radii of the two jets, respectively. In practice the jets are of course reconstructed with the same radius $R$ but to clarify the structure of our calculation we use two different radii $R_{1,2}$ to differentiate between the dijets. At one-loop order, $A=C_A \frac{\alpha_s}{\pi}$, $B=-2C_A\beta_0\frac{\alpha_s}{\pi}$ for a gluon-gluon initial state, $A=C_F \frac{\alpha_s}{\pi}$, $B=\frac{-3C_F}{2}\frac{\alpha_s}{\pi}$ for a quark-quark initial state, and $A=\frac{(C_F+C_A) }{2}\frac{\alpha_s}{\pi}$, $B=(\frac{-3C_F}{4}-C_A\beta_0)\frac{\alpha_s}{\pi}$ for a gluon-quark initial state. In addition, $D_{1,2}=C_A\frac{\alpha_s}{2\pi}$ for a gluon jet and $D_{1,2}=C_F\frac{\alpha_s}{2\pi}$ for a quark jet, respectively. Here, $\beta_0=(11-2N_f/3)/12$, with $N_f$ being the number of effective light quarks. The resummation formula in Eq.~(\ref{resum}) was obtained in Refs.~\cite{Sun:2014gfa,Sun:2015doa} by a detailed analysis of the soft gluon radiation at one-loop order. The leading contributions from soft gluon radiation can be factorized into the associated TMD parton distributions and can be resummed by solving the relevant evolution equations. At NLL, the soft gluon radiation is factorized into the soft factor ${\bf S}$ which is given by a matrix in the color space of the partonic channels. The matrix form of the factorization is the same as was found for threshold resummation for the dijet production in proton-proton collisions~\cite{Kidonakis:1998bk,Kidonakis:1998nf,Kidonakis:2000gi,Kelley:2010fn,Catani:2013vaa,Hinderer:2014qta}. 
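To make the structure of Eq.~(\ref{su}) concrete, the following sketch evaluates the one-loop Sudakov exponent for a quark-quark initial state at fixed coupling (freezing $\alpha_s$ and choosing all scales purely for illustration), and cross-checks the closed form against a direct numerical integration:

```python
import math

CF = 4.0 / 3.0
ALPHA_S = 0.2  # frozen coupling, illustrative only

def sudakov_qq(Q2, mu02, pt, R1, R2):
    """Closed form of S_Sud for a qq initial state with two quark jets:
    A = C_F as/pi, B = -3C_F/2 as/pi, D_i = C_F as/(2 pi)."""
    A = CF * ALPHA_S / math.pi
    B = -1.5 * CF * ALPHA_S / math.pi
    D = CF * ALPHA_S / (2.0 * math.pi)
    const = B + D * math.log(Q2 / (pt**2 * R1**2)) + D * math.log(Q2 / (pt**2 * R2**2))
    L = math.log(Q2 / mu02)
    # integral of [A ln(Q^2/mu^2) + const] d mu^2/mu^2 from mu0^2 to Q^2
    return 0.5 * A * L**2 + const * L

def sudakov_qq_numeric(Q2, mu02, pt, R1, R2, n=2000):
    """Same integral done numerically in t = ln(mu^2) with the trapezoid rule
    (the integrand is linear in t, so the rule is exact up to rounding)."""
    A = CF * ALPHA_S / math.pi
    B = -1.5 * CF * ALPHA_S / math.pi
    D = CF * ALPHA_S / (2.0 * math.pi)
    const = B + D * math.log(Q2 / (pt**2 * R1**2)) + D * math.log(Q2 / (pt**2 * R2**2))
    t0, t1 = math.log(mu02), math.log(Q2)
    h = (t1 - t0) / n
    f = lambda t: A * (math.log(Q2) - t) + const
    return h * (0.5 * (f(t0) + f(t1)) + sum(f(t0 + i * h) for i in range(1, n)))
```

Note that the double logarithm $\tfrac{A}{2}\ln^2(Q^2/\mu_0^2)$ dominates the exponent for large $Q^2$, as expected from the discussion above.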
It is known that TMD factorization in dijet production in hadronic collisions is highly nontrivial and that there are potential factorization breaking effects~\cite{Boer:2003tx,Qiu:2007ey,Collins:2007nk,Rogers:2010dm,Bacchetta:2005rm,Vogelsang:2007jk,Bomhof:2007su,Catani:2011st,Mitov:2012gt,Schwartz:2017nmr,Schwartz:2018obd}. First, non-global logarithms (NGLs)~\cite{Dasgupta:2001sh,Dasgupta:2002bw} start to contribute to the cross section at two-loop order. It has been shown that they cannot easily be included in a factorization formula, although numerical simulations can be made and their contribution can be taken into account~\cite{Banfi:2003jj,Banfi:2008qs}. In addition, TMD factorization will be explicitly broken at three-loop order for the unpolarized cross section. This leads to a modification of the coefficient $A^{(3)}$ in the above Sudakov form factor~\cite{Collins:2007nk,Becher:2010tm,Catani:2011st,Mitov:2012gt,Rothstein:2016bsq,Schwartz:2017nmr,Schwartz:2018obd,Sun:2015doa}. Factorization breaking effects are particularly evident for the single transverse spin asymmetry (SSA) in dijet production~\cite{Collins:2007nk}, $\Delta\sigma(S_\perp)=(\sigma(S_\perp)-\sigma(-S_\perp))/2$, where $S_\perp$ represents the transverse polarization vector for one of the incoming nucleons. The SSA for this process is expressed as $\Delta\sigma(S_\perp)\propto \epsilon^{\alpha\beta} S_\perp^\alpha q_\perp^\beta$, i.e., the total transverse momentum of the two jets $q_\perp$ will have a preferred direction~\cite{Boer:2003tx}. This asymmetry is sensitive to the parton's Sivers function where the transverse momentum distribution is correlated with the transverse polarization vector~\cite{Sivers:1989cc}. In Refs.~\cite{Bacchetta:2005rm,Bomhof:2007su}, all initial/final state interaction contributions to the SSA were factorized into a complicated gauge link structure associated with the quark Sivers function for the polarized nucleon. 
However, for the double spin asymmetries involving two Sivers functions, it was shown explicitly that the generalized gauge-link approach to TMD factorization does not apply~\cite{Rogers:2010dm}. Ref.~\cite{Qiu:2007ey} provided an understanding of the SSA from the twist-three framework where the Qiu-Sterman-Efremov-Teryaev matrix elements are the basic ingredients~\cite{Efremov:1981sh,Efremov:1984ip,Qiu:1991pp,Qiu:1991wg, Qiu:1998ia}, and the high momentum Sivers function is generated by collinear gluon radiation. In particular, it was shown in Refs.~\cite{Qiu:2007ey,Efremov:1984ip} that the collinear gluon radiation parallel to the incoming hadrons can be factorized into the associated TMD parton distribution functions. It was also suggested that a factorization formula similar to that in the unpolarized case may hold for the single spin dependent differential cross section~\cite{Qiu:2007ey}, \begin{eqnarray} \frac{d\Delta\sigma(S_\perp)} {d\Omega d^2\vec{q}_\perp} &=& \frac{\epsilon^{\alpha\beta}S_\perp^\alpha q_\perp^\beta} {\vec{q}^2_\perp} \sum\limits_{abcd} \int d^2p_{1\perp}d^2p_{2\perp}d^2\lambda_\perp \nonumber \\ &&\times \frac{\vec{p}_{2\perp}\cdot \vec{q}_\perp}{M_P}\, x_2\, f_{1Tb}^{\perp(\rm SIDIS)}(x_2,p_{2\perp})\, x_1\, f_a^{(\rm SIDIS)}(x_1,p_{1\perp}) \label{e4}\\ &&\times \left[S_{ab\to cd}(\lambda_\perp)\, H_{ab\to cd}^{\rm Sivers}(P_\perp^2)\right]_c\, \delta^{(2)}(\vec{p}_{1\perp}+\vec{p}_{2\perp}+ \vec{\lambda}_\perp-\vec{q}_\perp) \, . \nonumber \end{eqnarray} Here $f_{1Tb}^{\perp(\rm SIDIS)}$ and $f_a^{(\rm SIDIS)}$ denote the transverse-spin dependent TMD quark Sivers function and the unpolarized TMD parton distribution, respectively. These TMD parton distribution functions were chosen following their definitions in semi-inclusive deep-inelastic scattering (SIDIS) with future pointing gauge links. 
Although it was not explicitly shown in Ref.~\cite{Qiu:2007ey}, a matrix form of the factorization was suggested, where $H_{ab\to cd}^{\rm Sivers}$ and $S_{ab\to cd}$ are partonic hard and soft factors and the $[\quad ]_c$ represents a trace in color space between the hard and soft factors, similar to the unpolarized case in Eq.~(\ref{resum}). In order to check the factorization formula of Eq.~(\ref{e4}), it is important to carry out the calculation of soft gluon radiation. Soft gluon emissions contribute in a nontrivial way to the factorization formula. In particular, it will be crucial to show that these contributions can be included in the soft factor in the matrix form of the factorization formula. The goal of the current paper is to derive the soft gluon radiation contribution at one-loop order. As mentioned above, in Ref.~\cite{Qiu:2007ey} it was shown that collinear gluon radiation associated with the incoming nucleons can be treated following the general factorization arguments. This indicates that factorization holds in the leading logarithmic approximation (LLA). However, in order to obtain also all the subleading logarithmic contributions, we need to consider the soft gluon radiation as well. After including soft gluon radiation, we will obtain the complete double logarithmic result. Our calculations presented in this work show that the factorization and resummation is expected to be valid at LLA. However, factorization breaking effects will emerge at NLL accuracy, in the sense that the contributions from soft gluon radiation cannot be factorized into the same soft factor as for the unpolarized case. This implies that beyond the LLA, we do not have a factorization formula for $\Delta \sigma$ as in Eq.~(\ref{resum}), at least not in the standard way with a spin independent soft factor. 
In the LLA, we can express the spin dependent differential cross section in terms of the Fourier-conjugate $b_\perp$ variable, \begin{eqnarray} \frac{d\Delta\sigma(S_\perp)} {d\Omega d^2\vec{q}_\perp} &=&\epsilon^{\alpha\beta}S_\perp^\alpha\sum_{abcd}\int\frac{d^2\vec{b}_\perp}{(2\pi)^2} e^{i\vec{q}_\perp\cdot \vec{b}_\perp}W_{ab\to cd}^{T\beta}(x_1,x_2,b_\perp)\ . \end{eqnarray} Here, we neglect the $Y$-term contribution compared to the unpolarized case above. In this work we show that the leading logarithmic factorization of $W^{T\beta}$ takes the form \begin{eqnarray} W_{ab\to cd}^{T\beta}\left(x_1,x_2,b_\perp\right)|_{\rm LLA'}&=&\frac{ib_\perp^\beta}{2}x_1\,f_a(x_1,\mu=b_0/b_\perp) x_2\, T_{Fb}(x_2,x_2,\mu=b_0/b_\perp)\nonumber\\ &&\times H_{ab\to cd}^{\rm Sivers} e^{-S_{\rm Sud}^T(Q^2,b_\perp)} \ ,\label{resumspin} \end{eqnarray} where $S_{\rm Sud}^T(Q^2,b_\perp)$ can be written in analogy to Eq.~(\ref{su}), \begin{eqnarray} S_{\rm Sud}^T(Q^2,b_\perp)=\int^{Q^2}_{b_0^2/b_\perp^2}\frac{d\mu^2}{\mu^2} \left[\ln\left(\frac{Q^2}{\mu^2}\right)A+B+D_1\ln\frac{1}{R_1^2}+ D_2\ln\frac{1}{R_2^2}\right]\ . \label{sudakov-spin} \end{eqnarray} Here $A$, $B$, $D_{1,2}$ are the same as in Eq.~(\ref{su}). In the above equation, $T_F$ is the Qiu-Sterman matrix element which is also related to the transverse-momentum moment of the quark Sivers function. It is defined as follows \begin{eqnarray} T_F(x_2,x_2') &\equiv & \int\frac{d\zeta^-d\eta^-}{4\pi} e^{i(x_2 P_B^+\eta^-+(x_2'-x_2)P_B^+\zeta^-)} \nonumber \\ &\times & \epsilon_\perp^{\beta\alpha}S_{\perp\beta} \, \left\langle P_B,S|\overline\psi(0){\cal L}(0,\zeta^-)\gamma^+ \right. \label{TF}\\ &\times & \left. g{F_\alpha}^+ (\zeta^-) {\cal L}(\zeta^-,\eta^-) \psi(\eta^-)|P_B,S\right\rangle \ , \nonumber \end{eqnarray} where $F^{\mu\nu}$ represents the gluon field strength tensor. From the leading order derivation, we have $\frac{1}{M_P}\int d^2k_{\perp}\, \vec{k}^2_\perp\, f_{1T}^{\perp(\rm SIDIS)}(x,k_\perp) = - T_F(x,x)$. 
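The moment relation just quoted can be verified in a simple Gaussian toy model for the Sivers function; the width $w$, the proton mass value, and the chosen $T_F$ below are illustrative inputs, not a fit:

```python
import math

M_P = 0.938   # proton mass in GeV
W = 0.25      # Gaussian width parameter in GeV^2 (assumption)
TF = 0.1      # toy value of T_F(x, x) at some fixed x

def sivers_toy(kt2):
    """Gaussian ansatz for f_{1T}^{perp}(x, k_T), normalized such that
    (1/M_P) \\int d^2k k^2 f = -T_F."""
    norm = -TF * M_P / (math.pi * W**2)
    return norm * math.exp(-kt2 / W)

def kt2_moment(n=100000, kmax=10.0):
    """(1/M_P) \\int d^2k k^2 f evaluated as a radial midpoint-rule integral."""
    dk = kmax / n
    total = 0.0
    for i in range(n):
        k = (i + 0.5) * dk
        total += 2.0 * math.pi * k**3 * sivers_toy(k * k) * dk
    return total / M_P
```

The numerical moment reproduces $-T_F$, since $\int d^2k\, k^2\, e^{-k^2/w}$ integrates to $\pi w^2$ for the chosen normalization.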
For our soft gluon calculation at leading power, we take the correlation limit, i.e., we neglect all power corrections of the form $q_\perp/P_T$. In this limit, the leading double logarithm is proportional to $1/q_\perp^2\times \ln(P_T^2/q_\perp^2)$. We will show that these contributions are consistent with the resummation formula of Eq.~(\ref{resumspin}). Some sub-leading logarithmic terms can be factorized in this form as well. These include collinear gluon radiation associated with the incoming hadrons and the final state jets. The former can be resummed by including the scale evolution of the integrated parton distribution and the Qiu-Sterman matrix elements, e.g., by evaluating these distributions at the scale $\mu_b=b_0/b_\perp$. The latter are taken into account by the $\ln(1/R^2)$ terms in the Sudakov form factor $S_{\rm Sud}$ of Eq.~(\ref{su}). Therefore, Eq.~(\ref{resumspin}) is an improvement of the leading logarithmic approximation, to which we will refer as $\rm LLA'$ in the following. The remainder of this paper is organized as follows. In Sec.~II, we will briefly review the soft gluon contribution to unpolarized dijet production. The basic formalism, including the Eikonal approximation, the phase space integrals to obtain the leading contributions, and the subtraction method to derive the soft gluon radiation associated with the final state jets will be introduced. Sec.~III contains the main new derivations of this work. We will carry out the calculation of the soft gluon radiation for the spin dependent differential cross sections. We introduce the general framework, the twist-three collinear expansion, and derive the soft gluon radiation amplitude in this formalism. We apply these techniques to different partonic channels and demonstrate that the leading double logarithmic contributions factorize, and we verify our resummation formula at LLA$'$. 
In Sec.~IV, we consider an example and show that factorization breaking effects appear at NLL, in particular, from soft gluon radiation that does not belong to the incoming hadrons and final state jets. In Sec.~V, we will present phenomenological results for the single spin asymmetries in dijet production at RHIC, and compare to recent STAR data~\cite{Abelev:2007ii,starpreliminary}. Finally, we will summarize our paper in Sec.~VI. \section{Brief Review of Soft Gluon Radiation for the Unpolarized Case} Dijet production at the leading order can be calculated from partonic $2\to 2$ processes, \begin{equation} a(p_1)+b(p_2)\to c(k_1)+d(k_2) \ , \end{equation} where $p_{1,2}$ and $k_{1,2}$ are the momenta of the incoming and outgoing two partons, respectively. Their contributions to the cross section can be written as \begin{eqnarray} \frac{d^4\sigma} {d\Omega d^2q_{\perp}}=\sum_{abcd}\sigma_0x_1\,f_a(x_1,\mu) x_2\, f_b(x_2,\mu) {h}_{ab\to cd}^{(0)}\delta^{(2)}(q_\perp) \ , \end{eqnarray} where the overall normalization of the differential cross section is $\sigma_0=\frac{\alpha_s^2\pi}{s^2}$. The partonic cross sections $h^{(0)}$ for all the production channels depend on the kinematic variables $\hat{s} = (p_1+p_2)^2$, $\hat{t} = (p_1-k_1)^2$ and $\hat{u} = (p_1-k_2)^2$. As mentioned above, at the leading order, they contribute to a delta function setting $q_\perp=0$, which corresponds to the back-to-back configuration of the two jets in the transverse plane. For soft gluon emissions, we can apply the leading power expansion and derive the dominant contribution from the Eikonal approximation~\cite{Mueller:2013wwa,Sun:2015doa}. For example, for the outgoing quark, antiquark and gluon lines, we obtain the following factors in the Eikonal approximation: \begin{equation} \frac{2k_i^\mu}{2k_i\cdot k_g+i\epsilon} g\ ,~~-\frac{2k_i^\mu}{2k_i\cdot k_g+i\epsilon}g \ ,~~\frac{2k_i^\mu}{2k_i\cdot k_g+i\epsilon}g \ , \end{equation} respectively, at one-loop order. 
Here $g$ is the strong coupling and the $k_i$ represent the momenta of the outgoing particles. For incoming quark, antiquark and gluon lines, we have, \begin{equation} -\frac{2p_1^\mu}{2p_1\cdot k_g-i\epsilon} g\ ,~~\frac{2p_1^\mu}{2p_1\cdot k_g-i\epsilon}g \ ,~~\frac{2p_1^\mu}{2p_1\cdot k_g-i\epsilon} g \ , \end{equation} respectively, where $p_1$ corresponds to the momentum of the incoming particle. \begin{figure}[tbp] \centering \includegraphics[width=12cm]{unpolarize.eps} \caption{Soft gluon radiation contributions to the finite imbalance transverse momentum $q_\perp$: (a) initial state radiation, and (b), (c) final state radiation. Since we have chosen the gluon polarization vector along $p_2$, there is no gluon radiation from the line with momentum $p_2$.} \label{unpolarized} \end{figure} Following Ref.~\cite{Mueller:2013wwa,Sun:2015doa}, we choose physical polarizations of the soft gluon along the incoming particle with momentum $p_2$, so that the polarization tensor of the radiated gluon takes the following form: \begin{equation} \Gamma^{\mu\nu}(k_g)=\left(-g^{\mu\nu}+\frac{k_g^\mu p_2^\nu+k_g^\nu p_2^\mu}{k_g\cdot p_2}\right) \ .\label{gluonpolarization} \end{equation} This choice will simplify the derivation since there is no soft gluon radiation from the incoming parton line $p_2$. Therefore, as shown in Fig.~\ref{unpolarized}, the leading contributions come from the initial state radiation from the line with momentum $p_1$, and the final state emissions from the lines $k_1$ and $k_2$. The contributions by these diagrams can be evaluated by taking the amplitudes squared of the Eikonal vertex with the polarization vector of the radiated gluon contracted with the above tensor. 
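A useful property of the physical polarization tensor in Eq.~(\ref{gluonpolarization}) is its transversality, $\Gamma^{\mu\nu}(k_g)\,k_{g\nu}=0$ for an on-shell gluon. The sketch below checks this numerically with arbitrary light-like test momenta (not physical kinematics):

```python
def mdot(a, b):
    """Minkowski product with metric (+,-,-,-)."""
    return a[0]*b[0] - a[1]*b[1] - a[2]*b[2] - a[3]*b[3]

def gamma_tensor(kg, p2):
    """Gamma^{mu nu} = -g^{mu nu} + (kg^mu p2^nu + kg^nu p2^mu)/(kg.p2)."""
    d = mdot(kg, p2)
    diag = [1.0, -1.0, -1.0, -1.0]  # diagonal entries of g^{mu nu}
    return [[(-diag[m] if m == n else 0.0) + (kg[m]*p2[n] + kg[n]*p2[m]) / d
             for n in range(4)] for m in range(4)]

kg = [5.0, 3.0, 4.0, 0.0]   # light-like test gluon momentum: 25 = 9 + 16
p2 = [1.0, 0.0, 0.0, -1.0]  # light-like beam direction
G = gamma_tensor(kg, p2)
metric = [1.0, -1.0, -1.0, -1.0]
# Gamma^{mu nu} k_{g nu}: lower the second index with the metric
contraction = [sum(G[m][n] * metric[n] * kg[n] for n in range(4)) for m in range(4)]
```

The contraction vanishes component by component, confirming that the gluon is radiated with physical polarizations only.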
This leads to the following expressions for the soft gluon radiation contributions, \begin{eqnarray} \frac{2p_1^\mu}{2p_1\cdot k_g}\frac{2p_1^\nu}{2p_1\cdot k_g}\Gamma_{\mu\nu} &=&S_g(p_1,p_2)\ ,\\ \frac{2k_1^\mu}{2k_1\cdot k_g}\frac{2k_1^\nu}{2k_1\cdot k_g}\Gamma_{\mu\nu} &=&S_g(k_1,p_2)\ ,\\ \frac{2k_2^\mu}{2k_2\cdot k_g}\frac{2k_2^\nu}{2k_2\cdot k_g} \Gamma_{\mu\nu} &=&S_g(k_2,p_2)\ , \\ 2\frac{2k_1^\mu}{2k_1\cdot k_g}\frac{2p_1^\nu}{2p_1\cdot k_g}\Gamma_{\mu\nu} &=&S_g(k_1, p_2)+S_g(p_1, p_2)-S_g(k_1, p_1)\ ,\\ 2\frac{2k_2^\mu}{2k_2\cdot k_g}\frac{2p_1^\nu}{2p_1\cdot k_g}\Gamma_{\mu\nu} &=&S_g(k_2, p_2)+S_g(p_1, p_2)-S_g(k_2, p_1)\ ,\\ 2\frac{2k_1^\mu}{2k_1\cdot k_g}\frac{2k_2^\nu}{2k_2\cdot k_g}\Gamma_{\mu\nu} &=&S_g(k_1, p_2)+S_g(k_2, p_2)-S_g(k_1, k_2)\ . \end{eqnarray} Here $S_g(p,q)$ is a short-hand notation for \begin{equation} S_g(p,q)=\frac{2p\cdot q}{p\cdot k_g\, q\cdot k_g} \ . \end{equation} When we integrate out the phase space of the radiated gluon to obtain the finite transverse momentum for the dijet imbalance, we have to exclude the contributions that belong to the jets. Therefore, only gluon radiation outside of the jets with radius $R_{1,2}$ contributes. These diagrams have been calculated in Ref.~\cite{Sun:2015doa}, where an offshellness was considered to regulate the collinear divergence associated with the jet within the narrow jet approximation~\cite{Aversa:1988vb,Mukherjee:2012uz}. In Refs.~\cite{Liu:2018trl,Liu:2020dct}, a subtraction method was employed to derive the soft gluon radiation contribution. For completeness, we show details of the derivation in the Appendix. 
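The cross-term identities above can also be checked numerically. The sketch below contracts the Eikonal currents with the polarization tensor for arbitrary light-like test momenta (chosen only so that all invariants are nonzero) and compares against the corresponding $S_g$ combination:

```python
def mdot(a, b):
    """Minkowski product with metric (+,-,-,-)."""
    return a[0]*b[0] - a[1]*b[1] - a[2]*b[2] - a[3]*b[3]

def Sg(p, q, kg):
    """S_g(p, q) = 2 p.q / (p.k_g q.k_g)."""
    return 2.0 * mdot(p, q) / (mdot(p, kg) * mdot(q, kg))

def cross_term(a, b, kg, p2):
    """2 (2a^mu/2a.kg)(2b^nu/2b.kg) Gamma_{mu nu}; contracting the tensor gives
    a^mu b^nu Gamma_{mu nu} = -a.b + (a.kg b.p2 + b.kg a.p2)/(kg.p2)."""
    gab = -mdot(a, b) + (mdot(a, kg)*mdot(b, p2) + mdot(b, kg)*mdot(a, p2)) / mdot(kg, p2)
    return 2.0 * gab / (mdot(a, kg) * mdot(b, kg))

# arbitrary light-like test momenta
p1 = [1.0, 0.0, 0.0, 1.0]
p2 = [1.0, 0.0, 0.0, -1.0]
k1 = [3.0, 2.0, 1.0, 2.0]
kg = [2.0, 1.0, 1.0, 2.0**0.5]

lhs = cross_term(k1, p1, kg, p2)
rhs = Sg(k1, p2, kg) + Sg(p1, p2, kg) - Sg(k1, p1, kg)
```

The agreement between `lhs` and `rhs` reproduces the fourth relation of the list above for the $(k_1, p_1)$ interference term.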
Here, we list the final results: \begin{eqnarray} S_g(p_1,p_2)&\Rightarrow &\frac{\alpha_s}{2\pi^2}\frac{1}{q_\perp^2}\left(2\ln\frac{Q^2}{q_\perp^2}\right)\ ,\\ S_g(k_1,p_1)&\Rightarrow &\frac{\alpha_s}{2\pi^2}\frac{1}{q_\perp^2}\left[\ln\frac{Q^2}{q_\perp^2}+\ln\frac{1}{R_1^2}+\ln\left(\frac{\hat{t}}{\hat{u}}\right) +\epsilon\left(\frac{1}{2}\ln^2\frac{1}{R_1^2}\right)\right]\ ,\label{e11}\\ S_g(k_2,p_1)&\Rightarrow &\frac{\alpha_s}{2\pi^2}\frac{1}{q_\perp^2}\left[\ln\frac{Q^2}{q_\perp^2}+\ln\frac{1}{R_2^2}+\ln\left(\frac{\hat{u}}{\hat{t}}\right) +\epsilon\left(\frac{1}{2}\ln^2\frac{1}{R_2^2}\right)\right]\ ,\label{e21}\\ S_g(k_1,p_2)&\Rightarrow &\frac{\alpha_s}{2\pi^2}\frac{1}{q_\perp^2}\left[\ln\frac{Q^2}{q_\perp^2}+\ln\frac{1}{R_1^2}+\ln\left(\frac{\hat{u}}{\hat{t}}\right) +\epsilon\left(\frac{1}{2}\ln^2\frac{1}{R_1^2}\right)\right]\ ,\label{e12}\\ S_g(k_2,p_2)&\Rightarrow &\frac{\alpha_s}{2\pi^2}\frac{1}{q_\perp^2}\left[\ln\frac{Q^2}{q_\perp^2}+\ln\frac{1}{R_2^2}+\ln\left(\frac{\hat{t}}{\hat{u}}\right) +\epsilon\left(\frac{1}{2}\ln^2\frac{1}{R_2^2}\right)\right]\ ,\label{e22}\\ S_g(k_1,k_2)&\Rightarrow &\frac{\alpha_s}{2\pi^2}\frac{1}{q_\perp^2}\left[\ln\frac{1}{R_1^2}+\ln\frac{1}{R_2^2}+2\ln\left(\frac{\hat{s}^2}{\hat{t}\hat{u}}\right) +\epsilon\left(\frac{1}{2}\ln^2\frac{1}{R_1^2}+\frac{1}{2}\ln^2\frac{1}{R_2^2}\right.\right.\nonumber\\ &&\left.\left.-4\ln\frac{\hat{s}}{-\hat{t}}\ln\frac{\hat{s}}{-\hat{u}}\right)\right]\ .\label{e22p} \end{eqnarray} Compared to the results in Ref.~\cite{Sun:2015doa}, the above results differ by a term proportional to $\epsilon\, \pi^2/6$. This is a result of the approximation made in Ref.~\cite{Sun:2015doa}. \section{Soft Gluon Radiation for SSA: Leading Logarithmic Contributions} In this section, we will investigate the soft gluon radiation contribution to the SSA in dijet production. The leading order analysis and collinear gluon radiation contributions have been studied in Ref.~\cite{Qiu:2007ey}. 
In the following, we will first review the leading order results and then derive the soft gluon radiation contribution. \subsection{Leading Order Results} \begin{figure}[t] \begin{center} \includegraphics[width=11cm]{leadingorder.eps} \includegraphics[width=11cm]{leadingorder1.eps} \includegraphics[width=11cm]{leadingorder2.eps} \end{center} \vskip -0.4cm \caption{\it Leading order diagrams for the initial and final state contributions to the SSA in dijet production. The red bars in these diagrams indicate the propagators that produce the necessary phase for the SSA. } \label{leadingorder} \end{figure} The leading order results of Ref.~\cite{Qiu:2007ey} can be transformed into the factorization formula of Eq.~(\ref{resumspin}). For convenience, we show the diagrams that contribute to the SSA from initial and final state interaction effects in Fig.~\ref{leadingorder}. As demonstrated here, the SSA phases only come from the gluon attachments to the initial/final state partons. The leading order results derived in Ref.~\cite{Qiu:2007ey} can be obtained from Eq.~(\ref{e4}) by setting $\left[S_{ab\to cd}(\lambda_\perp)H_{ab\to cd}^{\rm Sivers}(P_T^2)\right]_c\equiv H_{ab\to cd}^{\rm Sivers}(P_T^2)$. After taking the Fourier transform to impact parameter $b_\perp$-space, we find the following leading order result: \begin{eqnarray} W_{ab\to cd}^{T\beta (0)}(b_\perp)=\frac{i b_\perp^\beta}{2} x_1f_a(x_1) x_2T_{Fb}(x_2,x_2) H_{ab\to cd}^{\rm Sivers} \ . \end{eqnarray} The hard part is written as \begin{eqnarray} H_{ab\to cd}^{\rm Sivers}&=&\frac{\alpha_s^2\pi}{\hat s^2}\sum_i (C_I^i+C_{F1}^i+C_{F2}^i) h^i_{ab\to cd} \ , \label{e301} \end{eqnarray} where $i$ labels the different contributions to the hard factors $h^i$ by the various Feynman diagrams. 
Here the factors $C_I^i$ are for the initial state interaction for the single-spin dependent cross section, and $C_{F1}^i$ and $C_{F2}^i$ are for the final state interactions when the gluon is attached to the lines with momenta $k_1$ and $k_2$, respectively. The explicit expressions for $C_I^i$, $C_{F1}^i$, $C_{F2}^i$ and $h^i$ are given in Ref.~\cite{Qiu:2007ey}. Within the twist-three framework, we can also derive the hard factors at leading order by following a similar analysis as in Ref.~\cite{Qiu:2007ey}. The method for calculating the single transverse-spin asymmetry for hard scattering processes in the twist-three approach has been developed in Refs.~\cite{Qiu:1991pp,Qiu:1991wg, Qiu:1998ia,Ji:2006ub,Ji:2006vf,Ji:2006br,Kouvaris:2006zy,Eguchi:2006qz,Eguchi:2006mc,Koike:2006qv,Koike:2007rq,Koike:2007dg,Braun:2009mi,Kang:2008ey,Vogelsang:2009pj,Zhou:2008mz,Schafer:2012ra,Kang:2011mr,Sun:2013hua,Scimemi:2019gge}. The collinear expansion is the central step to obtain the final results. We perform our calculations in a covariant gauge. The additional gluon from the polarized hadron is associated with a gauge potential $A^\mu$, and one of the leading contributions comes from its component $A^+$. Thus, the gluon will carry longitudinal polarization. The gluon's momentum is dominated by $x_gP_A+p_{g\perp}$, where $x_g$ is the longitudinal momentum fraction with respect to the polarized proton. The contribution to the single-transverse-spin asymmetry arises from terms linear in $p_{g\perp}$ in the expansion of the partonic scattering amplitudes. When combined with $A^+$, these linear terms will yield $\partial^\perp A^+$, a part of the gauge field strength tensor $F^{\perp +}$ in Eq.~(\ref{TF}). 
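Schematically — suppressing color and spinor structure, and writing $\bar H$ for the partonic scattering amplitude with the extra gluon attachment — the collinear expansion described above takes the form (a sketch of the procedure, not the full expression of Ref.~\cite{Qiu:2007ey}):

```latex
\bar H\left(x_g, p_{g\perp}\right) \;=\; \bar H\left(x_g, 0\right)
  \;+\; p_{g\perp}^{\alpha}\,
  \left.\frac{\partial \bar H\left(x_g, p_{g\perp}\right)}
             {\partial p_{g\perp}^{\alpha}}\right|_{p_{g\perp}=0}
  \;+\; {\cal O}\!\left(p_{g\perp}^{2}\right)\, .
```

The zeroth-order term does not contribute to the asymmetry; the linear term, combined with the $A^+$ component of the gauge field, assembles $\partial^\perp A^+\subset F^{\perp+}$ and yields the $T_F$ matrix element of Eq.~(\ref{TF}).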
Since $p_{g\perp}=p_{2\perp}'-p_{2\perp}$, the $p_{g\perp}$ expansion of the scattering amplitudes can be performed in terms of the transverse momenta $p_{2\perp}$ and $p_{2\perp}'$, which we can parametrize in the following way, \begin{equation} p_{2}=x_2P_A+p_{2\perp},~~~p_{2}'=x_2'P_A+p_{2\perp}' \ . \label{e55} \end{equation} The leading order diagrams shown in Fig.~\ref{leadingorder} can be calculated following the general procedure discussed above. The method is similar to the analysis of Drell-Yan lepton pair production in Ref.~\cite{Kang:2011mr}. For example, to perform the Fourier transform of the SSA from transverse momentum to impact parameter $b_\perp$-space, we consider the total transverse momentum $q_\perp$ of the two jets. In the leading order diagrams shown in Fig.~\ref{leadingorder}, the total transverse momentum $q_\perp$ can be easily identified: $q_\perp=p_{2\perp}^{~\prime}$ for the left diagrams and $q_\perp=p_{2\perp}$ for the right diagrams. Because the phases are opposite to each other for the left and right diagrams, their total contribution will lead to the expression $\epsilon_{\alpha\beta}S_\perp^\alpha q_\perp^\beta=\epsilon_{\alpha\beta}S_\perp^\alpha p_{g\perp}^\beta$. Using the definition of the twist-three matrix element, we find that the SSA contribution in impact parameter $b_\perp$-space can be written as $i\epsilon_{\alpha\beta}S_\perp^\alpha b_\perp^\beta T_F(x_2,x_2)$. The hard factors can be calculated accordingly. \subsection{Soft Gluon Radiation} At one-loop order, we have to consider real gluon radiation associated with the production of the dijets. When the radiated gluons are parallel to the incoming partons' momenta, their contributions can be factorized into the associated parton distribution functions (from the unpolarized nucleon) or the polarized quark Sivers function (from the polarized nucleon). The gluon radiation will generate finite transverse momentum.
According to the analysis of Ref.~\cite{Qiu:2007ey}, we can write down the spin-dependent differential cross section as \begin{eqnarray} \frac{d\Delta\sigma(S_\perp)}{d\Omega d^2 q_\perp}&=&-\sum_{abcd} H_{ab\to cd}^{\rm Sivers}\epsilon^{\alpha\beta}S_\perp^\alpha \frac{\alpha_s}{2\pi^2}\frac{q_\perp^\beta}{(q_\perp^2)^2}x_1x_2\nonumber\\ &&\times \left\{ f_a(x_1)\widetilde{\cal P}_{b'g\to bg}^{T(<)}\otimes T_{Fb'}(x_2,x_2)+T_{Fb}(x_2,x_2)\widetilde{\cal P}_{a'\to a}^{(<)}\otimes f_{a'}(x_1)\right\} \ ,\label{e30} \end{eqnarray} where ${\widetilde{\cal P}}^{(<)}$ represents the collinear splitting kernel excluding the endpoint contribution. For the twist-three function it is given by \begin{eqnarray} \widetilde{\cal P}_{b'g\to bg}^{T(<)}\otimes T_{Fb'}(x_2,x_2)&=&\int \frac{dx}{x} \left\{\frac{1}{2N_c}\left[(1+\xi^2)\left(x\frac{\partial}{\partial x}T_{Fb'}(x,x)\right)+T_{Fb'}(x,x)\frac{2\xi^3-3\xi^2-1}{1-\xi}\right] \nonumber\right.\\ &&\left. +\left(\frac{1}{2N_c}+C_F\right) T_{Fb'}(x,x-\hat x_g) \frac{1+\xi}{1-\xi}\right\}\ , \end{eqnarray} where $\xi=x_2/x$ and $\hat x_g=(1-\xi)x$. A similar (albeit slightly simpler) expression holds for $\widetilde{\cal P}_{a'\to a}^{(<)}\otimes f_{a'}(x_1)$. In Eq.~(\ref{e30}), the first term in the bracket comes from the collinear gluon radiation associated with the polarized nucleon, whereas the second term is associated with the unpolarized nucleon. An explicit calculation of all the relevant diagrams was presented in Ref.~\cite{Qiu:2007ey} for one particular channel ($qq'\to qq'$), and factorization arguments were given for all other channels. Only the so-called soft- and hard-gluonic poles are considered in the SSA calculations. However, all other pole contributions and $\widetilde{T}_F$ contributions can be analyzed as well and similar results are expected. In the following, we will focus on soft gluon radiation. The factorization of these contributions is more involved for several reasons. First, they contain double logarithms.
In terms of transverse momentum distributions, we will find terms of the form $1/q_\perp^2\ln(P_T^2/q_\perp^2)$. These double logarithmic terms come from gluon radiation associated with all external particles. The collinear factorization arguments in Ref.~\cite{Qiu:2007ey} do not apply to these soft gluon emissions. Second, we have to deal with the soft gluon radiation associated with the final state jets. Using recent developments for the unpolarized case, we will be able to derive their contributions to the spin-dependent cross sections. We will first discuss several general features of twist-three calculations of the soft gluon radiation contributions to the SSA, and then we will apply these to the different partonic channels. \subsubsection{Generic Features of Twist-three Calculations} \begin{figure}[t] \begin{center} \includegraphics[width=11cm]{general.eps} \end{center} \vskip -0.4cm \caption{\it Generic diagrams for the quark-quark scattering contributions to the single transverse-spin dependent cross section within collinear factorization.} \label{generic_diagrams} \end{figure} In Fig.~\ref{generic_diagrams}, we show the generic diagrams that need to be calculated to obtain the soft gluon real radiation contributions. We follow the same strategy as in Ref.~\cite{Sun:2015doa} to evaluate these diagrams. The radiated gluon carries transverse momentum $k_{g\perp}$ which will contribute to the total transverse momentum of the two jets. The spin-dependent differential cross section can, for a given partonic channel, be schematically written as \begin{eqnarray} \frac{d\Delta\sigma(S_\perp)}{d\Omega d^2 q_\perp}\Big|_{\textrm{soft}}^{(1)}={\epsilon^{\alpha\beta}S_\perp^\alpha}x_1f_a(x_1)x_2T_{Fb}(x_2,x_2) {\cal H}_{\textrm{twist-3}}^\beta (q_\perp,P_T;R)\ , \end{eqnarray} for the one-loop soft gluon radiation, where $x_{1,2}$ are defined as in the leading order kinematics.
This is because the soft gluons do not modify the longitudinal momentum fractions of the incoming partons. The partonic cross section ${\cal H}_{\textrm{twist-3}}^\beta$ depends on the total transverse momentum $q_\perp$, the hard momentum scale represented by $P_T$ (or in general, $\hat s$, $\hat t$ and $\hat u$) and the jet radius $R$. Similar to the unpolarized case discussed in the last section, we have to exclude the gluon emission contributions belonging to the final state jets. Therefore, the partonic cross sections will depend on the jet size $R$. From the diagrams in Fig.~\ref{generic_diagrams}, we find that at this order the total transverse momentum of the two jets is equal and opposite to the transverse momentum of the radiated gluon: $\vec{q}_\perp=-\vec{k}_{g\perp}$. Therefore, finite $q_\perp$ also implies finite $k_{g\perp}$. The main objective of the following calculations is to obtain ${\cal H}_{\textrm{twist-3}}^\beta$ in the twist-three framework. Again, as briefly discussed above, we need to perform the collinear expansion for the incoming quark lines associated with the polarized nucleon. In the twist expansion, we take the limit of $k_{g\perp}\gg p_{2\perp}\sim p_{2\perp}^{~\prime}\sim p_{g\perp}$. Meanwhile, we are also working in the correlation limit of $q_\perp\sim k_{g\perp}\ll P_T$. Therefore, the dominant contribution to the SSA comes from the expansion in powers of $p_{g\perp}/k_{g\perp}$. Any terms of the form $p_{g\perp}/P_T$ will be power suppressed in the correlation limit of $q_\perp\ll P_T$. To obtain the SSA for this process, the longitudinal gluon $p_g$ from the polarized nucleon needs to couple to the partonic scattering part to generate the necessary phase for a non-zero single spin asymmetry. Because we work at leading power in the limit of $q_\perp\ll P_T$, we can classify the gluon attachments into two types. 
First, the $p_g$ gluon attaches to one of the initial/final state partons which does not radiate the soft gluon $k_g$. Second, the $p_g$ gluon attachment and the soft gluon $k_g$ radiation happen on the same initial/final state parton. We discuss both cases in the following; the first type is easier to calculate, whereas the second type is somewhat more involved. \begin{figure}[t] \begin{center} \includegraphics[width=11cm]{example_soft_pole0.eps} \vskip 0.4cm \includegraphics[width=11cm]{example_soft_pole.eps} \end{center} \vskip -0.4cm \caption{\it Soft-gluonic pole contribution associated with the final state particle $k_1$ and the gluon radiation from the final state particle $k_2$. The dashed line in the middle indicates the final state particles which are on mass-shell. The left diagrams represent the contribution when the gluon is attached to the left side of the cut-line, whereas the right diagrams correspond to the attachment to the right side of the cut-line.} \label{soft-pole-example} \end{figure} We first study the first type of diagrams. Particular examples are shown in Fig.~\ref{soft-pole-example}, where the pole contribution comes from the gluon attachment to the final state parton $k_1$ and the soft gluon is radiated off the line with momentum $k_2$. Before an explicit evaluation, we would like to point out a number of important features which will help us to simplify the calculation. We will focus on the leading power contribution in the limit of small $q_\perp/P_T$. Therefore, the twist-three contributions only come from the $p_{i\perp}$-expansion associated with the radiated gluon line $k_g$. For example, the $p_{i\perp}$ dependence of the internal propagator (represented by the circle in Fig.~\ref{soft-pole-example}) will lead to a power suppressed contribution in the limit of $q_\perp\ll P_T$. Therefore, we only need to consider the $p_{i\perp}$-expansion in the propagators indicated by the red lines in Fig.~\ref{soft-pole-example}.
In addition, the lines cut by red bars are the places where we pick up the pole contributions. The left and right diagrams give opposite contributions from these two poles, because they are on opposite sides of the cut-line. Because $k_1$ and $k_2$ are observed final state momenta, it is convenient to keep them fixed in the $p_{i\perp}$-expansion. As a consequence, the momentum flow will go through the radiated gluon momentum. For convenience, we define $k_g$ as the radiated gluon momentum with $p_{i\perp}=0$, i.e., there is no $p_{i\perp}$ dependence in $k_g$. We label $k_{gL}$ as the momentum of the radiated gluon for the left diagram in Fig.~\ref{soft-pole-example}, and $k_{gR}$ for the right diagram. Due to the fact that the momentum flows are different for these two diagrams, $k_{gL}$ and $k_{gR}$ will be different as well. Each of them is constrained by the on-shell condition for the radiated gluon. For example, we know that $k_{gL\perp}=k_{g\perp}+p_{2\perp}'$. This gives the following momentum decomposition for $k_{gL}$: \begin{eqnarray} k_{gL}=k_g+\frac{\vec{k}_{g\perp}\cdot \vec{p}_{2\perp}^{~\prime}}{2p_2\cdot k_g}p_2+p_{2\perp}'\ . \end{eqnarray} We find that $k_{gL}^2=0$, i.e., the on-shell condition is satisfied up to linear order in $p_{i\perp}$. In the above expansion and the following calculations, we neglect all terms beyond linear order in $p_{i\perp}$. Similarly, we find for the right diagram of Fig.~\ref{soft-pole-example}, \begin{eqnarray} k_{gR}=k_g+\frac{\vec{k}_{g\perp}\cdot \vec{p}_{2\perp}}{2p_2\cdot k_g}p_2+p_{2\perp}\ . \end{eqnarray} Once the kinematics are determined, we can proceed to calculate the soft gluon radiation contributions. This is similar to the unpolarized case. We multiply the eikonal amplitudes of the diagrams shown in Fig.~\ref{soft-pole-example} and perform the collinear expansion in $p_{i\perp}$.
For example, for the upper two diagrams, we have \begin{eqnarray} {\rm Left:}&&\frac{2k_{2\mu}}{(k_2+k_{gL})^2}\frac{2k_{2\nu}}{(k_2+k_{gL})^2}\Gamma^{\mu\nu}(k_{gL}) \ ,\\ {\rm Right:}&&\frac{2k_{2\mu}}{(k_2+k_{gR})^2}\frac{2k_{2\nu}}{(k_2+k_{gR})^2}\Gamma^{\mu\nu}(k_{gR}) \ , \end{eqnarray} where $\Gamma^{\mu\nu}(k_g)$ was defined in Eq.~(\ref{gluonpolarization}). We stress that $k_{gL}$ and $k_{gR}$ depend on $p_{i\perp}$, which implies that $\Gamma^{\mu\nu}$ will as well. Because the left and right diagrams give contributions with opposite sign for the phase, which is necessary to generate the SSA for this process, we will add their $p_{i\perp}$ expansions with different signs. In the end, the total contribution from these two diagrams leads to the following expression: \begin{eqnarray} {\rm Fig.~\ref{soft-pole-example}}(a,b):&&\frac{2k_{2\mu}}{(k_2+k_{gL})^2}\frac{2k_{2\nu}}{(k_2+k_{gL})^2}\Gamma^{\mu\nu}(k_{gL})- \frac{2k_{2\mu}}{(k_2+k_{gR})^2}\frac{2k_{2\nu}}{(k_2+k_{gR})^2}\Gamma^{\mu\nu}(k_{gR})\nonumber\\ &&~~=-p_{g\perp}^\alpha\frac{2k_2\cdot p_2(k_{g\perp}^\alpha k_2\cdot p_2-k_{2\perp}^\alpha k_g\cdot p_2)}{(p_2\cdot k_g k_2\cdot k_g)^2}\nonumber\\ &&~~=-p_{g\perp}^\alpha 2(k_{g\perp}^\alpha-\xi_2k_{2\perp}^\alpha)\left(\frac{k_2\cdot p_2}{k_2\cdot k_g p_2\cdot k_g}\right)^2\ , \end{eqnarray} where $\xi_2=k_g\cdot p_2/k_2\cdot p_2$. Noticing that $S_g(k_2,p_2)=4/(k_{g\perp}-\xi_2k_{2\perp})^2$, we can simplify the above result as \begin{equation} {\rm Fig.~\ref{soft-pole-example}}(a,b):~~ p_{g\perp}^\alpha\frac{\partial}{\partial k_{g\perp}^\alpha}S_g(k_2,p_2) \ . \end{equation} Applying the twist-three procedure, the above leads to the following contribution to ${\cal H}_{\textrm{twist-3}}^\beta$, \begin{equation} {\cal H}_{\textrm{twist-3}}^\beta|_{\rm Fig.~\ref{soft-pole-example}(a,b)}:~~\frac{\partial}{\partial k_{g\perp}^\beta}S_g(k_2,p_2)\ . 
\end{equation} To determine the leading power contribution from soft gluon radiation, we need to integrate out the phase space of the radiated gluon. We will come back to this point after completing the analysis of all relevant diagrams. Now we turn to the lower two diagrams of Fig.~\ref{soft-pole-example}. Here, $k_{gL}$ and $k_{gR}$ are the same as in the upper two diagrams. However, the $p_{i\perp}$-expansion comes from different propagators, \begin{eqnarray} {\rm Left:}&&\frac{2k_{2\mu}}{(k_2+k_{gL})^2}\frac{2k_{1\nu}}{(k_1+k_{gL})^2}\Gamma^{\mu\nu}(k_{gL}) \ ,\\ {\rm Right:}&&\frac{2k_{1\mu}}{(k_1+k_{gR})^2}\frac{2k_{2\nu}}{(k_2+k_{gR})^2}\Gamma^{\mu\nu}(k_{gR}) \ . \end{eqnarray} We also find that the final result is a little more complicated than for the upper two diagrams. After some algebra, we find that it can be written as \begin{eqnarray} {\rm Fig.~\ref{soft-pole-example}}(c,d):~~ p_{g\perp}^\alpha\frac{\partial }{\partial k_{g\perp}^\alpha}\left[S_g(k_1,p_2)+S_g(k_2,p_2)-S_g(k_1,k_2)\right] \ . \end{eqnarray} The above derivations can be generalized to all other diagrams of the first type. \begin{figure}[t] \begin{center} \includegraphics[width=11cm]{final1.eps} \end{center} \vskip -0.4cm \caption{\it Second type of diagrams associated with the final state interaction contribution with the jet $k_1$ (quark or gluon). The red bars represent the pole contributions. The first diagram gives the so-called soft-gluonic pole contributions, whereas the remaining three belong to the so-called hard gluonic pole contributions.} \label{hard-pole-example0} \end{figure} For the second type of diagrams, the longitudinal gluon from the polarized proton can attach to both the final state jet and the radiated gluon. Therefore, we have both soft-gluonic pole and hard-gluonic pole contributions. One particular example is shown in Fig.~\ref{hard-pole-example0}. 
Here, the gluonic pole contributions come from the final state particle $k_1$ which also radiates a soft gluon to generate the leading power contribution in the correlation limit of small $q_\perp/P_T$. The first diagram corresponds to the soft-gluonic pole and the rest to the hard-gluonic pole. The soft gluon pole leads to a delta function $\delta(x_2-x_2')$. In general, the hard gluon pole leads to a different delta function. However, in the correlation limit, the hard pole reduces to the same delta function. For example, the hard pole (labeled by the red bar in Fig.~\ref{hard-pole-example0}) leads to the following kinematics: \begin{eqnarray} x_2'-x_2=\frac{k_g\cdot k_1}{P_A\cdot (k_1+k_g)} \ . \end{eqnarray} In the correlation limit~\footnote{This is also true in the collinear limit where the radiated gluon is parallel to the final state jet.}, i.e., $k_g^\mu\ll k_1^\mu$, the above reduces to $x_2'=x_2$. Therefore, the soft-gluonic and hard-gluonic pole contributions come from the same kinematics and there will be cancellations among them. These cancellations are very similar to those occurring for the SSA in the Drell-Yan process demonstrated in Ref.~\cite{Ji:2006vf}. In particular, because of the color factor $if_{abc}T^c=[T^a,T^b]$, we can decompose the last diagram into the other two diagrams associated with the hard-gluonic pole. The combination exactly cancels out the soft-gluonic pole contribution from the first diagram. We have explicitly checked this cancellation for all these diagrams. To carry out the calculation, we have to follow the transverse momentum flow, and perform the $p_{i\perp}$ expansion. The method is the same as that used to calculate the first type of diagrams: the kinematics of $k_g$ and $p_g$ will be determined from the on-shell conditions for $k_g$ and the pole contributions.
More importantly, similar to the first type of diagrams, we only need to take into account the $p_{i\perp}$ expansion from the denominators of the relevant propagators, since the expansions of the numerators are power suppressed. Therefore, the calculations are the same for all partons in the initial/final state, regardless of whether they are quarks or gluons, since both cases have the same denominators. \begin{figure}[t] \begin{center} \includegraphics[width=11cm]{initial.eps} \end{center} \vskip -0.4cm \caption{\it Same as Fig.~\ref{hard-pole-example0} but for diagrams associated with the initial state interaction contributions with parton $p_1$.} \label{hard-pole-example1} \end{figure} \begin{figure}[t] \begin{center} \includegraphics[width=11cm]{final2.eps} \end{center} \vskip -0.4cm \caption{\it Same as Fig.~\ref{hard-pole-example0} but for diagrams associated with the final state interaction contributions with $k_2$.} \label{hard-pole-example2} \end{figure} In the end, the total contribution will be a combination of the last two diagrams with the color factor of the third one. In summary, we can represent all four diagrams as the one on the left side. This can be repeated for diagrams associated with the initial state interaction with $p_1$ and the final state interaction with $k_2$. We show the relevant diagrams in Figs.~\ref{hard-pole-example1} and \ref{hard-pole-example2}. These ``effective'' diagrams will be among the important ingredients for the final results. \begin{figure}[t] \begin{center} \includegraphics[width=11cm]{radiation_initial.eps} \end{center} \vskip -0.4cm \caption{\it Summary of gluon radiation diagrams for initial state interaction.} \label{gluonradiation0} \end{figure} It is convenient to add the contributions from the first type and second type of diagrams. As an example, in Fig.~\ref{gluonradiation0}, we show those diagrams for the initial state interaction contributions.
These diagrams show that we can add soft gluon radiation on top of the initial state interaction diagram. The core part is the same for these three, in particular, the associated color factors. We will work out the color factors for the different channels later on. Here, we focus on the kinematics of the soft gluon radiation contribution and in particular on the leading power contributions. As shown in Fig.~\ref{soft-pole-example}, the contributions to the SSA come from the interference between the diagrams of Fig.~\ref{gluonradiation0} and the diagrams of Fig.~\ref{unpolarized}. There are diagrams where the longitudinal gluon is attached to the left side of the cut-line, and diagrams where it is attached to the right side. For convenience, we label these soft gluon radiation diagrams by their association with the external momenta. For example, we will label the first diagram of Fig.~\ref{gluonradiation0} by $p_1^\mu$, the second diagram by $k_1^\mu$ and the third by $k_2^\mu$. We label the diagrams in Fig.~\ref{unpolarized} similarly. The interference between the second diagram of Fig.~\ref{gluonradiation0} and the second one of Fig.~\ref{unpolarized} has been calculated above and the result is \begin{eqnarray} k_1^\mu k_1^\nu\Rightarrow \frac{\partial}{\partial k_{g\perp}^\beta}S_g(k_1,p_2) \ . \end{eqnarray} Similarly, we find the following result for the interference between the second diagram of Fig.~\ref{gluonradiation0} and the third diagram of Fig.~\ref{unpolarized}: \begin{eqnarray} k_1^\mu k_2^\nu\Rightarrow \frac{\partial}{\partial k_{g\perp}^\beta}\left[S_g(k_1, p_2)+S_g(k_2, p_2)-S_g(k_1, k_2)\right]\ . \end{eqnarray} The calculation for the interference between the first diagram of Fig.~\ref{gluonradiation0} and the diagrams in Fig.~\ref{unpolarized} is much more involved.
However, after a lengthy derivation, we find that the results are very similar to the above two, and they can all be summarized as follows: \begin{eqnarray} p_1^\mu p_1^\nu &\Rightarrow & \frac{\partial}{\partial k_{g\perp}^\beta}S_g(p_1,p_2)\ ,\\ k_1^\mu k_1^\nu &\Rightarrow & \frac{\partial}{\partial k_{g\perp}^\beta}S_g(k_1,p_2)\ ,\\ k_2^\mu k_2^\nu &\Rightarrow & \frac{\partial}{\partial k_{g\perp}^\beta}S_g(k_2,p_2)\ ,\\ k_1^\mu p_1^\nu,p_1^\mu k_1^\nu &\Rightarrow & \frac{\partial}{\partial k_{g\perp}^\beta}\left[S_g(k_1, p_2)+S_g(p_1, p_2)-S_g(k_1, p_1)\right]\ ,\\ k_2^\mu p_1^\nu,p_1^\mu k_2^\nu &\Rightarrow & \frac{\partial}{\partial k_{g\perp}^\beta}\left[S_g(k_2, p_2)+S_g(p_1, p_2)-S_g(k_2, p_1)\right]\ ,\\ k_1^\mu k_2^\nu,k_2^\mu k_1^\nu &\Rightarrow & \frac{\partial}{\partial k_{g\perp}^\beta}\left[S_g(k_1, p_2)+S_g(k_2, p_2)-S_g(k_1, k_2)\right]\ . \end{eqnarray} Interestingly, we note that there is a one-to-one correspondence between the above results and those for the gluon radiation contributions in the unpolarized case in Sec.~II. This is a very important feature for obtaining the final factorization result for the SSA. \begin{figure}[t] \begin{center} \includegraphics[width=11cm]{radiation_final1.eps} \end{center} \vskip -0.4cm \caption{\it Summary of the gluon radiation diagrams for the final state interaction contributions with $k_1$.} \label{gluonradiation1} \end{figure} \begin{figure}[t] \begin{center} \includegraphics[width=11cm]{radiation_final2.eps} \end{center} \vskip -0.4cm \caption{\it Summary of the gluon radiation diagrams for the final state interaction contributions with $k_2$.} \label{gluonradiation2} \end{figure} The above analysis can be extended to diagrams with final state interaction contributions with $k_1$ and $k_2$. For completeness, we show the diagrams in Figs.~\ref{gluonradiation1} and~\ref{gluonradiation2}. The final results are the same as those given above for the initial state interaction contributions.
As mentioned above, in order to derive the leading power contribution to the SSA for this process from the above terms, we need to integrate over the phase space of the radiated gluon, where we keep the transverse momentum $\vec{q}_\perp=-\vec{k}_{g\perp}$. Let us first work out the simple example of the $p_1^\mu p_1^\nu$ term. This term is similar to that for Drell-Yan lepton pair production calculated in Ref.~\cite{Ji:2006vf}. The phase space integral takes the following form \begin{eqnarray} {\cal H}_{\textrm{twist-3}}^\beta\big|_{p_1^\mu p_1^\nu}&=&\frac{g^2}{2}\int \frac{d^3k_g}{(2\pi)^32E_{k_g}}\delta^{(2)} (q_\perp+k_{g\perp})\frac{\partial}{\partial k_{g\perp}^\beta} S_g(p_1,p_2)\nonumber\\ &=&\frac{\alpha_s}{2\pi^2}\int_{\xi_0}^1\frac{d\xi}{\xi}\frac{\partial}{\partial k_{g\perp}^\beta}\frac{1}{\vec{k}_{g\perp}^2}|_{\vec{k}_{g\perp}=-\vec{q}_\perp} \ , \end{eqnarray} where $\xi=k_g\cdot p_2/p_1\cdot p_2$ and the lower limit of the $\xi$-integral comes from the kinematic limit $\xi_0=k_{g\perp}^2/Q^2$. Working out the integral, we arrive at \begin{eqnarray} {\cal H}_{\textrm{twist-3}}^\beta\big|_{p_1^\mu p_1^\nu}&=&\frac{\alpha_s}{2\pi^2}\frac{2q_\perp^\beta}{(q_\perp^2)^2}\ln\frac{Q^2}{q_\perp^2} \ , \end{eqnarray} which is consistent with the double logarithmic behavior for Drell-Yan lepton pair production calculated in Ref.~\cite{Ji:2006vf}. 
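For orientation, we sketch how the first line above reduces to the second. This uses the standard eikonal form $S_g(p_1,p_2)=2p_1\cdot p_2/(p_1\cdot k_g\, p_2\cdot k_g)$ (quoting this explicit definition is our assumption here; it is consistent with the reductions used in the text) together with a Sudakov decomposition of the on-shell gluon momentum:

```latex
% Sketch, assuming S_g(p_1,p_2) = 2 p_1.p_2 / (p_1.k_g p_2.k_g);
% decompose the radiated gluon momentum along the incoming directions:
\begin{equation}
  k_g=\alpha\, p_1+\beta\, p_2+k_{g\perp}\,,\qquad
  k_g^2=0~\Rightarrow~\alpha\beta=\frac{k_{g\perp}^2}{2\,p_1\cdot p_2}\,,
\end{equation}
% with p_1.k_g = beta p_1.p_2 and p_2.k_g = alpha p_1.p_2, one finds
\begin{equation}
  S_g(p_1,p_2)=\frac{2\,p_1\cdot p_2}{(\beta\, p_1\cdot p_2)(\alpha\, p_1\cdot p_2)}
  =\frac{2}{\alpha\beta\, p_1\cdot p_2}=\frac{4}{k_{g\perp}^2}\,,
\end{equation}
% while the phase space measure reduces to (with xi = alpha as above)
\begin{equation}
  \frac{d^3k_g}{(2\pi)^3 2E_{k_g}}=\frac{1}{2(2\pi)^3}\,d^2k_{g\perp}\,
  \frac{d\xi}{\xi}\ .
\end{equation}
```

With $g^2=4\pi\alpha_s$, the overall normalization becomes $\frac{g^2}{2}\frac{1}{2(2\pi)^3}=\frac{\alpha_s}{8\pi^2}$, which combined with the factor of $4$ in $S_g$ reproduces the prefactor $\alpha_s/2\pi^2$ above.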
On the other hand, we can also carry out the above integral using integration by parts as follows: \begin{eqnarray} {\cal H}_{\textrm{twist-3}}^\beta\big|_{p_1^\mu p_1^\nu} &=&\frac{\alpha_s}{2\pi^2}\left[\frac{\partial}{\partial k_{g\perp}^\beta}\int_{\xi_0}^1\frac{d\xi}{\xi}\frac{1}{\vec{k}_{g\perp}^2}+\frac{\partial \xi_0}{\partial k_{g\perp}^\beta}\frac{1}{\xi_0\vec{k}_{g\perp}^2}\right]_{\vec{k}_{g\perp}=-\vec{q}_\perp} \nonumber\\ &=&\frac{\alpha_s}{2\pi^2}\left[\frac{\partial}{\partial k_{g\perp}^\beta}\left(\frac{1}{k_{g\perp}^2}\ln\frac{Q^2}{k_{g\perp}^2}\right)+\frac{1}{\vec{k}_{g\perp}^2}\frac{\partial \ln k_{g\perp}^2 }{\partial k_{g\perp}^\beta}\right]_{\vec{k}_{g\perp}=-\vec{q}_\perp}\nonumber\\ &=&\frac{\alpha_s}{2\pi^2}\left[\frac{\partial}{\partial k_{g\perp}^\beta}\left(\frac{1}{k_{g\perp}^2}\right)\ln\frac{Q^2}{k_{g\perp}^2}\right]_{\vec{k}_{g\perp}=-\vec{q}_\perp}\ . \end{eqnarray} Of course, this gives the same result as above. However, this provides a convenient way to derive other terms as well. The rule is that the derivative only acts on the $1/k_{g\perp}^2$, not on the logarithmic terms. Taking the example of $S_g(k_1,p_2)$ and $S_g(k_1,p_1)$, we can follow the strategy and construct the following two terms: \begin{eqnarray} {\cal T}^\beta(k_1)&=&\frac{\alpha_s}{8\pi^2}\int\frac{d\xi}{\xi}\frac{\partial}{\partial k_{g\perp}^\beta} \left[S_g(k_1,p_2)+S_g(k_1,p_1)\right]\ ,\\ {\cal R}^\beta(k_1)&=&\frac{\alpha_s}{8\pi^2}\int\frac{d\xi}{\xi}\frac{\partial}{\partial k_{g\perp}^\beta} \left[S_g(k_1,p_2)-S_g(k_1,p_1)\right]\ . \end{eqnarray} In the above two equations, ${\cal R}^\beta$ does not contain a $\ln(1/k_{g\perp}^2)$ term. 
Therefore, we can perform the integration by parts directly and obtain the final result \begin{eqnarray} {\cal R}^\beta(k_1)&=&\frac{\alpha_s}{8\pi^2}\frac{\partial}{\partial k_{g\perp}^\beta}\int\frac{d\xi}{\xi} \left[S_g(k_1,p_2)-S_g(k_1,p_1)\right]\nonumber\\ &=&\frac{\alpha_s}{2\pi^2}\frac{\partial}{\partial k_{g\perp}^\beta}\left(\frac{1}{k_{g\perp}^2}\ln\frac{\hat u}{\hat t}\right)\ . \end{eqnarray} For ${\cal T}^\beta$, we notice that $S_g(k_1,p_2)+S_g(k_1,p_1)=S_g(p_1,p_2)+4\vec{k}_{1\perp}\cdot \vec{k}_{g\perp}/(k_{g\perp}^2k_1\cdot k_g)$, where the first term has been calculated above and the second term does not produce a $\ln(1/k_{g\perp}^2)$ term. Applying this, we arrive at the following result for ${\cal T}^\beta$: \begin{eqnarray} {\cal T}^\beta(k_1)&=&\frac{\alpha_s}{2\pi^2}\frac{\partial}{\partial k_{g\perp}^\beta}\left(\frac{1}{k_{g\perp}^2}\right)\left[\ln\frac{Q^2}{k_{g\perp}^2}+\ln\frac{1}{R_1^2}\right] \ . \end{eqnarray} Note that in practice we do the algebra and phase space integration in dimensional regularization in $d=4-2\epsilon$ dimensions. For simplicity, we do not display terms of ${\cal O}(\epsilon)$ here. From the results for ${\cal T}^\beta$ and ${\cal R}^\beta$, we are able to derive the corresponding results for the terms associated with $S_g(k_1,p_1)$ and $S_g(k_1, p_2)$. Following the same technique, we can also derive the results for $S_g(k_2,p_2)$ and $S_g(k_2,p_1)$. For $S_g(k_1,k_2)$, since it does not contain a $\ln(1/k_{g\perp}^2)$ term, we can directly carry out the derivative with respect to $k_{g\perp}^\beta$.
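As an elementary cross-check (treating $k_{g\perp}$ as a Euclidean two-vector, consistent with the results quoted above), the master derivative can be evaluated explicitly:

```latex
% Explicit evaluation at k_{g\perp} = -q_\perp:
\begin{equation}
  \left[\frac{\partial}{\partial k_{g\perp}^\beta}\left(\frac{1}{k_{g\perp}^2}\right)
  \ln\frac{Q^2}{k_{g\perp}^2}\right]_{\vec{k}_{g\perp}=-\vec{q}_\perp}
  =\left[-\frac{2k_{g\perp}^\beta}{(k_{g\perp}^2)^2}
  \ln\frac{Q^2}{k_{g\perp}^2}\right]_{\vec{k}_{g\perp}=-\vec{q}_\perp}
  =\frac{2q_\perp^\beta}{(q_\perp^2)^2}\ln\frac{Q^2}{q_\perp^2}\ .
\end{equation}
```

This reproduces ${\cal H}_{\textrm{twist-3}}^\beta\big|_{p_1^\mu p_1^\nu}$ above; the same substitution converts ${\cal T}^\beta$ and ${\cal R}^\beta$ into explicit functions of $q_\perp$.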
Finally, we summarize all results here: \begin{eqnarray} {\cal H}_{\textrm{twist-3}}^\beta\big|_{p_1^\mu p_1^\nu} &=&\frac{\alpha_s}{2\pi^2}\frac{q_\perp^\beta}{(q_\perp^2)^2}2\ln\frac{Q^2}{q_\perp^2} ,\label{p1p1}\\ {\cal H}_{\textrm{twist-3}}^\beta\big|_{k_1^\mu k_1^\nu} &=&\frac{\alpha_s}{2\pi^2}\frac{q_\perp^\beta}{(q_\perp^2)^2}\left[ \ln\frac{Q^2}{q_\perp^2}+\ln\frac{1}{R_1^2}+\ln\frac{\hat u}{\hat t}\right]\ ,\label{k1k1}\\ {\cal H}_{\textrm{twist-3}}^\beta\big|_{k_2^\mu k_2^\nu} &=&\frac{\alpha_s}{2\pi^2}\frac{q_\perp^\beta}{(q_\perp^2)^2}\left[ \ln\frac{Q^2}{q_\perp^2}+\ln\frac{1}{R_2^2}+\ln\frac{\hat t}{\hat u}\right]\ ,\label{k2k2}\\ {\cal H}_{\textrm{twist-3}}^\beta\big|_{k_1^\mu p_1^\nu,p_1^\mu k_1^\nu} &=&\frac{\alpha_s}{2\pi^2}\frac{q_\perp^\beta}{(q_\perp^2)^2}\left[ 2\ln\frac{Q^2}{q_\perp^2}+2\ln\frac{\hat u}{\hat t}\right]\ ,\label{k1p1}\\ {\cal H}_{\textrm{twist-3}}^\beta\big|_{k_2^\mu p_1^\nu,p_1^\mu k_2^\nu} &=&\frac{\alpha_s}{2\pi^2}\frac{q_\perp^\beta}{(q_\perp^2)^2}\left[ 2\ln\frac{Q^2}{q_\perp^2}+2\ln\frac{\hat t}{\hat u}\right]\ ,\label{p2p1}\\ {\cal H}_{\textrm{twist-3}}^\beta\big|_{k_1^\mu k_2^\nu,k_2^\mu k_1^\nu} &=&\frac{\alpha_s}{2\pi^2}\frac{q_\perp^\beta}{(q_\perp^2)^2}\left[ 2\ln\frac{Q^2}{q_\perp^2}-2\ln\frac{\hat s^2}{\hat t\hat u}\right]\ . \label{k1k2} \end{eqnarray} Again, we have neglected the terms of ${\cal O}(\epsilon)$ for simplicity. It is interesting to note that all of the above terms contribute to the leading double logarithms, whereas only $k_1^\mu k_1^\nu$ and $k_2^\mu k_2^\nu$ contribute to the jet related logarithms. Therefore, we need to consider all of them to derive the leading double logarithmic contributions. Next, we need to combine the above results with the associated color factors for the different channels in order to obtain the contributions to the SSA. 
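The entries above can be cross-checked against the building blocks ${\cal T}^\beta$ and ${\cal R}^\beta$. As a sketch of one case (using the same phase-space normalization as in the $p_1^\mu p_1^\nu$ example), the $k_1^\mu k_1^\nu$ term is the half-sum of the two blocks, since $S_g(k_1,p_2)=\frac{1}{2}\left\{\left[S_g(k_1,p_2)+S_g(k_1,p_1)\right]+\left[S_g(k_1,p_2)-S_g(k_1,p_1)\right]\right\}$:

```latex
% k_1^mu k_1^nu from the half-sum of T^beta(k_1) and R^beta(k_1):
\begin{equation}
  {\cal H}_{\textrm{twist-3}}^\beta\big|_{k_1^\mu k_1^\nu}
  =\frac{1}{2}\left[{\cal T}^\beta(k_1)+{\cal R}^\beta(k_1)\right]
  =\frac{\alpha_s}{2\pi^2}\frac{q_\perp^\beta}{(q_\perp^2)^2}
  \left[\ln\frac{Q^2}{q_\perp^2}+\ln\frac{1}{R_1^2}+\ln\frac{\hat u}{\hat t}\right]\ ,
\end{equation}
```

in agreement with Eq.~(\ref{k1k1}); the remaining entries follow analogously.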
\subsubsection{$q q'\to qq'$ channel} Let us first derive the SSA for the simplest channel, $qq'\to qq'$, the quark-quark scattering with different flavors. This channel only has a $t$-channel diagram. The leading order results have been calculated in Ref.~\cite{Qiu:2007ey}. The hard factor is given by \begin{eqnarray} H_{qq'\to qq'}^{\rm Sivers}=\frac{\alpha_s^2\pi}{\hat s^2}\frac{N_c^2-5}{4N_c^2}\frac{2(\hat s^2+\hat u^2)}{\hat t^2} \ ,\label{hqq'} \end{eqnarray} where the color factors for the initial and final state interactions are: $C_I=-\frac{1}{2N_c^2}$, $C_{F1}=\frac{N_c^2-2}{4N_c^2}$, $C_{F2}=-\frac{1}{4N_c^2}$. For the initial state interaction contributions, we calculate the interference between the diagrams in Fig.~\ref{unpolarized} and Fig.~\ref{gluonradiation0}. We obtain the following associated color factors: \begin{eqnarray} p_1^\mu p_1^\nu &\Rightarrow& C_F \ ,\\ k_1^\mu k_1^\nu &\Rightarrow& C_F \ ,\\ k_2^\mu k_2^\nu &\Rightarrow& C_F \ ,\\ p_1^\mu k_1^\nu &\Rightarrow& -\frac{1}{2N_c} \ ,\\ p_1^\mu k_2^\nu &\Rightarrow& \frac{3}{2}C_F-\frac{C_A}{2} \ ,\\ k_1^\mu k_2^\nu &\Rightarrow& \frac{3}{2}C_F-C_A \ . \end{eqnarray} In order to obtain the final results, we multiply the leading power contributions of Eqs.~(\ref{p1p1})-(\ref{k1k2}) with the associated color factors. Adding these results, we obtain the leading contribution from soft gluon radiation which can be written as \begin{eqnarray} {\cal H}_{\textrm{twist-3}}^{\beta(C_I)}({qq'\to qq'})&=&C_Ih_{q_iq_j\to q_iq_j}^{(0)}\frac{\alpha_s}{2\pi^2}\frac{-q_\perp^\beta}{(q_\perp^2)^2}\left[ 2C_F\ln\frac{Q^2}{q_\perp^2}+C_F\left(\ln\frac{1}{R_1^2}+\ln\frac{1}{R_2^2}\right)\right] \ . \label{qqsoft} \end{eqnarray} Here we only kept the terms relevant at $\rm LLA'$, and we have $h_{q_iq_j\to q_iq_j}^{(0)}=\frac{\alpha_s^2\pi}{\hat s^2}\frac{2(\hat s^2+\hat u^2)}{\hat t^2}$. We will come back to the remaining terms in Sec.~IV when we discuss factorization breaking effects. 
The minus sign in the above equation is due to the fact that $C_I$ was computed on the basis of the quark Sivers function for the SIDIS process, which has an opposite sign compared to the normalization of the twist-three matrix element $T_F$. The terms in Eq.~(\ref{qqsoft}) exhibit a clear factorization structure, including the leading double logarithmic term and the terms associated with the final state jets. The latter are represented by logarithmic terms of the jet radii $R_{1,2}$. \begin{table}[t] \caption{{\it The color factors for the soft gluon radiation interference diagrams for the $q q'\to qq'$ channel. The different rows show the results for the unpolarized case as well as for the initial and final state interaction contributions to the SSA.}} \begin{ruledtabular} \begin{tabular}{|l|c|c|c|c|c|c|} & $p_1^\mu p_1^\nu$ & $k_1^\mu k_1^\nu$ & $k_2^\mu k_2^\nu$ & $p_1^\mu k_1^\nu$ & $p_1^\mu k_2^\nu$ & $k_1^\mu k_2^\nu$ \\ \hline $C_u$ & $~~C_F~~$ &$~~C_F~~$ &$~~C_F~~$ & $~-\frac{1}{2N_c}~~$ & $\frac{1}{4}(2C_A-C_F)$ &$~-\frac{1}{4}C_F~~$\\ \hline $C_I$ & $C_F$ & $C_F$ &$C_F$ & $-\frac{1}{2N_c}$& $\frac{1}{2}$& $-1$\\ \hline $C_{F1}$ & $C_F$ & $C_F$ &$C_F$ & $-\frac{1}{2N_c}$ & $\frac{C_A}{2}-\frac{1}{N_c^2-2}$&$-\frac{1}{N_c^2-2}$\\ \hline $C_{F2}$ & $C_F$ & $C_F$ &$C_F$ & $-\frac{1}{2N_c}$& $ -\frac{1}{N_c}$& $C_F-\frac{1}{2N_c}$ \end{tabular} \end{ruledtabular} \label{qqcolor} \end{table} We can extend the above calculations to the final state interaction contributions associated with $k_1$ and $k_2$. We summarize the relevant color factors for the different terms in Table~\ref{qqcolor}.
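As a quick arithmetic check of the leading order normalization, the three color factors $C_I$, $C_{F1}$ and $C_{F2}$ quoted earlier indeed sum to the prefactor of the hard factor in Eq.~(\ref{hqq'}):

```latex
% C_I + C_{F1} + C_{F2} reproduces (N_c^2-5)/(4N_c^2):
\begin{equation}
  -\frac{1}{2N_c^2}+\frac{N_c^2-2}{4N_c^2}-\frac{1}{4N_c^2}
  =\frac{-2+(N_c^2-2)-1}{4N_c^2}=\frac{N_c^2-5}{4N_c^2}\ ,
\end{equation}
```

which equals $1/9$ for $N_c=3$.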
The total contribution can be written as \begin{eqnarray} {\cal H}_{\textrm{twist-3}}^{\beta (C_{F1}+C_{F2})} ({qq'\to qq'})&=&H_{qq'\to qq'}^{\rm Sivers}\frac{\alpha_s}{2\pi^2}\frac{-q_\perp^\beta}{(q_\perp^2)^2}\left[ 2C_F\ln\frac{Q^2}{q_\perp^2}+C_F\left(\ln\frac{1}{R_1^2}+\ln\frac{1}{R_2^2}\right)\right] \ , \label{qqsoft1} \end{eqnarray} where $H_{qq'\to qq'}^{\rm Sivers}$ was defined in Eq.~(\ref{hqq'}). Clearly, the first term contributes to the leading double logarithms of the SSA for this process. In addition, the divergence associated with the two final state quark jets has the desired structure. Combining the above result with the collinear gluon radiation contributions from incoming partons, see Eq.~(\ref{e30}), we obtain the spin-dependent differential cross section for the ${qq'\to qq'}$ channel \begin{eqnarray} \frac{d\Delta\sigma(S_\perp)}{d\Omega d^2 q_\perp}&=&-H_{qq'\to qq'}^{\rm Sivers}\epsilon^{\alpha\beta}S_\perp^\alpha \frac{\alpha_s}{2\pi^2}\frac{q_\perp^\beta}{(q_\perp^2)^2}x_1x_2\nonumber\\ &&\times \left\{ f_q(x_1){\cal P}_{q'g\to q'g}^{T(<)}\otimes T_{Fq'}(x_2,x_2) +T_{Fq'}(x_2,x_2){\cal P}_{q\to q}^{(<)}\otimes f_{q}(x_1)\right.\nonumber\\ &&\left.+f_q(x_1)T_{Fq'}(x_2,x_2)\left[ 2C_F\ln\frac{Q^2}{q_\perp^2}+C_F\left(\ln\frac{1}{R_1^2}+\ln\frac{1}{R_2^2}\right)\right]\right\} \ , \end{eqnarray} in the correlation limit of $q_\perp\ll P_T$. When taking the Fourier transform to $b_\perp$-space and adding the virtual and jet contributions to cancel the divergences, we expect to obtain the following one-loop result for $W^{T\beta}$ at $\rm LLA'$ accuracy: \begin{eqnarray} W_{qq'\to qq'}^{T\beta (1)}&=&H_{qq'\to qq'}^{\rm Sivers}\frac{ib_\perp^\beta}{2}\frac{\alpha_s}{2\pi}x_1x_2\left\{-\ln\frac{\mu^2b_\perp^2}{b_0^2}\Big[ f_q(x_1,\mu){\cal P}_{q'g\to q'g}^T\otimes T_{Fq'}(x_2,x_2,\mu)\right.\nonumber \\ &+&T_{Fq'}(x_2,x_2,\mu){\cal P}_{a'\to q}\otimes f_{a'}(x_1,\mu)\Big]\nonumber\\ &+&\left.
f_q(x_1,\mu)T_{Fq'}(x_2,x_2,\mu)C_F\left[\ln^2\left(\frac{Q^2b_\perp^2}{b_0^2}\right)-\left(\frac{3}{2}-\ln\frac{1}{R_1^2}-\ln\frac{1}{R_2^2}\right)\ln\frac{Q^2b_\perp^2}{b_0^2}\right]\right\} \ . \nonumber \\ \end{eqnarray} Here we have included the subtraction of the collinear divergences, and ${\cal P}_{qg\to qg}^T$ and ${\cal P}_{a'\to q}$ are the complete splitting kernels for the associated twist-three and leading-twist parton distributions, respectively. To derive the above results, we have assumed that the virtual contributions cancel the soft divergences of the real gluon radiation. It is important to show that this is indeed the case. However, this is beyond the scope of this work and we plan to come back to this issue in a later publication. We summarize three important aspects of the above result at this order. First, the collinear splitting is associated with the twist-three and twist-two parton distribution functions. This is the essence of the factorization of collinear gluon radiations as demonstrated in Ref.~\cite{Qiu:2007ey}. Second, the result for the leading double logarithms is consistent with the collinear and soft factorization at $\rm LLA'$. Each of the incoming quark lines contributes half of this double logarithmic term. This is an important feature for the Sudakov resummation. Third, the logarithms associated with the jets are also factorized in terms of the individual jets and we obtain the expected color charges of the final state jets. These results are consistent with the factorization argument. In the following two subsections, we will extend the above analysis to two other important channels, $gq\to gq$ and $\bar q q\to gg$. The former is the dominant channel for the SSA in dijet production for the typical kinematics at RHIC. \subsubsection{$qg\to qg$} The leading order derivation for the channel $qg\to qg$ was carried out in Ref.~\cite{Qiu:2007ey}. 
In order to simplify the analysis for the soft gluon radiation contribution, we follow Ref.~\cite{Sun:2015doa} and decompose the fundamental partonic scattering amplitude as \begin{equation} A_1\bar u_j T_{jk}^aT_{ki}^bu_i+A_2\bar u_jT_{jk}^bT_{ki}^au_i \ , \end{equation} with two different color structures at the Born level. Here, $a$ and $b$ are the color indices for the incoming and outgoing gluons, and $i$ and $j$ for the incoming and outgoing quarks. The amplitudes $A_1$ and $A_2$ depend on the momenta of the two incoming particles, $p_1$ and $p_2$, and on the momenta of the two outgoing particles $k_1$ and $k_2$ for the quarks and gluons, respectively. The single spin asymmetry can be formulated starting from the decomposed amplitude above. For example, the initial state interaction contribution can be derived from the following expression: \begin{eqnarray} -if_{cad}\left(A_1\bar u_j T_{jk}^dT_{ki}^bu_i+A_2\bar u_j T_{jk}^bT_{ki}^du_i\right)\left(A_1^*\bar u_{i'} T_{i'k'}^bT_{k'j'}^au_{j'}+A_2\bar u_{i'} T_{i'k'}^a T_{k'j'}^bu_{j'}\right) \ . \end{eqnarray} The color indices $i$ and $i'$ are coupled to the adjoint representation of the twist-three Qiu-Sterman matrix element. Therefore, we can rewrite $u_i\bar u_{i'}$ as $T^c_{ii'}$, and the above result leads to \begin{eqnarray} C_I:~{\cal H}_0^I&=&\frac{1}{N_{\textrm{color}}}\left(-if_{cad}\right){\rm Tr}\left[\left(A_1T^dT^b+A_2T^bT^d\right)T^c\left(A_1^* T^bT^a+A_2^*T^aT^b\right)\right]\nonumber\\ &=&\frac{1}{N_{\textrm{color}}}\left[-(A_1+A_2)^2+N_c^2A_2^2\right] \ , \end{eqnarray} where $N_{\textrm{color}}=(N_c^2-1)N_cC_F$. Similarly, for the final state interaction with the gluon line, we have \begin{eqnarray} C_{F1}:~{\cal H}_0^{F1}&=&\frac{1}{N_{\textrm{color}}}\left(-if_{cbd}\right){\rm Tr}\left[\left(A_1T^aT^d+A_2T^dT^a\right)T^c\left(A_1^* T^bT^a+A_2^*T^aT^b\right)\right]\nonumber\\ &=&\frac{1}{N_{\textrm{color}}}\left[-(A_1+A_2)^2+N_c^2A_1^2\right] \ . 
\end{eqnarray} For the final state interaction contribution from the final state quark line, we have \begin{eqnarray} C_{F2}:~{\cal H}_0^{F2}&=&\frac{1}{N_{\textrm{color}}}{\rm Tr}\left[\left(A_1T^cT^aT^b+A_2T^cT^bT^a\right)T^c\left(A_1^* T^bT^a+A_2^*T^aT^b\right)\right]\nonumber\\ &=&\frac{1}{N_{\textrm{color}}}\left[\frac{1}{N_c^2}(A_1+A_2)^2+2A_1A_2^* \right]\ . \end{eqnarray} Noticing that the initial state interaction carries a minus sign, the total contribution will be \begin{eqnarray} H_{qg\to qg}^{\rm Sivers}&=&{\cal H}_0^{F1}+{\cal H}_0^{F2}-{\cal H}_0^{I}\nonumber\\ &=&\frac{1}{N_{\textrm{color}}}\left[N_c^2\left(A_1^2-A_2^2\right)+\frac{1}{N_c^2}(A_1+A_2)^2+2A_1A_2^*\right] \ .\label{e72} \end{eqnarray} From our parameterization of the Born amplitude, we have \begin{eqnarray} &&(A_1+A_2)^2=\frac{2(\hat s^2+\hat u^2)}{-\hat s\hat u},~~A_1A_2^*=-\frac{2(\hat s^2+\hat u^2)}{\hat t^2}\ , \\ &&A_1^2=\frac{2\hat s}{-\hat u}\frac{\hat s^2+\hat u^2}{\hat t^2},~~ A_2^2=\frac{2\hat u}{-\hat s}\frac{\hat s^2+\hat u^2}{\hat t^2}. \end{eqnarray} Substituting the above expressions into Eq.~(\ref{e72}), we reproduce the result for the leading order $H_{qg\to qg}^{\rm Sivers}$ in Ref.~\cite{Qiu:2007ey}. Now, let us turn to the soft gluon radiation contributions. For the initial state interactions, we have soft gluon radiation contributions from the incoming gluon line, the outgoing gluon line and quark line. The relevant diagrams will follow those in Fig.~\ref{gluonradiation0}. In this case, the upper two lines are gluons. The associated amplitudes for these three diagrams are given by \begin{eqnarray} &&\frac{2p_1^\mu}{2p_1\cdot k_g}(-if_{cae})(-if_{def})\left(A_1\bar uT^fT^bu+A_2\bar uT^bT^f u\right) \ ,\\ &&\frac{2k_1^\mu}{2k_1\cdot k_g}(-if_{dae})(-if_{cbf})\left(A_1\bar uT^eT^fu+A_2\bar uT^fT^e u\right) \ ,\\ &&\frac{2k_2^\mu}{2k_2\cdot k_g}(-if_{dae})\left(A_1\bar uT^cT^eT^bu+A_2\bar uT^cT^bT^e u\right) \ . 
\end{eqnarray} Here $c$ is the color index of the radiated gluon and $d$ corresponds to the longitudinal gluon from the polarized nucleon attached to the partonic scattering part. The contributions to the SSA come from the interference of the above amplitudes and those in Fig.~\ref{unpolarized}, which we list here for the $qg\to qg$ channel, \begin{eqnarray} &&\frac{2p_1^\nu}{2p_1\cdot k_g}(-if_{cag})\left(A_1\bar uT^gT^bu+A_2\bar uT^bT^g u\right) \ ,\\ &&\frac{2k_1^\nu}{2k_1\cdot k_g}(-if_{cbg})\left(A_1\bar uT^aT^gu+A_2\bar uT^gT^a u\right) \ ,\\ &&\frac{2k_2^\nu}{2k_2\cdot k_g}\left(A_1\bar uT^cT^aT^bu+A_2\bar uT^cT^bT^a u\right) \ . \end{eqnarray} Similar to the previous case, the following interference terms are simple, \begin{eqnarray} p_1^\mu p_1^\nu \Rightarrow C_A, ~~k_1^\mu k_1^\nu\Rightarrow C_A,~~k_2^\mu k_2^\nu\Rightarrow C_F \ , \label{simpleone} \end{eqnarray} which will be multiplied by the leading order initial state interaction contribution of Eqs.~(\ref{p1p1})-(\ref{k2k2}). The other interference terms are a little bit more complicated. We have \begin{eqnarray} p_1^\mu k_1^\nu &\Rightarrow& \frac{1}{N_{\textrm{color}}}(-if_{cae})(-if_{def})(-if_{cbg}){\rm Tr}\left[\left(A_1T^fT^b+A_2T^bT^f\right)T^d\left(A_1^* T^gT^a+A_2^*T^aT^g\right)\right]\nonumber\\ &&=\frac{N_c}{N_{\textrm{color}}}\left(A_1^2+A_1A_2^*-\frac{N_c^2}{2}A_2^2\right) \ ,\\ p_1^\mu k_2^\nu &\Rightarrow& \frac{1}{N_{\textrm{color}}}(-if_{cae})(-if_{def}){\rm Tr}\left[\left(A_1T^fT^b+A_2T^bT^f\right)T^d\left(A_1^* T^bT^aT^c+A_2^*T^aT^bT^c\right)\right]\nonumber\\ &&=\frac{N_c}{2N_{\textrm{color}}}\left(A_1^2+A_2^2\right) \ ,\\ k_1^\mu k_2^\nu &\Rightarrow& \frac{1}{N_{\textrm{color}}}(-if_{dae})(-if_{cbf}){\rm Tr}\left[\left(A_1T^eT^f+A_2T^fT^e\right)T^d\left(A_1^* T^bT^aT^c+A_2^*T^aT^bT^c\right)\right]\nonumber\\ &&=\frac{N_c}{2N_{\textrm{color}}}\left(-A_1^2-(N_c^2-1)A_2^2+2A_1A_2^*\right) \ . 
\end{eqnarray} By combining them with the associated kinematic contributions in Eqs.~(\ref{p1p1})-(\ref{k1k2}) and adding them, we obtain the leading logarithmic result for the SSA from the initial state interaction for this channel. In particular, the above three terms add up to \begin{eqnarray} -C_A{\cal H}_0^I=\frac{C_A}{N_{\textrm{color}}}\left[(A_1+A_2)^2-N_c^2A_2^2\right] \ , \end{eqnarray} which exactly cancels the leading logarithmic contribution from the $p_1^\mu p_1^\nu$ term which also has the color factor $C_A$. We thus obtain the final result for the initial state interaction contribution: \begin{eqnarray} {\cal H}_{\textrm{twist-3}}^{\beta(C_I)}&=&-{\cal H}_0^I\frac{\alpha_s}{2\pi^2}\frac{-q_\perp^\beta}{(q_\perp^2)^2}\left[(C_A+C_F)\ln\frac{Q^2}{q_\perp^2}+C_A\ln\frac{1}{R_1^2}+C_F\ln\frac{1}{R_2^2}\right] \ . \end{eqnarray} For the final state interaction associated with the final state gluon jet, we have the following amplitudes for the associated diagrams in Fig.~\ref{gluonradiation1}: \begin{eqnarray} &&\frac{2p_1^\mu}{2p_1\cdot k_g}(-if_{cae})(-if_{dbf})\left(A_1\bar uT^eT^fu+A_2\bar uT^fT^e u\right) \ ,\\ &&\frac{2k_1^\mu}{2k_1\cdot k_g}(-if_{cbe})(-if_{def})\left(A_1\bar uT^aT^fu+A_2\bar uT^fT^a u\right) \ ,\\ &&\frac{2k_2^\mu}{2k_2\cdot k_g}(-if_{dbe})\left(A_1\bar uT^cT^aT^eu+A_2\bar uT^cT^eT^a u\right) \ . \end{eqnarray} Again, the interference contributions from $p_1^\mu p_1^\nu$, $k_1^\mu k_1^\nu$ and $k_2^\mu k_2^\nu$ have the same structure as those for the initial state interaction diagrams in Eq.~(\ref{simpleone}). 
The remaining contributions can be written as follows: \begin{eqnarray} p_1^\mu k_1^\nu &\Rightarrow& \frac{1}{N_{\textrm{color}}}(-if_{cae})(-if_{dbf})(-if_{cbg}){\rm Tr}\left[\left(A_1T^eT^f+A_2T^fT^e\right)T^d\left(A_1^* T^gT^a+A_2^*T^aT^g\right)\right]\nonumber\\ &&=\frac{N_c}{N_{\textrm{color}}}\left(A_2^2+A_1A_2^*-\frac{N_c^2}{2}A_1^2\right) \ ,\\ p_1^\mu k_2^\nu &\Rightarrow& \frac{1}{N_{\textrm{color}}}(-if_{cae})(-if_{dbf}){\rm Tr}\left[\left(A_1T^eT^f+A_2T^fT^e\right)T^d\left(A_1^* T^bT^aT^c+A_2^*T^aT^bT^c\right)\right]\nonumber\\ &&=\frac{N_c}{2N_{\textrm{color}}}\left(2A_1A_2^*-A_2^2-(N_c^2-1)A_1^2\right) \ ,\\ k_1^\mu k_2^\nu &\Rightarrow& \frac{1}{N_{\textrm{color}}}(-if_{cbe})(-if_{def}){\rm Tr}\left[\left(A_1T^aT^f+A_2T^fT^a\right)T^d\left(A_1^* T^bT^aT^c+A_2^*T^aT^bT^c\right)\right]\nonumber\\ &&=\frac{N_c}{2N_{\textrm{color}}}\left(A_1^2+A_2^2\right) \ . \end{eqnarray} Adding up the three contributions, we have \begin{eqnarray} -C_A{\cal H}_0^{F1}=\frac{C_A}{N_{\textrm{color}}}\left[(A_1+A_2)^2-N_c^2A_1^2\right] \ . \end{eqnarray} The total contribution from the final state interaction with the gluon jet is given by \begin{eqnarray} {\cal H}_{\textrm{twist-3}}^{\beta(C_{F1})}&=&{\cal H}_0^{F1}\frac{\alpha_s}{2\pi^2}\frac{-q_\perp^\beta}{(q_\perp^2)^2}\left[(C_A+C_F)\ln\frac{Q^2}{q_\perp^2}+C_A\ln\frac{1}{R_1^2}+C_F\ln\frac{1}{R_2^2}\right] \ . 
\end{eqnarray} For the final state interaction with the quark line, we find the following amplitudes for the associated diagrams of Fig.~\ref{gluonradiation2}: \begin{eqnarray} &&\frac{2p_1^\mu}{2p_1\cdot k_g}(-if_{cae})\left(A_1\bar uT^dT^eT^bu+A_2\bar uT^dT^bT^e u\right) \ ,\\ &&\frac{2k_1^\mu}{2k_1\cdot k_g}(-if_{cbe})\left(A_1\bar uT^dT^aT^eu+A_2\bar uT^dT^eT^a u\right) \ ,\\ &&\frac{2k_2^\mu}{2k_2\cdot k_g}\left(A_1\bar uT^cT^dT^aT^bu+A_2\bar uT^cT^dT^bT^a u\right) \ , \end{eqnarray} and the interference terms are \begin{eqnarray} p_1^\mu k_1^\nu &\Rightarrow& \frac{1}{N_{\textrm{color}}}(-if_{cae})(-if_{cbg}){\rm Tr}\left[\left(A_1T^dT^eT^b+A_2T^dT^bT^e\right)T^d\left(A_1^* T^gT^a+A_2^*T^aT^g\right)\right]\nonumber\\ &&=\frac{N_c}{2N_{\textrm{color}}}\left(-A_1^2-A_2^2-4A_1A_2^*\right) \ ,\\ p_1^\mu k_2^\nu &\Rightarrow& \frac{1}{N_{\textrm{color}}}(-if_{cae}){\rm Tr}\left[\left(A_1T^dT^eT^b+A_2T^dT^bT^e\right)T^d\left(A_1^* T^bT^aT^c+A_2^*T^aT^bT^c\right)\right]\nonumber\\ &&=\frac{1}{2N_cN_{\textrm{color}}}\left(-2A_1A_2^*-A_1^2+(N_c^2-1)A_2^2\right) \ ,\\ k_1^\mu k_2^\nu &\Rightarrow& \frac{1}{N_{\textrm{color}}}(-if_{cbe}){\rm Tr}\left[\left(A_1T^dT^aT^e+A_2T^dT^eT^a\right)T^d\left(A_1^* T^bT^aT^c+A_2^*T^aT^bT^c\right)\right]\nonumber\\ &&=\frac{1}{2N_cN_{\textrm{color}}}\left(-2A_1A_2^*-A_2^2+(N_c^2-1)A_1^2\right) \ . \end{eqnarray} The above three terms lead to \begin{eqnarray} -C_A{\cal H}_0^{F2}=\frac{C_A}{N_{\textrm{color}}}\left[-\frac{1}{N_c^2}(A_1+A_2)^2-2A_1A_2^*\right] \ . \end{eqnarray} The final result from the final state interaction with the quark line is \begin{eqnarray} {\cal H}_{\textrm{twist-3}}^{\beta(C_{F2})}&=&{\cal H}_0^{F2}\frac{\alpha_s}{2\pi^2}\frac{-q_\perp^\beta}{(q_\perp^2)^2}\left[(C_A+C_F)\ln\frac{Q^2}{q_\perp^2}+C_A\ln\frac{1}{R_1^2}+C_F\ln\frac{1}{R_2^2}\right] \ . 
\end{eqnarray} Adding up all initial and final state interaction contributions, we obtain the following result for the SSA at the leading logarithmic order: \begin{eqnarray} {\cal H}_{\textrm{twist-3}}^{\beta}(gq\to gq)&=&H_{qg\to qg}^{\rm Sivers}\frac{\alpha_s}{2\pi^2}\frac{-q_\perp^\beta}{(q_\perp^2)^2}\left[(C_A+C_F)\ln\frac{Q^2}{q_\perp^2}+C_A\ln\frac{1}{R_1^2}+C_F\ln\frac{1}{R_2^2}\right] \ . \end{eqnarray} Including the collinear gluon radiation contributions from the incoming partons as in Eq.~(\ref{e30}), we obtain the spin-dependent differential cross section for the $gq\to gq$ channel in the correlation limit of $q_\perp\ll P_T$ which is given by \begin{eqnarray} \frac{d\Delta\sigma(S_\perp)}{d\Omega d^2 q_\perp}&=&-H_{qg\to qg}^{\rm Sivers}\epsilon^{\alpha\beta}S_\perp^\alpha \frac{\alpha_s}{2\pi^2}\frac{q_\perp^\beta}{(q_\perp^2)^2}x_1x_2\nonumber\\ &&\times \left\{ f_g(x_1){\cal P}_{qg\to qg}^{T(<)}\otimes T_{Fq}(x_2,x_2) +T_{Fq}(x_2,x_2){\cal P}_{g\to g}^{(<)}\otimes f_{g}(x_1)\right.\nonumber\\ &&\left.+f_g(x_1)T_{Fq}(x_2,x_2)\left[ (C_A+C_F)\ln\frac{Q^2}{q_\perp^2}+C_A\ln\frac{1}{R_1^2}+C_F\ln\frac{1}{R_2^2}\right]\right\} \ . \end{eqnarray} After taking the Fourier transform to $b_\perp$-space, we expect to have the following one-loop result for $W^{T\beta}$: \begin{eqnarray} W_{gq\to gq}^{T\beta (1)}&=&H_{qg\to qg}^{\rm Sivers}\frac{ib_\perp^\beta}{2}\frac{\alpha_s}{2\pi}x_1x_2\left\{-\ln\frac{\mu^2b_\perp^2}{b_0^2}\Big[f_g(x_1,\mu) {\cal P}_{qg\to qg}^T\otimes T_{Fq}(x_2,x_2,\mu)\right. \nonumber \\ &+&T_{Fq}(x_2,x_2,\mu){\cal P}_{a'\to g}\otimes f_{a'}(x_1,\mu)\Big]\nonumber\\ &+& f_g(x_1,\mu)T_{Fq}(x_2,x_2,\mu)\left[\frac{C_A+C_F}{2}\ln^2\left(\frac{Q^2b_\perp^2}{b_0^2}\right)\right.\nonumber\\ &&~~~\left.\left.-\left(\frac{3}{2}C_F+2\beta_0C_A-C_A\ln\frac{1}{R_1^2}-C_F\ln\frac{1}{R_2^2}\right)\ln\frac{Q^2b_\perp^2}{b_0^2}\right]\right\} \ .
\end{eqnarray} \subsubsection{$q\bar q\to gg$} The computation for the $q\bar q\to gg$ channel is very similar to the case discussed above. Here, we can parametrize the leading Born amplitude as \begin{eqnarray} A_1\bar vT^aT^b u +A_2\bar v T^bT^a u \ , \end{eqnarray} where, of course, the amplitudes will be different from those for the $qg\to qg$ channel. In particular, we have $(A_1+A_2)^2=\frac{2(\hat t^2+\hat u^2)}{\hat t\hat u}$, $A_1A_2^*=\frac{2(\hat t^2+\hat u^2)}{\hat s^2}$, $A_1^2=\frac{\hat t}{\hat u}\frac{2(\hat t^2+\hat u^2)}{\hat s^2}$ and $A_2^2=\frac{\hat u}{\hat t}\frac{2(\hat t^2+\hat u^2)}{\hat s^2}$ for the current case. The single spin asymmetry comes from the initial state interaction with the antiquark line and the two final state gluon lines. For the initial state interaction contribution, we have \begin{eqnarray} C_I^{(q\bar q)}:~{\cal H}_{0q\bar q}^I&=&\frac{1}{N_{\textrm{color}}^{(q\bar q)}}{\rm Tr}\left[\left(A_1T^cT^aT^b+A_2T^cT^bT^a\right)T^c\left(A_1^* T^bT^a+A_2^*T^aT^b\right)\right]\nonumber\\ &=&\frac{1}{N_{\textrm{color}}^{(q\bar q)}}\left[\frac{1}{N_c^2}(A_1+A_2)^2+2A_1A_2^*\right] \ , \end{eqnarray} where $N_{\textrm{color}}^{(q\bar q)}=N_c^2C_F$. For the final state interaction contribution with the gluon line of $k_1$, we obtain \begin{eqnarray} C_{F1}^{(q\bar q)}:~{\cal H}_{0q\bar q}^{F1}&=&\frac{1}{N_{\textrm{color}}^{(q\bar q)}}\left(-if_{cae}\right){\rm Tr}\left[\left(A_1T^eT^b+A_2T^bT^e\right)T^c\left(A_1^* T^bT^a+A_2^*T^aT^b\right)\right]\nonumber\\ &=&\frac{1}{N_{\textrm{color}}^{(q\bar q)}}\left[{N_c^2}A_2^2-(A_1+A_2)^2\right] \ . \end{eqnarray} Using the symmetry of the channel, the final state interaction contribution from the gluon line $k_2$ can be obtained from the above result as \begin{eqnarray} C_{F2}^{(q\bar q)}:~{\cal H}_{0q\bar q}^{F2} &=&\frac{1}{N_{\textrm{color}}^{(q\bar q)}}\left[{N_c^2}A_1^2-(A_1+A_2)^2\right] \ .
\end{eqnarray} The total contribution at the leading order is given by \begin{eqnarray} H_{q\bar q\to gg}^{\rm Sivers}&=&{\cal H}_{0q\bar q}^{F1}+{\cal H}_{0q\bar q}^{F2}-{\cal H}_{0q\bar q}^{I}\nonumber\\ &=&\frac{1}{N_{\textrm{color}}^{(q\bar q)}}\left[N_c^2\left(A_1^2+A_2^2\right)-2A_1A_2^*-\frac{2N_c^2+1}{N_c^2}(A_1+A_2)^2\right] \ .\label{loqqbar} \end{eqnarray} Substituting the expressions of $A_1$ and $A_2$ into the above equation, we reproduce the result for the leading order $H_{q\bar q\to gg}^{\rm Sivers}$ in Ref.~\cite{Qiu:2007ey}. Next, we can work out the soft gluon radiation contribution as well. For the initial state interaction contributions ($C_I$), we consider the soft gluon radiation from the incoming antiquark line and the two outgoing gluon lines. The relevant diagrams again correspond to those shown in Fig.~\ref{gluonradiation0}. The associated amplitudes for these three diagrams are \begin{eqnarray} &&\frac{2p_1^\mu}{2p_1\cdot k_g}\left(A_1\bar vT^cT^dT^aT^bu+A_2\bar vT^cT^dT^bT^a u\right) \ ,\\ &&\frac{2k_1^\mu}{2k_1\cdot k_g}(-if_{cae})\left(A_1\bar vT^dT^eT^bu+A_2\bar vT^dT^bT^e u\right) \ ,\\ &&\frac{2k_2^\mu}{2k_2\cdot k_g}(-if_{cbe})\left(A_1\bar vT^dT^aT^eu+A_2\bar vT^dT^eT^a u\right) \ , \end{eqnarray} where $c$ is again the color index of the radiated gluon and $d$ corresponds to the longitudinal gluon from the polarized nucleon. The single spin asymmetry contributions come from the interference of the above amplitudes and those of Fig.~\ref{unpolarized}. For the $q\bar q\to gg$ channel, we have \begin{eqnarray} &&\frac{2p_1^\nu}{2p_1\cdot k_g}\left(A_1\bar vT^cT^aT^bu+A_2\bar vT^cT^bT^a u\right) \ ,\\ &&\frac{2k_1^\nu}{2k_1\cdot k_g}(-if_{cag})\left(A_1\bar vT^gT^bu+A_2\bar vT^bT^g u\right) \ ,\\ &&\frac{2k_2^\nu}{2k_2\cdot k_g}(-if_{cbg})\left(A_1\bar vT^aT^gu+A_2\bar vT^gT^a u\right) \ .
\end{eqnarray} In our previous notation, the resulting interference terms are \begin{eqnarray} p_1^\mu p_1^\nu \Rightarrow C_F, ~~k_1^\mu k_1^\nu\Rightarrow C_A,~~k_2^\mu k_2^\nu\Rightarrow C_A \ . \end{eqnarray} The other interference terms can be evaluated as \begin{eqnarray} p_1^\mu k_1^\nu &\Rightarrow& \frac{1}{N_{\textrm{color}}^{(q\bar q)}}(if_{cae}){\rm Tr}\left[\left(A_1T^cT^dT^aT^b+A_2T^cT^dT^bT^a\right)T^d\left(A_1^* T^bT^e+A_2^*T^eT^b\right)\right]\nonumber\\ &&=\frac{1}{2N_cN_{\textrm{color}}^{(q\bar q)}}\left((N_c^2-1)A_2^2-A_1^2-2A_1A_2^*\right) \ ,\\ p_1^\mu k_2^\nu &\Rightarrow& \frac{1}{N_{\textrm{color}}^{(q\bar q)}}(if_{cbe}){\rm Tr}\left[\left(A_1T^cT^dT^aT^b+A_2T^cT^dT^bT^a\right)T^d\left(A_1^* T^eT^a+A_2^*T^aT^e\right)\right]\nonumber\\ &&=\frac{1}{2N_cN_{\textrm{color}}^{(q\bar q)}}\left((N_c^2-1)A_1^2-A_2^2-2A_1A_2^*\right) \ ,\\ k_1^\mu k_2^\nu &\Rightarrow& \frac{1}{N_{\textrm{color}}^{(q\bar q)}}(-if_{cae})(if_{cbf}){\rm Tr}\left[\left(A_1T^dT^eT^b+A_2T^dT^bT^e\right)T^d\left(A_1^* T^fT^a+A_2^*T^aT^f\right)\right]\nonumber\\ &&=\frac{N_c}{N_{\textrm{color}}^{(q\bar q)}}\left(-\frac{1}{2}(A_1^2+A_2^2)-2A_1A_2^*\right) \ . \end{eqnarray} The above three terms add up to \begin{eqnarray} -C_A{\cal H}_{0q\bar q}^I=\frac{1}{N_{\textrm{color}}^{(q\bar q)}}\left[-\frac{1}{N_c}(A_1+A_2)^2-2N_cA_1A_2^*\right] \ , \end{eqnarray} which exactly cancels the leading log contribution from $k_1^\mu k_1^\nu$ and $k_2^\mu k_2^\nu$ with color factor $C_A$. Therefore, we obtain the following final result for the initial state interaction contribution: \begin{eqnarray} {\cal H}_{\textrm{twist-3}}^{\beta(C_I)}=-{\cal H}_{0q\bar q}^I\frac{\alpha_s}{2\pi^2}\frac{-q_\perp^\beta}{(q_\perp^2)^2}\left[2C_F\ln\frac{Q^2}{q_\perp^2}+C_A\left(\ln\frac{1}{R_1^2}+\ln\frac{1}{R_2^2}\right)\right] \ .
\end{eqnarray} For the final state interaction associated with the final state gluon jet $k_1$, we find the following amplitudes, see Fig.~\ref{gluonradiation1}, \begin{eqnarray} &&\frac{2p_1^\mu}{2p_1\cdot k_g}(-if_{dae})\left(A_1\bar vT^cT^eT^bu+A_2\bar vT^cT^bT^e u\right) \ ,\\ &&\frac{2k_1^\mu}{2k_1\cdot k_g}(-if_{caf})(-if_{dfe})\left(A_1\bar vT^eT^bu+A_2\bar vT^bT^e u\right) \ ,\\ &&\frac{2k_2^\mu}{2k_2\cdot k_g}(-if_{cbf})(-if_{dae})\left(A_1\bar vT^eT^fu+A_2\bar vT^fT^e u\right) \ . \end{eqnarray} Again, the interference contributions from $p_1^\mu p_1^\nu$, $k_1^\mu k_1^\nu$ and $k_2^\mu k_2^\nu$ have the same structure as those for the initial state interaction diagrams. The remaining contributions can be written as \begin{eqnarray} p_1^\mu k_1^\nu &\Rightarrow& \frac{1}{N_{\textrm{color}}^{(q\bar q)}}(-if_{dae})(if_{cag}){\rm Tr}\left[\left(A_1T^cT^eT^b+A_2T^cT^bT^e\right)T^d\left(A_1^* T^bT^g+A_2^*T^gT^b\right)\right]\nonumber\\ &&=\frac{N_c}{2N_{\textrm{color}}^{(q\bar q)}}\left(A_1^2+A_2^2\right) \ ,\\ p_1^\mu k_2^\nu &\Rightarrow& \frac{1}{N_{\textrm{color}}^{(q\bar q)}}(-if_{dae})(if_{cbg}){\rm Tr}\left[\left(A_1T^cT^eT^b+A_2T^cT^bT^e\right)T^d\left(A_1^* T^gT^a+A_2^*T^aT^g\right)\right]\nonumber\\ &&=\frac{N_c}{2N_{\textrm{color}}^{(q\bar q)}}\left(-A_1^2+2A_1A_2^*-(N_c^2-1)A_2^2\right) \ ,\\ k_1^\mu k_2^\nu &\Rightarrow& \frac{1}{N_{\textrm{color}}^{(q\bar q)}}(-if_{caf})(-if_{dfe})(if_{cbg}){\rm Tr}\left[\left(A_1T^eT^b+A_2T^bT^e\right)T^d\left(A_1^* T^gT^a+A_2^*T^aT^g\right)\right]\nonumber\\ &&=\frac{N_c}{2N_{\textrm{color}}^{(q\bar q)}}\left(2A_1^2+2A_1A_2^*-N_c^2A_2^2\right) \ . \end{eqnarray} Adding the three terms above, we find \begin{eqnarray} -C_A{\cal H}_{0q\bar q}^{F1}=\frac{C_A}{N_{\textrm{color}}^{(q\bar q)}}\left[(A_1+A_2)^2-N_c^2A_2^2\right] \ . 
\end{eqnarray} The total contribution from the final state interaction with the gluon jet leads to \begin{eqnarray} {\cal H}_{\textrm{twist-3}}^{\beta(C_{F1})}={\cal H}_{0q\bar q}^{F1}\frac{\alpha_s}{2\pi^2}\frac{-q_\perp^\beta}{(q_\perp^2)^2}\left[2C_F\ln\frac{Q^2}{q_\perp^2}+C_A\left(\ln\frac{1}{R_1^2}+\ln\frac{1}{R_2^2}\right)\right] \ . \end{eqnarray} Again, from the symmetry under interchange of the final state particles, we can obtain the soft gluon radiation contribution for the final state interaction with the gluon line $k_2$. Adding all these contributions, we have the following result for the SSA at the leading logarithmic order, \begin{eqnarray} {\cal H}_{\textrm{twist-3}}^{\beta}(q\bar q\to gg)=H_{q\bar q\to gg}^{\rm Sivers}\frac{\alpha_s}{2\pi^2}\frac{-q_\perp^\beta}{(q_\perp^2)^2}\left[2C_F\ln\frac{Q^2}{q_\perp^2}+C_A\left(\ln\frac{1}{R_1^2}+\ln\frac{1}{R_2^2}\right)\right] \ . \end{eqnarray} Adding the collinear gluon radiation contribution, we find the following result: \begin{eqnarray} \frac{d\Delta\sigma(S_\perp)}{d\Omega d^2 q_\perp}&=&-H_{q\bar q\to gg}^{\rm Sivers}\epsilon^{\alpha\beta}S_\perp^\alpha \frac{\alpha_s}{2\pi^2}\frac{q_\perp^\beta}{(q_\perp^2)^2}x_1x_2\nonumber\\ &&\times \left\{ f_{\bar q}(x_1){\cal P}_{qg\to qg}^{T(<)}\otimes T_{Fq}(x_2,x_2) +T_{Fq}(x_2,x_2){\cal P}_{a\to \bar q}^{(<)}\otimes f_{\bar q}(x_1)\right.\nonumber\\ &&\left.+f_{\bar q}(x_1)T_{Fq}(x_2,x_2)\left[ 2C_F\ln\frac{Q^2}{q_\perp^2}+C_A\left(\ln\frac{1}{R_1^2}+\ln\frac{1}{R_2^2}\right)\right]\right\} \ , \end{eqnarray} for the $q\bar q\to gg$ channel.
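The cancellation pattern quoted in this and the previous subsection, namely that the three nontrivial interference terms always add up to $-C_A$ times the corresponding Born color structure, is a polynomial identity in $A_1$ and $A_2$. It can be spot-checked with exact rational arithmetic; the sketch below (our bookkeeping, with the overall $1/N_{\textrm{color}}$ factors stripped and real amplitudes assumed) covers the $C_I$ case of the $qg\to qg$ channel and the $C_I$ and $C_{F1}$ cases of the $q\bar q\to gg$ channel:

```python
from fractions import Fraction as F

def qg_sum_CI(A1, A2, Nc):
    """p1.k1 + p1.k2 + k1.k2 interference terms, qg -> qg initial state interaction."""
    t1 = Nc * (A1**2 + A1 * A2 - F(Nc**2, 2) * A2**2)
    t2 = F(Nc, 2) * (A1**2 + A2**2)
    t3 = F(Nc, 2) * (-A1**2 - (Nc**2 - 1) * A2**2 + 2 * A1 * A2)
    return t1 + t2 + t3

def qqbar_sum_CI(A1, A2, Nc):
    """Same three terms for the q qbar -> gg initial state interaction."""
    t1 = F(1, 2 * Nc) * ((Nc**2 - 1) * A2**2 - A1**2 - 2 * A1 * A2)
    t2 = F(1, 2 * Nc) * ((Nc**2 - 1) * A1**2 - A2**2 - 2 * A1 * A2)
    t3 = Nc * (-F(1, 2) * (A1**2 + A2**2) - 2 * A1 * A2)
    return t1 + t2 + t3

def qqbar_sum_CF1(A1, A2, Nc):
    """Same three terms for the final state interaction with the gluon jet k1."""
    t1 = F(Nc, 2) * (A1**2 + A2**2)
    t2 = F(Nc, 2) * (-A1**2 + 2 * A1 * A2 - (Nc**2 - 1) * A2**2)
    t3 = F(Nc, 2) * (2 * A1**2 + 2 * A1 * A2 - Nc**2 * A2**2)
    return t1 + t2 + t3

for A1, A2, Nc in [(F(1), F(2), 3), (F(-4, 3), F(5, 7), 3), (F(2), F(3), 4)]:
    # -C_A H_0^I (qg -> qg) = Nc [ (A1+A2)^2 - Nc^2 A2^2 ]
    assert qg_sum_CI(A1, A2, Nc) == Nc * ((A1 + A2)**2 - Nc**2 * A2**2)
    # -C_A H_0^I (q qbar -> gg) = -(1/Nc)(A1+A2)^2 - 2 Nc A1 A2
    assert qqbar_sum_CI(A1, A2, Nc) == -F(1, Nc) * (A1 + A2)**2 - 2 * Nc * A1 * A2
    # -C_A H_0^F1 (q qbar -> gg) = Nc [ (A1+A2)^2 - Nc^2 A2^2 ]
    assert qqbar_sum_CF1(A1, A2, Nc) == Nc * ((A1 + A2)**2 - Nc**2 * A2**2)
```

Since the identities hold for arbitrary rational $A_1$, $A_2$ and $N_c$, the cancellation of the $C_A$ pieces does not rely on the specific Mandelstam dependence of the amplitudes.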
Taking again the Fourier transform to $b_\perp$-space should lead to the following one-loop result: \begin{eqnarray} W_{q\bar q\to gg}^{T\beta (1)}&=&H_{q\bar q\to gg}^{\rm Sivers}\frac{ib_\perp^\beta}{2}\frac{\alpha_s}{2\pi}x_1x_2\left\{-\ln\frac{\mu^2b_\perp^2}{b_0^2}\Big[ f_{\bar q}(x_1,\mu){\cal P}_{qg\to qg}^T\otimes T_{Fq}(x_2,x_2,\mu)\right.\nonumber \\ &+&T_{Fq}(x_2,x_2,\mu){\cal P}_{a\to \bar q}\otimes f_a(x_1,\mu)\Big]\nonumber\\ &+&f_{\bar q}(x_1,\mu)T_{Fq}(x_2,x_2,\mu)\left[C_F\ln^2\left(\frac{Q^2b_\perp^2}{b_0^2}\right)\right.\nonumber\\ &-&\left.\left.\left(\frac{3}{2}C_F-C_A\left(\ln\frac{1}{R_1^2}+\ln\frac{1}{R_2^2}\right)\right)\ln\frac{Q^2b_\perp^2}{b_0^2}\right]\right\}\ . \end{eqnarray} \subsection{Factorization and Resummation at $\rm LLA'$} Our above results for the soft gluon radiation contribution at one-loop order are very instructive for developing a factorization argument according to which one can perform the resummation of logarithms in $q_\perp/P_T$. Three important types of contributions are incorporated and explicitly presented in our calculations: (1) collinear gluon radiation from the incoming hadrons, (2) collinear gluon radiation from the final state jets, and (3) the leading double logarithms from soft gluon radiation. \begin{figure}[t] \begin{center} \includegraphics[width=11cm]{one-loop-summary.eps} \end{center} \vskip -0.4cm \caption{\it Summary of the one-loop calculations of the soft gluon radiation contributions to the SSA in dijet production.} \label{one-loop-summary} \end{figure} We argue that these contributions to the $\rm LLA'$ will continue to factorize at higher orders. As illustrated in Fig.~\ref{one-loop-summary}, we can summarize the above calculations of the one-loop soft gluon radiations in a factorization structure. First, the power counting analysis simplifies the kinematics. 
As shown in this figure, the initial/final state interactions needed to generate a phase for the SSA in this process can be factorized into a hard partonic scattering amplitude, in analogy to the leading order Born amplitude for the spin-averaged case. Additional soft gluon radiation only appears in association with the external lines. This not only simplifies the detailed derivations at this order, but also shows that the soft gluon radiation associated with the final state jets can be generalized to all orders. In our example, we have chosen a physical polarization for the radiated gluon, such that the jet contribution comes only from the squared diagrams where the radiated gluon is attached to one particular jet, e.g. either $k_1$ or $k_2$. These emissions always result in a leading contribution proportional to $1/q_\perp^2\ln(1/R^2)$. An evolution equation can be derived to resum higher order emissions and the final result can be written in terms of the Sudakov resummation form factor of Eq.~(\ref{sudakov-spin}). Similar arguments can be made for the collinear gluon radiation associated with the incoming hadrons. These collinear gluons also contribute terms of order $1/q_\perp^2$, which will be multiplied by the splitting of the associated parton distribution and/or twist-three Qiu-Sterman matrix element. The all order resummation can be carried out by solving the relevant DGLAP equations. This can be achieved by evaluating the distributions at the scale $\mu_b=b_0/b_\perp$ and by including the associated anomalous dimensions in the Sudakov resummation form factor. The leading double logarithms from soft gluon radiation are associated with kinematics overlapping with the collinear gluon radiation from the incoming partons. The resummation of these double logarithms can be carried out by solving the associated Collins-Soper evolution equations for the TMD parton distributions.
The double logarithms from two TMD parton distributions give the leading double logarithms for the final differential cross section. This argument has been verified explicitly in the above derivation. We expect that this can be generalized to higher orders as well and an all order resummation can be obtained in the form of the Sudakov form factor in Eq.~(\ref{sudakov-spin}). \section{Factorization Breaking Effects at NLL} To exhibit factorization breaking effects beyond the LLA, we consider the channel $qq'\to qq'$ as an example. From the derivation in the last section, we obtain the soft gluon radiation contribution to the transverse spin dependent differential cross section (see Eqs.~(\ref{p1p1})--(\ref{k1k2}),(\ref{qqsoft}),(\ref{qqsoft1})) \begin{eqnarray} {\cal H}_{\textrm{twist-3}}^{\beta}=\frac{\alpha_s}{2\pi^2}\frac{-q_\perp^\beta}{(q_\perp^2)^2}\left\{H_{q_iq_j\to q_iq_j}^{\rm Sivers}\left[ 2C_F\ln\frac{Q^2}{q_\perp^2}+C_F\left(\ln\frac{1}{R_1^2}+\ln\frac{1}{R_2^2}\right)\right] +\widetilde{\Gamma}_{sn}^{(qq')}\right\} \ , \label{qqsoft1a} \end{eqnarray} where $h_{q_iq_j\to q_iq_j}^{(0)}=\frac{\alpha_s^2\pi}{\hat s^2}\frac{2(\hat s^2+\hat u^2)}{\hat t^2}$ and \begin{equation} \widetilde{\Gamma}_{sn}^{(qq')}=h_{q_iq_j\to q_iq_j}^{(0)}\left[-\frac{1}{N_c}\ln\frac{\hat s^2}{\hat t\hat u}-4\ln\frac{\hat t}{\hat u}\right] \ . \end{equation} Comparing to Eq.~(66) of Ref.~\cite{Sun:2015doa}, we observe that this term is different from the analogous term in the unpolarized case, implying that it cannot be factorized into a spin independent soft factor for the $qq'\to qq'$ process. Even if we consider only {\it initial state} soft gluons for the transverse spin dependent differential cross section, we get a result different from the unpolarized one. 
Here we have \begin{eqnarray} {\cal H}_{\textrm{twist-3}}^{\beta(C_I)}=\frac{\alpha_s}{2\pi^2}\frac{-q_\perp^\beta}{(q_\perp^2)^2}\left\{C_Ih_{q_iq_j\to q_iq_j}^{(0)}\left[ 2C_F\ln\frac{Q^2}{q_\perp^2}+C_F\left(\ln\frac{1}{R_1^2}+\ln\frac{1}{R_2^2}\right)\right] +\widetilde{\Gamma}_{sn}^{(qq',C_I)}\right\} \ , \end{eqnarray} where \begin{equation} \widetilde{\Gamma}_{sn}^{(qq',C_I)}=h_{q_iq_j\to q_iq_j}^{(0)}\left[2\ln\frac{\hat s^2}{\hat t\hat u}-C_F\ln\frac{\hat t}{\hat u}\right]\ . \end{equation} The same pattern occurs for the other two channels studied in the previous section, $qg\to qg$ and $q\bar q\to gg$. \begin{figure}[t] \begin{center} \includegraphics[width=5cm]{unpolarizefac.eps} \end{center} \vskip -0.4cm \caption{\it Soft gluon radiation for the unpolarized case in dijet production. The soft factor can be constructed from the relevant color space configuration of $ij\to kl$, where $ij$ and $kl$ represent the color indices for incoming and outgoing partons in the partonic $2\to 2$ process. } \label{unpolarizefactorization} \end{figure} Let us recall the factorization argument for soft gluon radiation in dijet production. Following the analysis of the generic soft gluon radiation in this process, see, e.g., Refs.~\cite{Kidonakis:1998bk,Kidonakis:1998nf,Kidonakis:2000gi}, one can derive the associated soft factor for the TMD factorization in matrix form in the relevant color space~\cite{Sun:2014gfa,Sun:2015doa}. As shown in Fig.~\ref{unpolarizefactorization}, the color configuration in the $2\to 2$ partonic process can be written as $ij\to kl$, where $ij$ and $kl$ represent the color indices for the incoming and outgoing partons, respectively. The independent color space tensors carry these indices. The relevant amplitudes for soft gluon radiation and their complex conjugates are derived based on these configurations as well.
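All of the trace computations in this paper ultimately rest on standard SU($N_c$) identities, most importantly the Fierz rearrangement $T^a_{ij}T^a_{kl}=\frac{1}{2}\left(\delta_{il}\delta_{jk}-\frac{1}{N_c}\delta_{ij}\delta_{kl}\right)$, which is also what underlies the replacement $u_i\bar u_{i'}\to T^c_{ii'}$ used earlier. As a sanity check of the color algebra (generic SU(3), not specific to this work), the identity can be verified with explicit Gell-Mann matrices:

```python
# Explicit SU(3) generators T^a = lambda^a / 2 in the Gell-Mann basis
I = 1j
r3 = 3 ** 0.5
LAM = [
    [[0, 1, 0], [1, 0, 0], [0, 0, 0]],
    [[0, -I, 0], [I, 0, 0], [0, 0, 0]],
    [[1, 0, 0], [0, -1, 0], [0, 0, 0]],
    [[0, 0, 1], [0, 0, 0], [1, 0, 0]],
    [[0, 0, -I], [0, 0, 0], [I, 0, 0]],
    [[0, 0, 0], [0, 0, 1], [0, 1, 0]],
    [[0, 0, 0], [0, 0, -I], [0, I, 0]],
    [[1 / r3, 0, 0], [0, 1 / r3, 0], [0, 0, -2 / r3]],
]
T = [[[LAM[a][i][j] / 2 for j in range(3)] for i in range(3)] for a in range(8)]

def delta(x, y):
    return 1.0 if x == y else 0.0

def fierz_ok(Nc=3, tol=1e-12):
    """Check sum_a T^a_ij T^a_kl = (1/2)(d_il d_jk - d_ij d_kl / Nc) entrywise."""
    for i in range(3):
        for j in range(3):
            for k in range(3):
                for l in range(3):
                    lhs = sum(T[a][i][j] * T[a][k][l] for a in range(8))
                    rhs = 0.5 * (delta(i, l) * delta(j, k) - delta(i, j) * delta(k, l) / Nc)
                    if abs(lhs - rhs) > tol:
                        return False
    return True
```

Tracing the identity over $j=k$ immediately gives $(T^aT^a)_{il}=C_F\,\delta_{il}$ with $C_F=(N_c^2-1)/2N_c$, the value used repeatedly above.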
In TMD factorization, the associated soft factors are formulated in terms of a matrix form on the basis of these color configurations as well, where the external lines will be replaced by the Wilson lines. However, for the SSA in the dijet process, an additional gluon attachment from the polarized nucleon is needed to generate the phase required for the Sivers effect. Because of this, the color flow of the partonic process is totally different from that in the unpolarized case. As shown in Fig.~\ref{ssafactorization}, the single spin asymmetry comes from the interference between the amplitude with gluon attachment and that without gluon attachment. The attaching gluon carries color (with adjoint representation index $a$ in this diagram). The color flow for the amplitude shown in this plot can be written as $ij(a)\to kl$ for the left side and $i'j\to kl$ for the right side, respectively. The final result for the SSA is derived by contracting $i$ and $i'$ via the matrix $T^a_{ii'}$ representing the attaching gluon $a$. Because of this additional color flow caused by the gluon attachment, the color space configurations differ from those in the unpolarized case. As a consequence, the soft factor changes, and the result for the spin-averaged case derived in Ref.~\cite{Sun:2014gfa,Sun:2015doa} does not apply here. This is indeed precisely what our calculations demonstrate. \begin{figure}[t] \begin{center} \includegraphics[width=5cm]{ssafac.eps} \end{center} \vskip -0.4cm \caption{\it Soft gluon radiation for the SSA contributions in the dijet process. The contributions arise as the interference between the amplitude for $ij(a)\to kl$ with an additional gluon attachment from the polarized nucleon and the amplitude without the gluon attachment $i'j\to kl$, where $a$ represents the color index of the attached gluon. The final result is derived by contracting color indices $i$ and $i'$ with $T^a_{ii'}$. 
Therefore, the color flow is totally different from that in Fig.~\ref{unpolarizefactorization} for the unpolarized case. } \label{ssafactorization} \end{figure} As just described, the difficulty in applying the standard soft factor factorization for the SSA comes from the fact that we need to generate a phase in order to obtain a nonzero SSA for this process. The phase comes from the imaginary part of the propagator (pole contribution), which may carry a different sign compared to the real part. For unpolarized scattering at this order, in a diagram similar to Fig.~\ref{ssafactorization}, the attached gluon can be incorporated into a relevant gauge link in the unpolarized quark distribution. This is only possible when there is no pole contribution from the propagators associated with the attached gluons. After this, the color index for the quark line entering the hard part from the left side will be the same as that on the right side, i.e., $i\to i'$. As a consequence, we can derive the associated soft factor in TMD factorization as indicated in Fig.~\ref{unpolarizefactorization}. Of course, as mentioned in the Introduction, at higher order ${\cal O}(\alpha_s^3)$ and beyond, the factorization for the unpolarized cross section also breaks down. The reason is the same: one can have two poles in the propagators associated with two gluon attachments, which cannot be factorized into the relevant gauge link in the TMD quark distributions; see, e.g., the examples given in Refs.~\cite{Vogelsang:2007jk,Collins:2007nk}. To summarize, the soft gluon radiation contribution to the SSA in the dijet production process in $pp$ collisions cannot be factorized into a spin independent soft factor because of the pole contributions, thus breaking standard TMD factorization. Since the soft factor carries a next-to-leading logarithmic contribution, the factorization for the SSA breaks down at NLL, as explicitly shown by the above example of the $qq'\to qq'$ channel. 
At the leading logarithmic level, we can argue that the relevant soft gluon radiation belongs to the parton distributions and the final state jets. The associated large logarithms can be resummed through the evolution of these parton distributions and jet functions, and factorization is only broken beyond leading logarithmic accuracy. Therefore, the SSA for dijet production in polarized $pp$ collisions may provide a unique opportunity to explore factorization breaking effects. We will study the associated phenomenology in the following section. We note that it is conceivable that the soft gluon radiation in Fig.~\ref{ssafactorization} may be factorized into a ``non-traditional'' soft factor that is unique to the SSA. This might be achieved by setting up different color tensors on the left side and the right side of the diagram. On the right side of the diagram in Fig.~\ref{ssafactorization} we have a standard $2\to 2$ partonic channel with its simple and standard color flow. On the left side, we have a $3\to 2 $ partonic channel, for which recent developments on the soft gluon evolution in $2\to 3$ processes~\cite{Kyrieleis:2005dt,Sjodahl:2008fz} may become useful. The color-contraction of the two sides is expected to give rise to a non-square matrix problem. It appears likely that the structure of one-loop soft gluon radiation in the single-transverse spin cross section may be understood in this way; whether one can generalize this to higher orders in TMD factorization would be an open question. It is worthwhile to further pursue a study along this direction, which, however, is beyond the scope of the current paper. We hope to address this in a future publication. 
\section{Phenomenological Study and Comparison to STAR Data} Within the improved leading logarithmic approximation, we can write the differential cross sections for the spin averaged and spin dependent cases in the correlation limit $q_\perp\ll P_T$ as follows: \begin{eqnarray} \frac{d^4\sigma} {d\Omega d^2q_{\perp}}&=&\sum_{abcd}\int\frac{d^2\vec{b}_\perp}{(2\pi)^2} e^{i\vec{q}_\perp\cdot \vec{b}_\perp}W_{ab\to cd}^{uu}(x_1,x_2,b_\perp) \ ,\\ \frac{d\Delta\sigma(S_\perp)} {d\Omega d^2\vec{q}_\perp} &=&\epsilon^{\alpha\beta}S_\perp^\alpha\sum_{abcd}\int\frac{d^2\vec{b}_\perp}{(2\pi)^2} e^{i\vec{q}_\perp\cdot \vec{b}_\perp}W_{ab\to cd}^{T\beta}(x_1,x_2,b_\perp)\ . \end{eqnarray} Following our arguments on the improved leading logarithmic factorization, the resummation formulas for the above $W^{uu}$ and $W^{T\beta}$ can be written as \begin{eqnarray} W_{ab\to cd}^{T\beta}\big|_{\rm LLA'}&=&\frac{ib_\perp^\beta}{2}x_1 f_a(x_1,\mu_b)x_2 T_{Fb}(x_2,\mu_b) H_{ab\to cd}^{\rm Sivers}(P_T,x_1,x_2) e^{-S_{\rm Sud}^T(Q^2,b_\perp)} \ ,\\ W_{ab\to cd}^{uu}\big|_{\rm LLA'}&=&x_1 f_a(x_1,\mu_b)x_2 f_{b}(x_2,\mu_b) H_{ab\to cd}^{uu}(P_T,x_1,x_2) e^{-S_{\rm Sud}^T(Q^2,b_\perp)} \ , \end{eqnarray} where $\mu_b=b_0/b_\perp$ and $S_{\rm Sud}^T(Q^2,b_\perp)$ was defined in Eq.~(\ref{sudakov-spin}). For the unpolarized case, we could in principle also use a more advanced resummed result as derived in Ref.~\cite{Sun:2015doa}. For the following studies, we have decided to use the same resummation accuracy for both the polarized and unpolarized cross sections. We have checked numerically that this does not introduce any sizable effects for the unpolarized case as far as the spin asymmetries are concerned. In order to compare to the experimental data, we have to make further approximations. 
This is because the phenomenology of the Sivers function (or the Qiu-Sterman matrix element) in the spin-dependent cross section is presently not sophisticated enough to warrant the inclusion of detailed evolution effects in our calculations. Therefore, we will apply estimates of the quark Sivers functions from known experiments without considering their scale dependence. We approximate the parton distributions in the above equations at a fixed lower scale, e.g., $\mu_b\approx \mu_0=2~{\rm GeV}$, where the quark Sivers functions are extracted from SIDIS. We comment on the uncertainties of these extractions below. With these modifications, the resummation formulas can be written as \begin{eqnarray} W_{ab\to cd}^{T\beta}|_{\rm MLLA'}&=&\frac{ib_\perp^\beta}{2}x_1 f_a(x_1,\mu_0)x_2 T_{Fb}(x_2,\mu_0) H_{ab\to cd}^{\rm Sivers}(P_T,x_1,x_2) e^{-S_{\rm Sud}^T(Q^2,b_\perp)} \ ,\\ W_{ab\to cd}^{uu}|_{\rm MLLA'}&=&x_1 f_a(x_1,\mu_0)x_2 f_{b}(x_2,\mu_0) H_{ab\to cd}^{uu}(P_T,x_1,x_2) e^{-S_{\rm Sud}^T(Q^2,b_\perp)} \ , \end{eqnarray} where $\mu_0$ will be varied around $2~\rm GeV$ in our final results. These modifications also allow us to simplify the numerical calculations. Within the above approximation, the $q_\perp$-dependence only comes from the Sudakov form factor (with the additional $b_\perp^\beta$ for the Sivers effect), which also depends on $Q^2$. \subsection{Preliminary results} According to the above results, the Sivers asymmetries depend on three ingredients: (1) the Sivers functions; (2) the associated hard factors for the different partonic channels; (3) the $q_\perp$ dependence from the Sudakov form factor and the related non-perturbative TMDs. In the following, we will briefly go through these three ingredients, and we then address the comparison to the STAR data~\cite{Abelev:2007ii,starpreliminary}. 
\subsubsection{Sivers Functions} The quark Sivers functions have been mainly determined from single transverse spin asymmetries in semi-inclusive hadron production in the deep-inelastic scattering process. The latest global analyses can be found in Refs.~\cite{Cammarota:2020qcw,Bacchetta:2020gko} (see also references therein). The associated Qiu-Sterman matrix elements $T_{F}(x,x)$ have also been extracted from the single spin asymmetry $A_N$ for single inclusive hadron production in $pp$ collisions~\cite{Kouvaris:2006zy}. However, the latter process contains the Collins twist-three fragmentation contributions as well~\cite{Kang:2010zzb,Metz:2012ct,Kanazawa:2014dca,Kanazawa:2015ajw,Gamberg:2017gle}. Therefore, it may not be sufficient to constrain the Sivers functions from inclusive hadron $A_N$ in $pp$ collisions alone. Recently a first attempt was made to perform a global analysis of SSA data from different processes including $A_N$ in $pp$ collisions~\cite{Cammarota:2020qcw}. \begin{figure}[tbp] \centering \includegraphics[width=9cm]{siversfunctionud.eps} \caption{{\it The Sivers functions used in our calculations. We use the extractions from Ref.~\cite{Sun:2013hua} which are obtained from Sivers SSAs in semi-inclusive hadron production in deep-inelastic scattering.}} \label{siversfunction} \end{figure} In the following calculations, we will use the Sivers functions constrained by SSA data in SIDIS. We would like to add a few comments concerning the precision of these extractions. First, the current experimental data on SSAs in SIDIS come from the relatively low $Q^2$ region, where TMD factorization may not be as rigorous as compared to high $Q^2$. This situation will of course be improved by the future Electron-Ion Collider. Until then, we have to keep in mind the systematic uncertainties of the Sivers functions extracted from the existing SIDIS data. 
A crosscheck with other processes, such as Drell-Yan lepton pair production and $W$/$Z$-boson production in $pp$ collisions, will provide crucial information on the Sivers functions. Second, the quark Sivers functions determined from SIDIS have a particular feature -- the up quark and down quark distributions have opposite signs. As a result, one often finds a strong cancelation between these two quark Sivers functions when they are used in a physical process, resulting in significant uncertainties of phenomenological extractions of the functions. All existing fits report at least $10\%$ uncertainty of the valence quark Sivers functions. In addition, the sea quark Sivers functions have even larger uncertainties. We hope that the future EIC will help to better constrain both valence and sea quark Sivers functions. For our numerical results we will use the quark Sivers function from Ref.~\cite{Sun:2013hua} as a baseline, keeping in mind their significant uncertainty just described. The dijet asymmetries studied in this paper are precisely a case where the $u$ and $d$ quark Sivers functions are added (at least for the dominant $qg$ channel) and cancel to a significant extent. This also increases the uncertainty of any predictions that can be made. Given the uncertainty of the $u+d$ combination, and to obtain a conservative order of magnitude estimate, we assume the total $u+d$ distribution to be $\pm 0.2u$, allowing the distribution to have either sign. We also neglect the sea quark Sivers function contributions in the numerical estimate. Because of the dominance of the $qg\to qg$ channel, the individual difference between the $u$ and $d$ quark Sivers functions in our parameterization will not significantly affect the final results. More recently, the STAR collaboration has developed a novel approach to study the SSA in dijet production at RHIC using quark flavor tagging by measuring the charge of jets~\cite{starpreliminary}. 
By tagging the total charge of the final state jet, one can separate $u$-quark jets from $d$-quark jets, which potentially avoids the cancelation between the two distributions to some degree. The SSAs for the flavor tagged dijet production will be directly related to either the $u$-quark Sivers function or the $d$-quark Sivers function, depending on the charge of the triggered jet. We will comment on the comparison to these exciting new data at the end of this section. See also Ref.~\cite{Field:1977fa} for the original proposal of the jet charge by Feynman and Field, Refs.~\cite{Krohn:2012fg,Waalewijn:2012sv} for recent theoretical studies, and Refs.~\cite{Aad:2015cua,Sirunyan:2017tyr} for experimental measurements by ATLAS and CMS. \subsubsection{Hard Factors} We expect that the quark-gluon channel will make the dominant contribution to the SSA in dijet production, especially at forward rapidity where one probes the valence region of the Sivers functions, which in fact is primarily constrained by the SIDIS experiments. It is instructive to examine the hard factor for this channel as an example to study the kinematic dependence of the SSA of our process. \begin{figure}[tbp] \centering \includegraphics[width=9cm]{hardfactor} \caption{{\it Ratio of the hard factors of the Sivers SSA and the spin-averaged cross section as a function of the leading jet's rapidity $y_1$ for typical kinematics of $P_T=6~\rm GeV$ and $y_2=0,\, 1,\,2$, respectively.}} \label{hardfactor} \end{figure} In Fig.~\ref{hardfactor}, we show the ratio of the spin-dependent hard factor and the spin-averaged hard factor for the $qg\to qg$ channel for typical kinematics at RHIC. The leading jet transverse momentum is chosen as $P_T=6~{\rm GeV}$ and the rapidity interval we consider is $-1<y_1<2$. From the figure, we can clearly see that the ratio increases with rapidity. In particular, the asymmetry will be larger in the forward region. 
Another important feature of the hard factor is that it is positive for all kinematics. This means that the asymmetry is dominated by final state interaction effects. This is different from the inclusive hadron $A_N$, which is dominated by initial state interaction effects. The reason is that for dijet production, the final state interaction with the gluon line in the $qg\to qg$ channel cancels the initial state interaction with the initial gluon line. On the other hand, for single inclusive hadron production, the final state interaction with the gluon line does not contribute to the quark fragmentation part in the $qg\to qg$ channel which is the dominant source for hadron production in $pp$ collisions. Therefore, the dijet SSA will have an opposite sign compared to the single inclusive hadron SSA. This is a very interesting feature and will have significant phenomenological consequences for SSAs at RHIC. \subsubsection{Sudakov Effects} It has been known for some time that the Sudakov effects will suppress the single spin asymmetries~\cite{Boer:2001he}. This suppression factor was also included when the dijet SSA was proposed in Ref.~\cite{Boer:2003tx}. Here, we would like to follow up on this issue and discuss the effect on the dijet asymmetries using the updated results for the Sudakov form factor and non-perturbative TMDs. We will continue to focus on the $qg$ process. As mentioned at the beginning of this section, the $q_\perp$-dependence is contained in the Sudakov form factor and the associated non-perturbative TMDs. 
For the discussion here and the numerical results, we separate the $q_\perp$-dependence of the spin-dependent and spin-averaged differential cross sections as \begin{eqnarray} {\cal R}^{(qg)}(q_\perp)&=&\frac{1}{2\pi}\int db_\perp^2 J_0(q_\perp b_\perp)\, e^{-S_{\textrm{pert}}^{(qg)}(Q^2,b_*)-S_{\textrm{NP}}^{(qg)}(Q,b_\perp)} \ ,\\ {\cal R}_T^{(qg)}(q_\perp)&=&\frac{1}{4\pi}\int db_\perp^2 J_1(q_\perp b_\perp)\, e^{-S_{\textrm{pert}}^{(qg)}(Q^2,b_*)-S_{\textrm{NP}}^{T(qg)}(Q,b_\perp)} \ , \end{eqnarray} where $J_{0,1}$ are Bessel functions and where we have applied the $b_*$-prescription in the above equation: $b_*=b_\perp/\sqrt{1+b_\perp^2/b_{{\textrm{max}}}^2}$ with $b_{{\textrm{max}}}=2~\rm GeV^{-1}$. The perturbative part of the Sudakov form factor is the same for both cases, $S_{\textrm{pert}}^{(qg)}(Q^2,b_*)=S_{\textrm{Sud}}^{T(qg)}(Q^2,b_*)$, which was given in Eq.~(\ref{sudakov-spin}). The non-perturbative parts are parametrized as~\cite{Su:2014wpa} \begin{eqnarray} S_{\textrm{NP}}^{(qg)}(Q,b_\perp)&=&\left(C_F+C_A\right)\left[\frac{g_1}{2} b_\perp^2 + \frac{g_2}{2} \ln \frac{Q}{Q_0} \ln \frac{b_\perp}{b_*}\right]\ ,\\ S_{\textrm{NP}}^{T(qg)}(Q,b_\perp)&=&S_{\textrm{NP}}^{(qg)}(Q,b_\perp)-g_s b_\perp^2 \ , \end{eqnarray} with $g_1 = 0.212$, $g_2 = 0.84$, $g_s=0.062$ and $Q_0^2 = 2.4$ GeV$^2$. \begin{figure}[tbp] \centering \includegraphics[width=9cm]{RToverR} \caption{{\it The ratio of ${\cal R}_T(Q,q_\perp)$ and ${\cal R}(Q,q_\perp)$ as a function of $q_\perp$ for typical $Q$ values for dijet production at RHIC.}} \label{sudakovqt} \end{figure} As an example, we plot in Fig.~\ref{sudakovqt} the ratio ${\cal R}_T/{\cal R}$ as a function of $q_\perp$ for typical values of $Q$ relevant for RHIC kinematics. Clearly, the asymmetry increases with $q_\perp$. In contrast to previous examples (SIDIS or Drell-Yan lepton pair production), the curves in the plot do not reach the maximum of the asymmetry in the $q_\perp$ region relevant for TMD physics. 
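The ratio ${\cal R}_T^{(qg)}/{\cal R}^{(qg)}$ defined above is straightforward to evaluate numerically. The Python sketch below implements the $b_*$ prescription and the quoted non-perturbative parameters; since the full perturbative Sudakov of Eq.~(\ref{sudakov-spin}) is not reproduced in this section, a simple leading-logarithmic stand-in with a frozen coupling is used, so the output is only qualitative. The value of $\alpha_s$, the units assigned to $b_{\rm max}$, and the integration cutoffs are illustrative assumptions, not the paper's actual choices:

```python
# Qualitative sketch of R_T(q_T)/R(q_T) for the qg channel.
# ASSUMPTIONS: S_pert below is a simple LL stand-in for Eq. (sudakov-spin),
# alpha_s is frozen, and b_max is taken in GeV^{-1}; g1, g2, g_s, Q0 are
# the non-perturbative values quoted in the text.
import numpy as np
from scipy.integrate import quad
from scipy.special import j0, j1

CF, CA = 4.0 / 3.0, 3.0
g1, g2, gs = 0.212, 0.84, 0.062
Q0 = np.sqrt(2.4)                  # GeV, from Q0^2 = 2.4 GeV^2
b_max = 2.0                        # GeV^{-1} (units assumed)
b0 = 2.0 * np.exp(-0.5772156649)   # b_0 = 2 exp(-gamma_E)
alpha_s = 0.3                      # frozen coupling (assumption)

def b_star(b):
    """b_* prescription: b_* -> b at small b, saturates at b_max."""
    return b / np.sqrt(1.0 + b**2 / b_max**2)

def S_pert(Q, b):
    """Illustrative LL Sudakov ~ (CF+CA)(alpha_s/4pi) ln^2(Q^2/mu_b^2)."""
    mu_b = b0 / b_star(b)
    if mu_b >= Q:
        return 0.0
    return (CF + CA) * alpha_s / (4.0 * np.pi) * np.log(Q**2 / mu_b**2) ** 2

def S_NP(Q, b, spin):
    """Non-perturbative Sudakov; the spin-dependent one carries -g_s b^2."""
    s = (CF + CA) * (0.5 * g1 * b**2
                     + 0.5 * g2 * np.log(Q / Q0) * np.log(b / b_star(b)))
    return s - gs * b**2 if spin else s

def R(Q, qT, spin=False):
    """R (J0 transform) or R_T (J1 transform); note db_perp^2 = 2 b db."""
    Jn, pref = (j1, 1.0 / (4.0 * np.pi)) if spin else (j0, 1.0 / (2.0 * np.pi))
    f = lambda b: 2.0 * b * Jn(qT * b) * np.exp(-S_pert(Q, b) - S_NP(Q, b, spin))
    val, _ = quad(f, 1e-6, 15.0, limit=300)
    return pref * val

# The ratio grows with q_T in the TMD region, as in Fig. (RToverR):
print([round(R(12.0, qT, spin=True) / R(12.0, qT), 4) for qT in (0.5, 1.0, 2.0)])
```

Replacing `S_pert` by the actual expression from Eq.~(\ref{sudakov-spin}) with a running coupling would be needed to reproduce the curves of the figure quantitatively.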
The reason is that for the $qg$ channel, the Sudakov form factor leads to a strong $q_\perp$-broadening effect. In particular, this is due to the double logarithms associated with the incoming gluon distribution, which push the peak of the asymmetry to higher values of $q_\perp$, beyond the TMD domain. \subsection{Comparison to the STAR Data from 2007} As suggested in Ref.~\cite{Boer:2003tx}, the dijet spin asymmetry can be measured through the azimuthal angular distribution between the two jets. Because the Sivers asymmetry leads to a preferred direction of the total transverse momentum of the two final state jets, the angular distribution will be shifted toward the direction related to the Sivers asymmetry. The magnitude of the asymmetry will depend on the relative angle between the leading jet and the polarization vector $\vec{S}_\perp$. In particular, as mentioned in Ref.~\cite{Boer:2003tx}, the SSA for dijet production is at its maximum value when the jet direction is parallel or anti-parallel to the spin vector $\vec{S}_\perp$ of the proton. However, the asymmetry will vanish if the leading jet is perpendicular to the spin $\vec{S}_\perp$. It can be shown that this introduces an additional factor of $|\cos(\phi_j-\phi_S)|$ for the dijet SSA. Therefore, when we compare to the STAR data, we need to include an average of this factor over the azimuthal angle between the leading jet and the polarization vector: $\frac{1}{\pi}\int_0^\pi d\phi |\cos(\phi)|={2\over \pi}$. As mentioned above, for the dijet SSA we take the $u+d$ Sivers distribution to be $20\%$ of the extracted $u$-quark Sivers function. From the existing experimental data, we can determine neither the size nor the sign of the total contribution from the $u$ and $d$-quark Sivers functions. Therefore, to estimate the total contributions to the dijet SSA, we will use both signs, in order to obtain an idea of the uncertainties associated with these determinations. 
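The azimuthal average quoted here, $\frac{1}{\pi}\int_0^\pi d\phi\,|\cos(\phi)|=\frac{2}{\pi}$, is elementary ($\int_0^\pi|\cos\phi|\,d\phi=2$); a quick numerical check in Python, purely for illustration:

```python
# Check that the average of |cos(phi)| over [0, pi] equals 2/pi.
import numpy as np
from scipy.integrate import quad

integral, _ = quad(lambda phi: abs(np.cos(phi)), 0.0, np.pi)
average = integral / np.pi
print(average, 2.0 / np.pi)  # both ~ 0.6366
```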
The jet kinematics of the data published by the STAR experiment in 2007 is $P_T>4~\rm GeV$ and rapidity in the range of $-1<y_{1,2}<2$. In order to compare to the experimental data, we integrate out the leading jet momentum and the relative rapidity between the two jets, but we keep the total rapidity $y=y_1+y_2$. \begin{figure}[tbp] \centering \includegraphics[width=9cm]{star2007} \caption{{\it The SSA in dijet production at RHIC as a function of the total rapidity $y=y_1+y_2$ of the two jets for the kinematics of the STAR measurement in 2007: $P_T>4~$GeV and $-1<y_{1,2}<2$. The upper bound corresponds to $T_{Fu}+T_{Fd}$ with $+20\%$ of the fitted value of $T_{Fu}$ of Ref.~\cite{Sun:2013hua}, whereas the lower bound corresponds to $-20\%$.}} \label{star2007} \end{figure} Using the differential cross sections for the spin-averaged and spin-dependent cases in the MLLA given above, we obtain the following expression for the single transverse spin asymmetry which can be compared to the STAR measurement: \begin{eqnarray} A_N(y)=\frac{2}{\pi}\frac{\sum_{acd}\sum_{b=u,d}\int d^2P_Tdy_1 \,x_1f_a(x_1,\mu_0)\,x_2T_{Fb}(x_2,\mu_0)\,H_{ab\to cd}^{\rm Sivers}(P_T,x_1,x_2)\, {w}_T(Q)}{\sum_{abcd} \int d^2P_Tdy_1 \,x_1f_a(x_1,\mu_0)\,x_2f_{b}(x_2,\mu_0)\,H_{ab\to cd}^{uu}(P_T,x_1,x_2) \,{w}(Q)} \ ,\ \ \ \ \label{anystar2007} \end{eqnarray} where $Q^2=\hat s=x_1x_2S_{pp}$ and $w(Q),w_T(Q)$ are weights for the total transverse momentum integral, \begin{eqnarray} w_T(Q)&=&\int_0^{Q/6} dq_\perp q_\perp {\cal R}_T(Q,q_\perp) \ , \\ w(Q)&=&\int_0^{Q/6} dq_\perp q_\perp {\cal R}(Q,q_\perp) \ . \end{eqnarray} The upper limits of the above integrals correspond to the TMD region where $q_\perp\ll P_T$. Notice that $Q\ge 2P_T$ for all kinematics. With our assumptions on the $u$ and $d$ Sivers functions, we calculate the SSA in Eq.~(\ref{anystar2007}), and find that the asymmetry is of order $10^{-4}$ for the entire rapidity range shown, see, Fig.~\ref{star2007}. 
We note that the central values of the $u$-quark and $d$-quark Sivers functions from the fit of Ref.~\cite{Sun:2013hua} lead to asymmetries smaller than those shown in Fig.~\ref{star2007}. Let us summarize the main differences with respect to the previous calculation in Ref.~\cite{Bomhof:2007su}: First, the Sivers functions determined from SIDIS are different. Second, we have included the relative suppression from Sudakov effects. If we integrate over the entire rapidity region, the asymmetry is about $1.7\times 10^{-4}$. It is interesting to note that the STAR measurement in 2007 found that the SSA for dijet production is consistent with zero, with uncertainties of the order of $10^{-2}$. According to our calculation, a finite asymmetry could be observed if the uncertainty is reduced by more than one order of magnitude. Of course, this also depends on the size of the total up and down quark Sivers function. If they completely cancel, then the asymmetries will depend on the sea quark Sivers functions, which are known to be not as large as the valence ones. \subsection{The Flavor Tagged Dijet Asymmetry} In the last few years, the STAR collaboration has investigated a novel method to explore the SSA in dijet production. They considered the jet charge to tag the up or down quark flavor of the jet. An up (down) quark jet has positive (negative) jet charge, whereas a gluon jet leads to a neutral jet charge. From the preliminary analysis, the efficiency of this separation is reasonably good. A similar idea was also proposed in Ref.~\cite{Kang:2020fka}, which further suggests improvements by using jet flavor information in the jet charge definition. \begin{figure}[tbp] \centering \includegraphics[width=9cm]{SSAud} \caption{{\it The SSA in dijet production for the $qg\to qg$ channel only. 
We show the result separately for up and down quarks.}} \label{flavortag} \end{figure} In order to compare to the experimental data, we take into account the $u$ and $d$ quark contributions separately, just for the $qg\to qg$ channel. For up quarks, we have \begin{eqnarray} A_N^{{\textrm{(up)}}}(y)=\frac{2}{\pi}\frac{\int d^2P_Tdy_1\, x_1f_g(x_1,\mu_0)\,x_2T_{Fu}(x_2,\mu_0)\,H_{gq\to gq}^{\rm Sivers}(P_T,x_1,x_2)\, {w}_T(Q)}{ \int d^2P_Tdy_1 \left[ x_1f_g(x_1,\mu_0)\,x_2f_{u}(x_2,\mu_0)\,H_{gq\to gq}^{uu}(P_T,x_1,x_2)+(x_1\leftrightarrow x_2)\right] {w}(Q)} \ . \ \ \ \ \label{asymmetryup} \end{eqnarray} An analogous expression holds for the $d$-quark Sivers contribution. In Fig.~\ref{flavortag}, we plot the two asymmetries as functions of $y=y_1+y_2$. From this plot, we can see that the asymmetries are of the order of $10^{-3}$. If we integrate over the entire rapidity range, we obtain an asymmetry of $2.2\times 10^{-3}$ and $-3.7\times 10^{-3}$ for the $ug\to ug$ and $dg\to dg$ channels, respectively. Our results compare reasonably well to the preliminary STAR results~\footnote{An estimate from the preliminary STAR analysis in Ref.~\cite{starpreliminary} suggests that the observed asymmetry is around $1.5\times 10^{-3}$ and $-1.8\times 10^{-3}$ for positively and negatively tagged charged jets, respectively. We hope that the spin asymmetry defined in Eq.~(\ref{asymmetryup}) will be reported in the new analysis as well.}. Compared to the results shown in Fig.~\ref{star2007}, we find that the asymmetries are much larger for the flavor tagged case. This is not only due to the fact that for the flavor tagged case no cancelation between the $u$ and $d$-quark Sivers functions occurs, but also because the denominator of the unpolarized cross section only contains the $qg\to qg$ channel. By tagging the (quark) flavor of the final state jets, we exclude a major background contribution from $gg\to gg$, which only leads to charge neutral jets in the final state. 
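The jet-charge tagging discussed above can be illustrated with the momentum-weighted jet charge of Refs.~\cite{Field:1977fa,Krohn:2012fg,Waalewijn:2012sv}, $Q_\kappa=\sum_{i\in{\rm jet}} q_i\,(p_{T,i}/p_{T,{\rm jet}})^\kappa$. The Python sketch below is purely illustrative: the constituent lists, the choice $\kappa=0.5$, and the scalar-sum proxy for $p_{T,{\rm jet}}$ are invented assumptions, not STAR's actual algorithm:

```python
# Illustrative momentum-weighted jet charge: Q = sum_i q_i (pT_i/pT_jet)^kappa.
# ASSUMPTIONS: pT_jet is approximated by the scalar sum of constituent pT's,
# and the constituent lists below are invented toy examples.

def jet_charge(constituents, kappa=0.5):
    """constituents: list of (electric charge, pT in GeV) pairs."""
    pt_jet = sum(pt for _, pt in constituents)
    return sum(q * (pt / pt_jet) ** kappa for q, pt in constituents)

# Toy "u-quark-like" jet: hard pi+ plus softer neutrals and a pi-.
u_like = [(+1, 12.0), (0, 5.0), (-1, 2.0), (0, 1.0)]
# Toy "d-quark-like" jet: the charge-conjugate configuration.
d_like = [(-1, 12.0), (0, 5.0), (+1, 2.0), (0, 1.0)]

print(jet_charge(u_like) > 0, jet_charge(d_like) < 0)  # True True
```

An up-quark-like jet then yields a positive jet charge and a down-quark-like jet a negative one, which is the separation exploited in the flavor-tagged measurement.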
The asymmetries shown in Fig.~\ref{flavortag} assume $100\%$ efficiency of the tagging in the jet sample. To compare to the STAR data properly, we need to consider a realistic tagging efficiency, which will suppress the asymmetries somewhat. \section{Summary and Discussion} We have investigated the single transverse spin asymmetry in dijet correlations in hadronic collisions. The total transverse momentum of the dijet in the final state is correlated with the incoming nucleon's polarization vector. We have focused on the SSA contribution from the quark Sivers function of the polarized nucleon where initial and final state interaction effects play an important role. A detailed analysis at one-loop order has been carried out for the contribution from soft gluon emissions in order to understand the factorization properties. It was found that the associated TMD factorization is valid at the level of leading double logarithms and single logarithms from the TMD quark distribution and those collinear to the jet. However, additional soft gluon contributions to the single logarithms do not show the same pattern as in the unpolarized case investigated in Ref.~\cite{Sun:2015doa} and hence cannot be incorporated in the TMD factorization formula in terms of the same spin independent soft factor. This indicates that the standard TMD factorization is broken at the single logarithmic level for the SSA in dijet correlations in hadronic collisions. We believe that this issue will deserve further attention by investigating whether a consistent ``non-traditional'' soft factor for the single transverse spin asymmetry could be defined. We have further presented phenomenological studies for the SSA in dijet production at RHIC based on the LLA$'$ approximation, for which one improves the standard LLA resummation formula by ``universal'' subleading logarithmic terms associated with the initial partons and the final state jets. 
Using the quark Sivers functions constrained by SSAs in SIDIS, we have found that the leading double logarithmic resummation effects suppress the asymmetry for the kinematics relevant for the measurements by the STAR collaboration at RHIC, making them broadly consistent with experimental results. We also presented results for the flavor (charge) tagged dijet case, where the asymmetries are much larger than when the jet flavor is not tagged. A detailed comparison with the experimental data will be helpful to understand factorization breaking effects. We finally note that in our analysis we have only considered the perturbative part of the factorization breaking effects. The non-perturbative TMD quark distribution could also contribute to such effects. Our numerical results assume that this part is the same as for SIDIS. To address this issue, more detailed comparisons to experimental measurements and further theoretical efforts are necessary. In any case, a more refined phenomenology will be warranted once there is a better general understanding of factorization breaking effects, along with more detailed experimental information. \vspace{2em} \section*{Acknowledgement} This work is partially supported by the U.S. Department of Energy, Office of Science, Office of Nuclear Physics under contract number DE-AC02-05CH11231. This study was supported by Deutsche Forschungsgemeinschaft (DFG) through the Research Unit FOR 2926 (project number 40824754).
{ "redpajama_set_name": "RedPajamaArXiv" }
655
I recently did a custom order for a little girl's ABC birthday party. I just love how it turned out!! Every little girl needs an Alphabet dress! What would a birthday girl's party be without sweet little bows to match? Hello, my dear blog readers! I have not fallen off the face of the Earth! I am currently 32 weeks pregnant with baby number four. We are expecting a sweet little girl in April! She's currently sporting my coffee cup, but to be honest, I can't wait to put a little frill on her! While awaiting her arrival, I have been driven into serious nesting mode. Have I mentioned that nesting is not totally compatible with my hips?? Because it's not. At all. My hips, apparently, are very angry with me for doing things like walking. Even chiropractic care is no longer making a difference. So to pass the time, I've started sewing again. to go on them, too... like below. I am all about finding a bargain for my kid's clothing (especially things like play clothes). I found out about Kid's Closet Connection a few years ago and I've been hooked ever since! I actually just signed up to consign some of my daughter's old clothes this year since baby number three is a BOY! :) This saves me the hassle of sitting outside for hours during a yard sale! I just have to tag and drop off! Super sweet! Anyway, these consignment sales happen twice a year and I'm already giddy with anticipation for this spring sale. :) Check them out folks!! But since it's not quite the weekend, I'm busy working on a custom. Zebra stripes, ruffles, hot pink, and Minnie Mouse! What's not to love?! Who Knew Rolled Flowers Were So Fun??? I rarely take time to do new crafts because my wonderful customers keep me busy! :) But I'm really having a blast creating new projects just so I can use rolled fabric flowers (see my earlier post here). I feel like a craft explosion is about to happen in my house! There are so many ideas I've had locked away. 
I've been waiting on time to pursue them and well, time is never just going to happen. I've been in a crafting rut, so to speak, since I started getting busy over a year ago. But I've realized that creating NEW things makes it fun and much less "work." So here's to less work and some new, exciting projects!!! I have had this fabric in my stash for over a year, just waiting for something to be made out of it. It was a leftover piece from a custom order and I could never figure out what to do with the 15 inches I had left. But the rolled flowers kind of tie the whole thing together and I absolutely love it! I have a ton of fabric scraps. I do sell them occasionally but most of the time, it looks like a fabric fairy threw up in my scrap box. I've been searching all over the place for a way to get more hours in the day to DO something with my scraps. Surprisingly, no good tutorial in the world could help me stretch time by even an hour..... So what's a girl to do with a ton of fabric scraps and hardly any free time? Today that little bit of time turned into a rolled fabric flower. It took me ten minutes to make one. Well, I did a practice one first, unraveled it because it sucked, and then tried again with prettier fabric! This whole entire process was made all the less stressful because of the fabulous tutorial I found over at My Sparkle! She gives great detailed photos and concise directions. It made it super easy, even for a timeless, stressed momma like me!!! Get By With A Little Help From Your Team! No and I'm not talking about your football team... although I am watching! ;) I'm talking about the EtsyKids team! It's been forever and a day since I've updated my blog. I'm so sorry, loyal followers, that you've been so neglected! I thought that a good kick off to the reposting would be a tribute to a few EtsyKids team members! What is the EtsyKids team, you may ask? They are a fantabulous group of creative hand makers who make items that are geared towards... 
come on! You can guess this!! Kids! ;) If you want to see all kinds of amazing items, be sure to click on the link to see the awesome shopping guides! For all you bloggies, check out the EtsyKids Blog, too! This particular post has all team member items but it's geared towards -YOU, moms! Let me tell you why!! Because I personally would not get through the day without a cup of Joe.. or whatever else you might decide to put in your so-glad-I-thought-of-it-ahead-of-time concealed container.. Because it's quiet and mommy's headache is bad enough.. And just because that tired mommy needs her rest in order to have enough patience for tomorrow..
# Construct a function

I wonder whether there is a function $f\colon\Bbb R\to\Bbb R$ with the following property: for every two real numbers $\alpha,\beta$ with $\alpha\lt\beta$, $$\{f(x):x\in(\alpha,\beta)\}=\Bbb R$$

I can't say such a function does not exist, nor can I construct an example.

Thanks a lot!

Comments:
- How is this a function from $\mathbb{R} \to \mathbb{R}$? $x$ seems to take on values in $\mathbb{R^2}$. – Christopher Liu
- $x$ takes on values between $\alpha$ and $\beta$ – user137794
- Oops, thanks. My mistake – Christopher Liu

In other words: $f$ takes as its value every real number somewhere within every open interval $(a,b)$.
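Such a function does exist if one assumes the axiom of choice; the following sketch is ours, not from the original thread, and uses the standard coset construction:

```latex
\text{Define } x \sim y \iff x - y \in \mathbb{Q}.
\text{ Each coset } x + \mathbb{Q} \text{ is dense in } \mathbb{R},
\text{ and } |\mathbb{R}/\mathbb{Q}| = 2^{\aleph_{0}},
\text{ so (with choice) there is a bijection } g\colon \mathbb{R}/\mathbb{Q} \to \mathbb{R}.
\text{ Set } f(x) = g([x]).
\text{ Every interval } (\alpha,\beta) \text{ meets every coset by density, hence }
\{\, f(x) : x \in (\alpha,\beta) \,\} = g(\mathbb{R}/\mathbb{Q}) = \mathbb{R}.
```

Any function with this property is necessarily non-measurable, which is why an explicit formula is hard to write down.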
# Using JavaScript in Marked

Marked uses a lot of JavaScript to provide the features it offers in the preview. For this reason, complications can arise when scripts are included within the body of the document.

## External scripts

You can include some external scripts using a "CSS Header:" metadata line at the top of your document. These scripts will be inserted not in the header, though, but in the footer, after jQuery and Marked's scripts have already loaded. This is ideal in most cases. You may still experience unexpected behavior, as Marked can't compensate for every possible scripting conflict. DOM changes can be especially problematic.

    CSS Header: <script src="file.js"></script>

## Inline includes

There are many applications for having JavaScript appear in the body of a document, such as graph generators or other Canvas tools. Depending on configuration settings and the processor you're using, these are often mangled. The solution is to put your script and extra DOM elements into an external file and use Marked's syntax for "raw" include files (<<{file}). These files are not processed in any way; their contents are added into the file after the rest of the processing is complete.

    Markdown text.

    <<{file.html}

    More Markdown text...
Q: Can I search for something on Google directly from Spotlight?

Whenever I have to search for something, I have to open Safari and then type it in. Is there any way to search directly — type it in Spotlight, press something, and Safari will open automatically and search whatever you typed in Spotlight?

A: Not sure if the feature exists for Spotlight. However, Alfred is a replacement for Spotlight which does what you need and much more.

A: Haven't tested, but if you install Flashlight there is a plugin for Google search. Flashlight adds over 120 plugins to OS X's Spotlight search box: check the weather, search the web, send an iMessage, find an emoji, and more. For OS X Yosemite. Completely open-source. No crazy installer. Uninstall with one click.

A: Type it in Spotlight, and hit ⌘B. (I found this by mistake!)

A: Spotlight can search your Safari browsing history for webpages you have previously visited, and it can search Wikipedia online for information, but without the above-mentioned plugins you can't search Google directly.
Parker: Obama may have evolved thinking on gay marriage Kathleen Parker This past week's news cycle has produced two narratives: One, Barack Obama is an evolutionary, 21st-century hero who supports equality for all. Two, Mitt Romney is a gay-bashing bully mired in the previous century, who also supports a war on women and, oh yeah, hates dogs. Let's parse, shall we? Obama's Big Announcement that he supports gay marriage came about for the following reasons: (a) He had no choice after Vice President Joe Biden said on "Meet the Press" that he was fine with same-sex marriage; (b) one in six of Obama's campaign bundlers, those who raise big bucks, is openly gay; (c) Obama risks nothing except the votes of those who wouldn't have voted for him anyway. And last, but certainly not least, because supporting equal treatment of all Americans under all legal contracts, including marriage with all its attendant rights and responsibilities, is the right thing to do. In this respect, Obama may have evolved in his thinking, as millions of other Americans have, including yours truly. Indeed, polls show the country is about evenly divided on the question, with younger Americans overwhelmingly supportive of same-sex marriage. In another generation, this conversation likely will be irrelevant. Meanwhile. Can we stop hyperventilating long enough to not be ridiculous? Yes, Obama's statement carries symbolic weight but it changes nothing. In fact, by also saying he thinks the issue should remain with the states, he is both taking a conservative, states' rights position and passing the constitutional buck. As Joe Scarborough pointed out, if the president believes that equal marriage rights are constitutionally protected, then he has a duty to fight for those rights rather than hand off the issue to the states. Gays and lesbians won't fare well on that frontier given that 30 states already have passed prohibitive amendments to their state constitutions. 
Thus, Obama's announcement, while political and pragmatic, was fundamentally meaningless. You'd never know it by the media's response, of course. As Tim Stanley wrote in Britain's The Telegraph, everything the first African-American president says or does is breathtakingly historic: "The Prez could go seal-clubbing and much of the media would see it as a new epoch for winter sports. 'Barack Obama Becomes the First President to Kill Six Seals in Under One Minute,' The New York Times would proudly report, while Twitter would be all abuzz with how hot he looks in snow shoes." Not so much poor Mitt. While Obama was being feted at a $40,000-a-plate din-din at George Clooney's house, Romney was being roasted for a high school bullying "prank" nearly 50 years ago. A prank that made the top half of The Washington Post's front page Friday - and the details of which are in much dispute, especially from the family of the alleged victim, who, alas, isn't alive to defend his version of events. Briefly, as told by a handful of boarding-school classmates, Romney led a group of boys who tackled and held down John Lauber and cut his longish, blond hair. Romney allegedly didn't like Lauber's look and decided to fix it. The subtext is that Lauber may have been gay and that, therefore, Romney is a not-so-closeted gay hater. For those to the premises more recently arrived, a quick primer on 1965, when this occurred. Nobody knew who was or wasn't "gay," a word that wasn't yet in popular circulation as a noun and generally meant "merry." Homosexuality wasn't on most high school kids' radar, period. If anything, Romney may not have liked Lauber's "hippie" locks, which is the more likely case given the era. Whatever. Lauber obviously was a nonconformist in an environment that valued conformity, and Romney and his crew were indeed bullies. They shouldn't have done it, but boarding schools until recently were not widely known as incubators of sensitivity. 
Today, of course, prep schools feature weekly diversity seminars and offer staff psychiatrists for the noncompliant. But five decades later, this is a campaign issue in a presidential election? Lauber's family doesn't think it should be - and they may be the only people who count in this particular debate. The real story, meanwhile, is the one that keeps getting pushed aside, which is that the country is going bankrupt and that 32 percent of young people (18-29) are underemployed. But as long as we're talking about things like gay marriage and contraception - all forced to the fore by Democrats, by the way - Americans can avert their gaze from the evolving economic collapse, which will be anything but gay. KATHLEEN PARKER's column is distributed by The Washington Post Writers Group, 1150 15th St., NW, Washington, D.C., 20071. Email: kathleenparker@washpost.com.
Q: Avoid narrowing warning on C-style array creation

In C++11, the following produces a narrowing warning:

    int arr[3] = { 0.0f, 1.0f, 2.0f };

Based on the following post, I was able to create

    template <class T, class... Ts>
    std::array<T, sizeof...(Ts)> make_non_narrowed_arr(Ts&&... ts) {
        return {{ static_cast<T>(ts)... }};
    }

    auto arr = make_non_narrowed_arr<int>(0.0f, 1.0f, 2.0f);

But I would like the return type to be a classic C array, which I was not able to achieve. So far I have tried these, but none of them compile (a function cannot return a C array by value):

    template <class T, class... Ts>
    T[sizeof...(Ts)] make_non_narrowed_arr(Ts&&... ts) { ... }

    template <class T, std::size_t N, class... Ts>
    T[N] make_non_narrowed_arr(Ts&&... ts) { ... }

    template <class T, class... Ts>
    T* make_non_narrowed_arr(Ts&&... ts) { ... }

I want a warning-free program without suppressing the warnings in the compiler, so that post is not a solution for me either. And I can't switch to std::array for this specific usage, since the array needs to be passed afterward to a C routine that accepts only T*.

EDIT: Here is a sample (not the exact code) of what I have. The problem is that, based on the image type and channel count, I call an IPP routine, and I provide a generic float* as input to the wrapper routine.

    void threshold(Image img, Image dst, float* th) {
        // Sanity check on image, etc.
        // Perform the call to IPP
        IppStatus sts = (IppStatus)0;
        switch (img.nbChan) {
        case 1: {
            switch (img.type) {
            case BYTE: {
                Ipp8u arr[1] = { th[0] };
                sts = ippiThreshold_GTVal_8u_C1R(img.data, img.pitch, dst.data, dst.pitch, arr);
                break;
            }
            case FLOAT: {
                Ipp32f arr[1] = { th[0] };
                sts = ippiThreshold_GTVal_32f_C1R(img.data, img.pitch, dst.data, dst.pitch, arr);
                break;
            }
            }
            break;
        }
        case 3: {
            switch (img.type) {
            case BYTE: {
                Ipp8u arr[3] = { th[0], th[1], th[2] };
                sts = ippiThreshold_GTVal_8u_C3R(img.data, img.pitch, dst.data, dst.pitch, arr);
                break;
            }
            case FLOAT: {
                Ipp32f arr[3] = { th[0], th[1], th[2] };
                sts = ippiThreshold_GTVal_32f_C3R(img.data, img.pitch, dst.data, dst.pitch, arr);
                break;
            }
            }
            break;
        }
        }
    }

A: Just use std::array. You assert that you're unable to do so as a result of needing to interface with C-style APIs, but that doesn't mean you need to use C-style arrays in your code.

    // C header
    void do_something(void* arr, size_t size);

    // C++ code
    auto arr = make_non_narrowed_arr<int>(0.f, 1.f, 2.f);

    // If size is meant to be the memory footprint
    do_something(arr.data(), sizeof(arr));

    // If size is meant to be the number of elements
    do_something(arr.data(), arr.size());

There is no need to use a C-style array for this code. If you're worried about excess boilerplate code, just write a wrapper:

    template <typename T, size_t N>
    void cpp::do_something(std::array<T, N> const& arr) {
        do_something(arr.data(), arr.size() * sizeof(T));
    }

A: Finally, after all this discussion, I revisited my problem and tried to make something simple that fills my need, even though it is not a fully general function:

    template <typename T, std::size_t N>
    void no_narrow(float* const in, T (&out)[N]) {
        for (std::size_t i = 0; i < N; i++)
            out[i] = static_cast<T>(in[i]);
    }

so that I call it:

    Ipp8u arr[3];
    no_narrow(th, arr);
\section{Equivariant cohomology by two Killing vector fields} First, let us review the definition of equivariant cohomology by a single Killing vector field. Let $\Omega^{*}(M)$ be the space of smooth differential forms on $M$, so that the de Rham complex is $(\Omega^{*}(M),d)$. Let $L_{X_{M}}$ be the Lie derivative of $X_{M}$ on $\Omega^{*}(M)$ and $i_{X_{M}}$ the interior multiplication induced by contraction with $X_{M}$.\par Set $$d_{X}=d+i_{X_{M}};$$ then $d_{X}^{2}=L_{X_{M}}$ by the Cartan formula $$L_{X_{M}}=[d,i_{X_{M}}].$$\par Let $$\Omega_{X}^{*}(M)=\{\omega\in\Omega^{*}(M):L_{X_{M}}\omega=0\}$$ be the space of smooth $X_{M}$-invariant forms on $M$. Then $d_{X}^{2}\omega=0$ when $\omega\in\Omega_{X}^{*}(M)$, so $(\Omega_{X}^{*}(M),d_{X})$ is a complex. The corresponding cohomology group $$H^{*}_{X}(M)=\frac{\rm{Ker}d_{X}|_{\Omega_{X}^{*}(M)}}{\rm{Im}d_{X}|_{\Omega_{X}^{*}(M)}}$$ is called the equivariant cohomology associated with $X$. If a form $\omega$ satisfies $d_{X}\omega=0$, then $\omega$ is called a $d_{X}$-closed form.\par We now define a new complex from two Killing vector fields. If $X,Y\in\mathfrak{g}$, let $X_{M},Y_{M}$ be the corresponding smooth vector fields on $M$.\par The operator $$L_{X_{M}}+\sqrt{-1}L_{Y_{M}}$$ acts on $\Omega^{*}(M)\otimes_{\mathbb{R}}\mathbb{C}$.\par Set $$i_{X_{M}+\sqrt{-1}Y_{M}}\doteq i_{X_{M}}+\sqrt{-1}i_{Y_{M}},$$ the interior multiplication induced by contraction with $X_{M}+\sqrt{-1}Y_{M}$.
It is also an operator on $\Omega^{*}(M)\otimes_{\mathbb{R}}\mathbb{C}$.\par Set $$d_{X+\sqrt{-1}Y}=d+i_{X_{M}+\sqrt{-1}Y_{M}}.$$ \begin{lemma} If $X,Y\in\mathfrak{g}$, let $X_{M},Y_{M}$ be the corresponding smooth vector fields on $M$; then $$d_{X+\sqrt{-1}Y}^{2}=L_{X_{M}}+\sqrt{-1}L_{Y_{M}}.$$ \end{lemma} \begin{proof} \begin{align*} (d+i_{X_{M}+\sqrt{-1}Y_{M}})^{2} &=(d+i_{X_{M}}+\sqrt{-1}i_{Y_{M}})(d+i_{X_{M}}+\sqrt{-1}i_{Y_{M}})\\ &=d^{2}+di_{X_{M}}+i_{X_{M}}d+\sqrt{-1}di_{Y_{M}}+\sqrt{-1}i_{Y_{M}}d+(i_{X_{M}}+\sqrt{-1}i_{Y_{M}})^{2}\\ &=L_{X_{M}}+\sqrt{-1}L_{Y_{M}} \end{align*} \end{proof} Let $$\Omega_{X_{M}+\sqrt{-1}Y_{M}}^{*}(M)=\{\omega\in\Omega^{*}(M)\otimes_{\mathbb{R}}\mathbb{C}:(L_{X_{M}}+\sqrt{-1}L_{Y_{M}})\omega=0\}$$ be the space of smooth $(X_{M}+\sqrt{-1}Y_{M})$-invariant forms on $M$. Then we get a complex $(\Omega_{X_{M}+\sqrt{-1}Y_{M}}^{*}(M), d_{X+\sqrt{-1}Y})$. We call a form $\omega$ $d_{X+\sqrt{-1}Y}$-closed if $d_{X+\sqrt{-1}Y}\omega=0$ (this complex was first discussed by Bismut; see [3]). The corresponding cohomology group $$H^{*}_{X+\sqrt{-1}Y}(M)=\frac{\rm{Ker}d_{X+\sqrt{-1}Y}|_{\Omega_{X+\sqrt{-1}Y}^{*}(M)}}{\rm{Im}d_{X+\sqrt{-1}Y}|_{\Omega_{X+\sqrt{-1}Y}^{*}(M)}}$$ is called the equivariant cohomology associated with $X+\sqrt{-1}Y$.
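As a consistency check (a remark we add, not part of the original argument), setting $Y=0$ recovers the classical complex from the beginning of this section:

```latex
d_{X+\sqrt{-1}\,0}=d+i_{X_{M}}=d_{X},\qquad
d_{X+\sqrt{-1}\,0}^{2}=L_{X_{M}},\qquad
\Omega_{X_{M}+\sqrt{-1}\,0}^{*}(M)=\Omega_{X}^{*}(M)\otimes_{\mathbb{R}}\mathbb{C},
```

so in this degenerate case the complex reduces to the complexification of $(\Omega_{X}^{*}(M),d_{X})$.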
\section{The set of zero points} \begin{lemma} If $X,Y\in\mathfrak{g}$, let $X_{M},Y_{M}$ be the corresponding smooth vector fields on $M$ and $X^{'},Y^{'}$ the 1-forms on $M$ dual to $X_{M},Y_{M}$ under the metric $g^{TM}$; then $$L_{X_{M}}Y^{'}+L_{Y_{M}}X^{'}=0$$ \end{lemma} \begin{proof} Because $$(L_{X_{M}}\omega)(Z)=X_{M}(\omega(Z))-\omega([X_{M},Z]),$$ where $Z\in\Gamma(TM)$, we get $$(L_{X_{M}}Y^{'})(Z)=X_{M}<Y_{M},Z>-<[X_{M},Z],Y_{M}>$$ $$(L_{Y_{M}}X^{'})(Z)=Y_{M}<X_{M},Z>-<[Y_{M},Z],X_{M}>.$$ Because $X_{M},Y_{M}$ are Killing vector fields, we have (see [6]) \begin{align*} X_{M}<Y_{M},Z> &=<L_{X_{M}}Y_{M},Z>+<Y_{M},L_{X_{M}}Z>\\ &=<[X_{M},Y_{M}],Z>+<Y_{M},[X_{M},Z]> \end{align*} \begin{align*} Y_{M}<X_{M},Z> &=<L_{Y_{M}}X_{M},Z>+<X_{M},L_{Y_{M}}Z>\\ &=<[Y_{M},X_{M}],Z>+<X_{M},[Y_{M},Z]> \end{align*} and hence $$(L_{X_{M}}Y^{'}+L_{Y_{M}}X^{'})(Z)=<[X_{M},Y_{M}],Z>+<[Y_{M},X_{M}],Z>=0$$ \end{proof} \begin{lemma} If $X,Y\in\mathfrak{g}$, let $X_{M},Y_{M}$ be the corresponding smooth vector fields on $M$ and $X^{'},Y^{'}$ the 1-forms on $M$ dual to $X_{M},Y_{M}$ under the metric $g^{TM}$; then $$d_{X+\sqrt{-1}Y}(X^{'}+\sqrt{-1}Y^{'})$$ is a $d_{X+\sqrt{-1}Y}$-closed form. \end{lemma} \begin{proof} \begin{align*} d_{X+\sqrt{-1}Y}^{2}(X^{'}+\sqrt{-1}Y^{'}) &=d_{X+\sqrt{-1}Y}(d(X^{'}+\sqrt{-1}Y^{'})+i_{X_{M}+\sqrt{-1}Y_{M}}(X^{'}+\sqrt{-1}Y^{'}))\\ &=di_{X_{M}+\sqrt{-1}Y_{M}}(X^{'}+\sqrt{-1}Y^{'})+i_{X_{M}+\sqrt{-1}Y_{M}}d(X^{'}+\sqrt{-1}Y^{'})\\ &=L_{X_{M}}X^{'}-L_{Y_{M}}Y^{'}+\sqrt{-1}(L_{X_{M}}Y^{'}+L_{Y_{M}}X^{'})\\ &=0 \end{align*} So $d_{X+\sqrt{-1}Y}(X^{'}+\sqrt{-1}Y^{'})$ is a $d_{X+\sqrt{-1}Y}$-closed form.
\end{proof} \begin{lemma} For any $\eta\in H^{*}_{X+\sqrt{-1}Y}(M)$ and $s\geq 0$, we have $$\int_{M}\eta=\int_{M}\exp\{-s(d_{X+\sqrt{-1}Y}(X^{'}+\sqrt{-1}Y^{'}))\}\eta$$ \end{lemma} \begin{proof} Because $$\frac{\partial}{\partial s}\int_{M}\exp\{-s(d_{X+\sqrt{-1}Y}(X^{'}+\sqrt{-1}Y^{'}))\}\eta$$ $$=-\int_{M}(d_{X+\sqrt{-1}Y}(X^{'}+\sqrt{-1}Y^{'}))\exp\{-s(d_{X+\sqrt{-1}Y}(X^{'}+\sqrt{-1}Y^{'}))\}\eta$$ and by assumption we have $$d_{X+\sqrt{-1}Y}\eta=0$$ $$d_{X+\sqrt{-1}Y}\exp\{-s(d_{X+\sqrt{-1}Y}(X^{'}+\sqrt{-1}Y^{'}))\}=0$$ we get $$(d_{X+\sqrt{-1}Y}(X^{'}+\sqrt{-1}Y^{'}))\exp\{-s(d_{X+\sqrt{-1}Y}(X^{'}+\sqrt{-1}Y^{'}))\}\eta$$ $$=d_{X+\sqrt{-1}Y}[(X^{'}+\sqrt{-1}Y^{'})\exp\{-s(d_{X+\sqrt{-1}Y}(X^{'}+\sqrt{-1}Y^{'}))\}\eta]$$ and by the Stokes formula we have $$\frac{\partial}{\partial s}\int_{M}\exp\{-s(d_{X+\sqrt{-1}Y}(X^{'}+\sqrt{-1}Y^{'}))\}\eta=0$$ Then we get $$\int_{M}\eta=\int_{M}\exp\{-s(d_{X+\sqrt{-1}Y}(X^{'}+\sqrt{-1}Y^{'}))\}\eta$$ \end{proof} We have $$d_{X+\sqrt{-1}Y}(X^{'}+\sqrt{-1}Y^{'}) =d(X^{'}+\sqrt{-1}Y^{'})+\langle X_{M}+\sqrt{-1}Y_{M}, X_{M}+\sqrt{-1}Y_{M}\rangle$$ and $$\langle X_{M}+\sqrt{-1}Y_{M}, X_{M}+\sqrt{-1}Y_{M}\rangle=|X_{M}|^{2}-|Y_{M}|^{2}+2\sqrt{-1}\langle X_{M}, Y_{M}\rangle$$ Set $$M_{0}=\{x\in M \mid \langle X_{M}(x)+\sqrt{-1}Y_{M}(x), X_{M}(x)+\sqrt{-1}Y_{M}(x)\rangle=0\}.$$ For simplicity, we assume that $M_{0}$ is a connected submanifold of $M$, and we let $\mathcal{N}$ be the normal bundle of $M_{0}$ in $M$. The set $M_{0}$ was first discussed by H. Jacobowitz (see [4]). \section{Localization formula for $d_{X+\sqrt{-1}Y}$-closed forms} Let $E$ be a $G$-equivariant vector bundle. If $\nabla^{E}$ is a connection on $E$ which commutes with the action of $G$ on $\Omega(M,E)$, we see that $$[\nabla^{E}, L^{E}_{X}]=0$$ for all $X\in\mathfrak{g}$.
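Separating real and imaginary parts (a remark we add for clarity), the defining condition of $M_{0}$ is equivalent to two real equations:

```latex
|X_{M}(x)|^{2}=|Y_{M}(x)|^{2}
\qquad\text{and}\qquad
\langle X_{M}(x),\,Y_{M}(x)\rangle=0 .
```

In particular, $M_{0}$ contains the common zero set $\{X_{M}=Y_{M}=0\}$, and when $Y=0$ it reduces to the zero set of $X_{M}$, the fixed-point set appearing in the Berline--Vergne formula below.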
We then obtain a moment map $$\mu^{E}(X)=L^{E}_{X}-[\nabla^{E},i_{X}]=L^{E}_{X}-\nabla^{E}_{X}$$ We know that if $y$ is the tautological section of the bundle $\pi^{*}E$ over $E$, then the vertical component of $X_{E}$ may be identified with $-\mu^{E}(X)y$ (see [1], Proposition 7.6).\par If $E$ is the tangent bundle $TM$ and $\nabla^{TM}$ is the Levi-Civita connection, then we have $$\mu^{TM}(X)Y=L_{X}Y-\nabla^{TM}_{X}Y=-\nabla^{TM}_{Y}X$$ We know that for any Killing vector field $X$, $\mu^{TM}(X)$, as a linear endomorphism of $TM$, is skew-symmetric; $-\mu^{TM}(X)$ annihilates the tangent bundle $TM_{0}$ and induces a skew-symmetric automorphism of the normal bundle $\mathcal{N}$ (see [5], Chapter II, Proposition 2.2 and Theorem 5.3). The restriction of $\mu^{TM}(X)$ to $\mathcal{N}$ coincides with the moment endomorphism $\mu^{\mathcal{N}}(X)$. Let $G_{0}$ be the Lie subgroup of $G$ which preserves the submanifold $M_{0}$; that is, for $p\in M_{0}$ and $Z\in\mathfrak{g_{0}}$ we have $\exp(-tZ)p=q\in M_{0}$, where $\mathfrak{g_{0}}$ is the Lie algebra of $G_{0}$. We assume that the local 1-parameter transformations satisfy $\exp(-tX), \exp(-tY)\in G_{0}$. Then $G_{0}$ acts on the normal bundle $\mathcal{N}$.
The vector fields $X^{\mathcal{N}}$ and $Y^{\mathcal{N}}$ are vertical and are given at the point $(x,y)\in M_{0}\times\mathcal{N}_{x}$ by the vectors $-\mu^{\mathcal{N}}(X)y, -\mu^{\mathcal{N}}(Y)y\in\mathcal{N}_{x}$.\par We construct a one-form $\alpha$ on $\mathcal{N}$: $$Z\in \Gamma(T\mathcal{N})\rightarrow\alpha(Z)=<-\mu^{\mathcal{N}}(X)y,\nabla^{\mathcal{N}}_{Z}y>+\sqrt{-1}<-\mu^{\mathcal{N}}(Y)y,\nabla^{\mathcal{N}}_{Z}y>$$ For $Z_{1},Z_{2}\in\Gamma(T\mathcal{N})$, we know that $d\alpha(Z_{1},Z_{2})=Z_{1}\alpha(Z_{2})-Z_{2}\alpha(Z_{1})-\alpha([Z_{1},Z_{2}])$, so \begin{align*} d\alpha(Z_{1},Z_{2}) &=<-\nabla^{\mathcal{N}}_{Z_{1}}\mu^{\mathcal{N}}(X)y,\nabla^{\mathcal{N}}_{Z_{2}}y>-<-\nabla^{\mathcal{N}}_{Z_{2}}\mu^{\mathcal{N}}(X)y,\nabla^{\mathcal{N}}_{Z_{1}}y>\\ &+\sqrt{-1}<-\nabla^{\mathcal{N}}_{Z_{1}}\mu^{\mathcal{N}}(Y)y,\nabla^{\mathcal{N}}_{Z_{2}}y>-\sqrt{-1}<-\nabla^{\mathcal{N}}_{Z_{2}}\mu^{\mathcal{N}}(Y)y,\nabla^{\mathcal{N}}_{Z_{1}}y>\\ &+<-\mu^{\mathcal{N}}(X)y,R^{\mathcal{N}}(Z_{1},Z_{2})y>+\sqrt{-1}<-\mu^{\mathcal{N}}(Y)y,R^{\mathcal{N}}(Z_{1},Z_{2})y>\\ \end{align*} Recall that $\nabla^{\mathcal{N}}$ is invariant under $L_{X}$ for all $X\in\mathfrak{g}$, so that $[\nabla^{\mathcal{N}},\mu^{\mathcal{N}}(X)]=0$, $[\nabla^{\mathcal{N}},\mu^{\mathcal{N}}(Y)]=0$. Since $X,Y$ are Killing vector fields, $d\alpha$ equals $$2<-(\mu^{\mathcal{N}}(X)+\sqrt{-1}\mu^{\mathcal{N}}(Y)) \cdot,\cdot>+<-\mu^{\mathcal{N}}(X)y-\sqrt{-1}\mu^{\mathcal{N}}(Y)y,R^{\mathcal{N}}y>$$ Also, $|X_{\mathcal{N}}|^{2}=<\mu^{\mathcal{N}}(X)y,\mu^{\mathcal{N}}(X)y>$ and $|Y_{\mathcal{N}}|^{2}=<\mu^{\mathcal{N}}(Y)y,\mu^{\mathcal{N}}(Y)y>$.
So we get \begin{align*} d_{X_{\mathcal{N}}+\sqrt{-1}Y_{\mathcal{N}}}(X^{'}_{\mathcal{N}}+\sqrt{-1}Y^{'}_{\mathcal{N}}) &=-2<(\mu^{\mathcal{N}}(X)+\sqrt{-1}\mu^{\mathcal{N}}(Y))\cdot,\cdot>\\ &+<-\mu^{\mathcal{N}}(X)y-\sqrt{-1}\mu^{\mathcal{N}}(Y)y,-\mu^{\mathcal{N}}(X)y-\sqrt{-1}\mu^{\mathcal{N}}(Y)y+R^{\mathcal{N}}y> \end{align*} \begin{theorem} Let $M$ be a smooth closed oriented manifold and $G$ a compact Lie group acting smoothly on $M$. For any $\eta\in H^{*}_{X+\sqrt{-1}Y}(M)$ with $[X_{M},Y_{M}]=0$, let $G_{0}$ be the Lie subgroup of $G$ which preserves the submanifold $M_{0}$, and assume the local 1-parameter transformations $\exp(-tX), \exp(-tY)\in G_{0}$. Then the following identity holds: $$\int_{M}\eta=\int_{M_{0}} \frac{\eta}{\rm{Pf}[\frac{-\mu^{\mathcal{N}}(X)-\sqrt{-1}\mu^{\mathcal{N}}(Y)+R^{\mathcal{N}}}{2\pi}]}$$ \end{theorem} \begin{proof} Set $s=\frac{1}{2t}$; by Lemma 4 we get $$\int_{M}\eta=\int_{M}\exp\{-\frac{1}{2t}(d_{X+\sqrt{-1}Y}(X^{'}+\sqrt{-1}Y^{'}))\}\eta$$ Let $V$ be a neighborhood of $M_{0}$ in $\mathcal{N}$. We identify a tubular neighborhood of $M_{0}$ in $M$ with $V$. Set $V^{'}\subset V$.
When $t\rightarrow 0$, because $\langle X_{M}(x)+\sqrt{-1}Y_{M}(x), X_{M}(x)+\sqrt{-1}Y_{M}(x)\rangle\neq0$ outside $M_{0}$, we have $$\int_{M}\exp\{-\frac{1}{2t}(d_{X+\sqrt{-1}Y}(X^{'}+\sqrt{-1}Y^{'}))\}\eta\sim\int_{V^{'}}\exp\{-\frac{1}{2t}(d_{X+\sqrt{-1}Y}(X^{'}+\sqrt{-1}Y^{'}))\}\eta.$$ Because $$\int_{V^{'}}\exp\{-\frac{1}{2t}(d_{X+\sqrt{-1}Y}(X^{'}+\sqrt{-1}Y^{'}))\}\eta=\int_{V^{'}}\exp\{-\frac{1}{2t}(d_{X_{\mathcal{N}}+\sqrt{-1}Y_{\mathcal{N}}}(X_{\mathcal{N}}^{'}+\sqrt{-1}Y_{\mathcal{N}}^{'}))\}\eta$$ then $$\int_{V^{'}}\exp\{-\frac{1}{2t}(d_{X+\sqrt{-1}Y}(X^{'}+\sqrt{-1}Y^{'}))\}\eta=$$ $$\int_{V^{'}}\exp\{\frac{1}{t}<(\mu^{\mathcal{N}}(X)+\sqrt{-1}\mu^{\mathcal{N}}(Y))\cdot,\cdot>+\frac{1}{2t}<\mu^{\mathcal{N}}(X)y+\sqrt{-1}\mu^{\mathcal{N}}(Y)y,R^{\mathcal{N}}y>\}\eta$$ $$+\int_{V^{'}}\exp\{-\frac{1}{2t}<-\mu^{\mathcal{N}}(X)y-\sqrt{-1}\mu^{\mathcal{N}}(Y)y, -\mu^{\mathcal{N}}(X)y-\sqrt{-1}\mu^{\mathcal{N}}(Y)y>\}\eta$$ By making the change of variables $y=\sqrt{t}y$, we find that the above formula is equal to $$t^{n}\int_{V^{'}}\exp\{\frac{1}{t}<(\mu^{\mathcal{N}}(X)+\sqrt{-1}\mu^{\mathcal{N}}(Y))\cdot,\cdot>+\frac{1}{2}<\mu^{\mathcal{N}}(X)y+\sqrt{-1}\mu^{\mathcal{N}}(Y)y,R^{\mathcal{N}}y>\}\eta$$ $$+\int_{V^{'}}\exp\{-\frac{1}{2}<-\mu^{\mathcal{N}}(X)y-\sqrt{-1}\mu^{\mathcal{N}}(Y)y, -\mu^{\mathcal{N}}(X)y-\sqrt{-1}\mu^{\mathcal{N}}(Y)y>\}\eta_{\sqrt{t}y}$$ We know that $$\frac{(\frac{<(\mu^{\mathcal{N}}(X)+\sqrt{-1}\mu^{\mathcal{N}}(Y))\cdot,\cdot>}{t})^{n}}{n!}=(\rm{Pf}(\mu^{\mathcal{N}}(X)+\sqrt{-1}\mu^{\mathcal{N}}(Y)))dy$$ where $dy$ is the volume form of the submanifold $M_{0}$ and $2n$ is the dimension of $M_{0}$; then we get $$=\int_{V^{'}}\exp\{\frac{1}{2}<\mu^{\mathcal{N}}(X)y+\sqrt{-1}\mu^{\mathcal{N}}(Y)y,R^{\mathcal{N}}y>\}\eta\det(\mu^{\mathcal{N}}(X)+\sqrt{-1}\mu^{\mathcal{N}}(Y))^{\frac{1}{2}}dy_{1}\wedge...\wedge dy_{n}$$ $$+\int_{V^{'}}\exp\{-\frac{1}{2}<-\mu^{\mathcal{N}}(X)y-\sqrt{-1}\mu^{\mathcal{N}}(Y)y,
-\mu^{\mathcal{N}}(X)y-\sqrt{-1}\mu^{\mathcal{N}}(Y)y>\}\eta$$ Since $[X_{M},Y_{M}]=0$ implies $[\mu^{TM}(X),\mu^{TM}(Y)]=0$, and since $-\mu^{\mathcal{N}}(X)-\sqrt{-1}\mu^{\mathcal{N}}(Y)$ and $R^{\mathcal{N}}$ are skew-symmetric, we get $$=\int_{V^{'}}\exp\{-\frac{1}{2}<-\mu^{\mathcal{N}}(X)y-\sqrt{-1}\mu^{\mathcal{N}}(Y)y,-\mu^{\mathcal{N}}(X)y-\sqrt{-1}\mu^{\mathcal{N}}(Y)y+R^{\mathcal{N}}y>\}dy_{1}\wedge...\wedge dy_{n}$$ $$\cdot\det(\mu^{\mathcal{N}}(X)+\sqrt{-1}\mu^{\mathcal{N}}(Y))^{\frac{1}{2}}\eta$$ $$=\int_{M_{0}}(2\pi)^{n}\det(\mu^{\mathcal{N}}(X)+\sqrt{-1}\mu^{\mathcal{N}}(Y))^{-\frac{1}{2}}\det(-\mu^{\mathcal{N}}(X)-\sqrt{-1}\mu^{\mathcal{N}}(Y)+R^{\mathcal{N}})^{-\frac{1}{2}}$$ $$\cdot\det(\mu^{\mathcal{N}}(X)+\sqrt{-1}\mu^{\mathcal{N}}(Y))^{\frac{1}{2}}\eta$$ $$=\int_{M_{0}}(2\pi)^{n}\det(-\mu^{\mathcal{N}}(X)-\sqrt{-1}\mu^{\mathcal{N}}(Y)+R^{\mathcal{N}})^{-\frac{1}{2}}\eta$$ $$=\int_{M_{0}} \frac{\eta}{\rm{Pf}[\frac{-\mu^{\mathcal{N}}(X)-\sqrt{-1}\mu^{\mathcal{N}}(Y)+R^{\mathcal{N}}}{2\pi}]}$$ \end{proof} By Theorem 1 we can recover the localization formula of Berline and Vergne (see [2] or [3]). \begin{corollery}[N.Berline and M.Vergne] Let $M$ be a smooth closed oriented manifold and $G$ a compact Lie group acting smoothly on $M$. For any $\eta\in H^{*}_{X}(M)$, let $G_{0}$ be the Lie subgroup of $G$ which preserves the submanifold $M_{0}=\{x\in M \mid X_{M}(x)=0\}$. Then the following identity holds: $$\int_{M}\eta=\int_{M_{0}} \frac{\eta}{\rm{Pf}[\frac{-\mu^{\mathcal{N}}(X)+R^{\mathcal{N}}}{2\pi}]}$$ \end{corollery} \begin{proof} Because $M_{0}=\{x\in M \mid X_{M}(x)=0\}$, we have $\exp(-tX)p=p$ for $p\in M_{0}$, so $\exp(-tX)\in G_{0}$. Setting $Y=0$ in Theorem 1 gives the result. \end{proof} \section{Localization formulas for characteristic numbers} Let $M$ be an even-dimensional compact oriented manifold without boundary, $G$ a compact Lie group acting smoothly on $M$, and $\mathfrak{g}$ its Lie algebra.
Let $g^{TM}$ be a $G$-invariant Riemannian metric on $TM$, and let $\nabla^{TM}$ be the Levi-Civita connection associated to $g^{TM}$. Since $\nabla^{TM}$ is a $G$-invariant connection, we see that $[\nabla^{TM},L_{X_{M}}]=0$ for all $X\in\mathfrak{g}$.\par The equivariant connection $\widetilde{\nabla}^{TM}$ corresponding to the $G$-invariant connection $\nabla^{TM}$ is the operator on $\Omega^{*}(M,TM)$ defined by the formula $$\widetilde{\nabla}^{TM}=\nabla^{TM}+i_{X_{M}+\sqrt{-1}Y_{M}},$$ where $X_{M},Y_{M}$ are the smooth vector fields on $M$ corresponding to $X,Y\in\mathfrak{g}$. \begin{lemma} The operator $\widetilde{\nabla}^{TM}$ preserves the space $\Omega^{*}_{X_{M}+\sqrt{-1}Y_{M}}(M,TM)$ of smooth $(X_{M}+\sqrt{-1}Y_{M})$-invariant forms with values in $TM$. \end{lemma} \begin{proof} Let $\omega\in\Omega^{*}_{X_{M}+\sqrt{-1}Y_{M}}(M,TM)$; then we have \begin{align*} (L_{X_{M}}+\sqrt{-1}L_{Y_{M}})\widetilde{\nabla}^{TM}\omega &=(L_{X_{M}}+\sqrt{-1}L_{Y_{M}})(\nabla^{TM}+i_{X_{M}+\sqrt{-1}Y_{M}})\omega\\ &=(\nabla^{TM}+i_{X_{M}+\sqrt{-1}Y_{M}})(L_{X_{M}}+\sqrt{-1}L_{Y_{M}})\omega\\ &=0 \end{align*} So we get $\widetilde{\nabla}^{TM}\omega\in\Omega^{*}_{X_{M}+\sqrt{-1}Y_{M}}(M,TM)$. \end{proof} We will also denote the restriction of $\widetilde{\nabla}^{TM}$ to $\Omega^{*}_{X_{M}+\sqrt{-1}Y_{M}}(M,TM)$ by $\widetilde{\nabla}^{TM}$. The equivariant curvature $\widetilde{R}^{TM}$ of the equivariant connection $\widetilde{\nabla}^{TM}$ is defined by the formula (see [1]) $$\widetilde{R}^{TM}=(\widetilde{\nabla}^{TM})^{2}-L_{X_{M}}-\sqrt{-1}L_{Y_{M}}$$ It is an element of $\Omega^{*}_{X_{M}+\sqrt{-1}Y_{M}}(M,End(TM))$.
We see that \begin{align*} \widetilde{R}^{TM} &=(\nabla^{TM}+i_{X_{M}+\sqrt{-1}Y_{M}})^{2}-L_{X_{M}}-\sqrt{-1}L_{Y_{M}}\\ &=R^{TM}+[\nabla^{TM},i_{X_{M}+\sqrt{-1}Y_{M}}]-L_{X_{M}}-\sqrt{-1}L_{Y_{M}}\\ &=R^{TM}-\mu^{TM}(X)-\sqrt{-1}\mu^{TM}(Y) \end{align*} \begin{lemma} The equivariant curvature $\widetilde{R}^{TM}$ satisfies the equivariant Bianchi formula $$\widetilde{\nabla}^{TM}\widetilde{R}^{TM}=0$$ \end{lemma} \begin{proof} Because \begin{align*} [\widetilde{\nabla}^{TM},\widetilde{R}^{TM}] &=[\widetilde{\nabla}^{TM},(\widetilde{\nabla}^{TM})^{2}-L_{X_{M}}-\sqrt{-1}L_{Y_{M}}]\\ &=[\widetilde{\nabla}^{TM},(\widetilde{\nabla}^{TM})^{2}]+[\nabla^{TM}+i_{X_{M}+\sqrt{-1}Y_{M}},-L_{X_{M}}-\sqrt{-1}L_{Y_{M}}]\\ &=0 \end{align*} \end{proof} We now construct equivariant characteristic forms from $\widetilde{R}^{TM}$. If $f(x)$ is a polynomial in the indeterminate $x$, then $f(\widetilde{R}^{TM})$ is an element of $\Omega^{*}_{X_{M}+\sqrt{-1}Y_{M}}(M,End(TM))$. We use the trace map $${\rm Tr}: \Omega^{*}_{X_{M}+\sqrt{-1}Y_{M}}(M,End(TM))\rightarrow\Omega^{*}_{X_{M}+\sqrt{-1}Y_{M}}(M)$$ to obtain an element of $\Omega^{*}_{X_{M}+\sqrt{-1}Y_{M}}(M)$, which we call an equivariant characteristic form. \begin{lemma} The equivariant differential form ${\rm Tr}(f(\widetilde{R}^{TM}))$ is $d_{X_{M}+\sqrt{-1}Y_{M}}$-closed, and its equivariant cohomology class is independent of the choice of the $G$-invariant connection $\nabla^{TM}$.
\end{lemma} \begin{proof} If $\alpha\in\Omega^{*}_{X_{M}+\sqrt{-1}Y_{M}}(M,End(TM))$, because in local $\nabla^{TM}=d+\omega$, we have \begin{align*} d_{X_{M}+\sqrt{-1}Y_{M}}{\rm Tr}(\alpha) &={\rm Tr}(d_{X_{M}+\sqrt{-1}Y_{M}}\alpha)\\ &={\rm Tr}([d_{X_{M}+\sqrt{-1}Y_{M}},\alpha])+{\rm Tr}([\omega,\alpha])\\ &={\rm Tr}([\widetilde{\nabla}^{TM},\alpha]) \end{align*} then by the equivariant Bianchi identity $\widetilde{\nabla}^{TM}\widetilde{R}^{TM}=0$, we get $$d_{X_{M}+\sqrt{-1}Y_{M}}{\rm Tr}(f(\widetilde{R}^{TM}))=0.$$ Let $\nabla^{TM}_{t}$ is a one-parameter family of G-invariant connections with equivariant curvature $\widetilde{R}^{TM}_{t}$. We have \begin{align*} \frac{d}{dt}{\rm Tr}(f(\widetilde{R}^{TM}_{t})) &={\rm Tr}(\frac{d\widetilde{R}^{TM}_{t}}{dt}f^{'}(\widetilde{R}^{TM}_{t}))\\ &={\rm Tr}(\frac{d(\widetilde{\nabla}^{TM}_{t})^{2}}{dt}f^{'}(\widetilde{R}^{TM}_{t}))\\ &={\rm Tr}([\widetilde{\nabla}^{TM}_{t},\frac{d\widetilde{\nabla}^{TM}_{t}}{dt}]f^{'}(\widetilde{R}^{TM}_{t}))\\ &={\rm Tr}([\widetilde{\nabla}^{TM}_{t},\frac{d\widetilde{\nabla}^{TM}_{t}}{dt}f^{'}(\widetilde{R}^{TM}_{t})])\\ &=d_{X_{M}+\sqrt{-1}Y_{M}}{\rm Tr}(\frac{d\widetilde{\nabla}^{TM}_{t}}{dt}f^{'}(\widetilde{R}^{TM}_{t})) \end{align*} from which we get $${\rm Tr}(f(\widetilde{R}^{TM}_{1}))-{\rm Tr}(f(\widetilde{R}^{TM}_{0}))=d_{X_{M}+\sqrt{-1}Y_{M}}\int^{1}_{0}{\rm Tr}(\frac{d\widetilde{\nabla}^{TM}_{t}}{dt}f^{'}(\widetilde{R}^{TM}_{t}))dt$$ so we get the result. \end{proof} As an application of Theorem 1., we can get the following localization formulas for characteristic numbers \begin{theorem} Let $M$ be an $2l$-dim compact oriented manifold without boundary, $G$ be a compact Lie group acting smoothly on $M$ and $\mathfrak{g}$ be its Lie algebra. Let $X,Y\in\mathfrak{g}$, and $X_{M} ,Y_{M}$ be the corresponding smooth vector field on $M$. $M_{0}$ is the submanifold descriped in section 2. 
If $f(x)$ is a polynomial, then we have $$\int_{M}{\rm Tr}(f(\widetilde{R}^{TM}))=\int_{M_{0}}\frac{{\rm Tr}(f(\widetilde{R}^{TM}))}{\rm{Pf}[\frac{-\mu^{\mathcal{N}}(X)-\sqrt{-1}\mu^{\mathcal{N}}(Y)+R^{\mathcal{N}}}{2\pi}]}.$$ \end{theorem} \begin{proof} By Lemma 7, ${\rm Tr}(f(\widetilde{R}^{TM}))$ is $d_{X_{M}+\sqrt{-1}Y_{M}}$-closed, and the result then follows from Theorem 1. \end{proof} We can use this formula to compute characteristic numbers of $M$; in particular, it can be used to compute the Euler characteristic of $M$. We do not give the details here.
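As an illustration (not part of the original argument, and stated up to the usual normalization conventions), extending the polynomial case to power series and taking $f(x)=e^{x}$ recovers an equivariant Chern character form of $TM$, which is equivariantly closed by the lemma above:
\begin{align*}
\mathrm{ch}_{X_{M}+\sqrt{-1}Y_{M}}(TM)
&={\rm Tr}\bigl(\exp(\widetilde{R}^{TM})\bigr)
={\rm Tr}\Bigl(\exp\bigl(R^{TM}-\mu^{TM}(X)-\sqrt{-1}\,\mu^{TM}(Y)\bigr)\Bigr),\\
d_{X_{M}+\sqrt{-1}Y_{M}}\,\mathrm{ch}_{X_{M}+\sqrt{-1}Y_{M}}(TM)&=0.
\end{align*}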
Jorge Antunes (Rio de Janeiro, 23 April 1942) is a Brazilian composer, conductor, visual artist, poet, and politician. Antunes is the pioneer of electronic music in Brazil, having composed the first Brazilian electronic piece in 1961. In 1975, Editora Mangione released the first Brazilian record of electronic music, containing works Antunes had composed since the 1960s.

Biography

Childhood and adolescence

Jorge Antunes spent his childhood in the working-class neighborhood of Santo Cristo, Rio de Janeiro, at Rua Orestes 15, where he was born on 23 April 1942. From childhood he listened to classical music because his father, the painter Carlos Antunes, an admirer of classical music, owned a vast record collection. The boy wanted to study violin, but his father could not afford the instrument. Carlos was also an antiques dealer and obtained a violin through a trade at the antiques shop where he worked: the famous shop of Aladel Sampaio, at Praia do Flamengo No. 1. In 1956, Dona Olinda de Freitas Antunes, a domestic worker who had studied some piano in her youth, enrolled her son Jorge, then 14, in the Curso de Música Santa Cecília, where he began studying violin with the teacher Arethuza de Mello. Two years later, the young Antunes began composing short pieces for solo violin and for violin-and-piano duo, self-taught, with still rudimentary knowledge of theory and harmony. The teenage Jorge Antunes's first productions were not in music but in the visual arts, following his father's teachings. A cousin of Dona Olinda, named Alciomar, owned a technical school of radio and television called LART. Jorge Antunes studied radio technology at LART, specializing by age 14 in repairing radios and television sets. He earned a small income to help his family by repairing the radios of neighbors in Santo Cristo. It was the era of thermionic valves. 
It was this early-acquired knowledge that allowed Antunes to begin, in 1961, at age 19, his pioneering work in electronic music in Brazil.

Formal training

A native of Rio de Janeiro, Antunes graduated in violin, composition, and conducting from the School of Music of the University of Brazil, now the Federal University of Rio de Janeiro (UFRJ), in 1968. He pursued graduate studies in musical composition in Buenos Aires at the Centro Latinoamericano de Altos Estudios Musicales (CLAEM) of the Instituto Torcuato Di Tella (where he completed his master's degree in 1970 under the supervision of Alberto Ginastera and Gerardo Gandini). Antunes received his doctorate in musical aesthetics in 1977 from the University of Paris VIII. He joined the faculty of the University of Brasília in 1973 and retired as full professor in 2011. At UnB he directed the Electroacoustic Music Laboratory and taught musical composition, counterpoint and fugue, and musical acoustics. He studied traditional music at the University of Brazil (now the Federal University of Rio de Janeiro, UFRJ): violin, conducting, and composition, with advanced courses in Buenos Aires, Utrecht, and Paris. He studied with Alberto Ginastera, Kröpfl, Gandini, Koenig, Bayle, Reibel, and Pierre Schaeffer. He completed his doctorate at the University of Paris VIII, under the supervision of Daniel Charles, in 1977. He composed the opera Olga, based on the life of Olga Benário. From 1961 onward he stood out as the pioneer of electroacoustic music in Brazil and began research into the correspondence between sounds and colors. He developed a compositional technique, Chromophonic Music, and began producing multimedia works in 1965. In 2014, his opera A Cartomante, based on the short story by Machado de Assis, premiered in Brasília, performed by the Orquestra Sinfônica do Teatro Nacional Cláudio Santoro under the baton of his son, Jorge Lisboa Antunes, from 31 July to 3 August. 
Although he has always composed electroacoustic music, he has a vast instrumental catalogue: symphonic works, chamber music, and two large operas. His scores are published by Suvini Zerboni, Universal Edition, Billaudot, Breitkopf & Härtel, Salabert, and Sistrum. Since 1973 he has been a professor at the University of Brasília, where he directs the electroacoustic music laboratory and teaches composition and musical acoustics. It is said that his greatest passion is teaching, but he maintains that it is not and that he never said so.

Visual arts

The painter Carlos Antunes was, aesthetically, a classical landscape painter. On Sundays he would ride his bicycle from Santo Cristo to the Quinta da Boa Vista to paint. On the back of the bicycle he carried not only his easel and his box of palette and paints, but also his son Jorge Antunes. Thus, from the age of 8, Jorge painted. Later Jorge took lessons with Armando Pacheco and, accompanying his father, began to frequent the circle of the Antiga Casa Cavalier, on Avenida Chile, in downtown Rio de Janeiro. There the 17-year-old painter came to know and frequent the studios of Bustamente Sá, Silvio Pinto, Panceti, Clau Devesa, and Eugênio Proença Sigaud. He began early to take part in the National Salons of Fine Arts, the April Salon at MAM, and the National Salons of Modern Art. Starting from the principle that "the degree of reception of an artistic message is proportional to the number of senses used in its reception," he created works he called "integral art," appealing directly to all five senses, including taste. He thus employed various audiovisual and kinetic resources, as well as smells, gustatory elements, and tactile possibilities. Antunes's artistic experiments of that period, in both the visual arts and musical composition, were influenced by his theory of correspondence between sounds and colors, which he had been developing since 1962, when he began a bachelor's degree in physics at the FNFi. 
His Ambiente I, shown at the XV National Salon of Modern Art in 1964, was a precursor of the "installation" category, which did not yet bear that name. It was a cube with 4-meter edges in which the public, barefoot, used sight, touch, taste, and smell, to the sound of electronic music. To enter the work in the salon, he submitted the six faces of the cube separately, as paintings. Once the six large works were selected, he assembled the great walk-in cube. In the visual-arts salons in which Jorge Antunes took part in the 1960s, the then-young artists Rubem Guershman, Roberto Magalhães, Caciporé Torres, and Antonio Dias also made their debuts alongside him. Jorge Antunes's paintings, generally oil on canvas or on wood, used collage and applied three-dimensional objects, always engaging in social and political criticism. This was the reason for the disappearance of his works produced between 1963 and 1968. When he left Brazil after AI-5, his father asked Manolo, a stagehand of the Companhia Tonia-Celli-Autran, to hide Jorge's "subversive" paintings in a mansion in Santa Tereza where the company stored sets and props. The antiques dealer Carlos Antunes often lent furniture for the theater group's productions. Since returning to Brazil in 1974, Jorge Antunes has never managed to discover the whereabouts of those works. His father already had memory problems owing to a cerebral aneurysm, Manolo had vanished, and Tonia Carrero's theater company, which had parted ways with Adolfo Celli and Paulo Autran, no longer existed. The main exhibitions in which Jorge Antunes took part were: National Salon of Fine Arts, 1964 and 1966; April Salon, MAM, 1966; XV National Salon of Modern Art; Concurso de Caixas, Petite Galerie, 1967; Microformobiles, Paris, 1973; April Salon, 1976; III Documento de Arte Contemporânea do Centro-Oeste, 1980; Salão Brasília de Artes Plásticas, 1991; Labirinto Sabio, 1991; XXXIII Salão de Artes Brasília-Marinhas, 2008.

Recent activity

In 2021, J. Antunes completed the opera Leopoldina in Paris (under the Prêmio Icatu de Artes) and other chamber pieces, including Maxixezinho da Fernanda for solo piano, dedicated to the pianist Fernanda Chaves Canaud, and Confinement I for soprano saxophone and electronic sounds.

Works

Opera: Contato (1968); Vivaldia MCMLXXV, chamber opera buffa (1975); Coreto (composed 1975, premiered 1976); Qorpo Santo, opera in three acts (1983); O Rei de uma Nota Só (The Single-tone King), mini-opera in four scenes (1991); A Borboleta Azul (The Blue Butterfly), mini-opera in two acts (1995); Olga (composed 1987–97, premiered 2006); A Cartomante (composed 2013, premiered 2014); O Espelho (composed 2015, premiered 2016)

Street opera: Auto do Pesadelo de Dom Bosco (composed 2010, premiered 2010); Olympia ou Sujadevez (composed 2016, premiered 2016); O Esfakeado (composed 2019, premiered 2019)

Works for piano: Trova (1961); Ritual de Momo (1962); Folhas de Pinheiro (1963); Desafio (1963); Teus Lábios (1964); I Reisado (1967); II Reisado (1967); Asiedor (1967); Graforismas I (1970); Estudo Nº 1 (1972); Redundantiae I (1978); Blues (2001); Chorinho da Maria Inês (2002); Sambinha do Antonio Eduardo (2002); Baiãozinho da Jaci (2004); La Seconde Chute (2005); Maracatuzinho da Mariuga (2007); Carimbozinho da Helena (2007); Valsinha da Eudóxia (2007); Frevinho da Sonia (2008); Modinha do Amaral (2010); Capoeirinha da Miriam (2014); Tanguinho do Alexandre (2014)

Chamber music: Mascaruncho for two violas (1977); Microformóbiles I for viola and piano (1970); Modinha para Mindinha (Tune for Mindinha) for seven violas (1985)

External links: Jorge Antunes Homepage; "A valsa sideral de Jorge Antunes, o pai da música eletrônica brasileira," in Vice

Categories: Natives of the city of Rio de Janeiro; Composers from the state of Rio de Janeiro; Brazilian classical composers; Members of the Brazilian Academy of Music; Alumni of the Federal University of Rio de Janeiro; Faculty of the University of Brasília
https://www.nature.com/articles/s41598-018-31292-x

# Feature-dependent intrinsic functional connectivity across cortical depths in the human auditory cortex

## Abstract

Frequency preference and spectral tuning are two cardinal features of information processing in the auditory cortex. However, sounds should not only be processed in separate frequency bands because information needs to be integrated to be meaningful. One way to better understand the integration of acoustic information is to examine the functional connectivity across cortical depths, as neurons are already connected differently across laminar layers. Using a tailored receiver array and surface-based cortical depth analysis, we revealed the frequency–preference as well as tuning–width dependent intrinsic functional connectivity (iFC) across cortical depths in the human auditory cortex using functional magnetic resonance imaging (fMRI). We demonstrated feature-dependent iFC in both core and noncore regions at all cortical depths. The selectivity of frequency–preference dependent iFC was higher at deeper depths than at intermediate and superficial depths in the core region. Both the selectivity of frequency–preference and tuning–width dependent iFC were stronger in the core than in the noncore region at deep cortical depths. Taken together, our findings provide evidence for a cortical depth-specific feature-dependent functional connectivity in the human auditory cortex.

## Introduction

At early stages of auditory processing, acoustic stimuli are decomposed into components at separate frequency bands1 and transmitted from the periphery to the cortex in spatially segregated channels2. 
This processing is supported by a topographic organization at the auditory cortical surface, where neural ensembles with similar frequency preference3,4,5 or spectral tuning6,7,8 are spatially clustered. However, generating meaningful auditory objects requires more than separating the input signals into different frequency bands, as the information needs to be somehow integrated. Previous studies have shown that integrating information may be facilitated by feature-dependent anatomical and functional connections, where neurons with similar functional properties are connected to each other across cortical locations. For example, anatomical connections between regions with similar frequency preference9 and tuning width10 were found by immunohistochemistry staining and retrograde tracing in the cat auditory cortex, respectively. Additionally, electrophysiological animal studies have reported more coherent activity between neurons with similar frequency preference in the auditory cortex of mice11 and monkeys12. One recent functional magnetic resonance imaging (fMRI) study demonstrated that the selectivity of frequency-dependent intrinsic functional connectivity (iFC) is higher in the core than in the noncore region of the human auditory cortex13, which is closely related to the hierarchical organization of the auditory cortex for the processing of spectrally complex stimuli. In the auditory cortex, inter- and intra-laminar anatomical connections have been found between columnarly organized neurons in the direction perpendicular to the cortical surface14,15,16. Concerting with the diverse anatomical connections across laminar layers, electrophysiological recordings showed laminar layer-specific connectivity patterns in the cat auditory cortex. 
Specifically, functionally connected neurons have more similar spectrotemporal receptive fields (STRF)17, frequency preference18, firing rate, and best temporal modulation frequency19 at supragranular layers than at infragranular layers, while the higher functional similarity of paired versus unpaired neurons is most prominent in infragranular layers. However, it remains unclear how auditory information is integrated across cortical depths in humans.

Recent advances in fMRI acquisition and analysis methods allow for the characterization of hemodynamic responses across cortical depths in the human brain, thus providing improvements for the study of functional specificity20. Specifically, examining blood-oxygen-level dependent (BOLD) signal with a small voxel in the range of 1 µL can improve functional specificity by alleviating the vascular bias caused by draining veins coursing along the pial surface20,21, by reducing partial volume effects22,23, and by suppressing physiological noise24. Accordingly, a number of high spatial-resolution fMRI studies have successfully detected cortical depth-specific functional activity in the human visual cortex25,26,27,28,29,30,31,32. Similar experimental protocols have also been used to reveal how functions of the human auditory cortex differ across cortical depths21,33,34. While the frequency preference, tuning width, and top-down attentional modulation effects have been examined across cortical depths, how the iFC in the human auditory cortex varies across cortical depths is unknown.

In this study, we characterized the feature-dependent iFC across cortical depths in the human auditory cortex. We specifically examined how iFC depends on the difference in frequency preference and tuning width within different cortical depths, respectively, in both core and noncore regions of the human auditory cortex. 
Frequency preference and tuning width were chosen as the independent variables because these two acoustic dimensions are represented by spatially segregated neuronal ensembles35, and this representation has been suggested to facilitate simultaneous processing of local and global spectral information10. In humans, iFC has been found to be more selective in the core than in the noncore region13, yet how feature-dependent iFC changes across cortical depths, particularly how this change varies between core and noncore regions, remains elusive. Based on previous invasive animal studies showing laminar layer-specific functional connections17,18,19, particularly the higher functional similarity difference between paired and unpaired neurons at infragranular layers of the primary auditory cortex, we hypothesize that the selectivity of feature-dependent iFC in the core region is higher at deep cortical depths. This feature can be more distinct when contrasting between core and noncore regions, as more associative operations occur and thus less selective iFC is needed in the noncore region36,37.

## Results

### Verification of approaches for cortical depth analysis at the auditory cortex

In order to optimize the signal-to-noise ratio (SNR) for the study of the human auditory cortex across cortical depths using 3T MRI, we constructed a dedicated 24-channel temporal lobe coil array (Fig. 1A). Figure 1B shows the noise correlation matrix of the coil array between the 24 coil channels. The minimum, maximum, and average of the off-diagonal entries were 0.01, 0.43, and 0.11, respectively. Figure 1C shows the SNR and temporal SNR (tSNR) gain ratio maps (with respect to a commercial 32-channel whole-head array). At least 40% improvement in most of the temporal lobe was found by visual inspection. 
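The SNR and tSNR gain maps above are voxel-wise ratios between two coil arrays. A minimal sketch of the voxel-wise tSNR computation, on synthetic data (this is not the authors' pipeline; array shapes and noise levels are illustrative assumptions):

```python
import numpy as np

# Sketch (synthetic data, not the authors' pipeline): voxel-wise temporal
# SNR is the temporal mean divided by the temporal standard deviation of
# each voxel's time series; a gain map is the ratio between two arrays.
rng = np.random.default_rng(0)
shape = (16, 16, 16, 120)  # x, y, z, time points (illustrative)
ts_temporal_coil = 100.0 + rng.normal(0.0, 2.0, shape)   # tailored array
ts_head_coil = 100.0 + rng.normal(0.0, 3.5, shape)       # reference array

tsnr_temporal = ts_temporal_coil.mean(axis=-1) / ts_temporal_coil.std(axis=-1)
tsnr_head = ts_head_coil.mean(axis=-1) / ts_head_coil.std(axis=-1)
tsnr_gain = tsnr_temporal / tsnr_head  # > 1 where the tailored array wins
```

A gain map like `tsnr_gain` is what would be thresholded and inspected region by region, as in the temporal-lobe comparison described in the text.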
Quantitatively, our array provided a SNR gain of 1.90 ± 0.49 and a tSNR gain of 1.69 ± 0.30 in the region of interest (ROI) at the auditory cortex (denoted by the solid contour in Fig. 2).

Cortical depth analysis included verifying the accuracy of the registration between functional and anatomical images. Figure 1D shows three representative slices in orthogonal views, superimposed with the gray-white matter boundary estimated from anatomical images. Good registration between functional and anatomical images was found by visual inspection. An example of contours for different cortical depths from a representative participant is shown in Fig. 1E. The gray-white matter boundary, the gray-pial surface boundary, and five intermediate surfaces (denoted by a normalized distance from the white matter boundary; nd = 0.1, 0.3, 0.5, 0.7, 0.9) were overlaid on top of the anatomical image. We also computed the averaged cortical thickness from our participants to ensure that the 1.5 mm isotropic spatial resolution was sufficient for the cortical depth analysis (Fig. 2). Quantitatively, the cortical thickness was 3.29 ± 0.20 mm in the auditory cortex ROI, and 2.48 ± 0.16 mm in the visual cortex ROI. Note that when using the 1.5 mm isotropic spatial resolution there were more than two voxels across cortical depths in the auditory cortex, comparable to a previous study20, in which 1 mm isotropic resolution was used to study fMRI signals across cortical depths at the visual cortex.

### Analysis of functional properties across cortical depths

After projecting the functional time series to each participant's five intermediate cortical surfaces, we estimated the average and the variability of functional properties across cortical depths. The chirp tone elicited robust BOLD responses in the auditory cortex. 
Figure 3A shows the spatial distribution of z-scores, which quantified how well the empirically measured fMRI time series fit the predicted model. This map was derived from 20 participants and five cortical depths. Figure 3B shows spatial distributions of the mean frequency preference across participants (results from individual participants are shown in Fig. 4), which was converted from local fMRI response latency, at five cortical surfaces. The tonotopic representations clearly present one low (solid line) and two high (dotted line) frequency preference bands extending along the superior-to-inferior axis. These frequency-selective bands suggest a mirror-symmetric high-low-high frequency-gradient topology perpendicular to Heschl's gyrus. The spatial distributions of the mean tuning width across participants (Fig. 3C; results from individual participants are shown in Fig. 5) show a narrowly tuned region (dotted contour) extending along Heschl's gyrus, and a broader tuning area located at Heschl's sulcus and the superior temporal gyrus. While the spatial distribution of the frequency preference map was roughly constant across cortical depths, the tuning width appeared to differ across cortical depths.

Quantitatively, the frequency preference (Fig. 3D left) was constant across cortical depths (one-way ANOVA: F(4,705) = 0.895, p = 0.467). In contrast, the tuning width (Fig. 3E left) showed a significant difference across cortical depths (one-way ANOVA: F(4,705) = 3.243, p = 0.012). Specifically, the tuning widths at the deep and intermediate depths were significantly smaller than that at the superficial depth (one-tailed t-test: nd–nd = 0.1–0.7, p = 0.043; nd–nd = 0.1–0.9, p = 0.003; nd–nd = 0.3–0.9, p = 0.044; nd–nd = 0.5–0.9, p = 0.044). 
These results are consistent with previous fMRI findings21,33,34 and also with neurophysiological animal studies showing a columnar frequency preference and laminar layer-specific tuning width organization38. We further analyzed the variability of frequency preference and tuning width. The inter-subject variability of frequency preference differed significantly across cortical depths (Fig. 3D middle, one-way ANOVA: F(4,705) = 2.463, p = 0.044). The inter-subject variability at the intermediate depth was significantly smaller than that at the deep and superficial depths (one-tailed t-test: nd–nd = 0.1–0.3, p = 0.017; nd–nd = 0.3–0.9, p = 0.042). The intra-subject variability of frequency preference also showed a significant difference across cortical depths (Fig. 3D right, one-way ANOVA: F(4,705) = 6.077, p < 0.0001). The intra-subject variabilities at the intermediate and superficial depths were significantly smaller than that at the deep depth (one-tailed t-test: nd–nd = 0.1–0.3, p = 0.006; nd–nd = 0.1–0.5, p < 0.0001; nd–nd = 0.1–0.7, p < 0.0001; nd–nd = 0.1–0.9, p = 0.003). These results corroborate the previous finding21 that the superficial depth of the auditory cortex has a relatively large inter-subject variability due to more physiological and anatomical bias imparted by the venous vasculature towards the pial surface. However, the lowest inter- and intra-subject variability were found at the intermediate cortical depth. We attribute this result to the neurophysiological organization in which the specificity of frequency preference is highest in the granular layer, where thalamic projections terminate39,40. 
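The depth comparisons above rely on one-way ANOVAs with F(4,705), i.e. five depth groups. A minimal sketch of such a test (illustrative values, not the study's data; the group size of 142 is an assumption chosen only to reproduce the reported degrees of freedom):

```python
import numpy as np
from scipy.stats import f_oneway

# Sketch: one-way ANOVA comparing a functional property (e.g. tuning
# width) across the five cortical depths nd = 0.1 ... 0.9.
# Values are illustrative, not the study's data; 5 groups of 142
# observations give the F(4, 705) degrees of freedom quoted in the text.
rng = np.random.default_rng(0)
depths = {nd: 1.5 + 0.1 * nd + rng.normal(0.0, 0.2, 142)
          for nd in (0.1, 0.3, 0.5, 0.7, 0.9)}

F, p = f_oneway(*depths.values())  # F statistic and its p-value
```

If the ANOVA is significant, pairwise one-tailed t-tests between depths (as reported in the text) would follow as post-hoc comparisons.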
Inter- and intra-subject variabilities of tuning width show the same trend as the frequency preference variabilities, while only the intra-subject variability of tuning width shows a significant difference across cortical depths (Fig. 3E middle, one-way ANOVA: F(4,705) = 1.559, p = 0.184; Fig. 3E right, one-way ANOVA: F(4,705) = 7.718, p < 0.0001). The intra-subject variabilities at the intermediate and superficial depths were significantly smaller than that at the deep depth (one-tailed t-test: nd–nd = 0.1–0.3, p = 0.003; nd–nd = 0.1–0.5, p < 0.0001; nd–nd = 0.1–0.7, p < 0.0001; nd–nd = 0.1–0.9, p = 0.002).

### Structural and functional examination of core and noncore regions

The narrowly tuned area shown in Fig. 3C was defined as the core region of the auditory cortex for the following feature-dependent iFC analysis. Considering the morphological variability of Heschl's gyrus across participants41, we back-projected the group-level-defined core region onto the cortical surface of individual participants. Results from five representative participants are shown in Fig. 6, indicating a good alignment of the back-projected core region with individuals' Heschl's gyrus. To further reduce the concern of defining the core and noncore regions from group data, we calculated the frequency preference and tuning width across cortical depths separately in the core and noncore regions. The log (frequency preference) in the core region at different cortical depths were 6.61 ± 0.25 (nd = 0.1), 6.61 ± 0.23 (nd = 0.3), 6.67 ± 0.23 (nd = 0.5), 6.71 ± 0.28 (nd = 0.7), and 6.72 ± 0.27 (nd = 0.9). 
The log (frequency preference) in the noncore region at different cortical depths were 6.82 ± 0.17 (nd = 0.1), 6.85 ± 0.17 (nd = 0.3), 6.81 ± 0.16 (nd = 0.5), 6.82 ± 0.18 (nd = 0.7), and 6.82 ± 0.18 (nd = 0.9). Although the core region preferred significantly lower frequency than the noncore region, the differences were consistent across cortical depths (one-tailed t-test: p < 0.05). Similar results were found in the tuning width data. The tuning width in the core region at different cortical depths were 1.43 ± 0.27 (nd = 0.1), 1.49 ± 0.23 (nd = 0.3), 1.56 ± 0.22 (nd = 0.5), 1.55 ± 0.24 (nd = 0.7), 1.62 ± 0.25 (nd = 0.9). The tuning width in the noncore region at different cortical depths were 1.86 ± 0.32 (nd = 0.1), 1.92 ± 0.32 (nd = 0.3), 1.89 ± 0.26 (nd = 0.5), 1.94 ± 0.25 (nd = 0.7), 1.97 ± 0.27 (nd = 0.9). The tuning widths in the core were consistently significantly smaller than those in the noncore region at each cortical depth (one-tailed t-test: p < 0.05). Note that the narrowest tuning width was found at the deepest cortical depth in both core and noncore regions.

Likewise, the inter-subject variability of frequency preference in the core region at different cortical depths were 1.00 ± 0.18 (nd = 0.1), 0.95 ± 0.20 (nd = 0.3), 0.97 ± 0.18 (nd = 0.5), 0.97 ± 0.17 (nd = 0.7), 1.02 ± 0.15 (nd = 0.9). 
The inter-subject variability of frequency preference in the noncore region at different cortical depths were 0.96 ± 0.12 (nd = 0.1), 0.90 ± 0.14 (nd = 0.3), 0.93 ± 0.16 (nd = 0.5), 0.92 ± 0.17 (nd = 0.7), 0.93 ± 0.16 (nd = 0.9). Comparing the two regions, the inter-subject variability was significantly larger in the core than in the noncore region only at the superficial depth (one-tailed t-test: nd = 0.9, p < 0.05). Since we used the same ROI across cortical depths, if the ROI mixed data between core and noncore regions, we would expect to observe an insignificant difference across all cortical depths. The observation of a significant inter-subject variability difference at the superficial cortical depth ruled out such speculation. The inter-subject variability of tuning width in the core region at different cortical depths were 1.14 ± 0.32 (nd = 0.1), 1.11 ± 0.32 (nd = 0.3), 1.08 ± 0.32 (nd = 0.5), 1.00 ± 0.33 (nd = 0.7), 1.07 ± 0.35 (nd = 0.9). The inter-subject variability of tuning width in the noncore region at different cortical depths were 1.35 ± 0.25 (nd = 0.1), 1.31 ± 0.31 (nd = 0.3), 1.26 ± 0.30 (nd = 0.5), 1.29 ± 0.34 (nd = 0.7), 1.29 ± 0.31 (nd = 0.9). We found significantly smaller inter-subject variability in the core than in the noncore region (one-tailed t-test: p < 0.05). However, the differences were also consistent across cortical depths. 
Such a significant difference between our group-level-defined core and noncore regions suggested that the potential bias due to different core and noncore region boundaries across participants was minimal.

### Analysis of feature-dependent intrinsic functional connectivity across cortical depths

Figure 7A shows the iFC as a function of the difference in frequency preference (Δ frequency) within core and noncore regions of the auditory cortex. Consistent with a previous study13, we found that at each of the five cortical depths, the iFC gradually decreased as Δ frequency increased in both core and noncore regions (Page's trend test: p < 0.0001). We then investigated whether the degree of selectivity of frequency–preference dependent iFC differs across cortical depths and regions. The selectivity of frequency–preference dependent iFC was quantified by the decay constant λ of an exponential decay function fitted to the Δ frequency–iFC data. Figure 7B shows the fitted λ in core and noncore regions of the auditory cortex at different cortical depths. Quantitatively, while the selectivity of frequency–preference dependent iFC was constant across depths in the noncore region (one-way ANOVA: F(4,95) = 0.257, p = 0.905), it varied significantly across cortical depths in the core region (one-way ANOVA: F(4,95) = 3.340, p = 0.013). In particular, the λ at the deep depth was found to be significantly larger than that at the intermediate and superficial depths (one-tailed t-test: nd–nd = 0.1–0.5, p = 0.044; nd–nd = 0.1–0.7, p = 0.044; nd–nd = 0.1–0.9, p = 0.027). The λ at the relatively deep intermediate depth was also found to be significantly larger than that at the superficial depth (one-tailed t-test: nd–nd = 0.3–0.9, p = 0.044). 
Comparing between core and noncore regions, we found that the λs of frequency–preference dependent iFC in the core were significantly higher than those in the noncore region at the deep (one-tailed t-test: nd = 0.1, p = 0.008) and relatively deep intermediate (one-tailed t-test: nd = 0.3, p = 0.020) depths. After averaging across cortical depths, we found that the selectivity of frequency–preference dependent iFC in the core region was significantly higher than that in the noncore region (Fig. 7C, one-tailed t-test: p = 0.001). This result was consistent with a previous study13.

We also examined the iFC as a function of the difference in tuning width (Δ tuning width). We found that the iFC gradually decreased as Δ tuning width increased in both core and noncore regions at each of the five different depths (Fig. 7D, Page's trend test: p < 0.001). Figure 7E shows that the selectivity of tuning–width dependent iFC was constant across cortical depths in both core (one-way ANOVA: F(4,95) = 0.319, p = 0.865) and noncore (one-way ANOVA: F(4,95) = 2.202, p = 0.075) regions. The selectivity of tuning–width dependent iFC across cortical depths substantiates the frequency–preference dependent iFC results: the selectivity of tuning–width dependent iFC in the core region was significantly higher than that in the noncore region at the deep (one-tailed t-test: nd = 0.1, p = 0.039) cortical depth. After averaging across cortical depths, we found that the selectivity of tuning–width dependent iFC in the core region was significantly higher than that in the noncore region (Fig. 7F, one-tailed t-test: p = 0.004). 
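The selectivity measure described above is the decay constant of an exponential function fitted to binned Δ feature versus iFC data. A minimal sketch of such a fit (the bin centers, correlation values, and model parameterization below are illustrative assumptions, not the study's data or exact model):

```python
import numpy as np
from scipy.optimize import curve_fit

def exp_decay(delta, a, lam):
    """iFC model: correlation falls off exponentially with the feature
    difference; a larger lam means more selective connectivity."""
    return a * np.exp(-lam * delta)

# Illustrative Δ-frequency bins (octaves) and synthetic mean iFC per bin,
# generated with a known decay constant of 0.8 plus small noise.
delta_f = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
rng = np.random.default_rng(0)
ifc = 0.6 * np.exp(-0.8 * delta_f) + rng.normal(0.0, 0.01, delta_f.size)

(a_hat, lam_hat), _ = curve_fit(exp_decay, delta_f, ifc, p0=(0.5, 1.0))
# lam_hat plays the role of the fitted λ compared across depths/regions.
```

Fitting one λ per cortical depth and region, and comparing the fitted constants, mirrors the ANOVA and t-test comparisons of λ reported in the text.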
The selectivity of feature-dependent iFC from five representative participants is shown in Fig.\u00a08, demonstrating the consistency of the results across individual participants.\n\nBecause our coil array provided a significantly smaller (one-tailed t-test: p\u2009<\u20090.05) SNR in the core (163.37\u2009\u00b1\u200959.64) than in the noncore (238.40\u2009\u00b1\u200976.26) region, we further tested whether the SNR difference influenced the feature-dependent iFC results. We selected an SNR-matched noncore ROI (in the medial portion of the original noncore ROI) with an SNR (181.12\u2009\u00b1\u200949.22) comparable to the core region, while keeping the same number of voxels as in the core region. We then repeated the frequency\u2013preference dependent iFC analysis on five representative participants. Both group (Fig.\u00a09A) and individual (Fig.\u00a09B) \u03bb profiles of frequency\u2013preference dependent iFC are very similar to those in Figs\u00a07B and 8A. These results indicate that the SNR difference between core and noncore regions did not influence the feature-dependent iFC results. In addition, to study whether other confounding factors such as motion and physiological noise affect the feature-dependent iFC results, we further pre-processed our time-series data using a 6th-order Butterworth bandpass filter between 0.01 and 0.1\u2009Hz. We also included six motion regressors and a white matter regressor (the average time series of a 1\u2009cm3 cube in the white matter) in the GLM to remove potential confounding factors. We then repeated the frequency\u2013preference dependent iFC analysis on five representative participants. Both group (Fig.\u00a09C) and individual (Fig.\u00a09D) \u03bb profiles of frequency\u2013preference dependent iFC are very similar to those in Figs\u00a07B and 8A.
These results indicate that motion and physiological noise did not influence the feature-dependent iFC results.\n\n## Discussion\n\nThis study delineates cortical depth-specific functional connectivity in the human auditory cortex. Our analysis of iFC found that residual activities between brain locations with more similar frequency preference (Fig.\u00a07A) and tuning width (Fig.\u00a07D) are more correlated. While this feature-dependent iFC exists in both core and noncore regions at all cortical depths, the degree of selectivity of feature-dependent iFC shows a significant difference across cortical depths in the core region (Fig.\u00a07B). Specifically, the selectivity of frequency\u2013preference dependent iFC is stronger at the deep depth than at the intermediate and superficial depths. Comparing the selectivity of frequency\u2013preference and tuning\u2013width dependent iFC between the core and noncore regions, we observed that both were significantly stronger in the core than in the noncore region at deep cortical depths (Fig.\u00a07B,E). These results were not explained by the SNR difference between the core and noncore regions (Fig.\u00a09A,B), or by other confounding factors, such as motion and physiological noise (Fig.\u00a09C,D). Taken together, we found a stronger selectivity of feature-dependent iFC in the core than in the noncore region as we moved from superficial to deep cortical depths. Previously, an invasive animal study revealed that the difference in functional similarity between functionally paired and unpaired neurons is higher at infragranular layers in the cat primary auditory cortex19. Our results echo this finding by demonstrating the relationship between functional similarity and iFC at deep cortical depths. The top-down feedback pathway has been reported to target the infragranular and supragranular layers42,43.
A recent study also showed that feedback activity induced by illusory figures led to selective activation in the deep human primary visual cortex30. Thus, our findings suggest that top-down modulation in the human auditory cortex is underpinned by an architecture of stronger selectivity of feature-dependent iFC at the deep cortical depth. We speculate that such an architecture is helpful in selectively activating or deactivating specific neuronal ensembles during sensory processing.\n\nCompared to other cortical depth-specific fMRI studies on the human auditory cortex using 1\u2009mm or higher isotropic resolution21,33,34, we used 1.5\u2009mm isotropic resolution, which may correspond to only two independent voxels across cortical depths. A larger image voxel has a stronger signal at the cost of functional specificity. We considered the 1.5\u2009mm resolution sufficient for supporting the results reported here, because (1) surface-based cortical depth analysis takes advantage of the highly folded and curved geometry of the human cortex, whose spatial extent results in variability in the number of EPI voxels intersecting the cortical surfaces across depths20, (2) accordingly, a previous fMRI study of the human visual cortex across cortical depths suggested that an image resolution of about \u00bd the cortical thickness is sufficient, (3) the human auditory cortex is about 50% thicker (~2.8\u2009mm) than the visual cortex (~1.8\u2009mm)44,45,46, and (4) cortical depth-specific functional activity was found in the human hippocampus and entorhinal cortex using data acquired at a 0.8\u2009mm isotropic resolution but spatially smoothed with a 1.5\u2009mm Gaussian kernel47. We also used a tailored 24-channel receiver coil array to acquire fMRI data in order to compensate for the relatively lower sensitivity at 3T than at 7T (with a whole-head array).
Importantly, our results are highly consistent with those from previous studies done at 7T with 1\u2009mm isotropic or higher resolution: first, the frequency preference was constant across cortical depths21,33,34 (Fig.\u00a03D). Second, the tuning width was significantly different across cortical depths: deep and intermediate depths had significantly narrower tuning widths than superficial depths33,34 (Fig.\u00a03E). Third, the inter-subject variability of the frequency preference was relatively larger at the superficial depth20,21 (Fig.\u00a03D). In sum, these findings suggest that our imaging protocol provided sufficient sensitivity and specificity for revealing cortical depth-specific functional characteristics.\n\nIn this study, we used relatively simple and artificial stimuli to elicit brain responses. These stimuli facilitated the analysis of frequency\u2013preference and tuning\u2013width dependent iFC. However, cortical depth-specific functional connections may also involve other complex acoustic characteristics, such as STRF structure17 and preferred spectral and temporal modulation frequency19. Further studies of the functional connectivity associated with these acoustic features may help elucidate how the brain encodes and integrates information when receiving complex and naturalistic auditory inputs.\n\nIn this study, the iFC was derived from the residual fMRI signal. Note that it was previously shown that the iFC from residual fMRI signals is similar to the iFC from the resting state13. However, it may also be interesting to explore cortical depth-specific functional connectivity during behavioral task engagement or under different cognitive conditions. For example, one study showed that the functional connectivity networks involved in early visual perception are modified by distinct task requirements48.
In addition, electrophysiological animal studies revealed that neuronal synchronization can be modulated by attentional state49,50 and adaptation51 in a layer-specific manner. How these modulations of functional connectivity vary across human cortical depths should be addressed in future studies.\n\nAs suggested by recent studies34,52, cortical depth fMRI data acquired by a gradient-echo EPI sequence may be biased by vasculature and vascular reactivity. Thus, we cannot attribute the iFC characteristics found here solely to neuronal responses. This bias can nevertheless be reduced by a tailored pulse sequence (at the cost of reduced sensitivity)52,53, by cortical depth-specific vasculature and vascular reactivity mapping, or by combining fMRI with invasive electrophysiological measurements, such as cortical depth-specific electrode recordings, to further clarify the physiological origin of the iFC characteristics found here.\n\n## Methods\n\n### Participants\n\nTwenty healthy subjects (age: 26.6\u2009\u00b1\u20096.3 years; 9 males) participated in this study. No participant had a history of hearing disorders or neurological disease. All methods were carried out in accordance with the guidelines and regulations of National Taiwan University Hospital. All experimental protocols were approved by the Institutional Review Board of National Taiwan University Hospital. Informed consent was obtained from all participants.\n\n### Auditory stimulation\n\nThe auditory stimuli consisted of a 20\u2009sec logarithmic tone chirp with a frequency span from 250\u2009Hz to 4,000\u2009Hz, followed by a 10\u2009sec silent period. During each imaging run, the 30\u2009sec stimulus was repeated 15 times, yielding a presentation frequency of 0.033\u2009Hz. Each participant completed four runs: two with rising chirp cycles (beginning at 250\u2009Hz and ending at 4,000\u2009Hz) and two with falling chirp cycles (beginning at 4,000\u2009Hz and ending at 250\u2009Hz).
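As a minimal sketch of this stimulus structure (not the authors' actual MATLAB/Psychtoolbox code), a logarithmic chirp followed by a silent period can be generated with SciPy. The 16 kHz audio sampling rate is our assumption; the paper does not state one.

```python
import numpy as np
from scipy.signal import chirp

fs = 16000                                  # audio sampling rate (Hz); assumed
t = np.arange(0, 20, 1 / fs)                # 20 s chirp segment
# Logarithmic frequency sweep, 250 Hz -> 4,000 Hz (rising) and the reverse
rising = chirp(t, f0=250, t1=20, f1=4000, method='logarithmic')
falling = chirp(t, f0=4000, t1=20, f1=250, method='logarithmic')
silence = np.zeros(int(10 * fs))            # 10 s silent period
block = np.concatenate([rising, silence])   # one 30 s cycle -> 0.033 Hz
```

Repeating `block` (or its falling-chirp counterpart) 15 times yields one 450 s run, matching the presentation frequency of 0.033 Hz.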
The stimuli were delivered binaurally using the Psychophysics Toolbox54 in MATLAB (MathWorks, Natick, MA, USA) via an MR-compatible insert earphone (Model S14, Sensimetrics, MA, USA). Acoustic scanner noise was further attenuated by earmuffs placed over the ears after earphone insertion. The stimulus intensity was kept constant across frequencies and was set individually at levels between 75 and 85\u2009dB SPL so that participants could hear the entire chirp clearly over the scanner noise.\n\n### Coil array construction and MRI acquisition\n\nAll data were acquired on a 3T MRI system (Skyra, Siemens Healthcare, Erlangen, Germany). We developed a dedicated 24-channel surface coil array with 50\u2009mm diameter loops to optimize the SNR for fMRI at the temporal lobe. Details of the RF coil circuitry have been described in Chu et al.55. A noise correlation matrix was estimated using data acquired with a zero-degree flip angle pulse sequence (FOV: 256 \u00d7 256 mm2; TR\u2009=\u2009100\u2009ms; TE\u2009=\u200930\u2009ms; BW\u2009=\u20092520\u2009Hz\/pixel; matrix\u2009=\u200964 \u00d7 64; slice thickness\u2009=\u2009256\u2009mm; 1 axial slice). SNR and tSNR were calculated using data acquired with a 1.5\u2009mm isotropic resolution gradient-echo EPI protocol with GRAPPA acceleration56 (FOV: 192 \u00d7 192 mm2; TR\u2009=\u20092500\u2009ms; TE\u2009=\u200928\u2009ms; FA\u2009=\u200990\u00b0; BW\u2009=\u20091260\u2009Hz\/pixel; 38 sagittal slices; R\u2009=\u20093). SNR was calculated by first using the GRAPPA approach to reconstruct fully sampled k-space data for each coil element in the array. These images were then combined using the noise-covariance weighted root-sum-of-squares reconstruction57. tSNR was calculated by taking the ratio between the mean and the standard deviation of the time series. For functional image acquisition, we used the same 1.5\u2009mm isotropic resolution EPI protocol with the temporal lobe array for reception.
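The tSNR definition above (temporal mean divided by temporal standard deviation, per voxel) is straightforward to sketch. The toy data below are ours, not the authors' measurements.

```python
import numpy as np

def tsnr(timeseries, axis=-1):
    """Temporal SNR: temporal mean divided by temporal standard deviation."""
    return timeseries.mean(axis=axis) / timeseries.std(axis=axis)

# Toy example: 4 voxels, 500 time points, baseline 100 with unit-variance noise
rng = np.random.default_rng(0)
data = 100 + rng.standard_normal((4, 500))
voxel_tsnr = tsnr(data)        # roughly 100 per voxel
```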
Due to the coil loop arrangement, the FOV was restricted to only the right hemisphere. A gradient-echo field-mapping scan (FOV: 192 \u00d7 192 mm2; TR\u2009=\u20091000\u2009ms; TE (short\/long)\u2009=\u200910\/12.46\u2009ms; FA\u2009=\u200990\u00b0; BW\u2009=\u2009260\u2009Hz\/pixel; 19 sagittal slices) with the same FOV as the EPI was also performed to collect data for distortion correction. Structural images for each participant were acquired using a whole-head 32-channel coil array and a 1\u2009mm isotropic resolution T1-weighted MPRAGE sequence (FOV: 256 \u00d7 256 mm2; TR\u2009=\u20092530\u2009ms; TE\u2009=\u20093.3\u2009ms; TI\u2009=\u20091100\u2009ms; FA\u2009=\u20097\u00b0; BW\u2009=\u2009200\u2009Hz\/pixel; 192 sagittal slices).\n\n### Cortical surface reconstruction\n\nThe triangulated mesh surfaces of the gray-white matter boundary, the gray-pial surface boundary, and nine cortical depths equally spaced across the cortical thickness were automatically reconstructed from the MPRAGE images using FreeSurfer20,58,59. The cortical thickness maps were derived from the gray-white matter and gray-pial surface boundaries44. We then took surfaces 2, 4, 6, 8, and 10 out of these 11 surfaces (normalized distance from the white matter boundary, nd\u2009=\u20090.1, 0.3, 0.5, 0.7, and 0.9), explicitly excluding the gray-white matter and gray-pial boundary surfaces. A gray-white matter boundary-based registration method implemented in FreeSurfer was used to form the rigid transformation between the functional and the anatomical data. Using this registration file, individuals\u2019 functional time-series volumes were projected to their own five intermediate cortical surfaces by the nearest-neighbor interpolation method. Between-subject averaging was done by morphing individual data through a spherical surface-based coordinate system60. The results of the cortical surface reconstruction as well as the functional-anatomical registration were all visually inspected.
The registrations for two out of twenty participants were further manually corrected.\n\n### Data analysis\n\nFunctional data were corrected for slice timing, motion, and field map-based distortion using the SPM12 software package (http:\/\/www.fil.ion.ucl.ac.uk\/spm\/) and temporally de-trended using a second-order polynomial. After being registered to the surface-based coordinate system, data were smoothed along the surface using a 2D Gaussian kernel at 5\u2009mm full-width at half-maximum (FWHM). To characterize the tonotopy, spectral tuning, and their variabilities across cortical depths, we used a phase-encoded fMRI method published previously61,62. Following this procedure, we applied the Fourier transform to the fMRI time series at each cortical location from each participant. The phase (\u03c6) and the amplitude (a) at the presentation frequency (fp, 0.033\u2009Hz) were used to construct a sinusoidal activation model.\n\n$${model}(t)=a(fp)\\times \\,\\cos (2\\pi \\cdot fp\\cdot t+\\phi (fp))$$\n(1)\n\nThe Pearson\u2019s correlation coefficient between the model and the fMRI time series was Fisher-transformed to the standard z-score. All further analyses for individual subjects were limited to cortical locations that responded significantly to the auditory stimulus (z\u2009>\u20091.65; p\u2009<\u20090.05). The phases calculated from the rising chirp (rc) and falling chirp (fc) data were averaged using the following equation to cancel the hemodynamic response delay. The averaged phase was then linearly transformed to the response latency, which denoted the continuous frequency preference.\n\n$${\\phi }_{avg}=({\\phi }_{rc}+2\\pi -{\\phi }_{fc})\/2$$\n(2)\n\nTo estimate the spectral tuning width, time-series data were segmented and averaged into one chirp presentation block. Ideally, the brain location with preferred frequency f0 may be partially activated by frequencies around f0 with smaller amplitudes.
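The phase-encoded model of Eqs. (1)-(2) can be sketched with an FFT: the amplitude and phase of the response at the presentation frequency fp are read off the Fourier coefficient at the nearest frequency bin. This is a hedged illustration on a synthetic voxel, not the authors' pipeline; the TR and run length below follow the acquisition parameters in this paper, but the example signal is fabricated.

```python
import numpy as np

def phase_encoded_fit(ts, fp, tr):
    """Amplitude and phase of the response at the presentation frequency fp.

    ts: fMRI time series (n_timepoints,); tr: repetition time in seconds."""
    n = ts.size
    freqs = np.fft.rfftfreq(n, d=tr)
    coeff = np.fft.rfft(ts - ts.mean())
    k = np.argmin(np.abs(freqs - fp))     # frequency bin closest to fp
    amp = 2 * np.abs(coeff[k]) / n        # cosine amplitude at fp
    phase = np.angle(coeff[k])            # phase at fp
    return amp, phase

def average_phase(phi_rc, phi_fc):
    """Eq. (2): average rising/falling chirp phases to cancel the
    hemodynamic response delay."""
    return (phi_rc + 2 * np.pi - phi_fc) / 2

# Synthetic voxel responding at fp = 1/30 Hz with amplitude 1.3, phase 0.5 rad
tr, n = 2.5, 180                 # 180 volumes of TR = 2.5 s (15 cycles of 30 s)
t = np.arange(n) * tr
fp = 1 / 30
ts = 1.3 * np.cos(2 * np.pi * fp * t + 0.5)
amp, phase = phase_encoded_fit(ts, fp, tr)
```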
Thus, the averaged time series was fitted with a Gaussian model and the FWHM of the fitted Gaussian curve was calculated. The FWHM expressed in octaves was taken as the spectral tuning width. Hence low and high values of tuning width correspond to narrow and broad spectral tuning, respectively. For the group analysis, the correlation z-scores, frequency preferences, and tuning widths were averaged between participants. To control for multiple comparisons, the functional ROI was determined as the cortical locations whose average correlation z-scores exceeded a voxel-wise statistical threshold of p\u2009<\u20090.01, corrected using false discovery rate (FDR) correction according to the number of voxels in the auditory-related anatomical ROI. The anatomical auditory-related ROI includes Heschl\u2019s gyrus, Heschl\u2019s sulcus, the planum temporale, the planum polare, and the superior temporal gyrus (Destrieux atlas\/aparc.a2005s implemented in FreeSurfer). Group results were calculated and displayed only in the mask formed by the intersection between the functional and anatomical ROIs. The resultant auditory cortex ROI did not extend to the most medial portion of Heschl\u2019s gyrus, probably due to the sharp reduction in sensitivity at distances far from the surface coil array. We nevertheless included this area in the auditory cortex ROI, considering that it is part of the primary auditory core63,64. The inter-subject variability was calculated by taking the standard deviation between participants. To calculate the intra-subject variability, we first arbitrarily separated the fMRI time series into two data groups with 15 rising chirp blocks and 15 falling chirp blocks, and estimated frequency preferences and tuning widths separately in the two groups. After repeating the process 20 times, the intra-subject variability was calculated by taking the standard deviation across the 40 (2 groups \u00d7 20 repetitions) results for each participant and then averaging across participants.
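The tuning-width estimate described above (Gaussian FWHM of the block-averaged response, converted to octaves) can be sketched as follows. The conversion assumes the 20 s chirp sweeps log2(4000/250) = 4 octaves, i.e. 0.2 octaves per second; the response curve below is fabricated for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(t, a, mu, sigma, b):
    """Gaussian bump of height a, center mu, width sigma, on baseline b."""
    return a * np.exp(-(t - mu) ** 2 / (2 * sigma ** 2)) + b

def tuning_width_octaves(t, response, octaves_per_sec):
    """FWHM of a Gaussian fit to the block-averaged response, in octaves."""
    p0 = (response.max() - response.min(), t[np.argmax(response)], 1.0,
          response.min())
    (a, mu, sigma, b), _ = curve_fit(gaussian, t, response, p0=p0)
    fwhm_sec = 2 * np.sqrt(2 * np.log(2)) * abs(sigma)   # FWHM of a Gaussian
    return fwhm_sec * octaves_per_sec

# Synthetic block-averaged response peaking 8 s into the 20 s chirp
t = np.linspace(0, 20, 200)
resp = gaussian(t, a=1.0, mu=8.0, sigma=2.0, b=0.1)
width = tuning_width_octaves(t, resp, octaves_per_sec=4 / 20)
```

A small `width` corresponds to narrow spectral tuning, as used to delineate the core region.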
For comparisons of the frequency preference, tuning width, and inter- and intra-subject variabilities across the five cortical depths, we applied one-way analysis of variance (ANOVA) for repeated measures. The post-hoc t-tests were corrected using FDR correction according to the number of cortical depth comparison pairs in our analysis (nd\u2013nd: 0.1\u20130.3, 0.1\u20130.5, 0.1\u20130.7, 0.1\u20130.9, 0.3\u20130.9, 0.5\u20130.9, and 0.7\u20130.9).\n\nFor the intrinsic functional connectivity (iFC) calculation, no spatial smoothing was applied during the pre-processing procedure. After regressing out the activation model from the fMRI time series, iFC was computed as the Pearson\u2019s correlation of the residual fMRI signals between cortical locations within the core or noncore region of the auditory cortex at each cortical depth. Based on previous studies showing a narrower tuning width at the human primary auditory core7,8, we defined the core region as a spatially continuous and narrowly tuned area (with a tuning width threshold of 1.7 octaves) on Heschl\u2019s gyrus in the tuning width map averaged across participants and cortical depths. The resultant core region delineation was generally in agreement with previous studies8,63,64. The noncore region was defined as the rest of the cortical locations in the auditory cortex ROI. To obtain the iFC as a function of frequency preference difference (\u0394 frequency, log-scale), iFC data corresponding to the \u0394 frequency between the seed and target vertices (within the same cortical depth) were calculated. We restricted this calculation to \u0394 frequency less than three octaves because there were fewer data points at larger \u0394 frequency. For the group analysis, the individual data were binned with \u0394 frequency bin edges of 0, 0.1875, 0.375, 0.75, 1.5, and 3 octaves, and then averaged across participants.
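The core iFC computation (regress out the activation model, then correlate the residuals, averaging correlations in Fisher z-space as described in Methods) can be sketched as below. This is our own minimal Python illustration on synthetic voxels, not the authors' MATLAB code.

```python
import numpy as np

def residualize(ts, model):
    """Regress the activation model (plus an intercept) out of each voxel.

    ts: (n_voxels, n_timepoints); model: (n_timepoints,)."""
    X = np.column_stack([np.ones_like(model), model])
    beta, *_ = np.linalg.lstsq(X, ts.T, rcond=None)
    return ts - (X @ beta).T

def ifc_matrix(residuals):
    """Pairwise Pearson correlation of residual time series."""
    return np.corrcoef(residuals)

def fisher_mean(rs):
    """Average correlations in Fisher z-space, then transform back."""
    return np.tanh(np.mean(np.arctanh(rs)))

# Three synthetic voxels: different response amplitudes to the same model,
# plus a shared residual fluctuation and independent noise
rng = np.random.default_rng(1)
t = np.arange(180) * 2.5
model = np.cos(2 * np.pi * t / 30)
shared = rng.standard_normal(180)
ts = (np.outer([1.0, 0.5, 2.0], model)
      + shared + 0.3 * rng.standard_normal((3, 180)))
resid = residualize(ts, model)
ifc = ifc_matrix(resid)      # high off-diagonal iFC from the shared residual
```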
During all averaging procedures, correlation coefficients were first transformed to Fisher\u2019s z-scores, then averaged, and finally transformed back to correlation coefficients to ensure linearity. To compare the selectivity of feature-dependent iFC between regions and cortical depths, we quantified the selectivity as the decay constant of an exponential decay model (y\u2009=\u2009R0\u2009exp(\u2212\u03bbx)) fitted to the \u0394 frequency-iFC data. Potential inflation of the type-I error due to multiple comparisons of the \u03bb profile between regions across five cortical depths was strictly corrected using Bonferroni correction. The iFC as a function of tuning width difference (\u0394 tuning width, log-scale) was computed using the same method. All calculations were done using MATLAB.\n\n## Data Availability\n\nThe authors declare that all the data in this manuscript are available.\n\n## References\n\n1. Robles, L. & Ruggero, M. A. Mechanics of the mammalian cochlea. Physiol Rev 81, 1305\u20131352 (2001).\n\n2. King, A. J. & Nelken, I. Unraveling the principles of auditory cortical processing: can we learn from the visual system? Nat Neurosci 12, 698\u2013701 (2009).\n\n3. Merzenich, M. M. & Brugge, J. F. Representation of the cochlear partition of the superior temporal plane of the macaque monkey. Brain Res 50, 275\u2013296 (1973).\n\n4. Formisano, E. et al. Mirror-symmetric tonotopic maps in human primary auditory cortex. Neuron 40, 859\u2013869 (2003).\n\n5. Humphries, C., Liebenthal, E. & Binder, J. R. Tonotopic organization of human auditory cortex. Neuroimage 50, 1202\u20131211 (2010).\n\n6. Rauschecker, J. P., Tian, B. & Hauser, M. Processing of complex sounds in the macaque nonprimary auditory cortex. Science 268, 111\u2013114 (1995).\n\n7. Wessinger, C. M. et al. Hierarchical organization of the human auditory cortex revealed by functional magnetic resonance imaging. J Cogn Neurosci 13, 1\u20137 (2001).\n\n
8. Moerel, M., De Martino, F. & Formisano, E. Processing of natural sounds in human auditory cortex: tonotopy, spectral tuning, and relation to voice sensitivity. J Neurosci 32, 14205\u201314216 (2012).\n\n9. Reale, R. A., Brugge, J. F. & Feng, J. Z. Geometry and orientation of neuronal processes in cat primary auditory cortex (AI) related to characteristic-frequency maps. Proc Natl Acad Sci USA 80, 5449\u20135453 (1983).\n\n10. Read, H. L., Winer, J. A. & Schreiner, C. E. Modular organization of intrinsic connections associated with spectral tuning in cat auditory cortex. Proc Natl Acad Sci USA 98, 8042\u20138047 (2001).\n\n11. Rothschild, G., Nelken, I. & Mizrahi, A. Functional organization and population dynamics in the mouse primary auditory cortex. Nat Neurosci 13, 353\u2013360 (2010).\n\n12. Fukushima, M., Saunders, R. C., Leopold, D. A., Mishkin, M. & Averbeck, B. B. Spontaneous high-gamma band activity reflects functional organization of auditory cortex in the awake macaque. Neuron 74, 899\u2013910 (2012).\n\n13. Cha, K., Zatorre, R. J. & Sch\u00f6nwiesner, M. Frequency selectivity of voxel-by-voxel functional connectivity in human auditory cortex. Cereb Cortex 26, 211\u2013224 (2016).\n\n14. Mitani, A. & Shimokouchi, M. Neuronal connections in the primary auditory cortex: an electrophysiological study in the cat. J Comp Neurol 235, 417\u2013429 (1985).\n\n15. Matsubara, J. A. & Phillips, D. P. Intracortical connections and their physiological correlates in the primary auditory cortex (AI) of the cat. J Comp Neurol 268, 38\u201348 (1988).\n\n16. Wallace, M. N., Kitzes, L. M. & Jones, E. G. Intrinsic inter- and intralaminar connections and their relationship to the tonotopic map in cat primary auditory cortex. Exp Brain Res 86, 527\u2013544 (1991).\n\n17. Atencio, C. A. & Schreiner, C. E. Columnar connectivity and laminar processing in cat primary auditory cortex. 
PLoS One 5, e9521 (2010).\n\n18. Atencio, C. A. & Schreiner, C. E. Auditory cortical local subnetworks are characterized by sharply synchronous activity. J Neurosci 33, 18503\u201318514 (2013).\n\n19. Atencio, C. A. & Schreiner, C. E. Functional congruity in local auditory cortical microcircuits. Neuroscience 316, 402\u2013419 (2016).\n\n20. Polimeni, J. R., Fischl, B., Greve, D. N. & Wald, L. L. Laminar analysis of 7 T BOLD using an imposed spatial activation pattern in human V1. Neuroimage 52, 1334\u20131346 (2010).\n\n21. Ahveninen, J. et al. Intracortical depth analyses of frequency-sensitive regions of human auditory cortex using 7T fMRI. Neuroimage 143, 116\u2013127, https:\/\/doi.org\/10.1016\/j.neuroimage.2016.09.010 (2016).\n\n22. Hoogenraad, F. G. et al. Sub-millimeter fMRI at 1.5 Tesla: correlation of high resolution with low resolution measurements. J Magn Reson Imaging 9, 475\u2013482 (1999).\n\n23. Logothetis, N., Merkle, H., Augath, M., Trinath, T. & Ugurbil, K. Ultra high-resolution fMRI in monkeys with implanted RF coils. Neuron 35, 227\u2013242 (2002).\n\n24. Triantafyllou, C. et al. Comparison of physiological noise at 1.5T, 3T and 7T and optimization of fMRI acquisition parameters. Neuroimage 26, 243\u2013250, https:\/\/doi.org\/10.1016\/j.neuroimage.2005.01.007 (2005).\n\n25. Ress, D., Glover, G. H., Liu, J. & Wandell, B. Laminar profiles of functional activity in the human brain. Neuroimage 34, 74\u201384 (2007).\n\n26. Koopmans, P. J., Barth, M. & Norris, D. G. Layer-specific BOLD activation in human V1. Hum Brain Mapp 31, 1297\u20131304 (2010).\n\n27. Olman, C. A. et al. Layer-specific fMRI reflects different neuronal computations at different depths in human V1. PLoS One 7, e32536 (2012).\n\n28. Huber, L. et al. Cortical lamina-dependent blood volume changes in human brain at 7T. Neuroimage 107, 23\u201333 (2015).\n\n29. Muckli, L. et al. 
Contextual feedback to superficial layers of V1. Curr Biol 25, 2690\u20132695 (2015).\n\n30. Kok, P., Bains, L. J., van Mourik, T., Norris, D. G. & de Lange, F. P. Selective activation of the deep layers of the human primary visual cortex by top-down feedback. Curr Biol 26, 371\u2013376 (2016).\n\n31. Nasr, S., Polimeni, J. R. & Tootell, R. B. Interdigitated color- and disparity-selective columns within human visual cortical areas V2 and V3. J Neurosci 36, 1841\u20131857 (2016).\n\n32. Scheeringa, R., Koopmans, P. J., van Mourik, T., Jensen, O. & Norris, D. G. The relationship between oscillatory EEG activity and the laminar-specific BOLD signal. Proc Natl Acad Sci USA 113, 6761\u20136766 (2016).\n\n33. De Martino, F. et al. Frequency preference and attention effects across cortical depths in the human primary auditory cortex. Proc Natl Acad Sci USA 112, 16036\u201316041 (2015).\n\n34. Moerel, M. et al. Sensitivity and specificity considerations for fMRI encoding, decoding, and mapping of auditory cortex at ultra-high field. Neuroimage pii, S1053\u20138119, 30284\u201330287 (2017).\n\n35. Schreiner, C. E. & Mendelson, J. R. Functional topography of cat primary auditory cortex: distribution of integrated excitation. J Neurophysiol 64, 1442\u20131459 (1990).\n\n36. Rauschecker, J. P. Cortical processing of complex sounds. Curr Opin Neurobiol 8, 516\u2013521 (1998).\n\n37. Kaas, J. H., Hackett, T. A. & Tramo, M. J. Auditory processing in primate cerebral cortex. Curr Opin Neurobiol 9, 164\u2013170 (1999).\n\n38. Sugimoto, S., Sakurada, M., Horikawa, J. & Taniguchi, I. The columnar and layer-specific response properties of neurons in the primary auditory cortex of Mongolian gerbils. Hear Res 112, 175\u2013185 (1997).\n\n39. Guo, W. et al. Robustness of cortical topography across fields, laminae, anesthetic states, and neurophysiological signal types. J Neurosci 32, 9159\u20139172 (2012).\n\n
40. Kanold, P. O., Nelken, I. & Polley, D. B. Local versus global scales of organization in auditory cortex. Trends Neurosci 37, 502\u2013510 (2014).\n\n41. Da Costa, S. et al. Human primary auditory cortex follows the shape of Heschl's gyrus. J Neurosci 31, 14067\u201314075 (2011).\n\n42. Felleman, D. J. & Van Essen, D. C. Distributed hierarchical processing in the primate cerebral cortex. Cereb Cortex 1, 1\u201347 (1991).\n\n43. Harris, K. D. & Mrsic-Flogel, T. D. Cortical connectivity and sensory coding. Nature 503, 51\u201358, https:\/\/doi.org\/10.1038\/nature12654 (2013).\n\n44. Fischl, B. & Dale, A. M. Measuring the thickness of the human cerebral cortex from magnetic resonance images. Proc Natl Acad Sci USA 97, 11050\u201311055 (2000).\n\n45. Meyer, M., Liem, F., Hirsiger, S., J\u00e4ncke, L. & H\u00e4nggi, J. R. Cortical surface area and cortical thickness demonstrate differential structural asymmetry in auditory-related areas of the human cortex. Cereb Cortex 24, 2541\u20132552 (2014).\n\n46. Burge, W. K. et al. Cortical thickness in human V1 associated with central vision loss. Sci Rep 6, 23268 (2016).\n\n47. Maass, A. et al. Laminar activity in the hippocampus and entorhinal cortex related to novelty and episodic encoding. Nat Commun 5, 5547 (2014).\n\n48. Seidkhani, H. et al. Task modulates functional connectivity networks in free viewing behavior. Neuroimage 159, 289\u2013301 (2017).\n\n49. Fries, P., Reynolds, J. H., Rorie, A. E. & Desimone, R. Modulation of oscillatory neuronal synchronization by selective visual attention. Science 291, 1560\u20131563 (2001).\n\n50. Nandy, A. S., Nassi, J. J. & Reynolds, J. H. Laminar organization of attentional modulation in macaque visual area V4. Neuron 93, 235\u2013246 (2017).\n\n51. Hansen, B. J. & Dragoi, V. Adaptation-induced synchronization in laminar cortical circuits. Proc Natl Acad Sci USA 108, 10720\u201310725 (2011).\n\n
52. De Martino, F. et al. Cortical depth dependent functional responses in humans at 7T: improved specificity with 3D GRASE. PLoS One 8, e60514 (2013).\n\n53. Boxerman, J. L., Hamberg, L. M., Rosen, B. R. & Weisskoff, R. M. MR contrast due to intravascular magnetic susceptibility perturbations. Magn Reson Med 34, 555\u2013566 (1995).\n\n54. Brainard, D. H. The psychophysics toolbox. Spat Vis 10, 433\u2013436 (1997).\n\n55. Chu, Y.-H., Hsu, Y.-C., Keil, B., Kuo, W.-J. & Lin, F.-H. A 32-channel head coil array with circularly symmetric geometry for accelerated human brain imaging. PLoS One 11, e0149446 (2016).\n\n56. Griswold, M. A. et al. Generalized autocalibrating partially parallel acquisitions (GRAPPA). Magn Reson Med 47, 1202\u20131210 (2002).\n\n57. Roemer, P. B., Edelstein, W. A., Hayes, C. E., Souza, S. P. & Mueller, O. M. The NMR phased array. Magn Reson Med 16, 192\u2013225 (1990).\n\n58. Dale, A. M., Fischl, B. & Sereno, M. I. Cortical surface-based analysis. I. Segmentation and surface reconstruction. Neuroimage 9, 179\u2013194 (1999).\n\n59. Fischl, B., Sereno, M. I. & Dale, A. M. Cortical surface-based analysis. II: Inflation, flattening, and a surface-based coordinate system. Neuroimage 9, 195\u2013207 (1999).\n\n60. Fischl, B., Sereno, M. I., Tootell, R. B. & Dale, A. M. High-resolution intersubject averaging and a coordinate system for the cortical surface. Hum Brain Mapp 8, 272\u2013284 (1999).\n\n61. Talavage, T. M. et al. Tonotopic organization in human auditory cortex revealed by progressions of frequency sensitivity. J Neurophysiol 91, 1282\u20131296 (2004).\n\n62. Striem-Amit, E., Hertz, U. & Amedi, A. Extensive cochleotopic mapping of human auditory cortical fields obtained with phase-encoding fMRI. PLoS One 6, e17832 (2011).\n\n63. Wasserthal, C., Brechmann, A., Stadler, J., Fischl, B. & Engel, K. 
Localizing the human primary auditory cortex in vivo using structural MRI. Neuroimage 93(Pt 2), 237\u2013251, https:\/\/doi.org\/10.1016\/j.neuroimage.2013.07.046 (2014).\n\n64. Schonwiesner, M., Dechent, P., Voit, D., Petkov, C. I. & Krumbholz, K. Parcellation of human and monkey core auditory cortex with fMRI pattern classification and objective detection of tonotopic gradient reversals. Cereb Cortex 25, 3278\u20133289, https:\/\/doi.org\/10.1093\/cercor\/bhu124 (2015).\n\n## Acknowledgements\n\nThis work was partially supported by the Ministry of Science and Technology, Taiwan (103-2628-B-002-002-MY3, 105-2221-E-002-104), the National Health Research Institute, Taiwan (NHRI-EX107-10727EI), and the Academy of Finland (No. 298131).\n\n## Author information\n\nP.Y.W., J.F.L., W.J.K. and F.H.L. designed the research. P.Y.W. and Y.H.C. performed the research. Y.H.C. and J.F.L. provided the measurement tools and stimuli. P.Y.W., W.J.K. and F.H.L. wrote the paper.\n\nCorrespondence to Fa-Hsuan Lin.\n\n## Ethics declarations\n\n### Competing Interests\n\nThe authors declare no competing interests.\n\nPublisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.\n\nWu, P., Chu, Y., Lin, J.L. et al. Feature-dependent intrinsic functional connectivity across cortical depths in the human auditory cortex. Sci Rep 8, 13287 (2018). 
https:\/\/doi.org\/10.1038\/s41598-018-31292-x\n\n\u2022 Accepted:\n\n\u2022 Published:","date":"2020-01-29 08:05:11","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 2, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.6027379035949707, \"perplexity\": 6063.347140347906}, \"config\": {\"markdown_headings\": true, \"markdown_code\": false, \"boilerplate_config\": {\"ratio_threshold\": 0.3, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2020-05\/segments\/1579251789055.93\/warc\/CC-MAIN-20200129071944-20200129101944-00413.warc.gz\"}"}
null
null
# Memo Series & Knowledgebase

## CASAcore Memos

CASA's underlying structure is based on casacore. The CASAcore Notes describe some of the most fundamental properties, such as the table system, data selection syntax, and data model definitions.

## CASA Knowledgebase

The CASA Knowledgebase pages provide bits of wisdom on CASA that should be preserved, and that may be of use to expert users.

This script re-calculates the common beam by discarding certain channels where the point-spread-function (PSF) or 'beam' deviates substantially from those of the other channels.

It can happen that an MS contains one or more channels for which the point-spread-function (PSF) or 'beam' deviates a lot from those of the other channels. This can be the result of substantial flagging, missing data, or other factors. When making a data cube with restoringbeam='common', these "outlier" channels can create a common beam that does not reflect the beam for the bulk of the channels.

The example_robust_common_beam.py script corrects the common beam by detecting and/or flagging outlier channels in the calculation of the common beam. Outlier channels are identified as those whose beam area deviates from the median beam area by more than a user-specified factor. The script does the following:

• Run tclean with niter=0

• Detect/flag outliers among the per-channel beams

• Use the remaining beams with ia.commonbeam() to get a new common beam

• Specify this common beam explicitly to tclean in the subsequent niter>0 run

The attached script primarily demonstrates the solution of iter0 -> calc_good_beam -> specify_restoringbeam_in_iter1 along with tclean.
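The outlier-detection heuristic above can be sketched in a few lines of numpy. This is only an illustration of the median-area test, not the actual example_robust_common_beam.py script: the function name `flag_outlier_beams` and the `dev_factor` threshold are hypothetical, and in a real CASA session the per-channel beams would come from the iter0 tclean products, with the surviving beams then passed to the image tool's commonbeam method.

```python
import numpy as np

def flag_outlier_beams(bmaj, bmin, dev_factor=2.0):
    """Flag channels whose beam area deviates from the median beam area
    by more than dev_factor (hypothetical helper, not part of CASA).

    bmaj, bmin : per-channel beam major/minor axes (arcsec).
    Returns a boolean mask: True = keep channel, False = outlier.
    """
    # Gaussian beam solid angle is proportional to bmaj * bmin;
    # the constant pi / (4 ln 2) cancels in the ratio below.
    area = np.pi / (4.0 * np.log(2.0)) * np.asarray(bmaj) * np.asarray(bmin)
    ratio = area / np.median(area)
    # Outlier if the area is dev_factor times larger or smaller than the median.
    return (ratio < dev_factor) & (ratio > 1.0 / dev_factor)

# Example: one heavily-flagged channel with a much larger beam
bmaj = [1.0, 1.02, 0.98, 3.5, 1.01]
bmin = [0.8, 0.81, 0.79, 2.9, 0.80]
keep = flag_outlier_beams(bmaj, bmin)
# keep -> [True, True, True, False, True]
```

The surviving channels would then define the common beam handed to tclean's restoringbeam parameter in the niter>0 run.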
If the new common beam is not larger than all the bad beams, then the iter0 tclean run's restoration step will throw warnings for all channels that cannot be convolved to the new common beam.

The functionality provided by this script is not yet implemented in the commonbeam method of the image analysis tool (ia.commonbeam).

Please note that the script is based on an ALMA test dataset that was used for characterizing the problem in pipeline processing at the NAASC (JIRA ticket PIPE-375). Parameters have to be adjusted for each use case, including the heuristics to detect outlier channels and what to substitute for the 'bad' beams.

### ALMA polarization: XY-phase solver avoids +-45 deg solution

The XY-phase calibration obtained with the CASA task gaincal (gaintype='XYf+QU') avoids outputting +/-45 degree solutions. This is a subtle consequence of the algorithm used to solve for the cross-hand phase, and should be harmless for the calibration of ALMA full-polarization data.

When using the task gaincal with parameter gaintype='XYf+QU', the solution amounts to fitting a slope in 2D data (imag vs. real) where the data have noise in both dimensions. In such cases, it is always better to fit a shallow (not steep) slope, so if the slope (for imag vs. real) comes out >1.0, the solver flips the axes (to real vs. imag), re-fits, and inverts the resulting value to account for the swap. This minimizes the effect of the real-axis noise on the slope calculation, and yields far more accurate solutions when the nominal slope is very large (>>1.0, e.g., the data nearly parallel to the imag axis, i.e., cross-hand phase approaching +/-90 deg).

The case of slope = 1.0 (which is a cross-hand phase of 45 deg) is the slope at which the axis-swap decision pivots. When plotting the cross-hand phase solutions, a gap appears at +-45 deg (see figure).
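The pivot-at-slope-1 logic can be sketched with a minimal zero-intercept least-squares fit. This is an illustration of the idea only, not the gaincal solver itself; the function name `fit_slope_with_swap` is hypothetical.

```python
import numpy as np

def fit_slope_with_swap(re, im):
    """Fit im = slope * re by zero-intercept least squares, swapping the
    axes whenever the nominal slope exceeds 1 so that only shallow
    slopes are ever fit (sketch of the gaintype='XYf+QU' pivot idea).
    """
    re, im = np.asarray(re, float), np.asarray(im, float)
    slope = np.dot(re, im) / np.dot(re, re)      # nominal slope, imag vs. real
    if abs(slope) > 1.0:
        # Steep case: fit re = s' * im instead, then invert s'.
        s_swapped = np.dot(im, re) / np.dot(im, im)
        slope = 1.0 / s_swapped
    return slope
```

The cross-hand phase is arctan(slope), so the swap decision at |slope| = 1 corresponds to the +/-45 deg boundary discussed here.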
This gap is a property of the typical spacing between values in the statistical distribution of slope values. I.e., for a sample of N points filling a distribution with a finite width, there is a characteristic minimum spacing between values that is some small fraction of the width of the distribution. Of course, smaller spacings are not forbidden, but they are rare. The axis swap reveals this property since all of the (nominal) slopes that are >1.0 (cross-hand phases >45.0 deg) are fit with swapped data, yield the inverse slope (<1.0), and are then inverted to be >1.0 again. The typical slopes mostly do not get arbitrarily close to exactly 1.0, so the gap appears. This is essentially an extension of the fact that, for any slope, the precise value need not be realized in any instance in a sample of solutions for it. E.g., in a sample of N Gaussian-distributed values centered on some specific value, the likelihood of any sample having that precise central value is vanishingly small.

The gap should become smaller when either the noise decreases, or the number of channels (for the same noise) increases.

This feature should be harmless for the calibration of ALMA full-polarization data.

### User tips on installing CASA on unsupported OSs

This document provides tips from users on how to install CASA on unsupported operating systems, such as the Linux distributions Ubuntu, Debian, and Fedora.

Disclaimer: the information in this document is provided by users and not verified by the CASA team. The information does not reflect official CASA recommendations and should be used at your own risk.

We realize that many users wish to try and run CASA on different operating systems. Below are some tips from the user community on installing CASA on unsupported platforms.
CASA will not run on Windows.

• Ubuntu or Debian - Please see the following PDF: Installing CASA on Ubuntu or Debian

• Fedora - Fedora 32 may cause CASA to crash on startup with the error "code for hash md5 was not found". This is caused by changes to libssl.so in the compat-openssl10 package, which prevents the CASA-supplied version of this library from loading. An easy fix is to replace the CASA version of libssl.so.10 with the OS version in /lib64 (i.e., libssl.so.1.0.2o).

### Wideband Mosaic Imaging and Pointing Corrections for the VLA Sky Survey

This Knowledgebase article describes testing results for wideband mosaic imaging and pointing corrections for VLASS.

The Very Large Array Sky Survey (VLASS) critically depends on accurate wideband mosaicing algorithms for its single epoch imaging and ultimately cumulative imaging. To a large degree, VLASS requirements have driven CASA's development of the AW-projection and related algorithms for widefield and wideband imaging in CASA. This Knowledgebase article provides complementary reports on the testing of Wideband Mosaic Imaging and Pointing Corrections for VLASS.

### Calculation of Weights for Data with Varying Integration Time

Knowledgebase Article: Calculation of Weights for Data with Varying Integration Time

George Moellenbrock, original 18 Dec 2018, latest edited version 04 Nov 2019

When the nominal weights are expected to be uniform (because integration time, channel bandwidth, effective Tsys, collecting area, etc. are all uniform, or the visibilities are normalized), extracting weight information from the apparent statistics of otherwise stable visibility measurements is a simple matter of calculating the apparent simple variance in the visibility real and imaginary parts over a sufficient local sample of values.
The real and imaginary part variances should be approximately equal, and the inverse of their mean is the correct weight to assign to each of the visibility values within the sample. Here, "stable visibility" means no systematic variation of amplitude or phase within the local sample. Noise-dominated visibilities are ideal; otherwise, well-calibrated data with no true visibility variation are desirable. These conditions are also needed for the more general case described below.

When the integration time (aka "exposure") varies within the local sample (such as can be generated by averaging of the data post-correlation, where the number of samples per averaging bin may vary, especially at the ends of scans), we expect the correct variance for each visibility to be inversely proportional to the net integration time, and this complicates the calculation. It is necessary to determine a weighted variance per unit inverse integration time, wherein the sample weights for the variance calculation are the per-visibility integration times, $$e_i$$. If the only reason the underlying variance differs among samples is the variable integration time, then a uniform normalized variance estimate of the whole sample may be obtained by scaling the residual data per sample by the square root of their (known) integration times. Here, residual data means that any underlying visibility signal (presumably the average of the visibility samples, using nominal weights proportional to integration time, etc.) has been subtracted. The simple variance of this rescaled sample is, in effect, the variance per unit inverse integration time.

For visibilities $$V_i$$, with integration times $$e_i$$:

$$\langle{\rm var}_{\rm norm}\rangle = {\rm Sum}(e_i (V_i - \langle V\rangle)^2)\,/\,N$$ [1]

where

$$\langle V\rangle = {\rm Sum}(w_i V_i)\,/\,{\rm Sum}(w_i)$$ [1a]

and $$w_i$$ are the nominal data weights, presumably proportional to integration time and other relevant factors. In practice, we could probably just use $$w_i = e_i$$ in equation [1a], since all of the other relevant factors within $$w_i$$ are assumed constant within the sample. Note that the units of $$\langle{\rm var}_{\rm norm}\rangle$$ are squared visibility amplitude (Jy$$^2$$, presumably) times seconds. Note also that $$\langle{\rm var}_{\rm norm}\rangle$$ is essentially the simple variance of the ensemble $$\sqrt{e_i}\,dV_i$$ (where $$dV_i = V_i - \langle V\rangle$$), i.e., of the residual visibilities scaled so that their noise is independent of integration time.

The normalized weight per unit integration time is thus the inverse of $$\langle{\rm var}_{\rm norm}\rangle$$:

$$W_{\rm norm} = 1\,/\,\langle{\rm var}_{\rm norm}\rangle$$ [2]

and per-datum revised weights may be calculated as:

$$W_i = W_{\rm norm}\, e_i$$ [3]

Another way of arriving at this result is to calculate a weighted variance:

$$\langle{\rm var}\rangle = {\rm Sum}(e_i (V_i - \langle V\rangle)^2)\,/\,{\rm Sum}(e_i)$$ [4]

together with the (simple) mean exposure time, which is:

$$\langle e\rangle = {\rm Sum}(e_i)\,/\,N$$ [5]

The product of these yields $$\langle{\rm var}_{\rm norm}\rangle$$, as above in [1]:

$$\langle{\rm var}_{\rm norm}\rangle = \langle{\rm var}\rangle\,\langle e\rangle$$ [6]

and $$W_{\rm norm}$$ may be formed and applied as in [2] and [3] above.

This calculation should be done for both real and imaginary parts of the visibility sample and averaged, or for both parts jointly, and [3] used to form the revised weights.

NB: In calculating sample variance, it is generally customary to acknowledge the loss of one degree of freedom due to use of the mean visibility, $$\langle V\rangle$$, in the calculation. Essentially, $$\langle V\rangle$$ will have a finite error that tends to bias the resulting variance downward. For simple variance calculations, a factor N/(N-1) is applied to the variance calculation to unbias it, and this factor can be significant for modest N.
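Equations [1]-[3] can be sketched directly in numpy. This is an illustration only (the function name `revised_weights` is hypothetical, the task statwt performs the real calculation), it takes $$w_i = e_i$$ as suggested above, evaluates the real and imaginary parts jointly, and omits the degrees-of-freedom correction:

```python
import numpy as np

def revised_weights(vis, e):
    """Per-datum weights for data with varying integration time,
    following eqns [1]-[3] (illustrative sketch, w_i = e_i assumed).

    vis : complex visibilities over a locally stable sample.
    e   : per-visibility integration times (s).
    """
    vis, e = np.asarray(vis), np.asarray(e, float)
    w = e                                     # nominal weights, w_i = e_i
    vmean = np.sum(w * vis) / np.sum(w)       # eqn [1a]
    dv = vis - vmean
    # eqn [1], real and imaginary parts treated jointly (averaged)
    var_norm = np.sum(e * (dv.real**2 + dv.imag**2) / 2.0) / len(vis)
    w_norm = 1.0 / var_norm                   # eqn [2]
    return w_norm * e                         # eqn [3]
```

Note the result is proportional to $$e_i$$, as equation [3] requires: doubling all integration times (at fixed scatter) doubles each weight.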
Since a non-trivially weighted mean is used in the above (otherwise simple, non-weighted) variance calculation (eqn [1]), it may be appropriate to consider a more carefully weighted calculation for the N/(N-1) factor. The required factor is:

$$D = 1 - \left({\rm Sum}(w_i^2)\,/\,{\rm Sum}(w_i)^2\right)$$ [9]

where $$w_i$$ are the a priori nominal weights used in [1a] above. For uniform weights this factor can be shown to equal (N-1)/N, and so it should be divided into the $$\langle{\rm var}_{\rm norm}\rangle$$ result.

However, since the nominal error in the variance (and thus the weights) will be <10% for N>10 (an accuracy we are unlikely to achieve in general anyway), and will be uniform over many sample groups in the overall statwt execution, we assume that it is adequate to use the simpler N/(N-1) factor, or to omit it entirely.

### Bug affecting polarization visibility data in concatenated data

Knowledgebase Article: characterization of a bug that affected polarization visibility data in concatenated data in CASA versions up to 5.6.

George Moellenbrock, original 17 Apr 2020, latest edited version 12 May 2020

CASA 5.7/6.1 fixed a bug in concat and importfitsidi that affected polarization visibility data in concatenated MSs. The problem that occurs in CASA versions earlier than 5.7/6.1 is that the cross-hands may be spuriously mis-ordered on a subset of baselines in the concatenated MS. This Knowledgebase Article describes the main effects that this bug has on concat'd data in general, and in particular on the processing of ALMA and VLA data with CASA up to and including version 5.6. A careful analysis has revealed the following effects in concat'd MSs in CASA versions prior to 5.7/6.1:

1. In general, visibility data cross-hands may be spuriously swapped on some baselines in concat'd data when the antenna lists in the input MSs are partially different.
The concat task adopts the first (in time order) MS's antenna list for the output MS, and appends unique antennas from later MS(s), typically with larger indices than in their original MSs. Depending on the original antenna indexing, baselines between these additional antennas and antennas that did occur in the first MS may sometimes require conjugation (reversal of the order of antennas in the baseline) to maintain internal indexing consistency within the output MS (for all baselines in an MS, the first antenna must have an index which is lower than, or the same as, the index of the second antenna). When baselines are conjugated in this way, the sign of the phase of each correlation and the UVWs must be reversed, and the cross-hands swapped (RL or XY will become LR or YX, respectively, and vice-versa). Prior to CASA 5.7/6.1, the sign reversals were correct, but the cross-hand swap was spuriously omitted. For successfully calibrated data (i.e., calibrated prior to running concat), this means that the sign of the imaginary part of the cross-hand visibilities will be incorrect, and thus the sign of Stokes U (for circular feeds) or Stokes V (for linear feeds) will be incorrect on the affected baselines in concat'd data. Since the pathology affects only the cross-hands, polarimetry calibration of concat'd data may be adversely affected.

For reconfigurable arrays, note that an antenna's particular position (pad) makes it unique; i.e., a specific physical antenna that has moved is a new unique antenna in this context. Concatenation of entirely disjoint antenna lists is unaffected, since all additional antennas in the concatenation will have uniformly incremented indices, and no baseline conjugation will be required. Certain cases of different antenna lists are immune to this problem, e.g., new unique antennas with already-higher indices than all other common antennas, etc.

2.
The concat task initially sorts the MSs into time order (only at whole-MS granularity), so the effect cannot be controlled merely by adjusting the order in which the input MSs are specified in the concat call (i.e., there is no easy user-directed fix).

3. An implicit concat happens in importfitsidi (i.e., VLBI, typically) when specifying multiple FITS-IDI files. Here the antenna re-indexing occurs upon fill, and thus most likely before calibration. Since ordinary gain calibration uses only the parallel-hands, it will not be affected by the underlying cross-hand swap error. However, polarimetry calibration will be affected to the extent that there are spuriously swapped cross-hands within the filled MS. EVN observations consisting of multiple FITS-IDI files will have the same antenna table in each file and are therefore unaffected by this bug.

4. Since the pathology affects only the cross-hands, purely Stokes I (total intensity) observations of any kind should not be affected (even if the cross-hands are present and some are affected, and thus technically incorrect).

5. For ALMA and VLA data, calibration typically occurs prior to any potentially pathological use of concat, and the impact will be as follows:

• ALMA: Total intensity observations of any kind are not affected. For successfully calibrated (per session) ALMA polarization data not subject to this pathology in the concat of multiple contiguous execblocks within each session*, the pathology affects only Stokes V (circular polarization) when concat-ing multiple sessions subject to the baseline conjugation conditions described in item 1 above. This is because the spurious cross-hand swap effectively sabotages only the sign of the imaginary part of the cross-hands on some baselines, i.e., the apparent Stokes V signal sampled by linear feeds. The net effect will be to suppress the net Stokes V signal in imaging. This presumably affects only a very tiny minority of existing ALMA observations.

• Note: In the course of standard scripted ALMA polarimetry calibration, the split task does not remove antennas from the ANTENNA subtable, even when they are fully flagged and keepflags=False. The data rows are not retained in this case, but the ANTENNA subtable seen by concat remains complete. Therefore, as long as the execblocks within the contiguous polarization session are uniform as regards antenna population (as is intended), the polarization calibration within an individual session should not be subject to this pathology.

• VLA: Total intensity observations of any kind are not affected. For successfully calibrated VLA polarization observations (which typically do not require a prior concat), the pathology affects only Stokes U when concat-ing multiple observations subject to the baseline conjugation conditions described in item 1 above. By the same logic as for ALMA, Stokes U (the cross-hand imaginary part for circular feeds) will be suppressed. Since this affects part of the linearly polarized net response, a larger fraction of VLA cases (cf. ALMA) may be affected. The pathological condition arises when antennas are variously removed from the array (typically up to 2 or 3 at any given time) for incidental maintenance, so as to generate datasets with fewer than the full complement of 27 antennas, or when antennas move in and out of the barn (even when there are 27 antennas present in each observation), and when such disparate observations are concat'd. For concats of different VLA configurations, some baselines to antennas that did not move (typically ~12 out of 27 antennas) between the configurations will be affected, even if the total antenna lists (by name/number) have not changed.
This is because antennas that did move (only) are unique new antennas in the concat by virtue of their new positions, and some baselines between them and the stationary antennas must be conjugated in the concat.

6. If observations are combined implicitly in imaging by specifying a list of MSs to tclean, there should be no problem, since the bug is an artifact of the mechanical combination of MSs into a single MS on disk.

### Single Dish imaging bug for EBs with different antenna positions in common antenna names and IDs

A bug was found for imaging of single-dish data with the tasks sdimaging and tsdimaging in CASA versions 5.6 and lower. This bug could cause an inaccurate direction conversion when more than one MS is given and these data contain the same antenna with the same antenna ID but with a different location, i.e., station or pad where the antenna is placed. The bug affects the brightness distribution and flux density in the combined image, since the coordinates are not correct for some fraction of the data set.

Details of this bug can be found in this Knowledgebase Article.

The bug was fixed in CASA 5.7/6.1.

### CASA Data Repository for developers building from source

For general users, information on the CASA Data Repository can be found on the "External Data" pages in CASA Docs.

For the build from source, such as those the developers use, the "casadata" is taken from a list of directories given in .casa/config.py. A second set of datasets is necessary if the developer wants to run the CASA tests.
The new casatestdata repository can be cloned following the instructions given in Bitbucket: https://open-bitbucket.nrao.edu/projects/CASA/repos/casatestdata/browse

For example, for developers at NRAO Charlottesville, the entries in their config.py could look like:

datapath=['/home/casa/data/casatestdata/','/home/casa/data/distro/']

or just this:

datapath=['/home/casa/data/casatestdata/']

where /home/casa/data/casatestdata/ is the new test data repository of CASA that contains all the data needed to run CASA tests, and /home/casa/data/distro/ contains only the "casadata" needed to launch CASA; from here it is packaged to be included in the casalith tarballs.

For CASA built from source, the distro data is found in the following places:

• casa6 build from source: distro is taken from .casa/config.py under datapath=['/some-directory/distro/']

• casa5 build from source: distro is taken from under $CASAPATH/data/

Warning to developers regarding awproject using CASA 6.1 or earlier: In CASA 6.2 a bug was fixed in the way that the awproject gridder in tclean calls the data repository in CASA 6. Previously tclean looked at .casa/config.py for the location of the distro data, but only considered whether the root data directory existed, without checking the existence of the data (more specifically, the antenna response data) actually used by it. As of CASA 6.2, the logic of checking the data path to look for the antenna response using the awproject gridder in tclean is as follows:

1. It looks for the antenna response data directory by constructing the full path with the root data paths from dataPath(), which returns the data paths defined in config.py (for casa6). The specific data directory to look for varies by telescope (i.e. ALMA vs VLA).

2.
If 1 fails, it looks for the data in the path determined by the function distroDataPath(), which returns the default root data directory path in the case of casalith tarballs.

3. If all of the above fail, it tries the path defined by CASAPATH (only relevant for casa5).

### Cube Refactor

A refactor of cube imaging in tclean was implemented in CASA 6.2 with the following goals:

• Parallel and serial should, within numerical precision, be equal (6.1 and previous gave slightly different results depending on the number of processors used).

• Eliminate refconcat cubes, which could be a performance problem in image analysis.

• Ability to restart tclean for cubes with a different number of processors (6.1 and previous had to be restarted with the exact same number of processors used in the first run of tclean).

• Ability to save model visibilities in MODEL_DATA (or virtual) when running tclean in parallel.

• Remove the clash of chanchunking and parallel numprocesses (e.g. multiple field cubes and chanchunking).

As part of this extensive cube refactor, the following features were implemented in 6.2:

• Parallel and serial runs now use the same code, which has fixed the differences that used to be found between serial and different n-parallel runs. Serial and parallel runs now give identical results to within numerical precision.

• Refconcat cubes are no longer produced (also the directory workingdir is no longer made for cube runs).

• tclean can be restarted in parallel or serial independent of how the first run was achieved.

• For cubes, model visibilities can now be saved in parallel.

• In a parallel run, a common beam can now be given (e.g. restoringbeam='common'). Previously, tclean had to be re-run in serial to restore to a single restoring beam for all channels.

• Tracking a moving source (with ephemeris tables) now works in parallel with specmode='cubesource'.

• The chanchunks parameter has been removed, as the refactored code made it redundant.

• parallel=True or parallel=False is not considered for cube imaging. If CASA is launched via the mpicasa call, then cube imaging runs in parallel; if it is invoked via the casa call, then it runs in serial.

• Interactive tclean for cubes now works when launched in parallel.

• Using a psf to reset to another psfphasecenter for mosaics now works for serial and parallel.

• The major and minor cycle states have been dissociated: when a major cycle is happening, the minor cycle code does not hold any state in memory, and vice versa. This is the reason why a selectvis message will be seen at every major cycle. This should reduce the amount of memory used.

Details of the cube refactor efforts can be found in this pdf.

## Reference Material

Collection of relevant reference material.

### AIPS-CASA Dictionary

CASA tasks equivalent to AIPS tasks

AIPS tasks and their corresponding CASA tasks. Note that this list is not complete and there is also not a one-to-one translation.
| AIPS Task | CASA task/tool | Description |
|---|---|---|
| APROPOS | taskhelp | List tasks with a short description of their purposes |
| BLCAL | blcal | Calculate a baseline-based gain calibration solution |
| BLCHN | blcal | Calculate a baseline-based bandpass calibration solution |
| BPASS | bandpass | Calibrate bandpasses |
| CALIB | gaincal | Calibrate gains (amplitudes and phases) |
| CLCAL | applycal | Apply calibration to data |
| COMB | immath | Combine images |
| CPASS | bandpass | Calibrate bandpasses by polynomial fitting |
| CVEL | cvel/mstransform | Regrid visibility spectra |
| DBCON | concat/virtualconcat | Concatenate u-v datasets |
| DEFAULT | default | Load a task with default parameters |
| FILLM | importvla | Import old-format VLA data |
| FITLD | importuvfits/importfitsidi | Import a u-v dataset which is in FITS format |
| FITLD | importfits | Import an image which is in FITS format |
| FITTP | exportuvfits | Write a u-v dataset to FITS format |
| FITTP | exportfits | Write an image to FITS format |
| FRING | (coming soon) | Calibrate group delays and phase rates |
| GETJY | fluxscale | Determine flux densities for other cals |
| GO | go | Run a task |
| HELP | help | Display the help page for a task (also use casa.nrao.edu/casadocs) |
| IMAGR | clean/tclean | Image and deconvolve |
| IMFIT | imfit | Fit gaussian components to an image |
| IMHEAD | vishead | View header for u-v data |
| IMHEAD | imhead | View header for an image |
| IMLIN | imcontsub | Subtract continuum in image plane |
| IMLOD | importfits | Import a FITS image |
| IMSTAT | imstat | Measure statistics on an image |
| INP | inp | View task parameters |
| JMFIT | imfit | Fit gaussian components to an image |
| LISTR | listobs | Print basic data |
| MCAT | ls | List image data files |
| MOMNT | immoments | Compute moments from an image |
| OHGEO | imregrid | Regrids an image onto another image's geometry |
| PBCOR | impbcor/widebandpbcor | Correct an image for the primary beam |
| PCAL | polcal | Calibrate polarization |
| POSSM | plotcal/plotms | Plot bandpass calibration tables |
| POSSM | plotms | Plot spectra |
| PRTAN | listobs | Print antenna locations |
| PRTAN | plotants | Plot antenna locations |
| QUACK | flagdata | Remove first integrations from scans |
| RENAME | mv | Rename an image or dataset |
| RFLAG | flagdata | Auto-flagging |
| SETJY | setjy | Set flux densities for flux cals |
| SMOTH | imsmooth | Smooth an image |
| SNPLT | plotcal/plotms | Plot gain calibration tables |
| SPFLG | msview | Flag raster image of time v. channel |
| SPLIT | split | Write out u-v files for individual sources |
| STATWT | statwt | Weigh visibilities based on their noise |
| TASK | inp | Load a task with current parameters |
| TGET | tget | Load a task with parameters last used for that task |
| TVALL | imview | Display image |
| TVFLG | msview | Flag raster image of time v. baseline |
| UCAT | ls | List u-v data files |
| UVFIX | fixvis | Compute u, v, and w coordinates |
| UVFLG | flagdata | Flag data |
| UVLIN | uvcontsub/mstransform | Subtract continuum from u-v data |
| UVLSF | uvcontsub/mstransform | Subtract continuum from u-v data |
| UVPLT | plotms | Plot u-v data |
| UVSUB | uvsub | Subtracts model u-v data from corrected u-v data |
| WIPER | plotms | Plot and flag u-v data |
| ZAP | rmtables | Delete data files |

### MIRIAD-CASA Dictionary

CASA tasks equivalent to MIRIAD tasks

The table below provides a list of common Miriad tasks and their equivalent CASA tool or tool function names. The two packages differ in both their architecture and their calibration and imaging models, and there is often not a direct correspondence. However, this index does provide a scientific user of CASA who is familiar with MIRIAD with a simple translation table to map their existing data reduction knowledge to the new package. In particular, note that the procedure of imaging and cleaning of visibility data between CASA and MIRIAD differs slightly. In MIRIAD the tasks invert, clean and restor are used in order to attain "CLEAN" images, whereas in CASA the task clean/tclean achieves the same steps as a single task.
| MIRIAD Task | CASA task/tool | Description |
|---|---|---|
| atlod | importatca | Import ATCA RPFITS files |
| blflag | plotms/msview | Interactive baseline based editor/flagger |
| cgcurs | imview | Interactive image analysis |
| cgdisp | imview | Image display, overlays |
| clean | clean/tclean | Clean an image |
| delhd | imhead (mode='del'), clearcal | Delete values in dataset/remove calibration tables |
| fits | importmiriad, importfits, exportfits, importuvfits, exportuvfits | FITS uv/image filler |
| gethd | imhead (mode='get') | Return values in image header |
| gpboot | fluxscale | Set flux density scale |
| gpcal | gaincal, polcal | Polarization leakage and gain calibration |
| gpcopy | applycal | Copy calibration tables from one source to another |
| gpplt | plotcal, plotms | Plot calibration solutions |
| imcomb | immath | Image combination |
| imfit | imfit | Image-plane component fitter |
| impol | immath (mode='poli' or 'pola') | Manipulate polarization images (see example) |
| imstat | imstat | Image statistics |
| imsub | imsubimage | Extract sub-image |
| invert | clean/tclean | Synthesis imaging/make dirty map |
| linmos | im tool (im.linearmosaic) | Linear mosaic combination of images |
| maths | immath | Calculations involving images |
| mfboot | fluxscale | Set flux density scale |
| mfcal | bandpass, gaincal | Bandpass and gain calibration |
| moment | immoments | Calculate image moments |
| prthd | imhead, listobs, vishead | Print header of image or uvdata |
| puthd | imhead (mode='put') | Add/edit values in image header |
| restor | clean | Restore a clean component model |
| selfcal | clean, gaincal, etc. | Selfcalibration of visibility data |
| uvplt | plotms | Plot visibility data |
| uvspec | plotms | Plot visibility spectra data |
| uvsplit | split | Split visibility dataset by source/frequency etc. |

### Dan Briggs' Dissertation - Robust Weighting

Dan Briggs' dissertation on high fidelity imaging with interferometers, including robust ('Briggs') weighting.
Link to Dan Briggs' Dissertation (pdf version)

### Flux Calibrator Models

Descriptions of flux calibrator models for flux density scaling.

There are two categories of flux calibrator models available to determine flux density scales: compact extragalactic sources and Solar System objects. The models for bright extragalactic sources are described in the form of polynomial expressions for spectral flux densities, plus clean-component images for spatial information. The flux density scales based on Solar System objects are commonly used to establish flux density scales for mm and sub-mm astronomy. These models consist of brightness temperature models and their ephemeris data.

#### Compact extragalactic sources

For the VLA, the default source models are customarily point sources defined by the 'Baars', 'Perley 90', 'Perley-Taylor 99', 'Perley-Butler 2010', 'Perley-Butler 2013' (time-variable), 'Perley-Butler 2017' (time-variable), or 'Scaife-Heald 2012' flux density scales ('Perley-Butler 2017' is the current standard by default), or point sources of unit flux density if the flux density is unknown. 'Stevens-Reynolds 2016' currently contains only one source, 1934-638, which is primarily used as a flux calibrator for the ATCA. The model images (CLEAN component images) are readily available in CASA for the subset of the sources listed below. The task setjy provides a listing of the available model images included in the CASA package's data directory. You can find the path to the directory containing your list of VLA Stokes I models by typing (inside CASA) `print os.getenv('CASAPATH').split(' ')[0] + '/data/nrao/VLA/CalModels/'`. The setjy Description Page in CASA Docs also lists the models that are available in CASA. These models can be plotted in plotms.
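For illustration, the same path lookup can be sketched in plain Python. The `CASAPATH` default below is a hypothetical placeholder; inside a real CASA session the environment variable is already set for you.

```python
import os

# Hypothetical CASAPATH of the form "<data root> <architecture> ..." --
# inside a real CASA session this environment variable is already defined.
casapath = os.environ.get("CASAPATH", "/opt/casa linux x86_64")

# The first whitespace-separated field is the data root.
root = casapath.split(" ")[0]
model_dir = root + "/data/nrao/VLA/CalModels/"
print(model_dir)
```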
Alternatively, the user can provide a model image at the appropriate frequency in Jy/pixel units, typically the .model made by clean (which is a list of components per pixel, as required, although the restored .image is in Jy/beam). For unknown calibrators, however, the spectral flux distribution has to be explicitly specified in setjy. If you do not specify the correct path to a model (and you have not provided your own model), the default model of a point source of unit flux density will be adopted.

| 3C/Common Name | B1950 Name | J2000 Name | Alt. J2000 Name | Standards |
|---|---|---|---|---|
| | | J0133-3629 | | 9 |
| 3C48 | 0134+329 | 0137+331 | J0137+3309 | 1,2,3,4,5,6,7,9 |
| FORNAX X | | J0322-3712 | | 9 |
| 3C123 | 0433+295 | 0437+296 | J0437+2940 | 2,9 |
| 3C138 | 0518+165 | 0521+166 | J0521+1638 | 1,3,4,5,6 |
| PICTOR A | | J0519-4546 | | 9 |
| 3C144 (TAURUS A/CRAB) | | J0534+2200 | | 9 |
| 3C147 | 0538+498 | 0542+498 | J0542+4951 | 1,3,4,5,6,7,9 |
| 3C196 | 0809+483 | 0813+482 | J0813+4813 | 1,2,7,9 |
| 3C218 (HYDRA A) | | J0918-1205 | | 9 |
| 3C274 (VIRGO A) | | J1230+1223 | | 9 |
| 3C286 | 1328+307 | 1331+305 | J1331+3030 | 1,2,3,4,5,6,7,9 |
| 3C295 | 1409+524 | 1411+522 | J1411+5212 | 1,2,3,4,5,6,7,9 |
| 3C348 (HERCULES A) | | J1651+0459 | | 9 |
| 3C353 | | J1720-0059 | | 9 |
| 1934-638 | | J1939-6342 | | 1,3,4,5,6,8 |
| 3C380 | 1828+487 | 1829+487 | J1829+4845 | 7,9 |
| 3C405 (CYGNUS A) | | J1959+4044 | | 9 |
| 3C444 | | J2214-1701 | | 9 |
| 3C461 (CASSIOPEIA A) | | J2323+5848 | | 9 |

Standards are: (1) Perley-Butler 2010, (2) Perley-Butler 2013, (3) Perley-Taylor 99, (4) Perley-Taylor 95, (5) Perley 90, (6) Baars, (7) Scaife-Heald 2012, (8) Stevens-Reynolds 2016, (9) Perley-Butler 2017.

Known sources and their alternative names recognized by the setjy task.

ALMA also uses a few dozen compact QSOs as flux standards, monitored 2-4 times a month at bands 3, 6, and 7 (90-345 GHz). Due to rapid variability these data are not packaged with CASA, but can be accessed via https://almascience.eso.org/alma-data/calibrator-catalogue

**Baars**: The only standard without the year in its name; the year is 1977. The models are second-order polynomials in log(ν), valid between 408 MHz and 15 GHz. Reference: Baars et al.
(1977) [1] with a commentary by Kellermann, K. I. (2009) [2].

**Perley 90**: This standard also includes 1934-638 from Reynolds (7/94) and 3C138 from Baars et al. (1977) [1]. Details can be found at http://www.vla.nrao.edu/astro/calib/manual/baars.html.

**Perley-Taylor 95**: Perley and Taylor (1995.2), plus Reynolds (1934-638; 7/94). Details can be found at http://www.vla.nrao.edu/astro/calib/manual/baars.html.

**Perley-Taylor 99**: Perley and Taylor (1999.2), plus Reynolds (1934-638; 7/94). Details can be found at http://www.vla.nrao.edu/astro/calib/manual/baars.html.

**Perley-Butler 2010**: A preliminary version of Perley-Butler 2013. This version also has coefficients for sources that showed some degree of variability (see Perley & Butler (2013) [3]), but they are treated as steady sources (i.e. no time-dependent models are used).

**Perley-Butler 2013**: Flux scale for the constant-flux sources 3C123, 3C196, 3C286, and 3C295, as well as the variable sources 3C48, 3C138, and 3C147. The models for the variable sources are time-dependent. Reference: Perley & Butler (2013) [3].

**Scaife-Heald 2012**: Low-frequency (30-300 MHz) calibrators 3C48, 3C147, 3C196, 3C286, 3C295, and 3C380. Reference: Scaife & Heald (2012) [4].

**Stevens-Reynolds 2016**: Low-frequency (<11 GHz) polynomial from Reynolds and updated high-frequency polynomial from Stevens. Reference: Partridge et al. (2016) [5].

**Perley-Butler 2017**: The flux density scale of Perley-Butler 2013 extended downward to ~50 MHz. Twenty sources were drawn from the Baars, Perley-Butler 2013, and Scaife-Heald 2012 standards. Flux scale for the constant-flux sources Fornax A, 3C123, J0444-2809, Pictor A, 3C144 (Taurus A or Crab), 3C196, 3C218 (Hydra A), 3C274 (Virgo A or M87), 3C286, 3C295, 3C348 (Hercules A), 3C353, 3C380, 3C405 (Cygnus A), 3C444, and 3C461 (Cassiopeia A), as well as the variable sources 3C48, 3C138, and 3C147. The models for the variable sources are time-dependent.
The frequency range valid for each source's model is listed below.

| Source | Valid frequency range (GHz) |
|---|---|
| J0133-3649 | 0.2-4 |
| 3C48 | 0.05-50 |
| Fornax X | 0.2-0.5 |
| 3C123 | 0.06-50 |
| J0444-2809 | 0.2-2.0 |
| 3C138 | 0.2-50 |
| Pictor A | 0.2-4.0 |
| Taurus A | 0.05-4.0 |
| 3C147 | 0.05-50 |
| 3C196 | 0.05-50 |
| Hydra A | 0.05-12 |
| Virgo A | 0.05-3 |
| 3C286 | 0.05-50 |
| 3C295 | 0.05-50 |
| Hercules A | 0.2-12 |
| 3C353 | 0.2-4 |
| 3C380 | 0.05-4.0 (*) |
| Cygnus A | 0.05-12 |
| 3C444 | 0.2-12 |
| Cassiopeia A | 0.2-4 |

(*) The corrected frequency range for 3C380 is noted here based on B. J. Butler 2018, private communication (CAS-9538). Reference: Perley & Butler (2017) [7].

#### Solar System objects

The usual approach in the mm and sub-mm regimes is to use models that are, to first order, thermal sources in the Solar System. Their apparent brightness varies in time with their distance from the Earth (and Sun), and with orientation if they are not perfect spheres with zero obliquity. However, most of them have almost constant surface properties, so once those properties are measured, their apparent brightness distributions can in principle be predicted for any time, given an ephemeris. Planets, in particular, have more complex spectra and effects such as atmospheric lines, magnetic fields, seasons, polar caps, and surface features that need to be taken into account where available and significant. In CASA, the Solar System objects supported by setjy are available through the 'Butler-JPL-Horizons 2010' and 'Butler-JPL-Horizons 2012' standards. It is recommended to use 'Butler-JPL-Horizons 2012', as it contains updated models. The 2012 models are described in ALMA Memo 594, which is available at https://science.nrao.edu/facilities/alma/aboutALMA/Technology/ALMA_Memo_Series/alma594/abs594 . Models can be found by typing (in CASA) `print os.getenv('CASAPATH').split(' ')[0] + '/data/alma/SolarSystemModels'`.
The following objects are supported based on models from Butler-JPL-Horizons 2012, updated where necessary as mentioned under each object. Please refer to ALMA Memo 594 for detailed comparisons with the models in Butler-JPL-Horizons 2010.

**Venus**: The model spans frequencies from ~300 MHz to 1 THz. No atmospheric lines such as CO, H2O, or HDO are included. Modeled based on Clancy et al. (2012) [6].

**Mars**: Full implementation of the model of Rudy et al. (1987) [8], tabulated as a function of time and frequency (30-1000 GHz). No atmospheric lines are included.

**Jupiter**: Model for 30-1020 GHz (from Glenn Orton, private communication); does not include synchrotron emission.

**Uranus**: Model for 60-1800 GHz (from Glenn Orton and Raphael Moreno, private communication); contains no rings or synchrotron emission.

**Neptune**: Model for 2-2000 GHz (from Glenn Orton and Raphael Moreno, private communication); contains no rings or synchrotron emission.

**Io**: Spline interpolation of data points from 15 to 29980 GHz (references: see ALMA Memo 594, Table 1). Strongly discouraged as a primary flux calibrator for ALMA observations.

**Europa**: Spline interpolation of data points from 15 to 29980 GHz (references: see ALMA Memo 594, Table 1). Strongly discouraged as a primary flux calibrator for ALMA observations.

**Ganymede**: Spline interpolation of data points from 5 to 29980 GHz (references: see ALMA Memo 594, Table 1).

**Callisto**: Spline interpolation of data points from 5 to 29980 GHz (references: see ALMA Memo 594, Table 1).

**Titan**: Model from Mark Gurwell, from 53.3-1024.1 GHz. Contains surface and atmospheric emission. The atmosphere includes N2-N2 and N2-CH4 Collision-Induced Absorption (CIA), and lines from the minor species CO, 13CO, C18O, HCN, H13CN, and HC15N. See, e.g., Gurwell & Muhleman (2000) [9]; Gurwell (2004) [10].
**Asteroids**: Some asteroids, namely Ceres, Pallas, Vesta, and Juno, are included in Butler-JPL-Horizons 2012. The models consist of a brightness temperature that is constant in frequency. For Ceres, Pallas, and Vesta, updated models based on thermophysical models (TPM) (T. Mueller, private communication), tabulated in time and frequency, are available for observations taken after January 1st 2015, 0:00 UT. The setjy task will automatically switch to the new models for observations taken on or after that date. TPM are also available for Lutetia, but their use for the absolute flux calibration of ALMA data is not advised. Each of the tabulated models contains the flux density at 30, 80, 115, 150, 200, 230, 260, 300, 330, 360, 425, 650, 800, 950, and 1000 GHz. The time resolution is 1 hour for Ceres and 15 minutes for Lutetia, Pallas, and Vesta. Cubic interpolation is employed to obtain flux densities at other frequencies.

**Ceres**: Model with a constant $$T_b$$ = 185 K over frequency (Moullet et al. 2010 [11], Muller & Lagerros 2002 [12], Redman et al. 1998 [13], Altenhoff et al. 1996 [14]) if the observation time ($$t_{obs}$$) is before 2015.01.01, 0:00 UT; TPM if $$t_{obs}$$ $$\ge$$ 2015.01.01, 0:00 UT.

**Pallas**: Model with a constant $$T_b$$ = 189 K (Chamberlain et al. 2009 [15], Altenhoff et al. 1994 [16]) for $$t_{obs}$$ $$\lt$$ 2015.01.01, 0:00 UT, and TPM for $$t_{obs}$$ $$\ge$$ 2015.01.01, 0:00 UT.

**Vesta**: Model with a constant $$T_b$$ = 155 K (Leyrat et al. 2012 [17], Chamberlain et al. 2009 [15], Redman et al. 1998 [13], Altenhoff et al. 1994 [16]) for $$t_{obs}$$ $$\lt$$ 2015.01.01, 0:00 UT, and TPM for $$t_{obs}$$ $$\ge$$ 2015.01.01, 0:00 UT.

**Juno**: Model with a constant $$T_b$$ = 153 K (Chamberlain et al. 2009 [15], Altenhoff et al. 1994 [16]).

#### Bibliography

1. Baars, J. W. M. et al. 1977, A&A, 61, 99
2. Kellermann, K. I. 2009, A&A, 500, 143
3. Perley, R. A., & Butler, B. J. 2013, ApJS, 204, 19
4. Scaife, A.
M., & Heald, G. H. 2012, MNRAS, 423, 30
5. Partridge, B. et al. 2016, ApJ, 821, 1
6. Clancy, R. T. et al. 2012, Icarus, 217, 779
7. Perley, R. A., & Butler, B. J. 2017, ApJS, 230, 7
8. Rudy, D. J. et al. 1987, Icarus, 71, 159
9. Gurwell, M. A., & Muhleman, D. O. 2000, Icarus, 145, 65
10. Gurwell, M. A. 2004, ApJ, 616, L7
11. Moullet, A. et al. 2010, A&A, 516, L10
12. Muller, T. G., & Lagerros, J. S. V. 2002, A&A, 381, 324
13. Redman, R. O. et al. 1998, AJ, 116, 1478
14. Altenhoff, W. J. et al. 1996, A&A, 309, 953
15. Chamberlain, M. A. et al. 2009, Icarus, 202, 487
16. Altenhoff, W. J. et al. 1994, A&A, 287, 641
17. Leyrat, C. et al. 2012, A&A, 539, A154

### Flux Calibrator Models - Data Formats

Conventions and data formats.

This section describes the conventions and formats, as well as the locations, of the data for the flux density calibrator models used in setjy. Detailed descriptions of specific flux standards and the list of calibrators are found in Flux Calibrator Models in the Reference Material section.

#### Extragalactic flux calibrator source models

The spectral flux density models are expressed as a polynomial of the form

$$\log S[\mathrm{Jy}] = a + b\log\nu + c\log^2\nu + \cdots$$

where $$\nu$$ is a frequency either in MHz or GHz, depending on the standard. In setjy, the point source model is constructed as a componentlist scaled by the spectral flux density model. For the standards Baars, Perley 90, Perley-Taylor 95, Perley-Taylor 99, Perley-Butler 2010, and Stevens-Reynolds 2016, the polynomial coefficients are hard-coded in the code. For Perley-Butler 2013 and Scaife-Heald 2012, the coefficients are stored in CASA tables called PerleyButler2013Coeffs and ScaifeHeald2012Coeffs, respectively, located in ~/nrao/VLA/standards/ in the CASA data directory (from the CASA prompt, you can find the data root path by typing `casa['dirs']['data']`).
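As an illustration of this polynomial form, the spectral flux density can be evaluated as below. The coefficients used here are made-up placeholders, not taken from any real standard.

```python
import math

def flux_density_jy(nu, coeffs):
    """Evaluate log10(S[Jy]) = a + b*log10(nu) + c*log10(nu)**2 + ...
    nu is in the standard's native unit (MHz or GHz); coeffs = [a, b, c, ...].
    """
    lognu = math.log10(nu)
    log_s = sum(c * lognu**i for i, c in enumerate(coeffs))
    return 10.0**log_s

# Hypothetical coefficients, for illustration only:
coeffs = [1.0, -0.5, -0.1]
print(flux_density_jy(1.0, coeffs))  # at nu = 1, log10(nu) = 0, so S = 10^a = 10 Jy
```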
The separation of the data from the flux calibration code makes maintenance easy and enables a user to access the information directly. You can access these tables using the table tool (tb) and the browsetable task. The column headers for PerleyButler2013Coeffs are shown below:

```
CASA <8>: tb.colnames
--------> tb.colnames()
Out[8]:
['Epoch',
 '3C48_coeffs', '3C48_coefferrs',
 '3C138_coeffs', '3C138_coefferrs',
 '3C147_coeffs', '3C147_coefferrs',
 '3C286_coeffs', '3C286_coefferrs',
 '3C123_coeffs', '3C123_coefferrs',
 '3C295_coeffs', '3C295_coefferrs',
 '3C196_coeffs', '3C196_coefferrs']
```

The coefficients of each source are stored in a column as a vector, and the corresponding errors are stored in a separate column. For the time-variable sources, each row contains the coefficients for the corresponding epoch, while for the steady sources each row contains identical information. The frequency is assumed to be in GHz. The column headers for ScaifeHeald2012Coeffs are shown below:

```
CASA <11>: tb.colnames
---------> tb.colnames()
Out[11]:
['Epoch',
 '3C48_coeffs', '3C48_coefferrs',
 '3C147_coeffs', '3C147_coefferrs',
 '3C196_coeffs', '3C196_coefferrs',
 '3C286_coeffs', '3C286_coefferrs',
 '3C295_coeffs', '3C295_coefferrs',
 '3C380_coeffs', '3C380_coefferrs']
```

The reference frequency for Scaife-Heald 2012 is 150 MHz.

#### Solar System objects

For a Solar System object used as a flux calibrator, setjy constructs a model visibility of the calibrator from the (averaged) brightness temperature model and the ephemeris data of the source, as described in ALMA Memo 594. While the older Butler-JPL-Horizons 2010 standard hard-coded the brightness temperature models, the models for Butler-JPL-Horizons 2012 are tabulated in ASCII files (SSobject_name_Tb.dat) located in the CASA data directory under ~/alma/SolarSystemModels.
With the exception of Mars, the data for the brightness temperature models are stored in a simple format: 1st column - frequency in GHz; 2nd column - brightness temperature in Kelvin. The following example script shows how the model can be plotted for Titan:

```python
import numpy as np
import pylab as pl

rootdatapath = casa['dirs']['data']  # CASA data root (casa is a CASA global)
source = 'Titan'
datapath = rootdatapath + '/alma/SolarSystemModels/' + source + '_Tb.dat'
data = np.genfromtxt(datapath)
data = data.transpose()
freq = data[0]
temp = data[1]
pl.plot(freq, temp)
pl.title(source + ' Tb model')
pl.xlabel('Frequency (GHz)')
pl.ylabel('Tb (K)')
```

Executing the script above produces a plot of the Titan Tb model as a function of frequency.

The Tb model for Mars (Mars_Tb.dat) is calculated as a function of time and frequency, tabulated every hour at frequencies of 30, 80, 115, 150, 200, 230, 260, 300, 330, 360, 425, 650, 800, 950, and 1000 GHz. The first line of the file contains the frequencies in GHz. The data start at the second line of the file with the format: YYYY MM DD HH MJD, followed by Tb for each frequency, separated by spaces.

New asteroid models Ceres_fd_time.dat, Luthetia_fd_time.dat, Pallas_fd_time.dat, and Vesta_fd_time.dat contain thermophysical models by Th. Mueller (private communication). These time-variable models are already converted to flux densities and are tabulated at 30, 80, 115, 150, 200, 230, 260, 300, 330, 360, 425, 650, 800, 950, and 1000 GHz. Time intervals are 1 hr for Ceres and 15 min for Lutetia, Pallas, and Vesta, with data available from 2015 01 01 0 UT to 2021 01 01 0 UT. In the setjy task, these models are automatically selected for data whose observation dates fall within this time range.
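A minimal sketch of interpolating one of the simple two-column Tb tables at an arbitrary frequency is shown below. Linear interpolation is used here for brevity (setjy itself uses spline/cubic interpolation, as noted above), and the arrays are made-up stand-ins rather than a real model.

```python
import numpy as np

def tb_at(nu_ghz, freq_ghz, temp_k):
    """Linearly interpolate a two-column (frequency, Tb) model at nu_ghz."""
    return np.interp(nu_ghz, freq_ghz, temp_k)

# Made-up stand-in for the contents of an SSobject_name_Tb.dat file:
freq = np.array([100.0, 200.0, 300.0])   # GHz
temp = np.array([90.0, 85.0, 80.0])      # K
print(tb_at(150.0, freq, temp))          # halfway between 90 K and 85 K
```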
### Spectral Frames

Spectral frames supported in CASA.

#### Spectral Frames

CASA supported spectral frames:

| Frame | Description | Definition |
|---|---|---|
| REST | Rest frequency | Lab frame or source frame; cannot be converted to any other frame |
| LSRK | LSR as a kinematic (radio) definition (J2000) | Based on the average velocity of stars in the Solar neighborhood: 20 km/s in the direction of RA, Dec = [270, +30] deg (B1900.0) (Gordon 1975 [1]) |
| LSRD | Local Standard of Rest (J2000), dynamical, IAU definition | Solar peculiar velocity in the reference frame of a circular orbit about the Galactic Center, based on the average velocity of stars in the Solar neighborhood and the solar peculiar motion: U⊙ = 9 km/s, V⊙ = 12 km/s, W⊙ = 7 km/s; or 16.552945 km/s towards l,b = 53.13, +25.02 deg (Delhaye 1965 [2]) |
| BARY | Solar System Barycenter (J2000) | |
| GEO | Geocentric | Referenced to the Earth's center |
| TOPO | Topocentric | Local observatory frame, fixed in observing frequency, no Doppler tracking |
| GALACTO | Galactocentric (J2000) | Referenced to the dynamical center of the Galaxy: 220 km/s in the direction l,b = 270, +0 deg (Kerr & Lynden-Bell 1986 [3]) |
| LGROUP | Mean motion of Local Group galaxies with respect to its barycenter | 308 km/s towards l,b = 105, -7 |
| CMB | Cosmic Microwave Background | COBE measurements of the dipole anisotropy: 369.5 km/s towards l,b = 264.4, 48.4 (Kogut et al. 1993 [4]) |
| Undefined | | |

#### Doppler Types

CASA supported Doppler types (velocity conventions), where $f_v$ is the observed frequency, $f_0$ is the rest-frame frequency of a given line, and positive velocity V increases away from the observer:

| Name | Definition |
|---|---|
| RADIO | $V = c\frac{f_0 - f_v}{f_0}$ |
| Z | $V = cz$, where $z = \frac{f_0 - f_v}{f_v}$ |
| RATIO | $V = c\frac{f_v}{f_0}$ |
| BETA | $V = c\frac{1-(f_v/f_0)^2}{1+(f_v/f_0)^2}$ |
| GAMMA | $V = c\frac{1+(f_v/f_0)^2}{2f_v/f_0}$ |
| OPTICAL | $V = c\frac{f_0 - f_v}{f_v}$ |
| TRUE | $V = c\frac{1-(f_v/f_0)^2}{1+(f_v/f_0)^2}$ |
| RELATIVISTIC | $V = c\frac{1-(f_v/f_0)^2}{1+(f_v/f_0)^2}$ |

#### Bibliography

1. Gordon 1975: "Methods of Experimental Physics: Volume 12: Astrophysics, Part C: Radio Observations", ed. M. L. Meeks, Academic Press 1976
2. Delhaye 1965
3. Kerr, F. J., & Lynden-Bell, D. 1986, MNRAS, 221, 1023
4. Kogut, A. et al. 1993

### Time Reference Frames

CASA supported time reference frames:

| Acronym | Name | Description |
|---|---|---|
| ET | Ephemeris Time | The time scale used prior to 1984 as the independent variable in gravitational theories of the Solar System. In 1984, ET was replaced by dynamical time (see TDB, TT). |
| GAST | Greenwich Apparent Sidereal Time | The Greenwich hour angle of the true equinox [1] of date. |
| GMST | Greenwich Mean Sidereal Time | The Greenwich hour angle of the mean equinox [1] of date, defined as the angular distance on the celestial sphere measured westward along the celestial equator from the Greenwich meridian to the hour circle that passes through a celestial object or point. GMST (in seconds at UT1 = 0) = 24110.54841 + 8640184.812866 T + 0.093104 T^2 - 0.0000062 T^3, where T is in Julian centuries from 2000 Jan 1, 12h UT1: T = d / 36525, d = JD - 2451545.0 |
| GMST1 | | GMST calculated specifically with reference to UT1 |
| IAT | International Atomic Time (a.k.a. TAI en français) | The continuous time scale resulting from analysis by the Bureau International des Poids et Mesures of atomic time standards in many countries. The fundamental unit of TAI is the SI second [2] on the geoid [3], and the epoch is 1958 January 1. |
| LAST | Local Apparent Sidereal Time | LAST is derived from LMST by applying the equation of equinoxes [1], i.e. the nutation of the mean pole of the Earth from mean to true position. See http://tycho.usno.navy.mil/sidereal.html |
| LMST | Local Mean Sidereal Time | Sidereal time is the hour angle of the vernal equinox, the ascending node of the ecliptic on the celestial equator. The daily motion of this point provides a measure of the rotation of the Earth with respect to the stars, rather than the Sun. It corresponds to the coordinate right ascension of a celestial body that is presently on the local meridian. LMST is computed from the current GMST plus the local offset in longitude measured positive to the east of Greenwich (converted to a sidereal offset by the ratio 1.00273790935 of the mean solar day to the mean sidereal day): LMST = GMST + (observer's east longitude) |
| TAI | International Atomic Time | See IAT |
| TCB | Barycentric Coordinate Time | The coordinate time of the Barycentric Celestial Reference System (BCRS), which advances by SI seconds [2] within that system. TCB is related to TCG and TT by relativistic transformations that include a secular term. |
| TCG | Geocentric Coordinate Time | The coordinate time of the Geocentric Celestial Reference System (GCRS). |
| TDB | Barycentric Dynamical Time | A time scale defined by the IAU (originally in 1976; named in 1979; revised in 2006) used in barycentric ephemerides and equations of motion. TDB is a linear function of TCB that on average tracks TT over long periods of time; differences between TDB and TT evaluated at the Earth's surface remain under 2 ms for several thousand years around the current epoch. TDB is functionally equivalent to Teph, the independent argument of the JPL planetary and lunar ephemerides DE405/LE405. |
| TDT | Terrestrial Dynamical Time | The time scale for apparent geocentric ephemerides defined by a 1979 IAU resolution. In 1991 it was replaced by TT. |
| TT | Terrestrial Time | An idealized form of International Atomic Time (TAI) with an epoch offset; in practice TT = TAI + 32.184 s. TT thus advances by SI seconds on the geoid [3]. |
| UT | Universal Time | Loosely, mean solar time on the Greenwich meridian (previously referred to as Greenwich Mean Time). In current usage, UT refers either to UT1 or to UTC. |
| UT1 | | UT1 is formally defined by a mathematical expression that relates it to sidereal time. Thus, UT1 is observationally determined by the apparent diurnal motions of celestial bodies, and is affected by irregularities in the Earth's rate of rotation. |
| UT2 | | Before 1972 the time broadcast services kept their time signals within 0.1 s [2] of UT2, which is UT1 with annual and semiannual variations in the Earth's rotation removed. The formal relation between UT1 and UT2 is UT2 = UT1 + 0.022 sin(2πt) - 0.012 cos(2πt). |
| UTC | Coordinated Universal Time | UTC is based on IAT but is maintained within 0.9 s of UT1 by the introduction of leap seconds when necessary. |

Footnotes:

1. Mean equator and equinox v. true equator and equinox: the mean equator and equinox define the celestial coordinate system given by the orientation of the Earth's equatorial plane on some specified date together with the direction of the dynamical equinox on that date, neglecting nutation. Thus, the mean equator and equinox move in response only to precession. Positions in a star catalog have traditionally been referred to a catalog equator and equinox that approximate the mean equator and equinox of a standard epoch. The true equator and equinox are affected by both precession and nutation. The Equation of the Equinoxes is the difference (apparent sidereal time minus mean sidereal time); equivalently, the difference between the right ascensions of the true and mean equinoxes, expressed in time units.
2. The Système International (SI) second is defined as the duration of 9,192,631,770 cycles of the radiation corresponding to the transition between two hyperfine levels of the ground state of caesium-133.
3. The geoid is an equipotential surface that coincides with mean sea level in the open ocean. On land it is the level surface that would be assumed by water in an imaginary network of frictionless channels connected to the ocean.
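The GMST polynomial quoted in the table above can be evaluated directly; a small sketch follows (T is in Julian centuries from 2000 Jan 1, 12h UT1, and the result is wrapped into one day of 86400 seconds).

```python
def gmst_seconds(jd_ut1):
    """Evaluate the GMST polynomial from the table above (seconds of sidereal time)."""
    d = jd_ut1 - 2451545.0        # days from J2000.0
    T = d / 36525.0               # Julian centuries from J2000.0
    s = 24110.54841 + 8640184.812866*T + 0.093104*T**2 - 0.0000062*T**3
    return s % 86400.0            # wrap into one day of seconds

# At the J2000 epoch itself the polynomial returns its constant term:
print(gmst_seconds(2451545.0) / 3600.0)  # ~6.697 hours
```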
### Coordinate Frames

CASA supported spatial coordinate frames:

| Name | Description |
|---|---|
| J2000 | Mean equator and equinox at J2000.0 (FK5) |
| JNAT | Geocentric natural frame |
| JMEAN | Mean equator and equinox at frame epoch |
| JTRUE | True equator and equinox at frame epoch |
| APP | Apparent geocentric position |
| B1950 | Mean epoch and ecliptic at B1950.0 |
| B1950_VLA | Mean epoch (1979.9) and ecliptic at B1950.0 |
| BMEAN | Mean equator and equinox at frame epoch |
| BTRUE | True equator and equinox at frame epoch |
| GALACTIC | Galactic coordinates |
| HADEC | Topocentric HA and declination |
| AZEL | Topocentric azimuth and elevation (N through E) |
| AZELSW | Topocentric azimuth and elevation (S through W) |
| AZELNE | Topocentric azimuth and elevation (N through E) |
| AZELGEO | Geodetic azimuth and elevation (N through E) |
| AZELSWGEO | Geodetic azimuth and elevation (S through W) |
| AZELNEGEO | Geodetic azimuth and elevation (N through E) |
| ECLIPTC | Ecliptic for J2000 equator and equinox |
| MECLIPTIC | Ecliptic for mean equator of date |
| TECLIPTIC | Ecliptic for true equator of date |
| SUPERGAL | Supergalactic coordinates |
| ITRF | Coordinates w.r.t. the ITRF Earth frame |
| TOPO | Apparent topocentric position |
| ICRS | International Celestial Reference System |

### Physical Units

CASA recognizes physical units as listed in the following tables.
CASA recognizes the standard SI prefixes (such as n, u, m, c, k, M, G, T) applied to the units in the tables below.

Physical SI units:

| Unit | Name | Value |
|---|---|---|
| $ | currency | 1 _ |
| % | percent | 0.01 |
| %% | permille | 0.001 |
| A | ampere | 1 A |
| AE | astronomical unit | 149597870659 m |
| AU | astronomical unit | 149597870659 m |
| Bq | becquerel | 1 s^-1 |
| C | coulomb | 1 s A |
| F | farad | 1 m^-2 kg^-1 s^4 A^2 |
| Gy | gray | 1 m^2 s^-2 |
| H | henry | 1 m^2 kg s^-2 A^-2 |
| Hz | hertz | 1 s^-1 |
| J | joule | 1 m^2 kg s^-2 |
| Jy | jansky | 10^-26 kg s^-2 |
| K | kelvin | 1 K |
| L | litre | 0.001 m^3 |
| M0 | solar mass | 1.98891944407×10^30 kg |
| N | newton | 1 m kg s^-2 |
| Ohm | ohm | 1 m^2 kg s^-3 A^-2 |
| Pa | pascal | 1 m^-1 kg s^-2 |
| S | siemens | 1 m^-2 kg^-1 s^3 A^2 |
| S0 | solar mass | 1.98891944407×10^30 kg |
| Sv | sievert | 1 m^2 s^-2 |
| T | tesla | 1 kg s^-2 A^-1 |
| UA | astronomical unit | 149597870659 m |
| V | volt | 1 m^2 kg s^-3 A^-1 |
| W | watt | 1 m^2 kg s^-3 |
| Wb | weber | 1 m^2 kg s^-2 A^-1 |
| _ | undimensioned | 1 _ |
| a | year | 31557600 s |
| arcmin | arcmin | |
| arcsec | arcsec | |
| as | arcsec | |
| cd | candela | 1 cd |
| cy | century | 3155760000 s |
| d | day | 86400 s |
| deg | degree | |
| g | gram | 0.001 kg |
| h | hour | 3600 s |
| l | litre | 0.001 m^3 |
| lm | lumen | 1 cd sr |
| lx | lux | 1 m^-2 cd sr |
| m | metre | 1 m |
| min | minute | 60 s |
| mol | mole | 1 mol |
| pc | parsec | 3.08567758065×10^16 m |
| s | second | 1 s |
| sr | steradian | 1 sr |
| t | tonne | 1000 kg |

Custom units available in CASA:

| Unit | Name | Value |
|---|---|---|
| " | arcsec | |
| "_2 | square arcsec | 2.35044305391×10^-11 sr |
| ' | arcmin | |
| '_2 | square arcmin | 8.46159499408×10^-8 sr |
| : | hour | 3600 s |
| | minute | 60 s |
| Ah | ampere hour | 3600 s A |
| Angstrom | angstrom | 10^-10 m |
| Btu | British thermal unit (Int) | 1055.056 m^2 kg s^-2 |
| CM | metric carat | 0.0002 kg |
| Cal | large calorie (Int) | 4186.8 m^2 kg s^-2 |
| FU | flux unit | 10^-26 kg s^-2 |
| G | gauss | 0.0001 kg s^-2 A^-1 |
| Gal | gal | 0.01 m s^-2 |
| Gb | gilbert | 0.795774715459 A |
| Mx | maxwell | 10^-8 m^2 kg s^-2 A^-1 |
| Oe | oersted | 79.5774715459 m^-1 A |
| R | roentgen | 0.000258 kg^-1 s A |
| St | stokes | 0.0001 m^2 s^-1 |
| Torr | torr | 133.322368421 m^-1 kg s^-2 |
| USfl_oz | fluid ounce (US) | 2.95735295625×10^-5 m^3 |
| USgal | gallon (US) | 0.003785411784 m^3 |
| WU | WSRT flux unit | 5×10^-29 kg s^-2 |
| abA | abampere | 10 A |
| abC | abcoulomb | 10 s A |
| abF | abfarad | 10^9 m^-2 kg^-1 s^4 A^2 |
| abH | abhenry | 10^-9 m^2 kg s^-2 A^-2 |
| abOhm | abohm | 10^-9 m^2 kg s^-3 A^-2 |
| abV | abvolt | 10^-8 m^2 kg s^-3 A^-1 |
| ac | acre | 4046.8564224 m^2 |
| arcmin_2 | square arcmin | 8.46159499408×10^-8 sr |
| arcsec_2 | square arcsec | 2.35044305391×10^-11 sr |
| ata | technical atmosphere | 98066.5 m^-1 kg s^-2 |
| atm | standard atmosphere | 101325 m^-1 kg s^-2 |
| bar | bar | 100000 m^-1 kg s^-2 |
| beam | undefined beam area | 1 _ |
| cal | calorie (Int) | 4.1868 m^2 kg s^-2 |
| count | count | 1 _ |
| cwt | hundredweight | 50.80234544 kg |
| deg_2 | square degree | 0.000304617419787 sr |
| dyn | dyne | 10^-5 m kg s^-2 |
| eV | electron volt | 1.60217733×10^-19 m^2 kg s^-2 |
| erg | erg | 10^-7 m^2 kg s^-2 |
| fl_oz | fluid ounce (Imp) | 2.84130488996×10^-5 m^3 |
| ft | foot | 0.3048 m |
| fu | flux unit | 10^-26 kg s^-2 |
| fur | furlong | 201.168 m |
| gal | gallon (Imp) | 0.00454608782394 m^3 |
| ha | hectare | 10000 m^2 |
| hp | horsepower | 745.7 m^2 kg s^-3 |
| in | inch | 0.0254 m |
| kn | knot (Imp) | 0.514773333333 m s^-1 |
| lambda | lambda | 1 _ |
| lb | pound (avoirdupois) | 0.45359237 kg |
| ly | light year | 9.46073047×10^15 m |
| mHg | metre of mercury | 133322.387415 m^-1 kg s^-2 |
| mile | mile | 1609.344 m |
| n_mile | nautical mile (Imp) | 1853.184 m |
| oz | ounce (avoirdupois) | 0.028349523125 kg |
| pixel | pixel | 1 _ |
| sb | stilb | 10000 m^-2 cd |
| sq_arcmin | square arcmin | 8.46159499408×10^-8 sr |
| sq_arcsec | square arcsec | 2.35044305391×10^-11 sr |
| sq_deg | square degree | 0.000304617419787 sr |
| statA | statampere | 3.33564095198×10^-10 A |
| statC | statcoulomb | 3.33564095198×10^-10 s A |
| statF | statfarad | 1.11188031733×10^-12 m^-2 kg^-1 s^4 A^2 |
| statH | stathenry | 899377374000 m^2 kg s^-2 A^-2 |
| statOhm | statohm | 899377374000 m^2 kg s^-3 A^-2 |
| statV | statvolt | 299.792458 m^2 kg s^-3 A^-1 |
| u | atomic mass unit | 1.661×10^-27 kg |
| yd | yard | 0.9144 m |
| yr | year | 31557600 s |

### Physical Constants

The following physical constants are recognized by CASA:

| Constant | Name | Value |
|---|---|---|
| pi | 3.14… | 3.14159 |
| ee | 2.71… | 2.71828 |
| c | speed of light | 2.99792×10^8 m s^-1 |
| G | gravitational constant | 6.67259×10^-11 N m^2 kg^-2 |
| h | Planck constant | 6.62608×10^-34 J s |
| HI | HI line frequency | 1420.41 MHz |
| R | gas constant | 8.31451 J K^-1 mol^-1 |
| NA | Avogadro constant | 6.02214×10^23 mol^-1 |
| e | electron charge | 1.60218×10^-19 C |
| mp | proton mass | 1.67262×10^-27 kg |
| mp_me | mp/me | 1836.15 |
| mu0 | permeability of vacuum | 1.25664×10^-6 H m^-1 |
| eps0 | permittivity of vacuum | 8.85418782×10^-12 F m^-1 |
| k | Boltzmann constant | 1.38066×10^-23 J K^-1 |
| F | Faraday constant | 96485.3 C mol^-1 |
| me | electron mass | 9.10939×10^-31 kg |
| re | classical electron radius | 2.8179×10^-15 m |
| a0 | Bohr radius | 5.2918×10^-11 m |
| R0 | | |
null
null
Systropus violascens är en tvåvingeart som först beskrevs av Günther Enderlein 1926. Systropus violascens ingår i släktet Systropus och familjen svävflugor. Inga underarter finns listade i Catalogue of Life. Källor Svävflugor violascens
{ "redpajama_set_name": "RedPajamaWikipedia" }
1,347
"Combining observations and numerical model results to improve estimate" by A. Bever, M. Friedrichs et al. Bever, A.; Friedrichs, M.; Friedrichs, C.; and Scully, M.. "Combining observations and numerical model results to improve estimates of hypoxic volume within the Chesapeake Bay". 10-10-2013. Institute of Marine Sciences Seminar Series, University of North Carolina-Chapel Hill, Morehead City, NC.
{ "redpajama_set_name": "RedPajamaC4" }
3,094
package com.gmadorell.youtube_sync.module.synchronize.domain.model final case class PlayList(id: PlayListId, name: PlayListName) final case class PlayListId(id: String) final case class PlayListName(name: String)
{ "redpajama_set_name": "RedPajamaGithub" }
8,048
Federal coronavirus stimulus package: What it means for Americans Senate Majority Leader Mitch McConnell of Ky. departs the Senate Chamber on Capitol Hill in Washington, Monday, March 16, 2020. With an urgency unseen since the Great Recession, Congress is rushing to develop a sweeping economic lifeline for American households and businesses suddenly capsized by the coronavirus outbreak. (AP Photo/Patrick Semansky) Posted at 3:39 PM, Mar 25, 2020 It appears in the coming hours or days, Congress will approve and the president will sign legislation designed to keep the American economy from collapse as businesses close to prevent the spread of COVID-19. The two parties came to an agreement early Wednesday morning. It appears some finer details of the bill are still being hammered out, but the two sides have agreed on a number of items. Both the Democratic and Republican leaders of the Senate Appropriations Committee have released summaries of what the final bill will likely include. Here is what the bill means for Americans: Checks for Americans: Regardless of employment status, most Americans will see a check from the federal government. The checks will either be $1,200 for individuals earning less than $75,000 a year, or $2,400 for couples earning less than $150,000 a year. An additional $500 will be added for each child. Those figures will be pro rated for individuals making between $75,000 and $99,000 a year, and for couples making $150,000 to $198,000 a year. It's unknown exactly when individuals would receive these checks, but several members of Congress said they would come early in April. Low income families: The Supplimental Nutrition Assistance Program (SNAP), which is set to receive $15.51 billion from this legislation, is anticipating increases in participation as a result of coronavirus. 
Also, the Low Income Home Energy Assistance Program program, which is designed to provide energy assistance for low-income families, is set to have $900 million in funding. Food supply: Nearly $9.5 billion is set for food producers and agriculture. Costs for healthcare: A total of $172.1 billion will be spent on the front lines to combat COVID-19. $100 billion of the funds will go toward a new program to provide grants to hospitals, public entities, not-forprofit entities, and Medicare and Medicaid enrolled suppliers. Another $27 billion will go toward research on how to prevent and cure COVID-19. Nearly $4.3 billion is expected to go toward local, state and federal health organizations. This money will be used to help purchase coronavirus test kits, and pay for equipment. Funds for education: The Department of Education will distribute more than $30 billion to help stabilize schools and universities that have had to alter operations and rely on remote learning in recent weeks. Nearly half of the money set aside for the Department of Education will be used on higher education to help them combat the virus on campus, provide distance learning and offer grants to students in need. $13.5 billion is available for formula-grants to States, which will then distribute 90 percent of funds to local educational agencies. In additional $750 million will go toward Head Start to help with emergency staffing needs. For veterans: The Department of Veterans Affairs (VA) will have $15.85 billion in funding to provide healthcare for veterans. This covers treatment of veterans nationwide for coronavirus within VA hospitals in addition to healthcare facilities in the community. In additional $3.1 billion will go toward supporting telehealth services for veterans. This story will be updated as more details of the bill are released. Coronavirus | Denver7 11:45 AM, Feb 27, 2020
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
7,415
\section{Introduction} The rotation algebra $\mathcal{A}_\theta$ associated with a real number $\theta$ is the universal $C^*$-algebra generated by unitaries $U_1$ and $U_2$ satisfying the relation $$ U_2U_1 = e^{2\pi i\theta}U_1U_2. $$ Any choice of a matrix $$ A = \begin{pmatrix} a & b \\ c & d \end{pmatrix} $$ in $\mathrm{SL}_2({\mathbb Z})$ determines an automorphism $\alpha_A$ given by $$ \alpha_A(U_1)=e^{\pi i (ac)\theta} U_1^aU_2^c,\quad\alpha_A(U_2)=e^{\pi i(bd)\theta} U_1^bU_2^d. $$ This assignment in fact defines an $\mathrm{SL}_2({\mathbb Z})$-action on $\mathcal{A}_\theta$, which was first introduced by Watatani \cite{Wat} and Brenken \cite{Brenken}. Let $\mathcal{A}_\theta\rtimes_{A}{\mathbb Z}$ denote the crossed product arising from the $C^*$-dynamical system $(\mathcal{A}_\theta, {\mathbb Z}, \alpha_A)$. In the case where $\theta$ is irrational, previous work of the authors \cite{BCHL18} combined an explicit $K$-theory calculation with recent progress in the classification program for simple, separable, nuclear $C^*$-algebras to show that the isomorphism class of $\mathcal{A}_\theta \rtimes_A {\mathbb Z}$ is completely determined by $\theta$ and the matrix $A$, up to certain (very explicit) equivalence relations. Crossed products with rational angles present greater challenges. For one thing, the algebras under consideration are not simple, so there is no readily accessible classification theorem. For another, the behavior of the automorphisms $\alpha_A$ and of tracial states on the crossed products at the level of $K$-theory is not immediately clear, unlike in the irrational case, where all automorphisms of $\mathcal{A}_\theta$ are easily seen to induce the identity map on $K_0$ and there is only one tracial state on the crossed product.
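For later reference we record the routine check that $\alpha_A$ respects the defining relation (this computation is standard and recorded here only for the reader's convenience). Using $U_2^m U_1^n = e^{2\pi i \theta mn} U_1^n U_2^m$ we have
\begin{align*}
\alpha_A(U_2)\alpha_A(U_1) &= e^{\pi i(ac+bd)\theta}\, U_1^b U_2^d U_1^a U_2^c = e^{\pi i(ac+bd)\theta} e^{2\pi i \theta ad}\, U_1^{a+b} U_2^{c+d}, \\
\alpha_A(U_1)\alpha_A(U_2) &= e^{\pi i(ac+bd)\theta}\, U_1^a U_2^c U_1^b U_2^d = e^{\pi i(ac+bd)\theta} e^{2\pi i \theta bc}\, U_1^{a+b} U_2^{c+d},
\end{align*}
so that $\alpha_A(U_2)\alpha_A(U_1) = e^{2\pi i\theta(ad-bc)}\alpha_A(U_1)\alpha_A(U_2) = e^{2\pi i\theta}\alpha_A(U_1)\alpha_A(U_2)$, since $ad-bc=1$.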
Despite the difficulties, in \cite{FW94} Farsi and Watling gave a complete $K$-theoretic classification of $\mathcal{A}_\theta \rtimes_A {\mathbb Z}$ when $\tr(A) = 2$, and proved many interesting partial results in the general case. One crucial ingredient underlying these results is the computation of tracial ranges of the $K_0$-groups, which in turn made use of a short exact sequence relating the tracial ranges and the de la Harpe--Skandalis determinant (see, for example, \cite[Proposition 6]{FW94} and the references therein). This note represents the authors' attempt to understand this computation at a more concrete level. We identify both the rotation algebras and their crossed products with twisted group $C^*$-algebras, and then apply the technique of homotopic cocycles, as developed by Echterhoff, L\"uck, Phillips, and Walters in \cite{ELPW10}, to show that the canonical inclusion of $\mathcal{A}_\theta$ inside $\mathcal{A}_\theta\rtimes_A {\mathbb Z}$ always induces an injective map on $K_0$. Once this is done, an explicit set of generators can be found using the Pimsner--Voiculescu sequence as in the irrational case. It then follows that all tracial states on the crossed products agree on $K_0$, and the tracial range of each generator can be computed directly. The next theorem summarizes our computations (in the case $\tr(A) \neq 2$; the case $\tr(A) = 2$ is similar and treated in Theorem \ref{thm:Ktr=2}). \begin{thmx}[Theorem \ref{thm:K-theory}] Let $\theta$ be any real number in $[0,1]$, and let $A\in \mathrm{SL}_2({\mathbb Z})$ be a matrix of infinite order with $\tr(A) \not\in\{0,\pm1,2\}$.
Suppose the rank $2$ matrix $I_2-A^{-1}$ has Smith normal form $\diag(h_1,h_2)$, then \begin{align*} K_0(\mathcal{A}_{\theta} \rtimes_A {\mathbb Z}) &\cong {\mathbb Z}\oplus {\mathbb Z}, \\ K_1(\mathcal{A}_{\theta} \rtimes_A {\mathbb Z}) &\cong {\mathbb Z} \oplus {\mathbb Z} \oplus {\mathbb Z}_{h_1} \oplus {\mathbb Z}_{h_2}, \end{align*} and a set of generators for $K_0(\mathcal{A}_{\theta}\rtimes_A{\mathbb Z})$ is given by $[\mathbf{1}]_0$ and $[\iota(p_\theta)]_0$, where $\iota:\mathcal{A}_\theta\to \mathcal{A}_\theta\rtimes_A {\mathbb Z}$ is the inclusion map and $p_\theta$ is the Rieffel projection associated to $\theta$. Moreover, given any tracial state $\tau$ on $\mathcal{A}_\theta\rtimes_A {\mathbb Z}$ the induced map $$ \tau_*: K_0(\mathcal{A}_{\theta}\rtimes_A{\mathbb Z})\rightarrow {\mathbb R} $$ satisfies $\tau_*( [\mathbf{1}]_0 ) = 1$ and $\tau_*([\iota(p_\theta)]_0) = \theta$. In particular, all tracial states on $\mathcal{A}_\theta\rtimes_A{\mathbb Z}$ induce the same map on $K_0(\mathcal{A}_{\theta}\rtimes_A{\mathbb Z})$. \end{thmx} Many results in this note are probably known to experts. Nevertheless it is our hope that our approach and explicit computations make at least some of them more transparent. \ \newline {\bf Acknowledgments}. S. Chakraborty thanks Debashish Goswami for financial support (J C Bose fellowship). \section{Twisted group $C^*$-algebras} For our purposes it will be convenient to view both $\mathcal{A}_\theta$ and $\mathcal{A}_\theta \rtimes_A {\mathbb Z}$ as twisted group $C^*$-algebras. Here we briefly recall these identifications, and refer the reader to \cite{PR89} and \cite{ELPW10} for more details and other general facts about twisted group $C^*$-algebras. 
Let $\omega_\theta:{\mathbb Z}^2\times {\mathbb Z}^2 \to \mathbb{T}$ be the $2$-cocycle defined by $$ \omega_\theta\left( x,y \right) = e^{-\pi i \ip{ \theta x, y }}, $$ where we identify $\theta$ with the $2\times 2$ skew-symmetric matrix $\begin{pmatrix} 0 & -\theta \\ \theta & 0 \end{pmatrix}$. More explicitly, given two elements $x = \begin{pmatrix} x_1 \\ x_2 \end{pmatrix}$ and $y = \begin{pmatrix} y_1 \\ y_2 \end{pmatrix}$ in ${\mathbb Z}^2$ we have $$ \omega_\theta(x,y) = e^{ \pi i \theta( x_2 y_1 - x_1y_2 ) }. $$ The (full) twisted group $C^*$-algebra $C^*({\mathbb Z}^2, \omega_{\theta})$ is then a $C^*$-completion of the weighted convolution algebra $\ell^1({\mathbb Z}^2, \omega_{\theta})$. Using the universal property one checks that there is a $*$-isomorphism $$ \mathcal{A}_\theta \xrightarrow{ \cong } C^*({\mathbb Z}^2, \omega_\theta) $$ sending each generator $U_i$ to the point mass function $\delta_{e_i}$, where $\{e_1, e_2\}$ is the standard basis for ${\mathbb Z}^2$. Now consider the action of ${\mathbb Z}$ on ${\mathbb Z}^2$ given by multiplication by the matrix $A$, and write ${\mathbb Z}^2\rtimes_A{\mathbb Z}$ for the resulting semidirect product. Define a $2$-cocycle $\tilde{\omega}_\theta$ on ${\mathbb Z}^2\rtimes_A{\mathbb Z}$ by $$ \tilde{\omega}_\theta( (x, n), (y, m) ) = \omega_\theta( x, n.y ). $$ Then by \cite[Lemma 2.1]{ELPW10} there is an action $\alpha:{\mathbb Z}\to \Aut(C^*({\mathbb Z}^2, \omega_\theta ))$ generated by the automorphism $$ \hspace{5cm} [\alpha(f)](x) := f(A^{-1}x) \qquad \qquad (f\in \ell^1({\mathbb Z}^2), x\in {\mathbb Z}^2) $$ so that there is a $*$-isomorphism $$C^*({\mathbb Z}^2\rtimes_A {\mathbb Z}, \tilde{\omega}_\theta ) \xrightarrow{\cong} C^*({\mathbb Z}^2, \omega_{\theta})\rtimes_\alpha {\mathbb Z}. 
$$ More concretely, the isomorphism, when described at the level of convolution algebras, is given by $$ \ell^1({\mathbb Z}^2\rtimes_A {\mathbb Z}, \tilde{\omega}_\theta)\to \ell^1( {\mathbb Z}, \ell^1({\mathbb Z}^2, \omega_{\theta}) ), \qquad f\mapsto \Phi(f), $$ where $[\Phi(f)](n) = f(\cdot, n)$ for all $n\in {\mathbb Z}$. A direct computation shows that under the identification $\mathcal{A}_\theta \cong C^*({\mathbb Z}^2, \omega_\theta)$ the crossed product $C^*({\mathbb Z}^2, \omega_{\theta})\rtimes_\alpha {\mathbb Z}$ is precisely $\mathcal{A}_\theta\rtimes_A {\mathbb Z}$ (see \cite[page 185]{ELPW10}). Finally, as all of our groups are amenable, we are free to switch between the full and reduced versions of the crossed products and twisted group $C^*$-algebras \cite[Theorem 3.11]{PR89}. In what follows we recall a sufficient condition at the level of $2$-cocycles that ensures the resulting twisted group $C^*$-algebras are isomorphic. \begin{defn} Let $G$ be a locally compact group, and let $\omega, \omega'$ be continuous $2$-cocycles on $G$. We say $\omega$ and $\omega'$ are \emph{cohomologous}, written $\omega\sim_{\mathrm{cohom}} \omega'$, if there exists a continuous map $\lambda:G\to \mathbb{T}$ such that $$ \omega(s,t) = \lambda(s)\lambda(t)\overline{\lambda(st)}\omega'(s,t) $$ for all $s,t\in G$. \end{defn} Given a continuous $2$-cocycle $\omega$ on a locally compact group $G$ and an automorphism $\Phi\in\Aut(G)$, we define $\omega\circ \Phi: G\times G\to \mathbb{T}$ by $$ \omega\circ \Phi(s,t) := \omega(\Phi(s),\Phi(t)). $$ A computation shows that $\omega\circ \Phi$ is also a continuous $2$-cocycle on $G$. \begin{prop} \label{prop:cohom} Let $\omega$ and $\omega'$ be $2$-cocycles on $G$. If there is an automorphism $\Phi\in \Aut(G)$ such that $(\omega \circ \Phi) \sim_{\mathrm{cohom}}\omega'$, then $C^*(G, \omega) \cong C^*(G,\omega')$ and $C^*_r(G, \omega)\cong C^*_r(G, \omega')$.
\end{prop} \begin{proof} It is a well-known fact that cohomologous cocycles yield isomorphic twisted group $C^*$-algebras. To complete the proof, one checks that the map $C^*_r(G,\omega)\rightarrow C^*_r(G,\omega\circ \Phi)$ determined by the assignment $f\mapsto f\circ \Phi$ is a $\ast$-isomorphism. \end{proof} \section{$K$-theory} In this section we compute the $K$-theory of the crossed product $\mathcal{A}_\theta\rtimes_A {\mathbb Z}$, where $A\in \mathrm{SL}_2({\mathbb Z})$ is a matrix of infinite order (i.e., $A^n \neq I_2$ for any $n\in {\mathbb N}$). In the case that $\theta$ is irrational, the $K$-theory has been computed in \cite{BCHL18} using the Pimsner-Voiculescu sequence. One crucial ingredient used there is that the automorphism $\alpha_A$ on $\mathcal{A}_\theta$ induces the identity map at the level of $K_0$ (in fact, this is true for any automorphism of $\mathcal{A}_\theta$). When $\theta$ is irrational, this is easily seen by examining the image of $K_0$ under the unique tracial state. The same result for rational $\theta$ seems to be known to experts, though we were not able to locate a reference. Here we provide an argument based on homotopies between cocycles, as developed in \cite{ELPW10}. Let $G$ be a discrete group and $\Omega$ be a $C([0,1],\mathbb{T})$-valued $2$-cocycle. One can define the twisted crossed product $C^*$-algebra $C([0,1])\rtimes_{ \Omega, r } G$ as in the case of twisted group $C^*$-algebras (see \cite{PR89}). Here the underlying convolution algebra is the algebra of $\ell^1$ functions on $G$ with values in $C([0,1])$. Given any $x\in [0,1]$, the function $$ \omega_x := \Omega(\cdot, \cdot)(x) $$ is a $\mathbb{T}$-valued cocycle on $G$. There is a canonical quotient map (called the \emph{evaluation map}) $$ \mathrm{ev}_x: C([0,1])\rtimes_{ \Omega, r} G\to C^*_r(G, \omega_x) $$ such that for each function $f\in \ell^1(G, C([0,1]))$ and each $s\in G$ we have $[\mathrm{ev}_x(f)](s) = [f(s)](x)$. 
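As a purely illustrative aside (not used in any proof, and with all function names our own), both the $2$-cocycle identity for each fiber $\omega_\theta$ and the fact that $\omega_\theta$ recovers the rotation relation on the generators $\delta_{e_1}, \delta_{e_2}$ can be confirmed numerically:

```python
import cmath
from itertools import product

def omega_theta(theta, x, y):
    """The 2-cocycle on Z^2 from Section 2: omega_theta(x, y) = exp(pi*i*theta*(x2*y1 - x1*y2))."""
    return cmath.exp(cmath.pi * 1j * theta * (x[1] * y[0] - x[0] * y[1]))

def cocycle_identity_holds(theta, x, y, z, tol=1e-12):
    """Verify omega(x, y) * omega(x + y, z) == omega(y, z) * omega(x, y + z)."""
    xy = (x[0] + y[0], x[1] + y[1])
    yz = (y[0] + z[0], y[1] + z[1])
    lhs = omega_theta(theta, x, y) * omega_theta(theta, xy, z)
    rhs = omega_theta(theta, y, z) * omega_theta(theta, x, yz)
    return abs(lhs - rhs) < tol

# check the cocycle identity on a small grid, for a rational and an irrational theta
pts = [(-1, 2), (0, 1), (3, -1)]
assert all(cocycle_identity_holds(t, x, y, z)
           for t in (0.5, 2 ** 0.5)
           for x, y, z in product(pts, repeat=3))

# delta_{e2} * delta_{e1} = omega(e2, e1) delta_{e1+e2} and
# delta_{e1} * delta_{e2} = omega(e1, e2) delta_{e1+e2}, so their ratio
# reproduces the relation U_2 U_1 = e^{2 pi i theta} U_1 U_2
theta = 0.3
ratio = omega_theta(theta, (0, 1), (1, 0)) / omega_theta(theta, (1, 0), (0, 1))
assert abs(ratio - cmath.exp(2j * cmath.pi * theta)) < 1e-12
```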
\begin{thm} \cite[Corollary 1.11]{ELPW10} \label{thm:ev} If $G$ satisfies the Baum--Connes conjecture with coefficients, then the evaluation map $\mathrm{ev}_x$ induces an isomorphism on $K$-theory. \end{thm} In principle, this theorem allows us to transfer $K$-theoretical properties from a single fiber to \emph{all} the other fibers. In our case, we make use of our knowledge for irrational $\theta$ and conclude similar properties for the rational ones. As in \cite{BCHL18} we first treat the case when $\tr(A) \neq 2$. \begin{prop} \label{prop:injectivity} Let $\theta$ be any real number in $[0,1]$ and $A\in \mathrm{SL}_2({\mathbb Z})$ be a matrix of infinite order with $\tr(A)\notin \{0, \pm 1, 2\}$. Then the inclusion $\mathcal{A}_\theta\to \mathcal{A}_\theta\rtimes_A {\mathbb Z}$ induces an isomorphism between $K_0(\mathcal{A}_\theta)$ and $K_0(\mathcal{A}_\theta\rtimes_A {\mathbb Z})$. As a consequence, the automorphism $\alpha_A$ induces the identity map on $K_0(\mathcal{A}_\theta)$. \end{prop} \begin{proof} Let $\Omega$ be the $C([0,1],\mathbb{T})$-valued cocycle on ${\mathbb Z}^2$ defined by $$ \Omega( \cdot, \cdot )(\theta) := \omega_\theta $$ and let $\tilde{\Omega}$ be the $C([0,1],\mathbb{T})$-valued cocycle on ${\mathbb Z}^2\rtimes_A {\mathbb Z}$ defined using $\tilde{\omega}_\theta$ in the analogous way. Since the groups ${\mathbb Z}^2$ and ${\mathbb Z}^2\rtimes_A {\mathbb Z}$ satisfy the Baum--Connes conjecture with coefficients \cite{Hig01}, by Theorem \ref{thm:ev} both evaluation maps $$ \mathrm{ev}_\theta: C([0,1])\rtimes_{ \Omega, r} {\mathbb Z}^2\to C^*_r({\mathbb Z}^2, \omega_\theta), $$ $$ \mathrm{ev}_\theta: C([0,1])\rtimes_{ \tilde{\Omega}, r} ({\mathbb Z}^2 \rtimes_A {\mathbb Z}) \to C^*_r({\mathbb Z}^2 \rtimes_A {\mathbb Z}, \tilde{\omega}_\theta) $$ induce isomorphisms at the level of $K_0$-groups.
As in the case of twisted group $C^*$-algebras, there is an identification $$ C([0,1])\rtimes_{ \tilde{\Omega}, r} ({\mathbb Z}^2\rtimes_A {\mathbb Z}) \xrightarrow{\cong} ( C([0,1])\rtimes_{ \Omega, r} {\mathbb Z}^2 )\rtimes_\alpha {\mathbb Z} $$ that respects the evaluation maps (see \cite[Remark 2.3]{ELPW10}). Therefore we obtain a commutative diagram \begin{center} \begin{diagram}[!ht] \begin{tikzpicture}[node distance=2cm, auto] \node (1) {$C([0,1])\rtimes_{ \Omega, r} {\mathbb Z}^2$}; \node (2) [right of=1, node distance=5cm] {$C([0,1])\rtimes_{ \tilde{\Omega}, r} ({\mathbb Z}^2\rtimes_A {\mathbb Z})$}; \node (3) [below of=1] {$C^*({\mathbb Z}^2, \omega_{\theta})$}; \node (4) [below of=2] {$C^*({\mathbb Z}^2\rtimes_A {\mathbb Z}, \tilde{\omega}_\theta) $}; \node (5) [below of=3] {$\mathcal{A}_\theta$}; \node (6) [below of=4] {$\mathcal{A}_\theta\rtimes_A {\mathbb Z},$}; \draw[->] (1) to node {$\tilde{\iota}$} (2); \draw[->] [swap] (1) to node {$\mathrm{ev}_\theta$} (3); \draw[->] (2) to node {$\mathrm{ev}_\theta$} (4); \draw[->] (3) to node {$\iota'$} (4); \draw[->] (5) to node {$\iota$} (6); \draw[->] (3) to node {$\cong$} (5); \draw[->] (4) to node {$\cong$} (6); \end{tikzpicture}, \caption{} \label{dia:commutative} \end{diagram} \end{center} where the horizontal maps are the canonical inclusions. Let us quickly check that the above diagram is indeed commutative. Let $f$ be a function in $\ell^1({\mathbb Z}^2, C([0,1]))\subseteq C([0,1])\rtimes_{ \Omega, r} {\mathbb Z}^2$. The image $\tilde{\iota}(f)$ is determined by the formula $$ \tilde{\iota}(f)((s,t),x)=f(s,x)\,\delta_{t,0} $$ for $s\in {\mathbb Z}^2, t\in {\mathbb Z}$, and $x\in [0,1]$, where $\delta_{t,0}$ denotes the Kronecker delta. By the definition of the evaluation maps, we have $$ [\mathrm{ev}_\theta(f)](s) = f(s,\theta) $$ and $$ [\mathrm{ev}_\theta(\tilde{\iota}(f))](s,t)=\tilde{\iota}(f)((s,t),\theta) = f(s, \theta)\,\delta_{t,0}.
$$ By definition $\iota'(h)(s,t) = h(s)\,\delta_{t,0}$ for any $h \in \ell^1({\mathbb Z}^2, \omega_{\theta})\subseteq C^*({\mathbb Z}^2, \omega_{\theta})$, so we have $[\iota'(\mathrm{ev}_{\theta}(f))](s,t) = [\mathrm{ev}_{\theta}(f)](s)\,\delta_{t,0} = f(s,\theta)\,\delta_{t,0}$ and the upper rectangle is commutative. The commutativity of the lower rectangle follows from the explicit identifications discussed in Section 2. At the level of $K_0$-groups we have \begin{center} \begin{tikzpicture}[node distance=2cm, auto] \node (1) {$K_0(C([0,1])\rtimes_{ \Omega, r} {\mathbb Z}^2)$}; \node (2) [right of=1, node distance=6cm] {$K_0(C([0,1])\rtimes_{ \tilde{\Omega}, r} ({\mathbb Z}^2\rtimes_A {\mathbb Z}))$}; \node (3) [below of=1] {$K_0(\mathcal{A}_\theta )$}; \node (4) [below of=2] {$K_0(\mathcal{A}_\theta \rtimes_A {\mathbb Z})$}; \draw[->] (1) to node {$\tilde{\iota}_*$} (2); \draw[->] [swap] (1) to node {$(\mathrm{ev}_\theta)_*$} (3); \draw[->] (2) to node {$(\mathrm{ev}_\theta)_*$} (4); \draw[->] (3) to node {$\iota_*$} (4); \end{tikzpicture}. \end{center} From \cite[Proposition~3.2]{BCHL18} we know that the map $\iota_*$ is an isomorphism when $\theta$ is irrational. As both vertical maps are isomorphisms, so is the map $\tilde{\iota}_*$ at the top. Now the same commutative diagram shows that $\iota_*$ is an isomorphism for any real number $\theta$. The second statement follows from the Pimsner--Voiculescu exact sequence (see, for example, \cite[Chapter V]{Bla98}). \end{proof} To compute the tracial ranges, let us briefly recall some facts about (the $K$-theory of) $\mathcal{A}_\theta$. For any $\theta$ one can realize $\mathcal{A}_\theta$ as a crossed product $C(\mathbb{T})\rtimes_{r_\theta} {\mathbb Z}$, where ${\mathbb Z}$ acts on the circle $\mathbb{T}$ by rotations. There is a canonical tracial state $\tau_\theta$ on $\mathcal{A}_\theta$ given by integration.
More precisely, if we write $E: C(\mathbb{T})\rtimes_{r_\theta} {\mathbb Z}\to C(\mathbb{T})$ for the canonical conditional expectation, then $\tau_\theta$ is given by $$ \tau_\theta( a ) := \frac{1}{2\pi}\int_0^{2\pi} [E(a)](x) \;dx. $$ For any $\theta$ there is a \emph{Rieffel projection} $p_\theta$ whose class $[p_\theta]_0$ in the $K_0$-group satisfies $(\tau_\theta)_*([p_\theta]_0) = \theta$. If $\theta \neq 0$ then the projection $p_\theta$ was constructed by Rieffel \cite{Rie81}. In the case $\theta = 0$ one has to take $p_\theta$ in the amplification $M_2(\mathcal{A}_\theta)$, and we refer the reader to \cite{Thesis} for details. In either case, the group $K_0(\mathcal{A}_\theta)$ is generated by the elements $[\mathbf{1}]_0$ and $[p_\theta]_0$. \begin{rem} In Rieffel's original paper the angle $\theta$ was assumed to be irrational, but the construction and the proof in fact work for any nonzero $\theta$. \end{rem} We need one more fact about how the tracial states on $\mathcal{A}_\theta$ behave at the $K_0$-level. The following lemma is a special case of \cite[Lemma 2.3]{Ell80}. \begin{lem} [Elliott] Let $\theta$ be any real number in $[0,1)$. Then all tracial states on $\mathcal{A}_\theta$ induce the same map on $K_0(\mathcal{A}_\theta)$. \end{lem} \begin{rem} In \cite{Ell80} the result was stated in terms of a torsion-free abelian group $G$ and a character $\rho$ on the exterior power $G\wedge G$. In our case one takes the group $G$ to be ${\mathbb Z}^2$, so the exterior power ${\mathbb Z}^2\wedge {\mathbb Z}^2$ is isomorphic to ${\mathbb Z}$. The character $\rho$ is then identified with an element $e^{2\pi i \theta}$ in the circle. Finally, one checks that the $C^*$-algebra $\mathcal{A}_\rho$ appearing in the lemma is nothing but the rotation algebra $\mathcal{A}_\theta$. \end{rem} The next theorem shows that, among other things, the crossed product $\mathcal{A}_\theta\rtimes_A {\mathbb Z}$ has the same $K$-theory and tracial ranges for any real number $\theta$ in $[0,1]$.
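The torsion numbers $h_1, h_2$ appearing below are easy to compute in concrete examples. The following sketch (illustrative only; the sample matrix and the function name are ours) uses the fact that for a nonsingular $2\times 2$ integer matrix the first invariant factor is the gcd of the entries and the product of the two invariant factors is the absolute value of the determinant:

```python
from math import gcd

def smith_normal_form_2x2(M):
    """Invariant factors (h1, h2) of a nonsingular 2x2 integer matrix M.

    The determinantal divisors are d1 = gcd of the entries and d2 = |det M|,
    so h1 = d1 and h2 = d2 / d1.
    """
    (a, b), (c, d) = M
    d1 = gcd(gcd(abs(a), abs(b)), gcd(abs(c), abs(d)))
    d2 = abs(a * d - b * c)
    return d1, d2 // d1

# Sample matrix A = [[2, 1], [1, 1]] in SL_2(Z) with tr(A) = 3, so
# A^{-1} = [[1, -1], [-1, 2]] and I_2 - A^{-1} = [[0, 1], [1, -1]].
M = [[0, 1], [1, -1]]
h1, h2 = smith_normal_form_2x2(M)
print(h1, h2)  # prints: 1 1, so K_1 has no torsion for this A
# consistency check: |det(I_2 - A^{-1})| = |2 - tr(A)| for A in SL_2(Z)
assert h1 * h2 == abs(2 - 3)
```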
We refer the reader to \cite[Section 2.1]{BCHL18} for a discussion of matrix equivalence and Smith normal forms. \begin{thm}\label{thm:K-theory} Let $\theta$ be any real number in $[0,1]$, and let $A\in \mathrm{SL}_2({\mathbb Z})$ be a matrix of infinite order with $\tr(A) \not\in\{0,\pm1,2\}$. Suppose the rank $2$ matrix $I_2-A^{-1}$ has the Smith normal form $\diag(h_1,h_2)$. Then \begin{align*} K_0(\mathcal{A}_{\theta} \rtimes_A {\mathbb Z}) &\cong {\mathbb Z}\oplus {\mathbb Z}, \\ K_1(\mathcal{A}_{\theta} \rtimes_A {\mathbb Z}) &\cong {\mathbb Z} \oplus {\mathbb Z} \oplus {\mathbb Z}_{h_1} \oplus {\mathbb Z}_{h_2}, \end{align*} and a set of generators for $K_0$ is given by $[\mathbf{1}]_0$ and $[\iota(p_\theta)]_0$. Moreover, given any tracial state $\tau'$ on $\mathcal{A}_\theta\rtimes_A {\mathbb Z}$ the induced map $$ \tau'_*: K_0(\mathcal{A}_{\theta}\rtimes_A{\mathbb Z})\rightarrow {\mathbb R} $$ satisfies $\tau'_*( [\mathbf{1}]_0 ) = 1$ and $\tau'_*([\iota(p_\theta)]_0) = \theta$. In particular, all tracial states on $\mathcal{A}_\theta\rtimes_A{\mathbb Z}$ induce the same map on $K_0(\mathcal{A}_{\theta}\rtimes_A{\mathbb Z})$. \end{thm} \begin{proof} From Proposition \ref{prop:injectivity} we know that the map $$\iota_*: K_0(\mathcal{A}_\theta) \rightarrow K_0(\mathcal{A}_\theta\rtimes_A {\mathbb Z})$$ is an isomorphism. Since $[\mathbf{1}]_0$ and $[p_\theta]_0$ generate $K_0(\mathcal{A}_\theta)$, we conclude that $K_0(\mathcal{A}_\theta\rtimes_A {\mathbb Z}) \cong {\mathbb Z}\oplus {\mathbb Z}$ and that $K_0(\mathcal{A}_\theta\rtimes_A {\mathbb Z})$ is generated by $[\mathbf{1}]_0$ and $\iota_*([p_\theta]_0)$.
Now the Pimsner--Voiculescu sequence for $\mathcal{A}_{\theta}\rtimes_A{\mathbb Z}$ reads as follows: \[ \begin{CD} K_0 (\mathcal{A}_{\theta}) @>{\mathrm{id} - \alpha^{-1}_{*0}}>>K_0 (\mathcal{A}_{\theta})@>{\iota_{*}}>> K_0 (\mathcal{A}_{\theta} \rtimes_A {\mathbb Z} ) \\ @A{\delta_1}AA & & @VV{\delta_0}V \\ K_1 (\mathcal{A}_{\theta}\rtimes_A {\mathbb Z}) @<<{\iota_{*}}< K_1 (\mathcal{A}_{\theta}) @<<{\mathrm{id} - \alpha^{-1}_{*1}}< K_1 (\mathcal{A}_{\theta}). \end{CD} \] Since $\iota_*: K_0(\mathcal{A}_\theta) \rightarrow K_0(\mathcal{A}_\theta\rtimes_A {\mathbb Z})$ is an isomorphism, we are left with the short exact sequence \begin{equation} 0 \longrightarrow \bigslant{ K_1(\mathcal{A}_{\theta})}{\mathrm{im}(\mathrm{id}-\alpha^{-1}_{*1})}\stackrel{\iota_*}{\longrightarrow} K_1(\mathcal{A}_{\theta}\rtimes_A{\mathbb Z}) \stackrel{\delta_1}{\longrightarrow} K_0(\mathcal{A}_{\theta}) \longrightarrow 0. \end{equation} Now the same proof as in \cite[Proposition 3.1]{BCHL18} gives that $$ K_1(\mathcal{A}_{\theta} \rtimes_A {\mathbb Z}) \cong {\mathbb Z} \oplus {\mathbb Z} \oplus {\mathbb Z}_{h_1} \oplus {\mathbb Z}_{h_2}. $$ For the second statement, let $\tau'$ be any tracial state on $\mathcal{A}_{\theta}\rtimes_A{\mathbb Z}$. Then $\tau'\circ \iota$ is a tracial state on $\mathcal{A}_{\theta}$. By the preceding lemma any tracial state on $\mathcal{A}_{\theta}$ induces the same map from $K_0(\mathcal{A}_{\theta})$ to ${\mathbb R}$, and the map sends $[p_\theta]_0$ to $\theta$. Therefore, \begin{align*} \tau'_*(\iota_*([p_\theta]_0))&=(\tau'\circ \iota)_*([p_\theta]_0) = \theta. \end{align*} This concludes the proof. \end{proof} The following result was first obtained by Farsi and Watling in \cite{FW94}. \begin{cor} \label{cor:same_theta} \cite[Proposition 6]{FW94} Let $\theta$ and $\theta'$ be two real numbers, and $A$ and $B$ two matrices in $\mathrm{SL}_2({\mathbb Z})$ such that $\tr(A), \tr(B)\not\in\{0,\pm1,2\}$.
If $\mathcal{A}_{\theta}\rtimes_A{\mathbb Z}$ is isomorphic to $\mathcal{A}_{\theta'}\rtimes_B{\mathbb Z}$, then $$ \theta\equiv \pm\theta'\mod {\mathbb Z}\text{\ and\ } I_2-A^{-1}\sim_{\mathrm{eq}}I_2-B^{-1}. $$ \end{cor} \begin{proof} From the computation of $K$-theory (Theorem \ref{thm:K-theory}), we obtain the matrix equivalence between $I_2-A^{-1}$ and $I_2-B^{-1}$ directly from the isomorphic $K_1$-groups. The condition that $\theta \equiv \pm \theta' \pmod{{\mathbb Z}}$ can be viewed as a consequence of isomorphic crossed products having the same \emph{twist}, as defined in \cite{Yin90}. Here we give a direct argument for the reader's convenience. Suppose $\varphi: \mathcal{A}_{\theta}\rtimes_A{\mathbb Z} \rightarrow \mathcal{A}_{\theta'}\rtimes_B{\mathbb Z}$ is an isomorphism, and let $\tau_A$ and $\tau_B$ be arbitrary tracial states on $\mathcal{A}_{\theta}\rtimes_A{\mathbb Z}$ and $\mathcal{A}_{\theta'}\rtimes_B{\mathbb Z}$, respectively. Then the diagram \begin{center} \begin{tikzpicture}[node distance=2cm, auto] \node (1) {$K_0(\mathcal{A}_{\theta}\rtimes_A{\mathbb Z})$}; \node (2) [right of=1, node distance=3.5cm] {$K_0(\mathcal{A}_{\theta'}\rtimes_B{\mathbb Z})$}; \node (3) [below of=1] {${\mathbb R}$}; \node (4) [below of=2] {${\mathbb R}$}; \draw[->] (1) to node {$\varphi_*$} (2); \draw[->] [swap] (1) to node {$(\tau_A)_*$} (3); \draw[->] (2) to node {$(\tau_B)_*$} (4); \draw[->] (3) to node {$=$} (4); \end{tikzpicture} \end{center} commutes because of Theorem \ref{thm:K-theory}. The isomorphism $\varphi_*$ fixes the class $[\mathbf{1}]_0$ and is hence represented by a matrix of the form $$ \begin{pmatrix} 1&n\\ 0&\pm1 \end{pmatrix}. $$ We compute that \begin{align*} \theta&= (\tau_A)_*([\iota(p_{\theta})]_0) = (\tau_B)_*\varphi_*([\iota(p_{\theta})]_0)\\ &= (\tau_B)_*(n[\mathbf{1}]_0\pm[\iota(p_{\theta'})]_0)\\ &=n\pm\theta'. \end{align*} Hence $\theta\equiv\pm\theta'\pmod{\mathbb Z}$ and the proof is complete. \end{proof} \begin{rem} The twist invariant, as mentioned in the proof of Corollary \ref{cor:same_theta}, is only defined for $C^*$-algebras with certain $K$-theoretic properties.
One of the consequences of Theorem \ref{thm:K-theory} is that the twist is well-defined for the class of crossed products that we are considering. \end{rem} We conclude the section with the $K$-theory computation of $\mathcal{A}_{\theta} \rtimes_A {\mathbb Z}$ when $\tr(A) = 2$ and $A\neq I_2$. The argument is essentially the same as that of the irrational cases investigated in \cite{BCHL18}. To avoid undue repetition, we only give a brief outline. Any matrix $A$ in $\mathrm{SL}_2({\mathbb Z})$ with $\tr(A)=2$ and $A\neq I_2$ is conjugate in $\mathrm{SL}_2({\mathbb Z})$ to either \begin{equation}\label{reduction} \begin{pmatrix} 1&h_1\\ 0&1 \end{pmatrix}\ \text{or\ } \begin{pmatrix} 1&0\\ h_1&1 \end{pmatrix}, \end{equation} for some positive integer $h_1$ (see \cite{MKS04}). Therefore, the computation reduces to the special cases when $A$ is equal to either matrix in (\ref{reduction}). At the $K_0$-level, Diagram \ref{dia:commutative} yields the commutative diagram \begin{center} \begin{tikzpicture}[node distance=2cm, auto] \node (1) {$K_0(C([0,1])\rtimes_{ \Omega, r} {\mathbb Z}^2)$}; \node (2) [right of=1, node distance=6cm] {$K_0(C([0,1])\rtimes_{ \tilde{\Omega}, r} ({\mathbb Z}^2\rtimes_A {\mathbb Z}))$}; \node (3) [below of=1] {$K_0(\mathcal{A}_\theta )$}; \node (4) [below of=2] {$K_0(\mathcal{A}_\theta \rtimes_A {\mathbb Z}).$}; \draw[->] (1) to node {$\tilde{\iota}_*$} (2); \draw[->] [swap] (1) to node {$(\mathrm{ev}_\theta)_*$} (3); \draw[->] (2) to node {$(\mathrm{ev}_\theta)_*$} (4); \draw[->] (3) to node {$\iota_*$} (4); \end{tikzpicture}. \end{center} Since the homomorphism $\iota_*$ is known to be injective when $\theta$ is irrational \cite{BCHL18}, the same must be true for $\tilde{\iota}_*$. Applying commutativity of the diagram again we see that $\iota_*$ is injective for any $\theta$.
Then the Pimsner--Voiculescu sequence gives the (split) short exact sequence \begin{equation*}\label{tr_2} 0 \longrightarrow K_0(\mathcal{A}_{\theta}) \stackrel {\iota_*}{\longrightarrow} K_0(\mathcal{A}_{\theta}\rtimes_A{\mathbb Z})\stackrel{\delta_0}{\longrightarrow} \mathrm{ker}(\mathrm{id}-\alpha^{-1}_{*1}) \longrightarrow 0. \end{equation*} The assumption on $A$ implies that $\mathrm{ker}(\mathrm{id}-\alpha^{-1}_{*1})={\mathbb Z}[U_i]_1$ for $i=1,2$, according to which of the two matrices in (\ref{reduction}) is considered. Now it is a complete repetition of the proof of \cite[Proposition 3.3]{BCHL18} and the discussion that precedes it to find a preimage of $[U_i]_1$ under $\delta_0$. Essentially, one computes the preimage $[P]_0$ of $[U_i]_1$ for the case $\theta = 0$, which was done in \cite{Thesis}. Then choosing a suitable $*$-homomorphism from $C(\mathbb{T})$ to $\mathcal{A}_\theta$ and applying naturality of the Pimsner--Voiculescu sequence allows one to find the preimage $[P_{U_i,w}]_0$ of $[U_i]_1$ for any $\theta\in [0,1]$. Finally, from the proof of \cite[Proposition 3.3]{BCHL18} we see that the trace of $[P_{U_i,w}]_0$ does not depend on the angle $\theta$. The next theorem summarizes our discussion for the case $\tr(A) = 2$. We write $P_A$ for the image of $P_{U_i,w}$ under the isomorphism coming from conjugacy of matrices. \iffalse Consider the Pimsner--Voiculescu sequence arising from the trivial action of ${\mathbb Z}$ on $C(\mathbb{T})$ and let $\delta'_0$ denote the exponential map. Since $[U_i]_1$ is a fixed point of the automorphism $\alpha_A$, a suitable choice of a $*$-homomorphism $C(\mathbb{T})\rightarrow \mathcal{A}_\theta$ enables one to obtain a commutative diagram consisting of $\delta_0$ and $\delta'_0$ according to the naturality of the Pimsner--Voiculescu sequence.
Since the related preimage under $\delta'_0$ has already been computed in \cite{Thesis}, a combination of this diagram and results in \cite{Thesis} gives the desired preimage of $[U_i]_1$ under $\delta_0$, denoted $[P_{U_i,w}]_0$ as in \cite{BCHL18}. Let $[P_A]_0$ denote the image of $[P_{U_i,w}]_0$ under the induced isomorphism of $\varphi$ when $A$ is a generic matrix in $\mathrm{SL}_2({\mathbb Z})$. The computation of the $K_1$-group is direct from the Pimsner--Voiculescu sequence. The image of $[P_A]_0$ under the induced map of any tracial state is clear because of the construction of the projection $P_A$. \fi \begin{thm}\label{thm:Ktr=2} Let $\theta$ be any real number in $[0,1]$, and let $A\in \mathrm{SL}_2({\mathbb Z})$ be a matrix of infinite order with $\tr(A) = 2$. Suppose the rank-one matrix $I_2-A^{-1}$ has the Smith normal form $\diag(h_1,0)$. Then \begin{align*} K_0(\mathcal{A}_{\theta} \rtimes_A {\mathbb Z}) &\cong {\mathbb Z}\oplus {\mathbb Z}\oplus {\mathbb Z}, \\ K_1(\mathcal{A}_{\theta} \rtimes_A {\mathbb Z}) &\cong {\mathbb Z}\oplus {\mathbb Z} \oplus {\mathbb Z} \oplus {\mathbb Z}_{h_1} \end{align*} and a set of generators for $K_0$ is given by $[\mathbf{1}]_0$, $\iota_*([p_\theta]_0)$, and $[P_A]_0$. Moreover, given any tracial state $\tau'$ on $\mathcal{A}_\theta\rtimes_A {\mathbb Z}$, the induced map $$ \tau'_*: K_0(\mathcal{A}_{\theta}\rtimes_A{\mathbb Z})\rightarrow {\mathbb R} $$ satisfies $\tau'_*( [\mathbf{1}]_0 ) = 1$, $\tau'_*([\iota(p_\theta)]_0) = \theta$, and $\tau'_*( [P_A]_0 ) = 1$. In particular, all tracial states on $\mathcal{A}_\theta\rtimes_A{\mathbb Z}$ induce the same map on $K_0(\mathcal{A}_{\theta}\rtimes_A{\mathbb Z})$.
\end{thm} \begin{rem} The main issue that arises when one attempts to completely classify the crossed products considered in this article is the problem of producing $\ast$-isomorphisms $$\mathcal{A}_\theta\rtimes_A {\mathbb Z} \rightarrow \mathcal{A}_{\theta'}\rtimes_B {\mathbb Z},$$ provided that $\theta$, $\theta'$, $A$, and $B$ are suitably related. When the angles are assumed to be irrational, such isomorphisms were produced in \cite{BCHL18} by invoking abstract classification machinery, which reduces the problem to producing isomorphisms between the respective Elliott invariants. In the rational case this approach is no longer available. Instead, one can try to produce such isomorphisms directly on the level of the crossed products. This, however, turns out to be very difficult. One possible approach is to further exploit the realization of the algebras $\mathcal{A}_\theta\rtimes_A {\mathbb Z}$ as twisted group algebras of the form $C^*({\mathbb Z}^2\rtimes_A {\mathbb Z},\tilde{\omega}_\theta)$. We will discuss this approach in two directions. Suppose first that $A\in \mathrm{SL}_2({\mathbb Z})$ admits a reversing symmetry $B\in \mathrm{GL}_2({\mathbb Z})$ (i.e., $BA = A^{-1}B$) such that $\det(B)=-1$. Routine calculations reveal that $B$ defines an automorphism $\Phi_B\in \mathrm{Aut}({\mathbb Z}^2\rtimes_A {\mathbb Z})$ such that $ \tilde{\omega}_\theta = \tilde{\omega}_{-\theta}\circ \Phi_B.$ In particular, under these assumptions, we can conclude that the crossed products $\mathcal{A}_\theta \rtimes_A {\mathbb Z}$ and $\mathcal{A}_{\theta'}\rtimes_A {\mathbb Z}$ are isomorphic if and only if $\theta \equiv \pm \theta' \pmod {\mathbb Z}$. It is known that not all hyperbolic matrices admit reversing symmetries. For example, it was shown in \cite{BR97} that the hyperbolic matrix $$\begin{pmatrix} 4 & 9 \\ 7 & 16 \end{pmatrix}$$ admits no reversing symmetry.
However, many hyperbolic matrices $A =\begin{pmatrix} a & b \\ c & d \end{pmatrix}\in \mathrm{SL}_2({\mathbb Z})$ do admit reversing symmetries with determinant $-1$, in particular those for which either $b \mid (a-d)$ or $c \mid (a-d)$ (see \cite[3.1.1, Proposition 4]{BR97}). \end{rem} \begin{rem} In a different direction, let us explain the role that the structure of the group ${\mathbb Z}^2\rtimes_A {\mathbb Z}$ plays. Our main observation is that the center of ${\mathbb Z}^2\rtimes_A {\mathbb Z}$ is non-trivial if and only if $\tr(A)=2$. In that case, one easily checks that $Z({\mathbb Z}^2\rtimes_A {\mathbb Z})\cong {\mathbb Z}$ and that the quotient of ${\mathbb Z}^2\rtimes_A {\mathbb Z}$ by its center is isomorphic to ${\mathbb Z}^2$. In particular, ${\mathbb Z}^2\rtimes_A {\mathbb Z}$ is a nilpotent group of nilpotency class $2$. This observation allows us to apply recent results on $\mathrm{C}^*$-superrigidity obtained in \cite{MR3861703}: indeed, given two matrices $A,B\in \mathrm{SL}_2({\mathbb Z})$ with $\tr(A)=2$, a careful comparison of the equivalence classes of the triples $(Z({\mathbb Z}^2\rtimes_A {\mathbb Z}),{\mathbb Z}^2\rtimes_A {\mathbb Z}/Z({\mathbb Z}^2\rtimes_A {\mathbb Z}),\sigma_A)$ and $(Z({\mathbb Z}^2\rtimes_B {\mathbb Z}),{\mathbb Z}^2\rtimes_B {\mathbb Z}/Z({\mathbb Z}^2\rtimes_B {\mathbb Z}),\sigma_B)$, where $\sigma_A$ and $\sigma_B$ denote the respective extension cocycles, reveals that $C(\mathbb{T}^2)\rtimes_A {\mathbb Z}\cong C^*({\mathbb Z}^2\rtimes_A{\mathbb Z})\cong C^*({\mathbb Z}^2\rtimes_B{\mathbb Z})\cong C(\mathbb{T}^2)\rtimes_B {\mathbb Z}$ if and only if $\tr(B)=2$ and $I_2-A^{-1}$ and $I_2-B^{-1}$ share the same Smith normal form up to sign. This provides an interesting alternative approach to \cite[Theorem~20]{FW94} in the case $\theta=0=\theta'$. \end{rem} \bibliographystyle{plain}
Climate science – latest

Below you'll find a list of all posts that have been categorized as "Climate science – latest."

Quantifying national responsibility for climate breakdown: an equality-based attribution approach for carbon dioxide emissions in excess of the planetary boundary, based on equal per capita access to the atmospheric commons. The US is responsible for 40 percent of excess emissions as of 2015. By Jason Hickel, The Lancet, Volume 4, Issue 9, E399-E404, September 01, 2020. DOI: https://doi.org/10.1016/S2542-5196(20)30196-0
Summary: This analysis proposes a novel method for quantifying national responsibility for damages related to climate change by looking at national contributions to cumulative CO2 emissions in excess of the planetary boundary of 350 ppm atmospheric CO2 concentration. This approach is rooted in the …

Nature: A wasted decade of climate inaction shows us that incremental shifts that might have been sufficient at one time are no longer enough. A rapid shift with 4x the work or a third the time is necessary now. Some have stopped fossil fuel production and exploration, but not the US. 04 March 2020, Nature. Emissions: world has four times the work or one-third of the time: New synthesis shows what a wasted decade means for the climate pact made in Paris. Nature 579, 25-28 (2020), doi: https://doi.org/10.1038/d41586-020-00571-x, https://www.nature.com/articles/d41586-020-00571-x
The past decade of political failure on climate change has cost us all dearly. It has shrunk the time left for action by two-thirds. …

Hail, America's most under-rated climate risk, has expanded its footprint across the country + "staggering increase in heat deaths"; fossil fuel and coal downturn. Hail! It's America's most underrated climate risk, Atlantic, February 9, 2021. (See bottom of this post.) Wisconsin researchers in a new study find that over the last 40 years hail has expanded its footprint across the country and has become more frequent in the so-called Hail Alley stretching from Wyoming to Texas. Study: Warmer weather will increase flooding in the Columbia River Basin this …

The amount of energy accumulating in the oceans is equivalent to detonating five Hiroshima atomic bombs per second, every second over the past 25 years. 6 February 2020. Earth is heating at a rate equivalent to five atomic bombs per second. This is a re-post from the Bulletin of the Atomic Scientists. A recent study in the journal Advances in Atmospheric Sciences reported that the heat absorbed in Earth's oceans reached a new record in 2019. This has been the case for almost every year over the past decade, but this information …

Impacts worsening, pace of climate change and Arctic melting increasing. The year started with fires in Australia, and all year long it seemed as if areas of the globe were aflame, culminating in California's worst wildfire season and infernos in places that rarely burned. At the same time, there were more major tropical storms in the Atlantic than ever recorded before. Last year, Hiroko Tabuchi, a climate reporter, and Jonah Kessel, a videographer, …

More Than Two Degrees of Climate Warming Is Already Locked In, New Study in Nature Climate Change Finds, by Olivia Rosane, Jan. 06, 2021. Researchers estimate that a likely total of 2.3 degrees Celsius of warming is now locked in. Existing greenhouse gases will eventually push the climate into more than two degrees of warming, according to a study published …

CO2 levels to breach 50% rise from pre-industrial era in 2021 – Met Office. Climate crisis: 2020 was hottest year ever recorded. The Guardian, Damian Carrington. The climate crisis continued unabated in 2020, with the joint highest global temperatures on record, alarming heat and record wildfires in the Arctic, and a record 29 tropical storms in the Atlantic. Despite a 7% fall in fossil fuel burning due to coronavirus lockdowns, heat-trapping carbon dioxide …

Faster melting, hotter temperatures predicted in new climate models. 15 December 2020, Carbon Brief: New climate models suggest faster melting of the Greenland Ice Sheet, by Ayesha Tandon. Greenland's vast ice sheet could melt faster than previously thought over the 21st century, according to a new study. The Greenland ice sheet is the second largest mass of ice on Earth, holding enough water to raise global sea levels by 7.2 metres. …

1.5°C above baseline is now the prescribed upper limit to global warming. Global temperature for the first six months of 2020 registered 1.3°C above baseline, a number that has new significance ever since the IPCC Special Report/2018 about the risks of exceeding 1.5°C. November 20, 2020, https://www.counterpunch.org/2020/11/20/expert-ipcc-reviewer-speaks-out/ Expert IPCC Reviewer Speaks Out, by Robert Hunziker. Roger Hallam, co-founder of Extinction Rebellion/XR, recently interviewed Peter Carter, M.D., who has …

Research: getting close to tipping points. November 2020. In the paper, published as a commentary in the journal Nature this month, the group of researchers summarize the latest findings related to the threat of tipping points as part of an effort to "identify knowledge gaps" and suggest ways to fill them. "We explore the effects of such large-scale changes," the scientists explain, "how quickly they might unfold …
package io.skysail.server.app.notes.resources; import java.util.List; import io.skysail.api.links.Link; import io.skysail.server.ResourceContextId; import io.skysail.server.app.notes.NotesResource; public class MyNotesResource extends NotesResource { public MyNotesResource() { super(MyNoteResource.class); addToContext(ResourceContextId.LINK_TITLE, "list Notes"); } @Override public List<Link> getLinks() { return super.getLinks(MyPostNoteResource.class ); } }
El Jueves is a Spanish satirical weekly magazine featuring comics and cartoons. The magazine first appeared in 1977 and is published every Wednesday (not on Thursday, as its title, Spanish for "Thursday", would suggest). Favorite targets of its satire are politics, the church, the monarchy, and sex.
<!doctype html>
<html>
<head>
  <title>Eddie</title>
  <link rel="stylesheet" type="text/css" href="assets/alternate_css1.css">
</head>
<body>
  <header>
    <div class="image">
      <img src="assets/bookshelf.jpg" class="bookshelf-image" alt="bookshelf background image">
      <div class="image-opacity">
        <div id="top-left-logo">
          <img src="assets/heart.png" alt="heart icon" class="heart-icon" height="100" width="100">
        </div>
        <nav>
          <ul>
            <li class="header-nav-li"><a class="header-links" href="this-site.html">Home</a></li>
            <li class="header-nav-li"><a class="header-links" href="this-site.html">Blog</a></li>
            <li class="header-nav-li"><a class="header-links" href="this-site.html">About</a></li>
          </ul>
        </nav>
      </div>
    </div>
  </header>
  <div id="hero"></div>
  <div class="welcome-message">
    <h4>I am just getting started in building this site, but come back soon to find something interesting.</h4>
    <h4>Thanks!</h4>
  </div>
  <div id="blog-sample-window">
    <div id="blog-sample-text"></div>
    <div id="blog-sample-img">
      <!-- <img src="#" alt="blog sample image"> -->
    </div>
  </div>
  <footer>
    <div id="social">
      <ul>
        <li class="social-links-ul"><a href="this-site.html">LinkedIn</a></li>
        <li class="social-links-ul"><a href="this-site.html">Github</a></li>
        <li class="social-links-ul"><a href="this-site.html">Twitter</a></li>
        <li class="social-links-ul"><a href="this-site.html">Facebook</a></li>
      </ul>
    </div>
    <div class="farewell">
      <p>
        Thanks for stopping by! | <a href="this-site.html" class="contact-btn">Contact Me</a>
      </p>
    </div>
  </footer>
</body>
</html>
Just up the street from the home base of Animal Planet's Dr. Jeff: Rocky Mountain Vet is one of the city's best-stocked boutiques for pampered pets. At Mouthfuls, one of the main attractions is the signature "bone bar" that allows Chumley (or Rufus or Bella) to sample and approve exotic treats — cheese-and-liver stars, dinner-mint bones, you name it — before their owners shovel heftier portions into bags for purchase. You can also order mix-and-match samplers from a virtual bone bar online. Your canine companion may be leading a dog's life, but that doesn't mean it has to be ruff.
The Coaches' Journal is different. It's not about highlights, stats, or analysis. It's about people. It's about in-depth stories and life lessons. It's about igniting passion, inspiring action, and improving the sport and leadership landscape. To make a greater impact, The Coaches' Journal is seeking exclusive wisdom, website, newsletter, and podcast sponsors. With a steadily growing, highly engaged audience, this would be a prominent opportunity to drive emotional connection and strengthen brand loyalty. For more information on partnering with The Coaches' Journal, contact advertise@thecoachesjournal.com.
The Viper Jet (originally a homebuilt!) started life with a piston engine and pusher prop, but thankfully was modified for turbine to become one of the sleekest modern civilian jet aircraft. This 1:6 scale version is for 90mm EDF, and makes an excellent step towards R/C turbines, or for those looking for a large EDF that is suitable for club fields.
A student master thesis titled "Data gathering and -assembling from several smart meter HAN ports" is in progress. The project will build an efficient system for collecting real-time data from the HAN (home area network) port of distributed smart meters. The information is then transferred to a cloud service for a preliminary analysis of the assembled data. Normally, the real-time data from the smart meters are used for analyzing in-house consumption and for in-house energy management systems. However, by analyzing real-time data from several smart meters in a neighborhood, information about the condition of the outdoor connecting grid can be extracted. As the smart meters are already installed in the grid for fiscal reasons, these measurements are "nearly free". The real-time data from the smart meters can reveal whether there is an irregular mismatch between the energy from the transformer station and the sum of the energy flowing in or out of the houses. Further, the measurements will tell how the voltage varies along the line. This is especially of interest for long lines with prosumers connected to the grid. Depending on which measurements are available on the HAN port on a real-time basis, other useful information can be extracted from the sketched system, to the benefit of the utility company.

Figure: Transformer station with connected houses, all with smart meters transferring real-time data via modems to a cloud service for analyses.

Two modules are in place so far:
- A tiny embedded system for reading the output of the HAN port and interpreting the data.
- A 4G connection to a cloud service for data transfer using existing telecommunication infrastructure.

The remaining work will consist of stitching together the modules, duplicating the equipment, and then testing the system in a real environment, i.e., in a real grid with measurements from all smart meters on a power line from a transformer station.
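The mismatch analysis described above can be sketched in a few lines of code. This is a minimal illustration, not part of the thesis: the function names, the example readings, and the 1 kW loss threshold are all invented for the example.

```python
# Compare the power measured at the transformer station with the sum of the
# HAN-port readings of the connected houses. Positive values are consumption,
# negative values are production by a prosumer. The threshold is a placeholder
# that a real system would calibrate against the expected line losses.

def mismatch(transformer_kw, house_readings_kw):
    """Unexplained power (kW) on the connecting line."""
    return transformer_kw - sum(house_readings_kw)

def is_irregular(transformer_kw, house_readings_kw, threshold_kw=1.0):
    """True if the mismatch exceeds what ordinary line losses can explain."""
    return abs(mismatch(transformer_kw, house_readings_kw)) > threshold_kw

# Three houses, one of them a prosumer feeding 2 kW back into the grid.
houses = [4.2, 3.1, -2.0]
normal = is_irregular(5.8, houses)   # about 0.5 kW of losses: not flagged
faulty = is_irregular(9.0, houses)   # 3.7 kW unexplained: flagged
```

In the real system the house readings would arrive from the cloud service with timestamps, so the comparison would be made per time interval rather than on single snapshots.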
At the Waterline is a blog about issues related to water as a crucial nexus: news and commentary that describe and expound upon its scarcity, its conservation and related technologies, its role in burgeoning conflicts around the world, and its place among human rights. The primary author is journalist David Snow, who has written and edited for numerous print and online publications, including The National Law Journal, Law.com, Law Technology News magazine, Wired News, and CNET. He recently served as head of communication for the international NGO and membership organization WaterLex, though this blog remained independent from that group. He also worked as communications contractor for Kaiser Permanente's Government Relations department and its Institute for Health Policy. In Snow's prior experience, he served as executive editor of ALM Media's Law.com network and later as the company's editorial director for technology, managing Law Technology News magazine and its website and e-newsletters. Prior to that, he worked extensively in tech publishing. He holds a B.A. in magazine journalism from Syracuse University's S.I. Newhouse School of Public Communications. Follow the blog on Twitter: @atthewaterline David Snow's public LinkedIn profile
\section{Introduction} Quantum nonlocality and Heisenberg's uncertainty principle \cite{Heisenberg-o} are two essential concepts in quantum mechanics (QM). The nonclassical information shared among different parts forms the basis of quantum information and is responsible for many counterintuitive features of QM, e.g., quantum cryptography \cite{Gisin-2002} and quantum teleportation \cite{teleporting-1993}. From information theory, people have put forward certain principles to specify the quantum correlation, including nontrivial communication complexity \cite{communication-complexity-2006}, information causality \cite{Information-causuality-2009}, entropic uncertainty relations \cite{entropic-uncertainty-2010}, local orthogonality \cite{local-orthogonality-2013}, and global exclusivity \cite{global-exclusivity-C,global-exclusivity-Y}. Note that these principles stem from the notion of information; they mainly concern the bipartite correlation. It has been shown that understanding multipartite intrinsic structure is indispensable to the determination of quantum correlation, and it is rather difficult to derive the Hilbert space structure from information quantities alone \cite{Require-multi-2011}. Heisenberg's uncertainty principle has a deep impact on quantum measurement, and it reflects the mutual influence of measurement precision and disturbance on a quantum system only in the form of the measurement-disturbance relation (MDR). The well-known Heisenberg-Robertson uncertainty relation reads \cite{Robertson} \begin{eqnarray} \Delta A\Delta B \geq |\langle C \rangle| \label{Heisenberg-SD} \end{eqnarray} with $C = \frac{1}{2i}[A,B]$ and the standard deviation $\Delta X = \sqrt{\langle \psi|X^2|\psi \rangle - \langle\psi| X|\psi\rangle^2}$. Note that in (\ref{Heisenberg-SD}), only the properties of two observables in the ensemble of a quantum state are involved, and the relation is independent of any specific measurement.
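Relation (\ref{Heisenberg-SD}) is straightforward to check numerically. The following snippet (an illustration added for concreteness, not part of the derivation) verifies it for the qubit observables $A=\sigma_x$ and $B=\sigma_y$, for which $C=\frac{1}{2i}[\sigma_x,\sigma_y]=\sigma_z$, on randomly drawn pure states:

```python
import numpy as np

# Pauli matrices; for A = sigma_x, B = sigma_y one has C = [A, B]/(2i) = sigma_z.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def std(op, psi):
    """Standard deviation of `op` in the normalized state `psi`."""
    mean = np.vdot(psi, op @ psi).real
    mean_sq = np.vdot(psi, op @ op @ psi).real
    return np.sqrt(max(mean_sq - mean**2, 0.0))

rng = np.random.default_rng(1)
for _ in range(1000):
    psi = rng.normal(size=2) + 1j * rng.normal(size=2)
    psi /= np.linalg.norm(psi)
    lhs = std(sx, psi) * std(sy, psi)
    rhs = abs(np.vdot(psi, sz @ psi).real)
    assert lhs >= rhs - 1e-12   # Heisenberg-Robertson holds for every state
```

The bound is saturated by the eigenstates of $\sigma_z$, for which $\Delta\sigma_x\Delta\sigma_y = |\langle\sigma_z\rangle| = 1$.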
The MDR has been intensively studied both theoretically \cite{Ozawa-2003-PRA,Hall-2004, Weston-Pryde,Branciard,MDR-correlation} and experimentally \cite{Neutron-spin-MDR, Photon-weak-MDR,Ringbauer-2013, Kaneda-2013}. However, the implication of the MDR for quantum information and quantum measurement is still unclear \cite{Busch-2013,Dressel-2013,Quantum-metrology-N}. In practice, there are different forms of MDR, most of which have undergone experimental checks and still survive \cite{Ringbauer-2013,Kaneda-2013}. Therefore, determining the impacts of various MDRs on quantum physics, or ascertaining the right form of MDR, is currently an urgent task. In this work, we propose a scheme for transforming any MDR to some constraint inequalities of multipartite correlation functions. In this way, the attainable strength of correlations in multipartite state may be considered to be the physical consequence of the restriction on the quantum measurement imposed by the MDR. The structure of the paper is as follows. In Sec. 2, typical versions of the MDR are presented, and their essential differences are illustrated by embedding the MDR into a coordinate system. In Sec. 3, we transform the various MDRs to constraint inequalities on bipartite correlation functions in a tripartite state with the help of the nonfactorable state. Then it is shown that the constraint inequalities must be held for all the tripartite and multipartite entangled states. Detailed examples and an experimental setup for the verification of the various MDRs based on our scheme are given for a three-qubit system. The concluding remarks are given in Sec. 4. \section{The MDR in QM} \subsection{The quantum measurement and its disturbance} A quantum measurement process may be generally implemented by coupling a meter system $|\phi\rangle$ with the original system $|\psi\rangle$. The measurement result $M$ is obtained from the readout of the meter system. 
As the physical observables are represented by Hermitian operators in QM, following the definition in \cite{Ozawa-2003-PRA}, the measurement precision of physical quantity $A$ and the corresponding disturbance of quantity $B$ are defined as expectation values of the mean squares: \begin{eqnarray} \epsilon(A)^2 \equiv \langle \phi| \langle \psi| [ \mathcal{A} - A_1 \otimes I_2]^2 |\psi\rangle |\phi\rangle \; , \label{def-precision}\\ \eta(B)^2 \equiv \langle \phi| \langle \psi| [\mathcal{B}- B_1\otimes I_2]^2|\psi\rangle |\phi\rangle \; . \label{def-disturbance} \end{eqnarray} Here $\mathcal{A}=U^{\dag}(I_1\otimes M_2)U$, $\mathcal{B}=U^{\dag}(B_1 \otimes I_2)U$, the subscripts 1 and 2 signify that the operators are acting on states $|\psi\rangle$ and $|\phi\rangle$, respectively, $U$ is a unitary interaction between $|\psi\rangle$ and $|\phi\rangle$, and $I$ is the identity operator. The measurement operator $M$ may be set to $A$ if it has the same possible outcomes as operator $A$, i.e., $\mathcal{A}= U^{\dag}(I_1\otimes A_2)U$ \cite{MDR-correlation, Branciard}. Note that in addition to this operator formalism, which we will work with, there are also other types of definitions for the measurement precision and disturbance, e.g., the probability distribution formalism \cite{Busch-2013}. The MDR indicates that there is a fundamental restriction on the measurement precision $\epsilon(A)$ and the reaction (disturbance) $\eta(B)$ when two incompatible physical observables $A$ and $B$ are about to be measured. 
By the definitions of (\ref{def-precision}) and (\ref{def-disturbance}), typical MDR representatives are as follows: \begin{eqnarray} & & \epsilon(A)\eta(B) \geq |\langle C\rangle| \; , \label{Heisenberg-MDR} \\ & & \epsilon(A)\eta(B) + \epsilon(A)\Delta B + \eta(B)\Delta A \geq |\langle C \rangle| \; , \label{Ozawa-MDR} \\ & & \epsilon(A)\eta(B) + \epsilon(A) \Delta \mathcal{B} + \eta(B) \Delta\mathcal{A} \geq |\langle C\rangle| \; , \label{Hall-MDR}\\ & & \epsilon(A)(\Delta \mathcal{B} + \Delta B) + \eta(B)(\Delta \mathcal{A} +\Delta A) \geq 2|\langle C\rangle| \; , \label{Weston-MDR} \\ & & \Delta B^2 \epsilon(A)^2 + \Delta A^2 \eta(B)^2 + 2\epsilon(A)\eta(B) \sqrt{\Delta A^2\Delta B^2 - \langle C\rangle^2} \geq \langle C\rangle^2 \; . \label{Branciard-MDR} \end{eqnarray} Here $C = [A,B]/2i$, and $\Delta X$ are the standard deviations of operators $X = A, B, \mathcal{A}, \mathcal{B}$ evaluated in the quantum state $|\psi\rangle$. Equations (\ref{Heisenberg-MDR})-(\ref{Branciard-MDR}) correspond to Heisenberg-type (He), Ozawa's (Oz) \cite{Ozawa-2003-PRA}, Hall's (Ha) \cite{Hall-2004}, Weston {\it et al.}'s (We) \cite{Weston-Pryde}, and Branciard's (B1) \cite{Branciard} MDRs, respectively. Equation (\ref{Branciard-MDR}) can be refined, in the specific qubit case, as \begin{eqnarray} & & \epsilon(A)^2[1-\epsilon(A)^2/4] + \eta(B)^2[1-\eta(B)^2/4] \nonumber \\ & & + 2\epsilon(A)\eta(B) \sqrt{1-\langle C\rangle^2} \sqrt{1-\eta(B)^2/4} \sqrt{1-\epsilon(A)^2/4} \geq \langle C \rangle^2 \; . \label{Branciard-MDR-refine} \end{eqnarray} [Equation (\ref{Branciard-MDR-refine}) is abbreviated as B2 below.] So far, the Heisenberg-type MDR has been found to be violated, while others have undergone various sorts of trials in experiment and still survive \cite{Kaneda-2013,Ringbauer-2013}. Finding a stricter constraint on $\epsilon(A)$ and $\eta(B)$ is currently a hot topic in physics \cite{Branciard-13}.
Besides focusing on the natures of different MDRs, it is also important to know what different physical consequences they would have on quantum information science. \subsection{The essential difference among various MDRs} It is obvious that the above representative MDRs differ in tightness. Here we propose a method for the quantitative study of MDRs. We first transform MDRs into coordinate space and then express them as relations between $\epsilon(A)$ and $\eta(B)$. For the sake of convenience and without loss of generality, we take three typical MDRs, the Heisenberg type Eq. (\ref{Heisenberg-MDR}), Ozawa's Eq. (\ref{Ozawa-MDR}), and Branciard's Eq. (\ref{Branciard-MDR}), as examples. \begin{figure}\centering \scalebox{0.25}{\includegraphics{Heisenberg.eps}} \scalebox{0.25}{\includegraphics{Ozawa.eps}} \scalebox{0.25}{\includegraphics{Branciard1.eps}} \caption{Illustration of different MDRs for the same kinds of quantum states with identical ensemble properties of $\Delta A$, $\Delta B$, $\langle C\rangle$. The allowed values of precision $\epsilon(A)$ and disturbance $\eta(B)$ fill the shaded areas, which correspond to (a) Heisenberg-type, (b) Ozawa's, and (c) Branciard's MDRs. The essential differences among those MDRs lie in the forbidden areas for $\epsilon(A)$ and $\eta(B)$, which are characterized by the minimal distances to the origin, $r_{\mathrm{He}}$, $r_{\mathrm{Oz}}$, and $r_{\mathrm{B1}}$, where the subscripts stand for the corresponding MDRs.} \label{Fig-error-circ} \end{figure} In the Heisenberg-type MDR (\ref{Heisenberg-MDR}), the measurement-dependent [$\epsilon(A)$, $\eta(B)$] and measurement-independent ($\langle C\rangle$) quantities are on different sides of the inequality. The allowed region (AR) for $\epsilon(A)$ and $\eta(B)$ is above the hyperbolic curve $\epsilon(A)\eta(B) = |\langle C\rangle|$ in quadrant I [see Fig. \ref{Fig-error-circ}(a)].
The forbidden region for the values of $\epsilon(A)$ and $\eta(B)$ is enclosed by the curve and two axes, which may be characterized by the radius of the circle centered at the origin and tangent with the hyperbola. This radius represents the minimal distance from the AR to the origin of the coordinates, $\epsilon(A)^2 + \eta(B)^2 \geq r_{\mathrm{He}}^2$, where, for the Heisenberg-type MDR, $r_{\mathrm{He}}^2 = f_{\mathrm{He}}(\langle C \rangle) = 2|\langle C\rangle|$. For inequality (\ref{Ozawa-MDR}), substituting $ \epsilon(A) = \epsilon'(A) - \Delta A$ and $\eta(B) = \eta'(B) - \Delta B$, we have \begin{eqnarray} \epsilon'(A) \eta'(B) \geq \Delta A\Delta B + |\langle C\rangle| \; . \end{eqnarray} This is a displaced hyperbola of the Heisenberg type [see Fig. \ref{Fig-error-circ}(b)]. The AR for $\epsilon(A)$ and $\eta(B)$ may be obtained from (10), and its minimal distance to the origin can also be expressed as \begin{eqnarray} \epsilon(A)^2 + \eta(B)^2 \geq r_{\mathrm{Oz}}^2 = f_{\mathrm{Oz}}(\Delta A,\Delta B, \langle C\rangle) \; . \label{APP-A-separateform} \end{eqnarray} Inequality (8) can be reformulated as \begin{eqnarray} \begin{pmatrix} \epsilon(A) & \eta(B) \end{pmatrix} \begin{pmatrix} \Delta B^2 & \sqrt{\Delta A^2\Delta B^2 - \langle C\rangle ^2} \\ \sqrt{\Delta A^2\Delta B^2 - \langle C\rangle^2} & \Delta A^2 \end{pmatrix} \begin{pmatrix} \epsilon(A) \\ \eta(B) \end{pmatrix} \geq \langle C\rangle^2 \; . \label{App-ellipse-fun} \end{eqnarray} Different from Heisenberg-type and Ozawa's MDRs, (\ref{App-ellipse-fun}) is an ellipse of $\epsilon(A)$ and $\eta(B)$ centered at the origin. Similar to (11), in this case the values of $\epsilon(A)$ and $\eta(B)$ in the AR satisfy \begin{eqnarray} \epsilon(A)^2 + \eta(B)^2 \geq r_{\mathrm{B1}}^2 = f_{\mathrm{B1}}(\Delta A,\Delta B,\langle C\rangle) \; , \end{eqnarray} where $r_{\mathrm{B1}}$ is the minor axis of the ellipse with regard to the parametric condition $\Delta A\Delta B \geq |\langle C\rangle|$. 
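Both minimal distances can be confirmed numerically. The snippet below (added as an illustration, with arbitrarily chosen ensemble values $\Delta A=\Delta B=1$ and $|\langle C\rangle|=0.6$) scans the boundary curves and recovers $r_{\mathrm{He}}^2 = 2|\langle C\rangle|$ for the hyperbola and, for the ellipse (\ref{App-ellipse-fun}), the squared semi-minor axis $\langle C\rangle^2/\lambda_{\max}$, where $\lambda_{\max}$ is the largest eigenvalue of the $2\times 2$ matrix in (\ref{App-ellipse-fun}):

```python
import numpy as np

C = 0.6                      # arbitrary |<C>| for the illustration
dA, dB = 1.0, 1.0            # ensemble standard deviations with dA*dB >= |C|

# Heisenberg-type boundary eps*eta = |<C>|: minimize eps^2 + eta^2 over it.
eps = np.linspace(1e-3, 10.0, 200001)
r2_he = (eps**2 + (C / eps)**2).min()
assert abs(r2_he - 2 * C) < 1e-6          # r_He^2 = 2|<C>|

# Branciard boundary x^T M x = <C>^2 (the ellipse of the quadratic form).
s = np.sqrt(dA**2 * dB**2 - C**2)
M = np.array([[dB**2, s], [s, dA**2]])
w, V = np.linalg.eigh(M)                  # eigenvalues in ascending order
t = np.linspace(0.0, 2 * np.pi, 100001)
y = np.stack([C * np.cos(t) / np.sqrt(w[0]),
              C * np.sin(t) / np.sqrt(w[1])])
x = V @ y                                 # points on the ellipse boundary
r2_b1 = (x**2).sum(axis=0).min()
assert abs(r2_b1 - C**2 / w[1]) < 1e-9    # squared semi-minor axis
```

Since an orthogonal change of coordinates preserves distances, the minimum of $|x|^2$ on the ellipse boundary is attained along the eigendirection of $\lambda_{\max}$, which is exactly the semi-minor axis mentioned above.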
For the convenience of comparison, the MDRs in Eqs. (\ref{Heisenberg-MDR}), (\ref{Ozawa-MDR}), (\ref{Branciard-MDR}) are shown in Fig. \ref{Fig-error-circ} with the same values of $\Delta A$, $\Delta B$, and $\langle C\rangle$. The values of $\epsilon(A)$ and $\eta(B)$ in the AR fill up the shaded areas, and the unshaded parts are then the forbidden regions. To summarize, all of the MDRs (including those yet to be discovered) have the shortest distance $r_{\mathrm{q}}$ from their AR to the origin as a function of $\Delta A$, $\Delta B$, and $\langle C\rangle$: \begin{eqnarray} \epsilon(A)^2 + \eta(B)^2 \geq r_{\mathrm{q}}^2 = f_{\mathrm{q}}(\Delta A,\Delta B,\langle C\rangle) \; . \label{allowed-area} \end{eqnarray} Here $f_{\mathrm{q}}$ relies only on the ensemble properties of a quantum state, i.e., $\Delta A$, $\Delta B$, and $\langle C\rangle$, which are independent of measurement processes (expressions of $f_{\mathrm{q}}$ for typical MDRs are presented in Appendix {\bf A}). Thus, the shortest distance $r_{\mathrm{q}}$ from the AR to the origin is independent of the measurement process and represents the essence of each MDR. \section{The constraint of MDR on quantum correlation} \subsection{A nonfactorable bipartite quantum state} Although the various MDRs may be distinguished by $r_{\mathrm{q}}$, the physical consequences of different $r_{\mathrm{q}}$ in quantum information theory are far from obvious. To this end, in this work we present a scheme to examine the MDR. For two Hermitian operators $A$ and $B$ with $[A,B]=2iC$, we may construct a nonfactorable bipartite state $|\psi_{12}\rangle$ satisfying \begin{eqnarray} A_1\otimes I_2 |\psi_{12}\rangle = I_1 \otimes A'_2 |\psi_{12}\rangle \; ,\; B_1\otimes I_2 |\psi_{12}\rangle = I_1 \otimes B'_2 |\psi_{12}\rangle\; , \label{non-factorable12} \end{eqnarray} where $A' = UAU^{\dag}$ and $B' = VBV^{\dag}$ are unitary transformations of $A$ and $B$, respectively, and hence have the same eigenvalues.
The subscripts 1 and 2 indicate the corresponding particles being acted on. We shall show that for Hermitian operators $A$ and $B$, there always exists such a nonfactorable state $|\psi_{12}\rangle$. The Hermitian operators $A$ and $B$ may be expressed in spectral decomposition as \begin{eqnarray} A = \sum_{i=1}^{N} \alpha_i|\alpha_i\rangle \langle \alpha_i| \; , \; B = \sum_{i=1}^{N} \beta_i|\beta_i\rangle \langle \beta_i| \; . \end{eqnarray} There is a unitary transformation matrix $W$ between the two orthogonal bases, $|\beta_i\rangle = \sum_{\mu=1}^{N}|\alpha_{\mu}\rangle w_{\mu i}$, where $w_{\mu i}$ are the matrix elements of $W$. The following proposition holds. \begin{proposition} If the unitary transformation matrices $U$ and $V$ are congruence equivalent, that is, $U = W V W^{\mathrm{T}}$, then $|\psi_{12}\rangle = \frac{1}{\sqrt{N}}\sum_{i=1}^{N}|\alpha_i\rangle|\alpha_i'\rangle$ satisfies \begin{eqnarray} A_1\otimes I_2 |\psi_{12}\rangle = I_1 \otimes A'_2 |\psi_{12}\rangle \; ,\; B_1\otimes I_2 |\psi_{12}\rangle = I_1 \otimes B'_2 |\psi_{12}\rangle\;. \label{supp-proposition-nonfactor} \end{eqnarray} Here $A'|\alpha_i'\rangle = \alpha_i|\alpha_i'\rangle$, and the subscripts of the operators stand for the particles they act on. \label{proposition-psi} \end{proposition} \noindent{\bf Proof:} Given that the bipartite state $|\psi_{12}\rangle = \frac{1}{\sqrt{N}} \sum_{i=1}^{N} |\alpha_i\rangle|\alpha_i'\rangle$ satisfies the first equality of Eq. (\ref{supp-proposition-nonfactor}), we need to prove that the second equality of Eq. (\ref{supp-proposition-nonfactor}) is also satisfied.
The state $|\psi_{12}\rangle$ may be expressed in the basis of $|\beta_i\rangle$ and $|\beta_i'\rangle$ as \begin{eqnarray} |\psi_{12}\rangle = \sum_{i,j=1}^{N} \gamma_{ij}^{(b)} |\beta_i\rangle |\beta'_j\rangle \; , \label{App-B-0} \end{eqnarray} where $|\beta_i\rangle$, $|\beta_i'\rangle$ are the eigenvectors of $B$, $B'$ with the same eigenvalue $\beta_i$, $\gamma_{ij}^{(b)} \in \mathbb{C}$. We have \begin{eqnarray} \left. \begin{array}{l} |\alpha_i\rangle = \sum_{j}|\alpha'_j\rangle u^{\dag}_{ji} \\ |\beta_i\rangle = \sum_{j} |\alpha_j\rangle w_{ji} \\ |\beta_i'\rangle = \sum_{j} |\beta_j\rangle v_{ji} \end{array} \right\} \Rightarrow |\beta'_i\rangle = \sum_{j} \left[ \sum_{k} \left( \sum_{\nu} |\alpha'_{\nu}\rangle u^{\dag}_{\nu k} \right) w_{k j} \right]v_{ji} \; , \end{eqnarray} or, more succinctly, $|\beta'_i\rangle = \sum_{j,k ,\nu} u^{\dag}_{\nu k} w_{k j} v_{ji} |\alpha'_{\nu}\rangle$ with $v_{ji}$, and $u_{\nu k}^{\dag}$ being matrix elements of $V$ and $U^{\dag}$. Therefore, $|\psi_{12}\rangle$ may also be expressed as \begin{eqnarray} |\psi_{12}\rangle & = & \sum_{il} \gamma^{(b)}_{il}|\beta_i\rangle |\beta'_{l}\rangle = \sum_{i,j,k,l,\mu,\nu} w_{\mu i} u^{\dag}_{\nu k} w_{k j} v_{jl} \gamma^{(b)}_{il}|\alpha_{\mu}\rangle |\alpha'_{\nu}\rangle \nonumber \\ & = & \sum_{\mu\nu}\gamma^{(a)}_{\mu\nu} |\alpha_{\mu}\rangle|\alpha_{\nu}'\rangle \; , \end{eqnarray} where \begin{eqnarray} \sum_{i,j,k,l} w_{\mu i} u^{\dag}_{\nu k} w_{k j} v_{jl}\gamma_{il}^{(b)} = U^{\dag} W V \Gamma^{(b)\mathrm{T}} W^{\mathrm{T}} = \Gamma^{(a)\mathrm{T}}\; . \end{eqnarray} Here $\gamma^{(a)}_{ij}$, $\gamma^{(b)}_{ij}$ are the matrix elements of $\Gamma^{(a)}$, $\Gamma^{(b)}$ and the superscript $\mathrm{T}$ is the transpose of a matrix. 
Because $|\psi_{12}\rangle = \frac{1}{\sqrt{N}}\sum_{i} |\alpha_i\rangle|\alpha_i'\rangle$, we have $\gamma_{ij}^{(a)} = \delta_{ij}/\sqrt{N}$, and \begin{eqnarray} \Gamma^{(b)\mathrm{T}} & = & V^{\dag}W^{\dag} U \Gamma^{(a)\mathrm{T}} W^{*} = \frac{1}{\sqrt{N}} V^{\dag}W^{\dag} U W^{*} \nonumber \\ & = & \frac{1}{\sqrt{N}} V^{\dag}W^{\dag} WVW^{\mathrm{T}}W^{*} =\frac{1}{\sqrt{N}} \; , \end{eqnarray} where the congruence relation $U = W V W^{\mathrm{T}}$ is employed. That is, \begin{eqnarray} |\psi_{12}\rangle = \frac{1}{\sqrt{N}} \sum_{i=1}^{N}|\alpha_i\rangle |\alpha'_i\rangle = \frac{1}{\sqrt{N}} \sum_{i=1}^{N} |\beta_i\rangle |\beta'_i\rangle\; , \label{App-psi12} \end{eqnarray} and therefore, \begin{eqnarray} A_1\otimes I_2 |\psi_{12}\rangle = I_1 \otimes A'_2 |\psi_{12}\rangle \; ,\; B_1\otimes I_2 |\psi_{12}\rangle = I_1 \otimes B'_2 |\psi_{12}\rangle\;. \end{eqnarray} Q.E.D. Proposition \ref{proposition-psi} indicates that the state $|\psi_{12}\rangle = \frac{1}{\sqrt{N}} \sum_{i=1}^{N} |\alpha_i\rangle |\alpha'_i\rangle$ satisfies both equalities of Eq. (\ref{non-factorable12}) while the transformation matrices $U$ and $V$ satisfy $U=WVW^{\mathrm{T}}$. \subsection{The constraint of MDR on quantum correlation} With the nonfactorable bipartite quantum state in Eq. (\ref{non-factorable12}), we have the following theorem. \begin{theorem} A set of tripartite states may be obtained by an interaction $U_{13}$ of particle 1 in the state $|\psi_{12}\rangle$ with a third particle (particle 3), $\Psi = \{|\psi_{123}\rangle|\, |\psi_{123}\rangle = U_{13}|\psi_{12}\rangle|\phi_3\rangle, U_{13}^{\dag}U_{13}=I\}$. The various MDRs imply the following relationship for states $|\psi_{123}\rangle\in \Psi$: \begin{eqnarray} E(A'_2,A_3) + E(B'_2, B_1) \leq \frac{1}{2}( \langle A_2^{'2} \rangle + \langle A_3^2 \rangle + \langle B_2^{'2} \rangle + \langle B_1^2 \rangle - \gamma_{\mathrm{q}}) \; .
\label{Corr-ineq-3} \end{eqnarray} Here $E(X_i,Y_j) = \langle X_i \otimes Y_j\rangle$ is the bipartite correlation function with $X,Y \in \{A,A',B,B'\}$, and $\gamma_{\mathrm{q}} = \mathrm{Max} \{ \sum_{i} |\ _2\langle p_i|\psi_{12}\rangle|^2 f_{\mathrm{q}}^{(i)}\}$ is independent of $U_{13}$, where $\{|p_i\rangle\}$ is an arbitrary complete orthonormal basis; $f_{\mathrm{q}}^{(i)}$ represents the function $f_{\mathrm{q}}$ evaluated under quantum states $|\psi_1^{(i)}\rangle = \ _2\langle p_i|\psi_{12}\rangle/|\ _2\langle p_i|\psi_{12}\rangle|$. \label{Theorem-MDR-Correlation} \end{theorem} \noindent{\bf Proof:} A set of quantum states $|\psi^{(i)}_1\rangle$ of particle 1 may be prepared by projecting particle 2 in the bipartite entangled state $|\psi_{12}\rangle$ proposed in Proposition \ref{proposition-psi} with a complete orthonormal basis $\{|p_i\rangle\}$: \begin{eqnarray} |\psi^{(i)}_1\rangle = \frac{\ _2\langle p_i|\psi_{12}\rangle}{|_2\langle p_i |\psi_{12}\rangle|} \; . \label{supp-psi1-def} \end{eqnarray} Substituting $|\psi_1^{(i)}\rangle$ into the definitions of measurement precision and disturbance [i.e., Eqs. (2) and (3)], we have \begin{eqnarray} \epsilon^{(i)}(A)^2 = \langle \phi_3| \langle \psi_1^{(i)}| [ \mathcal{A} - A_1 \otimes I_3]^2 |\psi_1^{(i)}\rangle |\phi_3\rangle \; , \label{App-(i)A} \\ \eta^{(i)}(B)^2 = \langle \phi_3| \langle \psi_1^{(i)}| [\mathcal{B}- B_1\otimes I_3]^2|\psi_1^{(i)}\rangle |\phi_3\rangle \; . \label{App-(i)B} \end{eqnarray} Here $|\phi_3\rangle$ describes the meter system.
Further, we may write \begin{eqnarray} |_2\langle p_i |\psi_{12}\rangle|^2 \epsilon^{(i)}(A)^2 = \langle \phi_3| \langle \psi_{12}| [U_{13}^{\dag}(I_1\otimes A_3) U_{13} - A_1 \otimes I_3]^2 P^{(i)}_2|\psi_{12}\rangle |\phi_3\rangle \; , \\ |_2\langle p_i |\psi_{12}\rangle|^2 \eta^{(i)}(B)^2 = \langle \phi_3| \langle \psi_{12}| [U_{13}^{\dag}(B_1\otimes I_3) U_{13} - B_1 \otimes I_3]^2 P^{(i)}_2|\psi_{12}\rangle |\phi_3\rangle \; , \end{eqnarray} where $P_2^{(i)} = |p_i\rangle_2\langle p_i|$ is a projection operator acting on particle 2. Using the completeness relation $\sum_{i}|p_i\rangle\langle p_i| = 1$, \begin{eqnarray} \sum_{i} |_2\langle p_i |\psi_{12}\rangle|^2 \epsilon^{(i)}(A)^2 =\ \langle \phi_3| \langle \psi_{12}| [U_{13}^{\dag}(I_1\otimes I_2\otimes A_3) U_{13} - A_1 \otimes I_2 \otimes I_3]^2 |\psi_{12}\rangle |\phi_3\rangle \; , \nonumber \\ \sum_i |_2\langle p_i |\psi_{12}\rangle|^2 \eta^{(i)}(B)^2 =\ \langle \phi_3| \langle \psi_{12}| [U_{13}^{\dag}(B_1\otimes I_2\otimes I_3) U_{13} - B_1 \otimes I_2\otimes I_3]^2 |\psi_{12}\rangle |\phi_3\rangle\; .
\nonumber \end{eqnarray} According to Proposition \ref{proposition-psi}, \begin{eqnarray} \sum_{i} |_2\langle p_i |\psi_{12}\rangle|^2 \epsilon^{(i)}(A)^2 = \langle \phi_3|\langle \psi_{12}| [U_{13}^{\dag}(I_1\otimes I_2\otimes A_3) U_{13} - I_1\otimes A'_2\otimes I_3]^2 |\psi_{12}\rangle |\phi_3\rangle \; ,\\ \sum_i |_2\langle p_i |\psi_{12}\rangle|^2 \eta^{(i)}(B)^2 = \langle \phi_3|\langle \psi_{12}| [U_{13}^{\dag}(B_1 \otimes I_2\otimes I_3) U_{13} - I_1\otimes B'_2 \otimes I_3]^2 |\psi_{12}\rangle |\phi_3\rangle\; , \end{eqnarray} which give (note the interaction $U_{13}$ commutes with the operators of particle 2) \begin{eqnarray} \sum_{i} |_2\langle p_i |\psi_{12}\rangle|^2 \epsilon^{(i)}(A)^2 & = & \langle \psi_{123}| ( A_3 - A'_2)^2 |\psi_{123}\rangle \; , \label{App-epsilonA}\\ \sum_i |_2\langle p_i |\psi_{12}\rangle|^2 \eta^{(i)}(B)^2 & = & \langle \psi_{123}| ( B_1 - B'_2)^2 |\psi_{123}\rangle \; . \label{App-etaB} \end{eqnarray} Here $|\psi_{123}\rangle = U_{13}|\psi_{12}\rangle|\phi_3\rangle$. For each quantum state $|\psi_1^{(i)}\rangle$, every MDR has its own AR, and its minimal distance to the origin is \begin{eqnarray} \epsilon^{(i)}(A)^2 + \eta^{(i)}(B)^2 \geq f^{(i)}_{\mathrm{q}} (\Delta A,\Delta B,\langle C\rangle)\; , \label{App-radius} \end{eqnarray} where the superscript of $f^{(i)}_{\mathrm{q}}(\Delta A,\Delta B,\langle C\rangle)$ specifies that the arguments of the function are evaluated under the quantum state $|\psi_1^{(i)}\rangle$ and $\mathrm{q}$ stands for He, Oz, B1, etc. The sum of Eq. (\ref{App-epsilonA}) and Eq. (\ref{App-etaB}) gives \begin{eqnarray} \sum_{i} |_2\langle p_i |\psi_{12}\rangle|^2 \left[\epsilon^{(i)}(A)^2 + \eta^{(i)}(B)^2 \right] = \langle \psi_{123}| ( A_3 - A'_2)^2 + ( B_1 - B'_2)^2 |\psi_{123}\rangle \; .
\label{app-sum-radii} \end{eqnarray} Applying (\ref{App-radius}) to (\ref{app-sum-radii}), \begin{eqnarray} & & \langle \psi_{123}| ( A_3 - A'_2)^2 + ( B_1 - B'_2)^2 |\psi_{123}\rangle \nonumber \\ & = & \sum_{i} |_2\langle p_i |\psi_{12}\rangle|^2 \left[\epsilon^{(i)}(A)^2 + \eta^{(i)}(B)^2 \right] \geq F_{\mathrm{q}}(|\psi^{(i)}_1\rangle,\Delta A,\Delta B,\langle C\rangle) \; . \label{App-C-distance} \end{eqnarray} Here $F_{\mathrm{q}}(|\psi^{(i)}_1\rangle,\Delta A,\Delta B,\langle C\rangle) \equiv \sum_{i} |_2\langle p_i |\psi_{12}\rangle|^2 f_{\mathrm{q}}^{(i)}(\Delta A,\Delta B, \langle C\rangle)$. Note that the establishment of inequality (\ref{App-C-distance}) depends on Eq. (\ref{App-radius}) but not on the choice of projection bases $|p_i\rangle$. Therefore the inequality (\ref{App-C-distance}) should also hold as we optimize the bases $|p_i\rangle$ to get a maximum value of $F_{q}$, that is, \begin{eqnarray} \langle \psi_{123}| ( A_3 - A'_2)^2 + ( B_1 - B'_2)^2 |\psi_{123}\rangle \geq \gamma_{\mathrm{q}} \;, \label{quadratic-inequality} \end{eqnarray} with $\gamma_{\mathrm{q}} = \mathrm{Max} \{\sum_{i} |_2\langle p_i |\psi_{12}\rangle|^2 f_{\mathrm{q}}^{(i)}\}$. Expanding the quadratic terms, Eq. (\ref{quadratic-inequality}) turns into the constraint on correlation $E(A_3,A_2') + E(B_1,B_2')$, \begin{eqnarray} E(A_3,A_2') + E(B_1,B_2') \leq \frac{1}{2}(\langle A_3^2 \rangle + \langle A_2^{'2} \rangle + \langle B_1^2 \rangle + \langle B_2^{'2} \rangle - \gamma_{\mathrm{q}}) \; , \label{Theorem-1} \end{eqnarray} with $E(X_i,Y_j) = \langle \psi_{123}|X_iY_j|\psi_{123}\rangle$ and $X,Y \in \{A,A',B,B'\}$. Q.E.D. The theorem may be summarized as follows: (1) When a pair of incompatible observables $A$, $B$ is given, the bipartite entangled state $|\psi_{12}\rangle$ exists, and one particle of this state may interact with a third particle $|\phi_3\rangle$ by $U_{13}$. 
(2) Different forms of MDR may give different constraints on the bipartite correlations that can be shared with a third particle. (3) Each constraint inequality, characterized by $\gamma_{\mathrm{q}}$, is independent of the interaction $U_{13}$, and is satisfied by all the quantum states in the set $\Psi$. The constraint in the form of Eq. (\ref{Corr-ineq-3}) is satisfied by all tripartite pure systems; for details see discussions on the universality of Theorem \ref{Theorem-MDR-Correlation} presented in Appendix {\bf B}. The measurement process of Theorem \ref{Theorem-MDR-Correlation} is illustrated in Fig. \ref{App-fig-three-partite}. According to Theorem \ref{Theorem-MDR-Correlation}, when one of the entangled particles interacts with a third particle, the maximal quantum correlation that may be shared with the third party is not determined by the interaction but by the MDR. From Eq. (\ref{Corr-ineq-3}) it is clear that the larger the forbidden area of measurement precision and disturbance is, the less correlation the MDR permits. The generalization of Theorem \ref{Theorem-MDR-Correlation} to incorporate multipartite states may be realized by introducing successive measurements with more meter systems. Other generalizations are also possible, as the measurement processes might be implemented in different scenarios. \begin{figure}\centering \scalebox{0.5}{\includegraphics{three-particles.eps}} \caption{ Schematic illustration of the measurement process of Theorem \ref{Theorem-MDR-Correlation}. A bipartite state $|\psi_{12}\rangle$ of particles 1 and 2 is prepared, and then the measurement process is carried out through the interaction $U_{13}$ with a third particle (particle 3). After the interaction, the bipartite correlations are measured for the resulting tripartite state $|\psi_{123}\rangle = U_{13}|\psi_{12}\rangle|\phi_3\rangle$ at detectors D1, D2, and D3.
} \label{App-fig-three-partite} \end{figure} \subsection{Physical consequences of MDR in a three-qubit system} Here we give a detailed example for a three-qubit system to show that different MDRs indeed give different constraints on bipartite quantum correlations. In a qubit system, two incompatible operators may be set to $A = Z = \sigma_z$ and $B=X=\sigma_x$. The nonfactorable state can be generally constructed as \begin{eqnarray} |\psi_{12}\rangle = \frac{1}{\sqrt{2}}(|++\rangle + |--\rangle) \; , \end{eqnarray} where $\sigma_z|\pm\rangle=\pm|\pm\rangle$ and we have chosen $A'=A$, $B'=B$. It is easy to verify \begin{eqnarray} A_1|\psi_{12}\rangle = A_2|\psi_{12}\rangle \; , \; B_1|\psi_{12}\rangle = B_2|\psi_{12}\rangle \; . \end{eqnarray} Then let particle 1 interact with particle 3, prepared in an arbitrary state $|\phi_3\rangle = \cos\theta_3|+\rangle + \sin\theta_3|-\rangle$. Suppose the interaction is a controlled NOT (CNOT) gate between particles 1 and 3, \begin{eqnarray} |\psi_{123}\rangle & = & U_{\mathrm{CNOT}}|\psi_{12}\rangle (\cos\theta_3|+\rangle + \sin\theta_3|-\rangle) \nonumber \\ & = & \frac{1}{\sqrt{2}}\left[|++\rangle (\cos\theta_3|+\rangle + \sin\theta_3|-\rangle) \right.\nonumber \\ & & \hspace{0.8cm}\left. +|--\rangle (\cos\theta_3|-\rangle + \sin\theta_3|+\rangle)\right] \; .\nonumber \end{eqnarray} Here particle 1 is the control qubit, and particle 3 is the target qubit. According to Theorem \ref{Theorem-MDR-Correlation}, we have \begin{eqnarray} & & E(A'_2,A_3) + E(B_1,B_2') = E(Z_2,Z_3) + E(X_1,X_2) \nonumber \\ & \leq & \frac{1}{2} (\langle Z_3^2\rangle + \langle Z_2^2\rangle + \langle X_1^2\rangle + \langle X_2^2\rangle - \gamma_{\mathrm{q}})= 2 - \frac{\gamma_{\mathrm{q}}}{2} \; , \label{supp-sum-corr-qu} \end{eqnarray} under the condition $Z^2=X^2=1$ and with the real parameter \begin{eqnarray} \gamma_{\mathrm{q}} = \mathrm{Max}\left[\sum_{i}|\ _2\langle p_i|\psi_{12}\rangle|^2 f_{\mathrm{q}}^{(i)}\right] \; .
\end{eqnarray} We may choose any complete orthonormal basis $\{|p_i\rangle\}$ to test the MDR. Generally, we need to optimize the choice of basis in order to obtain the maximum value of $\gamma_{\mathrm{q}}$. For a qubit system, by choosing $|p_{1}\rangle = (|+\rangle +i|-\rangle)/\sqrt{2}$ and $|p_{2}\rangle = (|+\rangle -i|-\rangle)/\sqrt{2}$, which are the eigenvectors of $\sigma_y$ with eigenvalues of $+1$ and $-1$, respectively, we have \begin{eqnarray} |\psi_1^{(1)} \rangle & = & \frac{\ _2\langle p_1|\psi_{12}\rangle}{|\ _2\langle p_1|\psi_{12}\rangle|} = \frac{1}{\sqrt{2}}(|+\rangle - i|-\rangle) \; , \label{state-1} \\ |\psi_1^{(2)} \rangle & = & \frac{\ _2\langle p_2|\psi_{12}\rangle}{|\ _2\langle p_2|\psi_{12}\rangle|} = \frac{1}{\sqrt{2}}(|+\rangle + i|-\rangle) \label{state-2} \; . \end{eqnarray} Because for the quantum states $|\psi_{1}^{(i)}\rangle$ [Eqs. (\ref{state-1}) and (\ref{state-2})] we have $\langle \sigma_{z,x} \rangle=0$, $\Delta \sigma_{z} = \Delta \sigma_{x} = \sqrt{|\langle \sigma_y\rangle|}$ (see Appendix {\bf A}), the functions $f_{\mathrm{q}}^{(i)}$ in Eq. (\ref{App-radius}) reach the maximum. That is, \begin{eqnarray} f_{\mathrm{q}}^{(1)} = \kappa_{\mathrm{q}} |\langle \psi_{1}^{(1)}| C |\psi_{1}^{(1)}\rangle| \; , \; f_{\mathrm{q}}^{(2)} = \kappa_{\mathrm{q}} |\langle \psi_{1}^{(2)}| C |\psi_{1}^{(2)}\rangle|\; , \end{eqnarray} where $\kappa_{\mathrm{He}} = 2$, $\kappa_{\mathrm{B2}} = (4-2\sqrt{2})$, $\kappa_{\mathrm{B1}} = 1$, $\kappa_{\mathrm{We}} = 0.59$, $\kappa_{\mathrm{Ha}} = 2/5$, $\kappa_{\mathrm{Oz}} = (2-\sqrt{2})^2$, and $C = [A,B]/2i = \sigma_y$. Therefore, \begin{eqnarray} \gamma_{\mathrm{q}} & = & \left[\sum_{i=1}^{2} \left|\ _2\langle p_i| \psi_{12}\rangle \right|^2 \kappa_{\mathrm{q}} \left|\langle \psi^{(i)}_1| C |\psi^{(i)}_1\rangle \right| \right] = \kappa_{\mathrm{q}} \; . \end{eqnarray} Substituting $\gamma_{\mathrm{q}}$ into Eq.
(\ref{supp-sum-corr-qu}), we have \begin{eqnarray} \text{He} & : & E(Z_2,Z_3) + E(X_1,X_2) \leq 1 \; , \nonumber \\ \text{B2} & : & E(Z_2,Z_3) + E(X_1,X_2) \leq \sqrt{2} \; , \nonumber \\ \text{B1} & : & E(Z_2,Z_3) + E(X_1,X_2) \leq \frac{3}{2}\; , \nonumber \\ \text{We} & : & E(Z_2,Z_3) + E(X_1,X_2) \leq 1.71\; , \label{corr-E} \\ \text{Ha} & : & E(Z_2,Z_3) + E(X_1,X_2) \leq \frac{9}{5}\; , \nonumber \\ \text{Oz} & : & E(Z_2,Z_3) + E(X_1,X_2) \leq 2\sqrt{2}-1\; , \nonumber \end{eqnarray} while the QM prediction is \begin{eqnarray} E(Z_2, Z_3) + E(X_1, X_2) & = & \langle \psi_{123}|I_1\otimes Z_2\otimes Z_3 |\psi_{123}\rangle + \langle \psi_{123}|X_1\otimes X_2\otimes I_3 |\psi_{123}\rangle \nonumber \\ & = & \cos(2\theta_3) + \sin(2\theta_3) \; . \label{App-corr-sum} \end{eqnarray} The constraints from MDR and QM on correlation functions are illustrated in Fig. \ref{Fig-MDRs-qubit-result}(a). \begin{figure}\centering \scalebox{0.3}{\includegraphics{different-bounds.eps}} \hspace{0.3cm} \scalebox{0.3}{\includegraphics{Monogamy.eps}} \caption{The suprema for correlation functions predicted by different MDRs. (a) The supremum imposed by different MDRs on the sum of the bipartite correlation functions $E(X_1,X_2)$ of particles 1 and 2 and $E(Z_2,Z_3)$ of particles 2 and 3. (b) Constraints on the sum of CHSH Bell operators for particles 1 and 2 and for particles 2 and 3 imposed by the MDRs. Here the abbreviations He, Oz, Ha, We, B1, and B2 indicate the MDRs of Eqs. (\ref{Heisenberg-MDR})-(\ref{Branciard-MDR-refine}), respectively. The QM prediction (black line) contradicts the constraint of the Heisenberg-type MDR.} \label{Fig-MDRs-qubit-result} \end{figure} The constraints on correlation functions tend to unveil a more intrinsic nature of the nonlocal system when we transform them into a Clauser-Horne-Shimony-Holt (CHSH) Bell inequality \cite{CHSH}.
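The suprema in Eq. (\ref{corr-E}) follow from $\gamma_{\mathrm{q}} = \kappa_{\mathrm{q}}$ by elementary arithmetic. As a sanity check (using the $\kappa_{\mathrm{q}}$ values quoted above), the sketch below evaluates each bound $2 - \kappa_{\mathrm{q}}/2$ and the maximum of the QM prediction $\cos(2\theta_3) + \sin(2\theta_3)$:

```python
import numpy as np

# kappa_q values quoted above for the He, B2, B1, We, Ha, and Oz MDRs
kappa = {
    "He": 2.0,
    "B2": 4 - 2 * np.sqrt(2),
    "B1": 1.0,
    "We": 0.59,
    "Ha": 2 / 5,
    "Oz": (2 - np.sqrt(2)) ** 2,
}

# Supremum on E(Z2,Z3) + E(X1,X2) implied by each MDR: 2 - kappa_q / 2
bounds = {name: 2 - k / 2 for name, k in kappa.items()}
for name, b in bounds.items():
    print(name, b)  # He: 1, B2: sqrt(2), B1: 3/2, We: 1.705, Ha: 9/5, Oz: 2*sqrt(2)-1

# QM prediction cos(2*theta_3) + sin(2*theta_3): maximum sqrt(2) at theta_3 = pi/8,
# which exceeds the Heisenberg-type bound of 1
theta = np.linspace(0, np.pi, 100001)
qm = np.cos(2 * theta) + np.sin(2 * theta)
print(qm.max())  # ~ 1.41421
```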
For the measurement precision of $Z$ and the disturbance on $X$ in the qubit system, Eq. (\ref{supp-sum-corr-qu}) gives \begin{eqnarray} E(Z_2,Z_3) + E(X_1,X_2) \leq 2 - \frac{\gamma_{\mathrm{q}}}{2} \; . \label{supp-CHSH-1} \end{eqnarray} Similarly, for the measurement precision of $X$ and the disturbance on $Z$, we have \begin{eqnarray} E(X_2,X_3) + E(Z_1,Z_2) \leq 2 - \frac{\gamma_{\mathrm{q}}}{2} \; . \label{supp-CHSH-2} \end{eqnarray} Combining Eq. (\ref{supp-CHSH-1}) with Eq. (\ref{supp-CHSH-2}), we have \begin{eqnarray} E(Z_2,Z_3) + E(X_1,X_2) + E(X_2,X_3) + E(Z_1,Z_2) \leq 4 - \gamma_{\mathrm{q}}\; . \label{supp-CHSH-sum} \end{eqnarray} Based on the method introduced in Ref. \cite{MDR-correlation}, here we introduce two additional directions, $\vec{c}=\frac{1}{\sqrt{2}}(1,0,1)$, $\vec{d}=\frac{1}{\sqrt{2}}(1,0,-1)$, in the real space of the $z$-$x$ plane, \begin{eqnarray} \vec{z} = \vec{a} = (0,0,1) = \frac{1}{\sqrt{2}}(\vec{c}-\vec{d}) \; , \; \vec{x} = \vec{b} = (1,0,0) = \frac{1}{\sqrt{2}}(\vec{c}+\vec{d}) \; . \end{eqnarray} Equation (\ref{supp-CHSH-sum}) can then be reexpressed as \begin{eqnarray} E(\hat{a}_2,\hat{c}_3) - E(\hat{a}_2,\hat{d}_3) + E(\hat{b}_1,\hat{c}_2) + E(\hat{b}_1,\hat{d}_2) & & \nonumber \\ + E(\hat{b}_2,\hat{c}_3) + E(\hat{b}_2,\hat{d}_3) + E(\hat{a}_1,\hat{c}_2) - E(\hat{a}_1,\hat{d}_2) & \leq & \sqrt{2}( 4 - \gamma_{\mathrm{q}}) \; , \label{supp-CHSH-before} \end{eqnarray} where $\hat{a}=\vec{\sigma}\cdot \vec{z} = Z$, $\hat{b}=\vec{\sigma}\cdot \vec{x} = X$, $\hat{c} = \vec{\sigma}\cdot \vec{c}$, $\hat{d} = \vec{\sigma}\cdot \vec{d}$. Through some rearrangement, Eq. (\ref{supp-CHSH-before}) now turns into a more transparent form \begin{eqnarray} E(\hat{a}_2,\hat{c}_3) - E(\hat{a}_2,\hat{d}_3) + E(\hat{b}_2,\hat{c}_3) + E(\hat{b}_2,\hat{d}_3) & & \nonumber \\ + E(\hat{b}_1,\hat{c}_2) + E(\hat{b}_1,\hat{d}_2) + E(\hat{a}_1,\hat{c}_2) - E(\hat{a}_1,\hat{d}_2) & \leq & \sqrt{2}( 4 - \gamma_{\mathrm{q}}) \; .
\end{eqnarray} This is just the constraint on two CHSH Bell operators for particles 1 and 2 and particles 2 and 3, \begin{eqnarray} B_{\mathrm{CHSH}}^{(23)} + B_{\mathrm{CHSH}}^{(12)} \leq 2\sqrt{2}(2-\frac{\gamma_{\mathrm{q}}}{2}) \; . \end{eqnarray} Therefore, for MDRs in the qubit system we have \begin{eqnarray} \text{He} & : & B_{\mathrm{CHSH}}^{(23)} + B_{\mathrm{CHSH}}^{(12)} \leq 2\sqrt{2} \; , \nonumber \\ \text{B2} & : & B_{\mathrm{CHSH}}^{(23)} + B_{\mathrm{CHSH}}^{(12)} \leq 4 \; , \nonumber \\ \text{B1} & : & B_{\mathrm{CHSH}}^{(23)} + B_{\mathrm{CHSH}}^{(12)} \leq 3\sqrt{2}\; , \nonumber \\ \text{We} & : & B_{\mathrm{CHSH}}^{(23)} + B_{\mathrm{CHSH}}^{(12)} \leq 3.42\sqrt{2}\; , \label{Bell-MDR} \\ \text{Ha} & : & B_{\mathrm{CHSH}}^{(23)} + B_{\mathrm{CHSH}}^{(12)} \leq \frac{18\sqrt{2}}{5}\; , \nonumber \\ \text{Oz} & : & B_{\mathrm{CHSH}}^{(23)} + B_{\mathrm{CHSH}}^{(12)} \leq 8-2\sqrt{2} \; , \nonumber \end{eqnarray} while the QM prediction is \cite{Bell-monogamy2} \begin{eqnarray} \langle B_{\mathrm{CHSH}}^{(23)}\rangle^2 + \langle B_{\mathrm{CHSH}}^{(12)}\rangle^2 \leq 8 \; . \label{CHSH-square} \end{eqnarray} The results from the MDR and QM prediction are shown in Fig. \ref{Fig-MDRs-qubit-result}(b). Here we see that the second Branciard MDR [Eq. (9)] gives the same supremum as Eq. (\ref{CHSH-square}) on the sum of two Bell operators. From Fig. \ref{Fig-MDRs-qubit-result}(a) we notice that, while the supremum from the Heisenberg-type MDR is 1 in the given configuration, the QM prediction is $\sqrt{2}$, the largest value for QM (see Appendix {\bf C}). We conclude that, for every MDR which can be expressed in the operator formalism, there will be a concrete constraint on the quantum correlations in the multipartite state. The MDR manifests itself as a principle determining the strength of the correlations, which may be shared with other particles through interaction. In the form of the CHSH inequality in Fig. 
\ref{Fig-MDRs-qubit-result}(b), the MDR also provides a physical origin of the monogamy of entanglement in multipartite entangled states. On the other hand, the exact form of the monogamy relation for entanglement may also be used in reverse to obtain the exact form of MDR. \subsection{Experimental verification of the MDR} \begin{figure}\centering \scalebox{0.5}{\includegraphics{Experiment-3-qubit.eps}} \caption{Experimental setup for the verification of MDRs. A pair of polarization entangled photons (photons 1 and 2) is generated by means of spontaneous parametric down-conversion (SPDC). A third photon passing through wave plates (WP), the meter system, interacts with photon 1 via a CNOT operation.} \label{Fig-MDRs-experiment} \end{figure} Beyond the fundamental physical implications for multipartite correlations, the MDR's unique constraint on the bipartite correlation function in the tripartite state makes the experimental test of MDR applicable in various physical systems, e.g., atoms, ions, and even higher energy particles, through measurement of correlation functions \cite{MDR-correlation}. One schematic optical experimental setup for a qubit system is shown in Fig. \ref{Fig-MDRs-experiment}. A pair of polarization-entangled photons, $|\psi_{12}\rangle = \frac{1}{\sqrt{2}}(|HH\rangle + |VV\rangle)$, is generated by spontaneous parametric down-conversion (SPDC). The meter system of the photon state may be tuned into the state $|\phi_3\rangle = \cos\theta_3|H\rangle + \sin\theta_3|V\rangle$ by wave plates (WP). Then it interacts with photon 1 via a CNOT operation, resulting in the tripartite state $|\psi_{123}\rangle = \frac{1}{\sqrt{2}} [|HH\rangle(\cos\theta_3|H\rangle + \sin\theta_3|V\rangle)+ |VV\rangle (\cos\theta_3|V\rangle+ \sin\theta_3|H\rangle)]$. We measure the correlation functions $E(Z_2,Z_3)$, $E(X_1,X_2)$ under $|\psi_{123}\rangle$, where $\{|H\rangle,|V\rangle\}$ and $\{|H\rangle\pm|V\rangle\}$ are the eigenbases of $Z$ and $X$, respectively.
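The correlation functions to be measured in this setup can also be simulated directly. The following sketch (assuming the same CNOT construction as in the three-qubit example, with $\theta_3 = 0.3$ chosen arbitrarily) reproduces the prediction $E(Z_2,Z_3)+E(X_1,X_2) = \cos(2\theta_3)+\sin(2\theta_3)$:

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])

def kron(*ops):
    # Tensor product of a list of operators, in the given particle order
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

up = np.array([1.0, 0.0])  # |+>, sigma_z eigenvector with eigenvalue +1
dn = np.array([0.0, 1.0])  # |->

# Entangled pair of particles 1 and 2: (|++> + |-->)/sqrt(2)
psi12 = (np.kron(up, up) + np.kron(dn, dn)) / np.sqrt(2)

theta = 0.3  # theta_3, an arbitrary test value
phi3 = np.cos(theta) * up + np.sin(theta) * dn

# CNOT with particle 1 as control and particle 3 as target (particle 2 untouched)
P0, P1 = np.outer(up, up), np.outer(dn, dn)
cnot13 = kron(P0, I2, I2) + kron(P1, I2, X)

psi123 = cnot13 @ np.kron(psi12, phi3)

E_Z2Z3 = psi123 @ kron(I2, Z, Z) @ psi123
E_X1X2 = psi123 @ kron(X, X, I2) @ psi123
print(E_Z2Z3 + E_X1X2, np.cos(2 * theta) + np.sin(2 * theta))  # the two agree
```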
Substituting the measured values into Eq. (\ref{corr-E}), the validity of the MDRs can be verified [Fig. \ref{Fig-MDRs-qubit-result}(a)]. \section{Conclusions} In this work, we have shown that the strength of correlation, which can be shared with a third particle through its interaction with one of the particles in the entangled bipartite system, is not determined by the interaction employed but by the fundamental measurement principle of QM. In this sense, the multipartite nonlocality may be considered to be the physical consequence of the uncertainty principle when quantum measurement is involved, and hence, the essential elements of quantum theory, i.e., quantum entanglement, quantum nonlocality, and the uncertainty principle, are closely connected in our scheme. This is heuristic and implies that these essential elements should and could be investigated jointly. For instance, the limit on measurement in quantum metrology \cite{Quantum-metrology-N} may be modified by taking into account the multipartite entanglement and MDR; the intricate correlation structures in multipartite entanglement \cite{LU-2012}, on the other hand, indicate that more investigations of the unexplored features of quantum measurement are necessary. In order to ascertain the exact form of MDR through measuring the correlation function, an optical experimental scheme has been proposed. Finally, it should be mentioned that although the analysis of measurement precision and disturbance in this work is based on the definitions of (\ref{def-precision}) and (\ref{def-disturbance}), it is also applicable to other operator-type definitions. \vspace{0.1cm} \noindent {\Large \bf Acknowledgments} \noindent This work was supported in part by the National Natural Science Foundation of China (NSFC) under Grants No. 11121092, No. 11175249, No. 11375200, and No. 11205239. \newpage
Q: Tidymodels error with fit. Error: `x` and `y` must have same types and lengths

I have the following code:

    library(tidymodels)
    library(tidyverse)

    rps <- tribble(
      ~estado, ~comp_move, ~move,
      "gana", "piedra", "papel",
      "pierde", "papel", "piedra",
      "pierde", "papel", "piedra",
      "gana", "tijeras", "piedra",
      "gana", "piedra", "papel",
      "pierde", "tijeras", "papel",
      "gana", "papel", "tijeras",
      "gana", "piedra", "papel",
      "pierde", "papel", "piedra",
      "gana", "tijeras", "piedra",
      "empate", "piedra", "piedra",
      "pierde", "papel", "piedra",
      "empate", "papel", "papel",
      "gana", "tijeras", "piedra")

    rps <- rps %>% mutate_if(is.character, factor)

    rps.split <- initial_split(rps, prop = 0.75)

    rps.rec <- recipe(estado ~ comp_move + move, rps) %>%
      step_dummy(all_nominal(), -all_outcomes())

    rps.rec.prep <- rps.rec %>% prep()

    base.model <- rand_forest() %>%
      set_engine("ranger") %>%
      set_mode("classification")

    last_fit(base.model, rps.rec, rps.split)

I have also tried to use workflows:

    wfl <- workflow() %>%
      add_recipe(rps.rec) %>%
      add_model(base.model)

    last_fit(wfl, split = rps.split)

When using last_fit() I get the following error:

    Error: `x` and `y` must have same types and lengths

I have used kknn, decision_tree, random_forest and xgboost, all getting the same errors. I am also getting the same error even when tuning parameters with tune_grid(). The thing is that when I use the fit() function everything works fine. I know it is because I am using the model wrong but, why am I getting that error? I am new to the tidymodels package. Thanks in advance.

    > sessionInfo()
    R version 3.6.0 (2019-04-26)
    Platform: x86_64-apple-darwin15.6.0 (64-bit)
    Running under: macOS 10.15.5
\section{Introduction} \label{sec:introduction} An important practical challenge that arises with neural networks is the fact that the units within their hidden layers are not usually semantically understandable. This is particularly true with computer vision applications, where an expanding body of research has focused centrally on explaining the calculations of neural networks and other black box models. Some of the core questions considered in these posthoc analyses of neural networks include: ``What concept does a unit in a hidden layer of a trained neural network represent?'' or ``Does this unit in the network represent a concept that a human might understand?'' The questions listed above are important, but it is not clear that they would naturally have satisfactory answers when performing posthoc analysis on a pretrained neural network. In fact, there are several reasons why various types of posthoc analyses would not answer these questions. Efforts to interpret individual nodes of pretrained neural networks (e.g. \cite{zhou2018interpreting, zhou2014object}) have shown that some fraction of nodes can be identified to be aligned with some high-level semantic meaning, but these special nodes do not provably contain the network's full information about the concepts. That is, the nodes are not ``pure,'' and information about the concept could be scattered throughout the network. Concept-vector methods \cite{kim2018interpretability,zhou2018interpretable,ghorbani2019towards} have also been used to analyze pretrained neural networks. Here, vectors in the latent space are chosen to align with pre-defined or automatically-discovered concepts. While concept-vectors are more promising, they still make the assumption that the latent space of a neural network admits a posthoc analysis of a specific form. In particular, they assume that the latent space places members of each concept in one easy-to-classify portion of latent space.
Since the latent space was not explicitly constructed to have this property, there is no reason to believe it holds. Ideally, we would want a neural network whose latent space \textit{tells us} how it is disentangling concepts, without needing to resort to extra classifiers like concept-vector methods \cite{kim2018interpretability,ghorbani2019towards}, without surveys of humans \cite{zhou2014object}, and without other manipulations that rely on whether the geometry of a latent space serendipitously admits analysis of concepts. Rather than having to rely on assumptions that the latent space admits disentanglement, we would prefer to constrain the latent space directly. We might even wish that the concepts align themselves along the axes of the latent space, so that each point in the latent space has an interpretation in terms of known concepts. Let us discuss how one would go about imposing such constraints on the latent space. In particular, we introduce the possibility of what we call \textit{concept whitening}. Concept whitening (CW) is a module inserted into a neural network. It constrains the latent space to represent target concepts and also provides a straightforward means to extract them. It does not force the concepts to be learned as an intermediate step; rather, \textit{it constrains the latent space to be aligned along the concepts}. For instance, let us say that, using CW on a lower layer of the network, the concept ``airplane'' is represented along one axis. By examining the images along this axis, we can find the lower-level characteristic that the network is using to best approximate the complex concept ``airplane,'' which might be white or silver objects with blue backgrounds. In the lower layers of a standard neural network, we cannot necessarily find these characteristics, because the relevant information of ``airplane'' might be spread throughout latent space rather than along an ``airplane'' axis.
By looking at images along the airplane axis at each layer, we see how the network gradually represents airplanes with an increasing level of sophistication and complexity. Concept whitening can replace a plain batch normalization step in a CNN backbone, because it combines batch whitening with an extra step involving a rotation matrix. Batch whitening usually provides helpful properties to latent spaces, but our goal requires the whitening to take place with respect to concepts; the use of the rotation matrix to align the concepts with the axes is the key to interpretability through disentangled concepts. Whitening decorrelates and normalizes each axis (i.e., transforms the post-convolution latent space so that the covariance matrix between channels is the identity). Exploiting the property that a whitening transformation remains valid after applying an arbitrary rotation, the rotation matrix strategically matches the concepts to the axes. The concepts used in CW do not need to be the labels in the classification problem; they can be learned from an auxiliary dataset in which concepts are labeled. The concepts do not need to be labeled in the dataset involved in the main classification task (though they could be), and the main classification labels do not need to be available in the auxiliary concept dataset. Through qualitative and quantitative experiments, we illustrate how concept whitening applied to the various layers of the neural network illuminates its internal calculations. We verify the interpretability and purity of the concepts in the disentangled latent space. Importantly for practice, we show that by replacing the batch normalization layer in pretrained state-of-the-art models with a CW module, the resulting neural network can achieve accuracy on par with the corresponding original black box neural network on large datasets, and it can do this within one additional epoch of further training.
Thus, with fairly minimal effort, one can make a small modification to a neural network architecture (adding a CW module), and in return be able to easily visualize how the network is learning all of the different concepts at any chosen layer. CW can show us how a concept is represented at a given layer of the network. What we find is that at lower layers, since a complex concept cannot be represented by the network, it often creates lower-level \textit{abstract concepts}. For example, an airplane at an early layer is represented by an abstract concept defined by white or gray objects on a blue background. A bed is represented by an abstract concept that seems to be characterized by warm colors (orange, yellow). In that sense, the CW layer can help us to discover new concepts that can be formally defined and built on, if desired. \section{Related work} \label{sec:related_work} There are several large and rapidly expanding bodies of relevant literature. \textbf{Interpretability and explainability of neural networks:}\\ There have been two schools of thought on improving the interpretability of neural networks: (1) learning an inherently \textit{interpretable} model \cite{Rudin2019}; (2) providing post-hoc \textit{explanations} for an existing neural network. CW falls within the first type, though it only illuminates what the network is doing, rather than providing a full understanding of the network's computations. To provide a full explanation of each computation would lead to more constraints and thus a loss in flexibility, whereas CW allows more flexibility in exchange for more general types of explanations. The vast majority of current works on neural networks are of the second type, explainability. A problem with the terminology is that ``explanation'' methods are often summary statistics of performance (e.g., local approximations, general trends on node activation) rather than actual explanations of the model's calculations.
For instance, if a node is found to activate when a certain concept is present in an image, it does not mean that all information (or even the majority of information) about this concept is involved with that particular node. Saliency-based methods are the most common form of post-hoc explanations for neural networks \cite{zeiler2014visualizing, simonyan2013deep,smilkov2017smoothgrad,selvaraju2017grad}. These methods assign importance weights to each pixel of the input image to show the importance of each pixel to the image's predicted class. Saliency maps are problematic for well-known reasons: they often provide highlighting of edges in images, regardless of the class. Thus, very similar explanations are given for multiple classes, and often none of them are useful explanations \cite{Rudin2019}. Saliency methods can be unreliable and fragile \cite{adebayo2018sanity}. Other work provides explanations of how the network's latent features operate. Some measure the alignment of an individual internal unit, or a filter of a trained neural network, to a predefined concept and find some units have relatively strong alignment to that concept \cite{zhou2018interpreting,zhou2014object}. While some units (i.e., filters) may align nicely with pre-defined concepts, the concept can be represented diffusely through many units (the concept representation by individual nodes is impure); this is because the network was not trained to have concepts expressed purely through individual nodes. To address this weakness, several concept-based post-hoc explanation approaches have recently been proposed that do not rely on the concept aligning with individual units \cite{kim2018interpretability,zhou2018interpretable,ghorbani2019towards,yeh2019concept}. 
Instead of analyzing individual units, these methods try to learn a linear combination of them to represent a predefined concept \cite{kim2018interpretability} or to automatically discover concepts by clustering patches and defining the clusters as new concepts \cite{ghorbani2019towards}. Although these methods are promising, they are based on assumptions of the latent space that may not hold. For instance, these methods assume that a classifier (usually a linear classifier) exists on the latent space such that the concept is correctly classified. Since the network was not trained so that this assumption holds, it may not hold. More importantly, since the latent space is not shaped explicitly to handle this kind of concept-based explanation, unit vectors (directions) in the latent space may not represent concepts purely. We will give an example in the next section to show why latent spaces built without constraints may not achieve concept separation. CW avoids these problems because it shapes the latent space through training. In that sense, CW is closer to work on inherently interpretable neural networks, though its use-case is in the spirit of concept vectors, in that it is useful for providing important directions in the latent space. There are emerging works trying to build inherently interpretable neural networks. Like CW, they alter the network structure to encourage different forms of interpretability. For example, neural networks have been designed to perform case-based reasoning \cite{chen2019looks, li2018deep}, to incorporate logical or grammatical structures \cite{li2017aognets, granmo2019convolutional,wu2019towards}, to do classification based on hard attention \cite{mnih2014recurrent, ba2014multiple,sermanet2014attention,elsayed2019saccader}, or to do image recognition by decomposing the components of images \cite{saralajew2019classification}. 
These models all have different forms of interpretability than we consider (understanding how the latent spaces of each layer can align with a known set of concepts). Other work also develops inherently interpretable deep learning methods that can reason based on concepts, but are different from our work in terms of field of application \cite{bouchacourt2019educe}, types of concepts \cite{zhang2018interpretable,zhang2018unsupervised} and ways to obtain concepts \cite{adel2018discovering}. In the field of deep generative models, many works have been proposed to make the latent space more interpretable by forcing disentanglement. However, works such as InfoGAN \cite{chen2016infogan} and $\beta$-VAE \cite{higgins2017beta}, all use heuristic interpretability losses like mutual information, while in CW we have actual concepts that we use to align the latent space. \textbf{Whitening and orthogonality:} Whitening is a linear transformation that transforms the covariance matrix of random input vectors to be the identity matrix. It is a classical preprocessing step in data science. In the realm of deep learning, batch normalization \cite{ioffe2015batch}, which is widely used in many state-of-the-art neural network architectures, retains the standardization part of whitening but not the decorrelation. Earlier attempts whiten by periodically estimating the whitening matrix \cite{desjardins2015natural,luo2017learning}, which leads to instability in training. Other methods perform whitening by adding a decorrelation loss \cite{cogswell2015reducing}. A whitening module for ZCA has been developed that leverages the fact that SVD is differentiable, and supports backpropagation \cite{huang2018decorrelated, huang2019iterative}. Similarly, others have developed a differentiable whitening block based on Cholesky whitening \cite{siarohin2018whitening}. 
The whitening part of our CW module borrows techniques from IterNorm \cite{huang2019iterative} because it is differentiable and accelerated. CW is different from previous methods because its whitening matrix \textit{is multiplied by an orthogonal matrix} and \textit{maximizes the activation of known concepts along the latent space axes}. In the field of deep learning, many earlier works that incorporate orthogonality constraints are targeted at RNNs \cite{vorontsov2017orthogonality,mhammedi2017efficient,wisdom2016full}, since orthogonality could help avoid vanishing or exploding gradients in RNNs. Other work explores ways to learn orthogonal weights or representations for all types of neural networks (not just RNNs) \cite{harandi2016generalized, huang2018orthogonal,lezcano2019cheap,lezama2018ole}. For example, some work \cite{lezama2018ole} uses special loss functions to force orthogonality. The optimization algorithms used in the above methods are all different from ours. For CW, we optimize the orthogonal matrix by Cayley-transform-based curvilinear search algorithms \cite{wen2013feasible}. While some deep learning methods also use a Cayley transform \cite{vorontsov2017orthogonality}, they do it with a fixed learning rate that does not work effectively in our setting. More importantly, the goal of doing optimization with orthogonality constraints in all these works is completely different from ours. None of them try to align columns of the orthogonal matrix with any type of concept. \section{Methodology} \label{sec:methodology} Suppose $\mathbf{x}_1,\mathbf{x}_2,...,\mathbf{x}_n\in \mathcal{X}$ are samples in our dataset and $y_1,y_2,...,y_n\in\mathcal{Y}$ are their labels.
From the latent space $\mathcal{Z}$ defined by a hidden layer, a DNN classifier $f:\mathcal{X}\rightarrow\mathcal{Y}$ can be divided into two parts, a feature extractor $\mathbf{\Phi}:\mathcal{X}\rightarrow\mathcal{Z}$, with parameters $\theta$, and a classifier $g:\mathcal{Z}\rightarrow\mathcal{Y}$, parameterized by $\omega$. Then $\mathbf{z}=\mathbf{\Phi}(\mathbf{x};\theta)$ is the latent representation of the input $\mathbf{x}$ and $f(\mathbf{x})=g(\mathbf{\Phi}(\mathbf{x};\theta);\omega)$ is the predicted label. Suppose we are interested in $k$ concepts called $c_1,c_2,...c_k$. We can then pre-define $k$ auxiliary datasets $\mathbf{X}_{c_1},\mathbf{X}_{c_2}...,\mathbf{X}_{c_k}$ such that samples in $\mathbf{X}_{c_j}$ are the most representative samples of concept $c_j$. Our goal is to learn $\mathbf{\Phi}$ and $g$ simultaneously, such that (a) the classifier $g(\mathbf{\Phi}(\cdot;\theta);\omega)$ can predict the label accurately; (b) the $j^{th}$ dimension $z_j$ of the latent representation $\mathbf{z}$ aligns with concept $c_j$. In other words, samples in $\mathbf{X}_{c_j}$ should have larger values of $z_j$ than other samples. Conversely, samples not in $\mathbf{X}_{c_j}$ should have smaller values of $z_j$. \subsection{Standard Neural Networks May Not Achieve Concept Separation} \label{sec:whynotnn} Some posthoc explanation methods have looked at unit vectors in the direction of data where a concept is exhibited; this is done to measure how different concepts contribute to a classification task \cite{zhou2018interpretable}. Other methods consider directional derivatives towards data exhibiting the concept \cite{kim2018interpretability}, for the same reason. There are important reasons why these types of approaches may not work. \begin{figure}[htbp] \centering \includegraphics[width=80mm]{Figure_1.pdf} \caption{Possible data distributions in the latent space. 
\textbf{a}, the data are not mean centered; \textbf{b}, the data are standardized but not decorrelated; \textbf{c}, the data are whitened. In both \textbf{a} and \textbf{b}, unit vectors are not valid for representing concepts. \label{fig:demo}} \end{figure} First, suppose the latent space is not mean-centered. This alone could cause problems for posthoc methods that compute directions towards concepts. Consider, for instance, a case where all points in the latent space are far from the origin. In that case, \textit{all} concept directions point towards the same part of the space: the part where the data lies (see Figure \ref{fig:demo}(a)). This situation might be fairly easy to solve, since users can just analyze the latent space of a batch normalization layer or add a bias term. But then other problems could arise. Even if the latent space is mean-centered and standardized, the latent space of standard neural networks may not separate concepts. Consider, for instance, an elongated latent space similar to that illustrated in Figure \ref{fig:demo}(b), by the green and orange clusters. Here, two unit vectors pointing to different groups of data (perhaps exhibiting two separate concepts) may have a large inner product, suggesting that they are part of the same concept, when in fact, they may not be similar at all, and may not even lie in the same part of the latent space. Thus, even if the latent space is standardized, multiple unrelated concepts can still appear similar because, from the origin, their centers point towards the same general direction. For the same reason, taking derivatives towards the parts of the space where various concepts tend to appear may yield similar derivatives for very different concepts. For the above reasons, a latent space in which unit vectors can effectively represent different concepts should have small inter-concept similarity (as illustrated in Figure \ref{fig:demo}(c)).
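This geometric argument can be reproduced numerically. The following toy sketch (our own illustration with synthetic data; all variable names are ours) constructs an elongated, off-origin latent space in which two concepts that are actually orthogonal in whitened coordinates look nearly identical as raw directions from the origin:

```python
import numpy as np

rng = np.random.default_rng(0)

# Elongated, off-origin latent space: x = A u + m, with u ~ N(0, I).
A = np.array([[5.0, 0.0], [0.0, 0.5]])   # stretches axis 0, squeezes axis 1
m = np.array([20.0, 10.0])
background = rng.normal(size=(2000, 2)) @ A.T + m

# Two concepts that are orthogonal in the underlying coordinates, but whose
# mean directions from the origin are almost parallel in the raw space.
concept_a = (rng.normal(size=(500, 2)) + np.array([1.0, 0.0])) @ A.T + m
concept_b = (rng.normal(size=(500, 2)) + np.array([0.0, 1.0])) @ A.T + m

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

sim_raw = cosine(concept_a.mean(0), concept_b.mean(0))   # close to 1

# After mean-centering and ZCA-whitening w.r.t. the background distribution,
# the two concept directions become nearly orthogonal.
mu = background.mean(0)
eigval, eigvec = np.linalg.eigh(np.cov((background - mu).T))
W = eigvec @ np.diag(eigval ** -0.5) @ eigvec.T
sim_white = cosine(((concept_a - mu) @ W.T).mean(0),
                   ((concept_b - mu) @ W.T).mean(0))     # close to 0
print(f"raw: {sim_raw:.3f}, whitened: {sim_white:.3f}")
```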
That is, samples of different concepts should be near orthogonal in the latent space. In addition, for better concept separation, the ratio between inter-concept similarity and intra-concept similarity should be as small as possible. The CW module we introduce in this work can make the latent space \textit{mean-centered and decorrelated}. This module can align predefined concepts in orthogonal directions. More details of the proposed module will be discussed in Section \ref{sec:CW} and Section \ref{sec:detail}. The experimental results in Section \ref{sec:inter_inner} compare the inter-concept and intra-concept similarity of the latent space of standard NNs with and without the proposed CW module. The results validate that the previously mentioned problems of standard neural networks do exist, and that the proposed method successfully avoids these problems. \subsection{Concept Whitening Module} \label{sec:CW} Let $\mathbf{Z}_{d \times n}$ be the latent representation matrix of $n$ samples, in which each column $\mathbf{z}_i\in \mathbb{R}^d$ contains the latent features of the $i^{th}$ sample. Our Concept Whitening module (CW) consists of two parts, whitening and orthogonal transformation. The whitening transformation $\psi$ decorrelates and standardizes the data by \begin{equation} \psi(\mathbf{Z}) = \mathbf{W}(\mathbf{Z}-\mathbf{\mu}{\mathbf{1}_{n\times 1}}^T) \end{equation} where $\mathbf{\mu}=\frac{1}{n}\sum_{i=1}^{n}\mathbf{z}_i$ is the sample mean and $\mathbf{W}_{d \times d}$ is the whitening matrix that obeys $\mathbf{W}^T\mathbf{W}=\mathbf{\Sigma}^{-1}$. Here, $\mathbf{\Sigma}_{d \times d}=\frac{1}{n}(\mathbf{Z}-\mathbf{\mu}\mathbf{1}^T)(\mathbf{Z}-\mathbf{\mu}\mathbf{1}^T)^T$ is the covariance matrix. The whitening matrix $\mathbf{W}$ is not unique and can be calculated in many ways such as ZCA whitening and Cholesky decomposition. 
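As a minimal illustration of the whitening transformation defined above (an illustrative sketch on synthetic data, not our training code; in practice the whitening matrix is computed by an iterative approximation, as discussed later), one can construct the ZCA whitening matrix from the eigendecomposition of $\mathbf{\Sigma}$ and check that $\mathbf{W}^T\mathbf{W}=\mathbf{\Sigma}^{-1}$ and that $\psi(\mathbf{Z})$ is mean-centered with identity covariance:

```python
import numpy as np

rng = np.random.default_rng(1)

d, n = 4, 10_000
# Correlated, non-centered latent representations Z (one sample per column).
mix = rng.normal(size=(d, d))
Z = mix @ rng.normal(size=(d, n)) + rng.normal(size=(d, 1))

mu = Z.mean(axis=1, keepdims=True)
Zc = Z - mu                               # Z - mu 1^T
Sigma = (Zc @ Zc.T) / n                   # sample covariance

# ZCA whitening: W = D Lambda^{-1/2} D^T from Sigma = D Lambda D^T.
lam, D = np.linalg.eigh(Sigma)
W = D @ np.diag(lam ** -0.5) @ D.T

psi_Z = W @ Zc                            # psi(Z) = W (Z - mu 1^T)
cov_white = (psi_Z @ psi_Z.T) / n         # identity, up to floating point
print(np.abs(cov_white - np.eye(d)).max())
```

The same check passes for any other valid whitening matrix, e.g. one obtained from a Cholesky factorization, since the defining property is only $\mathbf{W}^T\mathbf{W}=\mathbf{\Sigma}^{-1}$.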
Another important property of the whitening matrix is that it is rotation free; suppose $\mathbf{Q}$ is an orthogonal matrix, then \begin{equation} \mathbf{W'}=\mathbf{Q^TW} \end{equation} is also a valid whitening matrix. In our module, after whitening the latent space to endow it with the properties discussed above, we still need to rotate the samples in their latent space such that the data from concept $c_j$, namely $\mathbf{X}_{c_j}$, are highly activated on the $j^{th}$ axis. Specifically, we need to find an orthogonal matrix $\mathbf{Q}_{d \times d}$ whose column $\mathbf{q}_j$ is the $j^{th}$ axis, by optimizing the following objective: \begin{equation} \begin{aligned} \max_{\mathbf{q}_1,\mathbf{q}_2,...,\mathbf{q}_k} &\sum_{j=1}^{k}\frac{1}{n_j}\mathbf{q}_j^T\psi(\mathbf{Z}_{c_j})\mathbf{1}_{n_j\times 1} \\ s.t.\ &\mathbf{Q}^T\mathbf{Q}=\mathbf{I}_d \end{aligned} \end{equation} where $\mathbf{Z}_{c_j}$ is a $d \times n_j$ matrix denoting the latent representation of $\mathbf{X}_{c_j}$ and $c_1,c_2, ...,c_k$ are concepts of interest. An optimization problem with an orthogonality constraint like this can be solved by gradient-based approaches on the Stiefel manifold (e.g., the method of \cite{wen2013feasible}). This whole procedure constitutes CW, and can be done for any given layer of a neural network as part of the training of the network. The forward pass of the CW module, which makes predictions, is summarized in Algorithm \ref{alg:forward}. 
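The following toy example (synthetic data, for illustration only) evaluates the alignment objective above on already-whitened samples of two concepts shifted along the first two axes. The identity rotation, which assigns axis $j$ to concept $c_j$, scores much higher than a misaligned (permuted, but still orthogonal) choice of $\mathbf{Q}$:

```python
import numpy as np

rng = np.random.default_rng(2)
d = 3

# Whitened activations for two concepts; concept j's samples are shifted
# along axis j, as CW tries to arrange. Columns are samples.
Zc1 = rng.normal(size=(d, 100)) + np.array([[2.0], [0.0], [0.0]])
Zc2 = rng.normal(size=(d, 100)) + np.array([[0.0], [2.0], [0.0]])

def alignment_objective(Q, concept_batches):
    """sum_j (1/n_j) q_j^T psi(Z_{c_j}) 1 -- the quantity maximized over Q."""
    return sum(Q[:, j] @ Zc.mean(axis=1)
               for j, Zc in enumerate(concept_batches))

Q_aligned = np.eye(d)                    # q_j = e_j matches concept j
Q_misaligned = np.eye(d)[:, [2, 0, 1]]   # permuted columns, still orthogonal

score_aligned = alignment_objective(Q_aligned, [Zc1, Zc2])
score_misaligned = alignment_objective(Q_misaligned, [Zc1, Zc2])
print(score_aligned, score_misaligned)

# Rotation-freedom of whitening: Q^T W is still a whitening matrix, since
# (Q^T W)^T (Q^T W) = W^T Q Q^T W = W^T W = Sigma^{-1}.
```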
\begin{algorithm*}[ht] \caption{Forward Pass of CW Module} \begin{tabbing} xxx \= xx \= xx \= xx \= xx \= xx \kill 1: \> \textbf{Input}: mini-batch input $\mathbf{Z} \in \mathbb{R}^{d\times m}$ \\ 2: \> \textbf{Optimization Variables:} orthogonal matrix $\mathbf{Q}\in \mathbb{R}^{d\times d}$ (learned in Algorithm 2)\\ 3: \> \textbf{Output:} whitened representation $\mathbf{\hat Z} \in \mathbb{R}^{d\times m}$ \\ 4: \> calculate batch mean: $\mu = \frac{1}{m} \mathbf{Z}\cdot\mathbf{1}$, and center the activation: $\mathbf{Z_C} = \mathbf{Z} - \mu\cdot\mathbf{1}^T$ \\ 5: \> calculate ZCA-whitening matrix $\mathbf{W}$, for details see Algorithm 1 of \cite{huang2019iterative} \\ 6: \> calculate the whitened representation: $\mathbf{\hat Z} = \mathbf{Q}^T\mathbf{W}\mathbf{Z_C}$. \end{tabbing} \label{alg:forward} \end{algorithm*} \begin{algorithm*}[ht] \caption{Alternating Optimization Algorithm for Training} \begin{tabbing} xxx \= xx \= xx \= xx \= xx \= xx \kill 1: \> \textbf{Input}: main objective dataset $\mathcal{D} = \{\mathbf{x}_i,y_i\}_{i=1}^{n} $, concept datasets $\mathbf{X}_{c_1},\mathbf{X}_{c_2}...,\mathbf{X}_{c_k}$ \\ 2: \> \textbf{Optimization Variables}: $\theta$, $\omega$, $\mathbf{W}$, $\mu$, $\mathbf{Q}$, whose definitions are in Section \ref{sec:CW} \\ 3: \> \textbf{Parameters}: $\beta$, $\eta$\\ 4: \> \textbf{for} $t$ = 1 to $T$ \textbf{do} \\ 5: \> \> randomly sample a mini-batch $\{\mathbf{x}_i,y_i\}_{i=1}^{m}$ from $\mathcal{D}$ \\ 6: \> \> do one step of SGD w.r.t. 
$\theta$ and $\omega$ on the loss $\frac{1}{m}\sum_{i=1}^{m}\ell(g(\mathbf{Q}^T\psi(\mathbf{\Phi}(\mathbf{x}_i;\theta);\mathbf{W},\mathbf{\mu});\omega),y_i)$ \\ 7: \> \> update $\mathbf{W}$ and $\mu$ by exponential moving average \\ 8: \> \> \textbf{if} \ $t\hspace*{-3pt}\mod 20 = 0$ \textbf{then} \\ 9: \> \> \> sample mini-batches $\{\mathbf{x}_i^{(c_1)}\}_{i=1}^{m},\{\mathbf{x}_i^{(c_2)}\}_{i=1}^{m},...,\{\mathbf{x}_i^{(c_k)}\}_{i=1}^{m}$ from $\mathbf{X}_{c_1},\mathbf{X}_{c_2},...,\mathbf{X}_{c_k}$ \\ 10: \> \> \> calculate $\mathbf{G}=\nabla_{\mathbf{Q}}$, with columns $\mathbf{g}_j=-\frac{1}{m}\sum_{i=1}^{m}\psi(\mathbf{\Phi}(\mathbf{x}_{i}^{(c_j)};\theta);\mathbf{W},\mathbf{\mu})$ when $1\leq j \leq k$, else $\mathbf{g}_j=\mathbf{0}$ \\ 11: \> \> \> calculate the exponential moving average of $\mathbf{G}$: $\mathbf{G'} = \beta \mathbf{G'} + (1-\beta) \mathbf{G}$ \\ 12: \> \> \> obtain learning rate $\eta$ by curvilinear search, for details see Algorithm 1 of \cite{wen2013feasible} \\ 13: \> \> \> update $\mathbf{Q}$ by Cayley transform: $\mathbf{Q}\leftarrow(I+\frac{\eta}{2}(\mathbf{G'}\mathbf{Q}^T-\mathbf{Q}\mathbf{G'}^T))^{-1}(I-\frac{\eta}{2}(\mathbf{G'}\mathbf{Q}^T-\mathbf{Q}\mathbf{G'}^T))\mathbf{Q}$ \end{tabbing} \label{alg:twostep} \end{algorithm*} \subsection{Optimization and Implementation Detail} \label{sec:detail} Whitening has not (to our knowledge) been previously applied to align the latent space to concepts. In the past, whitening has been used to speed up back-propagation. The specific whitening problem for speeding up back-propagation is different from that for concept alignment--the rotation matrix is not present in other work on whitening, nor is the notion of a concept--however, we can leverage some of the optimization tools used in that work on whitening \cite{huang2019iterative,huang2018orthogonal,siarohin2018whitening}. 
Specifically, we adapt ideas underlying the IterNorm algorithm \cite{huang2019iterative}, which employs Newton's iterations to approximate ZCA whitening, to the problem studied here. Let us now describe how this is done. The whitening matrix in ZCA is \begin{equation} \mathbf{W} = \mathbf{D}\mathbf{\Lambda}^{-\frac{1}{2}}\mathbf{D}^T \end{equation} where $\mathbf{\Lambda}_{d \times d}$ and $\mathbf{D}_{d \times d}$ are the eigenvalue diagonal matrix and eigenvector matrix given by the eigenvalue decomposition of the covariance matrix, $\mathbf{\Sigma}=\mathbf{D}\mathbf{\Lambda}\mathbf{D}^T$. Like other normalization methods, we calculate a $\mathbf{\mu}$ and $\mathbf{W}$ for each mini-batch of data, and average them together to form the model used in testing. As mentioned in Section \ref{sec:CW}, the challenging part for CW is that we also need to learn an orthogonal matrix by solving an optimization problem. To do this, we will optimize the objective while strictly maintaining the matrix to be orthogonal by performing gradient descent with a curvilinear search on the Stiefel manifold \cite{wen2013feasible} and adjust it to deal with mini-batch data. \textbf{The two step alternating optimization}: During training, our procedure must handle two types of data: data for calculating the main objective and the data representing the predefined concepts. The model is optimized by alternating optimization: the mini-batches of the main dataset and the auxiliary concept dataset are fed to the network, and the following two objectives are optimized in turns. The first objective is the main objective (usually related to classification accuracy): \begin{equation} \min_{\theta, \omega, W, \mu} \frac{1}{n}\sum_{i=1}^{n}\ell(g(\mathbf{Q}^T\psi(\mathbf{\Phi}(\mathbf{x}_i;\theta);\mathbf{W},\mathbf{\mu});\omega),y_i) \end{equation} where $\mathbf{\Phi}$ and $g$ are layers before and after the CW module parameterized by $\theta$ and $\omega$ respectively. 
$\psi$ is a whitening transformation parameterized by sample mean $\mathbf{\mu}$ and whitening matrix $\mathbf{W}$. $\mathbf{Q}$ is the orthogonal matrix. The combination $\mathbf{Q}^T\psi$ forms the CW module (which is also a valid whitening transformation). $\ell$ is any differentiable loss. We use cross-entropy loss for $\ell$ in our implementation to do classification, since it is the most commonly used. The second objective is the concept alignment loss: \begin{equation} \begin{aligned} \max_{\mathbf{q}_1,\mathbf{q}_2,...,\mathbf{q}_k} &\sum_{j=1}^{k}\frac{1}{n_j}\sum_{x_i^{(c_j)}\in X_{c_j}}\mathbf{q}_j^T\psi(\mathbf{\Phi}(\mathbf{x}_{i}^{(c_j)};\theta);\mathbf{W},\mathbf{\mu}) \\ s.t.\ &\mathbf{Q}^T\mathbf{Q}=\mathbf{I}_d. \end{aligned} \end{equation} The orthogonal matrix $\mathbf{Q}$ is fixed when training for the main objective and the other parameters are fixed when training for $\mathbf{Q}$. The optimization problem is a linear programming problem with quadratic constraints (LPQC) which is generally NP-hard. Since directly solving for the optimal solution is intractable, we optimize it by gradient methods on the Stiefel manifold. At each step $t$, in which the second objective is handled, the orthogonal matrix $\mathbf{Q}$ is updated by Cayley transform $$\mathbf{Q}^{(t+1)}=\left(I+\frac{\eta}{2}\mathbf{A}\right)^{-1}\left(I-\frac{\eta}{2}\mathbf{A}\right)\mathbf{Q}^{(t)}$$ where $\mathbf{A}=\mathbf{G}(\mathbf{Q}^{(t)})^T-\mathbf{Q}^{(t)}\mathbf{G}^T$ is a skew-symmetric matrix, $\mathbf{G}$ is the gradient of the loss function and $\eta$ is the learning rate. The optimization procedure is accelerated by curvilinear search on the learning rate at each step \cite{wen2013feasible}. Note that, in the Cayley transform, the stationary points are reached when $\mathbf{A}=\mathbf{0}$, which has multiple solutions. Since the solutions are in high-dimensional space, these stationary points are very likely to be saddle points, which can be avoided by SGD. 
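A single Cayley-transform step can be sketched as follows (illustration only: we use a fixed step size $\eta$ and random stand-ins for the mean concept activations, whereas the actual training uses a curvilinear search for $\eta$ and momentum on the stochastic gradient, as in Algorithm \ref{alg:twostep}). The key property exercised here is that the update keeps $\mathbf{Q}$ exactly orthogonal:

```python
import numpy as np

rng = np.random.default_rng(3)
d, k = 4, 2
I = np.eye(d)

Q = np.linalg.qr(rng.normal(size=(d, d)))[0]   # current orthogonal iterate

# Stand-ins for the mean whitened activations of each concept mini-batch.
concept_means = rng.normal(size=(d, k))

# Gradient of the (negated) alignment objective: column j is minus the mean
# activation of concept j for j <= k, and zero for the remaining columns.
G = np.zeros((d, d))
G[:, :k] = -concept_means

eta = 0.5                                      # fixed step size in this sketch
A = G @ Q.T - Q @ G.T                          # skew-symmetric by construction
Q_next = np.linalg.solve(I + (eta / 2) * A, I - (eta / 2) * A) @ Q

# The Cayley transform keeps the iterate on the Stiefel manifold.
print(np.abs(Q_next.T @ Q_next - I).max())
```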
Therefore, we use the stochastic gradient calculated by a mini-batch of samples to replace $\mathbf{G}$ at each step. To accelerate and stabilize the stochastic gradient, we also apply momentum to it during implementation. Algorithm \ref{alg:twostep} provides details for the two-step alternating optimization. \textbf{Dealing with the convolution outputs:} In the previous description of our optimization algorithm, we assumed that the activations in the latent space form a vector. However, in CNNs, the output of the layer is a tensor instead of a vector. In CNNs, a feature map (a channel within one layer, created by a convolution of one filter) contains the information of how activated a part of the image is by a single filter, which may be a detector for a specific concept. Let us reshape the feature map into a vector, where each element of the vector represents how much one part of the image is activated by the filter. Thus, if the feature map for one filter is $h \times w$, then a vector of length $hw$ contains the activation information for that filter around the whole feature map. We do this reshaping procedure for each filter, which reshapes the output of a convolution layer $Z_{h\times w \times d \times n}$ into a matrix $Z_{d\times(hwn)}$, where $d$ is the number of channels. We then perform CW on the reshaped matrix. After doing this, the resulting matrix is still of size $d\times(hwn)$. If we reshape this matrix back to its original size as a tensor, one feature map of the tensor now (after training) represents whether a meaningful concept is detected at each location in the image for that layer. Note that the output of a filter is a feature map, which is an $h\times w$ matrix, but the concept activation score used in the optimization problem is a scalar. Therefore, we need to obtain an activation value from the feature map. There are multiple ways to do this.
We try the following calculations to define activation based on the feature map: (a) mean of all feature map values; (b) max of all feature map values; (c) mean of all positive feature map values; (d) mean of a down-sampled feature map obtained by max pooling. We use (d) in our experiments since it is good at capturing both high-level and low-level concepts. Detailed analysis and experiments about the choice of different activation calculations are discussed in Supplementary Information \ref{sec:activation}. \textbf{Warm start with pretrained models:} Let us discuss some aspects of practical implementation. The CW module can substitute for other normalization modules, such as BatchNorm, in a hidden layer of the CNN. Therefore, one can use the weights of a pretrained model as a warm start. To do this, we might leverage a pretrained model (for the same main objective) that does not use CW, and replace a BatchNorm layer in that network with a CW layer. The model usually converges in one epoch (one pass over the data) if a pretrained model is used. Note that CW does strictly more work than BatchNorm. CW alone will achieve the desirable outcomes of using BatchNorm; therefore, there is no need to use BatchNorm when CW is in place. \textbf{Computational Efficiency:} The CW module involves two iterative optimization steps: one for whitening normalization and one for concept alignment. The efficiency of iterative whitening normalization is justified experimentally in \cite{huang2019iterative}; the concept alignment optimization is performed only every 20 batches, usually costing less than 20 matrix multiplications and 10 matrix inversions, which do not notably hurt the speed of training. Indeed, our experiments show that there is no significant training speed slowdown using CW compared to using vanilla BN.
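The reshaping of convolutional outputs and activation option (d) can be sketched as follows (an illustrative NumPy fragment with synthetic activations; the batch-first tensor layout and the $2\times 2$ pooling window are our own choices, not specified above):

```python
import numpy as np

rng = np.random.default_rng(4)
n, d, h, w = 8, 16, 6, 6                  # batch, channels, feature-map size
feats = rng.normal(size=(n, d, h, w))     # post-convolution activations

# Reshape the conv output into a d x (hwn) matrix: every spatial position of
# every sample becomes one column for whitening, then undo the reshaping.
Z = feats.transpose(1, 0, 2, 3).reshape(d, n * h * w)
# ... the CW transform (whitening + rotation) would be applied to Z here ...
restored = Z.reshape(d, n, h, w).transpose(1, 0, 2, 3)

# Activation option (d): down-sample the feature map by max pooling,
# then take the mean of the pooled values.
def concept_activation(fmap, pool=2):
    hh, ww = fmap.shape[0] // pool, fmap.shape[1] // pool
    pooled = (fmap[:hh * pool, :ww * pool]
              .reshape(hh, pool, ww, pool)
              .max(axis=(1, 3)))
    return float(pooled.mean())

score = concept_activation(feats[0, 0])   # scalar score for one feature map
```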
\section{Experiments} \label{sec:experiment} In this section, we first show that after replacing one batch norm (BN) layer with our CW module, the accuracy of image recognition is still on par with the original model (\ref{sec:acc}). After that, we visualize the concept basis we learn and show that the axes are aligned with the concepts assigned to them. Specifically, we display the images that are most activated along a single axis (\ref{sec:top10}); we then show how two axes interact with each other (\ref{sec:2drep}); and we further show how the same concept evolves in different layers (\ref{sec:trajectory}), where we have replaced one layer at a time. Then we experimentally validate the problems of standard neural networks mentioned in Section \ref{sec:whynotnn} and show that CW can solve these problems (\ref{sec:inter_inner}). Moreover, we quantitatively measure the interpretability of our concept axes and compare with other concept-based neural network methods (\ref{sec:quantitative_eval}). We also show how we can use the learned representation to measure the contributions of the concepts (\ref{sec:concept_importance}). Finally, we show the practicality of the CW module through a case study of skin lesion diagnosis (\ref{sec:isic}). \subsection{Main Objective Accuracy} \label{sec:acc} We evaluate the image recognition accuracy of the CNNs before and after adding a CW module. We show that simply replacing a BN module with a CW module and training for a single epoch leads to similar main objective performance. Specifically, after replacing the BN module with the CW module, we trained popular CNN architectures including VGG16+BN \cite{simonyan2014very}, ResNet with 18 layers and 50 layers \cite{he2016deep} and DenseNet161 \cite{huang2017densely} on the Places365 \cite{zhou2017places} dataset. The auxiliary concept dataset we used is MS COCO \cite{lin2014microsoft}.
Each annotation, e.g., ``person'' in MS COCO, was used as one concept, and we selected all the images with this annotation (images having ``person'' in them), cropped them using bounding boxes and used the cropped images as the data representing the concept. The concept bank has 80 different concepts corresponding to the 80 annotations in MS COCO. In order to limit the total time of the training process, we used pretrained models for the popular CNN architectures (discussed above) and fine-tuned these models after BN was replaced with CW. Table \ref{fig:acc_places} shows the average test accuracy on the validation set of Places365 over 5 runs. We randomly selected 3 concepts from the concept bank to learn using CW for each run, and used the average of them to measure accuracy. We repeated this, applying CW to different layers, and reported the average accuracy among the layers. The accuracy does not change much when CW is applied to different layers and trained on different numbers of concepts, as shown in Supplementary Information \ref{sec:acc_sensitivity}. \begin{table}[htbp] \centering \begin{tabular}{lcc c cc} \hline & \multicolumn{2}{c}{Top-1 acc.} && \multicolumn{2}{c}{Top-5 acc.} \\ \cline{2-3} \cline{5-6} & Original & +CW && Original & +CW \\ \hline VGG16-BN & 53.6 & 53.3 && 84.2 & 83.8\\ ResNet18 & 54.5 & 53.9 && 84.6 & 84.2\\ ResNet50 & 54.7 & 54.9 && 85.1 & 85.2\\ DenseNet161 & 55.3 & 55.5 && 85.2 & 85.6\\ \hline \end{tabular} \caption{Top-1 and top-5 test accuracy on Places365 dataset. Our results show that CW does not hurt performance.} \label{fig:acc_places} \end{table} Because we have leveraged a pretrained model, when training with CW, we conduct only one additional epoch of training (one pass over the dataset) for each run. As shown in Table \ref{fig:acc_places}, the performance of these models using the CW module is on par with the original model: the difference is within $1\%$ with respect to top-1 and top-5 accuracy.
This means in practice, \textit{if a pretrained model (using BN) exists, one can simply replace the BN module with a CW module and train it for one epoch, in which case, the pretrained black-box model can be turned into a more interpretable model that is approximately equally accurate.} \subsection{Visualizing the Concept Basis} \label{sec:visualize_basis} In order to demonstrate the interpretability benefits of models equipped with a CW module, we visualize the concept basis in the CW module and validate that the axes are aligned with their assigned concepts. In detail, (a) we check the most activated images on these axes; (b) we look at how images are distributed in a 2D slice of the latent space; (c) we show how realizations of the same concept change if we apply CW on different layers. All experiments in Section \ref{sec:visualize_basis} were done on ResNet18 equipped with CW, trained on Places365 and three simultaneous MS COCO concepts. \subsubsection{Top-10 Activated Images} \label{sec:top10} We sort all validation samples by their activation values (discussed in Section \ref{sec:detail}) to show how much they are related to the concept. Figure \ref{fig:top10} shows the images that have the top-10 largest activations along three different concepts' axes. Note that all these concepts are trained together using one CW module. From Figure \ref{fig:top10}(b), we can see that all of the top activated images have the same semantic meaning when the CW module is located at a higher layer (i.e., the 16th layer). Figure \ref{fig:top10}(a) shows that when the CW module is applied to a lower layer (i.e., the 2nd layer), it tends to capture low-level information such as color or texture characteristic of these concepts. For instance, the top activated images on the ``airplane'' axis generally have a blue background with a white or gray object in the middle.
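The ranking step behind this visualization reduces to a sort over per-image activation values. The following is a minimal NumPy sketch with toy data; the variable names are illustrative, and the actual extraction of activation values is the one discussed in Section \ref{sec:detail}:

```python
import numpy as np

def top_k_activated(activations, axis_idx, k=10):
    """Indices of the k images with the largest activation on one concept axis.

    activations : (n_images, n_axes) array of per-image activation values
    axis_idx    : which concept axis to rank by
    """
    scores = activations[:, axis_idx]
    return np.argsort(scores)[::-1][:k]  # sort descending, keep the first k

# toy example: 6 "validation images", 3 concept axes
rng = np.random.default_rng(0)
acts = rng.normal(size=(6, 3))
top3 = top_k_activated(acts, axis_idx=0, k=3)
```

In practice, `acts` would hold the activations of all validation images on the learned concept axes, and $k=10$ recovers the panels of Figure \ref{fig:top10}.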
It is reasonable that the lower layer CW module cannot extract complete information about high-level concepts such as ``airplane'' since the model complexity of the first two layers is limited. \begin{figure}[htbp] \centering \includegraphics[width=80mm]{Figure_2.jpg} \caption{Top-10 images activated on axes representing different concepts. \textbf{a}, results when the $2^{nd}$ layer (BN) is replaced by CW; \textbf{b}, results when the $16^{th}$ layer (BN) is replaced by CW.\label{fig:top10}} \end{figure} In that sense, the CW layer has \textit{discovered} lower-level characteristics of a more complex concept; namely, it has discovered that blue images with white objects are primitive characteristics that can approximate the ``airplane'' concept. Similarly, the network seems to have discovered that the appearance of warm colors is a lower-level characteristic of the ``bedroom'' concept, and that a dark background with vertical light streaks is a characteristic of the ``person'' concept. Interestingly, when different definitions of activation are used (namely the options discussed in Section \ref{sec:detail}), the characteristics discovered by the network often look different. Some of these are shown in Supplementary Information \ref{sec:top10_act}. Moreover, similar visualizations show that CW can deal with various types of concepts. Top activated images on more concepts, including concepts defined as objects and concepts defined as general characteristics, can be found in Supplementary Information \ref{sec:more_concepts}. Top activated images visualized with empirical receptive fields can be found in Supplementary Information \ref{sec:receptive_fields}. \subsubsection{2D-Representation Space Visualization} \label{sec:2drep} \begin{figure*}[t] \centering \includegraphics[width=170mm]{Figure_3.pdf} \caption{Joint distribution of the bed-person subspace. The bounding box given by projected values in the subspace is evenly divided into $20\times20$ blocks.
\textbf{a}, a random test image falling into each block; \textbf{b}, density map of the test image representations.\label{fig:2drep}} \end{figure*} Let us consider whether joint information about different concepts is captured by the latent space of CW. To investigate how the data are distributed in the new latent space, we pick a 2D slice of the latent space, which means we select two axes $\mathbf{q}_i$ and $\mathbf{q}_j$ and look at the subspace they form. The data's joint distribution on the two axes is shown in Figure \ref{fig:2drep}. To visualize the joint distribution, we first compute the activations of all validation data on the two axes, then divide the latent space into a $50\times50$ grid of blocks, where the maximum and minimum activation values mark the top and bottom of the grid. For the grid shown in Figure \ref{fig:2drep}(a), we randomly select one image that falls into each block, and display the image in its corresponding block. If there is no image in the block, the block remains black. From Figure \ref{fig:2drep}(a), we observe that the axes are not only aligned with their assigned concepts, they also incorporate joint information. For example, a ``person in bed'' has high activation on both the ``person'' axis and the ``bed'' axis. We also include a 2D histogram of the number of images that fall into each block. As shown in Figure \ref{fig:2drep}(b), most images are distributed near the center (which is the origin), suggesting that the samples' feature vectors have a high probability of being nearly orthogonal to the concept axes we picked (meaning that they do not exhibit the two concepts), and consequently the latent features have near-$0$ activation on the concept axes themselves. \subsubsection{Trajectory of Concepts in Different Layers} \label{sec:trajectory} Although our objective is the same when we apply the CW module to different layers in the same CNN, the latent space we get might be different.
This is because different layers might be able to express different levels of semantic meaning. Because of this, it is interesting to track how the representation of a single image changes as the CW module is applied to different layers of the CNN. In order to better understand the latent representation, we plot a 2D slice of the latent space. Unlike in the 2D-representation space visualization (Figure \ref{fig:2drep}), here, a point in the plot is specified not by the activation values themselves but by their rankings. For example, the point $(0.7, 0.1)$ means the point is at the $70^{th}$ percentile on the first axis and the $10^{th}$ percentile on the second axis. We use percentile ranks instead of raw values because, as shown in the 2D-representation space visualization (Figure \ref{fig:2drep}), most points are near the center of the plot, so the rankings spread the values for plotting purposes. \begin{figure*}[t] \centering \includegraphics[width=170mm]{Figure_4.pdf} \caption{2D representation plot of two representative images. Each point in the right trajectory plot corresponds to the percentile rank for the activation values on each axis. The number labeling each point on the plot provides the layer depth of the CW module. The trajectory shows how the percentile rank of the left image changes when CW is applied to different layers. \label{fig:trajectory}} \end{figure*} Figure \ref{fig:trajectory} shows the 2D representation plot of two representative images. Each point in the plot corresponds to the percentile rank representation of the image when the CW module is applied to different layers. The points are connected by arrows according to the depth of the layer. These plots confirm that the abstract concepts learned in the lower layers tend to capture lower-level meaning (such as colors or shapes) while the higher layers capture high-level meaning (such as types of objects).
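The percentile-rank coordinates used in these plots can be sketched as follows; the activation values here are toy numbers standing in for real network outputs, and ties are broken arbitrarily, which is adequate for plotting:

```python
import numpy as np

def percentile_rank(values):
    """Map each value to its normalized rank in [0, 1] within the set.

    The largest value maps to 1.0 and the smallest to 0.0.
    """
    ranks = np.argsort(np.argsort(values))  # 0-based rank of each entry
    return ranks / (len(values) - 1)

# toy activations of 5 validation images on two concept axes
airplane_axis = np.array([0.10, 0.50, 0.20, 0.90, 0.30])
bed_axis = np.array([0.40, 0.20, 0.80, 0.10, 0.60])

# percentile-rank coordinates of image 3, as plotted in a trajectory
point = (percentile_rank(airplane_axis)[3], percentile_rank(bed_axis)[3])
```

Repeating this for the activations produced at each layer yields one point per layer, which are then connected into a trajectory.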
For example, in the left image in Figure \ref{fig:trajectory}(a), the bed is blue; blue is typical low-level information for the ``airplane'' class but not for the ``bed'' class, since bedrooms usually have warm colors. Therefore, in lower layers, the bed image has a higher ranking on the ``airplane'' axis than on the ``bed'' axis. However, when CW is applied to deeper layers, high-level information is available, and thus the image becomes highly ranked on the ``bed'' axis and lower on the ``airplane'' axis. In Figure \ref{fig:trajectory}(b), the image of a sunset does not have the typical blue coloring of a sky. Its warm colors put it high on the ``bedroom'' concept for the second layer, and low on the ``airplane'' concept. However, as we look at higher layers, where the network can represent more sophisticated concepts, we see the image's rank grow on the ``airplane'' concept (perhaps the network uses the presence of skies to detect airplanes), and decrease on the ``bed'' concept. \subsection{Separability of Latent Representations} \label{sec:inter_inner} In this subsection, we evaluate properties of the spatial distribution of the concepts in the latent space. By experimentally comparing such properties across latent representations produced by the CW module and other methods, we demonstrate that the issues arising in standard methods, as outlined in Section \ref{sec:whynotnn}, do not occur when using CW. We also investigate such properties on a non-posthoc neural network, trained with an auxiliary loss that aims to classify different concepts in the latent space (that is, in the objective, there are classification losses for each axis, using each axis' assigned concept as its label). Interestingly, we find that the issues mentioned in Section \ref{sec:whynotnn} may also exist in that network. The experiments in Section \ref{sec:inter_inner} were all done on ResNet18.
The CW module was trained with seven simultaneous MS COCO concepts. Specifically, for each concept image, we first extract its latent space representation. The representation for instance $j$ of concept $i$ is denoted $\mathbf{x}_{ij}$. Then, intra-concept similarity for concept $i$, denoted $d_{ii}$, is defined to be: \begin{equation} d_{ii} = \frac{1}{n^2}\left(\sum_{j=1}^{n} \sum_{k=1}^{n} \frac{\mathbf{x}_{ij}\cdot \mathbf{x}_{ik}}{\|\mathbf{x}_{ij}\|_2\|\mathbf{x}_{ik}\|_2}\right) \end{equation} where $n$ is the total number of instances of concept $i$. Inter-concept similarity between concepts $p$ and $q$ is similarly defined as: \begin{equation} d_{pq} = \frac{1}{nm}\left(\sum_{j=1}^{n} \sum_{k=1}^{m} \frac{\mathbf{x}_{pj}\cdot \mathbf{x}_{qk}}{\|\mathbf{x}_{pj}\|_2\|\mathbf{x}_{qk}\|_2}\right) \end{equation} where $n$ and $m$ are the numbers of instances of concepts $p$ and $q$ respectively. Indeed, intra-concept similarity is the average pairwise cosine similarity between instances of the same concept, and inter-concept similarity is the average pairwise cosine similarity between instances of two different concepts. With those defined, we plot heat maps in Figure \ref{fig:inner_product} where the value in the cell at row $i$, column $j$ is computed as: \begin{equation} Q_{ij} = \frac{d_{ij}}{\sqrt{d_{ii}d_{jj}}}. \end{equation} From Figure \ref{fig:inner_product}, we notice that with the CW module, latent representations of concepts achieve greater separability: the ratios between inter-concept and intra-concept similarities (average 0.35) are notably smaller than those of standard CNNs (average 0.94). In addition, without normalization, the CW module has very small inter-concept similarities (average 0.05) while analogous values for a standard neural network are around 0.74. This means that in the latent space of CW, two concepts are nearly orthogonal, while in a standard neural network, they are generally not.
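These similarity measures can be computed directly from stored latent representations. The following NumPy sketch implements $d_{ii}$, $d_{pq}$, and $Q_{ij}$, using synthetic Gaussian clusters in place of real concept representations; note that $Q_{ii}=1$ by construction, matching the diagonals of the heat maps:

```python
import numpy as np

def normalized_similarity(concept_reps):
    """Intra/inter-concept similarities d and their normalized matrix Q.

    concept_reps : list of (n_i, d) arrays; row j of entry i is the latent
                   representation x_ij of instance j of concept i.
    """
    # unit-normalize every representation so dot products are cosines
    unit = [x / np.linalg.norm(x, axis=1, keepdims=True) for x in concept_reps]
    k = len(unit)
    d = np.empty((k, k))
    for p in range(k):
        for q in range(k):
            # average pairwise cosine similarity (self-pairs included for p == q)
            d[p, q] = (unit[p] @ unit[q].T).mean()
    return d / np.sqrt(np.outer(np.diag(d), np.diag(d)))  # Q matrix

# toy data: two "concepts" as Gaussian clusters shifted along different dimensions
rng = np.random.default_rng(0)
reps = []
for shifted_dim in (0, 1):
    x = rng.normal(size=(20, 8))
    x[:, shifted_dim] += 3.0
    reps.append(x)
Q = normalized_similarity(reps)
```

For well-separated concepts, the off-diagonal entries of `Q` are small, as on the CW panel of Figure \ref{fig:inner_product}.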
\textit{This indicates that some of the problems we identified in Section \ref{sec:whynotnn} occur in standard neural networks, but they do not occur with CW.} \begin{figure*}[ht] \centering \includegraphics[width=170mm]{Figure_5.pdf} \caption{Normalized intra-concept and inter-concept similarities. The diagonal values are normalized average similarities (see definition in Section \ref{sec:inter_inner}) between latent representations of images of the same concept; off-diagonal values are normalized average similarities between latent representations of images of different concepts. \textbf{a}, The $16^{th}$ layer is a BN module; \textbf{b}, The $16^{th}$ layer is a BN module with auxiliary loss to classify these concepts; \textbf{c}, The $16^{th}$ layer is a CW module. \label{fig:inner_product}} \end{figure*} In this experiment, as mentioned earlier, we also trained a standard neural network with a concept-distinction auxiliary loss. The auxiliary loss is the cross entropy of the first several dimensions in the latent space with respect to the concepts we investigated. As shown in Figure \ref{fig:inner_product}(b), the latent representations do not naturally achieve concept separation. The average ratio between inter-concept and intra-concept similarities is 0.85. Without normalization, the average inter-concept similarity is also around 0.74, similar to that of the standard neural network without the auxiliary loss. This has important implications: \textit{good discriminative power in the latent space does not guarantee orthogonality of different concepts. Thus, the whitening step is crucial for representing pure concepts.} \subsection{Quantitative Evaluation of Interpretability} \label{sec:quantitative_eval} In this subsection, we measure the interpretability of the latent space quantitatively and compare it with that of other concept-based methods.
First, we measure the purity of learned concepts by the AUC (of classifying the concept, not classifying with respect to the label for the overall prediction problem) calculated from the activation values. To calculate the test AUC, we divide the concept bank, containing 80 concepts extracted from MS COCO, into training sets and test sets. After training the CW module using the training set, we extract the testing samples' activation values on the axis representing the concept. For the target concept, we assign samples of this concept to the label $1$ while giving samples of the other 79 concepts label $0$. In this way, we calculate the one-vs-all test AUC score of classifying the target concept in the latent space. The AUC score measures whether the samples belonging to a concept are ranked higher than other samples. That is, the AUC score indicates the purity of the concept axis. Specifically, we randomly choose 14 concepts from the concept bank for the purity comparison. Since our CW module can learn multiple concepts at the same time, we divide the 14 concepts into two groups and train CW with 7 simultaneous concept datasets. We compared the AUC concept purity of CW with the concept vectors learned by TCAV \cite{kim2018interpretability} from black box models, IBD \cite{zhou2018interpretable} from black box models, and filters in standard CNNs \cite{zhou2014object}. Since TCAV and IBD already find concept vectors, we use the samples' projections on the vectors to measure the AUC score. Note that in their original papers, the concept vectors are calculated for only one concept each time; therefore, we calculated 14 different concept vectors, each by training a linear classifier in the black box's latent space, with the training set of the target concept as positive samples and samples randomly drawn from the main dataset as negative samples. 
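Because this one-vs-all AUC equals the probability that a randomly drawn sample of the target concept outranks a randomly drawn sample of the other concepts, it can be computed from the activation values alone. A minimal sketch with toy scores (illustrative numbers, not real activations):

```python
import numpy as np

def auc_score(pos_scores, neg_scores):
    """One-vs-all AUC via the Mann-Whitney U statistic.

    Equals the probability that a random positive sample has a larger
    score than a random negative sample (ties count 1/2).
    """
    pos = np.asarray(pos_scores, dtype=float)[:, None]
    neg = np.asarray(neg_scores, dtype=float)[None, :]
    wins = (pos > neg).sum() + 0.5 * (pos == neg).sum()
    return wins / (pos.size * neg.size)

# toy activations on the target concept's axis
target_acts = [2.1, 1.8, 2.5]         # images of the target concept (label 1)
other_acts = [0.3, 0.9, -0.2, 0.5]    # images of the other 79 concepts (label 0)
purity = auc_score(target_acts, other_acts)
```

An AUC of 1.0 means every target-concept image is ranked above every other image on that axis, i.e., a perfectly pure concept axis.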
For standard CNNs, we measure the AUC score for the output of all filters and choose the best one to compare with our method, separately for each concept (denoted ``Best Filter''). Figure \ref{fig:auc} shows the AUC concept purity of ``airplane'' and ``person'' for these methods across different layers. The error bars in Figure \ref{fig:auc} were obtained by splitting the testing set into 5 parts and calculating the AUC over each of them. The AUC plots for the other 12 concepts are shown in Supplementary Information \ref{sec:auc_all}. From the plots, we observe that \textit{concepts learned in the CW module are generally purer than those of other methods.} This is attributed to the orthogonality of concept representations as illustrated in Section \ref{sec:whynotnn}, a result of CW's whitening of the latent space and optimization of the loss function. \begin{figure}[ht] \centering \includegraphics[width=80mm]{Figure_6.pdf} \caption{Concept purity measured by AUC score. \textbf{a}, concept ``airplane''; \textbf{b}, concept ``person.'' Concept purity of the CW module is compared to several posthoc methods on different layers. The error bar is the standard deviation over 5 different test sets, each one $20\%$ of the entire test set. \label{fig:auc}} \end{figure} We perform another quantitative evaluation that aims to measure the correlation of axes in the latent space before and after the CW module is applied. For comparison with posthoc methods like TCAV and IBD, we measure the output of their BN modules in the pretrained model, because the outputs of these layers are mean-centered and normalized, which, as we discussed, are important properties for concept vectors. As shown by the absolute correlation coefficients plotted in Figure \ref{fig:correlation}(a), the axes still have relatively strong correlation after passing through the BN module. If CW were applied instead of BN, the axes would instead be decorrelated, as shown in Figure \ref{fig:correlation}(b).
This figure shows the correlation matrices for the $16^{th}$ layer. The same correlation comparison is shown in Supplementary Information \ref{sec:correlation_all} when CW is applied to other layers. These results reflect why purity of concepts is important: \textit{when the axes are pure, the signal of one concept can be concentrated only on its axis, while in standard CNNs, the concept could be strewn throughout the latent space.} \begin{figure}[ht] \centering \includegraphics[width=80mm]{Figure_7.jpg} \caption{Absolute correlation coefficient of every feature pair in the $16^{th}$ layer. \textbf{a}, when the $16^{th}$ layer is a BN module; \textbf{b}, when the $16^{th}$ layer is a CW module. \label{fig:correlation}} \end{figure} \subsection{Concept Importance} \label{sec:concept_importance} In order to obtain practical insights into how the concepts contribute to the classification results, we can measure the concept importance. The concept importance of the $j^{th}$ axis is defined as the ratio of a ``switched loss'' to the original loss: \begin{equation} CI_j = \frac{e^{(j)}_{\rm switch}}{e_{\rm orig}} \end{equation} where the switched loss $e^{(j)}_{\rm switch}$ is the loss calculated when the sample values of the $j^{th}$ axis are randomly permuted, and $e_{\rm orig}$ is the original loss without permutation. The expression for $CI_j$ is similar to classical definitions of variable importance \cite{breiman2001random,fisher2019all}. Specifically: \begin{itemize} \item To measure the contribution of a concept to the entire classifier, the training loss function can be used in the variable importance calculation, which is the multi-class cross entropy in this case. \item To measure the contribution of a concept to a target class, e.g., how much ``bed'' contributes to ``bedroom,'' one can use a balanced binary cross entropy loss in the variable importance calculation, calculated on the softmax probability of the target class.
The concept importance score is measured on the test set to prevent overfitting. \end{itemize} In our experiments, we measure concept importance scores of the learned concepts to different target classes in the Places365 dataset (corresponding to the second of the bullets above). Figure \ref{fig:concept_importance} shows the results in a grouped bar plot. The target classes we choose relate meaningfully to a specific concept learned in CW (e.g., ``airplane'' and ``airfield''). We apply CW on the $16^{th}$ layer since the concepts are generally purer in that layer, as shown in Figure \ref{fig:auc}. As shown in Figure \ref{fig:concept_importance}, the irrelevant concepts have concept importance scores near 1.0 (no contribution), e.g., ``airplane'' is not important to the detection of ``bedroom.'' For the concepts that relate meaningfully to the target class, e.g., ``airplane'' to ``airfield,'' the concept importance scores are much larger than those for other concepts. \textit{Thus, the concept importance score measured on the CW latent space can tell us the contribution of a concept to the classification. For example, it can tell us how much a concept (such as ``airplane'') contributes to classifying ``airfield,'' or how much ``book'' contributes to classifying ``library.''} \begin{figure}[t] \centering \includegraphics[width=80mm]{Figure_8.jpg} \caption{Concept importance to different Places365 classes measured on the concept axes when CW is applied to the $16^{th}$ layer. Each group in the bar plot corresponds to a target class. The bars in the same group show the concept importance scores of the learned concepts to the target class. Concepts that relate meaningfully to the target class (e.g., ``airplane'' and ``airfield'') have larger importance scores than irrelevant concepts.\label{fig:concept_importance}} \end{figure} \subsection{Case Study: Skin Lesion Diagnosis} \label{sec:isic} We provide a case study of a medical imaging dataset of skin lesions.
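Returning to the concept importance score $CI_j$ defined in Section \ref{sec:concept_importance}, the permutation step can be sketched as follows; here a fixed linear head and synthetic activations are illustrative stand-ins for the trained layers above CW and its latent space:

```python
import numpy as np

def bce(scores, labels):
    """Binary cross entropy of sigmoid(scores) against 0/1 labels."""
    p = 1.0 / (1.0 + np.exp(-scores))
    return -np.mean(labels * np.log(p) + (1.0 - labels) * np.log(1.0 - p))

def concept_importance(acts, labels, head, axis_idx, rng):
    """CI_j: loss after permuting axis j, divided by the original loss."""
    e_orig = bce(acts @ head, labels)
    perm = acts.copy()
    # permuting one axis breaks its association with the labels
    perm[:, axis_idx] = rng.permutation(perm[:, axis_idx])
    e_switch = bce(perm @ head, labels)
    return e_switch / e_orig

rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=200).astype(float)
acts = rng.normal(size=(200, 3))
acts[:, 0] += 2.0 * (2.0 * labels - 1.0)  # axis 0 carries the relevant concept
head = np.array([1.0, 0.3, 0.3])          # stand-in for the trained classifier head
ci_relevant = concept_importance(acts, labels, head, 0, rng)
ci_irrelevant = concept_importance(acts, labels, head, 1, rng)
```

Permuting a relevant axis inflates the loss, giving $CI_j \gg 1$, while an irrelevant axis yields a score near 1.0, mirroring the pattern in Figure \ref{fig:concept_importance}.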
The dataset of dermoscopic images is collected from the ISIC archive \cite{isic2020}. Because the dermoscopic images corresponding to different diagnoses vary greatly in appearance, we focus on predicting whether a skin lesion is malignant for each of the histopathology images (9058 images in total). We choose ``age $<$ 20'' and ``size $\geq$ 10 mm'' as the concepts of interest and select the images with corresponding meta information to form the concept datasets. We chose these concepts due to their availability in the ISIC dataset. The 10 mm cutoff, for instance, is commonly used in the evaluation of skin lesions \cite{lewis1998}. Details about the experimental results, including test accuracy, separability of latent representations, AUC concept purity, correlation of axes, and concept importance, are shown in Supplementary Information \ref{sec:isic_app}. The main results of the case study are: \begin{itemize} \item The conclusions of the CW performance analysis on the ISIC dataset are very similar to our earlier conclusions on the Places dataset, in terms of main objective test accuracy, separability of latent representations, AUC concept purity, and correlation of axes. \item Concept importance scores measured on the CW latent space can provide practical insights into which concepts are potentially more important in skin lesion diagnosis. \end{itemize} \section{Conclusion and Future Work} \label{sec:conclusion} Concept whitening is a module placed at the bottleneck of a CNN, to force the latent space to be disentangled, and to align the axes of the latent space with predefined concepts. By building an inherently interpretable CNN with concept whitening, we can gain intuition about how the network gradually learns the target concepts (or whether it needs them at all) over the layers, without harming the main objective's performance. There are many avenues for possible future work.
Since CW modules are useful for helping humans to define primitive abstract concepts, such as those we have seen the network use at early layers, it would be interesting to automatically detect and quantify these new concepts (see ref. \cite{ghorbani2019towards}). Also, the requirement that CW completely decorrelate the outputs of all the filters might be too strong for some tasks, because concepts might be highly correlated in practice, such as ``airplane'' and ``sky.'' In this case, we may want to soften our definition of CW. We could define several general topics that are uncorrelated, and use multiple correlated filters to represent concepts within each general topic. In this scenario, instead of forcing the Gram matrix to be the identity matrix, we could make it block diagonal. The orthogonal basis would become a set of orthogonal subspaces. \section*{Data Availability} All datasets that support the findings are publicly available, including Places365 at \href{http://places2.csail.mit.edu}{http://places2.csail.mit.edu}, MS COCO at \href{https://cocodataset.org/}{https://cocodataset.org/} and ISIC at \href{https://www.isic-archive.com}{https://www.isic-archive.com}. \section*{Code Availability} The code for replicating our experiments is available at \href{https://github.com/zhiCHEN96/ConceptWhitening}{https://github.com/zhiCHEN96/ConceptWhitening} (\href{https://doi.org/10.5281/zenodo.4052692}{https://doi.org/10.5281/zenodo.4052692}). \section{Introduction} \label{sec:introduction} An important practical challenge that arises with neural networks is the fact that the units within their hidden layers are not usually semantically understandable. This is particularly true in computer vision applications, where an expanding body of research has focused centrally on explaining the calculations of neural networks and other black box models.
Some of the core questions considered in these posthoc analyses of neural networks include: ``What concept does a unit in a hidden layer of a trained neural network represent?'' or ``Does this unit in the network represent a concept that a human might understand?'' The questions listed above are important, but it is not clear that they would naturally have satisfactory answers when performing posthoc analysis on a pretrained neural network. In fact, there are several reasons why various types of posthoc analyses would not answer these questions. Efforts to interpret individual nodes of pretrained neural networks (e.g., \cite{zhou2018interpreting, zhou2014object}) have shown that some fraction of nodes can be identified to be aligned with some high-level semantic meaning, but these special nodes do not provably contain the network's full information about the concepts. That is, the nodes are not ``pure,'' and information about the concept could be scattered throughout the network. Concept-vector methods \cite{kim2018interpretability,zhou2018interpretable,ghorbani2019towards} have also been used to analyze pretrained neural networks. Here, vectors in the latent space are chosen to align with pre-defined or automatically-discovered concepts. While concept-vector methods are more promising, they still make the assumption that the latent space of a neural network admits a posthoc analysis of a specific form. In particular, they assume that the latent space places members of each concept in one easy-to-classify portion of the latent space. Since the latent space was not explicitly constructed to have this property, there is no reason to believe it holds.
Ideally, we would want a neural network whose latent space \textit{tells us} how it is disentangling concepts, without needing to resort to extra classifiers like concept-vector methods \cite{kim2018interpretability,ghorbani2019towards}, without surveys of humans \cite{zhou2014object}, and without other manipulations that rely on whether the geometry of a latent space serendipitously admits analysis of concepts. Rather than having to rely on assumptions that the latent space admits disentanglement, we would prefer to constrain the latent space directly. We might even wish that the concepts align themselves with the axes of the latent space, so that each point in the latent space has an interpretation in terms of known concepts. Let us discuss how one would go about imposing such constraints on the latent space. In particular, we introduce the possibility of what we call \textit{concept whitening}. Concept whitening (CW) is a module inserted into a neural network. It constrains the latent space to represent target concepts and also provides a straightforward means to extract them. It does not force the concepts to be learned as an intermediate step; rather, \textit{it forces the latent space to be aligned along the concepts}. For instance, let us say that, using CW on a lower layer of the network, the concept ``airplane'' is represented along one axis. By examining the images along this axis, we can find the lower-level characteristic that the network is using to best approximate the complex concept ``airplane,'' which might be white or silver objects with blue backgrounds. In the lower layers of a standard neural network, we cannot necessarily find these characteristics, because the relevant information about ``airplane'' might be spread throughout the latent space rather than along an ``airplane'' axis.
By looking at images along the airplane axis at each layer, we see how the network gradually represents airplanes with an increasing level of sophistication and complexity. Concept whitening could be used to replace a plain batch normalization step in a CNN backbone, because it combines batch whitening with an extra step involving a rotation matrix. Batch whitening usually provides helpful properties to latent spaces, but our goal requires the whitening to take place with respect to concepts; the use of the rotation matrix to align the concepts with the axes is the key to interpretability through disentangled concepts. Whitening decorrelates and normalizes each axis (i.e., transforms the post-convolution latent space so that the covariance matrix between channels is the identity). Exploiting the property that a whitening transformation remains valid after applying an arbitrary rotation, the rotation matrix strategically matches the concepts to the axes. The concepts used in CW do not need to be the labels in the classification problem; they can be learned from an auxiliary dataset in which concepts are labeled. The concepts do not need to be labeled in the dataset involved in the main classification task (though they could be), and the main classification labels do not need to be available in the auxiliary concept dataset. Through qualitative and quantitative experiments, we illustrate how concept whitening applied to various layers of the neural network illuminates its internal calculations. We verify the interpretability and pureness of concepts in the disentangled latent space. Importantly for practice, we show that by replacing the batch normalization layer in pretrained state-of-the-art models with a CW module, the resulting neural network can achieve accuracy on par with the corresponding original black box neural network on large datasets, and it can do this within one additional epoch of further training.
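The rotational freedom underlying this construction is easy to verify numerically: whitening makes the channel covariance the identity, and any orthogonal rotation leaves it there. A small NumPy sketch on synthetic features, using ZCA whitening as one common whitening choice (the learned CW transform itself is trained, not computed in closed form like this):

```python
import numpy as np

rng = np.random.default_rng(0)
# a batch of correlated "latent features": n samples x d channels
X = rng.normal(size=(500, 4)) @ rng.normal(size=(4, 4))
X = X - X.mean(axis=0)                  # mean-center, as normalization layers do

# whitening: transform so the channel covariance becomes the identity
cov = X.T @ X / len(X)
eigval, eigvec = np.linalg.eigh(cov)
W = eigvec @ np.diag(eigval ** -0.5) @ eigvec.T   # ZCA whitening matrix
Z = X @ W

# any orthogonal matrix R leaves the whitened covariance unchanged;
# this is the degree of freedom CW uses to align axes with concepts
R, _ = np.linalg.qr(rng.normal(size=(4, 4)))
Z_rot = Z @ R
```

Both `Z` and `Z_rot` have identity covariance, so the rotation can be chosen purely to match concepts to axes without disturbing the whitening.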
Thus, with fairly minimal effort, one can make a small modification to a neural network architecture (adding a CW module), and in return be able to easily visualize how the network is learning all of the different concepts at any chosen layer. CW can show us how a concept is represented at a given layer of the network. What we find is that at lower layers, since a complex concept cannot be represented by the network, it often creates lower-level \textit{abstract concepts}. For example, an airplane at an early layer is represented by an abstract concept defined by white or gray objects on a blue background. A bed is represented by an abstract concept that seems to be characterized by warm colors (orange, yellow). In that sense, the CW layer can help us to discover new concepts that can be formally defined and built on, if desired. \section{Related work} \label{sec:related_work} There are several large and rapidly expanding bodies of relevant literature. \textbf{Interpretability and explainability of neural networks:}\\ There have been two schools of thought on improving the interpretability of neural networks: (1) learning an inherently \textit{interpretable} model \cite{Rudin2019}; (2) providing post-hoc \textit{explanations} for an existing neural network. CW falls within the first type, though it only sheds light on what the network is doing, rather than providing a full understanding of the network's computations. To provide a full explanation of each computation would lead to more constraints and thus a loss in flexibility, whereas CW allows more flexibility in exchange for more general types of explanations. The vast majority of current works on neural networks are of the second type, explainability. A problem with the terminology is that ``explanation'' methods are often summary statistics of performance (e.g., local approximations, general trends on node activation) rather than actual explanations of the model's calculations.
For instance, if a node is found to activate when a certain concept is present in an image, it does not mean that all information (or even the majority of information) about this concept is involved with that particular node. Saliency-based methods are the most common form of post-hoc explanations for neural networks \cite{zeiler2014visualizing, simonyan2013deep,smilkov2017smoothgrad,selvaraju2017grad}. These methods assign importance weights to each pixel of the input image to show the importance of each pixel to the image's predicted class. Saliency maps are problematic for well-known reasons: they often provide highlighting of edges in images, regardless of the class. Thus, very similar explanations are given for multiple classes, and often none of them are useful explanations \cite{Rudin2019}. Saliency methods can be unreliable and fragile \cite{adebayo2018sanity}. Other work provides explanations of how the network's latent features operate. Some measure the alignment of an individual internal unit, or a filter of a trained neural network, to a predefined concept and find some units have relatively strong alignment to that concept \cite{zhou2018interpreting,zhou2014object}. While some units (i.e., filters) may align nicely with pre-defined concepts, the concept can be represented diffusely through many units (the concept representation by individual nodes is impure); this is because the network was not trained to have concepts expressed purely through individual nodes. To address this weakness, several concept-based post-hoc explanation approaches have recently been proposed that do not rely on the concept aligning with individual units \cite{kim2018interpretability,zhou2018interpretable,ghorbani2019towards,yeh2019concept}. 
Instead of analyzing individual units, these methods try to learn a linear combination of them to represent a predefined concept \cite{kim2018interpretability} or to automatically discover concepts by clustering patches and defining the clusters as new concepts \cite{ghorbani2019towards}. Although these methods are promising, they are based on assumptions of the latent space that may not hold. For instance, these methods assume that a classifier (usually a linear classifier) exists on the latent space such that the concept is correctly classified. Since the network was not trained so that this assumption holds, it may not hold. More importantly, since the latent space is not shaped explicitly to handle this kind of concept-based explanation, unit vectors (directions) in the latent space may not represent concepts purely. We will give an example in the next section to show why latent spaces built without constraints may not achieve concept separation. CW avoids these problems because it shapes the latent space through training. In that sense, CW is closer to work on inherently interpretable neural networks, though its use-case is in the spirit of concept vectors, in that it is useful for providing important directions in the latent space. There are emerging works trying to build inherently interpretable neural networks. Like CW, they alter the network structure to encourage different forms of interpretability. For example, neural networks have been designed to perform case-based reasoning \cite{chen2019looks, li2018deep}, to incorporate logical or grammatical structures \cite{li2017aognets, granmo2019convolutional,wu2019towards}, to do classification based on hard attention \cite{mnih2014recurrent, ba2014multiple,sermanet2014attention,elsayed2019saccader}, or to do image recognition by decomposing the components of images \cite{saralajew2019classification}. 
These models all have different forms of interpretability than we consider (understanding how the latent spaces of each layer can align with a known set of concepts). Other work also develops inherently interpretable deep learning methods that can reason based on concepts, but are different from our work in terms of field of application \cite{bouchacourt2019educe}, types of concepts \cite{zhang2018interpretable,zhang2018unsupervised} and ways to obtain concepts \cite{adel2018discovering}. In the field of deep generative models, many works have been proposed to make the latent space more interpretable by forcing disentanglement. However, works such as InfoGAN \cite{chen2016infogan} and $\beta$-VAE \cite{higgins2017beta}, all use heuristic interpretability losses like mutual information, while in CW we have actual concepts that we use to align the latent space. \textbf{Whitening and orthogonality:} Whitening is a linear transformation that transforms the covariance matrix of random input vectors to be the identity matrix. It is a classical preprocessing step in data science. In the realm of deep learning, batch normalization \cite{ioffe2015batch}, which is widely used in many state-of-the-art neural network architectures, retains the standardization part of whitening but not the decorrelation. Earlier attempts whiten by periodically estimating the whitening matrix \cite{desjardins2015natural,luo2017learning}, which leads to instability in training. Other methods perform whitening by adding a decorrelation loss \cite{cogswell2015reducing}. A whitening module for ZCA has been developed that leverages the fact that SVD is differentiable, and supports backpropagation \cite{huang2018decorrelated, huang2019iterative}. Similarly, others have developed a differentiable whitening block based on Cholesky whitening \cite{siarohin2018whitening}. 
The whitening part of our CW module borrows techniques from IterNorm \cite{huang2019iterative} because it is differentiable and accelerated. CW is different from previous methods because its whitening matrix \textit{is multiplied by an orthogonal matrix} and \textit{maximizes the activation of known concepts along the latent space axes}. In the field of deep learning, many earlier works that incorporate orthogonality constraints are targeted for RNNs \cite{vorontsov2017orthogonality,mhammedi2017efficient,wisdom2016full}, since orthogonality could help avoid vanishing gradients or exploding gradients in RNNs. Other work explores ways to learn orthogonal weights or representations for all types of neural networks (not just RNNs) \cite{harandi2016generalized, huang2018orthogonal,lezcano2019cheap,lezama2018ole}. For example, some work \cite{lezama2018ole} uses special loss functions to force orthogonality. The optimization algorithms used in the above methods are all different from ours. For CW, we optimize the orthogonal matrix by Cayley-transform-based curvilinear search algorithms \cite{wen2013feasible}. While some deep learning methods also use a Cayley transform \cite{vorontsov2017orthogonality}, they do it with a fixed learning rate that does not work effectively in our setting. More importantly, the goal of doing optimization with orthogonality constraints in all these works is completely different from ours. None of them try to align columns of the orthogonal matrix with any type of concept. \section{Methodology} \label{sec:methodology} Suppose $\mathbf{x}_1,\mathbf{x}_2,...,\mathbf{x}_n\in \mathcal{X}$ are samples in our dataset and $y_1,y_2,...,y_n\in\mathcal{Y}$ are their labels. 
From the latent space $\mathcal{Z}$ defined by a hidden layer, a DNN classifier $f:\mathcal{X}\rightarrow\mathcal{Y}$ can be divided into two parts, a feature extractor $\mathbf{\Phi}:\mathcal{X}\rightarrow\mathcal{Z}$, with parameters $\theta$, and a classifier $g:\mathcal{Z}\rightarrow\mathcal{Y}$, parameterized by $\omega$. Then $\mathbf{z}=\mathbf{\Phi}(\mathbf{x};\theta)$ is the latent representation of the input $\mathbf{x}$ and $f(\mathbf{x})=g(\mathbf{\Phi}(\mathbf{x};\theta);\omega)$ is the predicted label. Suppose we are interested in $k$ concepts called $c_1,c_2,...,c_k$. We can then pre-define $k$ auxiliary datasets $\mathbf{X}_{c_1},\mathbf{X}_{c_2},...,\mathbf{X}_{c_k}$ such that samples in $\mathbf{X}_{c_j}$ are the most representative samples of concept $c_j$. Our goal is to learn $\mathbf{\Phi}$ and $g$ simultaneously, such that (a) the classifier $g(\mathbf{\Phi}(\cdot;\theta);\omega)$ can predict the label accurately; (b) the $j^{th}$ dimension $z_j$ of the latent representation $\mathbf{z}$ aligns with concept $c_j$. In other words, samples in $\mathbf{X}_{c_j}$ should have larger values of $z_j$ than other samples. Conversely, samples not in $\mathbf{X}_{c_j}$ should have smaller values of $z_j$. \subsection{Standard Neural Networks May Not Achieve Concept Separation} \label{sec:whynotnn} Some post-hoc explanation methods have looked at unit vectors in the direction of data where a concept is exhibited; this is done to measure how different concepts contribute to a classification task \cite{zhou2018interpretable}. Other methods consider directional derivatives towards data exhibiting the concept \cite{kim2018interpretability}, for the same reason. There are important reasons why these types of approaches may not work. \begin{figure}[htbp] \centering \includegraphics[width=80mm]{Figure_1.pdf} \caption{Possible data distributions in the latent space. 
\textbf{a}, the data are not mean-centered; \textbf{b}, the data are standardized but not decorrelated; \textbf{c}, the data are whitened. In both \textbf{a} and \textbf{b}, unit vectors are not valid for representing concepts. \label{fig:demo}} \end{figure} First, suppose the latent space is not mean-centered. This alone could cause problems for post-hoc methods that compute directions towards concepts. Consider, for instance, a case where all points in the latent space are far from the origin. In that case, \textit{all} concept directions point towards the same part of the space: the part where the data lies (see Figure \ref{fig:demo}(a)). This situation might be fairly easy to solve since the users can just analyze the latent space of a batch normalization layer or add a bias term. But then other problems could arise. Even if the latent space is mean-centered and standardized, the latent space of standard neural networks may not separate concepts. Consider, for instance, an elongated latent space similar to that illustrated in Figure \ref{fig:demo}(b), by the green and orange clusters. Here, two unit vectors pointing to different groups of data (perhaps exhibiting two separate concepts) may have a large inner product, suggesting that they may be part of the same concept, when in fact, they may not be similar at all, and may not even lie in the same part of the latent space. Thus, even if the latent space is standardized, multiple unrelated concepts can still appear similar because, from the origin, their centers point towards the same general direction. For the same reason, taking derivatives towards the parts of the space where various concepts tend to appear may yield similar derivatives for very different concepts. For the above reasons, a latent space in which unit vectors can effectively represent different concepts should have small inter-concept similarity (as illustrated in Figure \ref{fig:demo}(c)). 
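The failure mode just described can be reproduced numerically. The following sketch (numpy; the covariance matrix, cluster shifts, and sample sizes are invented purely for illustration) builds an elongated, correlated latent space containing two distinct concept clusters and shows that the unit vectors pointing toward their means have a large cosine similarity, even though the clusters are different:

```python
import numpy as np

# Illustration (not the paper's code): in an elongated, correlated latent
# space, the mean directions of two distinct concept clusters can have a
# cosine similarity close to 1, falsely suggesting they are one concept.
rng = np.random.default_rng(0)

# Elongated 2D Gaussian: strong correlation between the two latent axes.
cov = np.array([[10.0, 9.5],
                [9.5, 10.0]])
base = rng.multivariate_normal(np.zeros(2), cov, size=500)

# Two hypothetical "concepts": clusters shifted along the elongated direction.
concept_a = base[:250] + np.array([5.0, 4.0])
concept_b = base[250:] + np.array([4.0, 5.0])

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Unit vectors pointing from the origin toward each cluster's mean.
dir_a = concept_a.mean(axis=0)
dir_b = concept_b.mean(axis=0)

sim = cosine(dir_a, dir_b)
print(f"cosine similarity of concept directions: {sim:.3f}")
```

The two directions come out nearly parallel, which is exactly the situation sketched in Figure \ref{fig:demo}(b).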
That is, samples of different concepts should be near orthogonal in the latent space. In addition, for better concept separation, the ratio between inter-concept similarity and intra-concept similarity should be as small as possible. The CW module we introduce in this work can make the latent space \textit{mean-centered and decorrelated}. This module can align predefined concepts in orthogonal directions. More details of the proposed module will be discussed in Section \ref{sec:CW} and Section \ref{sec:detail}. The experimental results in Section \ref{sec:inter_inner} compare the inter-concept and intra-concept similarity of the latent space of standard NNs with and without the proposed CW module. The results validate that the previously mentioned problems of standard neural networks do exist, and that the proposed method successfully avoids these problems. \subsection{Concept Whitening Module} \label{sec:CW} Let $\mathbf{Z}_{d \times n}$ be the latent representation matrix of $n$ samples, in which each column $\mathbf{z}_i\in \mathbb{R}^d$ contains the latent features of the $i^{th}$ sample. Our Concept Whitening module (CW) consists of two parts, whitening and orthogonal transformation. The whitening transformation $\psi$ decorrelates and standardizes the data by \begin{equation} \psi(\mathbf{Z}) = \mathbf{W}(\mathbf{Z}-\mathbf{\mu}{\mathbf{1}_{n\times 1}}^T) \end{equation} where $\mathbf{\mu}=\frac{1}{n}\sum_{i=1}^{n}\mathbf{z}_i$ is the sample mean and $\mathbf{W}_{d \times d}$ is the whitening matrix that obeys $\mathbf{W}^T\mathbf{W}=\mathbf{\Sigma}^{-1}$. Here, $\mathbf{\Sigma}_{d \times d}=\frac{1}{n}(\mathbf{Z}-\mathbf{\mu}\mathbf{1}^T)(\mathbf{Z}-\mathbf{\mu}\mathbf{1}^T)^T$ is the covariance matrix. The whitening matrix $\mathbf{W}$ is not unique and can be calculated in many ways such as ZCA whitening and Cholesky decomposition. 
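As a concrete sketch of the transformation $\psi(\mathbf{Z}) = \mathbf{W}(\mathbf{Z}-\mathbf{\mu}\mathbf{1}^T)$ (numpy, using ZCA as one of the possible choices of $\mathbf{W}$ mentioned above; the data dimensions and distribution are arbitrary):

```python
import numpy as np

# Sketch of the whitening transformation psi(Z) = W (Z - mu 1^T) with a
# ZCA whitening matrix W = D Lambda^{-1/2} D^T, where Sigma = D Lambda D^T.
rng = np.random.default_rng(0)
d, n = 4, 2000
# Correlated toy data (d x n), one sample per column.
Z = rng.multivariate_normal(np.ones(d), np.eye(d) + 0.8, size=n).T

mu = Z.mean(axis=1, keepdims=True)          # sample mean
Zc = Z - mu                                 # mean-centered data
Sigma = (Zc @ Zc.T) / n                     # sample covariance

lam, D = np.linalg.eigh(Sigma)              # eigendecomposition of Sigma
W = D @ np.diag(lam ** -0.5) @ D.T          # ZCA whitening matrix

Z_white = W @ Zc                            # psi(Z)
Sigma_white = (Z_white @ Z_white.T) / n

# After whitening, the covariance is (numerically) the identity,
# and W satisfies W^T W = Sigma^{-1} as required.
print(np.round(Sigma_white, 3))
```

This also makes the constraint $\mathbf{W}^T\mathbf{W}=\mathbf{\Sigma}^{-1}$ easy to check directly, since for ZCA $\mathbf{W}$ is symmetric and $\mathbf{W}^2 = \mathbf{D}\mathbf{\Lambda}^{-1}\mathbf{D}^T = \mathbf{\Sigma}^{-1}$.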
Another important property of the whitening matrix is that it is rotation free; suppose $\mathbf{Q}$ is an orthogonal matrix, then \begin{equation} \mathbf{W'}=\mathbf{Q^TW} \end{equation} is also a valid whitening matrix. In our module, after whitening the latent space to endow it with the properties discussed above, we still need to rotate the samples in their latent space such that the data from concept $c_j$, namely $\mathbf{X}_{c_j}$, are highly activated on the $j^{th}$ axis. Specifically, we need to find an orthogonal matrix $\mathbf{Q}_{d \times d}$ whose column $\mathbf{q}_j$ is the $j^{th}$ axis, by optimizing the following objective: \begin{equation} \begin{aligned} \max_{\mathbf{q}_1,\mathbf{q}_2,...,\mathbf{q}_k} &\sum_{j=1}^{k}\frac{1}{n_j}\mathbf{q}_j^T\psi(\mathbf{Z}_{c_j})\mathbf{1}_{n_j\times 1} \\ s.t.\ &\mathbf{Q}^T\mathbf{Q}=\mathbf{I}_d \end{aligned} \end{equation} where $\mathbf{Z}_{c_j}$ is a $d \times n_j$ matrix denoting the latent representation of $\mathbf{X}_{c_j}$ and $c_1,c_2, ...,c_k$ are concepts of interest. An optimization problem with an orthogonality constraint like this can be solved by gradient-based approaches on the Stiefel manifold (e.g., the method of \cite{wen2013feasible}). This whole procedure constitutes CW, and can be done for any given layer of a neural network as part of the training of the network. The forward pass of the CW module, which makes predictions, is summarized in Algorithm \ref{alg:forward}. 
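The rotation-free property and the alignment objective can both be checked numerically. The following sketch (numpy; the random data, the randomly chosen $\mathbf{Q}$, and the "concept" subset are invented for illustration, whereas in CW $\mathbf{Q}$ is learned) verifies that $\mathbf{Q}^T\mathbf{W}$ is still a valid whitening matrix and evaluates the alignment score for one axis:

```python
import numpy as np

# Sketch (illustrative): a whitening matrix stays valid under any
# orthogonal rotation, W' = Q^T W; this is the degree of freedom CW uses
# to align concept samples with individual axes.
rng = np.random.default_rng(1)
d, n = 3, 5000
Z = rng.standard_normal((d, d)) @ rng.standard_normal((d, n))  # correlated data

mu = Z.mean(axis=1, keepdims=True)
Sigma = (Z - mu) @ (Z - mu).T / n
lam, D = np.linalg.eigh(Sigma)
W = D @ np.diag(lam ** -0.5) @ D.T                 # ZCA whitening matrix

Q, _ = np.linalg.qr(rng.standard_normal((d, d)))   # a random orthogonal Q
Zw = (Q.T @ W) @ (Z - mu)                          # apply W' = Q^T W
cov = Zw @ Zw.T / n                                # still (numerically) identity

# Alignment objective for a hypothetical concept axis q_j = Q[:, 0]:
# the mean activation of whitened "concept" samples along that axis.
psi_concept = W @ (Z[:, :100] - mu)                # pretend these are X_{c_j}
score = Q[:, 0] @ psi_concept.mean(axis=1)
print(np.round(cov, 3), float(score))
```

CW maximizes such scores over all concept axes jointly, subject to $\mathbf{Q}^T\mathbf{Q}=\mathbf{I}_d$, rather than picking $\mathbf{Q}$ at random as done here.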
\begin{algorithm*}[ht] \caption{Forward Pass of CW Module} \begin{tabbing} xxx \= xx \= xx \= xx \= xx \= xx \kill 1: \> \textbf{Input}: mini-batch input $\mathbf{Z} \in \mathbb{R}^{d\times m}$ \\ 2: \> \textbf{Optimization Variables:} orthogonal matrix $\mathbf{Q}\in \mathbb{R}^{d\times d}$ (learned in Algorithm 2)\\ 3: \> \textbf{Output:} whitened representation $\mathbf{\hat Z} \in \mathbb{R}^{d\times m}$ \\ 4: \> calculate batch mean: $\mu = \frac{1}{m} \mathbf{Z}\cdot\mathbf{1}$, and center the activation: $\mathbf{Z_C} = \mathbf{Z} - \mu\cdot\mathbf{1}^T$ \\ 5: \> calculate ZCA-whitening matrix $\mathbf{W}$, for details see Algorithm 1 of \cite{huang2019iterative} \\ 6: \> calculate the whitened representation: $\mathbf{\hat Z} = \mathbf{Q}^T\mathbf{W}\mathbf{Z_C}$. \end{tabbing} \label{alg:forward} \end{algorithm*} \begin{algorithm*}[ht] \caption{Alternating Optimization Algorithm for Training} \begin{tabbing} xxx \= xx \= xx \= xx \= xx \= xx \kill 1: \> \textbf{Input}: main objective dataset $\mathcal{D} = \{\mathbf{x}_i,y_i\}_{i=1}^{n} $, concept datasets $\mathbf{X}_{c_1},\mathbf{X}_{c_2}...,\mathbf{X}_{c_k}$ \\ 2: \> \textbf{Optimization Variables}: $\theta$, $\omega$, $\mathbf{W}$, $\mu$, $\mathbf{Q}$, whose definitions are in Section \ref{sec:CW} \\ 3: \> \textbf{Parameters}: $\beta$, $\eta$\\ 4: \> \textbf{for} $t$ = 1 to $T$ \textbf{do} \\ 5: \> \> randomly sample a mini-batch $\{\mathbf{x}_i,y_i\}_{i=1}^{m}$ from $\mathcal{D}$ \\ 6: \> \> do one step of SGD w.r.t. 
$\theta$ and $\omega$ on the loss $\frac{1}{m}\sum_{i=1}^{m}\ell(g(\mathbf{Q}^T\psi(\mathbf{\Phi}(\mathbf{x}_i;\theta);\mathbf{W},\mathbf{\mu});\omega),y_i)$ \\ 7: \> \> update $\mathbf{W}$ and $\mu$ by exponential moving average \\ 8: \> \> \textbf{if} \ $t\hspace*{-3pt}\mod 20 = 0$ \textbf{then} \\ 9: \> \> \> sample mini-batches $\{\mathbf{x}_i^{(c_1)}\}_{i=1}^{m},\{\mathbf{x}_i^{(c_2)}\}_{i=1}^{m},...,\{\mathbf{x}_i^{(c_k)}\}_{i=1}^{m}$ from $\mathbf{X}_{c_1},\mathbf{X}_{c_2},...,\mathbf{X}_{c_k}$ \\ 10: \> \> \> calculate $\mathbf{G}=\nabla_{\mathbf{Q}}$, with columns $\mathbf{g}_j=-\frac{1}{m}\sum_{i=1}^{m}\psi(\mathbf{\Phi}(\mathbf{x}_{i}^{(c_j)};\theta);\mathbf{W},\mathbf{\mu})$ when $1\leq j \leq k$, else $\mathbf{g}_j=\mathbf{0}$ \\ 11: \> \> \> calculate the exponential moving average of $\mathbf{G}$: $\mathbf{G'} = \beta \mathbf{G'} + (1-\beta) \mathbf{G}$ \\ 12: \> \> \> obtain learning rate $\eta$ by curvilinear search, for details see Algorithm 1 of \cite{wen2013feasible} \\ 13: \> \> \> update $\mathbf{Q}$ by Cayley transform: $\mathbf{Q}\leftarrow(I+\frac{\eta}{2}(\mathbf{G'}\mathbf{Q}^T-\mathbf{Q}\mathbf{G'}^T))^{-1}(I-\frac{\eta}{2}(\mathbf{G'}\mathbf{Q}^T-\mathbf{Q}\mathbf{G'}^T))\mathbf{Q}$ \end{tabbing} \label{alg:twostep} \end{algorithm*} \subsection{Optimization and Implementation Detail} \label{sec:detail} Whitening has not (to our knowledge) been previously applied to align the latent space to concepts. In the past, whitening has been used to speed up back-propagation. The specific whitening problem for speeding up back-propagation is different from that for concept alignment--the rotation matrix is not present in other work on whitening, nor is the notion of a concept--however, we can leverage some of the optimization tools used in that work on whitening \cite{huang2019iterative,huang2018orthogonal,siarohin2018whitening}. 
Specifically, we adapt ideas underlying the IterNorm algorithm \cite{huang2019iterative}, which employs Newton's iterations to approximate ZCA whitening, to the problem studied here. Let us now describe how this is done. The whitening matrix in ZCA is \begin{equation} \mathbf{W} = \mathbf{D}\mathbf{\Lambda}^{-\frac{1}{2}}\mathbf{D}^T \end{equation} where $\mathbf{\Lambda}_{d \times d}$ and $\mathbf{D}_{d \times d}$ are the eigenvalue diagonal matrix and eigenvector matrix given by the eigenvalue decomposition of the covariance matrix, $\mathbf{\Sigma}=\mathbf{D}\mathbf{\Lambda}\mathbf{D}^T$. Like other normalization methods, we calculate a $\mathbf{\mu}$ and $\mathbf{W}$ for each mini-batch of data, and average them together to form the model used in testing. As mentioned in Section \ref{sec:CW}, the challenging part for CW is that we also need to learn an orthogonal matrix by solving an optimization problem. To do this, we will optimize the objective while strictly maintaining the matrix to be orthogonal by performing gradient descent with a curvilinear search on the Stiefel manifold \cite{wen2013feasible} and adjust it to deal with mini-batch data. \textbf{The two step alternating optimization}: During training, our procedure must handle two types of data: data for calculating the main objective and the data representing the predefined concepts. The model is optimized by alternating optimization: the mini-batches of the main dataset and the auxiliary concept dataset are fed to the network, and the following two objectives are optimized in turns. The first objective is the main objective (usually related to classification accuracy): \begin{equation} \min_{\theta, \omega, W, \mu} \frac{1}{n}\sum_{i=1}^{n}\ell(g(\mathbf{Q}^T\psi(\mathbf{\Phi}(\mathbf{x}_i;\theta);\mathbf{W},\mathbf{\mu});\omega),y_i) \end{equation} where $\mathbf{\Phi}$ and $g$ are layers before and after the CW module parameterized by $\theta$ and $\omega$ respectively. 
$\psi$ is a whitening transformation parameterized by sample mean $\mathbf{\mu}$ and whitening matrix $\mathbf{W}$. $\mathbf{Q}$ is the orthogonal matrix. The combination $\mathbf{Q}^T\psi$ forms the CW module (which is also a valid whitening transformation). $\ell$ is any differentiable loss. We use cross-entropy loss for $\ell$ in our implementation to do classification, since it is the most commonly used. The second objective is the concept alignment loss: \begin{equation} \begin{aligned} \max_{\mathbf{q}_1,\mathbf{q}_2,...,\mathbf{q}_k} &\sum_{j=1}^{k}\frac{1}{n_j}\sum_{x_i^{(c_j)}\in X_{c_j}}\mathbf{q}_j^T\psi(\mathbf{\Phi}(\mathbf{x}_{i}^{(c_j)};\theta);\mathbf{W},\mathbf{\mu}) \\ s.t.\ &\mathbf{Q}^T\mathbf{Q}=\mathbf{I}_d. \end{aligned} \end{equation} The orthogonal matrix $\mathbf{Q}$ is fixed when training for the main objective and the other parameters are fixed when training for $\mathbf{Q}$. The optimization problem is a linear programming problem with quadratic constraints (LPQC) which is generally NP-hard. Since directly solving for the optimal solution is intractable, we optimize it by gradient methods on the Stiefel manifold. At each step $t$, in which the second objective is handled, the orthogonal matrix $\mathbf{Q}$ is updated by Cayley transform $$\mathbf{Q}^{(t+1)}=\left(I+\frac{\eta}{2}\mathbf{A}\right)^{-1}\left(I-\frac{\eta}{2}\mathbf{A}\right)\mathbf{Q}^{(t)}$$ where $\mathbf{A}=\mathbf{G}(\mathbf{Q}^{(t)})^T-\mathbf{Q}^{(t)}\mathbf{G}^T$ is a skew-symmetric matrix, $\mathbf{G}$ is the gradient of the loss function and $\eta$ is the learning rate. The optimization procedure is accelerated by curvilinear search on the learning rate at each step \cite{wen2013feasible}. Note that, in the Cayley transform, the stationary points are reached when $\mathbf{A}=\mathbf{0}$, which has multiple solutions. Since the solutions are in high-dimensional space, these stationary points are very likely to be saddle points, which can be avoided by SGD. 
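A single Cayley-transform update can be sketched as follows (numpy; $\mathbf{G}$ is a stand-in random gradient and the learning rate is fixed rather than obtained by curvilinear search). The key property it demonstrates is that the update keeps $\mathbf{Q}$ exactly orthogonal at every step:

```python
import numpy as np

# Sketch of one Cayley-transform step:
#   Q <- (I + eta/2 A)^{-1} (I - eta/2 A) Q,  A = G Q^T - Q G^T (skew-symmetric).
rng = np.random.default_rng(0)
d = 5
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))   # start from an orthogonal Q
G = rng.standard_normal((d, d))                    # stand-in gradient of the loss
eta = 0.1                                          # fixed step size (illustrative)
I = np.eye(d)

A = G @ Q.T - Q @ G.T                              # skew-symmetric by construction
assert np.allclose(A, -A.T)

# Solve (I + eta/2 A) Q_next = (I - eta/2 A) Q instead of forming the inverse.
Q_next = np.linalg.solve(I + (eta / 2) * A, (I - (eta / 2) * A) @ Q)

# Orthogonality is preserved up to floating-point error.
print(np.abs(Q_next.T @ Q_next - I).max())
```

Because the Cayley transform of a skew-symmetric matrix is orthogonal, the iterate never leaves the Stiefel manifold, which is why no projection step is needed.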
Therefore, we use the stochastic gradient calculated by a mini-batch of samples to replace $\mathbf{G}$ at each step. To accelerate and stabilize the stochastic gradient, we also apply momentum to it during implementation. Algorithm \ref{alg:twostep} provides details for the two-step alternating optimization. \textbf{Dealing with the convolution outputs:} In the previous description of our optimization algorithm, we assume that the activations in the latent space form a vector. However, in CNNs, the output of the layer is a tensor instead of a vector. In CNNs, a feature map (a channel within one layer, created by a convolution of one filter) contains the information of how activated a part of the image is by a single filter, which may be a detector for a specific concept. Let us reshape the feature map into a vector, where each element of the vector represents how much one part of the image is activated by the filter. Thus, if the feature map for one filter is $h \times w$, then a vector of length $hw$ contains the activation information for that filter around the whole feature map. We do this reshaping procedure for each filter, which reshapes the output of a convolution layer $Z_{h\times w \times d \times n}$ into a matrix $Z_{d\times(hwn)}$, where $d$ is the number of channels. We then perform CW on the reshaped matrix. After doing this, the resulting matrix is still size $d\times(hwn)$. If we reshape this matrix back to its original size as a tensor, one feature map of the tensor now (after training) represents whether a meaningful concept is detected at each location in the image for that layer. Note that the output of a filter is now a feature map, which is an $h\times w$ matrix, but the concept activation score we used in the optimization problem is a scalar. Therefore, we need to get an activation value from the feature map. There are multiple ways to do this. 
We try the following calculations to define activation based on the feature map: (a) mean of all feature map values; (b) max of all feature map values; (c) mean of all positive feature map values; (d) mean of the down-sampled feature map obtained by max pooling. We use (d) in our experiments since it is good at capturing both high-level and low-level concepts. Detailed analysis and experiments about the choice of different activation calculations are discussed in Supplementary Information \ref{sec:activation}. \textbf{Warm start with pretrained models:} Let us discuss some aspects of practical implementation. The CW module can substitute for other normalization modules such as BatchNorm in a hidden layer of the CNN. Therefore, one can use the weights of a pretrained model as a warm start. To do this, we might leverage a pretrained model (for the same main objective) that does not use CW, and replace a BatchNorm layer in that network with a CW layer. The model usually converges in one epoch (one pass over the data) if a pretrained model is used. Note that CW does strictly more work than BatchNorm. CW alone achieves the desirable outcomes of using BatchNorm; therefore, there is no need to use BatchNorm when CW is in place. \textbf{Computational Efficiency:} The CW module involves two iterative optimization steps: one for whitening normalization and one for concept alignment. The efficiency of iterative whitening normalization is justified experimentally in \cite{huang2019iterative}; the concept alignment optimization is performed only every 20 batches, usually costing less than 20 matrix multiplications and 10 matrix inversions, which do not notably hurt the speed of training. Indeed, our experiments show that there is no significant training speed slowdown using CW compared to using vanilla BN. 
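The reshaping of the convolution output and activation calculation (d) can be sketched as follows (numpy; the tensor sizes and the $2\times2$ pooling window are invented for illustration):

```python
import numpy as np

# Sketch of the reshaping described above: a conv output of shape
# (n, d, h, w) is flattened to (d, h*w*n) so CW can whiten over channels,
# then reshaped back; the per-image concept activation (option (d)) is
# the mean of a max-pooled feature map.
rng = np.random.default_rng(0)
n, d, h, w = 8, 16, 4, 4
feat = rng.standard_normal((n, d, h, w))

# (n, d, h, w) -> (d, h*w*n): each column is one spatial location of one image.
flat = feat.transpose(1, 2, 3, 0).reshape(d, h * w * n)
# ... CW would whiten and rotate `flat` here ...
restored = flat.reshape(d, h, w, n).transpose(3, 0, 1, 2)
assert np.allclose(restored, feat)          # the reshaping is lossless

def activation_maxpool_mean(fmap, k=2):
    """Option (d): k x k max pooling, then mean of the pooled map."""
    hh, ww = fmap.shape
    pooled = fmap.reshape(hh // k, k, ww // k, k).max(axis=(1, 3))
    return pooled.mean()

score = activation_maxpool_mean(feat[0, 0])
print(f"activation of image 0 on axis 0: {score:.3f}")
```

The max-pool-then-mean step is what turns each $h \times w$ feature map into the scalar activation used by the alignment objective.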
\section{Experiments} \label{sec:experiment} In this section, we first show that after replacing one batch norm (BN) layer with our CW module, the accuracy of image recognition is still on par with the original model (\ref{sec:acc}). After that, we visualize the concept basis we learn and show that the axes are aligned with the concepts assigned to them. Specifically, we display the images that are most activated along a single axis (\ref{sec:top10}); we then show how two axes interact with each other (\ref{sec:2drep}); and we further show how the same concept evolves in different layers (\ref{sec:trajectory}), where we have replaced one layer at a time. Then we experimentally validate that the problems with standard neural networks mentioned in Section \ref{sec:whynotnn} do exist, and show that CW can solve these problems (\ref{sec:inter_inner}). Moreover, we quantitatively measure the interpretability of our concept axes and compare with other concept-based neural network methods (\ref{sec:quantitative_eval}). We also show how we can use the learned representation to measure the contributions of the concepts (\ref{sec:concept_importance}). Finally, we show the practicality of the CW module through a case study of skin lesion diagnosis (\ref{sec:isic}). \subsection{Main Objective Accuracy} \label{sec:acc} We evaluate the image recognition accuracy of the CNNs before and after adding a CW module. We show that simply replacing a BN module with a CW module and training for a single epoch leads to similar main objective performance. Specifically, after replacing the BN module with the CW module, we trained popular CNN architectures including VGG16+BN \cite{simonyan2014very}, ResNet with 18 layers and 50 layers \cite{he2016deep} and DenseNet161 \cite{huang2017densely} on the Places365 \cite{zhou2017places} dataset. The auxiliary concept dataset we used is MS COCO \cite{lin2014microsoft}. 
Each annotation, e.g., ``person'' in MS COCO, was used as one concept, and we selected all the images with this annotation (images having ``person'' in them), cropped them using bounding boxes and used the cropped images as the data representing the concept. The concept bank has 80 different concepts corresponding to 80 annotations in MS COCO. In order to limit the total time of the training process, we used pretrained models for the popular CNN architectures (discussed above) and fine-tuned these models after BN was replaced with CW. Table \ref{fig:acc_places} shows the average test accuracy on the validation set of Places365 over 5 runs. We randomly selected 3 concepts from the concept bank to learn using CW for each run, and used the average of them to measure accuracy. We repeated this, applying CW to different layers and reported the average accuracy among the layers. The accuracy does not change much when CW is applied to the different layers and trained on different numbers of concepts, as shown in Supplementary Information \ref{sec:acc_sensitivity}. \begin{table}[htbp] \centering \begin{tabular}{lcc c cc} \hline & \multicolumn{2}{c}{Top-1 acc.} && \multicolumn{2}{c}{Top-5 acc.} \\ \cline{2-3} \cline{5-6} & Original & +CW && Original & +CW \\ \hline VGG16-BN & 53.6 & 53.3 && 84.2 & 83.8\\ ResNet18 & 54.5 & 53.9 && 84.6 & 84.2\\ ResNet50 & 54.7 & 54.9 && 85.1 & 85.2\\ DenseNet161 & 55.3 & 55.5 && 85.2 & 85.6\\ \hline \end{tabular} \caption{Top-1 and top-5 test accuracy on Places365 dataset. Our results show that CW does not hurt performance.} \label{fig:acc_places} \end{table} Because we have leveraged a pretrained model, when training with CW, we conduct only one additional epoch of training (one pass over the dataset) for each run. As shown in Table \ref{fig:acc_places}, the performance of these models using the CW module is on par with the original model: the difference is within $1\%$ with respect to top-1 and top-5 accuracy. 
This means that, in practice, \textit{if a pretrained model (using BN) exists, one can simply replace the BN module with a CW module and train it for one epoch, in which case, the pretrained black-box model can be turned into a more interpretable model that is approximately equally accurate.} \subsection{Visualizing the Concept Basis} \label{sec:visualize_basis} In order to demonstrate the interpretability benefits of models equipped with a CW module, we visualize the concept basis in the CW module and validate that the axes are aligned with their assigned concepts. In detail, (a) we check the most activated images on these axes; (b) we look at how images are distributed in a 2D slice of the latent space; (c) we show how realizations of the same concept change if we apply CW on different layers. All experiments in Section \ref{sec:visualize_basis} were done on ResNet18 equipped with CW trained on Places365 and three simultaneous MS COCO concepts. \subsubsection{Top-10 Activated Images} \label{sec:top10} We sort all validation samples by their activation values (discussed in Section \ref{sec:detail}) to show how much they are related to the concept. Figure \ref{fig:top10} shows the images that have the top-10 largest activations along three different concepts' axes. Note that all these concepts are trained together using one CW module. From Figure \ref{fig:top10}(b), we can see that all of the top activated images have the same semantic meaning when the CW module is located at a higher layer (i.e., the 16th layer). Figure \ref{fig:top10}(a) shows that when the CW module is applied to a lower layer (i.e., the 2nd layer), it tends to capture low-level information such as color or texture characteristic of these concepts. For instance, the top activated images on the ``airplane'' axis generally have a blue background with a white or gray object in the middle. 
It is reasonable that the lower-layer CW module cannot extract complete information about high-level concepts such as ``airplane'' since the model complexity of the first two layers is limited. \begin{figure}[htbp] \centering \includegraphics[width=80mm]{Figure_2.jpg} \caption{Top-10 images activated on axes representing different concepts. \textbf{a}, results when the $2^{nd}$ layer (BN) is replaced by CW; \textbf{b}, results when the $16^{th}$ layer (BN) is replaced by CW.\label{fig:top10}} \end{figure} In that sense, the CW layer has \textit{discovered} lower-level characteristics of a more complex concept; namely, it has discovered that the blue images with white objects are primitive characteristics that can approximate the ``airplane'' concept. Similarly, the network seems to have discovered that the appearance of warm colors is a lower-level characteristic of the ``bedroom'' concept, and that a dark background with vertical light streaks is a characteristic of the ``person'' concept. Interestingly, when different definitions of activation are used (namely the options discussed in Section \ref{sec:detail}), the characteristics discovered by the network often look different. Some of these are shown in Supplementary Information \ref{sec:top10_act}. Moreover, similar visualizations show that CW can deal with various types of concepts. Top activated images on more concepts, including concepts defined as objects and concepts defined as general characteristics, can be found in Supplementary Information \ref{sec:more_concepts}. Top activated images visualized with empirical receptive fields can be found in Supplementary Information \ref{sec:receptive_fields}. \subsubsection{2D-representation space visualization} \label{sec:2drep} \begin{figure*}[t] \centering \includegraphics[width=170mm]{Figure_3.pdf} \caption{Joint distribution of the bed-person subspace. The bounding box given by projected values in the subspace is evenly divided into $20\times20$ blocks. 
\textbf{a}, A random test image falling into each block; \textbf{b}, Density map of test image representations.\label{fig:2drep}} \end{figure*} Let us consider whether joint information about different concepts is captured by the latent space of CW. To investigate how the data are distributed in the new latent space, we pick a 2D slice of the latent space, which means we select two axes $\mathbf{q}_i$ and $\mathbf{q}_j$ and look at the subspace they form. The data's joint distribution on the two axes is shown in Figure \ref{fig:2drep}. To visualize the joint distribution, we first compute the activations of all validation data on the two axes, then divide the latent space into a $20\times20$ grid of blocks, where the maximum and minimum activation values are the top and bottom of the grid. For the grid shown in Figure \ref{fig:2drep}(a), we randomly select one image that falls into each block, and display the image in its corresponding block. If there is no image in the block, the block remains black. From Figure \ref{fig:2drep}(a), we observe that the axes are not only aligned with their assigned concepts, but also incorporate joint information. For example, a ``person in bed'' has high activation on both the ``person'' axis and the ``bed'' axis. We also include a 2D histogram of the number of images that fall into each block. As shown in Figure \ref{fig:2drep}(b), most images are distributed near the center (which is the origin), suggesting that the samples' feature vectors have a high probability of being nearly orthogonal to the concept axes we picked (meaning that they do not exhibit the two concepts), and consequently the latent features have near $0$ activation on the concept axes themselves. \subsubsection{Trajectory of Concepts in Different Layers} \label{sec:trajectory} Although our objective is the same when we apply the CW module to different layers in the same CNN, the latent space we get might be different.
This is because different layers might be able to express different levels of semantic meaning. Because of this, it might be interesting to track how the representation of a single image will change as the CW module is applied to different layers of the CNN. In order to better understand the latent representation, we plot a 2D slice of the latent space. Unlike in the 2D-representation space visualization (Figure \ref{fig:2drep}), here, a point in the plot is not specified by the activation values themselves but by their rankings. For example, the point $(0.7, 0.1)$ means the point is at the $70^{th}$ percentile on the first axis and the $10^{th}$ percentile on the second axis. We use percentile ranks instead of raw values because, as shown in the 2D-representation space visualization (Figure \ref{fig:2drep}), most points are near the center of the plot, so the rankings spread the values for plotting purposes. \begin{figure*}[t] \centering \includegraphics[width=170mm]{Figure_4.pdf} \caption{2D representation plot of two representative images. Each point in the right trajectory plot corresponds to the percentile rank for the activation values on each axis. The number labeling each point on the plot provides the layer depth of the CW module. The trajectory shows how the percentile rank of the left image changes when CW is applied to different layers. \label{fig:trajectory}} \end{figure*} Figure \ref{fig:trajectory} shows the 2D representation plot of two representative images. Each point in the plot corresponds to the percentile rank representation of the image when the CW module is applied to different layers. The points are connected by arrows according to the depth of the layer. These plots confirm that the abstract concepts learned in the lower layers tend to capture lower-level meaning (such as colors or shapes) while the higher layers capture high-level meaning (such as types of objects).
For example, in the left image in Figure \ref{fig:trajectory}(a), the bed is blue, where blue is typical low level information about the ``airplane'' class but not about the ``bed'' class, since bedrooms usually have warm colors. Therefore, in lower layers, the bed image has a higher ranking on the ``airplane'' axis than on the ``bed'' axis. However, when CW is applied to deeper layers, high level information is available, and thus the image becomes highly ranked on the ``bed'' axis and lower on the ``airplane'' axis. In Figure \ref{fig:trajectory}(b), traversing the network's layers, the image of a sunset does not have the typical blue coloring of a sky. Its warm colors put it high on the ``bedroom'' concept for the second layer, and low on the ``airplane'' concept. However, as we look at higher layers, where the network can represent more sophisticated concepts, we see the image's rank grow on the ``airplane'' concept (perhaps the network uses the presence of skies to detect airplanes), and decrease on the ``bed'' concept. \subsection{Separability of Latent Representations} \label{sec:inter_inner} In this subsection, we evaluate properties of the spatial distribution of the concepts in the latent space. By experimentally comparing such properties across latent representations produced by the CW module and other methods, we demonstrate that the issues arising in standard methods, as outlined in Section \ref{sec:whynotnn}, do not occur when using CW. We also investigate such properties on a non-posthoc neural network, trained with an auxiliary loss that aims to classify different concepts in the latent space (that is, in the objective, there are classification losses for each axis, using each axis' assigned concept as its label). Interestingly, we find that the issues mentioned in Section \ref{sec:whynotnn} may also exist in that network. The experiments in Section \ref{sec:inter_inner} were all done on ResNet18.
The CW module was trained with seven simultaneous MS COCO concepts. Specifically, for each concept image, we first extract its latent space representation. The representation for instance $j$ of concept $i$ is denoted $\mathbf{x}_{ij}$. Then, intra-concept similarity for concept $i$, denoted $d_{ii}$, is defined to be: \begin{equation} d_{ii} = \frac{1}{n^2}\left(\sum_{j=1}^{n} \sum_{k=1}^{n} \frac{\mathbf{x}_{ij}\cdot \mathbf{x}_{ik}}{\|\mathbf{x}_{ij}\|_2\|\mathbf{x}_{ik}\|_2}\right) \end{equation} where $n$ is the total number of instances of concept $i$. Inter-concept similarity between concepts $p$ and $q$ is similarly defined as: \begin{equation} d_{pq} = \frac{1}{nm}\left(\sum_{j=1}^{n} \sum_{k=1}^{m} \frac{\mathbf{x}_{pj}\cdot \mathbf{x}_{qk}}{\|\mathbf{x}_{pj}\|_2\|\mathbf{x}_{qk}\|_2}\right) \end{equation} where $n$ and $m$ are the numbers of instances of concepts $p$ and $q$, respectively. Indeed, intra-concept similarity is the average pairwise cosine similarity between instances of the same concept, and inter-concept similarity is the average pairwise cosine similarity between instances of two different concepts. With those defined, we plot heat maps in Figure \ref{fig:inner_product} where the value in the cell at row $i$, column $j$ is computed as: \begin{equation} Q_{ij} = \frac{d_{ij}}{\sqrt{d_{ii}d_{jj}}}. \end{equation} From Figure \ref{fig:inner_product}, we notice that with the CW module, latent representations of concepts achieve greater separability: the ratios between inter-concept and intra-concept similarities (average 0.35) are notably smaller than those of standard CNNs (average 0.94). In addition, without normalization, the CW module has very small inter-concept similarities (average 0.05) while analogous values for a standard neural network are around 0.74. This means that in the latent space of CW, two concepts are nearly orthogonal, while in a standard neural network, they are generally not.
\textit{This indicates that some of the problems we identified in Section \ref{sec:whynotnn} occur in standard neural networks, but they do not occur with CW.} \begin{figure*}[ht] \centering \includegraphics[width=170mm]{Figure_5.pdf} \caption{Normalized intra-concept and inter-concept similarities. The diagonal values are normalized average similarities (see definition in Section \ref{sec:inter_inner}) between latent representations of images of the same concept; off-diagonal values are normalized average similarities between latent representations of images of different concepts. \textbf{a}, The $16^{th}$ layer is a BN module; \textbf{b}, The $16^{th}$ layer is a BN module with auxiliary loss to classify these concepts; \textbf{c}, The $16^{th}$ layer is a CW module. \label{fig:inner_product}} \end{figure*} In this experiment, as mentioned earlier, we also trained a standard neural network with a concept-distinction auxiliary loss. The auxiliary loss is the cross entropy of the first several dimensions in the latent space with respect to the concepts we investigated. Shown in Figure \ref{fig:inner_product}(b), the latent representations do not naturally help concept separation. The average ratio between inter-concept and intra-concept similarities is 0.85. Without normalization, the average inter-concept similarity is also around 0.74, similar to that of the standard neural network without the auxiliary loss. This has important implications: \textit{good discriminative power in the latent space does not guarantee orthogonality of different concepts. Thus, the whitening step is crucial for representing pure concepts.} \subsection{Quantitative Evaluation of Interpretability} \label{sec:quantitative_eval} In this subsection, we measure the interpretability of the latent space quantitatively and compare it with other concept-based methods. 
First, we measure the purity of learned concepts by the AUC (of classifying the concept, not classifying with respect to the label for the overall prediction problem) calculated from the activation values. To calculate the test AUC, we divide the concept bank, containing 80 concepts extracted from MS COCO, into training sets and test sets. After training the CW module using the training set, we extract the testing samples' activation values on the axis representing the concept. For the target concept, we assign samples of this concept to the label $1$ while giving samples of the other 79 concepts label $0$. In this way, we calculate the one-vs-all test AUC score of classifying the target concept in the latent space. The AUC score measures whether the samples belonging to a concept are ranked higher than other samples. That is, the AUC score indicates the purity of the concept axis. Specifically, we randomly choose 14 concepts from the concept bank for the purity comparison. Since our CW module can learn multiple concepts at the same time, we divide the 14 concepts into two groups and train CW with 7 simultaneous concept datasets. We compared the AUC concept purity of CW with the concept vectors learned by TCAV \cite{kim2018interpretability} from black box models, IBD \cite{zhou2018interpretable} from black box models, and filters in standard CNNs \cite{zhou2014object}. Since TCAV and IBD already find concept vectors, we use the samples' projections on the vectors to measure the AUC score. Note that in their original papers, the concept vectors are calculated for only one concept each time; therefore, we calculated 14 different concept vectors, each by training a linear classifier in the black box's latent space, with the training set of the target concept as positive samples and samples randomly drawn from the main dataset as negative samples. 
For standard CNNs, we measure the AUC score for the output of all filters and choose the best one to compare with our method, separately for each concept (denoted ``Best Filter''). Figure \ref{fig:auc} shows the AUC concept purity of ``airplane'' and ``person'' for these methods across different layers. The error bars on Figure \ref{fig:auc} were obtained by splitting the testing set into 5 parts and calculating AUC over each of them. The AUC plots for the other 12 concepts are shown in Supplementary Information \ref{sec:auc_all}. From the plots, we observe that \textit{concepts learned in the CW module are generally purer than those of other methods.} This is attributed to the orthogonality of concept representations as illustrated in Section \ref{sec:whynotnn}, as a result of CW's whitening of the latent space and optimization of the loss function. \begin{figure}[ht] \centering \includegraphics[width=80mm]{Figure_6.pdf} \caption{Concept purity measured by AUC score. \textbf{a}, concept ``airplane''; \textbf{b}, concept ``person.'' Concept purity of the CW module is compared to several posthoc methods on different layers. The error bar is the standard deviation over 5 different test sets, each of which is $20\%$ of the entire test set. \label{fig:auc}} \end{figure} We perform another quantitative evaluation that aims to measure the correlation of axes in the latent space before and after the CW module is applied. For comparison with posthoc methods like TCAV and IBD, we measure the output of their BN modules in the pretrained model, because the outputs of these layers are mean-centered and normalized, which, as we discussed, are important properties for concept vectors. As shown by the absolute correlation coefficients plotted in Figure \ref{fig:correlation}(a), the axes still have relatively strong correlation after passing through the BN module. If CW were applied instead of BN, they would instead be decorrelated as shown in Figure \ref{fig:correlation}(b).
This figure shows the correlation matrices for the $16^{th}$ layer. The same correlation comparison is shown in Supplementary Information \ref{sec:correlation_all} when CW is applied to other layers. These results reflect why purity of concepts is important; \textit{when the axes are pure, the signal of one concept can be concentrated only on its axis, while in standard CNNs, the concept could be strewn throughout the latent space.} \begin{figure}[ht] \centering \includegraphics[width=80mm]{Figure_7.jpg} \caption{Absolute correlation coefficient of every feature pair in the $16^{th}$ layer. \textbf{a}, when the $16^{th}$ layer is a BN module; \textbf{b}, when the $16^{th}$ layer is a CW module. \label{fig:correlation}} \end{figure} \subsection{Concept Importance} \label{sec:concept_importance} In order to obtain practical insights into how the concepts contribute to the classification results, we can measure the concept importance. The concept importance of the $j^{th}$ axis is defined as the ratio of a ``switched loss'' to the original loss: \begin{equation} CI_j = \frac{e^{(j)}_{\rm switch}}{e_{\rm orig}} \end{equation} where the switched loss $e^{(j)}_{\rm switch}$ is the loss calculated when the sample values of the $j^{th}$ axis are randomly permuted, and $e_{\rm orig}$ is the original loss without permutation. The expression for $CI_j$ is similar to classical definitions of variable importance \cite{breiman2001random,fisher2019all}. Specifically: \begin{itemize} \item To measure the contribution of a concept to the entire classifier, the training loss function can be used in the variable importance calculation, which is the multi-class cross entropy in this case. \item To measure the contribution of a concept to a target class, e.g., how much ``bed'' contributes to ``bedroom,'' one can use a balanced binary cross entropy loss in the variable importance calculation, calculated on the softmax probability of the target class.
The concept importance score is measured on the test set to prevent overfitting. \end{itemize} In our experiments, we measure concept importance scores of the learned concepts to different target classes in the Places365 dataset (corresponding to the second of the bullets above). Figure \ref{fig:concept_importance} shows the results in a grouped bar plot. The target classes we choose relate meaningfully to a specific concept learned in CW (e.g., ``airplane'' and ``airfield''). We apply CW on the $16^{th}$ layer since the concepts are generally purer in that layer, as shown in Figure \ref{fig:auc}. As shown in Figure \ref{fig:concept_importance}, the irrelevant concepts have concept importance scores near 1.0 (no contribution), e.g., ``airplane'' is not important to the detection of ``bedroom.'' For the concepts that relate meaningfully to the target class, e.g., ``airplane'' to ``airfield,'' the concept importance scores are much larger than those for other concepts. \textit{Thus, the concept importance score measured on the CW latent space can tell us the contribution of the concept to the classification. For example, it can tell us how much a concept (such as ``airplane'') contributes to classifying ``airfield,'' or how much ``book'' contributes to classifying ``library.''} \begin{figure}[t] \centering \includegraphics[width=80mm]{Figure_8.jpg} \caption{Concept importance to different Places365 classes measured on the concept axes when CW is applied to the $16^{th}$ layer. Each group in the bar plot corresponds to a target class. The bars in the same group show the concept importance scores of the learned concepts to the target class. Concepts that relate meaningfully to the target class (e.g., ``airplane'' and ``airfield'') have larger importance scores than irrelevant concepts.\label{fig:concept_importance}} \end{figure} \subsection{Case Study: Skin Lesion Diagnosis} \label{sec:isic} We provide a case study of a medical imaging dataset of skin lesions.
The dataset of dermoscopic images is collected from the ISIC archive \cite{isic2020}. Because the dermoscopic images corresponding to different diagnoses vary greatly in appearance, we focus on predicting whether a skin lesion is malignant for each of the histopathology images (9058 histopathology images in total). We choose ``age $<$ 20'' and ``size $\geq$ 10 mm'' as the concepts of interest and select the images with corresponding meta information to form the concept datasets. We chose these concepts due to their availability in the ISIC dataset. The 10 mm cutoff, for instance, is commonly used in the evaluation of skin lesions \cite{lewis1998}. Details about the experimental results, including test accuracy, separability of latent representation, AUC concept purity, correlation of axes, and concept importance, are shown in Supplementary Information \ref{sec:isic_app}. The main results of the case study are: \begin{itemize} \item The conclusions of the CW performance analysis on the ISIC dataset are very similar to our earlier conclusions on the Places dataset, in terms of main objective test accuracy, separability of latent representation, AUC concept purity, and correlation of axes. \item Concept importance scores measured on the CW latent space can provide practical insights into which concepts are potentially more important in skin lesion diagnosis. \end{itemize} \section{Conclusion and Future Work} \label{sec:conclusion} Concept whitening is a module placed at the bottleneck of a CNN, to force the latent space to be disentangled, and to align the axes of the latent space with predefined concepts. By building an inherently interpretable CNN with concept whitening, we can gain intuition about how the network gradually learns the target concepts (or whether it needs them at all) over the layers without harming the main objective's performance. There are many avenues for possible future work.
Since CW modules are useful for helping humans to define primitive abstract concepts, such as those we have seen the network use at early layers, it would be interesting to automatically detect and quantify these new concepts (see ref. \cite{ghorbani2019towards}). Also, the requirement that CW completely decorrelate the outputs of all the filters might be too strong for some tasks. This is because concepts might be highly correlated in practice, such as ``airplane'' and ``sky.'' In this case, we may want to soften our definition of CW. We could define several general topics that are uncorrelated, and use multiple correlated filters to represent concepts within each general topic. In this scenario, instead of forcing the Gram matrix to be the identity matrix, we could make it block diagonal. The orthogonal basis would become a set of orthogonal subspaces. \section*{Data Availability} All datasets that support the findings are publicly available, including Places365 at \href{http://places2.csail.mit.edu}{http://places2.csail.mit.edu}, MS COCO at \href{https://cocodataset.org/}{https://cocodataset.org/} and ISIC at \href{https://www.isic-archive.com}{https://www.isic-archive.com}. \section*{Code Availability} The code for replicating our experiments is available on \href{https://github.com/zhiCHEN96/ConceptWhitening}{https://github.com/zhiCHEN96/ConceptWhitening} (\href{https://doi.org/10.5281/zenodo.4052692}{https://doi.org/10.5281/zenodo.4052692}).
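To make the mechanism concrete for readers exploring extensions such as the block-diagonal variant above: the whitening half of a CW layer is easy to prototype. The following NumPy sketch is illustrative only (it is not the authors' implementation; see the repository above for that). It ZCA-whitens a feature matrix so that the sample Gram matrix becomes the identity; any subsequent orthogonal rotation, which in CW is chosen to align axes with concept directions, preserves the decorrelation, since $QIQ^{\top}=I$.

```python
import numpy as np

def zca_whiten(X, eps=1e-8):
    """Decorrelate the features of X (n_samples x n_features): covariance -> identity."""
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / len(Xc)
    vals, vecs = np.linalg.eigh(cov)
    W = vecs @ np.diag(1.0 / np.sqrt(vals + eps)) @ vecs.T  # ZCA whitening matrix
    return Xc @ W

rng = np.random.default_rng(0)
# Correlated synthetic "features": a well-conditioned random mixing of Gaussians.
mix = np.linalg.qr(rng.normal(size=(8, 8)))[0] @ np.diag(np.linspace(1.0, 3.0, 8))
X = rng.normal(size=(500, 8)) @ mix
Z = zca_whiten(X)

# The sample covariance of Z is (numerically) the identity matrix.
print(np.allclose(Z.T @ Z / len(Z), np.eye(8), atol=1e-3))  # → True
```

In a trained CW module the rotation is optimized against concept data; here only the decorrelation step is shown.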
Q: Can I skip numerical analysis altogether and jump to other courses

So the situation is I have completed single and multivariate calculus, linear algebra, differential equations and real analysis. Now I'm thinking about skipping numerical analysis as it's of no use to my area of study, which is primarily programming and coding. My prof. told me that I should skip numerical analysis and jump to probability, statistics and discrete mathematics courses. Now I want to know whether it is mandatory to learn numerical analysis in order to do well in probability, statistics, discrete mathematics, linear programming and mathematical modelling. I mean, is numerical analysis a prerequisite to learning the other courses well? Also, please throw some light on how important it is for a coder to learn numerical analysis.

A: I think you should strongly consider doing what your adviser recommends. Presumably he/she knows something about you and about the contents and quality of the numerical analysis course in your department. Having said that, I agree with the comments of @PeterDiehr and @BigbearZaa that numerical analysis is crucial background for many kinds of programming. I hope circumstances will arise later in which you take a numerical analysis course. Here are some questions that should be easy for you to understand as related to your question. Answers are among the topics considered in most numerical analysis courses. (These topics are from my last month of doing statistical computations.)

1) Digital computers do not deal with the real numbers, but instead with a carefully chosen subset of the rationals that is very large, but finite. Usually, the computations on that subset of the rationals are satisfactory. But there are occasional rude surprises when they are not. Do you know what sort of surprises to guard against?

2) There are good reasons why many computer packages define $0^0 = 1$ instead of 'undefined'. For example, in R the expression 0^0 returns 1. Why is that?
3) If $M$ is a $10 \times 10$ invertible matrix, under what circumstances can you trust the computed inverse to be correct?

4) You are evaluating whether a particular commercial software package gives sufficiently accurate results for the current work of your group. What benchmark tests should you use in your evaluation?
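Concrete illustrations of questions 1)–3), sketched here in Python with NumPy (R behaves analogously for the first two):

```python
import numpy as np

# 1) The machine works in a large but finite subset of the rationals, so
#    decimal fractions like 0.1 are not represented exactly.
print(0.1 + 0.2 == 0.3)          # False — a classic rude surprise

# 2) Most environments define 0**0 = 1, consistent with the empty-product
#    convention (and with what polynomial/power-series code expects).
print(0.0 ** 0.0)                # 1.0

# 3) Whether a computed inverse can be trusted hinges on the condition number;
#    the 10x10 Hilbert matrix is invertible yet severely ill-conditioned.
H = np.array([[1.0 / (i + j + 1) for j in range(10)] for i in range(10)])
print(np.linalg.cond(H) > 1e13)  # True — expect to lose roughly 13 of ~16 digits
```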
package it.enrico;

// Observer in a minimal observer pattern: implementations receive
// string notifications from the subject they observe.
interface Observer {
    void notify(String s);
}
Q: pg_dump: SQL command failed

I am trying to take a backup of PostgreSQL 9.0 by using the pg_dump command. The commands are:

cd /opt/PostgresPlus/9.0AS/bin
pg_dump -h xxx.xxx.xxx.xxx -p 5432 -U superuser db_name>db_name.dump

But I am getting an error like:

pg_dump: SQL command failed
pg_dump: Error message from server: ERROR: schema "dbms_sql" does not exist
pg_dump: The command was: SET search_path = dbms_sql, pg_catalog

Please give me a suggestion.

A: There are various ways to set the search_path in PostgreSQL. To diagnose, log into the same database as the same superuser and check:

SHOW search_path;

If it contains dbms_sql, then check your postgresql.conf, database and role settings to find where it came from. Use any editor or, for instance, grep in a Linux shell to audit postgresql.conf:

grep 'search_path' /your/path/to/postgresql.conf

To check on database and role settings, use the default command line client psql or a graphical GUI like pgAdmin. In pgAdmin you just select the object in the object browser to see all settings in the SQL pane. In psql, use \drds superuser to see settings for this role. And \drds '' db_name to see settings for the database. Or just \drds to see all settings for all roles and databases.
\section{Introduction}\label{introduction} In 1972, Al-Salam and Chihara proved (see \cite{Al-Salam-1972}) that $(P_n)_{n\geq 0}$ is a $\mathrm{D}$-classical orthogonal polynomial sequence (OPS), namely one of the Hermite, Laguerre, Bessel or Jacobi families, if and only if \begin{align} (az^2+bz+c)\mathrm{D}P_n(z)=(a_nz+b_n)P_n(z)+c_nP_{n-1}(z)\quad (c_n\neq 0)\;, \label{very-classical} \end{align} where $\mathrm{D}=d/dz$. We can replace $\mathrm{D}$ in \eqref{very-classical} by the following Askey-Wilson operator \begin{align*} \mathcal{D}_q\, p(x(s))= \frac{p(x(s+1/2))-p(x(s-1/2))}{x(s+1/2)-x(s-1/2)},\quad x(s)=\mbox{$\frac12$}(q^s +q^{-s})\;, \end{align*} for every polynomial $p$. We assume that $0<q<1$. (Taking $q^s=e^{i \theta}$ we recover $\mathcal{D}_q$ as defined in \cite[(21.6.2)]{I2005}.) The problem of characterizing such OPS was posed by Ismail (see \cite[Conjecture 24.7.8]{I2005}). The case $a=b=0$ and $c=1$ was considered by Al-Salam (see \cite{A-1995}). Recently, we addressed this problem in its full generality (see \cite{KDPconj}), which leads to a characterization of the continuous $q$-Jacobi polynomials, the Chebyshev polynomials of the first kind and some special cases of the Al-Salam-Chihara polynomials. Our motivation here is to obtain a full characterization of the Askey-Wilson polynomials similar to \eqref{very-classical}. We define the averaging operator by \begin{align*} \mathcal{S}_q\, p(x(s))= \frac{1}{2}\Big( p(x(s+1/2))+p(x(s-1/2))\Big),\quad x(s)=\mbox{$\frac12$}(q^s +q^{-s})\;. \end{align*} For our purpose, instead of \eqref{very-classical}, let us consider the following difference equation \begin{align}\label{open-problem} &(az^2+bz+c)\mathcal{D}_q P_n(z)=a_n\mathcal{S}_q P_{n+1}(z)+b_n\mathcal{S}_q P_n(z)+c_n\mathcal{S}_q P_{n-1}(z),~ z=x(s)\;, \end{align} with $c_n\neq 0$. Our objective is then to characterize all OPS that satisfy \eqref{open-problem}. The case where $a=b=0$ and $c=1$ was solved recently in \cite{KCDMJP2021-a}.
Using ideas developed therein we are going to solve \eqref{open-problem} in its general form. The structure of the paper is as follows. Section 2 presents some basic facts of the algebraic theory of OPS together with some useful results. Section \ref{main} contains our main result. In Section \ref{example} we present a finer result for a special case. \section{Background and preliminary results} The algebraic theory of orthogonal polynomials was introduced by P. Maroni (see \cite{M1991}). Let $\mathcal{P}$ be the vector space of all polynomials with complex coefficients and let $\mathcal{P}^*$ be its algebraic dual. A simple set in $\mathcal{P}$ is a sequence $(P_n)_{n\geq0}$ such that $\mathrm{deg}(P_n)=n$ for each $n$. A simple set $(P_n)_{n\geq0}$ is called an OPS with respect to ${\bf u}\in\mathcal{P}^*$ if $$ \langle{\bf u},P_nP_m\rangle=\kappa_n\delta_{n,m}\quad(m=0,1,\ldots;\;\kappa_n\in\mathbb{C}\setminus\{0\}), $$ where $\langle{\bf u},f\rangle$ is the action of ${\bf u}$ on $f\in\mathcal{P}$. In this case, we say that ${\bf u}$ is regular. The left multiplication of a functional ${\bf u}$ by a polynomial $\phi$ is defined by $$ \left\langle \phi {\bf u}, f \right\rangle =\left\langle {\bf u},\phi f \right\rangle \quad (f\in \mathcal{P}). $$ Consequently, if $(P_n)_{n\geq0}$ is a monic OPS with respect to ${\bf u}\in\mathcal{P}^*$, then the corresponding dual basis is explicitly given by \begin{align}\label{expression-an} {\bf a}_n =\left\langle {\bf u} , P_n ^2 \right\rangle ^{-1} P_n{\bf u}. \end{align} Any functional ${\bf u} \in \mathcal{P}^*$ (when $\mathcal{P}$ is endowed with an appropriate strict inductive limit topology, see \cite{M1991}) can be written in the sense of the weak topology in $\mathcal{P}^*$ as \begin{align*} {\bf u} = \sum_{n=0} ^{\infty} \left\langle {\bf u}, P_n \right\rangle {\bf a}_n.
\end{align*} It is known that a monic OPS, $(P_n)_{n\geq 0}$, is characterized by the following three-term recurrence relation (TTRR): \begin{align}\label{TTRR_relation} P_{-1} (z)=0, \quad P_{n+1} (z) =(z-B_n)P_n (z)-C_n P_{n-1} (z) \quad (C_n \neq 0), \end{align} and, therefore, \begin{align}\label{TTRR_coefficients} B_n = \frac{\left\langle {\bf u} , zP_n ^2 \right\rangle}{\left\langle {\bf u} , P_n ^2 \right\rangle},\quad C_{n+1} = \frac{\left\langle {\bf u} , P_{n+1} ^2 \right\rangle}{\left\langle {\bf u} , P_n ^2 \right\rangle}. \end{align} The Askey-Wilson and the averaging operators induce two elements on $\mathcal{P}^*$, say $\mathbf{D}_q$ and $\mathbf{S}_q$, via the following definition (see \cite{FK-NM2011}): \begin{align*} \langle \mathbf{D}_q{\bf u},f\rangle=-\langle {\bf u},\mathcal{D}_q f\rangle,\quad \langle\mathbf{S}_q{\bf u},f\rangle=\langle {\bf u},\mathcal{S}_q f\rangle. \end{align*} Let $f,g\in\mathcal{P}$ and ${\bf u}\in\mathcal{P}^*$. Hereafter we denote $z=x(s)=(q^s+q^{-s})/2$. Then the following properties hold (see e.g. 
\cite{KDP2021} and references therein): \begin{align} \mathcal{D}_q \big(fg\big)&= \big(\mathcal{D}_q f\big)\big(\mathcal{S}_q g\big)+\big(\mathcal{S}_q f\big)\big(\mathcal{D}_q g\big), \label{def-Dx-fg} \\[7pt] \mathcal{S}_q \big( fg\big)&=\big(\mathcal{D}_q f\big) \big(\mathcal{D}_q g\big)\texttt{U}_2 +\big(\mathcal{S}_q f\big) \big(\mathcal{S}_q g\big), \label{def-Sx-fg} \\[7pt] f\mathcal{D}_qg&=\mathcal{D}_q\left[ \Big(\mathcal{S}_qf-\frac{\texttt{U}_1}{\alpha}\mathcal{D}_qf \Big)g\right]-\frac{1}{\alpha}\mathcal{S}_q \Big(g\mathcal{D}_q f\Big) , \label{def-fDxg} \\[7pt] f{\bf D}_q {\bf u}&={\bf D}_q\left(\mathcal{S}_qf~{\bf u} \right)-{\bf S}_q\left(\mathcal{D}_qf~{\bf u} \right), \label{def-fD_x-u}\\[7pt] \alpha \mathbf{D}_q ^n \mathbf{S}_q {\bf u}&= \alpha_{n+1} \mathbf{S}_q \mathbf{D}_q^n {\bf u} +\gamma_n \texttt{U}_1\mathbf{D}_q^{n+1}{\bf u}, \label{DxnSx-u} \end{align} where $\alpha =(q^{1/2}+q^{-1/2})/2$, $\texttt{U}_1 (z)=(\alpha ^2 -1)z$ and $\texttt{U}_2 (z)=(\alpha ^2 -1)(z^2-1)$. It is known that \begin{align} \mathcal{D}_q z^n =\gamma_n z^{n-1}+u_nz^{n-3}+\cdots,\quad \mathcal{S}_q z^n =\alpha_n z^n+\widehat{u}_nz^{n-2}+\cdots, \label{Dx-xnSx-xn} \end{align} where $$\alpha_n= \mbox{$\frac12$}(q^{n/2} +q^{-n/2})\;,\quad \gamma_n = \frac{q^{n/2}-q^{-n/2}}{q^{1/2}-q^{-1/2}}$$ and, $u_n$ and $\widehat{u}_n$ are some complex numbers. We set $\gamma_{-1}:=-1$ and $\alpha_{-1}:=\alpha$. It is important to notice that if ${\bf u}$ is a linear functional such that ${\bf D}_q(\phi {\bf u})={\bf S}_q(\psi {\bf u})$, then the following relation holds (see \cite[Proposition 2: (3.8)]{FK-NM2011}) \begin{align}\label{key-for-second-order-relation} \left\langle {\bf u}, (\phi \mathcal{D}^2_qf+\psi \mathcal{S}_q\mathcal{D}_qf)g \right\rangle=\left\langle {\bf u}, (\phi \mathcal{D}^2_qg+\psi \mathcal{S}_q\mathcal{D}_qg)f \right\rangle, \;f,g\in \mathcal{P}\;. 
\end{align} We denote by $P_n ^{[k]}$ $(k=0,1,\ldots)$ the monic polynomial of degree $n$ defined by \begin{align*} P_n ^{[k]} (z)=\frac{\mathcal{D}_q ^k P_{n+k} (z)}{ \prod_{j=1} ^k \gamma_{n+j}} =\frac{\gamma_{n} !}{\gamma_{n+k} !} \mathcal{D}_q ^k P_{n+k} (z). \end{align*} Here it is understood that $\mathrm{D}_x ^0 f=f $, empty product equals one, and $\gamma_0 !=1$, $\gamma_{n+1}!=\gamma_1\cdots \gamma_n \gamma_{n+1}$. If $({\bf a}^{[k]} _n)_{n\geq 0}$ is the dual basis associated to the sequence $(P_n ^{[k]})_{n\geq 0}$, we leave it to the reader to verify that \begin{align} {\bf D}_q ^k {\bf a}^{[k]} _n=(-1)^k \frac{\gamma_{n+k}!}{\gamma_n ! }{\bf a}_{n+k}\quad (k=0,1,\ldots). \label{basis-Dx-derivatives} \end{align} In 2003, M.E.H. Ismail proved the following result (see \cite[Theorem 20.1.3]{I2005}): \begin{theorem}\label{T} A second order operator equation of the form \begin{align}\label{Ismail} \phi(z)\mathcal{D}^2_q\, Y+\psi(z) \mathcal{S}_q \mathcal{D}_q \, Y+h(z)\, Y=\lambda_n\, Y \end{align} has a polynomial solution $Y_n(z)$ of exact degree $n$ for each $n=0,1,\dots$, if and only if $Y_n(z)$ is a multiple of the Askey-Wilson polynomials, or special or limiting cases of them. In all these cases $\phi$, $\psi$, $h$, and $\lambda_n$ reduce to \begin{align*} \phi(z)&=-q^{-1/2}(2(1+\sigma_4)z^2-(\sigma_1+\sigma_3)z-1+\sigma_2-\sigma_4),\\[7pt] \psi(z)&=\frac{2}{1-q} (2(\sigma_4-1)z+\sigma_1-\sigma_3), \quad h(z)=0,\\[7pt] \lambda_n&=\frac{4 q(1-q^{-n})(1-\sigma_4 q^{n-1})}{(1-q)^2}, \end{align*} or a special or limiting case of it, $\sigma_j$ being the jth elementary symmetric function of the Askey-Wilson parameters. \end{theorem} \section{Main result}\label{main} We are now in the position to prove our main result. 
\begin{theorem}\label{propo-sol-q-quadratic} If $(P_n)_{n\geq 0}$ is a monic OPS such that \begin{align} (az^2+bz+c)\mathcal{D}_qP_{n}(z)= a_n\mathcal{S}_qP_{n+1}(z)+b_n\mathcal{S}_q P_{n}(z)+c_n\mathcal{S}_qP_{n-1}(z)\quad (n=0,1,\ldots)\;,\label{equation-case-deg-is-two} \end{align} with $c_n\neq 0$ for $n=0,1,\ldots$, where the constant parameters $a$, $b$ and $c$ are chosen such that \small{ \begin{align} &(4\alpha^2-1)aC_2C_3 +\frac{r_3}{2}\Big[(B_0+B_1)^2 +4\alpha^2 (C_1-B_0B_1 +\alpha^2 -1) -2(2\alpha^2-1)C_2 \Big]=0\;,\label{condition-case-deg-is-two-1} \end{align} } whenever $a\neq 0$, and \begin{align} aC_2C_3\Big(b_2+2aB_2+\frac{b}{\alpha}\Big) -r_3\left(a(B_2+B_1)C_2 +\frac{b}{\alpha}C_2 -\frac{r_2}{2}(B_1-B_0) \right)=0\;, \label{condition-case-deg-is-two-2} \end{align} where $r_i=c_i+2aC_i$, $i=2,3$, then the polynomials $(P_n)_{n\geq 0}$ are multiples of Askey-Wilson polynomials, or of special or limiting cases of them. Moreover, $(P_n)_{n\geq 0}$ satisfies \eqref{Ismail} with \begin{align} &\phi(z)=\mathfrak{a}z^2 +\mathfrak{b}z+\mathfrak{c}\;,~\psi(z)=z-B_0\;,~h(z)=0\;,~ \lambda_n=\gamma_n (\mathfrak{a}\gamma_{n-1}+\alpha_{n-1})\;,\label{expression-phi-psi-general-case} \end{align} where \begin{align*} &\mathfrak{a}=-\frac{aC_3 +(\alpha^2-1)r_3}{\alpha r_3}\;;\\ &\mathfrak{b}=-\frac{1}{2\alpha}\left(\Big(1-2a\frac{C_3}{r_3}\Big)\Big(B_0+B_1\Big)-2\alpha^2B_0 \right) \;;\\ &\mathfrak{c}=-\frac{1}{2\alpha}\left(\Big(1-2a\frac{C_3}{r_3}\Big)\Big(C_1-B_0B_1\Big)+C_1 +B_0 ^2\right) \;; \end{align*} $B_0$, $B_1$, $C_1$, $C_2$ and $C_3$ being the coefficients of the TTRR \eqref{TTRR_relation} satisfied by $(P_n)_{n\geq 0}$. \end{theorem} \begin{proof} Let $(P_n)_{n\geq 0}$ be a monic OPS with respect to the functional ${\bf u} \in \mathcal{P}^*$ and satisfying \eqref{equation-case-deg-is-two}. Set $\pi_2(z)=az^2+bz+c$.
Using \eqref{def-fDxg}, we obtain \begin{align}\label{start-point-01} \pi_2\mathcal{D}_q P_n= \mathcal{D}_q \left[ \Big(\mathcal{S}_q\pi_2 -\frac{\texttt{U}_1}{\alpha}\mathcal{D}_q\pi_2 \Big)P_n\right] -\frac{1}{\alpha}\mathcal{S}_q\Big( \mathcal{D}_q\pi_2~P_n \Big)\;. \end{align} By direct computations we obtain $\mathcal{D}_q \pi_2(z)=2\alpha az+b$ and $\mathcal{S}_q\pi_2(z)=a(2\alpha^2-1)z^2+\alpha bz +c +a(1-\alpha^2)$. Therefore $$ \mathcal{S}_q\pi_2 -\frac{\texttt{U}_1}{\alpha}\mathcal{D}_q\pi_2 = az^2+\frac{b}{\alpha}z +c +a(1-\alpha^2)\;. $$ Hence from \eqref{start-point-01}, using the TTRR \eqref{TTRR_relation}, we can rewrite \eqref{equation-case-deg-is-two} as follows: \begin{align}\label{equation-bis} \sum_{j=n-1} ^{n+1} a_{n,j}\mathcal{S}_qP_j(z)=\sum_{j=n-3} ^{n+1} b_{n,j}P_{j} ^{[1]}(z) \quad \quad (n=0,1,\ldots)\;; \end{align} where \begin{align*} &a_{n,n+1}=2a+a_n,~a_{n,n}=b_n+2aB_n+\frac{b}{\alpha},~a_{n,n-1}=c_n+2aC_n,\\ &b_{n,n+1}=a\gamma_{n+2}, ~b_{n,n}=\gamma_{n+1}\Big(a(B_{n+1}+B_n)+\frac{b}{\alpha} \Big),\\ &b_{n,n-1}=\gamma_n\left(a(C_{n+1}+B_n ^2+C_n )+\frac{b}{\alpha}B_n +c+a(1-\alpha^2) \right),\\ &b_{n,n-2}=\gamma_{n-1}C_n\Big(a(B_{n-1}+B_n)+\frac{b}{\alpha} \Big),~b_{n,n-3}=a\gamma_{n-2}C_nC_{n-1}\;. \end{align*} Let us write $P_n(z)=z^n +f_nz^{n-1}+\ldots$, for $n=0,1,\ldots$, with $f_n=-\sum_{j=0} ^{n-1}B_j$. Then using \eqref{Dx-xnSx-xn}, we identify the coefficients of the terms in $z^{n+1}$ and in $z^n$ in \eqref{equation-case-deg-is-two} to obtain \begin{align}\label{expression-an} a_n=a\frac{\gamma_n}{\alpha_{n+1}}\;,~ \alpha_n\alpha_{n+1} b_n=\gamma_n(a\alpha_nB_n +b\alpha_{n+1})+a\alpha\sum_{j=0} ^{n-1}B_j \;, \end{align} for $n=0,1,\ldots$.
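The three computations above ($\mathcal{D}_q\pi_2$, $\mathcal{S}_q\pi_2$ and $\mathcal{S}_q\pi_2-\frac{\texttt{U}_1}{\alpha}\mathcal{D}_q\pi_2$) can be confirmed symbolically; the sketch below is an illustrative aside, again assuming the standard realisation $z=(w+w^{-1})/2$, $s=q^{1/2}$, of the operators:

```python
# Check of D_q pi_2, S_q pi_2 and S_q pi_2 - (U_1/alpha) D_q pi_2,
# with pi_2(z) = a z^2 + b z + c and U_1(z) = (alpha^2 - 1) z,
# under the lattice realisation z = (w + 1/w)/2, s = q^(1/2)
# (this realisation is an assumption of the check, not of the paper).
from sympy import symbols, simplify, cancel

s, w = symbols('s w', positive=True)
a, b, c = symbols('a b c')
alpha = (s + 1/s) / 2

def x(u): return (u + 1/u) / 2
def Dq(f): return (f(s*w) - f(w/s)) / (x(s*w) - x(w/s))
def Sq(f): return (f(s*w) + f(w/s)) / 2

pi2 = lambda u: a*x(u)**2 + b*x(u) + c
z = x(w)
U1 = (alpha**2 - 1) * z

D = cancel(Dq(pi2))
S = cancel(Sq(pi2))

assert simplify(D - (2*alpha*a*z + b)) == 0
assert simplify(S - (a*(2*alpha**2 - 1)*z**2 + alpha*b*z + c + a*(1 - alpha**2))) == 0
assert simplify(S - U1/alpha*D - (a*z**2 + b/alpha*z + c + a*(1 - alpha**2))) == 0
print("all three identities verified")
```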
Assume now that $a\neq 0$ (the case $a=0$ can be treated along the same lines) and define $$Q_n(z)=\sum_{j=n-2} ^n a_{n-1,j}P_j(z)~~\quad(n=0,1,\ldots)\;.$$ Then $(Q_n)_{n\geq 0}$ is a simple set of polynomials, so let $({\bf a}_n)_{n\geq 0}$, $({\bf a}_n ^{[1]})_{n\geq 0}$ and $({\bf r}_n)_{n\geq 0}$ be the dual bases associated with the sequences $(P_n)_{n\geq 0}$, $(P_n ^{[1]})_{n\geq 0}$ and $(Q_n)_{n\geq 0}$, respectively. We then claim that \begin{align} &{\bf a}_n=a_{n-1,n}{\bf r}_n+a_{n,n}{\bf r}_{n+1}+a_{n+1,n}{\bf r}_{n+2}\;;\label{basis-relation-an-with-rn-1}\\ &{\bf S}_q {\bf a}_n ^{[1]}=b_{n-1,n}{\bf r}_n+b_{n,n}{\bf r}_{n+1}+b_{n+1,n}{\bf r}_{n+2}+b_{n+2,n}{\bf r}_{n+3}+b_{n+3,n}{\bf r}_{n+4}\;,\label{basis-relation-with-rn-2} \end{align} for $n=0,1,\ldots$. Indeed, we have $$ \left\langle {\bf a}_n,Q_l \right\rangle=\sum_{j=l-2} ^l a_{l-1,j}\left\langle {\bf a}_n,P_j \right\rangle=\sum_{j=l-2} ^l a_{l-1,j}\delta_{n,j}=\left\{ \begin{array}{ll} a_{l-1,n} & \mbox{if } n\leq l\leq n+2 \\ 0 & \mbox{otherwise.} \end{array} \right. $$ Similarly, using \eqref{equation-bis}, we obtain \begin{align*} \left\langle {\bf S}_q {\bf a}_n ^{[1]},Q_l \right\rangle =\left\langle {\bf a}_n ^{[1]},\mathcal{S}_q Q_l \right\rangle &=\sum_{j=l-4} ^l b_{l-1,j}\left\langle {\bf a}_n ^{[1]},P_j ^{[1]} \right\rangle\\ &=\sum_{j=l-4} ^l b_{l-1,j}\delta_{n,j} =\left\{ \begin{array}{ll} b_{l-1,n} & \mbox{if } n\leq l\leq n+4 \\ 0 & \mbox{otherwise.} \end{array} \right. \end{align*} Hence \eqref{basis-relation-an-with-rn-1}--\eqref{basis-relation-with-rn-2} follow by writing $$ {\bf a}_n =\sum_{l=0} ^{+\infty} \left\langle {\bf a}_n ,Q_l \right\rangle {\bf r}_l ,~\quad {\bf S}_q {\bf a}_n ^{[1]} =\sum_{l=0} ^{+\infty} \left\langle {\bf S}_q {\bf a}_n ^{[1]} ,Q_l \right\rangle {\bf r}_l \quad \quad (n=0,1,\ldots)\;, $$ and taking the preceding computations into account.
Taking $n=0,1,2$ in \eqref{basis-relation-an-with-rn-1} and $n=0$ in \eqref{basis-relation-with-rn-2}, we obtain the following system. \begin{align} {\bf a}_0&=a{\bf r}_0+a_{0,0}{\bf r}_{1}+a_{1,0}{\bf r}_{2} \;;\label{basis-eqt-1}\\ {\bf a}_1&=2a{\bf r}_1+a_{1,1}{\bf r}_{2}+a_{2,1}{\bf r}_{3} \;;\label{basis-eqt-2}\\ {\bf a}_2&=\frac{4\alpha^2-1}{2\alpha^2-1}a{\bf r}_2+a_{2,2}{\bf r}_{3}+a_{3,2}{\bf r}_{4} \;;\label{basis-eqt-3}\\ {\bf S}_q {\bf a}_0 ^{[1]}&=a{\bf r}_0+b_{0,0}{\bf r}_{1}+b_{1,0}{\bf r}_{2}+b_{2,0}{\bf r}_{3}+b_{3,0}{\bf r}_{4} \label{basis-eqt-4} \;. \end{align} Subtracting \eqref{basis-eqt-1} from \eqref{basis-eqt-4}, we obtain \begin{align} -{\bf a}_0+{\bf S}_q {\bf a}_0 ^{[1]}&=a(B_1-B_0){\bf r}_1+(b_{1,0}-a_{1,0}){\bf r}_{2}+b_{2,0}{\bf r}_{3}+b_{3,0}{\bf r}_{4} \label{basis-eqt-5} \;. \end{align} Combining this with \eqref{basis-eqt-2}, we obtain \begin{align} -\frac{1}{2}(B_1-B_0){\bf a}_1-{\bf a}_0+{\bf S}_q {\bf a}_0 ^{[1]}&=A{\bf r}_{2}+\Big(b_{2,0}-\frac{1}{2}(B_1-B_0)a_{2,1}\Big){\bf r}_{3}+b_{3,0}{\bf r}_{4} \label{basis-eqt-6} \;; \end{align} where $A:=b_{1,0}-a_{1,0}-\frac{1}{2}(B_1-B_0)a_{1,1}$. Taking $n=1$ in \eqref{equation-case-deg-is-two}, we obtain \begin{align*} &a_1 =\frac{a}{2\alpha^2-1}\;,~~b_1=\frac{a(B_0+B_1)}{2\alpha^2-1}+\frac{b}{\alpha}\;,~~c_1=\frac{a(B_0 ^2 +C_1 +\alpha^2-1)}{2\alpha^2-1}+c+\frac{b}{\alpha}B_0\;. \end{align*} With this, we have \begin{align*} A=&-\frac{a}{2(2\alpha^2-1)}\Big((B_1+B_0)^2 -4\alpha^2(B_0B_1-C_1+1-\alpha^2) \Big)+aC_2\;. \end{align*} Note that $A\neq 0$ according to \eqref{condition-case-deg-is-two-1}.
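As a sanity check of the expressions just obtained for $a_1$, $b_1$ and $c_1$, one can verify \eqref{equation-case-deg-is-two} with $n=1$ directly, using the monic polynomials $P_0=1$, $P_1=z-B_0$ and $P_2=(z-B_1)P_1-C_1$ furnished by the TTRR. The sketch below is an aside, under the same lattice realisation $z=(w+w^{-1})/2$, $s=q^{1/2}$:

```python
# With P_0 = 1, P_1 = z - B_0 and P_2 = (z - B_1)P_1 - C_1 from the
# monic TTRR, the n = 1 instance of the structure relation,
#   pi_2(z) D_q P_1 = a_1 S_q P_2 + b_1 S_q P_1 + c_1 S_q P_0,
# is verified symbolically for the stated a_1, b_1, c_1.  The lattice
# realisation z = (w + 1/w)/2, s = q^(1/2) is an assumption of this check.
from sympy import symbols, simplify, cancel

s, w = symbols('s w', positive=True)
a, b, c, B0, B1, C1 = symbols('a b c B0 B1 C1')
alpha = (s + 1/s) / 2

def x(u): return (u + 1/u) / 2
def Dq(f): return (f(s*w) - f(w/s)) / (x(s*w) - x(w/s))
def Sq(f): return (f(s*w) + f(w/s)) / 2

z = x(w)
P1 = lambda u: x(u) - B0
P2 = lambda u: (x(u) - B1) * (x(u) - B0) - C1

a1 = a / (2*alpha**2 - 1)
b1 = a*(B0 + B1) / (2*alpha**2 - 1) + b/alpha
c1 = a*(B0**2 + C1 + alpha**2 - 1) / (2*alpha**2 - 1) + c + b*B0/alpha

lhs = (a*z**2 + b*z + c) * cancel(Dq(P1))          # D_q P_1 = 1
rhs = a1*cancel(Sq(P2)) + b1*cancel(Sq(P1)) + c1   # S_q P_0 = 1
assert simplify(lhs - rhs) == 0
print("a_1, b_1, c_1 confirmed")
```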
Now, combining \eqref{basis-eqt-6} with \eqref{basis-eqt-3} yields \begin{align*} &-\frac{(2\alpha^2-1)A}{(4\alpha^2 -1)a}{\bf a}_2 -\frac{1}{2}(B_1-B_0){\bf a}_1-{\bf a}_0 +{\bf S}_q {\bf a}_0 ^{[1]}\\ &=\Big(b_{2,0}-\mbox{$\frac{1}{2}$}(B_1-B_0)a_{2,1}-\frac{(2\alpha^2-1)A}{(4\alpha^2-1)a}a_{2,2} \Big){\bf r}_3+\Big(b_{3,0}-\frac{(2\alpha^2-1)A}{(4\alpha^2-1)a}a_{3,2}\Big){\bf r}_4\;. \end{align*} Using the expressions of the coefficients $a_{n,j}$ and $b_{n,j}$ given in \eqref{equation-bis}, we obtain $$b_{3,0}-\frac{(2\alpha^2-1)A}{(4\alpha^2-1)a}a_{3,2}=aC_2C_3-\frac{(2\alpha^2-1)A}{(4\alpha^2-1)a}(c_3+2aC_3)=0\;,$$ according to condition \eqref{condition-case-deg-is-two-1}. Similarly, using this relation we also obtain \begin{align*} b_{2,0}-\mbox{$\frac{1}{2}$}(B_1-B_0)a_{2,1}-\frac{(2\alpha^2-1)A}{(4\alpha^2-1)a}a_{2,2}=\Big(a(B_1+B_2)+\frac{b}{\alpha}\Big)C_2\\ -\mbox{$\frac{1}{2}$}(B_1-B_0)(c_2+2aC_2)-\frac{aC_2C_3}{c_3+2aC_3}\Big(b_2+2aB_2+\frac{b}{\alpha}\Big) =0\;, \end{align*} according to condition \eqref{condition-case-deg-is-two-2}. Hence \begin{align}\label{basis-final} {\bf S}_q {\bf a}_0 ^{[1]}={\bf a}_0 +\frac{1}{2}(B_1-B_0){\bf a}_1 +\frac{(2\alpha^2-1)A}{(4\alpha^2 -1)a}{\bf a}_2\;.
\end{align} On the other hand, by direct computations we have $\mathcal{D}_q\texttt{U}_1=\alpha^2-1$ and $\mathcal{S}_q\texttt{U}_1=\alpha\texttt{U}_1$, and therefore, using \eqref{def-fD_x-u} with $f$ and ${\bf u}$ replaced by $\texttt{U}_1$ and ${\bf a}_1$, respectively, we obtain \begin{align} \texttt{U}_1{\bf D}_q{\bf a}_1=\alpha {\bf D}_q(\texttt{U}_1{\bf a}_1)-(\alpha^2-1){\bf S}_q{\bf a}_1\;.\label{U1D-x-a1} \end{align} We now apply ${\bf D}_q$ to \eqref{basis-final} using successively \eqref{DxnSx-u} for $n=1$ with ${\bf u}$ replaced by ${\bf a}_0$, \eqref{basis-Dx-derivatives} for $k=1$ and $n=0$, and \eqref{U1D-x-a1}, to obtain \begin{align*} {\bf D}_q \left[{\bf a}_0 +\frac{1}{2}(B_1-B_0){\bf a}_1 +\frac{(2\alpha^2-1)A}{(4\alpha^2 -1)a}{\bf a}_2 \right]&={\bf D}_q{\bf S}_q{\bf a}_0 ^{[1]}=\frac{\alpha_2}{\alpha}{\bf S}_q{\bf D}_q{\bf a}_0 ^{[1]} +\frac{\texttt{U}_1}{\alpha}{\bf D}_q ^2 {\bf a}_0 ^{[1]} \\ &=-\frac{2\alpha^2-1}{\alpha}{\bf S}_q{\bf a}_1 -\frac{\texttt{U}_1}{\alpha}{\bf D}_q {\bf a}_1 \\ &=-\alpha {\bf S}_q{\bf a}_1 -{\bf D}_q\big(\texttt{U}_1{\bf a}_1\big) \;. \end{align*} Hence \begin{align*} {\bf D}_q \left[{\bf a}_0 +\frac{1}{2}\Big((B_1-B_0)+2\texttt{U}_1\Big){\bf a}_1 +\frac{(2\alpha^2 -1)A}{(4\alpha^2 -1)a}{\bf a}_2 \right]= -\alpha {\bf S}_q {\bf a}_1\;. \end{align*} So using \eqref{expression-an} and \eqref{TTRR_coefficients} we obtain $${\bf D}_q(\phi {\bf u})={\bf S}_q(\psi {\bf u})\;,$$ where $\phi$ and $\psi$ are given in \eqref{expression-phi-psi-general-case}. Since ${\bf u}$ is regular, from \cite[Theorem 4.1]{KDP2021} we obtain $\mathfrak{a}\gamma_n+\alpha_n\neq 0$, $n=0,1,\ldots$.
From \eqref{key-for-second-order-relation} with $f=P_n$ and $g=P_l$ we obtain \begin{align*} \phi \mathcal{D}^2_qP_n+\psi \mathcal{S}_q\mathcal{D}_qP_n=\sum_{l=0} ^n a_{n,l}P_l\;, \end{align*} where $\left\langle {\bf u}, P_l ^2\right\rangle a_{n,l} =\left\langle {\bf u}, (\phi \mathcal{D}^2_qP_n+\psi \mathcal{S}_q\mathcal{D}_qP_n)P_l \right\rangle =\left\langle {\bf u}, (\phi \mathcal{D}^2_qP_l+\psi \mathcal{S}_q\mathcal{D}_qP_l)P_n \right\rangle $. Taking into account that $\phi \mathcal{D}^2_qP_n+\psi \mathcal{S}_q\mathcal{D}_qP_n=\gamma_n(\mathfrak{a}\gamma_{n-1}+\alpha_{n-1})z^n+\textit{terms of lower degree}$, we obtain $a_{n,l}=0$ if $l<n$ and $a_{n,n}=\gamma_n(\mathfrak{a}\gamma_{n-1}+\alpha_{n-1})\neq 0$. Therefore $$\phi(z) \mathcal{D}_q ^2 P_n(z) + \psi(z) \mathcal{S}_q\mathcal{D}_qP_n(z) =a_{n,n}P_n(z)~~\quad(n=1,2,\ldots)\;,$$ and the desired result follows by Theorem \ref{T}. \end{proof} \section{A special case}\label{example} In this section we consider the special case of \eqref{equation-case-deg-is-two} where $a=0$, in order to state a finer result. For this purpose we use the following result. \begin{theorem}\label{main-Thm1}\cite{KDP2021} Let $(P_n)_{n\geq 0}$ be a monic OPS with respect to ${\bf u} \in \mathcal{P}^*$. Suppose that ${\bf u}$ satisfies the distributional equation $${\bf D}_q(\phi {\bf u})={\bf S}_q(\psi {\bf u})\;,$$ where $\phi(z)=az^2+bz+c$ and $\psi(z)=dz+e$, with $d\neq0$.
Then $(P_n)_{n\geq 0}$ satisfies \eqref{TTRR_relation} with \begin{align} B_n = \frac{\gamma_n e_{n-1}}{d_{2n-2}} -\frac{\gamma_{n+1}e_n}{d_{2n}},\quad C_{n+1} =-\frac{\gamma_{n+1}d_{n-1}}{d_{2n-1}d_{2n+1}}\phi^{[n]}\left( -\frac{e_{n}}{d_{2n}}\right),\label{Bn-Cn-Dx} \end{align} where $d_n=a\gamma_n+d\alpha_n$, $e_n=b\gamma_n+e\alpha_n$, and \begin{align*} \phi^{[n]}(z)=\big(d(\alpha^2-1)\gamma_{2n}+a\alpha_{2n}\big) \big(z^2-1/2\big)+\big(b\alpha_n+e(\alpha^2-1)\gamma_n\big)z+ c+a/2. \end{align*} \end{theorem} The monic Askey-Wilson polynomials $(Q_n(\cdot; a_1, a_2, a_3, a_4 | q))_{n\geq 0}$ satisfy \eqref{TTRR_relation} (see \cite[(14.1.5)]{KLS2010}) with \begin{align*} B_n &= a_1+\frac{1}{a_1}-\frac{(1-a_1a_2q^n)(1-a_1a_3q^n)(1-a_1a_4q^n)(1-a_1a_2a_3a_4q^{n-1})}{a_1(1-a_1a_2a_3a_4q^{2n-1})(1-a_1a_2a_3a_4q^{2n})}\\[7pt] &\quad-\frac{a_1(1-q^n)(1-a_2a_3q^{n-1})(1-a_2a_4q^{n-1})(1-a_3a_4q^{n-1})}{(1-a_1a_2a_3a_4q^{2n-1})(1-a_1a_2a_3a_4q^{2n-2})},\\[7pt] C_{n+1}&=(1-q^{n+1})(1-a_1a_2a_3a_4q^{n-1}) \\[7pt] &\quad\times \frac{(1-a_1a_2q^n)(1-a_1a_3q^n)(1-a_1a_4q^n)(1-a_2a_3q^n)(1-a_2a_4q^n)(1-a_3a_4q^n)}{4(1-a_1a_2a_3a_4q^{2n-1})(1-a_1a_2a_3a_4q^{2n})^2 (1-a_1a_2a_3a_4q^{2n+1})} \end{align*} and are subject to the following restrictions (see \cite{KDP2021}): $$ \begin{array}{l} (1-a_1a_2a_3a_4q^n)(1-a_1a_2q^n)(1-a_1a_3q^n) \\[7pt] \qquad\quad\times(1-a_1a_4q^n)(1-a_2a_3q^n)(1-a_2a_4q^n)(1-a_3a_4q^n) \neq 0. \end{array} $$ We now state our result.
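The proof of the corollary below repeatedly reads off symmetric functions of the roots of the quartic \eqref{equation-degree-4-q-a} through Vieta's formulas ($e_1=R$, $e_2=T$, $e_3=S$, $e_4=-1/q$). As an illustrative aside, this can be checked numerically; the values of $q$, $R$, $T$ and $S$ below are arbitrary sample values:

```python
# Numerical illustration of the Vieta relations for the quartic
#   Z^4 - R Z^3 + T Z^2 - S Z - 1/q = 0:
# its roots a_1, ..., a_4 satisfy e_1 = R, e_2 = T, e_3 = S and
# e_4 = a_1 a_2 a_3 a_4 = -1/q, which is what the proof below uses.
import numpy as np
from itertools import combinations

q, R, T, S = 0.3, 1.7, -0.9, 2.4               # arbitrary test values
roots = np.roots([1.0, -R, T, -S, -1.0/q])     # the four complex roots

def e(k):                                      # k-th elementary symmetric function
    return sum(np.prod(c) for c in combinations(roots, k))

assert np.isclose(e(1), R)
assert np.isclose(e(2), T)
assert np.isclose(e(3), S)
assert np.isclose(e(4), -1.0/q)
print("Vieta relations confirmed")
```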
\begin{corollary} \label{main-thm-q-quadratic-2} Let $(P_n)_{n\geq 0}$ be a monic OPS satisfying \begin{align} (z-r)\mathcal{D}_qP_{n}(z)= b_n\mathcal{S}_qP_n(z)+c_n\mathcal{S}_q P_{n-1}(z)\quad (n=0,1,\ldots)\;,\label{equation-case-deg-is-one-main} \end{align} with $c_n\neq 0$ for $n=0,1,\ldots$, where the constant parameter $r$ is chosen such that \begin{align} 2(2\alpha ^2 -1)\left(C_2+\alpha (B_1-B_0)r \right)=B_1^2-B_0^2\;.\label{main-condition-case-deg-is-one-q} \end{align} Then $P_n$ is a special case of the monic Askey-Wilson polynomials: $$ P_n(z)=Q_n \Big(z;a_1,a_2,a_3,a_4\Big|q \Big)\quad (n=0,1,\ldots) \;, $$ where $a_1$, $a_2$, $a_3$ and $a_4$ are complex solutions of the equation \begin{align}\label{equation-degree-4-q-a} Z^4 -RZ^3 +TZ^2-SZ-q^{-1}=0\;, \end{align} with \begin{align} \left( R,T,S\right) \in \left\lbrace \Big(2\frac{qB_0-B_1}{q-1},T_1,2\frac{q^{-1}B_0-B_1}{q-1} \Big), \right. \label{equation-4-q-case1}\\ \left. \Big(k\frac{(q+1)(2q^2+q+1)}{q^{3/2}(q-1)}, 4\frac{2C_1 + 4\alpha^4 +\alpha ^2 -1}{q-1}, k\frac{(q+1)(q^2+q+2)}{q^{3/2}(q-1)} \Big),\right.\label{equation-4-q-case2}\\ \left. \Big(2\frac{(q+1)B_0}{q-1},\frac{4(3\alpha^2-1)}{q-1}, 2\frac{(1+q^{-1})B_0}{q-1} \Big) \right\rbrace\;, \label{equation-4-q-case3} \end{align} where $k=\pm 1$ and \begin{align*} T_1=& 1-q^{-1} +8\frac{(B_0^2 -\alpha^2)((4\alpha^2-3)B_1-B_0)}{(B_1+(4\alpha^2-1)B_0)(q-1)} -4\frac{(B_1-B_0)B_0}{q-1}\;. \end{align*} \end{corollary} \begin{proof} Let ${\bf u}\in \mathcal{P}^*$ be the regular functional with respect to $(P_n)_{n\geq 0}$. Note that \eqref{main-condition-case-deg-is-one-q} follows from \eqref{condition-case-deg-is-two-2}. Assume that under the condition \eqref{main-condition-case-deg-is-one-q}, $(P_n)_{n\geq 0}$ satisfies \eqref{equation-case-deg-is-one-main}.
Then from Theorem \ref{propo-sol-q-quadratic} and its proof, ${\bf u}$ satisfies the distributional equation ${\bf D}_q(\phi {\bf u})={\bf S}_q (\psi {\bf u})$, where \begin{align} &\phi(z)=-\frac{\alpha ^2 -1}{\alpha}z^2 -\frac{1}{2\alpha}\left(B_1 -(2\alpha^2-1)B_0 \right)z+\frac{1}{2\alpha}\left((B_1-B_0)B_0-2C_1 \right) \;,\label{phi-degree-1-q-quadratic}\\ &\psi(z)=z-B_0\;. \label{psi-degree-1-q-quadratic} \end{align} We then apply \eqref{Bn-Cn-Dx} to obtain \begin{align} &C_2=\frac{1}{4\alpha^2-2}\left[(B_0+B_1)^2-4\alpha^2\left(B_0B_1+1-\alpha^2-C_1 \right) \right]\;; \label{C-2-q-quadratic}\\ &B_2=-B_0+\frac{2}{4\alpha^2-3}B_1\;.\label{B-2-q-quadratic} \end{align} In addition, we claim that the parameters $B_0$, $B_1$ and $C_1$ are related by the equation \begin{align} \left(B_1+(4\alpha^2-1)B_0 \right) C_1=\left( \alpha^2-B_0^2 \right)\big(B_0-(4\alpha^2-3)B_1\big)\;,\label{C1-in-term-of-B0-B1-q} \end{align} which holds whether $B_0+B_1=0$ or $B_0+B_1 \neq 0$. Indeed, writing $P_n(z)=z^n +f_nz^{n-1}+\cdots$, where $f_0=0$ and $f_n=-\sum_{j=0} ^{n-1}B_j$ for $n=1,2,\ldots$, we identify the coefficients of the two terms of highest degree in \eqref{equation-case-deg-is-one-main}, using \eqref{Dx-xnSx-xn}, to obtain \begin{align} b_n=\frac{\gamma_n}{\alpha_n},\quad c_n=-r\frac{\gamma_n}{\alpha_{n-1}}+\frac{1}{\alpha_n\alpha_{n-1}} \sum_{j=0} ^{n-1}B_j ~\quad (n=0,1,\ldots)\;.\label{second-hand-datta-case-deg-1-q-quadratic} \end{align} Also by direct computations we obtain \begin{align*} \mathcal{D}_qP_2(z)=&2\alpha z-B_0-B_1\;,\\ \mathcal{S}_qP_2(z)=&(2\alpha^2-1) z^2 -\alpha(B_0+B_1)z+B_0B_1+1-\alpha^2-C_1\;.
\end{align*} Similarly, we obtain $\mathcal{D}_qP_3$ and $\mathcal{S}_qP_3$ by taking $n=3$ in \eqref{TTRR_relation} and using \eqref{def-Dx-fg}--\eqref{def-Sx-fg}: \begin{align*} \mathcal{D}_qP_3(z)=&(4\alpha^2-1)z^2-2\alpha(B_0+B_1+B_2)z+B_0B_1+B_0B_2+B_1B_2\\ &-C_1-C_2 +1-\alpha^2 \;, \end{align*} and \begin{small} \begin{align*} \mathcal{S}_qP_3(z)&=\alpha(4\alpha^2-3) z^3-(2\alpha^2-1)(B_0+B_1+B_2)z^2\\ &+\alpha \Big(B_0B_1+B_0B_2+B_1B_2-C_2-C_1+3(1-\alpha^2) \Big)z\\ &+B_0C_2+B_2C_1+(\alpha^2-1)(B_0+B_1+B_2)-B_0B_1B_2 \;. \end{align*} \end{small} Taking $n=3$ in \eqref{equation-case-deg-is-one-main} and using the preceding computations, we obtain the following equations: \begin{align*} C_2+C_1+4\alpha^2(\alpha^2-1) +\frac{\alpha(4\alpha^2-3)}{2}\Big(2B_2r+ (B_0+B_1)(c_3+2r)\Big)\\ -B_0B_1-B_0B_2-B_1B_2=0\;, \end{align*} \begin{align*} (B_0b_3-r )C_2+(B_2b_3-r-c_3 )C_1+&(\alpha^2-1)b_2(B_0+B_1+B_2+\alpha r)\\ &+(r+c_3)B_0B_1+rB_2(B_0+B_1)=b_3B_0B_1B_2\;, \end{align*} where $b_2$ and $b_3$ are given by \eqref{second-hand-datta-case-deg-1-q-quadratic}. Hence \eqref{C1-in-term-of-B0-B1-q} is obtained from the previous equations by using the expressions of $c_3$, $B_2$, $C_2$ and $r$ given by \eqref{second-hand-datta-case-deg-1-q-quadratic}, \eqref{B-2-q-quadratic}, \eqref{C-2-q-quadratic} and \eqref{main-condition-case-deg-is-one-q}, respectively. \begin{itemize} \item[I-]Assume that $B_1=-B_0$.\\ Since \eqref{main-condition-case-deg-is-one-q} implies $B_1\neq B_0$, we see that in the present case $B_0\neq 0$. From \eqref{C1-in-term-of-B0-B1-q} we have $C_1=\alpha^2-B_0^2$, and then we obtain $\psi(z)=z-B_0$ and $\phi(z)=-(\alpha -\alpha ^{-1})z^2+\alpha B_0 z-\alpha$. This implies that $B_0$ is the only free parameter. Let $a_1$, $a_2$, $a_3$ and $a_4$ be the four complex roots of \eqref{equation-degree-4-q-a} for $(R,T,S)$ given by \eqref{equation-4-q-case3}.
Then $a_1a_2a_3a_4=-1/q$, $a_1a_2+ a_1a_3 +a_1a_4+a_2a_3+a_2a_4+a_3a_4=4(3\alpha^2-1)/(q-1)$ and \begin{align*} B_0&=\frac{q-1}{2(q+1)}(a_1+a_2+a_3+a_4)\\ &=\frac{q-1}{2(1+q^{-1})}(a_1a_2a_3+a_1a_2a_4+a_1a_3a_4+a_2a_3a_4)\;. \end{align*} Hence the result follows by applying \eqref{Bn-Cn-Dx}. Note that $r$ and $c_n$ can be computed using \eqref{main-condition-case-deg-is-one-q} and \eqref{second-hand-datta-case-deg-1-q-quadratic}, respectively. \\ \item[II-] Case where $B_1\neq -B_0$. \begin{itemize} \item[II-a] If $B_1=(1-4\alpha^2)B_0$, then, since $B_1\neq B_0$ by \eqref{main-condition-case-deg-is-one-q}, we obtain from \eqref{C1-in-term-of-B0-B1-q} that $B_0=\pm \alpha$ and so $B_1=\pm \alpha (1-4\alpha^2)$. Therefore $C_1$ is the only free parameter. In addition, we obtain $\psi(z)=z\pm \alpha$ and $\phi(z)=-(\alpha -\alpha ^{-1})z^2 \pm (3\alpha^2-1)z-2\alpha^3 -C_1/\alpha$. Let $a_1$, $a_2$, $a_3$ and $a_4$ be the four complex roots of \eqref{equation-degree-4-q-a} for $(R,T,S)$ given by \eqref{equation-4-q-case2}. Then $a_1a_2a_3a_4=-1/q$, $a_1a_2+ a_1a_3 +a_1a_4+a_2a_3+a_2a_4+a_3a_4=4(2C_1 +4\alpha^4+\alpha^2-1)/(q-1)$, $a_1+a_2+a_3+a_4=kq^{-3/2}(q+1)(2q^2+q+1)/(q-1)$, $a_1a_2a_3+a_1a_2a_4+a_1a_3a_4+a_2a_3a_4=kq^{-3/2}(q+1)(q^2+q+2)/(q-1)$, $k=\pm 1$, and so we obtain \begin{align*} C_1&=\frac{q-1}{8}\Big(a_1a_2+a_1a_3+a_1a_4+a_2a_3+a_2a_4+a_3a_4 -\frac{4(4\alpha^4 +\alpha^2-1)}{q-1} \Big) \;. \end{align*} Hence the result follows by applying \eqref{Bn-Cn-Dx}. Note that $r$ and $c_n$ can be computed using \eqref{main-condition-case-deg-is-one-q} and \eqref{second-hand-datta-case-deg-1-q-quadratic}, respectively. \item[II-b] If $B_1\neq (1-4\alpha^2)B_0$, then from \eqref{C1-in-term-of-B0-B1-q} we can express $C_1$ in terms of $B_0$ and $B_1$, and so in this case $B_0$ and $B_1$ are the only free parameters.
Consider the four complex roots $a_1$, $a_2$, $a_3$ and $a_4$ of equation \eqref{equation-degree-4-q-a} with $(R,T,S)$ given by \eqref{equation-4-q-case1}. Proceeding as in the previous cases, we can write $B_0$, $B_1$ and $C_1$ only in terms of $a_1$, $a_2$, $a_3$ and $a_4$. Therefore, from \eqref{phi-degree-1-q-quadratic}--\eqref{psi-degree-1-q-quadratic}, we use \eqref{Bn-Cn-Dx} to obtain the desired result. \end{itemize} \end{itemize} It is important to notice that in each of the above cases the $a_i$, $i=1,2,3,4$, are roots of the degree-four polynomial \eqref{equation-degree-4-q-a}, with coefficients depending on some free parameters; explicit expressions of these roots can therefore only be obtained with a computer algebra system. Indeed, they take very large and complicated forms, which is why we find it unnecessary to write them here, although they exist. \end{proof} \begin{remark} This section highlights the use of initial conditions in the considered equation, together with Theorem \ref{main-Thm1}, to reduce the number of free parameters and thereby identify the specific polynomial solutions to our problem. Similar investigations can be done for the case where $a\neq 0$ in \eqref{equation-case-deg-is-two}. \end{remark} \section{Conclusion} The results obtained here were proved for the $q$-quadratic lattice $x(s)=(q^{-s} +q^{s})/2$, where $q$ is not a root of unity, but they can easily be extended to the quadratic lattice $x(s)=\mathfrak{c}_4s^2+\mathfrak{c}_5s+\mathfrak{c}_6$ by taking the appropriate limit, as discussed in \cite{KDP2021}. Therefore, by choosing the corresponding quadratic lattice, Theorem \ref{propo-sol-q-quadratic} can easily be adapted to characterize the Racah and Wilson polynomials, provided the analogue of Ismail's Theorem \ref{T} exists. \section*{Declaration} The authors declare that they have no conflict of interest.
\section*{Acknowledgements} We would like to thank Kenier Castillo for drawing our attention to the question solved in this work. This work is supported by the Centre for Mathematics of the University of Coimbra-UID/MAT/00324/2019, funded by the Portuguese Government through FCT/MEC and co-funded by the European Regional Development Fund through the Partnership Agreement PT2020. A. Suzuki is also supported by the FCT grant 2021.05089.BD. D. Mbouna thanks the support of the ERDF and Consejeria de Economia, Conocimiento, Empresas y Universidad de la Junta de Andalucia (grant UAL18-FQM-B025-A).
Chemistry So, reduction decreases the oxidation number (valency). +6: Oxygen in oxyanions is assumed to have an oxidation state of -2; there are four such oxygen atoms, for a total of -8, and the SO4 anion has a charge of -2. Since it is a neutral molecule summation of charges on the molecule is zero. Hope this makes sense. Yes. The oxidation number for Mn in Mn2O7 is + 7. 1 Answer The oxidation number of a free element is always 0. Oxidation number of Zn changes from 0 to +2, and the oxidation number of Cu goes from 2+ to 0. Similarly, a positive oxidation number indicates how many electr ons an atom may acquire during a chemical reaction. Balance TeO3^3- + N2O4 --> Te +NO3^- using oxidation numbers Need help A.S.A.P . The Cu has a decrease oxidation number of +2 going to 0 therefore is being reduced the Zn has an increase in oxidation number going from 0 to +2. 4 years ago. What is the oxidation number of copper in CuSO 4? There are 2 sulphur atoms, so S = +7. 1 decade ago. BaSO4 will precipitate; Cu2+ and Cl- are spectator ions . Calculation of oxidation state of \u2018Cu\u2019 in CuSO4 molecule: Let the oxidation state of \u2018Cu\u2019 is \u201cx\u201d. Chemistry. You may have written the formula wrong. +2 to 0. 2010 22:09 Der Sauerstoff hat die Oxidationszahl -II. Oxidation numbers are defined as the effective charge on an atom in a compound. Beitrag von nicok \u00bb 10.11. Similarly, Cu2+ is undergoing reduction or being reduced or acting as an oxidizing agent. There is no oxidation number for compounds. Lv 7. vor 10 Jahren . Copper (II) sulphate. Formula for the sodium carbonate is Na2CO3.It is understood that Na atoms has 1+ oxidation number and O atoms has a 2- oxidation number. gewinnt. (Copper Sulfate)? Favorite Answer. oxidation number O = -2. oxidation number Fe = 0. Iron has an oxidation state of +3 , therefore , for the sulphate ion ( SO42)- The sum of oxidation states of sulphur and oxygen must equal to -2. oxidation number S = +6. 
How would u figure it out??? The Cu(NO3)2 is an ionic compound with overall oxidation number \u201c0\u201d. Dr.A. The oxidation number for the calcium in CaSO4 is 2+, the oxidation number for oxygen is 2-, and the oxidation number for sulfur is 6+. +2 + 2S +(8\u00d7-2) = +2 +2S -16 = 2S -14, so 2S = +14. SO4 has charge of -2. What are the oxidation numbers for CuSO4? Copper (II) sulfate (1:1) Copper(2+) sulfate (1:1) Granular Crystals Copper Sulfate. Anonymous. Chemistry. Relevance. 1 decade ago. 2 0. the sulfate ion has a -2 charge. CuSO4. Also: multiply the first equation by 5 and then add. The oxidation number of a monatomic ion equals the charge of the ion. a) Oxidation of Fe(II) to Fe(III) (i.e. Fe 2+ to Fe 3+) with potassium permanganate (KMnO 4) in acidic solution. CaCl CaCl2 CaH2 CaO CaClH. CuSO4 is a neutral molecule ... so the sum of the oxidation numbers is zero. 1 decade ago. HSDB 916. Cu goes from ON. Chemistry. The SO4 is just a spectator ion and doesn't participate in the reaction. Cu is oxidized. Copper sulfate reacts with potassium iodide to produce copper(I) iodide, iodine and potassium sulfate.","date":"2021-05-15 04:26:43","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 1, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.7089603543281555, \"perplexity\": 6951.746119650798}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2021-21\/segments\/1620243989812.47\/warc\/CC-MAIN-20210515035645-20210515065645-00339.warc.gz\"}"}
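The charge-balance arithmetic repeated throughout the answers above (x + 6 + 4·(−2) = 0, so x = +2 for Cu in CuSO4) can be sketched in a few lines. This is an illustrative sketch added for clarity, not part of the scraped page; the function name is hypothetical.

```python
# Oxidation state of Cu in CuSO4 via the neutrality rule used above:
# the oxidation numbers in a neutral compound must sum to zero.
OXYGEN = -2   # usual oxidation state of O (sulfate is not a peroxide)
SULFUR = 6    # S in the sulfate ion

def copper_oxidation_state(n_oxygen=4, total_charge=0):
    """Solve x + SULFUR + n_oxygen * OXYGEN = total_charge for x."""
    return total_charge - SULFUR - n_oxygen * OXYGEN

print(copper_oxidation_state())  # 0 - 6 + 8 = +2, matching the thread's answer
```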
null
null
Within our design space for the Pasadena Showcase House of Design for 2018 stood a 100-year-old tree that was one of the most unique elements of our canvas. This mystical tree inspired our theme "The Enchanted Garden", where we installed our 25-foot stream (pondless waterfall), succulent garden and matching succulent urns to add a touch of magic to this patio area. Teaming up with Pacific Outdoor Living for installation of the synthetic turf and paved patio, we were able to create this stunning outdoor space. By California Waterscapes and Pacific Outdoor Living. Our designers teamed up to create an outdoor living space for the annual Pasadena Showcase House of Design home. Our focus was to create an outdoor living space that best utilized the area transitioning to the pool, while maintaining a water-wise/drought-tolerant landscape. We incorporated our hand-cast succulent water table, a pondless stream, synthetic turf, a fully paved patio with our famous Belgard paving stones, high-end outdoor furniture, a custom pergola and even a dry creek bed to tie everything together. Drought-Tolerant Koi Pond and Landscaped Backyard! Can you believe that this pond uses less water than the lawn that stood before it?! Well, believe it! This gorgeous pond, waterfall and landscape actually uses less water than that old lawn! Here's a project where the client traded in their old, ugly lawn for a beautiful pond, waterfall and a low-maintenance landscape of stunning succulents and drought-tolerant plants with synthetic turf. Not only has the client used a fraction of the water since the renovation, the yard is now super low maintenance! This is the family's favorite spot to relax and enjoy any part of the day! If you are considering remodeling your yard, either to save money on your water bill or just to help conserve water in this tough drought, consider changing your yard into a livable and guilt-free paradise with a sustainable water feature! Cesar Millan's Dog Psychology Center – Yin Yang Pond!
Nat Geo Wild's The Fish Tank Kings – Living Color Aquariums called in the Pond Specialists, AKA – California Waterscapes and Pacific Outdoor Living to get the job done right! Before: A pond in the desert? The challenge here for the Fish Tank Kings was creating a pond paradise in the middle of a seemingly dry desert. Ponds in any terrain are our specialty, we found a ground water supply so that it will replenish the water daily and turned Cesar's dream into a reality! After: Cesar Millan's Dream Pond! Our team came up with the design and incorporated the "yin yang" symbol, with half of it being the pond, the other half synthetic turf so no extra water was needed to water a lawn. All the plants used in landscaping were also drought tolerant turning this oasis into drought tolerant masterpiece! Tell us what your dream water feature is!
{ "redpajama_set_name": "RedPajamaC4" }
3,078
All HILLARY CLINTON Quotes about "Parties" "There's a different leader in Syria now. Many of the members of Congress of both parties who have gone to Syria in recent months have said they believe he's a reformer." "I sometimes think that I didn't leave the Republican Party, as much as it left me." "As president, I will ramp up enforcement of trade rules by appointing a new chief trade prosecutor and tripling the number of enforcement officers. We will work with both parties to pass the biggest investment new good paying jobs since World War II." "Thanks to you, we've reached a milestone. The first time in our nation's history that a woman will be a major party's nominee." "Does it make any sense at all that the chair of a national party would want fewer voters to see our candidates?" "I think Ted Cruz is a very mean-spirited guy. You can see it from how the Republican Party responds to him." "Is it important that you have a strong leadership for your party? I think it's critical." "You may have heard that Donald Trum has long refused to release his tax returns, the way every other nominee for president has done for decades. You can look at 40 years of my tax returns. I think we need a law that says, if you become the nominee of the major parties, you have to release your tax returns." "This has never happened to a major - to a nominee of a major party just a few days ago Donald Trump was endorsed by the official newspaper of the Ku Klux Klan. They wrote their endorsement under the slogan of his campaign, "make america great again." Do any of us have a place in Trump`s america?" More Hillary Clinton quote about: Partnerships, Past, Personality, Politicians, Poverty, 44th U.S. President Diane Sawyer Former Governor of New Mexico Former Governor of Massachusetts Tim Kaine 42nd U.S. President Political figure
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
7,976
ACCEPTED

#### According to
Index Fungorum

#### Published in
Bull. Jard. Bot. Nat. Belg. 58(3-4): 475 (1988)

#### Original name
Russula testacea Buyck

### Remarks
null
{ "redpajama_set_name": "RedPajamaGithub" }
5,493
{"url":"http:\/\/www.alecjacobson.com\/weblog\/?tag=trisurf","text":"## Posts Tagged \u2018trisurf\u2019\n\n### Plot a bunch of edges in matlab with per-edge color data\n\nMonday, December 19th, 2016\n\nSuppose you have a list of vertices V (#V by dim) and a list of edges E (#E by 2) indexing V, then one hacky way to draw all of these edges with per-vertex color data DV (#V by 1) is to use trisurf:\n\ntrisurf(E(:,[1 1 2]),V(:,1),V(:,2),V(:,3),'CData',DV,'EdgeColor','interp');\n\n\nHowever, if you try to use this trick to draw per-edge color data DE (#E by 1), you\u2019ll find that 'EdgeColor','flat' does not work. Instead you can explode your compact \u201cmesh\u201d of edges into individual edges and repeat your edge color data for each edge point:\n\ntrisurf(reshape([1:size(E,1),1:size(E,1)*2],[],3),V(E(:),1),V(E(:),2),V(E(:),3),'CData',repmat(DE,2,1),'EdgeColor','interp');\n\n\n### Two-sided material in matlab\n\nMonday, March 23rd, 2015\n\nUnfortunately there seems to be no builtin support for two-sided surfaces in matlab. There\u2019s some rudimentary control over back-face lighting, but that\u2019s all. At least you can determine the back-facing triangles for a given camera position:\n\nN = normals(V,F);\nBC = barycenter(V,F);\nback_facing = sum(N.*bsxfun(@minus,BC,campos),2)<=0;\n\n\nHere\u2019s an example for an armadillo mesh:\n\nt = tsurf(F,V,'EdgeColor','none','FaceLighting','phong');view(2);\naxis equal;\ncamproj('persp')\nt.FaceVertexCData = 1*(sum(N.*bsxfun(@minus,BC,campos),2)<=0)\napply_ambient_occlusion();\n\n\nOf course, if you change the view, the coloring is no longer valid:\n\nSo you need to recompute the coloring:\n\nYou can also insert nans to achieve back-face culling:\n\nt.FaceVertexCData(sum(N.*bsxfun(@minus,BC,campos),2)>0) = nan;\n\n\n### Use NaNs to hide faces in matlab trisurf\/patch renderings\n\nWednesday, February 11th, 2015\n\nI recently (re)discovered that if you set the \u2018CData\u2019 value of a face to nan the face will be hidden. 
This can sometimes be useful if you want to use the wireframe of a model to give context but focus on just a few faces. Here\u2019s an example:\n\n% selected faces\nI = sparse(125,1,1,size(F,1),1);\nwhile true\n% Set all but selected to nan\ntsurf(F,V,'CData',sparse(find(~I),1,nan,size(F,1),1));\nview(-48,4); set(gca,'Visible','off');apply_ambient_occlusion();\ndrawnow;\nif nnz(I)==size(F,1);\nbreak;\nend\nI = A*I;\nend\n\n\n### MATLAB2014b features anti-aliasing\n\nMonday, October 13th, 2014\n\nFinally. I\u2019m pretty happy about the results:\n\n[V,F] = load_mesh('\/usr\/local\/igl\/libigl\/examples\/shared\/cheburashka.off');\nAO = ambient_occlusion(V,F,V,per_vertex_normals(V,F),1000);\nt = tsurf(F,V,fphong,'EdgeColor','none');\nC = squeeze(ind2rgb(floor(matrixnormalize(t.FaceVertexCData)*size(colormap,1))+1,colormap));\nt.FaceVertexCData = bsxfun(@times,C,1-AO);\nt.SpecularStrength = 0.1;\nt.DiffuseStrength = 0.1;\nt.AmbientStrength = 0.8;\nl = light('Position',[1 1 100],'Style','infinite');\nl2 = light('Position',[1 -100 1],'Style','infinite');\nset(gca,'XTickLabel',[],'YTickLabel',[],'ZTickLabel',[],'Color',[0.94 0.94 0.94]);\nset(gcf,'Color','w');\n\n\nAnd to spin the camera around:\n\naxis equal\naxis vis3d;\ncamproj('persp');\nfor f = 1:numel(T)\nt = T(f);\nview(-cos(t*2*pi)*45,sin(t*2*pi)*45+45);\ndrawnow;\nframe = getframe(gcf);\n[SIf,cm] = rgb2ind(frame.cdata,256);\nif f == 1\nimwrite(SIf,cm,filename,'Loop',Inf,'Delay',0);\nelse\nimwrite(SIf,cm, filename,'WriteMode','append','Delay',0);\nend\nend\n\n\nWith the awesome but now obsolete myaa.m hacked anti-aliasing, creating this gif would have taken many minutes. This runs in real time.\n\n### Animated GIF of vector field coordinates in matlab\n\nMonday, August 26th, 2013\n\nSuppose you have a mesh (V,F) and a vector field (or matrix of scalar field values) W. 
You can create a looping animated gif in matlab of a visualization of those values using:\n\nt = trisurf(F,V(:,1),V(:,2),0*V(:,1),'EdgeColor','none','FaceColor','interp','FaceLighting','phong');\nhold on;\n% for me each coordinate corresponds to a point in C\nscatter(C(:,1),C(:,2),'ok','MarkerFaceColor','y','LineWidth',3,'SizeData',100);\nhold off;\naxis equal;\nview(2);\ntitle('Vector Field ','FontSize',20);\nfor w = 1:size(W,2)\nset(t,'CData',W(:,w));\ndrawnow;\nim = myaa('raw');\n[imind,cm] = rgb2ind(im,256);\nfilename = 'vector-field.gif';\nif w == 1;\nimwrite(imind,cm,filename,'gif', 'Loopcount',inf);\nelse\nimwrite(imind,cm,filename,'gif','WriteMode','append');\nend\nend\n\n\nThis produces something like:\n\nSource\n\n### Using patcht to texture map a triangle mesh in matlab\n\nFriday, August 9th, 2013\n\nRecently I found the patcht script which lets you texture map a triangle mesh in matlab. It unfortunately does it in a brute force way: creating a textured surface for every triangle. But it\u2019s at least something. Here\u2019s how I use it for 2d meshes of images using the xy positions as texture coordinates:\n\nim = imread('woody.png');\npatcht(F,V,F,[max(V(:,2))-V(:,2) V(:,1)],im);\naxis equal\n\n\nwhich produces:\n\nWe thank Scott Schaefer for providing the wooden gingerbread man image from \u201cImage Deformation Using Moving Least Squares\u201d.\n\n### Matlab\u2019s trisurf confuses face colors with vertex colors\n\nSunday, July 28th, 2013\n\nI recently had trouble getting matlab\u2019s trisurf to display interpolated colors inside each face. I was calling:\n\n\nset(trisurf(F,V(:,1),V(:,2),V(:,3)),'FaceColor','interp');\n\n\nBut the display looked like flat shading.\n\nThe problem turned out to be that I had 9 faces and 9 vertices. 
In this rare instance matlab decides that the vertex colors (which actually are just the Z coordinates in V(:,3)) should be attached (incorrectly) to the faces and are not interpolatable.\n\nI hacked a fix to this by appending an extra ghost vertex:\n\n\nset(trisurf(F,[V(:,1);0],[V(:,2);0],[V(:,3);0]),'FaceColor','interp');","date":"2017-10-22 21:01:30","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.2089967429637909, \"perplexity\": 6406.978713823995}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 20, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2017-43\/segments\/1508187825464.60\/warc\/CC-MAIN-20171022203758-20171022223758-00586.warc.gz\"}"}
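The back-face test from the "Two-sided material" post above (a dot product of each face normal with BC − campos) has a direct numpy analogue. The sketch below is an assumption-laden translation, not the blog's code: the gptoolbox helpers `normals` and `barycenter` are replaced by a hand-rolled cross product and a per-triangle mean, and a face is flagged back-facing when its counter-clockwise normal points away from the camera (dot > 0), matching the post's culling line; note the post's snippets use both sign conventions, so the orientation of your normals decides which comparison is right for you.

```python
import numpy as np

# numpy analogue of the MATLAB back-face test:
#   back-facing  <=>  dot(N, BC - campos) > 0
# for unnormalized face normal N and face barycenter BC.
def back_facing(V, F, campos):
    """V: (n,3) vertices, F: (m,3) triangle indices, campos: (3,) camera."""
    tri = V[F]                                    # (m, 3, 3) triangle corners
    BC = tri.mean(axis=1)                         # per-face barycenters
    N = np.cross(tri[:, 1] - tri[:, 0],
                 tri[:, 2] - tri[:, 0])           # unnormalized face normals
    return np.einsum('ij,ij->i', N, BC - campos) > 0

# Unit square in the z=0 plane, normals pointing up (+z):
V = np.array([[0., 0., 0.], [1., 0., 0.], [1., 1., 0.], [0., 1., 0.]])
F = np.array([[0, 1, 2], [0, 2, 3]])
print(back_facing(V, F, np.array([0.5, 0.5, 10.0])))   # camera above: [False False]
print(back_facing(V, F, np.array([0.5, 0.5, -10.0])))  # camera below: [ True  True]
```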
null
null
La casa de la Senyora Margarida is a building in Cambrils (Baix Camp) included in the Inventory of the Architectural Heritage of Catalonia. Description It is a large stately building on the corner with Carrer dels Immolats del Setge. On the façade facing Carrer de l'Hospital there are six large basket-arched doorways; on the Carrer dels Immolats side, three more opened, one of which (the first from the corner) was bricked up and another (the second) has been turned into a window; a fourth (the one at number 1 of that street) is smaller than the rest, lowered, with a lintel and a window above it. It has a ground floor and three upper storeys plus a roof terrace. The first two storeys have balconies (the first is continuous, with an iron railing). On the roof terrace there is a railing with cement balusters. A body projects from the façade, in the manner of a cantilevered element. History The house was probably built in the . It bears a heraldic coat of arms above one of the doors, above what appears to be a date, illegible in its first two figures (1836?). Originally it was a single-owner house, but its rooms are now divided among a warehouse-workshop, a dance school, a driving school, an inn, an appliance shop (which has greatly altered the ground-floor space it occupies) and a private home. Until a few years ago it was known as the Casa de la Senyora Margarida or as Cal Campanyà. We have found no record of when the building was commissioned, or by whom. References Buildings in Cambrils Monumental heritage of Cambrils
{ "redpajama_set_name": "RedPajamaWikipedia" }
4,420
{"url":"https:\/\/byjus.com\/question-answer\/consider-the-following-statements-1-the-function-f-x-x-is-not-differentiable-at-x\/","text":"Question\n\nConsider the following statements : 1. The function f(x) = |x|\u00a0is not differentiable at x = 0. 2. The function $$f(x)=e^x$$ is differentiable at x = 0. Which of the above statements is\/are correct ?\n\nA\n1 only\nB\n2 only\nC\nBoth 1 and 2\nD\nNeither 1 nor 2\n\nSolution\n\nThe correct option is C Both 1 and 2Statement 1$$f(x)=x \\ for \\ x \\geq 0$$$$f(x)=-x \\ for \\ x < 0$$For a function to be differentiable at a point, its left-hand and right-hand derivatives must exist and be equal at that point$$\\Rightarrow$$ Left hand derivative (L.H.D) = Right hand derivative (R.H.D)$$L.H.D= f^{'}(x) \\ for \\ x<0$$$$\\Rightarrow L.H.D=-1$$$$R.H.D=f^{'}(x) \\ for \\ x \\geq 0$$$$\\Rightarrow R.H.D =1$$$$\\Rightarrow L.H.D \\neq R.H.D$$\u00a0The function $$f(x)=\\lvert x \\rvert$$ is not differentiable at $$x=0$$Statement 2Derivative of function $$f(x)=e^x \\ , f^{'}(x)=e^x$$$$f^{'}(x)\\ is\\ always\\ continuous \\ for \\ all \\ x \\in R$$$$\\Rightarrow$$ The function $$f(x)=e^x$$ is differentiable at $$x=0$$Both statement 1 and 2 are correct.Mathematics\n\nSuggest Corrections\n\n0\n\nSimilar questions\nView More\n\nPeople also searched for\nView More","date":"2022-01-20 05:25:33","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 1, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.7316023707389832, \"perplexity\": 1062.0849838660376}, \"config\": {\"markdown_headings\": false, \"markdown_code\": true, 
\"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2022-05\/segments\/1642320301720.45\/warc\/CC-MAIN-20220120035934-20220120065934-00063.warc.gz\"}"}
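The two statements in the record above can be illustrated numerically by comparing one-sided difference quotients at x = 0. This sketch is added for illustration only; finite differences are evidence, not a proof.

```python
import math

# Compare one-sided difference quotients of f at x = 0.
def one_sided_derivatives(f, x=0.0, h=1e-7):
    right = (f(x + h) - f(x)) / h
    left = (f(x) - f(x - h)) / h
    return left, right

# Statement 1: f(x) = |x| has L.H.D = -1 and R.H.D = +1 at 0 -> not differentiable.
l, r = one_sided_derivatives(abs)
print(round(l, 4), round(r, 4))   # -1.0 1.0

# Statement 2: f(x) = e^x has both one-sided quotients near e^0 = 1 -> differentiable.
l, r = one_sided_derivatives(math.exp)
print(round(l, 4), round(r, 4))   # 1.0 1.0
```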
null
null
package com.github.subh0m0y.datastructures.trees;

import org.testng.annotations.Test;

import java.util.Arrays;
import java.util.Comparator;
import java.util.Random;

import static com.github.subh0m0y.datastructures.trees.Traversals.*;
import static org.testng.Assert.*;

/**
 * Checks with repeatable, random values. If it passes, then the
 * BST is most probably bug free.
 *
 * @author Subhomoy Haldar
 * @version 2017.02.27
 */
@SuppressWarnings("RedundantThrows")
public class BinarySearchTreeTest {
    private final int size = 1_000_000;
    private final int limit = size * 2;
    // A fixed seed makes the "random" values repeatable across runs,
    // as the class Javadoc promises; an unseeded Random would not be.
    private final Random random = new Random(20170227L);
    private final Comparator<Integer> comparator = Comparator.reverseOrder();
    private BinarySearchTree<Integer> tree;
    private Integer[] mirror;

    @Test
    public void testInsertAndIsPresent() throws Exception {
        tree = new BinarySearchTree<>(comparator);
        mirror = new Integer[size];
        for (int i = 0; i < size; i++) {
            int randomInt = random.nextInt(limit);
            tree.insert(randomInt);
            assertEquals(i + 1, tree.size());
            mirror[i] = randomInt;
        }
        for (int element : mirror) {
            assertTrue(tree.isPresent(element));
        }
    }

    @Test
    public void testRemove() throws Exception {
        tree = new BinarySearchTree<>(comparator);
        mirror = new Integer[size];
        for (int i = 0; i < size; i++) {
            int randomInt = random.nextInt(limit);
            tree.insert(randomInt);
            assertEquals(i + 1, tree.size());
            mirror[i] = randomInt;
        }
        int s = size;
        for (int element : mirror) {
            if (tree.remove(element)) {
                s--;
                assertEquals(s, tree.size());
            }
        }
    }

    @Test
    public void testMaxMinInOrder() throws Exception {
        tree = new BinarySearchTree<>(comparator);
        mirror = new Integer[size];
        for (int i = 0; i < size; i++) {
            int randomInt = random.nextInt(limit);
            tree.insert(randomInt);
            assertEquals(i + 1, tree.size());
            mirror[i] = randomInt;
        }
        Arrays.sort(mirror, comparator);
        assertEquals(mirror[0], tree.min());
        assertEquals(mirror[size - 1], tree.max());
        assertEquals(mirror, tree.toArray());
        assertEquals(mirror, tree.toArray(new Integer[size]));
    }

    @Test
    public void testCopy() throws Exception {
        tree = new BinarySearchTree<>(comparator);
        for (int i = 0; i < size; i++) {
            int randomInt = random.nextInt(limit);
            tree.insert(randomInt);
            assertEquals(i + 1, tree.size());
        }
        BinarySearchTree<Integer> copy = tree.copy();
        assertEquals(tree, copy);
        assertEquals(inOrder(tree), inOrder(copy));
        assertEquals(preOrder(tree), preOrder(copy));
        assertEquals(postOrder(tree), postOrder(copy));
        assertEquals(bfs(tree), bfs(copy));
    }
}
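The central invariant the tests above exercise (after a sequence of inserts, an in-order walk yields the values in comparator order, duplicates included, and `toArray` matches the sorted mirror) can be sketched independently of the repository's implementation. `SketchBst` below is a hypothetical minimal tree written only for illustration; it is not the `BinarySearchTree` class under test.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Minimal BST sketch: duplicates are kept (sent left), and an in-order
// walk returns all inserted values in comparator order by construction.
class SketchBst<E> {
    private static final class Node<E> {
        final E value;
        Node<E> left, right;
        Node(E value) { this.value = value; }
    }

    private final Comparator<? super E> comparator;
    private Node<E> root;
    private int size;

    SketchBst(Comparator<? super E> comparator) {
        this.comparator = comparator;
    }

    void insert(E value) {
        root = insert(root, value);
        size++;
    }

    private Node<E> insert(Node<E> node, E value) {
        if (node == null) {
            return new Node<>(value);
        }
        // "<= 0" sends duplicates left, so they are kept rather than dropped.
        if (comparator.compare(value, node.value) <= 0) {
            node.left = insert(node.left, value);
        } else {
            node.right = insert(node.right, value);
        }
        return node;
    }

    int size() {
        return size;
    }

    // Left subtree, node, right subtree: comparator order by construction.
    List<E> inOrder() {
        List<E> out = new ArrayList<>(size);
        inOrder(root, out);
        return out;
    }

    private void inOrder(Node<E> node, List<E> out) {
        if (node == null) {
            return;
        }
        inOrder(node.left, out);
        out.add(node.value);
        inOrder(node.right, out);
    }
}

class SketchBstDemo {
    public static void main(String[] args) {
        // Same reverse-order comparator as the test class above.
        SketchBst<Integer> tree = new SketchBst<>(Comparator.reverseOrder());
        for (int v : new int[]{5, 1, 5, 3}) {
            tree.insert(v);
        }
        System.out.println(tree.inOrder()); // prints [5, 5, 3, 1]
    }
}
```

With the reverse-order comparator, the in-order walk comes out descending, which is exactly why `testMaxMinInOrder` compares against a mirror sorted with the same comparator.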
/*
 * To change this license header, choose License Headers in Project Properties.
 * To change this template file, choose Tools | Templates
 * and open the template in the editor.
 */
package filters;

import helpers.SessionHelper;
import models.Korisnik;

import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

/**
 * @author Jurica
 */
public class AdminFilter implements Filter {

    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest req = (HttpServletRequest) request;
        HttpServletResponse res = (HttpServletResponse) response;
        Korisnik korisnik = (Korisnik) req.getSession().getAttribute("korisnik");
        // Anonymous visitors and non-administrators are redirected to the
        // regular user's home page. The filter must return here: the original
        // version fell through to chain.doFilter, serving the protected
        // resource even after the redirect had been committed.
        if (korisnik == null || !korisnik.isAdministrator()) {
            posaljiNaKorisnikovuPocetnu(req, res);
            return;
        }
        try {
            chain.doFilter(request, response);
        } catch (Throwable t) {
            t.printStackTrace();
        }
    }

    public void destroy() {
    }

    public void init(FilterConfig filterConfig) {
    }

    // Sends the visitor to the regular user's landing page.
    private void posaljiNaKorisnikovuPocetnu(HttpServletRequest req, HttpServletResponse res)
            throws IOException {
        SessionHelper.postaviProizvodeUSession(req.getSession(), -1, 0);
        res.sendRedirect("/WebShop/Korisnik/Pocetna.jsp");
    }
}
Q: Finding values of k when f(x) is continuous at x=0

Find all values of $k$ such that the given function $$ f(x) = \left\{ \begin{array}{ll} \frac{\sin x}{x}, & \text{ if } x \not= 0 \\ k, & \text{ if } x = 0 \end{array} \right. $$ is continuous at $x = 0$.

A: $$\text{We know}\;\;\lim_{x\to0}\frac{\sin x}x=1,$$ and continuity at $x=0$ means $f(0)=k$ must equal this limit, so the only value is $k=1$.
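For completeness, the quoted limit itself is the standard squeeze-theorem argument, sketched here:

```latex
% Comparing areas in the unit circle gives, for 0 < |x| < \pi/2,
\cos x \;\le\; \frac{\sin x}{x} \;\le\; 1 .
% Since \lim_{x\to 0}\cos x = 1, the squeeze theorem yields
\lim_{x\to 0}\frac{\sin x}{x} = 1 ,
% and continuity at 0 forces k = f(0) = \lim_{x\to 0} f(x) = 1.
```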
As we enter the final stretch of an election that, by now, almost dare not speak its name, the USofA is enabling at least some of the rest of the world to distract itself from its own worries. I had toyed with writing about festival food, or pumpkins (at time of writing the latter are very much moving into the "culinary" part of the news) but instead I am distracted too with this US bias - they may not put on an election that I fully comprehend, but they sure do know how to put on a compelling show. So, while the US is leading from the front - not, sadly, in the field of highbrow political debate, but in psychological medicine - election anxiety being now very much "a thing" - it may be my safest moment to enter a different debate and give a UK take on a US comfort staple; cornbread. Cornbread can be served, like other kinds of bread, as part of, or with, almost any meal at any time of the day, and it exists, so my sources tell me, in various forms - north and south notably differ in whether to add wheat flour and, if so, how much - but the version that I am going to approximate is one from the southern states where cornmeal, in quantity, is essential. And they mind, by the way, about the quality and the freshness of the cornmeal! You can find cornmeal here in various "grinds" - from coarse to fine - and each will make a difference, but it is probably easiest to find as polenta - which is misnamed as an ingredient; it is more accurately the name of the cooked Italian dish. However, it is a coarse ground cornmeal, and can be used to make a coarse grind cornbread. Cornbread is cooked in a skillet (frying pan); it needs strong heat, and strong fat (fat that, even if it doesn't add flavour, will take high heat) - bacon grease is often recommended. 
The cream of the mostly tattoo-sleeved chef brigade who currently dominate much of the conversation about Southern cuisine disagree among themselves on whether to mix any flour or sugar with the corn, and the disputes and debates have extended in the past to whether eggs are allowed in the batter. But as Brits we can probably avoid most of the controversy and do exactly as we please - nobody is likely to be looking; we have no grandmothers to disappoint, no family tradition to uphold, no heirloom recipes; so we can assimilate and appropriate and, even if we get it badly wrong, so long as we like the end result, nobody is likely to get hurt. Weigh out your ingredients. In a large bowl, add about 300g of cornmeal, 1/2 teaspoon of baking powder, 1/2 teaspoon of bicarbonate of soda (baking soda), and a teaspoon of salt, and whisk all these dry ingredients together. In another bowl, beat together 1 large egg with 350ml of buttermilk (or sour cream, or a mix of any kind of dairy that includes some "sour" - the acid helps the rise). Choose a skillet wisely - black, heavy, well-seasoned, preferably cast iron - render down some finely chopped bacon (smoked is good) - cook until the bacon is crisp, scoop the crisp bacon out of the pan and put to one side, and leave the rendered fat (topped up with a little oil or even butter if you haven't got a generous coating). Put the pan in a hot oven, for the fat to get "properly hot" (think "smoking" like for a Yorkshire pudding; remember that this bread should be beyond golden on its under side). Meanwhile, mix your wet ingredients into the dry just like for any other batter - carefully, so that the mixture does not become lumpy. When the batter is ready; put your skillet over a flame (NB remember, it's just come out of the oven; the handle will be HOT!), pour in the batter and listen to it sizzle. 
Finish in a hot oven (200ºC or more); as for your Yorkshires, it will take about 20 minutes (it will be done when a fine skewer prodded into the centre comes out clean), and you want to serve it pretty much straight away, fluffy and warm. You can cut it straight out of the pan - flip it upside down first if you want to see it's crackly bottom - serve with butter, or whatever else you fancy - bacon bits and maple syrup went down well in my house - a true Southerner might prefer molasses. It is a bread almost like any other, in that it will mop up juices well!
{ "redpajama_set_name": "RedPajamaC4" }
6,584
package univie.cs.pps;

import org.apache.commons.lang3.builder.ReflectionToStringBuilder;
import org.apache.commons.lang3.builder.StandardToStringStyle;

import rice.p2p.commonapi.Id;
import rice.p2p.commonapi.Message;

/**
 * A message carrying the value and weight for the {@link PastryPushSum}
 * application.
 *
 * @author Dario Seidl
 */
public class ValueWeightMessage implements Message {
    private static final StandardToStringStyle toStringStyle = new StandardToStringStyle();

    static {
        // Configure the shared style once. (The original used an instance
        // initializer, which re-applied this setting to the static field
        // on every construction.)
        toStringStyle.setUseShortClassName(true);
    }

    private final Id sender;
    private final Id receiver;
    private final double value;
    private final double weight;

    public ValueWeightMessage(Id sender, Id receiver, double value, double weight) {
        this.sender = sender;
        this.receiver = receiver;
        this.value = value;
        this.weight = weight;
    }

    public Id getSender() {
        return sender;
    }

    public Id getReceiver() {
        return receiver;
    }

    public double getValue() {
        return value;
    }

    public double getWeight() {
        return weight;
    }

    @Override
    public int getPriority() {
        return Message.LOW_PRIORITY;
    }

    @Override
    public String toString() {
        return ReflectionToStringBuilder.toString(this, ValueWeightMessage.toStringStyle);
    }
}
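The message above only carries data; the arithmetic it serves is the push-sum gossip protocol (Kempe, Dobra and Gehrke): each round a node halves its (value, weight) pair, keeps one half, and sends the other half in exactly such a message, and every node's value/weight ratio converges to the mean of the initial values. The sketch below simulates that arithmetic stand-alone; it is not part of the univie.cs.pps code base, and the names `PushSumSketch` and `run` are invented for the illustration.

```java
import java.util.Arrays;
import java.util.Random;

// Stand-alone push-sum sketch: synchronous rounds, each node sends half
// of its (value, weight) pair to a uniformly random peer. Mass (total
// value and total weight) is conserved, so every ratio converges to the
// mean of the inputs. Illustrative only, not the PastryPushSum app.
public class PushSumSketch {

    // Runs push-sum and returns each node's estimate value[i]/weight[i].
    static double[] run(double[] initial, int rounds, long seed) {
        int n = initial.length;
        double[] value = Arrays.copyOf(initial, n);
        double[] weight = new double[n];
        Arrays.fill(weight, 1.0); // every node starts with weight 1
        Random rng = new Random(seed);
        for (int round = 0; round < rounds; round++) {
            double[] nextValue = new double[n];
            double[] nextWeight = new double[n];
            for (int i = 0; i < n; i++) {
                double v = value[i] / 2;
                double w = weight[i] / 2;
                int peer = rng.nextInt(n);
                nextValue[i] += v;      // kept half
                nextWeight[i] += w;
                nextValue[peer] += v;   // half carried by the message
                nextWeight[peer] += w;
            }
            value = nextValue;
            weight = nextWeight;
        }
        double[] estimate = new double[n];
        for (int i = 0; i < n; i++) {
            estimate[i] = value[i] / weight[i];
        }
        return estimate;
    }

    public static void main(String[] args) {
        // True mean of {1, 3, 5, 7} is 4; every node's estimate approaches it.
        System.out.println(Arrays.toString(run(new double[]{1, 3, 5, 7}, 200, 42L)));
    }
}
```

This also shows why the message must carry the weight alongside the value: the ratio, not the value alone, is what converges.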
\section{Introduction} \label{sec:introduction} Hitchin's integrable systems arise as a result of dimensional reduction of the self-dual Yang-Mills equation, see~\cite{Hitchin1},~\cite{hitchin1987},~\cite{atiyah_1990}. The Hamiltonians of a Hitchin's system are encoded in the so-called \emph{spectral cover} $\widehat{\Sigma}$ (see~\cite{Donagi1995},~\cite{Donagi1996}) which is an $n$-sheeted cover of a (smooth or, more generally, stable) complex projective curve $\Sigma$ defined as a subvariety of $T^*\Sigma$: \begin{equation} \widehat{\Sigma} = \{ (x,v)\in T^*\Sigma\ \mid\ P(v,x) = 0 \}, \label{eq:def_of_hSigma} \end{equation} where \begin{equation} P(v,x) = v^n + q_1(x) v^{n-1} + \dots + q_n(x), \label{eq:def_of_P(v,x)} \end{equation} $q_j$ is a $j$-differential on $\Sigma$ (i.e. a holomorphic section of $K_{\Sigma}^{\otimes j}$). In the framework of~\cite{Donagi1995}, the equation defining $\widehat{\Sigma}$ is given by the characteristic polynomial $P(v,x) = \det(\Phi(x) - vI)$ of the so-called Higgs field $\Phi$ on $\Sigma$. We consider the moduli space $P\overline{\mathfrak{M}}_g^{(n)}$ of Hitchin's spectral covers in the case of $\mathrm{GL}(n,\mathbb C)$ Hitchin's systems, when all differentials $q_j$ are assumed to be arbitrary. A point in $P\overline{\mathfrak{M}}_g^{(n)}$ parametrizes a pair $(\Sigma, [P])$, where $\Sigma$ is a genus $g$ curve and $P$ is a polynomial of the form~\eqref{eq:def_of_P(v,x)} considered up to multiplication by a non-zero constant $\xi$ given by $(\xi\cdot P)(v,x) = \xi^nP(\xi^{-1}v, x)$. As a space, $P\overline{\mathfrak{M}}_g^{(n)}$ is a bundle over the Deligne-Mumford compactification of the moduli space $\overline{\CMcal{M}}_g$ of genus $g$ curves with fibers isomorphic to a weighted projective space, see Section~\ref{sec:space_of_covers} for details.
Notice that if $n=1$ then $P\overline{\mathfrak{M}}_g^{(n)}$ is just the total space of the projectivized Hodge bundle which can be thought of as a closure of the moduli space of Abelian differentials (considered up to a constant) on genus $g$ smooth projective curves. A. Kokotov and D. Korotkin~\cite{kokotov2009} introduced a tau function on this moduli space called the Bergman tau function. The Bergman tau function is a generalization of the Dedekind eta function (they coincide if $g=1$) and can be interpreted as determinants of a family of Cauchy-Riemann operators in the spirit of~\cite{Palmer1990}. Studying the asymptotics of the Bergman tau function near the boundary of the moduli space of Abelian differentials (embedded into the total space of the Hodge bundle) D. Korotkin and P. Zograf~\cite{KZAbDif} derived a new relation in the rational Picard group of the projectivized Hodge bundle. Thereafter the construction of the Bergman tau function was generalized to the case of the moduli space of $n$-differentials (which is closely related to $P\overline{\mathfrak{M}}_g^{(n)}$ when $n=2$, see~\cite{KZprym} for $n=2$ and~\cite{KorotkonSauvagetZograf} for $n>2$), and finally to the case of $P\overline{\mathfrak{M}}_g^{(n)}$ for any $n$~\cite{KorotkinZograf}. Studying the properties of the Bergman tau function on $P\overline{\mathfrak{M}}_g^{(n)}$ allows one to express the class of the full discriminant locus in the rational Picard group of $P\overline{\mathfrak{M}}_g^{(n)}$ via the set of its standard generators (see Theorem~\ref{thma:Korotkin_Zograf_theorem}). The first objective of the current paper is to enhance and specify this result using standard methods of algebraic geometry. Let $\Sigma$ be a smooth genus $g$ curve and let $P$ be a polynomial of the form~\eqref{eq:def_of_P(v,x)}.
Then the discriminant $W(x) = \mathrm{Discr}(P(\cdot, x))$ is an $n(n-1)$-differential on $\Sigma$ and the divisor of $W$ is equal to the branching divisor of the spectral cover $\widehat{\Sigma}\to \Sigma$ associated with $P$. Generically, all zeros of $W$ are simple, which implies that $\widehat{\Sigma}$ is smooth and the cover $\widehat{\Sigma}\to \Sigma$ is simply ramified. When two zeros of $W$ coalesce, the local behavior of the map $\widehat{\Sigma}\to \Sigma$ changes in one of the following three ways (we follow the notation of~\cite{KorotkinZograf} in this description):
1) A node (normal self-crossing of $\widehat{\Sigma}$) occurs at the ramification point of $\widehat{\Sigma}\to \Sigma$ over the double zero of $W$. We call the locus of such covers \textbf{the ``boundary''}.
2) Two distinct ramification points of $\widehat{\Sigma}\to\Sigma$ arise in the preimage of the double zero of $W$. We call the locus of such covers \textbf{the ``Maxwell stratum''}.
3) Two ramification points of $\widehat{\Sigma}\to \Sigma$ coalesce to become a ramification point of order $3$ over the double zero of $W$. We call the locus of such covers \textbf{the ``caustic''}.
We use the terms ``Maxwell stratum'' and ``caustic'' following the terminology developed in V. Arnold's school in Moscow, see for example~\cite{lando2010graphs}. The correspondence $P\mapsto \mathrm{Discr}(P)$ defines a map from $P\overline{\mathfrak{M}}_g^{(n)}$ to the moduli space of pairs $(\Sigma, W)$, where $W$ is an $n(n-1)$-differential on $\Sigma$ considered up to a multiplication by a non-zero constant. Let $P\overline{D}_W$ denote the pullback of the divisor consisting of those $W$ that have at least one multiple zero (see Section~\ref{sec:discriminant_locus} for details). Then the support of $P\overline{D}_W$ splits into the union of three components $P\overline{D}_W^{(b)}\cup P\overline{D}_W^{(m)}\cup P\overline{D}_W^{(c)}$ in accordance with the three possibilities described above.
We call the divisor $P\overline{D}_W$ \emph{the full discriminant divisor}. The class of the divisor $P\overline{D}_W$ in the rational Picard group of $P\overline{\mathfrak{M}}_g^{(n)}$ is called \emph{the class of the universal Hitchin's discriminant}. The following theorem was proven in~\cite[Theorem~3.2]{KorotkinZograf}: \begin{thma} The divisor $P\overline{D}_W$ satisfies \begin{equation*} P\overline{D}_W =P\overline{D}_W^{(b)} + 2P\overline{D}_W^{(m)} + 3P\overline{D}_W^{(c)} \end{equation*} and the class of the universal Hitchin's discriminant $P\overline{D}_W$ is expressed in terms of the standard generators of $\mathrm{Pic}(P\overline{\mathfrak{M}}_g^{(n)})\otimes \mathbb Q$ as follows: \begin{equation*} [P\overline{D}_W] = n(n-1)\Bigl( (n^2-n+1)(12\lambda-\delta) - 2(g-1)(2n^2-2n+1)\phi \Bigr). \end{equation*} \label{thma:Korotkin_Zograf_theorem} \end{thma} Here $\delta = \sum_{j=0}^{[g/2]}\delta_j$ is the pullback of the class of the Deligne-Mumford boundary of $\overline{\CMcal{M}}_g$, see Section~\ref{subsec:generators_of_Pic} for details. We generalize this result by expressing the class of each of the three components of the full discriminant divisor via the set of generators of $\mathrm{Pic}(P\overline{\mathfrak{M}}_g^{(n)})\otimes\mathbb Q$: \begin{thma} Let $n\geq 3$ and $g\geq 1$. The following formulas hold in $\mathrm{Pic}(P\overline{\mathfrak{M}}_g^{(n)})\otimes \mathbb Q$: \begin{align*} & [P\overline{D}_W^{(b)}] = n(n-1)\Bigl( (n+1)(12\lambda-\delta) - 2(g-1)(2n+1)\phi \Bigr)\\ & [P\overline{D}_W^{(m)}] = \frac{n(n-1)(n-2)(n-3)}{2}\Bigl( 12\lambda-\delta + 4(g-1)\phi \Bigr)\\ & [P\overline{D}_W^{(c)}] = n(n-1)(n-2)\Bigl( 12\lambda-\delta - 4(g-1)\phi \Bigr). \end{align*} \label{thma:formulas_for_three_divisors} \end{thma} The second objective of the paper is to relate two Hodge classes on $P\overline{\mathfrak{M}}_g^{(n)}$. 
Note that since the degree of the cover $\widehat{\Sigma}\to\Sigma$ is equal to $n$ and the degree of the branching divisor is $\deg\mathrm{div}(W) = 2n(n-1)(g-1)$, the genus of $\widehat{\Sigma}$ is $\widehat{g}=g(\widehat{\Sigma}) = n^2(g-1)+1$. Thus we have two morphisms $P\overline{\mathfrak{M}}_g^{(n)}\to \overline{\CMcal{M}}_g$ and $P\overline{\mathfrak{M}}_g^{(n)}\to \overline{\CMcal{M}}_{\hg}$, where the first morphism maps $(\Sigma, [P])$ to the moduli of $\Sigma$ and the second one maps $(\Sigma, [P])$ to the moduli of $\widehat{\Sigma}$. Hence we can define two Hodge classes: the class $\lambda$ is the pullback of the Hodge class from $\overline{\CMcal{M}}_g$ and the class $\widehat{\lambda}$ is the pullback of the Hodge class from $\overline{\CMcal{M}}_{\hg}$. The next theorem provides a formula in $\mathrm{Pic}(P\overline{\mathfrak{M}}_g^{(n)})\otimes \mathbb Q$ which relates $\lambda$ and $\widehat{\lambda}$: \begin{thma} Let $n\geq 3$ and $g\geq 1$. The following formula holds in $\mathrm{Pic}(P\overline{\mathfrak{M}}_g^{(n)})\otimes \mathbb Q$: \begin{equation*} \widehat{\lambda} = n(2n^2-1)\lambda - \frac{n(n-1)(4n+1)(g-1)}{6}\phi - \frac{n(n^2-1)}{6}\delta. \end{equation*} \label{thma:hlambda_formula} \end{thma} Note that all coefficients of the right-hand side in the formula in Theorem~\ref{thma:hlambda_formula} are integers. The paper is organized as follows. In Section~\ref{sec:polynomials_variety} we recall some basic facts about the geometry of the space of monic polynomials with multiple roots and derive a technical lemma that will be used for a local analysis of the discriminant $W$ with multiple zeros. In Section~\ref{sec:space_of_covers} we recall the construction of $P\overline{\mathfrak{M}}_g^{(n)}$ and some of its basic properties, and introduce some related notation. 
In Section~\ref{sec:discriminant_locus} we prove Theorem~\ref{thma:formulas_for_three_divisors} and in Section~\ref{sec:hlambda} we prove Theorem~\ref{thma:hlambda_formula}. \subsection*{Acknowledgments.} I would like to thank Dmitry Korotkin and Peter Zograf for helpful discussions. The research was supported by the grant of the Government of the Russian Federation for the state support of scientific research carried out under the supervision of leading scientists, agreement 14.W03.31.0030 dated 15.02.2018. \section{Variety of monic polynomials} \label{sec:polynomials_variety} Recall that a polynomial of the form $P(t) = t^n + q_1t^{n-1}+\dots+q_n$ is called \emph{monic}. In this section we assume that $(q_1,\dots,q_n)\in \mathbb C^n$. The discriminant $\mathrm{Discr}(P)$ is a polynomial in $q_1,\dots,q_n$ and the equation $\mathrm{Discr}(P) = 0$ defines the affine variety $\CMcal{D}\subset \mathbb C^n$ of monic polynomials with multiple roots (see~\cite{lando2010graphs}). The variety $\CMcal{D}$ is not smooth: it has a normal crossing along the subvariety $\CMcal{D}^{(m)}\subset \CMcal{D}$ that parametrizes polynomials with two multiple roots, and it has a cusp along the subvariety $\CMcal{D}^{(c)}\subset \CMcal{D}$ corresponding to polynomials that have a root of order $3$ or higher. To construct the normalization of $\CMcal{D}$ let us consider the variety $\widehat{\CMcal{D}}\subset \mathbb C^n\times \mathbb C$ given by \begin{equation} \widehat{\CMcal{D}} = \{(q_1,\dots, q_n, t)\in \mathbb C^n\times \mathbb C \mid\ P(t) = 0,\ P'(t) = 0\}, \label{eq:def_of_hD} \end{equation} where $P(t) = t^n + q_1t^{n-1}+\dots+q_n$. The forgetful projection $ \mathbb C^n\times \mathbb C\to \mathbb C^n$ maps $\widehat{\CMcal{D}}$ onto $\CMcal{D}$, and the induced mapping $\widehat{\CMcal{D}}\to \CMcal{D}$ has degree one over an open subset of $\CMcal{D}$.
To see that $\widehat{\CMcal{D}}$ is smooth observe that the functions $t, P(t),P'(t),\dots,P^{(n-1)}(t)$ form another coordinate system on $\mathbb C^n\times \mathbb C$. In these coordinates $\widehat{\CMcal{D}}$ is just a linear subspace given by two linear equations $P(t) = 0,\ P'(t)=0$. The group $\mathbb C^*$ acts on the space $\mathbb C^n$ of monic polynomials by the rule $(\xi\cdot P)(t) = \xi^n P(\xi^{-1}t)$. In terms of the coefficients $q_1,\dots, q_n$ this action can be rewritten as \begin{equation} \xi\cdot (q_1,q_2, \dots, q_n) = (\xi q_1, \xi^2 q_2,\dots, \xi^n q_n). \label{eq:action_of_Cstar_on_q} \end{equation} Denote by $P\mathbb C^n$ the projectivization under this action. The weighted projective space $P\mathbb C^n$ is a smooth orbifold. The variety $\CMcal{D}$ is equivariant under the action of $\mathbb C^*$ and we denote its projectivization by $P\CMcal{D}$. The action of $\mathbb C^*$ on $\mathbb C^n$ lifts to the action of $\mathbb C^*$ on $\mathbb C^n\times \mathbb C$ given by $\xi\cdot(P,t) = (\xi\cdot P, \xi t)$. The variety $\widehat{\CMcal{D}}$ is equivariant under this action and we denote the projectivization by $P\widehat{\CMcal{D}}$. Note that the map $\widehat{\CMcal{D}}\to \CMcal{D}$ induces a map $P\widehat{\CMcal{D}}\to P\CMcal{D}$ of projectivized varieties (although we do not have a map from $P(\mathbb C^n\times \mathbb C)$ to $P\mathbb C^n$). \begin{lemma} Let $n\geq 3$. Given a monic polynomial $P(t) = t^n + q_1t^{n-1} + \dots + q_n$ we denote by $P_{n-2}(t) = t^{n-2} + q_1 t^{n-3} + \dots + q_{n-2}$ the polynomial $t^{-2}(P(t) - q_{n-1}t - q_n)$. There exist polynomials $R_0,R_1,S\in \mathbb C[q_1,\dots, q_n]$ such that the equation \begin{equation*} \mathrm{Discr}(P) = - 4q_nq_{n-2}^3\mathrm{Discr}(P_{n-2}) + q_n(q_{n-1}R_1 + q_n R_0) + q_{n-1}^2 S \end{equation*} holds for any choice of the polynomial $P(t) = t^n + q_1t^{n-1} + \dots + q_n$.
\label{lemma:decomposition_of_Disc} \end{lemma} \begin{proof} Clearly, $\mathrm{Discr}(P)$ lies in the ideal in $\mathbb C[q_1,\dots,q_n]$ generated by $q_n$ and $q_{n-1}$, so we can find $S_1,S_2\in \mathbb C[q_1,\dots, q_n]$ such that $\mathrm{Discr}(P) = q_n S_1 + q_{n-1}S_2$ where $S_2$ is independent of $q_n$. Let $S_2 = q_{n-1} S + S_3$, where $S_3$ is independent of $q_{n-1}$. We obtain \begin{equation} \mathrm{Discr}(P) = q_n S_1 + q_{n-1}^2S + q_{n-1}S_3. \label{eq:intermediate_decomp_of_Disc_-1} \end{equation} Consider the polynomial \begin{equation*} P_z(t) = (t-z)(t-2z)P_{n-2}(t) = t^n + q_1(z) t^{n-1} + \dots + q_n(z) \end{equation*} where $z$ is a formal variable. Then $q_n(z)$ is divisible by $z^2$ and $q_{n-1}(z)$ is divisible by $z$ but not by $z^2$. We have \[\begin{array}{lll} \mathrm{Discr}(P_z)&= &q_n(z) S_1(q_1(z),\dots,q_n(z)) + q_{n-1}(z)^2 S(q_1(z),\dots,q_{n-1}(z))+\\ & &\hspace{10em}+ q_{n-1}(z) S_3(q_1(z),\dots,q_{n-2}(z))\\ &= &q_{n-1}(z)S_3(q_1,\dots,q_{n-2}) + z^2Q_1 \end{array}\] due to~\eqref{eq:intermediate_decomp_of_Disc_-1}, where $Q_1\in \mathbb C[z, q_1,\dots,q_n]$ is some polynomial. Since $\mathrm{Discr}(P_z)$ is divisible by $z^2$, we see that $S_3$ must be zero, hence $S_2 = q_{n-1} S$ and \begin{equation} \mathrm{Discr}(P) = q_n S_1 + q_{n-1}^2S. \label{eq:intermediate_decomp_of_Disc_0} \end{equation} We can find $R_0,R_1,R_2,R_3\in \mathbb C[q_1,\dots,q_n]$, where the polynomial $R_2$ is independent of $q_n, q_{n-1}$ and $R_3$ is independent of $q_n, q_{n-1}, q_{n-2}$, such that $S_1 = q_n R_0 + q_{n-1} R_1 + q_{n-2} R_2 + R_3$. Equation~\eqref{eq:intermediate_decomp_of_Disc_0} then reads \begin{equation} \mathrm{Discr}(P) = q_nq_{n-2}R_2(q_1,\dots,q_{n-2}) + q_n(q_{n-1}R_1 + q_n R_0) + q_{n-1}^2S + q_nR_3(q_1,\dots,q_{n-3}).
\label{eq:intermediate_decomp_of_Disc_1} \end{equation} Put $P_{n-3}(t) = t^{n-3} + q_1 t^{n-4} + \dots + q_{n-3}$, and consider the polynomial \begin{equation*} \begin{split} & P_z(t) = (t-3z)(t-6z)(t+2z)P_{n-3}(t) =\\ &= (t^3-7zt^2 +36z^3)P_{n-3}(t) = t^n + q_1(z) t^{n-1} + \dots + q_n(z) \end{split} \end{equation*} where $z$ is a formal variable. Then $q_n(z)$ and $q_{n-1}(z)$ are divisible by $z^3$, and $q_{n-2}(z)$ is divisible by $z$. Using these observations together with~\eqref{eq:intermediate_decomp_of_Disc_1} we conclude that \begin{equation*} \mathrm{Discr}(P_z) = q_n(z)R_3(q_1,\dots, q_{n-3}) + z^4Q_2, \end{equation*} where $Q_2\in \mathbb C[z, q_1,\dots,q_n]$ is some polynomial. Note that $\mathrm{Discr}(P_z)$ is divisible by $z^4$ (it is even divisible by $z^6$). Since $q_n(z)$ is not divisible by $z^4$, the polynomial $R_3$ should be zero. It follows that we can now rewrite~\eqref{eq:intermediate_decomp_of_Disc_1} as \begin{equation} \mathrm{Discr}(P) = q_nq_{n-2}R_2(q_1,\dots,q_{n-2}) + q_n(q_{n-1}R_1 + q_n R_0) + q_{n-1}^2S. \label{eq:intermediate_decomp_of_Disc_2} \end{equation} Using this equation and the factorization properties of $q_n(z), q_{n-1}(z), q_{n-2}(z)$ again we conclude that \begin{equation*} \mathrm{Discr}(P_z) = q_n(z)q_{n-2}(z) R_2(q_1(z), \dots,q_{n-2}(z)) + z^6Q_3, \end{equation*} where $Q_3\in \mathbb C[z, q_1,\dots,q_n]$ is some polynomial. Since $\mathrm{Discr}(P_z)$ is divisible by $z^6$ while $q_n(z)q_{n-2}(z)$ is divisible only by $z^4$, the polynomial $R_2(q_1(z), \dots,q_{n-2}(z))$ must be divisible by $z^2$. It can be easily shown that this is possible only if $R_2(q_1,\dots, q_{n-2})$ is divisible by $q_{n-2}^2$, i.e. there exists a polynomial $R_4\in \mathbb C[q_1,\dots,q_{n-2}]$ such that \begin{equation} \mathrm{Discr}(P) = q_nq_{n-2}^3R_4(q_1,\dots,q_{n-2}) + q_n(q_{n-1}R_1 + q_n R_0) + q_{n-1}^2S.
\label{eq:intermediate_decomp_of_Disc_3} \end{equation} Now consider the polynomial \begin{equation*} \begin{split} & P_z(t) = (t^2-z) P_{n-2}(t) =\\ & = t^n + q_1 t^{n-1} + (q_2 - z)t^{n-2} + (q_3-zq_1)t^{n-3} + \dots + (q_{n-2} - zq_{n-4})t^2 -zq_{n-3}t - zq_{n-2} \end{split} \end{equation*} It follows from~\eqref{eq:intermediate_decomp_of_Disc_3} that \begin{equation} \mathrm{Discr}(P_z) = -zq_{n-2}^4 R_4(q_1,\dots, q_{n-2}) + z^2Q_4, \label{eq:asymptotics_of_Pz_Disc} \end{equation} where $Q_4\in \mathbb C[z, q_1,\dots,q_n]$ is some polynomial. On the other hand, since $\mathrm{Discr}(t^2-z) = 4z$, we have \[ \mathrm{Discr}(P_z) = 4z\cdot\mathrm{Discr}(P_{n-2})\cdot \mathrm{Res}(t^2-z, P_{n-2}(t))^2 = 4zq_{n-2}^4\cdot\mathrm{Discr}(P_{n-2}) + z^2Q_5, \] where $Q_5\in \mathbb C[z, q_1,\dots,q_n]$ is some polynomial. Comparing this equation with~\eqref{eq:asymptotics_of_Pz_Disc} we find that $R_4 = -4\,\mathrm{Discr}(P_{n-2})$. Substituting this equality into~\eqref{eq:intermediate_decomp_of_Disc_3} we get the statement of the lemma. \end{proof} \section{Space of covers} \label{sec:space_of_covers} Let $\Sigma$ be a smooth projective curve of genus $g$. Denote by $\mathfrak{M}_{g,\Sigma}^{(n)}$ the moduli space of $\mathrm{GL}(n,\mathbb C)$ spectral covers of $\Sigma$: \begin{equation} \mathfrak{M}_{g,\Sigma}^{(n)} = \bigoplus_{j = 1}^n H^0(\Sigma, K_{\Sigma}^{\otimes j}) \label{eq:def_of_Mhit_Sigma} \end{equation} where $K_{\Sigma}$ is the canonical class of $\Sigma$ and \begin{equation} \dim\mathfrak{M}_{g,\Sigma}^{(n)} = n^2(g-1)+1. \label{eq:def_of_hg} \end{equation} A point $(q_1,\dots,q_n)\in \mathfrak{M}_{g,\Sigma}^{(n)}$ can be considered as a polynomial $P(t,x) = t^n + q_1(x)t^{n-1} + \dots + q_n(x)$. For each $x\in \Sigma$ and $v\in T_x^*\Sigma$ the value $P(v,x)$ is an element of $(T^*_x\Sigma)^{\otimes n}$.
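The dimension formula~\eqref{eq:def_of_hg} is a direct Riemann--Roch count (stated here for smooth $\Sigma$ of genus $g\geq 2$; the $j=1$ summand contributes $h^0(K_\Sigma)=g$ and each summand with $j\geq 2$ contributes $(2j-1)(g-1)$):

```latex
\dim\mathfrak{M}_{g,\Sigma}^{(n)}
 = h^0(K_\Sigma) + \sum_{j=2}^{n} h^0\bigl(K_\Sigma^{\otimes j}\bigr)
 = g + \sum_{j=2}^{n} (2j-1)(g-1)
 = g + (n^2-1)(g-1)
 = n^2(g-1)+1.
```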
The spectral cover $\widehat{\Sigma}$ associated with $P$ is a subvariety in $T^*\Sigma$ defined by \begin{equation*} \widehat{\Sigma} = \{(x, v)\in \Sigma\times T_x^*\Sigma \mid\ P(v,x) = 0\}; \end{equation*} clearly, $\widehat{\Sigma}$ is a projective curve. Generically $\widehat{\Sigma}$ is smooth and all the ramification points of the projection $\widehat{\Sigma}\to\Sigma$ are simple. If $\widehat{\Sigma}$ is smooth, then by the Riemann-Hurwitz formula the genus of $\widehat{\Sigma}$ is equal to $\widehat{g} = n^2(g-1)+1$. Define the action of $\mathbb C^*$ on $\mathfrak{M}_{g,\Sigma}^{(n)}$ by \begin{equation} (\xi\cdot P)(t,x) = \xi^n P(\xi^{-1}t, x). \label{eq:def_of_action_of_Cstar} \end{equation} Denote by $P\mathfrak{M}_{g,\Sigma}^{(n)}$ the corresponding projectivization. Let $\overline{\CMcal{M}}_g$ be the Deligne-Mumford compactification of the moduli space of genus $g$ curves and let $\nu: \overline{\CMcal{M}}_{g,1}\to \overline{\CMcal{M}}_g$ be the universal curve. We define the moduli space of $\mathrm{GL}(n,\mathbb C)$ Hitchin's spectral covers by \begin{equation} \overline{\mathfrak{M}}_g^{(n)} = \bigoplus_{j = 1}^n R^0\nu_*\omega_{\nu}^{\otimes j}, \label{eq:def_of_cMhit} \end{equation} where $\omega_{\nu}$ is the relative dualizing sheaf. The forgetful projection $\overline{\mathfrak{M}}_g^{(n)}\to \overline{\CMcal{M}}_g$ is a bundle with fiber over $\Sigma$ isomorphic to $\mathfrak{M}_{g,\Sigma}^{(n)}$ (in the case when $\Sigma$ is not smooth one has to replace $K_{\Sigma}$ with the relative dualizing sheaf on $\Sigma$). The action of $\mathbb C^*$ on $\mathfrak{M}_{g,\Sigma}^{(n)}$ defined by~\eqref{eq:def_of_action_of_Cstar} extends to the action on $\overline{\mathfrak{M}}_g^{(n)}$. Let $P\overline{\mathfrak{M}}_g^{(n)}$ denote the corresponding projectivization. The space $P\overline{\mathfrak{M}}_g^{(n)}$ is a smooth orbifold (or a Deligne-Mumford stack).
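The Riemann--Hurwitz count behind the genus value $\widehat{g} = n^2(g-1)+1$ quoted above, spelled out (each of the $\deg\mathrm{div}(W) = 2n(n-1)(g-1)$ simple branch points contributes $1$ to the ramification term):

```latex
2\widehat{g} - 2
 \;=\; n\,(2g-2) \;+\; \deg\mathrm{div}(W)
 \;=\; n\,(2g-2) + 2n(n-1)(g-1)
 \;=\; 2n^2(g-1),
\qquad\text{hence}\qquad \widehat{g} = n^2(g-1)+1.
```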
Denote by $\CMcal{L}\to P\overline{\mathfrak{M}}_g^{(n)}$ the tautological line bundle associated with the projectivization. Let $\pi:\CMcal{C}\cMhit\to P\overline{\mathfrak{M}}_g^{(n)}$ be the pullback of the universal curve $\overline{\CMcal{M}}_{g,1}\to\overline{\CMcal{M}}_g$. Denote by $\omega_{\pi}$ the relative dualizing sheaf and set \begin{equation} \psi = c_1(\omega_{\pi}). \label{eq:def_of_psi_1} \end{equation} Let $\widehat{\pi}: \widehat{\CMcal{C}}\cMhit\to P\overline{\mathfrak{M}}_g^{(n)}$ be the universal family of Hitchin's spectral curves, so that the fiber of $\widehat{\pi}$ over a point $(\Sigma, [P])\in P\overline{\mathfrak{M}}_g^{(n)}$ is isomorphic to the curve $\widehat{\Sigma}$ associated with $P$. A point in $\widehat{\CMcal{C}}\cMhit$ can be represented by a quadruple $(\Sigma,x, P, v)$, where $P\in \mathfrak{M}_{g,\Sigma}^{(n)}$, $x\in \Sigma$, $v\in T_x^*\Sigma$ and $P(v,x) = 0$. It is straightforward to check that the map $p:\widehat{\CMcal{C}}\cMhit\to \CMcal{C}\cMhit$ that forgets $v$ is a branched cover that coincides with $\widehat{\Sigma}\to \Sigma$ fiber-wise over $P\overline{\mathfrak{M}}_g^{(n)}$. Let us denote by $\widehat{\mathfrak{B}}\subset \widehat{\CMcal{C}}\cMhit$ the ramification divisor of $p$ and by $\mathfrak{B} = p(\widehat{\mathfrak{B}})\subset \CMcal{C}\cMhit$ the branching divisor. Consider the projection $\widehat{\mathfrak{B}}\to \overline{\CMcal{M}}_{g,1}$ that maps $(\Sigma, x, P, v)$ to $(\Sigma, x)$. Let $(\Sigma,x)$ be a curve with a marked point $x$ and assume for simplicity that $\Sigma$ is smooth (otherwise one has to consider the normalization of $\Sigma$). Let $(\mathfrak{M}_{g,\Sigma}^{(n)})_x\subset \mathfrak{M}_{g,\Sigma}^{(n)}$ be the subvariety consisting of $(q_1,\dots,q_n)$ such that $q_j(x) = 0$ for each $j$. The fiber of $\widehat{\mathfrak{B}}\to \overline{\CMcal{M}}_{g,1}$ is (not canonically) isomorphic to $P(\widehat{\CMcal{D}}\times (\mathfrak{M}_{g,\Sigma}^{(n)})_x)$.
Similarly, the fiber of the projection $\mathfrak{B}\to \overline{\CMcal{M}}_{g,1}$ is isomorphic to $P(\CMcal{D}\times (\mathfrak{M}_{g,\Sigma}^{(n)})_x)$ (see Section~\ref{sec:polynomials_variety}, where we define $\CMcal{D}$ and $\widehat{\CMcal{D}}$). Notice that \begin{equation} (\mathfrak{M}_{g,\Sigma}^{(n)})_x \simeq \mathbb C^{n^2(g-1)-n+2} \label{eq:Mhit_x_Sigma_isomorphic_to_C_ghat-n} \end{equation} (cf.~\eqref{eq:def_of_hg}). Define the action of $\mathbb C^*$ on $\mathbb C^{n^2(g-1)-n+2}$ via this isomorphism. The following lemma is straightforward: \begin{lemma} (1) The projection $\widehat{\mathfrak{B}}\to \overline{\CMcal{M}}_{g,1}$ is a bundle with the fiber $P(\widehat{\CMcal{D}}\times \mathbb C^{n^2(g-1)-n+2})$ (i.e. it can be locally represented as a projection of the form $P(\widehat{\CMcal{D}}\times \mathbb C^{n^2(g-1)-n+2})\times X\to X$). In particular, the variety $\widehat{\mathfrak{B}}$ is smooth. (2) The projection $\mathfrak{B}\to \overline{\CMcal{M}}_{g,1}$ is a bundle with fiber $P(\CMcal{D}\times \mathbb C^{n^2(g-1)-n+2})$. In particular, the singularities of $\mathfrak{B}$ are normal crossings and cusps. (3) The map $\widehat{\mathfrak{B}}\to \mathfrak{B}$ is a bundle morphism that is given fiber-wise by the map $\widehat{\CMcal{D}}\to \CMcal{D}$. (4) The ramification of the map $p: \widehat{\CMcal{C}}\cMhit\to \CMcal{C}\cMhit$ is simple at a generic point. \label{lemma:hB_is_smooth} \end{lemma} \section{Components of the universal discriminant locus} \label{sec:discriminant_locus} Let $P = t^n + q_1t^{n-1} + \dots + q_n$ represent an element in $\mathfrak{M}_{g,\Sigma}^{(n)}$. Consider the discriminant $W(x) = \mathrm{Discr}(P(\cdot, x))$. Recall that $W$ is an $N$-differential, where $N = n(n-1)$, and the divisor of $W$ is equal to the branching divisor of the spectral cover $\widehat{\Sigma}\to \Sigma$ associated with $P$. 
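For orientation, the smallest case $n=2$ (below the range $n\geq 3$ of the theorems above, but with the identical degree count) makes the statement about $W$ concrete:

```latex
P(v,x) = v^2 + q_1(x)\,v + q_2(x), \qquad
W = \mathrm{Discr}\bigl(P(\cdot,x)\bigr) = q_1^2 - 4q_2 .
% Both q_1^2 and q_2 are sections of K_{\Sigma}^{\otimes 2}, so W is an
% N-differential with N = n(n-1) = 2 (a quadratic differential), and
% \deg\mathrm{div}(W) = 2(2g-2) = 2n(n-1)(g-1).
```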
Generically all zeros of $W$ are simple, thus $\widehat{\Sigma}$ is smooth and the ramification of $\widehat{\Sigma}\to \Sigma$ is simple. If $x$ is a zero of order $2$ of $W$ then there are three possibilities that describe the local behaviour of the cover $\widehat{\Sigma}\to\Sigma$; we will follow the notation of~\cite{KorotkinZograf}: 1) There is one simple ramification point of $\widehat{\Sigma}\to \Sigma$ over $x$ and $\widehat{\Sigma}$ has a node (normal crossing) at this point. We call the locus of such covers \textbf{the ``boundary''}. 2) The cover $\widehat{\Sigma}\to \Sigma$ has two simple ramification points over $x$ and $\widehat{\Sigma}$ is smooth at these points. We call the locus of such covers \textbf{the ``Maxwell stratum''}. 3) The cover $\widehat{\Sigma}\to \Sigma$ has a ramification point of order $3$ over $x$ and $\widehat{\Sigma}$ is smooth at this point. We call the locus of such covers \textbf{the ``caustic''}. Let $\overline{\mathrm{\mathbf{Q}}}_g^N$ be the moduli space of pairs $(\Sigma, W)$, where $\Sigma$ is a curve of genus $g$ and $W$ is an $N$-differential on it (or a section of $\omega_{\Sigma}^{\otimes N}$ if $\Sigma$ is not smooth). The map $P\mapsto \mathrm{Discr}(P)$ gives rise to a map $\mathrm{Discr}:\overline{\mathfrak{M}}_g^{(n)}\to \overline{\mathrm{\mathbf{Q}}}_g^N$. Let $D_{\mathrm{deg}}$ be the divisor in $\overline{\mathrm{\mathbf{Q}}}_g^N$ parametrizing pairs $(\Sigma, W)$, where $W$ has multiple zeros. The locus $\mathrm{Discr}^{-1}(D_{\mathrm{deg}})$ has three components $\overline{D}_W^{(b)}$, $\overline{D}_W^{(m)}$ and $\overline{D}_W^{(c)}$ in accordance with the three possibilities described above. Put $\overline{D}_W = \mathrm{Discr}^*D_{\mathrm{deg}}$. A local analysis (see~\cite{KorotkinZograf}) yields $\overline{D}_W = \overline{D}_W^{(b)} + 2\overline{D}_W^{(m)} + 3\overline{D}_W^{(c)}$ (alternatively, one can use Lemma~\ref{lemma:hB_is_smooth} to show this).
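The three possibilities can be illustrated by standard local models near a double zero $z = 0$ of $W$; these models are our illustration and are not part of the original text:

```latex
% (b) boundary: \widehat{\Sigma} is locally v^2 = z^2, a node over x; indeed
%     \mathrm{Discr}(t^2 - z^2) = 4z^2 vanishes to order 2.
% (m) Maxwell stratum: two disjoint local branches (v-a)^2 = z and (v-b)^2 = cz with
%     a \neq b, giving two simple ramification points over x; each contributes a simple
%     zero of W, so W = O(z^2).
% (c) caustic: \widehat{\Sigma} is locally v^3 = z, one ramification point of order 3;
%     indeed \mathrm{Discr}(t^3 - z) = -27z^2 vanishes to order 2.
```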
We call the divisor $\overline{D}_W$ the \emph{universal Hitchin's discriminant}. Note that $\overline{D}_W$, $\overline{D}_W^{(b)}$, $\overline{D}_W^{(m)}$ and $\overline{D}_W^{(c)}$ are invariant under the action of $\mathbb C^*$ on $\overline{\mathfrak{M}}_g^{(n)}$. Therefore, we can define their projectivizations $P\overline{D}_W$, $P\overline{D}_W^{(b)}$, $P\overline{D}_W^{(m)}$ and $P\overline{D}_W^{(c)}$ that are divisors in $P\overline{\mathfrak{M}}_g^{(n)}$. Our goal is to represent the classes of $P\overline{D}_W^{(b)}$, $P\overline{D}_W^{(m)}$ and $P\overline{D}_W^{(c)}$ as linear combinations of standard generators of $\mathrm{Pic}(P\overline{\mathfrak{M}}_g^{(n)})\otimes \mathbb Q$. \subsection{Generators of $\mathrm{Pic}(\overline{\mathfrak{M}}_g^{(n)})\otimes \mathbb Q$} \label{subsec:generators_of_Pic} Put \begin{equation} \phi= c_1(\CMcal{L}), \label{eq:def_of_phi} \end{equation} where $\CMcal{L}\to P\overline{\mathfrak{M}}_g^{(n)}$ is the tautological line bundle associated with the action of $\mathbb C^*$ on $\overline{\mathfrak{M}}_g^{(n)}$. By construction, the space $P\overline{\mathfrak{M}}_g^{(n)}$ is a bundle over $\overline{\CMcal{M}}_g$ whose fibers are weighted projective spaces. Therefore $\mathrm{Pic}(P\overline{\mathfrak{M}}_g^{(n)})\otimes \mathbb Q$ is generated by the class $\phi$ and the pullbacks of the generators of $\mathrm{Pic}(\overline{\CMcal{M}}_g)\otimes \mathbb Q$. Classically, the standard set of generators of $\mathrm{Pic}(\overline{\CMcal{M}}_g)\otimes \mathbb Q$ consists of the Hodge class $\lambda$ and the classes of boundary divisors $\delta_0,\dots, \delta_{[g/2]}$ (see~\cite{ARBARELLO1987153}). We will keep the same notation for the pullbacks of these classes to $P\overline{\mathfrak{M}}_g^{(n)}$. Let $\delta = \sum_{j = 0}^{[g/2]}\delta_j$ denote the full boundary class.
Let $\nu: \overline{\CMcal{M}}_{g,1}\to \overline{\CMcal{M}}_g$ be the universal curve and let $\omega_{\nu}$ denote the relative dualizing sheaf. Pulling back Mumford's formula for $\nu_*c_1(\omega_{\nu})^2$ to $P\overline{\mathfrak{M}}_g^{(n)}$ we get \begin{equation} \pi_*\psi^2 = 12\lambda - \delta \label{eq:Mumford_for_cMhit} \end{equation} where $\pi$ and $\psi$ were defined in Section~\ref{sec:space_of_covers}. \subsection{Expansion of the classes of components of the universal discriminant locus} \label{subsec:formulas_for_three_divisors} In this section we will prove Theorem~\ref{thma:formulas_for_three_divisors}. The following lemma is straightforward: \begin{lemma} Let $X$ be a complex orbifold, let $h: L\to X$ be a line bundle and $n$ be an integer. Consider the quotient space \begin{equation*} L_{(n)} = \{(v,\alpha)\in L\times \mathbb C\}/_{\sim} \end{equation*} modulo the relation $(\xi v, \alpha)\sim (v,\xi^n \alpha)$ that holds for any $\xi\in \mathbb C^*$. Then $L_{(n)}$ is a complex orbifold, the projection $L_{(n)}\to X$ given by $(v,\alpha)\mapsto h(v)$ is a line bundle on $X$ and the map $(v,\alpha)\mapsto \alpha\cdot v^{\otimes n}$ is an isomorphism between $L_{(n)}$ and $L^{\otimes n}$. \label{lemma:construction_of_Fn} \end{lemma} As above we denote by $\widehat{\pi}: \widehat{\CMcal{C}}\cMhit\to P\overline{\mathfrak{M}}_g^{(n)}$ the universal Hitchin's spectral curve and by $\pi:\CMcal{C}\cMhit\to P\overline{\mathfrak{M}}_g^{(n)}$ the universal curve over $P\overline{\mathfrak{M}}_g^{(n)}$. The branched cover $p: \widehat{\CMcal{C}}\cMhit\to \CMcal{C}\cMhit$ is given fiber-wise by the projection $\widehat{\Sigma}\to\Sigma$. The divisor $\widehat{\mathfrak{B}}\subset \widehat{\CMcal{C}}\cMhit$ denotes the ramification divisor of $p$ and the divisor $\mathfrak{B} = p(\widehat{\mathfrak{B}})\subset \CMcal{C}\cMhit$ is the branching divisor of $p$. The class $c_1(\omega_{\pi})$ is denoted by $\psi$ as before.
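Lemma~\ref{lemma:construction_of_Fn} enters the computations below through its Chern-class consequence; we record it explicitly (this remark is our addition):

```latex
% Since L_{(k)} \simeq L^{\otimes k}, we have c_1(L_{(k)}) = k\,c_1(L). Hence, for a
% homomorphism of line bundles \pi^*\CMcal{L}_{(k)} \to \omega_{\pi}^{\otimes k} the class of its
% vanishing divisor is
\begin{equation*}
c_1(\omega_{\pi}^{\otimes k}) - c_1(\pi^*\CMcal{L}_{(k)}) = k\,(\psi - \pi^*\phi).
\end{equation*}
% This is the computation behind the identities of the form \mathrm{div}\, h = k(\psi - \pi^*\phi)
% used repeatedly in the proofs that follow.
```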
\begin{lemma} The following relation holds in $\mathrm{Pic}(P\overline{\mathfrak{M}}_g^{(n)})\otimes \mathbb Q$: \begin{equation*} \widehat{\pi}_* (p^*\psi\cdot \Bigl[\widehat{\mathfrak{B}} \Bigr]) = n(n-1)(12\lambda - \delta - 2(g-1)\phi). \end{equation*} \label{lemma:psi_1_cdot_B} \end{lemma} \begin{proof} It follows from Lemma~\ref{lemma:hB_is_smooth} and the construction of $\widehat{\CMcal{D}}$ that the projection $\widehat{\mathfrak{B}}\to \mathfrak{B}$ is of degree one. Therefore, \begin{equation} \widehat{\pi}_*\Bigl[ p^*\psi\cdot \widehat{\mathfrak{B}} \Bigr] = \pi_*p_*\Bigl[ p^*\psi\cdot \widehat{\mathfrak{B}} \Bigr] = \pi_*\Bigl[ \psi\cdot p_*\widehat{\mathfrak{B}} \Bigr] = \pi_*\Bigl[ \psi\cdot \mathfrak{B} \Bigr] \label{eq:psi_1_cdot_B_part_1} \end{equation} Define the map $h: \pi^*\CMcal{L}_{(n(n-1))}\to \omega_{\pi}^{\otimes n(n-1)}$ (cf. Lemma~\ref{lemma:construction_of_Fn}) as follows. Let $(\Sigma, x, P)\in \CMcal{C}\cMhit$, so that $P\in \CMcal{L}|_{(\Sigma, [P])}$. Set $h(P,\xi) = \xi\cdot \mathrm{Discr}(P)|_x$. Lemma~\ref{lemma:construction_of_Fn} implies that $h(P,\xi)$ is well-defined and depends linearly on $(P, \xi)\in \CMcal{L}_{(n(n-1))}$. Moreover, we have \begin{equation*} \mathrm{div}\, h = \mathfrak{B}. \end{equation*} Therefore $[\mathfrak{B}] = n(n-1)(\psi-\pi^*\phi)$ and \begin{equation} \pi_*\Bigl[ \psi\cdot \mathfrak{B} \Bigr] = \pi_*\Bigl[ \psi\cdot (n(n-1)(\psi-\pi^*\phi)) \Bigr] = n(n-1)(12\lambda - \delta - 2(g-1)\phi) \label{eq:psi_1_cdot_B_part_2} \end{equation} where we used~\eqref{eq:Mumford_for_cMhit} and the projection formula $\pi_*(\psi\cdot \pi^*\phi) = (2g-2)\,\phi$ in the last equation. Combining~\eqref{eq:psi_1_cdot_B_part_1} and~\eqref{eq:psi_1_cdot_B_part_2} we get the result. \end{proof} \begin{proof}[Proof of Theorem~\ref{thma:formulas_for_three_divisors}] Let $(\Sigma, x, P, v)$ be a point in $\widehat{\mathfrak{B}}$, so that $(x,v)\in \widehat{\Sigma}$ is a ramification point of $\widehat{\Sigma}\to \Sigma$.
Let $P\widehat{D}_W\subset \widehat{\mathfrak{B}}$ denote the closure of the locus in $\widehat{\mathfrak{B}}$ that parametrizes those $(\Sigma, x, P, v)$ for which $x$ is a double zero of $\mathrm{Discr}(P)$. Then $\widehat{\pi}(P\widehat{D}_W) = \mathrm{supp}(P\overline{D}_W)$ (for a divisor $D = a_1D_1 + \dots + a_kD_k$ we denote the support by $\mathrm{supp}(D) = D_1\cup\dots\cup D_k$). The divisor $P\widehat{D}_W$ splits into three components $P\widehat{D}_W^{(b)}$, $P\widehat{D}_W^{(m)}$ and $P\widehat{D}_W^{(c)}$ in accordance with the three possibilities described in the beginning of Section~\ref{sec:discriminant_locus}. We have $\widehat{\pi}_*P\widehat{D}_W^{(b)} = P\overline{D}_W^{(b)}$, $\widehat{\pi}_*P\widehat{D}_W^{(m)} = 2P\overline{D}_W^{(m)}$ and $\widehat{\pi}_*P\widehat{D}_W^{(c)} = P\overline{D}_W^{(c)}$. Now let $(\Sigma, x_0, P, v_0)$ be a generic point in $\widehat{\mathfrak{B}}$. Without loss of generality we can assume that $\Sigma$ is smooth at $x_0$. Let $U\subset \Sigma$ be a small neighborhood of $x_0$ and $v$ be a holomorphic 1-differential on $U$ such that $v(x_0) = v_0$. Consider the polynomial \begin{equation*} P(t+v(x), x) = t^n + q_1(x)t^{n-1} + \dots + q_n(x) \end{equation*} where $x\in U$. Note that we have an equality $\mathrm{Discr}(P(t+v(x),x)) = \mathrm{Discr}(P(t, x))$ for discriminants with respect to $t$, because the discriminant is invariant under a shift of the argument. Put \begin{equation*} P_{n-2}(t,x) = t^{n-2} + q_1(x) t^{n-3} + \dots + q_{n-2}(x) \end{equation*} as in Lemma~\ref{lemma:decomposition_of_Disc}. Let $z$ be a local coordinate on $\Sigma$ at $x_0$ such that $z(x_0)=0$. Since $t=0$ is a zero of order $2$ of $P(t + v_0, x_0)$ we have $q_{n-1}(x) = O(z(x))$ and $q_n(x) = O(z(x))$.
Using the decomposition of the discriminant obtained in Lemma~\ref{lemma:decomposition_of_Disc} we get \begin{equation} \mathrm{Discr}(P(t+v(x),x)) = -q_n(x)(q_{n-2}(x))^3\,\mathrm{Discr}(P_{n-2}(t,x)) + O(z(x)^2) \label{eq:asymptotics_of_Disc} \end{equation} as $x\to x_0$. Note that the first summand on the right-hand side of~\eqref{eq:asymptotics_of_Disc} has a zero of order $1$ at $x_0$ if the point $(\Sigma, x_0, P, v_0)$ belongs to a non-empty open subset of $\widehat{\mathfrak{B}}$. On the other hand, if the point $(\Sigma, x_0, P, v_0)$ belongs to $P\widehat{D}_W$, then we have the relation $\mathrm{Discr}(P(t+v(x),x)) = O(z(x)^2)$ as $x\to x_0$, which is equivalent to the condition $q_n(x)(q_{n-2}(x))^3\,\mathrm{Discr}(P_{n-2}(t,x)) = O(z(x)^2)$ on the first summand. Since $q_n(x)(q_{n-2}(x))^3\,\mathrm{Discr}(P_{n-2}(t,x))$ is a product of three differentials we get the following three possibilities for this condition to hold: \textbf{1) The formula for $[P\overline{D}_W^{(b)}]$.} Assume that $q_n(x) = O(z(x)^2)$ as $x\to x_0$. Then $\mathrm{Discr}(P(t+v(x),x)) = O(z(x)^2)$ as $x\to x_0$ due to~\eqref{eq:asymptotics_of_Disc}. Notice that in this case $v_0$ is a root of order $2$ of $P(t, x_0)$ and $\mathrm{Discr}(P_{n-2}(0,x_0))\neq 0$ in general, thus $(\Sigma, x_0, P, v_0)$ does not belong to $P\widehat{D}_W^{(m)}$ or $P\widehat{D}_W^{(c)}$. Therefore, $(\Sigma, x_0, P, v_0)\in P\widehat{D}_W^{(b)}$. Conversely, assume that $(\Sigma, x_0, P, v_0)\in P\widehat{D}_W^{(b)}$. Write $P(v,x) = F(v,x)\,dz^n$, where $F$ is a holomorphic function defined in a neighborhood $T^*U$ of $(x_0,v_0)\in T^*\Sigma$. By definition, $\widehat{\Sigma}\cap T^*U$ is given by the equation $F=0$ in $T^*U$. Since $\widehat{\Sigma}$ is not smooth at $(x_0,v_0)$ we must have $dF(v_0,x_0) = 0$. Because $v_0$ is a root of order $2$ of $P$, we have $dF(v_0, x_0) = \partial_2F(v_0,x_0)$, where $\partial_2$ denotes the partial derivative with respect to the second argument.
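The equality $dF(v_0, x_0) = \partial_2F(v_0,x_0)$ can be spelled out as follows (this elaboration is our addition):

```latex
% Writing \partial_1 for the derivative along the fiber of T^*\Sigma,
\begin{equation*}
dF(v_0, x_0) = \partial_1 F(v_0, x_0)\,dv + \partial_2 F(v_0, x_0)\,dz
             = \partial_2 F(v_0, x_0)\,dz,
\end{equation*}
% because v_0 is a root of F(\cdot, x_0) of order 2, so that both F and \partial_1 F
% vanish at the point (v_0, x_0).
```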
Now assume that $q_n(x) = f_n(x)\,dz^n$. Then \begin{equation} dF(v_0, x_0) = \partial_2 F(v_0,x_0) = df_n(x_0). \label{eq:partial_derivative_at_node} \end{equation} Therefore, the equality $dF(v_0,x_0) = 0$ is equivalent to the fact that $q_n(x) = O(z(x)^2)$ as $x\to x_0$. We conclude that the equality $q_n(x) = O(z(x)^2)$ is equivalent to the fact that $(\Sigma, x_0, P, v_0)\in P\widehat{D}_W^{(b)}$. Introduce the notation \begin{equation} \Phi(\Sigma, x_0, P, v_0) = dF(v_0, x_0)\,dz^n(x_0)\in (T^*_{x_0}\Sigma)^{\otimes (n+1)}. \label{eq:notation_for_derivative} \end{equation} Since $F(v_0, x_0) = 0$ this is well-defined (i.e. does not depend on the choice of a local coordinate). Note that if $\xi\in \mathbb C^*$ then $\Phi(\Sigma, x_0, \xi\cdot P, \xi v_0) = \xi^n \Phi(\Sigma, x_0, P, v_0)$, where the action of $\mathbb C^*$ is defined by~\eqref{eq:def_of_action_of_Cstar}. It follows from Lemma~\ref{lemma:construction_of_Fn} that $\Phi$ extends to a homomorphism \begin{equation} \widehat{\Phi}: \widehat{\pi}^*\CMcal{L}_{(n+1)}|_{\widehat{\mathfrak{B}}} \to p^*\omega_{\pi}^{\otimes n}|_{\widehat{\mathfrak{B}}} \label{eq:def_of_homo_nod} \end{equation} defined by $(P, \xi)\mapsto \xi\Phi(\Sigma, x_0, P, v_0)$. The computations made above show that the vanishing locus of $\widehat{\Phi}$ coincides with $P\widehat{D}_W^{(b)}$. Moreover, it is straightforward that $\mathrm{div}\, \widehat{\Phi} = P\widehat{D}_W^{(b)}$, so that we have \begin{equation} P\widehat{D}_W^{(b)} \equiv (np^*\psi - (n+1)\widehat{\pi}^*\phi)\cdot \widehat{\mathfrak{B}} \label{eq:Phnod_equiv} \end{equation} in the Chow ring of $\widehat{\CMcal{C}}\cMhit$, where we used that $\CMcal{L}_{(n+1)}\simeq \CMcal{L}^{\otimes (n+1)}$. Applying Lemma~\ref{lemma:psi_1_cdot_B} we conclude from~\eqref{eq:Phnod_equiv} that \begin{equation*} [P\overline{D}_W^{(b)}] = [\widehat{\pi}_*P\widehat{D}_W^{(b)}] = n(n-1)\Bigl( (n+1)(12\lambda-\delta) - 2(g-1)(2n+1)\phi \Bigr).
\end{equation*} \textbf{2) The formula for $[P\overline{D}_W^{(m)}]$.} Assume that the equality $\mathrm{Discr}(P_{n-2}(t,x_0)) = 0$ holds. This is equivalent to the fact that $(\Sigma, x_0, P, v_0)\in P\widehat{D}_W^{(m)}$ by definition of $P\widehat{D}_W^{(m)}$. Introduce the notation \begin{equation*} \Phi(\Sigma, x_0, P, v_0) = \mathrm{Discr}(P_{n-2}(t,x_0))\in (T_{x_0}^*\Sigma)^{\otimes (n-2)(n-3)}. \end{equation*} Note that $\Phi(\Sigma, x_0, P, v_0)$ does not depend on the choice of the differential $v$ (although we used $v$ to define $P_{n-2}$) and we have $\Phi(\Sigma, x_0, \xi\cdot P, \xi v_0) = \xi^{(n-2)(n-3)} \Phi(\Sigma, x_0, P, v_0)$. It follows that $\Phi$ can be extended to a homomorphism \begin{equation} \widehat{\Phi}: \widehat{\pi}^*\CMcal{L}_{((n-2)(n-3))}|_{\widehat{\mathfrak{B}}} \to p^*\omega_{\pi}^{\otimes (n-2)(n-3)}|_{\widehat{\mathfrak{B}}} \label{eq:def_of_homo_Max} \end{equation} defined by $(P, \xi)\mapsto \xi\Phi(\Sigma, x_0, P, v_0)$. We have $\mathrm{div}\,\widehat{\Phi} = P\widehat{D}_W^{(m)}$. From here we get that \begin{equation} P\widehat{D}_W^{(m)} \equiv (n-2)(n-3)\Bigl( p^*\psi - \widehat{\pi}^*\phi \Bigr)\cdot \widehat{\mathfrak{B}} \label{eq:PhMax_equiv} \end{equation} in the Chow ring of $\widehat{\CMcal{C}}\cMhit$, where by Lemma~\ref{lemma:construction_of_Fn} $\CMcal{L}_{( (n-2)(n-3))}\simeq \CMcal{L}^{\otimes (n-2)(n-3)}$. Lemma~\ref{lemma:psi_1_cdot_B} together with eq.~\eqref{eq:PhMax_equiv} implies that \begin{equation*} 2[P\overline{D}_W^{(m)}] = [\widehat{\pi}_*P\widehat{D}_W^{(m)}] = n(n-1)(n-2)(n-3)\Bigl( 12\lambda-\delta - 4(g-1)\phi \Bigr). \end{equation*} \textbf{3) The formula for $[P\overline{D}_W^{(c)}]$.} Finally, we consider the case $q_{n-2}(x_0) = 0$ that is equivalent to the fact that $(\Sigma, x_0, P, v_0)\in P\widehat{D}_W^{(c)}$. Set \begin{equation*} \Phi(\Sigma, x_0, P, v_0) = q_{n-2}(x_0)\in (T_{x_0}^*\Sigma)^{\otimes (n-2)}.
\end{equation*} The value $q_{n-2}(x_0)$ is independent of the choice of $v$ and we have $\Phi(\Sigma, x_0, \xi\cdot P, \xi v_0) = \xi^{n-2} \Phi(\Sigma, x_0, P, v_0)$. Hence $\Phi$ can be extended to a homomorphism \begin{equation} \widehat{\Phi}: \widehat{\pi}^*\CMcal{L}_{(n-2)}|_{\widehat{\mathfrak{B}}} \to p^*\omega_{\pi}^{\otimes (n-2)}|_{\widehat{\mathfrak{B}}} \label{eq:def_of_homo_cau} \end{equation} defined by $(P, \xi)\mapsto \xi\Phi(\Sigma, x_0, P, v_0)$ and we have $\mathrm{div}\,\widehat{\Phi} = P\widehat{D}_W^{(c)}$. It follows that \begin{equation} P\widehat{D}_W^{(c)} \equiv (n-2)\Bigl( p^*\psi - \widehat{\pi}^*\phi \Bigr)\cdot \widehat{\mathfrak{B}} \label{eq:Phcau_equiv} \end{equation} in the Chow ring of $\widehat{\CMcal{C}}\cMhit$, where we use that $\CMcal{L}_{(n-2)}\simeq \CMcal{L}^{\otimes (n-2)}$ by Lemma~\ref{lemma:construction_of_Fn}. Lemma~\ref{lemma:psi_1_cdot_B} and~\eqref{eq:Phcau_equiv} imply that \begin{equation*} [P\overline{D}_W^{(c)}] = [\widehat{\pi}_*P\widehat{D}_W^{(c)}] = n(n-1)(n-2)\Bigl( 12\lambda-\delta - 4(g-1)\phi \Bigr). \end{equation*} \end{proof} \section{The Hodge classes on $P\overline{\mathfrak{M}}_g^{(n)}$.} \label{sec:hlambda} In this section we will prove Theorem~\ref{thma:hlambda_formula}. We proceed using the notation introduced in the two previous sections. \begin{lemma} The following formula holds in $\mathrm{Pic}(P\overline{\mathfrak{M}}_g^{(n)})\otimes\mathbb Q$: \begin{equation*} [\widehat{\pi}_* (\widehat{\mathfrak{B}}\cdot \widehat{\mathfrak{B}})] = -\frac{n(n-1)}{2}(12\lambda - \delta - 2(g-1)\phi) + \frac{1}{2}([P\overline{D}_W^{(b)}] + [P\overline{D}_W^{(c)}]). \end{equation*} \label{lemma:pushforward_of_hB_cdot_hB} \end{lemma} \begin{proof} Recall that $\widehat{\mathfrak{B}}$ is smooth due to Lemma~\ref{lemma:hB_is_smooth}.
The projection $\widehat{\mathfrak{B}}\to P\overline{\mathfrak{M}}_g^{(n)}$ is a branched cover of degree $2n(n-1)(g-1)$ (equal to the number of zeros of $\mathrm{Discr}(P)$ counted with multiplicities). From the discussion in Section~\ref{sec:discriminant_locus} it follows that the ramification divisor of this branched cover is $P\widehat{D}_W^{(b)} + P\widehat{D}_W^{(c)}$. These observations yield the following expression for the canonical class of $\widehat{\mathfrak{B}}$: \begin{equation} c_1(K_{\widehat{\mathfrak{B}}}) = \widehat{\pi}^*c_1(K_{P\overline{\mathfrak{M}}_g^{(n)}})\cdot \widehat{\mathfrak{B}} + P\widehat{D}_W^{(b)} + P\widehat{D}_W^{(c)}. \label{eq:KhB_first} \end{equation} Another expression for the canonical class of $\widehat{\mathfrak{B}}$ comes from the adjunction formula: \begin{equation} c_1(K_{\widehat{\mathfrak{B}}}) = (c_1(K_{\widehat{\CMcal{C}}\cMhit}) + \widehat{\mathfrak{B}})\cdot \widehat{\mathfrak{B}}. \label{eq:KhB_second} \end{equation} Using these two expressions we get \begin{equation} \widehat{\mathfrak{B}}\cdot\widehat{\mathfrak{B}} = ( \widehat{\pi}^*c_1(K_{P\overline{\mathfrak{M}}_g^{(n)}}) - c_1(K_{\widehat{\CMcal{C}}\cMhit}) )\cdot \widehat{\mathfrak{B}} + P\widehat{D}_W^{(b)} + P\widehat{D}_W^{(c)} = - c_1(\omega_{\widehat{\pi}})\cdot \widehat{\mathfrak{B}} + P\widehat{D}_W^{(b)} + P\widehat{D}_W^{(c)} \label{eq:hB_cdot_hB_via_conclasses} \end{equation} where $\omega_{\widehat{\pi}}$ is the relative dualizing sheaf. Recall that the map $p: \widehat{\CMcal{C}}\cMhit\to \CMcal{C}\cMhit$ is a branched cover with simple ramification along $\widehat{\mathfrak{B}}$ due to Lemma~\ref{lemma:hB_is_smooth}. Therefore, \begin{equation} c_1(\omega_{\widehat{\pi}}) = p^*\psi + \widehat{\mathfrak{B}}.
\label{eq:omega_hpi_is_pullback} \end{equation} Substituting this expression for $c_1(\omega_{\widehat{\pi}})$ into~\eqref{eq:hB_cdot_hB_via_conclasses} we find that \begin{equation} 2\widehat{\mathfrak{B}}\cdot\widehat{\mathfrak{B}} = - p^*\psi\cdot \widehat{\mathfrak{B}} + P\widehat{D}_W^{(b)} + P\widehat{D}_W^{(c)}. \label{eq:2hB_cdot_hB} \end{equation} Applying $\widehat{\pi}_*$ to this equation and using Lemma~\ref{lemma:psi_1_cdot_B} we get the statement of the lemma. \end{proof} \begin{lemma} The following formula holds in $\mathrm{Pic}(P\overline{\mathfrak{M}}_g^{(n)})\otimes\mathbb Q$: \begin{equation} \widehat{\pi}_*c_1(\omega_{\widehat{\pi}})^2 = 6n(3n-1)\lambda - 3n(n-1)(g-1)\phi - \frac{n(3n-1)}{2}\,\delta + \frac{1}{2}\left([P\overline{D}_W^{(b)}] + [P\overline{D}_W^{(c)}]\right). \label{eq:pushforward_of_omega_hpi2} \end{equation} \label{lemma:pushforward_of_omega_hpi2} \end{lemma} \begin{proof} Using~\eqref{eq:omega_hpi_is_pullback} we can write \begin{equation*} \widehat{\pi}_*c_1(\omega_{\widehat{\pi}})^2 = \widehat{\pi}_* \Bigl(p^*\psi^2 + 2p^*\psi\cdot \widehat{\mathfrak{B}} + \widehat{\mathfrak{B}}\cdot \widehat{\mathfrak{B}}\Bigr) = n\pi_*\psi^2 + \widehat{\pi}_* \Bigl(2p^*\psi\cdot \widehat{\mathfrak{B}} + \widehat{\mathfrak{B}}\cdot \widehat{\mathfrak{B}}\Bigr). \end{equation*} Combining this formula with eq.~\eqref{eq:Mumford_for_cMhit}, Lemma~\ref{lemma:psi_1_cdot_B} and Lemma~\ref{lemma:pushforward_of_hB_cdot_hB} we get the desired eq.~\eqref{eq:pushforward_of_omega_hpi2}. \end{proof} \begin{proof}[Proof of Theorem~\ref{thma:hlambda_formula}] Let $V_{\mathrm{nodal}}\subset \widehat{\CMcal{C}}\cMhit$ denote the locus of nodal points of fibers of $\widehat{\pi}$, i.e. \begin{equation} V_{\mathrm{nodal}} = \{(\Sigma, x, P, v)\ \mid\ (x,v)\text{ is a node of $\widehat{\Sigma}$}\}.
\label{eq:def_of_Vn} \end{equation} Note that $P\widehat{D}_W^{(b)}$ is a component of $V_{\mathrm{nodal}}$, and we have \begin{equation} \widehat{\pi}_*V_{\mathrm{nodal}} = n\delta + [P\overline{D}_W^{(b)}]. \label{eq:pushforward_of_Vn} \end{equation} Now let us apply the Grothendieck-Riemann-Roch formula to the structure sheaf $\CMcal{O}_{\widehat{\CMcal{C}}\cMhit}$ and the morphism $\widehat{\pi}: \widehat{\CMcal{C}}\cMhit\to P\overline{\mathfrak{M}}_g^{(n)}$. Taking the degree one components of both sides of the formula we get the following relation in $\mathrm{Pic}(P\overline{\mathfrak{M}}_g^{(n)})\otimes\mathbb Q$: \begin{equation} 12\widehat{\lambda} = \widehat{\pi}_* (c_1(\omega_{\widehat{\pi}})^2 + V_{\mathrm{nodal}}). \label{eq:GRR_for_structure_sheaf} \end{equation} Using Lemma~\ref{lemma:pushforward_of_omega_hpi2} and~\eqref{eq:pushforward_of_Vn} we conclude that \begin{equation} 12\widehat{\lambda} = 6n(3n-1)\lambda - 3n(n-1)(g-1)\phi - \frac{n(3n-3)}{2}\,\delta + \frac{3}{2}[P\overline{D}_W^{(b)}] + \frac{1}{2}[P\overline{D}_W^{(c)}]. \label{eq:GRR_intermediate} \end{equation} The formulas of Theorem~\ref{thma:formulas_for_three_divisors} \begin{align*} & [P\overline{D}_W^{(b)}] = n(n-1)\Bigl( (n+1)(12\lambda-\delta) - 2(g-1)(2n+1)\phi \Bigr)\\ & [P\overline{D}_W^{(c)}] = n(n-1)(n-2)\Bigl( 12\lambda-\delta - 4(g-1)\phi \Bigr) \end{align*} together with eq.~\eqref{eq:GRR_intermediate} give the desired formula for $\widehat{\lambda}$. \end{proof} \bibliographystyle{habbrv}
{ "redpajama_set_name": "RedPajamaArXiv" }
8,828
{"url":"http:\/\/www.atractor.pt\/mat\/ABC-CBA\/demonstracao-_en.html","text":"## The dynamics of a trick\n\n### Proof\n\nIf $$D$$ is even, then the image of $$f_{D}$$ has $$\\frac{19^{\\frac{D}{2}}+1}{2}$$ elements.\n\nIf $$D$$ is odd, then the image of $$f_{D}$$ has $$\\frac{19^{\\frac{D-1}{2}}+1}{2}$$ elements.\n\nLet us see two examples. If $$D=4$$ and $$n=abcd$$, with $$a\\geq d$$ (it is sufficient to consider these cases because $$f_{D}$$ computes the difference between the biggest and the smallest number within the pair comprised by $$n$$ and $$i_{D}(n)$$), then $f_{4}(n)=\\left|[10^{3} - 1](a-d) + [10^{2} - 10](b-c)\\right|.$\n\nTherefore:\n\n\u2022 Either $$a=d$$, and thus $$f_{4}(n)= [10^{2} - 10]z_{2}$$ with $$z_{2}$$ in $$\\{0,...,9\\}$$, and we have $$10$$ possibilities for $$z_{2}$$, and hence the same number of possibilities for $$f_{4}(n)$$.\n\u2022 Or $$a>d$$ (and there are $$9$$ possible values for $$a-d$$) and one has $f_{4}(n) = [10^{3} - 1](a-d) + [10^{2} - 10]z_{2}$ with $$z_{2}$$ being any element in the set $$\\{-9,...,0,1,...,9\\}$$, which gives $$9\\times 19$$ possibilities for $$f_{4}(n)$$.\n\nHence, the image of $$f_{4}$$ contains $$10 + 9\\times 19 = 1 + 9 + 9\\times 19 = 181$$ elements. 
Note also that $$1 + 9 + 9\\times 19 = \\frac{19^{2} + 1}{2}$$.\n\nIf $$D=6$$ and $$n=abcdef$$ with $$a\\geq f$$, then $f_{6}(n) = \\left|[10^{5} - 1](a-f) + [10^{4} - 10](b-e) + [10^{3} - 10^{2} ](c-d)\\right|$ and\n\n\u2022 either $$a=f$$ and $$b=e$$, and one has $$f_{6}(n)= [10^{3} - 10^{2}]z_{3}$$, with $$z_{3}$$ in $$\\{0,...,9\\}$$, which gives $$10$$ possibilities for $$f_{6}(n)$$;\n\u2022 or $$a=f$$ and $$b\\neq e$$, and one has $$f_{6}(n)= [10^{4} - 10]z_{2} + [10^{3} - 10^{2}]z_{3}$$, with $$z_{2}$$ in $$\\{0,...,9\\}$$ and $$z_{3}$$ any element of $$\\{-9,...,0,1,...,9\\}$$, which gives $$9\\times 19$$ elements in the image of $$f_{6}$$;\n\u2022 or $$a>f$$ (there are $$9$$ possible values for $$a-f$$) and then $f_{6}(n) = [10^{5} - 1](a-f) + [10^{4} - 10]z_{2} + [10^{3} - 10^{2} ]z_{3}$ where $$z_{2}$$ and $$z_{3}$$ can be any element of $$\\{-9,...,0,1,...,9\\}$$, which gives $$9\\times 19^{2}$$ elements in the image of $$f_{6}$$.\n\nConsequently, the cardinal of the image of $$f_{D}$$ is $10 + 9\\times 19 + 9\\times 19\\times 19 = 1 + 9 + 9\\times 19+ 9\\times 19^{2} = 3430.$ And one has $$1 + 9 + 9\\times 19 + 9\\times 19^{2} = 1 + 9\\, (1 + 19 + 19^{2}) = \\frac{19^{3} + 1}{2}$$.\n\nLet us generalize these arguments to the set $$N_{D}$$. Since $f_{D}(x)=\\left|x_{D-1}x_{D-2} ... x_{1}x_{0} - x_{0}x_{1} ... x_{D-2}x_{D-1}\\right|,$ when $$D$$ is even we obtain $f_{D}(x)=\\left|(10^{D-1} - 1)(x_{D-1} - x_{0})+ (10^{D-2} - 10)(x_{D-2} - x_{1})+ ... +\\left (10^{\\frac{D}{2}} - 10^{\\frac{D}{2}-1}\\right)\\left(x_{\\frac{D}{2}} - x_{\\frac{D}{2}-1}\\right)\\right|$ and, when $$D$$ is odd, $f_{D}(x)=\\left|(10^{D-1} - 1)(x_{D-1} - x_{0})+ (10^{D-2} - 10)(x_{D-2} - x_{1})+ ... 
+ \\left(10^{\\frac{D+1}{2}} - 10^{\\frac{D-3}{2}}\\right)\\left(x_{\\frac{D+1}{2}} - x_{\\frac{D-3}{2}}\\right)\\right|.$\n\nHence, in the image of $$f$$ we only find the nonnegative integers that can be written in the form $(10^{D-1} - 1)z_{1} + (10^{D-2} - 10)z_{2} +...+\\left (10^{\\frac{D}{2}} - 10^{\\frac{D}{2}-1}\\right)z_{\\frac{D}{2}} \\mbox{ if }D \\mbox{ is even}$ $(10^{D-1} - 1)z_{1} + (10^{D-2} - 10)z_{2} +...+ \\left(10^{\\frac{D+1}{2}} - 10^{\\frac{D-3}{2}}\\right)z_{\\frac{D-1}{2}} \\mbox{ if }D \\mbox{ is odd,}$ where $$0 \\leq z_{1} \\leq 9$$ and $$-9 \\leq z_{j} \\leq 9$$ for $$j>1$$.\n\nSubject to these conditions only, we would obtain $$10\\times 19^{\\frac{D}{2}-1}$$ numbers when $$D$$ is even. Actually, the $$z_{i}\u00b4s$$ are not independent from each other. Let us see the case where $$D$$ is even (the other case is analogous). If there is some $$z_{j} \\neq 0$$ and $$k = \\mbox{minimum }\\{0 \\leq i \\leq \\frac{D}{2}: z_{i} \\neq 0\\}$$, then $$1 \\leq z_{k} \\leq 9$$ and $$-9 \\leq z_{i} \\leq 9$$ for $$i>k$$. 
Analyzing the possibilities for each value of $$k$$, we conclude that among the $$10^{D}$$ natural numbers with $$D$$ digits, only $10+9\\times 19+9\\times 19^{2}+...+9\\times 10^{\\frac{D}{2}-1}=\\frac{19^{\\frac{D}{2}}+1}{2}$ of them are in the image of $$f_{D}$$.\n\nBy a similar argument, we deduce that, for $$D$$ odd, the cardinal of that image is $$\\frac{19^{\\frac{D-1}{2}} + 1}{2}$$.\n\nNotice that the frequency of numbers in the image of $$f_{D}$$ has limit $$0$$ when $$D$$ goes to infinity, since $\\frac{19^\\frac{D}{2} + 1}{ 2\\times 10^{D}}< \\left(\\frac{1}{5}\\right)^{\\frac{D}{2}}.$","date":"2018-03-19 01:22:44","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 1, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.9636543989181519, \"perplexity\": 58.8219796418362}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2018-13\/segments\/1521257646189.21\/warc\/CC-MAIN-20180319003616-20180319023616-00070.warc.gz\"}"}
null
null
Le circuit international de Buriram (également Circuit Chang) est un circuit utilisé en sport automobile et motocycliste localisé à Buriram en Thaïlande. La piste a été inaugurée en 2014. Il s'agit du premier circuit catégorisé Grade 1 de la FIA et Grade A de la FIM. Le commanditaire principal de la piste est l'entreprise ThaiBev, qui donne le nom de parrainage de Chang (une marque de bière) au circuit. Le championnat GT japonais, le Super GT, organise une épreuve à Buriram depuis 2014. En outre, deux séries internationales TCR, la TCR Asia Series et GT Asia Series ont programmé des courses au Burinam en octobre 2015, et le WTCC en novembre 2015, et l'Asian Le Mans Series en janvier 2016. Le 22 mars 2015, la deuxième épreuve du championnat du monde de Superbike se déroule en Thaïlande. Le 23 juin 2015, il a été annoncé que la septième et huitième épreuve de la Porsche Carrera Cup Asia se dérouleraient sur ce circuit. En 2018, les tests officiels préparatoires au Grand Prix moto de Thaïlande auront lieu sur le circuit Chang. Ce premier Grand Prix de Thaïlande de l'histoire et Grand prix de la saison 2018, s'y déroulera du 5 au 7 octobre 2018. Notes et références Équipement sportif achevé en 2014 Thaïlande Circuit automobile en Thaïlande Circuit de l'Asian Le Mans Series
{ "redpajama_set_name": "RedPajamaWikipedia" }
952
Web Development in its purest form is the process of developing websites. This includes front end design and visuals, and backend networking and structure. It is a great place to start one's journey in computer science, lending easy to understand languages, a medium for creativity and design, and an introduction to the internet of things.
{ "redpajama_set_name": "RedPajamaC4" }
5,130
Price, specifications, and features of the new iPhone 8 With the new 2017 iPhone 8 having been launched, reviewers are busy analyzing the price, specifications, and other features of Apple iPhone 8 Plus. The new 2017 iPhone 8 has most features of the previous variants and includes features like wireless charging. Apple's iPhone 8 plus can be purchased through their official website, though some third-party websites are also selling the smartphone online. Some service providers like Virgin are giving Apple iPhone 8 plus for free with a network deal for the customers. Design and appearance of the new 2017 iPhone 8 Apple iPhone 8 plus is a worthy upgrade in the series of the smartphone being offered by Apple. The new 2017 new iPhone 8 is much similar to the iPhone 7 in appearance and features. The processor in iPhone 8 is bionic and is the smartest chip on the smartphone ever.This smartphone has glass on the front and back of the device which has been reinforced by welded copper. Apple's iPhone 8 plus is dust resistant and has water-resistant capabilities. Expect HD retina display in the iPhone 8, and this smartphone also features a 12 MP camera. Color options and cost The new 2017 new iPhone 8 is available in gold color, grey, and silver options and iPhone 8 with 64 GB version costs about $917, while iPhone 8 Plus with 64 GB costs about $1048. A 256 GB version is also available for the iPhone 8 and 8 Plus. Similar to iPhone X, the iPhone 8 and 8 Plus also come with wireless charging capabilities feature. This smartphone can simply be charged by placing it on the power pad. Incidentally, this kind of technology has been brought in the iPhone for the first time.Wireless charging is a very innovative feature and is all set to bring about a revolution in the way iPhone is used. This smartphone also comes with iOS 11, which are unique operating systems on which Apple smartphones operate. 
Apple Pay, multitasking abilities, and a redesigned control panel are some of the features that have been included in the Apple iPhone 8 which make it a truly incredible smartphone in the real sense. This smartphone also comes with a 12 MP dual-lens camera which is much similar to the camera in iPhone 7 that was released last year. Expect a larger, faster sensor, new refined color filter and deeper pixels in iPhone 8, which make it remarkable in a sense. The home button and touch ID remain intact in the iPhone 8 through the feature of facial recognition has not been given in iPhone 8, which make it a bit different from others in the league. You can purchase iPhone 8 online from a host of websites where it is selling on a rapid basis at the best prices available. All you need to know about the iPhone 8 15 inch laptops which can also be used as tablets editors's pick Gardening tips and tricks to remember Gardening is all about nurturing, believing and patience. There are many gardening hacks, tips and tricks you can use to make the best of your garden.... 4 risk factors of fatty liver disease you should know about Fatty liver disease is caused due to an accumulation of fat in the liver. This may occur due to an excess consumption of alcohol, hepatitis C, obesity... A brief overview of home tanning beds Owning a home tanning bed is the ultimate luxury all of us have dreamed of. With the increasing number of manufacturers offering home tanning beds for... Gardening is all about nurturing, believing and patience. There are ma... Fatty liver disease is caused due to an accumulation of fat in the liv... Owning a home tanning bed is the ultimate luxury all of us have dreame... Organize your bathroom, while on a budget Five steps to approach estate planning The three best mutual funds you should invest in © Copyright 2020 SearchInsider.net
<?php

namespace User;

use Application\BaseDb;

class UserDb extends BaseDb
{
    protected $keyName = 'users';

    public function save(UserEntity $user)
    {
        $data = array(
            'uri'         => $user->getUri(),
            'username'    => $user->getUsername(),
            'slug'        => $user->getUsername(),
            'verbose_uri' => $user->getVerboseUri()
        );

        $savedUser = $this->load('uri', $user->getUri());
        if ($savedUser) {
            // user is already known - update this record
            $data = array_merge($savedUser, $data);
        }

        // index the record under both its URI and its username
        $this->cache->save($this->keyName, $data, 'uri', $user->getUri());
        $this->cache->save($this->keyName, $data, 'username', $user->getUsername());
    }
}
Adding & Subtracting Radical Expressions: Practice Test

In order to add or subtract radicals, we must have "like radicals". Like radicals have the same index along with the same radicand. This is similar to working with polynomials and looking for "like terms". We add and subtract radicals using the distributive property. Essentially we only need to add or subtract the numbers multiplying the common radical.

Test Objectives
- Demonstrate the ability to identify like radicals
- Demonstrate the ability to simplify radicals

#1: Instructions: Perform each indicated operation.

a) $$4\sqrt[3]{80} - 3\sqrt[3]{10} - 2\sqrt[3]{10}$$

b) $$2\sqrt[4]{144} + 2\sqrt[4]{9} - 15\sqrt[4]{9}$$

#2: Instructions: Perform each indicated operation.

a) $$-4\sqrt[3]{320} + 5\sqrt[3]{5} - \sqrt[3]{625}$$

b) $$2\sqrt[3]{1250} + 2\sqrt[3]{32} - 2\sqrt[3]{270}$$

#3: Instructions: Perform each indicated operation.

a) $$-2\sqrt[3]{1125} - 5\sqrt[3]{-72} - \sqrt[3]{54}$$

b) $$6\sqrt[5]{96} - \sqrt[5]{729} + 11\sqrt[5]{3072}$$

#4: Instructions: Perform each indicated operation.

a) $$-\sqrt[6]{4} - 2\sqrt[6]{128} - \sqrt[6]{2} - 2\sqrt[6]{256}$$

#5: Instructions: Perform each indicated operation.

a) $$2\sqrt[3]{2} - \sqrt[3]{24} + 2\sqrt[3]{16} - 2\sqrt[3]{3}$$

Written Solutions:

#1: a) $$3\sqrt[3]{10}$$  b) $$-9\sqrt{3}$$

#2: a) $$-16\sqrt[3]{5}$$  b) $$4\sqrt[3]{10} + 4\sqrt[3]{4}$$

#3: a) $$-3\sqrt[3]{2}$$  b) $$53\sqrt[5]{3}$$

#4: a) $$-5\sqrt[3]{2} - 5\sqrt[6]{2}$$

#5: a) $$6\sqrt[3]{2} - 4\sqrt[3]{3}$$
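As a worked illustration of the method described above (simplify each radical first, then combine like radicals using the distributive property), problem #1a can be solved step by step:

```latex
% Worked solution to #1a: simplify, then combine like radicals.
\begin{aligned}
4\sqrt[3]{80} - 3\sqrt[3]{10} - 2\sqrt[3]{10}
  &= 4\sqrt[3]{8 \cdot 10} - 3\sqrt[3]{10} - 2\sqrt[3]{10}
     && \text{factor out the perfect cube } 8\\
  &= 8\sqrt[3]{10} - 3\sqrt[3]{10} - 2\sqrt[3]{10}
     && \sqrt[3]{8} = 2\\
  &= (8 - 3 - 2)\sqrt[3]{10}
     && \text{distributive property}\\
  &= 3\sqrt[3]{10}
\end{aligned}
```

Each remaining problem follows the same pattern: pull the largest perfect power out of each radicand so that all terms share a common radical, then add the coefficients.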
## The Age of Pressland

This is the compilation of the talks given in the 2012/13 Postgraduate Seminar Series, which was organised by Matthew Pressland.

# Semester 1 2012/13

2 Oct 2012, Matthew Pressland

### The Fundamental Group and Triangulations

The fundamental group is an important homotopy invariant of topological spaces, essentially describing the ways in which loops can be mapped into them. This talk will (loosely) explain the definition of the fundamental group and why it is a group, and then explain how to calculate it using triangulations, with some examples from $$3$$-manifold topology.

9 Oct 2012, Horacio González

### Wazzup with the WASEP?

This talk will revise the physical motivation of interacting particle systems. From there we will define what the Simple Exclusion Process is, together with all its variations, such as the symmetric, totally asymmetric and weakly asymmetric. Finally we will talk about the hydrodynamic limits of these processes. Our approach will be very intuitive and no knowledge of statistical physics or stochastic processes is required.

16 Oct 2012, Andrew Bate

### Predators, Prey and Prevalence

It's October. Autumn is falling upon us; the mixture of rain and fallen leaves becomes a slipping hazard; squirrels are hoarding for the winter and the freshers' flu epidemic hits its peak. I sometimes wonder: do squirrels get freshers' flu too? I do not know the answer, but I could consider the consequences.

For this talk, I will give an introduction to a simple disease model and a simple ecological model (predator-prey) before combining the two models to get some interesting results.

23 Oct 2012, Alex Watson

### Discrete and Continuous Family Trees

I will showcase some nice objects and results from the theory of random processes: some discrete objects, which date back to a letter in the Times from 1873, and some continuous ones, which are at the cutting edge of current research.

Galton-Watson processes are random processes which count a population, while Galton-Watson trees are randomly evolving discrete trees which show the genealogical record of a population. Galton-Watson trees are characterised in the set of (discrete) random trees by a certain regenerative property. A recent paper proves that the same is true in a continuous context: here, we deal with continuous-state branching processes, which correspond to Lévy trees, which are then neatly characterised among the set of all real trees. Since these objects are all entertainingly weird, I will hand-wave my way through some vague definitions and draw some pictures in order to very approximately demonstrate how all of this works.

30 Oct 2012, Alex Collins

### The Classical McKay Correspondence

The McKay correspondence describes a mysterious relationship between the representation theory of certain finite groups and the structure of the singular surface that arises from their action on the plane. I will tell you about it.

6 Nov 2012, Elvijs Sarkans

### Time Invariance for Dynamical Systems

In classical mathematics, time invariance for dynamical systems revolves around autonomous differential equations, and for good reason: many areas of physics fit this framework very nicely. However, there are some drawbacks of this approach in control and signals processing. In the 1980s, Jan C. Willems proposed an alternative first-principles approach to studying time-invariant systems, and this theory features some beautiful yet reasonably simple mathematics. I will try to present some of these ideas for linear discrete-time systems.

13 Nov 2012, Anthony Masters

### Measure Resolutions and Rearrangements: Utilising an Old Idea

Measure resolutions are a rarely-used concept, but are related to the non-atomic properties of a measure space. I will show that measure resolutions may be utilised to remove a standard assumption on the underlying measure space for results involving rearrangements.

20 Nov 2012, Curdin Ott

### Russian options, American options with floating strike or Bottleneck options

I'll try to explain in a very informal way, and with the help of pictures, what these fancy names stand for and what their connection to the area of "optimal stopping" (a specific area in probability theory) is.

27 Nov 2012, Matthew Dawes

### Moduli of Irreducible Symplectic Manifolds and K3 Surfaces

The theory of K3 surfaces is vast and fascinating and has origins in classical algebraic geometry. In higher dimensions, one may generalise K3 surfaces in two directions: one may study Calabi-Yau varieties or one may study Irreducible Symplectic Manifolds (also known as hyperkähler manifolds). I am interested in the latter. Over the last sixty years, many important questions about K3 surfaces have been answered, but very much less is known about hyperkähler manifolds in general. In this talk, we will mostly be interested in discussing their moduli. I will start by showing how questions about the birational geometry of the moduli of K3 surfaces can be understood by studying modular forms, and then move on to discussing how the K3 theory generalises to the hyperkähler case. I will also mention some open problems. There will be no proofs, but there will be some pictures, and there should be enough background to satisfy a general audience.

4 Dec 2012, Jack Blake and Steve Cook

### The Mathematics of Starcraft

We will introduce the computer game 'Starcraft 2' and give some examples of how simple mathematical models can help understand different strategies of play, as well as mentioning some basic maths that can be applied whilst playing. We will also talk about how user input can show that the assumptions required for our models are wrong. This will be a very informal talk, aiming at giving an overview of the game and an insight into its depth of strategy. No knowledge of Starcraft 2 is required.

10 Dec 2012, Ray Fernandes

### XMaths Special

In true Xmaths PSS style, expect much craziness, but not much in the way of maths or talk. What it's on is always kept a secret (often because I don't know myself), but anyone that's been to a previous Xmaths special will know to expect the unexpected. If things go badly, at least there will be cake.

# Semester 2 2012/13

5 Feb 2013, Matthew Pressland

### Rings and Varieties: An Introduction to Algebraic Geometry

This talk will be a gentle introduction to algebraic geometry, using commutative algebra. I will define varieties and ideals, and briefly discuss Hilbert's Nullstellensatz, the Zariski topology, and coordinate rings. The punchline will be to use the algebra to explain why degree $$n$$ polynomials over $$\mathbb{C}$$ have exactly $$n$$ roots, even though some of them clearly don't; in other words, I will explain why some roots are counted more than once.

12 Feb 2013, Maren Eckhoff

### Solving Partial Differential Equations with a Cloud of Smoke

We discuss two or three stochastic processes and associated parabolic differential equations. The close connection between probability and PDE theory provides intuition about the coefficients of these deterministic PDEs in terms of properties of random processes.

19 Feb 2013, George Frost

### A Brief History of Geometry

In this non-technical talk, I will give a brief history of the study of geometry. Geometry has evolved somewhat from Euclid's study of triangles and circles; ultimately, the modern generalisation of what features a "geometry" should have is encapsulated in a so-called "Cartan geometry". There are two possible routes to this generalisation from familiar Euclidean geometry: Riemannian geometry and Kleinian geometry. I will describe in basic terms each of these four approaches, and give some justification for why Euclidean geometry is not enough for modern geometers.

The talk will be entirely non-technical, assuming no prior (non-Euclidean!) geometrical knowledge.

26 Feb 2013, Jenny Jones

### Reverse Iontophoresis as a Technique for Drug Monitoring

Reverse iontophoresis is a relatively new non-invasive drug monitoring technique in which a small current is passed across the skin, causing an ion flow to the surface where it is collected, allowing the amount of drug to be measured and hence an accurate estimate of the blood drug concentration.

Initial reverse iontophoresis readings are unusually high and unrepresentative of the blood drug concentration, suggesting that ions are at first being collected from somewhere other than blood; one such candidate is the dead cells that make up the surface layer of the skin (stratum corneum).

I will give a little more detail about the above and an explanation of what this could mean for drug monitoring in reality. I will then go into how I am approaching the modelling of this, and probably explain "Michaelis-Menten kinetics" and anything else that seems relevant at the time.

5 Mar 2013, Sam Gamlin

### An Introduction to the Abelian Sandpile Model and the Connection to Uniform Spanning Trees

The Abelian Sandpile model is a probabilistic model defined on a graph, which was originally proposed as an example of self-organised criticality. In this talk I will define the model and then show how it is related to Uniform Spanning Trees. No prior knowledge will be assumed.

12 Mar 2013, Aretha Teckentrup

### Numerical Methods for Ordinary and Stochastic Differential Equations

I will give a simple derivation of Euler's method for ordinary differential equations, and show how the same approach can be used to derive the Euler-Maruyama method for stochastic differential equations (I will explain what these are). I will then show how the same approach can in fact be generalised to derive numerical methods of arbitrarily high order, and point out some of the difficulties in going from ordinary to stochastic differential equations.

19 Mar 2013, Zheyuan Li

### A Family Tree of Statistics and Practical Statistical Modelling

Statistical modelling of data has never been more popular than it is today. However, when given a data set, we are likely to feel lost in the ocean of existing models. Sometimes we are not even so clear why a specific kind of model is as it is. This talk aims to paint a picture of this by drawing an interesting family tree in the world of statistics. In particular, by looking at the black smoke data from a long-established monitoring network in the UK from 1966 to 1996, we will together examine a branch of this tree: "statistical inference $$\to$$ parametric statistical inference $$\to$$ frequentist approach $$\to$$ linear models $$\to$$ nonparametric regression $$\to$$ additive models $$\to$$ additive mixed models". The talk is designed to be introductory, and fun.

9 Apr 2013, Ben Boyle

### Similarity Solutions of PDEs

Similarity methods are some of the most powerful available for learning about solutions to PDEs, especially nonlinear ones. In this talk I will (very) briefly introduce concepts from Dimensional Analysis and show how they can be applied to find 'self-similar' solutions to certain PDEs.

16 Apr 2013, James Clarke

### Designing Optimal Controls for Networked Disease Systems: Chlamydia Gets the Treatment

Everybody is worried about chlamydia. I hope to enhance everyone's calm by describing a method whereby deterministic models are used to determine optimal controls for stochastic networks. By extracting specific information from networks it is possible to parametrise a certain simple model. This can then give you the means to force a network to do whatever you want it to. No knowledge of maths will be assumed.

23 Apr 2013, James Roberts

### The Direct Method in the Calculus of Variations

In the calculus of variations, the direct method provides a way of constructing minimizers of sufficiently 'nice' integral functionals. (I will explain what these things are.) Motivated by the classical Weierstrass theorem on extrema for continuous functions, we will first collect some topological and functional-analytic results. In the right setting (which will be provided) we may combine these results to give an existence theorem for minimizers of a certain class of functional.

30 Apr 2013, Katy Gaythorpe and Amy Spicer

### Fun with Plagues

An informal introduction to mathematical epidemiology.
Simply Mandy Moore — your ultimate source for Mandy Moore

Latest update: This Is Us – Flip a Coin Screencaptures

Hello and welcome to Simply Mandy, your ultimate source for everything Mandy Moore. Mandy is currently starring on the hit television show This Is Us as Rebecca, but you may know her for her work in movies such as Tangled, where she voiced Rapunzel, or A Walk To Remember. Mandy has also had a successful singing career and released several albums. Here you will find the latest news, photos and more related to Mandy's career. Thank you for your visit!

Current and recent projects:

- Mandy as Rapunzel (voice): Six years after the events of "Wreck-It Ralph", Ralph and Vanellope, now friends, discover a wi-fi router in their arcade, leading them into a new adventure.
- Mandy as Cate: After a disease kills 98% of America's children, the surviving 2% develop superpowers and are placed in internment camps. A 16-year-old girl escapes her camp and joins a group of other teens on the run from the government.
- Mandy as Rebecca Pearson in This Is Us, airing Season 4 on NBC from September 2019: The Pearson family's generational story unfolds in this emotional drama. In moments of love, joy, triumph and heartbreak, revelations emerge from parents Jack and Rebecca's past, while triplets Kate, Randall and Kevin discover deeper meaning in their present-day lives.
- Mandy as Lisa, available on DVD: Two sisters vacationing in Mexico are trapped in a shark cage at the bottom of the ocean. With less than an hour of oxygen left and great white sharks circling nearby, they must fight to survive.
- Mandy as Rapunzel (voice), airing on the Disney Channel: Set between Walt Disney Animation Studios' "Tangled" and its short film "Tangled Ever After", this animated adventure/comedy series unfolds as Rapunzel acquaints herself with her parents, her kingdom and the people of Corona.
- Mandy as Mom in I'm Not Here: A man struggles with the tragic memories of his past to make sense of his present, but soon realizes that time isn't the enemy he thinks it is.

Recent photo updates: This Is Us – Flip a Coin Screencaptures; This Is Us – 4×03 Screencaptures – Unhinged; This Is Us – Episode 4×02 Screencaptures; This Is Us Season 4 – Additional Promotional Photos.

'Tangled,' Starring Mandy Moore, Renewed at Disney Channel With New Name (June 02, 2018)

THE HOLLYWOOD REPORTER: Rapunzel is coming back for more adventures on Disney Channel. The network has renewed Tangled: The Series for a third season ahead of its season two premiere this month. Additionally, the show is getting a new name: Rapunzel's Tangled Adventure. The animated comedy stars Mandy Moore and Zachary Levi as Rapunzel and Eugene (who was known as Flynn Rider in the 2010 hit feature film). The returning cast also includes Eden Espinosa as Rapunzel's best friend and confidante, Cassandra; James Monroe-Iglehart as Eugene's best friend, Lance; and Jeremy Jordan as Varian. The series features music by Disney veteran Alan Menken and lyricist Glenn Slater. The guest voice cast for season two includes Carol Kane as the mystical traveler Madame Canardist; Lil Rel Howery as a fast-talking host named Goodberry; Yvonne Strahovski as Eugene's old love interest Stalyan; Bruce Campbell as the bizarrely charming King Edmund; Britt Robertson as the whip-smart, no-nonsense teenager Vex; Timothy Dalton as inventor and adventurer Demanitus; Katy Mixon as the stunningly beautiful Seraphina; and Kathy Najimy as a sprightly and strange resident of the magical forest.

Untangled: The Making of a Fairytale (February 11, 2018)

Thanks to my good friend Samantha, who runs the amazing Shane West Daily, I've added screencaptures of Mandy in one of Tangled's special features. Untangled: The Making of a Fairytale was included on the Blu-ray disc and showed Mandy and Zachary Levi talking about various aspects of making the film and revealing some Disney trivia. I love this movie so much, and Mandy's facial expressions in this interview are adorable. Enjoy. (Film Productions > Tangled > Blu-Ray Special Feature: Untangled – The Making of a Fairytale)

Mandy has verified accounts on Instagram (@mandymooremm) and Twitter (@TheMandyMoore). Any accounts other than these, even if they claim to be Mandy, are unfortunately fake.

Site info: Simply Mandy Moore (www.mandy-m.com) • Owner: Jess • Host: Fanscity • Established: February 2018.

Simply Mandy Moore is an unofficial fan site and has no affiliation with Mandy Moore, her management, representatives, family or friends in any way. All trademarks and copyrighted materials on this site are the property of their respective owners. The intent of this website is not to infringe on any copyrights, but rather to serve as a resource for fans of Mandy Moore and admirers of her work. Please contact us if you have any questions or concerns.
package com.tarena.poll.dao.impl;

import java.util.List;

import org.hibernate.Session;
import org.hibernate.Transaction;

import com.tarena.poll.dao.DaoException;
import com.tarena.poll.dao.IUserDao;
import com.tarena.poll.entity.TUser;
import com.tarena.poll.util.HBDaoFactory;

/**
 * Originally written as a demonstration.
 * @author datong
 */
public class UserDaoImpl2 implements IUserDao {

    @Override
    public void saveUser(TUser user) throws DaoException {
        Session s = HBDaoFactory.getSession();
        Transaction ts = s.getTransaction();
        try {
            ts.begin();
            s.save(user);
            ts.commit();
        } catch (Exception e) {
            ts.rollback();
            e.printStackTrace();
        } finally {
            s.close();
        }
    }

    @Override
    public void updateUser(TUser user) throws DaoException {
        Session s = HBDaoFactory.getSession();
        Transaction ts = s.getTransaction();
        try {
            ts.begin();
            s.update(user);
            ts.commit();
        } catch (Exception e) {
            ts.rollback();
            e.printStackTrace();
        } finally {
            s.close();
        }
    }

    @Override
    public void deleteUser(int id) throws DaoException {
        Session s = HBDaoFactory.getSession();
        Transaction ts = s.getTransaction();
        // bulk delete via HQL
        String hql = "delete from TUser t where t.id = :id";
        try {
            ts.begin();
            s.createQuery(hql).setParameter("id", id).executeUpdate();
            ts.commit();
        } catch (Exception e) {
            ts.rollback();
            e.printStackTrace();
        } finally {
            s.close();
        }
    }

    @Override
    public TUser findById(int id) throws DaoException {
        Session s = HBDaoFactory.getSession();
        String hql = "from TUser where id = :id";
        TUser user = (TUser) s.createQuery(hql).setParameter("id", id).uniqueResult();
        s.close();
        return user;
    }

    @Override
    public TUser findByNameAndPwd(String name, String pwd) throws DaoException {
        Session s = HBDaoFactory.getSession();
        String hql = "from TUser where name = :name and passwd = :passwd";
        TUser user = (TUser) s.createQuery(hql)
                .setParameter("name", name)
                .setParameter("passwd", pwd)
                .uniqueResult();
        s.close();
        return user;
    }

    @Override
    public TUser modifyPassword(int id, String newPwd) throws DaoException {
        TUser user = findById(id);
        user.setPasswd(newPwd);
        updateUser(user);
        return user;
    }

    @Override
    public List<TUser> findAllPM() throws DaoException {
        Session s = HBDaoFactory.getSession();
        String hql = "from TUser where type = 3";
        List<TUser> users = s.createQuery(hql).list();
        s.close();
        return users;
    }

    @Override
    public TUser findByName(String name) throws DaoException {
        Session s = HBDaoFactory.getSession();
        String hql = "from TUser where name = :name";
        TUser user = (TUser) s.createQuery(hql).setParameter("name", name).uniqueResult();
        s.close();
        return user;
    }

    @Override
    public List<TUser> findAllPMAndBzr() throws DaoException {
        Session s = HBDaoFactory.getSession();
        String hql = "from TUser where type = 3 or type = 2";
        List<TUser> users = s.createQuery(hql).list();
        s.close();
        return users;
    }
}
Jaman station (Gare de Jaman) is a railway station located in the municipality of Montreux, in the Swiss canton of Vaud. It takes its name from the Dent de Jaman, a summit reachable from the station by a hiking trail in about thirty minutes.

Railway situation: Jaman station lies on the Glion to Rochers-de-Naye section of the Montreux–Glion–Rochers de Naye line, between the stations of Paccot and La Perche. It is equipped with a passing loop in the middle of the station, in the direction of the Rochers de Naye, and a side platform.

History: Jaman station was opened in 1892 with the opening of the line from Glion to the Rochers-de-Naye. The electrification of the entire line was subsequently inaugurated.

Passenger services. Facilities: Jaman station consists of a single wooden platform laid on a metal walkway and protected by a metal guard rail. The platform gives direct access to hiking trails towards the Dent de Jaman and the Rochers de Naye. In winter, it gives access to the Rochers de Naye ski area.

Service: Jaman station is served by one train per hour in each direction, to or from Montreux station, from Wednesday to Sunday in winter (except during the Christmas period) and every day the rest of the year.

Intermodality: Jaman station has no connections with any other public transport lines.

See also: Transports Montreux-Vevey-Riviera; Glion station; Rochers-de-Naye station.

Categories: Railway stations in the canton of Vaud; railway stations opened in 1892.
# Non-logical symbol

"Non-logical symbol" is a technical term used in logic.

In logic, the non-logical symbols (sometimes also called non-logical constants) of a language of first-order logic are all of the symbols which are not part of symbolic logic, including symbols which, in an interpretation, may stand for constants, variables, functions, or predicates. A language of first-order logic is a formal language over the alphabet consisting of its non-logical symbols and its logical symbols. The latter include logical connectives, quantifiers, and variables that stand for statements.

A non-logical symbol only has meaning or semantic content when one is assigned to it by means of an interpretation. Consequently, a sentence containing a non-logical symbol lacks meaning except under an interpretation, so a sentence is said to be true or false under an interpretation. (Main article: first-order logic, especially the syntax of first-order logic.)

The logical constants, by contrast, have the same meaning in all interpretations. They include the symbols for truth-functional connectives (such as and, or, not, implies, and logical equivalence) and the symbols for the quantifiers "for all" and "there exists".

The equality symbol is sometimes treated as a non-logical symbol and sometimes treated as a symbol of logic. If it is treated as a logical symbol, then any interpretation will be required to interpret the equality sign using true equality; if interpreted as a non-logical symbol, it may be interpreted by an arbitrary equivalence relation.

## Signatures

A signature is a set of non-logical constants together with additional information identifying each symbol as either a constant symbol, or a function symbol of a specific arity n (a natural number), or a relation symbol of a specific arity. The additional information controls how the non-logical symbols can be used to form terms and formulas. For instance, if f is a binary function symbol and c is a constant symbol, then f(x, c) is a term, but c(x, f) is not a term. Relation symbols cannot be used in terms, but they can be used to combine one or more (depending on the arity) terms into an atomic formula.

For example, a signature could consist of a binary function symbol +, a constant symbol 0, and a binary relation symbol <.

## Models

Structures over a signature, also known as models, provide formal semantics to a signature and the first-order language over it.

A structure over a signature consists of a set D, known as the domain of discourse, together with interpretations of the non-logical symbols: every constant symbol is interpreted by an element of D, and the interpretation of an n-ary function symbol is an n-ary function on D, i.e. a function D^n → D from the n-fold cartesian product of the domain to the domain itself. Every n-ary relation symbol is interpreted by an n-ary relation on the domain, i.e. by a subset of D^n.

An example of a structure over the signature mentioned above is the ordered group of integers. Its domain is the set $\mathbb{Z}$ = {…, −2, −1, 0, 1, 2, …} of integers. The binary function symbol + is interpreted by addition, the constant symbol 0 by the additive identity, and the binary relation symbol < by the relation less than.

## Informal semantics

Outside a mathematical context, it is often more appropriate to work with more informal interpretations.
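The signature-and-structure example above can be made concrete with a small sketch. The code below is illustrative only: `signature`, `interpretation`, and `interpret` are ad-hoc names chosen for this sketch, not a standard API. It represents the signature {+, 0, <} and evaluates terms in the structure of the integers.

```python
# Signature {+, 0, <}: each symbol is classified as a constant,
# a function symbol with an arity, or a relation symbol with an arity.
signature = {
    "constants": {"0"},
    "functions": {"+": 2},   # binary function symbol
    "relations": {"<": 2},   # binary relation symbol
}

# A structure over the signature: domain = the integers, with an
# interpretation assigned to every non-logical symbol.
interpretation = {
    "0": 0,                      # constant symbol -> element of the domain
    "+": lambda a, b: a + b,     # function symbol -> function Z^2 -> Z
    "<": lambda a, b: a < b,     # relation symbol -> subset of Z^2 (as a predicate)
}

def interpret(term, env):
    """Evaluate a term under the interpretation and a variable assignment.

    Terms are nested tuples like ("+", "x", "0"); bare strings are either
    variables (looked up in env) or constant symbols.
    """
    if isinstance(term, str):
        return env[term] if term in env else interpretation[term]
    symbol, *args = term
    return interpretation[symbol](*(interpret(a, env) for a in args))

# f(x, c) is a term: here +(x, 0) evaluates to the value assigned to x.
print(interpret(("+", "x", "0"), {"x": 5}))  # 5
```

Note how the sketch mirrors the article: only function and constant symbols appear inside terms, while the relation symbol `<` would be used to combine two terms into an atomic formula.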
null
null
Whether your printing needs are big or small, frequent or rare, last minute or scheduled we would love to have a conversation with you. We help businesses, organisations and charities with printing needs as diverse as a few hundred business cards right through to large scale exhibition graphics and specialist stationery. What defines us is a true personal approach helping you to get the very best quality at the lowest possible prices. In addition we're famous for making the impossible possible meeting tight deadlines and going the extra mile. "We use Inca Creative for a wide range of promotional literature and would simply not consider anyone else. He's responsive, easy to deal with and the quality of printing is outstanding". Inca is our first and last choice for print and exhibition collateral. Brilliant service from start to finish. I needed a relatively small number of high quality printed and laminated leaflets for business promotion and chose Martin over others because he was quick to respond, asked the right questions and came up with a competitive price. Having commissioned him, Martin was then extremely helpful over a few issues that came up, my printing was then quick to produce and looks stunning. Quality, personal service – especially on what was for him a small job – and good price; what more could you ask for. I will be using Martin for all my printing from now on." Amanda Patton BA (hons) MSGD Garden and Landscape Designer. Inca are a friendly and approachable company. Always quick to return estimates and give accurate turnaround times. The quality of work is always to a high standard and they are always happy to try to accommodate last minute requests when things get tight time wise.
{ "redpajama_set_name": "RedPajamaC4" }
1,180
{"url":"http:\/\/ankursinha.in\/2015\/05\/28\/appstream-data-for-rpmfusion-now-available.html","text":"# ankursinha.in\/blog\n\nneuroscience\/fedora\/musings\n\nThu 28 May 2015\n\n# Appstream data for RPMFusion - now available!\n\nPosted by ankur in Tech (415 words, approximately a 2 minute read)\n\nI've been working on generating appstream data for RPMFusion packages recently. At the moment, since only Fedora packages provide appstream data, only they can be installed using Gnome software - for RPMFusion packages, a user must use another package manager - DNF and so on. Considering that a lot of the packages in RPMFusion are media player front-ends and things, it'd make it a lot easier for users if these were also listed in Gnome software. I spent a number of hours today writing appstream data files for the RPMFusion packages - both in the free and non free repositories. I believe I've written appstream data files for all packages that could be listed in Gnome software now. (They're hosted here in the Github repository I set up for this purpose). I had already generated initial RPM packages for the free and non free repositories and submitted review tickets to RPMFusion. They're still unassigned, so if you are a package maintainer with a few free cycles, please consider reviewing them. They are really simple reviews.\n\nI've now updated the packages submitted for review to use meta data from the new appstream information I wrote. A majority of the RPMFusion GUI packages are now included. 
You can install the RPMs from here:\n\nIf you'd like to use the terminal, you can run:\n\nsudo dnf install http:\/\/ankursinha.in\/files\/misc\/rpmfusion\/rpmfusion-free-appstream-data-22-4.fc22.noarch.rpm http:\/\/ankursinha.in\/files\/misc\/rpmfusion\/rpmfusion-nonfree-appstream-data-22-4.fc22.noarch.rpm\n\n\nYou'll need to kill and restart Gnome software to get it to load the new appstream data.\n\nNow, for example, you can now install Mixxx from RPMFusion using Gnome software!\n\nor, you can pick which of the many Xine front ends that are available in RPMFusion you'd like to install:\n\nThe screen-shots are currently hosted on my fedora people space, but once the packages are added to RPMFusion, we'll probably move them there too.\n\nPlease note that I've written about a hundred appdata files at a stretch, so there probably will be a few typos, formatting errors, and so on in them. If you do run into any, please open a pull request at the Github repository and help improve them.\n\nSince RPMFusion's package set is much smaller than Fedora's, and there aren't too many updates and things in general, I don't think the appstream data will need to be regenerated frequently - but I will update them once a fortnight, just in case.\n\nEnjoy the amazing Fedora 22 release. 
Cheerio!","date":"2018-03-18 22:48:34","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.17255809903144836, \"perplexity\": 3427.000743076593}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2018-13\/segments\/1521257646178.24\/warc\/CC-MAIN-20180318224057-20180319004057-00435.warc.gz\"}"}
null
null
La rue de la Bourse est une rue de Lille, dans le Nord, en France. La rue se situe à la limite des quartiers du Vieux-Lille et de Lille-Centre. Historique La rue de la Bourse est tracée après la construction du rang de Beauregard en 1687. Elle se nommait alors place du Vieux-Marché-aux-Fromages et n'a pris le nom de rue de la Bourse qu'en 1910. Description La rue de la Bourse rejoint la Grand'Place à la place du Théâtre. Elle part de la Grand'Place et se poursuit par la rue de la Grande-Chaussée après avoir rejoint la rue Lepelletier. Sites particuliers La rue comprend plusieurs bâtiments protégés au titre des monuments historiques. Le rang aux angelots bâti vers 1677 aux 1, 3, 5, 15, 17 et 19 rue de la Bourse Les immeubles de retour du rang de Beauregard aux 2 à 10 rue de la Bourse La maison au 23 rue de la Bourse Références Annexes Articles connexes Liste des rues de Lille Vieux-Lille Bourse Vieux-Lille
{ "redpajama_set_name": "RedPajamaWikipedia" }
6,623
Mexican cartel boss and fuel theft king sentenced to 60 years in prison | Multies This article was published by Reuters: A Mexican cartel leader who became one of the most wanted criminals in Mexico due to his gang's industrial-scale theft of petroleum has been sentenced to 60 years in prison, state prosecutors said on Friday. Jose Antonio Yepez, a notorious fuel thief blamed for fanning a sharp surge in violence in the central state of Guanajuato, was arrested in 2020 in what was a major coup for President Andres Manuel Lopez Obrador. Known as "El Marro" (The Mallet), Yepez was the boss of the Guanajuato-based Santa Rosa de Lima Cartel that waged a bloody turf war in the state with the Jalisco New Generation Cartel (CJNG), one of Mexico's most powerful and violent gangs. The attorney general's office of Guanajuato said in a Twitter post that a regional court "found him and his co-authors guilty of the crime of kidnapping." Yepez was the highest profile narco arrested so far under Lopez Obrador, who had vowed to bring down record levels of violence and also reduce rampant theft of petroleum from pipelines of the state-owned oil giant Pemex. The Santa Rosa de Lima Cartel engaged in an array of criminal activities in Guanajuato, from drug smuggling to kidnapping. Fuel theft was often an easy source of money in a state crisscrossed by pipelines and home to a major refinery.
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
1,578
Q: Autofocus an Input Element in AngularJs I have a small search bar which is hidden initally. With a ng-click i set "filter.show" to true and the searchbar becomes visible. As soon as this happens i want the text input inside to be focused automatically. I'm using Angular 1.5 and Angular Material and found the focus-me option as presented below. But this doesn't work somehow and I don't understand what's the problem. Maybe someone can help me. The html presented below uses the same controller for the button and the input. <md-toolbar class="md-table-toolbar md-default" ng-show="filter.show && !selected.length" aria-hidden="false"> <div class="md-toolbar-tools"> <md-icon md-svg-icon="magnify"></md-icon> <form flex="" name="filter.form"> <md-input-container class="md-block" flex="" style="margin-bottom:0"> <label>Search</label> <input type="text" focus-me="filter.show" ng-model="query.filter" ng-model-options="filter.options" aria-invalid="false"> </md-input-container> </form> <button class="md-icon-button md-button md-ink-ripple" type="button" ng-click="removeFilter()"> <md-icon md-svg-icon="close"></md-icon> <div class="md-ripple-container"></div> </button> </div> </md-toolbar>
{ "redpajama_set_name": "RedPajamaStackExchange" }
2,128
Die Pfarrkirche Hll. Nikolaus und Theodul ist eine römisch-katholische Kirche im Bergdorf Raggal im Großen Walsertal (Vorarlberg). Sie ist vom Friedhof umgeben. Geschichte Anfangs bestand mit dem Jahre 1455 eine Kapelle, welche 1460 erweitert wurde. Sie wurde als Lokalkaplanei der Pfarre Ludesch geführt und im Jahre 1586 zur selbständigen Pfarrkirche erhoben. Die Kirche wurde in den Jahren 1953 bis 1954 mit einem Vorraum mit Emporenaufgang erweitert. Beschreibung Das barocke Langhaus ist mit dem gotischen Chor unter einem gemeinsamen Satteldach verbunden, im Osten ist ein barocker Kirchturm mit Zwiebelhaube angebunden. Im Langhaus ist eine Flachdecke mit Gesims und Hohlkehle mit reichem Stuck und Schablonendekormalerei. Der Hochaltar und die Kanzel sind von Andreas Rinderer aus 1750, die Fassmalereien von Simon, Balthasar und Anton Hörmann, das Altarbild Hl. Nikolaus ist von Franz Anton Simon, die Figuren Hl. Theodul und Hl. Johannes der Täufer sind von Jakob Felder. Die Seitenaltarbilder Hl. Josef und Maria mit Jesus sind von Franz Bertle aus 1870. Das Rückwandbild der Kanzel Moses und die Gesetzestafeln ist von Anton Jehly aus 1899, das Vorsatzgemälde Rosenkranzmadonna mit Hll. Dominikus und Katharina von Siena von Jehly aus 1906. Im Vorraum Fresko Christus als Sämann von Albert Rauch aus 1959 und ein Glasgemälde des hl. Ludwig nach einem Entwurf von Rauch, ausgeführt von Glasmalerei Dallaserra aus Dornbirn. Die Orgel aus dem Jahre 1895 stammt aus der Orgelbauwerkstatt Gebrüder Mayer. 1916 mussten Glocken für die Rüstungsindustrie abgeliefert werden. Das jetzige Geläut mit den Schlagtönen des' / es'/ f' / as' der Gießerei Grassmayr ist aus dem Jahr 1956. Da der Turm mit der Belastung durch dieses Geläut überfordert ist, musste eine vom Boden bis zum Glockenstuhl reichende Stahlbetonkonstruktion, welche die Last aufnimmt, in den Boden ableitet und somit von der historischen Bausubstanz fernhält, in seinem Innern errichtet werden. Literatur Gert Ammann u. 
a.: Vorarlberg. Raggal (Die Kunstdenkmäler Österreichs). Verlag Anton Schroll, Wien 1983, ISBN 3-7031-0585-2, S. 332. Weblinks Pfarrkirche Raggal im Webauftritt der Diözese Feldkirch Einzelnachweise Raggal Raggal Raggal Raggal Raggal Baudenkmal (Vorarlberg) Raggal
{ "redpajama_set_name": "RedPajamaWikipedia" }
7,068
Q: unable to execute "delete" query in java in a swing button I set a action that it execute delete query and execute another class. here's my code: JButton btnScanMyPc = new JButton("SCAN MY PC"); btnScanMyPc.addActionListener(new ActionListener() { public void actionPerformed(ActionEvent arg0) { try{ String q="DELETE FROM `search` WHERE 1=1 "; PreparedStatement st=connection.prepareStatement(q); ResultSet rs=st.executeQuery(); ReadDir rd = new ReadDir(); ReadDir.main(null); } catch(Exception e) {JOptionPane.showMessageDialog(null, e);} } when i execute this query in database it works perfectly. but in java it shows some error like: java.sql.SQLException: Can not issue data manipulation Statement with executeQuery(). A: You have to use executeUpdate() instead of executeQuery() for data manipulation like Insert,Update or Delete. A: try this one "DELETE FROM search WHERE 1=1 " st.executeUpdate(q);
{ "redpajama_set_name": "RedPajamaStackExchange" }
9,197
<?php namespace CotaPreco\Cielo\Serialization\Request\Xml; use CotaPreco\Cielo\Request\CreateTransaction; use CotaPreco\Cielo\RequestInterface; use CotaPreco\Cielo\Serialization\Node\CardHolderOrTokenVisitor; use CotaPreco\Cielo\Serialization\Node\MerchantVisitor; use CotaPreco\Cielo\Serialization\Node\OrderVisitor; use CotaPreco\Cielo\Serialization\Node\PaymentMethodVisitor; /** * @author Andrey K. Vital <andreykvital@gmail.com> */ final class CreateTransactionSerializer extends AbstractSerializer { /** * {@inheritdoc} */ public function canSerialize(RequestInterface $request) { return $request instanceof CreateTransaction; } /** * {@inheritdoc} */ protected function getRootNodeName() { return 'requisicao-transacao'; } /** * {@inheritdoc} */ protected function serialize(RequestInterface $request, \DOMElement $root) { /* @var CreateTransaction $request */ $request->getMerchant() ->accept(new MerchantVisitor($root)); $request->getHolder() ->accept(new CardHolderOrTokenVisitor($root)); $request->getOrder() ->accept(new OrderVisitor($root)); $request->getPaymentMethod() ->accept(new PaymentMethodVisitor($root)); $withValues = array_filter( [ 'url-retorno' => $request->getReturnUrl() ?: 'null', 'autorizar' => $request->getAuthorizeIndicator(), 'capturar' => var_export($request->shouldCapture(), true), 'bin' => $request->getBin(), 'gerar-token' => var_export($request->shouldGenerateToken(), true) ] ); foreach ($withValues as $name => $value) { $root->appendChild( $root->ownerDocument->createElement( $name, $value ) ); } } }
{ "redpajama_set_name": "RedPajamaGithub" }
9,538
\section{Introduction} Since the seminal work of Butcher on integration methods~\cite{MR1993957,MR0305608} rooted trees (otherwise called Cayley trees~\cite{Cayley:1881fk}) are recognized as a basic combinatorial structure underlying the numerical and exact solution of ordinary differential equations~(see for example~\cite{MR0403225} and the monograph of Hairer-N\o rsett-Wanner~\cite{MR1227985,MR0403225}). Trees are also present in the work of Connes--Kreimer~\cite{MR1660199,MR1748177,MR1810779} on the combinatorial structure of renormalization in perturbative Quantum Field Theory, and connections between Runge-Kutta methods and renormalization have been explored by Brouder~\cite{MR2106008,brouder1}. Connes and Kreimer explored a Hopf algebra structure on rooted trees to disentangle nested sub-divergences in the Feynman diagrams of perturbative QFT. The starting point is the work of Kreimer~\cite{MR1633004,MR1797019} which introduced nested integrals indexed by trees in the analysis of Feynman diagrams. The same Hopf algebra was described before by D\"ur~\cite{MR857100} (for basic results on Hopf algebras see e.g.~\cite{MR0252485}). The literature on combinatorial and algebraic properties of rooted trees is quite large; we prefer to single out the work of Hoffman~\cite{MR1990174} and the two papers of Foissy~\cite{MR1909461,MR1905177} on labeled rooted trees. A sub-algebra of the Hopf algebra of rooted trees is isomorphic to the Hopf algebra of Chen's iterated integrals~\cite{MR0454968,MR1847673} which is at the base of Lyons theory of rough paths~\cite{MR1654527}. Lyons theory allows one to define and solve differential equations driven by irregular ``noises''. For an exposition see the work of Lyons cited above, the book of Lyons and Qian~\cite{MR2036784}, and the introductory article of Lejay~\cite{MR2053040}. For alternative approaches to rough paths see the paper~\cite{MR2091358} of the present author or Feyel-de~La~Pradelle~\cite{feyel}.
Chen~\cite{MR0454968} showed that a given path in a manifold can be encoded in the Hopf algebra of its iterated integrals. Lyons~\cite{MR1654527} realized that this encoding is good enough to recover solutions of differential equations driven by such a path. The aim of the present paper is to build a bridge between rooted trees and rough paths. Here we would like to describe how to encode a control path in a function on labeled rooted trees, which we call a \emph{branched rough path}, and then generalize the theory of Lyons to build solutions of driven differential equations by using this new encoding. The advantage of this approach is that we can dispense with the notion of \emph{geometric} rough path which is fundamental in Lyons theory. Geometric rough paths possess a rich structure and present nice connections with the geometry of certain Carnot groups~\cite{MR2144230} but there are situations where the geometric property is not natural, e.g. in It\^o stochastic integration or in infinite-dimensional generalizations of rough paths~\cite{kdv,TindelGubinelli}. A more abstract motivation is to prove that it is possible to build a complete theory of rough paths (at any level of roughness) in the non-geometric setting. Series over trees can be helpful also in the geometric setting: recently Neuenkirch--Nourdin--R\"o{\ss}ler--Tindel~\cite{tindelnourdin} studied asymptotic expansions for solutions of SDEs driven by fractional Brownian motion using expansions over trees. In Lyons' theory, to perform various computations (e.g. Taylor expansions) the geometric condition is (implicitly) used to ensure that products of iterated integrals can be expanded in a sum of other iterated integrals. On the other hand iterated integrals indexed by trees already form a closed algebra with respect to point-wise product and path integration (see below for details).
Thus, by enriching the notion of rough path we are able to perform computations as in the case of geometric rough paths and build a complete theory for non-geometric rough integrals. Moreover we hope that such a bridge can inspire novel integration methods for stochastic differential equations along the lines of~\cite{MR2198539}. \bigskip The plan of the note is the following. In Sect.~\ref{sec:trees} we introduce the concept of (labeled) rooted tree and the associated (D\"ur-Connes-Kreimer) Hopf algebra, and fix the relevant notations. In Sect.~\ref{incr} we summarize the theory of finite increments described in~\cite{MR2091358} which can be used as a basis for building rough path theory. In Sect.~\ref{sec:itin} we introduce iterated integrals indexed by labeled rooted trees and prove the basic multiplicative property which is a generalization of Chen's multiplicative property for usual iterated integrals. Next, in Sect.~\ref{sec:trees-diff} we explain how sums over iterated integrals indexed by rooted trees encode the solutions of driven differential equations. At this point we are ready to generalize rough paths and introduce the notion of \emph{branched rough path} (in Sect.~\ref{sec:brp}), prove a generalized extension theorem and construct the branched rough path associated to an \emph{almost} branched rough path (following the development of the standard theory, see e.g.~\cite{MR1654527}). In Sect.~\ref{sec:controlled}, we introduce paths controlled by a branched rough path and show how to solve differential equations driven by a branched rough path. Finally in Sect.~\ref{sec:inf-dim} we discuss another motivation to consider tree-labeled series: rough paths adapted to the solution of infinite-dimensional equations (deterministic or stochastic).
\section{Trees} \label{sec:trees} Given a finite set $\mathcal L$, define an $\mathcal L$-labeled rooted tree as a finite graph with a special vertex called the \emph{root} such that there is a unique path from the root to any other vertex of the tree. Moreover to each vertex there is associated an element of $\mathcal L$. Here are some examples of rooted trees labeled by $\mathcal L=\{1,2,3\}$: $$ \tnodel 2 \qquad \pstree{\tnodel 1}{\tnodel 3} \qquad \pstree{\tnodel 2}{\tnodel 2 \tnodel 1} \qquad \pstree{\tnodel 1}{\pstree{\tnodel 3}{\tnodel 2} \tnodel 1} \qquad \pstree{\tnodel 1}{\tnodel 1 \pstree{\tnodel 2}{\tnodel 3 \tnodel 1}} $$ We draw the root at the bottom with the tree growing upwards. Note that in a rooted tree the order of the branches at any vertex is ignored, so the following two are representations of the same (unlabeled) tree: $$ \pstree{\tnode}{\pstree{\tnode}{\tnode} \tnode} \qquad \pstree{\tnode}{\tnode \pstree{\tnode}{\tnode} } $$ Given $k$ $\mathcal L$-decorated rooted trees $\tau_1,\cdots,\tau_k$ and a label $a \in\mathcal L$ we define $\tau = [\tau_1,\cdots,\tau_k]_a$ as the tree obtained by attaching the $k$ roots of $\tau_1,\cdots,\tau_k$ to a new vertex with label $a$ which will be the root of $\tau$. Any decorated rooted tree can be constructed using the simple decorated tree $\bullet_a$ ($a \in\mathcal{L}$) and the operation $[\cdots]$, e.g. $$ [\bullet] = \pstree{\tnode}{ \tnode} \qquad [\bullet,[\bullet]] = \pstree{\tnode}{\pstree{\tnode}{\tnode} \tnode}, \qquad \text{etc\dots} $$ Denote by $\mathcal{T}_\mathcal L$ the set of all $\mathcal L$-decorated rooted trees and let $\mathcal{T}$ be the set of rooted trees without decoration (i.e. for which the set of labels $\mathcal L$ is made of a single element). There is a canonical map $\mathcal{T}_\mathcal L \to \mathcal{T}$ which simply forgets all the labels, and every function on $\mathcal{T}$ can be extended, using this map, to a function on $\mathcal{T}_\mathcal L$ for any set of labels $\mathcal L$.
Let $|\cdot| : \mathcal{T} \to \mathbb{R}$ be the map which counts the number of vertices of the (undecorated) tree and which can be defined recursively as $$ |\bullet | = 1, \qquad |[\tau_1,\dots,\tau_k]| = 1+|\tau_1|+\cdots+|\tau_k|; $$ moreover we define the \emph{tree factorial} $\gamma: \mathcal{T} \to \mathbb{R}$ as $$ \gamma(\bullet) = 1, \qquad \gamma([\tau_1,\dots,\tau_k]) = |[\tau_1,\dots,\tau_k]| \gamma(\tau_1) \cdots \gamma(\tau_k). $$ Lastly we define the \emph{symmetry factor} $\sigma : \mathcal{T}_\mathcal L \to \mathbb{R}$ by the recursive formula $\sigma(\tau) = 1$ for $|\tau| = 1$ and \begin{equation} \label{eq:sigma-prop} \sigma([\tau^1 \cdots \tau^k]_a) = \frac{k!}{\delta(\tau^1,\dots,\tau^k)} \sigma(\tau^1) \cdots \sigma(\tau^k) \end{equation} where $\delta(\tau^1,\cdots,\tau^{k})$ counts the number of different ordered $k$-uples $(\tau^1,\cdots,\tau^{k})$ which correspond to the same (unordered) collection $\{\tau^1,\cdots,\tau^{k}\}$ of subtrees. The factor $k!/\delta(\tau^1,\dots,\tau^k)$ is the order of the subgroup of permutations of $k$ elements which do not change the ordered $k$-uple $(\tau^1,\cdots,\tau^{k})$. Then $\sigma(\tau)$ is the order of the subgroup of permutations of the vertices of the tree $\tau$ which do not change the tree (taking into account also the labels). Another equivalent recursive definition for $\sigma$ is $$ \sigma([(\tau^1)^{n_1}\cdots(\tau^k)^{n_k}]_a) = n_1!\cdots n_k! \, \sigma(\tau^1)^{n_1}\cdots \sigma(\tau^k)^{n_k} $$ where $\tau^1,\dots,\tau^k$ are distinct subtrees and $n_1,\dots,n_k$ the respective multiplicities. Define the algebra $\mathcal{A} \mathcal{T}_\mathcal L$ as the commutative polynomial algebra generated by $\{1\}\cup \mathcal{T}_\mathcal L$ over $\mathbb{R}$, i.e.
elements of $\mathcal{A} \mathcal{T}_\mathcal L$ are finite linear combinations with coefficients in $\mathbb{R}$ of formal monomials of the form $\tau_1 \tau_2 \cdots \tau_n$ with $\tau_1,\dots,\tau_n \in \mathcal{T}_\mathcal L$ or of the unit $1\in \mathcal{A}\mathcal{T}_\mathcal L$. The set of all tree monomials is the set of \emph{forests} $\mathcal{F}_\mathcal L$, including the empty forest $1 \in \mathcal{F}_\mathcal{L}$. The algebra $\mathcal{A} \mathcal{T}_\mathcal L$ is endowed with a graduation $g$ given by $g(\tau_1 \cdots \tau_n) = |\tau_1|+\cdots+|\tau_n|$ and $g(1) = 0$. This graduation induces a corresponding filtration of $\mathcal{A} \mathcal{T}_\mathcal L$ in finite-dimensional linear subspaces $\mathcal{A}_n \mathcal{T}_\mathcal L$ generated by the set $\mathcal{F}_\mathcal L^n$ of forests of degree $\le n$. Any map $f : \mathcal{T}_\mathcal L \to A$, where $A$ is some commutative algebra, can be extended in a unique way to a homomorphism $f : \mathcal{A} \mathcal{T}_\mathcal L \to A$ by setting $ f(\tau_1 \cdots \tau_n) = f(\tau_1) f(\tau_2) \cdots f(\tau_n) $. On the algebra $\mathcal{A} \mathcal{T}_\mathcal L$ we can define a counit $\varepsilon: \mathcal{A} \mathcal{T}_\mathcal L \to \mathbb{R}$ as the algebra homomorphism such that $\varepsilon(1) = 1$ and $\varepsilon(\tau) = 0$ for every tree $\tau$, and a \emph{coproduct} $\Delta : \mathcal{A}\mathcal{T}_\mathcal L \to \mathcal{A}\mathcal{T}_\mathcal L \otimes \mathcal{A}\mathcal{T}_\mathcal L$ in the following way: $\Delta$ is an algebra homomorphism, i.e.
$\Delta(1) = 1 \otimes 1$, $\Delta(\tau_1 \cdots \tau_n) = \Delta(\tau_1) \cdots \Delta(\tau_n)$, it acts linearly on linear combinations of forests, and on each tree it acts recursively as \begin{equation} \label{eq:delta-def} \Delta(\tau) = 1\otimes \tau + \sum_{a \in \mathcal L} (B^a_+ \otimes \text{id} )[\Delta(B^a_-(\tau))] \end{equation} where $B_+^a(1) = \bullet_a$ and $B_+^a(\tau_1 \cdots \tau_n) = [\tau_1 \cdots \tau_n]_a$, and $B_-^a$ is the inverse of $B_+^a$ or is equal to zero if the tree root does not have label $a$, i.e. \begin{equation*} B_-^a(B^b_+(\tau_1 \cdots \tau_n))= \begin{cases} \tau_1 \cdots \tau_n & \text{if $a=b$}\\ 0 & \text{otherwise} \end{cases} \end{equation*} The coproduct $\Delta$ has an explicit description in terms of cuts which is useful in some proofs. A \emph{cut} of a tree $\tau$ is a subset of its edges which is selected to be removed. A cut is \emph{admissible} if going from the root to any leaf of the tree we meet at most one cut edge. Given a tree $\tau\in\mathcal{T}_\mathcal{L}$ and an admissible cut $c$, we denote by $R_c(\tau)\in \mathcal{T}_\mathcal{L}$ the tree obtained after the cut (that is, the subgraph containing the root) while the forest of subtrees detached from the ``trunk'' by the cut is denoted by $P_c(\tau) \in \mathcal{F}_\mathcal{L}$. With this notation the action of the coproduct on trees $\tau \in \mathcal{T}_\mathcal{L}$ can be described by the formula \begin{equation} \label{eq:copr-cuts} \Delta(\tau) = 1 \otimes \tau + \tau \otimes 1 + \sum_c R_c(\tau)\otimes P_c(\tau) \end{equation} where the sum is performed over all the nonempty admissible cuts $c$ of $\tau$; note that, consistently with \eqref{eq:delta-def}, the trunk $R_c(\tau)$ sits in the left tensor factor. Endowed with $\varepsilon$ and $\Delta$ the algebra $\mathcal{A} \mathcal{T}_\mathcal L$ becomes a bialgebra; there exists also an antipode $S$ which completes the definition of the Hopf algebra structure on $\mathcal{T}_\mathcal L$ as described by Connes-Kreimer~\cite{MR1660199} (in the unlabeled case).
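The recursive definitions of $|\cdot|$, $\gamma$, $\sigma$ and of the coproduct \eqref{eq:delta-def} can be checked mechanically on small trees. The following sketch (plain Python, not part of the paper; trees are encoded as nested \texttt{(label, children)} pairs and all function names are our own choices) implements them:

```python
from collections import Counter
from math import factorial

def tree(label, children=()):
    """A labeled rooted tree as (label, subtrees); subtrees are sorted
    because the order of branches is ignored."""
    return (label, tuple(sorted(children)))

def size(t):                             # |tau|: number of vertices
    return 1 + sum(size(c) for c in t[1])

def gamma(t):                            # tree factorial gamma(tau)
    g = size(t)
    for c in t[1]:
        g *= gamma(c)
    return g

def sigma(t):                            # symmetry factor sigma(tau)
    s = 1
    for subtree, n in Counter(t[1]).items():
        s *= factorial(n) * sigma(subtree) ** n
    return s

def forest_mul(f, g):                    # commutative product of forests
    return tuple(sorted(f + g))

def tensor_mul(A, B):                    # product in AT (x) AT
    out = Counter()
    for (l1, r1), c1 in A.items():
        for (l2, r2), c2 in B.items():
            out[(forest_mul(l1, l2), forest_mul(r1, r2))] += c1 * c2
    return out

def coproduct(t):
    """Delta(tau) = 1 (x) tau + (B_+^a (x) id)[Delta(B_-^a tau)]."""
    a, children = t
    d = Counter({((), ()): 1})           # Delta(1) = 1 (x) 1
    for c in children:                   # Delta is an algebra homomorphism
        d = tensor_mul(d, coproduct(c))
    out = Counter({((), (t,)): 1})       # the term 1 (x) tau
    for (left, right), coeff in d.items():
        out[((tree(a, left),), right)] += coeff   # B_+^a acts on the left factor
    return out
```

For the unlabeled cherry $[\bullet\bullet]$ this gives $|\tau|=3$, $\gamma(\tau)=3$, $\sigma(\tau)=2$ and $\Delta\tau = 1\otimes\tau+\tau\otimes 1+\bullet\otimes\bullet\bullet+2\,[\bullet]\otimes\bullet$, with the trunk in the left tensor factor as in \eqref{eq:delta-def}.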
Note that our definition of the coproduct differs from the one commonly present in the literature by the exchange of the order of the factors in the tensor product, in order to be consistent with other notations present in the paper. There exist various notations for the coproduct $\Delta$: we will often use Sweedler's notation $\Delta \tau = \sum \tau_{(1)} \otimes \tau_{(2)}$ but we also introduce a counting function $c : \mathcal{T}_\mathcal L \times \mathcal{T}_\mathcal L \times \mathcal{F}_\mathcal L \to \mathbb{N}$ such that $$ \Delta \tau = \sum_{\rho \in \mathcal{T}_\mathcal L, \sigma \in \mathcal{F}_\mathcal L} c(\tau,\rho, \sigma) \rho \otimes \sigma. $$ In the following we will use letters $\tau,\rho,\sigma,\dots$ to denote trees in $\mathcal{T}_\mathcal L$ or forests in $\mathcal{F}_\mathcal{L}$; the degree $g(\tau)$ of a forest $\tau \in \mathcal{F}_\mathcal L$ will also be written as $|\tau|$. Roman letters $a,b,c,\dots \in \mathcal{L}$ will denote vector indexes (i.e. labels) while $\ol{a},\ol{b},\dots$ will denote multi-indexes with values in $\mathcal{L}$: $\overline{a}=(a_1,\dots,a_n) \in \mathcal{L}^n$ with $|\overline a|=n$ the size of this multi-index. \section{Increments} \label{incr} Given $T > 0$, a vector space $V$ and an integer $k \ge 1$, we denote by $\mathcal{C}_k(V)$ the set of functions $g : [0,T]^{k} \to V$ such that $g_{t_1 \cdots t_{k}} = 0$ whenever $t_i = t_{i+1}$ for some $1 \le i \le k-1$. Such a function will be called a \emph{$k$-increment}, and we will set $\mathcal{C}_*(V)=\cup_{k\ge 1}\mathcal{C}_k(V)$. We write $\mathcal{C}_k = \mathcal{C}_k(\mathbb{R})$.
There is a cochain complex $(\mathcal{C}_*(V),\delta)$ where the coboundary $\delta$, satisfying $\delta^2 = 0$, is defined as follows on $\mathcal{C}_k(V)$: \begin{equation} \label{eq:coboundary} \delta : \mathcal{C}_k(V) \to \mathcal{C}_{k+1}(V) \qquad (\delta g)_{t_1 \cdots t_{k+1}} = \sum_{i=1}^{k+1} (-1)^i g_{t_1 \cdots \hat t_i \cdots t_{k+1}} , \end{equation} here $\hat t_i$ means that this particular argument is omitted. We will denote $\mathcal{Z}\mathcal{C}_k(V) = \mathcal{C}_k(V) \cap \text{Ker}\delta$ and $\mathcal{B} \mathcal{C}_k(V) = \mathcal{C}_k(V) \cap \text{Im}\delta$, respectively the spaces of \emph{$k$-cocycles} and of \emph{$k$-coboundaries}. Some simple examples of the action of $\delta$, which are the ones we will actually use throughout the paper, are obtained by letting $g\in\mathcal{C}_1(V)$ and $h\in\mathcal{C}_2(V)$. Then, for any $t,u,s\in[0,T]$, we have $ (\delta g)_{ts} = g_t - g_s $ and $ (\delta h)_{tus} = h_{ts}-h_{tu}-h_{us} $. Furthermore, it is readily checked~\cite{MR2091358} that the complex $(\mathcal{C}_*(V),\delta)$ is \emph{acyclic}, i.e. $\mathcal{Z} \mathcal{C}_{k+1}(V) = \mathcal{B} \mathcal{C}_{k}(V)$ for any $k\ge 1$, or otherwise stated, the sequence \begin{equation} \label{eq:exact_sequence} 0 \rightarrow \mathbb{R} \rightarrow \mathcal{C}_1(V) \stackrel{\delta}{\longrightarrow} \mathcal{C}_2(V) \stackrel{\delta}{\longrightarrow} \mathcal{C}_3(V) \stackrel{\delta}{\longrightarrow} \mathcal{C}_4(V) \rightarrow \cdots \end{equation} is exact. This implies in particular that if $\delta h = 0$ for some $h \in \mathcal{C}_{2}(V)$ then there exists $f \in \mathcal{C}_{1}(V)$ such that $\delta f = h$. Thus we get a heuristic interpretation of the coboundary $\delta h$: it measures how far a given 2-increment $h$ is from being an {\it exact} increment of a function (i.e. a finite difference).
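The coboundary \eqref{eq:coboundary} and the identity $\delta^2=0$ are easy to experiment with numerically. The following sketch (our own illustration in Python, with a $k$-increment represented as a function of $k$ time arguments) reproduces the two examples above:

```python
def delta(g, k):
    """Coboundary of a k-increment g (a function of k time arguments):
    (delta g)_{t_1 ... t_{k+1}} = sum_i (-1)^i g_{t_1 ... t_i-omitted ... t_{k+1}}."""
    def dg(*ts):                          # ts = (t_1, ..., t_{k+1})
        return sum((-1) ** (i + 1) * g(*(ts[:i] + ts[i + 1:]))
                   for i in range(k + 1))
    return dg

g = lambda t: t ** 3                      # a smooth 1-increment
h = delta(g, 1)                           # (delta g)_{ts} = g_t - g_s
hh = delta(h, 2)                          # delta^2 g: vanishes identically
```

On any triple $(t,u,s)$ one gets $h(t,s)=g_t-g_s$ and $hh(t,u,s)=0$, in line with the acyclicity of the complex.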
\bigskip When $V=\mathbb R$ the complex $(\mathcal{C}_*,\delta)$ is an (associative, non-commutative) graded algebra once endowed with the following (exterior) product: for $g\in\mathcal{C}_n $ and $h\in\mathcal{C}_m $ let $gh \in \mathcal{C}_{n+m-1} $ be the element defined by \begin{equation}\label{cvpdt} (gh)_{t_1,\dots,t_{m+n-1}}= g_{t_1,\dots,t_{n}} h_{t_{n},\dots,t_{m+n-1}}, \quad t_1,\dots,t_{m+n-1}\in[0,T]. \end{equation} In this context, the coboundary $\delta$ acts as a graded derivation with respect to the algebra structure. In particular we have the following useful properties. \begin{enumerate} \item Let $g,h$ be two elements of $\mathcal{C}_1 $. Then \begin{equation}\label{difrulu} \delta (gh) = \delta g\, h + g\, \delta h. \end{equation} \item Let $g \in \mathcal{C}_1 $ and $h\in \mathcal{C}_2 $. Then $$ \delta (gh) = \delta g\, h + g \,\delta h, \qquad \delta (hg) = \delta h\, g - h \,\delta g. $$ \end{enumerate} \bigskip The iterated integrals of smooth functions on $[0,T]$ are particular elements of $\mathcal{C}_2$ which will be of interest for us. Consider $f\in\mathcal{C}_1^\infty $, where $\mathcal{C}_1^\infty $ is the set of smooth functions from $[0,T]$ to $\mathbb R$. For each $h \in \mathcal{C}_2$ the integral $\int_s^t df_u \, h_{us}$, which will be denoted by $\mathcal J(df \, h)$, can be considered as an element of $\mathcal{C}_{2}$. That is, for $s,t\in[0,T]$, we set $$ \mathcal J_{ts}(df \, h) = \int_s^t df_u \, h_{us}. $$ The basic relation between integration and the coboundary $\delta$ is given by the next lemma. \begin{lemma} \label{lemma:split} Let $h \in \mathcal{C}_2$ be such that $\delta h = \sum_i h^{(1,i)} h^{(2,i)}$ (finite sum) for $h^{(1,i)},h^{(2,i)} \in \mathcal{C}_2$ and let $x \in \mathcal{C}^\infty_1$.
Then \begin{equation} \label{eq:action-der} \delta \mathcal J(dx\, h) = \mathcal J(dx) h + \sum_i \mathcal J(dx\, h^{1,i}) h^{2,i} \end{equation} \end{lemma} \begin{proof} \begin{equation*} \begin{split} \delta \mathcal J(dx\, h)_{tus} & = \int_s^t h_{vs} dx_v - \int_s^u h_{vs} dx_v - \int_{u}^t h_{vu}dx_v \\ & = \int_u^t (h_{vs}-h_{vu}) dx_v = \int_u^t \delta h _{vus} dx_v + \int_{u}^t h_{us} dx_v \\ & = \sum_i \int_u^t h^{1,i}_{vu} dx_v\, h^{2,i}_{us} + \mathcal J_{tu}(dx)\, h_{us} \end{split} \end{equation*} \end{proof} Then, given a vector $\{x^i \}_{i=1,\dots,d}$ of elements of $\mathcal{C}_1^{\infty}$, we introduce iterated integrals recursively as $$ \mathcal J(dx^{i_1} dx^{i_2} \cdots dx^{i_n}) = \mathcal J[dx^{i_1} \mathcal J(dx^{i_2} \cdots dx^{i_n})] $$ where $i_1,\dots,i_n \in \{1,\dots,d\}$. Then by using Lemma~\ref{lemma:split} we recover Chen's multiplicative property (in disguise) \begin{equation} \label{eq:chen} \delta \mathcal J(dx^{i_1} \cdots dx^{i_n}) = \sum_{k=1}^{n-1} \mathcal J(dx^{i_{1}} \cdots dx^{i_k}) \mathcal J(dx^{i_{k+1}}\cdots dx^{i_n}), \qquad (i_1,\dots,i_n) \in \{1,\dots,d\}^n. \end{equation} \section{Rooted trees and iterated integrals} \label{sec:itin} Fix a family $x= \{x^a\}_{a=1,\dots,d}$ of smooth elements in $\mathcal{C}_1$ and let $\mathcal L =\{1,2,\dots,d\} $ be the set of indices. By iterating integrations along the elements of $x$ we can build a map $X : \mathcal{T}_\mathcal L \to C([0,T]^2; \mathbb{R})$ defined as follows \begin{equation} \label{eq:it-integrals} X_{ts}^{\bullet_a} = \int_s^t dx_u^a, \qquad X_{ts}^{[\tau^1 \cdots \tau^k]_a} = \int_s^t \prod_{i=1}^k X_{us}^{\tau^i} dx_u^a. \end{equation} On the vector space $\mathcal C_2$ we introduce the associative and commutative inner product $\circ$ as $(a \circ b)_{ts} = a_{ts} b_{ts}$ for $a,b \in \mathcal C_2$. 
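Before proceeding, Chen's multiplicative property~(\ref{eq:chen}) lends itself to a direct numerical check in the simplest case $n=2$. The following sketch (plain Python with a midpoint quadrature; the two smooth sample paths are arbitrary choices of ours) computes $\mathcal J(dx^1 dx^2)$ and verifies $\delta \mathcal J(dx^1 dx^2)_{tus} = \mathcal J(dx^1)_{tu}\, \mathcal J(dx^2)_{us}$ up to quadrature error.

```python
import math

x1 = lambda u: u * u      # sample smooth paths (our arbitrary choice)
x2 = math.sin

def J2(t, s, n=2000):
    """J(dx1 dx2)_{ts} = int_s^t (x2(u) - x2(s)) dx1(u), midpoint rule."""
    h = (t - s) / n
    total = 0.0
    for k in range(n):
        a, b = s + k * h, s + (k + 1) * h
        total += (x2(0.5 * (a + b)) - x2(s)) * (x1(b) - x1(a))
    return total

t, u, s = 0.9, 0.5, 0.2
lhs = J2(t, s) - J2(t, u) - J2(u, s)        # (delta J(dx1 dx2))_{tus}
rhs = (x1(t) - x1(u)) * (x2(u) - x2(s))     # J(dx1)_{tu} J(dx2)_{us}
```

The agreement of `lhs` and `rhs` is exact in the limit of fine partitions; with the parameters above the discrepancy is only quadrature error.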
With this product $\mathcal C_2$ becomes an algebra and as explained before we can extend the map $X : \mathcal{T}_\mathcal L \to \mathcal C_2$ to a map on $\mathcal{A} \mathcal{T}_\mathcal L$ by linearity and by letting $X^{\tau_1 \cdots \tau_n}_{ts} = X^{\tau_1}_{ts} X^{\tau_2}_{ts} \cdots X^{\tau_n}_{ts}$ for the value of $X$ on the forest $\tau_1\cdots\tau_n$. Using this product we can write $X^{[\tau_1\cdots\tau_n]_a} = \int X^{\tau_1 \cdots \tau_n} dx^a$. Let $\mathcal{C}^+_2 = \mathcal{C}_2 \oplus e$ be the unital algebra obtained by adding to the algebra $\mathcal{C}_2$ the unit $e$ such that $e_{ts} = 1$ for any $t,s\in[0,T]$. The product $\circ$ has the following relation with $\delta$: \begin{equation} \label{eq:der-leib-2} \delta (a \circ b) = \delta a \circ \delta b + (e a + a e)\circ \delta b + (eb+be) \circ \delta a + ab + ba \end{equation} where $\circ$ is defined on $\mathcal C_3$ in the natural way: $(g \circ h)_{tus} = g_{tus} h_{tus}$ for every $g,h \in \mathcal C_3$. If on the algebra $(\mathcal C_2,\circ)$ we consider the exterior product $\mathcal C_2 \otimes \mathcal C_2 \to \mathcal C_3$ then we can extend the homomorphism $X$ also to the tensor product $\mathcal{A} \mathcal{T}_\mathcal L \otimes \mathcal{A} \mathcal{T}_\mathcal L$ by $X^{\sigma \otimes \rho} = X^{\sigma}X^{\rho}$ for every $\sigma,\rho \in \mathcal{A} \mathcal{T}_\mathcal L$. Denote by $I^a : \mathcal{C}_2 \to \mathcal{C}_2$ the integration map given by $I^a(h) = \mathcal J( dx^a h)$; then for all elements $\sigma \in \mathcal{A}\mathcal{T}_\mathcal L$ we have $I^a X^\sigma = X^{B_+^a \sigma}$: the map $B_+^a$ represents integration on the sub-algebra $\mathcal{A}_X \subset \mathcal{C}^+_2$ generated by $\{X^\tau\}_{\tau \in \mathcal{T}_\mathcal{L}}$. 
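To make the recursive definition~(\ref{eq:it-integrals}) concrete, here is a small numerical sketch (plain Python; the encoding of a labeled tree as a pair `(label, children)` and the choice of paths are our own assumptions, not notation from the text). A leaf evaluates to an increment of the driving path, and an inner node integrates the $\circ$-product of its children against its own path; with identity paths all the integrals are explicit.

```python
x = {'a': lambda u: u, 'b': lambda u: u}   # identity paths: integrals are explicit

def X(tree, t, s, n=300):
    """X^tree_{ts} for tree = (label, [children]):
    a leaf is an increment of x, an inner node integrates the product
    of its children against dx^label (midpoint quadrature)."""
    label, children = tree
    if not children:
        return x[label](t) - x[label](s)
    h = (t - s) / n
    total = 0.0
    for k in range(n):
        a, b = s + k * h, s + (k + 1) * h
        m = 0.5 * (a + b)
        prod = 1.0
        for c in children:
            prod *= X(c, m, s, n)
        total += prod * (x[label](b) - x[label](a))
    return total

leaf_b = ('b', [])
chain = ('a', [leaf_b])            # [bullet_b]_a : the usual double integral
cherry = ('a', [leaf_b, leaf_b])   # [bullet_b bullet_b]_a
```

With identity paths, $X^{[\bullet_b]_a}_{1,0} = \int_0^1 u\, du = 1/2$ and the cherry tree gives $\int_0^1 u^2\, du = 1/3$, while the value of $X$ on a forest is just the pointwise ($\circ$) product of the values on its trees.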
This sub-algebra contains the polynomial algebra generated by the set $\{\delta x^a\}_{a\in\mathcal{L}}$: \begin{equation} \label{eq:pol-in-trees} X^{\bullet_{a_1} \cdots \bullet_{a_n}} = X^{\bullet_{a_1}} \circ \cdots \circ X^{\bullet_{a_n}} = \delta x^{a_1} \circ \cdots \circ \delta x^{a_n}. \end{equation} It also contains the usual iterated integrals of $x$: \begin{equation} \label{eq:chen-in-trees} \mathcal J(dx^{a_1} \cdots dx^{a_n}) = I^{a_1} I^{a_2} \cdots I^{a_{n-1}}( \delta x^{a_n}) = X^{B_+^{a_1} B_+^{a_2}\cdots B_+^{a_{n-1}} \bullet_{a_n}} = X^{[\cdots[\bullet_{a_n}]_{a_{n-1}}\cdots]_{a_1}}. \end{equation} For future use let us denote by $\mathcal{T}^{\text{Chen}}_\mathcal L$ the subset of $\mathcal{T}_\mathcal L$ made of ``linear'' labeled trees of the form $[\cdots[\bullet_{a_n}]_{a_{n-1}}\cdots]_{a_1}$. What is remarkable is the relation between the coalgebra structure of the trees and the algebraic properties of the iterated integrals $X$ with respect to the coboundary $\delta$, as illustrated in the next theorem. \begin{theorem}[Tree multiplicative property] \label{th:equiv} The map $X$ satisfies the following algebraic relation: \begin{equation} \label{eq:alg-relations} \delta X^{\sigma} = X^{\Delta'(\sigma)} , \qquad \sigma \in \mathcal{A}\mathcal{T}_\mathcal L \end{equation} where $\Delta'$ is the reduced coproduct $ \Delta'(\tau) = \Delta(\tau) - 1 \otimes \tau - \tau \otimes 1. $ \end{theorem} \begin{proof} We will proceed by induction on the degree $g$ of the forests in $\mathcal{A} \mathcal{T}_\mathcal L$ defined above. It is clear that the relation~(\ref{eq:alg-relations}) holds for the simple tree $\bullet_a$ with degree $g=1$. Assume that eq.~(\ref{eq:alg-relations}) holds for every monomial with degree less than $n$ and let us prove it for monomials of degree $n$. 
We need the following two properties of the reduced coproduct: first, its recursive definition can be rewritten as \begin{equation} \label{eq:delta-prime-rec} \Delta'(\tau) = \sum_{a\in\mathcal{L}} \bullet_a \otimes B^a_-(\tau) + \sum_a (B_+^a \otimes \text{id}) [\Delta'(B_-^a(\tau))] \end{equation} which follows directly from~(\ref{eq:delta-def}); second, a formula for the action of $\Delta'$ on products of monomials: \begin{equation} \label{eq:leib-delta-prime} \Delta'(\rho \sigma) = \Delta' \sigma \Delta' \rho + (1\otimes \sigma + \sigma \otimes 1) \Delta' \rho + (1\otimes \rho + \rho \otimes 1) \Delta' \sigma + \rho \otimes \sigma + \sigma \otimes \rho \end{equation} for monomials $\rho,\sigma$ of trees. Assume $g(\rho\sigma) = n$ and let us compute $\delta X^{\rho \sigma}$ using eq.~(\ref{eq:der-leib-2}): \begin{equation*} \begin{split} \delta X^{\rho \sigma} & = \delta (X^\rho \circ X^\sigma) \\ & = \delta X^\rho \circ (X^\sigma e + e X^\sigma) + \delta X^\sigma \circ (X^\rho e + e X^\rho) + \delta X^\rho \circ \delta X^\sigma + X^\rho X^\sigma + X^\sigma X^\rho \end{split} \end{equation*} Since $g(\sigma) < n$ and $g(\rho) < n$ we obtain \begin{equation*} \begin{split} \delta X^{\rho \sigma} & = X^{\Delta' \rho} \circ (X^\sigma e + e X^\sigma) + X^{\Delta'\sigma} \circ (X^\rho e + e X^\rho) + X^{\Delta'\rho} \circ X^{\Delta'\sigma} + X^\rho X^\sigma + X^\sigma X^\rho \\& = X^{\Delta' \rho} \circ X^{\sigma\otimes 1 + 1 \otimes \sigma} + X^{\Delta'\sigma} \circ X^{\rho\otimes 1 + 1 \otimes \rho} + X^{\Delta'\rho} \circ X^{\Delta'\sigma} + X^{\rho \otimes \sigma} + X^{\sigma \otimes \rho} \\& = X^{\Delta' \rho( \sigma\otimes 1 + 1 \otimes \sigma)} + X^{\Delta'\sigma (\rho\otimes 1 + 1 \otimes \rho)} + X^{\Delta'\rho \Delta'\sigma} + X^{\rho \otimes \sigma} + X^{\sigma \otimes \rho} \\ & = X^{\Delta'(\rho \sigma) } \end{split} \end{equation*} according to eq.~(\ref{eq:leib-delta-prime}). 
So we have proven eq.~(\ref{eq:alg-relations}) for nontrivial monomials of $g$-degree $n$. It remains to prove the relation for monomials given by a single tree of degree $n$. To do this we need the action of $\delta$ on iterated integrals, which is given by Lemma~\ref{lemma:split} above. Let us compute $\delta X^\tau$ using formula~(\ref{eq:action-der}) with $\tau = [\tau_1 \cdots \tau_n]_a$: \begin{equation*} \begin{split} \delta X^{[\tau_1 \cdots \tau_n]_a} & = \delta \mathcal J[ dx^a X^{\tau_1 \cdots \tau_n}] = \delta x^a X^{\tau_1 \cdots \tau_n} + \sum_i \mathcal J[dx^a X^{\theta^1_i}]\, X^{\theta^2_i} \\ & = X^{\bullet_a} X^{\tau_1 \cdots \tau_n} + \sum_i X^{[\theta^1_i]_a}\, X^{\theta^2_i} \end{split} \end{equation*} where $\delta X^{\tau_1 \cdots \tau_n} = \sum_i X^{\theta^1_i} X^{\theta^2_i}$ and the $\theta^{1,2}$ satisfy $ \Delta'(\tau_1 \cdots \tau_n) = \sum_i \theta^1_i \otimes \theta^2_i $, since our induction assumption implies that eq.~(\ref{eq:alg-relations}) holds for the monomial $\tau_1 \cdots \tau_n$. Then \begin{equation*} \begin{split} \delta X^{[\tau_1 \cdots \tau_n]_a} & = X^{\bullet_a \otimes (\tau_1 \cdots \tau_n)} + X^{\sum_i [\theta^1_i]_a \otimes \theta^2_i} = X^{\bullet_a \otimes (\tau_1 \cdots \tau_n) + \sum_i [\theta^1_i]_a \otimes \theta^2_i} \\ & = X^{\bullet_a \otimes (\tau_1 \cdots \tau_n) + (B_+^a \otimes \text{id})(\sum_i \theta^1_i \otimes \theta^2_i)} = X^{\Delta'([\tau_1 \cdots \tau_n]_a)} \end{split} \end{equation*} where we used eq.~(\ref{eq:delta-prime-rec}). This proves eq.~(\ref{eq:alg-relations}). \end{proof} \begin{example} \label{ex:example-1} Let us give an example in one dimension ($d=1$), so that trees are not decorated. 
The forests of degree less than or equal to three are: \psset{levelsep=-5pt,nodesep=-2pt,treesep=1pt} \begin{equation*} \tsnode, \aabb, \tsnode \tsnode, \aaabbb, \tsnode \aabb, \tsnode \tsnode \tsnode, \pstree{\tsnode}{\tsnode \tsnode} \end{equation*} The reduced coproduct acts on these monomials as follows: \begin{equation*} \Delta' \aabb = \ensuremath{\scriptstyle\bullet} \otimes \ensuremath{\scriptstyle\bullet}, \qquad \Delta' (\ensuremath{\scriptstyle\bullet} \ensuremath{\scriptstyle\bullet}) = 2 \ensuremath{\scriptstyle\bullet} \otimes \ensuremath{\scriptstyle\bullet} \end{equation*} \begin{equation*} \Delta' \aaabbb = \aabb \otimes \ensuremath{\scriptstyle\bullet} + \ensuremath{\scriptstyle\bullet} \otimes \aabb \end{equation*} \begin{equation*} \Delta' ( \ensuremath{\scriptstyle\bullet} \aabb) = \ensuremath{\scriptstyle\bullet} \otimes \ensuremath{\scriptstyle\bullet} \ensuremath{\scriptstyle\bullet} + \ensuremath{\scriptstyle\bullet} \ensuremath{\scriptstyle\bullet} \otimes \ensuremath{\scriptstyle\bullet} + \aabb \otimes \ensuremath{\scriptstyle\bullet} + \ensuremath{\scriptstyle\bullet} \otimes \aabb \end{equation*} \begin{equation*} \Delta'(\ensuremath{\scriptstyle\bullet}^3) = 3 \ensuremath{\scriptstyle\bullet}^2 \otimes \ensuremath{\scriptstyle\bullet} + 3 \ensuremath{\scriptstyle\bullet} \otimes \ensuremath{\scriptstyle\bullet}^2 \end{equation*} \begin{equation*} \Delta' \pstree{\tsnode}{\tsnode \tsnode} = \ensuremath{\scriptstyle\bullet} \otimes \ensuremath{\scriptstyle\bullet} \ensuremath{\scriptstyle\bullet} + 2 \aabb \otimes \ensuremath{\scriptstyle\bullet} \end{equation*} So we have $$ \delta X^{\pstree{\tsnode}{\tsnode \tsnode}} = X^{\ensuremath{\scriptstyle\bullet}} X^{\ensuremath{\scriptstyle\bullet} \ensuremath{\scriptstyle\bullet}} + 2 X^{\aabb} X^{\ensuremath{\scriptstyle\bullet}} $$ \end{example} \begin{remark} A particular case of the tree multiplicative property~(\ref{eq:alg-relations}) is given by Chen's multiplicative 
property~(\ref{eq:chen}) with the aid of the relation~(\ref{eq:chen-in-trees}). \end{remark} As a first elementary application of this result we derive a tree binomial formula. \begin{lemma}[Tree Binomial] \label{lemma:tree-binomial} For every $\tau \in \mathcal{T}$ and $a,b \ge 0$ we have \begin{equation} \label{eq:tree-binomial} (a+b)^{|\tau|} = \sum_i \frac{\tau!}{\tau^{(1)}_i! \tau^{(2)}_i!} a^{|\tau^{(1)}_i|} b^{|\tau^{(2)}_i|} \end{equation} \end{lemma} \begin{proof} Consider the iterated integrals $T^\tau$ associated to the identity path $t \mapsto t$ on $[0,T]$ $$ T^{\bullet}_{ts} = t-s, \qquad T^{[\tau_1 \cdots \tau_n]}_{ts} = \int_s^t T^{\tau_1}_{us}\cdots T^{\tau_n}_{us} du $$ By induction it is not difficult to prove that $ T^{\tau}_{ts} = (t-s)^{|\tau|} (\tau!)^{-1} $, so applying Thm.~\ref{th:equiv} to $T^\tau$ we get \begin{equation*} \begin{split} \frac{(t-s)^{|\tau|}}{\tau!} & = T^{\tau}_{ts} = T^{\tau}_{us} + T^{\tau}_{tu} + \sum'_i T^{\tau^{(1)}_i}_{tu} T^{\tau^{(2)}_i}_{us} = \sum_i T^{\tau^{(1)}_i}_{tu} T^{\tau^{(2)}_i}_{us} \\ & = \sum_i \frac{1}{\tau^{(1)}_i! \tau^{(2)}_i!} (t-u)^{|\tau^{(1)}_i|} (u-s)^{|\tau^{(2)}_i|} \end{split} \end{equation*} where $\sum'_i$ runs over the terms of the reduced coproduct of $\tau$, while the unprimed sums run over the full coproduct, with the conventions $T^{\emptyset} = 1$ and $\emptyset! = 1$. Then setting $t-u = a$ and $u-s=b$ we get eq.~(\ref{eq:tree-binomial}). \end{proof} \subsection{Geometric paths} The above homomorphism $X$ can be simplified using the fact that it is generated by a $C^1$ family $x$. 
Indeed Chen~\cite{MR0454968} proved that products of iterated integrals can always be expressed as linear combinations of iterated integrals via the \emph{shuffle product}: \begin{equation} \label{eq:shuffle} \mathcal J(dx^{a_1}\cdots dx^{a_n})\circ \mathcal J(dx^{b_1}\cdots dx^{b_m}) = \sum_{\overline c \in \text{Sh}(\overline a, \overline b)} \mathcal J(dx^{c_1}\cdots dx^{c_{n+m}}) \end{equation} where, given two multi-indices $\overline a = (a_1,\dots,a_n)$ and $\overline b = (b_1,\dots,b_m)$, the set of their \emph{shuffles} $\text{Sh}(\overline a, \overline b)$ is the set of all the permutations of the $(n+m)$-tuple $(a_1,\dots,a_n,b_1,\dots,b_m)$ which do not change the relative ordering of the two sub-tuples $\overline a$, $\overline b$. Using relation~(\ref{eq:shuffle}) we can reduce every $X^{\tau}$ for $\tau \in \mathcal{T}_\mathcal L$ to a linear combination of $\{ X^{\sigma} \}_{\sigma \in \mathcal{T}^{\text{Chen}}_\mathcal L}$. \section{Series solutions of driven differential equations} \label{sec:trees-diff} Under appropriate conditions on the vector field $f: \mathbb{R}^n \to \mathbb{R}^n$ the solution $y$ of the differential equation $dy/dt = f(y)$, $y_0 = \eta$, admits the series representation \begin{equation} \label{eq:B-series} y_t = \eta + \sum_{\tau \in \mathcal{T}} \psi^f(\tau)(\eta) \frac{t^{|\tau|}}{\sigma(\tau) \tau!} \end{equation} which is called a $B$-series (in honor of J.~Butcher, see~\cite{MR1993957,MR0403225,MR1227985}). 
The coefficients $\psi^f$ are called \emph{elementary differentials} and are defined as $$ \psi^f(\bullet)(\xi) = f(\xi), \qquad \psi^f([\tau^1 \cdots \tau^k])(\xi) = \sum_{\ol b \in \mathcal{I}\mathcal{L}_1 : |\ol b| = k} f_{\ol b}(\xi) [\psi^f(\tau^1)(\xi)]^{b_1}\cdots [\psi^f(\tau^k)(\xi)]^{b_k} $$ where we introduce multi-indices $\overline b \in \mathcal{I}\mathcal{L}_1 = \cup_{k=0}^\infty \mathcal{L}_1^k$, $\mathcal{L}_1 = \{1,\dots,n\}$, with the convention $\mathcal{L}_1^0 = \emptyset$, and we set $f_{\emptyset}(\xi) = f(\xi)$ and $ f_{\ol b}(\xi) = \prod_{i=1}^{|\ol b|} \partial_{\xi_{b_i}} f(\xi) $ for the derivatives of the vector field. In this section we study the analogous series expansion for \emph{driven} differential equations. Consider a $C^1$ path $x: [0,T] \to \mathbb{R}^d$ and let $\{x^a\}_{a\in \mathcal{L}}$ be its coordinates in a fixed basis. Fix a point $\eta \in \mathbb{R}^n$ and let $f_a: \mathbb{R}^n \to \mathbb{R}^n$, $a=1,\dots,d$ be a collection of analytic vector fields on $\mathbb{R}^n$. Let $R$ be a common analyticity radius around $\eta$ for all their coordinates. \begin{theorem} The solution of the differential equation $ dy_t = \sum_{a\in\mathcal{L}} f_a(y_t) dx_t^a$, $y_0 = \eta $ admits locally the series representation \begin{equation} \label{eq:tree-rep} \delta y_{ts} = \sum_{\tau \in \mathcal{T}_\mathcal{L}} \frac{1}{\sigma(\tau)} \phi^{f}(\tau)(y_s) X^\tau_{ts}, \qquad y_0 = \eta \end{equation} where the sum runs over all $\mathcal{L}$-labeled rooted trees $\tau\in\mathcal{T}_\mathcal{L}$ and where we recursively define functions $\phi^f : \mathcal{T}_\mathcal{L} \times \mathbb{R}^n \to \mathbb{R}^n$ such that \begin{equation*} \phi^f(\bullet_a)(\xi) = f_a(\xi), \qquad \phi^f([\tau^1 \cdots \tau^k]_a)(\xi) = \sum_{\ol b \in \mathcal{I} \mathcal{L}_1: |\ol b| = k} f_{a;b_1\dots b_k}(\xi) \prod_{i=1}^{k} [\phi^{f}(\tau^i)(\xi)]^{b_i} . 
\end{equation*} \end{theorem} \begin{proof} Let us assume for the moment that the series~(\ref{eq:tree-rep}) converges absolutely. We will verify that eq.~(\ref{eq:tree-rep}) satisfies the integral equation \begin{equation} \label{eq:integral-eq} \delta y_{ts} = \sum_{a\in\mathcal{L}} \int_s^t f_a(y_u) dx^a_u . \end{equation} Consider the Taylor series for $f$ around $\xi\in \mathbb{R}^n$: $$ f_a(\xi') = \sum_{\ol b\in \mathcal{I}\mathcal{L}_1 } \frac{f_{a; \ol b}(\xi)}{|\ol b|!} \prod_{i=1}^{|\ol b|} (\xi'-\xi)^{b_i} $$ where $\xi^k$ denotes the $k$-th coordinate of the vector $\xi\in \mathbb{R}^n$. By the analyticity of the vector fields $f_a$ this series converges as long as $|\xi'-\xi| \le R-|\xi-\eta|$. Compute the r.h.s. of eq.~(\ref{eq:integral-eq}) by plugging in eq.~(\ref{eq:tree-rep}) and the Taylor expansion of $f$: \begin{equation*} \begin{split} \sum_{a\in\mathcal{L}} & \int_s^t f_a(y_u) dx_u^a = \sum_{a\in\mathcal{L}} \sum_{\ol b \in \mathcal{I} \mathcal{L}_1} \frac{f_{a;\ol b}(y_s)}{|\ol b|!} \int_s^t \left( \prod_{i=1}^{|\ol b|} \delta y^{b_i}_{us}\right) dx_u^a \\ & = \sum_{a\in\mathcal{L}} \sum_{\ol b \in \mathcal{I} \mathcal{L}_1} \frac{f_{a;\ol b}(y_s)}{|\ol b|!} \int_s^t \prod_{i=1}^{|\ol b|} \left[ \sum_{\tau \in \mathcal{T}_\mathcal{L}} \frac{1}{\sigma(\tau)} [\phi^{f}(\tau)(y_s)]^{b_i} X^{\tau}_{us} \right] dx_u^a \\ & = \sum_{a\in\mathcal{L}} \sum_{\ol b \in \mathcal{I}\mathcal{L}_1} \frac{f_{a;\ol b}(y_s)}{|\ol b|!} \sum_{\tau^1,\cdots,\tau^{|\ol b|}} \frac{1}{\sigma(\tau^1)\cdots \sigma(\tau^{|\ol b|})}\left(\prod_{i=1}^{|\ol b|}[ \phi^{f}(\tau^i)(y_s)]^{b_i}\right) \int_s^t \prod_{i=1}^{|\ol b|} X^{\tau^i}_{us} dx_u^a \\ & = \sum_{a\in\mathcal{L}} \sum_{k=0}^\infty \sum_{\tau^1,\cdots,\tau^{k}} \frac{1}{k!\sigma(\tau^1)\cdots \sigma(\tau^k)}\sum_{\ol b \in \mathcal{I}\mathcal{L}_1 : |\ol b| = k} f_{a;\ol b}(y_s) \left(\prod_{i=1}^{k}[\phi^{f}(\tau^i)(y_s)]^{b_i}\right) \int_s^t \prod_{i=1}^{k} X^{\tau^i}_{us} dx_u^a \\ & 
=\sum_{a\in\mathcal{L}} \sum_{k=0}^{\infty} \sum_{\tau^1,\cdots,\tau^{k}} \frac{1}{\sigma([\tau^1\cdots\tau^{k}]_a) \delta(\tau^1,\cdots,\tau^{k})} \phi^f([\tau^1\cdots\tau^{k}]_a)(y_s) X_{ts}^{[\tau^1\cdots\tau^{k}]_a} \\ & = \sum_{\tau \in \mathcal{T}_\mathcal{L}} \frac{1}{\sigma(\tau)} \phi^{f}(\tau)(y_s) X^\tau_{ts} \end{split} \end{equation*} which proves the claim. Note the factor $\delta(\tau^1,\dots,\tau^k)$, which accounts for the multiplicity of ordered tuples of trees corresponding to the same tree and disappears in the last line. To prove the absolute convergence of the series we need bounds on $X^\tau$ and $\phi^f(\tau)$. For $X^\tau$ we have: $$ |X^{\tau}_{ts}| \le \frac{[A|t-s|]^{|\tau|}}{\tau!} $$ where $A = \sup_{t\in[0,T]} |\dot x_t|$. This bound can be easily proved by induction on $\tau$. Since the $f_a$ are analytic functions, from the Cauchy inequalities we obtain $$ |f_{a,\ol b}(y_s)| \le \theta(\ol b) M (R-r_s)^{-|\ol b|} \le |\ol b|! M (R-r_s)^{-|\ol b|} \le g^{(|\ol b|)}(r_s), $$ see e.g.~\cite[p.~47]{MR1227985}, where $r_s = |y_s-\eta|$ and $M$ is a constant depending only on $\{f_a\}_{a\in \mathcal{L}}$, and where we introduced the function $g(r) = M R (R-r)^{-1}$ and its derivatives $g^{(k)}(r) = MR k! (R-r)^{-k-1}$. Define ``elementary differentials'' $\psi:\mathcal{T} \times [0,R) \to \mathbb{R}$ for $g$ as $$ \psi(\bullet)(r) = g(r), \qquad \psi([\tau_1 \cdots \tau_k])(r) = g^{(k)}(r)\, \psi(\tau_1)(r) \cdots \psi(\tau_k)(r) $$ Then we have the bounds $|\phi^f(\tau)(y_s)| \le \psi(\tau)(r_s)$ for any $\tau \in \mathcal{T}_\mathcal{L}$ and the series~(\ref{eq:tree-rep}) can be bounded by $$ \sum_{\tau \in \mathcal{T}_\mathcal{L}} \frac{1}{\sigma(\tau)} \psi(\tau)(r_s) A^{|\tau|} \frac{|t-s|^{|\tau|}}{\tau!} $$ and by taking into account the multiplicity $d^{|\tau|}$ of labeled trees corresponding to the same tree $\tau$ we get $$ \sum_{\tau \in \mathcal{T}} \frac{1}{\sigma(\tau)} \psi(\tau)(r_s) (dA)^{|\tau|} \frac{|t-s|^{|\tau|}}{\tau!} $$ This series is exactly the B-series~(\ref{eq:B-series}) for the solution $r_t$ of the differential equation \begin{equation} \label{eq:g-eq} \frac{dr_t}{dt} = dA g(r_t) = d A M R (R-r_t)^{-1}, \qquad r_0 = 0 \end{equation} when written starting from $r_s$ at time $s< t$. Then $$ r_t = r_s+ \sum_{\tau \in \mathcal{T}} \frac{1}{\sigma(\tau)} \psi(\tau)(r_s) (dA)^{|\tau|} \frac{|t-s|^{|\tau|}}{\tau!} $$ as long as the solution $r_t$ exists and has a power series expansion in $t-s$. But the explicit solution of eq.~(\ref{eq:g-eq}) is given by $r_t = R(1-\sqrt{1-t/t_*})$ with $t_* = R/(2dAM)$ and it has a power series expansion for any $t < t_*$. So the original series is summable at least for any $t,s \in [0,t_*)$. 
\end{proof} In the rest of this section we will denote $y^\tau_s = \phi^f(\tau)(y_s)/\sigma(\tau)$, so that $ \delta y_{ts} = \sum_{\tau \in \mathcal{T}_\mathcal{L}} X^\tau_{ts} y^\tau_s $; moreover we will use the convention $X^{\emptyset}_{ts} = 1$ and $y^\emptyset_s = y_s$ to write $$ y_t = \sum_{\tau \in \mathcal{T}_\mathcal{L} \cup \{ \emptyset \}} X^{\tau}_{ts} y^{\tau}_s $$ The recursion for $y^\tau$ reads \begin{equation} \label{eq:ytau-rec} y^{\bullet_a}_s = f_a(y_s), \qquad y^{[\tau^1 \cdots \tau^k]_a}_s = \frac{\sigma(\tau^1)\cdots \sigma(\tau^k)}{\sigma(\tau)} \sum_{\ol b: |\ol b|=k} f_{a, \ol b}(y_s) y_s^{\tau^1,b_1} \cdots y_s^{\tau^k,b_k} \end{equation} We have the following theorem, which shows that each of the paths $y^\tau$ can be expanded in a series w.r.t. $X$ with coefficients which depend on the combinatorics of the reduced coproduct: \begin{theorem} \label{th:der-eqns-1} For any $\tau \in \mathcal{T}_\mathcal{L}\cup \{ \emptyset \}$ we have \begin{equation} \label{eq:ytau-der} \delta y^\tau_{ts} = \sum_{\sigma \in \mathcal{T}_\mathcal{L}, \rho \in \mathcal{F}_\mathcal{L}} c'(\sigma,\tau,\rho) X^{\rho}_{ts} y^{\sigma}_s \end{equation} where $c'$ is the counting function for the reduced coproduct: $ \Delta' \sigma = \sum_{\tau,\rho} c'(\sigma,\tau,\rho) \tau \otimes \rho $. \end{theorem} \begin{proof} The proof is by induction on $\tau$. The case $\tau = \bullet_a$ requires only a Taylor expansion: \begin{equation} \label{eq:ytau-step-0} \begin{split} \delta y^{\bullet_a}_{ts} & = \delta f_a(y)_{ts} = \sum_{\ol b} \frac{f_{a; \ol b}(y_s)}{|\ol b|!} \left( \delta y_{ts}\right)^{\ol b} \\ & = \sum_{k \ge 1} \sum_{\tau^1,\dots,\tau^{k}} \sum_{\ol b: |\ol b|=k} \frac{f_{a; \ol b}(y_s)}{k!} y^{\tau^1,b_1}_s \cdots y^{\tau^k,b_k}_s X^{\tau^1 \cdots \tau^k}_{ts} \\ & = \sum_{k \ge 1} \sum_{\tau^1,\dots,\tau^{k}} \frac{\sigma([\tau^1 \cdots \tau^k]_a)}{k! 
\sigma(\tau^1)\cdots\sigma(\tau^k)} y^{[\tau^1 \cdots \tau^k]_a}_s X^{\tau^1 \cdots \tau^k}_{ts} \\ & = \sum_{k \ge 1} \sum_{\tau^1,\dots,\tau^{k}} \frac{1}{\delta(\tau^1,\dots,\tau^k)} y^{[\tau^1 \cdots \tau^k]_a}_s X^{\tau^1 \cdots \tau^k}_{ts} \\ & = \sum_{\tau,\rho} c'(\tau,\bullet_a,\rho) y^{\tau}_s X^{\rho}_{ts} \end{split} \end{equation} since $c'(\tau,\bullet_a,\rho) $ is different from zero, and takes the value one, if and only if $\tau = [\rho]_a$. Now, assume eq.~(\ref{eq:ytau-der}) holds for all $\tau \in \mathcal{T}^n_{\mathcal{L}}$ and let us prove that it holds for trees $\tau$ with $|\tau|=n+1$. So take $\tau= [\tau^1 \dots \tau^k]_a$ with $|\tau|=n+1$; then $|\tau^i|\le n$ for any $i=1,\dots,k$. To compute the action of the map $\delta$ on $y^\tau$ we use the recursive relation~(\ref{eq:ytau-rec}): \begin{equation} \label{eq:ytau-step-2-der} \begin{split} \delta y^{[\tau^1 \cdots \tau^k]_a}_{ts} & = \frac{\sigma(\tau^1)\cdots \sigma(\tau^k)}{\sigma(\tau)} \sum_{\ol b: |\ol b|=k} \delta[ f_{a, \ol b}(y) y^{\tau^1,b_1} \cdots y^{\tau^k,b_k}]_{ts} \end{split} \end{equation} and the Leibniz formula $$ \delta (g^1 \cdots g^k)_{ts} = (g^1_s + \delta g^1_{ts}) \cdots (g^k_s + \delta g^k_{ts}) - g^1_s \cdots g^k_s = \sum_{G} G^1_{ts} \cdots G^k_{ts} $$ where the sum is over all possible choices of the $G$-s such that $G^i_{ts}= g^i_s$ or $G^i_{ts} = \delta g^i_{ts}$, excluding the case where all the $G$-s are $g$-s (that is, there should be at least one factor of the form $\delta g^i$). 
By Taylor expansion $$ \delta f_{a, \ol b}(y)_{ts} = \sum_{m\ge 1} \sum_{\ol c: |\ol c|=m} \frac{f_{a,\ol b \ol c}(y_s)}{m!} \sum_{\eta^1,\dots,\eta^m} y^{\eta^1,c_1}_s\cdots y^{\eta^m,c_m}_s X_{ts}^{\eta^1\cdots \eta^m} $$ while using the induction hypothesis we have $$ \delta y^{\tau^i} = \sum_{\rho^i,\zeta^i} c'(\zeta^i,\tau^i,\rho^i) X^{\rho^i} y^{\zeta^i} = \sum_{\zeta^i} X^{\zeta^i_{(2)}} y^{\zeta^i} \delta_{\tau^i,\zeta^i_{(1)}} $$ where there is an implicit sum over the terms $\zeta^i_{(1)},\zeta^i_{(2)}$ in the reduced coproduct of $\zeta^i$ and where $\delta_{\tau^i,\zeta^i_{(1)}}$ denotes the Kronecker delta function. Then we rewrite eq.~(\ref{eq:ytau-step-2-der}) as \begin{equation} \label{eq:ytau-step-2-2} \begin{split} \delta y^{[\tau^1 \cdots \tau^k]_a}_{ts} & = \frac{\sigma(\tau^1)\cdots \sigma(\tau^k)}{\sigma(\tau)} \\ & \times \sum_{m\ge 0} \frac{1}{m!} \sum_{\zeta^1,\dots,\zeta^k} \sum_{\eta^1,\dots,\eta^m} \sum_{\ol c: |\ol c|=m+k} f_{a,\ol c}(y_s) y^{\eta^1,c_1}_s\cdots y^{\eta^m,c_m}_s y^{\zeta^1,c_{m+1}}_s\cdots y^{\zeta^k,c_{m+k}}_s \\ & \times X_{ts}^{\eta^1\cdots \eta^m\cdots \zeta^1_{(2)}\cdots \zeta^k_{(2)}} \delta_{\tau^1,\zeta^1_{(1)}} \cdots \delta_{\tau^k,\zeta^k_{(1)}} \end{split} \end{equation} The summation in this formula has to be understood as follows: the sum over $\zeta^i$ is performed on all the trees which contain $\tau^i$, in the sense that $c'(\zeta^i,\tau^i,\rho^i)$ is different from zero for some $\rho^i$, and on the tree $\zeta^i=\tau^i$, in which case we understand that $\zeta^i_{(1)}=\tau^i$ and $\zeta^i_{(2)} = \emptyset$ (the empty forest). Note that this case is not contained in the reduced coproduct but is generated by Leibniz's formula. Moreover we implicitly exclude from the summation above the case when $m=0$ and all the $\zeta^i$ are equal to the corresponding $\tau^i$. 
Then with this proviso we can simplify the above formula as \begin{equation} \label{eq:ytau-step-2-3} \begin{split} \delta y^{[\tau^1 \cdots \tau^k]_a}_{ts} & = \frac{\sigma(\tau^1)\cdots \sigma(\tau^k)}{\sigma(\tau)} \\ & \times \sum_{m\ge 0} \frac{1}{m!}\sum_{\zeta^1,\dots,\zeta^k} \sum_{\eta^1,\dots,\eta^m} \frac{\sigma(\zeta)}{\sigma(\zeta^1)\cdots\sigma(\zeta^k) \sigma(\eta^1)\cdots\sigma(\eta^m)} X^{\eta^1\cdots \eta^m \zeta^1_{(2)}\cdots \zeta^k_{(2)}} y^{\zeta} \delta_{\tau^1,\zeta^1_{(1)}} \cdots \delta_{\tau^k,\zeta^k_{(1)}} \end{split} \end{equation} where $\zeta = [\zeta^1\cdots\zeta^k \eta^1\cdots \eta^m]_a$. Now, recalling eq.~(\ref{eq:sigma-prop}), write \begin{equation} \label{eq:ytau-step-2-4} \begin{split} \delta y^{[\tau^1 \cdots \tau^k]_a}_{ts} & = \sum_{m\ge 0} \sum_{\zeta^1,\dots,\zeta^k} \sum_{\eta^1,\dots,\eta^m} \frac{(k+m)!}{k!m!} \frac{\delta(\tau^1,\dots,\tau^k)}{\delta(\zeta^1,\dots,\zeta^k,\eta^1,\dots,\eta^m)} X^{\eta^1\cdots \eta^m \zeta^1_{(2)}\cdots \zeta^k_{(2)}} y^{\zeta} \delta_{\tau^1,\zeta^1_{(1)}} \cdots \delta_{\tau^k,\zeta^k_{(1)}} . \end{split} \end{equation} Introduce a new function $\tilde c: \mathcal{T}_\mathcal{L} \times \mathcal{T}_\mathcal{L} \times \mathcal{F}_\mathcal{L} \to \mathbb{N}$ such that $$ \tilde c(\kappa_1,\kappa_2,\kappa_3) = \begin{cases} c'(\kappa_1,\kappa_2,\kappa_3) & \text{for $\kappa_3 \neq \emptyset$}\\ \delta_{\kappa_1,\kappa_2} & \text{for $\kappa_3 = \emptyset$} \end{cases} $$ which counts the number of ways to cut away a forest $\kappa_3$ from the tree $\kappa_1$ leaving the tree $\kappa_2$, where we allow the empty cut which leaves the tree intact. 
Using $\tilde c$ we rewrite the last equation as \begin{equation} \label{eq:ytau-step-2-4b} \begin{split} \delta y^{[\tau^1 \cdots \tau^k]_a}_{ts} & = \sum_{m \ge 0} \sum_{\zeta^1,\dots,\zeta^{k+m}} \sum_{\theta^1,\dots,\theta^k} \frac{(k+m)!}{k!m!} \frac{\delta(\tau^1,\dots,\tau^k)}{\delta(\zeta^1,\dots,\zeta^{k+m})} \tilde c(\zeta^1,\tau^1,\theta^1) \cdots \tilde c(\zeta^k,\tau^k,\theta^k) \\ & \qquad \times y^{\zeta} X^{\zeta^{k+1} \cdots \zeta^{k+m} \theta^1 \cdots \theta^k} \end{split} \end{equation} where now $\zeta = [\zeta^1\cdots \zeta^{k+m}]_a$ and $\zeta^1,\dots,\zeta^{k+m} \in \mathcal{T}_\mathcal{L}$ are non-empty trees and $\theta^1,\dots,\theta^k \in \mathcal{F}_\mathcal{L}$ are possibly empty forests, but we exclude the case when $m=0$ and all the $\theta^i$ are empty. Now we will show that this expression corresponds exactly to \begin{equation} \label{eq:ytau-step-2-5} \begin{split} \delta y^{[\tau^1 \cdots \tau^k]_a}_{ts} & = \sum_{m \ge 0} \sum_{\zeta \in \mathcal{T}_\mathcal{L} : \zeta = [\zeta^1\cdots \zeta^{k+m}]_a} \sum_{\theta \in \mathcal{F}_\mathcal{L}} c'(\zeta,[\tau^1 \cdots \tau^k]_a,\theta) X^{\theta} y^\zeta \end{split} \end{equation} which is what we want to prove. Note that the restriction in the sum over trees $\zeta$ of the form $[\zeta^1\cdots \zeta^{k+m}]_a$ for some $m \ge 0$ is due to the fact that for trees with less than $k$ branches at the root the factor $c'(\zeta,\tau,\theta)$ is zero. Each forest $\zeta^1\cdots \zeta^{k+m}$ appears $\delta(\zeta^1,\dots,\zeta^{k+m})$ times in the summation; moreover, given the tree $\zeta = [\zeta^1\cdots \zeta^{k+m}]_a$ there are $(k+m)!/(k!m!)$ ways to choose $m$ branches of the root to cut away. Let us say that these cuts are on the last $m$ branches $\zeta^{k+1},\dots,\zeta^{k+m}$. 
Then the rest of the cuts appear on the first $k$ branches, and for a fixed set $\zeta^1,\dots,\zeta^k$ of trees to cut there are $\delta(\tau^1,\dots,\tau^k)$ possible ways of associating each $\tau$ to some $\zeta$ to determine the associated cuts (if they are possible at all). Once the pairing between the $\zeta$-s and the $\tau$-s is chosen, there are $\prod_{i=1}^k \sum_{\theta^i \in \mathcal{F}_\mathcal{L}} \tilde c(\zeta^i,\tau^i,\theta^i)$ possible cuts (note that once $\zeta^i$ and $\tau^i$ are chosen the forest $\theta^i$ is uniquely determined). Moreover since either $m> 0$ or some $\theta^i \neq \emptyset$ there is at least one proper (i.e. neither empty nor full) cut in eq.~(\ref{eq:ytau-step-2-4b}). This concludes the proof. \end{proof} \section{Integration of finite increments} \label{sec:one-dim} We recall in some detail the integration theory introduced in~\cite{MR2091358}, since this setting is quite different from the original rough path theory developed in~\cite{MR2036784,MR1654527}. Notice that our future discussions will mainly rely on $k$-increments with $k \le 3$. We measure the size of these increments by H\"older-like norms: for $f \in \mathcal{C}_2(V)$ let $$ \norm{f}_{\mu} = \sup_{s,t\in[0,T]}\frac{|f_{ts}|}{|t-s|^\mu}, \quad\mbox{and}\quad \mathcal{C}_2^\mu(V)=\left\{ f \in \mathcal{C}_2(V);\, \norm{f}_{\mu}<\infty \right\}. $$ In the same way, for $h \in \mathcal{C}_3(V)$, set \begin{eqnarray} \label{eq:normOCC2} \norm{h}_{\gamma,\rho} &=& \sup_{s,u,t\in[0,T]} \frac{|h_{tus}|}{|u-s|^\gamma |t-u|^\rho}\\ \|h\|_\mu &= & \inf\left \{\sum_i \|h_i\|_{\rho_i,\mu-\rho_i} ;\, h = \sum_i h_i,\, 0 < \rho_i < \mu \right\} ,\nonumber \end{eqnarray} where the last infimum is taken over all sequences $\{h_i \in \mathcal{C}_3(V) \}$ such that $h = \sum_i h_i$ and over all choices of the numbers $\rho_i \in (0,\mu)$. We set $$ \mathcal{C}_3^\mu(V)=\left\{ h\in\mathcal{C}_3(V);\, \|h\|_\mu<\infty \right\}. 
$$ Finally, let $\mathcal{C}_3^{1+}(V) = \cup_{\mu > 1} \mathcal{C}_3^\mu(V)$, and remark that the same kind of norms can be considered on the spaces $\mathcal{Z} \mathcal{C}_3(V)$, leading to the definition of the spaces $\mathcal{Z} \mathcal{C}_3^\mu(V)$ and $\mathcal{Z} \mathcal{C}_3^{1+}(V)$. \vspace{0.3cm} With these notations in mind, the following proposition is a basic result which is at the core of our approach to path-wise integration: \begin{proposition}[The $\Lambda$-map] \label{prop:Lambda} There exists a unique linear map $\Lambda: \mathcal{Z} \mathcal{C}^{1+}_3(V) \to \mathcal{C}_2^{1+}(V)$ such that $$ \delta \Lambda = \mbox{Id}_{\mathcal{Z} \mathcal{C}_3^{1+}(V)}. $$ Furthermore, for any $\mu > 1$, this map is continuous from $\mathcal{Z} \mathcal{C}^{\mu}_3(V)$ to $\mathcal{C}_2^{\mu}(V)$ and we have \begin{equation}\label{ineqla} \|\Lambda h\|_{\mu} \le \frac{1}{2^\mu-2} \|h\|_{\mu} ,\qquad h \in \mathcal{Z} \mathcal{C}^{\mu}_3(V). \end{equation} \end{proposition} \vspace{0.3cm} We can now give an algorithm for a canonical decomposition of the preimage of $\mathcal{Z} \mathcal{C}_3^{1+}(V)$, or in other words, of a function $g\in\mathcal{C}_2 (V)$ whose increment $\delta g$ is small enough: \begin{corollary} \label{cor:integration} Take an element $g\in\mathcal{C}_2 (V)$ such that $\delta g\in\mathcal{C}_3^\mu(V)$ for some $\mu>1$. Then $g$ can be decomposed in a unique way as $ g=\delta f+ \Lambda \delta g, $ where $f\in\mathcal{C}_1(V)$. For any 2-increment $g\in\mathcal{C}_2 (V)$, such that $\delta g\in\mathcal{C}_3^{1+}(V)$, set $ \delta f = (\mbox{Id}-\Lambda \delta) g $. Then $$ (\delta f)_{ts} = \lim_{|\Pi_{ts}| \to 0} \sum_{i=0}^{n-1} g_{t_{i+1}\, t_i}, $$ where the limit is taken over partitions $\Pi_{ts} = \{s = t_0 < t_1 < \cdots < t_n = t\}$ of $[s,t]$ whose mesh tends to zero. \end{corollary} \begin{proof} See~\cite{MR2091358}. 
\end{proof} \section{Branched rough paths} \label{sec:brp} Up to this point we have considered only properties of the iterated integrals of smooth functions $\{x^a\}_{a\in \mathcal{L}}$; from the algebraic point of view, however, the only data we need to build the family $\{X^\tau\}_{\tau \in \mathcal{T}_\mathcal{L}}$ is a family of maps $\{I^a\}_{a \in \mathcal{L}}$ from $\mathcal{C}_2$ to $\mathcal{C}_2$ satisfying certain properties. \begin{definition} \label{def:integral} We call \emph{integral} a linear map $I: \mathcal{D}_I \to \mathcal{D}_I$ on a sub-algebra $\mathcal{D}_I \subset \mathcal{C}^+_2$ satisfying the two properties: $$ I(hf) = I(h) f, \qquad \forall h \in \mathcal{D}_I, f \in \mathcal{C}_1 $$ and $$ \delta I(h) = I(e)h + \sum_i I(h^{1,i}) h^{2,i} \qquad \text{when $h\in \mathcal{D}_I$, $ \delta h = \sum_i h^{1,i} h^{2,i}$ and $h^{1,i} \in \mathcal{D}_I$.} $$ We explicitly require that $e \in \mathcal{D}_I$. \end{definition} Using the embedding $f \in \mathcal{C}_1 \mapsto fe \in \mathcal{C}_2$ we can extend the map $I$ to $\mathcal{C}_1$: for any $f \in \mathcal{C}_1$ we let $I(f) = I(fe)$ and since $fe = ef + \delta f$ (as is easily verified) we have $$ I(f) = I(e)f + I(\delta f) $$ for any $f \in \mathcal{C}_1$ such that $\delta f \in \mathcal{D}_I$. Given a family $\{I^a\}_{a\in\mathcal{L}}$ of such integral maps on a common algebra $\mathcal{D}\subseteq \mathcal{C}_{2}$ we can associate to them a family $\{X^{\tau}\}_{\tau\in\mathcal{F}_\mathcal{L}}$ recursively as done in Sect.~\ref{sec:itin} above: $$ X^{\bullet_a} = I^a(e), \qquad X^{[\tau^1 \cdots \tau^k]_a} = I^a(X^{\tau^1 \cdots \tau^k}), \qquad X^{\tau^1\cdots \tau^k}= X^{\tau^1}\circ \cdots \circ X^{\tau^k}. $$ In this way we establish an algebra homomorphism from $\mathcal{A}\mathcal{T}_\mathcal{L}$ to the subalgebra of $\mathcal{C}_2$ generated by the $X^\tau$-s. This homomorphism sends the operation $B_+^a$ on $\mathcal{A}\mathcal{T}_\mathcal{L}$ to the integral map $I^a$ on $\mathcal{C}_2$.
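In the smooth case, where $I^a$ is ordinary integration against a differentiable path $x^a$, the second property in Definition~\ref{def:integral} specialises at the first nontrivial level to the identity $(\delta X^{[\bullet_b]_a})_{tus} = X^{\bullet_a}_{tu}\, X^{\bullet_b}_{us}$, with the convention $(fg)_{tus}=f_{tu}g_{us}$ for a product of $2$-increments. The following minimal numerical sketch checks this identity; the concrete paths and the quadrature routine are our own purely illustrative choices.

```python
import math

def quad(f, s, t, m=2000):
    """Composite trapezoidal rule for f on [s, t]."""
    h = (t - s) / m
    return h * (0.5 * (f(s) + f(t)) + sum(f(s + i * h) for i in range(1, m)))

# Hypothetical smooth driving signals x^a, x^b.
xa, dxa = math.sin, math.cos          # x^a and its derivative
xb = lambda t: t * t                  # x^b

def X_a(t, s):                        # X^{bullet_a}_{ts} = x^a_t - x^a_s
    return xa(t) - xa(s)

def X_b(t, s):                        # X^{bullet_b}_{ts} = x^b_t - x^b_s
    return xb(t) - xb(s)

def X_ba(t, s):                       # X^{[bullet_b]_a}_{ts} = int_s^t (x^b_u - x^b_s) dx^a_u
    return quad(lambda u: (xb(u) - xb(s)) * dxa(u), s, t)

s, u, t = 0.2, 0.5, 0.9
lhs = X_ba(t, s) - X_ba(t, u) - X_ba(u, s)   # (delta X^{[bullet_b]_a})_{tus}
rhs = X_a(t, u) * X_b(u, s)                  # the term produced by Delta'
assert abs(lhs - rhs) < 1e-6
```

The check succeeds for any smooth choice of $x^a$, $x^b$; only in the genuinely rough case does the existence of the maps $I^a$ become a nontrivial postulate.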
It is not difficult to verify that Theorem~\ref{th:equiv} extends to the map $X$ generated by the family $\{I^a\}_a$. Let us now introduce a regularity condition on the map $X$. Given $\gamma \in (0,1]$ define the function $q_\gamma$ on forests as $q_\gamma(\tau) = 1$ for $|\tau| \le 1/\gamma$ and \begin{equation} \label{eq:q-def} q_{\gamma}(\tau) = \frac{1}{2^{\gamma |\tau|}-2} \sum' q_\gamma(\tau^{(1)}) q_\gamma(\tau^{(2)}) \end{equation} whenever $\tau \in \mathcal{T}$ with $|\tau| > 1/\gamma$, and $q_\gamma(\tau_1 \cdots \tau_n) = q_\gamma(\tau_1) \cdots q_\gamma(\tau_n)$ for $\tau_1,\dots,\tau_n \in \mathcal{T}$. Note that for $|\tau| > 1/\gamma$ the function $q_{\gamma}$ satisfies also the equation $$ q_{\gamma}(\tau) = \frac{1}{2^{\gamma |\tau|}} \sum q_\gamma(\tau^{(1)}) q_\gamma(\tau^{(2)}) $$ which involves the splitting given by the coproduct $\Delta$, while the definition~(\ref{eq:q-def}) involves the splitting of trees given by the reduced coproduct $\Delta'$. \begin{definition} We call a homomorphism $X :\mathcal{A} \mathcal{T} \to \mathcal{C}_2$ a \emph{branched rough path} (BRP) of roughness $\gamma> 0$ if it satisfies eq.~(\ref{eq:alg-relations}) and moreover is such that \begin{equation} \label{eq:brp-small} \| X^\tau\|_{\gamma |\tau|} \le B A^{|\tau|} q_\gamma(\tau), \qquad \tau \in \mathcal{F}_\mathcal L \end{equation} for some constants $B \in [0,1]$ and $A \ge 0$. \end{definition} Under certain conditions we can extend a homomorphism $X: \mathcal{A}_n \mathcal{T} \to \mathcal{C}_2$ defined only on the sub-algebra of trees with degree less than or equal to $n$ to the whole algebra.
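The recursion~(\ref{eq:q-def}) is easily evaluated on the ladder (linear Chen) trees $\ell_n$, for which the reduced coproduct splits $\ell_n$ as $\sum_{k=1}^{n-1}\ell_k\otimes\ell_{n-k}$ and the tree factorial is $\ell_n! = n!$. The sketch below (the value of $\gamma$ is an arbitrary illustrative choice) computes $q_\gamma(\ell_n)$ and verifies that the resulting numbers also satisfy the full-coproduct identity displayed above.

```python
import math
from functools import lru_cache

GAMMA = 0.4  # hypothetical roughness parameter; here 1/GAMMA = 2.5

@lru_cache(maxsize=None)
def q(n):
    """q_gamma(ell_n) for the ladder (linear Chen) tree with n nodes,
    using the split Delta' ell_n = sum_{k=1}^{n-1} ell_k (x) ell_{n-k}."""
    if n * GAMMA <= 1:
        return 1.0
    return sum(q(k) * q(n - k) for k in range(1, n)) / (2 ** (GAMMA * n) - 2)

# For n > 1/GAMMA the same numbers satisfy the full-coproduct identity
# q(n) = 2^{-gamma n} sum_{k=0}^{n} q(k) q(n-k)   (with q(0) = 1).
for n in range(3, 12):
    full = sum(q(k) * q(n - k) for k in range(0, n + 1)) / 2 ** (GAMMA * n)
    assert abs(full - q(n)) < 1e-9 * max(1.0, q(n))

print([round(q(n), 4) for n in range(1, 8)])
```

Whether $q_\gamma(\ell_n)\,(n!)^\gamma$ in fact stays comparable to a constant, as conjectured below for general trees, can be explored by printing this ratio for larger $n$.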
\begin{theorem} \label{th:ext} Suppose we are given a partial homomorphism $X: \mathcal{A}_n \mathcal{T}_\mathcal L \to \mathcal{C}_2$ satisfying eq.~(\ref{eq:alg-relations}) and such that there exist constants $\gamma > 0$, $A\ge 0$ and $B\in[0,1]$ for which \begin{equation} \label{eq:brp-small-2} \| X^\tau\|_{\gamma |\tau|} \le B A^{|\tau|} q_\gamma(\tau), \qquad \tau \in \mathcal{T}^n_\mathcal L \end{equation} with $\gamma (n+1) > 1$. Then there exists a unique extension of $X$ to a branched rough path defined on the whole $\mathcal{A} \mathcal{T}$ with roughness $\gamma$ and such that eq.~(\ref{eq:brp-small-2}) holds for any $\tau \in \mathcal{T}_\mathcal{L}$. \end{theorem} \begin{proof} We proceed by induction and assume that we have already found an extension $X:\mathcal{A}_m \mathcal{T}_{\mathcal L} \to \mathcal{C}_2$, $m \ge n$, satisfying eq.~(\ref{eq:alg-relations}) and for which we have \begin{equation} \label{eq:brp-small-3} \| X^\tau\|_{\gamma |\tau|} \le B q_\gamma(\tau) A^{|\tau|}, \qquad \tau \in \mathcal{T}^m_\mathcal L. \end{equation} This is true if $m=n$. Let us prove that we can extend $X$ to the set of trees of degree $m+1$ with the same bound on the H\"older norms. Since $\gamma (m+1) \ge \gamma (n+1) > 1$ we can set $ X^\tau := \Lambda\left[ X^{\Delta' \tau}\right] $ for every $\tau$ such that $|\tau|=m+1$. Indeed $$ \|X^{\Delta' \tau}\|_{(m+1)\gamma} \le \sum_i \|X^{\tau^{(1)}_i \otimes \tau^{(2)}_i }\|_{(m+1)\gamma} \le \sum_i' \|X^{\tau^{(1)}_i}\|_{|\tau^{(1)}_i|\gamma} \|X^{ \tau^{(2)}_i }\|_{|\tau^{(2)}_i|\gamma} $$ since $|\tau^{(1)}_i|+|\tau^{(2)}_i|=m+1$ for every $i$. This shows that $X^{\Delta' \tau} \in \mathcal{C}_3^{(m+1)\gamma}$ and so it is in the domain of $\Lambda$.
To prove the bound on $X^\tau$ recall that \begin{equation*} \begin{split} \|X^\tau\|_{\gamma|\tau|} & = \|\Lambda X^{\Delta' \tau}\|_{\gamma|\tau|} \le \frac{1}{2^{|\tau|\gamma}-2} \sum_i' \|X^{\tau^{(1)}_i}\|_{|\tau^{(1)}_i|\gamma} \|X^{ \tau^{(2)}_i }\|_{|\tau^{(2)}_i|\gamma} \\ & \le B^2 \frac{1}{2^{|\tau|\gamma}-2} \sum_i' A^{|\tau^{(1)}_i|+|\tau^{(2)}_i|} q_\gamma(\tau^{(1)}_i) q_\gamma(\tau^{(2)}_i) \\ & \le B^2 A^{|\tau|} q_\gamma(\tau) \end{split} \end{equation*} and since $B \le 1$ we have the required bound. \end{proof} \begin{remark} \label{rem:q-asymp} While we have not been able to prove any asymptotic behavior for $q_\gamma(\tau)$ as $|\tau| \to \infty$, we conjecture that \begin{equation} \label{eq:conject} q_\gamma(\tau) \asymp C (\tau!)^{-\gamma} \end{equation} for some constant $C$. For the class of linear Chen trees $\mathcal{T}^{\text{Chen}}$ this conjecture is true thanks to the inequality \begin{equation} \label{eq:neocx1} \sum_{k=0}^n \frac{a^{\gamma k} b^{\gamma (n-k)}}{(k!)^\gamma ((n-k)!)^\gamma} \le c_\gamma \frac{(a+b)^{\gamma n}}{(n!)^\gamma} \end{equation} valid for any $\gamma \in (0,1]$ and $a,b \ge 0$, where the constant $c_\gamma$ depends only on $\gamma$. We prove this inequality in App.~\ref{app:neoc}. Note that this inequality is a variant of Lyons' neo-classical inequality (see e.g.~\cite{MR1654527}) which in our notation reads \begin{equation} \label{eq:neocxx1} \sum_{k=0}^n \frac{a^{\gamma k} b^{\gamma (n-k)}}{(\gamma k)! [\gamma (n-k)]!} \le c_\gamma \frac{(a+b)^{\gamma n}}{(\gamma n)!} . \end{equation} A sufficient condition for the validity of the conjecture would be the existence of a ``neo-classical tree inequality'' of the form \begin{equation} \label{eq:neocx1-tree} \sum \frac{a^{\gamma |\tau^{(1)}|} b^{\gamma |\tau^{(2)}|}}{(\tau^{(1)}!)^\gamma (\tau^{(2)}!)^\gamma} \le c_\gamma \frac{(a+b)^{\gamma |\tau|}}{(\tau !)^\gamma} \end{equation} for any $\tau \in \mathcal{T}$.
The inequality is true when $\gamma =1$ by the tree binomial formula given in Lemma~\ref{lemma:tree-binomial}. The asymptotic behavior~(\ref{eq:conject}) appears also in the estimation of tree-indexed iterated integrals in the context of the 3d Navier--Stokes equation studied in~\cite{MR2227041} (see also Sect.~\ref{sec:inf-dim}). \end{remark} We denote by $\Omega^\gamma_{\mathcal{T},\mathcal{L}}$ the space of $\gamma$-BRPs. On this space we can introduce a distance by letting $$ d_\gamma(X,Y) = \sum_{\tau \in \mathcal{F}^n_\mathcal L} \|X^\tau-Y^\tau\|_{\gamma|\tau|} $$ where $n$ is again the largest integer such that $n\gamma \le 1$. This distance is strong enough to separate points in $\Omega^\gamma_{\mathcal{T},\mathcal{L}}$: \begin{corollary} If $X,Y \in \Omega^\gamma_{\mathcal{T},\mathcal{L}}$ and $d_\gamma(X,Y) = 0$ then $X=Y$. \end{corollary} \begin{proof} If we let $ Z^\tau = X^\tau-Y^\tau $ for $\tau \in \mathcal{T}^n_\mathcal{L}$ then the partial homomorphism $Z$ is such that $Z^\tau = 0$ and satisfies eq.~(\ref{eq:alg-relations}) for all $\tau \in \mathcal{T}^n_\mathcal{L}$. Then we can choose $B=0$ and an arbitrary $A$ in the bounds~(\ref{eq:brp-small-2}) and use Thm.~\ref{th:ext} to conclude that we must have $Z^\tau = 0$ for any $\tau \in \mathcal{T}_\mathcal{L}$, i.e. that $X=Y$. \end{proof} \begin{definition} An~\emph{almost branched rough path} (aBRP) is a partial homomorphism $\widetilde X: \mathcal{A}_n \mathcal{T} \to \mathcal{C}_2$ which satisfies eq.~(\ref{eq:alg-relations}) modulo an element of $\mathcal{C}^{1+}_3$ for any tree $\tau \in \mathcal{T}_\mathcal L^n$ and for which we have \begin{equation} \label{eq:abrp-small} \max_{\tau \in\mathcal{T}_\mathcal L^n} \|\widetilde X^\tau\|_{\gamma |\tau|} \le K \end{equation} for some constant $K$ and some $\gamma > 1/(n+1)$.
\end{definition} Then we have the following result: \begin{theorem} \label{th:corresp} For any aBRP $\widetilde X$ there is a unique BRP $X$ of roughness $\gamma$ such that $$ \max_{\tau \in \mathcal{T}_\mathcal L^n} \|X^\tau - \widetilde X^\tau\|_{(n+1)\gamma} < \infty . $$ \end{theorem} \begin{proof} The assumption is that $\delta \widetilde X^\tau = \widetilde X^{\Delta' \tau} + R^\tau$ where $R^\tau \in \mathcal{C}^{(n+1)\gamma}_3$ for any $\tau \in \mathcal{F}_\mathcal L^n$. We will set $X^\tau = \widetilde X^\tau + Q^\tau$ and determine the increments $Q^\tau$ by induction. First look at $\tau$ such that $|\tau| = 1$. In this case $$ \delta X^\tau = \delta \widetilde X^\tau + \delta Q^\tau = R^\tau + \delta Q^\tau $$ since $\Delta' \tau = 0$. Then we set $Q^\tau = -\Lambda R^\tau$, which is possible since $R^\tau \in \mathcal{Z}\mathcal{C}^{1+}_3$, so that we obtain $\delta X^\tau = 0$ as it should be. Now assume that for $\tau \in \mathcal{T}_\mathcal L^{m}$ we have obtained $Q^\tau$ such that $\delta X^\tau = X^{\Delta' \tau}$ and let us find such corrections $Q^{\tau}$ for $\tau \in \mathcal{T}_\mathcal L^{m+1}$ with $|\tau|=m+1$. We have $$ \delta \widetilde X^{\tau} = \sum' \widetilde X^{\tau^{(1)}} \widetilde X^{\tau^{(2)}} + R^\tau ; $$ since both $\tau^{(1)}$ and $\tau^{(2)}$ have degree less than $m+1$ we can apply the induction hypothesis and obtain $ \delta \widetilde X^{\tau} = \sum' ( X^{\tau^{(1)}}- Q^{\tau^{(1)}}) ( X^{\tau^{(2)}}- Q^{\tau^{(2)}}) + R^\tau. $ Now let $$ \widetilde R^\tau = R^\tau - \sum' \left[Q^{\tau^{(1)}} X^{\tau^{(2)}}+X^{\tau^{(1)}} Q^{\tau^{(2)}}-Q^{\tau^{(1)}} Q^{\tau^{(2)}} \right] $$ so that $ \delta \widetilde X^\tau - \widetilde R^\tau = \sum' X^{\tau^{(1)}} X^{\tau^{(2)}}. $ If we can show that $\widetilde R^\tau \in \mathcal{Z} \mathcal{C}_3^{1+}$, then setting $Q^{\tau} = -\Lambda[\widetilde R^\tau]$ we would have obtained $ \delta X^\tau = \delta \widetilde X^\tau - \widetilde R^\tau = \sum' X^{\tau^{(1)}} X^{\tau^{(2)}}.
$ as required and the induction would be complete. It is clear that $\widetilde R^\tau \in \mathcal{C}_3^{1+}$. The only problem is to prove that it is in the image of $\delta$. By the triviality of the complex $(\mathcal{C}_*,\delta)$ this is equivalent to showing that $\delta \widetilde R^\tau = 0$. So let us prove this last equality. Note that $$ \delta \widetilde R^\tau = \delta\left[\delta \widetilde X^\tau - \sum' X^{\tau^{(1)}} X^{\tau^{(2)}}\right] =- \delta \sum' X^{\tau^{(1)}} X^{\tau^{(2)}} . $$ Using again the induction hypothesis we get $$ \delta \widetilde R^\tau = \sum' X^{\tau^{(1)}} \delta X^{\tau^{(2)}} - \sum' \delta X^{\tau^{(1)}} X^{\tau^{(2)}} $$ $$= \sum' X^{\tau^{(1)}} X^{\Delta' \tau^{(2)}} - \sum' X^{\Delta' \tau^{(1)}} X^{\tau^{(2)}} = X^{(\text{id}\otimes \Delta')\Delta' \tau} -X^{(\Delta'\otimes\text{id})\Delta' \tau} . $$ But now $ \delta \widetilde R^\tau= X^{(\text{id}\otimes \Delta')\Delta' \tau- (\Delta'\otimes\text{id})\Delta' \tau} = 0 $ since the reduced coproduct is coassociative. The proof of uniqueness is left to the reader. \end{proof} \section{Controlled paths} \label{sec:controlled} Following the line of development of~\cite{MR2091358} we now describe a sufficiently large class of paths which can be integrated against a given $\gamma$-branched rough path $X$. We then show that this class of paths constitutes an algebra and that integration and the application of sufficiently regular maps preserve it. It will constitute the natural space in which to look for solutions of rough differential equations driven by a branched rough path. In Sect.~\ref{sec:trees-diff} we showed that the solution $y$ of a driven differential equation has the form of a series indexed by trees: $ \delta y_{ts} = \sum_{\tau \in \mathcal{T}_\mathcal L} X^{\tau}_{ts} y^{\tau}_s $ (cf. eq.~(\ref{eq:tree-rep})) for suitable coefficient functions $\{ y^\tau : \tau \in \mathcal{T}_\mathcal{L} \}$ which satisfy eq.~(\ref{eq:ytau-der}).
This suggests the following definition: \begin{definition} \label{def:controlled-path} Let $X$ be a $\gamma$-BRP and let $n$ be the largest integer such that $n\gamma\le 1$. For any $\kappa \in (1/(n+1),\gamma]$ a path $y$ is $\kappa$-\emph{weakly controlled} by $X$ with values in the vector space $V$ if there exist paths $\{ y^{\tau} \in \mathcal{C}^{|\tau|\kappa}_2(V) : \tau \in \mathcal{F}_\mathcal{L}^{n-1}\}$ and remainders $\{y^{\sharp} \in \mathcal{C}_2^{n\kappa}(V), y^{\tau,\sharp} \in \mathcal{C}_2^{(n-|\tau|)\kappa}(V), \tau \in \mathcal{F}_{\mathcal L}^{n-1}\}$ such that \begin{equation} \label{eq:control} \delta y = \sum_{\tau \in \mathcal{F}_\mathcal L^{n-1}} X^{\tau} y^{\tau} + y^\sharp \end{equation} and for $\tau \in \mathcal{F}_\mathcal L^{n-1}$ : \begin{equation} \label{eq:control2} \delta y^{\tau} = \sum_{\sigma \in \mathcal{F}_\mathcal L^{n-1}}\sum_{\rho} c'(\sigma,\tau,\rho) X^{\rho} y^{\sigma} + y^{\tau,\sharp} \end{equation} where we mean $\delta y^{\tau} = y^{\tau,\sharp}$ when $|\tau| = n-1$. We denote by $\mathcal{Q}_\kappa(X;V)$ the vector space of paths $\kappa$-weakly controlled by $X$ with values in $V$. Given a norm $|\cdot|$ on $V$ we introduce a norm $\|\cdot\|_{\mathcal{Q},\kappa}$ on $\mathcal{Q}_\kappa(X;V)$ as $$ \|y\|_{\mathcal{Q},\kappa} = |y_0| + \|y^\sharp\|_{n\kappa} + \sum_{\tau\in\mathcal{F}_\mathcal{L}^{n-1}} \|y^{\tau,\sharp}\|_{\kappa(n-|\tau|)}. $$ \end{definition} To be precise, a well defined element of $\mathcal{Q}_\kappa(X;V)$ is given by specifying the path $y$ together with all its ``derivatives'' $\{y^\tau\}_\tau$, but we usually omit the latter for the sake of brevity. A path in $\mathcal{Q}_\kappa(X;V)$ has a partial expansion in $X$ with a remainder denoted by $y^\sharp$; likewise every coefficient path in this expansion has a similar expansion of progressively lower order. We write $\mathcal{Q}_{\kappa}(X) = \mathcal{Q}_{\kappa}(X;\mathbb{R})$.
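In the degenerate situation where the model is built from a single smooth scalar path, so that $X^{\bullet}_{ts} = x_t - x_s$ and $X^{\bullet\bullet}_{ts} = (X^{\bullet}_{ts})^2$, a path $y_t = \varphi(x_t)$ is weakly controlled with the Taylor coefficients $y^{\bullet} = \varphi'(x)$ and $y^{\bullet\bullet} = \tfrac12 \varphi''(x)$, the remainder in eq.~(\ref{eq:control}) being of third order in $|t-s|$. A brief numerical sketch of this (all concrete functions are our own hypothetical choices):

```python
import math

phi, dphi, ddphi = math.sin, math.cos, lambda u: -math.sin(u)
x = math.exp                      # hypothetical smooth driving path

def remainder(t, s):
    """y^sharp_{ts} = delta y_{ts} - X^bullet_{ts} y^bullet_s - X^{bullet bullet}_{ts} y^{bullet bullet}_s
    in the smooth scalar model, with y = phi(x)."""
    dx = x(t) - x(s)
    dy = phi(x(t)) - phi(x(s))
    return dy - dphi(x(s)) * dx - 0.5 * ddphi(x(s)) * dx ** 2

# Third-order remainder: halving t - s should divide it by roughly 2^3 = 8.
s = 0.3
r1 = remainder(s + 0.1, s)
r2 = remainder(s + 0.05, s)
assert 6.0 < abs(r1 / r2) < 10.0
```

Of course, in the rough regime no Taylor formula is available and the existence of the coefficient paths is the content of Lemma~\ref{lemma:controlled-phi} below.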
\begin{example} \label{eq:example-2} Let us give an example with $d=1$ of the structure of a controlled path (since $d=1$ the partial series are indexed by unlabeled trees). Take $\gamma \in (1/5,1/4]$ so that $n=4$ and assume that $X$ is a $\gamma$-BRP. Then $y \in \mathcal{Q}_\gamma$ corresponds to the set of paths \psset{levelsep=-5pt,nodesep=-2pt,treesep=1pt} $$ y \in \mathcal{C}_1^{\gamma},\qquad y^{\ensuremath{\scriptstyle\bullet}} \in \mathcal{C}_1^{\gamma},\qquad y^{\aabb},y^{\tsnode \tsnode} \in \mathcal{C}_1^{2\gamma}, \qquad y^{\pstree{\tsnode}{\tsnode \tsnode}}, y^{\aabb \tsnode}, y^{\aaabbb}, y^{\tsnode \tsnode \tsnode}\in\mathcal{C}_1^{3\gamma} $$ which satisfy the following algebraic relations \begin{equation*} \begin{split} \delta y & = X^{\ensuremath{\scriptstyle\bullet}} y^{\ensuremath{\scriptstyle\bullet}} + X^{\aabb} y^{\aabb} + X^{\tsnode \tsnode} y^{\tsnode \tsnode} + X^{\pstree{\tsnode}{\tsnode \tsnode}} y^{\pstree{\tsnode}{\tsnode \tsnode}} + X^{\aabb \tsnode} y^{\aabb \tsnode} + X^{\tsnode \tsnode \tsnode} y^{\tsnode \tsnode \tsnode}+X^{\aaabbb} y^{\aaabbb} +y^\sharp \\ \delta y^{\ensuremath{\scriptstyle\bullet}} & = X^{\ensuremath{\scriptstyle\bullet}} (y^{\aabb} + 2 y^{\ensuremath{\scriptstyle\bullet} \ensuremath{\scriptstyle\bullet}} ) + X^{\aabb} (y^{\aaabbb} + y^{\aabb \ensuremath{\scriptstyle\bullet}}) + X^{\ensuremath{\scriptstyle\bullet} \ensuremath{\scriptstyle\bullet}} (y^{\aabb \ensuremath{\scriptstyle\bullet}} + y^{\pstree{\tsnode}{\tsnode \tsnode}} +3 y^{\ensuremath{\scriptstyle\bullet} \ensuremath{\scriptstyle\bullet} \ensuremath{\scriptstyle\bullet}}) + y^{\ensuremath{\scriptstyle\bullet},\sharp} \\ \delta y^{\aabb} & = X^{\ensuremath{\scriptstyle\bullet}} (y^{\aabb \ensuremath{\scriptstyle\bullet}} + 2 y^{\pstree{\tsnode}{\tsnode \tsnode}} + y^{\aaabbb}) + y^{\aabb,\sharp} \\ \delta y^{\ensuremath{\scriptstyle\bullet} \ensuremath{\scriptstyle\bullet}} & = X^{\ensuremath{\scriptstyle\bullet}} (y^{\aabb \ensuremath{\scriptstyle\bullet}}+y^{ \ensuremath{\scriptstyle\bullet} \ensuremath{\scriptstyle\bullet} \ensuremath{\scriptstyle\bullet}}) + y^{\ensuremath{\scriptstyle\bullet} \ensuremath{\scriptstyle\bullet},\sharp} \\ \delta y^{\pstree{\tsnode}{\tsnode \tsnode}} & = y^{\pstree{\tsnode}{\tsnode \tsnode},\sharp} \\ \delta y^{\aabb\tsnode} & = y^{\aabb\tsnode,\sharp} \\ \delta y^{\tsnode\tsnode\tsnode} & = y^{\tsnode\tsnode\tsnode,\sharp} \\ \delta y^{\aaabbb} & = y^{\aaabbb,\sharp} \end{split} \end{equation*} with remainders of orders $$ y^{\sharp} \in \mathcal{C}_2^{4\gamma},\quad y^{\ensuremath{\scriptstyle\bullet},\sharp} \in \mathcal{C}_2^{3\gamma}, \qquad y^{\aabb,\sharp},y^{\ensuremath{\scriptstyle\bullet} \ensuremath{\scriptstyle\bullet},\sharp} \in \mathcal{C}_2^{2\gamma}, \qquad y^{\pstree{\tsnode}{\tsnode \tsnode},\sharp},y^{\aabb\tsnode,\sharp},y^{\tsnode\tsnode\tsnode,\sharp},y^{\aaabbb,\sharp} \in \mathcal{C}_2^{ \gamma } . $$ \end{example} The following lemma will be useful in computations below.
\begin{lemma} \label{lemma:derysharp} Let $y \in \mathcal{Q}_\kappa(X;V)$ be a weakly controlled path; then \begin{equation*} \delta y^\sharp = \sum_{\tau\in \mathcal{F}_\mathcal L^{n-1}} X^\tau y^{\tau,\sharp} . \end{equation*} \end{lemma} \begin{proof} \begin{equation*} \begin{split} \delta y^\sharp & = \sum_{\tau\in \mathcal{F}_\mathcal L^{n-1}} X^\tau \delta y^\tau - \sum_{\tau \in \mathcal{F}_\mathcal L^{n-1}} \delta X^\tau y^\tau \\ & = \sum_{|\tau| = n-1} X^\tau \delta y^\tau+ \sum_{\tau\in \mathcal{F}_\mathcal L^{n-2}} X^\tau \left(\sum_{\sigma\in \mathcal{F}_\mathcal L^{n-1}} \sum_\rho c'(\sigma,\tau,\rho) X^{\rho} y^{\sigma} + y^{\tau,\sharp}\right) - \sum_{\sigma\in \mathcal{F}_\mathcal L^{n-1}} \delta X^\sigma y^\sigma \\ & = \sum_{|\tau| = n-1} X^\tau \delta y^\tau+ \sum_{\tau\in \mathcal{F}_\mathcal L^{n-2}} \sum_{\sigma\in \mathcal{F}_\mathcal L^{n-1}} \sum_\rho c'(\sigma,\tau,\rho) X^\tau X^{\rho} y^{\sigma} + \sum_{\tau\in \mathcal{F}_\mathcal L^{n-2}} X^\tau y^{\tau,\sharp} - \sum_{\sigma\in \mathcal{F}_\mathcal L^{n-1}} \delta X^\sigma y^\sigma \\ & =\sum_{|\tau| = n-1} X^\tau \delta y^\tau+\sum_{\sigma\in \mathcal{F}_\mathcal L^{n-1}} \sum_{\tau \in \mathcal{F}_\mathcal L^{n-2},\rho} c'(\sigma,\tau,\rho) X^\tau X^{\rho} y^{\sigma} + \sum_{\tau\in \mathcal{F}_\mathcal L^{n-2}} X^\tau y^{\tau,\sharp} - \sum_{\sigma\in \mathcal{F}_\mathcal L^{n-1}} \delta X^\sigma y^\sigma \\ & = \sum_{|\tau| = n-1} X^\tau \delta y^\tau+ \sum_{\sigma \in \mathcal{F}_\mathcal L^{n-1}} (X^{\Delta' \sigma} - \delta X^\sigma) y^{\sigma} + \sum_{\tau\in \mathcal{F}_\mathcal L^{n-2}} X^\tau y^{\tau,\sharp} \\ & = \sum_{\tau\in \mathcal{F}_\mathcal L^{n-1}} X^\tau y^{\tau,\sharp} \end{split} \end{equation*} \end{proof} \begin{lemma} \label{lemma:controlled-phi} Let $\varphi \in C_b^n(\mathbb{R}^k,\mathbb{R})$ and $y \in \mathcal{Q}_\kappa(X;\mathbb{R}^k)$; then $z_t = \varphi(y_t)$ is a weakly controlled path, $z \in \mathcal{Q}_\kappa(X;\mathbb{R})$, whose coefficients are given by \begin{equation*} z^\tau = \sum_{m = 1}^{n-1}
\sum_{\substack{\ol b \in \mathcal{I}\mathcal{L}_1 \\ |\ol b|=m}}\frac{\varphi_{\ol b}(y)}{m!} \sum_{\substack{\tau_1,\dots,\tau_m \in \mathcal{F}_\mathcal L^{n-1}\\ \tau_1\cdots\tau_m=\tau }} y^{\tau_1,b_1} \cdots y^{\tau_m,b_m} , \qquad \tau \in \mathcal{F}_\mathcal{L}^{n-1} \end{equation*} where $\mathcal{L}_1=\{1,\dots,k\}$ (note that all the summations are over a finite number of terms). \end{lemma} \begin{proof} The Taylor expansion for $\varphi$ reads $$ \varphi(\xi') = \varphi(\xi) + \sum_{m = 1}^{n-1} \sum_{\substack{\ol b \in \mathcal{I}\mathcal{L}_1 \\ |\ol b|=m}}\frac{\varphi_{\ol b}(\xi)}{m!} (\xi'-\xi)^{\ol b} + O(|\xi'-\xi|^n) $$ which, plugged into $\delta z = \delta \varphi(y)$, gives \begin{equation*} \begin{split} \delta z_{ts} & = \sum_{m = 1}^{n-1} \sum_{\substack{\ol b \in \mathcal{I}\mathcal{L}_1 \\ |\ol b|=m}}\frac{\varphi_{\ol b}(y_s)}{m!} (\delta y_{ts})^{\ol b} + O(|t-s|^{n\kappa}) \\ & = \sum_{m = 1}^{n-1} \sum_{\tau^1,\dots,\tau^m \in \mathcal{F}_\mathcal{L}^{n-1}} \sum_{\substack{\ol b \in \mathcal{I}\mathcal{L}_1 \\ |\ol b|=m}}\frac{\varphi_{\ol b}(y_s)}{m!} y^{\tau^1,b_1}_s\cdots y^{\tau^m,b_m}_s X^{\tau^1\cdots \tau^m}_{ts} + O(|t-s|^{n\kappa}) \end{split} \end{equation*} which gives the required result. To show that every $z^\tau$ satisfies the $\delta$-equations~(\ref{eq:control2}) we can use a truncated version of the arguments used in Theorem~\ref{th:der-eqns-1}. We omit the details. \end{proof} The previous lemma shows that controlled paths are compatible with the application of nonlinear functions. We will now prove that there exists an extension of the integral maps $\{I^a\}_a$ to the algebra $\mathcal{Q}_\gamma(X)$. \begin{theorem} \label{th:controlled-integrals} The integral maps $\{I^a\}_{a\in\mathcal{L}}$ can be extended to maps $I^a : \mathcal{Q}_\kappa(X) \to \delta \mathcal{Q}_\kappa(X)$.
If $y \in \mathcal{Q}_\kappa(X)$ then $\delta z = I^a( y)$ is such that \begin{equation} \label{eq:2} \delta z = X^{\bullet_a} z^{\bullet_a} + \sum_{\tau \in \mathcal{T}_\mathcal L^{n}} X^{\tau} z^{\tau} + z^{\flat} \end{equation} where $z^{\bullet_a} = y$, $z^{[\tau]_a} = y^{\tau}$ and $z^\tau = 0$ otherwise. Moreover $$ z^\flat =\Lambda\left[ \sum_{\tau \in \mathcal{F}_\mathcal L^{n-1}\cup\{\emptyset\}} X^{B^+_a(\tau)} y^{\tau,\sharp}\right] \in \mathcal{C}_2^{\kappa (n+1)}. $$ \end{theorem} \begin{proof} Let $h = \sum_{\tau \in \mathcal{F}^{n-1}_\mathcal{L}} X^\tau y^\tau$ so that $\delta y = h + y^\sharp$. By linearity and by the definition of $X$ we have $h \in \mathcal{D}_I$ and $$ I^a(h) =\sum_{\tau \in \mathcal{F}^{n-1}_\mathcal{L}} I^a(X^{\tau}) y^\tau = \sum_{\tau \in \mathcal{F}^{n-1}_\mathcal{L}} X^{[\tau]_a} y^\tau = I^a(\delta y - y^\sharp) . $$ We would like to show that we can extend $I^a$ in such a way that $I^a(y^\sharp)$ is well defined, so that we can set $$ I^a(\delta y) = \sum_{\tau \in \mathcal{F}^{n-1}_\mathcal{L}} X^{[\tau]_a} y^\tau + I^a(y^\sharp). $$ To do this we compute the action of $\delta$ on $I^a(y^\sharp)$. Since we want to preserve the properties of $I^a$ we have to require that $$ \delta I^a(y^\sharp) = I^a(e) y^\sharp + \sum_{\tau \in \mathcal{F}^{n-1}_\mathcal{L}}I^a(X^\tau) y^{\tau,\sharp} = X^{\bullet_a} y^\sharp + \sum_{\tau \in \mathcal{F}^{n-1}_\mathcal{L}}X^{[\tau]_a} y^{\tau,\sharp} $$ where we used the computation of $\delta y^{\sharp}$ in Lemma~\ref{lemma:derysharp}.
Since $X$ is a $\gamma$-BRP and $y \in \mathcal{Q}_\kappa(X)$ with $1/(n+1)< \kappa \le \gamma$ we see that the right-hand side of this equation belongs to $\mathcal{Z}\mathcal{C}_3^{(n+1)\kappa} \subset \mathcal{Z}\mathcal{C}_3^{1+}$, so that it belongs to the domain of the $\Lambda$ map and we can define $$ I^a(y^\sharp) = \Lambda \left[X^{\bullet_a} y^\sharp + \sum_{\tau \in \mathcal{F}^{n-1}_\mathcal{L}}X^{[\tau]_a} y^{\tau,\sharp}\right] $$ which proves our statement, taking into account that we can set $z = I^a(y) = I^a(e)y + I^a(\delta y)$. \end{proof} \begin{example} \label{ex:example-3} Let us continue our one dimensional example. For the integral $z = I(y)$ of the controlled path $y$ introduced in Ex.~\ref{eq:example-2} we get \psset{levelsep=-5pt,nodesep=-2pt,treesep=1pt} \begin{equation*} \begin{split} \delta z = \delta I(y) & = X^{\ensuremath{\scriptstyle\bullet}} y + X^{\aabb} y^{\ensuremath{\scriptstyle\bullet}} + X^{\aaabbb} y^{\aabb}+X^{\pstree{\tsnode}{\tsnode \tsnode}} y^{\tsnode \tsnode} + X^{\aaabbabb} y^{\aabb \tsnode} + X^{\pstree{\tsnode}{\aababb}} y^{\pstree{\tsnode}{\tsnode \tsnode}} + X^{\pstree{\tsnode}{\tsnode\tsnode\tsnode}} y^{\tsnode \tsnode \tsnode}+X^{\aaaabbbb}y^{\aaabbb}+z^\flat \\ & = X^{\ensuremath{\scriptstyle\bullet}} z^{\ensuremath{\scriptstyle\bullet}} + X^{\aabb} z^{\aabb} + X^{\aaabbb} z^{\aaabbb} +X^{\pstree{\tsnode}{\tsnode \tsnode}} z^{\pstree{\tsnode}{\tsnode \tsnode}} + z^{\sharp} \end{split} \end{equation*} with $$ z^\flat = \Lambda\left[X^{\tsnode} y^{\sharp} + X^{\aabb} y^{\tsnode,\sharp} + X^{\aaabbb} y^{\aabb,\sharp} + X^{\pstree{\tsnode}{\tsnode \tsnode}} y^{\tsnode \tsnode,\sharp} + X^{\pstree{\tsnode}{\aababb}} y^{\pstree{\tsnode}{\tsnode \tsnode},\sharp}+X^{\aaabbabb} y^{\aabb \tsnode,\sharp} + X^{\pstree{\tsnode}{\tsnode\tsnode\tsnode}} y^{\tsnode \tsnode \tsnode,\sharp} \right].
$$ and the coefficients satisfy: \begin{equation*} \begin{split} \delta z^{\ensuremath{\scriptstyle\bullet}} = \delta y & = X^{\ensuremath{\scriptstyle\bullet}} y^{\ensuremath{\scriptstyle\bullet}} + X^{\aabb} y^{\aabb} + X^{\tsnode \tsnode} y^{\tsnode \tsnode} + X^{\pstree{\tsnode}{\tsnode \tsnode}} y^{\pstree{\tsnode}{\tsnode \tsnode}} + X^{\aabb \tsnode} y^{\aabb \tsnode} + X^{\tsnode \tsnode \tsnode} y^{\tsnode \tsnode \tsnode}+X^{\aaabbb}y^{\aaabbb}+y^\sharp \\ & = X^{\ensuremath{\scriptstyle\bullet}} z^{\aabb} + X^{\aabb} z^{\aaabbb} + X^{\tsnode \tsnode} z^{\pstree{\tsnode}{\tsnode \tsnode}} + z^{\ensuremath{\scriptstyle\bullet},\sharp} \\ \delta z^{\aabb} = \delta y^{\ensuremath{\scriptstyle\bullet}} & = X^{\ensuremath{\scriptstyle\bullet}} (y^{\aabb} + 2 y^{\ensuremath{\scriptstyle\bullet} \ensuremath{\scriptstyle\bullet}} ) + X^{\aabb} (y^{\aaabbb} + y^{\aabb \ensuremath{\scriptstyle\bullet}}) + X^{\ensuremath{\scriptstyle\bullet} \ensuremath{\scriptstyle\bullet}} (y^{\aabb \ensuremath{\scriptstyle\bullet}} + y^{\pstree{\tsnode}{\tsnode \tsnode}} +3 y^{\ensuremath{\scriptstyle\bullet} \ensuremath{\scriptstyle\bullet} \ensuremath{\scriptstyle\bullet}}) + y^{\ensuremath{\scriptstyle\bullet},\sharp} \\ & = X^{\ensuremath{\scriptstyle\bullet}} (z^{\aaabbb} + 2 z^{\pstree{\tsnode}{\tsnode \tsnode}}) + z^{\aabb,\sharp} \\ \delta z^{\aaabbb} = \delta y^{\aabb} & = X^{\ensuremath{\scriptstyle\bullet}} (y^{\aabb \ensuremath{\scriptstyle\bullet}} + 2 y^{\pstree{\tsnode}{\tsnode \tsnode}} + y^{\aaabbb}) + y^{\aabb,\sharp} \\ & = z^{\aaabbb,\sharp} \\ \delta z^{\pstree{\tsnode}{\tsnode \tsnode}} = \delta y^{\ensuremath{\scriptstyle\bullet} \ensuremath{\scriptstyle\bullet}} & = X^{\ensuremath{\scriptstyle\bullet}} (y^{\aabb \ensuremath{\scriptstyle\bullet}}+y^{ \ensuremath{\scriptstyle\bullet} \ensuremath{\scriptstyle\bullet} \ensuremath{\scriptstyle\bullet}}) +
y^{\ensuremath{\scriptstyle\bullet} \ensuremath{\scriptstyle\bullet},\sharp} \\ & = z^{\pstree{\tsnode}{\tsnode \tsnode},\sharp} \end{split} \end{equation*} \end{example} \begin{remark} Given a controlled path $y \in \mathcal{Q}_\kappa(X;\mathbb{R}^n\otimes \mathbb{R}^d)$ we can lift it to a branched rough path $Y$ indexed by $\mathcal{T}_{\mathcal{L}_1}$ by the following recursion $$ Y^{\bullet_b} = \sum_{a \in \mathcal{L}} I^a(y^{ab}), \qquad Y^{[\tau^1 \cdots \tau^k]_b} = \sum_{a \in \mathcal{L}} I^a(y^{ab} Y^{\tau^1}\circ \cdots \circ Y^{\tau^k}), \qquad b \in \mathcal{L}_1 . $$ Indeed $\{ J^b(\cdot) = \sum_{a\in\mathcal{L}}I^a(y^{ab} \cdot)\}_{b \in \mathcal{L}_1}$ defines a family of integrals in the sense of Def.~\ref{def:integral} and $Y$ is the associated $\gamma$-BRP. \end{remark} \subsection{Rough differential equations} Let $f_a \in C(\mathbb{R}^k;\mathbb{R}^k)$, $a=1,\dots,d$, be a family of vector fields on $\mathbb{R}^k$. Given a family of integral maps $\{I^a\}_{a\in\mathcal{L}}$ which defines a $\gamma$-BRP $X$ we consider the \emph{rough differential equation} \begin{equation} \label{eq:rough-eq} \delta y = \sum_{a\in\mathcal{L}} I^a(f_a(y)), \qquad y_0 = \eta \in \mathbb{R}^k \end{equation} in the time interval $[0,T]$. This equation has a well defined meaning when the vector fields $f_a$ are $C^n_b$, with $n$ the largest integer for which $n\gamma \le 1$. In this case we can look for solutions of the above equation with $y \in \mathcal{Q}_\gamma(X;\mathbb{R}^k)$, and eq.~(\ref{eq:rough-eq}) can be understood as a fixed point problem in $\mathcal{Q}_\gamma(X;\mathbb{R}^k)$ since the map $\Gamma$ defined by $$ \delta \Gamma(y) = \sum_{a\in\mathcal{L}} I^a(f_a(y)), \qquad \Gamma(y)_0 = \eta $$ is well defined from $\mathcal{Q}_\gamma(X;\mathbb{R}^k)$ to itself thanks to Lemma~\ref{lemma:controlled-phi} and Theorem~\ref{th:controlled-integrals}.
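At the level of a smooth driving signal the fixed-point formulation can be illustrated by a crude discretised Picard iteration for $\Gamma$. The sketch below is a classical, non-rough caricature with our own hypothetical choices ($d=k=1$, $f(y)=y$, $x_t=t$, $\eta=1$ on $[0,1]$, exact solution $e^t$); it is only meant to exhibit the structure of the iteration, not the genuinely rough construction.

```python
import math

# Discretised Picard iteration for  delta y = I(f(y))  with I the ordinary
# integral against x_t = t, vector field f(y) = y and y_0 = 1 on [0, 1].
N = 2000
h = 1.0 / N

def gamma_map(y):
    """Gamma(y)_t = 1 + int_0^t f(y_s) dx_s, cumulative trapezoidal rule."""
    z = [1.0]
    for i in range(N):
        z.append(z[-1] + 0.5 * h * (y[i] + y[i + 1]))
    return z

y = [1.0] * (N + 1)        # starting guess: the constant path eta = 1
for _ in range(30):        # iterate Gamma until (numerically) fixed
    y = gamma_map(y)

assert abs(y[-1] - math.e) < 1e-4   # fixed point approximates e^t at t = 1
```

In the rough setting the same contraction argument is run in the norm $\|\cdot\|_{\mathcal{Q},\kappa}$ of the controlled-path space rather than on grid values.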
\begin{theorem} If $\{f_a\}_{a\in\mathcal{L}}$ is a family of $C^{n}_b$ vector fields then the rough differential equation~(\ref{eq:rough-eq}) has a global solution $y \in \mathcal{Q}_\gamma(X;\mathbb{R}^k)$ for any initial condition $\eta\in\mathbb{R}^k$. If the vector fields are $C^{n+1}_b$ the solution $\Phi(\eta,X)\in\mathcal{Q}_\gamma(X;\mathbb{R}^k)$ is unique and the map $\Phi: \mathbb{R}^k \times \Omega^\gamma_{\mathcal{T},\mathcal{L}} \to \mathcal{Q}_{\gamma}(X;\mathbb{R}^k)$ is Lipschitz on any finite interval $[0,T]$. \end{theorem} \begin{proof} The proof of existence is based on a compactness argument for the map $\Gamma$. Global solutions are obtained by exploiting the boundedness of the vector fields (and of their derivatives). Uniqueness is proven by contraction on a sufficiently small time interval $[0,S]$. The arguments are direct adaptations of the proofs of similar statements which can be found in~\cite{MR2091358} and are quite standard, so we prefer to omit them. \end{proof} \section{Infinite dimensional rough equations} \label{sec:inf-dim} Another motivation to introduce a rough path theory based on tree-indexed iterated integrals comes from the observation that infinite dimensional differential equations quite naturally generate expansions in trees which cannot be reduced to ``linear'' iterated integrals by means of some geometric property. We still do not have a general theory of such equations, but in this section we would like to justify our point of view by means of three examples which we have studied in detail elsewhere~\cite{kdv,MR2227041,TindelGubinelli}: the 1d periodic deterministic Korteweg--de~Vries (KdV) equation, Navier--Stokes-like equations and a class of stochastic partial differential equations. Given the illustrative purpose of this section we will keep the exposition at a formal level. Rigorous results can be found in the papers cited above.
\subsection{The KdV equation} The 1d periodic KdV equation is the partial differential equation \begin{equation} \label{eq:kdv-real} \partial_t u(t,\xi) + \partial^3_\xi u(t,\xi) + \frac12 \partial_\xi u(t,\xi)^2 = 0,\quad u(0,\xi) = u_0(\xi), \qquad (t,\xi)\in\mathbb{R}\times \mathbb T \end{equation} where the initial condition $u_0$ belongs to some Sobolev space $H^\alpha(\mathbb T)$ of the torus $\mathbb T = [-\pi,\pi]$. This equation has many interesting features (e.g. it is a completely integrable system) but here we are interested only in the interplay between the non-linear term and the dispersive linear term, which is the generator of the Airy group $U(t)$ of isometries of $H^{\alpha}$. By going to Fourier variables and setting $v_t = U(t) u_t$ we recast the above equation in integral form \begin{equation} \label{eq:kdv-base} v_t(k) = v_0(k) + \frac{ik}2 \sum'_{k_1} \int_0^t e^{- i 3 k k_1 k_2 s} v_s(k_1) v_s(k_2) \, ds, \quad t \in [0,T], k \in \mathbb{Z}_* \end{equation} where $k_2 = k-k_1$ and $v_0(k) = u_0(k)$, and where the primed summation excludes the values $k_1 = 0$ and $k_1 = k$. We restrict our attention to initial conditions such that $v_0(0) = 0$. By introducing the bilinear operator $ [\dot X_\sigma(\varphi,\psi)](k) = \frac{ik}2 \sum'_{k_1} e^{- i 3 k k_1 k_2 \sigma} \varphi(k_1) \psi(k_2) $ this equation takes the abstract form $$ v_t = v_s + \int_s^t \dot X_\sigma(v_\sigma,v_\sigma) d\sigma, \qquad t,s \in [0,T].
$$ By iteratively substituting the unknown in this integral equation we obtain an expansion whose first terms look like \begin{equation} \label{eq:tree-series-kdv} \begin{split} v_t & = v_s + \int_s^t d\sigma \dot X_\sigma(v_s,v_s) + 2 \int_s^t d\sigma \dot X_\sigma(v_s,\int_s^\sigma d\sigma_1 \dot X_{\sigma_1}(v_s,v_s)) \\ &\qquad +\int_s^t d\sigma \dot X_\sigma(\int_s^\sigma d\sigma_1 \dot X_{\sigma_1}(v_s,v_s),\int_s^\sigma d\sigma_2 \dot X_{\sigma_2}(v_s,v_s)) \\ &\qquad +4 \int_s^t d\sigma \dot X_\sigma(v_s,\int_s^\sigma d\sigma_1 \dot X_{\sigma_1}(v_s,\int_s^{\sigma_1} d\sigma_2 \dot X_{\sigma_2}(v_s,v_s))) + r_{ts} \end{split} \end{equation} where $r_{ts}$ stands for the remaining terms in the expansion. Denote by $\mathcal{T}_{BP}\subseteq \mathcal{T}$ the set of (unlabeled) planar rooted trees with at most two branches at each node. A planar tree is a rooted tree endowed with an ordering of the branches at each node. Then each of the terms in this expansion can be associated with a tree in $\mathcal{T}_{BP}$ and we can define recursively multi-linear operators $X^{\tau}$ as $$ X^{\bullet}_{ts}(\varphi_1,\varphi_2) = \int_s^t \dot X_\sigma(\varphi_1,\varphi_2) d\sigma; $$ $$ X^{[\tau^1]}_{ts}(\varphi_1,\dots,\varphi_{m+1}) = \int_s^t \dot X_\sigma(X^{\tau^1}_{\sigma s}(\varphi_1,\dots,\varphi_m),\varphi_{m+1}) d\sigma $$ and $$ X^{[\tau^1 \tau^2]}_{ts}(\varphi_1,\dots,\varphi_{m+n}) = \int_s^t \dot X_\sigma(X^{\tau^1}_{\sigma s}(\varphi_1,\dots,\varphi_m),X^{\tau^2}_{\sigma s}(\varphi_{m+1},\dots,\varphi_{m+n})) d\sigma. $$ Eq.~(\ref{eq:tree-series-kdv}) then takes the form \psset{levelsep=-5pt,nodesep=-2pt,treesep=1pt} \begin{equation} \label{eq:kdv-incr-1} \delta v_{ts} = X^{\ensuremath{\scriptstyle\bullet}}(v^{\times 2}) + X^{\aabb}(v^{\times 3}) + X^{\aaabbb}(v^{\times 4}) + X^{\pstree{\tsnode}{\tsnode \tsnode}}(v^{\times 4}) +r \end{equation} as an equation for $k$-increments, where $v^{\times n}_s = (v_s,\dots,v_s)$ ($n$ times).
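To make the recursive definition of the operators $X^\tau$ concrete, here is a small numerical sketch in a scalar toy model. The choice $\dot X_\sigma(a,b)=\cos(\sigma)\,a\,b$ and the nested-tuple encoding of planar binary trees are purely illustrative assumptions; the actual KdV operator acts on Fourier sequences.

```python
import math

# Illustrative scalar stand-in for the bilinear operator dot-X of the text.
def dX(sigma, a, b):
    return math.cos(sigma) * a * b

def width(slot):
    """Number of arguments consumed by a slot of a planar binary tree.
    Encoding: the string "arg" is a bare argument slot; a pair (L, R) is a
    node whose two slots are again slots.  ("arg", "arg") is X^bullet."""
    if slot == "arg":
        return 1
    return width(slot[0]) + width(slot[1])

def X(tree, t, s, args, n=400):
    """Evaluate X^tree_{ts}(args) by the recursive definition of the text,
    approximating the outer time integral with a midpoint rule."""
    left, right = tree
    k = width(left)                     # arguments routed to the left slot
    h = (t - s) / n
    total = 0.0
    for i in range(n):
        sigma = s + (i + 0.5) * h
        a = args[0] if left == "arg" else X(left, sigma, s, args[:k], n)
        b = args[k] if right == "arg" else X(right, sigma, s, args[k:], n)
        total += dX(sigma, a, b) * h
    return total

BULLET = ("arg", "arg")      # X^bullet, two arguments
CHERRY = (BULLET, "arg")     # X^{[bullet]}, three arguments
```

In this toy model the quadrature can be checked against the closed forms $X^{\bullet}_{t0}(a,b)=ab\sin t$ and $X^{[\bullet]}_{t0}(a,b,c)=abc\,\sin^2(t)/2$.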
Moreover we have algebraic relations for the $X^\tau$-s, for example $$ \delta X^{\aabb}(\varphi_1,\varphi_2,\varphi_3) = X^{\ensuremath{\scriptstyle\bullet}}(X^{\ensuremath{\scriptstyle\bullet}}(\varphi_1,\varphi_2),\varphi_3) , $$ $$ \delta X^{\aaabbb}(\varphi_1,\varphi_2,\varphi_3,\varphi_4) = X^{\ensuremath{\scriptstyle\bullet}}(X^{\aabb}(\varphi_1,\varphi_2,\varphi_3),\varphi_4) + X^{\aabb}(X^{\ensuremath{\scriptstyle\bullet}}(\varphi_1,\varphi_2),\varphi_3,\varphi_4) , $$ and \begin{equation*} \begin{split} \delta X^{\pstree{\tsnode}{\tsnode \tsnode}}& (\varphi_1,\varphi_2,\varphi_3,\varphi_4) = X^{\ensuremath{\scriptstyle\bullet}}(X^{\ensuremath{\scriptstyle\bullet}}(\varphi_1,\varphi_2),X^{\ensuremath{\scriptstyle\bullet}}(\varphi_3,\varphi_4)) \\ & \qquad + X^{\aabb}(\varphi_1,\varphi_2,X^{\ensuremath{\scriptstyle\bullet}}(\varphi_3,\varphi_4)) + X^{\aabb}(\varphi_3,\varphi_4,X^{\ensuremath{\scriptstyle\bullet}}(\varphi_1,\varphi_2)) \end{split} \end{equation*} where we used the symmetry of the operator $\dot X$ to obtain this last equation. These relations have much in common with the analogous relations for branched rough paths; however, here the additional information of the position of the various arguments must be taken into account in the combinatorics of the reduced coproduct. It would be interesting to determine a Hopf algebra structure on $\mathcal{T}_{BP}$ which could account for these relations in a general way. Our interest in the $X$-operators comes from the fact that they are usually more regular than the original operator $\dot X$. This additional regularity usually comes at the expense of their H\"older time regularity when considered as operator-valued increments. We are then naturally led to consider eq.~(\ref{eq:kdv-incr-1}) as a rough equation and to try to solve it using the $\Lambda$ map.
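The first of these relations can be verified numerically in a scalar toy model. Here $\dot X_\sigma(a,b)=\cos(\sigma)\,a\,b$ is an illustrative stand-in (not the KdV operator), for which $X^{\bullet}$ and $X^{[\bullet]}$ have closed forms:

```python
import math

# Scalar toy model (illustrative only): dX_sigma(a, b) = cos(sigma) * a * b.
# Closed forms of the first two tree operators:
#   X^bullet_{ts}(a, b)      = (sin t - sin s) * a * b
#   X^{[bullet]}_{ts}(a,b,c) = [ (sin^2 t - sin^2 s)/2 - sin s (sin t - sin s) ] * a*b*c
def X1(t, s, a, b):
    return (math.sin(t) - math.sin(s)) * a * b

def X2(t, s, a, b, c):
    S, T = math.sin(s), math.sin(t)
    return ((T * T - S * S) / 2.0 - S * (T - S)) * a * b * c

def delta_X2(t, u, s, a, b, c):
    # (delta h)_{tus} = h_{ts} - h_{tu} - h_{us} for any 2-increment h
    return X2(t, s, a, b, c) - X2(t, u, a, b, c) - X2(u, s, a, b, c)

t, u, s = 1.0, 0.6, 0.2
a, b, c = 2.0, 3.0, 5.0
lhs = delta_X2(t, u, s, a, b, c)
rhs = X1(t, u, X1(u, s, a, b), c)   # X^bullet(X^bullet(phi1, phi2), phi3)
print(lhs, rhs)                     # the two values agree up to rounding
```

The cancellation works exactly as in the general case: splitting the outer integral at $u$ and using bilinearity of $\dot X$ leaves precisely $X^{\bullet}_{tu}(X^{\bullet}_{us}(\varphi_1,\varphi_2),\varphi_3)$.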
For example using only up to the double iterated integrals we would obtain the equation $$ \delta v = (1-\Lambda \delta)[X^{\ensuremath{\scriptstyle\bullet}}(v^{\times 2}) + X^{\aabb}(v^{\times 3})] $$ which in some cases can be solved by fixed point methods. This strategy has allowed us to obtain solutions of the KdV equation for initial data in $H^{\alpha}$ with any $\alpha > -1/2$. Moreover it provides a concrete strategy to improve this result, in the sense that if enough regularity of the two step-3 operators can be proven, then we can solve the equation $$ \delta v = (1-\Lambda \delta)[X^{\ensuremath{\scriptstyle\bullet}}(v^{\times 2}) + X^{\aabb}(v^{\times 3}) + X^{\aaabbb}(v^{\times 4}) + X^{\pstree{\tsnode}{\tsnode \tsnode}}(v^{\times 4})] $$ and obtain solutions for more irregular initial conditions. \subsection{Navier-Stokes-like equations} The $d$-dimensional NS equation (or Burgers' equation) has the abstract form \begin{equation} \label{eq:NS-c-abstract} u_t = S_t u_0 + \int_0^t S_{t-s} B(u_s,u_s)\,ds. \end{equation} where $S$ is a bounded semi-group on a Banach space $\mathcal{B}$ and $B$ is a symmetric bilinear operator which is usually defined only on a subspace of $\mathcal{B}$. Here we cannot proceed as in the previous section since $S$ is only a semi-group and we must cope with the convolution directly.
In~\cite{MR2227041} we showed that the solutions of this equation in the case of the 3d NS equation have the series representation \begin{equation} \label{eq:ns-series} u_t = S_t u_0 + \sum_{\tau \in \mathcal{T}_{B}} X^{\tau}_{t0}(u_0^{\times d(\tau)}) \end{equation} where $d(\tau)$ is a degree function and the $d(\tau)$-multilinear operator $X^{\tau}$ has the recursive definition $$ X^{\ensuremath{\scriptstyle\bullet}}_{ts} (\varphi^{\times 2}) = \int_s^t S_{t-u} B(S_{u-s}\varphi,S_{u-s}\varphi) du $$ $$ X^{[\tau^1]}_{ts}(\varphi^{\times (d(\tau^1)+1)}) = \int_s^t S_{t-u}B(X^{\tau^1}_{us}(\varphi^{\times d(\tau^1)}),\varphi) du $$ and $$ X^{[\tau^1 \tau^2]}_{ts}(\varphi^{\times (d(\tau^1)+d(\tau^2))}) = \int_s^t S_{t-u}B(X^{\tau^1}_{us}(\varphi^{\times d(\tau^1)}),X^{\tau^2}_{us}(\varphi^{\times d(\tau^2)})) du $$ These operators can be shown to admit bounds in $\mathcal{B}$ of the form $$ |X^{\tau}(\varphi^{\times d(\tau)})|_{\mathcal{B}} \le C \frac{|t-s|^{\varepsilon |\tau|}}{(\tau!)^{\varepsilon}} |\varphi|_{\mathcal{B}}^{d(\tau)} $$ where $\varepsilon\ge 0$ is a constant depending on the particular Banach space $\mathcal{B}$ we choose. The series~(\ref{eq:ns-series}) can be shown to be norm convergent at least for small $t$ and defines a local solution of NS. Due to the presence of the convolution integral these $X$ operators do not behave nicely with respect to $\delta$. In~\cite{TindelGubinelli} we introduced a cochain complex $(\hat C_*,\tilde \delta)$ adapted to the study of such convolution integrals, where the coboundary is given by $\tilde \delta h = \delta h - ah - ha$ with $a_{ts} = S_{t-s}-\text{Id}$ the 2-increment naturally associated to the semi-group. There also exists a corresponding $\tilde \Lambda$-map which provides an appropriate inverse to $\tilde \delta$.
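The factorial $\tau!$ appearing in this bound is presumably the standard tree factorial from the B-series literature, $\tau! = |\tau|\cdot \tau_1!\cdots\tau_k!$ for $\tau=[\tau_1\cdots\tau_k]$ (the text does not spell out the convention, so this is an assumption). A minimal sketch of its computation:

```python
# A tree is encoded as a tuple of its subtrees; () is the single-node tree.
def size(tau):
    """Number of nodes |tau|."""
    return 1 + sum(size(c) for c in tau)

def tree_factorial(tau):
    """tau! = |tau| * tau_1! * ... * tau_k!  (B-series convention, assumed)."""
    prod = size(tau)
    for c in tau:
        prod *= tree_factorial(c)
    return prod

# "Ladder" trees (one branch per node) have the largest factorial, n!,
# so they contribute least to |t-s|^(eps |tau|) / (tau!)^eps, while bushy
# trees of the same size have markedly smaller factorials.
ladder3 = (((),),)     # 3 nodes in a chain:   3 * 2 * 1 = 6
cherry3 = ((), ())     # 3 nodes, two branches: 3 * 1 * 1 = 3
```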
Algebraic relations for these iterated integrals then have by-now-familiar expressions, e.g.: $$ \tilde \delta X^{\aabb}(\varphi^{\times 3}) = X^{\ensuremath{\scriptstyle\bullet}}(X^{\ensuremath{\scriptstyle\bullet}}(\varphi^{\times 2}),\varphi) $$ etc. \subsection{Polynomial SPDEs} In the paper~\cite{TindelGubinelli} we study path-wise solutions to SPDEs in the mild form \begin{equation} \label{eq:SPDE} u_t = S_t u_0 + \int_0^t S_{t-s} dw_s f(u_s) \end{equation} where the solution $u_t$ lives in some Hilbert space $\mathcal{B}$, $S$ is an analytic semi-group on $\mathcal{B}$, $f:\mathcal{B} \to \mathcal{V}$ is some nonlinear function with values in another Hilbert space $\mathcal{V}$, and $w$ is a Gaussian stochastic process with values in the space of linear operators from $\mathcal{V}$ to $\mathcal{B}$ (possibly unbounded). As in the NS-like case above, this abstract equation admits an expansion in trees when the non-linear term is polynomial. For example, taking $f(\varphi) = B(\varphi,\varphi)$ for some symmetric bilinear operator $B$ we get a family of iterated integrals of the stochastic process $w$: $$ X^{\ensuremath{\scriptstyle\bullet}}_{ts} (\varphi^{\times 2}) = \int_s^t S_{t-u} dw_u B(S_{u-s}\varphi,S_{u-s}\varphi) $$ $$ X^{[\tau^1]}_{ts}(\varphi^{\times (d(\tau^1)+1)}) = \int_s^t S_{t-u} dw_u B(X^{\tau^1}_{us}(\varphi^{\times d(\tau^1)}),\varphi) $$ and $$ X^{[\tau^1 \tau^2]}_{ts}(\varphi^{\times (d(\tau^1)+d(\tau^2))}) = \int_s^t S_{t-u} dw_u B(X^{\tau^1}_{us}(\varphi^{\times d(\tau^1)}),X^{\tau^2}_{us}(\varphi^{\times d(\tau^2)})) $$ These integrals can be defined by stochastic integration with respect to the process $w$ (It\^o or Stratonovich). Provided useful (path-wise) estimates for these operators are available, we can use the $(\hat C_*,\tilde \delta)$ complex and the $\tilde \Lambda$ map to set up rough equations and study path-wise solutions of polynomial SPDEs like eq.~(\ref{eq:SPDE}).
"Universe Cowards", also known briefly as "Universe Hipsters", is a musical duo consisting of Super Junior's Heechul and Buzz's Min Kyung-hoon. The duo debuted on the hit show Knowing Bros as a project duo and released their first single, "Sweet Dream". The duo name is a mix of Heechul's nickname, "Universe Big Star", and Buzz's song, "Coward".
\section{INTRODUCTION} Blazars are a radio-loud subclass of active galactic nuclei, where the relativistic jet points directly at Earth. Blazars are characterized by two broad components in their spectral energy distribution (SED). The first one peaking between IR and X-ray energies is attributed to synchrotron emission of highly relativistic electrons, while the second component, which peaks between MeV and TeV $\gamma$-ray energies, is attributed to inverse Compton emission of the same electron population. We will concentrate here on this so-called leptonic emission model, which neglects possible contributions by hadronic interactions. Blazars can be divided into two main categories depending on whether or not they exhibit broad optical emission lines ($\Delta EW >5$\AA). BL Lac objects are generally devoid of broad emission lines. They may be further characterized by the peak position of their synchrotron component in the SED. Low-energy peaked BL Lac objects (LBLs) exhibit the synchrotron maximum below $10^{14}$Hz, while intermediate-energy peaked BL Lac objects peak between $10^{14}$Hz and $10^{15}$Hz, and high-energy peaked BL Lac objects peak above $10^{15}$Hz. The blazar AP Librae, located at a redshift $z=0.0486$ and at $\mbox{R.A.}_{\mbox{J2000}} = 15^h 17^m 41.8^s$, $\mbox{DEC.}_{\mbox{J2000}} = -24^{\circ} 22^{\prime} 19.5^{\prime\prime}$, exhibits a monotonically increasing X-ray energy spectrum. This in combination with the lack of optical emission lines has led to its classification as an LBL. Interestingly, some observational features of this source do not fit into this category. Since the synchrotron component of LBLs peaks below $10^{14}$Hz and the X-rays are produced by the inverse Compton (IC) process, the maximum electron Lorentz factor in the electron energy distribution cannot significantly exceed $\sim 10^4$ for reasonable values of the magnetic field strength on the order of $\sim 0.1$G. 
Hence, one would not expect very high energy $\gamma$-ray (VHE, $E>100$GeV) emission from LBLs. Surprisingly, AP Librae has been clearly detected by observations with the H.E.S.S. telescope array \cite{hea15}, and the SED extends to energies of a few TeV. Currently, AP Librae is the only LBL listed in the TeVCat\footnote{\url{http://tevcat.uchicago.edu/}}, a catalog that gathers all sources detected above $100$GeV. Despite selection biases, this makes AP Librae an exceptional source. High resolution X-ray observations led to the detection of extended X-ray jets in many AGN. However, due to the small viewing angle, it is surprising to observe extended X-ray emission in blazars. An extended X-ray jet in AP Librae has been detected by \cite{kwt13}; the source has thus become one of only six BL Lac objects listed in the X-JET database.\footnote{\url{http://hea-www.cfa.harvard.edu/XJET/}} Of these six objects three exhibit a synchrotron dominated X-ray spectrum, while the other three, including AP Librae, are IC dominated. The X-ray morphology of AP Librae's jet follows exactly the radio morphology as observed with the VLA. The detection of the extended X-ray jet further demonstrates AP Librae's peculiarity. The detection in the VHE band implies an extremely broad high-energy component spanning more than 10 orders of magnitude in energy. Since the synchrotron emission cuts off below the X-ray band, the resulting narrow electron distribution cannot explain the broad IC component in the usual one-zone blazar model. In this proceeding, we summarize our recent result \cite{zw15} that the extended jet dominates the total SED in the VHE $\gamma$-ray regime. Our model explains the VHE emission as originating mostly from the extended jet. \section{IMPORTANT OBSERVATIONS} We describe only the important observations that are necessary for our jet model.
The remaining multiwavelength data are taken from the following papers or publicly available databases: radio \cite{kea81, pla1}, and IR/optical/UV \cite{kwt13, hbs15}. \subsection{Radio} \subsubsection{VLA} VLA observations \cite{cea99} at $1.36\,$GHz led to the detection of the extended jet on arcsec scales, emerging in a south-westerly direction. Beyond $\sim 12\,$arcsec the jet bends to the north-west for another $\sim 10-20\,$arcsec. The spectral point of the extended radio jet is marked by the blue square at $1.36\,$GHz. \subsubsection{MOJAVE} The MOJAVE program \cite{lea09} utilizes VLBI radio observations at $15\,$GHz to monitor blazars on milli-arcsec scales over long time periods. AP Librae was observed in this program over the course of $\sim 15$ years. The data set reveals a steady core component (``component 0''), which is weakly variable (flux within a factor of 2). Its flux is marked with the open diamond in Fig. \ref{fig:sed}. Furthermore, a continuous jet is measured on scales of $\sim 10$milli-arcsec, which emerges in a southerly direction. Beyond this scale only knots in a south-westerly direction are detected. This is the same position angle as seen for the arcsec-scale jet in the VLA observations. The total flux (containing both the component 0 and the extended flux) obtained from the MOJAVE data fits with the spectrum of the non-VLBI radio data (c.f. Fig. \ref{fig:sed}). Hence, these data points are influenced by the extended component, confirming a commonly made assumption about the radio flux points of blazars. The movement of the fastest knots led to the determination of a maximum apparent jet speed of $6.8c$. \subsection{X-rays} X-ray observations with Chandra revealed the extended jet on arcsec scales \cite{kwt13}. The photon index of the jet is $\Gamma = 1.8\pm 0.1$, thus indicating an IC dominance in this spectral regime. The spectral points are marked by blue squares in Figure \ref{fig:sed}.
The morphological structure is very similar to the radio morphology of the VLA observation. The Chandra spectrum of the core is also hard with $\Gamma = 1.58\pm 0.04$. A 100-month average of observations with the Swift-BAT instrument \cite{pal100} reveals a flux level of the hard X-rays that is in straight extrapolation of the Chandra core spectrum. Thus, the core X-ray spectrum can be described by a single power-law over more than 2 orders of magnitude in energy. \subsection{$\gamma$-rays} Due to a lack of spatial resolution of the $\gamma$-ray instruments, the jet cannot be resolved. Data from Fermi and H.E.S.S. are taken from \cite{hea15}. The $\gamma$-ray SED can be described by a flat level below a few GeV, followed by a power-law with larger index up to a few TeV. There is no apparent cut-off in the H.E.S.S. spectrum. \section{THE JET MODEL} \begin{figure} \centering{\includegraphics[width=0.8\textwidth]{APLibEXTT.pdf}} \caption{SED of AP Librae with the modeling of the blob and the kpc-scale jet. The data points are for the core (black dots), the kpc-scale jet (blue squares), the steady component of the MOJAVE observation (open diamond), and the $\gamma$-ray data (red squares) where the jet cannot be resolved. The line styles refer to the blob (thick dashed red), and the kpc-scale jet (thin solid lines). The line colors of the jet model imply synchrotron (green), and IC/CMB (red) emission. 
The thick red line is the sum of the blob and the jet.} \label{fig:sed} \end{figure} \begin{table} \begin{tabular}{lcc} & blob & kpc-scale jet \\ \hline $n_e$ [cm$^{-3}$] & $2.9\times 10^{2}$ & $1.6\times 10^{-9}$ \\ $\gamma_{min}$ & $1.6\times 10^2$ & $4.0\times 10^1$ \\ $\gamma_{br}$ & $9.1\times 10^3$ & $6.2\times 10^5$ \\ $\gamma_{max}$ & $1.0\times 10^4$ & $5.0\times 10^6$ \\ $s_1$ & $2.0$ & $2.6$ \\ $B_{0}$ [G] & $0.04$ & $2.3\times 10^{-6}$ \\ $\Gamma_{b}$ & $10$ & $10$ \\ $R$ [cm] & $9.0\times 10^{15}$ & $1.5\times 10^{21}$ \\ $\vartheta_{obs}$ [deg] & $2.0$ & $4.0$ \\ $L_{disk}$ [erg/s] & $1.3\times 10^{44}$ & - \\ $L_{torus}$ [erg/s] & $4.0\times 10^{42}$ & - \\ $l_{jet}^{\prime}$ [pc] & - & $1.0\times 10^{4}$ \\ \end{tabular} \caption{Parameters for the fit. $n_e$ is the electron density; $\gamma_{min}$, $\gamma_{br}$ and $\gamma_{max}$ are the minimum, break and maximum electron Lorentz factor, respectively; $s_1$ is the electron spectral index below the break; $B_0$ is the magnetic field strength; $\Gamma_b$ is the bulk Lorentz factor; $R$ is the radius of the components; $\vartheta_{obs}$ is the observation angle; $L_{disk}$ and $L_{torus}$ are the luminosity of the disk and the dusty torus, respectively; $l_{jet}^{\prime}$ is the observed jet length.} \label{tab:inputcom} \end{table} The data together with the modeling is presented in Figure \ref{fig:sed}. The respective parameter sets are given in Table \ref{tab:inputcom}. Blazars are commonly described by a one-zone model, which in most cases gives successful fits to the SED. This model fails for AP Librae. The most important constraint comes from the cut-off in the synchrotron component at UV energies and the fact that the Swift-BAT flux point is slightly above the $\gamma$-ray flux level. The latter implies that the maximum of the IC component must be located between $100\,$keV and $100\,$MeV. Both constraints require a maximum electron Lorentz factor $\gamma_2\lesssim 10^4$. 
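The kinematics implied by the tabulated blob parameters can be cross-checked with the standard beaming formulas. The numbers below are taken from Table \ref{tab:inputcom} ($\Gamma_b=10$, $\vartheta_{obs}=2^\circ$, $B=0.04\,$G, $\gamma_{max}=10^4$); the synchrotron-frequency prefactor $\nu\simeq 4.2\times 10^6\,\gamma^2 B\,\delta$ Hz is a textbook order-of-magnitude estimate, and the redshift factor is neglected:

```python
import math

# Blob parameters as quoted in the table above
Gamma = 10.0                 # bulk Lorentz factor
theta = math.radians(2.0)    # viewing angle
B = 0.04                     # magnetic field [G]
gamma_max = 1.0e4            # maximum electron Lorentz factor

beta = math.sqrt(1.0 - 1.0 / Gamma**2)
# Apparent superluminal speed and Doppler factor
beta_app = beta * math.sin(theta) / (1.0 - beta * math.cos(theta))
delta = 1.0 / (Gamma * (1.0 - beta * math.cos(theta)))
# Observed characteristic synchrotron frequency (rough estimate, z neglected)
nu_syn = 4.2e6 * gamma_max**2 * B * delta   # Hz

print(f"beta_app ~ {beta_app:.1f} c, delta ~ {delta:.1f}, nu_syn ~ {nu_syn:.2e} Hz")
```

With these values $\beta_{app}\approx 6$, compatible with the $6.8c$ MOJAVE measurement quoted earlier, and $\nu_{syn}$ falls at a few $10^{14}\,$Hz, i.e. the synchrotron component indeed cuts off below the UV band as required by the argument above.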
From below, the electron distribution is constrained by the hard X-ray spectrum. Attributing the X-ray core data to synchrotron-self Compton (SSC) emission requires a high minimum electron Lorentz factor of $\gtrsim 100$, because the SSC spectrum is a pure power-law only below the photon energy associated with the minimum electron energy. Above this energy the SSC spectrum is significantly curved due to the convolution of the broken electron distribution with the synchrotron spectrum and the complicated IC cross section. Thus, the Chandra and the Swift-BAT spectra cannot be fitted if the minimum Lorentz factor is set below $100$.\footnote{We note that a successful fit of the X-ray spectrum is also possible with a hard electron distribution with spectral index $s=1.5$. However, the issue of the narrow electron distribution is not circumvented.} In turn, the electron distribution function, which explains the synchrotron core component and the X-ray core data, is very narrow, and cannot account for the VHE $\gamma$-ray emission. The resulting model is presented with a thick dashed line in Fig. \ref{fig:sed}. The fit is good for most of the synchrotron component and matches the ``component 0'' flux point of the MOJAVE observations. The rest of the radio emission includes contributions from the milli-arcsecond jet. The model describing these data is not shown here for clarity, and can be found in \cite{zw15}. The flattening of the Swift-UVOT spectrum at higher UV energies suggests the addition of another component, which is usually attributed to the accretion disk. The accretion disk can illuminate the gas in close proximity of the black hole, which is interpreted here as a low-luminosity torus. Both components, whose parameters are given in Table \ref{tab:inputcom}, are also not shown for clarity. These two photon fields can serve as target photons for IC scattering by the blob electrons.
The resulting flux contributes in the $\gamma$-ray range, and can explain the flat flux level below a few GeV. However, due to the low maximum electron Lorentz factor, these emission processes cannot explain the VHE emission, either. In order to explain the VHE $\gamma$-ray spectrum we consider the extended jet. It is modeled as the combination of a number of self-similar zones. The combined emission of all zones gives the total flux of the jet. The model of the kpc-scale jet is constrained by the VLA and Chandra data. Since the flux in both radio and X-rays drops beyond the bend \cite{kwt13}, we only consider the part closer than $\sim 12$arcsec from the core, corresponding to a projected length of $\sim 10\,$kpc. The synchrotron and IC/CMB emission of the kpc-scale jet gives a good fit in the radio and the Chandra energy range. Interestingly, the extrapolation of the Chandra jet spectrum intersects the Fermi data roughly at the break at a few GeV. This led us to the hypothesis that the VHE emission could originate in the kpc-scale jet. In fact, by choosing a maximum electron Lorentz factor of $\sim 5\times 10^6$ in the kpc-jet, we successfully reproduce the VHE spectrum. \section{DISCUSSION \& CONCLUSIONS} As was shown in the previous section, the extended jet plays an important role in the radiative output of AP Librae. It is responsible for a significant part of the SED. Most importantly, the jet could be responsible for the VHE emission detected with the H.E.S.S. telescopes. This is an unusual interpretation, since it implies highly relativistic flows and continuous reacceleration of the jet material on very large scales, given that the modeling gives a deprojected length of AP Librae's jet of $140$kpc. Interestingly, in no observation on any scale has the counter-jet been detected. Under the assumption that both jets are intrinsically identical, the counter-jet must be strongly de-beamed even at large distances from the galactic center.
Thus, we can conclude that even at large distances the jet must exhibit relativistic speeds without significant deceleration. The reacceleration of particles could be achieved by small-scale turbulence within the jet. Shear-layer acceleration might also be a viable alternative, which could be acting on large scales \cite{lbs13}. None of these possibilities would disturb the observed homogeneity of the jet, because they act on size scales on the order of a supernova remnant and cannot be resolved at the distance of AP Librae. Since the electrons emitting in the X-ray band are less energetic than the radio-emitting electrons, the power in relativistic particles is high and in some jets exceeds the Eddington luminosity even without accounting for cold protons. Our modeling suggests that in AP Librae the Eddington luminosity constraint holds for the large-scale jet even if the dominating contribution of cold protons is included. Below we present a few suggestions for how this model could be tested. Due to the required highly relativistic electrons ($\gamma\gtrsim 10^6$), the jet should emit synchrotron emission up to the UV band. Since the contribution of the galaxy in the UV band is much reduced compared to the optical and IR bands, the jet should be detectable despite its proximity to the bright core. If the jet is not detected in the UV with an upper limit below the predicted flux of our model curve, the IC/CMB process is ruled out as the origin of the VHE emission, because the required electron energies cannot be matched. Within the kpc-scale jet the relative flux decrease along the jet is expected to differ between the synchrotron and IC domains due to different beaming dependencies \cite{d95}. The current observations do not allow us to quantify this effect. More sensitive mapping at radio and X-ray frequencies could test this prediction.
Observations at HE $\gamma$-ray energies have been used to rule out the IC/CMB model in a number of extended X-ray jets \cite{mg14,mea15}, since the model overpredicts the upper limits derived from Fermi observations. Given that our model fits the HE and VHE data of AP Librae well, this test is not applicable here. Nevertheless, the HE and VHE emission can be used to indirectly test the model. We do not expect variability in the extended jet emission, because such a large object must be steady over time scales much longer than a few times $l_{jet}/c$. Thus, the flux above a few GeV should not drop below the observed level. An increase in the flux at these energies during a flare might still originate from an additional flaring component and would not rule out the above model. However, such a component should also change the spectrum in the UV and potentially in the X-ray band. Given that AP Librae's host is a bright elliptical galaxy, one can expect absorption of TeV photons by the galactic optical starlight, if the TeV photons originate from the central region. This effect is on the order of $\sim 6\%$ \cite{zcw16}, and thus below the spectral sensitivities of the current generation of Cherenkov experiments. However, the future Cherenkov Telescope Array might have the required sensitivity to detect the small absorption feature. This could be used to narrow down the location of the TeV emission region. In our jet model, no absorption of TeV emission within the host galaxy is expected. More observations in all energy bands are strongly encouraged. If confirmed, the emission of TeV radiation by an extended, more than $100$kpc long jet would be a major result with strong implications for the transport and acceleration processes on large spatial scales. \section{ACKNOWLEDGMENTS} The authors wish to thank Markus B\"ottcher for the numerical code, which is described in detail in \cite{bea13}.
Support by the German Ministry for Education and Research (BMBF) through Verbundforschung Astroteilchenphysik grant 05A11VH2 is gratefully acknowledged. This research has made use of data from the MOJAVE database that is maintained by the MOJAVE team \cite{lea09}. This paper is based on observations obtained with Planck (http://www.esa.int/Planck), an ESA science mission with instruments and contributions directly funded by ESA Member States, NASA, and Canada.
using System; using System.Collections.Generic; using System.ComponentModel.Composition; using System.IO; using System.Linq; using System.Text; using System.Threading.Tasks; using System.Windows; using System.Windows.Input; using System.Windows.Media; using System.Xml.Serialization; using GW2PAO.Infrastructure; using GW2PAO.Infrastructure.Hotkeys; using GW2PAO.Infrastructure.Hotkeys.Interfaces; using GW2PAO.Infrastructure.Interfaces; using Microsoft.Practices.Prism.Commands; using Microsoft.Practices.Prism.Mvvm; using Newtonsoft.Json; using NLog; namespace GW2PAO.ViewModels { [Export(typeof(HotkeySettingsViewModel))] public class HotkeySettingsViewModel : BindableBase, ISettingsViewModel { /// <summary> /// Default logger /// </summary> private static Logger logger = LogManager.GetCurrentClassLogger(); private Hotkey toggleAllWindowsHotkey; private Hotkey toggleInteractiveWindowsHotkey; private Hotkey toggleNotificationWindowBordersHotkey; private Hotkey toggleAutoFadeBordersHotkey; private Hotkey toggleOverlayMenuIconHotkey; private Hotkey toggleWorldBossTimerHotkey; private Hotkey toggleMetaEventTimerHotkey; private Hotkey toggleDungeonsTrackerHotkey; private Hotkey toggleDungeonTimerHotkey; private Hotkey togglePriceTrackerHotkey; private Hotkey toggleWvWTrackerHotkey; private Hotkey toggleZoneAssistantHotkey; private Hotkey toggleTaskTrackerHotkey; private Hotkey toggleTeamspeakTrackerHotkey; private Hotkey toggleWebBrowserHotkey; private Hotkey toggleMapHotkey; private Hotkey toggleDayNightTimerHotkey; /// <summary> /// Header for the settings /// </summary> [JsonIgnore, XmlIgnore] public string SettingsHeader { get { return Properties.Resources.Hotkeys; } } /// <summary> /// The global hotkey manager /// </summary> [JsonIgnore, XmlIgnore] public IHotkeyManager GlobalHotkeyManager { get; private set; } /// <summary> /// Hotkey to toggle all windows as visible/hidden /// </summary> public Hotkey ToggleAllWindowsHotkey { get { return this.toggleAllWindowsHotkey; 
} set { this.SetProperty(ref this.toggleAllWindowsHotkey, value); } } /// <summary> /// Hotkey to toggle interactive windows on/off /// </summary> public Hotkey ToggleInteractiveWindowsHotkey { get { return this.toggleInteractiveWindowsHotkey; } set { this.SetProperty(ref this.toggleInteractiveWindowsHotkey, value); } } /// <summary> /// Hotkey to toggle notification window borders on/off /// </summary> public Hotkey ToggleNotificationWindowBordersHotkey { get { return this.toggleNotificationWindowBordersHotkey; } set { this.SetProperty(ref this.toggleNotificationWindowBordersHotkey, value); } } /// <summary> /// Hotkey to toggle the auto-fade borders feature on/off /// </summary> public Hotkey ToggleAutoFadeBordersHotkey { get { return this.toggleAutoFadeBordersHotkey; } set { this.SetProperty(ref this.toggleAutoFadeBordersHotkey, value); } } /// <summary> /// Hotkey to toggle the overlay menu icon on/off /// </summary> public Hotkey ToggleOverlayMenuIconHotkey { get { return this.toggleOverlayMenuIconHotkey; } set { this.SetProperty(ref this.toggleOverlayMenuIconHotkey, value); } } /// <summary> /// Hotkey to toggle the world boss timers on/off /// </summary> public Hotkey ToggleWorldBossTimerHotkey { get { return this.toggleWorldBossTimerHotkey; } set { this.SetProperty(ref this.toggleWorldBossTimerHotkey, value); } } /// <summary> /// Hotkey to toggle the meta event timers on/off /// </summary> public Hotkey ToggleMetaEventTimerHotkey { get { return this.toggleMetaEventTimerHotkey; } set { this.SetProperty(ref this.toggleMetaEventTimerHotkey, value); } } /// <summary> /// Hotkey to toggle the dungeons tracker on/off /// </summary> public Hotkey ToggleDungeonsTrackerHotkey { get { return this.toggleDungeonsTrackerHotkey; } set { this.SetProperty(ref this.toggleDungeonsTrackerHotkey, value); } } /// <summary> /// Hotkey to toggle the dungeon timer on/off /// </summary> public Hotkey ToggleDungeonTimerHotkey { get { return this.toggleDungeonTimerHotkey; } set {
this.SetProperty(ref this.toggleDungeonTimerHotkey, value); } } /// <summary> /// Hotkey to toggle the price tracker on/off /// </summary> public Hotkey TogglePriceTrackerHotkey { get { return this.togglePriceTrackerHotkey; } set { this.SetProperty(ref this.togglePriceTrackerHotkey, value); } } /// <summary> /// Hotkey to toggle the WvW tracker on/off /// </summary> public Hotkey ToggleWvWTrackerHotkey { get { return this.toggleWvWTrackerHotkey; } set { this.SetProperty(ref this.toggleWvWTrackerHotkey, value); } } /// <summary> /// Hotkey to toggle the zone assistant on/off /// </summary> public Hotkey ToggleZoneAssistantHotkey { get { return this.toggleZoneAssistantHotkey; } set { this.SetProperty(ref this.toggleZoneAssistantHotkey, value); } } /// <summary> /// Hotkey to toggle the task tracker on/off /// </summary> public Hotkey ToggleTaskTrackerHotkey { get { return this.toggleTaskTrackerHotkey; } set { this.SetProperty(ref this.toggleTaskTrackerHotkey, value); } } /// <summary> /// Hotkey to toggle the teamspeak overlay on/off /// </summary> public Hotkey ToggleTeamspeakTrackerHotkey { get { return this.toggleTeamspeakTrackerHotkey; } set { this.SetProperty(ref this.toggleTeamspeakTrackerHotkey, value); } } /// <summary> /// Hotkey to toggle the web browser on/off /// </summary> public Hotkey ToggleWebBrowserHotkey { get { return this.toggleWebBrowserHotkey; } set { this.SetProperty(ref this.toggleWebBrowserHotkey, value); } } /// <summary> /// Hotkey to toggle the map overlay on/off /// </summary> public Hotkey ToggleMapHotkey { get { return this.toggleMapHotkey; } set { this.SetProperty(ref this.toggleMapHotkey, value); } } /// <summary> /// Hotkey to toggle the day/night timer on/off /// </summary> public Hotkey ToggleDayNightTimerHotkey { get { return this.toggleDayNightTimerHotkey; } set { this.SetProperty(ref this.toggleDayNightTimerHotkey, value); } } /// <summary> /// Default constructor /// </summary> /// <param name="globalHotkeyManager">The global 
hotkey manager</param> [ImportingConstructor] public HotkeySettingsViewModel(IHotkeyManager globalHotkeyManager) { this.GlobalHotkeyManager = globalHotkeyManager; } /// <summary> /// Initializes all hotkeys /// </summary> public void InitializeHotkeys() { // Initialize as blank at first this.ToggleWorldBossTimerHotkey = new Hotkey(Key.None, KeyModifier.None); this.ToggleMetaEventTimerHotkey = new Hotkey(Key.None, KeyModifier.None); this.ToggleDungeonsTrackerHotkey = new Hotkey(Key.None, KeyModifier.None); this.ToggleDungeonTimerHotkey = new Hotkey(Key.None, KeyModifier.None); this.TogglePriceTrackerHotkey = new Hotkey(Key.None, KeyModifier.None); this.ToggleWvWTrackerHotkey = new Hotkey(Key.None, KeyModifier.None); this.ToggleZoneAssistantHotkey = new Hotkey(Key.None, KeyModifier.None); this.ToggleTaskTrackerHotkey = new Hotkey(Key.None, KeyModifier.None); this.ToggleTeamspeakTrackerHotkey = new Hotkey(Key.None, KeyModifier.None); this.ToggleWebBrowserHotkey = new Hotkey(Key.None, KeyModifier.None); this.ToggleInteractiveWindowsHotkey = new Hotkey(Key.None, KeyModifier.None); this.ToggleAllWindowsHotkey = new Hotkey(Key.None, KeyModifier.None); this.ToggleNotificationWindowBordersHotkey = new Hotkey(Key.None, KeyModifier.None); this.ToggleAutoFadeBordersHotkey = new Hotkey(Key.None, KeyModifier.None); this.ToggleOverlayMenuIconHotkey = new Hotkey(Key.None, KeyModifier.None); this.ToggleMapHotkey = new Hotkey(Key.None, KeyModifier.None); this.ToggleDayNightTimerHotkey = new Hotkey(Key.None, KeyModifier.None); // Try to load the hotkeys from user settings if (!string.IsNullOrEmpty(Properties.Settings.Default.Hotkeys)) { logger.Debug("Loading hotkeys"); try { var loadedHotkeys = JsonConvert.DeserializeObject<HotkeySettingsViewModel>(Properties.Settings.Default.Hotkeys); if (loadedHotkeys != null) { if (loadedHotkeys.ToggleAllWindowsHotkey != null) this.ToggleAllWindowsHotkey = loadedHotkeys.ToggleAllWindowsHotkey; if (loadedHotkeys.ToggleInteractiveWindowsHotkey != 
null) this.ToggleInteractiveWindowsHotkey = loadedHotkeys.ToggleInteractiveWindowsHotkey; if (loadedHotkeys.ToggleNotificationWindowBordersHotkey != null) this.ToggleNotificationWindowBordersHotkey = loadedHotkeys.ToggleNotificationWindowBordersHotkey; if (loadedHotkeys.ToggleAutoFadeBordersHotkey != null) this.ToggleAutoFadeBordersHotkey = loadedHotkeys.ToggleAutoFadeBordersHotkey; if (loadedHotkeys.ToggleOverlayMenuIconHotkey != null) this.ToggleOverlayMenuIconHotkey = loadedHotkeys.ToggleOverlayMenuIconHotkey; if (loadedHotkeys.ToggleWorldBossTimerHotkey != null) this.ToggleWorldBossTimerHotkey = loadedHotkeys.ToggleWorldBossTimerHotkey; if (loadedHotkeys.ToggleMetaEventTimerHotkey != null) this.ToggleMetaEventTimerHotkey = loadedHotkeys.ToggleMetaEventTimerHotkey; if (loadedHotkeys.ToggleDungeonsTrackerHotkey != null) this.ToggleDungeonsTrackerHotkey = loadedHotkeys.ToggleDungeonsTrackerHotkey; if (loadedHotkeys.ToggleDungeonTimerHotkey != null) this.ToggleDungeonTimerHotkey = loadedHotkeys.ToggleDungeonTimerHotkey; if (loadedHotkeys.TogglePriceTrackerHotkey != null) this.TogglePriceTrackerHotkey = loadedHotkeys.TogglePriceTrackerHotkey; if (loadedHotkeys.ToggleWvWTrackerHotkey != null) this.ToggleWvWTrackerHotkey = loadedHotkeys.ToggleWvWTrackerHotkey; if (loadedHotkeys.ToggleZoneAssistantHotkey != null) this.ToggleZoneAssistantHotkey = loadedHotkeys.ToggleZoneAssistantHotkey; if (loadedHotkeys.ToggleTaskTrackerHotkey != null) this.ToggleTaskTrackerHotkey = loadedHotkeys.ToggleTaskTrackerHotkey; if (loadedHotkeys.ToggleTeamspeakTrackerHotkey != null) this.ToggleTeamspeakTrackerHotkey = loadedHotkeys.ToggleTeamspeakTrackerHotkey; if (loadedHotkeys.ToggleWebBrowserHotkey != null) this.ToggleWebBrowserHotkey = loadedHotkeys.ToggleWebBrowserHotkey; if (loadedHotkeys.ToggleMapHotkey != null) this.ToggleMapHotkey = loadedHotkeys.ToggleMapHotkey; if (loadedHotkeys.ToggleDayNightTimerHotkey != null) this.ToggleDayNightTimerHotkey = 
loadedHotkeys.ToggleDayNightTimerHotkey; } else { logger.Warn("Unable to load all user hotkeys!"); } } catch (Exception ex) { logger.Warn("Unable to load user hotkeys! Exception: ", ex); } } // Register for the pause/resume commands HotkeyCommands.PauseHotkeys.RegisterCommand(new DelegateCommand(this.GlobalHotkeyManager.PauseHotkeys)); HotkeyCommands.ResumeHotkeys.RegisterCommand(new DelegateCommand(this.GlobalHotkeyManager.ResumeHotkeys)); // Wire the hotkeys up this.ToggleAllWindowsHotkey.Pressed += (o, e) => HotkeyCommands.ToggleAllWindowsCommand.Execute(null); this.ToggleInteractiveWindowsHotkey.Pressed += (o, e) => HotkeyCommands.ToggleInteractiveWindowsCommand.Execute(null); this.ToggleNotificationWindowBordersHotkey.Pressed += (o, e) => HotkeyCommands.ToggleNotificationWindowBordersCommand.Execute(null); this.ToggleAutoFadeBordersHotkey.Pressed += (o, e) => HotkeyCommands.ToggleAutoFadeBordersCommand.Execute(null); this.ToggleOverlayMenuIconHotkey.Pressed += (o, e) => HotkeyCommands.ToggleOverlayMenuIconCommand.Execute(null); this.ToggleWorldBossTimerHotkey.Pressed += (o, e) => HotkeyCommands.ToggleWorldBossTimersCommand.Execute(null); this.ToggleMetaEventTimerHotkey.Pressed += (e, r) => HotkeyCommands.ToggleMetaEventTimersCommand.Execute(null); this.ToggleDungeonsTrackerHotkey.Pressed += (o, e) => HotkeyCommands.ToggleDungeonsTrackerCommand.Execute(null); this.ToggleDungeonTimerHotkey.Pressed += (o, e) => HotkeyCommands.ToggleDungeonTimerCommand.Execute(null); this.TogglePriceTrackerHotkey.Pressed += (o, e) => HotkeyCommands.TogglePriceTrackerCommand.Execute(null); this.ToggleWvWTrackerHotkey.Pressed += (o, e) => HotkeyCommands.ToggleWvWTrackerCommmand.Execute(null); this.ToggleZoneAssistantHotkey.Pressed += (o, e) => HotkeyCommands.ToggleZoneAssistantCommand.Execute(null); this.ToggleTaskTrackerHotkey.Pressed += (o, e) => HotkeyCommands.ToggleTaskTrackerCommand.Execute(null); this.ToggleTeamspeakTrackerHotkey.Pressed += (o, e) => 
HotkeyCommands.ToggleTeamspeakOverlayCommand.Execute(null); this.ToggleWebBrowserHotkey.Pressed += (o, e) => HotkeyCommands.ToggleWebBrowserCommand.Execute(null); this.ToggleMapHotkey.Pressed += (o, e) => HotkeyCommands.ToggleMapOverlayCommand.Execute(null); this.ToggleDayNightTimerHotkey.Pressed += (o, e) => HotkeyCommands.ToggleDayNightTimerCommand.Execute(null); // Register all hotkeys that are enabled if (this.ToggleAllWindowsHotkey.Key != Key.None) this.GlobalHotkeyManager.Register(this.ToggleAllWindowsHotkey); if (this.ToggleInteractiveWindowsHotkey.Key != Key.None) this.GlobalHotkeyManager.Register(this.ToggleInteractiveWindowsHotkey); if (this.ToggleNotificationWindowBordersHotkey.Key != Key.None) this.GlobalHotkeyManager.Register(this.ToggleNotificationWindowBordersHotkey); if (this.ToggleAutoFadeBordersHotkey.Key != Key.None) this.GlobalHotkeyManager.Register(this.ToggleAutoFadeBordersHotkey); if (this.ToggleOverlayMenuIconHotkey.Key != Key.None) this.GlobalHotkeyManager.Register(this.ToggleOverlayMenuIconHotkey); if (this.ToggleWorldBossTimerHotkey.Key != Key.None) this.GlobalHotkeyManager.Register(this.ToggleWorldBossTimerHotkey); if (this.ToggleMetaEventTimerHotkey.Key != Key.None) this.GlobalHotkeyManager.Register(this.ToggleMetaEventTimerHotkey); if (this.ToggleDungeonsTrackerHotkey.Key != Key.None) this.GlobalHotkeyManager.Register(this.ToggleDungeonsTrackerHotkey); if (this.ToggleDungeonTimerHotkey.Key != Key.None) this.GlobalHotkeyManager.Register(this.ToggleDungeonTimerHotkey); if (this.TogglePriceTrackerHotkey.Key != Key.None) this.GlobalHotkeyManager.Register(this.TogglePriceTrackerHotkey); if (this.ToggleWvWTrackerHotkey.Key != Key.None) this.GlobalHotkeyManager.Register(this.ToggleWvWTrackerHotkey); if (this.ToggleZoneAssistantHotkey.Key != Key.None) this.GlobalHotkeyManager.Register(this.ToggleZoneAssistantHotkey); if (this.ToggleZoneAssistantHotkey.Key != Key.None) this.GlobalHotkeyManager.Register(this.ToggleZoneAssistantHotkey); if 
(this.ToggleTaskTrackerHotkey.Key != Key.None) this.GlobalHotkeyManager.Register(this.ToggleTaskTrackerHotkey); if (this.ToggleTeamspeakTrackerHotkey.Key != Key.None) this.GlobalHotkeyManager.Register(this.ToggleTeamspeakTrackerHotkey); if (this.ToggleWebBrowserHotkey.Key != Key.None) this.GlobalHotkeyManager.Register(this.ToggleWebBrowserHotkey); if (this.ToggleMapHotkey.Key != Key.None) this.GlobalHotkeyManager.Register(this.ToggleMapHotkey); if (this.ToggleDayNightTimerHotkey.Key != Key.None) this.GlobalHotkeyManager.Register(this.ToggleDayNightTimerHotkey); } /// <summary> /// Saves all hotkeys in the user settings /// </summary> public void SaveHotkeys() { string hotkeyData = JsonConvert.SerializeObject(this); Properties.Settings.Default.Hotkeys = hotkeyData; Properties.Settings.Default.Save(); } } }
Q: find only the first occurrence using only grep

Suppose I have a file with many words, and I want to find only the first word matching the pattern "xyz". How do I do that if there are multiple words with this pattern on the same line? -m returns all the matching words in the first line in which the pattern matches. I need only a grep command.

A: By default grep prints the lines matching a pattern, so if the pattern appears one or more times in a line, grep will print that whole line. The flag -m 1 tells grep to stop after the first matching line (-m 7 would stop after the first 7 matching lines, and so on). So this should do what you want (I haven't tested it):

grep -o -m 1 xyz myfile | head -1

-o prints each match on its own line, and head -1 keeps only the first one.

Edit: as pointed out by @Kusalananda, you don't strictly need the -m flag, but using it means grep won't need to scan the whole file, and will output the result faster, especially if myfile is a large file.

A: The answer to your question is in the grep man page:

grep -m1 'searchstring' file_name

The -m<number> option is the key: -m1 stops after the first matching line, -m2 after the first 2 matching lines, and so on.
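A quick way to verify the behaviour described in the first answer is to run it against a small throwaway file (the file name and pattern below are illustrative):

```shell
# Build a test file: words containing "xyz" appear on several lines,
# twice on the second line.
printf 'aaa xyzb ccc\nfoo xyz1 bar xyz2\nxyz3 last\n' > myfile

# -o prints every match on its own line, -m 1 stops grep after the
# first *matching* line, and head -1 keeps only the first occurrence.
grep -o -m 1 'xyz[0-9a-z]*' myfile | head -1    # prints: xyzb
```

Dropping -m 1 gives the same final output here, but grep would keep scanning the rest of the file instead of stopping at the first matching line.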
\section{Introduction} For \emph{any} quantum field, the \emph{vacuum} is defined as its ground state. Unlike the classical case, this ground state, due to the uncertainty principle, is not empty, but filled with field fluctuations around a zero mean value. Moreover, this vacuum state depends on the field boundary conditions~: if they change, there will be a correspondingly different vacuum (whose fluctuations will have a different wavelength spectrum). Thus a quantum vacuum state may be equivalent to real particles of a new vacuum after a change in boundary conditions. If we consider the \emph{electromagnetic field,} the peculiar nature of the quantum vacuum has experimentally observable consequences in the realm of microscopic physics, such as natural widths of spectral lines, the Lamb shift, the anomalous magnetic moment of the electron and many more. It is perhaps even more striking that there are also observable effects at a macroscopic level. The Casimir force (static Casimir effect \cite{Cas, BMM}) is one of these macroscopic effects which has been observed experimentally. A \emph{dynamic Casimir effect} is also predicted to occur when one boundary is accelerated in a nonuniform way, as for instance when a metal surface undergoes harmonic oscillations. In this case a number of virtual photons from the vacuum are converted into real photons (``Casimir radiation''), while the moving metal surface loses energy \cite{Moo,FD, KG}. It is worth noting that, whereas the static Casimir effect has been observed by several experiments \cite{Spa}, the Casimir radiation is to date unobserved, in spite of the abundant theoretical work done in this field \cite{LJR1, Dod, CDM, SSPS, LJR3} (see \cite{Dod} for a historical review and a bibliography of the relevant studies). We argue that this lack of experimental activity stems from the rooted idea of using mechanical oscillations. We shall show that this is unfeasible with present-day techniques.
Here we shall present a \emph{new experimental approach} where an effective motion is generated by the excitation of a plasma in a semiconductor. In terms of power this effective motion is much more convenient than a mechanical motion, since in a metal mirror only the conduction electrons reflect the electromagnetic waves, whereas a great amount of power would be wasted in the acceleration of the much heavier nuclei. Some authors \cite{Dod2, CDLM, UPSS} have made use of our idea in order to construct a concrete model for theoretical calculations. In this paper we wish to discuss the experimental details of our apparatus and to show the feasibility of a measurement. \vspace*{\baselineskip} We shall now \emph{outline the theoretical situation.} The simplest system that can produce Casimir radiation is a single mirror, harmonically oscillating in a direction perpendicular to its surface. In this case the number $N$ of created photons should be \cite{LJR3}~: \begin{equation} N = \frac{\omega t}{3 \pi}\left( \frac{v}{c} \right)^2 , \end{equation} where $\omega$ is the angular frequency of the mirror motion, $t$ is the duration of the motion, $v$ is the maximum speed reached in the oscillation, and $c$ the speed of light. Even if we stretch all parameters to their utmost values ($\omega \sim 10^{10}$~rad/s, $t \sim 1$~s and $v/c \sim 10^{-8}$) the number of produced photons is not detectable. A great theoretical progress was to realize that, when the oscillating mirror is a wall of an electromagnetic resonant cavity, the cavity itself behaves as a multiplier for the produced radiation, if the frequency of the moving wall is twice one of the proper electromagnetic cavity frequencies (parametric resonance). It is however disappointing that the formulae developed so far using different approaches (in the case of parametric resonance) are not the same and even irreconcilable. 
Apart from minor differences the formulae for the produced photons found in literature \cite{LJR1, Dod, CDM, SSPS} can be brought back to either of two forms~: \begin{eqnarray} N & = & \frac{\omega t}{2 \pi} \left( \frac{v}{c} \right)^2 Q , \label{photons} \\ N & = & \sinh^2 \left( \omega t \, \frac{v}{c} \, \right) , \label{exponential} \end{eqnarray} where $Q$ is the quality factor of the cavity. We shall show that with our proposed method even in the worst case (formula (\ref{photons})) the number of produced photons is sufficient for detection. \vspace*{\baselineskip} An \emph{experiment based on} the \emph{mechanical motion} of a resonant cavity wall would be \emph{too difficult for present-day techniques.} The highest frequency attainable for mechanical motion is in the gigahertz range \cite{BD, YB} and following the parametric amplification request this implies microwave cavities with dimensions ranging from 1~cm to 1~m. The motion of a single wall of such a cavity requires a huge amount of power. In fact a wall of volume $V$ made of a material with mass density $\rho$ vibrating at an angular frequency $\omega$ with an amplitude $\delta x$ has a maximum kinetic energy $E = \frac{1}{2} \rho V \omega^2 \delta x^2$ which vanishes in a time of order $\frac{ \pi}{2 \omega}$. If we estimate the required power for $\rho = 3 \cdot 10^3~\mbox{kg/m}^3$, $V=3 \, \mbox{cm} \times 3 \, \mbox{cm} \times 0.1 \, \mbox{mm} = 9 \cdot 10^{-8} \mbox{m}^3$, $\frac{\omega}{2\pi} = 2$~GHz, $\delta x=1$~nm, we obtain about $3 \cdot 10^8$~W. At present there are two known ways to make a body oscillate at gigahertz frequencies and both of them have some disadvantages precluding their use in a dynamic Casimir experiment. The first way would exploit acoustic waves in solids. Waves at gigahertz frequencies were produced in the 60's by B\"ommel and Dransfeld in a quartz rod placed inside a microwave resonant cavity \cite{BD}. 
What makes this technique ineffective for our purpose is that a large microwave power is needed and that the rod motion has a maximum displacement $\delta x$ much less than 1~nm. A small amplitude $\delta x$ implies a small maximum oscillation speed $v$ (for a harmonic motion $v = \omega \cdot \delta x$ where $\omega$ is the oscillation angular frequency). Hence the number of photons produced by a mechanical oscillation with such a speed would be undetectable, as is readily seen from formulae (\ref{photons}) and (\ref{exponential}). The second technique is the one applied in acoustic microscopes \cite{YB}. A resonant vibrating mode in a sapphire block is excited at a typical frequency around 3~GHz. The use of a mechanical system with high quality factor $Q$ reduces power requests in this case. For sapphire at 4.3~K the product of the cavity quality factor $Q$ by the oscillation frequency $f$ is about $Q \cdot f \sim 10^{14}$~Hz \cite{BCT}. Therefore if $f \sim 10^9$~Hz, $Q$ can be as high as $10^5$. The same oscillation amplitude as in a nonresonant system can be reached with a power $10^5$ times smaller. But again the oscillation amplitude $\delta x$ is about $10^{-10}$~m and the moved area is quite small (about 100~$\mu \mbox{m}^2$). \vspace*{\baselineskip} We now present our \emph{idea for} realizing \emph{an oscillating mirror} without employing mechanical methods. The notion of using laser pulses to quickly change the dielectric properties of a semiconductor can be found in literature. In 1989 Yablonovitch \cite{Yab} proposed the use of laser pulses to change the refraction index of a semiconductor very rapidly. Another work by Lozovik, Tsvetus and Vinograd \cite{LTV} studied the parametric excitation of electromagnetic waves using a dense plasma layer in a cavity; the layer was created by irradiating a semiconductor film with femtosecond laser pulses. 
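The power estimate given earlier for a nonresonant mechanically vibrating wall is easy to reproduce numerically. The short Python sketch below (not part of the original paper) uses the same figures and recovers the quoted order of magnitude of 10^8 W:

```python
import math

# Figures from the text for a nonresonant mechanically vibrating wall.
rho = 3e3   # mass density, kg/m^3
V = 9e-8    # wall volume, m^3 (3 cm x 3 cm x 0.1 mm)
f = 2e9     # oscillation frequency, Hz
dx = 1e-9   # oscillation amplitude, m

omega = 2 * math.pi * f
E = 0.5 * rho * V * omega**2 * dx**2   # maximum kinetic energy, J
tau = math.pi / (2 * omega)            # time in which the energy vanishes, s
P = E / tau                            # required power, W

print(f"P ~ {P:.1e} W")  # of order 10^8 W, as quoted in the text
```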
\begin{figure} \centering \includegraphics[height=6cm]{figure1.eps} \caption{ (a) Mirror effective motion~: a composite mirror changes its reflection properties (under intermittent laser light irradiation), and the microwave reflecting surface switches its position between P1 and P2 accordingly. (b) Arrangement of the composite mirror in a microwave resonant cavity. The semiconductor is irradiated by an optical fiber piercing the cavity.} \label{cavity} \end{figure} In \emph{our experimental scheme} we shall simulate a mirror motion by changing the actively reflecting surface of a composite mirror. The mirror consists of a metal plate with a semiconductor wafer fixed on one side (see FIG.\ \ref{cavity}(a)). The semiconductor reflectivity is driven by irradiation from laser light, with photon energy corresponding to the semiconductor energy gap, so that it can switch from completely transparent to completely reflective for microwaves. By sending a train of laser pulses at a given frequency we get a mirror oscillating from position P1 to position P2. An advantage of this method is that the distance between P1 and P2 can be made of the order of a millimeter, compared to about 1~nm obtainable by mechanical oscillations. This leads to a layout as represented in FIG. 1(b). The composite mirror becomes a wall of a superconducting cavity. The laser pulses are guided into the cavity via an optical fiber. A small pickup antenna is also introduced in the cavity and the signal fed to high sensitivity electronics. However a number of \emph{points} need \emph{to be checked} in order that we may state that this method is effective~: \begin{enumerate} \item[(1)] Is the mirror created in P2 as good as the one in P1~? \item[(2)] Is it actually feasible to make the mirror appear and disappear in P2 at gigahertz frequencies~? \item[(3)] Is the $Q$ of the cavity influenced by the presence of the semiconductor~? 
\item[(4)] Is the sensitivity of the pickup electronics good enough to detect the predicted number of created photons~? \end{enumerate} The \emph{answers to these questions} have required about a year of laboratory tests \cite{BBCLPRZ, prep}. \emph{Question} (1) was tackled by inserting a semiconductor layer in a waveguide and measuring the reflected and the transmitted power under laser irradiation. It was proven that the semiconductor can reflect microwaves as effectively as copper. This test yields also another important parameter, that is the laser power needed to make a good mirror. This question arises from the fact that one needs to build a plasma of thickness equal to at least three skin depths (for the given microwave frequency) in order that it may be fully reflective. The energy needed is 1~$\mu$J/cm$^2$ per pulse in the microwave range \cite{BBCLPRZ}. The answer to \emph{question} (2) can be found in literature \cite{MSABL}. The mirror appearance at P1 is fast enough for gigahertz frequencies, since the transition time of the electrons is some femtoseconds, so that the dominant factor is the rise time of the laser pulse, which is in the hands of the experimenter. However the disappearance of the mirror depends on the recombination time of the electrons, which is a property of the semiconductor only. If one uses semi-intrinsic semiconductors one can obtain recombination times as low as 5--10~ps \cite{MSABL}. The answer to \emph{question} (3) was straightforward. We measured the $Q$ value of a niobium cavity brought to 4.6~K, determining the decay time of the loaded cavity, and got $Q = 2 \cdot 10^6$. Once the semiconductor wafer was inserted in the cavity no difference in the decay time (hence in $Q$) was detected. In order to answer \emph{question} (4) a complete electronic chain was connected to the pickup antenna inserted in the cryogenic cavity. The first amplification stage was placed near the cavity at liquid helium temperature \cite{bradley}. 
The cavity was then loaded with microwave pulses of decreasing power in order that the minimum detectable signal might be reached. The minimum signal detected had an energy of 0.1~eV, corresponding to about $10^4$ microwave (2.5~GHz) photons. By taking 100 measurements one arrives at $10^3$ photons. We think that further improvements in the electronic chain will allow us to detect even feebler signals. A by-product of this measurement regards the possible source of noise due to thermal radiation from the electron-hole plasma created in the semiconductor when the laser shines. We tried to measure this noise but it was below our sensitivity (in our frequency band). \begin{figure} \centering \includegraphics[height=6cm]{figure2.eps} \caption{Detailed experimental setup. There are three main parts~: the electromagnetic cavity already shown in FIG.~1, the electronic chain and the laser system. This block diagram displays the interrelations between laser and radio frequency generator for the control of parametric resonance.} \label{detailed} \end{figure} \vspace*{\baselineskip} On the basis of our previous results a general \emph{layout for the detection of Casimir radiation} is shown in FIG.~\ref{detailed}. A niobium cavity at cryogenic temperature is placed in a vacuum vessel. A cryogenic amplifier \cite{bradley} is connected by a transmission line to an inductive pickup loop coupled with the cavity in critical matching. A directional coupler is inserted between the cavity and the cryogenic amplifier to enable measurements of the resonance cavity reflection coefficient and calibration of the electronic chain. The signal output by the cryogenic amplifier is further amplified at room temperature, then processed by a superheterodyne receiver and eventually integrated over time. The laser light carried by the optical fiber is tuned in the near infrared and modulated in amplitude at a frequency exactly double the cavity resonance frequency. 
The generator drives a frequency doubler whose output is fed to a low power laser master oscillator. The master oscillator yields a continuous signal from which a pulse picker selects the number of pulses required in each excitation stage. The total energy stored in the laser is limited, and so is the number of available pulses. Our present estimate is between $10^3$ and $10^4$ pulses for each run. This experimental setup leaves open the possibility of changing many configuration parameters to help distinguish real from spurious signals. We can change the master laser frequency and thus the oscillation mirror frequency to slightly detune the parametric resonance condition. Also we can vary the cavity temperature to study possible contributions from thermal radiation. Mirrors made with different semiconductor samples and with different thickness can be tried. \vspace*{\baselineskip} In order to show that our scheme leads to observable results we need to insert real numbers in the theoretical formulae and \emph{compare the predicted number of photons with our sensitivity.} Several physical parameters are essentially already chosen, since a niobium cavity and an electronic chain have been used satisfactorily in the tests carried out to answer questions (3) and (4). The niobium cavity has transverse dimensions of 71~mm and 22~mm, and length $x = 110$~mm. The cavity mode chosen was TE$_{101}$ with eigenfrequency around 2.5~GHz. The semiconductor was GaAs with thickness $2\, \delta x = 0.6$~mm. The excitation time duration for a single run, at 5~GHz, according to the number of pulses, can be between 0.2 and 2~$\mu$s. Typically a run can be repeated after a few seconds.
The following data can be used to estimate the number of photons produced by dynamic Casimir effect~: \begin{eqnarray*} \frac{\omega}{2\pi} &=& 2.5 \cdot 10^9 \mbox{s}^{-1} \, ; \qquad \qquad \qquad t = 10^{-6} \mbox{s} \, ; \nonumber \\ \frac{v}{c} = \frac{\delta x}{x} &=& \frac{0.3\mbox{~mm}}{110\mbox{~mm}} = 3\cdot 10^{-3} \, ; \qquad Q = 2 \cdot 10^6 \, . \nonumber \end{eqnarray*} With formula (\ref{photons}), which is the more pessimistic, we find that this number is $4 \cdot 10^4$, well above our sensitivity. \vspace*{\baselineskip} The above figures foster our hopes that eventually some light will be shed on pressing open problems. In fact a good knowledge of quantum vacuum is of great \emph{importance in cosmology,} both to the recurrent question of Einstein's cosmological constant \cite{Wei}, with its significance to the dark matter problem; and to the critical question of the birth of density inhomogeneities, ancestors of galaxies, from inflated quantum vacuum fluctuations \cite{Dav}. Moreover a sound grasp of quantum vacuum dynamics is crucial in understanding some issues on the nature of quantum particles and on the relationships among vacuum noise, the concepts of information and entropy, and gravitation \cite{Dav}. \vspace*{\baselineskip} We wish to thank L. Badan for technical help with cryogenics, E. Berto for mechanical support, and D. Corti for help with radio frequency measurements.
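As a cross-check of the photon-number estimate above, formula (\ref{photons}) can be evaluated directly with the listed parameters (a numerical sketch, not part of the original text):

```python
# Parameters listed in the text.
f = 2.5e9        # cavity frequency omega / 2 pi, Hz
t = 1e-6         # duration of the excitation, s
v_over_c = 3e-3  # effective mirror speed, delta x / x
Q = 2e6          # cavity quality factor

# Formula (2): N = (omega t / 2 pi) * (v/c)^2 * Q
N = f * t * v_over_c**2 * Q
print(f"N ~ {N:.1e} photons")  # ~4.5e4, consistent with the ~4e4 quoted
```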
# zbMATH — the first resource for mathematics

A necessary condition for the existence of compact Clifford-Klein forms of homogeneous spaces of reductive type. (English) Zbl 0799.53056

A compact Clifford-Klein form of a homogeneous space $$G/H$$ is defined as a quotient $$\Gamma\setminus G/H$$, where $$\Gamma$$ is a subgroup of $$G$$ acting properly discontinuously and freely on $$G/H$$, so that $$\Gamma\setminus G/H$$ is compact in the quotient topology. The problem stated in the title is studied here, and two (rather technical) necessary conditions are proved for the existence of a compact Clifford-Klein form of a given $$G/H$$ of reductive type.

Reviewer: O. Kowalski (Praha)

##### MSC:

53C30 Differential geometry of homogeneous manifolds

##### Keywords:

homogeneous space; Clifford-Klein form; reductive type
null
null
Home 2014 Performances Ariana Grande Ariana Grande Performs 'Break Free' On 'America's Got Talent' Ariana Grande Performs 'Break Free' On 'America's Got Talent' Celebrity Bug 8/29/2014 2014 Performances, Ariana Grande, With her sophomore album, 'My Everything', being released earlier this week, pop sensation Ariana Grande stopped by summer's top show 'America's Got Talent', to perform her latest single "Break Free", which is currently No.7 on the Billboard Hot 100. The album, which features appearances from Iggy Azalea, Zedd and her beau Big Sean, is expected to follow in the path of her debut album by debuting at the top of next week's chart. So, congratulations are in order on that front, but the performance itself is underwhelming. Tags # 2014 Performances # Ariana Grande Labels: 2014 Performances, Ariana Grande
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
7,262
40-й чемпионат мира по конькобежному спорту в спринтерском многоборье прошёл 17 и 18 января 2009 года в Ледовом дворце Крылатское (Москва, Россия). Результаты Женщины NQ = не отобрались на заключительные 1000 м DNS = не вышла на старт DQ = дисквалификация Мужчины NQ = не отобрались на заключительные 1000 м DQ = дисквалификация DNS = не вышел на старт WDR = снялся с соревнований Ссылки Результаты на сайте Международного союза конькобежцев 2009 год в конькобежном спорте Международные соревнования по конькобежному спорту в России 2009 Январь 2009 года Чемпионаты мира в России 2009 год в Москве Спорт в России в 2009 году Международные спортивные соревнования в Москве
{ "redpajama_set_name": "RedPajamaWikipedia" }
4,537
IF Competition Discussion: Lost Pig Comments on "Lost Pig" from IF Comp 2007 follow the cut. I beta-tested "Lost Pig", so I'm obviously a bit biased. I think it's a very charming game. There is humor, there is a rather sweet story, and there are puzzles of a kind I especially like: they require you to understand an alternative physics, then apply what you've learned. I highly recommend it, if you're inclined to take my word for it. But what follows the cut this time is more of an essay than a review. The author of "Lost Pig", "Admiral Jota", has been one of my favorite beta-testers for many years. He did a lot of heavy lifting on Savoir-Faire, starting back when it was in an alpha state and prone to crash frequently from recursion problems. (You try having magic, distance-seeing mirrors that can place one another in scope.) He's patient, thorough, and detail-oriented. He's the kind of guy who will try every compass direction in every room, whether it's supposed to lead anywhere or not, just to make sure that there's nothing funny going on with the exits. If you introduce a new tool or a new verb, he will promptly find the two objects in the game where the standard implementation will go wrong. He'll JUMP in rooms that don't have floors and SING in echo chambers. He is not impeded by his knowledge of standard IF vocabulary. If, at some point, you mention that the PC is having trouble breathing, he'll likely try >INHALE as his next move. If your room description includes an anecdote about how someone used the room in the past, he'll try an impromptu reenactment. Reading one of his testing transcripts can get a little punishing sometimes — it's absurd how much he expects of an author — but it's also hugely rewarding. If there are funny tidbits in your game that you thought no one was ever going to find, he'll find them. His play-style is a constant reminder that IF can be a conversation between author and player. 
And, however put-upon you may feel at 2 in the morning when you're reading his latest comments, going along and implementing the new stuff almost always improves the game. I have other testers who give me more feedback about theme, writing, characterization; what Admiral Jota is great at is polish. We've also played a few RPGs together over the years, and he's great at using character traits — his and other people's — as the basis for improv comedy. He gets some good jokes in, but he's also not at all bad at being the straight man, at giving you an opening to show off whatever weird quirks you wrote onto your character sheet. Having him in a party adds substantially to the social entertainment value, the banter and goofing around that a GM can't control or plan for but which makes a world of difference to how everyone feels about a campaign. I mention all this not as some kind of premature eulogy (actually I fear he'll find it a bit embarrassing), but because the same attention to every little detail and the same taste for character-trait-based improv provide the strengths of "Lost Pig". I've lately rambled a few times about how neat it is when an IF game gives the player an opportunity to get into the head of the player character and then play the role through gestures and actions. We talk semi-frequently in the IF community about how a viewpoint character is defined by what he won't do — the bridges he won't jump from, the archbishops he refuses to insult, the physical and social constraints at work — but we less often worry about adding new non-critical behavior for the player character. But a lot of fun can come from trying something in a game, not because it's something that solves a puzzle or because it's something you would do, but because it's something that the character would do — and then finding that the author was there before you. It confirms your sense of who the character is, promotes empathy and immersion, grounds you in the game. 
Grunk is a great (if silly) embodiment of that principle. Baf has talked about how he was led to try orc-appropriate behavior throughout the game (in this case, eating Grunk's pants). I spent a lot of time chasing the pig around, not because I thought it would solve anything, but because Grunk was the kind of guy who would stupidly chase the pig to no avail, and because the game kept giving me amusing feedback when I did. Jota sets up all sorts of funny things for the player to do, and then meticulously implements all the results. And, oddly, though the humor is built around Grunk being big, green, and dumber than rocks, the characterization that eventually emerges is subtler than that: he's also a little wistful about the things he doesn't understand, basically well-disposed towards his fellow beings, persistent at a task. I feel like I know Grunk better than 95% of the IF protagonists I've played. We could use more of this, both in comedy and in serious IF. Author Emily ShortPosted on October 9, 2007 October 14, 2015 Categories interactive fiction, parser, ReviewsTags IFcomp, Lost Pig 8 thoughts on "IF Competition Discussion: Lost Pig" dgarciaroger says: Grunk is such an endearing orc. Besides, I judged from his (her?) livejournal entries that Grunk's English is better than mine! Pingback: IF Competition Discussion: General Observations & Favorites « Emily Short's Interactive Fiction Merk says: Yeah, I did a lot of that. I ate the pig at one point. I tried eating the pants. I lit the chair on fire and carried it around with me. I don't remember what all I did, but the detail is amazing. I probaly should have given some of these examples in my Lost Pig review (which ends up being shorter than several other reviews for games I didn't enjoy nearly so much), but it just didn't occur to me. It's all very transparent, these things. You do them, and it works, and it just feels *right* that it should be working. 
It's like there's never any question as to how well things are going to be implemented, and Jota has done an amazing job to make it that way. Yeah. If nothing else, it shows a lot of nerve having divisible water AND fire AND a conversational NPC all in the same game — any one of those elements introduces a lot of complexity. I also enjoyed being allowed to put Grunk through stuff like this: >i Grunk have: hat (full of water) torch (black and sooty) >wear hat Grunk put on hat full of water. Water all pour out on Grunk head. >ask gnome about pants Grunk tell gnome all about Grunk favorite pants. Gnome say, "It sounds as though those pants have had nearly as much of an adventure as you have. But I do wish you would put them back on." This all led me to play through the game again just now, and I found quite a few hilarious options that I never noticed during beta-testing (or possibly they got added after my first time with the game…). jepflast says: This was easily the best game in the Competition. I did find two bugs, though: 1. The commands "YES" and "NO" in the gnome's presence cause trouble. 2. Referring to the gnome's stool spews garbage in some cases. It may have been ASK GNOME FOR/ABOUT STOOL; I can't remember exactly. Pingback: Vespers, Rameses, The Baron, Lost Pig « akth JonathanCR says: "…the humor is built around Grunk being big, green, and dumber than rocks…" I agree that it's subtler than just that – but I'm not sure that even that much is true. Is Grunk actually dumb at all? I think one of the interesting things about the way he is portrayed is that although he doesn't know very much, and he's not very good at counting or talking, he reflects on what he sees and hears with remarkable clarity. Much of his narrative is very insightful, but in such a simple and direct way that it is easy not to realise how shrewd it is. I think Grunk is a lot cleverer than most people, including himself, realise. 
That is one of the things that make him such an appealing character. Pingback: A Top 20 List of IF | Emily Short's Interactive Storytelling Leave a Reply to jepflast Cancel reply Previous Previous post: IF Competition Discussion: The Immortal Next Next post: IF Competition Discussion: A Fine Day for Reaping
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
5,438
Gerald P. Pulley (October 25, 1922 – March 31, 2011) was an American photographer noted for his work with the United States Navy. Career Pulley's Navy career included serving under Rear Admiral Richard E. Byrd, during the U.S. classified South Seas exploration aboard from September 5, 1943 through November 24, 1943, serving in China aboard as part of the last official task force to close out the military activities in that area, various missions during World War II , Korean War and Vietnam War, and serving as the Officer in Charge of the Fleet Air Photographic Laboratory in Jacksonville, Florida, during the Cuban Missile Crisis. Pulley also served as the Military White House Photographer to President Harry S. Truman following the death of President Franklin D. Roosevelt. During the famous "Whistlestop" tour of 1948, Pulley followed the President's campaign, covering in 33 days. Following Truman's reelection, Pulley left his position with the White House but returned in January 1952 to document the meeting between President Truman and Prime Minister Winston Churchill aboard USS Williamsburg. His "Oral History Interview" can be viewed through the Harry S. Truman Library website. Pulley has been given the title, "Mr. Navy Photographer" by his peers and was the founder of the "National Association of Naval Photography." He was also an active member of the Masonic Lodge (for over 60 years) and often gave presentations to various Masonic Lodges on his days at the White House with former Mason President Truman. Pulley's work is listed in Eyes of the Navy: A History of Naval Photography by George Carroll, LCDR, USN(Ret) Personal life Pulley died on March 31, 2011 at age 88. He was survived by his wife of 67 years, Mary Virginia Pulley, whom he married on January 6, 1943. They had three children and four grandchildren. 
References 1922 births 2011 deaths People from King City, Missouri Photographers from Missouri United States Navy personnel of World War II Truman administration personnel People from Virginia Beach, Virginia Military personnel from Missouri
{ "redpajama_set_name": "RedPajamaWikipedia" }
3,045
{"url":"http:\/\/hackage.haskell.org\/package\/hjugement-2.0.2.20190414\/docs\/Majority-Value.html","text":"hjugement-2.0.2.20190414: Majority Judgment.\n\nMajority.Value\n\nSynopsis\n\n# Type MajorityValue\n\nA MajorityValue is a list of grades made from the successive lower middlemosts of a Merit, i.e. from the most consensual majorityGrade to the least.\n\nFor using less resources and generalizing to non-integral Shares, this MajorityValue is actually encoded as an Abbreviated Majority Value, instead of a big list of grades.\n\nConstructors\n\nInstances\n\n## Type Middle\n\nA centered middle of a Merit. Needed to handle the Fractional capabilities of a Share.\n\nBy construction in majorityValue, lowGrade is always lower or equal to highGrade.\n\nConstructors\n\n Middle FieldsmiddleShare :: Sharethe same Share of lowGrade and highGrade.lowGrade :: grade\u00a0highGrade :: grade\nInstances\n\nThe majorityValue is the list of the Middles of the Merit of a choice, from the most consensual to the least.\n\nThe majorityGrade is the lower middlemost (also known as median by experts) of the grades given to a choice by the Judges.\n\nIt is the highest grade approved by an absolute majority of the Judges: more than 50% of the Judges give the choice at least a grade of majorityGrade, but every grade lower than majorityGrade is rejected by an absolute majority Thus the majorityGrade of a choice is the final grade wished by the majority.\n\nThe majorityGrade is necessarily a word that belongs to grades, and it has an absolute meaning.\n\nWhen the number of Judges is even, there is a middle-interval (which can, of course, be reduced to a single grade if the two middle grades are the same), then the majorityGrade is the lowest grade of the middle-interval (the \u201clower middlemost\u201d when there are two in the middle), which is the only one which respects consensus: any other choice whose grades are all within this middle-interval, has a majorityGrade which is greater or 
equal to this lower middlemost.\n\n# Type MajorityRanking\n\nThe majorityRanking ranks all the choices on the basis of their grades.\n\nChoice A ranks higher than choice B in the majorityRanking if and only if A\u2019s majorityValue is lexicographically above B\u2019s. There can be no tie unless two choices have precisely the same majorityValues.\n\nExpand a MajorityValue such that each grade has a Share of '1'.\nWARNING: the resulting list of grades may have a different length than the list of grades used to build the Merit.\nnormalizeMajorityValue m multiply all Shares by their least common denominator to get integral Shares.","date":"2020-02-27 18:52:57","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.6750702857971191, \"perplexity\": 2261.6970616390436}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2020-10\/segments\/1581875146744.74\/warc\/CC-MAIN-20200227160355-20200227190355-00459.warc.gz\"}"}
null
null
\section{Introduction} In science, one can distinguish two traditions of theory-building. On the one hand, there are the theories that seek to model, explain, and predict the natural behaviour of systems, in the absence of any human intervention and regardless of what anyone knows about them. In physics, one could call this the \emph{dynamicist} tradition of theory building, where it is the most common approach. On the other hand, there is the \emph{pragmatic} tradition wherein the goal is to describe the manner and extent to which a given system can be known and controlled by human agents. The more a scientific discipline is concerned with aspects close to human life and society, the more relevant this aspect is. The guiding philosophy in the pragmatic tradition is that understanding a phenomenon means being able to make use of it. Physical phenomena are studied in order to better leverage certain \em resources.\em Chemistry is a good example of a field that takes this pragmatic perspective. Much of chemistry is about understanding chemicals as resources. This perspective can be traced back to the origin of the field, which was the study of alchemy, the goal of which was to find ways of transforming base metals such as iron, nickel, lead and zinc into noble metals such as silver and gold. The pragmatic approach is still going strong today, particularly in modern industrial chemistry, which seeks to discover the processes by which raw materials that are available in abundance can be transformed into useful products, for instance, bulk chemicals such as gases, acids, bases, and petroleum, and secondary products such as dyes, pesticides, drugs, and polymers. A similar story is true of thermodynamics. 
Early work, such as that of Carnot, sought to understand resources of thermal nonequilibrium in terms of their ability to do useful work, such as inducing mechanical nonequilibrium, and to determine the relative usefulness of different sorts of resources and different processes for extracting work from these. This perspective still survives in modern treatments of the subject. The basic questions which are answered in a theory of resources are: Which resources can be converted into which other ones? What are the ways in which a given conversion can be accomplished? The first known rules of chemical transformations were determined empirically, as were the first constraints on thermodynamic transformations. Many resource theories start life in this fashion. Eventually, however, scientists seek to provide a derivation of the rules of a resource theory from a deeper theory of the physics. The rules of stoichiometry, for instance, can be inferred from Dalton's atomic theory, and the rules of thermodynamics can be inferred from statistical mechanics. Such attempts at reconstruction typically lead to a better characterization of the resources and the possibilities of conversion. For instance, after the development of the atomic theory of matter, the resources in chemistry were understood to be collections of molecules and the transformations among these were understood to be constrained by conservation of the constituent atomic elements. The resulting theories of resources incorporated many intuitive facts. For instance, it is obvious that if one can convert resource $a$ to resource $b$ and one can also convert resource $b$ to resource $c$, then one can convert resource $a$ to resource $c$. Another intuitive fact is that one can compose resources in parallel, and processes for implementing conversions among resources can be composed both in parallel and sequentially.
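These closure properties can be phrased operationally: the one-step conversions generate further conversions by sequential composition. As a minimal sketch (the resource labels and the one-step relation `direct` below are invented for illustration), one can compute all conversions implied by reflexivity and transitivity:

```python
from itertools import product

# Invented one-step conversions between abstract resources.
direct = {("a", "b"), ("b", "c"), ("x", "y")}

def convertibility_closure(direct):
    """Close one-step conversions under reflexivity and sequential
    composition: if a -> b and b -> c are possible, so is a -> c."""
    objects = {r for pair in direct for r in pair}
    reach = {(r, r) for r in objects} | set(direct)
    changed = True
    while changed:
        changed = False
        for (p, q), (r, s) in product(list(reach), repeat=2):
            if q == r and (p, s) not in reach:
                reach.add((p, s))
                changed = True
    return reach

reach = convertibility_closure(direct)
print(("a", "c") in reach)  # a -> b and b -> c compose to a -> c
```

Parallel composition would act similarly on pairs of resources; the abstract framework proposed below packages both modes of composition at once.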
The first goal of this article is to formalize these intuitive notions and develop a mathematical framework that captures precisely the assumptions of such resource theories. We propose that a resource theory be associated with a symmetric monoidal category wherein the objects are resources and the morphisms are transformations of the resources. This framework can then be studied as a mathematical entity in its own right. One can also derive results in the abstract setting that can be fed back to all of the concrete resource theories. Certain new questions can also be posed within the framework, e.g.~are there concepts, features and general principles that are common to all resource theories? As is often the case, by abstraction we arrive at a framework that can accommodate much more than the paradigmatic examples that led one to it. In the theory of computation, for instance, one is often interested to know whether a given set of gates, when combined in parallel and sequentially, can be used to build a circuit that allows the solution of computational problems in a given complexity class. One can therefore think of gate sets as resources for achieving computational tasks. As another example, in the theory of communication, one is interested to know whether certain kinds of noisy channels can be combined, together with pre- and post-processing, to simulate a noiseless channel, so that channels can be considered as resources for communication tasks. Gates and channels, unlike chemicals and sources of heat, are transformations rather than states of matter. Nonetheless, our abstract framework for resource theories can accommodate transformations as resources just as easily as it accommodates states. Indeed, a given resource theory can include both resource transformations and resource states, as we shall see.
Another sense in which the abstraction can accommodate a greater variety of examples is that the resources need not be considered to be \emph{physical} states or processes at all; they might instead be merely \emph{logical} entities. For instance, one can cast mathematics as a resource theory where the objects are mathematical propositions and the morphisms are proofs, understood as sequences of inference rules. The tensor product is interpreted as logical conjunction, so that if $P$ and $Q$ are propositions, then $P\otimes Q$ corresponds to the proposition ``$P$ and $Q$''. Variants of this resource theory are studied in categorical logic and categorical proof theory; see e.g.~\cite{BHRU}. The categorical formalism for resource theories that we propose will make a connection with this work. There is another schema by which many resource theories are defined. For a given set of possible experimental interventions---for instance, state preparations, transformations and measurements---the set can be divided into those interventions that are considered to be freely implementable and those that are considered to be costly. It is presumed that an agent can make use of anything in the free set in unlimited number and in any combination, while the elements of the expensive set are the resources. The theory seeks to describe the structure that is induced on the resources, given access to the free set. The best modern example of a resource theory arising in this fashion is entanglement theory. The term `entanglement' was coined by Schr\"{o}dinger in the thirties, but the turning point in the theory of entanglement was in the mid-nineties, when researchers in quantum information realized that an entangled state was a resource. One of the first examples of entanglement serving as a resource was provided by the quantum teleportation protocol. Suppose two agents, Alice and Bob, are restricted in the quantum operations that they can jointly perform.
Specifically, they cannot transmit quantum coherence between their labs, which is another way of saying that there is no quantum channel between them. Nonetheless, it is assumed that there is a classical channel between them, and that \em within \em each of their labs, they have no trouble implementing any coherent quantum operation. This restricted set of operations is called the set of local operations and classical communication (LOCC). In the quantum teleportation protocol, a maximally entangled state is consumed, using LOCC, to simulate one use of a quantum channel~\cite{EntanglementResource}. After the quantum information community had made the shift to thinking about entanglement as a resource, an entangled state was thereafter \em defined \em as a state that cannot be generated by LOCC. Many other properties of quantum states have been studied by quantum information theorists, with entanglement theory as the model. There are now resource theories of: \em asymmetry\em, where the resources are quantum states that break a symmetry and quantum operations that fail to be covariant with respect to a symmetry~\cite{gour2008resource,marvian2013theory,marvian2011pure,MarvianSpekkensNoether,marvian2012information,marvian2013modes}; \em nonuniformity\em, where the resources are quantum states that deviate from the maximally mixed state and quantum operations that deviate from those that are deterministic or inject uniform noise into the system \cite{PurityResource,nonequilibrium}; and \em athermality\em, where the resources are quantum states that deviate from the Gibbs form for a given temperature and quantum operations that deviate from those that are deterministic or inject thermal noise at a given temperature \cite{brandao2011resource}. Each of these will be discussed in this article. 
The resource theory of asymmetry provides a framework for categorizing many constraints that arise from symmetries, such as selection rules in atomic physics, Noether's theorem~\cite{MarvianSpekkensNoether}, and the Wigner-Araki-Yanase theorem~\cite{ahmadi2013wigner,marvian2012information}. The resource theories of athermality and nonuniformity provide a framework for describing many results in quantum thermodynamics, including the limits to work extraction~\cite{JWZGB,brandao2011resource,FundLimitsNature,SSP2}, Landauer's solution to the Maxwell demon problem~\cite{nonequilibrium}, and the status of the second law of thermodynamics~\cite{nonequilibrium,1ShotAtherm2}. Note that there are strong parallels between the resource theories that arise in this way, and this provides yet another motivation for developing a general framework in which one can hope to distinguish features that are generic to all such resource theories and features that are particular to a given example. This schema, once formalized, is quite generic and it too can be extended beyond the case of \em experimental \em processes to more abstract sorts of processes. For instance, the resource theory of \em compass-and-straightedge constructions \em provides an example of this sort of phenomenon. It has plane figures as its objects and compass-and-straightedge constructions between those as transformations. More precisely, a plane figure consists of a set of points, lines and circles in $\mathbb{R}^2$, and a transformation is a sequence of basic constructions, such as creating a new line through two existing points, creating a circle with a given center and passing through a given point, or dropping some of the existing points, lines, or circles. In this context, certain plane figures become resources. For example, a plane figure consisting of a circle together with a square of the same area is a valuable resource (Example~\ref{ex:catalyst}).
It cannot be created from a blank figure using compass-and-straightedge constructions, and with it, one can create many other figures. The question of interest is: which plane figures can be used to construct which others, given compass-and-straightedge constructions for free? All such examples constitute theories that describe a set of processes and a distinguished subset thereof that are considered free. The second goal of our article is to show how such theories---we will call them \em partitioned process theories\em---define resource theories in the sense of the abstract definition that we provide in the first part of the article. Starting from a partitioned process theory, one can define resource theories of states, but also resource theories of generic processes, including states, transformations, and measurements. Because transformations and measurements can admit of an input, one can combine them not only in parallel, but sequentially as well. We shall discuss the difference between theories that permit only parallel-combinable resource processes, and those that allow them to be combined in any fashion. Such theories of process transformations can be understood as a generalization, to arbitrary process theories, of the {\em quantum combs} framework~\cite{comb}. Finally, we come back to the abstract framework of resource theories, and we define a mathematical structure that captures only partial information about the resource theory, but information which is particularly significant. Of interest to us is the partial information that is required to answer some of the most basic questions about a resource theory: When are two resources equivalent under the free operations, in the sense that one can convert one to the other and vice-versa? What are the necessary and sufficient conditions on two resources such that the first can be converted to the second (possibly irreversibly) under the free operations? 
How can we find measures of the quality of a resource, that is, functions from the resources to the reals which are nonincreasing under the transformations allowed by the resource theory? What is the rate at which arbitrarily many copies of one resource can be converted to arbitrarily many copies of another, or equivalently, what is the average number of copies of a resource of one kind that is required to produce one unit of a resource of another kind? Can a given resource conversion be made possible by the presence of a catalyst, i.e. a resource which must be present but is not consumed in the process? The question of which \em particular \em free operation is used to achieve a given resource conversion is typically considered to be of secondary interest, and it is this information that we will dispense with in defining our more streamlined mathematical formalism. The third goal of our article, therefore, is to provide a mathematical framework which provides just enough structure to specify the answers to such questions, and therefore to capture the ``core'' of a resource theory. We refer to this minimal framework as a {\em theory of resource convertibility}. The requisite mathematical formalism turns out to be that of a {\em commutative (equivalently, Abelian or symmetric) preordered monoid}. First, consider the preorder. This is the order over resources that captures whether one resource can be converted to another or not. Recall that the first of the questions above called us to characterize the equivalence classes of resources under the free operations, and the second called us to find the {\em partial order} over these equivalence classes that is induced by the free operations. But this is nothing more than a call to find the preorder over the resources induced by the free relation of convertibility. 
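The first and third of these questions amount to extracting, from the convertibility relation alone, its equivalence classes and its order-respecting measures. A minimal sketch (the four resources and the relation `conv` are invented for illustration, and `conv` is written out already reflexive and transitive):

```python
# An invented convertibility preorder: (a, b) in conv means that a can be
# freely converted to b. Reflexive and transitive by construction.
resources = {"r1", "r2", "r3", "r4"}
conv = {(r, r) for r in resources} | {
    ("r1", "r2"), ("r2", "r1"),   # r1 and r2 are freely interconvertible
    ("r1", "r3"), ("r2", "r3"),   # both convert irreversibly to r3
}

def equivalence_classes(resources, conv):
    """Group mutually interconvertible resources: the equivalence
    classes induced by the free operations."""
    classes = []
    for r in sorted(resources):
        for cls in classes:
            if (r, cls[0]) in conv and (cls[0], r) in conv:
                cls.append(r)
                break
        else:
            classes.append([r])
    return [frozenset(c) for c in classes]

def is_monotone(value, conv):
    """A real-valued measure respects the preorder iff it never
    increases along an allowed conversion."""
    return all(value[a] >= value[b] for a, b in conv)

classes = equivalence_classes(resources, conv)
print(sorted(sorted(c) for c in classes))
print(is_monotone({"r1": 2, "r2": 2, "r3": 1, "r4": 5}, conv))
```

Note that the monotone assigns equal values on an equivalence class, and that `r4`, being incomparable to the rest, can consistently be assigned any value.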
Once the preorder is identified, one can define measures of the resource, or ``resource monotones'', as any map from the resources to the reals which respects the preorder. As an example, in the resource theory of bipartite entanglement, the preorder over pure bipartite states is determined by the eigenvalue spectrum of the reduced density operator of each such state. One state is higher than another in the preorder if and only if the eigenvalue spectrum of the reduced density operator of the second state \em majorizes \em the eigenvalue spectrum of the reduced density operator of the first state~\cite{Nielsen}. A given equivalence class of states contains all states for which the reduced density operator has a given eigenvalue spectrum (wherein the order of the eigenvalues is not significant). Any Schur-concave function of the eigenvalue spectrum respects the majorization preorder and consequently defines an entanglement monotone. The Shannon entropy, which is a Schur-concave function, defines the entanglement monotone known as the ``entropy of entanglement''~\cite{EntanglementResource}. It should be noted that although it might appear that the highest goal of a resource theory should be to find useful {\em measures} of a resource, in fact any such measure is a rather crude way of expressing facts about the preorder. The preorder structure of a resource theory is more fundamental. Indeed, if the equivalence classes defined by the preorder do not form a total order, then having the valuation over the states for any single measure is insufficient for deducing the preorder over the states. In particular, for two states that are not ordered relative to one another, one measure might assign a higher value to the first, while another might assign a higher value to the second.
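This failure of any single measure to capture the preorder can be made concrete with majorization. The sketch below is illustrative only: the two spectra are invented so as to be incomparable under majorization, and the second monotone (one minus the largest eigenvalue) is just one convenient choice.

```python
import math

def majorizes(y, x):
    """True if y majorizes x: the partial sums of y, sorted in decreasing
    order, dominate those of x (equal-length probability vectors assumed)."""
    ys, xs = sorted(y, reverse=True), sorted(x, reverse=True)
    ty = tx = 0.0
    for a, b in zip(ys, xs):
        ty, tx = ty + a, tx + b
        if ty < tx - 1e-12:
            return False
    return True

def entropy_of_entanglement(spectrum):
    """Shannon entropy of the reduced-state eigenvalue spectrum."""
    return -sum(p * math.log2(p) for p in spectrum if p > 0)

# Two invented reduced-state spectra, incomparable under majorization:
p = [0.5, 0.4, 0.1]
q = [0.6, 0.2, 0.2]
print(majorizes(p, q), majorizes(q, p))  # False False: incomparable states
# Two monotones rank the corresponding states oppositely:
print(entropy_of_entanglement(p) < entropy_of_entanglement(q))  # True
print((1 - max(p)) > (1 - max(q)))                              # True
```

Since neither spectrum majorizes the other, the corresponding states are incomparable, and indeed the entropy of entanglement ranks one state higher while the largest-eigenvalue monotone ranks the other higher.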
As an example, although there have been many proposals of measures of entanglement for pure bipartite states, the majorization preorder over these states is more fundamental than any of these measures. This point has been previously emphasized elsewhere~\cite{nonequilibrium}. Next, consider the monoid structure. This is simply the binary operation that specifies the manner in which two resources compose in parallel. We here emphasize that the monoid structure of a resource theory can vary independently of the preorder structure. To prove this point, we provide an example of two resource theories that are associated with the same preorder but have different notions of parallel composition, and hence differ in their monoid structure. The monoid structure is what is relevant for answering the last two questions on our list, i.e. for determining asymptotic rates of conversion and for establishing whether nontrivial catalysis is possible in the resource theory. Determining the asymptotic rates of conversion among states in particular is traditionally one of the first topics to be studied in any given \emph{quantum} resource theory, under the name of \emph{distillation protocols}, for instance, for entanglement~\cite{EntDist}, asymmetry~\cite{gour2008resource,marvian2011pure}, and athermality~\cite{JWZGB,brandao2011resource}. More generally, one seeks to determine the asymptotic rates of conversion among processes (including states as special cases). The resource inequalities of quantum Shannon theory~\cite{DevetakResource1,DevetakResource2} are an example of such work. The fact that different sorts of resources can parallel-compose differently, and in particular that not every resource has an extensive character, has practical significance.
Recognition of the latter fact is what ultimately resolved a puzzle concerning an apparent conflict between two results: on the one hand, Ref.~\cite{horodecki2002laws} seemed to establish that the asymptotic rate of conversion in any quantum resource theory of states was given by the regularization of the relative entropy distance of a state to the set of free states; on the other hand, Ref.~\cite{gour2008resource} had determined the asymptotic rate of conversion among asymmetric states and it was not given by the regularization of the relative entropy. The resolution of the puzzle, described in Ref.~\cite{gour2009measuring}, was that the result of Ref.~\cite{horodecki2002laws} applied only to extensive resources, that is, resources where the relative entropy distance to the free states of $N$ copies of a state scaled linearly with $N$, while for the resource of asymmetry, the relative entropy distance to the free states of $N$ copies scaled as $\log N$. A theory of resource convertibility is a particularly useful arena to try and identify some generic facts about resource theories, facts which might be more obvious in an abstract setting than in any concrete resource theory. As a small first example, we have derived a sufficient condition for the absence of catalysis in a resource theory. A more substantive example would be to extend the results of Ref.~\cite{horodecki2002laws} by determining a general expression for the asymptotic rate of conversion of resources when the latter are not extensive in character. \color{black} \paragraph*{Acknowledgements.} We would like to thank the QPL referees as well as Manuel B\"arenz and Hugo Nava Kopp for detailed comments on the manuscript. Pictures have been produced with the TikZit package. Research at Perimeter Institute is supported by the Government of Canada through Industry Canada and by the Province of Ontario through the Ministry of Economic Development and Innovation. 
The first two authors have been supported by the John Templeton Foundation. \section{Resource theories} \subsection{What is a resource theory?} We now outline the basic elements of a resource theory. First, there may be many different kinds of resources, which we denote by $A, B, \ldots$. In addition, there are conversions between resources, which we denote $f:A\to B, g:C\to D, \ldots$, much as in a chemical reaction. A transformation $f:A\to B$ is a method for turning resource $A$ into resource $B$. Since there may be many ways of achieving such a conversion, we need to distinguish these by an additional label such as $f$. Transformations can be \em composed sequentially\em. In particular, a transformation $f$ that converts resource $A$ to resource $B$ can be composed sequentially with a transformation $g$ that converts resource $B$ to resource $C$. (This is only possible if the output of $f$ is the same as the input of $g$, in which case we say that the types match.) The composite transformation $g \circ f$ converts resource $A$ to resource $C$. Moreover, it should be possible to combine resources: if $A$ and $B$ are resources, then it is possible to regard $A$ and $B$ together as a composite resource, which we denote $A\otimes B$. In other words, the collection of all resources should be equipped with a binary operation ``$\otimes$''. If $f:A\to B$ and $g:C\to D$ are transformations, then there should similarly be a composite transformation $f\otimes g$ corresponding to executing $f$ and $g$ in parallel, \[ f\otimes g \: : \: A\otimes C \longrightarrow B\otimes D. \] Finally, we assume the existence of a \em void resource \em denoted $I$, which can be added to any other resource without changing it.\footnote{For many physicists and mathematicians, the ``$\otimes$'' notation brings to mind Hilbert spaces, but in this context, it means simply a parallel composition of resources.
For instance, if one is interested in cooking ingredients as resources, then the set of ingredients consisting of a potato and a carrot is denoted ${\rm potato} \otimes {\rm carrot}$~\cite{CatsII}.} All of these intuitive facts about resources, if formalized, can be summed up by saying that resources and transformations among resources are, respectively, objects and morphisms in a symmetric monoidal category (SMC). We do not repeat here the axioms defining the structures in an SMC, but instead refer the reader to the literature. For instance, Ref.~\cite{Cats,CatsII} provides accessible expositions for a physics audience, while Ref.~\cite{Benabou} is among the original works. Moreover, SMCs admit an intuitive graphical calculus, and much of this paper can be understood in terms of it. In fact, there exist powerful theorems which establish that equational reasoning within an SMC is in one-to-one correspondence with deformation of diagrams \cite{JS,SelingerSurvey}. In the graphical calculus of an SMC, we represent an object $A$ by a wire labeled with that object: \[ \includegraphics{Resource170914-figure0.pdf} \] a composite object $A_1\otimes \dots \otimes A_n$ by placing such wires side-by-side: \[ \includegraphics{Resource170914-figure1.pdf} \] and a general morphism $f:A_1\otimes \dots \otimes A_n\to B_1\otimes \dots \otimes B_m$ by a box: \[ \includegraphics{Resource170914-figure2.pdf} \] The trivial object is represented by `no wire', so a morphism $s: I \to A$ has no input wires. Rather than as a box, we represent it as a triangle: \[ \includegraphics{Resource170914-figure3.pdf} \] We typically omit the object labels whenever no danger of confusion arises. The cautious reader may note a similarity between the triangle depicting a state and Dirac notation in quantum theory, and indeed, this graphical calculus can be seen as Dirac notation, unfolded in two dimensions \cite{Cats, CatsII}. 
`Symmetry' in symmetric monoidal category stands for the fact that wires may cross: \[ \includegraphics{Resource170914-figure4.pdf} \] Sequential composition of $f:A\to B$ and $g:B\to C$ is represented by connecting $f$'s output to $g$'s input: \[ \includegraphics{Resource170914-figure5.pdf} \] and parallel composition of $f_1:A_1\to B_1$ and $f_2:A_2\to B_2$ by placing boxes side-by-side: \[ \includegraphics{Resource170914-figure6.pdf} \] This completes the notation of the graphical calculus. Equational reasoning boils down to nothing but deforming diagrams without changing the topology: \[ \includegraphics{Resource170914-figure7.pdf} \] As already mentioned above, doing so allows one to establish any equation that can be derived from the axioms of an SMC. We adopt the convention that if ${\bf C}$ is an SMC, then $|{\bf C}|$ denotes the set of its objects, and ${\bf C}(A,B)$ denotes the hom-set between $A$ and $B$, that is, the set of morphisms in ${\bf C}$ that have $A$ as domain and $B$ as codomain. To summarize then: \begin{defn} \label{def:resourcetheory} A \emph{resource theory} is a symmetric monoidal category $({\bf D},\circ,\otimes,\mathbb{I})$, where: \begin{itemize} \item the objects $|{\bf D}|$ represent the resources, \item the morphisms in the hom-set ${\bf D}(A,B)$ for $A,B\in |{\bf D}|$ represent transformations of resource $A$ into resource $B$ that can be implemented without any cost, \item the binary operation $\circ$ corresponds to sequential composition of processes with matching types, \item the binary operation $\otimes$ describes parallel composition of resources and of processes, and \item the unit object $\mathbb{I}$ denotes the void resource, i.e., the resource which when composed with any other resource leaves it the same, as in $A\otimes \mathbb{I}=A=\mathbb{I}\otimes A$. 
\end{itemize} \end{defn} The difference between the terms ``resource theory'' and ``symmetric monoidal category'' is not a mathematical one, but rather one of interpretational nature: a particular SMC is called a ``resource theory'' whenever we want to think of its objects as resources and its morphisms as transformations or conversions between those resources. So a resource theory, as an abstract entity, is just a symmetric monoidal category. A concrete resource theory, such as chemistry or thermodynamics, is a particular interpretation of that symmetric monoidal category. Note that given the interpretation of the unit object $\mathbb{I}$ as the void resource, it necessarily has no cost. Given that the morphisms of the resource theory are interpreted as transformations that can be implemented at no cost, it follows that any object $A$ that can be obtained from the void resource, i.e. for which the hom-set ${\bf D}(\mathbb{I},A)$ is non-empty, also has no cost. The set of all such objects will be called the {\em free resources}. The complement of this set will be the {\em costly} or {\em nonfree resources}. \begin{defn} The set of {\em free resources} in ${\bf D}$ is $\{ A \in |{\bf D} | : {\bf D}(\mathbb{I},A) \ne \emptyset \}$. \end{defn} \begin{remark} \label{rem:strict} Throughout this paper, we assume that $({\bf D},\circ,\otimes,\mathbb{I})$ is a symmetric `strict' monoidal category, that is the unit and associativity natural isomorphisms are identities. By Mac Lane's strictification theorem~\cite[p.257]{MacLane}, any (non-strict) monoidal category is equivalent (via a pair of strong monoidal functors) to a strict one. In particular, to show that a certain equation holds for general monoidal categories, it suffices to show this in the strict case. Hence proofs in the graphical calculus carry over to arbitrary monoidal categories. 
Concretely, in the graphical calculus, the associator becomes trivial since bracketing vanishes, and unitality becomes trivial since the tensor unit is represented by `no wire'. On the other hand, the symmetry isomorphisms $A\otimes B\stackrel{\cong}{\to} B\otimes A$ cannot be taken to be identities, and also in the graphical calculus, they need to be represented by the crossing of strands. One can either think of them as actual transformations which convert $A\otimes B$ into $B\otimes A$, or as ``passive'' transformations merely expressing the fact that $A\otimes B$ and $B\otimes A$ stand for the same objects, i.e.~the order in a tensor expression has no significance beyond syntax. Consult \cite{CatsII} for a discussion of all these issues. \end{remark} \subsection{Examples} We now give a few examples of resource theories and how their structure can be formalized using an SMC. We begin with our chemistry example. \begin{example} \label{ex:chem} The resource theory of \em chemistry \em has collections of chemical species (atoms, ions, molecules) as its objects, and reactions under standard conditions as well as sequences of such reactions as transformations. An example of a simple reaction is the neutralization of an acid and a base, \[ \mathrm{NaOH + HCl} \longrightarrow \mathrm{NaCl + H_2 O}. \] Here, we have used the usual notation in chemistry, where the ``$+$'' corresponds to our tensor ``$\otimes$''. The tensor unit $\mathbb{I}$ is simply the empty set of chemical species. An example from industrial chemistry of a transformation that requires a sequence of reactions is the widely used \em Haber process, \em \[ \mathrm{N_2 + 3 H_2} \longrightarrow \mathrm{2 NH_3}, \] turning nitrogen and hydrogen into ammonia, which can then be further processed into fertilizer. It should be noted that different sequences of reactions relating the two sides of a reaction equation correspond to different morphisms in the category.
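This example admits a minimal computational sketch: collections of species are multisets, the tensor is multiset union, and a reaction consumes its inputs from a (possibly larger) collection. All names below are illustrative, not taken from the text:

```python
from collections import Counter

# Resources as multisets of chemical species; the tensor is multiset union,
# and the void resource I is the empty multiset.
def combine(*resources):
    out = Counter()
    for r in resources:
        out.update(r)
    return out

def apply_reaction(resource, inputs, outputs):
    """Apply the reaction `inputs -> outputs` to `resource`; the reaction may
    act on just part of a larger collection of species. Returns the new
    multiset, or None if the required inputs are not available."""
    if any(resource[s] < n for s, n in inputs.items()):
        return None
    result = resource - inputs   # Counter subtraction drops zero counts
    result.update(outputs)
    return result

# The Haber process: N2 + 3 H2 -> 2 NH3
haber_in = Counter({"N2": 1, "H2": 3})
haber_out = Counter({"NH3": 2})

stock = combine(Counter({"N2": 2, "H2": 3}), Counter({"O2": 1}))
after = apply_reaction(stock, haber_in, haber_out)
assert after == Counter({"NH3": 2, "N2": 1, "O2": 1})
# Without enough hydrogen the transformation is simply unavailable:
assert apply_reaction(Counter({"N2": 1, "H2": 2}), haber_in, haber_out) is None
```

The spectator species `O2` passing through untouched corresponds to tensoring the reaction with an identity morphism.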
\end{example} \begin{example} \label{ex:rand} The resource theory of randomness is defined as follows. An object $(X,p)$ is a finite set $X$ equipped with a probability distribution $p$ which assigns to every $x\in X$ its probability $p(x)$. The transformations $f:(X,p)\to (Y,q)$ correspond to deterministic processings, which are deterministic maps $f:X\to Y$ having the property that $q(y) = \sum_{x\in f^{-1}(y)}p(x)$, meaning that one can compute the probability of getting a certain $y\in Y$ after the processing by summing up the individual probabilities of all $x\in X$ which process to $y$. These maps compose sequentially in the obvious way. This defines a category ${\bf FinProb}$, which has previously been studied in the context of entropy~\cite{BFL}. To get from ${\bf FinProb}$ to an SMC that describes the resource theory of randomness, we must formalize the notions of parallel composition and of the trivial resource. Parallel composition of objects is given by taking product distributions: \[ (X,p) \otimes (Y,q) := (X\times Y, p\times q), \] where $X\times Y$ is the Cartesian product of the sets $X$ and $Y$, and $p\times q$ denotes the product distribution on $X\times Y$. Parallel composition of deterministic processings is defined in the obvious way: when we have $f:(X,p)\to (X',p')$ and $g:(Y,q)\to (Y',q')$, then it is straightforward to check that \[ f \times g \: : \: (X\times Y, p\times q) \to (X'\times Y', p'\times q') ,\qquad (x,y) \mapsto (f(x), g(y)) \] is also a deterministic processing, and this is how we define the parallel composition $f\otimes g : (X,p)\otimes (Y,q) \to (X',p')\otimes (Y',q')$. Finally, the trivial resource $\mathbb{I}$ is taken to be the singleton set equipped with the point distribution. The SMC that is obtained by augmenting ${\bf FinProb}$ with this tensor and tensor unit, $({\bf FinProb},\circ,\otimes,\mathbb{I})$, defines the resource theory of randomness. This theory has many practical applications.
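The two defining constructions of this example, the pushforward condition $q(y)=\sum_{x\in f^{-1}(y)}p(x)$ and the product distribution, can be sketched in a few lines (the function names are ours, chosen for illustration):

```python
from itertools import product

# A FinProb object: a finite set with a probability distribution, here a dict.
def pushforward(p, f):
    """q(y) = sum of p(x) over x with f(x) = y: the distribution induced
    by the deterministic processing f (given as a dict X -> Y)."""
    q = {}
    for x, px in p.items():
        y = f[x]
        q[y] = q.get(y, 0.0) + px
    return q

def tensor(p, q):
    """Product distribution on the Cartesian product: the tensor of objects."""
    return {(x, y): px * qy for (x, px), (y, qy) in product(p.items(), q.items())}

p = {"a": 0.5, "b": 0.25, "c": 0.25}
f = {"a": 0, "b": 1, "c": 1}           # a deterministic map X -> Y
assert pushforward(p, f) == {0: 0.5, 1: 0.5}

q = {"h": 0.5, "t": 0.5}
pq = tensor(p, q)
assert abs(sum(pq.values()) - 1.0) < 1e-12   # still a probability distribution
assert pq[("a", "h")] == 0.25
```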
For example, randomness is a valuable resource for secure cryptography. In many practical schemes for IT security, a computer gathers ``entropy'' from user input like keyboard strokes or mouse movement. Having ``bad'' randomness, in the sense of predictability, may lead to critical security issues. For this reason, true randomness is an important resource, and our definitions allow for a precise mathematical treatment of the question of the quality of randomness. Our formalism also allows for a treatment of the manipulation of randomness. For instance, in the theory of \em randomness extractors \em one wants to find a deterministic transformation $f$ which turns a given $(X,p)$ with non-uniform $p$ into a $(Y,q)$ with uniform distribution $q$ (for this to be possible, the cardinality of $Y$ must be less than that of $X$). Finding randomness extractors can therefore be understood as a problem in the resource theory of randomness. \end{example} Lest the reader get the impression that the objects in a resource theory are always {\em states}, we present an example wherein the resources are themselves processes which can be interconverted one to the other by pre- and post-processing. (Resource theories containing processes among the resources will be an important theme in subsequent sections.) \begin{example} \label{ex:shannon} The resource theory of \em one-way classical communication channels \em concerns the mathematical theory of communication as developed by Shannon~\cite{Shannon}. In the associated SMC, an object is a triple $(A,B,P)$, where $A$ and $B$ are finite sets, and $P$ is a conditional probability distribution over the second set given the first, with $P(b|a)$ denoting the probability of $b$ given $a$ for all $a\in A$ and $b\in B$. We think of the ingoing value $a$ as a message that Alice wants to send to Bob, who receives $b$. 
The transmission of the message may not be perfect, and hence Bob's $b$ may differ from Alice's $a$; more precisely, for every value of $a$ there is a certain probability of getting each possible value of $b$, and this is described by the given conditional distribution $P(b|a)$. Parallel composition of objects $(A,B,P)$ and $(C,D,P')$ corresponds to forming products of stochastic maps, \[ (P\otimes P')(bd|ac) := P(b|a) P'(d|c), \] for all $a\in A$, $b\in B$, $c\in C$, and $d\in D$. If Alice and Bob have access to the communication channel $P(b|a)$, they can use this channel to simulate certain other channels $Q(b'|a')$ by Alice applying a stochastic map $E(a|a')$ to the input of the channel, termed an \em encoding \em of $a'$, and Bob applying a stochastic map $D(b'|b)$ to its output, termed a \em decoding \em of $b'$. In this way, the channel resource is transformed as follows: \[ P(b|a) \to Q(b'|a') \] where \[ Q(b'|a'):= \sum_{a,b} D(b'|b) P(b|a) E(a|a'). \] We also allow shared randomness as a free resource, that is, Alice and Bob are allowed to possess a pair of systems in an arbitrary correlated state. Using this free resource, Alice and Bob can correlate the encoding and decoding maps and thereby transform the channel as follows: \[ P(b|a) \to Q(b'|a'):= \sum_{a,b,x,y} D(b'|b,y) P(b|a) E(a|a',x) R(x,y), \] where $x$ and $y$ denote the correlated variables in Alice's and Bob's possessions respectively, and $R(x,y)$ is the joint distribution describing the correlation. These are the morphisms in the SMC defining the resource theory, which compose sequentially and in parallel in the obvious way. Finally, the unit object $\mathbb{I}$ in the SMC is the deterministic map from the singleton set to the singleton set, representing the channel which ``does nothing to nothing''. 
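The simulation without shared randomness, $Q(b'|a')=\sum_{a,b} D(b'|b) P(b|a) E(a|a')$, is just a product of stochastic matrices. A small numerical sketch, with illustrative matrices of our own choosing:

```python
import numpy as np

# Represent a channel P(b|a) as a column-stochastic matrix P[b, a].
def simulate(P, E, D):
    """Q(b'|a') = sum_{a,b} D(b'|b) P(b|a) E(a|a'): pre-process with an
    encoding E and post-process with a decoding D."""
    return D @ P @ E

# A binary symmetric channel with crossover probability 0.1:
P = np.array([[0.9, 0.1],
              [0.1, 0.9]])
# A relabelling encoding (swap the inputs) and an identity decoding:
E = np.array([[0.0, 1.0],
              [1.0, 0.0]])
D = np.eye(2)

Q = simulate(P, E, D)
# The simulated channel is again column-stochastic:
assert np.allclose(Q.sum(axis=0), 1.0)
# Swapping the inputs of the channel swaps its columns:
assert np.allclose(Q, P[:, ::-1])
```

The shared-randomness version would average such products over the joint distribution $R(x,y)$ of the correlated encoding and decoding choices.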
The principal goal of Shannon's information theory is to find an encoding and a decoding such that the composite channel $Q(b'|a')$ is as close as possible to an identity channel $Q(b'|a') = \delta_{b',a'}$ (for that to be possible, the cardinality of the respective domain of $a'$ and of $b'$ must again be generally less than that of $a$ and of $b$). In this way, Alice and Bob can simulate a noiseless transmission of information. \end{example} \section{Resource theories from partitioned process theories}\label{sec:ProcTheor} \subsection{Partitioned process theories} In recent years, SMCs have proven to be a convenient mathematical framework for abstractly describing theories of physical processes~\cite{Cats,CatsII} as well as more general kinds of processes~\cite{BaezLNP, Gospel}. Above we interpreted the objects of an SMC as resources and the morphisms as transformations thereof. \em Process theories \em are SMCs as well, although the interpretation is now different. The objects in the SMC, $|{\bf C}|$, now correspond to the different types of systems in the process theory. The morphisms correspond to the different processes (including state preparations and measurements in the case of physical processes); in particular, the processes that have a system $A$ as input and system $B$ as output correspond to the elements of the hom-set ${\bf C}(A,B)$. The notion of executing physical processes in series is represented by composition of the morphisms, $\circ$, while the notion of parallel composition of systems and physical processes is given by the tensor product $\otimes$ in the SMC. Finally, the trivial system, i.e. ``nothing'', corresponds to the unit object $I$ in the SMC. A couple of examples serve to illustrate the concept of a theory of physical processes. 
\begin{example}\label{ex:classicalstochasticprocesses} The theory of classical stochastic processes for systems with discrete state spaces is defined as the SMC in which the objects are finite sets, $A,B,\dots$, and the morphisms are stochastic maps between these, that is, the hom-set ${\bf C}(A,B)$ is the set of conditional probabilities $P(b|a)$ for all $a\in A$ and $b\in B$, i.e.~arrays of numbers $P(b|a)$ indexed by $a\in A$ and $b\in B$ such that $P(b|a) \ge 0$ and $\sum_b P(b|a)=1$ for all $a$. Sequential composition of stochastic maps is essentially matrix multiplication, \[ (Q\circ P)(c|a) = \sum_b Q(c|b) P(b|a), \] and in this way one obtains a category which is sometimes called ${\bf FinStoch}$~\cite{BF}. To turn this category into an SMC, we augment it with a notion of parallel composition and a unit object~\cite{QClChannels}. Parallel composition of objects is given by the Cartesian product of the finite sets, i.e. $A \otimes B := A \times B$, and parallel composition of stochastic maps is given by their entrywise product: \[ (P \otimes Q)(bb'|aa') = P(b|a) Q(b'|a'). \] The unit object of the process theory is the singleton set denoted $I$. Therefore the SMC for the theory of classical stochastic processes is $({\bf FinStoch},\circ,\otimes,I)$. While {\bf FinStoch} and {\bf FinProb} (from Example~\ref{ex:rand}) are obviously related, there are some clear differences. In {\bf FinProb}, probability distributions are encoded as objects, whereas in {\bf FinStoch} they are encoded as morphisms. This clearly illustrates the difference between an SMC representing a resource theory and one representing a process theory.
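Both compositions in ${\bf FinStoch}$ can be sketched with column-stochastic matrices: sequential composition is the matrix product, and parallel composition is the Kronecker product. The matrices below are arbitrary illustrative choices:

```python
import numpy as np

def is_stochastic(M):
    """Column-stochastic check: entries nonnegative and each column sums
    to 1, matching the condition sum_b P(b|a) = 1 for every a."""
    return bool(np.all(M >= 0) and np.allclose(M.sum(axis=0), 1.0))

P = np.array([[0.7, 0.2],
              [0.3, 0.8]])
Q = np.array([[1.0, 0.5],
              [0.0, 0.5]])

assert is_stochastic(P) and is_stochastic(Q)
# Sequential composition (Q ∘ P)(c|a) = sum_b Q(c|b) P(b|a) is Q @ P:
assert is_stochastic(Q @ P)
# Parallel composition (P ⊗ Q)(bb'|aa') = P(b|a) Q(b'|a') is np.kron(P, Q):
assert is_stochastic(np.kron(P, Q))
```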
\end{example} \begin{example}\label{ex:quantumprocesses} The theory of quantum processes on finite-dimensional systems is defined as the SMC in which objects are finite-dimensional Hilbert spaces, morphisms are completely positive maps which compose sequentially in the obvious way, parallel composition is given by the tensor product of Hilbert spaces, and the unit object is the 1-dimensional Hilbert space $\mathbb{C}$. \end{example} In the examples that are going to follow, the overarching SMC $({\bf C},\circ,\otimes,I)$ defining the process theory is going to be a variant of either the SMC of classical stochastic processes or the SMC of quantum processes, in the sense of being equivalent to either of these two as a symmetric monoidal category; that is, these variants simply introduce some additional structure which, when forgotten, leaves us with the original process theory. \begin{defn} \label{def:processtheory} A \em partitioned process theory \em consists of a process theory, described by an SMC $({\bf C},\circ,\otimes,I)$, and a distinguished subtheory of \em free processes\em, described by an SMC ${\bf C}_{\rm free}$ that is an all-object-including sub-SMC of ${\bf C}$: \[ {\bf C}_{\rm free} \hookrightarrow {\bf C}\ . \] We will denote a given partitioned process theory by $({\bf C},{\bf C}_{\rm free})$. \end{defn} Some explanation is in order. The ``free processes'' forming ${\bf C}_{\rm free}$ are assumed to be those which our agent can execute at no cost. If $f$ and $g$ are free processes, then clearly $f\otimes g$ should also be a free process, since $f$ and $g$ can in particular be executed in parallel. Likewise, if $f\circ g$ is defined, then it should also be a free process. This justifies the assumption that ${\bf C}_{\rm free}$ is a sub-SMC of ${\bf C}$. Since doing nothing to a system should certainly be a free process, it follows that the identity map on any system should be included in ${\bf C}_{\rm free}$.
In other words, ${\bf C}_{\rm free}$ should be a subcategory of ${\bf C}$ that includes all objects of ${\bf C}$. In concrete applications, it is frequently the case that ${\bf C}_{\rm free}$ is not directly given as an all-object-including sub-SMC, but in terms of a ``generating set'' of processes which one declares to be free, and then one has to close this set under parallel and sequential composition in order to obtain the smallest all-object-including sub-SMC that contains it, and this is then what ${\bf C}_{\rm free}$ ends up being in such a case. The set of processes that are \em not \em in the free set, that is, ${\bf C}\setminus {\bf C}_{\rm free}$, are the costly resources. Obviously, in order to obtain an interesting resource theory, it should be the case that ${\bf C}_{\rm free}\neq{\bf C}$. The requirement ${\bf C}_{\rm free}\neq{\bf C}$ is a constraint that may not be straightforward to verify given a specification of ${\bf C}_{\rm free}$ in terms of a generating set of free operations. For instance, in the context of the theory of quantum processes, consider the following question: if we allow all unitaries to be included in ${\bf C}_{\rm free}$, as well as all partial trace operations, then what states can we include in addition and still obtain a nontrivial resource theory? It turns out that if we allow for processes to be simulated only approximately (see comments about epsilonification in Sec.~\ref{closing}), then there is only a single type of state that one can add without making the closure of the free processes into the full set of processes, namely, completely mixed states~\cite{PurityResource,nonequilibrium}. One thereby obtains the resource theory of nonuniformity, Example~\ref{examplenonuniformity}. If we add \emph{any other state} to the set of free processes, then after closing the set under sequential and parallel composition, we make it possible to implement an approximation of every quantum process for free, i.e. 
we obtain the trivial theory wherein ${\bf C}_{\rm free}={\bf C}$. Another point to note is that although one may be able to mathematically define many distinguished subtheories of a given process theory, the result may not have any physical interest. In order to be physically interesting, it should be the case that one can identify a set of constraints on experimental interventions which limit the processes that can be implemented in unlimited number to all and only those included in ${\bf C}_{\rm free}$. We turn now to showing how a partitioned process theory, as in Definition~\ref{def:processtheory}, can be proven to define many different resource theories, in the sense of Definition~\ref{def:resourcetheory}. In particular, we get different resource theories by considering different types of processes. \subsection{Resource theories of states} We will begin by outlining how a partitioned process theory $({\bf C},{\bf C}_{\rm free})$ gives rise to a resource theory of states. By definition, a \emph{state} is a process whose input is the trivial object; in other words, ``state'' is a different term for ``preparation procedure''. For our partitioned process theory, the set of states is $\bigcup_{A\in|{\bf C}|}\!\!{\bf C}(I,A)$. We obtain a resource theory by considering the extent to which these states can be converted one to another by the free transformations, that is, the structure of $\bigcup_{A\in|{\bf C}|}\!\!{\bf C}(I,A)$ under $\bigcup_{A, B\in|{\bf C}|}\!\!{\bf C}_{\rm free}(A, B)$. More precisely, we define a resource theory of states in terms of a partitioned process theory $({\bf C},{\bf C}_{\rm free})$ as follows. The category representing the resource theory of states that we obtain from $({\bf C},{\bf C}_{\rm free})$ will be denoted ${\rm S} ({\bf C},{\bf C}_{\rm free})$. 
The objects of ${\rm S} ({\bf C},{\bf C}_{\rm free})$ are taken to be the states of ${\bf C}$, \[ |{\rm S} ({\bf C},{\bf C}_{\rm free})|:= \!\!\bigcup_{A\in|{\bf C}|}\!\!{\bf C}(I,A). \] A state $s:I\to A$ can be converted into another state $t:I\to B$ by a free transformation $\xi:A\to B$ if one has \begin{equation} \begin{split} \label{eq:stateconvert} \includegraphics{Resource170914-figure8.pdf} \end{split} \end{equation}\par\noindent We then define the hom-set ${\rm S} ({\bf C},{\bf C}_{\rm free})(s,t)$ for $s,t \in |{\rm S} ({\bf C},{\bf C}_{\rm free})|$ to be the set of such free transformations that achieve $s\to t$. We now define two transformations to be equivalent if they have the same effect on all states, including when they act on only one part of a composite system. The morphisms in the resource theory of states are the equivalence classes of free transformations. Note, however, that for all cases listed in Table~\ref{tab:processrecs}, any two distinct free transformations are inequivalent.\footnote{Nonetheless, it is useful to explicitly define this equivalence relation to make it clear that ${\rm S}({\bf C},{\bf C}_{\rm free})$ is a subcategory of the resource theory of parallel-composable processes, ${\rm PC}({\bf C},{\bf C}_{\rm free})$, which we consider in the next subsection.} So the hom-set ${\rm S} ({\bf C},{\bf C}_{\rm free})(s,t)$ for $s,t \in |{\rm S} ({\bf C},{\bf C}_{\rm free})|$ is the set of equivalence classes of such free processes that achieve $s\to t$, that is, \[ {\rm S} ({\bf C},{\bf C}_{\rm free})(s,t) := \{ \xi \in {\bf C}_{\rm free}(A, B): \xi \circ s = t\} /\sim .
\] Sequential composition in this category is defined as follows: if $\xi\in{\bf C}_{\rm free}(A,B)$ is a free process turning $s$ into $t$, and $\eta\in{\bf C}_{\rm free}(B,C)$ is another free process turning $t$ into a third state $u:I\to C$, then $\eta\circ\xi \in {\bf C}_{\rm free}(A,C)$ is a free process turning $s$ into $u$, \[ \includegraphics{Resource170914-figure10.pdf} \] It is straightforward to check that this respects the equivalence, and moreover that the resulting composition on equivalence classes turns ${\rm S} ({\bf C},{\bf C}_{\rm free})$ into a category. In order for it to be a symmetric monoidal category, we also need to define parallel composition of objects and morphisms. On objects in ${\rm S} ({\bf C},{\bf C}_{\rm free})$, which are states in ${\bf C}$, parallel composition is simply inherited from ${\bf C}$. Morphisms in ${\rm S} ({\bf C},{\bf C}_{\rm free})$, which are equivalence classes of transformations between states in ${\bf C}$ using free processes, can also be composed in parallel in the obvious way, \[ \includegraphics{Resource170914-figure11.pdf} \] where we use the fact that ${\bf C}_{\rm free}$ is closed under parallel composition. Compatibility with the equivalence relation is again easy to show. Since we assume ${\bf C}$ to be a strict monoidal category, it follows that ${\rm S} ({\bf C},{\bf C}_{\rm free})$ is a strict monoidal category as well. The symmetry isomorphism $s\otimes t \to t\otimes s$ on objects in ${\rm S} ({\bf C},{\bf C}_{\rm free})$ is given by the symmetry in ${\bf C}$, \[ \includegraphics{Resource170914-figure12.pdf} \] This symmetry must be a free process since ${\bf C}_{\rm free}$ was assumed to be an all-object-including symmetric monoidal subcategory of ${\bf C}$, which entails in particular that it inherits the symmetries from ${\bf C}$. Finally, the unit object $\mathbb{I}$ of the SMC ${\rm S} ({\bf C},{\bf C}_{\rm free})$ is the identity morphism on the tensor unit of ${\bf C}$, i.e.~$1_I$, regarded as a state.
All told, we have proven the following: \begin{thm} \label{thm:statetorec} For any partitioned process theory $({\bf C},{\bf C}_{\rm free})$, the procedure outlined above allows one to define a symmetric monoidal category ${\rm S} ({\bf C},{\bf C}_{\rm free})$ that can be interpreted as a resource theory in the sense of Definition~\ref{def:resourcetheory}. \end{thm} Mathematically, ${\rm S} ({\bf C},{\bf C}_{\rm free})$ is the \em coslice category \em of ${\bf C}$ over the unit object $I$, with the additional restriction that only processes in ${\bf C}_{\rm free}$ are allowed for turning one state into another. We begin by showing how the resource theory of randomness, Example~\ref{ex:rand}, arises from a partitioned process theory. \begin{example} \label{ex:rand2} The process theory $({\bf C},\circ,\otimes, I)$ from which the resource theory of randomness arises is the theory of classical stochastic processes, defined in Example~\ref{ex:classicalstochasticprocesses}. The distinguished subtheory of free processes ${\bf C}_{\rm free}$ is the subcategory of classical deterministic processes. These can be described as the subset of stochastic maps for which all the conditional probabilities are 0 or 1. For any pair of objects $X$ and $Y$ in ${\bf C}$, we can define the deterministic maps from $X$ to $Y$. Clearly, the property of being deterministic is preserved under parallel and sequential composition, and all identity maps are deterministic. We have therefore confirmed that the deterministic maps form an all-objects-including subcategory of ${\bf C}$. As usual in process theories, states on $X$ are simply a particular kind of stochastic map, the map from the singleton set to $X$. It follows that the free states are simply the point distributions on $X$. Every state that is not a point distribution, i.e. every distribution containing some randomness, is a nonfree resource. 
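For small finite sets, convertibility of one distribution into another in this resource theory can be decided by brute force over all deterministic maps. A hypothetical sketch (the function name `convertible` is ours):

```python
from itertools import product

def convertible(p, q, tol=1e-9):
    """Search over all deterministic maps f: X -> Y for one whose
    pushforward of p equals q, i.e. a morphism p -> q in the resource
    theory of randomness. Exponential in |X|; for illustration only."""
    X, Y = sorted(p), sorted(q)
    for images in product(Y, repeat=len(X)):
        f = dict(zip(X, images))
        pushed = {y: 0.0 for y in Y}
        for x in X:
            pushed[f[x]] += p[x]
        if all(abs(pushed[y] - q[y]) < tol for y in Y):
            return True
    return False

p = {"a": 0.5, "b": 0.25, "c": 0.25}
assert convertible(p, {"0": 0.75, "1": 0.25})  # merge a and b
assert convertible(p, {"0": 1.0})              # anything -> a free point distribution
# A fair coin cannot be deterministically turned into a biased one:
assert not convertible({"h": 0.5, "t": 0.5}, {"0": 0.75, "1": 0.25})
```

The last assertion illustrates that randomness, once lost to a coarser distribution, cannot be freely regained, and point distributions are reachable from everything, matching their status as the free states.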
If we denote the SMC for the resource theory of randomness of Example~\ref{ex:rand} by $({\bf D},\circ,\otimes,\mathbb{I})$, then what we have shown is that for ${\bf C}$ the SMC of classical stochastic processes and ${\bf C}_{\rm free}$ the sub-SMC of classical deterministic processes, the resource states $|{\rm S} ({\bf C},{\bf C}_{\rm free})|$ can be identified with the objects of ${\bf D}$, and the transformations in the hom-set ${\rm S} ({\bf C},{\bf C}_{\rm free})(s,t)$ for $s,t\in |{\rm S} ({\bf C},{\bf C}_{\rm free})|$ are the transformations in the hom-set ${\bf D}(s,t)$. The unit object of the resource theory $\mathbb{I}$ is $1_I$, the identity map on the unit object of the process theory, which is the identity map applied to the singleton set. \end{example} \begin{table} \begin{center} \begin{tabular}{|c|c|c|} \hline resource & systems & free processes \\ \hline \hline \multirow{2}{*}{bipartite entanglement~\cite{EntanglementResource}} & pairs of & \multirow{2}{*}{bipartite LOCC operations} \rule{0pt}{2.6ex}\\ & Hilbert spaces & \rule[-1.1ex]{0pt}{0pt}\\ \hline \multirow{2}{*}{$n$-partite entanglement} & $n$-tuples of & \multirow{2}{*}{$n$-partite LOCC operations} \rule{0pt}{2.6ex}\\ & Hilbert spaces & \rule[-1.1ex]{0pt}{0pt}\\ \hline asymmetry~\cite{gour2008resource,marvian2013theory} & pairs of a Hilbert space & \multirow{2}{*}{$G$-covariant operations} \rule{0pt}{2.6ex}\\ (relative to a symmetry group $G$) & \& a unitary rep'n of $G$ & \rule[-1.1ex]{0pt}{0pt}\\ \hline \multirow{2}{*}{nonuniformity~\cite{PurityResource,nonequilibrium}} & \multirow{2}{*}{Hilbert spaces} & \multirow{2}{*}{noisy operations} \rule{0pt}{2.6ex}\\ & & \rule[-1.1ex]{0pt}{0pt}\\ \hline athermality~\cite{JWZGB,brandao2011resource,FundLimitsNature} & pairs of a Hilbert space & \multirow{2}{*}{$T$-thermal operations} \rule{0pt}{2.6ex}\\ (relative to temperature $T$) & \& a Hamiltonian & \rule[-1.1ex]{0pt}{0pt}\\ \hline \end{tabular} \end{center} \caption{The resource theories of states which have been most studied so far in quantum information theory. In all cases, the states are quantum states and the free processes are quantum channels, i.e.~completely positive trace-preserving maps.} \label{tab:processrecs} \end{table} In the following, we provide some examples of {\em quantum} resource theories of states that have been derived from partitioned process theories. See Table~\ref{tab:processrecs} for a summary. \begin{example}\label{examplenonuniformity} The quantum resource theory of \em nonuniformity\em~\cite{PurityResource,nonequilibrium} is defined in terms of the following partitioned process theory. The enveloping process theory is the SMC of quantum processes of Example~\ref{ex:quantumprocesses}, and the free processes form the sub-SMC which is generated by three kinds of processes: first, preparing any system in the completely mixed state; second, applying any unitary transformation to a system; third, discarding any system by taking a partial trace. These free processes were called \em noisy operations \em in \cite{PurityResource} (although this terminology seems a bit unfortunate, since introducing noise is neither necessary nor sufficient for satisfying the required condition). Equivalently, we can characterize the free processes as those that have a Stinespring dilation for which the ancilla state is completely mixed. It follows from the definition of the free processes that a system with Hilbert space $\mathcal{H}$ has only a single free state, namely, the completely mixed state on $\mathcal{H}$. Any state which is not completely mixed on $\mathcal{H}$---in other words, any state that is \em not uniformly mixed\em---is a nonfree resource. \end{example} \begin{example} \label{ex:bientangle} The most prominent example of a resource theory in the field of quantum information is the resource theory of \em bipartite entanglement\em.
The enveloping process theory is a variant of the theory of quantum processes which contains some additional structure that is used to pick out the distinguished subtheory. The systems are pairs of finite-dimensional Hilbert spaces $(\mathcal{H}_A,\mathcal{H}_B)$ describing the systems owned by Alice and Bob, respectively. The processes of type $(\mathcal{H}_A,\mathcal{H}_B)\longrightarrow(\mathcal{H}'_A,\mathcal{H}'_B)$ are quantum channels $\mathcal{L}(\mathcal{H}_A\otimes \mathcal{H}_B) \longrightarrow \mathcal{L}(\mathcal{H}'_A\otimes\mathcal{H}'_B)$, i.e.~completely positive trace-preserving maps turning an operator on $\mathcal{H}_A\otimes\mathcal{H}_B$ into an operator on $\mathcal{H}'_A\otimes\mathcal{H}'_B$. Sequential composition of such processes is the usual one. Parallel composition of these systems is component-wise, \[ (\mathcal{H}_A,\mathcal{H}_B)\otimes (\mathcal{H}'_A,\mathcal{H}'_B) := (\mathcal{H}_A\otimes \mathcal{H}'_A,\mathcal{H}_B\otimes\mathcal{H}'_B), \] and similarly for the processes. The unit system is $I=(\mathbb{C},\mathbb{C})$. This describes the SMC $({\bf C},\circ,\otimes,I)$ of the process theory. As far as this category is concerned, the splitting into Alice's and Bob's Hilbert spaces is irrelevant: the definition of processes and their composition only involves the product $\mathcal{H}_A\otimes\mathcal{H}_B$ rather than $\mathcal{H}_A$ and $\mathcal{H}_B$ individually. In this precise sense, the SMC $({\bf C},\circ,\otimes,I)$ is equivalent to the usual SMC of quantum processes in which the objects are Hilbert spaces and the morphisms are completely positive trace-preserving maps, both with the usual tensor product. The distinction between $A$ and $B$ is only relevant for defining the set of free processes: these are defined to be the processes in the sub-SMC ${\bf C}_{\rm free}$ corresponding to local operations and classical communication.
Here, the splitting into a Hilbert space for Alice and a Hilbert space for Bob is crucial, since only this splitting enables one to know what the terms ``local'' and ``communication'' refer to. More precisely, a process $(\mathcal{H}_A,\mathcal{H}_B)\longrightarrow (\mathcal{H}'_A,\mathcal{H}'_B)$ is \em local \em if it is given by a completely positive map $\Phi : \mathcal{L}(\mathcal{H}_A\otimes \mathcal{H}_B) \longrightarrow \mathcal{L}(\mathcal{H}'_A\otimes\mathcal{H}'_B)$ which factors into a product \[ \Phi_A \otimes \Phi_B \: : \: \mathcal{L}(\mathcal{H}_A\otimes \mathcal{H}_B) \longrightarrow \mathcal{L}(\mathcal{H}'_A\otimes\mathcal{H}'_B), \] where $\Phi_A \: : \: \mathcal{L}(\mathcal{H}_A) \longrightarrow \mathcal{L}(\mathcal{H}'_A)$ and $\Phi_B \: : \: \mathcal{L}(\mathcal{H}_B) \longrightarrow \mathcal{L}(\mathcal{H}'_B)$. To define classical communication, we make use of the subset of quantum processes that can be achieved by measuring some preferred basis of the Hilbert space and then repreparing a state that is diagonal in this basis as a function of the measurement outcome. A one-way classical communication is then a process of type $(\mathcal{H}_A,\mathbb{C})\to (\mathbb{C},\mathcal{H}_B)$ or of type $(\mathbb{C},\mathcal{H}_B)\to(\mathcal{H}_A,\mathbb{C})$, but drawn from this special subset of ``decohering'' processes. We can now define ${\bf C}_{\rm free}$ to be the smallest sub-SMC of $({\bf C},\circ,\otimes,I)$ which contains all local operations and classical communication. As a particular case, one finds that the free processes of type $(\mathbb{C},\mathbb{C})\to (\mathcal{H}_A,\mathcal{H}_B)$, that is, the free states, are precisely the separable states on $\mathcal{H}_A\otimes\mathcal{H}_B$, which are those of the form \[ \rho_{AB} = \sum_i w_i \rho^A_i \otimes \rho^B_i, \] where $(w_i)$ is a probability distribution and $\rho^A_i$ and $\rho^B_i$ are density operators on $\mathcal{H}_A$ and $\mathcal{H}_B$ respectively. 
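Deciding separability is hard in general, but a standard necessary condition for it, positivity under partial transposition (not discussed in the text), already distinguishes the free states above from a maximally entangled state. A minimal numerical sketch of ours:

```python
import numpy as np

def partial_transpose(rho, dA, dB):
    """Transpose Bob's subsystem of a density operator on H_A (x) H_B."""
    r = rho.reshape(dA, dB, dA, dB)      # indices (a, b, a', b')
    return r.transpose(0, 3, 2, 1).reshape(dA * dB, dA * dB)

def min_pt_eig(rho, dA, dB):
    """Smallest eigenvalue of the partial transpose."""
    return np.linalg.eigvalsh(partial_transpose(rho, dA, dB)).min()

# A free (separable) state: sum_i w_i rho_i^A (x) rho_i^B.
rho0 = np.array([[1., 0.], [0., 0.]])
rho1 = np.array([[0., 0.], [0., 1.]])
sep = 0.5 * np.kron(rho0, rho0) + 0.5 * np.kron(rho1, rho1)

# A nonfree state: the maximally entangled state (|00> + |11>)/sqrt(2).
psi = np.array([1., 0., 0., 1.]) / np.sqrt(2)
ent = np.outer(psi, psi)

# Separable states always have a positive partial transpose ...
assert min_pt_eig(sep, 2, 2) >= -1e-12
# ... while the entangled state violates this (eigenvalue -1/2).
assert min_pt_eig(ent, 2, 2) < 0
```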
It follows that a state is a nonfree resource if and only if it is {\em not} of the separable form. These, therefore, are the bipartite entangled states. \end{example} \begin{example} \label{ex:entangle} The theory of \em $n$-partite entanglement \em is very similar: systems are $n$-tuples of Hilbert spaces, and processes are completely positive trace-preserving maps between their tensor products. The free processes are again the ones obtained from local processes and classical communication, where ``local'' in this context means factorization across the tensor product of $n$ Hilbert spaces, and classical communication among the $n$ parties is any combination of classical communication between any pair of parties. The theory of bipartite entanglement can be regarded as a subtheory of $n$-partite entanglement by considering only those $n$-tuples of Hilbert spaces for which all components except for two are equal to $\mathbb{C}$. The theory of $k$-partite entanglement is a subtheory of $n$-partite entanglement in the analogous way for $k\le n$. \end{example} \begin{example} The quantum resource theory of \em asymmetry \em with respect to a symmetry group $G$~\cite{gour2008resource,marvian2013theory} is defined in terms of the following partitioned process theory. The systems are pairs $(\mathcal{H},\pi)$ consisting of a Hilbert space $\mathcal{H}$ and a projective unitary representation $\pi$ of $G$ on $\mathcal{L}(\mathcal{H})$, or equivalently, on the rays of $\mathcal{H}$. The processes $(\mathcal{H},\pi)\longrightarrow (\mathcal{H}',\pi')$ are the completely positive trace-preserving maps from $\mathcal{L}(\mathcal{H})$ to $\mathcal{L}(\mathcal{H}')$, which can be sequentially composed in the usual way. Parallel composition is given by the usual tensor product of Hilbert spaces and representations, \[ (\mathcal{H},\pi) \otimes (\mathcal{H}',\pi') := (\mathcal{H}\otimes\mathcal{H}',\pi\otimes\pi') .
\] As in the theory of entanglement, the resulting SMC $({\bf C},\circ,\otimes,I)$ is equivalent to the usual SMC of quantum processes, since the representations $\pi$ are of no relevance to the morphisms and their composition. These representations are needed, however, to define the distinguished subtheory of free processes. A process $(\mathcal{H},\pi)\longrightarrow (\mathcal{H}',\pi')$ associated to the completely positive map $\Phi: \mathcal{L}(\mathcal{H})\to \mathcal{L}(\mathcal{H}')$ is free if it is covariant under the action of the group representation, i.e., \begin{equation} \label{eq:covariance} \Phi \circ \pi(g) = \pi'(g) \circ \Phi \qquad\forall g\in G. \end{equation}\par\noindent In this case, $\Phi$ is said to be \em $G$-covariant\em. It is straightforward to check that the group-covariant processes define a sub-SMC ${\bf C}_{\rm free}$. It follows, in particular, that the free states on system $(\mathcal{H},\pi)$ are the density operators that are invariant under the action of $G$, or \em $G$-invariant\em, \begin{equation}\label{eq:Ginvariantstate} \pi(g) \circ \rho = \rho \qquad \forall g\in G. \end{equation} The nonfree resources are then those states that are $G$-noninvariant, that is, those that fail to satisfy Eq.~\eqref{eq:Ginvariantstate}. The quantum resource theory of asymmetry has many applications. It is useful, for instance, when a quantum dynamical problem cannot be solved exactly, because it is too complex or because one lacks precise knowledge of all of the relevant parameters, and one must therefore resort to inferences based on a consideration of the symmetries. Measures of asymmetry are perfectly adapted to making such inferences; for instance, they are constants of the motion in closed-system dynamics. Indeed, in recent work, they have been shown to provide a significant generalization of Noether's theorem~\cite{MarvianSpekkensNoether}.
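As a numerical illustration of the covariance condition \eqref{eq:covariance} (a sketch of ours, not from the text): averaging an arbitrary channel over the group action, often called twirling, always yields a $G$-covariant channel. Here $G=\mathbb{Z}_2$ acts on a qubit by conjugation with the Pauli $X$, and all function names are illustrative:

```python
import numpy as np

X = np.array([[0., 1.], [1., 0.]])
group = [np.eye(2), X]                 # a unitary representation of Z_2

def act(U, rho):
    """The superoperator pi(g): conjugation by the representing unitary."""
    return U @ rho @ U.conj().T

def prep0(rho):
    """Prepare |0><0|: a channel that is *not* covariant under X."""
    return np.trace(rho) * np.array([[1., 0.], [0., 0.]])

def twirl(channel):
    """Group-average a channel; the result satisfies the covariance law."""
    def covariant(rho):
        return sum(act(U.conj().T, channel(act(U, rho)))
                   for U in group) / len(group)
    return covariant

Phi = twirl(prep0)

# A random test state.
rng = np.random.default_rng(0)
A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
rho = A @ A.conj().T
rho /= np.trace(rho)

# prep0 violates covariance; its twirl satisfies it for every g in G.
assert not np.allclose(prep0(act(X, rho)), act(X, prep0(rho)))
for U in group:
    assert np.allclose(Phi(act(U, rho)), act(U, Phi(rho)))
```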
Another application is the problem of achieving high-precision measurement standards; for instance, the precision of a Cartesian frame is determined by the extent to which it breaks rotational symmetry. Note that one can just as easily define a resource theory of asymmetry for {\em classical} process theories or indeed for {\em any} process theory, including cases that are neither classical nor quantum such as the framework of generalized operational theories~\cite{ContPhys, Chiri2}. The key is that the categorical framework for process theories provides a straightforward means for defining a distinguished subtheory of {\em symmetric processes} as follows. Suppose the process theory is described by a category ${\bf C}$. Then, as long as one can associate to every pair of objects $A,B \in |{\bf C}|$ representations $\pi$ and $\pi'$ of the group $G$, i.e.~$\pi(g) \in {\bf C}(A,A)$ and $\pi'(g) \in {\bf C}(B,B)$, any process $\Phi \in {\bf C}(A,B)$ can be said to be $G$-covariant if it satisfies Eq.~\eqref{eq:covariance}. \end{example} \begin{example} The quantum resource theory of \em athermality \em with respect to a fixed temperature $T$~\cite{JWZGB,brandao2011resource} is defined in terms of the following partitioned process theory. Systems are pairs $(\mathcal{H},H)$ consisting of a Hilbert space $\mathcal{H}$ and a Hamiltonian $H$ acting on $\mathcal{H}$. Two such systems can be combined by taking tensor products of their Hilbert spaces and adding their Hamiltonians\footnote{In cases wherein two physical systems are interacting, they need to be described as a joint system which does not decompose into a parallel composition of its physical constituent systems.
However, one can still try to find a \emph{different} tensor product decomposition with respect to which the Hamiltonian can be written as a sum of Hamiltonians on the tensor factors, i.e.~one can try to identify virtual subsystems which then decompose the joint system in the resource theory.}, \begin{equation} \label{eq:productHam} (\mathcal{H},H)\otimes (\mathcal{H}',H') := (\mathcal{H}\otimes\mathcal{H}',H\otimes\mathbbm{1} + \mathbbm{1} \otimes H'). \end{equation}\par\noindent Again, the processes are simply the completely positive trace-preserving maps, and the Hamiltonians are only relevant for the definition of the \em free \em processes. To wit, the sub-SMC of free processes is defined by a generating set that contains three kinds of free process: first, adding ancilla systems in Gibbs states with respect to temperature $T$, that is, states of the form $\frac{1}{Z}e^{-H/kT}$ where $Z={\rm tr}(e^{-H/kT})$ and $k$ is Boltzmann's constant; second, unitaries which are energy-preserving, i.e.~that commute with the total Hamiltonian; third, taking the partial trace over a subsystem. These free processes are also known as \em $T$-thermal operations\em. It can be shown that a general process is $T$-thermal if and only if it has a Stinespring dilation whose ancilla state is the Gibbs state at temperature $T$ and whose unitary is energy-preserving, which means that it commutes with the Hamiltonian of system + ancilla. Clearly, every system admits of only one free state, namely, the Gibbs state at temperature $T$ for that system. Specifically, it is the state $\frac{1}{Z}e^{-H/kT}$ where $Z={\rm tr}(e^{-H/kT})$ and $H$ is the Hamiltonian for that system. Any other state on that system is then a nonfree resource of athermality relative to the temperature $T$. These include states that are of the Gibbs form for a temperature $T'$ that is different from $T$, as well as states that are not of the Gibbs form at all. 
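As a sanity check on these definitions (a sketch of ours; the two-qubit setup and all numerical values are illustrative), one can verify that a $T$-thermal operation built exactly as described, namely a Gibbs ancilla, an energy-preserving unitary, and a partial trace, leaves the Gibbs state invariant but moves a state of athermality:

```python
import numpy as np

kT = 1.0
E = 0.7
H = np.diag([0.0, E])                      # qubit Hamiltonian

def gibbs(H, kT):
    """Gibbs state exp(-H/kT)/Z for a diagonal Hamiltonian."""
    G = np.diag(np.exp(-np.diag(H) / kT))
    return G / np.trace(G)

# An energy-preserving unitary on system + ancilla: it acts only inside
# the degenerate {|01>, |10>} eigenspace of H_tot = H (x) 1 + 1 (x) H.
theta = 0.3
U = np.eye(4)
U[1:3, 1:3] = np.array([[np.cos(theta), -np.sin(theta)],
                        [np.sin(theta),  np.cos(theta)]])
H_tot = np.kron(H, np.eye(2)) + np.kron(np.eye(2), H)
assert np.allclose(U @ H_tot, H_tot @ U)   # [U, H_tot] = 0

def thermal_op(rho):
    """Append a Gibbs ancilla, apply U, trace out the ancilla."""
    joint = U @ np.kron(rho, gibbs(H, kT)) @ U.conj().T
    return joint.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)

# The Gibbs state is the unique free state: it is left invariant.
assert np.allclose(thermal_op(gibbs(H, kT)), gibbs(H, kT))

# A state of athermality (here, the pure excited state) is not invariant.
excited = np.diag([0.0, 1.0])
assert not np.allclose(thermal_op(excited), excited)
```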
As it turns out, the resource theory of nonuniformity is a full subcategory of the resource theory of $T$-athermality (for any $T$). One simply needs to restrict the systems in the resource theory of $T$-athermality to those that have a trivial Hamiltonian (so that all states of the system have the same energy)~\cite{nonequilibrium}. In this case, it suffices to specify the Hilbert space to specify the system; the constraint of being energy-preserving is trivial, so that all unitaries are allowed; and the Gibbs state (for any temperature) is just the completely mixed state. \end{example} \subsection{Resource theories of parallel-combinable processes} Recall that states, being processes of type $I\to A$ that map nothing to something, are a special rather than a generic kind of process. There are also general processes, which are morphisms of type $A\to B$ where $A$ is not necessarily isomorphic to the unit object $I$. Such a process should be thought of as taking a nontrivial input. In a partitioned process theory $({\bf C},{\bf C}_{\rm free})$, we can also consider the extent to which resource states and resource processes can be converted into one another by circuits composed entirely of free processes. Resource theories that include transformations are typically richer than those including just states. In some contexts, the distinction between resource states and general resource processes is described as {\em static} versus {\em dynamic} resources. We will consider the resource theory of general processes. In the next section, we will see that it is possible to make a distinction between combining resource processes in parallel and combining them in an arbitrary way. In anticipation of this distinction, the resource theories we consider in this section will be termed resource theories of \em parallel-combinable \em processes.
The resource theory that is defined in terms of the process theory with distinguished subtheory $({\bf C},{\bf C}_{\rm free})$ will be denoted ${\rm PC}({\bf C},{\bf C}_{\rm free})$. If the process theory includes a process $f:A\to B$ and it is possible to embed $f$ into a circuit composed entirely of processes from the free set in such a way that the overall circuit implements the process $g:C\to D$, then we can say that we have successfully converted the resource process $f$ into the process $g$ using the free processes. For example, in the theory of bipartite entanglement, this is precisely what happens with \em quantum teleportation\em: by consuming the resource of a maximally entangled state on two qubits, one can simulate a single use of a quantum channel that transmits one qubit from Alice to Bob, using local operations and classical communication. (We shall return to this example later.) Given a partitioned process theory $({\bf C}, {\bf C}_{\rm free})$, we construct a new symmetric monoidal category ${\rm PC}({\bf C}, {\bf C}_{\rm free})$ as follows. The objects of ${\rm PC}({\bf C},{\bf C}_{\rm free})$ are the processes of $({\bf C}, {\bf C}_{\rm free})$, that is, $\bigcup_{A, B\in|{\bf C}|}\!\!{\bf C}(A, B)$. When considering a particular process $f:A \to B$ as a resource, we think of it as a device which we have available in the laboratory and which we can compose with free operations, both in parallel and sequentially. Lemma~\ref{lm:combsall} will show that any such composition with free operations can be written as a circuit containing $f$ that has the form \begin{equation}\label{eq:comb} \includegraphics{Resource170914-figure13.pdf} \end{equation}\par\noindent where $\xi_1:A' \to A \otimes Z$ and $\xi_2: B \otimes Z \to B'$ are free processes.
The circuit fragment that has a hole in place of the dashed box, and which takes as input any process of type $A \to B$ and transforms it into a process of type $A' \to B'$, is called a \em 1-comb, \em following the terminology of \cite{comb}. In more precise terms, any 1-comb can be characterized by a triple $(Z, \xi_1, \xi_2)$ where $Z\in|{\bf C}|$, $\xi_1\in{\bf C}_{\rm free}(A',A\otimes Z)$ and $\xi_2\in{\bf C}_{\rm free}(B\otimes Z,B')$. One gets a process of type $A'\to B'$ by first applying the free operation $\xi_1$ to $A'$, then inserting a resource process $A\to B$ while doing nothing to $Z$, and finally applying the free operation $\xi_2$ to $B$ and $Z$. \begin{lem}\label{lm:combsall} Any circuit that contains a single occurrence of the process $f$ and only free processes otherwise can be put into the form of circuit~\eqref{eq:comb}, that is, in the form of a 1-comb that is built out of free processes, $\xi_1,\xi_2$, with the process $f$ slotted into the hole of the 1-comb. \end{lem} \begin{proof} \input{proof1} \end{proof} We say that two 1-combs are equivalent, $(Z,\xi_1,\xi_2)\sim(Z',\xi'_1,\xi'_2)$, if one has \[ \includegraphics{Resource170914-figure14.pdf} \] for any process $h$ between composite systems such that the types match. In other words, two 1-combs are equivalent as soon as they have the same operational behavior. The reason for introducing such an equivalence relation is that the auxiliary system $Z$ can be of arbitrary size; in particular, one can obtain a new $1$-comb from a given one by adjoining another auxiliary system to $Z$, which gets initialized in an arbitrary state and simply discarded afterwards. This new $1$-comb is equivalent to the original one, and as such it seems reasonable to regard both $1$-combs as implementing the same transformation between processes. As the diagram shows, 1-combs can also be applied to processes between composite systems. 
However, as far as the possibility of transforming a process $f$ into a process $g$ is concerned, this does not yield any greater generality: the assumption that any identity morphism is a free process implies that $\xi_1$ and $\xi_2$ can be enlarged such as to comprise the additional input and output wires of $f$. In any case, we define the morphisms or transformations of type $f\to g$ in the category ${\rm PC}({\bf C},{\bf C}_{\rm free})$ to be the equivalence classes of 1-combs that turn $f$ into $g$. We use equivalence classes for the principal reason that taking equivalence classes should guarantee in all cases of interest that even if ${\bf C}$ is a large category, there is only a set (instead of a proper class) of equivalence classes of 1-combs that turn $f$ into $g$. For example in the quantum case, one can show this by proving that for any $f$ and $g$, there is a Hilbert space dimension $d$ such that any 1-comb $f\to g$ is equivalent to one in which the auxiliary system $Z$ has dimension $\leq d$. We regard such size issues as a minor concern that can safely be ignored, which is what we do in the following. Sequential composition in ${\rm PC}({\bf C}, {\bf C}_{\rm free})$ is as follows: \[ \includegraphics{Resource170914-figure15.pdf} \] This diagram forms a new 1-comb in the obvious way, and it is straightforward to check that this respects equivalence of 1-combs. Similarly, parallel composition works like this: \[ \includegraphics{Resource170914-figure16.pdf} \] and the same remarks apply here. \begin{thm} \label{thm:proctorec} For any partitioned process theory $({\bf C},{\bf C}_{\rm free})$, the procedure outlined above allows one to define a symmetric monoidal category ${\rm PC}({\bf C},{\bf C}_{\rm free})$, and this can be interpreted as a resource theory in the sense of Definition~\ref{def:resourcetheory}. \end{thm} The proof is provided in Appendix~\ref{appendixA}. 
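The teleportation example mentioned earlier can be simulated concretely: a 1-comb whose free processes are Alice's Bell measurement and Bob's conditional Pauli correction, with the maximally entangled state slotted into the hole, implements the identity qubit channel. A pure-state sketch of ours (the particular encoding and normalizations below are assumptions for illustration):

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])

bell = np.array([1., 0., 0., 1.]) / np.sqrt(2)     # the resource state f

# Alice's Bell-measurement basis and Bob's conditional Pauli corrections.
bell_basis = [np.kron(I2, P) @ bell for P in (I2, X, Z, X @ Z)]
corrections = [I2, X, Z, Z @ X]

def teleport(psi):
    """LOCC 1-comb consuming `bell`: returns Bob's (renormalized) output
    for each of the four measurement outcomes, each of probability 1/4."""
    state = np.kron(psi, bell).reshape(4, 2)   # rows: Alice's two qubits
    return [2 * (C @ (M.conj() @ state))       # factor 2 renormalizes
            for M, C in zip(bell_basis, corrections)]

# Every branch reproduces the input: the identity channel is simulated.
psi = np.array([0.6, 0.8j])
for out in teleport(psi):
    assert np.allclose(out, psi)
```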
Note that our construction of the category ${\rm PC}({\bf C},{\bf C}_{\rm free})$ is a souped-up version of the twisted arrow category construction~\cite[p.227]{MacLane}. More precisely, in the conventional twisted arrow category associated to ${\bf C}$, the objects are the processes $f$ and the morphisms between two such processes are the $1$-combs~\eqref{eq:comb} in which the ancilla system $Z$ is trivial. This conventional twisted arrow category can also be regarded as the category of elements of the hom-functor $\mathrm{hom} : {\bf C} \times {\bf C}^\mathrm{op} \to {\bf Set}$~\cite{nLabtac}; we suspect that our ${\rm PC}({\bf C},{\bf C})$, or even the general ${\rm PC}({\bf C},{\bf C}_{\rm free})$, arises in a similar way from a variant of this construction for symmetric monoidal categories rather than plain categories. We now present some examples of resource theories of parallel-combinable processes that can be defined in terms of a partitioned process theory. We begin by demonstrating how the resource theory of communication channels of Example~\ref{ex:shannon} can be cast in this form. \begin{example} \label{ex:Shannon} The resource theory of \em two-way classical communication protocols\em. We start with a variant of the SMC of classical stochastic processes of Example~\ref{ex:classicalstochasticprocesses}. The objects in this new category are pairs of finite sets, that is $(A,B)$ with $A,B\in|{\bf FinStoch}|$, whose parallel composition is defined componentwise, \[ (A,B) \otimes (A',B') := (A\otimes A',B\otimes B'). \] Processes of type $(A, B)\to (A', B')$ can then be taken to be the elements of ${\bf FinStoch}(A\otimes B, A'\otimes B')$, that is, as stochastic maps from $A \times B$ to $A' \times B'$, depicted as: \[ \includegraphics{Resource170914-figure17.pdf} \] where the left and right regions refer to Alice and Bob respectively. These maps constitute a mathematical model for noisy two-way communication protocols. 
The free processes are those that are generated by local operations and states of shared randomness. Just as in Example~\ref{ex:bientangle}, the local operations are defined to be those processes $(A,B)\to (A',B')$, represented by stochastic maps $\xi\in{\bf FinStoch}(A\otimes B,A'\otimes B')$, which can be factored into a parallel composition themselves, \[ \xi = \xi_A \otimes \xi_B \: : \: A\otimes B \longrightarrow A'\otimes B', \] where $\xi_A \in{\bf FinStoch}(A , A')$ and $\xi_B \in{\bf FinStoch}(B,B')$. These are depicted graphically as: \[ \includegraphics{Resource170914-figure18.pdf} \] The local operations include states $\xi:I \longrightarrow A\otimes B$ that factor into a product state $\xi_A \otimes \xi_B\: : \: I \longrightarrow A\otimes B$. The free processes, however, include more than just the product states; they also include states of shared randomness, that is, states that are not of the product form. In other words, \em all \em states $\xi:I \longrightarrow A\otimes B$ are included in the free processes, \[ \includegraphics{Resource170914-figure19.pdf} \] which models the assumption that shared randomness is free. The nonfree resources, therefore, are the processes that have nontrivial inputs and which do not factor across the Alice-Bob partition. These include processes that describe a single use of a communication channel from Alice to Bob, or from Bob to Alice. For example, the process of Alice sending data to Bob (via a channel that has input type $A$ and output type $B$) has type $(A, I)\to (I, B)$ and can be depicted as a morphism $f\in{\bf FinStoch}(A\otimes I, I\otimes B)$ like this: \[ \includegraphics{Resource170914-figure20.pdf} \] In the theory of communication channels, one is typically interested in knowing whether a channel $f:(A,I) \to (I,B)$ from Alice to Bob can simulate another channel $g:(A',I) \to (I,B')$. To answer this question, one must consider the most general circuit of free processes that can be applied to $f$. 
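As a concrete instance of such a free circuit (our own illustration, not from the text), a shared random bit lets the parties symmetrize a channel: XORing the input with the shared bit before the single use of $f$, and the output with the same bit after it, is a 1-comb of free processes converting an asymmetric binary channel into a binary symmetric one.

```python
import numpy as np

X = np.array([[0., 1.], [1., 0.]])   # the deterministic bit-flip map

# A channel f from Alice to Bob with asymmetric noise: it flips
# 0 -> 1 with probability 0.1 and 1 -> 0 with probability 0.3.
f = np.array([[0.9, 0.3],
              [0.1, 0.7]])           # columns: p(y | x)

# 1-comb built from free processes and a uniformly random shared bit s:
# xi_A XORs the input with s, f is used once, xi_B XORs the output with s.
g = 0.5 * (f + X @ f @ X)

# The simulated channel g is a binary symmetric channel with crossover
# probability (0.1 + 0.3) / 2 = 0.2, and is still a valid stochastic map.
expected = np.array([[0.8, 0.2],
                     [0.2, 0.8]])
assert np.allclose(g, expected)
assert np.allclose(g.sum(axis=0), 1.0) and np.all(g >= 0)
```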
In the case where we allow the sender and recipient to share randomness ahead of time, the processing of the channel looks like this: \[ \includegraphics{Resource170914-figure21.pdf} \] One starts with a state of shared randomness $\xi: I \to S_A \otimes S_B$; then a free process $\xi_A: A'\otimes S_A\to A$ is applied to the input $A'$ and Alice's half of the shared randomness, $S_A$, to prepare the system $A$. This is done in parallel with the identity process on Bob's half of the shared randomness, $S_B$. Next, the channel $f:(A,I) \to (I,B)$ is implemented. Finally, a free process $\xi_B: B \otimes S_B \to B'$ is implemented on $B$ and $S_B$. Note that here the system $S_B$ is an instance of the ancillary system $Z$ that featured in the general theory. The restriction to channels that are only parallel-combinable arises when there can be no local processing between the uses of the channels. The resource theory of two-way classical communication protocols contains as a proper subcategory the resource theory of one-way classical communication channels, which was considered in Example~\ref{ex:shannon}. It suffices to consider the subcategory defined by the processes of type $(A,I)\to(I,B)$, corresponding to channels from Alice to Bob, and the product states. Note that the whole construction works likewise with any other SMC of processes in place of classical stochastic processes (see the next example, for instance) and can likewise be realised for any number of parties greater than two. \end{example} \begin{example}\label{ex:qcommchannels} The resource theory of \em two-way quantum communication protocols\em. We start with a variant of the SMC of quantum processes of Example~\ref{ex:quantumprocesses}.
The objects are pairs of Hilbert spaces, that is $(\mathcal{H}_A,\mathcal{H}_B)$, whose parallel composition is defined componentwise, \[ (\mathcal{H}_A,\mathcal{H}_B) \otimes (\mathcal{H}_{A'},\mathcal{H}_{B'}) := (\mathcal{H}_A\otimes \mathcal{H}_{A'},\mathcal{H}_B\otimes \mathcal{H}_{B'}). \] The free processes include all the local processes, that is, the completely positive trace-preserving maps that factorize with respect to the tensor partition of the process theory, as well as \em all \em states $s:I \longrightarrow \mathcal{H}_A\otimes \mathcal{H}_B$, which models the fact that quantum and classical correlations are free in this resource theory (analogously to how shared randomness is free in the resource theory of two-way classical communication protocols). Everything in the resource theory of quantum communication can be represented graphically in a manner analogous to the theory of classical communication. \end{example} \begin{example} In Example~\ref{ex:bientangle}, we considered the resource theory of {\em states} that resulted from considering the LOCC operations to be free processes. One can also consider the more general resource theory of {\em processes} that LOCC defines. Here, the resources include transformations and measurements in addition to states. The nonfree resource processes include, in addition to the entangled states, the non-LOCC operations, which are sometimes called the ``entangling operations''. An example of such a nonfree resource process is a single use of a quantum channel. Unlike the resource theory of quantum communication channels of Example~\ref{ex:qcommchannels}, where {\em every} channel, classical or quantum, is a nonfree resource, in the theory of entanglement, the classical channels are included among the free processes.
Here, the ancillary system $Z$ of the general theory is needed to accommodate the possibility of shared correlations between the two parties---either shared randomness or a shared entangled state---but also the possibility of classical communication. One can also consider the conversion of an entangled state into a quantum channel. The paradigmatic example of this is the quantum teleportation protocol, which is the conversion of a maximally entangled bipartite pure state into a single use of a noiseless quantum channel by LOCC. For example, for two qubits, by consuming the resource of a maximally entangled state $f$, one can simulate a single use of a quantum channel $g$ that transmits one qubit from Alice to Bob, using local operations $\xi_{\rm meas}$ (i.e.~a measurement) and $\xi_{\rm contr}$ (i.e.~a controlled unitary) and classical communication $\xi_{\rm comm}$. Graphically, this is depicted as follows: \[ \includegraphics{Resource170914-figure22.pdf} \] \end{example} \begin{example} In asymmetry theory, the nonfree resource processes are the non-$G$-covariant operations. For instance, in the resource theory of rotational asymmetry, the unitary that rotates a system about some spatial axis is a nonfree resource process, as is measuring the component of angular momentum of a system along some spatial axis~\cite{ahmadi2013wigner,marvian2012information}. \end{example} \begin{example} In nonuniformity theory, the nonfree resource processes are the non-noisy operations. An example is an erasure operation, taking an arbitrary state to a pure state. \end{example} \begin{example} In athermality theory, the nonfree resource processes are the athermal operations. Examples are heating, cooling, or doing work (for instance, lifting a weight). An example of a conversion from a resource state to a general resource process is a heat engine.
\end{example} \subsection{Resource theories of universally-combinable processes} \label{sec:squentialresources} The resource theory ${\rm PC}({\bf C},{\bf C}_{\rm free})$ has the following odd property. If one has access to two resource processes $f_1,f_2$, and one tries to use these in order to produce a certain target process $g$, then the transformation of type $f_1\otimes f_2\longrightarrow g$ looks like this, \[ \includegraphics{Resource170914-figure23.pdf} \] The problem with this kind of transformation is that $f_1$ and $f_2$ are necessarily composed in parallel. In particular, there is no way to try to produce $g$ by consuming $f_1$ and $f_2$ \em sequentially\em. Here, we would like to define a different resource theory in which there is no restriction at all on the manner in which a collection of resource processes may be consumed. Intuitively speaking, we regard resource processes as universally combinable, as in the laboratory, where pieces of equipment can be put together in arbitrary ways. We term this the resource theory of universally-combinable processes and denote it by ${\rm UC}({\bf C},{\bf C}_{\rm free})$. We now show how to construct ${\rm UC}({\bf C},{\bf C}_{\rm free})$ as a symmetric monoidal category in terms of $({\bf C},{\bf C}_{\rm free})$. We take the objects of ${\rm UC}({\bf C},{\bf C}_{\rm free})$ to be finite sequences of processes $(f_1,\ldots,f_n)$. Such a sequence represents a collection of resource processes that we may have available. For all practical purposes, the ordering in the sequence is irrelevant; but in order to actually obtain an SMC, we still need to keep track of the ordering as a matter of syntax. The parallel composition of two objects $(f_1,\ldots,f_n)$ and $(g_1,\ldots,g_m)$ is given by concatenation of sequences, \[ (f_1,\dots,f_n) \boxtimes (g_1,\ldots,g_m) := (f_1,\ldots,f_n,g_1,\ldots,g_m).
\] We write ``$\boxtimes$'' instead of ``$\otimes$'' in order not to confuse the composition $\boxtimes$, which we think of as an abstract operation of combining collections of processes, with the parallel composition $\otimes$, which stands for actual parallel execution of processes. For a list $(f)$ containing only one process, we also omit the brackets and simply write $f$. Then $(f_1,\ldots,f_n)$ can be identified with $f_1\boxtimes\ldots\boxtimes f_n$. If it is possible to embed each of the processes in the collection $(f_1,\ldots,f_n)$ into a circuit composed entirely of processes from the free set in such a way that the overall circuit implements the process $g$, then we can say that we have successfully converted the collection of resource processes $f_1\boxtimes\ldots\boxtimes f_n$ into the process $g$ using the free processes. Consider now a circuit that contains a single occurrence of each of the processes $(f_1,\ldots,f_n)$. In analogy with Lemma~\ref{lm:combsall}, it is not difficult to determine the most general way of converting a collection of processes $f_1\boxtimes\ldots\boxtimes f_n$ into a single process $g$. One simply generalizes~\eqref{eq:comb}. The generic circuit that has holes in place of the dashed boxes, and which takes as input a collection of processes and transforms them into a single process, is called an \em $n$-comb, \em following the terminology of~\cite{comb}. It looks like this: \begin{equation}\label{eq:ncomb} \includegraphics{Resource170914-figure24.pdf} \end{equation}\par\noindent where $\sigma$ is a permutation on $\{1,\ldots,n\}$, and the $\xi_i$ are free processes. The additional wires parallel to the $f_{\sigma(i)}$ carry arbitrary systems $Z_i$.
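The comb construction can be made concrete in a toy linear-algebra sketch. The following is our own illustrative rendering (all matrices and names are assumptions, not from the text): classical channels are modelled as column-stochastic matrices, parallel composition as the Kronecker product, and sequential composition as matrix multiplication, so slotting a resource channel $f$ into a $1$-comb computes $g = \xi_1\circ(f\otimes\mathrm{id}_Z)\circ\xi_0$.

```python
import numpy as np

def is_channel(M):
    """Column-stochastic: nonnegative entries, each column sums to 1."""
    return bool(np.all(M >= 0)) and np.allclose(M.sum(axis=0), 1.0)

def comb_apply(xi0, xi1, f, dim_Z):
    """Slot the resource channel f into the hole of the comb (xi0, xi1).

    A side wire of dimension dim_Z is carried past f by the identity,
    mirroring g = xi1 . (f (x) id_Z) . xi0 in the diagram.
    """
    side = np.kron(f, np.eye(dim_Z))
    return xi1 @ side @ xi0

rng = np.random.default_rng(0)

def random_channel(n_out, n_in):
    M = rng.random((n_out, n_in))
    return M / M.sum(axis=0)        # normalize each column

dim_Z = 2
f   = random_channel(3, 3)          # the resource process
xi0 = random_channel(3 * dim_Z, 2)  # free pre-processing
xi1 = random_channel(2, 3 * dim_Z)  # free post-processing

g = comb_apply(xi0, xi1, f, dim_Z)
assert is_channel(g)                # combs map channels to channels
```

Products and Kronecker products of column-stochastic matrices are again column-stochastic, which is why the comb sends channels to channels regardless of the free processes chosen.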
There is a straightforward definition of the equivalence of $n$-combs which generalizes equivalence of ordinary combs: two $n$-combs are equivalent if for all sequences of processes $(h_1,\ldots,h_n)$, where each $h_i$ may have composite input and output systems of which only part connects to the $n$-comb, the resulting composite process obtained by filling the blanks with the $h_{\sigma(i)}$ is the same. One can now show that any method for combining the resource processes $f_i$ together with free processes into a composite process is of this form: \begin{lem} Any circuit that contains a single occurrence of each of the processes $(f_1,\ldots,f_n)$ and only free processes otherwise can be put in the form of the circuit~\eqref{eq:ncomb}, that is, in the form of an $n$-comb built of free processes, $\xi_0,\ldots, \xi_n$, with the processes $(f_1,\ldots,f_n)$ slotted into the holes of the comb in some order (given by the permutation $\sigma$). \end{lem} \begin{proof} (Informal sketch) We need to show that any expression in the language of symmetric monoidal categories that includes a single occurrence of each of the morphisms $(f_1,\ldots,f_n)$ and only free morphisms otherwise can be put in the form of such an $n$-comb. We assume that the expression is given in terms of a diagram in the graphical calculus for SMCs, and choose a decomposition into layers as in~\cite[Sec.~1.3]{JS}. Then each layer contains at most one non-trivial process and otherwise wires that carry identity processes. If several consecutive layers contain only free processes, then these can be composed to a composite free process. In the layers containing the $f_i$, one can apply the symmetry in order to move any additional wires to the right of the resource process. This results in the desired form~\eqref{eq:ncomb}. 
\end{proof} At this level, the mathematical structure that most accurately captures these many-to-one transformations is that of a \em symmetric multicategory\em~\cite[2.2.21]{Leinster}, which we denote ${\rm M}({\bf C},{\bf C}_{\rm free})$; see also~\cite{lambek1989multicategories} for multicategories in general. The objects of this multicategory are precisely all the resource processes. The maps $(f_1,\ldots,f_n)\longrightarrow g$ in this multicategory are defined to be the equivalence classes of $n$-combs which transform $(f_1,\ldots,f_n)$ into $g$. These can be composed: given an $m$-comb $(g_1,\ldots,g_m)\longrightarrow h$ and an $n_j$-comb $(f_{j,1},\ldots,f_{j,n_j})\longrightarrow g_j$ for every $j=1,\ldots,m$, one obtains an $(n_1+\ldots+n_m)$-comb \[ (f_{1,1},\ldots,f_{1,n_1},\ldots,f_{m,1},\ldots,f_{m,n_m}) \longrightarrow h \] by plugging the $n_j$-combs into the holes of the $m$-comb. There is also an obvious way to turn an $n$-comb $(f_1,\ldots,f_n)\longrightarrow g$ into a new $n$-comb $(f_{\sigma(1)},\ldots,f_{\sigma(n)})\longrightarrow g$ for any permutation $\sigma$. We leave it to the reader to verify that this data satisfies the axioms of a symmetric multicategory. However, we also would like to be able to have transformations which output collections of processes, such as $g_1\boxtimes g_2$, rather than individual processes. What should such a transformation look like? One might be tempted to say that it should be an $n$-comb producing a composite process $g_1\otimes g_2$. While this might give rise to a resource theory as well, it is not the appropriate definition to make, since the constituents of a composite process like $g_1\otimes g_2$ cannot be accessed individually, but only in parallel. In particular, given a ``black box'' implementing the process $g_1\otimes g_2$, there is no way of obtaining the process $g_1\circ g_2$ (because one is constrained to a single use of the black box). 
Hence, according to the philosophy of ``universal combinability'' followed in this section, a transformation which produces $g_1\boxtimes g_2$ from a collection of resource processes $(f_1,\ldots,f_n)$ should produce $g_1$ from a subcollection of the $f_i$'s, and $g_2$ from the remaining $f_i$'s. In the laboratory picture, this corresponds to building $g_1$ from a collection of ingredients and building $g_2$ independently from another collection of ingredients. The following is a general categorical construction for turning a symmetric multicategory into an SMC: \begin{defn} In the resource theory ${\rm UC}({\bf C},{\bf C}_{\rm free})$, a transformation of type $f_1\boxtimes\ldots\boxtimes f_n\longrightarrow g_1\boxtimes\ldots\boxtimes g_m$ consists of a function $\alpha:\set{1,\ldots,n} \to \set{1,\ldots,m}$ and maps \[ (f_{k_1},\ldots,f_{k_{n_j}}) \longrightarrow g_j \] in the multicategory ${\rm M}({\bf C},{\bf C}_{\rm free})$, where $k_1,\ldots,k_{n_j}$ enumerates the elements of $\alpha^{-1}(j)$. \end{defn} Intuitively, the function $\alpha$ allocates the resource process $f_i$ to the production of $g_{\alpha(i)}$. The definition entails that every $f_i$ gets allocated to exactly one $g_j$ in this way; in particular, every $f_i$ has to be consumed somewhere. See Remark~\ref{rem:junk} for why we impose this. It should be clear how to define parallel composition $\boxtimes$ of such transformations. Sequential composition is induced from the composition in the multicategory ${\rm M}({\bf C},{\bf C}_{\rm free})$. The details are tedious but straightforward, so we omit them. \begin{example} The resource theory of classical communication channels considered in Example~\ref{ex:Shannon} can be easily adapted to the case of universally-combinable processes. This is the appropriate framework for the case wherein the uses of different channels can be interspersed with local processes. 
An example of a conversion of channels $(f_{k_1},\ldots,f_{k_{n_j}}) \longrightarrow g_j$, for instance, is depicted as follows: \[ \includegraphics{Resource170914-figure25.pdf} \] \end{example} \section{Theories of resource convertibility} \subsection{Definition} Often, the most important questions that one asks of a resource theory $({\bf D},\circ,\otimes,I)$ concern whether or not a given resource conversion is possible, and not the particular process by which it occurs. Given certain objects $A,B\in|{\bf D}|$, does there exist a transformation $A\to B$, i.e., is ${\bf D}(A,B)$ nonempty? Answering questions of this type does not require full knowledge of the category ${\bf D}$. It is enough to know whether or not there exists a transformation $A\to B$; how many transformations there are of this type and how they can be found is also a relevant question, but one that we now would like to regard as secondary. With this in mind, we write $A\succeq B$ if there exists a transformation of type $A\to B$, and $A\not\succeq B$ if there does not. This ``$\succeq$'' is hence a preordering, meaning a reflexive and transitive binary relation \[ A\succeq A\qquad \mbox{and}\qquad A\succeq B,\: B\succeq C \:\Rightarrow A\succeq C. \] Intuitively, this says that every $A$ can be transformed into itself; and if $A$ can be transformed into $B$ and $B$ into $C$, then $A$ can also be transformed into $C$. Now that we are no longer interested in the different morphisms of ${\bf D}$, we can as well use lowercase notation for the resource objects without the danger of confusion. This is what we do from now on in order to improve readability. 
\begin{defn} \label{defn:mereres} A \em theory of resource convertibility \em $(R,+,\succeq,0)$ is a set $R$ equipped with a binary operation $+$, a preorder $\succeq$ and a distinguished element $0\in R$ such that for all $a, b, c \in R$, \begin{align*} a + (b + c) &\simeq (a + b) + c, & a + b & \simeq b + a, & a + 0 &\simeq a, \end{align*} where $\simeq$ is the equivalence relation induced by the preorder, i.e.~$a\simeq b$ stands for $a\succeq b$ and $b \succeq a$. We require these structures to interact as follows: for all $a, b, c, d \in R$, \begin{equation}\label{eq:posetbefunct} a \succeq b\ ,\ c \succeq d\ \Rightarrow\ a + c \succeq b + d . \end{equation}\par\noindent \end{defn} In the context of enriched category theory~\cite{BorceuxStubbe}, this definition can be succinctly summarized by saying that a theory of resource convertibility is a symmetric monoidal category enriched over the poset of truth values $\{0,1\}$. The correspondence to the definition in terms of the preorder is that $R(a,b)=1$ corresponds to $a\succeq b$, while $R(a,b)=0$ represents $a\not\succeq b$. Also, the parallel composition ``$\otimes$'' of the SMC corresponds to the binary operation ``$+$'' of the theory of resource convertibility. \begin{thm} \label{thm:decat} Let $({\bf D},\otimes,\circ,I)$ be a resource theory. If we define \[ R:= |{\bf D}|, \] and for all $a,b\in |{\bf D}|$, \[ a + b := a\otimes b, \] \[ a\succeq b \: :\Longleftrightarrow\: {\bf D}(a,b)\neq\emptyset, \] \[ 0 := I, \] then $(R,+,\succeq,0)$ is a theory of resource convertibility. \end{thm} \begin{proof} Straightforward. \end{proof} This construction captures the idea that we are only interested in whether a transformation of type $a\to b$ exists or not.
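As an executable illustration of Definition~\ref{defn:mereres}, the following sketch checks the axioms on the simplest quantitative instance, $(\mathbb{N},+,\geq,0)$, restricted to a finite window; this toy instance and all names in the code are our own illustrative assumptions, not from the text.

```python
from itertools import product

# Resources are copy-counts, combining is addition, and a >= b means
# "a is convertible into b".  Purely illustrative.
R = range(6)                     # a finite window of the resource set
plus = lambda a, b: a + b
geq  = lambda a, b: a >= b       # the convertibility preorder
sim  = lambda a, b: geq(a, b) and geq(b, a)   # induced equivalence

# preorder axioms: reflexivity and transitivity
assert all(geq(a, a) for a in R)
assert all(not (geq(a, b) and geq(b, c)) or geq(a, c)
           for a, b, c in product(R, repeat=3))

# commutative-monoid axioms, up to the equivalence ~
assert all(sim(plus(a, plus(b, c)), plus(plus(a, b), c))
           for a, b, c in product(R, repeat=3))
assert all(sim(plus(a, b), plus(b, a)) for a, b in product(R, repeat=2))
assert all(sim(plus(a, 0), a) for a in R)

# compatibility: a >= b and c >= d imply a + c >= b + d
assert all(not (geq(a, b) and geq(c, d)) or geq(plus(a, c), plus(b, d))
           for a, b, c, d in product(R, repeat=4))
```

The same four checks apply verbatim to any candidate $(R,+,\succeq,0)$ given finite samples of it; only `plus` and `geq` change.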
One can think of it as a ``partial decategorification'': instead of \em completely \em forgetting the morphisms in ${\bf D}$ and the way in which they compose, the resulting theory of resource convertibility only remembers a small remnant of categorical structure, namely whether there exists a morphism between any pair of objects or not. In terms of enriched category theory, one can understand this construction as a change of base from enrichment over ${\bf Set}$ to enrichment over the poset of truth values $\{0,1\}$, along the monoidal functor ${\bf Set}\to\{0,1\}$ which maps the empty set to $0$ and any non-empty set to $1$. One of the main goals when studying a resource theory is to find necessary and sufficient criteria for when a transformation of type $a\to b$ exists, i.e.~for when $a\succeq b$. In other words, one tries to find a way of characterizing the ordering relation $\succeq$. An answer to this problem typically consists in an algorithm which takes resource objects $a$ and $b$ as input and returns the answer to the question ``Is $a\succeq b$ or $a\not\succeq b$?'' The prototypical example is the theory of bipartite entanglement (Example~\ref{ex:bientangle}). Starting with the theory of bipartite entanglement as a partitioned process theory defined by LOCC, as in Example~\ref{ex:bientangle}, one can turn it into a resource theory as in Theorem~\ref{thm:proctorec} and then regard it as a theory of resource convertibility as in Theorem~\ref{thm:decat}. Nielsen has shown that the convertibility relation between pure states can be algorithmically decided with the help of a majorization criterion, which constitutes a necessary and sufficient condition for $a\succeq b$~\cite{Nielsen}. To emphasize the importance of the monoid structure, we will present two toy examples of resource theories that differ \em only \em in their monoid structure.
\begin{example} \label{ex:food} Consider the resource theory of food, where for simplicity we only consider apples and bananas. A resource object is then a pair $(a,b)$ where $a,b\in\mathbb{N}$ stand for a number of apples and a number of bananas. E.g.~$(3,4)$ is the resource object ``3 apples and 4 bananas''. These can be combined via $\otimes$ in the obvious way by adding the numbers of each type of food, \[ (a,b) \otimes (a',b') := (a+a',b+b'). \] The only transformations that we declare to exist in this resource theory correspond to eating food: for example, there is one and only one morphism from $(2,3)$ to $(1,0)$, while there is no morphism going the other way. As is necessarily the case with a mathematical model of the real world, this toy model only captures a very small part of the phenomena of interest. A slightly more realistic model would include the gain of energy of a person from eating food as well as some agricultural processes used for producing food. The present example is as simplistic as possible in order to illustrate the mathematical structures involved. \end{example} While this example has a strong quantitative aspect in the sense that the value of an object as a resource depends strongly on the number of pieces of each type of food, there are also resource theories which do not display this aspect at all: \begin{example} \label{ex:know} While having two apples may be considered better than having only one apple, there also are entities for which possessing a large quantity is no better than possessing a small quantity. For example, \em proficiency \em has this property. For simplicity, let us assume that we only consider two sorts of proficiency, namely the proficiency level in arithmetic, measured by a number $a \in\mathbb{N}$, and the proficiency level in biology, measured by a number $b\in\mathbb{N}$. 
In other words, an object in our simplified resource theory of proficiency is a pair $(a,b)$ consisting of two proficiency levels $a,b\in\mathbb{N}$. When combining two such pairs, for example when seeking to determine the proficiency level in each field of a group consisting of two people, one should take it to be the higher one of the constituent proficiency levels, \[ (a,b) \otimes (a',b') := \left( \max(a,a'), \max(b,b') \right) . \] In other words, the tensor of two proficiency levels is the maximum of the two. Having an expert in arithmetic and an expert in biology is just as good as having one person who is an expert in both, or having \em two \em people who are experts in both. As in the resource theory of food, the only kind of transformation that we consider is losing proficiency. More precisely, we stipulate that there is one and only one transformation of type $(a,b)\to (a',b')$ if and only if $a\geq a'$ and $b\geq b'$. \end{example} In both cases, a resource object $(a,b)$ can be transformed into $(a',b')$ if and only if $a\geq a'$ and $b\geq b'$. Hence the preorder ``$\succeq$'' can, for both theories, be illustrated by the Hasse diagram \[ \includegraphics{Resource170914-figure26.pdf} \] which is the partially ordered set \[ (\mathbb{N}, \geq) \times(\mathbb{N}, \geq). \] However, the binary operation ``$+$'' that defines the monoid structure is different: in the theory of food it is given by component-wise addition of natural numbers; in the theory of proficiency, it is given by the order-theoretic supremum. The theories have underlying commutative monoids \[ (\mathbb{N}, +, 0) \times(\mathbb{N}, +, 0)\qquad{\rm vs.}\qquad (\mathbb{N}, \vee, 0) \times(\mathbb{N}, \vee, 0) . \] This difference makes the two theories behave quite differently.
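The contrast between the two toy theories can be restated executably. The sketch below (an illustrative rendering of the two examples, with all names our own) confirms that the two tensors are both compatible with the shared preorder, but only the proficiency tensor is idempotent.

```python
from itertools import product

# Resource objects are pairs (a, b) of naturals; the preorder is shared.
geq = lambda x, y: x[0] >= y[0] and x[1] >= y[1]

plus_food = lambda x, y: (x[0] + y[0], x[1] + y[1])          # componentwise sum
plus_prof = lambda x, y: (max(x[0], y[0]), max(x[1], y[1]))  # componentwise max

grid = list(product(range(4), repeat=2))

# proficiency is idempotent (x + x ~ x); food is not
assert all(plus_prof(x, x) == x for x in grid)
assert not all(plus_food(x, x) == x for x in grid)

# both tensors satisfy the compatibility condition with the preorder
for op in (plus_food, plus_prof):
    assert all(not (geq(x, y) and geq(u, v)) or geq(op(x, u), op(y, v))
               for x, y, u, v in product(grid, repeat=4))
```

So the two theories are literally indistinguishable at the level of $\succeq$ alone; every difference between them lives in the monoid operation.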
For example, catalysis (Definition~\ref{catalysis}) is impossible in the resource theory of food since borrowing food from someone else does not alleviate our hunger if we have to return the borrowed food eventually. In contrast, catalysis is possible in the resource theory of proficiency, since borrowing a proficient person who teaches us their knowledge does not degrade that person's proficiency. \begin{rem} If $a$ and $b$ are objects in the original resource theory ${\bf D}$, then $a\simeq b$ does not mean that $a$ and $b$ are isomorphic in ${\bf D}$. Rather, it only means that there is a morphism of type $a\to b$ and one of type $b\to a$, but it does not follow that either composition $a\to b\to a$ or $b\to a\to b$ is necessarily equal to the identity morphism. As long as we are only interested in \em whether \em a certain transformation $a\to b$ is possible, we should consider ${\bf D}$ as a theory of resource convertibility, and the adequate notion of $a$ and $b$ ``being the same'' is given by ``$\simeq$''. However, if we are also interested in \em how \em to turn $a$ into $b$, then we need to consider ${\bf D}$ as a full-fledged SMC, and the adequate notion of ``being the same'' is isomorphism. For example, if $a$ and $b$ are isomorphic, then there is a bijective correspondence between transformations $a\to c$ and transformations $b\to c$ for any $c$. However, if we only know that $a\simeq b$, then this is not necessarily the case; we only know that a transformation $a\to c$ \em exists \em if and only if a transformation $b\to c$ exists. \end{rem} \begin{rem} Just as the appropriate notion of ``being the same'' between two categories is that of an equivalence of categories instead of isomorphism, the notion of being the same for two theories of resource convertibility, $R$ and $S$, is weaker than isomorphism. 
We say that $R$ and $S$ are \em equivalent\em, if there are functions $f:R\to S$ and $g:S\to R$ such that $g(f(a)) \simeq a$ for all $a\in R$, and $f(g(b)) \simeq b$ for all $b\in S$. Alternatively, if we \em define \em two resource objects $a,b\in R$ to be equal, $a=b$, if $a\simeq b$, then this reduces precisely to the definition of isomorphism, which reads $g(f(a)) = a$ for all $a\in R$, and similarly the other way around. After all, $a\simeq b$ means that $a$ and $b$ are perfectly interchangeable as resource objects, and as such there is no need to distinguish them. For this reason, we believe that ``$\simeq$'' is the appropriate definition of equality of resource objects. So although we always write ``$\simeq$'' in the context of a theory of resource convertibility, there is no harm or loss of generality in replacing this by ``$=$''. \end{rem} \begin{remark} There are examples of mathematical structures satisfying the conditions of Definition~\ref{defn:mereres} that do not have an obvious interpretation in terms of resource theories. One such example is lattice ordered groups~\cite{BirkhoffLatticeGroup}, and another one is the commutative case of quantales \cite{Mulvey}, where there is a distributive law between the monoid multiplication and the suprema of the ordering. Yet another example is the Cuntz semigroup from $C^*$-algebra theory~\cite{Cuntz}. We hope that this makes it clear that Definition~\ref{defn:mereres} is also of purely mathematical interest, and illustrates part of the wide range of phenomena that it captures. \end{remark} \subsection{A bit of phenomenology} We now describe some of the phenomena that may occur in theories of resource convertibility as in Definition~\ref{defn:mereres} and prove some general results about when these phenomena occur and when they do not. We regard this as the very beginnings of a comprehensive study of the phenomenology of theories of resource convertibility and the development of general results about them. 
Throughout, $R$ is a theory of resource convertibility. The first such phenomenon that we would like to discuss has its origin in the resource theory of chemistry (Example~\ref{ex:chem}): \em catalysis\em. \begin{defn} \label{catalysis} A resource $c\in R$ is a \em catalyst \em for $a,b\in R$ if $a\not\succeq b$, but $a+c \succeq b+c$. \end{defn} So although there is no way to transform $a$ into $b$ in $R$, it \em is \em possible to do so if one also has the catalyst $c$ available, since then one can turn $a+c$ into $b+c$. The power of a catalyst lies in the fact that $c$ is also a result of the transformation, and hence can be reused in other transformations. Only one copy of the catalyst is needed in order to transform any number of $a$'s into $b$'s. \begin{example} \label{ex:catalyst} In the resource theory of compass-and-straightedge constructions, presented in the introduction, the figure consisting of a circle of unit radius and a square of area $\pi$ can be regarded as a catalyst for the transformation of turning a circle into a square of the same area, i.e.~for ``squaring the circle''. This works as follows: for a given circle, start by constructing its center. Its radius is hence available as a line segment, which can be compared with the radius of the unit circle. Scaling the reference square of area $\pi$ by the same factor yields a square of the same area as the original circle. \end{example} It is an important question to determine whether the transformation of some resource $a$ into some resource $b$ is possible with the help of a catalyst, or if it remains impossible even with any other resource as a candidate catalyst. More specifically, it is also relevant to know whether catalysis is possible at all in $R$: \begin{defn} $R$ is \em catalysis-free \em if \[ a + c \succeq b + c \:\Longrightarrow\: a\succeq b. 
\] \end{defn} In order to showcase the possibility of deducing general mathematical theorems that apply to all theories of resource convertibility, we now derive a criterion for when such a theory is catalysis-free. This involves some other properties that a theory of resource convertibility may or may not possess and which we now define. \begin{defn} \label{def:nonintact} $R$ is \em non-interacting \em if \[ a\succeq b_1 + b_2 \quad\Longrightarrow\quad \exists a_1,a_2\in R, \: a \simeq a_1 + a_2,\: a_1 \succeq b_1, \: a_2\succeq b_2. \] \end{defn} Intuitively, $R$ is non-interacting if every transformation which outputs a combination of two resources can be decomposed into two transformations, each of which outputs a constituent resource. In the theory of ordered abelian groups, this condition is also known as the \em Riesz decomposition property\em. As a general class of examples, take the resource theory ${\rm UC}({\bf C},{\bf C}_{\rm free})$ generated by a theory with free processes $({\bf C},{\bf C}_{\rm free})$. Such a theory is non-interacting by definition, since in order to produce a collection of resource processes, one needs to produce each one individually. \begin{defn} $R$ is \em quantity-like \em if \[ a_1 + a_2 \simeq b_1 + b_2, \: a_1\succeq b_1 \quad\Longrightarrow\quad b_2\succeq a_2 \] \end{defn} One may think of this as follows: if $a_1+a_2\simeq b_1+b_2$, then $a_1+a_2$ and $b_1+b_2$ are of equal value as resources; but by the assumption, $a_1\succeq b_1$, the resource $a_1$ is at least as valuable as $b_1$. Hence, assuming a certain conservation-of-value property, we conclude that $b_2$ must be at least as valuable as $a_2$. Since we have not made precise the notion of value, this explanation is nothing but an intuition. In a quantity-like resource theory, the resource objects typically have an extensive flavour, like ``volume'' or ``mass''. 
For example, the resource theory of food (Example~\ref{ex:food}) is quantity-like, while the theory of proficiency (Example~\ref{ex:know}) is not. \begin{thm} If $R$ is non-interacting and quantity-like, then $R$ is catalysis-free. \end{thm} \begin{proof} Assume that $a + c \succeq b + c$. Since $R$ is non-interacting, we have $a + c \simeq a_1 + a_2$ with $a_1 \succeq b$ and $a_2 \succeq c$. Because $R$ is quantity-like, $a + c \simeq a_1 + a_2$ and $a_2 \succeq c$ imply $a \succeq a_1$. From $a \succeq a_1$ and $a_1 \succeq b$ and using transitivity, we conclude that $a\succeq b$. \end{proof} This is our first general result about theories of resource convertibility. It provides a non-trivial sufficient criterion for the absence of catalysis. \begin{prop}[No-cloning] \label{prop:noclone} If $R$ is quantity-like, then \[ a \succeq a+a \quad\Longleftrightarrow\quad 0 \succeq a. \] \end{prop} This result states that for a theory of resource convertibility that is quantity-like, a resource $a$ can be cloned if and only if it is actually worthless, i.e.~it can be produced from nothing. In other words, no non-trivial resource can be cloned. In particular, this means that each non-trivial resource is finite in character: if it were infinite, like the number of rooms available in Hilbert's hotel, then one could produce clones of it without any cost. \begin{proof} We have $a\succeq a$, hence $0\succeq a$ trivially implies that $a+0\succeq a+a$. Conversely, since $R$ is quantity-like, $a+0\succeq a+a$ and $a\succeq a$ imply that $0\succeq a$. \end{proof} \begin{example} \label{ex:math} If we consider the resource theory wherein mathematical propositions are the objects and proofs are the morphisms (described in the introduction) with \em linear logic \em \cite{Girard} in the background, then resource objects also cannot be cloned.
For this reason, linear logic is often referred to as a ``resource sensitive logic'', or in Girard's own words: ``While classical logic is about truth, linear logic is about food''. \end{example} The opposite extreme to being quantity-like is this: \begin{defn} $R$ is \em quality-like \em if $a+a\simeq a$ for all $a\in R$. \end{defn} For example, the resource theory of proficiency (Example~\ref{ex:know}) is quality-like, while the resource theory of food (Example~\ref{ex:food}) is not. Being quality-like means that the number of copies that one has available of a resource is irrelevant. \begin{prop} If $R$ is quality-like, then the following conditions are equivalent for any $a,b\in R$: \begin{enumerate} \item $a+a \succeq b$, \item $a\succeq b$, \item $a \succeq b+b$. \end{enumerate} Conversely, if these conditions are equivalent, then $R$ is quality-like. \end{prop} The proof is straightforward. In Proposition~\ref{prop:noclone}, we have encountered the condition $0\succeq a$, i.e.~that $a$ can be produced from nothing. Dually, we can ask whether $a$ can be turned into nothing: \begin{defn} A resource $a\in R$ is \em freely disposable \em if $a\succeq 0$. \end{defn} \begin{rem} \label{rem:junk} In many real-world examples, disposing of a resource does itself come at a cost, and the given resource can be thought of as having a negative value; think e.g.~of nuclear waste, the disposal of which requires a sizeable investment of other resources, such as treatment, safe storage, and decay time. In order to capture this sort of phenomenon, the general theory of resource theories should not assume that all resources are freely disposable. \end{rem} \begin{defn} If every $a\in R$ is freely disposable, then we say that $R$ is \em waste-free\em. \end{defn} One should keep in mind that this is a mathematical definition which may be satisfied for a given resource theory even when its resource objects cannot be safely disposed of in practice.
Being waste-free is equivalent to $0\in R$ being a bottom element of the preorder $\succeq$. In particular, it follows that if $0\succeq a$, then $a\simeq 0$. If $R$ is waste-free, then it follows that for all $a,b\in R$, \begin{equation} \label{eq:nonneg} a + b \succeq a, \end{equation}\par\noindent which means that the operation $\underline{\phantom{x}} + b: R\to R$ is inflationary for any $b$, i.e.~its output always dominates its input. We may think of this property as saying that, as a resource, the whole is always greater than or equal to each of its parts. If $R$ is not waste-free, then this is not necessarily the case. \begin{example} If $({\bf C},{\bf C}_{\rm free})$ is a theory with free processes such that ${\bf C}(I,I)=\{1_I\}$ and such that for every object $A\in{\bf C}$, there exists a process $\psi_A:I\to A$ and a process $\phi_A:A\to I$, then both resource theories of processes ${\rm PC}({\bf C},{\bf C}_{\rm free})$ and ${\rm UC}({\bf C},{\bf C}_{\rm free})$ are waste-free. This is easy to see by plugging the open ends of an undesired process $f:A\to B$ with $\psi_A$ and $\phi_B$, \[ \includegraphics{Resource170914-figure27.pdf} \] Since this is a process of type $I\to I$, it must necessarily be $1_I$, which is exactly the trivial resource $0$. \end{example} Finally, we end this section with one last property that a theory of resource convertibility may or may not have and that may be of interest: \begin{defn} $R$ has the \em Riesz interpolation property \em if, whenever $(a_i)_{i\in I}$ and $(b_j)_{j\in J}$ are finite families of resources with $a_i\succeq b_j$ for all $i$ and $j$, there exists $c\in R$ such that \[ a_i\succeq c\succeq b_j. \] \end{defn} This is a well-known property for partially ordered sets. In our context of resource theories, we think of $c$ as a ``proxy resource'' for turning an $a_i$ into a $b_j$.
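As a concrete instance (our own illustrative check, not a claim from the text), the product order $(\mathbb{N},\geq)\times(\mathbb{N},\geq)$ underlying the food and proficiency examples has the Riesz interpolation property: the componentwise minimum of the upper family always serves as an interpolant $c$.

```python
from itertools import product

geq = lambda x, y: x[0] >= y[0] and x[1] >= y[1]

def interpolant(uppers):
    """Componentwise min of the upper family (one valid choice of c)."""
    return (min(a[0] for a in uppers), min(a[1] for a in uppers))

grid = list(product(range(3), repeat=2))
for uppers in product(grid, repeat=2):            # families (a_1, a_2)
    for lowers in product(grid, repeat=2):        # families (b_1, b_2)
        if all(geq(a, b) for a in uppers for b in lowers):
            c = interpolant(uppers)
            assert all(geq(a, c) for a in uppers)  # a_i >= c
            assert all(geq(c, b) for b in lowers)  # c >= b_j
```

The check is exhaustive over all two-element families on the finite grid; the same argument (each coordinate of the minimum still dominates every $b_j$) works for families of any finite size.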
\section{Quantitative concepts for theories of resource convertibility} \subsection{Monotones} There is a long tradition in science of ``measuring'' or ``quantifying'' the utility of a resource object in terms of a number assigned to it. In our approach, this takes the following form: \begin{defn} A \em monotone \em on $R$ is a function $M:R\to\mathbb{R}$ such that \[ a\succeq b \quad\Longrightarrow\quad M(a) \geq M(b). \] \end{defn} The main use of a monotone lies in detecting the non-convertibility of a resource $a$ into a resource $b$: if $M(a) < M(b)$, then $a\not\succeq b$. The converse is in general not true, since $\succeq$ is only a preorder and need not be total. In this case, it is not possible to capture the ``value'' of a resource in terms of a single number. However, a \em family \em of monotones $(M_i)_{i\in I}$ may be \em complete \em in the sense that the monotones in this family characterize the order completely, \[ a\succeq b \quad\Longleftrightarrow\quad M_i(a)\geq M_i(b) \:\forall i\in I. \] \begin{prop} For any $R$ there exists a complete family of monotones. \end{prop} The proof is very simple. \begin{proof} We take the index set of the family to be $I:=R$ itself. Then for any $i\in I$, define the monotone $M_i:R\to\mathbb{R}$ via \[ M_i(a) := \begin{cases} 1 & \textrm{if } a\succeq i\\ 0 & \textrm{if } a\not\succeq i. \end{cases} \] This is a monotone thanks to transitivity of $\succeq$. To show completeness, we start with a pair $a,b\in R$ with $a\not\succeq b$ and find a monotone in the family which witnesses this. And indeed, $M_b$ does the job, since $M_b(b)=1$, but $M_b(a)=0$. \end{proof} Intuitively, we have simply defined, for every resource, the monotone that describes whether or not any other resource is convertible to it. Given this monotone for every resource, one can obviously deduce whether any given conversion is possible or not, hence the set of monotones is complete.
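The monotones $M_i$ from the proof can be instantiated directly. The sketch below restricts to a finite grid of the product order used in the earlier examples (an illustrative choice of $R$, with names of our own) and verifies both monotonicity and completeness exhaustively.

```python
from itertools import product

geq = lambda x, y: x[0] >= y[0] and x[1] >= y[1]
R = list(product(range(3), repeat=2))   # finite sample of resource objects

def M(i):
    """The hom-monotone M_i: sends a to 1 iff a >= i, else 0."""
    return lambda a: 1 if geq(a, i) else 0

# each M_i is a monotone (by transitivity of the preorder) ...
assert all(not geq(a, b) or M(i)(a) >= M(i)(b)
           for i in R for a, b in product(R, repeat=2))

# ... and the family (M_i) for i in R characterizes the preorder exactly
assert all(geq(a, b) == all(M(i)(a) >= M(i)(b) for i in R)
           for a, b in product(R, repeat=2))
```

The witness used in the proof is visible in the second assertion: for $a\not\succeq b$, the index $i=b$ already separates them, since $M_b(b)=1$ while $M_b(a)=0$.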
In terms of enriched category theory, one can understand the monotone $M_i$ to be a hom-functor, and the proposition is a special case of the enriched Yoneda lemma. What this result demonstrates is that the only interesting question is whether one can find a complete set of monotones with a number of elements less than the number of elements in $R$, or in the case where there are infinitely many resources, whether one can find a complete set of monotones that can be parameterized with fewer parameters than is required to parameterize $R$. One may think of a monotone as the assignment of a price to each resource in a way which is consistent with the convertibility relation. In this sense, one can also think of the letter ``$M$'' as standing for a consistent \em market \em in which the resource objects of $R$ are being traded. The definition implies immediately that if $a\simeq b$, then $M(a)=M(b)$. We can distinguish certain kinds of monotones having special properties, \begin{defn} A monotone $M$ is \begin{enumerate} \item \em additive \em if \[ M(a+b) = M(a) + M(b). \] \item \em supremal \em if \[ M(a+b) = \max\{M(a),M(b)\}. \] \end{enumerate} \end{defn} The first condition implies in particular that $M(0)=0$. For the second condition, one may also assume $M(0)=0$ without loss of generality: defining $M'(a):=M(a)-M(0)$ yields a new supremal monotone satisfying $M'(0)=0$ while witnessing the same impossible conversions $a\not\succeq b$ as the original $M$. If one regards $\mathbb{R}$ itself as a theory of resource convertibility, where the ordering is the usual one and addition is also the usual one, then an additive monotone is simply a homomorphism of theories of resource convertibility $M:R\to\mathbb{R}$. 
If one takes $\max:\mathbb{R}\times\mathbb{R}\to\mathbb{R}$ as the binary operation instead, then the definition of a supremal monotone (with $M(0)=0$) can be neatly summarised by saying that it should be a homomorphism of theories of resource convertibility $M:R\to\mathbb{R}$. We suspect that additive monotones are most useful in resource theories that have a quantitative flavour, and in particular whenever $R$ is quantity-like. In contrast, a supremal monotone should behave more like a measure of quality and should be especially relevant when $R$ is quality-like. \begin{example} In the resource theory of food (Example~\ref{ex:food}), an additive monotone is determined by assigning a price $M((1,0))$ to an apple and a price $M((0,1))$ to a banana. Since $(1,0)\succeq 0$ and $(0,1)\succeq 0$, both of these prices have to be non-negative. By additivity, we must have \[ M((a,b)) = a\cdot M((1,0)) + b\cdot M((0,1)), \] so that $M$ simply determines the total price of a resource object by tagging each apple and each banana with its price and adding the numbers up. Similarly, a supremal monotone is also determined by the price for an apple and the price for a banana; the price assigned to a collection of food given by a resource object $(a,b)$ is then given by the highest price of any of its constituents. \end{example} \subsection{Conversion rates} If $a\succeq b$ in a theory of resource convertibility, then certainly $a+a\succeq b+b$ as well. Another important phenomenon that may occur in resource theories is that the converse is not necessarily true: it may be the case that $a+a\succeq b+b$, although $a\not\succeq b$. In other words, it may be the case that one can produce two copies of $b$ from two copies of $a$, although it is not possible to transform $a$ into $b$ at the single-copy level. It is one of several ``economy of scale'' effects which make mass production more efficient than small-scale production.
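To make this concrete, consider the following toy theory (our own construction, not an example from the text): a resource is a pair (number of $a$'s, number of $b$'s), discarding is free, and the only other free conversion turns two $a$'s into two $b$'s in a single step. A breadth-first search over the free conversions confirms that $a+a\succeq b+b$ while $a\not\succeq b$:

```python
from collections import deque

# free moves: discard an a, discard a b, or convert two a's into two b's
def successors(state):
    a, b = state
    if a >= 1: yield (a - 1, b)
    if b >= 1: yield (a, b - 1)
    if a >= 2: yield (a - 2, b + 2)

def convertible(src, dst):
    """True iff dst is reachable from src by free moves, i.e. src >= dst."""
    seen, frontier = {src}, deque([src])
    while frontier:
        s = frontier.popleft()
        if s == dst:
            return True
        for t in successors(s):
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return False

a, b = (1, 0), (0, 1)
print(convertible(a, b))            # False: a single a cannot become a b
print(convertible((2, 0), (0, 2)))  # True: yet 2·a >= 2·b
```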
See~\cite{BRS} for concrete examples of this activation phenomenon in the resource theory of bipartite entanglement (Example~\ref{ex:bientangle}). It is not limited to comparing the one-copy level with the two-copy level; we can compare any two distinct numbers of copies. Introducing the notation $n\cdot a:= \underbrace{a + \ldots + a}_n$ for the resource corresponding to $n$ copies of $a$, it may happen that $k\cdot a\not\succeq k\cdot b$, although $n\cdot a\succeq n\cdot b$ for some suitable $n>k$. However, in resource theories that are quality-like, this kind of behaviour is obviously not possible. In general, the following is considered a particularly important question in a resource theory: in a setting in which one can turn many copies of $a$ into many copies of $b$, how many copies of $a$ does one need on average in order to produce one copy of $b$? In effect, this question is about the \em rate \em of conversion of $a$ to $b$: \begin{defn}\label{def:confrate}\em For two resources $a,b\in R$ of a theory of resource convertibility, the \em conversion rate \em from $a$ to $b$, or simply \em rate \em from $a$ to $b$, is: \begin{equation} \label{eq:defnrate} R(a\to b):= \sup\left\{ \frac{m}{n}\Bigm| n\cdot a \succeq m\cdot b,\: n, m \in\mathbb{N}_+\right\} . \end{equation}\par\noindent \end{defn} If $n\cdot a\not\succeq m\cdot b$ for all positive integers $n$ and $m$, then we define $R(a\to b):=0$; in all other cases, the rate is a strictly positive real number or $\infty$. This definition concerns the \em maximal \em rate; if we take the infimum in~\eqref{eq:defnrate} as opposed to the supremum, we obtain the analogous definition of \em minimal \em rate. 
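When convertibility is decidable, the supremum in~\eqref{eq:defnrate} can be bounded from below by brute-force search over pairs $(n,m)$. A sketch on a toy theory of our own (not from the text): $a$ is a pair of apples, $b$ a single apple, and only discarding is free, so that $n\cdot a\succeq m\cdot b$ iff $2n\geq m$, giving $R(a\to b)=2$:

```python
from fractions import Fraction

def convertible(n, m):
    # n copies of a = 2n apples; m copies of b = m apples.
    # With discarding free, n·a >= m·b iff 2n >= m.
    return 2 * n >= m

def rate(convertible, cutoff=50):
    """Brute-force lower bound on the supremum in the rate definition."""
    best = Fraction(0)
    for n in range(1, cutoff + 1):
        for m in range(1, cutoff + 1):
            if convertible(n, m):
                best = max(best, Fraction(m, n))
    return best

print(rate(convertible))  # prints 2
```

In general such a search only certifies a lower bound on the rate; here the bound happens to be tight already at $n=1$, $m=2$.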
Which of the two one would like to attain depends significantly on the context: if $b$ is a resource object that one desires to produce, the maximal rate~\eqref{eq:defnrate} is the relevant quantity; if $a$ is a ``negative resource'' that one would like to get rid of (Remark~\ref{rem:junk}) by converting it into copies of a less problematic object $b$, then the minimal rate will be the appropriate figure to look at. Additive monotones yield upper bounds on the maximal rate: \begin{thm}\label{thm:rate} Let $M$ be an additive monotone. Then \begin{equation}\label{eq:ratemu} R(a\to b) \cdot M(b) \leq M(a). \end{equation}\par\noindent \end{thm} It is straightforward to derive the same result for the minimal rate, but with the inequality sign going the other way. \begin{proof} If $n\cdot a\succeq m\cdot b$, then we obtain by additivity and monotonicity of $M$, \[ n \cdot M(a) \geq m\cdot M(b). \] Rearranging to \[ \frac{m}{n} \cdot M(b) \leq M(a) \] proves the claim upon taking the supremum corresponding to~\eqref{eq:defnrate}. \end{proof} If $M(b)>0$, then one can rearrange~\eqref{eq:ratemu} to the more natural-looking inequality \[ R(a\to b)\leq \frac{M(a)}{M(b)}. \] If one thinks of $M$ as a consistent market which assigns to each resource object its price, then this makes good sense: the number of $b$'s that one can obtain on average from one unit of $a$ cannot be higher than the value of one unit of $a$ relative to one unit of $b$. \section{Closing}\label{closing} We have defined resource theories as symmetric monoidal categories, the objects of which are resources which the morphisms transform into each other. We have explained several ways in which one can obtain a resource theory from a category of processes equipped with a distinguished subcategory of ``free'' processes.
Finally, we have noted that the questions that one typically asks about a resource theory---concerning catalysis, rates of conversion, etc.---only concern the question of \em whether \em a transformation of a certain type exists or not. This led to the definition of a theory of resource convertibility, in which the structure of a category is ``decategorified'' to the structure of a commutative preordered monoid. Our first small results are nothing but the very beginnings of a detailed investigation of theories of resource convertibility and their mathematical structure. Our long list of examples suggests that there will be a strong interplay between the abstract and general theory and the phenomenology of concrete resource theories of interest. A major problem that we have not yet touched upon is \em epsilonification\em. The idea here is that in many applications, such as in the resource theory of communication (Example~\ref{ex:shannon}), it may be sufficient to turn a given resource $a$ into some $b'$ \em close to \em a target resource $b$, i.e.~turning $a$ into $b$ ``up to $\varepsilon$''. Typically, one would like the $\varepsilon$ to become arbitrarily small, possibly as the number of copies of $a$ and $b$ increases. We are currently investigating how our definitions should be modified in order to be able to deal with this kind of question as well. One way to do this is to define an \emph{epsilonified theory of resource convertibility} just as in Definition~\ref{defn:mereres}, but replacing the preorder by a \emph{cost function}, which is a function assigning to every two $a,b\in R$ a number $c(a,b)\in \mathbb{R}_{\geq 0}$ that measures how close one can get to $b$ by consuming $a$, and such that suitable axioms hold which are analogous to the ones of Definition~\ref{defn:mereres}.
\section{Introduction} Originally introduced by \citet{Karp1990AnOA}, the Online Bipartite Matching (OBM) problem is a simple formulation of sequential resource allocation. The entities of a fixed, known set $U$ (e.g., ads, tasks, servers) are each to be dynamically assigned to at most one entity of a discrete stream $V$ of (a priori unknown) entities (e.g., ad-slots, job candidates, computing jobs) upon their arrival, so as to maximize the size of the final matching. Matching decisions are irrevocable, and leaving an arriving node unmatched is always allowed. Despite its simplicity, finding better algorithms for OBM and its variants remains an active area of research. The uncertainty about future inputs makes online problems inherently challenging. While practical exact methods (e.g., using integer programming formulations and solvers) exist for many offline combinatorial problems, the restriction to irrevocable and instant decision-making makes the use of such algorithmic tools impractical. Existing algorithms for online matching are thus typically myopic and greedy in nature \citep{obmsurvey}. In practice, however, the underlying (bipartite) graph instances may come from the same unknown distribution \citep{borodin2018experimental}. In many applications, a sufficiently large collection of samples from the data can represent the often implicit statistical properties of the entire underlying data-generating distribution. It is often the case that corporations, for example, have access to a wealth of information that is represented as a large graph instance capturing customer behaviour, job arrivals, etc. Thus, it is sensible for an algorithm to use historical data to derive statistical information about online inputs in order to perform better on future instances. However, the majority of non-myopic hand-designed algorithms depend on estimating the arrival distribution of the incoming nodes \citep{borodin2018experimental, obmsurvey}.
The downside of this approach is that crucial information such as the graph sparsity, the ratio of incoming nodes to fixed nodes, the existence of community structures, the degree distribution, and the occurrence of structural motifs is ignored. Ideally, a matching policy should be able to leverage such information to refine its decisions based on the observed history. In this work, we formulate online matching as a Markov Decision Process (MDP) for which a neural network is trained using Reinforcement Learning (RL) on past graph instances to make near-optimal matchings on unseen test instances. We design six unique models, engineer a set of generic features, and test their performance on two variations of OBM across two synthetic datasets and two real-world datasets. Our contributions can be summarized as follows: \textbf{Automating matching policy design:} Motivated by practical applications, other variants of OBM have been introduced with additional constraints (such as fairness constraints) or more complex objective functions than just matching size. Our method reduces the reliance on human handcrafting of algorithms for each individual variant of OBM since the RL framework presented herein can flexibly model them; this will be demonstrated for the novel Online Submodular Bipartite Matching problem~\citep{dickerson2018balancing}. \textbf{Deriving tailored policies using past graph instances:} We show that our method is capable of taking advantage of past instances to learn a near-optimal policy that is tailored to the problem instance. Unlike ``pen-and-paper'' algorithms, our method does not limit its use of historical information to estimating the arrival distribution of incoming nodes. Rather, it takes advantage of additional statistics such as the existing (partial) matching, graph sparsity, the $|U|$-to-$|V|$ ratio, and the graph structure. Taking a more comprehensive set of statistics into account allows for fine-grained decision-making.
For example, the RL agent can learn to skip matching a node strategically based on the observed statistical properties of the current graph. Our results on synthetic and real-world datasets demonstrate this. \textbf{Leveraging Node Attributes:} In many variants of OBM, nodes have identities, e.g., the nodes in $V$ could correspond to social media users whose demographic information could be used to understand their preferences. Existing algorithms are limited in considering such node features that could be leveraged to obtain better solutions. For instance, the RL agent may learn that connecting a node $v$ with a particular set of attributes to a specific node in $U$ would yield high returns. The proposed framework can naturally account for such attributes, going beyond simple greedy-like policies. We will show that accounting for node attributes yields improved results on a real-world dataset for Online Submodular Bipartite Matching. \section{Problem Setting}\label{sec:problem-setting} In a bipartite graph $G = (U,V,E)$, $U$ and $V$ are disjoint sets of nodes and $E$ is the set of edges connecting a node in $U$ to one in $V$. In the online bipartite matching problem, the vertex set $U$ is fixed and at each timestep a new node $v \in V$ and its edges $\{ (u,v) \in E : u\in U\}$ arrive. The algorithm must make an \textit{instantaneous} and \textit{irrevocable} decision to match $v$ to one of its neighbors or not match at all. Nodes in $U$ can be matched to at most one node in $V$. The time horizon $T = |V|$ is finite and assumed to be known in advance. The simplest generalization of OBM is the edge-weighted OBM (E-OBM), where a non-negative weight is associated with each edge. Other well-known variants include Adwords, Display Ads and Online Submodular Welfare Maximization \citep{obmsurvey}.
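For reference, the myopic greedy baseline discussed throughout takes only a few lines. The sketch below (function names are ours, purely illustrative) matches each arriving node to its heaviest still-available neighbor; with unit weights this reduces to greedy OBM, and with arbitrary weights it is the standard E-OBM baseline:

```python
def greedy_match(U, arrivals):
    """arrivals: per online node v, a dict {u: w_uv} of weighted edges.
    Irrevocably match each v to its heaviest still-available neighbor."""
    available = set(U)
    matching = []  # list of (u, v, weight)
    for v, edges in enumerate(arrivals):
        candidates = {u: w for u, w in edges.items() if u in available}
        if candidates:  # skipping is always allowed; greedy skips only when forced
            u = max(candidates, key=candidates.get)
            available.remove(u)
            matching.append((u, v, candidates[u]))
    return matching

# 2x2 example where greedy is suboptimal: taking the weight-3 edge
# up front blocks the later weight-5 edge
arrivals = [{0: 3, 1: 1}, {0: 5}]
print(greedy_match(range(2), arrivals))  # [(0, 0, 3)] -- total 3 vs. optimum 6
```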
We will focus our experiments on E-OBM and Online Submodular Bipartite Matching (OSBM), a new variation of the problem introduced by \citet{dickerson2018balancing}; together, the two problems span a wide range in problem complexity. The general framework can be extended with little effort to address other online matching problems with different constraints and objectives; see Appendix \ref{appendix:Adwords} for a discussion and results on Adwords. \subsection{Edge-weighted OBM (E-OBM)} Each edge $e \in E$ has a predefined weight $w_e\in\mathbb{R}^{+}$ and the objective is to select a subset $S$ of the incoming edges that maximizes $\sum_{e \in S} w_e$. Note that in the offline setting, where the whole graph is available, this problem can be solved in polynomial time using existing algorithms \citep{hungarian}. However, the online setting involves reasoning under uncertainty, making the design of optimal online algorithms non-trivial, as seen in Figure \ref{fig:fig_models}. \subsection{Online Submodular Bipartite Matching (OSBM)} We first define some relevant concepts: \noindent\textit{Submodular function}: A function $f: 2^U \rightarrow \mathbb{R}^+$ with $f(\emptyset) = 0$ is \textit{submodular} iff $\forall S,T \subseteq U$: $$ f(S \cup T) + f(S \cap T) \leq f(S) + f(T).$$ Some common examples of submodular functions include the coverage function, piecewise linear functions, and budget-additive functions. In our experiments, we will focus on the weighted coverage function following~\citet{dickerson2018balancing}: \noindent\textit{Coverage function}: Given a universe of elements $U$ and a collection of $g$ subsets $A_1, A_2, \dots, A_g \subseteq U$, the function $f(M) = |\cup_{i \in M} A_i|$ is called the coverage function for $M\subseteq\{1,\dots,g\}$. Given a non-negative, monotone weight function $w: 2^U \rightarrow \mathbb{R}^+$, the weighted coverage function is defined analogously as $f(M) = w(\cup_{i \in M} A_i)$ and is known to be submodular.
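The weighted coverage function is straightforward to instantiate. A small sketch with made-up sets and element weights (all names illustrative), including a spot-check of the submodular inequality on one pair $S,T$:

```python
def weighted_coverage(subsets, weight):
    """Return f with f(M) = total weight of the union of the chosen A_i."""
    def f(M):
        covered = set().union(*(subsets[i] for i in M)) if M else set()
        return sum(weight[x] for x in covered)
    return f

# illustrative universe: three subsets A_1..A_3 with per-element weights
subsets = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"c", "d"}}
weight = {"a": 2.0, "b": 1.0, "c": 3.0, "d": 0.5}
f = weighted_coverage(subsets, weight)

S, T = {1, 2}, {2, 3}
assert f(S | T) + f(S & T) <= f(S) + f(T)  # submodular inequality
print(f({1, 3}))  # covers {a, b, c, d}: 2.0 + 1.0 + 3.0 + 0.5 = 6.5
```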
In this setting, each edge $e \in E$ incident to arriving node $v_t$ has the weight $f(M_t \cup \{e\}) - f(M_t)$, where $M_t$ is the matching at timestep $t$. The objective in OSBM is to find $M$ such that $f(M) = \sum_{e \in M} w_e$ is maximized; $f$ is a submodular function. An illustrative application of the OSBM problem, identified by \citet{dickerson2018balancing}, can be found in movie recommendation systems. There, the goal is to match incoming users to a set of movies that are both relevant and diverse (genre-wise). A user can log in to the platform multiple times and may be recommended (matched to) a movie or left unmatched. Since we have historical information on each user's average ratings for each genre, we can quantify diversity as the weighted coverage function over the set of genres that were matched to the user. The goal is to maximize the sum of the weighted coverage functions for all users. More concretely, if we let $U$ be the universe of genres, then any movie $i$ belongs to a subset of genres $A_i$. Let $L$ be the set of all users, $M_l$ be the set of movies matched to user $l$, and $f_l(M_l) = w(\cup_{i \in M_l} A_i)$ be the weighted coverage function defined as the sum of the weights of all genres matched to the user, where the weight of a genre $k$ is the average rating given by user $l$ to movies of genre $k$. Each user's weighted coverage function is submodular. The objective of OSBM is to maximize the (submodular) sum of these user functions: $f(M) = \sum_{l \in L} f_l(M_l)$. \subsection{Arrival Order} Online problems are studied under different input models that allow the algorithm to access varying amounts of information about the arrival distribution of the vertices in $V$. The \textit{adversarial} order setting is often used to study the worst-case performance of an algorithm, positing that an imaginary adversary can generate the worst possible graph and input order to make the algorithm perform poorly.
More optimistic is the \textit{known i.i.d distribution (KIID)} setting, where the algorithm knows $U$ as well as a distribution $\mathcal{D}$ on the possible \textit{types} of vertices in $V$. Each arriving vertex $v$ belongs to one type and vertices of a given type have the same neighbours in $U$. This assumption, i.e., that the arrival distribution $\mathcal{D}$ is given, is too optimistic for complex real-world applications. In this work, we study the \textbf{\textit{unknown i.i.d distribution (UIID)}} setting, which lies between the \textit{adversarial} and the \textit{KIID} settings in terms of how much information is given about the arrival distribution \citep{onlineUnkown}. The \textit{unknown i.i.d setting} best captures real-world applications, where a base graph is provided from an existing data set, but an explicit arrival distribution $\mathcal{D}$ is not accessible. For example, a database of past job-to-candidate or item-to-customer relationships can represent a base graph. It is thus safe to assume that the arriving graph will follow the same distribution. The arrival distribution is accessible only through sample instances drawn from $\mathcal{D}$. More details on data generation are provided in Section~\ref{sec:datasetprep} and Appendix~\ref{appendix:dataset-gen}. \section{Related Work} \noindent\textbf{Traditional Algorithms for OBM: } Generally, the focus of algorithm design for OBM has been on worst-case approximation guarantees for ``pen-and-paper'' algorithms via competitive analysis, rather than average-case performance in a real-world application. We refer the reader to \citep{onlineUnkown, obmsurvey} for a summary of the many results for OBM under various arrival models. On the empirical side, an extensive set of experiments by \citet{borodin2018experimental} showed that the naive greedy algorithm performs similarly to more sophisticated online algorithms on synthetic and real-world graphs in the KIID setting.
Though the experiments were limited to OBM with $|U|=|V|$, they were conclusive in that ($i$) greedy is a strong baseline in practical domains, and ($ii$) having better proven lower bounds does not necessarily translate into better performance in practice. The main challenge in online problems is decision-making in the face of uncertainty. Many traditional algorithms under the KIID setting aim to overcome this challenge by explicitly approximating the distribution over node types via a type graph. The algorithms observe past instances and estimate the frequency of certain types of online nodes, i.e., for each type $i$, the algorithm predicts a probability $p_i$ of a node of this type arriving. As noted earlier, the KIID setting is rather simplistic compared to the more realistic UIID setting that we tackle here. Other non-myopic algorithms have been proposed which do not rely on estimating the arrival distribution. For example, \cite{Awasthi2009OnlineSO} solve stochastic kidney exchange, a generalization of OBM, by sampling a subset of future trajectories, solving the offline problem on each of them, and assigning a score to each action. The algorithm then selects the action that is the best overall. \noindent\textbf{Learning in Combinatorial Optimization: } There has recently been substantial progress in using RL for finding better heuristics for offline, NP-complete graph problems. \citet{khalilcomb} presented an RL-based approach combined with graph embeddings to learn greedy heuristics for some graph problems. \citet{BarrettCFL20} take a similar approach but start with a non-empty solution set and allow the policy to explore by removing nodes/edges from the solution. \citet{ChenT19} learn a secondary policy to pick a particular region of the current solution to modify and incrementally improve the solution. The work by \citet{kool2018attention} uses an attention based encoder-decoder approach to find high-quality solutions for TSP and other routing problems. 
We refer to the following surveys for a more comprehensive view of the state of this area~\citep{mazyavkina2021reinforcement, bengio2021machine}. Prior work on using predictive approaches for online problems has been fairly limited. \citet{8731455} overlook the real-time decision making condition and use Q-learning for a batch of the arriving nodes. The matching process, however, is not done by an RL agent but using an offline matching algorithm. Consequently, their method is not practical for OBM variants that are NP-hard in the offline setting (e.g., Adwords) and where instantaneous decision making is paramount, e.g., in ad placement on search engines. The work by \citet{kong2018a} is one of the few to apply RL to online combinatorial optimization. Their work differs from ours in three main ways: ($i$) the question raised there is whether RL can discover algorithms which perform best on worst-case inputs. They show that the RL agent will eventually learn a policy which follows the ``pen-and-paper'' algorithm with the best worst-case guarantee. Our work, on the other hand, asks if RL can outperform hand-crafted algorithms on average; ($ii$) the MDP formulation introduced in \citep{kong2018a}, unlike ours, does not consider the entire past (nodes that have previously arrived and the existing matching), which can help an RL policy better reason about the future; ($iii$) our family of invariant neural network architectures applies to graphs of arbitrary sizes $|V|$ and $|U|$. More details about the advantages of our method are provided in the next section. \citet{zuzic2020learning} propose a GAN-like adversarial training approach to learn robust OBM algorithms. However, just like \citep{kong2018a}, they are more concerned with learning algorithms that are robust to hard distributions rather than real-world graphs, and do not utilize historical information accumulated in the previous steps within the same instance.
\noindent\textbf{Online Algorithm Design via Learned Advice:} A hybrid paradigm has been recently introduced where the predictive and competitive-analysis approaches are combined to tackle online problems. Such algorithms take advantage of the predictions made by a model to obtain an improved competitive ratio while still guaranteeing a worst-case bound when the predictions are inaccurate. Work in this area has resulted in improvements over traditional algorithms for the secretary, ski rental and online matching problems \citep{antoniadis2020secretary, kodialam2019optimal, pmlr-v139-diakonikolas21a, NEURIPS2018_73a427ba}. Unlike our approach, the model does not get to construct a solution. Rather, its output is used as advice to a secondary algorithm. Since competitive analysis is of concern in this line of work, the algorithm is not treated as a black box and must be explicitly handcrafted for each different online problem. On the other hand, we introduce a general end-to-end framework that can handle online matching problems with different objectives and constraints, albeit without theoretical performance guarantees. \section{Learning Deep Policies for Matching}\label{section:policies} We now formalize online matching as a Markov Decision Process (MDP). We then present a set of neural network architectures with different representational capabilities, numbers of parameters, and assumptions on the size of the graphs. An extensive set of features has been designed to facilitate the learning of high-performance policies. We conclude this section by mentioning the RL training algorithm we use as well as a supervised behavioral cloning baseline. \subsection{MDP Formulation} \begin{figure*}[t] \centering \includegraphics[width=\textwidth]{figures/figure_MDP2.png} \caption{The MDP formulation of E-OBM. The agent is trained on different graph instances sampled from a distribution $\mathcal{D}$. At each timestep $t$, the agent picks a node to match or skip.
In this $3\times 3$ graph, the optimal matching has weight 22; following the greedy policy would yield a matching of weight 7. The illustrated policy ends up with a matching of weight 18.} \label{fig:fig_models} \end{figure*} The online bipartite matching problem can be formulated in the RL setting as a Markov Decision Process as follows; see Fig.~\ref{fig:fig_models} for a high-level illustration. Each instance of the online matching problem is drawn independently from an unknown distribution $\mathcal{D}$. The following MDP captures the sequential decision-making task at hand: \begin{itemize} \item[--] \textbf{State}: A state $S$ is a set of selected edges (a matching) and the current (partial) bipartite graph $G$. A terminal state $\hat{S}$ is reached when the final node in $V$ arrives. The length of an episode is $T=|V|$. \item[--] \textbf{Action}: The agent has to pick an edge to match or skip. At each timestep $t$, a node $v_t$ arrives with its edges. The agent can choose to match $v_t$ to one of its neighbors in $U$ or leave it unmatched. Therefore, $|A_t|$, the maximum number of possible actions at time $t$, is $|\text{Ngbr}(v_t)| + 1$, where $\text{Ngbr}(v_t)$ is the set of $U$ nodes with edges to $v_t$. Note that there can exist problem-specific constraints on the action space, e.g., a fixed node can only be matched once in E-OBM. Unlike the majority of domains where RL is applied, the uncertainty is exogenous here. Thus, the transition is \textit{deterministic} regardless of the action picked. That is, the (random) arrival of the next node is independent of the previous actions. \item[--] \textbf{Reward function}: The reward $r(s, a)$ is defined as the weight of the edge selected with action $a$. Hence, the cumulative reward $R$ at the terminal state $\hat{S}$ represents the total weight of the final matching solution: $$R = \sum_{e \in \hat{S}} w_e.$$ \item[--] \textbf{Policy}: A solution (matching) is a subset of the edges in $E$, $\pi = \bar{E} \subset E$.
A stochastic policy, parameterized by $\theta$, outputs a solution $\pi$ with probability $$p_{\theta}(\pi| G) = \prod_{t=1}^{|V|} p_{\theta}(\pi_t| s_t),$$ where $s_t$ represents the state at timestep $t$, $G$ represents the full graph, and $\pi_t$ represents the action picked at timestep $t$ in solution $\pi$. \end{itemize} \subsection{Deep Learning Architectures} \label{model-desc} In this section, we propose a number of architectures that can be utilized to learn effective matching policies. Unless otherwise stated, the models are trained using RL. \begin{figure}[t]{} \centering \includegraphics[width=0.9\textwidth]{figures/inv_ff.JPG} \caption{ Invariant (\texttt{inv-ff}) Architecture. A shared feed-forward neural network takes in node-specific features and outputs a single number for each node in $U$. The outputs are then fed into the softmax function to give a vector of probabilities. The red node represents skipping.} \label{fig:fig_inv_ff} \end{figure} \noindent\textbf{Feed-Forward} (\texttt{ff}): When node $v_t$ arrives, the \texttt{ff} policy will take as input a vector $(w_0, \dots, w_{|U|}, m_0, \dots, m_{|U|})\in\mathbb{R}^{2(|U| + 1)}$ \footnote{The extra input represents the skip node, which is not needed for \texttt{ff} and \texttt{ff-hist}, but we add it to make the input consistent across models.}, where $w_u$ is the weight of the edge from $v_t$ to fixed node $u$ (with $w_u = 0$ if $v$ is not a neighbor of $u$), and $m_u$ is a binary mask representing the availability of node $u$ for matching. The policy will output a vector of probabilities of size $|U| + 1$, where the additional action represents skipping. \texttt{ff} is similar to the architecture presented in \citet{kong2018a}. \noindent\textbf{Feed-Forward with history} (\texttt{ff-hist}): This model is similar to \texttt{ff} but takes additional historical information about the current graph to better reason about future input. 
That is, \texttt{ff-hist} will take in a vector consisting of five concatenated vectors, $(w, m, h_t, g_t, n_t)$. The vectors $w$ and $m$ are the same as those in \texttt{ff}. The feature vectors $h, n, g$ contain a range of node-level features such as average weights seen so far per fixed node and solution-level features such as maximum weight in current solution; see Table~\ref{tab:hist-features} for details. \begin{table}[h] \centering \small \vskip 0.1in \caption{Features used in \texttt{ff-hist} and \texttt{inv-ff-hist}. $d^t_u$ represents degree of node $u$ at time $t$. $u_{skip}$ represents the skip node, i.e., matching to this node means choosing to skip.} \resizebox{\textwidth}{!}{% \begin{tabular}{l|l|c|c} \hline Feature type & Description & Equation & Size \\ \hline & \begin{tabular}[c]{@{}l@{}}Average weight per fixed node $u$ \\ up to time $t$\end{tabular} & $\mu_w = \frac{1}{d^t_u} \sum\limits_{\substack{(u, v_i) \in E: \\ v_i \in V, \\ 1 \leq i < t}} w_{(u, v_i)}$ & $|U| + 1$ \\ \cline{2-4} \multirow{-3}{*}{\begin{tabular}[c]{@{}l@{}}Graph-Level \\ Features $g_t$\end{tabular}} & \begin{tabular}[c]{@{}l@{}}Variance of weights per fixed node $u$ \\ up to time $t$\end{tabular} & $\sigma_w = \frac{1}{d^t_u}\sum\limits_{\substack{(u, v_i) \in E: \\ v_i \in V, \\ 1 \leq i < t}}(w_{(u, v_i)} - \mu_w)^2$ & $|U| + 1$ \\ \cline{2-4} & \begin{tabular}[c]{@{}l@{}}Average degree per fixed node $u$ \\ up to time $t$\end{tabular} & $\frac{1}{t} |\{(u, v_i) \in E: i \leq t \}|$ & $|U| + 1$ \\ \hline \multirow{2}{*}{\begin{tabular}[c]{@{}l@{}}Incoming Node \\ Features $n_t$\end{tabular}} & \begin{tabular}[c]{@{}l@{}}Percentage of fixed nodes incident \\ to incoming $v_t$ (For invariant models only) \end{tabular} & $\frac{1}{|U|} |\{(u, v_t) \in E: u \in U\}|$ & 1 \\ \cline{2-4} & Normalized step size at time $t$ & $\frac{t}{|V|}$ & 1 \\ \hline \multirow{7}{*}{\begin{tabular}[c]{@{}l@{}}Solution-Level \\ Features $h_t$\end{tabular}} & 
\begin{tabular}[c]{@{}l@{}}Maximum weight in current \\ matching solution\end{tabular} & $\max_{e \in S} w_e$ & 1 \\ \cline{2-4} & \begin{tabular}[c]{@{}l@{}}Minimum weight in current \\ matching solution\end{tabular} & $\min_{e \in S} w_e$ & 1 \\ \cline{2-4} & \begin{tabular}[c]{@{}l@{}}Mean weight in current \\ matching solution\end{tabular} & $\mu_{S} = \frac{1}{|S|} \sum_{e \in S} w_e$ & 1 \\ \cline{2-4} & \begin{tabular}[c]{@{}l@{}}Variance of weights in current \\ matching solution\end{tabular} & $\sigma_{S} = \frac{1}{|S|}\sum_{e \in S}(w_e - \mu_S)^2$ & 1 \\ \cline{2-4} & Ratio of already matched nodes in $U$ & $\frac{1}{|U|} |\{(u, v) \in S, u \neq u_{skip}\}|$ & 1 \\ \cline{2-4} & Ratio of skips made up to time $t$ & $\frac{1}{t} |\{(u, v) \in S, u = u_{skip}\}|$ & 1 \\ \cline{2-4} & \begin{tabular}[c]{@{}l@{}}The normalized size of \\ current matching solution\end{tabular} & $p_t = \frac{1}{|U|} \sum_{e \in S} w_e$ & 1 \\ \hline \end{tabular} } \label{tab:hist-features} \end{table} \noindent\textbf{Invariant Feed-Forward} (\texttt{inv-ff}): We present an invariant architecture, inspired by~\citet{andrychowicz2016learning}, which processes each of the edges and their fixed nodes \textit{independently} using the same (shared) feed-forward network; see Fig. \ref{fig:fig_inv_ff} for an illustration. That is, \texttt{inv-ff} will take as input a 3-dimensional vector, $(w_u, s_u, w_{mean})$, where $w_{mean}$ is the mean of the edge weights incident to incoming node $v_t$, and $s_u$ is a binary flag set to 1 if $u$ is the ``skip'' node. The output for each potential edge is a single number $o_u$. The vector $o$ is normalized using the softmax to output a vector of probabilities. \noindent\textbf{Invariant Feed-Forward with history} (\texttt{inv-ff-hist}): An invariant model, like \texttt{inv-ff}, which utilizes historical information. 
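As a hypothetical sketch of the node-wise design of \texttt{inv-ff}, the snippet below scores each $(w_u, s_u, w_{mean})$ triple with the same shared function (a single linear unit standing in for the shared feed-forward network) and normalizes with a softmax; permuting the fixed nodes simply permutes the output probabilities.

```python
import math

def shared_score(x, params):
    """Stand-in for the shared feed-forward network: a single linear unit."""
    return sum(p * xi for p, xi in zip(params, x))

def inv_ff_probs(weights, params):
    """Score each fixed node (plus the skip node) independently, then softmax."""
    w_mean = sum(weights) / len(weights)
    inputs = [(w, 0.0, w_mean) for w in weights]
    inputs.append((0.0, 1.0, w_mean))  # the skip node: s_u = 1, w_u = 0
    scores = [shared_score(x, params) for x in inputs]
    mx = max(scores)
    exps = [math.exp(s - mx) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

params = (2.0, -1.0, 0.5)                  # hypothetical learned parameters
p = inv_ff_probs([0.9, 0.1, 0.4], params)
q = inv_ff_probs([0.4, 0.1, 0.9], params)  # same graph, fixed nodes permuted
```

Because the parameters are shared across nodes, the parameter count is independent of $|U|$, and the output probabilities permute together with the fixed nodes.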
It is important to note that \texttt{inv-ff-hist} will only look at historical features of one node at a time, in addition to solution-level features. Therefore, the \textit{node-wise} input is $(w_u, m_u, s_u, w_{mean}, n_{t}, g_{t, u}, h_t)$. \noindent\textbf{Supervised Feed-Forward with history} (\texttt{ff-supervised}): To test the advantage of using RL methods, we train \texttt{ff-hist} in a supervised learning fashion. In other words, each incoming node is considered a data sample with a target (the optimal $U$ node to match, in hindsight). During training, after all $V$ nodes have arrived, we minimize the cross-entropy loss across all the samples. This setup is equivalent to behavior cloning \citep{behaviorcloning}, where expert demonstrations are divided into state-action pairs and treated as i.i.d. samples. \noindent\textbf{Graph Neural Network} (\texttt{gnn-hist}): In this model, we employ the encoder-decoder architecture used in many combinatorial optimization problems, see, e.g., \citep{cappart2021combinatorial}. At each timestep $t$, the graph encoder consumes the current graph and produces embeddings for all nodes. The decoder feed-forward neural network, which also takes \textit{node-wise} inputs, will take in $(w_u, t / |V|, m_u, s_u, p_t, e_{v_t}, e_u, e_{mean}, e_s)$, where the last four inputs represent the embedding of the incoming node $v_t$, embedding of the fixed node $u$ being considered, mean embedding of all fixed nodes, and mean solution embedding, respectively. Our graph encoder is an MPNN \citep{mpnn} with the weights as edge features. The mean solution embedding is defined as the mean of a learnable linear transformation of the concatenation of the embeddings of the vertices of the edges in the solution $S$: \begin{equation} e_s = \frac{1}{|S|}\sum_{(u, v) \in S} \Theta_e([e_{u}; e_{v}]), \end{equation} where ``;'' represents horizontal concatenation, $\Theta_e$ is a learnable parameter, and $S$ is the set of all matchings made so far.
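In code, the solution embedding above might look like the following minimal sketch, where a plain matrix of shape $d_{out} \times 2d$ (illustrative values only) stands in for the learnable $\Theta_e$:

```python
def linear(theta, x):
    """Apply a matrix (list of rows) to a vector."""
    return [sum(row[i] * x[i] for i in range(len(x))) for row in theta]

def solution_embedding(solution, node_emb, theta):
    """e_s: mean over matched edges of theta applied to [e_u; e_v]."""
    mapped = [linear(theta, node_emb[u] + node_emb[v]) for (u, v) in solution]
    d_out = len(theta)
    return [sum(vec[k] for vec in mapped) / len(mapped) for k in range(d_out)]

# Toy 2-d node embeddings and a 2x4 "learned" matrix (hypothetical values).
node_emb = {"u0": [1.0, 0.0], "u1": [0.5, 0.5], "v0": [0.0, 1.0], "v1": [1.0, 1.0]}
theta = [[1.0, 0.0, 0.0, 0.0],   # picks the first coordinate of e_u
         [0.0, 0.0, 0.0, 1.0]]   # picks the second coordinate of e_v
S = [("u0", "v0"), ("u1", "v1")]
e_s = solution_embedding(S, node_emb, theta)
```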
The mean of the embeddings of all fixed nodes is calculated simply as: \begin{equation} e_{mean} = \frac{1}{|\bar{U}|}\sum_{u \in \bar{U}} e_u, \end{equation} where $\bar{U} = U \cup \{u_{skip}\}$ and $u_{skip}$ represents the skip node, i.e., matching to this node means skipping. The graph encoder also takes in problem-specific node features if available; see Appendix \ref{appendix:node-feat} for details. The output of the encoder is fed into a feed-forward network, which outputs a distribution over available edges. The models outlined above are designed based on a set of desirable properties for matching. Table \ref{tab:model-features} summarizes the properties that are satisfied by each model: \begin{itemize} \item \textbf{Graph Size Invariance}: Training on large graph instances may be infeasible and costly. Thus, it would be ideal to train a model on small graphs if it generalizes well to larger graphs with a similar generating distribution. We normalize each statistic (feature) we compute so that it lies within a fixed range, independent of the graph size. Moreover, the invariant architectures allow us to train small networks that only look at node-wise inputs and share parameters across all fixed nodes. It is also worth noting that the invariance property can be key to OBM variants where $U$ is not fixed, e.g., 2-sided arrivals \citep{dickerson2018gMission}, an application that is left for future work. \item \textbf{Permutation Invariance}: In most practical applications, such as assigning jobs to servers or web advertising, the ordering of nodes in the set $U$ should not affect the solution. The invariant architectures ensure that the model outputs the same solution regardless of the permutation of the nodes in $U$. On the other hand, the non-invariant models such as \texttt{ff} would predict differently for the same graph instance if the $U$ nodes were permuted.
\item \textbf{History-Awareness}: A state space defined based on the entire current graph and the current matching will allow the model to learn smarter strategies that reason about the future based on the observed past. Historical and graph-based information within the current graph gives the models an ``identity'' for each fixed node which may be lost due to node-wise input. Contextual features such as incoming node features $n_t$ (see Table \ref{tab:hist-features}) and the ratio of already matched nodes help the learned policies to generalize to different graph sizes and $U$-to-$V$ ratios. \item \textbf{Node-feature Awareness}: In real-world scenarios, nodes in $U$ and $V$ represent entities with features that can be key to making good matching decisions. For example, incoming nodes can be users with personal information such as age, gender, and occupation. The node features can be leveraged to obtain better matchings. Our GNN model supports node features. Other models can be modified to take such additional features but would need to be customized to the problem at hand. \end{itemize} \begin{table}[h] \tiny \centering \caption{Important model characteristics. 
L: Number of hidden layers, H: Hidden layer size, E: Embedding dimension.} \vskip 0.1in \resizebox{\textwidth}{!}{% \begin{tabular}{c|ccccc} \hline Model & \begin{tabular}[c]{@{}l@{}}Graph size \\ Invariance \end{tabular} & \begin{tabular}[c]{@{}l@{}}Permutation \\ Invariance\end{tabular} & \begin{tabular}[c]{@{}l@{}}History \\ Awareness\end{tabular} & \begin{tabular}[c]{@{}l@{}}Node-feature \\ Awareness\end{tabular} & \begin{tabular}[c]{@{}l@{}}Learnable \\ Parameters\end{tabular} \\ \hline \texttt{inv-ff} & \checkmark & \checkmark & & & $O(LH^2)$ \\ \texttt{ff} & & & & & $O(LH^2 + |U| H)$ \\ \texttt{ff-hist} & & & \checkmark & & $O(LH^2 + |U| H)$ \\ \texttt{ff-supervised} & & & \checkmark & & $O(LH^2 + |U| H)$ \\ \texttt{inv-ff-hist} & \checkmark & \checkmark & \checkmark & & $O(LH^2)$ \\ \texttt{gnn-hist} & \checkmark & \checkmark & \checkmark & \checkmark & $O(LH^2 + EH + E^2)$ \\ \hline \end{tabular} } \centering \label{tab:model-features} \end{table} \subsection{Training Algorithms} \noindent\textbf{RL Models}: Because our focus is on flexible modeling of OBM-type problems with deep learning architectures, we have opted to leverage existing training algorithms with little modification. We use the REINFORCE algorithm \citep{Sutton1998}, both for its effectiveness and simplicity: $$\nabla L(\theta | s) = \mathbb{E}_{p_\theta(\pi|s)} [(L(\pi) - b(s)) \nabla \log p_\theta(\pi|s)].$$ To reduce gradient variance and noise, we subtract an exponential-moving-average baseline $b(s) = M$, where $M$ is initialized to the loss $L(\pi)$ in the first training iteration and subsequently updated as $M \leftarrow \beta M + (1 - \beta)L(\pi)$ \citep{Sutton1998}. \noindent\textbf{Supervised Models}: All incoming nodes are treated as independent samples with targets.
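Returning to the REINFORCE baseline just described, a minimal sketch of the moving-average update (with made-up loss values) is:

```python
def advantages_with_ema_baseline(episode_losses, beta=0.8):
    """Compute the (L(pi) - b(s)) terms for REINFORCE with an EMA baseline.

    The baseline M is initialized to the first episode's loss and then
    updated as M <- beta * M + (1 - beta) * L(pi), as described above.
    """
    M = None
    advantages = []
    for L_pi in episode_losses:
        if M is None:
            M = L_pi  # first training iteration: b(s) = L(pi)
        advantages.append(L_pi - M)
        M = beta * M + (1 - beta) * L_pi
    return advantages, M

adv, baseline = advantages_with_ema_baseline([10.0, 8.0, 12.0])
```

Each advantage term then weights the corresponding $\nabla \log p_\theta(\pi|s)$; in practice this is handled by an autodiff framework.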
Therefore, for a batch of $N$ bipartite graphs with $T$ incoming nodes, we minimize the weighted cross entropy loss: $$\frac{1}{N \times T}\sum_{i=1}^N \sum_{j=1}^T loss(p^i_j, t^i_j, c)$$ where $p^i_j$ is output of the policy for graph instance $i$ at timestep $j$, $t^i_j$ is the target which is generated by solving an integer programming formulation on the full graph in hindsight (see Appendix \ref{appendix:IP-form} for details), and $c$ is the weight vector of size $|U| + 1$. All classes are given a weight of 1 except the skipping class which is given a weight of $\frac{|U|}{|V|}$. This is to prevent overfitting when most samples belong to the skipping class, i.e., when $|V| \gg |U|$ and most incoming nodes are left unmatched. Masking is utilized to prevent all models from picking non-existent edges or already matched nodes. \section{Experimental Setup} \label{sec:experiments} \subsection{Dataset Preparation} \label{sec:datasetprep} We train and test our models across two synthetically generated datasets from the Erdos-Renyi (ER) \citep{er} and Barabasi-Albert (BA) \citep{RevModPhys.74.47} graph families. In addition, we use two datasets generated from real-world base graphs. The gMission base graph \citep{gmission} comes from crowdsourcing data for assigning workers to tasks. We also use MovieLens \citep{movielense}, which is derived from data on users' ratings of movies based on~\citet{dickerson2018balancing}. Table \ref{tab:dataset-properties} summarizes the datasets and their key properties. In our experiments, we generate two versions of each real-world dataset: one where the same fixed nodes are used for all graph instances (gMission, MovieLens), and one where a new set of fixed nodes is generated for each graph instance (gMission-var, MovieLens-var). 
To generate a bipartite graph instance of size $|U|$ by $|V|$ from the real-world base graph, we sample $|U|$ nodes uniformly at random without replacement from the nodes on the left side of the base graph and sample $|V|$ nodes with replacement from the right side of the base graph. A 10x30 graph is one with $|U|=10, |V|=30$, a naming convention we will adopt throughout. \noindent\textbf{Erdos-Renyi (ER):} We generate bipartite graph instances for the E-OBM problem using the Erdos-Renyi (ER) scheme \citep{er}. Edge weights are sampled from the uniform distribution $U(0, 1]$. For each graph size, e.g., 10x30, we generate datasets for a number of values of $p$, the probability of an edge being in the graph. \noindent\textbf{Barabasi-Albert (BA):} We follow the same process described by \citet{borodin2018experimental} for generating preferential attachment bigraphs. To generate a bigraph in this model, start with $|U|$ offline nodes and introduce online nodes $V$ one at a time. The model has a single parameter $p$, which is the average degree of an online node. Upon arrival of a new online node $v \in V$, we sample $n_v \sim \text{Binomial}(|U|, p/|U|)$ to decide the number of neighbours of $v$. Let $\mu$ be a probability distribution over the nodes in $U$ defined by $\mu(u) = \frac{1 + \text{degree}(u)}{|U| + \sum_{u' \in U} \text{degree}(u')}$. We sample offline nodes according to $\mu$ from $U$ until $n_v$ neighbours are selected. \noindent\textbf{gMission}: In this setting, we have a set of workers available offline and incoming tasks which must be matched to compatible workers~\citep{gmission}. Every worker is associated with a location in Euclidean space, a range within which they can service tasks, and a success probability with which they will complete any task. Tasks are represented by a Euclidean location and a payoff value for being completed. We use the same strategy as in \citep{dickerson2018gMission} to pre-process the dataset.
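The BA generation procedure above can be sketched as follows (a minimal, hypothetical implementation; a production version would vectorize the sampling):

```python
import random

def ba_bigraph(n_u, n_v, p, seed=0):
    """Preferential-attachment bipartite graph: each arriving online node
    draws Binomial(|U|, p/|U|) neighbours, sampled from U with probability
    proportional to 1 + degree(u)."""
    rng = random.Random(seed)
    degree = [0] * n_u
    edges = []
    for v in range(n_v):
        deg_v = sum(rng.random() < p / n_u for _ in range(n_u))  # Binomial draw
        chosen = set()
        while len(chosen) < deg_v:
            r = rng.random() * (n_u + sum(degree))  # total mass of mu
            acc = 0.0
            for u in range(n_u):
                acc += 1 + degree[u]
                if r < acc:
                    chosen.add(u)
                    break
        for u in chosen:
            degree[u] += 1
            edges.append((u, v))
    return edges

edges = ba_bigraph(10, 30, 3)
```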
That is, workers that share similar locations are grouped into the same ``type'', and likewise for tasks. An edge is drawn between a worker and a task if the task is within the range of the worker. The edge weight is calculated by multiplying the payoff for completing the task by the success probability. In total, we have 532 worker types and 712 task types. To generate a bipartite graph instance of size $|U|$ by $|V|$, we sample $|U|$ workers uniformly at random without replacement from the 532 types and sample $|V|$ tasks with replacement according to the arrival distribution $\mathcal{D}$, which we set to be uniform. That is, the graph generation process samples nodes from the $V$ side of the base graph uniformly. \noindent\textbf{MovieLens}: The dataset consists of a set of movies each belonging to some genres and a set of users who can arrive and leave the system at any time. Once a user arrives, they must be matched to an available movie or left unmatched if no good movies are available. We have historical information about the average ratings each user has given for each genre. The goal is to recommend movies which are relevant and diverse genre-wise. This objective is measured using the weighted coverage function over the set of genres (see Section \ref{sec:problem-setting}). Therefore, we must maximize the sum of the weighted coverage functions of all users who have arrived. The MovieLens dataset contains a total of 3952 movies, 6040 users, and $1,000,209$ ratings of the movies by the users. As in \citep{dickerson2018balancing}, we choose 200 users who have given the most ratings and sample 100 movies at random. We then remove any movies that have no neighbors among the 200 users, to get a total of 94 movies. These sets of movies and users will be used to generate all bipartite graphs. We calculate the average ratings each user has given for each genre. These average ratings will be used as the weights in the coverage function (see Section \ref{sec:problem-setting}).
To generate an instance of size $|U|$ by $|V|$, we sample $|U|$ movies uniformly at random without replacement from the 94 movies and $|V|$ users with replacement according to the uniform arrival distribution $\mathcal{D}$. The full graph generation procedure for gMission and MovieLens can be seen in Algorithm \ref{graphgeneration} of Appendix \ref{appendix:dataset-gen}. \begin{table}[h] \centering \caption{Datasets used for our experiments. $p$ is the average node degree in BA graphs.} \vskip 0.1in \resizebox{\textwidth}{!}{% \begin{tabular}{c|cccc} \hline Type & Problem & Base Graph Size & Node Attributes? & Weight Generation \\ \hline ER & E-OBM & & & $w_{(u,v)} \sim U(0,1]$ \\ \cline{2-5} BA & E-OBM & & & $w_{(u,v)} \sim N(\text{degree}(u), p/5)$ \\ \cline{2-5} gMission & E-OBM & 532 workers $\times$ 712 tasks & & payoff for completing the task $\times$ the success probability \\ \cline{2-5} MovieLens & OSBM & 94 movies $\times$ 200 users & $\checkmark$ & \begin{tabular}[c]{@{}c@{}}average ratings each user has given for \\ each genre are used as the weights in the coverage function\end{tabular}\\ \hline \end{tabular}% } \centering \label{tab:dataset-properties} \end{table} \subsection{Evaluation} \noindent\textbf{Evaluation Metric:} We use the \textit{optimality ratio} averaged over the set of test instances. The optimality ratio of a solution $S$ on a graph instance $G$ is defined as $O(S, G) = \frac{c(S)}{OPT(G)}$, where $c(S)$ is the objective value of $S$ and $OPT(G)$ is the optimal value on graph instance $G$, which is computed in hindsight using integer programming; see Appendix \ref{appendix:IP-form}. \noindent\textbf{Baselines:} For E-OBM, we compare our models to the \texttt{greedy} baseline, which simply picks the maximum-weight edge, and \texttt{greedy-rt} \citep{greedy-rt}, a randomized algorithm which is near-optimal in the adversarial setting.
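A minimal sketch of the \texttt{greedy} baseline and the optimality-ratio metric, on a toy two-node instance where greedy's eagerness hurts it:

```python
def greedy_eobm(n_u, arrivals):
    """greedy: match each arriving node to its heaviest available neighbour,
    skipping only when no available neighbour exists."""
    available = set(range(n_u))
    value = 0.0
    for nbrs in arrivals:  # nbrs maps fixed node -> edge weight
        candidates = {u: w for u, w in nbrs.items() if u in available}
        if candidates:
            u_best = max(candidates, key=candidates.get)
            available.remove(u_best)
            value += candidates[u_best]
    return value

def optimality_ratio(value, opt):
    """O(S, G) = c(S) / OPT(G), with OPT computed in hindsight."""
    return value / opt

# v_0 neighbours both fixed nodes; v_1 only neighbours node 0.
arrivals = [{0: 0.9, 1: 0.8}, {0: 0.7}]
val = greedy_eobm(2, arrivals)      # greedy grabs 0.9, then must skip
ratio = optimality_ratio(val, 1.5)  # hindsight OPT: 0.8 + 0.7 = 1.5
```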
In an effort to compare to strong tunable baselines, we implemented \texttt{greedy-t}, which picks the maximum-weight edge with weight above a dataset-specific threshold $w_T$ that is tuned on the training set. If no weight is above the threshold, then we skip (see Appendix \ref{appendix:eval} for details). To the best of our knowledge, this is the first application of such a baseline to real-world datasets. For OSBM, we only use \texttt{greedy} as \citet{dickerson2018balancing} find that it achieves a better competitive ratio than their algorithms when movies cannot be matched more than once and the incoming user can be matched to one movie at a time, which is the setting we study here. \subsection{Hyperparameter Tuning and Training Protocol} In a nutshell, around 400 configurations, varying four hyperparameters, are explored using Bayesian optimization \citep{wandb} on a small validation set consisting of small graphs (10x30) from the gMission dataset. We have found the models to be fairly robust to hyperparameter values. In fact, most configurations with low learning rates (under 0.01) result in satisfactory performance, as seen in Fig.~\ref{fig:hyperparameter-sweep}. The model with the best average optimality ratio on the validation set is selected for final evaluation, the results of which will be shown in the next section. Some hyperparameters are fixed throughout, particularly the depths/widths of the feed-forward networks (2-3 layers, 100-200 neurons), and the use of ReLU as the activation function. Training often takes less than 6 hours on an NVIDIA V100 GPU. Full details are deferred to Appendices \ref{appendix:hyperparams} and \ref{appendix:training}. \begin{figure}[ht] \centering \includegraphics[width=0.8\textwidth]{figures/Hyperparameter_sweep.png} \caption{Top 200 hyperparameter tuning results for \texttt{ff-hist} on gMission 10x30. Each curve represents a hyperparameter configuration.
Lighter color means better average optimality ratio on the validation set.} \label{fig:hyperparameter-sweep} \end{figure} \section{Experimental Results} \subsection{Edge-Weighted Online Bipartite Matching} \begin{figure}[t] \centering \begin{subfigure}{\textwidth} \centering \includegraphics[width=0.30\textwidth]{plots/e-obm_er_10x30_0-1_boxplot.pdf} \includegraphics[width=0.30\textwidth]{plots/e-obm_er_10x60_0-1_boxplot.pdf} \includegraphics[width=0.30\textwidth]{plots/e-obm_er_100x100_0-1_boxplot.pdf} \end{subfigure} \caption{Distributions of the Optimality Ratios for E-OBM on ER graphs. The graph family parameter $p$ is the probability of a random edge existing in the graph.} \label{fig:eobm_er} \end{figure} \begin{figure}[t] \centering \begin{subfigure}{\textwidth} \centering \includegraphics[width=0.25\textwidth]{plots/e-obm_ba_100x100_0-1_boxplot.pdf} \includegraphics[width=0.25\textwidth]{plots/e-obm_ba_100x100_0-3_boxplot.pdf} \includegraphics[width=0.20\textwidth]{plots/e-obm_legend.pdf} \caption{\centering Distributions of the Optimality Ratios for E-OBM on BA graphs with average node degree 5, and weights $w_{(u, v)} \sim N(deg(u), 1)$, and BA with average node degree 15, and weights $w_{(u, v)} \sim N(deg(u), 3)$.}\label{fig:ba} \end{subfigure} \quad \begin{subfigure}{\textwidth} \centering \includegraphics[width=0.25\textwidth]{plots/e-obm_gmission_10x30_-1--1_boxplot.pdf} \includegraphics[width=0.25\textwidth]{plots/e-obm_gmission_10x60_-1--1_boxplot.pdf} \includegraphics[width=0.25\textwidth]{plots/e-obm_gmission_100x100_-1--1_boxplot.pdf} \caption{Distributions of the Optimality Ratios for E-OBM on the gMission dataset.}\label{fig:gmission} \end{subfigure} \quad \begin{subfigure}{\textwidth} \centering \includegraphics[width=0.25\textwidth]{plots/e-obm_gmission-var_10x30_-1--1_boxplot.pdf} \includegraphics[width=0.25\textwidth]{plots/e-obm_gmission-var_10x60_-1--1_boxplot.pdf} 
\includegraphics[width=0.25\textwidth]{plots/e-obm_gmission-var_100x100_-1--1_boxplot.pdf} \caption{Distributions of the Optimality Ratios for E-OBM on gMission-var.}\label{fig:gmission-var} \end{subfigure} \caption{Distributions of the Optimality Ratios for E-OBM on BA, gMission, and gMission-var.} \label{fig:plots} \end{figure} \begin{figure}[h] \centering \begin{subfigure}{\textwidth} \centering \includegraphics[width=0.45\textwidth]{plots/e-obm_er_10x30_agreementplot_with_opt.pdf} \includegraphics[width=0.45\textwidth]{plots/e-obm_gmission_10x30_agreementplot_with_opt.pdf} \end{subfigure} \caption{Percent agreement with the optimal solution per timestep. A point (timestep $t$, agreement $a$) on this plot can be read as: at timestep $t$, this method makes the same matching decision as the optimal solution on $a\%$ of the test instances.} \label{fig:agreement-gmission-er} \end{figure} For E-OBM, we will analyze the performance of the models across ER and BA graphs as well as the gMission dataset. The edges and weights in the ER graphs are generated from a uniform distribution. Thus, ER graphs do not have special graph properties such as the existence of community structures or the occurrence of structural motifs. As a result, the ER dataset is hard to learn from as the models would merely be able to leverage the $|U|$-to-$|V|$ ratio and the density of the graph (the graph family parameter is proportional to the expected node degree in a graph). Unlike ER graphs, explicit structural patterns are found in BA graphs. The BA graph generation process captures heterogeneous and varied degree distributions which are often observed in real-world graphs \citep{barabasi2016network}. For example, graphs with many low-degree nodes and a few high-degree nodes occur in practical domains where the rich-get-richer phenomenon is witnessed. The BA graph generation process is described in Appendix \ref{appendix:dataset-gen}.
In our experiments, nodes with higher degrees also have higher weights on average. We also study the models under the gMission dataset. As with many real-world datasets, the exact properties of the graphs are unknown. Thus, the models may derive policies based on implicit graph properties. The results will demonstrate that the models have taken advantage of some existing patterns in the dataset. \noindent\textbf{Trends in decisions with respect to the $|U|$-to-$|V|$ ratio and graph sparsity:} When $|U| < |V|$, the models outperform the greedy strategies since they learn that skipping would yield a better result in hindsight, despite missing a short-term reward. This is apparent for the 10x30 and 10x60 graphs in Figure~\ref{fig:eobm_er} for ER and Figure~\ref{fig:plots} (b) for gMission. To substantiate this and other hypotheses about the behavior of various policies, we use ``agreement plots'' such as those in Figure~\ref{fig:agreement-gmission-er}. An agreement plot shows how frequently the different policies agree with a reference policy, e.g., with a hindsight-optimal solution or with the greedy method. Appendix \ref{appendix:agr} includes agreement plots w.r.t. greedy: most disagreements between the learned policies and greedy happen in the beginning, but all methods (including greedy) imitate the optimum towards the end, when most actions consist of skipping due to the fixed nodes having been matched already. Outperforming greedy on 100x100 (3rd plot in Fig.~\ref{fig:eobm_er}) ER graphs is quite a difficult task, as there is not much besides the graph density for the models to take advantage of. Since $|V|=|U|$, skipping is also rarely advantageous. Hence, the models perform similarly to \texttt{greedy}. As the ER graphs get denser (moving to the right within the first three plots of Fig.~\ref{fig:eobm_er}), the gap between the models and the greedy baselines increases as there is a higher chance of encountering better future options in denser graphs.
Hence, the models learn to skip as they find it more rewarding over the long term. This can be further seen in Fig.~\ref{fig:agreement-gmission-er}, where the models agree less with greedy on denser ER graphs. For gMission (right side of Fig.~\ref{fig:agreement-gmission-er}), most disagreements happen in the beginning but all models imitate the optimum towards the end when most actions consist of skipping; Appendix \ref{appendix:agr} has more agreement plots. \noindent\textbf{Model-specific Results:} Models with history, namely \texttt{inv-ff-hist} (gray) and \texttt{ff-hist} (brown), consistently outperform their history-less counterparts, \texttt{ff} and \texttt{inv-ff}, across all three datasets (Figure \ref{fig:plots}). \texttt{inv-ff} receives the same information as \texttt{greedy} and performs fairly similarly to it on gMission and ER graphs. In fact, \texttt{inv-ff} learns to be exactly greedy on gMission 100x100 (see Appendix \ref{appendix:agr}). However, \texttt{inv-ff} performs better than the non-invariant models on the BA dataset. The ideal policy on BA graphs would discover that matching to a high-degree node is not wise since the node will likely receive a lot more edges in the future. Similarly, matching to a low-degree node is desired since the node will probably not have many edges in the future. The node-wise reasoning of the invariant models is effective at learning to utilize such node-specific graph properties and to follow a more sensible policy. Armed with historical context, \texttt{inv-ff-hist} outperforms all other models on BA graphs (Fig.~\ref{fig:ba}). The best performance on ER and gMission is achieved by \texttt{ff-hist} since the model gets all information (weights) at once (Fig.~\ref{fig:plots}). However, when $U$ nodes are permuted, \texttt{inv-ff-hist} vastly outperforms \texttt{ff-hist}, as shown in Appendix \ref{appendix:perm}.
\texttt{ff-supervised} performs well but not as well as the RL models since supervised learning comes with its own disadvantages, i.e., overfitting when there are more skip actions than match actions, and being unable to reason sequentially when it makes a wrong decision early on. The latter is a well-known fatal flaw in behavior cloning, as observed by~\citet{behaviorcloning} and others. \texttt{greedy-t} performs well compared to \texttt{greedy}, which shows the advantage of strategically skipping if weights do not pass a tuned threshold. However, it is still outperformed by the learned models, especially on BA graphs, which exhibit explicit patterns, and on gMission 100x100. In general, the choice of the best model is dependent on the problem, but we provide some empirical evidence on how this choice should be made. The invariant models with history (\texttt{inv-ff-hist}, \texttt{gnn-hist}) are the best-performing models and the most recommended for use in practice as they are invariant to $|U|$, can support more general OBM variants such as 2-sided arrivals, and can take advantage of node/edge features. For settings where $|U|$ is always fixed (e.g., scheduling jobs to servers), \texttt{ff-hist} is the best as it can see all arriving edges at once and takes advantage of history. \noindent\textbf{Some advantages to using invariant models:} We also experiment with a variation on the gMission dataset, where a new set of fixed nodes is generated for each graph instance. We see the same pattern as in gMission (where the same fixed nodes in $U$ existed across all instances), except that non-invariant models degrade substantially for 100x100. This is because the input size increased substantially but the model's hidden layer sizes were kept constant. The same issue is seen for BA graphs. A significant disadvantage of non-invariant models is that they are not independent of $|U|$, so model size needs to be increased with $|U|$.
Invariant models are unaffected by this issue as they reason \textit{node-wise}. We notice that this problem is not seen in gMission. One explanation for this is that fixed nodes are the same across all instances, so models have a good idea of the weight distribution for each fixed node, which is the same during training and testing. Therefore, even though the model size is not increased as the input size increases, the models can in some sense ``guess'' the incoming weights and so do not need as much of an increase in capacity. Once again, models with history display better performance as history helps the models build a better ``identity'' for each fixed node as more nodes come in, even if that node was never seen during training. \begin{figure}[t] \centering \begin{subfigure}{\textwidth} \centering \includegraphics[width=0.45\textwidth]{plots/e-obm_gmission-var_10x30_graph_transfer.pdf} \includegraphics[width=0.45\textwidth]{plots/e-obm_gmission-var_10x60_graph_transfer.pdf} \label{fig:trans-plots} \end{subfigure} \caption{Graph Transferability on gMission-var: Average optimality ratios for models trained on graphs of size 10x30 (left) \& 10x60 (right) and tested on graphs of different sizes. Missing values are denoted with a dash for models that are not invariant to the number of $U$ nodes of the training graphs.} \label{fig:trans-plot} \end{figure} \noindent\textbf{Do models trained on small graphs transfer to larger graphs?} In Fig. \ref{fig:trans-plot}, we train all models on 10x30 and 10x60 graphs separately and test their transferability to graphs with different $|U|$-to-$|V|$ ratios up to size 100x200. \texttt{gnn-hist} and \texttt{inv-ff-hist} perform especially well on graphs with similar $|U|$-to-$|V|$ ratio. For 100x100 graphs, \texttt{inv-ff} and \texttt{greedy-t} perform poorly as they do not receive any features that give them context within a new graph size, such as the number of available fixed nodes.
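The graph-size invariance exploited here can be illustrated with a small, hypothetical sketch: a node-wise scorer with a fixed parameter set applies unchanged to any number of fixed nodes, whereas a fully-connected input layer would have to grow with $|U|$.

```python
def node_wise_scores(weights, shared_params):
    """Apply the same shared scoring function to every fixed node,
    so the parameter count is independent of |U|."""
    a, b = shared_params
    return [a * w + b for w in weights]

shared_params = (2.0, -1.0)  # hypothetical learned parameters, fixed size
small = node_wise_scores([0.2, 0.9], shared_params)   # |U| = 2
large = node_wise_scores([0.1] * 100, shared_params)  # |U| = 100, same parameters
```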
\subsection{Online Submodular Bipartite Matching (OSBM)} \begin{figure}[t] \centering \includegraphics[width=0.25\textwidth]{plots/osbm_movielense_10x30_-1--1_boxplot.pdf} \includegraphics[width=0.25\textwidth]{plots/osbm_movielense_10x60_-1--1_boxplot.pdf} \includegraphics[width=0.25\textwidth]{plots/osbm_movielense_94x100_-1--1_boxplot.pdf} \includegraphics[width=0.25\textwidth]{plots/osbm_movielense-var_10x30_-1--1_boxplot.pdf} \includegraphics[width=0.25\textwidth]{plots/osbm_movielense-var_10x60_-1--1_boxplot.pdf} \includegraphics[width=0.25\textwidth]{plots/osbm_movielense-var_94x100_-1--1_boxplot.pdf} \includegraphics[width=\textwidth]{plots/osbm_legend.pdf} \caption{Distributions of the Optimality Ratios for OSBM on three graph sizes for MovieLens and MovieLens-var. Higher is better.} \label{fig:osbm_plot} \end{figure} The inherent complexity of the OSBM problem and the real-world datasets provide a learning-based approach with more information to leverage. As such, the models tend to discover policies which do significantly better than \texttt{greedy} as shown in Fig.~\ref{fig:osbm_plot}. The benefit of RL models is apparent here when compared to \texttt{ff-supervised}, particularly for 10x30 and 94x100 graphs. The relative complexity of OSBM compared to E-OBM will require the model to be more generalizable as the reasoning involved is more complex and mere imitation is not enough. \texttt{ff-supervised} also underperforms because the edge weights depend on the current solution and can change on the same graph instance if previous matches are different, causing a great mismatch with the training data. A similar trend to E-OBM results is observed here: models with history outperform their history-less counterparts. The context provided by history is particularly helpful as the edge weights depend on previous matches. 
Furthermore, we notice that \texttt{gnn-hist} has the best performance on 10x30 and 94x100 graphs as it is the only model that uses user attributes as node features. We witness the same issue seen in gMission-var (Fig.~\ref{fig:plots}). The non-invariant models degrade on 94x100 graphs due to keeping the same hidden-layer sizes despite processing larger graphs. The invariant models remain unaffected by the graph size. Interestingly, the invariant models even slightly outperform their non-invariant counterparts on 10x30 and 10x60 MovieLens-var. \section{Conclusion and Future Work} Through extensive experiments, we have demonstrated that deep reinforcement learning with appropriately designed neural network architectures and feature engineering can produce high-performance online matching policies across two problems spanning a wide range of complexity. In particular, we make the following concluding observations: \begin{itemize} \item A basic reinforcement learning formulation and training scheme are sufficient to produce good learned policies for online matching, are typically not sensitive to the choice of hyperparameters, and are advantageous compared to a supervised learning approach; \item Compared to greedy policies, RL-based policies are more effective, a result that can be partially explained by a stronger agreement with the optimal solution (in hindsight) in the early timesteps of the process when greedy is too eager to match.
RL policies tend to perform particularly well when trained and tested on dense graphs or ones with a strong structural pattern; \item Models that are invariant to the number of nodes are more advantageous than fully-connected models in terms of how well they generalize to test instances that are slightly perturbed compared to the training instances, either in graph size or in the identities of the fixed nodes; \item Feature engineering at the node and graph levels can help model the history of the matching process up to that timestep, resulting in improved solutions compared to models that use only weight information from the current timestep; \item Graph Neural Network models are a viable alternative to feed-forward models as they can leverage node features and their dependencies across nodes more naturally. \end{itemize} Future avenues of research include: \begin{itemize} \item A more extensive experimental analysis of different RL training algorithms beyond basic policy gradient; \item Extensions to new real-world datasets with rich node and edge features that could benefit even more from highly expressive models such as GNNs; \item Extensions to other online combinatorial optimization problems, which can leverage our framework, models, and code as a starting point. \end{itemize} \newpage
\section{Introduction} Black holes are one of the most active areas of gravitational physics, mainly because they constitute ideal laboratories to test general relativity in the strong field regime. However, confronting theoretical predictions with observations is an arduous and complicated task. A formidable step in this direction is the recent direct observation of black hole shadows \cite{Akiyama:2019cqa,Akiyama:2019bqs}, as well as the observation of black holes through the detection of gravitational waves \cite{Abbott:2016blz,Abbott:2017oio,Abbott:2017gyy}, which opens a new and promising era for gravitational physics. Despite the strong evidence in favour of their existence, black holes still present some paradoxes which have not been solved satisfactorily \cite{Wald:1999vt}. This has motivated some authors to propose alternatives, or black-hole 'mimickers' (see \cite{cardoso2019} for a recent review), which can be compact enough to generate a shadow such as the one recently observed~\cite{Akiyama:2019cqa,Akiyama:2019bqs}. One of these mimickers is the gravastar model of Mazur and Mottola \cite{mazur2001,mazur2004}. In the original gravastar scenario, a collapsing star suffers a phase transition at, or close to, where the horizon would have been formed. As a result, the interior region is replaced by a patch of de Sitter spacetime with negative pressure $p=-\rho$. This interior region is matched to the exterior Schwarzschild spacetime through a shell of stiff matter $p=\rho$. The gravastar provides a final state for gravitational collapse, with no central singularity nor an event horizon, therefore avoiding the issues that these notions entail in classical black holes. Despite the fact that the gravastar has been widely studied in the literature (see e.g.
\cite{Cattoen:2005he,Lobo:2006xt,Chirenti:2007mk,Chirenti:2008pf,Chirenti:2016hzd,Cardoso:2007az,Pani:2009ss,Horvat:2011ar,Cardoso:2014sna,sakai2014,Pani:2015tga,Volkel:2017ofl,Nakao:2018knn} and references therein), its mechanism of formation remains the biggest challenge for this model. Mazur and Mottola \cite{Mazur:2015kia} unveiled a connection between the gravastar and the interior Schwarzschild solution \cite{schwarzschild1916b} or `Schwarzschild star', characterised by a constant energy density and isotropic pressure. Although the Schwarzschild star is an ideal case which might not be fully attained in a realistic physical scenario (see however \cite{misner1973}), it nevertheless provides a simple analytical solution to Einstein's equations which allows further analysis \cite{Stuchlik:2008xe,Stuchlik:2016xiq,Boehmer:2003uz}. Mazur and Mottola \cite{Mazur:2015kia} investigated the Schwarzschild star in the ultracompact regime beyond the Buchdahl limit $R=(9/4)M$ \cite{buchdahl1959}, and showed that the limiting configuration, when the radius of the star approaches the Schwarzschild radius $R\to R_{S}$, becomes one with a regular interior of constant negative pressure $p=-\rho$ determined by a patch of modified de Sitter spacetime. This sphere of 'dark energy' is bounded by a boundary layer of anisotropic stresses, located at the Schwarzschild radius, endowed with a certain surface tension. By Birkhoff's theorem, the exterior remains the exterior Schwarzschild spacetime. The ultracompact Schwarzschild star has zero entropy and temperature, thus mirroring the main characteristics of the gravastar proposed in \cite{mazur2001,mazur2004}. The ultracompact Schwarzschild star was extended to slow rotation in \cite{Posada:2016qpz}. A result of particular interest is that the moment of inertia and mass quadrupole moment, or I-Q relations, are in agreement with the corresponding Kerr values \cite{Urbanec:2013fs}.
More recently, it was shown in \cite{Camilo:2018goy} that the Schwarzschild star is stable against radial perturbations. As an extension of these results, Konoplya et al. \cite{Konoplya:2019nzp} showed that the ultracompact Schwarzschild star is stable against non-radial (axial) gravitational perturbations. Moreover, they showed that the $l>1$ perturbations are indistinguishable from those of Schwarzschild black holes. The results above assert the viability of the ultracompact Schwarzschild star as a legitimate black hole mimicker. Gravitational perturbations of more general polytropic spheres were studied in \cite{Stuchlik:2017qiz}. Inspired by these results, it is relevant to extend the Schwarzschild star to a more general scenario, while preserving its fundamental properties. However, extending a known solution to a more complex situation can be an overwhelming task, given the complexity of Einstein's field equations~\cite{Stephani}. Fortunately, the so-called method of gravitational decoupling by Minimal Geometric Deformation (MGD-decoupling, henceforth)~\cite{Ovalle:2017fgl,Ovalle:2019qyi}, which has been widely used recently~\cite{Ovalle:2017wqi,Gabbanelli:2018bhs,Ovalle:2018umz,Sharif:2018toc,Contreras:2018gzd,Contreras:2018vph,Morales:2018nmq,Heras:2018cpz,Panotopoulos:2018law,Sharif:2018tiz,Contreras:2019iwm,Maurya:2019wsk,Contreras:2019fbk,Ovalle:2018ans,Sharif:2018khl}, has proved to be a powerful method to extend known solutions into more complex frameworks.
\par % The original version of the MGD approach was developed in Refs.~\cite{Ovalle:2007bn,Ovalle:2009xk} in the context of extra-dimensional gravity~\cite{Randall:1999ee,Randall:1999vf}, and it was eventually extended to study black hole solutions in Refs.~\cite{Casadio:2015gea,Ovalle:2015nfa} (for some earlier works on the MGD, see for instance Refs.~\cite{Casadio:2012pu,Ovalle:2013xla,Ovalle:2013vna,Casadio:2013uma}, and Refs.~\cite{Ovalle:2014uwa,Casadio:2015jva,Cavalcanti:2016mbe,Casadio:2016aum,daRocha:2017cxu,daRocha:2017lqj,Fernandes-Silva:2017nec,Casadio:2017sze,Fernandes-Silva:2018abr,Fernandes-Silva:2019fez} for some recent applications). The MGD-decoupling has three main characteristics that make it particularly useful in the search for new solutions of Einstein's field equations, namely: % \begin{itemize} % \item We can extend any solution of the Einstein equations into more complex domains. For instance, we can start from a source with energy-momentum tensor $\hat T_{\mu\nu}$ for which the metric is known and add the energy-momentum tensor of a second source, \begin{equation} \label{coupling0} \hat T_{\mu\nu} \rightarrow T_{\mu\nu} = \hat T_{\mu\nu} +T^{(1)}_{\mu\nu} \ . \end{equation} We can then repeat the process with more sources $T^{(i)}_{\mu\nu}$ to extend the solution of the Einstein equations associated with the gravitational source $\hat T_{\mu\nu}$ into the domain of more intricate forms of gravitational sources $T_{\mu\nu}$; % \item We can reverse the previous procedure in order to find a solution to Einstein's equations with a complex energy-momentum tensor ${T}_{\mu\nu}$ by separating it into simpler components, \begin{equation} \label{split} {T}_{\mu\nu} \rightarrow \hat T_{\mu\nu}+T^{(i)}_{\mu\nu} \ , \end{equation} and solve Einstein's equations for each one of these components. Hence, we will have as many solutions as the components in the original energy-momentum tensor ${T}_{\mu\nu}$. 
Finally, by a simple combination of all these solutions, we will obtain the solution to the Einstein equations associated with the original energy-momentum tensor ${T}_{\mu\nu}$. % \item We can apply it to theories beyond general relativity. For instance, given the modified action~\cite{Ovalle:2019qyi} \begin{equation} \label{ngt} S_{\rm G} = S_{\rm EH}+S_{\rm X} = \int\left[\frac{R}{2\,k^2}+{\cal L}_{\rm M}+{\cal L}_{\rm X}\right]\sqrt{-g}\,d^4\,x \ , \end{equation} where ${\cal L}_{\rm M}$ contains all matter fields in the theory and ${\cal L}_{\rm X}$ is the Lagrangian density of a new gravitational sector with an associated energy-momentum tensor \begin{equation} \label{ngt2} \theta_{\mu\nu} = \frac{2}{\sqrt{-g}}\frac{\delta(\sqrt{-g}\,{\cal L}_{\rm X})}{\delta g^{\mu\nu}} = 2\,\frac{\delta{\cal L}_{\rm X}}{\delta g^{\mu\nu}}-g_{\mu\nu}\,{\cal L}_{\rm X} \ , \end{equation} we can use~\eqref{coupling0} to extend all the known solutions of the Einstein-Hilbert action $S_{\rm EH}$ into the domain of modified gravity represented by $S_{\rm G}$. This represents a straightforward way to study the consequences of extended gravity on general relativity. % \end{itemize} In this paper we will apply the procedure described in~\eqref{coupling0} to extend the Mazur-Mottola model in order to build a new ultracompact interior configuration with non-uniform matter density and anisotropic pressure. \par The paper is organised as follows: in Section~\ref{s2} we present Einstein's equations for a spherically symmetric stellar configuration and we discuss how to decouple two spherically symmetric and static gravitational sources $\{T_{\mu\nu},\,\theta_{\mu\nu}\}$, as well as the matching conditions at the stellar surface under the MGD-decoupling. In Section~\ref{s3} we review the constant-density interior Schwarzschild solution, or Schwarzschild star, and the negative pressure regime.
In Section~\ref{s4}, we implement the MGD-decoupling following the scheme~\eqref{coupling0} to generate the extended version of the Mazur-Mottola model. Finally, in Section~\ref{con} we summarise our conclusions. \section{Gravitational decoupling of two sources by MGD} \label{s2} \setcounter{equation}{0} Let us start from the standard Einstein field equations~\footnote{We use the metric signature $(+---)$ and the constant $k^2=8\,\pi\,G_{\rm N}$.} \begin{eqnarray} \label{EinEq} R_{\mu\nu}-\frac{1}{2}\,R\,g_{\mu\nu} = k^2\,T^{\rm (tot)}_{\mu\nu} \ , \end{eqnarray} where the energy-momentum tensor $T^{\rm (tot)}_{\mu\nu}$ is given by \begin{eqnarray} \label{emt} T^{\rm (tot)}_{\mu\nu} = {T}_{\mu\nu}+\theta_{\mu\nu} \ , \end{eqnarray} where ${T}_{\mu\nu}$ and $\theta_{\mu\nu}$ represent two generic gravitational sources. Let us recall that the Einstein tensor is divergenceless and therefore the total energy momentum tensor ${T}^{\rm (tot)}_{\mu\nu}$ must satisfy the conservation equation \begin{eqnarray} \nabla_\nu\,T^{{\rm (tot)}{\mu\nu}} = 0 \ . \label{dT0} \end{eqnarray} In Schwarzschild-like coordinates, the spherically symmetric metric reads \begin{eqnarray} ds^{2} = e^{\nu (r)}\,dt^{2}-e^{\lambda (r)}\,dr^{2} -r^{2}\left( d\theta^{2}+\sin ^{2}\theta \,d\phi ^{2}\right) \ , \label{metric} \end{eqnarray} where $\nu =\nu (r)$ and $\lambda =\lambda (r)$ are functions of the areal radius $r$ only, ranging from the center $r=0$ up to the stellar surface $r=R>0$. 
Explicitly, the field equations read \begin{eqnarray} \label{ec1} && k^2 \left(T_0^{\,0} +\theta_0^{\,0} \right) = \strut\displaystyle\frac 1{r^2} -e^{-\lambda }\left( \frac1{r^2}-\frac{\lambda'}r\right)\ , \\ && \label{ec2} k^2 \strut\displaystyle \left(T_1^{\,1}+\theta_1^{\,1}\right) = \frac 1{r^2}-e^{-\lambda }\left( \frac 1{r^2}+\frac{\nu'}r\right)\ , \\ && \label{ec3} k^2 \strut\displaystyle \left(T_2^{\,2}+\theta_2^{\,2}\right) = \frac 14e^{-\lambda }\left[ -2\,\nu''-\nu'^2+\lambda'\,\nu' -2\,\frac{\nu'-\lambda'}r\right] \ , \end{eqnarray} while the conservation equation, which is a linear combination of \eqref{ec1}-\eqref{ec3}, yields \begin{eqnarray} \label{con1} && \left({T}_1^{\ 1}\right)' - \frac{\nu'}{2}\left({T}_0^{\ 0}-{T}_1^{\ 1}\right) - \frac{2}{r}\left({T}_2^{\ 2}-{T}_1^{\ 1}\right) \nonumber \\ &&+ \left({\theta}_1^{\ 1}\right)' - \frac{\nu'}{2}\left({\theta}_0^{\ 0}-{\theta}_1^{\ 1}\right) - \frac{2}{r}\left({\theta}_2^{\ 2}-{\theta}_1^{\ 1}\right) = 0 \ , \end{eqnarray} where $f'\equiv \partial_r f$. By simple inspection of \eqref{ec1}-\eqref{ec3}, we can identify an effective density \begin{eqnarray} \tilde{\rho} = T_0^{\,0} +\theta_0^{\,0} \ , \label{efecden} \end{eqnarray} an effective radial pressure \begin{eqnarray} \tilde{p}_{r} =-T _1^{\,1}-\theta_1^{\,1} \ , \label{efecprera} \end{eqnarray} and an effective tangential pressure \begin{eqnarray} \tilde{p}_{t} =-T _2^{\,2}-\theta_2^{\,2} \ . \label{efecpretan} \end{eqnarray} The expressions above clearly illustrate the appearance of an anisotropy inside the stellar distribution, given by \begin{eqnarray} \label{anisotropy} \Pi \equiv \tilde{p}_{t}-\tilde{p}_{r}. \end{eqnarray} Equations \eqref{ec1}-\eqref{ec3} contain five unknown functions, namely, two metric functions $\{\nu(r),\,\lambda(r)\}$ and three physical variables: the density $\tilde{\rho}(r)$, the radial pressure $\tilde{p}_r(r)$ and the tangential pressure $\tilde{p}_t(r)$.
Thus these equations form an indefinite system \cite{Herrera:1979,Mak:2001eb} which requires additional information to produce any specific solution. In order to solve the Einstein equations~\eqref{ec1}-\eqref{con1} we implement the MGD-decoupling. In this approach, one starts from a solution to \eqref{EinEq} for the source $T_{\mu\nu}$ [that is \eqref{ec1}-\eqref{con1} with $\theta_{\mu\nu}=0$] such that the metric reads \begin{eqnarray}\label{pfmetric} ds^{2}=e^{\xi (r)}\,dt^{2}-e^{\mu(r)}\,dr^{2}-r^{2}\left( d\theta^{2}+\sin ^{2}\theta \,d\phi ^{2}\right)\ , \end{eqnarray} where \begin{eqnarray} \label{standardGR} e^{-\mu(r)} \equiv 1-\frac{k^2}{r}\int_0^r x^2\,\rho\, dx = 1-\frac{2\,m(r)}{r}, \end{eqnarray} is the standard General Relativity expression containing the Misner-Sharp mass function $m=m(r)$. Next, we turn on the second source $\theta_{\mu\nu}$ to see its effects on the first source $T_{\mu\nu}$. These effects are encoded in the geometric deformation undergone by the geometry \eqref{pfmetric}, namely \begin{eqnarray} \label{gd1} \xi &\mapsto & \nu = \xi+\alpha\,g \ , \\ \label{gd2} e^{-\mu} &\mapsto & e^{-\lambda} = e^{-\mu}+\alpha\,f \ , \end{eqnarray} where $g$ and $f$ are, respectively, the deformations undergone by the temporal and radial metric component of the geometry $\{\xi,\mu\}$. Among all possible deformations~$\{g,f\}$, the simplest one is the so-called minimal geometric deformation given by~$\{g=0,f=f^{*}\}$, and therefore only the radial metric component changes to \begin{eqnarray} \label{expectg} e^{-\mu(r)}\mapsto\,e^{-\lambda(r)} = e^{-\mu(r)}+\alpha\,f^{*}(r) \ . \end{eqnarray} The system~\eqref{ec1}-\eqref{con1} can be decoupled by plugging the deformation~(\ref{expectg}) into the Einstein equations~(\ref{ec1})-(\ref{ec3}). 
The system is thus separated into two sets of equations: (i) one having the standard Einstein field equations for the energy-momentum tensor $T_{\mu\nu}$, whose metric is given by \eqref{pfmetric} with $\xi(r)=\nu(r)$, \begin{eqnarray} \label{ec1pf} && k^2\,T_0^{\,0} = \strut\displaystyle\frac 1{r^2} -e^{-\mu }\left( \frac1{r^2}-\frac{\mu'}r\right)\ , \\ && \label{ec2pf} k^2\,T_1^{\,1} = \frac 1{r^2}-e^{-\mu }\left( \frac 1{r^2}+\frac{\nu'}r\right)\ , \\ && \label{ec3pf} k^2\,T_2^{\,2} = \frac 14e^{-\mu }\left[ -2\,\nu''-\nu'^2+\mu'\,\nu' -2\,\frac{\nu'-\mu'}r\right] \ , \end{eqnarray} along with the conservation equation~(\ref{dT0}) with $\theta_{\mu\nu} = 0$, namely $\nabla_\nu\,{T}^{{\mu\nu}}=0$, yielding \begin{eqnarray} \label{conpf} \left({T}_1^{\ 1}\right)' - \frac{\nu'}{2}\left({T}_0^{\ 0}-{T}_1^{\ 1}\right) - \frac{2}{r}\left({T}_2^{\ 2}-{T}_1^{\ 1}\right) = 0\ , \end{eqnarray} which is a linear combination of \eqref{ec1pf}-\eqref{ec3pf}; and (ii) one for the source $\theta_{\mu\nu}$, which reads \begin{eqnarray} \label{ec1d} && k^2\,\theta_0^{\,0} = -\strut\displaystyle\frac{\alpha\,f^{*}}{r^2} -\frac{\alpha\,f^{*'}}{r}\ , \\ && \label{ec2d} k^2 \strut\displaystyle \,\theta_1^{\,1} =- \alpha\,f^{*}\left(\frac{1}{r^2}+\frac{\nu'}{r}\right)\ , \\ && \label{ec3d} k^2 \strut\displaystyle\,\theta_2^{\,2} = -\frac{\alpha\,f^{*}}{4}\left(2\nu''+\nu'^2+\frac{2\,\nu'}{r}\right)-\frac{\alpha\,f^{*'}}{4}\left(\nu'+\frac{2}{r}\right) \ . \end{eqnarray} Its conservation equation $\nabla_\nu\,\theta^{\mu\nu}=0$ explicitly reads \begin{eqnarray} \label{con1d} (\theta_1^{\,\,1})' -\strut\displaystyle\frac{\nu'}{2}(\theta_0^{\,\,0} -\theta_1^{\,\,1})-\frac{2}{r}(\theta_2^{\,\,2}-\theta_1^{\,\,1}) = 0 \ , \end{eqnarray} which is a linear combination of \eqref{ec1d}-\eqref{ec3d}. We recall that, under these conditions, there is no exchange of energy-momentum between the perfect fluid and the source $\theta_{\mu\nu}$ and therefore their interaction is purely gravitational.
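To see explicitly how this separation arises, note that $e^{-\lambda}\,\lambda'=-\left(e^{-\lambda}\right)'$, so that the $00$-component~\eqref{ec1} can be rewritten as \begin{eqnarray} k^2\left(T_0^{\,0}+\theta_0^{\,0}\right) = \frac{1}{r^2}-\frac{e^{-\lambda}}{r^2}-\frac{\left(e^{-\lambda}\right)'}{r} \ , \end{eqnarray} which is linear in $e^{-\lambda}$. Substituting the deformation~\eqref{expectg}, namely $e^{-\lambda}=e^{-\mu}+\alpha\,f^{*}$, the right-hand side splits into a piece depending only on $e^{-\mu}$, which reproduces~\eqref{ec1pf}, plus a piece proportional to $\alpha\,f^{*}$, which reproduces~\eqref{ec1d}. The remaining components separate in the same way, since the minimal deformation $g=0$ leaves $\nu$ undeformed.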
\subsection{Deformed vacuum and matching conditions at the surface} \label{s5} \par Let us recall the matching conditions at the stellar surface $r=R$ between the interior geometry ($0\le r\le R$) of a self-gravitating system and the exterior $(r>R)$ spacetime. The interior is described by the generic metric~\eqref{metric}, which in terms of the MGD transformation~\eqref{expectg} reads \begin{eqnarray} ds^{2} = e^{\nu^{-}(r)}\,dt^{2} -\left[1-\frac{2\,\tilde{m}(r)}{r}\right]^{-1}dr^2 -r^{2}\left(d\theta ^{2}+\sin {}^{2}\theta d\phi ^{2}\right) \ , \label{mgdmetric} \end{eqnarray} where the interior mass function is given by \begin{eqnarray} \label{effecmass} \tilde{m}(r) = m(r)-\frac{r}{2}\,\alpha\,f^{*}(r) \ , \end{eqnarray} with the Misner-Sharp mass $m$ given by \eqref{standardGR} and $f^{*}$ the geometric deformation in \eqref{expectg}. On the other hand, the exterior spacetime will be described by the deformed Schwarzschild metric \begin{eqnarray} \label{MetricSds} ds^2=\left(1-\frac{2{\cal M}}{r}\right)dt^2-\left(1-\frac{2{\cal M}}{r}+\beta\,g^{*}(r)\right)^{-1}dr^2-r^{2}\,d\Omega^2 \ , \end{eqnarray} which determines the Schwarzschild vacuum $T^{+}_{\mu\nu}=0$ filled by a generic energy-momentum tensor $\theta^{+}_{\mu\nu}\neq\,0$. We remark that this exterior could be filled by fields contained in the source $\theta^{+}_{\mu\nu}$. The function $g^{*}(r)$ in the metric~\eqref{MetricSds} is precisely the geometric deformation for the outer Schwarzschild solution due to $\theta^{+}_{\mu\nu}$. Notice that the interior and exterior deformations are different, as are their respective parameters $\alpha$ and $\beta$.
\par After decoupling the Schwarzschild vacuum $T^{+}_{\mu\nu}=0$ and $\theta^{+}_{\mu\nu}\neq\,0$, the set of equations~\eqref{ec1d}-\eqref{ec3d} for the exterior $r>R$ reads \begin{eqnarray} \label{ec1de} && k^2\,(\theta_0^{\,0})^+ = -\strut\displaystyle\frac{\beta\,g^{*}}{r^2} -\frac{\beta\,g^{*'}}{r}\ , \\ && \label{ec2de} k^2 \strut\displaystyle \,(\theta_1^{\,1})^+ =-\frac{\beta\,g^{*}}{r\,(r-2\,{\cal M})}\ , \\ && \label{ec3de} k^2 \strut\displaystyle\,(\theta_2^{\,2})^{+} = \frac{{\cal M}\,(r-{\cal M})}{r^2\,(r-2\,{\cal M})^2}\,\beta\,g^{*}-\frac{(r-{\cal M})}{2\,r\,(r-2\,{\cal M})}\beta\,g^{*'} \ , \end{eqnarray} together with their respective conservation equations, which are a linear combination of \eqref{ec1de}-\eqref{ec3de}. \par The metrics in \eqref{mgdmetric} and \eqref{MetricSds} must satisfy the Israel-Darmois matching conditions~\cite{Israel:1966rt} at the surface $\Sigma$ defined by $r=R$, namely, the continuity of the first and second fundamental form. Continuity of the first fundamental form reads \begin{eqnarray} \left[ ds^{2}\right] _{\Sigma }=0 \ , \label{match1} \end{eqnarray} where $[F]_{\Sigma }\equiv F(r\rightarrow R^{+})-F(r\rightarrow R^{-})\equiv F_{R}^{+}-F_{R}^{-}$, for any function $F=F(r)$, which yields \begin{eqnarray} e^{\nu ^{-}(R)} = 1-\frac{2\,{\cal M}}{R} \ , \label{ffgeneric1} \end{eqnarray} and \begin{eqnarray} 1-\frac{2\,M}{R}+\alpha\,f^{*}_{R} = 1-\frac{2\,{\cal M}}{R}+\beta\,g^{*}_{R} \ , \label{ffgeneric2} \end{eqnarray} where $M=m(R)$, with $f^{*}_{R}$ and $g^{*}_{R}$ being the interior and exterior minimal geometric deformation evaluated at the star surface, respectively. Likewise, continuity of the second fundamental form reads \begin{eqnarray} \left[G_{\mu \nu }\,r^{\nu }\right]_{\Sigma } = 0 \ , \label{matching1} \end{eqnarray} where $r_{\mu }$ is a unit radial vector. 
Using \eqref{matching1} and the general Einstein equations~(\ref{EinEq}), we then find \begin{eqnarray} \left[T_{\mu \nu }^{\rm (tot)}\,r^{\nu }\right]_{\Sigma} = 0 \ , \label{matching2} \end{eqnarray} which leads to \begin{eqnarray} \left[T_1^{\,\,1}+\theta_1^{\,\,1}\right]_{\Sigma } = 0 \ . \label{matching3} \end{eqnarray} This matching condition takes the final form \begin{eqnarray} (T_1^{\,\,1})^{-}_{R}+\,(\theta_1^{\,\,1})^{-}_{R} =(\theta_1^{\,\,1})^{+}_{R} \ . \label{matchingf} \end{eqnarray} The condition in Eq.~(\ref{matchingf}) is the general expression for the second fundamental form associated with the Einstein equations~(\ref{EinEq}) and the Schwarzschild vacuum filled by a generic source $\theta^{+}_{\mu\nu}$, namely, $\{T^{+}_{\mu\nu}=0,\,\theta^{+}_{\mu\nu}\,\neq\,0\}$. \par By using Eqs.~(\ref{ec2d}) and~\eqref{ec2de} in the condition~(\ref{matchingf}), the second fundamental form can be written as \begin{eqnarray} -k^2\,(T_1^{\,\,1})^{-}_{R} +\alpha\,f_{R}^{*}\left(\frac{1}{R^{2}}+\frac{\nu _{R}^{\prime }}{R}\right) = \frac{\beta\,g_{R}^{*}}{R\,(R-2\,{\cal M})} \ , \label{sfgenericf} \end{eqnarray} where $\nu _{R}^{\prime }\equiv \partial _{r}\nu^{-}|_{r=R}$. In terms of the effective pressure \eqref{efecprera}, we can express the condition~\eqref{sfgenericf} as \begin{eqnarray} \tilde{p}^-_R=\tilde{p}^+_R\ , \end{eqnarray} which establishes the continuity of the effective radial pressure at the stellar surface. The expressions in \eqref{ffgeneric1}, \eqref{ffgeneric2} and \eqref{sfgenericf} are the necessary and sufficient conditions for the matching of the interior MGD metric \eqref{mgdmetric} to a spherically symmetric outer 'vacuum' described by the deformed Schwarzschild metric {\eqref{MetricSds}}.
\par Finally, we remark an important result regarding the matching condition~(\ref{sfgenericf}): if the outer geometry is given by the Schwarzschild metric, namely, $g^{*}(r) = 0$ in \eqref{MetricSds}, then the condition \eqref{sfgenericf} reads \begin{eqnarray} \tilde{p}^{-}_R=\,p_{R}+\alpha\,\frac{f_{R}^{\ast }}{k^2} \left(\frac{1}{R^{2}}+\frac{\nu _{R}^{\prime }}{R}\right)=0 \ . \label{pnegative} \end{eqnarray} Therefore the star will be in equilibrium in a vacuum only if the effective radial pressure at the surface vanishes. In particular, if the inner geometric deformation $f^*(r<R)$ is positive and weakens the gravitational field [see \eqref{effecmass}], an outer Schwarzschild solution can only be compatible with a non-vanishing inner $\theta_{\mu\nu}$ if the isotropic stellar matter has $p_{R}<0$ at the surface of the star. This could be interpreted as regular matter with a solid crust~\cite{Ovalle:2014uwa}. If we want to avoid having a solid crust and keep the standard condition $p_{R}=0$, we must impose that the anisotropic effects on the radial pressure vanish at $r=R$. This can be achieved if we assume that $(\theta_1^{\,\,1})^{-}_{R}\sim\,p_{R}$ in \eqref{matchingf}, which leads to a vanishing inner deformation $f^*_R=0$. \section{'Gravastar' as the ultracompact Schwarzschild star} \label{s3} In this section we briefly discuss the Schwarzschild interior solution, or Schwarzschild star, corresponding to a uniform-density spherical star, and the natural emergence of the interior region with negative pressure (see \cite{Mazur:2015kia,Posada:2016qpz} for a more detailed discussion). We start by considering a spherically symmetric spacetime given by \begin{eqnarray} ds^2 = +e^{\nu(r)}dt^2-e^{\lambda(r)}dr^2-r^2(d\theta^2+\sin^2\theta d\phi^2).
\end{eqnarray} The interior Schwarzschild solution can be written in the following form \cite{misner1973} \begin{numcases}{e^{\nu(r)} = } \frac{1}{4}\left(3\sqrt{1-H^2R^2}-\sqrt{1-H^2r^2}\right)^2, & $r < R$, \label{interiorf} \\ \left(1-\frac{2M}{r}\right), & $r > R$, \end{numcases} \noindent and \begin{numcases} {e^{-\lambda(r)} =} 1-H^2r^2, & $r < R$, \label{interiorh} \\ \left(1-\frac{2M}{r}\right), & $r > R$, \end{numcases} \noindent where $M$ is the mass and $R$ is the radius of the star. Here we have defined \begin{eqnarray} H^2\equiv\frac{8\pi\rho_{0}}{3}=\frac{R_{S}}{R^3}=\frac{2M}{R^3}, \end{eqnarray} \noindent where $\rho_{0}$ is the energy density, which is constant, and $R_{S}$ is the Schwarzschild radius. The pressure is determined by \begin{eqnarray}\label{pressure} p(r) = \rho_{0}\left(\frac{\sqrt{1-H^2r^2} - \sqrt{1-H^2R^2}}{3\sqrt{1-H^2R^2}-\sqrt{1-H^2r^2}}\right). \end{eqnarray} \noindent The Buchdahl condition $R/M>9/4$ guarantees that the pressure is positive and finite everywhere inside the star \cite{buchdahl1959}. However, the pressure \eqref{pressure} is regular except at the radius $R_{0}$ given by \begin{eqnarray}\label{pole} R_{0}=3R\sqrt{1-\frac{8}{9}\frac{R}{R_{S}}}<R. \end{eqnarray} \noindent Moreover, as can be observed from \eqref{interiorf} and \eqref{pressure}, the pressure diverges exactly at the same point where $g_{tt}=0$. Nevertheless, if we consider the Schwarzschild star in the regime $2<R/M<9/4$, an interesting behaviour emerges: the pole in the pressure \eqref{pole} moves out from the center of the star to a finite surface $R_{0}$ in the interval $0<R_{0}<R$. Furthermore, from \eqref{interiorf} and \eqref{pressure} we notice that the pressure becomes negative in the regime $0\leq r < R_{0}$, but $e^{\nu}>0$.
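It is instructive to derive the location of the pole~\eqref{pole} explicitly. The pressure~\eqref{pressure} diverges where its denominator vanishes, $3\sqrt{1-H^2R^2}=\sqrt{1-H^2R_{0}^2}$, so that \begin{eqnarray} R_{0}^2 = 9\,R^2-\frac{8}{H^2} = 9\,R^2\left(1-\frac{8}{9}\frac{R}{R_{S}}\right)\ , \end{eqnarray} where we used $H^2=R_{S}/R^3$, in agreement with~\eqref{pole}. The same computation makes the ultracompact limit transparent: as $R\to R_{S}$ one has $H^2R^2=R_{S}/R\to 1$, hence $\sqrt{1-H^2R^2}\to 0$ and the pressure~\eqref{pressure} reduces to \begin{eqnarray} p(r)\,\to\,\rho_{0}\left(\frac{\sqrt{1-H^2r^2}-0}{0-\sqrt{1-H^2r^2}}\right)=-\rho_{0}\ , \end{eqnarray} for all $r<R$.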
As the radius of the star approaches the Schwarzschild radius from above, $R\to R_{S}^{+}$, the pole approaches it from below, $R_{0}\to R_{S}^{-}$\footnote{This corresponds to a quasi-stationary adiabatic contraction.}. In the ultracompact limit when $R=R_{0}=R_{S}$, the Schwarzschild interior solution \eqref{interiorf} and \eqref{pressure} show that the whole new interior region becomes one with constant negative pressure \begin{eqnarray} p=-\rho,\quad r<R=R_{0}=R_{S}. \end{eqnarray} \begin{figure} \centering \includegraphics[scale=0.6]{fig1.pdf} \caption{\label{fig1} Metric function $e^{-\lambda(r)}$, as a function of $r/R$, for the interior and exterior of the Schwarzschild star. Note the 'cusp-like' behaviour at the matching point, and the approach of the minimum of $e^{-\lambda}$ to zero as the compactness approaches the 'black hole' limit.} \end{figure} The interior metric function \eqref{interiorf} becomes a patch of de Sitter spacetime, with the $g_{tt}$ metric component modified by a factor of $1/4$, indicating that time runs slower inside the configuration (see \cite{Mazur:2015kia} for further details). The final form of the interior metric is given by \begin{eqnarray}\label{gravin} e^{\nu(r)}=\frac{1}{4}(1-H^2r^2),\quad e^{-\lambda(r)}= (1-H^2r^2),\quad r<R_{0}=R_{S}. \end{eqnarray} \noindent Outside the configuration $r>R_{S}$, the spacetime geometry remains the spherically symmetric exterior Schwarzschild solution. Note from figure \ref{fig1} that the interior and exterior metric functions $g^{rr}$ are continuous at the surface $R=R_{0}=R_{S}$, but they join in a 'cusp-like', non-analytic, behaviour which implies a violation of the second junction condition $[K_{ij}]=0$.
From the general formalism of the Israel junction conditions \cite{Israel:1966rt,barrabes1991}, a violation of the second junction condition implies the presence of a $\delta$-distribution of stresses located on the hypersurface $R_{0}$, which is given by \cite{Mazur:2015kia} \begin{eqnarray}\label{delta} (p_{\perp}-p)=\frac{1}{3}\frac{\rho R_{0}^3}{r^2}\left(\frac{h}{f}\right)^{1/2}\delta(r-R_{0}). \end{eqnarray} This assumption is crucial to provide a physical interpretation of the Schwarzschild star beyond the Buchdahl limit. Moreover, these transverse stresses produce a finite surface energy and a finite surface tension, which is proportional to the difference in surface gravities, and is given by \begin{eqnarray}\label{tension} \tau_{surf} = \frac{\Delta\kappa}{8\pi}=\frac{MR_{0}}{4\pi R^3}. \end{eqnarray} It is relevant to remark that the metric functions remain positive for $r=R_{0}\to R_{S}$, therefore there is no trapped surface in the interior $r<R_{S}$, and no emergence of an event horizon. This configuration has zero entropy and zero temperature, which indicates its nature as a condensate. Thus, in the ultracompact limit when $R=R_{0}=R_{S}$, the Schwarzschild star mirrors the main features of the non-singular gravitational condensate star, or gravastar, proposed in \cite{mazur2001,mazur2004}. In the next section we will extend this model to a more general scenario using the MGD-decoupling. \section{MGD-gravastars}\label{s4} Let us start by identifying the gravastar solution~\eqref{gravin} with the undeformed metric~\eqref{pfmetric}. Hence the explicit form of the metric components $\{\xi,\,\mu\}$ reads \begin{eqnarray} \label{mm00} &&e^{\xi}=\frac{1}{4}(1-H^2r^2)\ ,\ \\ \label{mm11} &&e^{-\mu}=1-H^2r^2\ . \end{eqnarray} Next, we shall find the deformed version of the gravastar solution~\eqref{gravin} by using the MGD transformation~\eqref{expectg}.
Hence the MGD-gravastar, generically represented by the MGD metric in~\eqref{mgdmetric}, is written as \begin{eqnarray} \label{grav00} &&e^{\nu}=\frac{1}{4}(1-H^2r^2)\ ,\ \\ &&e^{-\lambda}=1-H^2r^2+\alpha\,f^{*}(r)\ , \label{grav11} \end{eqnarray} where the geometric deformation $f^{*}$ is found by solving the system~\eqref{ec1d}-\eqref{ec3d}. Since this system has four unknown functions, namely, the three components of the source $\theta_{\mu\nu}$ and the deformation $f^{*}$, we need to prescribe an additional condition. We may impose an equation of state for the source $\theta_{\mu\nu}$ or a physically motivated restriction for the deformation $f^{*}$. Here we will choose the latter option. \par A main characteristic of a gravastar is the condition $g_{tt}=g^{-1}_{rr}=0$ at its surface $r=R_S$. In order to keep this critical condition in the deformed version, the geometric deformation $f^{*}$ should satisfy \begin{eqnarray} \label{fcondition} f^{*}(r)\sim\,1-H^2r^2\ . \end{eqnarray} The simplest expression satisfying the requirement~\eqref{fcondition} is given by \begin{eqnarray} \label{fcondition2} f^{*}(r)=\left(1-H^2\,r^2\right)\,H^n\,r^n\ , \end{eqnarray} where $n\geq\,2$ to avoid a singular solution [see also \eqref{GSefecden}-\eqref{GSanisotropy}]. Using the expression~\eqref{fcondition2}, the radial metric component~\eqref{grav11} reads \begin{eqnarray} \label{grav11d} e^{-\lambda}=\left(1-H^2\,r^2\right)\left(1+\alpha\,H^n\,r^n\right)\ , \end{eqnarray} where \begin{eqnarray} \label{alphacond} \alpha\,\geq\,-1 \end{eqnarray} to ensure that the expression~\eqref{grav11d} remains non-negative when $r\rightarrow\,R_S\,$.
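One can check directly that the choice~\eqref{fcondition2} preserves the gravastar condition. Since $H^2=R_{S}/R^3$ with $R=R_{S}$, one has $H\,R_{S}=1$, and the deformed radial component~\eqref{grav11d} gives \begin{eqnarray} e^{-\lambda}\Big|_{r=R_{S}}=\left(1-H^2R_{S}^2\right)\left(1+\alpha\,H^{n}R_{S}^{n}\right)=0\ , \end{eqnarray} for any value of $\alpha$, while $e^{\nu}$ in~\eqref{grav00} vanishes there as well. Moreover, for $r<R_{S}$ we have $0\leq H^{n}r^{n}<1$, so the factor $1+\alpha\,H^{n}r^{n}$ stays positive for any $\alpha>-1$, consistent with the condition~\eqref{alphacond}.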
\par The MGD gravastar metric, which is a solution of Einstein's field equations~\eqref{ec1}-\eqref{ec3}, and whose explicit components are given by \eqref{grav00} and \eqref{grav11d}, generates an effective density \begin{eqnarray} k^2\,\tilde{\rho}(r)=3\,H^2+\alpha\,H^n\,r^{n-2}\left[(n+3)H^2\,r^2-n-1\right]\ , \label{GSefecden} \end{eqnarray} an effective radial pressure \begin{eqnarray} k^2\,\tilde{p}_{r}(r) =-3\,H^2-\alpha\,H^n\,r^{n-2}\left(3\,H^2\,r^2-1\right) \ , \label{GSefecprera} \end{eqnarray} and an effective tangential pressure \begin{eqnarray} k^2\,\tilde{p}_{t}(r) =-3\,H^2-\alpha\,H^n\,r^{n-2}\left[(n+3)H^2\,r^2-\frac{n}{2}\right]\ . \label{GSefecptan} \end{eqnarray} The anisotropy is given by \begin{eqnarray} \label{GSanisotropy} \Pi \equiv \tilde{p}_{t}-\tilde{p}_{r}=\frac{\alpha\,H^n\,r^{n-2}}{2\,k^2}\left(n-2-2\,n\,H^2\,r^2\right) \ . \end{eqnarray} We remark that the expressions in \eqref{grav00} and \eqref{grav11d}-\eqref{GSefecptan}, which describe a non-uniform and anisotropic ultracompact distribution (gravastar), satisfy Einstein's field equations \eqref{ec1}-\eqref{ec3}. \par By using the MGD gravastar solution displayed in~\eqref{grav00} and~\eqref{grav11d} in the first fundamental form~\eqref{ffgeneric1} and~\eqref{ffgeneric2}, we obtain \begin{eqnarray} \label{ffgrav00} &&0=1-\frac{2\,{\cal M}}{R}\ ,\\ &&0=1-\frac{2\,{\cal M}}{R}+\beta\,g^{*}_R \label{ffgrav11}\ , \end{eqnarray} which yields the important result \begin{eqnarray} \label{nodef} g^{*}_R\sim\,1-\frac{2\,{\cal M}}{R}=0\ . \end{eqnarray} The condition~\eqref{nodef} is crucial since it establishes that the geometric deformation $g^{*}(r)$ must vanish at the stellar surface of a gravastar, no matter the nature of the exterior source $\theta_{\mu\nu}$. 
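It is also useful to record the surface value of the effective radial pressure, which will enter the second fundamental form. Evaluating~\eqref{GSefecprera} at $r\to R_{S}$ and using $H\,R_{S}=1$, so that $H^{n}R_{S}^{n-2}=H^2$ and $3\,H^2R_{S}^2-1=2$, we find \begin{eqnarray} k^2\,\tilde{p}_{r}\big|_{r\to R_{S}}=-3\,H^2-2\,\alpha\,H^2=-H^2\left(3+2\,\alpha\right)\ , \end{eqnarray} which is precisely the combination $-(3+2\,\alpha)$ that appears in the second-fundamental-form condition at the surface.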
On the other hand, by using the effective radial pressure~\eqref{GSefecprera}, the continuity of the second fundamental form in \eqref{sfgenericf} yields \begin{eqnarray} \label{sffgrav} -\left(3+2\,\alpha\right)=\frac{\beta\,g_{R}^{*}}{(R-2\,{\cal M})} \ . \end{eqnarray} The expressions in \eqref{ffgrav00}, \eqref{nodef} and \eqref{sffgrav} are the necessary and sufficient conditions for the matching of the interior metric \eqref{metric}, explicitly displayed in \eqref{grav00} and \eqref{grav11d}, to a deformed exterior Schwarzschild metric in \eqref{MetricSds}. We remark that conditions~(\ref{ffgrav00}) and (\ref{nodef}) do not necessarily imply a singular or null right hand side in Eq.~\eqref{sffgrav}. We notice that for a Schwarzschild exterior, namely $g^{*}(r)=0$, the expression~\eqref{sffgrav} yields $\alpha\,=-3/2$, which is incompatible with the regularity condition~\eqref{alphacond}. Therefore, we conclude that the MGD gravastar solution \eqref{grav00} and \eqref{grav11d} cannot be matched to the Schwarzschild exterior solution. \par % % % \begin{figure}[t] \center \includegraphics[scale=0.6]{fig2.pdf} \\ \caption{Behavior of the interior density $\tilde{\rho}>0$ and interior radial pressure $\tilde{p}_r<0$ for $n=2$, $n=3$ and $n\gg2$. The exterior $r>2\,{\cal M}$ is filled with a spherically symmetric source $\theta_{\mu\nu}$ which goes to zero quickly (black line). There are two stable configurations, namely, $n=2$ and the extreme case $n\gg2$ (Mazur-Mottola model).} \label{fig2} \end{figure} % \begin{figure}[t] \center \includegraphics[scale=0.6]{fig3.pdf} \\ \caption{Interior and exterior radial metric component for the MGD-gravastar, as a function of $r/{\cal M}$.
In contrast to the ``cusp-like'' behaviour in figure 1, note here the smooth continuity of the metric component at the stellar surface.} \label{fig3} \end{figure} % \par Next, we will solve the system~\eqref{ec1de}-\eqref{ec3de} to find an exterior solution $\{\theta^{+}_{\mu\nu},\,g^{*}\}$ compatible with the MGD gravastar in \eqref{grav00} and \eqref{grav11d}. First of all, we notice that the system \eqref{ec1de}-\eqref{ec3de} has four unknown functions. Hence, we need to provide additional information. We have two alternatives: either an equation of state associated with the source $\theta^{+}_{\mu\nu}$ or some physically motivated restriction on $g^{*}$. Among all possibilities, we choose an exterior source $\theta^{+}_{\mu\nu}$ associated with a conformal symmetry, hence \begin{eqnarray} \label{tra} \theta^{+\,\mu}_{\,\mu}=0\ . \end{eqnarray} We will see that the traceless condition~\eqref{tra} yields a conformally deformed exterior compatible with the interior solution displayed in \eqref{grav00} and \eqref{grav11d}. By using \eqref{ec1de}-\eqref{ec3de} in the condition \eqref{tra} we find that the radial deformation must satisfy the following differential equation \begin{eqnarray} \label{fconf} r\,\left(6\,{\cal M}^2-7\,{\cal M}\,r+2\,r^2\right)\,g^{*'}+2\left(r^2-4\,{\cal M}\,r+3\,{\cal M}^2\right)\,g^{*} = 0 \ , \end{eqnarray} whose general solution is given by \begin{eqnarray} \label{gconf} g^{*}(r) = \frac{1-2\,{\cal M}/r}{2\,r-{3\,{\cal M}}} \,\ell_{\rm c} \ , \end{eqnarray} with $\ell_{\rm c}$ a constant with units of length. Thus the conformally deformed Schwarzschild exterior becomes~\cite{Casadio:2017sze} \begin{eqnarray} \label{confsol} e^{-\lambda} = \left(1-\frac{2\,{\cal M}}{r}\right) \left(1+\frac{\ell}{2\,r-{3\,{\cal M}}}\right) \ , \end{eqnarray} where $\ell=\beta\,\ell_{\rm c}$, and its behaviour for $r\gg {\cal M}$ is given by \begin{eqnarray} e^{-\lambda} \simeq 1 - \frac{4\,{\cal M}-\ell}{2\,r} \ .
\end{eqnarray} A solution similar to that in \eqref{confsol} was found in the context of the extra-dimensional brane-world \cite{Germani:2001du} and subsequently analyzed in detail in the context of MGD-black holes \cite{Ovalle:2017wqi}. By using \eqref{ec1de}-\eqref{ec3de} we compute the exterior physical variables, namely, the effective density \begin{eqnarray} \tilde{\rho}^+ = \beta\,(\theta_0^{\ 0})^+ = -\frac{\ell\,{\cal M}}{k^2\,(2\,r-3\,{\cal M})^2\,r^2} \ , \label{efecdenC} \end{eqnarray} the effective radial pressure \begin{eqnarray} \tilde{p}^{+}_{r} = -\beta\,(\theta_1^{\ 1})^+ = \frac{\ell}{k^2\,(2\,r-3\,{\cal M})\,r^2} \ , \label{efecpreraC} \end{eqnarray} and the effective tangential pressure \begin{eqnarray} \tilde{p}^{+}_{t} = -\beta\,(\theta_2^{\ 2})^+ = \frac{\ell\,(r-{\cal M})}{k^2\,(2\,r-3\,{\cal M})^2\,r^2} \ . \label{efecptanC} \end{eqnarray} The exterior anisotropy is thus given by \begin{eqnarray} \Pi^{+} = \frac{\ell\,(3\,r-4\,{\cal M})}{k^2\,(2\,r-3\,{\cal M})^2\,r^2} \ . \end{eqnarray} \par Finally, we see that the exterior deformation~\eqref{gconf} satisfies the matching conditions in Eqs.~(\ref{ffgrav00}) and~(\ref{nodef}), while the second fundamental form~(\ref{sffgrav}) yields \begin{eqnarray} \label{l} \ell=-(3+2\,\alpha)\,{\cal M}\ . \end{eqnarray} Using the matching condition~\eqref{l} in the expression~\eqref{confsol}, we find that the deformed radial metric component is positive in the region \begin{eqnarray} \label{defpos} r\,\geq\,(3+\alpha)\,{\cal M}\ , \end{eqnarray} therefore the exterior space-time will be regular for $\alpha$ satisfying \begin{eqnarray} \label{alpha} \alpha=-1\ .
\end{eqnarray} Indeed, regularity of the whole exterior $r\geq\,R_S=2\,{\cal M}$ requires $(3+\alpha)\,{\cal M}\leq\,2\,{\cal M}$, i.e.~$\alpha\leq\,-1$, which combined with the condition~\eqref{alphacond} leaves $\alpha=-1$ as the only possibility. We conclude that the conformally deformed Schwarzschild exterior~\eqref{confsol} is consistent with the interior ultracompact configuration in \eqref{grav00} and \eqref{grav11d} for $\{\alpha=-1,\,\ell=-\,{\cal M}\}$ and any value of $n\geq\,2$ (notice that the left-hand side in the matching condition~\eqref{sffgrav} is independent of $n$). This is very significant since it indicates that the complete family of interior solutions, characterized by the parameter $n$, can be matched with the exterior solution. However, as we see in figure \ref{fig2}, only the case $n=2$ and the extreme case $n\gg2$ (Mazur-Mottola model) are stable, namely, the density profile has no maximum (or minimum) in $0\le\,r\le\,2\,{\cal M}$. Another point which calls our attention is the continuity of the effective density $\tilde{\rho}$ through $R_S$. The continuity of the effective radial pressure $\tilde{p}_r$ is a direct consequence of the second fundamental form, but in principle there is no reason for the density to be continuous through $R_S$. We can explain this by examining the derivative of the metric function $e^{-\lambda}$ near the stellar surface. Using the interior metric function~\eqref{grav11d}, we have \begin{eqnarray} \label{dg11i} -\lambda'\,e^{-\lambda}\bigg\vert_{R_S^{-}} = -\frac{2}{R_S}\left(1+\alpha\right)\ , \end{eqnarray} whereas the exterior metric function~\eqref{confsol} yields \begin{eqnarray} \label{dg11e} -\lambda'\,e^{-\lambda}\bigg\vert_{R_S^{+}} = \frac{1}{R_S}\left(1+\frac{\ell}{\cal M}\right)\ . \end{eqnarray} Using the matching condition~\eqref{l} in Eq.~\eqref{dg11e}, we see that both expressions in~\eqref{dg11i} and~\eqref{dg11e} are equal. Hence the metric function $e^{-\lambda}$ is smoothly continuous through the stellar surface, as we can see in Figure~\ref{fig3}. Also the condition~\eqref{alpha} states that the function $e^{-\lambda}$ is always positive.
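For completeness, the equality of \eqref{dg11i} and \eqref{dg11e} can be displayed explicitly: inserting the matching condition~\eqref{l} into~\eqref{dg11e} gives
\begin{eqnarray*}
-\lambda'\,e^{-\lambda}\bigg\vert_{R_S^{+}}
=\frac{1}{R_S}\left(1-\frac{(3+2\,\alpha)\,{\cal M}}{\cal M}\right)
=-\frac{2}{R_S}\left(1+\alpha\right)\ ,
\end{eqnarray*}
which is precisely the interior value~\eqref{dg11i}.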
Finally, we see that the continuity of $(e^{-\lambda})'$ yields a continuous effective density $\tilde{\rho}$, as we can see through the field equation~\eqref{ec1}. \par \begin{figure}[t] \center \includegraphics[scale=0.6]{fig4.pdf} \\ \caption{Case $n=2$. Behavior of the density $\tilde{\rho}$ (blue line), radial pressure $\tilde{p}_r$ (red line) and tangential pressure $\tilde{p}_t$ (green line). The discontinuity in the tangential pressure occurs at $r=R_{S}=2\,{\cal M}$.} \label{figure4} \end{figure} Additionally, it is worth analysing the smoothness of the radial pressure across the surface of the configuration. The expressions for the interior and exterior radial pressure in~\eqref{GSefecprera} and~\eqref{efecpreraC}, respectively, yield \begin{eqnarray} \label{dpi} &&\frac{d\,{\tilde p}_r}{d\,r}\bigg\vert _{R_S^{-}} = -\frac{2\,\alpha}{k^2 R_S^3}\,(n+1)\ , \\ \label{dpe} &&\frac{d\,{\tilde p}_r}{d\,r}\bigg\vert_{R_S^{+}} = \frac{6}{k^2\,R^3_S}\ , \end{eqnarray} where in \eqref{dpe} we have used the matched values $\{\alpha=-1,\,\ell=-\,{\cal M}\}$. We see that precisely the case $n=2$, together with the regularity condition~\eqref{alpha}, yields a radial pressure which is smoothly continuous through the stellar surface. This remarkable feature suggests that the stellar system extends beyond its defined surface $r=R_{S}$. However, the discontinuity in the tangential pressure at $r=R_{S}$, as we can see in figure \ref{figure4}, clearly establishes the surface of the self-gravitating distribution. This condition is reminiscent of the boundary layer of anisotropic stresses which was introduced for the ultracompact Schwarzschild star, discussed in section \ref{s3}. \par We conclude this section by pointing out that the dominant energy condition \begin{eqnarray} \tilde{\rho}\,\geq\,\vert\tilde{p}_r\vert\ ;\,\,\,\,\,\tilde{\rho}\,\geq\,\vert\tilde{p}_t\vert\ , \end{eqnarray} is satisfied inside the ultracompact configuration.
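For the physically relevant case $n=2$ with $\alpha=-1$, the dominant energy condition can be checked explicitly from \eqref{GSefecden}-\eqref{GSefecptan}: writing $x=H^2r^2\leq\,1$, one finds
\begin{eqnarray*}
k^2\,\tilde{\rho}=H^2\left(6-5\,x\right)\ ,\qquad
k^2\,\tilde{p}_r=H^2\left(3\,x-4\right)\ ,\qquad
k^2\,\tilde{p}_t=H^2\left(5\,x-4\right)\ ,
\end{eqnarray*}
so that $k^2\left(\tilde{\rho}-\vert\tilde{p}_r\vert\right)=2\,H^2(1-x)\geq0$, while $k^2\left(\tilde{\rho}-\vert\tilde{p}_t\vert\right)$ equals $2\,H^2>0$ in the region where $\tilde{p}_t<0$ (i.e.~$x<4/5$) and $10\,H^2(1-x)\geq0$ where $\tilde{p}_t\geq0$.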
\section{Conclusions} \label{con} \setcounter{equation}{0} By using the MGD-decoupling approach, we propose an anisotropic and non-uniform version of the gravastar model by Mazur and Mottola, given by the exact and analytical expressions \eqref{grav00}, \eqref{grav11d} and \eqref{GSefecden}-\eqref{GSefecptan}. This new system represents an ultracompact configuration of radius $R_S=2\,{\cal M}$ which satisfies some of the requirements to describe a stable stellar model, namely, it is regular at the origin, its mass and radius are well defined, its density is positive everywhere and decreases monotonically from the centre outwards, and its pressure is non-uniform and monotonic. Additionally, this new solution satisfies the dominant energy condition. These features show that the anisotropic effects produce a more realistic stellar structure, but, above all, they indicate the possibility of building ultracompact configurations of radius $R_S=2\,{\cal M}$, minimising their exotic characteristics. \par A crucial point in the construction of the new solution is to preserve the null-surface condition $g_{tt}=g^{-1}_{rr}=0$ at $R_S=2\,{\cal M}$, which is a fundamental property of the ultracompact Schwarzschild stars, or gravastars. This yields the metric component~\eqref{grav11d} characterized by a parameter $n\ge\,2$, where $n\gg2$ represents the Mazur-Mottola model. On the other hand, the exterior space-time is represented by a deformed Schwarzschild solution~\eqref{MetricSds}, whose deformation $g^{*}$ is produced by a generic conserved energy-momentum tensor $\theta^{+}_{\mu\nu}$ filling the Schwarzschild vacuum $T^{+}_{\mu\nu}=0$. Hence the exterior is a ``tensor-vacuum'' $\{T^{+}_{\mu\nu}=0,\,\theta^{+}_{\mu\nu}\,\neq\,0\}\,$\cite{Ovalle:2017fgl,Ovalle:2019qyi}.
\par In contrast to most of the gravastar models, in our case we avoid the introduction of thin shells of matter or any other mechanism beyond the simple and well-established Darmois-Israel matching conditions at the surface. In particular, the continuity of the metric implies that the deformation $g^{*}$ of the Schwarzschild exterior must vanish at the stellar surface for any ultracompact configuration of radius $R_S=2\,{\cal M}$. This result is independent of the nature of $\theta^{+}_{\mu\nu}$, as it is established in the condition~\eqref{nodef}. \par Among all the possibilities, we chose a conformally deformed vacuum defined by \eqref{tra}, which yields the deformed Schwarzschild exterior \eqref{confsol}. The main characteristic of this solution is that for $r\gg{\cal M}$: $\tilde{\rho}\sim\,r^{-4}$, $\tilde{p}_r\sim\tilde{p}_t\sim\,r^{-3}$. Hence the physical variables decay quickly. In particular, the continuity of the second fundamental form in \eqref{l} establishes that the conformally deformed Schwarzschild exterior \eqref{confsol} can be matched to the interior solution \eqref{grav00} and \eqref{grav11d} for any value of $n\geq\,2$, and that not only the radial pressure $\tilde{p}_r$ but also the density $\tilde{\rho}$ is always continuous across $R_S$. However, only the case $n=2$ and the extreme case $n\gg2$ (Mazur-Mottola model) are stable, as we see in figure \ref{fig2}. We also point out that the metric function $e^{-\lambda}$ is smoothly continuous through the stellar surface for any value of $n$, as we can see in figure \ref{fig3}. Therefore the effective density $\tilde{\rho}$ will always be continuous as a consequence of the field equation~\eqref{ec1}. Finally the case $n=2$, which is the physically relevant one in terms of stability, along with the regularity condition~\eqref{alpha}, yields a radial pressure smoothly continuous through the stellar surface, as figure \ref{figure4} shows.
\par Finally, we conclude by raising some natural questions regarding the analysis of ultracompact configurations under the MGD-decoupling, which deserve to be investigated, such as: the analysis of stability under spherical and non-spherical perturbations; any other physically motivated ``tensor-vacuum'' $\{T^{+}_{\mu\nu}=0,\,\theta^{+}_{\mu\nu}\,\neq\,0\}$ generating reasonable consequences; a complete deformation of the gravastar solution in Eqs.~\eqref{mm00} and~\eqref{mm11} by the extended deformation~\cite{Ovalle:2017fgl} in Eqs.~\eqref{gd1} and~\eqref{gd2}, and a possible extension for time-dependent configurations. \section{Acknowledgements} J.O.~and S.Z.~have been supported by the Albert Einstein Centre for Gravitation and Astrophysics financed by the Czech Science Agency Grant No.~14-37086G and by the Silesian University in Opava internal grant SGS/14/2016. C.~P. acknowledges the support of the Institute of Physics and Research Centre of Theoretical Physics and Astrophysics, at the Silesian University in Opava.
\section{Introduction} Given a group $G$, the function $f:G\to\mathbb{C}$ is called a polynomial of degree $\leq n$ if it satisfies Fréchet's mixed differences equation \begin{equation} \label{mix} \Delta_{h_{n+1}}\Delta_{h_n}\cdots \Delta_{h_{1}}f=0 \text{ for all } h_1,\cdots, h_{n+1}\in G, \end{equation} and it is called a semipolynomial of degree $\leq n$ if it satisfies Fréchet's unmixed differences equation \begin{equation} \label{unmix} \Delta_{h}^{n+1} f=0 \text{ for all } h\in G. \end{equation} Here, $\Delta_h:\mathbb{C}^G\to \mathbb{C}^G$ is the forward difference operator given by $\Delta_hf(x)=f(xh)-f(x)$, $\Delta_{h_{n+1}}\Delta_{h_n}\cdots \Delta_{h_{1}} = \Delta_{h_{n+1}}(\Delta_{h_n}\cdots \Delta_{h_{1}})$ denotes the composition of the operators $\Delta_{h_1}, \cdots, \Delta_{h_{n+1}}$ and $\Delta_{h}^{n+1} = \Delta_{h} (\Delta_{h}^{n})$. If $G$ is commutative, Djokovi\'{c}'s Theorem \cite{Dj} guarantees that equations \eqref{mix} and \eqref{unmix} are equivalent (see also \cite{L_Dj} for a different proof of this result) so that both concepts represent the very same class of functions. Finally, we say that $f:G\to\mathbb{C}$ is an exponential polynomial if the space $\tau(f)=\mathbf{span}\{\tau_h(f):h\in G\}$ is finite dimensional. Here $\tau_h:\mathbb{C}^G\to \mathbb{C}^G$ is the translation operator defined by $\tau_hf(x)=f(xh)$. Obviously, a function $f$ is an exponential polynomial if and only if it belongs to some finite-dimensional translation invariant vector subspace $V$ of $\mathbb{C}^G$. A main motivation for this definition is that, for $G=\mathbb{R}^d$, a continuous function $f:\mathbb{R}^d\to\mathbb{C}$ is an exponential polynomial if and only if it is a finite sum of functions of the form $p(x)e^{\langle x,\lambda\rangle}$ for certain ordinary polynomials $p(x)$ and vectors $\lambda\in\mathbb{C}^d$.
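For instance, on $G=(\mathbb{R},+)$ the function $f(x)=x^2$ satisfies
\begin{eqnarray*}
\Delta_{h_1}f(x)=2\,h_1x+h_1^2\ ,\qquad
\Delta_{h_2}\Delta_{h_1}f(x)=2\,h_1h_2\ ,\qquad
\Delta_{h_3}\Delta_{h_2}\Delta_{h_1}f(x)=0\ ,
\end{eqnarray*}
so that it is a polynomial of degree $\leq 2$ in the sense of \eqref{mix} (and, taking $h_1=h_2=h_3=h$, also a semipolynomial in the sense of \eqref{unmix}), while $f(x)=e^{\lambda x}$ is an exponential polynomial, since $\tau_h f=e^{\lambda h}f$, so that $\tau(f)=\mathbf{span}\{f\}$ is one dimensional.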
Moreover, if $f$ is a complex valued Schwartz distribution which belongs to a finite-dimensional translation invariant space of distributions, then $f$ is equal, in distributional sense, to a continuous complex valued exponential polynomial on $\mathbb{R}^d$ (see, e.g., \cite{anselone}, \cite{E}, \cite{Laird1}, \cite{Lo} or \cite{St} for the proofs of these claims). The following result, which was proved in \cite{AS_AM} (see also \cite{AA_AM}, \cite{AK_CJM}), generalizes a well-known theorem of Montel \cite{montel_1935}, \cite{montel}: \begin{theorem} \label{MT_unmixed} Let $G$ be a commutative (topological) group and $f:G\to \mathbb{C}$ be a (continuous) function. Assume that $E=\{h_1,\cdots,h_s\}$ (topologically) generates $G$ and let $\Delta_{h_k}^{n_k+1}f=0$, for $k=1,\cdots, s$. Then $f$ is a polynomial on $G$ of degree at most $n_1+\cdots+n_s$. Moreover, if $G=\mathbb{R}^d$, $\{h_1,\cdots,h_s\}$ topologically generate $\mathbb{R}^d$ and $f$ is a complex valued Schwartz distribution on $\mathbb{R}^d$ such that $\Delta_{h_k}^{n_k+1}f=0$, for $k=1,\cdots, s$, then $f$ is, in distributional sense, a continuous polynomial on $\mathbb{R}^d$ of degree at most $n_1+\cdots+n_s$. In particular, $f$ is equal almost everywhere to an ordinary polynomial with total degree at most $n_1+\cdots+n_s$. \end{theorem} In 1948 Montel \cite{montel_mixed1}, \cite{montel_mixed2} demonstrated a version of his theorem for mixed differences, for complex functions of one and two real variables. We generalize his result, with an easier proof, to complex functions defined on any finitely generated (topologically finitely generated) group (topological group) $G$, and to complex valued Schwartz distributions defined on $\mathbb{R}^d$.
In particular, our result applies to complex valued functions depending on any finite number of real variables and serves for the characterization of polynomials and exponential polynomials as solutions of certain finite sets of mixed-differences functional equations. \section{Main results} \begin{theorem} \label{main} Let $G$ be a commutative group and let $f:G\to \mathbb{C}$ be a function. If there exist exponential polynomials $P_k:G\to \mathbb{C}$, $k=1,\cdots, s$ and elements $\{h_{i,j}\}_{1\leq i\leq n;1\leq j\leq s}$ of $G$ such that, for every $(i_1,\cdots,i_s)\in\{1,2,\cdots,n\}^s$, the set $\{h_{i_1,1},\cdots,h_{i_s,s}\}$ is a generating system of $G$, and \begin{equation}\label{teomontelmixtafuerte} \Delta_{h_{1,k},h_{2,k},\cdots,h_{n,k}}f(x)=P_k(x) \text{, for all } k=1,2,\cdots,s, \text{ and all } x\in G, \end{equation} then $f$ is an exponential polynomial. \end{theorem} \begin{proof} We proceed by induction on $n$. If we assume that $\Delta_{h_{1,k}}f=P_k$, $k=1,\cdots, s$, then we have that $\Delta_{h_{1,k}}(V)\subseteq V$ for $k=1,\cdots,s$, with $V=\mathbf{span}\{f\}+\tau(P_1)+\tau(P_2)+\cdots +\tau(P_s)$. Hence $V$ is a finite dimensional translation invariant space, which implies that all its elements are exponential polynomials, and $f\in V$. This proves the result for $n=1$. Assume the result holds true for $n-1$ and let $f$ satisfy \eqref{teomontelmixtafuerte}. Take $i=(i_1,\cdots,i_s) \in\{1,2,\cdots,n\}^s$ and define the function $F_i=\Delta_{h_{i_1,1}, h_{i_2,2},\cdots,h_{i_s,s}}f$. Then, if we denote by $\widehat{h_{i_k,k}}$ the fact that the step $h_{i_k,k}$ is hidden (i.e.
it does not appear) in a formula, we have that \begin{eqnarray*} \Delta_{h_{1,k},h_{2,k},\cdots,\widehat{h_{i_k,k}}, \cdots, h_{n,k}}(F_i) &=& \Delta_{h_{1,k},h_{2,k},\cdots,\widehat{h_{i_k,k}}, \cdots, h_{n,k}}(\Delta_{h_{i_1,1}, h_{i_2,2},\cdots,h_{i_s,s}}f)\\ &=& \Delta_{h_{i_1,1}, h_{i_2,2}, \cdots,\widehat{h_{i_k,k}},\cdots,h_{i_s,s}} (\Delta_{h_{1,k},h_{2,k},\cdots,h_{i_k,k}, \cdots, h_{n,k}}f) \\ &=& \Delta_{h_{i_1,1}, h_{i_2,2}, \cdots,\widehat{h_{i_k,k}},\cdots,h_{i_s,s}} (P_k)=Q_k\\ \end{eqnarray*} with $Q_k$ being an exponential polynomial for $k=1,2,\cdots, s$. Hence the induction hypothesis tells us that $F_i$ is an exponential polynomial for every $i$. We want to reduce the size of the operator $\Delta_{h_{i_1,1}, h_{i_2,2},\cdots,h_{i_s,s}}$ used for the definition of $F_i$. So, consider, for each $k\in\{1,\cdots,s\}$, the new function $G_{i,k}= \Delta_{h_{i_1,1},\cdots,\widehat{h_{i_k,k}},\cdots, h_{i_s,s}}f$ (which results from deleting the step $h_{i_k,k}$ in the definition of $F_i$). Then we have that \begin{eqnarray*} \Delta_{h_{1,k},h_{2,k},\cdots,h_{n-1,k}}(G_{i,k}) &=& \Delta_{h_{1,k},h_{2,k},\cdots,h_{n-1,k}}(\Delta_{h_{i_1,1},\cdots,\widehat{h_{i_k,k}},\cdots, h_{i_s,s}}f)\\ &=& \Delta_{h_{1,k},\cdots, \widehat{h_{k,k}},\cdots,h_{n-1,k}}(\Delta_{h_{i_1,1},\cdots, h_{i_{k-1},k-1},h_{k,k}, h_{i_{k+1},k+1},\cdots,h_{i_s,s}}f) \end{eqnarray*} is an exponential polynomial, since $\Delta_{h_{i_1,1},\cdots, h_{i_{k-1},k-1},h_{k,k}, h_{i_{k+1},k+1},\cdots,h_{i_s,s}}f$ is an exponential polynomial.
Furthermore, for $j\neq k$ we have that \begin{eqnarray*} \Delta_{h_{1,j},h_{2,j},\cdots,\widehat{h_{i_j,j}},\cdots, h_{n,j}}(G_{i,k}) &=& \Delta_{h_{1,j},h_{2,j},\cdots,\widehat{h_{i_j,j}},\cdots, h_{n,j}}(\Delta_{h_{i_1,1},\cdots,\widehat{h_{i_k,k}},\cdots, h_{i_s,s}}f)\\ &=& \Delta_{h_{i_1,1},\cdots,\widehat{h_{i_j,j}},\cdots,\widehat{h_{i_k,k}},\cdots, h_{i_s,s}}(\Delta_{h_{1,j},h_{2,j},\cdots,h_{i_j-1,j},h_{i_j,j}, h_{i_j+1,j},\cdots, h_{n,j}}f)\\ &=& \Delta_{h_{i_1,1},\cdots,\widehat{h_{i_j,j}},\cdots,\widehat{h_{i_k,k}},\cdots, h_{i_s,s}}(P_j)\\ \end{eqnarray*} which is also an exponential polynomial. Hence the function $G_{i,k}$ satisfies a set of equations like \eqref{teomontelmixtafuerte} with $n-1$ steps, and the induction hypothesis implies that $G_{i,k}$ is also an exponential polynomial. In other words, every function of the form $\Delta_{h_{j_1,a_1}, h_{j_2,a_2}, \cdots, h_{j_{s-1},a_{s-1}}}f$ with $0\leq j_k\leq n$ and $\{a_1,\cdots,a_{s-1}\}$ any subset of cardinality $s-1$ of $\{1,\cdots,s\}$, is an exponential polynomial. Let us go one step further: take $t\in\{1,\cdots,s\}$, $t\neq k$, and consider the function $G_{i,k,t}=\Delta_{h_{i_1,1},\cdots,\widehat{h_{i_k,k}},\cdots,\widehat{h_{i_t,t}},\cdots,h_{i_s,s}}f$. Then \begin{eqnarray*} \Delta_{h_{1,k},h_{2,k},\cdots,h_{n-1,k}}(G_{i,k,t}) &=& \Delta_{h_{1,k},h_{2,k},\cdots,h_{n-1,k}}(\Delta_{h_{i_1,1},\cdots,\widehat{h_{i_k,k}},\cdots, \widehat{h_{i_t,t}},\cdots,h_{i_s,s}}f)\\ &=& \Delta_{h_{1,k},\cdots, \widehat{h_{k,k}},\cdots,h_{n-1,k}}(\Delta_{h_{i_1,1},\cdots, h_{i_{k-1},k-1},h_{k,k}, h_{i_{k+1},k+1},\cdots, \widehat{h_{i_t,t}},\cdots, h_{i_s,s}}f) \end{eqnarray*} is an exponential polynomial, since $\Delta_{h_{i_1,1},\cdots, h_{i_{k-1},k-1},h_{k,k}, h_{i_{k+1},k+1},\cdots, \widehat{h_{i_t,t}},\cdots, h_{i_s,s}}f$ is an exponential polynomial.
Furthermore, \begin{eqnarray*} \Delta_{h_{1,t},h_{2,t},\cdots,h_{n-1,t}}(G_{i,k,t}) &=& \Delta_{h_{1,t},h_{2,t},\cdots,h_{n-1,t}}(\Delta_{h_{i_1,1},\cdots,\widehat{h_{i_k,k}},\cdots, \widehat{h_{i_t,t}},\cdots,h_{i_s,s}}f)\\ &=& \Delta_{h_{1,t},\cdots, \widehat{h_{t,t}},\cdots,h_{n-1,t}}(\Delta_{h_{i_1,1},\cdots, \widehat{h_{i_k,k}},\cdots, h_{i_{t-1},t-1},h_{t,t}, h_{i_{t+1},t+1},\cdots, h_{i_s,s}}f) \end{eqnarray*} is an exponential polynomial, since $\Delta_{h_{i_1,1},\cdots, \widehat{h_{i_k,k}},\cdots, h_{i_{t-1},t-1},h_{t,t}, h_{i_{t+1},t+1},\cdots, h_{i_s,s}}f$ is an exponential polynomial. Finally, for $j\in\{1,\cdots,s\}\setminus \{k,t\}$ we have that \begin{eqnarray*} &\ & \Delta_{h_{1,j},h_{2,j},\cdots,\widehat{h_{i_j,j}},\cdots, h_{n,j}}(G_{i,k,t}) = \Delta_{h_{1,j},h_{2,j},\cdots,\widehat{h_{i_j,j}},\cdots, h_{n,j}}(\Delta_{h_{i_1,1},\cdots,\widehat{h_{i_k,k}},\cdots,\widehat{h_{i_t,t}},\cdots,h_{i_s,s}}f)\\ &\ & = \Delta_{h_{i_1,1},\cdots,\widehat{h_{i_j,j}},\cdots,\widehat{h_{i_k,k}},\cdots, \widehat{h_{i_t,t}},\cdots, h_{i_s,s}}(\Delta_{h_{1,j},h_{2,j},\cdots,h_{i_j-1,j},h_{i_j,j}, h_{i_j+1,j},\cdots, h_{n,j}}f)\\ &\ & = \Delta_{h_{i_1,1},\cdots,\widehat{h_{i_j,j}},\cdots,\widehat{h_{i_k,k}},\cdots, \widehat{h_{i_t,t}},\cdots, h_{i_s,s}}(P_j)\\ \end{eqnarray*} which is also an exponential polynomial. Hence the function $G_{i,k,t}$ satisfies a set of equations like \eqref{teomontelmixtafuerte} with $n-1$ steps, and the induction hypothesis implies that $G_{i,k,t}$ is also an exponential polynomial. Thus, every function of the form $\Delta_{h_{j_1,a_1}, h_{j_2,a_2}, \cdots, h_{j_{s-2},a_{s-2}}}f$ with $0\leq j_k\leq n$ and $\{a_1,\cdots,a_{s-2}\}$ any subset of cardinality $s-2$ of $\{1,\cdots,s\}$, is an exponential polynomial.
We can repeat the argument to delete another step in the difference operators used for the definition of each one of the functions $G_{i,k,t}$ above, and maintain the property that the new functions are still exponential polynomials. Iterating the argument as many times as necessary, we arrive at the conclusion that all functions $\Delta_{h_{i,j}}f$ are exponential polynomials. Then we apply the case $n=1$ to conclude that $f$ is an exponential polynomial. This ends the proof. \end{proof} The following are easy corollaries of Theorem \ref{main}: \begin{corollary} \label{co1} Let $G$ be a topological commutative group and let $f:G\to \mathbb{C}$ be a continuous function. If there exist exponential polynomials $P_k:G\to \mathbb{C}$, $k=1,\cdots, s$ and elements $\{h_{i,j}\}_{1\leq i\leq n;1\leq j\leq s}$ of $G$ such that, for every $(i_1,\cdots,i_s)\in\{1,2,\cdots,n\}^s$, the set $\{h_{i_1,1},\cdots,h_{i_s,s}\}$ topologically generates $G$, and $f$ satisfies \eqref{teomontelmixtafuerte}, then $f$ is an exponential polynomial. \end{corollary} \begin{corollary} \label{co2} If $\{h_{i_1,1},\cdots,h_{i_s,s}\}$ topologically generates $\mathbb{R}^d$ for every $(i_1,\cdots,i_s)\in\{1,\cdots,n\}^s$, and $f\in\mathcal{D}(\mathbb{R}^d)'$ is a complex valued Schwartz distribution on $\mathbb{R}^d$ which satisfies the equations \eqref{teomontelmixtafuerte} for certain continuous exponential polynomials $P_k$, then $f$ is, in distributional sense, a continuous exponential polynomial. In particular, there exists a continuous exponential polynomial $p$ such that $f=p$ almost everywhere.
\end{corollary} \begin{proof} It is enough to follow the very same steps of the proof of Theorem \ref{main}, just taking into account Anselone-Korevaar's theorem and that the operators $\tau_h$ and $\Delta_h$, which are defined for Schwartz distributions $f\in \mathcal{D}(\mathbb{R}^d)'$ by the expressions $\langle \tau_h(f),\varphi\rangle = \langle f,\tau_{-h}(\varphi)\rangle$ and $\langle \Delta_h(f),\varphi\rangle = \langle f,\Delta_{-h}(\varphi)\rangle$, respectively, inherit all properties from their corresponding versions, originally defined for ordinary functions. \end{proof} \begin{remark} If we impose $P_k=0$ for all $k$ in Theorem \ref{main} or Corollaries \ref{co1}, \ref{co2}, then the function $f$ will be a polynomial. Thus, these results generalize Theorem \ref{MT_unmixed} to the mixed differences case. \end{remark} \begin{remark} \label{notita} The hypothesis that, for every $(i_1,\cdots,i_s)\in\{1,2,\cdots,n\}^s$, $\{h_{i_1,1},\cdots,h_{i_s,s}\}$ either generates or (in the case that $G$ is a topological group) topologically generates $G$, is very natural. For example, for $G=\mathbb{R}^d$, if there exists $(i_1,\cdots,i_s)\in\{1,2,\cdots,n\}^s$ such that $h_{i_1,1}\mathbb{Z}+h_{i_2,2}\mathbb{Z}+\cdots+h_{i_s,s}\mathbb{Z}$ is not dense in $\mathbb{R}^d$, then there exists a non-differentiable continuous function $f:\mathbb{R}^d\to\mathbb{R}$ such that $\Delta_{h_{i_k,k}}f=0$ for $k=1,\cdots, s$ and, hence, $f$ solves the functional equations \eqref{teomontelmixtafuerte} with $P_k=0$ for all $k$ (see \cite{A_An-Kor-Th} for a proof of this claim) but is neither a polynomial nor an exponential polynomial, since every continuous polynomial on $\mathbb{R}^d$ is an ordinary polynomial in $d$ variables \cite{frechet} and, hence, is a differentiable function, and continuous exponential polynomials on $\mathbb{R}^d$ are also differentiable functions.
The merit of Theorem \ref{main} and Corollaries \ref{co1}, \ref{co2} is, thus, to prove that their hypotheses are sufficient to guarantee that $f$ is an exponential polynomial. \end{remark}
\section{Introduction} An important idea, which has attracted much interest in recent years, is that of linear response. The basic principle is that in many cases a first order change in a system leads to a corresponding first order change in its equilibrium state. There has been a wealth of work on linear response theory for dynamical systems since the pioneering work of Ruelle, who developed a formula for the first derivative of the physical or SRB measure for hyperbolic systems \cite{R}. In subsequent works the approach was simplified, applied to some systems outside the uniformly hyperbolic case, and extended to other kinds of more or less chaotic or hyperbolic systems (see \cite{Ba2} for a detailed overview, and \cite{Ba3} for very recent results on intermittent systems). Many problems involving physical and social systems are modelled by chaotic dynamics and therefore many mathematical ideas have been developed and implemented in the context of the prediction and description of the statistical behaviour of a chaotic system. Linear response itself is used to understand the behaviour of complex systems out of equilibrium (see e.g. \cite{Lu} for an application to models of the climate evolution). Besides merely trying to understand the behaviour of a given system, it is also important to attempt to understand the extent to which it can be controlled. For example, {\em can one determine which small changes in the system will change the statistical behaviour in a prescribed direction? Can these changes be chosen optimally, in an appropriate sense?} Understanding the general behaviour in light of these questions will be of help in designing efficient strategies of intervention for the control and management of complex, chaotic systems. We shall initiate the investigation of these specific questions in the context of particularly simple models of chaotic systems. In particular, we will concentrate on expanding maps of the circle.
We shall consider the general mathematical structure of the problem and show that in the case of circle expanding maps, it has many solutions amongst which we can choose an optimal one, in a suitable sense. As an illustration, in this article we will also present a complete solution to the problem in the particular case of the linear doubling map, finding a solution which minimizes the $L^2$ norm (and other natural norms on the space). To formulate the problem more precisely, we introduce some notation. Let $T_0: X \to X$ be a $C^4$ expanding orientation preserving map of the circle $X = \mathbb R/\mathbb Z$ of degree $d \geq 2$. Let $T_\delta: X \to X$, where $\delta \in (-\eta, \eta)$, be a family of $C^3$ expanding maps. Let us suppose that the dependence of the family on $\delta$ is differentiable at $0$, hence can be written $$ T_\delta(x) = T_0(x) + \delta \epsilon(x) + o_{C^3}(\delta) \hbox{ for } x \in X, $$ where $\epsilon \in C^3(X, \mathbb R)$, and $o_{C^3}(\delta)$ denotes a term whose $C^3$ norm tends to zero faster than $\delta$, as $\delta \to 0$.\footnote{More precisely we say that $T_\delta$ is a differentiable family of $C^3$ expanding maps if there exists $\epsilon \in C^3(X, \mathbb R)$ such that $\|(T_\delta - T_0) /\delta -\epsilon \|_{C^3} \to 0$ as $\delta \to 0$, where $$ \|f\|_{C^3} = \sup_{x \in X} |f(x)| + \sup_{x \in X} |f'(x)| + \sup_{x \in X} |f''(x)| + \sup_{x \in X} |f'''(x)| $$ is the usual norm on $C^3$ functions.} \begin{example} A simple example might be where $d\geq 2$, $T_0(x) = dx \hbox{ (mod $1$)}$ and $T_\delta(x) = dx + \delta \sin(2\pi x) \hbox{ (mod $1$)}$, in which case trivially $\epsilon(x) = \sin(2\pi x)$. \end{example} To each $T_\delta$ we can associate a unique invariant $C^2$ density and it is known that these vary in a differentiable way $\rho_\delta = \rho_0 + \delta \rho^{(1)} + \cdots$.
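In the example above the unperturbed density is explicit: for $T_0(x)=dx$ (mod $1$) the preimage of any arc $[a,b]\subset X$ consists of $d$ arcs of total length $b-a$, so Lebesgue measure is invariant and
$$
\rho_0\equiv 1\ ,\qquad \rho_\delta = 1 + \delta \rho^{(1)} + o_{C^1}(\delta)\ .
$$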
It is a folklore result that the density of the unique physical invariant measure of such a family of maps varies in a differentiable way. For a more precise statement we have the following. \begin{lemma}\label{pert} Let us assume that $T_\delta$ is a $C^1$ family of $C^3$ expanding maps. The density $\rho_\delta \in C^1(X,\mathbb R)$ has a continuously differentiable dependence on $\delta$. \end{lemma} Another reformulation of the response problem would be to consider the integrals of a fixed smooth function with respect to the varying natural measure (see e.g. \cite{Ba2}). A more general statement of Lemma \ref{pert} appears as Theorem 20 in \cite{PJ}. The proof is omitted, but it is a standard approach using the Implicit Function Theorem for Banach manifolds. With this notation we can now formulate the basic problem that we want to address: \begin{problem} \label{pr1}{\it What first order change $\epsilon(x)$ in $T_0$ will result in a given first order change $\rho^{(1)}$ in the density?} \end{problem} Before addressing this question, we recall some general results about linear response of systems under small perturbations. For hyperbolic systems (including Anosov diffeomorphisms and flows) we have the luxury that this class is open, which ensures that under small perturbations the system remains hyperbolic. Moreover, such systems are structurally stable, which makes it easier to apply ideas from thermodynamic formalism \cite{Contreras}. In this context Ruelle formulated an explicit expression for the derivative of the SRB measure \cite{R} in terms of the perturbation of the system, which was subsequently reproved using a different method in \cite{L2}, \cite{GL} and \cite{Butterley-Liverani}.
Linear response results have by now been proved for several classes of dynamical systems, even outside of these uniformly expanding, hyperbolic and structurally stable settings; see for example \cite{DD}, \cite{BS}, \cite{Ba3}, \cite{BaS}, \cite{Ko} and the survey \cite{Ba2}. \noindent {\bf Acknowledgements.} The authors thank The Leverhulme Trust for support through Network Grant IN-2014-021. \section{Linear response for expanding maps} A standard tool used in characterising the invariant densities for expanding maps is the transfer operator. In this section we recall the definition of the transfer operator and present a theorem describing the linear response for operators satisfying certain assumptions. This tool will be directly applied to get the linear response formula for expanding maps. \begin{definition} Let $\mathcal L_\delta: L^1(X, \mathbb R) \to L^1(X, \mathbb R)$ be the transfer operator associated to an expanding map $T_\delta$, defined by $$ \mathcal L_\delta w(x) = \sum_{i=1}^d \frac{w(y_i^\delta)}{T_\delta'(y_i^\delta)} $$ where the summation is over the $d$ pre-images $\{y_i^\delta\}_{i=1}^d := T_\delta^{-1}(x)$ of $x\in X$. \end{definition} The following is a classical result, but can also be taken as a definition of the invariant density. \begin{definition}\label{density} An invariant density $\rho_\delta $ for $T_\delta$ is a fixed point for the operator $\mathcal L_\delta$ acting on $L^1( X,\mathbb R)$ (i.e., $\mathcal L_\delta \rho_\delta = \rho_\delta $). \end{definition} Now let us take a more general point of view and describe a general result on linear response in terms of fixed points of operators. Let us consider the action of the operators $\mathcal{L}_{\delta }$ on different spaces. Let $B_{w},B_{s},B_{ss}$ denote abstract Banach spaces of Borel measures on $X$ equipped with norms $ ||~||_{w},||~||_{s},||~||_{ss}$ respectively, such that $||~||_{w}\leq ||~||_{s}\leq ||~||_{ss}$.
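As a concrete numerical illustration of the definition above (our addition, not part of the original argument), the transfer operator of the doubling map $T(y) = 2y \hbox{ (mod $1$)}$ can be evaluated directly on a grid. The Python sketch below checks that the constant density is fixed, and how Fourier modes are mapped, facts used again in the final section.

```python
import numpy as np

def transfer_doubling(w, x):
    """Transfer operator L w(x) = sum_{T(y)=x} w(y)/T'(y) for the
    doubling map T(y) = 2y (mod 1): preimages x/2, (x+1)/2 and T' = 2."""
    return 0.5 * (w(x / 2) + w((x + 1) / 2))

x = np.linspace(0.0, 1.0, 200, endpoint=False)

# The constant density is invariant: L 1 = 1.
assert np.allclose(transfer_doubling(lambda t: np.ones_like(t), x), 1.0)

# Odd Fourier modes are annihilated, even modes are halved in frequency:
# L sin(2 pi x) = 0 and L cos(4 pi x) = cos(2 pi x).
assert np.allclose(transfer_doubling(lambda t: np.sin(2 * np.pi * t), x),
                   0.0, atol=1e-12)
assert np.allclose(transfer_doubling(lambda t: np.cos(4 * np.pi * t), x),
                   np.cos(2 * np.pi * x), atol=1e-12)
```

The last two checks are exactly the statement $\mathcal L_0(e^{2\pi i n x}) = e^{2\pi i (n/2) x}$ for $n$ even and $0$ for $n$ odd, which drives the Fourier computation in the doubling map example.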
We suppose that $\mathcal{L}_{\delta },$ $\delta \geq 0,$ has a unique fixed point $h_{\delta }\in B_{ss}.$ Let $\mathcal{L}:=\mathcal{L}_{0}$ be the unperturbed operator and $h\in B_{ss}$ be its invariant measure. Let us consider the space of zero average measures $$ V_{s}^{0}=\{v\in B_{s}: v(X)=0\}.$$ Following an approach from \cite{L2}, we present a general setting in which differentiable dependence and {\em a formula for the derivative of the physical measure} of a family of positive operators $\mathcal{L}_{\delta }$ can be obtained (see \cite{BGN} for a proof of the statement in this form). We need to consider several norms for our operators; let us denote $$ \begin{aligned} ||\mathcal{L}_{\delta }^k ||_{B_{w}\rightarrow B_{w}} &= \sup_{\| h \|_{w} \leq 1 } \|\mathcal{L}_{\delta }^k h\|_{B_w}, \cr ||\mathcal{L}_{\delta }^k ||_{B_{s}\rightarrow B_{w}} &= \sup_{\| h \|_{s} \leq 1 } \|\mathcal{L}_{\delta }^k h\|_{B_w} \hbox{ and } \cr ||\mathcal{L}_{\delta }^k ||_{V_{s}^0\rightarrow B_{s}} &= \sup_{h \in V_s^0,\ \| h \|_{s} \leq 1 } \|\mathcal{L}_{\delta }^k h\|_{B_s}. \end{aligned} $$ The result we require is the following. \begin{thm} \label{perturbation} Suppose that the following assumptions hold: \begin{enumerate} \item The norms $||\mathcal{L}_{\delta }^{k}||_{B_{w}\rightarrow B_{w}}$ are uniformly bounded with respect to $k$ and $\delta \geq 0$. \item $\mathcal{L}_{\delta }$ is a perturbation of $\mathcal{L}$ in the following sense: there is a constant $C$ independent of $\delta$ such that \begin{equation} ||\mathcal{L}_{\delta }-\mathcal{L}||_{B_{s}\rightarrow B_{w}}\leq C\delta . \label{approx} \end{equation} \item The operators $\mathcal{L}_{\delta }$ have a uniform rate of contraction on $V_{s}^{0}$: there are $C_{1}>0$, $0<\rho <1$, such that $\forall \delta \in [0,1]$ \begin{equation} ||\mathcal{L}_{\delta }^{n}||_{V_{s}^{0}\rightarrow B_{s}}\leq C_{1}\rho ^{n}.
\label{unicontr} \end{equation} \item There is an operator $\mathbb {L}:B_{ss}\rightarrow B_{s}$ such that $\forall f\in B_{ss}$ \begin{equation} \lim_{\delta \rightarrow 0}||\delta ^{-1}(\mathcal{L}-\mathcal{L}% _{\delta })f-\mathbb{L}f||_{s}=0. \label{Lhat} \end{equation}% Let \begin{equation}\label{LR} \hat{h}=(Id-\mathcal{L})^{-1}\mathbb {L}h. \end{equation}% Then% \begin{equation*} \lim_{\delta\rightarrow 0}||\delta ^{-1}(h-h_{\delta })-\hat{h}% ||_{B_w}=0; \end{equation*}% i.e. $\hat{h}$ represents the derivative of $h_{\delta }$ for small increments of $\delta $. \end{enumerate} \end{thm} Let us see how to apply Theorem \ref{perturbation} to our setting. The assumptions of the above theorem are valid for the perturbations of circle expanding maps we are considering with the choices of spaces: \begin{enumerate} \item $B_{w}=L^1(X)$ with the norm $\|f \|_{L^1} = \int_X |f(x)| dx$; \item $B_{s}={W^{1,1}}$ with the norm $\|f \|_{W^{1,1}} = \int_X |f(x)| dx + \int_X |f'(x)| dx$; \item $B_{ss}=C^2(X)$ with the norm $\|f \|_{C^2(X)} = \|f\|_\infty + \|f'\|_\infty + \|f''\|_\infty$ where $\|f\|_\infty = \sup_{x\in X} |f(x)|$. \end{enumerate} In this context, item $1)$ of Theorem \ref{perturbation} is trivial on $L^1$ as the transfer operators are weak contractions. Items $2)$ and $3)$ of Theorem \ref{perturbation} are proved for example in \cite{G}, Section 6. The existence of the operator $\mathbb {L}$, and an explicit formula for it, will be proved in the next section (see Proposition \ref{mainprop}). From this follows the differentiability of the physical measure and the linear response formula (\ref{LR}) for our family of expanding maps. \section{The derivative operator for circle expanding maps} In this section we present a detailed description of the structure of the operator $\mathbb L $ in our case. \begin{proposition}\label{mainprop} Let $T_\delta $ be a family of expanding maps as considered before. Let $w \in C^3(X, \mathbb R)$. 
For each $x \in X$ we can write \begin{equation} \begin{aligned} \mathbb L w(x) &= \lim_{\delta \to 0} \left( \frac{\mathcal L_\delta w (x) - \mathcal L_0 w(x) }{\delta}\right)\cr &= -\mathcal L_0\left(\frac{w\epsilon'}{T_0'}\right)(x) - \mathcal L_0\left(\frac{ \epsilon w'}{T_0'}\right)(x) + \mathcal L_0 \left(\frac{\epsilon T_0''}{(T_0')^2} w \right)(x) \end{aligned} \end{equation} and the convergence is also in the $C^1$ topology. \end{proposition} Before presenting the proof of Proposition \ref{mainprop} we state a simple lemma. \begin{lemma}\label{tec} If $y_i^\delta \in T^{-1}_\delta(x)$ then we can expand $$y_i^\delta = y_i^0 + \delta \left( - \frac{\epsilon(y_i^0)}{T'_0 (y_i^0)}\right) + o_{C^2}(\delta).$$ \end{lemma} \begin{proof}[Proof of Lemma \ref{tec}] We denote by $\{y_i^\delta\}_{i=1}^d := T_\delta^{-1}(x)$ and $\{y_i^0\}_{i=1}^d := T_0^{-1}(x)$ the $d$ preimages under $T_\delta$ and $T_0$, respectively, of a point $x\in X$. Let us write $$y_i^\delta(x) = y^0_i(x) + \delta \epsilon_i(x) + F_i(\delta,x).$$ We will show that $F_i(\delta,x) = o_{C^2}(\delta)$. Substituting this into the identity $T_\delta (y_i^\delta(x) ) = x$ and using that $T_\delta(x) = T_0(x) + \delta \epsilon(x) + o_{C^3}(\delta)$, where we write $E(\delta, \cdot)$ for the $o_{C^3}(\delta)$ error term, we can expand \begin{equation} \label{long} \begin{aligned} x& = T_\delta( y_i^\delta(x) ) \cr &= T_0( y_i^\delta(x)) + \delta \epsilon(y_i^\delta(x)) + E(\delta, y_i^\delta(x) ) \cr &= T_0( y_i^0(x) + \delta \epsilon_i(x) + F_i(\delta,x)) \cr & \ \ \ +\delta\epsilon(y_i^0(x) + \delta \epsilon_i(x) + F_i(\delta,x)) + E(\delta, y_i^\delta(x) ).
\end{aligned} \end{equation} We can write the first term in the final line of (\ref{long}) as $$ \begin{aligned} &T_0( y_i^0(x) + \delta \epsilon_i (x) + F_i(\delta,x)) \cr &= T_0( y_i^0(x)) + T_0'( y_i^0(x))(\delta \epsilon_i (x) + F_i(\delta,x)) + o_{C^2}(\delta) \end{aligned} $$ and the second term in the last line as $$\delta\epsilon(y_i^0(x) + \delta \epsilon_i(x) + F_i(\delta,x)) = \delta\epsilon(y_i^0(x)) +\delta F_i(\delta,x)+ o_{C^2}(\delta)$$ and use that $T_0( y_i^0(x)) = x$ to cancel terms on either side of (\ref{long}) to get $$0 = T_0'( y_i^0(x))(\delta \epsilon_i(x) + F_i(\delta,x)) + \delta\epsilon(y_i^0(x)) +\delta F_i(\delta,x) + E(\delta, y_i^\delta(x) ) + o_{C^2}(\delta).$$ Thus we can identify the first order terms as $\delta T_0'( y_i^0(x)) \epsilon_i(x) + \delta \epsilon(y_i^0(x))$, and then what is left is $$T_0'( y_i^0(x))F_i(\delta,x) +\delta F_i(\delta,x)= -E(\delta, y_i^\delta(x) ) + o_{C^2}(\delta),$$ from which the result follows. \end{proof} We now return to the proof of Proposition \ref{mainprop}. \begin{proof}[Proof of Proposition \ref{mainprop}] We again denote by $\{y_i^\delta\}_{i=1}^d := T_\delta^{-1}(x)$ and $\{y_i^0\}_{i=1}^d := T_0^{-1}(x)$ the $d$ preimages under $T_\delta$ and $T_0$, respectively, of a point $x\in X$. Furthermore, we assume that the indexing is chosen so that $y^\delta_i$ is a perturbation of $y^0_i$, for $1 \leq i \leq d$.
We can write \begin{equation}\label{eq1} \begin{aligned} &\frac{\mathcal L_\delta w (x) - \mathcal L_0 w(x) }{\delta}\cr &= \frac{1}{\delta} \left( \sum_{i=1}^d \frac{w(y_i^\delta)}{T_\delta'(y_i^\delta)} - \sum_{i=1}^d\frac{w(y_i^0)}{T_0'(y_i^0)} \right)\cr &=\underbrace{ \frac{1}{\delta} \left( \sum_{i=1}^d w(y_i^\delta) \left( \frac{1}{T_\delta'(y_i^\delta)} - \frac{1}{T_0'(y_i^\delta)}\right) \right)}_{=:(I)} + \underbrace{\frac{1}{\delta} \left( \sum_{i=1}^d\frac{w(y_i^\delta) - w(y_i^0) }{T_0'(y_i^\delta)} \right)}_{=:(II)}\cr &+ \underbrace{ \frac{1}{\delta} \left( \sum_{i=1}^dw(y_i^0)\left( \frac{1}{T_0'(y_i^\delta)}- \frac{1}{T_0'(y_i^0)}\right) \right)}_{=:(III)}.\cr \end{aligned} \end{equation} For the first term we first differentiate the expansion $T_\delta (x) = T_0(x) + \delta \epsilon(x) + o_{C^3}(\delta)$ in $x$ to get: $$ T_\delta' (x) = T_0'(x) + \delta \epsilon'(x) + o_{C^2}(\delta). $$ We can then write $$ \begin{aligned} (I) &= \frac{1}{\delta} \left( \sum_{i=1}^d w(y_i^\delta) \left( \frac{1}{T_\delta'(y_i^\delta)} - \frac{1}{T_0'(y_i^\delta)}\right) \right)\cr &= \frac{1}{\delta} \left( \sum_{i=1}^d \frac{w(y_i^\delta)}{T_\delta'(y_i^\delta)} \left( 1 - \frac{T_\delta'(y_i^\delta)}{T_0'(y_i^\delta)}\right) \right)\cr &= \frac{1}{\delta} \left( \sum_{i=1}^d \frac{w(y_i^\delta)}{T_\delta'(y_i^\delta)} \left( 1 - \left( \frac{T_0'(y_i^\delta) + \delta \epsilon'(y_i^\delta) + o_{C^2}(\delta)}{T_0'(y_i^\delta)}\right)\right) \right)\cr &= \left( -\sum_{i=1}^d \frac{w(y_i^\delta)\epsilon'(y_i^\delta)}{T_\delta'(y_i^\delta)T_0'(y_i^\delta)}\right) + o_{C^2}(1). 
\end{aligned} $$ Thus we have that $$ \begin{aligned} \lim_{\delta \to 0} \frac{1}{\delta} \left( \sum_{i=1}^d w(y_i^\delta) \left( \frac{1}{T_\delta'(y_i^\delta)} - \frac{1}{T_0'(y_i^\delta)}\right) \right) &= \lim_{\delta \to 0} \left( -\sum_{i=1}^d \frac{w(y_i^\delta)\epsilon'(y_i^\delta)}{T_0'(y_i^\delta)^2}\right) \cr &= -\mathcal L_0\left(\frac{w \epsilon'}{T'_0}\right) \end{aligned} $$ and the limit converges in $C^1$. For the second term of (\ref{eq1}) we can use Lemma \ref{tec} to write $$ \begin{aligned} w(y_i^\delta) &= w(y_i^0) + w'(y_i^0) \left(\frac{dy_i^\delta}{d\delta}|_{\delta=0} \right)\delta + o_{C^1}(\delta)\cr &= w(y_i^0) + w'(y_i^0) \left( - \frac{\epsilon(y_i^0)}{T'_{0}(y_i^0)}\right) \delta + o_{C^1}(\delta). \end{aligned} $$ Thus $$ \begin{aligned} (II) = \frac{1}{\delta} \sum_{i=1}^d\frac{w(y_i^\delta) - w(y_i^0) }{T_0'(y_i^\delta)} &= \frac{1}{\delta} \sum_{i=1}^d\frac{w'(y_i^0) }{T_0'(y_i^\delta)} \left( - \frac{\epsilon(y_i^0)}{T_0'(y_i^0)}\right) + o_{C^1}(1)\cr &= - \sum_{i=1}^d\frac{\epsilon(y_i^0)w'(y_i^0) }{T_0'(y_i^0)T_0'(y_i^\delta)} + o_{C^1}(1) \end{aligned} $$ and therefore, both pointwise and in the $C^1$ topology $$ \lim_{\delta \to 0} \frac{1}{\delta} \sum_{i=1}^d\frac{w(y_i^\delta) - w(y_i^0) }{T_0'(y_i^\delta)} = - \mathcal L_0\left(\frac{ \epsilon w'}{T'_0}\right)(x). $$ Finally, for the third term we can write $$ \begin{aligned} T_0'(y_i^\delta) & = T_0'(y_i^0) + T_0''(y_i^0)\left( \frac{dy^\delta_i}{d\delta}|_{\delta=0}\right) \delta + o_{C^1}(\delta)\cr & = T_0'(y_i^0) + T_0''(y_i^0)\left( - \frac{\epsilon(y_i^0)}{T'_{0} (y_i^0)}\right)\delta + o_{C^1}(\delta), \end{aligned} $$ again using the Lemma \ref{tec}. 
Therefore $$ \begin{aligned} (III) = &\frac{1}{\delta} \left( \sum_{i=1}^dw(y_i^0)\left( \frac{1}{T_0'(y_i^\delta)}- \frac{1}{T_0'(y_i^0) }\right) \right)\cr &= \frac{1}{\delta} \left( \sum_{i=1}^d w(y_i^0)\left( \frac{T_0'(y_i^0) - T_0'(y_i^\delta)}{T_0'(y_i^\delta) T_0'(y_i^0)}\right) \right)\cr &= \frac{1}{\delta} \left( \sum_{i=1}^dw(y_i^0)\left( \frac{-\left(T_0'(y_i^0) + T_0''(y_i^0)\left( - \frac{\epsilon(y_i^0)}{T'_0(y_i^0)}\right)\delta\right) + T_0'(y_i^0)}{T_0'(y_i^\delta) T_0'(y_i^0)}\right) \right) + o_{C^1}(1) \cr &=\frac{1}{\delta} \left( \sum_{i=1}^dw(y_i^0)\left( \frac{\epsilon(y_i^0) T_0''(y_i^0)}{T_0'(y_i^0)^2 T_0'(y_i^\delta)}\right)\ \right) + o_{C^1}(1)\cr \end{aligned} $$ and thus, finally, $$ \lim_{\delta \to 0} \frac{1}{\delta} \left( \sum_{i=1}^dw(y_i^0)\left( \frac{1}{T_0'(y_i^\delta)}- \frac{1}{T_0'(y_i^0)}\right) \right) = \mathcal L_0 \left(\frac{\epsilon T_0''}{(T_0')^2} w\right) (x) $$ in $C^1$. \end{proof} \section{The control problem} We can now use the preceding results to address Problem \ref{pr1}, finding first order perturbations to the expanding maps corresponding to a given first order perturbation in the density. Putting together the information given by Theorem \ref{perturbation} and Proposition \ref{mainprop} in the particular case that $w = \rho$ is the invariant density for $T_0: X \to X$ we arrive at an equation that allows us to address the question posed in Problem \ref{pr1}. More precisely, given the required direction of change $\rho^{(1)} \in C^1(X, \mathbb R)$ we want to find $\epsilon(x)$ such that the associated operator $\mathbb L$ satisfies $$ (I - \mathcal L_0) \rho^{(1)} = \mathbb L \rho(x).$$ Thus by Proposition \ref{mainprop}, given $\rho^{(1)}$ we need to solve for $\epsilon(x)$ such that \begin{equation}\label{prob} (I - \mathcal L_0) \rho^{(1)}(x) = \mathcal L_0\left(-\frac{\rho \epsilon'}{T_0'} - \frac{ \epsilon \rho '}{T_0'} + \frac{\epsilon T_0''}{(T_0')^2} \right)(x). 
\end{equation} The problem hence involves the solution of a differential equation for $\epsilon: X \to \mathbb R$ in the space of functions representing infinitesimal changes of the system. We remark that, given a function $f$, the equation $f=\mathcal L_0 g $ may have many solutions, as $T_0$ is not bijective. Thus in the solution of Problem \ref{pr1} there are two steps: \begin{enumerate} \item Firstly, we have to choose a solution $g$ to $(I - \mathcal L_0) \rho^{(1)} = \mathcal L_0\left(g \right)$. Typically there will be an infinite dimensional family of solutions; and \item Secondly, we can solve for $\epsilon$ such that $$g = -\epsilon' \left( \frac{\rho}{T_0'}\right) - \epsilon\left( \frac{ \rho '}{T_0'} \right) + \epsilon \left( \frac{ T_0''}{(T_0')^2} \right). $$ In particular, we have to solve a linear first order inhomogeneous differential equation. \end{enumerate} Finally, amongst the possible solutions $\epsilon(x)$ of this problem, it is natural to look for one which is optimal, in a suitable sense. In particular, we will search for a solution of minimal size with respect to some natural norm to be described later. We will prove that under suitable assumptions such an optimal solution exists. \subsection{Existence of a solution} Our first main conclusion about existence is the following. \begin{thm}\label{T1} Let $k \geq 4$. For a $C^k$ expanding map of the circle $T_0$, any $C^{k-2}$ first order perturbation $\rho^{(1)}$ in the density can be realised by a suitable first order $C^{k-1}$-perturbation $\epsilon$ in the transformation. Moreover, there is an infinite dimensional space of perturbations achieving this. \end{thm} Before giving the proof of Theorem \ref{T1} we state a lemma regarding the solutions of $f=\mathcal{L}_0 g$. In the setting of expanding maps of the circle this takes a simple form. Let $C^{k-1}(X, \mathbb R)$ be the Banach space of $C^{k-1}$ functions on the unit circle $X$, for $k \geq 1$.
Let $T: X \to X$ be a $C^k$ expanding map of the circle $X$. It is known that such maps have a unique $C^{k-1}$ invariant density. Let $h: X \to X$ be an orientation preserving $C^{k}$ diffeomorphism. We can define a new map $S: X \to X$ by conjugation $S = h\circ T \circ h^{-1}$. We can define transfer operators $\mathcal L_S: C^{k-1}(X) \to C^{k-1}(X)$ and $\mathcal L_T: C^{k-1}(X) \to C^{k-1}(X)$ by $$ \mathcal L_Tw(x) = \sum_{Ty = x} \frac{w(y)}{T'(y)} \hbox{ and } \mathcal L_Sw(x) = \sum_{Sy = x} \frac{w(y)}{S'(y)}. $$ \begin{lemma}\label{L1} For each orientation preserving $C^{k}$ diffeomorphism $h$ we can write $(\mathcal L_Sw)\circ h = \frac{1}{h'} \mathcal L_T \left((w\circ h) h'\right)$. \end{lemma} \begin{proof} We can differentiate $S \circ h = h \circ T$ and use the chain rule to write $$ (S'\circ h) h' = (h'\circ T) T'. $$ Let us write $y = h(y')$ and $x' = h^{-1}(x)$ then we have $$ \begin{aligned} \mathcal L_Sw(x) = \sum_{S(hy') = x} \frac{w(y)}{S' \circ h(y')} &= \sum_{T(y') = h^{-1}(x)} \frac{w(hy') h'(y')}{h'(Ty') T'(y')}\cr &= \sum_{T(y') = x'} \frac{(w\circ h)(y') h'(y')}{h'(Ty') T'(y')}\cr &= \frac{1}{h'(x')} \sum_{T(y') = x'} \frac{(w\circ h)(y') h'(y')}{T'(y')}.\cr \end{aligned} $$ This corresponds to the required identity. \end{proof} If we assume that $T$ has an invariant density $\rho$ then we know that $\mathcal L_T \rho = \rho \in C^{k-1}(X)$. By a suitable choice of coordinates we can assume that $S(0)=0=T(0)$ are corresponding fixed points. We can then define $h: X \to X$ by $h(x) = \int_0^x \rho(u) du$. Then $h'(x) = \rho(x)$ and by the Lemma \ref{L1} with $w $ taking the constant value $1$ we can write $$ (\mathcal L_S1)\circ h = \frac{1}{h'} \mathcal L_T (1 h') = \frac{1}{\rho} \mathcal L_T (\rho) =1. $$ We then conclude that $(\mathcal L_S1) =1$. In particular, the transformation $S: X \to X$ has density $1$ (i.e., $S$ preserves the Haar measure on $X$). 
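The change of variables in Lemma \ref{L1} can also be checked numerically. The following Python sketch is our addition; the doubling map $T(x) = 2x \hbox{ (mod $1$)}$ and the diffeomorphism $h(x) = x + a\sin(2\pi x)$ are illustrative choices, not taken from the text. It verifies the identity $(\mathcal L_S w)\circ h = \frac{1}{h'} \mathcal L_T \left((w\circ h) h'\right)$ on a grid of points, inverting $h$ by Newton's method.

```python
import numpy as np

a = 0.1                                  # 2*pi*a < 1, so h is an orientation preserving diffeo
h  = lambda x: x + a * np.sin(2 * np.pi * x)
hp = lambda x: 1 + 2 * np.pi * a * np.cos(2 * np.pi * x)   # h' > 0 everywhere

def h_inv(t, iters=60):
    """Invert the strictly increasing map h by Newton's method."""
    u = np.array(t, dtype=float)
    for _ in range(iters):
        u -= (h(u) - t) / hp(u)
    return u

def L_T(w, x):
    """Transfer operator of the doubling map T(x) = 2x (mod 1)."""
    return 0.5 * (w(x / 2) + w((x + 1) / 2))

def L_S(w, x):
    """Transfer operator of S = h o T o h^{-1}, using S'(h(z)) h'(z) = h'(Tz) T'(z)."""
    u = h_inv(x)                         # preimages of x under S are h(z) with T(z) = u
    out = 0.0
    for z in (u / 2, (u + 1) / 2):       # the two T-preimages of u (mod 1)
        out = out + w(h(z)) * hp(z) / (2.0 * hp(u))
    return out

x = np.linspace(0.05, 0.95, 19)
w = lambda t: 2.0 + np.cos(2 * np.pi * t)

lhs = L_S(w, h(x))                                    # (L_S w) o h
rhs = L_T(lambda z: w(h(z)) * hp(z), x) / hp(x)       # (1/h') L_T((w o h) h')
assert np.allclose(lhs, rhs, atol=1e-10)
```

With the particular choice $h(x) = \int_0^x \rho(u)\,du$ made in the text (so $h' = \rho$), this identity is exactly what shows that the conjugated map $S$ preserves Lebesgue measure.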
\begin{lemma}\label{solution} Let $T: X \to X$ be a $C^k$ map with $k \geq 2$. For any $f \in C^{k-1}(X)$ we can find $g\in C^{k-1}(X)$ such that $f = \mathcal L_T(g)$. Moreover, we can find an infinite dimensional set of such solutions $g$. \end{lemma} \begin{proof} Given $f \in C^{k-1}(X)$ we can first look for a solution $g \in C^{k-1}(X)$ to $f= \mathcal L_Sg$. Since $\mathcal L_S 1 = 1$, it suffices to choose $g = f \circ S$, since then $$ \begin{aligned} \mathcal L_S(g)(x) &= \mathcal L_S(f\circ S)(x) = \sum_{Sy = x} \frac{f(Sy)}{S'(y)} = f(x) \sum_{Sy = x} \frac{1}{S'(y)} \cr & = f(x) \underbrace{ (\mathcal L_S1)(x)}_{=1} = f(x) \end{aligned} $$ and the result follows. Furthermore, there are clearly uncountably many such solutions $g$. In the general case, we can apply the identity in the previous lemma $\mathcal L_T \left((w\circ h) \rho\right) = \rho (\mathcal L_Sw)\circ h$ with the choice $w= (f/\rho) \circ h^{-1}\circ S$. This gives that $$ \begin{aligned}\mathcal L_T \left(( (f/\rho) \circ h^{-1}\circ S \circ h) \rho\right) & = \rho (\mathcal L_S ((f/\rho)\circ h^{-1} \circ S))\circ h\cr & = \rho ((f/\rho) \circ h^{-1}) \circ h = f . \end{aligned} $$ Thus it suffices to let $g = ((f/\rho) \circ h^{-1}\circ S \circ h) \rho$ to get the required result. Given a solution $g$, every point of the set $g+ker({\mathcal L }_T)$ is again a solution. Moreover, $ker({\mathcal L }_T)$ is an infinite dimensional space.
Indeed, consider the intervals $I_1,...,I_n$ on which the branches $T_i$ of $T$ are defined; then, given a $C^{k-1}$ density $\rho _1$ supported on the interior of $I_1$, it is easy to find a density $\rho _2$ supported in $I_2$ such that ${\mathcal L }_T (\rho _1)={\mathcal L }_T (\rho _2)$, so that $\rho_1 - \rho_2 \in ker({\mathcal L }_T)$. \end{proof} \smallskip \noindent {\it Proof of Theorem \ref{T1}.} Recall that in equation (\ref{prob}) we have $$ (I - \mathcal L_0) \rho^{(1)}(x) = \mathcal L_0\left(-\frac{\rho \epsilon'}{T_0'} - \frac{ \epsilon \rho '}{T_0'} + \frac{\epsilon T_0''}{(T_0')^2} \right)(x). $$ The left hand side is a $C^{k-2}$ function, by assumption. Since $T_0$ is $C^k$ we have $\rho \in C^{k-1}(X)$. By Lemma \ref{solution} we can choose a $C^{k-2}$ solution $g_1$ for step 1 described above, such that $(I - \mathcal L_0) \rho^{(1)}(x) = \mathcal L_0\left(g_1 \right)(x)$. Finally, this means that the differential equation we need to solve is $$g_1 = -\epsilon' \left( \frac{\rho}{T_0'}\right) - \epsilon\left( \frac{ \rho '}{T_0'} \right) + \epsilon\left( \frac{ T_0''}{(T_0')^2} \right), $$ which has coefficients that are at least $C^{k-2}$, and hence a $C^{k-1}$ family of solutions, proving the statement. Each such solution is a solution to Problem \ref{pr1} by Theorem \ref{perturbation}. \qed \medskip In particular, taking $k=5$ gives the following corollary. \begin{cor}\label{dopo} For a $C^5$ expanding map of the circle $T_0$, any first order perturbation $\rho^{(1)} \in C^3(X)$ in the density can be realised by a suitable first order perturbation $\epsilon \in C^4(X)$ in the transformation. \end{cor} \subsection{Optimal solutions} We now consider the problem of finding an optimal solution amongst the many different possible solutions. It is natural to minimize the size of the perturbation in a suitable norm.
Since we are considering smooth dynamics and perturbations, we can choose the following natural Sobolev-type norm $$ || f||_{abcd}=||f||_2+a||f'||_2+b||f''||_2+c||f'''||_2+d||f^{iv}||_2$$ for given $a,b,c \geq 0$ and $d>0$. With this norm, the associated space $H^4$ of functions having $L^2$ fourth derivatives is a Hilbert space. \begin{proposition}\label{min} If $\rho^{(1)} \in C^3$ and $T_0\in C^5 $ then in the space of $H^4$ solutions to (\ref{prob}) there is a unique minimizer with respect to the $ || \cdot ||_{abcd}$ norm. \end{proposition} \begin{proof} We begin by observing that the equation \begin{equation}\label{prob2} (I - \mathcal L_0) \rho^{(1)} = \mathcal L_0\left(-\frac{\rho \epsilon'}{T_0'} - \frac{ \epsilon \rho '}{T_0'} + \frac{\epsilon T_0''}{(T_0')^2} \right) \end{equation} is equivalent to $g=\tilde{ L} ( \epsilon )$, where $g=(I - \mathcal L_0) \rho^{(1)}$ and $\tilde{ L}$ is the linear operator on the right hand side of equation (\ref{prob2}). We observe that $\tilde{ L}$ is continuous as an operator from $H^4$ to $L^2(X)$ and thus $ker(\tilde{ L}) $ is a closed subspace of $H^4$. Moreover, the space of solutions of (\ref{prob2}) in $H^4$ is not empty, because of Corollary \ref{dopo}. We can therefore deduce that the space of solutions to (\ref{prob}) is a closed affine space on which we can search for an element of minimum norm. Finally, since we are in a Hilbert space there is a unique minimizer. Indeed, we can take a solution $v$ of (\ref{prob2}) and subtract its projection on $ker(\tilde{ L}) $; the result is orthogonal to $ker(\tilde{ L}) $ and thus to the affine space of solutions $ker(\tilde{ L}) +v$. This solution necessarily minimizes the norm. \end{proof} \begin{remark} The minimal solution found in Proposition \ref{min} is an actual solution of the initial problem. Indeed, let us call $\epsilon_0 $ this solution.
Then $$\mathcal L_0\left(-\frac{\rho \epsilon_0'}{T_0'} - \frac{ \epsilon_0 \rho '}{T_0'} + \frac{\epsilon_0 T_0''}{(T_0')^2} \right)\in {W^{1,1}}$$ and then by Theorem \ref{perturbation} (which can be applied since the solution $\epsilon _0 $ is in $C^3$) we get \begin{equation} \rho^{(1)} = (I - \mathcal L_0)^{-1} \mathcal L_0\left(-\frac{\rho \epsilon_0'}{T_0'} - \frac{ \epsilon_0 \rho '}{T_0'} + \frac{\epsilon_0 T_0''}{(T_0')^2} \right) \end{equation} is the required linear response associated to the first order perturbation $\epsilon _0 $. \end{remark} \begin{remark}The main point of the optimization procedure is an orthogonalization. It seems that this can be efficiently implemented by an algorithm to produce the optimal solution, once an effective characterization of $ker(\tilde{ L}) $ is provided. \end{remark} \section{Example: the doubling map} In this section we consider a simple example which can be easily analysed using classical Fourier series. Let $T_0: X \to X$ be the doubling map given by $T_0(x) = 2 x \hbox{ (mod $1$)}$; then $T_0' = 2$, $T_0'' = 0$, $\rho=1$ and $\rho'=0$. Let us write the prescribed perturbation $\rho^{(1)}(x) $ in the density as a Fourier series, i.e., $$ \rho^{(1)}(x) = \sum_{n \in \mathbb Z} a_n e^{2\pi i n x} $$ and then observe that since $$ \mathcal L_0 \left( e^{2\pi i n x}\right) = \begin{cases} e^{2\pi i (n/2) x} & \hbox{ if $n$ is even}\\ 0 & \hbox{ if $n$ is odd} \end{cases} $$ we have that the equation we need to solve becomes $$ (I - \mathcal L_0) \rho^{(1)}(x) =\mathcal L_0 \left(-\frac{\epsilon ^\prime }{2} \right) $$ where $$ (I - \mathcal L_0) \rho^{(1)}(x) = \sum_{n \in \mathbb Z} (a_n - a_{2n})e^{2\pi i n x}. $$ Moreover, given any function $f(x)$ written in the form $$f(x) = \sum_{n \in \mathbb Z} b_n e^{2\pi i n x} $$ we see that $$ (\mathcal L_0 f)(x) = \sum_{n \in \mathbb Z} b_{2n} e^{2\pi i n x}.
$$ Comparing coefficients, $f(x) (= -\frac{\epsilon^ \prime (x)}{2})$ is a solution to $(I - \mathcal L_0) \rho^{(1)}= \mathcal L_0(f)$ exactly when \begin{enumerate} \item $b_{n} = a_{n/2} - a_n$ if $n$ is even; \item the $b_n$ with $n$ odd are unrestricted, \end{enumerate} and finally we see that the (infinitely many) solutions to the linear perturbation are solutions to $$-\epsilon'(x) = 2f(x) = \sum_{n \in \mathbb Z} (2b_n) e^{2\pi i n x}. $$ That is, $$\epsilon(x) = - \sum_{n \neq 0} \frac{b_n}{\pi i n} e^{2\pi i n x} $$ (where the constant of integration corresponding to $b_0$ is actually zero), and a solution is $$\epsilon(x) = - \sum_{0 \neq n \in 2\mathbb Z} \frac{a_{n/2}-a_n}{\pi i n} e^{2\pi i n x} + \sum_{n \in \mathbb Z- 2\mathbb Z} \frac{c_n}{\pi i n} e^{2\pi i n x} $$ for any square summable choice of odd coefficients $\{ c_n\}$. Then $$ \| \epsilon\|_2^2= \sum_{n \neq 0} \frac{ |a_{2n } - a_n|^2}{\pi^2 (2n)^2} + \sum_{n \in \mathbb Z}\frac{|c_{2n+1}|^2}{\pi^2(2n+1)^2}. $$ In particular, this is minimised when $c_{2n+1}=0$ for $n \in \mathbb Z$ and leaves us with a distinguished solution $$ \epsilon _0 (x) = \sum_{n \neq 0} \frac{(a_{2n} - a_{n})}{2 \pi i n} e^{2\pi i 2n x}. $$ Similarly for the $|| \cdot ||_{abcd} $ norm we can reason as before, obtaining the same distinguished solution $\epsilon_0$. \begin{example} In the particular case of the doubling map and $\rho^{(1)} = \sin(2\pi x)$ we have that $a_1 = \frac{1}{2i}$ and $a_{-1} =- \frac{1}{2i}$, and all the other terms are zero. Thus $2b_2 = -i$ and $2b_{-2} = i$, and all the other even terms are zero. Thus we can write $$ \begin{aligned} \epsilon(x)& = \frac{e^{4i\pi x}}{4\pi } + \frac{e^{-4i\pi x}}{4\pi } - \sum_{n \in \mathbb Z} \frac{b_{2n+1}}{\pi i (2n+1)} e^{2\pi i (2n+1) x}\cr &= \frac{1}{2\pi} \cos(4\pi x) - \sum_{n \in \mathbb Z} \frac{b_{2n+1}}{\pi i (2n+1)} e^{2\pi i (2n+1) x}.
\end{aligned} $$ If we additionally choose the free odd coefficients $b_{2n+1}$ so as to minimise the $L^2$-norm then $$ \epsilon_0 (x)= \frac{1}{2\pi} \cos(4\pi x) $$ and $$ \|\epsilon_0 \|_2 = \sqrt{ \int_0^1 \left( \frac{1}{2\pi} \cos(4\pi x)\right)^2dx} = \frac{\sqrt{8}}{8\pi}. $$ \end{example} \begin{remark} It is natural to consider in which way the previous results could be generalized to higher dimensional systems and to systems with contracting directions. In this case, using suitable anisotropic norms, we can still have a spectral gap, and the general structure of the problem probably remains similar, with Theorem \ref{perturbation} applying to a suitable space of distributions, in a way similar to that which we have seen for circle expanding maps. However, the formula in Proposition \ref{mainprop} is quite specific to the expanding case, and in the general case a suitable generalization of the derivative operator should apply to measures and distributions. \end{remark} \begin{remark} After the completion of this manuscript, B. Kloeckner (\cite{Kl}) investigated a control problem similar to the one considered here, although in that work only restricted perturbations of the system coming from a smooth conjugacy were considered. With this point of view, using ideas similar to those in \cite{Mo}, it was proved that there is always at least one solution of the control problem for a large class of systems preserving a smooth invariant measure. However, the class of admissible changes used in that paper is much smaller than in the present work, and the class of solutions found is correspondingly smaller. In particular, the optimal changes (arising from conjugacies) found in \cite{Kl} for expanding maps may be very different from the ones found in the present work, as is shown in Section 2.4 of \cite{Kl} for the doubling map. \end{remark}
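As a numerical sanity check of the doubling map example (our addition, not part of the paper), one can power-iterate the transfer operator of the perturbed map $T_\delta(x) = 2x + \delta\,\epsilon_0(x) \hbox{ (mod $1$)}$, with the optimal perturbation $\epsilon_0(x) = \frac{1}{2\pi}\cos(4\pi x)$ found above, and compare the resulting invariant density with the first-order prediction $1 + \delta\sin(2\pi x)$. The residual should be of second order in $\delta$; the grid size, iteration count and tolerances below are illustrative choices.

```python
import numpy as np

delta = 0.04
eps0 = lambda y: np.cos(4 * np.pi * y) / (2 * np.pi)            # optimal perturbation
T_prime = lambda y: 2.0 - 2.0 * delta * np.sin(4 * np.pi * y)   # T_delta'(y)

def preimage(x, k):
    """Branch-k inverse of T_delta(y) = 2y + delta*eps0(y) (mod 1), by Newton."""
    y = (x + k) / 2.0
    for _ in range(40):
        y -= (2.0 * y + delta * eps0(y) - (x + k)) / T_prime(y)
    return y

M = 512
x = np.arange(M) / M
xg = np.append(x, 1.0)                    # periodic grid for interpolation

rho = np.ones(M)
for _ in range(60):                       # power-iterate the transfer operator
    new = np.zeros(M)
    for k in (0.0, 1.0):                  # the two branches of T_delta^{-1}
        y = preimage(x, k) % 1.0
        new += np.interp(y, xg, np.append(rho, rho[0])) / T_prime(y)
    rho = new / new.mean()                # keep total mass equal to 1

err_first  = np.max(np.abs(rho - (1.0 + delta * np.sin(2.0 * np.pi * x))))  # O(delta^2)
err_zeroth = np.max(np.abs(rho - 1.0))                                      # O(delta)
assert err_zeroth > 0.03 and err_first < err_zeroth / 2
```

The final assertion expresses that the computed density $\rho_\delta$ is much closer to the linear response prediction $1 + \delta\sin(2\pi x)$ than to the unperturbed density, consistent with formula (\ref{LR}).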