Portal 4/2013: In 2012, the Croatian Conservation Institute began documenting, investigating and conducting conservation trials on the stone sculpture in the interior of the presbytery and the main apse of St. James's Cathedral in Šibenik. A major architectural achievement of the 15th and 16th centuries in Croatia, the Cathedral of Šibenik gained global recognition in 2000, when it was inscribed on the UNESCO World Heritage List. The collection of six papers on the history of St. Mark's Church was published to mark the completion of its year-long renovation. The Croatian Conservation Institute, which supervised the larger part of the renovation, initiated the monograph, aiming to compile all existing insights into the construction history of the church, which revealed it to be a particularly multi-layered monument.
\section{Introduction} The language model is an important component of automatic speech recognition (ASR) systems \cite{inproceedings_Kazuki, 6854535, Si2013PrefixTB, 7460091}, and perplexity (PPL) is known to be closely correlated with word error rate (WER) \cite{wilpon1994voice, Klakow:2002:TCW:638078.638080}. Nowadays, state-of-the-art language models are commonly modeled using neural networks \cite{Bengio:2003:NPL:944919.944966, mikolov2010recurrent, Schwenk:2007:CSL:1230156.1230409, sundermeyer2015:lstm}. The language model aims to learn the probability of word sequences, which is normally decomposed in an auto-regressive manner. To capture long contextual dependencies, the recurrent neural network (RNN) can be applied, which often uses the cross entropy training criterion along with softmax \cite{mikolov2010recurrent,sundermeyer2015:lstm, sundermeyer12:lstm}. Applying a large margin to the softmax layer encourages intra-class compactness and inter-class separability among learned features. In the field of face recognition there exists a line of work \cite{L-Softmax, Liang2017SoftMarginSF, CosFace, ArcFace, DBLP:journals/corr/abs-1801-05599, Liu_2019_CVPR} that studies large-margin in the softmax layer, providing significant improvements in performance. Considering that the vectors in the projection matrix before the last softmax layer in neural language models (NLM) are essentially feature vectors of the words, which resemble the feature vectors of images in face recognition, we are curious to examine the performance of the aforementioned margins in NLM. Large-margin in NLM is not an unfamiliar concept. In \cite{LargeMarginNeuralLanguageModel}, a global-level margin that discriminates sentences is introduced. In contrast, this paper focuses on the margin between atomic-level word vectors. We apply different types of large-margins from face recognition to NLM. 
Our initial experiments show that using the large-margin softmax from face recognition out-of-the-box for NLM deteriorates the PPL dramatically. We assume that this is due to the fundamental differences between words and faces in their class distributions. It is important to note that unlike in face recognition, the posterior probability of words in NLM is highly unbalanced. Zipf's law \cite{powers1998applications} is a common approximation of word frequency versus word rank in natural languages. In \cite{WeightNormInitialization}, the authors observe that NLM learn word vectors whose norms are closely related to word frequencies. Therefore, we conduct a series of experiments to compare various norm-scaling techniques for the word vectors. In addition, we implement a heuristic to scale the norms of the context vectors. It turns out that one of the norm-scaling methods slightly improves the PPL. When it is used along with the margin techniques, comparable WER to the baseline is achieved. Finally, to figure out the effects of margin techniques in NLM, we visualize the word vectors and observe that word vectors trained with large-margin softmax exhibit the expected behavior and ``stretch" the word vectors to more evenly populate the embedding space. \section{Related Work} The minimum distance of feature vectors to a decision boundary is called the margin. The large-margin concept plays an important role in conventional machine learning algorithms such as the support vector machine \cite{Support-vector-networks} and the boosting algorithm \cite{schapire1998boosting}. Its core idea is to maximize the margin during training, in the hope that this leads to greater separability between classes during testing. \cite{chen2000discriminative, roark2004discriminative} study discriminative training of language models. The authors in \cite{sha2006large} introduce large margin into Gaussian mixture models for phonetic classification and recognition tasks (multiway classification). 
Later in \cite{sha2007large}, they show a framework to train large-margin hidden Markov models in the more general setting of sequential (as opposed to multiway) classification in ASR. It has also been well studied in image processing. A novel loss function is proposed in \cite{NIPS2018_7364} to encourage large-margin in any set of layers of a deep network. In this work, we concentrate on the traditional margin methods that only focus on the output layer, to see if they contribute to NLM. The weights of the output layer are essentially feature vectors of each class (image features in face recognition or word embeddings in NLM). The scores (logits) of each sample are obtained using the dot product between the feature and the context vector. When using the cross entropy criterion along with softmax, the logits are used to calculate the loss. There exists a line of work in face recognition that modifies the loss function such that the scores of the true labels are reduced during training. In \cite{L-Softmax}, the score for the ground truth class is manipulated by multiplying the angle between the ground truth feature vector and the context vector by a constant integer term $m$. It leads to a decline of that score, which ultimately leads to greater angular separability between learned feature vectors. This shares a similar idea with the A-Softmax loss (SphereFace) \cite{SphereFace}. However, SphereFace normalizes the weights by their L2-norms in advance so that the learned features are restricted to a hypersphere manifold, which is consistent with the widely used assumption in the field of manifold learning for face recognition that face images lie on or close to a low-dimensional manifold \cite{Manifold_of_Facial_Expression, Face_recognition_using_Laplacianfaces}. Later, Wang et al. 
\cite{CosFace} propose a large-margin cosine loss (CosFace) using L2 normalization for both the feature vectors and the context vector, and subtracting a margin $m$ from the cosine function output. CosFace also leads to a large-margin in the angular space. Subsequently, an additive angular margin loss (ArcFace) is presented in \cite{ArcFace} that adds a margin term $m$ to the angle instead of multiplying an integer term as in SphereFace. While these designs look similar, the authors claim that ArcFace has a better geometric attribute. Compared to SphereFace and CosFace, which are nonlinear in the angular space, ArcFace has a constant linear angular margin. Word frequencies and vector norms are key concepts for this paper. Zipf's law \cite{powers1998applications} states that the frequency of words in a corpus of natural language is inversely related to the rank of words. Further in \cite{WeightNormInitialization}, the authors identify the relation between the norms of word vectors and their frequency. The result shows that the logarithm of word counts is a good approximation of word vector norms. This inspires us to examine various norm-scaling techniques for word vectors. \section{Methodology} \label{sec:methodology} Assuming the long short-term memory (LSTM) network \cite{hochreiter1997long} is used for language modeling, the target word posterior probability using softmax in the last output layer can be written as: \begin{equation} P(y|i) = \frac{\exp{l(y,i)}}{\exp{l(y,i)} + \sum_{y' \neq y} \exp{l(y',i)}} \end{equation} where $y$ denotes the target next word (dependency on $i$ is dropped for simplicity), $y'$ is a running index in the vocabulary, $i$ is a running index of positions in data and $l(y,i)$ denotes the logit calculation. 
When using the inner-product, it can be written as: \begin{align} l(y, i) &= h_i^T W_{y} + b_{y}\\ &= ||h_i|| \cdot ||W_{y}|| \cdot \cos \theta_{y,i} + b_{y} \end{align} where $h_i$ is the context vector and the output of the LSTM layer(s), $W_y$ is the embedding vector, $\theta_{y,i}$ is the angle between the two and $b_y$ is the bias term. Commonly, the softmax output is used together with the cross entropy training criterion: \begin{equation} L = - \sum_{i} \log P(y|i) \end{equation} \subsection{Conventional Margin}\label{sec:conventional_margin} All three margins used in this paper only vary in the calculation of the logit of the ground-truth class $y$. For CosFace ($l_\text{COS}$) and ArcFace ($l_\text{ARC}$), the authors of the original papers claim that normalization of the features is necessary to encourage feature learning in their approach. Moreover, it is better to set the norm of the context vector to a constant if the model is trained from scratch. Hence, they set $||W_y|| = 1$ and $||h_i|| = s$, where $s$ is some predefined constant. In contrast to ArcFace and CosFace, L-Softmax ($l_\text{LSM}$) does not normalize anything in advance. The three margins from face recognition are formally defined as follows: \begin{align} l_{\text{COS}}(y, i) &= s \times (\cos(\theta_{y,i}) - m) \label{CosFace_fun}\\ l_{\text{ARC}}(y, i) &= s \times \cos(\theta_{y,i} + m) \label{ArcFace_fun}\\ l_{\text{LSM}}(y, i) &= ||h_i|| \cdot ||W_{y}|| \cdot \varphi(\theta_{y,i}) \label{L-Softmax_fun} \end{align} where $\varphi(\theta_{y,i})$ is designed as: \begin{equation} \varphi(\theta_{y,i}) = (-1)^k \cos(m \theta_{y,i}) - 2k \label{varphi} \end{equation} While the margin $m$ in $l_{\text{COS}}$ and $l_{\text{ARC}}$ is a non-negative real number, it must be a positive integer in $l_{\text{LSM}}$. Using (\ref{varphi}), the monotonicity of $\varphi(\theta_{y,i})$ with respect to $\theta_{y,i}$ can be guaranteed. 
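The three ground-truth logit variants above can be sketched in NumPy as follows. This is an illustrative sketch, not the training implementation: the function name, the numerical clipping of the cosine, and the default $s=64$ (the re-scaling constant used in our experiments) are our own choices.

```python
import numpy as np

def margin_logit(h, w, m, kind, s=64.0):
    """Logit of the ground-truth class under the three margin variants.

    h: context vector, w: word (feature) vector, m: margin,
    kind: "cos" (CosFace), "arc" (ArcFace) or "lsm" (L-Softmax).
    s is the re-scaling constant used by CosFace/ArcFace.
    """
    cos_t = float(h @ w) / (np.linalg.norm(h) * np.linalg.norm(w))
    theta = np.arccos(np.clip(cos_t, -1.0, 1.0))  # clip guards rounding error
    if kind == "cos":                   # l_COS = s * (cos(theta) - m)
        return s * (cos_t - m)
    if kind == "arc":                   # l_ARC = s * cos(theta + m)
        return s * np.cos(theta + m)
    if kind == "lsm":                   # l_LSM = ||h|| * ||w|| * phi(theta)
        m = int(m)                      # m must be a positive integer here
        k = min(int(theta * m / np.pi), m - 1)  # theta in [k*pi/m, (k+1)*pi/m]
        phi = (-1.0) ** k * np.cos(m * theta) - 2.0 * k
        return np.linalg.norm(h) * np.linalg.norm(w) * phi
    raise ValueError(kind)
```

With $m=1$, the L-Softmax branch reduces to the plain inner product $h_i^T W_y$ (without bias), i.e. the unmodified baseline.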
$k$ in $\varphi(\theta_{y,i})$ is an integer in the range $[0, m-1]$ and $\theta \in [\frac{k\pi}{m}, \frac{(k+1)\pi}{m}]$. \subsection{Margin with Norm-scaling} We explore different norm-scaling techniques for word vectors and context vectors, which differ only in how they alter the norms of the vectors. $f(y)$ defines the modifications to word vectors and $g(i)$ defines the modifications to context vectors: \begin{align} f(y) &= \begin{cases} ||W_y||, & \textit{no-mod}\\ ||W_{\text{argmax}\{c\}}||, & \textit{uniform} \\ \log (\mathrm{exp}(||W_{\text{argmax}\{c\}}||) - v \times y), & \textit{log-rank} \\ ||W_{\text{argmin}\{c\}}|| + u \times c_y, & \textit{unigram}\\ \log (c_y), & \textit{log-unigram} \label{norm_scaling_word_vector} \end{cases} \\ g(i) &= \begin{cases} ||h_i||, & \textit{no-mod}\\ \text{max}\{||h||\}, & \textit{max-norm} \end{cases} \end{align} \noindent with $v$ and $u$ defined as: \begin{align} v &= \frac{\mathrm{exp}(||W_{\text{argmax}\{c\}}||) - \mathrm{exp}(||W_{\text{argmin}\{c\}}||)}{V} \\ u &= \frac{||W_{\text{argmax}\{c\}}|| - ||W_{\text{argmin}\{c\}}||}{\max\{c\}} \end{align} \noindent where $c$ is the count of words and $V$ is the vocabulary size. For \textit{uniform}, we assume that all word vectors have the same norm, and in this case we use the norm of the word vector with the largest count in the training corpus. For \textit{log-rank}, we assume that the norm of a word vector is linear with respect to its index before the logarithmic operation (with words sorted in descending order by their counts). 
For \textit{unigram}, we use scaled and shifted word counts as the new word vector norms. Note that for \textit{uniform}, \textit{log-rank} and \textit{unigram}, $f(y)$ is dynamically updated in each update step during training. For \textit{log-unigram}, we take the logarithm of the word count directly as the norm of that word. On the other hand, the heuristic \textit{max-norm} scales the norm of the $i$-th context vector $h_i$ using the largest norm in the batch in which $h_i$ appears. Finally, combining norm-scaling and margin techniques, the logit calculation can be reformulated as: \begin{equation} l(y, i) = g(i) f(y) \phi (\theta_{y, i}) + b_y \end{equation} where $g(i)$ is selected between $||h_i||$ and $\max\{||h||\}$, $f(y)$ is selected among the five norm-scaling functions and $\phi (\theta_{y', i}) = \cos (\theta_{y', i})$ for $y' \neq y$, otherwise \begin{equation} \phi (\theta_{y, i}) = \begin{cases} \cos(\theta_{y,i}) - m, & \text{COS} \\ \cos(\theta_{y,i} + m), & \text{ARC} \\ \varphi(\theta_{y,i}), & \text{LSM}\\ \cos (\theta_{y, i}), & \textit{no-margin} \end{cases} \end{equation} For all experiments in the next section, we always keep the bias term $b_y$, as according to our early experiments, dropping it slightly degrades the performance across all setups. \section{Experiments} \label{sec:experiments} We use two datasets to compare the effects of the aforementioned techniques: Switchboard (SWB) and Quaero English. SWB is a relatively small dataset, with a vocabulary size of 30K and 25M training tokens. Quaero has a vocabulary size of 128K and 49M training tokens. We use two-layer LSTM language models with hidden state sizes of 1024 and 2048 for SWB and Quaero, respectively. For Quaero, we also apply the sampled softmax \cite{DBLP:journals/corr/JeanCMB14} method to speed up training. In the following experiments, we fix the model architecture and only alter the softmax layer. 
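The norm-scaling functions and the combined logit above can be sketched in NumPy for the no-margin case $\phi = \cos$. The function name and the batched array layout are our own, and the \textit{log-rank} variant is omitted for brevity; this is a sketch of the formulas, not the training code.

```python
import numpy as np

def scaled_logits(H, W, counts, f_mode="log-unigram", g_mode="max-norm", b=None):
    """Combined logit l(y, i) = g(i) * f(y) * cos(theta_{y,i}) + b_y
    (no-margin case) for a batch of context vectors H (B x d),
    word vectors W (V x d) and training word counts (V,)."""
    Wn = np.linalg.norm(W, axis=1)              # current word vector norms
    hn = np.linalg.norm(H, axis=1)              # current context vector norms
    c_max, c_min = counts.argmax(), counts.argmin()
    if f_mode == "no-mod":
        f = Wn
    elif f_mode == "uniform":                   # norm of most frequent word
        f = np.full_like(Wn, Wn[c_max])
    elif f_mode == "unigram":                   # shifted, scaled counts
        u = (Wn[c_max] - Wn[c_min]) / counts[c_max]
        f = Wn[c_min] + u * counts
    elif f_mode == "log-unigram":               # log of word counts as norms
        f = np.log(counts)
    else:
        raise ValueError(f_mode)
    g = hn if g_mode == "no-mod" else np.full_like(hn, hn.max())  # max-norm
    cos = (H @ W.T) / np.outer(hn, Wn)          # cos(theta_{y,i})
    out = np.outer(g, f) * cos                  # g(i) * f(y) * cos(theta)
    return out if b is None else out + b
```

With \textit{no-mod} for both $f$ and $g$, the result reduces to the plain inner products $h_i^T W_y$, i.e. the standard softmax logits without bias.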
\subsection{Conventional Margin} Considering that large-margin works well in face recognition, to gain a preliminary understanding of its effects on our task we apply the large-margin techniques described in Section \ref{sec:conventional_margin} out-of-the-box for NLM. As shown in Table \ref{PPLSWBLARGIN_MARGIN}, for LSM, the norms of the vectors are retained as defined in Equation \ref{L-Softmax_fun}, and setting $m$ to one means that there is no modification of the cosine similarity. Therefore, the first row of the table gives us the baseline of this margin. Moreover, since \cite{L-Softmax} requires $m$ to be an integer, the minimal step is to increase $m$ by one. As can be seen, even setting $m$ to two dramatically worsens the performance, so we do not increase $m$ further in this experiment. \begin{table}[ht] \centering \caption{PPL on SWB using margins from face recognition out-of-the-box.} \begin{tabular}{|l|r|r|} \hline Method & $m$ & PPL \\ \hline Baseline & n/a & 53.7 \\ \hline \multirow{2}{*}{LSM} & 1 & 53.5 \\\cline{2-3} & 2 & 390.3 \\ \hline \multirow{5}{*}{ARC} & 0 & 66.9 \\ \cline{2-3} & 0.001 & 68.0 \\\cline{2-3} & 0.003 & 80.7 \\\cline{2-3} & 0.01 & 106.1 \\\cline{2-3} & 0.03 & 170.4 \\ \hline \multirow{5}{*}{COS} & 0 & 66.9 \\\cline{2-3} & 0.001 & 70.3 \\\cline{2-3} & 0.003 & 77.7 \\\cline{2-3} & 0.01 & 108.0 \\\cline{2-3} & 0.03 & 382.1 \\ \hline \end{tabular} \label{PPLSWBLARGIN_MARGIN} \end{table} For COS and ARC, the modifications of the feature vector norms are defined in Equation \ref{CosFace_fun} and Equation \ref{ArcFace_fun}. The feature vector $W_y$ and context vector $h_i$ are first normalized and then a large scalar $s=64$ is used for re-scaling. The results show a clear trend: the bigger the margin term gets, the worse the performance is. Even when disabling the margin, i.e. $m = 0$, the PPL is much higher than the baseline. 
As the only change to the calculation in this case is the re-scaling of the vector norms, this suggests that normalizing the word vectors and re-scaling them to have norms of 64 is too harsh for the NLM in question. \subsection{Margin with Norm-scaling} Considering the pattern discovered in \cite{WeightNormInitialization}, namely that the norms of word vectors approximate the logarithm of word counts, as well as the results of our preliminary experiments, we believe that the norms of the vectors play a non-negligible role in NLM. Simply reducing them and re-scaling them with a large constant seems improper in our case. Hence, our next step aims to figure out which kind of norm-scaling is more suitable for NLM. Specifically, we vary the norm-scaling setup of $g(i)$ and $f(y)$ and examine the PPL of the corresponding models. The top half of Table \ref{norm-scaling} depicts the performance of the five different norm-scaling techniques for word vectors as defined in (\ref{norm_scaling_word_vector}), where $g(i)$ uses \textit{no-mod}. As seen, all of the modifications slightly worsen the performance. The bottom half of the table reports the performance when $g(i)$ uses \textit{max-norm}. We can see that only applying the heuristic to the context vector gives the best performance on SWB, and using \textit{max-norm} and \textit{log-unigram} together slightly improves the PPL on Quaero. We go on to apply the best norm-scaling setups in combination with the $\phi (\theta_{y, i})$ variants for NLM. \begin{table}[ht] \centering \caption{PPL on SWB or Quaero using different norm-scaling techniques. 
$g(i)$ and $f(y)$ being \textit{no-mod} corresponds to the standard softmax baseline.} \begin{tabular}{|l|l|cc|} \hline \multicolumn{1}{|c|}{$g(i)$} & \multicolumn{1}{|c|}{$f(y)$} & \multicolumn{1}{c}{SWB} & \multicolumn{1}{c|}{Quaero} \\ \hline \multirow{5}{*}{\textit{no-mod}} & \textit{no-mod} & 53.7 & 105.8 \\\cline{2-4} & \textit{uniform} & 56.8 & 108.4 \\\cline{2-4} & \textit{log-rank} & 56.8 & 108.1 \\\cline{2-4} & \textit{unigram} & 56.0 & 108.4 \\\cline{2-4} & \textit{log-unigram} & 53.8 & 107.4 \\ \hline \multirow{5}{*}{\textit{max-norm}} & \textit{no-mod} & \textbf{52.9} & 104.3 \\\cline{2-4} & \textit{uniform} & 56.4 & 107.2 \\\cline{2-4} & \textit{log-rank} & 57.4 & 109.6 \\\cline{2-4} & \textit{unigram} & 54.6 & 106.1 \\\cline{2-4} & \textit{log-unigram} & 53.1 & \textbf{104.1} \\ \hline \end{tabular} \label{norm-scaling} \end{table} \begin{table}[ht] \centering \caption{PPL on SWB combining norm-scaling and large-margin softmax.} \begin{tabular}{|l|l|l|cc|} \hline \multicolumn{1}{|c|}{$g(i)$} & \multicolumn{1}{|c|}{$f(y)$} & \multicolumn{1}{|c|}{$m$} & ARC & COS \\ \hline \multirow{2}{*}{\textit{no-mod}} & \textit{no-mod} & 0.001 & 54.5 & 54.5 \\ \cline{2-5} & \textit{log-unigram} & 0.001 & 55.3 & 55.3 \\ \hline \multirow{5}{*}{\textit{max-norm}} & \textit{log-unigram} & 0.001 & 55.3 & 54.9 \\ \cline{2-5} & \multirow{4}{*}{\textit{no-mod}} & 0.001 & \textbf{54.2} & \textbf{54.1} \\ \cline{3-5} & & 0.003 & 55.5 & 55.9 \\ \cline{3-5} & & 0.006 & 57.7 & 58.4 \\ \cline{3-5} & & 0.010 & 60.0 & 60.9 \\ \hline \end{tabular} \label{ns_with_lm_SWB} \end{table} \begin{table}[ht] \centering \caption{PPL on Quaero combining norm-scaling and large-margin softmax.} \begin{tabular}{|l|l|l|c|c|} \hline \multicolumn{1}{|c|}{$g(i)$} & \multicolumn{1}{|c|}{$f(y)$} & \multicolumn{1}{|c|}{$m$} & ARC & COS \\ \hline \multirow{2}{*}{\textit{no-mod}} & \textit{no-mod} & 0.001 & \textbf{111.2} & \textbf{111.6} \\ \cline{2-5} & \textit{log-unigram} & 0.001 & 114.7 & 114.2 \\ 
\hline \multirow{2}{*}{\textit{max-norm}} & \textit{log-unigram} & 0.001 & 113.7 & 112.7 \\ \cline{2-5} & \textit{no-mod} & 0.001 & 114.0 & 112.0 \\ \hline \end{tabular} \label{ns_with_lm_Quaero} \end{table} Now that we have good norm-scaling setups for both the context vectors and the word vectors, the logical next step is to assess the performance of the various margins in combination with them. We choose the four best combinations of $g(i)$ and $f(y)$ in Table \ref{norm-scaling} and conduct large-margin experiments. First, we use a very small margin term for all of them, i.e. $m=0.001$. As can be seen in the first four rows of Table \ref{ns_with_lm_SWB}, they do not differ much in PPL and none of them improves over the baseline on SWB. Furthermore, as shown in Table \ref{ns_with_lm_Quaero}, all of them deteriorate the PPL on Quaero to a large degree. To verify further, we tune the margin term under our best norm-scaling setting. The results in the last four rows of Table \ref{ns_with_lm_SWB} clearly show that the performance gets worse as $m$ increases. Last but not least, we conduct LSTM recurrent neural network rescoring experiments, shown in Table \ref{asr}, to reach a final verdict on the application of large-margin softmax in NLM. The baseline system is based on a hybrid hidden Markov model neural network \cite{kitza19:interspeech}. It is interesting to find that although the PPL deteriorates, NLM with large-margin softmax can yield the same WER as the baseline. 
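For reference, the PPL values reported in the tables are the exponentiated mean cross entropy over target positions. A minimal NumPy sketch (the function name is ours, and this is a generic illustration rather than our evaluation code):

```python
import numpy as np

def perplexity(logits, targets):
    """PPL = exp of the mean cross entropy (negative log-likelihood)
    over target positions. logits: (N x V), targets: (N,) word indices."""
    z = logits - logits.max(axis=1, keepdims=True)        # stabilized log-softmax
    log_p = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    nll = -log_p[np.arange(len(targets)), targets]        # per-position NLL
    return float(np.exp(nll.mean()))
```

For uniform logits over a vocabulary of size $V$, this returns exactly $V$, the worst case for a calibrated model with no information.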
\begin{table}[ht] \centering \caption{PPL and WER on SWB using ARC or COS with $m = 0.001$ and best norm-scaling techniques.} \begin{tabular}{|l|c|r|r|r|} \hline \multicolumn{2}{|c|}{Metrics} & baseline & ARC & COS \\ \hhline{|==|=|=|=|} \multicolumn{2}{|l|}{PPL} & 52.9 & 54.2 & 54.1 \\ \hhline{|==|=|=|=|} \multirow{3}{*}{WER} & Switchboard & 13.7 & 13.7 & 13.7 \\ \cline{2-5} & Callhome & 7.1 & 7.1 & 7.1 \\ \cline{2-5} & Average & 10.4 & 10.4 & 10.4 \\ \hline \end{tabular} \label{asr} \end{table} \section{Analysis} To analyze the effects of large-margin in NLM, in this section we visualize the word embeddings trained with large-margin softmax as well as the standard softmax. For visualization, the dimensionality of word vectors is reduced to two by first applying principal component analysis and then using t-distributed stochastic neighbor embedding \cite{maaten2008visualizing}. Figure \ref{fig:word_vector} shows the word vectors in polar coordinates. For COS and ARC, we use the large-margin softmax in bold in Table \ref{ns_with_lm_SWB}. The vectors are scaled and rotated to align the word ``the" in all plots. The points in blue are the top 100 frequent words in SWB, which already account for around 65\% of the total running words. As can be seen, these vectors of frequent words obtained by large-margin softmax approaches in (b) and (c) are more separable than those obtained by standard softmax in (a). In other words, the word vectors are further ``stretched" to more evenly populate the embedding space. 
\begin{figure}[ht] \centering \subfigure{ \begin{minipage}[t]{0.39\linewidth} \centering \includegraphics[width=\textwidth]{a.pdf} \centerline{(a) softmax, top 100} \end{minipage} } \subfigure{ \begin{minipage}[t]{0.39\linewidth} \centering \includegraphics[width=\textwidth]{b.pdf} \centerline{(d) softmax, word groups} \end{minipage} } \subfigure{ \begin{minipage}[t]{0.39\linewidth} \centering \includegraphics[width=\textwidth]{c.pdf} \centerline{(b) COS, top 100} \end{minipage} } \subfigure{ \begin{minipage}[t]{0.39\linewidth} \centering \includegraphics[width=\textwidth]{d.pdf} \centerline{(e) COS, word groups} \end{minipage} } \subfigure{ \begin{minipage}[t]{0.39\linewidth} \centering \includegraphics[width=\textwidth]{e.pdf} \centerline{(c) ARC, top 100} \end{minipage} } \subfigure{ \begin{minipage}[t]{0.39\linewidth} \centering \includegraphics[width=\textwidth]{f.pdf} \centerline{(f) ARC, word groups} \end{minipage} } \centering \caption{Word vectors plotted in polar coordinates. In (a), (b) and (c), the top 100 frequent words more evenly populate the embedding space. In (d), (e) and (f), word groups with strong semantic (shades of red) or syntactic (shades of blue) relations are preserved.} \label{fig:word_vector} \end{figure} To further investigate whether the word embeddings obtained by large-margin softmax maintain word relations in general, we visualize some word groups in the second column ((d), (e) and (f)) of Figure \ref{fig:word_vector}. The words in the red color series are pairs that have semantic similarity, while the word groups in the blue color series have syntactic relations. As seen, even though the large-margin softmax makes the angles between words larger, it still preserves the semantic and syntactic relations. For instance, words that share a similar meaning (``auto" - ``car") are well gathered, and the angle between the words ``she" and ``herself" is almost the same as the angle between ``he" and ``himself" in (f). 
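The dimensionality reduction behind these plots can be sketched as follows: a PCA projection to two dimensions followed by conversion to polar coordinates. The subsequent t-SNE step and the rotation aligning the word ``the" across plots are omitted, and the function names are our own.

```python
import numpy as np

def pca_2d(E):
    """Project word embeddings E (V x d) to two dimensions via PCA.
    (The plots additionally apply t-SNE afterwards; omitted here.)"""
    X = E - E.mean(axis=0)                      # center the embeddings
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:2].T                         # top-2 principal components

def to_polar(P):
    """Radius/angle form used for the polar plots of the 2-D points."""
    r = np.linalg.norm(P, axis=1)
    phi = np.arctan2(P[:, 1], P[:, 0])
    return r, phi
```

Since the singular values are sorted in descending order, the first projected coordinate always carries at least as much variance as the second.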
\section{Conclusions} In this work, we investigate the use of large-margin softmax in neural language models. We first apply margins from face recognition out-of-the-box, which evidently deteriorates perplexity. Considering the unbalanced nature of word distributions, we further conduct experiments to find good norm-scaling settings for neural language models and tune the margin parameters. Then we apply the models trained with large-margin softmax in rescoring experiments, where we reach the same word error rate as the standard softmax baseline. Finally, to figure out the effects of large-margin in neural language models, we visualize the word vectors. It is interesting to note that the expected margins are found among the word vectors trained with large-margin softmax, which makes them more evenly populate the embedding space. At the same time, the semantic and syntactic relations among words are also preserved. \section{Acknowledgements} {\footnotesize This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 694537, project ``SEQCLAS"). The work reflects only the authors' views and the European Research Council Executive Agency (ERCEA) is not responsible for any use that may be made of the information it contains.} \bibliographystyle{IEEEtran}
THE BODY DEVA " _The Body Deva_ is simply a great book. Through a series of simple-to-do visualizations, Shutan leads the reader on a powerful and effective journey toward self-understanding, self-healing, and self-liberation. Her writing is filled with deep wisdom and insights, which make the journey she invites us to take a trustworthy and safe one. Highly recommended." — **ROBERT HENDERSON,** author of _Emotion and Healing in the Energy Body_ "Mary is a gifted and compassionate teacher, and this book is ideal for anybody ready to move beyond the mentality of the quick fix. The exercises may appear surprisingly simple on the page, but working through them with patience and focus reveals their potency and their depth. _The Body Deva_ is of great value as a workbook for self-healing, as a compendium of healing techniques, and as a guide toward an expanded view of the self." — **JONATHAN HOWARD KATZ** , composer and pianist "Mary Mueller Shutan provides a body-aware and trauma-informed approach to ancestral and cultural healing in _The Body Deva_. Her work brings humility and respect for diverse kinds of life experience in a way that is much needed in contemporary spiritual circles." — **DANIEL FOOR, PH.D.** , author of _Ancestral Medicine_ " _The Body Deva_ is a beautiful introduction for creating a compassionate relationship with your body and understanding the incredible capacity for health and well-being that exists within each of us. Mary is truly gifted at providing vocabulary and understanding of how to access your own inner body deva and live a more balanced, connected life." — **JEANNE GORHAM** , craniosacral therapist "All of Mary Shutan's work is brilliant. Her fourth book, _The Body Deva_ , may very well be her best thus far. This book allows the reader to connect with their own body's wisdom in a grounded way that is an important step to greater self-connection and spiritual growth. 
There is a lot of material available that takes a person's spiritual journey outside of themselves; _The Body Deva_ brings you back into yourself." — **ALEXIS EDWARDS** , doctor of Oriental medicine and intuitive healer ## Contents 1. Cover Image 2. Title Page 3. Epigraph 4. Introduction 1. Healing versus Curing 2. The Clarion Call 3. Introducing the Body Deva 4. About My Journey to This Work 5. PART ONE 1. Introductory Work 2. Chapter 1: The Body Deva 1. The Physical Body 2. Beginning Body Deva Exercise 3. Working with the Body Deva: Part Two 4. Finding the Body Deva Within 5. Body Maps 6. Talking to Your Body 7. How to Talk to Your Body 8. Variation on Talking to Your Body 3. Chapter 2: Working with Resistance and Blocks 1. Working with Resistance 2. Working with Resistance Via Inquiry 3. Working with Resistance via Symbol 4. Questions about Resistance 5. The Energetics of Resistance 4. Chapter 3: Working with Fear and Emotions 1. Understanding Our Emotions 2. Questioning Emotions 3. Functions of Emotions 4. Anger 5. Fear 6. Grief and Sadness 7. Looking for an Outlet 8. Working with the Body Deva to Understand and Work with Emotions 9. Working with Emotions 10. Looking at our Reactivity in the World 11. Our Tornado of Chaos 5. Chapter 4: Working with Inner Children 1. How to Work with Your Inner Children: Part One 2. Healing Inner Children: Part Two 3. The Concept of Micro-Trauma 6. PART TWO 1. Intermediate and Advanced Work 2. Chapter 5: Working with Contracts 1. Releasing Held Contracts 2. Changing Contracts 3. Chapter 6: Healing In Utero 1. Working with the In Utero Experience 2. Advanced In Utero Work 3. The Heart and Uterus Connection 4. Working with the Grid 5. Prior Pregnancy, Abortion, and Loss 6. The Mother Wound 4. Chapter 7: Healing Family and Ancestry 1. Working with Ancestral and Familial Patterns 5. Chapter 8: Healing Past Lives 1. Working with Past Lives 2. Past Life Beliefs and Personal Responsibility 6. Chapter 9: Healing Cultural Energies 1. 
Working with Cultural Energies 2. Healing Power Loss 3. Healing Power Loss by Place or Event 4. Power Gain through Taking 7. Chapter 10: Working with Archetypes and the Multitude of Selves 1. Working with our Multitude of Selves 8. Chapter 11: Healing the Central Myth 1. Common Myths and Fairy Tales that Construct a Centralized Myth 2. Releasing the Central Myth 9. Chapter 12: Tying Things Together 1. Before the Session 2. Aftercare 3. Prompts 7. Closing Thoughts 8. Further Resources 9. Also of interest from Findhorn Press 10. About the Author 11. About Inner Traditions • Bear & Company 12. Books of Related Interest 13. Copyright & Permissions Introduction Much of medicine, whether allopathic or holistic, understands disease. We are labeled and understood as a disease. We are Cancer, or Chronic Fatigue, or Depression. We are understood only by what is pathological, what has gone wrong within us. Our treatment and care is based on the named diseases and imbalances we carry, not who we are as individuals. Our treatment and care is focused on sickness, not on health. When we receive care, we tend to only receive treatment for part of the picture. We are not a whole body—we are a large intestine, or a gallbladder, an emotion, or an ankle or leg. Or we are a constellation of symptoms that has been labeled: Crohn's, PTSD, Lupus. We are then further segmented based on the perceived mind-body split. Our physical complaints, those that can be seen and labeled and tested (and will show up on a test definitively), are treated by our physicians while therapists and counselors treat our emotions or "mind." If we have dipped our toes into holistic therapies, we may have received care that assists with mind and body, or care that understands that our minds and emotions and body can help or hurt one another. 
While this may still be a bit of a leap for some (and old hat for others), if you were to ask anyone with depression if they experience physical symptoms due to their depression, the answer would be a resounding "yes." Similarly, if you were to ask anyone with cancer if they were experiencing difficult emotions or had an expanded spiritual perspective as a result of the experience, the answer would also be a resounding "yes." There is no doubt as to the capacity of allopathic medicine to assist with emergency-type care, or its ability to offer pharmaceuticals to those who need them to ensure proper balance and functioning of their systems. As allopathic medicine still has not fully embraced the understanding that mind and body do indeed interact (even decades after the term _psychoneuroimmunology_ was introduced), we are likely many years away from any type of framework in mainstream medicine that would allow for anything other than a purely mechanistic approach to healing. While this type of care is important, it is just scratching the surface of our healing potential. Methods of individually understanding our history are best represented by healing methods such as Traditional Chinese Medicine (TCM) and Ayurveda, both of which understand that five people may walk into an office with stomach pain in the exact same location for five very different reasons; those five people need to be treated according to their history, their patterns, and their unique mind, body, and spirit. Those five people with the same exact stomach pain may receive five different treatments, as the reasoning behind their pain would be different. Our emotions and thoughts and body and spirit all create our relative illness or health. Who we are is not just a body and then a mind. We are a whole. Not only that, we are not only a mind and emotions and a body—we are also energy and spirit. 
Most forms of care, even holistic forms of treatment, miss out on the fact that we are indeed energy, matter, spirit, mind, and emotions. It is only by understanding all aspects of things, as well as our own unique history, that we really have all the pieces of the puzzle for what may be out of balance within us. It is through understanding all aspects of what makes up our imbalances that we can begin to truly and deeply heal. We are not a disease or an imbalance, or even a pain. We are not segmented into parts or purely mechanistic creatures—we are a whole person. We should not be treated this way, and we do not heal this way. When we are treated this way we look for symptoms, we look to suppress anything that is uncomfortable. By looking within, by looking for our own unique constellation of reasons why a disease, imbalance, or pain emerged for us, we can treat the root. We can heal the root. We can truly understand who we are and the workings behind why a disease or other pattern has emerged for us. We can align ourselves with our innate wholeness and health, instead of our sickness or imbalances.

Healing versus Curing

Our bodies first start talking to us in whispers. Pain is a scream. If we begin to listen to our body, we can realize when it is whispering rather than wait for it to scream. If our body is screaming, we can begin to understand why and soothe it. We hold the inherent consciousness, the knowledge, and the capacity of healing ourselves within. Many of us simply do not have the tools to access this resource, known in this book as the _body deva._ There is a difference between healing and curing. Healing is something that will arise from utilizing the tools within this book, but healing is different from being cured. So what is the difference? The difference is that healing permits us to release the energies, emotions, and experiences around an imbalance, disease, or pain.
Healing can lead to a full understanding of why an imbalance came into your life, what function it serves, and all of the different "parts" that caused it to arise: the emotions, experiences, and beliefs, as well as the physical and genetic causative factors. While this may allow symptoms to leave, or even allow something to be cured, it doesn't happen every time. What this means is that if someone has terminal cancer they may still physically die, even if they are healed. We have a lot of cultural fears around death, and I use this example to share that we may heal in ways that will cure whatever imbalances we carry. But we also may find healing, as in understanding of why experiences have arisen for us; this allows us to have acceptance and let go of emotions and beliefs that we carry about the situation, and to face whatever obstacles life throws at us in a peaceful and prepared manner. I realize that this may be difficult to hear, and I may lose a few readers at this point who may have been hoping for a simple, easy fix to all that they carry in this world. But it is a part of authentically describing the process, and of radicalizing the notion of what a book in the self-help space can actually do, to understand that someone can deeply heal and yet may still have physical, emotional, mental, and spiritual difficulties in their lives. We all will. Life in a physical body is difficult. Our lives will inevitably throw us curveballs. We experience loss, connection, joy, and heartache, and it is a part of the beauty of this world that we do so. But we can weather the storm of the ups and downs in our lives in better fashion. We can experience more freedom and expansion in our lives. We can stop sabotaging ourselves and become who we are truly meant to be. By looking inward we can appreciate this journey more, and let go of the baggage that we have accumulated in our daily lives.
We can fully understand who we are, why we are, and become more conscious about why our bodies and our lives are out of balance. This may result in great physical change, a lessening of symptoms, and more joy and laughter in your world. I do hope that it does, as the weight of the world and the heaviness of what we carry is often such an incredible burden. We are often unaware of just how much of a burden we carry until it begins to lift. We are often unaware of how many limiting beliefs we hold, and the impact of those beliefs on our lives, until we realize that we no longer need to have them shape our world.

The Clarion Call

A clarion call to take responsibility for our own healing has been sounding for decades, and as meditation and inner work grow in popularity we are more and more accepting of the fact that we must take part in our own healing process to truly get well. This is not to say that we should not visit healers, medical professionals, and others on our path, as those skilled professionals can be of great help (and can get us out of some very stuck places). They can compassionately assist us and listen at the deepest levels, moving us forward in profound ways. Someone who has attended to his or her own healing path in an in-depth way can be a profound catalyst to our own process. One of my teachers used to say, "Shared pain is lessened pain," and I do find that to be true. Work like this can lead to significant and deeply felt emotions and experiences. This is part of the process, as healing can be both blissful and at times uncomfortable. We are so used to numbing ourselves that any sort of pain or difficulty is feared or shoved aside. It takes courage to look within, as what is unhealed inside us may not always be pleasant to consider. But it is only through unearthing our wounds that we can face them as well as eventually free ourselves from them. If we are truly ready, we can look within.
We can understand ourselves in a different, in-depth way. We can realize why we are in pain, why we are so angry, why we react or think or understand ourselves to be a certain way. We can know why we relate to the world in a certain manner. We can understand why we "loop" or repeat the same behaviors and patterns again and again and break free from them. We can know where our beliefs come from, and change them. We can be active participants in our own healing process, and evolve toward who we are intended to be in this world. We can take back the power we have so freely given to others in hopes that they will heal us, and realize that the resources to heal ourselves were always within us—they were just hoping to be recognized and worked with. Most importantly, we can begin to free ourselves from the ties that bind us. Effective healing work allows you to understand the messages your body and soul are trying to relay to you; these messages allow you to live from a place of truth and authenticity in this world. This world needs authentic people. It needs people who are willing to look within. It needs people willing to move beyond the limiting beliefs, heavy emotions, and other energies that cause so much pain, both personally and collectively. Our culture is changing, and so are we. We are ready for authenticity. We are ready for complexity. We are ready to look within. We are ready to let go of the empty promises and the easy, quick methods of healing. And this book will teach you how to do so ... or at the very least will introduce a new way of doing so. Through this book, may you learn to lighten your load, heal, and understand yourself on a new level. By understanding ourselves in all our complexity we can sweep aside the illusions fed to us by others, the hoped-for but never found miraculous cures and sudden wealth or health, and instead, look within to truly heal and understand who we are on the deepest levels. 
This knowledge will allow us to venture forward in our lives in a more free and joyful way, with a sense of purpose and connection that many of us are looking for, but rarely find.

Introducing the Body Deva

This book will introduce you to the _body deva_, or the spiritual consciousness of the body. Some of you may currently believe that spirituality is something that lies outside of yourselves, or utilize spirituality to escape from your bodies and lives. But really, spirituality should bring us closer to our bodies, closer to our lives. We should be able to become healthier, understand ourselves on a deeper level, and become more embodied if we are engaged on a spiritual path. Learning to work with the spiritual consciousness of your body, the body deva, will allow you to do so. The body deva does not require a specific religion or spiritual path to work with; if anything, the body deva may connect you to a greater sense of spirituality in whatever spiritual path or religion you have connected with already. The body deva allows us to communicate with an inner resource that will have insight about our body, mind, emotions, and even spiritual patterns. We can learn to use the body deva to understand and heal everything, from physical pain to childhood patterns to past lives. The body deva represents our inner health. It is our vitality, our essence, the wisdom and consciousness of our human form. Returning to this resource again and again throughout this work allows for a perspective that focuses on health and deep inner knowing, rather than story, pain, trauma, emotion, or whatever lies unhealed within us. We continually hear from what is unhealed within; it is time to place our focus on the inherent health and wisdom of our bodies and utilize it as an ally to become more whole. We have a deeply intelligent consciousness in our human form.
We can know what our body wants, we can discover who we are, and move through this world from a solid basis of understanding what we want and need from ourselves and from the world. Imagine being able to ask your body about why it is in pain, how you can move forward in your life, or even what to have for dinner. All of this and more is possible if we consciously work with this intelligence. This book will teach you what you need to know to connect and work with the consciousness of the human body. In some ways, this work is quite simple: we are communicating with the inherent wisdom of our bodies. How to connect to and communicate with this consciousness, and the many ways we can work with it to effect healing, are what is offered in this book. This work is inherently practical, and you will learn many tools and resources to facilitate working with your body deva. You will learn to connect to the consciousness of the body as a whole, the consciousness of individual parts of the body, as well as how to focus on specific topics or patterns and resolve them with the body deva. At the end of the book is a chapter on how to tie everything together, or how to best utilize this method as a whole. It is suggested that you work with your _body deva_ until you feel connected to it before moving on to other chapters. By having a solid connection to the body deva as a whole, you will have a trusted ally in this process—one you can return to again and again for answers as well as assistance in the following chapters. It is fairly common to be interested in more advanced work, such as working with ancestral patterns, as you start out. Such work can be exciting to think about and work with. But by working with and understanding your own body, your own body consciousness, and what your own experiences of this world mean, you will be able to work with the more advanced material in a deeper way. 
We heal best through gradual work, through taking one step (or a few steps) forward in our lives at a time. By grounding ourselves in our own bodies and lifetimes first, we can have a solid foundation to move into more advanced thoughts and exercises. This work is like meeting someone from whom you have been disconnected, or to whom you were never properly introduced. For some of you it will be an instant "click," when you realize that you have met before, or have met in a different way. For others, especially those who are disassociated or regard their bodies as something unsafe, it may take a bit more work to strike up a friendship with the consciousness of the body. Both paths require a bit of willingness. It is suggested that you be compassionate toward yourself, wherever you may lie on that continuum. This work is always compassionate, even when you are working with subjects or emotions that are culturally considered "dark" or "bad." By offering compassion and connection to even the darkest aspects of ourselves, we can hold every part of ourselves in the highest regard. This does not mean that we turn these shadowy parts of ourselves into a palatable or acceptable "light," but that we consciously show compassion to the darker aspects of ourselves so they can be accepted and offered healing. This work has emerged from my experiences in Craniosacral Therapy (with its framework rooted in Gestalt, Reichian Therapy, and Psychosynthesis), Zero Balancing, Traditional Chinese Medicine (acupuncture and herbalism), Energy Work, Heart-Centered Therapy, Shamanism, and Spiritual Healing, as well as a personal meditative practice of almost twenty years. I have written this book because I believe that many of you out there are searching for exactly this information. Over the last ten years, I have developed this work by utilizing it with clients.
I have consistently tweaked and learned what truly worked (and what didn't) in order to find something that is coherent and helpful for everyone—from someone who is just starting to consider that their pain may have an emotional component to people who have worked with a spiritual or inner path for decades. I wish to thank all of the teachers, students, and clients who have helped me to develop this work and allowed me to become who I am in this world. I am always grateful to be of service to those who are willing and ready to look within.

About My Journey to This Work

For most of my life, I was either partially or completely disassociated from my body. Although I understood where my physical body was in space when I was doing things like walking or taking part in physical activities, the idea that I could be grounded and really take up residence in my body didn't occur to me until I was in my early twenties. When I graduated from college, I began to consciously realize how sensitive I was. I started to understand that my body didn't feel safe to me; it didn't feel like a place where I could really feel at home. I protected myself as a result of my sensitivities. That protection took the form of escaping, of being anywhere other than in my body, and also involved blocking off sections of my body, or closing down by armoring or shielding as a mechanism for dealing with life. At the time, I lacked the skills to navigate my sensitivities, and as a result was frequently overwhelmed by them. When we are overwhelmed, we pull away from our bodies and become ungrounded. We close down. We don't recognize who we are, and start to hate parts of our body as they become more and more distant from us. I started to feel a call to become embodied, to learn to take care of and nurture my body in a way that I had not before. I enrolled in massage school and earned certification in Asian Body Therapy and Thai Bodywork.
I was always fascinated by energy and had explored Reiki, but at that point I thought of energy, spirituality, and the physical nature of the body as three separate courses of study. I could not see how they intersected or informed one another. At the time I didn't understand why I was drawn to Thai Bodywork or to clinical massage. Although I was fairly decent at both of them, I was a sensitive and introverted type, while many of my classmates were athletes, assured and confident in their bodies. They also tended to be more physically and scientifically oriented and were excited about studying things like kinesiology, biology, and anatomy. I later realized that learning about the human form, studying its anatomy and physiology, was an important stepping stone toward my later work, which has been grounded in the realization that our physical bodies are conscious, and that physical issues, mental energies (thought patterns and beliefs), emotions, and spiritual energies that are out of balance are held as static or frozen energies in our system. This is hardly the first time that someone has had some of these revelations, I realize. Mind-body-spirit has been kind of a buzzword for the last twenty years or so. The idea that there were some types of healing that could span more than just one of those (mind, or body, or spirit) was a revelation to me at the time, however. I first really felt how emotions releasing from the body had an impact on the functioning of the body in a course on craniosacral therapy. For those who are not aware, craniosacral therapy is a light-touch form of bodywork that works with the nervous system and with the membranes that serve as the "lining" of the body, giving our spinal cord and brain, as well as other parts of our body, form and protection.
There is also a focus on the small movements of the bones of the body that are rarely paid attention to in other forms of bodywork, as well as the protective fluid that bathes the spinal cord and brain, and the fluid nature of our bodies as a whole. This was my basic understanding of craniosacral therapy at the time, and is still the typical information that most craniosacral therapists give out in pamphlets and other promotional materials to introduce the subject. But really it went deeper than that. Here, for the first time, was a modality that approached not only the physical body but the energetic, emotional, and physical nature of the body simultaneously. As a massage therapist, I had experienced people having emotional releases on my massage table. They might have a release of grief, or even of anger, but this was always approached as an aspect of the relaxation response, or of how the body can hold on to emotions due to stress, such that by releasing stress, the held emotions would release. In this workshop I began noticing that my body was releasing through the energy channels (or meridians) of the body, and I was fascinated to note that the places where I had pain in my body lacked energy movement and were also places that held deep emotions and memories. When these memories from childhood or other traumatic experiences came up, the area that held the blockage would start to have flow, or energy moving through it, that I could feel; not only that, the pain or dysfunction in that part of my body then lessened, or ceased completely, in response to the emotional and energetic release. I then began to realize on a deeper level that our energy creates us. We are not separate physical, emotional, and spiritual energies, like a set of Russian nesting dolls stacked on one another. I started to see clearly that our energy, emotions, mental state, and physical nature were not only linked but that they were inseparable. We are energy. We are consciousness.
We do not "hold" emotions in our physical form, as if it were simply a container. Our world, and who we are, is energy. It is consciousness itself. Some of it is denser (like our physical body) and some of it is not dense at all (like our spiritual nature), but it is all energy. I began to understand that our energy creates us and is on a continuum, with our physical bodies and lives simply being the most dense, or noticeable. Consciousness creates and forms everything that we think, do, and are, including our physical body. It creates us, and it creates the physical world, and if we change our energy, we change our world. Although this thought of how we are made of energy had crossed my mind previously, I didn't realize the impact of how even seemingly existential or spiritual patterns could create imbalances in the physical form until I began delving more deeply into my studies.

_The more flow we have in our bodies, the more flow and ease there is in our daily lives._

_Stuck and painful areas of the body often reflect stuck and painful beliefs._

These realizations have followed me since those days, and I have discovered how we can feel the ways in which we are stuck, emotional, or out of balance in our lives in our physical form. Not only that, by resolving emotional and spiritual patterns, our physical body will respond, opening and allowing more flow. As I took more and more responsibility for my own healing path, I began to understand that I could look to the outer world to see what was unhealed within me. What I reacted to, who I reacted to, would point me to areas of my body and imbalances in my body that were in need of healing. I have always had a tight lower back. In my early school days we had Presidential Fitness tests, and one of those tests was to try to stretch far enough to reach your toes. They did this by sitting you at a block of wood that had varying measurements for how far you could reach. I was always awful at this.
In a Zero Balancing workshop (another hands-on modality that focuses on the connections between mind, body, and spirit, but this time with a focus on bones and joints and how they articulate with one another energetically as well as physically), someone started to work on my lower back and I began to have an emotional release. For anyone who has had bodywork, or is a massage therapist or engaged in mind-body studies, that experience is not terribly unusual. But this time I not only had an emotional release but an understanding of an ancestral component to the pattern, realizations of a profound spiritual nature that centered on loss and grieving. All of my life I had experienced a lot of grief around me. Due to my high perceptual capacities I could see and sense it. It colored my world and had an impact on how I related with others. I had a difficult early childhood, so counselors always pointed to my childhood as the source of such feelings. But I always knew somehow that it went deeper than that. I just didn't have the appropriate experience or vocabulary to express it. And in this instance, I had a vision of ancestors lining up—ancestors who had fought and tried and failed and lost and triumphed. Once this spiritual pattern with its emotions came up, my back released and I could touch my toes. Until then, I had only ever been able to reach as far as my ankles; now I could hook my fingers around my big toe. And this was all because I released a spiritual pattern, and the emotions and beliefs surrounding that pattern, through my physical body. The considerable grief that had always had such a palpable presence began lifting and changing for the first time in my life. Until this time I had not realized the extent of the sort of Charlie Brown cloud of grief surrounding me and within me. With this Zero Balancing session, I not only became conscious of just how much grief I carried but experienced the clarity of some of it beginning to lift.
I also began to realize how much I carried that was not mine—that emotions, experiences, even beliefs could be handed down to us, and that we would experience these energies as our own because we naturally assume that what is within or around us is ours. It took me many years to understand how we are shaped and formed by so much more than our personal history, and that incredible healing could be effected by becoming aware of and releasing held energies that were passed down to us by our family, ancestry, culture, even the place and time in which we were born. I realize what you may be thinking here—that massage and bodywork are intended to release aspects of the body, that a good massage can do wonders. Perhaps you may even realize how important touch is in easing things like depression, grief, and patterns of isolation. But at this point I had been through massage school, further bodywork and energy work training, and had started my master's degree in Traditional Chinese Medicine (acupuncture and herbal medicine). I had experienced quite a bit of bodywork, energy work, as well as therapy, and nothing had had any impact on my back, except this. This allowed me to have a further revelation regarding how our spiritual nature, emotions, state of mind, and physical body interact with one another. I found separate information in each sphere, or sometimes would find where one would overlap another. For example, I found a lot of information about how mind and body interact, and how our emotions can be held in our physical body. But what I was really looking for I found very little of: how we experience our spiritual nature through our physical bodies. I was looking for how we could approach the entire continuum—physical form, mind, spirit, emotions, and energy—through the physical body simultaneously. I was looking for how our spiritual nature could be explored in an embodied way, making an impact on our daily lives and physical bodies.
I found that most resources on spiritual factors of disease were too simplistic—dictionary definitions of how lower back pain comes from low self-worth, for example. Most of these resources came from individuals who had little experience working on or with the physical body, or with clients of any type, and minimal education in health or even holistic healing fields. The basic fact that we are all individuals, and that five people may come in with back pain for five very different spiritual, emotional, energetic, and physical reasons, was never explored in these resources. To make things even more confusing, we often have multiple patterns, and so between those five people there may be hundreds of different reasons why that back pain has emerged. Until we look at the individual, and not the disease or dysfunction, we will not move beyond the sort of surface-level ideologies that are unfortunately so prevalent in mind-body-spirit studies, and which ultimately limit our own healing capacities. On my own healing path, I began to see that for significant healing to occur I needed not only to approach the mental, emotional, spiritual, and physical reasons for something occurring (for which we might go to many different healers—one for physical healing, another for spiritual, and so forth) but to work with the entire continuum, simultaneously considering the thoughts, beliefs, emotions, spiritual nature, and physical factors of pain or imbalances, thereby making a larger impact on them than treating them separately. What was needed was a bridge between all aspects of Self—something that could heal the physical, emotional, mental, and spiritual simultaneously.
I also began to realize that everything from the most esoteric spiritual subjects to the mundane sorts of aches and pains that come from overwork can be worked with through our physical body; that if we release things through the physical body, our world, and our bodies, will significantly change for the better. I found that by working through the physical body, even while exploring something emotional, or of an esoteric spiritual nature, healing the spirit, mind, emotions, and body together had a profound, life-changing effect. This is because we are always focused on the physical world and our physical bodies. Unless we approach patterns through the physical body, our lives (and bodies) will not experience as significant a shift. By focusing on the entire continuum, we can heal everything held within that continuum—without neglecting any aspect of ourselves. For too long we have held these things separate. We consider ourselves a physical body, and our spirituality is elsewhere. We go to a doctor to seek physical healing, a therapist for emotional healing, and possibly a holistic therapist such as an acupuncturist for that mind-body-spirit connection. If we are lucky, we may find a holistic therapist who has explored mind-body-spirit connections and can help us make dramatic improvements in our lives. I have gathered a lot of education in varying spheres while looking for this information and for ways to bring it all together. I wanted to know how and why the body held so much information, how these varying spheres (mind, body, emotions, spirit) intersected, and how they could be worked with together. I have explored everything from esoteric spiritual material to anatomical and biological sciences in order to come to the conclusion that we experience everything through our human form, and that by releasing what is held within our bodies, our lives and our world will dramatically change.
I have now worked with hundreds of patients and students throughout the years, and each one of them has been able to lighten their load in some small or large way. Through them I have changed and recalibrated this work so that it is as simple and effective as it can be without sacrificing depth. And all of them have discovered what I did, and still continually discover—that the held emotions, patterns, and blockages within our bodies reflect where we are blocked in our lives, and that once we resolve what is inner, our outer world is much happier, more peaceful, and has a sense of flow to it. I do hope that through exploring this work you find what you are looking for. Each one of us who has the courage and willingness to look within not only lightens their individual load but also creates a ripple effect that extends to our family, loved ones, friends, community, and the world as a whole. By healing ourselves we truly heal the world.

Part One

Introductory Work

_Our bodies have a soul. We just have to learn to listen._

Within us is a consciousness known as the _body deva._ It is the soul of our body, the very essence of who we are in this world. It is the part of us that is vitally whole, conscious, and healed. Think about what would happen if you had a consciousness within you that could reveal what you are truly experiencing or feeling. We may understand at surface levels what we are feeling or thinking, but we have so many thoughts and realizations that we are not conscious of, or not fully conscious of yet. At our core, we are consciousness itself. What is unhealed within us obscures our inherent health and the vitality of our system. In Buddhist traditions, the state of moving beyond this obscuration is known as the "true mind," or experienced as a state of luminosity, emptiness, and freedom.
Ramana Maharshi's method of self-enquiry states that the mind is consciousness that has been restricted, and that our inherent difficulty lies in the fact that we understand ourselves to be the beliefs we have created about ourselves rather than pure consciousness, which is our true nature. Spiritual traditions such as Kashmir Shaivism recognize all that exists as part of one divine consciousness, with each of us having the ability to recognize this light or consciousness within ourselves. This tradition (as well as that of Ramana Maharshi) also speaks to the importance of the spiritual consciousness of the heart, which is one of the ways we can access the body deva. In animist and shamanic traditions, everything within and around us has consciousness. Every person, every organ, every part of our body, and every cell within us has a consciousness that can be spoken to. All of these point to the fact that we are able to access and work with a resource that aligns us to our deepest health, our deepest truth. By moving beyond the beliefs we have created that cause us to feel separate and traumatized, we can associate with our inherent health, freedom, vitality, and joy. We can move away from the pain and the stories we tell ourselves, and into a place of recognizing our inherent consciousness, divinity, and strengths as a unique presence in this world. Imagine releasing what holds you back, what tells you that you are not good enough, that you will never be healthy, that you will always be in pain. In our pain we are constantly rejecting ourselves, and it is by truly looking within that we can embrace ourselves ... for perhaps the first time.

CHAPTER ONE

The Body Deva

Many modern spiritual traditions suggest the seeker look outside of the body, or even state that the physical body should be transcended or ignored.
These traditions also come from a place and time where the individual may have been able (or willing) to drop their lives to achieve a state of spiritual transcendence. In our modern world we are beginning to realize that our physical bodies are not something to neglect or transcend; they are a part of our spirit, our soul. We can awaken through our bodies and allow our physical lives to be not only a part of our spiritual path but the basis or core of our spiritual path. We can understand that we are much more than our physical bodies, and our world is much more than what appears materially, and still be deeply part of our lives and bodies. We can become more grounded, more here, more embodied. We can experience ourselves and the world with less pain, more joy, and deepened connections with our loved ones. It is time to come home to ourselves, to realize that we can be spiritual and physical beings (and experience thoughts and emotions and utilize all of our wonderful senses) in this world. We can let go of that which is holding us back, that which causes us pain. By doing so through the physical body, and with the body deva, we will not only experience greater understandings about ourselves but will also notice our physical reality and our physical bodies shift to a state of greater health and consciousness. If some of this sounds a bit out there, I can commiserate. Twenty years ago I would have laughed at some of the concepts that I have found invaluable on my path, including this work. As with all things in life, I encourage you to work with this process with an open mind. You may be surprised by how you can engage with something that may be a bit out of your comfort zone initially.

The Physical Body

Our bodies are magnificent and can heal quite a bit on their own, but when we get out of touch with our body, and with the innate intuitive senses that our bodies hold, we become disassociated.
We can easily reach a point in which we can no longer tell what our body needs in order to stay well. Or we may be so stressed, depressed, or tired that the thought of our bodies being healthy or vibrant is a long past sentiment. We may fear or hate our bodies, and the idea of our body being an ally or the thought of being in any way embodied may cause panic to arise in us. We may also simply have never been taught the tools to listen to or work with the physical body in a compassionate, in-depth way. Some of us will be capable of handling even the most stressful events that happen in our lives. Others of us may feel worn down, be in pain, or be dealing with the busy nature of existence to the extent that we can no longer heal ourselves and are no longer able to process (or, basically, deal with) what is happening right after something traumatic or overwhelming occurs for us. When we experience an event, emotion, or belief that is too much for us to navigate, that energy becomes static. It "freezes" in that time and place. Our body consciousness (or body deva) is deeply intelligent and will section this energy off until we know what to do with it. Many of us are holding energies within our bodies that have been sectioned off at an earlier age due to trauma and overwhelm that we could heal, partially or fully, if we just knew how. Our body deva sections off these frozen aspects of ourselves so that the rest of our body can continue functioning. What is most important is the survival of the whole, and certain aspects of our physical form (such as the heart and brain) take precedence. By learning to resolve these separated energies, we not only become more functional, whole, and healed but our systems do not have to expend so much energy. It takes a lot of energy to have many sectioned-off parts; when we resolve them, our systems do not have to work as hard.
When we experience trauma, overwhelm, or something that is just too much for us to deal with at the time, we will not be able to process it appropriately. We lack closure. When we do not have closure on a spiritual level, this means that the energy, emotions, and experiences of this event are still replaying as if in a loop. This means that this event, experience, or emotion is unresolved. Our bodies hate unresolved things; they like closure, compassion, and understanding. By working with your body deva, you can begin to realize what sort of beliefs, emotions, and experiences your body is still holding onto and begin to release your "loops" through the tools found in this book. Through this work, the loops, or repeated behaviors and unresolved issues that we experience again and again, clear away. We have found resolution and closure and no longer need to "loop." We can "unfreeze" what is frozen and locked in time within us to become more whole, stable, and healed. We can release the restrictive beliefs that have created our world and block our understanding of our innate health and well-being. Our body consciousness is a gateway to understanding our physical natures. If we break our leg, we will need a cast to heal it. This is a clear and understandable line of rational thought that we will be entirely conscious of. But if we are exhausted, in pain with no clear idea of why, or dealing with internal disharmonies (like a chronically upset GI tract), we may have a medical diagnosis but not realize why our symptoms came about, or the simple ways we can be a part of our own healing process. Our bodies contain the memories and experiences as well as the scars and triumphs of our lives here. We again return to the idea that we are each individuals with unique histories; these histories make us who we are. Two people may experience the exact same type of back pain for two totally different reasons.
One may be from overdoing it in yoga one day, and the other may be holding the emotion of fear in their back due to a childhood experience. It is by communicating with the consciousness of the body that each of those people can discover (and heal) the cause of their pain. We are individuals, with an individual history, and that is the real key to healing. We need to know _our_ reasons for being imbalanced, in pain, or out of sorts. By learning to communicate with our body consciousness directly, we can surpass the mechanistic definitions given to us by outer culture of why we may be in pain and discover and, more importantly, heal on an individual basis. There is nothing more powerful than the experience of understanding what our body wants and needs to feel whole. I invite you to discover the body deva, the consciousness of your physical body.

_To start, we will begin discovering the body deva through imagery work. As you progress, you will want to work with the body deva in the more advanced ways shared later in the chapter._

Beginning Body Deva Exercise

Consider that there is a consciousness in your body that you can speak to. When you are ready, sit in a comfortable position without any outer distractions. We are going to begin working with the body deva through inner symbols. It is through working with symbols that we can initially understand concepts such as this in a simple manner. We relate spiritually through the use of symbols, so even if this type of work is totally new to you, or you don't feel as if you are a great visualizer, it is likely that, after a few tries, you will be able to create the appropriate symbol for you. To start, you are going to consider that you have a body deva. I would like you to visualize it. This visual can be anything, and the purpose of this initially is to create a sort of persona, or character, with which you can communicate on some level. It is likely to be something totally unique to you.
This could be you at your current age or another age. It could be something entirely extraordinary, such as a fairy or dragon. It could be a tree, flower, rock, cartoon character, geometric pattern, light, or blob. Trust whatever comes through to you when you ask to see or sense your body deva. In time this image may change, but there should be an inner sense of the body deva appearing and seeming "right," or at least correct for where you are right now. For some of you, drawing your body deva may be helpful. This is especially true for those of you who do not consider yourselves visual by nature, or who simply wish to have the experience of drawing to get to know your body deva on a more visceral level. To draw it, I suggest sitting with a blank sheet of paper and simply allowing whatever sort of pattern or shape appears to come up automatically. You do not need to have a plan or visual already in place. When you are done drawing, there should be an intuitive sense that the drawing seems right, or at least is right for right now. Once a visual has emerged, you will simply focus on it. Notice everything about it: what color or shape it is, what it looks like, what emotions you may sense from it. Simply allow a visual image of your body deva to step forward by putting forward the intention that you would like to see or notice it in some way. If you are feeling terribly blocked, or are not visual, you may wish to ask something like, _If I could see or sense my body deva, what would it look like?_ Many people do not consider themselves to be visual, or that may in fact not be their dominant sense. But anyone, no matter what sense is dominant, can construct a visual image through their other senses. It is how our brains put together our world. By this I mean that if you are not a dominantly visual person, you can use your sense of intuition or sense of knowing (a strong intuitive sense for most people) to create a visual.
If you are feeling blocked, it often is a protective instinct. There may be good reason why you do not want to make contact with the consciousness of your body. This is especially true if you are disassociated, in pain, or simply don't like your body. The idea of approaching your body with compassion and with the purpose of getting to know it on a more intimate level may seem frightening. If you are having difficulty and are feeling fairly blocked in your life overall, I suggest moving on to chapter two, where you will learn to work with blocks and resistance. From a spiritual perspective, we can communicate on some level with everything within our bodies or around us; we just may never have learned how to do so. The first time that people open the lines of communication it may seem silly, and there may be doubt about what information comes forward. All of that is perfectly natural. Simply write down whatever is said and consider it when you are in a more logical headspace. By this I mean that if your body deva says that you need to be eating an exclusively raw diet, or should move to Jamaica, or get a divorce, this may indeed be something that you are not willing, or capable, of doing. It also may be entirely the wrong decision for you to follow at this time, based on who you are and what your life is like. Throughout this process, you will always follow up anything that comes through with logic and afterthought. Basically, it will be a process of acknowledging whatever arises but then taking a step back and thinking about what the right decisions are for your life. Acknowledging what comes up, even if it is not acted upon, will be a powerful shift, because you are truly listening and acknowledging the communication from your body deva. It is easy to take this work too far and to think that the only thing you should rely on is your body deva, but we should be open to advice, thoughts, and information coming toward us from many sources. 
Our body deva is likely to be a trusted resource, especially in time, but always write down or think about what the body deva is relating, as it is always your decision to follow through or believe what has come across. When you are first starting out, it is typical to hear thoughts and advice that are more what you expect to hear rather than what the body deva is actually communicating. From the "negative" or mental aspects of yourself, you may hear a laundry list of what you need to do and who you need to be. By working with the body deva over time, you can hear more authentically and deeply, and build trust with the information you are receiving. At this point, you should have a strong visual of your body deva. You will have either visualized it or have a drawing of it.

* You will now introduce yourself to it. Say a quick, internal _hello_.
* Ask it directly, as if you were talking to another person (but internally), if there is anything you should know about your health.
* You may have a specific pain in your body. Ask if there is anything that you should know about the pain you experience.
* When you ask a question, listen for the answer. The answer may arise intuitively (you just get a sense of it or a sense of knowing). The answer may not come immediately. You may realize an answer a few hours, or even a few days, later.
* If you do not get an answer, do not worry. Just move on to another question. With time, the answers will come to you more clearly and readily.
* If you get an answer but it seems unclear, ask for more. ( _Tell me more._ ) Sometimes we receive answers in the form of a visual or one word. By asking for more information (multiple times, if necessary), we can find out all we can about what is coming across.

At this point, you can journal or just consider what you have heard from your deva.
You can ask your deva any sort of information about your body—and anything that has an impact on you, such as emotions, beliefs, and even your spiritual nature. As silly as it sounds initially, you may at one point be able to ask about finances or your job and get an answer. But to start with, it is often easiest to ask about the physical nature of your body. Some thoughts about what to ask about:

* **FATIGUE:** _Why am I so fatigued all the time?_
* **SPECIFIC DISEASES OR KNOWN ISSUES:** _What can I do to help my body feel better?_
* **EXERCISE REGIMEN:** _What is the best type of physical activity for me? How often should I exercise? Where should I exercise?_
* **DIET:** _What sort of foods should I include in my diet? What foods should I stay away from? Does my body like the food I am eating right now?_

Starting out, it is often best to keep questions simple and direct. Some of the answers may surprise you, or even may seem unusual or too simple. You may at first think that you are making things up. Especially when starting out, you will want to listen and acknowledge everything that comes up and then logically think about the answers. As you work with the body deva, you will establish trust and will receive more specific and helpful guidance. As with any relationship, it will simply take a bit of time to develop. For example, your body deva may seemingly be telling you to exercise five days a week for an hour. If you are a couch potato who hasn't done any exercise for a few years (if not a few decades), that may be wonderful advice but not terribly practical, and may actually be harmful. Acknowledging that this advice may be something to work up to, and considering how you want to start incorporating something new into your routine in some small or large way, would be the way to start truly listening to and working with your body deva. We heal through gradual, small steps.
Drastic changes rarely ever work; they are too severe and unsustainable for us. Treating ourselves with severity and harshness does not come from a healed and whole space. By listening to your body deva and acknowledging what arises, you can start to understand what your body is craving or may need. What is communicated is often quite simple—it is our mind that creates convoluted reasons and instructions. If you are receiving convoluted or complex instructions or responses, it is a good indication that you may need a bit more time to work with your body deva. If you are receiving harmful or worrisome thoughts, this is an indication that you are not connecting to your body deva but to something unhealed within yourself. You can then check in with your doctor, healthcare provider, or do further research on what came up, as well as logically think about how you can take action concerning what arose. By then taking some small action, you not only show that you are listening and acknowledging what your body deva has to say but start to foster a significant life-long connection to your body. Most of us have moved away from listening to our bodies and the wisdom that our body deva has to relate. While there are a lot of reasons for pain, medical issues, and the varying emotional and physical discomforts we experience, on an energetic and spiritual level pain and discomfort have to do with something that our body is attempting to communicate to us that we have not consciously acknowledged yet. By acknowledging and taking logical action, our bodies can feel heard, and will frequently cut down on pain, discomfort, and other symptoms that we experience.

**Sally**

Sally was a practical, professional woman who spent most of her day behind a desk doing personal assistant work for a large PR agency.
Although she had taken yoga classes and described her daughter as someone who was "into crystals and chakras and stuff," she had no meditation experience and came in to see me for an acupuncture appointment for stress, fatigue, and lower back pain. Sally described her life as incredibly stressful. She had three teenagers and a job where she was expected to complete quick turnarounds on projects and assignments. At night and on weekends, she described her state as one of collapse, not wanting to do anything or even communicate with anyone. Sally began working with her body deva quite simply. She visualized it (it looked like a tulip to her) and began asking it what the sources of her fatigue and stress were. The usual suspects of her work and keeping tabs on three children came up. But as she questioned deeper, she found that her fatigue was a way for her to get some self-nurturing time. If she wasn't able to move, she couldn't do anything for anyone else. Her fatigue allowed her to take care of herself. She had moved away from taking any time for herself and had never really allowed others to nurture her. When she asked what her body wanted her to do to heal her fatigue, it replied, _nature_. When she asked for it to tell her more, it stated that it would like her to be outside in nature, specifically in the gardens by her workplace, a few times a week. Sally felt that she could not go to those gardens so often, but thought that she could visit once a week at lunchtime. Sally started to visit the gardens and eat lunch there once a week. She felt peaceful and restored while there. She realized that her body wanted some stillness and a little time alone. Although she still had stress at work and her three children creating a chaotic household, the gardens became her "me" time. She eventually brought her children and husband to the gardens and began taking walks on her lunch breaks and after work.
Although she remained tired from her busy lifestyle, after a few months Sally discovered that her fatigue, back pain, and stress levels had all significantly lessened. She had more energy for her family. She continued to listen to her body deva, adjusting her diet and going to a massage therapist to help her with her back pain. What our body wants and needs is often quite simple. Even in more complex cases than Sally's, simply listening to, acknowledging, and taking logical action based on what the deva has to say can cut down on stress and help us feel better and more in tune with our bodies. When you are doing the activity or following through with what your body deva suggested, take a moment and call up the image of your body deva. Let it know that you are going for a walk, or drinking more water, or doing the breathing exercises it suggested. This will show that you are following through and will strengthen the connection, allowing you to be more embodied and in tune with your body deva. There may be certain cases where the body deva suggests something drastic: divorce, moving, or completely changing your life or career can be thoughts that come up. Those may arise because they are, in fact, something that you should consider. They may also arise because of fear, or expectation that your life needs some sort of drastic change. When beginning to work with the body deva, we may find that our fear speaks instead of the consciousness of our body. In chapter three, you will be working with fear and other emotions, and this will allow you to understand when fear may be speaking or when a deeper and truer aspect of you (the body deva) is communicating. The key thing is to be patient and work with the body deva over time. When you first meet another person, it is rare that the conversation flows naturally or that you discuss deep subjects. The same is true with working with the body deva. 
Some of you may take to this work quickly; for others, it may take working with the material in chapters two and three to move beyond blocks and fears and approach your body deva. Your body deva may tell you that where you are living is not correct, but that doesn't mean that you need to move tomorrow. If the body deva tells you to get a divorce, or do something elaborate or difficult, just acknowledge the information and then get a second opinion (or third) from a friend, family, or healthcare practitioner. Remember, we heal in small, simple steps, and although we often desire drastic change in our lives, that may not be the practical or correct way of going about things. We may also desire outer change because it is simpler than inner change, or we may lack the tools to know how to achieve the inner change we are looking for. Even if we should get a divorce, change jobs, or locations, simply acknowledging that information (even if it is not acted upon) can result in change. Bringing up patterns and realizations into consciousness is an important aspect of healing, and the importance of doing so should not be minimized. What your body deva says is also negotiable. Sally found that she could not go to the garden every day, but she found that she could go once a week over her lunch hour. Working with the body deva in a way that allows you to acknowledge and understand what it is saying, as well as take action, is what can allow things to fully heal.

Working with the Body Deva: Part Two

Now that you have formed a basic relationship with your body deva, you may wish to work with it in a more advanced capacity. To start, bring the image you visualized as your body deva into your body and ask where it would like to be located in your body or where it would feel most at home. This is an important step because we are starting to understand that the consciousness of our body is located within our physical form. This will allow a mental and emotional shift to occur.
At first, it is often easiest to consider spiritual subjects as an outer mental projection or visualization, but by bringing them into the physical body we can start to understand and make connections of this consciousness with the physical form. We can begin to realize and recognize it as a part of us. If you get an answer telling you where the body deva would like to be located (by visualizing it and then asking), visualize it merging into that spot in your body. Welcome it, and do your best to sense or visualize it there. If you are unable to hear a response, a good starting place to visualize your body deva is either in your lower abdomen, solar plexus, or heart center. Those are places that are easily "home" to a consciousness such as this. If it does not wish to merge, or still seems out of body (or partially out of body), simply ask again on another day. Eventually, the merging and realization of the body deva with your body will feel right to you. When you find a place within your physical body that feels right for the body deva to be located, again start with basic questions about your health as well as any discomfort or pain you are experiencing. Keep things simple and direct, asking questions like: _What can I do about my hip pain?_ Or _What is the best way for me to go about healing my digestive system?_ Listen to the answers, write them down, and think about them logically. Decide to act on what seems like a good first step (or two) and then do it. The body deva can be utilized to ask questions about anything, from what you should eat for breakfast to your career, from thought patterns and beliefs to questions of a spiritual nature. This is like asking a respected elder for their opinion and thoughts on a matter. By this I mean that we have many different types of consciousness that we can speak to. 
We will get more into this in later chapters, but for now, know that the consciousness of your body may have opinions on things like whom you date, what sorts of beliefs and thoughts you have about yourself, what you are eating for breakfast, and other struggles you may be experiencing in your life. Checking in with your body consciousness about someone you are dating, for example, may reveal that your body deva feels a strong connection to someone, or it may reveal that your body consciousness is feeling cautious or not quite secure with someone. Remember that the choices of what to ask your body deva are endless, and that if a question isn't answered it may be because you are not quite connected yet (patience is always a good thing), because it is not the right time for you to learn about what you are asking, or because the question just wasn't the right way to go about things. By keeping things simple and open-ended, as well as developing a relationship with your body deva over time, you are more likely to develop a life-changing resource that you can refer to time and again. Possible questions to ask the body deva once you have visualized it within your body and feel connected to it:

* Can you tell me why I have been depressed lately?
* What can I do to ease my fatigue?
* How can I move forward in my life?
* Is this job opportunity right for me?
* What sort of career path would be right for me?
* Is my partner (boyfriend, girlfriend, date) right for me?
* How did you feel when I was out on that date?
* How can I feel less stuck in my life?
* What sort of action should I take to move forward in my life?

There are many other questions that could be asked of the body deva, but some of these should give you a basic idea about what to ask. Most of the replies from our body consciousness are simple and positive. If you are hearing hurtful, aggressive, negative, or imbalanced viewpoints on anything, it is not from the body deva.
Our body deva represents the wholeness and divine nature of our physical form. Any answer that it gives will reflect this. Most of our care comes from a viewpoint of imbalance or sickness; in contrast, the body deva is a source of health and wholeness. By aligning ourselves with our innate health and wholeness, we can recognize a resource that is not speaking from a place of wounding or trauma. This is incredibly important, as even in our sickest or most imbalanced states we do have a part of ourselves that we can access that is vibrant, healed, and whole. That resource is the body deva. One of the reasons to bring the body deva into our body is so that we can recognize where these answers are coming from. In our daily lives, we have a tendency to overthink. We use our heads a lot, as our heads are where most of the realizations and understandings about ourselves and our world come from. But our minds contain a lot of insecurity, trauma, and feelings of inadequacy. They loop thoughts repeatedly, and we may hear that we are not good enough, that we may never be healthy, or that it is better to feel stuck because at least we know what to expect. If these thoughts or others like them arise, begin to question where the answers are coming from. You may know this intuitively, but ask if they are coming from the place where you now locate your body deva or if they are coming from your head. You can even ask your body consciousness to highlight where the answers are coming from and do a body scan (scan up from the feet to the top of the head and down the arms) to find an area that is drawing your attention or highlighting itself. If you are having discouraging or complicated thoughts, it is likely that they are coming from your mind; simple healing thoughts are coming from the place where your body deva has found its home.

Finding the Body Deva Within

To understand this body deva more fully, you will need to sit or lie down quietly.
Particularly during the first few times, you may find that lying down provides a better reading of your body, and you will be able to notice it better. Now bring your attention to your midline. Your midline is what separates your body into left and right parts. The connection of the body deva to greater spiritual information is largely through the central channel (midline). In energetic theory this would be referred to as the "primary circuits," or the _sushumna, ida,_ and _pingala_ in yogic theory. The primary chakras of the body emerge through the midline. In craniosacral therapy, the energy of the midline allows the body to organize energetically and physically. In esoteric literature, the "middle pillar" exercise awakens and strengthens the midline, and the midline is where _kundalini_ (the realization of our conscious, awakened nature) emerges and flows. Physiologically, our central nervous system is housed within our brain and spinal cord, and energetically our nervous system is the "receiver" of what we perceive. This can be problematic for highly perceptual individuals who find their nervous systems "blown out" from either receiving too much stimulus or from not having the appropriate tools to know how to calibrate and nurture the nervous system. In Traditional Chinese Medicine, the central channel is referred to as the _du_ and _ren_ , and is the first "circuit" to emerge energetically when we are being formed in utero. The _microcosmic_ and _macrocosmic orbit_ of Taoist meditation practices, the inner tree emerging along the spine from creative visualization practices, and many other spiritual practices and methods of seeing the body beyond its physical form all point to the significance of the midline and it being the base of our consciousness and power. 
This means that the body deva not only is the consciousness of our physical form but also has a large channel, or highway, that flows through our midline (separating us into left and right) and through the energetic circuits of the body—the sushumna, ida, pingala, and the chakras—to connect to information of a more spiritual nature or information beyond the experience of the physical body. It is able to receive and translate this information for us in a way that can provide relevance and healing to us now, in our present lives and bodies. For example, while carrying out the visualization exercise in the previous section, you may have found that the "home" of your body deva is the heart. This is the root of our consciousness, the fulcrum, or pivot point, of our midline. Our hearts are often deeply protected and carry such wounding that it is difficult, if not impossible, for many of us to access the deep wisdom and consciousness within. In time, though, many of you may be able to heal the layers of protection as well as the pain that the heart is holding. You may then be able to truly awaken and sit with the consciousness of the heart and the deep wisdom of the body deva emerging through the spiritual heart itself. It is not important to have a background in yogic theory, or to even understand fully what the above means, to start working with the body deva. It will connect using these energy structures, even if you cannot feel or sense them. However, some of you may be ready to feel both the consciousness of the physical form and the expression of the body deva into greater consciousness through the midline. If you are able to connect to your midline, or the energy that divides you into left and right parts, this will be like a stream of energy or a simple awareness of your midline. You may also visualize a black line going through your midline. 
If this is the case, begin to ask questions of the body deva, specifically the line of energy that was formed up your midline. You will likely get a greater and deeper sense of knowing in your responses. When you interact with the body deva through the midline, you will experience the emergence of a natural orientation toward stillness, centering, and grounding. If you are not quite at this point yet, do not get frustrated. This is fairly advanced work. However, by working with your body deva, even as an outer symbol (the first exercise), you can begin to really and truly understand yourself in a way that you likely have never had access to before.

One way to discover the midline is to picture a beam of light coming from the earth, between the legs, through the genitals, up the midline, and out the crown of the head. It would ideally extend about six inches (or even farther) above the head and below the feet. Most of us are not adequately embodied or in tune enough with our bodies to visualize or feel this energy, but know that you can start wherever you are and see results by working through your body deva and have this structure "grow" or become more optimal. You may also find in time that you can feel your midline, as well as a natural focusing, or light, in the heart area. This energy should never, ever, be forced. We frequently focus on "doing" in our culture, but it is by simply sitting with this structure and acknowledging it that we will come to a state of greater awareness. Similarly, please do not force the heart to open. We are so hard on ourselves, and carry such wounds in this area, that anything other than compassion aimed at the heart is often perceived as violence, or something that is attacking or creating further difficulties for a heart that likely is already struggling.

Body Maps

A body map is a simple way of seeing how your body is doing. We will use it to check out how present you are in your body, or what areas might need a bit of work.
To do this work it is easiest to get a piece of blank paper and a pen (any color will do). There is no artistic talent or in-depth anatomical knowledge needed here because we are going to represent the body with a stick figure. This stick figure is going to represent how embodied you are, as well as what areas may need to be worked with by utilizing your body deva and "talking" to your body (the next section). If you were fully in balance and embodied, this stick person would be a "full" stick person, with solid lines connecting all areas of the body. This means that the feet would be connected to the ankles and to the legs and so forth. A healthy and energetically vibrant body would have thin straight lines that are all connected and all present. If we have an area of our body that we are disassociated from, that area will not have a line at all—there will be a disconnect (or empty space with no line). If we are only somewhat embodied, there will be a dashed line. If there is something else going on, like a lot of stuck energy or pain, there may be a line that is way too thick, or not straight, or off at an odd angle.

The purpose of the body map is to allow you to sense or see areas of your body that could likely use some healing or to inquire about using your body deva. This could be as simple as asking, _Body deva, why is my ankle missing from my body map?_ There can also be a direct interaction with the body part that is missing or imbalanced, as we will see in the following section, where we will learn to directly communicate with the body deva utilizing specific parts of the body.

There are a few ways that we can determine what our body map may currently be. The first is the simplest and often offers the best information. You will simply sit in a quiet space and allow yourself to consider your body map. This is simple intuition, and there are no wrong answers.
You will then draw your body as a stick figure, considering which areas may be dotted, which may be absent, and which may be thick, not straight, or otherwise stand out to you. There should be a sense that the body map seems right, or is a reasonably correct representation of what may be going on in your body. For people who are quite visual, working with the body map can be done without drawing. To do this you would do a simple body scan, starting at the feet and going up to the head and down the arms, connecting to your body deva first and asking it to highlight or show you areas of your body map that are missing, not connected, or otherwise out of balance. It is likely that your gaze or focus will automatically go to areas of your body that are not fully a part of your body map. In some cases, we have been disassociated from large sections of our body for a long period of time so it may seem as if our entire abdomen, or entire lower body, is missing. This is actually perfectly natural to notice. As a culture we tend to be really focused and centered in our head, so it is not unusual to have large parts of the body map missing or seemingly out of focus. Many people are only embodied in their heads, or above their shoulders. Finding this out may be a bit of a shock, but working with the body deva will allow you to gradually embody and your body map to shift. What the body map is showing are areas to consider working with in the next sections, as well as allowing you to consider, for perhaps the first time, where in your body there is consciousness and flow (connected energy), and where in your body there might be lack of flow or lack of consciousness. In an ideal state, we would have a perfectly connected body map that is fully present. It is somewhat rare that this happens, as we all have areas to work on. We can all be more connected to our bodies. 
If we are not, it just signifies that we have some work to do, and it can be an exciting process to do healing work from the next sections and then return to your body map to see how you are doing. It is likely after doing work in the next section, as well as the next chapters, that your body map will change significantly, becoming more vibrant, connected, and present. By doing a body map before doing further work, you can not only find areas to work on but also use the work as a barometer. The body map will show you "before" and "after" pictures as "proof" that there has been a shift in your body through this work.

Talking to Your Body

Our body deva is our body consciousness and can be talked to as a whole, but every part of our body holds consciousness. Our body deva _is_ consciousness—present in every cell of our entire body. From an animist perspective, everything in our bodies has something to say; everything has consciousness. It may sound strange at first to realize that your big toe, or an individual cell in your body, or even organs like your heart or liver, have consciousness (let alone that you can strike up a conversation with them), but our bodies are a repository of untapped, individually based intelligence just waiting to be revealed to us. We may find that our body consciousness as a whole and our individual "parts" have different thoughts and advice for us. This makes sense when we consider that the body deva is responsible for our bodies as a whole (as well as the emotions, thoughts, and energies within), while a part of ourselves (like our big toe) is mainly interested in itself, or the surrounding structures that support it. To work with this section, I first suggest connecting to the body deva (wherever you are at with that, even if you are still visualizing it as an outer symbol) and asking it a few questions.
As discussed in the last section, these questions could be more general, such as, _Can you tell me what is going on with my health?,_ or more specifically, inquiring about an area that is actively in pain, such as, _Why is my big toe in pain?_ You may also choose to work with your body map, noticing an area that feels off or out of balance and then inquiring about that by saying, _Body deva, can you tell me why my abdomen is not part of my body map?_ You may also choose to work with an organ or area of your body that you know is not doing well. Perhaps you have had lab tests that have shown that your liver or thyroid is out of balance, or you feel stuck or in pain in a specific part of your body. After you have questioned your body deva as a whole, you will then work with the specific area, such as the big toe or abdomen, and work with its individual consciousness. This means that you are still talking to the body deva, or the consciousness of your body, but it will just be in a more focused manner now. Once you have your "focus," or the area of your body you wish to work with, you can continue. If you find multiple areas on your body map that pique your interest, you will either simply choose one or ask the body deva, _Which area should I work with first?_ You can also ask your body deva to highlight or draw your focus to the most important area to work with, or to work with first. You may also find it important to ask what the _linchpin_ or _fulcrum_ is. In some cases, our body map shows multiple parts because they reflect the same pattern and energy. For example, your throat and abdomen may both be holding the energy of anger. Asking your body what the fulcrum is allows for discovery of what may be creating or at the root of a larger pattern. In time, you may discover that a focus can be an emotion, person, or an element of your life that you want to improve on. 
To work in this manner, you would do a body scan, and ask for where this energy is held in your physical body. It sounds odd to think that our relationship with our partner or our financial difficulties can be held within our physical body, but they can be worked with through the body deva with great effect. This is advanced work, however, and we will move back to simplicity, such as finding a big toe or an abdomen to work with.

How to Talk to Your Body

* Start bringing some gentle attention and focus to the body part you have selected.
* How does this body part feel physically? Heavy, tight, painful, absent? Describe the physical sensations you notice.
* Now, what do you notice energetically? Does it feel full of energy or empty? Does it feel connected to the rest of your body? Does it feel connected to the next part? (For example, does your foot feel connected to your ankle?)
* Recall what this body part was doing as part of your body map. Was it completely absent? Did it appear as a dotted line, or otherwise out of balance?

In talking about an area of our body that is out of balance, it either will be energetically "empty" (lacking energy) or "full" (has blocked energy); in some cases, it might be both, as our blocked energy can lead to a feeling of lack of connection, a sort of emptiness. We can also have multiple layers of patterns in one area of our body. There is typically emptiness at the root of this, with a lot of fullness (and pain or significant imbalance) on top.

Continuing on, we will discover more about the imbalance, or what is creating blockage or lack of energy in this part of your body:

* Consider if your body part either is empty/missing, has blocked energy, or both. If you cannot sense this, just use your sense of knowing for right now (if I could sense what was happening energetically in my big toe, what would I sense?).
* Now, we are going to focus on the nature of the blocked energy.
It is much more common for there to be some form of blockage, especially if there is pain in a body part, even if it has been disassociated from. If you do not sense any blockage, you can move on to the next section, but chances are, you will notice some form of blockage, especially initially.

* If you were to visualize this blockage, what would it look like?
* How large or small is it?
* What shape would it be?
  - Even if it is a blob of dark goo, that is still a shape.
  - Does this shape remind you of anything? Our bodies may reveal an image that takes us back to a particular time. For example, a pen that we had when we were twenty years old. By noting this, we can receive insights as to when in our timeline this energy became "frozen," allowing us to move into inner child work.
* Is this blockage dark, light, or a specific color?

If you are not getting answers by sitting and quietly observing your body, you can call up the body deva and ask these questions as well. If you are used to mind-body work, you may further inquire as to what issues are being created in your body due to this blockage, whether there is an emotion that can be sensed that goes along with this blockage, or even how long the blockage has been there. One way of doing this is to ask your body deva to heighten or show you what this imbalance is doing to your body. If you ask this, simply feel this heightened imbalance and note what effect it is having on your body. Don't question the answers that come up.

By bringing gentle focus to the area, either a lot or just a little information may arise. It is okay if none or only some of your questions get answered. If you find that there is emptiness there, you will similarly sit with the emptiness, and see, sense, or understand it the best you can by simply sitting with it with gentle focus. It may look like a black hole, or nothingness filled with gray, and you may not be able to connect to it at all at this point.
If we were to return to the big toe example, you may look down at your big toe and realize that you can sense the bottom of your big toe, and not the "knuckle" or toenail of that toe. We are now going to communicate directly to the body part:

* First introduce yourself. This may seem silly, but saying a quick, internal _hello_ can really start this work in a helpful way.
* Then ask if you can ask some questions. In most cases, it will say yes, but if it says no, you may simply need to work a bit more with your body deva as a whole, or with some blockages or fears (next chapters), in order to be able to work with this part of your body.
* When you are ready, you will again ask simple, basic questions:
  - How long have you been holding this energy?
  - Can you tell me why you are in pain?
  - Can you tell me why you are blocked?
  - What do you need to be healthier or less stuck?
  - Can you tell me what emotions you are holding?

Sometimes, if we do not get a response, it is because we didn't ask the right question (in which case we can simply rephrase or ask a different question), because it may not be the right time for an answer, or because we are not ready to hear the answer. We may just be at the beginning of trying this out, and like any friendship, striking up a conversation and "making friends" can take a bit of patience and work. We may find that we can transcend some of this, even a lack of being able to connect, by asking, _If I were able to hear an answer (or connect to this body part) about why you are in pain, what would that be?_

When you ask your body what it needs, chances are that you will receive an answer. Usually it is fairly simple; sometimes it is not. No matter the answer that arises, you will again take a step back and consider it logically. For example, if our stomach says it needs more water, that isn't too much of a logical leap. If our toe says that we need to quit our jobs and move to Alaska, that may take much more consideration.
You may choose to consider what you are hearing on the spot. For example, if our stomach needs more water, we may state that we are happy to drink more water. We then would actualize this in our daily lives by drinking more water. This is how we can link up with what our body is saying and let it know it is being heard. When our body as a whole, or a body part, is heard, it no longer needs to scream its message. It can simply talk, or even just whisper.

If what is coming through is complex, I would again check your energy. Is what is "talking" coming through your head or from the body part itself? This may not make complete sense yet, but when a body part versus our mind communicates with us, it has a much different feel. We are so used to overthinking and using our minds that the simplicity of the knowledge coming through from our bodies may cause us to doubt the message. If what is coming through is something you need to consider, you can simply say thank you to the body part and close the communication. If what arises is not right for you, or something you are not able to do, you can still say thank you. Even the act of being heard can allow change to occur in that part of your body.

If your body part states that it wants you to eat more meat, and you are ethically a vegetarian, you may wish to compromise. You may ask if eating meat once a week would be helpful (if you are willing to do so), or if there are other products or ways that you could introduce the element of "meat" into your life. Basically, you can negotiate. What you come up with in conjunction with your body part should be something you can actually follow through on in your daily life. Actualization is part of this process; without it, the shift that takes place will be far smaller.
This means that our bodies love to be heard, and in carrying out what you discussed with your body (again, this should be something that brings you towards health, and is not destructive in any way), the physical world will create a "bridge" between the mental, energetic, spiritual, emotional, and physical levels of your body. This actualization means that every part of you—mind, body, emotions, and spirit—will have an opportunity to shift and heal in conjunction with one another. When you end your conversation, you will always want to say thank you to this body part. Expressing compassion for ourselves, and for a part of our body that may feel unheard, unloved, or that we dislike because it is in pain is a part of being more loving to ourselves, and garners much better results. After you have said thank you, you will ask your body deva and the individual body part to shift or change in relation to being heard. Simply ask for it. However, the process of listening to, truly hearing, then actualizing what comes across will also start to shift things in the body.

If we go through these steps, we will often find that our body map will shift significantly. Sometimes this is immediate, such as right after a chat; other times, it is after we actualize the energy in the physical world. No matter what happens, by doing this work over time we will become more embodied and healthier. It can sometimes be helpful to keep the original body map that we have drawn and do this work over a period of time (a few months), then draw new body maps. Chances are, our body maps have significantly changed. It is hard for our minds to realize how much we have healed. We tend to focus on how far we still have to go, so establishing some form of physical reminder is always a good, concrete way to recall just how far we have come.

Variation on Talking to Your Body

You can utilize this same work to focus on an emotion, theme, or pattern that you are noticing.
To do this you would pick an emotion that you have noticed come up for you (for example, anger) and start out by asking where that anger is stored in your body. Pick a theme (say, not being able to stand up to a family member), and ask where in your physical body that theme is. Or you may be feeling stuck in your career, so inquire where in your body you hold that stuckness. First check in with your body deva, and find out any information it can provide. Then do a body scan or notice where your focus goes in your body when you have picked your theme. You would then continue with the conversation I listed above.

Although, at first, it is easier to communicate with the body deva as well as individual body parts regarding pain, we also hold the imbalances of our bodies and lives within our physical form. We can find imbalances of family, work or career, our negative thoughts about ourselves or the world, or even feelings of being spiritually stuck or disconnected within our physical form. Our body deva can lead us to discover where we hold these imbalances within our physical bodies. The more you do this, the easier it becomes. Gradually, you will know exactly what your body is saying, and be able to act from a place of knowing precisely what to do for your individual situation and body. In further chapters, we will discuss more advanced situations, including how to talk with your body in more depth to understand where a pattern or blockage comes from. For now, though, communicate with the body deva to ask about imbalances in your daily life and find out where they are being held in the body. Ask the individual body part for more information about what it is holding or why the topic or imbalance you are questioning emerged. For a summary of how to work with the consciousness of the body, you can move to the chapter Tying Things Together at the end of the book.

**Ken**

Ken contacted me due to knee pain.
He was a tennis player and found that as he got older, his knees were often painful after working out. He jokingly referred to this as part of the natural aging process but was willing to do whatever it took to be able to do what he loved. He began by drawing his body map. To his surprise, the rest of his body was present, except that there were huge black lines where his shoulders were, and his lower body (from the hips down) was dotted or completely absent, except for swirls and a big black dot over his left knee. In communicating with his body deva, he found that he had a tendency to carry energy in his upper body due to his work. Ken was an intellectual and writer who spent a lot of time in his head. Additionally, his body deva revealed that his body was so out of balance because a divorce ten years ago had "knocked him off his feet," and that feeling of lack of stability had followed him around since that time. He did follow-up work with me (inner child work, although he was in his thirties when it happened) to find further information on how the divorce affected him, and he simply asked his body deva how to get more in balance. The body deva revealed that he needed to get back out there—he needed to start dating again and heal the fear that resulted from his rather messy divorce. He listened to this and agreed that he really wanted a relationship and was willing to find someone. Ken then began talking with his left knee, which was the painful one for him. It revealed that it didn't feel supported by anything; it felt alone. It felt disconnected from the rest of his body and was taking on the impact of his tennis playing. It revealed that it needed to be connected to his hip and feet and feel overall connected with the rest of his body. He agreed and asked the body deva if it would connect his knee to the rest of his leg. He then went to see a Feldenkrais practitioner, who was able to help him walk in a more balanced way. 
In doing these things, Ken found that he was in minimal pain and that the thoughts of knee surgery were in the far past. He found the support he was looking for; by following up with his body deva as well as his knee, his body map now shows energy in his lower body, and he is able to play tennis with only occasional and minimal discomfort.

**Sheila**

Sheila came to me because she experienced a lot of rage in her life. She would find herself becoming incredibly infuriated while in traffic, at politics, and with people who she thought were trying to "screw her over." She asked her body to highlight or show her where it held this rage energy. Much of it was held around her diaphragm area, and when she sat with it she described it as a big Texas-style belt that was put on too tight. She felt constricted and in pain and saw the colors red, purple, and black when looking at the area.

She began talking with her diaphragm, asking why it held this anger energy. It responded that she felt angry because she needed space. Anger was her way of acknowledging when she was overwhelmed and would push people away from her. It also allowed her a way to vent her frustrations. It revealed to her that because she had a big stockpile of anger within her, every time she was angry it was the proverbial "straw that broke the camel's back," and she would explode. Her body revealed that she needed to heal some things from childhood (as well as, eventually, past lives) because that was the part of her that was really angry. She then did the inner child work with great success. But at the beginning of her journey, her diaphragm revealed that it wanted her to exercise. Although she did exercise, her diaphragm specifically wanted her to start boxing. She thought this was odd, as she was used to doing things like running, but agreed. She now goes to a boxing class once a week and finds that she no longer has any explosions of anger.
She did further work with her "inner children" (chapter four), and finds that as long as she boxes once a week, she is a calm, collected, and balanced person (well, 95 percent of the time; an hour commute still can create some havoc on occasion).

CHAPTER TWO

Working with Resistance and Blocks

There is a force within us that wants us to stay who and what we are. This force can be called many different things: blocks, ego, inner wounding, fear. It can also be called _resistance_—the parts of ourselves that do not want us to move forward or outright block us in our quest to heal and improve our lives. We view resistance as something to be actively fought against, a negativity within us that must be overcome by persistence or a battle that we inevitably lose. Rarely do we recognize that there may be a vested reason why we may be resisting.

Our resistance is our protection. It is our fear and the accumulation of the disappointments, traumas, and difficulties that we have experienced in our lives. We resist because we fear change—who we would be if we released a core part of our identity. We resist because we believe that any change is equivalent to death... and in a way, we are correct. If we release something that has created beliefs and understandings, something that may have been a core part of our reality, of who we consider ourselves to be and what we consider the world to be like, it is a type of death. No longer are we the person we once were. Rarely do we consider that after death is rebirth and a release of the shackles that have held us. On the deepest levels of our soul, we equate surrender and release with death. On a more surface level, we fear change because that means that our lives and our concept of who we are will change along with it.
We fear the unknown, and it often takes experiencing incredible pain on some level (physical, emotional, and/or spiritual), followed by a mythic journey to the depths of the soul in search of healing or alleviation for that pain, before we discover that this type of death is necessary. We are rarely conscious of the fact that who we are is continually in a state of flux. We "die" and move through cycles of death and rebirth quite often. Some of these are larger, such as moving cross-country, divorcing, getting married, or releasing a long-held belief, and some are smaller, such as deciding to eat something healthy for breakfast instead of sugary cereal. Each day, our decisions and who we are move through this death-and-rebirth cycle. By recognizing that we are continually engaged in larger and smaller cycles of death and rebirth, we can become conscious of the fact that we can work with and move beyond our fears of change.

Our resistance is also what is unhealed within us. We like to think of ourselves as one thing, one person, but we are not. We are a multiplicity. When we experience trauma or overwhelm, a part of us freezes at that age. We move on, but we now have an inner six-year-old who is unhealed within us. We accumulate all of these "small selves," and they are the ones that feel unheard, unloved, and in need of protection. They are the ones who are resisting, out of the fear that comes with trauma and overwhelm. It is by healing and integrating these aspects of ourselves, as well as acknowledging that our resistance may not be coming from a current, adult space, that we can begin to heal and move forward in our lives without resistance or self-sabotage. Our resistance is the relative force of what is unhealed, disconnected, and fractured within us. The more we heal, the more whole we become; the more connected we become, the less resistance we will experience.
If we understand that what is resisting is not our current, adult self but parts of ourselves that are deeply wounded, fractured, and frozen in time, we can begin to understand as well as develop compassion for our resistance instead of battling against it. Our resistance is often there for a very good reason: it is offering protection. It shields us from experiencing similar events. If someone has gotten their heart broken very badly once (or a few times), it is understandable that they would be resistant to opening their heart again for a new partner. If someone has experienced a significant illness, such as Lyme disease, which wrecked and ravaged their immune system and digestive system ten years ago, they may not realize that their bodies are resisting healing because they are protecting a system that once needed a huge amount of protection or to be blocked off merely to survive. If we were living in poverty and later have enough money to buy ourselves something nice on occasion, we may resist because we are still in a state of fear and protection from twenty years ago. We may not realize that our fatigue is from ten years ago, when we had young children or were in school or in a difficult job. The difficulty is that our bodies, once they have experienced trauma or overwhelm, no longer recognize or see themselves as a cohesive consciousness; they are fractured and no longer recognize the whole. Our inner selves and parts of ourselves that have experienced pain and trauma rarely understand that we are not six, or fifteen, or forty-five any more. That it is not last year, or even two weeks ago. When our bodies experience pain and trauma, they not only freeze but individual body parts remove themselves from the body map. They do this to preserve the whole (the integrity of the body) the best they can. Basically, the show must go on, and the rest of you needs to go to the grocery store and to work. 
The part of the body that has frozen will resist becoming a part of the whole because it believes itself still to be the same age as when it distanced itself from its body map. It also is doing the best job it can in containing and holding the sickness, disease, emotion, traumas, and other imbalances that it holds within it; it will naturally not want to come back "online" and become a part of the body map again until its needs for protection are healed first. This may mean that your inner six-year-old is very afraid and needs protection, or that your body needs to release inflammation, anger, fear, jealousy, grief, or depression and is protecting itself because it would be too overwhelming to release it all at once. Our minds are funny in their capacity to seek protection by resisting. When we heal in a significant way we must shift physically, mentally, emotionally, and spiritually. Whatever level we are working on reverberates out to the other levels. While much of our resistance does come from fear, and understandable fear at that, it is our minds that lock us into place, that tell us that we cannot heal, that we are not worthy of healing, and that it would be too frightening to go through the "death" process (not revealing, of course, that rebirth is on the other side). Our minds tell us to stay with the known, because we know what to expect. Even if our lives are deeply unhappy, we know what to expect in our routine, and there is a sense of safety there. Our minds will take on black-and-white, either-or thinking as a form of resistance. If we are not exercising, we set up a plan to exercise for 60 minutes every day. If we are unhappy with our lives, this means we should move far away and quit our jobs. Engaging in simplistic, dualistic thinking allows our resistance to have a voice and sets up a battle within. 
It allows the unhealed forces within us to say that we are unworthy, that things would be too difficult, and that there is no middle ground in any situation. Watch out for this type of thinking as a way to engage in resistance; the path of the middle ground, or more gradual path, is often the best. Reading over all this, you may think that it is surprising that anyone heals, or takes significant steps forward in their lives. But they do, and you can as well. It is by acknowledging and feeling compassion for your resistance that you can move beyond it. It is by realizing its function and giving yourself permission to move forward at a rate that you are comfortable with steadily over time that healing is really and truly successful. Our resistance does not need to be battled against, or treated as a villain. It is by treating even the resistant parts of ourselves with compassion and understanding, allowing them to be heard, that we can move past our resistance. What makes moving beyond our resistance difficult is that it requires openness as well as a willingness to ask questions. For example, we may realize that our belief that nobody loves us is false, and seeing that makes us realize how long and how forcefully we have maintained this belief. With clarity, we may see what sort of damage or difficulty we have imposed upon ourselves in our lives. With this type of clarity, we may realize that there are some people in our lives who represent our wounds and need for healing rather than a successful relationship. We may also realize that our feet hurt, not because of a spiritual pattern of needing to move forward but because we need to visit a podiatrist. It is natural to want to push our resistance away. We may fear it, hate it, shout at it, and get angry at it. We may feel blocked in some way, having tried to force ourselves beyond this block with no change or tried multiple methods of healing with little success.
We may create lists of all of the things we resolve to do tomorrow, or in the New Year, succumbing to the resistance and what is known each time. We may simply sense that we are in some way holding back, or are not what we could be in our lives, and notice a subconscious (or barely conscious) realization of blockage or resistance to moving forward. These are all ways to notice that you are experiencing resistance. Resistance is natural. It is by compassionately and directly inquiring why it is there that we can begin to work with it, instead of battling it and setting up unhelpful polarities that create more resistance.

Working with Resistance

There are a few ways to work with resistance. The first is to work with it whenever you notice it come up. For example, say you are working with your knee and your knee reveals that it is holding a lot of anger from your divorce. You ask what it needs, and there is no reply. Or you feel some sort of shift but that there is something still there that you cannot get answers about. You may also notice a sense of stubbornness, lack the ability to even focus on your knee, or do not hear any answers (or perhaps a _This is stupid_). You can also do this work separately. This means that you may wish to focus on your resistance for a period of time, because you either know it is a big issue for you or you are simply ready to bring some focus and healing to the topic overall. We all have this push-pull to moving forward and wanting to stay who and what we are. Working with resistance generally, as well as separately, can really allow you to understand all the parts of you that may not wish to engage in inner work. When you are doing the work with the body deva, or any of the work in the following chapters on inner children, past lives, and ancestral healing, you may realize that in some way you are resisting.
While, certainly, information sometimes just doesn't come up, the energy of resistance feels like stubbornness, a blocked or "hiding" sensation. You may also find that this work isn't going anywhere—not because you haven't made a valiant effort, but because there is a layer of confusion or some type of feeling of obstruction. You may also find that resistance presents as impatience, a thought that you should be able to heal everything you carry within on the first try, or a belief that you have no inner work left to do. By approaching these beliefs as resistance, they can be moved past and further successful work can be done. There are a few ways to work with this. When you are doing work with the body deva, ask it to show you the energy that is resisting. For example, you are doing work exploring why you may have stomach pain and realize that your entire abdomen is not a part of your body map. When you begin to talk with your abdomen, it seems confused and doesn't give any answers. You realize that on some level there may be resistance, or some form of protection blocking you from connecting to your abdomen. You ask your body deva to show you what part of your abdomen is holding this resistance. While it can be the whole abdomen, usually it is only a segment, or small portion, of what you are working with. You will then ask your body deva to see this energy clearly, as a specific color separate from how you view the rest of your abdomen. You can then try asking that color (or that resistance) to step aside so you can work with the rest of your abdomen. You will then work with the area that remains, realizing that you do not need to heal everything in your abdomen at once. If you realize that your resistance covers half of your abdomen and you successfully work with the other half, that is still a lot of progress. 
Successful work utilizing this method results in a change in physical function, lessening of emotional and energetic baggage, an expansion (or clearing) of previously held beliefs, and a change in your body map. The body map change will come more gradually, as it can take up to a few weeks to fully integrate and consciously comprehend how much change has occurred.

Working with Resistance Via Inquiry

This work is always done in a body-centered way. This means that our resistance, blocks, and fears always take up a physical space in our bodies. To start, you will do a body scan (start at your feet and move your way up to your head, not forgetting your arms) asking your body deva to highlight or draw your attention to where your resistance may be located. If you have a lot of blocks or complex patterns going on, you may find multiple areas, or even one large area, highlighted. You may feel as if your entire body is resistant. If you find multiple areas, it can be helpful to be more specific; when asking about resistance you may ask for resistance towards healing, or towards healing your knee, or your relationship. Although resistance to healing something like your knee may be located in your physical knee, sometimes it is not, so it is always helpful to be open-minded when exploring. If you are starting from a place of already being immersed in the previous chapters and are encountering resistance, you can still do a body scan to see where the resistance is. It is most likely going to be in the area you are working with, but it can still be helpful, in developing a relationship with your body, to ask and be open-minded enough to explore. If you have multiple areas of your body that are showing resistance, you will either ask your body deva to highlight or draw your attention to the most important one, the "fulcrum," or you will simply pick one. It is quite common to have multiple layers and types of resistance, and eventually they all can be worked with.
You will now sit with this part of your body that holds the resistance, noting things about how the resistance is felt as an energetic restriction or block in your body:

* What size is it? (Big, small, quarter-sized, pancake-sized?)
* What shape is this energy? (A circle, blob, squiggle, pattern?)
* What colors do you notice? (Is it dark or light?)
* What does it feel like? (Tight, heavy, empty, pulling?)

The purpose in asking such questions is to get the best idea you can of how the resistance energetically "sits" in your body. If you are having difficulty with this, you can ask your body deva to highlight or really bring forward this resistance so you can sense it more. The purpose here isn't to create more pain but to bring into consciousness how this energy of resistance is felt and seen. If you are already working on something, such as knee pain or the emotion of anger in your pelvis or even an ancestral healing (all covered in later chapters), if the resistance is in the same area it is likely to present differently from whatever you are working on. You will ask the resistant energy to step forward within the body part you are working with. In some cases it may not present differently, as our resistance may in fact be creating a great deal of pain or difficulty for ourselves, or be the core issue of why we are holding onto an emotion, physical pain, or spiritual pattern. You will now communicate with this resistance within your body. For example, let's say you notice a dark, circular shape in your pelvis. After asking what is holding you back, or what you may be resisting, ask the dark circle questions, such as:

* Can you tell me why you are here?
* How long have you been here?
* What would happen if you were no longer here?
* What are you afraid might happen if you were no longer here?

Remember, our resistance is most often protective and may likely be from an entirely different place and time.
You may not have had the resources to handle certain information or specific emotions like anger or fear as a child (or even six months ago). The first line of protection is typically to protect you from consciously realizing something. This may be certain memories, emotions, or even the way you currently feel about a specific part of your life. It may be a realization that there is some sort of change needed in your life, such as a change in job or in your relationship. Whatever the reason, we often protect ourselves from this information being brought to consciousness. This can be because the information may be difficult to acknowledge, but often it comes from a place of knowing that if we consciously realize something, then we will need to take action on it. This does not need to be true. We can simply realize something and have it brought into our consciousness without action. Letting our resistance know this—that we can simply receive all the facts or understand what is going on with ourselves so that we can determine over time what to do about it—is one of the best ways to move past resistance. The other aspect of resistance is fear—fear that if you were to know something, you would somehow not be able to handle it. This again is a protective mechanism, and it often brings up valid points. If we were to open our hearts again, they might be broken again. If we were abused as a child (and a part of ourselves is still frozen at that age), we may not want to open or release energy from an area because it was an area of violation that perceives that it needs shielding. If, as a result of a parent, child, or friend passing, we experienced a lot of grief that was overwhelming at the time, we may have sectioned off that grief because we simply needed to go to work and get on with our lives. Understanding and feeling compassion for this resistance and understanding the time and reasoning for the resistance is critical. 
We are not commanding, pushing away, or telling resistance that it is "bad" here; we are inquiring about its purpose. And as it reveals that purpose, we can determine how much, if any, of the protection it offers we still require. If we find that our resistance is from a much earlier age—such as the abused child I referred to—we may simply wish to tell this resistance that we appreciate it but are no longer five years old and do not need the amount of protection it offers. Although it seems like a silly thing to say, ask the resistance if it realizes that you are your current age _(Do you realize I am not ten years old? I am actually forty-two. Can you acknowledge this?)._ The information about your current age may allow the energy of resistance to "unfreeze" and become aware that you may no longer need as much, if any, of what it offered. If the resistance is a sectioning off due to overwhelm, we again may state that we are no longer at that time and space (whether that was six months ago or several decades), and that now is the right time to begin to heal and work with the resistance, as well as the underlying pattern that created it. The key is to work with the psychological concept of _titration_ , a process that involves the gradual release and negotiation of healing at your own pace to avoid overwhelming your inner resources. For example, imagine that you have a huge block in your pelvis from childhood sexual abuse, with a significant amount of resistance layered on top of it. Releasing that entire pattern all at once would be too difficult for anyone, no matter how stable or experienced in this type of work they are; the associated energy and emotions would be overwhelming to the person's current body consciousness. Moreover, it is simply not necessary to experience this level of catharsis. Such huge, disruptive releases create havoc in the body. 
So when working with resistance (or any of the healing methods utilizing the body deva in this book), the most healing method is to work with it step by step in a paced manner. We live in a culture where we like advanced, best, and now—if something is not described as simple, easily attainable, and quick, we move on. But if we are willing to have the patience to work in this gradual manner, in a compassionate way, we can effect healing that is profound, gentle, and life changing as well as life affirming. So when you are working with this resistance, and it reveals its fears and reasons for being, as much as possible acknowledge what it is saying. Something like, _Yes, I understand that my anger is how I keep people at arm's length,_ or _Yes, I realize that at some point I needed to be protected from men (or women)_, or _Yes, in my household, growing up, I couldn't express my grief because children needed to be seen and not heard, but I am forty-five now and no longer need to contain or prevent this grief from arising._ It may not be that deep, by the way; it could simply be, _Yes, I realize that I am tired because I work seventy-five-hour weeks and have two children_, or _Yes, I realize my knee hurts because of that snowmobile accident in 1975 and that I am resistant because you sometimes just have to do things because you need to do them, despite the pain._ Sit with and acknowledge whatever you can about whatever arises, and ask yourself (or return to the body deva and ask) if you need this protection "fully," "partially," or "not at all." If you need this protection fully, you will simply say thank you to this resistance for serving your needs. If this happens, do not feel a sense of failure; conscious acknowledgement of what is going on does result in change. You may simply need external resources (such as counseling, healing work, and so on) so that you are in a better place to work with your resistance.
You may also wish to remind the area of your current age again. By not forcing this area to change and approaching your resistance with compassion, even if it does not wish to change at all, you will find it much more receptive in later conversations as well. Most of us gravitate toward the "not at all" option—as humans, it is our natural tendency to want to rid ourselves of anything we perceive as blocking our bodies (like ripping off a Band-Aid all at once). This is often a mental answer, so check in with your body deva, and again notice where the information is coming from by placing one hand on the area you are working with and sensing if it is coming from your head or the body part. You can also ask the body deva to highlight the part that is "speaking" to check your answers. If the answer is "partial need for protection," acknowledge this feedback, and ask the consciousness of the specific body part you are working with (for example, the abdomen) to change or shift as much as it is ready to. There is no forcing here. If done correctly, the image that was in your body (the energy, color, shape, and size) will have changed when you look back at it. You will then ask the body deva to integrate this shift with the rest of the body. You may wish to end with gratitude for the protection that your resistance offered, as at one point you may have needed this resistance. It served a vital function, and although it may have been misguided in its efforts, it had a specific and positive role that on some level kept you safe. It is rare that we express gratitude to the things within us that we perceive as dark, difficult, or resistant, and the highest level of healing is being able to express compassion and love to all aspects of ourselves.

Working with Resistance via Symbol

Similar to the body deva, we can also work with our resistance by creating a character, symbol, or image for our resistance.
This method is helpful for working with our resistance as a general concept, or if the previous method is not garnering enough information. To do this you would do the body scan or find an area of resistance (exactly the same as the prior method), and again note how the area you have found it in feels physically, as well as sense an inner visual, shape, or size of the energy. You always want to engage the body first to ensure that this work does not turn into mental gymnastics or a method of disembodied creative visualization. What allows for healing is the embodiment and clarity that comes from truly engaging with the consciousness of the physical form. You would then ask your resistance to step forward as a character or symbol. As you can see, we are working with similar methods, just with different intentions. Our body deva imagery, that healthy body consciousness that we can tap into and gain wisdom from, is going to be different when it takes the form of a visual image or "character" than the image or "character" of resistance.

* You will simply sit with this image and see it as clearly as you can. If you cannot see it, what would you intuitively "know" about it? This is often our strongest sense and can be utilized to create a visual.
* When it seems somewhat clear, you will then say hello to it and ask it if it has anything to say. A journal can be really helpful here, for writing things down afterward.
* You will now ask it what it is offering you protection from. What would happen if it was not protecting you?
* Inquire what age it is from. This may not result in an answer if it is a bunch of different ages. If it answers "Forever," it may either be from early childhood or an energy that has been passed down to you.
* Be compassionate to this fear and resistance. It is protecting you, even if it is misguided in its efforts, or you no longer need its efforts.
* Once you understand the fear, you can negotiate a bit.
Let it know if you no longer need protection, or as much protection, as it is giving. Let it know that you value its efforts, but that you would appreciate it backing off a bit (say this nicely).
* Most of all, say thank you. The highest embodiment of compassion is being loving and compassionate toward everything within and without. This does not mean that this fear becomes love, or something deemed acceptable; it means that we are willing to listen to every single aspect of ourselves with the highest regard. We are willing to listen and accept fear as much as the joyful parts of us. This is true shadow work, and it will allow for significant inner (as well as outer) transformation when done over time and with some patience.

It is helpful to have many different ways to do inner work as you will find that sometimes one method works in instances where another one does not. You may also find that, at first, creating an outer visual is easier, or easier to do on its own. Whatever method you choose, remember that your resistance did not show up overnight, and likely will take some time to leave. You will know that you have been successful when you perceive changes in the energy you have visualized and a change in your body map. You may also be able to move beyond the resistance to understand and communicate with the consciousness of that body part now. Even if there is a slight change to the symbol (such as becoming smaller or changing color from black to gray), this will allow you to access more information, or allow whatever you were working with before to be more readily accomplished.

Questions about Resistance

To work your way around mental blocks, a line of simple questioning is always helpful:

* What if I could hear what is going on here?
* What if I could hear what you have to say?
* What would happen (what is the fear) about hearing this?
* Would it be okay if I hear about this pattern without having to do anything about it?
* Do you realize that I am (current age)?

These questions are often helpful when you are in the midst of working with your body deva, an aspect of your body, or any of the following chapters, as they are ways to negotiate or subtly move beyond the mental blocks that form resistance. We often believe we need to do something with the information we receive, or deeply fear hearing something because to be conscious of it means we would have to take action. Taking that off the metaphorical table is incredibly helpful in moving past resistance, as is asking the questions about fear, protection, and simply "what if" type questions.

The Energetics of Resistance

For every force there is an equal and opposing force within us. The force that we have propelling us forward also has a force of equal strength keeping us back. Understanding this force, and how the energetics of momentum and resistance works, can allow someone to move beyond the heightened resistance that forms when engaging in healing activities or meditations. When we are going to engage in healing work, inner work, or anything we may plan for ourselves in order to move forward in our lives, a heightened energy develops, a momentum developed from our planning and intentions. But here is the difficulty: the greater the energy we expend on becoming a "new person" or doing something new, the greater the resistance that builds. Although this sounds odd, it is by doing things without mentally building them up that we can succeed in cutting through the built-up energy and achieve something new, whether that is a new art project, going to the gym, or doing the work in this book. If you are able to maintain the mentality of, _This is simply something I will be doing, and if it doesn't happen that is perfectly fine_, it will allow you to move beyond your basic resistance, as well as the sort of mental planning, resolutions, and promises to be a changed or new and better person tomorrow that we all tend to loop through.
When we engage in this manner, the momentum of resistance does not build as drastically as if we were to create the opposing momentum of "change" energy, and we are more likely to be successful at actually doing something. While sitting with and consciously thinking about things is always helpful, as a culture we are so in our heads that it is rare that we actually follow through on even a fraction of our dreams. We can use this principle of opposing forces to simply state something like, _I would like to go to that class at my gym. I am going to sign up for that class, and if I go, that is great, but no pressure_. This is a lot different from the energy of someone stating, _I need to start lifting weights. Starting tomorrow, I am going to lift weights for an hour every day._ The first person is more likely to make it to the gym, and the second is likely to have mysterious things pop up, such as stomach aches, things to do at work, or other forms of resistance, which will prevent them from going.

**Lucy**

Lucy was aware of a great deal of anger and fear that she held within her. Her parents were refugees, and as the eldest in her family, she was in a position of having to take care of the other children while her parents worked long hours. Lucy was diagnosed with endometriosis, thyroid issues, and IBS, and generally felt low in energy. She noticed that this anger was pretty widespread: in her pelvis, heart, throat, and jaw. In talking to her body as well as her body deva, all she heard was unintelligible screaming or nothing at all. She started by asking the question, _What if I could hear what is going on here?_ After asking, she still couldn't make out anything other than screaming, pain, and darkness. Lucy had been through therapy for five years with little change in symptoms and so was looking for spiritual healing and related methods to assist her. She found most of her resistance in her pelvis and throat.
When asking where the fulcrum of the resistance was, she found that the pelvis was highlighted. The energy she found there was like a brick wall, impossible to penetrate, and too tall to look over. This wall told her that she had too much anger and pain within her, and if the wall were to disappear she may never stop crying. It told her that she not only carried her own anger but that of her mother, and that she was taking responsibility for her mother's pain as well as her own. I asked her if this was an _ancestral pattern_, a family pattern that extended back farther than her mother, and her pelvis agreed that it was holding onto the pain and anguish of several generations of women. Lucy said that the resistance and pain were creating difficulty for her, and asked if the wall was willing to change or shift a bit so she could actually work with the anger, fear, and grief underneath. The wall was cautious but allowed a small window to open. This window allowed Lucy to work with her childhood and ancestral issues and release a bit of the energy in her throat, jaw, and pelvis. She then returned to the wall, asked it to look at her current, adult body, and asked if she still needed as much of this protection, as she felt she didn't. The wall agreed, but cautioned that she needed to work through some fairly difficult emotions in order to heal. She said that that was okay, and the wall became thinner and short enough for her to look over. Over the next few months, Lucy worked with ancestral issues, her inner children, and the body deva and found that her anger was no longer a factor. She then worked with the grief and the despair she found. The wall slowly dissolved and released, and she found herself happier, in a new job where she was appreciated, and reconciling with her mother and sisters. Her digestion vastly improved, her periods became less painful, and she found upon returning to therapy that it was more successful this time.
She did further work on contracts (chapter five) and found that she experienced more freedom in her life and was less critical of herself and those around her. As you read these case studies, you may be noticing that I am not saying that people become multimillionaires with no issues ever again after doing this work. These case studies are all based on real people whom I have worked with (or are composites of a few people with the same general themes) who were courageous enough to look within and to move beyond many of the emotional, mental, physical, and spiritual restrictions that were causing them an incredible amount of pain. This work allowed them to be much more functional, more joyous, and to understand themselves on a much deeper level than before. It is part of inner work that we will always have something to work on, more to explore, and more to heal. The more we choose to do so the freer we become. But we are all in human bodies and have human lives that inevitably have ups and downs. No matter how much work you do on yourself you may still swear in traffic, not get along with your sister, and have some days where you feel better than others. But by fully engaging in our bodies, by becoming fully human and conscious of what we carry, we can move beyond the blind emotional reactivity and the unconscious carrying out of patterns that we once did. This allows for a sense of freedom, better self-worth, and improved functionality, and step by step, we become more alive. By being willing to engage, knowing that we always have something to work on (and not engaging in ego dynamics that suggest that we don't), we can continually free ourselves from the many things that have created so many restrictions in our lives.

CHAPTER THREE

Working with Fear and Emotions

Our emotions provide us with valuable insight into what we need to work with internally.
We all have a _core emotion_, an emotion we gravitate toward in times of overwhelm or distress: fear, anger, shame, or grief. Think of how you typically respond to situations you find overwhelming, violating, or just simply bothersome. For many of you, the emotion is likely to be anger. We do not live in a culture that celebrates our emotions. Anything perceived as "negative" is quashed down, ignored, or is something that we attempt to push aside. In modern spiritual communities, the emotions of fear, anger, and other perceived "dark" or "negative" emotions are seen as something to transcend or turn into "light." Our emotions do not go anywhere if we ignore them or shove them aside. What happens is that we create a stockpile of unhealed emotions that continually grows each time an emotion is not expressed. These unexpressed emotions color the life of the individual and prevent them from reaching a state of health. They also affect our loved ones, our society, and even our world. It does not take much realization to see that collectively our unhealed emotions, such as rage, anger, grief, and fear, are played out repeatedly on a world stage. By doing this work you are taking responsibility not only for your own experience but also for healing your family, friends, neighbors, and resolving part of the whole, like waves on the ocean rippling outward.

Understanding Our Emotions

Our emotions serve a vital function and role in our lives. We are meant to experience our full range of emotions—everything from anger to joy, pain to bliss. Many of us never learn the proper tools and skills to communicate with our bodies or connect with our emotions in a healthy and intelligent manner. Picture this. Two people get into a car accident, both rear-ended by someone on their cell phone.
The first person gets angry (anger that is appropriate for the situation), takes some deep breaths, calls the appropriate people to deal with the situation, and in a few hours has moved on from being angry. The second person begins swearing, shouting at the person who caused the accident, wants to fight them, and gets a headache that lasts throughout the day. Days later they are still mumbling about what an idiot that driver was, and how nobody pays attention and how dangerous it is for people to be constantly on their phones. Over the next few months, whenever this person thinks of this experience a surge of anger flows through them and they express hatred towards this individual. In the first example, we can see what happens when we are in a place of emotional intelligence or skill. We are supposed to get angry, grieve, feel pain, as well as all of our "positive" emotions like joy, happiness, contentedness, and ease. Full emotional intelligence is the capacity for healthy, functioning, and full use of our emotional range, an understanding of why we may be experiencing an emotion, and the uninhibited, healthy expression of that emotion so that it does not add to the "stockpile" within us. What this work will do is gradually ease that stockpile so that you will begin to respond more and more appropriately to the situation and not have your responses be magnified by your stockpile. It will also allow you to move beyond the ideologies that any emotions should be avoided or are "bad," and into a space of flow, or a healthy expression and compassion for all emotions.

Questioning Emotions

To start coming into consciousness (or further consciousness) about your emotions, I suggest beginning with a bit of questioning to determine how much you may be magnifying the emotions you experience on a daily basis. By this I mean a daily process of noting when you feel anger, grief, depression, or fear by asking how magnified this emotion is.
I would choose one emotion to start with. I frequently suggest working with your core emotion first—the emotion you notice yourself gravitating toward in distressing situations, or just as a daily occurrence even in non-distressing situations. Things like depression or apathy are often complex and multi-emotional, so picking anger, fear, or grief is often better to begin with. When you experience this emotion, ask yourself, _How magnified is this emotion for the situation?_ This is harder if there is no situation (you randomly feel angry without a conscious catalyst). In cases such as this, I suggest jumping ahead and doing some of the work later in this chapter (working with the body deva to understand and work with emotions locked in the body) and then coming back to this work. You can also ask, _How appropriate is this emotion for the situation?_ Sometimes that question gets into our own restrictions and rules about how any emotion within us is bad or inappropriate. We are supposed to experience the full range of emotions; however, it can take some work to realize this, and to overcome the sort of cultural, personal, and family conditioning that has told us that difficult or "negative" emotions are not okay or must have a lot of rules around them to be a valid part of our experience. The purpose of this line of questioning is to come up with a number or idea about the extent of your over-reaction or magnification of the situation. This is not intended to make you feel bad but to make you conscious of the extent of the present anger (fear, grief), and the anger that may be backlogged (or need to be worked on). Going back to our car accident case study, in person no. 1, we can see that there is no magnification of anger; the person experienced anger that was appropriate for the situation and was able to release it appropriately so it did not get held within their body. If person no. 
2 were to do this line of questioning, however, they may find that only 10 percent of their anger arose from the current situation, with 90 percent coming from somewhere else. So even though from the outside this line of questioning seems simple, it takes a bit of practice to get the hang of it. When you experience an emotion (your core emotion) ask yourself:

* How much is this being magnified? (Double? Five times? Twenty times?)

* What percentage of this is appropriate for the situation? (5, 10, 90 percent?)

* How much of this emotional reaction is from prior situations (not from the current situation)?

The answers to these questions will enable you to recognize what is healthy or appropriate for the situation, and to begin discerning how much material may lie within that you can work with utilizing your body deva. With practice, just using these simple questions will allow you to let go of some of your anger (or fear, or grief) simply by bringing into consciousness how your emotions work. Just by utilizing these questions it is typical for reactions to start to change. A client who found himself frequently angry due to work emails said, "I didn't realize how much of my anger was coming from past experiences. It felt good to find out how much of my anger was actually appropriate for the situation. I had a right to be angry about some of what was going on, and how much I was being asked to do." A funny part of this is that you are likely to discover that many of your emotions are completely valid; they are just magnified or distorted due to prior experiences and conditioning. By understanding our emotions and their functions we can make headway in knowing ourselves better.

Functions of Emotions

Our emotions are valuable messengers—they tell us what is going on within us, as well as in the context of our environment.
Different emotions have different roles, so we will discuss the basic functions of emotions and what they may be saying to us.

Anger

Our anger is our greatest protection. It allows us to understand when our boundaries are being breached and to energetically create a boundary around ourselves that tells others to keep away. The easiest way to notice how anger creates a boundary is to sit in a public place, such as a bus or train or café. Scan the room and look for someone who is so angry it is palpable. People around them will likely be giving this person a wide berth—an empty seat next to them on the bus, the tables next to them in the café will be empty, and so forth. In the wild, the expression of anger by a mother lion—the gnashing of teeth at a predator coming near her cubs—will keep her cubs safe and cause the predator to fall back. Although we no longer need to defend ourselves in this way, the mechanism from a time when we needed to do so is still ingrained in us. This biological mechanism may work in exactly the same way as it did for our predecessors, who worried about others taking their fire or food, but nowadays it gets triggered a bit differently. We still perceive the predator attempting to "take" from or "violate" us in some fashion, but our anger frequently shows us when our energetic or emotional boundaries are being breached during the event as well as after the fact. Our primitive, reptilian brain originally developed as a way of helping us survive a dangerous early human environment (so that we would not be taken or violated, resulting in our death or that of a family or community member). In modern times, our primitive brain response may still get triggered, perhaps when someone makes what feels like a big demand on us, such as asking us to help them move a couch, get along with a family member over the holidays, or do more than our fair share of work in the workplace.
I realize that the distinction between these events may not seem clear to some of you. After all, there would seem to be a major difference between a mother lion fighting for the survival of her cubs, our ancestors fighting to prevent their food or fire being stolen or them being raped and pillaged, and us receiving an email from a friend, family member or colleague who breaches our boundaries. In fact, they are on the same continuum: they all breach our boundaries, which exist to ensure our survival and the integrity of our systems. Our anger allows us to instinctively protect ourselves and to survive. We may not be conscious of our boundaries, or know what they are, but on a subconscious level we are always aware when someone is looking to "take" from us, or when our boundaries are being significantly encroached. In many cases, anger may show us when we are giving more energy than we are receiving from a situation or a person. When working with boundaries and unequal relationships in interpersonal relationships, I suggest using energetic cord work, which you can learn about in detail in my book, _The Complete Cord Course_ , as well as in this book. If we understand that our anger arises from our perception that our boundaries have been violated, we can become more compassionate about its vital role in our lives. We also can understand its secondary function, which is that anger is actually an outward expression of fear. Our anger is not really anger much of the time. In Traditional Chinese Medicine, anger is classified as a "yang" emotion. Yang emotions are extroverted, loud, and allow us to "vent." Emotions like fear and grief cause us to retreat inward; the emotion of anger allows us to express ourselves. At the deepest levels, all emotions boil down to fear—the memory on some level of being violated, abused, or taken from. 
Whether that is from our own lifetime, from family/ancestry, past lives, or genetic and cultural memory, we have all had experiences of having our power taken in some way or other. Our anger, being an outward emotion, allows us to deny our inward experiences and realizations of fear. We dislike being small, being reminded of the ways in which we deeply fear for our physical lives, our families, and ourselves. Out of this fear comes protective anger—a way to keep ourselves safe as we hold onto our fear. Anger makes us energetically larger. It causes others to keep away and, on some level, causes us to deny the inward expression of fear that lies beneath the anger. We may also be utilizing anger as a protective mechanism so that we do not feel "yin" emotions, such as grief. Within ourselves, we may find a very angry child, ancestor, or teenager, who upon being given compassion and understanding reveals this grief. If we understand that our anger is often a biological mechanism of fear, that we hold onto past experiences (or stockpiled) anger, that anger can cover other emotions, and that anger is a protective mechanism, we can come to a place of understanding that anger doesn't need to be ignored, chastised, or pushed aside. It is attempting to serve a valuable function in our lives, and simply may have gotten magnified or skewed due to the unhealed experiences we hold within. _Anger, being a yang emotion, is often best processed through physical expression. Exercise, movement, and artistic expression are all suggestions of how to channel this seemingly destructive and negative force into a place of health. "Doing something" is the motto of anger: volunteer, protest, find the direction in which your anger wants to "do" to help the world, and yourself. 
Asking your body deva about the best manner in which to express your anger, so as to fully allow its expression in a positive way, can allow you to start working with this powerful emotion._

Fear

Fear is the creator of our other emotions and is thus considered our _base emotion_. Are we going to be okay? What happens when we die? Are our family, job, and loved ones going to be okay? What happens if someone breaks into our home? Will we be safe if we lose our jobs, our health? As we begin clearing and healing the stockpile of emotions that lies within us, fear of our physical demise, fear for our safety, and fear of the unknown all come up full force. These fears deeply restrict our lives, and we have created a lot of beliefs to soothe the unknown, to placate the fears around death that we all have. There are entire industries devoted to telling people exactly what happens after they die, or that claim to definitively make the unknown a known. We like closure, knowing, and simplicity, so the idea that some things will eventually happen, such as our death and the death of loved ones, or that we cannot possibly know everything, or even whether we will be okay, is deeply troubling to a mind that likes clarity, control, and closure. There are ways to transcend these fears, as much of our fear comes down to the fear of our physical death, but that process is somewhat beyond the limits of this book. What we can understand is how these fears impact our lives, and we can heal the fear that has accumulated due to past experiences. In Traditional Chinese Medicine and similar methodologies, there is a concept of "root" and "branch." This means that for every pattern we have, there is the outward expression (the "branches"), the core of what is being expressed (the base of the tree), and what originally created or caused everything (the "root").
We will talk about this concept more in later chapters, as it is important when considering family and ancestral patterns; for now, just realizing that our fears greatly hold us back, and that they create other emotions and a range of beliefs and outward experiences, can allow us to gradually work with the branches and the base of the tree, and eventually face the root, which is fear for our survival and of our physical death. Doing so will allow us to see how our fear creates many of the restrictions and blockages we carry in this world. In any discussion about fear, we also need to talk more about how it is a protective mechanism. For years I disliked the word "ego," as I completely disagreed with the way the word was used by New Agers. The emphasis on having "no" ego, the idea that ego was "bad," and the notion that we are meant to move beyond our emotions and even our physical bodies as a sort of goal seemed strange and inauthentic to me. We are intended to have separate personalities, goals, and emotions. Our differences make us unique and beautiful. We are intended to deeply inhabit our physical form, utilize our senses, and be present in our daily lives. Any sort of spiritual transcendence allows us the opportunity to discover not only that we are deeply connected and "one" but also that each of us is a vital and beautiful expression of who we are on an individual level. Any awakening or transcendence will allow us not to separate but to be deeply connected, to more fully feel and be in concert with our emotions, and to develop the emotional intelligence to understand and work with our emotions with compassion. _According to Traditional Chinese Medicine, fear is a yin emotion, something that is deeply held and felt internally. Fear causes us to contract, to curl up._ _To release fear requires understanding and compassion. It requires an unfolding. We can often find areas of fear in our bodies by noticing which areas of our body we seek to keep warm, covered, or that are continually contracted.
Many who are in a continual fear state will withdraw or seek comfort by contracting in the fetal position. Artistic expression can be used to work with any emotion (yang anger as well as yin fear and grief). Fear is best worked with by soothing, stillness, meditation, and by offering safety, support, and boundaries._ _The key to working with fear is compassion, understanding, and nurturing._ I choose to think of ego as simply our identity, our concept of who we are and what the world is like. We do not create the universe in a solipsistic manner, but based on our wounds and unhealed issues, we "loop" through the same issues again and again, seeking healing. We believe the world to be a certain way because of our unhealed pain. As a result of this work, our beliefs, fears, and emotions can significantly shift. But that requires a certain level of openness to do so, which is why working compassionately with even the darkest or most resistant aspects of ourselves is suggested. Our fear lets us know when we are unsafe, when we need protection, when something in our environment is not right. It allows our systems to react, for us to get that adrenalin surge, go into the "fight-or-flight" mode, and flee (or fight) the attack that is taking place. The difficulty is that we may have so much locked within that we are continually in "fight-or-flight" mode. We may have parts of ourselves that are five years old, frozen in a continual fear state. A part of us may come from a long line of family members who had every right to be afraid of attackers, or we may have memories of family, past lives, or a cultural heritage in which continual abuse or violations have taken place. Those fear mechanisms never found the healing they needed, so they are continually "on" with no capacity to go to the "off" switch. 
By working with fear gradually, pattern by pattern, and feeling compassion for this biological mechanism, which allows us to realize when something is unsafe, you can release fear and slowly allow yourself to switch it off and on only when needed.

Grief and Sadness

Our grief allows us to deeply feel; it allows an access to our soul that no other emotion allows. This may seem like a gift with few benefits, but those who have truly experienced grief have a depth of soul that few others can conceive of. In our modern culture, grief is seen as something that is to be hidden away, private, and feminine. We are only intended to grieve for a culturally specific amount of time, and after that, we should move on with our lives, or get over it. Grief has no time limits, and because it is a yin emotion (unlike its fiery yang counterpart, anger), it moves more in cycles and waves than other emotions. This means that if a family member or loved one dies, we may experience the crest of the wave of grief right after the death (or a few months afterwards, once the shock has worn off), but it may also come up six months later, or even several years later. There is an ebb and flow to the emotion of grief that is not fully respected by us, or by culture at large. We may intellectually understand death as a transition, or spiritually reconcile ourselves to our own death, but the reality of death and dying always creates a level of grief. Whether or not we are conscious of that grief, or fully allow ourselves to experience it, is really the question. When we fully experience our grief, we allow the waves and ebbs and flows associated with that emotion to come up. Grief is a process, an inward (yin) one at that, and we can bully and shame ourselves into not feeling it if it comes from a loss that we feel we should be over (or when others not so kindly suggest that we should be over it).
In healthy grief, we are able to make room for grief, treat it with compassion, and seek out methods for its healthy expression. In many cases, methods that are more internal, or yin, are helpful, such as drawing, painting, journaling, and so forth. Our creativity and our emotions are deeply linked—they are essentially the same force—and allowing the expression of our emotions through artistic expression allows us to create beauty from our sorrow. In healthy expression, grief is seen as a dynamic, soul-level force, one that can take your breath away but can also foster creativity and positive and beautiful memories for what is no longer physically available to us. In unhealthy expression, we get stuck in our grief. We get stuck in the "what if's," the wounds created with that person, the lack of closure; in this case, grief is like slamming into the same wall head first again and again. This is unhelpful. As much as grief is a process that should be admired and deeply felt, there is a difference between having a part of ourselves frozen in a state of grief (such as at the time of a loved one's death) and actively moving through the waves and flows of the grief process. We may also experience a great degree of sadness or depression without having a specific loss to point to. Patterns of abandonment, not being heard, and of loss on a level that feels pervasive or has always been present can all be experienced. One of our primary needs is to feel seen and heard, and if we have not experienced this, especially in young childhood, we will feel a certain measure of grief and longing for the love and connection we never properly received. We will continually try to grasp toward others to fill the hole created as a result of not being seen, loved, heard, and appropriately cherished as a young child. We also can carry grief that has been passed down to us without us consciously realizing it. 
Family and ancestral patterns of loss and grieving are often energies we inherit, and we may feel considerable grief without knowing why. Working with the percentages in the previous chapter, you may have also discovered that you have a lot of stockpiled grief, without there being specific reasoning or causative factors that can be logically pointed to. Although grief can allow us to feel on a deeper level that others may not understand, we cannot simplistically state that feelings like depression, sadness, and emptiness share the same root in each person. Our circumstances and the ways in which we have not been heard, or seen, or felt a part of things can vary considerably. This is why doing work with the body deva is so important. Finding out _your_ reasons and patterns of unhealed grief, sadness, and depression can help you heal on an individual level. A whole host of people may state, "If you have depression, you just have to do X, Y, or Z, as all of it comes from the same issue," but it does not work that way. Our grief shows us our disconnection from one another on some level. Your level may differ from that of your neighbor, however. For example, your grief may be from parents who didn't outwardly show love and appreciation for you, while your neighbor's may be from being abandoned at an early age, and their neighbor's may be from family patterns tracing back hundreds of years involving men having to leave their families to go to war. Our emotions may have specific functions, or roles, and may even have similar dynamics (an inner child not being loved or heard to the level that they needed, for example, is an incredibly common pattern), but by being open to understanding and realizing our own reasons for being, however complex and multi-pattern those may be, we can begin to heal long-standing and even the most difficult or complex emotions within ourselves. 
_Grief can be healed or utilized in similar fashion to the other emotions, such as through artistic expression. Grief is a yin emotion that can be best expressed by allowing it a certain amount of yang energy. What this means is that you need space and time to grieve, cry, and allow grief its outward expression._ _In certain spiritual traditions, such as the Dagara of West Africa, circles of mourners or "grief ceremonies" were held so that others could witness and hear the expression of grief. Keening, or public wailing at funerals, was a part of traditional Irish funerals._ _We tend to stifle grief, so it sits within our systems. Watching a sad movie, witnessing our grief with the body deva, and allowing tears to come can release grief. If grief is too overwhelming, working in the early stages with someone who can witness your grief can be incredibly helpful._

Looking for an Outlet

One of the methods of basic awareness we can develop in relation to our emotions is to notice how we frequently look for a target for our unhealed emotions. The easiest way to see this is online, on social media, where we will grasp onto whomever or whatever we can get angry at, or even feel joy or a heartfelt space with. We may authentically want to cuddle those kittens or be upset with our congressman and whichever celebrity did something newsworthy this week, but we also look for targets for our unhealed emotions when we interact online. We are always seeking to externalize our inner experience, but if we were to heal our internal emotions, we would find ourselves less reactive (or more appropriately reactive) to the outer world. What this means is that if we are anxious, we will find things to be anxious about. If we are angry, we will find reasons to be angry. Our outer chaos will match our inner chaos. By taking responsibility for our emotions and healing them, we can stop ourselves from engaging with the world in such an unconscious way.
Most importantly, we can look at the outer world (and whatever we find ourselves reactive or emotional about) as a signpost pointing to what we still need to work on internally. This can be worked with by utilizing percentages (how much of that celebrity saying something stupid are you truly angry at?), by noting what emotions arise often for us (our "core" emotion), and by utilizing the body deva to find the source of our emotions and doing inner work on whatever we are outwardly reacting to. We get used to a certain amount of chaos and emotion in our lives, and will create chaos and emotion at the level we are used to if we are not currently experiencing it. Working with the body deva will allow you to gradually back away from the simple projections and emotive creations that plague you and become a person who is clearer, calmer, and healthier physically, mentally, emotionally, and spiritually. Again, working with this is a simple process of realizing that our reactions often point to unresolved inner experiences or stockpiled emotions. If we can notice when we are projecting and question the motivations behind our judgments, anger, and fear, we can begin to heal the parts of ourselves that we hold separate, that are wounded, and that are deeply afraid, yet yearn to be united with the whole. A helpful starting point for noticing what may be unhealed within is to ask yourself simple questions when interacting, online or otherwise, such as:

* What age am I acting? (Is it your current adult age? A teenager? A young child?) If it is someone who is completely unlike you (such as an old man when you are a younger woman), it likely signifies an archetype, family, or ancestral influence.

* What purpose is behind what I am saying?

\- Is it simply a venting of emotions?

\- Is it to prove that I am superior or better than another?
\- Is it to calmly share my view with others who may also think critically or be able to engage in discussion, or is it to project unhealed emotions?

\- Is what I am feeling toward this person even about the person?

\- If it is not about the person, who or what might I truly be angry at, sad about, or fearful of?

It is a deep irony that what we project is often a reflection of what we have unhealed within. If we accuse others of being a specific way, we likely need to work on that very thing ourselves. If we continually seek to prove ourselves superior, we often have a pattern of feeling in some way inferior. If we are accusing others of being stupid, there is likely a part of us that feels stupid, or at the very least a former version of ourselves (an "inner child") who felt stupid at some point and who could use some healing. Although things can certainly be a bit more complex, a good place to start and continue working is to notice what we are accusing others of and to question our motives when we are interacting. We may really be angry or fearful, but much of what we experience may be the result of the stockpile of past fears and anger we are simply looking to pin on external sources. If we were to reconcile it within, we may still get angry at the outer world, but it is a healthy, adult, and current anger, not skewed by past hurts or beliefs.

Working with the Body Deva to Understand and Work with Emotions

The first phase of working with emotions, or really any pattern, is to bring them into our conscious awareness. If we are not conscious of something, or do not question how and why we may act, believe, or feel the way we do, we will not move on to being open or willing to work on it. It certainly does take a deal of openness and willingness to heal. We all like to feel that our beliefs and understandings about ourselves and the world are the "truth"; moving away from them means that we may not be as in control as we believe ourselves to be.
Our ideas about who we are and what we hold to be true must change, and the idea that we can transition from someone whose anger explodes on a daily basis to a peaceful individual, or from someone who has an ocean of grief and despair coloring their world to someone who experiences joy or happiness, seems too far-fetched for us.

Working with Emotions

* First, consider an emotion that you would like to work with. For the purposes of this work, I will choose anger as an example.

* Now, get in contact with your body deva. You may also choose to ask your body deva if there is an emotion that you should work with, or you may have received insight through previous work that working with anger (or fear, grief, despair, apathy, and so on) would be a good thing to spend time on.

* Ask your body deva to highlight or show you an area of your body where this emotion is being held.

* Do a body scan (feet to head, not forgetting the arms) or sense what areas of your body may seem to be drawing your focus or highlighted for you.

* If there are several areas of your body that show up, ask which area would be important for you to focus on, or to focus on first.

* If no areas of your body show up, you can work with resistance or ask the question, _If I were able to sense anger in my body, where would it be?_

* Sit with this area of your body and note how it feels. Does it feel stuck, tight, heavy, empty, full, or pulling?

\- Do the best you can to feel what this area really feels like to you physically.

* Now, sense the energetics of the area. How big or small is this blockage or emptiness that you are sensing?

\- The size of a tiny point? The size of a baseball? Does it take up the entirety of that area of your body?

\- If you are having difficulty, you can always ask the body deva to highlight the area more, or ask it questions about the area to glean more information.
* Now, create a visual for this area of your body and the energy that is being held there:

\- What shape might this energy be?

\- What colors? Dark or light?

\- Sit with this until you have a sense of a visual, realizing that it may not be a distinct image but more of a sense of a ball, or clouds, or something else entirely.

* If you are working with an area that seems empty, you can still work with a visual.

\- How big is the emptiness? Is there anything around the emptiness?

Now that you have a basic sense of the energy and area of your body, you will move on to "speaking" to the body consciousness of the local area:

* Internally inquire why the blockage or emptiness is there.

* Ask what its function may be.

\- Ask it what would happen if it didn't hold this energy/emotion there.

* Ask what age the energy is from.

\- You may wish to engage in "inner child" work (the next chapter) if it feels appropriate.

* Ask if this energy is fully "yours."

\- This means from your lifetime and experiences here. If the answer is yes, or that the emotion has always been with you, you may wish to follow up with work in Part Two of this book.

Once you have a basic understanding of what is going on, you will compassionately ask the emotion what it needs to say. One of our primary needs is to be heard, and simply inquiring as to what the anger (or other emotions) have to say can result in profound change. You will listen to what your anger might have to say in an open and compassionate manner, not looking to chastise or tell this emotion that it is wrong or incorrect. Even if your current, adult, and logical self realizes that it doesn't feel this way, or you don't agree with what is being said, there is a part of you that does. Allowing this part of you to be accessed, heard, and given a "voice" can result in the energies of the area shifting or changing.
You may wish to again have your body deva show you just how large an impact this emotion is having on you ( _Body Deva, heighten this emotion (anger) so that I can feel how this energy creates difficulty for me_ ). Do not despair if you realize that an emotion is drastically coloring your world or creating quite a bit of difficulty or pain in your physical body. It is by becoming conscious of such things that we can begin to let go of what we have held for so long and come into a healthier relationship with ourselves and the world. Other options for giving anger or other emotions a voice are to journal, dance, or do artwork or creative work in some capacity. To do this, you would ask the emotion to come forward and "speak" what it has to say. Once you have made it known that you are willing to listen and engage with this energy respectfully and compassionately, it will come forward and release, via simple listening or creative pursuits. You would then paint with the "voice" of anger. People often incorrectly assume that there is a need for catharsis, or that an emotion like anger needs to release through an angry explosion. What it needs is to be seen and heard and given an outlet with compassion. By no longer battling this emotion you can begin to develop skills that will allow you to simply listen and deeply hear what this part of yourself has to say. In doing so, you will become more aligned with the essence of the body deva, the health in your system. You will know that this method is working for you when you return to your body, and your body deva, and look again at the part of the body that was holding the emotion. It should shift or change, growing smaller or thinner or changing shape in some way. It is important to realize that we do not need to heal all of our bottled-up anger in one day, and it is likely that the protective capacity of our bodies and the wisdom of our body deva will stop us before we attempt to do so.
If we have an ocean of grief in our bodies, it is enough to release a bucket. Over time the buckets will accumulate, and our ocean will turn into a small lake, then a pond, and eventually a thimbleful.

The key to this work is to not treat ourselves roughly. We have experienced enough pain and have told our emotions enough times that they are not wanted. Our emotions do not want to be forcibly cast aside, scraped, cleared, or ignored. They want to be heard and understood from a place of compassion.

If distressing or overwhelming emotions or experiences come forward, it can be quite helpful to have an external resource to assist us. An ocean of grief may understandably be too overwhelming for us to deal with, and although I always suggest going slowly, having another human listen to our grief, anger, fear, and pain compassionately can offer further reprieve and insight into our emotions, as well as eventual healing. Modalities such as Craniosacral Therapy, Hakomi, Somatic Experiencing, Zero Balancing, and Spiritual Healing approach trauma through the body in ways that will integrate well with this work. I suggest finding someone who is certified in their modality, has it as a primary focus in their practice, and has at least five years of full-time experience in it.

Once you have listened to what your anger or other emotions may have to say, you will say thank you and ask your body deva if some of the energy held in this area would dissipate, change, or shift. You are not doing this forcefully, or commanding; it is simply an opportunity for realization on the part of your body-mind that it is time for the ocean of grief to become a river, and for your body deva to facilitate that happening. In most cases there will be some shift, or there will be shifts that have already occurred simply by asking or recognizing that your body is carrying an emotion in a specific body part.
If there is not, it simply means that more work needs to be done, or that the process in this area may be more gradual for you.

Looking at Our Reactivity in the World

We are continually looking to project our inner emotions onto the world and the people in it. We externalize our emotions and then project them onto others, as we lack the skills or awareness to take care of them inwardly. We are continually showing one another our pain and what lies unhealed within us.

We are also looking to place people into unhealed loops or roles that we have developed for them; for example, if you have an unhealed relationship with your father, you are likely placing the men in your life in a "father" role, seeking from them what you did not receive from your own father. In this situation, you are likely also placing a lot of energy and emotion on the target of your projection. They may be similarly projecting the unhealed wounds of their father or mother onto you, and both of you may then enter a relationship of enacting the same unhealed "looped" energies again and again. Simply put, you may be unconsciously replaying your relationship with your parents in your adult relationships, whatever that relationship may be.

Both parties in this situation are then using one another to fulfill different roles, "loops," or unhealed material. At its best, this relationship can allow both parties to heal, to "unfreeze" what is frozen within them, and transcend this type of looping. At its worst, both parties simply loop or reenact the unhealed traumas they have within them, without any type of awareness.

If we have a stockpile of anger within us, our system is continually looking for ways to engage with this anger to release and heal it. Unfortunately, what arises is simply situations in which we can get angry, or do get angry, without the inner reconciling or proper release of the inner emotion that is looking for healing.
We lack the proper tools and consciousness to recognize how we may be projecting onto the world and the people in it. By doing this work you can begin the process of releasing the inner experiences and patterns (the "loops") so you are no longer continually enacting them in the outer world. You can begin to take personal responsibility for your emotions and understand what is truly making you angry, grief-stricken, or fearful.

One of the ways this work can be done is by acknowledging what or who in the outer world is creating emotion within you: not just temporary annoyance or anger, but something that leaves you still thinking about it, and reactive, hours, days, even months later. This may be a person or an event you are connecting with to serve as an outlet for your inner stockpiled emotions.

A typical response to this information is that there are things in the outer world (people and events) that should make us angry, or afraid, or create grief within us (as well as joy and bliss and pleasure). This is true, and with emotional intelligence we can deeply grieve and feel anger or fear in a way that is appropriate for the situation. But by questioning what we react to in the outer world and utilizing it as a catalyst for our own process, we can use the outer world (and the people in it) as a part of our inner work.

This means noticing the themes or types of people creating a reaction within us. Do we get angry at abuses of power? Do we experience soul-crushing depression when we see an animal getting hurt or abused? What do we accuse other people of? What we accuse others of is typically something repressed and unhealed within ourselves. What we feel the need to continually prove to others shows what we need to prove to ourselves.

Sometimes, situations are not terribly complex. For example, many people get upset about the latest celebrity happening because it allows them to vent their emotions (and gives them a target for doing so).
We really don't inwardly care too much about what a celebrity does with their love life, or what outfit the latest celebrity is wearing; celebrities, corporations, and the events on the news give us a "safe" target to vent our backlogged emotions and allow us to go numb or look away from our inner experiences. So notice what you are connecting with when emotions arise. What percentage of you is truly upset at the current situation? What percentage may be from backlogged emotions?

Ask yourself the following questions:

* If you were to picture a line going from the "target" into your own body, where would it go?
  - This target may be a person, your computer, or even a specific post online.
  - If you cannot visualize or sense this, ask the body deva to highlight or show you the part of your body this "line" connects to.
* What emotion is involved here?
* What pattern or situation is involved here?
  - You may not know the first time that you question this, but over time this question will reveal the patterning behind what you are reacting to.
  - For example, you may always react with anger when someone is not faithful to their partner. With observation of this pattern you may realize that the pattern is about how your parents interacted when you were young.
  - You may also find yourself fearful any time the news reports something. Tracking this fear within yourself can allow you to heal whatever fear is there that may be using the news to enact this continual fear state within yourself.
* What are you accusing others of?
  - This will reveal something unhealed within yourself. This may not be direct—when reacting to animal abuse you are likely not an animal abuser, and much of what you feel may be pure emotion that is appropriate for the situation. But you may find an inner hurt portion of yourself in relation to this reaction. For example, your inner six-year-old may be quite hurt at seeing animal abuse.
While the adult you is certainly upset as well, this points to your inner six-year-old needing some healing.
  - This also may be very direct, such as accusing others of not being "real" or lacking power or femininity or masculinity.
* What do you hate or seek separation from in the outer world?
  - The experience of hatred always points to something within us that needs to be healed. Chances are that hatred has been passed down and you are enacting it without much thought.
  - There also may be a pattern of hating in others something you hate within yourself, such as individuals who denounce gay marriage as immoral and then are found to be having same-sex affairs.
  - It is perfectly fine to disagree with things, but if we truly hate something, or consider ourselves separate from or superior to one another (whether by religion, socioeconomic class, race, location, and so forth), it points to something within us that could be healed.

By using our outer experiences of the world and noting our emotional reactions, we will have a considerable amount of information to put on our "to heal" list. You can use the emotions and questioning above, noting what you are reactive to and beginning to question why, in order to find something unhealed within yourself.

* When you ask the body deva to show you where you are holding what you are outwardly reactive to, you will again move to the questioning of that part of your body.
* Find the shape, energy, and color, and where they fit in your body.
* Ask this part of your body to clarify what the pattern is here. Basically, you are asking why you may be reactive to this outer experience.
* Ask what emotion is held in this area.
* Ask this part of your body what the _core wound_ is.

Our "core wound" is the reason why we carry this energy.
What may emerge is a belief ( _I am not good enough_ ) or even an understanding that was passed down to us ( _I was taught to hate people different from me_ ), but most likely it is an emotion that stems from our lives here (which we can do inner child work with), our ancestry, family, culture, or even past lives.

* You may now choose to go on to the appropriate work for this situation (such as work with an inner child) for a more complete healing.
* To do so, you would begin asking this area of your body questions like:
  - What age is this from?
  - Is this from my family? Mother, father, or grandparents?
  - Is this from my ancestry?
  - Is this from a past life?
  - Is this from my culture?
  - Is this from my in utero experience?

When working with our body deva, either as a whole or with the individual consciousness of a body part, we may get a straight answer, such as _Yes, this is from your mother_, upon asking about family patterns. Most of the time we do not get such a clear reply, or, since we are complex individuals with the possibility of multiple patterns or needs for healing, we may find multiple "yes" answers. The clearest response comes through pacing and pausing and noting how that area of your body responds. There should be a sense of something happening in that area of your body (either release, a sense of heightened energy, or even a temporary increase in discomfort) when you find the correct response.

You can look in the back of the book to tie all of this questioning together. For now, though, know that even by acknowledging and sitting with what you see in the outer world, and what emotions you experience in reaction to it, you can trace these reactions back into your body to the places where emotions and beliefs are held. In this way you can look at the outer world as "fuel" for your healing process.
You can actually get to a point where you express gratitude toward those who have inspired something unhealed within you to arise, as they are showing you something that you can work on in yourself in order to become more cleared, whole, and healed in your life.

You may choose to "talk to your body," using the tools in the previous chapter, to see if some of the held emotions and energy will release. Ask for understanding, for knowledge about what you are reacting to, and for the emotions involved. If tracked back to your own body, you should be able to work with your body deva, as well as the individual body part, to establish why that energy is there and what it might want to say.

After it does so, you can ask about the _belief systems_ involved. In our unhealed states, we carry restrictive beliefs about ourselves and the nature of the world and the people in it. We will get more into this concept in later chapters, but it is good practice to start by simply asking if there are any beliefs about yourself, the nature of the world, or the nature of people (or specifically men or women) that are associated with this held energy.

You may choose to go on to the more advanced work, or may simply ask for a release or shift of some of that held energy. Ask what your body needs in order to feel comfortable releasing some, or all, of this held energy. Always ask if there is something you need to do, some action to take in the outer world, to clear this energy. And remember, it is a negotiation process, not a command.

Our Tornado of Chaos

We get habituated to the amount of stress and chaos around us. This chaos is like a tornado circling us. We get so used to its presence that we actually create chaos in our lives to allow this cyclone to remain at the same level of intensity. It's important to understand this concept: we create chaos, emotion, and circumstances in our outer world to keep this tornado spinning at the same intensity or magnitude.
We may not realize how much difficulty and hardship we create for ourselves as a result of the old beliefs, patterns, emotions, and wounds we carry. It is only by stepping away from them, healing, and gaining perspective that this understanding typically forms. We may feel grief or anger toward ourselves for the amount of hardship such wounding or beliefs created, and for how we have perpetuated our personal tornado.

We can step away from this tornado. We can step into our midline and communicate with our body deva, asking it to show us the cyclone that surrounds us. We can then draw, or simply sit with, whatever sense of this comes up, noting the basic characteristics of our personal tornado. When you step into your midline (by focusing on the energy that flows through it) or sit with your body deva, you are in a place of calm and stillness. It offers perspective, or a step or two away from this tornado.

When we heal, this cyclone should slow down and become less substantial, and our lives will generally become less chaotic. This does not mean that nothing difficult will ever happen to us again, or that we will no longer suffer the slings and arrows of the naturally tumultuous nature of being human. It does mean that we can develop a certain perspective on our tornado, and step away from creating more chaos simply because we are used to it being there in our lives.

Understanding this concept and asking your body deva to show you your tornado while you are in your midline will allow you to get a gradual sense of it. Once we become aware of such things we naturally engage with them. In addition to the other work being done throughout this book, the "tornado" is a good indicator of how much healing work we may have left to do within ourselves. Healed people generally radiate a sense of peace, calm, and stillness, and have stepped away from the cycle of engaging with the "cyclone" to create more difficulty than is necessary for themselves.
_The tornado of chaos_

**Samantha**

Samantha came to me because she felt constricted in her throat. A mind-body practitioner herself, she thought that it was because she was not connected to her creativity and that her throat chakra was blocked. She recognized an element of grief, and wanted to explore that emotion. She discovered that her grief was being held in her throat, chest, and head. When she asked her body deva which she should work with first, her body highlighted her thyroid.

When she focused on her thyroid, she saw that it was like a walnut: a protective "nut" on the outside, but with a sense of emptiness in the middle. She asked this walnut about the grief. The response was that she carried a lot of grief about wanting to be an artist but not feeling good enough to really show her artwork to the world. She had an underlying fear that in some way she might be attacked or hurt by showing her artwork.

With further inquiry, she found that this emotion of grief came from her childhood: while she frequently had paintings and drawings up in her school and was commended on her artistic ability, her father was never interested in her artwork. Like most children, she really wanted to be seen and heard by both her parents, and his dismissal and disinterest led her to believe that she was worthless, as her artwork was an extension of who she really was.

After this realization she felt the "walnut" change into a lump in her throat, the accumulated grief from this and other experiences. She asked if it was willing to change or shift, and it changed into a smaller lump. She asked her body deva about what was left, and it said that she needed to physically connect back to her creativity again. She began painting, each time tapping into her grief to allow it expression, and over time the grief lessened and disappeared. She found herself more willing to be seen in her job, in her relationship, and with her family.
CHAPTER FOUR

Working with Inner Children

We like to think of ourselves as one congruent identity, one Self. Whether we are a forty-five-year-old businessman, a twenty-year-old college student, or a seventy-year-old retiree, what we think of ourselves is based on what we are currently doing and who we believe ourselves to be. If we return to the example of trauma and overwhelm offered at the beginning of this book, we may begin to realize that we are not just one Self; we are composed of many different selves. These parts of ourselves are frozen in time in an unhealed state, constantly looping or repeating their wounds, hoping that they will one day be heard and healed. We will also explore our natural "selves" (different aspects of our personality) in a later chapter.

We may have twenty six-year-olds within us, or many different aspects of Self all along our personal timeline. By offering these parts of ourselves from a younger age the closure and healing they need, they can "unfreeze," and we will no longer find ourselves "looping," or reacting from the place of this unhealed part of ourselves. We will also find that the beliefs of that six-year-old, once they have been healed and integrated, have disappeared and no longer affect our lives as they once did.

To offer a simple example: if we were a six-year-old whose parents divorced, we likely had a limited capacity to deal with that situation. We were six, with the intellect and understanding level of a six-year-old. This six-year-old was not able to process the emotions and experiences of that divorce. The body deva then sectioned off, or separated, this part of ourselves in our physical body. The body consciousness does this so that we can move on with our lives reasonably intact, but a part of us is frozen at that age, with the emotions and unprocessed overwhelm still located within us. That six-year-old may be angry, unsure of what is going on, and may have relied on sweets to self-soothe.
We may now be thirty years old, but there is still a part of ourselves that is six, with the beliefs, traumas, and other unhealed and overwhelming material still informing our thirty-year-old self. Every time we run across a situation that triggers the wounds of the six-year-old, we revert to being six and run to sweets to self-soothe. We may feel that a part of us is confused and unclear about relationships, or that our relationships fail because we hold the belief that "all men or women cheat" (if this was what precipitated the divorce). Or we may feel immense grief, or anger, when in our current lives there is limited reason for us to feel this way.

If we work with this "inner child," the part of ourselves sectioned off by our body deva, we can release the anger, pain, and beliefs that were created out of this situation, and our body deva can allow it to be a part of us again. If we heal this inner child, in full or in part, we may find ourselves experiencing less pain and more sensation in the area of the body where our "six-year-old" was once sectioned off. We may also find that the needs of that six-year-old (the craving for sweets) disappear, or at the very least recede into the background a bit.

We are also likely to find that we no longer "loop"—no longer revert to acting like a six-year-old, acting out their pain and limited resources (reaching for sweets and getting angry) every time their pain, or something reminding them of the original situation, surfaces. This is because the six-year-old is no longer frozen, "looping," and in need of healing; they are simply an integrated aspect of our adult selves now.

The wonderful part of doing this work is that it is not a logical or scientific process.
The point is not the story, or the endless mental recitation of conscious memories, but becoming conscious of what lies within, acknowledging the "loop" (or understanding how this unhealed self and their beliefs affect us in our present-day reality), offering compassion, and then releasing the beliefs and emotions so that the part of us that has separated can become healed and part of an integrated, healthy whole.

In traditional therapies or even mind-body work, we might begin to work with our inner children by consciously picking a time or experience that we know has impacted us. This may provide a great deal of healing for us, or be necessary in our process. But when we "freeze," or section off parts of ourselves, we may not consciously remember them. By communicating with the body deva we can find these parts of ourselves that lie below (or deeper than) our conscious recollection.

It does take a bit of an open mind, as well as willingness to move beyond the mental and logical. We construct our lives and experiences through story, and culturally we are taught that certain variables must be present for something to be "true" or "valid." While I understand that mentality, I would encourage anyone to simply do their body scan, draw out their body map, and then do this work with an open mind. It is by seeing the impact of this work on a personal level that it can be appreciated more readily.

This work can turn into endless mental gymnastics, or be focused solely on the mental realm, if it is not done through the physical body in conjunction with the body deva. The focus here is not on mental story, but on shifts in the body, a shift in beliefs, and changes in the body map, as well as the visuals and "felt senses" (what we feel in our bodies) that have emerged. This is always paired with the ability to be compassionate toward ourselves.
Working in this manner will allow you to be more successful with some of the more "spiritual" work, such as working with past lives or very young aspects of self. Although this work is intended for your inner child, it is best to begin with inner children who are out of infancy (age two and up); it can also be done for parts of ourselves that are in a teenage or adult state. States prior to the age of two require more experience and much more reliance on a relationship with the consciousness of the body, so it is often best to work with situations related to older inner children.

Generally, it is best that we have a bit of time and space to reflect clearly, so events and "inner children" from at least five years prior to your current age are suggested. More recent events are often still being processed, and can often be worked with utilizing the "talking to your body" or basic body deva questioning.

This work is fairly straightforward, but it likely needs to be done multiple times, even within the same age range. What this means is that we may have four parts of ourselves that have "frozen" at the age of six, either from the same event or due to different events and experiences from that age. Although it is rather clichéd at this point, the metaphor of healing being like peeling an onion is always apt. We may be willing to work with our inner four-year-old's anger about life experiences she cannot understand, but once that anger is healed, that same four-year-old may now be filled with despair from the same event. We may find that we have multiple parts of ourselves frozen at the age of four. We, again, are complex beings with complex reasons for our imbalances. Having compassion and being willing to work with an inner child, even multiple times, is going to provide the best results.

How to Work with Your Inner Children: Part One

Although you can pick an event or age that sticks out in your mind, I caution against it.
It is best in this work to be intuitive, and to flex your intuitive (rather than mental) muscles here. If we allow our intuition, or sense of knowing, to emerge, even if we feel a bit silly or lack confidence while doing it, it is highly likely that different, or new, information will be received. Our mentally and intellectually based minds may believe that our anger is coming from a specific age. Working with that age may be very fruitful, but when you go in with an open mind you may find an inner surly teenager instead of an inner angry six-year-old, and working with that teenager would provide the most healing, or the healing that you need right now.

One of the things I hear from people who are just starting this work is worry that they will be wrong. While it is more likely at the beginning that you may self-create, or come up with ages and experiences that you already are aware of, with experience you will come to a more trusting place. While I cannot say that it does not matter if you self-create something, what I will say is that in the healing process what matters is resonance and results.

Resonance means that the information that comes up for you feels right in some way to you and your body deva. Even if it is something odd, unexpected, or off the wall, it should somehow ring the bell of truth for you or garner a response, such as a shift in energy or heightened energetic response, from your body. If it doesn't, it is an indication to inquire a bit further.

By results, of course, I mean that what comes up, what shifts, and what you realize has an impact on your daily life and how you lead it. Becoming more whole and healed (which this work does, or will do) should allow you to become more grounded, more functional in your daily life, and feel better physically, mentally, and emotionally. In some way, your "load" should feel lightened. After this work, it is likely that your body map will feel more filled in and you will feel more embodied.
All of these are signs of what this sort of work can do for you, and signify that you are on the right track. For most people, if they do make things up, they simply will not see changes in their daily reality or in how they react to things, and the area where they were sensing an imbalance in their physical body will retain the exact same physical sensations, as well as visuals, as it did before.

A small percentage of the population uses work like this to further disassociate or create chaos for themselves. While I do not dismiss this occurring, it tends to occur in people who are not in a stable enough place for solo inward exploration, and in those who are creating mentally rather than engaging with their body deva.

Most commonly, as noted in the previous chapter, people will start working with a body part and realize that what is held there is from an inner child. You may also choose to engage your body deva and set the intention of working with an inner child, and then perform a body scan to see which area, or areas, of your body are highlighted for you. If multiple areas emerge, you would ask which area is most important to work with, or to work with first, and follow that guidance. How to fully incorporate inner child work into the rest of the body deva protocol is discussed later on, in the chapter Tying Things Together.

To begin, you will start with the intention of finding where this inner child is held within your physical form. If you are already working with a body part and an inner child comes up, you can skip this step. You will then assess the physical and energetic aspects of the space in your physical body where the energy of this inner child is held. This is always done in this protocol to ensure that the work is body-focused, as well as to create a baseline understanding of how things are held and how they shift through the physical form.
For simplicity, I will be utilizing "she" for the rest of the exercise:

* Sit with this age for a moment and ask your body deva to show you a picture of yourself from that age.
* Again, if you do not have strong visual capacities, ask, _If I could sense myself at that age, what would be emerging?_
* Notice what she is wearing, what she might look like.
* Now sense where she is. Where is she located? What is going on around her? What is happening to her?
* You do not need an extensive story or narration here. Get the basics about what is going on so you can now get a sense of what may be distressing, overwhelming, or traumatizing for this child.
* Through sitting with this inner child, you should get information about what sort of trauma or overwhelming experiences happened that caused her to "freeze," or separate.
* If it is too much to focus directly on your inner child, you can always visualize your inner child and then ask your body deva to relate to you what is going on.
* You can also picture a blank television or projector, and when you turn it on, it will show a scene of your inner child and what she is experiencing. This allows a certain degree of separation if what is arising seems overwhelming to your current self.
* What is helpful is a basic idea of what is going on with her, what she perceives as too overwhelming or difficult to deal with, and the basic emotions that may be involved.
* A good indicator for this is to feel what emotions you are currently feeling as this part of you arises.
* You may also find beliefs or understandings arising that are coming from this inner child.

You are welcome to stop here. Simply allowing this inner child to come forward is healing in and of itself. It will also allow you to become more conscious of when in your daily life this inner child and her unhealed needs, or "loops" (repeated behaviors coming from being unhealed), are coming forward.
When we are under stress or emotional in our current adult lives, we tend to activate, or energize, these unhealed inner children. When we get angry, we may revert to being a sullen teenager. When we feel out of control, we may revert to being a two-year-old who really wants her mommy. Inquiring how old we are when we feel emotional or "wounded" in our daily lives can give us an excellent indication of what sort of inner children we have lurking within.

If you are ready to work with your inner child in a healing capacity, you can continue with the exercise:

* Now, you will ask her directly (as if speaking with her internally), or ask your body deva, what she needs to be healed.
* Your body deva will offer a healed, conscious perspective; your inner child will offer a wounded perspective. Both are valid ways to gain information, but if you are open to it, I would suggest directly engaging with the child, as the need to be seen and heard is deeply healing.
* Among other reasons, we tend to protect ourselves and divide ourselves out of fear.
* What would she need to get over that fear?
* What would she need in order to feel safe?
* What or who would protect her?
* Now ask her, what would happen if she received what she truly needed?
* Offer that to her, and visualize her receiving it.

Often in the case of needing protection, needing to be listened to, or needing to be seen or nurtured appropriately (all of which are very common inner child reasons for splitting off), there may not have been a good person at that time who could provide that for us. The best way to get around this is to offer (if you are willing) to be that resource for your inner child as an adult. Offer to listen, to hear, to give her protection, or to nurture her. Just the simple offering will be immensely healing for this inner child.

Your inner child may only be willing to accept a bit of healing. She may not be willing to get rid of all her anger or pain.
This is perfectly acceptable, and in fact is how we heal: through gradual work. Do not force her to heal, or to do anything that she is not ready to do. Much like our adult selves, we do things when we are ready to do them. Remember that just by acknowledging the existence of such patterns in our body, of our inner children, you will allow a natural healing process to occur.

When it feels right, you will ask her to release as much (fear, anger, overwhelm, or other emotions) as she feels ready to let go of. Some people find that it helps to visualize a white or other light color coming into their inner child to support her and take away the anger or pain she is experiencing.

If your inner child is healed, she will disappear. You will no longer notice her. If you are still noticing her, it just means that there is more work to be done. Ask her what else she needs, then offer it to her. Allow her to speak, be heard, and get her basic needs met. It may be that you cannot heal this inner child in one day. It is good practice to intuitively sense if the work for the day feels complete (even if your inner child still seems angry, sad, or overwhelmed), or to ask your body deva if the work for the day is done.

Healing Inner Children: Part Two

Once you have the basics down, it is time to more fully question what sort of beliefs or understandings were created from the situation. Trauma always creates beliefs: about ourselves, the nature of the world, or our relationship to people in the world. Visualizations can be deeply healing, but it is by questioning our deeply held beliefs (or beliefs that were created by the experience our inner child had) and offering that inner child safety, compassionate listening, and whatever she needs that we create the bridge that heals our physical body, mind, emotions, energy, and spirit simultaneously.

To start, you will return to the basic body scan and the "talking to your body" work in the previous chapter.
While talking to your body, it may arise that what is energetically blocking or creating imbalance in this area of your body is an inner child, or an experience from an earlier age. The more you do this work, the more clearly and accurately that information will naturally emerge.

It is always wonderful to start this work with a check-in with the body deva. Feeling into your body deva, whether through outer visualization, felt sensation in your body, or energetically tapping in (through your midline or where your body deva is located in your body), will allow the body deva to be a trusted ally in this process.

You will then do a simple body scan, or you may choose a part of your body that seems out of balance on your body map. You may also choose an area that is consciously painful for you, or state specifically that you wish to find an area that holds an inner child pattern. In an advanced capacity, I like to check in with my body deva and simply ask it to choose—to highlight or otherwise draw my attention to the area of my body that I should be working with today.

Once you have chosen a body part, you will utilize the "talking to your body" skills to find out the basics of what is going on with it. However, you may begin to get a sense that something deeper is going on with that area. Perhaps it is an area that has been troublesome for some time, or has significant imbalance. Or maybe it is an area that has not responded well to any sort of healing, either allopathic or holistic. You may also wish to question whether there is an inner child component to the body part to see what the response is.

You will simply ask this body part if it is holding an inner child experience. Ideally, you will get the sense of a "yes," or an inner sense of resonance (a feeling or shift in your physical body) that tells you that this area of your body holds energy from your past experiences in this world.
You would then continue by asking what age this experience is from. The good part about first bringing the body deva online is that if the knee (or liver, or other body part you are working with) is unclear, or you are not receiving an answer, you can ask your body deva what age is being held in the body part. You will likely receive an age (an approximate age is fine at first).

You will then visualize her, feel into what was going on at that age, and what might have been overwhelming or traumatic for her. Find out the basics of what was going on, what emotions were created, and what she may need to heal or feel better.

You will now add on a few additional steps. Understanding and allowing our inner children to be heard and visualizing what they need can be deeply healing, but we need to understand that trauma and overwhelm alter the way we see and experience the world. We change what we think about ourselves and about the nature of the world (or the nature of people) because of these experiences of overwhelm or trauma. You will want to ask your inner child directly what beliefs were created out of this experience:

* How did she think differently about herself?
* How did she think about people in general?
* How did she think about men (or women)?
* What did she now know to be true about herself?
* What did she need to do to protect herself from this happening again?

While not all of these questions need to be asked, it certainly can be helpful to ask them all and see which ones have resonance. A few may not make sense, depending on your inner child and her experiences. But as a result of our overwhelming experiences, emotions, and traumas, we wind up wounded, with wounds that are deep and emotionally based. As a reaction, we create protection around the wound (and our body deva sections it off, much like our body tries to do with physical infections).
We then create beliefs and understandings about who we are and what the world (and the people in it) are like in response to these wounded parts of ourselves. This, then, is the lens through which we experience our life. When we understand our beliefs and where they come from, an incredible amount of healing can occur, and our beliefs can shift or change in ways we likely never thought possible. As a result, our experiences of the world will change.

As part of this release process, it is often important to take personal responsibility for the beliefs that are arising. Acknowledge that you still hold the beliefs or understandings that the inner child is relating to you, and take the time to reflect on your conscious awareness of each belief and how it still affects you in your current, adult life.

You will then tell your knee (or the body part you are working with) that you are no longer six (or whatever age you are working with) and that it no longer needs to hold the experiences, memories, and beliefs of this six-year-old. You will also now want to ask the body part to change or shift in relation to the processing that has happened, even if the inner child has not fully disappeared. Ask the body deva to integrate the work—to change or shift the body as a whole in relation to the release that was experienced. If it feels appropriate, you can end by saying thank you to both your body deva and your inner child for working with you.

It is typical for emotions and memories to come up during and after working with your inner child. Realizing that the emotions, memories, and even thoughts that arise are coming up because they are releasing and healing helps your body and mind to understand that what you are experiencing is not current. If difficult experiences and memories emerge, finding someone to talk to can be an important part of the process.
Do not be afraid to reach out for support, as friends, groups, or healers of all stripes can help us work through heavy or difficult energies if they arise, as well as provide clarity or an outside perspective on our process.

If you are looking to work with something that is less location specific, such as understanding your fatigue, all-over body pain, or a specific emotion, start with that intention ( _I ask that my body deva help with my fatigue_ ) and then ask your body deva to highlight or show you where in your body this fatigue is being held. You may see or sense many places, but you could then ask what would be most important to start with, or you may sense one place that really draws your focus or attention above all others. It is often helpful to ask for the linchpin, or fulcrum, if you are working with a pattern that seems fairly large or is located in many spots.

Similarly, you may start out with a specific age or experience you wish to work with. You would then recall that time (for example, when you were picked on in the high school cafeteria) and inquire where in your body that experience was held.

The Concept of Micro-Trauma

In many ways, it is quite a bit easier to start out with the "big fish," so to speak. We may have experiences come up that were a singular trauma. For example, at our tenth birthday party we only had one other friend come over (when we invited the whole class). These singular experiences are often what come up first when working with our inner children. However, we may find that what is emerging is something that was experienced over a long time period, or at a variety of ages. Many people deal with trauma that took place over a period of time: being molested, violence or drug addiction in the home, or not having enough to eat when growing up due to living in a single-parent household.
Between the ages of fifteen and forty, we may have had a depressed parent who had an impact on us, or we may have grown up with the sense that our sister was the "perfect" one or the "favorite" in our parents' eyes. Certainly, singular experiences can emerge from events that happen over time, but "micro-trauma," or seemingly small (or sometimes large) events that take place over a period of time, may create an inner child who does not seem to have a singular, large experience of trauma but needs healing just the same.

In this case, you would find that your mind cannot settle on a single age, or there may be a layer of confusion or lack of a strong visual or age that emerges for you. You would ask a symbolic inner child to step forward—basically, a speaker for the events that happened between the ages of twelve and eighteen—and then go about the work as usual, asking what they need to feel whole, what they need to express, what beliefs emerged, and finding where that energy has isolated or blocked itself in your body.

**Richard**

Richard initially came to me with a lot of pain in his mid-back area. He had visited a number of doctors, an acupuncturist, and several massage therapists seeking relief. While he did find relief through those methods, his pain always came back a few days later. He went through further tests and found out that his pain corresponded to his gallbladder, and began to cut out foods from his diet that were high in fat and grease.

When he focused on the pain, he found that it was like a rope burn and brought up the emotion of grief. He focused on his gallbladder and asked if it was an inner child pattern. His gallbladder replied yes, and so he proceeded. When he asked what age the inner child was, he was told that the child was age fourteen. He visualized himself at a school dance. He had gone there with a girl, but she had ended up dancing with another boy.
He asked what the fourteen-year-old needed, and the boy replied that he wanted to be seen and liked. Richard then asked what beliefs were created, and heard the reply that nobody liked him, or ever would. Upon hearing this, he realized that a deeper pattern was emerging. He visualized his fourteen-year-old getting what he needed, but realized that what he was hearing were the words of his father telling his mother that she was worthless. He remembered being eight and feeling helpless that he couldn't do anything to protect his mother from his abusive father. He clearly saw this eight-year-old crouching in the corner of his living room, trying to keep out of his father's way.

At first, a surge of anger came up in him, but he asked what his inner eight-year-old wanted, and the child said that he wanted everyone to be okay and get along. Richard felt a lot of resistance to this. He visualized his resistance and worked with it. He began to realize that there was a part of him that didn't want his father to be okay, and that his current adult self was preventing his inner eight-year-old from receiving healing. He worked step by step with his resistance over a few sessions, beginning to ask his body to release the emotions that it held, and gradually felt decreasing pain, as well as emotions, in his diaphragm. He was then able to move forward with allowing the inner child to receive what he needed.

As Richard was doing this work, he noticed that his outer world was changing. He had previously kept to himself, as he figured that nobody liked him or wanted him around, but colleagues were now talking to him more at work, and he began to realize that his inner child had been preventing him from seeing the world clearly and that people could like him. Gradually, his inner child healed, releasing layers of anger, pain, and fear. Richard patiently saw this through.
He found that he no longer had pain in his body, related better to others, and that while he still had to watch his diet, he could (occasionally) eat a deep-dish pizza or burger without pain.

**Monica**

Monica grew up in a single-parent household in which she was left alone for long periods of time. She is stable and holds a full-time job, but still recalls standing in line for government cheese and the summer days she needed to stay inside because her neighborhood was too unsafe for her to go to the playground. Monica specifically wanted to work with this part of herself because she never felt like she was really an adult. She described feeling like an imposter, and thought that any day now someone was going to come and take everything away from her and she would be back in the neighborhood she grew up in.

With this intention, she asked her body deva to show her where this energy was being held. It took up her entire abdomen and looked like an empty hole on her body map. Since this was something that spanned fifteen years, Monica asked for a representative of her inner child to step forward. This inner child desperately wanted to know that everything was going to be okay and that she would have enough food to eat, and lived in fear that her brother and mother would be harmed or taken away from her due to violence. Her inner child was unable to express these needs because she didn't want to be a burden on her mother, who was already struggling with so much.

The inner child expressed her pain to adult Monica and started to feel better. Monica offered her some care, attention, and a calm green light to help her feel better. Monica then asked about beliefs. There were a few, but the largest was a fear that what little she had could be taken away from her. Monica began to see how this belief impacted her life, and she realized that she was enacting a "loop," in which she would buy things and then return them because she didn't feel worthy of them.
Monica asked her inner child what it needed to heal this loop and release the fear, and her inner child stated that she wanted to play outside safely in the sun with her brother and mother and feel their love for her. After doing this, the inner child was no longer visible, and the hole in her abdomen went from being basketball size to the size of a golf ball.

Part Two

Intermediate and Advanced Work

In Part Two, we will work with more intermediate and advanced concepts and patterns. It is helpful to have a solid grounding in the methods in Part One before moving on to the work offered in the following chapters. This is because we often need to clear some of our own "baggage" (held energy) in order to be able to see how other patterns are affecting us.

I understand that people may not believe in some of these concepts. Do not allow this to deter you; incredible healing can come from working with your inner child as well as the consciousness of your body. You can simply read through the information if you are curious, or disregard some of these chapters if necessary. I do encourage an open mind, and at least a willingness to hear from the wisdom of your body deva if something such as a past life arises, even if it does not fit with your current cosmology.

Even if you are not consciously on board with the notion of past lives, or with the idea that your experience in utero can affect who and what you are today, you can view a past life as an archetypal force or metaphor. It will still result in the same changes in the body map, the same difference of feeling or sensation in the body part, or the same change in the visual of the held energy in that part of the body.

So my general advice is to work with what you are ready for. It's best to explore the work in this section in an open way (for example, if you do not feel that your family has affected you, simply ask yourself something like, _If I could sense a family pattern within me, what would I sense?_ ).
We tend to block ourselves from gaining new understandings and realizations, so it can be helpful to approach healing work not only with openness but with pragmatism and logic. No matter how far along you are on your healing path, there is always more work to do and more to learn about yourself and the universe as a whole. Being open and curious will take you far in your quest for personal healing and evolution.

When approaching spiritual patterns, many of us may tend to throw the proverbial baby out with the bathwater—to not take responsibility for our emotions and experiences by simply designating them as a "past life" or "ancestral." In this work, you will realize not only the past life or ancestral influence or pattern but also how you embody such energies in your current life. By understanding how these energies originated, and also how you have taken them on and added to them in your present lifetime, you can heal both the "root" and the "branch." This will allow for full healing of such patterns.

CHAPTER FIVE

Working with Contracts

Just as we may have made written or oral agreements with our work, landlords, or others, we are continually engaged in the creation of energetic agreements with ourselves and others. The difference with an energetic contract is that it is primarily subconscious: we are not fully conscious, when taking on a contract, that we are shifting our beliefs and perceptions as well as our identity and outer reality.

An energetic contract is an agreement that results in a change of perception or a taking on of roles or responsibilities. For example, if a young woman grew up in a chaotic household with a mother who was unable to deal with her emotions, that young woman likely created a contract around being a peacekeeper for the household or taking care of her mother. A child who grew up as the eldest in the household may create a similar contract.
Similarly, that young woman may make a _reactive contract_ about not becoming her mother: not having children, treating her children to a different home environment than she had, or not having children before she is ready to.

A young male may have grown up in a household with an angry, abusive alcoholic of a father who continually told him to "man up." This could result in a contract in which this young male becomes an angry, wounded male whose priority is to achieve a certain level of machismo. It could also result in a reactive contract in which he moves away from anything that would be considered "manning up" and publicly denounces the culture that created the need for men to do so. In a healthy and healed way, this man may become an advocate for what a healthy and vital male could be. Since this contract was created in relation to specific trauma and interpersonal relationships, those will need to be released for this male to get to this place of health.

We are very influenced by the perceptions of others, especially when we are growing up and forming our basic identity. However, we can form contracts at any time. Our openness to the opinions, insights, and values of friends, loved ones, and authority figures plays into many contracts that we have taken on. Some of them can be deep and soul wrenching, such as contracts that we have made to become the exact opposite of a parent, and some of them may seem surprising or even silly.

When I was doing this work, I found a contract based on my junior high school teacher telling me that I wasn't good at math, and that I was lucky that I was good at art. Before this point I had had a math teacher whom I enjoyed and who told me that I was one of the best in his class, and that he was sorry that I was moving (I moved and changed schools at that age). I took on the "contract," or belief, that I was no good at math, and this followed me for many years.
This was not a huge or impactful contract, as some can be, but it still needed to heal because it was creating a change in the way I perceived myself and causing me to be reactive, both about my abilities in math and my identity as an artist. There was no reason for me to have that one authority figure (the teacher) define my reality or who I was. Although I am still not a mathematician of any caliber, the release of that contract resulted in the releasing of the belief that _I was lucky to be good at art because I am no mathematician_.

So let's take a step back and talk about what a contract is in specific terms:

* A belief or understanding about yourself or the world that you have created over the course of your lifetime in reaction to specific events.
* A belief or understanding that has been given to you by another, frequently important, person in your life (such as a parent telling their child that they are stupid, or a lover telling their partner that they will never find someone as good as them).
* A belief or understanding that has been passed down through ancestry, family, or past lives.
  \- For example, an ancestor who was hungry may have decided that they would do whatever it takes for their family to survive. This contract has now been passed down to you, and you are enacting it without consciously realizing why you are doing so.

We form contracts within ourselves in relation to specific events, traumas, or simply information coming our way. We also form contracts with others, and when someone significant says something upsetting to us, we may take that on as a form of contract. We also may form contracts to do the opposite of whatever or whomever we deem "bad"; decades later, we may not realize that a contract we made when we were six to not turn into our parents may be interfering with our quest to find a suitable partner.
We may discover that we have based the majority of our existence on a _reactive contract_, or a contract to do the opposite of what we have seen or experienced.

A contract is a belief that you have taken on. Once that belief or understanding has been accepted on some level, a subconscious agreement will form. This will then color or shape the way you interact with the world, the people in it, and what you believe yourself to be capable of, as well as serve as a shaping tool for your general identity.

Some of these contracts may seem to have good results, such as the child of divorced parents vowing to not have a broken home, or someone whose ancestral background was one of general lack working toward a comfortable lifestyle for themselves and their family. But contracts come from a place of trauma, and anything that comes from trauma creates restriction. Even if these contracts seem to have benefits, we will benefit more if our beliefs and understandings come from a place of healing, rather than pain.

Many of these contracts will come up naturally through the other work in this book. It is always helpful to question our beliefs and what we know to be true, to see if they are creating restrictions for us. The beliefs we hold onto most adamantly, the things we really _need_ to be true, are always a good place to start. It is one thing to have a belief and to stand solidly in that belief; it is another to have a belief that creates significant chaos, judgment, or an inability to allow others to have differing beliefs or understandings from our own.

Although I will share some thoughts about possible contracts here, it's likely that once you get familiar with this concept you will begin to understand how our minds get stuck repeating these contracts inside our heads over and over in order to convince ourselves of their reality.
If something simply _is_—meaning that it is some form of healthy truth, not coming from a place of wounding—we no longer need to spend our mental time and energy focused on it. We also do not need to spend large swaths of time online or in person arguing about it, or convincing others that our belief is valid and all other beliefs are not.

So write down or think about some of the contracts you may have made. You can start by acknowledging what repeats mentally for you. Is it that you are not good enough, that nobody loves you, that you can't do something? One of the most detrimental beliefs we have is that the world (and the people that are part of the world) is out to get us. This usually arises as a result of early childhood trauma or in utero patterns. Understanding that the world, and the people in it, largely don't care about us is a difficult concept to get across to someone firmly engaged in the belief that the whole world is against them.

_Energetically, a contract is an experience (trauma), a taking on of the beliefs of that experience (believing it to be true/worthy of absorbing), and a decision made in relation to that experience. This decision is either to directly take that belief on_ (I am bad at math) _or to act in opposition to that belief_ (I will never be my mother or father) _. These contracts are frequently out of date (formed at an earlier age) and are held onto by our bodies until we can consciously release them._

Some commonly held contract prompts that you may wish to consider:

* _I am not good at_ (insert hobby or area of study here).
* _I will end up (or not end up) like my mother or father._
* _My (mother or father) is this way, so I am this way._
* _I am not worthy, loveable, likeable._
* _The world and everyone in it is against me._
* _The world and everyone in it are out to harm me._
* _I will never find Mr. Right (Ms. Right)._
* _I always need to struggle_ (in general, or for money, for work, and so on).
* _I need to be this way, otherwise this will happen._
* _All men (or women) are like this._
* _I am not a sexual person_ (or judgment for being "too" sexual).
* _My religion growing up_ (or current religion) _said that I was this, or needed to be this way._
* _My sexual orientation and the way I choose to express myself are wrong/bad._
* _My culture has machismo. Men are supposed to act this way._
* _I can never succeed or excel at what I am good at._
* _I will always have a soul-sucking job._
* _I can never make enough money to support myself._
* _I will always go hungry._
* _I am not one of them._
* _I was not meant to be in this world._
* _I will never be healthy or strong._
* _I won't_ (or cannot) _open my heart._
* _I will show_ (put name here).

Look at people whom you seek to impress and the ways in which you feel you need to prove yourself to them. If we feel comfortable and confident, we no longer need to prove ourselves to the outer world. Look at the "loops" of what upsets you in others, what you find yourself thinking about and talking about most often. While there is much in the way of healing work that goes beyond simple mirroring (what we see in others is an expression of what is unhealed within us), what we consistently say and do, as well as where we consistently find fault in others, is a good place to look for contracts.

There are a lot more contracts, both big and small, that we create for ourselves or take on through our interactions with others. It should be noted that we may need some of our contracts. Ideally we wouldn't, but as we formulate our reality based on our beliefs to a large extent, if you are currently in a terrible job that doesn't pay enough and are worried about grocery bills, it may not be the right time to release a contract about not going hungry. We have enough stockpiled contracts that we can work on others if there has not been enough space to have clarity or put some of our contracts in the past.
You would be surprised, however, how clearing up contracts about self-worth, or the ability to have or hold a career, can impact something like financial status. Contracts are typically worked with in conjunction with the other work, but they can be released using the same method of accessing and working with the consciousness of the body.

Releasing Held Contracts

By this point, it is likely that you have thought about a contract you may have. If you do not have a specific contract that you would like to work on, you can also ask your body deva to show you where in your body a contract may be held.

* Ask your body deva to highlight or show you where this specific contract is held in your body (or where a contract is, if you are keeping things open or do not know of one).
* Do a body scan and note areas of your body that you are drawn to or that seem highlighted to you.
  \- If multiple areas show up, ask your body deva which is most important to work on, to work on first, or to work on today.
* Ask the body deva, or the individual consciousness of the body part that you have found, what the contract is that is held there.
* Simply sit until you get a clear sense of what this contract may be.
  \- This may not be a clear understanding. It may be a sense of knowing, a picture, or something that doesn't compute with what you have experienced in this world. Simply note whatever comes up, no matter how strange.
  \- Continue to ask this part of your body for more information _(Tell me more)_ until you get a sense of what this contract may be.
* Ask the consciousness of your body, _What belief or understanding about myself, the world, or about people (men or women) is held here?_
  \- If what comes through is unclear, again ask to be told more.
  \- If there is resistance, you can move to working with the resistant part of yourself.
  \- If there is a lack of clarity, you may simply need to do a few sessions with this body part (you can return another day) to find out what is going on.
Often we need to "digest" information when it becomes conscious, and so whatever we can become conscious of concerning a contract, even if it seems vague, is a wonderful starting point.

* You will ask the consciousness of this body part to tell you about the circumstances of what happened.
  \- Who was involved?
  \- How old was I?
  \- If this were a scene on a television, what would be going on in that scene?
* What is the contract here?
  \- We are asking again for purposes of clarity, and to get as close to the wording of the contract that we created as possible.
  \- Remember that a contract is often a belief that you have on some level agreed to and taken on. This can be taken on _directly_ or _in reaction_.
  \- This means that if we failed to climb the rope during gym class and got made fun of, we could have decided that we were lousy at physical pursuits. It may have also caused us to decide to never be made fun of again and to become physically focused so that we could climb that rope faster than anyone else in our class.
  \- Although this reactive belief has resulted in the person becoming physically fit, it is still in reaction, or trying to prove to the gym teacher that they are worthy. By healing this belief, this person no longer needs to prove himself and is no longer in reaction to the original situation.
* What was the decision that was made?
  \- We have the situation or trauma that occurred, a belief that was expressed, and then an agreement that was formed. This agreement was a decision to be (or not to be) a certain way.
  \- This agreement could also be a _relational_ agreement. This means that we could decide that the world is scary, that all men or women act a certain way, or that all authority figures are invalid.
* If the information that you are receiving does not correlate with your life experiences here (it was passed down to you), you can either choose to continue if you have the capacity to pick up information, or choose to inquire as to where the information is coming from.
  - Ask your body deva or the individual consciousness of the body part if it is _ancestral_ (family) or _past life_.
  - This is only to be done if there is some sort of indicator (a sense of knowing, visuals, or other information) that has led you to believe that this is coming from a place beyond your experiences of this world.
  - Remember, when you ask your body slowly, pausing between each choice, there will be an indicator (typically a sense of knowing or a change in energy in the area of the body you are exploring) that will show which choice (past life or family/ancestral) is correct.
  - Occasionally, there are multiple contracts. If this is the case, you will ask if any are from your timeline, or your experiences here, and work on those first.
  - If it is from beyond your own experiences here, you still want to get a sense of what was happening, the beliefs created, and the agreement that was formed. The work in later chapters will guide you to and through this process in more depth.
* Once you know what the contract is, and largely what age, you will consider whether you need the contract. Do you need it _fully, partially, or not at all?_
  - Ask the consciousness of your body for guidance about this. We would all like to release anything that restricts us, but we may not be fully ready to.
  - Respecting the body and offering compassion will allow contracts to change more readily than forcing things to leave.
* If you do not want the contract at all, you will visualize yourself at the appropriate age to the best of your abilities and let her know that you understand why the contract was formed, but that she no longer needs it.
  - You may choose to segue into inner child healing work to fully heal this situation and integrate the inner child.
* You will then let your body know that you are no longer the age that you were when you created the contract.
* Ask your individual body part/consciousness if it realizes that it is holding on to an outdated contract.
  - Remember that when we hold onto trauma, we get frozen and fractured, which means that the individual consciousness of a body part may not understand that you are no longer a six-year-old.
* Reiterate that you understand the reasoning for the contract, but that the contract is now null and void because you no longer need it in your present-day reality.
* Ask the individual consciousness of the body part you are working with to clear the contract.
* Ask the body deva (the consciousness of your body as a whole) to release the contract more fully, as well as to integrate that body part with the rest of your body map to whatever extent it is able to.

Changing Contracts

It is easy to be disheartened if you hear the word "no"—that you are meant to keep the contract, whatever it may be. If this is the case, you can ask the consciousness of that body part what sort of preparatory work may be necessary in order to work with it. In most cases there will either be a release of the contract in full or negotiation room in the contract. Just as a lawyer may negotiate clauses in a lease or work contract, we can make our contracts more freeing and present. Although we will not go through every variable, consider how there may be some form of "wiggle room" in the agreement that has been created.

For my example of being told that I was terrible at math, I found myself reworking that contract at first to be something more open, such as, _I am not great at math, but certainly am not terrible at it_.
It sounds like not much of a change from, _It is lucky that I am good at art, because I am terrible at math_, but what I did was take away the "authority" figure (my math teacher and what he said to me) and the sentiment around my artwork, which was creating a certain amount of guilt for me when I engaged in the creative process. Eventually, I was able to work with this belief again until my body no longer held any belief related to this, I no longer had emotionality regarding it, and the contract overall was dissolved.

I am still not great at math, even if we were to look at this situation with total clarity. That is not the point here; the point is that the trauma, however small it might have been, created restrictions and baggage in terms of who I thought I was, and changed the decisions I made about certain factors in my life because of an unhealed situation and an out-of-date contract I had taken on.

**Tiffany**

A more complex situation around contracts would be to consider Tiffany, who came to me because she could not get into a relationship. They all failed quickly, and she found that the men she dated would then get into another relationship and get married. She desperately wanted to find a partner to share her life with.

She found a contract from her grandmother that told her that, _All men will just leave you in a ditch._ Her grandmother raised four children by herself after her husband left her, and she recalled her grandmother telling her frequently as a young child how men acted and what they wanted from women. Based on her grandmother's words, Tiffany made an agreement with herself that she would never allow a man to do that to her. This contract was formed in her pelvis, an area that she also complained of feeling pain in during sexual intercourse. She realized that this contract meant that she would never get married or have kids in her adult life, as that would be the only way to truly ensure that no man would leave her as a single mother.
She was unwilling to fully release the contract, as she was cautious about men after the experiences that both her grandmother and mother had gone through. She reworked the contract to state that she was an adult now and that she would proceed cautiously with men. She then further worked with the contract to change _All men will just hurt you and saddle you with responsibilities_ to _I am open to relationships and have the adult capacity to take care of myself and any children, man or not._ Eventually, she realized that the agreement she had formed was still restrictive, and that as an adult she was in a place to enter into a relationship gradually to ensure that it would be a correct match.

Tiffany did further work with the consciousness of her heart to release past pain and to open it to the possibility of loving others, even if doing so might mean that she would get hurt. In opening ourselves we always have that possibility, especially in the heart area, but if we are completely closed we never experience the joy, love, and "good" stuff either. Understanding all of this as an adult, rather than as a child at her grandmother's house, allowed her to fully release this contract and feel more freedom and opportunity in dating.

By working with the consciousness of your body you can compassionately inquire as to what sort of contracts you may hold within and how they can be reworked to meet your current mentality. By then working further with inner children, ancestors, and family patterns, you can come back to the contract work, or work with contracts in conjunction with that work, to further release them.

Please be compassionate with yourself while doing this work. Releasing a contract is a big deal—the sorts of agreements we have taken on or formulated create what we know to be truth. Although coming to a greater truth results in a letting go of wounds, held energy, and outdated beliefs, it does result in a change in reality.
It is always a good idea to be patient in the pursuit of healing and releasing what are often long-standing beliefs, or beliefs that formulate our experience of the world. Any compassion we show ourselves can only result in greater healing and clarity.

**Henry**

Henry grew up as the youngest of three children. He was a "surprise" baby and was born several years after his brother and sister. His brother, Jeff, seemed like the golden child to him—everything Jeff did, Henry wanted to do as well. His parents seemed to agree, and dinner table conversation, even when Jeff was at college, was about what sports Jeff was excelling at, or how proud his parents were of him doing well at school.

Henry could not meet those expectations. He was a shy child who preferred the company of animals and a few friends rather than a large crowd. As a teenager Henry was having difficulty fitting in and decided to make a contract that he was going to act like his brother in order to be popular. There was an additional contract based on the idea that his parents loved or favored his brother more, and so who he was would never be good enough. He started trying out for sports, went to the mall on weekends, and found a girlfriend.

Twenty years later, Henry realized that he had no idea who he was, and that he was doing many things in his life simply because he thought that that was what he was supposed to be doing. He found a contract in his solar plexus that revealed that he had put on a mask to survive, to be liked, and to be loved equally by his parents. Henry realized that as an adult he no longer needed this contract and asked it to release. He did further inner child work with his teenage self, assuring his inner teenager that it was okay to be who he was.
After his solar plexus released this energy, Henry's digestion improved, he felt more secure about who he was and what he wanted to do with himself in this world, and he was able to more deeply explore the grief that came from feeling less loved by his parents than his brother was. He realized that it was okay to be an introvert and enjoy the company of a few friends, and ended up buying a cabin in the woods to be near the nature and animals that he loved so much.

CHAPTER SIX

Healing In Utero

Our time in utero sets up our basic relation to the outer world. How we feel about being here, and whether we feel wanted or not on a basic, primal level, often has energetic roots in our in utero experience. If we were loved, wanted, expected, and fully welcomed into this world, we are more likely to feel loved, wanted, and fully welcomed by the earth, as well as by the people on it. If there were significant health issues, emotional issues, confusion, or trauma surrounding the pregnancy, we may be unsure of our presence here in a human, physical form. If our parents did not want us, we are likely to live our entire lives believing that nobody likes us, that we are unlovable, or that there is something deeply wrong with us. If the pregnancy was threatened or tumultuous, or there was severe stress resulting in states of fear, we are likely to feel as if the world or the people in it are out to get us, and that it is not a safe place for us to be.

Our time in utero can be a time of great safety, love, and warmth—a time when we feel deeply connected and held by our mother. It can also be a time of great fear, chaos, and deeply held emotions that can be directly transmitted to us from our mother. Energetically, there is no barrier between our mother and ourselves when we are developing in utero.
We are not only having our own experience, developing and growing to eventually meet the world; we are directly taking on the emotions of our mother, unable to discern where the emotions are coming from. Partly, this is for obvious reasons—if the physical well-being of the mother is threatened, the pregnancy will have a high likelihood of being threatened as well. But what I am referring to is more of a "sponge-like" capacity in which the in utero environment becomes a sea of the mother's emotions, thoughts, and experiences. This sea can be calm, loving, and nurturing on the deepest levels, but it can also be tumultuous, scary, or life threatening to the child developing within it. We are fully in a watery environment in utero, and this sea absorbs everything from its outer environment.

Our concept of the world comes from our mother, and this time in utero, as well as the energetics of this relationship between child and mother, sets us up for our basic relationship to the world and the people in it. Any trauma we experience in utero or in the birthing process makes a significant imprint on us. During our in utero time we are a grouping of energies gradually coming together. As we draw nearer to our birth, the pattern, or web, of these energies consolidates so that we can become a living, breathing, separate human. Our time in utero forms the "container," or outer webbing, for how we react to the world and the people in it.

Various signs and beliefs will point to this time and its need for healing in your life:

* Belief that the world is against you
* Belief that the people in the world dislike, hate, or are against you
* Belief that you are separate from the earth
* Belief that you come from "other"; myths about being from "elsewhere"
  - Even if this is spiritually true, we do create a lot of possibly misleading mythological constructs out of deep struggles, including our in utero experience.
* Belief that the world is not a safe place
* Deep, unfounded fear about your survival
* Self-hatred
* Belief that one must "take" as much from the world, and the people in it, as possible
* Lack of connection to the earth
* Inability to connect to others
* Feeling of being totally alone or without support
* Knowing of an in utero history of difficult pregnancy, birth, or trauma

Some of these patterns point to family, ancestral, and other patterns as well. Our individual reasons for being, and why we enact certain beliefs, can come from a multitude of places. However, this is an aspect and time of needed healing for many that is not discussed or given regard by many body-mind practitioners. If we had a difficult in utero experience in which we never connected and received the nurturing that we needed from our mothers, we are also going to have many experiences in our lives that will add to, or be colored by, the beliefs that were created during our time in utero.

It should be added that pregnancy is rarely an ideal process; it is normal to have a certain amount of tumultuous energy for the duration of the pregnancy. If the mother is reasonably healthy, many of these difficulties will be worked through or not experienced as threatening by the child. If the mother is significantly out of balance (whether physically, mentally, emotionally, or spiritually) or is lacking in support herself in a significant way, these imbalances are much more likely to have an effect on both mother and child.

If you are experiencing difficulty working with this on your own, or wish to find a practitioner to assist you, seek out Biodynamic Craniosacral Therapy or Castellino prenatal and birth therapists, who can help immensely in processing these embryological experiences. Sensory deprivation tanks also mimic the in utero experience and can assist in conjunction with this type of work.

Working with the In Utero Experience

This work can be a bit difficult to tap into.
There may be natural resistance to exploring this time, or we may find that ages that we can consciously recall (such as when we were six or twenty-five years old) are easier for us to access. We obviously do not have conscious recall of when we were in utero, so working with this subject matter requires a bit of practice as well as mental openness. As with all things, if this work doesn't resonate with you, simply move on to other chapters. It is likely that through further work you can return to this work at a later date and access it in greater depth.

You will first access the body deva. Once you have done so, you will state the intention of working with your in utero experience. You will now visualize or sense yourself (at your current age) surrounded by water. Even submerged in this water you will still be able to breathe comfortably. As this water surrounds you, you will feel yourself becoming smaller and moving into an in utero state. Some people aim for a specific month or time when there was known difficulty, but I suggest leaving things open in order to more fully access what may need to be healed.

In this watery state you will sit with the experiences arising and ask your body deva the following questions for contemplation:

* Do I feel warm?
* Do I feel safe?
* Do I feel nurtured?
* Do I feel connected?
* Do I feel confined?
  - The in utero space should be one that feels slightly open but still safely contained. If it feels claustrophobic, tight, or restrictive, that would point to something out of balance.
* Do I feel as if there is enough "water"?
* Am I comfortable and nourished by what surrounds me?
* What emotions do I sense?

Now ask your body deva to highlight or show you where you may hold any imbalances related to your in utero experience in your current, adult body. Explore each body part individually, noting what you feel physically in each part. You are looking for basic physical and energetic sensations in each part.
These sensations will give you a baseline of how and where in utero energies are held static in the physical form, anchor the work in the physical body, and allow you to notice any healing that occurs by the shifts that take place. Simply acknowledging what comes up can be deeply healing. You can ask the body deva (or the individual body parts that hold these energies) to release, shift, or change when you become conscious of them.

You may also choose to ask for changes and shifts in your in utero experience. For example, if you didn't feel enough warmth or nurturing, you may wish to visualize yourself receiving that warmth and nurturing. This can result in beautiful changes; however, in many cases further work should be done to fully reconcile the in utero experience.

Advanced In Utero Work

In utero work can create an environment where deep emotions and felt experiences from that time can emerge. Becoming conscious of something and creating a visual for the experience needed can result in profound shifts, but more work is often needed in order to fully heal. This remaining work will involve communicating with the consciousness of the child (or cells, or group of energies coming together, depending on the time worked with), communicating with the consciousness of the mother to provide differentiation, and working with the in utero grid, which will be covered in the next section.

You will do the work in Part One, allowing yourself to come into that watery state, but instead of communicating with the body deva, you will ask the child (or grouping of cells or energies forming) the following questions:

* What do you need to feel safe?
* What do you need to feel ready to emerge into this world?
* What do you need to feel in order to experience the world (and the people in it) as being supportive and loving toward you?
* What do you need to truly feel wanted by your mother?
  - By your father?
  - By the world?
  - By the earth?
Remember that our connection (or lack thereof) to our mother is how we feel about the earth. The purpose here is not to create a huge narrative or mythic story about your existence in order to increase wounding or feelings of separation. I say this because it is easy to be carried away by what we discover and to create a framework for our lives that is not helpful for our healing process. The purpose in discovery and sitting with these questions is to find your individual reasons for why you may not have felt excited or ready to greet the world, or what forces may have caused you to feel unwanted or unsafe.

You will now ask the following questions of your in utero self:

* How do you feel about the world?
* How do you feel about your mother?

If the answer is anything but feeling safe, connected, and ready to emerge into the world, there is work to do to heal the situation. You will begin by working with the consciousness of the mother. To do so, you will consider that you are being held in this in utero state by a greater consciousness or container; this container or consciousness is your mother. You have access to this consciousness because in an in utero state there was no separation between your growing consciousness and the consciousness of your mother. This separation doesn't happen until around the age of six, when the child begins education and greater socialization into the outer world. For some people, this separation never happens, which would point to the need for healing of an inner child at the appropriate age.

You will sense or feel what being surrounded by the energy of your mother feels like. Does it feel comforting? Does it feel well contained? If your mother was experiencing trauma or overwhelm beyond her capacity to cope, those emotions are being transmitted to you. Feel any emotions that may be here. For most people, these emotions will be fairly strong as they work with this subject.
Sense whether these emotions are coming from you (in utero) or from the "container" around you. They may be coming from both places, but acknowledge if any of the emotions are coming from your mother. This is important because those emotions are now being held within you; by acknowledging that they do not come from you, they can be differentiated and released.

Now sit with any thoughts that may be coming from your mother. Picture a greater "egg" or container around you in utero. See or sense what emotions, beliefs, or understandings may be coming your way:

* What emotions are coming toward you from this egg?
* What thoughts or beliefs are there?

Beliefs can be harder to pick up than emotions, but when simply sitting in this state and gently questioning, it is likely that some thoughts may come up that you repeat to yourself, such as not being wanted, loved, or supported. Pregnancy is an understandably emotional and difficult time for the mother, and emotions and thoughts tend to be amplified and transmitted to the child. While in utero, the baby does not have the skill to differentiate or understand thoughts and feelings coming from the mother. If, for example, the mother contemplated abortion, the child may have come to feel unsafe, and as an adult may be unable to understand why they have never felt wanted by the earth. In addition, the mother may be carrying a great deal of fear over her pregnancy. Much of this is natural, especially for first-time mothers, but if there are past instances of miscarriage or threatened pregnancies, this fear can be heightened.

This is the type of patterning that can emerge when we use this work to explore our time in utero. Realizing that emotions, thoughts, and beliefs are not yours can result in shifts, but you may wish to communicate with the consciousness of your mother to more fully resolve the situation:

* If you were to ask her what she would need to feel supported, what would she state?
What sort of support may she need from her partner, her parents, or the community at large?
* What would she need to feel safe?
* What would she need to be ready to bring you into the world?
* What would she need to be willing to connect with you?

You can visualize or sense her receiving these things. Doing this work can be a bit tricky, as we may not believe in the simple power of our offering such support, or that it can result in a significant shift. It's not that what we are doing changes what happened or erases any aspect of the story. If we didn't have proper nutrition, or had a mother who was deeply fearful and lonely, that is still part of the history that has shaped our life. What we _are_ doing with this work is clearing the held emotions, thoughts, and beliefs that emerged as a result of our in utero experience. As adults, we do not need to hold onto the experiences and emotions of our in utero self any more than we need to hold onto the emotions, thoughts, and traumas of our mother during her pregnancy. By becoming conscious of why we are storing these emotions, thoughts, and beliefs and offering a way to release them, we allow all aspects of ourselves to become fully conscious and to move away from an imbalanced, frozen state to align with our adult selves.

It would be rare on the first try for the mother to be willing and able to provide everything the child needs. This is not a practice where you force such things to occur; you are working compassionately not only with your own consciousness but with that of your mother, which is imprinted within you from that time, and inquiring what she may need, what you may be willing to offer her, and what she may be willing to accept. Any shift toward her feeling more balanced will be transmitted to the child. As we do this work, as well as further work with ancestors and family, it should be noted that we carry the forces and dynamics of our time in utero within us.
The ripple effect from our healing can deeply heal our family, including our mothers, but we are doing internal work to heal opposing or restricting dynamics within ourselves. We are not offering healing to our real mother, but to the held energy of "mother" that has frozen within us due to trauma from the in utero experience. For those of you who have had especially difficult experiences with your mothers, this may help you reconcile offering healing to them. Each person has their own path, and although this work can shift dynamics in relationships, it is always the decision of an individual to change, shift, or heal something within themselves.

Whatever work is done, you will now return to the consciousness of the in utero state and ask the child to acknowledge that their environment may now be different due to the mother shifting. Ask the body deva to shift or change the in utero experience. Check in with how that child is doing now, and say thank you to all involved.

The Heart and Uterus Connection

While doing this work, you may notice that you are not feeling any connection from mother to child. This results in feelings of alienation, of not being supported, and other patterns that are enacted in the outer world once born. In Traditional Chinese Medicine, there is an energetic channel, or meridian, within the body that connects the heart and the uterus. This connection energetically links the heart of the mother to her growing child during pregnancy. If this connection is severed, or never developed, the child will not feel loved, wanted, or supported by its mother. Once born, the child is likely to develop a life-long pattern of not feeling supported or wanted by the earth or other people. The experience of not feeling wanted or supported will lead to a lack of grounding, or of willingness to ground and receive connection to the earth, which will result in feelings of separation, fatigue, and an inability to connect to people and the world at large.
This is a large pattern, one experienced by many people—we can see the worldwide ramifications of our separation from our mothers and the earth. If we are able to restore this connection and heal the in utero experience, we can become more connected and acknowledge that we need to care for and connect to the natural world, rather than simply take from it. When we start doing this work, we may find that we do not sense any sort of physical or emotional threat but are not feeling any kind of maternal connection. We synchronize with our mother and are intended to connect with her in a deep, emotional, and bonded way. We connect to her heart—in fact, the heart is the first organ formed in utero. If we do not feel a connection with our mother, this can result in feelings of not being able to connect to oneself, others, or the world. Our heart is the place where our deep emotions arise; this is one of the reasons why the emotional status of the mother directly impacts the child. As we are forming within the consciousness and the "container" of our mother, her experiences and emotions directly influence our own formative state, and are a large part of the in utero grid that forms, which we will explore next. As you do in utero work, you may discover that this heart connection is missing. If there is any sense that the connection has not formed, or it may not be fully connected, or there is something wrong with the connection, it is worth exploring this connection between your mother's heart and your own. If you are not feeling warmth, nurturing, and completely loved and cared for in utero, it is worth working with this connection to see if it could improve. While doing the previous work, you will want to question if this connection and the synchronization of energies is there, and whether it is fully healthy and operational. 
If it is not, you will check in with the consciousness of the mother to see what she would need in order to be _able and willing_ to create such a connection. There is a distinct difference between having the capacity to do something and the willingness to do something, so it is best to inquire about both. You will then offer her what she needs by visualizing her receiving the support, guidance, or love she never received. Although there are other reasons for doing this work, those are the most common. If we have never known heart-centered connection with our own parents, we are unlikely to learn how to offer it to our own children. We also may be feeling isolated, hurt, or in a physical or emotional state in which there is reason for disconnection from the pregnancy. This is not a forceful event, and the consciousness of the mother is unlikely to move from a state of total disconnection to complete and undying love and connection to her child. Allowing whatever opening there is to occur, even if the mother is only willing to try to do so or offer a small amount of love and connection, can drastically change this type of patterning. If there is any willingness on the part of the mother, you will then ask for the in utero self (you) to receive this connection to whatever degree they are able. You will then ask the body deva to facilitate this connection from the heart of the mother to you in utero. Once there is a feeling of connection, it is likely that both mother and child will allow more connection and flow to develop. You may wish to go back later and inquire as to further connection, and again ask the body deva to facilitate this connection. If you feel as if you are at an end point with this work for the day, you will say thank you to all involved (including yourself in utero) and return to your current, adult body. 
You will then ask that the body deva release and integrate this experience in your current body, and give permission for this change to take place. If appropriate, you may wish to go to the individual body parts where you sensed that energy from the in utero experience was being held, and ask the consciousness of those parts to recognize that they no longer need to hold the energy from this experience and to release it.

Working with the Grid

We are formed by a matrix of energies in utero. These energies come from several places: our history (what was happening in the world while we were in utero), our culture, our location/place, as well as cosmic or spiritual energies of varying types. More significantly, though, we are formed through the energies of our ancestry, our father, and above all, our mother. Our mother, the person who carried us, forms the greater consciousness, or container, in which we develop. Everything from her emotional state to how she felt about the pregnancy develops deep imprints within us, as well as the baseline for our matrix.

Our matrix shapes how we view the world, our base emotions, and how we relate to the world. On a simple level, this can mean that if our mother had a lot of fear while we were in utero, we may feel a base emotion of fear and anxiety throughout our lives and never have the conscious ability to understand where it comes from. This will lead most of us to think that this fear is without cause, or to point to other, more rational causes that have taken place in our known and consciously recollected lifetime. If our mother carried deep grief as a result of previous pregnancies, or was unclear whether she wanted to keep the pregnancy, we may find ourselves surrounded by a cloud of grief in our lives without being able to point to why.
If we had an in utero experience where we were fighting for life, lacked nutrition, or our mother was unable to emotionally connect to us, we are likely to experience that as a difficult and continually recurring theme in our lives. This in utero matrix sets up how we view the world as a whole, how we view other people, as well as how we view ourselves. As a result of viewing things through this matrix, we will spend our entire lives reacting to others and to the world (and Earth) as a whole as if they were our mother. This can either be the joyous realization that the world is a nurturing place for us, or the belief that the world and the people in it are looking to harm us or take something from us. In reaction, we will then look to take as much from the earth and from one another as we can. The way in which our matrix has been created will then be reflected in the world around us. This is not because the world has changed and become inherently evil or selfish but because the person who has an unhealed matrix is operating under the belief that the world is this way, and so will perceive everything coming at them this way. After you have worked with this pattern, it will be easy to see what an impact it has on the world, and how disconnected we are from ourselves and one another (either partially or fully) due to unhealed energies that emerge from in utero experiences. This means that if our matrix is composed of threads of fear or struggle for survival, this is likely what we will experience in this world. To fully heal from the wounds developed in the in utero state, we should not only consider the emotional, mental, and spiritual connections and needs for healing but also the matrix that has developed, so that the beliefs and understandings that created this matrix can be released.
[Picture: The In Utero Matrix]

**MEDITATION: Part One – How to Release Negative Imprints from the Matrix**

* Begin by putting yourself in the "watery" state from the previous section, and ask the body deva to help you with your intention of discovering and healing the in utero matrix.
* At first you are likely to see a straight, dark line going from the back of your tongue and throat to the belly button. In some cases this line may go below the belly button.
* From that straight dark line you will sense a globe, oblong, or another shape surrounding you.
* Sit with this until you are able to sense both the straight dark line as well as the energy surrounding you. These may not have connected to one another yet if we have a lot that is unhealed from our in utero experience.
* As you sit with this energy, it is normal to feel a bit spacey. Simply sit with it at first and see if any emotions arise. You can breathe out any emotions through your mouth, acknowledging them as emotions associated with that experience that can now leave.
* If nothing arises, you may simply need to restate your intention, or you may actually be trying too hard to access this information. Take a few breaths in and out and let the information arise rather than searching for it. Settle back into your midline, into the body deva, and see what emerges. The information will arise when it is ready to do so.

If information is difficult to access, it may be that you need to do more initial work with your in utero experience or with the tools in the rest of the book. We come to consciousness when we are ready to do so, and if you really find yourself having difficulty accessing this information you may wish to try asking the body deva to show you where in your body you may be holding resistant energy to this process, or simply, if there are any patterns in your body that you need to heal prior to doing this work.
You would then do a body scan and talk with the individual consciousness of that body part, as well as the body deva, to resolve whatever is held there. It is common to start to see or sense birth or in utero imagery when accessing this state. A feeling of water around you or within a bubble may form.

As you sit with this energy, you will begin to ask what this matrix contains. How do you relate to the world? What do you believe to be true about the world and the people in it? Allow this information to come from a place of knowing rather than a logical (head-oriented) place. Answers should feel as if they are coming from that line, or a place deep inside, rather than from your head/brain. Allow yourself to be truthful and transparent about this. Do you feel victimized? Do you feel like the world is out to get you? Are others just looking to take from you? Do you believe that the world and the people in it can support you? Can they nurture you? Do you want to be here on Earth? Do you want to be in physical form?

A lot of people talk about ascension, or being from another origin, and it is rooted in birth and in utero matrices. If we feel as if we were not wanted, we will feel as if the earth and the people in it don't want us. Even if we are originally from somewhere else in terms of spiritual origins, in our present incarnation we are on Earth, in a human form, and that should be our primary focus.

You are welcome to take a break and write/journal, or to continue...

**MEDITATION: Part Two – How to Release Negative Imprints from the Matrix**

Now that you have your thoughts and realizations together about how you perceive the Earth and the people in it, you will again go back to that line that you discovered in Part One (from the pelvis or belly button to the back of the throat).

* You will now put some focus on the outer "bubble" or matrix that surrounds you.
If you have had a difficult in utero experience this may not appear, or may appear broken or weak in places (or overall). You also may not be able to sense it at all.

* You will now return your focus to that line and ask to see the matrix unfold from that place. If you sit with this, you should notice or feel electricity, lines, or an awareness of the space around you filling up with energy. You may also notice heaviness or emptiness.
* You will again return to center (the dark line) and ask this matrix to shift or balance in a new, healthier way. You do not need to force this or journey and fix anything. You will simply acknowledge this matrix and ask that it reform in a healthier way.
* You can also ask it to expand, lighten, or energize.
* You can also ask what you need to know, to hear messages from this matrix, or simply sit with it for a period of time.

Even acknowledging this matrix will allow it to change and shift in a positive way. Over a period of time, it will shift and change and allow us to experience ourselves and the world differently. If we are willing to allow this to occur, it can be a drastic change. But even if we are not ready for a drastic change, sitting in realization of this energy, and how it might have colored how you perceive the world, can allow significant shifts in your perception to take place, as well as healing opportunities to occur.

**MEDITATION: Advanced Applications**

In cases of chronic depression, apathy, or difficulty being here, it is likely that the matrix does not have enough energy in it. It is not vitalized or connected. This matrix is intended to connect us not only to our mother but also to ourselves, to humanity, and the natural world as a whole. While exploring our needs for healing in utero, we may also find that there is a part of ourselves holding back, or that it did not fully incarnate in our human form. We are intended to be connected energetically to the earth, to one another, as well as to Spirit.
If we are missing one of these connections, it is hard to be healthy and balanced enough to feel vitally alive, or even to have enough energy to go about our day. At the appropriate time, when we have healed enough of our own matrix (when it feels open, balanced, expansive, and has some energy to it), we can make space for it to connect. If we are in a state of chronic depression or are unable to vitally explore this matrix, we can connect as much as we are willing to. I often suggest connecting to the earth matrix at first. There are many methods of grounding that can help to start this process. In my book, _The Spiritual Awakening Guide_ , I offer a grounding technique where you ground by sensing and visualizing yourself as a tree rooting into the earth. When it feels right, you will again feel your matrix as best you can, then like a set of reaching branches, roots, or a grid, you will ask internally if you are ready to connect to the earth grid a bit. If the answer is no, be patient: you can either let it go, or you can ask if it is willing to connect a little bit to the earth. If the answer is yes, you will ask your matrix to reach out and become a part of the matrix of the earth, as little or as much as it is ready to do. There should be a palpable sense of connection or shifting of energy if this is done right. At first, this matrix might come through your feet, but in time can reach out, so you are a part of all directions and sort of "clicked in." You will then ask for as much nurturing and support as you are ready for from the earth matrix. Start out small. Picture this energy as electricity, color, or just a palpable sense of something flowing into you. Work with feeling this matrix and asking for support and nurturing from it until you can feel a current of energy flowing into and through you from it. In time, if you are ready, you will feel more energized from connecting in this way. 
You may find it easier to connect at first to a specific aspect of "Earth" energy, such as a tree in your yard or a favorite spot in nature.

If you feel as if a part of you has remained behind, or did not fully incarnate, you will now go back to the previous sections and the perceptions of the in utero self and find out what they would need to be fully ready to be born. Talking to the in utero part that does not wish to be born can be facilitated through the body deva in order to find out what it may need to be willing to fully incarnate.

It is important to realize that with advanced subjects and energetic structures like this, there is always a pull to "do" something more, or to go faster with work like this. This is deep, transformative work that allows us to change who we are and how we relate to this world. By acknowledging this energy and simply sitting with it, it will transform. We do not need to transform everything right away, or overnight. By allowing ourselves to gradually, over time, sit with this matrix and explore the questions of how we operate in this world and how much nurturing we allow ourselves, we can gradually open as well as heal this matrix.

Prior Pregnancy, Abortion, and Loss

The experiences of the mother in relation to past pregnancies and childbirth can have a large effect on the experience of subsequent children. If a mother experienced the loss of a child, the experience of grief is understandably likely to remain. If there was an abortion, even if it was the correct decision on the part of the woman at the time, it can similarly create grief. Stillbirth and high-risk prior pregnancies, as well as becoming a parent for the first time, can create understandable emotions of anxiety and fear in the parent-to-be. When exploring the in utero experience, realize that some of this information may come up.
You may already have knowledge of it, or you may not be able to see how your mother's prior loss before becoming pregnant with you may have resulted in grief you currently experience. Abortion is always a tricky topic, but because every part of us has cellular consciousness, any trauma, beliefs, or emotions surrounding the awareness of a pregnancy that have not been fully healed can be a vital place to explore if we are ready to do so. If you have had a prior loss, abortion, or difficulties in pregnancy, it is well worth doing the "inner child" work on that topic to see if there are any held energies within yourself in relation to it.

As we explore this, we may wonder how any of us can have a good relationship with the earth and one another, as the ideal pregnancy, with the appropriate amount of emotional support and proper nutritional, physical, emotional, mental, and spiritual balance, is rarer than one might think. How we were brought into this world certainly can create a lot of deep-seated energetic and spiritual issues for us. But please remember that we do have a natural capacity for handling even moderately challenging events, and that pregnancy is naturally a time of chaos for the new parent or parents-to-be.

Being compassionate toward your mother is the appropriate approach to this work, realizing that in most cases she did the best she could with the resources that she had. This work is never about blaming or creating a polarity in which someone is the villain. Leaving aside true psychopathic tendencies, it is easy to understand how a mother who had unprocessed trauma, and who lacked support and appropriate adult tools to handle the situation, may have experienced a non-optimal pregnancy. Be kind—both to yourself and those around you.
Understanding a situation doesn't make it okay, or mean that it was right or fair that it happened; by the same token, healing the situation doesn't mean that the situation was good, or needed, or that you need to call your mother after this work and give her a hug. What it does mean is that you have learned in an adult capacity to stop reacting as if you were still in that state, and to release whatever was frozen or separated at that age from its pain. You have released the force, or energetic polarity, that was creating division within you. You have learned that the emotions can release, that the beliefs created out of pain can be healed, and that you can move forward in greater wholeness and connection to yourself and the world.

The Mother Wound

As we emerge from our mothers, they are often our greatest source of wounding. This can happen in utero; it can also happen through our early childhoods, as we are entirely dependent on and enmeshed with our mothers for survival. There is an old folk saying, _Give me a child until they are seven, and they will be mine forever_. Our development on all levels—socially, perceptually, physically, mentally, emotionally, and spiritually—creates a blueprint of who we will be in this world. If we do not move through the stages of healthy development, we can stay frozen in this state because our needs have not been met. Although sad to think about, we will then spend our lives frozen in an infantile state, continually looking to place people we meet in the outer world in the position of "mother" in our attempt to heal this wounding. We may also move in the opposite direction, creating a contract in which we deny anyone who takes on a role similar to "mother," such as teachers, those in authority, or really anyone who could provide us with some level of nurturing. The mother wound can be formed through severe trauma, loss such as death, or absence. It can also form in those who were adopted or born through use of a surrogate.
It can also form from lack of bonding or an energetic lack of connection. There may not be severe or noticeable neglect or abuse in these cases, or even anything in the early childhood that someone may consciously complain about, but there was an energetic lack of synchronization between mother and child, which meant that the child on some level registered that they were not fully receiving what they needed. In this case, the mother may have been going through post-partum emotional or hormonal shifts, may never have experienced synchronization with her own mother and so does not know how to embody it for her child, or it may have taken her a while to bond with her child after the birth. As we move forward to work with family and ancestral patterns, realize that this primal sort of wounding from our mothers is likely a rich resource to look into for healing. We are not neglecting the importance of the father here (or the partner), but as we emerge and are created through our mother and, unlike our father, are reliant upon her for our survival, we are more likely to realize her impact on us. This wound may also impact us on a cultural level. What is passed down to women in terms of appropriate behavior and ways of being can be worked with by considering family and ancestral patterning (Chapter Seven) as well as cultural patterning (Chapter Nine). This work of healing the "mother wound" is applicable to all sexes and genders, as we all arise from a mother and interact in a world in which feminine and earth energies have been significantly wounded.

CHAPTER SEVEN

Healing Family and Ancestry

One of the biggest misconceptions about healing is the idea that we should just be over things, or that memories should be altered or erased from our history. I may possibly be the first to tell you that trauma on whatever level is not okay.
Growing up we should all have the opportunity to thrive in loving, healthy environments that teach us boundaries and allow us to eventually become stable, loving, and functional adults. This does not happen as often as it should. It is not fair, and it is never right that you, or anyone else, did not receive the love and support they needed to truly thrive. What trauma does is fracture us, and we may believe that we are the only one who has experienced such things. We may intellectually know that we are not; however, the sense of separation and loss that comes from fracturing due to a family environment that did not allow someone to thrive has the effect of convincing them on emotional, mental, and spiritual levels that they are alone in their pain. One of our primary needs is to be heard, as well as to be seen. If we are able to offer this capacity to ourselves now, we can heal in ways we never thought possible. This also means that as an adult we may have (or will hopefully have) the perspective to look at our parents, extended family, and ancestors, in a new way. This does not make what happened okay, but it heals the fracturing and returns to wholeness the parts of you that may have separated due to family trauma. It makes it possible for us to take the strengths and lessons from the experience and leave behind the frozen, fractured, and blocked energies. Whether we have had children or not, the natural tendency of a mature, rational mind is to realize that people are often doing the best they can. This may mean that they make a terrible mess of things, resulting in lack of boundaries or the child not being "seen" because the parent had to work. It may mean outright abuse and violence, neglect, or not experiencing the sort of energetic synchronization that occurs between parent(s) and child that allows for feelings of safety and of feeling truly held and heard. 
Even if we did not experience outright abuse as a child, and did feel emotionally connected and synchronized with both parents, our childhoods are always a fertile time for healing work. Our concept of who we are, how we relate to others, and our development—not just on a spiritual level but on physical and social levels—is created largely before the age of six. Our brains create the tracks that color our world during our early childhood, and our adolescence is often full of the struggle to become independent and to consider oneself a separate being from the family unit. One of the biggest forms of resistance in doing this work is often the resistance of the adult self. Our inner two-year-old may really want love and attention from our father; in our adult state, we may want our father to stay as far away from us as possible to preserve our safety or sanity. It is by being open to what that part of you wants, even if it differs from what you may wish to hear at your current age, that more healing can occur. It is by resolving inner conflicts and opposing forces within us that we can heal. Our minds will naturally want to create villains, especially if we have been harmed, or were not loved and heard to the level we needed. While true psychopathic behavior does exist in rare cases, in most cases parents were doing the best they could—struggling against a background of trauma and overwhelm, with inadequate tools and support to properly care for their children. It is by working with our ancestry that we can often gain the most insight about the sort of weight and patterning that has been enacted in our families. Our parents and their way of being did not simply emerge out of the ether; they were shaped by how they grew up, and likely by patterns, beliefs, and trauma that had been passed down the lineage. As a spiritual healer, one of my first experiences doing ancestral work was with descendants of Holocaust survivors.
These were middle- to upper-class individuals who had significant fear, anxiety, and control issues. They would find themselves hoarding food, experiencing deep grief, and becoming obsessive about their jobs and the safety of their family members. It was by working with their inner children and their ancestry that these individuals began finding more peace in their lives and letting go of their fearful and obsessive behavior. Since that time, I have worked with many ancestors and lineages, working with the ancestral effects of slavery, poverty, genocide, war, disease, and land that could not produce crops. For many of us in our modern world, it is hard to imagine the effects of losing several children, dying in childbirth, having a disease or plague wipe out a whole village, being displaced or forced to leave your land, or having the men go off to war, leaving the women and children behind. Our ancestors were more likely to be farmers, to be less protected from the seasons, to fight in battle, and to fall victim to plagues and diseases. In many other ways, our ancestors are quite similar to us. They loved, lost, and struggled with health and providing for their families. They experienced trauma and abuse and may have received mixed messages about religion, spirituality, and money. Our capacity to handle money in a responsible or clear manner may be impacted by the financial situations of our ancestors. How we relate to food and hunger, fears and emotions that seem too large considering our own experiences in this world (or without cause), and our attitudes toward work, money, security, and our loved ones are all common patterns of fracturing, "freezing," and wounding due to the experiences of our ancestors. We tend to more readily connect to the "victim" aspect of our ancestry. This work is crucial, and deeply healing. For those of you ready to consider such things, healing the perpetrator is also suggested.
This means that we may have ancestors who owned slaves, who brutalized others during times of war, or who spread religion through domination and decimation. These things are certainly not easy to consider, and many of us could spend large amounts of fruitful time working on ancestors who experienced loss, disease, and all of the struggles that came from being human during their time in history. However, the eventual healing of the perpetrator can reveal great amounts of insight, inspire healing, as well as release familial beliefs regarding race, sex, gender, culture, and religion that we may still be enacting without consciousness. Similar to our own experiences of trauma, our family members and ancestors experienced trauma that caused them to "freeze." The emotions, beliefs, and experiences that were a part of the trauma become frozen, or locked in time. Ideally, our ancestors would have the skills and opportunities to heal these energies and resolve the trauma they experienced. Often near death, or as part of the dying experience, they may have had the opportunity to cast off or heal emotions and traumas. But if the trauma of our ancestors is not resolved, it gets passed down through the lineage, or our _ancestral line_. We may have an unresolved trauma that has been passed down from our mother or grandmother. We may also have an unresolved trauma that is from several hundred years ago, and each generation that it has passed to has added on, shifted, or simply acted out the same trauma "loop" again and again. To explain this further I will use two metaphors. Imagine a small snowball at the top of a hill. This snowball represents an original trauma or experience of one of our ancestors. This snowball then rolls down the hill, collecting more snow. 
To make things more complicated, that snowball can go straight down that hill, passing down the exact same beliefs and emotions from that original snowball; it could also veer off course due to another trauma added on top of the original one, adding additional beliefs and emotions to the snowball. In this work, we always simply work with what arises, focusing on the closest we can get to that original snowball. If we heal the original snowball, the rest of the snow melts. To mix metaphors, this is like a series of dominoes. We do not want to just take care of one or two dominoes; we want to get as close as we can to the domino that was set up first, or that has caused all of the other dominoes to fall. In this way, we can truly effect the greatest amount of healing and release. If this sounds complicated, know that whatever arises is perfect for you to know about and work with in the moment. This means that even if you work with a snowball that is midway down the hill after veering off course, it still represents a considerable amount of release and healing. Sometimes, we need to work through layers of ancestors, or we may not yet have the experience level to understand that an event from long ago is still affecting us now. Wherever you are at is perfect, and you can again look at the end of the book to the "Tying Things Together" section to see how to incorporate this work as a part of the entire protocol. It is actually easier in some ways to work with our ancestors because we have conscious recall and have mentally constructed stories and beliefs around what we have experienced with our family as well as extended family from our childhood. Reexamining what we can consciously recall can sometimes be tricky, or we may have resistance around it for good reason. Our memories are actually more fluid than we may believe, and a new understanding from an adult or child perspective may emerge that could provide new insight as well as healing.
My suggestion is that you stay open to new beliefs and realizations from an adult perspective concerning your childhood. Some of the most shocking realizations and most insightful healing work can happen when we are courageous enough to look at our childhood with openness and readiness to see things in a new light. What may have traumatized us deeply as a child can be comforted, understood, and offered compassion as an adult. What we may have experienced as a child and hold onto as a solid, static belief and memory may look entirely different with adult information and recollection. To be clear, I am not saying that memories are false, or that whatever you experienced as a child was less traumatic than you remembered—what emerges may, in fact, be more of a big deal than you once believed. But from an adult perspective we have more tools, understandings about ourselves and the world, and a realization that we emerged out of the situation reasonably intact. We can see those around us from that time with a more nuanced, informed perspective. That perspective makes all the difference in the inner child "unfreezing" and becoming integrated as a part of us again. It is natural for family energies to emerge first when we do this work. Our relationship with our father, mother, as well as other family members often needs to be explored before more in-depth work with ancestors is done. We often need to explore our active, conscious trauma (that which we remember and is top of mind) before working with deeper layers, such as ancestral energies. If we are feeling hesitant to do this work, it is an indication that it is exactly the type of healing work we need to do. 
You can examine this resistance, and if no difficulties or needs for healing arise in relation to your mother, father, or your experiences in this world, you could ask the following question: _If I could sense anything unhealed in relation to my family (mother/father/grandparents) what would I sense?_ It is likely that once you have moved past some resistance, a lot of patterns will emerge.

Working with Ancestral and Familial Patterns

For the patterns that you can consciously recall, you will work with the inner child protocol. To begin, you will always check in with your body deva. You may wish to ask it what you should work with today, or specifically intend to focus on an emotion or pattern that you feel may be related to your family or ancestry. When working with the individual area of your body, you may find that the energy feels like more than just yours. One of the key indicators of ancestral energies tends to be overwhelming or large emotions that do not make sense in terms of your individual timeline. For example, you will notice a huge amount of grief within you, and while you may have reasons to have some grief, the amount of grief within is much more than your own experiences of this world would warrant. You also may be engaged in the inner child protocol and find that there is a restrictive pattern or something going on with your mother, father, or other family member that seems like it is blocking or preventing full healing for that inner child. For example, say your inner six-year-old needs to be loved and heard by his father. The father seems unwilling, unable, or resistant to the extent that it doesn't seem right to visualize this. This would be an indication that something is going on that is either familial or ancestral. When we are working with ancestral healing, we are always working with who started the pattern. Everyone down that line (all of the other dominoes) has taken on that pattern.
We have taken on that pattern, as well as added our own beliefs and experiences to it. In this work, it is important that we relate any beliefs of our ancestors resulting from the trauma they experienced to beliefs we hold in our own lives. There may be a direct parallel, or we may have taken on this belief in a slightly different way. It is by releasing both the initiator of the trauma as well as our relation to the energy that we can fully heal the ancestral line. As mentioned, you will begin by asking your body deva to reveal where there may be an ancestral pattern in your body. You will do a body scan, noting areas that draw your focus or seem highlighted. You will then ask your body deva to show which area would be the most important for you to work with (if multiple areas show up). You then would sit with this area of your body, noting how it feels physically, what you sense energetically, as well as visualize the blockage (or emptiness) as a basic shape in your physical form. You may then wish to move on to checking in with the individual body part, as well as the body deva, to find out basic information about what it may hold. Inquire if this energy is ancestral. If the answer you receive is yes, or you feel a significant shift in that area of your body in response, this will serve as confirmation. If you are already immersed in the whole protocol (checking in with an individual body part) or are engaged with inner child work, you may wish to ask if any of the energy is ancestral or familial. If the answer is yes, or a shift in the body part occurs, you would also continue. For the following work, you will rely on your sense of "knowing." It is often our strongest psychic sensitivity. As noted earlier, if you feel you are making up visuals or stories, what will happen is that there will be no shift in the body map or individual part of the body after the work. 
If you are truly engaged in ancestral work, there often are emotions that begin arising, as well as realizations that occur about how some of what you believe may not have come from your individual experiences of this world.

* Ask your body deva to bring forward the ancestor who began this pattern.
* See or sense this ancestor. Are they male or female?
* How old are they?
* What do they look like?
* What are they wearing?
* Now, look out to where they are. Where are they and what are they doing?
* What emotions can you sense?

If this information does not arise, you may wish to ask: _If I could sense this ancestor, what would I sense?_ If you have further difficulty, you may wish to either ask your body deva to relate the information to you, or picture a blank television screen. When you turn this television screen on, your ancestor will appear. Once you have a baseline sense of what they look like and where they may be, you will want to know what is causing them overwhelm, difficulty, or harm. This work is similar to the inner child work. We do not need to create a whole story, but need to know the basics of what is causing the person to feel traumatized or overwhelmed.

* Ask your body deva what is going on with this ancestor.
  - You can try asking the ancestor directly, but at first, it is often easier to ask your body deva or rely on your sense of knowing to fill in these details.
* Ask for the basics of what may have happened to them, what caused them to experience trauma.
  - A good indicator is always emotions (those you sense them experiencing or those that may be arising in you as you do this work).
  - Another good indicator is the scene you see them in. You have seen them there for a good reason, and where they are can give a good indication about what is going on with them.
  - Once you have understood the trauma or cause of overwhelm, you will ask about the beliefs created from the trauma.
    ° What beliefs about the self were created from this trauma?
    ° What beliefs about the world (or the people in it) were created from this trauma?
  - Check in with these beliefs and see how you may resonate or carry similar beliefs in your own life.
    ° In some cases, it may be an exact parallel: _"Yes, I totally feel like, no matter what I do, I am always going to suffer."_
    ° In other cases, it may be strikingly similar: _"I haven't lost children, but I can relate to feeling like I will be grieving forever."_
  - Ask your ancestor what they would need to heal.
    ° Visualize or offer this to your ancestor.
    ° Some may choose to visualize a specific color light flowing to their ancestor to help them feel better.
  - Ask to receive the strengths that have emerged from your ancestry.
  - Your ancestor will disappear when their energy is no longer "frozen." If they are still there, it simply means they need more time (or something else) in order to heal. Continue asking what they need until you can no longer sense them.
    ° You can also check in with your body deva and ask it if the ancestor is healed.
  - You will now go back to your own body and where the energy of your ancestor was being held.
  - Let your body part know that this ancestral energy has resolved, and ask it to shift or change due to that knowledge.
  - You can now move on to resolving or releasing your own experiences of this energy, either through talking to the individual body part or through doing inner child work.

This work is non-forceful. We are not forcing light or energy into these frozen ancestral energies. We are acting from a place of compassion toward anything that arises. Occasionally, the question of ethics arises as it pertains to ancestral work. Working with the actual spirits of ancestors is something that experienced spiritual healers and shamans do quite frequently. This is not what we are doing here.
We are working with how the body has taken on the beliefs and energies of trauma, healing our own consciousness and the dynamics of what has been passed down to us and is still held within. By doing this, healing is likely to reverberate out to others, such as our children and family members, but it is also always their choice to what degree they wish to engage with how they have individually taken on the patterns and traumas of their ancestors and made them their own. Resolving how we have taken on these energies and made them our own is an important step in this process. It keeps things body-oriented, as well as allows us to take personal responsibility for how we have enacted these patterns in our own lives. Even if you do not go through the entire protocol and only find some beliefs that were created from this trauma, simply being aware of that is immensely healing. Our subconscious mind will already be working on how to resolve the emotions and trauma energies when we are ready to explore more deeply. It is important to ask to receive the strengths of your ancestors while doing this work. Our ancestral line may be filled with trauma, and healing that trauma is incredibly helpful. However, it's also helpful to understand the positive traits that have been passed down to us, as our struggles often offer us beneficial strength, resiliency, and a specific way of relating to the world. Inquiring how those strengths inform us is at least as revealing as discovering and working with restrictive ancestral energies. It can allow us to more fully understand who we are and the specific culture or ancestry we emerged from. On occasion, the ancestral energies may be resistant or unwilling to engage in a healing process. We can often move beyond this resistance by offering understanding, as well as by asking, _If you could imagine needing anything, what would that be?_ Ancestral energies are large, and the change would simply be too immense if we were to shift things too quickly.
Making space for healing, even if the ancestral energy doesn't completely resolve, can allow for shifts in the held energy in the physical body, and a subsequent shift in your life. If you really get stuck, I suggest working with a spiritual healer or shaman who specializes in ancestral lineage healing and has the spiritual capacity to intercede and work directly with spirits. People like this can be difficult to find, for despite the popularity of shamanism these days, many "shamans" do not work directly with spirits (even though that is a core requirement of the work). Look for someone with five years or more of full-time experience, specifically as a spiritual healer with a focus on ancestral healing. It takes a long time and immersive study to become competent in spiritual work, even with a calling, so someone solely focused on spiritual or shamanic healing in their practice, rather than a range of holistic or psychotherapeutic modalities, is suggested.

**Steven**

Steven found himself consistently enraged by what he described as minor inconveniences. He would emerge from his rage embarrassed and unsure about why he got so upset at traffic, friends, or work colleagues. When he worked with percentages, he found that 10 percent of this rage was appropriate and that 90 percent was not. Steven came from a loving, middle-class home. His parents were always available to him, encouraged him in football and orchestra, and supported him financially as much as they were able during college. He reported having an older brother who he got along with reasonably well during the holidays, and no particular early childhood trauma. We asked his body deva to show him where this rage was coming from. It showed his genitals, pelvis, and left leg. The pattern felt dense and cold, and Steven said he felt like running and his body shook when we started looking at it.
We asked the body deva as well as the individual pattern (the pelvis, genitals, and left leg) if this pattern was ancestral. What came up was the sensation of cold, as well as fear that caused his lower body and leg to shake. When he asked an ancestor to emerge, Steven remarked that he looked male and young, and that it looked like he was running from something, completely terrified. Based on the scene he viewed, he had the sense that this may be connected to his Haitian lineage. With his sense of knowing, Steven felt that this came from the Haitian Revolution and that this man was part of the slave revolt. He had given his life so that his people could be freed. When he realized this, his body began feeling warmer and some of the sensation of wanting to run away lessened. He asked this man what he needed, and the man replied that he wanted to be known for what he did. Steven said that he would be proud to honor him, and his body deva said that he died without being honored (more on what a "good death" involves can be found in the Past Lives chapter). Steven visualized a funeral for this man, with a well-tended gravesite, and asked what beliefs were created from this situation. The man had no reply. He then asked his body deva, and it replied that there was not a specific belief, but a pervasive feeling of being lost or not at home that came about due to this man and his experiences. Steven related to this feeling and how it had a place in his own life. The scene disappeared, and Steven returned to his own body, letting the pattern in the pelvis, genitals, and leg know that it no longer needed to hold this traumatized energy. The energy released quickly, revealing other ancestral patterns for Steven to work on. Even though there was more to work on in this area, he felt more vibrant and present in his daily life, and became interested in his family history.
He had not mentioned lower back pain, but said that he no longer experienced back pain or headaches after this ancestral energy resolved.

**Madison**

Madison felt like she was a little girl trapped in an adult body. Her voice, affect, and the way she related to me practically screamed that she was "frozen" in a younger child state (around four or five years old). Madison had done enough therapy and self-exploration to know that she had an uncle who had abused her at that age, as well as a family friend and neighbor who had abused her between the ages of five and ten. Despite the healing work she had done, she felt not quite ready to immerse herself in such trauma (and she doesn't need to for this form of work), so she pictured a television screen in which a young girl appeared to her in a completely dark room. It was her bedroom at night, and there were monsters and shadows on the wall, but she was too frightened to get out of bed. This little girl had the belief that drawing attention to herself would get her harmed, and as her uncle was over, she remembers urinating in her bed because she was too afraid to get up to go to the bathroom. When her mother came in the room to clean her up, she noticed a sense of apathy and stoicism from her mother. Her inner child wanted love from her mother and to be told that everything was going to be okay, but her mother seemed like a zombie, unable to offer such things. Madison asked her body deva where her relationship to her mother was held, and it showed her heart area. The area felt tight, as if pins were being shoved into her heart from the outside in. Madison asked if this energy was from ancestral or family sources, and her body deva said yes. Madison asked for clarity about whether this energy came from beyond her mother, and the answer was again yes, that it came from her grandmother.
Madison asked her grandmother to step forward, and she saw a scene of her grandmother fixing old socks and hemming clothes to make ends meet for the family. Her fingers were numb, and she was working by very little light. Madison asked the body deva to tell her what was overwhelming or created trauma, and she was shown that poverty and not being shown love by her spouse over time caused her grandmother to retreat within and to develop a hard shell around her so that nobody could see her pain. Her belief was, _This is what life is_, which resulted in her deciding to become numb and detached from life and her body. This energy was transmitted to her child, Madison's mother, and Madison saw how she herself used this energetic shell and numbness as a way to protect herself when she was experiencing abuse. Madison asked her body deva what her grandmother needed, and the reply was to be at peace. Madison visualized her outside with her sister, simply drinking a cup of tea. She had no idea how or why she came up with that visual, but it seemed to work, and the energetic shell began to fade and a smile came over her grandmother's face. Her grandmother began to see the beauty in the world and slowly faded from awareness. Madison asked her heart to release this ancestral pattern, telling it that she was willing to see the world as more than a place of struggle and pain. It released the needles and revealed a wound and scar underneath. She went back, but her mother was still not able to show her compassion for wetting the bed. Madison has had a difficult life, and this one piece of work did not solve all of the trauma and abuse she has experienced. But she was willing to work daily to make herself more functional, healthier, and more open. This session, in particular, allowed her to stop feeling as if she was coming up against a brick wall with all she did, and to let others see more of her true self without the fear she once had.
CHAPTER EIGHT

Healing Past Lives

We are not just our own experiences in this world; we are influenced by a variety of energies and belief systems. The easiest example of this is our family. Unless there is some sort of healing or coming into greater awareness that occurs, we will most likely take on the beliefs and understandings of our parents. We are also likely to "loop," or live out their wounds, pain, and unprocessed emotions without realizing it. We rarely understand that many of the energies, beliefs, emotions, and patterns we carry do not originate with us, with our experiences in this world, in this incarnation. This is not to shirk any sort of personal responsibility, by the way—even when we do inherit patterns and traumas and emotions, we still have our own experiences of them and add onto them. Many past life therapies and healing only focus on what was experienced in the past life. This work offers you the opportunity to understand the past life, what occurred, what sort of blocked or "empty" energies this created within you, as well as a chance to reflect on how you took on the patterning from your past life. Our past lives are intended to be in the background. We are not intended to be aware of them. If we are, they are likely _unhealed_, meaning that there is something about that lifetime that is creating restriction in your current body and life. Similar to the inner child work, past lives need closure, to heal any overwhelming or traumatizing experiences that were not able to be reconciled, and to release the beliefs associated with them. If our past incarnation was not able to reconcile the trauma or emotions they experienced, that energy gets passed down to us. It is a part of the energy that we inherit coming into this world, and clearing at this level can have profound and often surprising effects.
We may have little idea before doing this type of work of the impacts unhealed past life beliefs and traumas are having on our current incarnation. There are common reasons why past lives linger. The most common is manner of death. Ideally, we would all have what is known as a "good death." This means that we die consciously, that we are ready to die, and that our death was not surprising to us. Understandably, there are times when a good death does not occur. In these cases, the manner of death frequently needs to be worked with. It may be surprising to notice while doing this work that an area that is creating physical pain for you may be linked to a hanging in a previous life (throat), a miscarriage or death during the childbirth process (pelvic pain), starvation or exposure (digestive tract), or a stabbing (solar plexus), but these are all commonly realized patterns when working with past lives. When we do not have this good death, the death remains unresolved. Our bodies and spirit like closure. We did not have time to fully resolve emotions, feelings, or other experiences because of the manner in which we died. We may not feel as if we had an honorable death, either. This is more important in some cultures than others, but the proper honoring and burial for a warrior, soldier, or someone who died anonymously can allow the death to become honorable and the unresolved energetics of the manner of death to dissipate. The other pattern for past lives is largely emotional and trauma-based. Much like our own lives, we struggled and loved and lost in our past lives. Any trauma, emotion, or experience that was too much for us to work through in our previous incarnation is unresolved and carried forward into this lifetime. A red flag to possibly suggest that past-life healing may be helpful is fear that does not make sense within the context of your experiences in this world.
This fear is beyond the normal and logical experience of fearing heights, betrayal, earthquakes and natural disasters, or other experiences in this world. Without our conscious awareness, we may then attempt to heal this unresolved past life energy, or may relive the experience. We may, in fact, meet people from previous incarnations with whom we are attempting to heal some sort of rift. In simpler ways, we may not understand why we have always had a fear of the ocean, carry around a huge amount of grief, are angry toward a particular type of profession or person, or have a huge interest in or knowledge about airplanes without much, if any, study. We are, of course, rarely conscious of any of this. If we are "awakening," we may begin to recall past lives and have strange dreams that feel like they are from elsewhere, a recollection of odd events, or even flashes of ourselves in a past life. More commonly, people come to past-life healing because there is an area of their body in pain that nobody can figure out. As a spiritual practitioner, I find that the people who come to me typically have been to see as many as twenty other health care practitioners with little change to their situation and are often willing to try anything, no matter how strange, to bring themselves closer to healing. Past-life healing is, of course, not a panacea for all that ails us. We are unique individuals with unique reasons for being, and have many reasons for disease and dysfunction. But in many cases, past lives are a piece that helps us complete the puzzle of our lives, and in healing the root reason why something began (a past life), other methods of more traditional care may start to work, or work better than they did previously. In previous chapters, the concept of "root and branch" was introduced. When we get to deeper, more spiritual patterns, we are getting closer to the root of a pattern, that is, why it may have emerged in the first place.
For example, we may have significant digestive issues and have visited many physicians, therapists, and holistic practitioners for treatment of the physical, emotional, and some of the energetic issues yet still experience problems. In some cases, this would point to more time needed with the physical elements of the digestive system (which take time to heal). But in many other cases, the root of what originally caused the digestive issues has not yet been expressed. The ancestral, familial, or past-life energetics, once worked with and resolved, would heal the root of the issue. Typically, what occurs is that the physical, therapeutic, or holistic methods (the "branches") of working with the digestive tract would then begin to work with higher efficacy. We are spiritual, emotional, mental, and physical beings, and it is by working with all aspects of ourselves that we can heal both root and branch. It goes without saying that this type of work is for people seeking to heal. We may create a lot of illusions for ourselves about past lives and other spiritual matters, and be open and curious about them, but the right frame of mind for this type of work is to remain somewhat skeptical. There always should be a part of ourselves that is logical and questions what we are doing. We can work through this material with an open mind but remain skeptical simultaneously. What is important are the results and shifts in our bodies and lives as a result of the work, not the stories that emerge. Being healthier, more functional, in less pain, and in a more embodied state are the intended results of this work. If you are unwilling, or due to your faith or personal cosmology cannot work with past lives, this section can simply be skipped, as there is more than enough to work with in this book to effect considerable healing and self-knowledge. 
Ideally, you will eventually open to the possibility of working with past lives, even if they are to be considered "archetypal" rather than "real" past-life experiences. What is most important is that there are changes in the beliefs of the traumatized individual who emerges and a shift to a higher degree of embodiment in the body map and shape of the blocked or empty energy within the body, not the "truth" behind whether it was a past life or not. After doing this work, you can always go back to the body map and see how things have changed, but if it is done right, there should be a palpable and noticeable shift at the time. It is important to know that you do not need to already be aware of a past life in order to do this work; it can almost be better if you do not go into this work with preconceived ideas about what will emerge.

Working with Past Lives

To begin, you will want to contact your body deva. If you are specifically interested in finding and working with a past life, you can ask your body deva to highlight or point out an area of your body that holds a past life trauma. You would then do a body scan or just notice what emerges or draws your focus. If there are multiple body parts that show up, you can ask your body deva to show you or make clear which one would be best to work with today. You may already have been working with your body deva and talking with a specific body part that called your attention to it. While speaking with it, it may clearly state that it has a past life energy to work with. Or you may begin by creating a list of questions, such as, _Is there any element of this that is ancestral? What about past life? Is there any element that is inner child?_ We will go over a full list in the Tying Things Together chapter so you can clearly know how to ask for this information. To avoid making things too confusing right now, I will say that it is very likely that your body will offer multiple answers to these questions _(e.g.
It is from trying to lift a couch a few years back, an inner twenty-year-old, a past life, as well as ancestral)_. You will eventually have the tools and capability to work with multiple layers of patterns. For now, there should be some sort of resonance or "activation" (the body part starting to shift or change in some way) if you say the words "past life" to it, or it gives a simple reply of "yes," or you get a sense of knowing that leads you to feel that working with a past life would be fruitful. In time, you may find that a past life emerges for you. Most commonly, this is through dreams, but experienced meditators and others on a spiritual path may simply recognize or realize that some sort of pattern is coming up for them that may be related to a past life. If this happens, you would ask the body deva to show you where your body is holding this past life energy, then continue. You will now feel into the area of your body, and utilize the "talking to your body" exercise to get a general sense of how that area is doing. You may wish to do a quick body map drawing and notice what the area in question looks like. You will then ask the body deva (and the area of your body you are working with) to show you specifically this past life energy. What this means is that if you feel your whole abdomen is on lockdown, the past life energy may be the whole thing, or may just be a little something to the right of your belly button. Asking the body deva to highlight and bring up the past life energy specifically will allow you to separate and sense what this energy may specifically feel like in your body. You will sit with how this energy specifically feels: how big it is, where it is being held, what colors or shape it displays in your body. When you clue in to this energy, you may notice an emotion. Get a sense of what emotions may be there—Fear? Anger? Despair? Name whatever emotions come up for you when focused on this energy.
During this initial phase, you want to get as much information as possible. If you are able to clearly sense the shape, color, and emotions behind this held past life energy, you can move on with more success and clarity. It can be easy to get frustrated with work like this; we always want to get everything on our first try or right away. But even the acknowledgment and connection to our body deva and recognizing the past life energy (even if you do not see it or sense it clearly) will allow you to begin healing and reconciling the energy. You will now ask the body deva to clearly show you the past life that started this energy. To be able to see or sense this, you can ask the following questions of the body deva:

* Is this past life male or female?
* How old are they?
* What are they wearing?
* What color hair do they have?

The purpose here is to get a basic description of the person. What we are doing is energetically focusing on them, similarly to how we focus a camera lens. If you feel like you are making things up, it is understandable, but if something comes up for you there is a reason why. There is a reason why your mind self-created a sad-looking thirteen-year-old with brown hair wearing an apron. If this information changes, you can simply correct it later. If you realize that you thought someone was thirty, but on closer examination they were forty-five, or even seventy, it really isn't a huge leap. If we are self-creating, we will receive no feedback from the body deva when checking in, and will not notice any changes or shifts in our physical body. During the more spiritual work it is easy to try too hard. This is almost always the reason for people being blocked. Breathe in a few times to center yourself. Allowing information to come to you is helpful, as is distinguishing between information coming from your body deva (and where it is located in your body or its energetic structure) and information coming from your head.
Ideally, this information will come to you, and you will not need to energetically force any information to come through. Sitting back, taking a few breaths, and allowing things to arise naturally, without trying too hard or searching, will permit information to come up. If multiple people come up for you, ask who initiated the pattern, just as you did in the ancestral work. When multiple people come into consciousness, it often means that it is a pattern that has passed down through more than one past life. If things are unclear, and you are not able to see or sense anything, you can go back to working with resistance (Chapter Two), or simply try another day. Once you have the basic description, you will then get a sense of what is going on around this person:

* Where are they?
* Can you sense anything about the time period or location?
* Is there an emotion that you can sense here?

It is likely that a fair amount of information has come up. Check in with your body deva and ask if it is okay to continue:

* What is going on with this person? There is likely some sense of overwhelm or a trauma happening.
* If you cannot sense this, ask, _If I could sense what is happening with this person, what would I sense?_
* Again, you will tap into any emotions you are sensing. Ask your body deva why the person is feeling (sad, hopeless, filled with grief, angry, afraid).

There doesn't need to be a whole, huge story that emerges here. What we need to know is basically what happened to this person that created unresolved trauma or frozen energies that have passed down to you. This will give us something to start working with. If you are not getting a clear sense of things, you can always try again another day, or you can ask your body deva to tell you more. This means that if you hear that it is a young girl who is sad, you can simply state, _Tell me more_, and it may emerge that she is sad because her mother has passed away.
You will now ask your body deva or the past life coming forward what they would need to find closure. The body deva allows for a bit of separation and a healed perspective; chatting with the past life allows for more story or emotional perspective. If emotions are too overwhelming, you might work with the body deva as a way of having some much-needed space and perspective, and to avoid getting wrapped up in the mental and emotional trauma being expressed. Often what is needed is to release emotions, or heal the physical wound or experience that led to the person's death. Many times, the past life needs closure by simply letting their memory be shown. As you are asking these questions, you are likely to feel a sense of shifting or lightening of the body part you are working with. As you did with the earlier inner child work, you will get a sense of what the person in your past life would need to heal or find closure, then visualize them receiving this. Our past lives are frozen in the state of trauma they experienced, looping through it again and again. They do not realize that their story, according to linear time, is over. They are so stuck in a "death loop" (their manner of death), or their grief, anger, fear, or other emotions, that they may just need recognition and the imagery of resolution to allow them to get unstuck and the energy to clear. You will ask your body deva for the personal or outer beliefs that were created due to this trauma.

* How did this trauma change how this person saw themself or what they believed about themself?
* How did this experience change or create beliefs about the world, people, or men/women in the world?

Sit with any beliefs that emerge and see how you, in your adult state, personally relate to them. We always take on beliefs like this unconsciously, and by bringing them into consciousness, we can reflect on how we have taken on beliefs that did not originate with our own experiences of this world and have made them our own.
We all have experiences that will consolidate or add onto these beliefs, and by considering your personal expression of them, you can heal the entire continuum, or both root and branch. As with the inner child work, the past life will gradually dissolve or no longer be apparent if it is healed. If they remain, there is still a need for closure and healing. Continue asking your body deva or the past life what their needs for closure are until you can no longer see or sense them. As in the ancestral section, you can also inquire as to the gifts or strengths from this past life. This is much more apparent in ancestral lineages, but it is important to understand that we may have gained wisdom, strength, or benefit from our struggles and those of our previous lives. This does not mean that every tragedy is a learning experience, but that our lives here are complex, with opportunities for wisdom, strength, and many other gifts that come from our direct experience. Understanding these gifts from our past lives can awaken them within ourselves, and allow us to realize how we may relate to or embody these same gifts. It is not important that this final resolution occur, however. Healing everything in one try can happen, especially as you get better at this work, but many of our past lives require a fair amount of resolution, or may come up continually in different ways if they have many unresolved issues. Practicing compassion for yourself and asking the body deva if there is anything that needs to be done today in order for healing to occur (and being okay if the body deva says to come back another day) will allow you to take steps forward in your healing process at a pace and level you are comfortable with. Once the energy dissolves (or you are simply done for the day, even if the past life is still present, or you are unsure in some way), you will then go back to your own body and to where you sensed this energy.
The energy and sensation in your body are likely to have changed. Even if they have not, you can ask that body part to change and release due to the past life healing work that was done. Let that body part know that it no longer needs to hold the energy of this past life. You will then end by saying "thank you" and then asking your body deva to recognize and shift and heal in relation to the past life energy you released.

Past Life Beliefs and Personal Responsibility

As mentioned, we take on energies from all sorts of different sources. Some of these are past life sources. Taking responsibility for our path and bodies really means that we cannot blame our experiences on our past lives (or our ancestors, or karma, or even our family or inner children). To complete and provide closure for the past life energy that lived within you, there must be some sort of reconciliation of how it impacted you.

If you consider this past life (you can also do this while working with it), this person likely had a lot of emotions and thoughts about their experiences. Trauma changes us. It changes what we think about ourselves. It changes what we think about other people, and creates fear and separation in our relation to the world. If you were to get a sense of what patterns emerged out of this past life, what would they be? What would the contract be? There were likely beliefs or understandings that emerged due to the trauma of this past life. This person may have distrusted authority because they were a servant to a king, or may believe that the world is unsafe because their village was raided. They may believe that they cannot use their voice and that they will never amount to anything; they may feel that men (or women) are dangerous as a result of the experiences they have had.
With experience, you may be able to relate their experience to one in this life, and note your relationship to the king (who in this life may be your mother), and how the situation has "looped" or been created again. While working with the past life energy, you are welcome to ask the body deva these questions, but an important part of the process is to take a step back and contemplate what belief structures and understandings changed as a result of the trauma this person experienced. If their death came quickly, the person may have formed fewer ideas about the experience, as they may simply not have had time, but otherwise there is likely something there for you to consider.

When an idea emerges, you can realize that you have taken on this belief or reaction in some way. Maybe you are afraid that your house will be broken into, or feel as if the world is constantly out to screw you, or are deathly afraid of heights. Maybe you have had dreams of being strangled, suffocated, or lynched, or you find yourself unwilling or unable to trust your partner due to a betrayal by them in a past life. Whatever it is, you have taken on these thoughts, realizations, and reactions and made them your own. You will have done so directly (believing in the exact same way that the world is constantly out to screw you) or the belief may have shifted and changed based on your own experiences in this world (you now believe that a particular class, race, or gender will break into your home due to the beliefs and traumas of the familial household impacting you).

Realizing this will offer the opportunity for further release. Going back to your body deva with this realization, and asking the body part where this energy was being held to release as a result of this new understanding, will permit full closure of any past lives being expressed. It is always lovely after such work to take a bit of a breather (a few days) and then to redraw your body map, or simply see how your body is feeling.
Release at this level can cause a release of emotions. For example, if the person in your past life was grieving, you may feel some grief arise in your own body. This is always surprising to people, as they are not used to working with something spiritual and having it produce a physical impact. Some experiences of soreness can occur with this type of work. We are doing deep excavating here, and this type of response will really show how we do in fact hold energies like this within.

**Margot**

Margot wanted to understand her long-standing pelvic pain. She was deeply afraid of sex, and found it extremely painful. Her gynecologist told her that it was due to anxiety, and prescribed anti-anxiety medication for her. Margot had been in therapy for the past twenty years due to experiences of childhood sexual abuse, but had found in the last five years or so that she had plateaued, despite changing therapists.

Margot was quite sensitive. She found that Reiki and other healing methods helped her quite a bit but realized that there was "something else" going on. This was just a sense she had at this time; she was not the type of person to entertain past lives or remember them. She began doing work with her body deva as well as speaking to her pelvis. A great deal of anger and pain and inner child imagery and healing came up for her. Her digestive tract became really inflamed. She found that instead of speaking to the pelvis, she needed to speak to everything from the belly button down to the knees (front and back), and that instead of a broken or "missing" feeling, she felt pulsations, heat, and a sense of screaming emerge from the area.

Margot went back in and asked if there was any part of this that was past life. The answer that came through was yes, and she saw an image of a tall, thin woman who was peering out of the basement of a house. She was quite young (early twenties) and was very shy.
This woman revealed that she was not supposed to leave the basement, and that she was a servant of an important man. Her impoverished parents had sold her to this family, and the man of the house was quite violent and sexually abusive with her. It was further revealed that she died due to this violence, and that nobody ever realized that she was dead, or the manner in which she died. This young woman wanted someone to see her and recognize her. Margot agreed to do so, and the woman gradually disappeared. Margot then realized that this woman thought she was invisible, as if her suffering were invisible. Margot saw that in this life this male was her father. In her own life, and with her own experiences, Margot was also shut down, not seen, and not believed. She really felt sad for this woman, as well as herself, and as she asked her pelvis to release the energy from this experience, she began crying. After she felt better, she began to notice that she could feel more energy in her pelvic area. She realized how distanced she had been from it before. Several weeks went by and she was able to have sex with no discomfort, and found that being intimate with her partner was more enjoyable. She then asked her body deva if this past life was anywhere else in her body, as she sensed a part of it was unresolved. She then worked with her throat and released all of the unspoken energy from this young woman. She was then able to work with more clarity with the emotions and experiences that were hers from her childhood, and found a transpersonal therapist to assist her who welcomed her spiritual experiences. After this experience, Margot realized that she had been stifling her voice. She began writing and painting, and realized that she wanted to be a resource for other child sex abuse survivors to find their voice. 
She enrolled in a transpersonal psychology program, is studying shamanism, and hopes to one day specialize in helping those who have had similar childhood experiences.

**Andre**

Andre came to see me as a last resort. He was a former semi-pro football player and had a pain in his stomach that he had tried everything to relieve. He found himself reaching for antacids daily and likened his pain to being stabbed, pointing to a specific place under his left ribcage. Scans of his abdomen showed nothing significant enough to create the stabbing pain he felt. Andre said that he was raised in a Christian household and had no prior meditative or spiritual background in which he would ever consider that some of the energy there didn't originate from his own experiences of this world. He was open and willing to consider anything, and looked forward to telling his friends about the weird thing he did (seeking energy work and spiritual help).

Andre immediately clued in to the stabbing pain and described it like a large dagger being shoved into his side. He pictured his body deva like a diamond shape, and asked it what was going on in the area. His body deva replied that he should ask the area of the body where he felt stabbing, and that it had to do with something that may surprise him. When he asked the area of his body where he felt the stabbing pain, he felt the emotion of anger come up. He asked me if that was relevant, and I told him yes, and to focus on the area and tell me what else came up as he was focusing on it. He said that it was odd, but what was coming up was the fact that he was always really jealous in his relationships. He would constantly text the women he was dating to check up on what they were doing, and eventually the relationships would break up because of his jealousy. I suggested to him that he do the percentages. He talked to the area of his body that held the pain, and it replied that at least 50 percent was not his own pain.
He was surprised by this, and asked where it came from. It said a past life, so we asked the past life to emerge. He was uncertain about his ability to do this, so I had him visualize the scene as if it were on a television screen. He saw a young man standing outside a small hut of some sort. The man was angry because he knew that his wife was at the house of another man with whom she was carrying on an affair. In the scene, he saw himself go over to the hut of the other man to find his wife. The house was dark, but he went inside anyway. He felt a stabbing sensation in his side and fell over. The last thing he saw was the man standing over him and his wife sitting in the corner.

The beliefs that emerged were that he could not trust people, and that women would always betray him. He recognized what an impact those beliefs had had on his current life and friendships. He asked his body deva what the man needed, and the body deva said that he just needed the betrayal to be known, as it was covered up. The man also needed to release the anger that was unresolved. Andre told the body deva that he saw the man's pain, and asked him to release his anger so he could go in peace, and the man slowly faded. Andre asked his body to release the stabbing pain and then felt like something was being pulled out of his body. The stabbing pain completely released, and Andre reported two weeks later that he no longer had any pain in that area, and that in approaching women he was able to give them a bit more space than he had previously.

CHAPTER NINE

Healing Cultural Energies

We not only emerge out of a specific family and ancestry but also are a product of a specific culture (or many cultures). In New Age forms of thought, the idea of "oneness" largely means people attempting to erase culture, race, and basic differences in identity.
This is not helpful, nor is it terribly conscious, as we are products of the culture that we are part of, and the unique essence that we bring to the world is to be celebrated. Our culture informs us and provides us with strength and a sense of belonging. The cultural framework we have emerged from can also carry wounding.

In spiritual work, the thinking behind larger concepts such as culture is _power loss_ or _inappropriate power gain_. A lot of pain, wounding, and trauma patterns emerge out of a cultural background that has been brutalized, harmed, taken advantage of, or seen as lesser by dominant cultures. While these patterns are likely to emerge from your ancestry, in a wider lens they are also products of the entire culture—the beliefs, traumas, and experiences that have emerged from the history of a specific group of people.

In other cases, we may have emerged from a culture that has sought dominance, or has owned slaves, persecuted others due to religion, or has in some way participated in the power loss of another group of people. Our world is full of wars and conflicts between cultures, and while you as an individual may not look at yourself as superior to another culture, or may not have actively participated in the destruction of another culture, the healing of the _persecutor_, or of the culture someone has emerged from that caused such destructive acts, can allow for a great deal of personal healing to emerge.

It is also likely, no matter how conscious we believe ourselves to be, that we carry thoughts, beliefs, and ways of being in relation to other groups of people. We more easily relate to victimhood; however, it is only by considering and being conscious of the parts of ourselves that relate to the _conqueror_ (if we have emerged from a culture that has dominated or taken from others) that we heal beliefs and do our small part to rectify the power dynamics of our current culture or that of our ancestors.
We can heal the internal conflicts between oppressor and oppressed that lie within. As individuals, we are each a thread in the massive web of life, and we can do our part to heal the web of the world by healing our internal dynamics. We can heal the parts of ourselves that carry pain and power loss, as well as release the power gained through taking or conquering. We can release difficult ideologies and ways of looking at the world, or other cultures and people, if we are brave enough to face such things. This is difficult work, as the patterning of power loss and inappropriate power gain is passed through generations, and we are often not conscious enough to realize that there is another way.

It is by looking at who or what we are reactive to on a larger, cultural scale that we can see our needs for healing at this level. It is by reconciling that many of us carry both victim as well as perpetrator within that we can look at our reactivity to other cultures as well as our own to see how power loss and inappropriate power gain have manifested.

This work may emerge in a few different ways. The first would be through working with your ancestry and noticing that a group of people emerges rather than an individual. You may also notice that you are focusing on a village, or people, instead of an aspect of your ancestry. This would indicate that something cultural would be helpful to work with rather than a specific ancestor. You also may recognize yourself in the above and have awareness of pain or restricted beliefs caused or created by your culture. It is suggested that this work be done after some basic ancestral healing has been done, as it is somewhat easier to connect with our ancestry than our culture. If you come from many cultures, or have been adopted or displaced from your culture or land, this work is often vital.
The loss of cultural identity can often cause people to feel rather lost in their lives, or to continually reach out to other cultures and assume their identity. This is often seen in spiritual groups, as people reach for shamanic methods or take on cultural identities that enact the conqueror pattern, and unconsciously contribute to the creation of further power loss and stripping of cultural identity. This is a complex problem, and it is rare that people are willing to see that they are engaging in conqueror dynamics or actively taking power from a culture, when they often simply believe that they are honoring or participating in that culture in a helpful way. When engaging with other cultures, spiritually or otherwise, it's always worthwhile to consider power dynamics. We cannot truly know another culture unless we have immersed ourselves in it, understanding the language, history, mythology, and everyday reality of that culture. We tend to romanticize the "other," but superficiality arises when we do not really know or relate to people from that culture (except possibly in a paid environment). Speaking as someone who has studied Traditional Chinese Medicine in some depth, I can say that there is a big difference between what I have learned versus my friend who grew up in China and learned folk remedies, ancestral veneration, and other practices through her daily immersion in the culture. We tend to surround ourselves with sameness—people who have emerged out of the same culture and with the same outlooks as our own. By interacting with people from other cultures, we can move past our idealized romantic notions or restrictive or hate-filled beliefs about them and authentically listen to their experiences. In this way, we can encourage a relationship rooted in equal power, rather than dynamics of power loss and of conquering or "taking" from that culture for our own purposes. 
The energies of war and similar worldwide events always have an impact on culture and cultural beliefs. Similar to personal trauma, collective trauma creates beliefs and realizations, as well as contracts. These beliefs may involve the culture one is a part of, other cultures, or beliefs enacted as a result of familial and ancestral patterns of trauma. In severe cases of power loss, there may be hatred or dislike of the culture from which you emerged. You may also notice yourself reacting to a specific culture with disregard or antipathy; these are often ideologies passed down through ancestral lines and can be healed with willingness. The purpose of this work is not to make things "okay"; genocide, slavery, war, displacement, conquering, or taking power are never okay. Our history—how our culture came into being—is important. It should be known, related, and understood to the depth that it deserves. Nevertheless, we can heal the energies we hold in relation to the cultures we emerged from. We can be more conscious of the dynamics of power and relationship to other cultures in our daily lives. We can move from a place of power loss, inappropriate power gain, and the subsequent beliefs and restricted thinking that occurred as a result of trauma into a place of considering the beauty, wisdom, and strength of our ancestry. We can be proud of our heritage, who we are and what we emerged from, in a way that is healthy and life affirming and that represents the vital essence of who we are. We carry the force of both the oppressor and the oppressed within us. By healing the relational aspects of these destructive inner relationships, we can free ourselves from them. This will allow us to act with greater clarity in the world, become more conscious of unconscious racism, classism, and "othering," and move beyond them. 
Working with Cultural Energies

When we consider cultural energies, we are talking about a web, or grid, that has informed our identity as a part of that culture. This can be a specific culture associated with a country, such as Sicilian, or a more general construct associated with a larger geographic region, such as European or Eastern European. This work can also be utilized for race as well as gender constructs. Further work on gender constructs will also be discussed in the Archetypes chapter.

Putting this perhaps in an abrasively simple manner, a South African who is white, from a middle class household, and received private schooling is going to have a different cultural "web" from a South African who is black and was subject to abject poverty growing up. A third-generation Mexican immigrant living in the United States is going to have a different cultural web than a first-generation Mexican immigrant whose family may still be living in Mexico.

In most cases, this work will arise after or while doing ancestral work. This happens when the individual ancestor may be healed, but there is a sense that there is more work to be done on a larger scale or in a deeper way. There may also be an intuitive sense that working with cultural energies would be helpful. This work also can be done through simple intent and utilization of the body deva, but may be harder to access or see clearly unless some ancestral work is done prior to approaching cultural work.

You may also be working with the body deva or an individual part of your body and, upon asking what it may hold, the concept of culture, race, sex, or gender may arise. The Tying Things Together chapter discusses how to work with a checklist to elicit this information, but when asking about cultural energies, you will first want to check in with the body deva or the individual body part to make sure it agrees with you, or watch for a change or sense of heightened energy in the body part.
If you are starting by wanting to have a focused intention on cultural energies, you will ask the body deva to show you where in your body you hold such energies. You may choose a specific aspect of your cultural background (if you come from many cultures, there may be one that you are drawn to), or you may ask more generally. You would then do the body scan and find the area of your body where the energy is being held, asking the body deva and individual body part for information about how that energy looks and feels within the body, as well as basic information about the energy and what it may be.

* You will now ask the body deva for a _representative_ or _spokesperson_ for that culture to step forward.
* You will see or sense them as clearly as possible: age, male or female, what they look like, what they are wearing, and what they are doing currently.
* You will now look behind them to the "crowd." What may be happening collectively to this culture?
  - If you are having difficulty, picture this as if it is a television turning on. What story or show are you tuned to?
* Ask the body deva what is going on. What trauma, power loss, or inappropriate power gain is occurring?
* Ask, _What is creating a lack of balance for this culture?_
* Ask the body deva what this representative needs to feel whole.
  - Do they need power restored that they lost or that was taken from them?
  - Do they need to give power back that they took?
  - We are looking for the basic power dynamics here, or the idea of something lost or gained as a result of being a part of a specific culture.

Healing Power Loss

* If power loss occurred as a result of having power taken, you will ask the body deva to now show you where the force of the oppressor is located in your body, and do a body scan to find this force.
* You will ask the body deva for a representative of the oppressor to step forward and again simply describe them and what they are doing.
* You will ask the body deva or the oppressor what the person who took power would need in order to give it back.
  - If we are holding onto power that is not our own, we are not in balance. Ideally, we would be filled with our own power. No matter how violent or malevolent we may be outwardly, this is something we all understand on some level.
* If the person who has taken power is willing, you will ask the body deva to restore the rightful power to your representative. This can be done by utilizing intention and a specific color of light that feels intuitively appropriate to you and your body deva.
* If the person is not willing to do that, ask if they would be willing to restore some power. The answer is often yes.
* If the person is still not willing, they often need to feel their own power. The body deva will let them know that their power is much more magnificent and healing than taking the power of others.
* Allow your representative to feel the restoration of power.
* Ask the oppressed representative for the beliefs that were created out of this trauma, both with regard to their culture and the conquering culture.

This work may elicit occasional resistance to the idea of asking a conqueror to give back power. While this is entirely understandable, what is being worked with here is the core power dynamics of the situation. The forces of the oppressor and the oppressed are being held in conflict within your body. By permitting power to return to its rightful source in a compassionate way, the power dynamics are less messy than if power were forcefully taken back from the other individual. Using this method, no more trauma is created. The purpose here is not for you to want to hug your persecutor, or to create a belief that the opposing culture is full of wonderful people who will now look out for you in your daily life, or even for you to want to interact with them in the physical world.
It is to acknowledge that both of the representatives—the one from your culture who has had power taken away and the representative from the other culture responsible for taking that power—are forces that are active and still creating pain within you. Allowing yourself to heal and move beyond the pain and restrictions can allow the clarity, wisdom, and power of your culture to emerge through you, rather than the unhealed aspects or those that feel powerless.

Healing Power Loss by Place or Event

In this work it may seem that power was not taken but was lost. You would verify with your body deva how power was lost (through taking or through loss due to an event or experience). You would then ask your body deva about the beliefs created from this situation, and ask the body deva to restore the power to the representative.

Power Gain through Taking

You may find yourself in the position of conquering or taking the power of another culture. You would again ask your representative to appear, as well as a representative from the other culture, locate both forces within your body, and offer to return the power to the oppressed representative. If they do not want it, or you feel intuitively as if it should go to the land instead, offer it to the land. You would do this by asking your body deva to release any held power associated with taking the power of others in your culture, and again, picture a light or color releasing from the area of your body where you initially clued into cultural energies.

For the taker, there is often power loss underneath this. We do not take from others unless we feel a lack of wholeness ourselves. If we feel strong and able, we certainly will defend ourselves, and sometimes violence in our world is necessary to maintain societal structure or preserve what we have.
This is because despite our own healing, there are many others still in the position of being unhealed, who have not moved beyond their own inner divisions and who still seek superiority, or the power of another. While certainly our own healing has a ripple effect, at some point we will contend with the fact that we do not create the world, and that despite the work we do to heal loss of power or gain, it does not lead directly to peace on Earth. We are one thread in a massive web of life; the more people who do work like this, the more healed the web is.

You will offer the power back utilizing the body deva, if it feels appropriate to do so. This power can be sent to the Earth, to a representative of the other culture, or it may be restored in a different manner. Whichever option you work through, you will now reflect on the beliefs and understandings that have emerged and how they have impacted you.

You will now ask the body deva for the _strengths that have emerged_ out of this situation for you and your culture. Our traumas and difficulties are a double-edged sword, in that they cause us immense pain but also can lead to understanding, ingenuity, beauty, creativity, and unbelievable strengths. Understanding the individual strengths of your culture, and how you relate to those strengths, allows you the opportunity to begin to carry those strengths instead of maintaining cultural wounds.

Finally, you will ask your body deva and the individual body part to release whatever held energy was there in your body, and then say thank you for its help. If you find that there are cultural elements that are restrictive in a different way (not the opposition of conqueror and conquered), you will still ask a representative of the culture to step forward, find out what is happening within the culture, and what restrictive beliefs or ways of being have been created.
You can then consider if you require this "contract," and let the representative know whether you wish to change, alter, or nullify the contract. You would then ask the representative to take their "true form," which will be explored further in the chapter on working with archetypes, then inform your body consciousness, or where you held this restriction within your body, that it no longer needs this contract at all, or to the degree it has been in effect.

**Krista**

Krista was a yoga teacher and Reiki practitioner who wanted to explore her resistance to grounding and the lack of embodiment she felt in her lower body. She felt like she had done an incredible amount of work on her physical body, childhood issues, and even spiritual issues, but still felt like she wasn't fully in her body or connecting to the earth. Krista had a feeling that it had something to do with her ancestry, which was of mixed European origins. When I mentioned the possibility of cultural healing, she said that it resonated with her and she was open to exploring it.

The body deva revealed her feet and both of her legs up to her knees as being the places where this pattern was held. Her legs and feet felt completely missing from her body, and showed up as dotted or missing on her body map. She asked a representative to step forward. It was an older Ukrainian woman. This woman looked afraid and seemed lost, as did the crowd of people in the background. The body deva revealed that this was a case of _power loss._ Krista asked her body deva to show her the _conqueror_ or _opposing force_ creating this power loss, and a large Russian male stepped forward. This opposing force was felt in her solar plexus. Krista asked her body deva if this male would offer the power back, and the male reluctantly said yes after Krista visualized him receiving his inherent power back in place of it.
Krista visualized a yellow light going into her representative, and a green light going into the conqueror to restore their power. The opposing force then disappeared, and Krista asked about what beliefs were being held. The belief was that it is better to be invisible than to be noticed. Really being present would cause Krista to be seen, she realized, as she commented on how she contributed to and resonated with this pattern. She asked her body deva to relate the strengths associated with this situation, and it replied, _The closeness of family and the ability to offer oneself to another_. Krista said thank you and told her legs that she was ready to release this belief. She released an additional individual contract regarding the belief around being hidden that she had created as a teenager, and asked her legs to shift into a place of balance, grounding, and embodiment, and for her solar plexus to release the opposing force.

For the next few days, Krista reported having muscle cramping and strange dreams in which she felt she was processing on a deep level. A month later, she reported that she felt much more in her body, and how strange it was to actually feel and have awareness of her feet for perhaps the first time.

**Daniel**

Daniel was of Mexican and Peruvian descent and had already been working with ancestral and family origin issues for some time. He was adopted into a family that did not share his cultural background, and he had many stories about growing up in a predominantly upper middle class, white neighborhood where he experienced offensive and inappropriate interactions that created micro-traumas in him. Daniel wanted to delve into the energetics and spiritual dimensions of the adoption and how it was affecting him. While exploring other issues, he revealed that he would get in a lot of fights growing up, and would sometimes express explosive anger whose cause he could not determine.
Working with his early childhood years prior to the adoption helped him to minimize the frequency of these outbursts; nevertheless, they continued to occur. We decided to approach the body deva and ask about the rage because I felt that it was related to the adoption. The body deva highlighted the solar plexus and said that it was related to cultural energies. The energy felt dense and dark in color, like a rope wrapped around his midsection, squeezing it. Daniel asked his solar plexus to bring forward a representative from his culture to help him to heal this. An older Mexican gentleman stepped forward. He was quite powerful looking and radiated a sense of vigor. The body deva revealed that this was a representative of _machismo_, or the sense of how a man should be. When he asked the body deva for beliefs he was carrying, it revealed a list of items about how men should act and behave and how Daniel did not live up to those high standards. Daniel realized that he had internalized this from his early childhood (pre-adoption), when his birth mother was married to a violent alcoholic. He realized that he felt a conflict inside because part of him wanted to exhibit machismo but his only personal experience with it involved someone he did not want to emulate. Inner child work was done to heal his inner six-month-old, and then he went back to the imagery of the man. He asked his body deva what the man needed, and the response was for him to connect to his inner machismo in a healthy way. He asked for more information, and it said he needed to heal the belief, created by the six-month-old, that he was a threat to others based on his experience of how men act and the harm they create. He asked where this belief was held. It revealed that it was in his solar plexus but was a small dense blob, not the rope he sensed around his whole solar plexus and diaphragm.
He let the body deva know that he had healed the six-month-old and no longer needed that belief at all. He asked his body to change and shift, and it cleared both the blob as well as most of the rope. He asked the body deva what else was needed, and it said that he needed to do something that would allow him to understand his masculine power in a healthy way, and he agreed to go back to martial arts, which he already felt an intuitive pull to do. Daniel started his daily martial arts practice and in two months reported back to me that he felt much more powerful and balanced, and had not experienced any anger outbursts.

CHAPTER TEN

Working with Archetypes and the Multitude of Selves

When we are on the path to knowing ourselves deeply, in order to heal or evolve spiritually, we tend to pursue one concept as though it were the Holy Grail—the "true self," the concept that we are one thing, one person, with distinct likes and dislikes. We think that if we only knew our essence without it being clouded by pain and beliefs, we would return to an original, "pure" nature; that if we only were healed enough we would integrate into one thing. The irony of this is that we always have a multitude of selves within us. We have forces and parts of us that are incredibly "true" yet may be at odds with one another. We are not just one Self, but a multitude of selves, many forces and energies coming together. By reconciling our multitude of selves, we no longer need to battle them or cast them aside. The easiest pop culture reference for this dichotomy is the idea of the "sexy" librarian—the quiet introvert who prefers to spend her time in silence in the hallowed halls of a library surrounded by books, hair in bun, but who is simultaneously a vivacious sex kitten. Or perhaps Clark Kent, the somewhat shy yet studious friend, who is also Superman. We are fascinated by the polarities inside us, what lies beneath if we scratch the surface.
When we talk about such a thing, the mind may easily travel to the concept of multiple personalities, now known as dissociative identity disorder, where the mind fractures in order to cope with severe trauma. In this disorder, separate personalities or selves come out, some of which may not have knowledge of one another. Or we may consider sub-personalities as a form of pathology, as we are not in a place to reconcile the difference between wounded aspects of self, such as inner children, and the aspects of ourselves that may simply have different interests and needs. While this can move into an area of pathology or severe imbalance, even the healthiest among us contain separate selves, separate identities. All of these different selves provide a centralizing identity; they form the general reality for the individual. These selves are formed through our relationship with our inner selves, as well as through a relational context. We form our identity based on the movies we see, the teachers we had, the literature we read, who our parents and friends were, and many other sources. We are informed by archetypes: recurrent symbolic figures and symbols in mythology and literature. Our relationship to those archetypes deeply informs who we are, as well as who we feel we should be. Mother, father, hero, nurturer, lover—these are all labels with an expected role in society. Societally, we have agreed what a "mother" should look like, what she does with her day, and how she should act at the neighborhood barbeque. If we do not live up to the archetype, there will likely be hell to pay as she has broken an implicit social contract. Additionally, "mother" will on some level understand that she is not living up to this contract, and she will either question her capabilities, have difficulty with self-worth, or reach a point where she has considered the archetype and the social ramifications for not living up to it and has decided to free herself from it.
Archetypes allow us to easily label and understand one another, as well as ourselves. They build in a sense of safety, allow community to develop between like minds, and allow us to live our lives with certain set expectations and instincts, both societally as well as personally. We also tend to outgrow, outlive, or simply move beyond some of the archetypes as a natural part of maturing. We may be an ingénue at age twenty, but at age eighty that label will likely be long past. Our centralizing myths (covered in the next chapter) provide us with a sense of meaning and navigation in this world. The inherent difficulty is that our unconscious association with these myths means that we act, identify, and judge others in ways that are restrictive, rather than healing. With consciousness and healing, the positive aspects of an archetype can inform who we are and be worn like a hat in the appropriate environment. You may wish to put on your "nurturing father" hat when your child scrapes their knee, and your "virile male" hat while out on a date. We all have archetypes that are creating contracts, beliefs, and reactivity within us and causing us to suffer. It is by healing our relationship with each archetype, along with any other layers of personal, ancestral, or cultural healing, that we can come to a place of right relationship with it. The purpose here is not to rid ourselves of all things, or to shove any parts of ourselves aside, but to release anything that is creating pain or restrictive beliefs for us. The contract and archetype involved with "artist" may be creating issues with being financially solvent, or the archetype of "innocent" may be creating issues in the bedroom. But the archetype of "artist" may allow you to dress and behave in a way that you enjoy doing, and be in the company of people and a community who understand you. The archetype of "nurturer" may allow you to offer heart-centered love and be needed by others.
The archetype of "rebel" may allow you to push yourself continually outside of your comfort zone in all areas of your life. We are not just one archetype, although we may relate to a few, or have one or two, be dominant. It is worthwhile to examine all labels you have been given—mother, daughter, father, son, brother, grandson, teacher, student, employee, and so forth—not in an effort to cast them aside but to see if they are creating any restrictions or imbalances within yourself, or in your world. This list is not comprehensive, but here are some of the more common archetypes and associated roles we may identify with: * Hero or white knight * Creator * Lover * Nurturer * Explorer * Innocent or dreamer * Rebel * Sage * Mystic or magician * Jester * King or ruler * The Everyman or Regular Joe (or Josephina) * Mentor or Teacher * Villain * Heretic * Outlier: witch, shaman * Crone: wise old woman or man * Shadow (what is unknown within us) * Animal (our base instincts) * Sexual instincts or passion (sex goddess, god, virility) * Masculinity and femininity (we carry both in our nature, it is worthwhile to examine both aspects, no matter your sex or gender) To work with archetypes, you will first consider which archetypes you feel most inform your existence. Under each archetype, write out a bit about what you feel that archetype means: how that archetype acts, how others would relate to them, what their social contract(s) may be. Choose one archetype you feel may be out of balance. If you are trying to choose among several, you can also ask your body deva what would be most healing for you to work with. You will ask your body deva where you may hold elements of this _corrupted archetype_ within your body. Do a body scan or notice what shows up, if need be, inquire as to which part of your body would be best to work with, if multiple areas show up. Talk with the individual consciousness of that body part, with the archetype in mind. 
Say that you would like to heal the archetype, and note the physical sensations, energetics, size, and shape of the restricted or empty energy in your body. Ask the body deva to show you the most relevant pattern to work with. At this point, you can begin to build a list of questions:

* Is this from my experiences here (inner child work)?
* Is this from in utero?
* Is this from a past life?
* Is this ancestral or familial?
* Is this cultural?

You will want to ask these questions slowly and wait for either a sense of knowing (a yes from the body deva) or a shift in the individual body part you are working with. There are likely multiple answers, so you may wish to ask the body deva which one is most relevant. You would then do the work from the relevant prior chapter. This work is done so that you can heal the interaction or understanding from the relevant point of view. You may have an idea of what "woman" is, or how to relate to her, based on viewing your mother when you were eight. You may have an idea of what "teacher" is based on your high school interaction with a teacher you disliked. You may reject all authority because, when you were much younger, you created a reactive contract concerning your parents and how they were unable to nurture you. You can also examine which archetypes you carry by utilizing the body deva. It is helpful to heal the part of you that led you to understand these roles and archetypes as having a specific function and societal as well as personal rules. Doing so will allow you to understand how the archetype became _corrupted_ and a source of restriction and unhealed energies for you. By healing, you can release any pain or trauma associated with the archetype, let go of restrictive beliefs, and choose to embody the strengths of the archetype. You may also find that you need to work with the archetype itself. This would best be done after healing some of the varying traumas and layers surrounding the archetype.
You would again ask the body deva where this archetype is held. The concept of the _corrupted archetype_ means that the archetype is informing your existence in a way that is creating restrictions in belief or pain. If an archetype is a _pure archetype_ , that means that we are relating to it in a way that informs our existence and provides strength and a positive sense of navigation in our lives. Archetypes most often get corrupted through our own experiences of them. If what we know of being a male is of an absent, neglectful, or possibly violent father figure, we likely have corrupted "male," "father," "teacher," or "authority" archetypes. Archetypes get corrupted and utilized all of the time in pop culture, advertising, and other sources as a way of manipulating us. One of the archetypes of female sexuality in Western culture is the "pop ingénue," a highly sexualized yet "innocent" female creation who has not reached adulthood. One of the archetypes of male sexuality in Western culture is the impossibly muscle-bound male who dominates women and narrowly escapes explosions. Examination of both would likely reveal sources of wounding for both men and women in terms of sexual restrictions and patterning. When we become conscious of the societal roles and archetypes we are bound by, we are no longer restricted by them and move beyond blind, emotional reaction and struggle in relation to them. We may also find that we are holding onto an archetype that we should have outgrown. If we inwardly still feel like a small child who wants to hold her teddy bear, that inner child could use some healing and to be initiated into positive adolescence or adulthood. Pick an archetype that you wish to work on (or are reactive to in the outer world), then ask the body deva where it is held within your body and to show you the corrupted archetype. 
Much like the initial visualizing of the body deva, or working with restrictions, you would create an image of what that archetype looks like, where they are, what they are wearing, and what they are doing. You would then inquire as to ways that this archetype may be informing your existence:

* How is my association with this archetype creating restriction, pain, or suffering?
* How does this archetype limit me?
* What do I believe is true (what is the contract) concerning this archetype?
* How do they act? What do they accomplish? What is their role in society?
* What are the societal rules and restrictions for this archetype?
* What are the strengths of this archetype?
* What is the beauty of this archetype?

Consider what has arisen and, if necessary, negotiate the contract about what this archetype means to you. Although there are implicit socially constructed connotations with all of these archetypes, our individual experiences color them. It is by our individual relationship to them that much of the pain, restriction, or corruption around the archetype is created. When we become conscious of the restrictions that society has placed on us, we can also make a decision about how restricted that really makes us. We can negotiate our freedom from such things, even if we still embody them. We spend so much of our lives defining ourselves by the thoughts and rules of others, and of society at large. While we may not initially be comfortable moving beyond such labels and associated societal roles and conduct, by healing our own reaction to the role we can begin to move beyond the restrictions of society as well. This does not mean that we need to stop being an "explorer," an "artist," a "teacher," a "mother," or a "soldier"; it means that we can consciously explore and understand the rules of conduct for such roles, and embody them in their _pure_ form, transforming them into a source of strength and a part of our identity that brings us joy rather than restriction.
If you are ready, ask your body deva to transform the archetype closer to its _pure form._ Then ask your body deva or the individual part of your body holding the corrupted archetype to shift or change. Like all other work, if there is a shift in feeling in the area, a change in body map, or a change in beliefs (renegotiation of contract or letting go of some of the beliefs surrounding that archetype), the work has been successful. The shift in image of the archetype to a stronger or healthier version is also a sign of success. This work can, and should, be revisited over time, as we are not going to understand ourselves, or even a single archetype within ourselves, in a single sitting.

Working with our Multitude of Selves

We have parts of ourselves that are nun-like, sexual, male, female, young, old, vicious, violent, peaceful, animalistic, intellectual, introverted, extroverted, and interested in different hobbies, movies, and music. We may enjoy thrash metal one afternoon and pop music the next, or watch superhero movies as well as foreign films. We may also enjoy marathons as much as sitting on the couch eating Cheetos. The deeper we venture into inner work, the more that we will find this multitude. The difficulty is that we believe ourselves to be much simpler than we actually are, and may feel as if these different aspects of ourselves are at war with one another. If we understand our multiplicity, we will know that it is perfectly okay to enjoy relaxing with a cup of tea and a book as well as laugh loudly and swear consistently at a dinner party. We may discover that what is creating a conflict is not our core or centralized identity that has balanced all of these myths and archetypes to form a congruent personality, but a struggle between seemingly opposing forces within us.
The part of us that is animalistic, primal, and masculine and wants to eat large hunks of meat roasted over an open fire, and the part of us that works as a buttoned-up businessman have very different needs, societal roles, and obligations. The exhausted nurturer who simply wants to watch a movie uninterrupted is likely battling the inner aerobics enthusiast who is at odds with the notion of relaxation. Every decision we make is subject to these inner selves, and what may be causing our confusion or lack of direction is that two sides that are seemingly "at war" or opposing one another are both offering suggestions. Until we understand our multiplicity, we will never have the clarity of knowing why we may always sabotage ourselves when making a plan to eat healthier, start exercising, or break other habits. For example, a woman plans on starting a routine of walking around her neighborhood every day. She knows that this will allow her to relieve stress, get some sunlight, and offer movement in a day that is spent in front of a computer. She notices herself self-sabotaging, and hears that inner voice telling her that she should just close her eyes and take a nap or catch up on a television show instead. When she does these things (watching television or taking a nap) instead of doing the "healthy" activity, she slips into negative self-talk and berates herself about sabotaging her efforts to get healthy. In this situation, neither "force" within her is getting what it truly needs. She is not getting the rest that she needs because she is distracted with the opposing force telling her how awful she is for wanting to watch television or take a nap. She then further distances herself from this need for relaxation by continually checking her phone or eating. When she is walking she focuses on how tired and depleted she is, so she never notices the sunlight. These forces are not actually in opposition with one another; they simply have different needs.
This woman both needs to relax, eyes closed, away from her computer and experience walking and greater health. By allowing these seemingly opposing forces to have their voices "heard," she can begin to fully allow herself to experience her naps (with no negative self-talk or feelings of needing to be doing something else) as well as walk daily. The only caution for working with these aspects of ourselves is that if one of them is out of control, it is not easily understood and chatted with. This is why this work is at the end of the book; we have other things to work on first if we are still in the ravages of drug addiction and eating disorders or are significantly traumatized to the extent that we are a two-year-old in a fifty-year-old's body. But if we are ready, we can understand the opposing forces within ourselves, and even the opposing relationships in the outer world as forces within us. It may sound strange to think of our boss, our partner, the random person we interacted with in a heated argument, our parents, and others as a "force," or something that is a part of us, but nevertheless, we can explore the inner dimensions of our relationships in this manner. To do this, you would consider the relationship or situation that is upsetting or feels unhealed to you, and ask your body deva to show where you hold it in your body. You are not doing this work for anyone else (as they have their own work to do), but as we are relational beings and are all in a web of life, we can use the outer world, and those whom we have significant relationships with, to find unhealed or opposing forces within ourselves. Similar to the cultural work, one force may be in a different location from another. Finding both forces and allowing them to speak up about their needs, then enacting what they need in your life (always checking for safety and basic logic), will allow the battle to cease and all parts of you to receive what they really need. 
In alchemy, the union, "marriage," or reconciliation of the female and male aspects of Self is part of the "Great Work"; by seeking the unity and healing of these otherwise divided and polarized parts of ourselves, we are able to experience a greater level of evolution and wholeness than would otherwise be possible. We all contain forces that are pushing and pulling us in different directions. We may find that what is reactive or restricted within us is not the current, conscious, and centralized Self (us, right now, in the present moment) but another aspect of Self. The clearest example I have of this comes from working with many individuals who went to Catholic school and had an inner "nun" telling them that all of their sexual choices were horrible, creating shame for them about the sexual act itself. This would come up during something like inner child work, where we were working with an inner teenager who espoused a belief that sex was not meant to be pleasurable or that it should be secretive. The adult, centralized Self was usually more than willing to heal the inner teenager and enjoy sex or be more passionate sexually, without the repression they experienced. The force of the inner nun was what was preventing such a thing. It was only by working with the opposing force within that the sexual restrictions healed, or at least considerably lessened. Similarly, we may have forces within us impacting our decisions: our finances, who we decide to date or marry, whether or not we have a child, where we move, what job we take, and more. We may have a part within us that desperately wishes to have a child, and an opposing force that is saying that it is too expensive and is fearful of the pain of childbirth. We may have a part of us that desperately wants to succeed, and another part of us that is opposing that because success would take us away from our community, neighborhood, and family. 
By reconciling and understanding these parts we can have clarity and heal what is restricting us from becoming who we are meant to be in this world. This is obviously quite advanced work, but I do like to include elements that people can work their way toward, if they are not already there. We are long past the time when people only operate on surface levels of consciousness and books that only skim the surface of what can truly be accomplished. Much like the cultural work, we will be working with two (or three) parts. The first would be whatever trauma you are working with: inner child, familial or ancestral, or past life; the second would be the opposing force; and the third would be you (adult, centralized you). While working with this inner child, you may feel it is correct to ask about the opposing force. In some cases, it will be the mother or other caregiver or a teacher or similar role. You will ask where this opposing force "lives" in your body, and ask this opposing force to step forward, noting what they look like and who they may be to you, or to the aspect you are already working with. You may also choose to work directly with an aspect of yourself that you have become aware of. I will not list them here, as they will come up for healing when they are ready. You will ask the body deva to show you where this wounded or separated aspect of Self is held within the body, check in with the body and the physical, energetic, and visual representation of the energy held there, then create a visual or symbol for that aspect of Self. You can also access this work by using the questioning (in the Tying Things Together chapter) and may find that the question _Is this an aspect of Self?_ elicits a yes answer or there is a shift in the energy of the body. You can also use the idea of opposing forces to seek clarity and understanding about a particular relationship or reaction that you had to someone in the outer world.
You would first seek what is wounded or reactive within you, then consider the opposing force, or the person you are reacting to, and where that force is located within you. All of these would be started with the basic protocol of using the body deva to find where this dynamic or energy is located within your body and sitting with the physical sensations and visual representation of the blocked or empty energy in that area of the body.

* You will now create a visual, asking what is reactive or wounded within you (or simply ask the first force or aspect of you) to step forward.
* Once you have created a visual, you will want to as clearly as possible find out what it is, if you have not already.
  - Is it something unhealed within you that you have been working on already, or is it an archetype or aspect of you?
  - We relate by using labels, so if it is an aspect of you or an archetype, sit with it until it becomes reasonably clear what it is _(Oh, this is the "crone" aspect of me)_.
* Ask the body deva what this part of you has to say and how it is impacting your life.
* Ask if there are any traumas or ways that it is creating restriction for you.
* Ask what strengths, knowledge, or beauty it is bringing you.
* If you are creating this force as a reaction to the outside world, or in relation to a decision you are making, you will want to ask their thoughts about this subject.
* Now, ask about the _opposing_ force.
* Go through the same process of visualizing or sensing this force, where it may be located, and what its label may be.
* Ask the body deva what this opposing force has to say, in general, as well as about the other aspect of self (or wounded inner child, ancestor, archetype).
* Ask the opposing force or archetype what it needs to communicate to you.
* Ask the opposing force what its function is.
  - This function may be control, protection, or something else entirely. It will likely believe that this service is beneficial.
  - Ask the opposing force what would happen if it wasn't serving this function (_What is the fear of what might happen?_).
* Consider if you need this opposition—this aspect of you controlling or creating beliefs around the other part of you.
* If this force is someone that you are reactive to or in a relationship with, ask if they are mirroring or pointing out anything that is unhealed within you.
  - If they are, you will want to ask that inner child to come up so it can be worked with.
* If you do not need this force within you, or do not need the opposition as much, say thank you to the opposing archetype (or aspect) and let it know its services will no longer be needed, or needed as much.
* Ask your body deva to shift the archetype or aspect of Self to its pure form, or invite it to depart, dissolve, or recede into the background.
* Return to the original aspect of Self that arose. Ask it if it can recognize that it is no longer opposed, and invite it to heal or shift.
* Ask your body deva to shift, change, or heal the aspect of your body where it was held.

**Matthew**

Matthew felt as if he had hit an impasse in his relationship. Although he deeply cared for his wife, he felt that she was cold and uncommunicative, and that they were no longer on the same page. He realized that he could not change her but wanted to look at the internal dynamics, or basically, what he was contributing to this situation. The body deva revealed a pattern around his heart, throat, and head that looked like a large, shadowy balloon. In talking to this area of the body, it revealed a great deal of grief but also a sense of protection; it was keeping him from experiencing the depths of the grief that he held within his body. This grief was related to a past life in which his wife (his same wife from this present lifetime) as well as a child died due to a disease that was sweeping through their village. He felt guilt that he could not take care of her, that he could not save her.
He created a contract that he would always love her and take care of her. As Matthew related this, he realized that he was still in that position. He experienced clarity that his wife was dealing with a significant amount of grief herself and that he was still trying to "save" her. He called up the "opposing force," or the aspect of him that appeared as his wife. This opposing force was in his throat area, and his grief intensified when it came up. The wife revealed that she was struggling with grief that wasn't hers, and that she was resistant and cold to Matthew to protect him from it. She believed that this was something that she needed to take care of herself, and that she needed to remain stoic in the face of all she suffered. Matthew asked this force what she needed, and she said that she needed him to not try to save her but support her instead. He agreed, and we went back to the imagery of the past life and let go of the contract of needing to save her, and offered relief and healing to the past life he had experienced. He asked his body to shift and change and felt a release of energy through his heart and throat. He asked his body deva how much of this he should say to his wife in physical reality, and it replied that he should encourage her to heal this past life and to seek counseling for her present-day difficulties in functioning, but to do so in a compassionate way, and to let it go after that. He ended the session by saying that he needed to think about how he could approach his wife in a way that was supportive but not looking to save her. Matthew worked further with his heart area to learn how to relate in a new way to his wife. He reported that his wife feels much more comfortable sharing what she has been experiencing due to his efforts and that their relationship has a sense of ease that it did not have before, but that she is not yet in a place to seek out her own healing efforts. 
**Irina**

Irina was a woman in her mid-thirties who wanted to work on the fact that she had never really felt feminine. She didn't wear makeup, dressed simply, and was uncomfortable with anyone noticing her. She defined herself as "gray," which meant that she rarely felt sexual attraction for anyone. Irina immediately clued in on a part of herself in her abdomen that was a teenager who was confused about the fact that she didn't want to date and several experiences that were embarrassing for her, including a school dance. This part of her received clarity from adult Irina about why she was the way she was. It helped teenage Irina feel better and feel like she had a support system in adult Irina. Since she did not "disappear" after this, even though she seemed okay, we asked if there was anything else she needed. The body deva said that she still felt as if she was wrong for her sexual preferences. We asked the opposing archetype to step forward. It looked like a hypersexualized woman from one of the James Bond films and was located in her pelvis. It said that she needed to look and act a specific way in order to be considered a worthy woman. Adult Irina said that she understood the archetype and felt like she denied the archetype a bit because of her wounded inner teenager. She and the archetype negotiated ways in which she could feel more feminine, such as placing flowers around her home and being more confident in her attitude. The opposing archetype disappeared, happy that she had been heard. Irina's inner teenager realized that she did not need to act or be a specific way to be considered "feminine" or a "woman" and disappeared. Irina started to buy flowers and even took a flower-arranging course. She started using cooking to explore her feminine side, and had a friend teach her to put on makeup for special events.
Her sexual preferences did not change, as they did not come from a place of wounding for her, but she became even more confident and clear about who she was and how she wanted to relate to others in a relationship. She found her relationships went much better now that she had the capacity to be up front and clear with anyone who expressed an interest in her.

**Charlotte**

Charlotte had done a lot of previous mind-body work and noticed that the left side of her body and the right felt very different from one another. In this work, she immediately clued in to the fact that her right side was more "feminine" and her left side was "masculine." Her masculine side revealed that she did not feel comfortable with aggression, anger, or stating what she needed in this world. Charlotte realized that she held a corrupted archetype of "male" based on what society told her a man should be, and how one should behave. She told this opposing force that she was ready for it to become more _pure_, or to take on its true form. It revealed a warrior in gleaming armor. She then worked with her feminine side, and societal ideas about how she needed to be gentle and soft came up. She didn't relate to this, as she was in a male-dominated science field in which she needed to act stoic and intellectual and not draw attention to her femininity in order to be accepted. She asked the female aspect of herself what it needed, and it revealed that she could embody both softness and warrior qualities; there was no battle between the two. She agreed to this, and her inner warrior turned into a fierce female warrior. She felt energy flow up through her midline and after the session reported talking to both "sides" of herself frequently in order to reconcile them. She now feels much more confident, and no longer feels divided between the two forces within her.

CHAPTER ELEVEN

Healing the Central Myth

The central myth is at the core of our being.
It is our sense of purpose, gives our lives meaning, and we follow it blindly. The path that we are on, or believe ourselves to be on, creates restrictive beliefs and actions that we can free ourselves from. The Jungian concept of "shadow" is not neatly defined. It is not simply what we deem "bad" or "dark" within us; it is the parts of us that remain unconscious: what we have repressed. We move through our lives blindly, acting out our wounds, the pain of our ancestors and past lives, and living according to the cultural, archetypal, and mythological structures of our world. Some of you may have utilized this book as a form of self-healing, which is wonderful and completely appropriate. We all have so many wounds, so many beliefs and fears we are operating under, that to have any of that lifted away is to experience a sense of freedom that not many know. But we can allow ourselves to travel farther down that proverbial rabbit hole and discover what holds and confines us, and seek freedom from it. The work of seeking the central myth, as with much of the work with opposing forces and separate selves, is not intrinsically self-help, although it will result in a great deal of change, including physical, emotional, mental, and spiritual healing. These forms of work are for those truly interested in diving deep and releasing whatever is keeping them caged: whatever lies unknown, cast aside, or simply acted out repeatedly ("looped") without awareness. Put more simply, seeking out your central myth is not likely to cure your knee pain. Typically, the work on the knee, the inner children, and other similar subjects comes first, not because it is simpler or less valid but because it needs to be worked on first. We cannot see or access our central myth if we have a population of screaming inner children clamoring for our attention. Our central myth is how we define our life journey.
While we may have more than one myth, or have our myth change over time, we define ourselves at the very core of who we are as enacting a specific myth. For many of us, it will be the concept of the "quest." We may seek out truth, knowledge, peace, clarity, or purity. We may feel called to a particular area of study and make that our life work, or tread a path of an archetype such as "teacher," "witch," "crone," "sage," or "seeker." We may also centralize a myth based on those that are already part of our culture. I will discuss a few such myths below, but it is important to understand that our personal mythological constructs emerge from our culture and familial background. In simple terms, this means that someone who is Middle Eastern will have different mythological structures from some of the predominantly Germanic or Western mythologies and stories I list below. So consider the myths you grew up with, that are a product of your culture (even if you are removed from that culture physically, there is still a connection there), as part of this discovery process.

Common Myths and Fairy Tales that Construct a Centralized Myth

* **AMERICAN DREAM:** the idea that if one works hard enough, they can and will achieve their dreams.
* **PRINCE CHARMING:** the belief in being rescued, or of being the "white knight."
* **PRINCESS AND THE PEA:** the feeling of being too sensitive, or not physically healthy enough, for this world.
* **BEAUTY AND THE BEAST:** the belief that underneath it all, someone has good character or is a "prince" or "princess," despite outer appearances or actions, and one just has to love them so they will change.
* **CINDERELLA:** the belief that love or a "soul mate" and a perfect existence will come after misery.
* **SUPERMAN OR HERO:** the belief in the need to be strong, to save the world (and sometimes save women from themselves).
* **WOUNDED HEALER:** the belief that if one heals themselves, meditates, or spiritually explores enough, all of their physical, emotional, and spiritual difficulties will cease.
* **THE SNOW QUEEN:** the belief that one is good, pure, and innocent despite being cruelly treated, and that in the end evil will be found out and cruelty will be punished.
* **PETER PAN:** permanent childhood and rejection of anything "adult."
* **ROBIN HOOD:** activism and activities focused on offering relief to those suffering, accompanied by anger toward those who are "rich" or do not share your beliefs.
* **WARRIOR:** a path that includes struggle and discipline for a larger cause.
* **THE OUTLIER:** a mythology created from pain and separation.
* **STAR WARS:** the belief that one is fighting the forces of darkness and evil. The "Self" is good and must always stay "in the light" despite lower urgings.
* **HARRY POTTER:** the belief that the world, and the people in it, are out to get you, and that one must always be on guard or alert.
* **THE CHANGELING:** similar to the Outlier, but with a mythology that the Self is alien, fairy, or non-human in some capacity as a way of shielding oneself from participating in the human world.

There are many other myths and fairy tales, and these are obviously crude assessments of how taking on a central myth or story can create a dysfunctional "loop" in our lives. Some of these are more damaging than others. Speaking as someone who has been engaged in spiritual exploration and work for a long time, I can say that the Outlier belief as a response to pain and trauma is prevalent; it is rare to move beyond the wounds associated with it, as the mythology is too appealing to release. If we heal our inner children and release the need for this myth, we can move forward without it.
Even if we truly are an outlier, or wish to have a part of us be a "white knight," or truly feel called to teaching, being a plumber, geologist, fireman, or shaman, if we are suffering or feeling restriction as a result of our central myth, we can examine it and release it. In this way we can let go of the quest, the endless and relentless need to prove oneself or create drama, and take the beauty and strength of those mythologies we resonate with. When we become conscious of our myths, we can release them. The difficulty is that they are so central to our identity, we may not be sure what to do with ourselves without them. What would happen if we were a knight of the round table without a quest to go on or a fire-breathing dragon to slay? What would happen if we were no longer waiting for our magical Merlin to teach us and guide us? Who would we be if we were not feeling separate, unloved, or an outlier?

Releasing the Central Myth

To begin, we will either ask the body deva if there is a myth we should look at, or we may already have one in mind that we wish to work with. If we are unclear, the myth of the "quest" is always a good place to start. Ask the body deva to show you where you may hold this myth in your body. Explore the physical and energetic nature of this part of your body, and ask the individual consciousness of this area what it holds in relation to this myth.

* After getting your initial assessment (creating a visual and feeling the physical effects in the individual body part, or area of the body), you will ask, _What parts of this myth are causing constraint for me?_
  - How does this myth cause you to act a specific way, have specific beliefs, be attracted to specific individuals (or attract specific individuals to you) in your life?
* Visualize the protagonist of this myth.
  - This will be you, but it can look like anything. It will be the aspect of you that relates to this myth, or quest.
  - For example, what does your inner teacher or sage look like?
  - For example, what does your inner activist Robin Hood look like?
* Ask this protagonist who they are and what they believe to be true about themselves, the world, and the people in it.
* Consider how you relate to this.
* Is this outlook creating pain?
  - Is this outlook preventing you from feeling free to achieve certain things in your life?
  - What sort of wounds would I have to work on if I did not believe this myth?
  - Is this something that feels strong, or is it creating pain or struggle?
* Consider also how much energy you have put into this quest. This means that if we have been on a quest for forty years to find "the truth," we have likely expended a lot of energy and experienced a lot in relation to this. Some of this may have been a fruitful use of our time, but at other times we may have felt pushed, controlled, or experienced trauma as a result of this quest.
* Ask for the beauty or strength of this myth and what it has brought you. Realize that you can take the strength and beauty without the constraint, without feeling controlled, and without blindly acting out this myth without consciousness.
* Consider, in your current adult state, if you still need this central mythology: whether you wish to experience total freedom from it, still need it partially, or still need it totally.
* Fear may arise at the idea of ending such a mythology. It is helpful to remember to work with that fear and feel compassion for it, and to realize that the "death" of this mythology means freedom rather than physical death (and a subsequent "rebirth").
* Let your body deva know your answer, and ask the protagonist to change, shift, or disappear in relation to how much you feel you may still need it.
* Ask your body deva to change or shift the area of your body where this energy was held in relation to this central myth.
* Say thank you, if it feels appropriate.
**Mary Kay**

Mary Kay was seeking to break the cycle of always seeking a "white knight." She actually had a mixed mythology, in that she would meet men who she felt were wonderful, but when she got into relationships with them they would be subtly or outright abusive to her, quickly going into "beast" mode. She would then hold onto the myth of the "white knight," thinking that if she only could love or care for the man enough he would turn back into a "Prince Charming" character. After they were cruel to her, they often would do this, which confused her further. Mary Kay worked with inner child, familial, and cultural energies. She grew up in a household where men were expected to dominate women and provide for them. Her background was largely Macedonian, with a tradition of women taking on specific gender roles in the household and taking care of children. The confusing aspect of this to her was that she enjoyed some of this; she wanted a male to be a source of strength and to take care of her. She wanted this in a healthy way, however, and without feeling demeaned or violated as a result of it. She stated that her parents' relationship was a perfect example of this, and that her father was her ideal "white knight." Her body held this myth in her heart, and her heart revealed a broken-down "princess" character who desperately wanted someone to love her and give her life meaning. She realized that this "princess" was attracting men who would show her this love in a way that was not healthy, and asked her body deva to shift her into someone who could be a "healthy princess," or someone who could hold her own and still be treated by men as worthwhile yet feminine. The princess changed and stood upright. In this situation, Mary Kay did not want to get rid of her mythology, but to alter it. Perhaps at some point she will no longer feel a need for it, but after the session she reported feeling stronger and having greater capacity to stand up for herself.
She also laughed at the fact that she was now in touch with her intuition at a greater level and recognized when "white knight" men who wanted to create a pattern of abuse, or who had an inner "beast," approached her. She is now very conscious of this "loop" and will not allow her inner "princess" to be anything but strong, and to wait for a man who can also be a healthy source of strength for her.

**Gregory**

Gregory had been on the path of the "magician" and "hermit" for thirty years. He was a practicing occultist who, since childhood, had had a deep interest in anything hidden, "occult," or mystical. He was initiated into several traditions and systems, and related how separated he felt by what he had experienced. He had been realizing lately that embodying this archetype had its limitations, and had long ago become weary of associating with most other occultists. We asked for his inner "magician" to come forward, and it was held in his solar plexus. It looked like a tired, world-weary old man. He was surrounded by a cave of books and was sitting next to a fireplace. There was nobody else around him or in the direct vicinity. When asked for the strengths associated with this myth, the inner "magician" replied that he had gained immense knowledge and truly knew what it was like to see far beyond what most people do. But this had come at a price, and he felt like he was no longer a part of this world, and could not be. He felt weary of this world, and of others who were not able to see the world as he did, so his path was now one of the "hermit." Gregory related that he felt this way, that he realized how much this path had offered him, but that he no longer wanted to be constrained by the solitary, world-weary magician. He expressed to his body deva that he felt that he no longer needed this myth, and the "magician" disappeared, and his body released the energy in his solar plexus.
Although Gregory still prefers a very small circle, and isn't exactly enthused about the world or the people in it, he reported feeling no longer constrained by needing to study and practice so much. He sold a large portion of his library and is finding enthusiasm in nature. He feels much more energized, and no longer feels constrained by the "quest" of the magician: to always explore deeper and deeper terrains. He is happy to simply explore what he sees fit to, and has let go of the specific rituals and achievements that once defined him.

CHAPTER TWELVE

Tying Things Together

The purpose of this chapter is to provide a clear framework in which to seek out and work with the body deva. The method in some ways is simple: we are engaging with the consciousness of the body to find out what lies unhealed within our physical form and then providing the support to resolve it. The variations and ways to go about this, as well as what we may find within, are endless. In time, your own direct experience and engagement with the body deva will teach you new ways of working with this material, and offer you a sense of flow that takes you far beyond the ten-step process listed below.

Before the Session

To start, you may wish to draw a body map. This can highlight or show you possible areas to work on, and allow you to see how you are doing with this work over a period of time. Body maps can be done every time, before doing the work, or you may wish to do them on a semi-regular basis, such as once a week, if you are doing this work on a more frequent basis.

**STEP ONE**

You will now "call up" your body deva. Visualize it as either an outer symbol, an inner symbol, or by visualizing or sensing your midline and/or heart.

**STEP TWO**

Set intention or state what you would like to work with. There are a few different ways to pick what you are going to work on in a session.
The first way would be to simply think of a topic that is top of mind, or is really calling out for your attention. This can be a specific pain in your body, an emotion, a reaction or interaction that you wish to look at, or something of a more energetic or spiritual nature. For example:

**Physical**

* _I would like to find out why my (knee, hip, and so on) hurts or feels restricted._
* _I would like to work with anything that is creating (a specific disease, discomfort, or imbalance)._
* _I would like to work with anything that is causing me to feel low energy, ill, or sick._
* _I would like to explore why my body map shows that I do not have an abdomen or legs._
* _I would like to explore why my menstrual cycle is so difficult, or what is creating hormonal imbalances._

**Emotional**

* _I would like to explore why I get so angry (or why I get so angry every time someone does "x")._
* _I would like to explore: grief, depression, apathy, loneliness._

**Energetic and Spiritual**

* _I would like to work with anything that is causing me to feel blocked or stagnant in my life._
* _I would like to understand and heal any self-sabotage._
* _I would like to explore why I am not embodied in my lower body (or why I do not feel grounded)._

**Interpersonal**

* _I would like to understand and heal my reaction to or conflict with (family member, friend, random person who has caused an emotional reaction)._
* _I would like to know what is at the root of the dislike I have for my boss._
* _I would like to know why I cannot stay in a relationship, and why I only attract a specific type of person to me._
* _I would like to heal my relationship, marriage, or partnership. Show me what I am holding in relation to this._

**Specific Patterns and Conflicts**

* _I would like to work on an inner child, ancestral issue, or past life issue. Can you show me where I may hold these energies?_
* _I would like to heal my issues with money. Can you show me where I hold this imbalance?_
* _I would like to heal my issues with (sexuality, femininity, masculinity, power, strength, being who I am intended to be in this world). Can you show me where I hold this imbalance?_
* _What is preventing me from becoming more conscious?_
* _What is preventing me from being fully who I am in this world without apologies?_

Another way to start would be to have an "open session." This means that you will simply ask or intend for whatever would be most healing for you to come forward in the session. As stated above, you can also use your body map to focus on a specific area of your body that may be out of balance.

**STEP THREE**

You will now sense or visualize your body deva, say hello, state your intention, and ask the body deva what it may have to say about the subject.

**STEP FOUR**

You will now ask the body deva to show you where you may have unhealed energies in relation to the topic.

* Do a body scan or sense what areas are highlighting or drawing your focus.
* Pick a body part that is the most significant to work with, or is drawing you in the most. You can also ask for the "linchpin," or the pattern that would provide the most healing.
* Sit with that individual part of your body and notice how it physically feels to you.

**STEP FIVE**

You will now work with the individual consciousness, or "talk" to the specific area of the body that was found. This will relay different information than your body deva. The body deva offers general information and is concerned with preserving your body as a whole. An individual cell would have a very specific focus and consciousness, with specific information to relay. An organ or part of the body (such as the pelvis) will have information mostly regarding itself and the specific area.

Talking to Your Body

After feeling how the area physically feels, you will note whether the area feels _full_ (stuck), _empty_, or _both_.
Remember that we have many layers, and an area may have different patterns that need to be healed, so a mixture of both is not surprising to find.

* You will note the energetics of the area and create a visual of the emptiness or fullness. What is its shape? Colors (dark/light)? Size?
* Ask the individual body part (or whatever aspect of individual consciousness you are working with) for general information about what it may be holding. How long has the energy been there? What function does it serve? What emotions are present?
* You may choose to work with the individual consciousness of this body part, asking it to reveal its needs for healing and helping it to understand that you no longer need the function it describes. You may also let it know that you are an adult (state your age) and now have the ability to let grief or other emotions arise.
* The body part may not understand that you are now fifty years old, because in its unhealed state it is as if it were frozen in time. Ask your body consciousness to release any unhealed emotions and shift or change to a healthier state.

**STEP SIX A**

Ask the body deva what you would need to actualize in your life in order to fully heal the area, negotiating if necessary. For example, if your knee pain is coming from sitting at a desk all day, you may be ready to go completely to a standing desk, but you may wish to negotiate instead and offer to get up and stretch. The key to this is to actualize this in your daily life and, ideally, let the body deva know when you do. If you decide to work on this step (instead of the alternate Step Six B), move on to Step Ten after completing this step.

**STEP SIX B**

You may choose instead to work directly with the trauma, more deeply exploring the root of the pattern. The individual body part may tell you exactly at what age the energy it is holding originated. It may reveal you have had it since you were twelve years old, or that it is from your mother.
If it does not, or you wish to simply sit with discernment, you will slowly offer your body the different options as to the source of the energy:

* Is it inner child?
* Family or ancestral?
* Past life?
* In utero?
* Cultural?
* Archetypal, symbolic, or a part of myself?
* Something else? If so, what?

You will pause after each query. A positive response is either a sense of knowing that one of these areas has more significance than the others, or a heightened or shifting energy in the individual area you are working with. If multiple responses are offered, it is often best to work with the "root," or whatever came first. You may also ask your body deva for confirmation as to which need for healing would be best to work with.

**STEP SEVEN**

You will then ask the body deva to bring forward the source (the inner child, ancestor, past life, and so forth) and gain a basic understanding of the trauma that occurred. What created overwhelm, pain, or difficulty? You want to get a solid enough understanding of what is going on, and of what may be distressing if you try to move forward. You do not need to engage in an endless saga; the purpose is to understand enough of the basics of the trauma to understand the beliefs or contracts that were created. You may wish to use the "television screen" technique, working with resistance and using phrases such as _Tell me more_ and _If I could hear the answer to this, what would I hear?_ in order to access more information. Fully understand the beliefs that were created as a result of this trauma:

* What beliefs about the self were created?
* What beliefs about the nature of the world were created?
* What beliefs about people were created?
* What beliefs about men or women were created?

Reflect on the beliefs and see how valid they may be for you now. To the best of your ability, acknowledge how these beliefs have affected you.

**STEP EIGHT**

Ask your body deva or the person directly what they would need to be healed.
Visualize the person receiving what they need. Some people choose to visualize a white or colored light (whatever seems intuitively appropriate) to offer healing support. If you are doing cultural or archetypal work, you may also wish to locate and interact with the opposing force. The session is complete when you no longer notice the person you have been working with (they disappear). If they do not disappear, and you get a sense that the session is over, you will simply say thank you and move on to the integration and release.

**STEP NINE**

You will now return to focusing on your present-day body and the area where the stuck or empty energy was. This area may have already been releasing, but you will ask the body consciousness of the area if it realizes your current age, and that you no longer need it to hang onto the beliefs and energies from (yourself, your ancestor, and so forth). Ask the individual body consciousness to release, change, or shift this energy.

**STEP TEN**

Ask the body deva to help release and integrate this work with your body as a whole. Say thank you to the body deva and take some time to rest before going about the rest of your day.

Aftercare

It takes some time for this work to release and reintegrate, so be respectful of the fact that it is starting a process. In simple terms, this means that it will take you on average three to five days to finish releasing, and approximately two weeks to come to a new sense of homeostasis. All levels of the Self (body, mind, emotions, and spirit) must accept and work with the changes and shifts that have occurred. After this work, and for approximately the next three to five days, emotions, memories, strange dreams, physical shifts and possible aches, and other signs of the body processing and releasing are likely to occur. After that time is when the rest of the spiritual and mental shifts occur. If you have done a small session, this may only take a few days.
If you have released a core belief, or have significantly shifted something, it can take a month or so. After about a month, you will have the space and perspective to see how much that belief may still be affecting you, or how your body map has changed. If this work gets to be too much, remember that we do not need to do the heavy lifting on our own. This is where experienced professionals and healers of varying types can help us lift the proverbial boulder we are working with, so we can return to our process of working with pebbles and large rocks. We shouldn't suffer alone, and we do not heal alone. _Shared pain is lessened pain_, and by finding another person to do this work with and to chat with, or a healer to help you with tricky patterns (or simply things you are stuck in or overwhelmed by), this process will go a lot easier. We are so used to numbness that emotions, memories, or physical experiences arising after inner work that do not quickly shift us into more numbness or an outright "feeling better" state are deemed failures, or are quickly numbed. If we have been in a coma lying in a hospital bed for twenty years, our first stretch and walk down the hall is going to be uncomfortable. It is by understanding that this work is a process, and by being conscious that after this type of work there will be shifting, that any fears of authentically feeling can be alleviated. I suggest telling yourself, _I bet that this is from releasing those beliefs in my pelvis with my body deva_, so that the body can make the connection, which will even lessen any symptoms experienced. Feeling is not a bad thing, however. Even if it is grief, anger, or pain, the authentic expression of such things is so repressed in the modern world that we have grown incredibly distanced from ourselves and one another. Many of us are so numb that we are simply living out the unhealed wounds, loops, and stockpiled emotions that we have accumulated or that have been given to us.
For anyone to actually and authentically wake up, to seek within, and to release what is held there, is incredibly courageous. It also takes some effort. This work is cumulative. Accepting that your inner six-year-old may come up during twenty different healing sessions is difficult, because we want things to be fixed, now, ideally with as little effort on our part as possible, and with no difficult emotions or anything but positivity happening as a result of it. Engaging with your body deva does take time, and is like building any relationship. You will know yourself better through this work in six months than you will today. But know that each belief, each restriction, each aspect of you that is healed creates greater freedom. It creates further embodiment, more joy, more capacity to see and interact with the world with clarity and grounding. It creates wholeness, self-worth, and the ability to move forward in this world, understanding not only the deepest essence of who you are but that you can move without restriction or hesitation.

Prompts

You can work with the body deva on anything you may consider out of balance in your life. Here are a few prompts that may be helpful for further exploration:

**RELATIONSHIP TO MONEY:** Ask your body deva where you may be holding restrictions or unhealed patterns with regard to money and financial success.

**RELATIONSHIP TO FOOD:** Ask your body deva where you may be holding issues related to food, such as fears, attraction to specific foods, or even allergic or sensitivity responses. You can also engage the body deva by eating a specific food item and asking or sensing if the body is enjoying that food while you are eating it. The difficulty with this is that we utilize food as an anesthetic, in response to unhealed energies, so initially we may find that our bodies do seem to enjoy the donuts or fried food item we are eating. Even in a healed state, we may deeply enjoy an occasional donut.
By interacting with the healed and whole body deva, we will be able to discern what our bodies crave in a healthy manner. I caution you to approach working with food carefully if you have a background of extremely disordered thinking and eating. There is no need to create further obsession, or to use something like the body deva to fuel unhealed trauma.

**CREATIVITY AND PASSION:** Which area in your body does not feel creative, passionate, or vitally alive? This may reveal many places, and you will inquire as to the one that would be best to work with today, or would provide the most healing.

**SEXUAL BLOCKS AND RESTRICTIONS:** Where is your body holding any restrictions around sex or the sexual act? Where are you holding wounding in regard to your sexual orientation? Where are you holding wounding in regard to the sexual partners you have had? What parts of you may not enjoy sex?

**MIRRORING:** The people as well as the situations we react to can be an incredible catalyst for inner work. The thought behind mirroring is that the outer world shows us our wounds, and what we react to shows us where we need to heal. Every person we react to points to something that is "shadow" (repressed or unconscious) within us. Every person we react to can show us which parts of ourselves remain frozen and unhealed. Mirroring is an incredible tool. Outer experiences and people can be presented to the body deva to find where we hold this energy in our bodies. We would then want to ask the body deva, or the individual consciousness of the body part, how this person represents something unhealed within us. Not everyone is a mirror for us. If someone harms us, or we have experienced something negative or traumatic in our lives, it does not mean that we attracted it, out of some sort of New Age ideology that seeks to blame the victim. If someone is unkind, violent, threatens your safety, or is just simply a jerk, that does not mean you need to heal your inner jerk.
It is always helpful to use our experiences and how we react to them as a catalyst for inner work, and would be especially indicated if you find yourself ruminating about the person or the situation far beyond when you experienced the friction or annoyance. If we get cut off in traffic, we may momentarily get angry and even want to flick off the other driver, and that would be in the reasonable category of reactions to someone who threatened your safety. If you are thinking about this person hours later, it indicates that they may be a mirror for you, or at the very least may be tapping into your "anger" stockpile. **CAREER AND PERSONAL BLOCKS:** If you feel blocked in your life or career, ask for this imbalance to show up in your body. What would this block look like? What is preventing you from being in a career you feel called to or would enjoy? Move to the resistance and blocks chapter to work with any energies that may be blocking you. **OVERALL HEALTH AND WELLNESS:** What parts of you feel really unwell? What parts of you feel as if they could never be healthy or well? Where are you holding past sickness or trauma from that sickness? What is interfering with you feeling physically well? **SEPARATION AND TRAUMA:** Which parts of you do not want to participate in this world? Where do you hold energies that cause you to feel separate, isolated, or unloved? Where do you separate from yourself? Where do you separate from others and the world? What parts of you are unwilling to offer love to others in fear of rejection, abandonment, or not being loved in the same way you offer your own love? What needs to be healed within you so that you can have strong, stable boundaries? What needs to be healed so you can feel free to say no to others? What needs to be healed so that you can offer yourself nurturing or say yes to yourself? What needs to be healed so that you can truly offer yourself to others? _What parts of me feel disconnected? 
What is preventing me from becoming a more conscious individual? What is preventing me from feeling love for myself? What is preventing me from feeling love for others? What is preventing me from feeling joy? What is stopping me from feeling free?_ Closing Thoughts We are consciousness. Our body has consciousness. Our big toe has consciousness. Whether we are connecting to the consciousness of our bodies as a whole or an individual part, or even to the consciousness of a cell within ourselves, we are taking the journey to knowing ourselves, and healing ourselves, in a way that few choose to. The degree to which those connections to Earth, to the Divine, are opened is what allows our connection to a consciousness greater than ourselves. The degree to which the connection from the heart is flowing is what we are offering of ourselves to the world. In what ways are we flowing? Are we willing to be sustained by something other than ourselves? Are we willing to be sustained by Earth, by Spirit? We can decide to make the unknown known, to heal the emotional, mental, spiritual, and physical imbalances that we carry. I realize in stating this it is easy to take this concept overboard, to engage in the black-or-white resistant thinking that with this work you will be fully healed and will achieve immortality, gain riches, and achieve enlightenment. If we do consider and heal what we carry, we certainly become lighter. Our constricted views open, our wounds heal, and the emotions stockpiled within us and the psychic weight of the patterns and constrictions of our ancestors and families, past lives, and society can be less heavy. If we allow ourselves to do so, we truly can move beyond what we believe and know to be true about ourselves, our bodies, and the world. Doing so allows us to achieve a sense of peace in our lives that otherwise may not have been possible. 
It is by taking personal responsibility for ourselves, for what we carry, that we can not only make our own lives better but the world a better place to live in. The world needs more "adults"—those who have moved past having a legion of small children internally guiding their choices and creating chaos for themselves and others in this world. In the book, I suggest doing this work gradually. This is because many of us carry such weight that it is like we are carrying a million suitcases. It would be too freeing to simply dump all of those suitcases over a bridge and be done with it. The person would go into a tailspin and beyond their current adult capacity to deal with it. But we can all make that luggage lighter, releasing one suitcase and then another. Whatever weight or "luggage" you are able to release as a result of this work, whatever compassion you are able to show yourselves and what lies within, whatever voice you are able to give to what previously was unconscious within you is the extent to which you can move beyond simple emotional reactivity and acting out of what is unhealed—the "loops" that we all move through again and again, seeking healing. It is easy to look at an illusory end point for this work—that as a result of this work, you will become a beam of light, ascend to another realm, or have all of the difficulties that inherently come from being in a human form and the relationships and connections that come and inevitably go no longer come to pass. I would encourage you to look at this work as a process, one that can be continually engaged in. Never lose your curiosity or willingness to question. And never believe in your own resistance—it is either telling you that you suck or that you are now perfected and no longer need to do any inner work. Both are illusions. 
What happens through this process, through any in-depth spiritual and meditative process, is that what is within us reveals itself and is then worked with through whatever means the person finds. There are, of course, some methods and tools that are better than others, but it is helpful to keep in mind that it is easier to carry 2,000 suitcases than a million when engaging in processes like this. In time, it becomes easier to let go of the rest of the suitcases because what is resistant in you is really what is unhealed. After you have cleared out the five-year-olds and the screaming toddlers and the Goth teenager and the twenty-something who drank too much, as an adult you find that you don't want to carry this baggage. You will gradually move from resisting inner work to actively engaging in it because you know how deeply healing it is to experience more freedom. The irony of such things is that releasing all of the baggage is an illusory end point. You will find mini-suitcases to work on, as well as purses and the occasional duffel bag when you are done with your regular luggage. Once you move past the resistance, you will be willing to search for this luggage, though. It makes you want to look for it, because you realize that the more you heal yourself, the less engaged you become with the chaos that surrounds you; the less reactivity you have, the more stillness you will embody, and the more your tornado of chaos will dissipate. Your relationships will deepen and clarify. There are still emotions, but at a certain point there is a distinct willingness and even joy that comes from doing inner work, because you realize that delving in will bring even more clarity and peace into your life. This is a continual process. Nobody ever reaches the end of it. Allow yourself to let go of an illusory end point, or the unkind thoughts of needing to be someone different right now. 
Have compassion for yourself in this process, and realize that your openness will guide the process. By continually allowing yourself to heal, to lighten your load, to release your baggage, you will increase your capacity for more flow, more connection, and greater understanding of yourself and the world. _Whether you take one step forward and remove one bag, or release all of your luggage and begin looking for carry-ons, I thank you for doing this work._

Further Resources

If this book has created interest in bodywork, consciousness studies, or spiritual exploration and evolution, here are some recommended resources to further your studies. All of these works have either been influences for this book or will provide further insights into unhealed or unconscious material that can be worked with utilizing your body deva.

** _The Spiritual Awakening Guide_** by Mary Mueller Shutan; Findhorn Press, 2015
** _Managing Psychic Abilities_** by Mary Mueller Shutan; Findhorn Press, 2017
** _The Complete Cord Course_** by Mary Mueller Shutan; Create Space, 2015

For further information regarding Mary's work, visit _www.maryshutan.com_

**Craniosacral Therapy, Zero Balancing, Energy Work, and Talking to your Body**

** _Your Inner Physician and You_** by John Upledger; North Atlantic Books, 1997
** _Cell Talk_** by John Upledger; North Atlantic Books, 2003
** _The Heart of Listening_** by Hugh Milne; North Atlantic Books, 1995
** _Understanding the Messages of Your Body_** by Jean-Pierre Barral; North Atlantic Books, 2007
** _The Polarity Process_** by Franklyn Sills; North Atlantic Books, 2001
** _Being and Becoming_** by Franklyn Sills; North Atlantic Books, 2008
** _Polarity Therapy Vol 1 and 2_** by Randolph Stone; Book Publishing, 1988
** _Inner Bridges_** by Fritz Smith; Humanics, 1986
** _The Personal Aura_** by Dora Kunz; Quest Books, 1991

**Psychology, Trauma, and Mind-Body Relations**

** _When the Body Says No_** by Gabor Maté; Vintage Canada, 2004
** _The Body Keeps the Score_** by Bessel Van Der Kolk; Viking, 2014
** _Drama of the Gifted Child_** by Alice Miller; Basic Books, 1996
** _A Little Book on the Human Shadow_** by Robert Bly; Harper, 1988
** _Fear of Life_** by Alexander Lowen; Bioenergetics Press, 2003
** _The Language of the Body_** by Alexander Lowen; MacMillan, 1977

**Spiritual Studies–Mind, Body, Energy and Spirit**

** _How to Practice Self Inquiry_** by Ramana Maharshi; Freedom Religion Press, 2014
** _Talks with Ramana Maharshi_** by Ramana Maharshi; Inner Directions, 2001
** _Tao and Longevity_** by Huai-Chin Nan; Weiser Books, 1984
** _The Doctrine of Vibration_** by Mark Dyczkowski; State University of New York Press, 1987
** _Kundalini: Energy of the Depths_** by Lilian Silburn; State University of New York Press, 1988
** _Vijnana Bhairava: The Manual for Self Realization_** by Swami Lakshmanjoo; Munshirm Manhoharlal, 2001
** _Into the Heart of Life_** by Jetsunma Tenzin Palmo; Snow Lion, 2011

**Further Spiritual and Cultural Studies**

** _How to Know Higher Worlds_** by Rudolf Steiner; Steiner Books, 1994
** _The Knee of Listening_** by Adi Da Samraj; Dawn Horse Press, 2007
** _Prometheus Rising_** by Robert Anton Wilson; New Falcon, 2010
** _Initiation into Hermetics_** by Franz Bardon; Merkur, 2001
** _Programming and Metaprogramming in the Human Biocomputer_** by John Lilly; Three Rivers Press, 1987
** _Mind: Its Mysteries and Control_** by Swami Sivananda; Divine Life Society, 1994
** _Shamans, Healers, and Medicine Men_** by Holger Kalweit; Shambhala, 1987
** _The Mermaid and the Minotaur_** by Dorothy Dinnerstein; Other Press, 1999
** _The Wise Wound_** by Penelope Shuttle; Grove Press, 1988

Also of interest from Findhorn Press

** _The Spiritual Awakening Guide_**
** _by Mary Mueller Shutan_**

THIS PRACTICAL BOOK opens new understandings of how to live in the world while going through an awakening process.
Mary Mueller Shutan provides tools for how to navigate through each of the twelve layers of an awakened state and explains how to recognize where we are in our spiritual journey, along with common physical, emotional, and spiritual symptoms that may be experienced on the way. She offers the revolutionary idea that we are meant to be humans, to have a physical body with physical, sensate experiences and emotions. We are meant to live in the world and be a part of it even as fully awakened individuals. This guide proposes a look at the possibility of leading a grounded, earthbound life of work, family, friends, and other experiences in an awakened state. **978-1-84409-671-8** ** _Managing Psychic Abilities_** _ **by Mary Mueller Shutan**_ APPROXIMATELY 20% OF THE POPULATION is sensitive or in some way psychic. Being sensitive or psychic can allow you to understand the world in a way that most people can't, and to see beyond what others are able to. But for many, sensitivities are a burden, causing overwhelm or even physical ailments. You don't want to become even more sensitive but know how to manage your life as is. This book can teach you how psychic abilities and sensitivities develop, where you are on the spectrum of these, and most importantly, the basic and intermediate skills and techniques you need to learn to be healthier, more functional, and to feel in control of your sensitivities and abilities. The first modern guide on how to manage psychic abilities and sensitivities from a spiritual standpoint, this book will teach you everything you need to know so you can truly thrive in this world as a sensitive person. 
**978-1-84409-700-5** **_Life-Changing Books_** Consult our catalogue online (with secure order facility) on _www.findhornpress.com_ For information on the Findhorn Foundation: _www.findhorn.org_ About the Author Mary Mueller Shutan is a Spiritual Healer and Teacher with an extensive background in Chinese Medicine, CranioSacral Therapy, Zero Balancing, and Shamanic Healing. She is the author of _The Body Deva_ , _Managing Psychic Abilities_ , _The Complete Cord Course_ , and _The Spiritual Awakening Guide_. Mary lives near Chicago, Illinois. For more information on her work please visit her website: _www.maryshutan.com_ About Inner Traditions • Bear & Company Founded in 1975, Inner Traditions is a leading publisher of books on indigenous cultures, perennial philosophy, visionary art, spiritual traditions of the East and West, sexuality, holistic health and healing, self-development, as well as recordings of ethnic music and accompaniments for meditation. In July 2000, Bear & Company joined with Inner Traditions and moved from Santa Fe, New Mexico, where it was founded in 1980, to Rochester, Vermont. Together Inner Traditions • Bear & Company have eleven imprints: Inner Traditions, Bear & Company, Healing Arts Press, Destiny Books, Park Street Press, Bindu Books, Bear Cub Books, Destiny Recordings, Destiny Audio Editions, Inner Traditions en Español, and Inner Traditions India. For more information or to browse through our more than one thousand titles in print and ebook formats, visit www.InnerTraditions.com. Become a part of the Inner Traditions community to receive special offers and members-only discounts. 
Books of Related Interest

**The Spiritual Awakening Guide**
Kundalini, Psychic Abilities, and the Conditioned Layers of Reality
_by Mary Mueller Shutan_

**Managing Psychic Abilities**
A Real World Guide for the Highly Sensitive Person
_by Mary Mueller Shutan_

**Being Present**
Cultivate a Peaceful Mind through Spiritual Practice
_by Darren Cockburn_

**Ancestral Medicine**
Rituals for Personal and Family Healing
_by Daniel Foor, Ph.D._

**Emotion and Healing in the Energy Body**
A Handbook of Subtle Energies in Massage and Yoga
_by Robert Henderson_

**Awakening the Chakras**
The Seven Energy Centers in Your Daily Life
_by Kooch N. Daniels, Pieter Weltevrede_ and _Victor Daniels_

**Effortless Living**
Wu-Wei and the Spontaneous State of Natural Harmony
_by Jason Gregory_

**Essential Oils in Spiritual Practice**
Working with the Chakras, Divine Archetypes, and the Five Great Elements
_by Candice Covington_

INNER TRADITIONS • BEAR & COMPANY
P.O. Box 388
Rochester, VT 05767
1-800-246-8648
www.InnerTraditions.com
Or contact your local bookseller

Findhorn Press
One Park Street
Rochester, Vermont 05767
www.findhornpress.com

Findhorn Press is a division of Inner Traditions International

Copyright © 2018 by Mary Mueller Shutan

All rights reserved. No part of this book may be reproduced or utilized in any form or by any means, electronic or mechanical, including photocopying, recording, or by any information storage and retrieval system, without permission in writing from the publisher.

**Disclaimer**
The information in this book is given in good faith and is neither intended to diagnose any physical or mental condition nor to serve as a substitute for informed medical advice or care. Please contact your health professional for medical advice and treatment. Neither author nor publisher can be held liable by any person for any loss or damage whatsoever which may arise from the use of this book or any of the information therein.
A CIP record for this title is available from the Library of Congress print ISBN: 978-1-84409-745-6 ebook ISBN: 978-1-84409-754-8 To send correspondence to the author of this book, mail a first-class letter to the author c/o Inner Traditions • Bear & Company, One Park Street, Rochester, VT 05767, and we will forward the communication, or contact the author directly at **<http://maryshutan.com>** Electronic edition produced by Digital Media Initiatives
shea serrano post

[A typical Shea Serrano tweet]

A one-time teacher, Shea is renowned in the writing world for his entertaining and percipient perspective on hip-hop, sports, and pop culture.

Shea Serrano is a Mexican-American author, journalist, illustrator, and former teacher. He began his career with the Houston Press and rose to prominence for his contributions to the now-defunct Grantland; today he is best known for his work with the sports and pop culture websites The Ringer and Grantland, as well as for his books The Rap Year Book, Basketball (And Other Things), and Movies (And Other Things), all three of which were New York Times #1 best-sellers.

The Rap Year Book: The Most Important Rap Song From Every Year Since 1979, Discussed, Debated, and Deconstructed took the book market by storm on release, landing on the bestseller lists of the New York Times and the Washington Post, topping the Arts and Entertainment iBooks bestseller list, being named one of Billboard's 100 best music books, and later becoming a six-part documentary on AMC. Basketball (And Other Things) put the phrase "never judge a book by its cover" to the test: after a long, petty-driven standoff with Amazon, Serrano finally dropped its cover. Movies (And Other Things), illustrated by Arturo Torres and released on October 8, 2019, "combines the fury of a John Wick shootout, the sly brilliance of Regina George holding court at a cafeteria table, and the sheer power of a Denzel monologue, all into one." In 2018 he delivered Conference Room, Five Minutes, a collection of illustrated essays about The Office, and during the coronavirus pandemic he released his first work of fiction, titled Post and set in the 1996 San Antonio of his youth. His essay A Wedding Thing is part of The One, a collection of seven singularly true love stories of friendship, companionship, marriage, and moving on; each piece can be read or listened to in a single sitting.

A proud San Antonio native "who wears shorts & slides w/ his socks hoisted 2 the d*mn kneecaps ALL DAY," Serrano was once a web commenter who actually used his real name, and he has since become one of Twitter's most positive influencers, known for mobilizing his followers for charity. After one August 2017 tweet he opened up his PayPal and Venmo accounts, and the donations started pouring in: $2,000 in 10 minutes, $13,000 in 30. His fans have sent copies of his basketball book to soldiers deployed overseas; he tweeted an Express-News photo of a massive San Antonio Food Bank giveaway and urged his followers to donate; and less than two hours after one December 2020 post, the $475 a woman named Angeline was looking to get help in covering had turned into $2,500, a chain reaction of kindness with donations flowing in at an incredible rate. A Sanders supporter during the primary elections, he backed Biden once Biden's clear victory in the primaries was apparent. Despite his increased notoriety with the celebrities he writes about, the best-selling author isn't particularly eager to move to L.A.

Serrano also appears regularly on podcasts and in interviews: on The Ringer's The Rewatchables he discussed 'Toy Story' with Sean Fennessey and Mallory Rubin; he and Jason Gallagher got together to count down their five favorite episodes of a series; with Bill he talked Cobra Kai Season 3, callbacks to the Karate Kid universe, and Season 4 predictions; and in a December 2020 NOC interview with Jamal Michel he talked basketball and the Fast & Furious series. A student blog post by Maggie Vanoni, with photos by Cheyenne Thorpe, covered one campus visit, where Serrano signed copies of his best-selling books for students.
Demanding the retraction of false and misleading claims connecting immigrants at the Southern Border to terrorism
Along with RAICES and Muslim Advocates, we demanded the Department of Homeland Security retract its inaccurate and unlawful "fact sheet" on terrorists at the Southern Border.

Food & Water Watch v. Trump
Exposing the work of President Trump's unlawful Infrastructure Council, which is composed of his New York developer cronies, on behalf of Food & Water Watch.

Alabama v. Department of Commerce
Intervening to stop Alabama's attempt to sabotage the Census by unconstitutionally excluding undocumented individuals from the Census count.

Fighting the Administration's rollback of protections for disabled air travelers, on behalf of Paralyzed Veterans of America.

Demanding investigation into politicization of Board of Veterans' Appeals
We and VoteVets demanded the Veterans Affairs Inspector General investigate potential legal violations following reports the White House imposed a partisan loyalty test on Board of Veterans' Appeals nominees.

VoteVets v. Department of Veterans Affairs
Challenging the operation of the Trump Administration's illegal 'Mar-a-Lago Council', which is influencing policy and shaping decisions that affect millions of America's veterans, on behalf of VoteVets.
## If $$|x| < x^2,$$ which of the following must be true?

Source: https://www.beatthegmat.com/if-x-x-2-which-of-the-following-must-be-true-t324063.html

Question (posted by Vincen):

If $$|x| < x^2,$$ which of the following must be true?

A. $$x > 0$$
B. $$x < 0$$
C. $$x > 1$$
D. $$-1 < x < 1$$
E. $$x^2 > 1$$

Source: GMAT Prep

Reply (Brent Hanneson, GMAT instructor and creator of GMATPrepNow.com):

I find that these questions can be solved quickly (and accurately) by testing values.

For example, if |x| < x², then x could equal 2 (since |2| < 2²). This means we can eliminate choices B and D, since they state that x cannot equal 2.

Similarly, if |x| < x², then x could equal -2 (since |-2| < (-2)²). This means we can eliminate choices A and C, since they state that x cannot equal -2.

By the process of elimination, the correct answer is E.
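Brent's test-values approach can also be confirmed algebraically: |x| < x² is equivalent to |x|(|x| - 1) > 0, i.e. |x| > 1, which is exactly choice E (x² > 1). A short Python sketch (illustrative only, not part of the thread) checks this equivalence on sample values:

```python
# Verify that |x| < x**2 holds exactly when x**2 > 1,
# i.e. answer choice E is equivalent to the given condition.
samples = [-3, -2, -1.5, -1, -0.5, 0, 0.5, 1, 1.5, 2, 3]

for x in samples:
    condition = abs(x) < x**2   # the stem: |x| < x^2
    choice_e = x**2 > 1         # answer choice E
    assert condition == choice_e, x

# The counterexamples used in the reply:
assert abs(2) < 2**2       # x = 2 rules out B and D
assert abs(-2) < (-2)**2   # x = -2 rules out A and C
print("choice E matches the condition for every sampled x")
```

Boundary cases are worth noting: at x = 0 and x = ±1 both sides of the equivalence are false, so neither the stem nor choice E holds there.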
# Determine if the following set of sentences is consistent or inconsistent

Source: http://math.stackexchange.com/questions/198670/determine-if-the-following-set-of-sentence-is-consistent-or-inconsistent

Question:

Determine if the following set of sentences is consistent or inconsistent:

If John committed the murder, then he was in the victim's apartment and did not leave before 11. In fact, he was in the victim's apartment. If he left before 11, then the doorman saw him but it is not the case either that the doorman saw him or that he committed the murder.

(Comment, Pedro Tamaroff: Since you are new to this site, please consider reading "How to ask a homework question?"; I wrote this comment because the question sounds homework-like.)

Answer (Brian M. Scott):

It's entirely possible to work the problem using only ordinary English. However, you can also approach it systematically. The statement of the problem involves four propositions. In the order that they appear they are:

$p$: John committed the murder.
$q$: John was in the victim's apartment.
$r$: John did not leave before $11$.
$s$: The doorman saw him.

Here I've given them symbolic names for brevity. Now translate the assertions in the three sentences into statements in propositional logic:

\begin{align*} &p\to(q\land r)\\ &q\\ \lnot &r\to\big(s\land\lnot(s\lor p)\big) \end{align*}

If you do a little 'algebraic' manipulation of the third line, you may be able to see right away whether the system of statements is consistent. If not, write out a complete truth table; since you have four basic propositions, your truth table will have $2^4=16$ lines. Is there any line in which all three statements are true? If so, they're consistent; if not, they're inconsistent. (Notice that you really only have to construct the half of the truth table in which $q$ is true, since you don't care about the other eight cases.)

Comments:

Khanak: that was great help! thanks.

Henning Makholm: My natural reading of the third English sentence would have "then" binding tighter than "but", so I would represent it as $(\neg r\to s)\land\neg(s\lor p)$.

Brian M. Scott: I thought about it, but if that were intended, I'd expect a comma before the "but": one is technically required there, so its absence ought to be significant. However, I will readily agree that the intended interpretation isn't really all that clear, and it's entirely possible that the author was insensitive to such subtleties. (You get a free pass, even if your English is excellent. :-))
Design Rules, Volume 2: How Technology Shapes Organizations series by Carliss Y. Baldwin
Building on Design Rules: The Power of Modularity, coauthored by HBS professor Carliss Y. Baldwin and Kim B. Clark (MIT Press, 2000).

The IBM PC
The IBM PC was the first computer platform to be open by choice and not because of financial constraints. Initially, this openness kept IBM competitive. But IBM's control over two strategic bottlenecks—standards embedded in the Basic Input Output System, and system integration and manufacturing of the computer itself—turned out to be weak.

Platform Systems vs. Step Processes—The Value of Options and the Power of Modularity
Technology shapes organizations through incentives and rewards. This paper compares the value structure of platform systems and step processes, finding that step processes reward technical integration, unified governance, risk aversion, and the use of direct authority. Platform systems by contrast reward modularity, distributed governance, risk taking, and autonomous decision-making.

The Value Structure of Technologies, Part 2: Technical and Strategic Bottlenecks as Guides for Action
This paper presents analytic tools to formulate strategy in large, evolving technical systems. It explains how value-enhancing technical change comes from the effective management of technical and strategic bottlenecks in conjunction with module boundaries and property rights. The analytic tools are used to explain the evolution of three historic technologies: early aircraft, machine tools, and container shipping.

Design Rules, Volume 2: How Technology Shapes Organizations: Chapter 14 Introducing Open Platforms and Business Ecosystems
Platform systems have existed in various forms for centuries. Beginning in the 1980s and 1990s, newly competitive technology of open platform systems based on digital technology and modular architectures changed the structure of entire industries.
This paper lays the groundwork for a comprehensive theoretical investigation of open platform systems.

Design Rules, Volume 2: How Technology Shapes Organizations: Chapter 5 Complementarity
Even as economics has theories about what assets and activities should be grouped together under common ownership and unified governance, in practice it sometimes makes sense to distribute complementary assets, skills, and activities across separate organizations. This paper investigates when and how this happens.

Design Rules, Volume 2: How Technology Shapes Organizations: Chapter 6 The Value Structure of Technologies, Part 1: Mapping Functional Relationships
Technology shapes organizations by influencing the search for value—something that someone perceives as a good—in an economy made up of free agents. To understand the organizations that will develop and implement particular technologies we must first understand the technologies' value structure, including three main issues that make it difficult to value technologies.

Designing an Agile Software Portfolio Architecture: The Impact of Coupling on Performance by Alan MacCormack, Robert Lagerström, Martin Mocker, and Carliss Y. Baldwin
This study deepens our understanding of how firms can better design software portfolio architectures to improve their agility. The authors examined data from over 1,000 different software applications and 3,000 dependencies between them. They found that indirect measures of coupling and dependency have more power in predicting IT agility than direct measures.

Explaining the Vertical-to-Horizontal Transition in the Computer Industry by Carliss Baldwin
This paper shows how the vertical-to-horizontal transition in the computer industry was an organizational response to a change in economic rewards brought by the competing technologies of rationalized step processes and open platform systems.
The spread of modular architectures—and the rapid pace of change in semiconductor technology—shifted the balance of rewards away from predictability toward flexibility.

Exploring the Relationship Between Architecture Coupling and Software Vulnerabilities: A Google Chrome Case by Robert Lagerström, Carliss Y. Baldwin, Alan MacCormack, Dan Sturtevant, and Lee Doolan
Managing software vulnerabilities is a top issue in today's society. By studying the Google Chrome codebase, the authors explore software metrics including architecture coupling measures in relation to software vulnerabilities. This paper adds new findings to research on software metrics and vulnerabilities, bringing the field closer to generalizable and conclusive results.

A Methodology for Operationalizing Enterprise Architecture and Evaluating Enterprise IT Flexibility by Alan MacCormack, Robert Lagerstrom & Carliss Y. Baldwin
When dealing with complex information system architectures, changes often propagate in unexpected ways, increasing the costs of adapting the system to future needs. In this paper the authors use data from a real firm and develop a robust network-based methodology by which to visualize and measure any firm's enterprise architecture. They also explore the dynamics of how different types of coupling influence the flexibility of enterprise architectures. They conclude with insights for practicing managers who must, for example, allocate resources and identify opportunities for system redesign.

Bottlenecks, Modules and Dynamic Architectural Capabilities
Large technical systems made up of many interoperable components are becoming more common every day. Many of these systems, like tablet computers, smartphones, and the Internet, are based on digital information technologies. Others, like the electrical grid, the financial payments system, and all modern factories, rely on digital technologies.
How do firms create and capture value in large technical systems? To answer this question, the author argues, it is first necessary to develop ways of describing such systems. One useful lens is architecture. Architectural capabilities are an important subset of dynamic capabilities that provide managers with the ability to see a complex technical system in an abstract way and change the system's structure by rearranging its components. Purposeful architectural change can then be used to create and capture value at different points in the technical system. Furthermore, value-enhancing architectural change arises through the effective management of bottlenecks and modules in conjunction with the firm's organizational boundaries and property rights. Key concepts include: Bottlenecks are points of value creation and capture in any complex man-made system. The architecture of a system defines its components, describes interfaces between components, and specifies ways of testing performance. The tools a firm can use to manage bottlenecks are 1) an understanding of the modular structure of the technical system and how it can be changed; and 2) an understanding of the contract structure of the firm, especially its organizational boundaries and property rights.

Visualizing and Measuring Software Portfolio Architectures: A Flexibility Analysis by Robert Lagerstrom, Carliss Y. Baldwin, Alan MacCormack & David Dreyfus
Contemporary business environments are constantly evolving, requiring continual changes to the software applications that support a business. Moreover, during recent decades, the sheer number of applications has grown significantly, and they have become increasingly interdependent. Many companies find that managing applications and implementing changes to their application portfolio architecture is increasingly difficult and expensive.
Firms need a way to visualize and analyze the modularity of their software portfolio architectures and the degree of coupling between components. In this paper, the authors test a method for visualizing and measuring software portfolio architectures using data of a biopharmaceutical firm's enterprise architecture. The authors also use the measures to predict the costs of architectural change. Findings show, first, that the biopharmaceutical firm's enterprise architecture can be classified as core-periphery. This means that 1) there is one cyclic group (the "Core") of components that is substantially larger than the second largest cyclic group, and 2) this group comprises a substantial portion of the entire architecture. In addition, the classification of applications in the architecture (as being in the Core or the Periphery) is significantly correlated with architectural flexibility. In this case the architecture has a propagation cost of 23 percent, meaning almost one-quarter of the system may be affected when a change is made to a randomly selected component. Overall, results suggest that the hidden structure method can reveal new facts about an enterprise architecture. This method can aid the analysis of change costs at the software application portfolio level. Key concepts include: This method for architectural visualization could provide valuable input when planning architectural change projects (in terms of, for example, risk analysis and resource planning). The method reveals a "hidden" core-periphery structure, uncovering new facts about the architecture that could not be gained from other visualization procedures or standard metrics. Compared to other measures of complexity, coupling, and modularity, this method considers not only the direct dependencies between components but also the indirect dependencies. These indirect dependencies provide important input for management decisions.
Modularity and Intellectual Property Protection by Carliss Y. Baldwin & Joachim Henkel
Modularity is a means of partitioning technical knowledge about a product or process. The authors investigate the impact of modularity on intellectual property protection by formally modeling the threat of expropriation by agents. The principal has three options to address this threat: doing nothing, licensing the focal IP ex ante, and paying agents to prevent their defection. The principal can influence the value of these options by modularizing the technical system and by hiring clans of agents, thus exploiting relationships among them. The paper also gives examples of how managers arrive at a strategy in practice. Overall, the study contributes to the theory of profiting from innovation in three ways: First, it shows how the innovator's best choice of action against expropriation by agents (doing nothing, licensing, or paying agents) derives from the characteristics of the system, i.e., the share of trustworthy agents, the number of agents, the intensity of competition, the size of clans, the number of modules, and the degree of complementarity. Second, the innovator can use clans and modularity to increase profits, and the paper shows how clans and the modular architecture of the system interact to either reinforce or mitigate each other. Third, social relationships and norms of fairness affect the normative implications of an analysis based on rational choice theory. Implications for managers are also discussed. Key concepts include: Modularity is a means of partitioning technical knowledge about a product or process. Modularity can be used to reduce the cost and/or risk of agents' expropriating valuable IP. The authors' model can be used to understand the effects of, for example, screening and signaling in the hiring process, legal protection of intellectual property, and social norms of fairness.
Managers' fundamental choices are (1) to protect the knowledge or not; and (2) to trust the agents or not. Relational contracts, that is, paying selected agents not to defect, make it possible to protect knowledge and maintain a monopoly when agents are relatively untrustworthy. Trusting one's agents, what the authors have called "doing nothing," is the most valuable course of action if it works, but is a risky strategy because trust can always be betrayed. Better screening and signaling technologies make it easier for the principal to trust his agents, but some residual risk always remains.

Sharing Design Rights: A Commons Approach for Developing Infrastructure by Nuno Gil & Carliss Y. Baldwin
Traditionally, a commons is a natural resource that gives rise to the problem of collective action: Individuals who act alone without consideration for others will arrive at outcomes that are bad for all. Pioneering research by Elinor Ostrom, a scholar of economic governance, has revealed that the claimants to a common pool resource are sometimes able to organize themselves to manage the commons on a day-to-day basis and to adapt to changing circumstances. In this paper, the authors study the dynamics of a commons organization: In 2006-2007, the Manchester City Council created a commons organization to design a number of new school buildings. The Council had broad decision rights over school design and construction, but rather than delegating those rights to its own staff or to a joint venture, as were the typical practices, the Council gave each school co-equal rights to approve the design so that no building project could go forward unless signed off by both the school and the Council staff. As such, the Council converted the decision-making process from a controlled, centralized style to a commons-based approach.
Using the principles of Ostrom's commons theory the authors show that, overall, the commons form of organizing brought with it concomitant risk. This risk, however, was significantly lessened through the creation of a robust commons organization. Key concepts include: This study uses design theory to explain why the design process for school buildings can be viewed as a common pool resource, and explain what constitutes "tragedy of the commons" in this context. Sensible actions in terms of defining boundaries, making benefits proportionate to costs, and deferring to local rule-making can increase the robustness of the commons and increase its chances of success. A design commons organization should be considered as a potentially advantageous alternative to other ways of organizing design production processes. However, a design commons organization might not necessarily be the best approach to resolve design production problems in all environments.

Visualizing and Measuring Enterprise Architecture: An Exploratory BioPharma Case by Robert Lagerstrom, Carliss Baldwin, Alan MacCormack & David Dreyfus
Achieving effective and efficient management of the software application landscape requires an ability to visualize and measure the current status of the enterprise architecture. To a large extent, this huge challenge can be addressed by introducing tools such as enterprise architecture modeling as a means of abstraction. In recent years, Enterprise Architecture (EA) has become an established discipline for business and software application management. Ideally, EA aids the stakeholders of the enterprise to effectively plan, design, document, and communicate IT and business related issues. Unfortunately, though, EA frameworks rarely explicitly state the kinds of analyses that can be performed given a certain model, nor do they provide details on how the analysis should be performed.
In this paper, the authors present and test a method based on Design Structure Matrices (DSMs) and classic coupling measures that could be effective in uncovering the hidden structure of an enterprise architecture. The authors perform such a test using data consisting of a total of 407 architecture components and 1,157 dependencies from a biopharmaceutical company (referred to as BioPharma). Findings suggest that this method can reveal new facts about architecture structure on an enterprise level, equal to past results in the initial cases of single software systems such as Linux, Mozilla, Apache, and GnuCash. Key concepts include: For BioPharma, the architectural visualization and computed coupling metrics can provide valuable input when planning architectural change projects (in terms of, for example, risk analysis and resource planning). Analysis shows that business components are Control elements, infrastructure components are Shared elements, and software applications are in the Core, thus providing verification that the architecture is sound. The hidden external structure of the architecture components at BioPharma can be classified as core-periphery with a propagation cost of 23%, architecture flow through of 67%, and core size of 32%.

Hidden Structure: Using Network Methods to Map System Architecture by Carliss Y. Baldwin, Alan MacCormack & John Rusnak
All complex systems can be described in terms of their architecture, that is, as a nested hierarchy of subsystems. Despite a wealth of research highlighting the importance of understanding system architecture, however, there is little empirical evidence on the actual architectural patterns observed across large numbers of real world systems.
In this paper, the authors developed robust and reliable methods to detect the core components in a complex system, to establish whether these systems possess a core-periphery structure, and to measure important elements of these structures. Overall, the findings represent a first step in establishing some stylized facts about the structure of real-world systems. Key concepts include: The majority of systems analyzed in this non-random sample—67 percent to 76 percent—possess a core-periphery structure. Another 20 percent are considered borderline core-periphery. However, a significant number of systems lack such a structure. This implies a considerable amount of managerial discretion exists when choosing the "best" architecture for a system. There are major differences in the number of core components across a range of systems of similar size and function, indicating that differences in design are not driven solely by system requirements. Instead, these differences appear to be driven, in part, by the characteristics of the organization in which development occurs. Open, distributed organizations tend to develop modular designs with smaller "Cores"; whereas closed, collocated organizations tend to develop tightly-coupled designs with larger Cores. The authors find that core components are often distributed throughout a system, rather than being concentrated in one place. Hence it is important not to assume that all key relationships in a system are located in a few subsystems. These issues are pertinent in software, given that legacy code is rarely re-written, but instead forms a platform upon which new systems are built. The authors find no discernible pattern of direct dependencies between components that can reliably predict the number and location of core components. The results highlight the critical importance of indirect dependencies, which generate multiple paths along which changes and problems can propagate. 
These findings highlight the difficulties facing a system architect.

Risky Business: The Impact of Property Rights on Investment and Revenue in the Film Industry by Venkat Kuppuswamy & Carliss Y. Baldwin
Films are a risky business because much more is known about the quality and revenue potential of a film post-production than pre-production. Using rich data on the US film industry, this paper explores variation in property right allocations, investment choices, and film revenues to find empirical support for three predictions based on property rights theory. (1) Studios underinvest in the marketing of independent films relative to studio-financed films. (2) Because of underinvestment, independent films have lower revenues than comparable studio-financed films. (3) If production cost and marketing investment are complementary, underinvestment in marketing harms large-budget films more than small-budget films, making it more likely that large-budget films will be studio-financed. Kuppuswamy and Baldwin's paper may be the first to provide evidence that vertical integration affects the revenue of specific products through its impact on marketing investments in those products. Key concepts include: Studio-financed films receive superior marketing investments compared to independent films. The US film industry has two distinct property rights regimes: studio-financed films are produced and distributed by studios which take in the lion's share of revenue. In contrast, independent films are distributed by studios under revenue sharing agreements, which give studios 30-40% of the revenue stream. Under either regime, the allocation of scarce marketing resources is determined by and paid for by the studio. Studio-financed films offer higher marginal returns to marketing investments than independent films.
Independent film distribution to theaters may be an institutional mechanism that allows studios to adapt to post-production information about the value of their own films vs. outside opportunities. This in turn justifies ex ante investment in the production of independent films (especially those with small budgets) despite their dampened revenue expectations.

IP Modularity: Profiting from Innovation by Aligning Product Architecture with Intellectual Property by Joachim Henkel, Carliss Y. Baldwin & Willy C. Shih
Firms increasingly practice open innovation, license technology out and in, outsource development and production, and enable users and downstream firms to innovate on their products. However, while such distributed value creation can boost the overall value created, it may create serious challenges for capturing value. This paper argues that in order to optimize value capture from a new product or process, an innovator must manage the artifact's intellectual property (IP) and its modular structure in conjunction. In other words, each module's IP status needs to be defined carefully and its boundaries must be placed accordingly. Fundamentally, IP modularity eliminates incompatibilities between IP rights in a given module, while permitting incompatibilities within the overall system. This in turn allows a firm to "have its cake and eat it too": It can reap the benefits of an open architecture while at the same time reducing the costs of opportunism on the part of suppliers, complementors, and employees. Key concepts include: By managing a system's modular structure in conjunction with its IP, firms can overcome intrinsic conflicts between distributed value creation on the one hand, and value appropriation on the other hand. The details of IP modularization must be determined by engineers and legal experts working together.
But beyond the technical and legal concerns, IP modularity affects a firm's strategies for value appropriation in increasingly complex and fragmented technological spaces. With own IP, IP modularity helps to reconcile distributed value creation with value capture, and to avoid IP "leakage" to suppliers and employees. With external IP, an IP-modular architecture reduces holdup risk and other transaction costs of licensing, and may allow a firm to establish control over external, originally open IP. Under conditions of uncertainty, "anticipatory" IP modularity creates option value: it allows a firm to better exploit upcoming opportunities for distributed value creation and to counter threats from inadvertent IP infringement. Beyond products and processes, the concept of IP modularity also extends to organizations. An example is "Chinese Walls," the virtual barriers between different organizational units that prevent information exchange between these units. These units constitute "IP modules" within the organization, and the Chinese Walls between them are module boundaries.

Five Ways to Make Your Company More Innovative by Garry Emmons, Julia Hanna & Roger Thompson
How do you create a company that unleashes and capitalizes on innovation? HBS faculty experts in culture, customers, creativity, marketing, and the DNA of innovators offer up ideas. From HBS Alumni Bulletin.
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
7,263
Sanoma Men's Magazines to license its car magazine GTO to Germany
Sanoma Group
In April 2005, the German publisher Delius Klasing Verlag will launch a licensed German edition of the glossy car magazine GTO in Germany, Switzerland and Austria. GTO is owned by the Dutch Sanoma Men's Magazines. This is the first foreign licensed publication of Sanoma Men's Magazines. GTO was launched at the AutoRAI two years ago as a special edition of the successful magazine AutoWeek. After four issues GTO had an average print run of 20,000 copies, and from 2004 onwards the magazine's frequency was raised to bi-monthly. Sanoma Men's Magazines is part of SanomaWSOY's magazines division Sanoma Magazines.
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
5,505
Okay, two in one today. Cause no rest for the wicked in October. So today we will be primarily reviewing Let The Right One In, but seeing as how a remake Let Me In was made we will compare notes and judge each movie fairly. Now I have to say I do like both films but like the true cinema snob I am, the original clearly is the better film (It's more foreign-y and less known.), centering around a young boy named Oskar who meets a relatively young girl named Eli, and the two blossom into a rather sweet relationship, despite the fact that Eli is a bloodsucking beast of the night but my God are they a cute couple. Needless to say the film doesn't shy away from the gore and horror, but it's a growing love story at heart and is at the front of the story, which can lead to a true horror buff finding something more in this film than others. It's beautifully shot, with snow filling almost every frame of film creating a serene scenery which really makes the blood pop. I can't really explain it but it's surprisingly a nice and quiet little movie that touches your heart and I'm always happy to come back and watch it again every now and then. An excellent film that grows beyond what one would think it is, a definite recommendation this month or when you just need some undead romance in your life.

Now let's compare notes. Let Me In pretty much 90% follows the original beat for beat. It doesn't change much and yet surprisingly retains its own identity, and can be seen as either a companion piece to see what they did, or to be watched on its own if foreign movies are not quite your thing. It's such an interesting case, you're watching a movie that is so similar to another movie but it has different touches to it. Hell the only other one I know that did that was surprisingly another vampire movie, Dracula, that's right the Bela Lugosi Dracula from 1931 had a Spanish cast and crew making the same movie after Lugosi and company wrapped up shooting. Same sets, same script, different actors and performances, so it wasn't even just Dracula dubbed in Spanish, and it's endlessly interesting because you're seeing the same movie in a new way and it's quite fun. The American actors do fine jobs, and the kids in both versions really deserve the credit cause they do awesome jobs. That's Chloe Grace Moretz as Abby and though I don't think I've seen many of her films she did really good. I feel the American version needed to up that gore and cuss words to get it an R rating and make it more appealing to Western audiences, whereas the Swedish version didn't need that much, though the blood was still there in good amounts. So whichever one you see, I think you will enjoy it and it can hit several people's preferences beyond just horror; the American version doesn't stray far from the source material (I'm talking the film not the book.) so I think it can work anyway. But my two big thumbs up goes straight to Let The Right One In, a fine beginning to October and a high bar to be beat in terms of quality this month. But for those of you who aren't looking for classy foreign films, join me next time for some good old trashy zombie action in Return Of The Living Dead.

Posted by What The Dude Says at 11:32 AM Labels: Horror, Kare Hedebrand, Let The Right One In, Lina Leandersson, Romance, Suspense, Tomas Alfredson
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
9,384
Our team offers many benefits and incentives to those we represent in their home purchase, such as complete representation: we assign a team member to every important aspect of your transaction. Programs offered include: Fly + Buy Program, Move On Over Program, Closing Cost Assistance, and more. Click here to learn more about our VIP Buyer Program.

THE 5 STEP HOME BUYING PROCESS. Learn more about the buying process. Each important step is broken down and explained to help you have a clear picture of what is involved. Click here to learn more about the Buying Process.

Use our Home Search to view active and available Single Family Homes, Condominiums, Duplexes, Rentals, and Investment Properties in Southwest Florida. Use tools such as: Favorite Properties, Favorite Searches, Quick Inquiries, Virtual Tours and more. Click here to start your Home Search.

This page will allow you to search by lifestyle... Gulf Access, Golf Homes, Pool Homes, Acreage, Equestrian, Gated Communities, etc. Click here to Search by Lifestyle.

Use our detailed Mortgage Calculator to determine how much you can comfortably spend on a home. Click here to use our Mortgage Calculator.

+ ARE YOU READY TO BUY? Use this handy guide to help you determine if you are ready to buy + explore tools that can help you plan for your big purchase. Learn more here.

+ FIRST TIME HOME BUYER? Read about some things you need to be aware of if you're new to this process... Learn more here.

+ OUT OF THE COUNTRY? Our team is prepared to help you buy property in Southwest Florida even if you are out of the country. Along with our trusted vendors, we are equipped with the tools and programs that make buying here a smooth and easy process. Learn more here.

A Collection of helpful documents to help you along your journey to home ownership. View + Download here.

For many, buying a new home is a huge purchase... our team does not take that fact lightly.
We are client-focused and understand the process, as well as what a home buyer may be experiencing throughout the process. Our main goal is to make the process a smooth + stress-free experience. Click here to Meet the Team + let us get to work for you!
{ "redpajama_set_name": "RedPajamaC4" }
7,447
Q: C++: display the records in a database that have the same date
I need to display the records that have a specific date. Please check void display_fromDate. I'm receiving the error "expected primary-expression before char", which points at case 2 in int main(). I'd appreciate your advice.

#include<iostream>
#include<fstream>
#include<iomanip>
using namespace std;

class patient
{
    int contactnum;
    char name[50];
    //float add;
    char bookingDate[15];
    char bookingTime[5];
    char trtType[15];
    //int add, bookingDate, bookingTime;
    //double per;
    //char grade;
    //void calculate();
public:
    void getdata();
    void showdata() const;
    void show_tabular() const;
    int getIDNum() const;
    char bDate() const;
};

/*void patient::calculate()
{
    per=(physics+chemistry+mathematics+english+comscience)/5.0;
    if(per>=90)
        grade='A+';
    else if(per>=80)
        grade='A';
    else if(per>=75)
        grade='A-';
    else if(per>=70)
        grade='B+';
    else if(per>=65)
        grade='B';
    else if(per>=60)
        grade='B-';
    else if(per>=55)
        grade='C+';
    else if(per>=50)
        grade='C';
    else
        grade='F';
}*/

void patient::getdata()
{
    cout<<"\nEnter The Contact Number of the Patient ";
    cin>>contactnum;
    cout<<"\n\nEnter Patient's Name: ";
    cin.ignore();
    cin.getline(name,50);
    cout<<"\nEnter Booking Date: ";
    cin>>bookingDate;
    cout<<"\nEnter Booking Time: ";
    cin.ignore();
    cin>>bookingTime;
    cout<<"\nEnter Treatment Type: ";
    cin>>trtType;
    //calculate();
}

void patient::showdata() const
{
    cout<<"\nContact Number: "<<contactnum;
    cout<<"\nName: "<<name;
    cout<<"\nBooking Date: "<<bookingDate;
    cout<<"\nBooking Time: "<<bookingTime;
    cout<<"\nTreatment Type: "<<trtType;
}

void patient::show_tabular() const
{
    cout<<contactnum<<setw(6)<<" "<<name<<setw(4)<<bookingDate<<setw(4)<<bookingTime<<setw(4)<<trtType<<endl;
}

int patient::getIDNum() const
{
    return contactnum;
}

void Savepatient();
void displayAll();
void Searchdisplay(int);
void modifypatient(int);
void deletepatient(int);
void DisplayClassResult();
void DisplayResult();

void write_patient()
{
    patient st;
    ofstream outFile;
    outFile.open("patient.dat",ios::binary|ios::app);
    st.getdata();
    outFile.write(reinterpret_cast<char *> (&st), sizeof(patient));
    outFile.close();
    cout<<"\n\nPatient record has been Created ";
    cin.ignore();
    cin.get();
}

/*void display_all()
{
    patient st;
    ifstream inFile;
    inFile.open("patient.dat",ios::binary);
    if(!inFile)
    {
        cout<<"File could not be open !! Press any Key...";
        cin.ignore();
        cin.get();
        return;
    }
    cout<<"\n\n\n\t\tDISPLAY ALL RECORD !!!\n\n";
    while(inFile.read(reinterpret_cast<char *> (&st), sizeof(patient)))
    {
        st.showdata();
        cout<<"\n\n====================================\n";
    }
    inFile.close();
    cin.ignore();
    cin.get();
}*/

void display_fromDate(char n)
{
    patient st;
    ifstream inFile;
    inFile.open("patient.dat",ios::binary);
    if(!inFile)
    {
        cout<<"File could not be open !! Press any Key...";
        cin.ignore();
        cin.get();
        return;
    }
    bool flag=false;
    while(inFile.read(reinterpret_cast<char *> (&st), sizeof(patient)))
    {
        if(st.bDate()==n)
        {
            st.showdata();
            flag=true;
        }
    }
    inFile.close();
    if(flag==false)
        cout<<"\n\nrecord not exist...";
    cin.ignore();
    cin.get();
}

void display_sp(int n)
{
    patient st;
    ifstream inFile;
    inFile.open("patient.dat",ios::binary);
    if(!inFile)
    {
        cout<<"File could not be open !! Press any Key...";
        cin.ignore();
        cin.get();
        return;
    }
    bool flag=false;
    while(inFile.read(reinterpret_cast<char *> (&st), sizeof(patient)))
    {
        if(st.getIDNum()==n)
        {
            st.showdata();
            flag=true;
        }
    }
    inFile.close();
    if(flag==false)
        cout<<"\n\nrecord not exist...";
    cin.ignore();
    cin.get();
}

void modify_patient(int n)
{
    bool found=false;
    patient st;
    fstream File;
    File.open("patient.dat",ios::binary|ios::in|ios::out);
    if(!File)
    {
        cout<<"File could not be open !! Press any Key...";
        cin.ignore();
        cin.get();
        return;
    }
    while(!File.eof() && found==false)
    {
        File.read(reinterpret_cast<char *> (&st), sizeof(patient));
        if(st.getIDNum()==n)
        {
            st.showdata();
            cout<<"\n\nPlease Enter The New Details of Patient"<<endl;
            st.getdata();
            int pos=(-1)*static_cast<int>(sizeof(st));
            File.seekp(pos,ios::cur);
            File.write(reinterpret_cast<char *> (&st), sizeof(patient));
            cout<<"\n\n\t Record Updated";
            found=true;
        }
    }
    File.close();
    if(found==false)
        cout<<"\n\n Record Not Found ";
    cin.ignore();
    cin.get();
}

void delete_patient(int n)
{
    patient st;
    ifstream inFile;
    inFile.open("patient.dat",ios::binary);
    if(!inFile)
    {
        cout<<"File could not be open !! Press any Key...";
        cin.ignore();
        cin.get();
        return;
    }
    ofstream outFile;
    outFile.open("Temp.dat",ios::out);
    inFile.seekg(0,ios::beg);
    while(inFile.read(reinterpret_cast<char *> (&st), sizeof(patient)))
    {
        if(st.getIDNum()!=n)
        {
            outFile.write(reinterpret_cast<char *> (&st), sizeof(patient));
        }
    }
    outFile.close();
    inFile.close();
    remove("patient.dat");
    rename("Temp.dat","patient.dat");
    cout<<"\n\n\tRecord Deleted ..";
    cin.ignore();
    cin.get();
}

void class_result()
{
    patient st;
    ifstream inFile;
    inFile.open("patient.dat",ios::binary);
    if(!inFile)
    {
        cout<<"File could not be open !! Press any Key...";
        cin.ignore();
        cin.get();
        return;
    }
    cout<<"\n\n\t\tALL PATIENTS BOOKING DETAILS\n\n";
    cout<<"==============================================================\n";
    cout<<"Mobile.No Name Booking Date Booking Time"<<endl;
    cout<<"==============================================================\n";
    while(inFile.read(reinterpret_cast<char *> (&st), sizeof(patient)))
    {
        st.show_tabular();
    }
    cin.ignore();
    cin.get();
    inFile.close();
}

int main()
{
    char ch;
    int num;
    cout.setf(ios::fixed|ios::showpoint);
    cout<<setprecision(2);
    do
    {
        system("cls");
        cout<<"\t@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@";
        cout<<"\n\n\t1.CREATE PATIENT RECORD";
        cout<<"\n\n\t2.DISPLAY ALL PATIENTS RECORDS";
        cout<<"\n\n\t3.SEARCH PATIENT RECORD ";
        cout<<"\n\n\t4.MODIFY PATIENT RECORD";
        cout<<"\n\n\t5.DELETE PATIENT RECORD";
        cout<<"\n\n\t6.DISPLAY CLASS RESULT";
        cout<<"\n\n\t7.EXIT";
        cout<<"\n\n\t@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@";
        cout<<"\n\n\tPlease Enter Your Choice (1-7): ";
        cin>>ch;
        system("cls");
        switch(ch)
        {
        case '1': write_patient(); break;
        case '2': display_fromDate(char n); break;
        case '3': cout<<"\n\n\tPlease Enter Patient's Contact Number: "; cin>>num; display_sp(num); break;
        case '4': cout<<"\n\n\tPlease Enter Patient's Contact Number: "; cin>>num; modify_patient(num); break;
        case '5': cout<<"\n\n\tPlease Enter Patient's Contact Number: "; cin>>num; delete_patient(num); break;
        case '6': class_result(); break;
        case '7': exit(0);;
        default: cout<<"\a";
        }
    } while(ch!='7');
    return 0;
}

A: This may not solve your issue, but it can help. (To downvoters: this is too big to fit in a comment. Also, please leave a comment when downvoting.)
Some foundation:

struct Time
{
    unsigned int hours;
    unsigned int minutes;
    unsigned int seconds;
};

struct Date
{
    unsigned int day;
    unsigned int month;
    unsigned int year;
};

With the time and date structures established, the patient structure now looks like:

class Patient
{
public:
    unsigned int contact_num;
    std::string  name;
    std::string  trtype;
    Date         booking_date;
    Time         booking_time;
};

To make the date comparable, you could add a few methods to Date (note the const qualifiers, so they can be called through const references):

bool operator==(const Date& d) const
{
    return (year == d.year) && (month == d.month) && (day == d.day);
}

bool operator< (const Date& d) const
{
    if (year == d.year)
    {
        if (month == d.month)
        {
            return day < d.day;
        }
        else
        {
            return month < d.month;
        }
    }
    return year < d.year;
}

With that you can write some ordering (comparison) functions for your Patients:

bool Order_By_Date(const Patient& p1, const Patient& p2)
{
    return p1.booking_date < p2.booking_date;
}

If you had a vector of Patient, you could order (sort) them by booking date by using (this needs <vector> and <algorithm>):

std::vector<Patient> database;
//...
std::sort(database.begin(), database.end(), Order_By_Date);

The concepts can apply equally to the Time structure and the booking time. Advanced sorting functions, such as by name then by date, are left as an exercise for the OP.

Edit 1: Searching or finding by date
The STL search algorithms can be used with an ordering or comparison function, similar to the std::sort function above. To find a patient by date, use std::find_if with a predicate:

static Date search_key = { 22, 1, 2016 };  // day, month, year

bool Find_By_Date(const Patient& p)
{
    return p.booking_date == search_key;
}

std::vector<Patient>::iterator iter =
    std::find_if(database.begin(), database.end(), Find_By_Date);
if (iter != database.end())
{
    cout << "Patient found: " << iter->name << "\n";
}

I recommend reviewing other std search functions, such as lower_bound. If none of those are adequate, you can always write your own.
{ "redpajama_set_name": "RedPajamaStackExchange" }
8,689
Receive a GIANT 16oz ceramic Ice Cream Sundae 15k/5k mug! Great for coffee, tea, or any other beverages….. But best for ICE CREAM! Build your own sundae at the finish line with the toppings you want from the sundae bar. Kick back and enjoy your triumph and celebrate the day! Ice cream provided by Guernsey Farms Dairy! Best Ice Cream in Michigan!
{ "redpajama_set_name": "RedPajamaC4" }
4,065
{"url":"https:\/\/www.semanticscholar.org\/paper\/Disjoint-Triangles-of-a-Cubic-Line-Graph-Zhang-Bylka\/c68a19bc3422f02191d7dbee961d3d4f13dc694b?p2df","text":"# Disjoint Triangles of a Cubic Line Graph\n\n@article{Zhang2004DisjointTO,\ntitle={Disjoint Triangles of a Cubic Line Graph},\nauthor={Xiaodong Zhang and Stanislaw Bylka},\njournal={Graphs and Combinatorics},\nyear={2004},\nvolume={20},\npages={275-280}\n}\n\u2022 Published 1 June 2004\n\u2022 Mathematics\n\u2022 Graphs and Combinatorics\nAbstract.In this paper, we prove that a cubic line graph G on n vertices rather than the complete graph K4 has vertex-disjoint triangles and the vertex independence number . Moreover, the equitable chromatic number, acyclic chromatic number and bipartite density of G are respectively.\n4 Citations\n\n### Planarization and Acyclic Colorings of Subcubic Claw-Free Graphs\n\n\u2022 Mathematics\nWG\n\u2022 2011\nThe largest known subclass of subcubic graphs such that an optimal acyclic vertex coloring can be found in polynomial-time is presented, and it is shown that this bound is tight by proving that the problem is NP-hard for cubic line graphs (and therefore, claw-free graphs) of maximum degree d\u22654.\n\n### A Complexity Picture for H-Free Graphs\n\n\u2022 Mathematics\n\u2022 2021\n18 A k-colouring c of a graph G is a mapping V (G)\u2192 {1, 2, . . . k} such that c(u) 6= c(v) whenever u 19 and v are adjacent. The corresponding decision problem is Colouring. 
A colouring is acyclic,\n\n### Acyclic, Star and Injective Colouring: A Complexity Picture for H-Free Graphs\n\n\u2022 Mathematics\nESA\n\u2022 2020\nThis study gives almost complete classifications for the computational complexity of Acyclic Colouring, Star Colouring and Injective Colouring for H-free graphs and concludes that for fixed k the three problems behave in the same way, but this is no longer true if k is part of the input.\n\n### Counting Edge-injective Homomorphisms and Matchings on Restricted Graph Classes\n\n\u2022 Mathematics\nTheory of Computing Systems\n\u2022 2018\nIt is shown that edge-injective homomorphisms from a pattern graph \u00a3H$can be counted in polynomial time if$H$has bounded vertex-cover number after removing isolated edges and if the graphs in$\\mathcal{H}\\$ have unbounded vertex- cover number even after deleting isolated edges.\n\n## References\n\nSHOWING 1-10 OF 15 REFERENCES\n\n### Extremal bipartite subgraphs of cubic triangle-free graphs\n\n\u2022 Mathematics\nJ. Graph Theory\n\u2022 1982\nA cubic triangle-free graph has a bipartite subgraph with at least 4\/5 of the original edges with a best possible result.\n\n### Graph Coloring Problems\n\n\u2022 Mathematics\n\u2022 1994\nPlanar Graphs. Graphs on Higher Surfaces. Degrees. Critical Graphs. The Conjectures of Hadwiger and Hajos. Sparse Graphs. Perfect Graphs. Geometric and Combinatorial Graphs. Algorithms.\n\n### A note on cycles in 2-factors of line graphs\n\n\u2022 Mathematics\n\u2022 1999\nWe provide a generalization to the well-known result of Harary and Nash-Williams characterizing graphs with Hamiltonian line graphs. Our generalization allows us to characterize those graphs whose\n\n### Nonhamiltonian 3-Connected Cubic Planar Graphs\n\n\u2022 Mathematics\nSIAM J. Discret. 
Math.\n\u2022 2000\nIt is established that every cyclically 4-connected cubic planar graph of order at most 40 is hamiltonian and this bound is determined to be sharp, and all nonhamiltonian examples of order 42 are presented.\n\n### Largest bipartite subgraphs in triangle-free graphs with maximum degree three\n\n\u2022 Mathematics\n\u2022 1986\nOn presente un algorithme polynomial permettant de determiner un sous-graphe biparti d'un graphe G sans triangle ni boucle de degre maximum 3, contenant au moins 4\/5 des aretes de G. On caracterise","date":"2022-10-03 01:27:10","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.703075110912323, \"perplexity\": 3505.062923200657}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2022-40\/segments\/1664030337371.9\/warc\/CC-MAIN-20221003003804-20221003033804-00414.warc.gz\"}"}
null
null
LEVEL = ../../../.. PROG = deriv2 LDFLAGS = -lstdc++ include $(LEVEL)/MultiSource/Makefile.multisrc
{ "redpajama_set_name": "RedPajamaGithub" }
8,619
Q: Why is "the `async` keyword missing" when using Tokio and Rocket?
When compiling my Rust project, I get this error:

error: the `async` keyword is missing from the function declaration
 --> src/main.rs:48:7
  |
48 | async fn main() {
  |       ^^

My main() looks like:

#[rocket::main]
#[tokio::main]
async fn main() {
    tokio::spawn(refresh_channel_rep());
    tokio::spawn(refresh_channel_article_count());
    tokio::spawn(remove_low_quality_articles());
    tokio::spawn(calculate_article_trend());

    let launch_result = create_server().launch().await;
    match launch_result {
        Ok(_) => println!("Rocket shut down gracefully."),
        Err(err) => println!("Rocket had an error: {}", err),
    };
}

Why did this happen? What should I do to fix this problem?

A: You shouldn't use both #[rocket::main] and #[tokio::main], since the first does strictly more than the second - #[tokio::main] just starts the runtime, while #[rocket::main] does the same and also configures it according to the Rocket configuration. Use #[rocket::main] only, and everything should work.
{ "redpajama_set_name": "RedPajamaStackExchange" }
8,773
Q: Raffle thing not working
Coding problem for a raffle won't work!

var i = 0;
var count;
var names = [
    "Stefon",
    "Garret",
    "Brandon"
];

function GetRandomInt(){
    return Math.floor(Math.random()*i+1);
}

function CallWinner(){
    var ID = GetRandomInt();
    document.write("<hr>"+names[ID]+" has won with the ID of "+id+"!");
}

do {
    i++;
    for(count=0;i<=names.length;){
        count++;
        document.write(names[count]+" has been assigned to the raffle ID, "+count+"<br>");
    }
} while (i<=names.length);

For some reason this isn't working; it acts like an infinite loop, or maybe it crashes the tab. It runs but then the tab crashes. Please help.
{ "redpajama_set_name": "RedPajamaStackExchange" }
6,584
Burg 28 is the address of the following buildings: Burg 28 in Burghausen, see Liste der Baudenkmäler in Burghausen Burg 28 (Ennepetal) In der Burg 28 in Friedberg, see Liste der Kulturdenkmäler in Friedberg (Hessen)
{ "redpajama_set_name": "RedPajamaWikipedia" }
9,927
\section{Introduction}

Estimating the smallest singular value $\sigma_{\min}$ of a matrix is difficult. Dense SVD algorithms can approximate $\sigma_{\min}$ well and their running time is predictable, but they are also slow. Furthermore, dense SVD algorithms require space that is proportional to $mn$ when the matrix is $m$-by-$n$, which is impractical for large sparse matrices.

Symmetrization is not an effective way to address the problem. If we work with the Gram matrix $A^{*}A$, we cannot estimate condition numbers $\kappa(A)=\sigma_{\max}/\sigma_{\min}$ greater than $1/\sqrt{\epsilon_{\text{machine}}}$, where $\epsilon_{\text{machine}}$ is the unit roundoff (machine precision). If we work with the augmented matrix
\[
\begin{bmatrix}0 & A^{*}\\
A & 0
\end{bmatrix}\;,
\]
$\sigma_{\min}(A)$ is transformed into a pair of eigenvalues $\pm\sigma_{\min}$ in the middle of the spectrum. Such eigenvalues are difficult to compute accurately with Lanczos and its variants\footnote{For example, the ARPACK User's Guide states that a shift-invert iteration is usually required to compute eigenvalues in the interior of the spectrum~\cite[Section 3.4]{ARPACK-UG}. Experiments with ARPACK on some of the matrices presented later in the paper, whose condition number our method was able to estimate, showed that ARPACK does not converge on them when it tries to compute the smallest-magnitude eigenvalues of the augmented matrix without inversion.}, and it is essentially impossible for such algorithms to determine that there is no eigenvalue closer to zero than the one that has already been computed.

This paper describes a Krylov-subspace method that can estimate $\sigma_{\min}$ and hence the spectral condition number $\sigma_{\max}/\sigma_{\min}$ accurately. Our method is reliable in the sense that it does not incorrectly report significant overestimates of $\sigma_{\min}$ as accurate (at least when $A$ is not close to being rank deficient).
Our method is also frugal, in the sense that it requires very little memory.

Our method also has some flaws. The main one is that it sometimes converges very slowly, making it essentially impossible to compute $\sigma_{\min}$. Our experience shows that the method always converges eventually, but that convergence might be too slow to be of practical use. There is no good way to determine how close the method is to termination, although it tends to behave consistently on related matrices (e.g. from the same application area). When $\kappa(A)$ is close to $\epsilon_{\text{machine}}^{-1}$, the method sometimes overestimates $\sigma_{\min}$ by several orders of magnitude, but it still returns a small estimate (smaller than $\sigma_{\max}\times10^{-11}$ in our experience, with $\epsilon_{\text{machine}}\approx10^{-16}$). Even with these flaws, to the best of our knowledge this method is the only practical way to compute $\sigma_{\min}$ with reasonable accuracy (to within a factor of $2$ or better) on many large matrices.

The key idea of our method is to apply LSQR, a Krylov-subspace least-squares solver, to minimize $\|Ax-b\|$ (all norms in this paper denote the $2$-norm unless stated otherwise) when we already have an $x^{\star}$ that satisfies $Ax^{\star}=b$. Thanks to having $x^{\star}$, we can compute the forward error, which allows us to exploit the tendency of LSQR to concentrate the forward error in the direction of a singular vector associated with $\sigma_{\min}$. This is often seen as a flaw in LSQR and in Conjugate Gradients (LSQR is mathematically equivalent to Conjugate Gradients applied to $A^{T}Ax=A^{T}b$). These solvers converge slowly when the coefficient matrix $A$ is ill conditioned because it is difficult for them to get rid of the error in the small subspaces of $A$. Our method exploits this flaw, which acts as a sieve that captures a vector from this subspace. The method sometimes converges very rapidly and sometimes very slowly.
This is not related to the size of the problem and not to how ill conditioned it is, but to the distribution of singular values. The rest of this paper is organized as follows. Section~\ref{sec:Related-Work} surveys related work on condition number estimation. Section~\ref{sec:The-Algorithm} describes the core of our algorithm. Section~\ref{sec:Rationale} explains how it works using mostly visualizations of numerical experiments that display the singular-vector concentration effect. Section~\ref{sec:analysis-small-err} analyzes the unique stopping criteria that our method relies upon. The bulk of our experimental evidence is summarized in Section~\ref{sec:Additional-Experiments}. \section{\label{sec:Related-Work}Related Work} The large singular value $\sigma_{\max}$ of $A$ can be computed accurately using a bounded number of matrix-vector multiplications involving $A$ and $A^{*}$. This can be done using the power method, for example, whose analysis for this application we explain below. The Lanczos method can reduce the number of matrix-vector products even further~\cite{Kuczynsky92}. Random projection methods can also estimate $\sigma_{\max}(A)$~\cite{HalkoMartinssonTropp11}. Estimating $\sigma_{\min}$ is computationally more challenging, because applying the pseudo-inverse is usually much harder than applying $A$ itself. In general, existing random projection methods cannot efficiently estimate $\sigma_{\min}(A)$ unless a decomposition of $A$ is computed, or $A$ is low rank (or numerically low rank). If $A$ is low rank, random projection methods can be used to estimate $\sigma_{k}(A)$, where $k$ is the (numerical) rank~\cite{HalkoMartinssonTropp11}. The LINPACK condition-number estimator requires a triangular factorization of $A$ (see Higham's monograph~\cite[Chapter 15]{Higham02} for details on this and related estimators). 
The Gotsman--Toledo~\cite{GostmanToledo2008} and the Bischof et al.~\cite{Bischof90} condition-number estimators, which are specialized to sparse matrices, also require a triangular factorization. Estimators that require a triangular factorization are less expensive than the SVD, but they still cannot be applied to huge matrices. The LAPACK condition-number estimator relies on repeated applications of the pseudo-inverses of $A$ and $A^{*}$~\cite{Higham88}. One way to apply them is using a factorization, but they can also be applied using an iterative solver. With an effective preconditioner, repeated applications of the pseudo-inverse may be less expensive than the method that we propose, but without one our method is less expensive. Kenney et al.~\cite{Kenney98} describe a way to estimate the condition number of a square matrix using a single application of the inverse to one or several vectors.

The spectral condition number measures the norm-wise sensitivity of matrix-vector products and linear systems to small perturbations in the inputs. There are methods that estimate more focused metrics, such as the sensitivity of individual components of the inputs or output~\cite{Kenney98}. Our method does not address this problem.

\section{\label{sec:The-Algorithm}The Algorithm}

\begin{algorithm}
\begin{algorithmic}[1]
{\small
\STATE \textbf{Input: $A\in\mathbb{R}^{m\times n}$}
\STATE \textbf{Parameters and defaults: $c_{1}$} ($8\epsilon_{\text{machine}}$), $c_{2}$ ($10^{-3}$), $c_{3}$ ($64/\epsilon_{\text{machine}}$), $c_{4}$ ($\sqrt{\epsilon_{\text{machine}}}$) and $c_{1}^{\prime}$ ($4\epsilon_{\text{machine}}$)
\STATE
\STATE Estimate $\hat{\sigma}_{\max}=\sigma_{\max}(A)$, along with a certificate $\hat{v}_{\max}$, using power iteration.
\STATE $\hat{\sigma}_{\min}=\hat{\sigma}_{\max}$, $\hat{v}_{\min}=\hat{v}_{\max}$
\STATE Draw a random vector $\hat{x}\in\mathbb{R}^{n}$ with independent normal entries
\STATE $\tau\gets\erf^{-1}(c_{2})/\Vert\hat{x}\Vert$
\STATE $x^{\star}\gets\hat{x}/\Vert\hat{x}\Vert$
\STATE $b\gets Ax^{\star}$
\STATE $\beta^{(0)}\gets\Vert b\Vert$, $u^{(0)}\gets b/\beta^{(0)}$
\STATE $v^{(0)}\gets A^{*}u^{(0)}$, $\alpha^{(0)}\gets\Vert v^{(0)}\Vert$, $v^{(0)}\gets v^{(0)}/\alpha^{(0)}$
\STATE $w^{(0)}\gets v^{(0)}$
\STATE $x^{(0)}\gets0_{n\times1}$
\STATE $\bar{\phi}^{(0)}\gets\beta^{(0)}$, $\bar{\rho}^{(0)}\gets\alpha^{(0)}$
\STATE $T\gets\infty$\qquad{} \COMMENT{Value of $T$ is set later.}
\FOR{$t=1,\dots,T$}
\STATE $u^{(t)}\gets Av^{(t-1)}-\alpha^{(t-1)}u^{(t-1)}$
\STATE $\beta^{(t)}\gets\Vert u^{(t)}\Vert$
\STATE $u^{(t)}\gets u^{(t)}/\beta^{(t)}$
\STATE $v^{(t)}\gets A^{*}u^{(t)}-\beta^{(t)}v^{(t-1)}$
\STATE $\alpha^{(t)}\gets\Vert v^{(t)}\Vert$
\STATE $v^{(t)}\gets v^{(t)}/\alpha^{(t)}$
\STATE $\rho^{(t)}\gets\left\Vert \left(\begin{array}{cc} \bar{\rho}^{(t-1)} & \beta^{(t)}\end{array}\right)\right\Vert $
\STATE $c^{(t)}\gets\bar{\rho}^{(t-1)}/\rho^{(t)}$, $s^{(t)}\gets\beta^{(t)}/\rho^{(t)}$
\STATE $\theta^{(t)}\gets s^{(t)}\alpha^{(t)}$, $\bar{\rho}^{(t)}\gets-c^{(t)}\alpha^{(t)}$
\STATE $\phi^{(t)}\gets c^{(t)}\bar{\phi}^{(t-1)}$, $\bar{\phi}^{(t)}\gets s^{(t)}\bar{\phi}^{(t-1)}$
\STATE $x^{(t)}\gets x^{(t-1)}+(\phi^{(t)}/\rho^{(t)})w^{(t-1)}$
\STATE $w^{(t)}\gets v^{(t)}-(\theta^{(t)}/\rho^{(t)})w^{(t-1)}$
\STATE $R_{tt}^{(t)}\gets\rho^{(t)}$\qquad{} \COMMENT{Only diagonal and superdiagonal of $R$ are kept in memory}
\STATE \textbf{if $t>1$ set $R_{t-1,t}^{(t)}\gets\theta^{(t-1)}$}
\STATE $d^{(t)}\gets x^{\star}-x^{(t)}$
\STATE \textbf{if }$d^{(t)}=0$ \textbf{set $\hat{\sigma}_{\min}\gets\hat{\sigma}_{\max}$, $\hat{v}_{\min}\gets\hat{v}_{\max}$ and break for}\\
\qquad{} \COMMENT{For matrices with $\kappa \neq 1$ the probability of getting $d^{(t)}=0$ is $0$}.
\IF{$\Vert Ad^{(t)}\Vert \leq \hat{\sigma}_{\min} \Vert d^{(t)}\Vert$}
\STATE $\hat{\sigma}_{\min}\gets\Vert Ad^{(t)}\Vert/\Vert d^{(t)}\Vert$, $\hat{v}_{\min}\gets d^{(t)}$
\ENDIF
\STATE \textbf{if $\hat{\sigma}_{\max}/\hat{\sigma}_{\min}\geq c_{4}^{-1}$ then }$c_{1}\gets c_{1}^{\prime}$
\IF{not converged ($T=\infty$) and ($\frac{\Vert Ad^{(t)}\Vert}{\hat{\sigma}_{\max}\Vert x^{(t)}\Vert+\Vert b\Vert}\leq c_{1}$ \textbf{or }$\Vert d^{(t)}\Vert\leq\tau$ \textbf{or $\hat{\sigma}_{\max}/\hat{\sigma}_{\min}\geq c_{3}$})}
\STATE \textbf{$T\gets\left\lceil 1.25t\right\rceil $}
\ENDIF
\ENDFOR
\STATE Estimate $\tilde{\sigma}_{\min}=\sigma_{\min}(R^{(T)})$, using inverse power iteration
\STATE $\tilde{\sigma}_{\min}\gets\min(\tilde{\sigma}_{\min},\hat{\sigma}_{\min})$
\STATE \RETURN $\hat{\sigma}_{\max}$, $\hat{\sigma}_{\min}$, $\hat{v}_{\max}$ and $\hat{v}_{\min}$, $\tilde{\sigma}_{\min}$
}
\end{algorithmic}
\caption{\label{alg:the-alg}The condition number estimation algorithm.}
\end{algorithm}

This section describes our algorithm for estimating the condition number of $A$. A detailed pseudo-code description appears in Algorithm~\ref{alg:the-alg}.

The algorithm starts by estimating $\sigma_{\max}(A)$ and a corresponding certificate vector using power iteration on $A^{*}A$. We perform enough iterations to estimate $\sigma_{\max}$ to within 10\% with probability at least $1-10^{-12}$. Using a bound due to Klein and Lu~\cite[Section 4.4]{KleinLu1996}\footnote{Note that the statement of Lemma~6 in~\cite{KleinLu1996} is incorrect; the proof shows the correct bound. Also, the discussion that follows the proof of the lemma repeats the error in the statement of the lemma.}, we find that given a relative error parameter $\epsilon$ and a failure probability parameter $\delta$, if we perform
\[
\left\lceil \frac{1}{\epsilon}\left(\ln\left((2n)^{2}\right)+\ln\left(\frac{1}{\epsilon\delta^{2}}\right)\right)\right\rceil
\]
iterations, the relative error in our approximation is less than $\epsilon$ with probability at least $1-\delta$. For the parameters $\epsilon=10^{-1}$ and $\delta=10^{-12}$, 1004 iterations suffice even for matrices with up to $10^{9}$ columns. For $\epsilon=1/3$ and $\delta=10^{-12}$, only 298 iterations suffice for matrices with up to $10^{9}$ columns. (The accuracy of the $\sigma_{\max}$ estimate in the power method is typically much higher than predicted by this bound, but the additional accuracy depends on the gap between the largest and second-largest singular values; the bound that we use makes no assumption on the gap.)

The main phase of the algorithm uses a slightly-enhanced LSQR iteration~\cite{LSQR} to estimate $\sigma_{\min}$ and to produce a corresponding certificate vector. The algorithm first generates a uniformly-distributed random vector $x^{\star}$ on the unit sphere by generating a vector $\hat{x}$ with normally-distributed independent random components and setting $x^{\star}=\hat{x}/\|\hat{x}\|$. The algorithm multiplies $x^{\star}$ by $A$ to produce a consistent right-hand side $b=Ax^{\star}$. Now the algorithm runs LSQR on this $b$, using Paige and Saunders's original formulation~\cite[pages 50--51]{LSQR}. LSQR minimizes $\|Ax-b\|$ iteratively using a Lanczos-type bidiagonalization procedure. It is mathematically equivalent to solving the normal equations $A^{*}Ax=A^{*}b$ using the Conjugate Gradients algorithm, but it behaves much better numerically. Our algorithm adds a few steps to each LSQR iteration.
At the end of each (standard) LSQR iteration, we have an updated approximate solution $x^{(t)}$ and an estimate of $\|r^{(t)}\|=\|Ax^{(t)}-b\|$, denoted by $\bar{\phi}^{(t)}$. This estimate of $\|r^{(t)}\|$ is mathematically correct but the equality of $\bar{\phi}^{(t)}$ and $\|r^{(t)}\|$ depends on the orthogonality of the Lanczos vectors, which lose orthogonality in floating point arithmetic as the algorithm progresses. Our algorithm also computes $d^{(t)}=x^{\star}-x^{(t)}$ and $\|d^{(t)}\|$. We have \begin{eqnarray*} \left\Vert Ad^{(t)}\right\Vert & = & \left\Vert A\left(x^{\star}-x^{(t)}\right)\right\Vert \\ & = & \left\Vert b-Ax^{(t)}\right\Vert \\ & = & \left\Vert r^{(t)}\right\Vert \;, \end{eqnarray*} which in exact arithmetic equals $\bar{\phi}^{(t)}$, but to improve the robustness of the algorithm we compute $\|Ad^{(t)}\|$ explicitly. (In our numerical experiments we have found $\bar{\phi}^{(t)}$ to be an accurate estimate, but we prefer to avoid any reliance on the orthogonality of the Lanczos vectors in our algorithm.) We also compute $\|x^{(t)}\|$. Next, the algorithm computes the ratio $\|Ad^{(t)}\|/\|d^{(t)}\|,$ which like any Rayleigh quotient is an upper bound on $\sigma_{\min}$. If this ratio is the smallest we have seen so far, the algorithm treats it as an estimate of $\sigma_{\min}$ and stores both the ratio and the certificate $d^{(t)}$. When the algorithm terminates, it outputs the best ratio it has found and the corresponding certificate. We use three stopping criteria. The first stopping criterion is the one used by the standard LSQR algorithm~\cite{LSQR} for consistent systems: \begin{equation} \frac{\left\Vert r^{(t)}\right\Vert }{\hat{\sigma}_{\max}\left\Vert x^{(t)}\right\Vert +\left\Vert b\right\Vert }\leq c_{1}\,,\label{eq:stop1} \end{equation} where $\hat{\sigma}_{\max}$ is our estimate of $\Vert A\Vert$ and $c_{1}$ is a parameter that is set by default to $8\epsilon_{\text{machine}}$. 
It has been observed experimentally~\cite{CPT09} that for consistent systems, as long as $c_{1}=\Omega(\epsilon_{\text{machine}})$ this criterion will eventually be met in spite of the loss of orthogonality in the biorthogonalization process; however, the residual norm does not seem to decrease much below the value required to satisfy \eqref{eq:stop1}~\cite{CPT09}, so a much smaller $c_{1}$ cannot be used. In many cases our second stopping criterion, which is non-standard, will stop LSQR well before the residual is that small. This second condition is
\begin{equation}
\left\Vert d^{(t)}\right\Vert \leq\frac{\erf^{-1}(c_{2})}{\Vert\hat{x}\Vert}\;,\label{eq:stop2}
\end{equation}
where $\erf^{-1}$ is the inverse error function (we use a numerical approximation of $\erf^{-1}(c_{2})$), and $c_{2}$ is a parameter that is set by default to $10^{-3}$. We explain this stopping criterion and how the choice of $c_{2}$ affects the algorithm later, in Section~\ref{sec:analysis-small-err}. The third stopping criterion is
\begin{equation}
\frac{\hat{\sigma}_{\max}\left\Vert d^{(t)}\right\Vert }{\left\Vert Ad^{(t)}\right\Vert }\geq c_{3}\,,\label{eq:stop3}
\end{equation}
where $c_{3}$ is a parameter that is set by default to $64/\epsilon_{\text{machine}}$. In other words, once the estimated condition number reaches this threshold we consider the matrix to be numerically rank deficient and we do not attempt to estimate the exact condition number. This criterion is used in the standard LSQR algorithm~\cite{LSQR} as a regularizing criterion.

To achieve good accuracy even for matrices that are terribly ill conditioned (condition number close to $1/\epsilon_{\text{machine}}$), the stopping criteria are refined in two additional ways:
\begin{enumerate}
\item If at some point we have
\[
\frac{\hat{\sigma}_{\max}\left\Vert d^{(t)}\right\Vert }{\left\Vert Ad^{(t)}\right\Vert }\geq\frac{1}{c_{4}}\,,
\]
where $c_{4}$ is a parameter that is set by default to $\sqrt{\epsilon_{\text{machine}}}$, we set $c_{1}$ (the residual-based stopping threshold) to $c_{1}^{\prime}$, which is set by default to $4\epsilon_{\text{machine}}$.
\item Even when the method detects convergence using one of its three criteria (small residual, small error, and numerical rank deficiency), it keeps iterating. The number of extra iterations is one quarter of the number performed until convergence was detected. This rule is a heuristic that tries to improve the accuracy of the condition number estimate. The cost of this heuristic is obviously limited and it can be turned off by the user. \end{enumerate} There is one more twist to the algorithm. The algorithm stores the matrix $R^{(t)}$, one of two bidiagonal matrices that LSQR incrementally constructs but normally discards. As in the symmetric Lanczos algorithm, the singular values of $R^{(t)}$ converge to the singular values of $A$. Once the algorithm terminates, we compute $\tilde{\sigma}_{\min}\approx\sigma_{\min}(R^{(t)})$; if it is smaller than the best $\|Ad^{(t)}\|/\|d^{(t)}\|$ estimate, we output both estimates. One estimate ($\tilde{\sigma}_{\min}$) is tighter, but it comes with no certificate vector; the other is looser, but comes with a certificate. Generating the certificate for the Lanczos estimate requires storing the Lanczos vectors or repeating the iterations, both of which we consider to be too expensive. Storing $R^{(t)}$ and estimating $\sigma_{\min}(R^{(t)})$ is relatively inexpensive since $R^{(t)}$ is bidiagonal. We estimate it by running inverse iteration on $R^{(t)}$, again performing enough iterations to get to within 10\% with very high probability. Since inverse iteration produces its estimate through a Rayleigh quotient, the error in $\tilde{\sigma}_{\min}$ is one sided: it is always the case that $\tilde{\sigma}_{\min}\geq\sigma_{\min}(R^{(t)})$. Also, $\sigma_{\min}(R^{(t)})\geq\sigma_{\min}(A)$. Therefore, $\tilde{\sigma}_{\min}$ is also an upper bound on $\sigma_{\min}(A)$, not just an estimate. 
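Because $R^{(t)}$ is upper bidiagonal, each inverse-iteration step costs only one forward and one backward substitution. The following Python sketch (ours, for illustration; the paper's actual code is not shown and may differ) estimates $\sigma_{\min}(R)$ for an upper-bidiagonal $R$ given by its diagonal `d` and superdiagonal `e`:

```python
import math

def inverse_iteration_sigma_min(d, e, iters=30):
    """Estimate the smallest singular value of the upper-bidiagonal
    matrix R with diagonal d and superdiagonal e by power iteration
    on (R^T R)^{-1}.  Returns the Rayleigh-quotient estimate
    ||R w|| / ||w||, which is always an upper bound on sigma_min(R)."""
    n = len(d)
    w = [1.0] * n
    for _ in range(iters):
        # Solve R^T y = w (R^T is lower bidiagonal): forward substitution.
        y = [0.0] * n
        y[0] = w[0] / d[0]
        for i in range(1, n):
            y[i] = (w[i] - e[i - 1] * y[i - 1]) / d[i]
        # Solve R w = y: backward substitution.
        w[n - 1] = y[n - 1] / d[n - 1]
        for i in range(n - 2, -1, -1):
            w[i] = (y[i] - e[i] * w[i + 1]) / d[i]
        s = math.sqrt(sum(wi * wi for wi in w))
        w = [wi / s for wi in w]
    # Rayleigh quotient ||R w|| with ||w|| = 1.
    Rw = [d[i] * w[i] + (e[i] * w[i + 1] if i < n - 1 else 0.0)
          for i in range(n)]
    return math.sqrt(sum(v * v for v in Rw))
```

The returned value is a Rayleigh quotient of $R$, so it never falls below $\sigma_{\min}(R)$, matching the one-sided error discussed above.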
\section{\label{sec:Rationale}Rationale} This section explains the rationale behind the method using several illustrative numerical experiments. All the experiments were done on $1000$-by-$400$ matrices with real entries with prescribed singular values and random singular vectors. Let us begin the discussion with a matrix that has 300 singular values that are distributed logarithmically between $10^{-3}$ and $10^{-2}$, 10 values at $10^{-8}$, and 90 at $1$. \begin{figure} \begin{centering} \includegraphics[width=0.6\textwidth]{illustration_our} \par\end{centering} \caption{\label{fig:illustration_our}The behavior of the method on one matrix described in the text.} \end{figure} Figure~\ref{fig:illustration_our} shows the behavior of our method on one matrix generated with this spectrum. In the first 90 iterations or so the residual diminishes logarithmically; the norm of $x^{(t)}-x^{\star}$ drops a bit but then stops dropping much. These two effects cause our estimate to also diminish roughly logarithmically. The Lanczos estimate ($\sigma_{\min}(R^{(t)})$) drops a bit initially but stagnates from iteration 30 or so. What is happening up to iteration 90 or so is that the Lanczos bidiagonalization resolves the singular values in the $10^{-3}$-to-$10^{-2}$ cluster, while LSQR removes much of the projection of the corresponding singular vectors from the residual and from the error. Around iteration 160 Lanczos has resolved enough of the spectrum in the $10^{-3}$-to-$10^{-2}$ cluster and the small singular value of $R^{(t)}$ starts moving toward $10^{-8}$. At that point, most of the remaining error consists of singular vectors corresponding to the singular value $10^{-8}$, which causes our estimate to be accurate (to within 9 decimal digits!). The norm of $x^{(t)}-x^{\star}$ is still large, because the error contains a significant component in the subspace associated with the $10^{-8}$ singular values. 
At this point, LSQR starts to resolve the error in this subspace, the residual starts decreasing again, and the norm of $x^{(t)}-x^{\star}$ starts decreasing, which causes our stopping criterion to be met. \begin{figure} \begin{centering} \includegraphics[width=0.6\textwidth]{illustration_our_proj} \par\end{centering} \caption{\label{fig:illustration_our_proj}The projection of the forward error on the right singular vectors in the experiment shown in Figure~\ref{fig:illustration_our}. The bottom plot shows the singular values and the top image shows the projections. Blue represents values near $\epsilon_{\text{machine}}$, green represents values near $\sqrt{\epsilon_{\text{machine}}}$, and red represents values near $1$.} \end{figure} Figure~\ref{fig:illustration_our_proj} visualizes the behavior described in the previous paragraph. We see that up to around iteration 50, the error in the singular subspaces associated with singular values other than $1$ remains very large. From that point on to about iteration 150, the error in subspaces corresponding to values between $10^{-3}$ and $10^{-2}$ is resolved, but the error associated with the singular value $10^{-8}$ is still very large. This is a point where our method finds the small singular value and its certificate (the error). As LSQR starts to resolve the error in the $10^{-8}$ singular subspace, the errors in the $10^{-3}$ to $10^{-2}$ subspaces grow (perhaps due to loss of orthogonality), but they are reduced again later. The method yielded similar results when the small singular value was moved down to $10^{-13}$, with convergence after about 440 iterations and an estimate that is correct to within 5 decimal digits. The value $\sigma_{\min}=10^{-13}$ is about the lower limit for which the $\|d^{(t)}\|$ stopping criterion is useful. When $A$ is rank deficient there is more than one solution to the system $Ax=b$. 
The solution $x^{\star}$, which was generated randomly, has no special property that distinguishes it from other solutions (like minimum norm), so no least-squares solver can recover $x^{\star}$. Therefore, it is unlikely that $\|d^{(t)}\|$ will become small enough to cause our method to stop. In this case, the method stops because the residual eventually becomes very small (close to $\epsilon_{\text{machine}}$) or because the estimated condition number becomes too big (stopping condition~\eqref{eq:stop3}). In Figure~\ref{fig:illustration_rankdef_our} we illustrate the behavior of the algorithm on a rank deficient matrix; the matrix has 10 singular values that are $10^{-16}$ (numerical zeros), 300 distributed logarithmically between $10^{-3}$ and $10^{-2}$, and 90 at $1$. \begin{figure} \begin{centering} \includegraphics[width=0.6\textwidth]{illustration_rankdef_our} \par\end{centering} \caption{\label{fig:illustration_rankdef_our}The behavior of the method on one matrix described in the text.} \end{figure} The algorithm stopped because the condition number got too big; however, a few iterations later it would have stopped because $\Vert r^{(t)}\Vert$ became too small (stopping condition~\eqref{eq:stop1}). Our estimate is not very accurate (around $1.8\times10^{-15}$). This still indicates to the user that the matrix is numerically rank deficient, although the algorithm cannot tell the user exactly how close to $\epsilon_{\text{machine}}^{-1}$ the condition number is (and certainly not whether it is higher). Stopping criteria \eqref{eq:stop1} and \eqref{eq:stop3}, and the relatively-inaccurate estimates they yield, are used only when the matrix is close to rank deficiency (condition number of about $10^{14}$ or larger). Can we estimate $\sigma_{\min}$ by minimizing $\|Ax-b\|$ with a random $b$, which with high probability is inconsistent if $A$ has fewer columns than rows or is rank deficient? 
On some matrices, applying the pseudo-inverse of $A$ to such a $b$ produces a minimizer with a norm that is larger than the norm of $b$ by about a factor of $\sigma_{\min}^{-1}$. However, unless there is a large gap between the smallest singular values and the rest, the minimizer has a smaller norm and the norm ratio fails to accurately estimate $\sigma_{\min}^{-1}$. \begin{figure} \begin{centering} \includegraphics[width=0.6\textwidth]{random_b_norm_of_x} \par\end{centering} \caption{\label{fig:random_x_norm_of_b}Estimating $\sigma_{\min}$ by $\|A^{+}b\|^{-1}$ for a unit-length random $b$ with normal independent components.} \end{figure} Figure~\ref{fig:random_x_norm_of_b} shows the relative errors in this estimate for matrices whose singular values are distributed linearly between $\sigma_{\min}$ and $1$. The errors are huge, sometimes by more than 3 orders of magnitude. This is not a particularly useful method. This method amounts to one half of inverse iteration on $A^{*}A$, so it is not surprising that it is not accurate; performing more iterations would make the method very reliable, but at the cost of applying the pseudo-inverse many times. This estimator is clearly biased (the estimate is always larger than $\sigma_{\min}$); so is any fixed number of steps of inverse iteration. Kenney et al.~\cite{Kenney98} derive an unbiased estimator of this type for the Frobenius-norm condition number. To the best of our knowledge, this is not possible in the Euclidean norm. Other distributions of the singular values lead to more accurate estimates with this method. But will this method work when the least-squares solver is an iterative method like LSQR? The following experiment suggests that the answer is no. The matrix used in the experiment has 50 singular values distributed logarithmically between $10^{-10}$ and $10^{-9}$, 50 more distributed logarithmically between $10^{-1}$ and $1$, and the rest are all $1$. 
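The behavior of this estimator is easy to reproduce on diagonal matrices, where the pseudo-inverse is trivial. In the Python sketch below (ours; the two spectra are illustrative choices, not the ones used in the figure), the estimate $\|b\|/\|A^{+}b\|$ lands within a small factor of $\sigma_{\min}$ when the small singular value is well separated and has some multiplicity, but overshoots $\sigma_{\min}$ badly when the spectrum has no gap:

```python
import math
import random

def pinv_estimate(singular_values, b):
    """For A = diag(s), A^+ b has components b_i / s_i, so the estimate
    of sigma_min is ||b|| / ||A^+ b||; it is always >= sigma_min."""
    x = [bi / si for bi, si in zip(b, singular_values)]
    nb = math.sqrt(sum(bi * bi for bi in b))
    nx = math.sqrt(sum(xi * xi for xi in x))
    return nb / nx

random.seed(0)
n = 400
b = [random.gauss(0.0, 1.0) for _ in range(n)]

# Well-separated small singular value with multiplicity 100: accurate.
gap = [1.0] * (n - 100) + [1e-8] * 100
# Linearly spaced spectrum (as in the figure): the estimate overshoots.
lin = [1e-4 + (1.0 - 1e-4) * i / (n - 1) for i in range(n)]
est_gap = pinv_estimate(gap, b)
est_lin = pinv_estimate(lin, b)
```

Typically `est_gap` stays within a small factor of $10^{-8}$, while `est_lin` exceeds $\sigma_{\min}=10^{-4}$ by one to two orders of magnitude, mirroring the bias shown in the figure.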
\begin{figure} \begin{centering} \includegraphics[width=0.6\textwidth]{no_Atr_convergence} \par\end{centering} \caption{\label{fig:no_Atr_convergence}Running LSQR with an inconsistent right-hand-side $b$.} \end{figure} The results, presented in Figure~\ref{fig:no_Atr_convergence}, indicate that there is no good way to decide when to terminate LSQR when used in this way to estimate $\sigma_{\min}$. We obviously cannot rely on the residual approaching $\epsilon_{\text{machine}}$, because the problem is inconsistent. The original LSQR paper~\cite{LSQR} suggests another stopping condition, \[ \frac{\left\Vert A^{*}r^{(t)}\right\Vert }{\hat{\sigma}_{\max}\left\Vert r^{(t)}\right\Vert }\leq c\;, \] but our experiment shows that this ratio may fail to get close to $\epsilon_{\text{machine}}$. In our experiment the best local minimum is around $10^{-10}$, six orders of magnitude larger than $\epsilon_{\text{machine}}$! Moreover, at that local minimum, around iteration 60, the estimate $\|b+r^{(t)}\|/\|x^{(t)}\|$ is still near $1$, very far from $\sigma_{\min}$. The Lanczos estimate $\sigma_{\min}(R^{(t)})$ is also very inaccurate at that time. There does not appear to be a good way to decide when to stop the iterations and to report the best estimate seen so far. On matrices with this singular value distribution, our method detects convergence after 2400-2500 iterations, returning a certified estimate of $\sigma_{\min}$ that is accurate to within 15--40\% (the accuracy of the Lanczos estimate is better, with relative errors smaller than 10\%). Figure~\ref{fig:no_Atr_convergence_our} shows a typical run. The number of iterations is large, but the stopping criteria are robust. 
\begin{figure} \begin{centering} \includegraphics[width=0.6\columnwidth]{no_Atr_convergence_our} \par\end{centering} \caption{\label{fig:no_Atr_convergence_our}Running our method on the same matrix as in Figure~\ref{fig:no_Atr_convergence}.} \end{figure} \section{\label{sec:analysis-small-err}Analysis of the Small-Error Stopping Criterion} Now that we understand how the method works, we can explain how to derive the small-error stopping criterion \eqref{eq:stop2}. Suppose that the smallest singular value of $A$ is simple and that it is well separated from larger singular values. Let $x^{\star}=\sum_{i=1}^{n}\alpha_{i}v_{i}$ be the solution vector represented in the basis of the right singular vectors of $A$. As LSQR progresses towards finding $x^{\star}$ it will initially tend to resolve components in the directions of singular vectors associated with large singular values. Since the $v_{n}$ direction is not present in $x^{(0)}=0$ we expect it to remain absent during the initial iterations, i.e. $v_{n}^{T}x^{(t)}\approx0$. This implies that for the initial iterations $t$ we expect $\vert v_{n}^{T}(x^{\star}-x^{(t)})\vert\approx\vert\alpha_{n}\vert$, so $\|x^{\star}-x^{(t)}\|\geq\vert\alpha_{n}\vert$. Now, at some point in the iteration, the solution $x^{(t)}$ will be roughly $x^{(t)}\approx\sum_{i=1}^{n-1}\alpha_{i}v_{i}$, i.e. the error remains mostly in the direction of the small singular subspace, but the $v_{n}$ direction is not present at all. At that point, $\|x^{\star}-x^{(t)}\|\approx\vert\alpha_{n}\vert$. LSQR will now start to resolve that error at least partially and the norm of the error will decrease below $\vert\alpha_{n}\vert$. If we stop the iteration when $\|x^{\star}-x^{(t)}\|\gg\vert\alpha_{n}\vert$, the error is unlikely to be a good estimate of a small singular vector. If we stop when $\|x^{\star}-x^{(t)}\|\leq\vert\alpha_{n}\vert$ we will likely have a good estimate of a small singular value. 
Ideally, we want to stop immediately when $\|x^{\star}-x^{(t)}\|$ drops below $\vert\alpha_{n}\vert$. Stopping later (when the error is much smaller than $\vert\alpha_{n}\vert$) does not do any harm, since we report the best Rayleigh quotient seen, but it does not improve the estimate by much. We do not know $\alpha_{n}=v_{n}^{T}x^{\star}$ (which is also a random variable), but we can do a test for which passing it implies that $\|x^{\star}-x^{(t)}\|\leq\vert\alpha_{n}\vert$ with high probability. Recall that $x^{\star}=\hat{x}/\|\hat{x}\|$ where $\hat{x}$ is a vector with normally-distributed independent random components. Let $\hat{x}^{(t)}$ be the LSQR estimates that we would have obtained had we run LSQR on $\hat{b}=A\hat{x}$, and let $\hat{\alpha}_{n}=v_{n}^{T}\hat{x}$. Clearly, $\hat{x}^{(t)}=\Vert\hat{x}\Vert x^{(t)}$ and $\hat{\alpha}_{n}=\Vert\hat{x}\Vert\alpha_{n}$, so $\|x^{\star}-x^{(t)}\|\leq\vert\alpha_{n}\vert$ if and only if $\|\hat{x}-\hat{x}^{(t)}\|\leq\vert\hat{\alpha}_{n}\vert$. Therefore, if we find a value $\tau$ such that $\Pr(\vert\hat{\alpha}_{n}\vert\geq\tau)\geq1-c_{2}$, then $\|\hat{x}-\hat{x}^{(t)}\|\leq\tau$ implies $\|\hat{x}-\hat{x}^{(t)}\|\leq\vert\hat{\alpha}_{n}\vert$ (and $\|x^{\star}-x^{(t)}\|\leq\vert\alpha_{n}\vert$) with probability of at least $1-c_{2}$. The condition $\|\hat{x}-\hat{x}^{(t)}\|\leq\tau$ is equivalent to $\|x^{\star}-x^{(t)}\|\leq\tau/\Vert\hat{x}\Vert$. Since $\Vert v_{n}\Vert=1$ we have $\hat{\alpha}_{n}\sim N(0,1)$. Therefore, for any $0<c_{2}<1$ \[ \Pr(\vert\hat{\alpha}_{n}\vert\geq\erf^{-1}(c_{2}))=1-c_{2}\,. \] This immediately leads to the stopping criterion \[ \|x^{\star}-x^{(t)}\|\leq\frac{\erf^{-1}(c_{2})}{\Vert\hat{x}\Vert}\,. \] The choice of $c_{2}$ in the algorithm determines our confidence that $\|x^{\star}-x^{(t)}\|$ dropped below $\vert\alpha_{n}\vert$. We use a default value of $10^{-3}$, which implies a small probability of failure, but not a tiny one. 
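The threshold $\erf^{-1}(c_{2})/\Vert\hat{x}\Vert$ is straightforward to compute numerically. Since the Python standard library provides `math.erf` but no inverse, the sketch below (ours; one possible "numerical approximation", not necessarily the one used in the paper's code) inverts it by bisection, which works because $\erf$ is monotone increasing:

```python
import math

def erfinv(y, tol=1e-12):
    """Invert the error function on (0, 1) by bisection.
    math.erf is monotone increasing, and erf(6) is essentially 1."""
    lo, hi = 0.0, 6.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if math.erf(mid) < y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def small_error_threshold(c2, norm_xhat):
    """Stopping threshold erfinv(c2) / ||xhat|| from the small-error
    criterion, for the default c2 = 1e-3 or any other choice."""
    return erfinv(c2) / norm_xhat

tau = small_error_threshold(1e-3, 20.0)
```

For the default $c_{2}=10^{-3}$ the threshold is tiny (on the order of $10^{-3}/\Vert\hat{x}\Vert$), which is why the criterion only fires once LSQR has resolved almost all of the error.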
The user can, of course, change the value if higher confidence is needed (in either case the error is one sided). Another approach is to use a much larger $c_{2}$ and to repeat the algorithm $\ell$ times (possibly in a single run, exploiting matrix-matrix multiplies). The probability that we succeed in at least one run is at least $1-c_{2}^{\ell}$. For $c_{2}=10^{-2}$, say, setting $\ell=3$ or so should suffice. In our experience it is better to make $c_{2}$ smaller than to set $\ell>1$, but we did not do a formal analysis of this issue. If the small singular value is multiple (associated with a singular subspace of dimension $k>1$), our situation is even better, because we can stop when $x^{\star}-x\approx\sum_{i=n-k+1}^{n}\alpha_{i}v_{i}$, at which point $\|x^{\star}-x\|\approx\sqrt{\alpha_{n-k+1}^{2}+\cdots+\alpha_{n}^{2}}$, which is even more likely to be larger than our stopping threshold. When the small singular value is not well separated, the stopping criterion is still sound but the Rayleigh quotient estimate we obtain is not as accurate, because in such cases $x^{\star}-x$ tends to be a linear combination of singular vectors corresponding to several singular values. These singular values are all small, but they are not exactly the same, thereby pulling the Rayleigh quotient up a bit. \section{\label{sec:Additional-Experiments}Additional Experiments} \subsection{Additional Illustrative Examples} In Figure~\ref{fig:linear_1e-8} all the singular values are distributed linearly from $10^{-8}$ up to $1$. Convergence is fairly slow. The gap between $\sigma_{\min}=\sigma_{400}=10^{-8}$ and $\sigma_{399}$ is relatively large, around $\frac{1}{400}$, so $\sigma_{\min}$ is computed accurately. 
\begin{figure} \begin{centering} \includegraphics[width=0.6\textwidth]{illustration_linear_1e-8} \par\end{centering} \caption{\label{fig:linear_1e-8}Singular values are distributed linearly from $10^{-8}$ up to $1$.} \end{figure} When many singular values are distributed logarithmically or nearly so, convergence is very slow and the small relative gap between $\sigma_{\min}$ and the next-larger singular values causes the method to return a less accurate estimate. \begin{figure} \begin{centering} \includegraphics[width=0.6\textwidth]{illustration_log_1e-3} \par\end{centering} \caption{\label{fig:log_1e-3}200 singular values are distributed logarithmically from $10^{-3}$ up to $1$, and the rest are at $1$.} \end{figure} Figure~\ref{fig:log_1e-3} plots the convergence when 200 singular values are distributed logarithmically between $10^{-3}$ and $1$ and the rest are at $1$. We do not see a period of stagnation during which the error is a good estimate of $v_{\min}$. The certified estimate is only accurate to within 31\% and the Lanczos estimate to within 10\% (much worse than when the small singular value is well separated from the rest). LSQR might have several periods of stagnation. This happens when the spectrum contains several well-separated clusters. Figure~\ref{fig:multiple_stagnations} plots the convergence when the matrix has a multiple singular value at $10^{-10}$, a multiple singular value at $10^{-7}$ (both with multiplicity 10), 300 singular values that are distributed logarithmically between $10^{-3}$ and $10^{-2}$, and the rest are at $1$. We see multiple stagnation periods of the residual, the error, the Lanczos estimate, and our certified estimate. 
\begin{figure} \begin{centering} \includegraphics[width=0.6\textwidth]{illustration_multiple_stagnations} \par\end{centering} \caption{\label{fig:multiple_stagnations}Multiple singular values at $10^{-10}$ and $10^{-7}$ (each with multiplicity 10), 300 singular values distributed logarithmically between $10^{-3}$ and $10^{-2}$, and the rest at $1$.} \end{figure} \subsection{Experiments on Large Structured Random Matrices} The next set of experiments was performed on sparse matrices that motivated this project. These matrices have exactly 3 nonzeros per column, where the location of the nonzeros is random and uniform and their values are $+1$ or $-1$ with equal independent probabilities. Matrices of this type arise when simulating the evolution of random 2-dimensional complexes in various stochastic models~\cite{ALLM12}. Such $m$-by-$n$ matrices tend to be well conditioned when $n<0.9m$ and rank deficient when $n>0.95m$. \begin{figure} \noindent \begin{centering} \begin{tabular}{ccc} \includegraphics[width=0.35\textwidth]{irad_100000_90000} & ~ & \includegraphics[width=0.35\textwidth]{irad_100000_95000}\tabularnewline \end{tabular} \par\end{centering} \begin{centering} \par\end{centering} \caption{\label{fig:irad_m=00003D100000}Random matrices with 3 nonzeros per column. On the left we see the convergence on a $100,000$-by-$90,000$ matrix and on the right the convergence on a $100,000$-by-$95,000$ matrix.} \end{figure} Figure~\ref{fig:irad_m=00003D100000} shows that the method converges quite quickly even on large matrices in both the well conditioned and the rank deficient cases. On smaller matrices of this type we were able to assess the accuracy of the method. For $m=1000$, the case $n=900$ yielded a relative error of 22\% (the Lanczos estimate was off by 78\%), and $n=450$ yielded a relative error of 41\% (the Lanczos estimate was off by 18\%). Problems of this type with $m=1,000,000$ required a similar number of iterations and were easily solved on a laptop. 
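A generator for these sparse test matrices is easy to sketch. The Python code below (ours; the column-dictionary storage format is an illustrative choice, not the paper's data structure) draws, for each column, three distinct rows uniformly at random and independent $\pm1$ values:

```python
import random

def random_sign_matrix(m, n, nnz_per_col=3, seed=0):
    """Each column gets nnz_per_col entries at distinct random rows,
    with values +1 or -1 chosen independently with equal probability.
    The matrix is stored as a list of {row: value} dictionaries."""
    rng = random.Random(seed)
    cols = []
    for _ in range(n):
        rows = rng.sample(range(m), nnz_per_col)
        cols.append({i: rng.choice((-1, 1)) for i in rows})
    return cols

def sparse_matvec(cols, x, m):
    """Compute y = A x for the column-dict representation above."""
    y = [0.0] * m
    for j, col in enumerate(cols):
        for i, v in col.items():
            y[i] += v * x[j]
    return y

# A 1000-by-900 instance, i.e. the n = 0.9 m regime discussed above.
A = random_sign_matrix(1000, 900)
```

Since the algorithm only needs matrix-vector products, a representation like this (or any standard sparse format) is all that is required to run it on such matrices.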
It is worth noting that due to the random structure of the non-zero pattern, it is likely that factorization-based condition number estimators will be very slow when applied to matrices of this type. \subsection{\label{sub:Large-Scale-Experiments}Experiments on Many Real-World Matrices} We ran both a dense SVD and our method on all the matrices from Tim Davis's sparse matrix collection~\cite{UFDavis11} for which $mn^{2}<256\times10^{9}$. Our method converged in 100,000 iterations or less on 1024 out of the 1468 matrices in this category. Out of the 1468 matrices, 404 had condition number $64/\epsilon_{\text{machine}}\approx7\times10^{13}$ or larger. Our method converged on 278 of them, delivering condition number estimates of $5\times10^{11}$ or larger. In other words, on all the matrices that were close to rank deficiency, our method detected that the condition number is large, but in some cases it underestimated the actual condition number. On matrices with condition number smaller than $64/\epsilon_{\text{machine}}$, our method always estimated the condition number to within a relative error of 24\% or less. We ran the method again on some of the matrices on which it failed to converge in 100,000 iterations, allowing the method to run longer. It converged in all cases. For example, on \texttt{nos1}, the method detected convergence after 169,791 iterations. The $\sigma_{\min}$ estimate it returned was actually from iteration 90,173, meaning that by iteration 100,000 the estimate had actually converged, but the algorithm was not yet able to detect convergence. We note that \texttt{nos1} is a square matrix of dimension 237; the method can be slow even on small matrices. The running time of our method obviously varies a lot and is not easy to characterize. But on large matrices it is often much faster than a dense SVD. 
On one matrix in our test set, \texttt{bips98\_606} (a square matrix of dimension 7135), our method was more than 550 times faster than a dense SVD, even though the dense SVD routine used all 4 cores of the Intel Core i7 machine, whereas our method used only one. The machine had 16GB of RAM and the SVD computation did not perform any paging activity. \subsection{\label{sub:Experiments-on-Large}Experiments on Large Real-World Matrices} We ran the algorithm on a few very large matrices from the same matrix collection. As on the smaller matrices, the method sometimes converged and sometimes exceeded the maximal number of iterations (up to 1,000,000). Table~\ref{tab:very-large} shows the statistics of successful runs; they indicate that when the singular spectrum is clustered, the method works well even on very large matrices. \begin{table} \begin{centering} \begin{tabular}{lrrrrr} & $m$ & $n$ & time (s) & iterations & $\kappa$ (est.)\tabularnewline \cline{2-6} rajat10 & 30202 & 30202 & 43 & 5219 & 1.2e+03\tabularnewline flower\_7\_4 & 67593 & 27693 & 4 & 231 & 1.6e+01\tabularnewline flower\_8\_4 & 125361 & 55081 & 15 & 537 & 2.2e+13\tabularnewline wheel\_601 & 902103 & 723605 & 1278 & 5260 & 1.3e+14\tabularnewline Franz11 & 47104 & 30144 & 1.4 & 59 & 3.3e+15\tabularnewline lp\_ken\_18 & 154699 & 105127 & 59 & 1836 & 2.5e+14\tabularnewline lp\_pds\_20 & 108175 & 33874 & 13 & 697 & 1.2e+14\tabularnewline \end{tabular} \par\end{centering} \caption{\label{tab:very-large}Large real-world matrices whose condition number was successfully computed by our method.} \end{table} \section{Summary} We have presented an adaptation of LSQR to the estimation of the condition number of matrices. Our method is yet another tool in the condition-number estimation toolbox. It relies almost solely on matrix-vector multiplications, so it can be applied to very large sparse matrices. 
It does not require much memory, and it is at least as fast as a single application of unpreconditioned LSQR to solve a least-squares problem. The method is reliable in the sense that it never returns an overestimate of the condition number. In many cases, the method is orders-of-magnitude faster than competing methods, especially if $A$ is large and has no sparse triangular factorization. However, the performance of the method depends on the distribution of the singular values of $A$, and some distributions lead to very slow convergence. In such cases, the method still provides a lower bound on the condition number, but it may be loose. When this happens, methods based on orthogonal or triangular factorizations or on preconditioned iterative solvers may be faster. Our method is primarily based on one key insight: that the forward error in LSQR tends to converge to an approximate singular vector associated with $\sigma_{\min}$. This property of LSQR and related Krylov-subspace solvers is normally seen as a deficiency (because it slows down the convergence to the minimizer), but it turns out to be beneficial for condition-number estimation. \paragraph*{Acknowledgments} This research was supported in part by grant 1045/09 from the Israel Science Foundation (founded by the Israel Academy of Sciences and Humanities) and by grant 2010231 from the US-Israel Binational Science Foundation. \bibliographystyle{plain}
{ "redpajama_set_name": "RedPajamaArXiv" }
1,576
{"url":"http:\/\/journal.psych.ac.cn\/xlxb\/CN\/abstract\/abstract5079.shtml","text":"ISSN 0439-755X\nCN 11-1911\/B\n\n\u4e2d\u56fd\u79d1\u5b66\u9662\u5fc3\u7406\u7814\u7a76\u6240\n\n\u2022 \u2022\n\n### \u7f51\u7edc\u7a81\u53d1\u4e8b\u4ef6\u4e2d\u7684\u8d1f\u6027\u504f\u5411\uff1a\u4ea7\u751f\u4e0e\u8868\u73b0\n\n1. 1. \u4e2d\u592e\u8d22\u7ecf\u5927\u5b66\n2. \u4e0a\u6d77\u6d77\u4e8b\u5927\u5b66\n3. \u4e2d\u56fd\u4eba\u6c11\u5927\u5b66\u5fc3\u7406\u5b66\u7cfb\n\u2022 \u6536\u7a3f\u65e5\u671f:2021-02-04 \u4fee\u56de\u65e5\u671f:2021-08-29 \u53d1\u5e03\u65e5\u671f:2021-09-07\n\u2022 \u901a\u8baf\u4f5c\u8005: \u8f9b\u81ea\u5f3a\n\u2022 \u57fa\u91d1\u8d44\u52a9:\n\u653f\u5e9c\u4fe1\u606f\u53d1\u5e03\u5bf9\u7a81\u53d1\u516c\u5171\u536b\u751f\u4e8b\u4ef6\u4e2d\u8d1f\u6027\u504f\u5411\u7684\u5f15\u5bfc\u4e0e\u6d88\u89e3;\u7a81\u53d1\u4e8b\u4ef6\u4e2d\u653f\u5e9c\u4fe1\u606f\u53d1\u5e03\u5bf9\u793e\u4f1a\u8d1f\u9762\u60c5\u7eea\u611f\u67d3-\u6f14\u5316\u7684\u5f71\u54cd\u673a\u5236\u7814\u7a76;\u7a81\u53d1\u516c\u5171\u4e8b\u4ef6\u4e2d\u8d1f\u6027\u504f\u5411\u6548\u5e94\u7684\u6f14\u5316\u673a\u5236\u4e0e\u6d88\u89e3\u7b56\u7565;\u7f51\u7edc\u7a81\u53d1\u4e8b\u4ef6\u4e2d\u8d1f\u6027\u504f\u5411\u6548\u5e94\u7684\u4f20\u9012\u53ca\u6d88\u89e3\u7b56\u7565\u7814\u7a76;\u91cd\u5927\u7a81\u53d1\u516c\u5171\u536b\u751f\u4e8b\u4ef6\u4e0b\u4e2d\u97e9\u516c\u4f17\u98ce\u9669\u611f\u77e5\u3001\u5e94\u5bf9\u884c\u4e3a\u53ca\u60c5\u7eea\u6f14\u53d8\u7684\u4ea4\u4e92\u5f71\u54cd\u673a\u7406\u7814\u7a76;\u4e2d\u592e\u8d22\u7ecf\u5927\u5b66\u4e00\u6d41\u5b66\u79d1\u5efa\u8bbe\u9879\u76ee;\u4e2d\u592e\u8d22\u7ecf\u5927\u5b66\u79d1\u7814\u521b\u65b0\u56e2\u961f\u652f\u6301\u8ba1\u5212\u8d44\u52a9\u9879\u76ee\n\n### Negativity bias in network emergency: Occurrence and manifestation\n\n1. 1.\n2. Shanghai Maritime University\n3. 
Central University of Finance and Economics\n\u2022 Received:2021-02-04 Revised:2021-08-29 Published:2021-09-07\n\nAbstract: Nowadays, network emergencies have occurred frequently, because of the social transition and the development of social media. In the past, most of the researches on network emergencies were theoretical analysis, and less attention was paid to the psychological mechanisms. The current research proposes that negativity bias, as a common psychological phenomenon in human decision-making, is an important mechanism behind the network emergency and its propagation. In order to explore the occurrence and performance of negativity bias in network emergencies, three theoretical hypotheses were tested by three studies under the guidance of a theoretical model. Study 1 aimed to explore the information content bias in the source texts of network emergencies. 40 source texts of network emergencies in the period from 2016 to 2019 were collected through Baidu, Sina, Tencent and other major media platforms. The Chinese psychoanalysis System TextMind 3.0 was used to analyze the texts. In Study 2, a recognition memory experiment was conducted to explore the information processing bias of the source texts of network emergencies. 48 participants completed the single-factor (word nature: positive, neutral and negative) within-subjects experiment. The reading materials used in the experiment are from the corpus set up in Study 1. Positive, neutral and negative words were selected from the text by online word segmentation tool in advance, and the subjects were asked to recall whether the words appeared in the article in the subsequent memory experiment. Study 3 aimed to explore the transmission bias in the dynamic propagation of network emergencies. One hundred and twenty participants (Thirty transmission chains) took part in the transmission experiment. 
Word nature was a within-subjects variable, which can be divided into three levels: positive, neutral and negative. Intergenerational transmission was a between-subjects variable including four generations. Study 1 indicated that although all network emergencies were negative, negative words did not dominate in the source texts. Study 2 showed that the recognition accuracy of negative words was higher than that of positive words and neutral words. The analysis based on signal detection theory showed that the participants had higher discrimination and tight decision-making criteria for negative words than positive and neutral words. Therefore, the negativity bias of the participants was mainly reflected in the fact that they were more likely to recognize negative words that are not in the text. Study 3 indicated that the survival rate of negative events was higher than that of positive events and neutral events, and that of positive events was higher than that of neutral events. The probability of negative interpretation of neutral events was higher than that of positive interpretation. These results supported the negative advantages in the process of emergency transmission. The current study investigated the occurrence and manifestation of negativity bias, an important psychological function formed in the process of human evolution, during the brewing, breaking out and spreading process of network Emergency. That is, the negativity bias did not originate from the source texts of network emergencies, but from the process of individual information processing and interpersonal information transmission, which is manifested in the higher recognition accuracy, higher discrimination, sightly tight decision-making criteria of negative words; the higher survival rate of negative events, as well as negative resolution of ambiguous events. 
This research is conducive to understanding the law of information dissemination of network emergencies, scientific response to the crisis of public opinion, innovative network governance. Key words: network emergency, negativity bias, memory, transmission chain experiment, cultural evolution","date":"2021-10-16 02:54:22","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.3975292444229126, \"perplexity\": 2587.8016105747242}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2021-43\/segments\/1634323583408.93\/warc\/CC-MAIN-20211016013436-20211016043436-00038.warc.gz\"}"}
import * as ts from "../../_namespaces/ts";
import * as Utils from "../../_namespaces/Utils";
import * as Harness from "../../_namespaces/Harness";
import {
    createWatchedSystem,
    File,
    libFile,
} from "../virtualFileSystemWithWatch";
import {
    baselineBuildInfo,
    CommandLineProgram,
} from "../tsc/helpers";
import {
    applyChange,
    createBaseline,
    watchBaseline,
} from "../tscWatch/helpers";

describe("unittests:: tsc:: builder cancellationToken", () => {
    verifyCancellation(/*useBuildInfo*/ true, "when emitting buildInfo");
    verifyCancellation(/*useBuildInfo*/ false, "when using state");

    function verifyCancellation(useBuildInfo: boolean, scenario: string) {
        it(scenario, () => {
            const aFile: File = {
                path: `/user/username/projects/myproject/a.ts`,
                content: Utils.dedent`
                    import {B} from './b';
                    declare var console: any;
                    let b = new B();
                    console.log(b.c.d);`
            };
            const bFile: File = {
                path: `/user/username/projects/myproject/b.ts`,
                content: Utils.dedent`
                    import {C} from './c';
                    export class B {
                        c = new C();
                    }`
            };
            const cFile: File = {
                path: `/user/username/projects/myproject/c.ts`,
                content: Utils.dedent`
                    export class C {
                        d = 1;
                    }`
            };
            const dFile: File = {
                path: `/user/username/projects/myproject/d.ts`,
                content: "export class D { }"
            };
            const config: File = {
                path: `/user/username/projects/myproject/tsconfig.json`,
                content: JSON.stringify({ compilerOptions: { incremental: true, declaration: true } })
            };
            const { sys, baseline, oldSnap: originalSnap } = createBaseline(createWatchedSystem(
                [aFile, bFile, cFile, dFile, config, libFile],
                { currentDirectory: "/user/username/projects/myproject" }
            ));
            sys.exit = exitCode => sys.exitCode = exitCode;

            const reportDiagnostic = ts.createDiagnosticReporter(sys, /*pretty*/ true);
            const parsedConfig = ts.parseConfigFileWithSystem(
                "tsconfig.json",
                {},
                /*extendedConfigCache*/ undefined,
                /*watchOptionsToExtend*/ undefined,
                sys,
                reportDiagnostic
            )!;
            const host = ts.createIncrementalCompilerHost(parsedConfig.options, sys);
            let programs: CommandLineProgram[] = ts.emptyArray;
            let oldPrograms: CommandLineProgram[] = ts.emptyArray;
            let builderProgram: ts.EmitAndSemanticDiagnosticsBuilderProgram = undefined!;
            let oldSnap = originalSnap;
            let cancel = false;
            const cancellationToken: ts.CancellationToken = {
                isCancellationRequested: () => cancel,
                throwIfCancellationRequested: () => {
                    if (cancel) {
                        sys.write(`Cancelled!!\r\n`);
                        throw new ts.OperationCanceledException();
                    }
                },
            };

            // Initial build
            baselineBuild();

            // Cancel on first semantic operation
            // Change
            oldSnap = applyChange(
                sys,
                baseline,
                sys => sys.appendFile(cFile.path, "export function foo() {}"),
                "Add change that affects d.ts"
            );
            createIncrementalProgram();

            // Cancel during semantic diagnostics
            cancel = true;
            try {
                builderProgram.getSemanticDiagnosticsOfNextAffectedFile(cancellationToken);
            }
            catch (e) {
                sys.write(`Operation was cancelled:: ${e instanceof ts.OperationCanceledException}\r\n`);
            }
            cancel = false;
            builderProgram.emitBuildInfo();
            baselineBuildInfo(builderProgram.getCompilerOptions(), sys);
            watchBaseline({
                baseline,
                getPrograms: () => programs,
                oldPrograms,
                sys,
                oldSnap,
            });

            // Normal emit again
            noChange("Normal build");
            baselineBuild();

            // Do clean build:: all the emitted files should be same
            noChange("Clean build");
            baselineCleanBuild();

            Harness.Baseline.runBaseline(`tsc/cancellationToken/${scenario.split(" ").join("-")}.js`, baseline.join("\r\n"));

            function noChange(caption: string) {
                oldSnap = applyChange(sys, baseline, ts.noop, caption);
            }

            function updatePrograms() {
                oldPrograms = programs;
                programs = [[builderProgram.getProgram(), builderProgram]];
            }

            function createIncrementalProgram() {
                builderProgram = useBuildInfo ?
                    ts.createIncrementalProgram({
                        rootNames: parsedConfig.fileNames,
                        options: parsedConfig.options,
                        host,
                    }) :
                    ts.createEmitAndSemanticDiagnosticsBuilderProgram(
                        parsedConfig.fileNames,
                        parsedConfig.options,
                        host,
                        builderProgram,
                        /*configFileParsingDiagnostics*/ undefined,
                        /*projectReferences*/ undefined,
                    );
                updatePrograms();
            }

            function emitAndBaseline() {
                ts.emitFilesAndReportErrorsAndGetExitStatus(builderProgram, reportDiagnostic);
                baselineBuildInfo(builderProgram.getCompilerOptions(), sys);
                watchBaseline({
                    baseline,
                    getPrograms: () => programs,
                    oldPrograms,
                    sys,
                    oldSnap,
                });
            }

            function baselineBuild() {
                createIncrementalProgram();
                emitAndBaseline();
            }

            function baselineCleanBuild() {
                builderProgram = ts.createEmitAndSemanticDiagnosticsBuilderProgram(
                    parsedConfig.fileNames,
                    parsedConfig.options,
                    host,
                    /*oldProgram*/ undefined,
                    /*configFileParsingDiagnostics*/ undefined,
                    /*projectReferences*/ undefined,
                );
                updatePrograms();
                emitAndBaseline();
            }
        });
    }
});
Hollinghurst came to acclaim with his groundbreaking debut novel, The Swimming Pool Library, in 1988, and went on to win the 2004 Man Booker Prize for The Line of Beauty, a novel that nailed the crosscurrents of greed, envy, privilege, and sexual desire in 1980s Britain. He was long-listed for the same award in 2011 for his multi-generational novel, The Stranger's Child. Below are Alan Hollinghurst's favorite books, available to purchase as a set or individually.

The greatest of all novels. Read it again, to test and savour the infallible truth of Tolstoy's understanding of every stage and aspect of human life.

The first poet I loved and learned by heart, and whose music still haunts me and inspires me 45 years later.

Another teenage passion I have not outgrown but grown into, even if I have lost the faith that sustained him through good and ill.

One of the great hearers, feelers, and observers, with a passionate language world all his own.

One of the great technical innovators of the English novel, glittering, subversive, poignant and funny - inimitable but an inspiration to generations of writers since.

I stick by the old heresy, that Woolf's Diary is her greatest achievement. An enthrallingly uncensored portrait of a brilliantly perceptive mind as it moves through a fascinating world in complex times.

Published in 1942, this marvellously mordant account of the first years of World War II, less known than the wonderful Decline and Fall and A Handful of Dust, is perhaps Waugh's most flawless comic novel. Every word tells.

A brilliant social comedy seen wholly from a child's point of view, this is a dazzling technical feat that as always with Henry James deepens as it develops, like the life of the child herself. An exhilarating prelude to the great novels of his famous late phase.

Perhaps the best introduction to another great original of the English novel, who learned from Firbank's economy, but who had his own quite different imaginative world.
Loving, set among the servants of an Irish country house, combines his superbly truthful ear for how people really speak with an unforgettable vein of surreal poetry.

The beautiful and concentrated final masterpiece of an utterly original writer, set in early 19th-century Germany. Fitzgerald pays us the compliment of trusting our intelligence and counting on our close attention, which will be overwhelmingly rewarded.
Reality And Myths
Controller | October 5, 2017 | Myths

Tango is an ancient Argentinian folk dance, danced in pairs to a clear, energetic rhythm. The word "tango" itself came from a Nigerian language, and the dance was built on ancient African dance forms. The word "tango" was first applied to dances in Argentina in the late 19th century, and in the early 20th century they came to Europe and America. The 1930s to 1950s are considered the golden age of the dance, and since 2009 tango has been on the UNESCO Intangible Cultural Heritage list. Many try unsuccessfully to grasp the essence of tango, but it does not yield, guarding its secrets from the uninitiated. If you look in this dance only for an expression of passion, you risk understanding nothing. The language of tango can speak of meetings and partings, love and betrayal. The whole history of this dance is permeated with idle fantasies, which we will try to refute or confirm.

For a real tango, you need sexual attraction. This statement is very much true of tango. Attraction is natural to any man, and dances are created precisely for it, subordinating the couple to ancient instincts. When feelings rage between the dancers, tango becomes a real spectacle and wins in entertainment. But for the dance to turn out, the mere desire to dance is not enough: one must want to dance it with this particular woman. And this is not so simple; at one time it was even seriously discussed, which led to the emergence of a whole code of tango.
In accordance with these rules, at the moment of meeting a man has no right to offer a woman anything other than tango; only afterwards may he invite her for coffee and continue the acquaintance.

Tango helps men hide their complexes. Not at all. In tango the partner does not merely lead; he is responsible for the direction of the movement. It is believed that a woman who trusts her partner can dance with her eyes closed. Hidden complexes do not allow this: mistrust between partners shows itself, and the dance becomes difficult. You can, however, still work on your complexes. People who practice tango for a long time change noticeably, although this is a difficult and rather slow process. Some manage to change quickly and immerse themselves completely in tango, enjoying it.

Real tango is danced only in Argentina. Authentic tango really exists only in this country, for there tango is not a dance but a subculture. Only those who have absorbed the taste of tango from childhood, who were brought up in its traditions, can dance it as it was originally danced. It is noticeable that in Argentina tango is danced slightly differently than elsewhere; abroad the dance is performed, as it were, in another language, with a slight accent.

Tango lovers generally adore other dances. On the contrary, it has been observed that fans of tango do not include other dances among their passions. These people give themselves entirely to tango, having neither the will nor the strength to break out of this pleasant dependence. Many, having begun with tango, go on to study Argentine culture and language and travel to Buenos Aires. In this way the dance can become almost a religion. It is hard to imagine waltz fans regularly travelling to Vienna at the call of the soul, yet tango has exactly this effect.

In tango, the man always leads. In fact, his leadership is not at all unambiguous.
It is believed that during the dance the man shows the partner the way, inviting her to take one step or figure or another. The woman must answer him, usually with consent. The man's leadership shows itself only in taking the initiative; the woman still gives an adequate answer. These exchanges last fractions of a second, which is sometimes not enough even for the partners to understand each other. This is what is taught in specialized schools. And the true meaning of the dance is, after all, that the woman enjoys the tango.

You have to come to a tango club with your own partner. Tango is not only a dance; it is also a form of leisure. People go bowling with company or with a spouse, but to tango they usually come alone or with friends. Established couples rarely appear in these clubs, since that would require a coincidence of interests, rare and fortunate. Among professionals, though, there are quite a few married couples, which is not surprising given the dance's ability to kindle the fire of passion.

There are no game elements in tango. There are in fact elements of play in this dance: each partner has a role, and playing it honestly brings pleasure from the dance-game. There are no winners and losers in tango, just as there are no predetermined moves. In this dance there is a constant dialogue between a man and a woman in a special language of souls and bodies, united by the music.

Tango is a social dance. This statement is valid, perhaps, for some areas of Buenos Aires, where you can see many people, if not everyone, dancing at an ordinary party or home celebration. In our understanding, a paired social dance means moving to slow music at a disco or in special venues. After practicing tango for a while, you begin to understand that there is no desire to dance with everyone in a row; not even all the partners from the club make a suitable pair. At major festivals or away meetings this feeling only grows.
Much in such cases depends on luck: which partners have come, what mood they are in, and how well the music has been chosen. One might say that tango is not so much a social dance as an artificial language, like Esperanto: you can speak it not with everyone, only with a handful of fellow enthusiasts, but all over the world.

Tango is just a show. Many people mistakenly believe that tango is the occupation of professional dancers alone, on the theater stage or in the cinema. In fact, all major cities have clubs visited by tens and hundreds of ordinary people, with their own festivals attended by dancers from other cities and countries. Most of these people dance not for cameras or spectators but for themselves, and all the pleasure of their tango passes inside the couple. It is also said that one can understand the dancers' movements only after studying for a while oneself; then it becomes clear that seemingly simple movements are sometimes insanely complicated, and showy technique is not always appropriate.

Tango is very physiological. It is said that the technique of this dance takes into account the structure of the human body, and that tango is therefore natural and comfortable for a person. But the same could be claimed of juggling while standing on one leg on a tightrope. Our skeleton bends only at the joints, and the shoulders cannot turn 180 degrees relative to the hips; this is taken into account. Yet for each step space must constantly be freed, and rotations and circular motions require a stable axis. All this loads the muscles in a way ordinary walking never does, not to mention a mass of other nuances. In addition, the body in the correct position must not be tense, otherwise the muscles stiffen and the dancer loses flexibility, turning into a log.
It is no accident that the basic technique of the step is honed over years.

Tango can be learned by yourself. It is a mistake to try to learn the dance by watching how others do it, substituting yourself for a teacher. Teachers can set the right direction of training and give exercises; but then you must do some work yourself, developing a sense of rhythm, balance and the ability to listen to a partner. Do not be afraid to ask questions, and the teacher will always point out mistakes when he sees them. Psychological blocks can mean that, even after several years in a group, a person still dances mediocrely.

Tango is more for women. This statement is partially true. In beginners' groups girls really do prevail, and this applies to milonga evenings too, though not without exception. In established and senior groups, on the contrary, it is partners who are in short supply.

Tango is for young people. This claim follows from the myth that tango is just a show, for which one needs excellent physical form and youthful enthusiasm. Certain conditions are indeed required, but perseverance, the initial desire to practice, and the availability of time and money play a greater role. Among dancers there are many who took up tango only after 40 or 50. One advantage of tango is that it can be danced into old age, in the absence, of course, of health problems.

Tango is a dance for adults. This statement mirrors the previous one. Many mistakenly believe that children have nothing to do in tango. Yet babies have nothing to say either, and they are still taught to talk. In youth it is much easier to learn, because stretching and muscle tone are better, and as a result young dancers find their dancing form much faster. It is these people who get the most attention at milongas.

Tango is only Argentine. This dance has several styles.
Besides the traditional Argentine tango, which came also from Uruguay, there are ballroom and Finnish tango, and Argentine tango itself has several varieties: Fantasia, Liso, Nuevo, Salon and others. Finnish tango originated in Suomi in the 1940s. Ballroom tango is a sport dance performed in international competitions; its main difference from the traditional one is the lack of improvisation, since everything follows set rules.

Tango can be learned in three days. Only charlatans make such a promise; realistically, learning the basics takes at least six months. This, by the way, does not demand great strain: in the hundred years of its existence the dance has hardly changed in terms of technique, which is quite simple. First the scheme is brought to automatism, then the movements of the legs are sped up and made more complex, and then the other parts of the body are "set". As for the celebrated flair of the dance, the passion and exaltation, it comes by itself and depends on the partners themselves, because so many tangos are allowed …
\section{Introduction}
Wireless communication channels are characterized by their time-varying fading nature, which has a significant effect on the performance of wireless networks. Various algorithms have been proposed to design efficient resource allocation schemes that optimize the system performance over fading channels, e.g., by minimizing the transmission power, minimizing the delay, or maximizing the system throughput. Resource allocation over fading channels has been studied for point-to-point communication in different contexts, e.g.,~\cite{goldsmith1997capacity,fu2006optimal,negi2002delay,wang2014power,lee2009energy}. In~\cite{goldsmith1997capacity}, the expected Shannon capacity for fading channels was obtained when the channel state information (CSI) is known causally at the transmitter and the receiver. Furthermore, it has been shown that the ``water-filling'' algorithm achieves the maximum expected capacity. The authors of~\cite{lee2009energy} considered the problem of minimizing the expected energy to transmit a single packet over a fading channel subject to a hard deadline. In~\cite{fu2006optimal} and~\cite{negi2002delay}, a dynamic program formulation was proposed to maximize a general throughput function under constraints on the delay and the amount of energy available at the transmitter. In~\cite{wang2014power}, the work of~\cite{fu2006optimal} was extended to energy harvesting systems where the transmitter has causal CSI.

The capacity region of the multiple access channel (MAC) has been studied in various settings, see for example~\cite{yu2004iterative,tse1998multiaccess,hanly1998multiaccess,rezki2014capacity,devassy2015finite,budkuley2014jamming}. In~\cite{yu2004iterative}, the capacity region of the Gaussian multiple-input multiple-output (MIMO) MAC was characterized.
The authors of~\cite{yu2004iterative} proposed an iterative water-filling algorithm to obtain the optimal transmit covariance matrices of the users that maximize the weighted sum capacity. In~\cite{tse1998multiaccess}, the capacity region of the fading MAC was characterized by Tse and Hanly. Furthermore, the power allocation policy that maximizes the long-term achievable rates subject to average power constraints for each user was introduced. In~\cite{hanly1998multiaccess}, Hanly and Tse introduced an information-theoretic characterization of the capacity region of the fading MAC with delay constraints. In addition, they provided the optimal power allocation policy that achieves the delay-limited capacity. In~\cite{wang2015iterative}, Wang developed the optimal energy allocation strategy for the fading MAC with energy harvesting nodes by assuming that the CSI is \textit{non-causally} known before the beginning of transmission. In~\cite{caire2004variable}, the capacity region of the fading MAC with a power constraint on each codeword was investigated. However, the authors of~\cite{caire2004variable} focused on the low signal-to-noise ratio (SNR) regime, where they showed that the one-shot power allocation policy is asymptotically optimal.

In this paper, we consider a system composed of multiple users transmitting to a single base station (BS) over a fading MAC. The transmission occurs over a limited time duration in which each user has a fixed amount of energy. Some motivating scenarios and applications for this system model are introduced in~\cite{hanly1998multiaccess,fu2006optimal,negi2002delay}, e.g., satellites, remote sensors, and cellular phones with a limited amount of energy transmitting delay-sensitive data to a single receiver. We develop energy allocation strategies to maximize the expected sum-throughput of the fading MAC subject to a hard deadline and energy constraints.
First, we consider the offline allocation problem in which the channel states are known a priori to the BS. We show that the optimal solution of this problem can be obtained via the iterative water-filling algorithm. Next, a dynamic program formulation is introduced to obtain the optimal online allocation policy when only causal CSI is available at the BS. Since the computational complexity of the optimal online policy increases exponentially with the number of users, we develop a suboptimal solution for the online allocation problem by exploiting the proposed offline allocation policy. Moreover, we investigate numerically the performance of the proposed policies and compare them with the equal-energy allocation and the one-shot energy allocation policy of~\cite{caire2004variable}.

The rest of the paper is organized as follows. In Section~\ref{system}, we present the system model and formulate the maximum sum-throughput optimization problem. The offline energy allocation is introduced in Section~\ref{offline}. We study the online allocation in Section~\ref{online}, where dynamic programming is utilized to obtain the optimal policy and a suboptimal policy with reduced computational complexity is proposed. In Section~\ref{results}, we present our numerical results and compare the performance of different policies in various scenarios. Finally, we conclude the paper in Section~\ref{conclusion}.

\section{SYSTEM MODEL}
\label{system}
We consider a discrete-time MAC as shown in Fig.~\ref{F1:system}, where $N$ users communicate with a single BS in a slotted wireless network. We assume a flat-fading channel model in which the channel gain of each user is constant over the duration of a time slot and changes independently from one time slot to another according to a known continuous distribution.
Thus, the received signal by the BS at time slot $t$ is given by \begin{equation} y_t=\sum_{i=1}^{N} \sqrt{h_t^{\left(i\right)}} x_t^{\left(i\right)}+n_t \end{equation} where $n_t$ is a zero-mean white Gaussian noise with variance $\sigma^2$, and $x_t^{\left(i\right)}$ is the transmitted signal of user $i$ at time slot $t$. The channel gain between the $i$th user and the BS at time slot $t$ is denoted by $h_t^{\left(i\right)}$, where the channel gains of each user $\left\{h_t^{\left(i\right)}\right\}$, $i\in\lbrace 1,\cdots,N\rbrace$, are independent identically distributed with the cumulative distribution function (CDF) $F_H^{\left(i\right)}\left(x\right)$. Let $E_i$ denote the maximum amount of energy that can be expended by user $i$ during $T$ time slots, where $T$ denotes the transmission window in which each user must transmit his data. Let $\mathcal{N}=\left\{1,\cdots,N\right\}$ denote the set of users communicating with the BS, and $\mathcal{T}=\left\{1,\cdots,T\right\}$ denote the set of the time slots during which communication occurs. Our goal is to maximize the sum-throughput of the MAC over the transmission window under constraints on the available energy for each user. Let $e_t^{\left(i\right)}$ denote the consumed energy by the $i$th user at time slot $t$. 
Hence, the maximum achievable sum-throughput of the MAC at time slot $t$, when the channel gains of all users at time slot $t$ are known, is given by~\cite{cover2012elements}
\begin{equation}\label{eqn7}
R\left(\mathbf{e}_t,\mathbf{h}_t\right)=\tau W\log_2\left(1+\frac{1}{\tau N_o}\sum_{i=1}^{N} h_t^{\left(i\right)}e_t^{\left(i\right)}\right)
\end{equation}
where $W$ and $\tau$ are the channel bandwidth and the time slot duration, respectively, and $N_o=W\sigma^2$ is the noise power in watts.\footnote{Note that the successive cancellation decoding strategy is the optimal decoding scheme that achieves the maximum sum-throughput of the MAC~\cite{hanly1998multiaccess}.} In \eqref{eqn7}, $\mathbf{h}_t=\left[ h_t^{\left(1\right)},\cdots,h_t^{\left(N\right)} \right]$ and $\mathbf{e}_t=\left[e_t^{\left(1\right)},\cdots,e_t^{\left(N\right)}\right]$ are the channel gain vector and the consumed energy vector of all users at time slot $t$, respectively. Let $\mathcal{E}_t^{\left(i\right)}$ be the available energy for user $i$ at time slot $t$. Thus, the evolution of the energy queue of the $i$th user is given by
\begin{equation}\label{energylevel}
\begin{aligned}
\mathcal{E}_{t+1}^{\left(i\right)}=\mathcal{E}_{t}^{\left(i\right)}- e_t^{\left(i\right)} &\qquad t=1,\cdots,T-1
\end{aligned}
\end{equation}
where the initial state of the energy queue is $\mathcal{E}_{1}^{\left(i\right)}=E_i$. In addition, the energy vector $\mathcal{E}_t=\left[\mathcal{E}_{t}^{\left(1\right)},\cdots,\mathcal{E}_{t}^{\left(N\right)}\right]$ represents the energy levels of all users at time slot $t\in\mathcal{T}$. We aim to obtain the energy allocation policy for each user $i\in \mathcal{N}$ that maximizes the expected sum-throughput of the MAC over a deadline of $T$ slots.
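As a quick numerical sanity check (our own illustration, not part of the paper), the per-slot sum-throughput in~\eqref{eqn7} can be evaluated directly; the bandwidth, slot duration, and noise power below are arbitrary example values:

```python
import math

# Sketch of the per-slot MAC sum-throughput
#   R(e_t, h_t) = tau * W * log2(1 + (1/(tau*N0)) * sum_i h_i * e_i).
# The defaults for W, tau, N0 are assumed illustrative values.
def sum_throughput(e, h, W=1e6, tau=1e-3, N0=1e-6):
    snr = sum(hi * ei for hi, ei in zip(h, e)) / (tau * N0)
    return tau * W * math.log2(1.0 + snr)
```

Spending no energy yields zero throughput, and the rate grows only logarithmically in the received energy, which is what drives the water-filling structure of the optimal policy.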
Towards this objective, we formulate the following optimization problem: \begin{equation}\label{eqn1} \begin{aligned} \max_{\mathbf{e}_1,\cdots,\mathbf{e}_T} & \quad\mathbb{E}\left\{\sum_{t=1}^{T} R\left(\mathbf{e}_t,\mathbf{h}_t\right)\right\}&&\\ \text{s.t.} &\quad \sum_{t=1}^{T}e_t^{\left(i\right)}= E_i & i&\in \mathcal{N}&\\ &\quad\mathbf{e}_t \succeq \mathbf{0} & t&\in \mathcal{T}& \end{aligned} \end{equation} where $\mathbf{0}$ denotes a row vector whose elements are equal to zero, $\mathbb{E}$ denotes the expectation with respect to the channel vectors $\mathbf{h}_t$, $t\in \mathcal{T}$, and the maximization is over all feasible energy allocation policies. In the following sections, we first study the offline allocation policy in which the channel gains of all users are known a priori for $T$ time slots. Next, we study different online allocation policies that maximize the expected sum-throughput of the MAC when only causal CSI is available. \begin{figure} \centering \includegraphics[width=7 cm, height=6 cm]{SM.pdf} \caption{System model} \label{F1:system} \end{figure} \section{offline energy allocation}\label{offline} In this section, we introduce the optimal offline energy allocation policy when the channel vectors $\mathbf{h}_t$, $t\in \mathcal{T}$, are non-causally known to the BS and the users at the beginning of the transmission. Since the channel vectors $\mathbf{h}_t$, $t\in \mathcal{T}$ are a priori known, the optimization problem~\eqref{eqn1} can be reformulated as a deterministic optimization problem \begin{equation} \label{eqn2} \begin{aligned} \max_{\mathbf{e}_1,\cdots,\mathbf{e}_T} & \quad\sum_{t=1}^{T} R\left(\mathbf{e}_t,\mathbf{h}_t\right)&& \end{aligned} \end{equation} subject to the same constraints of the optimization problem~\eqref{eqn1}. 
\begin{theorem}\label{Th1}
The optimal offline transmission policy for the users is obtained by solving the following equations:
\begin{eqnarray}
e_t^{\left(i\right)} &\!\!\!\!=\!\!\!\!& \left(\gamma_{o}^{\left(i\right)}-\gamma_t^{\left(i\right)}\right)^{+} \qquad \forall i\in \mathcal{N}, t\in \mathcal{T} \label{eqn3}\\
\sum_{t=1}^{T}e_t^{\left(i\right)} &\!\!\!\!=\!\!\!\!& E_i \qquad\qquad \quad\qquad\; \forall i\in \mathcal{N} \label{eqn4}\\
\gamma_t^{\left(i\right)} &\!\!\!\!=\!\!\!\!& \frac{\tau N_o+\sum_{n\neq i}h_t^{\left(n\right)}e_t^{\left(n\right)}}{h_t^{\left(i\right)}} \label{last_IWF}
\end{eqnarray}
where $\gamma_{o}^{\left(i\right)}$ is a threshold value obtained by substituting from~\eqref{eqn3} into~\eqref{eqn4}, and $\left(x\right)^{+}=\max\left(0,x\right)$.
\end{theorem}
\begin{proof}
Refer to the Appendix.
\end{proof}
In the single-user case, i.e., $N=1$, the optimal offline policy in Theorem~\ref{Th1} is the conventional water-filling algorithm~\cite{goldsmith1997capacity}, where the noise-to-channel-gain ratio at each time slot $t$ determines the amount of energy allocated to that slot. In the case of multiple users, i.e., $N>1$, we note that the energy allocation policy of the $i$th user, $\mathbf{e}^{\left(i\right)}=\left[e_1^{\left(i\right)},\cdots,e_T^{\left(i\right)}\right]$, for a given energy allocation of the other users is also obtained via the water-filling algorithm. However, in this case, the interference signals of the other users $\sum_{n\neq i}h_t^{\left(n\right)} e_t^{\left(n\right)}$ at each time slot $t$ are treated as noise. Hence, the energy allocation policy of the $i$th user is significantly affected by that of the other users, since the energy allocated to the $i$th user in time slot $t$ depends on $\gamma_t^{\left(i\right)}$, which is the ratio between the interference-plus-noise power and the channel gain of the $i$th user at time slot $t$.
\begin{algorithm}
\caption{Iterative water-filling (IWF) algorithm}
\label{IWF}
\begin{algorithmic}[1]
\State \textbf{Initialization:} $\mathbf{e}_t=\mathbf{0}$, $\forall\ t\in \mathcal{T}$
\For {$l=1$ to $L_{\max}$}
\For{$i=1$ to $N$}
\State Let $\gamma_t^{\left(i\right)}=\frac{\tau N_o+\sum_{n\neq i}h_t^{\left(n\right)}e_t^{\left(n\right)}}{h_t^{\left(i\right)}}$, $\forall\ t\in \mathcal{T}$
\State $e_t^{\left(i\right)}=\left(\gamma_{o}^{\left(i\right)}-\gamma_t^{\left(i\right)}\right)^{+}$, $\forall\ t\in \mathcal{T}$
\State $\sum_{t=1}^{T}e_t^{\left(i\right)}=E_i$
\EndFor
\EndFor
\end{algorithmic}
\end{algorithm}
Note that a closed-form expression for the optimal solution introduced in Theorem~\ref{Th1} cannot be found. Nevertheless, the optimal solution can be obtained by applying the iterative water-filling (IWF) algorithm described in Algorithm~\ref{IWF} to iteratively solve equations~\eqref{eqn3}--\eqref{last_IWF}, where $L_{\max}$ is the maximum number of iterations. In each iteration, the IWF algorithm successively updates the optimal energy allocation of each user via the water-filling algorithm while keeping the allocation policies of the other users fixed. Hence, at each iteration the algorithm tries to maximize the objective function of problem~\eqref{eqn2} by adapting the energy allocation of a single user while treating the signals of the other users $\sum_{n\neq i}h_t^{\left(n\right)} e_t^{\left(n\right)}$ as noise. Since the objective function is monotonically increasing in the energy allocation policy of each user $\mathbf{e}^{\left(i\right)}$, it cannot decrease after any iteration. As a result, the IWF solution approaches the optimal solution of problem~\eqref{eqn2} as the number of iterations $L_{\max}$ increases, where $L_{\max}$ determines the error tolerance.
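For concreteness, a minimal numerical sketch of Algorithm~\ref{IWF} is given below (our own illustration, not code from the paper). The inner routine solves the single-user water-filling step in~\eqref{eqn3}--\eqref{eqn4} exactly by sorting, and unit values are assumed for $\tau$ and $N_o$:

```python
import numpy as np

def waterfill(gamma, E):
    """Single-user water-filling step: e_t = (level - gamma_t)^+ with
    sum_t e_t = E. The water level is found exactly by sorting the gammas."""
    g = np.sort(gamma)
    T = len(g)
    for k in range(1, T + 1):          # k = number of active slots
        level = (E + g[:k].sum()) / k
        if k == T or level <= g[k]:    # only the k cheapest slots stay active
            break
    return np.maximum(level - gamma, 0.0)

def iwf(h, E, tau=1.0, N0=1.0, iters=50):
    """Iterative water-filling sketch for the fading MAC.
    h: (N, T) array of channel gains; E: (N,) per-user energy budgets.
    tau, N0 and iters are illustrative assumptions."""
    N, T = h.shape
    e = np.zeros((N, T))
    for _ in range(iters):
        for i in range(N):
            # the other users' signals act as noise for user i
            interference = (h * e).sum(axis=0) - h[i] * e[i]
            gamma = (tau * N0 + interference) / h[i]
            e[i] = waterfill(gamma, E[i])
    return e
```

Each pass re-runs water-filling for one user while treating the others' signals as noise, so every user's budget constraint holds exactly after each update and the objective is non-decreasing across iterations.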
The IWF algorithm was applied in~\cite{yu2004iterative} to find the optimal transmit covariance matrices of the users that achieve the boundary of the Gaussian MIMO-MAC capacity region. In a similar manner to~\cite{yu2004iterative}, we can regard the channel gains of the $i$th user over the time window ($h_1^{\left(i\right)},\cdots,h_T^{\left(i\right)}$) as the effective channel gains of $T$ transmit antennas of the $i$th user. Therefore, the results of the IWF algorithm obtained in~\cite{yu2004iterative} can be applied here. \begin{theorem} For a finite number of iterations, the IWF algorithm described in Algorithm~\ref{IWF} converges to the optimal allocation policy, which is the solution of the optimization problem in~\eqref{eqn2}. Furthermore, after a single iteration, the IWF algorithm achieves a sum-throughput within $\frac{\left(N-1\right)T}{2}$ nats of the optimal. \end{theorem} \begin{proof} See Theorem~$4$ and Theorem~$5$ in~\cite{yu2004iterative}. \end{proof} \section{Online energy allocation} \label{online} In this section, we assume that the channel vector $\mathbf{h}_t$ is causally known to the BS and the users at the beginning of time slot $t$, while future channel states are not known. Let $X_t=\left(\mathcal{E}_t,\mathbf{h}_t\right)$ denote the state of the system, which is comprised of the channel gains and the energy levels of all users at time slot $t$. We aim to obtain the energy allocation policy $\mathcal{G}^*=\left[\mathbf{e}^*_1\left(X_1\right),\cdots,\mathbf{e}^*_T\left(X_T\right)\right]$ that maximizes the expected sum-throughput of the MAC within a duration of $T$ slots by sequentially solving the optimization problem in~\eqref{eqn1}. 
The optimal energy allocation policy $\mathcal{G}^*$ can be obtained by formulating the optimization problem in~\eqref{eqn1} as a finite-horizon dynamic program (DP) described by the following two equations \begin{subequations} \label{eqn6} \begin{align} \label{eqn6:1} U_T\left(\mathcal{E}_T,\mathbf{h}_T\right)&=R\left(\mathcal{E}_T,\mathbf{h}_T\right)\\ U_t\left(\mathcal{E}_t,\mathbf{h}_t\right)&=\underset{\mathbf{0}\preceq \mathbf{e}_t\preceq \mathcal{E}_t}{\max}\, R\left(\mathbf{e}_t,\mathbf{h}_t\right)+\overline{U}_{t+1}\left(\mathcal{E}_{t}-\mathbf{e}_t\right) \quad\forall\, 1\leq t<T \end{align} \end{subequations} where $\overline{U}_{t+1}\left(\mathcal{E}\right)=\mathbb{E} \left\{ U_{t+1}\left(\mathcal{E},\mathbf{h}\right)\right\}$. The equations in~\eqref{eqn6} are Bellman's equations of the finite-horizon DP~\cite{bertsekas1995dynamic}, where $\overline{U}_{t+1}\left(\mathcal{E}\right)$ is the maximum expected sum-throughput that can be obtained during the remaining $T-t$ slots given that the energy levels of all users are $\mathcal{E}$. Note that the optimal policy is a vector of functions mapping the current state of the system (the channel gains and the energy levels) to an amount of energy for each user. In~\eqref{eqn6:1}, the users transmit all available energy $\mathcal{E}_{T}$ to maximize the total sum-throughput in the last time slot $T$. On the other hand, for time slots $t=1,\cdots,T-1$ there is a tradeoff between the current reward $R\left(\mathbf{e}_t,\mathbf{h}_t\right)$ and the expected future reward $\overline{U}_{t+1}\left(\mathcal{E}_t-\mathbf{e}_t\right)$. Hence, the optimal energy allocated to each user at time slot $t$ is determined by maximizing the current throughput plus the expected future throughput. 
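To make the recursion in~\eqref{eqn6} concrete, the sketch below carries out the backward induction numerically for a single user ($N=1$), with the energy level discretized on a grid and a small discrete channel distribution replacing the expectation. The reward $R(e,h)=\log(1+he)$ and all numerical values are illustrative assumptions, not parameters from the paper.

```python
# Hedged numerical sketch of the Bellman recursion for N = 1: the energy
# level is discretized (discretization method) and the expectation over
# the channel reduces to a finite sum. R(e, h) = log(1 + h*e) is assumed.
import math

T, E, K = 3, 2.0, 40                 # horizon, energy budget, grid resolution
grid = [E * k / K for k in range(K + 1)]
h_vals, h_probs = [0.3, 1.0, 2.5], [0.3, 0.4, 0.3]

Ubar = [0.0] * (K + 1)               # expected reward-to-go after slot T
policy = []                          # policy[t][ih][k]: grid units to spend
for t in range(T, 0, -1):            # backward in time
    newUbar = [0.0] * (K + 1)
    pol = [[0] * (K + 1) for _ in h_vals]
    for k in range(K + 1):           # k grid units of energy left
        for ih, h in enumerate(h_vals):
            best_val, best_e = -1.0, 0
            for ek in range(k + 1):  # spend ek units now, keep k - ek
                val = math.log(1.0 + h * grid[ek]) + Ubar[k - ek]
                if val > best_val:
                    best_val, best_e = val, ek
            pol[ih][k] = best_e
            newUbar[k] += h_probs[ih] * best_val
    Ubar, policy = newUbar, [pol] + policy

print(policy[-1][0][K] == K)   # True: in the last slot everything is spent
print(policy[0][2][K] > policy[0][0][K])  # True: spend more on good channels
```

The two printed checks mirror the discussion above: at $t=T$ the whole remaining budget is transmitted, while in earlier slots the policy trades current reward against the expected future reward.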
\subsection{The optimal policy} The optimal allocation policy $\mathcal{G}^*$ is obtained recursively by solving Bellman's equation in~\eqref{eqn6} at each time slot $t\in \mathcal{T}$, where $\overline{U}_{t+1}\left(\mathcal{E}\right)$ is computed backwards in time. However, we cannot obtain a closed-form expression for the expected reward function $\overline{U}_{t+1}\left(\mathcal{E}\right)$ even in the single-user case. Therefore, $\overline{U}_{t+1}\left(\mathcal{E}\right)$ is computed numerically using the discretization method~\cite{bertsekas1995dynamic}. \subsection{Suboptimal policy}\label{sub} The computational complexity required to solve~\eqref{eqn6} numerically grows exponentially with the number of users~\cite{bertsekas1995dynamic}. In order to alleviate this problem, the one-shot energy allocation policy was introduced in~\cite{negi2002delay} and~\cite{caire2004variable} to solve~\eqref{eqn6} efficiently. The one-shot energy allocation policy arises from the linear approximation of the throughput function, i.e., \begin{equation} R\left(\mathbf{e}_t,\mathbf{h}_t\right)\approx \frac{1}{N_o}\sum _{i=1}^{N} h_t^{\left(i\right)}e_t^{\left(i\right)}. \end{equation} Note that the linear approximation is acceptable in the wideband regime, i.e., when $W\rightarrow\infty$, and/or when all users transmit at low SNR, where the transmit SNR of the $i$th user is given by $\text{SNR}_i=\frac{E_i h_{o}^{\left(i\right)}}{\tau N_o}$, with $h_o^{\left(i\right)}=\int_{0}^{\infty} x\, dF_H^{\left(i\right)}\left(x\right)$. 
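A quick numerical check of this statement, under the illustrative assumption of a normalized throughput $\log(1+x)$ with $x$ the received SNR: the relative error of the linearization is negligible at low SNR and grows large at high SNR.

```python
# Hedged check: relative error of the linearization log(1 + x) ~ x.
import math

for x in [1e-3, 1e-1, 10.0]:
    exact = math.log(1.0 + x)
    # overshoot of the linear term, small at low SNR, large at high SNR
    print(x, (x - exact) / exact)
```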
Hence, Bellman's equations can be restated as \begin{subequations} \label{eqn9} \begin{align} \label{eqn9:1} \tilde{U}_T\left(\mathcal{E}_T,\mathbf{h}_T\right)&=\sum _{i=1}^{N} h_T^{\left(i\right)}\mathcal{E}_T^{\left(i\right)}\\ \tilde{U}_t\left(\mathcal{E}_t,\mathbf{h}_t\right)&=\underset{\mathbf{0}\preceq \mathbf{e}_t\preceq \mathcal{E}_t}{\max}\, \sum _{i=1}^{N} h_t^{\left(i\right)}e_t^{\left(i\right)}+\overline{\tilde{U}}_{t+1}\left(\mathcal{E}_{t}-\mathbf{e}_t\right), \quad 1\leq t<T \end{align} \end{subequations} By applying the DP recursion backward in time from time slot $t=T$ to time slot $t=1$, the expected reward function $\overline{\tilde{U}}_{t}\left(\mathcal{E}\right)$ for $t\in\mathcal{T}$ can be computed as \begin{equation} \overline{\tilde{U}}_{t}\left(\mathcal{E}\right)=\sum_{i=1}^{N}\mathcal{E}^{\left(i\right)}\nu_t^{\left(i\right)}. \end{equation} Furthermore, the one-shot energy allocation policy, which solves~\eqref{eqn9}, is given by \begin{equation} e_t^{\left(i\right)}=\left\{\begin{array}{ll} \mathcal{E}_t^{\left(i\right)} & \text{if}\ \ h_t^{\left(i\right)}>\nu_{t+1}^{\left(i\right)}\\ 0 & \text{if}\ \ h_t^{\left(i\right)}\leq\nu_{t+1}^{\left(i\right)} \end{array} \right.,\ \forall\ i\in\mathcal{N} \end{equation} where $\nu_{t-1}^{\left(i\right)}=\mathbb{E}_{h_t^{\left(i\right)}}\left\{\max\lbrace h_t^{\left(i\right)},\nu_t^{\left(i\right)}\rbrace \right\}$, $\nu_{T}^{\left(i\right)}=h_o^{\left(i\right)}$, and $\nu_{T+1}^{\left(i\right)}=0$. The one-shot energy allocation policy allocates the available energy $E_i$ of the $i$th user to the earliest time slot $t\in\mathcal{T}$ that has a channel gain $h_t^{\left(i\right)}>\nu_{t+1}^{\left(i\right)}$. We refer to~\cite{caire2004variable} for more insights and details. Next, we develop a low-complexity suboptimal solution to solve the recursive DP introduced in~\eqref{eqn6}. 
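Returning to the one-shot policy: for i.i.d.\ $\mathrm{Exp}(1)$ channel gains the threshold recursion admits the closed form $\mathbb{E}\{\max(h,\nu)\}=\nu+e^{-\nu}$ with $h_o=1$, so the thresholds can be evaluated directly. The channel draws in the sketch below are illustrative assumptions.

```python
# Hedged sketch of the one-shot thresholds for Exp(1) channel gains, where
# E[max(h, v)] = v + exp(-v) in closed form and h_o = 1.
import math

def thresholds(T):
    nu = [0.0] * (T + 2)          # nu[1..T+1]; nu[0] unused
    nu[T], nu[T + 1] = 1.0, 0.0   # nu_T = h_o, nu_{T+1} = 0
    for t in range(T, 1, -1):
        nu[t - 1] = nu[t] + math.exp(-nu[t])
    return nu

T = 5
nu = thresholds(T)
h = [0.8, 2.1, 3.0, 0.5, 1.2]     # illustrative channel draws
# transmit everything in the earliest slot t with h_t > nu_{t+1};
# slot T always qualifies since nu_{T+1} = 0
chosen = next((t for t in range(1, T + 1) if h[t - 1] > nu[t + 1]), T)
print(chosen)                     # 2: slot 1 misses the stricter early threshold
```

The thresholds decrease toward the deadline, so the policy becomes less demanding as fewer slots remain.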
We show through numerical simulations that the performance of the suboptimal algorithm is close to that of the optimal policy when the number of users is small. The suboptimal solution is obtained by applying the certainty equivalent controller (CEC) scheme (see Chapter~$6$ in~\cite{bertsekas1995dynamic}), in which the following three steps are applied at each time slot $t$: \begin{enumerate} \item \textbf{Certainty step}: We replace all uncertain variables with their means. Hence we assume that the future channel gain of each user over the remaining $T-t$ slots is equal to its mean, i.e., $\mathbf{h}_k=\mathbf{h}_o$ for $\ k=t+1,\cdots,T$ where $\mathbf{h}_o=\left[h_o^{\left(1\right)},\cdots,h_o^{\left(N\right)}\right],\ h_o^{\left(i\right)}=\int_{0}^{\infty} x dF_H^{\left(i\right)}\left(x\right)$. \item \textbf{Optimization step}: After the certainty step, the recursive optimization problem in~\eqref{eqn6} at time slot $t$ can be reformulated as the following deterministic optimization \begin{equation}\label{eqn8} \begin{aligned} \max_{\tilde{\mathbf{e}}_t,\cdots,\tilde{\mathbf{e}}_T} & R\left(\tilde{\mathbf{e}}_t,\mathbf{h}_t\right)+\sum_{k=t+1}^{T} R\left(\tilde{\mathbf{e}}_k,\mathbf{h}_o\right)\\ \text{s.t.} &\quad \sum_{k=t}^{T}\tilde{e}_k^{\left(i\right)}\leq \mathcal{E}_t^{\left(i\right)},\ i \in \mathcal{N}\\ &\quad\tilde{\mathbf{e}}_k \succeq \mathbf{0},\ k=t,\cdots,T \end{aligned} \end{equation} where the solution of the optimization problem in~\eqref{eqn8} is obtained in a similar way to the offline allocation problem introduced in Section~\ref{offline} by applying the IWF algorithm in Algorithm~\ref{IWF} over $T-t+1$ slots with an amount of energy available at each user $\mathcal{E}_t^{\left(i\right)}$ for $i\in\mathcal{N}$. \item \textbf{Allocation step}: We set $\mathbf{e}_t=\tilde{\mathbf{e}}_t$ and compute the energy levels of all users at $t+1$ using Equation~\ref{energylevel}. Then, we go to the next time slot $t+1$. 
\end{enumerate} \section{Numerical results} \label{results} \begin{figure} \centering \includegraphics[width=9 cm, height=7 cm]{fig1} \caption{Average sum-throughput in Mbits in the low SNR regime for $T=5$ and $N=2$.} \label{fig1} \end{figure} In this section, we numerically evaluate the performance of the various energy allocation policies introduced throughout the paper. For comparison, we consider a simple energy allocation policy, namely equal-energy allocation, where each user allocates an equal amount of energy to each time slot of the transmission window, regardless of the channel fading and of the allocation policies of the other users, i.e., \begin{equation} e_{t}^{\left(i\right)}=\frac{E_i}{T},\ \forall i\in\mathcal{N},\ \forall t\in\mathcal{T}. \end{equation} Notice that this policy is optimal in the case of time-invariant channels, where the channel gain of each user is constant over the deadline. \begin{figure} \centering \includegraphics[width=9 cm, height=7 cm]{fig2} \caption{Average sum-throughput in Mbits in the high SNR regime for $T=5$ and $N=2$.} \label{fig2} \end{figure} \begin{figure} \centering \begin{subfigure}{.5\textwidth} \includegraphics[width=9 cm, height=7 cm]{fig3_1} \centering \caption{$\text{SNR}=-10$ dB} \label{fig3:1} \end{subfigure} \begin{subfigure}{.5\textwidth} \includegraphics[width=9 cm, height=7 cm]{fig3_2} \centering \caption{$\text{SNR}=10$ dB} \label{fig3:2} \end{subfigure} \caption{Average sum-throughput of the MAC versus the number of users $N$ for $\text{SNR}=-10$ dB and $\text{SNR}=10$ dB.} \end{figure} For simplicity, we consider a symmetric case, where all users are equipped with an equal amount of energy, i.e., $E_i=E$, $\forall i\in\mathcal{N}$, and the channel gains of all users are i.i.d., generated according to the exponential distribution with parameter $\lambda=1$, i.e., $F_{{H}}^{\left(i\right)}\left(x\right)=1-e^{-x}$, $\forall i\in\mathcal{N}$. 
Also, we consider the following parameters: the bandwidth $W=1$ MHz, the noise power $N_o=1$ watt, and the slot length $\tau=1$ second; hence, the transmit SNR of each user is $\text{SNR}_i=E$, $\forall i\in\mathcal{N}$. We use the performance of the offline allocation policy as an upper bound on the performance of the online policies. In the following figures, the performance of the optimal offline, suboptimal, one-shot, and equal-energy policies is obtained by averaging over $10^4$ randomly generated channel realizations, while the performance of the optimal online policy is obtained by using the discretization method~\cite{bertsekas1995dynamic}. Figs.~\ref{fig1} and~\ref{fig2} show the average sum-throughput of the MAC versus the transmit SNR of each user for a system composed of $N=2$ users and a transmission window of length $T=5$ time slots. Fig.~\ref{fig1} focuses on the low SNR regime, where the SNR is varied from $-30$ dB to $0$ dB. It is clear that the performance of the proposed suboptimal and one-shot policies is close to the optimal one, although the proposed suboptimal policy performs better as the SNR approaches $0$ dB. Moreover, the equal-energy allocation policy has the worst performance. In Fig.~\ref{fig2}, the SNR varies from $0$ dB to $20$ dB to investigate the performance of the different policies in the medium and high SNR regimes. We can see from this figure that the one-shot policy deviates from the optimal solution, since the linear approximation of the throughput function is no longer valid at high SNR. However, the performance of the proposed suboptimal policy is still very close to that of the optimal solution. Next, we investigate the effect of the number of users $N$ on the performance of the different policies. Fig.~\ref{fig3:1} and Fig.~\ref{fig3:2} show the average sum-throughput of the system (for $T=5$ slots) at $\text{SNR}=-10$ dB and $\text{SNR}=10$ dB, respectively. 
We can see from Fig.~\ref{fig3:1} that the proposed suboptimal policy and the one-shot policy have almost the same performance for any number of users in the low SNR regime. When the number of users is much larger than the number of time slots of the transmission window, i.e., $N\gg T$, each time slot of the transmission window is shared by many users. In other words, each user suffers from high interference signals at each time slot of the transmission window. Therefore, the best choice is to allocate the available energy of each user to a single time slot of the transmission window that has a favorable channel gain. Hence, Fig.~\ref{fig3:2} shows that the one-shot policy converges to the proposed suboptimal policy in the high SNR regime for $N\gg T$. However, the equal-energy allocation policy has better performance than the one-shot policy when the number of users is small. On the other hand, Fig.~\ref{fig3:1} and Fig.~\ref{fig3:2} show that the gap between the equal-energy allocation policy and the suboptimal policy increases with the number of users, since the competition for the available resources (the time slots of the transmission window) intensifies as the number of users grows. \section{Conclusion} \label{conclusion} In this paper, we have proposed energy allocation strategies for the $N$-user fading MAC with delay and energy constraints under two different assumptions on the channel state information. In the offline allocation, a convex optimization problem is formulated with the objective of maximizing the sum-throughput of the fading MAC within the transmission window, where the optimal solution is obtained by applying the iterative water-filling algorithm. In the online allocation, the problem is formulated via dynamic programming, and the optimal solution is obtained numerically by using the discretization method when the number of users is small. 
In addition, we have proposed a suboptimal solution with reduced computational complexity that can be used when the number of users is large. Numerical results have been provided to show the superiority of the proposed algorithms compared to the equal-energy allocation and the one-shot allocation algorithms.
\section{Introduction} Saddle-point systems can be found in a variety of application fields, such as, for example, mixed finite element methods in fluid dynamics or interior point methods in optimization. An extensive overview about application fields and solution methods for this kind of problems is presented in the well-known article \cite{bgl_2005} by Benzi, Golub and Liesen. In our following study, we want to focus on an iterative solver based on the Golub-Kahan bidiagonalization: the generalized \ac{GKB} algorithm. This solver is designed for saddle-point systems, and was introduced by Arioli\cite{Ar2013}. It belongs to the family of Krylov subspace methods and, as such, relies on specific orthogonality conditions, as we will review in more detail in \Cref{sec:GKBtheory}. Enforcing these orthogonality conditions requires solving an \emph{inner problem}, i.e.~formally computing products with matrix inverses (as described in \Cref{alg:GKB}). In practice, this computation is performed with a linear system solver. For this task, we will explore in this article the use of iterative methods to serve as replacement for direct methods that have been used within \ac{GKB} so far. This is essential for very large problems, such as those coming from a discretized \ac{PDE} in 2D or 3D, when direct solvers may reach their limits. Using an inner iterative solver might also be advantageous from another point of view as we motivate in the following. The solution of large linear systems is often the bottleneck in scientific computing. The computational cost and, consequently, the execution time and/or the energy consumption can become prohibitive. For the inner-outer iterative \ac{GKB} solver in turn, the principal and costliest part is the solution of the inner system at each outer iteration. One approximate metric to measure the cost of the \ac{GKB} solver is the aggregate sum of the number of inner iterations. 
For a given setup, the cost of the \ac{GKB} method can hence be optimized by executing only a minimal number of inner iterations necessary for achieving a prescribed accuracy of the solution. To reduce this number, there are two possible steps to be taken into account. In a first step, for a given application it is often unnecessary to solve the linear system with the highest achievable accuracy. This could be the case, for example, in the solution of a discretized \ac{PDE}, when the discretization already introduces an error. A precise solution of the linear system would not improve the numerical solution with respect to the analytic solution of the \ac{PDE} any further than the discretization allows. Next, we come to the second step which will be the main point of the study in this paper. The solution of the inner linear system in the \ac{GKB} method has to be exact, in theory. If we choose a rather low accuracy for the outer iterative solver, an inner exact solution might, however, no longer be necessary, as long as the inner error does not alter the chosen accuracy of the numerical solution. This strategy results in a further reduction of the number of inner iterations, since the inner solver will converge in fewer iterations when a less strict stopping tolerance is used. In the following study, we address the case where the inner solver has a prescribed stopping tolerance and then how this limited accuracy affects the outer process and the quality of its iterates. We will show that, with the appropriate choice of parameters, it is possible to make use of inner iterative solvers without compromising the accuracy of the \ac{GKB} result. As it can be seen immediately, the lower the accuracy for the inner solver, the less expensive the \ac{GKB} method will be. Furthermore, we take advantage of the versatility of iterative methods by adapting the stopping tolerance of the inner solver dynamically. 
In other words, we prescribe the tolerance of the inner solver according to some criteria determined at each outer iteration. This can lead to a reduction of the cost, since only a minimal number of inner iterations are executed. Typically, we will reduce the required accuracy for later instances of the inner solver, since later steps of the outer \ac{GKB}-iteration may contribute less to the overall accuracy. One particular advantage of our proposed method is its generality. The strategy is independent of other choices which are problem-specific, such as the preconditioner for a Krylov method. We perform most of our tests on a relatively small Stokes flow problem, to illustrate the salient features. We confirm our findings by one final test on a larger case of the mixed Poisson problem, including the use of the augmented Lagrangian method, to demonstrate the use in a realistic scenario. Our study has a similar context as other works on inexact Krylov methods \cite{bouras2000relaxation,bouras2005inexact}, where these algorithms have been investigated from a numerical perspective. In these articles, the inexactness originates from a limited accuracy of the matrix-vector multiplication or that of the solution of a local sub-problem. Similar to what we have described above, it was found that the inner accuracy can be varied from step to step while still achieving convergence of the outer method. It was shown experimentally that the initial tolerance should be strict, then relaxed gradually, with the change being guided by the latest residual norm. Other works complemented the findings with theoretical insights, relevant to several algorithms of the Krylov family \cite{simoncini2003theory, simoncini2005relaxed,van2004inexact}. It was noted that, in some cases, unless a problem-dependent constant is included, the outer solver may fail to converge if the accuracy of the inner solution is adapted only based on the residual norm. 
This constant can be computed based on extreme singular values, as shown by Simoncini and Szyld \cite{simoncini2003theory}. Another source of inexactness can be the application of a preconditioner via an iterative method. Van den Eshof, Sleijpen and van Gijzen considered inexactness in Krylov methods originating both from matrix-vector products and variable preconditioning, using iterative methods from the GMRES family \cite{van2005relaxation}. Similarly to earlier work, their analysis relies on the connection between the residual and the accuracy of the solution to the inner problem. Since applying the preconditioner has the same effect as a matrix-vector product, the same strategies can be applied to more complex, flexible algorithms, such as those involving variable preconditioning: FGMRES \cite{saad1993flexible}, GMRESR \cite{van1994gmresr}, etc. A flexible version of the Golub-Kahan bidiagonalization is employed by Chung and Gazzola to find regularized solutions to a problem of image deblurring \cite{chung2019flexible}. In a more recent paper with the same application, Gazzola and Landman develop inexact Krylov methods as a way to deal with approximate knowledge of $\Am$ and $\Am ^T$ \cite{gazzola2021regularization}. Erlangga and Nabben construct a framework including nested Krylov solvers. They develop a multilevel approach to shift small eigenvalues, leading to a faster convergence of the linear solver \cite{erlangga2008multilevel}. In subsequent work related to multilevel Krylov methods, Kehl, Nabben and Szyld apply preconditioning in a flexible way, via an adaptive number of inner iterations \cite{kehl2019adaptive}. Baumann and van Gijzen analyze solving shifted linear systems and, by applying flexible preconditioning, also develop nested Krylov solvers \cite{baumann2015nested}. McInnes et al. 
consider hierarchical and nested Krylov methods with a small number of vector inner products, with the goal of reducing the need for global synchronization in a parallel computing setting \cite{mcinnes2014hierarchical}. Other than solving linear systems, inexact Krylov methods have been studied when tackling eigenvalue problems, as in the paper by Golub, Zhang and Zha \cite{GolZhaZha2000}. Although using different arguments, it was shown that the strategy of increasing the inner tolerance is successful for this kind of problem as well. Xu and Xue make use of an inexact rational Krylov method to solve nonsymmetric eigenvalue problems and observe that the accuracy of the inner solver (GMRES) can be relaxed in later outer steps, depending on the value of the eigenresidual \cite{xu2022inexact}. Dax computes the smallest eigenvalues of a matrix via a restarted Krylov solver which includes inexact matrix inversion \cite{dax2019restarted}. Our paper is structured as follows: in \Cref{sec:GKBtheory}, we review the theory and properties of the \ac{GKB} algorithm; in \Cref{sec:pbDesc}, we describe the specific problem we chose to use as test case for the numerical experiments; \Cref{sec:constAcc} is meant to illustrate the interactions between the accuracy of the inner solver and that of the outer one in a numerical test setting; \Cref{sec:pertErrAna} describes the link between the error of the outer solver and the perturbation induced by the use of an iterative inner solver. We describe and test our proposed strategy of using a variable tolerance parameter for the inner solver in \Cref{sec:relaxChoices}. We explore the interaction between the method of the \ac{AL} and our strategy in \Cref{sec:AL}. The final section is devoted to concluding remarks. 
\section{Generalized Golub-Kahan algorithm} \label{sec:GKBtheory} We are interested in saddle-point problems of the form \begin{align}\label{eqn:spsW} \left[ \begin{array}{cc} \Mm & \Am \\ \Am ^T & \mZ \end{array} \right] \left[ \begin{array}{c} \wv \\ \pv \end{array} \right] = \left[ \begin{array}{c} \gvv \\ \rv \end{array} \right] \end{align} with $\Mm\in \mathbb{R}^{m\times m}$ being a symmetric positive definite matrix and $\Am\in \mathbb{R}^{m \times n}$ a full rank constraint matrix. The generalized \ac{GKB} algorithm for the solution of a class of saddle-point systems was introduced by Arioli~\cite{Ar2013}. To apply it to the system (\ref{eqn:spsW}), we first need the upper block of the right-hand side to be equal to $0$. To this end, we use the transformation \begin{align} \label{eq:iniTransf} \uv &= \wv - \Mm^{-1}\gvv ,\\ \bv &= \rv - \Am^T \Mm^{-1}\gvv. \end{align} The resulting system is \begin{align}\label{eqn:sps} \left[ \begin{array}{cc} \Mm & \Am \\ \Am^T & 0 \end{array} \right] \left[ \begin{array}{c} \uv \\ \pv \end{array} \right] = \left[ \begin{array}{c} 0 \\ \bv \end{array} \right], \end{align} which is equivalent to that in \Cref{eqn:spsW}. We can recover the $\wv$ variable as $\wv = \uv + \Mm^{-1}\gvv$. Let $\Nm\in \mathbb{R}^{n\times n}$ be a symmetric positive definite matrix. To properly describe the \ac{GKB} algorithm, we need to define the following norms \begin{equation} \normM{\vvv} = \sqrt{ \vvv ^T \Mm \vvv }; \qquad \normN{\qv} = \sqrt{ \qv ^T \Nm \qv }; \qquad \normNI{\yv} = \sqrt{ \yv ^T \Nm^{-1} \yv }. \end{equation} Given the right-hand side vector $\bv \in \bR ^n$, the first step of the bidiagonalization is \begin{equation} \label{eq:iniGKBVec} \beta _1 = \normNI{\bv}, \quad \qv _1 = \Nm ^{-1} \bv / \beta _1. 
\end{equation} After $k$ iterations, the partial bidiagonalization is given by \begin{equation} \label{eq:oriGKB} \begin{cases} \Am \Qm _k = \Mm \Vm _k \Bm _k, &\qquad \Vm _k ^T \Mm \Vm _k = \Id _k \\ \Am ^T \Vm _k = \Nm \Qm _k \Bm ^T _k + \beta _ {k+1} \qv _{k+1} \ev _k^T, &\qquad \Qm _k ^T \Nm \Qm _k = \Id _k \end{cases}, \end{equation} with the bidiagonal matrix \begin{equation} \label{eq:BmatGKB} \Bm _k= \left[ \begin{matrix} \alpha_1 & \beta_2 & 0 & \ldots & 0 \\ 0 & \alpha_2 & \beta_3 & \ldots & 0 \\ \vdots & \vdots & \vdots & \vdots & \vdots \\ 0 & \ldots & 0 & \alpha_{k-1} & \beta_k \\ 0 & \ldots & 0 & 0 & \alpha_{k} \end{matrix} \right] \end{equation} and the residual term $\beta _ {k+1} \qv _{k+1} \ev _k^T$. The columns of $\Vm _k$ are orthonormal vectors with respect to the inner product and norm induced by $\Mm$, while the same holds for $\Qm _k$ and $\Nm$ respectively \begin{equation} \begin{split} & \vvv _i ^T \Mm \vvv _j = 0, \forall i \neq j; \qquad \normM{\vvv _k} = 1; \\ & \qv _i ^T \Nm \qv _j = 0, \forall i \neq j; \qquad \normN{\qv _k} = 1. \end{split} \end{equation} Prior to the normalization leading to $\vvv _k$ and $\qv _k$, the norms are stored as $\alpha _k$ for $\vvv _k$ and $\beta _k $ for $\qv _k$, as detailed in \cref{alg:GKB}. Using $\Vm _k$, $\Qm _k$ and the relations in \Cref{eq:oriGKB}, we can transform the system from \Cref{eqn:sps} into a simpler form \begin{equation}\label{eq:transfSys} \left[ \begin{matrix} \Id _k& \Bm _k \\ \Bm _k ^T & \mZ \end{matrix} \right] % \left[ \begin{matrix} \zv _k \\ \yv _k \end{matrix} \right] = \left[ \begin{matrix} \mZ \\ \Qm _k ^T \bv \end{matrix} \right]. \end{equation} With the choice for $\qv _1$ given in \Cref{eq:iniGKBVec}, we have that $\Qm _k ^T \bv = \beta _1 \ev _1 $. 
The solution components to \Cref{eq:transfSys} are then given by \begin{equation} \label{eq:zandy} \zv _k= \beta _1 \Bm _k ^{-T} \ev _1; \quad \yv _k= - \Bm _k ^{-1} \zv _k, \end{equation} where $ \Bm _k ^{-T}$ is the inverse of $ \Bm _k ^{T}$. We can build the $k$-th approximate solution to \Cref{eqn:sps} as \begin{equation} \label{eq:GKBapx} \uv _k = \Vm _k \zv _k; \quad \pv _k = \Qm _k \yv _k. \end{equation} In particular, after $k=n$ steps and assuming exact arithmetic, we have $\uv _k = \uv$ and $\pv _k = \pv $, meaning we have found the exact solution to \Cref{eqn:sps}. A proof of why $n$ terms are sufficient to find the exact solution is given in the introductory paper by Arioli \cite{Ar2013}. This corresponds to a scenario where it is necessary to perform all $n$ iterations, although, for specific problems with particular features, the solution may be found after fewer steps. As $ k \rightarrow n$, the quality of the approximation improves ($\uv _k \rightarrow \uv$ and $\pv _k \rightarrow \pv $), with the bidiagonalization residual $\beta _ {k+1} \qv _{k+1} \ev _k^T$ vanishing for $k=n$. Given the structure of $\beta _1 \ev _1$ and $ \Bm ^{T}$, we find \begin{equation} \label{eq:zetaDef} \zeta _1= \frac{\beta _1}{\alpha _1}, \quad \zeta _k = -\zeta _{k-1} \frac{\beta _k}{\alpha _k}, \quad \zv _k = \left[ \begin{matrix} \zv _{k-1} \\ \zeta _k \end{matrix} \right] \end{equation} in a recursive manner. Then, $\uv _k$ is computed as $\uv_k = \uv _{k-1} + \zeta _k \vvv _k $. In order to obtain a recursive formula for $\pv$ as well, we introduce the vector \begin{equation} \dv _k= \frac{\qv _k - \beta _k \dv _{k-1}}{\alpha _k}, \quad \dv _1 = \frac{\qv _1}{\alpha _1}. \end{equation} Finally, the update formulas are \begin{equation} \label{eq:GKBupdate} \uv _k= \uv _{k-1} + \zeta _k \vvv _k , \quad \pv _k= \pv _{k-1} - \zeta _k \dv _k . \end{equation} At step $k$ of \Cref{alg:GKB}, we have the following error in the energy norm. 
\begin{equation} \label{eq:errExact} \begin{split} \normM{\ev _k} ^2 &= \normM{ \uv _k - \uv } ^2 = \normM{ \Vm _k \zv _k - [ \Vm _k \Vm _{n-k}] \left[ \begin{matrix} {\zv _k} \\ {\zv _{n-k}} \end{matrix} \right] } ^2 \\ &= \normM{ \Vm _{n-k} \zv _{n-k} } ^2 = \normEu{ \zv _{n-k}} ^2 = \sum_{i=k+1}^{n} \zeta _i ^2 \end{split} \end{equation} In the last line, we have made use of the $\Mm$-orthonormality of the $\Vm$ matrices. If we truncate the sum above to only its first $d$ terms, we get a lower bound on the energy norm of the error. The subscript $d$ stands for \textit{delay}, because we can compute this lower bound corresponding to a given step $k$ only after an additional $d$ steps \begin{equation} \label{eq:lowBnd} \xi ^2 _{k,d} = \sum_{i=k+1}^{k+d} \zeta _i ^2 < \normM{\ev _k} ^2. \end{equation} With this bound for the absolute error, we can devise one for the relative error in \Cref{eq:lowBndRel}, which is then used as the stopping criterion in \Cref{alg_line:GKBconvCheck} of \Cref{alg:GKB}. \begin{equation} \label{eq:lowBndRel} \bar \xi ^2 _{k,d} = \frac{ \sum_{i=k-d+1}^{k} \zeta _i ^2 }{ \sum_{i=1}^{k} \zeta _i ^2 } . \end{equation} The \ac{GKB} algorithm has the following error minimization property. Let $\cV _k = \operatorname{span} \{\vvv _1, ..., \vvv _k\}$ and $\cQ _k = \operatorname{span} \{\qv _1, ..., \qv _k\}$. Then, for any step $k$, the minimum \begin{equation} \label{eq:errMinProp} \min_{\underset{(\Am ^T \uv _k - \bv ) \perp \cQ _k}{ \uv _k \in \cV _k,} } \normM{ \uv - \uv _k } \end{equation} is attained by $\uv _k$ as computed by \Cref{alg:GKB}. For brevity and because the \ac{GKB} algorithm features this minimization property for the primal variable, our presentation will focus on the velocity for Stokes problems. The stopping criteria for our proposed algorithmic strategies rely on approximations of the velocity error norm.
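The delayed relative estimate in \Cref{eq:lowBndRel} reduces to a few lines of code. The sketch below illustrates the formula only; the function name and the fallback value of $1$ for the first $d$ steps are our choices, mirroring the initialization of \Cref{alg:GKB}:

```python
import math

def relative_lower_bound(zetas, d):
    """Delayed relative error estimate: the last d coefficients
    zeta_{k-d+1}, ..., zeta_k relative to the full history."""
    k = len(zetas)
    if k <= d:
        return 1.0  # not enough history yet, as in Algorithm 1
    num = sum(z * z for z in zetas[-d:])
    den = sum(z * z for z in zetas)
    return math.sqrt(num / den)

# with a geometrically decreasing |zeta| sequence the estimate decays quickly
history = [0.5 ** i for i in range(10)]
estimate = relative_lower_bound(history, d=3)
```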
For all the numerical experiments that we have performed, the pressure error norm is close to that of the velocity (less than an order of magnitude apart). In the cases where we operate on a different subspace, as a result of preconditioning, we find that the pressure error norm is actually smaller than that for the velocity. In the case where the dual variable is just as important as the primal, one can use a monolithic approach, such as applying MINRES to the complete saddle-point system. The \ac{GKB} (as implemented by \Cref{alg:GKB}) is a nested iterative scheme in which each outer loop involves solving an inner linear system. According to the theory given in the paper by Arioli \cite{Ar2013}, the matrices $\Mm$ and $\Nm$ have to be inverted exactly in each iteration. We can choose $ \Nm=\frac{1}{\eta} \Id$, whose inversion reduces to a scalar multiplication. In the following sections, unless otherwise specified, we consider $\eta =1 $. On the other hand, the matrix $\Mm$ depends on the underlying differential equations or the problem setting in general. As long as the matrix $\Mm$ is of moderate size, a robust direct solver can be used. For large problems, however, a direct solution might no longer be possible and an iterative solver will be required. At this point, we face two problems. First, depending on the application, inverting $\Mm$ might be more or less costly. Second, to achieve a solution quality close to machine precision, an iterative solver might require a considerable number of iteration steps.
\begin{algorithm} \caption{Golub-Kahan bidiagonalization algorithm} \label{alg:GKB} \begin{algorithmic}[1] \Require{$\Mm , \Am , \Nm, \bv$, maxit} \State{$\beta_1 = \|\bv\|_{\Nm^{-1}}$; $\qv_1 = \Nm^{-1} \bv / \beta_1$} \State{$\wv = \Mm^{-1} \Am \qv_1$; $\alpha_1 = \|\wv\|_{\Mm}$; $\vvv_1 = \wv / \alpha_1$} \State{$\zeta_1 = \beta_1 / \alpha_1$; $\dv_1=\qv_1/ \alpha_1$; $\uv^{(1)} =\zeta_{1} \vvv_{1}$; $\pv^{(1)} = - \zeta_1 \dv_1$; } \State{$\bar \xi _{1,d} = 1;$ $k = 1;$} \While{ \textcolor{black}{ $\bar \xi _{k,d} > $ tolerance } and $k < $ maxit } \State{$\gvv = \Nm^{-1} \left( \Am^T \vvv_k - \alpha_k \Nm \qv_k \right) $; $\beta_{k+1} = \|\gvv\|_{\Nm}$} \State{$\qv_{k+1} = \gvv / {\beta_{k+1}}$} \State{ \textcolor{black}{ $\wv = \Mm^{-1} \left( \Am \qv_{k+1} - \beta_{k+1} \Mm \vvv_{k} \right)$; } $\alpha_{k+1} = \|\wv\|_{\Mm}$} \label{alg_line:innerGKBproblem} \State{$\vvv_{k+1} = \wv / {\alpha_{k+1} }$} \State{$\zeta_{k+1} = - \dfrac{\beta_{k+1}}{\alpha_{k+1}} \zeta_k$} \State{$\dv_{k+1} = \left( \qv_{k+1} - \beta_{k+1} \dv_k \right) / \alpha_{k+1} $} \State{$\uv^{(k+1)} = \uv^{(k)} + \zeta_{k+1} \vvv_{k+1}$; $\pv^{(k+1)} = \pv^{(k)} - \zeta_{k+1} \dv_{k+1}$} \State{$k = k + 1$} \If{$k>d$} \State{\textcolor{black}{ $\bar \xi _{k,d} = \sqrt{ \sum_{i=k-d+1}^{k} \zeta _i ^2 / \sum_{i=1}^{k} \zeta _i ^2 } $} } \label{alg_line:GKBconvCheck} \EndIf{} \EndWhile{} \Return $\uv^{(k)}, \pv^{(k)}$ \end{algorithmic} \end{algorithm} In \Cref{alg_line:innerGKBproblem} of \Cref{alg:GKB}, we have the application of $\Mm^{-1} $ to a vector, which represents what we call the \textit{inner problem}. Typically, this is implemented as a call to a direct solver using the matrix $\Mm $ and the vector $\Am \qv_{k+1} - \beta_{k+1} \Mm \vvv_{k}$ as the right-hand side. The main contribution of this work is a study of the behavior exhibited by \Cref{alg:GKB} when we replace the direct solver employed in \Cref{alg_line:innerGKBproblem} by an iterative one.
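To make the structure of \Cref{alg:GKB} concrete, the following self-contained NumPy sketch runs it with $\Nm = \Id$ and a direct inner solve on a small random saddle-point system; all dimensions and data are purely illustrative. Since the dual dimension is $n$, running $n$ outer steps recovers the solution of the full system up to roundoff:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 8, 3
G = rng.standard_normal((m, m))
M = G @ G.T + m * np.eye(m)        # SPD (1,1)-block
A = rng.standard_normal((m, n))    # full-column-rank coupling block (m x n)
b = rng.standard_normal(n)         # right-hand side of the dual equation

def m_norm(x):
    return np.sqrt(x @ (M @ x))    # energy norm induced by M

# initialization, N = I (so N-norms are Euclidean norms)
beta = np.linalg.norm(b)
q = b / beta
w = np.linalg.solve(M, A @ q)      # inner problem: application of M^{-1}
alpha = m_norm(w)
v = w / alpha
zeta = beta / alpha
d = q / alpha
u = zeta * v
p = -zeta * d

for _ in range(n - 1):             # n outer steps suffice in exact arithmetic
    g = A.T @ v - alpha * q
    beta = np.linalg.norm(g)
    q = g / beta
    w = np.linalg.solve(M, A @ q) - beta * v   # inner problem again
    alpha = m_norm(w)
    v = w / alpha
    zeta = -(beta / alpha) * zeta
    d = (q - beta * d) / alpha
    u = u + zeta * v
    p = p - zeta * d

# reference: direct solve of the full saddle-point system [[M, A], [A^T, 0]]
K = np.block([[M, A], [A.T, np.zeros((n, n))]])
sol = np.linalg.solve(K, np.concatenate([np.zeros(m), b]))
assert np.allclose(u, sol[:m]) and np.allclose(p, sol[m:])
```

Replacing `np.linalg.solve(M, ...)` by an iterative solver at the two marked lines is exactly the modification studied in the remainder of this work.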
In particular, for a target accuracy of the final \ac{GKB} iterate, we want to minimize the total number of inner iterations. Our choice for the inner solver is the unpreconditioned \ac{CG} algorithm, for its simplicity and relative generality. The strategies we propose in the subsequent sections do not rely on any specific feature of this inner solver, and are meant to be applicable regardless of this choice. We are interested in reducing the total number of inner iterations in a relative and general manner. This is why we do not take preconditioning for \ac{CG} into account, which is usually problem-dependent. We measure the effectiveness of our methods based on the percentage of inner iterations saved when compared against a scenario to be described in more detail in the following sections. \section{Problem description} \label{sec:pbDesc} As a test problem, we use a 2D Stokes flow in a rectangular channel domain $\Omega = \left[-1, L \right] \times \left[-1, 1 \right]$, given by \begin{equation} \label{eq:contStokes} \begin{aligned} - \Delta \vec{u} + \nabla p &= 0 \\ \nabla \cdot \vec{u} &=0. \end{aligned} \end{equation} More specifically, we consider the Poiseuille flow problem, i.e. a steady Stokes problem with the exact solution \begin{equation} \label{eq:contStokesSol} \begin{cases} u_x = 1- y^2, \\ u_y = 0, \\ p = -2x + \text{constant}. \end{cases} \end{equation} The boundary conditions are given as a Dirichlet condition on the inflow $\Gamma_{in}= \left\{-1\right\} \times \left[-1, 1 \right]$ (left boundary) and no-slip conditions on the top and bottom walls $\Gamma_{c}= \left[-1, L \right] \times \left\{-1\right\} \cup \left[-1, L \right] \times \left\{1\right\} $. The outflow at the right boundary $\Gamma_{out}= \left\{L \right\} \times \left[-1, 1 \right]$ is represented as a Neumann condition \begin{equation*} \begin{split} \frac{\partial u_x}{\partial x} - p &= 0 \\ \frac{\partial u_y}{\partial x} &= 0.
\end{split} \end{equation*} We use Q2-Q1 Finite Elements as discretization method. Our sample matrices are generated by the Incompressible Flow \& Iterative Solver Software (IFISS)\footnote{\url{http://www.cs.umd.edu/~elman/ifiss3.6/index.html}} package \cite{ers07}, see the book by Elman et al. \cite{elman2014finite} for a more detailed description of this reference Stokes problem. \begin{figure} \centering \begin{tikzpicture} \begin{axis} [ width=\FigWid \textwidth, height=\FigHei \textwidth, minor tick num=3, grid=both, xtick = {-1,0,...,5}, ytick = {-1,0,...,1}, xlabel = $x$, ylabel = $y$, ticklabel style = {font = \scriptsize}, % colormap/jet, colorbar, % enlargelimits=false, axis on top, axis equal image ] \addplot [forget plot] graphics[xmin=-1,xmax=5,ymin=-1,ymax=1] {IFISSCh5SolRaw.png}; \end{axis} \end{tikzpicture} \caption{ Exact solution to the Stokes problem in a channel of length 5. Plotted is the $1-y^2$ function, which represents the $x$ direction velocity, overlaid with the mesh resulting from the domain discretization (Q2-Q1 Finite Elements Method). } \label{fig:convIFISSCh5SolMesh} \end{figure} We first illustrate some particular features shown by \ac{GKB} for this problem. We use a direct inner solver here, before discussing the influence of an iterative solver in subsequent sections. In \Cref{fig:convIFISSChL}, we plot the convergence history for several channels of different lengths, which leads us to noticing the following details. The solver starts with a period of slow convergence, visually represented by a plateau, the length of which is proportional to the length of the channel. The rest of the convergence curve corresponds to a period of superlinear convergence, a phenomenon also known for other solvers of the Krylov family, such as \ac{CG}. The presence of this plateau is especially relevant for our proposed strategies and, since it appears for each channel, we can conclude it is a significant feature of this class of channel problems. 
In the following numerical examples, we choose $L=20$, which gives a domain of length 21 units. \import{images/pgf/}{convIFISSChL.tex} \section{Constant accuracy inner solver} \label{sec:constAcc} Similar to what has been described by Golub et al. \cite{GolZhaZha2000} for solving eigenvalue problems, we have observed that when using an iterative method as an inner solver, its accuracy has a clear effect on the overall accuracy of the outer solver (see \Cref{fig:constCGtol}). We solve the channel problem described in \Cref{sec:pbDesc} with various configurations for the tolerance of the inner solver, and plot the resulting convergence curves in \Cref{fig:constCGtol}. The outer solver is always \ac{GKB} with a $10^{-7}$ tolerance. The cases we show are: a direct inner solver, three choices of constant inner solver tolerance ($10^{-3}$, $10^{-7}$ and $10^{-8}$), and a final case using a low-accuracy solver ($10^{-3}$) for the first two iterations only, followed by a high-accuracy one ($10^{-14}$). The stopping criterion for the \ac{GKB} algorithm is a delayed lower bound estimate for the energy norm of the primal variable (see \Cref{eq:lowBnd}). As such, \ac{GKB} with a direct inner solver performs a few extra steps, achieving a higher accuracy than the one required, here around $10^{-8}$. Notice how the outer solver cannot achieve a higher accuracy than that of the inner solver; it stops reducing the error even before reaching the same accuracy as the inner solver. Replacing the exact inner solver by a \ac{CG} method with a constant tolerance of $10^{-8}$ leads to a convergence process where the error norm eventually reaches a value just below the target accuracy of $10^{-7}$ and does not decrease further. This highlights the fact that the inner solver does not need to be exact in order for \ac{GKB} to converge to the required solution.
For this Poiseuille flow example, however, the inner solver must be at least one order of magnitude more precise than the outer one. In the last case examined here, we want to see if early imprecise iterations can be compensated for later by others having a higher accuracy. This strategy of increasing accuracy has been found to work, e.g., in the case of the Newton method for nonlinear problems \cite{dembo1982inexact}. We tested the case where the first two iterations of \ac{GKB} use an inner solver with tolerance $10^{-3}$, while all subsequent inner iterations employ a tolerance of $10^{-14}$. The resulting curve shows a convergence history rather similar to the case where \ac{CG} has a constant tolerance of $10^{-3}$. The outer process cannot reduce the error norm below $10^{-3}$, despite the fact that the bulk of the iterations employ a high-accuracy inner solver. This corresponds to what was observed by Golub et al. \cite{GolZhaZha2000} for solving eigenvalue problems. \import{images/pgf/}{constCGtol.tex} An interesting observation is that all the curves in \Cref{fig:constCGtol} overlap in their initial iterations, until they start straying from the apparent profile, eventually leveling off. In \Cref{sec:pertErrAna}, we analyze the causes leading to these particular behaviors and link them to the accuracy of the inner solver. \section{Perturbation and error study} \label{sec:pertErrAna} In this section we describe how the error associated with the iterates of \Cref{alg:GKB} behaves if we use an iterative solver for the systems involving $\Mm ^{-1}$. We can think of the approximate solutions of these inner systems as perturbed versions of those we would get when using a direct solver. The error is then characterized in terms of this perturbation, and the implications motivate our algorithmic strategies given in the subsequent sections. With this characterization, we can also explain the results in \Cref{sec:constAcc}.
The use of an iterative inner solver directly affects the columns of the $\Vm$ matrix. In the following, $\Vm$ denotes the unperturbed matrix, with $\Em _{\Vm}$ being the associated perturbation matrix. In particular, we are interested in the $\Mm$ norm of the individual columns of $\Em _{\Vm}$, which gives us an idea of how far we are from the \enquote{ideal} columns of $\Vm$. Changes in the $\vvv$ and $\qv$ vectors also have an impact on their respective norms $\alpha$ and $\beta$, which shift away from the values they would normally have with a direct inner solver. In turn, these changes propagate to the coefficients $\zeta$ used to update the iterates $\uv$ and $\pv$. Our observations concern the $\zv$ vector, its perturbation $\ev _{\zv}$ and their effect on the error of the primal variable $\uv$ measured in the $\Mm$ norm. The entries of $\zv$ change sign every iteration, but we will only consider them in absolute value, as it is their magnitude which is important. In the following, we will denote perturbed quantities with a hat. \subsection{High initial accuracy followed by relaxation} \label{subsec:theoPrecRelax} In this subsection, we take a closer look at the interactions between the perturbation and the error. For us, perturbation is the result of using an inexact inner solver and represents a quantity which can prevent the outer solver from reducing the error below a certain value. The error itself needs to be precisely defined, as it may contain several components, each minimized by a different process. Because we focus on the difference between the perturbed and the unperturbed \ac{GKB}, sources of error that affect both versions, such as the round-off error, are not included in the following discussion. 
According to the observations by Jir{\'a}nek and Rozlo{\v{z}}n{\'\i}k, the accuracy of the outer solver depends primarily on that of the inner solver, since the perturbations introduced by an iterative solver dominate those related to finite-precision arithmetic \cite{jiranek2008maximum}. We take the exact solution $\uv$ to be equal to $\uv _n$, the $n$-th iterate of the unperturbed \ac{GKB} with exact arithmetic. At step $k$ of the \ac{GKB}, we have the error \begin{align} \normM{\ev _k} &= \normM{ \hat \uv _k - \uv }, \end{align} where $\hat \uv _k$ is the current approximate solution and $\uv $ is the exact one. Both can be written as linear combinations of columns from $\Vm$ with coefficients from $\zv$. Let $\hat \uv _k$ come from an inexact version of \Cref{alg:GKB}, where the solution of the inner problem (a matrix-vector product with $\Mm ^{-1}$) includes perturbations. The term $ \uv = \Vm _n \zv _n$ is available after $n$ steps of \Cref{alg:GKB} in exact arithmetic, without perturbations. We separate the first $k$ terms, which have been computed, from the remaining ($n-k$). \begin{equation} \label{eq:errPert} \begin{split} \normM{\ev _k} &= \normM{ \hat \uv _k - \uv } = \normM{ ( \Vm _k + \Em _{\Vm} ) ( \zv _k + \ev _{\zv} ) - [ \Vm _k \Vm _{n-k}] \left[ \begin{matrix} {\zv _k} \\ {\zv _{n-k}} \end{matrix} \right] } \\ &= \normM{ \Em _{\Vm} \zv _k + \Em _{\Vm} \ev _{\zv} + \Vm _k \ev _{\zv} - \Vm _{n-k} \zv _{n-k} } \\ & \leq \normM{ \Em _{\Vm} \zv _k } + \normM{ \Em _{\Vm} \ev _{\zv} } + \normEu{ \ev _{\zv} } + \normEu{ \zv _{n-k}} \end{split} \end{equation} In the last line, we have used the triangle inequality together with the $\Mm$-orthonormality of the columns of the $\Vm$ matrices, which gives $\normM{ \Vm _k \ev _{\zv} } = \normEu{ \ev _{\zv} }$ and $\normM{ \Vm _{n-k} \zv _{n-k} } = \normEu{ \zv _{n-k} }$. In the case of a direct inner solver, we can leave out the perturbation terms, recovering the result $\normM{\ev _k} ^2 = \normEu{ \zv _{n-k}} ^2 = \sum_{i=k+1}^{n} \zeta _i^2 $ given by Arioli \cite{Ar2013}.
This is simply the error coming from approximating $\uv $ (a linear combination of $n$ $\Mm$-orthogonal vectors) by $ \uv _k $ (a linear combination of only $k$ such vectors). This term decreases as we perform more steps of \Cref{alg:GKB} ($ k \rightarrow n $). By truncating the sum $\sum_{i=k+1}^{n} \zeta _i ^2$, we obtain a lower bound for the squared error. The remaining three terms in \Cref{eq:errPert} include the perturbation coming from the inexact inner solution. Our goal is to minimize the total number of iterations of the inner solver, so we are interested in knowing how large these terms can be allowed to be, such that we still recover a final solution of the required accuracy. The answer is to keep them just below the final value of the fourth term, $\normEu{ \zv _{n-k}}$, i.e., below the acceptable algebraic error. If they are larger, the final accuracy will suffer. If they are significantly smaller, then our inner solver is unnecessarily precise and expensive. The following observations rely on the behavior of the $\zv$ vector. At each iteration, this vector gains an additional entry, while leaving the previous ones unchanged. These entries form a (mostly) decreasing sequence and have a magnitude below 1 when reaching the superlinear convergence phase. Unfortunately, we cannot yet provide a formal proof of these properties, but they have consistently reappeared in our numerical experiments, which encourages us to rely on them when motivating our approach. These properties appear in both cases, with and without perturbation. The decrease in the entries of the coefficient vector used to build the approximation has also been observed and described for other Krylov methods (see references \cite{van2004inexact,simoncini2003theory,simoncini2005relaxed}). Their context is that of inexact matrix-vector products, which is another way of viewing our case.
The fact that new entries of $\zv$ are simply appended to the old ones and that they are smaller than one is linked to the particular construction specific to \ac{GKB}. Back to \Cref{eq:errPert}, let us assume the perturbation at each iteration is constant, i.e. the $\Mm$ norm of each column of $\Em _{\Vm}$ is equal to the same constant. Then, the vector $\Em _{\Vm} \zv _k$ will be a linear combination of perturbation vectors with coefficients from $\zv _k$. Following our observations concerning the entries of $\zv _k$, the first terms of the linear combination will be the dominant ones, with later terms contributing less and less to the sum. If the perturbation of the first $\vvv$ has an $\Mm$ norm below our target accuracy, the term $ \normM{ \Em _{\Vm} \zv _k } $ will never contribute to the error. We can allow the $\Mm$ norm of the columns of $\Em _{\Vm}$ to increase, knowing the effect of the perturbation will be reduced by the entries of $\zv$, which are decreasing and less than one. The \ac{GKB} solution can be computed in a less expensive way, as long as the term $\normM{ \Em _{\Vm} \zv _k }$ is kept below our target accuracy. The perturbation should initially be small, then allowed to increase proportionally to the decrease of the entries in $\zv$. Next, we describe the terms including $\ev _{\zv}$. Let the following define the perturbed entries of $\hat \zv$ \begin{equation*} \hat \zeta _k = - \hat \zeta _{k-1} \frac{\hat \beta _k}{\hat \alpha _k} = - \hat \zeta _{k-1} ( \frac{ \beta _k}{ \alpha _k} + \epsilon _k). \end{equation*} The term $\epsilon _k$ is the perturbation introduced at iteration $k$, coming from the shifted norms associated with $\qv _k$ and $\vvv _k$. This term is then multiplied by $\hat \zeta _{k-1}$ which, according to our empirical observations, decreases at (almost) every step. 
If we assume $\epsilon _k$ is constant, the entries of $\ev _{\zv}$ decrease in magnitude and the norm $ \normEu{ \ev _{\zv} }$ is mostly dominated by the first vector entry. The strategy described for the term $ \normM{ \Em _{\Vm} \zv _k } $ also keeps $ \normEu{ \ev _{\zv} }$ small. We start with a perturbation norm below the target accuracy, to ensure the quality of the final iterate. Gradually, we allow an increase in the perturbation norm proportional to the decrease of $\hat \zeta _k$ to reduce the costs of the inner solver. Finally, since the vector $\ev _{\zv} $ decreases similarly to $ \zv $, the term $ \normM{ \Em _{\Vm} \ev _{\zv} }$ can be described in the same way as $\normM{ \Em _{\Vm} \zv _k }$. We close this section by emphasizing the important role played by the first iterations and how the initial perturbations can affect the accuracy of the solution. Notice that the perturbation terms included refer to all the $k$ steps, not just the latest one. Relaxation strategies that start with a low accuracy and gradually increase it are unlikely to work for \ac{GKB} and other algorithms with similar error minimization properties. Since the first vectors computed are the ones that contribute the most to reducing the error, they should be determined as precisely as possible. Even if a perturbed iteration is followed exclusively by very accurate ones, this will not prevent the perturbation from being transmitted to all the subsequent vectors, where it can potentially be amplified by matrix multiplications and floating-point error. With these observations in mind, we can understand the results in \Cref{sec:constAcc}. These findings are in line with those concerning other Krylov methods in the presence of inexactness (see Section 11 of the survey by Simoncini and Szyld \cite{simoncini2007recent} and the references therein).
\ac{GKB} is not the only method which benefits from lowering the accuracy of the inner process, and the reason why this is possible is linked to the decreasing entries of the coefficient vector. \section{Relaxation strategy choices} \label{sec:relaxChoices} We have seen in \Cref{subsec:theoPrecRelax} that we can allow the perturbation norm to increase in a safe way, as long as the process is guided by the decrease of $ \abs{ \hat \zeta } $. This means that we can adapt the tolerance of the inner solver, such that each call becomes progressively cheaper, without compromising the accuracy of the final \ac{GKB} iterate. Then, at step $k$ we can call the inner solver with a tolerance equal to $\tau / f(\zeta)$. The scalar $\tau$ represents a constant chosen as either the target accuracy for the final \ac{GKB} solution, or something stricter, to counteract possible losses coming from floating-point arithmetic. The function $f$ is chosen based on the considerations described below, with the goal of minimizing the number of inner iterations. A similar relaxation strategy was used in a numerical study by Bouras and Frayss\'e \cite{bouras2005inexact} to control the magnitude of the perturbation introduced by performing inexact matrix-vector products. They employ Krylov methods with a residual norm minimization property, so the proposed criterion divides the target accuracy by the latest residual norm. In our case, because of the minimization property in \Cref{eq:errMinProp}, we need to use the error norm instead of the residual, since it is the only quantity which is strictly decreasing. Since the actual error norm is unknown, we rely on approximations obtained via $\zeta$. Considering the error characterization of the unperturbed process $\normM{\ev _k} ^2 = \sum_{i=k+1}^{n} \zeta _i ^2$, we can approximate the error by the first term of the sum, which is the dominant one.
However, when starting iteration $k$ we do not yet know $\zeta _{k+1}$, nor even $\zeta _{k}$, so we cannot choose the tolerance of the inner solver required to compute $\uv _k$ based on these values. What we can do is predict them via extrapolation, using information from the known values $\zeta _{k-1}$ and $\zeta _{k-2}$. We know that in general $ \frac{\beta _k}{\alpha _k} = \abs{ \frac{\zeta _k}{\zeta _{k-1}} } $ acts as a local convergence factor for the $\abs{\zeta }$ sequence. We approximate the factor for step $k$ by using the previous one, $\frac{\zeta _{k-1}}{\zeta _{k-2}}$. Then, we can compute the prediction $\tilde \zeta _k := \zeta _{k-1} \frac{\zeta _{k-1}}{\zeta _{k-2}}$. By squaring the local convergence factor, we get an approximation for $ \zeta _{k+1} $ as $\tilde \zeta _{k+1} := \zeta _{k-1} \left( \frac{\zeta _{k-1}}{\zeta _{k-2}} \right) ^2$, which we can use to approximate $ \normM{\ev _k} $ and adapt the tolerance of the inner solver. In practice, we only consider processes which include perturbation, and assume we have no knowledge of the unperturbed values $\abs{\zeta }$. As such, for better readability, we drop the hat notation, with the implicit convention that we are referring to values which do include perturbation, and use them in the extrapolation rule above. For some isolated iterations, it is possible that $\abs{\zeta _k } \geq \abs{\zeta _{k-1} } $. This behavior is then amplified through extrapolation, potentially leading to even larger values. In turn, this can cause an increase in the accuracy of the inner solver, due to a stricter value for the tolerance parameter $\tau / f(\zeta)$. In \Cref{subsec:theoPrecRelax}, we have shown that there is no benefit in increasing this accuracy. The new perturbation would be smaller in norm, but the error $\normM{\ev _k}$ would remain dominated by the previous, larger perturbation.
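The extrapolation step above can be sketched in a few lines (the sample values for $\zeta_{k-2}$ and $\zeta_{k-1}$ are purely illustrative):

```python
def predict_zetas(zeta_km2, zeta_km1):
    """Extrapolate |zeta_k| and |zeta_{k+1}| from the last two known
    coefficients, reusing the previous local convergence factor."""
    rate = abs(zeta_km1) / abs(zeta_km2)
    return abs(zeta_km1) * rate, abs(zeta_km1) * rate ** 2

tau = 1e-8                              # base tolerance for the inner solver
zt_k, zt_k1 = predict_zetas(4e-3, 1e-3)
inner_tol = tau / zt_k1                 # relaxed tolerance tau / |zeta~_{k+1}|
```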
As such, we propose computing several candidate values for the stopping tolerance of the inner solver, and choosing the one with the maximum value. Since these are only scalar quantities, the associated computational effort is negligible, but a well-chosen tolerance sequence can lead to significant savings in the total number of inner iterations. The candidate values are: \begin{equation} \label{eq:relaxChoices} \begin{cases} \text{the value at the previous step}, \\ \tau / \abs{ \zeta _{k-1} } , \\ \tau / \abs{ \tilde \zeta _{k} } , \\ \tau / \abs{ \tilde \zeta _{k+1} } . \end{cases} \end{equation} To prevent a limitless growth of the tolerance parameter, we impose a maximum value of $ 0.1 $. All these choices are safe, in the sense that they do not introduce perturbations which prevent the outer solver from reaching the target accuracy. We proceed by testing these relaxation strategies on the problem described in \Cref{sec:pbDesc}. The initial tolerance for \ac{CG} is set to $\tau = 10^{-8}$, one order of magnitude more precise than the one set for \ac{GKB}. As a baseline for comparison, we first keep the tolerance constant, equal to $\tau$. Then, we introduce adaptivity using $\tau / \abs{ \zeta_{k-1} }$. The third case changes the tolerance according to $\tau / \abs{ \tilde \zeta _{k+1} }$, the latter term being a predicted approximation of the current error. Finally, we employ a hybrid approach, where all candidate values in \Cref{eq:relaxChoices} are computed, but only the largest one is used. In the legends of the following plots, these four cases are labeled \ConstantCase, \AdaptiveCase, \PredictedCase, and \HybridCase, respectively. To monitor \ac{GKB} convergence, we track the lower bound for the energy norm of the error corresponding to the primal variable given in \Cref{eq:lowBnd}. For easy reference, all the choices used and their respective labels are given below. We define $\tau = 10^{-8}$.
\begin{align} (\mathtt{\ConstantCase}) &: \tau, \label{eq:cst}\\ (\mathtt{\AdaptiveCase}) &: \nicefrac{\tau}{\abs{ \zeta _{k-1} }} , \label{eq:z}\\ (\mathtt{\PredictedCase}) &: \nicefrac{\tau}{\abs{ \tilde \zeta _{k+1} }} , \label{eq:adasquare}\\ (\mathtt{\HybridCase}) &: \max\left\{\nicefrac{\tau}{\abs{ \zeta _{k-1} }}, \nicefrac{\tau}{\abs{ \tilde \zeta _{k} }}, \nicefrac{\tau}{\abs{ \tilde \zeta _{k+1} }}, \text{previous value} \right\} , \label{eq:hybrid}\\ (\mathtt{\OptimalCase}) &: \nicefrac{\tau}{ ( \text{parameter} \cdot \abs{ \zeta _{k-1} } )} . \label{eq:zOptim} \end{align} Only the last scenario above, \OptimalCase{}, remains to be explained. To see if the parameter-free choices can be improved, we run one more case which includes adaptivity via $\abs{ \zeta _{k-1} }$, combined with a constant parameter tuned experimentally. This is motivated by the fact that the considerations leading to \Cref{eq:relaxChoices} rely mostly on approximations and inequalities, which means we have an over-estimate of the error. It may be possible to reduce the total number of iterations further by including an (almost) optimal, problem-dependent constant. The goal is to find a sequence of tolerance parameters with terms that are as large as possible, while guaranteeing the accuracy of the final \ac{GKB} iterate. All the results are given in \Cref{tab:varTolZ} and \Cref{fig:originallower bound}. \HybridCase{} offers the highest savings among the parameter-free choices (30\%), but \OptimalCase{}, the test with the problem-dependent constant, reveals that we can still improve this performance by about 6\%. \import{images/pgf/lowBndVScumulCG}{original} \begin{table} \centering \caption{Reduction of the total number of \ac{CG} iterations. The \ac{CG} tolerance is relaxed according to \Cref{eq:cst,eq:z,eq:zOptim,eq:adasquare,eq:hybrid}. The parameter in \OptimalCase{} is $0.05$.
} \begin{tabular}{cccccc} \ac{CG} tolerance & \ConstantCase & \AdaptiveCase & \PredictedCase & \HybridCase & \OptimalCase \\ \ac{CG} iterations & 6963 & 5115 & 4897 & 4873 & 4399 \\ Savings \% & - & 26.54 & 29.67 & 30.02 & 36.82 \end{tabular} \label{tab:varTolZ} \end{table} \subsection{Increasing the savings by working on a simplified problem} \label{subsec:simple} Considering the observations in \Cref{subsec:theoPrecRelax} and the results plotted in \Cref{fig:originallower bound}, we can significantly reduce the accuracy of the inner solver only when the outer solver is in a superlinear convergence phase, when the $\abs{\zeta }$ sequence decreases rapidly. How much we can relax depends on the slope of the convergence curve. As such, to obtain the maximum reduction of the total number of iterations, the problem needs to be simplified, such that the convergence curve is as steep as possible and has no plateau. It is common to pair Krylov methods with other strategies, such as preconditioning, in order to improve their convergence behavior. The literature on these kinds of approaches is rich \cite{loghin2003schur,loghin2004analysis,bgl_2005,olshanskii2010acquired}. The following tests quantify how beneficial the interaction between our proposed relaxation scheme and these other strategies is. It has been shown by Arioli and Orban that the \ac{GKB} applied to the saddle-point system is equivalent to the \ac{CG} algorithm applied to the Schur complement equation \cite[Chapter~5]{orban2017iterative}. As such, the first step towards accelerating \ac{GKB} is to consider the Schur complement, defined as $\Sm := \Am ^T \Mm ^{-1} \Am$, especially its spectrum. Ideally, a spectrum with tightly clustered values and no outliers leads to rapid \ac{GKB} convergence \cite{KrDaTaArRu2020}.
To get as close as possible to this clustering, we use the following two methods to induce positive changes in the spectrum: preconditioning with the \ac{LSC} \cite{elman2006block} and eigenvalue deflation. Each of them operates differently and leads to convergence curves with different traits. \import{images/pgf/}{easyIFISS20GKconv} In \Cref{fig:easyIFISS20GKconv}, we plot the \ac{GKB} convergence curve for each of these, using a direct inner solver. The \ac{LSC} aligns the small values in the spectrum with the main cluster and brings everything closer together. The corresponding \ac{GKB} convergence curve has no plateau and is much steeper than the curve for the unpreconditioned case. Using deflation, we remove the five smallest values from the spectrum, which constitute outliers with respect to the main cluster. The other values remain unchanged. As such, its convergence curve no longer has the initial plateau, but is otherwise the same as for the original problem. For both of these cases, we apply the same strategies of relaxing the inner tolerance, to see how many total \ac{CG} iterations we can save. The rest of the set-up is identical to that described for \Cref{tab:varTolZ}. We tabulate the results in \Cref{tab:varTolLSC,tab:varTolDefl} and plot them in \Cref{fig:LSCpreclower bound,fig:deflatedlower bound}. They highlight that the best parameter-free results are obtained when using \HybridCase{}, which leads to savings of about 50\%, depending on the specific case. When comparing this parameter-free approach to \OptimalCase{}, which includes an experimental constant, we find that the hybrid approach can still be improved. Nonetheless, the difference in \ac{CG} iteration savings is not very high (up to 6\%), which supports the idea that our proposed strategy is efficient in a general-use setting.
An additional observation pertaining to the plots is that even if convergence is relatively fast (\Cref{fig:LSCpreclower bound}) or slow (\Cref{fig:deflatedlower bound}), the final savings are still around 50\%, as long as there is no plateau. \import{images/pgf/lowBndVScumulCG}{LSCprec} \begin{table} \centering \caption{Reduction of the total number of \ac{CG} iterations after using the \ac{LSC} preconditioner. The \ac{CG} tolerance is relaxed according to \Cref{eq:cst,eq:z,eq:zOptim,eq:adasquare,eq:hybrid}. The parameter used in \OptimalCase{} is $0.007$. } \begin{tabular}{cccccc} \ac{CG} tolerance & \ConstantCase & \AdaptiveCase & \PredictedCase & \HybridCase & \OptimalCase \\ \ac{CG} iterations & 2052 & 1301 & 1073 & 1046 & 919 \\ Savings \% & - & 36.60 & 47.71 & 49.03 & 55.21 \end{tabular} \label{tab:varTolLSC} \end{table} \import{images/pgf/lowBndVScumulCG}{deflated} \begin{table} \centering \caption{Reduction of the total number of \ac{CG} iterations after using deflation. The \ac{CG} tolerance is relaxed according to \Cref{eq:cst,eq:z,eq:zOptim,eq:adasquare,eq:hybrid}. The parameter used in \OptimalCase{} is $0.09$. } \begin{tabular}{cccccc} \ac{CG} tolerance & \ConstantCase & \AdaptiveCase & \PredictedCase & \HybridCase & \OptimalCase \\ \ac{CG} iterations & 4830 & 2625 & 2416 & 2411 & 2110 \\ Savings \% & - & 45.65 & 49.98 & 50.08 & 56.31 \end{tabular} \label{tab:varTolDefl} \end{table} \section{\ac{GKB} with the augmented Lagrangian approach} \label{sec:AL} The method of the \ac{AL} has been used successfully to speed up the convergence of the \ac{GKB} algorithm \cite{KrDaTaArRu2020}, with this effect being theoretically explained by Arioli et al. \cite{KrDaTaArRu2020}. Maybe most striking is the potential to reach mesh-independent convergence, provided that the augmentation parameter is large enough. 
Another use of the \ac{AL} method is to transform the (1,1)-block of a saddle-point system, say $\Wm$, from a positive semi-definite matrix to a positive definite one. However, this can happen only if the off-diagonal block $\Am$ is full rank or, more generally, if $\mbox{ker}(\Wm)\cap \mbox{ker}(\Am^T)=\{ \mZ \}$. Let $\Nm \in \mathbb{R}^{n\times n}$ be a symmetric, positive definite matrix. For a given symmetric, positive semi-definite matrix $\Wm \in \mathbb{R}^{m\times m}$, we can transform it into a positive-definite one by \begin{align} \Mm := \Wm + \Am \Nm^{-1} \Am^T. \end{align} The upper right-hand side term $\gvv$ then becomes \begin{equation} \gvv := \gvv + \Am \Nm^{-1}\rv. \end{equation} With these changes in place, we can proceed to using the \ac{GKB} algorithm, as described in \Cref{sec:GKBtheory}. Note that if the matrix $\Wm$ is already symmetric positive-definite, the transformation of the (1,1)-block is not necessary for using the \ac{GKB} method. However, the application of the \ac{AL} approach does lead to a better conditioning of the Schur complement, which significantly improves convergence speed \cite{KrDaTaArRu2020}. As in \Cref{sec:GKBtheory}, we choose $\Nm=\frac{1}{\eta} \Id$. There is as usual no free lunch: depending on the conditioning of the matrix $\Am$ and the magnitude of $\eta$, the \ac{AL} can also degrade the conditioning of the $\Mm$ matrix as a side-effect. We test whether the augmentation interacts with the strategies we propose in \Cref{sec:relaxChoices}, namely if we can still achieve about 50\% savings in the total number of inner iterations. The strategies are applied when solving the problem described in \Cref{sec:pbDesc} after an augmentation with a parameter $\eta=1000$, with the results being given in \Cref{tab:varTolAL} and plotted in \Cref{fig:augLaglower bound}. 
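To make the definiteness argument concrete, the following small sketch (purely illustrative, with random data and $\Nm = \frac{1}{\eta}\Id$) builds a singular block $\Wm$ and checks that the augmented block $\Mm = \Wm + \eta \Am \Am^T$ is positive definite, as guaranteed when $\mbox{ker}(\Wm)\cap \mbox{ker}(\Am^T)=\{ \mZ \}$:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 8, 3
eta = 1000.0

# A symmetric positive semi-definite (1,1)-block with a nontrivial kernel.
F = rng.standard_normal((m, m - 2))
W = F @ F.T                       # rank m-2, hence singular
A = rng.standard_normal((m, n))   # generically, ker(W) and ker(A^T) only share {0}

# Augmented-Lagrangian transformation with N = (1/eta) * I:
# M = W + A N^{-1} A^T = W + eta * A A^T.
M = W + eta * (A @ A.T)

print(np.linalg.eigvalsh(W).min())  # ~0: W is only semi-definite
print(np.linalg.eigvalsh(M).min())  # strictly positive: M is positive definite
```

Depending on $\eta$ and the conditioning of $\Am$, the smallest eigenvalue of $\Mm$ becomes safely positive, but, as noted above, the overall conditioning of $\Mm$ may degrade as a side-effect.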
Comparing the percentage of iterations saved in this case to those obtained in \Cref{sec:relaxChoices}, it is clear that, when combined with the \ac{AL} method, the strategy of variable inner tolerance does help reduce the total number of inner iterations, but by a lower percentage. \import{images/pgf/lowBndVScumulCG}{augLag} \begin{table} \centering \caption{Reduction of the total number of \ac{CG} iterations after using the \ac{AL} ($\eta =1000$). The \ac{CG} tolerance is relaxed according to \Cref{eq:cst,eq:z,eq:zOptim,eq:adasquare,eq:hybrid}. The parameter used in \OptimalCase{} is $0.005$. } \begin{tabular}{cccccc} \ac{CG} tolerance & \ConstantCase & \AdaptiveCase & \PredictedCase & \HybridCase & \OptimalCase \\ \ac{CG} iterations & 2601 & 1886 & 1707 & 1661 & 1647 \\ Savings \% & - & 27.49 & 34.37 & 36.14 & 36.68 \end{tabular} \label{tab:varTolAL} \end{table} Since the \ac{AL} method modifies the (1,1)-block of the saddle-point system, it changes the difficulty of the inner problem and hence the number of iterations the inner solver needs to perform. As such, a global comparison in terms of the number of inner iterations among all the scenarios we studied (original, preconditioned, deflated, including the \ac{AL}) is not fair unless the inner problem has the same degree of difficulty in all cases. To verify the generality of our method, we also apply it in a different context from that described in \Cref{sec:pbDesc}. Let us consider a Mixed Poisson problem. We solve the Poisson equation \(-\Delta u=f\) on the unit square $(0,1)^2$ using a mixed formulation. Introducing the vector variable $\vec{\sigma}=\nabla u$, the problem reads: find $(\vec{\sigma}, u)\in \Sigma \times W$ such that \begin{align} \vec{\sigma}-\grad u&=0\\ -\mathrm{div} (\vec{\sigma}) &= f, \end{align} where homogeneous Dirichlet boundary conditions are imposed for $u$ at all walls. The forcing term $f$ is random and uniformly drawn in $(0,1)$.
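For completeness, we recall the standard weak formulation underlying this discretization, which directly exhibits the saddle-point structure of \Cref{eqn:spsW}: find $(\vec{\sigma}, u)\in \Sigma \times W$ such that
\begin{align}
(\vec{\sigma}, \vec{\tau}) + (u, \mathrm{div}\, \vec{\tau}) &= 0 \qquad \forall \vec{\tau} \in \Sigma,\\
(\mathrm{div}\, \vec{\sigma}, v) &= -(f, v) \qquad \forall v \in W,
\end{align}
where $(\cdot,\cdot)$ denotes the $L^2$ inner product over the domain; the boundary term arising from the integration by parts vanishes due to the homogeneous Dirichlet condition on $u$. The mass matrix of $\Sigma^h$ then plays the role of the symmetric positive definite (1,1)-block.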
The discretization is done with the lowest-order Raviart-Thomas space $\Sigma^h \subset \Sigma$ and a space $W^h \subset W$ of piecewise constant basis functions. To produce the following numerical results, we used the finite element package Firedrake\footnote{\url{www.firedrakeproject.org}}, starting from the saddle-point demo it provides\footnote{\url{https://www.firedrakeproject.org/demos/saddle_point_systems.py.html}}, coupled with a PETSc~\cite{petsc-web-page,petsc-user-ref,petsc-efficient} implementation of \ac{GKB}\footnote{\url{https://petsc.org/release/docs/manualpages/PC/PCFIELDSPLIT.html\#PCFIELDSPLIT}} adapted to include dynamical relaxation. The test case has \num{328192} degrees of freedom, of which \num{197120} are associated with the (1,1)-block. The \ac{GKB} delay parameter is set to 3. The augmentation parameter $\eta$ is set to 500 and the \ac{GKB} tolerance to \num{1e-5}. The results are presented in \Cref{fig:mixedPoissonLowBnd}. We confirm the results presented above with a reduction of over 60\% in the total number of inner \ac{CG} iterations with respect to the constant-accuracy setup.
\begin{figure} \centering \begin{tikzpicture} \begin{axis}[ ymode=log, legend pos= north east, width=\FigWid \textwidth, height=\FigHei \textwidth, xlabel={Inner CG iterations}, ylabel={Lower bound} ] \addplot+[name path=A] table [x=InnerKSPCumul, y=lowbnd]{images/petscResults/MixedPoisson/monitorCGJacobiCST500.txt}; \addlegendentry{ \ConstantCase} \addplot+[] table [x expr=\thisrow{InnerKSPCumul}, y=lowbnd]{images/petscResults/MixedPoisson/monitorCGJacobiHybrid500.txt}; \addlegendentry{\HybridCase } \addplot+[] table [x expr=\thisrow{InnerKSPCumul}, y=lowbnd]{images/petscResults/MixedPoisson/monitorCGJacobiAdaSquare500.txt}; \addlegendentry{ \PredictedCase} \addplot+[] table [x expr=\thisrow{InnerKSPCumul}, y=lowbnd]{images/petscResults/MixedPoisson/monitorCGJacobiZ500.txt}; \addlegendentry{ \AdaptiveCase} \addplot[name path=C, draw=none] table [x expr=0.5*\thisrow{InnerKSPCumul}, y=lowbnd]{images/petscResults/MixedPoisson/monitorCGJacobiCST500.txt} node[pos=1]{50\%}; \addplot[black!30, opacity=0.5] fill between[of=A and C]; \addplot[name path=C, draw=none] table [x expr=0.35*\thisrow{InnerKSPCumul}, y=lowbnd]{images/petscResults/MixedPoisson/monitorCGJacobiCST500.txt} node[pos=1]{65\%}; \addplot[black!20, opacity=0.5] fill between[of=A and C]; \end{axis} \end{tikzpicture} \caption{Lower bound (\Cref{eq:lowBnd}) for the error norm associated with the \ac{GKB} iterates versus the cumulative number of inner \ac{CG} iterations when solving the Mixed Poisson problem. We also use the \ac{AL} ($\eta =500$). See \Cref{eq:cst,eq:z,eq:adasquare,eq:hybrid} for the strategies denoted by the labels.} \label{fig:mixedPoissonLowBnd} \end{figure} \begin{table} \centering \caption{Reduction of the total number of \ac{CG} iterations after using the \ac{AL} ($\eta =500$) on the Mixed Poisson problem. The \ac{CG} tolerance is relaxed according to \Cref{eq:cst,eq:z,eq:adasquare,eq:hybrid}. 
} \begin{tabular}{ccccc} \ac{CG} tolerance & \ConstantCase & \AdaptiveCase & \PredictedCase & \HybridCase \\ \ac{CG} iterations & 10845 & 4680 & 4105 & 4225 \\ Savings \% & - & 56.84 & 62.15 & 61.04 \end{tabular} \label{tab:DarcyvarTolAL} \end{table} \section{Conclusions} We have studied the behavior of the \ac{GKB} algorithm in the case where the inner problem, i.e.~the solution of a linear system, is solved iteratively. We have found that the inner solver does not need to be as precise as a direct one in order to achieve a \ac{GKB} solution of a predefined accuracy. Furthermore, we have proposed algorithmic strategies that reduce the cost of the inner solver, quantified as the cumulative number of inner iterations. This is made possible by suitable criteria for adapting the stopping tolerance. To motivate these choices, we have studied the perturbation generated by the inexact inner solver. The findings show that the perturbation introduced in early iterations has a higher impact on the accuracy of the solution than that introduced in later ones. We devised a dynamic way of adapting the accuracy of the inner solver at each call to minimize its cost. The initial, high accuracy is gradually reduced, keeping the resulting perturbation under control. Our relaxation strategy is inexpensive, easy to implement, and has reduced the total number of inner iterations by 33--63\% in our tests. The experiments also show that including methods such as deflation, preconditioning and the augmented Lagrangian has no negative impact and can lead to a higher percentage of savings. Another advantage is that our method does not rely on additional parameters and is thus usable in a black-box fashion. \paragraph{Acknowledgments} The authors thank Mario Arioli for many inspiring discussions and advice. \section{Comparison with other methods} Applying an inexact matrix inverse to a vector can also be seen as an inexact matrix-vector product.
The authors of \cite{bouras2005inexact} have studied how applying this kind of product influences the overall achievable precision of other Krylov subspace methods. They found that it is possible to relax the precision of the products in a way that still allows finding the required solution. The initial precision is high and is then gradually decreased according to $\epsilon/\normEu{\rv _k},$ where $\epsilon$ is the target precision for the solution and $\normEu{\rv _k}$ is the Euclidean norm of the $k$-th residual. In their study \cite{simoncini2003theory}, Simoncini and Szyld showed that the strategy above does not always work as intended, sometimes preventing the solver from converging to the required solution. This can be remedied by including a problem-dependent constant. These authors defined the constant for GMRES based on the extremal singular values of the upper Hessenberg matrix specific to this method. In the same article, they also applied the strategy to problems involving the Schur complement of a saddle-point matrix. Given the equivalence between \ac{GKB} applied to a saddle-point problem and \ac{CG} applied to the associated Schur complement problem, the discussion in \cite{simoncini2003theory} also applies to our context. To control the accuracy of the inner solver, they define the constant \begin{equation} \label{eq:simoConst} l= \frac{\sigma_{\min}(\Sm)}{\sigma_{\max}(\Am^T \Mm^{-1}) m_*}, \end{equation} where $\Sm= \Am^T \Mm ^{-1}\Am$ is the Schur complement and $m_*$ is the maximum number of iterations allowed for the outer solver. The relaxation applied to the inner solver tolerance is then guided by \begin{equation} \label{eq:rezRelax} l \epsilon/\normEu{\rv _k}, \end{equation} with $l=1$ being the choice in \cite{bouras2005inexact} and \Cref{eq:simoConst} the choice in \cite{simoncini2003theory}. The purpose of this section is to compare the methods described above with the one we propose in \Cref{sec:relaxChoices}.
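For illustration only, the following sketch evaluates the constant $l$ and the residual-guided tolerance sequence on a small synthetic problem with a mock residual history (all data here is hypothetical):

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 30, 10
A = rng.standard_normal((m, n))          # full column rank (generic)
C = rng.standard_normal((m, m))
M = C @ C.T + m * np.eye(m)              # symmetric positive definite

Minv = np.linalg.inv(M)                  # acceptable at this toy size
S = A.T @ Minv @ A                       # Schur complement A^T M^{-1} A

eps = 1e-8      # target precision for the outer solver
m_star = 100    # maximum number of outer iterations allowed

# Problem-dependent constant l, built from extremal singular values.
l = np.linalg.svd(S, compute_uv=False).min() / (
    np.linalg.svd(A.T @ Minv, compute_uv=False).max() * m_star)

# Relaxed inner tolerances l * eps / ||r_k|| for a mock residual history.
res_norms = np.logspace(0, -8, 9)        # hypothetical decreasing ||r_k||
inner_tols = l * eps / res_norms
print(l)
print(inner_tols)                        # strict at first, relaxed later
```

The tolerance sequence grows as the outer residual shrinks; setting $l=1$ recovers the simpler rule of \cite{bouras2005inexact}.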
We use \HybridCase{} (\Cref{eq:hybrid}), which has been shown to be the most effective parameter-free choice. We consider all the previous scenarios: original problem, deflated, preconditioned, augmented Lagrangian. In the subsequent tables, the strategies in \Cref{eq:rezRelax} are labeled \BourasCase{} for $l=1$ and \SimonciniCase{} for \Cref{eq:simoConst}. \begin{table} \centering \caption{Reduction of the total number of \ac{CG} iterations. We compare our proposed method \HybridCase{} (\Cref{eq:hybrid}) with two alternatives from the literature (\Cref{eq:simoConst,eq:rezRelax}).} \label{tab:compOrig} \resizebox{\textwidth}{!}{% \begin{tabular}{cccccc} \ac{CG} tolerance & \ConstantCase & \HybridCase & \BourasCase & \SimonciniCase ($m_*=100$) & \SimonciniCase ($m_*=60$) \\ \ac{CG} iterations & 6963 & 4873 & - & 7092 & 6891 \\ Savings \% & - & 30.02 & - & -1.85 & 1.03 \end{tabular}% } \end{table} \begin{table} \centering \caption{Reduction of the total number of \ac{CG} iterations after using the LSC preconditioner. We compare our proposed method \HybridCase{} (\Cref{eq:hybrid}) with two alternatives from the literature (\Cref{eq:simoConst,eq:rezRelax}).} \label{tab:compLSC} \resizebox{\textwidth}{!}{% \begin{tabular}{cccccc} \ac{CG} tolerance & \ConstantCase & \HybridCase & \BourasCase & \SimonciniCase ($m_*=100$) & \SimonciniCase ($m_*=15$) \\ \ac{CG} iterations & 2052 & 1046 & 1094 & 1808 & 1601 \\ Savings \% & - & 49.03 & 46.69 & 11.89 & 21.98 \end{tabular}% } \end{table} \begin{table} \centering \caption{Reduction of the total number of \ac{CG} iterations after using deflation.
We compare our proposed method \HybridCase{} (\Cref{eq:hybrid}) with two alternatives from the literature (\Cref{eq:simoConst,eq:rezRelax}).} \label{tab:compDefl} \resizebox{\textwidth}{!}{% \begin{tabular}{cccccc} \ac{CG} tolerance & \ConstantCase & \HybridCase & \BourasCase & \SimonciniCase ($m_*=100$) & \SimonciniCase ($m_*=40$) \\ \ac{CG} iterations & 4830 & 2411 & - & 3529 & 3283 \\ Savings \% & - & 50.08 & - & 26.94 & 32.03 \end{tabular}% } \end{table} \begin{table} \centering \caption{Reduction of the total number of \ac{CG} iterations after using the augmented Lagrangian ($\eta =1000$). We compare our proposed method \HybridCase{} (\Cref{eq:hybrid}) with two alternatives from the literature (\Cref{eq:simoConst,eq:rezRelax}).} \label{tab:compAL} \resizebox{\textwidth}{!}{% \begin{tabular}{cccccc} \ac{CG} tolerance & \ConstantCase & \HybridCase & \BourasCase & \SimonciniCase ($m_*=100$) & \SimonciniCase ($m_*=40$) \\ \ac{CG} iterations & 2601 & 1661 & - & 1954 & 1739 \\ Savings \% & - & 36.14 & - & 24.88 & 33.14 \end{tabular}% } \end{table} From these comparisons (see \Cref{tab:compOrig,tab:compLSC,tab:compDefl,tab:compAL}), we make the following remarks. Using \BourasCase{} prevented \ac{GKB} from converging by increasing the tolerance too quickly, leading to excessive perturbation, confirming the possibility highlighted in \cite{simoncini2003theory}. This method only worked for the problem which included preconditioning. For \SimonciniCase{}, we first considered a set of tests where $m_*=100$ is the maximum number of iterations allowed for the outer solver, as done in the original article. This method worked as intended, with \ac{GKB} reaching the required level of precision. A second set of tests takes $m_*$ to be equal to the number of \ac{GKB} iterations necessary to reach the target precision. It is possible to know or at least approximate this number, either based on theory or previous solver runs. 
With this set of tests, we wanted to see how \SimonciniCase{} would perform in the best-case scenario. Indeed, with a lower constant, the savings increase, and \ac{GKB} still converges. We can highlight three advantages of our \HybridCase{} method. First, it is safe, in the sense of keeping the perturbation under control and allowing \ac{GKB} to converge properly. Second, it is simple, in the sense of not requiring any parameters, whether found by numerical experiments or by estimating extremal singular values. Third, it is effective, leading to higher savings than the other two methods considered here. \section{Introduction} Saddle-point systems can be found in a variety of application fields, such as mixed finite element methods in fluid dynamics or interior point methods in optimization. An extensive overview of application fields and solution methods for this kind of problem is presented in the well-known article \cite{bgl_2005} by Benzi, Golub and Liesen. In the following study, we focus on an iterative solver based on the Golub-Kahan bidiagonalization: the generalized \ac{GKB} algorithm. This solver is designed for saddle-point systems and was introduced by Arioli \cite{Ar2013}. It belongs to the family of Krylov subspace methods and, as such, relies on specific orthogonality conditions, as we will review in more detail in \Cref{sec:GKBtheory}. Enforcing these orthogonality conditions requires solving an \emph{inner problem}, i.e.~formally computing products with matrix inverses (as described in \Cref{alg:GKB}). In practice, this computation is performed with a linear system solver. For this task, we explore in this article the use of iterative methods as a replacement for the direct methods that have been used within \ac{GKB} so far. This is essential for very large problems, such as those coming from a discretized \ac{PDE} in 2D or 3D, when direct solvers may reach their limits.
Using an inner iterative solver might also be advantageous from another point of view as we motivate in the following. The solution of large linear systems is often the bottleneck in scientific computing. The computational cost and, consequently, the execution time and/or the energy consumption can become prohibitive. For the inner-outer iterative \ac{GKB} solver in turn, the principal and costliest part is the solution of the inner system at each outer iteration. One approximate metric to measure the cost of the \ac{GKB} solver is the aggregate sum of the number of inner iterations. For a given setup, the cost of the \ac{GKB} method can hence be optimized by executing only a minimal number of inner iterations necessary for achieving a prescribed accuracy of the solution. To reduce this number, there are two possible steps to be taken into account. \begin{comment} The solution of \ac{PDE}s typically relies on discretization techniques such as the \ac{FEM}. The error that can theoretically be achieved, i.e. between the analytic solution and the discrete one, is determined by the approximation properties and the mesh size of the method. The discretization results in a linear system that has to be solved. Here, we will talk about the discrete solution when we mean the exact solution of the linear system. When solving the linear system, an error depending on matrix properties such as the conditioning and the accuracy of the solver (e.g. rounding errors, a stopping tolerance of $10^{-8}$ for an iterative solver) is introduced. We call this error between the discrete and the computed solution the \textit{algebraic error} in the following. \todUR{we need to be careful here: even with a (backward stable) direct solver, a poorly conditioned system may lead to large errors. What kind of accuracy do we mean anyway: small residual or small error?} Let us now compute the numerical solution of the linear system as precise as possible (e.g. 
with a backward stable direct solver or an iterative solver with a low stopping tolerance). Then, comparing the computed solution of the linear system and the actual analytical solution of the \ac{PDE}, we notice that the error between both is in general bounded by the error introduced by the discretization for a given mesh parameter $h$. Looking at the solution of the linear system from this point of view, it is thus, in general, not necessary to solve the linear system more accurately than the precision of the discretization, as the quality of the obtained numerical solution with respect to the analytical continuous one would not be improved further. \end{comment} In a first step, for a given application it is often unnecessary to solve the linear system with the highest achievable accuracy. This could be the case, for example, in the solution of a discretized \ac{PDE}, when the discretization already introduces an error. A precise solution of the linear system would not improve the numerical solution with respect to the analytic solution of the \ac{PDE} any further than the discretization allows. Next, we come to the second step which will be the main point of the study in this paper. The solution of the inner linear system in the \ac{GKB} method has to be exact, in theory. If we choose a rather low accuracy for the outer iterative solver, an inner exact solution might, however, no longer be necessary, as long as the inner error does not alter the chosen accuracy of the numerical solution. This strategy results in a further reduction of the number of inner iterations, since the inner solver will converge in fewer iterations when a less strict stopping tolerance is used. \begin{comment} fine domain discretization, to the point where although the inner problem is smaller, solving it directly still requires too much memory. Another reason why an inner iterative solver may be preferable has to do with the topic of desirable or achievable precision. 
In cases where it is known that the system already includes a source of error (discretization, measurements, etc.), it is unnecessary to find a solution with a lower algebraic error given by a direct method. It may also be the case that the outer method is just a part of a more complex algorithmic strategy, such as a linear solver embedded in a nonlinear one. There, it is not always mandatory that the linear solver delivers a high-precision solution. \end{comment} In the following study, we address the case where the inner solver has a prescribed stopping tolerance and then how this limited accuracy affects the outer process and the quality of its iterates. We will show that, with the appropriate choice of parameters, it is possible to make use of inner iterative solvers without compromising the accuracy of the \ac{GKB} result. As it can be seen immediately, the lower the accuracy for the inner solver, the less expensive the \ac{GKB} method will be. Furthermore, we take advantage of the versatility of iterative methods by adapting the stopping tolerance of the inner solver dynamically. In other words, we prescribe the tolerance of the inner solver according to some criteria determined at each outer iteration. This can lead to a reduction of the cost, since only a minimal number of inner iterations are executed. Typically, we will reduce the required accuracy for later instances of the inner solver, since later steps of the outer \ac{GKB}-iteration may contribute less to the overall accuracy. \begin{comment} Additionally, we will explore the interaction between the proposed strategy with the \ac{AL} method. This transformation of the saddle-point system can be used to accelerate \ac{GKB} and can even lead to mesh-independent convergence \cite{Ar2013,KrSoArTaRu2020,KrSoArTaRu2021,KrDaTaArRu2020}. Here, however, a trade-off can be expected and will be studied, since an acceleration of the outer iteration may be offset by needing more iterations in the inner iteration. 
Our results will show that the two methods can be used together, leading to faster convergence by decreasing both the number of outer and inner iterations.\todUR{is this always true for the inner its? I thought that we may spoil the conditioning of the 11 block?} \end{comment} One particular advantage of our proposed method is its generality. The strategy is independent of other choices which are problem-specific, such as the preconditioner for a Krylov method. We perform most of our tests on a relatively small Stokes flow problem, to illustrate the salient features. We confirm our findings by one final test on a larger case of the mixed Poisson problem, including the use of the augmented Lagrangian method, to demonstrate the use in a realistic scenario. \begin{comment} The need to replace a direct solver by an iterative one can be rooted in a number of causes. An increasingly common one is related to computational limitations. For example, industrial problems where \ac{PDE}s are solved typically rely on fine domain discretization, to the point where although the inner problem is smaller, solving it directly still requires too much memory. Another reason why an inner iterative solver may be preferable has to do with the topic of desirable or achievable precision. In cases where it is known that the system already includes a source of error (discretization, measurements, etc.), it is unnecessary to find a solution with a lower algebraic error given by a direct method. It may also be the case that the outer method is just a part of a more complex algorithmic strategy, such as a linear solver embedded in a nonlinear one. There, it is not always mandatory that the linear solver delivers a high-precision solution. \end{comment} Our study has a similar context as other works on inexact Krylov methods \cite{bouras2000relaxation,bouras2005inexact}, where these algorithms have been investigated from a numerical perspective. 
In these articles, the inexactness originates from a limited accuracy of the matrix-vector multiplication or that of the solution of a local sub-problem. Similar to what we have described above, it was found that the inner accuracy can be varied from step to step while still achieving convergence of the outer method. It was shown experimentally that the initial tolerance should be strict, then relaxed gradually, with the change being guided by the latest residual norm. Other works complemented the findings with theoretical insights, relevant to several algorithms of the Krylov family \cite{simoncini2003theory, simoncini2005relaxed,van2004inexact}. It was noted that, in some cases, unless a problem-dependent constant is included, the outer solver may fail to converge if the accuracy of the inner solution is adapted only based on the residual norm. This constant can be computed based on extreme singular values, as shown by Simoncini and Szyld \cite{simoncini2003theory}. Another source of inexactness can be the application of a preconditioner via an iterative method. Van den Eshof, Sleijpen and van Gijzen considered inexactness in Krylov methods originating both from matrix-vector products and variable preconditioning, using iterative methods from the GMRES family \cite{van2005relaxation}. Similarly to earlier work, their analysis relies on the connection between the residual and the accuracy of the solution to the inner problem. Since applying the preconditioner has the same effect as a matrix-vector product, the same strategies can be applied to more complex, flexible algorithms, such as those involving variable preconditioning: FGMRES \cite{saad1993flexible}, GMRESR \cite{van1994gmresr}, etc. A flexible version of the Golub-Kahan bidiagonalization is employed by Chung and Gazzola to find regularized solutions to a problem of image deblurring \cite{chung2019flexible}. 
In a more recent paper with the same application, Gazzola and Landman develop inexact Krylov methods as a way to deal with approximate knowledge of $\Am$ and $\Am ^T$ \cite{gazzola2021regularization}. Erlangga and Nabben construct a framework including nested Krylov solvers. They develop a multilevel approach to shift small eigenvalues, leading to a faster convergence of the linear solver \cite{erlangga2008multilevel}. In subsequent work related to multilevel Krylov methods, Kehl, Nabben and Szyld apply preconditioning in a flexible way, via an adaptive number of inner iterations \cite{kehl2019adaptive}. Baumann and van Gijzen analyze solving shifted linear systems and, by applying flexible preconditioning, also develop nested Krylov solvers \cite{baumann2015nested}. McInnes et al. consider hierarchical and nested Krylov methods with a small number of vector inner products, with the goal of reducing the need for global synchronization in a parallel computing setting \cite{mcinnes2014hierarchical}. Other than solving linear systems, inexact Krylov methods have been studied when tackling eigenvalue problems, as in the paper by Golub, Zhang and Zha \cite{GolZhaZha2000}. Although using different arguments, it was shown that the strategy of increasing the inner tolerance is successful for this kind of problem as well. Xu and Xue make use of an inexact rational Krylov method to solve nonsymmetric eigenvalue problems and observe that the accuracy of the inner solver (GMRES) can be relaxed in later outer steps, depending on the value of the eigenresidual \cite{xu2022inexact}. Dax computes the smallest eigenvalues of a matrix via a restarted Krylov solver which includes inexact matrix inversion \cite{dax2019restarted}. 
Our paper is structured as follows: in \Cref{sec:GKBtheory}, we review the theory and properties of the \ac{GKB} algorithm; in \Cref{sec:pbDesc}, we describe the specific problem chosen as test case for the numerical experiments; \Cref{sec:constAcc} illustrates the interactions between the accuracy of the inner solver and that of the outer one in a numerical test setting; \Cref{sec:pertErrAna} describes the link between the error of the outer solver and the perturbation induced by the use of an iterative inner solver. We describe and test our proposed strategy of using a variable tolerance parameter for the inner solver in \Cref{sec:relaxChoices}. We explore the interaction between the method of the \ac{AL} and our strategy in \Cref{sec:AL}. The final section is devoted to concluding remarks. \section{Generalized Golub-Kahan algorithm} \label{sec:GKBtheory} We are interested in saddle-point problems of the form \begin{align}\label{eqn:spsW} \left[ \begin{array}{cc} \Mm & \Am \\ \Am ^T & \mZ \end{array} \right] \left[ \begin{array}{c} \wv \\ \pv \end{array} \right] = \left[ \begin{array}{c} \gvv \\ \rv \end{array} \right] \end{align} with $\Mm\in \mathbb{R}^{m\times m}$ being a symmetric positive definite matrix and $\Am\in \mathbb{R}^{m \times n}$ a full rank constraint matrix. The generalized \ac{GKB} algorithm for the solution of a class of saddle-point systems was introduced by Arioli \cite{Ar2013}. To apply it to the system (\ref{eqn:spsW}), we first need the upper block of the right-hand side to be zero. To this end, we use the transformation \begin{align} \label{eq:iniTransf} \uv &= \wv - \Mm^{-1}\gvv ,\\ \bv &= \rv - \Am^T \Mm^{-1}\gvv.
\end{align} The resulting system is \begin{align}\label{eqn:sps} \left[ \begin{array}{cc} \Mm & \Am \\ \Am^T & 0 \end{array} \right] \left[ \begin{array}{c} \uv \\ \pv \end{array} \right] = \left[ \begin{array}{c} 0 \\ \bv \end{array} \right], \end{align} which is equivalent to that in \Cref{eqn:spsW}. We can recover the $\wv$ variable as $\wv = \uv + \Mm^{-1}\gvv$. Let $\Nm\in \mathbb{R}^{n\times n}$ be a symmetric positive definite matrix. To properly describe the \ac{GKB} algorithm, we need to define the following norms \begin{equation} \normM{\vvv} = \sqrt{ \vvv ^T \Mm \vvv }; \qquad \normN{\qv} = \sqrt{ \qv ^T \Nm \qv }; \qquad \normNI{\yv} = \sqrt{ \yv ^T \Nm^{-1} \yv }. \end{equation} Given the right-hand side vector $\bv \in \bR ^n$, the first step of the bidiagonalization is \begin{equation} \label{eq:iniGKBVec} \beta _1 = \normNI{\bv}, \quad \qv _1 = \Nm ^{-1} \bv / \beta _1. \end{equation} After $k$ iterations, the partial bidiagonalization is given by \begin{equation} \label{eq:oriGKB} \begin{cases} \Am \Qm _k = \Mm \Vm _k \Bm _k, &\qquad \Vm _k ^T \Mm \Vm _k = \Id _k \\ \Am ^T \Vm _k = \Nm \Qm _k \Bm ^T _k + \beta _ {k+1} \qv _{k+1} \ev _k^T, &\qquad \Qm _k ^T \Nm \Qm _k = \Id _k \end{cases}, \end{equation} with the bidiagonal matrix \begin{equation} \label{eq:BmatGKB} \Bm _k= \left[ \begin{matrix} \alpha_1 & \beta_2 & 0 & \ldots & 0 \\ 0 & \alpha_2 & \beta_3 & \ldots & 0 \\ \vdots & \vdots & \vdots & \vdots & \vdots \\ 0 & \ldots & 0 & \alpha_{k-1} & \beta_k \\ 0 & \ldots & 0 & 0 & \alpha_{k} \end{matrix} \right] \end{equation} and the residual term $\beta _ {k+1} \qv _{k+1} \ev _k^T$. The columns of $\Vm _k$ are orthonormal vectors with respect to the inner product and norm induced by $\Mm$, while the same holds for $\Qm _k$ and $\Nm$ respectively \begin{equation} \begin{split} & \vvv _i ^T \Mm \vvv _j = 0, \forall i \neq j; \qquad \normM{\vvv _k} = 1; \\ & \qv _i ^T \Nm \qv _j = 0, \forall i \neq j; \qquad \normN{\qv _k} = 1. 
\end{split} \end{equation} Prior to the normalization leading to $\vvv _k$ and $\qv _k$, the norms are stored as $\alpha _k$ for $\vvv _k$ and $\beta _k $ for $\qv _k$, as detailed in \cref{alg:GKB}. Using $\Vm _k$, $\Qm _k$ and the relations in \Cref{eq:oriGKB}, we can transform the system from \Cref{eqn:sps} into a simpler form \begin{equation}\label{eq:transfSys} \left[ \begin{matrix} \Id _k& \Bm _k \\ \Bm _k ^T & \mZ \end{matrix} \right] % \left[ \begin{matrix} \zv _k \\ \yv _k \end{matrix} \right] = \left[ \begin{matrix} \mZ \\ \Qm _k ^T \bv \end{matrix} \right]. \end{equation} With the choice for $\qv _1$ given in \Cref{eq:iniGKBVec}, we have that $\Qm _k ^T \bv = \beta _1 \ev _1 $. The solution components to \Cref{eq:transfSys} are then given by \begin{equation} \label{eq:zandy} \zv _k= \beta _1 \Bm _k ^{-T} \ev _1; \quad \yv _k= - \Bm _k ^{-1} \zv _k, \end{equation} where $ \Bm _k ^{-T}$ is the inverse of $ \Bm _k ^{T}$. We can build the $k$-th approximate solution to \Cref{eqn:sps} as \begin{equation} \label{eq:GKBapx} \uv _k = \Vm _k \zv _k; \quad \pv _k = \Qm _k \yv _k. \end{equation} In particular, after a number of $k=n$ steps and assuming exact arithmetic, we have $\uv _k = \uv$ and $\pv _k = \pv $, meaning we have found the exact solution to \Cref{eqn:sps}. A proof of why $n$ terms are sufficient to find the exact solution is given in the introductory paper by Arioli \cite{Ar2013}. This corresponds to a scenario where it is necessary to perform the $n$ iterations, although, for specific problems with particular features, the solution may be found after fewer steps. As $ k \rightarrow n$, the quality of the approximation improves ($\uv _k \rightarrow \uv$ and $\pv _k \rightarrow \pv $), with the bidiagonalization residual $\beta _ {k+1} \qv _{k+1} \ev _k^T$ vanishing for $k=n$. 
Given the structure of $\beta _1 \ev _1$ and $ \Bm _k ^{T}$, we find
\begin{equation} \label{eq:zetaDef} \zeta _1= \frac{\beta _1}{\alpha _1}, \quad \zeta _k = - \zeta _{k-1} \frac{\beta _k}{\alpha _k}, \quad \zv _k = \left[ \begin{matrix} \zv _{k-1} \\ \zeta _k \end{matrix} \right] \end{equation}
in a recursive manner. Then, $\uv _k$ is computed as $\uv_k = \uv _{k-1} + \zeta _k \vvv _k $. In order to obtain a recursive formula for $\pv$ as well, we introduce the vector
\begin{equation} \dv _k= \frac{\qv _k - \beta _k \dv _{k-1}}{\alpha _k}, \quad \dv _1 = \frac{\qv _1}{\alpha _1}. \end{equation}
Finally, the update formulas are
\begin{equation} \label{eq:GKBupdate} \uv _k= \uv _{k-1} + \zeta _k \vvv _k , \quad \pv _k= \pv _{k-1} - \zeta _k \dv _k . \end{equation}
At step $k$ of \Cref{alg:GKB}, we have the following error in the energy norm.
\begin{equation} \label{eq:errExact} \begin{split} \normM{\ev _k} ^2 &= \normM{ \uv _k - \uv } ^2 = \normM{ \Vm _k \zv _k - [ \Vm _k \Vm _{n-k}] \left[ \begin{matrix} {\zv _k} \\ {\zv _{n-k}} \end{matrix} \right] } ^2 \\ &= \normM{ \Vm _{n-k} \zv _{n-k} } ^2 = \normEu{ \zv _{n-k}} ^2 = \sum_{i=k+1}^{n} \zeta _i ^2 \end{split} \end{equation}
In the last line, we have made use of the $\Mm$-orthonormality of the $\Vm$ matrices. If we truncate the sum above to only its first $d$ terms, we get a lower bound on the energy norm of the error. The subscript $d$ stands for \textit{delay}, because we can compute this lower bound corresponding to a given step $k$ only after an additional $d$ steps
\begin{equation} \label{eq:lowBnd} \xi ^2 _{k,d} = \sum_{i=k+1}^{k+d} \zeta _i ^2 < \normM{\ev _k} ^2. \end{equation}
With this bound for the absolute error, we can devise one for the relative error in \Cref{eq:lowBndRel}, which is then used as a stopping criterion in \Cref{alg_line:GKBconvCheck} of \Cref{alg:GKB}.
\begin{equation} \label{eq:lowBndRel} \bar \xi ^2 _{k,d} = \frac{ \sum_{i=k-d+1}^{k} \zeta _i ^2 }{ \sum_{i=1}^{k} \zeta _i ^2 } .
\end{equation}
The \ac{GKB} algorithm has the following error minimization property. Let $\cV _k = span \{\vvv _1, ..., \vvv _k\}$ and $\cQ _k = span \{\qv _1, ..., \qv _k\}$. Then, for any arbitrary step $k$, we have that
\begin{equation} \label{eq:errMinProp} \min_{\underset{(\Am ^T \uv _k - \bv ) \perp \cQ _k}{ \uv _k \in \cV _k,} } \normM{ \uv - \uv _k } \end{equation}
is attained by the iterate $\uv _k$ computed by \Cref{alg:GKB}. For brevity and because the \ac{GKB} algorithm features this minimization property for the primal variable, our presentation will focus on the velocity for Stokes problems. The stopping criteria for our proposed algorithmic strategies rely on approximations of the velocity error norm. For all the numerical experiments that we have performed, the pressure error norm is close to that of the velocity (less than an order of magnitude apart). In the cases where we operate on a different subspace, as a result of preconditioning, we find that the pressure error norm is actually smaller than that for the velocity. In the case where the dual variable is as important as the primal, one can use a monolithic approach, such as applying MINRES to the complete saddle-point system. The \ac{GKB} (as implemented by \Cref{alg:GKB}) is a nested iterative scheme in which each outer loop involves solving an inner linear system. According to the theory given in the paper by Arioli \cite{Ar2013}, the matrices $\Mm$ and $\Nm$ have to be inverted exactly in each iteration. We can choose $ \Nm=\frac{1}{\eta} \Id$, whose inversion reduces to a scalar multiplication. In the following sections, unless otherwise specified, we consider $\eta =1 $. On the other hand, the matrix $\Mm$ depends on the underlying differential equations or the problem setting in general. As long as the matrix $\Mm$ is of moderate size, a robust direct solver can be used. For large problems, however, a direct solution might no longer be possible and an iterative solver will be required.
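The delayed bounds in \Cref{eq:lowBnd,eq:lowBndRel} involve only the scalars $\zeta _i$ and are therefore essentially free to evaluate. A minimal Python sketch of the relative bound, using a hypothetical $\zeta$ sequence in place of values from an actual run:

```python
import math

def rel_lower_bound(zetas, d):
    # Delayed relative lower bound: last d squared zetas over the sum of
    # all squared zetas computed so far, then a square root.
    k = len(zetas)
    assert k > d, "the bound is only available with a delay of d steps"
    tail = sum(z * z for z in zetas[k - d:])
    total = sum(z * z for z in zetas)
    return math.sqrt(tail / total)

# Hypothetical (mostly) decreasing zeta magnitudes.
zetas = [2.0, 1.0, 0.5, 0.1, 1e-3, 1e-5, 1e-7]
print(rel_lower_bound(zetas, d=3))  # small once the trailing zetas are small
```

Comparing this quantity against the requested tolerance is the check performed in the main loop of the bidiagonalization algorithm below.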
At this point, we face two problems. First, depending on the application, inverting $\Mm$ might be more or less costly. Second, to achieve a solution quality close to machine precision, an iterative solver might require a considerable number of iteration steps.
\begin{algorithm} \caption{Golub-Kahan bidiagonalization algorithm} \label{alg:GKB} \begin{algorithmic}[1]
\Require{$\Mm , \Am , \Nm, \bv$, maxit}
\State{$\beta_1 = \|\bv\|_{\Nm^{-1}}$; $\qv_1 = \Nm^{-1} \bv / \beta_1$}
\State{$\wv = \Mm^{-1} \Am \qv_1$; $\alpha_1 = \|\wv\|_{\Mm}$; $\vvv_1 = \wv / \alpha_1$}
\State{$\zeta_1 = \beta_1 / \alpha_1$; $\dv_1=\qv_1/ \alpha_1$; $\uv^{(1)} =\zeta_{1} \vvv_{1}$; $\pv^{(1)} = - \zeta_1 \dv_1$; }
\State{$\bar \xi _{1,d} = 1;$ $k = 1;$}
\While{ $\bar \xi _{k,d} > $ tolerance and $k < $ maxit}
\State{$\gvv = \Nm^{-1} \left( \Am^T \vvv_k - \alpha_k \Nm \qv_k \right) $; $\beta_{k+1} = \|\gvv\|_{\Nm}$}
\State{$\qv_{k+1} = \gvv / {\beta_{k+1}}$}
\State{$\wv = \Mm^{-1} \left( \Am \qv_{k+1} - \beta_{k+1} \Mm \vvv_{k} \right)$; $\alpha_{k+1} = \|\wv\|_{\Mm}$} \label{alg_line:innerGKBproblem}
\State{$\vvv_{k+1} = \wv / {\alpha_{k+1} }$}
\State{$\zeta_{k+1} = - \dfrac{\beta_{k+1}}{\alpha_{k+1}} \zeta_k$}
\State{$\dv_{k+1} = \left( \qv_{k+1} - \beta_{k+1} \dv_k \right) / \alpha_{k+1} $}
\State{$\uv^{(k+1)} = \uv^{(k)} + \zeta_{k+1} \vvv_{k+1}$; $\pv^{(k+1)} = \pv^{(k)} - \zeta_{k+1} \dv_{k+1}$}
\State{$k = k + 1$}
\If{$k>d$}
\State{$\bar \xi _{k,d} = \sqrt{ \sum_{i=k-d+1}^{k} \zeta _i ^2 / \sum_{i=1}^{k} \zeta _i ^2 } $} \label{alg_line:GKBconvCheck}
\EndIf{}
\EndWhile{}
\Return $\uv^{(k)}, \pv^{(k)}$
\end{algorithmic} \end{algorithm}
In \Cref{alg_line:innerGKBproblem} of \Cref{alg:GKB}, we have the application of $\Mm^{-1} $ to a vector, which represents what we call the \textit{inner problem}. Typically, this is implemented as a call to a direct solver using the matrix $\Mm $ and the vector $\Am \qv_{k+1} - \beta_{k+1} \Mm \vvv_{k}$ as the right-hand side.
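To make the recurrences in \Cref{alg:GKB} concrete, the following self-contained Python sketch runs the bidiagonalization on a tiny saddle-point system. The data ($\Mm$ diagonal, so that the inner solve is exact and trivial; $\Nm = \Id$; a small full-rank $\Am$) are hypothetical toy values, not taken from the Stokes experiments, and the stopping criterion is omitted since the toy problem terminates after $n$ steps:

```python
import math

# Hypothetical toy data: M diagonal SPD (so M^{-1} is exact and cheap), N = I.
M = [2.0, 3.0, 4.0]                       # diag(M), m = 3
A = [[1.0, 0.0], [1.0, 1.0], [0.0, 1.0]]  # m x n constraint matrix, full rank
b = [1.0, 2.0]                            # right-hand side, n = 2

m, n = len(M), len(b)
mat = lambda q: [sum(A[i][j] * q[j] for j in range(n)) for i in range(m)]    # A q
mat_t = lambda v: [sum(A[i][j] * v[i] for i in range(m)) for j in range(n)]  # A^T v
norm_M = lambda v: math.sqrt(sum(M[i] * v[i] ** 2 for i in range(m)))
norm_2 = lambda q: math.sqrt(sum(x * x for x in q))

# Initialization: beta_1, q_1, alpha_1, v_1 and the first iterates.
beta = norm_2(b)
q = [x / beta for x in b]
w = [mat(q)[i] / M[i] for i in range(m)]  # M^{-1} A q_1
alpha = norm_M(w)
v = [x / alpha for x in w]
zeta = beta / alpha
d = [x / alpha for x in q]
u = [zeta * x for x in v]
p = [-zeta * x for x in d]

for _ in range(n - 1):  # at most n bidiagonalization steps in total
    g = [mat_t(v)[j] - alpha * q[j] for j in range(n)]
    beta = norm_2(g)
    q = [x / beta for x in g]
    w = [mat(q)[i] / M[i] - beta * v[i] for i in range(m)]
    alpha = norm_M(w)
    v = [x / alpha for x in w]
    zeta = -(beta / alpha) * zeta
    d = [(q[j] - beta * d[j]) / alpha for j in range(n)]
    u = [u[i] + zeta * v[i] for i in range(m)]
    p = [p[j] - zeta * d[j] for j in range(n)]

# After n steps, M u + A p = 0 and A^T u = b should hold up to round-off.
res1 = max(abs(M[i] * u[i] + mat(p)[i]) for i in range(m))
res2 = max(abs(mat_t(u)[j] - b[j]) for j in range(n))
print(res1, res2)
```

After $n=2$ steps both block equations of the saddle-point system are satisfied up to round-off, in line with the exact-termination property discussed above.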
The main contribution of this work is a study of the behavior exhibited by \Cref{alg:GKB} when we replace the direct solver employed in \Cref{alg_line:innerGKBproblem} by an iterative one. In particular, for a target accuracy of the final \ac{GKB} iterate, we want to minimize the total number of inner iterations. Our choice for the inner solver is the unpreconditioned \ac{CG} algorithm, for its simplicity and relative generality. The strategies we propose in the subsequent sections do not rely on any specific feature of this inner solver, and are meant to be applicable regardless of this choice. We are interested in reducing the total number of inner iterations in a general, problem-independent manner, measured relative to a baseline. This is why we do not take preconditioning for \ac{CG} into account, as it is usually problem-dependent. We measure the effectiveness of our methods based on the percentage of inner iterations saved when compared against a scenario to be described in more detail in the following sections.
\section{Problem description} \label{sec:pbDesc}
As a test problem, we will use a 2D Stokes flow in a rectangular channel domain $\Omega = \left[-1, L \right] \times \left[-1, 1 \right]$ given by
\begin{equation} \label{eq:contStokes} \begin{aligned} - \Delta \vec{u} + \nabla p &= 0 \\ \nabla \cdot \vec{u} &=0. \end{aligned} \end{equation}
More specifically, we will consider the Poiseuille flow problem, i.e., a steady Stokes problem with the exact solution
\begin{equation} \label{eq:contStokesSol} \begin{cases} u_x = 1- y^2, \\ u_y = 0, \\ p = -2x + \text{constant}. \end{cases} \end{equation}
The boundary conditions are given as a Dirichlet condition on the inflow $\Gamma_{in}= \left\{-1\right\} \times \left[-1, 1 \right]$ (left boundary) and no-slip conditions on the top and bottom walls $\Gamma_{c}= \left[-1, L \right] \times \left\{-1\right\} \cup \left[-1, L \right] \times \left\{1\right\} $.
The outflow at the right boundary $\Gamma_{out}= \left\{L \right\} \times \left[-1, 1 \right]$ is represented as a Neumann condition
\begin{equation*} \begin{split} \frac{\partial u_x}{\partial x} - p &= 0 \\ \frac{\partial u_y}{\partial x} &= 0. \end{split} \end{equation*}
We use Q2-Q1 Finite Elements as the discretization method. Our sample matrices are generated by the Incompressible Flow \& Iterative Solver Software (IFISS)\footnote{\url{http://www.cs.umd.edu/~elman/ifiss3.6/index.html}} package \cite{ers07}; see the book by Elman et al. \cite{elman2014finite} for a more detailed description of this reference Stokes problem.
\begin{figure} \centering
\begin{tikzpicture}
\begin{axis} [
width=\FigWid \textwidth, height=\FigHei \textwidth,
minor tick num=3, grid=both,
xtick = {-1,0,...,5}, ytick = {-1,0,...,1},
xlabel = $x$, ylabel = $y$,
ticklabel style = {font = \scriptsize},
% colormap/jet, colorbar,
% enlargelimits=false,
axis on top, axis equal image ]
\addplot [forget plot] graphics[xmin=-1,xmax=5,ymin=-1,ymax=1] {IFISSCh5SolRaw.png};
\end{axis}
\end{tikzpicture}
\caption{ Exact solution to the Stokes problem in a channel of length 5. Plotted is the $1-y^2$ function, which represents the $x$-direction velocity, overlaid with the mesh resulting from the domain discretization (Q2-Q1 Finite Elements Method). } \label{fig:convIFISSCh5SolMesh}
\end{figure}
We first illustrate some particular features shown by \ac{GKB} for this problem. We use a direct inner solver here, before discussing the influence of an iterative solver in subsequent sections. In \Cref{fig:convIFISSChL}, we plot the convergence history for several channels of different lengths, which leads us to notice the following details. The solver starts with a period of slow convergence, visually represented by a plateau, the length of which is proportional to the length of the channel.
The rest of the convergence curve corresponds to a period of superlinear convergence, a phenomenon also known for other solvers of the Krylov family, such as \ac{CG}. The presence of this plateau is especially relevant for our proposed strategies and, since it appears for each channel, we can conclude it is a significant feature of this class of channel problems. In the following numerical examples, we choose $L=20$, and thus a domain of length 21 units.
\import{images/pgf/}{convIFISSChL.tex}
\section{Constant accuracy inner solver} \label{sec:constAcc}
Similar to what has been described by Golub et al. \cite{GolZhaZha2000} for solving eigenvalue problems, we have observed that when using an iterative method as an inner solver, its accuracy has a clear effect on the overall accuracy of the outer solver (see \Cref{fig:constCGtol}). We solve the channel problem described in \Cref{sec:pbDesc} with various configurations for the tolerance of the inner solver, and plot the resulting convergence curves in \Cref{fig:constCGtol}. The outer solver is always \ac{GKB} with a $10^{-7}$ tolerance. The cases we show are: a direct inner solver, three choices of constant inner solver tolerance ($10^{-3}$, $10^{-7}$ and $10^{-8}$), and a final case using a low-accuracy solver (tolerance $10^{-3}$) only for the first two iterations, followed by a high-accuracy one ($10^{-14}$). The stopping criterion for the \ac{GKB} algorithm is a delayed lower bound estimate for the energy norm of the primal variable (see \Cref{eq:lowBnd}). As such, \ac{GKB} with a direct inner solver performs a few extra steps, achieving a higher accuracy than the one required, here around $10^{-8}$. Notice how the outer solver cannot achieve a higher accuracy than that of the inner solver. The outer solver stops reducing the error even before reaching the same accuracy as the inner solver.
Replacing the exact inner solver by a \ac{CG} method with a constant tolerance of $10^{-8}$ leads to a convergence process where the error norm eventually reaches a value just below the target accuracy of $10^{-7}$ and does not decrease further. This highlights the fact that the inner solver does not need to be exact in order to have \ac{GKB} converge to the required solution. For this Poiseuille flow example, however, the inner solver must be at least one order of magnitude more precise than the outer one. In the last case examined here, we want to see if early imprecise iterations can be compensated later by others having a higher accuracy. This strategy of increasing accuracy has been found to work, e.g., in the case of the Newton method for nonlinear problems \cite{dembo1982inexact}. We tested the case when the first two iterations of \ac{GKB} use an inner solver with tolerance $10^{-3}$, with all the subsequent inner iterations employing a tolerance of $10^{-14}$. The resulting curve shows a convergence history rather similar to the case where \ac{CG} has a constant tolerance of $10^{-3}$. The outer process cannot reduce the error norm below $10^{-3}$, despite the fact that the bulk of the iterations employ a high-accuracy inner solver. This corresponds to what was observed by Golub et al. \cite{GolZhaZha2000} for solving eigenvalue problems.
\import{images/pgf/}{constCGtol.tex}
An interesting observation is that all the curves in \Cref{fig:constCGtol} overlap in their initial iterations, until they start straying from the apparent profile, eventually leveling off. In \Cref{sec:pertErrAna}, we analyze the causes leading to these particular behaviors and link them to the accuracy of the inner solver.
\section{Perturbation and error study} \label{sec:pertErrAna}
In this section we describe how the error associated with the iterates of \Cref{alg:GKB} behaves if we use an iterative solver for the systems involving $\Mm ^{-1}$.
We can think of the approximate solutions of these inner systems as perturbed versions of those we would get when using a direct solver. The error is then characterized in terms of this perturbation and the implications motivate our algorithmic strategies given in the subsequent sections. With this characterization, we can also explain the results in \Cref{sec:constAcc}. The use of an iterative inner solver directly affects the columns of the $\Vm$ matrix. In the following, $\Vm$ denotes the unperturbed matrix, with $\Em _{\Vm}$ being the associated perturbation matrix. In particular, we are interested in the $\Mm$ norm of the individual columns of $\Em _{\Vm}$, which gives us an idea of how far we are from the \enquote{ideal} columns of $\Vm$. Changes in the $\vvv$ and $\qv$ vectors also have an impact on their respective norms $\alpha$ and $\beta$, which shift away from the values they would normally have with a direct inner solver. In turn, these changes propagate to the coefficients $\zeta$ used to update the iterates $\uv$ and $\pv$. Our observations concern the $\zv$ vector, its perturbation $\ev _{\zv}$ and their effect on the error of the primal variable $\uv$ measured in the $\Mm$ norm. The entries of $\zv$ change sign every iteration, but we will only consider them in absolute value, as it is their magnitude which is important. In the following, we will denote perturbed quantities with a hat. \subsection{High initial accuracy followed by relaxation} \label{subsec:theoPrecRelax} In this subsection, we take a closer look at the interactions between the perturbation and the error. For us, perturbation is the result of using an inexact inner solver and represents a quantity which can prevent the outer solver from reducing the error below a certain value. The error itself needs to be precisely defined, as it may contain several components, each minimized by a different process. 
Because we focus on the difference between the perturbed and the unperturbed \ac{GKB}, sources of error that affect both versions, such as the round-off error, are not included in the following discussion. According to the observations by Jir{\'a}nek and Rozlo{\v{z}}n{\'\i}k, the accuracy of the outer solver depends primarily on that of the inner solver, since the perturbations introduced by an iterative solver dominate those related to finite-precision arithmetic \cite{jiranek2008maximum}. We take the exact solution $\uv$ to be equal to $\uv _n$, the $n$-th iterate of the unperturbed \ac{GKB} with exact arithmetic. At step $k$ of the \ac{GKB}, we have the error
\begin{align} \normM{\ev _k} &= \normM{ \hat \uv _k - \uv }, \end{align}
where $\hat \uv _k$ is the current approximate solution and $\uv $ is the exact one. Both can be written as linear combinations of columns from $\Vm$ with coefficients from $\zv$. Let $\hat \uv _k$ come from an inexact version of \Cref{alg:GKB}, where the solution of the inner problem (a matrix-vector product with $\Mm ^{-1}$) includes perturbations. The term $ \uv = \Vm _n \zv _n$ is available after $n$ steps of \Cref{alg:GKB} in exact arithmetic, without perturbations. We separate the first $k$ terms, which have been computed, from the remaining ($n-k$).
\begin{equation} \label{eq:errPert} \begin{split} \normM{\ev _k} &= \normM{ \hat \uv _k - \uv } = \normM{ ( \Vm _k + \Em _{\Vm} ) ( \zv _k + \ev _{\zv} ) - [ \Vm _k \Vm _{n-k}] \left[ \begin{matrix} {\zv _k} \\ {\zv _{n-k}} \end{matrix} \right] } \\ &= \normM{ \Em _{\Vm} \zv _k + \Em _{\Vm} \ev _{\zv} + \Vm _k \ev _{\zv} - \Vm _{n-k} \zv _{n-k} } \\ & \leq \normM{ \Em _{\Vm} \zv _k } + \normM{ \Em _{\Vm} \ev _{\zv} } + \normEu{ \ev _{\zv} } + \normEu{ \zv _{n-k}} \end{split} \end{equation}
In the last line, we have made use of the triangle inequality and the $\Mm$-orthonormality of the $\Vm$ matrices.
In the case of a direct inner solver, we can leave out the perturbation terms, recovering the result $\normM{\ev _k} ^2 = \normEu{ \zv _{n-k}} ^2 = \sum_{i=k+1}^{n} \zeta _i^2 $ given by Arioli \cite{Ar2013}. This is simply the error coming from approximating $\uv $ (a linear combination of $n$ $\Mm$-orthogonal vectors) by $ \uv _k $ (a linear combination of only $k$ $\Mm$-orthogonal vectors). This term decreases as we perform more steps of \Cref{alg:GKB} ($ k \rightarrow n $). By truncating the sum $\sum_{i=k+1}^{n} \zeta _i ^2$, we obtain a lower bound for the squared error. The remaining three terms in \Cref{eq:errPert} include the perturbation coming from the inexact inner solution. Our goal is to minimize the total number of iterations of the inner solver, so we are interested in knowing how large these terms can be allowed to be, such that we still recover a final solution of the required accuracy. The answer is to keep them just below the final value of the fourth one, $\normEu{ \zv _{n-k}}$, i.e., below the acceptable algebraic error. If they are larger, the final accuracy will suffer. If they are significantly smaller, then our inner solver is unnecessarily precise and expensive. The following observations rely on the behavior of the $\zv$ vector. At each iteration, this vector gains an additional entry, while leaving the previous ones unchanged. These entries form a (mostly) decreasing sequence and have a magnitude below 1 when reaching the superlinear convergence phase. Unfortunately, we cannot yet provide a formal proof of these properties, but having seen them consistently reappear in our numerical experiments encourages us to consider them for motivating our approach. These properties appear in both cases, with and without perturbation.
The decrease in the entries of the coefficient vector used to build the approximation has also been observed and described for other Krylov methods (see references \cite{van2004inexact,simoncini2003theory,simoncini2005relaxed}). Their context is that of inexact matrix-vector products, which is another way of viewing our case. The fact that new entries of $\zv$ are simply appended to the old ones and that they are smaller than one is linked to the particular construction specific to \ac{GKB}. Back to \Cref{eq:errPert}, let us assume the perturbation at each iteration is constant, i.e. the $\Mm$ norm of each column of $\Em _{\Vm}$ is equal to the same constant. Then, the vector $\Em _{\Vm} \zv _k$ will be a linear combination of perturbation vectors with coefficients from $\zv _k$. Following our observations concerning the entries of $\zv _k$, the first terms of the linear combination will be the dominant ones, with later terms contributing less and less to the sum. If the perturbation of the first $\vvv$ has an $\Mm$ norm below our target accuracy, the term $ \normM{ \Em _{\Vm} \zv _k } $ will never contribute to the error. We can allow the $\Mm$ norm of the columns of $\Em _{\Vm}$ to increase, knowing the effect of the perturbation will be reduced by the entries of $\zv$, which are decreasing and less than one. The \ac{GKB} solution can be computed in a less expensive way, as long as the term $\normM{ \Em _{\Vm} \zv _k }$ is kept below our target accuracy. The perturbation should initially be small, then allowed to increase proportionally to the decrease of the entries in $\zv$. Next, we describe the terms including $\ev _{\zv}$. Let the following define the perturbed entries of $\hat \zv$ \begin{equation*} \hat \zeta _k = - \hat \zeta _{k-1} \frac{\hat \beta _k}{\hat \alpha _k} = - \hat \zeta _{k-1} ( \frac{ \beta _k}{ \alpha _k} + \epsilon _k). 
\end{equation*}
The term $\epsilon _k$ is the perturbation introduced at iteration $k$, coming from the shifted norms associated with $\qv _k$ and $\vvv _k$. This term is then multiplied by $\hat \zeta _{k-1}$ which, according to our empirical observations, decreases at (almost) every step. If we assume $\epsilon _k$ is constant, the entries of $\ev _{\zv}$ decrease in magnitude and the norm $ \normEu{ \ev _{\zv} }$ is mostly dominated by the first vector entry. The strategy described for the term $ \normM{ \Em _{\Vm} \zv _k } $ also keeps $ \normEu{ \ev _{\zv} }$ small. We start with a perturbation norm below the target accuracy, to ensure the quality of the final iterate. Gradually, we allow an increase in the perturbation norm proportional to the decrease of $\hat \zeta _k$ to reduce the costs of the inner solver. Finally, since the vector $\ev _{\zv} $ decreases similarly to $ \zv $, the term $ \normM{ \Em _{\Vm} \ev _{\zv} }$ can be described in the same way as $\normM{ \Em _{\Vm} \zv _k }$. We close this section by emphasizing the important role played by the first iterations and how the initial perturbations can affect the accuracy of the solution. Notice that the perturbation terms included refer to all the $k$ steps, not just the latest one. Relaxation strategies that start with a low accuracy and gradually increase it are unlikely to work for \ac{GKB} and other algorithms with similar error minimization properties. Since the first vectors computed are the ones that contribute the most to reducing the error, they should be determined as precisely as possible. Even if a perturbed iteration is followed exclusively by very accurate ones, this will not prevent the perturbation from being transmitted to all the subsequent vectors, where it may be further amplified by matrix multiplications and floating-point error. With these observations in mind, we can understand the results in \Cref{sec:constAcc}.
These findings are in line with those concerning other Krylov methods in the presence of inexactness (see Section 11 of the survey by Simoncini and Szyld \cite{simoncini2007recent} and the references therein). \ac{GKB} is not the only method which benefits from lowering the accuracy of the inner process, and the reason why this is possible is linked to the decreasing entries of the coefficient vector.
\section{Relaxation strategy choices} \label{sec:relaxChoices}
We have seen in \Cref{subsec:theoPrecRelax} that we can allow the perturbation norm to increase in a safe way, as long as the process is guided by the decrease of $ \abs{ \hat \zeta } $. This means that we can adapt the tolerance of the inner solver, such that each call is increasingly cheaper, without compromising the accuracy of the final \ac{GKB} iterate. Then, at step $k$ we can call the inner solver with a tolerance equal to $\tau / f(\zeta)$. The scalar $\tau$ represents a constant chosen as either the target accuracy for the final \ac{GKB} solution, or something stricter, to counteract possible losses coming from floating-point arithmetic. The function $f$ is chosen based on the considerations described below, with the goal of minimizing the number of inner iterations. A similar relaxation strategy was used in a numerical study by Bouras and Frayss\'e \cite{bouras2005inexact} to control the magnitude of the perturbation introduced by performing inexact matrix-vector products. They employ Krylov methods with a residual norm minimization property, so the proposed criterion divides the target accuracy by the latest residual norm. In our case, because of the minimization property in \Cref{eq:errMinProp}, we need to use the error norm instead of the residual, since it is the only quantity which is strictly decreasing. Due to the actual error norm being unknown, we rely on approximations found via $\zeta$.
Considering the error characterization of the unperturbed process $\normM{\ev _k} ^2 = \sum_{i=k+1}^{n} \zeta _i ^2$, we can approximate the error by the first term of the sum, which is the dominant one. However, when starting iteration $k$ we do not know $\zeta _{k+1}$, not even $\zeta _{k}$, so we cannot choose a tolerance for the inner solver required to compute $\uv _k$ based on these. What we can do is predict these values via extrapolation, using information from the known values $\zeta _{k-1}$ and $\zeta _{k-2}$. We know that in general $ \frac{\beta _k}{\alpha _k} = \frac{\zeta _k}{\zeta _{k-1}} $ acts as a local convergence factor for the $\abs{\zeta }$ sequence. We approximate the one for step $k$ by using the previous one $\frac{\zeta _{k-1}}{\zeta _{k-2}}$. Then, we can compute the prediction $\tilde \zeta _k := \zeta _{k-1} \frac{\zeta _{k-1}}{\zeta _{k-2}}$. By squaring the local convergence factor, we get an approximation for $ \zeta _{k+1} $ as $\tilde \zeta _{k+1} := \zeta _{k-1} \left( \frac{\zeta _{k-1}}{\zeta _{k-2}} \right) ^2$, which we can use to approximate $ \normM{\ev _k} $ and adapt the tolerance of the inner solver. In practice, we only consider processes which include perturbation, and assume we have no knowledge of the unperturbed values $\abs{\zeta }$. As such, for better readability, we drop the hat notation with the implicit convention that we are referring to values which do include perturbation and use them in the extrapolation rule above. For some isolated iterations, it is possible that $\abs{\zeta _k } \geq \abs{\zeta _{k-1} } $. This behavior is then amplified through extrapolation, potentially leading to even larger values. In turn, this can cause an increase in the accuracy of the inner solver, following a stricter value for the tolerance parameter $\tau / f(\zeta)$. In \Cref{subsec:theoPrecRelax}, we have shown that there is no benefit in increasing this accuracy. 
The new perturbation would be smaller in norm, but the error $\normM{\ev _k}$ would be dominated by the previous, larger perturbation. As such, we propose computing several candidate values for the stopping tolerance of the inner solver, and choose the one with maximum value. Since these are only scalar quantities, the associated computational effort is negligible, but the impact of a well-chosen tolerance sequence can lead to significant savings in the total number of inner iterations. The candidate values are:
\begin{equation} \label{eq:relaxChoices} \begin{cases} \text{the value at the previous step}, \\ \tau / \abs{ \zeta _{k-1} } , \\ \tau / \abs{ \tilde \zeta _{k} } , \\ \tau / \abs{ \tilde \zeta _{k+1} } . \end{cases} \end{equation}
To prevent a limitless growth of the tolerance parameter, we impose a maximum value of $ 0.1 $. All these choices are safe in the sense that they do not lead to the introduction of perturbations which prevent the outer solver from reaching the target accuracy. We proceed by testing these relaxation strategies on the problem described in \Cref{sec:pbDesc}. The initial tolerance for \ac{CG} is set to $\tau = 10^{-8}$, one order of magnitude more precise than the one set for \ac{GKB}. As a baseline for comparison, we first keep the tolerance constant, equal to $\tau$. Then, we introduce adaptivity using $\tau / \abs{ \zeta_{k-1} }$. The third case changes the tolerance according to $\tau / \abs{ \tilde \zeta _{k+1} }$, the latter term being a predicted approximation of the current error. Finally, we employ a hybrid approach, where all candidate values in \Cref{eq:relaxChoices} are computed, but only the largest one is used. In the legends of the following plots, these four cases are labeled \ConstantCase, \AdaptiveCase, \PredictedCase, and \HybridCase, respectively. To monitor \ac{GKB} convergence, we track the lower bound for the energy norm of the error corresponding to the primal variable given in \Cref{eq:lowBnd}.
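The candidate computation in \Cref{eq:relaxChoices}, together with the extrapolation of $\zeta$ and the cap at $0.1$, amounts to a few scalar operations per outer step. A small Python sketch of the hybrid selection rule; the $\zeta$ history and the value $\tau = 10^{-8}$ are placeholders for quantities produced during an actual run:

```python
TAU, CAP = 1e-8, 0.1  # target-accuracy constant and maximum tolerance

def next_tolerance(zetas, previous):
    # Hybrid rule: the largest of the candidate tolerances, capped at CAP.
    z1, z2 = abs(zetas[-1]), abs(zetas[-2])  # |zeta_{k-1}|, |zeta_{k-2}|
    factor = z1 / z2                         # approximate local convergence factor
    tilde_k = z1 * factor                    # extrapolated |zeta_k|
    tilde_k1 = z1 * factor ** 2              # extrapolated |zeta_{k+1}|
    candidates = [previous, TAU / z1, TAU / tilde_k, TAU / tilde_k1]
    return min(max(candidates), CAP)

# Hypothetical decreasing zeta history from a superlinear phase.
history = [1.0, 0.5, 0.2, 0.05, 1e-2, 1e-3]
tol = TAU
for k in range(2, len(history) + 1):
    tol = next_tolerance(history[:k], tol)
    print(k, tol)  # the tolerance sequence never decreases
```

Including the previous value among the candidates makes the tolerance sequence monotone, which matches the observation that there is no benefit in tightening the inner solver again once a perturbation has already been introduced.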
For easy reference, all the choices used and their respective labels are given below. We define $\tau = 10^{-8}$.
\begin{align}
(\mathtt{\ConstantCase}) &: \tau, \label{eq:cst}\\
(\mathtt{\AdaptiveCase}) &: \nicefrac{\tau}{\abs{ \zeta _{k-1} }} , \label{eq:z}\\
(\mathtt{\PredictedCase}) &: \nicefrac{\tau}{\abs{ \tilde \zeta _{k+1} }} ,\label{eq:adasquare}\\
(\mathtt{\HybridCase}) &: \max\left\{\nicefrac{\tau}{\abs{ \zeta _{k-1} }}, \nicefrac{\tau}{\abs{ \tilde \zeta _{k} }}, \nicefrac{\tau}{\abs{ \tilde \zeta _{k+1} }}, \text{previous value} \right\} ,\label{eq:hybrid}\\
(\mathtt{\OptimalCase}) &: \nicefrac{\tau}{ ( \text{parameter} \cdot \abs{ \zeta _{k-1} } )} . \label{eq:zOptim}
\end{align}
Only the last scenario above, \OptimalCase{}, is left to explain. To see if the parameter-free choices can be improved, we run one more case which includes adaptivity by using $\abs{ \zeta _{k-1} }$, but also one constant parameter tuned experimentally. This is motivated by the fact that the considerations leading to \Cref{eq:relaxChoices} rely mostly on approximations and inequalities, which means we have an over-estimate of the error. It may be possible to reduce the total number of iterations further, by including an (almost) optimal, problem-dependent constant. The goal is to find a sequence of tolerance parameters with terms that are as large as possible, while guaranteeing the accuracy of the final \ac{GKB} iterate. All the results are given in \Cref{tab:varTolZ} and \Cref{fig:originallower bound}. \HybridCase{} offers the highest savings among the parameter-free choices (30\%), but \OptimalCase{}, the test with the problem-dependent constant, reveals that we can still improve this performance by about 6\%.
\import{images/pgf/lowBndVScumulCG}{original}
\begin{table} \centering
\caption{Reduction of the total number of \ac{CG} iterations. The \ac{CG} tolerance is relaxed according to \Cref{eq:cst,eq:z,eq:zOptim,eq:adasquare,eq:hybrid}. The parameter in \OptimalCase{} is $0.05$.
} \begin{tabular}{cccccc} \ac{CG} tolerance & \ConstantCase & \AdaptiveCase & \PredictedCase & \HybridCase & \OptimalCase \\ \ac{CG} iterations & 6963 & 5115 & 4897 & 4873 & 4399 \\ Savings \% & - & 26.54 & 29.67 & 30.02 & 36.82 \end{tabular} \label{tab:varTolZ} \end{table} \subsection{Increasing the savings by working on a simplified problem} \label{subsec:simple} Considering the observations in \Cref{subsec:theoPrecRelax} and the results plotted in \Cref{fig:originallower bound}, we can significantly reduce the accuracy of the inner solver only when the outer solver is in a superlinear convergence phase, when the $\abs{\zeta }$ sequence decreases rapidly. How much we can relax depends on the slope of the convergence curve. As such, to get the maximum reduction of the total number of iterations, the problem needs to be simplified, such that the convergence curve is as steep as possible and has no plateau. It is common to pair Krylov methods with other strategies, such as preconditioning, in order to improve their convergence behavior. The literature on these kinds of approaches is rich \cite{loghin2003schur,loghin2004analysis,bgl_2005,olshanskii2010acquired}. The following tests quantify the benefit of combining our proposed relaxation scheme with these other strategies. It has been shown by Arioli and Orban that the \ac{GKB} applied to the saddle-point system is equivalent to the \ac{CG} algorithm applied to the Schur complement equation \cite[Chapter~5]{orban2017iterative}. As such, the first step towards accelerating \ac{GKB} is to consider the Schur complement, defined as $\Sm := \Am ^T \Mm ^{-1} \Am$, especially its spectrum. Ideally, a spectrum with tightly clustered values and no outliers leads to rapid \ac{GKB} convergence \cite{KrDaTaArRu2020}. 
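For concreteness, the update rules of \Cref{eq:cst,eq:z,eq:adasquare,eq:hybrid} can be collected in a small routine. The following Python sketch is illustrative only (the function and argument names are ours), and assumes the predicted increments $\tilde \zeta$ are available from the estimates discussed above.

```python
def next_cg_tolerance(strategy, tau, zeta_prev, zeta_pred_cur, zeta_pred_next, prev_tol):
    """Stopping tolerance for the next inner CG solve.

    tau            -- target accuracy of the outer GKB iteration (1e-8 in our tests)
    zeta_prev      -- |zeta_{k-1}|, the last computed solution increment
    zeta_pred_cur  -- |tilde zeta_k|, predicted current increment
    zeta_pred_next -- |tilde zeta_{k+1}|, predicted next increment
    prev_tol       -- tolerance used for the previous inner solve
    """
    if strategy == "constant":
        return tau
    if strategy == "adaptive":      # Eq. (z)
        return tau / zeta_prev
    if strategy == "predicted":     # Eq. (adasquare)
        return tau / zeta_pred_next
    if strategy == "hybrid":        # Eq. (hybrid): the tolerance never decreases
        return max(tau / zeta_prev, tau / zeta_pred_cur,
                   tau / zeta_pred_next, prev_tol)
    raise ValueError(f"unknown strategy: {strategy}")
```

Note that the hybrid rule keeps the previous tolerance as a floor, so the sequence of inner tolerances is monotonically non-decreasing along the outer iteration.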
To get as close as possible to this clustering, we use the following two methods to induce positive changes in the spectrum: preconditioning with the \ac{LSC} \cite{elman2006block} and eigenvalue deflation. Each of them operates differently and leads to convergence curves with different traits. \import{images/pgf/}{easyIFISS20GKconv} In \Cref{fig:easyIFISS20GKconv}, we plot the \ac{GKB} convergence curve for each of these, using a direct inner solver. The \ac{LSC} aligns the small values in the spectrum with the main cluster and brings everything closer together. The corresponding \ac{GKB} convergence curve has no plateau and is much steeper than the curve for the unpreconditioned case. Using deflation, we remove the five smallest values from the spectrum, which constitute outliers with respect to the main cluster. The other values remain unchanged. As such, its convergence curve no longer has the initial plateau, but is otherwise the same as in the original problem. For both of these cases we apply the same strategies of relaxing the inner tolerance, to see how many total \ac{CG} iterations we can save. The rest of the set-up is identical to that described for \Cref{tab:varTolZ}. We tabulate the results in \Cref{tab:varTolLSC,tab:varTolDefl} and plot them in \Cref{fig:LSCpreclower bound,fig:deflatedlower bound}. They highlight that the best parameter-free results are obtained when using \HybridCase{}, which leads to savings of about 50\%, depending on the specific case. When comparing this parameter-free approach to \OptimalCase{}, which includes an experimental constant, we find that the hybrid approach can still be improved. Nonetheless, the difference in \ac{CG} iteration savings is not very high (up to 6\%), which supports the idea that our proposed strategy is efficient in a general-use setting. 
An additional observation pertaining to the plots is that even if convergence is relatively fast (\Cref{fig:LSCpreclower bound}) or slow (\Cref{fig:deflatedlower bound}), the final savings are still around 50\%, as long as there is no plateau. \import{images/pgf/lowBndVScumulCG}{LSCprec} \begin{table} \centering \caption{Reduction of the total number of \ac{CG} iterations after using the \ac{LSC} preconditioner. The \ac{CG} tolerance is relaxed according to \Cref{eq:cst,eq:z,eq:zOptim,eq:adasquare,eq:hybrid}. The parameter used in \OptimalCase{} is $0.007$. } \begin{tabular}{cccccc} \ac{CG} tolerance & \ConstantCase & \AdaptiveCase & \PredictedCase & \HybridCase & \OptimalCase \\ \ac{CG} iterations & 2052 & 1301 & 1073 & 1046 & 919 \\ Savings \% & - & 36.60 & 47.71 & 49.03 & 55.21 \end{tabular} \label{tab:varTolLSC} \end{table} \import{images/pgf/lowBndVScumulCG}{deflated} \begin{table} \centering \caption{Reduction of the total number of \ac{CG} iterations after using deflation. The \ac{CG} tolerance is relaxed according to \Cref{eq:cst,eq:z,eq:zOptim,eq:adasquare,eq:hybrid}. The parameter used in \OptimalCase{} is $0.09$. } \begin{tabular}{cccccc} \ac{CG} tolerance & \ConstantCase & \AdaptiveCase & \PredictedCase & \HybridCase & \OptimalCase \\ \ac{CG} iterations & 4830 & 2625 & 2416 & 2411 & 2110 \\ Savings \% & - & 45.65 & 49.98 & 50.08 & 56.31 \end{tabular} \label{tab:varTolDefl} \end{table} \section{\ac{GKB} with the augmented Lagrangian approach} \label{sec:AL} The method of the \ac{AL} has been used successfully to speed up the convergence of the \ac{GKB} algorithm, with this effect being theoretically explained by Arioli et al. \cite{KrDaTaArRu2020}. Perhaps most striking is the potential to reach mesh-independent convergence, provided that the augmentation parameter is large enough. 
Another use of the \ac{AL} method is to transform the (1,1)-block of a saddle-point system, say $\Wm$, from a positive semi-definite matrix to a positive definite one. However, this can happen only if the off-diagonal block $\Am$ is full rank or, more generally, if $\mbox{ker}(\Wm)\cap \mbox{ker}(\Am^T)=\{ \mZ \}$. Let $\Nm \in \mathbb{R}^{n\times n}$ be a symmetric, positive definite matrix. A given symmetric, positive semi-definite matrix $\Wm \in \mathbb{R}^{m\times m}$ can then be transformed into a positive-definite one by \begin{align} \Mm := \Wm + \Am \Nm^{-1} \Am^T. \end{align} The upper right-hand side term $\gvv$ then becomes \begin{equation} \gvv := \gvv + \Am \Nm^{-1}\rv. \end{equation} With these changes in place, we can proceed to using the \ac{GKB} algorithm, as described in \Cref{sec:GKBtheory}. Note that if the matrix $\Wm$ is already symmetric positive-definite, the transformation of the (1,1)-block is not necessary for using the \ac{GKB} method. However, the application of the \ac{AL} approach does lead to a better conditioning of the Schur complement, which significantly improves convergence speed \cite{KrDaTaArRu2020}. As in \Cref{sec:GKBtheory}, we choose $\Nm=\frac{1}{\eta} \Id$. There is, as usual, no free lunch: depending on the conditioning of the matrix $\Am$ and the magnitude of $\eta$, the \ac{AL} can also degrade the conditioning of the $\Mm$ matrix as a side-effect. We test whether the augmentation interacts with the strategies we propose in \Cref{sec:relaxChoices}, namely whether we can still achieve about 50\% savings in the total number of inner iterations. The strategies are applied when solving the problem described in \Cref{sec:pbDesc} after an augmentation with a parameter $\eta=1000$, with the results being given in \Cref{tab:varTolAL} and plotted in \Cref{fig:augLaglower bound}. 
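As a small sanity check of the transformation above (with $\Nm = \frac{1}{\eta} \Id$, so that $\Nm^{-1} = \eta \Id$), the following NumPy sketch builds the augmented block for a toy singular $\Wm$ whose kernel is not contained in $\mbox{ker}(\Am^T)$, and verifies that the result is positive definite; the sizes and values are illustrative only.

```python
import numpy as np

def augment(W, A, g, r, eta):
    """Augmented-Lagrangian transformation with N = (1/eta) * I:
    new (1,1)-block M = W + eta * A A^T and updated upper RHS g + eta * A r."""
    M = W + eta * (A @ A.T)
    g_new = g + eta * (A @ r)
    return M, g_new

# W is only positive semi-definite: its kernel is spanned by e_3 ...
W = np.diag([1.0, 1.0, 0.0])
A = np.array([[1.0], [0.0], [1.0]])   # full rank; A^T e_3 = 1 != 0
M, g_new = augment(W, A, np.zeros(3), np.zeros(1), eta=1000.0)

# ... but the augmented (1,1)-block is positive definite
assert np.linalg.eigvalsh(M).min() > 0
```

Here $\wv^T \Mm \wv = w_1^2 + w_2^2 + \eta (w_1 + w_3)^2$, which vanishes only for $\wv = 0$, illustrating that the kernel condition is exactly what makes the augmentation succeed.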
Comparing the percentage of iterations saved in this case to those obtained in \Cref{sec:relaxChoices}, it is clear that, when combined with the \ac{AL} method, the strategy of variable inner tolerance does help to reduce the total number of inner iterations, but by a lower percentage. \import{images/pgf/lowBndVScumulCG}{augLag} \begin{table} \centering \caption{Reduction of the total number of \ac{CG} iterations after using the \ac{AL} ($\eta =1000$). The \ac{CG} tolerance is relaxed according to \Cref{eq:cst,eq:z,eq:zOptim,eq:adasquare,eq:hybrid}. The parameter used in \OptimalCase{} is $0.005$. } \begin{tabular}{cccccc} \ac{CG} tolerance & \ConstantCase & \AdaptiveCase & \PredictedCase & \HybridCase & \OptimalCase \\ \ac{CG} iterations & 2601 & 1886 & 1707 & 1661 & 1647 \\ Savings \% & - & 27.49 & 34.37 & 36.14 & 36.68 \end{tabular} \label{tab:varTolAL} \end{table} Since the \ac{AL} method modifies the (1,1)-block of the saddle-point system, it changes the difficulty of the inner problem and how many iterations the inner solver needs to perform. As such, a global comparison in terms of the number of inner iterations among all the scenarios we studied (original, preconditioned, deflated, including the \ac{AL}) is not fair unless the inner problem has the same degree of difficulty for all the cases. To verify the generality of our method, we also apply it in a different context from that described in \Cref{sec:pbDesc}. Let us consider a mixed Poisson problem. We solve the Poisson equation \(-\Delta u=f\) on the unit square $(0,1)^2$ using a mixed formulation. We introduce the vector variable $\vec{\sigma}=\nabla u$. Find $(\vec{\sigma}, u)\in \Sigma \times W $ such that \begin{align} \vec{\sigma}-\grad u&=0\\ -\mathrm{div} (\vec{\sigma}) &= f, \end{align} where homogeneous Dirichlet boundary conditions are imposed for $u$ at all walls. The forcing term $f$ is random and uniformly drawn in $(0,1)$. 
The discretization is done with a lowest-order Raviart--Thomas space $\Sigma^h \subset \Sigma$, and a space $W^h \subset W$ containing piecewise constant basis functions. To produce the following numerical results, we used the finite element package Firedrake\footnote{\url{www.firedrakeproject.org}} coupled with a PETSc~\cite{petsc-web-page,petsc-user-ref,petsc-efficient} implementation of \ac{GKB}\footnote{\url{https://petsc.org/release/docs/manualpages/PC/PCFIELDSPLIT.html\#PCFIELDSPLIT}}, adapted to include dynamic relaxation. The problem setup follows the saddle-point demo provided by Firedrake\footnote{\url{https://www.firedrakeproject.org/demos/saddle_point_systems.py.html}}. The test case has \num{328192} degrees of freedom, of which \num{197120} are associated with the (1,1)-block. The \ac{GKB} delay parameter is set to 3. The augmentation parameter $\eta$ is set to 500 and the tolerance for the \ac{GKB} to \num{1e-5}. The results are presented in \Cref{fig:mixedPoissonLowBnd}. We confirm the results presented above with a reduction of over 60\% in the total number of inner \ac{CG} iterations with respect to the constant-accuracy setup. 
\begin{figure} \centering \begin{tikzpicture} \begin{axis}[ ymode=log, legend pos= north east, width=\FigWid \textwidth, height=\FigHei \textwidth, xlabel={Inner CG iterations}, ylabel={Lower bound} ] \addplot+[name path=A] table [x=InnerKSPCumul, y=lowbnd]{images/petscResults/MixedPoisson/monitorCGJacobiCST500.txt}; \addlegendentry{ \ConstantCase} \addplot+[] table [x expr=\thisrow{InnerKSPCumul}, y=lowbnd]{images/petscResults/MixedPoisson/monitorCGJacobiHybrid500.txt}; \addlegendentry{\HybridCase } \addplot+[] table [x expr=\thisrow{InnerKSPCumul}, y=lowbnd]{images/petscResults/MixedPoisson/monitorCGJacobiAdaSquare500.txt}; \addlegendentry{ \PredictedCase} \addplot+[] table [x expr=\thisrow{InnerKSPCumul}, y=lowbnd]{images/petscResults/MixedPoisson/monitorCGJacobiZ500.txt}; \addlegendentry{ \AdaptiveCase} \addplot[name path=C, draw=none] table [x expr=0.5*\thisrow{InnerKSPCumul}, y=lowbnd]{images/petscResults/MixedPoisson/monitorCGJacobiCST500.txt} node[pos=1]{50\%}; \addplot[black!30, opacity=0.5] fill between[of=A and C]; \addplot[name path=C, draw=none] table [x expr=0.35*\thisrow{InnerKSPCumul}, y=lowbnd]{images/petscResults/MixedPoisson/monitorCGJacobiCST500.txt} node[pos=1]{65\%}; \addplot[black!20, opacity=0.5] fill between[of=A and C]; \end{axis} \end{tikzpicture} \caption{Lower bound (\Cref{eq:lowBnd}) for the error norm associated with the \ac{GKB} iterates versus the cumulative number of inner \ac{CG} iterations when solving the Mixed Poisson problem. We also use the \ac{AL} ($\eta =500$). See \Cref{eq:cst,eq:z,eq:adasquare,eq:hybrid} for the strategies denoted by the labels.} \label{fig:mixedPoissonLowBnd} \end{figure} \begin{table} \centering \caption{Reduction of the total number of \ac{CG} iterations after using the \ac{AL} ($\eta =500$) on the Mixed Poisson problem. The \ac{CG} tolerance is relaxed according to \Cref{eq:cst,eq:z,eq:adasquare,eq:hybrid}. 
} \begin{tabular}{ccccc} \ac{CG} tolerance & \ConstantCase & \AdaptiveCase & \PredictedCase & \HybridCase \\ \ac{CG} iterations & 10845 & 4680 & 4105 & 4225 \\ Savings \% & - & 56.84 & 62.15 & 61.04 \end{tabular} \label{tab:DarcyvarTolAL} \end{table} \section{Conclusions} We have studied the behavior of the \ac{GKB} algorithm in the case where the inner problem, i.e. the solution of a linear system, is solved iteratively. We have found that the inner solver does not need to be as precise as a direct one in order to achieve a \ac{GKB} solution of a predefined accuracy. Furthermore, we have proposed algorithmic strategies that reduce the cost of the inner solver, quantified as the cumulative number of inner iterations. This is possible by selecting criteria to change the stopping tolerance. To motivate these choices, we have studied the perturbation generated by the inexact inner solver. The findings show that the perturbation introduced in early iterations has a higher impact on the accuracy of the solution than that introduced in later ones. We devised a dynamic way of adapting the accuracy of the inner solver at each call to minimize its cost. The initial, high accuracy is gradually reduced, keeping the resulting perturbation under control. Our relaxation strategy is inexpensive, easy to implement, and has reduced the total number of inner iterations by 33--63\% in our tests. The experiments also show that including methods such as deflation, preconditioning and the augmented Lagrangian has no negative impact and can lead to a higher percentage of savings. Another advantage is that our method does not rely on additional parameters and is thus usable in a black-box fashion. \paragraph{Acknowledgments} The authors thank Mario Arioli for many inspiring discussions and advice. \section{Comparison with other methods} Applying an inexact matrix inverse to a vector can also be seen as an inexact matrix-vector product. 
The authors of \cite{bouras2005inexact} have studied how applying this kind of product influences the overall achievable precision of other Krylov subspace methods. They found that it is possible to relax the precision of the products in a way that still allows finding the required solution. The initial precision is high and then gradually decreased according to $\epsilon/\normEu{\rv _k},$ where $\epsilon$ is the target precision for the solution and $\normEu{\rv _k}$ is the Euclidean norm of the $k$-th residual. In their study \cite{simoncini2003theory}, Simoncini and Szyld showed that the strategy above does not always work as intended, sometimes preventing the solver from converging to the required solution. This can be remedied by including a problem-dependent constant. These authors defined the constant for GMRES based on the extremal singular values of the upper Hessenberg matrix specific to this method. In the same article, they also applied the strategy to problems involving the Schur complement of a saddle-point matrix. Given the equivalence between \ac{GKB} applied to a saddle-point problem and \ac{CG} applied to the associated Schur complement problem, the discussion in \cite{simoncini2003theory} also applies to our context. To control the accuracy of the inner solver, they define the constant \begin{equation} \label{eq:simoConst} l= \frac{\sigma_{\min}(\Sm)}{\sigma_{\max}(\Am^T \Mm^{-1}) m_*}, \end{equation} where $\Sm= \Am^T \Mm ^ {-1}\Am$ is the Schur complement and $m_*$ is the maximum number of iterations allowed for the outer solver. The relaxation applied to the inner solver tolerance is then guided by \begin{equation} \label{eq:rezRelax} l \epsilon/\normEu{\rv _k}, \end{equation} with $l=1$ being the choice in \cite{bouras2005inexact} and \Cref{eq:simoConst} the choice in \cite{simoncini2003theory}. The purpose of this section is to compare the methods described above with the one we propose in \Cref{sec:relaxChoices}. 
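In code form, the residual-based rule of \Cref{eq:rezRelax} amounts to the following (an illustrative Python sketch; computing the constant $l$ of \Cref{eq:simoConst} additionally requires estimates of the extremal singular values):

```python
def residual_based_tolerance(eps, res_norm, l=1.0):
    """Inner tolerance l * eps / ||r_k||: l = 1 recovers the choice of
    Bouras and Fraysse; a problem-dependent l < 1 that of Simoncini and Szyld."""
    return l * eps / res_norm

eps = 1e-8
outer_residuals = [1.0, 1e-1, 1e-3, 1e-6]   # illustrative decreasing residual norms
tols = [residual_based_tolerance(eps, r) for r in outer_residuals]
# the inner tolerance is relaxed monotonically as the outer residual shrinks
assert tols == sorted(tols)
```

In contrast to our $\zeta$-based rules, the relaxation here is driven by the outer residual norm, and the safeguard is the multiplicative constant $l$ rather than a floor on the tolerance.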
We use \HybridCase{} (\Cref{eq:hybrid}), which has been shown to be the most effective parameter-free choice. We consider all the previous scenarios: original problem, deflated, preconditioned, augmented Lagrangian. In the subsequent tables, the strategies in \Cref{eq:rezRelax} are labeled \BourasCase{} for $l=1$ and \SimonciniCase{} for \Cref{eq:simoConst}. \begin{table} \centering \caption{Reduction of the total number of \ac{CG} iterations. We compare our proposed method \HybridCase{} (\Cref{eq:hybrid}) with two alternatives from the literature (\Cref{eq:simoConst,eq:rezRelax}).} \label{tab:compOrig} \resizebox{\textwidth}{!}{% \begin{tabular}{cccccc} \ac{CG} tolerance & \ConstantCase & \HybridCase & \BourasCase & \SimonciniCase ($m_*=100$) & \SimonciniCase ($m_*=60$) \\ \ac{CG} iterations & 6963 & 4873 & - & 7092 & 6891 \\ Savings \% & - & 30.02 & - & -1.85 & 1.03 \end{tabular}% } \end{table} \begin{table} \centering \caption{Reduction of the total number of \ac{CG} iterations after using the LSC preconditioner. We compare our proposed method \HybridCase{} (\Cref{eq:hybrid}) with two alternatives from the literature (\Cref{eq:simoConst,eq:rezRelax}).} \label{tab:compLSC} \resizebox{\textwidth}{!}{% \begin{tabular}{cccccc} \ac{CG} tolerance & \ConstantCase & \HybridCase & \BourasCase & \SimonciniCase ($m_*=100$) & \SimonciniCase ($m_*=15$) \\ \ac{CG} iterations & 2052 & 1046 & 1094 & 1808 & 1601 \\ Savings \% & - & 49.03 & 46.69 & 11.89 & 21.98 \end{tabular}% } \end{table} \begin{table} \centering \caption{Reduction of the total number of \ac{CG} iterations after using deflation. 
We compare our proposed method \HybridCase{} (\Cref{eq:hybrid}) with two alternatives from the literature (\Cref{eq:simoConst,eq:rezRelax}).} \label{tab:compDefl} \resizebox{\textwidth}{!}{% \begin{tabular}{cccccc} \ac{CG} tolerance & \ConstantCase & \HybridCase & \BourasCase & \SimonciniCase ($m_*=100$) & \SimonciniCase ($m_*=40$) \\ \ac{CG} iterations & 4830 & 2411 & - & 3529 & 3283 \\ Savings \% & - & 50.08 & - & 26.94 & 32.03 \end{tabular}% } \end{table} \begin{table} \centering \caption{Reduction of the total number of \ac{CG} iterations after using the augmented Lagrangian ($\eta =1000$). We compare our proposed method \HybridCase{} (\Cref{eq:hybrid}) with two alternatives from the literature (\Cref{eq:simoConst,eq:rezRelax}).} \label{tab:compAL} \resizebox{\textwidth}{!}{% \begin{tabular}{cccccc} \ac{CG} tolerance & \ConstantCase & \HybridCase & \BourasCase & \SimonciniCase ($m_*=100$) & \SimonciniCase ($m_*=40$) \\ \ac{CG} iterations & 2601 & 1661 & - & 1954 & 1739 \\ Savings \% & - & 36.14 & - & 24.88 & 33.14 \end{tabular}% } \end{table} From these comparisons (see \Cref{tab:compOrig,tab:compLSC,tab:compDefl,tab:compAL}), we make the following remarks. Using \BourasCase{} prevented \ac{GKB} from converging by increasing the tolerance too quickly, leading to excessive perturbation, confirming the possibility highlighted in \cite{simoncini2003theory}. This method only worked for the problem which included preconditioning. For \SimonciniCase{}, we first considered a set of tests where $m_*=100$ is the maximum number of iterations allowed for the outer solver, as done in the original article. This method worked as intended, with \ac{GKB} reaching the required level of precision. A second set of tests takes $m_*$ to be equal to the number of \ac{GKB} iterations necessary to reach the target precision. It is possible to know or at least approximate this number, either based on theory or previous solver runs. 
With this set of tests, we wanted to see how \SimonciniCase{} would perform in the best-case scenario. Indeed, with a lower constant, the savings increase, and \ac{GKB} still converges. We can highlight three advantages of our \HybridCase{} method. First, it is safe, in the sense of keeping the perturbation under control and allowing \ac{GKB} to converge properly. Second, it is simple, in the sense of not requiring any parameters, whether found by numerical experiments or by estimating extremal singular values. Third, it is effective, leading to higher savings than the other two methods considered here. \section{Introduction} Saddle-point systems can be found in a variety of application fields, such as mixed finite element methods in fluid dynamics or interior point methods in optimization. An extensive overview of application fields and solution methods for this kind of problem is presented in the well-known article \cite{bgl_2005} by Benzi, Golub and Liesen. In the following study, we focus on an iterative solver based on the Golub-Kahan bidiagonalization: the generalized \ac{GKB} algorithm. This solver is designed for saddle-point systems, and was introduced by Arioli~\cite{Ar2013}. It belongs to the family of Krylov subspace methods and, as such, relies on specific orthogonality conditions, as we will review in more detail in \Cref{sec:GKBtheory}. Enforcing these orthogonality conditions requires solving an \emph{inner problem}, i.e.~formally computing products with matrix inverses (as described in \Cref{alg:GKB}). In practice, this computation is performed with a linear system solver. For this task, we will explore in this article the use of iterative methods as a replacement for the direct methods that have been used within \ac{GKB} so far. This is essential for very large problems, such as those coming from a discretized \ac{PDE} in 2D or 3D, when direct solvers may reach their limits. 
Using an inner iterative solver might also be advantageous from another point of view, as we motivate in the following. The solution of large linear systems is often the bottleneck in scientific computing. The computational cost and, consequently, the execution time and/or the energy consumption can become prohibitive. For the inner-outer iterative \ac{GKB} solver, in turn, the principal and costliest part is the solution of the inner system at each outer iteration. One approximate metric to measure the cost of the \ac{GKB} solver is the aggregate sum of the number of inner iterations. For a given setup, the cost of the \ac{GKB} method can hence be optimized by executing only the minimal number of inner iterations necessary to achieve a prescribed accuracy of the solution. To reduce this number, two observations can be taken into account. First, for a given application it is often unnecessary to solve the linear system with the highest achievable accuracy. This could be the case, for example, in the solution of a discretized \ac{PDE}, when the discretization already introduces an error. A precise solution of the linear system would not improve the numerical solution with respect to the analytic solution of the \ac{PDE} any further than the discretization allows. The second observation, which is the main point of the study in this paper, is the following. The solution of the inner linear system in the \ac{GKB} method has to be exact, in theory. If we choose a rather low accuracy for the outer iterative solver, an exact inner solution might, however, no longer be necessary, as long as the inner error does not alter the chosen accuracy of the numerical solution. This strategy results in a further reduction of the number of inner iterations, since the inner solver will converge in fewer iterations when a less strict stopping tolerance is used. 
In the following study, we address the case where the inner solver has a prescribed stopping tolerance, and examine how this limited accuracy affects the outer process and the quality of its iterates. We will show that, with the appropriate choice of parameters, it is possible to make use of inner iterative solvers without compromising the accuracy of the \ac{GKB} result. As can be seen immediately, the lower the accuracy for the inner solver, the less expensive the \ac{GKB} method will be. Furthermore, we take advantage of the versatility of iterative methods by adapting the stopping tolerance of the inner solver dynamically. In other words, we prescribe the tolerance of the inner solver according to some criteria determined at each outer iteration. This can lead to a reduction of the cost, since only a minimal number of inner iterations are executed. Typically, we will reduce the required accuracy for later instances of the inner solver, since later steps of the outer \ac{GKB}-iteration may contribute less to the overall accuracy. One particular advantage of our proposed method is its generality. The strategy is independent of other choices which are problem-specific, such as the preconditioner for a Krylov method. We perform most of our tests on a relatively small Stokes flow problem, to illustrate the salient features. We confirm our findings by one final test on a larger case of the mixed Poisson problem, including the use of the augmented Lagrangian method, to demonstrate the use in a realistic scenario. Our study has a similar context to other works on inexact Krylov methods \cite{bouras2000relaxation,bouras2005inexact}, where these algorithms have been investigated from a numerical perspective. In these articles, the inexactness originates from a limited accuracy of the matrix-vector multiplication or that of the solution of a local sub-problem. 
Similar to what we have described above, it was found that the inner accuracy can be varied from step to step while still achieving convergence of the outer method. It was shown experimentally that the initial tolerance should be strict, then relaxed gradually, with the change being guided by the latest residual norm. Other works complemented the findings with theoretical insights, relevant to several algorithms of the Krylov family \cite{simoncini2003theory, simoncini2005relaxed,van2004inexact}. It was noted that, in some cases, unless a problem-dependent constant is included, the outer solver may fail to converge if the accuracy of the inner solution is adapted only based on the residual norm. This constant can be computed based on extreme singular values, as shown by Simoncini and Szyld \cite{simoncini2003theory}. Another source of inexactness can be the application of a preconditioner via an iterative method. Van den Eshof, Sleijpen and van Gijzen considered inexactness in Krylov methods originating both from matrix-vector products and variable preconditioning, using iterative methods from the GMRES family \cite{van2005relaxation}. Similarly to earlier work, their analysis relies on the connection between the residual and the accuracy of the solution to the inner problem. Since applying the preconditioner has the same effect as a matrix-vector product, the same strategies can be applied to more complex, flexible algorithms, such as those involving variable preconditioning: FGMRES \cite{saad1993flexible}, GMRESR \cite{van1994gmresr}, etc. A flexible version of the Golub-Kahan bidiagonalization is employed by Chung and Gazzola to find regularized solutions to a problem of image deblurring \cite{chung2019flexible}. In a more recent paper with the same application, Gazzola and Landman develop inexact Krylov methods as a way to deal with approximate knowledge of $\Am$ and $\Am ^T$ \cite{gazzola2021regularization}. 
Erlangga and Nabben construct a framework including nested Krylov solvers. They develop a multilevel approach to shift small eigenvalues, leading to a faster convergence of the linear solver \cite{erlangga2008multilevel}. In subsequent work related to multilevel Krylov methods, Kehl, Nabben and Szyld apply preconditioning in a flexible way, via an adaptive number of inner iterations \cite{kehl2019adaptive}. Baumann and van Gijzen analyze solving shifted linear systems and, by applying flexible preconditioning, also develop nested Krylov solvers \cite{baumann2015nested}. McInnes et al. consider hierarchical and nested Krylov methods with a small number of vector inner products, with the goal of reducing the need for global synchronization in a parallel computing setting \cite{mcinnes2014hierarchical}. Other than solving linear systems, inexact Krylov methods have been studied when tackling eigenvalue problems, as in the paper by Golub, Zhang and Zha \cite{GolZhaZha2000}. Although using different arguments, it was shown that the strategy of increasing the inner tolerance is successful for this kind of problem as well. Xu and Xue make use of an inexact rational Krylov method to solve nonsymmetric eigenvalue problems and observe that the accuracy of the inner solver (GMRES) can be relaxed in later outer steps, depending on the value of the eigenresidual \cite{xu2022inexact}. Dax computes the smallest eigenvalues of a matrix via a restarted Krylov solver which includes inexact matrix inversion \cite{dax2019restarted}. 
Our paper is structured as follows: in \Cref{sec:GKBtheory}, we review the theory and properties of the \ac{GKB} algorithm; in \Cref{sec:pbDesc}, we describe the specific problem we chose to use as test case for the numerical experiments; \Cref{sec:constAcc} is meant to illustrate the interactions between the accuracy of the inner solver and that of the outer one in a numerical test setting; \Cref{sec:pertErrAna} describes the link between the error of the outer solver and the perturbation induced by the use of an iterative inner solver. We describe and test our proposed strategy of using a variable tolerance parameter for the inner solver in \Cref{sec:relaxChoices}. We explore the interaction between the method of the \ac{AL} and our strategy in \Cref{sec:AL}. The final section is devoted to concluding remarks. \section{Generalized Golub-Kahan algorithm} \label{sec:GKBtheory} We are interested in saddle-point problems of the form \begin{align}\label{eqn:spsW} \left[ \begin{array}{cc} \Mm & \Am \\ \Am ^T & \mZ \end{array} \right] \left[ \begin{array}{c} \wv \\ \pv \end{array} \right] = \left[ \begin{array}{c} \gvv \\ \rv \end{array} \right] \end{align} with $\Mm\in \mathbb{R}^{m\times m}$ being a symmetric positive definite matrix and $\Am\in \mathbb{R}^{m \times n}$ a full rank constraint matrix. The generalized \ac{GKB} algorithm for the solution of a class of saddle-point systems was introduced by Arioli \cite{Ar2013}. To apply it to the system in \Cref{eqn:spsW}, we first need the upper block of the right-hand side to be equal to zero. To this end, we use the transformation \begin{align} \label{eq:iniTransf} \uv &= \wv - \Mm^{-1}\gvv ,\\ \bv &= \rv - \Am ^T \Mm^{-1} \gvv. 
\end{align} The resulting system is \begin{align}\label{eqn:sps} \left[ \begin{array}{cc} \Mm & \Am \\ \Am^T & 0 \end{array} \right] \left[ \begin{array}{c} \uv \\ \pv \end{array} \right] = \left[ \begin{array}{c} 0 \\ \bv \end{array} \right], \end{align} which is equivalent to that in \Cref{eqn:spsW}. We can recover the $\wv$ variable as $\wv = \uv + \Mm^{-1}\gvv$. Let $\Nm\in \mathbb{R}^{n\times n}$ be a symmetric positive definite matrix. To properly describe the \ac{GKB} algorithm, we need to define the following norms \begin{equation} \normM{\vvv} = \sqrt{ \vvv ^T \Mm \vvv }; \qquad \normN{\qv} = \sqrt{ \qv ^T \Nm \qv }; \qquad \normNI{\yv} = \sqrt{ \yv ^T \Nm^{-1} \yv }. \end{equation} Given the right-hand side vector $\bv \in \bR ^n$, the first step of the bidiagonalization is \begin{equation} \label{eq:iniGKBVec} \beta _1 = \normNI{\bv}, \quad \qv _1 = \Nm ^{-1} \bv / \beta _1. \end{equation} After $k$ iterations, the partial bidiagonalization is given by \begin{equation} \label{eq:oriGKB} \begin{cases} \Am \Qm _k = \Mm \Vm _k \Bm _k, &\qquad \Vm _k ^T \Mm \Vm _k = \Id _k \\ \Am ^T \Vm _k = \Nm \Qm _k \Bm ^T _k + \beta _ {k+1} \qv _{k+1} \ev _k^T, &\qquad \Qm _k ^T \Nm \Qm _k = \Id _k \end{cases}, \end{equation} with the bidiagonal matrix \begin{equation} \label{eq:BmatGKB} \Bm _k= \left[ \begin{matrix} \alpha_1 & \beta_2 & 0 & \ldots & 0 \\ 0 & \alpha_2 & \beta_3 & \ldots & 0 \\ \vdots & \vdots & \vdots & \vdots & \vdots \\ 0 & \ldots & 0 & \alpha_{k-1} & \beta_k \\ 0 & \ldots & 0 & 0 & \alpha_{k} \end{matrix} \right] \end{equation} and the residual term $\beta _ {k+1} \qv _{k+1} \ev _k^T$. The columns of $\Vm _k$ are orthonormal vectors with respect to the inner product and norm induced by $\Mm$, while the same holds for $\Qm _k$ and $\Nm$ respectively \begin{equation} \begin{split} & \vvv _i ^T \Mm \vvv _j = 0, \forall i \neq j; \qquad \normM{\vvv _k} = 1; \\ & \qv _i ^T \Nm \qv _j = 0, \forall i \neq j; \qquad \normN{\qv _k} = 1. 
\end{split} \end{equation} Prior to the normalization leading to $\vvv _k$ and $\qv _k$, the norms are stored as $\alpha _k$ for $\vvv _k$ and $\beta _k $ for $\qv _k$, as detailed in \cref{alg:GKB}. Using $\Vm _k$, $\Qm _k$ and the relations in \Cref{eq:oriGKB}, we can transform the system from \Cref{eqn:sps} into a simpler form \begin{equation}\label{eq:transfSys} \left[ \begin{matrix} \Id _k& \Bm _k \\ \Bm _k ^T & \mZ \end{matrix} \right] % \left[ \begin{matrix} \zv _k \\ \yv _k \end{matrix} \right] = \left[ \begin{matrix} \mZ \\ \Qm _k ^T \bv \end{matrix} \right]. \end{equation} With the choice for $\qv _1$ given in \Cref{eq:iniGKBVec}, we have that $\Qm _k ^T \bv = \beta _1 \ev _1 $. The solution components to \Cref{eq:transfSys} are then given by \begin{equation} \label{eq:zandy} \zv _k= \beta _1 \Bm _k ^{-T} \ev _1; \quad \yv _k= - \Bm _k ^{-1} \zv _k, \end{equation} where $ \Bm _k ^{-T}$ is the inverse of $ \Bm _k ^{T}$. We can build the $k$-th approximate solution to \Cref{eqn:sps} as \begin{equation} \label{eq:GKBapx} \uv _k = \Vm _k \zv _k; \quad \pv _k = \Qm _k \yv _k. \end{equation} In particular, after a number of $k=n$ steps and assuming exact arithmetic, we have $\uv _k = \uv$ and $\pv _k = \pv $, meaning we have found the exact solution to \Cref{eqn:sps}. A proof of why $n$ terms are sufficient to find the exact solution is given in the introductory paper by Arioli \cite{Ar2013}. This corresponds to a scenario where it is necessary to perform the $n$ iterations, although, for specific problems with particular features, the solution may be found after fewer steps. As $ k \rightarrow n$, the quality of the approximation improves ($\uv _k \rightarrow \uv$ and $\pv _k \rightarrow \pv $), with the bidiagonalization residual $\beta _ {k+1} \qv _{k+1} \ev _k^T$ vanishing for $k=n$. 
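The recurrences above and the finite-termination property can be checked on a small dense example. The following NumPy sketch (our own toy data, with $\Nm = \Id$, so that $\Nm$-norms reduce to Euclidean norms) runs the bidiagonalization for $n$ steps with exact inner solves, assembles $\Bm _n$, and recovers $\uv$ and $\pv$ from $\zv$ and $\yv$:

```python
import numpy as np

# Toy saddle-point problem with N = I (sizes and data are our own choices).
rng = np.random.default_rng(0)
m, n = 12, 5
G = rng.standard_normal((m, m))
M = G @ G.T + m * np.eye(m)        # symmetric positive definite M
A = rng.standard_normal((m, n))    # full-column-rank constraint matrix
b = rng.standard_normal(n)         # right-hand side of the lower block

# Golub-Kahan bidiagonalization with exact inner solves
V = np.zeros((m, n)); Q = np.zeros((n, n))
alpha = np.zeros(n); beta = np.zeros(n)
beta[0] = np.linalg.norm(b)        # beta_1 = ||b||_{N^{-1}} with N = I
Q[:, 0] = b / beta[0]
w = np.linalg.solve(M, A @ Q[:, 0])
alpha[0] = np.sqrt(w @ M @ w); V[:, 0] = w / alpha[0]
for k in range(n - 1):
    g = A.T @ V[:, k] - alpha[k] * Q[:, k]
    beta[k + 1] = np.linalg.norm(g); Q[:, k + 1] = g / beta[k + 1]
    w = np.linalg.solve(M, A @ Q[:, k + 1] - beta[k + 1] * (M @ V[:, k]))
    alpha[k + 1] = np.sqrt(w @ M @ w); V[:, k + 1] = w / alpha[k + 1]

B = np.diag(alpha) + np.diag(beta[1:], 1)              # upper bidiagonal B_n
z = np.linalg.solve(B.T, beta[0] * np.eye(n)[:, 0])    # z = beta_1 B^{-T} e_1
y = -np.linalg.solve(B, z)                             # y = -B^{-1} z
u, p = V @ z, Q @ y

# Reference: direct solve of the full saddle-point system
K = np.block([[M, A], [A.T, np.zeros((n, n))]])
ref = np.linalg.solve(K, np.concatenate([np.zeros(m), b]))
```

After $n$ steps, $\Vm ^T \Mm \Vm = \Id$, $\Qm ^T \Qm = \Id$, $\Am \Qm = \Mm \Vm \Bm$, and the pair $(\uv, \pv)$ matches the direct solution of the saddle-point system to machine precision.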
Given the structure of $\beta _1 \ev _1$ and $ \Bm ^{T}$, we find \begin{equation} \label{eq:zetaDef} \zeta _1= \frac{\beta _1}{\alpha _1}, \quad \zeta _k = - \zeta _{k-1} \frac{\beta _k}{\alpha _k}, \quad \zv _k = \left[ \begin{matrix} \zv _{k-1} \\ \zeta _k \end{matrix} \right] \end{equation} in a recursive manner. Then, $\uv _k$ is computed as $\uv_k = \uv _{k-1} + \zeta _k \vvv _k $. In order to obtain a recursive formula for $\pv$ as well, we introduce the vector \begin{equation} \dv _k= \frac{\qv _k - \beta _k \dv _{k-1}}{\alpha _k}, \quad \dv _1 = \frac{\qv _1}{\alpha _1}. \end{equation} Finally, the update formulas are \begin{equation} \label{eq:GKBupdate} \uv _k= \uv _{k-1} + \zeta _k \vvv _k , \quad \pv _k= \pv _{k-1} - \zeta _k \dv _k . \end{equation} At step $k$ of \Cref{alg:GKB}, we have the following error in the energy norm. \begin{equation} \label{eq:errExact} \begin{split} \normM{\ev _k} ^2 &= \normM{ \uv _k - \uv } ^2 = \normM{ \Vm _k \zv _k - [ \Vm _k \Vm _{n-k}] \left[ \begin{matrix} {\zv _k} \\ {\zv _{n-k}} \end{matrix} \right] } ^2 \\ &= \normM{ \Vm _{n-k} \zv _{n-k} } ^2 = \normEu{ \zv _{n-k}} ^2 = \sum_{i=k+1}^{n} \zeta _i ^2 \end{split} \end{equation} In the last line, we have made use of the $\Mm$-orthonormality of the $\Vm$ matrices. If we truncate the sum above to only its first $d+1$ terms, we get a lower bound on the energy norm of the error. The subscript $d$ stands for \textit{delay}, because we can compute this lower bound corresponding to a given step $k$ only after an additional $d$ steps \begin{equation} \label{eq:lowBnd} \xi ^2 _{k,d} = \sum_{i=k+1}^{k+d+1} \zeta _i ^2 < \normM{\ev _k} ^2. \end{equation} With this bound for the absolute error, we can devise one for the relative error in \Cref{eq:lowBndRel}, which is then used as stopping criterion in \Cref{alg_line:GKBconvCheck} of \Cref{alg:GKB}. \begin{equation} \label{eq:lowBndRel} \bar \xi ^2 _{k,d} = \frac{ \sum_{i=k-d+1}^{k} \zeta _i ^2 }{ \sum_{i=1}^{k} \zeta _i ^2 } .
\end{equation} The \ac{GKB} algorithm has the following error minimization property. Let $\cV _k = span \{\vvv _1, ..., \vvv _k\}$ and $\cQ _k = span \{\qv _1, ..., \qv _k\}$. Then, for any step $k$, the minimum \begin{equation} \label{eq:errMinProp} \min_{\underset{(\Am ^T \uv _k - \bv ) \perp \cQ _k}{ \uv _k \in \cV _k,} } \normM{ \uv - \uv _k } \end{equation} is attained for $\uv _k$ as computed by \Cref{alg:GKB}. For brevity and because the \ac{GKB} algorithm features this minimization property for the primal variable, our presentation will focus on the velocity for Stokes problems. The stopping criteria for our proposed algorithmic strategies rely on approximations of the velocity error norm. For all the numerical experiments that we have performed, the pressure error norm is close to that of the velocity (less than an order of magnitude apart). In the cases where we operate on a different subspace, as a result of preconditioning, we find that the pressure error norm is actually smaller than that for the velocity. In the case where the dual variable is as important as the primal one, a monolithic approach can be used, such as applying MINRES to the complete saddle-point system. The \ac{GKB} (as implemented by \Cref{alg:GKB}) is a nested iterative scheme in which each outer loop involves solving an inner linear system. According to the theory given in the paper by Arioli \cite{Ar2013}, the matrices $\Mm$ and $\Nm$ have to be inverted exactly in each iteration. We can choose $ \Nm=\frac{1}{\eta} \Id$, whose inversion reduces to a scalar multiplication. In the following sections, unless otherwise specified, we consider $\eta =1 $. On the other hand, the matrix $\Mm$ depends on the underlying differential equations or the problem setting in general. As long as the matrix $\Mm$ is of moderate size, a robust direct solver can be used. For large problems, however, a direct solution might no longer be possible and an iterative solver will be required.
At this point, we face two problems. First, depending on the application, inverting $\Mm$ might be more or less costly. Second, to achieve a solution quality close to machine precision, an iterative solver might require a considerable number of iteration steps. \begin{algorithm} \caption{Golub-Kahan bidiagonalization algorithm} \label{alg:GKB} \begin{algorithmic}[1] \Require{$\Mm , \Am , \Nm, \bv$, maxit} \State{$\beta_1 = \|\bv\|_{\Nm^{-1}}$; $\qv_1 = \Nm^{-1} \bv / \beta_1$} \State{$\wv = \Mm^{-1} \Am \qv_1$; $\alpha_1 = \|\wv\|_{\Mm}$; $\vvv_1 = \wv / \alpha_1$} \State{$\zeta_1 = \beta_1 / \alpha_1$; $\dv_1=\qv_1/ \alpha_1$; $\uv^{(1)} =\zeta_{1} \vvv_{1}$; $\pv^{(1)} = - \zeta_1 \dv_1$; } \State{$\bar \xi _{1,d} = 1;$ $k = 1;$} \While{ \textcolor{black}{ $\bar \xi _{k,d} > $ tolerance } and $k < $ maxit } \State{$\gvv = \Nm^{-1} \left( \Am^T \vvv_k - \alpha_k \Nm \qv_k \right) $; $\beta_{k+1} = \|\gvv\|_{\Nm}$} \State{$\qv_{k+1} = \gvv / {\beta_{k+1}}$} \State{ \textcolor{black}{ $\wv = \Mm^{-1} \left( \Am \qv_{k+1} - \beta_{k+1} \Mm \vvv_{k} \right)$; } $\alpha_{k+1} = \|\wv\|_{\Mm}$} \label{alg_line:innerGKBproblem} \State{$\vvv_{k+1} = \wv / {\alpha_{k+1} }$} \State{$\zeta_{k+1} = - \dfrac{\beta_{k+1}}{\alpha_{k+1}} \zeta_k$} \State{$\dv_{k+1} = \left( \qv_{k+1} - \beta_{k+1} \dv_k \right) / \alpha_{k+1} $} \State{$\uv^{(k+1)} = \uv^{(k)} + \zeta_{k+1} \vvv_{k+1}$; $\pv^{(k+1)} = \pv^{(k)} - \zeta_{k+1} \dv_{k+1}$} \State{$k = k + 1$} \If{$k>d$} \State{\textcolor{black}{ $\bar \xi _{k,d} = \sqrt{ \sum_{i=k-d+1}^{k} \zeta _i ^2 / \sum_{i=1}^{k} \zeta _i ^2 } $} } \label{alg_line:GKBconvCheck} \EndIf{} \EndWhile{} \Return $\uv^{k+1}, \pv^{k+1}$ \end{algorithmic} \end{algorithm} In \Cref{alg_line:innerGKBproblem} of \Cref{alg:GKB}, we have the application of $\Mm^{-1} $ to a vector, which represents what we call the \textit{inner problem}. 
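A compact Python sketch of \Cref{alg:GKB} with $\Nm = \Id$ and the delayed stopping rule of \Cref{alg_line:GKBconvCheck} is given below. The toy problem, the guard against a vanishing bidiagonalization residual, and the plain \ac{CG} routine standing in for the inner solver are our own illustrative choices, not part of the reference implementation:

```python
import numpy as np

def cg_solve(Mmat, rhs, rtol=1e-10, maxiter=10_000):
    """Plain unpreconditioned CG for M x = rhs (our stand-in inner solver)."""
    x = np.zeros_like(rhs); r = rhs.copy(); s = r.copy()
    rr = r @ r
    stop = (rtol * np.linalg.norm(rhs)) ** 2
    for _ in range(maxiter):
        if rr <= stop:
            break
        Ms = Mmat @ s
        gamma = rr / (s @ Ms)
        x = x + gamma * s
        r = r - gamma * Ms
        rr_new = r @ r
        s = r + (rr_new / rr) * s
        rr = rr_new
    return x

def gkb(M, A, b, inner_solve, tol=1e-7, d=3, maxit=200):
    """Sketch of Algorithm 1 (N = I) with the delayed lower-bound stopping rule."""
    beta = np.linalg.norm(b); q = b / beta
    w = inner_solve(A @ q)
    alpha = np.sqrt(w @ M @ w); v = w / alpha
    zeta = beta / alpha
    u = zeta * v; dv = q / alpha; p = -zeta * dv
    zetas = [zeta]
    for k in range(1, maxit):
        g = A.T @ v - alpha * q
        beta = np.linalg.norm(g)
        if beta < 1e-13:         # bidiagonalization residual vanished: u is exact
            break
        q = g / beta
        w = inner_solve(A @ q - beta * (M @ v))
        alpha = np.sqrt(w @ M @ w); v = w / alpha
        zeta = -(beta / alpha) * zeta
        dv = (q - beta * dv) / alpha
        u = u + zeta * v; p = p - zeta * dv
        zetas.append(zeta)
        z2 = np.square(zetas)
        if k >= d and np.sqrt(z2[-d:].sum() / z2.sum()) < tol:
            break
    return u, p

rng = np.random.default_rng(1)
m, n = 40, 15
G = rng.standard_normal((m, m)); M = G @ G.T + m * np.eye(m)
A = rng.standard_normal((m, n)); b = rng.standard_normal(n)

u_dir, p_dir = gkb(M, A, b, lambda r: np.linalg.solve(M, r))   # direct inner solver
u_it, p_it = gkb(M, A, b, lambda r: cg_solve(M, r, rtol=1e-10))  # iterative inner solver

K = np.block([[M, A], [A.T, np.zeros((n, n))]])
ref = np.linalg.solve(K, np.concatenate([np.zeros(m), b]))
```

Swapping the lambda passed as \texttt{inner\_solve} is all that is needed to move between the direct and the iterative inner solver; both variants reach the target accuracy on this well-conditioned example.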
Typically, this is implemented as a call to a direct solver using the matrix $\Mm $ and the vector $\Am \qv_{k+1} - \beta_{k+1} \Mm \vvv_{k}$ as the right-hand side. The main contribution of this work is a study of the behavior exhibited by \Cref{alg:GKB} when we replace the direct solver employed in \Cref{alg_line:innerGKBproblem} by an iterative one. In particular, for a target accuracy of the final \ac{GKB} iterate, we want to minimize the total number of inner iterations. Our choice for the inner solver is the unpreconditioned \ac{CG} algorithm, for its simplicity and relative generality. The strategies we propose in the subsequent sections do not rely on any specific feature of this inner solver, and are meant to be applicable regardless of this choice. We are interested in reducing the total number of inner iterations in a general, problem-independent manner. This is why we do not take preconditioning for \ac{CG} into account, as it is usually problem-dependent. We measure the effectiveness of our methods based on the percentage of inner iterations saved when compared against a scenario to be described in more detail in the following sections. \section{Problem description} \label{sec:pbDesc} As a test problem, we will use a 2D Stokes flow in a rectangular channel domain $\Omega = \left[-1, L \right] \times \left[-1, 1 \right]$ given by \begin{equation} \label{eq:contStokes} \begin{aligned} - \Delta \vec{u} + \nabla p &= 0 \\ \nabla \cdot \vec{u} &=0. \end{aligned} \end{equation} More specifically, we will consider the Poiseuille flow problem, i.e.\ a steady Stokes problem with the exact solution \begin{equation} \label{eq:contStokesSol} \begin{cases} u_x = 1- y^2, \\ u_y = 0, \\ p = -2x + \text{constant}.
\end{cases} \end{equation} The boundary conditions are given as Dirichlet condition on the inflow $\Gamma_{in}= \left\{-1\right\} \times \left[-1, 1 \right]$ (left boundary) and no-slip conditions on the top and bottom walls $\Gamma_{c}= \left[-1, L \right] \times \left\{-1\right\} \cup \left[-1, L \right] \times \left\{1\right\} $. The outflow at the right $\Gamma_{out}= \left\{L \right\} \times \left[-1, 1 \right]$ (right) is represented as a Neumann condition \begin{equation*} \begin{split} \frac{\partial u_x}{\partial x} - p &= 0 \\ \frac{\partial u_y}{\partial x} &= 0. \end{split} \end{equation*} We use Q2-Q1 Finite Elements as discretization method. Our sample matrices are generated by the Incompressible Flow \& Iterative Solver Software (IFISS)\footnote{\url{http://www.cs.umd.edu/~elman/ifiss3.6/index.html}} package \cite{ers07}, see the book by Elman et al. \cite{elman2014finite} for a more detailed description of this reference Stokes problem. \begin{figure} \centering \begin{tikzpicture} \begin{axis} [ width=\FigWid \textwidth, height=\FigHei \textwidth, minor tick num=3, grid=both, xtick = {-1,0,...,5}, ytick = {-1,0,...,1}, xlabel = $x$, ylabel = $y$, ticklabel style = {font = \scriptsize}, % colormap/jet, colorbar, % enlargelimits=false, axis on top, axis equal image ] \addplot [forget plot] graphics[xmin=-1,xmax=5,ymin=-1,ymax=1] {IFISSCh5SolRaw.png}; \end{axis} \end{tikzpicture} \caption{ Exact solution to the Stokes problem in a channel of length 5. Plotted is the $1-y^2$ function, which represents the $x$ direction velocity, overlaid with the mesh resulting from the domain discretization (Q2-Q1 Finite Elements Method). } \label{fig:convIFISSCh5SolMesh} \end{figure} We first illustrate some particular features shown by \ac{GKB} for this problem. We use a direct inner solver here, before discussing the influence of an iterative solver in subsequent sections. 
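As a quick sanity check (not part of the IFISS setup), one can verify on a grid that the Poiseuille fields satisfy \Cref{eq:contStokes}, using second-order central differences; these are exact here, up to rounding, because all fields are polynomials of degree at most two:

```python
import numpy as np

# Verify that (u_x, u_y, p) = (1 - y^2, 0, -2x) satisfies the Stokes system
# (momentum and continuity) on a grid, via central finite differences.
h = 0.05
x = np.arange(-1.0, 5.0 + h / 2, h)     # a channel with L = 5 for illustration
y = np.arange(-1.0, 1.0 + h / 2, h)
X, Y = np.meshgrid(x, y, indexing="ij")
ux, uy, p = 1.0 - Y**2, np.zeros_like(X), -2.0 * X

def lap(f):  # 5-point Laplacian at interior nodes
    return ((f[2:, 1:-1] - 2 * f[1:-1, 1:-1] + f[:-2, 1:-1])
            + (f[1:-1, 2:] - 2 * f[1:-1, 1:-1] + f[1:-1, :-2])) / h**2

def dx(f):   # central x-derivative at interior nodes
    return (f[2:, 1:-1] - f[:-2, 1:-1]) / (2 * h)

def dy(f):   # central y-derivative at interior nodes
    return (f[1:-1, 2:] - f[1:-1, :-2]) / (2 * h)

mom_x = -lap(ux) + dx(p)     # x-momentum residual
mom_y = -lap(uy) + dy(p)     # y-momentum residual
div = dx(ux) + dy(uy)        # continuity residual
```

All three residuals vanish to rounding level, confirming that the fields in \Cref{eq:contStokesSol} solve the continuous problem.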
In \Cref{fig:convIFISSChL}, we plot the convergence history for several channels of different lengths, which leads us to the following observations. The solver starts with a period of slow convergence, visually represented by a plateau, the length of which is proportional to the length of the channel. The rest of the convergence curve corresponds to a period of superlinear convergence, a phenomenon also known for other solvers of the Krylov family, such as \ac{CG}. The presence of this plateau is especially relevant for our proposed strategies and, since it appears for each channel, we can conclude it is a significant feature of this class of channel problems. In the following numerical examples, we choose as boundary $L=20$ and thus a domain of length 21 units. \import{images/pgf/}{convIFISSChL.tex} \section{Constant accuracy inner solver} \label{sec:constAcc} Similar to what has been described by Golub et al. \cite{GolZhaZha2000} for solving eigenvalue problems, we have observed that when using an iterative method as an inner solver, its accuracy has a clear effect on the overall accuracy of the outer solver (see \Cref{fig:constCGtol}). We solve the channel problem described in \Cref{sec:pbDesc} with various configurations for the tolerance of the inner solver, and plot the resulting convergence curves in \Cref{fig:constCGtol}. The outer solver is always \ac{GKB} with a $10^{-7}$ tolerance. The cases we show are: a direct inner solver, three choices of constant inner solver tolerance ($10^{-3}$, $10^{-7}$ and $10^{-8}$), and a final case using a low-accuracy solver ($10^{-3}$) only for the first two iterations, followed by a high-accuracy one ($10^{-14}$). The stopping criterion for the \ac{GKB} algorithm is a delayed lower bound estimate for the energy norm of the primal variable (see \Cref{eq:lowBnd}). As such, \ac{GKB} with a direct inner solver performs a few extra steps, achieving a higher accuracy than the one required, here around $10^{-8}$.
Notice how the outer solver cannot achieve a higher accuracy than that of the inner solver; it stops reducing the error even before reaching the same accuracy as the inner one. Replacing the exact inner solver by a \ac{CG} method with a constant tolerance of $10^{-8}$ leads to a convergence process where the error norm eventually reaches a value just below the target accuracy of $10^{-7}$ and does not decrease further. This highlights the fact that the inner solver does not need to be exact in order to have \ac{GKB} converge to the required solution. For this Poiseuille flow example, however, the inner solver must be at least one order of magnitude more precise than the outer one. In the last case examined here, we want to see if early imprecise iterations can be compensated later by others having a higher accuracy. This strategy of increasing accuracy has been found to work, e.g., in the case of the Newton method for nonlinear problems \cite{dembo1982inexact}. We tested the case when the first two iterations of \ac{GKB} use an inner solver with tolerance $10^{-3}$, while all the subsequent inner iterations employ a tolerance of $10^{-14}$. The resulting curve shows a convergence history rather similar to the case where \ac{CG} has a constant tolerance of $10^{-3}$. The outer process cannot reduce the error norm below $10^{-3}$, despite the fact that the bulk of the iterations employ a high-accuracy inner solver. This corresponds to what was observed by Golub et al. \cite{GolZhaZha2000} for solving eigenvalue problems. \import{images/pgf/}{constCGtol.tex} An interesting observation is that all the curves in \Cref{fig:constCGtol} overlap in their initial iterations, until they start straying from the apparent profile, eventually leveling off. In \Cref{sec:pertErrAna}, we analyze the causes leading to these particular behaviors and link them to the accuracy of the inner solver.
\section{Perturbation and error study} \label{sec:pertErrAna} In this section we describe how the error associated with the iterates of \Cref{alg:GKB} behaves if we use an iterative solver for the systems involving $\Mm ^{-1}$. We can think of the approximate solutions of these inner systems as perturbed versions of those we would get when using a direct solver. The error is then characterized in terms of this perturbation and the implications motivate our algorithmic strategies given in the subsequent sections. With this characterization, we can also explain the results in \Cref{sec:constAcc}. The use of an iterative inner solver directly affects the columns of the $\Vm$ matrix. In the following, $\Vm$ denotes the unperturbed matrix, with $\Em _{\Vm}$ being the associated perturbation matrix. In particular, we are interested in the $\Mm$ norm of the individual columns of $\Em _{\Vm}$, which gives us an idea of how far we are from the \enquote{ideal} columns of $\Vm$. Changes in the $\vvv$ and $\qv$ vectors also have an impact on their respective norms $\alpha$ and $\beta$, which shift away from the values they would normally have with a direct inner solver. In turn, these changes propagate to the coefficients $\zeta$ used to update the iterates $\uv$ and $\pv$. Our observations concern the $\zv$ vector, its perturbation $\ev _{\zv}$ and their effect on the error of the primal variable $\uv$ measured in the $\Mm$ norm. The entries of $\zv$ change sign every iteration, but we will only consider them in absolute value, as it is their magnitude which is important. In the following, we will denote perturbed quantities with a hat. \subsection{High initial accuracy followed by relaxation} \label{subsec:theoPrecRelax} In this subsection, we take a closer look at the interactions between the perturbation and the error. 
For us, perturbation is the result of using an inexact inner solver and represents a quantity which can prevent the outer solver from reducing the error below a certain value. The error itself needs to be precisely defined, as it may contain several components, each minimized by a different process. Because we focus on the difference between the perturbed and the unperturbed \ac{GKB}, sources of error that affect both versions, such as the round-off error, are not included in the following discussion. According to the observations by Jir{\'a}nek and Rozlo{\v{z}}n{\'\i}k, the accuracy of the outer solver depends primarily on that of the inner solver, since the perturbations introduced by an iterative solver dominate those related to finite-precision arithmetic \cite{jiranek2008maximum}. We take the exact solution $\uv$ to be equal to $\uv _n$, the $n$-th iterate of the unperturbed \ac{GKB} with exact arithmetic. At step $k$ of the \ac{GKB}, we have the error, \begin{align} \normM{\ev _k} &= \normM{ \hat \uv _k - \uv }, \end{align} where $\hat \uv _k$ is the current approximate solution and $\uv $ is the exact one. Both can be written as linear combinations of columns from $\Vm$ with coefficients from $\zv$. Let $\hat \uv _k$ come from an inexact version of \Cref{alg:GKB}, where the solution of the inner problem (a matrix-vector product with $\Mm ^{-1}$) includes perturbations. The term $ \uv = \Vm _n \zv _n$ is available after $n$ steps of \Cref{alg:GKB} in exact arithmetic, without perturbations. We separate the first $k$ terms, which have been computed, from the remaining ($n-k$). 
\begin{equation} \label{eq:errPert} \begin{split} \normM{\ev _k} ^2 &= \normM{ \hat \uv _k - \uv } ^2= \normM{ ( \Vm _k + \Em _{\Vm} ) ( \zv _k + \ev _{\zv} ) - [ \Vm _k \Vm _{n-k}] \left[ \begin{matrix} {\zv _k} \\ {\zv _{n-k}} \end{matrix} \right] } ^2 \\ &= \normM{ \Em _{\Vm} \zv _k + \Em _{\Vm} \ev _{\zv} + \Vm _k \ev _{\zv} - \Vm _{n-k} \zv _{n-k} } ^2 \\ & \leq \normM{ \Em _{\Vm} \zv _k } ^2 + \normM{ \Em _{\Vm} \ev _{\zv} } ^2 + \normEu{ \ev _{\zv} } ^2 + \normEu{ \zv _{n-k}} ^2 \end{split} \end{equation} In the last line, we have made use of the $\Mm$-orthonormality of the $\Vm$ matrices and neglected the cross terms involving the perturbation. In the case of a direct inner solver, we can leave out the perturbation terms, recovering the result $\normM{\ev _k} ^2 = \normEu{ \zv _{n-k}} ^2 = \sum_{i=k+1}^{n} \zeta _i^2 $ given by Arioli \cite{Ar2013}. This is simply the error coming from approximating $\uv $ (a linear combination of $n$ $\Mm$-orthogonal vectors) by $ \uv _k $ (a linear combination of only $k$ $\Mm$-orthogonal vectors). This term decreases as we perform more steps of \Cref{alg:GKB} ($ k \rightarrow n $). By truncating the sum $\sum_{i=k+1}^{n} \zeta _i ^2$, we obtain a lower bound for the squared error. The remaining three terms in \Cref{eq:errPert} include the perturbation coming from the inexact inner solution. Our goal is to minimize the total number of iterations of the inner solver, so we are interested in knowing how large these terms can be allowed to be, such that we still recover a final solution of the required accuracy. The answer is to keep them just below the final value of the fourth term, $\normEu{ \zv _{n-k}}$, that is, below the acceptable algebraic error. If they are larger, the final accuracy will suffer. If they are significantly smaller, then our inner solver is unnecessarily precise and expensive. The following observations rely on the behavior of the $\zv$ vector. At each iteration, this vector gains an additional entry, while leaving the previous ones unchanged.
These entries form a (mostly) decreasing sequence and have a magnitude below 1 when reaching the superlinear convergence phase. Unfortunately, we cannot yet provide a formal proof of these properties, but having seen them consistently reappear in our numerical experiments encourages us to consider them for motivating our approach. These properties appear in both cases, with and without perturbation. The decrease in the entries of the coefficient vector used to build the approximation has also been observed and described for other Krylov methods (see references \cite{van2004inexact,simoncini2003theory,simoncini2005relaxed}). Their context is that of inexact matrix-vector products, which is another way of viewing our case. The fact that new entries of $\zv$ are simply appended to the old ones and that they are smaller than one is linked to the particular construction specific to \ac{GKB}. Back to \Cref{eq:errPert}, let us assume the perturbation at each iteration is constant, i.e. the $\Mm$ norm of each column of $\Em _{\Vm}$ is equal to the same constant. Then, the vector $\Em _{\Vm} \zv _k$ will be a linear combination of perturbation vectors with coefficients from $\zv _k$. Following our observations concerning the entries of $\zv _k$, the first terms of the linear combination will be the dominant ones, with later terms contributing less and less to the sum. If the perturbation of the first $\vvv$ has an $\Mm$ norm below our target accuracy, the term $ \normM{ \Em _{\Vm} \zv _k } $ will never contribute to the error. We can allow the $\Mm$ norm of the columns of $\Em _{\Vm}$ to increase, knowing the effect of the perturbation will be reduced by the entries of $\zv$, which are decreasing and less than one. The \ac{GKB} solution can be computed in a less expensive way, as long as the term $\normM{ \Em _{\Vm} \zv _k }$ is kept below our target accuracy. 
The perturbation should initially be small, then allowed to increase proportionally to the decrease of the entries in $\zv$. Next, we describe the terms including $\ev _{\zv}$. Let the following define the perturbed entries of $\hat \zv$ \begin{equation*} \hat \zeta _k = - \hat \zeta _{k-1} \frac{\hat \beta _k}{\hat \alpha _k} = - \hat \zeta _{k-1} ( \frac{ \beta _k}{ \alpha _k} + \epsilon _k). \end{equation*} The term $\epsilon _k$ is the perturbation introduced at iteration $k$, coming from the shifted norms associated with $\qv _k$ and $\vvv _k$. This term is then multiplied by $\hat \zeta _{k-1}$ which, according to our empirical observations, decreases at (almost) every step. If we assume $\epsilon _k$ is constant, the entries of $\ev _{\zv}$ decrease in magnitude and the norm $ \normEu{ \ev _{\zv} }$ is mostly dominated by the first vector entry. The strategy described for the term $ \normM{ \Em _{\Vm} \zv _k } $ also keeps $ \normEu{ \ev _{\zv} }$ small. We start with a perturbation norm below the target accuracy, to ensure the quality of the final iterate. Gradually, we allow an increase in the perturbation norm proportional to the decrease of $\hat \zeta _k$ to reduce the costs of the inner solver. Finally, since the vector $\ev _{\zv} $ decreases similarly to $ \zv $, the term $ \normM{ \Em _{\Vm} \ev _{\zv} }$ can be described in the same way as $\normM{ \Em _{\Vm} \zv _k }$. We close this section by emphasizing the important role played by the first iterations and how the initial perturbations can affect the accuracy of the solution. Notice that the perturbation terms included refer to all the $k$ steps, not just the latest one. Relaxation strategies that start with a low accuracy and gradually increase it are unlikely to work for \ac{GKB} and other algorithms with similar error minimization properties. Since the first vectors computed are the ones that contribute the most to reducing the error, they should be determined as precisely as possible. 
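A toy model of this budget argument (our own construction, with a geometric sequence standing in for $\abs{\zeta}$) shows that letting the per-column perturbation grow like $\tau / \abs{\zeta _i}$, capped, keeps the triangle-inequality bound on $\normM{\Em _{\Vm} \zv _k}$ within a small multiple of $\tau$:

```python
import numpy as np

tau = 1e-8                           # target accuracy for the outer solver
zeta = 0.5 ** np.arange(1, 31)       # model of a decreasing |zeta| sequence

# strict policy: every column of E_V has M-norm at most tau
eps_strict = np.full_like(zeta, tau)
# relaxed policy: let the column perturbation grow like tau / |zeta_i|,
# capped at 0.1 (the same cap used later for the tolerance parameter)
eps_relaxed = np.minimum(tau / zeta, 0.1)

# triangle-inequality bound on ||E_V z||_M = || sum_i zeta_i col_i(E_V) ||_M
bound_strict = float(np.sum(zeta * eps_strict))
bound_relaxed = float(np.sum(zeta * eps_relaxed))
```

Each term of the relaxed bound is $\min(\tau, 0.1\, \abs{\zeta _i}) \leq \tau$, so the total stays below $k \tau$ while the later inner solves become vastly cheaper.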
Even if a perturbed iteration is followed exclusively by very accurate ones, this will not prevent the perturbation from being transmitted to all the subsequent vectors, where it is potentially amplified by multiplication with matrices and by floating-point error. With these observations in mind, we can understand the results in \Cref{sec:constAcc}. These findings are in line with those concerning other Krylov methods in the presence of inexactness (see Section 11 of the survey by Simoncini and Szyld \cite{simoncini2007recent} and the references therein). \ac{GKB} is not the only method which benefits from lowering the accuracy of the inner process, and the reason why this is possible is linked to the decreasing entries of the coefficient vector. \section{Relaxation strategy choices} \label{sec:relaxChoices} We have seen in \Cref{subsec:theoPrecRelax} that we can allow the perturbation norm to increase in a safe way, as long as the process is guided by the decrease of $ \abs{ \hat \zeta } $. This means that we can adapt the tolerance of the inner solver, such that each call is increasingly cheaper, without compromising the accuracy of the final \ac{GKB} iterate. Then, at step $k$ we can call the inner solver with a tolerance equal to $\tau / f(\zeta)$. The scalar $\tau$ represents a constant chosen as either the target accuracy for the final \ac{GKB} solution, or something stricter, to counteract possible losses coming from floating-point arithmetic. The function $f$ is chosen based on the considerations described below, with the goal of minimizing the number of inner iterations. A similar relaxation strategy was used in a numerical study by Bouras and Frayss\'e \cite{bouras2005inexact} to control the magnitude of the perturbation introduced by performing inexact matrix-vector products. They employ Krylov methods with a residual norm minimization property, so the proposed criterion divides the target accuracy by the latest residual norm.
In our case, because of the minimization property in \Cref{eq:errMinProp}, we need to use the error norm instead of the residual, since it is the only quantity which is strictly decreasing. Due to the actual error norm being unknown, we rely on approximations found via $\zeta$. Considering the error characterization of the unperturbed process $\normM{\ev _k} ^2 = \sum_{i=k+1}^{n} \zeta _i ^2$, we can approximate the error by the first term of the sum, which is the dominant one. However, when starting iteration $k$ we do not know $\zeta _{k+1}$, not even $\zeta _{k}$, so we cannot choose a tolerance for the inner solver required to compute $\uv _k$ based on these. What we can do is predict these values via extrapolation, using information from the known values $\zeta _{k-1}$ and $\zeta _{k-2}$. We know that in general $ \frac{\beta _k}{\alpha _k} = \abs{ \frac{\zeta _k}{\zeta _{k-1}} } $ acts as a local convergence factor for the $\abs{\zeta }$ sequence. We approximate the one for step $k$ by using the previous one $\frac{\zeta _{k-1}}{\zeta _{k-2}}$. Then, we can compute the prediction $\tilde \zeta _k := \zeta _{k-1} \frac{\zeta _{k-1}}{\zeta _{k-2}}$. By squaring the local convergence factor, we get an approximation for $ \zeta _{k+1} $ as $\tilde \zeta _{k+1} := \zeta _{k-1} \left( \frac{\zeta _{k-1}}{\zeta _{k-2}} \right) ^2$, which we can use to approximate $ \normM{\ev _k} $ and adapt the tolerance of the inner solver. In practice, we only consider processes which include perturbation, and assume we have no knowledge of the unperturbed values $\abs{\zeta }$. As such, for better readability, we drop the hat notation with the implicit convention that we are referring to values which do include perturbation and use them in the extrapolation rule above. For some isolated iterations, it is possible that $\abs{\zeta _k } \geq \abs{\zeta _{k-1} } $. This behavior is then amplified through extrapolation, potentially leading to even larger values.
In turn, this can cause an increase in the accuracy of the inner solver, following a stricter value for the tolerance parameter $\tau / f(\zeta)$. In \Cref{subsec:theoPrecRelax}, we have shown that there is no benefit in increasing this accuracy. The new perturbation would be smaller in norm, but the error $\normM{\ev _k}$ would be dominated by the previous, larger perturbation. As such, we propose computing several candidate values for the stopping tolerance of the inner solver, and choosing the one with the maximum value. Since these are only scalar quantities, the associated computational effort is negligible, but the impact of a well-chosen tolerance sequence can lead to significant savings in the total number of inner iterations. The candidate values are: \begin{equation} \label{eq:relaxChoices} \begin{cases} \text{the value at the previous step}, \\ \tau / \abs{ \zeta _{k-1} } , \\ \tau / \abs{ \tilde \zeta _{k} } , \\ \tau / \abs{ \tilde \zeta _{k+1} } . \end{cases} \end{equation} To prevent a limitless growth of the tolerance parameter, we impose a maximum value of $ 0.1 $. All these choices are safe in the sense that they do not introduce perturbations which prevent the outer solver from reaching the target accuracy. We proceed by testing these relaxation strategies on the problem described in \Cref{sec:pbDesc}. The initial tolerance for \ac{CG} is set to $\tau = 10^{-8}$, one order of magnitude more precise than the one set for \ac{GKB}. As a baseline for comparison, we first keep the tolerance constant, equal to $\tau$. Then, we introduce adaptivity using $\tau / \abs{ \zeta_{k-1} }$. The third case changes the tolerance according to $\tau / \abs{ \tilde \zeta _{k+1} }$, the latter term being a predicted approximation of the current error. Finally, we employ a hybrid approach, where all candidate values in \Cref{eq:relaxChoices} are computed, but only the largest one is used.
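The candidate-selection rule just described can be sketched as a small helper (the function name and the test histories are ours):

```python
import numpy as np

def hybrid_tolerance(tau, zetas, previous, cap=0.1):
    """Hybrid candidate rule: take the largest of the candidate tolerances,
    with predicted values obtained by extrapolating the last two |zeta|."""
    candidates = [previous, tau / abs(zetas[-1])]
    if len(zetas) >= 2:
        rho = abs(zetas[-1] / zetas[-2])                   # local convergence factor
        candidates.append(tau / (abs(zetas[-1]) * rho))    # tau / |~zeta_k|
        candidates.append(tau / (abs(zetas[-1]) * rho**2)) # tau / |~zeta_{k+1}|
    return min(max(candidates), cap)

tau = 1e-8
# geometric |zeta| history with ratio 1/2: prediction gives tau / |~zeta_{k+1}|
tol_a = hybrid_tolerance(tau, [1.0, 0.5], previous=tau)
# strongly decreasing history: the 0.1 cap takes over
tol_b = hybrid_tolerance(tau, [1e-4, 1e-6], previous=tau)
# locally increasing |zeta|: the 'previous value' candidate prevents tightening
tol_c = hybrid_tolerance(tau, [1e-3, 2e-3], previous=1e-2)
```

The third call illustrates the safeguard: when $\abs{\zeta}$ momentarily increases, keeping the previous tolerance avoids an unnecessary (and useless) tightening of the inner solver.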
In the legends of the following plots, these four cases are labeled \ConstantCase, \AdaptiveCase, \PredictedCase, and \HybridCase, respectively. To monitor \ac{GKB} convergence, we track the lower bound for the energy norm of the error corresponding to the primal variable given in \Cref{eq:lowBnd}. For easy reference, all the choices used and their respective labels are given below. We define $\tau = 10^{-8}$. \begin{align} (\mathtt{\ConstantCase}) &: \tau, \label{eq:cst}\\ (\mathtt{\AdaptiveCase}) &: \nicefrac{\tau}{\abs{ \zeta _{k-1} }} , \label{eq:z}\\ (\mathtt{\PredictedCase}) &: \nicefrac{\tau}{\abs{ \tilde \zeta _{k+1} }} ,\label{eq:adasquare}\\ (\mathtt{\HybridCase}) &: \max\left\{\nicefrac{\tau}{\abs{ \zeta _{k-1} }}, \nicefrac{\tau}{\abs{ \tilde \zeta _{k} }}, \nicefrac{\tau}{\abs{ \tilde \zeta _{k+1} }}, \text{previous value} \right\} ,\label{eq:hybrid}\\ (\mathtt{\OptimalCase}) &: \nicefrac{\tau}{ ( \text{parameter} \cdot \abs{ \zeta _{k-1} } )} . \label{eq:zOptim} \end{align} Only the last scenario above, \OptimalCase{}, is left to explain. To see if the parameter-free choices can be improved, we run one more case which includes adaptivity by using $\abs{ \zeta _{k-1} }$, but also one constant parameter tuned experimentally. This is motivated by the fact that the considerations leading to \Cref{eq:relaxChoices} rely mostly on approximations and inequalities, which means we have an over-estimate of the error. It may be possible to reduce the total number of iterations further, by including an (almost) optimal, problem-dependent constant. The goal is to find a sequence of tolerance parameters with terms that are as large as possible, while guaranteeing the accuracy of the final \ac{GKB} iterate. All the results are given in \Cref{tab:varTolZ} and \Cref{fig:originallower bound}.
\HybridCase{} offers the highest savings among the parameter-free choices (30\%), but \OptimalCase{}, the test with the problem-dependent constant, reveals that we can still improve this performance by about 6\%. \import{images/pgf/lowBndVScumulCG}{original} \begin{table} \centering \caption{Reduction of the total number of \ac{CG} iterations. The \ac{CG} tolerance is relaxed according to \Cref{eq:cst,eq:z,eq:zOptim,eq:adasquare,eq:hybrid}. The parameter in \OptimalCase{} is $0.05$. } \begin{tabular}{cccccc} \ac{CG} tolerance & \ConstantCase & \AdaptiveCase & \PredictedCase & \HybridCase & \OptimalCase \\ \ac{CG} iterations & 6963 & 5115 & 4897 & 4873 & 4399 \\ Savings \% & - & 26.54 & 29.67 & 30.02 & 36.82 \end{tabular} \label{tab:varTolZ} \end{table} \subsection{Increasing the savings by working on a simplified problem} \label{subsec:simple} Considering the observations in \Cref{subsec:theoPrecRelax} and the results plotted in \Cref{fig:originallower bound}, we can significantly reduce the accuracy of the inner solver only when the outer solver is in a superlinear convergence phase, i.e., when the $\abs{\zeta }$ sequence decreases rapidly. How much we can relax depends on the slope of the convergence curve. As such, to get the maximum reduction of the total number of iterations, the problem needs to be simplified such that the convergence curve is as steep as possible and has no plateau. It is common to pair Krylov methods with other strategies, such as preconditioning, in order to improve their convergence behavior. The literature on these kinds of approaches is rich \cite{loghin2003schur,loghin2004analysis,bgl_2005,olshanskii2010acquired}. The following tests quantify the benefit of combining our proposed relaxation scheme with these strategies.
It has been shown by Arioli and Orban that the \ac{GKB} applied to the saddle-point system is equivalent to the \ac{CG} algorithm applied to the Schur complement equation \cite[Chapter~5]{orban2017iterative}. As such, the first step towards accelerating \ac{GKB} is to consider the Schur complement, defined as $\Sm := \Am ^T \Mm ^{-1} \Am$, especially its spectrum. Ideally, a spectrum with tightly clustered values and no outliers leads to rapid \ac{GKB} convergence \cite{KrDaTaArRu2020}. To get as close as possible to this clustering, we use the following two methods to induce positive changes in the spectrum: preconditioning with the \ac{LSC} \cite{elman2006block} and eigenvalue deflation. Each of them operates differently and leads to convergence curves with different traits. \import{images/pgf/}{easyIFISS20GKconv} In \Cref{fig:easyIFISS20GKconv}, we plot the \ac{GKB} convergence curve for each of these, using a direct inner solver. The \ac{LSC} aligns the small values in the spectrum with the main cluster and brings everything closer together. The corresponding \ac{GKB} convergence curve has no plateau and is much steeper than the curve for the unpreconditioned case. Using deflation, we remove the five smallest values from the spectrum, which constitute outliers with respect to the main cluster. The other values remain unchanged. As such, its convergence curve no longer has the initial plateau, but is otherwise the same as in the original problem. For both of these cases we apply the same strategies of relaxing the inner tolerance, to see how many total \ac{CG} iterations we can save. The rest of the set-up is identical to that described for \Cref{tab:varTolZ}. We tabulate the results in \Cref{tab:varTolLSC,tab:varTolDefl} and plot them in \Cref{fig:LSCpreclower bound,fig:deflatedlower bound}. They highlight that the best parameter-free results are obtained when using \HybridCase{}, which leads to savings of about 50\%, depending on the specific case.
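The deflation mechanism can be illustrated with a small dense NumPy sketch (for illustration only; a practical deflated solver would never form $\Sm$ or the projector explicitly, and the function names are ours): the eigendirections of the $k$ smallest eigenvalues of $\Sm$ are projected out, while the rest of the spectrum is left unchanged.

```python
import numpy as np

def schur_complement(M, A):
    """S = A^T M^{-1} A, formed densely for illustration only."""
    return A.T @ np.linalg.solve(M, A)

def deflate(S, k):
    """Project out the k smallest eigendirections of S.
    The remaining eigenvalues of P S P are unchanged."""
    lam, W = np.linalg.eigh(S)              # eigenvalues in ascending order
    Wk = W[:, :k]                           # outlying eigendirections to remove
    P = np.eye(S.shape[0]) - Wk @ Wk.T      # orthogonal projector onto the complement
    return P @ S @ P
```

Applied to a spectrum with small outliers, this removes the initial plateau of the convergence curve while leaving the main cluster untouched.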
When comparing this parameter-free approach to \OptimalCase{}, which includes an experimental constant, we find that the hybrid approach can still be improved. Nonetheless, the difference in \ac{CG} iteration savings is not very high (up to 6\%), which supports the idea that our proposed strategy is efficient in a general-use setting. An additional observation pertaining to the plots is that even if convergence is relatively fast (\Cref{fig:LSCpreclower bound}) or slow (\Cref{fig:deflatedlower bound}), the final savings are still around 50\%, as long as there is no plateau. \import{images/pgf/lowBndVScumulCG}{LSCprec} \begin{table} \centering \caption{Reduction of the total number of \ac{CG} iterations after using the \ac{LSC} preconditioner. The \ac{CG} tolerance is relaxed according to \Cref{eq:cst,eq:z,eq:zOptim,eq:adasquare,eq:hybrid}. The parameter used in \OptimalCase{} is $0.007$. } \begin{tabular}{cccccc} \ac{CG} tolerance & \ConstantCase & \AdaptiveCase & \PredictedCase & \HybridCase & \OptimalCase \\ \ac{CG} iterations & 2052 & 1301 & 1073 & 1046 & 919 \\ Savings \% & - & 36.60 & 47.71 & 49.03 & 55.21 \end{tabular} \label{tab:varTolLSC} \end{table} \import{images/pgf/lowBndVScumulCG}{deflated} \begin{table} \centering \caption{Reduction of the total number of \ac{CG} iterations after using deflation. The \ac{CG} tolerance is relaxed according to \Cref{eq:cst,eq:z,eq:zOptim,eq:adasquare,eq:hybrid}. The parameter used in \OptimalCase{} is $0.09$.
} \begin{tabular}{cccccc} \ac{CG} tolerance & \ConstantCase & \AdaptiveCase & \PredictedCase & \HybridCase & \OptimalCase \\ \ac{CG} iterations & 4830 & 2625 & 2416 & 2411 & 2110 \\ Savings \% & - & 45.65 & 49.98 & 50.08 & 56.31 \end{tabular} \label{tab:varTolDefl} \end{table} \section{\ac{GKB} with the augmented Lagrangian approach} \label{sec:AL} The method of the \ac{AL} has been used successfully to speed up the convergence of the \ac{GKB} algorithm, an effect that has been theoretically explained by Arioli et al. \cite{KrDaTaArRu2020}. Perhaps most striking is the potential to reach mesh-independent convergence, provided that the augmentation parameter is large enough. Another use of the \ac{AL} method is to transform the (1,1)-block of a saddle-point system, say $\Wm$, from a positive semi-definite matrix to a positive definite one. However, this can happen only if the off-diagonal block $\Am$ is full rank or, more generally, if $\mbox{ker}(\Wm)\cap \mbox{ker}(\Am^T)=\{ \mZ \}$. Let $\Nm \in \mathbb{R}^{n\times n}$ be a symmetric, positive definite matrix. A given symmetric, positive semi-definite matrix $\Wm \in \mathbb{R}^{m\times m}$ can then be transformed into a positive definite one by \begin{align} \Mm := \Wm + \Am \Nm^{-1} \Am^T. \end{align} The upper part of the right-hand side, $\gvv$, then becomes \begin{equation} \gvv := \gvv + \Am \Nm^{-1}\rv. \end{equation} With these changes in place, we can proceed to using the \ac{GKB} algorithm, as described in \Cref{sec:GKBtheory}. Note that if the matrix $\Wm$ is already symmetric positive-definite, the transformation of the (1,1)-block is not necessary for using the \ac{GKB} method. However, the application of the \ac{AL} approach does lead to a better conditioning of the Schur complement, which significantly improves convergence speed \cite{KrDaTaArRu2020}. As in \Cref{sec:GKBtheory}, we choose $\Nm=\frac{1}{\eta} \Id$.
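For the concrete choice $\Nm = \frac{1}{\eta}\Id$ (so $\Nm^{-1} = \eta\, \Id$), the augmentation can be sketched in a few lines of NumPy (a toy sketch; function names are ours):

```python
import numpy as np

def augment(W, A, g, r, eta):
    """Augmented Lagrangian transform with N = (1/eta) * I:
    M = W + eta * A A^T and g = g + eta * A r."""
    M = W + eta * (A @ A.T)
    g_aug = g + eta * (A @ r)
    return M, g_aug
```

If $\mbox{ker}(\Wm)\cap \mbox{ker}(\Am^T)$ is trivial, the resulting $\Mm$ is positive definite even when $\Wm$ is only semi-definite.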
There is, as usual, no free lunch: depending on the conditioning of the matrix $\Am$ and the magnitude of $\eta$, the \ac{AL} can also degrade the conditioning of the $\Mm$ matrix as a side-effect. We test whether the augmentation interacts with the strategies we propose in \Cref{sec:relaxChoices}, namely whether we can still achieve about 50\% savings in the total number of inner iterations. The strategies are applied when solving the problem described in \Cref{sec:pbDesc} after an augmentation with a parameter $\eta=1000$, with the results being given in \Cref{tab:varTolAL} and plotted in \Cref{fig:augLaglower bound}. Comparing the percentage of iterations saved in this case to those obtained in \Cref{sec:relaxChoices}, it is clear that, when combined with the \ac{AL} method, the strategy of variable inner tolerance does help reduce the total number of inner iterations, but by a lower percentage. \import{images/pgf/lowBndVScumulCG}{augLag} \begin{table} \centering \caption{Reduction of the total number of \ac{CG} iterations after using the \ac{AL} ($\eta =1000$). The \ac{CG} tolerance is relaxed according to \Cref{eq:cst,eq:z,eq:zOptim,eq:adasquare,eq:hybrid}. The parameter used in \OptimalCase{} is $0.005$. } \begin{tabular}{cccccc} \ac{CG} tolerance & \ConstantCase & \AdaptiveCase & \PredictedCase & \HybridCase & \OptimalCase \\ \ac{CG} iterations & 2601 & 1886 & 1707 & 1661 & 1647 \\ Savings \% & - & 27.49 & 34.37 & 36.14 & 36.68 \end{tabular} \label{tab:varTolAL} \end{table} Since the \ac{AL} method modifies the (1,1)-block of the saddle-point system, it changes the difficulty of the inner problem and how many iterations the inner solver needs to perform. As such, a global comparison in terms of the number of inner iterations among all the scenarios we studied (original, preconditioned, deflated, including the \ac{AL}) is not fair unless the inner problem has the same degree of difficulty in all cases.
To verify the generality of our method, we also apply it in a different context than that described in \Cref{sec:pbDesc}. Let us consider a mixed Poisson problem: we solve the Poisson equation \(-\Delta u=f\) on the unit square $(0,1)^2$ using a mixed formulation. We introduce the vector variable $\vec{\sigma}=\nabla u$ and seek $(\vec{\sigma}, u)\in \Sigma \times W $ such that \begin{align} \vec{\sigma}-\grad u&=0,\\ -\mathrm{div} (\vec{\sigma}) &= f, \end{align} where homogeneous Dirichlet boundary conditions are imposed for $u$ at all walls. The forcing term $f$ is random, drawn uniformly from $(0,1)$. The discretization is done with a lowest-order Raviart-Thomas space $\Sigma^h \subset \Sigma$ and a space $W^h \subset W$ of piece-wise constant basis functions. We used the finite element package Firedrake\footnote{\url{www.firedrakeproject.org}} coupled with a PETSc~\cite{petsc-web-page,petsc-user-ref,petsc-efficient} implementation of \ac{GKB}\footnote{\url{https://petsc.org/release/docs/manualpages/PC/PCFIELDSPLIT.html\#PCFIELDSPLIT}}, adapted to include dynamic relaxation, to produce the following numerical results, starting from the implementation provided by Firedrake\footnote{\url{https://www.firedrakeproject.org/demos/saddle_point_systems.py.html}}. The test case has \num{328192} degrees of freedom, of which \num{197120} are associated with the (1,1)-block. The \ac{GKB} delay parameter is set to 3, the augmentation parameter $\eta$ to 500, and the tolerance for \ac{GKB} to \num{1e-5}. The results are presented in \Cref{fig:mixedPoissonLowBnd}. We confirm the results presented above with a reduction of over 60\% in the total number of inner \ac{CG} iterations with respect to the constant-accuracy setup.
\begin{figure} \centering \begin{tikzpicture} \begin{axis}[ ymode=log, legend pos= north east, width=\FigWid \textwidth, height=\FigHei \textwidth, xlabel={Inner CG iterations}, ylabel={Lower bound} ] \addplot+[name path=A] table [x=InnerKSPCumul, y=lowbnd]{images/petscResults/MixedPoisson/monitorCGJacobiCST500.txt}; \addlegendentry{ \ConstantCase} \addplot+[] table [x expr=\thisrow{InnerKSPCumul}, y=lowbnd]{images/petscResults/MixedPoisson/monitorCGJacobiHybrid500.txt}; \addlegendentry{\HybridCase } \addplot+[] table [x expr=\thisrow{InnerKSPCumul}, y=lowbnd]{images/petscResults/MixedPoisson/monitorCGJacobiAdaSquare500.txt}; \addlegendentry{ \PredictedCase} \addplot+[] table [x expr=\thisrow{InnerKSPCumul}, y=lowbnd]{images/petscResults/MixedPoisson/monitorCGJacobiZ500.txt}; \addlegendentry{ \AdaptiveCase} \addplot[name path=C, draw=none] table [x expr=0.5*\thisrow{InnerKSPCumul}, y=lowbnd]{images/petscResults/MixedPoisson/monitorCGJacobiCST500.txt} node[pos=1]{50\%}; \addplot[black!30, opacity=0.5] fill between[of=A and C]; \addplot[name path=C, draw=none] table [x expr=0.35*\thisrow{InnerKSPCumul}, y=lowbnd]{images/petscResults/MixedPoisson/monitorCGJacobiCST500.txt} node[pos=1]{65\%}; \addplot[black!20, opacity=0.5] fill between[of=A and C]; \end{axis} \end{tikzpicture} \caption{Lower bound (\Cref{eq:lowBnd}) for the error norm associated with the \ac{GKB} iterates versus the cumulative number of inner \ac{CG} iterations when solving the Mixed Poisson problem. We also use the \ac{AL} ($\eta =500$). See \Cref{eq:cst,eq:z,eq:adasquare,eq:hybrid} for the strategies denoted by the labels.} \label{fig:mixedPoissonLowBnd} \end{figure} \begin{table} \centering \caption{Reduction of the total number of \ac{CG} iterations after using the \ac{AL} ($\eta =500$) on the Mixed Poisson problem. The \ac{CG} tolerance is relaxed according to \Cref{eq:cst,eq:z,eq:adasquare,eq:hybrid}. 
} \begin{tabular}{ccccc} \ac{CG} tolerance & \ConstantCase & \AdaptiveCase & \PredictedCase & \HybridCase \\ \ac{CG} iterations & 10845 & 4680 & 4105 & 4225 \\ Savings \% & - & 56.84 & 62.15 & 61.04 \end{tabular} \label{tab:DarcyvarTolAL} \end{table} \section{Conclusions} We have studied the behavior of the \ac{GKB} algorithm in the case where the inner problem, i.e.~the solution of a linear system, is solved iteratively. We have found that the inner solver does not need to be as precise as a direct one in order to achieve a \ac{GKB} solution of a predefined accuracy. Furthermore, we have proposed algorithmic strategies that reduce the cost of the inner solver, quantified as the cumulative number of inner iterations. This is made possible by suitable criteria for adapting the stopping tolerance. To motivate these choices, we have studied the perturbation generated by the inexact inner solver. The findings show that the perturbation introduced in early iterations has a higher impact on the accuracy of the solution than that introduced in later ones. We devised a dynamic way of adapting the accuracy of the inner solver at each call to minimize its cost. The initial, high accuracy is gradually reduced, keeping the resulting perturbation under control. Our relaxation strategy is inexpensive, easy to implement, and has reduced the total number of inner iterations by 33--63\% in our tests. The experiments also show that including methods such as deflation, preconditioning and the augmented Lagrangian has no negative impact and can lead to a higher percentage of savings. Another advantage is that our method does not rely on additional parameters and is thus usable in a black-box fashion. \paragraph{Acknowledgments} The authors thank Mario Arioli for many inspiring discussions and advice.
\section{Introduction} Saddle-point systems can be found in a variety of application fields, such as mixed finite element methods in fluid dynamics or interior point methods in optimization. An extensive overview of application fields and solution methods for this kind of problem is presented in the well-known article \cite{bgl_2005} by Benzi, Golub and Liesen. In the following study, we focus on an iterative solver based on the Golub-Kahan bidiagonalization: the generalized \ac{GKB} algorithm. This solver is designed for saddle-point systems and was introduced by Arioli \cite{Ar2013}. It belongs to the family of Krylov subspace methods and, as such, relies on specific orthogonality conditions, as we review in more detail in \Cref{sec:GKBtheory}. Enforcing these orthogonality conditions requires solving an \emph{inner problem}, i.e.~formally computing products with matrix inverses (as described in \Cref{alg:GKB}). In practice, this computation is performed with a linear system solver. For this task, we explore in this article the use of iterative methods as a replacement for the direct methods that have been used within \ac{GKB} so far. This is essential for very large problems, such as those coming from a discretized \ac{PDE} in 2D or 3D, where direct solvers may reach their limits. Using an inner iterative solver might also be advantageous from another point of view, as we motivate in the following. The solution of large linear systems is often the bottleneck in scientific computing. The computational cost and, consequently, the execution time and/or the energy consumption can become prohibitive. For the inner-outer iterative \ac{GKB} solver, in turn, the principal and costliest part is the solution of the inner system at each outer iteration. One approximate metric for the cost of the \ac{GKB} solver is the aggregate number of inner iterations.
For a given setup, the cost of the \ac{GKB} method can hence be optimized by executing only the minimal number of inner iterations necessary for achieving a prescribed accuracy of the solution. To reduce this number, two steps can be taken into account.

First, for a given application it is often unnecessary to solve the linear system with the highest achievable accuracy. This could be the case, for example, in the solution of a discretized \ac{PDE}, where the discretization already introduces an error. A precise solution of the linear system would not improve the numerical solution with respect to the analytic solution of the \ac{PDE} any further than the discretization allows.

The second step is the main point of the study in this paper. In theory, the solution of the inner linear system in the \ac{GKB} method has to be exact. If we choose a rather low accuracy for the outer iterative solver, an exact inner solution might, however, no longer be necessary, as long as the inner error does not alter the chosen accuracy of the numerical solution. This strategy results in a further reduction of the number of inner iterations, since the inner solver converges in fewer iterations when a less strict stopping tolerance is used.

In the following study, we address the case where the inner solver has a prescribed stopping tolerance and examine how this limited accuracy affects the outer process and the quality of its iterates. We will show that, with the appropriate choice of parameters, it is possible to make use of inner iterative solvers without compromising the accuracy of the \ac{GKB} result. Clearly, the lower the accuracy of the inner solver, the less expensive the \ac{GKB} method will be. Furthermore, we take advantage of the versatility of iterative methods by adapting the stopping tolerance of the inner solver dynamically. In other words, we prescribe the tolerance of the inner solver according to criteria determined at each outer iteration. This can lead to a reduction of the cost, since only a minimal number of inner iterations is executed. Typically, we will reduce the required accuracy for later instances of the inner solver, since later steps of the outer \ac{GKB} iteration may contribute less to the overall accuracy.

One particular advantage of our proposed method is its generality. The strategy is independent of other, problem-specific choices, such as the preconditioner for a Krylov method. We perform most of our tests on a relatively small Stokes flow problem, to illustrate the salient features. We confirm our findings with one final test on a larger case of the mixed Poisson problem, including the use of the augmented Lagrangian method, to demonstrate the use in a realistic scenario.

Our study has a similar context to other works on inexact Krylov methods \cite{bouras2000relaxation,bouras2005inexact}, where these algorithms have been investigated from a numerical perspective. In these articles, the inexactness originates from the limited accuracy of the matrix-vector multiplication or of the solution of a local sub-problem.
Similar to what we have described above, it was found that the inner accuracy can be varied from step to step while still achieving convergence of the outer method. It was shown experimentally that the initial tolerance should be strict, then relaxed gradually, with the change being guided by the latest residual norm. Other works complemented the findings with theoretical insights, relevant to several algorithms of the Krylov family \cite{simoncini2003theory, simoncini2005relaxed,van2004inexact}. It was noted that, in some cases, unless a problem-dependent constant is included, the outer solver may fail to converge if the accuracy of the inner solution is adapted only based on the residual norm. This constant can be computed based on extreme singular values, as shown by Simoncini and Szyld \cite{simoncini2003theory}. Another source of inexactness can be the application of a preconditioner via an iterative method. Van den Eshof, Sleijpen and van Gijzen considered inexactness in Krylov methods originating both from matrix-vector products and variable preconditioning, using iterative methods from the GMRES family \cite{van2005relaxation}. Similarly to earlier work, their analysis relies on the connection between the residual and the accuracy of the solution to the inner problem. Since applying the preconditioner has the same effect as a matrix-vector product, the same strategies can be applied to more complex, flexible algorithms, such as those involving variable preconditioning: FGMRES \cite{saad1993flexible}, GMRESR \cite{van1994gmresr}, etc. A flexible version of the Golub-Kahan bidiagonalization is employed by Chung and Gazzola to find regularized solutions to a problem of image deblurring \cite{chung2019flexible}. In a more recent paper with the same application, Gazzola and Landman develop inexact Krylov methods as a way to deal with approximate knowledge of $\Am$ and $\Am ^T$ \cite{gazzola2021regularization}. 
Erlangga and Nabben construct a framework including nested Krylov solvers. They develop a multilevel approach to shift small eigenvalues, leading to a faster convergence of the linear solver \cite{erlangga2008multilevel}. In subsequent work related to multilevel Krylov methods, Kehl, Nabben and Szyld apply preconditioning in a flexible way, via an adaptive number of inner iterations \cite{kehl2019adaptive}. Baumann and van Gijzen analyze solving shifted linear systems and, by applying flexible preconditioning, also develop nested Krylov solvers \cite{baumann2015nested}. McInnes et al. consider hierarchical and nested Krylov methods with a small number of vector inner products, with the goal of reducing the need for global synchronization in a parallel computing setting \cite{mcinnes2014hierarchical}. Other than solving linear systems, inexact Krylov methods have been studied when tackling eigenvalue problems, as in the paper by Golub, Zhang and Zha \cite{GolZhaZha2000}. Although using different arguments, it was shown that the strategy of increasing the inner tolerance is successful for this kind of problem as well. Xu and Xue make use of an inexact rational Krylov method to solve nonsymmetric eigenvalue problems and observe that the accuracy of the inner solver (GMRES) can be relaxed in later outer steps, depending on the value of the eigenresidual \cite{xu2022inexact}. Dax computes the smallest eigenvalues of a matrix via a restarted Krylov solver which includes inexact matrix inversion \cite{dax2019restarted}. 
Our paper is structured as follows: in \Cref{sec:GKBtheory}, we review the theory and properties of the \ac{GKB} algorithm; in \Cref{sec:pbDesc}, we describe the specific problem we chose to use as test case for the numerical experiments; \Cref{sec:constAcc} is meant to illustrate the interactions between the accuracy of the inner solver and that of the outer one in a numerical test setting; \Cref{sec:pertErrAna} describes the link between the error of the outer solver and the perturbation induced by the use of an iterative inner solver. We describe and test our proposed strategy of using a variable tolerance parameter for the inner solver in \Cref{sec:relaxChoices}. We explore the interaction between the method of the \ac{AL} and our strategy in \Cref{sec:AL}. The final section is devoted to concluding remarks. \section{Generalized Golub-Kahan algorithm} \label{sec:GKBtheory} We are interested in saddle-point problems of the form \begin{align}\label{eqn:spsW} \left[ \begin{array}{cc} \Mm & \Am \\ \Am ^T & \mZ \end{array} \right] \left[ \begin{array}{c} \wv \\ \pv \end{array} \right] = \left[ \begin{array}{c} \gvv \\ \rv \end{array} \right] \end{align} with $\Mm\in \mathbb{R}^{m\times m}$ being a symmetric positive definite matrix and $\Am\in \mathbb{R}^{m \times n}$ a full rank constraint matrix. The generalized \ac{GKB} algorithm for the solution of a class of saddle-point systems was introduced by Arioli \cite{Ar2013}. To apply it to the system \Cref{eqn:spsW}, we first need the upper block of the right-hand side to be zero. To this end, we use the transformation \begin{align} \label{eq:iniTransf} \uv &= \wv - \Mm^{-1}\gvv ,\\ \bv &= \rv - \Am^T \Mm^{-1} \gvv.
\end{align} The resulting system is \begin{align}\label{eqn:sps} \left[ \begin{array}{cc} \Mm & \Am \\ \Am^T & 0 \end{array} \right] \left[ \begin{array}{c} \uv \\ \pv \end{array} \right] = \left[ \begin{array}{c} 0 \\ \bv \end{array} \right], \end{align} which is equivalent to that in \Cref{eqn:spsW}. We can recover the $\wv$ variable as $\wv = \uv + \Mm^{-1}\gvv$. Let $\Nm\in \mathbb{R}^{n\times n}$ be a symmetric positive definite matrix. To properly describe the \ac{GKB} algorithm, we need to define the following norms \begin{equation} \normM{\vvv} = \sqrt{ \vvv ^T \Mm \vvv }; \qquad \normN{\qv} = \sqrt{ \qv ^T \Nm \qv }; \qquad \normNI{\yv} = \sqrt{ \yv ^T \Nm^{-1} \yv }. \end{equation} Given the right-hand side vector $\bv \in \bR ^n$, the first step of the bidiagonalization is \begin{equation} \label{eq:iniGKBVec} \beta _1 = \normNI{\bv}, \quad \qv _1 = \Nm ^{-1} \bv / \beta _1. \end{equation} After $k$ iterations, the partial bidiagonalization is given by \begin{equation} \label{eq:oriGKB} \begin{cases} \Am \Qm _k = \Mm \Vm _k \Bm _k, &\qquad \Vm _k ^T \Mm \Vm _k = \Id _k \\ \Am ^T \Vm _k = \Nm \Qm _k \Bm ^T _k + \beta _ {k+1} \qv _{k+1} \ev _k^T, &\qquad \Qm _k ^T \Nm \Qm _k = \Id _k \end{cases}, \end{equation} with the bidiagonal matrix \begin{equation} \label{eq:BmatGKB} \Bm _k= \left[ \begin{matrix} \alpha_1 & \beta_2 & 0 & \ldots & 0 \\ 0 & \alpha_2 & \beta_3 & \ldots & 0 \\ \vdots & \vdots & \vdots & \vdots & \vdots \\ 0 & \ldots & 0 & \alpha_{k-1} & \beta_k \\ 0 & \ldots & 0 & 0 & \alpha_{k} \end{matrix} \right] \end{equation} and the residual term $\beta _ {k+1} \qv _{k+1} \ev _k^T$. The columns of $\Vm _k$ are orthonormal vectors with respect to the inner product and norm induced by $\Mm$, while the same holds for $\Qm _k$ and $\Nm$ respectively \begin{equation} \begin{split} & \vvv _i ^T \Mm \vvv _j = 0, \forall i \neq j; \qquad \normM{\vvv _k} = 1; \\ & \qv _i ^T \Nm \qv _j = 0, \forall i \neq j; \qquad \normN{\qv _k} = 1. 
\end{split} \end{equation} Prior to the normalization leading to $\vvv _k$ and $\qv _k$, the norms are stored as $\alpha _k$ for $\vvv _k$ and $\beta _k $ for $\qv _k$, as detailed in \cref{alg:GKB}. Using $\Vm _k$, $\Qm _k$ and the relations in \Cref{eq:oriGKB}, we can transform the system from \Cref{eqn:sps} into a simpler form \begin{equation}\label{eq:transfSys} \left[ \begin{matrix} \Id _k& \Bm _k \\ \Bm _k ^T & \mZ \end{matrix} \right] \left[ \begin{matrix} \zv _k \\ \yv _k \end{matrix} \right] = \left[ \begin{matrix} \mZ \\ \Qm _k ^T \bv \end{matrix} \right]. \end{equation} With the choice for $\qv _1$ given in \Cref{eq:iniGKBVec}, we have that $\Qm _k ^T \bv = \beta _1 \ev _1 $. The solution components to \Cref{eq:transfSys} are then given by \begin{equation} \label{eq:zandy} \zv _k= \beta _1 \Bm _k ^{-T} \ev _1; \quad \yv _k= - \Bm _k ^{-1} \zv _k, \end{equation} where $ \Bm _k ^{-T}$ is the inverse of $ \Bm _k ^{T}$. We can build the $k$-th approximate solution to \Cref{eqn:sps} as \begin{equation} \label{eq:GKBapx} \uv _k = \Vm _k \zv _k; \quad \pv _k = \Qm _k \yv _k. \end{equation} In particular, after $k=n$ steps and assuming exact arithmetic, we have $\uv _k = \uv$ and $\pv _k = \pv $, meaning we have found the exact solution to \Cref{eqn:sps}. A proof of why $n$ steps are sufficient to find the exact solution is given in the introductory paper by Arioli \cite{Ar2013}. This corresponds to the worst case, in which all $n$ iterations are necessary; for specific problems with particular features, the solution may be found after fewer steps. As $ k \rightarrow n$, the quality of the approximation improves ($\uv _k \rightarrow \uv$ and $\pv _k \rightarrow \pv $), with the bidiagonalization residual $\beta _ {k+1} \qv _{k+1} \ev _k^T$ vanishing for $k=n$.
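The structure of \Cref{eq:zandy} can be checked with a few lines of NumPy (a toy sketch; the names are ours). Note that the forward substitution with $\Bm_k^T$ makes the signs of consecutive components $\zeta_i$ alternate, since all $\alpha_i$ and $\beta_i$ are positive norms:

```python
import numpy as np

def reduced_solution(alphas, betas, beta1):
    """Solve the transformed system: z = beta1 * B^{-T} e1, y = -B^{-1} z,
    where B is upper bidiagonal with alphas on the diagonal and
    betas = (beta_2, ..., beta_k) on the superdiagonal."""
    k = len(alphas)
    B = np.diag(alphas) + np.diag(betas, 1)
    e1 = np.zeros(k)
    e1[0] = beta1
    z = np.linalg.solve(B.T, e1)   # forward substitution in exact arithmetic
    y = -np.linalg.solve(B, z)
    return z, y
```

For instance, with $\alpha = (2, 4)$, $\beta_2 = 1$, $\beta_1 = 6$, one obtains $\zv = (3, -0.75)$ and $\yv = (-1.59375, 0.1875)$.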
Given the structure of $\beta _1 \ev _1$ and $ \Bm ^{T}$, we find \begin{equation} \label{eq:zetaDef} \zeta _1= \frac{\beta _1}{\alpha _1}, \quad \zeta _k = - \zeta _{k-1} \frac{\beta _k}{\alpha _k}, \quad \zv _k = \left[ \begin{matrix} \zv _{k-1} \\ \zeta _k \end{matrix} \right] \end{equation} in a recursive manner. Then, $\uv _k$ is computed as $\uv_k = \uv _{k-1} + \zeta _k \vvv _k $. In order to obtain a recursive formula for $\pv$ as well, we introduce the vector \begin{equation} \dv _k= \frac{\qv _k - \beta _k \dv _{k-1}}{\alpha _k}, \quad \dv _1 = \frac{\qv _1}{\alpha _1}. \end{equation} Finally, the update formulas are \begin{equation} \label{eq:GKBupdate} \uv _k= \uv _{k-1} + \zeta _k \vvv _k , \quad \pv _k= \pv _{k-1} - \zeta _k \dv _k . \end{equation} At step $k$ of \Cref{alg:GKB}, we have the following error in the energy norm. \begin{equation} \label{eq:errExact} \begin{split} \normM{\ev _k} ^2 &= \normM{ \uv _k - \uv } ^2 = \normM{ \Vm _k \zv _k - [ \Vm _k \Vm _{n-k}] \left[ \begin{matrix} {\zv _k} \\ {\zv _{n-k}} \end{matrix} \right] } ^2 \\ &= \normM{ \Vm _{n-k} \zv _{n-k} } ^2 = \normEu{ \zv _{n-k}} ^2 = \sum_{i=k+1}^{n} \zeta _i ^2 \end{split} \end{equation} In the last line, we have made use of the $\Mm$-orthonormality of the $\Vm$ matrices. If we truncate the sum above to only its first $d$ terms, we get a lower bound on the energy norm of the error. The subscript $d$ stands for \textit{delay}, because we can compute this lower bound corresponding to a given step $k$ only after an additional $d$ steps \begin{equation} \label{eq:lowBnd} \xi ^2 _{k,d} = \sum_{i=k+1}^{k+d} \zeta _i ^2 < \normM{\ev _k} ^2. \end{equation} With this bound for the absolute error, we can devise one for the relative error in \Cref{eq:lowBndRel}, which is then used as the stopping criterion in \Cref{alg_line:GKBconvCheck} of \Cref{alg:GKB}. \begin{equation} \label{eq:lowBndRel} \bar \xi ^2 _{k,d} = \frac{ \sum_{i=k-d+1}^{k} \zeta _i ^2 }{ \sum_{i=1}^{k} \zeta _i ^2 } . 
\end{equation} The \ac{GKB} algorithm has the following error minimization property. Let $\cV _k = span \{\vvv _1, ..., \vvv _k\}$ and $\cQ _k = span \{\qv _1, ..., \qv _k\}$. Then, for any arbitrary step $k$, we have that \begin{equation} \label{eq:errMinProp} \min_{\underset{(\Am ^T \uv _k - \bv ) \perp \cQ _k}{ \uv _k \in \cV _k,} } \normM{ \uv - \uv _k } \end{equation} is met for $\uv _k$ as computed by \Cref{alg:GKB}. For brevity and because the \ac{GKB} algorithm features this minimization property for the primal variable, our presentation will focus on the velocity for Stokes problems. The stopping criteria for our proposed algorithmic strategies rely on approximations of the velocity error norm. For all the numerical experiments that we have performed, the pressure error norm is close to that of the velocity (less than an order of magnitude apart). In the cases where we operate on a different subspace, as a result of preconditioning, we find that the pressure error norm is actually smaller than that for the velocity. In the case where the dual variable is equally important as the primal, one can use a monolithic approach, such as applying MINRES to the complete saddle-point system. The \ac{GKB} (as implemented by \Cref{alg:GKB}) is a nested iterative scheme in which each outer loop involves solving an inner linear system. According to the theory given in the paper by Arioli \cite{Ar2013}, the matrices $\Mm$ and $\Nm$ have to be inverted exactly in each iteration. We can choose $ \Nm=\frac{1}{\eta} \Id$, whose inversion reduces to a scalar multiplication. In the following sections, unless otherwise specified, we consider $\eta =1 $. On the other hand, the matrix $\Mm$ depends on the underlying differential equations or the problem setting in general. As long as the matrix $\Mm$ is of moderate size, a robust direct solver can be used. For large problems, however, a direct solution might no longer be possible and an iterative solver will be required. 
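Returning briefly to the stopping criterion, the lower bound in \Cref{eq:lowBnd} and its relative variant involve only the scalars $\zeta _i$ and are thus essentially free to evaluate. The sketch below illustrates both quantities on a synthetic, geometrically decaying $\abs{\zeta}$ sequence; the decay rate and the values of $k$ and $d$ are arbitrary choices, not data produced by the solver.

```python
import numpy as np

n = 40
zeta = 0.8 ** np.arange(1, n + 1)       # synthetic |zeta_i|, i = 1..n

k, d = 10, 5
err_sq = np.sum(zeta[k:] ** 2)          # ||e_k||_M^2 = sum_{i=k+1}^{n} zeta_i^2
xi_sq = np.sum(zeta[k:k + d] ** 2)      # truncation to its first d terms

# relative variant, computable d steps after step k
rel = np.sqrt(np.sum(zeta[k - d:k] ** 2) / np.sum(zeta[:k] ** 2))
```

In practice, the delay $d$ trades the sharpness of the bound against the $d$ extra iterations needed before it becomes available.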
At this point, we face two problems. First, depending on the application, inverting $\Mm$ might be more or less costly. Second, to achieve a solution quality close to machine precision, an iterative solver might require a considerable number of iteration steps. \begin{algorithm} \caption{Golub-Kahan bidiagonalization algorithm} \label{alg:GKB} \begin{algorithmic}[1] \Require{$\Mm , \Am , \Nm, \bv$, maxit}
\State{$\beta_1 = \|\bv\|_{\Nm^{-1}}$; $\qv_1 = \Nm^{-1} \bv / \beta_1$}
\State{$\wv = \Mm^{-1} \Am \qv_1$; $\alpha_1 = \|\wv\|_{\Mm}$; $\vvv_1 = \wv / \alpha_1$}
\State{$\zeta_1 = \beta_1 / \alpha_1$; $\dv_1=\qv_1/ \alpha_1$; $\uv^{(1)} =\zeta_{1} \vvv_{1}$; $\pv^{(1)} = - \zeta_1 \dv_1$; }
\State{$\bar \xi _{1,d} = 1;$ $k = 1;$}
\While{ $\bar \xi _{k,d} > $ tolerance and $k < $ maxit}
\State{$\gvv = \Nm^{-1} \left( \Am^T \vvv_k - \alpha_k \Nm \qv_k \right) $; $\beta_{k+1} = \|\gvv\|_{\Nm}$}
\State{$\qv_{k+1} = \gvv / {\beta_{k+1}}$}
\State{$\wv = \Mm^{-1} \left( \Am \qv_{k+1} - \beta_{k+1} \Mm \vvv_{k} \right)$; $\alpha_{k+1} = \|\wv\|_{\Mm}$} \label{alg_line:innerGKBproblem}
\State{$\vvv_{k+1} = \wv / {\alpha_{k+1} }$}
\State{$\zeta_{k+1} = - \dfrac{\beta_{k+1}}{\alpha_{k+1}} \zeta_k$}
\State{$\dv_{k+1} = \left( \qv_{k+1} - \beta_{k+1} \dv_k \right) / \alpha_{k+1} $}
\State{$\uv^{(k+1)} = \uv^{(k)} + \zeta_{k+1} \vvv_{k+1}$; $\pv^{(k+1)} = \pv^{(k)} - \zeta_{k+1} \dv_{k+1}$}
\State{$k = k + 1$}
\If{$k>d$}
\State{$\bar \xi _{k,d} = \sqrt{ \sum_{i=k-d+1}^{k} \zeta _i ^2 / \sum_{i=1}^{k} \zeta _i ^2 } $} \label{alg_line:GKBconvCheck}
\EndIf{}
\EndWhile{}
\Return $\uv^{(k)}, \pv^{(k)}$ \end{algorithmic} \end{algorithm} In \Cref{alg_line:innerGKBproblem} of \Cref{alg:GKB}, we have the application of $\Mm^{-1} $ to a vector, which represents what we call the \textit{inner problem}. Typically, this is implemented as a call to a direct solver using the matrix $\Mm $ and the vector $\Am \qv_{k+1} - \beta_{k+1} \Mm \vvv_{k}$ as the right-hand side. 
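For concreteness, a compact NumPy transcription of \Cref{alg:GKB} is given below, with $\Nm = \Id$ and a dense inverse standing in for the direct solver of the inner problem in \Cref{alg_line:innerGKBproblem}. This is a sketch for illustration only: the synthetic saddle-point system, its dimensions and the tolerances are arbitrary choices, not the implementation or the problem used in our experiments.

```python
import numpy as np

def gkb(M, A, b, tol=1e-10, delay=3, maxit=None):
    """Golub-Kahan bidiagonalization for [[M, A], [A^T, 0]] [u; p] = [0; b],
    with N = I and a dense direct inner solve for M."""
    n, m = A.shape
    maxit = maxit if maxit is not None else m
    Minv = np.linalg.inv(M)               # stand-in for the inner solver

    beta = np.linalg.norm(b)              # beta_1 = ||b||_{N^{-1}}, N = I
    q = b / beta
    w = Minv @ (A @ q)
    alpha = np.sqrt(w @ (M @ w))          # alpha_1 = ||w||_M
    v = w / alpha
    zeta = beta / alpha
    d = q / alpha
    u, p = zeta * v, -zeta * d
    zetas = [zeta]
    for k in range(1, maxit):
        g = A.T @ v - alpha * q           # N^{-1}(A^T v_k - alpha_k N q_k)
        beta = np.linalg.norm(g)
        if beta < 1e-14:                  # breakdown: exact solution reached
            break
        q = g / beta
        w = Minv @ (A @ q) - beta * v     # M^{-1}(A q_{k+1} - beta_{k+1} M v_k)
        alpha = np.sqrt(w @ (M @ w))
        v = w / alpha
        zeta = -(beta / alpha) * zeta     # zeta_{k+1} = -(beta/alpha) zeta_k
        d = (q - beta * d) / alpha
        u = u + zeta * v
        p = p - zeta * d
        zetas.append(zeta)
        if k + 1 > delay:                 # delayed relative lower bound
            z2 = np.array(zetas) ** 2
            if np.sqrt(z2[-delay:].sum() / z2.sum()) < tol:
                break
    return u, p

# small synthetic saddle-point system (illustrative sizes)
rng = np.random.default_rng(1)
n, m = 12, 5
R = rng.standard_normal((n, n))
M = R @ R.T + n * np.eye(n)               # SPD (1,1) block
A = rng.standard_normal((n, m))
b = rng.standard_normal(m)

u, p = gkb(M, A, b)
```

On such a small dense system the iteration terminates after at most as many steps as there are columns in $\Am$, in agreement with the finite termination property discussed above.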
The main contribution of this work is a study of the behavior exhibited by \Cref{alg:GKB} when we replace the direct solver employed in \Cref{alg_line:innerGKBproblem} by an iterative one. In particular, for a target accuracy of the final \ac{GKB} iterate, we want to minimize the total number of inner iterations. Our choice for the inner solver is the unpreconditioned \ac{CG} algorithm, for its simplicity and relative generality. The strategies we propose in the subsequent sections do not rely on any specific feature of this inner solver, and are meant to be applicable regardless of this choice. We are interested in reducing the total number of inner iterations in a relative and general manner. This is why we do not take preconditioning for \ac{CG} into account, which is usually problem-dependent. We measure the effectiveness of our methods based on the percentage of inner iterations saved when compared against a scenario to be described in more detail in the following sections. \section{Problem description} \label{sec:pbDesc} As a test problem, we will use a 2D Stokes flow in a rectangular channel domain $\Omega = \left[-1, L \right] \times \left[-1, 1 \right]$ given by \begin{equation} \label{eq:contStokes} \begin{aligned} - \Delta \vec{u} + \nabla p &= 0 \\ \nabla \cdot \vec{u} &=0. \end{aligned} \end{equation} More specifically, we will consider the Poiseuille flow problem, i.e. a steady Stokes problem with the exact solution \begin{equation} \label{eq:contStokesSol} \begin{cases} u_x = 1- y^2, \\ u_y = 0, \\ p = -2x + \text{constant}. \end{cases} \end{equation} The boundary conditions are given as a Dirichlet condition on the inflow $\Gamma_{in}= \left\{-1\right\} \times \left[-1, 1 \right]$ (left boundary) and no-slip conditions on the top and bottom walls $\Gamma_{c}= \left[-1, L \right] \times \left\{-1\right\} \cup \left[-1, L \right] \times \left\{1\right\} $. 
The outflow at the right $\Gamma_{out}= \left\{L \right\} \times \left[-1, 1 \right]$ is represented as a Neumann condition \begin{equation*} \begin{split} \frac{\partial u_x}{\partial x} - p &= 0 \\ \frac{\partial u_y}{\partial x} &= 0. \end{split} \end{equation*} We use Q2-Q1 Finite Elements as the discretization method. Our sample matrices are generated by the Incompressible Flow \& Iterative Solver Software (IFISS)\footnote{\url{http://www.cs.umd.edu/~elman/ifiss3.6/index.html}} package \cite{ers07}; see the book by Elman et al. \cite{elman2014finite} for a more detailed description of this reference Stokes problem. \begin{figure} \centering \begin{tikzpicture} \begin{axis} [ width=\FigWid \textwidth, height=\FigHei \textwidth, minor tick num=3, grid=both, xtick = {-1,0,...,5}, ytick = {-1,0,...,1}, xlabel = $x$, ylabel = $y$, ticklabel style = {font = \scriptsize},
% colormap/jet, colorbar,
% enlargelimits=false,
axis on top, axis equal image ] \addplot [forget plot] graphics[xmin=-1,xmax=5,ymin=-1,ymax=1] {IFISSCh5SolRaw.png}; \end{axis} \end{tikzpicture} \caption{ Exact solution to the Stokes problem in a channel of length 5. Plotted is the $1-y^2$ function, which represents the $x$ direction velocity, overlaid with the mesh resulting from the domain discretization (Q2-Q1 Finite Elements Method). } \label{fig:convIFISSCh5SolMesh} \end{figure} We first illustrate some particular features shown by \ac{GKB} for this problem. We use a direct inner solver here, before discussing the influence of an iterative solver in subsequent sections. In \Cref{fig:convIFISSChL}, we plot the convergence history for several channels of different lengths, which leads us to notice the following details. The solver starts with a period of slow convergence, visually represented by a plateau, the length of which is proportional to the length of the channel. 
The rest of the convergence curve corresponds to a period of superlinear convergence, a phenomenon also known for other solvers of the Krylov family, such as \ac{CG}. The presence of this plateau is especially relevant for our proposed strategies and, since it appears for each channel, we can conclude it is a significant feature of this class of channel problems. In the following numerical examples, we choose $L=20$ as the right boundary, and thus a domain of length 21 units. \import{images/pgf/}{convIFISSChL.tex} \section{Constant accuracy inner solver} \label{sec:constAcc} Similar to what has been described by Golub et al. \cite{GolZhaZha2000} for solving eigenvalue problems, we have observed that when using an iterative method as an inner solver, its accuracy has a clear effect on the overall accuracy of the outer solver (see \Cref{fig:constCGtol}). We solve the channel problem described in \Cref{sec:pbDesc} with various configurations for the tolerance of the inner solver, and plot the resulting convergence curves in \Cref{fig:constCGtol}. The outer solver is always \ac{GKB} with a $10^{-7}$ tolerance. The cases we show are: a direct inner solver, three choices of constant inner solver tolerance ($10^{-3}$, $10^{-7}$ and $10^{-8}$), and a final case using a low-accuracy solver ($10^{-3}$) only for the first two iterations, followed by a high-accuracy one ($10^{-14}$). The stopping criterion for the \ac{GKB} algorithm is a delayed lower bound estimate for the energy norm of the primal variable (see \Cref{eq:lowBnd}). As such, \ac{GKB} with a direct inner solver performs a few extra steps, achieving a higher accuracy than the one required, here around $10^{-8}$. Notice how the outer solver cannot achieve a higher accuracy than that of the inner solver. The outer solver stops reducing the error even before reaching the same accuracy as the inner solver. 
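This saturation effect can be reproduced on a small synthetic saddle-point system by pairing the \ac{GKB} recurrences with a hand-rolled unpreconditioned \ac{CG} inner solver. The sketch below is illustrative only (random matrices, arbitrary sizes and tolerances; not the IFISS problem): with a tight inner tolerance the outer iterate matches a dense reference solve, while a loose inner tolerance leaves a visibly larger final error.

```python
import numpy as np

def cg(M, rhs, rtol):
    """Unpreconditioned CG for M x = rhs, stopped on the relative residual."""
    x = np.zeros_like(rhs)
    r = rhs.copy()
    s = r.copy()
    rho = r @ r
    stop = rtol * np.sqrt(rho)
    for _ in range(10 * len(rhs)):
        if np.sqrt(rho) <= stop:
            break
        Ms = M @ s
        a = rho / (s @ Ms)
        x += a * s
        r -= a * Ms
        rho_new = r @ r
        s = r + (rho_new / rho) * s
        rho = rho_new
    return x

def gkb_inexact(M, A, b, inner_rtol):
    """GKB with N = I; every inner M-solve uses CG with a fixed tolerance."""
    n, m = A.shape
    beta = np.linalg.norm(b)
    q = b / beta
    w = cg(M, A @ q, inner_rtol)
    alpha = np.sqrt(w @ (M @ w))
    v = w / alpha
    zeta = beta / alpha
    d = q / alpha
    u, p = zeta * v, -zeta * d
    for _ in range(m - 1):               # at most m steps in exact arithmetic
        g = A.T @ v - alpha * q
        beta = np.linalg.norm(g)
        q = g / beta
        w = cg(M, A @ q, inner_rtol) - beta * v
        alpha = np.sqrt(w @ (M @ w))
        v = w / alpha
        zeta = -(beta / alpha) * zeta
        d = (q - beta * d) / alpha
        u, p = u + zeta * v, p - zeta * d
    return u, p

rng = np.random.default_rng(2)
n, m = 30, 8
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
M = Q @ np.diag(np.logspace(0, 2, n)) @ Q.T       # SPD, condition number 1e2
A = rng.standard_normal((n, m))
b = rng.standard_normal(m)

# reference velocity from a dense solve of the full saddle-point system
K = np.block([[M, A], [A.T, np.zeros((m, m))]])
u_ref = np.linalg.solve(K, np.concatenate([np.zeros(n), b]))[:n]

err_tight = np.linalg.norm(gkb_inexact(M, A, b, 1e-12)[0] - u_ref)
err_loose = np.linalg.norm(gkb_inexact(M, A, b, 1e-2)[0] - u_ref)
```

Running the two configurations confirms that the final outer error tracks the tolerance of the inner solver.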
Replacing the exact inner solver by a \ac{CG} method with a constant tolerance of $10^{-8}$ leads to a convergence process where the error norm eventually reaches a value just below the target accuracy of $10^{-7}$ and does not decrease further. This highlights the fact that the inner solver does not need to be exact in order to have \ac{GKB} converge to the required solution. For this Poiseuille flow example, however, the inner solver must be at least one order of magnitude more precise than the outer one. In the last case examined here, we want to see if early imprecise iterations can be compensated for later by others with higher accuracy. This strategy of increasing accuracy has been found to work, e.g., in the case of the Newton method for nonlinear problems \cite{dembo1982inexact}. We tested the case when the first two iterations of \ac{GKB} use an inner solver with tolerance $10^{-3}$, while all the subsequent inner iterations employ a tolerance of $10^{-14}$. The resulting curve shows a convergence history rather similar to the case where \ac{CG} has a constant tolerance of $10^{-3}$. The outer process cannot reduce the error norm below $10^{-3}$, despite the fact that the bulk of the iterations employ a high-accuracy inner solver. This corresponds to what was observed by Golub et al. \cite{GolZhaZha2000} for solving eigenvalue problems. \import{images/pgf/}{constCGtol.tex} An interesting observation is that all the curves in \Cref{fig:constCGtol} overlap in their initial iterations, until they start straying from the apparent profile, eventually leveling off. In \Cref{sec:pertErrAna}, we analyze the causes leading to these particular behaviors and link them to the accuracy of the inner solver. \section{Perturbation and error study} \label{sec:pertErrAna} In this section we describe how the error associated with the iterates of \Cref{alg:GKB} behaves if we use an iterative solver for the systems involving $\Mm ^{-1}$. 
We can think of the approximate solutions of these inner systems as perturbed versions of those we would get when using a direct solver. The error is then characterized in terms of this perturbation and the implications motivate our algorithmic strategies given in the subsequent sections. With this characterization, we can also explain the results in \Cref{sec:constAcc}. The use of an iterative inner solver directly affects the columns of the $\Vm$ matrix. In the following, $\Vm$ denotes the unperturbed matrix, with $\Em _{\Vm}$ being the associated perturbation matrix. In particular, we are interested in the $\Mm$ norm of the individual columns of $\Em _{\Vm}$, which gives us an idea of how far we are from the \enquote{ideal} columns of $\Vm$. Changes in the $\vvv$ and $\qv$ vectors also have an impact on their respective norms $\alpha$ and $\beta$, which shift away from the values they would normally have with a direct inner solver. In turn, these changes propagate to the coefficients $\zeta$ used to update the iterates $\uv$ and $\pv$. Our observations concern the $\zv$ vector, its perturbation $\ev _{\zv}$ and their effect on the error of the primal variable $\uv$ measured in the $\Mm$ norm. The entries of $\zv$ change sign every iteration, but we will only consider them in absolute value, as it is their magnitude which is important. In the following, we will denote perturbed quantities with a hat. \subsection{High initial accuracy followed by relaxation} \label{subsec:theoPrecRelax} In this subsection, we take a closer look at the interactions between the perturbation and the error. For us, perturbation is the result of using an inexact inner solver and represents a quantity which can prevent the outer solver from reducing the error below a certain value. The error itself needs to be precisely defined, as it may contain several components, each minimized by a different process. 
Because we focus on the difference between the perturbed and the unperturbed \ac{GKB}, sources of error that affect both versions, such as the round-off error, are not included in the following discussion. According to the observations by Jir{\'a}nek and Rozlo{\v{z}}n{\'\i}k, the accuracy of the outer solver depends primarily on that of the inner solver, since the perturbations introduced by an iterative solver dominate those related to finite-precision arithmetic \cite{jiranek2008maximum}. We take the exact solution $\uv$ to be equal to $\uv _n$, the $n$-th iterate of the unperturbed \ac{GKB} with exact arithmetic. At step $k$ of the \ac{GKB}, we have the error, \begin{align} \normM{\ev _k} &= \normM{ \hat \uv _k - \uv }, \end{align} where $\hat \uv _k$ is the current approximate solution and $\uv $ is the exact one. Both can be written as linear combinations of columns from $\Vm$ with coefficients from $\zv$. Let $\hat \uv _k$ come from an inexact version of \Cref{alg:GKB}, where the solution of the inner problem (a matrix-vector product with $\Mm ^{-1}$) includes perturbations. The term $ \uv = \Vm _n \zv _n$ is available after $n$ steps of \Cref{alg:GKB} in exact arithmetic, without perturbations. We separate the first $k$ terms, which have been computed, from the remaining ($n-k$). \begin{equation} \label{eq:errPert} \begin{split} \normM{\ev _k} &= \normM{ \hat \uv _k - \uv } = \normM{ ( \Vm _k + \Em _{\Vm} ) ( \zv _k + \ev _{\zv} ) - [ \Vm _k \Vm _{n-k}] \left[ \begin{matrix} {\zv _k} \\ {\zv _{n-k}} \end{matrix} \right] } \\ &= \normM{ \Em _{\Vm} \zv _k + \Em _{\Vm} \ev _{\zv} + \Vm _k \ev _{\zv} - \Vm _{n-k} \zv _{n-k} } \\ & \leq \normM{ \Em _{\Vm} \zv _k } + \normM{ \Em _{\Vm} \ev _{\zv} } + \normEu{ \ev _{\zv} } + \normEu{ \zv _{n-k}} \end{split} \end{equation} In the last line, we have made use of the triangle inequality and of the $\Mm$-orthonormality of the $\Vm$ matrices. 
In the case of a direct inner solver, we can leave out the perturbation terms, recovering the result $\normM{\ev _k} ^2 = \normEu{ \zv _{n-k}} ^2 = \sum_{i=k+1}^{n} \zeta _i^2 $ given by Arioli \cite{Ar2013}. This is simply the error coming from approximating $\uv $ (a linear combination of $n$ $\Mm$-orthogonal vectors) by $ \uv _k $ (a linear combination of only $k$ $\Mm$-orthogonal vectors). This term decreases as we perform more steps of \Cref{alg:GKB} ($ k \rightarrow n $). By truncating the sum $\sum_{i=k+1}^{n} \zeta _i ^2$, we obtain a lower bound for the squared error. The remaining three terms in \Cref{eq:errPert} include the perturbation coming from the inexact inner solution. Our goal is to minimize the total number of iterations of the inner solver, so we are interested in knowing how large these terms can be allowed to be, such that we still recover a final solution of the required accuracy. The answer is to keep them just below the final value of the fourth one, $\normEu{ \zv _{n-k}}$, that is, below the acceptable algebraic error. If they are larger, the final accuracy will suffer. If they are significantly smaller, then our inner solver is unnecessarily precise and expensive. The following observations rely on the behavior of the $\zv$ vector. At each iteration, this vector gains an additional entry, while leaving the previous ones unchanged. These entries form a (mostly) decreasing sequence and have a magnitude below 1 when reaching the superlinear convergence phase. Unfortunately, we cannot yet provide a formal proof of these properties, but having seen them consistently reappear in our numerical experiments encourages us to use them to motivate our approach. These properties appear in both cases, with and without perturbation. 
The decrease in the entries of the coefficient vector used to build the approximation has also been observed and described for other Krylov methods (see references \cite{van2004inexact,simoncini2003theory,simoncini2005relaxed}). Their context is that of inexact matrix-vector products, which is another way of viewing our case. The fact that new entries of $\zv$ are simply appended to the old ones and that they are smaller than one is linked to the particular construction specific to \ac{GKB}. Back to \Cref{eq:errPert}, let us assume the perturbation at each iteration is constant, i.e. the $\Mm$ norm of each column of $\Em _{\Vm}$ is equal to the same constant. Then, the vector $\Em _{\Vm} \zv _k$ will be a linear combination of perturbation vectors with coefficients from $\zv _k$. Following our observations concerning the entries of $\zv _k$, the first terms of the linear combination will be the dominant ones, with later terms contributing less and less to the sum. If the perturbation of the first $\vvv$ has an $\Mm$ norm below our target accuracy, the term $ \normM{ \Em _{\Vm} \zv _k } $ will never contribute to the error. We can allow the $\Mm$ norm of the columns of $\Em _{\Vm}$ to increase, knowing the effect of the perturbation will be reduced by the entries of $\zv$, which are decreasing and less than one. The \ac{GKB} solution can be computed in a less expensive way, as long as the term $\normM{ \Em _{\Vm} \zv _k }$ is kept below our target accuracy. The perturbation should initially be small, then allowed to increase proportionally to the decrease of the entries in $\zv$. Next, we describe the terms including $\ev _{\zv}$. Let the following define the perturbed entries of $\hat \zv$ \begin{equation*} \hat \zeta _k = - \hat \zeta _{k-1} \frac{\hat \beta _k}{\hat \alpha _k} = - \hat \zeta _{k-1} ( \frac{ \beta _k}{ \alpha _k} + \epsilon _k). 
\end{equation*} The term $\epsilon _k$ is the perturbation introduced at iteration $k$, coming from the shifted norms associated with $\qv _k$ and $\vvv _k$. This term is then multiplied by $\hat \zeta _{k-1}$ which, according to our empirical observations, decreases at (almost) every step. If we assume $\epsilon _k$ is constant, the entries of $\ev _{\zv}$ decrease in magnitude and the norm $ \normEu{ \ev _{\zv} }$ is mostly dominated by the first vector entry. The strategy described for the term $ \normM{ \Em _{\Vm} \zv _k } $ also keeps $ \normEu{ \ev _{\zv} }$ small. We start with a perturbation norm below the target accuracy, to ensure the quality of the final iterate. Gradually, we allow an increase in the perturbation norm proportional to the decrease of $\hat \zeta _k$ to reduce the costs of the inner solver. Finally, since the vector $\ev _{\zv} $ decreases similarly to $ \zv $, the term $ \normM{ \Em _{\Vm} \ev _{\zv} }$ can be described in the same way as $\normM{ \Em _{\Vm} \zv _k }$. We close this section by emphasizing the important role played by the first iterations and how the initial perturbations can affect the accuracy of the solution. Notice that the perturbation terms included refer to all the $k$ steps, not just the latest one. Relaxation strategies that start with a low accuracy and gradually increase it are unlikely to work for \ac{GKB} and other algorithms with similar error minimization properties. Since the first vectors computed are the ones that contribute the most to reducing the error, they should be determined as precisely as possible. Even if we follow a perturbed iteration exclusively by very accurate ones, this will not prevent the perturbation from being transmitted to all the subsequent vectors, and potentially be amplified by multiplication with matrices and floating-point error. With these observations in mind, we can understand the results in \Cref{sec:constAcc}. 
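The damping effect described above can be illustrated with a toy computation: if every column of $\Em _{\Vm}$ has the same norm $\varepsilon$ (for simplicity we take $\Mm = \Id$ here, so the $\Mm$ norm is the Euclidean one) and the $\abs{\zeta}$ entries decay geometrically with all values below 1, the total contribution $\normEu{\Em _{\Vm} \zv _k}$ stays bounded by $\varepsilon \sum_i \abs{\zeta _i} < \varepsilon / (1-r)$. All values below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)
n, k = 50, 20
eps = 1e-8                              # constant per-column perturbation norm

E = rng.standard_normal((n, k))         # columns of E_V, rescaled to norm eps
E = eps * E / np.linalg.norm(E, axis=0)

zeta = 0.5 ** np.arange(1, k + 1)       # decreasing |zeta| entries, all < 1
contrib = np.linalg.norm(E @ zeta)      # || E_V z_k ||

bound = eps * np.abs(zeta).sum()        # triangle-inequality bound (< eps here)
```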
These findings are in line with those concerning other Krylov methods in the presence of inexactness (see Section 11 of the survey by Simoncini and Szyld \cite{simoncini2007recent} and the references therein). \ac{GKB} is not the only method which benefits from lowering the accuracy of the inner process, and the reason why this is possible is linked to the decreasing entries of the coefficient vector. \section{Relaxation strategy choices} \label{sec:relaxChoices} We have seen in \Cref{subsec:theoPrecRelax} that we can allow the perturbation norm to increase in a safe way, as long as the process is guided by the decrease of $ \abs{ \hat \zeta } $. This means that we can adapt the tolerance of the inner solver, such that each call becomes increasingly cheap, without compromising the accuracy of the final \ac{GKB} iterate. Then, at step $k$ we can call the inner solver with a tolerance equal to $\tau / f(\zeta)$. The scalar $\tau$ represents a constant chosen as either the target accuracy for the final \ac{GKB} solution, or something stricter, to counteract possible losses coming from floating-point arithmetic. The function $f$ is chosen based on the considerations described below, with the goal of minimizing the number of inner iterations. A similar relaxation strategy was used in a numerical study by Bouras and Frayss\'e \cite{bouras2005inexact} to control the magnitude of the perturbation introduced by performing inexact matrix-vector products. They employ Krylov methods with a residual norm minimization property, so the proposed criterion divides the target accuracy by the latest residual norm. In our case, because of the minimization property in \Cref{eq:errMinProp}, we need to use the error norm instead of the residual, since it is the only quantity which is strictly decreasing. Due to the actual error norm being unknown, we rely on approximations found via $\zeta$. 
Considering the error characterization of the unperturbed process $\normM{\ev _k} ^2 = \sum_{i=k+1}^{n} \zeta _i ^2$, we can approximate the error by the first term of the sum, which is the dominant one. However, when starting iteration $k$ we do not know $\zeta _{k+1}$, not even $\zeta _{k}$, so we cannot choose a tolerance for the inner solver required to compute $\uv _k$ based on these. What we can do is predict these values via extrapolation, using information from the known values $\zeta _{k-1}$ and $\zeta _{k-2}$. We know that in general $ \frac{\beta _k}{\alpha _k} = \abs{ \frac{\zeta _k}{\zeta _{k-1}} } $ acts as a local convergence factor for the $\abs{\zeta }$ sequence. We approximate the one for step $k$ by using the previous one $\frac{\zeta _{k-1}}{\zeta _{k-2}}$. Then, we can compute the prediction $\tilde \zeta _k := \zeta _{k-1} \frac{\zeta _{k-1}}{\zeta _{k-2}}$. By squaring the local convergence factor, we get an approximation for $ \zeta _{k+1} $ as $\tilde \zeta _{k+1} := \zeta _{k-1} \left( \frac{\zeta _{k-1}}{\zeta _{k-2}} \right) ^2$, which we can use to approximate $ \normM{\ev _k} $ and adapt the tolerance of the inner solver. In practice, we only consider processes which include perturbation, and assume we have no knowledge of the unperturbed values $\abs{\zeta }$. As such, for better readability, we drop the hat notation with the implicit convention that we are referring to values which do include perturbation and use them in the extrapolation rule above. For some isolated iterations, it is possible that $\abs{\zeta _k } \geq \abs{\zeta _{k-1} } $. This behavior is then amplified through extrapolation, potentially leading to even larger values. In turn, this can cause an increase in the accuracy of the inner solver, following a stricter value for the tolerance parameter $\tau / f(\zeta)$. In \Cref{subsec:theoPrecRelax}, we have shown that there is no benefit in increasing this accuracy. 
The new perturbation would be smaller in norm, but the error $\normM{\ev _k}$ would be dominated by the previous, larger perturbation. As such, we propose computing several candidate values for the stopping tolerance of the inner solver, and choosing the one with the maximum value. Since these are only scalar quantities, the associated computational effort is negligible, but the impact of a well-chosen tolerance sequence can lead to significant savings in the total number of inner iterations. The candidate values are: \begin{equation} \label{eq:relaxChoices} \begin{cases} \text{the value at the previous step}, \\ \tau / \abs{ \zeta _{k-1} } , \\ \tau / \abs{ \tilde \zeta _{k} } , \\ \tau / \abs{ \tilde \zeta _{k+1} } . \end{cases} \end{equation} To prevent a limitless growth of the tolerance parameter, we impose a maximum value of $ 0.1 $. All these choices are safe in the sense that they do not introduce perturbations that prevent the outer solver from reaching the target accuracy. We proceed by testing these relaxation strategies on the problem described in \Cref{sec:pbDesc}. The initial tolerance for \ac{CG} is set to $\tau = 10^{-8}$, one order of magnitude more precise than the one set for \ac{GKB}. As a baseline for comparison, we first keep the tolerance constant, equal to $\tau$. Then, we introduce adaptivity using $\tau / \abs{ \zeta_{k-1} }$. The third case changes the tolerance according to $\tau / \abs{ \tilde \zeta _{k+1} }$, the latter term being a predicted approximation of the current error. Finally, we employ a hybrid approach, where all candidate values in \Cref{eq:relaxChoices} are computed, but only the largest one is used. In the legends of the following plots, these four cases are labeled \ConstantCase, \AdaptiveCase, \PredictedCase, and \HybridCase, respectively. To monitor \ac{GKB} convergence, we track the lower bound for the energy norm of the error corresponding to the primal variable given in \Cref{eq:lowBnd}. 
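A possible implementation of this selection rule is sketched below; the function name and the sample values are ours, and the cap of $0.1$ follows the choice above.

```python
def hybrid_tolerance(tau, zeta_prev, zeta_prev2, previous_tol, cap=0.1):
    """Largest of the candidate inner-solver tolerances, capped at 0.1."""
    ratio = zeta_prev / zeta_prev2           # local convergence factor estimate
    pred_k = abs(zeta_prev * ratio)          # extrapolated |zeta_k|
    pred_k1 = abs(zeta_prev * ratio ** 2)    # extrapolated |zeta_{k+1}|
    candidates = [previous_tol,
                  tau / abs(zeta_prev),
                  tau / pred_k,
                  tau / pred_k1]
    return min(max(candidates), cap)         # pick the largest, then cap it

tau = 1e-8
tol = hybrid_tolerance(tau, zeta_prev=1e-3, zeta_prev2=1e-2, previous_tol=1e-5)
```

With these sample values the extrapolated $\abs{\tilde \zeta _{k+1}}$ dominates, so the returned tolerance is $\tau / \abs{\tilde \zeta _{k+1}} \approx 10^{-3}$; for a near-stagnating $\zeta$ sequence the cap becomes active instead.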
For easy reference, all the choices used and their respective labels are given below. We define $\tau = 10^{-8}$. \begin{align} (\mathtt{\ConstantCase}) &: \tau, \label{eq:cst}\\ (\mathtt{\AdaptiveCase}) &: \nicefrac{\tau}{\abs{ \zeta _{k-1} }}, \label{eq:z}\\ (\mathtt{\PredictedCase}) &: \nicefrac{\tau}{\abs{ \tilde \zeta _{k+1} }}, \label{eq:adasquare}\\ (\mathtt{\HybridCase}) &: \max\left\{\nicefrac{\tau}{\abs{ \zeta _{k-1} }}, \nicefrac{\tau}{\abs{ \tilde \zeta _{k} }}, \nicefrac{\tau}{\abs{ \tilde \zeta _{k+1} }}, \text{previous value} \right\}, \label{eq:hybrid}\\ (\mathtt{\OptimalCase}) &: \nicefrac{\tau}{ \left( \text{parameter} \cdot \abs{ \zeta _{k-1} } \right)}. \label{eq:zOptim} \end{align} Only the last scenario above, \OptimalCase{}, is left to explain. To see if the parameter-free choices can be improved, we run one more case which includes adaptivity by using $\abs{ \zeta _{k-1} }$, but also one constant parameter tuned experimentally. This is motivated by the fact that the considerations leading to \Cref{eq:relaxChoices} rely mostly on approximations and inequalities, which means we have an over-estimate of the error. It may be possible to reduce the total number of iterations further, by including an (almost) optimal, problem-dependent constant. The goal is to find a sequence of tolerance parameters with terms that are as large as possible, while guaranteeing the accuracy of the final \ac{GKB} iterate. All the results are given in \Cref{tab:varTolZ} and \Cref{fig:originallower bound}. \HybridCase{} offers the highest savings among the parameter-free choices (30\%), but \OptimalCase{}, the test with the problem-dependent constant, reveals that we can still improve this performance by about 6\%. \import{images/pgf/lowBndVScumulCG}{original} \begin{table} \centering \caption{Reduction of the total number of \ac{CG} iterations. The \ac{CG} tolerance is relaxed according to \Cref{eq:cst,eq:z,eq:zOptim,eq:adasquare,eq:hybrid}. The parameter in \OptimalCase{} is $0.05$. 
} \begin{tabular}{cccccc} \ac{CG} tolerance & \ConstantCase & \AdaptiveCase & \PredictedCase & \HybridCase & \OptimalCase \\ \ac{CG} iterations & 6963 & 5115 & 4897 & 4873 & 4399 \\ Savings \% & - & 26.54 & 29.67 & 30.02 & 36.82 \end{tabular} \label{tab:varTolZ} \end{table} \subsection{Increasing the savings by working on a simplified problem} \label{subsec:simple} Considering the observations in \Cref{subsec:theoPrecRelax} and the results plotted in \Cref{fig:originallower bound}, we can significantly reduce the accuracy of the inner solver only when the outer solver is in a superlinear convergence phase, i.e., when the $\abs{\zeta }$ sequence decreases rapidly. How much we can relax depends on the slope of the convergence curve. As such, to get the maximum reduction of the total number of iterations, the problem needs to be simplified, such that the convergence curve is as steep as possible and has no plateau. It is common to pair Krylov methods with other strategies, such as preconditioning, in order to improve their convergence behavior. The literature on these kinds of approaches is rich \cite{loghin2003schur,loghin2004analysis,bgl_2005,olshanskii2010acquired}. The following tests quantify the benefit of combining our proposed relaxation scheme with these other strategies. It has been shown by Arioli and Orban that the \ac{GKB} applied to the saddle-point system is equivalent to the \ac{CG} algorithm applied to the Schur complement equation \cite[Chapter~5]{orban2017iterative}. As such, the first step towards accelerating \ac{GKB} is to consider the Schur complement, defined as $\Sm := \Am ^T \Mm ^{-1} \Am$, especially its spectrum. Ideally, a spectrum with tightly clustered values and no outliers leads to rapid \ac{GKB} convergence \cite{KrDaTaArRu2020}. 
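The Schur complement relation behind this equivalence is easy to verify numerically: eliminating $\uv$ from \Cref{eqn:sps} gives $\uv = -\Mm^{-1}\Am\pv$ and hence $\Sm \pv = -\bv$. A toy check with arbitrary random data:

```python
import numpy as np

rng = np.random.default_rng(4)
n, m = 10, 4
R = rng.standard_normal((n, n))
M = R @ R.T + n * np.eye(n)              # SPD (1,1) block
A = rng.standard_normal((n, m))
b = rng.standard_normal(m)

# dense solve of the saddle-point system [[M, A], [A^T, 0]] [u; p] = [0; b]
K = np.block([[M, A], [A.T, np.zeros((m, m))]])
sol = np.linalg.solve(K, np.concatenate([np.zeros(n), b]))
u, p = sol[:n], sol[n:]

S = A.T @ np.linalg.solve(M, A)          # Schur complement S = A^T M^{-1} A
```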
To get as close as possible to this clustering we use the following two methods to induce positive changes in the spectrum: preconditioning with the \ac{LSC} \cite{elman2006block} and eigenvalue deflation. Each of them operates differently and leads to convergence curves with different traits. \import{images/pgf/}{easyIFISS20GKconv} In \Cref{fig:easyIFISS20GKconv}, we plot the \ac{GKB} convergence curve for each of these, using a direct inner solver. The \ac{LSC} aligns the small values in the spectrum with the main cluster and brings everything closer together. The corresponding \ac{GKB} convergence curve has no plateau and is much steeper than the curve for the unpreconditioned case. Using deflation, we remove the five smallest values from the spectrum, which constitute outliers with respect to the main cluster. The other values remain unchanged. As such, its convergence curve no longer has the initial plateau, but is otherwise the same as in the original problem. For both of these cases we apply the same strategies of relaxing the inner tolerance, to see how many total \ac{CG} iterations we can save. The rest of the set-up is identical to that described for \Cref{tab:varTolZ}. We tabulate the results in \Cref{tab:varTolLSC,tab:varTolDefl} and plot them in \Cref{fig:LSCpreclower bound,fig:deflatedlower bound}. They highlight that the best parameter-free results are obtained when using \HybridCase{}, which leads to savings of about 50\%, depending on the specific case. When comparing this parameter-free approach to \OptimalCase{}, which includes an experimental constant, we find that the hybrid approach can still be improved. Nonetheless, the difference in \ac{CG} iterations savings is not very high (up to 6\%), which supports the idea that our proposed strategy is efficient in a general-use setting. 
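To make the parameter-free rules concrete, the sketch below implements the tolerance updates \ConstantCase{}, \AdaptiveCase{}, \PredictedCase{} and \HybridCase{} in plain Python. It is an illustration rather than the code used for the experiments; in particular, since the predictor for $\tilde \zeta _{k+1}$ is not restated in this section, it is approximated here by a simple geometric extrapolation $\zeta_k^2/\zeta_{k-1}$, which is an assumption of this sketch.

```python
# Illustrative sketch of the inner-tolerance update rules; not the code used
# for the experiments in this paper.  `zeta` holds the outer lower-bound
# values |zeta_k| computed so far.  The predicted value zeta~_{k+1} is
# approximated by the geometric extrapolation zeta_k**2 / zeta_{k-1}
# (an assumption of this sketch).

TAU = 1e-8  # baseline tolerance, as in the text

def predict(zeta, k):
    """Hypothetical predictor for |zeta~_{k+1}| from the last two values."""
    return zeta[k] ** 2 / zeta[k - 1]

def tol_cst(zeta, k, prev):
    return TAU                                  # (cst)

def tol_z(zeta, k, prev):
    return TAU / abs(zeta[k - 1])               # (z)

def tol_adasquare(zeta, k, prev):
    return TAU / abs(predict(zeta, k))          # (adasquare)

def tol_hybrid(zeta, k, prev):
    # (hybrid): the most permissive candidate, never below the tolerance
    # used at the previous outer step (requires k >= 2 with this predictor)
    return max(TAU / abs(zeta[k - 1]),
               TAU / abs(predict(zeta, k - 1)),     # zeta~_k
               TAU / abs(predict(zeta, k)),         # zeta~_{k+1}
               prev)
```

On a decreasing $\abs{\zeta}$ sequence the hybrid rule is non-decreasing by construction and always at least as permissive as \AdaptiveCase{}, consistent with the savings reported above.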
An additional observation pertaining to the plots is that even if convergence is relatively fast (\Cref{fig:LSCpreclower bound}) or slow (\Cref{fig:deflatedlower bound}), the final savings are still around 50\%, as long as there is no plateau. \import{images/pgf/lowBndVScumulCG}{LSCprec} \begin{table} \centering \caption{Reduction of the total number of \ac{CG} iterations after using the \ac{LSC} preconditioner. The \ac{CG} tolerance is relaxed according to \Cref{eq:cst,eq:z,eq:zOptim,eq:adasquare,eq:hybrid}. The parameter used in \OptimalCase{} is $0.007$. } \begin{tabular}{cccccc} \ac{CG} tolerance & \ConstantCase & \AdaptiveCase & \PredictedCase & \HybridCase & \OptimalCase \\ \ac{CG} iterations & 2052 & 1301 & 1073 & 1046 & 919 \\ Savings \% & - & 36.60 & 47.71 & 49.03 & 55.21 \end{tabular} \label{tab:varTolLSC} \end{table} \import{images/pgf/lowBndVScumulCG}{deflated} \begin{table} \centering \caption{Reduction of the total number of \ac{CG} iterations after using deflation. The \ac{CG} tolerance is relaxed according to \Cref{eq:cst,eq:z,eq:zOptim,eq:adasquare,eq:hybrid}. The parameter used in \OptimalCase{} is $0.09$. } \begin{tabular}{cccccc} \ac{CG} tolerance & \ConstantCase & \AdaptiveCase & \PredictedCase & \HybridCase & \OptimalCase \\ \ac{CG} iterations & 4830 & 2625 & 2416 & 2411 & 2110 \\ Savings \% & - & 45.65 & 49.98 & 50.08 & 56.31 \end{tabular} \label{tab:varTolDefl} \end{table} \section{\ac{GKB} with the augmented Lagrangian approach} \label{sec:AL} The method of the \ac{AL} has been used successfully to speed up the convergence of the \ac{GKB} algorithm \cite{KrDaTaArRu2020}, with this effect being theoretically explained by Arioli et al. \cite{KrDaTaArRu2020}. Maybe most striking is the potential to reach mesh-independent convergence, provided that the augmentation parameter is large enough. 
Another use of the \ac{AL} method is to transform the (1,1)-block of a saddle-point system, say $\Wm$, from a positive semi-definite matrix to a positive definite one. However, this can happen only if the off-diagonal block $\Am$ is full rank or, more generally, if $\mbox{ker}(\Wm)\cap \mbox{ker}(\Am^T)=\{ \mZ \}$. Let $\Nm \in \mathbb{R}^{n\times n}$ be a symmetric, positive definite matrix. For a given symmetric, positive semi-definite matrix $\Wm \in \mathbb{R}^{m\times m}$, we can transform it into a positive-definite one by \begin{align} \Mm := \Wm + \Am \Nm^{-1} \Am^T. \end{align} The upper right-hand side term $\gvv$ then becomes \begin{equation} \gvv := \gvv + \Am \Nm^{-1}\rv. \end{equation} With these changes in place, we can proceed to using the \ac{GKB} algorithm, as described in \Cref{sec:GKBtheory}. Note that if the matrix $\Wm$ is already symmetric positive-definite, the transformation of the (1,1)-block is not necessary for using the \ac{GKB} method. However, the application of the \ac{AL} approach does lead to a better conditioning of the Schur complement, which significantly improves convergence speed \cite{KrDaTaArRu2020}. As in \Cref{sec:GKBtheory}, we choose $\Nm=\frac{1}{\eta} \Id$. There is as usual no free lunch: depending on the conditioning of the matrix $\Am$ and the magnitude of $\eta$, the \ac{AL} can also degrade the conditioning of the $\Mm$ matrix as a side-effect. We test whether the augmentation interacts with the strategies we propose in \Cref{sec:relaxChoices}, namely if we can still achieve about 50\% savings in the total number of inner iterations. The strategies are applied when solving the problem described in \Cref{sec:pbDesc} after an augmentation with a parameter $\eta=1000$, with the results being given in \Cref{tab:varTolAL} and plotted in \Cref{fig:augLaglower bound}. 
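The effect of the augmentation on the (1,1)-block can be illustrated on a tiny example (a sketch, not the code of this paper): a singular, positive semi-definite $\Wm$ and an $\Am$ with $\mbox{ker}(\Wm)\cap \mbox{ker}(\Am^T)=\{ \mZ \}$ are combined into $\Mm = \Wm + \eta \Am\Am^T$, i.e. $\Nm=\frac{1}{\eta} \Id$, and positive definiteness is tested with a plain-Python Cholesky factorization.

```python
# Minimal check that the augmented (1,1)-block M = W + eta * A A^T is
# positive definite when ker(W) and ker(A^T) intersect only in {0}.
# Plain-Python Cholesky; a non-positive pivot signals that the matrix is
# not positive definite.  Illustrative sketch only.

def cholesky(mat):
    """Return the lower-triangular Cholesky factor, or None if not SPD."""
    n = len(mat)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                piv = mat[i][i] - s
                if piv <= 0.0:
                    return None          # not positive definite
                L[i][j] = piv ** 0.5
            else:
                L[i][j] = (mat[i][j] - s) / L[j][j]
    return L

def augment(W, A, eta):
    """M = W + eta * A A^T  (i.e. N = (1/eta) * I)."""
    m, n = len(A), len(A[0])
    M = [row[:] for row in W]
    for i in range(m):
        for j in range(m):
            M[i][j] += eta * sum(A[i][k] * A[j][k] for k in range(n))
    return M

# W is positive semi-definite but singular: ker(W) = span{(0, 1)}.
W = [[1.0, 0.0],
     [0.0, 0.0]]
# A^T x = 0 forces x_2 = 0, so ker(W) and ker(A^T) meet only in {0}.
A = [[0.0],
     [1.0]]

M = augment(W, A, eta=1000.0)
print(cholesky(W) is None)   # True: W alone is not positive definite
print(cholesky(M) is None)   # False: the augmented block is
```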
Comparing the percentage of iterations saved in this case to those obtained in \Cref{sec:relaxChoices}, it is clear that, when combined with the \ac{AL} method, the strategy of variable inner tolerance does help reduce the total number of inner iterations, but by a lower percentage. \import{images/pgf/lowBndVScumulCG}{augLag} \begin{table} \centering \caption{Reduction of the total number of \ac{CG} iterations after using the \ac{AL} ($\eta =1000$). The \ac{CG} tolerance is relaxed according to \Cref{eq:cst,eq:z,eq:zOptim,eq:adasquare,eq:hybrid}. The parameter used in \OptimalCase{} is $0.005$. } \begin{tabular}{cccccc} \ac{CG} tolerance & \ConstantCase & \AdaptiveCase & \PredictedCase & \HybridCase & \OptimalCase \\ \ac{CG} iterations & 2601 & 1886 & 1707 & 1661 & 1647 \\ Savings \% & - & 27.49 & 34.37 & 36.14 & 36.68 \end{tabular} \label{tab:varTolAL} \end{table} Since the \ac{AL} method modifies the (1,1)-block of the saddle-point system, it changes the difficulty of the inner problem and how many iterations the inner solver needs to perform. As such, a global comparison in terms of the number of inner iterations, among all the scenarios we studied (original, preconditioned, deflated, including the \ac{AL}), is not fair unless the inner problem has the same degree of difficulty for all the cases. To verify the generality of our method, we also apply it in a different context than that described in \Cref{sec:pbDesc}. Let us consider a Mixed Poisson problem. We solve the Poisson equation \(-\Delta u=f\) on the unit square $(0,1)^2$ using a mixed formulation. We introduce the vector variable $\vec{\sigma}=\nabla u$. Find $(\vec{\sigma}, u)\in \Sigma \times W $ such that \begin{align} \vec{\sigma}-\grad u&=0,\\ -\mathrm{div} (\vec{\sigma}) &= f, \end{align} where homogeneous Dirichlet boundary conditions are imposed for $u$ at all walls. The forcing term $f$ is random and uniformly drawn in $(0,1)$. 
The discretization is done with a lowest order Raviart-Thomas space $\Sigma^h \subset \Sigma$, and a space $W^h \subset W$ containing piece-wise constant basis functions. We used the finite element package Firedrake\footnote{\url{www.firedrakeproject.org}} coupled with a PETSc~\cite{petsc-web-page,petsc-user-ref,petsc-efficient} implementation of \ac{GKB} \footnote{\url{https://petsc.org/release/docs/manualpages/PC/PCFIELDSPLIT.html\#PCFIELDSPLIT}}, adapted to include dynamical relaxation, to produce the following numerical results. The set-up follows the implementation provided by Firedrake\footnote{\url{https://www.firedrakeproject.org/demos/saddle_point_systems.py.html}}. The test case has \num{328192} degrees of freedom, of which \num{197120} are associated with the (1,1)-block. The \ac{GKB} delay parameter is set to 3. The augmentation parameter $\eta$ is set to 500 and the tolerance for the \ac{GKB} to \num{1e-5}. The results are presented in \Cref{fig:mixedPoissonLowBnd}. We confirm the results presented above with a reduction of over 60\% in the total number of inner \ac{CG} iterations with respect to the constant-accuracy setup. 
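The qualitative mechanism behind these savings can be reproduced in a completely self-contained toy setting. The script below does not implement \ac{GKB}: the outer lower bound is replaced by a prescribed, superlinearly decreasing sequence $\abs{\zeta_k}$ (an assumption of the sketch), and at each outer step an inner \ac{CG} solve on a small 1D Laplacian is run either to the constant tolerance $\tau$ or to the relaxed tolerance $\nicefrac{\tau}{\abs{\zeta_{k-1}}}$ of \Cref{eq:z}, capped at $0.1$. Counting cumulative inner iterations shows the same qualitative effect as in the tables.

```python
import random

def cg(matvec, b, rel_tol, max_it=10_000):
    """Plain conjugate gradients; returns (solution, iteration count)."""
    n = len(b)
    x = [0.0] * n
    r = b[:]
    p = r[:]
    rs = sum(ri * ri for ri in r)
    b_norm = rs ** 0.5          # x0 = 0, so ||r0|| = ||b||
    it = 0
    while rs ** 0.5 > rel_tol * b_norm and it < max_it:
        Ap = matvec(p)
        alpha = rs / sum(pi * Api for pi, Api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * Api for ri, Api in zip(r, Ap)]
        rs_new = sum(ri * ri for ri in r)
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
        it += 1
    return x, it

def laplacian_1d(v):
    """Matrix-vector product with tridiag(-1, 2, -1)."""
    n = len(v)
    return [2.0 * v[i] - (v[i - 1] if i > 0 else 0.0)
            - (v[i + 1] if i < n - 1 else 0.0) for i in range(n)]

def total_inner_iterations(adaptive, n=60, outer_steps=12, tau=1e-8, seed=0):
    rng = random.Random(seed)
    # Stand-in for the outer lower bound: superlinear decrease (assumption).
    zeta = [10.0 ** (-0.12 * k * k) for k in range(outer_steps + 1)]
    total = 0
    for k in range(1, outer_steps + 1):
        b = [rng.uniform(-1.0, 1.0) for _ in range(n)]
        tol = min(0.1, tau / zeta[k - 1]) if adaptive else tau
        _, it = cg(laplacian_1d, b, tol)
        total += it
    return total

constant = total_inner_iterations(adaptive=False)
relaxed = total_inner_iterations(adaptive=True)
print(constant, relaxed)  # the relaxed run needs fewer cumulative iterations
```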
\begin{figure} \centering \begin{tikzpicture} \begin{axis}[ ymode=log, legend pos= north east, width=\FigWid \textwidth, height=\FigHei \textwidth, xlabel={Inner CG iterations}, ylabel={Lower bound} ] \addplot+[name path=A] table [x=InnerKSPCumul, y=lowbnd]{images/petscResults/MixedPoisson/monitorCGJacobiCST500.txt}; \addlegendentry{ \ConstantCase} \addplot+[] table [x expr=\thisrow{InnerKSPCumul}, y=lowbnd]{images/petscResults/MixedPoisson/monitorCGJacobiHybrid500.txt}; \addlegendentry{\HybridCase } \addplot+[] table [x expr=\thisrow{InnerKSPCumul}, y=lowbnd]{images/petscResults/MixedPoisson/monitorCGJacobiAdaSquare500.txt}; \addlegendentry{ \PredictedCase} \addplot+[] table [x expr=\thisrow{InnerKSPCumul}, y=lowbnd]{images/petscResults/MixedPoisson/monitorCGJacobiZ500.txt}; \addlegendentry{ \AdaptiveCase} \addplot[name path=C, draw=none] table [x expr=0.5*\thisrow{InnerKSPCumul}, y=lowbnd]{images/petscResults/MixedPoisson/monitorCGJacobiCST500.txt} node[pos=1]{50\%}; \addplot[black!30, opacity=0.5] fill between[of=A and C]; \addplot[name path=C, draw=none] table [x expr=0.35*\thisrow{InnerKSPCumul}, y=lowbnd]{images/petscResults/MixedPoisson/monitorCGJacobiCST500.txt} node[pos=1]{65\%}; \addplot[black!20, opacity=0.5] fill between[of=A and C]; \end{axis} \end{tikzpicture} \caption{Lower bound (\Cref{eq:lowBnd}) for the error norm associated with the \ac{GKB} iterates versus the cumulative number of inner \ac{CG} iterations when solving the Mixed Poisson problem. We also use the \ac{AL} ($\eta =500$). See \Cref{eq:cst,eq:z,eq:adasquare,eq:hybrid} for the strategies denoted by the labels.} \label{fig:mixedPoissonLowBnd} \end{figure} \begin{table} \centering \caption{Reduction of the total number of \ac{CG} iterations after using the \ac{AL} ($\eta =500$) on the Mixed Poisson problem. The \ac{CG} tolerance is relaxed according to \Cref{eq:cst,eq:z,eq:adasquare,eq:hybrid}. 
} \begin{tabular}{ccccc} \ac{CG} tolerance & \ConstantCase & \AdaptiveCase & \PredictedCase & \HybridCase \\ \ac{CG} iterations & 10845 & 4680 & 4105 & 4225 \\ Savings \% & - & 56.84 & 62.15 & 61.04 \end{tabular} \label{tab:DarcyvarTolAL} \end{table} \section{Conclusions} We have studied the behavior of the \ac{GKB} algorithm in the case where the inner problem, i.e. the solution of a linear system, is solved iteratively. We have found that the inner solver does not need to be as precise as a direct one in order to achieve a \ac{GKB} solution of a predefined accuracy. Furthermore, we have proposed algorithmic strategies that reduce the cost of the inner solver, quantified as the cumulative number of inner iterations. This is achieved by selecting criteria that adapt the stopping tolerance. To motivate these choices, we have studied the perturbation generated by the inexact inner solver. The findings show that the perturbation introduced in early iterations has a higher impact on the accuracy of the solution compared to later ones. We devised a dynamic way of adapting the accuracy of the inner solver at each call to minimize its cost. The initial, high accuracy is gradually reduced, maintaining the resulting perturbation under control. Our relaxation strategy is inexpensive, easy to implement, and has reduced the total number of inner iterations by 33--63\% in our tests. The experiments also show that including methods such as deflation, preconditioning and the augmented Lagrangian has no negative impact and can lead to a higher percentage of savings. Another advantage is that our method does not rely on additional parameters and is thus usable in a black-box fashion. \paragraph{Acknowledgments} The authors thank Mario Arioli for many inspiring discussions and advice.
Upcoming International Conference and Workshop November 15th - 17th, 2018 In June 2015, a group of scholars from Columbia University was invited to Kiel to meet their German colleagues in a "come-into-touch" conference that would become the foundation stone of the Anxiety Culture Project. Mirroring that first meeting, a bit over three years later, we have organized a new event at Kiel University: an international conference under the title "Living and Learning in a World of Anxieties" and, at the same time, a "get-to-work" workshop to discuss the profile of our project and define research strategies for the upcoming years. The event will take place between Thursday, November 15th, and Saturday, November 17th, at Kiel University. External guests are welcome to join us in the conference talks. You can find the time schedule following this link. Anxiety Culture Project Workshop in New York On February 23rd, 2018, a group of researchers and professionals from different institutions and fields met at the Interchurch Center, in New York City, to take part in the Anxiety Culture Project Workshop organized by several of our team members at Columbia University. The Interchurch Center hosts several interfaith and non-religious, non-profit organizations, such as the Council for European Studies and the Alliance Program, with whom we have recently begun to collaborate. You can read more about the history and mission of this impressive building here. This workshop is a follow-up of the conversation that began at the International Conference in Gut Siggen, Germany, in July 2017. 
After a short round of introductions, our participants were divided into four think-tank groups to discuss how their individual research topics fit into at least one of our clusters (climate change, health, migration, or technology), and how each individual member's interests and expertise can help further develop the Anxiety Culture-related research in that particular field. Afterwards, the conclusions of each individual cluster were presented to the larger group. Through feedback from other perspectives, new ideas arose to clearly specify each cluster's research aims while, at the same time, finding intersecting areas and common ground between clusters. This exchange also helped us define the research questions that our project wants to answer on the larger scale. Additionally, the workshop was a unique opportunity to bring people together and create new liaisons between attendees from several institutions, including Teachers College and Kiel University, among many others. Click here to see the full programme and read the profiles of all participants. We would like to warmly thank our collaborators from the following institutions: Columbia University, Teachers College, Knight Institute, Bremen University, Kiel University, Sciences Po, Science Applications CIESIN, Eucor, Council for European Studies, Alliance Program, United Nations Alliance of Civilizations, Political Studies Institute in Aix-en-Provence, International College of Philosophy in Paris, Social Science Research Council, Heyman Center, and Columbia Global Center. Additionally, we are immensely grateful to the Interchurch Center for letting us use their facilities. Get-Together and Conference at Teachers College As part of the follow-up activities to the research exchange in Gut Siggen in July, our scholars Dr. Mar Mañes and Kira Seeler were invited to New York City to participate in a get-together and a two-day conference at Teachers College at Columbia University on November 28th and 29th. 
The first part of the meeting consisted of an update regarding new achievements and progression of the "Anxiety Culture Project" with Prof. Dr. John Allegrante and Prof. Dr. Ulrich Hoinkes, who is currently a visiting scholar at Teachers College. Upcoming events in the first trimester of 2018 were discussed, as well as the planning of a conference in Paris in Fall 2018. Additionally, contact was established with new participants and scholars who are interested in the research project. After a brief tour around the impressive campus, Dr. Mar Mañes and Kira Seeler were guest speakers at the conference that had been organized at the Grace Dodge Hall building. They introduced the "Anxiety Culture Project" to a group of research assistants and staff members in Prof. Allegrante's team, who engaged in a very interesting discussion and proposed new outlooks for our research. As a closing act, our scholars attended the Annual Tisch Lecture by Prof. Dr. Nathan A. Fox, Distinguished University Professor at the University of Maryland and Chair of the Department of Human Development and Quantitative Methodology. Prof. Fox gave a lecture on Recovery From Severe Psychological Deprivation based on the research he and his team conducted on orphaned Romanian children. We would like to thank Teachers College, and Prof. Dr. Allegrante in particular, for the invitation and for giving us the opportunity to take part in these activities. Prof. Dr. Ulrich Hoinkes, Kira Seeler, Dr. Mar Mañes and Prof. Dr. John Allegrante at Teachers College. The Anxiety Culture Research Project in Oviedo, Spain An international conference around the topics of "Governance, Trust, and Risk Culture" took place at the University of Oviedo, Spain, between September 25th and September 27th, 2017. Our team member, Dr. Mar Mañes Bordes, attended and gave an overview of the Anxiety Culture Research Project, after which a discussion ensued. 
The event was organized by the research group in Social Studies of Science of the University of Oviedo, Spain, in collaboration with several Iberian institutions, such as the CIEMAT in Madrid, the University of Valladolid, and the Centre for Research and Studies in Sociology at the University Institute of Lisbon. Their aim was to bring together experts in different fields in order to debate about the complex relations between trust, risk, and knowledge, focusing mainly, though not exclusively, on the areas of science and technology. The conference, which took place between September 25th and 27th at the Aula Magna of the historical building of the University of Oviedo, in the heart of the city, was an excellent opportunity to get in touch with an international group of investigators with similar approaches to our Anxiety Culture research project. Research Agreement (MOU) with Teachers College, Columbia University, New York Following the three-day international conference that took place at Gut Siggen, a Memorandum of Understanding was signed on July 19th as a culminating act of the mutual cooperation between the Faculty of Arts and Humanities at Christian-Albrechts University of Kiel (CAU) and Teachers College at Columbia University, together with the Leibniz Institute for Science and Mathematics Education (IPN). This agreement formalizes our collaborative international and interdisciplinary project and establishes our common aims in the research regarding public education services within the context of "anxiety culture". © Jan Winters, CAU Our latest press contributions about the Anxiety Project and MOU with TC in German (Kieler Nachrichten, July 19th 2017) Articles about the Anxiety Culture Research Project in Kieler Nachrichten. Viducation Workshop July 7, 2017, 10 a.m. - 6 p.m. 
The perception of social threats and crises in public life and education "VIDUCATION" How to put learning in a new setting In this brief workshop we want to introduce a new creative way of short-film production in the framework of innovative academic teaching. Therefore, we share our expertise and experience in the broad field of self-produced tutorial and educational films within the university context. In addition, we provide input on how to integrate those educational videos into academic didactics and give examples of their major contribution to the learning environment and to the increase in core competences. Representatives from several international institutions shared their findings around the topic of "anxiety culture" from different research fields. © 2018 by Hoinkes Research.
Earl Judson Isaac (7 August 1921 – 12 December 1983) founded Fair, Isaac and Company along with friend William R. "Bill" Fair in 1956. They began the operation in a small studio apartment on Lincoln Avenue in San Rafael, California. Early life and education Earl Isaac was born in Buffalo, New York. The youngest of five children, Earl had two older brothers and two older sisters. He graduated from high school at age 15, attended Muskingum University, then accepted an appointment to the United States Naval Academy at Annapolis, Maryland, for the class of 1944. Because of the outbreak of World War II, he graduated from the Naval Academy in 1943, one year early, and served as a U.S. Naval officer on the USS Missouri. After World War II, Earl earned a Master of Science degree in Mathematics at the University of California, Los Angeles. Career After being discharged from the Navy, he worked at SRI International, then known as the Stanford Research Institute, during its early years, along with Bill Fair, his future business partner. With an initial investment of $400 each, Isaac and Fair founded Fair, Isaac and Company in 1956, a firm that would provide a creditworthiness scoring system to lenders.
\section{Introduction} \maketitle The dynamics of the superfluid interior of a neutron star is central to understanding a variety of phenomena that includes observed spin glitches, stochastic spin variations and thermal evolution, as well as possible precession and r-modes. In this connection, the possible importance of hydrodynamic instabilities in neutron stars has become a question of considerable interest. \citet{peralta_etal05,peralta_etal06} have shown that differential rotation in the core, resulting from a spin glitch or possibly causing it, drives an Ekman flow along the rotation axis that can excite a variant of the ``Glaberson-Donnelly'' counterflow instability in liquid helium \citep{Glaberson_etal74}; transitions between laminar flow and fully-developed turbulence could drive spin glitches. This instability could also be excited in precessing neutron stars \citep{gaj08a,vl08}. Unstable shear layers \citep{pm09} and r-mode instabilities \citep{ga09} in the outer core may also play a role in glitches. From the standpoint of building a realistic theory of neutron star seismology with which to interpret observations, it is important to identify hydrodynamic instabilities of possible relevance. The possibility of turbulent instabilities in the neutron star inner crust, the region from the neutron drip density to about half nuclear saturation density, has received little attention in this regard. Here the vortices that thread the rotating superfluid are predicted to interact with nuclei with energies of $\sim 1$--$5$ MeV per nucleus in the denser regions \citep{alpar77,eb88,dp06,abbv07}. Recent work has shown that this interaction will pin vortices to nuclei, regardless of the details of the pinning potential \citep{link09}. As the star spins down, the differential velocity between the superfluid and the pinned vortices approaches the critical value at which the hydrodynamic lift force on vortices, the {\em Magnus force}, would unpin them. 
As suggested long ago by \citet{ai75}, the spin glitches seen in neutron stars could arise from large-scale vortex unpinning from nuclei, wherein the threshold for pinning is exceeded. Below the critical velocity, pinned vortices slowly creep through thermal activation \citep{alpar_etal84,leb93} or quantum tunneling \citep{leb93}, driven by the Magnus force. Here we demonstrate the existence of a hydrodynamic instability related to the vortex creep process that could grow over timescales as short as months. In the next section, we describe vortex pinning in the inner crust. In \S 3 we give the stability analysis. In \S 4, we discuss hydrodynamic wave solutions in the case of no background flow. In \S 5, we describe the hydrodynamic instability that arises when vortices move slowly through the nuclear lattice. In \S 6, we calculate the vortex mobility, which we apply to obtain the growth rate of the instability. We conclude with a discussion of the possibility that the inner-crust superfluid becomes turbulent. \section{Vortex Pinning} First we calculate the critical velocity between the superfluid and the crust that can be sustained by vortex pinning. We will use these results in \S 6 to calculate the vortex mobility and the growth rate of the instability. Vortex pinning fixes the local superfluid velocity in the laboratory frame. As the crust spins down, a velocity difference $v$ between the pinned vortices and the superfluid develops. The Magnus force per unit length of vortex is \begin{equation} f_{\rm mag}=\rho\kappa v, \end{equation} where $\rho$ is the superfluid mass density, $\kappa\equiv h/m$ is the quantum of vorticity, and $m$ is twice the neutron mass. Let $F_p$ be the characteristic force of the vortex-nucleus potential. Above a critical velocity difference $v_c$, the Magnus force will exceed the pinning force, and vortex pinning is not possible. 
If a vortex could bend to intersect nuclei of average spacing $a$, the critical velocity difference $v_c$ would be given by \begin{equation} \rho\kappa v_ca=F_p. \end{equation} A vortex has a large self energy (tension) that typically prevents it from bending over a length scale $a$. If the tension were infinite, a vortex could not pin at all, since the vortex would remain straight and the forces from nuclei that surround the vortex would cancel on average. For finite tension, the vortex can bend over a length $l_p>a$, and the critical velocity is given instead by \begin{equation} \rho\kappa v_c l_p=F_p, \end{equation} giving a critical velocity \begin{equation} v_c =\frac{F_p}{\rho\kappa a}\left(\frac{a}{l_p}\right). \label{vc0} \end{equation} Tension lowers the critical velocity by a factor $a/l_p$. To calculate $l_p$, let $\rbf_v(z)$ be a vector in the $x-y$ plane that gives the shape of the pinned vortex. The energy of a static vortex in a pinning field $V(\rbf_v)$, in the absence of an ambient superfluid flow, is \begin{equation} E_v=\int dz\, \left(\frac{1}{2}T_v\left|\frac{d\rbf_v(z)}{dz}\right|^2 + V(\rbf_v)\right), \label{e} \end{equation} where $T_v$ is vortex tension, typically 1 MeV fm$^{-1}$. On average, over a length $l_p$ the vortex bends by an amount $\delta r_v$ to intersect one nucleus in a volume $l_p\pi (\delta r_v)^2$. The quantities $l_p$ and $\delta r_v$ are therefore related by \begin{equation} a^{-3} l_p \pi (\delta r_v)^2=1. \label{mfp} \end{equation} The energy of the vortex per unit length, from eq. (\ref{e}), is approximately \begin{equation} \frac{E_v}{l_p}\simeq \frac{1}{2}T_v\frac{(\delta r_v)^2}{l_p^2} -\frac{E_p}{l_p}, \end{equation} where $E_p$ is the interaction energy between a vortex and a single nucleus, typically $\sim 1$ MeV. Contributions to the potential by nuclei that the vortex does not intersect have been ignored; these contributions will largely cancel. Minimization of $E_v/l_p$ with respect to $l_p$, using eq. 
(\ref{mfp}), gives \begin{equation} \frac{l_p}{a}=\left(\frac{3aT_v}{2\pi E_p}\right)^{1/2}. \label{lp} \end{equation} The vortex tension $T_v$ is due mainly to the kinetic energy per unit length of vortex due to circulation abut the vortex, and takes the form \citep{thomson1880,fetter67}, \begin{equation} T_v=\frac{\rho\kappa^2}{4\pi}(0.116-\ln k_v\xi), \label{tension} \end{equation} where $\xi$ is the radius of the vortex core and $k_v$ is the characteristic bending wavenumber, $k_v=\pi/2l_p$. For typical conditions of the inner crust, the ratio $l_p/a$ is much larger than unity. At a density $\rho=5\times 10^{13}$ \hbox{\rm\hskip.35em g cm}$^{-3}$\, the lattice spacing is $a\simeq 50$ fm and the radius of the vortex core is $\xi\simeq 10$ fm. For $E_p=1$ MeV, simultaneous solution of eqs. (\ref{lp}) and (\ref{tension}) gives $l_p\simeq 9a$. The ratio $l_p/a$ increases for weaker pinning. For example, for $E_p=0.1$ MeV, the pinning length becomes $l_p\simeq 32a$. For $E_p=10$ MeV, unrealistically large according to recent calculations, $l_p=2a$. Combining eq. (\ref{lp}) with eq. (\ref{vc0}) gives the critical velocity, modified by vortex tension, \begin{equation} v_c= \frac{F_p}{\rho\kappa a}\left(\frac{a}{l_p}\right)= \frac{E_p}{\rho\kappa a\xi} \left(\frac{2E_p}{3aT_v}\right)^{1/2}, \label{vcrit} \end{equation} where we have taken $F_p=E_p/\xi$. Eq. (\ref{vcrit}) was found in the numerical simulations of the dynamics of an isolated vortex in a random potential \citep{link09}. This equation shows that pinning is weakened by vortex tension. For $E_p=$ 1 MeV and $\xi=10$ fm, the critical velocity is $v_c\simeq 4\times 10^5$ \hbox{\rm\hskip.35em cm s}$^{-1}$. \footnote{A value of $v_c$ as large as $\sim 10^7$ \hbox{\rm\hskip.35em cm s}$^{-1}$\ was estimated in \citet{link09}, assuming $E_p=5$ MeV at $\rho=10^{13}$ \hbox{\rm\hskip.35em g cm}$^{-3}$, for $\xi\simeq a\simeq 70$ fm. 
This number is a generous upper limit.} The corresponding differential angular velocity between the superfluid and the crust is as large as $\sim 1$\ rad s$^{-1}$, but still much less than the angular velocity of the star when the superfluid condensed. The relative flow between the superfluid and the crust will thus be close to or comparable to the local critical velocity in regions where there is pinning. We now examine the stability of this differentially-rotating state. \section{Perturbation Analysis} The problem of the coupled dynamics of the superfluid and vortex lattice can be studied using the hydrodynamic theory of \citet{bc83}, which accounts for vortex degrees of freedom. The local quantities of fluid velocity, vortex density, and vortex velocity are averaged over a length scale that is large compared to the inter-vortex spacing $l_v$; the theory is valid for wavenumbers that satisfy $kl_v\ll 1$. We treat the superfluid as a single-component fluid at zero temperature, and ignore dissipation in the bulk fluid and the small effects of vortex inertia. These approximations are justified in a typical neutron star, for which the temperature of the inner crust is much less than the condensation temperature of the superfluid. We also treat the crust as infinitely rigid and ignore local shear deformations, an approximation that will be justified below. The motion of the superfluid does not couple to the electrons, so electron viscosity is not relevant. Magnetic fields are not relevant either, as they do not interact with the vortices of the inner crust. We will consider only shear modes in the superfluid, so that the flow velocity $\vbf(\rbf,t)$ is divergence-free. The rotation axis lies along $\hat{z}$, and $\rbf_v(\rbf,t)$ denotes the continuum vortex displacement vector, with components in the $x-y$ plane only. 
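As an aside, the self-consistent pinning length and critical velocity quoted in \S 2 can be reproduced with a short script. The sketch below (an illustration for checking orders of magnitude, not part of the published calculation) works in CGS units with the fiducial values $\rho=5\times 10^{13}$ g cm$^{-3}$, $a=50$ fm, $\xi=10$ fm and $E_p=1$ MeV, iterating eqs. (\ref{lp}) and (\ref{tension}) to a fixed point and then evaluating eq. (\ref{vcrit}).

```python
import math

# CGS constants
h = 6.626e-27                 # Planck constant [erg s]
m = 2.0 * 1.675e-24           # twice the neutron mass [g]
kappa = h / m                 # quantum of vorticity [cm^2 s^-1]

# Fiducial inner-crust values from the text
rho = 5e13                    # superfluid density [g cm^-3]
a = 50e-13                    # lattice spacing, 50 fm [cm]
xi = 10e-13                   # vortex core radius, 10 fm [cm]
E_p = 1.602e-6                # pinning energy, 1 MeV [erg]

def tension(l_p):
    """Vortex tension T_v with bending wavenumber k_v = pi / (2 l_p)."""
    k_v = math.pi / (2.0 * l_p)
    return (rho * kappa**2 / (4.0 * math.pi)) * (0.116 - math.log(k_v * xi))

# Solve l_p/a = sqrt(3 a T_v(l_p) / (2 pi E_p)) by fixed-point iteration
l_p = a
for _ in range(50):
    l_p = a * math.sqrt(3.0 * a * tension(l_p) / (2.0 * math.pi * E_p))

# Critical velocity: v_c = E_p / (rho kappa a xi) * (a / l_p)
v_c = E_p / (rho * kappa * a * xi) * (a / l_p)

print(l_p / a)   # ~9, as quoted in the text
print(v_c)       # ~3.6e5 cm/s, consistent with the quoted ~4e5 cm/s
```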
The equations of motion in the laboratory frame are \citep{bc83} \begin{equation} \nabla\cdot\vbf = 0 \end{equation} \begin{equation} \frac{\partial\vbf}{\partial t}+\vbf\cdot\nabla\vbf = -\nabla\mu-\nabla\phi -\sigmabf_{el}/\rho+\fbf/\rho \label{sfaccel} \end{equation} \begin{equation} \rho\,\omegabf\times\left(\vbf-\frac{\partial\rbf_v}{\partial t}\right)=-\sigmabf_{el}(\rbf_v)+\fbf. \label{lineforce} \end{equation} where $\omegabf\equiv\nabla\times\vbf$ is the vorticity due to the existence of vortices in the fluid, $\mu$ is the chemical potential, $\phi$ is the gravitational potential, $\sigmabf_{el}/\rho$ is the elastic force per unit volume that arises from bending of the vortex lattice, and $\fbf/\rho$ is the force per unit volume exerted on the fluid by the normal matter. The elastic force is \begin{equation} \sigmabf_{el}/\rho=-c_T^2\left[2\nabla_\perp(\nabla\cdot\rbf_v)- \nabla^2_\perp\rbf_v\right]+c_V^2\frac{\partial^2 \rbf_v}{\partial z^2} \end{equation} where $\nabla_\perp$ denotes a derivative with components in the $x-y$ plane only. Here $c_T=(\hbar\Omega/4m)^{1/2}$ is the Tkachenko wave speed \citep{tk2a,tk2}, and $\Omega$ is the spin rate of the superfluid. The quantity $c^2_V=(\hbar\Omega/2m)\ln(\Omega_c/\Omega)$ is related to wave propagation along the rotation axis; $\Omega_c=h/(\sqrt{3}m\xi^2)$ for a triangular vortex lattice. For a typical neutron star rotation rate of $\Omega=100$ rad s$^{-1}$, $c_T=0.09$ cm s$^{-1}$ and $c_V=9\,c_T$. The areal density of vortices in the $x-y$ plane is $l_v^{-2}=2m\Omega/h$ for a uniform vortex lattice; hence, the requirement that $kl_v<<1$ is equivalent to $kc_T<<\Omega$. Eq. (\ref{lineforce}) is an expression of balance of the Magnus force, the elastic force of the deformed vortex lattice, and the force exerted on the fluid by the normal matter.
If the vortex array is perfectly pinned to the normal matter of the inner crust moving at velocity $\vbf_n$, so that $\partial\rbf_v/\partial t=\vbf_n$, the force is \begin{equation} \fbf=\rho\,\omegabf\times(\vbf-\vbf_n)+\sigmabf_{el}(\rbf_v). \end{equation} For imperfect pinning, the Magnus force and elastic force drive vortex motion with respect to the normal matter. In this case, the force above can be generalized as \begin{equation} \fbf= \beta^\prime\rho\,\omegabf\times(\vbf-\vbf_n) +\beta\rho\,\hat{\omega}\times(\omegabf\times\{\vbf-\vbf_n\}) +(1-\gamma)\,\sigmabf_{el}(\rbf_v). \label{fl} \end{equation} The first two terms of this force are present in the mutual friction force introduced by \citet{hv56}. We emphasize the generality of the force law of eq. (\ref{fl}). The first two terms represent the force exerted on the fluid by vortices that are moving with respect to the normal matter; the first term corresponds to the force transverse to the vortex motion, while the second term corresponds to the force parallel to the vortex motion. The coefficients $\alpha$ and $\beta$ can be calculated using a specific theory of vortex mobility. The third term accounts for the contribution to the force that arises from local vortex bending. If the vortex lattice is locally undeformed ($\sigmabf_{el}=0$), the vortex velocity from eqs. (\ref{fl}) and (\ref{lineforce}) is \begin{equation} \frac{\partial\rbf_v}{\partial t}=\vbf_n+\alpha\,(\vbf-\vbf_n)- \beta\,\hat{\omega}\times(\vbf-\vbf_n), \label{vv} \end{equation} where $\alpha\equiv 1-\beta^\prime$. Imperfect pinning, that is, ``vortex creep'', corresponds to $\alpha<<1$ and $\beta<<1$. We refer to $\alpha$, $\beta$, and $\gamma$ as the ``pinning coefficients''. Perfect pinning corresponds to the limit $\alpha=\beta=\gamma=0$, while no pinning ($\fbf=0$) corresponds to $\alpha=\gamma=1$ and $\beta=0$. Vortices move with a component along $\vbf-\vbf_n$, so that $0<\alpha\le 1$.
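As a sanity check, the creep velocity of eq. (\ref{vv}) can be verified to satisfy the force balance of eq. (\ref{lineforce}) with the force law of eq. (\ref{fl}) when the lattice is undeformed ($\sigmabf_{el}=0$). The sketch below is ours; the coefficient values are arbitrary illustrative choices.

```python
import numpy as np

alpha, beta = 0.3, 0.1                    # arbitrary illustrative creep coefficients
rho = 1.0
omega = np.array([0.0, 0.0, 2.0])         # vorticity along the rotation axis
what = omega / np.linalg.norm(omega)      # unit vector omega-hat
rng = np.random.default_rng(1)
v = np.append(rng.normal(size=2), 0.0)    # in-plane superfluid velocity
v_n = np.append(rng.normal(size=2), 0.0)  # in-plane normal-matter velocity

# Vortex velocity from eq. (vv):
rdot = v_n + alpha * (v - v_n) - beta * np.cross(what, v - v_n)

# Force law, eq. (fl), with beta' = 1 - alpha and sigma_el = 0:
f = (1 - alpha) * rho * np.cross(omega, v - v_n) \
    + beta * rho * np.cross(what, np.cross(omega, v - v_n))

# Magnus-force balance, eq. (lineforce), with sigma_el = 0:
lhs = rho * np.cross(omega, v - rdot)
print(np.allclose(lhs, f))   # True
```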
The energy dissipation rate per unit volume is determined by $\beta$, which must be positive to give local entropy production. Vortex creep could be a low-drag process, with $\beta<<\alpha$, or a high-drag process, with $\beta>>\alpha$. In much previous work on pinning, the high-drag limit has been implicitly assumed through the following relationship between $\beta$ and $\beta^\prime$: \begin{equation} \beta^\prime= 1-\alpha=\frac{{\cal R}^2}{1+{\cal R}^2}={\cal R}\,\beta, \label{drag} \end{equation} where ${\cal R}$ is a dimensionless drag coefficient. In this drag description, imperfect pinning ($\alpha<<1$, $\beta<<1$) corresponds to ${\cal R}>>1$ so that eq. (\ref{drag}) requires $\beta>>\alpha$. Eq. (\ref{drag}) {\em is not true in general}. The presence of non-dissipative forces between vortices and the solid to which they are pinned can give $\beta<<\alpha$ for $\alpha$ and $\beta$ both small, a regime of low drag that does not follow from eq. (\ref{drag}) for any value of ${\cal R}$ \citep{link09}. As discussed below, it is the low-drag regime that is likely to be realized, with vortex creep being unstable in this regime. A crucial feature of our analysis is that we do not assume eq. (\ref{drag}). To examine the stability of superfluid flow with imperfect pinning, we use a local plane wave analysis in the frame rotating with the normal matter at angular velocity $\Omegabf_n$, in which $\vbf_n=0$ and the unperturbed flow velocity arising from spin down of the crust is $\vbf_0$. Restricting the analysis to the regime $k\Delta R>> 1$, where $\Delta R$ is the thickness of the inner crust, the background flow can be taken to be uniform and the local analysis is valid. We take the unperturbed vortex lattice to be locally undeformed ($\sigmabf_{el}=0$). The unperturbed creep velocity in the rotating frame ($\vbf_n=0$) follows from eq. (\ref{vv}): \begin{equation} \frac{\partial\rbf_{v0}}{\partial t} =\alpha\,\vbf_0-\beta\,\hat{\omega}\times\vbf_0. 
\label{vv0} \end{equation} Below we estimate $\partial r_{v0}/\partial t\sim 10^{-5}\,v_0$ for a typical neutron star. Linearizing eqs. (\ref{sfaccel}) and (\ref{lineforce}) about $\vbf_0$, $\partial\rbf_{v0}/\partial t$, and $\sigmabf_{el}=0$, and neglecting $\partial\rbf_{v0}/\partial t$ compared to $\vbf_0$, gives \begin{equation} \nabla\cdot\delta\vbf=0 \label{div} \end{equation} \begin{equation} \frac{\partial\delta\vbf}{\partial t}+\vbf_0\cdot\nabla\delta\vbf+ 2\Omegabf_n\times\delta\vbf= -\nabla\delta\mu^\prime-\nabla\delta\phi -\sigmabf_{el}/\rho +\delta\fbf/\rho \end{equation} \begin{equation} \rho\, 2\Omegabf_n\times\left(\delta\vbf -\frac{\partial\delta\rbf_v}{\partial t}\right) +\rho\,\delta\omegabf\times\vbf_0 = -\sigmabf_{el}+\delta\fbf \label{vortexaccel1} \end{equation} where $\delta$ denotes a perturbed quantity, and $\mu^\prime\equiv\mu-\rho(\Omegabf_n\times\rbf)^2/2$. We assume that $\alpha$ and $\beta$ are constants. The perturbed force is then \begin{equation} \delta\fbf= (1-\alpha)\rho\,\delta\left\{\omegabf\times\vbf\right\}\nonumber +\beta\rho\,\delta\left\{\hat{\omega}\times(\omegabf\times\vbf)\right\} +(1-\gamma)\,\sigmabf_{el}(\rbf_v). \label{df} \end{equation} The vorticity appearing in this equation is the total vorticity evaluated in the laboratory frame. For $\nabla\times\vbf_0<<2\Omegabf_n$, a good approximation for most neutron stars, the vorticity is \begin{equation} \omegabf=2{\mathbf\Omega_n}+\nabla\times\delta\vbf. \end{equation} The final term in eq. (\ref{df}), associated with stress in the vortex lattice, will turn out to be negligible for vortex creep driven by a flow $v_0>>c_T$ and $v_0>>c_V$. We take the rotation axis to be $\hat{z}$, with the unperturbed flow in the azimuthal direction, and along $\hat{x}$ at some point. For simplicity, we restrict $\kbf$ to lie in the $x-z$ plane, with an angle $\theta$ with respect to the rotation axis. We further restrict the analysis to the quadrant $0\le\theta\le\pi/2$. 
For shear perturbations, $\kbf\cdot\delta\vbf=0$, that is, the velocity perturbations in the directions $\hat{y}$ and $\hat{e}\equiv-\cos\theta\,\hat{x}+\sin\theta\, \hat{z}$ are orthogonal to $\kbf$. We now Fourier transform $(\propto {\rm e}^{i{\mathbf k}\cdot{\mathbf r}-i\sigma t})$ eqs. (\ref{div})-(\ref{df}) and take the projections onto $\hat{y}$ and $\hat{e}$. Defining $\sigma^\prime\equiv\sigma-kv_0\sin\theta$, $c\equiv\cos\theta$, and $s\equiv\sin\theta$, we obtain the system of equations: \begin{equation} \left[ \begin {array}{cccc} -i\sigma^\prime+2\Omega_n\beta-i(1-\alpha)kv_0s & -2\Omega_n\alpha c & \gamma(c_T^2s^2+c_V^2c^2)k^2 & 0 \\ -i\beta kv_0 s c +2\Omega_n\alpha c & -i\sigma^\prime +2\Omega_n\beta c^2 -i(1-\alpha)kv_0 s & 0 & -\gamma(c_T^2s^2-c_V^2c^2)k^2 \\ -i(\sigma^\prime+kv_0 s) & 0 & 0 & i2\Omega_n\sigma^\prime/c \\ 0 & -i(\sigma^\prime + kv_0 s) & -i2\Omega_n c\sigma^\prime & 0 \\ \end {array} \right] \left[ \begin{array}{c} \hat{y}\cdot\delta\vbf \\ \hat{e}\cdot\delta\vbf \\ \hat{y}\cdot\delta\rbf_v \\ \hat{e}\cdot\delta\rbf_v \\ \end{array} \right]=0 \label{matrix} \end{equation} The resulting dispersion relation is quartic: \begin{equation} (\sigma^\prime)^4+a_3(\sigma^\prime)^3+a_2(\sigma^\prime)^2 +a_1\sigma^\prime+a_0=0, \label{dr} \end{equation} where, in units with $\Omega_n=1$, \begin{equation} a_3=2(1-\alpha)s\{kv_0\}+2i(1+c^2)\beta \end{equation} \begin{eqnarray} a_2=(\alpha-1)^2 s^2 \{kv_0\}^2 &+& \left(2i\beta s(1+c^2)-2i\alpha\beta s +\frac{1}{2}i\beta\gamma s^3 \{kc_T\}^2 +\frac{1}{2}\beta\gamma sc^2 \{kc_V\}^2\right)\{kv_0\} \nonumber \\ &-& 4(\alpha^2+\beta^2)c^2 -\alpha\gamma s^4 \{kc_T\}^2 -\alpha\gamma c^2(1+c^2)\{kc_V\}^2 +\frac{1}{4}\gamma^2 s^4 \{kc_T\}^4 -\frac{1}{4}\gamma^2 c^4 \{kc_V\}^4 \end{eqnarray} \begin{equation} a_1= \frac{1}{2}i\beta\gamma(\{kc_T\}^2 s^4 +\{kc_V\}^2 c^2s^2)\{kv_0\}^2 -\left(\alpha\gamma[\{kc_T\}^2 s^5 +\{kc_V\}^2 s c^2(1+c^2)] -\frac{1}{2}\gamma^2 [\{kc_T\}^4 s^5 -\{kc_V\}^4 sc^4]\right)\{kv_0\}
\end{equation} \begin{equation} a_0=\frac{1}{4}\gamma^2(\{kc_T\}^4 s^6 - \{kc_V\}^4 s^2c^4)\{kv_0\}^2, \end{equation} where $\{kv_0\}\equiv kv_0/\Omega_n$, $\{kc_T\}\equiv kc_T/\Omega_n$, and $\{kc_V\}\equiv kc_V/\Omega_n$. \section{Wave solutions without flow} Before turning to the full problem with non-zero $v_0$, we consider the limit of $v_0=0$ for the two cases of zero pinning and imperfect pinning. The dispersion relation is quadratic: \begin{equation} \sigma^2 +2i\beta(1+c^2)\sigma -4c^2(\alpha^2+\beta^2) +\gamma c_T^2 k^2 s^4\left(-\alpha+\frac{1}{4}\gamma c_T^2 k^2\right) -\gamma c_V^2 k^2 c^2\left(\alpha (1+c^2) +\frac{1}{4}\gamma c_V^2 k^2c^2\right) =0. \label{dr0} \end{equation} For zero pinning force ($\alpha=\gamma=1$, $\beta=0$), the dispersion relation to order $c_T^2k^2$ and $c_V^2k^2$ becomes \begin{equation} \sigma^2= (2\Omega_n\cos\theta)^2+c_V^2(k^2\cos^2\theta)(1+\cos^2\theta) +c_T^2k^2\sin^4\theta, \label{bc} \end{equation} as found by \citet{bc83}. The fluid supports Tkachenko modes for $\theta=\pi/2$, and axial modes (modified inertial modes) for $\theta=0$. In the limit $c_T=c_V=0$, the system supports only ordinary inertial modes. The role of pinning can be seen by considering axial modes for the case $\gamma c_V^2k^2<<\alpha\Omega_n^2$. The solutions to eq. (\ref{dr0}) in this limit are \begin{equation} \sigma_\pm =-2i\beta\,\Omega_n\pm\left(2\alpha\,\Omega_n+\frac{1}{2} \frac{\gamma c_V^2 k^2}{\Omega_n}\right), \label{axial} \end{equation} which shows the damping effect of $\beta$. Pinning strongly suppresses the axial mode given by eq. (\ref{bc}), eliminating it entirely for perfect pinning. The waves are underdamped for $\beta<\alpha$, which defines the regime of low drag that we will study further. \section{Instability} We now show that a non-zero background flow $\vbf_0$ drives a hydrodynamic instability if the vortices are imperfectly pinned ($\alpha<<1$, $\beta<<1$, $\gamma<<1$).
We are interested in flow velocities of $v_0\sim 10^5$ \hbox{\rm\hskip.35em cm s}$^{-1}$. By comparison, \begin{equation} c_V\sim 10\, c_T \sim 10^{-5} v_0 \end{equation} We will find that there is an instability for wavenumbers $k\hbox{${_{\displaystyle>}\atop^{\displaystyle\sim}}$} \Omega_n/v_0$. The hydrodynamic limit imposes the restriction $kc_T/\Omega_n<<1$. The regime of interest is thus, \begin{equation} \Omega_n/v_0<k<<\Omega_n/c_T. \end{equation} We will estimate below that $\alpha\sim 10\,\beta\sim 10^{-10}$. We assume that $\gamma$ is similarly small. We will not present here an analysis of the full mode structure of the system, but focus on two low-frequency modes that appear for imperfect pinning. We simplify the problem by proceeding to linear order in the small quantities $kc_T/\Omega_n$ and $kc_V/\Omega_n$. At this level of approximation: \begin{equation} a_3=2(1-\alpha)s\{kv_0\}+2i(1+c^2)\beta \end{equation} \begin{equation} a_2=(\alpha-1)^2 s^2 \{kv_0\}^2 + 2i\beta s(1+c^2-\alpha) \{kv_0\}-4c^2(\alpha^2+\beta^2) \end{equation} \begin{equation} a_1=a_0=0, \end{equation} that is, the vortex lattice exerts no stresses on the fluid to first order in $c_T/v_0$ and $c_V/v_0$. The dispersion relation eq. (\ref{dr}) now simplifies to: \begin{equation} (\sigma^\prime)^2\left\{(\sigma^\prime)^2 +2(\{1-\alpha\} kv_0s+ i\beta\{1+c^2\})\,\sigma^\prime +(1-\alpha)^2 k^2v_0^2s^2 -4c^2(\alpha^2+\beta^2) +2i\beta kv_0s(1+c^2-\alpha)\right\}=0. \label{dr1} \end{equation} The two degenerate solutions $(\sigma^\prime)^2=0$ correspond to $\sigma=kv_0\sin\theta=\kbf\cdot\vbf_0$, the frequency associated with translation of the wave pattern at velocity $\vbf_0$. 
The other two solutions are, switching from $\sigma^\prime$ to $\sigma$ and restoring $\Omega_n$, \begin{equation} \sigma_\pm =\alpha kv_0\sin\theta-i\Omega_n(1+\cos^2\theta)\beta \pm \left( 4\Omega_n^2\alpha^2 \cos^2\theta -\Omega_n^2\beta^2\sin^4\theta -2i\alpha\beta \Omega_nkv_0\cos^2\theta\sin\theta\right)^{1/2}. \label{solutions} \end{equation} For $\beta<<\alpha$ and low wavenumber $kv_0<<\Omega_n$, there are two damped modes \begin{equation} \sigma_\pm\simeq \alpha(kv_0\sin\theta\pm 2\Omega_n\cos\theta) -i\beta\left(\Omega_n\{1+\cos^2\theta\}\pm\frac{1}{2}kv_0\cos\theta\,\sin\theta \right). \end{equation} Slow vortex motion has introduced two low-frequency modes to the system. Removing pinning and drag ($\alpha=1$, $\beta=0$) and taking $k=0$, we recover the ordinary inertial modes $\sigma_\pm =\pm 2\Omega_n\cos\theta$. Above a critical wavenumber $k_c$, the solution with eigenvalue $\sigma_-$ is unstable: \begin{equation} k>k_c\equiv2\frac{\Omega_n}{v_0}\frac{(\beta^2+\alpha^2)^{1/2}} {\alpha} \frac{1+\cos^2\theta}{\sin\theta\cos\theta}. \label{kc} \end{equation} Numerical solution of the full dispersion relation, eq. (\ref{matrix}), for reasonable values of $c_T$, $c_V$, and $v_0$, confirms that there are no other instabilities. The critical wavenumber $k_c$ is minimized for $\theta=\tan^{-1}(\sqrt{2})$. For $k>>k_c$, we have the approximate solutions \begin{equation} \sigma_\pm\simeq \alpha kv_0\sin\theta\mp i(\alpha\beta\,\Omega_nkv_0\cos^2\theta\sin\theta)^{1/2}. \label{highk} \end{equation} The instability arises from coupling between velocity and vorticity through the first two terms of eq. (\ref{df}). Dissipation damps perturbations for $k<k_c$, but for $k>k_c$ the finite vortex mobility gives rise to growing perturbations under the Magnus force. For $k>>k_c$, the growth rate scales as $(\alpha\beta v_0)^{1/2}$.
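The threshold of eq. (\ref{kc}) can be checked directly against the quadratic factor of the reduced dispersion relation, eq. (\ref{dr1}). The sketch below is ours; it uses deliberately moderate values of $\alpha$ and $\beta$ (rather than the tiny neutron-star values estimated later in the paper) so that the roots are numerically well conditioned, and works in units with $\Omega_n=1$.

```python
import numpy as np

alpha, beta = 1e-3, 1e-4         # illustrative values with beta < alpha (low drag)
v0 = 1e3                         # flow speed in units with Omega_n = 1
theta = np.arctan(np.sqrt(2.0))  # the angle that minimizes k_c
s, c = np.sin(theta), np.cos(theta)

def growth_rate(k):
    """Largest Im(sigma) from the quadratic factor of eq. (dr1).

    Im(sigma) = Im(sigma'), since sigma = sigma' + k v0 sin(theta) is a real shift.
    """
    b = 2.0 * ((1 - alpha) * k * v0 * s + 1j * beta * (1 + c**2))
    c0 = ((1 - alpha)**2 * (k * v0 * s)**2 - 4 * c**2 * (alpha**2 + beta**2)
          + 2j * beta * k * v0 * s * (1 + c**2 - alpha))
    return max(np.roots([1.0, b, c0]).imag)

# Critical wavenumber of eq. (kc); perturbations decay below it, grow above it.
k_c = 2.0 / v0 * np.sqrt(alpha**2 + beta**2) / alpha * (1 + c**2) / (s * c)
print(growth_rate(0.5 * k_c) < 0, growth_rate(2.0 * k_c) > 0)   # True True
```

With the physical values used later in the text ($\Omega_n=100$ rad s$^{-1}$, $v_0=10^5$ \hbox{\rm\hskip.35em cm s}$^{-1}$, $\beta<<\alpha$), the same threshold corresponds to a minimum unstable wavelength $2\pi/k_c$ of roughly 10 m.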
For $\beta<<\alpha$, $k_c$ takes a constant value, but the growth rate of the mode becomes small, going to zero as $\beta$ goes to zero. In the highly-damped regime, $\beta>>\alpha$, damping restricts the unstable mode to large $k$, generally stabilizing the system. There are no unstable modes for either $\alpha=0$ or $\beta=0$; the instability occurs only if the vortices move with respect to the crust, both along the flow and transverse to the flow. We now show that our neglect of shear deformations of the crust is a good approximation. The modes we are studying are in the regime $kv_0>\Omega_n$. In this limit, the dominant contribution to the shear force per unit volume in the fluid is (see eq. \ref{df}) \begin{equation} \delta f/\rho \sim kv_0\delta v. \end{equation} Because the vortices are nearly perfectly pinned, this shear force creates a strain field in the solid with a shear force per unit volume of \begin{equation} \delta f_s/\rho\sim c_s^2 k^2 \delta u, \end{equation} where $c_s$ is the shear speed of the solid and $\delta u$ is the characteristic displacement. The speed of a mass element in the solid is $\delta v_n\sim \mbox{Re($\sigma_\pm$)}\delta u$. Equating $\delta f$ and $\delta f_s$, and using eq. (\ref{highk}) for the limit of large $k$, gives \begin{equation} \frac{\delta v_n}{\delta v}\sim \alpha\left(\frac{v_0}{c_s}\right)^2. \end{equation} The values $\alpha=10^{-10}$, $v_0=10^5$ \hbox{\rm\hskip.35em cm s}$^{-1}$, and $c_s=10^8$ \hbox{\rm\hskip.35em cm s}$^{-1}$, give $\delta v_n/\delta v\sim 10^{-16}$. The displacement of the solid is very small for two reasons: i) the solid is very rigid compared to the vortex lattice, and, ii) the vortex creep modes are of very low frequency, proportional to $\alpha<<1$. We have restricted the analysis to shear waves ($\nabla\cdot\delta\vbf=0$). Because these waves do not perturb the density, we do not expect the finite compressibility of the matter to change our results.
Compressibility will introduce new modes \citep{haskell11}, an effect that merits further study in the context of imperfect pinning. \section{Estimates} To obtain the growth rate of the instability, we now estimate the pinning parameters $\alpha$ and $\beta$ for the vortex creep process. To make these estimates, we regard the process of vortex creep as consisting of two distinct states of motion for a given vortex segment. Most of the time, the vortex segment is pinned. A small fraction of the time, the vortex segment is translating against a drag force to a new pinning configuration. The mutual friction force we are using (eq. \ref{fl}) is, ignoring the small force from the vortex lattice, \begin{equation} \fbf/\rho= \omegabf\times\vbf -\alpha\,\omegabf\times\vbf +\beta\, \hat{\omega}\times(\omegabf\times\vbf). \label{fagain} \end{equation} This force represents the average force exerted on the neutron fluid by the vortex array. The first term is the Magnus force for perfect pinning, while the remaining terms give the contribution to the force due to vortex motion. For those vortex segments that are unpinned and moving against drag, we take the force to have the same form, but with different coefficients: \begin{equation} \fbf_0/\rho= \omegabf\times\vbf -\alpha_0\,\omegabf\times\vbf +\beta_0\, \hat{\omega}\times(\omegabf\times\vbf). \label{fmf0} \end{equation} An unpinned vortex segment remains unpinned for a time $t_0\sim d/v_0$, where $d$ is the distance the segment moves before repinning. This distance is comparable to the distance between pinning sites \citep{leb93}, roughly ten times the unit cell size, giving $t_0\sim 10^{-15}$ s, much shorter than the hydrodynamic timescales of interest. Suppose that at any instant, the fraction of vortex length that is unpinned is $f_v<<1$. 
We now average $\fbf_0$ over a volume that contains many vortices, and over a time long compared to $t_0$ but short compared to hydrodynamic timescales, to obtain \begin{equation} \langle{\fbf_0}/\rho\rangle= \omegabf\times\vbf -f_v\alpha_0\,\omegabf\times\vbf +f_v\beta_0\, \hat{\omega}\times(\omegabf\times\vbf). \label{fave} \end{equation} Quantities related to the flow are unchanged by the averaging procedure since the superfluid flow velocity is independent of whether vortices are pinned or not. The factors of $f_v$ in eq. (\ref{fave}) account for the fact that only the motion of the translating vortex segments contributes to the mutual friction (see, also, \citealt{jahanmiri06}). The value of $f_v$ is unimportant for the following estimates. The force of eq. (\ref{fagain}), which is appropriate for vortex creep, must equal the average force $\langle{\fbf_0}/\rho\rangle$, giving the following relationships: \begin{equation} \alpha=f_v\,\alpha_0 \quad \mbox{and} \quad \beta=f_v\,\beta_0 \quad \Rightarrow \quad \frac{\beta}{\alpha}=\frac{\beta_0}{\alpha_0} \label{alphabeta} \end{equation} We now use estimates of $\beta_0/\alpha_0$ to obtain the ratio $\beta/\alpha$. The dominant drag process on unpinned vortex segments considered so far arises from the excitation of Kelvin modes as the vortex moves past nuclei. Calculations of dissipation by Kelvin phonon production on a long vortex with periodic boundary conditions for $v_0\sim 10^7$ \hbox{\rm\hskip.35em cm s}$^{-1}$\ give typical values of $\beta_0/\alpha_0= 0.1$ and $\alpha_0\sim 1$ \citep{eb92}. Pinning occurs for $v_0\hbox{${_{\displaystyle<}\atop^{\displaystyle\sim}}$} 10^5$ \hbox{\rm\hskip.35em cm s}$^{-1}$, and $\beta_0/\alpha_0$ is likely to be significantly smaller in this velocity regime due to strong suppression of Kelvin phonon production \citep{jones92}. Vortex creep is therefore a low-drag process if Kelvin phonon production is the dominant dissipative mechanism. 
We fix $\beta/\alpha=0.1$ for illustration in the following, which we consider to be an upper limit; we expect typical values to be smaller. We now estimate $\beta$. We adopt polar coordinates $(r,\phi,z)$, with the unperturbed vorticity along $\hat{z}$ and the unperturbed flow $\vbf_0$ along $\hat{\phi}$, and take the unperturbed flow and vortex velocity field to be axisymmetric. In the rotating frame, the unperturbed vortex velocity from eq. (\ref{vv0}) is \begin{equation} \frac{\partial\rbf_{v0}}{\partial t}= \alpha\,v_0\, \hat{\phi}+\beta\, v_0\,\hat{r} =\hat{n}\, \frac{\partial r_{v0}}{\partial t} \label{vv0again} \end{equation} where $\hat{n}$ is the average direction of vortex motion. For steady spin down of the star, the inner crust superfluid and the crust are spinning down at the same rate for a local differential velocity $v_0$. The creep velocity in this steady state is related to the spin-down rate by \citep{alpar_etal84,leb93} \begin{equation} \dot{\Omega}=-2\frac{\Omega}{r}\, \frac{\partial\rbf_{v0}}{\partial t}\cdot\hat{r} =-2\frac{\Omega}{r}\, v_0\,\beta =\dot{\Omega}_0, \end{equation} where $\Omega$ is the spin rate of the superfluid, $\dot{\Omega}_0$ is the observed spin down rate of the crust, and $r$ is approximately the stellar radius $R$. We arrive at the estimate \begin{equation} \beta=\frac{R}{4v_0t_{\rm age}} \simeq 10^{-11} \left(\frac{v_0}{10^5\mbox{ \hbox{\rm\hskip.35em cm s}$^{-1}$}}\right)^{-1} \left(\frac{t_{\rm age}}{10^4\mbox{ yr}}\right)^{-1}. \label{ss} \end{equation} where $\Omega\simeq\Omega_0$ is assumed, and $t_{\rm age}\equiv \Omega_0/2\vert\dot{\Omega}_0\vert$ is the spin-down age. Eq. (\ref{ss}), with $\beta=0.1\alpha$, gives the fiducial value $\alpha\beta=10^{-21}$. For this value, we deduce $f_v\sim (\alpha\beta/\alpha_0\beta_0)^{1/2}\sim 10^{-10}$, that is, most of the vortex length is pinned at any instant. The unperturbed vortex creep speed, from eq.
(\ref{vv0again}), is $\sim\alpha\, v_0\sim 10^{-5}$ \hbox{\rm\hskip.35em cm s}$^{-1}$\ $<<v_0$, justifying the neglect of $\partial\rbf_{v0}/\partial t$ compared to $\vbf_0$ in the stability analysis. We can now proceed with estimates of the instability length scale and growth rate. For $\beta<\alpha$ and $\theta=\tan^{-1}(\sqrt{2})$ in eq. (\ref{kc}), the critical wavenumber is \begin{equation} k_c\simeq 6\,\frac{\Omega}{v_0}=6\times 10^{-3} \left(\frac{\Omega}{100\mbox{ rad s$^{-1}$}}\right) \left(\frac{v_0}{10^5\mbox{ \hbox{\rm\hskip.35em cm s}$^{-1}$}}\right)^{-1} \mbox{ cm$^{-1}$}, \end{equation} corresponding to a wavelength $\lambda=2\pi/k\simeq 10$ m. For $k>>k_c$, the growth rate from eq. (\ref{highk}) is \begin{equation} \frac{1}{2\pi}\,{\rm Im}(\sigma_-)\simeq 0.6\, \left(\frac{\alpha\beta}{10^{-21}}\right)^{1/2} \left(\frac{\Omega}{100\mbox{ rad s$^{-1}$}}\right)^{1/2} \left(\frac{v_0}{10^5\mbox{ \hbox{\rm\hskip.35em cm s}$^{-1}$}}\right)^{1/2} \left(\frac{\lambda}{1\mbox{ cm}}\right)^{-1/2} \mbox{ yr$^{-1}$}. \end{equation} The hydrodynamic treatment is restricted to $kc_T<<\Omega$. To estimate how high the growth rate could be, we consider a maximum wavenumber defined by $c_Tk_{\rm max}=0.1\,\Omega$, where $c_T\simeq 10^{-1}\, (\Omega/100\mbox{ \hbox{\rm\hskip.35em rad s}$^{-1}$})^{1/2}$ \hbox{\rm\hskip.35em cm s}$^{-1}$. The growth rate at this wavenumber, from eq. (\ref{highk}), is \begin{equation} \frac{1}{2\pi}{\rm Im}[\sigma_-(k_{\rm max})]\simeq 3\, \left(\frac{\alpha\beta}{10^{-21}}\right)^{1/2} \left(\frac{\Omega}{100\mbox{ rad s$^{-1}$}}\right)^{3/4} \left(\frac{v_0}{10^5\mbox{ \hbox{\rm\hskip.35em cm s}$^{-1}$}}\right)^{1/2} \mbox{ yr$^{-1}$}, \label{highsigma} \end{equation} For $\Omega=100$ rad s$^{-1}$, the corresponding wavenumber is $k_{\rm max}\simeq 100$ cm$^{-1}$. Eq. 
(\ref{highsigma}) does not represent a physical limit, but only the restrictions of the hydrodynamic treatment; the instability could continue to exist also for wavenumbers in the regime $kc_T>\Omega$. If vortex creep is in the strongly-damped regime $\beta>>\alpha$, contrary to the estimates here, there is still a broad window for instability. Requiring $k_c<k_{\rm max}$ gives \begin{equation} \beta<2\times 10^4\, \left(\frac{v_0}{10^5\mbox{ \hbox{\rm\hskip.35em cm s}$^{-1}$}}\right) \left(\frac{\Omega}{100\mbox{ rad s$^{-1}$}}\right)^{-1/2}\, \alpha, \end{equation} and the star will be unstable at some wavenumber that is consistent with the hydrodynamic regime $kc_T<<\Omega$. \section{Discussion and conclusions} We have identified a dissipation-driven instability that could operate in the fluid of the neutron star crust over length scales shorter than $\sim 10$ m and over timescales as fast as months. This instability is different than other superfluid instabilities already considered in {\em two-component} systems. In the case of liquid helium, the Glaberson-Donnelly instability arises when the fluid normal component has a component of flow along the rotation axis of the system \citep{Glaberson_etal74}. That instability occurs even if the mutual friction is zero. A variant of the Glaberson-Donnelly instability has been studied in the mixture of superfluid neutrons and superconducting protons of the neutron star core, and instability was found to occur under the assumption that the vortices are perfectly pinned to the flux tubes that penetrate the superconducting proton fluid \citep{gaj08a,vl08}. \citet{ga09} have identified a two-stream instability that might occur in the neutron-proton mixture of the core, again assuming perfect pinning of vortices to flux tubes and neglecting magnetic stresses. By contrast, the instability described here exists in a {\em single-component} fluid, and occurs because the vortices can move with respect to the solid. 
The component of the motion that is transverse to the flow, the component related to $\beta$ in eq. (\ref{fl}), though dissipative, is essential for the instability to occur. To illustrate the basic instability, we have taken the pinning coefficients $\alpha$ and $\beta$ to be constants. For thermally-activated vortex creep, these coefficients will have exponential dependence on the velocity difference between the superfluid and the crust \citep{alpar_etal84,leb93}. We expect that this strong velocity dependence will significantly enhance the growth rate of the instability. The instability of the system will be determined by four coefficients: $\alpha(v_0)$, $\beta(v_0)$, and the derivatives $d\alpha/dv$ and $d\beta/dv$ evaluated at $v_0$. Further work is needed to calculate these coefficients and to incorporate them in the stability analysis. We restricted the analysis to wavevectors that are co-planar with the rotation axis and the unperturbed flow; more general perturbations should be studied, including waves with compressive components. The flow of the inner crust superfluid could become turbulent. If the system evolves into a state of fully-developed superfluid turbulence with a vortex tangle, the friction force in the fluid would be better described by an isotropic \citep{gm49} or polarized \citep{andersson_etal07} form, rather than the anisotropic form of eq. (\ref{fl}) that is appropriate to a regular vortex array in the initial stages of the instability. The friction force for the tangle could be larger or smaller than the force given by eq. (\ref{fl}). On the one hand, the tangle has more vortex length per unit volume to interact with the solid, which tends to increase the force for a given value of the velocity. On the other hand, if the vortex distribution becomes highly tangled the momentum transfer from different regions will cancel to at least some extent, decreasing the friction force compared to that of a straight vortex array. 
If the force increases, so does the effective value of $\beta$, and the average value of the equilibrium differential velocity will decrease (see eq. \ref{ss}). This effect could spell trouble for inner-crust models of glitches. By contrast, if the friction is decreased by turbulence, there will be more excess angular momentum in the superfluid available to drive glitches. The instability results from the forcing of the vortex lattice through the solid lattice. If fully-developed turbulence results, the closest experimental analogue might be grid turbulence that has been well studied in superfluid helium \citep{smith_etal93}. Further work is needed to determine the observational consequences of this instability, or if it is damped by some mechanism that has been overlooked here. We have shown elsewhere that a similar instability occurs in the mixture of superfluid neutrons and protons of the outer core \citep{link12b}. \section*{acknowledgments} We thank I. Wasserman for valuable discussions. \label{lastpage}
Bears acquire Pro Bowl receiver Marshall from Dolphins

(Reuters) – The Chicago Bears have acquired Pro Bowl wide receiver Brandon Marshall from the Miami Dolphins for a pair of undisclosed draft picks, the National Football League (NFL) teams said on Tuesday. One of the NFL's dominant wideouts, Marshall has caught over 100 passes three times, surpassed 1,000 yards receiving in each of the last five seasons, and holds the NFL record for most receptions in a single game (21). Last season Marshall started all 16 games for the Dolphins, hauling in 81 passes for 1,214 yards and six touchdowns.

Posted March 14, 2012 at 12:50 am
Mitt livs gemål ("My Life's Consort") is a song written by Carl-Axel Dominique and Monica Dominique and recorded by Charlotte Perrelli & Magnus Carlsson, who released it as a single in 2010 in connection with the wedding of Crown Princess Victoria and Daniel Westling. The song came about from an idea by the producer and artist Stephan Lundin. Originally it was only a melody, intended to be presented as a private gift to the Swedish Crown Princess couple in connection with their wedding on 19 June 2010. The sleeve was designed by Magnus Carlsson, and a specially designed CD box, also made by Magnus Carlsson, was presented personally to Victoria and Daniel at a banns reception by Monica and Carl-Axel Dominique and the producer Stephan Lundin. In 2011, a German version was recorded in connection with the Swedish Crown Princess couple's official visit to Germany.
\section{Introduction} The luminosity function of low mass stars is particularly interesting because it can be used to infer the initial mass function (IMF) independently of the star formation history, since low mass stars evolve relatively little over the entire lifetime of the Universe. Understanding whether or not there are variations of the IMF with galaxy type or with metallicity is essential to our understanding of star formation and to the modelling of galaxy evolution. To date, most observations of IMFs of low mass stars have been made, by necessity, in nearby systems. In the solar neighborhood, estimates of the low-mass IMF have been made by Salpeter (1955), Miller \& Scalo (1979), and Kroupa, Tout, \& Gilmore (1993) (among others). The latter find that a segmented power law, $dN/dM\propto M^{\alpha}$ best represents the local IMF, with $\alpha=-2.7$ for $M>1$\mbox{$M_\sun$}, $\alpha=-2.2$ for $0.5<M<1$\mbox{$M_\sun$}, and $-1.85<\alpha<-0.7$ for $M<0.5$\mbox{$M_\sun$}. The mass function of the least massive stars is of particular interest in connection with the frequency of brown dwarfs, the local mass density in the Galactic disk, and the observed microlensing rates towards the LMC and the Galactic bulge. Other recent studies of the local neighborhood also find a flattening of the IMF slope at low masses; Gould, Bahcall, \& Flynn (1997) find $\alpha = -0.56$ for $M<0.5$\mbox{$M_\sun$}\ for HST observations of local M dwarfs. The Gould et al. slope does not include a correction for the presence of binaries; allowing for these brings the inferred slope into the range suggested by Kroupa et al. (1993), who did make a correction for binaries. Measurements of the luminosity and mass functions in the Galactic halo have been recently reviewed by Mould (1996); conflicting results have been reported. Dahn et al. 
(1995) find a turnover in the halo luminosity function, while Richer \& Fahlman (1992) find evidence for a rapidly increasing number of stars as one goes to lower masses. In support of the Dahn et al. result, Gould, Flynn, \& Bahcall (1997) derive a mass function with a shallow slope of $\alpha \sim -0.75$ from analysis of 166 spheroid subdwarfs observed with HST. Other estimates of IMFs have been made in stellar clusters and associations, both in the Galaxy and in some nearby stellar systems. Most of these measurements suggest IMFs similar to those observed in the solar neighborhood, although there are some exceptions; Hunter et al. (1997) present a recent summary. However, measurements of stars with $M<0.5$\mbox{$M_\sun$}\ have been difficult to make. DeMarchi \& Paresce (1995a, 1995b, 1997) have measured the luminosity function down to very low mass stars in several nearby globular clusters using HST, and they find an increasing number of stars with decreasing mass down to $\sim$ 0.2 \mbox{$M_\sun$}, but then a flattening of the luminosity function towards lower masses. However, this is in conflict with previous ground-based measurements in these clusters, which suggest steeply rising mass functions down to the lowest mass stars observed (Richer et al. 1991). In general, the connection between current cluster mass functions and their initial mass function may be complicated because of dynamical evolution within a cluster and the removal of stars by the tidal field of the parent galaxy. In nearby galaxies, the ability of HST to observe individual faint stars has allowed estimates of the initial mass function down to relatively low stellar masses ($M\sim 0.7$ \mbox{$M_\sun$}). However, the conversion between a luminosity function and a mass function is not totally independent of the star formation history for such stars because these stars evolve in luminosity in less than a Hubble time. In an outer field in the Large Magellanic Cloud (LMC), Holtzman et al. 
(1997) find that a solar neighborhood initial mass function provides an adequate match to the data, but only if there is a significant component of older stars in the LMC. If the stars in the LMC field are predominantly young, then a steeper IMF, with $\alpha \sim -2.75$, is required. In the nearby dwarf spheroidal Draco, the inferred IMF is similarly linked to the age of the system (Grillmair et al. 1998); for an age of 12 Gyr, the inferred IMF slope is comparable to that of the solar neighborhood. Consequently, a picture is emerging which suggests, remarkably, that the initial mass function does not appear to vary significantly from one environment to another. However, much of the interpretation is still complicated by lack of knowledge about star formation histories, which affect inferences about the initial mass function for all except very low mass stars, and by the possible effects of dynamical evolution in star clusters. Additionally, initial mass functions have not been measured for all types of stellar systems. In particular, no metal-rich systems have been studied, nor have any massive spheroidal systems. A determination of the initial mass function of the Galactic bulge for comparison with that of the disk is important because of the possibility of differing modes of star formation in spheroidal and disk systems. Furthermore, a measurement of the mass function of stars in the bulge is essential to interpretations of microlensing events observed in the direction of the Galactic center (e.g., Alcock et al. 1997). We have observed a field in Baade's Window with the Wide Field Planetary Camera 2 on the Hubble Space Telescope in order to observe faint, low mass, stars. Our observations probe to stars with $M\sim 0.25$ \mbox{$M_\sun$}. In this paper we concentrate on the luminosity function of the faint stars, and its implications for the mass function in the bulge. 
A subsequent paper will discuss the interpretation of the color-magnitude diagram in greater detail, concentrating on what it can tell us about the star formation history in the Galactic bulge. \section{Observations} Observations of Baade's Window were obtained on 12 August 1994 with the Wide Field Planetary Camera 2 of the Hubble Space Telescope. Observations were made through the F555W and F814W filters (wide $V$ and $I$) with a total of 2420s through each filter. Observations through each filter were split into 5 exposures with exposure times of 20, 200, 200, 1000, and 1000 seconds. To maximize the dynamic range, the short exposures were made with a gain of $\sim$ 14 electrons/DN, and the long exposures were made with gain $\sim$ 7 electrons/DN. The data were processed using the standard reduction techniques discussed by Holtzman et al. (1995a, H95A). This processing included a very small correction for analog-to-digital errors, overscan and bias subtraction, dark subtraction, a tiny shutter shading correction, and flat fielding. \subsection{Photometry} Figure 1 shows the combined set of F555W exposures. The field is crowded and profile-fitting photometry is required to get accurate results for the faint stars. To perform the profile-fitting photometry, the five frames in each color were first combined to reject cosmic rays based on the known noise properties of the WFPC2 detectors. Since the F814W frames were deeper (because we are probing to low mass, very red stars), star detection was performed on the F814W frame alone. Given the input star list, profile fitting was performed simultaneously on the set of ten individual frames, solving for a brightness for each star in each color, a position for each star, a separate background value for each group of stars in each frame, and frame-to-frame pointing shifts.
Simultaneous fitting imposes the requirement that all frames have the same star list with the same relative positions (after allowing for the variation in scale as a function of wavelength as discussed in H95A). Small pointing differences between the frames provide slightly different pixel samplings of the PSF, providing additional information for fitting the undersampled PSF. Model PSFs that vary across the field of view were used; separate models were derived for each of the individual frames allowing for small focus shifts between frames. A brief description of the model PSFs and their advantages and disadvantages is presented in Holtzman et al. (1997). Cosmic rays in each of the individual frames were flagged by the procedure which combined the stack of frames for star-finding, and contaminated pixels were ignored in the profile-fitting procedure. Because of the large range in luminosities of stars observed in the bulge, the wings of the bright stars cause significant problems for automatic star finding algorithms. To minimize the problem, the profile fitting was iterated three times. In the first pass, only the brightest stars were fit. This allowed identification and subtraction of these stars including the extensive stellar wings and diffraction spikes. In the second pass, the star finding algorithm was used on the subtracted frames, with a low threshold to detect faint stars. A higher detection threshold was used around bright stars in the subtracted frame to avoid spurious detections from imperfect PSF subtraction. A fit was then performed on the original frames including stars found on both first and second passes, and these stars were subtracted. In the third pass, a few additional close neighbors of stars were detected from these subtracted frames. These were added to the list of stars, and a final stellar photometry run was made. 
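The three-pass procedure above can be summarized schematically as follows; the routine names, arguments, and thresholds below are placeholders standing in for the actual reduction software, not code from it.

```python
def iterative_photometry(frames, find_stars, fit_and_subtract,
                         bright_threshold, faint_threshold):
    """Schematic of the three-pass detection/fitting loop described in
    the text; all callables and thresholds are hypothetical stand-ins."""
    # Pass 1: fit only the brightest stars and subtract them, including
    # their extended wings and diffraction spikes.
    stars = find_stars(frames, threshold=bright_threshold)
    residual = fit_and_subtract(frames, stars)

    # Pass 2: run the star finder on the subtracted frames with a low
    # threshold (raised locally around bright stars to avoid spurious
    # detections from imperfect PSF subtraction), then refit everything
    # on the original frames.
    stars += find_stars(residual, threshold=faint_threshold,
                        raise_near=stars)
    residual = fit_and_subtract(frames, stars)

    # Pass 3: pick up remaining close neighbors of stars and make the
    # final photometry run.
    stars += find_stars(residual, threshold=faint_threshold,
                        raise_near=stars)
    return fit_and_subtract(frames, stars), stars
```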
During each of the profile-fitting stages, the software attempted to remove spurious detections by deleting stars that were not well fit by the stellar PSF. The final photometry list was filtered once again using a goodness-of-fit index in an attempt to remove spurious detections which remained. The resulting magnitudes were placed on the synthetic WFPC2 photometric system defined by Holtzman et al. (1995b, hereafter H95B). The profile results were converted to instrumental aperture magnitudes with a 0.5 arcsec radius aperture using aperture photometry of reasonably bright stars after subtraction of their neighbors based on the profile fitting results. The aperture corrections were determined by inspecting the difference between the 0.5 arcsec aperture and profile-fitting results; a separate correction was determined for each of the four chips, although they all agreed to within a few percent. We judge the accuracy of the aperture corrections to be a few percent in the worst case. Because these were fairly long, crowded exposures, we made no correction for possible errors from charge transfer efficiency (CTE) effects, as discussed in H95B; if CTE problems were present they would only change the derived magnitudes by a few percent and our conclusions would be unaffected. No correction was made for a possible systematic effect which may give differences in photometric zeropoint between long and short exposures (see Note Added in Proof, H95B); applying such a correction would make all our magnitudes about 0.05 mag fainter, which would also have a minimal impact on our conclusions. To compare with local luminosity functions which have been derived in the $V$ and $I$ bandpasses, we also transformed our WFPC2 magnitudes to the Johnson/Cousins $VI$ system using the synthetic transformations presented in H95B. These transformations were derived from a stellar library which included stars as red as those observed here. 
The use of these transformations introduces some potential systematic errors because of the unknown metallicity dependence of the transformations, but such errors are likely to be small. In any case, most of the work presented below is performed in the native WFPC2 system. The calibrated color-magnitude diagrams (with both F555W and F814W on the ordinate) are presented in Figure 2. A well-defined main sequence can be seen down to $\mathrm{F555W}\ (\sim V) > 27$. \subsection{Completeness and Error Estimation} To accurately interpret the luminosity function, we need to understand the detection efficiency and measurement errors as a function of stellar brightness. To estimate these, we performed a series of artificial star experiments in which we added a grid of stars of equal brightness onto each exposure in each of the four chips. Artificial stars were given colors corresponding to the median color of observed stars at a comparable magnitude, so fainter stars were made to be redder. The grid spacings were chosen to ensure that the artificial stars were isolated from each other and thus did not add significantly to the crowding on the frame; 529 stars were placed on each of the WFs, and 121 were placed on the PC. Different pixel centerings were used for each artificial star, and the pixel centering varied slightly from frame-to-frame as in the real data. Poisson statistics were used to add errors to the artificial stars. These frames were then run through photometry routines identical to those discussed in Section 2.1. This was done 22 separate times with different brightnesses chosen for the artificial stars each time. For each of the artificial star runs, the final list from the photometry procedure was compared with the input list of artificial stars, and also with the final photometry list from the original frames.
An artificial star was considered to be found if there was a detection within one pixel of the position where the star was placed and if there was no corresponding detection on the original frame. If a match was found with both the artificial star position and with an object on the original frame, the artificial star was considered found if the measured F555W magnitude was closer to the magnitude of the artificial star than to the magnitude of the star on the original frames. This properly accounts for incompleteness arising from crowding as well as from incompleteness from inability to detect stars in the noise of the background. The artificial stars also provided an estimate of the photometric errors, at least for the fainter stars. A limitation is that the artificial stars are created and measured with the same PSF, so there are no errors resulting from inaccuracies in the PSF models. Such errors dominate for brighter stars, so the artificial stars cannot be used to judge the photometric errors for these stars. Errors in the fainter stars are dominated by photon statistics and include both random and systematic errors. The former comes from Poisson statistics and readout noise, but systematic errors also occur at the faintest levels because objects with positive noise fluctuations are detected preferentially over those with negative fluctuations. Systematic errors can also arise from crowding. Some of the measured completeness and error distributions for the F814W magnitudes are shown in Figure 3. Each panel shows a histogram of observed errors for artificial stars of a different brightness. The text in each panel identifies the artificial star brightness (F555W and F814W) as well as the completeness fraction (fraction of artificial stars detected and measured). As expected, the random error increases for fainter objects. 
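The recovery bookkeeping just described can be sketched as follows. The names and data layout are our own, and "within one pixel" is implemented here as a per-axis offset, which is an assumption about the exact metric used.

```python
def is_recovered(art_xy, art_f555w, detections, originals, tol=1.0):
    """Decide whether one artificial star was found.

    detections/originals are lists of (x, y, f555w) tuples from the
    reduced artificial-star frames and the original frames.  A star is
    recovered if a detection lies within one pixel of where it was
    placed; if that detection also coincides with an object on the
    original frame, it counts only if its measured F555W magnitude is
    closer to the artificial star's magnitude than to the original
    object's.  Illustrative bookkeeping only, not the actual code.
    """
    for (x, y, m) in detections:
        if max(abs(x - art_xy[0]), abs(y - art_xy[1])) > tol:
            continue
        # Objects on the original frame at the same position.
        clash = [mo for (xo, yo, mo) in originals
                 if max(abs(xo - x), abs(yo - y)) <= tol]
        if not clash:
            return True
        # Ambiguous match: assign by magnitude proximity.
        if abs(m - art_f555w) < min(abs(m - mo) for mo in clash):
            return True
    return False

def completeness(recovered_flags):
    """Fraction of artificial stars recovered at a given magnitude."""
    return sum(recovered_flags) / len(recovered_flags)
```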
In addition, for the faintest objects, it is clear that the error distribution is asymmetric for the F814W magnitudes, with more stars being detected too bright than too faint. This is expected, since the faintest objects may only be detected if they have a positive noise fluctuation. Crowding may also contribute to this result. The accuracy of the completeness tests is different for the two different filters, because star detection is performed only on the F814W frames. The probability that an artificial star will be detected depends on its F814W magnitude. Consequently, completeness as a function of input F814W magnitude is accurately measured, but completeness as a function of input F555W magnitude is accurate only to the extent to which the artificial stars have the {\it same color} as the true stars. The artificial star colors were chosen based on median colors of the observed real stars, but these are likely to be biased for the faintest stars by incompleteness. Consequently, we believe that the F814W corrected luminosity function is more accurate than the corresponding function in F555W. In addition, random errors are smaller in F814W for the faintest stars, so smearing of the luminosity function from observational error is smaller in the F814W luminosity function. We have attempted to assess possible errors in our completeness corrections by repeating the test with simulated stars made using a PSF that has a severely different focus from that inferred from the actual frames. We then reduced these frames with our normal PSFs to simulate the effect of using an erroneous PSF. Completeness results for the two different PSFs are shown in Figure 4. Although these differ significantly for the faint objects, we note that in the repeat test we used a PSF for the fake objects which was an extreme mismatch; the subtractions from our incorrect PSFs were glaring and far worse than any subtractions of comparably bright real stars. 
Consequently, we feel that the differences illustrated between these two completeness curves represent the extreme of possible errors. Spurious detections are more problematic than missed detections, since it is more difficult to estimate their frequency. We have attempted to minimize the number of spurious detections by using a relatively high star finding threshold, and by using a conservative limit on goodness-of-fit for accepting objects for which we perform photometry. Visual inspection shows that we do not appear to have a large number of spurious detections remaining after these techniques are applied; in subsequent analysis, we make no effort to correct for the few which have survived, since we cannot determine a reasonable estimate for the number of spurious detections as a function of apparent magnitude. \section{Interpretation} \subsection{Distance and reddening} We have adopted a distance of 8 kpc for the bulge (see, e.g., Carney et al. 1995), with a corresponding distance modulus of 14.52. Maximum errors in this are probably about 0.3 mag (corresponding to distances between 7 and 9 kpc). Of course, the bulge population sampled is likely to lie at a range of distances, causing the observed luminosity function to be smeared. The extinction in the direction of Baade's Window has been discussed by Stanek (1996), Gould, Popowski, \& Terndrup (1998), and Alcock et al. (1998), among others. Stanek (1996) presents a map of differential reddening within Baade's Window based on brightnesses of red clump stars. Gould et al. (1998) have computed a zero point for this map based on observed $(V-K)$ colors as compared with $(V-K)_0$ predicted from observed H$\beta$ indices. Alcock et al. (1998) independently compute a zeropoint based on observations of RR Lyrae stars and derive an almost identical zeropoint to that of Gould et al. 
Using this zeropoint and the Stanek map, we infer an extinction of $A_V=1.28 \pm 0.08$ for our field, which lies in one of the clearest regions of Baade's Window. Using the calculations of Holtzman et al. (1995b), we infer extinctions in the WFPC2 filter system of $A(\mathrm{F555W})=1.26$ and $A(\mathrm{F814W})=0.76$. \subsection{The luminosity function} Figure 5 shows the observed luminosity function in the $V$ and $I$ bands. Both the uncorrected (open squares) and completeness-corrected (filled squares) luminosity functions are shown. The completeness correction here uses the completeness fraction as measured from simulated stars of the corresponding magnitude. The application of the completeness correction in this way is only approximate because it assumes that stars are measured without observational error; in reality, stars observed at a given magnitude actually have a range of true magnitudes, and, correspondingly, different detection probabilities. However, random errors are not likely to have much effect because the luminosity function is relatively flat, and, as shown above, systematic errors are not important until $V\sim 28$, or $M_V\sim 12$. Similarly, spread in the distance of the stars is not likely to significantly affect the relatively flat luminosity function. We show the corrected luminosity function for $M_V < 12.25$ and $M_I<9$; as discussed above, the completeness correction for $I$ is probably more reliable than that for $V$ because the latter depends on the accuracy of the simulated star colors. Errors in the completeness correction are a likely cause of the apparent turnover in the $V$ band luminosity function at the faintest magnitudes. For the $I$ band, we expect that the error in completeness gives an uncertainty of $\lesssim$ 50\% in the counts at the very faintest magnitude shown. For comparison, we also show the solar neighborhood luminosity function from Wielen et al.
(1983) (triangles), as well as a recent determination of the local luminosity function for M dwarfs as derived by HST imaging (asterisks, Gould et al. 1997). The luminosity functions have been normalized to agree at $M_V=9$ and $M_I=7.25$. For the $I$ band, these luminosity functions have been transformed from the $V$ band using the relation between $V$ and $V-I$ presented by Kroupa \& Tout (1997) based on the data of Monet et al. (1992). It is immediately apparent that the corrected bulge luminosity function is in close agreement with the solar neighborhood luminosity function over the range $7<M_V<11$ and $6<M_I<9$. Brighter than this, the bulge luminosity function drops off more steeply than the local function, as expected for an older population. The one discrepant point from the Gould et al. luminosity function ($M_V\sim 8.3$) has a large associated error because few stars this bright are counted in the HST fields. However, the match of the luminosity function with that of the solar neighborhood does not necessarily imply a correspondence in the mass functions because of possible differences in the number of binaries in the samples and because of observational errors. To consider these effects, we turn to a discussion of the inferred mass function. \subsection{The mass-luminosity relation} The IMF is constrained using the lower main sequence because the effects of stellar evolution are minimal for low mass stars over the age of the universe. However, the derived mass function depends on an accurate knowledge of the mass-luminosity relation, and calculations indicate that the mass-luminosity relation depends on metallicity (Kroupa \& Tout 1997). Theoretical mass-luminosity relations are difficult to calculate for low mass stars because of complications from the equation of state, opacities, and convection, leading to uncertainties in the mass-luminosity relation as derived from models. However, recent progress has been made by Baraffe et al. (1997).
These models incorporate the most up-to-date physics available and are computed self-consistently with the stellar atmospheres of Allard et al. (1997). So far, we have obtained these models only for stars up to 0.7 \mbox{$M_\sun$}; for more massive stars (which do not enter strongly into the discussion in this paper), we have used models from the Padua group (Bertelli et al. 1994; Bressan et al. 1993; Fagotto et al. 1994a,b). A good summary of the current understanding of mass-luminosity relations is presented by Kroupa \& Tout (1997). We note that it is clear that the models are still not perfectly accurate, because the model color-magnitude relation falls blueward for the data for the faintest stars. Since uncertainties about the quality of the theoretical mass-luminosity function remain, we also consider the use of an empirical mass-luminosity relation. This is available only for the solar neighborhood, and, consequently, only for stars of near solar composition. However, the median metallicity observed in the bulge may actually be quite similar to that of the solar neighborhood (McWilliam \& Rich 1994), although the bulge metallicity distribution has a tail which extends to lower metallicities. Consequently, it is plausible that an empirical mass-luminosity relation derived from solar neighborhood stars will provide a reasonable match for the bulge. Such empirical relations have been presented by Henry \& McCarthy (1993) and Kroupa et al. (1993), and the two show good agreement. However, the Henry and McCarthy relation is presented as a series of quadratic fits in different mass ranges. As a result, the derivative of their function, which enters into the derivation of a mass function from a luminosity function, is not continuous between the different regions which they fit, leading to problems with its use. Consequently, we adopt the Kroupa et al. function as our empirical function. 
This function has been tabulated for both the $V$ and $I$ passbands (among others) for stars with $M\leq 0.65$ \mbox{$M_\sun$}\ by Kroupa \& Tout (1997). For larger masses, the relation is given by Kroupa et al. (1993), but only for the $V$ bandpass. To get the $I$ band mass-luminosity relation, we have transformed the $V$ band mass-luminosity relation to the $I$ band using a fit to color-magnitude data for solar neighborhood stars which was kindly provided by I.N. Reid; these data include ground-based measurements as well as those from the Hipparcos satellite. The applicability of this relation to the bulge stars can be judged by the degree to which the color-magnitude diagram of the bulge matches that of solar neighborhood stars. Figure 6 shows the median locus of the bulge stars compared with the solar neighborhood fit. One can see that these agree fairly well, though not perfectly. Minor differences may arise from different metallicity distributions between the bulge and the solar neighborhood, different fractions of binary systems in the samples, and errors in our assumed distance and/or extinction. Figure 6 also shows a solar metallicity model color-magnitude relation; this demonstrates the problems the models have getting the correct colors for the fainter stars. Because neither the model nor the empirical mass-luminosity-color relations match the observed properties of the bulge stars perfectly, slightly different inferences are made about the mass function depending on whether the $V$ or the $I$ band luminosity function is considered. \subsection{The IMF} One can naively derive a MF from the luminosity function simply by using the M-L relation to effect a change of variables. However, this method has no way of accounting for systematic or random errors in the photometry which may be important for the fainter stars; it also cannot account for the presence of binary stars or spread in the distance to the stars.
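Written out, this naive change of variables is simply
\begin{equation}
\xi(M) \equiv \frac{dN}{dM} = \frac{dN}{dM_V}\,\left|\frac{dM_V}{dM}\right| ,
\end{equation}
so the Jacobian of the adopted mass-luminosity relation enters directly. A minimal numerical sketch of that Jacobian follows; the table values are placeholders loosely resembling a lower-main-sequence relation, not the actual Kroupa \& Tout tabulation.

```python
import numpy as np

# Placeholder (mass, M_V) points -- illustrative only; substitute the
# real tabulation for any quantitative work.
mass = np.array([0.10, 0.20, 0.30, 0.50, 0.70, 1.00])   # M_sun
M_V  = np.array([15.5, 12.8, 11.0,  9.0,  7.3,  4.8])   # mag

def mv_of_mass(m):
    """Piecewise-linear interpolation of the tabulated relation."""
    return np.interp(m, mass, M_V)

def jacobian(m, dm=1e-3):
    """|dM_V/dM|, the factor converting dN/dM_V into dN/dM."""
    return abs(mv_of_mass(m + dm) - mv_of_mass(m - dm)) / (2.0 * dm)
```

A piecewise-linear table makes $|dM_V/dM|$ piecewise constant; this is the same problem, in milder form, as the discontinuous derivatives of the quadratic fits discussed above.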
Of these different effects, the presence of binaries is the most significant, especially for the low-mass stars considered here. The effects of binaries have been previously discussed by Kroupa (1995) and Kroupa, Tout, \& Gilmore (1993). Some of these effects are shown in Figure 7, which plots expected luminosity functions for the {\it same} initial mass function using our estimated completeness and several different assumptions about the presence of binaries and systematic errors. In this figure and hereafter, the binary fraction refers to the number of \textit{systems} which are binaries. Also, we make the assumption that the masses of stars in binary systems are drawn independently from the same mass function. In Figure 7, the luminosity functions have been normalized to match at the bright end to make the differences in slope at the faint end most apparent. One can see that the presence of binaries can have a severe effect on the observed luminosity function. Depending on the mass function slope, this can dominate over the relatively small effects that random and systematic errors have on the luminosity functions, even for the faintest stars. In the solar neighborhood, various studies suggest that the binary fraction is in the vicinity of 0.5 (see discussion in Kroupa 1995 and Kroupa et al. 1993). Of course, we have no idea whether the bulge binary fraction is similar to that of the solar neighborhood, so we consider it to be a free parameter. To account for the presence of binaries and errors, a derivation of an IMF involves simulating a luminosity function from some assumed mass function, allowing for systematic errors, binaries, and distance spread, and then checking for consistency with the observed luminosity function. To do this, however, requires some parameterization of the mass function in order to keep the number of possible models reasonably small. 
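A schematic of such a simulation is sketched below. The segmented-IMF slopes, the toy mass-luminosity relation, and all function names are illustrative placeholders, not the models actually used; distance spread and photometric errors are omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def imf(m, alpha_lo=-1.3, alpha_hi=-2.2, m_break=0.7):
    """Unnormalized segmented power law dN/dM, continuous at m_break.
    Slopes here are illustrative, not fitted values."""
    m = np.asarray(m, dtype=float)
    return np.where(m < m_break,
                    (m / m_break) ** alpha_lo,
                    (m / m_break) ** alpha_hi)

def sample_masses(n, m_min=0.1, m_max=1.0):
    """Rejection-sample n masses from the segmented IMF."""
    peak = imf(m_min)                  # IMF decreases with mass
    out = np.empty(0)
    while out.size < n:
        m = rng.uniform(m_min, m_max, size=4 * n)
        keep = rng.uniform(0.0, peak, size=m.size) < imf(m)
        out = np.concatenate([out, m[keep]])
    return out[:n]

def system_magnitudes(n, binary_fraction=0.5):
    """For a fraction of systems, combine two independently drawn
    components into one unresolved source, as assumed in the text."""
    def M_I(m):                        # toy mass-luminosity relation
        return 9.0 - 7.5 * np.log10(m / 0.25)
    mags = M_I(sample_masses(n))
    is_bin = rng.uniform(size=n) < binary_fraction
    m2 = sample_masses(int(is_bin.sum()))
    flux = 10 ** (-0.4 * mags[is_bin]) + 10 ** (-0.4 * M_I(m2))
    mags[is_bin] = -2.5 * np.log10(flux)   # unresolved pair is brighter
    return mags
```

Binning the returned magnitudes gives a model luminosity function; raising the binary fraction brightens unresolved systems and depresses the faint-end counts, which is the effect illustrated in Figure 7.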
Here, we initially transform our luminosity function into a mass function ignoring binaries and errors in order to determine what might provide a useful parameterization, and then simulate luminosity functions with binaries and errors for a more sophisticated comparison with model IMFs. \subsubsection{No binaries or errors} Figure 8 presents mass functions derived using both the F814W (top) and the F555W (bottom) luminosity functions; results using the F555W are more uncertain because the F555W data have larger photometric errors and less accurate completeness estimates. The inferred mass functions are shown using the empirical mass-luminosity relation (squares), a solar metallicity model mass-luminosity relation (triangles), and a model mass-luminosity relation for a population with $Z=0.006$ ([Fe/H] $\approx -0.5$). For one of the relations (triangles), solid points show completeness-corrected data and open points show raw data, to illustrate the amplitude of the completeness corrections. Independent of the choice of mass-luminosity relation, no single power law mass function is able to fit the data; the derived mass function shows a break around 0.5-0.7 \mbox{$M_\sun$}. This conclusion depends on having a reasonable estimate of the completeness, since the turnover occurs at a level where our data are only $\sim$ 50\% complete. However, our completeness estimate would have to be off by a factor of two for stars of 0.4 \mbox{$M_\sun$}\ to be consistent with a single power-law mass function. As discussed in \S 2.2, we do not believe this is likely. For masses less than $\sim$ 0.7 \mbox{$M_\sun$}, evolution is negligible, so this result implies that the {\it initial} mass function cannot be fit with a single power law. A similar result is derived for the solar neighborhood by Kroupa et al. (1993) and by Gould et al. (1997). Both of these studies find a mass function slope of $\alpha = -2.2$ for stars with $M>0.5$ \mbox{$M_\sun$}. For lower mass stars, Kroupa et al.
find a slope of $-1.85<\alpha<-0.7$, and Gould et al. find $\alpha = -0.56$. Lines in Figure 8 are shown which correspond to $\alpha = -2.2$ and $\alpha = -0.56$. The data appear to be matched by a mass function with a faint end slope of $\alpha > -1$. The more massive stars are reasonably well matched by $\alpha=-2.2$ for $M \gtrsim 0.7$ \mbox{$M_\sun$}\ using the model mass-luminosity relation. The empirical mass-luminosity relation suggests a steeper slope for the most massive stars, but since evolutionary effects are significant for these stars, the empirical mass-luminosity relation is likely not applicable, as the mean age of bulge stars is larger than that of solar neighborhood stars. \subsubsection{The effect of binaries and errors} As mentioned above, an accurate mass function cannot be derived by simply converting luminosities to masses because of the presence of binaries, systematic errors, and distance spread. Here we derive some model luminosity functions assuming mass functions with power law segments, motivated by the estimates provided by Figure 8. The calculation of these models is complicated because of the observational incompleteness. In principle, one should be able to take the model magnitudes from a mass-luminosity relation, derive observed magnitudes using a distance and extinction estimate, and use the completeness estimate at that magnitude to predict an observed number of stars. In practice, however, this leads to problems because any errors in the mass-luminosity relation lead to large errors in the completeness corrections. As mentioned above, it is clear that such errors exist because neither the stellar model nor the empirical mass-luminosity relations are able to match the observations simultaneously in both bandpasses. The differences in the completeness correction which one derives from using the two different bandpasses to compute completeness can be severe.
To avoid this problem, we compute model luminosity functions without accounting for incompleteness in the model, and compare these with the \textit{completeness-corrected} data. In this section, we only show comparisons with the F814W luminosity function, which has the better determined completeness. Figure 9 shows the observed luminosity function with calculated luminosity functions assuming $\alpha = -2.2$ for $M>0.5$ \mbox{$M_\sun$}\ and $\alpha = -0.5,-0.9,-1.3,-1.7$ for $M<0.5$ \mbox{$M_\sun$}. These cover the possible ranges of solar neighborhood mass functions inferred by Kroupa et al. (1993). We also include a mass function with a constant power law slope $\alpha=-2.0$ (steepest curve). Results from both a solar metallicity model and an empirical mass-luminosity relation are shown (left and right), as well as results for three different binary fractions, where the binary fraction gives the number of systems which are binary. The top panels show results for no binaries, the middle panels for 50\% binaries, and the bottom panels for 90\% binaries, where binaries are assumed to have uncorrelated masses. As noted above, binaries have a strong influence on the luminosity function of faint stars. The models with no binaries seriously overestimate the number of faint stars, unless the faint-end slope flattens significantly at $M_{\mathrm{F814W}}\sim 6.5$, corresponding to $\sim$ 0.7 \mbox{$M_\sun$}. With binaries, the models provide a better match, although all models shown here are statistically significantly different from the observed data. Including binaries allows a steeper faint-end slope, but models with a constant slope at $\alpha \lesssim -2$ are inconsistent with the data. The left panels use the model mass-luminosity relation taken from the solar metallicity models of Baraffe et al.
(1997), while the right panels use the empirical relation tabulated in Kroupa \& Tout (1997), combined with the relation presented in Kroupa et al. (1993) for brighter stars. The empirical mass-luminosity relation produces a dip in the luminosity function around $M_{F814W}\sim 6$ which is not apparent in the bulge data. Despite the differences between the empirical relation and the model relation, the same general conclusions can be drawn; the slope of the mass function at low masses must be significantly shallower than at higher masses. We did many additional experiments to find models which match the data to within statistical uncertainties, and we found we had to go to models with several different power law segments to find acceptable fits. Finding a best fit with many free parameters does not strike us as providing significant physical insight, particularly given the uncertainties in the binary fraction, the mass-luminosity relation, and possible errors in our completeness correction. We choose here to show just several plausible mass functions which match the observed luminosity function. Figure 10 shows one luminosity function derived using a mass function which has $\alpha =-2.2$ for $M>0.7$ \mbox{$M_\sun$}, $\alpha=-0.9$ for $M<0.7$\mbox{$M_\sun$}, a solar metallicity model, and a binary fraction of 0.0, and another with a binary fraction of 0.5 and mass function slopes of $\alpha =-2.2$ for $M>0.7$ \mbox{$M_\sun$}, $\alpha=-1.3$ for $M<0.7$\mbox{$M_\sun$}. If one compares the model luminosity functions with the F555W data (which may have less well determined completeness corrections), one reaches similar conclusions; in fact, an even shallower faint-end slope is required to match these data. \section{Summary} We have measured a deep luminosity function in the Galactic bulge, and used it to infer a mass function. We find that the luminosity function down to $M_I\sim 9$ is similar to that observed in the solar neighborhood.
Transforming the luminosity function into a mass function, we find strong evidence of a break from a power law mass function around $0.5-0.7$\mbox{$M_\sun$}. Detailed modelling of a population allowing for binaries and photometric errors as inferred from our data suggests a mass function which flattens from a slope of $\alpha=-2.2$ for $M>0.7$ \mbox{$M_\sun$}\ to $\alpha \sim -1$ for $M<0.7$ \mbox{$M_\sun$}. The exact details of the derived mass function depend on assumptions about the binary fraction, the mass-luminosity relation, and the details of our completeness corrections. The similarity of the mass function in the bulge to that of the solar neighborhood is perhaps not surprising given that the mean metallicities of the two populations may not differ by a large amount (McWilliam \& Rich 1994). The current data suggest that the physical processes of star formation in the bulge and in the disk may be similar. Additionally, the lack of large numbers of low mass stars in the bulge may lead to difficulties in explaining the relatively high optical depth in microlensing events and the large number of short duration events observed in the direction of the bulge. Models that account for these with a population of stars require a stellar mass function with $\alpha \sim -2$ all the way down to the hydrogen-burning limit (Zhao, Spergel, \& Rich 1995; Han \& Gould 1996). Model luminosity functions for this slope are shown as the steepest curves in Figure 9. If one were to normalize these curves to the bright end of the luminosity function, one can see that the observed star counts at the fainter magnitudes fall significantly short of those expected for this mass function. We can estimate the mass surface density observed towards Baade's Window using the models which do a reasonable job of fitting the observed luminosity function (Figure 10). 
We compute surface mass densities towards Baade's window for our assumed distance of 8 kpc, using several different assumptions about the lower mass cutoff of objects. For the model with no binaries and $\alpha=-0.9$ at the low mass end, we derive mass densities of 1.0, 1.3, \& 1.4 $\times 10^3$ \mbox{$M_\sun$} pc$^{-2}$ for lower mass cutoffs of 0.3, 0.08, and 0 \mbox{$M_\sun$}. For the model with 50\% binaries and $\alpha=-1.3$ at the low mass end, we derive mass densities of 1.1, 1.7, \& 2.1 $\times 10^3$ \mbox{$M_\sun$} pc$^{-2}$. If we assume that the entire bulge has a similar mass function, we can derive a total bulge mass by scaling these numbers by the ratio of the integrated infrared light from the bulge to that from Baade's Window (c.f. Han 1997). For the range of mass densities above, we derive a total bulge mass of somewhere between $7.4\times 10^9$ and $1.5\times 10^{10}$ \mbox{$M_\sun$}. This work was supported in part by NASA under contract NAS7-918 to JPL. We gratefully acknowledge I. Baraffe and F. Allard for communicating some of their stellar model and atmosphere results before publication. We thank the referee, A. Gould, for several very useful comments and suggestions. \pagebreak
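As a quick editorial cross-check (not part of the paper), the spread of the quoted surface mass densities for the binary-free model follows directly from integrating the adopted mass function. The Python sketch below reproduces the 1.0 : 1.3 : 1.4 pattern; the 1.0 \mbox{$M_\sun$}\ upper limit is an assumed turnoff mass, and the normalisation is arbitrary, so only ratios between lower-mass cutoffs are meaningful.

```python
# Mass per unit area of a two-segment power-law mass function,
# dN/dm ~ m**alpha, with alpha = -0.9 below 0.7 Msun and -2.2 above,
# continuous at the break.
ALPHA_LO, ALPHA_HI, M_BREAK, M_MAX = -0.9, -2.2, 0.7, 1.0

def segment_mass(alpha, lo, hi):
    """Closed-form integral of m * m**alpha dm over [lo, hi]."""
    p = alpha + 2.0
    return (hi**p - lo**p) / p

def total_mass(m_cut):
    """Mass integral from m_cut up to M_MAX; continuity of dN/dm at the
    break sets the relative normalisation of the upper segment."""
    upper = M_BREAK**(ALPHA_LO - ALPHA_HI) * segment_mass(ALPHA_HI, M_BREAK, M_MAX)
    return segment_mass(ALPHA_LO, m_cut, M_BREAK) + upper

ratios = [total_mass(mc) / total_mass(0.3) for mc in (0.3, 0.08, 0.0)]
# ratios -> approximately [1.0, 1.31, 1.40], matching the quoted
# 1.0, 1.3, 1.4 (x 10^3 Msun pc^-2) for the no-binary model
```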
Q: Nested for loops with a variable used in the nested loop (Windows batch)

I have an outer loop and an inner loop, but the variable set in the inner loop always keeps its old value there. How do I get the current value inside the nested loop?

REM @echo off
set fastestserver=none
set fastestresponse=99999999ms
for /F "tokens=*" %%A in (fileservers.txt) do (
    for /F "skip=8 tokens=9 delims= " %%B in ('ping -n 3 %%A') do (
        echo %%A
        echo %%B
        set fastestresponse=%%B
        set actualpingserver=%%B
        if /I "%fastestresponse:~,-2%" GTR "%actualpingserver:~,-2%" (
            set fastestresponse=%%B
            set fastestserver=%%A
        )
    )
)
REM @echo on
echo %fastestserver%
echo %fastestresponse%

fileservers.txt contains a list of servers. Each one gets pinged, and I want its average ping time; if it is smaller than the previous best, the two variables (fastestserver, fastestresponse) should be replaced. The problem: when I debug the script, the comparison if /I "%fastestresponse:~,-2%" GTR "%actualpingserver:~,-2%" always takes the same branch, because fastestresponse never gets filled; its value is always 99999999ms. Thanks in advance for any hint.

A: Inside a parenthesized block, %var% is expanded only once, when the whole block is parsed, so every iteration sees the original value. Enable delayed expansion and read the variables with !var! instead:

REM @echo off
setlocal enableDelayedExpansion
set fastestserver=none
set fastestresponse=99999999ms
for /F "tokens=*" %%A in (fileservers.txt) do (
    for /F "skip=8 tokens=9 delims= " %%B in ('ping -n 3 %%A') do (
        echo %%A
        echo %%B
        set actualpingserver=%%B
        if /I "!fastestresponse:~,-2!" GTR "!actualpingserver:~,-2!" (
            set fastestresponse=%%B
            set fastestserver=%%A
        )
    )
)
REM @echo on
echo %fastestserver%
echo %fastestresponse%
endlocal

Note that quoting the operands makes GTR compare them as strings; drop the quotes if you want a numeric comparison. More info about delayed expansion.
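A minimal demonstration of why the delayed-expansion syntax matters (illustrative only; run in cmd.exe on Windows):

```batch
@echo off
setlocal enableDelayedExpansion
set val=before
for %%A in (1) do (
    set val=after
    rem %val% was substituted when the whole block was parsed:
    echo %val%
    rem !val! is read when this line actually executes:
    echo !val!
)
```

This prints "before" and then "after": inside a parenthesized block, %val% is replaced once at parse time, while !val! is looked up each time the line runs, which is exactly why the answer above switches to !fastestresponse! and !actualpingserver!.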
Q: Why is TryParse in C#7 syntax (empty out parameter) emitting a warning if you compile it?

In C#7, you are allowed to do

if (int.TryParse("123", out int result))
    Console.WriteLine($"Parsed: {result}");

or, if you don't use the result and just want to check whether the parsing succeeds, discard the out value:

if (int.TryParse("123", out _))
    Console.WriteLine("Syntax OK");

That usually works fine, but in Visual Studio 2017 the second example, where the out parameter is discarded, generates the warning

Warning AD0001: Analyzer 'Microsoft.CodeAnalysis.CSharp.Diagnostics.SimplifyTypeNames.CSharpSimplifyTypeNamesDiagnosticAnalyzer' threw an exception of type 'System.NullReferenceException' with message 'Object reference not set to an instance of an object.'.

The Visual Studio versions where I could verify that it occurs are

Visual Studio Enterprise 2017 Version 15.1 (26403.7) Release
Visual Studio Enterprise 2017 Version 15.2 (26430.4) Release

Is this a bug, or is the usage of int.TryParse("123", out _) not officially supported? I could not find any hint so far. For completeness, here's the code of the console application showing the issue:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace ConsoleApp1
{
    class Program
    {
        static void Main(string[] args)
        {
            if (int.TryParse("123", out _))
                Console.WriteLine("Syntax OK");
        }
    }
}

A: I submitted a bug request (request #19180) to the development team, and they confirmed it is a bug. You can see the entire status at GitHub dotnet/roslyn.

> Pilchie commented 16 hours ago
> I can repro that in 15.2, but not 15.3. Moving to compiler based on the stack, but I'm pretty sure this is a dupe. @jcouv?

> jcouv commented 16 hours ago
> Yes, this is a duplicate (of #17229 and possibly another one too). It was fixed in dev15.3 (#17544) and we were unfortunately unable to pull the fix into dev15.2. Thanks @Matt11 for filing the issue and sorry for the bug.
It seems the bug is already fixed and will, as far as I understood, be available in the next update. But there is no announced date for when Microsoft will ship it, so I submitted an issue through "Send Feedback/Report a Problem" in Visual Studio 2017.

Notes:

*The issue is not limited to TryParse. I verified that it also occurs with a user-written function, i.e. the following sample produces warning AD0001 as well:

static void Main(string[] args)
{
    bool myOutDemo(string str, out int result)
    {
        result = (str ?? "").Length;
        return result > 0;
    }

    // discard out parameter
    if (myOutDemo("123", out _))
        Console.WriteLine("String not empty");
}

*I noticed that there is now a VS Version 15.3 preview available, which should contain the fix mentioned in the GitHub comments: Visual Studio 2017 Version 15.3 Preview. After installing it, I verified the issue again and can confirm it is fixed there.

Thanks to all who participated in the discussion above!
Where to stay near Sissi? Our 2019 accommodation listings offer a large selection of 328 holiday rentals near Sissi. From 44 Houses to 259 Studios, find unique holiday homes for you to enjoy a memorable stay with your family and friends. The best place to stay near Sissi for a long holiday or a weekend break is on HomeAway.
Q: What is the difference between blue and black colored text in debug mode?

Why is the text sometimes blue and sometimes black?

A: I was very curious about this question, so I did a small test. I'm not sure whether what I observed is the actual rule, but I repeated the test many times and got the same result each time.

Conclusion: I observed that while debugging, if the variable shown in the pop-up is still in use, i.e. its value still matters later in the method, the text is shown in black. Once the value of that variable is no longer needed, the text is shown in blue.

For example: In the above image, the variable fileName is used again on the second line (not visible because the pop-up overlaps it; you can see it in the next image), so the text here is black for now. But see the next image: the variable fileName is not used any more and the debug pointer is at the end of the method, so the text has become blue. Even the console window text changes according to whether the variable is still in use.

Reference to first image console screen:

Reference to second image console screen:
<!DOCTYPE html> <html xmlns="http://www.w3.org/1999/xhtml" lang="en" xml:lang="en"><head> <meta charset="utf-8"> <meta name="generator" content="quarto-1.0.37"> <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=yes"> <meta name="author" content="Ed Henry"> <meta name="dcterms.date" content="2017-01-16"> <title>edhenry.github.io - Algorithmic Toolbox - Week 2</title> <style> code{white-space: pre-wrap;} span.smallcaps{font-variant: small-caps;} span.underline{text-decoration: underline;} div.column{display: inline-block; vertical-align: top; width: 50%;} div.hanging-indent{margin-left: 1.5em; text-indent: -1.5em;} ul.task-list{list-style: none;} pre > code.sourceCode { white-space: pre; position: relative; } pre > code.sourceCode > span { display: inline-block; line-height: 1.25; } pre > code.sourceCode > span:empty { height: 1.2em; } .sourceCode { overflow: visible; } code.sourceCode > span { color: inherit; text-decoration: inherit; } div.sourceCode { margin: 1em 0; } pre.sourceCode { margin: 0; } @media screen { div.sourceCode { overflow: auto; } } @media print { pre > code.sourceCode { white-space: pre-wrap; } pre > code.sourceCode > span { text-indent: -5em; padding-left: 5em; } } pre.numberSource code { counter-reset: source-line 0; } pre.numberSource code > span { position: relative; left: -4em; counter-increment: source-line; } pre.numberSource code > span > a:first-child::before { content: counter(source-line); position: relative; left: -1em; text-align: right; vertical-align: baseline; border: none; display: inline-block; -webkit-touch-callout: none; -webkit-user-select: none; -khtml-user-select: none; -moz-user-select: none; -ms-user-select: none; user-select: none; padding: 0 4px; width: 4em; color: #aaaaaa; } pre.numberSource { margin-left: 3em; border-left: 1px solid #aaaaaa; padding-left: 4px; } div.sourceCode { } @media screen { pre > code.sourceCode > span > a:first-child::before { text-decoration: underline; } } 
code span.al { color: #ff0000; font-weight: bold; } /* Alert */ code span.an { color: #60a0b0; font-weight: bold; font-style: italic; } /* Annotation */ code span.at { color: #7d9029; } /* Attribute */ code span.bn { color: #40a070; } /* BaseN */ code span.bu { } /* BuiltIn */ code span.cf { color: #007020; font-weight: bold; } /* ControlFlow */ code span.ch { color: #4070a0; } /* Char */ code span.cn { color: #880000; } /* Constant */ code span.co { color: #60a0b0; font-style: italic; } /* Comment */ code span.cv { color: #60a0b0; font-weight: bold; font-style: italic; } /* CommentVar */ code span.do { color: #ba2121; font-style: italic; } /* Documentation */ code span.dt { color: #902000; } /* DataType */ code span.dv { color: #40a070; } /* DecVal */ code span.er { color: #ff0000; font-weight: bold; } /* Error */ code span.ex { } /* Extension */ code span.fl { color: #40a070; } /* Float */ code span.fu { color: #06287e; } /* Function */ code span.im { } /* Import */ code span.in { color: #60a0b0; font-weight: bold; font-style: italic; } /* Information */ code span.kw { color: #007020; font-weight: bold; } /* Keyword */ code span.op { color: #666666; } /* Operator */ code span.ot { color: #007020; } /* Other */ code span.pp { color: #bc7a00; } /* Preprocessor */ code span.sc { color: #4070a0; } /* SpecialChar */ code span.ss { color: #bb6688; } /* SpecialString */ code span.st { color: #4070a0; } /* String */ code span.va { color: #19177c; } /* Variable */ code span.vs { color: #4070a0; } /* VerbatimString */ code span.wa { color: #60a0b0; font-weight: bold; font-style: italic; } /* Warning */ </style> <script src="../../site_libs/quarto-nav/quarto-nav.js"></script> <script src="../../site_libs/quarto-nav/headroom.min.js"></script> <script src="../../site_libs/clipboard/clipboard.min.js"></script> <script src="../../site_libs/quarto-search/autocomplete.umd.js"></script> <script src="../../site_libs/quarto-search/fuse.min.js"></script> <script 
src="../../site_libs/quarto-search/quarto-search.js"></script> <meta name="quarto:offset" content="../../"> <script src="../../site_libs/quarto-html/quarto.js"></script> <script src="../../site_libs/quarto-html/popper.min.js"></script> <script src="../../site_libs/quarto-html/tippy.umd.min.js"></script> <script src="../../site_libs/quarto-html/anchor.min.js"></script> <link href="../../site_libs/quarto-html/tippy.css" rel="stylesheet"> <link href="../../site_libs/quarto-html/quarto-syntax-highlighting.css" rel="stylesheet" id="quarto-text-highlighting-styles"> <script src="../../site_libs/bootstrap/bootstrap.min.js"></script> <link href="../../site_libs/bootstrap/bootstrap-icons.css" rel="stylesheet"> <link href="../../site_libs/bootstrap/bootstrap.min.css" rel="stylesheet" id="quarto-bootstrap" data-mode="light"> <script id="quarto-search-options" type="application/json">{ "location": "navbar", "copy-button": false, "collapse-after": 3, "panel-placement": "end", "type": "overlay", "limit": 20, "language": { "search-no-results-text": "No results", "search-matching-documents-text": "matching documents", "search-copy-link-title": "Copy link to search", "search-hide-matches-text": "Hide additional matches", "search-more-match-text": "more match in this document", "search-more-matches-text": "more matches in this document", "search-clear-button-title": "Clear", "search-detached-cancel-button-title": "Cancel", "search-submit-button-title": "Submit" } }</script> <script src="https://cdn.jsdelivr.net/npm/mathjax@3/es5/tex-chtml-full.js" type="text/javascript"></script> <link rel="stylesheet" href="../../styles.css"> </head> <body class="nav-fixed fullcontent"> <div id="quarto-search-results"></div> <header id="quarto-header" class="headroom fixed-top"> <nav class="navbar navbar-expand-lg navbar-dark "> <div class="navbar-container container-fluid"> <a class="navbar-brand" href="../../index.html"> <span class="navbar-title">edhenry.github.io</span> </a> <button 
class="navbar-toggler" type="button" data-bs-toggle="collapse" data-bs-target="#navbarCollapse" aria-controls="navbarCollapse" aria-expanded="false" aria-label="Toggle navigation" onclick="if (window.quartoToggleHeadroom) { window.quartoToggleHeadroom(); }"> <span class="navbar-toggler-icon"></span> </button> <div class="collapse navbar-collapse" id="navbarCollapse"> <ul class="navbar-nav navbar-nav-scroll ms-auto"> <li class="nav-item"> <a class="nav-link" href="../../about.html">About</a> </li> <li class="nav-item compact"> <a class="nav-link" href="https://github.com/edhenry"><i class="bi bi-github" role="img"> </i> </a> </li> <li class="nav-item compact"> <a class="nav-link" href="https://twitter.com/edhenry_"><i class="bi bi-twitter" role="img"> </i> </a> </li> </ul> <div id="quarto-search" class="" title="Search"></div> </div> <!-- /navcollapse --> </div> <!-- /container-fluid --> </nav> </header> <!-- content --> <header id="title-block-header" class="quarto-title-block default page-columns page-full"> <div class="quarto-title-banner page-columns page-full"> <div class="quarto-title column-body"> <h1 class="title">Algorithmic Toolbox - Week 2</h1> <div class="quarto-categories"> <div class="quarto-category">python</div> <div class="quarto-category">programming</div> <div class="quarto-category">algorithms</div> </div> </div> </div> <div class="quarto-title-meta"> <div> <div class="quarto-title-meta-heading">Author</div> <div class="quarto-title-meta-contents"> <p>Ed Henry </p> </div> </div> <div> <div class="quarto-title-meta-heading">Published</div> <div class="quarto-title-meta-contents"> <p class="date">January 16, 2017</p> </div> </div> </div> </header><div id="quarto-content" class="quarto-container page-columns page-rows-contents page-layout-article page-navbar"> <!-- sidebar --> <!-- margin-sidebar --> <!-- main --> <main class="content quarto-banner-title-block" id="quarto-document-content"> <p>As a refresher, I've started working through the <a 
href="https://www.coursera.org/learn/algorithmic-toolbox">Algorithmic Toolbox</a> course offered on <a href="https://www.coursera.org/">Coursera</a>. It’s been a while since I’ve reviewed a lot of the basic algorithms and data structures fundamentals, so I figured I would work through the course to grease the bearings again, so to speak.</p> <p>That said, this is a notebook that covers some of the concepts and programming assignments in Week 2 of the course. I will try to post most of the stuff I review and examples I work through for anyone who may find it interesting and useful.</p> <div class="sourceCode" id="cb1"><pre class="sourceCode python code-with-copy"><code class="sourceCode python"><span id="cb1-1"><a href="#cb1-1" aria-hidden="true" tabindex="-1"></a><span class="im">import</span> numpy <span class="im">as</span> np</span> <span id="cb1-2"><a href="#cb1-2" aria-hidden="true" tabindex="-1"></a><span class="im">import</span> matplotlib.pyplot <span class="im">as</span> plt</span> <span id="cb1-3"><a href="#cb1-3" aria-hidden="true" tabindex="-1"></a><span class="im">import</span> timeit</span> <span id="cb1-4"><a href="#cb1-4" aria-hidden="true" tabindex="-1"></a></span> <span id="cb1-5"><a href="#cb1-5" aria-hidden="true" tabindex="-1"></a><span class="op">%</span>matplotlib inline</span></code><button title="Copy to Clipboard" class="code-copy-button"><i class="bi"></i></button></pre></div> <section id="bigo-notation" class="level4"> <h4 class="anchored" data-anchor-id="bigo-notation">BigO Notation</h4> <p>When working with algorithms, it’s typical to measure their “time” not as a function of how long an algorithm takes to run according to a wall clock, but rather as a function of the size of the algorithm’s input. This is called the <strong>rate of growth</strong> of the running time.</p> <p>When utilizing BigO notation we can distill the “most important” parts and cast out the less important parts.
We can see this by looking at the <code>ex_run_time</code> function we’ve defined below. This imaginary algorithm runtime, <span class="math inline">\(6n^{2}+100n+300\)</span>, takes that many machine instructions to execute. Again, this is an example.</p> <p>In the example below, we’ve defined two functions that calculate this imaginary runtime according to a user-defined input size. We’re going to use this illustration to show that the upper bound on this execution time for this algorithm is defined by the <span class="math inline">\(n^2\)</span> portion of the imaginary runtime of <span class="math inline">\(6n^2 + 100n + 300\)</span>.</p> <div class="sourceCode" id="cb2"><pre class="sourceCode python code-with-copy"><code class="sourceCode python"><span id="cb2-1"><a href="#cb2-1" aria-hidden="true" tabindex="-1"></a><span class="kw">def</span> ex_run_time(coef, n):</span> <span id="cb2-2"><a href="#cb2-2" aria-hidden="true" tabindex="-1"></a>    rt <span class="op">=</span> []</span> <span id="cb2-3"><a href="#cb2-3" aria-hidden="true" tabindex="-1"></a>    <span class="cf">for</span> i <span class="kw">in</span> <span class="bu">range</span>(n):</span> <span id="cb2-4"><a href="#cb2-4" aria-hidden="true" tabindex="-1"></a>        rt.append((i, (coef<span class="op">*</span>(i<span class="op">**</span><span class="dv">2</span>))))</span> <span id="cb2-5"><a href="#cb2-5" aria-hidden="true" tabindex="-1"></a>    <span class="cf">return</span> rt</span> <span id="cb2-6"><a href="#cb2-6" aria-hidden="true" tabindex="-1"></a></span> <span id="cb2-7"><a href="#cb2-7" aria-hidden="true" tabindex="-1"></a><span class="kw">def</span> decomp_run_time(coef_a, coef_b, n):</span> <span id="cb2-8"><a href="#cb2-8" aria-hidden="true" tabindex="-1"></a>    rt <span class="op">=</span> []</span> <span id="cb2-9"><a href="#cb2-9" aria-hidden="true" tabindex="-1"></a>    <span class="cf">for</span> i <span class="kw">in</span> <span class="bu">range</span>(n):</span> <span id="cb2-10"><a href="#cb2-10"
aria-hidden="true" tabindex="-1"></a> rt.append((i, ((coef_a<span class="op">*</span>i) <span class="op">+</span> coef_b)))</span> <span id="cb2-11"><a href="#cb2-11" aria-hidden="true" tabindex="-1"></a> <span class="cf">return</span> rt</span></code><button title="Copy to Clipboard" class="code-copy-button"><i class="bi"></i></button></pre></div> <div class="sourceCode" id="cb3"><pre class="sourceCode python code-with-copy"><code class="sourceCode python"><span id="cb3-1"><a href="#cb3-1" aria-hidden="true" tabindex="-1"></a>ex_rt <span class="op">=</span> ex_run_time(<span class="dv">6</span>, <span class="dv">100</span>)</span> <span id="cb3-2"><a href="#cb3-2" aria-hidden="true" tabindex="-1"></a>decomp_rt <span class="op">=</span> decomp_run_time(<span class="dv">100</span>, <span class="dv">300</span>, <span class="dv">100</span>)</span></code><button title="Copy to Clipboard" class="code-copy-button"><i class="bi"></i></button></pre></div> <div class="sourceCode" id="cb4"><pre class="sourceCode python code-with-copy"><code class="sourceCode python"><span id="cb4-1"><a href="#cb4-1" aria-hidden="true" tabindex="-1"></a>line1_plt <span class="op">=</span> plt.plot(ex_rt, label<span class="op">=</span><span class="st">'Line 1'</span>, linewidth<span class="op">=</span><span class="dv">2</span>)</span> <span id="cb4-2"><a href="#cb4-2" aria-hidden="true" tabindex="-1"></a>line2_plt <span class="op">=</span> plt.plot(decomp_rt, <span class="st">'b'</span>, label<span class="op">=</span><span class="st">'Line 2'</span>, linewidth<span class="op">=</span><span class="dv">2</span>)</span> <span id="cb4-3"><a href="#cb4-3" aria-hidden="true" tabindex="-1"></a>plt.legend([<span class="st">'100n + 300'</span>,<span class="st">'6n^2'</span>], loc<span class="op">=</span><span class="dv">2</span>)</span> <span id="cb4-4"><a href="#cb4-4" aria-hidden="true" tabindex="-1"></a>plt.show()</span></code><button title="Copy to Clipboard" class="code-copy-button"><i 
class="bi"></i></button></pre></div> <div id="fig-output-4" class="quarto-figure quarto-figure-center anchored"> <figure class="figure"> <div class="quarto-figure quarto-figure-center"> <figure class="figure"> <p><img src="./images/output_4_0.png" class="img-fluid figure-img"></p> <p></p><figcaption class="figure-caption">png</figcaption><p></p> </figure> </div> <p></p><figcaption class="figure-caption">Figure 1: <strong>?(caption)</strong></figcaption><p></p> </figure> </div> <p>Looking at the graph above, we can see that the runtime of the <span class="math inline">\(6n^2\)</span> portion of the runtime dominates the total runtime of the algorithm, overall. Using this assumption, when working with BigO notation, we can drop the <span class="math inline">\(100n+300\)</span> portion of the runtime complexity, as we're working against the squared element of the overall runtime. Looking at the graph, we also see that the runtime complexity for the <span class="math inline">\(n^2\)</span> term of our algorithm intersects the line for the other terms, as well. 
But the safe assumption here is that this algorithm's complexity will be overall dominated by the squared term, in any reasonable size input.</p> <p>We can even scale the coefficients of the imaginary complexity to prove that this intersection won't shift much and we'll still be bounded by the squared term.</p> <div class="sourceCode" id="cb5"><pre class="sourceCode python code-with-copy"><code class="sourceCode python"><span id="cb5-1"><a href="#cb5-1" aria-hidden="true" tabindex="-1"></a>scaled_ex_rt <span class="op">=</span> ex_run_time(<span class="fl">0.6</span>, <span class="dv">2500</span>)</span> <span id="cb5-2"><a href="#cb5-2" aria-hidden="true" tabindex="-1"></a>scaled_decomp_rt <span class="op">=</span> decomp_run_time(<span class="dv">1000</span>, <span class="dv">3000</span>, <span class="dv">2500</span>)</span></code><button title="Copy to Clipboard" class="code-copy-button"><i class="bi"></i></button></pre></div> <div class="sourceCode" id="cb6"><pre class="sourceCode python code-with-copy"><code class="sourceCode python"><span id="cb6-1"><a href="#cb6-1" aria-hidden="true" tabindex="-1"></a>_scaled_line1_plt <span class="op">=</span> plt.plot(scaled_ex_rt, label<span class="op">=</span><span class="st">'Line 1'</span>, linewidth<span class="op">=</span><span class="dv">2</span>)</span> <span id="cb6-2"><a href="#cb6-2" aria-hidden="true" tabindex="-1"></a>_scaled_line2_plt <span class="op">=</span> plt.plot(scaled_decomp_rt, <span class="st">'b'</span>, label<span class="op">=</span><span class="st">'Line 2'</span>, linewidth<span class="op">=</span><span class="dv">2</span>)</span> <span id="cb6-3"><a href="#cb6-3" aria-hidden="true" tabindex="-1"></a>plt.legend([<span class="st">'1000n + 3000'</span>,<span class="st">'0.6n^2'</span>], loc<span class="op">=</span><span class="dv">2</span>)</span> <span id="cb6-4"><a href="#cb6-4" aria-hidden="true" tabindex="-1"></a>plt.show()</span></code><button title="Copy to Clipboard" 
class="code-copy-button"><i class="bi"></i></button></pre></div> <div id="fig-output-7" class="quarto-figure quarto-figure-center anchored"> <figure class="figure"> <p><img src="./images/output_7_0.png" class="img-fluid figure-img"></p> <p></p><figcaption class="figure-caption">Figure 2: <strong>?(caption)</strong></figcaption><p></p> </figure> </div> <p>Checking out the graph above, we see that the size of the input was able to be scaled pretty considerably. But we can also see that around the input size of 1650, we still end up losing out to the squared term in the runtime of our algorithm. Using this logic, for general purposes, for any reasonable input we can define the runtime complexity of this algorithm as <span class="math inline">\(n^2\)</span>.</p> </section> <section id="generator-methods" class="level4"> <h4 class="anchored" data-anchor-id="generator-methods">Generator methods</h4> <div class="sourceCode" id="cb7"><pre class="sourceCode python code-with-copy"><code class="sourceCode python"><span id="cb7-1"><a href="#cb7-1" aria-hidden="true" tabindex="-1"></a><span class="kw">def</span> generate_seq(start, stop, size):</span> <span id="cb7-2"><a href="#cb7-2" aria-hidden="true" tabindex="-1"></a>    <span class="co">'''</span></span> <span id="cb7-3"><a href="#cb7-3" aria-hidden="true" tabindex="-1"></a><span class="co">    Generate a sequence of integers useful in testing the functions below</span></span> <span id="cb7-4"><a href="#cb7-4" aria-hidden="true" tabindex="-1"></a><span class="co">    '''</span></span> <span id="cb7-5"><a href="#cb7-5" aria-hidden="true" tabindex="-1"></a>    </span> <span id="cb7-6"><a href="#cb7-6" aria-hidden="true" tabindex="-1"></a>    <span class="cf">return</span> np.random.randint(low<span class="op">=</span>start, high<span class="op">=</span>stop <span class="op">+</span> <span class="dv">1</span>, size<span class="op">=</span>size)</span> <span id="cb7-7"><a href="#cb7-7" aria-hidden="true" tabindex="-1"></a>    </span></code><button title="Copy to Clipboard"
class="code-copy-button"><i class="bi"></i></button></pre></div> </section> <section id="programming-assignments" class="level4"> <h4 class="anchored" data-anchor-id="programming-assignments">Programming Assignments</h4> <div class="sourceCode" id="cb8"><pre class="sourceCode python code-with-copy"><code class="sourceCode python"><span id="cb8-1"><a href="#cb8-1" aria-hidden="true" tabindex="-1"></a><span class="kw">def</span> calc_fib(n):</span> <span id="cb8-2"><a href="#cb8-2" aria-hidden="true" tabindex="-1"></a>    <span class="co">'''</span></span> <span id="cb8-3"><a href="#cb8-3" aria-hidden="true" tabindex="-1"></a><span class="co">    Task : Given n, compute the nth Fibonacci number F_n</span></span> <span id="cb8-4"><a href="#cb8-4" aria-hidden="true" tabindex="-1"></a><span class="co">    </span></span> <span id="cb8-5"><a href="#cb8-5" aria-hidden="true" tabindex="-1"></a><span class="co">    Input : Single integer n</span></span> <span id="cb8-6"><a href="#cb8-6" aria-hidden="true" tabindex="-1"></a><span class="co">    </span></span> <span id="cb8-7"><a href="#cb8-7" aria-hidden="true" tabindex="-1"></a><span class="co">    Constraints : 0 \le n \le 45</span></span> <span id="cb8-8"><a href="#cb8-8" aria-hidden="true" tabindex="-1"></a><span class="co">    </span></span> <span id="cb8-9"><a href="#cb8-9" aria-hidden="true" tabindex="-1"></a><span class="co">    Output : F_n</span></span> <span id="cb8-10"><a href="#cb8-10" aria-hidden="true" tabindex="-1"></a><span class="co">    </span></span> <span id="cb8-11"><a href="#cb8-11" aria-hidden="true" tabindex="-1"></a><span class="co">    '''</span></span> <span id="cb8-12"><a href="#cb8-12" aria-hidden="true" tabindex="-1"></a>    int_a <span class="op">=</span> <span class="dv">0</span></span> <span id="cb8-13"><a href="#cb8-13" aria-hidden="true" tabindex="-1"></a>    int_b <span class="op">=</span> <span class="dv">1</span></span> <span id="cb8-14"><a href="#cb8-14" aria-hidden="true"
tabindex="-1"></a> </span> <span id="cb8-15"><a href="#cb8-15" aria-hidden="true" tabindex="-1"></a> <span class="cf">if</span> n <span class="op">&lt;=</span> <span class="dv">1</span>:</span> <span id="cb8-16"><a href="#cb8-16" aria-hidden="true" tabindex="-1"></a> <span class="cf">return</span> n</span> <span id="cb8-17"><a href="#cb8-17" aria-hidden="true" tabindex="-1"></a> </span> <span id="cb8-18"><a href="#cb8-18" aria-hidden="true" tabindex="-1"></a> <span class="cf">elif</span> n <span class="op">&gt;=</span> <span class="dv">0</span> <span class="kw">and</span> n <span class="op">&lt;=</span> <span class="dv">45</span>:</span> <span id="cb8-19"><a href="#cb8-19" aria-hidden="true" tabindex="-1"></a> fib_int <span class="op">=</span> calc_fib(n<span class="op">-</span><span class="dv">1</span>) <span class="op">+</span> calc_fib(n<span class="op">-</span><span class="dv">2</span>)</span> <span id="cb8-20"><a href="#cb8-20" aria-hidden="true" tabindex="-1"></a> <span class="cf">return</span> fib_int</span> <span id="cb8-21"><a href="#cb8-21" aria-hidden="true" tabindex="-1"></a> </span> <span id="cb8-22"><a href="#cb8-22" aria-hidden="true" tabindex="-1"></a> <span class="cf">else</span>:</span> <span id="cb8-23"><a href="#cb8-23" aria-hidden="true" tabindex="-1"></a> <span class="bu">print</span>(<span class="st">"</span><span class="sc">%s</span><span class="st"> is out of range. 
Please try an integer between 0 and 45."</span> <span class="op">%</span> n)</span></code><button title="Copy to Clipboard" class="code-copy-button"><i class="bi"></i></button></pre></div> <div class="sourceCode" id="cb9"><pre class="sourceCode python code-with-copy"><code class="sourceCode python"><span id="cb9-1"><a href="#cb9-1" aria-hidden="true" tabindex="-1"></a><span class="op">%%</span>time </span> <span id="cb9-2"><a href="#cb9-2" aria-hidden="true" tabindex="-1"></a><span class="cf">for</span> i <span class="kw">in</span> generate_seq(<span class="dv">1</span>,<span class="dv">2000</span>,<span class="dv">1</span>):</span> <span id="cb9-3"><a href="#cb9-3" aria-hidden="true" tabindex="-1"></a> calc_fib(<span class="dv">10</span>)</span></code><button title="Copy to Clipboard" class="code-copy-button"><i class="bi"></i></button></pre></div> <pre><code>CPU times: user 10 ms, sys: 0 ns, total: 10 ms Wall time: 1.52 ms</code></pre> <div class="sourceCode" id="cb11"><pre class="sourceCode python code-with-copy"><code class="sourceCode python"><span id="cb11-1"><a href="#cb11-1" aria-hidden="true" tabindex="-1"></a><span class="kw">def</span> get_fibonacci_last_digit(n):</span> <span id="cb11-2"><a href="#cb11-2" aria-hidden="true" tabindex="-1"></a> <span class="co">'''</span></span> <span id="cb11-3"><a href="#cb11-3" aria-hidden="true" tabindex="-1"></a><span class="co"> Task : Given n, find the last digit of the nth Fibonacci number F_n</span></span> <span id="cb11-4"><a href="#cb11-4" aria-hidden="true" tabindex="-1"></a><span class="co"> </span></span> <span id="cb11-5"><a href="#cb11-5" aria-hidden="true" tabindex="-1"></a><span class="co"> Input : Single integer n</span></span> <span id="cb11-6"><a href="#cb11-6" aria-hidden="true" tabindex="-1"></a><span class="co"> </span></span> <span id="cb11-7"><a href="#cb11-7" aria-hidden="true" tabindex="-1"></a><span class="co"> Constraints : 0 \le n \le 10**7</span></span> <span id="cb11-8"><a href="#cb11-8"
aria-hidden="true" tabindex="-1"></a><span class="co"> </span></span> <span id="cb11-9"><a href="#cb11-9" aria-hidden="true" tabindex="-1"></a><span class="co"> Output : Last digit of F_n</span></span> <span id="cb11-10"><a href="#cb11-10" aria-hidden="true" tabindex="-1"></a><span class="co"> </span></span> <span id="cb11-11"><a href="#cb11-11" aria-hidden="true" tabindex="-1"></a><span class="co"> '''</span></span> <span id="cb11-12"><a href="#cb11-12" aria-hidden="true" tabindex="-1"></a> fib_array <span class="op">=</span> np.zeros(shape<span class="op">=</span>n, dtype<span class="op">=</span><span class="bu">int</span>)</span> <span id="cb11-13"><a href="#cb11-13" aria-hidden="true" tabindex="-1"></a> fib_array[<span class="dv">0</span>] <span class="op">=</span> <span class="bu">int</span>(<span class="dv">0</span>)</span> <span id="cb11-14"><a href="#cb11-14" aria-hidden="true" tabindex="-1"></a> fib_array[<span class="dv">1</span>] <span class="op">=</span> <span class="bu">int</span>(<span class="dv">1</span>)</span> <span id="cb11-15"><a href="#cb11-15" aria-hidden="true" tabindex="-1"></a> </span> <span id="cb11-16"><a href="#cb11-16" aria-hidden="true" tabindex="-1"></a> <span class="cf">if</span> n <span class="op">&gt;=</span> <span class="dv">0</span> <span class="kw">and</span> n <span class="op">&lt;=</span> <span class="dv">10</span><span class="op">**</span><span class="dv">7</span>:</span> <span id="cb11-17"><a href="#cb11-17" aria-hidden="true" tabindex="-1"></a> counter <span class="op">=</span> <span class="dv">2</span></span> <span id="cb11-18"><a href="#cb11-18" aria-hidden="true" tabindex="-1"></a> <span class="cf">for</span> i <span class="kw">in</span> fib_array[<span class="dv">2</span>:n]:</span> <span id="cb11-19"><a href="#cb11-19" aria-hidden="true" tabindex="-1"></a> fib_array[counter] <span class="op">=</span> ((fib_array[counter<span class="op">-</span><span class="dv">1</span>] <span class="op">%</span> <span 
class="dv">10</span>) <span class="op">+</span> (fib_array[counter<span class="op">-</span><span class="dv">2</span>] <span class="op">%</span> <span class="dv">10</span>))</span> <span id="cb11-20"><a href="#cb11-20" aria-hidden="true" tabindex="-1"></a> counter <span class="op">+=</span> <span class="dv">1</span></span> <span id="cb11-21"><a href="#cb11-21" aria-hidden="true" tabindex="-1"></a> <span class="cf">return</span> ((fib_array[counter<span class="op">-</span><span class="dv">1</span>] <span class="op">%</span> <span class="dv">10</span>) <span class="op">+</span> (fib_array[counter<span class="op">-</span><span class="dv">2</span>] <span class="op">%</span> <span class="dv">10</span>)) <span class="op">%</span> <span class="dv">10</span></span> <span id="cb11-22"><a href="#cb11-22" aria-hidden="true" tabindex="-1"></a> <span class="cf">else</span>:</span> <span id="cb11-23"><a href="#cb11-23" aria-hidden="true" tabindex="-1"></a> <span class="bu">print</span>(<span class="st">"</span><span class="sc">%s</span><span class="st"> is out of range. 
Please try an integer between 0 and 10,000,000."</span> <span class="op">%</span> n)</span></code><button title="Copy to Clipboard" class="code-copy-button"><i class="bi"></i></button></pre></div> <div class="sourceCode" id="cb12"><pre class="sourceCode python code-with-copy"><code class="sourceCode python"><span id="cb12-1"><a href="#cb12-1" aria-hidden="true" tabindex="-1"></a><span class="op">%</span>time</span> <span id="cb12-2"><a href="#cb12-2" aria-hidden="true" tabindex="-1"></a><span class="bu">print</span>(get_fibonacci_last_digit(<span class="dv">3</span>))</span> <span id="cb12-3"><a href="#cb12-3" aria-hidden="true" tabindex="-1"></a><span class="bu">print</span>(get_fibonacci_last_digit(<span class="dv">331</span>))</span> <span id="cb12-4"><a href="#cb12-4" aria-hidden="true" tabindex="-1"></a><span class="bu">print</span>(get_fibonacci_last_digit(<span class="dv">327305</span>))</span></code><button title="Copy to Clipboard" class="code-copy-button"><i class="bi"></i></button></pre></div> <pre><code>CPU times: user 0 ns, sys: 0 ns, total: 0 ns Wall time: 18.8 µs 2 9 5</code></pre> </section> <section id="greatest-common-divisor" class="level4"> <h4 class="anchored" data-anchor-id="greatest-common-divisor">Greatest common divisor</h4> <div class="sourceCode" id="cb14"><pre class="sourceCode python code-with-copy"><code class="sourceCode python"><span id="cb14-1"><a href="#cb14-1" aria-hidden="true" tabindex="-1"></a><span class="kw">def</span> euclidean_gcd(a,b):</span> <span id="cb14-2"><a href="#cb14-2" aria-hidden="true" tabindex="-1"></a> <span class="cf">if</span> b <span class="op">==</span> <span class="dv">0</span>:</span> <span id="cb14-3"><a href="#cb14-3" aria-hidden="true" tabindex="-1"></a> <span class="cf">return</span> a</span> <span id="cb14-4"><a href="#cb14-4" aria-hidden="true" tabindex="-1"></a> a_prime <span class="op">=</span> a <span class="op">%</span> b</span> <span id="cb14-5"><a href="#cb14-5" aria-hidden="true" 
tabindex="-1"></a> <span class="cf">return</span>(euclidean_gcd(b,a_prime))</span></code><button title="Copy to Clipboard" class="code-copy-button"><i class="bi"></i></button></pre></div> <div class="sourceCode" id="cb15"><pre class="sourceCode python code-with-copy"><code class="sourceCode python"><span id="cb15-1"><a href="#cb15-1" aria-hidden="true" tabindex="-1"></a><span class="op">%%</span>time</span> <span id="cb15-2"><a href="#cb15-2" aria-hidden="true" tabindex="-1"></a><span class="bu">print</span>(euclidean_gcd(<span class="dv">18</span>,<span class="dv">35</span>))</span> <span id="cb15-3"><a href="#cb15-3" aria-hidden="true" tabindex="-1"></a><span class="bu">print</span>(euclidean_gcd(<span class="dv">28851538</span>, <span class="dv">1183019</span>))</span></code><button title="Copy to Clipboard" class="code-copy-button"><i class="bi"></i></button></pre></div> <pre><code>1 17657 CPU times: user 10 ms, sys: 0 ns, total: 10 ms Wall time: 981 µs</code></pre> </section> <section id="least-common-multiple" class="level4"> <h4 class="anchored" data-anchor-id="least-common-multiple">Least common multiple</h4> <div class="sourceCode" id="cb17"><pre class="sourceCode python code-with-copy"><code class="sourceCode python"><span id="cb17-1"><a href="#cb17-1" aria-hidden="true" tabindex="-1"></a><span class="kw">def</span> lcm(a, b):</span> <span id="cb17-2"><a href="#cb17-2" aria-hidden="true" tabindex="-1"></a> <span class="cf">if</span> a <span class="op">&gt;=</span> <span class="dv">1</span> <span class="kw">and</span> a <span class="op">&lt;=</span> (<span class="dv">2</span><span class="op">*</span>(<span class="dv">10</span><span class="op">**</span><span class="dv">9</span>)) <span class="kw">and</span> b <span class="op">&gt;=</span> <span class="dv">1</span> <span class="kw">and</span> b <span class="op">&lt;=</span> (<span class="dv">2</span><span class="op">*</span>(<span class="dv">10</span><span class="op">**</span><span 
class="dv">9</span>)):</span> <span id="cb17-3"><a href="#cb17-3" aria-hidden="true" tabindex="-1"></a> <span class="cf">return</span> (a<span class="op">*</span>b) <span class="op">//</span> euclidean_gcd(a,b)</span> <span id="cb17-4"><a href="#cb17-4" aria-hidden="true" tabindex="-1"></a> <span class="cf">else</span>:</span> <span id="cb17-5"><a href="#cb17-5" aria-hidden="true" tabindex="-1"></a> <span class="bu">print</span>(<span class="st">"something is wrong"</span>)</span></code><button title="Copy to Clipboard" class="code-copy-button"><i class="bi"></i></button></pre></div> <div class="sourceCode" id="cb18"><pre class="sourceCode python code-with-copy"><code class="sourceCode python"><span id="cb18-1"><a href="#cb18-1" aria-hidden="true" tabindex="-1"></a><span class="op">%</span>time</span> <span id="cb18-2"><a href="#cb18-2" aria-hidden="true" tabindex="-1"></a><span class="bu">print</span>(lcm(<span class="dv">6</span>,<span class="dv">8</span>))</span> <span id="cb18-3"><a href="#cb18-3" aria-hidden="true" tabindex="-1"></a><span class="bu">print</span>(lcm(<span class="dv">28851538</span>, <span class="dv">1183019</span>))</span></code><button title="Copy to Clipboard" class="code-copy-button"><i class="bi"></i></button></pre></div> <pre><code>CPU times: user 0 ns, sys: 0 ns, total: 0 ns Wall time: 15 µs 24 1933053046</code></pre> </section> </main> <!-- /main --> </div> <!-- /content --> </body></html>
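The last-digit routine above walks all n terms. Because Fibonacci last digits repeat with period 60 (the Pisano period for modulus 10), the same answer is available in constant memory and at most 60 additions. A sketch of that shortcut (written in JavaScript here for consistency with the later snippets; the function name is ours):

```javascript
// Last digit of F(n): F(n) mod 10 repeats every 60 terms (Pisano period).
function fibLastDigit(n) {
  const r = n % 60 // reduce n into one period
  let a = 0        // F(0) mod 10
  let b = 1        // F(1) mod 10
  for (let i = 0; i < r; i++) {
    ;[a, b] = [b, (a + b) % 10]
  }
  return a
}

console.log(fibLastDigit(3))      // → 2
console.log(fibLastDigit(331))    // → 9
console.log(fibLastDigit(327305)) // → 5
```

The three printed values match the sample outputs shown in the notebook above.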
Interview with Jeff Luhnow

Note: Our buddy and sometime-contributor @KevinBassStache had a chance to talk to Astros GM Jeff Luhnow last week. Here's the transcript.

It was a blustery and cold Thursday when the Houston Astros Caravan blew into Austin as part of their annual winter caravan. The first stop was at Auditorium Shores, near downtown, for the ceremonial raising of an Astros flag to declare the city as "Astros Country." Current (Jason Castro), former (Jeff Kent), and future players (George Springer) were on hand plus Astros broadcaster Alan Ashby and team president Reid Ryan. But the star of the show, at least for me, was Astros GM Jeff Luhnow. No other person, besides Astros owner Jim Crane, is more influential in this organization and few are as bright and interesting. I've thought awhile about what I'd like to ask Jeff Luhnow if I was given the chance and, as luck would have it, was fortunate enough to be able to ask him a handful of questions. Below is that exchange:

KBS: Mr. Luhnow, first off, thank you very much for your time. I want to start by asking about George Springer. If he has a big Spring Training this year, what are his chances of making the team?

Luhnow: Well, we want to field the best team possible. We have 64 players coming to camp and we expect to take the best 25 to Houston with us for opening day. We've got a lot of different permutations of that, but that's why we go there--to sort it all out. But I would say, given the success he experienced last year, George Springer is going to spend a significant portion of this year—if not all of it—in Houston and he's going to help us accomplish our goal to win a lot more games.

KBS: I was looking at the 40-man roster recently and, with the depth that's already there, it's going to be difficult to find space for 25 guys. Do you feel the team is done adding players, either on major or minor league deals, at this point in the winter?
Luhnow: We're still talking to a few free agents still out there and, whether or not something happens, to be determined. The work really never stops, even as we go into camp in just a few weeks' time. Injuries tend to play a factor, not just for us but with all other teams, and players come available. Also, players who haven't signed yet become a bit more willing to sign at your terms, so we're going to continue to be opportunistic. We do feel like we're good with what we have right now, but there's always an opportunity to improve and we're going to be looking for it at every turn.

KBS: This is your second year now as Astros GM and I'm wondering what some of the things are that, maybe, have been expected or unexpected about the job?

Luhnow: The challenge with a job like this is that you can't control the outcome. All you can do is bring in the players, the staff, and front office to put everything into position so you can make the best choices you can possibly make to increase your chances of winning as many games as possible. Going through a period at the end of last year, when we lost 15 straight games, if you play those games over and over again, very rarely would you lose all 15. There might be cases where you split, or win more, but that's something you can't really control and it's a little bit frustrating. All you can do is dust yourself off, find your weaknesses, and go into the off-season by adding to the bullpen, or another bat, or another member of the starting rotation. You cross your fingers and hope it works out.

KBS: As a follow-up to that, I feel like the Astros front office really valued the construction of the 2013 team (prior to the 2013 season), but it never quite came together. Are there things you see now, or multiple things, that caused the team not to achieve its goals?

Luhnow: Well, there was some variability in terms of performance.
We were counting on Lucas Harrell to have a year similar to what he did in 2012 and he wasn't able to repeat that and Altuve wasn't exactly the same sort of player he was the year before. But we had some up-ticks in performance as well and 2013, at the end of the day, was an opportunity for many of our young players to get significant exposure in critical situations. You know, Josh Fields had a good year as our closer. Josh Zeid and Kevin Chapman were put in situations where the game was on the line—that's a lot of pressure to put on young kids. Robbie Grossman was out there playing every day for a while, Jake Elmore was playing every day. We gave our young roster an opportunity to prove to us whether they belong and I think the answer, by the end of the year, was that some of these guys do belong. Matt Dominguez deserves to be the everyday third baseman, Jon Villar looks like he has the capability to be an everyday shortstop, so we discovered a lot more about guys we wanted to find out about. But at the same time, we didn't have the veteran presence last year. We tried it with Pena, Ankiel, Bedard, Veras and a couple of others, but we feel like the veterans we have for 2014 are a little bit better established and we are planning for them to stick around a little while. It sets a little better tone for them to be in that leadership role because they know they're going to be there for a few years.

KBS: Something I've been thinking about for a while: Why aren't there any sign-and-trade deals in the MLB? You look at a player like Kendrys Morales (a current free agent), who turned down a $14.1 million one-year arbitration offer even though he won't receive the same sort of annual average value in a long-term deal. For the team losing the player, they could have the opportunity to maybe obtain a higher-level minor-league player from the signing team who's already passed through a few levels.
Luhnow: It's a good question and you sometimes see it with free agents that are signed and then traded mid-season. Typically, if a team is interested in a player, they'd be active in signing him right upfront. But as the season develops, and teams sort themselves out into those that are and aren't contending, that's when the transactions tend to happen. You'll occasionally see a free agent signed and in spring training where he doesn't win the job and gets traded, but it doesn't happen as much as you think it might.

KBS: Do you think that's something that'll change?

Luhnow: It could. I think teams are getting a lot more creative in different ways of maximizing their return. Teams in the past few years, because of the collective bargaining agreement, have been creative about finding ways of bringing talent in because they're restricted in terms of the amateur and international world. To a certain extent, that's what we did the past couple of years. We were able to take major league players and convert them into prospects. Even with the limitations of the international draft, we were able to go from a bottom five to a top five farm system utilizing all those avenues at our disposal.

Labels: Caravan, Interviews, Jeff Luhnow, Kevin Bass Stache
You will love wearing these stylish camisole glitter singlet tops from Studio 7. Dancewear singlet tops come in aqua blue, hot pink or purple glitter, and the fabric blend makes them feel like a second skin. The dance tops hold their shape well and can be worn under other clothing or on their own, making them extra versatile. A must-have for any new or veteran dancer!
Our diverse range of AMIA Committees and Task Forces collaborate with colleagues from around the world. We create forums for exchange, and develop programs and projects that move us all forward in a changing field. AMIA was the first professional organization to offer scholarships specifically for students pursuing careers in moving image archives. Applications for scholarships and the IPI Internship program opened in January. The AMIA Supplier Directory is an international resource guide for anyone working with audiovisual media. Find a supplier, or list your own services or products. We update the directory on a quarterly basis. 2019 Scholarship and IPI Internship Application Deadline Extended to May 15th! AMIA believes that the education and training of moving image archivists is essential to the long-term survival of our moving image heritage. In addition, the Image Permanence Institute Internship in Preservation Research will offer a student who is committed to the preservation of moving images the opportunity to acquire practical experience in preservation research.
export const API = process.env.NODE_ENV === 'production'
  ? '/_api'
  : 'http://localhost:4000/_api';

export const getSiteMetaUrl = () => `${API}/site_meta`;
export const getConfigurationUrl = () => `${API}/configuration`;
export const putConfigurationUrl = () => `${API}/configuration`;

export const pagesAPIUrl = (directory = '') => `${API}/pages/${directory}`;
export const pageAPIUrl = (directory, filename) =>
  directory ? `${API}/pages/${directory}/${filename}` : `${API}/pages/${filename}`;

export const draftsAPIUrl = (directory = '') => `${API}/drafts/${directory}`;
export const draftAPIUrl = (directory, filename) =>
  directory ? `${API}/drafts/${directory}/${filename}` : `${API}/drafts/${filename}`;

export const collectionsAPIUrl = () => `${API}/collections`;
export const collectionAPIUrl = (collection_name, directory) =>
  directory
    ? `${API}/collections/${collection_name}/entries/${directory}`
    : `${API}/collections/${collection_name}/entries`;
export const documentAPIUrl = (collection_name, directory, filename) =>
  directory
    ? `${API}/collections/${collection_name}/${directory}/${filename}`
    : `${API}/collections/${collection_name}/${filename}`;

export const datafilesAPIUrl = (directory = '') => `${API}/data/${directory}`;
export const datafileAPIUrl = (directory, filename) =>
  directory ? `${API}/data/${directory}/${filename}` : `${API}/data/${filename}`;

export const staticfilesAPIUrl = (directory = '') => `${API}/static_files/${directory}`;
export const staticfileAPIUrl = (directory, filename) =>
  directory ? `${API}/static_files/${directory}/${filename}` : `${API}/static_files/${filename}`;
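A quick standalone exercise of the path-building pattern above. The dev base URL is hard-coded here as an assumption (it corresponds to the non-production branch), and the directory/filename values are made up:

```javascript
// Standalone sketch mirroring one of the builders above (not importing the module).
const API = 'http://localhost:4000/_api' // assumed dev base URL

const pageAPIUrl = (directory, filename) =>
  directory ? `${API}/pages/${directory}/${filename}` : `${API}/pages/${filename}`

console.log(pageAPIUrl('guides', 'intro.md'))
// → http://localhost:4000/_api/pages/guides/intro.md
console.log(pageAPIUrl('', 'about.md'))
// → http://localhost:4000/_api/pages/about.md
```

Note that an empty `directory` is falsy, so root-level files fall through to the shorter path.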
'use strict'

const fs = require('fs-extra')
const path = require('path')
const download = require('download')
const async = require('async')
const mh = require('media-helper')
const winston = require('winston')
const Media = require('../models/media')

let settings = {}
const Utils = require('../helpers/utils')
var controller = {}
const spacebroSettings = require('standard-settings').getSettings().service.spacebro

function notFound (id) {
  let error = 'Media not found'
  winston.warn(error, { id: id })
  return { error: error, id: id }
}

function emptyField (field, id) {
  let error = 'Empty field'
  winston.warn(error, { field: field, id: id })
  return { error: error, field: field, id: id }
}

function getMediaCount (stateFilter) {
  return new Promise((resolve, reject) => {
    var criteria = (stateFilter === undefined) ? {} : { state: stateFilter }
    Media.count(criteria, (err, count) => {
      if (err) {
        reject(err)
      } else {
        resolve(count)
      }
    })
  })
}

function getSettings (req, res) {
  res.json(settings)
}

function getCount (req, res) {
  res.contentType('text/plain')
  getMediaCount(req.query.state)
    .then((count) => res.send(count.toString()))
    .catch((error) => res.send(error))
}

function getFirst (req, res) {
  Media.findOne().sort({ uploadedAt: 1 }).exec((err, media) => {
    if (err) {
      winston.error(err)
      res.send(err)
    } else if (!media) {
      res.send(notFound())
    } else {
      res.json(media)
    }
  })
}

function getLast (req, res) {
  // Sort descending so the most recently uploaded media is returned
  // (the ascending sort here was a copy-paste of getFirst)
  Media.findOne().sort({ uploadedAt: -1 }).exec((err, media) => {
    if (err) {
      winston.error(err)
      res.send(err)
    } else if (!media) {
      res.send(notFound())
    } else {
      res.json(media)
    }
  })
}

function getMedia (req, res) {
  Media.findById(req.params.id, (err, media) => {
    if (err) {
      winston.error(err)
      res.send(err)
    } else if (!media) {
      res.send(notFound(req.params.id))
    } else {
      res.redirect(path.join('/static', media.path))
    }
  })
}

function getThumbnail (req, res) {
  Media.findById(req.params.id, (err, media) => {
    if (err) {
      winston.error(err)
      res.send(err)
    } else if (media) {
      if (media.details.thumbnail) {
        res.redirect(path.join('/static', media.details.thumbnail.url))
      } else {
        res.send(emptyField('details.thumbnail', req.params.id))
      }
    } else {
      res.send(notFound(req.params.id))
    }
  })
}

function getField (req, res) {
  Media.findById(req.params.id, (err, media) => {
    if (err) {
      winston.error(err)
      res.send(err)
    } else if (!media) {
      res.send(notFound(req.params.id))
    } else {
      if (media[req.params.field] !== undefined) {
        res.json(media[req.params.field])
      } else {
        res.send(emptyField(req.params.field, req.params.id))
      }
    }
  })
}

function postMedia (req, res) {
  var media = req.body.media
  var filename = req.body.filename
  // Return early on missing fields so we don't fall through and respond twice
  if (media === undefined) {
    return res.send(emptyField('media'))
  }
  if (filename === undefined) {
    return res.send(emptyField('filename'))
  }
  var relativePath = path.join(Utils.dateDir(), filename)
  var absolutePath = path.join(settings.folder.data, relativePath)
  mh.toBase64(media).then((data) => {
    fs.writeFileSync(absolutePath, data, 'base64')
    Utils.createMedia({ path: relativePath, meta: req.body.meta, bucketId: req.body.bucketId })
      .then((media) => {
        winston.info('ADD -', media._id, '-', media.path)
        res.send(media)
      })
      .catch((error) => res.send(error))
  })
    .catch((error) => res.send(error))
}

function updateMedia (req, res) {
  Media.findById(req.params.id, (err, media) => {
    if (err) {
      winston.error(err)
      res.send(err)
    } else if (!media) {
      res.send(notFound(req.params.id))
    } else {
      // var spacebroData = { mediaId: media._id }
      var spacebroData = media
      if (req.body.state && req.body.state !== media.state) {
        media.state = req.body.state
        spacebroData['newState'] = media.state
      }
      if (req.body.bucketId && req.body.bucketId !== media.bucketId) {
        media.bucketId = req.body.bucketId
        spacebroData['newBucketId'] = media.bucketId
      }
      if (req.body.state || req.body.bucketId) {
        media.updatedAt = new Date().toISOString()
        media.save((err) => {
          if (err) {
            winston.error(err)
            res.send(err)
          } else {
            winston.info('UPDATE -', media._id.toString())
            if (!settings.filterMediaUpdated ||
                (settings.filterMediaUpdated && spacebroData.state === settings.filterMediaUpdated)) {
              Utils.spacebroClient.emit(spacebroSettings.client.out.mediaUpdated.eventName, spacebroData)
            }
          }
        })
      }
      res.json(media)
    }
  })
}

function updateMeta (req, res) {
  Media.findById(req.params.id, (err, media) => {
    if (err) {
      winston.error(err)
      res.send(err)
    } else if (!media) {
      res.send(notFound(req.params.id))
    } else {
      const meta = Object.assign({}, media.meta, req.body)
      media.meta = meta
      media.updatedAt = new Date().toISOString()
      media.save((err) => {
        if (err) {
          winston.error(err)
          res.send(err)
        } else {
          winston.info('UPDATE META -', media._id.toString())
          if (!settings.filterMediaUpdated ||
              (settings.filterMediaUpdated && media.state === settings.filterMediaUpdated)) {
            Utils.spacebroClient.emit(spacebroSettings.client.out.mediaUpdated.eventName, media)
          }
        }
      })
      res.json(media)
    }
  })
}

function deleteMedia (req, res) {
  var id = req.params.id
  Utils.deleteMedia(id)
    .then((media) => {
      fs.unlinkSync(path.join(settings.folder.data, media.path))
      res.send(media)
    })
    .catch((error) => {
      winston.error(error)
      res.send(error)
    })
}

function copyOrDownload (msg) {
  return new Promise((resolve, reject) => {
    console.log('start copyOrDownload process...')
    msg.file = msg.file || path.basename(msg.path)
    var basename = path.basename(msg.file)
    var mediaRelativePath = path.join(Utils.dateDir(), basename)
    var mediaAbsolutePath = path.join(settings.folder.data, mediaRelativePath)
    // Copy the media to the disk
    if (mh.isFile(msg.path)) {
      winston.info('Copying file ' + msg.file + ' to ' + mediaRelativePath)
      try {
        fs.copySync(msg.path, mediaAbsolutePath)
        msg.path = mediaAbsolutePath
        msg.url = controller.baseURL + 'static/' + mediaRelativePath
        winston.info('Done copying file ' + msg.file + ' to ' + mediaRelativePath)
        resolve(msg)
      } catch (err) {
        reject(err)
      }
    } else if (mh.isURL(msg.path) || mh.isURL(msg.url)) {
      winston.info('Downloading file ' + msg.file + ' to ' + mediaRelativePath)
      download(mh.isURL(msg.path) ? msg.path : msg.url)
        .then((data) => {
          try {
            fs.writeFileSync(mediaAbsolutePath, data)
            msg.path = mediaAbsolutePath
            msg.url = controller.baseURL + 'static/' + mediaRelativePath
            winston.info('Done downloading file ' + msg.file + ' to ' + mediaRelativePath)
            resolve(msg)
          } catch (err) {
            reject(err)
          }
        }).catch((err) => reject(err))
    } else if (mh.isBase64(msg.path) || mh.isBase64(msg.url)) {
      winston.info('Creating file ' + msg.file + ' to ' + mediaRelativePath)
      try {
        let base64Data = msg.url.replace(/^data:image\/png;base64,/, '')
        fs.writeFileSync(mediaAbsolutePath, base64Data, 'base64')
        msg.path = mediaAbsolutePath
        msg.url = controller.baseURL + 'static/' + mediaRelativePath
        winston.info('Done creating file ' + msg.file + ' to ' + mediaRelativePath)
        resolve(msg)
      } catch (err) {
        reject(err)
      }
    } else {
      let msgError = `Error: Could not find a path or URL to the file ${msg.file}.`
      console.error(msgError)
      reject(new Error(msgError))
    }
  })
}

function toDataFolder (msg) {
  return new Promise((resolve, reject) => {
    // Import the main file first, then the versions listed in msg.details
    // (previously the details were iterated without waiting for the main copy)
    copyOrDownload(msg)
      .then(() => {
        // Check for files to import from media details and copy them to the disk
        async.eachOf(msg.details, function (mediaVersion, key, callback) {
          if (typeof mediaVersion === 'object' && (mediaVersion.path || mediaVersion.url)) {
            copyOrDownload(mediaVersion)
              .then((data) => callback())
              .catch((err) => callback(err))
          } else {
            callback()
          }
        }, (err) => {
          if (err) {
            reject(err)
          } else {
            winston.info('Done copying/downloading files and details for media ' + msg.file)
            resolve(msg)
          }
        })
      })
      .catch((err) => reject(err))
  })
}

let init = (options) => {
  settings = options
  controller.baseURL = `http://${settings.server.host}:${settings.server.port}/`
  // ----- SPACEBRO EVENTS ----- //
  Utils.spacebroClient.on(settings.service.spacebro.client['in'].mediaCreate.eventName, function (data) {
    winston.info('EVENT - "new-media" received')
    toDataFolder(data)
      .then((data) => {
        Utils.createMedia(data)
          .then((media) => winston.info('ADD -', media._id, '-', media.path))
          .catch((error) => winston.error(error))
      }).catch((error) => winston.error(error))
  })
}

controller = {
  init,
  getSettings,
  getCount,
  getFirst,
  getLast,
  getMedia,
  getThumbnail,
  getField,
  postMedia,
  updateMedia,
  updateMeta,
  deleteMedia,
  baseURL: ''
}

module.exports = controller
Ruling on one who breaks the fast by mistake
Publication: 15-09-2007

What is the ruling on breaking a voluntary fast by mistake?

Praise be to Allah.

Al-Bukhaari (6669) and Muslim (1155) narrated that Abu Hurayrah (may Allaah be pleased with him) said: The Prophet (peace and blessings of Allaah be upon him) said: "Whoever forgets he is fasting and eats or drinks, let him complete his fast for it is Allaah Who has fed him and given him to drink."

It was also narrated that he does not have to offer expiation or make up that fast. Ibn Khuzaymah (1999) narrated from Abu Hurayrah that the Prophet (peace and blessings of Allaah be upon him) said: "Whoever breaks his fast in Ramadaan by mistake does not have to make up that day or offer expiation." Classed as hasan by al-Albaani in Saheeh Ibn Khuzaymah.

Al-Daaraqutni narrated from Abu Sa'eed al-Khudri that the Prophet (peace and blessings of Allaah be upon him) said: "Whoever eats in the month of Ramadaan by mistake does not have to make up that day." Al-Haafiz said: Although its isnaad is weak, it may still be considered sound because there are corroborating reports. The least that could be said about it is that it is hasan, so it may be quoted as evidence. Reports that are less strong than this have been quoted as evidence with regard to many issues. It may also be supported by the fact that a number of the Sahaabah issued fatwas that are in agreement with this hadeeth without any one of the Sahaabah having a different view, as was stated by Ibn al-Mundhir, Ibn Hazm and others, among them 'Ali ibn Abi Taalib, Zayd ibn Thaabit, Abu Hurayrah, and Ibn 'Umar.

And it is in accordance with the words of Allaah (interpretation of the meaning): "but He will call you to account for that which your hearts have earned" [al-Baqarah 2:225]. Forgetfulness is not something which is earned by the heart. This hadeeth tells of the kindness of Allaah to His slaves and how He makes things easier for them and alleviates hardship.

The majority of scholars quote these ahaadeeth as evidence that whoever forgets that he is fasting and breaks the fast, his fast is still valid, and he should complete his fast, and he does not have to make it up or offer any expiation. The general meaning of the hadeeth covers both obligatory and naafil fasts; there is no difference between the two.

Al-Shaafa'i said in al-Umm (2/284): If a fasting person eats or drinks in Ramadaan or in a fast observed in fulfillment of a vow or as an expiation, or a fast that is obligatory for some reason, or a voluntary fast, out of forgetfulness, then his fast is complete and he does not have to make it up.

Al-Nawawi said: This is evidence to support the view of the majority: if the fasting person eats or drinks or has intercourse because of forgetfulness, then he does not break his fast. Among those who were of this view are al-Shaafa'i, Abu Haneefah, Dawood and others.

An interesting story was narrated by 'Abd al-Razzaaq from 'Amr ibn Dinar: that a person came to Abu Hurayrah and said: "I started fasting in the morning then I forgot and ate." He said, "It does not matter." He said: "Then I entered upon someone and by mistake I ate and drank." He said, "It does not matter, Allaah has fed you and given you to drink." Then he said: "I entered upon another person and forgot, and ate." Abu Hurayrah said: "You are a person who is not used to fasting."
Source: Islam Q&A
# Change of variables in an integral

2010 Mathematics Subject Classification: Primary: 26B10 [MSN][ZBL]

A formula which generalizes to multidimensional integrals the usual integration by substitution for integrals in one variable.

Let $U$ and $V$ be open sets in $\mathbb{R}^n$, let $\Phi \colon U \to V$ be a diffeomorphism and $f \colon V \to \mathbb{R}$ a continuous function. For any $y \in U$ denote by $J\Phi(y)$ the absolute value of the determinant of the Jacobian matrix $D\Phi|_y$, i.e. of the $n \times n$ matrix
$$
D\Phi|_y := \begin{pmatrix}
\frac{\partial \Phi^1}{\partial x_1}(y) & \frac{\partial \Phi^1}{\partial x_2}(y) & \cdots & \frac{\partial \Phi^1}{\partial x_n}(y) \\
\frac{\partial \Phi^2}{\partial x_1}(y) & \frac{\partial \Phi^2}{\partial x_2}(y) & \cdots & \frac{\partial \Phi^2}{\partial x_n}(y) \\
\vdots & \vdots & & \vdots \\
\frac{\partial \Phi^n}{\partial x_1}(y) & \frac{\partial \Phi^n}{\partial x_2}(y) & \cdots & \frac{\partial \Phi^n}{\partial x_n}(y)
\end{pmatrix},
$$
where $\Phi^1, \ldots, \Phi^n$ denote the components of the vector function $\Phi$.

Then the following formula holds for any compact $\Omega \subset U$:
$$
\int_\Omega f(\Phi(y))\, J\Phi(y)\, dy = \int_{\Phi(\Omega)} f(z)\, dz.
$$

This formula plays a fundamental role in defining the integration of a differential form: see also Integration on manifolds. The assumptions on $\Phi$, $f$ and the domains can be relaxed in several ways: we refer to Area formula.
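A standard illustration of the formula (an added example, not part of the original entry) is the passage to polar coordinates in the plane; here $\Phi$ is a diffeomorphism of $(0,R)\times(0,2\pi)$ onto the disc minus a null set, which does not affect the integrals:

```latex
% Polar coordinates: \Phi(r,\theta) = (r\cos\theta,\, r\sin\theta), with
%   J\Phi(r,\theta) = \det\begin{pmatrix}\cos\theta & -r\sin\theta\\ \sin\theta & r\cos\theta\end{pmatrix} = r.
\[
\int_{B_R} f(x,y)\,dx\,dy
  = \int_0^{2\pi}\!\!\int_0^R f(r\cos\theta,\, r\sin\theta)\, r \,dr\,d\theta .
\]
% Taking f \equiv 1 recovers the area of the disc B_R:
\[
\int_0^{2\pi}\!\!\int_0^R r\,dr\,d\theta = 2\pi \cdot \tfrac{R^2}{2} = \pi R^2 .
\]
```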
# Taylor Rule

The interest rate in the economy changes. When it rises, borrowing money becomes significantly more expensive. The institution in charge of setting the interest rate is the Federal Reserve System, which sets it according to economic conditions. Did you know that it is possible to predict when the Fed will raise the interest rate and when it will not? That would help you decide whether to take a loan now, wait another year, or consume more now and save when the interest rate is higher. To do that, you need to know the Taylor rule and how it works.

## Taylor Rule Economics

The Taylor rule in economics aims to elaborate the relationship between the Federal Reserve's main policy instrument (the federal funds rate), inflation, and gross domestic product. It was developed by John B. Taylor in 1992.

The Taylor rule is a monetary policy rule suggesting that the federal funds rate should be set in line with inflation and economic growth. The federal funds rate is the interest rate at which financial institutions lend their excess reserves to each other on an uncollateralized basis. The Federal Reserve uses it as one of its main instruments when conducting policy; it is essential in controlling inflation and ensuring healthy, steady economic growth.

It is the Federal Reserve that determines the federal funds rate. When inflation is too high, the Fed increases the federal funds rate, which lowers the money supply and cools inflation down. On the other hand, when the economy is performing poorly and output is dropping, the Federal Reserve lowers the federal funds rate. This increases the money available in the economy, boosting consumer spending and economic growth; it also comes with an increase in inflation.

The main idea behind the Taylor rule is to explain how the federal funds rate should be set to maintain economic growth and healthy inflation levels. The rule is based on the assumption that the Federal Reserve will be willing to accept a moderate rise in the inflation rate if it helps stimulate greater production in the economy.

According to the Taylor rule, the Federal Reserve has a 2 percent inflation target and follows three rules when setting its federal funds rate:

1. When inflation is at the 2 percent target and real GDP equals potential GDP, the federal funds rate should be 2 percent.
2. If real GDP rises above potential GDP (a positive output gap), the Fed should raise the real federal funds rate by 1/2 percentage point for every 1 percent increase.
3. If inflation rises above the 2 percent target, then for each 1 percent increase the Fed should raise the federal funds rate by 1/2 percentage point.

In the real world the Fed does not strictly adhere to these rules and sets the federal funds rate considering other factors as well. In many instances, however, the rate suggested by the Taylor rule has been close to the actual one set by the Fed.

## Taylor Rule Formula

The Taylor rule formula is:

$$i = p + 0.02 + 0.5y + 0.5(p - 0.02)$$

where:

$$i$$ - the nominal federal funds rate

$$p$$ - the rate of inflation over the previous four quarters

$$y = \frac{Y - Y_p}{Y_p}$$ - the percentage difference between real output and full-employment output.

Moving the inflation rate $$p$$ to the left-hand side of the equation gives the real federal funds rate, $$i - p$$:

$$i - p = 0.02 + 0.5y + 0.5(p - 0.02)$$

That means the real federal funds rate should respond to the gap between output and full-potential output and to the gap between inflation and target inflation, the target being taken as 2%. The idea behind the formula is that the Federal Reserve and other central banks target the real interest rate rather than the real money supply; the real interest rate serves as an intermediate target through which the real money supply in the economy is influenced.

Note that when the economy is at its full potential output and inflation equals the 2 percent target, the equation becomes:

$$i - p = 0.02$$

because under full employment $$y = 0$$ (there is no difference between potential and real output), and $$(p - 0.02) = 0$$ (there is no difference between the inflation target and the real inflation in the economy). In that scenario the real federal funds rate equals the 2% inflation target.

Now suppose the economy is in an expansion where real output has exceeded potential output and real inflation is above its target. For illustration, let the difference between real output and full-employment output be 3%, and the difference between real inflation and the inflation target be 2%. The Taylor rule would then suggest that the real federal funds rate should rise:

$$i - p = 0.02 + 0.5\times0.03 + 0.5\times0.02 = 0.045 = 4.5\%$$

This means the real federal funds rate should be 4.5% instead of the 2% that applies when inflation is on target and real output equals potential output. On the other hand, if the economy is showing signs of weakness, with output below the full-employment level and inflation below target, the Taylor rule indicates that the real federal funds rate should be reduced below 2%, loosening monetary policy. Both responses match the Fed's customary practice.

## Taylor Rule of Monetary Policy

The Taylor rule of monetary policy is a set of prespecified rules the Fed should follow when adjusting its monetary policy instruments. The rule does not prevent the Fed from responding to shocks in the economy, as long as the response is in line with the rules: the Fed should take into account the inflation rate and the output gap (the difference between potential output and real output) when setting the interest rate.

### Taylor Rule Inflation Rate

According to the Taylor rule of monetary policy, the Fed should raise the federal funds rate when inflation is higher than the Fed's inflation target, and lower the federal funds rate when inflation is below the target.

Fig. 1 - Taylor rule inflation

Figure 1 shows how the Taylor rule works using the AD-AS model. When inflation is high at P1 and the Fed wants to reduce it to P2, the Taylor rule says it should increase the federal funds rate. A higher federal funds rate makes borrowing more expensive, reducing total investment in the economy; this shifts the aggregate demand curve from AD1 to AD2, resulting in a lower price level.

### Taylor Rule Output Gap

An output gap occurs when real GDP is either above or below potential GDP. When real GDP growth exceeds the economy's full potential, the Taylor rule calls for a higher interest rate. When real GDP is below the full-employment level, the Fed should respond by lowering the interest rate.

Fig. 2 - Taylor rule recessionary gap

Figure 2 shows an economy experiencing a recessionary gap: real GDP (Y1) is below potential GDP. Taylor argues that the Fed should lower the federal funds rate to close the gap. Cheaper borrowing increases total investment in the economy; as a result, the aggregate demand curve shifts from AD1 to AD2, and real output rises from Y1 to Yp at the equilibrium point E2.

## Taylor Rule Examples

A primary example of the Taylor rule in practice is the United States, where the actual federal funds rate and the Taylor-rule rate have followed a similar pattern for most periods, except during economic and financial crises such as the one in 2008.

Fig. 3 - Federal Funds Rate and the Taylor Rule rate. Source: FRED Economic Data [1]

Figure 3 compares the federal funds rate predicted by the Taylor rule with the actual federal funds rate in the U.S. from 1990 through the end of 2022; the pink line represents the Taylor-rule rate, and the green line the actual rate. From 1990 up to 2010 the Fed's decisions were comparable to those predicted by the Taylor rule: the rule closely anticipated decreases and increases in the federal funds rate, even though it was not formally in effect. The two rates maintained a tight connection up to the point where the Taylor rule called for negative interest rates; the Fed did not pursue a negative rate, as it is limited by the zero lower bound.

## Taylor Rule - Key takeaways

- The Taylor rule is a monetary policy rule suggesting that the federal funds rate should be set in line with inflation and economic growth.
- The federal funds rate is the interest rate at which financial institutions lend their excess reserves to each other on an uncollateralized basis.
- The Taylor rule formula is $$i = p + 0.02 + 0.5y + 0.5(p - 0.02)$$.
- The Taylor rule of monetary policy sets the federal funds rate in accordance with the inflation level and the output gap.

## References

1. Fig 3. - Federal Funds Rate and the Taylor Rule rate, Source: FRED Economic Data, The Taylor Rule, https://fredblog.stlouisfed.org/2014/04/the-taylor-rule/
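The Taylor rule formula can be sketched in a few lines of code (an illustrative sketch, not from the article; the function name `taylor_rate` and its argument names are my own):

```python
def taylor_rate(inflation, output_gap, target=0.02):
    """Nominal federal funds rate suggested by the Taylor rule:
    i = p + 0.02 + 0.5*y + 0.5*(p - target),
    where p is inflation over the previous four quarters and
    y = (Y - Y_p) / Y_p is the output gap."""
    return inflation + 0.02 + 0.5 * output_gap + 0.5 * (inflation - target)

# On-target economy: p = 2%, y = 0  ->  real rate i - p = 2%
print(round(taylor_rate(0.02, 0.0) - 0.02, 4))   # 0.02

# Expansion example from the text: y = 3%, inflation 2 points
# above target (p = 4%)  ->  real rate i - p = 4.5%
print(round(taylor_rate(0.04, 0.03) - 0.04, 4))  # 0.045
```

Subtracting `inflation` from the returned nominal rate reproduces the real-rate form $$i - p$$ used in the worked example above.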
FIMP may stand for:
the ICAO code for the airport of Mauritius
feebly interacting massive particle, a candidate for dark matter
Museum of the American Cocktail
Location: Paradise, United States

The Museum of the American Cocktail, based in New Orleans, Louisiana, is a nonprofit organization dedicated to education in mixology and preserving the rich history of the cocktail as developed in the United States. Among its events are tastings in association with specific seminars or exhibits. It annually presents the American Cocktail Awards, together with the United States Bartenders Guild.

This museum has moved to New Orleans.

More information and contact:
Wikipedia: https://en.wikipedia.org/wiki/The_Museum_of_the_American_Cocktail
Official website: http://museumoftheamericancocktail.org/PressRoom/LasVegas/Exhibit.html
Phone: +1 7028928272
Address: 3663 Las Vegas Blvd S, Las Vegas, NV 89109, USA
Coordinates: 36°6'33.872" N -115°10'21.759" E
Q: Macros for image, coimage, and cokernel

I work in LyX. There's a default macro for the kernel, \ker, which has nice spacing if I put something after it, compared to \text{ker}. I'd like to have macros with the same nice spacing for the image, coimage, and cokernel, respectively im, coim, coker.
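One standard approach (a sketch, assuming the amsmath package is available; in LyX these declarations would go into the LaTeX preamble) is to declare the missing operators with \DeclareMathOperator, which gives them the same operator spacing as \ker:

```latex
\documentclass{article}
\usepackage{amsmath}% provides \DeclareMathOperator
\DeclareMathOperator{\im}{im}
\DeclareMathOperator{\coim}{coim}
\DeclareMathOperator{\coker}{coker}
\begin{document}
% \ker f, \im f, \coim f and \coker f now all get operator spacing
\[ \ker f \qquad \im f \qquad \coim f \qquad \coker f \]
\end{document}
```

An inline \operatorname{im} would produce the same spacing; \DeclareMathOperator simply packages it under a reusable name.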
Petrorossia obscurior is a species of fly described by Wray Merrill Bowden in 1964. It belongs to the genus Petrorossia and the family Bombyliidae (bee flies). The species' range is Ghana. No subspecies are listed in the Catalogue of Life.
Jordan Kilganon is a Canadian slam dunker who makes a living by taking part in dunk competitions and performing at various shows. He is known for his athleticism and for dunks such as the Hide-and-Seek, the 360 Scoop Elbow, the Roundhouse, the Lost and Found, and the Scorpion. Kilganon was born in Sudbury, Ontario, on April 28, 1992.

High school and college
Kilganon graduated from École Secondaire du Sacré-Coeur, based in Sudbury, Ontario, and from Humber College in 2015.

Dunking career
Starting at age 15, Kilganon posted videos of his dunks on YouTube. In 2021, he won Dunk League 3, a contest pitting the world's best dunkers against each other for a $50,000 grand prize.
The preparation of recovered-polycarbonate-matrix nanocomposites filled with organically modified montmorillonites has been considered as a method for the secondary, or mechanical, recycling of these polymeric wastes. The mechanical properties of these nanocomposites have been evaluated by means of depth-sensing indentation measurements. The selection of the measurement conditions is discussed, and a method to evaluate the heterogeneity of these materials is presented. It has been found that greater nanoclay contents do not always lead to improved mechanical properties. This fact is explained in terms of the competition between the reinforcing effect of the nanofiller and the thermal and mechanical degradation that the matrix experiences during melt processing. This result provides a limit for clay addition in the mechanical recovery of polycarbonate wastes.
# Bernstein inequalities

1) Let $\{X_i\}_{i=1}^{n}$ be a collection of independent random variables satisfying the conditions:

a) $E[X_i^2] < \infty$ for all $i$, so that one can write $\sum_{i=1}^{n} E[X_i^2] = v^2$;

b) there exists $c \in \mathbb{R}$ such that $\sum_{i=1}^{n} E[|X_i|^k] \leq \frac{1}{2} k!\, v^2 c^{k-2}$ for all integers $k \geq 3$.

Then, for any $\varepsilon \geq 0$,

$$\Pr\left\{\sum_{i=1}^{n}\left(X_i - E[X_i]\right) > \varepsilon\right\} \leq \exp\left[-\frac{v^2}{c^2}\left(1 + \frac{c\varepsilon}{v^2} - \sqrt{1 + 2\frac{c\varepsilon}{v^2}}\right)\right] \leq \exp\left(-\frac{\varepsilon^2}{2\left(v^2 + c\varepsilon\right)}\right)$$

$$\Pr\left\{\left|\sum_{i=1}^{n}\left(X_i - E[X_i]\right)\right| > \varepsilon\right\} \leq 2\exp\left[-\frac{v^2}{c^2}\left(1 + \frac{c\varepsilon}{v^2} - \sqrt{1 + 2\frac{c\varepsilon}{v^2}}\right)\right] \leq 2\exp\left(-\frac{\varepsilon^2}{2\left(v^2 + c\varepsilon\right)}\right)$$

2) Let $\{X_i\}_{i=1}^{n}$ be a collection of independent, almost surely absolutely bounded random variables, that is, $\Pr\left\{|X_i| \leq M\right\} = 1$ for all $i$. Then, for any $\varepsilon \geq 0$,

$$\Pr\left\{\sum_{i=1}^{n}\left(X_i - E[X_i]\right) > \varepsilon\right\} \leq \exp\left[-\frac{9v^2}{M^2}\left(1 + \frac{M\varepsilon}{3v^2} - \sqrt{1 + 2\frac{M\varepsilon}{3v^2}}\right)\right] \leq \exp\left(-\frac{\varepsilon^2}{2\left(v^2 + \frac{M}{3}\varepsilon\right)}\right)$$

$$\Pr\left\{\left|\sum_{i=1}^{n}\left(X_i - E[X_i]\right)\right| > \varepsilon\right\} \leq 2\exp\left[-\frac{9v^2}{M^2}\left(1 + \frac{M\varepsilon}{3v^2} - \sqrt{1 + 2\frac{M\varepsilon}{3v^2}}\right)\right] \leq 2\exp\left(-\frac{\varepsilon^2}{2\left(v^2 + \frac{M}{3}\varepsilon\right)}\right)$$

(Theorem; MSC 60E15)
null
null
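The bounded-variable form of Bernstein's inequality in the record above is easy to sanity-check numerically. The sketch below is not part of the PlanetMath entry, and all names in it are my own: it simulates sums of i.i.d. Uniform[-1, 1] variables, for which M = 1 and v^2 = n/3, and compares the empirical tail probability with the simpler right-hand bound exp(-eps^2 / (2(v^2 + M*eps/3))).

```python
import math
import random

def bernstein_bound(eps, v2, M):
    # Right-hand (simpler) form of Bernstein's inequality for |X_i| <= M:
    # P(sum_i (X_i - E[X_i]) > eps) <= exp(-eps^2 / (2 (v^2 + M eps / 3)))
    return math.exp(-eps ** 2 / (2.0 * (v2 + M * eps / 3.0)))

def empirical_tail(n, eps, trials, rng):
    # X_i ~ Uniform[-1, 1]: mean 0, variance 1/3, bounded by M = 1
    hits = 0
    for _ in range(trials):
        s = sum(rng.uniform(-1.0, 1.0) for _ in range(n))
        if s > eps:
            hits += 1
    return hits / trials

rng = random.Random(0)          # fixed seed for reproducibility
n, M = 50, 1.0
v2 = n / 3.0                    # sum of the individual variances
eps = 10.0
emp = empirical_tail(n, eps, 20000, rng)
bound = bernstein_bound(eps, v2, M)
print(emp <= bound)             # the empirical tail should respect the bound
```

The bound is loose here (roughly 0.08 for these parameters, against an empirical tail under 0.01), which is expected: Bernstein's inequality holds for every distribution satisfying the hypotheses, not just the uniform one.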
Wine and lemons – friendly encounters in Albania Posted on 29 April 2017 by Evelien Eyes closed, hands gesturing with fierce Mediterranean intensity, Lorenc's tenor sounded across the terraced garden. The big black '90s sound system blasted an Italian opera aria and my sister and I held our glasses of homemade wine, watching Lorenc as he sang. On the hill behind us and across the river valley in front of us, the people of Berat had started closing the town's thousand windows with wooden shutters, and the April evening started to grow chilly. The grapevines covering the terrace gently shivered in the twilight breeze. We had just finished a simple plate of spaghetti, and Lorenc had asked us if we wanted to hear him sing opera. Opera was his absolute passion, and to him it seemed perfectly natural to share his interpretation of O Sole Mio with us. Despite the Italian flavours and sounds, the setting was unmistakably Albanian. Berat is called the 'Town of a Thousand Windows' because its two hillsides on both sides of the river are covered with white plastered town houses that feature countless windows looking out onto the valley, like the multifaceted eyes of an insect. It was under the guard of those same windows that we had first met Lorenc earlier that day on a square near the river, when we had just returned from a late afternoon walk in the town's surrounding hills. He had casually halted his bicycle alongside us to ask us where we were from, and where we were staying. "Why don't you come over for dinner and wine tonight? I own a hostel up the hill, with a nice garden. I built a bar there. I can also sing opera for you." Albania is – undeservedly – still a little-visited travel destination, and on this April evening outside the holiday season there were no guests occupying the two rooms in Lorenc's hostel. There was nobody in his garden but us. Lorenc showed us photos of his wife and daughter, and shared his philosophies on life. 
There were other chance encounters in Berat that introduced us to Albanian hospitality. While exploring the backstreets of the town, we ran into Anxhelo, who was also curious to learn where we were from. Anxhelo told us that he had spent some years working in the hospitality industry in Italy, and that he was now building his own hotel in Berat, together with his brothers. "I will show you my hotel," he said proudly, "but first let me offer you coffee. You see, I know that many people think badly about us Albanians. I want to show you that we are good people." Indeed, prior to our trip we received some surprised reactions when we told people we were going to travel around Albania. "Why would you go there?" people would ask. "Good luck not getting killed by the Albanian mafia!" another joked. Of course Albanians know that their country in reality defies its reputation, and in their eagerness to show us their hospitality and warmth they sometimes almost become defensive. We sat down on a terrace with Anxhelo, who had called some of his friends to join us for a drink. We talked about the differences between our countries, as one usually does when meeting a stranger in a foreign land. He insisted that the bill was on him and walked us uphill to a newly erected structure on a corner overlooking this part of the old town. "I think that in the future, maybe more people will be coming to visit Albania. Look at how beautiful Berat is," he said while looking out of a big hole that was to become a window in one of his hotel's future guest rooms. He was right; Albania is undeniably beautiful and has full potential to become a popular holiday destination, as happened in Croatia over the past decades. Albania is rich in old historic towns and mountains, and its beautiful coastline near the town of Ksamil, which we visited some days later, can compete with some of the most gorgeous beaches in Europe. 
Ksamil is a holiday town but when we visited before the start of the season, it was all but deserted. The family that we were renting a room from welcomed us as their first guests of the year. Finding out that they had two cats, we happily settled in and when Zenel, the son of the house, noticed our love for cats he whispered, "We have some kittens downstairs. Want to see them?" We followed him to the back of the house where he handed us two tiny kittens, their mother anxiously watching us from the garage doorway. "You can have them!" Zenel proclaimed happily after seeing us fall in love with the tiny beings. Albanians are happy to share anything with you to make you feel welcome, whether it is wine, coffee or just a bunch of adorable kittens. Ksamil is a great place to linger for a day or two. The sea is clear and the sands are white, and nearby Butrint offers an exciting dive into ancient Greek and Roman history. The remains of this city that was inhabited throughout the Greek and Roman ages and subsequently the Bulgarian, Byzantine, Venetian and Ottoman empires still display well preserved ruins of a cathedral, an amphitheatre, defense towers, and a baptistry. There is a bus that runs the 4.5 km stretch between Ksamil and the small peninsula on which Butrint is situated, but we decided to walk in order to take in the views on the bay and the fields clad with cypress trees, olive trees and herds of white goats. Midway, the Butrint-bound bus passed us and halted with screeching brakes after the bus driver had spotted us walking on the side of the road. "You want to go to Butrint? Please take this bus!" We politely declined, trying to explain that we'd rather walk. "But for you it's free!" he yelled desperately, not understanding our determination to reach Butrint on foot. Albanians truly go out of their way to make you feel comfortable in their beautiful country. 
Back in Ksamil we admired the enormous lemons that hung from the trees in the garden of our temporary home. Zenel spotted us and asked, "You want some?" Without waiting for our response, he immediately picked six lemons the size of a baby's head and shoved them in our arms. When life gives you lemons, make lemonade, goes the popular proverb. When an Albanian gives you lemons, you just accept them gratefully and take them on the bus with you to your next destination, trying to think of useful things to do with the six huge lemons in your luggage. In the quiet garden in Berat, we moved from talking about philosophy to practicalities and we advised Lorenc to create a visible sign providing directions to his hostel. The day before, when we arrived in Berat around noon on a bus from Albania's capital city Tirana, we had struggled to find his hostel which was listed in our travel guide, but which appeared to be impossible to find in the town's winding cobblestone alleys – especially while carrying heavy backpacks in the warm sunlight that held the promise of the sweltering summer to come. We had given up our search and took a room in a pretty hotel on the other side of the river. "You should put a sign out the front of the door so that people can easily see that this is a hostel." "You are right, I should do that. What about my garden? In summer I see people passing by on the path behind my garden, and I have built my bar here but they do not know that they can come in for a drink. Do you think that other people want to visit my bar?" We agreed that it was a shame that his bar was largely unused, and guaranteed him that we were sure that other travelers would love to enjoy a cold drink in his garden when returning to town after a long walk. Lorenc disappeared inside his home and scurried back with a notebook and a pencil. He thought for a while, then started writing. "Welcome to all my friends. Come inside my garden for a drink. I make my own wine, you will enjoy it. 
I sing opera. Welcome." Create your own travel story! Getting there – Buses to Berat depart at Tirana's 21 December Square from early morning until mid-afternoon and take about 2.5 hours to get there. Sleeping – While Berat is a small town, there are plenty of options for spending the night there. We slept in Hotel Belgrad Mangalem on the north side of the river, where we had an excellent stay. To visit Lorenc Guesthouse, you can find it on the south side of the river at Stiliano Bandill, Lagja: Gorica entrance nr.18. While it is possible to visit Berat on a day trip from Tirana, I recommend spending a night there because it is nice to experience the town in the evening and early morning without the day-tripping tourists. For most people, 2 days / 1 night is probably enough time to spend in Berat. Ksamil/Butrint Getting there – You can arrive in Ksamil by bus via Saranda from most major towns in Albania. Buses to Saranda leave Tirana throughout the mornings. From Saranda there are hourly buses to Ksamil. Sleeping – There is plenty of accommodation in Ksamil as the town is a true beach holiday destination in summer. Outside the summer season you will find many hotels and restaurants closed. We stayed at Villa Caca which is a simple but friendly collection of apartments some 15-20 minutes walking from Ksamil's most beautiful beach. From Ksamil you can walk to Butrint in about an hour following the main road east out of town past olive groves. You can also catch a bus from Ksamil's main road or directly from Saranda, passing through Ksamil. Some people combine a day trip visit from Saranda to Butrint with some hours at the beach in Ksamil.
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
8,264
{"url":"http:\/\/gmatclub.com\/wiki\/Data_sufficiency","text":"# Data sufficiency\n\nGMAT Study Guide - a prep wikibook\n\n## General information\n\nData sufficiency has a unique question format developed especially for the GMAT. Nobody knows why data sufficiency and not data deficiency or something else; perhaps, cause ETS wanted to check decision making skills or ability to act promptly with unfamiliar questions. We don't know.\n\nWhat we know, however, are a few traps that DS poses to a test taker. You will learn most of them when you practice, but we feel we need to warn you about them.\n\n## Data sufficiency traps\n\nThe way answer choices work is the first trap of DS. They seem clear at first, but that's only the tip of the iceberg. What often confuses people is when enough or not. Thus, an answer choice is enough only and only if using the information provided in it, you can get an exact numerical value such as 4, $\\frac{162}{3}$, or similar. However, it is not enough to say that b=a, or to have two values (for example if you have $x^2 = 16$, then $x$ is 4 and -4) this is not sufficient, there must be only one value. When one of the statements produces several possible answers (such as 9 and 5 or 3 and -3), the result is undefined.\n\nThere are several theories whether the two statements given with the DS question should result in the same numerical answer if either is sufficient. This controversy is illustrated below:\n\n{{#x:box| Example 1. Triangle ABC has one angle equal to 90 degrees and AB equal to 5 inches, what is the area of ABC?\n\ni. BC = 4\n\nii. AC = 12 }}\n\nApparently both answer choices are sufficient to answer the question, but in two cases, the numeric answer differs. In the first case we get a 3-4-5 triangle and in the second 5-12-13, thus area of ABC1 is 6 and ABC2 is 30. It is still unknown whether this is possible under ETS's regulations or not. I know for sure that some companies ignore this rule, such as Princeton for example. 
If we are able to say that ETS does not support the idea of getting two different numeric answers on the same DS question, it will become easier to eliminate some answer choices. Please respond if you encounter an example of this sort.\n\nTo solve or not to solve? Every test taker moves through DS cycles. First, when one encounters GMAT, DS seems to be fairly simple and easy to solve. Then, we realize that we don't need to solve and DS becomes the easiest thing in the world. The third step happens when Problem Solving score goes up, and DS stays the same or falls down. Then we get back to solving DS, so the pattern goes like this: solve >> not solve >> solve.\n\nThere is a general belief that one should not solve DS since all it asks is whether there is enough information to answer. In fact, many textbooks tell not to solve. However, what \u201cdo not solve\u201d really implies is doing the job of compiling an equation, getting it to its final form, and stopping only when all you have to do is calculations. To get the majority of DS right, you need to solve, but you don't need to calculate. Don't try to solve in your head, use paper, don't stare at the problem trying to hypnotize it. See example below:\n\n{{#x:box| Example 2. What is the volume of a box with dimensions $a$, $b$, and $c$?\n\ni. $a = \\frac{18}{bc}$\n\nii. $b = 2$, $c = 4$ }}\n\nAt first glance both statements are needed to answer the question, but when actually attempted, the problem appears solvable. If you did not compile an equation, you would not see that the volume of a box is $abc$; substituting statement i gives $abc = \\frac{18}{bc} \\cdot bc = 18$, so the first statement is enough to answer. Often, ETS will use fractions and they will cancel out, providing you with an answer; always make sure you write down the formula\/equation.\n\n{{#x:box| Example 3. What is the value of $x$?\n\ni. $x + 2y = 6$\n\nii. $4y + 2x = 12$ }}\n\nIf you solve without a careful look, you will think that since there are two equations and two variables, you will find the solution. Nope. 
Both statements are masked and in fact carry identical information: the second equation is just twice the first, so there is no unique value of $x$. According to our members, such problems are very common on the real GMAT.\n\n{{#x:box| Example 4. How many miles is it from George's house to the grocery store?\n\ni. If George did not visit a gas station on his way to the grocery store, he would have driven 4 miles less.\n\nii. The gas station is 8 miles from George's house }}\n\n{{#x:box| Example 5. How many children are there in Nancy's class?\n\ni. Yesterday there were 14 kids in the class besides Nancy\n\nii. Usually there are 2 kids who are sick and not present in the class }}\n\nDo not assume anything on data sufficiency. Some questions invite you to assume a little something - don't. For Example 4, we don't know how George's house is situated on the map relative to the gas station or the grocery store; the three places can lie on a straight line or form a triangle, therefore we don't have enough information to give an answer, E.\n\nIn Example 5, we cannot say how many children are in Nancy's class since we may not assume that yesterday there was a normal situation and only 2 children were not present. It may have been a big School Play day, so even the sick children came to see it. We don't know, therefore E.\n\n{{#x:box| Example 6. Is the sum of six consecutive integers even?\n\ni. The first integer is odd\n\nii. The average of six integers is odd }}\n\n{{#x:box| Example 7. Does $x$ equal 3?\n\ni. $x^2 = 9$\n\nii. $x$ minus three is negative 6 }} Watch out for Yes\/No data sufficiency questions; they are the hardest and the most misleading.\n\nExample 6: The answer to this one is D. Statement (1) implies that the sum of the integers is odd, which gives a NO answer to our question, but is SUFFICIENT to give an answer, therefore sufficient. Statement (2) implies that the sum is even (six times an odd average), which is sufficient to give a Yes answer. In both cases it was sufficient to answer the question, except in the first case, the answer was NO and in the other, it was YES. 
Make sure you don't confuse No with insufficient because they are not related here.\n\nExample 7: the first statement is not sufficient since $x^2 = 9$ means x can be either positive or negative 3, so it is not enough. The second statement, however, provides us with a value for x, negative 3. Therefore it is sufficient to answer the question. The answer is B.\n\nDo not combine answer choices. What ETS often does on harder DS questions is give the first piece of info as insufficient and the second as sufficient by itself. Yet, naturally, rushing through the test, one moves on to the second statement, which nicely adds to the first and makes the pair sufficient to answer, and misses the possibility that the second statement can be sufficient by itself. Do not eliminate this possibility.\n\nWhen you solve a medium\/hard DS question, it is good to play a game; try to find the trick in the puzzle. If you have looked through both pieces of info and it seems both are needed to be sufficient, try to find a reason why only one could be sufficient, or vice versa. This technique pays off with hard DS questions that often have a more complex solution than it seems at first glance.\n\nAs always on GMAT, make an analysis of your mistakes and see what DS questions cause the most problems.\n\nFinally, make sure you don't confuse D and C and know the answer choices by heart.\n\n## Traps review\n\n1. Always write down the whole formula\/equation\n2. Watch out for masked statements; take extra time to check the solution, not just think that you will be able to answer a question since there are two statements; they may cancel out\n3. Do not assume anything on the DS; if you think the author is pushing too much, you are probably right\n4. Yes\/No questions - know them\n5. Do not combine answer choices\n6. Play a game with yourself and try to prove yourself wrong; helps with difficult DS\n7. Make sure you know the answer choices and don\u2019t confuse C and D\n8. 
Make an analysis of your errors","date":"2013-06-20 03:12:40","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 15, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.5470854043960571, \"perplexity\": 541.7568984559713}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2013-20\/segments\/1368710196013\/warc\/CC-MAIN-20130516131636-00066-ip-10-60-113-184.ec2.internal.warc.gz\"}"}
null
null
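Examples 2 and 3 from the GMAT wiki record above can be checked mechanically. The sketch below is illustrative only (the function names are mine, not from the source): the volume in Example 2 collapses to 18 for any nonzero b and c, and the two equations in Example 3 are scalar multiples of each other, which is exactly why they cannot pin down a unique x.

```python
from fractions import Fraction

# Example 2: V = a*b*c with statement (i) a = 18/(b*c).
# The b and c factors cancel, so (i) alone pins down V = 18.
def volume_from_statement_i(b, c):
    a = Fraction(18) / (b * c)
    return a * b * c

# Example 3: x + 2y = 6 and 4y + 2x = 12 look like two equations,
# but one is a scalar multiple of the other, so no unique x exists.
def is_dependent(eq1, eq2):
    # eq = (coef_x, coef_y, const); dependent iff cross-products agree
    a1, b1, c1 = eq1
    a2, b2, c2 = eq2
    return a1 * b2 == a2 * b1 and a1 * c2 == a2 * c1

print(volume_from_statement_i(Fraction(2), Fraction(4)))   # 18
print(volume_from_statement_i(Fraction(5), Fraction(7)))   # 18
print(is_dependent((1, 2, 6), (2, 4, 12)))                 # True
```

Using exact `Fraction` arithmetic avoids floating-point noise, which matters when the whole point is that the fractions cancel exactly.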
{"url":"https:\/\/en.wikipedia.org\/wiki\/Chinese_postman","text":"# Route inspection problem\n\n(Redirected from Chinese postman)\n\nIn graph theory, a branch of mathematics and computer science, Guan's route problem, the Chinese postman problem, postman tour or route inspection problem is to find a shortest closed path or circuit that visits every edge of an (connected) undirected graph. When the graph has an Eulerian circuit (a closed walk that covers every edge once), that circuit is an optimal solution. Otherwise, the optimization problem is to find the smallest number of graph edges to duplicate (or the subset of edges with the minimum possible total weight) so that the resulting multigraph does have an Eulerian circuit.[1] It can be solved in polynomial time.[2]\n\nThe problem was originally studied by the Chinese mathematician Kwan Mei-Ko in 1960, whose Chinese paper was translated into English in 1962.[3] The original name \"Chinese postman problem\" was coined in his honor; different sources credit the coinage either to Alan J. Goldman or Jack Edmonds, both of whom were at the U.S. National Bureau of Standards at the time.[4][5]\n\nA generalization is to choose any set T of evenly many vertices that are to be joined by an edge set in the graph whose odd-degree vertices are precisely those of T. Such a set is called a T-join. This problem, the T-join problem, is also solvable in polynomial time by the same approach that solves the postman problem.\n\n## Undirected solution and T-joins\n\nThe undirected route inspection problem can be solved in polynomial time by an algorithm based on the concept of a T-join. Let T be a set of vertices in a graph. An edge set J is called a T-join if the collection of vertices that have an odd number of incident edges in J is exactly the set T. A T-join exists whenever every connected component of the graph contains an even number of vertices in T. 
The T-join problem is to find a T-join with the minimum possible number of edges or the minimum possible total weight.\n\nFor any T, a smallest T-join (when it exists) necessarily consists of ${\\displaystyle {\\tfrac {1}{2}}|T|}$ paths that join the vertices of T in pairs. The paths will be such that the total length or total weight of all of them is as small as possible. In an optimal solution, no two of these paths will share any edge, but they may have shared vertices. A minimum T-join can be obtained by constructing a complete graph on the vertices of T, with edges that represent shortest paths in the given input graph, and then finding a minimum weight perfect matching in this complete graph. The edges of this matching represent paths in the original graph, whose union forms the desired T-join. Both constructing the complete graph, and then finding a matching in it, can be done in O(n^3) computational steps.\n\nFor the route inspection problem, T should be chosen as the set of all odd-degree vertices. By the assumptions of the problem, the whole graph is connected (otherwise no tour exists), and by the handshaking lemma it has an even number of odd vertices, so a T-join always exists. Doubling the edges of a T-join causes the given graph to become an Eulerian multigraph (a connected graph in which every vertex has even degree), from which it follows that it has an Euler tour, a tour that visits each edge of the multigraph exactly once. This tour will be an optimal solution to the route inspection problem.[6][2]\n\n## Directed solution\n\nOn a directed graph, the same general ideas apply, but different techniques must be used. If the directed graph is Eulerian, one need only find an Euler cycle. 
If it is not, one must find T-joins, which in this case entails finding paths from vertices with an in-degree greater than their out-degree to those with an out-degree greater than their in-degree such that they would make in-degree of every vertex equal to its out-degree. This can be solved as an instance of the minimum-cost flow problem in which there is one unit of supply for every unit of excess in-degree, and one unit of demand for every unit of excess out-degree. As such it is solvable in O(|V|^2|E|) time. A solution exists if and only if the given graph is strongly connected.[2][7]\n\n## Applications\n\nVarious combinatorial problems have been reduced to the Chinese Postman Problem, including finding a maximum cut in a planar graph and a minimum-mean length circuit in an undirected graph.[8]\n\n## Variants\n\nA few variants of the Chinese Postman Problem have been studied and shown to be NP-complete.[9]\n\n\u2022 The windy postman problem is a variant of the route inspection problem in which the input is an undirected graph, but where each edge may have a different cost for traversing it in one direction than for traversing it in the other direction. In contrast to the solutions for directed and undirected graphs, it is NP-complete.[10][11]\n\u2022 The Mixed Chinese postman problem: for this problem, some of the edges may be directed and can therefore only be visited from one direction. When the problem calls for a minimal traversal of a digraph (or multidigraph) it is known as the \"New York Street Sweeper problem.\"[12]\n\u2022 The k-Chinese postman problem: find k cycles all starting at a designated location such that each edge is traversed by at least one cycle. The goal is to minimize the cost of the most expensive cycle.\n\u2022 The \"Rural Postman Problem\": solve the problem with some edges not required.[11]\n\n## References\n\n1. 
^ Roberts, Fred S.; Tesman, Barry (2009), Applied Combinatorics (2nd\u00a0ed.), CRC Press, pp.\u00a0640\u2013642, ISBN\u00a09781420099829\n2. ^ a b c Edmonds, J.; Johnson, E.L. (1973), \"Matching Euler tours and the Chinese postman problem\" (PDF), Mathematical Programming, 5: 88\u2013124, doi:10.1007\/bf01580113, S2CID\u00a015249924\n3. ^ Kwan, Mei-ko (1960), \"Graphic programming using odd or even points\", Acta Mathematica Sinica (in Chinese), 10: 263\u2013266, MR\u00a00162630. Translated in Chinese Mathematics 1: 273\u2013277, 1962.\n4. ^ Pieterse, Vreda; Black, Paul E., eds. (September 2, 2014), \"Chinese postman problem\", Dictionary of Algorithms and Data Structures, National Institute of Standards and Technology, retrieved 2016-04-26\n5. ^ Gr\u00f6tschel, Martin; Yuan, Ya-xiang (2012), \"Euler, Mei-Ko Kwan, K\u00f6nigsberg, and a Chinese postman\" (PDF), Optimization stories: 21st International Symposium on Mathematical Programming, Berlin, August 19\u201324, 2012, Documenta Mathematica, Extra: 43\u201350, MR\u00a02991468.\n6. ^ Lawler, E.L. (1976), Combinatorial Optimization: Networks and Matroids, Holt, Rinehart and Winston\n7. ^ Eiselt, H. A.; Gendeaeu, Michel; Laporte, Gilbert (1995), \"Arc Routing Problems, Part 1: The Chinese Postman Problem\", Operations Research, 43 (2): 231\u2013242, doi:10.1287\/opre.43.2.231\n8. ^ A. Schrijver, Combinatorial Optimization, Polyhedra and Efficiency, Volume A, Springer. (2002).\n9. ^ Crescenzi, P.; Kann, V.; Halld\u00f3rsson, M.; Karpinski, M.; Woeginger, G, A compendium of NP optimization problems, KTH NADA, Stockholm, retrieved 2008-10-22\n10. ^ Guan, Meigu (1984), \"On the windy postman problem\", Discrete Applied Mathematics, 9 (1): 41\u201346, doi:10.1016\/0166-218X(84)90089-1, MR\u00a00754427.\n11. ^ a b Lenstra, J.K.; Rinnooy Kan, A.H.G. (1981), \"Complexity of vehicle routing and scheduling problems\" (PDF), Networks, 11 (2): 221\u2013227, doi:10.1002\/net.3230110211\n12. 
^ Roberts, Fred S.; Tesman, Barry (2009), Applied Combinatorics (2nd\u00a0ed.), CRC Press, pp.\u00a0642\u2013645, ISBN\u00a09781420099829","date":"2022-08-12 19:29:43","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 1, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.6675482988357544, \"perplexity\": 927.0766639128284}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2022-33\/segments\/1659882571745.28\/warc\/CC-MAIN-20220812170436-20220812200436-00640.warc.gz\"}"}
null
null
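The undirected solution described in the record above (collect the odd-degree vertices T, compute shortest paths, find a minimum-weight perfect matching on T, then add the matching cost to the total edge weight) can be sketched directly. This is an assumption-laden toy implementation with names of my own choosing: it brute-forces the matching instead of using a blossom algorithm, so it is only practical for a handful of odd vertices.

```python
def floyd_warshall(n, w):
    # All-pairs shortest paths; w maps undirected edges (u, v) -> weight.
    d = [[float("inf")] * n for _ in range(n)]
    for i in range(n):
        d[i][i] = 0.0
    for (u, v), c in w.items():
        d[u][v] = min(d[u][v], c)
        d[v][u] = min(d[v][u], c)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

def min_t_join_cost(T, d):
    # Brute-force minimum-weight perfect matching on the odd vertices T
    # (fine for small |T|; a real solver would use Edmonds' blossom algorithm).
    if not T:
        return 0.0
    first, rest = T[0], T[1:]
    best = float("inf")
    for i, mate in enumerate(rest):
        sub = rest[:i] + rest[i + 1:]
        best = min(best, d[first][mate] + min_t_join_cost(sub, d))
    return best

def chinese_postman_length(n, w):
    # Optimal tour length = total edge weight + cost of duplicating a min T-join.
    total = sum(w.values())
    deg = [0] * n
    for (u, v) in w:
        deg[u] += 1
        deg[v] += 1
    odd = [v for v in range(n) if deg[v] % 2 == 1]   # T = odd-degree vertices
    d = floyd_warshall(n, w)
    return total + min_t_join_cost(odd, d)

# Square with one diagonal: vertices 1 and 3 have odd degree, so the
# cheapest duplication is the shortest 1-3 path (1-2-3, cost 2).
edges = {(0, 1): 1.0, (1, 2): 1.0, (2, 3): 1.0, (3, 0): 1.0, (1, 3): 2.0}
print(chinese_postman_length(4, edges))   # 8.0
```

By the handshaking lemma the odd set always has even size, so the recursive pairing in `min_t_join_cost` never gets stuck with a single leftover vertex.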
{"url":"https:\/\/cstheory.stackexchange.com\/questions\/16182\/does-every-turing-recognizable-undecidable-language-have-a-np-complete-subset","text":"# Does every Turing-recognizable undecidable language have a NP-complete subset?\n\nDoes every Turing-recognizable undecidable language have a NP-complete subset?\n\nThe question could be seen as a stronger version of the fact that every infinite Turing-recognizable language has an infinite decidable subset.\n\nTuring-recognizable undecidable languages can be unary (define $x \\not\\in L$ unless $x = 0000\\ldots 0$, so the only difficult strings are composed solely of 0's). Mahaney's theorem says that no unary language can be NP-complete unless P=NP.","date":"2021-06-15 17:03:50","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.6119120717048645, \"perplexity\": 1378.603548704061}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2021-25\/segments\/1623487621450.29\/warc\/CC-MAIN-20210615145601-20210615175601-00375.warc.gz\"}"}
null
null
17-Year-Old Deaf, Partially Blind Dog Helped Save Toddler Lost Overnight Posted by Jack Davis | Nov 26, 2020 | featured, News, News International | 0 | It didn't really matter that ol' Max was deaf. Or that his eyesight wasn't very good. Instead, the 17-year-old blue heeler had a loyalty that helped keep his tiny human warm and safe until he could help lead rescuers to her. The saga began in 2018 on a rainy April night in Queensland, Australia, when Aurora, 3, wandered off around 3 in the afternoon, according to the Australian Broadcasting Corp. The rugged Australian bushland is not always hospitable, and the young girl was dressed in thin clothes. As soon as Aurora's family realized the little girl was missing, a search party of more than 100 people including State Emergency Service volunteers, police and nearby residents assembled and set out to find her. Little did they know, Max, the family dog, was on the case. The smart pup had followed Aurora over a mile from home and kept her warm for the next 15 hours while her family and rescuers continued their search. Around 8 a.m. the following day, Aurora's grandmother Leisa Bennett and others heard faint calls coming from the top of a mountain. Following the sound, Bennett was soon greeted by Max, and he led her straight to Aurora. The young girl was curled up on the ground, tired and cold, but alive thanks to Max. "When I heard her yell 'Grammy' I knew it was her," Bennett said. "I shot up the mountain … and when I got to the top, the dog came to me and led me straight to her." "He never left her sight. She smelled of dog, she slept with the dog," Bennett said. "The area around the house is quite mountainous and is very inhospitable terrain to go walking in, so she'd traveled quite a distance with her dog that was quite loyal to her," said State Emergency Service area controller Ian Phipps. 
"The search was actually quite hard where the volunteers and the police were, amongst the very steep slopes," Phipps said. For his heroics, Queensland Police named Max their first-ever honorary police dog. You may remember Max from such news stories as "3yo girl found safe, guarded by family dog" and "Dog hailed hero for keeping lost girl safe"… Well today, Max officially became Queensland's first ever honorary police dog. STILL SUCH A GOOD BOY! 😍 More: https://t.co/1YslFe2jhO pic.twitter.com/xLGxg3fG0q — Queensland Police (@QldPolice) May 1, 2018 TRENDING: Ilhan Omar suggests Biden should 'reverse' Trump's Middle East agreements, says they 'weren't peace deals' Thankfully, other than a few minor scratches, Aurora made it out of the ordeal unscathed. "With the weather last night it's quite lucky she is well because it was cold, it was cold and raining," Phipps said after the girl was rescued. "She's a very hardy young lass to survive that without any ill effects and everyone, all the volunteers are extremely happy," he said. "I think [Aurora] was a bit overwhelmed by the tears and the howling, but I explained to her how happy those tears were," Bennett said. "It could have gone any of 100 ways, but she's here, she's alive, she's well and it's a great outcome for our family," she said. Here's the Secretive Next-Gen Helicopter That Could Replace Army's Fleet of Black Hawks Obama's Simmering Resentment of Benjamin Netanyahu Ouch: Trump Supporters Savage Alyssa Milano Over Massively Hypocritical 'Olive Branch' POLL: Republicans Increasingly Negative Towards Fox News PreviousHere's the Secretive Next-Gen Helicopter That Could Replace Army's Fleet of Black Hawks NextIs Liberalism A Mental Disease? Data Is Uncovering an Alarming Trend Among Liberals! The Jerusalem Embassy Move, Hypocrisy Shown In Reaction To It. 
The purpose of the presentation is to propose a model for use in reviewing, conducting, and documenting qualitative assessments in counseling research and practice. By establishing evidence supporting qualitative assessments and processes, counselor educators and counselors can begin to build a record of support. This new evidence can serve counselors in the selection, administration, and documentation of qualitative assessment to demonstrate the efficacy of these processes in counseling.
SMa.r.t. Column: An Urgent Appeal to Our New City Council & Management The future lies within our cities, but our cities are crumbling! Cities are home to over 1/2 the world's population and in 20 years will be home to 2/3rds! They are the focal points of poverty & health, water & energy, food & waste, transportation & congestion, climate change & environmental degradation, innovation & economic growth – with changes in climate, economics, and social media creating a new and ever-changing reality. So where is this increasing growth and transformation headed? What will managing this growth require? What direction will the rapid shifts in communication take us? If our cities fail, so will we! The American dream has been focused on ownership and 80% of wealth is created in cities where people find opportunities. Will we continue to enjoy the benefits of cities – the restaurants & cafes, the art galleries & cultural facilities, parks & landscaped environments – without crime, traffic, and pollution? And the dream of ownership is becoming one of sharing – not me, but us! Will our future experience be increased sharing of resources – shared workspace, shared transportation, shared living quarters? Apartments are shrinking in area with quickly diminishing privacy as developers densify to increase profits! We're facing an unprecedented scale of planning which we frankly don't know how to handle with the future of cities and urban design requiring cross-disciplinary integration. What will managing this growth require and what are the barriers? And in our environment, where will the water required to support such growth come from in this time of extended drought? A healthy city will need to balance economic & resource efficiency – accommodating growth while lowering resource consumption. And even with computers & teleconferencing, face-to-face opportunities will still prevail. 
But with LA County's sprawl, commuting enormous distances to jobs by public transportation while also managing childcare is not acceptable. So how do we keep Santa Monica from also falling deeper into this abyss? How do we stop developers from "piecemealing" our city, which has proven very expensive? There are 5 objectives I feel the City Council should consider to keep Santa Monica from crumbling still further! Undertake affordable housing on public property! Provide incentives to promote responsible, controlled development on the boulevards! Initiate zoning code revisions & incentives to rid the city of these faceless 6-10 story block buildings with in-line balconies reminiscent of computer punch cards. A master plan to control our hemorrhaging debt. Providing more neighborhood green space by turning segments of streets/alleys into parks. A viable approach to affordable housing: As an architect & planner for more than 50 years, I have been involved in the design of several thousand affordable units – 2, 3 & 4 story terraced courtyard housing primarily on excess, vacant public land. The city, developer, and public all benefited from the significantly reduced land & construction costs! According to current online records, the City of Santa Monica owns 10% of the 5,312 acres within city limits, and approximately 10% of that is vacant. If only half of the vacant 531 acres is usable, it would provide 11,700 units at a desirable garden-apartment density of terraced 2, 3, & 4 stories! Per the example shown on a one-acre site, you could easily build 44 units/acre with surface parking and bicycles instead of subterranean parking, along with sitting & picnic areas, volleyball & basketball, and gardening & playground areas. Compare garden apartments with open space to the 113 units per acre currently proposed and jammed in over 2 levels of subterranean parking on the Gelson's site, of which only 10% of the units are considered affordable.
Overall costs could be reduced – land costs of 25-30% would be 0%, low-rise type V construction costs would save 15-20% (especially with the coming digital fabrication of buildings saving on parts, labor, and construction time along with less neighborhood intrusion) and saving 10-15% of financing costs! And reduced permit costs and expedited review could save another 10%. And over a 15-20 year period, the affordable rents would repay the city its land value, provide a fair return to the developer, and more importantly provide tenant equity leading to apartment ownership after a 15-20 year mortgage is paid off with tenant rents! This would provide housing for approximately 26,000 or 28% of our current population – if this amount of housing is even necessary, which is unlikely given the dire water situation? After +/- 40 years, it's time to stop going down the wrong street! Incentive to promote market rate boulevard development: In Santa Monica, we have approximately 8 miles of boulevards with approximately 80% being either vacant, surface parking, or 1 story retail with the other 20% primarily 2 stories. Again by offering tax and processing incentives, the vacant properties, parking lots, and 1 story properties could be developed with stepped 2, 3, & 4 story buildings having ground floor meandering sidewalks and courtyards, upper level terracing, and boulevards turned into landscaped parkways with thousands of apartments or condos together with surrounding neighborhoods where residents could walk to jobs, shop, or just relax in the nearby courtyards and mini-parks. 
Zoning Code revisions: Although the development industry (west coast, east coast, and abroad) has successfully convinced Governor Gavin Newsom and the State legislature to unnecessarily and substantially increase housing across the state with code revisions allowing 6-10 story building heights and decreased property-line setbacks along with the demise of R1 zoning, we are still able to offer developers meaningful tax exemptions and streamlined processing procedures to offset this totally unnecessary visual & physical boondoggle. The absolute necessity of a citywide Master Plan for our future: Instead of the $5m tentatively budgeted in the pending 2023 SM budget for a master plan for the SM airport, spend a similar amount on a citywide master plan – one that the city, in its 138-year history, has never had! Instead, we have juggled and piecemealed development. Let's not piecemeal our master planning. What kind of city do we want to become? What should our sources of revenue be? What is the image we would like to project to outsiders? What is the scale, height, and massing appropriate for the city's image? And will we be a net-zero city, and if not, what are our energy goals? Without a master plan, this is a very stupid and expensive way to run a city, especially when the current computer age allows continuous updating! We can't continue allowing Developers to rebrand Santa Monica! What is our vision? What are we going to insist our Council do about it? Turning segments of residential streets into parks: For the past 5 years, I've been working on and processing my idea for small neighborhood parks in every residential neighborhood throughout the City of Compton and eventually throughout every urban community in the country.
In a typical neighborhood of +/- 24-30 residential blocks defined by perimeter arterial streets, an "active" & "passive" park could be built at each end of the neighborhood on the side street of a residential block without interfering with residential or emergency access. In Compton, a city comparable in size with Santa Monica in area & population, 34 small parks offering a basketball court, playground equipment, community gardening & fitness classes, etc., could be completed for less than $12m total and within an estimated 3 years of design, approval & construction. C'mon City Council, get your act together and do the job the Residents need & elect you to do! This is not a political job you've selected, and it is certainly not a beauty contest you've won – this is a planning job!! We can't continue allowing Developers to rebrand Santa Monica – What is our vision, our future? Are you up to the task??? Ron Goldman FAIA for SMa.r.t. Santa Monica Architects for a Responsible Tomorrow Thane Roberts, Architect, Robert H. Taylor AIA, Ron Goldman FAIA, Architect, Dan Jansenson, Architect & Building and Fire-Life Safety Commission, Samuel Tolkin Architect & Planning Commissioner, Mario Fonda-Bonardi AIA & Planning Commissioner, Marc Verville M.B.A, CPA (Inactive), Michael Jolly, AIR-CRE. For previous articles see www.santamonicaarch.wordpress.com/writing By SM.a.r.t December 2, 2022 in News, Real Estate
The induced vortex is a circularly rotating airflow that forms behind the wing, at its edges. In fact, vortex shedding accompanies any point of the wing where the lift changes along the span; the vorticity actually rolls up into large vortices at the wingtips, at the flaps, and at any other abrupt break in the wing. During flight these induced vortices are generated continuously; they draw off energy, and their formation is accompanied by induced drag. Choosing the wing's parameters, and taking the conditions of use into account, so as to reduce induced drag is an important task of the design process.

Formation of the vortices

When lift is generated on a wing, the pressure on the upper surface is lower than on the lower surface of the wing. This three-dimensional flow develops around real, finite wings because the pressure difference between the lower and upper surfaces equalizes at the wingtips. Looking forward from behind the tail of an aircraft, the vortex at the tip of the left wing would be seen rotating clockwise, and the vortex at the right wingtip rotating counterclockwise. Their magnitude is independent of the shape of the airfoil; their presence is tied solely to the pressure difference and to the generation of lift.

Effects and how to reduce them

Increasing the aspect ratio is the most effective way to reduce the induced drag caused by the induced vortices. This explains why the wings of gliders, for which the magnitude of the induced drag weighs heavily, are built with a high aspect ratio. Their drawback, however, shows up in maneuverability and structural stability. Another way to reduce the adverse effects is the use of winglets, as can also be observed on modern airliners.
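The aspect-ratio effect can be stated quantitatively. A standard lifting-line estimate (a textbook formula added here for illustration, not part of the original article; C_L is the lift coefficient, AR the aspect ratio and e the Oswald efficiency factor) gives the induced drag coefficient as

```latex
C_{D,i} \;=\; \frac{C_L^{2}}{\pi \, e \, AR}
```

So at a given lift coefficient, doubling the aspect ratio roughly halves the induced drag, which is consistent with the long, slender glider wings described above.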
Q: grep with after-context that does not contain a keyword

I want to grep through logs and gather a certain exception stacktrace, but I want to see only those matches that do not contain certain keywords in the after-context. I do not know on which line of the after-context the keyword is. Simple example - given this shell code:

    grep -A 2 A <<EOF
    A
    B
    C
    R
    A
    Z
    Z
    X
    EOF

the output is:

    A
    B
    C
    --
    A
    Z
    Z

I'd like the output to be:

    A
    Z
    Z

I want to exclude any match that has 'B' in its after-context. How do I do this? Using grep is not a requirement, though I only have access to coreutils and perl.

A: You could try

    sed -ne '/A/{N;N;/\n.*B/d;p;i --' -e '}'

It seems to do what you need, except for the trailing --.

A: This problem is a good fit for awk:

    grep -A2 A LOG_FILE | awk -v RS='--\n' '!/B/ { printf "%s", $0 }'

* -v RS='--\n' sets the record separator.
* !/B/ keeps only the records that do not contain B.

A: This is in perl.

    use strict;
    my $Data = join '',<DATA>;
    while( $Data =~ /(A\s+\w\s+\w)/msg ) {
        my $Match = $1;
        next if $Match =~ /B/;
        print $Match;
    }
    __DATA__
    A
    B
    C
    R
    A
    Z
    Z
    X

A: You might do:

    #!/usr/bin/env perl
    use strict;
    use warnings;

    my $pushok = 0;
    my @group;

    while (<DATA>) {
        chomp;
        if ( m/A/ ) {
            $pushok = 1;
            push @group, $_;
            next;
        }
        if ( m/B/ ) {
            $pushok = 0;
            @group = ();
            next;
        }
        $pushok and push @group, $_;
        if ( @group == 3 ) {
            print +(join "\n", @group), "\n";
            $pushok = 0;
            @group = ();
        }
    }

    __DATA__
    A
    B
    C
    R
    A
    Z
    Z
    X
    AAA
    BBB
    AAA
    XXX
    XXX

Which would produce:

    A
    Z
    Z
    AAA
    XXX
    XXX
Schoemansdal (literally "Schoeman's valley") is a former small town 16 km from the modern town of Louis Trichardt. It arose in the days of the Great Trek but was abandoned thirty years after its founding because it could not be defended against frequent attacks by the Venda tribes. From 1849 to 1858 it was the capital of the small Boer republic of Zoutpansberg. After the Transvaal regained control of the area, it was decided not to rebuild Schoemansdal but to found a new settlement near Makhado. Schoemansdal is thus the only Boer settlement that never grew into a modern town.

Louis Tregardt and Hans van Rensburg

The settlement was founded by the Voortrekker leaders Louis Tregardt and Hans van Rensburg (of the van Rensburg family to which General Willem Cornelis Janse van Rensburg belonged), who reached the Soutpansberg mountain in 1836. After a conflict between the two leaders, van Rensburg moved further east toward Maputo, but his party was wiped out on the way. Tregardt and his people remained at the site for more than a year. After searching in vain for van Rensburg in Zimbabwe and in the lands east of their territory, they decided to move toward Maputo in order to get farther away from the British. The trek began in September 1837, and after seven months they managed to reach their destination, but at a high price: 27 of the 53 men died on the way, among them Tregardt himself. They spent two and a half months crossing the plateau by the Drakensberg mountains with their nine wagons; the rear wheels had to be removed so that the wagons could slide down the mountains.

Hendrik Potgieter

Hendrik Potgieter led the next migration. He arrived from Ohrigstad after many of his people had died of malaria. He convinced his people that the Soutpansberg mountain was far enough from the British to help them secure their independence from British rule. He chose a site for the capital of his republic and named it Zoutpansberg.
Zoutpansberg

At first things went well: a community formed, a church was built, and a fort was constructed. Portuguese traders from the neighboring colony opened their shops in the town, and the ivory trade flourished. In 1852 Potgieter died, and after some hesitation the town's name was finally changed to Schoemansdal in 1855. After that, the town of 1,800 inhabitants became a place where ivory hunters and other traders came to do business. Arms smuggling also flourished there; about 30 tons of illegal lead for making bullets passed through the town. This situation angered the neighboring Venda tribe, which attacked the town on 15 July 1867 and set it on fire. The town was never revived after that, and its cemetery no longer exists either.
\section*{Peter Paule} This article is dedicated to Peter Paule, one of the great pioneers of experimental mathematics and symbolic computation. In particular, it is greatly inspired by his masterpiece, co-authored with Manuel Kauers, `The Concrete Tetrahedron' [KP], where a whole chapter is dedicated to our favorite {\it ansatz}, the $C-$finite ansatz. \section*{Introduction} Once upon a time there was a knot that no one could untangle, it was so complicated. Then came Alexander the Great and, in one second, {\it cut} it with his sword. Analogously, many mathematical problems are very hard, and the current party line is that in order for a problem to be considered solved, the solution, or answer, should be given a logical, rigorous, {\it deductive} proof. Suppose that you want to answer the following question: \vspace{1mm}\noindent {\it Find a closed-form formula, as an expression in $n$, for the real part of the $n$-th complex root of the Riemann zeta function, $\zeta(s)$.} \vspace{1mm}\noindent Let's call this quantity $a(n)$. Then you compute these real numbers, and find out that $a(n)=\frac{1}{2}$ for $n \leq 1000$. Later you are told by Andrew Odlyzko that $a(n)=\frac{1}{2}$ for all $1 \leq n \leq 10^{10}$. Can you conclude that $a(n)=\frac{1}{2}$ for {\it all} $n$? We would, but, at this time of writing, there is no way to deduce it rigorously, so it remains an open problem. It is very possible that one day it will turn out that $a(n)$ (the real part of the $n$-th complex root of $\zeta(s)$) belongs to a certain {\it ansatz}, and that checking it for the first $N_0$ cases implies its truth in general, but this remains to be seen. There are also frameworks, e.g. {\it Pisot sequences} (see [ESZ], [Z2]), where the {\it inductive} approach fails miserably.
On the other hand, in order to (rigorously) prove that $1^3+2^3+3^3+ \dots + n^3=(n(n+1)/2)^2$, for {\it every} positive integer $n$, it suffices to check it for the five special cases $0 \leq n \leq 4$, since both sides are polynomials of {\bf degree} $4$, hence the difference is a polynomial of degree $\leq 4$, given by five `degrees of freedom'. This is an example of what is called the `$N_0$ principle'. In the case of a polynomial identity (like this one), $N_0$ is simply the degree plus one. But our favorite {\it ansatz} is the $C$-finite ansatz. A sequence of numbers $\{a(n)\}$ ($0 \leq n < \infty$) is $C$-finite if it satisfies a {\it linear recurrence equation with constant coefficients}. For example the Fibonacci sequence that satisfies $F(n)-F(n-1)-F(n-2)=0$ for $n \geq 2$. The $C$-finite ansatz is beautifully described in Chapter 4 of the masterpiece `The Concrete Tetrahedron' ([KP]), by Manuel Kauers and Peter Paule, and discussed at length in [Z1]. Here the `$N_0$ principle' also holds (see [Z3]), i.e. by looking at the `big picture' one can determine {\it a priori}, a positive integer, often not that large, such that checking that $a(n)=b(n)$ for $1 \leq n \leq N_0$ implies that $a(n)=b(n)$ for all $n>0$. A sequence $\{a(n)\}_{n=0}^{\infty}$ is $C$-finite if and only if its (ordinary) {\it generating function} $f(t):=\sum_{n=0}^{\infty} a(n)\,t^n$ is a {\bf rational function} of $t$, i.e. $f(t)=P(t)/Q(t)$ for some {\it polynomials} $P(t)$ and $Q(t)$. For example, famously, the generating function of the Fibonacci sequence is $t/(1-t-t^2)$. Phrased in terms of generating functions, the $C$-finite ansatz is the subject of chapter 4 of yet another masterpiece, Richard Stanley's `Enumerative Combinatorics' (volume 1) ([S]). 
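To see the equivalence between $C$-finite sequences and rational generating functions in action, here is a small Python sketch (ours, not part of the paper's accompanying Maple packages) that expands a rational function $P(t)/Q(t)$ as a power series and checks that $t/(1-t-t^2)$ indeed reproduces the Fibonacci numbers:

```python
def series_coeffs(num, den, n_terms):
    # Taylor coefficients c[0], c[1], ... of num(t)/den(t) at t = 0,
    # assuming den[0] = 1; uses c[n] = num[n] - sum_{i>=1} den[i]*c[n-i],
    # which is exactly the linear recurrence encoded by the denominator.
    c = []
    for n in range(n_terms):
        p = num[n] if n < len(num) else 0
        s = sum(den[i] * c[n - i] for i in range(1, min(n, len(den) - 1) + 1))
        c.append(p - s)
    return c

# t/(1 - t - t^2): numerator [0, 1], denominator [1, -1, -1]
fib = series_coeffs([0, 1], [1, -1, -1], 12)
print(fib)  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89]
```

The same routine expands any rational generating function; the recurrence used to generate the coefficients is read off directly from the denominator, which is the heart of the $C$-finite ansatz.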
There it is shown, using the `transfer matrix method' (that originated in physics), that in many combinatorial situations, where there are finitely many states, one is guaranteed, {\it a priori}, that the generating function is rational. Alas, finding this transfer matrix, in each specific case, is not easy! The human has to first figure out the set of states, and then, using human ingenuity, figure out how they interact. A better way is to automate it. Let the computer do the research, and using `symbolic dynamical programming', the computer, automatically, finds the set of states, and constructs, {\it all by itself} (without any human pre-processing) the transfer matrix. But this may not be so efficient for two reasons. First, at the very end, one has to invert a matrix with {\it symbolic} entries, hence compute symbolic determinants, which is time-consuming. Second, setting up the `infra-structure' and writing a program that would enable the computer to do `machine-learning' can be very daunting. In this article, we will describe two {\it case studies} where, by `general nonsense', we know that the generating functions are rational, and it is easy to bound the degree of the denominator (alias the order of the recurrence satisfied by the sequence). Hence a simple-minded, {\it empirical}, approach of computing the first few terms and then `fitting' a recurrence (equivalently rational function) is possible. The first case-study concerns counting spanning trees in families of grid-graphs, studied by Paul Raff ([R]), and F.J. Faase ([F]). In their research, the human first analyzes the intricate combinatorics, manually sets up the transfer matrix, and only at the end lets a computer-algebra system evaluate the symbolic determinant. Our key observation, which enabled us to `cut the Gordian knot', is that the terms of the studied sequences are expressible as {\it numerical} determinants.
Since computing numerical determinants is so fast, it is easy to compute sufficiently many terms, and then fit the data into a rational function. Since we easily have an upper bound for the degree of the denominator of the rational function, everything is rigorous. The second case-study is computing generating functions for sequences of determinants of `almost diagonal Toeplitz matrices'. Here, in addition to the `naive' approach of cranking enough data and then fitting it into a rational function, we also describe the `symbolic dynamical programming method', which surprisingly is faster for the range of examples that we considered. But we believe that for sufficiently large cases, the naive approach will eventually be more efficient, since the `deductive' approach works equally well for the analogous problem of finding the sequence of permanents of these almost diagonal Toeplitz matrices, for which the naive approach will soon be intractable. This article may be viewed as a {\it tutorial}, hence we include lots of implementation details, and Maple code. We hope that it will inspire readers (and their computers!) to apply it in other situations. \section*{Accompanying Maple Packages} This article is accompanied by three Maple packages, {\tt GFMatrix.txt}, {\tt JointConductance.txt}, and {\tt SpanningTrees.txt}, all available from the url \bigskip {\tt http://sites.math.rutgers.edu/\~{}zeilberg/mamarim/mamarimhtml/gordian.html} \bigskip On that page there are also links to numerous sample input and output files. \section*{The human approach to enumerating spanning trees of grid graphs} In order to illustrate the advantage of ``keeping it simple'', we will review the human approach to the enumeration task that we will later redo using the `Gordian knot' way. While the human approach is definitely interesting for its own sake, it is rather painful.
Our goal is to enumerate the number of spanning trees in certain families of graphs, notably grid graphs and their generalizations. Let's examine Paul Raff's interesting approach described in his paper {\it Spanning Trees in Grid Graph} [R]. Raff's approach was inspired by the pioneering work of F. J. Faase ([F]). The goal is to find generating functions that enumerate spanning trees in grid graphs and the product of an arbitrary graph and a path or a cycle. Grid graphs have two parameters, let's call them $k$ and $n$. For a $k \times n$ grid graph, let's think of $k$ as {\it fixed} while $n$ is the discrete input variable of interest. {\bf Definition} The $k \times n$ grid graph $G_k(n)$ is the following graph given in terms of its vertex set $V$ and edge set $E$: $$ V = \{v_{ij}|1 \leq i \leq k, 1 \leq j \leq n\}, $$ $$ E = \{\{v_{ij}, v_{i'j'}\}| |i-i'|+|j-j'|=1\}. $$ The main idea in the human approach is to consider the collection of set-partitions of $[k] = \{1,2,\dots,k\}$ and figure out the transition when we extend a $k \times n$ grid graph to a $k \times (n+1)$ one. Let $\mathcal{B}_k$ be the collection of all set-partitions of $[k]$. $B_k = |\mathcal{B}_k|$ is called the $k$-th Bell number. Famously, the exponential generating function of $B_k$, namely $\sum_{k=0}^{\infty} \frac{B_k}{k!}\, t^k$, equals $e^{e^t-1}$. A lexicographic ordering on $\mathcal{B}_k$ is defined as follows: {\bf Definition} Given two partitions $P_1$ and $P_2$ of $[k]$, for $i \in [k]$, let $X_i$ be the block of $P_1$ containing $i$ and $Y_i$ be the block of $P_2$ containing $i$. Let $j$ be the minimum number such that $X_j \neq Y_j$. Then $P_1 < P_2$ iff 1. $|P_1| < |P_2|$ or 2. $|P_1| = |P_2|$ and $X_j \prec Y_j$ where $\prec$ denotes the normal lexicographic order. For example, here is the ordering for $k=3$: $$ \mathcal{B}_3 = \{\{\{1,2,3\}\}, \{\{1\}, \{2,3\}\}, \{\{1,2\}, \{3\}\}, \{\{1,3\}, \{2\}\}, \{\{1\}, \{2\}, \{3\}\}\} \quad .
$$ For simplicity, we can rewrite it as follows: $$ \mathcal{B}_3 = \{123, 1/23, 12/3, 13/2, 1/2/3\}. $$ {\bf Definition} Given a spanning forest $F$ of $G_k(n)$, the partition induced by $F$ is obtained from the equivalence relation \centerline{$i \sim j \Longleftrightarrow v_{n,i}, v_{n,j} $ are in the same component of $F$.} For example, the partition induced by any spanning tree of $G_k(n)$ is $123\dots k$ because by definition, in a spanning tree, all $v_{n,i}, 1 \leq i \leq k$ are in the same component. For the other extreme, where every component only consists of one vertex, the corresponding set-partition is $1/2/3/\dots /k-1/k$ because no two $v_{n,i}, v_{n,j}$ are in the same component for $1 \leq i<j \leq k$. {\bf Definition} Given a spanning forest $F$ of $G_k(n)$ and a set-partition $P$ of $[k]$, we say that $F$ is consistent with $P$ if: 1. The number of trees in $F$ is precisely $|P|$. 2. $P$ is the partition induced by $F$. Let $E_n$ be the set of edges $E(G_k(n)) \backslash E(G_k(n-1))$, then $E_n$ has $2k-1$ members. Given a forest $F$ of $G_k(n-1)$ and some subset $X \subseteq E_n$, we can combine them to get a forest of $G_k(n)$ as follows. We just need to know how many subsets of $E_n$ can transfer a forest consistent with some partition to a forest consistent with another partition. This leads to the following definition: {\bf Definition} Given two partitions $P_1$ and $P_2$ in $\mathcal{B}_k$, a subset $X \subseteq E_n$ transfers from $P_1$ to $P_2$ if a forest consistent with $P_1$ becomes a forest consistent with $P_2$ after the addition of $X$. In this case, we write $X \diamond P_1 = P_2$. With the above definitions, it is natural to define a $B_k \times B_k$ transfer matrix $A_k$ by the following: $$ A_k(i,j) = | \{A \subseteq E_{n+1} | A \diamond P_j = P_i \} |. $$ Let's look at the $k=2$ case as an example. We have $$ \mathcal{B}_2 = \{12, 1/2\}, E_{n+1} = \{\{v_{1,n}, v_{1,n+1}\}, \{v_{2,n}, v_{2,n+1}\}, \{v_{1,n+1}, v_{2,n+1}\}\}. 
$$ For simplicity, let's call the edges in $E_{n+1}$ $e_1, e_2, e_3$. Then to transfer the set-partition $P_1 = 12$ to itself, we have the following three ways: $\{e_1, e_2\}, \{e_1, e_3\}, \{e_2, e_3\}$. In order to transfer the partition $P_2=1/2$ into $P_1$, we only have one way, namely: $\{e_1, e_2, e_3\}$. Similarly, there are two ways to transfer $P_1$ to $P_2$ and one way to transfer $P_2$ to itself. Hence the transfer matrix is the following $2 \times 2$ matrix: $$ A = \begin{bmatrix} 3 & 1 \\ 2 & 1 \end{bmatrix}. $$ Let $T_1(n), T_2(n)$ be the number of forests of $G_k(n)$ which are consistent with the partitions $P_1$ and $P_2$, respectively. Let $$ v_n = \begin{bmatrix} T_1(n) \\ T_2(n) \end{bmatrix} \quad , $$ then $$ v_n =Av_{n-1} \quad . $$ The characteristic polynomial of $A$ is $$ \chi_\lambda(A) = \lambda^2-4\lambda+1. $$ By the Cayley-Hamilton Theorem, $A$ satisfies $$ A^2-4A+1=0. $$ Hence the recurrence relation for $T_1(n)$ is $$ T_1(n) = 4T_1(n-1) - T_1(n-2), $$ the sequence is $\{1, 4, 15, 56, 209, 780, 2911, 10864, 40545, 151316, \dots \}$ (OEIS A001353) and the generating function is $$ \frac{x}{1-4x+x^2}. $$ Similarly, for the $k=3$ case, the transfer matrix $$ A_3 = \begin{bmatrix} 8 & 3 & 3 & 4 & 1 \\ 4 & 3 & 2 & 2 & 1 \\ 4 & 2 & 3 & 2 & 1 \\ 1 & 0 & 0 & 1 & 0 \\ 3 & 2 & 2 & 2 & 1 \end{bmatrix}. $$ The transfer matrix method can be generalized to general graphs of the form $G \times P_n$, especially cylinder graphs. As one can see, we had to think very hard. First we had to establish a `canonical' ordering over set-partitions, then define the consistency between partitions and forests, then look for the transfer matrix and finally worry about initial conditions.
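Once the transfer matrix is known, though, generating the sequence is purely mechanical. A Python sketch (ours, not from the accompanying packages) that iterates the $2 \times 2$ matrix above, starting from $G_2(1)$, a single edge, which has one spanning tree and one two-component spanning forest:

```python
A = [[3, 1],
     [2, 1]]          # transfer matrix for k = 2
v = [1, 1]            # [T1(1), T2(1)] for G_2(1), a single edge
seq = [v[0]]
for _ in range(9):
    # v_n = A v_{n-1}
    v = [A[0][0] * v[0] + A[0][1] * v[1],
         A[1][0] * v[0] + A[1][1] * v[1]]
    seq.append(v[0])
print(seq)  # [1, 4, 15, 56, 209, 780, 2911, 10864, 40545, 151316]
# the Cayley-Hamilton recurrence T1(n) = 4*T1(n-1) - T1(n-2):
assert all(seq[n] == 4 * seq[n - 1] - seq[n - 2] for n in range(2, len(seq)))
```

The hard part, of course, was setting up $A$ and the initial vector in the first place.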
Rather than think so hard, let's compute sufficiently many terms of the enumeration sequence, and try to guess a linear recurrence equation with constant coefficients, that would be provable {\it a posteriori} just because we know that {\it there exists} a transfer matrix without worrying about finding it explicitly. But how do we generate sufficiently many terms? Luckily, we can use the celebrated {\bf Matrix Tree Theorem}. \subsection*{The Matrix Tree Theorem} {\bf Matrix Tree Theorem} If $A = (a_{ij})$ is the adjacency matrix of an arbitrary graph $G$, then the number of spanning trees is equal to the determinant of any co-factor of the Laplacian matrix $L$ of $G$, where $$ L = \begin{bmatrix} a_{12}+\dots+a_{1n} & -a_{12} & \dots & -a_{1,n} \\ -a_{21} & a_{21}+\dots+a_{2n} & \dots & -a_{2,n} \\ \vdots & \vdots & \ddots & \vdots \\ -a_{n1} & -a_{n2} & \dots & a_{n1}+\dots+a_{n,n-1} \end{bmatrix}. $$ For instance, taking the $(n,n)$ co-factor, we have that the number of spanning trees of $G$ equals $$ \begin{vmatrix} a_{12}+\dots+a_{1n} & -a_{12} & \dots & -a_{1,n-1} \\ -a_{21} & a_{21}+\dots+a_{2n} & \dots & -a_{2,n-1} \\ \vdots & \vdots & \ddots & \vdots \\ -a_{n-1,1} & -a_{n-1,2} & \dots & a_{n-1,1}+\dots+a_{n-1,n} \end{vmatrix}. $$ Since computing determinants for numeric matrices is very fast, we can find the generating functions for the number of spanning trees in grid graphs and more generalized graphs by experimental methods, using the C-finite ansatz. \section*{The GuessRec Maple procedure} Our engine is the Maple procedure {\tt GuessRec(L)} that resides in the Maple packages accompanying this article. We used the `vanilla', straightforward, linear algebra approach for guessing, using {\it undetermined coefficients}. A more efficient way is via the celebrated Berlekamp-Massey algorithm ([Wi]). Since the guessing part is not the {\it bottle-neck} of our approach ( it is rather the data-generation part), we preferred to keep it simple. 
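The data-generation step is exactly the cofactor-determinant computation above. Here is an exact-arithmetic Python stand-in for it (the function name is ours, not the paper's Maple procedure); for the $2 \times 3$ grid graph $G_2(3)$ it returns $15$, matching the sequence quoted earlier:

```python
from fractions import Fraction

def num_spanning_trees(adj):
    # Matrix Tree Theorem: delete the last row and column of the
    # Laplacian L = D - A and return the determinant (exact arithmetic).
    m = len(adj) - 1
    M = [[Fraction(sum(adj[i]) if i == j else -adj[i][j]) for j in range(m)]
         for i in range(m)]
    det = Fraction(1)
    for col in range(m):
        piv = next((r for r in range(col, m) if M[r][col] != 0), None)
        if piv is None:
            return 0
        if piv != col:
            M[col], M[piv] = M[piv], M[col]
            det = -det
        det *= M[col][col]
        for r in range(col + 1, m):
            f = M[r][col] / M[col][col]
            M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    return int(det)

# vertices of G_2(3): 0-1-2 in the top row, 3-4-5 in the bottom row
edges = [(0, 1), (1, 2), (3, 4), (4, 5), (0, 3), (1, 4), (2, 5)]
adj = [[0] * 6 for _ in range(6)]
for i, j in edges:
    adj[i][j] = adj[j][i] = 1
print(num_spanning_trees(adj))  # 15
```

Each term of the sequence costs one numerical determinant, which is why collecting enough data for the guessing step is cheap.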
Naturally, we need to collect enough data. The input is the data (given as a list) and the output is a conjectured recurrence relation derived from that data. Procedure {\tt GuessRec(L)} inputs a list, {\tt L}, and attempts to output a linear recurrence equation with constant coefficients satisfied by the list. It is based on procedure {\tt GuessRec1(L,d)} that looks for such a recurrence of order $d$. The output of {\tt GuessRec1(L,d)} consists of the list of the initial $d$ values (`initial conditions') and the recurrence equation represented as a list. For instance, if the input is $L = [1,1,1,1,1,1]$ and $d=1$, then the output will be $[[1],[1]]$; if the input is $L=[1, 4, 15, 56, 209, 780, 2911, 10864, 40545, 151316]$ as the $k=2$ case for grid graphs and $d=2$, then the output will be $[[1, 4], [4, -1]]$. This means that our sequence satisfies the recurrence $a(n)=4a(n-1)-a(n-2)$, subject to the initial conditions $a(0)=1,a(1)=4$. Here is the Maple code:

\bigskip

\vspace{1mm}\noindent
{\obeylines
{\tt
GuessRec1:=proc(L,d) local eq,var,a,i,n:
if nops(L)<=2*d+2 then
\quad print(`The list must be of size >=`, 2*d+3 ):
\quad RETURN(FAIL):
fi:
var:=$\{$seq(a[i],i=1..d)$\}$:
eq:=$\{$seq(L[n]-add(a[i]*L[n-i],i=1..d),n=d+1..nops(L))$\}$:
var:=solve(eq,var):
if var=NULL then
\quad RETURN(FAIL):
else
\quad RETURN([[op(1..d,L)],[seq(subs(var,a[i]),i=1..d)]]):
fi:
end:
}
}
The first list of length $d$ in the output constitutes the list of initial conditions, while the second list, $R$, codes the linear recurrence, where $[R[1], \dots R[d]]$ stands for the following recurrence:
$$
L[n] = \sum_{i=1}^d R[i]L[n-i].
$$
Here is the Maple procedure {\tt GuessRec(L)}:

\bigskip

{\obeylines
{\tt
GuessRec:=proc(L) local gu,d:
for d from 1 to trunc(nops(L)/2)-2 do
\quad gu:=GuessRec1(L,d):
\quad if gu<>FAIL then
\quad \quad RETURN(gu):
\quad fi:
od:
FAIL:
end:
}
}

\bigskip

This procedure inputs a sequence $L$ and tries to guess a recurrence equation with constant coefficients satisfying it. It returns the initial values and the recurrence equation as a pair of lists. Since the length of $L$ is limited, the maximum order of the recurrence cannot be more than $\lfloor |L|/2 \rfloor -2$. With this procedure, we just need to input $L=[1, 4, 15, 56, 209, 780, 2911, 10864, 40545, 151316]$ to get the recurrence (and initial conditions) $[[1, 4], [4, -1]]$.

Once the recurrence relation, let's call it {\tt S}, is discovered, procedure {\tt CtoR(S,t)} finds the generating function for the sequence. Here is the Maple code:

\bigskip

{\obeylines
{\tt
CtoR:=proc(S,t) local D1,i,N1,L1,f,f1,L:
if not (type(S,list) and nops(S)=2 and type(S[1],list) and type(S[2],list)
\quad and nops(S[1])=nops(S[2]) and type(t, symbol) ) then
\quad print(`Bad input`):
\quad RETURN(FAIL):
fi:
D1:=1-add(S[2][i]*t**i,i=1..nops(S[2])):
N1:=add(S[1][i]*t**(i-1),i=1..nops(S[1])):
L1:=expand(D1*N1):
L1:=add(coeff(L1,t,i)*t**i,i=0..nops(S[1])-1):
f:=L1/D1:
L:=degree(D1,t)+10:
f1:=taylor(f,t=0,L+1):
if expand([seq(coeff(f1,t,i),i=0..L)])<>expand(SeqFromRec(S,L+1)) then
\quad print([seq(coeff(f1,t,i),i=0..L)],SeqFromRec(S,L+1)):
\quad RETURN(FAIL):
else
\quad RETURN(f):
fi:
end:
}
}

\vspace{1mm}\noindent
Procedure {\tt SeqFromRec} used above (see the package) simply generates many terms using the recurrence.
Procedure {\tt CtoR(S,t)} outputs the rational function in $t$ whose coefficients are the members of the C-finite sequence $S$. For example:
$$
{\tt CtoR([[1,1],[1,1]],t)} = \frac{1}{-t^2-t+1}.
$$
Briefly, the idea is that the denominator of the rational function is immediately determined by the recurrence relation; we then use the initial conditions to find the first terms of the generating function, and multiply by the denominator, yielding the numerator.

\section*{Application of GuessRec for enumerating spanning trees of grid graphs and $G \times P_n$}

With the powerful procedures {\tt GuessRec} and {\tt CtoR}, we are able to find generating functions for the number of spanning trees of generalized graphs of the form $G \times P_n$. We will illustrate the application of {\tt GuessRec} to finding the generating function for the number of spanning trees in grid graphs. First, using procedure {\tt GridMN(k,n)}, we get the $k \times n$ grid graph. Then, procedure {\tt SpFn} uses the Matrix Tree Theorem to evaluate the determinant of the co-factor of the Laplacian matrix of the grid graph, which is the number of spanning trees in this particular graph. For a fixed $k$, we need to generate a sufficiently long list of data for the number of spanning trees in $G_k(n), n \in [l(k), u(k)]$. The lower bound $l(k)$ can't be too small, since the first several terms are the initial conditions; the upper bound $u(k)$ can't be too small either, since we need sufficient data to obtain the recurrence relation. Notice that there is a symmetry in the recurrence relation, and to take advantage of this fact, we modified {\tt GuessRec} to get the more efficient {\tt GuessSymRec} (requiring less data). Once the recurrence relation, and the initial conditions, are given, applying {\tt CtoR(S,t)} gives the desired generating function, which, of course, is a rational function of $t$.
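The {\tt CtoR} step is equally short in any language; here is our Python sketch (polynomials are represented as coefficient lists, constant term first, and the sequence is indexed from $n=0$; {\tt seq\_from\_rec} plays the role of {\tt SeqFromRec}):

```python
def seq_from_rec(ini, rec, N):
    """First N terms of the C-finite sequence with the given initial
    values and recurrence a(n) = rec[0]*a(n-1) + ... + rec[d-1]*a(n-d)."""
    a = list(ini)
    while len(a) < N:
        a.append(sum(c * a[-i] for i, c in enumerate(rec, start=1)))
    return a

def c_to_r(S):
    """Recurrence -> rational generating function, returned as the pair
    (numerator, denominator) of coefficient lists, constant term first."""
    ini, rec = S
    d = len(rec)
    den = [1] + [-c for c in rec]            # D(t) = 1 - sum_i rec[i] t^i
    a = seq_from_rec(ini, rec, 2 * d + 1)
    # numerator = D(t) * (sum_n a(n) t^n), truncated below degree d
    num = [sum(den[i] * a[k - i] for i in range(k + 1)) for k in range(d)]
    return num, den
```

For $S=[[1,1],[1,1]]$ this returns the pair $([1,0],[1,-1,-1])$, i.e. $1/(1-t-t^2)$, the same rational function $1/(-t^2-t+1)$ displayed earlier.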
All the above is incorporated in procedure {\tt GFGridKN(k,t)} which inputs a positive integer $k$ and a symbol $t$, and outputs the generating function whose coefficient of $t^n$ is the number of spanning trees in $G_k(n)$, i.e. if we let $s(k,n)$ be the number of spanning trees in $G_k(n)$, the generating function $$ F_k(t) = \sum_{n=0}^{\infty} s(k,n) t^n. $$ We now list the generating functions $F_k(t)$ for $1 \leq k \leq 7$: Except for $k=7$, these were already found by Raff[R] and Faase[F], but it is reassuring that, using our new approach, we got the same output. The case $k=7$ seems to be new. \bigskip {\bf Theorem 1} The generating function for the number of spanning trees in $G_1(n)$ is: $$ F_1(t) = \frac {t}{1-t}. $$ \bigskip {\bf Theorem 2} The generating function for the number of spanning trees in $G_2(n)$ is: $$ F_2 = \frac {t}{{t}^{2}-4\,t+1}. $$ \bigskip {\bf Theorem 3} The generating function for the number of spanning trees in $G_3(n)$ is: $$ F_3 = \frac {-{t}^{3}+t}{{t}^{4}-15\,{t}^{3}+32\,{t}^{2}-15\,t+1}. $$ \bigskip {\bf Theorem 4} The generating function for the number of spanning trees in $G_4(n)$ is: $$ F_4 = \frac {{t}^{7}-49\,{t}^{5}+112\,{t}^{4}-49\,{t}^{3}+t}{{t}^{8}-56\,{t }^{7}+672\,{t}^{6}-2632\,{t}^{5}+4094\,{t}^{4}-2632\,{t}^{3}+672\,{t}^ {2}-56\,t+1}. $$ For $5 \leq k \leq 7$, since the formulas are too long, we present their numerators and denominators separately. 
\bigskip {\bf Theorem 5} The generating function for the number of spanning trees in $G_5(n)$ is: $$ F_5 = \frac{N_5}{D_5} $$ where $$ N_5 = -{t}^{15}+1440\,{t}^{13}-26752\,{t}^{12}+185889\,{t}^{11}-574750\,{t}^ {10}+708928\,{t}^{9}-708928\,{t}^{7} $$ $$ +574750\,{t}^{6}-185889\,{t}^{5}+ 26752\,{t}^{4}-1440\,{t}^{3}+t, $$ $$ D_5 = {t}^{16}-209\,{t}^{15}+11936\,{t}^{14}-274208\,{t}^{13}+3112032\,{t}^{ 12}-19456019\,{t}^{11}+70651107\,{t}^{10}-152325888\,{t}^{9} $$ $$ +196664896\,{t}^{8}-152325888\,{t}^{7}+70651107\,{t}^{6}-19456019\,{t}^{5}+ 3112032\,{t}^{4}-274208\,{t}^{3}+11936\,{t}^{2}-209\,t+1. $$ \bigskip {\bf Theorem 6} The generating function for the number of spanning trees in $G_6(n)$ is: $$ F_6 = \frac{N_6}{D_6} $$ where $$ N_6 = {t}^{31}-33359\,{t}^{29}+3642600\,{t}^{28}-173371343\,{t}^{27}+ 4540320720\,{t}^{26}-70164186331\,{t}^{25} $$ $$ +634164906960\,{t}^{24}- 2844883304348\,{t}^{23}-1842793012320\,{t}^{22}+104844096982372\,{t}^{ 21} $$ $$ -678752492380560\,{t}^{20}+2471590551535210\,{t}^{19}- 5926092273213840\,{t}^{18}+9869538714631398\,{t}^{17} $$ $$ -11674018886109840\,{t}^{16}+9869538714631398\,{t}^{15}- 5926092273213840\,{t}^{14}+2471590551535210\,{t}^{13} $$ $$ -678752492380560 \,{t}^{12}+104844096982372\,{t}^{11}-1842793012320\,{t}^{10}- 2844883304348\,{t}^{9} $$ $$ +634164906960\,{t}^{8}-70164186331\,{t}^{7}+ 4540320720\,{t}^{6}-173371343\,{t}^{5}+3642600\,{t}^{4}-33359\,{t}^{3} +t, $$ $$ D_6 = {t}^{32}-780\,{t}^{31}+194881\,{t}^{30}-22377420\,{t}^{29}+1419219792 \,{t}^{28}-55284715980\,{t}^{27}+1410775106597\,{t}^{26} $$ $$ -24574215822780\,{t}^{25}+300429297446885\,{t}^{24}-2629946465331120\,{ t}^{23}+16741727755133760\,{t}^{22} $$ $$ -78475174345180080\,{t}^{21}+ 273689714665707178\,{t}^{20}-716370537293731320\,{t}^{19} $$ $$ +1417056251105102122\,{t}^{18}-2129255507292156360\,{t}^{17}+ 2437932520099475424\,{t}^{16} $$ $$ -2129255507292156360\,{t}^{15}+ 1417056251105102122\,{t}^{14}-716370537293731320\,{t}^{13} $$ $$ 
+273689714665707178\,{t}^{12}-78475174345180080\,{t}^{11}+ 16741727755133760\,{t}^{10}-2629946465331120\,{t}^{9} $$ $$ +300429297446885 \,{t}^{8}-24574215822780\,{t}^{7}+1410775106597\,{t}^{6}-55284715980\, {t}^{5} $$ $$ +1419219792\,{t}^{4}-22377420\,{t}^{3}+194881\,{t}^{2}-780\,t+1. $$ {\bf Theorem 7} The generating function for the number of spanning trees in $G_7(n)$ is: $$ F_7 = \frac{N_7}{D_7} $$ where $$ N_7 = -{t}^{47}-142\,{t}^{46}+661245\,{t}^{45}-279917500\,{t}^{44}+ 53184503243\,{t}^{43}-5570891154842\,{t}^{42} $$ $$ +341638600598298\,{t}^{41 }-11886702497030032\,{t}^{40}+164458937576610742\,{t}^{39} $$ $$ +4371158470492451828\,{t}^{38}-288737344956855301342\,{t}^{37}+ 7736513993329973661368\,{t}^{36} $$ $$ -131582338768322853956994\,{t}^{35}+ 1573202877300834187134466\,{t}^{34} $$ $$ -13805721749199518460916737\,{t}^{ 33}+90975567796174070740787232\,{t}^{32} $$ $$ -455915282590547643587452175\, {t}^{31}+1747901867578637315747826286\,{t}^{30} $$ $$ -5126323837327170557921412877\,{t}^{29}+11416779122947828869806142972\, {t}^{28} $$ $$ -18924703166237080216745900796\,{t}^{27}+ 22194247945745188489023284104\,{t}^{26} $$ $$ -15563815847174688069871470516 \,{t}^{25}+15563815847174688069871470516\,{t}^{23} $$ $$ -22194247945745188489023284104\,{t}^{22}+18924703166237080216745900796 \,{t}^{21} $$ $$ -11416779122947828869806142972\,{t}^{20}+ 5126323837327170557921412877\,{t}^{19} $$ $$ -1747901867578637315747826286\,{ t}^{18}+455915282590547643587452175\,{t}^{17} $$ $$ -90975567796174070740787232\,{t}^{16}+13805721749199518460916737\,{t}^{ 15} $$ $$ -1573202877300834187134466\,{t}^{14}+131582338768322853956994\,{t}^ {13}-7736513993329973661368\,{t}^{12} $$ $$ +288737344956855301342\,{t}^{11}- 4371158470492451828\,{t}^{10}-164458937576610742\,{t}^{9} $$ $$+11886702497030032\,{t}^{8}-341638600598298\,{t}^{7}+5570891154842\,{t} ^{6}-53184503243\,{t}^{5} $$ $$ +279917500\,{t}^{4}-661245\,{t}^{3}+142\,{t}^ {2}+t, $$ $$ D_7 = 
{t}^{48}-2769\,{t}^{47}+2630641\,{t}^{46}-1195782497\,{t}^{45}+305993127089\,{t}^{44}-48551559344145\,{t}^{43}
$$
$$
+5083730101530753\,{t}^{42}-366971376492201338\,{t}^{41}+18871718211768417242\,{t}^{40}
$$
$$
-709234610141846974874\,{t}^{39}+19874722637854592209338\,{t}^{38}-422023241997789381263002\,{t}^{37}
$$
$$
+6880098547452856483997402\,{t}^{36}-87057778313447181201990522\,{t}^{35}
$$
$$
+862879164715733847737203343\,{t}^{34}-6750900711491569851736413311\,{t}^{33}
$$
$$
+41958615314622858303912597215\,{t}^{32}-208258356862493902206466194607\,{t}^{31}
$$
$$
+828959040281722890327985220255\,{t}^{30}-2654944041424536277948746010303\,{t}^{29}
$$
$$
+6859440538554030239641036025103\,{t}^{28}-14324708604336971207868317957868\,{t}^{27}
$$
$$
+24214587194571650834572683444012\,{t}^{26}-33166490975387358866518005011884\,{t}^{25}
$$
$$
+36830850383375837481096026357868\,{t}^{24}-33166490975387358866518005011884\,{t}^{23}
$$
$$
+24214587194571650834572683444012\,{t}^{22}-14324708604336971207868317957868\,{t}^{21}
$$
$$
+6859440538554030239641036025103\,{t}^{20}-2654944041424536277948746010303\,{t}^{19}
$$
$$
+828959040281722890327985220255\,{t}^{18}-208258356862493902206466194607\,{t}^{17}
$$
$$
+41958615314622858303912597215\,{t}^{16}-6750900711491569851736413311\,{t}^{15}
$$
$$
+862879164715733847737203343\,{t}^{14}-87057778313447181201990522\,{t}^{13}
$$
$$
+6880098547452856483997402\,{t}^{12}-422023241997789381263002\,{t}^{11}
$$
$$
+19874722637854592209338\,{t}^{10}-709234610141846974874\,{t}^{9}+18871718211768417242\,{t}^{8}
$$
$$
-366971376492201338\,{t}^{7}+5083730101530753\,{t}^{6}-48551559344145\,{t}^{5}+305993127089\,{t}^{4}
$$
$$
-1195782497\,{t}^{3}+2630641\,{t}^{2}-2769\,t+1.
$$

\bigskip

Note that, surprisingly, the degree of the denominator of $F_7(t)$ is $48$ rather than the expected $64$, since the denominators of the first six generating functions have degree $2^{k-1}$, $1 \leq k \leq 6$.
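As a quick sanity check on the smaller formulas, one can expand them back into power series and compare coefficients across different $k$, using the obvious transpose symmetry $s(k,n)=s(n,k)$. A small Python sketch (ours; coefficient lists with constant term first):

```python
def series(num, den, N):
    """First N power-series coefficients of num(t)/den(t); den[0] == 1."""
    c = []
    for n in range(N):
        x = num[n] if n < len(num) else 0
        x -= sum(den[i] * c[n - i] for i in range(1, min(n, len(den) - 1) + 1))
        c.append(x)
    return c

# F_2, F_3, F_4 of Theorems 2-4 as (numerator, denominator) coefficient lists
F2 = ([0, 1], [1, -4, 1])
F3 = ([0, 1, 0, -1], [1, -15, 32, -15, 1])
F4 = ([0, 1, 0, -49, 112, -49, 0, 1],
      [1, -56, 672, -2632, 4094, -2632, 672, -56, 1])
```

The expansions begin $0, 1, 4, 15, 56, 209, \dots$ for $F_2$; $0, 1, 15, 192, 2415, \dots$ for $F_3$; and $0, 1, 56, 2415, 100352, \dots$ for $F_4$, and indeed the coefficient of $t^n$ in $F_k$ matches the coefficient of $t^k$ in $F_n$.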
With a larger computer, one should be able to compute $F_k$ for larger $k$, using this experimental approach. Generally, for an arbitrary graph $G$, we may consider the number of spanning trees in $G \times P_n$. With the same methodology, a list of data can be obtained empirically, from which a generating function follows.

\section*{Joint Resistance}

The original motivation for the Matrix Tree Theorem, first discovered by Kirchhoff (of Kirchhoff's laws fame), came from the desire to efficiently compute joint resistances in an electrical network. Suppose one is interested in the joint resistance, in an electric network in the form of a grid graph, between the two diagonally opposite vertices $[1,1]$ and $[k,n]$. We assume that each edge has resistance $1$ Ohm. To obtain it, all we need is, in addition to the number of spanning trees (that's the denominator), the number of spanning forests $SF_k(n)$ of the graph $G_k(n)$ that have exactly two components, each component containing exactly one of the members of the pair $\{[1,1],[k,n]\}$ (this is the numerator). The joint resistance is just the ratio. In principle, we can apply the same method to obtain the generating function $S_k$ for the numbers $SF_k(n)$. Empirically, we found that the denominator of $S_k$ is always the square of the denominator of $F_k$ times another polynomial $C_k$. Once the denominator is known, we can find the numerator in the same way as above. So our focus is to find $C_k$. The procedure {\tt DenomSFKN(k,t)} in the Maple package {\tt JointConductance.txt} calculates $C_k$. For $2 \leq k \leq 4$, we have
$$
C_2 = t-1 ,
$$
$$
C_3 = t^4-8t^3+17t^2-8t+1 ,
$$
$$
C_4 = t^{12}-46t^{11}+770t^{10}-6062t^9+24579t^8-55388t^7+72324t^6-55388t^5+24579t^4
$$
$$
-6062t^3+770t^2-46t+1 .
$$

{\bf Remark} By looking at the output of our Maple package, we conjectured that $R(k,n)$, the resistance between vertex $[1,1]$ and vertex $[k,n]$ in the $k \times n$ grid graph, $G_k(n)$, where each edge is a resistor of $1$ Ohm, is asymptotically $n/k$, for any fixed $k$, as $n \rightarrow \infty$. We proved it rigorously for $k \leq 6$, and we wondered whether there is a human-generated ``electric proof''. Naturally we emailed Peter Doyle, the co-author of the delightful masterpiece [DS], who quickly came up with the following argument.

{\it Making the horizontal resistors into almost resistance-less gold wires gives the lower bound $R(k,n) \geq (n-1)/k$ since it is a parallel circuit of $k$ resistors of $n-1$ Ohms. For an upper bound of the same order, put 1 Ampere in at [1,1] and out at $[k,n]$, routing $1/k$ Ampere up each of the $k$ verticals. The energy dissipation is $k(n-1)/k^2+C(k) = (n-1)/k+C(k)$, where the constant $C(k)$ is the energy dissipated along the top and bottom resistors. Specifically, $C(k) = 2\left((1-1/k)^2 + (1-2/k)^2 + \dots + (1/k)^2\right)$. So $(n-1)/k \leq R(k,n) \leq (n-1)/k + C(k)$.}

We thank Peter Doyle for his kind permission to reproduce this {\it electrifying} argument.

\section*{The statistic of the number of vertical edges in spanning trees of grid graphs}

Often in enumerative combinatorics, the class of interest has natural `statistics', like height, weight, and IQ for humans. Recall that the {\it naive counting} is
$$
|A| \, := \, \sum_{a \in A} 1 ,
$$
getting a {\bf number}. Define:
$$
|A|_x \, := \,\sum_{a \in A} x^{f(a)} ,
$$
where $f: A \rightarrow \mathbb{Z}$ is the statistic of interest. To go from the weighted enumeration (a certain Laurent polynomial) to straight enumeration, one simply plugs in $x=1$, i.e. $|A|_1 = |A|$. The {\it scaled} random variable is defined as follows.
Let $E(f)$ and $Var(f)$ be the {\it expectation} and {\it variance}, respectively, of the statistic $f$ defined on $A$, and define the {\it scaled} random variable, for $a\in A$, by
$$
X (a):= \frac{f(a)- E(f)}{\sqrt{Var(f)}} .
$$
In this section, we are interested in the statistic `number of vertical edges', defined on spanning trees of grid graphs. For given $k$ and $n$, let, as above, $G_k(n)$ denote the $k \times n$ grid-graph. Let $\mathcal{F}_{k,n}$ be its set of spanning trees. If the weight is 1, then $\sum_{T \in \mathcal{F}_{k,n}} 1 =|\mathcal{F}_{k,n}|$ is the naive counting. Now let's define a natural statistic $ver(T)$, the number of vertical edges in the spanning tree $T$, and the weight $w(T) = v^{ver(T)}$; the weighted counting is then
$$
Ver_{k,n} (v) = \sum_{T \in \mathcal{F}_{k,n}} w(T) ,
$$
where $\mathcal{F}_{k,n}$ is the set of spanning trees of $G_k(n)$. We define the bivariate generating function
$$
g_{k}(v,t) = \sum_{n=0}^{\infty} Ver_{k,n}(v)\, t^n.
$$
More generally, with our Maple package {\tt GFMatrix.txt}, and procedure {\tt VerGF}, we are able to obtain the bivariate generating function for an arbitrary graph of the form $G \times P_n$. The procedure {\tt VerGF} takes inputs $G$ (an arbitrary graph), $N$ (an integer determining how many data points we use to find the recurrence relation) and two symbols $v$ and $t$. The main tools for computing {\tt VerGF} are still the Matrix Tree Theorem and {\tt GuessRec}. But we need to modify the Laplacian matrix of the graph. Instead of letting $a_{ij}=-1$ for $i \neq j$ and $\{i,j\} \in E(G \times P_n)$, we should consider whether the edge $\{i,j\}$ is a vertical edge. If so, we let $a_{i,j}=-v, a_{j,i}=-v$. The diagonal elements, which are $(-1)$ times the sum of the remaining entries in the same row, change accordingly. The following theorems are for grid graphs with $2 \leq k \leq 4$; $k=1$ is a trivial case because there are no vertical edges.
\bigskip

{\bf Theorem 8} The bivariate generating function for the weighted counting according to the number of vertical edges of spanning trees in $G_2(n)$ is:
$$
g_2(v,t) = \frac {vt}{1- \left( 2\,v+2 \right) t+{t}^{2}} .
$$

\bigskip

{\bf Theorem 9} The bivariate generating function for the weighted counting according to the number of vertical edges of spanning trees in $G_3(n)$ is:
$$
g_3(v,t) = \frac {-{t}^{3}{v}^{2}+{v}^{2}t}{1- \left( 3\,{v}^{2}+8\,v+4 \right) t- \left( -10\,{v}^{2}-16\,v-6 \right) {t}^{2}- \left( 3\,{v}^{2}+8\,v+4 \right) {t}^{3}+{t}^{4}} .
$$

\bigskip

{\bf Theorem 10} The bivariate generating function for the weighted counting according to the number of vertical edges of spanning trees in $G_4(n)$ is:
$$
g_4(v,t) = \frac{numer(g_4)}{denom(g_4)}
$$
where
$$
numer(g_4) = {v}^{3}t+ \left( -16\,{v}^{5}-24\,{v}^{4}-9\,{v}^{3} \right) {t}^{3}+ \left( 8\,{v}^{6}+40\,{v}^{5}+48\,{v}^{4}+16\,{v}^{3} \right) {t}^{4}
$$
$$
+ \left( -16\,{v}^{5}-24\,{v}^{4}-9\,{v}^{3} \right) {t}^{5}+{v}^{3}{t}^{7}
$$
and
$$
denom(g_4) = 1- \left( 4\,{v}^{3}+20\,{v}^{2}+24\,v+8 \right) t- \left( -52\,{v}^{4}-192\,{v}^{3}-256\,{v}^{2}-144\,v-28 \right) {t}^{2}
$$
$$
- \left( 64\,{v}^{5}+416\,{v}^{4}+892\,{v}^{3}+844\,{v}^{2}+360\,v+56 \right) {t}^{3}
$$
$$
- \left( -16\,{v}^{6}-160\,{v}^{5}-744\,{v}^{4}-1408\,{v}^{3}-1216\,{v}^{2}-480\,v-70 \right) {t}^{4}
$$
$$
- \left( 64\,{v}^{5}+416\,{v}^{4}+892\,{v}^{3}+844\,{v}^{2}+360\,v+56 \right) {t}^{5}- \left( -52\,{v}^{4}-192\,{v}^{3}-256\,{v}^{2}-144\,v-28 \right) {t}^{6}
$$
$$
- \left( 4\,{v}^{3}+20\,{v}^{2}+24\,v+8 \right) {t}^{7}+{t}^{8} .
$$
With the Maple package {\tt BiVariateMoms.txt} and its {\tt Story} procedure from

\bigskip

{\tt http://sites.math.rutgers.edu/\~{}zeilberg/tokhniot/BiVariateMoms.txt},

\bigskip

the expectation, variance and higher moments can be easily analyzed. We calculated up to the 4th moment for $G_2(n)$.
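Theorem 8 can be double-checked by brute force for small $n$: enumerate all spanning trees of $G_2(n)$ and tally the vertical edges in each (our Python sketch, not part of the Maple packages).

```python
from itertools import combinations

def vertical_edge_histogram(n):
    """For G_2(n): maps j to the number of spanning trees with j vertical edges."""
    vertices = [(i, j) for i in range(2) for j in range(n)]
    vert = [((0, j), (1, j)) for j in range(n)]
    horiz = [((i, j), (i, j + 1)) for i in range(2) for j in range(n - 1)]
    hist = {}
    for T in combinations(vert + horiz, len(vertices) - 1):
        # |V| - 1 edges and no cycle <=> spanning tree (union-find check)
        parent = {v: v for v in vertices}
        def find(v):
            while parent[v] != v:
                parent[v] = parent[parent[v]]
                v = parent[v]
            return v
        acyclic = True
        for u, w in T:
            ru, rw = find(u), find(w)
            if ru == rw:
                acyclic = False
                break
            parent[ru] = rw
        if acyclic:
            j = sum(e in vert for e in T)
            hist[j] = hist.get(j, 0) + 1
    return hist
```

For $n=3$ the histogram is $\{1\!:\!3,\ 2\!:\!8,\ 3\!:\!4\}$, matching the coefficient list of $Ver_{2,3}(v) = 4v^3+8v^2+3v$ predicted by expanding $g_2(v,t)$ of Theorem 8.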
For $k=3,4$, you can find the output files at

{\tt http://sites.math.rutgers.edu/\~{}yao/OutputStatisticVerticalk=3.txt}

{\tt http://sites.math.rutgers.edu/\~{}yao/OutputStatisticVerticalk=4.txt}

\bigskip

{\bf Theorem 11} The moments of the statistic `the number of vertical edges' in the spanning trees of $G_2(n)$ are as follows. Let $b$ be the largest positive root of the polynomial equation
$$
b^2-4b+1 = 0 ,
$$
whose floating-point approximation is $3.732050808$. Then the size of the $n$-th family (i.e., the straight enumeration) is very close to
$$
{\frac {{b}^{n+1}}{-2+4\,b}} .
$$
The average of the statistic is, asymptotically,
$$
\frac{1}{3}+\frac{1}{3}\,{\frac { \left( -1+2\,b \right) n}{b}} .
$$
The variance of the statistic is, asymptotically,
$$
-\frac{1}{9}+\frac{1}{9}\,{\frac { \left( 7\,b-2 \right) n}{-1+4\,b}} .
$$
The skewness of the statistic is, asymptotically,
$$
{\frac {780\,b-209}{ \left( 4053\,b-1086 \right) {n}^{3}+ \left( -7020\,b+1881 \right) {n}^{2}+ \left( 4053\,b-1086 \right) n-780\,b+209}}.
$$
The kurtosis of the statistic is, asymptotically,
$$
3\,{\frac { \left( 32592\,b-8733 \right) {n}^{2}+ \left( -56451\,b+15126 \right) n+21728\,b-5822}{ \left( 32592\,b-8733 \right) {n}^{2}+ \left( -37634\,b+10084 \right) n+10864\,b-2911}} .
$$

\section*{Application of the C-finite Ansatz to computing generating functions of determinants (and permanents) of almost-diagonal Toeplitz matrices}

So far, we have seen applications of the $C$-finite ansatz methodology for automatically computing generating functions for enumerating spanning trees/forests for certain infinite families of graphs. The second case study is completely different, and in a sense more general, since the former framework may be subsumed in this new context.

\bigskip

{\bf Definition} A diagonal matrix $A$ is a square matrix in which the entries outside the main diagonal are $0$, i.e., $a_{ij} = 0$ if $i \neq j$.
\bigskip

{\bf Definition} An almost-diagonal Toeplitz matrix $A$ is a square matrix in which $a_{i,j} = 0$ if $j-i \geq k_1$ or $i-j \geq k_2$ for some fixed positive integers $k_1, k_2$ and, $\forall i_1, j_1, i_2, j_2$, if $i_1-j_1 = i_2-j_2$, then $a_{i_1 j_1} = a_{i_2 j_2}$.

\bigskip

For simplicity, we use the notation $L=$[n, [the first $k_1$ entries in the first row], [the first $k_2$ entries in the first column]] to denote the $n \times n$ matrix with these specifications. Note that this notation already contains all the information we need to reconstruct this matrix. For example, [6, [1,2,3], [1,4]] is the matrix
$$
\begin{bmatrix} 1 & 2 & 3 & 0 & 0 & 0 \\ 4 & 1 & 2 & 3 & 0 & 0 \\ 0 & 4 & 1 & 2 & 3 & 0 \\ 0 & 0 & 4 & 1 & 2 & 3 \\ 0 & 0 & 0 & 4 & 1 & 2 \\ 0 & 0 & 0 & 0 & 4 & 1 \end{bmatrix} .
$$
For this matrix, $k_1=3$ and $k_2=2$. The following is the Maple procedure {\tt DiagMatrixL} (in our Maple package {\tt GFMatrix.txt}), which inputs such a list $L$ and outputs the corresponding matrix.

\bigskip

{\obeylines
{\tt
DiagMatrixL:=proc(L) local n, r1, c1,p,q,S,M,i:
n:=L[1]:
r1:=L[2]:
c1:=L[3]:
p:=nops(r1)-1:
q:=nops(c1)-1:
if r1[1] <> c1[1] then
\quad return fail:
fi:
S:=[0\$(n-1-q), seq(c1[q-i+1],i=0..q-1), op(r1), 0\$(n-1-p)]:
M:=[0\$n]:
for i from 1 to n do
\quad M[i]:=[op(max(0,n-1-q)+q+2-i..max(0,n-1-q)+q+1+n-i,S)]:
od:
return M:
end:
}
}

\bigskip

Let $k_1, k_2$ be fixed, and let $M_1, M_2$ be two lists of numbers or symbols of length $k_1$ and $k_2$ respectively; then $A_k$ denotes the almost-diagonal Toeplitz matrix represented by the list $L_k = [k, M_1, M_2]$. Note that the first elements of the lists $M_1$ and $M_2$ must be identical.
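Here is a rough Python equivalent of {\tt DiagMatrixL} together with an exact determinant routine (our sketch, for experimentation outside Maple). As a check, the determinant sequence of the family $[n, [2,3], [2,4,5]]$ treated below satisfies $a(n) = 2a(n-1) - 12a(n-2) + 45a(n-3)$, which is exactly the recurrence encoded by the generating function $-1/(45t^3-12t^2+2t-1)$ found below.

```python
from fractions import Fraction

def diag_matrix(n, row, col):
    """The n x n almost-diagonal Toeplitz matrix encoded by [n, row, col];
    row / col hold the first entries of the first row / first column."""
    assert row[0] == col[0]
    diag = {j: x for j, x in enumerate(row)}         # upper diagonals
    diag.update({-i: x for i, x in enumerate(col)})  # lower diagonals
    return [[diag.get(j - i, 0) for j in range(n)] for i in range(n)]

def det(M):
    """Exact determinant via Gaussian elimination over the rationals."""
    n = len(M)
    A = [[Fraction(x) for x in r] for r in M]
    d = Fraction(1)
    for c in range(n):
        p = next((r for r in range(c, n) if A[r][c] != 0), None)
        if p is None:
            return 0
        if p != c:
            A[c], A[p] = A[p], A[c]
            d = -d
        d *= A[c][c]
        for r in range(c + 1, n):
            f = A[r][c] / A[c][c]
            A[r] = [x - f * y for x, y in zip(A[r], A[c])]
    return int(d)
```

For instance, {\tt diag\_matrix(6, [1, 2, 3], [1, 4])} reproduces the $6 \times 6$ example displayed above.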
Having fixed two lists, $M_1$ of length $k_1$ and $M_2$ of length $k_2$ (where $M_1[1]=M_2[1]$), it is of interest to derive, {\it automatically}, the generating function (which is always a rational function, for reasons that will soon become clear) $\sum_{k=0}^{\infty} a_k \, t^k$, where $a_k$ denotes the determinant of the $k \times k$ almost-diagonal Toeplitz matrix whose first row starts with $M_1$, and whose first column starts with $M_2$. Analogously, it is also of interest to treat the analogous problem where the determinant is replaced by the permanent.

Here is the Maple procedure {\tt GFfamilyDet}, which takes as input (i) $A$: a name of a Maple procedure that inputs an integer $n$ and outputs an $n \times n$ matrix according to some rule, e.g., the almost-diagonal Toeplitz matrices, (ii) a variable name $t$, (iii) two integers $m$ and $n$ which are the lower and upper bounds of the sequence of determinants we consider. It outputs a rational function in $t$, say $R(t)$, which is the generating function of the sequence.

\vspace{1mm}\noindent
{\obeylines
{\tt
GFfamilyDet:=proc(A,t,m,n) local i,rec,GF,B,gu,Denom,L,Numer:
L:=[seq(det(A(i)),i=1..n)]:
rec:=GuessRec([op(m..n,L)])[2]:
gu:=solve(B-1-add(t**i*rec[i]*B,i=1..nops(rec)), \{B\}):
Denom:=denom(subs(gu,B)):
Numer:=Denom*(1+add(L[i]*t**i, i=1..n)):
Numer:=add(coeff(Numer,t,i)*t**i, i=0..degree(Denom,t)):
Numer/Denom:
end:
}
}

\vspace{1mm}\noindent
Similarly we have procedure {\tt GFfamilyPer} for the permanent. Let's look at an example. The following is a sample procedure which considers the family of almost-diagonal Toeplitz matrices whose first row starts with $[2,3]$ and whose first column starts with $[2,4,5]$.

\vspace{1mm}\noindent
{\obeylines
{\tt
SampleB:=proc(n) local L,M:
L:=[n, [2,3], [2,4,5]]:
M:=DiagMatrixL(L):
end:
}
}

Then {\tt GFfamilyDet(SampleB, t, 10, 50)} will return the generating function
$$
-\frac{1}{ 45\,{t}^{3}-12\,{t}^{2}+2\,t-1 } .
$$

It turns out that, for this problem, the more `conceptual' approach of setting up a transfer matrix also works well. But don't worry, the computer can do the `research' all by itself, with only a minimum amount of human pre-processing. We will now describe this more conceptual approach, which may be called {\it symbolic dynamical programming}: the computer sets up, {\it automatically}, a finite-state scheme, by {\it dynamically} discovering the set of states, and automatically figures out the transfer matrix.

\section*{The Transfer Matrix method for almost-diagonal Toeplitz matrices}

Recall from Linear Algebra 101 the {\bf Cofactor Expansion}: let $|A|$ denote the determinant of an $n \times n$ matrix $A$; then
$$
|A| = \sum_{j=1}^{n} (-1)^{i+j} a_{ij} M_{ij}, \quad \forall i \in [n],
$$
where $M_{ij}$ is the $(i,j)$-minor. We'd like to consider the Cofactor Expansion for almost-diagonal Toeplitz matrices along the first row. For simplicity, we assume that, while $a_{i,j} = 0$ if $j-i \geq k_1$ or $i-j \geq k_2$ for some fixed positive integers $k_1, k_2$, we also have $a_{i_1 j_1} \neq a_{i_2 j_2}$ whenever $-k_2 < j_1-i_1 < j_2-i_2 < k_1$ (that is, entries on distinct diagonals within the band are distinct). Under this assumption, for any minor we obtain through recursive Cofactor Expansion along the first row, the dimension, the first row, and the first column provide enough information to reconstruct the matrix. For an almost-diagonal Toeplitz matrix represented by $L=$[Dimension, [the first $k_1$ entries in the first row], [the first $k_2$ entries in the first column]], any minor can be represented by [Dimension, [entries in the first row up to the last nonzero entry], [entries in the first column up to the last nonzero entry]]. Our goal in this section is the same as in the last one: to get a generating function for the determinant or permanent of almost-diagonal Toeplitz matrices $A_k$ with dimension $k$.
Once we have those almost-diagonal Toeplitz matrices, the first step is to do a one-step expansion as follows:

\vspace{1mm}\noindent
{\obeylines
{\tt
ExpandMatrixL:=proc(L,L1) local n,R,C,dim,R1,C1,i,r,S,candidate,newrow,newcol,gu,mu,temp,p,q,j:
n:=L[1]:
R:=L[2]:
C:=L[3]:
p:=nops(R)-1:
q:=nops(C)-1:
dim:=L1[1]:
R1:=L1[2]:
C1:=L1[3]:
if R1=[] or C1=[] then
\quad return \{\}:
elif R[1]<>C[1] or R1[1]<>C1[1] or dim>n then
\quad return fail:
else
S:=\{\}:
gu:=[0\$(n-1-q), seq(C[q-i+1],i=0..q-1), op(R), 0\$(n-1-p)]:
candidate:=[0\$nops(R1),R1[-1]]:
for i from 1 to nops(R1) do
\quad mu:=R1[i]:
\quad for j from n-q to nops(gu) do
\quad \quad if gu[j]=mu then
\quad \quad \quad candidate[i]:=gu[j-1]:
\quad \quad fi:
\quad od:
od:
for i from n-q to nops(gu) do
\quad if gu[i] = R1[2] then
\quad \quad temp:=i:
\quad \quad break:
\quad fi:
od:
for i from 1 to nops(R1) do
\quad if i = 1 then
\quad \quad mu:=[R1[i]*(-1)**(i+1), [dim-1,[op(i+1..nops(candidate), candidate)], [seq(gu[temp-i],i=1..temp-n+q)]]]:
\quad \quad S:=S union \{mu\}:
\quad else
\quad \quad mu:=[R1[i]*(-1)**(i+1), [dim-1, [op(1..i-1, candidate), op(i+1..nops(candidate), candidate)], [op(2..nops(C1), C1)]]]:
\quad \quad S:=S union \{mu\}:
\quad fi:
od:
return S:
fi:
end:
}
}

\vspace{1mm}\noindent
The {\tt ExpandMatrixL} procedure inputs a data structure $L =$ [Dimension, first\_row=[ ], first\_col=[ ]] as the matrix we start from, and another data structure $L1$ as the current minor we have; it expands $L1$ along its first row and outputs a set of pairs [multiplicity, data structure]. We would like to generate all the ``children'' of an almost-diagonal Toeplitz matrix regardless of the dimension, i.e., two lists $L$ represent the same child as long as their first\_rows and first\_columns are the same, respectively. The set of ``children'' is the scheme of the almost-diagonal Toeplitz matrices in this case.
The following is the Maple procedure {\tt ChildrenMatrixL} which inputs a data structure $L$ and outputs the set of its ``children'' under Cofactor Expansion along the first row:

\bigskip

{\obeylines
{\tt
ChildrenMatrixL:=proc(L) local S,t,T,dim,U,u,s:
dim:=L[1]:
S:=\{[op(2..3,L)]\}:
T:=\{seq([op(2..3,t[2])],t in ExpandMatrixL(L,L))\}:
while T minus S <> \{\} do
\quad U:=T minus S:
\quad S:=S union T:
\quad T:=\{\}:
\quad for u in U do
\quad \quad T:=T union \{seq([op(2..3,t[2])],t in ExpandMatrixL(L,[dim,op(u)]))\}:
\quad od:
od:
for s in S do
\quad if s[1]=[] or s[2]=[] then
\quad \quad S:=S minus \{s\}:
\quad fi:
od:
S:
end:
}
}

\bigskip

After we have the scheme $S$, by the Cofactor Expansion of any element in the scheme, a system of algebraic equations follows. For the children in $S$, it is convenient to let the almost-diagonal Toeplitz matrix itself be the first one, $C_1$; for the rest, any arbitrary ordering will do. For example, if after Cofactor Expansion of $C_1$, $c_2$ ``copies'' of $C_2$ and $c_3$ ``copies'' of $C_3$ are obtained, then the equation will be
$$
C_1 = 1+ c_2 t C_2 + c_3 t C_3 .
$$
However, if the above equation is for $C_i, i \neq 1$, i.e. $C_i$ is not the almost-diagonal Toeplitz matrix itself, then the equation is slightly different:
$$
C_i = c_2 t C_2 + c_3 t C_3 .
$$
Here $t$ is a symbol, as we assume the generating function is a rational function of $t$.
Here is the Maple code that implements how we get the generating function for the determinant of a family of almost-diagonal Toeplitz matrices by solving a system of algebraic equations:

\bigskip

{\obeylines
{\tt
GFMatrixL:=proc(L,t) local S,dim,var,eq,n,A,i,result,gu,mu:
dim:=L[1]:
S:=ChildrenMatrixL(L):
S:=[[op(2..3,L)], op(S minus \{[op(2..3,L)]\})]:
n:=nops(S):
var:=\{seq(A[i],i=1..n)\}:
eq:=\{\}:
for i from 1 to 1 do
\quad result:=ExpandMatrixL(L,[dim,op(S[i])]):
\quad for gu in result do
\quad \quad if gu[2][2]=[] or gu[2][3]=[] then
\quad \quad \quad result:=result minus \{gu\}:
\quad \quad fi:
\quad od:
\quad eq:=eq union \{A[i] - 1 - add(gu[1]*t*A[CountRank(S, [op(2..3, gu[2])])], gu in result)\}:
od:
for i from 2 to n do
\quad result:=ExpandMatrixL(L,[dim,op(S[i])]):
\quad for gu in result do
\quad \quad if gu[2][2]=[] or gu[2][3]=[] then
\quad \quad \quad result:=result minus \{gu\}:
\quad \quad fi:
\quad od:
\quad eq:=eq union \{A[i] - add(gu[1]*t*A[CountRank(S, [op(2..3, gu[2])])], gu in result)\}:
od:
gu:=solve(eq, var)[1]:
subs(gu, A[1]):
end:
}
}

\bigskip

{\tt GFMatrixL([20, [2, 3], [2, 4, 5]], t)} returns
$$
- \frac{1}{ 45\,{t}^{3}-12\,{t}^{2}+2\,t-1} .
$$
Compared to the empirical approach, the `symbolic dynamical programming' method is faster and more efficient for the moderate-size examples that we tried out. Moreover, as the lists grow larger, the latter method is likely to win out even more decisively, since with this non-guessing approach it is equally fast to get generating functions for determinants and permanents, and, as we all know, permanents are hard. The advantage of the present method is that it is more appealing to humans, and does not require any `meta-level' act of faith. However, both methods are very versatile and are great experimental approaches for enumerative combinatorics problems. We hope that our readers will find other applications.
\section*{Summary}

Rather than trying to tackle each enumeration problem, one at a time, using ad hoc human ingenuity each time, building up an intricate transfer matrix, and only using the computer at the end as a symbolic calculator, it is a much better use of our beloved silicon servants (soon to become our masters!) to replace `thinking' by `meta-thinking', i.e. develop experimental mathematics methods that can handle many different types of problems. In the two case studies discussed here, everything was made rigorous, but if one can make semi-rigorous and even non-rigorous discoveries, as long as they are {\it interesting}, one should not be hung up on rigorous proofs. In other words, if you can find a rigorous justification (like in these two case studies) that's nice, but if you can't, that's also nice!

\section*{Acknowledgment}

Many thanks are due to a very careful referee who pointed out many minor, but annoying, errors. Also thanks to Peter Doyle for permission to include his elegant electric argument.

\section*{References}

[DS] Peter Doyle and Laurie Snell, {\it ``Random Walks and Electrical Networks''}, Carus Mathematical Monographs (\# 22), Math. Assn. of America, 1984.

\bigskip

[ESZ] Shalosh B. Ekhad, N. J. A. Sloane and Doron Zeilberger, {\it Automated Proof (or Disproof) of Linear Recurrences Satisfied by Pisot Sequences}, The Personal Journal of Shalosh B. Ekhad and Doron Zeilberger. Available from \hfill\break {\tt http://www.math.rutgers.edu/\~{}zeilberg/mamarim/mamarimhtml/pisot.html}

\bigskip

[F] F. J. Faase, {\it On the number of specific spanning subgraphs of the graphs $G \times P_n$}, Ars Combinatoria {\bf 49} (1998), 129-154.

\bigskip

[KP] Manuel Kauers and Peter Paule, {\it ``The Concrete Tetrahedron''}, Springer, 2011.

\bigskip

[R] Paul Raff, {\it Spanning Trees in Grid Graphs}, \hfill\break {\tt https://arxiv.org/abs/0809.2551}.

\bigskip

[S] Richard Stanley, {\it ``Enumerative Combinatorics, Volume 1''}.
First edition: Wadsworth \& Brooks/Cole, 1986. Second edition: Cambridge University Press, 2011. \bigskip [Wi] Wikipedia contributors. ``Berlekamp-Massey algorithm.'' Wikipedia, The Free Encyclopedia. Wikipedia, The Free Encyclopedia, 26 Nov. 2018. Web. 7 Jan. 2019. \bigskip [Z1] Doron Zeilberger, {\it The C-finite Ansatz}, Ramanujan Journal {\bf 31} (2013), 23-32. Available from \hfill\break {\tt http://www.math.rutgers.edu/\~{}zeilberg/mamarim/mamarimhtml/cfinite.html} \bigskip [Z2] Doron Zeilberger, {\it Why the Cautionary Tales Supplied by Richard Guy's Strong Law of Small Numbers Should not be Overstated}, The Personal Journal of Shalosh B. Ekhad and Doron Zeilberger. Available from \hfill\break {\tt http://www.math.rutgers.edu/\~{}zeilberg/mamarim/mamarimhtml/small.html} \bigskip [Z3] Doron Zeilberger, {\it An Enquiry Concerning Human (and Computer!) [Mathematical] Understanding}, in: C.S. Calude, ed., ``Randomness \& Complexity, from Leibniz to Chaitin'', World Scientific, Singapore, 2007, pp. 383-410. Available from \hfill\break {\tt http://www.math.rutgers.edu/\~{}zeilberg/mamarim/mamarimhtml/enquiry.html} \bigskip First Version: Dec. 17, 2018. This version: Jan. 8, 2019. \end{document}
\section{Introduction} \label{Sect:1} In astrophysics, the emitted radiation is usually the only source of information about the physical conditions in the emitting medium. Physical properties of the emitting plasma are then derived by analysis and modeling of the observed spectra. For a long time, this has been done under the assumption of a local, equilibrium Maxwellian distribution. This is done even if the emitting medium is optically thin and therefore perhaps not dense enough for the equilibrium to be always ensured locally. Such an assumption is difficult to justify in dynamic situations with particle acceleration, as, e.g., a high-energy tail is difficult to equilibrate collisionally, since the collision frequency scales inversely with $E^{3/2}$, where $E$ is the particle energy \citep[e.g.,][]{Meyer-Vernet07}. \citet{Scudder13} argue that, in the case of stellar coronae, the assumption of the Maxwellian distribution should always be violated at heights above 1.05 stellar radii. If long-range interactions are induced, e.g., by reconnection, wave-particle interaction, or shocks, the particles in the system can become correlated and do not have a Maxwellian distribution \citep[e.g.,][]{Collier04,Vocks03,Vocks08,Drake06,Livadiotis09,Livadiotis10,Livadiotis13,Pierrard10,Gontikakis13,Laming13}. Rather, the distribution exhibits a high-energy power-law tail. The $\kappa$-distributions are a class of particle distributions having a near-Maxwellian core and a high-energy power-law tail, both of which are described by an analytic expression \citep[][see also Sect. 2]{Vasyliunas68,Owocki83}. The $\kappa$ index has been shown to be an independent thermodynamic index \citep{Livadiotis09,Livadiotis10,Livadiotis11a,Livadiotis13} in the generalized Tsallis statistical mechanics \citep[e.g.,][]{Tsallis88,Tsallis09,Leubner02,Leubner04a}.
The $\kappa$-distributions can be derived analytically in the case of a turbulent velocity diffusion coefficient inversely proportional to velocity. This has been shown for the plasma in a suprathermal radiation field \citep{Hasegawa85}, for electrons heated by lower hybrid waves \citep{Laming07}, and for solar flare plasmas where the distribution function arises as a consequence of a balance between diffusive acceleration and collisions \citep{Bian14}. Indeed, in solar flares, the $\kappa$-distributions provide a good fit to some of the X-ray spectra of coronal sources observed during partially occulted flares \citep{Kasparova09,Oka13}, although a second, near-Maxwellian distribution is also present \citep{Oka13}. \citet{Battaglia13} used the AIA and RHESSI observations of flares to derive the distribution function in the range of 0.1 to tens of keV. These authors showed that the distribution derived in the low-energy range from AIA does not match the high-energy tail observed by RHESSI. A possible cause of this mismatch is the assumption of a Maxwellian distribution in the calculation of AIA differential emission measures (DEMs), which may compromise the analysis, especially if the high-energy tail is present and observed by RHESSI. \citet{Dzifcakova11} have shown that the $\kappa$-distributions can explain the \ion{Si}{3} transition-region line intensities observed by the \textit{SOHO}/SUMER instrument \citep{Wilhelm95}, especially in the active region spectra \citep[see also][]{DelZanna14}. \citet{Testa14} inferred the presence of high-energy tails from the analysis of \ion{Si}{4} spectra observed by the IRIS spectrometer \citep{DePontieu14}. The $\kappa$-distributions are also routinely detected in the solar wind \citep[e.g.,][]{Collier96,Maksimovic97a,Maksimovic97b,Livadiotis10,LeChat11}.
The high-energy tails at keV energies can arise as a consequence of coronal nanoflares \citep{Gontikakis13} that are also able to produce the ``halo'' in the solar wind electron distribution \citep{Che14}. Furthermore, a claim has been made that the $\kappa$-distributions were detected also in the spectra of planetary nebulae \citep{Binette12,Nicholls12,Nicholls13,Dopita13}, although this has been challenged as a possible effect of atomic data uncertainties \citep{Storey13,Storey14}. The $\kappa$-distributions are also one of the possible explanations of the non-Maxwellian H$\alpha$ profiles detected in Tycho's supernova remnant \citep{Raymond10}. Although the $\kappa$-distributions were detected in solar flares, the transition region, and the solar wind, their presence in the solar corona is currently unknown despite numerous attempts at their diagnostics. Diagnostics of the high-energy electrons have been attempted by \citet{Feldman07} and \citet{Hannah10}. \citet{Feldman07} investigated whether the He-like intensities observed by SUMER could correspond to a bi-Maxwellian distribution with the second Maxwellian having a temperature of 10\,MK. These authors argued that no such second Maxwellian is necessary. However, this analysis was limited to Maxwellian distributions and did not include the effect of a proper high-energy power-law tail. \citet{Hannah10} used the X-ray off-limb observations of the quiet Sun performed by the RHESSI instrument \citep{Lin02} to obtain upper limits on the emission measures as a function of $\kappa$. However, for temperatures of several MK corresponding to the solar corona, these upper limits are large and increase with increasing $\kappa$. Direct attempts at spectroscopic diagnostics using EUV line intensities observed by \textit{Hinode}/EIS \citep{Culhane07} were performed by \citet{Dzifcakova10} and \citet{Mackovjak13}.
Indications of non-Maxwellian distributions were found using the \ion{O}{4}--\ion{O}{5} and \ion{S}{10}--\ion{S}{11} lines. However, such analysis was problematic due to large photon noise uncertainties affecting weak lines, atomic data uncertainties, and the possible presence of multi-thermal effects that would complicate the analysis. Therefore, even diagnostics using only strong lines will have to be supplemented by a DEM analysis under the assumption of a $\kappa$-distribution. Under the constraints of the current EUV instrumentation, such DEM analysis typically involves many different elements and ionization stages \citep[see, e.g.,][]{Warren12,Mackovjak14}. All this leads to the requirement of a reliable calculation of synthetic spectra involving many different elements and ionization stages. In this paper, we describe the KAPPA package for calculation of optically thin astrophysical spectra that arise due to collisional excitation by electrons with a $\kappa$-distribution. This package, allowing for fast calculation of line and continuum spectra for $\kappa$-distributions, is based on the freely available CHIANTI atomic database and software, currently in version 7.1 \citep{Dere97,Landi13}. The manuscript is organized as follows. The $\kappa$-distributions are described in Sect. \ref{Sect:2}. The synthesis of line spectra and of continua is described in Sects. \ref{Sect:3} and \ref{Sect:4}, respectively. Section \ref{Sect:5} describes the database and the software implementation. Examples of the synthetic spectra and the AIA filter responses for $\kappa$-distributions are provided in Sect. \ref{Sect:6}. A summary is given in Sect. \ref{Sect:7}.
\vspace{0.5cm} \begin{figure}[!t] \centering \includegraphics[width=8.8cm]{plot_kappa_logt620_eV.eps} \includegraphics[width=8.8cm]{plot_kappa_logt620_eV_adjMxw3_M.eps} \includegraphics[width=8.8cm]{plot_kappa_logt620_eV_adjMxw3_C.eps} \caption{The $\kappa$-distributions with $\kappa$\,=\,2, 3, 5, 10, 25 and the Maxwellian distribution plotted for log($T$/K)\,=\,6.20 (\textit{top}). Colors and linestyles denote the different values of $\kappa$. Approximations of the $\kappa$\,=\,3 distribution in the low-energy range with a Maxwellian distribution according to \citet{Livadiotis09} and \citet{Oka13} are shown in the \textit{middle} and \textit{bottom} panels, respectively. \\ A color version of this image is available in the online journal.} \label{Fig:Kappa} \end{figure} \section{The Non-Maxwellian $\kappa$-distributions} \label{Sect:2} \subsection{Definition and Basic Properties} \label{Sect:2.1} The $\kappa$-distribution of electron energies (Fig. \ref{Fig:Kappa}) is defined as \citep[e.g.,][]{Owocki83,Livadiotis09} \begin{equation} f_\kappa(E) \mathrm{d}E = A_{\kappa} \frac{2}{\sqrt{\pi} (k_\mathrm{B}T)^{3/2}} \frac{E^{1/2}\mathrm{d}E}{\left (1+ \frac{E}{(\kappa - 3/2) k_\mathrm{B}T} \right)^{\kappa+1}}\,, \label{Eq:Kappa} \end{equation} where $A_{\kappa}$\,=\,$\Gamma(\kappa+1)$/$\left(\Gamma(\kappa-1/2) (\kappa-3/2)^{3/2}\right)$ is the normalization constant and $k_\mathrm{B}$\,=\,1.38 $\times 10^{-16}$ erg\,K$^{-1}$ is the Boltzmann constant. The $\kappa$-distribution has two parameters, $T$\,$\in$\,$\left(0,+\infty\right)$ and $\kappa$\,$\in$\,$\left(3/2,+\infty\right)$. The Maxwellian distribution at a given $T$ corresponds to $\kappa$\,$\to$\,$\infty$. The departure from the Maxwellian distribution increases with decreasing $\kappa$, with the maximum departure occurring for $\kappa$\,$\rightarrow$\,3/2.
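The basic properties of Eq. (\ref{Eq:Kappa}) can be verified numerically. The following standalone Python sketch (not part of the KAPPA package itself) integrates the distribution in the dimensionless variable $x = E/k_\mathrm{B}T$ and checks that the normalization is unity and that the mean energy equals $3k_\mathrm{B}T/2$ independently of $\kappa$:

```python
import math

def f_kappa(x, kappa):
    """Kappa-distribution of electron energies in x = E / (k_B T):
    f(x) = A_k * (2/sqrt(pi)) * x**0.5 / (1 + x/(kappa - 3/2))**(kappa + 1)."""
    A = math.gamma(kappa + 1.0) / (math.gamma(kappa - 0.5) * (kappa - 1.5) ** 1.5)
    return (A * 2.0 / math.sqrt(math.pi) * math.sqrt(x)
            / (1.0 + x / (kappa - 1.5)) ** (kappa + 1.0))

def moment(kappa, power, n=50000, x_lo=1e-10, x_hi=1e8):
    """Trapezoidal integral of x**power * f(x) dx on a log-spaced grid."""
    u_lo, u_hi = math.log(x_lo), math.log(x_hi)
    h = (u_hi - u_lo) / n
    total = 0.0
    for i in range(n + 1):
        x = math.exp(u_lo + i * h)
        w = 0.5 if i in (0, n) else 1.0
        total += w * x ** power * f_kappa(x, kappa) * x * h  # dx = x du
    return total

for kappa in (2.0, 5.0, 10.0):
    # normalization should be 1; mean energy <E>/(k_B T) should be 3/2
    print(kappa, round(moment(kappa, 0), 4), round(moment(kappa, 1), 4))
```

Within the quadrature accuracy, the zeroth moment is unity and the first moment is 3/2 for every $\kappa$, illustrating that $T$ keeps its mean-energy interpretation.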
While the most probable energy $E_\mathrm{max}$\,=\,$(\kappa-3/2)k_\mathrm{B}T/\kappa$ is a decreasing function of $\kappa$, the mean energy $\left< E \right> = {3k_\mathrm{B}T}/{2}$ of a $\kappa$-distribution does not depend on $\kappa$ and is only a function of $T$. Because of this, the parameter $T$ has the same physical meaning for the $\kappa$-distributions as the (kinetic) temperature for the Maxwellian distribution. Additionally, \citet{Livadiotis09} and \citet{Livadiotis10} show that the $T$ also corresponds to the definition of physical temperature in the framework of the generalized Tsallis statistical mechanics \citep{Tsallis88,Tsallis09}, and permits the generalization of the zero-th law of thermodynamics. Note that this fact permits e.g. the definition of electron kinetic pressure $p$\,=\,$n_\mathrm{e}k_\mathrm{B}T$ in the usual manner. Note also that the $\kappa$-distribution is not the only possible representation of a non-Maxwellian distribution with a high-energy tail \citep[e.g.,][]{Dzifcakova11,Che14}. Nevertheless, its analytical expression and a single additional parameter $\kappa$ make it a useful special case of an equilibrium particle distribution associated with turbulence \citep{Hasegawa85,Laming07,Bian14}, offering a rather straightforward evaluation of various rate coefficients associated with radiative processes (Sects. \ref{Sect:3} and \ref{Sect:4}). \subsection{Approximation by Maxwellian Core and a Power-Law Tail} \label{Sect:2.2} It is straightforward to see from Eq. (\ref{Eq:Kappa}) that in the high-energy limit, the $\kappa$-distribution approaches a power-law with the index of $-(\kappa+1/2)$. On the other hand, \citet{Meyer-Vernet95} and \citet{Livadiotis09} showed that, in the low-energy limit, the $\kappa$-distribution behaves as a Maxwellian with \begin{equation} T_\mathrm{M} = \frac{\kappa-3/2}{\kappa+1}T\,. 
\label{Eq:T_M} \end{equation} The low-energy end of a $\kappa$-distribution can indeed be well approximated by a Maxwellian, if this Maxwellian is scaled by a constant \begin{equation} c_\mathrm{M}(\kappa) = A_\kappa \frac{\left(\frac{\kappa-3/2}{\kappa+1}\right)^{3/2} \mathrm{exp}\left(\frac{\kappa+1}{2\kappa+1}\right) } {\left(1+\frac{1}{2\kappa+1}\right)^{(\kappa+1)}}\,, \label{Eq:C_M} \end{equation} so that the two distributions match at the most probable energy $E_\mathrm{max}$\,=\,$(\kappa-3/2)k_\mathrm{B}T/\kappa $ (Fig. \ref{Fig:Kappa}, \textit{middle}; see also e.g., \citet{Dzifcakova02}, Fig. 1 therein). \citet{Oka13} attempted to approximate the core of a $\kappa$-distribution with a Maxwellian at temperature $T_\mathrm{C}$ \begin{equation} T_\mathrm{C} = \frac{\kappa-3/2}{\kappa} T\,. \label{Eq:T_C} \end{equation} Such Maxwellian core has to be adjusted (Fig. \ref{Fig:Kappa}, \textit{bottom}) by a scaling constant \citep[][Eq. (3) therein]{Oka13} \begin{equation} c(\kappa) = \mathrm{exp(1)} \frac{\Gamma(\kappa+1)}{\Gamma(\kappa -1/2)} \kappa^{-3/2} \left(1 +\frac{1}{\kappa}\right)^{-(\kappa+1)}\,, \label{Eq:C_kappa} \end{equation} so that the two distributions match at $E$\,=\,$k_\mathrm{B}T_\mathrm{C}$. This approximation by a Maxwellian core leads to a worse match at very low energies $E$\,$\to$\,0 (Fig. \ref{Fig:Kappa}, \textit{bottom}). These approximations suggest that the $\kappa$-distribution can be thought of as a Maxwellian core at a lower temperature with a power-law tail. \begin{figure*}[!ht] \centering \includegraphics[width=17.4cm]{ioneq_Fe.eps} \caption{Example of the ionization equilibrium for iron and the $\kappa$-distributions. Relative ion abundances for \ion{Fe}{10}--\ion{Fe}{18} are plotted as a function of $T$ and $\kappa$ and compared to the relative ion abundances for the Maxwellian distribution from CHIANTI v7.1 (full lines). 
\\ A color version of this image is available in the online journal.} \label{Fig:Ioneq} \end{figure*} \section{Line intensities for the $\kappa$-distributions} \label{Sect:3} In the optically thin solar and stellar coronae, as well as the associated transition regions and flares, spectral lines arise as a consequence of particle collisions exciting the ions in the highly ionized plasma. The total emissivity $\varepsilon_{ji}$ of a spectral line $\lambda_{ji}$ corresponding to a transition $j \to i$, $j > i$, in a $k$-times ionized ion of the element $X$ is usually expressed as \citep[e.g.,][]{Mason94,Phillips08} \begin{eqnarray} \nonumber \varepsilon_{ji} &=& \frac{hc}{\lambda_{ji}} A_{ji} n(X_j^{+k}) = \frac{hc}{\lambda_{ji}} \frac{A_{ji}}{n_\mathrm{e}} \frac{n(X_j^{+k})}{n(X^{+k})} \frac{n(X^{+k})}{n(X)} A_X n_\mathrm{e} n_\mathrm{H}\\ &=& A_X G_{X,ji}(T,n_\mathrm{e},\kappa) n_\mathrm{e} n_\mathrm{H}\,, \label{Eq:line_emissivity} \end{eqnarray} where $h$\,$\approx$\,6.62\,$\times$\,10$^{-27}$ erg\,s is the Planck constant, $c$\,=\,3\,$\times$10$^{10}$\,cm\,s$^{-1}$ is the speed of light, $A_{ji}$ the Einstein coefficient for spontaneous emission, and $n(X_j^{+k})$ the density of the ion $+k$ with an electron on the excited upper level $j$. In Eq. (\ref{Eq:line_emissivity}), the latter quantity, $n(X_j^{+k})$, is usually expanded in terms of the ionization fraction $n(X^{+k})/n_X$ (Sect. \ref{Sect:3.1}) and the excitation fraction $n(X_j^{+k})/n(X^{+k})$ (Sect. \ref{Sect:3.2}). There, $n(X^{+k})$ denotes the total density of the ion $+k$, and $n(X)$\,$\equiv$\,$n_X$ corresponds to the total density of element $X$ whose abundance is $A_X$, with $n_\mathrm{H}$ being the hydrogen density. The function $G_{X,ji}(T,n_\mathrm{e},\kappa)$ is the contribution function for the line $\lambda_{ji}$.
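The factorization in Eq. (\ref{Eq:line_emissivity}) maps directly onto code. The following standalone Python sketch composes the emissivity from the individual fractions; all numerical values are placeholders for illustration only, not real atomic data:

```python
# Planck constant and speed of light in CGS units, as quoted in the text
H_PLANCK = 6.62e-27   # erg s
C_LIGHT = 3.0e10      # cm s^-1

def line_emissivity(wvl_cm, a_ji, n_e, n_h, level_frac, ion_frac, abund):
    """eps_ji = (h c / lambda_ji) * A_ji * n(X_j^+k), with the excited-level
    density expanded as level_frac * ion_frac * abund * n_h; the contribution
    function G carries the (A_ji / n_e) factor, and n_e re-enters through
    the n_e * n_h product, so it cancels identically."""
    g = (H_PLANCK * C_LIGHT / wvl_cm) * (a_ji / n_e) * level_frac * ion_frac
    return abund * g * n_e * n_h

# Placeholder values for illustration only (not real atomic data):
eps = line_emissivity(wvl_cm=195.12e-8, a_ji=1.0e10, n_e=1.0e9, n_h=0.83e9,
                      level_frac=1.0e-4, ion_frac=0.2, abund=3.2e-5)
print(eps)  # emissivity in erg cm^-3 s^-1
```

Note that the result is independent of how $n_\mathrm{e}$ is split between the contribution function and the $n_\mathrm{e} n_\mathrm{H}$ product.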
The intensity $I_{ji}$ of the spectral line is then given by the integral of the emissivity along a path $l$ corresponding to the line of sight \begin{equation} I_{ji} = \int A_X G_{X,ji}(T,n_\mathrm{e},\kappa) n_\mathrm{e} n_\mathrm{H} \mathrm{d}l\,, \label{Eq:line_intensity} \end{equation} where $EM$\,=\,$\int n_\mathrm{e} n_\mathrm{H} \mathrm{d}l$ is the emission measure of the emitting plasma. The CHIANTI atomic database provides the observed wavelengths $\lambda_{ji}$ and the corresponding Einstein coefficients $A_{ji}$, while the electron density $n_\mathrm{e}$ is a free parameter. To complete the synthesis of line intensities for the $\kappa$-distributions, the relative ion abundance $n(X^{+k})/n_X$ and the relative level population $n(X_j^{+k})/n(X^{+k})$ must be calculated. This is detailed in the remainder of this section. \subsection{Ionization Equilibrium} \label{Sect:3.1} A common assumption in the calculation of the relative ion abundance $n(X^{+k})/n_X$ is that of ionization equilibrium, i.e., that the relative ion abundance is not a function of time. Then, the relative ion abundance is given by the equilibrium between the ionization and recombination rates. In coronal conditions, the dominant ionization processes are the direct ionization and the autoionization \citep[e.g.,][]{Phillips08}, while the dominant recombination processes are radiative and dielectronic recombination. Since these processes involve free electrons, all of these rates depend on $T$ and $\kappa$ \citep[e.g.,][]{Dzifcakova92,Anderson96,Dzifcakova02,Wannawichian03,Dzifcakova13}. We note that in non-equilibrium ionization conditions, the $n(X^{+k})/n_X$ depends on the specific evolution of the system, in particular on the energy sources, sinks, and the resulting flows \citep[e.g.,][]{Bradshaw03,Bradshaw04,Bradshaw09}. Since radiation is an energy sink, the system is then coupled.
\citet{Dzifcakova13} provide the latest available ionization equilibria for $\kappa$-distributions for all ions of the elements with $Z$\,$\leqq$\,30, i.e., H to Zn. These calculations use the same atomic data for ionization and recombination as the ionization equilibrium for the Maxwellian distribution available in the CHIANTI database, v7.1 \citep{Landi13,Dere07,Dere97}. Figure \ref{Fig:Ioneq} shows examples of the behaviour of the relative ion abundances of \ion{Fe}{10}--\ion{Fe}{18} with $\kappa$. The ionization peaks are in general wider for lower $\kappa$. Compared to the Maxwellian distribution, ionization peaks of the transition-region ions are in general shifted to lower log$(T/$K), while the coronal ions are generally shifted to higher $T$, especially for low $\kappa$\,=\,2--3 \citep{Dzifcakova13}. Exceptions to these rules of thumb occur. E.g., the ionization peak of \ion{Fe}{17} is shifted to lower $T$ for $\kappa$\,=\,5, while for $\kappa$\,=\,2, it is shifted to higher $T$ compared to the Maxwellian distribution. The shifts of the ionization peaks are typically $\Delta$log($T$/K)\,$\approx$\,0.10--0.15 for $\kappa$\,=\,2, although much larger shifts can also occur, e.g., for \ion{Fe}{7} \citep{Dzifcakova13}. This behaviour of the individual ionization peaks with $\kappa$ strongly influences the resulting line intensities (Eqs. \ref{Eq:line_emissivity} and \ref{Eq:line_intensity}). Therefore, the approximate temperatures determined from the observed lines in the spectrum can be different for a $\kappa$-distribution and the Maxwellian distribution. For a plasma where the high-energy tail or electron beams can be expected, the $T$ is related to the mean energy of the distribution (Sect. \ref{Sect:2.1}) including the high-energy tail. Notably, $T$ can be very different from the Maxwellian ``bulk'' temperature $T_\mathrm{M}$ or $T_\mathrm{C}$ (Sect. \ref{Sect:2.2}).
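The magnitude of the difference between $T$ and the core temperatures of Sect. \ref{Sect:2.2} is straightforward to quantify. A minimal standalone Python sketch evaluating Eqs. (\ref{Eq:T_M}) and (\ref{Eq:T_C}):

```python
import math

def t_m(kappa, T):
    """Temperature of the Maxwellian matching the low-energy limit,
    T_M = (kappa - 3/2) / (kappa + 1) * T."""
    return (kappa - 1.5) / (kappa + 1.0) * T

def t_c(kappa, T):
    """Temperature of the Maxwellian core, T_C = (kappa - 3/2) / kappa * T."""
    return (kappa - 1.5) / kappa * T

T = 10 ** 5.15  # K; a transition-region temperature used as an example
for kappa in (2.0, 5.0, 25.0):
    print(kappa,
          round(math.log10(t_c(kappa, T)), 2),
          round(math.log10(t_m(kappa, T)), 2))
```

For $\kappa$\,=\,5 the core temperature drops to log($T_\mathrm{C}$/K)\,$\approx$\,5.0, and for $\kappa$\,=\,2 to $\approx$\,4.55, with $T_\mathrm{M}$ lower still.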
Strong changes in the ionization equilibrium for the $\kappa$-distributions, mainly in the transition region, result, e.g., in \ion{O}{4} being formed at log($T$/K)\,$\approx$\,5.15 for the Maxwellian distribution, but at $\approx$\,5.0 for $\kappa$\,=\,5 and $\approx$\,4.8 for $\kappa$\,=\,2 \citep{Dudik14a}. The core of the distribution can have even lower temperatures -- for log($T$/K)\,$\approx$\,5.15, the log($T_\mathrm{C}$/K)\,=\,5.0 for $\kappa$\,=\,5, but only 4.55 for $\kappa$\,=\,2. The values of $T_\mathrm{M}$ are even lower (Sect. \ref{Sect:2.2}). Therefore, without diagnostics of $\kappa$ in situations where the high-energy tail can exist, one has to be very careful in the estimation of the plasma temperature from the fact that a particular line is observed. The situation is furthermore complicated by the dependence of line emission on the differential emission measure of the emitting plasma \citep[e.g.,][]{Warren12,Teriaca12b}, which is itself a function of $\kappa$ \citep{Mackovjak14}. \subsection{Excitation Equilibrium and Rates} \label{Sect:3.2} The relative level populations $n(X_j^{+k})/n(X^{+k})$ can be obtained under the assumption of excitation equilibrium \citep[][Eqs. (4.24) and (4.25) therein]{Phillips08}. In equilibrium, the total number of transitions to and from any given level $j$ is balanced by transitions both from all other levels $m$ to the level $j$ and from the level $j$ to any other level $m$. In the conditions of the solar and stellar coronae, ion-electron collisions are the dominant excitation mechanism, while deexcitations are facilitated by spontaneous radiative decay (with the rates $A_{jm}$) and/or by collisional deexcitation during ion-electron collisions.
The rates of electron excitation and deexcitation, $C_{ij}^\mathrm{e}$ and $C_{ji}^\mathrm{d}$, can be expressed as \citep{Bryans06,Dudik14b} \begin{eqnarray} C_{ij}^\mathrm{e} &=& \frac{2 \sqrt{2} a_0^2 I_H }{\sqrt{m_\mathrm{e}}\omega_i} \left(\frac{\pi}{k_\mathrm{B}T}\right)^{1/2} \mathrm{e}^{-\frac{\Delta E_{ij}}{k_\mathrm{B}T}} \Upsilon_{ij}(T,\kappa)\,, \label{Eq:Excit_rate_Upsilon} \\ C_{ji}^\mathrm{d} &=& \frac{2 \sqrt{2} a_0^2 I_H }{\sqrt{m_\mathrm{e}}\omega_j}\left(\frac{\pi}{k_\mathrm{B}T}\right)^{1/2} \, \rotatebox[origin=c]{180}{$\Upsilon$}_{ji}(T,\kappa)\,, \label{Eq:Deexcit_rate_Downsilon} \end{eqnarray} where $a_0$\,=\,5.29\,$\times$10$^{-9}$ cm is the Bohr radius, $m_\mathrm{e}$\,=\,9.1\,$\times$10$^{-28}$\,g is the electron rest mass, $I_H$\,$\approx$\,13.6\,eV\,$\equiv$\,1\,Ryd is the hydrogen ionization energy, $\omega_i$ and $\omega_j$ are the statistical weights of the levels $i$ and $j$, respectively, $\Delta E_{ij}$\,=\,$E_i - E_j$ is the energy of the transition, and $E_i$ and $E_j$ are the incident and final electron energies. The $\Upsilon_{ij}(T,\kappa)$ and \rotatebox[origin=c]{180}{$\Upsilon$}$_{ji}(T,\kappa)$ denote the distribution-averaged collision strengths, given by \begin{eqnarray} \Upsilon_{ij} &=& A_\kappa \frac{\Delta E_{ij}}{k_\mathrm{B}T} \mathrm{e}^{\frac{\Delta E_{ij}}{k_\mathrm{B}T}} \int\limits_{\Delta E_{ij}}^{+\infty} \frac{\Omega_{ji}(E_i)}{ \left(1+ \frac{E_i}{(\kappa-3/2)k_\mathrm{B}T}\right)^{\kappa+1}} \,\frac{\mathrm{d}E_i}{\Delta E_{ij}}\,, \label{Eq:Upsilon_kappa} \\ \rotatebox[origin=c]{180}{$\Upsilon$}_{ji} &=& A_\kappa \frac{\Delta E_{ij}}{k_\mathrm{B}T} \int\limits_0^{+\infty} \frac{\Omega_{ji}(E_j)}{\left(1+ \frac{E_j}{(\kappa-3/2)k_\mathrm{B}T}\right)^{\kappa+1}} \,\frac{\mathrm{d}E_j}{\Delta E_{ij}}\,.
\label{Eq:Downsilon_kappa} \end{eqnarray} In these expressions, $\Omega_{ji}(E_j)$\,=\,$\Omega_{ij}(E_i)$ is the collision strength, i.e., the non-dimensionalised cross-section \begin{equation} \Omega_{ji}(E_j) = \omega_j \frac{E_j}{I_H} \frac{\sigma_{ji}^\mathrm{d}(E_j)}{\pi a_0^2} = \omega_i \frac{E_i}{I_H} \frac{\sigma_{ij}^\mathrm{e}(E_i)}{\pi a_0^2}\,, \label{Eq:Omega} \end{equation} where $\sigma_{ij}^\mathrm{e}$ and $\sigma_{ji}^\mathrm{d}$ are the electron impact excitation and deexcitation cross-sections, respectively. Note that with $\kappa$\,$\to$\,$\infty$, the $\Upsilon_{ij}(T,\kappa)$ and \rotatebox[origin=c]{180}{$\Upsilon$}$_{ji}(T,\kappa)$ revert to the $\Upsilon_{ij}(T)$ commonly used for the Maxwellian distribution \citep[][]{Seaton53,Burgess92,Mason94,Bradshaw13}, with the property of $\Upsilon_{ij}(T)$\,$\equiv$\,\rotatebox[origin=c]{180}{$\Upsilon$}$_{ji}(T)$ being recovered. The $\Upsilon_{ij}(T,\kappa)$ and \rotatebox[origin=c]{180}{$\Upsilon$}$_{ji}(T,\kappa)$, together with the Eqs. (\ref{Eq:Excit_rate_Upsilon}) and (\ref{Eq:Deexcit_rate_Downsilon}) and the equations of statistical equilibrium \citep[Eqs. (4.24) and (4.25) in][]{Phillips08}, can then be used to synthesize the spectra for the $\kappa$-distributions in the same manner as for the Maxwellian distribution (Sect. \ref{Sect:5.2}). \begin{figure*} \centering \includegraphics[width=5.9cm]{o4_1_4_apr.eps} \includegraphics[width=5.9cm]{omega_o4_1_4_smt.eps} \includegraphics[width=5.9cm]{o4_1_4.eps} \caption{\textit{Left:} Approximation of $\Upsilon$ from CHIANTI. \textit{Middle:} Comparison of the approximation of $\Omega$ (red line) with atomic data of \citet{Liang12} without (black) and with smoothing (blue).
\textit{Right:} Comparison of the $\Upsilon$s calculated using our approximation (full lines) with direct calculations for the Maxwellian distribution (triangles) for different $\kappa$-distributions with $\kappa$\,=\,25 (violet), 10 (blue), 7 (turquoise), 5 (green), 3 (yellow), and 2 (red). } \label{Fig:O4} \end{figure*} \begin{figure*}[!ht] \centering \includegraphics[width=5.9cm,bb=20 0 453 340,clip]{fe_11_1-3.eps} \includegraphics[width=5.9cm,bb=20 0 453 340,clip]{fe_11_2-39.eps} \includegraphics[width=5.9cm,bb=20 0 453 340,clip]{fe_11_2-246.eps} \includegraphics[width=5.9cm,bb=20 0 453 340,clip]{err_fe_11_1-3.eps} \includegraphics[width=5.9cm,bb=20 0 453 340,clip]{err_fe_11_2-39.eps} \includegraphics[width=5.9cm,bb=20 0 453 340,clip]{err_fe_11_2-246.eps} \caption{$\Upsilon_{\kappa}$ (\textit{top}) and their relative errors (\textit{below}) for the \ion{Fe}{11} 3s$^2$3p$^4$ $^3$P$_2$\,--\,3s$^2$3p$^4$ $^3$P$_0$ \textit{(left)}, 3s$^2$3p$^4$ $^3$P$_1$ -- 3s$^2$3p$^4$($^2$D) $^3$S$_0$ (\textit{middle}), and 3s$^2$3p$^4$ $^3$P$_1$ -- 3p$^5$ 3d $^3$F$_3$ \textit{(right)} transitions. Black lines show a comparison of the CHIANTI approximation with the direct calculations for the Maxwellian distribution. Colors show the comparison of our approximation to direct calculations for the $\kappa$-distribution with $\kappa$\,=\,25 (blue), 10 (turquoise), 5 (green), 3 (yellow), and 2 (red).} \label{Fig:Errors_fe11} \end{figure*} \begin{figure*}[!ht] \centering \includegraphics[width=5.9cm,bb=20 0 453 340,clip]{fe_17_2-3.eps} \includegraphics[width=5.9cm,bb=20 0 453 340,clip]{fe_17_2-43.eps} \includegraphics[width=5.9cm,bb=20 0 453 340,clip]{fe_17_3-50.eps} \includegraphics[width=5.9cm,bb=20 0 453 340,clip]{err_fe_17_2-3.eps} \includegraphics[width=5.9cm,bb=20 0 453 340,clip]{err_fe_17_2-43.eps} \includegraphics[width=5.9cm,bb=20 0 453 340,clip]{err_fe_17_3-50.eps} \caption{Same as in Fig.
\ref{Fig:Errors_fe11}, but for the following transitions in \ion{Fe}{17}: 2s$^2$2p$^5$3s $^1$P$_2$ -- 2s$^2$2p$^5$3s $^1$P$_1$ (\textit{left}), 2s$^2$2p$^5$3s $^1$P$_2$ -- 2s$^2$2p$^5$4p $^2$D$_2$ (\textit{middle}), and 2s$^2$2p$^5$3s $^1$P$_1$ -- 2s$^2$2p$^5$4p $^2$D$_2$ (\textit{right}).} \label{Fig:Errors_fe17} \end{figure*} \begin{figure*}[!ht] \centering \includegraphics[width=8.8cm]{scattr_fe_11_Maxwell_all.eps} \includegraphics[width=8.8cm]{scattr_fe_17_Maxwell_all.eps} \includegraphics[width=8.8cm]{scattr_fe_11_k10_all.eps} \includegraphics[width=8.8cm]{scattr_fe_17_k10_all.eps} \includegraphics[width=8.8cm]{scattr_fe_11_k5_all.eps} \includegraphics[width=8.8cm]{scattr_fe_17_k5_all.eps} \includegraphics[width=8.8cm]{scattr_fe_11_k3_all.eps} \includegraphics[width=8.8cm]{scattr_fe_17_k3_all.eps} \includegraphics[width=8.8cm]{scattr_fe_11_k2_all.eps} \includegraphics[width=8.8cm]{scattr_fe_17_k2_all.eps} \caption{The relative error of $\Upsilon_{\kappa}$ to $\Upsilon_{\kappa DC}$ for \ion{Fe}{11} (\textit{left}) and \ion{Fe}{17} (\textit{right}) as a function of $\Upsilon_{\kappa DC}$ at temperatures corresponding to the maximum of the ion abundance, $\Upsilon_{\kappa}(T_{\mathrm{max}})$. Black points are for the CHIANTI approximation (Eq. \ref{Eq:Ups_k_approx}). Different colors stand for results for $\kappa$\,=\,2 (red), 3 (orange), 5 (green), and 10 (blue). } \label{Fig:Scatterplots} \end{figure*} \subsection{Collision Strength Approximation} \label{Sect:3.3} The calculation of the collision strengths for excitation and deexcitation averaged over $\kappa$-distributions for a large number of transitions introduces the problem of the accessibility of atomic cross-sections $\Omega_{ji}(E_j)$. Only a few databases contain these data, and typically only for a small number of transitions.
The CHIANTI database and software \citep{Dere97,Landi13} contains spline approximations to the Maxwellian-averaged collision strengths for the majority of the astronomically interesting ions of elements H to Zn. CHIANTI allows for computation and analysis of solar spectra and is an important tool for the diagnostics of solar plasma under the assumption of a Maxwellian distribution. We used the CHIANTI database to calculate the approximate cross-sections $\Omega$ and subsequently the approximate excitation and de-excitation rates $\Upsilon_{ij}(T,\kappa)$ and \rotatebox[origin=c]{180}{$\Upsilon$}$_{ji}(T,\kappa)$ for the $\kappa$-distributions. This approximate method was described e.g. in \citet{Dzifcakova06} and tested for \ion{Fe}{15} by \citet{DzifcakovaMason08}. Here, we use this method to obtain the $\Upsilon_{ij}(T,\kappa)$ and \rotatebox[origin=c]{180}{$\Upsilon$}$_{ji}(T,\kappa)$ for all transitions in all the elements and ions available within CHIANTI. The approximation works as follows: A functional form for the approximation of $\Omega$ is assumed \citep{Abramowitz65} \begin{equation} \Omega = \sum_{n=0}^{n_\mathrm{max}}{\cal C}_{n}u^{-n}+D~\mathrm{ln}(u), \label{Eq:Omega_approx} \end{equation} where ${\cal C}_{n}$ and $D$ are coefficients and $u=E_{i}/\Delta E_{ij}$. The advantage of this approximation is a simple analytical evaluation of its integral over the distribution function. This approximation has often been used to express the collision strength, e.g., by \citet{Mewe72}.
The $\Upsilon_{ij}$ for the Maxwellian distribution can then be written as: \begin{equation} \Upsilon_{ij}=\frac{\Delta E_{ij}}{k_\mathrm{B}T} \mathrm{e}^{\frac{\Delta E_{ij}}{k_\mathrm{B}T}} \int_{1}^{\infty}\Omega_{ij} \mathrm{e}^{\left(-\frac{E_{i}}{k_\mathrm{B}T}\right)} \mathrm{d}\left(\frac{E_{i}}{\Delta E_{ij}}\right)\,, \label{Eq:Ups_mxw_approx} \end{equation} which after integration leads to \begin{equation} \Upsilon_{ij}={\cal C}_{0}+\left( \sum_{n=1}^{n_\mathrm{max}}y{\cal C}_{n}{\cal E}_{n}(y)+D {\cal E}_{1}(y) \right) e^{y}, \label{Eq:Ups_mxw_approx_expr} \end{equation} where $y=\Delta E_{ij}/k_\mathrm{B}T$ and ${\cal E}_{n}(y)$ is an $n$-th order exponential integral. The behaviour of $\Omega$ in the high-energy limit and the corresponding behaviour of $\Upsilon_{ij}$ provide the following conditions for the coefficients ${\cal C}_{n}$ and $D$ for the electric dipole transitions \begin{equation} D=\frac{4\omega_{i}f_{ij}}{\Delta E_{ij}},~ \Upsilon_{ij}(\rightarrow\infty)=\sum\limits_{n=0}^{n_\mathrm{max}}{\cal C}_{n}=\Omega(u=1)\,, \label{Eq:D-Ups_conditions_type1} \end{equation} while for the non-electric-dipole, non-exchange transitions \begin{eqnarray} D&=&0,~\Upsilon_{ij}(\rightarrow0)={\cal C}_{0},\nonumber \\ \Upsilon_{ij}(\rightarrow\infty)&=&\sum_{n=0}^{n_\mathrm{max}}{\cal C}_{n}=\Omega(u=1) \label{Eq:D-Ups_conditions_type2} \end{eqnarray} and finally for the exchange transitions \begin{eqnarray} {\cal C}_{0}&=&{\cal C}_{1}=D=0,~\nonumber \\ \Upsilon_{ij}(\rightarrow 0)&=&y\int_{1}^{\infty}\Omega\,\mathrm{d}u,~\nonumber \\ \Upsilon_{ij}(\rightarrow\infty)&=&\sum_{n=0}^{n_\mathrm{max}}{\cal C}_{n}=\Omega(u=1). \label{Eq:D-Ups_conditions_type3} \end{eqnarray} The low- and high-energy limits $\Upsilon_{ij}(\rightarrow0)$ and $\Upsilon_{ij}(\rightarrow\infty)$ can be found in the CHIANTI database. The coefficients ${\cal C}_{n}$ and $D$ can be evaluated from the collision strengths in CHIANTI, averaged over the Maxwellian distribution, by the least-squares method.
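The analytic integration leading to Eq. (\ref{Eq:Ups_mxw_approx_expr}) can be checked numerically. The following standalone Python sketch (with arbitrary, made-up coefficients ${\cal C}_n$ and $D$, not fitted to any real transition) compares the closed form with a direct quadrature of Eq. (\ref{Eq:Ups_mxw_approx}):

```python
import math

def expn(n, y, steps=50000, span=200.0):
    """n-th order exponential integral E_n(y) = int_1^inf e^(-y u) u^(-n) du,
    via the trapezoidal rule (the integrand is negligible beyond u = 1 + span/y)."""
    umax = 1.0 + span / y
    h = (umax - 1.0) / steps
    s = 0.5 * (math.exp(-y) + math.exp(-y * umax) * umax ** (-n))
    for i in range(1, steps):
        u = 1.0 + i * h
        s += math.exp(-y * u) * u ** (-n)
    return s * h

def upsilon_closed(C, D, y):
    """Closed form: Upsilon = C_0 + (sum_{n>=1} y C_n E_n(y) + D E_1(y)) e^y."""
    tail = sum(y * C[n] * expn(n, y) for n in range(1, len(C)))
    return C[0] + (tail + D * expn(1, y)) * math.exp(y)

def upsilon_direct(C, D, y, steps=50000, span=200.0):
    """Direct quadrature: Upsilon = y e^y int_1^inf Omega(u) e^(-y u) du,
    with Omega(u) = sum_n C_n u^(-n) + D ln(u)."""
    omega = lambda u: sum(c * u ** (-n) for n, c in enumerate(C)) + D * math.log(u)
    umax = 1.0 + span / y
    h = (umax - 1.0) / steps
    s = 0.5 * (omega(1.0) * math.exp(-y) + omega(umax) * math.exp(-y * umax))
    for i in range(1, steps):
        u = 1.0 + i * h
        s += omega(u) * math.exp(-y * u)
    return y * math.exp(y) * s * h

# Made-up coefficients, for illustration only:
C, D = [0.5, 0.3, 0.2, 0.1], 0.4
for y in (0.5, 1.0, 2.0):
    print(y, upsilon_closed(C, D, y), upsilon_direct(C, D, y))
```

The two evaluations agree to within the quadrature accuracy, confirming in particular that the $D\,\mathrm{ln}(u)$ term integrates to $D{\cal E}_1(y)\mathrm{e}^y$ and that the $n$\,=\,0 term reduces to ${\cal C}_0$.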
To achieve higher precision, we used approximations (Eq. \ref{Eq:Omega_approx}) up to $n_\mathrm{max}=7$. The approximate method described here is also used to calculate the distribution-averaged collision strengths for de-excitation \rotatebox[origin=c]{180}{$\Upsilon$}$_{ji}(T,\kappa)$. \subsection{Validity of the Approximate Method} \label{Sect:3.4} Figure \ref{Fig:O4} demonstrates the approximation of $\Omega$ and the calculation of $\Upsilon_{\kappa}$ for the \ion{O}{4} transition 2s$^2$\,2p\,$^2$P$_{1/2}$\,--\,2s\,2p$^2$\,$^4$P$_{3/2}$ at 1401.16\AA. The atomic data for this transition are taken from \citet{Liang12}. We find a typical precision of a few percent in the approximation of the CHIANTI $\Upsilon$'s. This is the case for the \ion{O}{4} 1401.16\AA~transition shown, for which we find a precision of 1--2\%. However, for a small fraction of transitions the precision can be significantly worse, up to approximately 15\%. Fulfilling the conditions (\ref{Eq:D-Ups_conditions_type1})--(\ref{Eq:D-Ups_conditions_type3}) for the coefficients guarantees correct behaviour of $\Omega$ at high and threshold energies. It is, however, difficult to compare the data for all transitions of each ion. Occasional errors in the approximation of $\Omega$ (Eq. \ref{Eq:Omega_approx}) cannot be excluded at present.
Their propagation to the calculated $\Upsilon_{\kappa}$ is further minimized by adopting \begin{equation} \Upsilon_{\kappa}=\Upsilon_\mathrm{Maxwell}^\mathrm{CHIANTI} \frac{\Upsilon_{\kappa}^\mathrm{approx}}{\Upsilon_\mathrm{Maxwell}^\mathrm{approx}}\,, \label{Eq:Ups_k_approx} \end{equation} where $\Upsilon_{\kappa}$ is the final $\Upsilon(\kappa,T)$ for the $\kappa$-distributions, $\Upsilon_\mathrm{Maxwell}^\mathrm{CHIANTI}$ is ${\Upsilon}(T)$ taken from CHIANTI for the Maxwellian distribution, and $\Upsilon_{\kappa}^\mathrm{approx}$ and $\Upsilon_\mathrm{Maxwell}^\mathrm{approx}$ are $\Upsilon$'s calculated from our approximations of the cross sections for the $\kappa$-distributions and the Maxwellian distribution, respectively. First tests of the precision of the approximate method described in Sect. \ref{Sect:3.3} were performed by \citet{DzifcakovaMason08}. These authors used $n_\mathrm{max}=5$ and tested the validity of the approximation of the cross-section $\Omega$ for some of the \ion{Fe}{15} transitions. An overall precision better than 10\% was found. The approximation worked almost perfectly for the allowed transitions. Worse results were found for the forbidden transitions. It was also found that transitions with strong resonance contributions and a low ratio of the excitation energy to temperature can also be problematic. However, the $\Omega$s for all transitions were reproduced to an accuracy better than 15\% \citep{DzifcakovaMason08}. To supplement this analysis, we used $n_\mathrm{max}=7$ (Sect. \ref{Sect:3.3}) and tested the approximate method on two ions, \ion{Fe}{11} and \ion{Fe}{17}. We used the original atomic cross sections from \citet{DelZanna10a} for \ion{Fe}{11} and \citet{DelZanna11b} for \ion{Fe}{17}. These Maxwellian-averaged $\Upsilon_{ij}(T)$ are implemented in the CHIANTI database, version 7.1 \citep{Landi13}.
Here, we compare our approximation based on these Maxwellian data in CHIANTI with the $\Upsilon_{ij}(T,\kappa)$ and \rotatebox[origin=c]{180}{$\Upsilon$}$_{ji}(T,\kappa)$ calculated directly from the $\Omega$s using the method of \citet{Dudik14b}. Figures \ref{Fig:Errors_fe11} and \ref{Fig:Errors_fe17} show several examples of the comparison of the direct calculation (hereafter, DC) with the approximate method for \ion{Fe}{11} (Fig. \ref{Fig:Errors_fe11}) and \ion{Fe}{17} (Fig. \ref{Fig:Errors_fe17}). The DC values are denoted by squares and the approximate $\Upsilon_\kappa$ by the full lines. Left columns in these figures show typical worst cases for strong transitions. We see that the error of the approximation depends on $\kappa$ and $T$; it typically increases with decreasing $\kappa$. The worst cases are however still within 10\% even for the extreme value of $\kappa$\,=\,2 considered here. Typical cases are shown in the middle columns of Figs. \ref{Fig:Errors_fe11} and \ref{Fig:Errors_fe17}. Here, the approximations are valid to within a few per cent for all $\kappa$s. Finally, typical approximations for the weak transitions are shown in the right columns of Figs. \ref{Fig:Errors_fe11} and \ref{Fig:Errors_fe17}. We again find that the approximations are valid to within $\approx$10\% for all $\kappa$s. Figure \ref{Fig:Scatterplots} contains scatterplots of the relative error $\Upsilon_\kappa/\Upsilon_{\kappa,\mathrm{DC}}-1$ plotted for each $\kappa$ at the peak of the corresponding relative ion abundance. These scatterplots contain 447 transitions in \ion{Fe}{11} and 1050 transitions in \ion{Fe}{17} that we were able to unambiguously identify in both the CHIANTI database and the atomic data themselves. The plots in Fig. \ref{Fig:Scatterplots} confirm that the approximate $\Upsilon_\kappa$ do not depart from the directly calculated $\Upsilon_{\kappa,\mathrm{DC}}$ by more than 10\%.
Typically, the relative errors increase with decreasing $\kappa$; the smallest errors are found for the Maxwellian distribution. Strong transitions typically have higher accuracy than the weaker ones, in agreement with the results of \citet{DzifcakovaMason08}. Finally, we note that the approximation of $\Upsilon(\kappa,T)$ to within 10\% is considered satisfactory given the uncertainties in the atomic data themselves, which are typically of the same order of magnitude, and the uncertainties of the spline-fits of the Maxwellian $\Upsilon(T)$ contained in CHIANTI, which are typically $<5$\%. \subsection{Dielectronic Satellite Lines} \label{Sect:3.5} The rate coefficient for the dielectronic excitation from level $i$ to level $j$ and for an arbitrary electron distribution function $f(E)$ can be expressed as \citep{Seely87} \begin{equation} C^\mathrm{diel}=\left( \frac{2}{m_\mathrm{e} \Delta E_{ji}} \right)^{1/2}\frac{h^3g_j}{16\pi m_\mathrm{e} g_i}f(\Delta E_{ji})A^\mathrm{a}, \label{Eq:C_diel} \end{equation} where $g_j$ and $g_i$ are the statistical weights of the doubly excited state and the lower level, respectively, and $A^\mathrm{a}$ is the autoionization (Auger) rate. The transition occurs at the discrete energy $\Delta E_{ji}$, which corresponds to the energy difference between the states $j$ and $i$. For the Maxwellian distribution, this equation leads to the well-known expression \citep[e.g.,][Eq. (4.19) therein]{Phillips08} \begin{equation} C^\mathrm{diel}_\mathrm{Maxw} = \frac{h^3}{2(2\pi m_\mathrm{e} k_\mathrm{B}T)^{3/2}} \frac{g_j}{g_i} \mathrm{e}^{-\frac{\Delta E_{ji}}{k_\mathrm{B}T}}A^\mathrm{a} \,.
\label{Eq:C_maxw} \end{equation} For the $\kappa$-distribution, we have \begin{equation} C^\mathrm{diel}_\kappa = \frac{A_\kappa h^3}{2(2\pi m_\mathrm{e} k_\mathrm{B}T)^{3/2}} \frac{g_j}{g_i} \frac{A^\mathrm{a}}{\left (1+\frac{\Delta E_{ji}}{(\kappa-1.5)k_\mathrm{B}T}\right)^{\kappa+1}}\,, \label{Eq:C_diel_kappa} \end{equation} which leads to \begin{equation} C^\mathrm{diel}_\kappa = C^\mathrm{diel}_\mathrm{Maxw} \frac{A_\kappa \mathrm{e}^{\frac{\Delta E_{ji}}{k_\mathrm{B}T}}}{\left (1+\frac{\Delta E_{ji}}{(\kappa-1.5)k_\mathrm{B}T}\right)^{\kappa+1}}\,. \label{Eq:C_diel_kappa2} \end{equation} \section{The non-Maxwellian Continuum} \label{Sect:4} The continuum for the non-Maxwellian $\kappa$-distributions is treated here using the approach of \citet{Dudik12}. Contributions from the free-free and free-bound continua are considered. The two-photon continuum is not considered, as its emissivity for the $\kappa$-distributions is not known, and its contribution is usually weak for the Maxwellian distribution \citep{Young03,Phillips08}, especially at higher densities. Nevertheless, at least for the Maxwellian distribution and a limited wavelength range, the two-photon continuum may not be a negligible contribution to the total continuum. We plan to implement it in the future.
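The correction factor in Eq. (\ref{Eq:C_diel_kappa2}) is a simple function of $\kappa$ and $\Delta E_{ji}/k_\mathrm{B}T$ and can be sketched as follows. The fragment below is an illustrative sketch of ours, assuming the standard normalization $A_\kappa=\Gamma(\kappa+1)/\left[\Gamma(\kappa-1/2)(\kappa-3/2)^{3/2}\right]$ of the $\kappa$-distribution; the function names are not KAPPA routines:

```python
import numpy as np
from math import lgamma, exp, log

def kappa_norm(kappa):
    """A_kappa = Gamma(kappa+1) / [Gamma(kappa-1/2) (kappa-3/2)^{3/2}] (assumed form).
    lgamma is used to avoid overflow of Gamma for large kappa."""
    return exp(lgamma(kappa + 1.0) - lgamma(kappa - 0.5) - 1.5 * log(kappa - 1.5))

def diel_rate_kappa(c_maxw, kappa, y):
    """Eq. (C_diel_kappa2): scale a Maxwellian dielectronic rate to a kappa-distribution,
    with y = Delta E_ji / k_B T."""
    return c_maxw * kappa_norm(kappa) * np.exp(y) / (1.0 + y / (kappa - 1.5))**(kappa + 1.0)
```

For large $\kappa$ the factor tends to unity and the Maxwellian rate is recovered, while for low $\kappa$ and $\Delta E_{ji}\gg k_\mathrm{B}T$ the enhanced high-energy tail of the $\kappa$-distribution increases the dielectronic rate relative to the Maxwellian one.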
\subsection{The Free-Free Continuum} \label{Sect:4.1} The total emissivity of the free-free continuum arising due to electron-ion bremsstrahlung is given by \citep{Dudik11,Dudik12} \begin{eqnarray} \nonumber \varepsilon_\mathrm{ff}(&&\lambda,\kappa,T) = \frac{A_\kappa T^{1/2}}{\lambda^2} n_\mathrm{e} n_\mathrm{H} \times \\ && \times \sum_{X} K_X(\kappa,T)A_X \int\limits_{0}^{\infty} \frac{g_\mathrm{ff}(y,w)}{\left(1+\frac{y+w}{\kappa-3/2}\right)^{\kappa+1}} \mathrm{d}y \,, \label{Eq:ff} \end{eqnarray} where $A_X$ is the element abundance relative to hydrogen, $w = hc/\lambda k_\mathrm{B}T = {\cal E} / k_\mathrm{B}T$ is the scaled photon energy, $g_\mathrm{ff}$ is the free-free Gaunt factor, and $K_X(\kappa,T)$ is a function of $\kappa$ and $T$ through the dependence on the ionization balance $n_k/n_X$ (see Sect. \ref{Sect:3.1}) \begin{equation} K_X(\kappa,T) = \frac{1}{4 \pi} \frac{32\pi}{3} \frac{e^6}{m_\mathrm{e} c^2} \sqrt{\frac{2\pi k_\mathrm{B}}{3 m_\mathrm{e}}} \sum_{k} k^2 \frac{n_k}{n_X} \,, \label{Eq:ff_constant} \end{equation} where $k$ is the ionization degree. The units of $\varepsilon_\mathrm{ff}$ are ergs\,cm$^{-3}$s$^{-1}$sr$^{-1}$\AA$^{-1}$. The bremsstrahlung spectrum depends strongly on $\kappa$ mainly at short wavelengths \citep{Dudik12}, where the tail of the $\kappa$-distribution strongly enhances the bremsstrahlung emission. Near the wavelength where $\varepsilon_\mathrm{ff}$ peaks for the Maxwellian distribution, the free-free emission drops with $\kappa$. At larger wavelengths it is enhanced again (see Figs. 2 and 3 in \citet{Dudik12}).
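The shape of the $\kappa$-dependent integral in Eq. (\ref{Eq:ff}) can be explored numerically. The sketch below is purely illustrative and assumes a flat Gaunt factor $g_\mathrm{ff}$\,=\,1 (the package uses the tabulated Gaunt factors instead); with this assumption the integral has a closed form against which the quadrature can be checked:

```python
import numpy as np

def ff_kernel_integral(w, kappa, gff=lambda y: 1.0, ymax=200.0, num=200001):
    """Trapezoidal integral over y of g_ff(y, w) / (1 + (y + w)/(kappa - 3/2))^(kappa+1),
    cf. the integral in Eq. (ff). gff is an assumed (flat) Gaunt factor."""
    y = np.linspace(0.0, ymax, num)
    f = gff(y) / (1.0 + (y + w) / (kappa - 1.5))**(kappa + 1.0)
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(y)))
```

For $g_\mathrm{ff}$\,=\,1 the integral equals $[(\kappa-3/2)/\kappa]\,(1+w/(\kappa-3/2))^{-\kappa}$, which for $\kappa\to\infty$ tends to the Maxwellian $\mathrm{e}^{-w}$; the slower-than-exponential decay with $w$ at finite $\kappa$ is the short-wavelength enhancement discussed above.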
\subsection{Free-Bound Continuum} \label{Sect:4.2} The emissivity of the recombination processes resulting in $k$-times ionized ions of element $X$ with an electron on an excited level $j$ is, for $\kappa$-distributed incident electrons, given by \citep{Dudik12} \begin{eqnarray} \nonumber \varepsilon_\mathrm{fb}(&& \lambda,\kappa,T) = \frac{1}{4\pi} \sqrt{\frac{2}{\pi}} \frac{{\cal E}^5}{hc^3 \left(m_\mathrm{e} k_\mathrm{B}T\right)^{3/2}} n_\mathrm{e} n_\mathrm{H} \times \\ && \times \sum_{k,X}{ \frac{n_{k+1}}{n_X} A_X \frac{g_j}{g_0} \sigma_j^\mathrm{bf} A_\kappa \frac{1}{\left(1 +\frac{{\cal E}-I_j}{\left(\kappa -3/2\right) k_\mathrm{B}T}\right)^{\kappa+1}}}\,,\hspace{0.7cm} \label{Eq:fb} \end{eqnarray} where ${\cal E}$\,=\,$hc/\lambda$\,=\,$E+I_j$ is the photon energy, $I_j$ is the ionization potential from the level $j$ with statistical weight $g_j$, and $\sigma_j^\mathrm{bf}$ is the ionization cross-section from the level $j$. A conspicuous feature of the free-bound spectra for the $\kappa$-distributions is the greatly enhanced ionization edges \citep[see Fig. 5 in][]{Dudik12}. Generally, this increase comes from Eq. (\ref{Eq:fb}) through the increase of low-energy electrons in a $\kappa$-distribution with respect to the Maxwellian distribution at the same $T$. However, the details also depend on the ionization equilibrium together with $T$ and $\kappa$ \citep{Dudik12}. \section{The KAPPA Package} \label{Sect:5} The KAPPA package\footnote{http://kappa.asu.cas.cz} currently allows for calculation of the synthetic spectra for integer values of $\kappa$\,=\,2, 3, 4, 5, 7, 10, 15, 25, and 33, for which the ionization equilibria are tabulated. These values should cover the parameter space with sufficient density. The database and software of the KAPPA package are based on the IDL version of the freely available CHIANTI database and software\footnote{www.chiantidatabase.org} \citep{Dere97,Landi13}.
The routines and database of the KAPPA package are contained in a standalone folder. This folder cannot be placed within the CHIANTI installation itself, in order to prevent its automatic removal by CHIANTI updates. The path to the folder can be set by an IDL system variable in the \textit{idl\_startup.pro} file $defsysv,\;'!data\_pth',\;'path\; to\; package' $. The KAPPA folder contains the modified CHIANTI routines for calculation of spectra for the $\kappa$-distributions, with the ``data\_k'' subfolder having the same structure as CHIANTI's ``dbase'' subfolder. The modified CHIANTI routines follow the original CHIANTI routines as closely as possible. Their names end with an extra ``\textit{\_k}'' before the \textit{.pro} extension. The calling parameters of these routines are kept the same, except that the first parameter is always the value of $\kappa$. The subdirectories within the database contain data for ionization and recombination rates (Sect. \ref{Sect:5.1.2}) together with the tabulated $\Upsilon_{ij}(T,\kappa)$ and \rotatebox[origin=c]{180}{$\Upsilon$}$_{ji}(T,\kappa)$ files in the ASCII format. Previous versions of the modification, corresponding to CHIANTI v5.2 \citep{Dzifcakova06b}, contained the coefficients for the approximation of $\Omega$. The calculations for the $\kappa$-distributions were then approximately ten times longer than the Maxwellian calculations in CHIANTI. Therefore, we decided to pre-calculate $\Upsilon(\kappa,T)$ for a grid of temperatures and values of $\kappa$. These pre-calculated values of $\Upsilon(\kappa,T)$ are contained in files named according to the ion and the value of $\kappa$, with the extension \textit{.ups}, e.g., \textit{c\_5\_k2.ups} for \ion{C}{5} and $\kappa$\,=\,2. IDL savefiles containing the $\Upsilon_{ij}(T,\kappa)$ and \rotatebox[origin=c]{180}{$\Upsilon$}$_{ji}(T,\kappa)$ are also provided. At present, the KAPPA package fully corresponds to the atomic data contained in the CHIANTI version 7.1.
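The file-naming convention above is easy to reproduce programmatically. The small helper below is our own illustration (not a KAPPA routine) that builds the database file names from the element, ionization stage, and $\kappa$:

```python
def kappa_filename(element, ion_stage, kappa, ext="ups"):
    """Build KAPPA database file names, e.g. ("C", 5, 2) -> "c_5_k2.ups".
    The same pattern holds for the rate files, e.g. extension "tionizr"."""
    return f"{element.lower()}_{ion_stage}_k{kappa}.{ext}"

print(kappa_filename("C", 5, 2))                   # c_5_k2.ups
print(kappa_filename("C", 5, 25, ext="tionizr"))   # c_5_k25.tionizr
```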
Similarly, the routines provided in the KAPPA package are based on CHIANTI 7.1 routines, with the exception of routines for free-free continuum (see Sect. \ref{Sect:5.3.1}). \subsection{Ionization Equilibrium} \label{Sect:5.1} \subsubsection{Ionization Equilibrium Files} \label{Sect:5.1.1} The ionization equilibrium \textit{.ioneq} and similar files were originally provided by \citet{Dzifcakova13}. A minor software bug in the calculation of radiative recombination rates for the $\kappa$-distributions was found and corrected. This problem affected the ionization equilibria at log$(T/K$)\,$<$\,5 with the error being much smaller than the effect of $\kappa$-distributions on the ionization equilibrium. These \textit{.ioneq} files are produced in the same format as the original \textit{chianti.ioneq} file. Therefore, these can be read by the CHIANTI routine \textit{read\_ioneq.pro} directly. The names of these files are \textit{kappa\_02.ioneq} and similar, where the numbers give the integer value of $\kappa$. For more details on the \textit{.ioneq} file format, see \citet{Dzifcakova13}, Appendix A therein. \subsubsection{Ionization and Recombination Rates} \label{Sect:5.1.2} In addition to the ionization equilibria, total ionization and recombination rates are provided for each ion and a range of temperatures. Here, the total ionization rate is a sum of the direct collisional ionization rate and the autoionization rate. Similarly, the total recombination rate is given by the sum of the radiative recombination rate and the total dielectronic recombination rate \citep{Dzifcakova13}. These rates are stored in the respective database folder for each ion, e.g., the \textit{dbase/c/c\_5/c\_5\_k25.tionizr} is the total ionization rate file for \ion{C}{5} and $\kappa$\,=\,25. The file format is ASCII. The total recombination rate file has the same name except the \textit{.trecombr} extension. 
The routines \textit{read\_rate\_ioniz\_k.pro} and \textit{read\_rate\_recomb\_k.pro} are provided for reading these files. \begin{table*}[!ht] \begin{center} \caption{List of routines within the KAPPA package. \label{Table:1}} \begin{tabular}{ll} \tableline \tableline Routine name & Function \\ \tableline kappa.pro & interactive widget for calculation of synthetic spectra, based on ch\_ss.pro \\ ch\_synthetic\_k.pro & calculates line intensities as a function of $\kappa$, $n_\mathrm{e}$ and $T$ \\ descale\_diel\_k.pro & converts $\Upsilon_{ij}(T,\kappa)$ and \rotatebox[origin=c]{180}{$\Upsilon$}$_{ji}(T,\kappa)$ from the scaled domain \\ & for dielectronic satellite lines and performs the correction in Eq. (\ref{Eq:C_diel_kappa2}) \\ emiss\_calc\_k.pro & calculates $hc/\lambda$~$A_{ji} n(X_j^{+k})$ \\ freebound\_ion\_k.pro & calculates the free-bound continuum arising from a single ion \\ freebound\_k.pro & calculates the free-bound continuum \\ freefree\_k.pro & free-free continuum interpolated from pre-calculated data \\ freefree\_k\_integral.pro & calculates the free-free continuum directly \\ isothermal\_k.pro & calculates isothermal spectra as a function of $\lambda$ \\ make\_kappa\_spec\_k.pro & routine for calculating the synthetic spectra \\ plot\_populations\_k.pro & calculates and plots relative level populations \\ pop\_solver\_k.pro & calculates the relative level population \\ read\_ff\_k.pro & reads the pre-calculated free-free continuum as a function of $Z$ and $T$ \\ read\_rate\_ioniz\_k.pro & reads the total ionization rates \\ read\_rate\_recomb\_k.pro & reads the total recombination rates \\ ups\_kappa\_interp.pro & routine for interpolating the $\Upsilon_{ij}(T,\kappa)$ and \rotatebox[origin=c]{180}{$\Upsilon$}$_{ji}(T,\kappa)$ \\ \tableline \tableline \end{tabular} \end{center} \end{table*} \subsection{Tools for calculation of line spectra} \label{Sect:5.2} The KAPPA package provides several routines for calculation
of line intensities. These are listed in Table \ref{Table:1}. As already mentioned, these routines are based on CHIANTI routines, version 7.1. They can be used in the same manner as the CHIANTI routines, with the exception that the value of $\kappa$ is always the first parameter. The most important of these routines is \textit{pop\_solver\_k.pro}, which calculates the relative level populations based on the distribution-averaged collision strengths $\Upsilon_{ij}(T,\kappa)$ and \rotatebox[origin=c]{180}{$\Upsilon$}$_{ji}(T,\kappa)$ calculated using the method described in Sect. \ref{Sect:3.3}. Other routines for calculating line intensities (Table \ref{Table:1}) rely on this routine. Examples of synthetic spectra calculated for $\kappa$\,=\,2 and their comparison to the Maxwellian spectra at the same $T$ are given in Sect. \ref{Sect:6.1}. We note here that the method for calculation of the collisional electron excitation and de-excitation rates described in Sect. \ref{Sect:3.3} cannot be applied to the collisional excitation by protons due to the unavailability of the proton excitation cross sections. The proton excitation is typically negligible, but may be important for some transitions. In the synthesis of line spectra, the proton excitation rate for the $\kappa$-distributions is assumed to be the same as for the Maxwellian distribution at the same temperature. It is currently unknown whether this assumption is justified. Because of this, we advocate caution in using such lines for, e.g., diagnostics of $\kappa$ from observations. An interactive widget for calculating the synthetic spectra is provided in the \textit{kappa.pro} routine, based on CHIANTI's \textit{ch\_ss.pro}. The value of $\kappa$ is selected by the choice of the ionization equilibrium. Subsequently, the excitation and line intensities are calculated for the same value of $\kappa$. All other functionality of the \textit{ch\_ss.pro} routine is retained.
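To illustrate what \textit{pop\_solver\_k.pro} does, consider the simplest case of a two-level atom. The sketch below is a didactic simplification of ours; the package solves the full multi-level problem, including proton rates where available. In statistical equilibrium, collisional excitation balances radiative decay plus collisional de-excitation:

```python
def two_level_population(n_e, c_ex, c_dex, a_ji):
    """Relative populations of a two-level atom in statistical equilibrium:
    n_i * n_e * C_ij = n_j * (A_ji + n_e * C_ji), with n_i + n_j = 1.
    c_ex, c_dex: excitation/de-excitation rate coefficients (cm^3 s^-1);
    a_ji: radiative decay rate (s^-1)."""
    ratio = n_e * c_ex / (a_ji + n_e * c_dex)   # n_j / n_i
    n_i = 1.0 / (1.0 + ratio)
    return n_i, 1.0 - n_i
```

In the low-density (coronal) limit $n_\mathrm{e}C_{ji}\ll A_{ji}$, the upper-level population reduces to $n_j \approx n_\mathrm{e}C_{ij}/A_{ji}$, so the line emissivity scales as $n_\mathrm{e}^2$. The dependence on $\kappa$ enters through the rate coefficients, i.e., through $\Upsilon_{ij}(T,\kappa)$ and \rotatebox[origin=c]{180}{$\Upsilon$}$_{ji}(T,\kappa)$.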
\subsection{Tools for calculation of continuum} \label{Sect:5.3} \subsubsection{Free-free continuum} \label{Sect:5.3.1} The CHIANTI database relies on the approximations to the Maxwellian bremsstrahlung calculated by \citet{Itoh00} and \citet{Sutherland98} and incorporated in the \textit{freefree.pro} routine together with the \textit{itoh.pro} and \textit{sutherland.pro} routines. This approach cannot be followed in the modified CHIANTI, since no fitting formulae exist for the free-free continuum for $\kappa$-distributions. Instead, we provide two options to calculate the free-free continuum for $\kappa$-distributions: \begin{enumerate} \item Direct integration using the Eqs. (\ref{Eq:ff}) and (\ref{Eq:ff_constant}). This approach is implemented in the \textit{freefree\_k\_integral.pro} routine and requires the \textit{data\_k/continuum/gffew.dat} file containing the scaled $g_\mathrm{ff}(y,w)$ values provided by \citet{Sutherland98}. These $g_\mathrm{ff}$ values are then de-scaled and numerically integrated. Since the scaling depends on the ionization energy (and thus on the ionization stage $k$ and the proton number $Z$), it has to be carried out for each ion separately \citep{Dudik12}. Therefore, the direct integration using the \textit{freefree\_k\_integral.pro} is time-consuming and impractical. We note that, in practice, restricting the integration to elements with relative abundance of $A_Z$\,$\geqq$\,10$^{-6}$ introduces a relative error smaller than $10^{-4}$ and speeds up the calculations by a factor of $\approx$2. Note that the Maxwellian-integrated $g_\mathrm{ff}(y,w)$ are a part of the CHIANTI database. 
\item To overcome the long calculation times, the free-free continuum has been pre-calculated as a function of $Z$ for 101 logarithmically spaced temperatures spanning log$(T/$K)\,=\,$\left<4,9\right>$ with a step of $\Delta$log$(T/$K)\,=\,0.05, together with 29 logarithmically spaced points in $\lambda$\,=\,0.1\AA\,--3\,$\times$10$^{4}$\AA~with a step of $\Delta$log$(\lambda/$\AA)\,=\,0.2. These calculations are contained in the \textit{data\_k/continuum/ff\_kappa\_02.dat} file and analogous files for other values of $\kappa$, each file for a single value of $\kappa$. The files are in the ASCII format. The routine \textit{freefree\_k.pro} reads these files, folds and sums them over the abundances to produce a semi-final free-free continuum. The final free-free continuum is then calculated for the user-input ranges of $T$ and $\lambda$ within the ranges specified above. This is achieved first by linearly interpolating in log$(T/$K) and then by spline-interpolating in log$(\lambda/$\AA). In this way, an accuracy of a few per cent is achieved in the $\lambda$\,=\,1\AA\,--2\,$\times$10$^{4}$\AA~range, with a calculation time of a few seconds. We note that this method should not be used for calculation of free-free continua below 1\AA~and above 2\,$\times$10$^{4}$\AA, where the spline interpolation results in errors of several $\times$\,10\,\% or more. \end{enumerate} \subsubsection{Free-bound continuum} \label{Sect:5.3.2} Using Eq. (\ref{Eq:fb}), the free-bound continuum is straightforward to calculate. The \textit{freebound\_k.pro} and \textit{freebound\_ion\_k.pro} routines can be used in the same manner as CHIANTI's \textit{freebound.pro} and \textit{freebound\_ion.pro} routines. The only change is that these routines require a value of $\kappa$ as an extra input. Ionization equilibrium files (Sect. \ref{Sect:5.1}) are read together with the cross-sections. The speed of the calculation is the same as with the original CHIANTI.
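The interpolation step in item 2 above can be sketched as follows. This is a simplified stand-in of ours: it interpolates linearly in the log--log domain, whereas \textit{freefree\_k.pro} uses linear interpolation in log$(T/$K) and a spline in log$(\lambda/$\AA):

```python
import numpy as np

def loglog_interp(x_new, x_grid, f_grid):
    """Interpolate a tabulated positive function linearly in log-log space.
    This is exact for power laws, which is one reason the continuum grids
    are stored on logarithmically spaced points."""
    return 10.0**np.interp(np.log10(x_new), np.log10(x_grid), np.log10(f_grid))
```

Like the package routine, such a scheme should only be trusted inside the tabulated range; extrapolation beyond the grid end points is where large errors appear.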
\begin{figure*}[!t] \centering \includegraphics[width=8.6cm]{aia171_t590_n900_mxw.eps} \includegraphics[width=8.6cm]{aia171_t590_n900_k2.eps} \includegraphics[width=8.6cm]{aia193_t620_n900_mxw.eps} \includegraphics[width=8.6cm]{aia193_t620_n900_k2.eps} \caption{Example isothermal spectra at log$(T/K$)\,=\,5.9 near the peak of the AIA 171\AA~wavelength response (\textit{top}) and at log$(T/$K)\,=\,6.2 near the peak of the AIA 193\AA~filter (\textit{bottom}). The electron density assumed is log$(n_\mathrm{e}$/cm$^{-3}$)\,=\,9.0. \\ A color version of this image is available in the online journal.} \label{Fig:AIA_spectra} \end{figure*} \begin{figure*}[!t] \centering \includegraphics[width=8.6cm]{resp_aia94_k.eps} \includegraphics[width=8.6cm]{resp_aia131_k.eps} \includegraphics[width=8.6cm]{resp_aia171_k.eps} \includegraphics[width=8.6cm]{resp_aia193_k.eps} \includegraphics[width=8.6cm]{resp_aia211_k.eps} \includegraphics[width=8.6cm]{resp_aia335_k.eps} \caption{Responses of the AIA EUV filters for the $\kappa$-distributions. Individual colors and linestyles stand for different values of $\kappa$, as indicated. \\ A color version of this image is available in the online journal.} \label{Fig:AIA_responses} \end{figure*} \section{Synthetic Spectra and \textit{SDO}/AIA Responses} \label{Sect:6} In this section, we provide some examples of the calculated spectra that are of interest to the physics of the solar corona. Note that the behaviour of individual lines observed by the \textit{Hinode}/EIS spectrometer \citep{Culhane07} and the possible observational diagnostics with and without the effect of the ionization equilibrium are described elsewhere \citep{Dzifcakova10,Mackovjak13,Dzifcakova13,Dudik14b}, as is the application for DEM diagnostics \citep{Mackovjak14}. \subsection{Synthetic EUV Spectra} \label{Sect:6.1} An example of the synthetic isothermal spectra calculated using the \textit{isothermal\_k.pro} for the Maxwellian and $\kappa$\,=\,2 are shown in Fig. 
\ref{Fig:AIA_spectra}. These examples show synthetic line and continuum spectra within the AIA 171\AA~and 193\AA~channels. The spectra are calculated for the electron density of $n_\mathrm{e}$\,=\,10$^9$\,cm$^{-3}$ and temperatures of log$(T/K$)\,=\,5.9 for the AIA 171\AA~and 6.2 for the AIA 193\AA~channels, respectively. Note that the temperature is kept the same for both distributions shown. These temperatures correspond to the maximum of the relative ion abundance of \ion{Fe}{9} and \ion{Fe}{12} under the Maxwellian distribution, respectively (see Fig. \ref{Fig:Ioneq}). The line intensities for $\kappa$\,=\,2 are decreased by a factor of several compared to the Maxwellian distribution. This is mainly an effect of the ionization equilibrium, with the maximum of the relative ion abundance for $\kappa$\,=\,2 being shifted to higher log$(T/K$) (Fig. \ref{Fig:Ioneq}, \textit{bottom}). Note that the continuum intensities are several orders of magnitude smaller than the line intensities; therefore, the continuum is not visible on the linear scale in Fig. \ref{Fig:AIA_spectra}. From Fig. \ref{Fig:AIA_spectra} we see that at log$(T/K$)\,=\,5.9, the AIA 171\AA~channel is dominated by \ion{Fe}{9} independently of the value of $\kappa$. Contrary to that, the situation for the AIA 193\AA~channel and log$(T/K$)\,=\,6.2 is more complex. This filter is dominated by \ion{Fe}{12} transitions between energy levels 1--30 at 192.394\AA, 1--29 at 193.509\AA~and 1--27 at 195.119\AA~\citep[][Table B.4]{Dudik14b}. However, contributions from the 1--38 and 1--37 transitions in \ion{Fe}{11} at 188.216\AA~and 188.299\AA~are present as well. The relative contribution of these transitions to the total filter response to emission at log$(T/K$)\,=\,6.2 increases from 4.4\% for the Maxwellian distribution to 6.5\% for $\kappa$\,=\,2. This is because this temperature is closer to the ionization peak of \ion{Fe}{11} than of \ion{Fe}{12} for $\kappa$\,=\,2 (Fig. \ref{Fig:Ioneq}).
\subsection{AIA responses for the $\kappa$-distributions} \label{Sect:6.2} As an example of the usage of the synthetic line and continuum spectra, we calculated the responses of the Atmospheric Imaging Assembly \citep[AIA,][]{Boerner12,Lemen12} onboard the \textit{Solar Dynamics Observatory (SDO)} \citep{Pesnell12} for the $\kappa$-distributions. Note that even though the continuum intensities are weak compared to the line intensities (Sect. \ref{Sect:6.1}) at a particular wavelength, the continuum is a significant contributor to some of the AIA bands \citep{ODwyer10,DelZanna13b}. This is because the filter response to plasma emission is given by the wavelength integral of the filter and instrument transmissivity times the emitted spectrum \citep[e.g., Eq. (6) in][]{Dudik09}. The \textit{SDO}/AIA responses calculated for the $\kappa$-distributions and log($n_\mathrm{e}/$cm$^{-3}$)\,=\,9 are shown in Fig. \ref{Fig:AIA_responses}. The peaks of the responses are typically flatter and wider, and can be shifted to higher log$(T/$K) for low $\kappa$. This behaviour is expected, since it is given mainly by the ionization equilibrium (Sect. \ref{Sect:3.1}). It has been reported for the TRACE filter responses by \citet{Dudik09} and for the \textit{Hinode}/XRT responses by \citet{Dzifcakova12}, where the AIA responses were also calculated for an earlier set of atomic data corresponding to CHIANTI v5.2 \citep{Landi06}. The AIA responses calculated here represent a significant improvement over those of \citet{Dzifcakova12} due to advances in the atomic data for the AIA bands \citep{DelZanna13b}. We note that some of the secondary maxima, such as those at log$(T/$K)\,$\approx$\,5.4 for AIA 171\AA, 193\AA, 211\AA,~and 335\AA, disappear for low $\kappa$. This is again mostly because of the wider ionization peaks and their relative contributions to individual filter responses \citep{Dudik09}, which gradually smooth out these secondary maxima with decreasing $\kappa$.
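The response calculation referred to above reduces to a wavelength integral of the instrument sensitivity times the synthetic spectrum. A schematic version of ours (the actual AIA effective areas and synthetic spectra must be supplied; nothing here is a KAPPA or SolarSoft routine):

```python
import numpy as np

def filter_response(wvl, sensitivity, spectrum):
    """R(T) = trapezoidal integral over lambda of sensitivity(lambda) * spectrum(lambda, T).
    wvl, sensitivity, spectrum are arrays on a common wavelength grid."""
    f = sensitivity * spectrum
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(wvl)))
```

Evaluating this integral on spectra synthesized for a grid of temperatures, once per value of $\kappa$, yields response curves of the kind shown in Fig. \ref{Fig:AIA_responses}.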
We also note that the contributions from \ion{Fe}{10} and \ion{Fe}{14} to the AIA 94\AA~response \citep[][Fig. 3 therein]{DelZanna13b} form a single, smooth secondary peak of the response for $\kappa$\,$\lesssim$\,5. These AIA responses for the $\kappa$-distributions can be used to obtain the DEMs using the regularized DEM inversion developed by \citet{Hannah12} and \citet{Hannah13}, as well as for diagnostics of the distribution from a combination of imaging and spectroscopic observations (Dud\'ik et al. 2014, in preparation). \section{Summary} \label{Sect:7} We have developed tools for calculation of synthetic optically thin line and continuum spectra arising from collisionally dominated astrophysical plasmas characterized by a $\kappa$-distribution. These tools constitute the KAPPA package, which is based on the freely available CHIANTI database and software. At present, the KAPPA package can handle only the values of $\kappa$\,=\,2, 3, 4, 5, 7, 10, 15, 25, and 33, which should provide sufficient coverage for most spectroscopic purposes. Ionization and recombination rates are provided together with ionization equilibrium calculations. Approximations to the distribution-averaged collision strengths are provided. These are based on the reverse-engineered collision strengths obtained from the Maxwellian-averaged collision strengths available within CHIANTI. This is done for all transitions in all ions available within CHIANTI, version 7.1. We have tested the validity of this approximate method by comparison with the directly integrated collision strengths. For temperatures typical of the formation of individual ions, errors of typically less than 5\% were found. It was also found that the errors are \textit{always} less than 10\%. The errors are typically of the order of a few per cent for strong transitions, but the precision decreases for weaker transitions and low values of $\kappa$.
Considering the uncertainties in the atomic data calculations themselves, these errors are considered acceptable. Several routines for calculation of the synthetic line spectra and the free-free and free-bound continua are provided. These routines are based on the CHIANTI routines and can be used in the same manner, except that the first input parameter is always the value of $\kappa$. The calculation of the free-free continuum is based on interpolation from pre-calculated values; however, an option of direct integration of the free-free Gaunt factors is also provided. We aim to keep the database updated to reflect the newer releases of CHIANTI. \acknowledgements The authors thank G. Del Zanna, P. R. Young and H. E. Mason for useful discussions. The collision strengths $\Omega$ used to validate the approximate method in Sect. \ref{Sect:3.4} were provided by G. Del Zanna and are gratefully acknowledged. EDZ, PK, FF and AK acknowledge the support by the Grant Agency of the Czech Republic, Grant No. P209/12/1652. JD acknowledges support from the Royal Society via the Newton Fellowships Programme. The authors also acknowledge the support from the International Space Science Institute through its International Teams program. CHIANTI is a collaborative project involving the NRL (USA), RAL (UK), MSSL (UK), the Universities of Florence (Italy) and Cambridge (UK), and George Mason University (USA). It is a great spectroscopic database and software and the authors are very grateful for its existence and availability. \bibliographystyle{apj}
{ "redpajama_set_name": "RedPajamaArXiv" }
Q: How to convert a pandas Series of 1-D numpy arrays to a 2-D numpy array

a = np.zeros((100, 6), dtype=np.int8)
a_np_list = [arr for arr in a]
ser = pd.Series(a_np_list)

If I convert the series with ser.values, the result has shape (100,) rather than (100, 6). How can I convert it to a (100, 6) 2-D array?

A: Use:

a = np.array(ser.tolist())
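A minimal self-contained sketch of the accepted answer; `np.stack` over the object array is an equivalent alternative:

```python
import numpy as np
import pandas as pd

# Build a Series whose elements are 1-D arrays of length 6.
a = np.zeros((100, 6), dtype=np.int8)
ser = pd.Series(list(a))

# ser.values is a 1-D object array holding the row arrays:
print(ser.values.shape)               # (100,)

# Converting the list of rows back to an ndarray restores the 2-D shape:
restored = np.array(ser.tolist())
print(restored.shape)                 # (100, 6)

# np.stack over the Series values gives the same result:
stacked = np.stack(ser.values)
print(stacked.shape)                  # (100, 6)
```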
{ "redpajama_set_name": "RedPajamaStackExchange" }
import doFetch from './fetch';
import * as _ from 'lodash';

export const logonApi = {};
{ "redpajama_set_name": "RedPajamaGithub" }
using System;
using System.Collections.Generic;
using System.Text;

namespace CodeStyleChecks
{
    public class CodeStyleViolations
    {
        public string name { get; set; }

        public CodeStyleViolations()
        {
        }

        public bool Works()
        {
            var a = "a string";
            var b = true;
            if (b)
            {
                Console.WriteLine("Hello");
            }
            else
            {
                Console.WriteLine("Bye");
            }
            return b;
        }
    }
}
{ "redpajama_set_name": "RedPajamaGithub" }
Q: How to find the inventory of all images and packages available in Nexus?

Is there any API or report to query and find all available versions in Nexus for Docker, Maven, etc.?

A: You can use the Nexus REST API to query, search, and find items in Nexus. The interactive API documentation is served at a URL like: your_nexus_url/swagger-ui/
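As a sketch, Nexus Repository 3 exposes a component-search endpoint at `/service/rest/v1/search` (documented in the Swagger UI mentioned above); the host below is a placeholder, and results are paginated via a `continuationToken` field in the JSON response:

```python
# Sketch only: NEXUS_URL is a placeholder for your own instance.
NEXUS_URL = "https://nexus.example.com"

def search_url(repository: str, fmt: str) -> str:
    """Build a Nexus 3 component-search URL.

    The endpoint returns JSON; pass the returned continuationToken
    back as a query parameter to fetch the next page of results.
    """
    return (f"{NEXUS_URL}/service/rest/v1/search"
            f"?repository={repository}&format={fmt}")

print(search_url("docker-hosted", "docker"))
# e.g. fetch with: requests.get(search_url(...), auth=(user, password))
```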
{ "redpajama_set_name": "RedPajamaStackExchange" }
var io = ('undefined' === typeof module ? {} : module.exports); (function() { /** * socket.io * Copyright(c) 2011 LearnBoost <dev@learnboost.com> * MIT Licensed */ (function (exports, global) { /** * IO namespace. * * @namespace */ var io = exports; /** * Socket.IO version * * @api public */ io.version = '0.9.11'; /** * Protocol implemented. * * @api public */ io.protocol = 1; /** * Available transports, these will be populated with the available transports * * @api public */ io.transports = []; /** * Keep track of jsonp callbacks. * * @api private */ io.j = []; /** * Keep track of our io.Sockets * * @api private */ io.sockets = {}; /** * Manages connections to hosts. * * @param {String} uri * @Param {Boolean} force creation of new socket (defaults to false) * @api public */ io.connect = function (host, details) { var uri = io.util.parseUri(host) , uuri , socket; if (global && global.location) { uri.protocol = uri.protocol || global.location.protocol.slice(0, -1); uri.host = uri.host || (global.document ? global.document.domain : global.location.hostname); uri.port = uri.port || global.location.port; } uuri = io.util.uniqueUri(uri); var options = { host: uri.host , secure: 'https' == uri.protocol , port: uri.port || ('https' == uri.protocol ? 443 : 80) , query: uri.query || '' }; io.util.merge(options, details); if (options['force new connection'] || !io.sockets[uuri]) { socket = new io.Socket(options); } if (!options['force new connection'] && socket) { io.sockets[uuri] = socket; } socket = socket || io.sockets[uuri]; // if path is different from '' or / return socket.of(uri.path.length > 1 ? uri.path : ''); }; })('object' === typeof module ? module.exports : (this.io = {}), this); /** * socket.io * Copyright(c) 2011 LearnBoost <dev@learnboost.com> * MIT Licensed */ (function (exports, global) { /** * Utilities namespace. 
* * @namespace */ var util = exports.util = {}; /** * Parses an URI * * @author Steven Levithan <stevenlevithan.com> (MIT license) * @api public */ var re = /^(?:(?![^:@]+:[^:@\/]*@)([^:\/?#.]+):)?(?:\/\/)?((?:(([^:@]*)(?::([^:@]*))?)?@)?([^:\/?#]*)(?::(\d*))?)(((\/(?:[^?#](?![^?#\/]*\.[^?#\/.]+(?:[?#]|$)))*\/?)?([^?#\/]*))(?:\?([^#]*))?(?:#(.*))?)/; var parts = ['source', 'protocol', 'authority', 'userInfo', 'user', 'password', 'host', 'port', 'relative', 'path', 'directory', 'file', 'query', 'anchor']; util.parseUri = function (str) { var m = re.exec(str || '') , uri = {} , i = 14; while (i--) { uri[parts[i]] = m[i] || ''; } return uri; }; /** * Produces a unique url that identifies a Socket.IO connection. * * @param {Object} uri * @api public */ util.uniqueUri = function (uri) { var protocol = uri.protocol , host = uri.host , port = uri.port; if ('document' in global) { host = host || document.domain; port = port || (protocol == 'https' && document.location.protocol !== 'https:' ? 443 : document.location.port); } else { host = host || 'localhost'; if (!port && protocol == 'https') { port = 443; } } return (protocol || 'http') + '://' + host + ':' + (port || 80); }; /** * Mergest 2 query strings in to once unique query string * * @param {String} base * @param {String} addition * @api public */ util.query = function (base, addition) { var query = util.chunkQuery(base || '') , components = []; util.merge(query, util.chunkQuery(addition || '')); for (var part in query) { if (query.hasOwnProperty(part)) { components.push(part + '=' + query[part]); } } return components.length ? '?' + components.join('&') : ''; }; /** * Transforms a querystring in to an object * * @param {String} qs * @api public */ util.chunkQuery = function (qs) { var query = {} , params = qs.split('&') , i = 0 , l = params.length , kv; for (; i < l; ++i) { kv = params[i].split('='); if (kv[0]) { query[kv[0]] = kv[1]; } } return query; }; /** * Executes the given function when the page is loaded. 
* * io.util.load(function () { console.log('page loaded'); }); * * @param {Function} fn * @api public */ var pageLoaded = false; util.load = function (fn) { if ('document' in global && document.readyState === 'complete' || pageLoaded) { return fn(); } util.on(global, 'load', fn, false); }; /** * Adds an event. * * @api private */ util.on = function (element, event, fn, capture) { if (element.attachEvent) { element.attachEvent('on' + event, fn); } else if (element.addEventListener) { element.addEventListener(event, fn, capture); } }; /** * Generates the correct `XMLHttpRequest` for regular and cross domain requests. * * @param {Boolean} [xdomain] Create a request that can be used cross domain. * @returns {XMLHttpRequest|false} If we can create a XMLHttpRequest. * @api private */ util.request = function (xdomain) { if (xdomain && 'undefined' != typeof XDomainRequest && !util.ua.hasCORS) { return new XDomainRequest(); } if ('undefined' != typeof XMLHttpRequest && (!xdomain || util.ua.hasCORS)) { return new XMLHttpRequest(); } if (!xdomain) { try { return new window[(['Active'].concat('Object').join('X'))]('Microsoft.XMLHTTP'); } catch(e) { } } return null; }; /** * XHR based transport constructor. * * @constructor * @api public */ /** * Change the internal pageLoaded value. */ if ('undefined' != typeof window) { util.load(function () { pageLoaded = true; }); } /** * Defers a function to ensure a spinner is not displayed by the browser * * @param {Function} fn * @api public */ util.defer = function (fn) { if (!util.ua.webkit || 'undefined' != typeof importScripts) { return fn(); } util.load(function () { setTimeout(fn, 100); }); }; /** * Merges two objects. * * @api public */ util.merge = function merge (target, additional, deep, lastseen) { var seen = lastseen || [] , depth = typeof deep == 'undefined' ? 
2 : deep , prop; for (prop in additional) { if (additional.hasOwnProperty(prop) && util.indexOf(seen, prop) < 0) { if (typeof target[prop] !== 'object' || !depth) { target[prop] = additional[prop]; seen.push(additional[prop]); } else { util.merge(target[prop], additional[prop], depth - 1, seen); } } } return target; }; /** * Merges prototypes from objects * * @api public */ util.mixin = function (ctor, ctor2) { util.merge(ctor.prototype, ctor2.prototype); }; /** * Shortcut for prototypical and static inheritance. * * @api private */ util.inherit = function (ctor, ctor2) { function f() {}; f.prototype = ctor2.prototype; ctor.prototype = new f; }; /** * Checks if the given object is an Array. * * io.util.isArray([]); // true * io.util.isArray({}); // false * * @param Object obj * @api public */ util.isArray = Array.isArray || function (obj) { return Object.prototype.toString.call(obj) === '[object Array]'; }; /** * Intersects values of two arrays into a third * * @api public */ util.intersect = function (arr, arr2) { var ret = [] , longest = arr.length > arr2.length ? arr : arr2 , shortest = arr.length > arr2.length ? arr2 : arr; for (var i = 0, l = shortest.length; i < l; i++) { if (~util.indexOf(longest, shortest[i])) ret.push(shortest[i]); } return ret; }; /** * Array indexOf compatibility. * * @see bit.ly/a5Dxa2 * @api public */ util.indexOf = function (arr, o, i) { for (var j = arr.length, i = i < 0 ? i + j < 0 ? 0 : i + j : i || 0; i < j && arr[i] !== o; i++) {} return j <= i ? -1 : i; }; /** * Converts enumerables to array. * * @api public */ util.toArray = function (enu) { var arr = []; for (var i = 0, l = enu.length; i < l; i++) arr.push(enu[i]); return arr; }; /** * UA / engines detection namespace. * * @namespace */ util.ua = {}; /** * Whether the UA supports CORS for XHR. 
* * @api public */ util.ua.hasCORS = 'undefined' != typeof XMLHttpRequest && (function () { try { var a = new XMLHttpRequest(); } catch (e) { return false; } return a.withCredentials != undefined; })(); /** * Detect webkit. * * @api public */ util.ua.webkit = 'undefined' != typeof navigator && /webkit/i.test(navigator.userAgent); /** * Detect iPad/iPhone/iPod. * * @api public */ util.ua.iDevice = 'undefined' != typeof navigator && /iPad|iPhone|iPod/i.test(navigator.userAgent); })('undefined' != typeof io ? io : module.exports, this); /** * socket.io * Copyright(c) 2011 LearnBoost <dev@learnboost.com> * MIT Licensed */ (function (exports, io) { /** * Expose constructor. */ exports.EventEmitter = EventEmitter; /** * Event emitter constructor. * * @api public. */ function EventEmitter () {}; /** * Adds a listener * * @api public */ EventEmitter.prototype.on = function (name, fn) { if (!this.$events) { this.$events = {}; } if (!this.$events[name]) { this.$events[name] = fn; } else if (io.util.isArray(this.$events[name])) { this.$events[name].push(fn); } else { this.$events[name] = [this.$events[name], fn]; } return this; }; EventEmitter.prototype.addListener = EventEmitter.prototype.on; /** * Adds a volatile listener. * * @api public */ EventEmitter.prototype.once = function (name, fn) { var self = this; function on () { self.removeListener(name, on); fn.apply(this, arguments); }; on.listener = fn; this.on(name, on); return this; }; /** * Removes a listener. 
* * @api public */ EventEmitter.prototype.removeListener = function (name, fn) { if (this.$events && this.$events[name]) { var list = this.$events[name]; if (io.util.isArray(list)) { var pos = -1; for (var i = 0, l = list.length; i < l; i++) { if (list[i] === fn || (list[i].listener && list[i].listener === fn)) { pos = i; break; } } if (pos < 0) { return this; } list.splice(pos, 1); if (!list.length) { delete this.$events[name]; } } else if (list === fn || (list.listener && list.listener === fn)) { delete this.$events[name]; } } return this; }; /** * Removes all listeners for an event. * * @api public */ EventEmitter.prototype.removeAllListeners = function (name) { if (name === undefined) { this.$events = {}; return this; } if (this.$events && this.$events[name]) { this.$events[name] = null; } return this; }; /** * Gets all listeners for a certain event. * * @api publci */ EventEmitter.prototype.listeners = function (name) { if (!this.$events) { this.$events = {}; } if (!this.$events[name]) { this.$events[name] = []; } if (!io.util.isArray(this.$events[name])) { this.$events[name] = [this.$events[name]]; } return this.$events[name]; }; /** * Emits an event. * * @api public */ EventEmitter.prototype.emit = function (name) { if (!this.$events) { return false; } var handler = this.$events[name]; if (!handler) { return false; } var args = Array.prototype.slice.call(arguments, 1); if ('function' == typeof handler) { handler.apply(this, args); } else if (io.util.isArray(handler)) { var listeners = handler.slice(); for (var i = 0, l = listeners.length; i < l; i++) { listeners[i].apply(this, args); } } else { return false; } return true; }; })( 'undefined' != typeof io ? io : module.exports , 'undefined' != typeof io ? io : module.parent.exports ); /** * socket.io * Copyright(c) 2011 LearnBoost <dev@learnboost.com> * MIT Licensed */ /** * Based on JSON2 (http://www.JSON.org/js.html). 
*/ (function (exports, nativeJSON) { "use strict"; // use native JSON if it's available if (nativeJSON && nativeJSON.parse){ return exports.JSON = { parse: nativeJSON.parse , stringify: nativeJSON.stringify }; } var JSON = exports.JSON = {}; function f(n) { // Format integers to have at least two digits. return n < 10 ? '0' + n : n; } function date(d, key) { return isFinite(d.valueOf()) ? d.getUTCFullYear() + '-' + f(d.getUTCMonth() + 1) + '-' + f(d.getUTCDate()) + 'T' + f(d.getUTCHours()) + ':' + f(d.getUTCMinutes()) + ':' + f(d.getUTCSeconds()) + 'Z' : null; }; var cx = /[\u0000\u00ad\u0600-\u0604\u070f\u17b4\u17b5\u200c-\u200f\u2028-\u202f\u2060-\u206f\ufeff\ufff0-\uffff]/g, escapable = /[\\\"\x00-\x1f\x7f-\x9f\u00ad\u0600-\u0604\u070f\u17b4\u17b5\u200c-\u200f\u2028-\u202f\u2060-\u206f\ufeff\ufff0-\uffff]/g, gap, indent, meta = { // table of character substitutions '\b': '\\b', '\t': '\\t', '\n': '\\n', '\f': '\\f', '\r': '\\r', '"' : '\\"', '\\': '\\\\' }, rep; function quote(string) { // If the string contains no control characters, no quote characters, and no // backslash characters, then we can safely slap some quotes around it. // Otherwise we must also replace the offending characters with safe escape // sequences. escapable.lastIndex = 0; return escapable.test(string) ? '"' + string.replace(escapable, function (a) { var c = meta[a]; return typeof c === 'string' ? c : '\\u' + ('0000' + a.charCodeAt(0).toString(16)).slice(-4); }) + '"' : '"' + string + '"'; } function str(key, holder) { // Produce a string from holder[key]. var i, // The loop counter. k, // The member key. v, // The member value. length, mind = gap, partial, value = holder[key]; // If the value has a toJSON method, call it to obtain a replacement value. if (value instanceof Date) { value = date(key); } // If we were called with a replacer function, then call the replacer to // obtain a replacement value. 
if (typeof rep === 'function') { value = rep.call(holder, key, value); } // What happens next depends on the value's type. switch (typeof value) { case 'string': return quote(value); case 'number': // JSON numbers must be finite. Encode non-finite numbers as null. return isFinite(value) ? String(value) : 'null'; case 'boolean': case 'null': // If the value is a boolean or null, convert it to a string. Note: // typeof null does not produce 'null'. The case is included here in // the remote chance that this gets fixed someday. return String(value); // If the type is 'object', we might be dealing with an object or an array or // null. case 'object': // Due to a specification blunder in ECMAScript, typeof null is 'object', // so watch out for that case. if (!value) { return 'null'; } // Make an array to hold the partial results of stringifying this object value. gap += indent; partial = []; // Is the value an array? if (Object.prototype.toString.apply(value) === '[object Array]') { // The value is an array. Stringify every element. Use null as a placeholder // for non-JSON values. length = value.length; for (i = 0; i < length; i += 1) { partial[i] = str(i, value) || 'null'; } // Join all of the elements together, separated with commas, and wrap them in // brackets. v = partial.length === 0 ? '[]' : gap ? '[\n' + gap + partial.join(',\n' + gap) + '\n' + mind + ']' : '[' + partial.join(',') + ']'; gap = mind; return v; } // If the replacer is an array, use it to select the members to be stringified. if (rep && typeof rep === 'object') { length = rep.length; for (i = 0; i < length; i += 1) { if (typeof rep[i] === 'string') { k = rep[i]; v = str(k, value); if (v) { partial.push(quote(k) + (gap ? ': ' : ':') + v); } } } } else { // Otherwise, iterate through all of the keys in the object. for (k in value) { if (Object.prototype.hasOwnProperty.call(value, k)) { v = str(k, value); if (v) { partial.push(quote(k) + (gap ? 
': ' : ':') + v); } } } } // Join all of the member texts together, separated with commas, // and wrap them in braces. v = partial.length === 0 ? '{}' : gap ? '{\n' + gap + partial.join(',\n' + gap) + '\n' + mind + '}' : '{' + partial.join(',') + '}'; gap = mind; return v; } } // If the JSON object does not yet have a stringify method, give it one. JSON.stringify = function (value, replacer, space) { // The stringify method takes a value and an optional replacer, and an optional // space parameter, and returns a JSON text. The replacer can be a function // that can replace values, or an array of strings that will select the keys. // A default replacer method can be provided. Use of the space parameter can // produce text that is more easily readable. var i; gap = ''; indent = ''; // If the space parameter is a number, make an indent string containing that // many spaces. if (typeof space === 'number') { for (i = 0; i < space; i += 1) { indent += ' '; } // If the space parameter is a string, it will be used as the indent string. } else if (typeof space === 'string') { indent = space; } // If there is a replacer, it must be a function or an array. // Otherwise, throw an error. rep = replacer; if (replacer && typeof replacer !== 'function' && (typeof replacer !== 'object' || typeof replacer.length !== 'number')) { throw new Error('JSON.stringify'); } // Make a fake root object containing our value under the key of ''. // Return the result of stringifying the value. return str('', {'': value}); }; // If the JSON object does not yet have a parse method, give it one. JSON.parse = function (text, reviver) { // The parse method takes a text and an optional reviver function, and returns // a JavaScript value if the text is a valid JSON text. var j; function walk(holder, key) { // The walk method is used to recursively walk the resulting structure so // that modifications can be made. 
var k, v, value = holder[key]; if (value && typeof value === 'object') { for (k in value) { if (Object.prototype.hasOwnProperty.call(value, k)) { v = walk(value, k); if (v !== undefined) { value[k] = v; } else { delete value[k]; } } } } return reviver.call(holder, key, value); } // Parsing happens in four stages. In the first stage, we replace certain // Unicode characters with escape sequences. JavaScript handles many characters // incorrectly, either silently deleting them, or treating them as line endings. text = String(text); cx.lastIndex = 0; if (cx.test(text)) { text = text.replace(cx, function (a) { return '\\u' + ('0000' + a.charCodeAt(0).toString(16)).slice(-4); }); } // In the second stage, we run the text against regular expressions that look // for non-JSON patterns. We are especially concerned with '()' and 'new' // because they can cause invocation, and '=' because it can cause mutation. // But just to be safe, we want to reject all unexpected forms. // We split the second stage into 4 regexp operations in order to work around // crippling inefficiencies in IE's and Safari's regexp engines. First we // replace the JSON backslash pairs with '@' (a non-JSON character). Second, we // replace all simple value tokens with ']' characters. Third, we delete all // open brackets that follow a colon or comma or that begin the text. Finally, // we look to see that the remaining characters are only whitespace or ']' or // ',' or ':' or '{' or '}'. If that is so, then the text is safe for eval. if (/^[\],:{}\s]*$/ .test(text.replace(/\\(?:["\\\/bfnrt]|u[0-9a-fA-F]{4})/g, '@') .replace(/"[^"\\\n\r]*"|true|false|null|-?\d+(?:\.\d*)?(?:[eE][+\-]?\d+)?/g, ']') .replace(/(?:^|:|,)(?:\s*\[)+/g, ''))) { // In the third stage we use the eval function to compile the text into a // JavaScript structure. The '{' operator is subject to a syntactic ambiguity // in JavaScript: it can begin a block or an object literal. We wrap the text // in parens to eliminate the ambiguity. 
j = eval('(' + text + ')'); // In the optional fourth stage, we recursively walk the new structure, passing // each name/value pair to a reviver function for possible transformation. return typeof reviver === 'function' ? walk({'': j}, '') : j; } // If the text is not JSON parseable, then a SyntaxError is thrown. throw new SyntaxError('JSON.parse'); }; })( 'undefined' != typeof io ? io : module.exports , typeof JSON !== 'undefined' ? JSON : undefined ); /** * socket.io * Copyright(c) 2011 LearnBoost <dev@learnboost.com> * MIT Licensed */ (function (exports, io) { /** * Parser namespace. * * @namespace */ var parser = exports.parser = {}; /** * Packet types. */ var packets = parser.packets = [ 'disconnect' , 'connect' , 'heartbeat' , 'message' , 'json' , 'event' , 'ack' , 'error' , 'noop' ]; /** * Errors reasons. */ var reasons = parser.reasons = [ 'transport not supported' , 'client not handshaken' , 'unauthorized' ]; /** * Errors advice. */ var advice = parser.advice = [ 'reconnect' ]; /** * Shortcuts. */ var JSON = io.JSON , indexOf = io.util.indexOf; /** * Encodes a packet. * * @api private */ parser.encodePacket = function (packet) { var type = indexOf(packets, packet.type) , id = packet.id || '' , endpoint = packet.endpoint || '' , ack = packet.ack , data = null; switch (packet.type) { case 'error': var reason = packet.reason ? indexOf(reasons, packet.reason) : '' , adv = packet.advice ? indexOf(advice, packet.advice) : ''; if (reason !== '' || adv !== '') data = reason + (adv !== '' ? ('+' + adv) : ''); break; case 'message': if (packet.data !== '') data = packet.data; break; case 'event': var ev = { name: packet.name }; if (packet.args && packet.args.length) { ev.args = packet.args; } data = JSON.stringify(ev); break; case 'json': data = JSON.stringify(packet.data); break; case 'connect': if (packet.qs) data = packet.qs; break; case 'ack': data = packet.ackId + (packet.args && packet.args.length ? 
'+' + JSON.stringify(packet.args) : ''); break; } // construct packet with required fragments var encoded = [ type , id + (ack == 'data' ? '+' : '') , endpoint ]; // data fragment is optional if (data !== null && data !== undefined) encoded.push(data); return encoded.join(':'); }; /** * Encodes multiple messages (payload). * * @param {Array} messages * @api private */ parser.encodePayload = function (packets) { var decoded = ''; if (packets.length == 1) return packets[0]; for (var i = 0, l = packets.length; i < l; i++) { var packet = packets[i]; decoded += '\ufffd' + packet.length + '\ufffd' + packets[i]; } return decoded; }; /** * Decodes a packet * * @api private */ var regexp = /([^:]+):([0-9]+)?(\+)?:([^:]+)?:?([\s\S]*)?/; parser.decodePacket = function (data) { var pieces = data.match(regexp); if (!pieces) return {}; var id = pieces[2] || '' , data = pieces[5] || '' , packet = { type: packets[pieces[1]] , endpoint: pieces[4] || '' }; // whether we need to acknowledge the packet if (id) { packet.id = id; if (pieces[3]) packet.ack = 'data'; else packet.ack = true; } // handle different packet types switch (packet.type) { case 'error': var pieces = data.split('+'); packet.reason = reasons[pieces[0]] || ''; packet.advice = advice[pieces[1]] || ''; break; case 'message': packet.data = data || ''; break; case 'event': try { var opts = JSON.parse(data); packet.name = opts.name; packet.args = opts.args; } catch (e) { } packet.args = packet.args || []; break; case 'json': try { packet.data = JSON.parse(data); } catch (e) { } break; case 'connect': packet.qs = data || ''; break; case 'ack': var pieces = data.match(/^([0-9]+)(\+)?(.*)/); if (pieces) { packet.ackId = pieces[1]; packet.args = []; if (pieces[3]) { try { packet.args = pieces[3] ? JSON.parse(pieces[3]) : []; } catch (e) { } } } break; case 'disconnect': case 'heartbeat': break; }; return packet; }; /** * Decodes data payload. 
Detects multiple messages * * @return {Array} messages * @api public */ parser.decodePayload = function (data) { // IE doesn't like data[i] for unicode chars, charAt works fine if (data.charAt(0) == '\ufffd') { var ret = []; for (var i = 1, length = ''; i < data.length; i++) { if (data.charAt(i) == '\ufffd') { ret.push(parser.decodePacket(data.substr(i + 1).substr(0, length))); i += Number(length) + 1; length = ''; } else { length += data.charAt(i); } } return ret; } else { return [parser.decodePacket(data)]; } }; })( 'undefined' != typeof io ? io : module.exports , 'undefined' != typeof io ? io : module.parent.exports ); /** * socket.io * Copyright(c) 2011 LearnBoost <dev@learnboost.com> * MIT Licensed */ (function (exports, io) { /** * Expose constructor. */ exports.Transport = Transport; /** * This is the transport template for all supported transport methods. * * @constructor * @api public */ function Transport (socket, sessid) { this.socket = socket; this.sessid = sessid; }; /** * Apply EventEmitter mixin. */ io.util.mixin(Transport, io.EventEmitter); /** * Indicates whether heartbeats is enabled for this transport * * @api private */ Transport.prototype.heartbeats = function () { return true; }; /** * Handles the response from the server. When a new response is received * it will automatically update the timeout, decode the message and * forwards the response to the onMessage function for further processing. * * @param {String} data Response from the server. * @api private */ Transport.prototype.onData = function (data) { this.clearCloseTimeout(); // If the connection in currently open (or in a reopening state) reset the close // timeout since we have just received data. This check is necessary so // that we don't reset the timeout on an explicitly disconnected connection. 
if (this.socket.connected || this.socket.connecting || this.socket.reconnecting) { this.setCloseTimeout(); } if (data !== '') { // todo: we should only do decodePayload for xhr transports var msgs = io.parser.decodePayload(data); if (msgs && msgs.length) { for (var i = 0, l = msgs.length; i < l; i++) { this.onPacket(msgs[i]); } } } return this; }; /** * Handles packets. * * @api private */ Transport.prototype.onPacket = function (packet) { this.socket.setHeartbeatTimeout(); if (packet.type == 'heartbeat') { return this.onHeartbeat(); } if (packet.type == 'connect' && packet.endpoint == '') { this.onConnect(); } if (packet.type == 'error' && packet.advice == 'reconnect') { this.isOpen = false; } this.socket.onPacket(packet); return this; }; /** * Sets close timeout * * @api private */ Transport.prototype.setCloseTimeout = function () { if (!this.closeTimeout) { var self = this; this.closeTimeout = setTimeout(function () { self.onDisconnect(); }, this.socket.closeTimeout); } }; /** * Called when transport disconnects. * * @api private */ Transport.prototype.onDisconnect = function () { if (this.isOpen) this.close(); this.clearTimeouts(); this.socket.onDisconnect(); return this; }; /** * Called when transport connects * * @api private */ Transport.prototype.onConnect = function () { this.socket.onConnect(); return this; }; /** * Clears close timeout * * @api private */ Transport.prototype.clearCloseTimeout = function () { if (this.closeTimeout) { clearTimeout(this.closeTimeout); this.closeTimeout = null; } }; /** * Clear timeouts * * @api private */ Transport.prototype.clearTimeouts = function () { this.clearCloseTimeout(); if (this.reopenTimeout) { clearTimeout(this.reopenTimeout); } }; /** * Sends a packet * * @param {Object} packet object. * @api private */ Transport.prototype.packet = function (packet) { this.send(io.parser.encodePacket(packet)); }; /** * Send the received heartbeat message back to server. So the server * knows we are still connected. 
* * @param {String} heartbeat Heartbeat response from the server. * @api private */ Transport.prototype.onHeartbeat = function (heartbeat) { this.packet({ type: 'heartbeat' }); }; /** * Called when the transport opens. * * @api private */ Transport.prototype.onOpen = function () { this.isOpen = true; this.clearCloseTimeout(); this.socket.onOpen(); }; /** * Notifies the base when the connection with the Socket.IO server * has been disconnected. * * @api private */ Transport.prototype.onClose = function () { var self = this; /* FIXME: reopen delay causing a infinit loop this.reopenTimeout = setTimeout(function () { self.open(); }, this.socket.options['reopen delay']);*/ this.isOpen = false; this.socket.onClose(); this.onDisconnect(); }; /** * Generates a connection url based on the Socket.IO URL Protocol. * See <https://github.com/learnboost/socket.io-node/> for more details. * * @returns {String} Connection url * @api private */ Transport.prototype.prepareUrl = function () { var options = this.socket.options; return this.scheme() + '://' + options.host + ':' + options.port + '/' + options.resource + '/' + io.protocol + '/' + this.name + '/' + this.sessid; }; /** * Checks if the transport is ready to start a connection. * * @param {Socket} socket The socket instance that needs a transport * @param {Function} fn The callback * @api private */ Transport.prototype.ready = function (socket, fn) { fn.call(this); }; })( 'undefined' != typeof io ? io : module.exports , 'undefined' != typeof io ? io : module.parent.exports ); /** * socket.io * Copyright(c) 2011 LearnBoost <dev@learnboost.com> * MIT Licensed */ (function (exports, io, global) { /** * Expose constructor. */ exports.Socket = Socket; /** * Create a new `Socket.IO client` which can establish a persistent * connection with a Socket.IO enabled server. * * @api public */ function Socket (options) { this.options = { port: 80 , secure: false , document: 'document' in global ? 
document : false , resource: 'socket.io' , transports: io.transports , 'connect timeout': 10000 , 'try multiple transports': true , 'reconnect': true , 'reconnection delay': 500 , 'reconnection limit': Infinity , 'reopen delay': 3000 , 'max reconnection attempts': 10 , 'sync disconnect on unload': false , 'auto connect': true , 'flash policy port': 10843 , 'manualFlush': false }; io.util.merge(this.options, options); this.connected = false; this.open = false; this.connecting = false; this.reconnecting = false; this.namespaces = {}; this.buffer = []; this.doBuffer = false; if (this.options['sync disconnect on unload'] && (!this.isXDomain() || io.util.ua.hasCORS)) { var self = this; io.util.on(global, 'beforeunload', function () { self.disconnectSync(); }, false); } if (this.options['auto connect']) { this.connect(); } }; /** * Apply EventEmitter mixin. */ io.util.mixin(Socket, io.EventEmitter); /** * Returns a namespace listener/emitter for this socket * * @api public */ Socket.prototype.of = function (name) { if (!this.namespaces[name]) { this.namespaces[name] = new io.SocketNamespace(this, name); if (name !== '') { this.namespaces[name].packet({ type: 'connect' }); } } return this.namespaces[name]; }; /** * Emits the given event to the Socket and all namespaces * * @api private */ Socket.prototype.publish = function () { this.emit.apply(this, arguments); var nsp; for (var i in this.namespaces) { if (this.namespaces.hasOwnProperty(i)) { nsp = this.of(i); nsp.$emit.apply(nsp, arguments); } } }; /** * Performs the handshake * * @api private */ function empty () { }; Socket.prototype.handshake = function (fn) { var self = this , options = this.options; function complete (data) { if (data instanceof Error) { self.connecting = false; self.onError(data.message); } else { fn.apply(null, data.split(':')); } }; var url = [ 'http' + (options.secure ? 
's' : '') + ':/' , options.host + ':' + options.port , options.resource , io.protocol , io.util.query(this.options.query, 't=' + +new Date) ].join('/'); if (this.isXDomain() && !io.util.ua.hasCORS) { var insertAt = document.getElementsByTagName('script')[0] , script = document.createElement('script'); script.src = url + '&jsonp=' + io.j.length; insertAt.parentNode.insertBefore(script, insertAt); io.j.push(function (data) { complete(data); script.parentNode.removeChild(script); }); } else { var xhr = io.util.request(); xhr.open('GET', url, true); if (this.isXDomain()) { xhr.withCredentials = true; } xhr.onreadystatechange = function () { if (xhr.readyState == 4) { xhr.onreadystatechange = empty; if (xhr.status == 200) { complete(xhr.responseText); } else if (xhr.status == 403) { self.onError(xhr.responseText); } else { self.connecting = false; !self.reconnecting && self.onError(xhr.responseText); } } }; xhr.send(null); } }; /** * Find an available transport based on the options supplied in the constructor. * * @api private */ Socket.prototype.getTransport = function (override) { var transports = override || this.transports, match; for (var i = 0, transport; transport = transports[i]; i++) { if (io.Transport[transport] && io.Transport[transport].check(this) && (!this.isXDomain() || io.Transport[transport].xdomainCheck(this))) { return new io.Transport[transport](this, this.sessionid); } } return null; }; /** * Connects to the server. * * @param {Function} [fn] Callback. * @returns {io.Socket} * @api public */ Socket.prototype.connect = function (fn) { if (this.connecting) { return this; } var self = this; self.connecting = true; this.handshake(function (sid, heartbeat, close, transports) { self.sessionid = sid; self.closeTimeout = close * 1000; self.heartbeatTimeout = heartbeat * 1000; if(!self.transports) self.transports = self.origTransports = (transports ? 
io.util.intersect(
              transports.split(',')
            , self.options.transports
          )
        : self.options.transports);

    self.setHeartbeatTimeout();

    function connect (transports){
      if (self.transport) self.transport.clearTimeouts();

      self.transport = self.getTransport(transports);
      if (!self.transport) return self.publish('connect_failed');

      // once the transport is ready
      self.transport.ready(self, function () {
        self.connecting = true;
        self.publish('connecting', self.transport.name);
        self.transport.open();

        if (self.options['connect timeout']) {
          self.connectTimeoutTimer = setTimeout(function () {
            if (!self.connected) {
              self.connecting = false;

              if (self.options['try multiple transports']) {
                var remaining = self.transports;

                while (remaining.length > 0 && remaining.splice(0,1)[0] != self.transport.name) {}

                if (remaining.length){
                  connect(remaining);
                } else {
                  self.publish('connect_failed');
                }
              }
            }
          }, self.options['connect timeout']);
        }
      });
    }

    connect(self.transports);

    self.once('connect', function (){
      clearTimeout(self.connectTimeoutTimer);

      fn && typeof fn == 'function' && fn();
    });
  });

  return this;
};

/**
 * Clears and sets a new heartbeat timeout using the value given by the
 * server during the handshake.
 *
 * @api private
 */

Socket.prototype.setHeartbeatTimeout = function () {
  clearTimeout(this.heartbeatTimeoutTimer);
  if (this.transport && !this.transport.heartbeats()) return;

  var self = this;
  this.heartbeatTimeoutTimer = setTimeout(function () {
    self.transport.onClose();
  }, this.heartbeatTimeout);
};

/**
 * Sends a message.
 *
 * @param {Object} data packet.
 * @returns {io.Socket}
 * @api public
 */

Socket.prototype.packet = function (data) {
  if (this.connected && !this.doBuffer) {
    this.transport.packet(data);
  } else {
    this.buffer.push(data);
  }

  return this;
};

/**
 * Sets buffer state
 *
 * @api private
 */

Socket.prototype.setBuffer = function (v) {
  this.doBuffer = v;

  if (!v && this.connected && this.buffer.length) {
    if (!this.options['manualFlush']) {
      this.flushBuffer();
    }
  }
};

/**
 * Flushes the buffer data over the wire.
 * To be invoked manually when 'manualFlush' is set to true.
 *
 * @api public
 */

Socket.prototype.flushBuffer = function() {
  this.transport.payload(this.buffer);
  this.buffer = [];
};

/**
 * Disconnects the established connection.
 *
 * @returns {io.Socket}
 * @api public
 */

Socket.prototype.disconnect = function () {
  if (this.connected || this.connecting) {
    if (this.open) {
      this.of('').packet({ type: 'disconnect' });
    }

    // handle disconnection immediately
    this.onDisconnect('booted');
  }

  return this;
};

/**
 * Disconnects the socket with a sync XHR.
 *
 * @api private
 */

Socket.prototype.disconnectSync = function () {
  // ensure disconnection
  var xhr = io.util.request();
  var uri = [
      'http' + (this.options.secure ? 's' : '') + ':/'
    , this.options.host + ':' + this.options.port
    , this.options.resource
    , io.protocol
    , ''
    , this.sessionid
  ].join('/') + '/?disconnect=1';

  xhr.open('GET', uri, false);
  xhr.send(null);

  // handle disconnection immediately
  this.onDisconnect('booted');
};

/**
 * Check if we need to use cross domain enabled transports. Cross domain would
 * be a different port or different domain name.
 *
 * @returns {Boolean}
 * @api private
 */

Socket.prototype.isXDomain = function () {
  var port = global.location.port ||
    ('https:' == global.location.protocol ? 443 : 80);

  return this.options.host !== global.location.hostname ||
    this.options.port != port;
};

/**
 * Called upon handshake.
 *
 * @api private
 */

Socket.prototype.onConnect = function () {
  if (!this.connected) {
    this.connected = true;
    this.connecting = false;

    if (!this.doBuffer) {
      // make sure to flush the buffer
      this.setBuffer(false);
    }

    this.emit('connect');
  }
};

/**
 * Called when the transport opens
 *
 * @api private
 */

Socket.prototype.onOpen = function () {
  this.open = true;
};

/**
 * Called when the transport closes.
 *
 * @api private
 */

Socket.prototype.onClose = function () {
  this.open = false;
  clearTimeout(this.heartbeatTimeoutTimer);
};

/**
 * Called when a packet is received from the transport.
 *
 * @param {Object} packet The decoded packet
 * @api private
 */

Socket.prototype.onPacket = function (packet) {
  this.of(packet.endpoint).onPacket(packet);
};

/**
 * Handles an error.
 *
 * @api private
 */

Socket.prototype.onError = function (err) {
  if (err && err.advice) {
    if (err.advice === 'reconnect' && (this.connected || this.connecting)) {
      this.disconnect();
      if (this.options.reconnect) {
        this.reconnect();
      }
    }
  }

  this.publish('error', err && err.reason ? err.reason : err);
};

/**
 * Called when the transport disconnects.
 *
 * @api private
 */

Socket.prototype.onDisconnect = function (reason) {
  var wasConnected = this.connected
    , wasConnecting = this.connecting;

  this.connected = false;
  this.connecting = false;
  this.open = false;

  if (wasConnected || wasConnecting) {
    this.transport.close();
    this.transport.clearTimeouts();

    if (wasConnected) {
      this.publish('disconnect', reason);

      if ('booted' != reason && this.options.reconnect && !this.reconnecting) {
        this.reconnect();
      }
    }
  }
};

/**
 * Called upon reconnection.
 *
 * @api private
 */

Socket.prototype.reconnect = function () {
  this.reconnecting = true;
  this.reconnectionAttempts = 0;
  this.reconnectionDelay = this.options['reconnection delay'];

  var self = this
    , maxAttempts = this.options['max reconnection attempts']
    , tryMultiple = this.options['try multiple transports']
    , limit = this.options['reconnection limit'];

  function reset () {
    if (self.connected) {
      for (var i in self.namespaces) {
        if (self.namespaces.hasOwnProperty(i) && '' !== i) {
          self.namespaces[i].packet({ type: 'connect' });
        }
      }
      self.publish('reconnect', self.transport.name, self.reconnectionAttempts);
    }

    clearTimeout(self.reconnectionTimer);

    self.removeListener('connect_failed', maybeReconnect);
    self.removeListener('connect', maybeReconnect);

    self.reconnecting = false;

    delete self.reconnectionAttempts;
    delete self.reconnectionDelay;
    delete self.reconnectionTimer;
    delete self.redoTransports;

    self.options['try multiple transports'] = tryMultiple;
  };

  function maybeReconnect () {
    if (!self.reconnecting) {
      return;
    }

    if (self.connected) {
      return reset();
    };

    if (self.connecting && self.reconnecting) {
      return self.reconnectionTimer = setTimeout(maybeReconnect, 1000);
    }

    if (self.reconnectionAttempts++ >= maxAttempts) {
      if (!self.redoTransports) {
        self.on('connect_failed', maybeReconnect);
        self.options['try multiple transports'] = true;
        self.transports = self.origTransports;
        self.transport = self.getTransport();
        self.redoTransports = true;
        self.connect();
      } else {
        self.publish('reconnect_failed');
        reset();
      }
    } else {
      if (self.reconnectionDelay < limit) {
        self.reconnectionDelay *= 2; // exponential back off
      }

      self.connect();
      self.publish('reconnecting', self.reconnectionDelay, self.reconnectionAttempts);
      self.reconnectionTimer = setTimeout(maybeReconnect, self.reconnectionDelay);
    }
  };

  this.options['try multiple transports'] = false;
  this.reconnectionTimer = setTimeout(maybeReconnect, this.reconnectionDelay);

  this.on('connect', maybeReconnect);
};

})(
    'undefined' != typeof
io ? io : module.exports , 'undefined' != typeof io ? io : module.parent.exports , this ); /** * socket.io * Copyright(c) 2011 LearnBoost <dev@learnboost.com> * MIT Licensed */ (function (exports, io) { /** * Expose constructor. */ exports.SocketNamespace = SocketNamespace; /** * Socket namespace constructor. * * @constructor * @api public */ function SocketNamespace (socket, name) { this.socket = socket; this.name = name || ''; this.flags = {}; this.json = new Flag(this, 'json'); this.ackPackets = 0; this.acks = {}; }; /** * Apply EventEmitter mixin. */ io.util.mixin(SocketNamespace, io.EventEmitter); /** * Copies emit since we override it * * @api private */ SocketNamespace.prototype.$emit = io.EventEmitter.prototype.emit; /** * Creates a new namespace, by proxying the request to the socket. This * allows us to use the synax as we do on the server. * * @api public */ SocketNamespace.prototype.of = function () { return this.socket.of.apply(this.socket, arguments); }; /** * Sends a packet. * * @api private */ SocketNamespace.prototype.packet = function (packet) { packet.endpoint = this.name; this.socket.packet(packet); this.flags = {}; return this; }; /** * Sends a message * * @api public */ SocketNamespace.prototype.send = function (data, fn) { var packet = { type: this.flags.json ? 
'json' : 'message' , data: data }; if ('function' == typeof fn) { packet.id = ++this.ackPackets; packet.ack = true; this.acks[packet.id] = fn; } return this.packet(packet); }; /** * Emits an event * * @api public */ SocketNamespace.prototype.emit = function (name) { var args = Array.prototype.slice.call(arguments, 1) , lastArg = args[args.length - 1] , packet = { type: 'event' , name: name }; if ('function' == typeof lastArg) { packet.id = ++this.ackPackets; packet.ack = 'data'; this.acks[packet.id] = lastArg; args = args.slice(0, args.length - 1); } packet.args = args; return this.packet(packet); }; /** * Disconnects the namespace * * @api private */ SocketNamespace.prototype.disconnect = function () { if (this.name === '') { this.socket.disconnect(); } else { this.packet({ type: 'disconnect' }); this.$emit('disconnect'); } return this; }; /** * Handles a packet * * @api private */ SocketNamespace.prototype.onPacket = function (packet) { var self = this; function ack () { self.packet({ type: 'ack' , args: io.util.toArray(arguments) , ackId: packet.id }); }; switch (packet.type) { case 'connect': this.$emit('connect'); break; case 'disconnect': if (this.name === '') { this.socket.onDisconnect(packet.reason || 'booted'); } else { this.$emit('disconnect', packet.reason); } break; case 'message': case 'json': var params = ['message', packet.data]; if (packet.ack == 'data') { params.push(ack); } else if (packet.ack) { this.packet({ type: 'ack', ackId: packet.id }); } this.$emit.apply(this, params); break; case 'event': var params = [packet.name].concat(packet.args); if (packet.ack == 'data') params.push(ack); this.$emit.apply(this, params); break; case 'ack': if (this.acks[packet.ackId]) { this.acks[packet.ackId].apply(this, packet.args); delete this.acks[packet.ackId]; } break; case 'error': if (packet.advice){ this.socket.onError(packet); } else { if (packet.reason == 'unauthorized') { this.$emit('connect_failed', packet.reason); } else { this.$emit('error', 
packet.reason);
        }
      }
      break;
  }
};

/**
 * Flag interface.
 *
 * @api private
 */

function Flag (nsp, name) {
  this.namespace = nsp;
  this.name = name;
};

/**
 * Send a message
 *
 * @api public
 */

Flag.prototype.send = function () {
  this.namespace.flags[this.name] = true;
  this.namespace.send.apply(this.namespace, arguments);
};

/**
 * Emit an event
 *
 * @api public
 */

Flag.prototype.emit = function () {
  this.namespace.flags[this.name] = true;
  this.namespace.emit.apply(this.namespace, arguments);
};

})(
    'undefined' != typeof io ? io : module.exports
  , 'undefined' != typeof io ? io : module.parent.exports
);

/**
 * socket.io
 * Copyright(c) 2011 LearnBoost <dev@learnboost.com>
 * MIT Licensed
 */

(function (exports, io, global) {

  /**
   * Expose constructor.
   */

  exports.websocket = WS;

  /**
   * The WebSocket transport uses the HTML5 WebSocket API to establish a
   * persistent connection with the Socket.IO server. This transport will also
   * be inherited by the FlashSocket fallback as it provides an API compatible
   * polyfill for the WebSockets.
   *
   * @constructor
   * @extends {io.Transport}
   * @api public
   */

  function WS (socket) {
    io.Transport.apply(this, arguments);
  };

  /**
   * Inherits from Transport.
   */

  io.util.inherit(WS, io.Transport);

  /**
   * Transport name
   *
   * @api public
   */

  WS.prototype.name = 'websocket';

  /**
   * Initializes a new `WebSocket` connection with the Socket.IO server. We attach
   * all the appropriate listeners to handle the responses from the server.
   *
   * @returns {Transport}
   * @api public
   */

  WS.prototype.open = function () {
    var query = io.util.query(this.socket.options.query)
      , self = this
      , Socket;

    if (!Socket) {
      Socket = global.MozWebSocket || global.WebSocket;
    }

    this.websocket = new Socket(this.prepareUrl() + query);

    this.websocket.onopen = function () {
      self.onOpen();
      self.socket.setBuffer(false);
    };
    this.websocket.onmessage = function (ev) {
      self.onData(ev.data);
    };
    this.websocket.onclose = function () {
      self.onClose();
      self.socket.setBuffer(true);
    };
    this.websocket.onerror = function (e) {
      self.onError(e);
    };

    return this;
  };

  /**
   * Send a message to the Socket.IO server. The message will automatically be
   * encoded in the correct message format.
   *
   * @returns {Transport}
   * @api public
   */

  // Due to a bug in the current iDevices browser, we need to wrap the send in a
  // setTimeout, because when they resume from sleeping the browser will crash if
  // we don't allow the browser time to detect the socket has been closed
  if (io.util.ua.iDevice) {
    WS.prototype.send = function (data) {
      var self = this;
      setTimeout(function() {
        self.websocket.send(data);
      }, 0);
      return this;
    };
  } else {
    WS.prototype.send = function (data) {
      this.websocket.send(data);
      return this;
    };
  }

  /**
   * Payload
   *
   * @api private
   */

  WS.prototype.payload = function (arr) {
    for (var i = 0, l = arr.length; i < l; i++) {
      this.packet(arr[i]);
    }
    return this;
  };

  /**
   * Disconnect the established `WebSocket` connection.
   *
   * @returns {Transport}
   * @api public
   */

  WS.prototype.close = function () {
    this.websocket.close();
    return this;
  };

  /**
   * Handle the errors that `WebSocket` might be giving when we
   * are attempting to connect or send messages.
   *
   * @param {Error} e The error.
   * @api private
   */

  WS.prototype.onError = function (e) {
    this.socket.onError(e);
  };

  /**
   * Returns the appropriate scheme for the URI generation.
   *
   * @api private
   */

  WS.prototype.scheme = function () {
    return this.socket.options.secure ? 'wss' : 'ws';
  };

  /**
   * Checks if the browser has support for native `WebSockets` and that
   * it's not the polyfill created for the FlashSocket transport.
   *
   * @return {Boolean}
   * @api public
   */

  WS.check = function () {
    return ('WebSocket' in global && !('__addTask' in WebSocket))
        || 'MozWebSocket' in global;
  };

  /**
   * Check if the `WebSocket` transport supports cross domain communications.
   *
   * @returns {Boolean}
   * @api public
   */

  WS.xdomainCheck = function () {
    return true;
  };

  /**
   * Add the transport to your public io.transports array.
   *
   * @api private
   */

  io.transports.push('websocket');

})(
    'undefined' != typeof io ? io.Transport : module.exports
  , 'undefined' != typeof io ? io : module.parent.exports
  , this
);

/**
 * socket.io
 * Copyright(c) 2011 LearnBoost <dev@learnboost.com>
 * MIT Licensed
 */

(function (exports, io, global) {

  /**
   * Expose constructor.
   *
   * @api public
   */

  exports.XHR = XHR;

  /**
   * XHR constructor
   *
   * @constructor
   * @api public
   */

  function XHR (socket) {
    if (!socket) return;

    io.Transport.apply(this, arguments);
    this.sendBuffer = [];
  };

  /**
   * Inherits from Transport.
   */

  io.util.inherit(XHR, io.Transport);

  /**
   * Establish a connection
   *
   * @returns {Transport}
   * @api public
   */

  XHR.prototype.open = function () {
    this.socket.setBuffer(false);
    this.onOpen();
    this.get();

    // we need to make sure the request succeeds since we have no indication
    // whether the request opened or not until it succeeded.
    this.setCloseTimeout();

    return this;
  };

  /**
   * Check if we need to send data to the Socket.IO server, if we have data in our
   * buffer we encode it and forward it to the `post` method.
   *
   * @api private
   */

  XHR.prototype.payload = function (payload) {
    var msgs = [];

    for (var i = 0, l = payload.length; i < l; i++) {
      msgs.push(io.parser.encodePacket(payload[i]));
    }

    this.send(io.parser.encodePayload(msgs));
  };

  /**
   * Send data to the Socket.IO server.
   *
   * @param data The message
   * @returns {Transport}
   * @api public
   */

  XHR.prototype.send = function (data) {
    this.post(data);
    return this;
  };

  /**
   * Posts an encoded message to the Socket.IO server.
   *
   * @param {String} data An encoded message.
   * @api private
   */

  function empty () { };

  XHR.prototype.post = function (data) {
    var self = this;
    this.socket.setBuffer(true);

    function stateChange () {
      if (this.readyState == 4) {
        this.onreadystatechange = empty;
        self.posting = false;

        if (this.status == 200){
          self.socket.setBuffer(false);
        } else {
          self.onClose();
        }
      }
    }

    function onload () {
      this.onload = empty;
      self.socket.setBuffer(false);
    };

    this.sendXHR = this.request('POST');

    if (global.XDomainRequest && this.sendXHR instanceof XDomainRequest) {
      this.sendXHR.onload = this.sendXHR.onerror = onload;
    } else {
      this.sendXHR.onreadystatechange = stateChange;
    }

    this.sendXHR.send(data);
  };

  /**
   * Disconnects the established `XHR` connection.
   *
   * @returns {Transport}
   * @api public
   */

  XHR.prototype.close = function () {
    this.onClose();
    return this;
  };

  /**
   * Generates a configured XHR request
   *
   * @param {String} method The method the request should use.
   * @returns {XMLHttpRequest}
   * @api private
   */

  XHR.prototype.request = function (method) {
    var req = io.util.request(this.socket.isXDomain())
      , query = io.util.query(this.socket.options.query, 't=' + +new Date);

    req.open(method || 'GET', this.prepareUrl() + query, true);

    if (method == 'POST') {
      try {
        if (req.setRequestHeader) {
          req.setRequestHeader('Content-type', 'text/plain;charset=UTF-8');
        } else {
          // XDomainRequest
          req.contentType = 'text/plain';
        }
      } catch (e) {}
    }

    return req;
  };

  /**
   * Returns the scheme to use for the transport URLs.
   *
   * @api private
   */

  XHR.prototype.scheme = function () {
    return this.socket.options.secure ? 'https' : 'http';
  };

  /**
   * Check if the XHR transports are supported
   *
   * @param {Boolean} xdomain Check if we support cross domain requests.
   * @returns {Boolean}
   * @api public
   */

  XHR.check = function (socket, xdomain) {
    try {
      var request = io.util.request(xdomain),
          usesXDomReq = (global.XDomainRequest && request instanceof XDomainRequest),
          socketProtocol = (socket && socket.options && socket.options.secure ? 'https:' : 'http:'),
          isXProtocol = (global.location && socketProtocol != global.location.protocol);
      if (request && !(usesXDomReq && isXProtocol)) {
        return true;
      }
    } catch(e) {}

    return false;
  };

  /**
   * Check if the XHR transport supports cross domain requests.
   *
   * @returns {Boolean}
   * @api public
   */

  XHR.xdomainCheck = function (socket) {
    return XHR.check(socket, true);
  };

})(
    'undefined' != typeof io ? io.Transport : module.exports
  , 'undefined' != typeof io ? io : module.parent.exports
  , this
);

/**
 * socket.io
 * Copyright(c) 2011 LearnBoost <dev@learnboost.com>
 * MIT Licensed
 */

(function (exports, io) {

  /**
   * Expose constructor.
   */

  exports.htmlfile = HTMLFile;

  /**
   * The HTMLFile transport creates a `forever iframe` based transport
   * for Internet Explorer. Regular forever iframe implementations will
   * continuously trigger the browser's busy indicators. If the forever iframe
   * is created inside a `htmlfile` these indicators will not be triggered.
   *
   * @constructor
   * @extends {io.Transport.XHR}
   * @api public
   */

  function HTMLFile (socket) {
    io.Transport.XHR.apply(this, arguments);
  };

  /**
   * Inherits from XHR transport.
   */

  io.util.inherit(HTMLFile, io.Transport.XHR);

  /**
   * Transport name
   *
   * @api public
   */

  HTMLFile.prototype.name = 'htmlfile';

  /**
   * Creates a new ActiveX `htmlfile` with a forever loading iframe
   * that can be used to listen to messages. Inside the generated
   * `htmlfile` a reference will be made to the HTMLFile transport.
   *
   * @api private
   */

  HTMLFile.prototype.get = function () {
    this.doc = new window[(['Active'].concat('Object').join('X'))]('htmlfile');
    this.doc.open();
    this.doc.write('<html></html>');
    this.doc.close();
    this.doc.parentWindow.s = this;

    var iframeC = this.doc.createElement('div');
    iframeC.className = 'socketio';

    this.doc.body.appendChild(iframeC);
    this.iframe = this.doc.createElement('iframe');

    iframeC.appendChild(this.iframe);

    var self = this
      , query = io.util.query(this.socket.options.query, 't='+ +new Date);

    this.iframe.src = this.prepareUrl() + query;

    io.util.on(window, 'unload', function () {
      self.destroy();
    });
  };

  /**
   * The Socket.IO server will write script tags inside the forever
   * iframe, this function will be used as callback for the incoming
   * information.
   *
   * @param {String} data The message
   * @param {document} doc Reference to the context
   * @api private
   */

  HTMLFile.prototype._ = function (data, doc) {
    this.onData(data);
    try {
      var script = doc.getElementsByTagName('script')[0];
      script.parentNode.removeChild(script);
    } catch (e) { }
  };

  /**
   * Destroy the established connection, iframe and `htmlfile`.
   * And calls the `CollectGarbage` function of Internet Explorer
   * to release the memory.
   *
   * @api private
   */

  HTMLFile.prototype.destroy = function () {
    if (this.iframe){
      try {
        this.iframe.src = 'about:blank';
      } catch(e){}

      this.doc = null;
      this.iframe.parentNode.removeChild(this.iframe);
      this.iframe = null;

      CollectGarbage();
    }
  };

  /**
   * Disconnects the established connection.
   *
   * @returns {Transport} Chaining.
   * @api public
   */

  HTMLFile.prototype.close = function () {
    this.destroy();
    return io.Transport.XHR.prototype.close.call(this);
  };

  /**
   * Checks if the browser supports this transport. The browser
   * must have an `ActiveXObject` implementation.
* * @return {Boolean} * @api public */ HTMLFile.check = function (socket) { if (typeof window != "undefined" && (['Active'].concat('Object').join('X')) in window){ try { var a = new window[(['Active'].concat('Object').join('X'))]('htmlfile'); return a && io.Transport.XHR.check(socket); } catch(e){} } return false; }; /** * Check if cross domain requests are supported. * * @returns {Boolean} * @api public */ HTMLFile.xdomainCheck = function () { // we can probably do handling for sub-domains, we should // test that it's cross domain but a subdomain here return false; }; /** * Add the transport to your public io.transports array. * * @api private */ io.transports.push('htmlfile'); })( 'undefined' != typeof io ? io.Transport : module.exports , 'undefined' != typeof io ? io : module.parent.exports ); /** * socket.io * Copyright(c) 2011 LearnBoost <dev@learnboost.com> * MIT Licensed */ (function (exports, io, global) { /** * Expose constructor. */ exports['xhr-polling'] = XHRPolling; /** * The XHR-polling transport uses long polling XHR requests to create a * "persistent" connection with the server. * * @constructor * @api public */ function XHRPolling () { io.Transport.XHR.apply(this, arguments); }; /** * Inherits from XHR transport. */ io.util.inherit(XHRPolling, io.Transport.XHR); /** * Merge the properties from XHR transport */ io.util.merge(XHRPolling, io.Transport.XHR); /** * Transport name * * @api public */ XHRPolling.prototype.name = 'xhr-polling'; /** * Indicates whether heartbeats is enabled for this transport * * @api private */ XHRPolling.prototype.heartbeats = function () { return false; }; /** * Establish a connection, for iPhone and Android this will be done once the page * is loaded. * * @returns {Transport} Chaining. * @api public */ XHRPolling.prototype.open = function () { var self = this; io.Transport.XHR.prototype.open.call(self); return false; }; /** * Starts a XHR request to wait for incoming messages. 
   *
   * @api private
   */

  function empty () {};

  XHRPolling.prototype.get = function () {
    if (!this.isOpen) return;

    var self = this;

    function stateChange () {
      if (this.readyState == 4) {
        this.onreadystatechange = empty;

        if (this.status == 200) {
          self.onData(this.responseText);
          self.get();
        } else {
          self.onClose();
        }
      }
    };

    function onload () {
      this.onload = empty;
      this.onerror = empty;
      self.retryCounter = 1;
      self.onData(this.responseText);
      self.get();
    };

    function onerror () {
      self.retryCounter ++;
      if (!self.retryCounter || self.retryCounter > 3) {
        self.onClose();
      } else {
        self.get();
      }
    };

    this.xhr = this.request();

    if (global.XDomainRequest && this.xhr instanceof XDomainRequest) {
      this.xhr.onload = onload;
      this.xhr.onerror = onerror;
    } else {
      this.xhr.onreadystatechange = stateChange;
    }

    this.xhr.send(null);
  };

  /**
   * Handle the unclean close behavior.
   *
   * @api private
   */

  XHRPolling.prototype.onClose = function () {
    io.Transport.XHR.prototype.onClose.call(this);

    if (this.xhr) {
      this.xhr.onreadystatechange = this.xhr.onload = this.xhr.onerror = empty;
      try {
        this.xhr.abort();
      } catch(e){}
      this.xhr = null;
    }
  };

  /**
   * WebKit based browsers show an infinite spinner when you start an XHR request
   * before the browser's onload event is called, so we need to defer opening of
   * the transport until the onload event is called. Wrapping the cb in our
   * defer method solves this.
   *
   * @param {Socket} socket The socket instance that needs a transport
   * @param {Function} fn The callback
   * @api private
   */

  XHRPolling.prototype.ready = function (socket, fn) {
    var self = this;

    io.util.defer(function () {
      fn.call(self);
    });
  };

  /**
   * Add the transport to your public io.transports array.
   *
   * @api private
   */

  io.transports.push('xhr-polling');

})(
    'undefined' != typeof io ? io.Transport : module.exports
  , 'undefined' != typeof io ? io : module.parent.exports
  , this
);

/**
 * socket.io
 * Copyright(c) 2011 LearnBoost <dev@learnboost.com>
 * MIT Licensed
 */

(function (exports, io, global) {

  /**
   * There is a way to hide the loading indicator in Firefox. If you create and
   * remove an iframe it will stop showing the current loading indicator.
   * Unfortunately we can't feature detect that and UA sniffing is evil.
   *
   * @api private
   */

  var indicator = global.document && "MozAppearance" in
    global.document.documentElement.style;

  /**
   * Expose constructor.
   */

  exports['jsonp-polling'] = JSONPPolling;

  /**
   * The JSONP transport creates a persistent connection by dynamically
   * inserting a script tag in the page. This script tag will receive the
   * information of the Socket.IO server. When new information is received
   * it creates a new script tag for the new data stream.
   *
   * @constructor
   * @extends {io.Transport.xhr-polling}
   * @api public
   */

  function JSONPPolling (socket) {
    io.Transport['xhr-polling'].apply(this, arguments);

    this.index = io.j.length;

    var self = this;

    io.j.push(function (msg) {
      self._(msg);
    });
  };

  /**
   * Inherits from XHR polling transport.
   */

  io.util.inherit(JSONPPolling, io.Transport['xhr-polling']);

  /**
   * Transport name
   *
   * @api public
   */

  JSONPPolling.prototype.name = 'jsonp-polling';

  /**
   * Posts an encoded message to the Socket.IO server using an iframe.
   * The iframe is used because script tags can create POST based requests.
   * The iframe is positioned outside of the view so the user does not
   * notice its existence.
   *
   * @param {String} data An encoded message.
* @api private */ JSONPPolling.prototype.post = function (data) { var self = this , query = io.util.query( this.socket.options.query , 't='+ (+new Date) + '&i=' + this.index ); if (!this.form) { var form = document.createElement('form') , area = document.createElement('textarea') , id = this.iframeId = 'socketio_iframe_' + this.index , iframe; form.className = 'socketio'; form.style.position = 'absolute'; form.style.top = '0px'; form.style.left = '0px'; form.style.display = 'none'; form.target = id; form.method = 'POST'; form.setAttribute('accept-charset', 'utf-8'); area.name = 'd'; form.appendChild(area); document.body.appendChild(form); this.form = form; this.area = area; } this.form.action = this.prepareUrl() + query; function complete () { initIframe(); self.socket.setBuffer(false); }; function initIframe () { if (self.iframe) { self.form.removeChild(self.iframe); } try { // ie6 dynamic iframes with target="" support (thanks Chris Lambacher) iframe = document.createElement('<iframe name="'+ self.iframeId +'">'); } catch (e) { iframe = document.createElement('iframe'); iframe.name = self.iframeId; } iframe.id = self.iframeId; self.form.appendChild(iframe); self.iframe = iframe; }; initIframe(); // we temporarily stringify until we figure out how to prevent // browsers from turning `\n` into `\r\n` in form inputs this.area.value = io.JSON.stringify(data); try { this.form.submit(); } catch(e) {} if (this.iframe.attachEvent) { iframe.onreadystatechange = function () { if (self.iframe.readyState == 'complete') { complete(); } }; } else { this.iframe.onload = complete; } this.socket.setBuffer(true); }; /** * Creates a new JSONP poll that can be used to listen * for messages from the Socket.IO server. 
* * @api private */ JSONPPolling.prototype.get = function () { var self = this , script = document.createElement('script') , query = io.util.query( this.socket.options.query , 't='+ (+new Date) + '&i=' + this.index ); if (this.script) { this.script.parentNode.removeChild(this.script); this.script = null; } script.async = true; script.src = this.prepareUrl() + query; script.onerror = function () { self.onClose(); }; var insertAt = document.getElementsByTagName('script')[0]; insertAt.parentNode.insertBefore(script, insertAt); this.script = script; if (indicator) { setTimeout(function () { var iframe = document.createElement('iframe'); document.body.appendChild(iframe); document.body.removeChild(iframe); }, 100); } }; /** * Callback function for the incoming message stream from the Socket.IO server. * * @param {String} data The message * @api private */ JSONPPolling.prototype._ = function (msg) { this.onData(msg); if (this.isOpen) { this.get(); } return this; }; /** * The indicator hack only works after onload * * @param {Socket} socket The socket instance that needs a transport * @param {Function} fn The callback * @api private */ JSONPPolling.prototype.ready = function (socket, fn) { var self = this; if (!indicator) return fn.call(this); io.util.load(function () { fn.call(self); }); }; /** * Checks if browser supports this transport. * * @return {Boolean} * @api public */ JSONPPolling.check = function () { return 'document' in global; }; /** * Check if cross domain requests are supported * * @returns {Boolean} * @api public */ JSONPPolling.xdomainCheck = function () { return true; }; /** * Add the transport to your public io.transports array. * * @api private */ io.transports.push('jsonp-polling'); })( 'undefined' != typeof io ? io.Transport : module.exports , 'undefined' != typeof io ? 
io : module.parent.exports , this ); if (typeof define === "function" && define.amd) { define([], function () { return io; }); } })(); (function() { /** * EventEmitter v4.0.5 - git.io/ee * Oliver Caldwell * MIT license * @preserve */ ;(function(exports) { // JSHint config - http://www.jshint.com/ /*jshint laxcomma:true*/ /*global define:true*/ // Place the script in strict mode 'use strict'; /** * Class for managing events. * Can be extended to provide event functionality in other classes. * * @class Manages event registering and emitting. */ function EventEmitter(){} // Shortcuts to improve speed and size // Easy access to the prototype var proto = EventEmitter.prototype // Existence of a native indexOf , nativeIndexOf = Array.prototype.indexOf ? true : false; /** * Finds the index of the listener for the event in it's storage array * * @param {Function} listener Method to look for. * @param {Function[]} listeners Array of listeners to search through. * @return {Number} Index of the specified listener, -1 if not found */ function indexOfListener(listener, listeners) { // Return the index via the native method if possible if(nativeIndexOf) { return listeners.indexOf(listener); } // There is no native method // Use a manual loop to find the index var i = listeners.length; while(i--) { // If the listener matches, return it's index if(listeners[i] === listener) { return i; } } // Default to returning -1 return -1; } /** * Fetches the events object and creates one if required. * * @return {Object} The events storage object. */ proto._getEvents = function() { return this._events || (this._events = {}); }; /** * Returns the listener array for the specified event. * Will initialise the event object and listener arrays if required. * * @param {String} evt Name of the event to return the listeners from. * @return {Function[]} All listener functions for the event. 
* @doc */ proto.getListeners = function(evt) { // Create a shortcut to the storage object // Initialise it if it does not exists yet var events = this._getEvents(); // Return the listener array // Initialise it if it does not exist return events[evt] || (events[evt] = []); }; /** * Adds a listener function to the specified event. * The listener will not be added if it is a duplicate. * If the listener returns true then it will be removed after it is called. * * @param {String} evt Name of the event to attach the listener to. * @param {Function} listener Method to be called when the event is emitted. If the function returns true then it will be removed after calling. * @return {Object} Current instance of EventEmitter for chaining. * @doc */ proto.addListener = function(evt, listener) { // Fetch the listeners var listeners = this.getListeners(evt); // Push the listener into the array if it is not already there if(indexOfListener(listener, listeners) === -1) { listeners.push(listener); } // Return the instance of EventEmitter to allow chaining return this; }; /** * Alias of addListener * @doc */ proto.on = proto.addListener; /** * Alias of addListener * @doc */ proto.once = function(evt, listener) { var self = this; this.on(evt, function wrapper() { listener.apply(self, arguments); self.removeListener(evt, wrapper); }); }; /** * Removes a listener function from the specified event. * * @param {String} evt Name of the event to remove the listener from. * @param {Function} listener Method to remove from the event. * @return {Object} Current instance of EventEmitter for chaining. 
* @doc */ proto.removeListener = function(evt, listener) { // Fetch the listeners // And get the index of the listener in the array var listeners = this.getListeners(evt) , index = indexOfListener(listener, listeners); // If the listener was found then remove it if(index !== -1) { listeners.splice(index, 1); // If there are no more listeners in this array then remove it if(listeners.length === 0) { this.removeEvent(evt); } } // Return the instance of EventEmitter to allow chaining return this; }; /** * Alias of removeListener * @doc */ proto.off = proto.removeListener; /** * Adds listeners in bulk using the manipulateListeners method. * If you pass an object as the second argument you can add to multiple events at once. The object should contain key value pairs of events and listeners or listener arrays. * You can also pass it an event name and an array of listeners to be added. * * @param {String|Object} evt An event name if you will pass an array of listeners next. An object if you wish to add to multiple events at once. * @param {Function[]} [listeners] An optional array of listener functions to add. * @return {Object} Current instance of EventEmitter for chaining. * @doc */ proto.addListeners = function(evt, listeners) { // Pass through to manipulateListeners return this.manipulateListeners(false, evt, listeners); }; /** * Removes listeners in bulk using the manipulateListeners method. * If you pass an object as the second argument you can remove from multiple events at once. The object should contain key value pairs of events and listeners or listener arrays. * You can also pass it an event name and an array of listeners to be removed. * * @param {String|Object} evt An event name if you will pass an array of listeners next. An object if you wish to remove from multiple events at once. * @param {Function[]} [listeners] An optional array of listener functions to remove. * @return {Object} Current instance of EventEmitter for chaining. 
* @doc */ proto.removeListeners = function(evt, listeners) { // Pass through to manipulateListeners return this.manipulateListeners(true, evt, listeners); }; /** * Edits listeners in bulk. The addListeners and removeListeners methods both use this to do their job. You should really use those instead, this is a little lower level. * The first argument will determine if the listeners are removed (true) or added (false). * If you pass an object as the second argument you can add/remove from multiple events at once. The object should contain key value pairs of events and listeners or listener arrays. * You can also pass it an event name and an array of listeners to be added/removed. * * @param {Boolean} remove True if you want to remove listeners, false if you want to add. * @param {String|Object} evt An event name if you will pass an array of listeners next. An object if you wish to add/remove from multiple events at once. * @param {Function[]} [listeners] An optional array of listener functions to add/remove. * @return {Object} Current instance of EventEmitter for chaining. * @doc */ proto.manipulateListeners = function(remove, evt, listeners) { // Initialise any required variables var i , value , single = remove ? this.removeListener : this.addListener , multiple = remove ? 
this.removeListeners : this.addListeners; // If evt is an object then pass each of it's properties to this method if(typeof evt === 'object') { for(i in evt) { if(evt.hasOwnProperty(i) && (value = evt[i])) { // Pass the single listener straight through to the singular method if(typeof value === 'function') { single.call(this, i, value); } else { // Otherwise pass back to the multiple function multiple.call(this, i, value); } } } } else { // So evt must be a string // And listeners must be an array of listeners // Loop over it and pass each one to the multiple method i = listeners.length; while(i--) { single.call(this, evt, listeners[i]); } } // Return the instance of EventEmitter to allow chaining return this; }; /** * Removes all listeners from a specified event. * If you do not specify an event then all listeners will be removed. * That means every event will be emptied. * * @param {String} [evt] Optional name of the event to remove all listeners for. Will remove from every event if not passed. * @return {Object} Current instance of EventEmitter for chaining. * @doc */ proto.removeEvent = function(evt) { // Remove different things depending on the state of evt if(evt) { // Remove all listeners for the specified event delete this._getEvents()[evt]; } else { // Remove all listeners in all events delete this._events; } // Return the instance of EventEmitter to allow chaining return this; }; /** * Emits an event of your choice. * When emitted, every listener attached to that event will be executed. * If you pass the optional argument array then those arguments will be passed to every listener upon execution. * Because it uses `apply`, your array of arguments will be passed as if you wrote them out separately. * So they will not arrive within the array on the other side, they will be separate. * * @param {String} evt Name of the event to emit and execute listeners for. * @param {Array} [args] Optional array of arguments to be passed to each listener. 
* @return {Object} Current instance of EventEmitter for chaining. * @doc */ proto.emitEvent = function(evt, args) { // Get the listeners for the event // Also initialise any other required variables var listeners = this.getListeners(evt) , i = listeners.length , response; // Loop over all listeners assigned to the event // Apply the arguments array to each listener function while(i--) { // If the listener returns true then it shall be removed from the event // The function is executed either with a basic call or an apply if there is an args array response = args ? listeners[i].apply(this, args) : listeners[i].call(this); if(response === true) { this.removeListener(evt, listeners[i]); } } // Return the instance of EventEmitter to allow chaining return this; }; /** * Alias of emitEvent * @doc */ proto.trigger = proto.emitEvent; /** * Subtly different from emitEvent in that it will pass its arguments on to the listeners, as * opposed to taking a single array of arguments to pass on. * * @param {String} evt Name of the event to emit and execute listeners for. * @param {...*} Optional additional arguments to be passed to each listener. * @return {Object} Current instance of EventEmitter for chaining. * @doc */ proto.emit = function(evt) { var args = Array.prototype.slice.call(arguments, 1); return this.emitEvent(evt, args); }; // Expose the class either via AMD or the global object if(typeof define === 'function' && define.amd) { define(function() { return EventEmitter; }); } else { exports.EventEmitter = EventEmitter; } }(this)); /** * Expose `debug()` as the module. */ // window.debug = debug; /** * Create a debugger with the given `name`. 
* * @param {String} name * @return {Function} * @api public */ function debug(name) { if (!debug.enabled(name)) return function(){}; return function(fmt){ var curr = new Date; var ms = curr - (debug[name] || curr); debug[name] = curr; fmt = name + ' ' + fmt + ' +' + debug.humanize(ms); // This hackery is required for IE8
// where `console.log` doesn't have 'apply'
window.console && console.log && Function.prototype.apply.call(console.log, console, arguments); } } /** * The currently active debug mode names. */ debug.names = []; debug.skips = []; /** * Enables a debug mode by name. This can include modes * separated by a colon and wildcards. * * @param {String} name * @api public */ debug.enable = function(name) { try { localStorage.debug = name; } catch(e){} var split = (name || '').split(/[\s,]+/) , len = split.length; for (var i = 0; i < len; i++) { name = split[i].replace('*', '.*?'); if (name[0] === '-') { debug.skips.push(new RegExp('^' + name.substr(1) + '$')); } else { debug.names.push(new RegExp('^' + name + '$')); } } }; /** * Disable debug output. * * @api public */ debug.disable = function(){ debug.enable(''); }; /** * Humanize the given `ms`. * * @param {Number} ms * @return {String} * @api private */ debug.humanize = function(ms) { var sec = 1000 , min = 60 * 1000 , hour = 60 * min; if (ms >= hour) return (ms / hour).toFixed(1) + 'h'; if (ms >= min) return (ms / min).toFixed(1) + 'm'; if (ms >= sec) return (ms / sec | 0) + 's'; return ms + 'ms'; }; /** * Returns true if the given mode name is enabled, false otherwise. 
* * @param {String} name * @return {Boolean} * @api public */ debug.enabled = function(name) { for (var i = 0, len = debug.skips.length; i < len; i++) { if (debug.skips[i].test(name)) { return false; } } for (var i = 0, len = debug.names.length; i < len; i++) { if (debug.names[i].test(name)) { return true; } } return false; }; // persist
if (window.localStorage) debug.enable(localStorage.debug); if(!window.RTCPeerConnection) { if (navigator.mozGetUserMedia) { window.RTCSessionDescription = window.mozRTCSessionDescription; window.RTCPeerConnection = window.mozRTCPeerConnection; window.getUserMedia = navigator.mozGetUserMedia.bind(navigator); } else if (navigator.webkitGetUserMedia) { window.RTCPeerConnection = window.webkitRTCPeerConnection; window.getUserMedia = navigator.webkitGetUserMedia.bind(navigator); } } function extend(dest, src) { for(var key in src.prototype) { dest.prototype[key] = src.prototype[key]; } } function uuid_v4() { return 'xxxxxxxx-xxxx-4xxx-yxxx-xxxxxxxxxxxx'.replace(/[xy]/g, function(c) { var r = Math.random()*16|0, v = c == 'x' ? 
r : (r&0x3|0x8); return v.toString(16); }); } /* Get the peerId from either stored on the localStorage or create a new one */ function getPeerId() { var keyname = 'peerman-peer-id'; var peerId = localStorage.getItem(keyname); if(!peerId) { peerId = uuid_v4(); localStorage.setItem(keyname, peerId); } return peerId; } /* An Utility which runs a function with a timeout * While in queue, if asked to run again that request will be discarded * Timeout start with the "last run time", not the current time */ function Runner(callback, self) { var lastRunAt = 0; var scheduled = false; this.triggerIn = function(millis, args) { if(!scheduled) { var now = Date.now(); var timeDiff = now - lastRunAt; if(timeDiff > millis) { triggerFunction(args); } else { scheduled = true; setTimeout(function() { triggerFunction(args); }, millis - timeDiff); } } }; function triggerFunction(args) { scheduled = false; lastRunAt = Date.now(); callback.apply(self, args); } } function getCookie(c_name) { var i,x,y,ARRcookies=document.cookie.split(";"); for (i=0;i<ARRcookies.length;i++) { x=ARRcookies[i].substr(0,ARRcookies[i].indexOf("=")); y=ARRcookies[i].substr(ARRcookies[i].indexOf("=")+1); x=x.replace(/^\s+|\s+$/g,""); if (x==c_name) { return unescape(y); } } } function getScriptQuery() { var scripts = document.getElementsByTagName('script'); var scriptSrc= scripts[scripts.length - 1].src; var qs = scriptSrc.split('?')[1]; var query = {}; if(qs) { var parts = qs.split('&'); parts.forEach(function(part) { var secondPart = part.split('='); query[secondPart[0]] = secondPart[1]; }); } return query; } function Qbox(ticks) { var callbacks = []; var loaded = false; this.ready = function(callback) { if(loaded) { callback(); } else { callbacks.push(callback); } }; this.tick = function() { if(--ticks == 0) { this.start(); } }; this.start = function() { loaded = true; callbacks.forEach(function(callback) { callback(); }); }; } function PeerSocket(resource, peerId, type) { var self = this; this.resource = 
resource; this.peerId = peerId; this.type = type; this.connected = false; var logger = this.logger = debug('peer:' + peerId); var iceLogger = this.iceLogger = debug('peer-ice:' + peerId); var servers = {"iceServers":[{"url":"stun:stun.l.google.com:19302"}]}; var me = this.me = new RTCPeerConnection(servers, {optional: [{RtpDataChannels: true}]}); var meData = this.meData = me.createDataChannel('default', { reliable: false }); me.onicecandidate = this._onIceCandidate.bind(this); me.onicechange = this._onIceChange.bind(this); meData.onmessage = this._notifyMessage.bind(this); meData.onopen = this._notifyOpen.bind(this); } extend(PeerSocket, EventEmitter); PeerSocket.prototype.offer = function offer(callback) { var self = this; this.me.createOffer(function(desc) { self.logger('offer created'); self.me.setLocalDescription(desc); callback(desc); }); }; PeerSocket.prototype.answer = function answer (remoteDesc, callback) { if(!(remoteDesc instanceof RTCSessionDescription)) { remoteDesc = new RTCSessionDescription(remoteDesc); } var self = this; this.me.setRemoteDescription(remoteDesc); this.me.createAnswer(function(desc) { self.logger('answer created'); self.me.setLocalDescription(desc); callback(desc); }); }; PeerSocket.prototype.send = function send(message) { this.logger('sending message: ' + message); this.meData.send(message); }; PeerSocket.prototype.setRemote = function configureWithRemote (desc) { if(!(desc instanceof RTCSessionDescription)) { desc = new RTCSessionDescription(desc); } this.logger('setting remote'); this.me.setRemoteDescription(desc); }; PeerSocket.prototype.addCandidate = function addIceCandidate (candidate) { this.iceLogger('adding ice candidate: ' + candidate.sdpMid); if(this.me.iceConnectionState != 'disconnected') { this.me.addIceCandidate(new RTCIceCandidate(candidate)); } }; PeerSocket.prototype.close = function close() { this.me.close(); this.me.onicecandidate = null; this.me.onicechange = null; this.meData.onmessage = null; 
this.meData.onopen = null; this.meData.onclose = null; this.logger('closed'); this.emit('disconnected'); }; PeerSocket.prototype._onIceCandidate = function _onIceCandidate (e) { if(e.candidate && e.candidate.sdpMid == 'data') { this.iceLogger('receiving ice candidate: ' + e.candidate.candidate); this.emit('candidate', e.candidate); } } PeerSocket.prototype._notifyMessage = function _notifyMessage(message) { this.logger('receiving message: ' + message.data); this.emit('message', message.data); } PeerSocket.prototype._notifyOpen = function _notifyOpen() { this.logger('state changed: connected'); this.connected = true; this.emit('connected'); } PeerSocket.prototype._onIceChange = function _onIceChange (state) { this.logger('changing ice status: ' + this.me.iceConnectionState); if(this.me.iceConnectionState == 'disconnected') { this.close(); } } function PeerDirectory (server, connectionManager, peerId, options) { options = options || {}; var resource; var maxPeers; var logger = debug('directory'); var NUM_REQUESTING_PEERS_COUNT = 2; var closed = false; this.connect = function connect(_resource, _maxPeers) { resource = _resource; maxPeers = _maxPeers; if(server.socket.connected) { //initialize straight if already connected
initialize(); } server.on('connect', initialize); connectionManager.on('peer', onNewPeer); }; this.close = function close() { closed = true; server.removeListener('connect', initialize); connectionManager.removeListener('peer', onNewPeer); }; this.reconnect = function reconnect(callback) { initResource(callback); }; function initResource(callback) { var connectedPeers = connectionManager.getConnectedPeerNames(); var loginToken = getCookie('peerman-login-token'); server.emit('init-resource', peerId, resource, { totalInterested: maxPeers, connectedPeers: connectedPeers, timestamp: Date.now() }); server.once('init-success-' + resource, callback); } function initialize() { /* initResource already waits for 'init-success', so start peer discovery directly in its callback */ initResource(function() {
findPeersFromDirectory.triggerIn(0, [NUM_REQUESTING_PEERS_COUNT]); }); } var findPeersFromDirectory = new Runner(function findPeersFromDirectory(peerCount) { logger('finding peers for resource: ' + resource + ' count: ' + peerCount); server.emit('request-peers', resource, peerCount); server.once('peers-found-' + resource, connectWithFoundPeers); }, this); function connectWithFoundPeers(peers) { logger('receiving peers to connect: ' + JSON.stringify(peers)); if(peers.length == 0) { reconsiderRequestingMorePeers(); } else { var numConnecting = 0; peers.forEach(function(peer) { if(!connectionManager.peers[peer]) { connectionManager.connectNew(peer, {timeout: options.offerTimeout}); numConnecting++; } }); if(numConnecting != NUM_REQUESTING_PEERS_COUNT) { reconsiderRequestingMorePeers(); } } } function onNewPeer(peerSocket) { logger('connected with new peer: ' + peerSocket.peerId + ' type: ' + peerSocket.type); server.emit('add-peer', resource, peerSocket.peerId); peerSocket.on('disconnected', onPeerDisconnected); reconsiderRequestingMorePeers(); } function onPeerDisconnected() { this.removeListener('disconnected', onPeerDisconnected); server.emit('remove-peer', resource, this.peerId); reconsiderRequestingMorePeers(); } function reconsiderRequestingMorePeers() { if(!closed) { var acceptedPeerCount = connectionManager.canHaveMorePeers(NUM_REQUESTING_PEERS_COUNT); if(acceptedPeerCount > 0) { findPeersFromDirectory.triggerIn(2000, [acceptedPeerCount]); } } } } function ResourceManager(server) { this.createResource = function createResource(callback) { server.emit('create-resource'); server.once('create-resource', errorCallback(callback)); }; this.removeResource = function removeResource(id, callback) { server.emit('remove-resource', id); server.once('remove-resource-' + id, errorCallback(callback)); }; this.getResource = function getResource(callback) { server.emit('get-resource'); server.once('get-resource', errorCallback(callback)); }; this.isResourceOwner = 
function isResourceOwner(id, callback) { server.emit('is-resource-owner', id); server.once('is-resource-owner-' + id, callback); }; this.loadMetadata = function loadMetadata(resource) { return new ResourceMetadata(resource); }; function ResourceMetadata (resource) { this.set = function set(kvPairs, callback) { if(typeof(kvPairs) == 'object') { server.emit('set-metadata', resource, kvPairs); server.once('set-metadata-' + resource, errorCallback(callback)); } else { throw new Error('First argument needs to be an object!'); } }; this.get = function get(keyList, callback) { if(typeof(keyList) == 'string' || typeof(keyList) == 'number') { keyList = [keyList]; } else if(!(keyList instanceof Array)) { throw new Error('First argument needs to be a key or an array of keys'); } server.emit('get-metadata', resource, keyList); server.once('get-metadata-' + resource, errorCallback(callback)); }; this.getAll = function getAll(callback) { server.emit('get-all-metadata', resource); server.once('get-all-metadata-' + resource, errorCallback(callback)); }; this.remove = function remove(keyList, callback) { if(typeof(keyList) == 'string' || typeof(keyList) == 'number') { keyList = [keyList]; } else if(!(keyList instanceof Array)) { throw new Error('First argument needs to be a key or an array of keys'); } server.emit('remove-metadata', resource, keyList); server.once('remove-metadata-' + resource, errorCallback(callback)); }; } function errorCallback(callback) { return function() { if(arguments[0]) { arguments[0] = new Error(arguments[0]); } callback.apply(null, arguments); } } } function ConnectionManager(server, resourceName, peerId, maxPeers, options) { options = options || {}; var self = this; this.peers = {}; //connected peers
var connectedPeerCount = 0; var timeoutHandlers = {}; var logger = debug('connection-manager'); this.connectNew = function connectNew(otherPeerId, connectOptions) { connectOptions = connectOptions || {}; logger('start connecting with peer: ' + otherPeerId); var connection = new 
PeerSocket(resourceName, otherPeerId, 'offered'); connection.on('disconnected', onDisconnected); connection.on('candidate', sendCandidate); connection.on('connected', onConnected); this.peers[otherPeerId] = connection; connection.offer(function(desc) { server.emit('offer', resourceName, otherPeerId, desc); }); if(connectOptions.timeout) { enableTimeout(otherPeerId, connectOptions.timeout); } }; this.getConnectedPeerNames = function getConnectedPeerNames() { var rtn = []; for(var peerName in this.peers) { if(this.peers[peerName].connected) { rtn.push(peerName); } } return rtn; }; this.getConnectedPeerCount = function getConnectedPeerCount() { return connectedPeerCount; }; this.canHaveMorePeers = function canHaveMorePeers(count) { var cntMoreNeeded = maxPeers - connectedPeerCount; if(cntMoreNeeded > 0) { return (cntMoreNeeded > count)? count: cntMoreNeeded; } else { return 0; } }; this.close = function close() { //server based listeners
server.removeListener('answer-' + resourceName, onAnswer); server.removeListener('offer-' + resourceName, onOffer); server.removeListener('ice-candidate-' + resourceName, onIceCandidate); server.removeListener('error-' + resourceName, onServerError); //close connections
for(var peerId in this.peers) { this.peers[peerId].close(); } }; this.reconnect = function reconnect(callback) { callback(); }; server.on('answer-' + resourceName, onAnswer); server.on('offer-' + resourceName, onOffer); server.on('ice-candidate-' + resourceName, onIceCandidate); server.on('error-' + resourceName, onServerError); function onAnswer(from, status, answerDesc) { logger('answer from: ' + from + ' with status: ' + status); var connection = self.peers[from]; if(connection) { if(status == 'ACCEPTED') { connection.setRemote(answerDesc); } else { connection.close(); } } else { logger('no connection to answer: ' + from); } } function onOffer(from, desc) { logger('offer from: ' + from); if(self.peers[from]) { logger('offer rejected because of existing connection from: ' + 
from); server.emit('answer', resourceName, from, 'REJECTED'); } else if(self.canHaveMorePeers(1)) { var connection = new PeerSocket(resourceName, from, 'answered'); if(connection) { connection.on('disconnected', onDisconnected); connection.on('candidate', sendCandidate); connection.on('connected', onConnected); connection.answer(desc, function(answerDesc) { server.emit('answer', resourceName, from, 'ACCEPTED', answerDesc); }); self.peers[from] = connection; if(options.answerTimeout) { enableTimeout(from, options.answerTimeout); } } else { logger('no connection to offer: ' + from); } } else { logger('rejecting peer due to maxPeers limit: ' + from); } } function onIceCandidate(from, candidate) { var connection = self.peers[from]; if(connection) { connection.addCandidate(candidate); } else { logger('no connection to offer ice-candidate: ' + from); } } function onServerError(event, error) { logger('server error on: ' + event + ' error: ' + JSON.stringify(error)); if(error && error.code == 'NO_CLIENT') { var connection = self.peers[error.to]; if(connection) { connection.close(); } } } function sendCandidate(candidate) { server.emit('ice-candidate', resourceName, this.peerId, candidate); } function onConnected() { if(connectedPeerCount < maxPeers) { logger('connecting: ' + this.peerId); connectedPeerCount++; self.emit('peer', this); this.removeListener('connected', onConnected); cancleTimeout(this.peerId); } else { logger('connection dropped due to maxPeers limit: ' + this.peerId); this.close(); } } function onDisconnected() { logger('disconnecting: ' + this.peerId); connectedPeerCount--; delete self.peers[this.peerId]; this.removeListener('disconnected', onDisconnected); this.removeListener('candidate', sendCandidate); this.removeListener('connected', onConnected); cancleTimeout(this.peerId); } function enableTimeout(peerId, timeoutMillis) { timeoutHandlers[peerId] = setTimeout(function() { var connection = self.peers[peerId]; if(connection && !connection.connected) { 
logger('timeouting: ' + peerId + ' after: ' + timeoutMillis); connection.close(); timeoutHandlers[peerId] = null; } }, timeoutMillis); } function cancleTimeout(peerId) { var handler = timeoutHandlers[peerId]; if(handler) { logger('clearing timeout handler for: ' + peerId); clearTimeout(handler); timeoutHandlers[peerId] = null; } } } extend(ConnectionManager, EventEmitter); function Peerman() { var peerId = this.peerId = getPeerId(); var socket; var options; var resourceManager; var resources = {}; var authenticated = null; var sigServer; var appServer; this.waitForLoginCompletion = false; this.waitForLogoutCompletion = false; this.connect = function(_sigServer, _appServer) { appServer = _appServer; sigServer = _sigServer; socket = io.connect(sigServer); resourceManager = new ResourceManager(socket); //initialize
if(socket.socket.connected) { initialize(); } socket.on('connect', initialize); //add some utility methods
this.createResource = resourceManager.createResource.bind(resourceManager); this.removeResource = resourceManager.removeResource.bind(resourceManager); this.getResource = resourceManager.getResource.bind(resourceManager); this.isResourceOwner = resourceManager.isResourceOwner.bind(resourceManager); this.connect = function() {}; }; this.disconnect = function disconnect() { for(var key in resources) { resources[key].leave(); } socket.removeListener('connect', initialize); socket.disconnect(); }; this.join = function join(id, options) { options = options || {}; options.maxPeers = options.maxPeers || 5; options.answerTimeout = options.answerTimeout || 60000; options.offerTimeout = options.offerTimeout || 60000; var resourceObj = new PeermanResource(peerId, socket); resourceObj.connect(id, options); resourceObj.metadata = resourceManager.loadMetadata(id); resources[id] = resourceObj; return resourceObj; }; this.reconnect = function reconnect(callback) { authenticated = null; initialize(callback); //TODO: reconnect individual resources as well;
}; 
this.isAuthenticated = function isAuthenticated(callback) { if(authenticated == null) { //wait for the authenticated notice
socket.once('authenticated', callback); } else { callback(authenticated); } }; this.login = function login(callback) { this.once('complete-login', callback); this.waitForLoginCompletion = true; var url = appServer + '/login?redirect=' + location.href; openWindow(url); }; this._completeLogin = function _completeLogin() { var self = this; this.reconnect(function(authenticated) { self.waitForLoginCompletion = false; self.emit('complete-login', authenticated); }); }; this.logout = function logout(callback) { this.once('complete-logout', callback); this.waitForLogoutCompletion = true; var url = appServer + '/logout?redirect=' + location.href; openWindow(url); }; this._completeLogout = function _completeLogout() { var self = this; this.reconnect(function(authenticated) { self.waitForLogoutCompletion = false; self.emit('complete-logout', authenticated); }); }; function openWindow(url) { var leftPosition = (screen.width - 600) / 2; window.open(url, "Peerman Login", "width=600px,height=500px,top=100px,left=" + leftPosition + "px"); } function initialize(callback) { var loginToken = getCookie('peerman-login-token'); socket.emit('init', peerId, loginToken); socket.once('authenticated', function(_authenticated) { authenticated = _authenticated; if(callback) callback(authenticated); }); } } extend(Peerman, EventEmitter); function PeermanResource (peerId, server) { var self = this; var connectionManager; var peerDirectory; this.id = null; this.connect = function connect(resource, options) { this.id = resource; var connectionOptions = { answerTimeout: options.answerTimeout }; connectionManager = new ConnectionManager(server, resource, peerId, options.maxPeers, connectionOptions); connectionManager.on('peer', onNewPeer); var directoryOptions = { offerTimeout: options.offerTimeout }; peerDirectory = new PeerDirectory(server, connectionManager, peerId, 
directoryOptions); peerDirectory.connect(resource, options.maxPeers); this.connect = function() {}; }; this.reconnect = function reconnect(callback) { var $ = new Qbox(2); $.ready(callback); connectionManager.reconnect(function() { $.tick(); }); peerDirectory.reconnect(function() { $.tick(); }); }; this.leave = function leave() { connectionManager.removeListener('peer', onNewPeer); peerDirectory.close(); connectionManager.close(); }; function onNewPeer(peer) { self.emit('peer', peer); } } extend(PeermanResource, EventEmitter); // check whether this is the login popup window
if(window.opener && window.opener.peerman) { if(window.opener.peerman.waitForLoginCompletion) { window.opener.peerman._completeLogin(); window.close(); } else if(window.opener.peerman.waitForLogoutCompletion) { window.opener.peerman._completeLogout(); window.close(); } } var query = getScriptQuery(); var sigServer = query.sigServer || 'http://localhost:5005'; //change this to production value later
var appServer = query.appServer || 'http://localhost:5006'; //change this to production value later
//added debug options
if(query.debug) { var parts = decodeURI(query.debug).split(','); parts.forEach(function(part) { debug.names.push(new RegExp(part.trim())); }); } var peerman = window.peerman = new Peerman(); peerman.connect(sigServer, appServer); //exporting Classes
peerman.EventEmitter = EventEmitter; })();
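The bundle above exposes its event base class as `peerman.EventEmitter`, whose listener semantics (chainable registration, duplicate suppression, and self-removal when a listener returns `true`) are easiest to see in isolation. The sketch below is a minimal, Node-runnable stand-in mirroring those semantics; `MiniEmitter` and its reduced method set are illustrative assumptions for experimentation, not part of the peerman bundle itself.

```javascript
// Minimal sketch of the EventEmitter semantics bundled above.
function MiniEmitter() { this._events = {}; }

// Lazily create the listener array for an event, as the bundled class does.
MiniEmitter.prototype.getListeners = function(evt) {
  return this._events[evt] || (this._events[evt] = []);
};

// Register a listener once only; duplicates are silently ignored.
MiniEmitter.prototype.on = function(evt, listener) {
  var listeners = this.getListeners(evt);
  if (listeners.indexOf(listener) === -1) { listeners.push(listener); }
  return this; // chainable, like the bundled emitter
};

MiniEmitter.prototype.off = function(evt, listener) {
  var listeners = this.getListeners(evt);
  var index = listeners.indexOf(listener);
  if (index !== -1) { listeners.splice(index, 1); }
  return this;
};

// Emit in reverse registration order; a listener returning true removes
// itself, mirroring the bundled emitEvent behaviour.
MiniEmitter.prototype.emit = function(evt) {
  var args = Array.prototype.slice.call(arguments, 1);
  var listeners = this.getListeners(evt).slice();
  var i = listeners.length;
  while (i--) {
    var response = listeners[i].apply(this, args);
    if (response === true) { this.off(evt, listeners[i]); }
  }
  return this;
};

// Usage: a listener that returns true fires exactly once.
var emitter = new MiniEmitter();
var seen = [];
emitter.on('data', function(value) { seen.push(value); return true; });
emitter.emit('data', 1);
emitter.emit('data', 2); // listener already removed itself; nothing recorded
console.log(seen.join(',')); // prints "1"
```

The bundled `proto.once` achieves the same effect with a wrapper that calls `removeListener` after the first invocation; returning `true` is the lower-level hook honoured by `emitEvent`/`trigger`.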
null
null
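The parity rules in the tutorial record above (the addition and multiplication tables for odd and even numbers) can be checked mechanically. A minimal Python sketch, separate from the tutorial itself:

```python
def parity(n):
    """Return 'even' or 'odd' for an integer; Python's % handles negatives too."""
    return "even" if n % 2 == 0 else "odd"

# Verify the addition and multiplication tables from the tutorial
# over a small range of integers, including negatives.
for a in range(-5, 6):
    for b in range(-5, 6):
        # a sum is even exactly when both operands share a parity
        assert parity(a + b) == ("even" if parity(a) == parity(b) else "odd")
        # a product is odd only when both operands are odd
        assert parity(a * b) == ("odd" if parity(a) == parity(b) == "odd" else "even")

print(parity(23564), parity(-273595))  # → even odd
```

This matches the tutorial's worked examples: 23,564 ends in 4 (even) and -273,595 ends in 5 (odd).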
Q: Upload multiple images with PHP Curl I'm trying to upload more than one image with PHP Curl. The API I'm using gives me the following example: curl -v -s -u username:password \ -H "Content-Type: multipart/form-data" \ -H "Accept: application/vnd.com.example.api+json" \ -F "image=@img1.jpeg;type=image/jpeg" \ -F "image=@img2.jpeg;type=image/jpeg" \ -XPUT 'https://example.com/api/sellers/12/ads/217221/images' So in my PHP script I tried this: $files = array(); foreach($photos as $photo) { $cfile = new CURLFile('../' . $photo, 'image/jpeg', 'test_name'); $files[] = $cfile; } $ch = curl_init(); curl_setopt($ch, CURLOPT_URL, 'https://services.example.com/seller-api/sellers/' . $seller . '/ads/' . $voertuig_id . '/images'); curl_setopt($ch, CURLOPT_CUSTOMREQUEST, "PUT"); curl_setopt($ch, CURLOPT_POSTFIELDS, $files); curl_setopt($ch, CURLOPT_PROXY,'api.test.sandbox.example.com:8080'); curl_setopt($ch, CURLOPT_USERPWD, 'USER:PASS'); curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 1); curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1); curl_setopt($ch, CURLOPT_HEADER, 1); curl_setopt($ch, CURLOPT_HTTPHEADER, array( 'Host: services.example.com', 'Content-type: multipart/form-data; boundary=vjrLefDejJaWiU0JzZfsadfasd1rMcE2HQ-n7XsSx', 'Accept: application/vnd.com.example.api+json' )); $output = curl_exec($ch); curl_close($ch); First I got this response from the API: { "errors":{ "error":{ "@key":"unsupported-form-element" } } } but now there is no response at all. How do I use curl to upload multiple files? If $files is an empty JSON for example, it gives no error at all and returns the images (empty of course).
This is the documentation of the API I'm using: https://services.mobile.de/manual/new-seller-api.html#_upload_images EDIT: I tried to build the request body and send it, but it doesn't do anything: $requestBody = ''; $requestBody .= '--vjrLeiXjJaWiU0JzZkUPO1rMcE2HQ-n7XsSx\r\n'; $requestBody .= 'Content-Disposition: form-data; name="image"; filename="ferrari.JPG"\r\n'; $requestBody .= 'Content-Type: image/jpeg\r\n'; $requestBody .= 'Content-Transfer-Encoding: binary\r\n'; curl_setopt($ch, CURLOPT_POSTFIELDS, $requestBody); A: Try using an associative array with distinct keys, as in the CURLFile documentation: $photos = [ 'img1' => 'img1.jpg', 'img2' => 'img2.jpg' ]; $files = []; foreach($photos as $key => $photo) { $cfile = new CURLFile('../' . $photo, 'image/jpeg', $key); $files[$key] = $cfile; } $ch = curl_init(); curl_setopt($ch, CURLOPT_URL, 'https://services.example.com/seller-api/sellers/' . $seller . '/ads/' . $voertuig_id . '/images'); curl_setopt($ch, CURLOPT_CUSTOMREQUEST, "PUT"); curl_setopt($ch, CURLOPT_POSTFIELDS, $files); curl_setopt($ch, CURLOPT_PROXY,'api.test.sandbox.example.com:8080'); curl_setopt($ch, CURLOPT_USERPWD, 'USER:PASS'); curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 1); curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1); curl_setopt($ch, CURLOPT_HEADER, 1); curl_setopt($ch, CURLOPT_HTTPHEADER, array( 'Host: services.example.com', 'Content-type: multipart/form-data; boundary=vjrLefDejJaWiU0JzZfsadfasd1rMcE2HQ-n7XsSx', 'Accept: application/vnd.com.example.api+json' )); $output = curl_exec($ch); curl_close($ch);
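For reference, the wire format that curl's repeated -F "image=@..." flags produce can be reproduced by hand. A Python sketch of a multipart/form-data body with repeated field names (illustrative only; the filenames, payloads, and boundary here are made up and are not part of the mobile.de API):

```python
import uuid

def build_multipart(parts):
    """Build a multipart/form-data body from (name, filename, content_type, data)
    tuples. Repeated field names are allowed, mirroring curl's
    -F "image=@img1.jpeg" -F "image=@img2.jpeg" behaviour."""
    boundary = uuid.uuid4().hex
    chunks = []
    for name, filename, ctype, data in parts:
        chunks.append(b"--" + boundary.encode())
        chunks.append(('Content-Disposition: form-data; name="%s"; filename="%s"'
                       % (name, filename)).encode())
        chunks.append(("Content-Type: %s" % ctype).encode())
        chunks.append(b"")  # blank line separates part headers from payload
        chunks.append(data)
    chunks.append(b"--" + boundary.encode() + b"--")
    chunks.append(b"")
    return "multipart/form-data; boundary=" + boundary, b"\r\n".join(chunks)

content_type, body = build_multipart([
    ("image", "img1.jpeg", "image/jpeg", b"\xff\xd8fake-jpeg-1"),
    ("image", "img2.jpeg", "image/jpeg", b"\xff\xd8fake-jpeg-2"),
])
assert body.count(b'name="image"') == 2  # both parts share the field name
```

Note that the two parts share the name "image", which a PHP associative array cannot express directly; that is why distinct keys (or a hand-built body like this) are needed.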
{ "redpajama_set_name": "RedPajamaStackExchange" }
3,879
function [cum_ret, cumprod_ret, daily_ret, daily_portfolio] ... = olmar1(fid, data, varargins, opts) % This program starts the OLMAR-1 algorithm % % function [cum_ret, cumprod_ret, daily_ret, daily_portfolio] .... % = olmar1(fid, data, varargins, opts) % % cum_ret: a number representing the final cumulative wealth. % cumprod_ret: cumulative return until each trading period % daily_ret: individual returns for each trading period % daily_portfolio: individual portfolio for each trading period % % data: market sequence vectors % fid: handle for writing the log file % varargins: variable parameters % opts: option parameter for behavioral control % % Example: [cum_ret, cumprod_ret, daily_ret, daily_portfolio] ... % = olmar1(fid, data, {10, 5, 0}, opts); %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % This file is part of OLPS: http://OLPS.stevenhoi.org/ % Original authors: Bin LI, Steven C.H. Hoi % Contributors: % Change log: % %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % Extract the parameters epsilon = varargins{1}; % reversion parameter \epsilon W = varargins{2}; % Window size tc = varargins{3}; % transaction cost fee rate % Run the OLMAR-1 algorithm [cum_ret, cumprod_ret, daily_ret, daily_portfolio]... = olmar1_run(fid, data, epsilon, W, tc, opts); end
{ "redpajama_set_name": "RedPajamaGithub" }
4,294
from __future__ import unicode_literals from django.db import migrations, models import django.db.models.deletion class Migration(migrations.Migration): initial = True dependencies = [ ] operations = [ migrations.CreateModel( name='BlogPost', fields=[ ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')), ('title', models.CharField(max_length=255, verbose_name='title')), ('lead', models.TextField(verbose_name='lead')), ('body', models.TextField(verbose_name='body')), ], options={ 'verbose_name': 'blog post', 'verbose_name_plural': 'blog posts', }, ), migrations.CreateModel( name='Category', fields=[ ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')), ('title', models.CharField(max_length=255, verbose_name='title')), ], options={ 'verbose_name': 'category', 'verbose_name_plural': 'categories', }, ), migrations.CreateModel( name='Tag', fields=[ ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')), ('title', models.CharField(max_length=255, verbose_name='title')), ], options={ 'verbose_name': 'tag', 'verbose_name_plural': 'tags', }, ), migrations.CreateModel( name='TextBlock', fields=[ ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')), ('title', models.CharField(max_length=255, verbose_name='title')), ('body', models.TextField(verbose_name='body')), ('blog_post', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='blocks', to='blog.BlogPost')), ], options={ 'verbose_name': 'text block', 'verbose_name_plural': 'text blocks', }, ), migrations.AddField( model_name='blogpost', name='category', field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, related_name='blog_posts', to='blog.Category'), ), migrations.AddField( model_name='blogpost', name='tags', field=models.ManyToManyField(related_name='blog_post', to='blog.Tag'), ), ]
{ "redpajama_set_name": "RedPajamaGithub" }
5,727
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <meta http-equiv="Content-Type" content="text/xhtml;charset=UTF-8"/> <meta http-equiv="X-UA-Compatible" content="IE=9"/> <meta name="generator" content="Doxygen 1.8.7"/> <title>OpenWeatherMap: OWMCityDailyWeather Class Reference</title> <link href="tabs.css" rel="stylesheet" type="text/css"/> <script type="text/javascript" src="jquery.js"></script> <script type="text/javascript" src="dynsections.js"></script> <link href="search/search.css" rel="stylesheet" type="text/css"/> <script type="text/javascript" src="search/search.js"></script> <script type="text/javascript"> $(document).ready(function() { searchBox.OnSelectItem(0); }); </script> <link href="doxygen.css" rel="stylesheet" type="text/css" /> </head> <body> <div id="top"><!-- do not remove this div, it is closed by doxygen! --> <div id="titlearea"> <table cellspacing="0" cellpadding="0"> <tbody> <tr style="height: 56px;"> <td style="padding-left: 0.5em;"> <div id="projectname">OpenWeatherMap </div> </td> </tr> </tbody> </table> </div> <!-- end header part --> <!-- Generated by Doxygen 1.8.7 --> <script type="text/javascript"> var searchBox = new SearchBox("searchBox", "search",false,'Search'); </script> <div id="navrow1" class="tabs"> <ul class="tablist"> <li><a href="index.html"><span>Main&#160;Page</span></a></li> <li class="current"><a href="annotated.html"><span>Classes</span></a></li> <li><a href="files.html"><span>Files</span></a></li> <li> <div id="MSearchBox" class="MSearchBoxInactive"> <span class="left"> <img id="MSearchSelect" src="search/mag_sel.png" onmouseover="return searchBox.OnSearchSelectShow()" onmouseout="return searchBox.OnSearchSelectHide()" alt=""/> <input type="text" id="MSearchField" value="Search" accesskey="S" onfocus="searchBox.OnSearchFieldFocus(true)" onblur="searchBox.OnSearchFieldFocus(false)" 
onkeyup="searchBox.OnSearchFieldChange(event)"/> </span><span class="right"> <a id="MSearchClose" href="javascript:searchBox.CloseResultsWindow()"><img id="MSearchCloseImg" border="0" src="search/close.png" alt=""/></a> </span> </div> </li> </ul> </div> <div id="navrow2" class="tabs2"> <ul class="tablist"> <li><a href="annotated.html"><span>Class&#160;List</span></a></li> <li><a href="classes.html"><span>Class&#160;Index</span></a></li> <li><a href="hierarchy.html"><span>Class&#160;Hierarchy</span></a></li> <li><a href="functions.html"><span>Class&#160;Members</span></a></li> </ul> </div> <!-- window showing the filter options --> <div id="MSearchSelectWindow" onmouseover="return searchBox.OnSearchSelectShow()" onmouseout="return searchBox.OnSearchSelectHide()" onkeydown="return searchBox.OnSearchSelectKey(event)"> <a class="SelectItem" href="javascript:void(0)" onclick="searchBox.OnSelectItem(0)"><span class="SelectionMark">&#160;</span>All</a><a class="SelectItem" href="javascript:void(0)" onclick="searchBox.OnSelectItem(1)"><span class="SelectionMark">&#160;</span>Classes</a><a class="SelectItem" href="javascript:void(0)" onclick="searchBox.OnSelectItem(2)"><span class="SelectionMark">&#160;</span>Functions</a><a class="SelectItem" href="javascript:void(0)" onclick="searchBox.OnSelectItem(3)"><span class="SelectionMark">&#160;</span>Properties</a></div> <!-- iframe showing the search results (closed by default) --> <div id="MSearchResultsWindow"> <iframe src="javascript:void(0)" frameborder="0" name="MSearchResults" id="MSearchResults"> </iframe> </div> </div><!-- top --> <div class="header"> <div class="summary"> <a href="#properties">Properties</a> &#124; <a href="class_o_w_m_city_daily_weather-members.html">List of all members</a> </div> <div class="headertitle"> <div class="title">OWMCityDailyWeather Class Reference</div> </div> </div><!--header--> <div class="contents"> <div class="dynheader"> Inheritance diagram for OWMCityDailyWeather:</div> <div 
class="dyncontent"> <div class="center"> <img src="interface_o_w_m_city_daily_weather.png" usemap="#OWMCityDailyWeather_map" alt=""/> <map id="OWMCityDailyWeather_map" name="OWMCityDailyWeather_map"> <area href="interface_o_w_m_basic_model.html" alt="OWMBasicModel" shape="rect" coords="0,56,142,80"/> </map> </div></div> <table class="memberdecls"> <tr class="heading"><td colspan="2"><h2 class="groupheader"><a name="properties"></a> Properties</h2></td></tr> <tr class="memitem:a80a47ffe07c91a617fc260c2699fb6bd"><td class="memItemLeft" align="right" valign="top"><a class="anchor" id="a80a47ffe07c91a617fc260c2699fb6bd"></a> long&#160;</td><td class="memItemRight" valign="bottom"><b>clouds</b></td></tr> <tr class="separator:a80a47ffe07c91a617fc260c2699fb6bd"><td class="memSeparator" colspan="2">&#160;</td></tr> <tr class="memitem:a4174cb8714cb80779746f9a7598aecb5"><td class="memItemLeft" align="right" valign="top"><a class="anchor" id="a4174cb8714cb80779746f9a7598aecb5"></a> long&#160;</td><td class="memItemRight" valign="bottom"><b>humidity</b></td></tr> <tr class="separator:a4174cb8714cb80779746f9a7598aecb5"><td class="memSeparator" colspan="2">&#160;</td></tr> <tr class="memitem:aae3e45ed53bb8ee07a6062eaeea9846c"><td class="memItemLeft" align="right" valign="top"><a class="anchor" id="aae3e45ed53bb8ee07a6062eaeea9846c"></a> long&#160;</td><td class="memItemRight" valign="bottom"><b>dt</b></td></tr> <tr class="separator:aae3e45ed53bb8ee07a6062eaeea9846c"><td class="memSeparator" colspan="2">&#160;</td></tr> <tr class="memitem:ad15f1ad46d803b98b90eccedf8f1dab6"><td class="memItemLeft" align="right" valign="top"><a class="anchor" id="ad15f1ad46d803b98b90eccedf8f1dab6"></a> float&#160;</td><td class="memItemRight" valign="bottom"><b>speed</b></td></tr> <tr class="separator:ad15f1ad46d803b98b90eccedf8f1dab6"><td class="memSeparator" colspan="2">&#160;</td></tr> <tr class="memitem:a1946d0e42d9a090c78c1152f932c1154"><td class="memItemLeft" align="right" valign="top"><a 
class="anchor" id="a1946d0e42d9a090c78c1152f932c1154"></a> float&#160;</td><td class="memItemRight" valign="bottom"><b>rain</b></td></tr> <tr class="separator:a1946d0e42d9a090c78c1152f932c1154"><td class="memSeparator" colspan="2">&#160;</td></tr> <tr class="memitem:a7b9565702ab64974dc5126c54cd672ca"><td class="memItemLeft" align="right" valign="top"><a class="anchor" id="a7b9565702ab64974dc5126c54cd672ca"></a> float&#160;</td><td class="memItemRight" valign="bottom"><b>snow</b></td></tr> <tr class="separator:a7b9565702ab64974dc5126c54cd672ca"><td class="memSeparator" colspan="2">&#160;</td></tr> <tr class="memitem:a629402f95b65d28b03a076f532c475a1"><td class="memItemLeft" align="right" valign="top"><a class="anchor" id="a629402f95b65d28b03a076f532c475a1"></a> float&#160;</td><td class="memItemRight" valign="bottom"><b>wind</b></td></tr> <tr class="separator:a629402f95b65d28b03a076f532c475a1"><td class="memSeparator" colspan="2">&#160;</td></tr> <tr class="memitem:a1309af96762feb1e5f59913db81450ed"><td class="memItemLeft" align="right" valign="top"><a class="anchor" id="a1309af96762feb1e5f59913db81450ed"></a> float&#160;</td><td class="memItemRight" valign="bottom"><b>temp_eve</b></td></tr> <tr class="separator:a1309af96762feb1e5f59913db81450ed"><td class="memSeparator" colspan="2">&#160;</td></tr> <tr class="memitem:a9a6b76acd0f31a0c2518645ffc2531f0"><td class="memItemLeft" align="right" valign="top"><a class="anchor" id="a9a6b76acd0f31a0c2518645ffc2531f0"></a> float&#160;</td><td class="memItemRight" valign="bottom"><b>temp_min</b></td></tr> <tr class="separator:a9a6b76acd0f31a0c2518645ffc2531f0"><td class="memSeparator" colspan="2">&#160;</td></tr> <tr class="memitem:a98aaa5093af535ef7d085a5c361a075b"><td class="memItemLeft" align="right" valign="top"><a class="anchor" id="a98aaa5093af535ef7d085a5c361a075b"></a> float&#160;</td><td class="memItemRight" valign="bottom"><b>temp_night</b></td></tr> <tr class="separator:a98aaa5093af535ef7d085a5c361a075b"><td 
class="memSeparator" colspan="2">&#160;</td></tr> <tr class="memitem:adafdab70ef1d635c8fb78c2dca6f6182"><td class="memItemLeft" align="right" valign="top"><a class="anchor" id="adafdab70ef1d635c8fb78c2dca6f6182"></a> float&#160;</td><td class="memItemRight" valign="bottom"><b>temp_day</b></td></tr> <tr class="separator:adafdab70ef1d635c8fb78c2dca6f6182"><td class="memSeparator" colspan="2">&#160;</td></tr> <tr class="memitem:ab5492bab7da5bf2f57d697635e73bee8"><td class="memItemLeft" align="right" valign="top"><a class="anchor" id="ab5492bab7da5bf2f57d697635e73bee8"></a> float&#160;</td><td class="memItemRight" valign="bottom"><b>temp_morn</b></td></tr> <tr class="separator:ab5492bab7da5bf2f57d697635e73bee8"><td class="memSeparator" colspan="2">&#160;</td></tr> <tr class="memitem:aaae7295c4f9a26898cf03d2909f83262"><td class="memItemLeft" align="right" valign="top"><a class="anchor" id="aaae7295c4f9a26898cf03d2909f83262"></a> float&#160;</td><td class="memItemRight" valign="bottom"><b>temp_max</b></td></tr> <tr class="separator:aaae7295c4f9a26898cf03d2909f83262"><td class="memSeparator" colspan="2">&#160;</td></tr> <tr class="memitem:a7447b5d4df4c704677e34de098c2304c"><td class="memItemLeft" align="right" valign="top"><a class="anchor" id="a7447b5d4df4c704677e34de098c2304c"></a> float&#160;</td><td class="memItemRight" valign="bottom"><b>pressure</b></td></tr> <tr class="separator:a7447b5d4df4c704677e34de098c2304c"><td class="memSeparator" colspan="2">&#160;</td></tr> <tr class="memitem:a327bd6b7c0b077827ceb0d5c19a928e6"><td class="memItemLeft" align="right" valign="top"><a class="anchor" id="a327bd6b7c0b077827ceb0d5c19a928e6"></a> long&#160;</td><td class="memItemRight" valign="bottom"><b>deg</b></td></tr> <tr class="separator:a327bd6b7c0b077827ceb0d5c19a928e6"><td class="memSeparator" colspan="2">&#160;</td></tr> <tr class="memitem:a37dcf2bc05cd6a79ef5faff72ac5ddc9"><td class="memItemLeft" align="right" valign="top"><a class="anchor" 
id="a37dcf2bc05cd6a79ef5faff72ac5ddc9"></a> <a class="el" href="interface_o_w_m_weather.html">OWMWeather</a> *&#160;</td><td class="memItemRight" valign="bottom"><b>weather</b></td></tr> <tr class="separator:a37dcf2bc05cd6a79ef5faff72ac5ddc9"><td class="memSeparator" colspan="2">&#160;</td></tr> </table><table class="memberdecls"> <tr class="heading"><td colspan="2"><h2 class="groupheader"><a name="inherited"></a> Additional Inherited Members</h2></td></tr> <tr class="inherit_header pub_methods_interface_o_w_m_basic_model"><td colspan="2" onclick="javascript:toggleInherit('pub_methods_interface_o_w_m_basic_model')"><img src="closed.png" alt="-"/>&#160;Instance Methods inherited from <a class="el" href="interface_o_w_m_basic_model.html">OWMBasicModel</a></td></tr> <tr class="memitem:a13c3946d9ac2fb7995da05880abf942c inherit pub_methods_interface_o_w_m_basic_model"><td class="memItemLeft" align="right" valign="top"><a class="anchor" id="a13c3946d9ac2fb7995da05880abf942c"></a> (id)&#160;</td><td class="memItemRight" valign="bottom">- <b>initWithDictionary:</b></td></tr> <tr class="separator:a13c3946d9ac2fb7995da05880abf942c inherit pub_methods_interface_o_w_m_basic_model"><td class="memSeparator" colspan="2">&#160;</td></tr> </table> <hr/>The documentation for this class was generated from the following file:<ul> <li>Parser/Models/<a class="el" href="_o_w_m_city_daily_weather_8h_source.html">OWMCityDailyWeather.h</a></li> </ul> </div><!-- contents --> <!-- start footer part --> <hr class="footer"/><address class="footer"><small> Generated on Sat Mar 7 2015 15:59:37 for OpenWeatherMap by &#160;<a href="http://www.doxygen.org/index.html"> <img class="footer" src="doxygen.png" alt="doxygen"/> </a> 1.8.7 </small></address> </body> </html>
{ "redpajama_set_name": "RedPajamaGithub" }
8,060
Q: Ubuntu 14.04 syslog showing @@@@ for long time period My Ubuntu Server wasn't accessible for a long time; only after restarting the server could I access it again. I have checked the syslog file, which shows something like @@@@@@@@@@@@@ from last night until the point I restarted the system. What is it? A: Typically a bunch of @@@@ in a log file are actually 0x00 bytes, and are due to some sort of crash or failure event where things have gone astray. Typically, the only recovery is a hard reset, as you found in your case. You should examine all the files in `/var/log` to try to gain insight into what happened.
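The @ characters are how some viewers render NUL (0x00) bytes left behind when buffered writes were cut short. Locating the damaged stretch can be automated; a small Python sketch (illustrative, not part of the original answer):

```python
def nul_runs(data, min_len=4):
    """Return (offset, length) pairs for runs of 0x00 bytes in *data*.
    Long runs in /var/log files typically mark where a crash or power loss
    truncated buffered writes (shown as @@@@ by some log viewers)."""
    runs = []
    start = None
    for i, byte in enumerate(data):
        if byte == 0:
            if start is None:
                start = i  # run begins
        elif start is not None:
            if i - start >= min_len:
                runs.append((start, i - start))
            start = None
    # handle a run that reaches the end of the data
    if start is not None and len(data) - start >= min_len:
        runs.append((start, len(data) - start))
    return runs
```

Running it over a suspect file (e.g. `nul_runs(open('/var/log/syslog', 'rb').read())`) gives the byte offsets where logging stopped, which helps bracket the time of the failure.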
{ "redpajama_set_name": "RedPajamaStackExchange" }
8,552
{"url":"http:\/\/star-www.rl.ac.uk\/star\/docs\/sc21.htx\/sc21ss77.html","text":"### MAPTOL\n\nSpecifies when to stop iterating\n\n#### Description:\n\nIf the normalised change (either the mean or maximum change - see parameter \"maptol_mean\" ) between the maps created on subsequent iterations falls below the value of maptol, then the map-maker performs one more iteration and then terminates. Only used if parameter \"numiter\" is negative. The normalised mean (or maximum) change between maps is defined as the mean (or maximum) of the absolute change in map pixel value, taken over all pixels within the region of the AST mask (if any, see parameter \"ast.zero_mask\" , etc), and normalised by the RMS of the square root of the pixel variances. Compared to parameter \"chitol\" , this is much more like a \" by eye\" test, that will stop the solution when the map stops changing. [0.05]\nType:\nreal\n\nMAKEMAP, CALCQU","date":"2018-09-23 09:08:17","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.7950186729431152, \"perplexity\": 1986.3750667289896}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 20, \"end_threshold\": 15, \"enable\": false}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, 
\"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2018-39\/segments\/1537267159165.63\/warc\/CC-MAIN-20180923075529-20180923095929-00495.warc.gz\"}"}
null
null
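The convergence test that the MAPTOL record above describes can be written out directly. A Python sketch of the normalised mean change (an illustration of the documented formula, not actual SMURF/MAKEMAP code):

```python
import math

def normalised_mean_change(old, new, variances, mask=None):
    """Normalised mean change between successive map iterations, following the
    MAPTOL description: mean absolute pixel change over the masked region,
    divided by the RMS of the square roots of the pixel variances."""
    idx = range(len(old)) if mask is None else [i for i, m in enumerate(mask) if m]
    mean_abs = sum(abs(new[i] - old[i]) for i in idx) / len(idx)
    # RMS of sqrt(variance) reduces to sqrt(mean variance)
    rms = math.sqrt(sum(variances[i] for i in idx) / len(idx))
    return mean_abs / rms

# Iteration stops (after one extra pass) once this falls below maptol:
change = normalised_mean_change([0.0, 0.0, 1.0], [0.01, 0.02, 1.03], [1.0, 1.0, 1.0])
```

With the default maptol of 0.05, the example maps above (change of 0.02) would trigger one final iteration and then terminate.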
One killed, another injured in early morning crash in Waukesha By Derica Williams One killed, another injured in Waukesha crash Sunday morning WAUKESHA (WITI) -- Officials say a driver was killed in a crash in Waukesha that occurred on Sunday, July 21st just after 4:40 a.m. It happened in the 1100 block of E. Moreland Blvd. Officials say the crash involved two vehicles, one of which collided with a tree. One driver was pronounced dead at the scene, and another was transported to the hospital for treatment of non life-threatening injuries. The crash remains under investigation, but officials say early indications show that speed was a factor. "I heard somebody come speeding by my house, then I heard the brake squeal," Emy Wood said. A black and silver car rested on top of a tow truck Sunday, with its driver's side obliterated. A second mangled vehicle sat wrapped around a tree. "All you heard was a loud bunch of screeches and it almost sounded like a train wreck," Jim Markovich said. "A lot of loud bangs. One of the cops out here was saying she ended up rolling a few times. The bus sign at the corner was taken out and the light pole was taken out. It was pretty bad," Markovich said. "It was tragic, probably at the time there were at least 30 emergency vehicles," Wood said. Markovich says he is still shaken up after seeing the aftermath of the powerful impact. "There was one black gentleman that was in one of the cars, and he actually climbed out of his sunroof and was laying on top of the car," Markovich said. Names of those involved in the crash have not yet been released. "People need to value life and be more careful and consider themselves and other people," Ben Hergert said. "I really feel for the families involved and my thoughts and prayers will be with them," Markovich said.
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
1,156
Q: Azure Notification Hub static instance or recreate What is the correct way to use the C# Azure Notification Hub client? We are using this in a very high usage server sending millions of pushes a day. I would like to know if we should recreate the NotificationHubClient every time or if we should keep a static/singleton instance of it and then reuse that each time? /// <summary> /// Get reference to the Azure hub /// </summary> /// <returns></returns> private static NotificationHubClient GetHub(NotificationHub nhub, bool enableTestSend = false) { return NotificationHubClient.CreateClientFromConnectionString(nhub.EndPoint, nhub.HubName, nhub.AllowDiagnosticSend && enableTestSend); } Currently, we recreate it every time we send a push notification. However, I know from personal experience we have had issues with the .NET HTTP client not releasing TCP sockets fast enough. I was worried that this library could start having similar issues. A: I would recommend using a singleton and reusing it rather than creating a new instance every time. There is already a reported issue on GitHub where your current strategy (creating a new instance every time) fails under very high loads: https://github.com/Azure/azure-notificationhubs-dotnet/issues/118 You can follow the Stack Overflow discussion below as well: Azure NotificationHubClient throws SocketException when used on Azure function intensively
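The reuse-over-recreate pattern the answer recommends is language-agnostic. A Python sketch with a hypothetical stand-in class (ExpensiveClient is made up for illustration; it is not the real NotificationHubClient):

```python
from functools import lru_cache

class ExpensiveClient:
    """Stand-in for a client whose construction is costly (e.g. opens sockets)."""
    instances = 0

    def __init__(self, endpoint, hub_name):
        ExpensiveClient.instances += 1  # count constructions for the demo
        self.endpoint, self.hub_name = endpoint, hub_name

@lru_cache(maxsize=None)
def get_hub_client(endpoint, hub_name):
    """Return one shared client per (endpoint, hub) pair instead of a new
    instance per send -- the singleton reuse pattern from the answer."""
    return ExpensiveClient(endpoint, hub_name)

client_a = get_hub_client("sb://example", "hub1")
client_b = get_hub_client("sb://example", "hub1")
assert client_a is client_b           # same instance reused
assert ExpensiveClient.instances == 1  # constructed only once
```

The cache key is the connection parameters, so different hubs still get their own client while repeated sends to the same hub never pay the construction (or socket) cost again.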
{ "redpajama_set_name": "RedPajamaStackExchange" }
7,652
{"url":"https:\/\/piparadox.live\/questions\/concyclicity-complex-numbers\/","text":"# Concyclicity complex numbers\n\n330 views\n0\nDetermine the set of points $M(m)$ such that $A(-\\frac{1}{2}-\\frac{1}{2}i)$, $M(m)$, $M_1(-1+im)$ and $M_2(-1-im)$ are concyclic, with $m$ a complex number st $m=e^{i\\theta}$ and\n$\\pi <\\theta <\\frac{3\\pi}{2}$","date":"2020-12-02 15:30:17","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.42954251170158386, \"perplexity\": 421.7083016496406}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 20, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2020-50\/segments\/1606141711306.69\/warc\/CC-MAIN-20201202144450-20201202174450-00561.warc.gz\"}"}
null
null
{"url":"https:\/\/www.physicsforums.com\/threads\/two-bulb-experiment-for-measuring-gas-diffusivity.844051\/","text":"# Two-bulb experiment for measuring gas diffusivity\n\nTags:\n1. Nov 19, 2015\n\n### MexChemE\n\nHi, PF! I recently solved a problem from BSL which asked to analyze the following system used for determining the diffusivity of a binary mixture of gases.\n\nThe left portion of the system, from the left bulb up to the stopcock at the middle of the tube, is filled with gas A. The right portion of the system is filled with gas B. At t = 0, the stopcock is opened a the gases start to diffuse. This is a quasi-steady state process. First we derive an expression for the molar flux of A through the tube using a steady state molar balance, and then we make an unsteady state molar balance for species A on the left bulb. The goal is to obtain an expression for $x_A^+$ as a function of time. The function is\n$$\\ln \\left(\\frac{\\tfrac{1}{2} - x_A^+}{\\tfrac{1}{2}} \\right) = - \\frac{SD_{AB} t}{LV}$$\nWhere S is the cross-section area of the tube. What got my attention is that the last part of the problem asked to suggest a method of plotting the experimental data in order to find the diffusivity. What I suggested was to define\n$$y = \\ln \\left(\\frac{\\tfrac{1}{2} - x_A^+}{\\tfrac{1}{2}} \\right)$$\n$$m = - \\frac{SD_{AB}}{LV}$$\nThen we can make a linear regression with the data from the experiment and find the slope, then obtain $D_{AB}$ from it.\n\nMy actual doubt lies within the implementation of the experiment. Specifically, calculating the mole fraction of gas A in the mixture. Is there a way to do that without using advanced equipment? I.e. can you calculate the mole fraction of A in the mixture using only basic lab equipment?\n\nThanks in advance for any input!\n\n2. Nov 20, 2015\n\n### Staff: Mentor\n\nThe question is, \"how do you measure the mole fraction of a species in a gas sample?\" It depends on the situation. 
If it's water vapor in air, for example, you just condense out the water vapor. In other situations, it might be much more difficult.\n\nChet\n\n3. Nov 21, 2015\n\n### MexChemE\n\nYes, that sounds better. I thought it would be a nice experiment to do at home or school, but measuring mole fractions would be a big problem without the use of sophisticated techniques or equipment.\n\nHere's another question, probably a big misconception though. If I fill one half of the system with oxygen, and the other half with nitrogen, will the steady state concentrations of each species be the same as in air after carrying out the experiment?\n\nEdit: Another related question, if not the same. Do the concentrations of O2 and N2 in air depend only on the diffusivity of the O2-N2 pair or do they depend on other factors also?\n\nLast edited: Nov 21, 2015\n4. Nov 22, 2015\n\n### Staff: Mentor\n\nWhat are your thoughts on this?\nWhat are your thoughts on this?\n\n5. Nov 22, 2015\n\n### MexChemE\n\nWell, for the first question, in order for the system to be at constant temperature and pressure, there must be the same amount of moles of each gas in each half of the system. So I guess no, the mole fraction of both oxygen and nitrogen will be 0.5 when the system reaches steady state. This is also what the obtained model tells us. So, in this case, diffusivity affects how fast diffusion will happen, but not the extent of mixing. 
Therefore, if we want the final mixture to be like air, we should fill one half with 0.21 moles of oxygen and the other half with 0.79 moles of nitrogen, or a proportional multiple of these quantities.\n\nFor the second question, I guess diffusivity again plays an insignificant role in the nature of the concentration distribution of air, so it depends in other factors like temperature, pressure and the inherent properties of each gas.\n\nBottom line: Diffusivity is an important parameter in transport processes\/phenomena, but not so useful or significant in equilibrium thermodynamics, right?\n\n6. Nov 24, 2015\n\n### Staff: Mentor\n\nMainly, the naturally occurring amounts of the gases.\nOf course not. It's only a transport parameter, and doesn't figure in equilibrium thermodynamics in any way.\n\nChet","date":"2017-10-22 05:37:19","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 1, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.5948524475097656, \"perplexity\": 479.17386886078583}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 20, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": 
\"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2017-43\/segments\/1508187825141.95\/warc\/CC-MAIN-20171022041437-20171022061437-00372.warc.gz\"}"}
<!DOCTYPE html> <!--[if lt IE 7]><html class="no-js lt-ie9 lt-ie8 lt-ie7"><![endif]--> <!--[if IE 7]><html class="no-js lt-ie9 lt-ie8" <![endif]--> <!--[if IE 8]><html class="no-js lt-ie9" <![endif]--> <!--[if gt IE 8]><!--> <html class="no-js"> <!--<![endif]--> <head> <meta charset="utf-8"> <meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1"> <meta name="viewport" content="width=device-width, initial-scale=1"> <meta name="description" content=""> <title>C++ overview and OpenSceneGraph introduction</title> <!-- Favicon --> <link rel="icon" type="image/x-icon" href="/robotics-course-materials/assets/img/favicon.ico" /> <!-- Come and get me RSS readers --> <link rel="alternate" type="application/rss+xml" title="Robotics" href="/robotics-course-materials/robotics-course-materials/feed.xml" /> <!-- Stylesheet --> <link rel="stylesheet" href="/robotics-course-materials/assets/css/style.css"> <!--[if IE 8]><link rel="stylesheet" href="/robotics-course-materials/assets/css/ie.css"><![endif]--> <link rel="canonical" href="/robotics-course-materials/robotics-course-materials/blog/C++/"> <!-- Modernizr --> <script src="/robotics-course-materials/assets/js/modernizr.custom.15390.js" type="text/javascript"></script> </head> <script type="text/x-mathjax-config"> MathJax.Hub.Config({ TeX: { equationNumbers: { autoNumber: "AMS" } } }); </script> <script type="text/javascript" src="http://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML" > </script> <body> <div class="header"> <div class="container"> <h1 class="logo"><a href="/robotics-course-materials/">Robotics</a></h1> <nav class="nav-collapse"> <ul class="noList"> </ul> </nav> </div> </div><!-- end .header --> <div class="content"> <div class="container"> <div class="post"> <h1 class="postTitle">C++ overview and OpenSceneGraph introduction</h1> <p class="meta">December 18, 2015 | <span class="time">16</span> Minute Read</p> <p class="intro"><span class="dropcap"></span> This learning module will 
provide an overview of C++, targeted toward those with some background in Java or C. ## Overview of C++ ### C++ data types See [this page](http://en.cppreference.com/w/cpp/language/types) for a description of all C++ primitive data types, including concepts like minimum and maximum numbers, not-a-number, and infinity. The most commonly used types are `void`, `bool`, `char`, `int`, `unsigned`, `long`, `float`, and `double`. Since C++ is "close to the metal", like C, it can help you to know the number of bits used for each representation, _which can change depending on machine architecture_. ### Some differences between C++ and Java Notes below will be useful even to those programmers without a background in Java. + C++ uses ``bool`` for a Boolean type (Java calls this ``boolean``) + Java uses `System.out.println` for output to `stdout`. C++ uses `std::cout`, the `&lt;&lt;` operator, and `std::endl`. The Java statement `System.out.println("Hello world!");` would be `std::cout &lt;&lt; "Hello world!" &lt;&lt; std::endl;` in C++. + Java forces you to allocate non-primitive types on the heap, where C++ allows you to allocate non-primitive types on the stack (the latter is faster and more amenable to real-time performance). + Java automatically de-allocates memory (using relatively slow garbage collection). + Array allocation is slightly different. Arrays are allocated in Java like ``int[] array = new int[20]``. Arrays are allocated in C++ like ``int* array = new int[20]``. + In Java, a member function is defined like ``public void tabulateScores()`` while the function would be declared in C++ like ``public: void tabulateScores()`` + All primitive types (``int``, ``float``, etc.) are [passed by value](http://courses.washington.edu/css342/zander/css332/passby.html) to functions and all non-primitive types are [passed by reference](http://courses.washington.edu/css342/zander/css332/passby.html). C++ gives the option to pass any type by reference or by value to a function. 
+ C++ requires you to [declare](http://stackoverflow.com/questions/4757565/c-forward-declaration) function prototypes and classes when you refer to them (before they have been _defined_- fleshed out). If you refer to a class before it has been defined, C++ requires you to do a [forward declaration](http://stackoverflow.com/questions/4757565/c-forward-declaration). Java was smart to avoid declarations, in my opinion. + C++ does not have _interfaces_ but it does have [pure virtual functions](https://en.wikipedia.org/wiki/Virtual_function), which serve an identical purpose. Some resources for Java programmers to learn C++: + [Moving from Java to C++](http://www.horstmann.com/ccj2/ccjapp3.html) + [Java to C++ Transition](http://cs.brown.edu/courses/cs123/docs/java_to_cpp.shtml) ### Object-oriented programming in C++ Object oriented programming (OOP) is a programming model centered around data and the functions used to operate on that data, in contrast to _procedural programming languages_ (like C) that focus on decomposing a task into subroutines (procedures). A tutorial on OOP in C++ can be found [here](http://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-088-introduction-to-c-memory-management-and-c-object-oriented-programming-january-iap-2010/lecture-notes/MIT6_088IAP10_lec04.pdf). ### Memory allocation and shared pointers One price you pay for the additional speed and control that C++ offers is the need to manage heap memory allocation and deallocation. Memory is allocated from the heap using the ``new`` operator: int* x; // x is a pointer x = new int; // allocated memory for x on the heap and memory must be deallocated from the heap using the ``delete`` operator: delete x; (An array allocated with ``new[]`` must instead be freed with ``delete []``.) For every ``new`` in your code, there should be a matching ``delete``. #### Shared pointers Shared pointers provide automatic memory deallocation.
_I suggest using them instead of regular pointers for memory allocation/deallocation._ The idea is simple- when no more references point to a block of memory, the block is deallocated- though a few caveats exist. Shared pointers work like this: shared_ptr&lt;int&gt; x; // x is a shared pointer to an int x = shared_ptr&lt;int&gt;(new int); // allocated memory There no longer needs to be a matching ``delete`` statement. The advantages of shared pointers over garbage collection are that the former is considerably faster and that memory is reclaimed as soon as possible. The disadvantage is that circular pointer references must be explicitly managed by the programmer or memory leaks will occur. **Using the following example class definitions will result in a memory leak**: class B; // the forward reference is necessary class A { shared_ptr&lt;B&gt; b; }; class B { shared_ptr&lt;A&gt; a; }; This situation is fixable using a [weak pointer](http://www.boost.org/doc/libs/1_58_0/libs/smart_ptr/weak_ptr.htm): class B; // the forward reference is necessary class A { shared_ptr&lt;B&gt; b; }; class B { weak_ptr&lt;A&gt; a; // the weak pointer breaks the circular reference }; ### Passing by reference and passing by value (_ED: I have seen this advice somewhere but cannot locate it at the present moment. I will cite my source when I find it again._) * Pass variables by reference when the function is to modify the variable. As a matter of fact, indicating that the variable is passed by reference _and without the `const`_ keyword indicates to the caller that the function is _expected_ to modify the variable. * Else, for primitive types, pass by value * Pass objects by reference and use the `const` modifier when the object uses more memory than a pointer (64-bits on most systems) and the function is not expected to change the object (e.g., `void sum_inertias(const SpatialRBInertiad&amp; J)`).
* Pass objects by value when the object uses less memory than a pointer and the function is not expected to modify the object ### Compiling/linking C++ on Unix-type systems Whether producing an executable file or a software library, C++ requires two processes: _compiling_ the C++ source code into machine code ("object files") and _linking_ the object files together (which resolves symbolic references to functions and data). A description of the compilation and linking processes is [here](http://stackoverflow.com/questions/6264249/how-does-the-compilation-linking-process-work). As a very simple example, g++ -c hello.cpp -o hello.o compiles `hello.cpp` to produce `hello.o` and g++ hello.o -o hello links `hello.o` with the C++ standard libraries to produce the executable `hello`. I recommend getting and learning [CMake](http://cmake.org) to build your projects, which can take care of the compiling and linking process for you automatically. Otherwise, you have to compile your source files manually, forcing you to remember all of the arcane command line options, and then manually link your objects together. One warning: [the linker (g++) is sensitive to the order that libraries and object files are specified on the command line on Linux systems](http://stackoverflow.com/questions/45135/why-does-the-order-in-which-libraries-are-linked-sometimes-cause-errors-in-gcc). ### Newer language features C++ continues to support more and more features over time. The language's evolution reminds me of this: ![Big swiss army knife](http://images.knifecenter.com/knifecenter/wenger/images/WR16999a.jpg) because few language features are ever removed. The [C++ standard template library](http://www.cplusplus.com/reference/stl/) contains a number of useful data structures- including vectors, linked lists, queues, stacks, sets, and maps- and algorithms (finding maximum elements, binary search, sorting, and more). 
You will be able to increase your programming proficiency in C++ many fold when you understand the concept of [iterators](http://www.cs.northwestern.edu/~riesbeck/programming/c++/stl-iterators.html). A staging ground for many C++ algorithms that often make their way into the language is [Boost](http://www.boost.org). This functionality goes part of the way toward replicating the utility of other languages' standard libraries (Python and Java in particular). ### Templates Templates allow us to avoid code like this: void swap(int&amp; x, int&amp; y) { int tmp = x; x = y; y = tmp; } void swap(float&amp; x, float&amp; y) { float tmp = x; x = y; y = tmp; } . . . We can do this instead: template &lt;typename T&gt; void swap(T&amp; x, T&amp; y) { T tmp = x; x = y; y = tmp; } This saves typing and, more importantly, reduces the possibility of bugs from copy and paste (a great way to introduce bugs in programming). On the downside, templates make code a little harder to read, make it slower to compile, and tend to generate really hard-to-read compiler error messages for syntax errors ([see this part of the C++ FAQ for a fix](https://isocpp.org/wiki/faq/templates#template-error-msgs)). Learning templates well will help you understand Boost and the STL, and will give you the ability to read the majority of C++ code. ### Exceptions Before exceptions, programmers would check for errors like this: FILE* fp = fopen("/tmp/dat", "w"); if (!fp) { std::cerr &lt;&lt; "Unable to open file!" &lt;&lt; std::endl; return false; } ... Using exceptions we check for errors like this: try { fp = open("/tmp/dat"); } catch (IOException e) { std::cerr &lt;&lt; "Unable to open file!"
&lt;&lt; std::endl; return false; } One advantage is that if we don't care about the error at this level- it's apparent that we already signal to the calling function that there was a problem by the `return false` statement- then we can keep our code very neat by doing this instead: fp = open("/tmp/dat"); Now if we do not "catch" the exception, the function above is responsible for catching it, on up the [call stack](https://en.wikipedia.org/wiki/Call_stack), until- if the `main` function does not catch it- the exception will cause the program to terminate with an error. A commenter on [Stack overflow](http://stackoverflow.com/questions/196522/in-c-what-are-the-benefits-of-using-exceptions-and-try-catch-instead-of-just) indicates two benefits: 1. They can't be ignored: you must deal with them at some level or they will terminate your program. If you do not explicitly check for the error code, it is lost. 2. They _can_ be ignored: if you explicitly wish to ignore an exception, it will propagate up to higher levels until some piece of code does handle it. This same Stack overflow thread has many more viewpoints on why exceptions are useful. No commenter argues that checking for error codes is a better solution. ### Programming / debugging advice Some general programming advice (beyond C++): - **readability**: One of your primary goals when programming is to carefully guide another programmer through your code. Even if you expect to be the only person to ever see your code, you will be that other programmer in six months. - **minimize cognitive load**: Toward keeping your code readable, minimize the cognitive load. Name variables and functions descriptively (``num_iterations`` instead of ``n``, ``calc_inertias(.)`` instead of ``compute(.)``).
- **use STL containers instead of arrays**: Arrays do no range checking and the correct size must be allocated at runtime; accidentally overwriting memory outside of the array is a common bug [and is a common vector for security attacks](https://en.wikipedia.org/wiki/Buffer_overflow). I prefer the [STL vector](http://www.cplusplus.com/reference/vector/vector/), which can be accessed like an array (e.g., ``x[5] = 3``), can be queried for its size, automatically deallocates memory when the variable goes out of scope, performs range checking (via ``at()``), and can increase its capacity automatically. [Here](http://cs.brown.edu/~jak/proglang/cpp/stltut/tut.html) is a nice tutorial on the STL (Standard Template Library). - **put reusable code in functions and keep functions small**: longer functions are more likely to have defects (see a dissenting viewpoint plus several that back up my point of view [here](http://c2.com/cgi/wiki?LongFunctionHeresy)). The longer your function grows beyond, say, 50 lines of code, the more you should consider breaking it into multiple functions. - **[beware of macros](http://stackoverflow.com/questions/14041453/why-are-preprocessor-macros-evil-and-what-are-the-alternatives)** - **write the comments first**: This is a strategy I use when programming. Writing the comments first helps you focus on organizing the logic. Filling in the code from the comments is pretty easy when you know the language syntax. - **address the first compiler error first**: Many errors found by the C++ compiler will disappear after you correct the first in a list of errors. - **fix all compiler warnings**: C++ compilers tend to generate warnings in places where compilers for other languages would generate errors. Take compiler warnings seriously- treat them as errors.
- **write [unit tests](https://en.wikipedia.org/wiki/Unit_testing)**: Unit tests allow you to catch problems in a function while you remember the ins and outs of that function as opposed to six months down the road when you locate a bug in the function. - **use a debugger**: see below ### C++ tools - **git / version control**: While not a C++ tool _per se_, use version control to track your changes. Advanced features of version control even allow you to, as examples: run unit tests, run regression tests, and build binary releases upon committing code. - **gdb / lldb**: Debugging using ``printf`` (or its variants among programming languages) is usually an order of magnitude slower than using a debugger. Learn at least the main features of a debugger. A good tutorial on gdb is found [here](http://www.unknownroad.com/rtfm/gdbtut/). - **valgrind**: If you have a bug that you are having difficulty locating using gdb, [valgrind](http://valgrind.org) should be your next stop. Valgrind can locate problems like illegal memory reads and writes that gdb will not catch. - **performance tools**: Do not [prematurely optimize](https://shreevatsa.wordpress.com/2008/05/16/premature-optimization-is-the-root-of-all-evil/): you will find that your intuition about the time sinks in your software is often wrong anyway. Use a _profiler_; my favorite on Linux is currently [google-perftools](https://github.com/gperftools/gperftools).
### Additional reference materials - [C++ FAQ](http://www.parashift.com/c++-faq/) - Google --- ## Overview of OpenSceneGraph You have two clear options to program in 3D: [OpenGL](https://en.wikipedia.org/wiki/OpenGL), which is a _state system_ (the rendering is completely determined by state variables), and [scene graph-based systems](https://en.wikipedia.org/wiki/Scene_graph), like [OpenSceneGraph](https://en.wikipedia.org/wiki/OpenSceneGraph), [Open Inventor](https://en.wikipedia.org/wiki/Open_Inventor), and [Java 3D](http://www.oracle.com/technetwork/articles/javase/index-jsp-138252.html). The earliest technology for viewing 3D content on the web, [VRML](https://en.wikipedia.org/wiki/VRML97), is based on a scene graph representation (and this is a pretty good file format too). I will discuss the scene graph representation because it is intuitive to understand- it fits well into the object-oriented paradigm, in particular- and 3D rendering can be achieved with very little code. For example, this tiny bit of code renders many 3D models that you can view using mouse controls: // simple.cpp (Evan Drumwright) #include &lt;osgDB/ReadFile&gt; #include &lt;osgViewer/Viewer&gt; int main(int argc, char** argv) { if (argc &lt; 2) { std::cerr &lt;&lt; "syntax: simple &lt;filename&gt;" &lt;&lt; std::endl; return -1; } osgViewer::Viewer viewer; viewer.setSceneData(osgDB::readNodeFile(argv[1])); return viewer.run(); } You can build this program using [this](../../assets/other/simpleosg/CMakeLists.txt) CMake build file. You can then run the program on many 3D files. One example is this [cessna airplane](http://scv.bu.edu/documentation/software-help/graphics-programming/osg_examples/materials/cessna.osg). Once you build the program, you run it like this: `simple cessna.osg`. ### The scene graph A scene graph is a collection of nodes in a tree (or, more generally, a graph) structure.
A node in the tree may have many children but only a single parent, with the effect of a parent applied to all its child nodes. An operation performed on a group automatically propagates its effect to all of its members. Associating a geometrical transformation matrix (which I will describe in a future learning module) at a node will apply the transformation (rotation, translation, scaling) to all nodes below it. Materials are applied the same way: a material associated with a node affects all geometry beneath it. The scene graph paradigm is particularly good for rendering and animating animals, humans, and robots. (Adapted from [this page](https://en.wikipedia.org/wiki/Scene_graph)). An example scene graph for a virtual human is depicted below: ![example virtual human scene graph](../../assets/img/scene_graph.png) The types of nodes in the graph are described below: * [Transform](http://trac.openscenegraph.org/documentation/OpenSceneGraphReferenceDocs/a00910.html): A group node for which all children are transformed by a 4x4 (homogeneous) [transformation matrix](https://en.wikipedia.org/wiki/Transformation_matrix)- again, I will discuss this in a future learning module.
* [Group](http://trac.openscenegraph.org/documentation/OpenSceneGraphReferenceDocs/a00357.html): A generic node for grouping children together * [Sphere](http://trac.openscenegraph.org/documentation/OpenSceneGraphReferenceDocs/a00775.html): A geometric primitive node for rendering a sphere * [Material](http://trac.openscenegraph.org/documentation/OpenSceneGraphReferenceDocs/a00479.html): An object for setting the color properties (color, shininess, transparency) of an object ### Simple animation #include &lt;osgDB/ReadFile&gt; #include &lt;osgViewer/Viewer&gt; #include &lt;osg/MatrixTransform&gt; #include &lt;osgGA/TrackballManipulator&gt; #include &lt;osgGA/StateSetManipulator&gt; #include &lt;unistd.h&gt; int main(int argc, char** argv) { if (argc &lt; 2) { std::cerr &lt;&lt; "syntax: anim &lt;filename&gt;" &lt;&lt; std::endl; return -1; } // create the viewer, as before, but now we need to add // a trackball manipulator osgViewer::Viewer viewer; // create a transform osg::MatrixTransform* group = new osg::MatrixTransform; viewer.setCameraManipulator(new osgGA::TrackballManipulator()); // read the file and add it to the transform group group-&gt;addChild(osgDB::readNodeFile(argv[1])); // point the viewer to the scene graph viewer.setSceneData(group); viewer.realize(); // set the angle (in radians) const double ANGLE = M_PI/180.0; unsigned i = 0; // loop until done while (true) { if (viewer.done()) break; // render a frame viewer.frame(); // update the transform to do a rotation around axis .577 .577 .577 osg::Matrixd T; T.makeRotate(ANGLE*i, 0.57735, 0.57735, 0.57735); group-&gt;setMatrix(T); i++; // sleep a little (10000 microseconds = 10ms = 100 frames per second) usleep(10000); } return 0; } This code fragment covers 90% of animation cases: simply update a matrix transform and then render a frame (using `frame()`). You can build this program using [this](../../assets/other/animosg/CMakeLists.txt) CMake build file. Again, you can then run the program on many 3D files.
Once you build the program, you run it like this: `anim cessna.osg`. **One important note about animation**: if your code between calls to `frame()` takes too long, then the frame rate will naturally suffer. ### 3D file formats and tools To do anything cool with 3D, you need models, and models require considerable time and expertise to create. You can search for models using Google (try "3D model spaceship", for example), convert between models using tools, or even try building your own or modifying someone else's. Some useful tools are linked to below: * [Wavefront OBJ](https://en.wikipedia.org/wiki/Wavefront_.obj_file) is a file format that is extremely simple both to parse and to write. I like it less than other file formats when colors must be applied, because these "materials" are stored outside of the file (not all data can be stored in a single file). The file extension is ".obj". * [VRML](https://en.wikipedia.org/wiki/VRML) comes in two formats, both still popular, [VRML 1.0](http://www.martinreddy.net/gfx/3d/VRML.spec) and [VRML 97 (also known as VRML 2.0)](http://gun.teipir.gr/VRML-amgem/spec/index.html). The VRML 1.0 file extension is ".iv"; the VRML 2.0 file extensions are ".wrl" and ".vrml" (less common). VRML 1.0 is easy to parse and write to; VRML 2.0 is easier to write to. There exist tools for converting between VRML 1.0 and 2.0, but your mileage may vary. * [Blender](https://www.blender.org) is free, professional (or near professional grade) 3D modeling and rendering software. It can help you edit 3D models and convert between various representations. The only problems: its interface is not very intuitive, the interface has changed multiple times in the 10+ years that I've used it, and the documentation has historically been poor. ### Learning more There are a number of tutorials available for OpenSceneGraph [here](http://trac.openscenegraph.org/projects/osg//wiki/Support/Tutorials).
API documentation for OpenSceneGraph is located [here](http://trac.openscenegraph.org/documentation/OpenSceneGraphReferenceDocs/). </p> <!-- POST NAVIGATION --> <div class="postNav clearfix"> <a class="prev" href="/robotics-course-materials/blog/forward-kinematics/"><span>&laquo;&nbsp;Forward kinematics</span> </a> <a class="next" href="/robotics-course-materials/blog/linear-algebra/"><span>An engineer's guide to matrices, vectors, and numerical linear algebra&nbsp;&raquo;</span> </a> </div> </div> </div> </div><!-- end .content --> <div class="footer"> <div class="container"> <div class="footer-links"> <ul class="noList"> </ul> </div> </div> </div><!-- end .footer --> <!-- Add jQuery and other scripts --> <script src="//ajax.googleapis.com/ajax/libs/jquery/1.11.2/jquery.min.js"></script> <script>window.jQuery || document.write('<script src="/robotics-course-materials"><\/script>')</script> <script src="/robotics-course-materials/assets/js/dropcap.min.js"></script> <script src="/robotics-course-materials/assets/js/responsive-nav.min.js"></script> <script src="/robotics-course-materials/assets/js/scripts.js"></script> </body> </html>
{ "redpajama_set_name": "RedPajamaGithub" }
{"url":"https:\/\/www.zbmath.org\/authors\/?q=ai%3Apech.pavel","text":"# zbMATH \u2014 the first resource for mathematics\n\n## Pech, Pavel\n\nCompute Distance To:\n Author ID: pech.pavel Published as: Pech, P.; Pech, Pavel External Links: ORCID\n Documents Indexed: 30 Publications since 1990, including 1 Book Reviewing Activity: 4 Reviews\nall top 5\n\n#### Co-Authors\n\n 19 single-authored 3 Kur\u00e1\u017e, Michal 3 Mayer, Petr 2 Blazek, Jiri 2 Jakubcov\u00e1, Michala 2 M\u00e1ca, Petr 1 Havl\u00ed\u010dek, Vojt\u011bch 1 Hora, Jaroslav 1 Kov\u00e1cs, Zolt\u00e1n 1 Moln\u00e1r, Emil 1 Pavl\u00e1sek, Jirka 1 Szirmai, Jen\u00f3 1 Wei\u00df, Gunter\nall top 5\n\n#### Serials\n\n 5 Journal for Geometry and Graphics 2 Applied Mathematics and Computation 2 Mathematica Pannonica 1 \u010casopis Pro P\u011bstov\u00e1n\u00ed Matematiky 1 Commentationes Mathematicae Universitatis Carolinae 1 Czechoslovak Mathematical Journal 1 Journal of Computational and Applied Mathematics 1 Journal of Geometry 1 Mathematica Bohemica 1 Rad Hrvatske Akademije Znanosti i Umjetnosti. Matemati\u010dke Znanosti 1 Mathematical Problems in Engineering 1 Acta Academiae Paedagogicae Agriensis. Nova Series. 
Sectio Matematicae 1 Journal of Applied Mathematics 1 Mathematics in Computer Science 1 G - Slovensk\u00fd \u010casopis pre Geometriu a Grafiku\nall top 5\n\n#### Fields\n\n 16 Geometry\u00a0(51-XX) 10 Convex and discrete geometry\u00a0(52-XX) 10 Computer science\u00a0(68-XX) 3 Fluid mechanics\u00a0(76-XX) 2 Commutative algebra\u00a0(13-XX) 2 Real functions\u00a0(26-XX) 2 Harmonic analysis on Euclidean spaces\u00a0(42-XX) 2 Differential geometry\u00a0(53-XX) 2 Numerical analysis\u00a0(65-XX) 2 Operations research, mathematical programming\u00a0(90-XX) 1 Mathematical logic and foundations\u00a0(03-XX) 1 Algebraic geometry\u00a0(14-XX) 1 Mechanics of deformable solids\u00a0(74-XX) 1 Mathematics education\u00a0(97-XX)\n\n#### Citations contained in zbMATH\n\n11 Publications have been cited 32 times in 27 Documents Cited by Year\nSelected topics in geometry with classical vs. computer proving.\u00a0Zbl\u00a01149.51007\nPech, Pavel\n2007\nThe harmonic analysis of polygons and Napoleon\u2019s theorem.\u00a0Zbl\u00a00991.51009\nPech, Pavel\n2001\nDual permeability variably saturated flow and contaminant transport modeling of a nuclear waste repository with capillary barrier protection.\u00a0Zbl\u00a01453.76205\nKur\u00e1\u017e, Michal; Mayer, Petr; Havl\u00ed\u010dek, Vojt\u011bch; Pech, Pavel; Pavl\u00e1sek, Jirka\n2013\nSolving the nonlinear and nonstationary Richards equation with two-level adaptive domain decomposition ($$dd$$-adaptivity).\u00a0Zbl\u00a01410.76431\nKuraz, Michal; Mayer, Petr; Pech, Pavel\n2015\nSolving the nonlinear Richards equation model with adaptive domain decomposition.\u00a0Zbl\u00a01446.76161\nKuraz, Michal; Mayer, Petr; Pech, Pavel\n2014\nOn the Simson-Wallace theorem and its generalizations.\u00a0Zbl\u00a01101.51009\nPech, Pavel\n2005\nErd\u0151s-Mordell inequality for space n-gons.\u00a0Zbl\u00a00807.52004\nPech, P.\n1994\nInequality between sides and diagonals of a space n-gon and its integral analog.\u00a0Zbl\u00a00722.52006\nPech, 
Pavel\n1990\nLocus computation in dynamic geometry environment.\u00a0Zbl\u00a007095825\nBla\u017eek, Ji\u0159\u00ed; Pech, Pavel\n2019\nOn a 3D extension of the Simson-Wallace theorem.\u00a0Zbl\u00a01317.51026\nPech, Pavel\n2014\nComputations of the area and radius of cyclic polygons given by the lengths of sides.\u00a0Zbl\u00a01159.68557\nPech, Pavel\n2006\nLocus computation in dynamic geometry environment.\u00a0Zbl\u00a007095825\nBla\u017eek, Ji\u0159\u00ed; Pech, Pavel\n2019\nSolving the nonlinear and nonstationary Richards equation with two-level adaptive domain decomposition ($$dd$$-adaptivity).\u00a0Zbl\u00a01410.76431\nKuraz, Michal; Mayer, Petr; Pech, Pavel\n2015\nSolving the nonlinear Richards equation model with adaptive domain decomposition.\u00a0Zbl\u00a01446.76161\nKuraz, Michal; Mayer, Petr; Pech, Pavel\n2014\nOn a 3D extension of the Simson-Wallace theorem.\u00a0Zbl\u00a01317.51026\nPech, Pavel\n2014\nDual permeability variably saturated flow and contaminant transport modeling of a nuclear waste repository with capillary barrier protection.\u00a0Zbl\u00a01453.76205\nKur\u00e1\u017e, Michal; Mayer, Petr; Havl\u00ed\u010dek, Vojt\u011bch; Pech, Pavel; Pavl\u00e1sek, Jirka\n2013\nSelected topics in geometry with classical vs. 
computer proving.\u00a0Zbl\u00a01149.51007\nPech, Pavel\n2007\nComputations of the area and radius of cyclic polygons given by the lengths of sides.\u00a0Zbl\u00a01159.68557\nPech, Pavel\n2006\nOn the Simson-Wallace theorem and its generalizations.\u00a0Zbl\u00a01101.51009\nPech, Pavel\n2005\nThe harmonic analysis of polygons and Napoleon\u2019s theorem.\u00a0Zbl\u00a00991.51009\nPech, Pavel\n2001\nErd\u0151s-Mordell inequality for space n-gons.\u00a0Zbl\u00a00807.52004\nPech, P.\n1994\nInequality between sides and diagonals of a space n-gon and its integral analog.\u00a0Zbl\u00a00722.52006\nPech, Pavel\n1990\nall top 5\n\n#### Cited by 38 Authors\n\n 7 Pech, Pavel 3 Kur\u00e1\u017e, Michal 2 Dana-Picard, Thierry Noah 2 Liu, Jian 2 Mayer, Petr 2 Nicollier, Gr\u00e9goire 2 R\u00f6schel, Otto 1 Ait-Haddou, Rachid 1 Albuja, Guillermo 1 Althaus, Ernst 1 \u00c1vila, Andr\u00e9s I. 1 Blazek, Jiri 1 Dolej\u0161\u00ed, V\u00edt 1 Fukada, Kotaro 1 Ha\u0161ek, Roman 1 Herzog, Walter 1 Kisil, Vladimir V. 
1 Kita, Ichiro 1 Kondo, Takefumi 1 Lang, Johann 1 Li, Dawang 1 Li, Longyuan 1 Mick, Sybille 1 Moln\u00e1r, Emil 1 Montes, Antonio 1 Moritsugu, Shuichi 1 Nomura, Taishin 1 Rauterberg, Felix 1 Schumann, Heinz 1 \u0160ol\u00edn, Pavel 1 Szirmai, Jen\u00f3 1 Toyoda, Tetsu 1 Uehara, Takato 1 Wang, Xianfeng 1 Xing, Feng 1 Yoshioka, Hidekazu 1 Zehavi, Nurit 1 Ziegler, Sarah\nall top 5\n\n#### Cited in 16 Serials\n\n 3 Mathematics in Computer Science 2 Mathematische Semesterberichte 2 Beitr\u00e4ge zur Algebra und Geometrie 2 Journal of Geometry 2 Applied Mathematical Modelling 1 Applied Mathematics and Computation 1 Czechoslovak Mathematical Journal 1 Geometriae Dedicata 1 Journal of Computational and Applied Mathematics 1 Computer Aided Geometric Design 1 Applied Numerical Mathematics 1 Discrete & Computational Geometry 1 Journal of Inequalities and Applications 1 Proceedings of the International Geometry Center 1 Journal of Mathematics in Industry 1 EURO Journal on Computational Optimization\nall top 5\n\n#### Cited in 17 Fields\n\n 13 Geometry\u00a0(51-XX) 7 Numerical analysis\u00a0(65-XX) 7 Computer science\u00a0(68-XX) 6 Fluid mechanics\u00a0(76-XX) 3 Commutative algebra\u00a0(13-XX) 2 Differential geometry\u00a0(53-XX) 2 Mathematics education\u00a0(97-XX) 1 General and overarching topics; collections\u00a0(00-XX) 1 History and biography\u00a0(01-XX) 1 Mathematical logic and foundations\u00a0(03-XX) 1 Functions of a complex variable\u00a0(30-XX) 1 Partial differential equations\u00a0(35-XX) 1 Dynamical systems and ergodic theory\u00a0(37-XX) 1 Convex and discrete geometry\u00a0(52-XX) 1 Mechanics of deformable solids\u00a0(74-XX) 1 Geophysics\u00a0(86-XX) 1 Operations research, mathematical programming\u00a0(90-XX)","date":"2021-04-11 01:50:00","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, 
\"mathjax_display_tex\": 1, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.2648355960845947, \"perplexity\": 13556.231266081251}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.3, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2021-17\/segments\/1618038060603.10\/warc\/CC-MAIN-20210411000036-20210411030036-00545.warc.gz\"}"}
{"url":"https:\/\/itk.org\/Wiki\/index.php?title=ITK\/Examples\/Registration\/ImageRegistrationMethod&oldid=36192","text":"ITK\/Examples\/Registration\/ImageRegistrationMethod\n\n< ITK\u200e | Examples\n(diff) \u2190 Older revision | Latest revision (diff) | Newer revision \u2192 (diff)\n\nThis example registers two synthetic images. A white circle is created in the center of the fixed image (with a black background). A white ellipse is created as the moving image and offset from the center of the image. A rigid translation-only transform is then optimized to bring the ellipse to the circle.\n\nImageRegistrationMethod.cxx\n\n#include \"itkCastImageFilter.h\"\n#include \"itkEllipseSpatialObject.h\"\n#include \"itkImage.h\"\n#include \"itkImageRegistrationMethod.h\"\n#include \"itkLinearInterpolateImageFunction.h\"\n#include \"itkImageFileWriter.h\"\n#include \"itkMeanSquaresImageToImageMetric.h\"\n#include \"itkResampleImageFilter.h\"\n#include \"itkRescaleIntensityImageFilter.h\"\n#include \"itkSpatialObjectToImageFilter.h\"\n#include \"itkTranslationTransform.h\"\n\nconst unsigned int Dimension = 2;\ntypedef unsigned char PixelType;\n\ntypedef itk::Image< PixelType, Dimension > ImageType;\n\nvoid CreateEllipseImage(ImageType::Pointer image);\nvoid CreateSphereImage(ImageType::Pointer image);\n\nint main(int, char *[] )\n{\n\/\/ The transform that will map the fixed image into the moving image.\ntypedef itk::TranslationTransform< double, Dimension > TransformType;\n\n\/\/ An optimizer is required to explore the parameter space of the transform\n\/\/ in search of optimal values of the metric.\n\n\/\/ The metric will compare how well the two images match each other. Metric\n\/\/ types are usually parameterized by the image types as it can be seen in\n\/\/ the following type declaration.\ntypedef itk::MeanSquaresImageToImageMetric<\nImageType,\nImageType > MetricType;\n\n\/\/ Finally, the type of the interpolator is declared. 
The interpolator will\n\/\/ evaluate the intensities of the moving image at non-grid positions.\ntypedef itk:: LinearInterpolateImageFunction<\nImageType,\ndouble > InterpolatorType;\n\n\/\/ The registration method type is instantiated using the types of the\n\/\/ fixed and moving images. This class is responsible for interconnecting\n\/\/ all the components that we have described so far.\ntypedef itk::ImageRegistrationMethod<\nImageType,\nImageType > RegistrationType;\n\n\/\/ Create components\nMetricType::Pointer metric = MetricType::New();\nTransformType::Pointer transform = TransformType::New();\nOptimizerType::Pointer optimizer = OptimizerType::New();\nInterpolatorType::Pointer interpolator = InterpolatorType::New();\nRegistrationType::Pointer registration = RegistrationType::New();\n\n\/\/ Each component is now connected to the instance of the registration method.\nregistration->SetMetric( metric );\nregistration->SetOptimizer( optimizer );\nregistration->SetTransform( transform );\nregistration->SetInterpolator( interpolator );\n\n\/\/ Get the two images\nImageType::Pointer fixedImage = ImageType::New();\nImageType::Pointer movingImage = ImageType::New();\n\nCreateSphereImage(fixedImage);\nCreateEllipseImage(movingImage);\n\n\/\/ Write the two synthetic inputs\ntypedef itk::ImageFileWriter< ImageType > WriterType;\n\nWriterType::Pointer fixedWriter = WriterType::New();\nfixedWriter->SetFileName(\"fixed.png\");\nfixedWriter->SetInput( fixedImage);\nfixedWriter->Update();\n\nWriterType::Pointer movingWriter = WriterType::New();\nmovingWriter->SetFileName(\"moving.png\");\nmovingWriter->SetInput( movingImage);\nmovingWriter->Update();\n\n\/\/ Set the registration inputs\nregistration->SetFixedImage(fixedImage);\nregistration->SetMovingImage(movingImage);\n\nregistration->SetFixedImageRegion(\nfixedImage->GetLargestPossibleRegion() );\n\n\/\/ Initialize the transform\ntypedef RegistrationType::ParametersType ParametersType;\nParametersType initialParameters( 
transform->GetNumberOfParameters() );\n\ninitialParameters[0] = 0.0; \/\/ Initial offset along X\ninitialParameters[1] = 0.0; \/\/ Initial offset along Y\n\nregistration->SetInitialTransformParameters( initialParameters );\n\noptimizer->SetMaximumStepLength( 4.00 );\noptimizer->SetMinimumStepLength( 0.01 );\n\n\/\/ Set a stopping criterion\noptimizer->SetNumberOfIterations( 200 );\n\n\/\/ Connect an observer\n\/\/CommandIterationUpdate::Pointer observer = CommandIterationUpdate::New();\n\/\/optimizer->AddObserver( itk::IterationEvent(), observer );\n\ntry\n{\nregistration->Update();\n}\ncatch( itk::ExceptionObject & err )\n{\nstd::cerr << \"ExceptionObject caught\u00a0!\" << std::endl;\nstd::cerr << err << std::endl;\nreturn EXIT_FAILURE;\n}\n\n\/\/ The result of the registration process is an array of parameters that\n\/\/ defines the spatial transformation in an unique way. This final result is\n\/\/ obtained using the \\code{GetLastTransformParameters()} method.\n\nParametersType finalParameters = registration->GetLastTransformParameters();\n\n\/\/ In the case of the \\doxygen{TranslationTransform}, there is a\n\/\/ straightforward interpretation of the parameters. Each element of the\n\/\/ array corresponds to a translation along one spatial dimension.\n\nconst double TranslationAlongX = finalParameters[0];\nconst double TranslationAlongY = finalParameters[1];\n\n\/\/ The optimizer can be queried for the actual number of iterations\n\/\/ performed to reach convergence. The \\code{GetCurrentIteration()}\n\/\/ method returns this value. 
A large number of iterations may be an\n\/\/ indication that the maximum step length has been set too small, which\n\/\/ is undesirable since it results in long computational times.\n\nconst unsigned int numberOfIterations = optimizer->GetCurrentIteration();\n\n\/\/ The value of the image metric corresponding to the last set of parameters\n\/\/ can be obtained with the \\code{GetValue()} method of the optimizer.\n\nconst double bestValue = optimizer->GetValue();\n\n\/\/ Print out results\n\/\/\nstd::cout << \"Result = \" << std::endl;\nstd::cout << \" Translation X = \" << TranslationAlongX << std::endl;\nstd::cout << \" Translation Y = \" << TranslationAlongY << std::endl;\nstd::cout << \" Iterations = \" << numberOfIterations << std::endl;\nstd::cout << \" Metric value = \" << bestValue << std::endl;\n\n\/\/ It is common, as the last step of a registration task, to use the\n\/\/ resulting transform to map the moving image into the fixed image space.\n\/\/ This is easily done with the \\doxygen{ResampleImageFilter}. Please\n\/\/ refer to Section~\\ref{sec:ResampleImageFilter} for details on the use\n\/\/ of this filter. First, a ResampleImageFilter type is instantiated\n\/\/ using the image types. It is convenient to use the fixed image type as\n\/\/ the output type since it is likely that the transformed moving image\n\/\/ will be compared with the fixed image.\n\ntypedef itk::ResampleImageFilter<\nImageType,\nImageType > ResampleFilterType;\n\n\/\/ A resampling filter is created and the moving image is connected as its input.\n\nResampleFilterType::Pointer resampler = ResampleFilterType::New();\nresampler->SetInput( movingImage);\n\n\/\/ The Transform that is produced as output of the Registration method is\n\/\/ also passed as input to the resampling filter. Note the use of the\n\/\/ methods \\code{GetOutput()} and \\code{Get()}. 
This combination is needed\n\/\/ here because the registration method acts as a filter whose output is a\n\/\/ transform decorated in the form of a \\doxygen{DataObject}. For details in\n\/\/ this construction you may want to read the documentation of the\n\/\/ \\doxygen{DataObjectDecorator}.\n\nresampler->SetTransform( registration->GetOutput()->Get() );\n\n\/\/ As described in Section \\ref{sec:ResampleImageFilter}, the\n\/\/ ResampleImageFilter requires additional parameters to be specified, in\n\/\/ particular, the spacing, origin and size of the output image. The default\n\/\/ pixel value is also set to a distinct gray level in order to highlight\n\/\/ the regions that are mapped outside of the moving image.\n\nresampler->SetSize( fixedImage->GetLargestPossibleRegion().GetSize() );\nresampler->SetOutputOrigin( fixedImage->GetOrigin() );\nresampler->SetOutputSpacing( fixedImage->GetSpacing() );\nresampler->SetOutputDirection( fixedImage->GetDirection() );\nresampler->SetDefaultPixelValue( 100 );\n\n\/\/ The output of the filter is passed to a writer that will store the\n\/\/ image in a file. An \\doxygen{CastImageFilter} is used to convert the\n\/\/ pixel type of the resampled image to the final type used by the\n\/\/ writer. The cast and writer filters are instantiated below.\n\ntypedef unsigned char OutputPixelType;\ntypedef itk::Image< OutputPixelType, Dimension > OutputImageType;\ntypedef itk::CastImageFilter<\nImageType,\nImageType > CastFilterType;\n\nWriterType::Pointer writer = WriterType::New();\nCastFilterType::Pointer caster = CastFilterType::New();\nwriter->SetFileName(\"output.png\");\n\ncaster->SetInput( resampler->GetOutput() );\nwriter->SetInput( caster->GetOutput() );\nwriter->Update();\n\n\/*\n\/\/ The fixed image and the transformed moving image can easily be compared\n\/\/ using the \\doxygen{SubtractImageFilter}. 
This pixel-wise filter computes\n\/\/ the difference between homologous pixels of its two input images.\n\ntypedef itk::SubtractImageFilter<\nFixedImageType,\nFixedImageType,\nFixedImageType > DifferenceFilterType;\n\nDifferenceFilterType::Pointer difference = DifferenceFilterType::New();\n\ndifference->SetInput2( resampler->GetOutput() );\n*\/\n\nreturn EXIT_SUCCESS;\n}\n\nvoid CreateEllipseImage(ImageType::Pointer image)\n{\ntypedef itk::EllipseSpatialObject< Dimension > EllipseType;\n\ntypedef itk::SpatialObjectToImageFilter<\nEllipseType, ImageType > SpatialObjectToImageFilterType;\n\nSpatialObjectToImageFilterType::Pointer imageFilter =\nSpatialObjectToImageFilterType::New();\n\nImageType::SizeType size;\nsize[ 0 ] = 100;\nsize[ 1 ] = 100;\n\nimageFilter->SetSize( size );\n\nImageType::SpacingType spacing;\nspacing.Fill(1);\nimageFilter->SetSpacing(spacing);\n\nEllipseType::Pointer ellipse = EllipseType::New();\n\ntypedef EllipseType::TransformType TransformType;\nTransformType::Pointer transform = TransformType::New();\ntransform->SetIdentity();\n\nTransformType::OutputVectorType translation;\nTransformType::CenterType center;\n\ntranslation[ 0 ] = 65;\ntranslation[ 1 ] = 45;\ntransform->Translate( translation, false );\n\nellipse->SetObjectToParentTransform( transform );\n\nimageFilter->SetInput(ellipse);\n\nellipse->SetDefaultInsideValue(255);\nellipse->SetDefaultOutsideValue(0);\nimageFilter->SetUseObjectValue( true );\nimageFilter->SetOutsideValue( 0 );\n\nimageFilter->Update();\n\nimage->Graft(imageFilter->GetOutput());\n\n}\n\nvoid CreateSphereImage(ImageType::Pointer image)\n{\ntypedef itk::EllipseSpatialObject< Dimension > EllipseType;\n\ntypedef itk::SpatialObjectToImageFilter<\nEllipseType, ImageType > SpatialObjectToImageFilterType;\n\nSpatialObjectToImageFilterType::Pointer imageFilter =\nSpatialObjectToImageFilterType::New();\n\nImageType::SizeType size;\nsize[ 0 ] = 100;\nsize[ 1 ] = 100;\n\nimageFilter->SetSize( size 
);\n\nImageType::SpacingType spacing;\nspacing.Fill(1);\nimageFilter->SetSpacing(spacing);\n\nEllipseType::Pointer ellipse = EllipseType::New();\n\ntypedef EllipseType::TransformType TransformType;\nTransformType::Pointer transform = TransformType::New();\ntransform->SetIdentity();\n\nTransformType::OutputVectorType translation;\nTransformType::CenterType center;\n\ntranslation[ 0 ] = 50;\ntranslation[ 1 ] = 50;\ntransform->Translate( translation, false );\n\nellipse->SetObjectToParentTransform( transform );\n\nimageFilter->SetInput(ellipse);\n\nellipse->SetDefaultInsideValue(255);\nellipse->SetDefaultOutsideValue(0);\nimageFilter->SetUseObjectValue( true );\nimageFilter->SetOutsideValue( 0 );\n\nimageFilter->Update();\n\nimage->Graft(imageFilter->GetOutput());\n}\n\n\nCMakeLists.txt\n\ncmake_minimum_required(VERSION 2.6)\n\nPROJECT(Registration)\n\nFIND_PACKAGE(ITK REQUIRED)\nINCLUDE(\\${ITK_USE_FILE})","date":"2022-12-02 17:09:31","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.45507529377937317, \"perplexity\": 13060.482480269617}, \"config\": {\"markdown_headings\": false, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.3, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": 
\"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2022-49\/segments\/1669446710909.66\/warc\/CC-MAIN-20221202150823-20221202180823-00206.warc.gz\"}"}
Bob Blazekovic (Battle Creek, MI, USA, May 26, 1960), American tennis player of Croatian descent. He played right-handed. He competed in doubles with the American tennis player Erick Iskerski. He competed on the ATP Tour. Sources American tennis players Croats in the USA
{ "redpajama_set_name": "RedPajamaWikipedia" }
'Star Wars: The Last Jedi' Blu-ray Reportedly Features Over Two Hours of Bonus Content By Patrick Cavanaugh - February 6, 2018 07:49 am EST The Last Jedi is still playing in theaters around the world, yet fans can't wait to purchase a copy of their own to analyze the film frame by frame and explore behind-the-scenes content. While official details about the film's home video release have yet to be announced, a leaked image of an advertisement for the film promises we'll get more than two hours of supplemental materials. The Last Jedi Blu-ray to have over 2 hours of bonus material, official announcement coming February 20 from r/StarWarsLeaks The above advertisement doesn't specify what content fans can enjoy, but we can most likely expect some of the previously released behind-the-scenes featurettes, in addition to the rumored 20 minutes of deleted scenes that writer/director Rian Johnson left on the cutting room floor. One thing audiences shouldn't expect is an extended edition of the film, with Johnson recently confirming that the theatrical cut of the film is the best version of the story. "The final cut of the movie is the best cut of the movie that we could come up with," Johnson shared during a recent Q&A. "Everything that was taken out, even the stuff that I love so much, was taken out for a reason at the end of the day. For me personally, I've enjoyed extended cuts of other stuff, but I don't think I would ever do one." The Last Jedi clocked in at more than two and a half hours, making it the longest chapter in the Star Wars saga. Despite the length of the theatrical cut, Johnson confirmed some of his favorite sequences had to be removed for an overall better experience. "It's one of those things where … and this always happens in the edit," the filmmaker expressed of cutting out scenes. 
"It's like suddenly you can see through the Matrix and you're like, 'Oh my God, that big sequence that I love so much and I can't imagine the movie without, if we lift it out and put these two things together, it plays in a slightly different way but it plays better.' And you just kind of have that, "::sigh:: Sh-t," and you hit delete. You don't think about all the stuff we built on set to get the shots, you don't think about all the work the actors and the crew did, you just hit one button and it's gone and the movie's better." The Last Jedi is in theaters now, with an expected announcement that the film will be available on Blu-ray, DVD and VOD in March. You can check out all of your pre-order options here. [H/T Reddit]
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
{"url":"https:\/\/openmdao.org\/newdocs\/versions\/latest\/_srcdocs\/packages\/approximation_schemes\/approximation_scheme.html","text":"# approximation_scheme.py\u00b6\n\nBase class used to define the interface for derivative approximation schemes.\n\nclass openmdao.approximation_schemes.approximation_scheme.ApproximationScheme[source]\n\nBases: object\n\nBase class used to define the interface for derivative approximation schemes.\n\nAttributes\n_approx_groupslist\n\nA list of approximation tuples ordered into groups of \u2018of\u2019s matching the same \u2018wrt\u2019.\n\n_colored_approx_groups: list\n\nA list containing info for all colored approximation groups.\n\n_approx_groups_cached_under_csbool\n\nFlag indicates whether approx_groups was generated under complex step from higher in the model hierarchy.\n\nA dict that maps wrt name to its fd\/cs metadata.\n\n_progress_outNone or file-like object\n\nAttribute to output the progress of check_totals\n\n_during_sparsity_compbool\n\nIf True, we\u2019re doing a sparsity computation and uncolored approxs need to be restricted to only colored columns.\n\n_jac_scattertuple\n\nData needed to scatter values from results array to a total jacobian column.\n\n__init__()[source]\n\nInitialize the ApproximationScheme.\n\nUse this approximation scheme to approximate the derivative d(of)\/d(wrt).\n\nParameters\nabs_keytuple(str,str)\n\nAbsolute name pairing of (of, wrt) for the derivative.\n\nsystemSystem\n\nContaining System.\n\nkwargsdict\n\nAdditional keyword arguments, to be interpreted by sub-classes.\n\ncompute_approximations(system, jac=None)[source]\n\nExecute the system to compute the approximate sub-Jacobians.\n\nParameters\nsystemSystem\n\nSystem on which the execution is run.\n\njacNone or dict-like\n\nIf None, update system with the approximated sub-Jacobians. 
Otherwise, store the approximations in the given dict-like object.","date":"2022-07-07 16:16:09","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.48992449045181274, \"perplexity\": 7641.5732403945385}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2022-27\/segments\/1656104495692.77\/warc\/CC-MAIN-20220707154329-20220707184329-00142.warc.gz\"}"}
Your CIBIL Score is a crucial element in determining your Credit Eligibility. Check your CIBIL Score today! Use the calculator* as a guide before applying for a home loan or car loan as it lets you understand what is the loan amount you are eligible for and what would the EMI be. So next time you approach a Bank or financial institution for loan, you can apply as per your eligibility. Also visit CIBIL MarketPlace to check out the best loan or credit offers customized around your credit eligibility.
{ "redpajama_set_name": "RedPajamaC4" }
Christmas Day in the Chances for Children Family! The large box of Christmas presents arrived! We have bought new minivans for the children to go to school! Last weekend it was Daddy Martin's birthday! He was so happy to come back to a surprise of cake, homemade chips, and party hats and balloons as well as a gift from Auntie Gabs and Uncle Russell and various birthday letters that the children wrote too!
{ "redpajama_set_name": "RedPajamaC4" }
Theming ======= Theming is a way to replace a set of [views](structure-views.md) with another without the need of touching the original view rendering code. You can use theming to systematically change the look and feel of an application. To use theming, you should configure the [[yii\base\View::theme|theme]] property of the `view` application component. The property configures a [[yii\base\Theme]] object which governs how view files are being replaced. You should mainly specify the following properties of [[yii\base\Theme]]: - [[yii\base\Theme::basePath]]: specifies the base directory that contains the themed resources (CSS, JS, images, etc.) - [[yii\base\Theme::baseUrl]]: specifies the base URL of the themed resources. - [[yii\base\Theme::pathMap]]: specifies the replacement rules of view files. More details will be given in the following subsections. For example, if you call `$this->render('about')` in `SiteController`, you will be rendering the view file `@app/views/site/about.php`. However, if you enable theming in the following application configuration, the view file `@app/themes/basic/site/about.php` will be rendered, instead. ```php return [ 'components' => [ 'view' => [ 'theme' => [ 'basePath' => '@app/themes/basic', 'baseUrl' => '@web/themes/basic', 'pathMap' => [ '@app/views' => '@app/themes/basic', ], ], ], ], ]; ``` > Info: Path aliases are supported by themes. When doing view replacement, path aliases will be turned into the actual file paths or URLs. You can access the [[yii\base\Theme]] object through the [[yii\base\View::theme]] property. For example, in a view file, you can write the following code because `$this` refers to the view object: ```php $theme = $this->theme; // returns: $theme->baseUrl . '/img/logo.gif' $url = $theme->getUrl('img/logo.gif'); // returns: $theme->basePath . '/img/logo.gif' $file = $theme->getPath('img/logo.gif'); ``` The [[yii\base\Theme::pathMap]] property governs how view files should be replaced. 
It takes an array of key-value pairs, where the keys are the original view paths to be replaced and the values are the corresponding themed view paths. The replacement is based on partial match: if a view path starts with any key in the [[yii\base\Theme::pathMap|pathMap]] array, that matching part will be replaced with the corresponding array value. Using the above configuration example, because `@app/views/site/about.php` partially matches the key `@app/views`, it will be replaced as `@app/themes/basic/site/about.php`. ## Theming Modules <span id="theming-modules"></span> In order to theme modules, [[yii\base\Theme::pathMap]] can be configured like the following: ```php 'pathMap' => [ '@app/views' => '@app/themes/basic', '@app/modules' => '@app/themes/basic/modules', // <-- !!! ], ``` It will allow you to theme `@app/modules/blog/views/comment/index.php` into `@app/themes/basic/modules/blog/views/comment/index.php`. ## Theming Widgets <span id="theming-widgets"></span> In order to theme widgets, you can configure [[yii\base\Theme::pathMap]] in the following way: ```php 'pathMap' => [ '@app/views' => '@app/themes/basic', '@app/widgets' => '@app/themes/basic/widgets', // <-- !!! ], ``` This will allow you to theme `@app/widgets/currency/views/index.php` into `@app/themes/basic/widgets/currency/index.php`. ## Theme Inheritance <span id="theme-inheritance"></span> Sometimes you may want to define a basic theme which contains a basic look and feel of the application, and then based on the current holiday, you may want to vary the look and feel slightly. You can achieve this goal using theme inheritance which is done by mapping a single view path to multiple targets. For example, ```php 'pathMap' => [ '@app/views' => [ '@app/themes/christmas', '@app/themes/basic', ], ] ``` In this case, the view `@app/views/site/index.php` would be themed as either `@app/themes/christmas/site/index.php` or `@app/themes/basic/site/index.php`, depending on which themed file exists. 
If both themed files exist, the first one will take precedence. In practice, you would keep most themed view files in `@app/themes/basic` and customize some of them in `@app/themes/christmas`.
{ "redpajama_set_name": "RedPajamaGithub" }
Q: Swift UICollectionViewCell Animated View Randomly Blank When Scrolling I am trying to use a UICollectionView to display a square MyCollectionViewCell that has animated GIFs in a UIImageView. Rows and sections are set up like so: override func numberOfSectionsInCollectionView(collectionView: UICollectionView) -> Int { return 1 } override func collectionView(collectionView: UICollectionView, numberOfItemsInSection section: Int) -> Int { return 50 } I'm using SwiftGif to get GIFs assigned to the cell's UIImageView like so: let gifs = [UIImage.gifWithName("gif1"), UIImage.gifWithName("gif2"), UIImage.gifWithName("gif3")] override func collectionView(collectionView: UICollectionView, cellForItemAtIndexPath indexPath: NSIndexPath) -> UICollectionViewCell { let cell = collectionView.dequeueReusableCellWithReuseIdentifier("gifCell", forIndexPath: indexPath) as! GifCollectionViewCell cell.gifImageView.image = gifs[indexPath.item % gifs.count] return cell } Everything for the most part works great. But my issue is that when scrolling, there are times when a cell is blank and no GIF appears. In the debugging process, I've added a label to the cell to display the indexPath.item, as you can see in code above, to make sure that the cell isn't getting passed over and have found that the label will always display indexPath even if the cell does not display a GIF. I have tested with regular images instead like so: let testImages = [UIImage(named: "testImage1"), UIImage(named: "testImage2"), UIImage(named: "testImage3")] override func collectionView(collectionView: UICollectionView, cellForItemAtIndexPath indexPath: NSIndexPath) -> UICollectionViewCell { let cell = collectionView.dequeueReusableCellWithReuseIdentifier("gifCell", forIndexPath: indexPath) as! GifCollectionViewCell cell.gifImageView.image = testImages[indexPath.item % testImages.count] return cell } and had no occurrences of blank cells. 
Even more curious, when I originally actually built the GIF in collectionView(...cellForItemAtIndexPath) I did not get any issues with blank cells either: let gifNames = ["gif1", "gif2", "gif3"] override func collectionView(collectionView: UICollectionView, cellForItemAtIndexPath indexPath: NSIndexPath) -> UICollectionViewCell { let cell = collectionView.dequeueReusableCellWithReuseIdentifier("gifCell", forIndexPath: indexPath) as! GifCollectionViewCell let gif = UIImage.gifWithName(gifNames[indexPath.item % gifNames.count]) cell.gifImageView.image = gif return cell } This original implementation would have worked if it weren't for the fact the GIF build process drastically affects the scrolling performance of the UICollectionView which is what forced me to change implementation in the first place. I have confirmed that this is not an issue with SwiftGif as I have replicated this issue in a different application using an animation render library and an AnimatedView in place of the UIImageView in MyCollectionViewCell and displayed animations instead of GIFs and got the same issue with cells randomly showing nothing instead of the animation when scrolling through the CollectionView. I have tried the StackOverflow solution here and implemented a custom UICollectionViewFlowLayout like so: class MyFlowLayout: UICollectionViewFlowLayout { override func layoutAttributesForElementsInRect(rect: CGRect) -> [UICollectionViewLayoutAttributes]? 
{ let attributes = super.layoutAttributesForElementsInRect(rect) let contentSize = self.collectionViewContentSize() let newAttrs = attributes?.filter { $0.frame.maxX <= contentSize.width && $0.frame.maxY <= contentSize.height} return newAttrs } } And assigned it in viewDidLoad() like so: self.collectionView?.collectionViewLayout = MyFlowLayout() In MyFlowLayout I have also tried: override func shouldInvalidateLayoutForBoundsChange(newBounds: CGRect) -> Bool { return true } I have also messed with various cell sizes (width 1 less than height, height 1 less than width etc) and messed around with some section inset combinations but have not managed to find the source of this issue that is causing the animated view to not to show up. Any help would be appreciated. Thanks A: I solved the issue by setting the gifImageView.image = nil in collectionView(...cellForItemAtIndexPath) before assigning the image. override func collectionView(collectionView: UICollectionView, cellForItemAtIndexPath indexPath: NSIndexPath) -> UICollectionViewCell { let cell = collectionView.dequeueReusableCellWithReuseIdentifier("gifCell", forIndexPath: indexPath) as! GifCollectionViewCell cell.gifImageView.image = nil // Added this line cell.gifImageView.image = gifs[indexPath.item % gifs.count] cell.infoLabel.text = String(indexPath.item) return cell } Not sure how/why this fixed it but it works now.
Ars Nova Singers is helmed by Founding Artistic Director/Many-Hat-Wearer Thomas Edward Morgan, Executive Director/Ship-Steerer Kimberly Brody, and a dynamic and generous Board of Directors. Learn more about the Ars Nova Singers leadership team below:

Founder, Artistic Director, Conductor
Recognized as a "many-splendored musician" (The Boulder Daily Camera) and leading interpreter of new choral music, Tom Morgan has led the evolution of Ars Nova Singers from a local choral…

Kimberly Brody
Kim grew up in Minnesota (land of the 25 below and blizzards!), attended St. Olaf College as a music performance major, and pursued graduate studies at Northwestern University. After a…

Elizabeth Swanson
Associate Conductor, Alto, & CU Boulder Associate Director of Choral Studies
Current Hometown: Lafayette, CO
What's Your Vocal "14er"? Verdi's Requiem performances with the Chicago Symphony Chorus and Orchestra (Riccardo Muti, conductor). These rehearsal and performance experiences were life-altering!
What Brings…

"Under the skillful direction of Thomas Edward Morgan, [Ars Nova Singers] put across the full expressive power of this amazing music."

Dan has worked with Wells Fargo for the last six years, and currently serves in a corporate training role. He supports a wide variety of positions across his branch and…

Kerren Bergman
CEO, Hyde Engineering + Consulting, Inc.
Kerren is a Boulder native and currently serves as CEO of Hyde Engineering + Consulting, Inc., a global engineering consulting firm. She has worked with Hyde Engineering for over two…

Cate Colburn-Smith
Marketing Strategist, NetApp
Cate has over 25 years experience in high tech marketing and has worked for agencies, startups, and large corporations. She is an expert in brand messaging and positioning, product marketing,…

Bruce Doenecke
Retired Pediatrician
Bruce made the move to Denver from Cincinnati to pursue residency training in Pediatrics at The Denver Children's Hospital. Since 1988, he's practiced general pediatrics as a member of the…

Brant Foote
Senior Scientist Emeritus, National Center for Atmospheric Research
Brant received his PhD in Atmospheric Sciences from the University of Arizona in 1970, and took a Postdoctoral Fellowship at the National Center for Atmospheric Research. He held a variety…

Janice Moore

Jan Osburn
Private Studio Music Educator
Jan is a music educator with a strong belief in nurturing creativity. She retired last August after teaching 4 years in the Twin Cities, and 36 years in the Boulder…

Linda Weise

Interested in being considered for board membership? Please contact Executive Director Kimberly Brody.
Alison Gary
August 15, 2017 | Lifestyle

Weekend in Raleigh: Our Kid's First Concert

Music is something that means a lot to my family. Karl and I have attended several music festivals, and I think the combination of music and roughing it has helped solidify our relationship. We regularly attend concerts and music is always playing in our home. Emerson inherited our love of music; she's always singing, loves playing the piano, and regularly plays music from her kid-friendly Spotify playlists.

We've taken Emerson to nearby cafes for open mics and local music performances, but always thought that once she turned 8 we'd take her to her first bona fide concert. Our favorite band is My Morning Jacket, and has been for over a decade. Their album Evil Urges came out the summer before Emerson was born and was the soundtrack to my pregnancy and her infancy (she was a Highly Suspicious baby). I even saw My Morning Jacket at Bonnaroo while Emerson was in my belly. So when deciding what first concert we'd take our kid to, it wasn't a hard decision. The decision was where, and how.

While My Morning Jacket would be coming to a venue near us, we thought it would be more fun to make this a true event and travel to another city. Also, this made it possible for us to see My Morning Jacket twice this tour, once as a family and once as a couple. Our goal was to make this a fun memory for Emerson.

My Morning Jacket in Raleigh, North Carolina

Last fall, I attended a music festival in Raleigh, North Carolina and loved the city. It's easy to navigate, safe, great places to eat, cool things to do during the day for all ages, and I already had visited the concert venue and knew its proximity to hotels. It's only a five-hour drive from DC, which made it a kid-friendly weekend road trip. We booked a two-room suite at the Sheraton Raleigh Hotel so Emerson could have her own room and we parents could have a touch of privacy.
The Sheraton is literally a block from Red Hat Amphitheater, the location of the concert, so if Emerson got overwhelmed or fell asleep, it would be easy to dash back to the safety and comfort of our hotel. The Sheraton Raleigh Hotel was also a great choice because it has an indoor pool; Emerson LOVES to swim, and I've learned over the years traveling with a kid that an indoor pool is better than an outdoor one because it's not as popular with adults and is a go-to when the weather turns foul.

The concert was on a Sunday night; we drove down Saturday and arrived just before dinner, and just before a major thunderstorm. After the drive and unpacking, we had no desire to walk around in the rain and try to find a place to eat. We decided to splurge and order room service… and were surprised to find the prices were cheaper than average restaurants at home. $6 for 4 giant juicy chicken fingers and a huge serving of French fries? Yes please!

We didn't want Emerson to go to bed early and wake early, so after dinner we went down to the pool and ended up being there for almost two hours. The pool wasn't the gem of the hotel; it was a bit old and not well maintained, but it had a hot tub, was large enough for laps, and for most of the time we had the place completely to ourselves.

Exploring Raleigh, North Carolina as a Family

Looking for a family-friendly restaurant for breakfast, we came across Big Ed's City Market Restaurant. Like everything else in downtown Raleigh, it was a comfortable walking distance from our hotel. It's super cute and quirky and located in the Downtown City Market, a part of Raleigh I didn't visit the last time I was there. Not the place for a heart-healthy low-fat low-carb paleo clean eating breakfast, but it was delicious and the waitstaff was so friendly.

We did a bit of research and heard that the North Carolina Museum of Natural Sciences was really great and very kid-friendly.
On our walk to the museum, we passed by a mural supporting the rights of protesters by Dare Coulter, commissioned by the ACLU of North Carolina. It's a gorgeous and inspiring piece; after we explained to Emerson what it represented, she asked to have her picture taken in front of it.

The North Carolina Museum of Natural Sciences lives up to all of its glowing reviews. Upon entering (it's free to visit), you see several examples of local wildlife. Travel around the museum and you see everything from examples of local stone to examples of local marshlands, aquariums full of fish and marine life, and interactive exhibits. We could have stayed there for hours, but around 1pm our stomachs started rumbling. We read that the cafeteria inside the museum was surprisingly good and decided to stay put for lunch. Again, the reviews were not wrong. We each got a fresh and delicious meal (I highly recommend the veggie burger!). Each kid meal comes with a small plastic figurine of an animal; Emerson got a sea lion and it became her American Girl doll's favorite toy.

Walking back from the museum, we passed the Briggs Hardware Building. Built in 1874, the building has had several different lives: hardware store, YMCA, church, armory, and office space. The main floor is now home to the City of Raleigh (COR) museum. Emerson was instantly fascinated by the scale model of Raleigh, and we spent a good half hour in there, me reading up on the city and she imagining being small enough to drive down the roads of Mini Raleigh.

We knew the only way Emerson would survive a concert where the headliner started after her regular bedtime was to take a nap. We headed back to the hotel, but Emerson was still full of energy. Back to the pool, where we swam lots of laps and played lots of games of Marco Polo to wear all of us out. Back to the room, where we took cozy warm showers, ordered some more chicken fingers, and all ended up sleeping for a couple of hours.
We woke up later than we expected, but decided not to rush. We got dressed and headed down to the venue.

Attending our First Concert as a Family

Red Hat Amphitheater is an outdoor concert venue right behind the Raleigh Convention Center. Unlike large venues in the DC area, getting in was pretty easy. They checked bags and pockets, but we didn't have to stay in line and got there in time to see some of the opener, the genius that is Gary Clark Jr. Lines for concessions were long, but organized and moved relatively quickly. The audience at a My Morning Jacket show is usually friendly, but being in the south, we found the crowd even more friendly, accommodating, and laid back.

I am a member of My Morning Jacket's fan club (RIP Roll Call), so I was able to order tickets before the public. We ended up with third row seats in the center section right on an aisle. While I doubt there's any bad seat at Red Hat, this was a perfect location for being there with a child: not far from bathrooms, able to peek over heads at the edge of the crowd to see the show, quick access to the exit in case of a situation.

Emerson came equipped with her headphones (we bought these for her when she was a baby, they still work great and likely will fit her for several more years) and a bag that had a couple small toys – some Littlest Pet Shop figurines, a fidget spinner, a note pad and pen. These were great for the time between acts to keep all of us occupied. We couldn't bring outside food or drink, but bought a couple bottles of water and hot dogs at concession.

Emerson was not the only child at the show. We saw babies wrapped to their parents' chests and backs, kids Emerson's age dancing in the aisles, and plenty of tweens and teens with their moms and/or dads. While there were clearly many people enjoying alcohol and likely other substances, no one was unruly, aggressive, or obviously inebriated.
We didn't smell weed until way after the sun went down, and those around us were considerate of Emerson; if they were interested in doing such a thing, they didn't do it near us.

The staff at Red Hat Amphitheater was phenomenal. They were on alert, but didn't want to ruin anyone's good time. Emerson was allowed to stand on the chair to be able to see the stage, they didn't mind people dancing in the aisles as long as there was a clear path, and when people got out of line, they gently spoke to them and resumed order.

Emerson was nervous about standing on the chairs; she felt it was against the rules, even when one of the staff told her it was okay. She swore she could see, but I knew at my height I had to be positioned just so to see the show between shoulders. She looked bored and sleepy, and I was worried we wouldn't even get through half the show. I finally just picked her up, put her on the chair behind me, and wrapped her arms around my neck. Once she could see the stage is when she finally connected and realized how awesome a concert can be. While it may be tempting to get the cheap lawn seats for a kid's first concert, if you can splurge for closer seats it may make all the difference. Close seats for a pop star may cost an arm and a leg, but smaller artists at smaller venues may have close seats for little more than lawn tickets.

Red Hat is such a cool location; as the band played, a train went past behind the stage while the sun was setting. The night was warm but not too hot, there weren't any bugs, and there was a light breeze. The sky was so clear we could see stars as it darkened. And as the sky darkened, the light show increased. Emerson was excited when she recognized songs, though she was confused and bored by jam sessions and when the band switched up songs. It was a funny contrast to the diehard fans around us, who were excited by the very same things!
It was clear Emerson was getting tired; her usual time to head upstairs for bed is 8pm, she was turning into a pumpkin by 9:45, and the only thing that kept her going was hearing songs she recognized. When it seemed the band was finished with their performance, she immediately asked us to go back to the hotel. We explained to her what an encore was and said it may be worth it to stay. She agreed that if she knew the song we could stay; if not, we'd leave.

The band came on stage and played Wordless Chorus. They turned on the disco ball hiding behind the stage lights, and the whole amphitheater, all the people, and the convention center were covered in light. Emerson was in awe, her mouth dropped open looking at everything around her. But after the song ended, as we promised, we headed out.

Emerson was sleepy and a bit grumpy, but as we walked out she saw a glow stick on the path. She raced to it and saw it had the connector to turn it into a bracelet. We hooked it onto her wrist and she was THRILLED. We as a family walked hand in hand back to the Sheraton, singing along to Touch Me I'm Going to Scream Pt. 1. Karl and I both sang, "I can see it by the way you smile, I'm smiling too, I see myself in you" and looked down at Emerson, who was beaming between yawns. It was a late night, but such a great night.

The next morning we got on the road and were back home by the afternoon. We think we'll make this a yearly tradition as a family; each summer a My Morning Jacket show in a different US city. It would be such a great way to see this country and get to know different cities a bit better while enjoying a band that is such an important part of our family.

What is the Best Age for a Kid's First Concert?

This really depends on your child. I've seen kids still in diapers rocking out at shows and attending music festivals, having a blast. I don't think our child would have enjoyed it before the age she is now.
We started small with local bands at cafes and venues familiar to her, leaving before it got too late. It was a great way to see how she reacted to the noise, the lights, the crowds. All the things that we adults find thrilling about a live show can be overwhelming, overstimulating, and terrifying to some children. You know your child best; be sure this will be a positive experience for her as well as you.

Tips for Taking a Child to His or Her First Concert

If you're attending a concert out of town, arrive the day before the show. Give your family a chance to get acclimated as well as plenty of sleep before a late night of music.

Consider a Sunday show. In most areas, shows on Sundays end earlier than those on other days of the week. We found this show ended before 10:00, while their show in Maryland the following Thursday ended at 11:00.

Take a nap. Not just the child, the whole family. Not only will the nap help your child stay awake later, he or she will also be in better spirits. And parents, take a nap as well; it's a lot easier to deal with a grumpy or overwhelmed or overly excited kid in a crowd when you're well rested.

Get ear protection. These headphones from Peltor are comfortable AND cute. Emerson feels cool wearing them and we feel good knowing her hearing is protected.

Bring distractions. There are gaps of time between sets. Mad Libs, a fidget spinner, and a notepad and pen for hangman and doodling are all great things to use. I do NOT recommend bringing a tablet or other electronic device to entertain your child. Even when music is playing, the venue will be loud. Also, we all know that electronic devices can be too entertaining, and it's hard to detach a child from them when the music starts or when connecting with others in the audience.

Plan for the worst, hope for the best. We went in knowing there was a good chance she'd want to leave after one song. And together we agreed that if that happened, we'd leave. Sure, it's our favorite band. Sure, we traveled a long way and paid a lot for our tickets. But what a horrible experience for Emerson if we forced her to stay. Between sets we went to the bathroom; when there was a jam session we again visited the bathroom, even though she thought she didn't need to, so we wouldn't have an emergency in the middle of our favorite song. We made sure Emerson had a card safety-pinned in her shorts that had our first names, cell phone numbers, and an emergency number besides us in case we got separated. We also took a picture of Emerson that evening on both of our phones in case we got separated, so we had a super recent image showing what she was wearing. We introduced ourselves to the staff near our section so Emerson was familiar with them in case we got separated, and so they knew she belonged with us and no other people. It's awful we have to think this way, but it's better to be safe than sorry no matter how friendly and chill the situation.

Stay sober. Refraining from alcohol and other substances is a good idea for a multitude of reasons. You're staying sharp so you're at your best if an emergency arises. You're showing your child that even though there are people under the influence at shows, it can still be a blast without being inebriated. It also puts you in a good light with the staff, so they are more likely to make accommodations for you and your child (let you move up, stand on chairs, maybe even invite you backstage).

Have an exit plan. It's easier to walk a long distance into a venue than a long distance from it. It's easier to deal with an overtired child in your car than walking through a parking lot. When arriving at the venue, consider these things when parking. Maybe you arrive extra early and have a relaxed picnic dinner in the venue before others arrive so you can park near the exit. Maybe you choose to stay in a hotel walking distance from the venue so you can be from concert seat to bed in a jiffy.
We all know kids can go from chill to complete meltdown in the blink of an eye, especially when overtired and overstimulated. Being prepared can help ensure your child's first concert is an experience they will always treasure.

That picture of you with Emerson on the chair behind you/around your shoulders is amazing!! What a great family experience! Your tips are really good too. My husband & I have always had the agreement when traveling with our son that we never overplan or overextend ourselves. Even when we took him to Disney World at 5, we knew we would not see & do it "all." I have to say that almost all of our family vacations have been fantastic b/c of this, and our son is a fantastic traveling companion!

I love this! We love taking our kids to concerts. Our teens have gone to U2 the last two times they came through and we are ending summer vacation by going to Bruno Mars this weekend. My older teen just went to Lollapalooza last weekend and had a blast at her first music festival. I love that my kids are now asking "What's our next concert, mom?"

Michelle Van Ellis
Way to go guys! Emerson is now instantly cool to anyone who hears she went to her first concert when she was 8.

Alecia Ramsay
We moved to the Raleigh area (my husband's home town) a year ago & are really liking it! Glad you did too.

Louise Davies
Our boys have been going to concerts since they were 2 years old – starting with Bruce Springsteen. It is what we do as a family! They are now 16 and 20 and we just drove from Winston-Salem NC to Philadelphia to see U2 over Father's Day weekend.
Next up is Kendrick Lamar next week, and yes our tastes in music are very eclectic.

Krista Alexander
YES! YES! YES! All of the above! We, too, are a family with a love of music. Our now almost 16 year old son could sing before he could talk and has perfect rhythm. We were always taking him to small local things and a handful of Christian concerts/music festivals when he was small. When he was 10, we went to the Gentlemen of the Road Music Festival with Mumford & Sons. 2 days of music and mayhem. We camped, and on the first night we were on the stage rail for Edward Sharpe & the Magnetic Zeros. Like your experience, everyone around us was mindful of the kids (there were lots of them) and he was absolutely in awe. Now he drags us to every concert possible and we are happy to oblige. It's hands down one of our favorite family memories.
\section{Introduction}

The nuclear symmetry energy, i.e., the energy difference between removing a neutron and removing a proton from nuclear matter \cite{2016PrPNP..91..203B}, is an important topic of experimental and theoretical nuclear (astro)physics, as it affects a large number of phenomena in nuclear structure physics \cite{2012RPPh...75b6301B}, heavy-ion collisions \cite{2002Sci...298.1592D,2008PhR...464..113L,2012PhRvC..86a5803T}, and astrophysics, such as neutron star (NS) structure \cite{2019EPJA...55..117L,2019arXiv190107673T,2014EPJA...50...40L} or, recently, NS mergers \cite{2017PhRvL.119p1101A,2018PhRvL.121p1101A,2019PhRvX...9a1001A, 2017RPPh...80i6901B,2019PrPNP.10903714B}. At least in the last two scenarios, the nuclear system is at a non-negligible finite temperature of the order of several tens of MeV. This requires considering the free energy as the fundamental thermodynamical quantity. Therefore, in recent years some phenomenological methods, such as a momentum-dependent effective interaction \cite{2009PhRvC..79d5806M} and the nuclear energy-density functional theory \cite{2012JPhCS.342a2003F}, were applied to the study of the behavior of the free energy of nuclear matter as a function of the baryon density. More recently, microscopic calculations based on the self-consistent Green's function method with nuclear forces derived from chiral effective field theory were performed \cite{Carbone_2019}. Moreover, we have computed the free energy up to large nucleon densities $\rho\lesssim0.8\;\text{fm}^{-3}$ and temperatures $T\lesssim50\;\text{MeV}$ within the theoretical Brueckner-Hartree-Fock (BHF) method, and provided convenient parametrizations for practical use. Under these circumstances, the nuclear free (symmetry) energy depends on the partial densities $\rho_n$, $\rho_p$, and temperature $T$.
An important feature is the dependence on the isospin asymmetry $\beta\equiv (\rho_n-\rho_p)/(\rho_n+\rho_p)$ at fixed nucleon density $\rho=\rho_n+\rho_p$; for cold matter it has been demonstrated that a quadratic dependence $\sim \beta^2$ is rather accurate \cite{1991PhRvC..44.1892B,1999PhRvC..60b4605Z}. However, at finite temperature this approximation becomes less reliable \cite{2004PhRvC..69f4001Z,2013NuPhA.902...53T,2017NuPhA.961...78T, 2015PhRvC..92a5801W}, and one should seek to go beyond this lowest-order parametrization. This is the focus of the present article, in which we study in detail the isospin dependence of the finite-temperature free energy and provide parametrizations that go beyond the quadratic law. We also give a simple application to NS structure in order to estimate the magnitude of the effect in practical applications.

We consider in this work two microscopic EOSs that have been derived within the BHF formalism \cite{1957RSPSA.239..267G,1976PhR....25...83J,1979NuPhA.328....1D, Baldo1999,2012RPPh...75b6301B} based on realistic two-nucleon ($NN$) and compatible three-nucleon forces (TBF) \cite{1997A&A...328..274B,2004PhRvC..69a8801Z,2002NuPhA.706..418Z, 2008PhRvC..77c4316L,2008PhRvC..78b8801L}, namely those employing the Argonne $V_{18}$ \cite{1995PhRvC..51...38W} and the Bonn~B \cite{1987PhR...149....1M,Machleidt1989} $NN$ potentials, respectively. Both feature reasonable properties at (sub)nuclear densities, in agreement with nuclear-structure phenomenology \cite{2008PhRvC..78b8801L,2012ChPhL..29a2101L,2016ChPhL..33c2101Q, 2013PhRvC..87d5803T}, and are also fully compatible with recent constraints obtained from the analysis of the GW170817 NS merger event \cite{2018ApJ...860..139B,2019JPhG...46c4001W,2020EPJA...56...63W}, as well as from NS cooling \cite{2018MNRAS.475.5010F,2019MNRAS.484.5162W}. Our paper is organized as follows.
In Sec.~\ref{s:bhf} we briefly review the computation of the free energy in the finite-temperature BHF approach and give some details of the fitting procedure. In Sec.~\ref{s:res} we present the numerical results for the free energy and some model calculations of hot NS structure. Conclusions are drawn in Sec.~\ref{s:end}.

\section{Formalism}
\label{s:bhf}

The calculations for hot asymmetric nuclear matter are based on the Brueckner-Bethe-Goldstone (BBG) theory \cite{1957RSPSA.239..267G,1976PhR....25...83J,1979NuPhA.328....1D, Baldo1999,1991PhRvC..44.1892B,1999PhRvC..60b4605Z} and its extension to finite temperature \cite{1986NuPhA.453..189L,1999PhRvC..59..682B,1994PhR...242..165B, 2004PhRvC..69f4001Z}. Here we give only a brief review for completeness. The free energy density in the ``frozen correlations'' approximation \cite{1958NucPh...7..459B,1959NucPh..10..181B,1959NucPh..10..509B, Baldo1999,1986NuPhA.453..189L,1999PhRvC..59..682B,2006A&A...451..213N, 2006PhRvD..74l3001N,2010PhRvC..81b5806L,2011PhRvC..83b5804B,2010A&A...518A..17B} is
\begin{equation}
  f = \rho \frac{F}{A} = \sum_{i=n,p} \left[ 2\sum_k n_i(k) \left( \frac{k^2}{2m_i} + \frac{1}{2}U_i(k) \right) - Ts_i \right] \:,
\label{e:fn}
\end{equation}
where
\begin{equation}
  s_i = - 2\sum_k \Big( n_i(k) \ln n_i(k) + [1-n_i(k)] \ln [1-n_i(k)] \Big)
\label{eq:entr}
\end{equation}
is the entropy density of component $i$ treated as a free Fermi gas with spectrum $e_i(k)$. At finite temperature,
\begin{equation}
  n_i(k) = \left[ \exp\Big(\frac{e_i(k)-\tilde{\mu}_i}{T}\Big) + 1 \right]^{-1}
\end{equation}
is a Fermi distribution, where the auxiliary chemical potentials $\tilde{\mu}_{n,p}$ are fixed by the condition $\rho_i = 2\sum_k n_i(k)$.
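The density constraint and the entropy of Eq.~(\ref{eq:entr}) can be made concrete numerically. The following Python sketch is purely illustrative and is not the paper's BHF code: it assumes the free nonrelativistic spectrum $e(k)=(\hbar k)^2/2m$ (i.e., it omits the single-particle potential $U_i$), fixes the auxiliary chemical potential $\tilde\mu$ by bisection from $\rho = 2\sum_k n(k)$, and then evaluates the entropy per nucleon for neutron matter.

```python
import math

HBARC = 197.327   # MeV fm
MN = 939.565      # neutron mass, MeV

def occupation(k, mu, T):
    """Fermi distribution n(k) for the free spectrum e(k) = (hbar*c*k)^2 / (2 m c^2)."""
    e = (HBARC * k) ** 2 / (2.0 * MN)
    x = (e - mu) / T
    if x > 50.0:      # guard against overflow for deeply unoccupied states
        return 0.0
    if x < -50.0:
        return 1.0
    return 1.0 / (math.exp(x) + 1.0)

def density(mu, T, kmax=10.0, n=2000):
    """rho = 2 * int d^3k/(2pi)^3 n(k) = (1/pi^2) int k^2 n(k) dk   [fm^-3]."""
    dk = kmax / n
    return sum(k * k * occupation(k, mu, T)
               for k in (dk * (i + 0.5) for i in range(n))) * dk / math.pi ** 2

def chemical_potential(rho, T):
    """Fix mu by bisection from the density constraint (density is monotonic in mu)."""
    lo, hi = -500.0, 500.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if density(mid, T) < rho:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def entropy_per_nucleon(rho, T, kmax=10.0, n=2000):
    """S/A from the mean-field entropy: s = (1/pi^2) int k^2 sigma(n(k)) dk, divided by rho."""
    mu = chemical_potential(rho, T)
    dk = kmax / n
    s = 0.0
    for i in range(n):
        k = dk * (i + 0.5)
        nk = occupation(k, mu, T)
        if 0.0 < nk < 1.0:
            s -= k * k * (nk * math.log(nk) + (1.0 - nk) * math.log(1.0 - nk))
    return s * dk / math.pi ** 2 / rho
```

In the degenerate regime the resulting $S/A$ grows roughly linearly with $T$, as expected from the Sommerfeld expansion; in the full BHF calculation the same expressions are evaluated with the interacting spectrum $e_i(k)$.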
The single-particle energy
\begin{eqnarray}
  e_1 &=& \frac{k_1^2}{2m_1} + U_1 \:, \\
  U_1(\rho,x_p) &=& {\rm Re} \sum_2 n_2 \langle 1 2| K(\rho,x_p;e_1+e_2) | 1 2 \rangle_a
\label{eq:uk}
\end{eqnarray}
is obtained from the interaction matrix $K$, which satisfies the self-consistent equation
\begin{equation}
  K(\rho,x_p;E) = V + V \;\text{Re} \sum_{1,2}
  \frac{|12 \rangle (1-n_1)(1-n_2) \langle 1 2|}{E - e_1-e_2 +i0}\, K(\rho,x_p;E) \:.
\label{eq:BG}
\end{equation}
Here $E$ is the starting energy and $x_p=\rho_p/\rho$ is the proton fraction. The multi-indices 1,2 denote in general momentum, isospin, and spin. Two choices for the realistic $NN$ interaction $V$ are adopted in the present calculations \cite{2008PhRvC..78b8801L}: the Argonne~$V_{18}$ \cite{1995PhRvC..51...38W} and the Bonn~B (BOB) \cite{1987PhR...149....1M,Machleidt1989} potential. They are supplemented with microscopic TBF employing the same meson-exchange parameters as the two-body potentials. The TBF are reduced to an effective two-body force and added to the bare potential in the BHF calculation; see Refs.~\cite{1989PhRvC..40.1040G,2002NuPhA.706..418Z, 2008PhRvC..77c4316L,2008PhRvC..78b8801L} for details.

The knowledge of the free energy allows one to derive all necessary thermodynamical quantities in a consistent way; namely, one defines the ``true'' chemical potentials $\mu_i$, pressure $p$, and internal energy density $\epsilon$ as
\begin{eqnarray}
  \mu_i &=& \frac{\partial f}{\partial \rho_i} \:, \\
  p &=& \rho^2 {\partial{(f/\rho)}\over \partial{\rho}}
     = \sum_i \mu_i \rho_i - f \:,
\label{e:eosp} \\
  \epsilon &=& f + Ts \:,\quad s = -{{\partial f}\over{\partial T}} \:.
\label{e:eose}
\end{eqnarray}

\begin{table}[t]
\caption{Parameters of the fit for the free energy per nucleon $F/A$, Eq.~(\ref{e:fitf}), for symmetric nuclear matter (SNM), asymmetric ($\beta=0.6$) nuclear matter (ANM), and pure neutron matter (PNM) with the V18 and BOB EOSs.}
\medskip
\def\myc#1{\multicolumn{1}{c}{$#1$}}
\renewcommand{\arraystretch}{1.2}
\begin{ruledtabular}
\begin{tabular}{lr|rrrr|rrrrr}
 & & \myc{a} & \myc{b} & \myc{c} & \multicolumn{1}{c|}{$d$} &
 $\tilde{a}$ & $\tilde{b}$ & $\tilde{c}$ & $\tilde{d}$ & $\tilde{e}$ \\
\hline
\multirow{3}{*}{V18} & SNM & -54 & 363 & 2.68 & -8 & -149 & 211 & -58 & 81 & 2.40 \\
 & ANM & -23 & 473 & 2.72 & -3 & -140 & 200 & -61 & 82 & 2.36 \\
 & PNM & 38 & 668 & 2.78 & 6 & -91 & 153 & -26 & 38 & 2.64 \\
\hline
\multirow{3}{*}{BOB} & SNM & -60 & 495 & 2.69 & -9 & -124 & 203 & -60 & 80 & 2.38 \\
 & ANM & -21 & 624 & 2.78 & -4 & -119 & 193 & -59 & 78 & 2.36 \\
 & PNM & 52 & 860 & 2.89 & 4 & -82 & 149 & -25 & 36 & 2.67 \\
\end{tabular}
\end{ruledtabular}
\label{t:fit}
\end{table}

\begin{figure}[t]
\centerline{\includegraphics[scale=0.35]{FIG1}}
\caption{Free energy per nucleon as a function of asymmetry for different densities at $T=0$ (top panels) and $T=50\;\text{MeV}$ (middle panels) for the V18 (left panels) or BOB (right panels) EOS. Dashed lines show the parabolic approximation, Eq.~(\ref{e:parab}). The bottom panels show the deviation between numerical results and the linear [Eq.~(\ref{e:parab}), solid curves] or quadratic [Eq.~(\ref{e:fsym}), dashed curves] $\beta^2$ fits.}
\label{f:fa}
\end{figure}

\begin{figure*}[t]
\centerline{\includegraphics[scale=0.45]{FIG2}}
\caption{Free symmetry energies per nucleon $F_\text{sym,2}/A$ [in linear, Eq.~(\ref{e:parab}) (dashed curves), or quadratic, Eq.~(\ref{e:fsym}) (solid curves), approximation] and $F_\text{sym,4}/A$ as functions of nucleon density or temperature for fixed temperatures/densities, respectively. For comparison, the $T=0$ FSUGold and IU-FSU results of Ref.~\cite{2012PhRvC..85b4302C} are plotted as short dotted and dash-dotted lines, respectively, in the lower row.}
\label{f:fsym}
\end{figure*}

For the case of asymmetric nuclear matter, one might expand the free energy for fixed total density and temperature in terms of the asymmetry parameter $\delta=\beta^2=(1-2x_p)^2$,
\begin{equation}
  f(\delta) \approx f(0) + \delta f_\text{sym,2} + \delta^2 f_\text{sym,4} \:.
\label{e:fexp}
\end{equation}
Limiting the expansion to the second term, one obtains the symmetry energy as the difference between pure neutron matter (PNM) and symmetric nuclear matter (SNM),
\begin{subequations}
\begin{align}
  f_\text{sym,2} &= f(1) - f(0) \:, \label{e:parab2} \\
  f_\text{sym,4} &= 0 \:, \label{e:parab4}
\end{align}
\label{e:parab}
\end{subequations}
which is usually a good approximation at zero temperature \cite{1991PhRvC..44.1892B,1999PhRvC..60b4605Z,2010A&A...518A..17B}, and is also used at finite temperature \cite{2004PhRvC..69f4001Z}. It has, however, been pointed out \cite{2013NuPhA.902...53T,2017NuPhA.961...78T,2015PhRvC..92a5801W, 2016PhRvC..93c5806T,2016PhRvC..94b5806N,2019PhRvC..99b5806M, PhysRevC.96.054311,2019PhRvC.100a5808Z,2018PrPNP..99...29L, 2018PhRvC..97b5801L,2018PhRvC..97e1302W} that at least the kinetic part of the free energy density [first term in Eq.~(\ref{e:fn})] violates the parabolic law, in particular at high temperature.
We therefore extend the expansion to second order and compute $f_\text{sym,4}$ in the following way: inverting the system of equations for $f(0)$, $f(\alpha)$, $f(1)$, where $\alpha$ is an arbitrarily chosen value (we use $\alpha=0.6^2$, which corresponds to a typical $x_p=0.2$ in NS matter), one obtains
\begin{subequations}
\begin{align}
  f_\text{sym,2} &= \frac{\alpha^2[f(1)-f(0)]-[f(\alpha)-f(0)]}{\alpha^2-\alpha} \:,
  \label{e:fsym2} \\
  f_\text{sym,4} &= \frac{\alpha[f(1)-f(0)]-[f(\alpha)-f(0)]}{\alpha-\alpha^2} \:,
  \label{e:fsym4}
\end{align}
\label{e:fsym}
\end{subequations}
in which $f(0)$, $f(\alpha)$, $f(1)$ depend on total density and temperature.

Following Ref.~\cite{2019PhRvC.100e4335L}, we provide analytical fits of these dependencies of the numerical results in the required ranges of density ($0.05\;\text{fm}^{-3} \lesssim \rho \lesssim 1\;\text{fm}^{-3}$) and temperature ($5\;\text{MeV} \leq T \leq 50\;\text{MeV}$), using the following functional form for the free energy per nucleon:
\begin{eqnarray}
  \frac{F}{A}(\rho,T) &=& a \rho + b \rho^c + d \nonumber\\
  && +\, \tilde{a} t^2 \rho + \tilde{b} t^2 \ln(\rho)
     + ( \tilde{c} t^2 + \tilde{d} t^{\tilde{e}} )/\rho \:,
\label{e:fitf}
\end{eqnarray}
where $t=T/(100\;\text{MeV})$, and $F/A$ and $\rho$ are given in MeV and $\text{fm}^{-3}$, respectively. The parameters of the fits are listed in Table~\ref{t:fit} for SNM, asymmetric nuclear matter with $x_p=0.2$ (ANM), and PNM, for the different EOSs we are using. The rms deviations between fits and data are better than $0.3\;\text{MeV}$ for all EOSs.

\section{Results}
\label{s:res}

Fig.~\ref{f:fa} shows the free energy per nucleon as a function of the asymmetry parameter $\delta$ for different densities and at temperatures $T=0$ (upper row) and $T=50\;\text{MeV}$ (middle row), for both EOSs.
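To make the inversion of Eq.~(\ref{e:fsym}) and the parametrization of Eq.~(\ref{e:fitf}) concrete, here is a short Python sketch. It is our illustration only (function and variable names are ours, not from the paper's numerical code); the parameter tuple is transcribed from the V18 SNM row of Table~\ref{t:fit}.

```python
import math

def sym_coefficients(f0, fa, f1, alpha=0.6 ** 2):
    """Invert f(delta) = f0 + delta*fsym2 + delta^2*fsym4 sampled at
    delta = 0, alpha, 1 (Eqs. 9a, 9b); alpha = beta^2 = 0.36 as in the text."""
    fsym2 = (alpha ** 2 * (f1 - f0) - (fa - f0)) / (alpha ** 2 - alpha)
    fsym4 = (alpha * (f1 - f0) - (fa - f0)) / (alpha - alpha ** 2)
    return fsym2, fsym4

# V18 SNM parameters of Table I: (a, b, c, d, a~, b~, c~, d~, e~)
V18_SNM = (-54.0, 363.0, 2.68, -8.0, -149.0, 211.0, -58.0, 81.0, 2.40)

def free_energy_fit(rho, T, p=V18_SNM):
    """F/A in MeV from Eq. (10); rho in fm^-3, T in MeV, t = T/(100 MeV)."""
    a, b, c, d, at, bt, ct, dt, et = p
    t = T / 100.0
    return (a * rho + b * rho ** c + d
            + at * t ** 2 * rho + bt * t ** 2 * math.log(rho)
            + (ct * t ** 2 + dt * t ** et) / rho)
```

For a free energy that is exactly quadratic in $\delta$, the inversion recovers $f_\text{sym,2}$ and $f_\text{sym,4}$ identically; for the transcribed V18 SNM parameters, the $T=0$ curve of the fit has its minimum near the empirical saturation density, a quick consistency check of the transcription.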
The linear approximation, Eqs.~(\ref{e:fexp}) and (\ref{e:parab}),
is indicated by dashed lines in the figure,
and the deviations from the linear [Eq.~(\ref{e:parab})]
or quadratic [Eq.~(\ref{e:fsym})] laws at $T=50\;\text{MeV}$
are indicated in the lower row.
One observes that the linear law generally provides a very good fit,
even at low density and high temperature,
where the deviations may reach a few percent.
With the quadratic law, the deviations remain below $2\;\text{MeV}$
over the whole parameter space $[\rho,T,\beta]$.
In this case the overall variances are 0.47 and $0.54\;\text{MeV}$
for the V18 and BOB EOS, respectively.
In order to compare the magnitude of violation of the linear or
quadratic $\beta^2$ laws with those of other frequently used
finite-temperature nuclear EOSs,
we performed the previous analysis also for the
SFHo \cite{2013ApJ...774...17S} and the
HShen \cite{1998PThPh.100.1013S,2011ApJS..197...20S} EOS
and report the values of the variance
$\langle \Delta {F/A} \rangle_\text{rms}$
for both the linear and quadratic law in Table~\ref{t:rms}.
We observe that in all cases the quadratic law is a significant
improvement, by at least a factor of three,
but the linear law is also a very reasonable approximation.

\begin{table}[b]
\caption{
Quality $\langle \Delta {F/A} \rangle_\text{rms}$ (in MeV)
of the linear or quadratic $\beta^2$ laws
for the free energy per nucleon $F/A$ obtained with different EOSs.
}
% \def\myc#1{\multicolumn{1}{c}{$#1$}}
\def\myc#1{\multicolumn{1}{c}{\text{#1}}}
\renewcommand{\arraystretch}{1.2}
\begin{ruledtabular}
\begin{tabular}{l|dddd}
EOS & \myc{V18} & \myc{BOB} & \myc{SFHo} & \myc{Shen} \\
\hline
linear    & 1.51 & 1.77 & 1.12 & 1.53 \\
quadratic & 0.47 & 0.54 & 0.23 & 0.39 \\
\end{tabular}
\end{ruledtabular}
\label{t:rms}
\end{table}

\begin{figure}[t]
\vspace{-3mm}\hspace{-1mm}
\centerline{\includegraphics[scale=0.4]{FIG3}}
\vspace{-8mm}
\caption{
Symmetry energies $J_2,J_4$ (upper panels)
and slope parameters $L_2,L_4$ (lower panels)
at empirical saturation density $\rho_0 = 0.17\;\text{fm}^{-3}$
as a function of temperature for different EOSs.
The N3LO414 and N3LO450 results of Ref.~\cite{2015PhRvC..92a5801W}
are plotted as dashed curves.
}
\label{f:jlt}
\end{figure}

Fig.~\ref{f:fsym} shows the derived free symmetry energies per nucleon
$F_\text{sym,2}/A$, Eqs.~(\ref{e:parab2}) and (\ref{e:fsym2}),
and $F_\text{sym,4}/A$, Eq.~(\ref{e:fsym4}),
as functions of density and temperature.
One notes that the dependence on density is more pronounced for
$F_\text{sym,2}/A$ than for $F_\text{sym,4}/A$,
while the opposite is the case for the temperature dependence.
The $F_\text{sym,2}/A$ results in quadratic approximation
(solid curves in upper row) are somewhat smaller than in linear
approximation (dashed curves) in order to compensate for the finite
$F_\text{sym,4}/A$, in particular at finite temperature.
For comparison, the $T=0$ results for $F_\text{sym,4}/A$
obtained by RMF theory with FSU interactions \cite{2012PhRvC..85b4302C}
are shown as dotted and dash-dotted curves in the lower row.
They are comparable with our BHF results, especially for the BOB model.
The density dependence of the symmetry energies can be expanded around
normal density $\rho_0$ in terms of normal values $J_2,J_4$
and slope parameters $L_2,L_4$:
\bal
 F_\text{sym,2}/\!A(\rho,T) &\approx J_2(T) + L_2(T) x \:, \\
 F_\text{sym,4}/\!A(\rho,T) &\approx J_4(T) + L_4(T) x \:,
\eal
with $x=(\rho-\rho_0)/3\rho_0$,
$J_i(T) = F_{\text{sym},i}/\!A(\rho_0,T)$, and
$L_i(T) = 3\rho_0\, \partial (F_{\text{sym},i}/\!A)(\rho_0,T)/\partial\rho$.
These quantities are shown in Fig.~\ref{f:jlt}.
The $T=0$ values are $J_2(0)=31.0(32.7)\;\text{MeV}$ and
$L_2(0)=58.5(64.2)\;\text{MeV}$ for V18(BOB),
which should be confronted with recent constraints
$J_2=31.7\pm2.7\;\text{MeV}$ and $L_2= 58.7\pm28.1\;\text{MeV}$
\cite{2017RvMP...89a5007O,2019EPJA...55..117L}.
In the same figure we report also the results for the SFHo and Shen EOSs
according to our analysis, see also Table~\ref{t:rms}.
Reasonable values are obtained for the former, but too large ones
for the latter.
The second-order symmetry energy $J_4(0)$ is theoretically more
controversial than the first-order one $J_2(0)$.
Our results are $J_4(0)=0.41,0.93,1.17,1.17\;\text{MeV}$
for the V18, BOB, SFHo, Shen EOS, respectively.
Within energy density functionals in mean-field approximation,
for example Skyrme-Hartree-Fock and Gogny-Hartree-Fock models,
the values of $J_4$ reported in the literature are around
$1.0\;\text{MeV}$ \cite{PhysRevC.96.054311},
and around $0.66\;\text{MeV}$ within RMF models
\cite{2012PhRvC..85b4302C},
while values extracted from Quantum Molecular Dynamics models could be
larger, depending on the specific interaction \cite{2016PhRvC..94b5806N}.
From the viewpoint of finite nuclei,
$J_4$ can be related to the second-order symmetry energy
$a_{\rm sym,4}(A)$ in a semi-empirical mass formula,
where the latter can be inferred from the double difference of
``experimental'' symmetry energies by analyzing the binding energies of
a large number of measured nuclei
\cite{2017PhLB..773...62W,2014PhRvC..90f4303J}.
In this case, the estimates are $J_4=20.0\pm4.6\;\text{MeV}$
\cite{2017PhLB..773...62W},
or two possible values, $J_4=8.5\pm0.5\;\text{MeV}$ and
$J_4=3.3\pm0.5\;\text{MeV}$ \cite{2014PhRvC..90f4303J}.
These are mutually inconsistent and significantly larger than the
values deduced from nuclear matter,
which points to a strong model dependence and to the importance of
finite-size effects in nuclei.
Regarding the temperature dependence,
from Fig.~\ref{f:jlt} one can see that $J_2(T)$ and $J_4(T)$
increase monotonically with temperature for all models,
whereas $L_2(T)$ decreases and $L_4(T)$ exhibits nonmonotonic behavior.
It is notable that the $J_4(T)$ results are nearly universal
for all EOSs.
Note that in our approach the temperature dependence is constrained to
be a linear combination of $T^2$ and $T^{\tilde{e}}$ terms according to
Eq.~(\ref{e:fitf}).
We compare our results with those of the chiral effective field theory
calculation \cite{2015PhRvC..92a5801W}.
Taking into account also the cutoff dependence of the chiral potentials,
we observe that both results are in quantitative agreement,
in particular in the low-temperature region,
but the latter predicts a more linear temperature dependence.
(At low temperature such behavior is excluded by the condition of
vanishing entropy in the $T\to0$ limit.)
The temperature dependence of the free symmetry energy is also
discussed in Refs.~\cite{2007PhRvC..75a4607X,2014EPJA...50...19A},
where an isospin- and momentum-dependent interaction constrained by
heavy-ion collisions and the Skyrme SLy4 parametrization have been
employed, respectively.
Those investigations show very similar behavior and numerical
magnitudes for the free symmetry energy to the present calculations.
\begin{figure}[t]
\vspace{-7mm}\hspace{-0mm}
\centerline{\includegraphics[scale=0.35]{FIG4}}
\vskip-10mm
\centerline{\includegraphics[scale=0.35]{FIG5}}
\vspace{-7mm}
\caption{
Proton fraction of $\beta$-stable matter (upper plot)
and NS mass-radius relation (lower plot)
at $T=0$ (solid curves) and $50\;\text{MeV}$ (dashed curves),
employing linear (thin curves) or quadratic (thick curves)
$\beta^2$ fits, Eqs.~(\ref{e:parab}) or (\ref{e:fsym}).
}
\label{f:ns}
\end{figure}

In order to assess the relevance of the previous results for practical
applications, we perform some model calculations of NS structure
employing the different approximations for the symmetry energy.
Fig.~\ref{f:ns} shows the proton fractions of $\beta$-stable and
charge-neutral nuclear matter in the upper panel
and the mass-radius relations of NSs in the lower panel
at the temperatures $T=0$ and $T=50\;\text{MeV}$.
Results using the linear [Eq.~(\ref{e:parab}), thin curves]
or the quadratic [Eq.~(\ref{e:fsym}), thick curves] $\delta$ laws
are compared for both BOB and V18 interactions.
One can see that the inclusion of $F_\text{sym,4}$ in the latter case
causes a slight decrease of the proton fraction,
in particular at high temperature,
corresponding to a slight reduction of $F/A$ as seen in
Fig.~\ref{f:fa}.
The effect on the mass-radius relations is nearly invisible,
even at large finite temperature,
which means that the linear law Eq.~(\ref{e:parab}) is a very good
approximation for the determination of the stellar structure.
We have determined the first- and second-order terms in an expansion
with respect to isospin asymmetry and provided convenient
parametrizations for practical applications.
A model study of neutron star structure at finite temperature
demonstrated that the often-used parabolic law is an excellent
approximation and the second-order modifications are very small.

\section*{Acknowledgments}

This work is sponsored by the National Natural Science Foundation of
China under Grant Nos.~11475045, 11975077
and the China Scholarship Council, No.~201806100066.
We further acknowledge partial support from
``PHAROS,'' COST Action CA16214.

\vfill

\newcommand{\physrep}{Phys. Rep.}
\newcommand{\nphysa}{Nucl. Phys. A}
\newcommand{\aap}{A\&A}
\newcommand{\mnras}{MNRAS}
\newcommand{\apjs}{ApJS}

\bibliographystyle{apsrev4-2}
I have several of his flight stands (and got a few more last month) and am very pleased with them. We use them nearly every game.

Sorry if this was answered elsewhere, but where did DF get those clear elevation indicators for the flying miniatures?

The DoD ruin pieces were very cool in the first room. Also, I have never seen the Minotaur mini they were fighting. Not sure if that was DF or not. I am thinking, how many rope bridges can I get, and is there a whole crystal cavern set?

Yes, the Minotaur is a new mini we made!

Happy to see all the amazing negative space pieces. I want a lot of those and 2 bridges.

It's in the Shrine close-up; there are two: one to the left of the crystal pillar, one behind it.

Oh, wow, that's great. Thank you.

I want a rickety looking wood platform to cover the pit with. And a pile of leaves! And other things that adventurers might get curious about or stumble onto.

Cavern ledges on the left side of the Shrine. What looks to be a stake floor trap at the near end of the bridge next to the nets. The entire twisty passage from the ruins to the bridge looks like new fun. There are a suspiciously large number of stalagmite pieces, and they fit very well as filler behind the stairs. I suspect some are explicitly fit for freestanding curve and wall stand-ins. There's something interesting going on with elevation around the bridge edges. That includes the seeming half-height corner filler that adds real smoothness and realism to the chasm.

I can't believe I didn't mention the dead, headless bodies "floating", arms spread, in the water.
# Is there a non-simply connected subspace of $\mathbb{R}^2$ with trivial first homology?

Exactly what the title says. Can you find some topological space $X\subset\mathbb{R}^2$ such that $\pi_1(X)\neq 0$, but $\mathrm{H}_1(X,\mathbb{Z})=0$?

I've been told that this paper shows that any subspace of $\mathbb{R}^2$ has a torsion-free fundamental group, so at the very least, for such a space to exist there has to be some torsion-free group with trivial abelianization. I do not know if such a group exists.

Edit: It turns out that simple torsion-free groups exist, so there are torsion-free groups with trivial abelianization.

## 1 Answer

By the references in this answer, fundamental groups of subsets of the plane are residually free, and in particular, if $\pi_1(X)$ is nontrivial it surjects onto a nontrivial free group. Because free groups have nontrivial abelianization, we see that $\pi_1(X)$ surjects onto a nontrivial abelian group, and hence $H_1(X) = \pi_1(X)^{\text{ab}}$ is nontrivial.

- It looks like this only applies to closed subsets of the plane. Do you know of similar constraints on the fundamental groups of open (or, even better, arbitrary) subsets of the plane? – Niven, Jan 22 at 7:57
- @Niven Looks like I gave the wrong reference. The result applies to arbitrary subsets. See Fischer–Zastrow, "The fundamental groups of subsets of closed surfaces inject into their first shape groups". – user98602, Jan 22 at 13:55
- If you were only interested in open subsets this becomes much easier: it is much more elementary to prove that every open subset of the plane has free fundamental group, and in fact, this is true of any noncompact surface. – user98602, Jan 22 at 14:04
- I see now that is the reference I gave. The MO post just asked for closed sets, but the paper itself never makes that restriction. – user98602, Jan 22 at 20:29
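In other words, spelling out the chain of surjections in the answer (using that a surjection of groups induces a surjection of abelianizations):

$$\pi_1(X) \twoheadrightarrow F_n \ (n\geq 1) \quad\Longrightarrow\quad H_1(X)=\pi_1(X)^{\mathrm{ab}} \twoheadrightarrow F_n^{\mathrm{ab}} \cong \mathbb{Z}^n \neq 0.$$

So any subspace of the plane with trivial first homology in fact has trivial fundamental group.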
Cryptocurrency mining is gaining popularity, as the last couple of years have confirmed that it can bring good profit. However, there are many risks you can run into if you decide to mine digital tokens at home. The United Crypto Mining Group solution aims to reduce the risks and costs of the mining business and bring the activity to a professional level.

The United Crypto Mining Group (UCMG) introduces a full-service crypto mining solution designed to provide access to remote mining for customers from all countries of the world by offering them a hosting service on its network farms and a wide range of related services.

UCMG is going to issue 127.65 million tokens, with 111 million of them available to potential customers. By buying tokens, the client secures the right to use the capacities established and operating within the UCMG network farms, without paying rent, for the next 5-year period. One token corresponds to ½ W of power drawn by the client's equipment (the miner). To calculate the number of tokens required for mining on the farm, divide the total power consumption of the equipment by ½ W.

The UCMG presale stage of the crowdsale is now in progress and will last till April 15, 2018. The token generation event starts April 16, 2018 and ends May 27, 2018. The price of one token will be $0.5 plus a bonus; the exact amount will depend on the date of purchase.

If you're interested in the United Crypto Mining Group ICO, you can visit the official website or follow them on Twitter and Telegram.
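The token arithmetic described above can be sketched as follows. This is illustrative only: the one-token-per-½-W rate and the $0.5 base price are from the article, while the 1400 W rig is a made-up example.

```python
def tokens_required(total_power_watts: float) -> float:
    """Tokens needed to host equipment drawing `total_power_watts`,
    at the stated rate of one token per 1/2 W of power."""
    return total_power_watts / 0.5

# A hypothetical rig drawing 1400 W:
tokens = tokens_required(1400)    # 2800.0 tokens
base_cost_usd = tokens * 0.5      # 1400.0 USD at the $0.5 base price
```

Note that bonuses would reduce the effective per-token price, so the base cost is an upper bound under these assumptions.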
# LaTeX Table Tips

This page documents a few tricks for making LaTeX tables that I found useful when writing my thesis and preparing various publications. Just a quick note: for all examples on this page that use a tabular environment, you should put the tabular within a table environment if you want to make it an "official" table with caption, label and float position.

If coding tables by hand seems tedious, you can also cheat by converting Excel spreadsheets to LaTeX tables.

## Set both column width and column alignment

Column widths in tables are specified by column type p, e.g. `p{3cm}`, which will make a column 3 cm wide. However, by default p columns are left aligned. To specify alignment for fixed width columns, you need to specify the alignment of the p columns.

First, you need to use the array package in your document:

```latex
\usepackage{array}
```

Specify the width and alignment of each table column with code from the following example:

```latex
\begin{tabular}{|p{1.5cm}|>{\raggedright}p{2.5cm}|>{\centering}p{1.5cm}|>{\raggedleft}p{3cm}|}
\hline
Default & Left-Aligned & Centered & Right-Aligned \tabularnewline
\hline
1.5 cm & 2.5 cm & 1.5 cm & 3 cm \tabularnewline
\hline
\end{tabular}
```

One thing to note is that to use this, the line break `\\` must now be replaced with `\tabularnewline`.

The column types used above are:

- `p{1.5cm}` specifies a left-aligned column (default) with 1.5 cm width.
- `>{\raggedright}p{2.5cm}` specifies a left-aligned column (the right side is ragged) with 2.5 cm width.
- `>{\centering}p{1.5cm}` specifies a centered column with 1.5 cm width.
- `>{\raggedleft}p{3cm}` specifies a right-aligned column (the left side is ragged) with 3 cm width.

You can define new column types for each of the above cases, which would make typing the table easier.

## Continuous vertical lines across double horizontal lines

In LaTeX tabulars, using `\hline\hline` will produce a double horizontal line, but any vertical lines will be broken across the gap. To create an uninterrupted, intact vertical line across double hlines requires a workaround: create an empty row and set the line break height using something like `\\[-0.8em]`. Adjusting the line break height adjusts the spacing between the double horizontal lines. The result is a table with double horizontal lines and a continuous vertical line across the gap:

```latex
\begin{tabular}{|l|l|}
\hline
Class & Accuracy (\%) \\
% The following code produces a double hline with a continuous vertical line.
\hline % First hline
& \\[-0.8em] % Empty table row with custom line break spacing.
\hline % Second hline.
Ice & 85\\
\hline
Water & 95\\
% The following code produces a double hline with a continuous vertical line.
\hline % First hline
& \\[-0.8em] % Empty table row with custom line break spacing.
\hline % Second hline.
Overall & 90\\
\hline
\end{tabular}
```

I found this tip from a blog post on keeping vertical lines intact across double horizontal lines.

If you look closely at the rendered table, you can see some artifacts around the vertical lines after each double `\hline` where the vertical lines are darker. I think this is because there actually are two vertical lines overlapping in those areas (one for the empty row and one for the next row). When PDFs are rasterized at a low resolution, these overlapping lines produce a darker single line due to roundoff errors and the like in the rasterization algorithm. Zooming in enough in the PDF removes the artifact (because the two overlapping lines should be on top of each other perfectly, so only one line is visible).

## Align the top of a figure with the top of the text in a table row

When I was writing my thesis, including figures or graphics in a table was not straightforward. It seems that whenever an image is included, the bottom of the image in a table cell is aligned with the first line of text in the other cells of the same table row.

The baseline of an image is at the bottom of the image. In table rows, the text content of each cell is vertically aligned so that the baselines of the first line of text are in the same vertical position. When the content of the cell is an `\includegraphics`, the baseline of that cell is the bottom of the image, causing the misalignment described above. In order to vertically align the top of the image with the top of the text in each table row, the image baseline must be adjusted.

First, this technique requires the calc package:

```latex
\usepackage{calc}
```

The following code produces the desired table (replace placeholder with the name of your graphic file):

```latex
\begin{tabular}{|p{2in}|c|}
\hline
Some text ... & \raisebox{2ex - \height}{\includegraphics[width=1in]{placeholder}} \\
% Use the raisebox command to set the baseline of the image, which will be
% aligned with baselines of the first lines of text in the same table row.
% The raisebox command changes the baseline of the image relative to the
% original baseline, which is at the bottom of the image.
% The (2ex - \height) sets the image baseline to about 1 text line below the
% top of the image. This lines the top of the image up with the text properly.
\hline
\end{tabular}
```

The `\raisebox` command adjusts the baseline of the image in the table cell. Raising the box to `2ex - \height` moves the baseline of the image by `2ex - \height` relative to the default position (bottom of the image), where `\height` is the height of the image box. The final baseline is 2ex below the top of the image, as 2ex is about the height of one text line (it is the height of two 'x' characters). When this baseline is aligned with the baseline of the first line of text in the other cells of the same table row, the top of the image lines up with the top of the text.

## Background colours in table cells

The colortbl package lets you set the background colour of individual table cells, columns and rows. This example shows how to set individual cell colours using the `\cellcolor` command.

First, include the colortbl package:

```latex
\usepackage{colortbl}
```

Use the following code to create a table with coloured cells:

```latex
\begin{tabular}{|c|c|}
\hline
Colour 1 & Colour 2 \\
\hline
\cellcolor[rgb]{0.000,0.700,1.000} Blue & \cellcolor[rgb]{1.000,0.700,0.500} Orange \\
\hline
\end{tabular}
```

The `\cellcolor` command can be included anywhere in the cell whose colour you want to set. It also works for longtable cells. This technique was handy for one of my publications where I had to shade individual table cells.

If you are using PdfTeX to produce PDFs, you will notice that the cell colour background sometimes partially overlaps with the table lines when you view the PDF file. This is supposedly a PDF rendering issue in the viewer caused by rounding errors in the rasterization algorithm (the algorithm that actually draws the PDF to your screen). If you zoom in close enough or print the PDF, the artifact disappears.

## Multiple page tables

If your table has a lot of rows, you may need to split it up across several pages. The longtable environment allows you to do this, with a specific caption and header row(s) for the first page and a different caption and header row(s) for subsequent pages. This is useful if you want one caption for the first page of the table, while the caption should be "continued" for subsequent pages: for example, "Table 1: This is table 1" on the first page and "Table 1: (continued)" on subsequent pages.

Note that for the original screenshots, I purposely set the `\textheight` of the page to 2 inches (via `\setlength{\textheight}{2in}` before the document begins) so I could have multiple page tables with only 12 rows of data.

To use longtable, include the longtable package:

```latex
\usepackage{longtable}
```

Next, use the following code to produce the table:

```latex
\begin{longtable}{|l|l|}

% specifies header for first page of table.
\caption{This caption appears on the first page of the table. \label{tab.longtable_example}} \\
\hline
\endfirsthead

% specifies header for rest of the pages.
\caption{Caption for subsequent pages.} \\
\hline
\endhead

% This is the actual table data.
\hline
Table row & Table row \\
\hline
Table row & Table row \\
\hline
Table row & Table row \\
\hline
% [ ... and so on. The rest of the rows are not included for conciseness ... ]

\end{longtable}
```

The lines before `\endfirsthead` specify the caption and heading row(s) that will be used for the first page of the table. The next few lines before `\endhead` specify the same thing but for subsequent pages. There is quite a bit of flexibility here. For example, if you wanted the heading area of the subsequent pages to have two heading rows, you can just specify them before the `\endhead`:

```latex
% specifies header for rest of the pages.
\caption{Caption for subsequent pages.} \\
\hline
Heading 2 for subsequent pages & Heading 2 for subsequent pages \\
\hline
\endhead
% finished specifying subsequent headers
```

This would put multiple heading rows on subsequent pages. The same can be specified for the first page before the `\endfirsthead` command.

I've seen some examples where the caption for the subsequent pages is specified using:

```latex
\caption[]{Caption for subsequent pages.}
```

This is supposed to give you a numbered caption that does not appear in the List of Tables (due to the contents of `[]` being empty). I haven't tried it. An alternative could be `\caption*`, but that would give you no numbers.

## Discussion

- 2011/08/31 09:15: Hi, the post helped me out a lot in the formatting of my thesis tables. Cheers.
- 2012/11/23 14:35: Thanks for your article. I am using longtable and want to color the background of the header. Any advice? Thanks, Ralph
- 2012/12/07 14:03: I haven't tried it but you should be able to just use the colortbl package with longtable. There's a `\rowcolor` command.
- 2014/02/02 00:05: I am trying to put a period after the table number on the continued pages. Any ideas?
- 2014/12/15 14:34: Thank you very much. It was helpful.
- 2015/05/13 03:02: Thanks a lot.
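One addendum to the column-alignment section above: the array package also provides `\newcolumntype` (standard in the package, though not shown in the original post), which lets you name the alignment variants so table preambles stay short:

```latex
% In the preamble, after \usepackage{array}:
\newcolumntype{L}[1]{>{\raggedright}p{#1}}
\newcolumntype{C}[1]{>{\centering}p{#1}}
\newcolumntype{R}[1]{>{\raggedleft}p{#1}}

% The earlier example's preamble then becomes:
% \begin{tabular}{|p{1.5cm}|L{2.5cm}|C{1.5cm}|R{3cm}|}
```

Rows in the L/C/R columns still need to end with `\tabularnewline` rather than `\\`, for the same reason as before.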
Cindy Anthony Today, Net Worth, Still Married To George?

By Charles Matthews, November 29, 2022

Meet Cindy Anthony, wife of George Anthony and mother of Casey Anthony. Where is she today? What job does she do? Find out everything as you scroll down below.

Meet Cindy Anthony, Casey Anthony's Mother

Cindy Anthony welcomed her daughter Casey on 19 March 1986 in Warren, Ohio. Cindy gained international attention when she placed a 911 call reporting her granddaughter Caylee missing. Following up on numerous leads, Cindy and her husband George searched the entire country for the toddler. That December, Caylee's remains were found in a wooded area near the family home. In 2011, Casey stood trial for murder. The trial was watched by 40 million people; Casey was acquitted of murder but convicted of four counts of lying to police. Cindy, on the other hand, didn't escape unscathed, and even testified that she had been the one to perform incriminating computer searches that were being blamed on Casey.

Additionally, Cindy recently spoke to Crime Scene Confidential on Investigation Discovery. Cindy Anthony becomes clearly upset as she recalls the defense's assertion that Caylee drowned in the home pool. "What the eff was she thinking?" Cindy asked. "Why the heck didn't she tell us? Why didn't she call somebody. None of this would've happened. And it's like, oh my gosh, she put us through hell. I know if I had found Caylee drowned in the pool, I would've been devastated and blamed myself for the rest of my life."

As reported by a family friend, Cindy hasn't been the same since the trial. "She is still angry a lot of the time," the family insider says. "This was a loving grandmother who had to withstand family trauma that no one should ever have to deal with. So when she starts talking about Casey and Caylee, she gets really upset, even now."

Cindy's relationship with her daughter is tumultuous.
They went a long time without speaking, but have just recently started intermittently exchanging words. "At first, Cindy wanted answers. She wanted to know what had happened, why this had happened. She wanted Casey to explain the hell she put everyone through. Now she realizes that there's no point asking Casey anything because she is never going to get any straight answers. So what's the point?"

But does Cindy Anthony think her daughter was guilty of murdering her granddaughter? "That's a tough question," says the family insider. "Sometimes she wavers. George is steadfast that she did something wrong, but with Cindy, it's a big question mark. Mostly, she's just sad about the way things turned out."

In the same Crime Watch Daily interview, Cindy opened up about the relationship she shared with her estranged daughter. "She checked on me a couple of months ago," Cindy said. "She called to check on me because she had heard I was in the hospital. I mean, I'm still her mom."

The mother of two also discussed the last time she saw her daughter. "The last time I saw her was just before Christmas, the week before Christmas, last year," she said. "Well, it was weird, the first time I saw her when she got out of the car for the first time, and I saw her, I stood there for a second, and I just wanted to do one of two things: I wanted to just embrace her, and I wanted to smack her, that's how I felt inside. I wanted to hit her for everything," the bereaved grandmother said. "I wanted to shake her, I wanted to say, 'What the hell did you do for all these years?'"

Where Is Cindy Anthony Today? Is She Still Married To George Anthony?

Yes, George and Cindy Anthony are still married. They reportedly exchanged their wedding vows on 20 March 1981. In 1989, Cindy and George moved from Ohio to Florida, where they have been residing ever since.

Cindy Anthony Accident

There are no records of Cindy Anthony having an accident. However, her husband George was in a serious car accident.
In November 2018, George was involved in a devastating car accident in Volusia County. According to the Orlando Sentinel, he had driven off Interstate 4, which caused his car to flip. Some had speculated that George Anthony had attempted to commit suicide, but he insisted that the crash was purely accidental. Apparently, the left rear axle broke on his 1999 Toyota 4-Runner, leading directly to the rollover. Mr. Anthony suffered spinal cord damage, necessitating the use of a halo that immobilizes the neck. He has been receiving outpatient rehabilitation, but due to his advanced age, he most likely won't recover completely. Friends and family have set up a crowdfunding campaign for him with the goal of raising $100,000 to pay for medical costs.

Cindy Anthony Age

Cindy Anthony was born on 5 June 1958. She turned 64 years old in 2022.

Is Cindy Anthony On Instagram?

No, Cindy Anthony is not on Instagram.

Cindy Anthony Job

Cindy Anthony worked as a nurse for many years and was working as a nurse at the time of Caylee's disappearance. She said her time sheet from her nursing job might not have reflected that she was home, because she was a salaried employee and required to enter her hours whether she worked or not. Cindy has since taken permanent disability leave from her last nursing job.

How Much Is Cindy Anthony Net Worth?

Although the exact figure for Cindy Anthony's net worth is not public, she is estimated to be worth over $100,000.

Where Does Cindy Anthony Reside Now?

As of 2022, Cindy Anthony resides in Florida.

What Is Cindy Anthony House Address?

Cindy Anthony's current house is in Florida, but the address is not public. According to Oddstops, the address of the old house near which Caylee was found dead is 4937 Hopespring Drive. The house was reportedly built in 1986, and Cindy bought it in 1989. The assessed value of the house was $101,000, according to the Orange County Property Appraiser's Office.