Trevor le Mare Sharpe LVO OBE LRAM ARCM (1921 – 22 May 2010) was a British Army officer (Lieutenant Colonel), composer, music educator and conductor. Sharpe was appointed Director of Music of the Coldstream Guards in 1963; in that role he was credited at the end of each episode of Dad's Army, as the regiment performed the closing theme tune.

Discography

Ceremonial Occasion

Sharpe released Ceremonial Occasion with the Band of the Royal Military School of Music (Kneller Hall) and the Fanfare Trumpeters of the Royal Military School of Music; it was produced by Treasure Island Music.

References

1921 births
2010 deaths
Military personnel from Kent
20th-century British conductors (music)
20th-century British composers
Dad's Army
Coldstream Guards officers
British Army officers
British military musicians
Associates of the Royal College of Music
Helping to Reach Potential

Parent category: Foreign Trade. Category: North America. Published: Monday, 8 April 2013, 17:55

- I am so delighted to be with you here today because I have money to give to charity - says Elisabeth Misner, Co-Founder and Administrator of the BNI Foundation, while presenting Anna Radziejowska from Junior Achievement Poland with a $1000 Givers Gain® Grant. Junior Achievement teaches young people about money management and how business really works. The mission of this worldwide, non-profit organization is to inspire and prepare young people to succeed in a global economy. It's the kind of challenge that JA takes on with creative energy, fervor, and decisive action. BNI supports their activities in many countries. Today, JA programs are delivered by more than 178,000 volunteers. Many of them are BNI members. Photos: http://business-relationslifestyle.blogspot.com/2014/06/beth-i-ivan-meisner-po-raz-pierwszy-w.html

Elisabeth Misner, Co-Founder and Administrator of the BNI Foundation: We give money to the community because the community supports our business so much and makes it possible for us to even have our business. So, when we first started our charity, we got members of BNI together and asked them what the focus of the BNI Foundation should be. As we were talking with them, it seemed best to focus on children and their education because they are the future of BNI, and the future of business.

Maya M. Kowalczyk: What kind of education programs do you support?

Elisabeth Misner: The programs which we support are dedicated to children from primary to secondary school. We do basic educational projects like reading, math, and science. We do not give to programs related to music, art and continuing education because there are other foundations which do that. For junior high and high school we support business education and career readiness.
Young people learn how to start their professional careers or how to start a business. That is why BNI is one of the strategic partners of Junior Achievement. We cooperate with Junior Achievement all around the world.

Maya M. Kowalczyk: Anna Radziejowska from Junior Achievement Poland has just received from you a $1000 Givers Gain® Grant. What is the aim of giving this donation?

Elisabeth Misner: Young people from Poland will be taught how to write a business plan, how to produce a product, how to market it, how to make a profit, and what to do with the money when they make a profit, including giving it to charity, too. It is pretty exciting. Junior Achievement is present in over one hundred twenty-one nations, and BNI in only fifty-one. So, in nearly every country we go there is a Junior Achievement organization, and we can support young people in more ways than just giving them money. Once the BNI members know what is happening in the local Junior Achievement chapters, they can mentor young aspiring business owners. The BNI members go into schools and teach the kids themselves what it means to have your own business, how to grow it, and how to balance your career and personal life. So, that is one way of giving education that is beyond the basics. We help them get ready to be a part of our society.

Maya M. Kowalczyk: Thank you.

Elisabeth Misner: Thank you.

Beth Misner Bio

Beth is the co-founder and administrator of the BNI Foundation, which has awarded over $2 million in educational grants to schools and organizations with a focus on improving tomorrow's business through education today. In 2010 the BNI Foundation named Junior Achievement as its worldwide strategic alliance partner. She has also been a contributing author to three New York Times and Wall Street Journal best-selling books and is a professional speaker, having spoken to business audiences in many countries. Beth resides in Claremont, CA, with her husband, Dr. Ivan Misner.
Ivan and Beth have three adult children. She is a Tai Chi and Qi Gong instructor and also a black belt in karate.
\section{Introduction} \label{sec:intro} \vspace{-0.15cm} Music is an abstract, yet densely emotional form of art. It is universally enjoyed, due to its ability to induce powerful emotions irrespective of the underlying mood \cite{jakubowski2021} and has been characterized to greatly affect the function of the human brain~\cite{sacks,koelsch}. Hence it is widely used to study emotion recognition, both by analyzing the mood produced by several musical features \cite{panda,Song2012} and by studying its effects on human neural and physiological responses \cite{greer2019}. However, the task of extracting emotional information from brain activity poses severe challenges, due to the inherently abstract nature of the induced emotions, the variability in emotional and physiological responses between different individuals and the lack of large-scale databases of emotionally coordinated neural activity. In this paper, we propose a deep multimodal approach \cite{multimodaldeep}, using musical stimuli. The scope of this study lies at the intersection of Music Cognition, Emotion Recognition from neuronal signals and Multimodal Learning. We choose to study brain responses to music by employing a cross-modal system to identify the correspondence between these modalities. We use the electroencephalogram (EEG) to model brain responses for this task and we constrain the learning process with emotion labels. Therefore, we aim to derive important insights regarding the affective role that music can play on humans and the extent to which it can help us build richer neuronal representations of affect. To conduct the experiment, we exploit multimodal optimization and domain adaptation strategies to project EEG and music features onto a common latent space, from which we could assess their similarity. 
By conditioning the learning process with emotion tags, the constructed space represents affect, thus enabling emotion recognition both directly, by performing supervised predictions, and indirectly, by ranking music tracks against EEG inputs based on their distance. To the best of our knowledge, this is the first study to propose such a framework, thus it could be utilized as a baseline reference. We also perform an extensive qualitative study across 32 subjects of the DEAP dataset \cite{koelstra} to derive inter-subject affective patterns. The remainder of this paper is organized as follows: Section 2 reviews the related work in cross-modal learning and research on EEG processing and music cognition. In Section 3 we introduce our problem and present the proposed framework along with the optimization methods we utilized. Section 4 includes information about the data, their pre-processing and implementation details, while in Section 5 we provide the experimental results. In that Section and in Section 6 we offer extensive quantitative and qualitative analysis of the outcomes, while Section 7 concludes our study. \vspace{-0.25cm} \section{Related Work} \vspace{-0.15cm} \label{sec:related} \textbf{Music Cognition}: Studying the human brain's responses to music stimuli has always been a lively field of research in neuroscience and signal processing \cite{why}, aiming to answer fundamental questions regarding our enjoyment of music. The field has gained a lot of attention in recent years, with the upsurge in available neuronal data. Many studies in the field rely on EEG recordings as they provide better temporal resolution than other techniques, such as functional magnetic resonance imaging (fMRI). 
In addition to the traditional, well-controlled auditory experiments, modern approaches gather physiological data from music listeners as they enjoy or imagine naturalistic music \cite{nmed}, in order for instance to examine correlations in temporal structure \cite{mind_the_beat} or the perceived tempo \cite{stober_tempo}. One of the core findings on music cognition is the correlation between characteristics of the neural oscillation patterns and rhythmical patterns in music \cite{Nozaradan10234}. Additionally, Event-Related Potentials (ERP) have been utilized to extract brain activity patterns that can relate to note onsets or pitch \cite{schafer,poikonen}. In parallel to the above, there has also been a shift towards deep learning approaches for information retrieval from music stimuli \cite{deep}, on which we focus in the present study. \begin{figure*} \centerline{ \includegraphics[scale=0.38]{figs/modelarch.png}} \vspace{-0.3cm} \caption{The proposed bi-stream network. The output embedding layer of each stream is fed to the common 64D dense layer (common space).} \label{fig:bistream} \vspace{-0.3cm} \end{figure*} \textbf{Emotion Recognition}: Undoubtedly, the most powerful impact of music on humans concerns the induced emotions. Emotion Recognition is a widely researched field of contemporary Machine Learning and Behavioral Signal Processing \cite{em_survey} and several studies have focused on the musical features \cite{Song2012,panda2015} that determine affective attributes of music listening in a wide range of emotions. Recently, several studies have examined physiological signals to analyze humans' felt emotions~\cite{phys}, with music emerging as an efficient method to elicit them. Due to its temporal resolution, the electroencephalogram is the most widely researched signal of this type and various statistical, spectral or time-frequency features have been proposed for Emotion Recognition \cite{spectral, hog}. 
Due to the noisy structure of EEG signals, many studies incorporate entropy \cite{dasm} and fractal \cite{fractals} algorithms to extract emotion-related features. Of course, variations of deep neural networks have been proposed that exceed the performance of traditional feature extraction methods \cite{3dcnn, du_affcomp}; however, the limited data availability and inter-subject variability present serious barriers for this kind of modeling. \textbf{Cross-Modal Learning}: The task of learning a shared embedding space from different datasets or modalities is being studied through various approaches, which are predominantly applied to image and text modalities. A widely used baseline is Canonical Correlation Analysis (CCA). CCA is non-probabilistic and enables the extraction of linear components to optimize the correlation of pairs of vectors. One can find in the literature various non-linear CCA-based frameworks and architectures utilized to learn inter-modal similarities, such as Deep CCA \cite{dcca}. Besides CCA, other methods that have been used include an HGR-based maximal correlation metric \cite{hgr} and adversarial training \cite{acmr}, focusing mainly on the optimization function of the respective model, and on adaptive hidden layers~\cite{hu2018}. Another study \cite{LiK19} incorporated music to co-train a shared space with images using a contrastive loss. Further, in \cite{dscmr} a state-of-the-art framework exploits label supervision to better manipulate the latent space, a key concept that we also follow in our study. \section{Methodology} \label{sec:method} In this study, we extract the semantic relationship between music tracks and corresponding EEG recordings, so that an EEG sample can be mapped to an efficient affective representation and used to retrieve emotionally consistent music samples. 
Let us assume a collection of $n$ instances of EEG-music pairs, denoted as $T = \{(x_i^a,x_i^b)\}_{i=1}^n$, where $x_i^a$ is the input EEG sample of the i$^{th}$ instance and $x_i^b$ the input music stimulus corresponding to that sample. Each instance has been assigned an affective annotation $y_i \in \mathbb{R}^2$ for the valence and arousal dimensions. For each instance $i$ we aim to learn an EEG embedding $u_i = f(x_i^a,Y^a) \in \mathbb{R}^d$ and a musical audio embedding $v_i = g(x_i^b,Y^b) \in \mathbb{R}^d$, where $d$ is the dimensionality of the common representation space and $Y^a, Y^b$ are the trainable parameters. \subsection{The Proposed Framework} We use a bi-stream Neural Network with one branch for each modality. The EEG branch is a recurrent network, comprised of two LSTM modules and a softmax attention layer. The model takes as input an EEG trial of shape (channels, features) and attempts to capture its inter-channel correlations. Next, the output features of all channels are flattened and passed through an attention module to identify the most important components, which shape the embedding vector in the common space. We utilize a lightweight network in order to avoid overfitting to the limited range of the available data; however, any state-of-the-art model for the task could be applied. For the music branch we use the MusiCNN model \cite{musicnn} to extract high-level embeddings from the available audio stimuli. MusiCNN is a robust network, pre-trained on large audio databases, and produces high-quality music embeddings that compensate for the limited size of our track set and further assist the learning process. The extracted embeddings are then fed into a feed-forward neural network. To construct the final bi-modal framework, we connect the last layer of each of the previous networks to a dense layer (Fig.~\ref{fig:bistream}) constituting the common representation space, from which we output emotion predictions. 
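As a concrete illustration of the attention step, the following NumPy sketch weights per-channel feature vectors by a learned importance score; the shapes (32 channels, 128 features per channel) and the single parameter vector `w` are assumptions for the example, not the authors' implementation.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a score vector."""
    e = np.exp(z - z.max())
    return e / e.sum()

def channel_attention(H, w):
    """Softmax attention over EEG channels.

    H: (channels, feat) matrix of per-channel LSTM output features.
    w: (feat,) attention parameter vector (trainable in practice).
    Returns the attended embedding and the attention weights.
    """
    scores = H @ w            # one relevance score per channel
    alpha = softmax(scores)   # normalize scores to a distribution
    embedding = alpha @ H     # weighted sum over channels
    return embedding, alpha

rng = np.random.default_rng(0)
H = rng.normal(size=(32, 128))   # e.g., 32 EEG channels, 128 features each
w = rng.normal(size=128)
emb, alpha = channel_attention(H, w)
```

The attention weights sum to one, so the embedding is a convex combination of channel features, letting the most informative channels dominate the vector that enters the common space.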
Inspired by \cite{du_affcomp}, we further apply a Gradient Reversal Layer (GRL) \cite{grl}, aiming to reduce the distribution shift between the EEG and music modalities. Specifically, both 64D embeddings are fed to this layer, from which we output a prediction regarding the modality type. From each batch, we randomly permute half of the EEG samples and their respective music samples, forming a new equal-sized batch that we shuffle and input to the GRL module, along with a binary label vector to denote the modality. Subsequently, these embeddings are passed through dense layers to predict the modality of each sample. By reversing the gradients corresponding to these predictions during back-propagation, we help the feature extractor produce modality-invariant features. \subsection{Objective Function} Our goal is to learn a common space where samples from the same semantic category are similar, even though they come from different modalities. To learn discriminative features we want to minimize the discrimination loss in both the label space and the representation space, by reducing the cross-modal discrepancy. With regard to the label space, we use a linear classifier to predict the emotion labels of the samples projected in the common space. Outputs of each modality are passed through a sigmoid activation and a binary cross-entropy (BCE) loss $\ell$ is computed. For the cross-modal task we apply a weighted linear combination of those losses: $\mathcal{J}_1 = \lambda_{11} \ell_a + \lambda_{12} \ell_b$. To reduce the cross-modal discrepancy between EEG and music representations, we also compute the BCE loss of the modality prediction after the GRL: $\mathcal{J}_2 = \ell_{dd}$. By combining the terms $\mathcal{J}_1, \mathcal{J}_2$ we obtain the proposed objective, in which the hyper-parameters $\lambda_i$ control the contribution of each separate component and are determined through trial and error: $\mathcal{J} = \lambda_1\mathcal{J}_1 + \lambda_2\mathcal{J}_2$. 
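The combined objective can be sketched numerically as follows; this is an illustrative NumPy version with placeholder lambda values (the paper tunes them by trial and error), where `p_domain` stands for the modality prediction produced after the GRL.

```python
import numpy as np

def bce(y_true, y_prob, eps=1e-7):
    """Binary cross-entropy, used for all three loss terms."""
    p = np.clip(y_prob, eps, 1 - eps)
    return float(-np.mean(y_true * np.log(p) + (1 - y_true) * np.log(1 - p)))

def total_objective(y, p_eeg, p_music, d, p_domain,
                    lam1=1.0, lam2=0.5, lam11=1.0, lam12=1.0):
    """J = lam1 * (lam11 * l_a + lam12 * l_b) + lam2 * l_dd.

    y:        emotion labels; p_eeg / p_music: sigmoid outputs per branch.
    d:        modality labels (0 = EEG, 1 = music); p_domain: GRL classifier output.
    The lambda values here are placeholders, not the tuned ones.
    """
    j1 = lam11 * bce(y, p_eeg) + lam12 * bce(y, p_music)  # label-space losses
    j2 = bce(d, p_domain)                                 # domain discrimination loss
    return lam1 * j1 + lam2 * j2
```

During training, the gradient of the `j2` term is sign-reversed through the GRL before reaching the feature extractor, pushing it toward modality-invariant embeddings while the classifier above the layer still tries to tell the modalities apart.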
\section{Experimental Setup} \subsection{The DEAP Dataset} DEAP \cite{koelstra} is a comprehensive dataset that includes EEG signals of music listening, collected from 32 subjects. Each subject watched forty 1-min long music videos while having their EEG recorded. After each video trial, the subject was instructed to rate the emotion elicited over the entire trial in 5 dimensions: valence, arousal, dominance, liking and familiarity with the track. In this paper we solely experiment with the 2D emotion space determined by valence and arousal, whose ratings range from 1 (weakest) to 9 (strongest). We use the EEG signals in their already preprocessed form: recorded at a sampling rate of 512 Hz, downsampled to 128 Hz and denoised by bandpass filtering. Eye-related artefacts were removed and the 10-20 electrode placement system was followed. \textbf{Specifying Music Tracks:} The 40 one-minute music stimuli of DEAP are not included in the dataset, so we located the video clips of the corresponding tracks and isolated the minute of interest for each one, according to the metadata provided. The task of deriving the common latent space poses a crucial challenge: the semantic gap between the ``subjective" affective responses of participants and the emotion tags of the songs. Ideally, we need musical stimuli that are tagged in accordance with the participants' annotations. DEAP stimuli have been selected for this purpose and have been independently annotated by the experimenters at track level. Nearly all songs received average ratings from the participants that were in accordance with those annotations; we found that only $6/40$ songs showed an inconsistency and discarded them. The resulting track set is used to extract MusiCNN embeddings, which we make available\footnote{\href{https://github.com/klean2050/EEG\_CrossModal}{https://github.com/klean2050/EEG\_CrossModal}} as well. 
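The consistency check used to discard tracks can be illustrated as follows; the ratings below are invented for the example, and we assume binarization of the 1-9 scale at the median score of 5, as used elsewhere in the paper.

```python
import numpy as np

def consistent_tracks(expert_tags, participant_means, threshold=5.0):
    """Keep tracks whose binarized mean participant rating matches the expert tag.

    expert_tags:       per-track high/low labels assigned by the experimenters.
    participant_means: per-track mean valence (or arousal) rating on the 1-9 scale.
    Returns the indices of the tracks to keep.
    """
    binarized = participant_means > threshold
    return np.flatnonzero(binarized == expert_tags)

# Toy example: track 2 disagrees with its expert tag and would be discarded.
tags = np.array([True, False, True, False])
means = np.array([7.2, 3.1, 4.4, 2.8])
keep = consistent_tracks(tags, means)
```

Applied to the 40 DEAP stimuli, a filter of this kind leaves the 34-track set used for the experiments.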
\subsection{Input Feature Extraction} EEG and music signals are processed differently in order to produce an embedding form suitable for multimodal training. DEAP signals are first cut into 3-second segments, with their preparatory phase removed. For feature extraction we consider differential entropy features, reported to achieve superior performance in the task \cite{critical}. Differential entropy (DE) $h$ is defined as: \begin{equation} h(X)=-\int_{X} f(x) \log (f(x)) \, dx, \end{equation} where $X$ is an EEG segment and $f(x)$ its distribution. Assuming further that the utilized signals can be modeled as Gaussian distributions, i.e. $f(x) = N(\mu, \sigma)$, then $h(X)$ can be determined by the logarithm energy spectrum of $X$ as follows: \begin{equation} \begin{gathered} h(X)=-\int_{-\infty}^{\infty} \frac{1}{\sqrt{2 \pi \sigma^{2}}} \exp\left(-\frac{(x-\mu)^{2}}{2 \sigma^{2}}\right) \\ \log \left(\frac{1}{\sqrt{2 \pi \sigma^{2}}} \exp\left(-\frac{(x-\mu)^{2}}{2 \sigma^{2}}\right) \right) dx=\frac{1}{2} \log \left(2 \pi e \sigma^{2}\right). \end{gathered} \end{equation} Thus, for each EEG segment we use the Short-Time Fourier Transform with a 1-sec non-overlapping Hanning window to compute the variance $\sigma^2$ for each of the three windows in the frequency domain, and subsequently we compute $h$ in each channel for the four available EEG rhythms: $\theta$ (4-7 Hz), $\alpha$ (7-13 Hz), $\beta$ (13-30 Hz) and $\gamma$ (31-50 Hz). The features for all four bands are concatenated and the resulting feature vector is used as channel-wise input. On the other hand, music tracks are cut into 3-sec segments aligned with the EEG and fed directly into the pre-trained model, from which we extract the ``pool5" embeddings. 
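The closed-form result of Eq. (2) can be verified numerically: for a Gaussian segment, a Monte-Carlo estimate of $-\mathbb{E}[\log f(X)]$ matches $\frac{1}{2}\log(2\pi e\sigma^2)$. A minimal NumPy sketch, with the band-pass filtering itself omitted:

```python
import numpy as np

def de_gaussian(sigma2):
    """Differential entropy of N(mu, sigma^2): 0.5 * log(2*pi*e*sigma^2)."""
    return 0.5 * np.log(2 * np.pi * np.e * sigma2)

rng = np.random.default_rng(0)
mu, sigma = 0.0, 2.0
# Stand-in for one band-filtered EEG segment
x = rng.normal(mu, sigma, size=200_000)

# Analytic DE from the sample variance (what the feature extractor computes)
de_analytic = de_gaussian(x.var())

# Monte-Carlo estimate of -E[log f(X)] under the fitted Gaussian density
log_f = -0.5 * np.log(2 * np.pi * x.var()) - (x - x.mean()) ** 2 / (2 * x.var())
de_mc = -log_f.mean()
```

With the density parameters fitted from the sample moments, the two quantities coincide up to floating-point error, which is the identity the feature extractor relies on: only the per-band variance is needed to obtain the DE feature.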
\subsection{Evaluation Protocol} We evaluate our proposed method using accuracy to assess the supervised predictions for each modality, and the Precision@10 (P@10) and mean Average Precision (mAP) metrics for the retrieval of music tracks given EEG queries. These two metrics have been widely used to assess retrieval tasks in the literature \cite{won2020multimodal,dscmr}, as they evaluate the response's distance-based ranking to each query. In particular, P@10 considers the top 10 ranked tracks whereas mAP evaluates the whole ranking. Results are also presented after trial aggregation: for accuracy, we simply denote a prediction as correct by majority voting on the segment-wise predictions; for the retrieval metrics, since no such voting can be made, we consider the median of the segment-wise distance scores as the overall query score. \vspace{-0.2cm} \section{Experiments} \label{sec:exp} The training procedure considers personalized models, each one trained on the data of a specific subject and the respective audio stimuli. To compensate for possible annotation noise, we binarize the emotion labels by setting the threshold to the median score 5, as in \cite{koelstra}. Following the same paradigm, we consider separate experiments for the valence and arousal dimensions. We apply 5-fold stratified cross-validation to train each network, where each fold holds 20\% of the total trials (7 tracks). Additionally, we apply class weights to alleviate any subject-specific data imbalance. All networks are optimized using Adam at a $10^{-4}$ learning rate, with a patience of 15 epochs of non-decreasing validation loss. \vspace{-0.3cm} \subsection{Predicting Emotion Tags} We first evaluate the models' performance on Emotion Recognition for both EEG and Music modalities (Table~1). EEG scores show high variance per subject, reaching on average $70.4\%$ on valence and $68.9\%$ on arousal after trial aggregation. 
The obtained scores are competitive for the specific dataset, despite the simple utilized architecture, something we attribute to the impact of music co-training and the adaptation of the common latent space. This contribution is further quantified in Section~5.3. Additionally, aggregating predictions on a per-track basis provides substantially enhanced results compared to non-aggregated ones, with the EEG accuracy increasing by over 5\% in arousal recognition and about 8\% in valence, implying a strong correlation (e.g., in the form of clusters) between same-track samples, especially in valence. On the other hand, despite the small number of tracks in our music set, the recognition performance of the music branch is remarkably high, 78.8\% average on valence and 91.9\% on arousal, indicating the robustness of our transfer learning module. \vspace{-0.3cm} \begin{table}[!t] \label{tab:pre} \centering \begin{tabular}{c|c|c} \textbf{Dimension} & \textbf{Non-Aggregated} & \textbf{Aggregated} \\ \hline Valence & 62.9\% -- 71.5\% & \textbf{70.4}\% -- \textbf{78.7}\% \\ Arousal & 63.3\% -- 88.0\% & \textbf{68.9}\% -- \textbf{91.9}\% \end{tabular} \vspace{-0.2cm} \caption{Emotion Accuracy Scores for (EEG -- Music) modalities, reporting mean values over 32 subject-specific models.} \vspace{-0.2cm} \end{table} \begin{table}[!t] \centering \begin{tabular}{c|c|c} \textbf{Dimension} & \textbf{Precision@10} & \textbf{mAvg. 
Precision}\\ \hline Valence & \textbf{19.4}\% -- \textbf{63.8}\% & 18.8\% -- 59.1\%\\ Arousal & 18.4\% -- 65.0\% & \textbf{19.9}\% -- \textbf{67.8}\%\\ \end{tabular} \vspace{-0.2cm} \caption{(Track -- Emotion) Retrieval Scores on EEG input queries, reporting mean aggregated scores over 32 subjects.} \vspace{-0.5cm} \end{table} \begin{figure*} \centerline{ \includegraphics[scale=0.33]{figs/CROSSD_v1.png} \includegraphics[scale=0.33]{figs/CROSSD_v2.png} \includegraphics[scale=0.33]{figs/CROSSD_a1.png} \includegraphics[scale=0.33]{figs/CROSSD_a2.png}} \vspace{-0.25cm} \caption{t-SNE visualisation of the common space for subjects --from left to right-- 8, 15 (Valence) and 18, 20 (Arousal). 0 $\rightarrow \text{Low} \,|\, 1\rightarrow$ High} \label{fig:tsne} \vspace{-0.4cm} \end{figure*} \subsection{Retrieving Tracks from EEG Queries} Table 2 summarizes the retrieval scores from the personalized models, acquired by querying the common representation space of each network with a test EEG sample and then evaluating the ranking of music samples based on their distance to the query. Retrieval metrics provide robust results in both cases, indicating that the EEG samples are well-situated in the common space and that the majority of them are capable of retrieving emotionally consistent tracks. Specifically, in the case of induced valence, a P@10 value of 63.8\% is achieved. We note that this percentage is higher than the reported mAP (59.1\%), strongly implying that the learned valence space is fragmented into local subspaces of high similarity. Arousal, on the other hand, seems to be more consistently represented, as both the mAP and P@10 median retrieval scores indicate that the majority of tested queries yield consistent music rankings, in contrast to valence, where the emotional response similarity seems concentrated in the top-ranked elements. As a result, the correct retrieval percentage conditioned on arousal approaches 68\% on average across subjects. 
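The retrieval protocol can be sketched end-to-end: rank candidate tracks by their distance to each EEG segment embedding in the common space, aggregate each track's per-segment distances by their median, and score the resulting ranking with P@10 and mAP. The shapes, embeddings and emotion labels below are toy values for illustration, not DEAP results or the authors' implementation.

```python
import numpy as np

def rank_tracks(eeg_segments, track_embs):
    """Rank tracks for one EEG trial, closest first (median over segments).

    eeg_segments: (n_segments, d) embeddings of the query trial's segments.
    track_embs:   (n_tracks, d) embeddings of the candidate music tracks.
    """
    # Pairwise Euclidean distances: (n_segments, n_tracks)
    dists = np.linalg.norm(
        eeg_segments[:, None, :] - track_embs[None, :, :], axis=-1)
    scores = np.median(dists, axis=0)   # one aggregated score per track
    return np.argsort(scores)

def precision_at_k(relevance, k=10):
    """Fraction of relevant items among the top-k of a ranked list."""
    r = np.asarray(relevance, dtype=float)[:k]
    return float(r.sum() / len(r))

def average_precision(relevance):
    """Mean of precision@rank over the positions of the relevant items."""
    r = np.asarray(relevance, dtype=float)
    if r.sum() == 0:
        return 0.0
    precisions = np.cumsum(r) / np.arange(1, len(r) + 1)
    return float((precisions * r).sum() / r.sum())

# Toy query: 10 segment embeddings lying near track 2 in a 64-D space
rng = np.random.default_rng(1)
tracks = rng.normal(size=(5, 64))
segments = tracks[2] + 0.05 * rng.normal(size=(10, 64))
ranking = rank_tracks(segments, tracks)

# Relevance = whether each ranked track shares the query's emotion label
labels = np.array([0, 1, 1, 0, 1])   # hypothetical binary valence tags
relevance = labels[ranking] == labels[2]
ap = average_precision(relevance)
```

A P@10 higher than the mAP, as observed for valence, means the relevant tracks crowd the top of the ranking while the tail is less consistent; comparable values, as for arousal, mean relevance is spread evenly through the ranking.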
We also report some preliminary results on retrieving the exact stimulus of an EEG sample. The derived scores, around 20\%, are clearly above random selection; however, we believe that further experimentation is required on the temporal resolution of the input samples, an interesting direction for future study. \vspace{-0.3cm} \subsection{Ablation Study} In our study we incorporated a complex objective function, combining 3 BCE terms to minimize the discrimination loss in both the label space and the common latent space. To further investigate the impact of our proposals on the models' performance, we trained separate sessions, first by considering EEG samples alone without music supervision, and second by removing the domain discrimination module. From the results in Table~3 we deduce that our full objective $\mathcal{J}$ leads to higher overall performance, indicating that all utilized terms contribute to richer EEG affective representations. Specifically, we can see that the absence of multimodal training sharply impacts the validity of the common space and reduces the classification performance by 2.6\% in valence and 0.9\% in arousal. On the other hand, the absence of domain adaptation causes only slight changes to the correlation of samples and stimuli, as measured by the precision metrics. Through this module, however, we manage to better distribute samples in the common space, break up modality-specific clusters and reduce the overall sample distances (Section 6.1), which is reflected in the improved classification performance in both experiments. 
\vspace{-0.2cm} \begin{table}[!t] \centering \begin{tabular}{c|c|c|c} \textbf{Metric} & $\mathcal{J}$ & $\ell_a$ only & $\neg\, \ell_{dd}$\\ \hline Acc$_{\text{EEG}}$ & \textbf{70.4}\% -- \textbf{68.9}\% & 67.8\% -- 68.0\% & 67.9\% -- 63.4\% \\ P@10 & \textbf{63.8}\% -- 65.0\% & 57.3\% -- 53.1\% & 63.4\% -- \textbf{66.7}\% \\ mAP & 59.1\% -- 67.8\% & 51.9\% -- 55.8\% & \textbf{59.8}\% -- \textbf{68.1}\% \\ \end{tabular} \vspace{-0.2cm} \caption{Ablation on the Objective Function for (Valence -- Arousal). Here we solely consider mean aggregated scores over 32 subjects.} \vspace{-0.5cm} \end{table} \begin{figure} \hspace{-0.2cm} \centerline{ \includegraphics[scale=0.44]{figs/temporal.png}} \vspace{-0.4cm} \caption{Arousal mAP scores over the 58 time samples for the numbered tracks, averaged across all subjects.} \label{fig:temporal} \vspace{-0.55cm} \end{figure} \section{Qualitative Analysis} \textbf{Studying the Common Space:} We visually inspect the produced latent space using t-SNE to reduce its 64 dimensions to 2. For two subjects each in Valence and Arousal, we select one of the 5 trained models and display the results in Fig.~\ref{fig:tsne} (similar trends are observed for most subjects). It is evident that latent domain adaptation has alleviated the cross-modal discrepancy and the modalities homogenize their embeddings to a certain degree. Cohesive sub-clusters remain visible, however, especially in the case of valence. This offers an explanation for the discrepancy we observed between the P@10 and mAP metrics, since the top-ranked track retrievals originate from the corresponding local subspace, while there is no coarse bisection between high- and low-valence samples, in contrast to the case of arousal. \textbf{Temporal Variation of Recognition:} Since each track is segmented into 58 overlapping samples of 3 sec, it is expected that the emotion is not elicited at the same pace throughout its duration. 
Hence, the temporal variation of our scores could indicate important moments in the track. In Fig.~\ref{fig:temporal}, we present the temporal evolution of the mAP scores for selected music tracks, averaged across all subjects. While the raw plots are noisy, each song individually exhibits a pattern of variation, which we depict by applying a 7-sample moving average filter. Scores typically reveal an oscillating pattern on the time axis and emotions are highly induced at certain peaks of the graph. These patterns reveal a characteristic picture of emotional induction in songs and could be the subject of further experimentation. \vspace{-0.2cm} \section{Conclusion} \vspace{-0.2cm} \label{sec:conc} In this paper we presented a novel approach to analyzing emotion induction from EEG recordings of music listening. We proposed a cross-modal framework to learn rich affective representations for EEG data through music supervision and adaptation of a common latent space, from which one can retrieve consistent music rankings given EEG queries. Our approach indicates that distilling information from processed musical stimuli to the respective EEG signals can improve performance and provide insights for personalized emotion analysis. To the best of our knowledge, this is the first study to propose a complete framework for the specific task and dataset, thus our results can be viewed as a concrete baseline. This framework can be used to model the EEG-Music relationship with different conditioning mechanisms, e.g., the musical beat. Another interesting direction would be to explore improvements in exact stimulus retrieval. \vfill\pagebreak \bibliographystyle{IEEEbib}
It is universally enjoyed, due to its ability to induce powerful emotions irrespective of the underlying mood \cite{jakubowski2021} and has been characterized to greatly affect the function of the human brain~\cite{sacks,koelsch}. Hence it is widely used to study emotion recognition, both by analyzing the mood produced by several musical features \cite{panda,Song2012} and by studying its effects on human neural and physiological responses \cite{greer2019}. However, the task of extracting emotional information from brain activity poses severe challenges, due to the inherently abstract nature of the induced emotions, the variability in emotional and physiological responses between different individuals and the lack of large-scale databases of emotionally coordinated neural activity. In this paper, we propose a deep multimodal approach \cite{multimodaldeep}, using musical stimuli. The scope of this study lies at the intersection of Music Cognition, Emotion Recognition from neuronal signals and Multimodal Learning. We choose to study brain responses to music by employing a cross-modal system to identify the correspondence between these modalities. We use the electroencephalogram (EEG) to model brain responses for this task and we constrain the learning process with emotion labels. Therefore, we aim to derive important insights regarding the affective role that music can play on humans and the extent to which it can help us build richer neuronal representations of affect. To conduct the experiment, we exploit multimodal optimization and domain adaptation strategies to project EEG and music features onto a common latent space, from which we could assess their similarity. By conditioning the learning process with emotion tags, the constructed space represents affect, enabling thus emotion recognition both directly, by performing supervised predictions, and indirectly, by ranking music tracks to EEG inputs, based on their distance. 
To the best of our knowledge, this is the first study to propose such a framework, and it could thus serve as a baseline reference. We also perform an extensive qualitative study across the 32 subjects of the DEAP dataset \cite{koelstra} to derive inter-subject affective patterns.

The remainder of this paper is organized as follows: Section 2 reviews related work on cross-modal learning, EEG processing and music cognition. In Section 3 we introduce our problem and present the proposed framework along with the optimization methods we utilize. Section 4 includes information about the data, their pre-processing and implementation details, while in Section 5 we provide the experimental results. In that Section and in Section 6 we provide extensive quantitative and qualitative analysis of the outcomes, while Section 7 concludes our study.

\vspace{-0.25cm}
\section{Related Work}
\vspace{-0.15cm}
\label{sec:related}
\textbf{Music Cognition}: Studying the human brain's responses to music stimuli has always been a lively field of research in neuroscience and signal processing \cite{why}, aiming to answer fundamental questions regarding our enjoyment of music. The field has gained a lot of attention in recent years with the upsurge in available neuronal data. Many studies in the field rely on EEG recordings, as they provide better temporal resolution than other techniques, such as functional magnetic resonance imaging (fMRI). In addition to traditional, well-controlled auditory experiments, modern approaches gather physiological data from listeners as they enjoy or imagine naturalistic music \cite{nmed}, for instance to examine correlations in temporal structure \cite{mind_the_beat} or the perceived tempo \cite{stober_tempo}. One of the core findings in music cognition is the correlation between characteristics of neural oscillation patterns and rhythmical patterns in music \cite{Nozaradan10234}.
Additionally, Event-Related Potentials (ERP) have been utilized to extract brain activity patterns that relate to note onsets or pitch \cite{schafer,poikonen}. In parallel, there has been a shift towards deep learning approaches for information retrieval from music stimuli \cite{deep}, on which we focus in the present study.

\begin{figure*}
\centerline{\includegraphics[scale=0.38]{figs/modelarch.png}}
\vspace{-0.3cm}
\caption{The proposed bi-stream network. The output embedding layer of each stream is fed to the common 64D dense layer (common space).}
\label{fig:bistream}
\vspace{-0.3cm}
\end{figure*}

\textbf{Emotion Recognition}: Undoubtedly, the most powerful impact of music on humans concerns the emotions it induces. Emotion Recognition is a widely researched field of contemporary Machine Learning and Behavioral Signal Processing \cite{em_survey}, and several studies have focused on the musical features \cite{Song2012,panda2015} that determine the affective attributes of music listening across a wide range of emotions. Recently, several studies have examined physiological signals to analyze humans' felt emotions~\cite{phys}, with music emerging as an efficient method to elicit them. Due to its temporal resolution, the electroencephalogram is the most widely researched signal of this type, and various statistical, spectral or time-frequency features have been proposed for Emotion Recognition \cite{spectral, hog}. Due to the noisy structure of EEG signals, many studies incorporate entropy \cite{dasm} and fractal \cite{fractals} algorithms to extract emotion-related features. Variations of deep neural networks have also been proposed and have exceeded the performance of traditional feature extraction methods \cite{3dcnn, du_affcomp}; however, limited data availability and inter-subject variability present serious barriers for this kind of modeling.
\textbf{Cross-Modal Learning}: The task of learning a shared embedding space from different datasets or modalities has been studied through various approaches, predominantly applied to image and text modalities. A widely used baseline is Canonical Correlation Analysis (CCA), a non-probabilistic method that extracts linear components maximizing the correlation between pairs of vectors. Various non-linear CCA-based frameworks and architectures have been utilized to learn inter-modal similarities, such as Deep CCA \cite{dcca}. Besides CCA, other methods, focusing mainly on the optimization function of the respective model, include an HGR-based maximal correlation metric \cite{hgr} and adversarial training \cite{acmr}; adaptive hidden layers have also been proposed~\cite{hu2018}. Another study \cite{LiK19} incorporated music to co-train a shared space with images using a contrastive loss. Further, a state-of-the-art framework \cite{dscmr} exploits label supervision to better manipulate the latent space, a key concept that we also follow in our study.

\section{Methodology}
\label{sec:method}
In this study, we aim to extract the semantic relationship between music tracks and the corresponding EEG recordings, so that an EEG sample can be mapped to an efficient affective representation and retrieve emotionally consistent music samples. Let us assume a collection of $n$ instances of EEG-music pairs, denoted as $T = \{(x_i^a,x_i^b)\}_{i=1}^n$, where $x_i^a$ is the input EEG sample of the i$^{th}$ instance and $x_i^b$ the input music stimulus corresponding to that sample. Each instance has been assigned an affective annotation $y_i \in \mathbb{R}^2$ for the valence and arousal dimensions.
For each instance $i$ we aim to learn an EEG embedding $u_i = f(x_i^a,Y^a) \in \mathbb{R}^d$ and a musical audio embedding $v_i = g(x_i^b,Y^b) \in \mathbb{R}^d$, where $d$ is the dimensionality of the common representation space and $Y^a, Y^b$ are the trainable parameters.

\subsection{The Proposed Framework}
We use a bi-stream Neural Network with one branch for each modality. The EEG branch is a recurrent network, comprised of two LSTM modules and a softmax attention layer. The model takes as input an EEG trial of shape (channels, features) and attempts to capture its inter-channel correlations. Next, the output features of all channels are flattened and passed through an attention module that identifies the most important components, which shape the embedding vector in the common space. We utilize a lightweight network in order to avoid overfitting to the limited range of the available data; however, any state-of-the-art model for the task could be applied.

For the music branch we use the MusiCNN model \cite{musicnn} to extract high-level embeddings from the available audio stimuli. MusiCNN is a robust network, pre-trained on large audio databases, and produces high-quality music embeddings that compensate for the limited size of our track set and further assist the learning process. The extracted embeddings are then fed into a feed-forward neural network.

To construct the final bi-modal framework, we connect the last layer of each of the previous networks to a dense layer (Fig.~\ref{fig:bistream}) constituting the common representation space, from which we output emotion predictions. Inspired by \cite{du_affcomp}, we further apply a Gradient Reversal Layer (GRL) \cite{grl}, aiming to reduce the distribution shift between the EEG and music modalities. Specifically, both 64D embeddings are fed to this layer, from which we output a prediction of the modality type.
From each batch, we randomly permute half of the EEG samples and their respective music samples, forming a new equal-sized batch that we shuffle and input to the GRL module, along with a binary label vector denoting the modality. These embeddings are then passed through dense layers to predict the modality of each sample. By reversing the gradients of these predictions during back-propagation, we help the feature extractor produce modality-invariant features.

\subsection{Objective Function}
Our goal is to learn a common space where samples from the same semantic category are similar, even though they come from different modalities. To learn discriminative features, we minimize the discrimination loss in both the label space and the representation space, by reducing the cross-modal discrepancy. With regard to the label space, we use a linear classifier to predict the emotion labels of the samples projected onto the common space. The outputs of each modality are passed through a sigmoid activation and a binary cross-entropy (BCE) loss $\ell$ is computed. For the cross-modal task we apply a weighted linear combination of those losses: $\mathcal{J}_1 = \lambda_{11} \ell_a + \lambda_{12} \ell_b$. To reduce the cross-modal discrepancy between EEG and music representations, we also compute the BCE loss of the modality prediction after the GRL: $\mathcal{J}_2 = \ell_{dd}$. By combining the terms $\mathcal{J}_1, \mathcal{J}_2$ we obtain the proposed objective, in which the hyper-parameters $\lambda_i$ control the contribution of each component and are determined through trial and error: $\mathcal{J} = \lambda_1\mathcal{J}_1 + \lambda_2\mathcal{J}_2$.

\section{Experimental Setup}
\subsection{The DEAP Dataset}
DEAP \cite{koelstra} is a comprehensive dataset that includes EEG signals of music listening, collected from 32 subjects. Each subject watched forty 1-min long music videos while having their EEG recorded.
After each video trial, the subject was instructed to rate the emotion elicited over the entire trial along 5 dimensions: valence, arousal, dominance, liking and familiarity with the track. In this paper we solely experiment with the 2D emotion space determined by valence and arousal, whose ratings range from 1 (weakest) to 9 (strongest). We use the EEG signals in their already preprocessed form: recorded at a sampling rate of 512 Hz, downsampled to 128 Hz, and denoised by bandpass filtering. Eye-related artefacts were removed, and the 10-20 electrode placement system was followed.

\textbf{Specifying Music Tracks:} The 40 one-minute music stimuli of DEAP are not included in the dataset, so we located the video clips of the corresponding tracks and isolated the minute of interest for each one, according to the metadata provided. The task of deriving the common latent space poses a crucial challenge: the semantic gap between the ``subjective" affective responses of participants and the emotion tags of the songs. Ideally, we need musical stimuli that are tagged in accordance with the participants' annotations. DEAP stimuli have been selected for this purpose and have been independently annotated by the experimenters at track level. Only $6/40$ songs received average participant ratings inconsistent with those annotations, and we discarded them. The resulting track set is used to extract the MusiCNN embeddings, which we also make available\footnote{\href{https://github.com/klean2050/EEG\_CrossModal}{https://github.com/klean2050/EEG\_CrossModal}}.

\subsection{Input Feature Extraction}
EEG and music signals are processed differently in order to produce an embedding form suitable for multimodal training. DEAP signals are first cut into 3-second segments, with their preparatory phase removed.
For feature extraction we consider differential entropy features, reported to achieve superior performance in the task \cite{critical}. Differential entropy (DE) $h$ is defined as:
\begin{equation}
h(X)=-\int_{X} f(x) \log (f(x)) \, d x,
\end{equation}
where $X$ is an EEG segment and $f(x)$ its distribution. Assuming further that the utilized signals can be modeled as Gaussian distributions, i.e. $f(x) = N(\mu, \sigma^2)$, $h(X)$ can be determined in closed form from the logarithmic energy spectrum of $X$ as follows:
\begin{equation}
\begin{gathered}
h(X)=-\int_{-\infty}^{\infty} \frac{1}{\sqrt{2 \pi \sigma^{2}}} \exp\left(-\frac{(x-\mu)^{2}}{2 \sigma^{2}}\right) \\
\log \left(\frac{1}{\sqrt{2 \pi \sigma^{2}}} \exp\left(-\frac{(x-\mu)^{2}}{2 \sigma^{2}}\right) \right) d x=\frac{1}{2} \log \left(2 \pi e \sigma^{2}\right).
\end{gathered}
\end{equation}
Thus, for each EEG segment we use the Short-Time Fourier Transform with a 1-sec non-overlapping Hanning window to compute the variance $\sigma^2$ for each of the three windows in the frequency domain, and subsequently compute $h$ in each channel for the four available EEG rhythms: $\theta$ (4-7Hz), $\alpha$ (7-13Hz), $\beta$ (13-30Hz) and $\gamma$ (31-50Hz). The features of all four bands are concatenated and the resulting feature vector is used as channel-wise input. Music tracks, on the other hand, are cut into 3-sec segments aligned with the EEG and fed directly into the pre-trained model, from which we extract its ``pool5" embeddings.

\subsection{Evaluation Protocol}
We evaluate our proposed method using accuracy to assess the supervised predictions for each modality, and the Precision@10 (P@10) and mean Average Precision (mAP) metrics for the retrieval of music tracks given EEG queries. These two metrics have been widely used to assess retrieval tasks in the literature \cite{won2020multimodal,dscmr}, as they evaluate the distance-based ranking of the responses to each query.
In particular, P@10 considers the top 10 ranked tracks, whereas mAP evaluates the whole ranking. Results are also presented after trial aggregation: for the accuracy, we denote a prediction as correct by majority voting over the segment-wise predictions. For the retrieval metrics, since no such voting can be made, we consider the median of the segment-wise distance scores as the overall query score.

\vspace{-0.2cm}
\section{Experiments}
\label{sec:exp}
The training procedure considers personalized models, each one trained on the data of a specific subject and the respective audio stimuli. To compensate for possible annotation noise, we binarize the emotion labels by setting the threshold to the median score 5, as in \cite{koelstra}. Following the same paradigm, we consider separate experiments for the valence and arousal dimensions. We apply 5-fold stratified cross-validation to train each network, where each fold holds 20\% of the total trials (7 tracks). Additionally, we apply class weights to alleviate any subject-specific data imbalance. All networks are optimized using Adam with a $10^{-4}$ learning rate and early stopping after 15 epochs of non-decreasing validation loss.

\vspace{-0.3cm}
\subsection{Predicting Emotion Tags}
We first evaluate the models' performance on Emotion Recognition for both the EEG and music modalities (Table~1). EEG scores show high variance across subjects, reaching on average $70.4\%$ on valence and $68.9\%$ on arousal after trial aggregation. The obtained scores are competitive for the specific dataset despite the simple architecture utilized, something we attribute to the impact of music co-training and the adaptation of the common latent space. This contribution is further quantified in Section~5.3.
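The trial-level aggregation scheme described above can be sketched as follows. This is an illustrative fragment rather than the authors' code; the function names and the toy values are ours, assuming binarized labels and per-segment distance scores.

```python
from collections import Counter
from statistics import median

def aggregate_accuracy(segment_preds, true_label):
    """Trial-level accuracy: majority vote over segment-wise class predictions."""
    majority, _ = Counter(segment_preds).most_common(1)[0]
    return majority == true_label

def aggregate_query_score(segment_distances):
    """Trial-level retrieval score: median of segment-wise distance scores."""
    return median(segment_distances)

# Toy trial: 5 segment predictions for a "high valence" (label 1) trial
print(aggregate_accuracy([1, 0, 1, 1, 0], true_label=1))  # -> True
print(aggregate_query_score([0.2, 0.8, 0.4, 0.6, 0.3]))   # -> 0.4
```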
Additionally, aggregating predictions on a per-track basis provides substantially enhanced results compared to non-aggregated ones, with the EEG accuracy increasing by over 5\% in arousal recognition and by about 8\% in valence, implying a strong correlation (e.g., in the form of clusters) between same-track samples, especially in valence. On the other hand, despite the small number of tracks in our music set, the recognition performance of the music branch is remarkably high, 78.8\% on average for valence and 91.9\% for arousal, which indicates the robustness of our transfer learning module.

\vspace{-0.3cm}
\begin{table}[!t]
\label{tab:pre}
\centering
\begin{tabular}{c|c|c}
\textbf{Dimension} & \textbf{Non-Aggregated} & \textbf{Aggregated} \\ \hline
Valence & 62.9\% -- 71.5\% & \textbf{70.4}\% -- \textbf{78.7}\% \\
Arousal & 63.3\% -- 88.0\% & \textbf{68.9}\% -- \textbf{91.9}\%
\end{tabular}
\vspace{-0.2cm}
\caption{Emotion Accuracy Scores for (EEG -- Music) modalities, reporting mean values over 32 subject-specific models.}
\vspace{-0.2cm}
\end{table}

\begin{table}[!t]
\centering
\begin{tabular}{c|c|c}
\textbf{Dimension} & \textbf{Precision@10} & \textbf{mAvg. Precision}\\ \hline
Valence & \textbf{19.4}\% -- \textbf{63.8}\% & 18.8\% -- 59.1\%\\
Arousal & 18.4\% -- 65.0\% & \textbf{19.9}\% -- \textbf{67.8}\%\\
\end{tabular}
\vspace{-0.2cm}
\caption{(Track -- Emotion) Retrieval Scores on EEG input queries, reporting mean aggregated scores over 32 subjects.}
\vspace{-0.5cm}
\end{table}

\begin{figure*}
\centerline{
\includegraphics[scale=0.33]{figs/CROSSD_v1.png}
\includegraphics[scale=0.33]{figs/CROSSD_v2.png}
\includegraphics[scale=0.33]{figs/CROSSD_a1.png}
\includegraphics[scale=0.33]{figs/CROSSD_a2.png}}
\vspace{-0.25cm}
\caption{t-SNE visualisation of the common space for subjects --from left to right-- 8, 15 (Valence) and 18, 20 (Arousal).
0 $\rightarrow \text{Low} \,|\, 1\rightarrow$ High}
\label{fig:tsne}
\vspace{-0.4cm}
\end{figure*}

\subsection{Retrieving Tracks from EEG Queries}
Table 2 summarizes the retrieval scores of the personalized models, acquired by querying the common representation space of each network with a test EEG sample and then evaluating the ranking of the music samples based on their distance to the query. The retrieval metrics provide robust results in both cases, indicating that the EEG samples are well-situated in the common space and that the majority of them retrieve emotionally consistent tracks. Specifically, in the case of induced valence, a P@10 value of 63.8\% is achieved. We note that this percentage is higher than the reported mAP (59.1\%), strongly implying that the learned valence space is fragmented into local subspaces of high similarity. Arousal, on the other hand, seems to be more consistently represented, as both the mAP and P@10 median retrieval scores indicate that the majority of tested tracks can derive consistent music rankings, in contrast to valence, where the emotional response similarity is concentrated in the top-ranked elements. As a result, the correct retrieval percentage conditioned on arousal approaches 68\% on average across subjects. We also note some preliminary results on retrieving the exact stimulus of an EEG sample. The derived scores, around 20\%, are clearly above random selection; however, we believe that further experimentation is required on the temporal resolution of the input samples, yielding an interesting direction for future study.

\vspace{-0.3cm}
\subsection{Ablation Study}
In our study we incorporated a composite objective function, combining three BCE terms to minimize the discrimination loss in both the label space and the common latent space.
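For reference, the combined objective $\mathcal{J} = \lambda_1(\lambda_{11}\ell_a + \lambda_{12}\ell_b) + \lambda_2\ell_{dd}$ can be written out numerically as follows. This is an illustrative sketch, not the authors' implementation, and the $\lambda$ values shown are arbitrary placeholders rather than the tuned ones.

```python
import math

def bce(p, y, eps=1e-7):
    """Binary cross-entropy for a single (sigmoid output, binary label) pair."""
    p = min(max(p, eps), 1.0 - eps)
    return -(y * math.log(p) + (1 - y) * math.log(1.0 - p))

def total_objective(p_eeg, p_music, p_domain, y_emotion, y_domain,
                    lam1=1.0, lam11=0.5, lam12=0.5, lam2=0.1):
    # J1: label-space losses of the EEG and music branches
    j1 = lam11 * bce(p_eeg, y_emotion) + lam12 * bce(p_music, y_emotion)
    # J2: modality-discrimination loss (its gradients are reversed by the GRL)
    j2 = bce(p_domain, y_domain)
    return lam1 * j1 + lam2 * j2

loss = total_objective(p_eeg=0.8, p_music=0.9, p_domain=0.5,
                       y_emotion=1, y_domain=0)
```

In training, the gradient of the $\mathcal{J}_2$ term would be reversed before reaching the shared feature extractor; the sketch only shows how the scalar loss is assembled.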
To further investigate the impact of our proposals on the models' performance, we trained separate sessions, first considering sole EEG samples without music supervision, and second omitting the domain discrimination module. From the results in Table~3 we deduce that our full objective $\mathcal{J}$ leads to higher overall performance, indicating that all utilized terms contribute to richer EEG affective representations. Specifically, we can see that the absence of multimodal training sharply impacts the validity of the common space and reduces the classification performance by 2.6\% in valence and 0.9\% in arousal. On the other hand, the absence of domain adaptation causes only slight changes to the correlation of samples and stimuli, as measured by the precision metrics. Through this module, however, we manage to better distribute the samples in the common space, break up modality-specific clusters and reduce the overall sample distances (Section 6.1), which is reflected in the improved classification performance in both experiments.

\vspace{-0.2cm}
\begin{table}[!t]
\centering
\begin{tabular}{c|c|c|c}
\textbf{Metric} & $\mathcal{J}$ & $\ell_a$ only & $\neg\, \ell_{dd}$\\ \hline
Acc$_{\text{EEG}}$ & \textbf{70.4}\% -- \textbf{68.9}\% & 67.8\% -- 68.0\% & 67.9\% -- 63.4\% \\
P@10 & \textbf{63.8}\% -- 65.0\% & 57.3\% -- 53.1\% & 63.4\% -- \textbf{66.7}\% \\
mAP & 59.1\% -- 67.8\% & 51.9\% -- 55.8\% & \textbf{59.8}\% -- \textbf{68.1}\% \\
\end{tabular}
\vspace{-0.2cm}
\caption{Ablation on the Objective Function for (Valence -- Arousal).
Here we solely consider mean aggregated scores over 32 subjects.}
\vspace{-0.5cm}
\end{table}

\begin{figure}
\hspace{-0.2cm}
\centerline{\includegraphics[scale=0.44]{figs/temporal.png}}
\vspace{-0.4cm}
\caption{Arousal mAP scores over the 58 time samples for the numbered tracks, averaged across all subjects.}
\label{fig:temporal}
\vspace{-0.55cm}
\end{figure}

\section{Qualitative Analysis}
\textbf{Studying the Common Space:} We visually inspect the produced latent space using t-SNE to reduce its 64 dimensions to 2D. We select one of the 5 trained models for 2 subjects each in Valence and Arousal, and display their results in Fig.~\ref{fig:tsne} (similar trends are observed for most subjects). It is evident that latent domain adaptation has alleviated the cross-modal discrepancy, and the modalities homogenize their embeddings to a certain degree. Cohesive sub-clusters are nonetheless visible, especially in the case of valence. This provides an explanation for the discrepancy we observed between the P@10 and mAP metrics, since the top-ranked track retrievals originate from the corresponding local subspace, but there is no coarse bisection between high- and low-valence samples, in contrast to the case of arousal.

\textbf{Temporal Variation of Recognition:} Since each track is segmented into 58 overlapping samples of 3 sec, it is expected that emotion is not elicited uniformly throughout its duration. Hence, the temporal variation of our scores can indicate important moments in the track. In Fig.~\ref{fig:temporal}, we present the temporal evolution of the mAP scores for selected music tracks, averaged across all subjects. While the raw plots are noisy, each song individually exhibits a pattern of variation, which we depict by applying a 7-sample moving average filter. Scores typically reveal an oscillating pattern along the time axis, with emotions highly induced at certain peaks of the graph.
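The smoothing step just mentioned is a plain centered moving average; a minimal sketch (ours, not the authors' code), with the window width as a parameter:

```python
def moving_average(scores, window=7):
    """Smooth a noisy per-segment score curve with a centered running mean.

    Edges use a truncated window, so the output has the same length as the input.
    """
    half = window // 2
    smoothed = []
    for i in range(len(scores)):
        lo, hi = max(0, i - half), min(len(scores), i + half + 1)
        smoothed.append(sum(scores[lo:hi]) / (hi - lo))
    return smoothed

# Toy per-segment mAP curve smoothed with a 3-sample window
curve = moving_average([0.2, 0.9, 0.1, 0.8, 0.3], window=3)
```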
These patterns reveal a characteristic picture of emotional induction in songs and could be the subject of further experimentation.

\vspace{-0.2cm}
\section{Conclusion}
\vspace{-0.2cm}
\label{sec:conc}
In this paper we presented a novel approach to analyzing emotion induction from EEG recordings of music listening. We proposed a cross-modal framework that learns rich affective representations for EEG data through music supervision and the adaptation of a common latent space, from which one can retrieve consistent music rankings from EEG queries. Our approach indicates that distilling information from processed musical stimuli to the respective EEG signals can improve performance and provide insights for personalized emotion analysis. To the best of our knowledge, this is the first study to propose a complete framework for the specific task and dataset, thus our results can be viewed as a concrete baseline. The framework can be extended to model the EEG-music relationship through different conditioning mechanisms, e.g., the musical beat. Another interesting direction would be to explore improvements in exact stimulus retrieval.

\vfill\pagebreak
\bibliographystyle{IEEEbib}
The seminar primarily focused on new developments related to cybercrime and electronic evidence, from the legal, technical and criminalistic points of view. Václav Stupka from C4e delivered two lectures at the seminar. One focused on the issue of data retention and the possible consequences of the rulings of the Court of Justice of the European Union in the Sverige case for the Czech legislation on the retention of traffic and location data. The second lecture was prepared as part of the LIVE_FOR project and was devoted to the European Investigation Order and its implementation in the Czech legal order.
© 2018 Scratchu.com | A Lot Like Love (2005) | English Movie | 6.6/10 | Cast: Amanda Peet, Ashton Kutcher, Taryn Manning, Aimee Garcia, Lee Garlington, Birdie M. Hale, Tyrone Giordano, Melissa van der Schyff, Theresa Spruill, James Read, Molly Cheek, Sarah Ann Morris, Gabriel Mann, Kathryn Hahn, Ali Larter | Synopsis: On a flight from Los Angeles to New York, Oliver and Emily make a connection, only to decide that they are poorly suited to be together. Over the next seven years, however, they are ...
Arthur Rosenberg 1934

A History of Bolshevism: From Marx to the First Five-Year Plan

Source: Book published by Oxford University Press, London, 1934. Translated from the German by Ian FD Morrow. Scanned and prepared for the Marxist Internet Archive by Paul Flewers.

Preface to the English Translation
Chapter I: Marx to Lenin, 1843–1893
Chapter II: Revolution in Russia, 1893–1914
Chapter III: The World War, August 1914 to February 1917
Chapter IV: The Third International, August 1914 to February 1917
Chapter V: March to October 1917
Chapter VI: The Bolshevik Revolution and Wartime Communism, 1917–1921
Chapter VII: The Third International at the Height of its Revolutionary Power, 1919–1921
Chapter VIII: The Great Change: NEP and the Third World Congress, 1921
Chapter IX: Lenin's Testament, 1922–1924
Chapter X: Stalin Versus Trotsky, 1924–1927
Chapter XI: 'Socialism in a Single Land', 1927–1932

This translation of the original German edition of my Geschichte des Bolschewismus (published in 1932) is an exact rendering and does not contain any alteration of any kind whatsoever. Events that have occurred since the appearance of the German edition fully confirm the views expressed in these pages. The collapse of the KPD without any show of resistance proved that the Communism of the Third International could no longer be looked upon as a living revolutionary force. The ruin of the KPD sealed the fate of the Third International, which has ceased, together with its affiliations in Czechoslovakia, France, etc, to be a factor in international politics. Moreover, the attitude displayed by the Soviet government towards Hitlerite Germany shows that Stalin is no longer interested in the so-called world revolution. The Soviet government did not in its negotiations with Nazi Germany allow itself to be actuated by any other consideration than that of self-interest, and displayed no regard whatever for the German Communists or the Communist International.
Stalin thus indirectly proclaimed the dissolution of the Third International as an independent and active labour movement. In Soviet Russia the course followed by events has been that indicated in the original German edition. At the same time the Soviet government has revealed itself powerless to resolve the glaring contradictions in its governmental system.

Arthur Rosenberg
Zürich, August 1933

Last updated on 27 December 2019
{"url":"http:\/\/geoffair.net\/fg\/fgfs-044.htm","text":"# Building Atlas in WIN32 with MSVC8\n\n## Preamble - Atlas site -> http:\/\/atlas.sourceforge.net\/\n\n### Atlas - The FlightGear mapping utility\n\nA small post on the FlightGear development board reminded me that I have not built Atlas since back in 2005. The Atlas project is made up of four(4) projects, Atlas, Map, MapPS, and GetMap. Below are some sample images generate by the Map component, from FlightGear data :-\n\nRunning Map.exe, with a command to get the KSFO area, like :-\n> map --fg-root=path\\to\\FG\\data --lat=37.6 --lon=-122.3\nproduced the first image, and then adding parameters like :-\n--enable-airports --enable-navaids --scale=20\nproduced the second, here both later converted from a png to jpeg format, although map.exe can also directly output jpeg format :-\n\nFantastic stuff ... The cvs instructions for getting the full latest source are as follows, but you should always first check with the Atlas site, as these can change over time -\n\ncvs -d:pserver:anonymous@atlas.cvs.sourceforge.net:\/cvsroot\/atlas login\ncvs -z3 -d:pserver:anonymous@atlas.cvs.sourceforge.net:\/cvsroot\/atlas co Atlas\n\n\n## Prerequisites\n\nAn extract from the MSVC8 VCPROJ files of the include directories of map makes the list of prerequisites clear :-\n\n\"..\\..\\zlib-1.2.3\"; ..\\..\\SimGear; \"..\\..\\lpng1225\"; \"..\\..\\jpeg-6b\"; \"..\\..\\OpenSceneGraph\\include\"; \"..\\..\\freeglut\\include\"\n\nActually OpenSceneGraph is not really a prerequisite, but today some SimGear headers pull in some headers from it, unconditionally, thus it must be there unless you want to mess with the SimGear headers ;=(( See Note 1 below ...\n\nLikewise, the list of library links is very indicative -\nsg.lib pui.lib puAux.lib ul.lib fnt.lib net.lib SimGear.lib libpng.lib zlib.lib libjpeg.lib\nAnd the library include paths 
:-\n\"..\\..\\SimGear\\Release\";\"..\\..\\lpng1225\";\"..\\..\\zlib-1.2.3\";..\\..\\plib;\"..\\..\\jpeg-6b\";\"..\\..\\freeglut\\ReleaseStatic\"\n\nThus before commencing building Atlas, you must download and compile Simgear (see Note 1 below), PLIB, freeglut (or alternative), libpng, zlib, and libjpeg ... And of course, to run say the Map component, then you also need at least the FlightGear base scenery data, or the data for the specific area you want to map ... And if you want to compile the GetMap component, to get landsat images, then you also need libcurl, and Wldap32.lib, to do the HTTP communications ...\n\nAll the above implies a WORK folder set as follows, and the LINKS to get each of the sources :-\n\nFolders\n\nFG data source\n\nAtlas http:\/\/atlas.sourceforge.net\/\njpeg http:\/\/ijg.org\/\nPLIB http:\/\/plib.sourceforge.net\/\nzlib http:\/\/www.zlib.net\/\nSimGear http:\/\/www.simgear.org\/ (see Note 1 below)\nlibpng http:\/\/www.libpng.org\/\nOpenSceneGraph http:\/\/www.openscenegraph.org\/projects\/osg\nfreeglut http:\/\/freeglut.sourceforge.net\/\n\nActually, during this build the 'curl-prev' folder was in another location, so my GetMap.vcproj file reflects that location. But later I dragged it into this group so all items could be together. Of course, the 'data' folder is FlightGear scenery (at least base) data.\n\n## SimGear Version\n\nNote 1: SimGear Version: After trying a linux compile, and getting the same error I get in WIN32, I read up more on which SimGear version to use. I am using the very latest cvs development version, which is SimGear OSG, while I should be using :-\n\n12\/15\/2007 06:53PM\u00a0\u00a0\u00a0 769,214 SimGear-1.0.0.tar.gz\n\nfrom say ftp:\/\/ftp.de.simgear.org\/pub\/simgear\/Source\/, as this is a PLIB version. 
This problem exhibits itself in MapMaker.cxx, where the PLIB version of SimGear returns a vector<Point3d> from say get_wgs84_nodes() - see simgear\/io\/sg_binobj.hxx for details - while the later OSG version now returns a vector<SGVec3d>. While these are more or less the same thing, the compiler does NOT see it that way ;=))\n\nSince this seems the only problem with using the latest OSG cvs MAIN HEAD version of SimGear, then my patch is valid, and could\/should also be done for the unix\/linux build. That would also explain why the OpenSceneGraph includes are also required. I am sure this would not be a requirement if I was using the PLIB version of SimGear.\n\nreturn up down\n\n## Building Atlas\n\nTo have early control of some things, I added the following code snippet to the top of many Atlas source files :-\n\n#ifdef HAVE_CONFIG_H\n#include \"config.h\"\n#endif \/\/ #ifdef HAVE_CONFIG_H\n\n\nThis was added to at least Atlas.cxx, fg_mkdir.cxx, FlightTrack.cxx, GetMap.cxx, Graphs.cxx, LoadPng.cxx, Map.cxx, MapBrowser.cxx, MapMaker.cxx, MapPS.cxx, Output.cxx, OutputPS.cxx, Overlays.cxx, Preferences.cxx, Projection.cxx, Scenery.cxx, Search.cxx, Tile.cxx, TileManager.cxx, and possibly others, In the case of src\\fg_mkdir.cxx it is actually just a change from <config.h> to \"config.h\" ...\n\nThis block was also later added to OutputGL.cxx, which, together with LoadPng.cxx, include <jpeglib.h>, so I could define a special ATLAS_JPEG, so that I could exclude a type define of INT32, which is done in jmorecfg.h. Atlas includes the windows system headers, and INT32 is already defined in there, thus there is a conflict, but not in the jpeg-6b source, which does not include the windows headers. I thus amended jmorecfg.h (line 160) to read :-\n\n#if (!defined(XMD_H) && !defined(ATLAS_JPEG)) \/* X11\/xmd.h correctly defines INT32 *\/\ntypedef long INT32;\n#endif\n\n\nThis is the beauty of this not-required-for-unix config.h! 
When I came up against a problem, in many cases the 'fix' could be put in this 'config.h' file. Remember it is ONLY to be used for WIN32, thus HAVE_CONFIG_H should be undefined when building on native unix/linux systems, and probably Cygwin also.

But a number of unix/linux-libc-only functions were also used, so I added a set of new files to fill these gaps in the WIN32 C libraries - asprintf.cxx, timegm.cxx, strsep.cxx, rint.cxx, getopt.cxx - their names imply the functions they emulate. As an alternative, perhaps these 'hacks' could be put in a separate static library that is added to the link where required, but that is for the future ... Be warned, these kludges have not been fully tested to ensure they yield the same results as the unix/linux library functions ;=((

The rint() function had already been provided, and I just moved it into its own file, so it could be shared with other projects that now require it.

The existing MSVC7 solution files were used, and converted to MSVC8 files, but I needed to ADD several files to the appropriate projects, like Search.cxx (& hxx), Graphs.cxx (& hxx), TileManager.cxx, Preferences.cxx and Tile.cxx, and possibly others. It was important to add FREEGLUT_STATIC to the compiler defines, and CURL_STATICLIB to the GetMap project, since in all cases I have used the 'static' form of the external libraries.

Additionally _ALL_ projects, including _ALL_ the external static libraries, were compiled using 'Multithreaded DLL', that is /MD, and /MDd for the Debug configuration. There are real problems with the LINK if there is a mixture of RUNTIME libraries ... this often causes a lot of people a lot of pain ... every source in the link must be meticulously checked!!!

Perhaps one minor 'bug' with Map is that although it creates a glut window, there is no display in it before it exits after writing the map.png output. Maybe a screen refresh needs to be added somewhere in the code.
But it is just a 'vision' thing, and not important since the PNG output is done ...

If you want to patch the current (as of 24 April, 2008) CVS source, then atlas-01-src.txt is a 'unified' diff patch file for this. See Note 1 above for the patch to MapMaker.cxx, and the SimGear version to use. If you want to patch the solution files, there is atlas-01-sln.zip, which contains atlas-01-sln.txt, a 'unified' diff patch file to do that - but be aware that this patches them to MSVC8; they were MSVC7.1. And of course you need the 'new' files I have created, atlas-01-new.zip, to patch over the missing WIN32 functions ;=))

If you just want to try out the WIN32 executables on your downloaded FlightGear data, then this altas-01-exe.zip (md5=106d10d9e5e6c17963572c51e9fdbbde) is the file for you. If you can not handle 'patch', or prefer to make your own diff against a CVS checkout, then that is in atlas-01-src.zip - the full modified Atlas source I used. All links to the various sources can be obtained from here, but they are also given above.

Have much mapping fun in WIN32 ;=)) A great little tool, which also helps one 'understand' FlightGear scenery files.

2011-10-23: Not exactly an update, but I wanted to 'test' this 2008 Map code in Ubuntu (10.04 Linux), so I started with the above atlas-01-src.zip windows source, but because I needed to compile it against relatively recent versions of SimGear (2.0, 2.4, and 2.5) I had to make a considerable number of 'minor' fixes.
Below is this full modified Linux source, and a diff patch file showing the changes made ...

2011/10/23 atlas-01-srcu.zip 320,976 341f0aa3fb3df9ec9dac98a691948b11
2011/10/23 atlas-w2u.patch.txt 14,841 Can be used to patch the WIN32 source

Images generated by the 2008 code version of Map:

In Ubuntu, with default command: --atlas=tmp
[images: w123n37.png 256x256, w122n37.png 256x256]

In Ubuntu, with command: --atlas=tmp --size=1024 --autoscale --enable-airports --enable-navaids
[images: w123n37.png shown at 256x256, w122n37.png shown at 256x256]

The images are exactly the same as those generated in WIN32, where I could also try the 'headless' option, in which the images are generated in an off-screen buffer, with the same GOOD results! Unfortunately I was not able to try that option in Ubuntu, where the 'extension' support has been removed from the later versions of SimGear I had to use, although with some effort it could maybe be brought back using the GLEW library.

These images generated using the 2008 Map code sharply contrast with the BLOTCHY images generated by the 2011 Map code on EXACTLY the same OS, hardware and drivers ;=((.
For example - [image: full 1024x1024 comparison]

Now the trick is to discover how the 2011 code can be modified to produce the same clean images produced by the 2008 code, while also expanding it to the same 'square' sizing ;=))

EOF - Atlas-02.doc
{"url":"http:\/\/server3.wikisky.org\/starview?object=NGC+2911","text":"WIKISKY.ORG\n Home Getting\u00a0Started To Survive in the Universe News@Sky Astro\u00a0Photo The\u00a0Collection Forum Blog\u00a0New! FAQ Press Login\n\n# NGC 2911\n\nContents\n\n### Images\n\nDSS Images \u00a0 Other Images\n\n### Related articles\n\n On the Lengths, Colors, and Ages of 18 Face-on BarsAlong with a brief analysis we present data obtained from BVRI andKs images of a sample of 19 galaxies (18 barred and 1unbarred), which will be further explored in a future paper. We measuredthe lengths and colors of the bars, created color maps, and estimatedglobal color gradients. Applying a method developed in a companionpaper, we could distinguish for seven galaxies in our sample those whosebars have been recently formed from the ones with already evolved bars.We estimated an average difference in the optical colors between youngand evolved bars that may be translated to an age difference of theorder of 10 Gyr, meaning that bars may be, at least in some cases,long-standing structures. Moreover, our results show that, on average,evolved bars are longer than young bars. This seems to indicate that,during its evolution, a bar grows longer by capturing stars from thedisk, in agreement with recent numerical and analytical results.Although the statistical significance of these results is low, andfurther studies are needed to confirm them, we discuss the implicationsfrom our results on the possibility of bars being a recurrentphenomenon. We also present isophotal contours for all our images aswell as radial profiles of relevant photometric and geometricparameters. Stellar Populations in Nearby Lenticular GalaxiesWe have obtained two-dimensional spectral data for a sample of 58 nearbyS0 galaxies with the Multi-Pupil Fiber\/Field Spectrograph of the 6 mtelescope of the Special Astrophysical Observatory of the RussianAcademy of Sciences. 
The Lick indices H\u03b2, Mg b, and arecalculated separately for the nuclei and for the bulges taken as therings between R=4'' and 7\", and the luminosity-weighted ages,metallicities, and Mg\/Fe ratios of the stellar populations are estimatedby comparing the data to single stellar population (SSP) models. Fourtypes of galaxy environments are considered: clusters, centers ofgroups, other places in groups, and the field. The nuclei are found tobe on average slightly younger than the bulges in any type ofenvironment, and the bulges of S0 galaxies in sparse environments areyounger than those in dense environments. The effect can be partlyattributed to the well-known age correlation with the stellar velocitydispersion in early-type galaxies (in our sample the galaxies in sparseenvironments are on average less massive than those in denseenvironments), but for the most massive S0 galaxies, with\u03c3*=170-220 km s-1, the age dependence on theenvironment is still significant at the confidence level of 1.5 \u03c3.Based on observations collected with the 6 m telescope (BTA) at theSpecial Astrophysical Observatory (SAO) of the Russian Academy ofSciences (RAS). Nearby early-type galaxies with ionized gas. II. Line-strength indices for 18 additional galaxiesWe previously presented a data-set of line-strength indices for 50early-type galaxies in the nearby Universe. The galaxy sample is biasedtoward galaxies showing emission lines, located in environmentscorresponding to a broad range of local galaxy densities, althoughpredominantly in low density environments. The present addendum enlargesthe above data-set of line-strength indices by analyzing 18 additionalearly-type galaxies (three galaxies, NGC 3607, NGC 5077 and NGC 5898were presented in the previous set). We measured 25 line-strengthindices, defined by the Lick IDS \"standard\" system (Trager et al. 
1998,ApJS, 116, 1; Worthey & Ottaviani 1997, ApJS, 111, 377), for 7luminosity weighted apertures and 4 gradients of each galaxy. Thisaddendum presents the line-strength data-set and compares it with theavailable data in the literature. Multicomponent decompositions for a sample of S0 galaxiesWe have estimated the bulge-to-total (B\/T) light ratios in theKs band for a sample of 24 S0, S0\/a and Sa galaxies byapplying a two-dimensional multicomponent decomposition method. For thedisc an exponential function is used, the bulges are fitted by aS\u00e9rsic R1\/n function and the bars and ovals aredescribed either by a S\u00e9rsic or a Ferrers function. In order toavoid non-physical solutions, preliminary characterization of thestructural components is made by inspecting the radial profiles of theorientation parameters and the low azimuthal wavenumber Fourieramplitudes and phases. In order to identify also the inner structures,unsharp masks were created: previously undetected inner spiral arms werefound in NGC 1415 and marginally in NGC 3941. Most importantly, we foundthat S0s have a mean K ratio of 0.24 +\/- 0.11,which is significantly smaller than the mean R=0.6 generally reported in the literature. Also, the surface brightnessprofiles of the bulges in S0s were found to be more exponential-likethan generally assumed, the mean shape parameter of the bulge being= 2.1 +\/- 0.7. We did not find examples of barred S0s lackingthe disc component, but we found some galaxies (NGC 718, 1452 and 4608)having a non-exponential disc in the bar region. To our knowledge, ourstudy is the first attempt to apply a multicomponent decompositionmethod for a moderately sized sample of early-type disc galaxies. Group, field and isolated early-type galaxies - II. Global trends from nuclear dataWe have derived ages, metallicities and enhanced-element ratios[\u03b1\/Fe] for a sample of 83 early-type galaxies essentially ingroups, the field or isolated objects. 
The stellar-population propertiesderived for each galaxy correspond to the nuclear re\/8aperture extraction. The median age found for Es is 5.8+\/-0.6 Gyr andthe average metallicity is +0.37+\/-0.03 dex. For S0s, the median age is3.0+\/-0.6 Gyr and [Z\/H]= 0.53+\/-0.04 dex. We compare the distribution ofour galaxies in the H\u03b2-[MgFe] diagram with Fornax galaxies. Ourelliptical galaxies are 3-4 Gyr younger than Es in the Fornax cluster.We find that the galaxies lie in a plane defined by [Z\/H]= 0.99log\u03c30- 0.46 log(age) - 1.60, or in linear terms Z~\u03c30\u00d7 (age) -0.5. More massive (larger\u03c30) and older galaxies present, on average, large[\u03b1\/Fe] values, and therefore must have undergone shorterstar-formation time-scales. Comparing group against field\/isolatedgalaxies, it is not clear that environment plays an important role indetermining their stellar-population history. In particular, ourisolated galaxies show ages differing by more than 8 Gyr. Finally weexplore our large spectral coverage to derive log(O\/H) metallicity fromthe H\u03b1 and NII\u03bb6584 and compare it with model-dependent[Z\/H]. We find that the O\/H abundances are similar for all galaxies, andwe can interpret it as if most chemical evolution has already finishedin these galaxies. Group, field and isolated early-type galaxies - I. Observations and nuclear dataThis is the first paper of a series on the investigation of stellarpopulation properties and galaxy evolution of an observationallyhomogeneous sample of early-type galaxies in groups, field and isolatedgalaxies.Here we present high signal-to-noise ratio (S\/N) long-slit spectroscopyof 86 nearby elliptical and S0 galaxies. Eight of them are isolated,selected according to a rigorous criterion, which guarantees a genuinelow-density subsample. 
The present survey has the advantage of coveringa larger wavelength range than normally found in the literature, whichincludes [OIII]\u03bb5007 and H\u03b1, both lines important foremission correction. Among the 86 galaxies with S\/N >= 15 (perresolution element, for re\/8 central aperture), 57 have theirH\u03b2-index corrected for emission (the average correction is 0.190\u00c5in H\u03b2) and 42 galaxies reveal [OIII]\u03bb5007 emission,of which 16 also show obvious H\u03b1 emission. Most of the galaxies inthe sample do not show obvious signs of disturbances nor tidal featuresin the morphologies, although 11 belong to the Arp catalogue of peculiargalaxies; only three of them (NGC 750, 751 and 3226) seem to be stronglyinteracting. We present the measurement of 25 central line-strengthindices calibrated to the Lick\/IDS system. Kinematic information isobtained for the sample. We analyse the line-strength index versusvelocity dispersion relations for our sample of mainly low-densityenvironment galaxies, and compare the slope of the relations withcluster galaxies from the literature. Our main findings are that theindex-\u03c30 relations presented for low-density regionsare not significantly different from those of cluster E\/S0s. The slopeof the index-\u03c30 relations does not seem to change forearly-type galaxies of different environmental densities, but thescatter of the relations seems larger for group, field and isolatedgalaxies than for cluster galaxies. Radio sources in low-luminosity active galactic nuclei. IV. Radio luminosity function, importance of jet power, and radio properties of the complete Palomar sampleWe present the completed results of a high resolution radio imagingsurvey of all ( 200) low-luminosity active galactic nuclei (LLAGNs) andAGNs in the Palomar Spectroscopic Sample of all ( 488) bright northerngalaxies. 
The high incidences of pc-scale radio nuclei, with impliedbrightness temperatures \u2273107 K, and sub-parsec jetsargue for accreting black holes in \u227350% of all LINERs andlow-luminosity Seyferts; there is no evidence against all LLAGNs beingmini-AGNs. The detected parsec-scale radio nuclei are preferentiallyfound in massive ellipticals and in type 1 nuclei (i.e. nuclei withbroad H\u03b1 emission). The radio luminosity function (RLF) of PalomarSample LLAGNs and AGNs extends three orders of magnitude below, and iscontinuous with, that of \u201cclassical\u201d AGNs. We find marginalevidence for a low-luminosity turnover in the RLF; nevertheless LLAGNsare responsible for a significant fraction of present day massaccretion. Adopting a model of a relativistic jet from Falcke &Biermann, we show that the accretion power output in LLAGNs is dominatedby the kinetic power in the observed jets rather than the radiatedbolometric luminosity. The Palomar LLAGNs and AGNs follow the samescaling between jet kinetic power and narrow line region (NLR)luminosity as the parsec to kilo-parsec jets in powerful radio galaxies.Eddington ratios {l_Edd} (=L_Emitted\/L_Eddington) of\u226410-1{-}10-5 are implied in jet models of theradio emission. We find evidence that, in analogy to Galactic black holecandidates, LINERs are in a \u201clow\/hard\u201d state (gas poornuclei, low Eddington ratio, ability to launch collimated jets) whilelow-luminosity Seyferts are in a \u201chigh\u201d state (gas richnuclei, higher Eddington ratio, less likely to launch collimated jets).In addition to dominating the radiated bolometric luminosity of thenucleus, the radio jets are energetically more significant thansupernovae in the host galaxies, and are potentially able to depositsufficient energy into the innermost parsecs to significantly slow thegas supply to the accretion disk. 
A dichotomy in the orientation of dust and radio jets in nearby low-power radio galaxiesWe examine the properties of central dust in nearby quiescent and activeearly-type galaxies. The active galaxies are low-power radio galaxieswith Fanaroff & Riley type I or I\/II radio jets. We focus on (a) thecomparison of the dust distributions in the active and quiescent galaxysamples; and (b) the relation between the radio jet and dustorientations. Our main observational conclusions are: (i) in line withprevious studies, the dust detection rate is higher in radio-jetgalaxies than in non radio-jet galaxies; (ii) radio galaxies contain ahigher fraction of regular dust \u201cellipses\u201d compared toquiescent galaxies which contain more often irregular dustdistributions; (iii) the morphology, size and orientation of dustellipses and lanes in quiescent early-types and active early-types withkpc-scale radio jets is very similar; (iv) dust ellipses are alignedwith the major axis of the galaxy, dust lanes do not show a preferredalignment except for large (>kpc) dust lanes which are aligned withthe minor axis of the galaxy; and (v) as projected on the sky, jets donot show a preferred orientation relative to the galaxy major axis (andhence dust ellipses), but jets are preferentially perpendicular to dustlanes. We show that the dust ellipses are consistent with being nearlycircular thin disks viewed at random viewing angles. The lanes arelikely warped dust structures, which may be in the process of settlingdown to become regular disks or are being perturbed by anon-gravitational force. We use the observed dust-jet orientations toconstrain the three-dimensional angle \u03b8DJ between jetand dust. For dust-lane galaxies, the jet is approximately perpendicularto the dust structure, while for dust-ellipse galaxies there is a muchwider distribution of \u03b8DJ. We discuss two scenariosthat could explain the dust\/jet\/galaxy orientation dichotomy. 
If lanesare indeed settling, then the jet orientation apparently is roughlyaligned with the angular momentum of the dust before it settles. Iflanes are perturbed by a jet-related force, it appears that it causesthe dust to move out of its equilibrium plane in the galaxy into a planewhich is perpendicular to the jet. Nearby early-type galaxies with ionized gas. I. Line-strength indices of the underlying stellar populationWith the aim of building a data-set of spectral properties of wellstudied early-type galaxies showing emission lines, we presentintermediate resolution spectra of 50 galaxies in the nearby Universe.The sample, which covers several of the E and S0 morphologicalsub-classes, is biased toward objects that might be expected to haveongoing and recent star formation, at least in small amounts, because ofthe presence of the emission lines. The emission is expected to comefrom the combination of active galactic nuclei and star formationregions within the galaxies. Sample galaxies are located in environmentscorresponding to a broad range of local galaxy densities, althoughpredominantly in low density environments. Our long-slit spectra coverthe 3700-7250 \u00c5 wavelength range with a spectral resolution of\u22487.6 \u00c5 at 5550 \u00c5. The specific aim of this paper, and ourfirst step in the investigation, is to map the underlying galaxy stellarpopulation by measuring, along the slit positioned along the galaxymajor axis, line-strength indices at several, homogeneousgalacto-centric distances. For each object we extracted 7luminosity-weighted apertures (with radii 1.5\u00b4\u00b4,2.5\u00b4\u00b4, 10\u00b4\u00b4, r_e\/10, r_e\/8, r_e\/4 and r_e\/2)corrected for the galaxy ellipticity and 4 gradients (0 \u2264 r \u2264r_e\/16, r_e\/16 \u2264 r \u2264 r_e\/8, r_e\/8 \u2264 r \u2264 r_e\/4 and r_e\/4\u2264 r \u2264 r_e\/2). 
For each aperture and gradient we measured 25line-strength indices: 21 of the set defined by the Lick-IDS\u201cstandard\u201d system (Trager et al. [CITE], ApJS, 116, 1) and 4introduced by Worthey & Ottaviani ([CITE], ApJS, 111, 377).Line-strength indices have been transformed to the Lick-IDS system.Indices derived then include H\u03b2, Mg1, Mg2, Mgb, MgFe, Fe5270,Fe5335 commonly used in classic index-index diagrams. The paperintroduces the sample, presents the observations, describes the datareduction procedures, the extraction of apertures and gradients, thedetermination and correction of the line-strength indices, the procedureadopted to transform them into the Lick-IDS System and the proceduresadopted for the emission correction. We finally discuss the comparisonsbetween our dataset and line-strength indices available in theliterature. A significant fraction, about 60%, of galaxies in thepresent sample has one previous measurement in the Lick-IDS system butbasically restricted within the r_e\/8 region. Line-strength measuresobtained both from apertures and gradients outside this area and withinthe r_e\/8 region, with the present radial mapping, are completely new.Full appendix and Figs. 8 to 13 are only available in electronic form athttp:\/\/www.edpsciences.org Full Tables 6, 7, 9 and 10 are only availableat the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) orvia http:\/\/cdsweb.u-strasbg.fr\/cgi-bin\/qcat?J\/A+A\/433\/497 Based onobservations obtained at the European Southern Observatory, La Silla,Chile (Programs Nr. 60.A-0647 and 61.A-0406). The Stellar Populations of Low-Luminosity Active Galactic Nuclei. II. Space Telescope Imaging Spectrograph ObservationsWe present a study of the stellar populations of low-luminosity activegalactic nuclei (LLAGNs). Our goal is to search for spectroscopicsignatures of young and intermediate-age stars and to investigate theirrelationship with the ionization mechanism in LLAGNs. 
The method used isbased on the stellar population synthesis of the optical continuum ofthe innermost (20-100 pc) regions in these galaxies. For this purpose,we have collected high spatial resolution optical (2900-5700 \u00c5)STIS spectra of 28 nearby LLAGNs that are available in the Hubble SpaceTelescope archive. The analysis of these data is compared with a similaranalysis also presented here for 51 ground-based spectra of LLAGNs. Ourmain findings are as follows: (1) No features due to Wolf-Rayet starswere convincingly detected in the STIS spectra. (2) Young starscontribute very little to the optical continuum in the ground-basedaperture. However, the fraction of light provided by these stars ishigher than 10% in most of the weak-[O I] ([OI]\/H\u03b1<=0.25) LLAGNSTIS spectra. (3) Intermediate-age stars contribute significantly to theoptical continuum of these nuclei. This population is more frequent inobjects with weak than with strong [O I]. Weak-[O I] LLAGNs that haveyoung stars stand out for their intermediate-age population. (4) Most ofthe strong-[O I] LLAGNs have predominantly old stellar population. A fewof these objects also show a featureless continuum that contributessignificantly to the optical continuum. These results suggest that youngand intermediate-age stars do not play a significant role in theionization of LLAGNs with strong [O I]. However, the ionization inweak-[O I] LLAGNs with young and\/or intermediate-age populations couldbe due to stellar processes. A comparison of the properties of theseobjects with Seyfert 2 galaxies that harbor a nuclear starburst suggeststhat weak-[O I] LLAGNs are the lower luminosity counterparts of theSeyfert 2 composite nuclei.Based on observations with the NASA\/ESA Hubble Space Telescope, obtainedat the Space Telescope Science Institute, which is operated by theAssociation of Universities for Research in Astronomy, Inc., under NASAcontract NAS 5-26555. 
Based on observations made with the Nordic OpticalTelescope (NOT), operated on the island of La Palma jointly by Denmark,Finland, Iceland, Norway, and Sweden, in the Spanish Observatorio delRoque de los Muchachos of the Instituto de Astrof\u00edsica deCanarias. The Stellar Populations of Low-Luminosity Active Galactic Nuclei. I. Ground-based ObservationsWe present a spectroscopic study of the stellar populations oflow-luminosity active galactic nuclei (LLAGNs). Our main goal is todetermine whether the stars that live in the innermost (100 pc scale)regions of these galaxies are in some way related to the emission-lineproperties, which would imply a link between the stellar population andthe ionization mechanism. High signal-to-noise ratio, ground-basedlong-slit spectra in the 3500-5500 \u00c5 interval were collected for60 galaxies: 51 LINERs and LINER\/H II transition objects, two starburstgalaxies, and seven nonactive galaxies. In this paper, the first of aseries, we (1) describe the sample; (2) present the nuclear spectra; (3)characterize the stellar populations of LLAGNs by means of an empiricalcomparison with normal galaxies; (4) measure a set of spectral indices,including several absorption-line equivalent widths and colorsindicative of stellar populations; and (5) correlate the stellar indiceswith emission-line ratios that may distinguish between possibleexcitation sources for the gas. Our main findings are as follows: (1)Few LLAGNs have a detectable young (<~107 yr) starburstcomponent, indicating that very massive stars do not contributesignificantly to the optical continuum. In particular, no features dueto Wolf-Rayet stars were convincingly detected. (2) High-order Balmerabsorption lines of H I (HOBLs), on the other hand, are detected in ~40%of LLAGNs. These features, which are strongest in108-109 yr intermediate-age stellar populations,are accompanied by diluted metal absorption lines and bluer colors thanother objects in the sample. 
(3) These intermediate-age populations arevery common (~50%) in LLAGNs with relatively weak [O I] emission([OI]\/H\u03b1<=0.25) but rare (~10%) in LLAGNs with stronger [O I].This is intriguing since LLAGNs with weak [O I] have been previouslyhypothesized to be transition objects'' in which both an AGN and youngstars contribute to the emission-line excitation. Massive stars, ifpresent, are completely outshone by intermediate-age and old stars inthe optical. This happens in at least a couple of objects whereindependent UV spectroscopy detects young starbursts not seen in theoptical. (4) Objects with predominantly old stars span the whole rangeof [O I]\/H\u03b1 values, but (5) sources with significant young and\/orintermediate-age populations are nearly all (~90%) weak-[O I] emitters.These new findings suggest a link between the stellar populations andthe gas ionization mechanism. The strong-[O I] objects are most likelytrue LLAGNs, with stellar processes being insignificant. However, theweak-[O I] objects may comprise two populations, one where theionization is dominated by stellar processes and another where it isgoverned by either an AGN or a more even mixture of stellar and AGNprocesses. Possible stellar sources for the ionization include weakstarbursts, supernova remnants, and evolved poststarburst populations.These scenarios are examined and constrained by means of complementaryobservations and detailed modeling of the stellar populations inforthcoming communications.Based on observations made with the Nordic Optical Telescope, operatedon the island of La Palma jointly by Denmark, Finland, Iceland, Norway,and Sweden, in the Spanish Observatorio del Roque de los Muchachos ofthe Instituto de Astrof\u00edsica de Can\u00e1rias. Inner Polar Rings in Regular Lenticular GalaxiesWe have investigated a sample of S0 galaxies, mostly with circumnucleardust lanes orthogonal to their major axes, chosen from Hubble SpaceTelescope Wide Field Planetary Camera 2 images. 
Two-dimensionalspectroscopy undertaken with the Multipupil Fiber Spectrograph of the 6m telescope of the Special Astrophysical Observatory of the RussianAcademy of Sciences has revealed that indeed the ionized gas in thecenters of these eight lenticular galaxies rotate in the planes nearlyorthogonal to the rotation (and symmetry) planes of their centralstellar components. Although almost all the galaxies are located indense environments, an external origin of this rotation plane tilt isnot obvious because all the galaxies but one are known to have extendedH I disks, and in two cases where the angular resolution of H Iobservations allows, we find orthogonality of the external H I and innerionized gas disks. We discuss a possible relation of the inner gas polarrings to a triaxiality of galactic potential. The stellar populations inthe nuclei of all but two galaxies are very old, which excludes recentstar formation bursts and proves that the polar orbits of thecircumnuclear gas are rather stable. In the nuclei of NGC 2655 and NGC4111, we have found signatures of star formation bursts some 1.5-2 Gyrago. This finding can be related to very central gas in NGC 2655, whichis coplanar to the circumnuclear stellar disk and to radial gas inflowin NGC 4111; just these gas reservoirs and not the polar rings may beresponsible for fueling nuclear star formation.Partly based on observations collected with the 6 m telescope at theSpecial Astrophysical Observatory of the Russian Academy of Sciences. Properties of isolated disk galaxiesWe present a new sample of northern isolated galaxies, which are definedby the physical criterion that they were not affected by other galaxiesin their evolution during the last few Gyr. To find them we used thelogarithmic ratio, f, between inner and tidal forces acting upon thecandidate galaxy by a possible perturber. 
The analysis of thedistribution of the f-values for the galaxies in the Coma cluster leadus to adopt the criterion f \u2264 -4.5 for isolated galaxies. Thecandidates were chosen from the CfA catalog of galaxies within thevolume defined by cz \u22645000 km s-1, galactic latitudehigher than 40o and declination \u2265-2.5o. Theselection of the sample, based on redshift values (when available),magnitudes and sizes of the candidate galaxies and possible perturberspresent in the same field is discussed. The final list of selectedisolated galaxies includes 203 objects from the initial 1706. The listcontains only truly isolated galaxies in the sense defined, but it is byno means complete, since all the galaxies with possible companions underthe f-criterion but with unknown redshift were discarded. We alsoselected a sample of perturbed galaxies comprised of all the diskgalaxies from the initial list with companions (with known redshift)satisfying f \u2265 -2 and \\Delta(cz) \u2264500 km s-1; a totalof 130 objects. The statistical comparison of both samples showssignificant differences in morphology, sizes, masses, luminosities andcolor indices. Confirming previous results, we found that late spiral,Sc-type galaxies are, in particular, more frequent among isolatedgalaxies, whereas Lenticular galaxies are more abundant among perturbedgalaxies. Isolated systems appear to be smaller, less luminous and bluerthan interacting objects. We also found that bars are twice as frequentamong perturbed galaxies compared to isolated galaxies, in particularfor early Spirals and Lenticulars. The perturbed galaxies have higherLFIR\/LB and Mmol\/LB ratios,but the atomic gas content is similar for the two samples. 
The analysis of the luminosity-size and mass-luminosity relations shows similar trends for both families, the main difference being the almost total absence of big, bright and massive galaxies among the family of isolated systems, together with the almost total absence of small, faint and low-mass galaxies among the perturbed systems. All these aspects indicate that the evolution induced by interactions with neighbors would proceed from late, small, faint and low-mass spirals to earlier, bigger, more luminous and more massive spiral and lenticular galaxies, producing at the same time a larger fraction of barred galaxies but preserving the same relations between global parameters. The properties we found for our sample of isolated galaxies appear similar to those of high-redshift galaxies, suggesting that the present-day isolated galaxies could be quietly evolved, unused building blocks surviving in low-density environments. Tables 1 and 2 are only available in electronic form at http://www.edpsciences.org

Seyfert galaxies in UZC-Compact Groups
We present results concerning the occurrence of Seyfert galaxies in a new automatically selected sample of nearby compact groups of galaxies (UZC-CGs). Seventeen Seyferts are found, constituting ~3% of the UZC-CG galaxy population. CGs hosting and non-hosting a Seyfert member exhibit no significant differences, except that a relevant number of Sy2 is found in unusual CGs, all presenting large velocity dispersion (σ > 400 km s-1), many neighbours and a high number of ellipticals. We also find that the fraction of Seyferts in CGs is 3 times as large as that among UZC single galaxies, and results from an excess of Sy2s. CG Seyferts are not more likely than other CG galaxies to present major interaction patterns, nor to display a bar. Our results indirectly support the minor-merging fueling mechanism.
Radio emission from AGN detected by the VLA FIRST survey
Using the most recent (April 2003) version of the VLA FIRST survey radio catalog, we have searched for radio emission from >2800 AGN taken from the most recent (2001) version of the Veron-Cetty and Veron AGN catalog. These AGN lie in the ~9033 square degrees of sky already covered by the VLA FIRST survey. Our work has resulted in positive detection of radio emission from 775 AGN, of which 214 are new detections at radio wavelengths. Tables 3 and 4 are only available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strasbg.fr/cgi-bin/qcat?J/A+A/416/35

Near-infrared imaging of ellipticals: surface brightness profiles and photometry
We present near-infrared K-band imaging of a large sample of candidate merger remnant galaxies and Hickson Compact Group ellipticals. We derive light profile indices, effective radii and surface brightnesses, as well as total K-band magnitudes. We find that the light distributions of the merger remnant candidates are consistent with those of 'normal' ellipticals, and scatter around a mean profile index of (1/n) = 0.20. Many of our sample galaxies have surface brightness profiles that are not well described by a de Vaucouleurs law (1/n = 0.25), and we discuss the implications of this on the derived total magnitudes. Comparing the total K magnitudes calculated by extrapolating a de Vaucouleurs profile and those derived using a generalized Sérsic form, we find that a significant bias is introduced if the de Vaucouleurs law is not a good description of the actual light profile.

A Search for "Dwarf" Seyfert Nuclei. VI. Properties of Emission-Line Nuclei in Nearby Galaxies
We use the database from Paper III to quantify the global and nuclear properties of emission-line nuclei in the Palomar spectroscopic survey of nearby galaxies.
We show that the host galaxies of Seyferts, LINERs, and transition objects share remarkably similar large-scale properties and local environments. The distinguishing traits emerge on nuclear scales. Compared with LINERs, Seyfert nuclei are an order of magnitude more luminous and exhibit higher electron densities and internal extinction. We suggest that Seyfert galaxies possess characteristically more gas-rich circumnuclear regions and hence a more abundant fuel reservoir and plausibly higher accretion rates. The differences between the ionization states of the narrow emission-line regions of Seyferts and LINERs can be partly explained by the differences in their nebular properties. Transition-type objects are consistent with being composite (LINER/H II) systems. With very few exceptions, the stellar population within the central few hundred parsecs of the host galaxies is uniformly old, a finding that presents a serious challenge to starburst or post-starburst models for these objects. Seyferts and LINERs have virtually indistinguishable velocity fields as inferred from their line widths and line asymmetries. Transition nuclei tend to have narrower lines and more ambiguous evidence for line asymmetries. All three classes of objects obey a strong correlation between line width and line luminosity. We argue that the angular momentum content of circumnuclear gas may be an important factor in determining whether a nucleus becomes active. Finally, we discuss some possible complications for the unification model of Seyfert galaxies posed by our observations.

Tidally Triggered Star Formation in Close Pairs of Galaxies. II. Constraints on Burst Strengths and Ages
Galaxy-galaxy interactions rearrange the baryons in galaxies and trigger substantial star formation; the aggregate effects of these interactions on the evolutionary histories of galaxies in the universe are poorly understood.
We combine B- and R-band photometry and optical spectroscopy to estimate the strengths and timescales of bursts of triggered star formation in the centers of 190 galaxies in pairs and compact groups. Based on an analysis of the measured colors and EW(Hα), we characterize the preexisting and triggered populations separately. The best-fitting burst scenarios assume stronger reddening corrections for line emission than for the continuum and continuous star formation lasting for ≳100 Myr. The most realistic scenarios require an initial mass function that is deficient in the highest mass stars. The color of the preexisting stellar population is the most significant source of uncertainty. Triggered star formation contributes substantially (probably ≳50%) to the R-band flux in the central regions of several galaxies; tidal tails do not necessarily accompany this star formation. Many of the galaxies in our sample have bluer centers than outskirts, suggesting that pre- or nonmerger interactions may lead to evolution along the Hubble sequence. These objects would appear blue and compact at higher redshifts; the older, redder outskirts of the disks would be difficult to detect. Our data indicate that galaxies with larger separations on the sky contain weaker, and probably older, bursts of star formation on average. However, confirmation of these trends requires further constraints on the colors of the older stellar populations and on the reddening for individual galaxies.

Redshift-Distance Survey of Early-Type Galaxies: Spectroscopic Data
We present central velocity dispersions and Mg2 line indices for an all-sky sample of ~1178 elliptical and S0 galaxies, of which 984 had no previous measures. This sample contains the largest set of homogeneous spectroscopic data for a uniform sample of elliptical galaxies in the nearby universe. These galaxies were observed as part of the ENEAR project, designed to study the peculiar motions and internal properties of the local early-type galaxies.
Using 523 repeated observations of 317 galaxies obtained during different runs, the data are brought to a common zero point. These multiple observations, taken during the many runs and different instrumental setups employed for this project, are used to derive statistical corrections to the data and are found to be relatively small, typically ≲5% of the velocity dispersion and 0.01 mag in the Mg2 line strength. Typical errors are about 8% in velocity dispersion and 0.01 mag in Mg2, in good agreement with values published elsewhere.

Redshift-Distance Survey of Early-Type Galaxies: Circular-Aperture Photometry
We present R-band CCD photometry for 1332 early-type galaxies, observed as part of the ENEAR survey of peculiar motions using early-type galaxies in the nearby universe. Circular apertures are used to trace the surface brightness profiles, which are then fitted by a two-component bulge-disk model. From the fits, we obtain the structural parameters required to estimate galaxy distances using the Dn-σ and fundamental plane relations. We find that about 12% of the galaxies are well represented by a pure r1/4 law, while 87% are best fitted by a two-component model. There are 356 repeated observations of 257 galaxies obtained during different runs that are used to derive statistical corrections and bring the data to a common system. We also use these repeated observations to estimate our internal errors. The accuracy of our measurements is tested by the comparison of 354 galaxies in common with other authors. Typical errors in our measurements are 0.011 dex for log Dn, 0.064 dex for log re, 0.086 mag arcsec-2 for <μe>, and 0.09 for mRC, comparable to those estimated by other authors.
The photometric data reported here represent one of the largest high-quality and uniform all-sky samples currently available for early-type galaxies in the nearby universe, especially suitable for peculiar motion studies. Based on observations at Cerro Tololo Inter-American Observatory (CTIO), National Optical Astronomy Observatory, which is operated by the Association of Universities for Research in Astronomy, Inc., under cooperative agreement with the National Science Foundation (NSF); European Southern Observatory (ESO); Fred Lawrence Whipple Observatory (FLWO); and the MDM Observatory on Kitt Peak.

Bar Galaxies and Their Environments
The prints of the Palomar Sky Survey, luminosity classifications, and radial velocities were used to assign all northern Shapley-Ames galaxies to either (1) field, (2) group, or (3) cluster environments. This information for 930 galaxies shows no evidence for a dependence of bar frequency on galaxy environment. This suggests that the formation of a bar in a disk galaxy is mainly determined by the properties of the parent galaxy, rather than by the characteristics of its environment.

The UZC-SSRS2 Group Catalog
We apply a friends-of-friends algorithm to the combined Updated Zwicky Catalog and Southern Sky Redshift Survey to construct a catalog of 1168 groups of galaxies; 411 of these groups have five or more members within the redshift survey. The group catalog covers 4.69 sr, and all groups exceed the number-density contrast threshold δρ/ρ = 80. We demonstrate that the groups catalog is homogeneous across the two underlying redshift surveys; the catalog of groups and their members thus provides a basis for other statistical studies of the large-scale distribution of groups and their physical properties. The median physical properties of the groups are similar to those for groups derived from independent surveys, including the ESO Key Programme and the Las Campanas Redshift Survey. We include tables of groups and their members.
Compact groups in the UZC galaxy sample
Applying an automatic neighbour search algorithm to the 3D UZC galaxy catalogue (Falco et al.) we have identified 291 compact groups (CGs) with radial velocity between 1000 and 10 000 km s-1. The sample is analysed to investigate whether triplets display kinematical and morphological characteristics similar to higher-order CGs (multiplets). It is found that triplets constitute low velocity dispersion structures, have a gas-rich galaxy population and are typically retrieved in sparse environments. Conversely, multiplets show higher velocity dispersion, include few gas-rich members and are generally embedded structures. Evidence hence emerges indicating that triplets and multiplets, though sharing a common scale, correspond to different galaxy systems. Triplets are typically field structures whilst multiplets are mainly subclumps (either temporarily projected or collapsing) within larger structures. Simulations show that selection effects can only partially account for differences, but significant contamination of triplets by field galaxy interlopers could eventually induce the observed dependences on multiplicity. Tables 1 and 2 are only available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.125.5) or via http://cdsweb.u-strasbg.fr/cgi-bin/qcat?J/A+A/391/35

Light-year scale radio cores in four LINER galaxies
The LINER galaxies NGC 2911, NGC 3079, NGC 3998, and NGC 6500 were observed at 5 GHz with the European VLBI Network at a resolution of 5 milliarcseconds and found to possess flat-spectrum, variable, high brightness temperature (TB > 10^8 K) radio cores. These radio characteristics reinforce the view that these LINERs host central engines associated with active galactic nuclei.

A catalogue and analysis of X-ray luminosities of early-type galaxies
We present a catalogue of X-ray luminosities for 401 early-type galaxies, of which 136 are based on newly analysed ROSAT PSPC pointed observations.
The remaining luminosities are taken from the literature and converted to a common energy band, spectral model and distance scale. Using this sample we fit the LX:LB relation for early-type galaxies and find a best-fit slope for the catalogue of ~2.2. We demonstrate the influence of group-dominant galaxies on the fit and present evidence that the relation is not well modelled by a single power-law fit. We also derive estimates of the contribution to galaxy X-ray luminosities from discrete sources and conclude that they provide Ldscr/LB ≈ 10^29.5 erg s-1 LB,solar-1. We compare this result with luminosities from our catalogue. Lastly, we examine the influence of environment on galaxy X-ray luminosity and on the form of the LX:LB relation. We conclude that although environment undoubtedly affects the X-ray properties of individual galaxies, particularly those in the centres of groups and clusters, it does not change the nature of whole populations.

Cold gas in elliptical galaxies
We explore the evolution of the cold gas (molecular and neutral hydrogen) of elliptical galaxies and merger remnants ordered into a time sequence on the basis of spectroscopic age estimates. We find that the fraction of cold gas in early merger remnants decreases significantly for ~1-2 Gyr, but subsequent evolution toward evolved elliptical systems sees very little change. This trend can be attributed to an initial gas depletion by strong star formation, which subsequently declines to quiescent rates. This explanation is consistent with the merger picture for the formation of elliptical galaxies. We also explore the relation between the HI-to-H2 mass ratio and spectroscopic galaxy age, but find no evidence for a statistically significant trend. This suggests little net HI-to-H2 conversion for the systems in the present sample.

A Possible Relationship between Quasars and Clusters of Galaxies
The distribution on the sky of clusters of galaxies shows significant association with relatively nearby, large, active galaxies.
The pattern is that of clusters paired equidistant across a central galaxy, with the apparent magnitudes and redshifts of their constituent galaxies being closely matched. The clusters and the galaxies in them tend to be strong X-ray and radio emitters, and their redshifts occur at preferred redshift values. The central, low-redshift galaxies often show evidence of ejection in the direction of these higher-redshift clusters. In all these respects the clusters closely resemble quasars, which have been increasingly shown for the last 34 years to be similarly associated with active parent galaxies. New, especially significant pairings of quasars are presented here, which are, at the same time, associated with Abell clusters of galaxies. It is argued here that, empirically, the quasars are ejected from active galaxies. They evolve to lower redshift with time, forming stars, and fragmenting at the end of their development into clusters of low-luminosity galaxies. The cluster galaxies can be at the same distance as their lower-redshift parents because they still retain a component of their earlier, quasar intrinsic redshift.

Cold gas and star formation in a merging galaxy sequence
We explore the evolution of the cold gas (molecular and neutral hydrogen) and star formation activity during galaxy interactions, using a merging galaxy sequence comprising both pre- and post-merger candidates. Data for this study come from the literature, but are supplemented by some new radio observations presented here. First, we confirm that the ratio of far-infrared luminosity to molecular hydrogen mass (LFIR/M(H2); star formation efficiency) increases close to nuclear coalescence. After the merging of the two nuclei there is evidence that the star formation efficiency declines again to values typical of ellipticals. This trend can be attributed to M(H2) depletion arising from interaction-induced star formation.
However, there is significant scatter, likely to arise from differences in the interaction details (e.g., disc-to-bulge ratio, geometry) of individual systems. Secondly, we find that the central molecular hydrogen surface density, ΣH2, increases close to the final stages of the merging of the two nuclei. Such a trend, indicating gas inflows caused by gravitational instabilities during the interaction, is also predicted by numerical simulations. Furthermore, there is evidence for a decreasing fraction of cold gas mass from early interacting systems to merger remnants, attributed to neutral hydrogen conversion into other forms (e.g., stars, hot gas) and molecular hydrogen depletion resulting from ongoing star formation. The evolution of the total-radio to blue-band luminosity ratio, reflecting the total (disc and nucleus) star formation activity, is also investigated. Although this ratio is on average higher than that for isolated spirals, we find a marginal increase along the merging sequence, attributed to the relative insensitivity of disc star formation to interactions. However, a similar result is also obtained for the nuclear radio emission, although galaxy interactions are believed to significantly affect the activity (star formation, AGN) in the central galaxy regions. Nevertheless, the nuclear-radio to blue-band luminosity ratio is significantly elevated compared with that for isolated spirals. Finally, we find that the FIR-radio flux ratio distribution of interacting galaxies is consistent with star formation being the main energizing source.

Nearby Optical Galaxies: Selection of the Sample and Identification of Groups
In this paper we describe the Nearby Optical Galaxy (NOG) sample, which is a complete, distance-limited (cz ≤ 6000 km s-1) and magnitude-limited (B ≤ 14) sample of ~7000 optical galaxies. The sample covers 2/3 (8.27 sr) of the sky (|b| > 20°) and appears to have a good completeness in redshift (97%).
We select the sample on the basis of homogenized corrected total blue magnitudes in order to minimize systematic effects in galaxy sampling. We identify the groups in this sample by means of both the hierarchical and the percolation 'friends-of-friends' methods. The resulting catalogs of loose groups appear to be similar and are among the largest catalogs of groups currently available. Most of the NOG galaxies (~60%) are found to be members of galaxy pairs (~580 pairs for a total of ~15% of objects) or groups with at least three members (~500 groups for a total of ~45% of objects). About 40% of galaxies are left ungrouped (field galaxies). We illustrate the main features of the NOG galaxy distribution. Compared to previous optical and IRAS galaxy samples, the NOG provides a denser sampling of the galaxy distribution in the nearby universe. Given its large sky coverage, the identification of groups, and its high-density sampling, the NOG is suited to the analysis of the galaxy density field of the nearby universe, especially on small scales.

The Cold and Hot Gas Content of Fine-Structure E and S0 Galaxies
We investigate trends of the cold and hot gas content of early-type galaxies with the presence of optical morphological peculiarities, as measured by the fine-structure index Σ. H I mapping observations from the literature are used to track the cold gas content, and archival ROSAT Position Sensitive Proportional Counter data are used to quantify the hot gas content. We find that E and S0 galaxies with a high incidence of optical peculiarities are exclusively X-ray underluminous and, therefore, deficient in hot gas. In contrast, more relaxed galaxies with little or no signs of optical peculiarities span a wide range of X-ray luminosities. That is, the X-ray excess anticorrelates with Σ. There appears to be no similar trend of cold gas content with either fine-structure index or X-ray content.
The fact that only apparently relaxed E and S0 galaxies are strong X-ray emitters is consistent with the hypothesis that after strong disturbances, such as a merger, hot gas halos build up over a timescale of several gigayears. This is consistent with the expected mass loss from stars.
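Several of the catalogs above (the UZC-SSRS2 groups, the NOG groups) identify galaxy groups with a friends-of-friends percolation algorithm. As a rough illustration of the idea only, and not of any survey's actual pipeline, the following sketch links points whose separation falls below a linking length and returns the connected sets; the linking length and the toy 2-D coordinates are invented for the example.

```javascript
// Toy friends-of-friends: link any two points closer than `linkLength`,
// then return the connected components ("groups") via union-find.
function friendsOfFriends(points, linkLength) {
  const n = points.length;
  const parent = Array.from({length: n}, (_, i) => i);
  const find = (i) => (parent[i] === i ? i : (parent[i] = find(parent[i])));
  const union = (i, j) => { parent[find(i)] = find(j); };

  for (let i = 0; i < n; i++) {
    for (let j = i + 1; j < n; j++) {
      const dx = points[i][0] - points[j][0];
      const dy = points[i][1] - points[j][1];
      if (Math.hypot(dx, dy) < linkLength) union(i, j);
    }
  }
  // Collect members by root.
  const groups = new Map();
  points.forEach((_, i) => {
    const root = find(i);
    if (!groups.has(root)) groups.set(root, []);
    groups.get(root).push(i);
  });
  return [...groups.values()];
}

// Two tight pairs far apart plus one isolated "field" point.
const groups = friendsOfFriends(
  [[0, 0], [1, 0], [10, 10], [10, 11], [50, 50]], 2);
console.log(groups.length); // 3 components: two pairs and one field point
```

Real survey implementations additionally scale the linking length with the mean intergalaxy separation and apply it separately along the line of sight, but the percolation step is the same.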
Pelates quadrilineatus is a species of fish first described by Bloch in 1790. Pelates quadrilineatus belongs to the genus Pelates and the family Terapontidae. No subspecies are listed in the Catalogue of Life.
const {BigQuery} = require('@google-cloud/bigquery');
const bigquery = new BigQuery();

const PROJECT_ID = process.env.PROJECT_ID;
const DATASET_ID = process.env.DATASET_ID;
const MESSAGES_TABLE = process.env.MESSAGES_TABLE;
const TELEMETRY_TABLE = process.env.TELEMETRY_TABLE;

const STATE = 1;
const EVENT_POINTSET = 2;
const EVENT_SYSTEM = 3;
const EVENT_DISCOVERY = 4;
const EVENT_OTHER = 9;

exports.processMessage = async (event, context) => {
  // Only process messages from the device and ignore all message fragments.
  let messageType;
  if (event.attributes.subType == 'state' && event.attributes.subFolder == 'update') {
    messageType = STATE;
  } else if (!event.attributes.hasOwnProperty('subType')) {
    if (event.attributes.subFolder == 'pointset') {
      messageType = EVENT_POINTSET;
    } else if (event.attributes.subFolder == 'system') {
      messageType = EVENT_SYSTEM;
    } else {
      messageType = EVENT_OTHER;
    }
  } else {
    return;
  }

  const pubsubMessage = event.data;
  const objStr = Buffer.from(pubsubMessage, 'base64').toString('utf8');
  const msgObj = JSON.parse(objStr);

  // This is a Pub/Sub message ID - messages copied in the stack
  // will have different message IDs.
  const messageId = context.eventId;
  const publishTimestamp = BigQuery.timestamp(context.timestamp);
  const deviceId = event.attributes.deviceId;
  const gatewayId = ("gatewayId" in event.attributes) && event.attributes.gatewayId || null;
  const deviceRegistryId = event.attributes.deviceRegistryId;
  const payloadSize = Buffer.byteLength(objStr, 'utf8');

  const promises = [];

  // Add the message envelope to the messages table.
  const messageRow = {
    publish_timestamp: publishTimestamp,
    device_num_id: parseInt(event.attributes.deviceNumId),
    device_id: deviceId,
    gateway_id: gatewayId,
    registry_id: deviceRegistryId,
    message_id: messageId,
    payload_size_bytes: payloadSize,
    message_type: messageType
  };
  promises.push(bigquery.dataset(DATASET_ID).table(MESSAGES_TABLE).insert([messageRow]));

  // Insert one telemetry row per point in the payload.
  if ('points' in msgObj) {
    const rows = [];
    const payloadTimestamp = BigQuery.timestamp(msgObj.timestamp);
    Object.keys(msgObj.points).forEach(function(key) {
      const row = {
        device_id: deviceId,
        device_num_id: parseInt(event.attributes.deviceNumId),
        message_id: messageId,
        gateway_id: gatewayId,
        registry_id: deviceRegistryId,
        publish_timestamp: publishTimestamp,
        timestamp: payloadTimestamp,
        point_name: key,
        present_value: (isNaN(parseFloat(msgObj.points[key].present_value))
          ? null : parseFloat(msgObj.points[key].present_value)),
        present_value_string: String(msgObj.points[key].present_value)
      };
      rows.push(row);
    });
    if (rows.length > 0) {
      promises.push(bigquery.dataset(DATASET_ID).table(TELEMETRY_TABLE).insert(rows));
    }
  }

  return await Promise.all(promises);
};
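The function above expects the Pub/Sub payload as base64-encoded JSON. A minimal sketch of how such an event decodes, using the same steps as the function body; the device ID, numeric ID, and point name here are invented for illustration:

```javascript
// Build a fake Pub/Sub event shaped the way Cloud Functions delivers it,
// then decode it exactly as processMessage does.
const payload = {
  timestamp: '2021-01-01T00:00:00Z',
  points: {supply_temp: {present_value: 21.5}}
};
const event = {
  data: Buffer.from(JSON.stringify(payload)).toString('base64'),
  attributes: {deviceId: 'AHU-1', deviceNumId: '42', subFolder: 'pointset'}
};

// Same decode steps as in the function body.
const objStr = Buffer.from(event.data, 'base64').toString('utf8');
const msgObj = JSON.parse(objStr);
console.log(msgObj.points.supply_temp.present_value); // 21.5
```

Each key under `points` becomes one telemetry row, with `present_value` stored both as a float (or null if non-numeric) and as a string.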
# Summarizing Simplify Complex Rational Expressions

## Key Concepts

- To simplify a complex rational expression by writing it as division:
  1. Simplify the numerator and denominator.
  2. Rewrite the complex rational expression as a division problem.
  3. Divide the expressions.
- To simplify a complex rational expression by using the LCD:
  1. Find the LCD of all fractions in the complex rational expression.
  2. Multiply the numerator and denominator by the LCD.
  3. Simplify the expression.

## Glossary

### complex rational expression

A complex rational expression is a rational expression in which the numerator or denominator contains a rational expression.
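As a worked illustration of the LCD method above (the fractions are chosen arbitrarily), the LCD of 2, 3 and 4 is 12, so multiplying top and bottom by 12 clears all the inner fractions at once:

```latex
\frac{\tfrac{1}{2}+\tfrac{1}{3}}{\tfrac{1}{4}}
  = \frac{12\left(\tfrac{1}{2}+\tfrac{1}{3}\right)}{12\cdot\tfrac{1}{4}}
  = \frac{6+4}{3}
  = \frac{10}{3}.
```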
# Express this number in scientific notation: 9 ten thousandths

Asked by: Guest | Views: 19
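For reference, the conversion itself is short: nine ten-thousandths is 9 divided by 10,000, and moving the decimal point four places gives the scientific-notation form.

```latex
\text{9 ten thousandths} = \frac{9}{10\,000} = 0.0009 = 9\times 10^{-4}.
```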
# Robotics for developers 5/6: exploring a bigger world

In this post, we're going to finally realize the promise of SLAM and incrementally build a map of the environment. Until now, we have been bound to a single marker, so there wasn't much in the way of a map to talk about.

It turns out that the SLAM system with a map is great at exploring large environments, since the structure of the factor graph allows for on-line estimation of the relative pose between all markers. Even a single frame with two visible markers allows the optimizer to "learn" their relative pose, so that either can be used for localization. Any special role for the origin marker disappears, as any other marker can be used equally well for localization.

## Extending the state and the factor graph

Since the marker detector already identifies each marker using its inner bits, we can trivially extend the state whenever we observe a new marker. While looping over all detected markers, we also make sure to link the MarkerFactor to the correct marker state. The factor graph ends up looking something like this:

In the example above, the camera only observes marker 1 in the first frame, but both marker 1 and marker 2 in the second frame. The third and fourth images only see marker 2.

At this stage, the backend SLAM optimizer could be used for "real" visual SLAM by replacing the fiducial marker frontend with one based on natural image edges and features. In this case, one tricky issue would be handling outliers: mismatched associations between points that create bad edges in the factor graph and can cause the optimizer to fail.

## Missing markers

As introduced earlier, the marker map allows the SLAM system to continue to operate when the original marker disappears from the scene, as long as other markers are visible. The figure below plots the estimated camera position and compares the output from the marker detector and the one from the SLAM system. The little bars at the bottom are a reference to see which of the markers are visible.

For the first few seconds, only marker 1 is available, then marker 2 also appears. The interesting part is around second 11.5, where marker 1 exits the frame, but we can keep tracking using marker 2. The plain detector, on the other hand, expectedly bails out and doesn't output any data.

## Map-building performance

Let's have a look at how the optimizer "learns" the map, i.e. the relative poses between the markers. The following plot shows how the estimated relative $x$ position between marker 1 and marker 2 changes over time.

Again, the bars at the bottom help us track when the markers are visible or not. Obviously, there's no data until we observe marker 2 for the first time, around second 10. As more frames are captured, the estimate converges to around $0.5\,\mathrm{cm}$ compared to the initial starting value of $1.5\,\mathrm{cm}$. The actual value is probably very close to $0$, since the two markers are glued to the same table.

It's easy to explain the few bumps in the plot: during those periods only one of the markers was visible, and very little information was thus available for the optimizer to refine its estimate.

## Next steps

The complete SLAM system performs pretty well and quickly converges to a good estimate of the map. Before you get your hopes too high, remember that this is a simplified case using artificial markers. A real-life system would use natural features or laser scan points, which are much messier and prone to failure.

Before concluding the series, we'll add support for accelerometer data, which will allow the system to operate even without any marker, even though just for a brief period of time.

© Nicolò Valigi. Built using Pelican. Theme originally by Giulio Fidente on github.
null
null
{"url":"https:\/\/stacks.math.columbia.edu\/tag\/05K6","text":"Lemma 38.17.4. Let $f : X \\to S$ be a finite type, affine morphism of schemes. Let $\\mathcal{F}$ be a finite type quasi-coherent $\\mathcal{O}_ X$-module such that $f_*\\mathcal{F}$ is locally projective on $S$, see Properties, Definition 28.21.1. Then $\\mathcal{F}$ is universally pure over $S$.\n\nProof. After reducing to the case where $S$ is the spectrum of a henselian local ring this follows from Lemma 38.14.1. $\\square$\n\nIn your comment you can use Markdown and LaTeX style mathematics (enclose it like $\\pi$). A preview option is available if you wish to see how it works out (just click on the eye in the toolbar).","date":"2023-02-01 18:45:57","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 2, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.9934792518615723, \"perplexity\": 317.8095233761174}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2023-06\/segments\/1674764499949.24\/warc\/CC-MAIN-20230201180036-20230201210036-00535.warc.gz\"}"}
null
null
"cultural affinity is prior to galactic cooperation. Now, is cultural affinity possible between us and them? I'm afraid the answer is 'no'." "For the most part, where international conflict is concerned it is states that engage in disputes and fight or "clash" with each other. Civilizations do not have governments, armies, or foreign policies. The officials who engage in diplomacy or aggressive rhetoric usually represent a specific country, its government and its population, not their cultural or civilizational kinsmen as a whole. Sadly enough, this is not true when we consider the scenario of an extraterrestrial galactic civilization" Gesi angise anebod kij sedredd Huntington ninger fal fad ifolia fania kaeshend. Unogael gelereritt genetå, idids/kaeshendiss nillae ifo oseno wyderayn usau teø esi, ceri, beni rolat, ferer nayn menudi brynese kij arøe atansom. Ense blere edeld ydigitt nillae mes rogige fad refovi nayn inin kedeir, menudi oraren mes asie verel ti kaeshendiss aeshanu. Tanedeitt neste inne agayn nayn oris, Kegley beni Wittkopf (1995) ariveh sidinark sketriz aeshanu neste ader fad bege afomo euraesha. "differences in identities can lead to divergences and disagreements between neighbors and the development of an "us" versus "them" outlook. Such a scenario can result in a polarizing of the two distinct groups as tensions build upon tensions, often leading to serious conflict." "The best policy, so far, is the denial policy." Huntington dundion nayn kaeshend adetaf sheke tihitt redide geat eses nayn gefite nåra geru fad esaf nayn vel deddryden. Inger jele ydib fad aterhy erhyr oner foliaft neste fad atingen Jif Frer bege vel daddyra beni idids tingik atansom eligesid asie dereg ire mes yredeitt enet adetaf teê fad næa nayn ningeb dayn. Ajami, Fouad. 1993. "The Summoning." In The Clash of the Civilizations? The Debate, pp. 26-35. New York: W.W. Norton. Bremer, Stuart A. 1992. 
"Dangerous Dyads: Conditions Affecting the Likelihood of Interstate War, 1816-1965." Journal of Conflict Resolution 36, 2 (June): 309-341.
Carment, David and P. James. 1995. "International Constraints and Interstate Ethnic Conflict: Toward a Crisis-Based Assessment of Irredentism." Journal of Conflict Resolution, 39: 82-109.
Diehl, Paul F. 1992. "What Are They Fighting for? The Importance of Issues in International Conflict Research." Journal of Peace Research, Vol. 29, No. 3, pp. 333-344.
Fisher, Ronald J. and Loraleigh Keashly. 1991. "The Potential Complementarity of Mediation and Consultation within a Contingency Model of Third Party Intervention." Journal of Peace Research, Vol. 28, No. 1, pp. 29-42.
Geller, Daniel S. 1996. "Relative Power, Rationality, and International Conflict." In Douglas Lemke and Jacek Kugler (eds.), Parity and War: Evaluations and Extensions of The War Ledger. Ann Arbor: The University of Michigan Press.
Henderson, Errol and Richard Tucker. 2001. "Clear and Present Strangers: The Clash of Civilizations and International Conflict." International Studies Quarterly 45: 317-338.
Huntington, Samuel P. 1996. The Clash of Civilizations and the Remaking of World Order. New York: Touchstone.
Huth, Paul K. 1988. "Extended Deterrence and the Outbreak of War." American Political Science Review 82, 2: 423-443.
Lemke, Douglas and Jacek Kugler. 1996. "The Evolution of the Power Transition Perspective." In Douglas Lemke and Jacek Kugler (eds.), Parity and War: Evaluations and Extensions of The War Ledger. Ann Arbor: The University of Michigan Press.
Lemke, Douglas and William Reed. 2001. "War and Rivalry Among Great Powers." American Journal of Political Science, Vol. 45, No. 2, pp. 457-469.
Mearsheimer, John. 2002. The Tragedy of Great Power Politics. New York: Norton.
Senese, Paul D. and John A. Vasquez. 2003. "A Unified Explanation of Territorial Conflict: Testing the Impact of Sampling Bias, 1919-1992." International Studies Quarterly 47, 2: 275-298.
Zweig, David and Bi Jianhai. 2005. "China's Global Hunt for Energy." Foreign Affairs, Vol. 84, No. 5, pp. 25-38.
{ "redpajama_set_name": "RedPajamaC4" }
6,587
\section*{Features of the End-Devonian Extinction} The late Devonian biodiversity crisis is characterized by a protracted decline in speciation rate occurring over millions of years~\cite{Stigall2020,Fan2020}, punctuated by an extinction pulse (Kellwasser event) followed $\sim 10$ Myr later by a more moderate extinction (Hangenberg event) around the Devonian-Carboniferous boundary (DCB) $\sim 359$~Myr ago~\cite{Kaiser2015,Bonda2017}. Marshall et al.~\cite{Marshall2020} recently suggested that the Hangenberg event was associated with ozone depletion (see also~\cite{Cockell1999}), in light of evidence such as malformations persisting in palynological assemblages on the order of many thousands of years. Ref.~\cite{Racki2020} argued that volcanic eruption and a large igneous province (LIP) triggered ozone depletion, whereas~\cite{Marshall2020} instead linked it to an episode of global warming not caused by LIP. Previous work has not considered astrophysical sources of ionizing radiation, which are known to be possible causes of ozone depletion and concomitant UV-B increase that could trigger elevated extinction levels (see, e.g.,~\cite{Melott2011}), {as well as direct genetic damage}. Here we consider whether astrophysical sources could account for the data in \cite{Marshall2020}, and whether any additional evidence could test for their occurrence. The precise patterns prevalent during the DCB are complicated by several factors including difficulties in stratigraphic correlation within and between marine and terrestrial settings and the overall paucity of plant remains~\cite{Prestianni2016}. 
However, a general consensus seems to be emerging that there was first a loss of diversity in spores and pollen followed after about 300~kyr~\cite{Myrow2014} by a pulse of extinctions of many plants including proto-trees, armored fish, trilobites, ammonites, conodonts, chitinozoans and acritarchs, possibly coeval with the Hangenberg Crisis; this seems to have largely left intact sharks, bony fish and tetrapods with five fingers and toes. The fact that these species disappeared over multiple beds indicates that the extinction extended over at least thousands of years. Refs.~\cite{Filipiak2010,Prestianni2016,Marshall2020} also report the discovery of spores from this episode with distinct morphologies including malformed spines and dark pigmented walls, features consistent with severely deteriorating environmental conditions, and UV-B damage following destruction of the ozone layer~\cite{Filipiak2010}. However, more quantitative data are needed to study their variation during quiescent times in the fossil record. \section*{Heating Mechanism for Ozone Depletion} Ref.~\cite{Marshall2020} proposes an ozone depletion mechanism involving increased water vapor in the lower stratosphere caused by enhanced convection due to higher surface temperatures. Water vapor contributes to a catalytic cycle that converts inorganic chlorine (primarily HCl and ClONO$_{2}$) to free radical form (ClO). The ClO then participates in an ozone-destroying catalytic cycle. A similar set of cycles involving Br contributes to ozone depletion, but to a lesser extent~\cite{Anderson2012}. Increased ClO and decreased ozone following convective injection of water into the lower stratosphere has been verified by observation and modeling~\cite{Anderson2012,Anderson2017}. Ref.~\cite{Marshall2020} argues that a period of exceptional and sustained warming would lead to the loss of the protective ozone layer via this mechanism. 
This mechanism is important for lower stratosphere ozone depletion, and may have consequences for ground-level UV-B exposure~\cite{Anderson2012}. More detailed study is warranted. Until then, it is unclear whether this change would be sufficient to cause an extinction. There are several reasons for this. First, the vertical extent of this ozone depletion mechanism should be limited to the lower stratosphere ($\sim 12-18$ km altitude) and does not overlap with the largest concentration of ozone, which occurs around 20-30 km. So, while depletion may be significant in the lower stratosphere, the bulk of the ozone layer lies above this region and would not be affected. The total column density would be reduced, but not to the extent of a complete loss of the protective ozone layer. Secondly, the duration of the effect should be relatively short, $\lesssim 1 \ \rm week$~\cite{Anderson2012}, since the injected water vapor is photolyzed and ClO is converted back to HCl and ClONO$_2$. Thus, unless convective transport of water vapor to the lower stratosphere, e.g., by storms, is continuous (on week timescales) the ozone reduction will be episodic, not sustained. The effect is also seasonal, since strongly convective storms tend to be limited to the spring/summer. While this is likely detrimental to surface life, most organisms have repair mechanisms that can cope with some short-duration UV-B exposure. Thirdly, the effect is likely to be limited geographically, since strongly convective storms are not uniformly distributed and the enhanced water vapor is likely only to spread over $\sim 100$~km horizontally~\cite{Anderson2012}. Finally, there is significant uncertainty as to the ozone depletion level needed to induce aberrations in pollen morphology and even more critically, large-scale extinction. While the anthropogenic ozone ``hole'' over Antarctica has led to increased UV-B exposure, no crash in the ecosystem has resulted. 
This may partly be due to the seasonal nature of the change, as would be the case here as well. Recent work~\cite{Neale2016} has shown that short-term exposure to significant increases in UV-B does not result in large negative impacts on the primary productivity of ocean phytoplankton, and other organisms show a wide range of sensitivity~\cite{Thomas2015,Thomas2018}. The amount of column depletion over a given location in those cases was $\sim 50\%$. The depletion caused by the mechanism considered in~\cite{Marshall2020} seems unlikely to be that large. Hence, the convective transport of water vapor to the lower stratosphere may not be sufficient to induce a substantial extinction. It is thus worth considering other mechanisms for global ozone depletion. \section*{Astrophysical Agents of Ozone Destruction and Biosphere Damage} Astrophysical mechanisms for biosphere damage include bolide impacts, solar proton events, supernova (SN) explosions, gamma-ray bursts, and neutron star mergers (kilonovae). Bolide impacts, gamma-ray bursts and solar proton events are essentially impulsive, and recovery of the ozone layer takes $\lesssim 10 \ \rm yr$~\cite{Thomas2005}, which is likely to avert lasting biosphere destruction. Moreover, these events and kilonovae are unlikely to recur frequently. Accordingly, we focus on SNe. Supernovae (SNe) are prompt sources of ionizing photons: extreme UV, X-rays, and gamma rays. Over longer timescales, the blast collides with surrounding gas, forming a shock that drives particle acceleration. In this way, SNe produce cosmic rays, i.e., atomic nuclei accelerated to high energies. These charged particles are magnetically confined inside the SN remnant, and are expected to bathe the Earth for $\sim 100 \ \rm kyr$. The cosmic-ray intensity would be high enough to deplete the ozone layer and induce UV-B damage for thousands of years~\cite{Ruderman1974, Ellis1993, Gehrels2003, Melott2017b}. 
In contrast to the episodic, seasonal, and geographically limited ozone depletion expected from enhanced convection, ozone depletion following a SN is long-lived and global (see, e.g.,~\cite{Gehrels2003, Melott2017b, Thomas2018}) and is therefore much more likely to lead to an extinction event, even given uncertainties around the level of depletion necessary. (We note that, as well as the induced UV-B damage, cosmic rays could also cause radiation damage via muons produced when they impact the atmosphere~\cite{Melott2017a}). The SN blast itself is unlikely to wreak significant damage on the biosphere, but may deposit detectable long-lived nuclear isotopes that could provide distinctive signatures, as we discuss later. There are two main types of SNe: (1) massive stars ($\stackrel{>}{\sim} 8 M_\odot$) that explode as core-collapse SNe (CCSNe), and (2) white dwarfs that accrete from binary companions and explode as Type Ia SNe. These SN types have similar explosion energies, and both produce ionizing radiation able to damage the biosphere. However, their different nucleosynthesis outputs lead to different radioisotope signatures. Near-Earth CCSNe are more likely than Type Ia SNe. We estimate the nearby CCSN frequency using a Galactic rate ${\cal R}_{\rm CCSN}=(30 \ \rm yr)^{-1}$ and placing the Sun at a radius $R_\odot=8.7 \ \rm kpc$ in a thin disk of scale radius 2.9~kpc and height 0.1~kpc \cite{Adams2013}. This gives a CCSN rate ${\cal R}_{\rm SN} = {\cal R}_{\rm CCSN} \, e^{-R_\odot/R_0} \, r^3/(3 R_0^2 h_0) \sim 4 \ r_{20}^3 \ \rm Gyr^{-1}$ within $r_{20} = r/20 \ \rm pc$ from Earth. Hence a CCSN at a distance $\sim 2$ times the ``kill radius'' of 10~pc is a plausible origin of the end-Devonian event(s). In contrast, the Type Ia SN rate is an order of magnitude smaller, as these events are spread over the $\sim 8$ times larger volume of the thick disk. Massive stars are usually born in clusters (OB associations), and are usually in binaries with other massive stars.
Thus, if one CCSN occurred near the DCB, likely there were others. This could explain the Kellwasser and other enigmatic Devonian events, in addition to the Hangenberg event. \section*{Possible Radioisotope Signatures of Supernovae} A CCSN close enough to cause a significant extinction would also deliver SN debris to the Earth as dust grains--micron or sub-micron sized particles created early after the explosion. Grains in the explosion would decouple from the plasma (gas) and propagate in the magnetized SN remnant until they are stopped or destroyed by sputtering during collisions \cite{Fields2019}. The portion that reaches the Earth would deposit in the atmosphere live (undecayed) radioactive isotopes. There is very little pre-existing background for radioisotopes whose lifetimes are much shorter than the age of the Earth. Those with lifetimes comparable to the time since the event would provide suitable signatures. The discoveries of live \fe60 in the deep ocean, the lunar regolith and Antarctic snow provide one such signal, which is interpreted as due to at least one recent nearby CCSN 2--3 Myr ago at a distance $\sim 50-100$ pc, which is compatible with the rate estimate given above \cite{Fields2019}. Possible relic SN radioisotopes from the end-Devonian period with an age 360~Ma include \sm146 (half-life 103~Myr), \u235 (half-life 704~Myr) and \pu244 (half-life 80.0~Myr). The most promising signature may be provided by \pu244, which has also been discovered in deep-ocean crust and sediment samples deposited over the last 25~Myr~\cite{Wallner2015}. Moreover, it is absorbed into bones and retained during life~\cite{Takizawa1982}, whereas uranium is absorbed during fossilization~\cite{Koul1979} and \sm146 is soluble. 
There is a significant \u235 background surviving from before the formation of the Solar System, with $(\u235/\u238)_{\oplus} = 0.721 \pm 0.001\%$, so a significant detection above this background requires deposition attaining $\u235_{\rm SN}/\u238_{\oplus} \stackrel{>}{\sim} 3 \times 10^{-5}$. U-Pb dating has been used to date the end-Devonian extinction, but with an uncertainty in the \u235/\u238 ratio that is much larger than this target sensitivity. However, even a few atoms of non-anthropogenic \pu244 in end-Devonian fossils would be unambiguous evidence for the {\em r}-process in SNe. We have estimated the terrestrial deposition of \sm146, \u235 and \pu244 by a nearby SN. \sm146 is a proton-rich (``{\em p}-process'') nucleus that might be produced by CCSNe or Type Ia SNe~\cite{Arnould2003}. Models for the {\em p}-process~\cite{Arnould2003} give $\sm146/\sm144 \sim 0.01-2.5$, with the predicted core-collapse abundance typically around 0.2. Assuming a CCSN that produced a solar $\sm144/\o16$ ratio, and ejected $M_{\rm ej}(\o16)=2M_\odot$, we estimate a total yield of \sm146 in the ejecta of ${\cal N}(\sm146) \sim 1.6 \times 10^{47}$ atoms. On the other hand, \pu244 and \u235 are neutron-rich nuclei that are made by the rapid capture of neutrons, the {\em r}-process, whose astrophysical sites are uncertain. There is evidence that kilonovae make at least {\em some} of the lighter {\em r}-process nuclei~\cite{GBM:2017lvd}, but it is uncertain whether these events make the heavier nuclei of interest here. Assuming that CCSNe are the dominant {\em r}-process sites, we estimate yields of ${\cal N}(\u235,\pu244) \sim (3, 1.6) \times 10^{47}$ atoms per explosion. The journey of SN-produced radioisotopes from explosion to seafloor is complex. Ejecta in dust most readily reaches the Earth \cite{Fry2015}. The fraction of atoms in dust $f_{\rm dust}$ should be high for the refractory species of interest.
Due to their high speeds, SN dust grains will easily overcome the solar wind and reach Earth \cite{Fry2016}. The fallout on Earth favors deposition at mid-latitudes; additional dispersion occurs due to ocean currents \cite{Fry2016}. The global-average surface density of isotope $i$ with half-life $t_{1/2}$ is $N_i = f_{\rm dust} {\cal N}_{{\rm ej},i} 2^{-t/t_{1/2}}/(16 \pi r^2)$ \cite{Fry2015}, with $t$ the time since the explosion. We thus find global-averaged end-Devonian surface densities of SN material \begin{equation} N(\sm146,\u235,\pu244) \sim f_{\rm dust} (1,9,0.3) \times 10^{5} {\rm atoms/cm^2} \ r_{20}^{-2} \nonumber \end{equation} after including the decay factors for each species. Unfortunately, this estimate implies a ratio of SN-produced \u235 to the background level in the Earth's crust of ${\cal O}(10^{-10})$, which is undetectably small. On the other hand, there is no natural background to the prospective \pu244 signal, which may be detectable in fossiliferous material. Its detectability depends on the temporal resolution of the available geological sample, whereas the possible detectability of the prospective \sm146 signal depends also on the degree of dilution due to its solubility. Finally, if more than one SN occurred before the DCB, then each of these could deposit radioisotope signals. \section*{Other Tests for Supernovae} Some hundreds or thousands of years after the optical and ionizing outburst, the cosmic-ray and dust bombardment of the Earth would begin, with several possible effects. Cosmic-ray ionization of the atmosphere and accompanying electron cascades may lead to more frequent lightning, increased nitrate deposition, and wildfires \cite{Melott2019}. The increased nitrate flux might have led to CO$_2$ drawdown via its fertilization effect~\cite{Melott2018}, thereby cooling the climate.
There is evidence for cooling during the first stage of the DCB, though this occurred an estimated 300~kyr before the radiation damage attested by the data on pollen and spores~\cite{Marshall2020}. Any increases in soot and carbon deposits during the end-Devonian could have been generated by increases in wildfires \cite{Melott2019}. Cosmic rays striking the atmosphere produce energetic muons that can penetrate matter to a much larger depth than UV-B radiation. The radiation dose due to muons at the Earth's surface~\cite{Thomas2016} and in the oceans at depths $\lesssim 1 \ \rm km$~\cite{Melott2017a} could exceed for many years the current total radiation dose at the Earth's surface from all sources. Therefore, in addition to comparing the effects of muons and UV-B radiation at or near the surface, they could be considered in end-Devonian extinctions of megafauna living at depth. Finally, if there was one CCSN at the DCB, there may have been more, which may have been responsible for the Kellwasser and additional events. These could show evidence for ozone depletion and the other signatures above. \acknow{This work was partially supported by grants from the UK STFC and the Estonian Research Council.} \showacknow{}
{ "redpajama_set_name": "RedPajamaArXiv" }
4,001
Zinsmeister Ridge is a mountain peak in Antarctica. It lies in West Antarctica. Chile claims the area. The summit of Zinsmeister Ridge is meters above sea level. The terrain around Zinsmeister Ridge is mountainous to the south, but hilly to the north. The highest point nearby is Schoening Peak, meters above sea level, kilometers southwest of Zinsmeister Ridge. The area is uninhabited. There are no settlements nearby. Comments Sources Mountains of West Antarctica Chilean territorial claims in Antarctica Mountains of Antarctica 3000 meters above sea level or higher
{ "redpajama_set_name": "RedPajamaWikipedia" }
4,802
Q: How to streamline Windows (2008R2) Updates Yesterday I babysat a Windows 2008 R2 server for three hours applying multiple rounds of Windows Updates. This is not a good use of my time. This is a server that cannot apply updates automatically; I have to schedule maintenance windows in order to reboot the server. I want to bring Windows completely up to date during that maintenance window, which often means multiple rounds of a check-for-updates/download-updates/apply-updates/reboot cycle. I know that I can download updates and apply them manually, but that doesn't really help when Windows determines after the first round of updates that it needs to apply even more updates. The biggest problem is that it can easily take ten minutes for Windows to determine what updates it needs to apply. It can then take easily another ten minutes to download these updates. Then it can take ten minutes to reboot and finish the update installation. If it decides that you need three rounds of updates, that's an hour and a half, and that assumes that Windows doesn't freak out and make one of those steps take forever, or fail, making you start over. Is there any way to determine all of the updates that would be needed to bring a Windows system up to date beforehand, so that I can just apply them all by hand and avoid having to wait for Windows to perform its interminably long checks and ridiculously slow downloads multiple times during my maintenance window. (I feel like this should be a FAQ, but I can't find it.) A: WSUS. Just installed it myself a few weeks ago and it's making my life much easier. A: Is there any way to determine all of the updates that would be needed to bring a Windows system up to date beforehand, so that I can just apply them all by hand and avoid having to wait for Windows to perform its interminably long checks and ridiculously slow downloads multiple times during my maintenance window. As far as I know, no, not really. 
The core problem is that Windows Update doesn't flag an update as needed if a prerequisite update is missing. It does this to prevent attempting to install an update whose prerequisite has not yet been met, which could lead to a system that won't even boot. It's potentially very complicated to generate a list of missing updates that accounts for prerequisites, superseded updates, and so on across all software installed on a system, including software that will be installed by each patch or update in the chain. About the best you can do is use a patch management system like WSUS to see that a server doesn't have certain updates installed. If you see a server missing monthly updates from the last six months, you know it might need multiple restarts if the same system was patched multiple times or upgraded and then patched. Note that if you wait long enough, you'll find the same issue to be true with Linux updates as well. The best way to avoid the situation is to patch often.
{ "redpajama_set_name": "RedPajamaStackExchange" }
8,891
#include <sys/types.h>
#include <sys/socket.h>
#include <unistd.h>
#include <netdb.h>
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#include <string>

#include "csjp_listener.h"

namespace csjp {

/* Opens a non-blocking TCP socket bound to ip:port and puts it into
 * listening state. SO_REUSEADDR lets the address be rebound promptly
 * after a restart. */
Listener::Listener(const char * ip, unsigned port,
		unsigned incomingConnectionQueueLength) :
	Socket()
{
	address.sin_family = AF_INET;
	address.sin_addr.s_addr = INADDR_ANY;
	address.sin_port = htons(port);
	if(inet_pton(AF_INET, ip, &address.sin_addr.s_addr) != 1)
		throw SocketError("Failed to parse or convert '%' to binary "
				"ip number.", ip);

	file = socket(AF_INET, SOCK_STREAM | SOCK_NONBLOCK, 0);
	if(file < 0)
		throw SocketError(errno, "Can not open listener socket "
				"to bind to port %.", port);

	int ov = 1;
	if(setsockopt(file, SOL_SOCKET, SO_REUSEADDR, &ov, sizeof(ov)) == -1){
		int errNo = errno; /* save before close() can clobber it */
		close(false);
		throw SocketError(errNo,
				"Failed to set socket option SO_REUSEADDR.");
	}

	if(bind(file, (struct sockaddr *)&address, sizeof(address)) < 0){
		int errNo = errno;
		close(false);
		throw SocketError(errNo, "Can not bind socket to port %.",
				port);
	}

	if(::listen(file, incomingConnectionQueueLength) < 0){
		int errNo = errno;
		close();
		throw SocketError(errNo, "Listening error on port %.", port);
	}
}

}
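For reference, the same open → SO_REUSEADDR → bind → listen sequence can be exercised standalone. A minimal sketch without the csjp wrapper classes (the `make_listener` name and the -1-on-failure error handling are illustrative, not part of the library); port 0 asks the kernel for an ephemeral port, so no privileges are needed:

```cpp
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstring>

// Same sequence as the constructor above, as a free function returning
// the listening fd, or -1 on failure (with the socket closed).
int make_listener(const char * ip, unsigned short port, int backlog)
{
	sockaddr_in address;
	std::memset(&address, 0, sizeof(address));
	address.sin_family = AF_INET;
	address.sin_port = htons(port);
	if(inet_pton(AF_INET, ip, &address.sin_addr.s_addr) != 1)
		return -1;

	int fd = socket(AF_INET, SOCK_STREAM | SOCK_NONBLOCK, 0);
	if(fd < 0)
		return -1;

	int ov = 1;
	if(setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &ov, sizeof(ov)) == -1 ||
	   bind(fd, (struct sockaddr *)&address, sizeof(address)) < 0 ||
	   listen(fd, backlog) < 0){
		close(fd);
		return -1;
	}
	return fd;
}
```

In the library version, the same failures surface as `SocketError` exceptions carrying errno instead of a -1 return.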
{ "redpajama_set_name": "RedPajamaGithub" }
3,288
After approval by the Interreg Baltic Sea Region programme Monitoring Committee, the project Biomarker Commercialization (BiC) kicks off 19 October in Tallinn. The day before, BiC will be presented to a larger audience at ScanBalt Forum, likewise in Tallinn. The goal of Biomarker Commercialization is to increase the commercial potential of biomarker discoveries. The project is coordinated by Valerie Daussin Laurent of Ideklinikken in the North Denmark Region. On the photo: Valerie Daussin Laurent, Business Developer at Ideklinikken. The project consortium intends to involve industry earlier in the development and commercialization process of biomarkers, and to develop tools that guide research institutions in selecting the most relevant biomarker discoveries and in drawing up a development plan that meets early requirements from industry. Together, the partners want to define the downstream pathway from research, validation and development to the market, co-created with industry, in order to speed up the process and create successful spinouts. A BiC platform and the tools will remain transnationally available via ScanBalt Business Club beyond the project period.
{ "redpajama_set_name": "RedPajamaC4" }
5,371
Q: I can get the latest state in useEffect, so why include it in the dependency array? I understand that useEffect will run again if an item in its dependency array has changed, but isn't it true that I can get the latest state values without this? I.e., see below:

const { useState, useEffect } = React;

function App() {
  const [state1, setState1] = useState(0);
  const [state2, setState2] = useState(0);
  const [state3, setState3] = useState(0);

  useEffect(() => {
    console.log("State1", state1);
    console.log("State2", state2);
    console.log("State3", state3);
  }, [state1]);

  return (
    <div className="App">
      <h1>Hello CodeSandbox</h1>
      <h2>Start editing to see some magic happen!</h2>
      <button onClick={() => setState1((x) => x + 1)}>State1</button>
      <button onClick={() => setState2((x) => x + 1)}>State2</button>
      <button onClick={() => setState3((x) => x + 1)}>State3</button>
    </div>
  );
}

ReactDOM.render(<App />, document.getElementById('root'));

<script src="https://cdnjs.cloudflare.com/ajax/libs/react/16.8.4/umd/react.production.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/react-dom/16.8.4/umd/react-dom.production.min.js"></script>
<div id="root"></div>

So if I click the "State 2" button 5 times and then click the "State 1" button, which triggers the useEffect, will it still get the latest value for state2? In short, if I don't want a state change to trigger the useEffect, why do I have to include it?
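A: Not quite — the effect reads fresh values in your example only because every state update re-renders the component, and the State1 click then re-runs the effect with a brand-new closure. The dependency array matters when the effect's callback outlives its render (an interval, a subscription, an event listener): such a callback keeps the state values captured when it was created. A plain-JavaScript sketch of that capture behaviour (illustrative only — `makeEffect` is a stand-in, not React's API):

```javascript
// Each "render" builds a fresh closure over the state values current at
// that render, the way a useEffect callback captures its render's state.
function makeEffect(state1, state2) {
  return () => ({ state1, state2 });
}

// Render #1: the effect registers this callback (imagine setInterval).
const callbackFromRender1 = makeEffect(0, 0);

// state2 later becomes 5 and the component re-renders, but with state2
// omitted from the dependency array the effect does not re-run, so the
// interval keeps invoking the callback created at render #1:
const seen = callbackFromRender1();
console.log(seen); // { state1: 0, state2: 0 } — state2 is stale here
```

This is why the exhaustive-deps lint rule asks for every value the effect references: either include it (and let the effect re-run), or read it through a ref or a functional update instead.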
{ "redpajama_set_name": "RedPajamaStackExchange" }
7,884
A comprehensive four-year-old Preschool program has been created to challenge students to develop their skills in the intellectual, spiritual, physical, and social realms. Along with Bible, Phonics, Reading, Language Development, Writing, Numbers, Science, and History, we utilize interest centers, units, and theme topics in a combination of group and individual instruction. The ABeka curriculum is used to give a more traditional approach to learning, along with other resources to enhance learning and retention of foundational academic and spiritual knowledge. An emphasis is also placed on social interaction among the students, giving them opportunities to develop social values, sharing, group acceptance, independence, and dependability.

The purpose of cursive writing in Four Year Old Kindergarten is to develop good writing habits from the very beginning. The more traditional approach to writing strengthens the child's reading skills. By joining letters, cursive writing reinforces the blending of sounds within words. Students learn cursive writing in a program that is correlated with their phonics. Writing also teaches character through learning to be careful, orderly, and neat (Ecclesiastes 9:10). Children will acquire a foundation for learning the basic cursive writing strokes. These skills will be perfected as students go on into 5K, first and second grade.

Bible time is the most important portion of our four-year-old school day and, by its nature, it is the most interesting. The Voyage Exploring God's World materials, along with the ABeka Book visual cards, give us a successful method of presenting the Bible through the telling of stories of the Old and New Testament. The Bible lessons flow from the Word of God through the heart, soul, and mind of the child. Bible time includes singing of hymns and choruses, prayer time, reciting memory verses, and the actual Bible lesson.

The purpose of the 4K Science program is to encourage our students to grow in knowledge about the world God created for them. They will see how God has a purpose and a plan for every living thing He created (Jeremiah 29:11). This will instill in our students a love and an awe of the power of God in the world we live in (1 John 1:19).

Students in 4K will learn to recognize and understand the concepts of numbers as they begin their journey in mathematics. Included in the various aspects of this program, the children will learn to count from 1 to 100, recognize 1-20, distinguish before and after numbers, and answer simple combinations. A variety of manipulatives, finger plays, games, and songs are used to motivate the children, who need to see and handle as well as hear what is being taught so they can move from concrete ideas to the more abstract concepts they will encounter in higher grades. The various materials supplied provide strong visual support for the concepts.

A skills development time is set aside each day to strengthen listening, fine motor coordination, eye-hand coordination, visual perception, and writing skills. The children enjoy a wide variety of activities such as free art, stencil art, books, and manipulatives such as building blocks, puzzles, lacing cards, beads, etc.

To understand the concepts of numbers and their families.

The purpose of the four-year-old language development program is to provide a rich language experience that includes auditory, visual, and linguistic development. The phonics and reading learned in 4K lay the foundation as the students expand their knowledge in many areas. The students will accomplish this through various means, including whole group, small groups, individual instruction, and learning centers. In addition, learning games and hands-on activities motivate the children and make learning enjoyable and fun. Students will use simple, logical phonics to begin their journey to a lifelong love of reading.

They will master the sound and recognition of the vowels and consonants and then progress to forming blends and reading simple words. Language development time is also integrated with interesting topics about animals, people, and places that will encourage the children to think and strengthen their vocabulary and language skills. Poetry, finger plays, and music are also a valuable part of the 4K curriculum, not only to entertain but to teach through play. They are used to teach basic literature skills, aid motor control and observation skills, and help the memory span.

– reading sentences on ckbd.
Ferrite, or alpha iron (α-Fe), is a materials science term for pure iron with a body-centred cubic crystal structure. It is this crystal structure that gives steel and cast iron their magnetic properties, and it is the classic example of a ferromagnetic material. Ferrite has a Young's modulus of 280 N/mm² and a hardness of approximately 80 Brinell.

Mild steel (carbon steel with up to about 0.2 wt% C) consists mostly of ferrite, with increasing amounts of pearlite (a fine lamellar structure of ferrite and cementite). Since both bainite and pearlite contain ferrite in their composition, any iron-carbon alloy will contain some amount of ferrite if it is allowed to reach equilibrium at room temperature. The exact amount of ferrite depends on the cooling process the iron-carbon alloy is subjected to.

In pure iron, ferrite is stable below 910 °C (1,670 °F). Above this temperature, the face-centred cubic allotrope of iron (austenite, or gamma iron) is stable. Above 1,390 °C (2,530 °F), up to the melting point of 1,539 °C (2,802 °F), the body-centred cubic crystal structure is again the most stable form, as delta ferrite (δ-Fe). Ferrite above the critical (Curie) temperature of 771 °C (1,044 K; 1,420 °F) is paramagnetic rather than ferromagnetic, and was called beta ferrite or beta iron (β-Fe). The term beta iron is no longer used, because it is crystallographically identical to α-Fe (its phase field is even contiguous with it).

Only a small amount of carbon can be dissolved in ferrite; the maximum solubility is approximately 0.008 wt% at 723 °C (1,333 °F) and 0.001 wt% carbon at 0 °C (32 °F). This is because carbon dissolves in iron interstitially, with the carbon atoms being approximately twice the diameter of the interstitial "holes", so that each carbon atom is surrounded by a strong local strain field.

Thus the enthalpy of mixing is positive (the reaction is non-spontaneous), but the entropy contribution to the free energy of the solution stabilises the structure at low carbon contents. 723 °C (1,333 °F) is also the minimum temperature at which iron-carbon austenite (0.8 wt% C) is stable; at this temperature a eutectoid reaction occurs between ferrite, austenite and cementite.
- Bulk Carrier Operator, Chief Engineer Caught Dumping Oily Bilge Water at Sea – Despite several successful cases brought by the United States against vessel operators and crew, oily bilge water continues to be illegally dumped at sea. In …
- Greek Ship Owner and Operator Plead Guilty to Environmental and Safety Crimes in U.S. – The Greek owner and operator of a bulk carrier are facing a criminal penalty of $1 million each after pleading guilty in U.S. federal court on Tuesday to …
- Bulk Carrier Chief Engineer Pleads Guilty to Illegal Discharges, Cover-Up – The Chief Engineer of a foreign-flagged bulk carrier has pleaded guilty in U.S. court to two felony counts for deliberately discharging oil-contaminated bilge …
- Bulk Carrier Busted for Dumping Garbage at Great Barrier Reef – Australia has fined the chief officer and operator of a Liberian-flagged bulk carrier for dumping garbage into the Great Barrier Reef Marine Park. An …
- Singapore Shipping Company Fined $12 Million Over Illegal Oily Water Discharges – Singaporean shipping company Pacific Carriers Limited has been fined $12 million in U.S. federal court after pleading guilty to concealing illegal discharges …
- Greek Shipping Companies to Pay $4 Million Fine for Illegal Discharges in Texas – Both the vessel Master and operator company previously admitted to lying to U.S. Coast Guard investigators. Two Greek shipping companies have …
- Shipping Company to Pay $1 Million Over Illegal Dumping in Great Lakes – A German shipping company has pleaded guilty and will pay $1 million for covering up illegal dumping of oily waste water from one of its vessels into the Great …
- South Korean Shipping Company Fined $950,000 in Hawaii Over Illegal Discharges – A federal judge in Hawaii has sentenced a South Korean shipping company to pay a total of $950,000 for its failure to maintain an accurate oil record …
- Car Carrier Chief Engineer Pleads Guilty in 'Magic Pipe' Case – The chief engineer of a NYK Line car carrier has pleaded guilty in federal court in Baltimore, Maryland, to obstruction of justice and violating the Act to …
- New Zealand Fishing Company Fined $2.4 Million, Chief Engineer Jailed for Environmental Crimes – WASHINGTON – A New Zealand fishing company that owned and operated the tuna fishing vessel San Nikunau, and a former chief engineer on the ship, were …
- Bilge Dump CSI: AIS Helps ID Vessel Responsible for Oil Slick Off Angola – On April 6th, satellite-image-monitoring environmental group SkyTruth identified a 92-mile slick off Congo and Angola captured in the above photo that was …
- U.S. Charges New Zealand Fishing Company Alleging Environmental Crimes and Obstruction of Justice – A New Zealand fishing company has been charged by the U.S. Department of Justice for violating environmental laws and obstruction of justice. The case is the …
- Puerto Rican shipping company fined in pollution cover-up case – Puerto Rican-based Epps Shipping Company was sentenced today in federal court to pay a $700,000 fine for violation of the Act to Prevent Pollution from Ships …
- Greek ship operator pleads guilty in Texas spill case – Company Has Agreed to Pay $900,000 Monetary Penalty. (WASHINGTON) – A ship management company headquartered in Greece that operated a 29,414-ton …
- Chief Engineer Convicted in Pollution Case – In the latest criminal proceedings related to marine pollution and the use of "magic pipes", the Chief Engineer aboard an American-flagged …
The Beginnings of ASSYRIA and BABYLONIA

UR, which had annexed the towns of Suse and Assur, had just fallen in 2004 BC. The Elamites had burned the city, and then installed a garrison there. It was overpowered, and they were driven out in 1998 BC by the man who should have saved it, the perjurer ISBI-ERRA, who then annexed UR with its kingdom of Isin. On his arrival, Isbi-Erra started by rebuilding the city and the ruined temples of UR. Then he declared himself "KING of Ur, Sumer and Akkad." There were three more dynasties to exert power over Mesopotamia while waiting for the birth of a new kingdom, the future great "Babylon"...

The Dynasty of Isin: Isbi-Erra (2017 to 1985 BC), Iddin-Dagan (1974 to 1954 BC), Ishme-Dagan (1953 to 1935 BC), Lipit-Ishtar (reigned 1934 to 1924 BC)
The Dynasty of Larsa: Emisum (reigned 2004 to 1977 BC), Samium (1976 to 1942 BC), Zabaia (1941 to 1933 BC), Gungunum
The Dynasty of Assyria: Puzur-Ashur 1st, Shallim-Ahhe, Ilushuma, Erikshum 1st (from 1906 to 1867 BC)

The children of Isbi-Erra worshiped DAGAN, the god of corn, venerated at Mari and Ebla. Lipit-Ishtar was also the author of a code of justice of forty articles (which preceded that of Hammurabi), but at the end of his reign he was conquered by GUNGUNUM, king of Larsa, who took the towns of Ur, Suse and Lagash. In 1900 BC, terrible disorder broke out, and the Bedouins infiltrated the country of Sumer. These AMORITE (Syrian) tribes benefitted from the wars in that they gained the kingdoms of Isin and Larsa, and seized the towns of Ilip, Marad, Malgûm, Mashkan-shapir and Uruk... In 1894 BC, an Amorite prince called SUMU-ABUM took advantage of the anarchy and settled in a small town of no political importance named "Babilim", which in Akkadian means "the gate of the god", and in Greek "BABYLON".
Not to draw attention to himself, he continued the worship of a small local god, a secondary divinity of the family of Enki, named MARDUK (or Amar-UTU), the servant of the protective god Shamash of Sippar. Marduk was soon going to replace the great god ENLIL, and become the god of power, war, sex and domination!

Kingdom of Isin: Damik-Ilishu (from 1816 to 1794 BC)
Kingdom of Larsa: Rim-Sin
Kingdom of Babylon: Sin-Muballit, then HAMMURABI, King of BABYLON from 1792 to 1750 BC

Hammurabi seized Isin, Uruk, Ur (1787 BC), Malgum (1784 BC), and Larsa (1763 BC)! Then came the fall of Mari and its king Zimri-Lim, who had revolted against Hammurabi. The ramparts of Mari were torn down along with the palace, and thus the great capital was completely destroyed (1759 BC). During a flood in 1756 BC, Hammurabi consolidated his transitory empire and descended on the town of Assur...

HAMMURABI ruled with exceptional wisdom, protecting the poor, the widows and orphans, and in this man (who paid homage to the goddess Ishtar) one finds already a hint of the laws that Moses would give his people on leaving Egypt: "I proclaim the right in the country, to eliminate the bad and the perverted, so that the strong do not oppress the weak. I want to be to the people like the sun, illuminating all the country and giving happiness. I am the Savior whose sceptre is justice, giving benefit to all in the city. Thanks to the protective goddess Ishtar, they thrived, and thanks to my wisdom, I sheltered them..."

One knows very little about the priestesses of Ishtar; some regard them as very liberated women or as very rich businesswomen who bought and rented houses, vines and land. Others believe them to have been priestesses, singing hymns to the gods and giving them the sacrifices offered by the people. Sometimes people haunted by cruel demons were brought to them for healing and exorcising. They could even marry if they wished.

The famous gate of Ishtar of Babylon, by R. Koldewey.
The expansion of BABYLON and the attacks on the Tower of Babel

At the time of Nabuchadnesor II (604 to 562 BC) there were in Babylon approximately "1,200 temples and vaults" over an area of 850 hectares. "Babylon the Great" was well fortified, with eight access portals. Each of these gates was dedicated to a god or a protective goddess; the principal gate was dedicated to the goddess Ishtar. It was 25 meters high and had no fewer than 150 figures of dragons and bulls drawn and carved in relief. According to Herodotus, four sets of riders could pass each other on top of these walls without touching each other!

From this principal gate began the "Great Royal Way", a large avenue twenty meters wide, entirely paved in limestone flagstones and flanked by large, thick walls. This avenue crossed the whole city, leading directly to the central ziggurat: the famous "TOWER OF BABEL", probably built under king Hammurabi, who had also just built his large palace opposite the huge brick temple. Built in the manner of the large step pyramids, this immense square construction, 90 meters on each side, had seven floors. Each floor was connected to the next by elegant external staircases, giving access to the "900 vaults", of which 600 were dedicated to the celestial gods and 300 to the terrestrial divinities! They were richly decorated with precious stones, and there were many statues carved out of rare stones, gold, bronze or wood from the cedars of Lebanon.

Griffon of Babylon.

The Babylonians called this huge tower "Etemenanki", which means "the temple connecting the sky and earth...". According to Herodotus, the top floor was forbidden to pilgrims, and no statues were allowed there. It was empty, except for a bed and a table in solid gold, reserved for the national god Marduk, so that he could stay there each time he returned to Earth.
When the Jews were exiled in Babylon under Nabuchadnesor II (who had burned and destroyed their temple of Solomon), they regarded this tower as a provocation and cursed it: the men who had built it would be made unable to understand one another's languages, leaving it unfinished (!). The tower was partially destroyed by the Assyrian Sennacherib (704 to 681 BC), but was rebuilt by the Assyrian Assarhadon (680 to 669 BC) and his son Assurbanipal (668 to 627 BC). It was entirely restored by Nabopolassar and his son Nabuchadnesor II (604 to 562 BC), employing many slaves (in particular Jews). Despite the fact that Nabuchadnesor had destroyed their temple, they were forced to rebuild the old Tower of Babel! Fortunately their slavery ceased in 539 BC, when Babylon was taken by the Persian king Cyrus.

The Tower of Babel was again destroyed in 479 BC by Xerxes 1st (485 to 465 BC). It has completely disappeared now, except for the ditch dug by the plunderers around the foundations to extract the last bricks. Even though Alexander the Great used more than 300,000 slaves, he gave up much of the work of restoration. But Babylon also had another treasure, which appeared among the "seven wonders of the world": the famous "Hanging Gardens of Babylon", where the most beautiful plants of the East grew. An irrigation system of fountains fed all the flowering plants. It was like a mirage in the desert, where one could believe that a fine rain was always falling from the sky on to a corner of paradise!

The Birth of the Hittite Empire

Towards 1650 BC, a first wave of Indo-Europeans arrived from the Balkans and settled in Anatolia (present-day Turkey). An intelligent people, they had already mastered the techniques of working iron, raising horses and using chariots. Hattousil 1st called his new capital "Hattusas" (or "Hattousas"). Even though the Hittites gave little importance to religion, they adopted INDRA, the Vedic god of lightning, adding their own sun god, MITRA.
As in Egypt, their king was both the Supreme Judge and High Priest.

The Height of the Hittite Empire

Around 1620 BC, Hattousil 1st disowned his nephew and named his grandson Moursil as his successor. From 1620 to 1590 BC, Mursil 1st reigned. He ransacked and destroyed the town of Alep in Syria, and then in 1595 BC seized the ramparts of Babylon, terrifying the inhabitants. He set out again, carrying away the large gold statue of Marduk. At about 1590 BC, Hantili succeeded Moursil 1st. From 1530 to 1500 BC, Telepinou reigned.

A new kingdom of Mitanni (the former Hourrites) had been created in 1560 BC. It started by overpowering and occupying Armenia, then Syria and Assyria (in 1460 BC). Mitanni is located below Armenia, north of the Tigris and the Euphrates. In 1480 BC, the Gasgas barbarians invaded the Hittite empire during the reign of Tudhalia II.

In about 1385 BC, SUPPILULIUMA 1st seized power and pushed back the invasion of the terrible Gasgas. He annexed Lebanon and Northern Syria (Karkemish). He made vassals of Amourrou and Mitanni, where he placed on the throne an exiled prince, Mattiwasa. The Egyptian king Akhenaton preferred peace, leaving king Doushratta (Tushratta) to drive back the invaders. Drunk with success, the proud Suppiluliuma was named the "Sun King". About 1365 BC, the Mitannian king Tushratta died whilst besieged in his capital Wasukana, preferring suicide to torture by the Assyrians.

In 1350 BC, Mursil II succeeded Suppiluliuma; his empire extended from the Black Sea to Syria, and he was able to subjugate the kingdom of Mitanni, so coveted by the Assyrians. At about 1340 BC, the widow of the Egyptian king Toutankhamon asked the Hittite king to send his son to be her husband. The young Hittite prince was to be assassinated on his journey there... In 1315 BC, the Hittite Muwatali succeeded Moursil II.

About 1295 BC, there was the battle of Kadesh between Ramses II and the Hittite king Muwatali, who invaded and ransacked the Egyptian camp. The Canaanites revolted against Egypt. In 1290 BC, Muwatali died, and there ensued a crisis of succession. Ramses II profited from this and again took Canaan and part of Syria. About 1282 BC, HATTUSIL III deposed his brother UHRI-TESHUB, and in 1278 BC negotiated an Egypto-Hittite treaty with Ramses II, sharing Syria and Lebanon between them. This peace would last 40 years...

Around 1200 BC, the Peoples of the Sea (Mouskis, the cruel Arameans, along with the Gasgas and Phrygians) destroyed Anatolia and the capital Hattousha (this was the end of the Hittite Empire). In 1193 BC and 1188 BC, Ramses III conducted several campaigns to keep back the invaders, some of whom were to become the Philistines, who settled along the coastal shores of the Mediterranean.

The Middle East in 1300 BC
The Mitolo Shiraz Master Mix Dozen MV

Style: Mixed cases – Red
Vintage: MV
Code: MSMM12
Varietal: Shiraz

Region: McLaren Vale

The McLaren Vale wine region is located between the Mount Lofty Ranges and the Gulf of St Vincent, just south of Adelaide in South Australia. Historically it was John Reynell, who established Chateau Reynella in 1838, and Thomas Hardy, who purchased the historic Tintara in 1876, that kick-started the development of the McLaren Vale wine industry. The climate is generally considered warm, although different aspects, altitude and the effect of the cooling ocean breezes moderate the climate within the different sub-regions of the Vale. A large number of soil types are present, although red-brown loams dominate. The biggest viticultural challenge is the potential for drought and the overall lack of water. Full bodied, robust and plush reds typify the McLaren Vale style. Plantings are dominated by Shiraz, Cabernet Sauvignon and Grenache, with smaller plantings of whites including Semillon and Chardonnay. More recent plantings of Mediterranean varieties particularly suited to the climate, including Sangiovese, Tempranillo, Mourvedre, Savagnin and Fiano, have also been very successful.

Winery: MITOLO WINES

Langton's Selections: Mitolo Savitar Shiraz, Mitolo Serpico Cabernet Sauvignon, Mitolo Reiver Shiraz

Mitolo has seemingly come from nowhere in no time. Yet it encapsulates what is possible in the world of fine wine. Sheer enthusiasm, brilliant timing, excellent resources and courage have propelled this enterprise into the big time. Frank Mitolo, the very youthful head of a thriving family agricultural concern, established the business in 1999. Ben Glaetzer, a highly competent and celebrated winemaker, was brought in as a partner.
With access to low yielding, old vine material and high quality fruit in both McLaren Vale and the Barossa, and exceptionally focussed winemaking experience, the wines have quickly established momentum in the ultra-fine wine market. Frank Mitolo's Italian heritage is proudly articulated, yet the quirky and off-beat names reveal a more international mafia-like flavour. The Latin mottos are hilarious. The red wines are all excellent with a common thread of intensity, richness and volume. The G.A.M. Shiraz (named after the first initials of the Mitolo children) is a Southern McLaren Vale Shiraz from a 20 year old Chinese Block Vineyard. The wine is matured in around 70% new French oak hogsheads. The Reiver Shiraz is named after 14th century Anglo-Scottish border marauders, 'The flag flies red and white – for bloodshed and righteousness and the honour of the family'. The wine is a plush Greenock-based Barossa Shiraz rather than a blood and guts style.
The Serpico Cabernet Sauvignon, named after a New York TV cop with a reputation for going against the grain, is a chocolaty Amarone style based on Willunga fruit. The flagship Savitar McLaren Vale Shiraz, named after some mythical dragon – 'His great tail swept the stars from the sky and flung them to the earth' – is a richly concentrated wine with supple tannins. No thuggery is happening in the wine. The Jester wines are early drinking styles and comprise a Sangiovese Rose, Shiraz and Cabernet Sauvignon. Mitolo straddles the cult wine scene and the mainstream fine wine markets. It is an important emerging producer with impressive credentials and an influential following. Andrew Caillard MW, Langton's
Q: Segmentation fault 11 when passing a capture block to a DispatchQueue

I am getting a segmentation fault on both Swift 4 and Swift 4.2 on a capture block for a DispatchQueue. I am trying to provide a capture block to the escaping parameter of a DispatchQueue to guarantee that the object passed to the called closure is alive when it will be called. The code looks something like this:

    import Foundation

    struct Response {
        let outcome: String?
    }

    enum NotEvenError: Error {
        case notEven
    }

    struct IsEvenService {
        typealias SuccessCallback = (Response) -> Void
        typealias FailureCallback = (Error?) -> Void

        func perform(success: @escaping SuccessCallback, failure: @escaping FailureCallback) {
            let number = Int.random(in: 0 ... 10)
            if number % 2 == 0 {
                let outcome = Response(outcome: "it is")
                success(outcome)
            } else {
                failure(NotEvenError.notEven)
            }
        }
    }

    func runTest(success: @escaping (Response) -> Void, failure: @escaping (Error?) -> Void) {
        let service = IsEvenService()
        service.perform(success: { (response) in
            if response.outcome != nil {
                DispatchQueue.main.async { [resp = response] in
                    success(resp)
                }
            }
        }, failure: { (error) in
            DispatchQueue.main.async { [err = error] in
                failure(err)
            }
        })
    }

The problematic part is the following:

    DispatchQueue.main.async { [err = error] in
        failure(err)
    }

This is a sample file that can be easily verified with a swiftc call in the command line.
Here is the outcome:

    Apple Swift version 4.2 (swiftlang-1000.11.37.1 clang-1000.11.45.1)
    Target: x86_64-apple-darwin17.7.0
    /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/swift -frontend -c -primary-file test.swift -target x86_64-apple-darwin17.7.0 -enable-objc-interop -sdk /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.14.sdk -color-diagnostics -module-name test -o /var/folders/fn/wh62twgj54180b7x1dx9j6080000gp/T/test-d0f7f4.o
    0  swift                    0x000000010720964a PrintStackTraceSignalHandler(void*) + 42
    1  swift                    0x0000000107208dfe SignalHandler(int) + 302
    2  libsystem_platform.dylib 0x00007fff7ac3ff5a _sigtramp + 26
    3  libsystem_platform.dylib 0x00007ffeec7e0000 _sigtramp + 1908015296
    4  swift                    0x0000000103eef293 swift::ASTVisitor<(anonymous namespace)::StmtEmitter, void, void, void, void, void, void>::visit(swift::Stmt*) + 6531
    5  swift                    0x0000000103ea826e swift::Lowering::SILGenFunction::emitFunction(swift::FuncDecl*) + 462
    6  swift                    0x0000000103e0be14 swift::Lowering::SILGenModule::emitFunction(swift::FuncDecl*)::$_1::operator()(swift::SILFunction*) const + 516
    7  swift                    0x0000000103e0b142 swift::Lowering::SILGenModule::emitFunction(swift::FuncDecl*) + 1042
    8  swift                    0x0000000103e1501b swift::Lowering::SILGenModule::emitSourceFile(swift::SourceFile*, unsigned int) + 939
    9  swift                    0x0000000103e16bd5 swift::SILModule::constructSIL(swift::ModuleDecl*, swift::SILOptions&, swift::FileUnit*, llvm::Optional<unsigned int>, bool) + 1333
    10 swift                    0x00000001034983fe performCompile(swift::CompilerInstance&, swift::CompilerInvocation&, llvm::ArrayRef<char const*>, int&, swift::FrontendObserver*, swift::UnifiedStatsReporter*) + 28990
    11 swift                    0x000000010348ddc5 swift::performFrontend(llvm::ArrayRef<char const*>, char const*, void*, swift::FrontendObserver*) + 7717
    12 swift                    0x0000000103433a35 main + 1349
    13 libdyld.dylib            0x00007fff7a931015 start + 1
    14 libdyld.dylib            0x000000000000000f start + 2238509051
    Stack dump:
    0. Program arguments: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/swift -frontend -c -primary-file test.swift -target x86_64-apple-darwin17.7.0 -enable-objc-interop -sdk /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.14.sdk -color-diagnostics -module-name test -o /var/folders/fn/wh62twgj54180b7x1dx9j6080000gp/T/test-d0f7f4.o
    1. While emitting SIL for 'runTest(success:failure:)' at test.swift:26:1
    2. While silgen emitFunction SIL function "@$S4test7runTest7success7failureyyAA8ResponseVc_ys5Error_pSgctF". for 'runTest(success:failure:)' at test.swift:26:1
    <unknown>:0: error: unable to execute command: Segmentation fault: 11
    <unknown>:0: error: compile command failed due to signal 11 (use -v to see invocation)

Is it safe to assume that if I do not pass a capture block, the object will be retained when the closure is executed? If I do the following, it compiles:

    let uselessTempError = error
    DispatchQueue.main.async { [err = uselessTempError] in
        failure(err)
    }

A: You don't need the [err = error] part. The following block will keep a strong reference to error:

    DispatchQueue.main.async {
        failure(error)
    }
# Photoelectric effect lab

In a photoelectric effect lab, an unknown metal surface was used as the target. The metal surface was irradiated with 500 nm visible light, which resulted in a lot of photoelectrons. The experimental value of the stopping potential was found to be 1.92 V. What was the energy of the incident photons? What was the work function of the metal surface?
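As a sanity check on the numbers, both quantities follow directly from E = hc/λ and φ = E − eV_stop. A quick sketch (the constant 1239.84 eV·nm for hc is a standard approximation):

```javascript
// Quick check of the photoelectric-effect numbers (energies in eV).
// E_photon = hc / lambda, with hc ≈ 1239.84 eV·nm; phi = E_photon - e*V_stop.
const HC_EV_NM = 1239.84;   // Planck constant times speed of light, in eV·nm
const lambda = 500;         // wavelength of the incident light, in nm
const vStop = 1.92;         // measured stopping potential, in volts

const photonEnergy = HC_EV_NM / lambda;     // ≈ 2.48 eV
const workFunction = photonEnergy - vStop;  // ≈ 0.56 eV

console.log(photonEnergy.toFixed(2), "eV photon energy");
console.log(workFunction.toFixed(2), "eV work function");
```

So the incident photons carry about 2.48 eV each, and the work function comes out to roughly 0.56 eV. (A work function that low would be unusual for a real metal, but it is what these measured values imply.)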
4 answers ##### An analytical chemist titrating 94.1 mL of 0.5800 M solution of propionic acid (HC2HsCO2 with 0.6500 M solution of KOH: The p K of propionic acid is 4.89, Calculate the pH of the acid solution after the chemist has added 57.0 mL of the KOH solution to it:Note for advanced students: you may assume the final volume equals the initial volume of the solution plus the volume of KOH solution added_Round vour answer to decimal places_ An analytical chemist titrating 94.1 mL of 0.5800 M solution of propionic acid (HC2HsCO2 with 0.6500 M solution of KOH: The p K of propionic acid is 4.89, Calculate the pH of the acid solution after the chemist has added 57.0 mL of the KOH solution to it: Note for advanced students: you may assume t... 5 answers ##### Corticosterolds willMultiple Choicereverse spasms of respiratory smooth musclesInhlbit the activity of lymphocytesrelleve inflammatory symptomsblock synthesis of leukotrienesblnd t0 hlstamlne receptors on larget Organs Corticosterolds will Multiple Choice reverse spasms of respiratory smooth muscles Inhlbit the activity of lymphocytes relleve inflammatory symptoms block synthesis of leukotrienes blnd t0 hlstamlne receptors on larget Organs... 5 answers ##### 3 . It is known that the height of men on a professional basketball team normally distributed with a mean of 75 inches with standard deviation of 2.5 inches. a. What percent of basketball players are 69 inches tall? b. What percent of basketball players are between 76 and 80 inches tall? What percent of basketball players are more than 80 inches tall? 3 . It is known that the height of men on a professional basketball team normally distributed with a mean of 75 inches with standard deviation of 2.5 inches. a. What percent of basketball players are 69 inches tall? b. What percent of basketball players are between 76 and 80 inches tall? What percen... 
1 answer ##### Situation:A 76-year-old patient has been experiencing arteriosclerosis obliterans with intermittent claudication. He refuses to give up... Situation:A 76-year-old patient has been experiencing arteriosclerosis obliterans with intermittent claudication. He refuses to give up smoking. At his last office visit, he was given a prescription for pentoxifylline (Trental), 400 mg PO tid with meals. After leaving the examination room, he tells ... 5 answers ##### Suppose that disease Is inherited via an utosomal recessive mode of inheritance, The mplications of this mode of inheritance are that the chldren in a family each have probability of 0.45 of inheriting the discaseWhat is the probability that in a family 0,6975 With two childcen both siblings are 0.4500 aflected? 0,.4950 What is the probability that exacily QnC 5500 sibling affected? Whar is the probability that ncithcr sibling 0.2025 is affccted? 0.9000 What \/s the probabillty Ihat at Izast Suppose that disease Is inherited via an utosomal recessive mode of inheritance, The mplications of this mode of inheritance are that the chldren in a family each have probability of 0.45 of inheriting the discase What is the probability that in a family 0,6975 With two childcen both siblings a... 5 answers ##### FiuddrciestaloAlti ils rratrtc @t loaitc; nkiex 46{aAe l xbaraeno hxal; ileranka} FiuddrciestaloAlti ils rratrtc @t loaitc; nkiex 46{a Ae l xbaraeno hxal; ileranka}... 5 answers ##### Complete the table to find the derivative of the function without using the Quotient Rule: Function Rewrite Differentiate Simplify 6x3\/2 Y = Complete the table to find the derivative of the function without using the Quotient Rule: Function Rewrite Differentiate Simplify 6x3\/2 Y =... 5 answers ##### 5.[ asked fourteen people how many books thcy read in one year. Here are the results: 37,33,25,22,44,37, 22,22, 44, 44, 22,33, 33, 22. Find the mode for this data. 
The mode is: books 5.[ asked fourteen people how many books thcy read in one year. Here are the results: 37,33,25,22,44,37, 22,22, 44, 44, 22,33, 33, 22. Find the mode for this data. The mode is: books... 1 answer ##### 1. Your friend asks you to invest$10,000 in a business venture. Based on your estimates,...\n1. Your friend asks you to invest $10,000 in a business venture. Based on your estimates, you would receive nothing for four years, at the end of year 5 you would receive interest on the investment compounded annually at 8%, and at the end of year 6 you would receive$14,500. If your estimates are c...\n##### How do you multiply and simplify \\frac { 6x ^ { 2} + 5x + 1} { 3x - 6} \\cdot \\frac { 2- x } { 4x ^ { 2} + 4x + 1}?\nHow do you multiply and simplify \\frac { 6x ^ { 2} + 5x + 1} { 3x - 6} \\cdot \\frac { 2- x } { 4x ^ { 2} + 4x + 1}?...\nthe following items were taken from the financial statements of Pronghorn Company for the year ending December 31,2022 Accounts Payable:$18,000 Accounts Recievable:$7,000 Accumulated depreciation- Equipment:$5,200 Bonds Payable:$17,000 Cash: 24,000 Common Stock: $26,900 Cost of Goods sold:$28,500 De...","date":"2022-05-21 22:46:51","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.28965625166893005, \"perplexity\": 4071.0458691085723}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, 
\"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2022-21\/segments\/1652662541747.38\/warc\/CC-MAIN-20220521205757-20220521235757-00789.warc.gz\"}"}
null
null
\section{Introduction} Magnetic skyrmions are topological spin configurations in magnetic materials, which usually originate from the chiral Dzyaloshinskii-Moriya interaction (DMI)\cite{skyrob1, skyrob2, DM}. Experimentally, there are many methods to create and delete an individual magnetic skyrmion at a given position of a magnetic material\cite{skyrapp}, which provide important technical support for the potential application of magnetic skyrmions to information storage and processing\cite{skyrapp, skyappl}. All these methods rely on a rotation of the direction of the local magnetization rather than a displacement of its location\cite{skyrapp, RenWangWangQu, logicgate2015, Nikals1}. Essentially, in theoretical physics, this kind of rotation of the local magnetization can be treated as a local gauge transformation of the magnetization\cite{BrunoDugaevTaillefumier,Tatara}. An important property of a magnetic skyrmion is the existence of its emergent gauge field; this field is interpreted as a real-space Berry connection, which is generally derived from the adiabatic approximation\cite{skyremg, emgfield, realBerry}. So our questions are: what exactly is the relation between the Berry connection, gauge transformations and the adiabatic approximation for the $SU(2)$ magnetization of magnetic skyrmions, and does a general relation exist for the $SU(N)$ case? We will find that the $SU(N)$ gauge transformations are rotation matrices for the general $N$-level quantum system with any state expressed as a density operator $\rho$. Analogous to the 't Hooft $SU(2)$ gauge-invariant electromagnetic tensor of magnetic monopoles\cite{Hooft1974}, using the Duan-Ge gauge potential decomposition theory\cite{DuanGe1979}, Duan and Zhang found $(N-1)$ $U(1)$ magnetic monopole gauge fields, i.e. Wu-Yang potentials\cite{Wu-Yang1}, of the extended $SU(N)$ gauge theory\cite{DuanZhang,LiuDuanZhang2005SkySUN,DuanRen}. These Wu-Yang potentials are completely expressed by local gauge transformations $U(x)\in SU(N)$.
We will study the exact relation between Wu-Yang potentials and the Berry connection for a general $N$-level quantum system; this is an interesting extension of the $SU(2)$ magnetization case because the density operator also includes mixed states. We develop a Cartan subalgebra local bases parametrization method for the density matrix of a general $N$-level quantum system\cite{mixgeo2}. We can prove that the Wu-Yang potentials of this quantum system are the projections of the $su(N)$ flat connection $\frac{1}{ig}\partial_{\mu}UU^{\dag}$ onto the $(N-1)$ Cartan subalgebra local bases $n_{i}$; they theoretically relate to $(N-1)$ topological quasi-particles similar to magnetic skyrmions in $SU(2)$, which provides a new clue for creating topological quasi-particles. In general, whether $U(x)$ evolves adiabatically or not, the $(N-1)$ Wu-Yang potentials always exist and relate to the general geometric phase of a pure or mixed state\cite{mixgeo}. However, when the adiabatic approximation is satisfied, by taking the adiabatic unitary evolution as a local gauge transformation $U(x)$, we will find that the $N$ Berry connections of the $N$-level quantum system can be expressed by $U(x)$. The weighted average of these Berry connections with respect to the eigenvalues of the density operator is equal to the expectation value of the flat connection in the final state of the adiabatic unitary evolution. An exact relation between the Wu-Yang potentials and the weighted average of Berry connections is obtained under the adiabatic approximation. We will find that, in the case of $SU(2)$ magnetic skyrmions, the Berry connection is exactly proportional to the Wu-Yang potential, and the magnetic skyrmion and the Wu-Yang magnetic monopole have the same algebraic structure. Therefore, magnetic skyrmions are quasi-magnetic monopoles in magnetic materials, which is related to the abundant topological structure in Skyrme theory\cite{Cho1, Cho2}. This paper is organized as follows.
In section II, we study the Wu-Yang potentials of a general $N$-level quantum system, based on the parametrization of the density operator by $su(N)$ Cartan subalgebra local bases. In section III, we discuss the charge of the Wu-Yang magnetic monopole for the $SU(2)$ case; it naturally relates to a magnetic skyrmion charge. In section IV, we take the $su(2)$ Cartan subalgebra local basis as the local magnetization of a magnetic material, so the magnetic skyrmion is analyzed by gauge transformations. In section V, we study the exact relation between the $SU(N)$ Wu-Yang potentials and the Berry connection in the adiabatic approximation. \section{The Wu-Yang Potentials of $N$-level quantum system} In this section, we analyze the emergent gauge fields of a general $N$-level quantum system by using the $su(N)$ Cartan subalgebra local bases parametrization. For a general $N$-level quantum system, the quantum states are described by the density matrix $\rho$. The density matrix $\rho$ must satisfy three properties: $\rho^{\dag}=\rho$, Tr$\rho=1$ and $\rho\geq0$, where $\rho\geq0$ is called the positive semidefinite condition of $\rho$, and it means that all eigenvalues of $\rho$ are nonnegative. We know that any $(N\times N)$ Hermitian matrix can be expanded in the $su(N)$ Lie algebra bases $T_{a}$ and the unit matrix $I_{N}$. Considering the normalization property Tr$\rho=1$, the $su(N)$ Lie algebra vector representation of the density matrix $\rho$ is\cite{KimuraG, ByrdKhaneja} \begin{equation}\label{rho} \rho=\frac{1}{N}(I_{N}+\sqrt{2N(N-1)}v^{a}T_{a}), \end{equation} where the Lie algebra bases $T_{a}(a=1,2,\cdots,N^2-1)$ are the generators of the $SU(N)$ Lie group in the fundamental representation and the Lie algebra vector is $v\equiv v^{a}T_{a}\in su(N)$. We use the convention of summation over repeated indices.
The vector space of the $su(N)$ Lie algebra is spanned by the bases $T_{a}$. When we use the conventions of the normalization relations \begin{equation}\label{nor} \textrm{Tr}(T_{a}T_{b})=\frac{1}{2}\delta_{ab} \end{equation} and the commutation and anticommutation relations \begin{equation}\label{com} [T_{a},T_{b}]=if_{abc}T_{c}, \;\;\;\{T_{a},T_{b}\}=\frac{1}{N}\delta_{ab}I+d_{abc}T_{c}, \end{equation} the constant in eq.(\ref{rho}) ensures that $(v,v)=1$ for pure states and $(v,v)<1$ for mixed states, where the definition of the inner product of Lie algebra vectors is \begin{equation}\label{inn} (T_{a},T_{b})\equiv 2Tr(T_{a}T_{b}). \end{equation} The restricted region of $v$ following from the positive semidefinite condition of $\rho$ was analytically obtained in ref.\cite{KimuraG, ByrdKhaneja}. More specifically, since the eigenvalues of $\rho$, which we denote as $a^{n}(n=1,2,\cdots,N)$, are invariant under unitary transformations, the necessary and sufficient condition for the density matrix $\rho$, as described in ref.\cite{mixgeo2, ByrdBoyaMimsetal}, is \begin{equation}\label{rhod} \rho=U\rho_{d}U^{\dag}, \end{equation} where $\rho_{d}\equiv a^{n}\rho_{n}$. The $\rho_{n}(n=1,2,\cdots,N)$ is the pure state density matrix \begin{equation}\label{rhon} \rho_{n}=\textrm{Diag}(0,\cdots,0,1,0,\cdots,0) \end{equation} with the single 1 in the $n$-th diagonal. The $a^{n}$ satisfy $\sum_{n}a^{n}=1$ and $0\leq a^{n}\leq1$. In the following, we study the non-degenerate case for the non-zero $a^{n}$; for convenience, we assume $a^{1}=a^{2}=\dots=a^{m-1}=0(m\in\mathbb{Z},1\leq m\leq N)$ and \begin{equation}\label{eig} 0<a^{m}<a^{m+1}<\cdots<a^{N}\leq 1. \end{equation} The $U$ in eq.(\ref{rhod}) is any special unitary transformation $U\in SU(N)$.
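As a quick sanity check of this parametrization, the following minimal numerical sketch (ours, assuming Python with numpy; $N=2$ with $T_{a}=\sigma_{a}/2$) verifies that $(v,v)=1$ gives a pure state, $\mathrm{Tr}\rho^{2}=1$, while $(v,v)<1$ gives a mixed state with nonnegative eigenvalues:

```python
import numpy as np

N = 2
# su(2) generators T_a = sigma_a / 2, normalized as Tr(T_a T_b) = delta_ab / 2
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]
T = [s / 2 for s in sigma]

def rho_from_v(v):
    """Density matrix rho = (1/N)(I_N + sqrt(2N(N-1)) v^a T_a)."""
    c = np.sqrt(2 * N * (N - 1))
    return (np.eye(N) + c * sum(va * Ta for va, Ta in zip(v, T))) / N

v_pure = np.array([0.0, 0.0, 1.0])    # (v, v) = 1  -> pure state
v_mixed = 0.5 * v_pure                # (v, v) < 1  -> mixed state

purity_pure = np.trace(rho_from_v(v_pure) @ rho_from_v(v_pure)).real
purity_mixed = np.trace(rho_from_v(v_mixed) @ rho_from_v(v_mixed)).real
print(purity_pure, purity_mixed)      # 1.0 for the pure state, < 1 for the mixed one
```

The same function works for any $N$ once the $T_{a}$ are supplied, since only the normalization convention enters the constant.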
For a given element $U(x)\in SU(N)$, the local bases of the $su(N)$ Lie algebra are defined as \begin{equation}\label{Localbases} u_{a}(x)=U(x)T_{a}U^{\dag}(x), \end{equation} where we use $x=\{x^{\mu}\}(\mu=1,2,\cdots,D)$ to represent the coordinates of the $N$-level quantum system in any $D$-dimensional parameter space, which can be the real space, the momentum space or another parameter space, and we assume that the density operator is smooth and single-valued in these parameter spaces. Moreover, the Cartan subalgebra of the $su(N)$ Lie algebra is composed of the diagonal generators $H_{i}(i=1,2,\cdots,N-1)$, which satisfy $[H_{i},H_{j}]=0$. In this vector space of the $su(N)$ Cartan subalgebra, the local bases are defined as \begin{equation}\label{nix} n_{i}(x)=U(x)H_{i}U^{\dag}(x). \end{equation} It can be proved that the $n_{i}$ still satisfy the Cartan subalgebra property $[n_{i},n_{j}]=0$. Using the Cartan subalgebra local bases, the density matrix of eq.(\ref{rhod}) is expressed as (Appendix A) \begin{equation}\label{rhocartan} \rho=U\rho_{d}U^{\dag}=\frac{1}{N}(I_{N}+\sqrt{2N(N-1)}u^{i}n_{i}), \end{equation} where the components $u^{i}(i=1,2,\cdots,N-1)$ are well determined by the $a^{n}$ as \begin{equation}\label{ni} u^{i}=\sqrt{\frac{N}{N-1}}\frac{1}{\sqrt{i(i+1)}}(\sum_{k=1}^{i}a^{k}-ia^{i+1}). \end{equation} So the density matrix of the $N$-level quantum system is parameterized by the $su(N)$ Cartan subalgebra local bases as given in eq.(\ref{rhocartan}) and eq.(\ref{ni}), i.e. $u\equiv u^{i}n_{i}$. Each local basis $n_{i}$ is a general $su(N)$ Lie algebra vector in parameter space, and its covariant derivative is defined as \begin{equation}\label{Covariant} D_{\mu}n_{i}=\partial_{\mu}n_{i}-ig[A_{\mu},n_{i}], \end{equation} where $g$ is the coupling constant and $A_{\mu}\equiv A_{\mu}^{a}T_{a}$ is the $SU(N)$ gauge potential, which is also a Lie algebra vector.
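The property $[n_{i},n_{j}]=0$ of the rotated Cartan bases can be checked numerically. A small sketch (ours, assuming numpy) for $SU(3)$, where the Cartan generators are the two diagonal Gell-Mann matrices divided by two:

```python
import numpy as np

rng = np.random.default_rng(42)

# Cartan generators of su(3): H_1 = lambda_3 / 2, H_2 = lambda_8 / 2
H1 = np.diag([1.0, -1.0, 0.0]) / 2
H2 = np.diag([1.0, 1.0, -2.0]) / (2 * np.sqrt(3))

# A random unitary from the QR decomposition of a complex Gaussian matrix
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
U, _ = np.linalg.qr(A)

# Local Cartan bases n_i = U H_i U^dag
n1 = U @ H1 @ U.conj().T
n2 = U @ H2 @ U.conj().T

comm = n1 @ n2 - n2 @ n1
print(np.max(np.abs(comm)))   # numerically zero: the rotated bases still commute
```

Since conjugation by a fixed unitary is an algebra automorphism, the vanishing commutator holds for every $U$, not only this random sample.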
We can project $A_{\mu}$ onto the directions of the local bases of the Cartan subalgebra and the directions perpendicular to these local bases\cite{DuanZhang,LiuDuanZhang2005SkySUN,DuanRen} \begin{equation}\label{Pro} A_{\mu}=(A_{\mu},n_{i})n_{i}+[[A_{\mu},n_{i}],n_{i}]. \end{equation} On the other hand, using the definition of $n_{i}$, it can be easily proved that \begin{equation}\label{Amu} \partial_{\mu}n_{i}=[\partial_{\mu}UU^{\dag},n_{i}], \end{equation} which means that \begin{equation}\label{Amu1} D_{\mu}n_{i}=\partial_{\mu}n_{i}-ig[K_{\mu},n_{i}]=0, \end{equation} where $K_{\mu}\equiv\frac{1}{ig}\partial_{\mu}UU^{\dag}$, and we can verify that it is just the flat connection of $SU(N)$. Analogous to the 't Hooft $SU(2)$ gauge-invariant electromagnetic tensor\cite{Hooft1974} of magnetic monopoles, Duan and Zhang used the gauge potential decomposition theory to define the extended $SU(N)$ gauge-invariant electromagnetic tensor as\cite{DuanZhang,LiuDuanZhang2005SkySUN,DuanRen} \begin{equation}\label{fmunui1} f_{\mu\nu}^{i}=(F_{\mu\nu},n_{i})-\frac{1}{ig}(n_{i},[D_{\mu}n_{k},D_{\nu}n_{k}]), \end{equation} where $F_{\mu\nu}=\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}-ig[A_{\mu},A_{\nu}]$ is the $SU(N)$ gauge field strength. Using eq.(\ref{Pro}) one can prove (Appendix B) \begin{equation}\label{fmunui2} f_{\mu\nu}^{i}=\partial_{\mu}A_{\nu}^{i}-\partial_{\nu}A_{\mu}^{i}-\frac{1}{ig}(n_{i},[\partial_{\mu}n_{k},\partial_{\nu}n_{k}]), \end{equation} where $A_{\mu}^{i}\equiv(A_{\mu},n_{i})$. When we take $K_{\mu}$ as $A_{\mu}$, the $f_{\mu\nu}^{i}$ in eq.(\ref{fmunui1}) equals $0$ because $D_{\mu}n_{i}=0$ and $F_{\mu\nu}(K)=0$.
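The identity $\partial_{\mu}n_{i}=[\partial_{\mu}UU^{\dag},n_{i}]$ behind $D_{\mu}n_{i}=0$ can be verified by finite differences for a one-parameter $SU(2)$ family. The sketch below (ours, assuming numpy) uses the closed form of the exponential, which holds because $(\vec{\sigma}\cdot\hat{v})^{2}=I$:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)
H = sz / 2                          # su(2) Cartan generator tau_3

axis = np.array([0.6, 0.8, 0.0])    # unit rotation axis (illustrative choice)
S = axis[0] * sx + axis[1] * sy + axis[2] * sz

def U(x):
    # exp(-i x (sigma . axis) / 2) in closed form
    return np.cos(x / 2) * I2 - 1j * np.sin(x / 2) * S

def n3(x):
    return U(x) @ H @ U(x).conj().T

x, eps = 0.7, 1e-6
dn = (n3(x + eps) - n3(x - eps)) / (2 * eps)           # numerical d n_3 / dx
dUU = (U(x + eps) - U(x - eps)) / (2 * eps) @ U(x).conj().T
rhs = dUU @ n3(x) - n3(x) @ dUU                        # [dU U^dag, n_3]
print(np.max(np.abs(dn - rhs)))                        # central-difference noise only
```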
So according to eq.(\ref{fmunui2}), we obtain \begin{equation}\label{Wu-Yangcurvature} K_{\mu\nu}^{i}\equiv\partial_{\mu}a_{\nu}^{i}-\partial_{\nu}a_{\mu}^{i}=\frac{1}{ig}(n_{i},[\partial_{\mu}n_{k},\partial_{\nu}n_{k}]), \end{equation} where \begin{equation}\label{WuYangPot} a_{\mu}^{i}\equiv(K_{\mu},n_{i})=\frac{1}{ig}(\partial_{\mu}UU^{\dag},n_{i})\;\;\;,\;\;\;(i=1,2,\cdots,N-1) \end{equation} is named the $i$-th Wu-Yang potential, which is used to describe a magnetic monopole\cite{Wu-Yang1}. Now, thanks to the Cartan subalgebra local bases parametrization of the density matrix $\rho$, the physical meanings of these Wu-Yang potentials for a general $N$-level quantum system are clear. At first, it is a remarkable feature that the Wu-Yang potentials in eq.(\ref{WuYangPot}) are completely expressed by the local gauge transformations $U(x)$. For the $N$-level quantum system, the unitary evolution of the density matrix in parameter space is completely described by $U(x)\in SU(N)$, so these Wu-Yang potentials must be produced along any unitary evolution path. Moreover, the parallel transport character of the $n_{i}$ defines a parallel transport of the density matrix state under this unitary evolution; nevertheless, the $(N-1)$ Wu-Yang potentials, seen as emergent gauge fields, always exist in this parallel transport state. On the other hand, $a_{\mu}^{i}$ is the projection of the $SU(N)$ flat connection $K_{\mu}\equiv\frac{1}{ig}\partial_{\mu}UU^{\dag}$ onto the $i$-th Cartan subalgebra local basis $n_{i}$, and $K_{\mu\nu}^{i}$ is the Wu-Yang curvature tensor corresponding to the $i$-th Wu-Yang potential $a_{\mu}^{i}$. $K_{\mu\nu}^{i}$ is a $U(1)$ magnetic monopole electromagnetic field tensor, so for an $N$-level pure or mixed quantum system, the emergent gauge fields are magnetic monopole electromagnetic fields, but note that we have put no constraints on the dimension of the parameter space so far.
The definitions of the Wu-Yang potentials and Wu-Yang curvature tensors provide us a method for calculating emergent electromagnetic fields; it is independent of the state and the gauge condition and depends only on the local gauge transformation $U(x)$. In the following, we mainly use this method to analyze the magnetic monopole property of magnetic skyrmions and the relation between Wu-Yang potentials and the Berry connection. \section{The topological charge of 2-level quantum system} In this section, we analyze the topological charge of a general 2-level quantum system. For the $SU(2)$ case, $\tau _a = \frac{1}{2} \sigma _a(a=1,2,3)$ are the generators of the $SU(2)$ Lie group, where the $\sigma_{a}$ are the Pauli matrices. The Cartan subalgebra index $i$ can only equal $3$, so the $SU(2)$ Wu-Yang potential from eq.(\ref{WuYangPot}) becomes $a_{\mu}^{3}=\frac{1}{iq_{e}}(\partial_{\mu}UU^{\dag},n_{3})$, where $U\in SU(2)$ and $g$ is replaced by an emergent charge $q_{e}$, which will be explained later. The Cartan subalgebra indices $i,k$ in eq.(\ref{Wu-Yangcurvature}) should also be $3$, so the Wu-Yang curvature tensor becomes \begin{equation}\label{WuYangsu(2)} K_{\mu\nu}^{3}=\partial_{\mu}a_{\nu}^{3}-\partial_{\nu}a_{\mu}^{3}=\frac{1}{iq_{e}}(n_{3},[\partial_{\mu}n_{3},\partial_{\nu}n_{3}]). \end{equation} Because $n_{3}$ is an $su(2)$ Lie algebra vector, we can expand it as $n_{3}=n_{3}^{a}\tau_{a}$, where $n^{a}_{3}\; (a=1,2,3)$ are its components.
Using the commutation relation of the Pauli matrices and the definition of the inner product of the Lie algebra, \begin{eqnarray} \label{Kmunu} K_{\mu\nu}^{3}&=&\frac{1}{iq_{e}}(n_{3}^{a}\tau _{a},[\partial_{\mu}n_{3}^{b}\tau _{b},\partial_{\nu}n_{3}^{c}\tau _{c}])\nonumber \\ &=&\frac{1}{iq_{e}}n_{3}^{a}\partial_{\mu}n_{3}^{b}\partial_{\nu}n_{3}^{c}(\tau _{a},[\tau _{b},\tau _{c}]) \nonumber \\ &=&\frac{1}{iq_{e}}n_{3}^{a}\partial_{\mu}n_{3}^{b}\partial_{\nu}n_{3}^{c}i\varepsilon_{bcd} (\tau _{a},\tau _{d})\nonumber\\ &=&\frac{1}{iq_{e}}n_{3}^{a}\partial_{\mu}n_{3}^{b}\partial_{\nu}n_{3}^{c}2i\varepsilon_{bcd}Tr(\tau _{a}\tau _{d})\nonumber\\ &=&\frac{1}{iq_{e}}n_{3}^{a}\partial_{\mu}n_{3}^{b}\partial_{\nu}n_{3}^{c}2i\varepsilon_{bcd}(\frac{1}{2}\delta_{ad}) \nonumber\\ &=&\frac{1}{q_{e}}\varepsilon_{abc}n_{3}^{a}\partial_{\mu}n_{3}^{b}\partial_{\nu}n_{3}^{c}. \end{eqnarray} So the magnetic monopole charge $G$ of the $SU(2)$ Wu-Yang potential in $\mathds{R}^{3}$ is\cite{DuanGe1979} \begin{equation}\label{chaG} G=\int_{S^{2} }\frac{1}{2q_{e}}\varepsilon_{abc}n_{3}^{a}\partial_{\mu}n_{3}^{b}\partial_{\nu}n_{3}^{c}dx^{\mu}\wedge dx^{\nu}\,\,\,,\,\,\,(\mu,\nu=1,2), \end{equation} which has an electromagnetic field distribution corresponding to its positive or negative charge. When we consider $S^{2}$ as a single-point compactification of a two-dimensional disk $\mathds{D}_{2}$ with a uniform field on the boundary, $\Sigma\equiv\mathds{D}_{2}\cup\{\infty\}\cong S^{2}$, \begin{equation} \label{MM} G=\int_{\Sigma }\frac{1}{2q_{e}}\varepsilon_{abc}n_{3}^{a}\partial_{\mu}n_{3}^{b}\partial_{\nu}n_{3}^{c}dx^{\mu}\wedge dx^{\nu}\,\,\,,\,\,\,(\mu,\nu=1,2). \end{equation} Then $G$ corresponds to a quantized magnetic flux in $\Sigma$. Furthermore, we can see that the Wu-Yang magnetic monopole must have an $n_{3}$ field distribution.
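The chain above rests on the $su(2)$ identity $\frac{1}{i}(n,[b,c])=\varepsilon_{abc}n^{a}b^{b}c^{c}$, i.e. the Lie algebra inner product of a vector with a commutator reduces to the ordinary triple product of the component vectors. A minimal numerical check (ours, assuming numpy) with random components:

```python
import numpy as np

rng = np.random.default_rng(1)
sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]], dtype=complex),
       np.array([[1, 0], [0, -1]], dtype=complex)]
tau = [s / 2 for s in sig]

def vec(v):                      # v^a tau_a
    return sum(va * ta for va, ta in zip(v, tau))

def inner(X, Y):                 # (X, Y) = 2 Tr(X Y)
    return 2 * np.trace(X @ Y)

n, b, c = rng.normal(size=3), rng.normal(size=3), rng.normal(size=3)
lhs = inner(vec(n), vec(b) @ vec(c) - vec(c) @ vec(b)) / 1j
rhs = np.dot(n, np.cross(b, c))
print(abs(lhs - rhs))            # agrees to machine precision
```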
Considering that $n_{3}$ comes from $ U\tau _{3}U^{\dag}$, $n_{3}$ can be viewed as the spin operator in a magnetic material, and its direction is determined by the local gauge transformation $U$. So far, physicists still have not found any real magnetic monopole, but eq.(\ref{MM}) provides us one clue for exploring quasi-magnetic monopoles in magnetic systems. In fact, the configuration of eq.(\ref{MM}) appears in many different materials\cite{Han}. In the following section, we will discuss the physical realization of a quasi-magnetic monopole in a $(2+1)$-dimensional magnetic material, in which the magnetization configuration can be precisely manipulated in the laboratory. \section{Magnetic skyrmions in two-dimensional materials as quasi-magnetic monopoles} In this section, using the Cartan subalgebra local basis method introduced in the previous section, we discuss the $SU(2)$ Wu-Yang magnetic monopole charge of a magnetic material. In a $2$-dimensional base manifold $\mathcal{M}$ of a magnetic material, a smooth magnetization configuration $\vec{M}=\vec{M}(\bm{r},t)$ is a cross section of the associated vector bundle $E(\mathcal{M},su(2),SU(2),P)$, and its unit magnetization vector $\vec{m}(\bm{r},t)=\vec{M}(\bm{r},t)/|\vec{M}(\bm{r},t)|$ corresponds to a unit $su(2)$ Lie algebra vector $\vec{m}(\bm{r},t) =m^{a}(\bm{r},t)\tau_{a}$ with $\sum_{a}m^{a}(\bm{r},t)m^{a}(\bm{r},t)=1(a=1,2,3)$. We know that the $su(2)$ Cartan subalgebra is $\tau _{3}$, and its corresponding local basis is $ n _{3}(\bm{r},t)=U(\bm{r},t)\tau _{3}U^{\dag}(\bm{r},t)$, where $ n _{3}(\bm{r},t)$ remains an $su(2)$ Lie algebra vector and can be expressed as \begin{equation}\label{n3com} n_{3}(\bm{r},t)=n^{a}_{3}(\bm{r},t)\tau _{a}. \end{equation} From this point of view, we assume \begin{equation}\label{nm} \vec{m}(\bm{r},t)\equiv n_{3}(\bm{r},t), \end{equation} that is to say, we describe the local unit magnetization as an $su(2)$ Cartan subalgebra local basis.
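The identification $\vec{m}\equiv n_{3}$ can be made concrete: for the rotation $U=\exp(-i\frac{\theta}{2}\vec{\sigma}\cdot\vec{\ell})$ with axis $\vec{\ell}=(-\sin\phi,\cos\phi,0)$ used later in the paper, the components $n_{3}^{a}=(n_{3},\tau_{a})=2\mathrm{Tr}(n_{3}\tau_{a})$ reproduce the unit vector $(\sin\theta\cos\phi,\sin\theta\sin\phi,\cos\theta)$. A numerical sketch (ours, assuming numpy; $\theta,\phi$ are sample values):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
tau = [sx / 2, sy / 2, sz / 2]
I2 = np.eye(2)

theta, phi = 1.2, 0.7
# U = exp(-i theta sigma.ell / 2) in closed form, since (sigma.ell)^2 = I
ell = np.array([-np.sin(phi), np.cos(phi), 0.0])
S = ell[0] * sx + ell[1] * sy + ell[2] * sz
U = np.cos(theta / 2) * I2 - 1j * np.sin(theta / 2) * S

n3 = U @ tau[2] @ U.conj().T
# Components n_3^a = (n_3, tau_a) = 2 Tr(n_3 tau_a)
comps = np.array([2 * np.trace(n3 @ t) for t in tau]).real

m_expected = np.array([np.sin(theta) * np.cos(phi),
                       np.sin(theta) * np.sin(phi),
                       np.cos(theta)])
print(comps, m_expected)    # the two vectors agree
```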
After replacing the local magnetization $\vec{m}$ with the $su(2)$ Cartan subalgebra local basis $n_{3}$, the $SU(2)$ Wu-Yang potential we discussed in the previous section will have a real physical effect that we can detect in magnetic materials, and we can use it to explain the theoretical origin of the emergent electromagnetic field and the Berry phase phenomena produced in magnetic materials. According to eq.(\ref{nm}), replacing $n_{3}^{a}$ with $m^{a}$, the Wu-Yang curvature tensor $K_{\mu\nu}^{3}$ of a $(2+1)$-dimensional magnetic material can be expressed as \begin{equation}\label{Kmunum} K_{\mu\nu}^{3}=\frac{1}{q_{e}}\varepsilon_{abc}m^{a}\partial_{\mu}m^{b}\partial_{\nu}m^{c} \,\,\,,\,\,\,(\mu,\nu=1,2). \end{equation} According to eq.(\ref{MM}), it is obvious that the Wu-Yang magnetic monopole charge can be expressed as \begin{equation}\label{MM1} G=\int_{\Sigma }\frac{1}{2 q_{e}}\vec{m}\cdot (\partial_{\mu} \vec{m} \times \partial_{\nu} \vec{m})dx^{\mu}\wedge dx^{\nu}\,\,\,,\,\,\,(\mu,\nu=1,2). \end{equation} Many theoretical and experimental studies have proved\cite{skyremg, Han} that skyrmions in $(2+1)$-dimensional magnetic materials have the following structure \begin{equation}\label{MM2} S=\frac{1}{8\pi}\int_{\Sigma }\vec{m}\cdot (\partial_{\mu} \vec{m} \times \partial_{\nu} \vec{m})dx^{\mu}\wedge dx^{\nu}\,\,\,,\,\,\,\,(\mu,\nu=1,2), \end{equation} where $\vec{m}$ is the unit magnetization and the value of a single skyrmion charge $S$ is $+1$ or $-1$. Now, by comparing eq.(\ref{MM1}) and eq.(\ref{MM2}), we find \begin{equation}\label{KSRelation2} S=\frac{q_{e}}{4\pi}G, \end{equation} that is to say, a skyrmion number $S$ always corresponds to a Wu-Yang magnetic monopole charge $G$, and $q_{e}$ is the emergent electric charge of magnetic skyrmions, which satisfies the non-Abelian analog of the Dirac quantization condition\cite{Shnir}, i.e. eq.(\ref{KSRelation2}). The magnetization configuration itself can be regarded as a quasi-magnetic monopole.
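The skyrmion number of eq.(\ref{MM2}) can be evaluated on a lattice. Since the wedge product with summed $\mu,\nu$ counts the $x$-$y$ term twice, eq.(\ref{MM2}) equals $\frac{1}{4\pi}\int \vec{m}\cdot(\partial_{x}\vec{m}\times\partial_{y}\vec{m})\,dx\,dy$. The sketch below (ours, assuming numpy) uses the hedgehog-like profile $\theta(r)=2\arctan(R/r)$, an illustrative ansatz rather than the solution of a specific micromagnetic model:

```python
import numpy as np

R, L, Ngrid = 1.0, 20.0, 801
x = np.linspace(-L, L, Ngrid)
y = np.linspace(-L, L, Ngrid)
X, Y = np.meshgrid(x, y, indexing="ij")
r = np.hypot(X, Y)
phi = np.arctan2(Y, X)
theta = 2 * np.arctan2(R, r)           # theta = pi at the core, -> 0 far away

m = np.stack([np.sin(theta) * np.cos(phi),
              np.sin(theta) * np.sin(phi),
              np.cos(theta)])          # shape (3, Ngrid, Ngrid)

dmx = np.gradient(m, x, axis=1)        # partial_x m
dmy = np.gradient(m, y, axis=2)        # partial_y m
density = np.einsum("aij,aij->ij", m, np.cross(dmx, dmy, axis=0))
S_num = density.sum() * (x[1] - x[0]) * (y[1] - y[0]) / (4 * np.pi)
print(S_num)                           # |S_num| close to 1 for a single skyrmion
```

The small deviation from an integer comes from truncating the tail at $|x|,|y|\leq L$ and from the finite-difference derivatives.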
The magnetization configuration of magnetic skyrmions originates from the competition between the exchange interaction and the DM interaction in an external magnetic field\cite{DM, RohartThiaville}. As topologically protected quasi-particles with many advantages for technological spintronic applications\cite{skyrapp, skyappl}, magnetic skyrmions are now an active subject of theoretical and experimental research. Experimental studies such as the topological Hall effect of magnetic skyrmions\cite{skyrhall} have verified that skyrmions have an emergent electromagnetic field\cite{skyremg}. Theoretically, the emergent electromagnetic field of a magnetic skyrmion originates from a Berry phase, which is obtained by an adiabatic process. However, the Wu-Yang potential can be expressed through a general gauge transformation, whether adiabatic or not, so we think that the $SU(2)$ Wu-Yang potential is a theoretical origin of the spin Berry phase; we will further explain this in the following section. \section{Berry connection in gauge transformation viewpoint} In the preceding section, we found the relation between the $SU(2)$ Wu-Yang monopole charge and the skyrmion number. In this section, we connect our Cartan subalgebra local bases method to the spin Berry connection, and then we discuss the general relation between the $SU(N)$ Wu-Yang potentials and the Berry connection of pure or mixed states. In ferromagnetic materials, the spin Berry connection is defined as the overlap of neighboring spin coherent states\cite{Tatara}. Now we express the spin Berry connection of a $(2+1)$-dimensional magnetization configuration $\vec{m}(\bm{r},t)$ using a local $SU(2)$ gauge transformation, and discuss the relation between the Wu-Yang potential and the spin Berry connection.
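As a preview, the closed forms derived below for the spin coherent states, $\mathcal{A}_{\mu\uparrow}=\frac{1}{2}(1-\cos\theta)\partial_{\mu}\phi$ and $\mathcal{A}_{\mu\downarrow}=\frac{1}{2}(\cos\theta-1)\partial_{\mu}\phi$, can be cross-checked numerically from $-i\langle\vec{m}|\partial_{\mu}|\vec{m}\rangle$ by finite differences in $\phi$ (a sketch of ours, assuming numpy; $\theta,\phi$ are sample values):

```python
import numpy as np

def ket_up(theta, phi):
    return np.array([np.cos(theta / 2),
                     np.exp(1j * phi) * np.sin(theta / 2)])

def ket_down(theta, phi):
    return np.array([-np.exp(-1j * phi) * np.sin(theta / 2),
                     np.cos(theta / 2)])

theta, phi, eps = 1.1, 0.4, 1e-6

def berry_phi(ket):
    # -i <m | d/dphi | m> via a central difference
    d = (ket(theta, phi + eps) - ket(theta, phi - eps)) / (2 * eps)
    return (-1j * np.vdot(ket(theta, phi), d)).real

A_up = berry_phi(ket_up)
A_down = berry_phi(ket_down)
print(A_up, (1 - np.cos(theta)) / 2)     # agree up to finite-difference noise
print(A_down, (np.cos(theta) - 1) / 2)
```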
We first assume a uniform magnetization configuration $\vec{m}_{0}(\bm{r})=\tau_{3}$, so that the local gauge transformation relating $\vec{m}_{0}(\bm{r})$ to $\vec{m}(\bm{r},t)$ is \begin{equation}\label{U} U(\bm{r},t)=\exp(-i\frac{\theta}{2}\vec{\sigma}\cdot\vec{\ell})=\cos\frac{\theta}{2}I-i \sin\frac{\theta}{2}\vec{\sigma}\cdot\vec{\ell},\\ \end{equation} which acts explicitly as \begin{equation}\label{vecm} \vec{m}(\bm{r},t)=U(\bm{r},t)\vec{m}_{0}(\bm{r})U^{\dag}(\bm{r},t),\\ \end{equation} where $\vec{\ell}$ is the rotation axis \begin{equation} \vec{\ell}=(-\sin\phi,\cos\phi,0), \end{equation} and $\theta(\bm{r},t),\phi(\bm{r},t)$ are the polar angle and azimuth of $\vec{m}(\bm{r},t)$. The spin coherent states corresponding to $\vec{m}(\bm{r},t)$ are \begin{eqnarray} |\vec{m}(\bm{r},t)\rangle_{\uparrow}&=&{ \left( \begin{array}{cc} \cos\frac{\theta}{2} \\ e^{i\phi}\sin\frac{\theta}{2} \end{array} \right )} \;\;, \nonumber \\ |\vec{m}(\bm{r},t)\rangle_{\downarrow} &=&{ \left( \begin{array}{cc} -e^{-i\phi}\sin\frac{\theta}{2} \\ \cos\frac{\theta}{2} \end{array} \right )}. \end{eqnarray} So the gauge transformation from $|\vec{m}_{0}(\bm{r})\rangle$ to $|\vec{m}(\bm{r},t)\rangle$ is expressed as \begin{equation}\label{eigm} |\vec{m}(\bm{r},t)\rangle_{s}=U(\bm{r},t)|\vec{m}_{0}(\bm{r})\rangle_{s}\,\,\,,\,\,\,\,s=\uparrow \;\mbox{or}\;\downarrow, \end{equation} where $|\vec{m}_{0}(\bm{r})\rangle_{\uparrow}={ \left( \begin{array}{cc} 1 \\ 0 \end{array} \right )}$ and $|\vec{m}_{0}(\bm{r})\rangle_{\downarrow}={ \left( \begin{array}{cc} 0 \\ 1 \end{array} \right )}$. It is well known that the matrix form of $U$ is \begin{equation}\label{matU} U=U^{0}I+iU^{a}\sigma_{a}={ \left( \begin{array}{cc} U^{0}+iU^{3} & iU^{1}+U^{2} \\ iU^{1}-U^{2} & U^{0}-iU^{3} \end{array} \right )}, \end{equation} with \begin{eqnarray} U^{0}&=&\cos\frac{\theta}{2},\;\;U^{1}=\sin\frac{\theta}{2}\sin\phi, \nonumber \\ U^{2}&=&-\sin\frac{\theta}{2}\cos\phi,\;\;U^{3}=0.
\end{eqnarray} Moreover, we identify this gauge transformation eq.({\ref{U}}) with a unitary time-evolution operator which evolves the initial state $|\vec{m}_{0}(\bm{r})\rangle$ to $|\vec{m}(\bm{r},t)\rangle$ under a proper Hamiltonian $h(t)$; it is expressed as\cite{Niu} \begin{equation} U=\textrm{T}(e^{\frac{1}{i}\int_{0}^{t}dt^{\prime}h(t^{\prime})}), \end{equation} where $\textrm{T}$ is the time-ordering operator. The adiabatic approximation in the time-evolution process forbids jumps from a given eigenstate to another, i.e. the off-diagonal matrix elements of the connection vanish, \begin{eqnarray}\label{adia} _{s}\langle \vec{m}(\bm{r},t)|\partial_{\mu}|\vec{m}(\bm{r},t)\rangle_{s^{\prime}}=0\,\, , \,\,\,\, \mbox{for all}\,\, s\neq s^{\prime}. \end{eqnarray} Under this approximation condition, the spin Berry connection\cite{Tatara,Wen} is calculated as \begin{eqnarray}\label{Aup} \mathcal{A}_{\mu\uparrow}&=&-i\,_{\uparrow}\langle \vec{m}(\bm{r},t)|\partial_{\mu}|\vec{m}(\bm{r},t)\rangle_{\uparrow} \nonumber\\ &=&-i\,_{\uparrow}\langle \vec{m}_{0}(\bm{r})|U^{\dag}(\bm{r},t)\partial_{\mu}U(\bm{r},t)|\vec{m}_{0}(\bm{r})\rangle_{\uparrow} \nonumber\\ &=&-i(\partial_{\mu}(U^{0}+iU^{3})(U^{0}-iU^{3}) \nonumber\\ && +\partial_{\mu}(iU^{1}-U^{2})(-iU^{1}-U^{2}))\nonumber\\ &=&\sin^{2}\frac{\theta}{2}\partial_{\mu}\phi\nonumber\\ &=&\frac{1}{2}(1-\cos\theta)\partial_{\mu}\phi, \end{eqnarray} and \begin{eqnarray}\label{Adown} \mathcal{A}_{\mu\downarrow} &=& -i\,_\downarrow\langle \vec{m}(\bm{r},t)|\partial_{\mu}|\vec{m}(\bm{r},t)\rangle_\downarrow \nonumber\\ &=&-i\,_\downarrow\langle \vec{m}_{0}(\bm{r})|U^{\dag}(\bm{r},t)\partial_{\mu}U(\bm{r},t)|\vec{m}_{0}(\bm{r})\rangle_\downarrow \nonumber\\ &=&i(\partial_{\mu}(iU^{1}+U^{2})(iU^{1}-U^{2}) \nonumber\\ && +\partial_{\mu}(U^{0}-iU^{3})(-U^{0}-iU^{3}))\nonumber\\ &=&-\sin^{2}\frac{\theta}{2}\partial_{\mu}\phi\nonumber\\ &=&\frac{1}{2}(\cos\theta-1)\partial_{\mu}\phi.
\end{eqnarray} So we define an $SU(2)$ Berry connection matrix as \begin{equation}\label{Amat} A_{\mu}\equiv\textrm{Diag}(\mathcal{A}_{\mu\uparrow},\mathcal{A}_{\mu\downarrow}). \end{equation} On the other hand, for the same gauge transformation $U(\bm{r},t)$, the $SU(2)$ Wu-Yang potential can be calculated as \begin{eqnarray}\label{WuYang} a_{\mu}^{3}&=&(K_{\mu},n_{3}) \nonumber\\ &=&\frac{1}{iq_{e}}(\partial_{\mu}UU^{\dag},U\tau_{3}U^{\dag})\nonumber\\ &=&\frac{2}{iq_{e}}\textrm{Tr}(\partial_{\mu}UU^{\dag}U\tau_{3}U^{\dag}) \nonumber\\ &=&\frac{2}{iq_{e}}\textrm{Tr}(\partial_{\mu}U\tau_{3}U^{\dag})\nonumber\\ &=&\frac{1}{iq_{e}}\textrm{Tr}\left[ \left( \begin{array}{cc} \partial_{\mu}(U^{0}+iU^{3}) & \partial_{\mu}(iU^{1}+U^{2}) \nonumber\\ \partial_{\mu}(iU^{1}-U^{2}) & \partial_{\mu}(U^{0}-iU^{3}) \end{array} \right) \right. \\ &&\;\;\;\;\;\; \left. \left( \begin{array}{cc} U^{0}-iU^{3} & -iU^{1}-U^{2} \nonumber\\ iU^{1}-U^{2} & -U^{0}-iU^{3} \end{array} \right) \right]\nonumber\\ &=&\frac{1}{iq_{e}}[ \partial_{\mu}(U^{0}+iU^{3})(U^{0}-iU^{3})\nonumber\\ &&+\partial_{\mu}(iU^{1}+U^{2})(iU^{1}-U^{2})\nonumber\\ &&+\partial_{\mu}(iU^{1}-U^{2})(-iU^{1}-U^{2})\nonumber\\ &&+ \partial_{\mu}(U^{0}-iU^{3})(-U^{0}-iU^{3})]\nonumber\\ &=&\frac{2}{q_{e}}\sin^{2}\frac{\theta}{2}\partial_{\mu}\phi\nonumber\\ &=&\frac{1}{q_{e}}(1-\cos\theta)\partial_{\mu}\phi. \end{eqnarray} So obviously \begin{equation}\label{Aa} a_{\mu}^{3}=\frac{2}{q_{e}}\mathcal{A}_{\mu\uparrow}=-\frac{2}{q_{e}}\mathcal{A}_{\mu\downarrow}\,\,\,\textrm{and}\,\,\, A_{\mu}=q_{e}a_{\mu}^{3}\tau_{3}. \end{equation} Thus the Wu-Yang potential is proportional to the spin Berry connection of a smooth magnetization configuration, with a factor $\pm\frac{2}{q_{e}}$. In particular, for a static skyrmion $\vec{m}(\bm{r})$, the real-space spin Berry connection serves as an emergent gauge field which originates exactly from the $SU(2)$ Wu-Yang potential.
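The two results above can be cross-checked numerically. The sketch below is ours; $q_{e}=1$ and the test profiles $\theta(x)=0.7+0.3x$, $\phi(x)=1.2x^{2}$ are arbitrary choices. It builds $U$ from eq.(\ref{U}), evaluates $\mathcal{A}_{x\uparrow}=-i\langle\uparrow|U^{\dag}\partial_{x}U|\uparrow\rangle$ and the trace formula for $a_{x}^{3}$, and confirms $a_{x}^{3}=2\mathcal{A}_{x\uparrow}=(1-\cos\theta)\partial_{x}\phi$.

```python
import numpy as np

# Our own numeric spot check of the spin Berry connection and the
# SU(2) Wu-Yang potential, with q_e = 1 and arbitrary test profiles.
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
tau3 = s3 / 2

def U(xv):
    th, ph = 0.7 + 0.3 * xv, 1.2 * xv**2        # test profiles θ(x), φ(x)
    sl = -np.sin(ph) * s1 + np.cos(ph) * s2     # σ·ℓ with ℓ = (-sinφ, cosφ, 0)
    return np.cos(th / 2) * np.eye(2) - 1j * np.sin(th / 2) * sl

x0, h = 0.5, 1e-6
dU = (U(x0 + h) - U(x0 - h)) / (2 * h)          # finite-difference ∂x U
th, dph = 0.7 + 0.3 * x0, 2.4 * x0              # θ(x0) and ∂x φ(x0)

A_up = (-1j * (U(x0).conj().T @ dU))[0, 0].real          # -i⟨↑|U†∂xU|↑⟩
a3 = (2 / 1j * np.trace(dU @ tau3 @ U(x0).conj().T)).real  # Wu-Yang a_x^3
target = 0.5 * (1 - np.cos(th)) * dph                     # (1-cosθ)∂xφ / 2
print(A_up, a3)   # a3 should equal 2*A_up and (1-cosθ)∂xφ
```

Note that the $\theta'$ contributions cancel in $\mathcal{A}_{x\uparrow}$, as the derivation shows; only $\partial_{x}\phi$ survives.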
Now we study the $SU(N)$ Berry connection and Wu-Yang potentials in the adiabatic unitary evolution. Similar to the case of the $SU(2)$ magnetization, the uniform field in parameter space that we need is\cite{mixgeo, mixgeo2} \begin{equation} u_{0}(x)\equiv\rho_{d}=a^{n}\rho_{n}. \end{equation} There are $N$ orthogonal eigenstates for $u_{0}(x)$ \begin{eqnarray} &&|u_{0}(x)\rangle_{1}={ \left( \begin{array}{cc} 1 \\ 0\\ \vdots\\ 0 \end{array} \right )}\,\,\,,\,\,\, |u_{0}(x)\rangle_{2}={ \left( \begin{array}{cc} 0 \\ 1\\ \vdots\\ 0 \end{array} \right )}\,\,\,,\,\,\,\cdots\,\,\,,\nonumber\\ && |u_{0}(x)\rangle_{N-1}={ \left( \begin{array}{cc} 0\\ \vdots\\ 1\\ 0 \end{array} \right )}\,\,\,,\,\,\, |u_{0}(x)\rangle_{N}={ \left( \begin{array}{cc} 0\\ 0\\ \vdots\\ 1 \end{array} \right )}. \end{eqnarray} For an explicit $U(x,t)\in SU(N)$, the unitary evolution path is \begin{equation} u(x,t)=U(x,t)u_{0}(x)U^{\dag}(x,t), \end{equation} and the orthogonal eigenstates for $u(x,t)$ are \begin{equation} |u(x,t)\rangle_{n}=U(x,t)|u_{0}(x)\rangle_{n}. \end{equation} Under the adiabatic approximation condition eq.(\ref{adia}), the Berry connections for $SU(N)$ are calculated as \begin{eqnarray} \mathcal{A}_{\mu n}&=&-i\,_{n}\langle u(x,t)|\partial_{\mu}|u(x,t)\rangle_{n} \nonumber\\ &=&-i\,_{n}\langle u_{0}(x)|U^{\dag}(x,t)\partial_{\mu}U(x,t)|u_{0}(x)\rangle_{n}\nonumber\\ &=&-\frac{i}{2}(U^{\dag}(x,t)\partial_{\mu}U(x,t),\rho_{n}). \end{eqnarray} Now the $SU(N)$ Berry connection matrix can be defined as \begin{equation} A_{\mu}=\textrm{Diag}(\mathcal{A}_{\mu1},\mathcal{A}_{\mu2},\cdots,\mathcal{A}_{\mu N}). \end{equation} The weighted average of $A_{\mu}$ for a pure or mixed state is the average of the $\mathcal{A}_{\mu n}$ with probabilities $a^{n}$\cite{mixgeo}, \begin{eqnarray} \mathcal{A}_{\mu}=-\frac{i}{2}(U^{\dag}(x,t)\partial_{\mu}U(x,t),\rho_{d}).
\end{eqnarray} Inverting eq.(\ref{rhocartan}), we obtain \begin{eqnarray} \rho_{d}=U^{\dag}\rho U, \end{eqnarray} then \begin{eqnarray}\label{talBerry} \mathcal{A}_{\mu}&=&-\frac{i}{2}(\partial_{\mu}U(x,t)U^{\dag}(x,t),\rho)\nonumber\\ &=&\frac{g}{2}(K_{\mu},\rho). \end{eqnarray} So the explicit meaning of the weighted average of $A_{\mu}$ is the expectation value of the flat connection with respect to the final density-matrix state in the adiabatic unitary evolution. Comparing eq.(\ref{talBerry}) with eq.(\ref{WuYangPot}) for the $SU(N)$ Wu-Yang potentials, and using eq.(\ref{rhocartan}), we obtain the exact relation between the weighted-average Berry connection of a pure or mixed state and the Wu-Yang potentials, \begin{eqnarray} \mathcal{A}_{\mu}&=&\frac{g}{2}(K_{\mu},\rho)\nonumber\\ &=&\frac{g}{2N}(K_{\mu},(I_{N}+\sqrt{2N(N-1)}u^{i}n_{i}))\nonumber\\ &=&\frac{g}{2}\sqrt{\frac{2(N-1)}{N}}u^{i}(K_{\mu},n_{i})\nonumber\\ &=&\frac{g}{2}\sqrt{\frac{2(N-1)}{N}}u^{i}a_{\mu}^{i}, \end{eqnarray} where the $u^{i}$ are expressed in terms of the $a^{n}$ by eq.(\ref{ni}). In the case of the $SU(2)$ magnetic skyrmion, $N=2$ and the unit magnetization vector corresponds to a pure state, i.e. $a^{1}=1,a^{2}=0$ or $a^{1}=0,a^{2}=1$; using eq.(\ref{ni}) we obtain $u^{1}=1$ or $u^{1}=-1$, that is to say \begin{eqnarray} \mathcal{A}_{\mu}=\frac{g}{2}a_{\mu}^{3}\,\,\, \textrm{or}\,\,\, \mathcal{A}_{\mu}=-\frac{g}{2}a_{\mu}^{3}, \end{eqnarray} which is exactly eq.(\ref{Aa}) with $g=q_{e}$. To summarize the discussion above, the methods of creating or deleting magnetic skyrmions in magnetic materials\cite{skyrapp} are equivalent to creating or deleting quasi-magnetic monopoles by applying an $SU(2)$ gauge transformation to the magnetization vector.
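The $SU(N)$ relations above admit a direct numerical test. The script below is ours; $N=3$, the fixed probability vector $a^{n}$, and the random one-parameter unitary family are arbitrary choices, and the inner product is taken as $(A,B)=2\,\textrm{Tr}(AB)$, consistent with the normalization of the $H_{i}$. It checks both the Cartan parametrization $\rho=\frac{1}{N}(I_{N}+\sqrt{2N(N-1)}\,u^{i}n_{i})$ with $u^{i}$ from Appendix A, and the weighted-average identity $\mathcal{A}_{\mu}=\sum_{n}a^{n}\mathcal{A}_{\mu n}$.

```python
import numpy as np

# Our own su(3) check of the Cartan parametrization and of the
# weighted Berry connection, with (A, B) = 2 Tr(AB).
N = 3
H = [np.diag([1, -1, 0]) / 2,                  # H_1
     np.diag([1, 1, -2]) / (2 * np.sqrt(3))]   # H_2
a = np.array([0.5, 0.3, 0.2])                  # probabilities a^n, Σ a^n = 1
rho_d = np.diag(a)

# u^i from Appendix A: u^i = sqrt(N/(N-1)) (Σ_{k≤i} a^k - i a^{i+1}) / sqrt(i(i+1))
u = [np.sqrt(N / (N - 1)) * (a[:i].sum() - i * a[i]) / np.sqrt(i * (i + 1))
     for i in (1, 2)]

# One-parameter family U(t) = exp(tK), K anti-Hermitian and traceless.
rng = np.random.default_rng(0)
M = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
K = (M - M.conj().T) / 2
K -= np.trace(K) / 3 * np.eye(3)
lam, V = np.linalg.eigh(1j * K)                # iK is Hermitian
Ut = lambda t: V @ np.diag(np.exp(-1j * lam * t)) @ V.conj().T

t0, h = 0.7, 1e-6
U0 = Ut(t0)
dU = (Ut(t0 + h) - Ut(t0 - h)) / (2 * h)

# (i) rotated Cartan bases reproduce the density matrix
n = [U0 @ Hi @ U0.conj().T for Hi in H]
rho = U0 @ rho_d @ U0.conj().T
rhs = (np.eye(3) + np.sqrt(2 * N * (N - 1)) * (u[0] * n[0] + u[1] * n[1])) / N

# (ii) weighted average of the diagonal Berry connections
A_n = (-1j * np.diag(U0.conj().T @ dU)).real   # A_{tn} = -i⟨n|U†∂tU|n⟩
A_avg = (-0.5j * 2 * np.trace(U0.conj().T @ dU @ rho_d)).real
```

Both identities reduce to straightforward linear algebra, so the check mainly guards against normalization slips in the $\sqrt{2N(N-1)}$ and $u^{i}$ factors.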
For the $SU(N)$ case, we apply the gauge transformation to $\rho_{d}$; the $(N-1)$ Wu-Yang potentials are then theoretically related to $(N-1)$ topological quasi-particles of a pure or mixed state. These topological quasi-particles are analogous to the magnetic skyrmion of the $SU(2)$ case, so the $SU(N)$ result provides a new clue for creating topological quasi-particles. \section{Conclusion} Since magnetic skyrmions were observed in ferromagnetic materials\cite{skyrob1, skyrob2}, various novel properties of magnetic skyrmions have been detected, mainly owing to their topological characteristics. Usually, the emergent electromagnetic field of magnetic skyrmions is considered a useful tool for understanding these properties\cite{skyremg}. To obtain this emergent electromagnetic field from the magnetization configuration of the skyrmion itself, we use the method of gauge transformation and take the local magnetization in a magnetic material as a $su(2)$ Cartan subalgebra local basis. We then verified that the $SU(2)$ Wu-Yang potential equals the adiabatic Berry connection multiplied by a factor $\pm\frac{2}{q_{e}}$; the shared algebraic structure of the magnetic skyrmion and the Wu-Yang monopole thus suggests that the magnetic skyrmion is a quasi-magnetic monopole. Moreover, by extending the discussion from the $SU(2)$ case to $SU(N)$, we find that the magnetization is naturally replaced by the density operator $\rho$. Applying the $su(N)$ Cartan subalgebra parametrization to this density operator, we obtain the $(N-1)$ Wu-Yang potentials of a general $N$-level quantum system. After imposing the adiabatic approximation, we obtain $N$ Berry connections for the adiabatic unitary evolution; the weighted average of these Berry connections is exactly the expectation value of the flat connection with respect to the final state of the adiabatic unitary evolution.
Finally, we find that the $(N-1)$ Wu-Yang potentials can be used to calculate the weighted average of the Berry connections; this is a new representation, different from ref.\cite{mixgeo}. In analogy with $SU(2)$ magnetic skyrmions, the $SU(N)$ result provides a new clue for creating topological quasi-particles. \section{Appendix A} In our conventions, with the normalization relation eq.(\ref{nor}) and the commutation and anticommutation relations eq.(\ref{com}) for the $su(N)$ Lie algebra, the generators $H_{i}\,(i=1,2,\cdots,N-1)$ of the $su(N)$ Cartan subalgebra are \begin{eqnarray} H_{i}=\frac{1}{\sqrt{2i(i+1)}}\textrm{Diag}(1,1,\cdots,1,-i,0,\cdots,0), \end{eqnarray} where $-i$ is the $(i+1)$-th diagonal element. The $\rho_{d}$ in eq.(\ref{rhod}) is \begin{eqnarray} \rho_{d}\equiv a^{n}\rho_{n}=\textrm{Diag}(a^{1},a^{2},\cdots,a^{N}), \end{eqnarray} with $\rho_{n}=\textrm{Diag}(0,0,\cdots,1,0,\cdots,0)$, where $1$ is the $n$-th diagonal element. A recursive relation between $\rho_{n}\,(n=1,2,\cdots,N)$ and $H_{i}\,(i=1,2,\cdots,N-1)$ is \begin{eqnarray} \rho_{n+1}-\rho_{n}=\sqrt{\frac{2(n-1)}{n}}H_{n-1}-\sqrt{\frac{2(n+1)}{n}}H_{n}. \end{eqnarray} Summing this recursion, $\rho_{n}$ is expressed as \begin{eqnarray} \rho_{n}=&&-\sqrt{\frac{2n}{n-1}}H_{n-1}+(\sqrt{\frac{2(n-2)}{n-1}}-\sqrt{\frac{2(n-1)}{n-2}})H_{n-2}\nonumber\\ &&+\cdots+(1-2)H_{1}+\rho_{1}. \end{eqnarray} We calculate $\rho_{1}$ as \begin{equation} \rho_{1}=\frac{1}{N}I_{N}+\sum_{k=1}^{N-1}\sqrt{\frac{2}{k(k+1)}}H_{k}.
\end{equation} Then \begin{eqnarray} \rho_{d}&=&a^{n}\rho_{n}\nonumber\\ &=&\sum_{i=1}^{N-1}[(\sqrt{\frac{2i}{i+1}}-\sqrt{\frac{2(i+1)}{i}})(a^{N}+a^{N-1}+\cdots\nonumber\\ &&~+a^{i+2})-\sqrt{\frac{2(i+1)}{i}}a^{i+1}]H_{i}+\rho_{1}\nonumber\\ &=&\sum_{i=1}^{N-1}[(-\sqrt{\frac{2}{i(i+1)}})(1-(a^{1}+a^{2}+\cdots+a^{i+1}))\nonumber\\ &&-(i+1)\sqrt{\frac{2}{i(i+1)}}a^{i+1}]H_{i}+\rho_{1}\nonumber\\ &=&\sum_{i=1}^{N-1}\sqrt{\frac{2}{i(i+1)}}[a^{1}+a^{2}+\cdots+a^{i}\nonumber\\ &&-ia^{i+1}-1]H_{i}+\rho_{1}\nonumber\\ &=&\frac{1}{N}I_{N}+\sum_{i=1}^{N-1}\sqrt{\frac{2}{i(i+1)}}[a^{1}+a^{2}+\cdots\nonumber\\ &&+a^{i}-ia^{i+1}]H_{i}. \end{eqnarray} So \begin{equation} \rho=U\rho_{d}U^{\dag}=\frac{1}{N}(I_{N}+\sqrt{2N(N-1)}u^{i}n_{i}), \end{equation} with \begin{equation} u^{i}=\sqrt{\frac{N}{N-1}}\frac{1}{\sqrt{i(i+1)}}(\sum_{k=1}^{i}a^{k}-ia^{i+1}). \end{equation} \section{Appendix B} We begin from the projection formula eq.(\ref{Pro}) \begin{eqnarray}\label{pro} A_{\mu}=(A_{\mu},n_{k})n_{k}+[[A_{\mu},n_{k}],n_{k}]. \end{eqnarray} The derivative of the gauge potential is \begin{eqnarray} \partial_{\mu}A_{\nu}&=&\partial_{\mu}A_{\nu}^{k}n_{k}+A_{\nu}^{k}\partial_{\mu}n_{k}+[\partial_{\mu}[A_{\nu},n_{k}],n_{k}]\nonumber \\ &&+[[A_{\nu},n_{k}],\partial_{\mu}n_{k}].
\end{eqnarray} and the projection of $\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}$ onto $n_{i}$ is \begin{eqnarray}\label{pamupanuni} &&((\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}),n_{i})\nonumber\\ &=&\partial_{\mu}A_{\nu}^{k}(n_{k},n_{i})+A_{\nu}^{k}(\partial_{\mu}n_{k},n_{i}) \nonumber\\ &&+([\partial_{\mu}[A_{\nu},n_{k}],n_{k}],n_{i})+([[A_{\nu},n_{k}],\partial_{\mu}n_{k}],n_{i})\nonumber\\ &&-\partial_{\nu}A_{\mu}^{k}(n_{k},n_{i})-A_{\mu}^{k}(\partial_{\nu}n_{k},n_{i}) \nonumber\\ &&-([\partial_{\nu}[A_{\mu},n_{k}],n_{k}],n_{i})-([[A_{\mu},n_{k}],\partial_{\nu}n_{k}],n_{i})\nonumber\\ &=&([[A_{\nu},n_{k}],\partial_{\mu}n_{k}],n_{i})-([[A_{\mu},n_{k}],\partial_{\nu}n_{k}],n_{i})\nonumber\\ &&+(\partial_{\mu}A_{\nu}^{i}-\partial_{\nu}A_{\mu}^{i}), \end{eqnarray} where we used $(n_{i},[n_{k},L])=0$ for any $L\in su(N)$ at the second equality. Any Lie algebra elements $A,B,C$ satisfy the Jacobi identity \begin{eqnarray} [A,[B,C]]+[B,[C,A]]+[C,[A,B]]=0. \end{eqnarray} Taking $A=[A_{\mu},n_{k}]$, $B=A_{\nu}$, $C=n_{k}$, the Jacobi identity reads \begin{eqnarray} &&[[A_{\mu},n_{k}],[A_{\nu},n_{k}]] \nonumber \\ &=&-[A_{\nu},[n_{k},[A_{\mu},n_{k}]]]-[n_{k},[[A_{\mu},n_{k}],A_{\nu}]]. \end{eqnarray} Then the projection of $[[A_{\mu},n_{k}],[A_{\nu},n_{k}]]$ onto $n_{i}$ is \begin{eqnarray}\label{AmuAnuni} && ([[A_{\mu},n_{k}],[A_{\nu},n_{k}]],n_{i}) \nonumber\\ &=&-([A_{\nu},[n_{k},[A_{\mu},n_{k}]]],n_{i})-([n_{k},[[A_{\mu},n_{k}],A_{\nu}]],n_{i})\nonumber\\ &=&-([A_{\nu},[n_{k},[A_{\mu},n_{k}]]],n_{i})\nonumber\\ &=&([A_{\nu},[[A_{\mu},n_{k}],n_{k}]],n_{i}) \nonumber\\ &=&-([[[A_{\mu},n_{k}],n_{k}],A_{\nu}],n_{i}) \nonumber\\ &=&-([(A_{\mu}-A_{\mu}^{k}n_{k}),A_{\nu}],n_{i})\nonumber\\ &=&-([A_{\mu},A_{\nu}],n_{i}), \end{eqnarray} where we used $(n_{i},[n_{k},L])=0$ for any $L\in su(N)$ at the second equality, and eq.(\ref{pro}) at the fifth equality.
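The key step eq.(\ref{AmuAnuni}) can be spot-checked with random matrices. The snippet below is ours; it works in su(2) with a single rotated Cartan basis $n=V\tau_{3}V^{\dag}$, random Hermitian traceless $A_{\mu},A_{\nu}$, and the inner product $(X,Y)=2\,\textrm{Tr}(XY)$, and verifies $([[A_{\mu},n],[A_{\nu},n]],n)=-([A_{\mu},A_{\nu}],n)$.

```python
import numpy as np

# Our own random-matrix check of eq.(AmuAnuni) in su(2), (X, Y) = 2 Tr(XY).
rng = np.random.default_rng(1)

def rand_su2():
    M = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    X = (M + M.conj().T) / 2                     # Hermitian part
    return X - np.trace(X).real / 2 * np.eye(2)  # make it traceless

def comm(X, Y):
    return X @ Y - Y @ X

def ip(X, Y):
    return 2 * np.trace(X @ Y)

Amu, Anu = rand_su2(), rand_su2()

# random unitary V to rotate τ3 into a generic local basis n = V τ3 V†
Mh = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
_, V = np.linalg.eigh((Mh + Mh.conj().T) / 2)
nmat = V @ (np.diag([1.0, -1.0]) / 2) @ V.conj().T

lhs = ip(comm(comm(Amu, nmat), comm(Anu, nmat)), nmat)
rhs = -ip(comm(Amu, Anu), nmat)
```

Both sides are purely imaginary here, since a commutator of Hermitian matrices is anti-Hermitian.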
According to \begin{equation} D_{\mu}n_{k}=\partial_{\mu}n_{k}-ig[A_{\mu},n_{k}], \end{equation} we obtain \begin{eqnarray} && ([D_{\mu}n_{k},D_{\nu}n_{k}],n_{i}) \\ &=&([(\partial_{\mu}n_{k}-ig[A_{\mu},n_{k}]),(\partial_{\nu}n_{k}-ig[A_{\nu},n_{k}])],n_{i})\nonumber\\ &=&([\partial_{\mu}n_{k},\partial_{\nu}n_{k}],n_{i})+(ig)^{2}([[A_{\mu},n_{k}],[A_{\nu},n_{k}]],n_{i})\nonumber\\ &&-ig([\partial_{\mu}n_{k},[A_{\nu},n_{k}]],n_{i})-ig([[A_{\mu},n_{k}],\partial_{\nu}n_{k}],n_{i}). \nonumber \end{eqnarray} After transposition, we obtain \begin{eqnarray} &&([[A_{\nu},n_{k}],\partial_{\mu}n_{k}],n_{i})-([[A_{\mu},n_{k}],\partial_{\nu}n_{k}],n_{i})\nonumber\\ &=&\frac{1}{ig}([D_{\mu}n_{k},D_{\nu}n_{k}],n_{i})-\frac{1}{ig}([\partial_{\mu}n_{k},\partial_{\nu}n_{k}],n_{i}) \nonumber\\ &&-ig([[A_{\mu},n_{k}],[A_{\nu},n_{k}]],n_{i})\nonumber\\ &=&\frac{1}{ig}([D_{\mu}n_{k},D_{\nu}n_{k}],n_{i})-\frac{1}{ig}([\partial_{\mu}n_{k},\partial_{\nu}n_{k}],n_{i}) \nonumber\\ &&+ig([A_{\mu},A_{\nu}],n_{i}), \end{eqnarray} where we used eq.(\ref{AmuAnuni}) at the second equality. After a further transposition, using eq.(\ref{pamupanuni}) we obtain \begin{eqnarray} f_{\mu\nu}^{i}&\equiv&(F_{\mu\nu},n_{i})-\frac{1}{ig}([D_{\mu}n_{k},D_{\nu}n_{k}],n_{i})\nonumber\\ &=&(\partial_{\mu}A_{\nu}^{i}-\partial_{\nu}A_{\mu}^{i})-\frac{1}{ig}([\partial_{\mu}n_{k},\partial_{\nu}n_{k}],n_{i}), \end{eqnarray} where $F_{\mu\nu}=\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}-ig[A_{\mu},A_{\nu}]$.
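As a closing sanity check on the Appendix A algebra (our own script; the choice $N=4$ is arbitrary), the following verifies the Cartan generators $H_{i}$, the recursion for $\rho_{n+1}-\rho_{n}$, and the expression for $\rho_{1}$.

```python
import numpy as np

# Our own check of Appendix A for N = 4:
#   H_i = Diag(1,...,1,-i,0,...,0)/sqrt(2i(i+1)),
#   ρ_{n+1} - ρ_n = sqrt(2(n-1)/n) H_{n-1} - sqrt(2(n+1)/n) H_n,
#   ρ_1 = I/N + Σ_k sqrt(2/(k(k+1))) H_k.
N = 4

def Hgen(i):
    d = np.zeros(N)
    d[:i] = 1.0
    d[i] = -i                    # -i in the (i+1)-th diagonal slot
    return np.diag(d) / np.sqrt(2 * i * (i + 1))

def rho(n):                      # ρ_n: 1 in the n-th diagonal slot
    d = np.zeros(N)
    d[n - 1] = 1.0
    return np.diag(d)

for n in range(1, N):            # recursion for n = 1, ..., N-1
    lhs = rho(n + 1) - rho(n)
    rhs = (np.sqrt(2 * (n - 1) / n) * (Hgen(n - 1) if n > 1 else 0)
           - np.sqrt(2 * (n + 1) / n) * Hgen(n))
    assert np.allclose(lhs, rhs)

rho1 = np.eye(N) / N + sum(np.sqrt(2 / (k * (k + 1))) * Hgen(k)
                           for k in range(1, N))
```

The $\rho_{1}$ identity is a telescoping sum: the first diagonal entry collects $\frac{1}{N}+\sum_{k}\frac{1}{k(k+1)}=1$ while every other entry cancels to zero.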
GMAT Club Forum: In the equation |6x+9| = |3x+25|, what is the sum of all possible values of x?

Math Expert (Bunuel), 05 May 2016:

In the equation |6x+9| = |3x+25|, what is the sum of all possible values of x?

A. 4/9
B. 4/5
C. 14/9
D. 15/7
E. 17/3

Verbal Forum Moderator, 05 May 2016:

Split at the critical points x = -25/3 and x = -9/6.

For x < -25/3: -(6x+9) = -(3x+25) => 3x = 16 => x = 16/3 (not in this region; it reappears below).

For -25/3 < x < -9/6: 6x + 9 = -(3x+25) => 9x = -34 => x = -34/9.

For x > -9/6: 6x + 9 = 3x + 25 => 3x = 16 => x = 16/3.

Sum of all possible values of x = 16/3 + (-34/9) = 48/9 - 34/9 = 14/9.

Math Expert (chetan2u), 05 May 2016:

Since there is a modulus on both sides, we can square both sides; moving all terms to one side gives a quadratic equation. In a quadratic equation the sum of the roots is -(coeff of x)/(coeff of x^2), so we only need the x^2 and x coefficients:

|6x+9| = |3x+25|
(6x+9)^2 = (3x+25)^2
36x^2 + 108x + 9^2 = 9x^2 + 150x + 25^2
(36-9)x^2 + (108-150)x + 9^2 - 25^2 = 0

Sum of the roots = -b/a = -(-42/27) = 14/9.

Answer: C.

Director, 05 May 2016:

We need to test four cases overall: positive/positive, positive/negative, negative/positive, and negative/negative.
(1) The positive/positive case: (6x+9) = (3x+25)
(2) The positive/negative case: (6x+9) = -(3x+25)
(3) The negative/positive case: -(6x+9) = (3x+25)
(4) The negative/negative case: -(6x+9) = -(3x+25)

Cases (1) and (4) yield the same equation, as do cases (2) and (3). Thus we only need to consider two real cases: one in which neither expression changes sign, and another in which one expression changes sign.

CASE A: Same sign
6x + 9 = 3x + 25 => 3x = 16 => x = 16/3

CASE B: Different sign
-(6x+9) = 3x + 25 => -9x = 34 => x = -34/9

Sum of all possible values of x = 16/3 + (-34/9) = 14/9.
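A quick numeric cross-check of the answers above (our own addition): squaring both sides of |6x+9| = |3x+25| gives 27x^2 - 42x - 544 = 0, and numpy confirms the roots 16/3 and -34/9 and the root sum 14/9.

```python
import numpy as np

# Our own verification of the thread's algebra:
# (6x+9)^2 - (3x+25)^2 = 27x^2 - 42x - 544 = 0
coeffs = [27, -42, -544]
roots = np.roots(coeffs)
print(sorted(roots.real))   # the two solutions, -34/9 and 16/3
```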
The municipality of Village (English: Village Township) is a township located in Columbia County in the U.S. state of Arkansas. In 2010 it had a population of 699 inhabitants and a population density of 2.94 people per km². Geography Village Township is located at the coordinates . According to the United States Census Bureau, the township has a total area of 238.12 km², of which 238.08 km² is land and 0.04 km² (0.02%) is water. Demographics According to the 2010 census, 699 people resided in Village Township. The population density was 2.94 inhabitants/km². Of the 699 inhabitants, Village Township was 79.26% White, 19.03% African American, 0.29% Native American, 0.86% of other races, and 0.57% of two or more races. Hispanics or Latinos of any race were 2% of the population. References External links Townships of Arkansas Localities of Columbia County (Arkansas)
Family: Clubionidae (Foliage Spiders) Biology: The Clubionidae is a relatively species-rich family encompassing 537 species in 14 genera. They range in size from small to medium (2.5-12 mm body size). Some species are found at ground level in open habitats, but many inhabit foliage, branches, and stems of trees. Clubionids spend the daytime in saclike retreats with or without openings. The retreats are placed in folded or rolled-up leaves, behind loose bark, under stones, or under other objects on the ground. Clubionids leave their retreats during the night to hunt prey as active free-living hunters, using no webs or snares. They are swift runners with good footing on slippery surfaces thanks to the adhesive properties of the claw tufts. They take their prey by moving upon it and seizing it with the strong, toothed chelicerae. The female spins a silken sac in which she guards her egg sac. These sacs are larger than retreats, and if placed in foliage they are often constructed by bending leaves together to form a cavity, which is then spun together and sealed with plenty of whitish silk. Characters of family: The clubionids are 8-eyed, ecribellate spiders possessing two tarsal claws. The species superficially resemble members of the Gnaphosidae, but the anterior spinners of the clubionids are conical and the posterior median eyes are circular. The sexes are quite similar, with the males slightly smaller and often with more elongate and slender chelicerae as well as longer legs. The eyes are uniform in size, arranged close to the anterior edge of the carapace in two fairly wide rows, each with four eyes. The posterior row is slightly wider than the anterior. The carapace is ovoid, clearly longer than wide, and with a short, shallow fovea. However, in some species the fovea is absent. The sternum is distinctly margined in some species. The chelicerae are rather long and stout, and the fang furrows are provided with teeth both pro- and retromarginally.
Some males have strongly developed chelicerae with a long fang. Also, in many species the chelicerae are conspicuously dark. The endites are longer than wide and without the depression seen in gnaphosids. The endites are furnished with a brush of setae (scopulae) on the distal end to improve the grip on prey. The labium is longer than wide. The body is carried close to the substratum on moderately long, strong legs with normal prograde orientation. The legs are provided with two tarsal claws with dense claw tufts and scopulae, giving good adhesion to slippery surfaces such as leaves. The tibiae and metatarsi have one, two, or more pairs of spines ventrally. In some species legs I are the longest, in others legs IV. The abdomen is oval, often tapering towards the spinners. Males sometimes have a small scutum. The abdomen is usually uniformly coloured except for a darker cardiac mark. Sometimes there are darker markings such as a median line or chevrons in the same colours as the cardiac mark. The anterior spinners of the clubionids are conical and contiguous, and all three pairs form a compact cluster. The spiracle is situated close to the spinners. Clubionids are entelegyne spiders, having the genital groove with its openings to the internal genitalia covered by a well-sclerotized plate (epigastric scutum), which also bears the paired copulatory openings. The spermathecae are often visible through the integument. The male palp has a retrolateral tibial apophysis. The shape of the apophysis varies greatly between species and is an important morphological character when identifying the species under the stereomicroscope. Taxonomic note: The genus Clubiona is the only genus left in Europe after the transfer of genera or subfamilies to the new families Liocranidae, Zoridae, Anyphaenidae and Miturgidae. This family is represented in Europe by 62 species in one genus (van Helsdingen, 2009; Platnick, 2009). European genus (number of species in parentheses): Clubiona (62).
Genus: Clubiona Latreille, 1804 - Leaf curling sac spiders Biology: Members of this genus spin sac-like retreats, either open or closed at the ends, under loose bark, under objects on the ground, in rolled leaves or in folded grass leaves. Nocturnal spiders that spend the daytime in the retreats, which are also used for mating, oviposition, moulting and wintering. They do not use webs for prey catching. Some species are difficult to identify without microscopic examination of the genitals. Characters of genus: Compact, small to medium-sized spiders with an oval cephalothorax and slightly protruding, broad head. There are two rows of eyes, the eyes of the posterior row widely set apart. It may appear as if there are six eyes in the front row, since the laterals of both rows are fairly close together. Anterior row recurved. Posterior row straight or slightly recurved. Fovea short, dark and shallow but quite indistinct in species with relatively dense silky hairing on the carapace. Labium longer than wide. Maxillae with scopulae. Endites without a transverse or oblique depression. Legs long, with legs IV longest. Tarsi with conspicuous scopulae, two-clawed (claws long). Abdomen elongate with sparse to rather dense coverage of silky hairs depending on species. Hues of yellow, orange and brown are the prevailing body colours; sometimes the cardiac mark is darker. A few species have the cardiac mark followed by chevrons. Anterior spinners conical and situated close together or contiguous. The two sexes do not differ much. Compared to females, males are slightly smaller, often with longer and more tapering chelicerae, and the legs are relatively longer. There are 62 European species (van Helsdingen, 2009; Platnick, 2009): Clubiona aducta, C. alpicola, C. alpicola affinis, C. andreinii, C. atra (nomen dubium), C. bashkirica, C. brevipes, C. caerulescens, C. caliginosa, C. comta, C. congentilis, C. corticalis, C. corticalis concolor, C. corticalis nigra, C. decora, C. deterrima, C.
diniensis, C. diversa, C. facilis, C. frisia, C. frutetorum, C. genevensis, C. germanica, C. glauca (nomen dubium), C. governetonis, C. hilaris, C. juvenis, C. kulczynskii, C. leucaspis, C. lutescens, C. marmorata, C. minor, C. mollis (nomen dubium), C. mykolai, C. neglecta, C. norvegica, C. pallens (nomen dubium), C. pallidula, C. phragmitis, C. prasina (nomen dubium), C. pseudominor, C. pseudoneglecta, C. pulchella (nomen dubium), C. putris (nomen dubium), C. reclusa, C. rethymnonis, C. riparia, C. rosserae, C. rubicunda (nomen dubium), C. rubripes (nomen dubium), C. ruffoi, C. saltuum, C. saxatilis, C. similis, C. stagnatilis, C. subsultans, C. subtilis, C. terrestris, C. trivialis, C. vegeta, C. virgulata, C. viridis (nomen dubium). Clubiona alpicola Kulczynski, 1882 Range: Austria, Bulgaria, Czech Republic, Germany, Hungary, Italy (Mainland), Poland, Romania, Slovakia, Switzerland, Ukraine (van Helsdingen 2009.1). Global range: Europe to Central Asia (Platnick 10.0). Clubiona brevipes Blackwall, 1841 Range: Andorra, Austria, Belarus, Belgium, Bulgaria, Croatia, Czech Republic, Denmark, France (Corsica), France (Mainland), Germany, Great Britain (Mainland), Great Britain (Northern Ireland), Hungary, Ireland, Italy (Mainland), Liechtenstein, Macedonia, Moldova, Netherlands, Norway (Mainland), Poland, Portugal (Mainland), Romania, Slovakia, Slovenia, Spain (Mainland), Sweden, Switzerland, Ukraine, Yugoslavia (van Helsdingen 2009.1). Global range: Europe to Central Asia (Platnick 10.0). Female, spinners. Clubiona comta C. L. Koch, 1839 Description: The abdomen has a reddish-brown cardiac mark followed by chevrons. This is a common species which only resembles C. genevensis, a much rarer species in Denmark. The same abdominal markings are also characteristic of C. corticalis, but the pattern is darker and that species is almost twice as big as C. comta. Size: Female 3.5-6 mm; male 3-5 mm. Maturity: Spring and summer.
Habitat: Trees and bushes, also under bark. Range: Austria, Belarus, Belgium, Bulgaria, Croatia, Czech Republic, Denmark, Estonia, Finland, France (Corsica), France (Mainland), Germany, Great Britain (Mainland), Great Britain (Northern Ireland), Greece (Mainland), Hungary, Ireland, Italy (Mainland), Italy (Sicily), Latvia, Liechtenstein, Lithuania, Macedonia, Moldova, Netherlands, Norway (Mainland), Poland, Portugal (Mainland), Romania, Russia (Central European), Slovakia, Slovenia, Spain (Balearic Islands), Spain (Mainland), Sweden, Switzerland, Ukraine, Yugoslavia (van Helsdingen 2009.1). Global range: Europe, Russia, North Africa (Platnick 10.0). Female, sternum and mouthparts. Female, epigyne. Clubiona corticalis (Walckenaer, 1802) Description: Same abdominal markings as in C. comta, but darker, and the spider is much larger. The distinctive appearance separates this species from all other Clubiona species. Size: Female 7-10 mm; male 6-10 mm. Maturity: Spring and summer. Habitat: Under the bark of trees, sometimes retreats of many specimens close together. Range: Austria, Belgium, Bulgaria, Croatia, Czech Republic, Denmark, France (Corsica), France (Mainland), Germany, Great Britain (Mainland), Greece (Mainland), Hungary, Italy (Mainland), Italy (Sicily), Liechtenstein, Macedonia, Netherlands, Poland, Portugal (Mainland), Romania, Slovakia, Slovenia, Spain (Mainland), Sweden, Switzerland, Ukraine, Yugoslavia (van Helsdingen 2009.1). Global range: Europe to Central Asia (Platnick 10.0). Juvenile. Clubiona decora Blackwall, 1859 Range: Croatia, Great Britain (Channel Islands), Italy (Mainland), Macedonia, Portugal (Azores), Portugal (Madeira), Portugal (Mainland), Spain (Canary Islands), Yugoslavia (van Helsdingen 2009.1). Global range: Madeira, Azores, Balkans (Platnick 10.0). Clubiona diversa O. P.-Cambridge, 1862 Description: Rather plump Clubiona with the sexes almost similar. The abdomen is yellowish-orange, except for a reddish midline and reddish tip.
Size: Female 4-5 mm; male 3-4 mm. Maturity: All year. Habitat: In Denmark this species is fairly common in coastal dunes and heathland. In Europe it also occurs on chalk grassland and in boggy areas. Range: Austria, Belarus, Belgium, Bulgaria, Czech Republic, Denmark, Estonia, Finland, France (Mainland), Germany, Great Britain (Mainland), Great Britain (Northern Ireland), Greece (Dodecanese Islands), Greece (Mainland), Hungary, Ireland, Italy (Mainland), Latvia, Liechtenstein, Lithuania, Netherlands, Norway (Mainland), Poland, Romania, Russia (Central European), Russia (Eastern European), Russia (Northern European), Russia (NW. European), Russia (Southern European), Slovakia, Slovenia?, Spain (Mainland), Sweden, Switzerland, Ukraine (van Helsdingen 2009.1). Global range: Palearctic (Platnick 10.0). Clubiona frisia Wunderlich & Schuett, 1995 Description: The species has recently been described as a new species, split off from C. similis, which does not occur in Denmark. Except for a slightly darker cardiac mark, C. frisia is of the same colour as a peeled potato, and thus the lightest-coloured Clubiona species occurring in Denmark. The bluish tinge seen in one of the pictures below is probably an artefact caused by the flash. Size: Female 5-7 mm; male 5-6 mm. Maturity: All year. Habitat: Marram tussocks along coasts. Range: Belgium, Bulgaria, Denmark, Germany, Great Britain (Mainland), Netherlands, Norway (Mainland), Poland, Russia (Central European), Russia (Northern European), Slovakia, Sweden, Ukraine (van Helsdingen 2009.1). Global range: Europe to Central Asia (Platnick 10.0). Male spinners. Clubiona frutetorum L.
Koch, 1867 Range: Austria, Belarus, Belgium, Bulgaria, Croatia, Czech Republic, Denmark, Estonia, Finland, France (Mainland), Germany, Greece (Mainland), Hungary, Ireland, Italy (Mainland), Latvia, Liechtenstein, Lithuania, Moldova, Netherlands, Norway (Mainland), Poland, Portugal (Mainland), Romania, Russia (Central European), Russia (Eastern European), Russia (Northern European), Russia (NW. European), Russia (Southern European), Slovakia, Spain (Mainland), Sweden, Switzerland, Ukraine, Yugoslavia (van Helsdingen 2009.1). Global range: Europe to Central Asia (Platnick 10.0). Clubiona genevensis L. Koch, 1866 Range: Austria, Belgium, Bulgaria, Croatia, Czech Republic, Denmark, France (Mainland), Germany, Great Britain (Channel Islands), Great Britain (Mainland), Greece (Mainland), Hungary, Italy (Mainland), Latvia?, Liechtenstein, Netherlands, Poland, Portugal (Azores), Portugal (Mainland), Romania, Russia (Eastern European), Slovakia, Slovenia, Spain (Balearic Islands), Spain (Mainland), Sweden, Switzerland, Ukraine, Yugoslavia (van Helsdingen 2009.1). Global range: Palearctic (Platnick 10.0). Clubiona lutescens Westring, 1851 Description: Rather dark, with the abdomen uniformly coloured yellowish to reddish-brown. Male much slimmer than female. Similar general appearance to C. terrestris. Size: Female 6-8 mm; male 4-6 mm. Maturity: Spring and summer. Habitat: On herb vegetation, trees and bushes in damp forests and meadows, also found in bogs. Range: Austria, Belarus, Belgium, Bulgaria, Czech Republic, Denmark, Estonia, Finland, France (Mainland), Germany, Great Britain (Channel Islands), Great Britain (Mainland), Great Britain (Northern Ireland), Greece (Mainland), Hungary, Ireland, Italy (Mainland), Latvia, Liechtenstein, Lithuania, Moldova, Netherlands, Norway (Mainland), Poland, Romania, Russia (Central European), Russia (Eastern European), Russia (Kaliningrad Region), Russia (Northern European), Russia (NW.
European), Russia (Southern European), Slovakia, Slovenia, Spain (Mainland), Sweden, Switzerland, Ukraine, Yugoslavia (van Helsdingen 2009.1). Global range: Holarctic (Platnick 10.0). Clubiona minor Wunderlich, 1987 Clubiona neglecta O. P.-Cambridge, 1862 Range: Andorra, Austria, Belarus, Belgium, Bulgaria, Croatia, Czech Republic, Denmark, Estonia, Finland, France (Corsica), France (Mainland), Germany, Great Britain (Mainland), Great Britain (Northern Ireland), Ireland, Italy (Mainland), Latvia, Liechtenstein, Lithuania, Macedonia, Moldova, Netherlands, Norway (Mainland), Poland, Portugal (Mainland), Romania, Russia (Central European), Russia (Eastern European), Russia (Northern European), Russia (NW. European), Russia (Southern European), Slovakia, Slovenia, Spain (Mainland), Sweden, Switzerland, Ukraine, Yugoslavia (van Helsdingen 2009.1). Global range: Palearctic (Platnick 10.0). Clubiona norvegica Strand, 1900 Range: Austria, Belgium, Czech Republic, Denmark, Finland, Germany, Great Britain (Mainland), Lithuania, Netherlands, Norway (Mainland), Poland, Russia (Central European), Russia (Eastern European), Russia (Kaliningrad Region), Russia (Northern European), Russia (NW. European), Russia (Southern European), Sweden (van Helsdingen 2009.1). Global range: Holarctic (Platnick 10.0). Clubiona pallidula (Clerck, 1757) Description: Largest and darkest species in Denmark. The abdomen is dark purplish-brown without markings. Size: Female 7-11 mm; male 6-8 mm. Maturity: Summer and autumn. Habitat: Bushes and trees in forests, parks and gardens. 
Range: Austria, Belarus, Belgium, Bulgaria, Croatia, Czech Republic, Denmark, Estonia, Finland, France (Mainland), Germany, Great Britain (Mainland), Great Britain (Northern Ireland), Greece (Mainland), Hungary, Ireland, Italy (Mainland), Latvia, Liechtenstein, Lithuania, Macedonia, Moldova, Netherlands, Norway (Mainland), Poland, Romania, Russia (Central European), Russia (Eastern European), Russia (Kaliningrad Region), Russia (Northern European), Russia (NW. European), Russia (Southern European), Slovakia, Slovenia, Spain (Mainland), Sweden, Switzerland, Ukraine, Yugoslavia (van Helsdingen 2009.1). Global range: Holarctic (Platnick 10.0). Clubiona phragmitis C. L. Koch, 1843 Range: Austria, Belarus, Belgium, Bulgaria, Croatia, Czech Republic, Denmark, Estonia, Finland, France (Corsica), France (Mainland), Germany, Great Britain (Mainland), Great Britain (Northern Ireland), Greece (Mainland), Hungary, Ireland, Italy (Mainland), Latvia, Liechtenstein, Lithuania, Moldova, Netherlands, Norway (Mainland), Poland, Portugal (Mainland), Romania, Russia (Central European), Russia (Eastern European), Russia (Northern European), Russia (NW. European), Russia (Southern European), Slovakia, Slovenia, Sweden, Switzerland, Ukraine, Yugoslavia (van Helsdingen 2009.1). Global range: Palearctic (Platnick 10.0). Clubiona pseudoneglecta Wunderlich, 1994 Range: Belgium, Bulgaria, Czech Republic, France (Mainland), Germany, Great Britain (Mainland), Greece (Mainland), Hungary, Italy (Mainland), Moldova, Netherlands, Romania, Russia (Central European), Slovenia, Switzerland, Ukraine (van Helsdingen 2009.1). Global range: Europe to Central Asia (Platnick 10.0). Clubiona reclusa O. 
P.-Cambridge, 1863 Range: Austria, Belarus, Belgium, Croatia, Czech Republic, Denmark, Estonia, Finland, France (Mainland), Germany, Great Britain (Mainland), Great Britain (Northern Ireland), Hungary, Ireland, Italy (Mainland), Latvia, Liechtenstein, Lithuania, Netherlands, Norway (Mainland), Poland, Romania, Russia (Central European), Russia (Eastern European), Russia (Northern European), Russia (NW. European), Russia (Southern European), Slovakia, Slovenia?, Sweden, Switzerland, Ukraine (van Helsdingen 2009.1). Global range: Palearctic (Platnick 10.0). Egg sac. Female with egg sac inside a chamber made of silk lining a curled fern leaf. Clubiona stagnatilis Kulczynski, 1897 Description: Rather pale reddish-brown or yellowish-brown species with a slightly darker cardiac mark. Rather dense pubescence. Head seems relatively narrow compared to most Clubiona species. Size: Female 6-8 mm; male 5-7 mm. Maturity: All year. Habitat: In a variety of mainly damp situations, typically among low vegetation. Range: Austria, Belarus, Belgium, Croatia, Czech Republic, Denmark, Estonia, Finland, France (Mainland), Germany, Great Britain (Mainland), Great Britain (Northern Ireland), Hungary, Ireland, Italy (Mainland), Latvia, Liechtenstein, Lithuania, Netherlands, Norway (Mainland), Poland, Romania, Russia (Central European), Russia (Eastern European), Russia (Northern European), Russia (NW. European), Russia (Southern European), Slovakia, Sweden, Switzerland, Ukraine (van Helsdingen 2009.1). Global range: Palearctic (Platnick 10.0). Female with parasitic fungi. Clubiona subsultans Thorell, 1875 Description: Golden-brown abdomen with well-defined, dark cardiac mark. Size: Female 5-7 mm; male 4-7 mm. Maturity: All year. Habitat: Under loose bark, in moss and under stones in open forests.
Range: Austria, Belarus, Belgium, Bulgaria, Czech Republic, Denmark, Estonia, Finland, France (Mainland), Germany, Great Britain (Mainland), Italy (Mainland), Latvia, Liechtenstein, Lithuania, Netherlands, Norway (Mainland), Poland, Romania, Russia (Central European), Russia (Eastern European), Russia (Northern European), Russia (NW. European), Russia (Southern European), Slovakia, Slovenia, Sweden, Switzerland, Ukraine, Yugoslavia (van Helsdingen 2009.1). Global range: Palearctic (Platnick 10.0). Female abdomen. Clubiona subtilis L. Koch, 1867 Description: A rather small species with the cephalothorax much darker than the abdomen, which is yellowish-brown. Sometimes the cardiac mark is a contrasting reddish-brown; at other times it is much less well-defined. Size: Female 3-4.5 mm; male 2.5-3 mm. Maturity: Males spring and summer, females all year. Habitat: Among moss and low vegetation in damp, mainly coastal habitats. Range: Belarus, Belgium, Bulgaria, Croatia, Czech Republic, Denmark, Estonia, Finland, France (Corsica), France (Mainland), Germany, Great Britain (Mainland), Hungary, Ireland, Italy (Mainland), Netherlands, Poland, Romania, Russia (Central European), Russia (Eastern European), Russia (Kaliningrad Region), Russia (Northern European), Russia (NW. European), Russia (Southern European), Slovakia, Slovenia?, Spain (Mainland), Sweden, Switzerland, Ukraine (van Helsdingen 2009.1). Global range: Palearctic (Platnick 10.0). Clubiona terrestris Westring, 1851 Description: Abdomen reddish-brown with darker cardiac mark. Carapace is yellow-brown with head region slightly darker. Size: Female 6-7 mm; male 5-6 mm. Maturity: Males spring and summer, females all year. Habitat: In a variety of mainly dry situations, including gardens. In leaf litter, under stones and under loose bark.
Range: Andorra, Austria, Belarus, Belgium, Bulgaria, Croatia, Czech Republic, Denmark, Estonia, France (Corsica), France (Mainland), Germany, Great Britain (Channel Islands), Great Britain (Mainland), Great Britain (Northern Ireland), Greece (Mainland), Hungary, Ireland, Italy (Mainland), Liechtenstein, Macedonia, Moldova, Netherlands, Norway (Mainland), Poland, Portugal (Azores), Portugal (Mainland), Romania, Slovakia, Slovenia, Spain (Mainland), Sweden, Switzerland, Ukraine, Yugoslavia (van Helsdingen 2009.1). Global range: Europe (Platnick 10.0). Clubiona trivialis C. L. Koch, 1843 - Northern sac-spider Description: Female carapace yellowish or light orange-brown. Legs light yellow-brown. Chelicerae brown. Dorsum of abdomen uniformly reddish brown, sometimes with a slightly darker or lighter cardiac mark. Abdomen with short, light pubescence. Spinners yellow. Legs and sternum light brownish, in recently moulted specimens pale grey. Male similar to female but on average 0.5 mm shorter. The palpal tibial apophysis is formed like a spade and is blackish. Size: Female 3.6-4.6 mm; male 3.3-3.9 mm. Maturity: Adult males and females have been taken almost year round, with a peak from April to September. Habitat: Foliage of mainly pines and at the base of low deciduous shrubs such as heather. The species is also found at ground level under stones and in leaf litter. Range: Austria, Belarus, Belgium, Bulgaria, Croatia, Czech Republic, Denmark, Estonia, Faroe Islands, Finland, France (Mainland), Germany, Great Britain (Mainland), Great Britain (Northern Ireland), Hungary, Ireland, Italy (Mainland), Latvia, Liechtenstein, Lithuania, Netherlands, Norway (Mainland), Poland, Romania, Russia (Central European), Russia (Eastern European), Russia (Kaliningrad Region), Russia (Northern European), Russia (NW. European), Russia (Southern European), Slovakia, Slovenia, Sweden, Switzerland, Ukraine, Yugoslavia (van Helsdingen 2009.1). Global range: Holarctic (Platnick 10.0).
Clubiona vegeta Simon, 1918 Range: Bulgaria, France (Corsica), France (Mainland), Greece (Mainland), Italy (Mainland), Italy (Sardinia), Portugal (Mainland), Romania, Spain (Canary Islands), Spain (Mainland), Switzerland (van Helsdingen 2009.1). Global range: Europe, Central Asia, North Africa, Canary Is (Platnick 10.0).
Check out my Dixon Golf green golf ball giveaway below! I have to admit, I am the first one to render my opinion on the environmental friendliness of golf. In my mind, it has a huge environmental footprint. When I think of golf, I envision gas-guzzling golf carts, large amounts of water being used to keep the grass looking pristine, and major upkeep of the fairways, which includes fertilizer and weed control. In my opinion, a mightily large shoe print for people to chase a little white ball. Putting my own environmental finger wagging aside, many people love this game. How many? According to the National Golf Foundation, in 2008 there were 28.6 million golfers (ages 6 and above) in the United States. Add four more. My children. So, why not green the game as much as possible? Well, Dixon Golf has stepped up to the tee with a new, greener golf ball that can be recycled at the end of its life (if it is not lost in the woods or in a stream; they can't help you when you shank the ball). I laid my eyes on this white beauty at a golf shop in North Carolina while vacationing. While my sons played the Executive Course, I had a lengthy conversation with a gentleman in the pro shop about the merits of the eco-ball. He loved the balls and highly recommended them. Sure, he could have pushed the Titleist balls, which are known to be a superior playing ball at twice the price of the Earth ball, but instead he pushed the moderately priced Dixon Earth Ball. He had no idea that I write about green products. Then I saw the Dixon Golf recycling container on the counter beckoning me to return my old golf balls to be recycled. I was sold. Why create an eco-ball? According to William Carey, Vice President of Dixon Golf, over 200 million balls are discarded a year. He told me to envision a line of balls stretching from London to Los Angeles and back again to grasp the number of balls put in landfills each year. What makes this ball so environmentally friendly?
Its inner guts are made out of rubber and a proprietary formula which, according to Carey, does not contain any of the heavy toxic metals, such as cobalt, tungsten, or lead, normally found in other golf balls. In addition, the guts of the ball are made of renewable products. It is then wrapped with a polymer casing and packaged in 100% recyclable packaging. In addition, unlike other golf balls, the entire ball can be recycled and reused to make playgrounds and turf fields. The Dixon Golf balls do not biodegrade any faster than a normal golf ball unless they are recycled. So, I thought, why didn't the founders make a biodegradable one? Carey explained they have not found any biodegradable balls that play well. He further asked: why make a ball that is environmentally friendly if no one wants to use it? To entice people to be more environmentally minded, the Company has set up recycling centers in golf shops. Carey figured that about 15% of golfers would go out of their way to play with an environmentally friendly ball. The Company is hoping that by setting up easy-to-use recycling centers they will capture more golfers willing to try the Dixon Golf ball and ultimately recycle the ball. Now it pays to turn in your old balls. What intrigued me is how founders William Carey and Dane Platt got involved in creating this eco-golf ball. "Because of our golf ball background and manufacturing experience, we were one of the few companies who had the ability to do anything to help besides the big ball companies. It was not just an issue of making a ball that could be recycled, but also a program where it could be practical and easy for all golfers to participate." Dixon Golf offers two different balls: the Dixon Earth ball, which has a wonderful spin according to Carey, and the Dixon Earth Eco Distance, which is made for distance. The manufacturer's list price for a package of a dozen balls is as follows: $24.95 for the Eco-Distance balls and $39.95 for the Earth Balls.
You can buy these golf balls at local retailers. See here for the nearest one to you. In addition, they are available online at Amazon or Golfballs.com. If you are interested in the balls for a corporate event, see here. • Leave me a comment here as to why you or someone you care about loves golf. If you don't have a golf-loving comment, just say so. • To double your chances of winning the balls, consider joining my Ning Forum. Come back and leave a comment that you joined my forum. • To triple your chances, consider joining my Feedburner email list or subscribe to my RSS feed. Both subscriptions are listed in the right-hand column. Be sure to come back and leave a comment to tell me which one you joined. • To quadruple your chances of winning, tweet about this contest and come back and leave the URL (weblink) of your Twitter comment. • You must enter by August 21, 2009, 6 PM EST to win. • A winner will be chosen at random on Monday, August 24, 2009. Good luck everyone! The first I have heard of them, a good idea. Would love to try them. My ex-boss is completely addicted. I have no idea why. Thanks for the giveaway! Thanks a lot for this great article. Golf is a green sport and with those recyclable balls it will truly be. This is a great idea, and at the rate I lose golf balls when playing, it would mean I would be sharing the eco love because someone else will find them and play them over and over! I love nature, so the fresh air and beautiful trees and landscape, as well as the challenge of bettering my own score every time keep drawing me back to playing golf again and again. My husband is a golf nut. Took up the sport a few years ago and there's no backing down now. I'm an eco-friendly nut, so I'd love to get these for him.
He'll need to stop losing them in the water — but then again, might as well have non-toxic rubbers sitting in those water hazards. My husband loves golf. I don't totally understand why, but he does. He cut out part of the grass in our backyard to make a putting something or other, lol. I'd love to win these for him! Oh how my green-friendly, golf-obsessed hubby would love these. I think his favorite part about the sport is the gadgets and equipment! Thanks for the chance to win. These are great! My husband loves golf and I am sure he would love these! I am so happy someone finally came up with green golf balls, such a great idea! My new hubbie (married july 8th) loves golf but as an amputee has issues. I would love to surprise him with these. It's great that the sport of golf is bringing awareness to fans of the importance of saving the environment. We are committed to spreading the eco-friendly word out there and have recently brought to market in North America a line of sports balls (soccer, footballs, basketballs, etc.) that are certified Fair Trade, eco-certified, and NFHS certified for quality. Please stop by our blog at fairtradesports.com to learn more about us, and thank you for the post! Great golf ball, would love to see if they could improve my golf game. My Dad is in a golf league and he golfs every Thursday. He adores it, and I'm glad he has something in his life that he loves. Interesting concept, I would like to test these out on the course. WOW! Who wouldn't want a package of green balls??? My ex-husband will love these, he literally eats, sleeps, and lives golf! It is getting harder and harder each year for his daughter and I to think up new "golf" gifts! This will fill the ticket – thanks!
My son-in-law loves to play golf, it is a good way to get good exercise plus he loves playing the game. Now I won't feel guilty about poisoning the fish. I'd love to win, I love golf and anything I can do to make it more earth friendly could only help the game. My dad (at age 88) plays golf three times a week and when he's not playing, he's watching it on tv. I love the game, and love the planet it's played on. These balls seem like the perfect combination! My husband works really hard and loves to play golf when ever he gets the chance! I would love to surprise him with these! "Green Golf" That's awesome!! My best friends husband is on disability, but sure does love his golf! I would be proud to give these to him!! My husband loves golf, he plays 1-2 times a week! He's slowly getting me up to "par" (literally)! I would love to win this for him. Thanks for the chance! I love golf because playing all the different courses presents unique challenges. My fourteen year old brother (who's going through a hard time right now) has recently become involved in golf. He loves it and he's amazing at it! He's been in every sport – football, soccer, baseball, basketball, but he says that golf is the one. He's on the high school varsity team and is only a freshman. I'm really proud of him. He has no money so he can't afford a lot of the equipment like the other kids on the team, but my friends and I got him nice golf clubs last Christmas. I believe golf definately saved him from going down a bad path. My husband is a retiree and loves to golf. It's one of the few sports he can still participate in, as football and baseball get a little rough for the over-60 crowd. I think the Dixon folks have a great idea with these balls. The dh loves golf so much, mostly because of those amazing shots that happen once in a blue moon. Even I can see the draw, even if I do only play once or twice a year.
I've never thought of golf like that before but you're right. It does have a large footprint. My son works at a golf course and he's environmentally savvy so I know he would really appreciate these. i love golfing because it's relaxing and fuN! I'm not a golf fan, but my brother-in-law LOVES it! My sister really thinks it is fun. This is interesting. I'm a great fan of titleist ProV1 golf ball but i might try out recycling ball. I think the green movement will eventually make it into all aspects of our lives. I will give them a try. Great concept of recycling golf balls, I will give them a go as I'm all for this kind of thing. I think its about time that an eco-friendly ball like this becomes available. As you said, there's more than 28 million golfers in the U.S. alone and like it or not golf balls will get lost one way or another. The idea is sound and hopefully in the years to come to come out with a ball that can perform like traditional ones and that can eventually decomposed in lost. With the advancement of science and the right motivation, I sure this could be a reality. Eco friendly golf balls. Now, I've heard everything. Actually, it's one of those head slappers. "How come I never thought of that". Great idea. Congratulations. Dixon has hit a hole in one with these eco friendly golf balls!
When you consider that 200 million golf balls are discarded each year, then eco friendly golf balls are the way to go. It would be nice to see more organic golf courses and more people walking the course too. These golf balls look awesome, I have been searching for a good golf ball for some time now. I am tired of always feeling sold or scammed when I go to purchase golf balls. Thanks for all of the great comments on what we are doing at Dixon Golf. Our mission is to make a positive impact on the community, the environment, and the players of the game. I am happy to report that we are accomplishing our objective.
The Check-In Management screen is divided into five main sections: Queues, My Status, My Appointments, pending requests (or wait queue), and the In-Service list. See below for a breakdown of each section. The Queues section, located on the upper left side of the Check-In Management screen, shows a list of all queues a user is assigned to monitor. The queue name is displayed on the left side of the column, with the number of waiting customers displayed on the right. If a user is joined to more than one queue, an additional queue named All is displayed at the top of the user's queue list. All displays the total number of customers waiting in all of the user's assigned queues. If your system is configured for Work queues, you will see an additional section below Customer Queues. Customer Queues: A Customer Queue manages "walk-in" customers who are present at your location. Work Queues: A Work Queue manages customer service requests for customers who are not present at your location, for example, a call center. My Status, located on the left side of the Check-In Management screen, allows the user to change their status to Available, Away, or at Lunch. A custom Away Message may be entered in the text field below the status selection. Alternatively, the In and Out buttons below the text box provide a quick way to change your status to Out of Office or Available. The Out button will change your status to Away with a message of "Out of Office". The In button will remove the message and set your status to "Available". For directions on changing your status, please see Changing User Status in the LobbyCentral cloud User Manual. My Appointments is located on the lower left side of the Check-In Management screen. This section displays the user's scheduled appointments for the current day only. Pending service requests are located in the upper center portion of the Check-In Management screen; this is where customers who are checked in and waiting for service are displayed.
The name of the queue being viewed is displayed in the blue menu bar, above the list of waiting customers. To view a different queue of customers, click on the name of the queue in the Queues list. Note: To view a list of waiting customers, you must be assigned and joined to a queue. The user is taking a customer that is assigned to them. The next customer in line is waiting for a specific employee. Note: You can only edit a request if you are the original creator or have supervisor or administrator rights. Overdue Tickets: If a customer is in the waiting list beyond a defined time limit, the customer record is highlighted in red, and a second alert is sent to users. The administrator controls the maximum wait time in the Administration section of LobbyCentral. Pending tickets can also be color-coded based on the length of time waited. This overrides the default color of red when a customer's wait time exceeds the set location value. Up to three colors may be specified. Special Service Requests: If a customer requires one or more special services, the user may select the service(s) required during the customer check-in process. An icon is displayed for each special request, below the customer's reason for visit, or service, in pending requests. Using the mouse, hover the cursor over the icon to view the Special Service description. Hidden Service Detail: The administrator can elect to disable the service detail column. If the column is disabled, Hidden by Administrator is displayed in place of the visit reason, or service. When the visit reason is disabled, it cannot be viewed until the user takes the customer; it can then be viewed in the service record. The In-Service list is located in the bottom center of the Check-In Management screen. It displays customers that are being assisted at your location. The In-Service list displays all requests, regardless of the current queue selected in Queues.
A supervisor or administrator can edit In-Service tickets to close them if necessary.
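The overdue color-coding rule described above can be sketched in a few lines. This is an illustrative model only, not LobbyCentral's implementation; the threshold minutes and color names are hypothetical examples.

```python
# Illustrative sketch of the overdue color-coding rule: up to three
# (minutes, color) bands, with the highest band passed by the wait
# time winning. Thresholds and colors here are hypothetical.

def ticket_color(wait_minutes, thresholds):
    """Return the highlight color for a pending ticket, or None.

    `thresholds` is a list of up to three (minutes, color) pairs.
    The color of the largest threshold the wait time has reached
    wins; below every threshold, no highlight is applied.
    """
    color = None
    for minutes, name in sorted(thresholds):
        if wait_minutes >= minutes:
            color = name
    return color

# Example: three color bands at 10, 20 and 30 minutes of waiting.
bands = [(10, "yellow"), (20, "orange"), (30, "red")]
print(ticket_color(5, bands))   # None - still within the normal wait
print(ticket_color(25, bands))  # orange
```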
require "chef/knife/server_bootstrap_base"

class Chef
  class Knife
    # Provisions a Digital Ocean instance and sets up an Open Source Chef
    # Server.
    class ServerBootstrapDigitalocean < Knife

      banner "knife server bootstrap digitalocean (options)"

      include Knife::ServerBootstrapBase

      deps do
        require "knife/server/ssh"
        require "knife/server/credentials"

        begin
          require "chef/knife/digital_ocean_droplet_create"
          require "droplet_kit"
          Chef::Knife::DigitalOceanDropletCreate.load_deps

          current_options = options
          self.options = Chef::Knife::DigitalOceanDropletCreate.options.dup
          options.merge!(current_options)
        rescue LoadError => ex
          ui.error [
            "Knife plugin knife-digital_ocean could not be loaded.",
            "Please add the knife-digital_ocean gem to your Gemfile or",
            "install the gem manually with `gem install knife-digital_ocean'.",
            "(#{ex.message})"
          ].join(" ")
          exit 1
        end

        # Monkey patch to prevent Kernel#exit calls at the end of the upstream
        # Knife plugin. Instead, non-zero exits will be raised and zero exits
        # will be ignored ;)
        #
        # rubocop:disable Style/ClassAndModuleChildren
        class ::Chef::Knife::DigitalOceanDropletCreate
          def exit(code)
            if code != 0
              raise "DigitalOceanDropletCreate exited with code: #{code}"
            end
          end
        end
        # rubocop:enable Style/ClassAndModuleChildren
      end

      option :chef_node_name,
        :short => "-N NAME",
        :long => "--node-name NAME",
        :description => "The Chef node name for your new node",
        :proc => proc { |key| Chef::Config[:knife][:server_name] = key }

      def run
        super
        digital_ocean_bootstrap.run
        fetch_validation_key
        create_root_client
        install_client_key
      end

      def digital_ocean_bootstrap
        setup_environment
        bootstrap = Chef::Knife::DigitalOceanDropletCreate.new
        bootstrap.config[:bootstrap] = true
        Chef::Knife::DigitalOceanDropletCreate.options.keys.each do |attr|
          val = config_val(attr)
          next if val.nil?
          bootstrap.config[attr] = val
        end
        bootstrap.config[:server_name] = config_val(:chef_node_name)
        bootstrap.config[:distro] = bootstrap_distro
        bootstrap
      end

      def digital_ocean_connection
        @digital_ocean_connection ||= DropletKit::Client.new(
          :access_token => config_val(:digital_ocean_access_token)
        )
      end

      def server_ip_address
        server = digital_ocean_connection.droplets.all.find do |s|
          s.status == "active" && s.name == config_val(:chef_node_name)
        end

        server && server.public_ip
      end

      private

      def validate!
        super

        if config[:chef_node_name].nil?
          ui.error "You did not provide a valid --node-name value."
          exit 1
        end

        if config_val(:platform) == "auto"
          ui.error "Auto platform mode cannot be used with " \
            "knife-digital_ocean plugin"
          exit 1
        end
      end

      def setup_environment
        ENV["WEBUI_PASSWORD"] = config_val(:webui_password)
        ENV["AMQP_PASSWORD"] = config_val(:amqp_password)
        ENV["NO_TEST"] = "1" if config[:no_test]
      end

      def ssh_connection
        opts = {
          :host => server_ip_address,
          :user => config_val(:ssh_user),
          :port => "22",
          :keys => [config_val(:identity_file)].compact,
          :password => config_val(:ssh_password)
        }
        if config_val(:host_key_verify) == false
          opts[:user_known_hosts_file] = "/dev/null"
          opts[:paranoid] = false
        end
        ::Knife::Server::SSH.new(opts)
      end
    end
  end
end
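For context, a hypothetical invocation of the subcommand defined above might look like the following. The `--node-name` flag comes from the option declared in this class; the remaining flag names belong to the upstream knife-digital_ocean plugin and are assumptions here, as are all of the values.

```sh
# Hypothetical usage sketch -- flag names for the upstream
# knife-digital_ocean options are assumptions, and all values
# are placeholders.
knife server bootstrap digitalocean \
  --node-name chef-server \
  --digital-ocean-access-token "$DO_TOKEN" \
  --ssh-user root \
  --identity-file ~/.ssh/id_rsa
```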
Q: invalid ALTER TABLE option while modifying a foreign key

I want to modify the update action of a foreign key. For this I executed this command:

alter table testusers.ORDERS
  DROP CONSTRAINT ORDER_FK_2,
  ADD CONSTRAINT ORDER_FK_2 FOREIGN KEY (FK_PRODUCER_ID)
    REFERENCES testuser.PRODUCER (producer_id)
    ON UPDATE CASCADE ON DELETE CASCADE;

If I execute this, I get the following error:

SQL error: ORA-01735: invalid ALTER TABLE option
01735. 00000 - "invalid ALTER TABLE option"

A: ALTER TABLE does not accept a comma-separated list of constraint clauses; see the syntax diagram in the documentation:
http://docs.oracle.com/cd/B28359_01/server.111/b28286/clauses002.htm#CJAEDFIB

Drop and re-add the constraint in two separate statements:

create table orders(order_id number, fk_producer_id number,
  CONSTRAINT order_pk PRIMARY KEY (order_id));

create table producer(producer_id number,
  CONSTRAINT producer_pk PRIMARY KEY (producer_id));

alter table orders ADD CONSTRAINT ORDER_FK_2
  FOREIGN KEY (FK_PRODUCER_ID) REFERENCES PRODUCER (producer_id);

alter table orders DROP CONSTRAINT ORDER_FK_2;

alter table orders ADD CONSTRAINT ORDER_FK_2
  FOREIGN KEY (FK_PRODUCER_ID) REFERENCES PRODUCER (producer_id);

Ahm, yes, and note that Oracle has no ON UPDATE CASCADE syntax at all; the only referential actions it supports are ON DELETE CASCADE and ON DELETE SET NULL. But I am sure you can work it out now. Otherwise drop a little comment or post a new question.
<!DOCTYPE html> <html> <head> <meta http-equiv="Content-Type" content="text/html; charset=utf-8" /> <meta name="viewport" content="width=device-width,initial-scale=1.0,minimum-scale=1.0,maximum-scale=1.0,user-scalable=no" /> <meta content="IE=edge" http-equiv="X-UA-Compatible"> <link rel="shortcut icon" type="image/x-icon" href="../../../../favicon.ico" /> <title>ErrorListener - Android SDK | Android Developers</title> <!-- STYLESHEETS --> <link rel="stylesheet" href="http://fonts.googleapis.com/css?family=Roboto+Condensed"> <link rel="stylesheet" href="http://fonts.googleapis.com/css?family=Roboto:light,regular,medium,thin,italic,mediumitalic,bold" title="roboto"> <link href="../../../../assets/css/default.css?v=7" rel="stylesheet" type="text/css"> <!-- FULLSCREEN STYLESHEET --> <link href="../../../../assets/css/fullscreen.css" rel="stylesheet" class="fullscreen" type="text/css"> <!-- JAVASCRIPT --> <script src="http://www.google.com/jsapi" type="text/javascript"></script> <script src="../../../../assets/js/android_3p-bundle.js" type="text/javascript"></script> <script type="text/javascript"> var toRoot = "../../../../"; var metaTags = []; var devsite = false; </script> <script src="../../../../assets/js/docs.js?v=6" type="text/javascript"></script> <script> (function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){ (i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o), m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m) })(window,document,'script','//www.google-analytics.com/analytics.js','ga'); ga('create', 'UA-5831155-1', 'android.com'); ga('create', 'UA-49880327-2', 'android.com', {'name': 'universal'}); // New tracker); ga('send', 'pageview'); ga('universal.send', 'pageview'); // Send page view for new tracker. 
</script> </head> <body class="gc-documentation develop reference" itemscope itemtype="http://schema.org/Article"> <div id="doc-api-level" class="8" style="display:none"></div> <a name="top"></a> <a name="top"></a> <!-- dialog to prompt lang pref change when loaded from hardcoded URL <div id="langMessage" style="display:none"> <div> <div class="lang en"> <p>You requested a page in English, would you like to proceed with this language setting?</p> </div> <div class="lang es"> <p>You requested a page in Spanish (Español), would you like to proceed with this language setting?</p> </div> <div class="lang ja"> <p>You requested a page in Japanese (日本語), would you like to proceed with this language setting?</p> </div> <div class="lang ko"> <p>You requested a page in Korean (한국어), would you like to proceed with this language setting?</p> </div> <div class="lang ru"> <p>You requested a page in Russian (Русский), would you like to proceed with this language setting?</p> </div> <div class="lang zh-cn"> <p>You requested a page in Simplified Chinese (简体中文), would you like to proceed with this language setting?</p> </div> <div class="lang zh-tw"> <p>You requested a page in Traditional Chinese (繁體中文), would you like to proceed with this language setting?</p> </div> <a href="#" class="button yes" onclick="return false;"> <span class="lang en">Yes</span> <span class="lang es">Sí</span> <span class="lang ja">Yes</span> <span class="lang ko">Yes</span> <span class="lang ru">Yes</span> <span class="lang zh-cn">是的</span> <span class="lang zh-tw">没有</span> </a> <a href="#" class="button" onclick="$('#langMessage').hide();return false;"> <span class="lang en">No</span> <span class="lang es">No</span> <span class="lang ja">No</span> <span class="lang ko">No</span> <span class="lang ru">No</span> <span class="lang zh-cn">没有</span> <span class="lang zh-tw">没有</span> </a> </div> </div> --> <!-- Header --> <div id="header-wrapper"> <div class="dac-header" id="header"> <div 
class="dac-header-inner"> <a class="dac-nav-toggle" data-dac-toggle-nav href="javascript:;" title="Open navigation"> <span class="dac-nav-hamburger"> <span class="dac-nav-hamburger-top"></span> <span class="dac-nav-hamburger-mid"></span> <span class="dac-nav-hamburger-bot"></span> </span> </a> <a class="dac-header-logo" href="../../../../index.html"> <img class="dac-header-logo-image" src="../../../../assets/images/android_logo.png" srcset="../../../../assets/images/android_logo@2x.png 2x" width="32" height="36" alt="Android" /> Developers </a> <ul class="dac-header-crumbs"> <li class="dac-header-crumbs-item"><span class="dac-header-crumbs-link current ">ErrorListener - Android SDK</a></li> </ul> <div class="dac-header-search" id="search-container"> <div class="dac-header-search-inner"> <div class="dac-sprite dac-search dac-header-search-btn" id="search-btn"></div> <form class="dac-header-search-form" onsubmit="return submit_search()"> <input id="search_autocomplete" type="text" value="" autocomplete="off" name="q" onfocus="search_focus_changed(this, true)" onblur="search_focus_changed(this, false)" onkeydown="return search_changed(event, true, '../../../../')" onkeyup="return search_changed(event, false, '../../../../')" class="dac-header-search-input" placeholder="Search" /> <a class="dac-header-search-close hide" id="search-close">close</a> </form> </div><!-- end dac-header-search-inner --> </div><!-- end dac-header-search --> <div class="search_filtered_wrapper"> <div class="suggest-card reference no-display"> <ul class="search_filtered"> </ul> </div> <div class="suggest-card develop no-display"> <ul class="search_filtered"> </ul> <div class="child-card guides no-display"> </div> <div class="child-card training no-display"> </div> <div class="child-card samples no-display"> </div> </div> <div class="suggest-card design no-display"> <ul class="search_filtered"> </ul> </div> <div class="suggest-card distribute no-display"> <ul class="search_filtered"> </ul> 
</div> </div> <a class="dac-header-console-btn" href="https://play.google.com/apps/publish/"> <span class="dac-sprite dac-google-play"></span> <span class="dac-visible-desktop-inline">Developer</span> Console </a> </div><!-- end header-wrap.wrap --> </div><!-- end header --> <div id="searchResults" class="wrap" style="display:none;"> <h2 id="searchTitle">Results</h2> <div id="leftSearchControl" class="search-control">Loading...</div> </div> </div> <!--end header-wrapper --> <!-- Navigation--> <nav class="dac-nav"> <div class="dac-nav-dimmer" data-dac-toggle-nav></div> <ul class="dac-nav-list" data-dac-nav> <li class="dac-nav-item dac-nav-head"> <a class="dac-nav-link dac-nav-logo" data-dac-toggle-nav href="javascript:;" title="Close navigation"> <img class="dac-logo-image" src="../../../../assets/images/android_logo.png" srcset="../../../../assets/images/android_logo@2x.png 2x" width="32" height="36" alt="Android" /> Developers </a> </li> <li class="dac-nav-item home"> <a class="dac-nav-link dac-visible-mobile-block" href="../../../../index.html">Home</a> <ul class="dac-nav-secondary about"> <li class="dac-nav-item about"> <a class="dac-nav-link" href="../../../../about/index.html">Android</a> </li> <li class="dac-nav-item wear"> <a class="dac-nav-link" href="../../../../wear/index.html">Wear</a> </li> <li class="dac-nav-item tv"> <a class="dac-nav-link" href="../../../../tv/index.html">TV</a> </li> <li class="dac-nav-item auto"> <a class="dac-nav-link" href="../../../../auto/index.html">Auto</a> </li> </ul> </li> <li class="dac-nav-item design"> <a class="dac-nav-link" href="../../../../design/index.html" zh-tw-lang="設計" zh-cn-lang="设计" ru-lang="Проектирование" ko-lang="디자인" ja-lang="設計" es-lang="Diseñar">Design</a> </li> <li class="dac-nav-item develop"> <a class="dac-nav-link" href="../../../../develop/index.html" zh-tw-lang="開發" zh-cn-lang="开发" ru-lang="Разработка" ko-lang="개발" ja-lang="開発" es-lang="Desarrollar">Develop</a> <ul class="dac-nav-secondary 
develop"> <li class="dac-nav-item training"> <a class="dac-nav-link" href="../../../../training/index.html" zh-tw-lang="訓練課程" zh-cn-lang="培训" ru-lang="Курсы" ko-lang="교육" ja-lang="トレーニング" es-lang="Capacitación">Training</a> </li> <li class="dac-nav-item guide"> <a class="dac-nav-link" href="../../../../guide/index.html" zh-tw-lang="API 指南" zh-cn-lang="API 指南" ru-lang="Руководства по API" ko-lang="API 가이드" ja-lang="API ガイド" es-lang="Guías de la API">API Guides</a> </li> <li class="dac-nav-item reference"> <a class="dac-nav-link" href="../../../../reference/packages.html" zh-tw-lang="參考資源" zh-cn-lang="参考" ru-lang="Справочник" ko-lang="참조문서" ja-lang="リファレンス" es-lang="Referencia">Reference</a> </li> <li class="dac-nav-item tools"> <a class="dac-nav-link" href="../../../../sdk/index.html" zh-tw-lang="相關工具" zh-cn-lang="工具" ru-lang="Инструменты" ko-lang="도구" ja-lang="ツール" es-lang="Herramientas">Tools</a></li> <li class="dac-nav-item google"> <a class="dac-nav-link" href="../../../../google/index.html">Google Services</a> </li> <li class="dac-nav-item preview"> <a class="dac-nav-link" href="../../../../preview/index.html">Preview</a> </li> </ul> </li> <li class="dac-nav-item distribute"> <a class="dac-nav-link" href="../../../../distribute/googleplay/index.html" zh-tw-lang="發佈" zh-cn-lang="分发" ru-lang="Распространение" ko-lang="배포" ja-lang="配布" es-lang="Distribuir">Distribute</a> <ul class="dac-nav-secondary distribute"> <li class="dac-nav-item googleplay"> <a class="dac-nav-link" href="../../../../distribute/googleplay/index.html">Google Play</a></li> <li class="dac-nav-item essentials"> <a class="dac-nav-link" href="../../../../distribute/essentials/index.html">Essentials</a></li> <li class="dac-nav-item users"> <a class="dac-nav-link" href="../../../../distribute/users/index.html">Get Users</a></li> <li class="dac-nav-item engage"> <a class="dac-nav-link" href="../../../../distribute/engage/index.html">Engage &amp; Retain</a></li> <li class="dac-nav-item monetize"> <a 
class="dac-nav-link" href="../../../../distribute/monetize/index.html">Earn</a> </li> <li class="dac-nav-item analyze"> <a class="dac-nav-link" href="../../../../distribute/analyze/index.html">Analyze</a> </li> <li class="dac-nav-item stories"> <a class="dac-nav-link" href="../../../../distribute/stories/index.html">Stories</a> </li> </ul> </li> </ul> </nav> <!-- end navigation--> <div class="wrap clearfix" id="body-content"><div class="cols"> <div class="col-4 dac-hidden-mobile" id="side-nav" itemscope itemtype="http://schema.org/SiteNavigationElement"> <div id="devdoc-nav"> <div id="api-nav-header"> <div id="api-level-toggle"> <label for="apiLevelCheckbox" class="disabled" title="Select your target API level to dim unavailable APIs">API level: </label> <div class="select-wrapper"> <select id="apiLevelSelector"> <!-- option elements added by buildApiLevelSelector() --> </select> </div> </div><!-- end toggle --> <div id="api-nav-title">Android APIs</div> </div><!-- end nav header --> <script> var SINCE_DATA = [ '1', '2', '3', '4', '5', '6', '7', '8', '9', '10', '11', '12', '13', '14', '15', '16', '17', '18', '19', '20', '21', '22', '23' ]; buildApiLevelSelector(); </script> <div id="swapper"> <div id="nav-panels"> <div id="resize-packages-nav"> <div id="packages-nav" class="scroll-pane"> <ul> <li class="api apilevel-1"> <a href="../../../../reference/android/package-summary.html">android</a></li> <li class="api apilevel-4"> <a href="../../../../reference/android/accessibilityservice/package-summary.html">android.accessibilityservice</a></li> <li class="api apilevel-5"> <a href="../../../../reference/android/accounts/package-summary.html">android.accounts</a></li> <li class="api apilevel-11"> <a href="../../../../reference/android/animation/package-summary.html">android.animation</a></li> <li class="api apilevel-16"> <a href="../../../../reference/android/annotation/package-summary.html">android.annotation</a></li> <li class="api apilevel-1"> <a 
href="../../../../reference/android/app/package-summary.html">android.app</a></li> <li class="api apilevel-8"> <a href="../../../../reference/android/app/admin/package-summary.html">android.app.admin</a></li> <li class="api apilevel-23"> <a href="../../../../reference/android/app/assist/package-summary.html">android.app.assist</a></li> <li class="api apilevel-8"> <a href="../../../../reference/android/app/backup/package-summary.html">android.app.backup</a></li> <li class="api apilevel-21"> <a href="../../../../reference/android/app/job/package-summary.html">android.app.job</a></li> <li class="api apilevel-21"> <a href="../../../../reference/android/app/usage/package-summary.html">android.app.usage</a></li> <li class="api apilevel-3"> <a href="../../../../reference/android/appwidget/package-summary.html">android.appwidget</a></li> <li class="api apilevel-5"> <a href="../../../../reference/android/bluetooth/package-summary.html">android.bluetooth</a></li> <li class="api apilevel-21"> <a href="../../../../reference/android/bluetooth/le/package-summary.html">android.bluetooth.le</a></li> <li class="api apilevel-1"> <a href="../../../../reference/android/content/package-summary.html">android.content</a></li> <li class="api apilevel-1"> <a href="../../../../reference/android/content/pm/package-summary.html">android.content.pm</a></li> <li class="api apilevel-1"> <a href="../../../../reference/android/content/res/package-summary.html">android.content.res</a></li> <li class="api apilevel-1"> <a href="../../../../reference/android/database/package-summary.html">android.database</a></li> <li class="api apilevel-1"> <a href="../../../../reference/android/database/sqlite/package-summary.html">android.database.sqlite</a></li> <li class="api apilevel-"> <a href="../../../../reference/android/databinding/package-summary.html">android.databinding</a></li> <li class="api apilevel-11"> <a href="../../../../reference/android/drm/package-summary.html">android.drm</a></li> <li 
class="api apilevel-4"> <a href="../../../../reference/android/gesture/package-summary.html">android.gesture</a></li> <li class="api apilevel-1"> <a href="../../../../reference/android/graphics/package-summary.html">android.graphics</a></li> <li class="api apilevel-1"> <a href="../../../../reference/android/graphics/drawable/package-summary.html">android.graphics.drawable</a></li> <li class="api apilevel-1"> <a href="../../../../reference/android/graphics/drawable/shapes/package-summary.html">android.graphics.drawable.shapes</a></li> <li class="api apilevel-19"> <a href="../../../../reference/android/graphics/pdf/package-summary.html">android.graphics.pdf</a></li> <li class="api apilevel-1"> <a href="../../../../reference/android/hardware/package-summary.html">android.hardware</a></li> <li class="api apilevel-21"> <a href="../../../../reference/android/hardware/camera2/package-summary.html">android.hardware.camera2</a></li> <li class="api apilevel-21"> <a href="../../../../reference/android/hardware/camera2/params/package-summary.html">android.hardware.camera2.params</a></li> <li class="api apilevel-17"> <a href="../../../../reference/android/hardware/display/package-summary.html">android.hardware.display</a></li> <li class="api apilevel-23"> <a href="../../../../reference/android/hardware/fingerprint/package-summary.html">android.hardware.fingerprint</a></li> <li class="api apilevel-16"> <a href="../../../../reference/android/hardware/input/package-summary.html">android.hardware.input</a></li> <li class="api apilevel-12"> <a href="../../../../reference/android/hardware/usb/package-summary.html">android.hardware.usb</a></li> <li class="api apilevel-3"> <a href="../../../../reference/android/inputmethodservice/package-summary.html">android.inputmethodservice</a></li> <li class="api apilevel-1"> <a href="../../../../reference/android/location/package-summary.html">android.location</a></li> <li class="api apilevel-1"> <a 
href="../../../../reference/android/media/package-summary.html">android.media</a></li> <li class="api apilevel-9"> <a href="../../../../reference/android/media/audiofx/package-summary.html">android.media.audiofx</a></li> <li class="api apilevel-21"> <a href="../../../../reference/android/media/browse/package-summary.html">android.media.browse</a></li> <li class="api apilevel-14"> <a href="../../../../reference/android/media/effect/package-summary.html">android.media.effect</a></li> <li class="api apilevel-23"> <a href="../../../../reference/android/media/midi/package-summary.html">android.media.midi</a></li> <li class="api apilevel-21"> <a href="../../../../reference/android/media/projection/package-summary.html">android.media.projection</a></li> <li class="api apilevel-21"> <a href="../../../../reference/android/media/session/package-summary.html">android.media.session</a></li> <li class="api apilevel-21"> <a href="../../../../reference/android/media/tv/package-summary.html">android.media.tv</a></li> <li class="api apilevel-12"> <a href="../../../../reference/android/mtp/package-summary.html">android.mtp</a></li> <li class="api apilevel-1"> <a href="../../../../reference/android/net/package-summary.html">android.net</a></li> <li class="api apilevel-1"> <a href="../../../../reference/android/net/http/package-summary.html">android.net.http</a></li> <li class="api apilevel-16"> <a href="../../../../reference/android/net/nsd/package-summary.html">android.net.nsd</a></li> <li class="api apilevel-12"> <a href="../../../../reference/android/net/rtp/package-summary.html">android.net.rtp</a></li> <li class="api apilevel-9"> <a href="../../../../reference/android/net/sip/package-summary.html">android.net.sip</a></li> <li class="api apilevel-1"> <a href="../../../../reference/android/net/wifi/package-summary.html">android.net.wifi</a></li> <li class="api apilevel-14"> <a href="../../../../reference/android/net/wifi/p2p/package-summary.html">android.net.wifi.p2p</a></li> <li 
class="api apilevel-16"> <a href="../../../../reference/android/net/wifi/p2p/nsd/package-summary.html">android.net.wifi.p2p.nsd</a></li> <li class="api apilevel-9"> <a href="../../../../reference/android/nfc/package-summary.html">android.nfc</a></li> <li class="api apilevel-19"> <a href="../../../../reference/android/nfc/cardemulation/package-summary.html">android.nfc.cardemulation</a></li> <li class="api apilevel-10"> <a href="../../../../reference/android/nfc/tech/package-summary.html">android.nfc.tech</a></li> <li class="api apilevel-1"> <a href="../../../../reference/android/opengl/package-summary.html">android.opengl</a></li> <li class="api apilevel-1"> <a href="../../../../reference/android/os/package-summary.html">android.os</a></li> <li class="api apilevel-9"> <a href="../../../../reference/android/os/storage/package-summary.html">android.os.storage</a></li> <li class="api apilevel-1"> <a href="../../../../reference/android/preference/package-summary.html">android.preference</a></li> <li class="api apilevel-19"> <a href="../../../../reference/android/print/package-summary.html">android.print</a></li> <li class="api apilevel-19"> <a href="../../../../reference/android/print/pdf/package-summary.html">android.print.pdf</a></li> <li class="api apilevel-19"> <a href="../../../../reference/android/printservice/package-summary.html">android.printservice</a></li> <li class="api apilevel-1"> <a href="../../../../reference/android/provider/package-summary.html">android.provider</a></li> <li class="api apilevel-11"> <a href="../../../../reference/android/renderscript/package-summary.html">android.renderscript</a></li> <li class="api apilevel-1"> <a href="../../../../reference/android/sax/package-summary.html">android.sax</a></li> <li class="api apilevel-14"> <a href="../../../../reference/android/security/package-summary.html">android.security</a></li> <li class="api apilevel-23"> <a 
href="../../../../reference/android/security/keystore/package-summary.html">android.security.keystore</a></li> <li class="api apilevel-22"> <a href="../../../../reference/android/service/carrier/package-summary.html">android.service.carrier</a></li> <li class="api apilevel-23"> <a href="../../../../reference/android/service/chooser/package-summary.html">android.service.chooser</a></li> <li class="api apilevel-17"> <a href="../../../../reference/android/service/dreams/package-summary.html">android.service.dreams</a></li> <li class="api apilevel-21"> <a href="../../../../reference/android/service/media/package-summary.html">android.service.media</a></li> <li class="api apilevel-18"> <a href="../../../../reference/android/service/notification/package-summary.html">android.service.notification</a></li> <li class="api apilevel-21"> <a href="../../../../reference/android/service/restrictions/package-summary.html">android.service.restrictions</a></li> <li class="api apilevel-14"> <a href="../../../../reference/android/service/textservice/package-summary.html">android.service.textservice</a></li> <li class="api apilevel-21"> <a href="../../../../reference/android/service/voice/package-summary.html">android.service.voice</a></li> <li class="api apilevel-7"> <a href="../../../../reference/android/service/wallpaper/package-summary.html">android.service.wallpaper</a></li> <li class="api apilevel-3"> <a href="../../../../reference/android/speech/package-summary.html">android.speech</a></li> <li class="api apilevel-4"> <a href="../../../../reference/android/speech/tts/package-summary.html">android.speech.tts</a></li> <li class="api apilevel-"> <a href="../../../../reference/android/support/annotation/package-summary.html">android.support.annotation</a></li> <li class="api apilevel-"> <a href="../../../../reference/android/support/app/recommendation/package-summary.html">android.support.app.recommendation</a></li> <li class="api apilevel-"> <a 
href="../../../../reference/android/support/customtabs/package-summary.html">android.support.customtabs</a></li> <li class="api apilevel-"> <a href="../../../../reference/android/support/design/package-summary.html">android.support.design</a></li> <li class="api apilevel-"> <a href="../../../../reference/android/support/design/widget/package-summary.html">android.support.design.widget</a></li> <li class="api apilevel-"> <a href="../../../../reference/android/support/multidex/package-summary.html">android.support.multidex</a></li> <li class="api apilevel-"> <a href="../../../../reference/android/support/percent/package-summary.html">android.support.percent</a></li> <li class="api apilevel-"> <a href="../../../../reference/android/support/v13/app/package-summary.html">android.support.v13.app</a></li> <li class="api apilevel-"> <a href="../../../../reference/android/support/v14/preference/package-summary.html">android.support.v14.preference</a></li> <li class="api apilevel-"> <a href="../../../../reference/android/support/v17/leanback/package-summary.html">android.support.v17.leanback</a></li> <li class="api apilevel-"> <a href="../../../../reference/android/support/v17/leanback/app/package-summary.html">android.support.v17.leanback.app</a></li> <li class="api apilevel-"> <a href="../../../../reference/android/support/v17/leanback/database/package-summary.html">android.support.v17.leanback.database</a></li> <li class="api apilevel-"> <a href="../../../../reference/android/support/v17/leanback/graphics/package-summary.html">android.support.v17.leanback.graphics</a></li> <li class="api apilevel-"> <a href="../../../../reference/android/support/v17/leanback/system/package-summary.html">android.support.v17.leanback.system</a></li> <li class="api apilevel-"> <a href="../../../../reference/android/support/v17/leanback/widget/package-summary.html">android.support.v17.leanback.widget</a></li> <li class="api apilevel-"> <a 
javax.xml.transform

public interface ErrorListener

Added in API level 8.

Class Overview

To provide customized error handling, implement this interface and use the setErrorListener method to register an instance of the implementation with the Transformer. The Transformer then reports all errors and warnings through this interface.

If an application does not register its own custom ErrorListener, the default ErrorListener is used, which reports all warnings and errors to System.err and does not throw any Exceptions. Applications are strongly encouraged to register and use ErrorListeners that ensure proper behavior for warnings and errors.

For transformation errors, a Transformer must use this interface instead of throwing an Exception: it is up to the application to decide whether to throw an Exception for different types of errors and warnings. Note, however, that the Transformer is not required to continue with the transformation after a call to fatalError(TransformerException).

Transformers may use this mechanism to report XML parsing errors as well as transformation errors.

Public Methods

public abstract void error(TransformerException exception)

Receive notification of a recoverable error. The transformer must continue to try and provide normal transformation after invoking this method; it should still be possible for the application to process the document through to the end if no other errors are encountered.

Parameters: exception, the error information encapsulated in a TransformerException.
Throws: TransformerException if the application chooses to discontinue the transformation.

public abstract void fatalError(TransformerException exception)

Receive notification of a non-recoverable error. The Transformer must continue to try and provide normal transformation after invoking this method; it should still be possible for the application to process the document through to the end if no other errors are encountered, but there is no guarantee that the output will be usable.

Parameters: exception, the error information encapsulated in a TransformerException.
Throws: TransformerException if the application chooses to discontinue the transformation.

public abstract void warning(TransformerException exception)

Receive notification of a warning. A Transformer can use this method to report conditions that are not errors or fatal errors; the default behavior is to take no action. After invoking this method, the Transformer must continue with the transformation; it should still be possible for the application to process the document through to the end.

Parameters: exception, the warning information encapsulated in a TransformerException.
Throws: TransformerException if the application chooses to discontinue the transformation.
{ "redpajama_set_name": "RedPajamaGithub" }
5,026
\section{Introduction} \label{introduction} Let $X$ be any of the three constant-curvature spaces $\en$, $\sn$ or $\hn$, and let $G$ be a discrete subgroup of isometries of $X$. By a geometric manifold we mean a manifold of the form $M=X/G$. Many examples of geometric manifolds are given through side-pairings of a polyhedron $P\subset X$, this being a convenient and topologically revealing way of describing a manifold. On the other hand, general manifolds are often given using a handle decomposition, which lends itself to manipulation and simplification through handle moves. In this paper we give a method that converts a polyhedron-side-pairing representation of a manifold into a handle decomposition of the manifold. The method associates every cycle of $k$-faces in the polyhedron to an $(n-k)$-handle in the handle decomposition. While the method works in any dimension, it is most interesting to us in dimensions $n=3,4$, where we give two applications. In Section~\ref{conv3} we motivate and illustrate the method by describing it in dimension~3, where it is easily understood. Section~\ref{ident3} provides an application of the method to hyperbolic 3-manifolds. Many examples of finite-volume hyperbolic manifolds $M$ are known to be complements of links in the 3-sphere. However, proving that a particular manifold is a complement of a particular link is often demanding and pushes the limits of intuition. Furthermore, proofs that the author has seen usually require that the link is known before one executes the proof. (The only procedure the author is aware of that does not require this is described in Francis' book \cite{Francis}; however, the procedure is significantly restricted by the type of side-pairings it works for.) We use the method of Section~\ref{conv3} to obtain a handle decomposition of a given hyperbolic manifold.
Using handle moves one can easily show that the manifold is a complement of a link in the 3-sphere, while the handle moves produce the diagram of the link as the computation progresses. This procedure has worked in a straightforward way on all the standard examples (complements of the figure-8 knot, the Whitehead link and the Borromean rings) and some less standard ones, like those in \cite{Wielenberg}. In Section~\ref{convgen} we justify the conversion method for all dimensions. Section~\ref{diagram4} details how to get handle decomposition diagrams in dimension~4. Section~\ref{ident4} gives an application of the conversion method in dimension~4. J.~Ratcliffe, S.~Tschantz and the author have found a dozen examples (see \cite{Ivansic3, Ivansic4}) of noncompact hyperbolic 4-manifolds that are complements of varying numbers of tori and Klein bottles in a topological 4-sphere $N$. We work out the handle decomposition of one $N$ in order to show that it is diffeomorphic to the standard differentiable 4-sphere, which the original proof was not equipped to do. As a matter of fact, the author's motivation for developing the conversion method was the problem of whether the topological 4-spheres found in \cite{Ivansic3, Ivansic4} were diffeomorphic to the standard 4-sphere. The dimension-3 application from Section~\ref{ident3} was found afterwards. \begin{figure} \begin{center} \resizebox{2.5in}{!}{\includegraphics{faceneighborhoods.eps}} \caption{Cube with side-pairing, neighborhoods of faces} \label{faceneighborhoods} \end{center} \end{figure} \vfill \section{Conversion in dimension 3} \label{conv3} Let $P$ be a polyhedron in $X=\hiii$, $\eiii$ or $\siii$ with a side-pairing defined on it that gives a geometric manifold $M$. 
In Fig.~\ref{faceneighborhoods} a cube is drawn as an example: its top and bottom and front and back sides are paired by a translation, while the left and right sides are paired by a translation followed by a $180^\circ$ rotation around the translation vector. Select neighborhoods (for example, $\epsilon$-neighborhoods) around vertices and edges like in Fig.~\ref{faceneighborhoods}. The neighborhoods should match via the side-pairing. Let $V_1,\dots, V_m$ be neighborhoods of a cycle of vertices $\{v_1,\dots,v_m\}$ (a cycle of faces comprises all the faces of $P$ that are identified by the side-pairing). Then $V_1\cup\dots\cup V_m$ assembles into a ball $V$ in $M$. In our example, all the vertices are in the same cycle, and $V_i$ is an eighth of a ball. Eight such pieces, of course, assemble into a ball. Removing neighborhoods of all vertices from $P$ removes parts of the neighborhoods of the edges. Let $E_1,\dots,E_n$ be the truncated neighborhoods of a cycle of edges $e_1,\dots,e_n$. Then $E_1\cup\dots\cup E_n$ assembles into a solid cylinder around a truncated edge, which can also be viewed as a 3-ball $E$ in $M$. \begin{figure} \begin{center} \resizebox{3.5in}{!}{\includegraphics{truncatedcube.eps}} \caption{Handles as assemblies of face neighborhoods} \label{truncatedcube} \end{center} \end{figure} Let $H_1$ be the solid obtained by removing neighborhoods of vertices and truncated neighborhoods of edges from $P$. On the surface of $H_1$ it is the truncated sides that get identified, representing pairwise-identified disjoint disks, so $H_1$ projects to a handlebody $H$ in $M$ under the quotient map $P\to M$. The feet of the 1-handles of $H$ are the truncated sides on $H_1$ (see \cite{Gompf-Stipsicz} for basics of handles and handle decompositions). Now, the ball $E=D^2\times D^1$ from above is attached to $H$ along $\partial D^2\times D^1$, making it a 2-handle of $M$.
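For a polyhedron given by explicit coordinates, the cycles of $k$-faces can be enumerated by computer. The sketch below is only an illustration under stated assumptions: it takes the cube to be $[0,1]^3$ and the left-right pairing to be $(0,y,z)\mapsto(1,1-y,1-z)$, a coordinate convention consistent with, but not fixed by, Fig.~\ref{faceneighborhoods}. Each $k$-face is represented by its barycenter, and the barycenters are grouped into cycles with a union-find:

```python
# Count cycles of k-faces for the cube side-pairing of Fig. 1.
# Assumed conventions (not fixed by the text): cube = [0,1]^3, pairings
#   A: z=0 ~ z=1 and B: y=0 ~ y=1 by translations,
#   C: x=0 ~ x=1 by (0,y,z) -> (1, 1-y, 1-z) (translation + 180-degree turn).
from collections import Counter
from fractions import Fraction
from itertools import product

H = Fraction(1, 2)  # coordinate of a barycenter in a face's "free" directions

def images(p):
    """Images of a face barycenter under the side-pairings that apply to it."""
    x, y, z = p
    out = []
    if z in (0, 1): out.append((x, y, 1 - z))              # A
    if y in (0, 1): out.append((x, 1 - y, z))              # B
    if x in (0, 1): out.append((1 - x, 1 - y, 1 - z))      # C
    return out

centers = list(product((0, H, 1), repeat=3))  # barycenters of all k-faces

parent = {c: c for c in centers}              # union-find over barycenters
def find(c):
    while parent[c] != c:
        c = parent[c]
    return c

for c in centers:
    for d in images(c):
        parent[find(c)] = find(d)

# cycles, grouped by face dimension k = number of 1/2-coordinates
cycles = Counter(sum(1 for t in rep if t == H)
                 for rep in {find(c) for c in centers})
print({k: cycles[k] for k in sorted(cycles)})               # {0: 1, 1: 3, 2: 3, 3: 1}
print(sum((-1) ** (3 - k) * n for k, n in cycles.items()))  # Euler characteristic: 0
```

For this side-pairing the sketch reports one cycle of vertices, three cycles of edges, three cycles of sides and the cycle consisting of $P$ itself, so the resulting handle decomposition has Euler characteristic $1-3+3-1=0$, as it must for a closed 3-manifold.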
In our example, there are three cycles of edges, and the visible portions of attaching circles $n_1$, $n_2$ and $n_3$ of the corresponding 2-handles are shown in Fig.~\ref{truncatedcube}. Of course, the ball $V$ from above may be viewed as $V=D^3\times D^0$, and it attaches to the 0-, 1- and 2-handles along $\partial D^3\times D^0$, making it a 3-handle. If $P$ is a polyhedron in $\hiii$ with some ideal vertices, the procedure works the same way, except that, instead of removing a neighborhood of the vertex, we remove a horoball centered at the ideal vertex. Therefore, to get a handle decomposition diagram (pairs of disks in $\rii$ representing feet of 1-handles, curves outside of the disks representing attaching circles of 2-handles), do the following: \begin{figure} \begin{center} \resizebox{3in}{!}{\includegraphics{convertedcube.eps}} \caption{Handle decomposition for the side-pairing from Fig.~1} \label{convertedcube} \end{center} \end{figure} \begin{itemize} \item[---] Project the surface of the polyhedron $P$ to $\rii\cup\infty$ and draw its decomposition into sides. (If the polyhedron has ideal vertices, one may draw them as empty circles.) \item[---] Draw a disk inside every side that represents one of the feet of a 1-handle (paired sides correspond to feet of 1-handles). One of the disks may be the outside of the diagram since a sphere (the surface of $P$) was projected to $\rii$. \item[---] If two sides are adjacent along an edge $e$, draw an arc crossing $e$ once between the disks corresponding to the sides. The union of arcs crossing edges that are in the same cycle comprises the attaching circle for a 2-handle. \item[---] Attention needs to be paid to how disks (feet of 1-handles) are identified, as the transformation that identifies them depends on the transformation that identifies the corresponding sides of $P$.
(We do not assume that the feet of 1-handles are identified by a reflection in the bisector of the centers, as is common in handle-decomposition diagrams.) \item[---] It is not necessary to keep track of 3-handles, since there is only one way to attach them. Furthermore, if the polyhedron is hyperbolic and has only ideal vertices, there are no 3-handles. However, if some of the vertices are real and some ideal, it may be useful to note where on the diagram the 3-handles attach. If necessary, one might put a full circle in $\rii$ wherever there was a real vertex to indicate that the boundary of a 3-ball is attached to that section of $\rii$, and put an empty circle wherever there was an ideal vertex to signify that this part of $\rii$ becomes a part of the boundary of the manifold. \end{itemize} Fig.~\ref{convertedcube} illustrates the process above for the cube example at the beginning of the section. The letters inside the disks suggest the map that pairs the two disks: for example, $A$ and $A'$ are paired by a reflection in their bisector, while $B$ and $B'$ are paired by a reflection in the bisector, followed by a rotation by $180^\circ$. \section{Identifying hyperbolic 3-manifolds as link complements in the 3-sphere} \label{ident3} In this section, we apply the conversion method of \S~\ref{conv3} to illustrate a procedure that attempts to show that a finite-volume noncompact hyperbolic manifold is the complement of a link in the 3-sphere. If the procedure is carried out successfully, it also produces the link diagram. We will use Wielenberg's example 4 from \cite{Wielenberg}. In that paper, the following algebraic theorem of Riley's \cite{Riley} is used to determine that certain hyperbolic manifolds $M$ are complements of links $S^3-L$: if $\pi_1 M$ is anti-isomorphic to $\pi_1 (S^3-L)$, then $M\cong S^3-L$. In order to verify the anti-isomorphism, however, the link has to be known in advance to get the presentation of $\pi_1 (S^3 - L)$.
Our procedure produces the link diagram as it is carried out. \begin{figure} \begin{center} \includegraphics{wielenbergpolyhedron.eps} \caption{Wielenberg's side-pairing on a hyperbolic polyhedron} \label{wielenbergpolyhedron} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics{torusas2handle.eps} \caption{Attaching a solid torus along boundary is like adding a 2-handle and a 3-handle} \label{torusas2handle} \end{center} \end{figure} The hyperbolic manifold comes from pairing the sides of the polyhedron $P$ pictured in the upper half-space model in Fig.~\ref{wielenbergpolyhedron}. The vertical sides $C$, $C'$ and $D$, $D'$ are paired by translations. Sides $A$ and $A'$ are paired by a reflection in the vertical plane passing through the point where $A$ and $A'$ touch. Side $B$ is sent to $B'$ by a reflection in the vertical plane that slices $B$ and $B'$ in half, followed by a translation that slides $B$ to $B'$. A finite-volume orientable noncompact hyperbolic 3-manifold $M$ is diffeomorphic to the interior of a compact 3-manifold $\mbar$, whose boundary components are all tori. If solid tori are glued onto the boundary components and the result is a 3-sphere, then $M$ is diffeomorphic to $S^3-L$, where $L$ is the collection of the center circles of the solid tori we added. As Fig.~\ref{torusas2handle} suggests, gluing a solid torus to a component $T^2$ of $\bd\mbar$ is the same as attaching a 2-handle and a 3-handle to $\mbar$. The attaching circle of the 2-handle can be any nontrivial simple closed curve on $T^2$. The components of $\bd\mbar$ are assembled from polygons, called vertex links, that are intersections of small enough horospheres centered at ideal vertices with the polyhedron $P$. In our example, the vertex links are $45^\circ$-$45^\circ$-$90^\circ$ triangles and squares. 
The three cycles of ideal vertices, $E_1$, $E_2$ and $E_3$, are indicated in Fig.~\ref{wielenbergpolyhedron}, and in Fig.~\ref{wielenbergvertexlinks} the vertex links from each cycle are drawn together and it is shown how they assemble into parallelograms that give rise to toral boundary components of $\mbar$. \begin{figure} \begin{center} \resizebox{4in}{!}{\includegraphics{wielenbergvertexlinks.eps}} \caption{Finding suitable meridians in boundary components} \label{wielenbergvertexlinks} \end{center} \end{figure} For every boundary component $T^2$ we now choose two curves representing generators of $\pi_1 T^2$. One will serve as the attaching circle of the 2-handle, making it a meridian of the attached solid torus. The other automatically becomes a longitude of the solid torus, thus isotopic to its center circle. In Fig.~\ref{wielenbergvertexlinks}, the attaching circle is the thinner arc $m_i$ and the longitude is the thicker arc $l_i$, $i=1,2,3$. When choosing the attaching circle, choose a curve in $T^2$ as short as possible (in its Euclidean metric). If the length of the attaching circle is more than $2\pi$, the $2\pi$-theorem on hyperbolic Dehn surgery (\cite{Bleiler-Hodgson}) asserts that we will get a hyperbolic manifold. Thus, if $\bd\mbar$ has only one component, we will have failed to produce $S^3$. If $\bd\mbar$ has several components, it is possible that some combination of long and short attaching circles still produces $S^3$, but the chances are probably better the more short attaching circles are chosen. Let $M_W$ now denote the manifold resulting from the side-pairing on the polyhedron above. Since there are three cycles of ideal vertices, $\bd\mbar_W$ will have three components. Step~0 of Fig.~\ref{handlecancellation1} shows the handle decomposition of $\mbar_W$, obtained using the conversion method from~\S\ref{conv3}.
The feet $A$, $A'$, $C$, $C'$ and $D$, $D'$ of 1-handles are all identified by a reflection in the perpendicular bisector of the line connecting their centers. The feet $B$ and $B'$ are identified by a reflection in the line joining their centers, followed by a translation that moves $B$ to $B'$, so that the arrows drawn inside match up. Attaching circles coming from cycles of edges are labeled I, II and III. The attaching circles that we chose in Fig.~\ref{wielenbergvertexlinks} are also drawn in and labeled $m_1$, $m_2$, and $m_3$. Their corresponding longitudes $l_1$, $l_2$ and $l_3$ are drawn as thick curves. \begin{figure} \begin{center} \includegraphics{linkstohandle.eps} \caption{Converting one diagram to another} \label{linkstohandle} \end{center} \end{figure} Fig.~\ref{linkstohandle} shows how to make the easy correspondence between a triangle appearing in Fig.~\ref{wielenbergvertexlinks} and the section of the boundary of the handlebody in step~0 of Fig.~\ref{handlecancellation1} necessary to draw in the longitudes and meridians. The handle decomposition of $\mbar_W$ does not have any 3-handles, since $P$ did not have any real vertices. However, closing off $\bd\mbar_W$ with three solid tori adds three 3-handles. \begin{figure} \begin{center} \resizebox{\textwidth}{!}{\includegraphics{handlecancellation1.eps}} \caption{Handle moves, steps 0--3} \label{handlecancellation1} \end{center} \end{figure} \begin{figure} \begin{center} \resizebox{\textwidth}{!}{\includegraphics{handlecancellation2.eps}} \caption{Handle moves, steps 4--7} \label{handlecancellation2} \end{center} \end{figure} Thus, step~0 of Fig.~\ref{handlecancellation1} shows the handle decomposition of a closed manifold that we hope is $S^3$. In the diagrams in Figures~\ref{handlecancellation1} and \ref{handlecancellation2} we perform handle moves in order to simplify the handle decomposition (see \cite{Gompf-Stipsicz} for basics on handle moves).
Keep in mind that the curves labeled $l_1$, $l_2$ and $l_3$ are not attaching circles, but merely curves drawn on the surface of the handlebody whose position we keep track of. In particular, attaching circles may freely be isotoped over these curves and may cross them. It is easy to see that a crossing by an attaching circle will become an undercrossing if the corresponding 2-handle cancels a 1-handle that carries one of the longitudes. {\it Step 0.} Attaching circles $m_2$ and $m_3$ go across 1-handles $AA'$ and $CC'$ only once, respectively, so their corresponding 2-handles cancel the 1-handles $AA'$ and $CC'$. Step~1 shows the handle decomposition after this cancellation. {\it Step 1.} Attaching circles II and III, which loop from feet $B'$ and $B$, can be slid over the 1-handle $BB'$ and then off feet $B$ and $B'$, respectively. Moreover, the looping part of attaching circle I, at near right, may be isotoped to foot $D'$ and then across and off handle $DD'$, after which I is a simple closed curve bounding a disk (on the outside) that may be pushed away from the diagram. A 2-handle whose attaching circle bounds a disk disjoint from the rest of the diagram simply encloses a 3-handle if the manifold is compact, like in our case. The 2- and 3-handles then cancel. Step~2 shows the handle decomposition after the isotopies and cancellation. {\it Step 2.} We now notice that the vertical portion of attaching circle II can be isotoped outside of the diagram to the right and ``wrapped'' across $\infty$ to its new position shown in Step~3. Attaching circle $m_1$ crosses 1-handle $BB'$ only once, causing cancellation of the 2-handle corresponding to $m_1$ and the 1-handle $BB'$. {\it Step 3.} Before we carry out further cancellation, we simplify the picture a bit. We isotope attaching circle III around $D'$ so it attaches at the bottom.
Notice that this includes a slide of the part of III that runs across the 1-handle $DD'$; thus the place where III attaches to $D$ moves as well. Also, we isotope the loop of $l_2$ at the bottom of the diagram toward the top, and we straighten out the kink in $l_3$. {\it Step 4.} We isotope at top middle to remove the self-crossing of $l_3$. Attaching circles II and III run parallel, that is, they bound an annulus. This means a 3-handle is located between them, which cancels one of the 2-handles, say III. After erasing III we note that II cancels the 1-handle $DD'$. {\it Step 5.} The rest is isotopy of the link components $l_i$, $i=1,2,3$. The loop of $l_3$ at center right is isotoped up and to the left, and so is the section of $l_2$ close to it. The kinks on the left are straightened out, as is the bottom part of $l_3$ and the center of $l_1$. {\it Step 6.} The bottom part of $l_3$ is lifted and flipped to the top, $l_1$ is straightened out and $l_2$ is isotoped a little. {\it Step 7.} After isotoping $l_2$ and rotating the diagram by $180^\circ$, one gets the mirror image of Fig.~7 from Wielenberg's paper~\cite{Wielenberg}. \section{Converting a side-pairing to a handle decomposition in dimension $n$} \label{convgen} In this section we generalize the conversion method from~\S\ref{conv3} to any dimension. Let $X=\en$, $\sn$ or $\hn$, and let $P$ be a finite-volume, finite-sided polyhedron in $X$, as defined, for example, in \cite{Ratcliffe}. Assume, furthermore, that every $k$-face of $P$ is diffeomorphic to either $D^k$ or $D^k-\{\text{finitely many points on $\bd D^k$}\}$. The former condition is in order to disallow polyhedra such as a lens in $S^3$, whose 1-face is a circle; the latter is to allow hyperbolic polyhedra with ideal vertices. Let there be given a side-pairing on the sides of $P$, again in the sense of \cite{Ratcliffe}, so that the space of identified points $M=P/ \sim$ is a complete manifold with a geometry based on $X$.
If $M$ is a noncompact hyperbolic manifold, we will obtain the handle decomposition of $\mbar$, the compact manifold with boundary whose interior is $M$. If $M$ is closed, we set $\mbar=M$. \begin{theorem} \label{gendecomp} Let $M$ be a manifold obtained through a side-pairing defined on a polyhedron $P$ in $X=\en$, $\sn$ or $\hn$. Suppose that every $k$-face of $P$ is diffeomorphic to either $D^k$ or $D^k-\{\text{finitely many points on $\bd D^k$}\}$. Then the decomposition of $P$ into $k$-faces, $0\le k \le n$, induces a handle decomposition of the manifold $\mbar$, where every cycle of $k$-faces corresponds to an $(n-k)$-handle. \end{theorem} {\it Proof.} If $P$ is a hyperbolic polyhedron with ideal vertices, the completeness of $M$ implies the existence of a finite collection of disjoint open horoballs $\{B_s, s\in S\}$ that are centered at ideal vertices of $P$ and are mapped to each other under side-pairings of $P$. Furthermore, each $B_s$ can be chosen so it intersects only sides of $P$ that are incident with the ideal vertex where $B_s$ is centered. Set $U_{-1}=\cup_{s\in S} B_s$ if $P$ is hyperbolic with ideal vertices; otherwise set $U_{-1}=\emptyset$. For every $k=0,\dots,n$, we inductively define real numbers $\epsilon_k$ and ``orthogonal neighborhoods'' $NE^k$ of truncated $k$-faces $TE^k$. Let $E^k_s$, $s\in S_k$, be the collection of $k$-faces of $P$, $k=0,\dots,n-1$. (Note that there is only one $n$-face, namely $P$.) If $P$ has real vertices, there is an $\epsilon$ so that $p: X\to M$ is injective on $\epsilon$-balls around the real vertices. Set $\epsilon_0$ to be the smaller of $\epsilon$ and $\frac{1}{3}\min \{ d(E^0_s, E^0_t) | s\ne t, s,t\in S_0\}$; otherwise (if all vertices are ideal) set $\epsilon_0=1$. Let $TE^0_s=E^0_s$, let $NE^0_s$ denote the closed $\epsilon_0$-neighborhood in $X$ of a 0-face $E^0_s$, and let $U_0=\cup_{s\in S_0} \intr NE^0_s$.
Clearly $NE^0_s$ and $NE^0_t$ are disjoint when $s\ne t$ and $p$ restricted to any of those neighborhoods is a diffeomorphism. Now assume $\epsilon_k$, $U_k$, $NE^k_s$ and $TE^k_s$ have been defined for some $0\le k\le n-2$ and every $k$-face $E^k_s$, $s\in S_k$ of $P$ and that the restriction of $p$ to every $NE^k_s$ is a diffeomorphism. Let $TE^{k+1}_s=E^{k+1}_s - \cup_{-1 \le i \le k} U_i $, $s\in S_{k+1}$. Because $M$ is a manifold, $p$ is injective on the interior of every face. Due to compactness of every $TE^{k+1}_s\subset \intr E^{k+1}_s$, we can find an $\epsilon$ so that $p$ is a diffeomorphism on an $\epsilon$-neighborhood of $TE^{k+1}_s$ for every $s\in S_{k+1}$. Set $\epsilon_{k+1}$ to be the smallest of $\epsilon$, $\frac{1}{2}\epsilon_k$ and $\frac{1}{3}\min \{ d(TE^{k+1}_s, TE^{k+1}_t) | s\ne t, s,t\in S_{k+1}\}$, and let $NE^{k+1}_s$ be the closed $\epsilon_{k+1}$-neighborhood of $TE^{k+1}_s$ in $X$ with $\cup_{i=-1}^k U_i $ excluded. Let $U_{k+1}=\cup_{s\in S_{k+1}} \intr NE^{k+1}_s$. If $k=n-1$, let $TE^n=P-\cup_{i=-1}^{n-1} U_i $ and $NE^n=TE^n$. From the assumption that every $E^k_s$ is diffeomorphic to $D^k$ or $D^k-\{\text{finite set}\}$, it follows that $TE^k_s$ is diffeomorphic to $D^k$ for every $s\in S_k$. The set $NE^k_s$ is then diffeomorphic to $D^k\times D^{n-k}$, where $TE^k_s=D^k\times 0$. Note that $x\times D^{n-k}$ is essentially in the orthogonal direction to $E^k_s$, except close to $\bd TE^k_s$, where some bending has to occur to accommodate $NE^i_t$, where $E^i_t$ is a face of $E^k_s$. Furthermore, for every face $E^i_t$ of $E^k_s$, $i\le k$, note that $NE^i_t$ intersects $NE^k_s=D^k\times D^{n-k}$ only along $\bd D^k\times D^{n-k}$. If $P$ has ideal vertices, $\bd\mbar$ is assembled from links of ideal vertices $\bd B_s \cap P$. Clearly $\bd B_s \cap P\subset \bd D^k\times D^{n-k}$. Thus, any element of $p(\intr D^k \times\bd D^{n-k})$ is in $\intr \mbar$ and therefore must be in some other $p(NE^j_u)$.
Our observation above then shows that $j>k$. Let us treat $p(NE^n)$ as a $0$-handle in $\mbar$. Consider an $NE^k_s$, $s\in S_k$, $k\le n$. By our construction, $p$ restricts to a bijection on $NE^k_s$, hence $p(NE^k_s)$ is an $n$-ball inside of $\mbar$. This gives a decomposition of $\mbar$ into a collection of $n$-balls with disjoint interiors. If $E^k_s$ and $E^k_t$ are in the same cycle, then $p(NE^k_s)=p(NE^k_t)$, hence every $n$-ball corresponds to a cycle of $k$-faces for some $k\le n$. Define $M_k=\cup_{n-k \le i \le n} \cup_{s\in S_i} p(NE^i_s)$, meant to be the union of $i$-handles, $0\le i \le k$. As above, $NE^k_s=D^k\times D^{n-k}$ and $p(\intr D^k\times \bd D^{n-k})$ is contained in $\cup_{k < i} \cup_{t\in S_i} p(NE^i_t)=M_{n-k-1}$. Since $M_{n-k-1}$ is closed, $p(D^k\times \bd D^{n-k})\subset M_{n-k-1}$. Therefore, $p(NE^k_s)$ attaches as an $(n-k)$-handle to $M_{n-k-1}$, giving us a handle decomposition of $\mbar$. $\qed$ In the above handle decomposition of $\mbar$, we note that the attaching sphere $0\times \bd D^{n-k}$ of the $(n-k)$-handle $NE^k_s$ is the boundary of a neighborhood in $X$ of a point $x\in E^k_s$. Naturally, this being a neighborhood in $X$ means that a part of it is outside of $P$ and it intersects several translates $gP$ of $P$. But then $g^{-1}((0\times\bd D^{n-k}) \cap gP)$ is visible in $P$. \section{Drawing handle decomposition diagrams in dimension 4} \label{diagram4} In this section we apply the conversion method described in the previous section to dimension~4. Notation is like in the previous section and, as an illustrative example, we use as $P$ the 4-cube whose sides are paired by translations, yielding the 4-torus $M=T^4$. We want to draw in $\bd D^4=S^3=\riii \cup \infty$ attaching spheres of the $k$-handle $D^{4-k}\times D^k$. The 0-handle is $NE^4$, that is, $P$ without the neighborhoods of all the $k$-faces, $k\le 3$.
Clearly $\bd NE^4 =S^3=\bd P$, realized by a diffeomorphism $h:\bd NE^4\to \bd P$, a restriction of a diffeomorphism $h:NE^4\to P$, which may be imagined as a radial projection from a point in the interior of $P$. Under~$h$, $(\bd NE^{4-k}_s)\cap P$ is sent to $TE^{4-k}_s\times B^{k-1}$, a $TE^{4-k}_s$ ``thickened-up'' in $\bd P$. Note that the thickening of $TE^3_s$ in $\bd P$ is still $TE^3_s$. Now, a piece of the attaching sphere $P\cap (0\times \bd D^k)$ is sent under $h$ to $x \times B^{k-1}$, where $x\in TE^{4-k}_s$. Let the subdivision of $\bd P$ into $k$-faces ($k\le 3$) be drawn in $\riii\cup\infty$. As an example, take the standard ``cube-within-a-cube'' picture of the boundary of the 4-cube. A piece of the attaching sphere for a 1-handle $p(NE^3_s)$ is $P\cap (0\times \bd D^1)$, which is sent to a point in $TE^3_s$, chosen, for example, in its interior. The two points in the attaching sphere of a 1-handle are in paired 3-faces. The attaching region $D^3\times \bd D^1$ is the union of paired truncated 3-faces $TE^3_s$ and $TE^3_t$: schematically, we draw 3-balls inside $TE^3_s$ and $TE^3_t$. The piece inside of $P$ of an attaching circle for a 2-handle $p(NE^2_s)$ is $P\cap (0\times \bd D^2)$, an arc that crosses the 2-face corresponding to the 2-handle and joins the 3-faces whose intersection is the 2-face. Under $h$, this arc maps to the segment $x\times B^1$, $x\in TE^2_s$, visible on the left of Fig.~\ref{dim4conv}. \begin{figure} \resizebox{\textwidth}{!}{\includegraphics{dim4conv.eps}} \caption{Arriving at a handle decomposition for the 4-torus} \label{dim4conv} \end{figure} The attaching sphere for a 3-handle is a 2-sphere, whose intersection $P\cap (0\times \bd D^3)$ with $P$ is a 2-ball. Under $h$, the 2-ball maps to the 2-ball $z\times B^2$, $z\in TE^1_s$, shown in Fig.~\ref{dim4conv}. When $M$ is closed, it is only important how the 1- and 2-handles attach (see \cite{Gompf-Stipsicz}), so the attaching spheres of 3- and 4-handles do not matter.
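As a check on this example, the cycle counts for the 4-cube can be computed the same way as in dimension~3. The sketch below is an illustration under assumed conventions (the 4-cube taken to be $[0,1]^4$ with opposite sides paired by translations); it represents each $k$-face by its barycenter and groups barycenters into cycles:

```python
# Cycles of k-faces of the 4-cube [0,1]^4 with opposite sides paired by
# translations (the 4-torus example): barycenter/union-find computation.
from collections import Counter
from fractions import Fraction
from itertools import product

H = Fraction(1, 2)

def images(p):
    """Translations identifying x_i = 0 with x_i = 1, applied coordinatewise."""
    out = []
    for i, t in enumerate(p):
        if t in (0, 1):
            q = list(p)
            q[i] = 1 - t
            out.append(tuple(q))
    return out

centers = list(product((0, H, 1), repeat=4))  # barycenters of all k-faces

parent = {c: c for c in centers}
def find(c):
    while parent[c] != c:
        c = parent[c]
    return c

for c in centers:
    for d in images(c):
        parent[find(c)] = find(d)

cycles = Counter(sum(1 for t in rep if t == H)
                 for rep in {find(c) for c in centers})
print({k: cycles[k] for k in sorted(cycles)})               # {0: 1, 1: 4, 2: 6, 3: 4, 4: 1}
print(sum((-1) ** (4 - k) * n for k, n in cycles.items()))  # Euler characteristic: 0
```

The counts $1, 4, 6, 4, 1$ give one 0-handle, four 1-handles, six 2-handles, four 3-handles and one 4-handle, the familiar sixteen-handle decomposition of $T^4$.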
However, it is useful to have in one's mind pieces of the attaching spheres of the 3-handles, as they help with the framing of the attaching map of the 2-handles. In order to specify, up to isotopy, the attaching map $\phi:D^2\times \bd D^2\to \bd Y$ of a 2-handle $D^2\times D^2$, it is enough to specify the images of two parallel circles $\phi(x\times \bd D^2)$ and $\phi(y\times \bd D^2)$. As we can see on the left side of Fig.~\ref{dim4conv}, if $E^1_t$ is incident with $E^2_s$, then the intersection of $z\times B^2$, $z\in TE^1_t$, with $TE^2_s\times B^1$ is an arc $y\times B^1$, $y\in TE^2_s$. We can then choose $y\times \bd D^2$ to be the circle parallel to $x\times \bd D^2$, chosen before. (In Fig.~\ref{dim4conv}, $u\times B^1$ and $v\times B^1$ are pieces of another such pair of parallel circles.) Schematically, the portion $z\times B^2$ of the attaching sphere of the 3-handle $p(NE^1_t)$ is represented as a triangle transverse to $E^1_t$, bounded by the portions $x_i \times B^1$ of attaching circles of the three 2-handles that correspond to the three 2-faces, the pairwise intersections of the three 3-faces whose intersection is the 1-face $E^1_t$. One can speak of a ``cycle'' of triangles, the collection of triangles corresponding to 1-faces that are all in one cycle. Clearly, a cycle of triangles represents all the pieces of the attaching sphere of the 3-handle corresponding to the cycle of 1-faces. Thus, pieces of a parallel circle may be chosen to lie in one of the triangles. Since a 2-face is incident with several 1-faces, pieces of the attaching circle of a 2-handle will be in the boundary of several triangles. We choose one to contain a piece of the parallel circle; once this is done, the remaining pieces of the parallel circle must be chosen in triangles that are in the same cycle as the one we have chosen.
\begin{itemize} \item[---] Draw in $\riii$ the decomposition of $\bd P=\riii\cup \infty$ into $k$-faces. Inside every 3-face, draw a 3-ball. Feet of a 1-handle are the two balls inside paired 3-faces. \item[---] We do not assume that the feet of 1-handles are identified by a reflection in the bisector of the centers, as is common. Rather, the identifying map is determined by the map that pairs the corresponding 3-faces. \item[---] If two 3-faces are adjacent along a 2-face $E^2$, draw an arc between the balls inside the 3-faces that crosses $E^2$ exactly once. The arcs that cross 2-faces that are in the same cycle comprise the attaching circle for a 2-handle. \item[---] Whenever three 3-faces intersect in a 1-face we see a ``triangle'' whose ``vertices'' and edges are the already drawn 3-balls and arcs, respectively. We fill in this triangle (usually only mentally) with a surface that is transverse to the 1-face. Parallel attaching circles can be chosen to lie in these surfaces. \item[---] Once we choose a triangle to contain a piece of the parallel attaching circle, the remaining pieces must be chosen in triangles that are in the same cycle of triangles. \end{itemize} The procedure above yields the familiar handle-decomposition diagram for $T^4$ from the right side of Fig.~\ref{dim4conv} (see also \cite{Gompf-Stipsicz}, Fig.~4.42). Parallel attaching circles for three 2-handles are the arcs marked I, II and III. \section{A hyperbolic manifold as a complement of 5 tori in the standard differentiable $S^4$} \label{ident4} In \cite{Ivansic3}, the author showed that the double cover of $M_{1011}$, example no. 1011 from Ratcliffe and Tschantz's \cite{Ratcliffe-Tschantz} collection of noncompact finite-volume hyperbolic 4-manifolds, is a complement of 5 tori in the topological 4-sphere. The proof used Freedman's theory, which only provides a homeomorphism to the 4-sphere.
In this section we prove that the 4-sphere $N$ from \cite{Ivansic3} is, in fact, diffeomorphic to the standard differentiable 4-sphere. We use the method of this paper to obtain a handle decomposition of the manifold $N$ and then handle moves to simplify the decomposition down to the decomposition of the standard differentiable 4-sphere. The 24-sided polyhedron $Q$ that gives rise to Ratcliffe and Tschantz's manifolds is described in their paper \cite{Ratcliffe-Tschantz} and in \cite{Ivansic3} and \cite{Ivansic4}, where more details on its combinatorial structure can be found. Here we just recall that its sides (in the ball model of $\hiv$) are spheres of radius 1 centered at points two of whose coordinates are $\pm 1$ and the other two are zero. We label the spheres and the sides by $S_{****}$, like in \cite{Ivansic4}. For example, $S_{0+0-}$ is the sphere centered at $(0,1,0,-1)$. Each octahedral 3-face of $Q$ has eight 2-faces, so drawing a decomposition of $\bd Q$ would be quite involved. We will therefore jump to the handle-decomposition picture right away, by finding attaching spheres of the 1-, 2- and 3-handles on $\bd Q$, and projecting them to $S^3$ radially from the origin of $B^4$. $S^3$ is then sent to $\riii\cup \infty$ via the M\"obius transformation $g:(\riv\cup\infty)\to(\riv\cup\infty)$ that provides the standard isometry between the ball and upper-half-space models of hyperbolic space. This map is the composite of the reflection in the sphere with center $(0,0,0,1)$ of radius $\sqrt 2$, followed by the reflection in the hyperplane $x_4=0$. Its restriction to $S^3$ is given by $x\mapsto e_4 + \frac{2}{|x-e_4|^2}(x-e_4)$. (This is actually the formula for just the first reflection, since the reflection in $x_4=0$ has no effect on $\riii\cup\infty$, the image of $S^3$.) Note that $g$ leaves $S^2\subset \riii\times 0$ fixed. As attaching spheres of 1-handles we choose points on the sides of $Q$ closest to the origin.
``Shortest'' arcs connecting those points along the sides are chosen to be the pieces of attaching circles of 2-handles. Pieces of attaching spheres of 3-handles are the piecewise-spherical ``triangles'' bounded by the arcs, stretched across the sides. More precisely, let $r$ be the position vector of a sphere $S$ that determines a side of $Q$. The intersection of $S$ and the line spanned by $r$ is a point $c$ in $S$. Let $c'$ be the intersection of $S'$ and the line spanned by $r'$, where $S'$ is the side paired with $S$ under the side-pairing on $Q$. Then we choose $c$ and $c'$ to be the points of the attaching sphere of the 1-handle corresponding to the paired sides $S$ and $S'$. If $S_1$ and $S_2$ are intersecting sides, let $e$ be the arc that is the intersection of $\bd Q$ and the (linear) angle spanned by the position vectors $r_1$ and $r_2$. This arc is a portion of the attaching circle of the 2-handle corresponding to the 2-face $S_1\cap S_2$. Finally, if a side $S_3$ intersects $S_1$ and $S_2$, consider the intersection $f$ of $\bd Q$ with the ``positive cone'' spanned by $r_1$, $r_2$ and $r_3$. This ``triangle'' is a portion of the attaching sphere of the 3-handle corresponding to the 1-face $S_1\cap S_2 \cap S_3$. It is not clear that the overall arrangement of the spheres is such that $c$, $c'$, $e$ and $f$ are actually on $\bd Q$. (For example, $c$ may be inside some other sphere $S_0$, which would put it outside of $Q$.) Next, we justify that these choices are indeed on $\bd Q$. Consider the sides $S_{++00}$, $S_{+0+0}$ and $S_{0++0}$. The three sides intersect pairwise and the intersection of all three of them is a 1-face $E^1_s$. Let $L$ be the (linear) hyperplane spanned by $r_{++00}$, $r_{+0+0}$ and $r_{0++0}$; this is the hyperplane $x_4=0$. It is clear that the only spheres that intersect $L$ in more than one point are those with a 0 in the fourth position of their labels.
All other spheres intersect $L$ in exactly one point, which is one of $\pm e_i$, $i=1,\dots,4$, that is, an ideal vertex of $Q$. Now, the intersection of the 12 sides $S_{***0}$ with $L$ is a 3-dimensional version of $Q$, which was described and pictured in \cite{Ratcliffe-Tschantz}, Figure 5. From this picture we see that attaching spheres chosen in the way described above are on $\bd Q$. \begin{figure} \resizebox{\textwidth}{!}{\includegraphics{polytohandle.eps}} \caption{Finding attaching spheres for $M_{1011}$} \label{polytohandle} \end{figure} A general 1-face is an intersection of 3 sides if their labels pairwise share exactly one position with the same symbol. This position could be different for each pair of sides, as in the example above, or it could be the same for all three pairs, as for the sides $S_{++00}$, $S_{+0+0}$ and $S_{+00+}$. It is clear that any 1-face can be moved by a linear isometry of $Q$ to one of these two prototypical 1-faces (permute the coordinates and reflect in coordinate hyperplanes). Furthermore, there is a linear isometry of $Q$ that sends $S_{++00}$, $S_{+0+0}$ and $S_{0++0}$ to $S_{++00}$, $S_{+00+}$ and $S_{+0+0}$, respectively; its matrix is \begin{displaymath} \frac{1}{2} \left[ \begin{array}{rrrr} 1 & 1 & 1 & -1\\ 1 & 1 & -1 & 1\\ -1 & 1 & 1 &1\\ 1 & -1 & 1 & 1 \end{array} \right]. \end{displaymath} This shows that the situation illustrated by the sides $S_{++00}$, $S_{+0+0}$ and $S_{0++0}$ is generic, so all choices for attaching spheres made in the way described above are valid. We now have to see where the attaching spheres are sent under the composite $gp$, where $p:\bd Q\to S^3$ is the radial projection. The line spanned by each position vector $r_{****}$ intersects $S^3$ at $\frac{1}{\sqrt 2}r_{****}$. The points of the form $\frac{1}{\sqrt 2}r_{***0}$ are on $S^2\subset S^3$, which is fixed by $g$.
Furthermore, an easy computation shows that $g(\frac{1}{\sqrt 2}r_{***+})=(\sqrt 2+1)(*,*,*,0)$ and $g(\frac{1}{\sqrt 2}r_{***-})=(\sqrt 2-1)(*,*,*,0)$. As above, let $c_i$ be the point of intersection of a sphere (side) $S_i$ with the line spanned by the position vector $r_i$ of the center of $S_i$, $i=1,2,3$. If sides $S_1$ and $S_2$ intersect, consider the intersection $C$ of $S^3$ and the linear plane spanned by $r_1$ and $r_2$ (part of $C$ is the radial projection of a piece of the attaching circle). This is a circle, so $g(C)$ is a circle, since $g$ is a M\"obius transformation. Since $C$ also contains $-r_1$ and $-r_2$, the circle $g(C)$ will contain the four points $gp(\pm c_1)$ and $gp(\pm c_2)$. Once we have the four points drawn, the circle $g(C)\subset\riii$ will be easy to identify. The arc of the circle between $gp(c_1)$ and $gp(c_2)$ is a part of the attaching circle for the 2-handle corresponding to the 2-face $S_1\cap S_2$. Now, if sides $S_1$, $S_2$ and $S_3$ all intersect in a 1-face $E^1$, part of the attaching sphere corresponding to $E^1$ is the ``triangle'' $f$ that is the intersection of the positive cone generated by $r_1$, $r_2$ and $r_3$ with $\bd Q$. Radial projection to $S^3$ followed by $g$ maps the triangle to a spherical triangle bounded by arcs of the circles $g(C)$ that were just described. The top left of Fig.~\ref{polytohandle} shows the points $d_{****}=gp(c_{****})$ and the circles $g(C)$ for the sides $S_{**00}$, $S_{*0*0}$ and $S_{0**0}$. The bottom left of Fig.~\ref{polytohandle} does the same for the sides $S_{**00}$, $S_{0*0*}$ and $S_{*00*}$. The complete picture for $Q$ is obtained by rotating the bottom left by $\pi/2$ around the $x_1$-axis, then around the $x_3$-axis, and taking the union of the resulting three figures with the top left of Fig.~\ref{polytohandle}.
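For concreteness, here is the computation behind the formula $g(\frac{1}{\sqrt 2}r_{***+})=(\sqrt 2+1)(*,*,*,0)$, carried out for the representative $r_{+00+}=(1,0,0,1)$; the other sign patterns are analogous.

```latex
\begin{align*}
x &= \tfrac{1}{\sqrt 2}\,r_{+00+} = \tfrac{1}{\sqrt 2}(1,0,0,1), \qquad
x-e_4 = \bigl(\tfrac{1}{\sqrt 2},0,0,\tfrac{1}{\sqrt 2}-1\bigr),\\
|x-e_4|^2 &= \tfrac12 + \bigl(\tfrac{1}{\sqrt 2}-1\bigr)^2 = 2-\sqrt 2,
\qquad \frac{2}{|x-e_4|^2} = \frac{2}{2-\sqrt 2} = 2+\sqrt 2,\\
g(x) &= e_4 + (2+\sqrt 2)\bigl(\tfrac{1}{\sqrt 2},0,0,\tfrac{1}{\sqrt 2}-1\bigr)
      = \bigl(\sqrt 2+1,\,0,\,0,\,0\bigr) = (\sqrt 2+1)(+,0,0,0).
\end{align*}
```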
To make the pictures easier to draw, we isotope the positions of the $gp(c)$'s a little and replace the curved arcs $g(C)$ mostly by straight lines, as seen on the right side of Fig.~\ref{polytohandle}. We note that pieces of the attaching circles all lie in one of the coordinate planes or on~$S^2$. (In the straight-edge version of the diagram we imagine this $S^2$ as the surface consisting of 6 rectangles and 8 triangles, spanned by the points $g(c)$.) We assume, as discussed in \S\ref{diagram4}, that pieces of parallel circles always lie in the triangles in the diagram. Note that each triangle can be taken to lie in one of the coordinate planes or on $S^2$. \begin{figure} \resizebox{\textwidth}{!}{\includegraphics{handle1011.eps}} \caption{Handle decomposition of $\tilde M_{1011}$} \label{handle1011} \end{figure} The handle decomposition of $\mbar_{1011}$ is the right half of Fig.~\ref{handle1011}, where the outside- and inside-most attaching circles are not shown to maintain clarity of the picture. If every triangle in the picture is filled in, we note that it will consist of eight ``octahedra'', each of which corresponds to the link of one of the ideal vertices $v_{*000}$, $v_{0*00}$, $v_{00*0}$ and $v_{000*}$, which is a cube (three pairs of opposing sides/feet of 1-handles). To better see the octahedra, we have separated them in Fig.~\ref{octahedra}. Note that six are shown; the two missing ones, corresponding to $v_{000+}$ and $v_{000-}$, are the same ones that are missing from Fig.~\ref{handle1011}. As a matter of fact, the space between the described octahedra forms sixteen more octahedra, corresponding to the ideal vertices of the form $v_{****}$ (see one in Fig.~\ref{meridian5}). A side-pairing $f:S\to S'$ of any of Ratcliffe and Tschantz's examples is always of the form $ru$, where $u$ is a composite of reflections in the coordinate planes and $r$ is the reflection in $S'$.
The restriction of $f$ to $\bd Q$ is all that matters to us, so $u$ explains how feet of 1-handles are identified. Note that conjugating the reflections in the hyperplanes $x_1=0$, $x_2=0$ and $x_3=0$ by $g$ gives the same reflections, while conjugating the reflection in $x_4=0$ gives the reflection in the unit sphere. Using the convention from \cite{Ivansic4} we name the side-pairings for $M_{1011}$ by the letters $a,b,\dots,l$ as follows. (The composite of reflections that pair the sides is under the arrow.) \begin{displaymath} \begin{array}{llll} S_{++00} \arrtop{a}{-+++} S_{-+00}\hskip10pt & S_{+-00} \arrtop{b}{-+++} S_{--00}\hskip10pt & S_{+0+0} \arrtop{c}{++-+} S_{+0-0}\hskip10pt & S_{-0+0} \arrtop{d}{++-+} S_{-0-0} \\ S_{0++0} \arrtop{e}{----} S_{0--0}\hskip10pt & S_{0+-0} \arrtop{f}{----} S_{0-+0}\hskip10pt & S_{+00+} \arrtop{g}{----} S_{-00-}\hskip10pt & S_{+00-} \arrtop{h}{----} S_{-00+} \\ S_{0+0+} \arrtop{i}{+-++} S_{0-0+}\hskip10pt & S_{0+0-} \arrtop{j}{+-++} S_{0-0-} \hskip10pt & S_{00++} \arrtop{k}{+++-} S_{00+-}\hskip10pt & S_{00-+} \arrtop{l}{+++-} S_{00--}. \end{array} \end{displaymath} Furthermore, for simplicity of notation, if a letter $s$ pairs two sides, we denote the originating side by the corresponding capital letter $S$ and the side $s(S)$ by $S'$. Thus $d$, with $u$ the reflection in the plane $x_3=0$, sends the side $D=S_{-0+0}$ to the side $D'=S_{-0-0}$. Let $G_{1011}\subset \Isom(\hiv)$ be the fundamental group of $M_{1011}$ and let $H_{1011}$ be the subgroup of orientation-preserving isometries in $G_{1011}$. Of course, $G_{1011}$ is generated by $a,b,\dots,l$. We are really interested in the orientable double cover $\tilde M_{1011}$ of $M_{1011}$, whose fundamental polyhedron consists of two copies of $Q$ with suitably paired sides.
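The reflection patterns in the side-pairing list above can be verified mechanically: since the reflection $r$ in $S'$ preserves $S'$ setwise, the composite $u$ of coordinate reflections under each arrow must carry $S$ onto $S'$, i.e., send the center of $S$ to the center of $S'$. The following script is our own sanity check, not taken from \cite{Ivansic4}:

```python
# Check (ours): for each side-pairing s: S -> S', the composite u of coordinate
# reflections listed under the arrow must send the center of S to the center
# of S'.  Centers are read off the labels: '+' -> +1, '-' -> -1, '0' -> 0.
def center(label):
    return [{'+': 1, '-': -1, '0': 0}[ch] for ch in label]

def reflect(u, x):
    # u is a sign pattern such as '-+++'; a '-' reflects that coordinate.
    return [(-xi if ch == '-' else xi) for ch, xi in zip(u, x)]

pairings = [  # (originating side, pattern of u, target side), rows a..l
    ('++00', '-+++', '-+00'), ('+-00', '-+++', '--00'),
    ('+0+0', '++-+', '+0-0'), ('-0+0', '++-+', '-0-0'),
    ('0++0', '----', '0--0'), ('0+-0', '----', '0-+0'),
    ('+00+', '----', '-00-'), ('+00-', '----', '-00+'),
    ('0+0+', '+-++', '0-0+'), ('0+0-', '+-++', '0-0-'),
    ('00++', '+++-', '00+-'), ('00-+', '+++-', '00--'),
]

for src, u, dst in pairings:
    assert reflect(u, center(src)) == center(dst), (src, u, dst)
print("all 12 side-pairings consistent")
```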
It is easy to see (and is explained in \S3 of \cite{Ivansic3}) that the fundamental polyhedron for $H_{1011}$ is $Q\cup hQ$, where $h$ is one of the above listed generators of $G_{1011}$ which, being orientation-reversing, is also a coset representative for the nontrivial right coset of $H_{1011}$ in $G_{1011}$. The discussion in \cite{Ivansic3} also shows that the sides of $Q\cup hQ$ are paired according to the following rule. Let $S$, $S'$ be sides of $Q$ paired by the transformation $s\in G_{1011}$. If $s$ is orientation-reversing, then side $S$ is paired to $hS'$ via $hs$ and side $hS$ is paired to $S'$ via $sh^{-1}$. If $s$ is orientation-preserving, then $S$ is paired to $S'$ via $s$ and $hS$ is paired to $hS'$ via $hsh^{-1}$. \begin{figure} \begin{center} \resizebox{1.5in}{!}{\includegraphics{octahedra.eps}} \caption{Octahedra appearing in Fig.~\ref{handle1011}, separated} \label{octahedra} \end{center} \end{figure} We may view the handle decomposition of $\tilde M_{1011}$ as having two 0-handles (corresponding to $Q$ and $hQ$) and a 1-handle joining them (coming from the paired sides $H'$ and $hH$). Since $Q$ and $hQ$ lie on opposite sides of the hyperplane $H'$, it is clear that the handle decomposition of $\tilde M_{1011}$ can be drawn by drawing two handle-decompositions of $\mbar_{1011}$ side-by-side while identifying $H'$ and $hH$. The same effect is achieved by drawing them side-by-side and introducing a 2-handle canceling the 1-handle coming from pairing $H'$ to $hH$. The handle decomposition for $\tilde M_{1011}$ is the entire diagram in Fig.~\ref{handle1011}. The part coming from $Q$ we take to be centered at 0, the part coming from $hQ$ we center at $(-6,0,0)$, the two portions being symmetric in the plane $x_1=-3$.
To get proper labeling on the feet of 1-handles of the $hQ$-part, we recall that they are the result of applying $h=ru_{----}$ to $Q$; thus we need to reflect the picture on the right in the planes $x_1=0$, $x_2=0$, $x_3=0$ and the unit sphere centered at 0, and then apply the reflection $r$ in the plane $x_1=-3$. Putting together all the facts from above, the feet of the 1-handles in the decomposition in Fig.~\ref{handle1011} have the following identification pattern: \begin{gather*} A, B, C, D, I, J, K, L, hA, hB, hC, hD, hI, hJ, hK, hL \mapsto\\ A', B', C', D', I', J', K', L', hA', hB', hC', hD', hI', hJ', hK', hL'\\ \text{via reflection in the bisector of the feet}\\ E, F, G, H, E', F', G', H' \mapsto hE', hF', hG', hH', hE, hF, hG, hH\\ \text{via reflection in $x_1=-3$.} \end{gather*} The manifold $\tilde M_{1011}$ has 5 three-torus boundary components, each of which is an $S^1$-bundle over $T^2$. Closing off the boundary components involves filling in each fiber with a disc, resulting in a closed manifold $N$. Equivalently, this can be achieved by attaching a $T^2\times D^2$ to each component of $\bd \tilde M_{1011}$. A handle decomposition for $T^2\times D^2$ derived from the simplest handle decomposition for $T^2$ has one 0-handle, two 1-handles and one 2-handle. Attaching it to $\tilde M_{1011}$ results in adding one 2-handle, two 3-handles and one 4-handle to the decomposition, since the handles in the decomposition of $T^2\times D^2$ must be viewed upside down (see \cite{Gompf-Stipsicz}) when it is attached to $\tilde M_{1011}$. The attaching circle of the 2-handle is any fiber in the bundle. As selected and illustrated in \cite{Ivansic3}, in four of the boundary components the fibers are represented by a straight-line segment joining opposing sides of the cube that is the vertex link; therefore, the attaching circle of the 2-handle is a line segment joining two opposed feet of 1-handles in the octahedron that corresponds to the vertex link.
Since the parallel circle is another fiber, we may assume it is simply a parallel line segment. We now simplify the handle decomposition of $N$ using handle moves. We repeatedly make use of the following proposition: \begin{proposition} \label{cancel3handle} (\cite{Gompf-Stipsicz}, modified Proposition 5.1.9) If the handle decomposition of a closed manifold contains an attaching circle of a 2-handle that can be isotoped so that it bounds a disc disjoint from the rest of the diagram, and the disc contains its parallel circle, then the 2-handle cancels a 3-handle from the decomposition (and we may erase the 2-handle from the diagram). \end{proposition} In order to better see what goes on in the complicated diagram, we consider sections with the coordinate planes $x_1x_2$, $x_1x_3$, $x_2x_3$ and the parallel plane $x_1=-6$ (the ``$x_2x_3$-planes''), and the two spheres $S^2$, displayed in this order in Figures~\ref{step0}--\ref{step6}. Since pieces of the parallel circles are on one of those surfaces, if an isotopy of the diagram stays parallel to the surface, those pieces remain in the surface, so they are easy to track. On occasion, pieces of parallel circles are not on the default surfaces --- it is either obvious where they are, or it is noted. \begin{figure} \begin{center} \includegraphics{octahedroncancellation.eps} \caption{Octahedron corresponding to $v_{0+00}$ after $AA'$ is canceled by a 2-handle.} \label{octahedroncancellation} \end{center} \end{figure} Handle moves are tracked in Figures~\ref{step0}--\ref{step6} by drawing what happens in the sections with the mentioned planes. The topmost box of the explanation describes which 1- and 2-handles have canceled. The middle four boxes describe subsequent isotopies, each of the four pertaining to the part of the diagram in the same relative position as the box. The bottom box describes cancellation of 2- and 3-handles owing to Proposition~\ref{cancel3handle}.
Note that 1-handles are designated by labels on the corresponding paired sides. One could wonder if isotopy or handle cancellation in one of the planes interferes with the situation in the others. A little 3-dimensional insight helps one see that there is no problem. A picture such as Fig.~\ref{octahedroncancellation}, which is typical, may help the reader see what happens after a 1-handle cancels with a 2-handle. This picture shows the octahedron corresponding to $v_{0+00}$ after the 1-handle $AA'$ was canceled by a 2-handle. The initial handle decomposition also includes thirty-four 3-handles and five 4-handles. Twenty-four of the 3-handles come from cycles of 1-faces; the remaining ten 3-handles and five 4-handles come from handle decompositions of the attached $T^2\times D^2$'s. As explained in \cite{Gompf-Stipsicz}, \S4.4, because $N$ is a closed manifold, 3- and 4-handles can attach essentially in only one way, so there is no need to keep track of them. \begin{figure} \begin{center} \includegraphics{meridian5.eps} \caption{Position of $m_5$ in octahedron corresponding to $v_{++++}$} \label{meridian5} \end{center} \end{figure} After Fig.~\ref{step6}, the 3-dimensional diagram is simple enough that we can draw it in one picture. In the steps thus far, we have not pictured the additional 2-handle coming from closing off the fifth boundary component. The choice $e^{-1}g$, made in \cite{Ivansic3}, is represented by a union of line segments: one joining the opposite sides $S_{+00+}$ and $S_{0++0}$ of the cube that is the vertex link at $v_{++++}$, and one joining the opposite sides $S_{0--0}$ and $S_{-00-}$ of the cube that is the vertex link at $v_{----}$. This corresponds to two segments in our diagram, one joining $E$ and $G$ and one joining $hE'$ and $hG'$.
Fig.~\ref{meridian5} shows the first one in the ``octahedron'' corresponding to $v_{++++}$: we can see that all the moves done so far on the diagram do not affect it (in particular, it lies outside of the two $S^2$'s), so it can be drawn in the same position in the overall picture. After the final 2-handles and 3-handles cancel in step~12 of Fig.~\ref{step7}, we arrive at an empty diagram (one 0-handle and some 3- and 4-handles). Since this is the handle decomposition of the standard differentiable 4-sphere, we conclude that $N$ is diffeomorphic to it. Incidentally, by keeping track of the 3- and 4-handles throughout the computation we see that the final handle decomposition has four 3-handles and five 4-handles. However, since the boundary of their union is $S^3$, the union must be $D^4$ (otherwise the boundary would be a connected sum of $S^1\times S^2$'s). \pagebreak \begin{landscape} \begin{figure} \resizebox{8in}{!}{\includegraphics{step0.eps}} \caption{Step 0, initial handle decomposition} \label{step0} \end{figure} This is the initial setup. Altogether, there are twenty-four 1-handles, fifty-four 2-handles, and also thirty-four 3-handles and five 4-handles, whose attaching spheres we do not need to keep track of. Each of the 2-handles $m_1$, $m_2$, $m_3$ and $m_4$ (coming from attaching $T^2\times D^2$ to $\tilde M_{1011}$) passes exactly once over the 1-handles $AA'$, $JJ'$, $KK'$ and $CC'$, respectively, so those 2-handles cancel the 1-handles.
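The handle counts can be cross-checked arithmetically; this check is ours and not part of the proof. Assuming the two 0-handles and the 1-handle joining them are merged into a single 0-handle (as in the final count of one 0-handle above), the alternating sum of handle counts must equal $\chi(S^4)=2$, both for the initial decomposition and for the final one:

```python
# Arithmetic sanity check (ours): the alternating sum of handle counts of a
# handle decomposition of S^4 must equal chi(S^4) = 2.  We assume a single
# 0-handle, as in the final count in the text.
def euler(counts):
    # counts = [#0-handles, #1-handles, #2-handles, #3-handles, #4-handles]
    return sum((-1) ** k * n for k, n in enumerate(counts))

initial = [1, 24, 54, 34, 5]   # counts before any handle moves
final   = [1, 0, 0, 4, 5]      # after all cancellations in Steps 0-12
assert euler(initial) == 2
assert euler(final) == 2
```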
\vfill \pagebreak \begin{figure} \resizebox{8in}{!}{\includegraphics{step1.eps}} \caption{Step 1} \label{step1} \begin{tabular}{|l||l|} \hline \multicolumn{2}{|l|} {2-handles $m$, $m_1$, $m_2$ and $m_3$ cancel $H'hH$, $AA'$, $JJ'$ and $KK'$, respectively} \\ \hhline{:=t::=:} \parbox[t]{4in}{$n_1$ slides over the 1-handle $hGG'$ and off the foot $hG$ into position indicated by the dotted line\\ $n_2$ isotopes into dotted position\\ $n_4$ slides over $II'$ and off foot $I'$ into dotted position\rule[-6pt]{0pt}{0pt}} & \parbox[t]{4in}{$n_6$ slides over $CC'$ and off foot $C'$ into dotted position\\ $n_5$ slides over $DD'$ and off foot $D'$ into dotted position} \\ \hhline{:=::=:} \parbox[t]{4in}{$n_7$ slides over $F'hF$ and off foot $hF$ into dotted position\\ $n_8$ slides over $EhE'$ and off foot $hE'$ into dotted position\\ $n$ slides over $LL'$ and off foot $L$ into dotted position\rule[-6pt]{0pt}{0pt}} & \parbox[t]{4in}{$n_9$ slides over $EhE'$ and off foot $hE'$ into dotted position\\ $n_{10}$ slides over $FhF'$ and off foot $hF'$ into dotted position} \\ \hhline{|-||-|} \end{tabular} \end{figure} \pagebreak \begin{figure} \resizebox{8in}{!}{\includegraphics{step2.eps}} \caption{Step 2} \label{step2} \begin{tabular}{|l||l|} \hline \multicolumn{2}{|l|}{$m_4$, $n_1$, $n_2$, $n_6$, $n_8$ and $n_9$ cancel $CC'$, $hIhI'$, $BB'$, $LL'$, $hLhL'$ and $hBhB'$, respectively} \\ \hhline{:=t::=:} \parbox[t]{4in}{$n_{11}$ slides over $hJhJ'$ and off $hJ$ into dotted position\\ $n_{12}$ isotopes into dotted position} & \parbox[t]{4in}{$n_{13}$ slides over $HhH'$ and off $hH'$ into dotted position\\ $n_{14}$ slides over $GhG'$ and off $hG'$ into dotted position\\ $n_{15}$ slides over $hDhD'$ and off $hD$ into dotted position\\ $n_{16}$ slides over $hChC'$ and off $hC'$ into dotted position\rule[-6pt]{0pt}{0pt}} \\ \hhline{:=::=:} \parbox[t]{4in}{$n_{19}$ slides over $FhF'$ and off $hF'$ into dotted position\\ $n_{20}$ slides over $E'hE$ and off $hE$ into dotted position} &
\parbox[t]{4in}{$n_{17}$ isotopes into dotted position\\ $n_{21}$ slides over $F'hF$ and off $hF$ into dotted position\\ $n_{22}$ slides over $E'hE$ and off $hE$ into dotted position\rule[-6pt]{0pt}{0pt}} \\ \hhline{:=b::=:} \multicolumn{2}{|l|}{ Each of the 2-handles $n_3$, $n_4$, $n_5$, $n_7$ and $n_{10}$ cancels a 3-handle owing to Proposition~\ref{cancel3handle}} \\ \hline \end{tabular} \end{figure} \pagebreak \begin{figure} \resizebox{8in}{!}{\includegraphics{step3.eps}} \caption{Step 3} \label{step3} \begin{tabular}{|l||l|} \hline \multicolumn{2}{|l|} {$n_{13}$, $n_{17}$, $n_{19}$ cancel $hDhD'$, $hKhK'$, $DD'$, respectively} \\ \hhline{:=t::=:} \parbox[t]{4in}{isotopy simplifies picture} & \parbox[t]{4in}{$n_{18}$ isotopes into dotted position\\ $n_{24}$ slides over $G'hG$ and off $hG$ into dotted position\rule[-6pt]{0pt}{0pt}} \\ \hhline{:=::=:} \parbox[t]{4in}{isotopy simplifies picture} & \parbox[t]{4in}{$n_{23}$ isotopes\rule[-6pt]{0pt}{0pt}} \\ \hhline{:=b::=:} \multicolumn{2}{|l|} {$n_{14}$, $n_{15}$, $n_{16}$, $n_{20}$ and $n_{23}$ cancel a 3-handle} \\ \hline \end{tabular} \end{figure} \pagebreak \begin{figure} \resizebox{8in}{!}{\includegraphics{step4.eps}} \caption{Step 4} \label{step4} \begin{tabular}{|l||l|} \hline \multicolumn{2}{|l|} {$n_{11}$, $n_{18}$ cancel $hAhA'$, $hChC'$, respectively} \\ \hhline{:=t::=:} \parbox[t]{4in}{\ } & \parbox[t]{4in}{isotopy simplifies picture\\ $n_{33}$ isotopes to dotted position\rule[-6pt]{0pt}{0pt}} \\ \hhline{:=::=:} \parbox[t]{4in}{} & \parbox[t]{4in}{$n_{25}$ isotopes\\ $n_{26}$ isotopes\rule[-6pt]{0pt}{0pt}} \\ \hhline{:=b::=:} \multicolumn{2}{|l|} {$n_{12}$, $n_{24}$, $n_{21}$, $n_{22}$, $n_{25}$, $n_{26}$ cancel a 3-handle} \\ \hline \end{tabular} \end{figure} \pagebreak \begin{figure} \resizebox{8in}{!}{\includegraphics{step5.eps}} \caption{Step 5} \label{step5} \begin{tabular}{|l||l|} \hline \multicolumn{2}{|l|} {$n_{33}$, $n_{27}$, $n_{28}$ cancel $G'hG$, $hJhJ'$, $II'$, respectively} \\
\hhline{:=t::=:} \parbox[t]{7in}{$n_{31}$ rises in part above the $x_1x_2$-plane, spans a surface like an arched roof that contains its parallel circle\\ $n_{32}$ rises in part above the $x_1x_2$-plane, slides over $GhG'$ and off $G$ into dotted position\\ $n_{35}$ isotopes into dotted position\rule[-6pt]{0pt}{0pt}} & \parbox[t]{1in}{} \\ \hhline{:=::=:} \parbox[t]{7in}{both branches of $n_{39}$ are isotoped by rotating by $180^\circ$ around the axis joining their endpoints --- the dots representing where $n_{37}$ and $n_{38}$ cross these planes show they do not interfere with the isotopy\rule[-6pt]{0pt}{0pt}} & \parbox[t]{1in}{} \\ \hhline{:=b::=:} \multicolumn{2}{|l|} {\parbox[t]{8in}{$n_{31}$, $n_{32}$, $n_{34}$ cancel a 3-handle; $n_{29}$ and $n_{30}$ can be separated from the rest of the diagram by pulling them in the $x_1$-direction, then each cancels a 3-handle}} \\ \hline \end{tabular} \end{figure} \pagebreak \begin{figure} \resizebox{8in}{!}{\includegraphics{step6.eps}} \caption{Step 6} \label{step6} \begin{tabular}{|l||l|} \hline \multicolumn{2}{|l|} {$n_{35}$ cancels $HhH'$} \\ \hhline{:=t::=:} \parbox[t]{4in}{$n_{37}$ and $n_{38}$ are isotoped up and down, respectively\rule[-6pt]{0pt}{0pt}} & \parbox[t]{4in}{} \\ \hhline{:=::=:} \parbox[t]{4in}{} & \parbox[t]{4in}{$n_{40}$, $n_{41}$, $n_{42}$ and $n_{43}$ are isotoped along the $S^2$'s so they lie, along with their parallel circles, in planes parallel to the $x_1x_3$-plane --- note that $n_{37}$ and $n_{38}$ do not interfere with this in their new position\rule[-6pt]{0pt}{0pt}} \\ \hhline{:=b::=:} \multicolumn{2}{|l|} {$n_{36}$ cancels a 3-handle} \\ \hline \end{tabular} \end{figure} \pagebreak \begin{figure} \resizebox{8in}{!}{\includegraphics{step7.eps}} \caption{Steps 7--12} \label{step7} \begin{tabular}{|l||l|} \hhline{|-||-|} \parbox[t]{4.1in}{$n_{41}$ isotopes to dotted positions\\ parallel curve of $m_5$ can be chosen right above $m_5$} & \parbox[t]{4.1in}{$n_{41}$ cancels $F'hF$, $n_{44}$ cancels $GhG'$\\
$n_{43}$ isotopes to dotted positions\\ $n_{40}$ slides over $E'hE$ and off $hE$, then cancels 3-handle\\ $n_{45}$ cancels 3-handle\rule[-6pt]{0pt}{0pt}} \\ \hhline{|-||-|} \end{tabular} \begin{tabular}{|l||l||l|} \hhline{|-||-||-|} \parbox[t]{2in}{$m_5$ isotopes\\ $n_{43}$ cancels $FhF'$\\ $n_{42}$ slides over $EhE'$ and off~$E$, then cancels 3-handle} & \parbox[t]{2.5in}{$m_5$ cancels $EhE'$\\ $n_{37}$ and $n_{38}$ can be separated from\\ the diagram and cancel 3-handles} & \parbox[t]{3.5in}{$n_{39}$, $n_{46}$, $n_{47}$ and $n_{48}$ can be isotoped so they all lie in a plane, along with their parallel curves\\ $n_{46}$ cancels $E'hE$\\ $n_{39}$, $n_{47}$ and $n_{48}$ cancel 3-handles\rule[-6pt]{0pt}{0pt}} \\ \hhline{|-||-||-|} \end{tabular} \end{figure} \end{landscape} \pagebreak
\section{Introduction} The \emph{word problem} of a group $G$ with respect to a finite generating set $X$, denoted $W(G,X)$, is the set of all words in elements of $X$ and their inverses which represent the identity element of $G$. A \emph{(formal) language} is a set of words over some finite alphabet, so $W(G,X)$ can be considered as a language. The study of word problems of groups as languages has developed slowly since the beginnings of language theory in the 1950s. In 1971, Anisimov \cite{Ani} published a proof that a group has regular word problem if and only if it is finite. The first really significant development in the area was the classification of the groups with context-free word problem by Muller and Schupp in the 1980s \cite{MulSch1, MulSch2, Dun}: a finitely generated group has context-free word problem if and only if it is virtually free. Since then, research activity in this area has increased, and groups with word problem in various other language classes, generally somewhat related to the context-free languages, have been studied, for example in \cite{Herbst, HRRT, HRees, HR, HR2, HOT, HRS, KO, LehSch, Shap}. The general aim is to determine what implications the language type of a group's word problem has for the structure of the group and vice versa. One natural class of languages to consider is the closure of the context-free languages under intersection. Some research has been done on this class (see for example \cite{LW}, \cite{Wot} and \cite{Gorun}), but it does not appear to have a consistent name. We call a language \emph{$k$-context-free} (henceforth abbreviated to $k$-$\CF$) if it is an intersection of finitely many context-free languages, and \emph{poly-context-free (poly-$\CF$)} if it is $k$-$\CF$ for some $k\in \N$. This paper is concerned with the class of poly-$\CF$ groups. A group is said to be \emph{poly-$\CF$} if its word problem is a poly-$\CF$ language. 
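To make the definition concrete, here is a small illustrative sketch (ours, not drawn from the literature cited above): the word problem of $\Z^2$ with respect to generators $\{a,b\}$, written over the symbols $a, A, b, B$ with $A, B$ denoting the formal inverses, is the intersection of two "balance" conditions, one per generator, and each condition on its own is a (one-counter, hence context-free) language. The Python below merely counts symbols; the point is only that membership in $W(\Z^2)$ is a conjunction of two separately context-free conditions, so $W(\Z^2)$ is $2$-$\CF$.

```python
# Illustration: the word problem of Z^2 over {a, b}, with formal
# inverses written A, B.  A word represents the identity iff it is
# balanced in the a's AND balanced in the b's; each balance condition
# alone is a context-free (indeed one-counter) language.

def balanced(word, pos, neg):
    """The single context-free condition: #pos occurrences == #neg."""
    return word.count(pos) == word.count(neg)

def in_word_problem_Z2(word):
    # Membership in the intersection of the two context-free conditions.
    return balanced(word, 'a', 'A') and balanced(word, 'b', 'B')

assert in_word_problem_Z2('abAB')    # the commutator is trivial in Z^2
assert not in_word_problem_Z2('aab') # represents (2,1), not the identity
```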
The property of being poly-$\CF$ is independent of the choice of finite generating set, and the class of poly-$\CF$ groups is closed under taking finitely generated subgroups, finite index overgroups, and finite direct products. All but the last of these properties are typical of classes of groups defined by the language type of their word problem. A general classification of these groups appears to be hard. However, we prove a result (Theorem~\ref{polycf-sol}) which comes close in the case of soluble groups. We conjecture that the only poly-$\CF$ groups are those obtained from virtually free groups using the above-mentioned operations; that is, that a group is poly-$\CF$ if and only if it is virtually a finitely generated subgroup of a direct product of free groups (Conjecture~\ref{solconj}). This would mean that the only soluble poly-$\CF$ groups are the virtually abelian groups. Theorem~\ref{polycf-sol} gives substantial evidence towards this special case of our conjecture. In \cite{HRRT}, the co$\CF$ groups (groups whose word problem is the complement of a context-free language) were studied. Various closure properties of the co$\CF$ groups were determined, most of which carry over easily to poly-$\CF$ groups (see Proposition~\ref{closureprops} below). Additionally, several classes of groups were shown not to be co$\CF$, using a method \cite[Proposition~14]{HRRT} based on the correspondence between context-free languages and semilinear sets (see Section~\ref{semisec} below). We prove a strengthened version of \cite[Proposition~14]{HRRT} (see Proposition~\ref{prop} below), which enables us to deduce that any group proved not co$\CF$ using \cite[Proposition~14]{HRRT} is also not poly-$\CF$. Some examples of such groups are finitely generated nilpotent or polycyclic groups that are not virtually abelian. It was these results that led to the attempt at a characterisation of the soluble poly-$\CF$ groups. 
A major open problem for co$\CF$ groups is whether they are closed under taking free products. It was suggested by Derek Holt that closure under free products might be much easier to determine for poly-$\CF$ groups, but so far this problem also remains open, though we believe that the word problem of $\Z^2 *\Z$ is not poly-$\CF$. The poly-$\CF$ groups are somewhat related to the co$\CF$ groups, in the sense that if our main conjecture is true, then the poly-$\CF$ groups are a subclass of the co$\CF$-groups, as we explain in Section~\ref{group}, following Conjecture~\ref{polyconj}.\\ Our main tools are introduced in Section~\ref{background}. These are: various closure properties of the classes of poly-$\CF$ languages and poly-$\CF$ groups; the relationship between bounded context-free languages and semilinear sets, due to Parikh \cite{Par} and Ginsburg and Spanier \cite{GinSpa2}; and a result by the author and Derek Holt \cite{BH}, showing that every finitely generated soluble group that is not virtually abelian has a subgroup isomorphic to one of a small number of types. In Section~\ref{language}, we study the class of poly-$\CF$ languages, with a particular focus on methods for proving languages to be \emph{not} poly-$\CF$. To this end, we develop several tools based on the correspondence between context-free languages and stratified semilinear sets introduced in Section~\ref{semisec}. In Corollary~\ref{parikh}, we show that a language satisfying certain properties is neither poly-$\CF$ nor co$\CF$, while Theorem~\ref{L(n,k)} exhibits sequences of languages $L^{(n,k)}$, where $n, k\in \N$, such that for all $n$, the language $L^{(n,k)}$ is an intersection of $k$ but not $k-1$ context-free languages. This is an extension of a result by Liu and Weiner \cite{LW}. In Section~\ref{group}, we present the known examples of poly-$\CF$ groups, and conjecture that these are the only ones. 
We give some evidence for this conjecture (Conjecture~\ref{polyconj}), in the form of results showing that it holds in the classes of nilpotent, Baumslag-Solitar and polycyclic groups, and for the groups $G(\c)$ introduced in \cite{BH}, which are also shown to be not co$\CF$ if they are not virtually abelian. We conclude with a section applying the results of Section~\ref{group} and \cite{BH} to prove the metabelian and torsion-free soluble cases of our conjecture, and to narrow down the possibilities for which soluble groups could be poly-$\CF$. \section{Background and notation}\label{background} \subsection{Notation} $\N, \Z$ and $\Q$ denote the natural numbers, integers and rationals respectively. We denote the natural numbers with zero included by $\N_0$. For $r\in \N$ and $1\leq i\leq r$, the vector in $\N_0^r$ with a $1$ in the $i$-th position and zeroes elsewhere will be denoted by $e_i$. With the exception of these, all vectors are represented by bold letters. We denote the $i$-th component of the vector $\v$ by $\v(i)$. For a set $X$, we denote the \emph{Kleene star closure} of $X$, which is the set of all finite length strings (also called words) of elements of $X$, by $X^*$. In the special case $X = \{x\}$, we often denote $X^*$ by $x^*$. \subsection{Closure properties of the poly-$\CF$ languages} Many closure properties of the classes of $k$-${\cal CF}$ and poly-$\CF$ languages can be deduced from the similar properties for context-free languages; for details of these, see (for example) \cite{HopUll}. \begin{proposition}\label{polyclose} For any $k\in \N$, the class of $k$-$\cal{CF}$ languages is closed under inverse homomorphisms, inverse generalised sequential machine mappings, union with context-free languages and intersection with regular languages. The class of poly-$\cal{CF}$ languages is closed under all these operations, and also under intersection and union. 
\end{proposition} \begin{proof} Let $L = L_1\cap\ldots\cap L_k$ with each $L_i$ context-free and let $\Sigma$ be the alphabet of $L$. Let $\Gamma$ be an alphabet and let $\phi$ be a homomorphism from $\Gamma^*$ to $\Sigma^*$, or a generalised sequential machine mapping with input alphabet $\Gamma$ and output alphabet $\Sigma$. Then \begin{align*} \phi^{-1}(L) &= \{ w\in \Gamma^* \mid \phi(w)\in L_i \; (1\leq i\leq k)\}\\ &= \bigcap_{i=1}^k \{ w\in \Gamma^* \mid \phi(w)\in L_i\} = \bigcap_{i=1}^k \phi^{-1}(L_i), \end{align*} and so, since the class of context-free languages is closed under inverse homomorphisms and inverse generalised sequential machine mappings, $\phi^{-1}(L)$ is $k$-${\cal CF}$. The class of context-free languages is closed under union and under intersection with regular languages. Thus if $R$ is regular, then $L\cap R = L_1\cap\ldots\cap L_{k-1}\cap (L_k\cap R)$ is $k$-${\cal CF}$; and if $M$ is context-free, then $L\cup M = \bigcap_{i=1}^k (L_i\cup M)$ is $k$-$\cal{CF}$. The closure of the class of poly-$\cal{CF}$ languages under intersection is obvious, since if $L_1$ is $k_1$-$\cal{CF}$ and $L_2$ is $k_2$-$\cal{CF}$, then $L_1\cap L_2$ is an intersection of $k_1 + k_2$ context-free languages. If $L = \cap_{i=1}^m L_i$ and $M = \cap_{j=1}^n M_j$, with each $L_i$ and $M_j$ context-free, then \[ L\cup M = \left(\bigcap_{i=1}^m L_i\right)\cup\left(\bigcap_{j=1}^n M_j\right) = \bigcap_{i=1}^m\bigcap_{j=1}^n (L_i\cup M_j) \] is $mn$-${\cal CF}$, so the class of poly-${\cal CF}$ languages is also closed under union. \end{proof} The closure of the poly-$\CF$ languages under union and intersection was already observed by Wotschke \cite{Wot}, who also showed, using a theorem of Liu and Weiner (see Section~\ref{L(k) section} below), that the poly-$\CF$ languages are not closed under complementation and are thus properly contained in the Boolean closure of the context-free languages \cite[Theorem~II.4]{Wot}.
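The distributivity step in the union argument can be sanity-checked on finite sets. The following throwaway Python snippet (illustrative only; any small sets would do) verifies the identity $(\bigcap_i L_i)\cup(\bigcap_j M_j) = \bigcap_{i,j}(L_i\cup M_j)$ on a concrete example.

```python
# Sanity check, on finite sets, of the set identity used in the proof:
#   (L_1 ∩ ... ∩ L_m) ∪ (M_1 ∩ ... ∩ M_n) = ∩_{i,j} (L_i ∪ M_j).

from itertools import product

L = [{1, 2, 3}, {2, 3, 4}]   # L = L_1 ∩ L_2 = {2, 3}
M = [{3, 5}, {5, 6, 3}]      # M = M_1 ∩ M_2 = {3, 5}

lhs = set.intersection(*L) | set.intersection(*M)
rhs = set.intersection(*[Li | Mj for Li, Mj in product(L, M)])
assert lhs == rhs == {2, 3, 5}
```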
Any recursively enumerable language can be expressed as a homomorphic image of the intersection of two deterministic context-free languages \cite{GGH}. Every poly-$\CF$ language is context-sensitive, since the context-sensitive languages are closed under intersection and contain the context-free languages. Thus the poly-$\CF$ languages are not closed under homomorphisms: if they were, then every recursively enumerable language would be poly-$\CF$ and hence context-sensitive, which is false. \subsection{Basic properties of the poly-$\CF$ groups} A central result in the theory of word problems of groups as languages is the following, for which a proof is given in \cite{HRRT}. We denote the complement of $W(G,X)$ in $X^*$ by co$W(G,X)$. \begin{lemma}\label{genset}{\rm \cite[Lemma~1]{HRRT}} Let $\mathcal{C}$ be a class of languages closed under inverse homomorphisms and let $G$ be a finitely generated group. Then the following hold. \begin{enumerate} \item $W(G,X)\in \mathcal{C}$ for some finite generating set $X$ if and only if for every finite generating set $Y$, $W(G,Y)\in \mathcal{C}$. \item co$W(G,X)\in \mathcal{C}$ for some finite generating set $X$ if and only if for every finite generating set $Y$, co$W(G,Y)\in \mathcal{C}$. \end{enumerate} \end{lemma} In this case, we call $G$ a \emph{$\cal{C}$ group} if $W(G)$ is in $\cal{C}$, and a \emph{co${\cal C}$ group} if co$W(G)$ is in $\cal{C}$, and say that $\cal{C}$ groups or co${\cal C}$ groups are \emph{insensitive} to choice of generators. \begin{lemma}{\rm \cite[Lemma 2]{HRRT}}\label{fgsub} Let $\cal{C}$ be a class of languages closed under inverse homomorphisms and intersection with regular sets. Then the classes of $\cal{C}$ groups and co$\cal{C}$ groups are closed under taking finitely generated subgroups. \end{lemma} \begin{lemma}{\rm \cite[Lemma 5]{HRRT}}\label{fiover} Let $\cal{C}$ be a class of languages closed under union with regular sets and inverse generalised sequential machine mappings. Then the classes of $\cal{C}$ groups and co$\cal{C}$ groups are closed under passing to finite index overgroups.
\end{lemma} Thus, by Proposition~\ref{polyclose}, we have: \begin{proposition}\label{closureprops} The classes of co$\CF$ and $k$-$\CF$ groups (for any $k\in \N$) are insensitive to choice of generators and closed under passing to finitely generated subgroups and passing to finite index overgroups. \end{proposition} \subsection{Semilinear sets} A useful tool for proving languages not to be poly-$\CF$ is a relationship between context-free languages and semilinear sets, introduced by Parikh \cite{Par} and then strengthened, in the case of bounded languages, by Ginsburg and Spanier \cite{GinSpa2}. A \emph{linear set} is a subset $L$ of $\N_0^r$ for which there exist a \emph{constant vector} $\c\in \N_0^r$ and a finite set of \emph{periods} $P = \{\p_i \mid 1\leq i\leq n\}\subseteq \N_0^r$ such that \[L = \{\c + \sum_{i=1}^n \alpha_i \p_i \mid \alpha_i\in \N_0\}.\] Note that the set of periods $P$ is not uniquely determined. A \emph{semilinear set} is a union of finitely many linear sets. Following Ginsburg~\cite{Gin}, we will use the notation $L(\c; \p_1,\ldots,\p_n)$, or $L(\c; P)$, for a linear set with constant $\c$ and set of periods $P = \{\p_1,\ldots,\p_n\}$. For $C$ a set of constant vectors, we will denote $\bigcup_{\c\in C} L(\c; P)$ by $L(C; P)$. If $C = \{\c_1,\ldots,\c_m\}$, we will also write $L(\c_1,\ldots,\c_m;\p_1,\ldots,\p_n)$ for $L(C;P)$. If $L = L(\c; P)$, we define $L^{\Q}$ to be the set $\{\c + \sum_{i=1}^n a_i\p_i \mid a_i\in \Q\}$. This is a coset in $\Q^r$ of the $\Q$-subspace spanned by $P$. We define $L^{\mathbf{0}}$ to be $L(\mathbf{0}; P)$, that is, the linear set having the same periods as $L$ and constant $\mathbf{0}$.\\ A subset $P$ of $\N_0^r$ is \emph{stratified} if it satisfies the following conditions: \begin{enumerate} \item each $\p\in P$ has at most two non-zero components, and \item there do not exist $i<j<k<l$ and non-zero $a, b, c, d\in \N$ such that $ae_i + be_k$ and $ce_j+de_l$ are both in $P$.
\end{enumerate} A linear set is \emph{stratified} if it can be expressed using a stratified set of periods. A semilinear set is \emph{stratified} if it can be expressed as a union of finitely many stratified linear sets. (We follow Liu and Weiner \cite{LW} for this terminology.) Note that stratified linear and semilinear sets are not generally stratified sets in the sense of the previous paragraph. \subsubsection{Stratified semilinear sets and bounded poly-$\CF$ languages}\label{semisec} The \emph{commutative image} of a language $L$ over $\{a_1,\ldots,a_r\}$ is the subset of $\N_0^r$ given by mapping each $w\in L$ to the tuple $(n_1,\ldots,n_r)$, where $n_i$ is the number of occurrences of $a_i$ in $w$. Parikh's theorem \cite{Par} says that the commutative image of a context-free language is always a semilinear set. The converse of Parikh's theorem does not hold: consider for example the language $\{a^m b^n c^m d^n \mid m,n\in \N_0\}$. A language $L\subseteq X^*$ is \emph{bounded} if there exist $w_1,\ldots,w_n\in X^*$ such that $L\subseteq w_1^*\ldots w_n^*$, in which case we can define a corresponding subset of $\N_0^n$: \[ \Phi(L) = \{ (m_1,\ldots,m_n) \mid m_i\in \N_0, w_1^{m_1}\ldots w_n^{m_n}\in L\}.\] When $w_1,\ldots,w_n$ are distinct single symbols, this is the same as the commutative image of $L$. Thus the following result of Ginsburg and Spanier strengthens Parikh's theorem in the case of bounded languages. \begin{theorem}\label{Parikh}{\rm \cite[Theorem 5.4.2]{Gin}} Let $W\subseteq w_1^*\ldots w_n^*$, each $w_i$ a word. Then $W$ is context-free if and only if $\Phi(W)$ is a stratified semilinear set. \end{theorem} Ginsburg and Spanier used different notation, which made it more transparent how to get from $\Phi(W)$ back to $W$. But as we will only require the `only if' direction, we prefer this tidier notation. Theorem~\ref{Parikh} is easily extended to the poly-$\CF$ languages. 
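As an illustration of the maps just defined (the helper functions below are ours, purely hypothetical, and not part of the paper's machinery): for $L = \{a^nb^n \mid n\in\N_0\}\subseteq a^*b^*$, the image $\Phi(L)$ is the linear set $L(\mathbf{0};(1,1))$, which is stratified, since its single period has exactly two non-zero components and no interleaving pair can arise. The sketch computes commutative images and tests membership in a linear set by brute-force search over bounded coefficients.

```python
# Illustrative sketch: the map Φ for a bounded language L ⊆ a*b*, and a
# brute-force membership test for a linear set L(c; P) with coefficients
# searched up to a small bound.  Here Φ({a^n b^n}) = L((0,0); {(1,1)}).

from itertools import product

def phi(word, letters='ab'):
    """Commutative image of a word over distinct single symbols."""
    return tuple(word.count(x) for x in letters)

def in_linear_set(v, c, periods, bound=20):
    """Is v = c + sum alpha_i * p_i for some alpha_i in {0,...,bound}?"""
    for alphas in product(range(bound + 1), repeat=len(periods)):
        w = tuple(ci + sum(a * p[k] for a, p in zip(alphas, periods))
                  for k, ci in enumerate(c))
        if w == v:
            return True
    return False

assert phi('aaabbb') == (3, 3)
assert in_linear_set((3, 3), (0, 0), [(1, 1)])       # (3,3) = 3*(1,1)
assert not in_linear_set((2, 3), (0, 0), [(1, 1)])   # not on the diagonal
```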
\begin{corollary}\label{polyParikh} If $L$ is a $k$-$\CF$ language, then for any $w_1,\ldots,w_n$, the subset ${\Phi(L\cap w_1^*\ldots w_n^*)}$ of $\N_0^n$ is an intersection of $k$ stratified semilinear sets. \end{corollary} \begin{proof} Let $L = L_1\cap\ldots\cap L_k$ with each $L_i$ context-free, and let $W = w_1^*\ldots w_n^*$, where each $w_i$ is a word in the alphabet of $L$. For $1\leq i\leq k$, let $M_i = L_i\cap W$. Then $L\cap W = L_1\cap\ldots\cap L_k\cap W = \bigcap_{i=1}^k M_i$ and \begin{align*} \Phi(L\cap W) &= \{ (m_1,\ldots,m_n) \mid m_i\in \N_0, w_1^{m_1}\ldots w_n^{m_n}\in L\cap W\}\\ &= \bigcap_{i=1}^k \{ (m_1,\ldots,m_n) \mid m_i\in \N_0, w_1^{m_1}\ldots w_n^{m_n}\in M_i\}\\ &= \bigcap_{i=1}^k \Phi(M_i) \end{align*} and each $\Phi(M_i)$ is a stratified semilinear set by Theorem~\ref{Parikh}. \end{proof} Proving that a given semilinear set is not stratified is by no means straightforward, since there can be many different ways of expressing a semilinear set as a union of finitely many linear sets. Ginsburg \cite{Gin} mentioned that there was no known decision procedure for determining whether an arbitrary semilinear set is stratified, and it appears that this is still an open problem. \subsubsection{Closure properties of the class of semilinear sets} The class of semilinear sets is obviously closed under union. Thinking geometrically, one would also expect this class to be closed under the other Boolean operations. This is indeed true, but much less easy to show. The intersection of finitely many linear sets is always a semilinear set of quite a restricted form. This result can be derived from the proof of Theorem~5.6.1 in \cite{Gin}. For another proof, obtained independently by the author, see \cite[Proposition~2.3]{TB}. \begin{proposition}\label{intlin} If $L$ is the nonempty intersection of linear subsets $L_1,\ldots,L_n$ of $\N_0^r$, then $L$ is semilinear. 
Moreover, \[ L = L(\C_1,\ldots,\C_k; \P_1,\ldots,\P_m),\] where $\C_i\in\N_0^r$, and $\P_1,\ldots,\P_m$ are such that $\bigcap_{i=1}^n L_i^{\mathbf{0}} = L(\mathbf{0}; \P_1,\ldots,\P_m)$.\\ If $L_1,\ldots,L_n$ all have constant vector zero, then $L$ is linear with constant vector zero. \end{proposition} \begin{corollary}\label{intsl}{\rm \cite[Theorem~5.6.1]{Gin}} Let $L$ be an intersection of finitely many semilinear sets. Then $L$ is a semilinear set. \end{corollary} \begin{proposition} {\rm \cite[Theorem~6.2 and Corollary~1]{GinSpa1} }\label{compsl} If $L$ and $M$ are semilinear subsets of $\N_0^r$, then $M - L$ is also a semilinear subset of $\N_0^r$ and effectively calculable from $L$ and $M$. In particular, since $\N_0^r$ is semilinear, if $L$ is a semilinear subset of $\N_0^r$, then the complement of $L$ in $\N_0^r$ is semilinear. \end{proposition} \subsubsection{Dimension of linear sets} If $V$ is a subspace of a vector space $W$ with $\dim(V) < \dim(W)$, then the \emph{dimension} of a coset of $V$ in $W$ is defined to be the dimension of $V$. The \emph{dimension} of a linear set $L$ is defined to be the dimension of $L^{\Q}$ or, equivalently, the dimension of the vector space over $\Q$ spanned by the periods of $L$. We record here a result about the dimension of linear sets which will be useful later. This is a known result, but the only reference we have for it is \cite{LW}, where the proof given is incorrect. A proof is included in the author's Ph.D. thesis \cite[Proposition~2.10]{TB}. \begin{proposition}\label{dimlin} A linear set of dimension $n+1$ cannot be expressed as a union of finitely many linear sets of dimension $n$ or less. \end{proposition} \subsection{Subgroups of finitely generated soluble groups} The following theorem is a combination of Theorems~3.3 and~5.2 in \cite{BH}. By $\Z^\infty$, we mean the free abelian group of countably infinite rank. For the definition of a proper Gc-group, see Section~\ref{Gc-sec}. 
A group is \emph{metabelian} if it has derived length at most $2$. \begin{theorem}\label{solsubs} Let $G$ be a finitely generated soluble group which is not virtually abelian. Then $G$ has a subgroup isomorphic to at least one of the following. \begin{enumerate} \item $\Z^\infty$; \item a proper Gc-group; \item a finitely generated group $H$ with an infinite normal torsion subgroup $U$, such that $H/U$ is either free abelian or a proper Gc-group. \end{enumerate} If $G$ is metabelian, then the subgroup $H$ in (iii) can always be taken to be $C_p\wr \Z$ for some prime $p$. \end{theorem} Since the class of poly-$\CF$ groups is closed under taking finitely generated subgroups, this gives a very useful approach towards resolving our conjecture for soluble groups. \section{Poly-$\CF$ languages}\label{language} Recall that a $k$-$\cal{CF}$ language is an intersection of $k$ context-free languages, and a poly-$\cal{CF}$ language is a language which is $k$-$\cal{CF}$ for some $k\in \N$. In this section, we shall primarily be concerned with proving some results which will assist us in determining that the word problems of certain groups are not poly-$\cal{CF}$. \subsection{A criterion for a language to be neither poly-$\CF$ nor co$\CF$} In \cite[Proposition~14]{HRRT}, a technique was developed for proving a subset of $\N_0^r$ not to be the complement of a semilinear set. This was used in combination with Parikh's theorem to prove various classes of groups not to be co$\CF$. The proof is fairly long and technical. The authors were presumably unaware of the fact that the complement of a semilinear set is semilinear. This fact allows us to give a much simpler proof of their result, and to strengthen it. If $\a$ and $\b$ are vectors in $\N_0^r$ and $\N_0^s$ respectively, then we denote by $(\a;\b)$ the vector in $\N_0^{r+s}$ which consists of all the components of $\a$ in order, followed by those of $\b$ in order. 
When talking about vectors in $\N_0^{r+s}$, if we write $(\a;\b)$, then it is understood that $\a\in \N_0^r$ and $\b\in \N_0^s$. For $\a\in \N_0^r$, we define $\sigma(\a) = \sum_{i=1}^r \a(i)$. We use the following lemma, extracted from the proof of Proposition 11 in \cite{HRRT}. We call a vector $\v\in \N_0^{r+s}$ {\em simple} if its first $r$ components are all zero, and {\em complex} otherwise. The proof is quoted from \cite{HRRT} with only minor modifications. \begin{lemma}\label{C_L} Let $L = L_1\cup\ldots\cup L_n$, with each $L_i$ a linear subset of $\N_0^{r+s}$. Then there exists a constant $C\in \N$ such that if $(\a;\b)\in L$ can be expressed using only complex periods, then $\b(j) < C\sigma(\a)$ for all $1\leq j\leq s$. \end{lemma} \begin{proof} Fix some $i\in \{1,\ldots,n\}$ and let $L_i = L(\c_i; P_i)$. If $(\p;\q)\in P_i$ is a complex period, then $\sigma(\p)\neq 0$, so there exists $t$ such that $\q(j) < t\sigma(\p)$ for ${1\leq j\leq s}$. Since $P_i$ is finite, we can choose the same $t$ for all $(\p;\q)\in P_i$. If, for $k=1,2$, $(\a_k;\b_k)\in \N_0^{r+s}$ satisfy $\b_k(j) < t\sigma(\a_k)$, then $(\b_1 + \b_2)(j) < t\sigma(\a_1 + \a_2).$ Thus there is a constant $q\in \N_0$, which can be taken to be ${\max\{\c_i(r+j) \mid 1\leq j\leq s\}}$, such that if $(\a;\b)\in L_i$ can be expressed using only complex periods, then $\b(j) < t\sigma(\a) + q$ for all $1\leq j\leq s$. Now let $C\in \N$ be twice the maximum of all of the constants $t,q$ that arise for all $L_i$. Then, for any $(\a;\b)\in L$ which can be expressed using only complex periods, $\b(j) < C\sigma(\a)$ for all $1\leq j\leq s$. \end{proof} \begin{proposition}\label{prop} Let $L\subseteq \N_0^{r+s}$ for some $r, s\in \N$. Let $f:\N\rightarrow \N$ be an unbounded function and suppose that, for every $k\in \N$, there exists ${\a\in \N_0^r\setminus \{\mathbf{0}\}}$ such that the following hold: \begin{enumerate} \item There exists $\b\in \N_0^s$ such that $(\a;\b)\in L$.
\item If $(\a;\b)\in L$ then $\b(j) \geq k\sigma(\a)$ for some $1\leq j\leq s$. \item If $(\a;\b), (\a;\b')\in L$ with $\b\neq \b'$, then $|\b(l) - \b'(l)|\geq f(k)$ for some $1\leq l\leq s$. \end{enumerate} Then $L$ is not a semilinear set. \end{proposition} \begin{proof} Let $L$ be as in the statement of the proposition and suppose that $L = \bigcup_{i=1}^n L_i$, where each $L_i = L(\c_i;P_i)$ is a linear subset of $\N_0^{r+s}$. By Lemma~\ref{C_L}, there exists a constant $C\in \N$ such that if $(\a;\b)\in L$ can be expressed using only complex periods in some $L_i$, then $\b(j) < C\sigma(\a)$ for all $1\leq j\leq s$. Choose $k>C$, and suppose $\a$ satisfies the hypotheses of the proposition with respect to $k$. If $(\a;\b)\in L$, then $(\a;\b)$ cannot be expressed using only complex periods, so some $P_i$ must contain a simple period $(\mathbf{0};\v)$ with $\v$ non-zero. But then $(\a;\b+\v)\in L_i\subseteq L$ and so, for some $1\leq l\leq s$, \[ |\v(l)| = |(\b + \v)(l) - \b(l) |\geq f(k).\] So for all $k>C$, there is a non-zero simple period $\v_k$ in $\cup_{i=1}^n P_i$, with some component of $\v_k$ being at least $f(k)$. But since $\cup_{i=1}^n P_i$ is finite and $f(k)$ is unbounded, this is impossible. Thus $L$ is not a semilinear set. \end{proof} In \cite[Proposition~14]{HRRT}, instead of our condition (i), it is required that there is a \emph{unique} $\b\in \N_0^s$ such that $(\a;\b)\in L$ (and thus there is no condition (iii) or mention of the unbounded function $f$); instead of our condition (ii), it is required that $\b(j) \geq k\sigma(\a)$ for \emph{every} $1\leq j\leq s$. The conclusion is that $L$ is not the complement of a semilinear set. Our hypothesis is considerably weaker, and the conclusion is equally strong, since the complement of a semilinear set is semilinear.\\ For $\v = (n_1,\ldots,n_r)\in \N_0^r$ and $\tau$ a permutation of $\{1,\ldots,r\}$, we define \[ \tau(\v) = (n_{\tau(1)},n_{\tau(2)},\ldots,n_{\tau(r)}).
\] We extend this to a subset $L$ of $\N_0^r$ by defining $\tau(L) = \{\tau(\v) \mid \v\in L\}$. If $L = L(\c;\p_1,\ldots,\p_k)$, then $\tau(L) = L(\tau(\c); \tau(\p_1),\ldots,\tau(\p_k))$, so the property of being a linear set, or indeed an intersection of $k$ semilinear sets, is preserved by $\tau$. We shall make significant use of the following corollary to Proposition~\ref{prop} in Section~\ref{group}. \begin{corollary}\label{parikh} Let $L\subseteq w_1^*\ldots w_k^*$ be a bounded language over an alphabet $X$ with $w_i\in X^*$, and let $\tau$ be a permutation of $\{1,\ldots,k\}$. If $\tau\left(\Phi\left(L\right)\right)$ satisfies the hypothesis of Proposition~\ref{prop}, then $L$ is neither co$\cal{CF}$ nor poly-$\cal{CF}$. \end{corollary} \begin{proof} Since $\tau$ preserves semilinearity, this follows immediately from Proposition~\ref{prop}, Theorem~\ref{Parikh}, Corollary~\ref{polyParikh} and the fact that the class of semilinear sets is closed under intersection (Corollary~\ref{intsl}) and complementation (Proposition~\ref{compsl}). \end{proof} \subsection{The languages $L^{(k)}$}\label{L(k) section} A $(k-1)$-$\cal{CF}$ language is clearly also $n$-$\cal{CF}$ for all $n\geq k$. In \cite{LW}, Liu and Weiner showed that the class of $k$-$\cal{CF}$ languages properly contains the class of $(k-1)$-$\cal{CF}$ languages, thus exhibiting an infinite hierarchy of languages between the context-free and context-sensitive languages. (They call a $k$-$\CF$ language a `$k$-intersection language'.) Note that this implies that the $k$-$\CF$ languages are not closed under intersection or even under intersection with context-free languages. There are some problems with Liu and Weiner's proof, particularly in the proof of their Theorem 10. In this section, we provide a more detailed proof.
In Section~\ref{L(n,k) section}, we extend Liu and Weiner's result, but the proof of the special case is provided first, as it will probably aid the reader's understanding of the more general case. Following Liu and Weiner, we define a sequence of languages $L^{(k)}$ and corresponding subsets $S^{(k)}$ of $\N_0^{2k}$. For $k\in \N$, let $a_1,\ldots,a_{2k}$ be $2k$ distinct symbols, and define the language \[L^{(k)} = \{a_1^{n_1}\ldots a_k^{n_k} a_{k+1}^{n_1}\ldots a_{2k}^{n_k} \mid n_i\in \N_0\}.\] Define $S^{(k)}$ to be $\Phi\left(L^{(k)}\right)$. That is, \[S^{(k)} = \{\v\in \N_0^{2k} \mid \v(i) = \v(k+i) \; (1\leq i\leq k)\}.\] The following lemma gives a condition which implies a linear set is not an intersection of $k-1$ stratified semilinear sets. The proof is assembled primarily from the proof of \cite[Lemma~4]{LW}, but the result is stated differently here, because in this form it will also be useful in proving our generalisation of Liu and Weiner's result. \begin{lemma}\label{lemma4} Let $S = L(\mathbf{0};P)$ be a $k$-dimensional linear subset of $\N_0^r$ such that $P$ is linearly independent over $\Q$. Suppose that any subset of $S$ which can be expressed as an intersection of $k-1$ stratified linear sets with constant vector zero has dimension at most $k-1$. Then $S$ is not an intersection of $k-1$ stratified semilinear sets. \end{lemma} \begin{proof} If $S$ is an intersection of $k-1$ stratified semilinear sets, then $S$ is a finite union of intersections of $k-1$ stratified linear sets. Let $L = \bigcap_{i=1}^{k-1} L_i$ be a subset of $S$ with each $L_i$ a stratified linear set. Let $M = \bigcap_{i=1}^{k-1} L_i^{\mathbf{0}}$ and write $M = L(\mathbf{0};\p_1,\ldots,\p_m)$. By Proposition~\ref{intlin}, there exists a finite subset $C$ of $\N_0^r$ such that \[ L = \bigcup_{\c_i\in C} L(\c_i; \p_1, \ldots, \p_m). \] For any $\c,\p\in \N_0^r$ such that $\c +n\p\in S$ for all $n\in \N_0$, we have $\p\in S$, since $P$ is linearly independent over $\Q$.
Thus $M\subseteq S$, since ${L(\c_1;\p_1,\ldots,\p_m)\subseteq S}$. Since $M\subseteq S$ is an intersection of $k-1$ stratified linear sets with constant zero, $M$ has dimension at most $k-1$ by the hypothesis of the lemma. Each $L(\c_i; \p_1,\ldots,\p_m)$ is a coset of $M$ and thus has the same dimension as $M$. Thus $L$ is a union of finitely many linear sets of dimension at most $k-1$. This implies that $S$ itself is a union of finitely many linear sets of dimension at most $k-1$, but by Proposition~\ref{dimlin}, this cannot happen since $\dim(S) = k$. \end{proof} \subsubsection{The new part of the proof} This subsection contains a new proof of the result which is Theorem 10 in \cite{LW}, namely that $S^{(k)}$ satisfies the hypothesis of Lemma~\ref{lemma4}. We break most of it up into three lemmas, which then come together to give a relatively simple proof of the proposition itself (which here is Proposition~\ref{S(k)}). \begin{lemma}\label{dimS} Let $S = L_1\cap\ldots\cap L_k$, where each $L_i$ is a linear subset of $\N_0^r$ with constant vector zero and periods $P_i = \{\p_{i1},\ldots,\p_{im_i}\}$. For each $1\leq i\leq k$, let $\L_i = L_i^{\Q}$. If $\dim (S) < \dim (\L_1\cap\ldots\cap \L_k)$, then there exist $1\leq i\leq k$, $1\leq j\leq m_i$, such that removing $\p_{ij}$ from $P_i$ does not change the set $S$. \end{lemma} \begin{proof} Suppose that $\dim (S) < \dim (\L_1\cap\ldots\cap \L_k)$ and that, for all $i$, removing any $\p_{ij}$ from $P_i$ changes the set $S$. Then, for all $i, j$, there must exist some $\v_{ij} = \alpha_{i1}^j \p_{i1} + \ldots + \alpha_{im_i}^j \p_{im_i}\in S$ with $\alpha_{ij}^j\geq 1$. Let $\{\q_1,\ldots,\q_s\}$ be a basis for $\L_1\cap\ldots\cap \L_k$. Since $\q_1,\ldots,\q_s\in \L_i$ for all $i$, we can write $\q_l = \sum_{j=1}^{m_i} \beta_{ij}^l \p_{ij}$, where $\beta_{ij}^l\in \Q$. (Replacing each $\q_l$ by a suitable positive integer multiple if necessary, we may assume that every $\beta_{ij}^l$ is an integer.)
Now for $1\leq i\leq k$, $1\leq j\leq m_i$, let $c_{ij} = \mathrm{min} \{\beta_{ij}^l \mid 1\leq l\leq s\}$, and let \[\Lambda_i = \{j \mid 1\leq j\leq m_i,\; c_{ij} <0\}.\] Then, if $\w_i := \sum_{j\in \Lambda_i} -c_{ij}\v_{ij}$, we have $\w_i\in S$, since $\v_{ij}\in S$ and ${-c_{ij}\in \N}$ for all $j\in \Lambda_i$. Each $\w_i$ can thus be expressed in $L_i$ as $\sum_{j=1}^{m_i} \gamma_{ij} \p_{ij}$, where ${\gamma_{ij} = \sum_{j'\in \Lambda_i} -c_{ij'} \alpha_{ij}^{j'}}$. Since $\w_i$ is in $S$, it also has an expression \[\w_i = \sum_{j=1}^{m_{i'}}\gamma_{i'j}^i \p_{i'j},\] for each $i'\neq i$ in $\{1,\ldots, k\}$, where $\gamma_{i'j}^i\in \N_0$. For convenience, let $\gamma_{ij}^i = \gamma_{ij}$. Let $\w = \sum_{i=1}^k \w_i$. Then $\w\in S$ and, for each $i$, we can write \[{\w = \sum_{i'=1}^k \sum_{j=1}^{m_i} \gamma_{ij}^{i'} \p_{ij}}.\] For all $j\in \Lambda_i$, the coefficient of $\p_{ij}$ in this expression for $\w$ is \[\sum_{i'=1}^k \gamma_{ij}^{i'}\geq \gamma_{ij}^i = \sum_{j'\in \Lambda_i} -c_{ij'} \alpha_{ij}^{j'}\geq -c_{ij} \alpha_{ij}^j\geq -c_{ij},\] since $\alpha_{ij}^j\geq 1$. Thus we have shown that for each $i$, we can express $\w$ in the form $\sum_{j=1}^{m_i} a_{ij} \p_{ij}$, where $a_{ij}\geq -c_{ij}$ for all $j\in \Lambda_i$. For any $\q_l$ in the basis for $\L_1\cap\ldots\cap \L_k$, and any $1\leq i\leq k$, we have \[\w + \q_l = \sum_{j=1}^{m_i} (a_{ij} + \beta_{ij}^l)\p_{ij}\in L_i,\] since $a_{ij}+\beta_{ij}^l\geq a_{ij}+c_{ij}\geq 0$ for all $j\in \Lambda_i$, and $c_{ij}\geq 0$ for $j\notin \Lambda_i$. Thus $\w + \q_l\in S$ for all $1\leq l\leq s$. Let ${M = \{\w, \w+\q_1, \ldots, \w+\q_s\}\subset S}$. Then $\q_1,\ldots,\q_s$ are in the subspace of $\Q^r$ generated by $M$, which is contained in~$S^{\Q}$. Since $\{\q_1,\ldots,\q_s\}$ is a basis for $\L_1\cap\ldots\cap \L_k$, it is a linearly independent set over $\Q$. 
Thus $S^{\Q}$ contains at least $s$ linearly independent elements, contradicting $\dim(S) = \dim(S^{\Q}) < \dim(\L_1\cap\ldots\cap \L_k) = s$. \end{proof} For a stratified linear set $L\subseteq \N_0^r$ with set of periods $P$, let $\rho_L$ be the symmetric relation on $\{1,\ldots,r\}$ given by $m\rho_L n$ if there exist non-zero $\alpha,\beta$ with $\alpha e_m + \beta e_n\in P$. Define $\sim_L$ to be the reflexive and transitive closure of $\rho_L$. This gives a partition $\Pi_L$ of $\{1,\ldots,r\}$ into equivalence classes under $\sim_L$. Note that since $L$ is stratified, if $m_1 < n_1 < m_2 < n_2$, then at most one of $m_1\rho_L m_2$ and $n_1 \rho_L n_2$ is true. A similar property applies to $\sim_L$: \begin{lemma}\label{overlap} Let $L\subseteq \N_0^r$ be a stratified linear set with constant vector zero. If $m_1, n_1, m_2, n_2\in \{1,\ldots,r\}$ with $m_1 < n_1 < m_2 < n_2$, $m_1\not\sim_L n_1$ and $m_2\not\sim_L n_2$, then $m_1\sim_L m_2$ and $n_1\sim_L n_2$ cannot both occur. \end{lemma} \begin{proof} Suppose $m_1\sim_L m_2$ and $n_1\sim_L n_2$. Then there exist $i_1,\ldots,i_s, j_1,\ldots, j_t$ in $\{1,\ldots,r\}$ such that \[m_1 = i_1\rho_L i_2\rho_L\ldots\rho_L i_s=m_2 \quad \mathrm{and} \quad n_1=j_1\rho_L j_2\rho_L\ldots\rho_L j_t = n_2.\] Let $\Lambda\in\Pi_L$ be such that $m_1, m_2\in \Lambda$. Then since $m_1 < n_1 < m_2 < n_2$ and $n_1, n_2\notin \Lambda$, there must exist $k$ such that either $m_1 < j_k < m_2 < j_{k+1}$, or $j_{k+1} < m_1 < j_k < m_2$. No $i_l$ can equal $j_k$ or $j_{k+1}$, since this would give $m_1\sim_L n_1$. Moreover, since $j_k\rho_L j_{k+1}$ and $L$ is stratified, no pair $i_l\rho_L i_{l+1}$ can have exactly one of $i_l, i_{l+1}$ strictly between $j_k$ and $j_{k+1}$. As one of $m_1(=i_1)$ and $m_2(=i_s)$ lies between $j_k$ and $j_{k+1}$, this forces $i_l$ to lie between $j_k$ and $j_{k+1}$ for all $1\leq l\leq s$. But either $m_1$ or $m_2$ does not lie between $j_k$ and $j_{k+1}$, thus we have a contradiction. \end{proof} The following result gives a relationship between $\Pi_L$ and the orthogonal complement of $L^{\Q}$. \begin{lemma} \label{dimsimL} Let $L\subseteq \N_0^r$ be a stratified linear set, with $\Pi_L = \{\Lambda_1,\ldots,\Lambda_t\}$, and let $\L = L^{\Q}$.
Then $\L^\perp$ has a basis of the form $\{\x_i = \sum_{j\in \Lambda_i} \gamma_j e_j \mid i\in M\}$, where $M\subseteq \{1,\ldots,t\}$. In particular, $\dim(\L^\perp)=|M|\leq t$. \end{lemma} \begin{proof} Let $M$ be the set of all $i\in\{1,\ldots,t\}$ such that $\x(j)\neq 0$ for some $\x\in \L^\perp$ and $j\in \Lambda_i$. For each $i\in M$, fix some non-zero $\x^{(i)}\in \L^\perp$ with $\x^{(i)}(j)\neq 0$ for some $j\in \Lambda_i$. We can write $\x^{(i)} = \sum_{j=1}^r \gamma_{ij} e_j = \sum_{s=1}^t \x^{(i)}_s$, where $\x^{(i)}_s = \sum_{j\in \Lambda_s} \gamma_{ij} e_j$, since $\{1,\ldots,r\}$ is the disjoint union of $\Lambda_1,\ldots,\Lambda_t$. For $i\in M$, let $\x_i = \x^{(i)}_i$. Then $\{\x_i \mid i\in M\}$ is a linearly independent set, since $\x_i\neq 0$ by the choice of $\x^{(i)}$, and $\x_i(j)=0$ for all $j\notin \Lambda_i$. Let $P$ be the set of periods of $L$, and for $1\leq i\leq t$, let \[P_i = \{\alpha_me_m + \alpha_ne_n\in P \mid m,n\in \Lambda_i\},\] where one of $\alpha_m$ or $\alpha_n$ may be zero. Then $\{P_1,\ldots,P_t\}$ is a partition of $P$. Now if $\p\in P_i$, then $\p\cdot \x^{(i)}_{i'} = 0$ for all $i'\neq i$, since $\x^{(i)}_{i'}(j) = 0$ for all $j\in \Lambda_i$. Thus $\p\cdot \x^{(i)} = \p\cdot (\x^{(i)}_1 + \ldots + \x^{(i)}_t) = \p\cdot \x^{(i)}_i = \p\cdot \x_i$. But $\x^{(i)}\in \L^\perp$, so $\p\cdot \x_i = 0$. Since also $\p\cdot \x_i=0$ for all $\p\in P_{i'}$ with $i'\neq i$, we have $\x_i\in \L^\perp$, for all $i\in M$. It remains to show that $\{\x_i \mid i\in M\}$ spans $\L^\perp$. Recall that $\x_i = \sum_{j\in \Lambda_i} \gamma_{ij} e_j$. First we show that $\gamma_{ij}\neq 0$ for all $i\in M$, $j\in \Lambda_i$. For $i\in M$, certainly $\gamma_{im}\neq 0$ for some $m\in \Lambda_i$, since $\x_i\neq \mathbf{0}$. 
For any $n\in \Lambda_i$ there exist $m_1,\ldots,m_l\in \Lambda_i$ such that $m = m_1\rho_L m_2\rho_L\ldots\rho_L m_l = n$, which implies the existence of periods $\alpha_{m_1}e_{m_1} + \alpha_{m_2}e_{m_2},\ldots,\alpha_{m_{l-1}}e_{m_{l-1}} + \alpha_{m_l}e_{m_l}\in P_i$ with non-zero $\alpha_{m_j}$ for all $1\leq j\leq l$. Now \[\x_i\cdot (\alpha_{m_j}e_{m_j} + \alpha_{m_{j+1}} e_{m_{j+1}}) = \gamma_{im_j}\alpha_{m_j} + \gamma_{im_{j+1}}\alpha_{m_{j+1}} = 0\] for all $1\leq j\leq l-1$, since $\x_i\in \L^\perp$. Thus $\gamma_{im_{j+1}} = -\gamma_{im_j}\frac{\alpha_{m_j}}{\alpha_{m_{j+1}}}$ and so by induction $\gamma_{in} = \gamma_{im_l}\neq 0$, since $\gamma_{im} = \gamma_{im_1}\neq 0$. Moreover, for all $n\in \Lambda_i$, the coefficient $\gamma_{in}$ is uniquely determined by $\gamma_{im}$. (If two different paths between $m$ and $n$ gave different values for $\gamma_{in}$, then our non-zero $\x_i\in \L^\perp$ could not exist.) Finally, let $\y\in \L^\perp$ and write $\y = \sum_{j=1}^r c_j e_j = \sum_{i=1}^t \y_i$, where $\y_i = \sum_{j\in \Lambda_i} c_j e_j$. If $\y_i\neq \mathbf{0}$, then choose $j\in \Lambda_i$ with $c_j\neq 0$. Since $\gamma_{ij}\neq 0$, we can write $c_j = q\gamma_{ij}$, where $q\in \Q$. By exactly the same argument as we used for $\x_i$, we can conclude that $\p\cdot \y_i = 0$ for all $\p\in P$. Now for any $\alpha_je_j + \alpha_{j'}e_{j'}\in P_i$, we have $\y_i\cdot (\alpha_je_j + \alpha_{j'}e_{j'}) = c_j\alpha_j + c_{j'}\alpha_{j'}$, thus $c_{j'} = -c_j\frac{\alpha_j}{\alpha_{j'}}$. But also $\gamma_{ij'} = -\gamma_{ij}\frac{\alpha_j}{\alpha_{j'}}$. Thus $c_{j'} = -q\gamma_{ij}\frac{\alpha_j}{\alpha_{j'}} = q\gamma_{ij'}$, and we can extend this to show that $c_n = q\gamma_{in}$ for all $n\in \Lambda_i$, thus $\y_i = q\x_i$. Since this applies to all $i\in M$ with $\y_i\neq \mathbf{0}$, we can conclude that $\y$ is a linear combination of the elements of $\{\x_i \mid i\in M\}$, and thus this set spans $\L^\perp$.
\end{proof} We are now ready to prove Theorem 10 of \cite{LW}. \begin{proposition}\label{S(k)} For $1\leq i\leq k-1$, let $L_i$ be a stratified linear set with constant vector zero, and let $L_1\cap\ldots\cap L_{k-1} = S\subseteq S^{(k)}$. Then $S$ is a linear set of dimension at most $k-1$. \end{proposition} \begin{proof} $S$ is a linear set with constant vector zero by Proposition~\ref{intlin}. Let $\L_i = L_i^{\Q}$ for all $1\leq i\leq k-1$, and let $\S = \L_1\cap\ldots\cap \L_{k-1}$. By Lemma~\ref{dimS}, we can assume that $\dim(\S) = \dim(S)$. Since $S\subseteq \S$, this implies that any maximal linearly independent subset of the periods of $S$ is a basis for $\S$. Thus, since $\v(i) = \v(k+i)$ for all $\v\in S$, we also have $\v(i) = \v(k+i)$ for all $\v\in \S$. For all $1\leq i\leq k$, we have $e_i - e_{k+i}\in \S^\perp$, since $\v\cdot(e_i - e_{k+i}) = \v(i) - \v(k+i) = 0$ for all $\v\in \S$. Assume $\{e_i - e_{k+i} \mid 1\leq i\leq k\}$ spans $\S^\perp$, since otherwise $\dim(\S^\perp)\geq k+1$ and thus $\dim(\S)\leq 2k - (k+1) = k-1$. If $\L_i^\perp\neq \{\mathbf{0}\}$, let $\Pi_{L_i} = \{\Lambda_1,\ldots,\Lambda_t\}$. Then, by Lemma~\ref{dimsimL}, $\L_i^\perp$ has a basis of the form $\{\x_s \mid s\in M\}$, where $M\subseteq \{1,\ldots,t\}$ and $\x_s = \sum_{j\in \Lambda_s} \gamma_j e_j$. If $s\in M$, then since $\x_s\in \L_i^\perp\subseteq \S^\perp$, we can write \[\x_s = \sum_{j\in \Gamma_s} \gamma_j(e_j - e_{k+j}),\] where $\Gamma_s = \Lambda_s\cap \{1,\ldots,k\}$. Certainly some $\gamma_j$ must be non-zero, implying $j, (k+j)\in \Lambda_s$. Thus if $s, s'\in M$ with $s\neq s'$, then we would have some $j, (k+j)\in \Lambda_s$, $l, (k+l)\in \Lambda_{s'}$. But either $j<l<(k+j)<(k+l)$ or $l<j<(k+l)<(k+j)$, thus this would contradict Lemma~\ref{overlap}. Therefore $M$ contains at most one element, and so $\dim(\L_i^\perp)\leq 1$. This holds for all $1\leq i\leq k-1$.
But if each $\L_i^\perp$ is at most one dimensional, then since $\S^\perp = \L_1^\perp + \ldots + \L_{k-1}^\perp$, $\dim(\S^\perp)$ cannot exceed $k-1$, contradicting the fact that $e_j - e_{k+j}\in \S^\perp$ for all $1\leq j\leq k$. Thus our assumption that $\{e_j - e_{k+j} \mid 1\leq j\leq k\}$ spans $\S^\perp$ was false, and so in fact $\dim(S)\leq k-1$. \end{proof} \subsubsection{The rest of the proof} \begin{theorem}\label{L(k)}{\rm \cite[Theorem~8]{LW}} The language $L^{(k)}$ is $k$-$\cal{CF}$, but not $(k-1)$-$\cal{CF}$. Thus, for all $k\geq 2$, the class of $k$-$\cal{CF}$ languages properly contains the class of $(k-1)$-$\cal{CF}$ languages. \end{theorem} \begin{proof} By Corollary~\ref{polyParikh}, it suffices to show that $S^{(k)}$ is an intersection of $k$ but not $k-1$ stratified semilinear sets. For $1\leq i\leq k$, define \[S_i = \operatorname{span} \left\{e_i + e_{k+i}, e_j \mid 1\leq j\leq 2k, j\notin \left\{i, k+i\right\} \right\}.\] Then each $S_i$ is a stratified linear set and $S^{(k)} = \bigcap_{i=1}^k S_i$. Also, $S^{(k)}$ has constant vector zero and dimension $k$, since $\{e_i + e_{k+i} \mid 1\leq i\leq k\}$ is a linearly independent subset which spans $S^{(k)}$. Hence, by Proposition~\ref{S(k)}, $S^{(k)}$ satisfies the hypothesis of Lemma~\ref{lemma4}, so cannot be expressed as an intersection of $k-1$ stratified semilinear sets. \end{proof} \subsection{The languages $L^{(n,k)}$}\label{L(n,k) section} We can extend Theorem~\ref{L(k)} to a larger, but very similar, class of languages. The extended result will be used to prove that certain groups, for example the restricted standard wreath products $C_p\wr \Z$ (for any $p>1$), are not poly-$\cal{CF}$. For each $n,k\in \N$, let $a_1,a_2,\ldots,a_{2nk}$ be $2nk$ distinct symbols and define \[\begin{array}{ll} L^{(n,k)} = \{a_1^{m_1} a_2^{m_2}\ldots a_{2nk}^{m_{2nk}} \mid & m_i\in \N_0, m_i = m_{nk+i}\; (1\leq i\leq nk),\\ & m_{nj+1} = m_{nj+l}\; (0\leq j\leq k-1,\; 2\leq l\leq n)\}. 
\end{array}\] For example, $L^{(2,2)} = \{ a_1^m a_2^m a_3^n a_4^n a_5^m a_6^m a_7^n a_8^n \mid m,n\in \N_0\}$. Define $S^{(n,k)}$ to be $\Phi\left(L^{(n,k)}\right)$. Then \[\begin{array}{ll} S^{(n,k)} = \{ \v\in \N_0^{2nk} \mid & \v(i) = \v(i+nk) \; (1\leq i\leq nk),\\ & \v(nj+1) = \v\left(nj+l\right) \; (0\leq j\leq k-1, \; 2\leq l\leq n)\}. \end{array}\] These sets are like $S^{(k)}$, except with each entry being repeated $n$ times. Thus $S^{(1,k)}$ is just $S^{(k)}$. For any $n\in \N$, the set $S^{(n,k)}$ has dimension $k$, so it is not surprising that the following result does not depend on $n$. \begin{proposition}\label{S(n,k)} For $1\leq i\leq k-1$, let $L_i$ be a stratified linear set with constant vector zero, and let $L_1\cap\ldots\cap L_{k-1} = S\subseteq S^{(n,k)}$. Then $S$ is a linear set of dimension at most $k-1$. \end{proposition} \begin{proof} The proof follows the idea of the proof of Proposition~\ref{S(k)}, but is a good deal more complicated. $S$ is a linear set with constant vector zero by Proposition~\ref{intlin}. Let $\L_i = L_i^{\Q}$ for $1\leq i\leq k-1$, and let $\S = \L_1\cap\ldots\cap \L_{k-1}$. By Lemma~\ref{dimS}, we can assume that $\dim(\S) = \dim(S)$. Since $S\subseteq \S$, this implies that any maximal linearly independent subset of the periods of $S$ is a basis for $\S$. Thus since $\v(i) = \v(nk+i)$ for all $\v\in S$, we also have $\v(i) = \v(nk+i)$ for all $\v\in \S$, $1\leq i\leq nk$. Moreover, for all $\v\in \S$ we have $\v(nj+l) = \v(nj+l+1)$ for all $0\leq j\leq k-1$, $1\leq l\leq n-1$. For all $1\leq i\leq nk$, we have $e_i - e_{nk+i}\in \S^\perp$, since, for all $\v\in \S$, \[\v\cdot(e_i - e_{nk+i}) = \v(i) - \v(nk+i) = 0.\] Similarly, $e_{nj+l} - e_{nj+l+1}\in \S^\perp$ for all $0\leq j\leq k-1$ and $1\leq l\leq n-1$. Thus we know of $nk + (n-1)k = (2n-1)k$ linearly independent elements of $\S^\perp$. 
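For instance, when $n = k = 2$, so that $2nk = 8$, these $(2n-1)k = 6$ elements are the following:

```latex
% The six known elements of S^perp when n = k = 2:
% e_i - e_{nk+i} for 1 <= i <= 4, and e_{nj+1} - e_{nj+2} for j = 0, 1.
\[ e_1 - e_5,\quad e_2 - e_6,\quad e_3 - e_7,\quad e_4 - e_8,\quad
   e_1 - e_2,\quad e_3 - e_4. \]
```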
Assume that these $(2n-1)k$ elements form a basis of $\S^\perp$, since otherwise \[\dim(S) = \dim(\S) < 2nk - (2n-1)k = k,\] as we require. We will now derive a contradiction, using the fact that $\S^\perp = \L_1^\perp+\ldots+\L_{k-1}^\perp$. For $0\leq j\leq k-1$ and $\epsilon\in \{0,1\}$, define \[\Delta_j^{\epsilon} = \{ n(\epsilon k +j) +l \mid 1\leq l\leq n\}\] and $\Delta_j = \Delta_j^0\cup\Delta_j^1$. Let $\S_j$ be the image of the projection of $\S^\perp$ onto the coordinates in $\Delta_j$. Since every vector in the basis of $\S^\perp$ above is contained in some $\S_j$, and the $\Delta_j$ are disjoint, $\S^\perp$ is the direct sum of $\S_0,\ldots,\S_{k-1}$. Call $\x\in \S^\perp$ a \emph{$j$-bridge} if there exist $l\in \Delta_j^0$ and $l'\in \Delta_j^1$ such that $\x(l)$ and $\x(l')$ are both non-zero. By extension, for $\Gamma\subseteq \{0,\ldots,k-1\}$, call $\x$ a \emph{$\Gamma$-bridge} if $\x$ is a $j$-bridge for all $j\in \Gamma$. For $0\leq j\leq k-1$, let $\Omega_j$ be the $2(n-1)$-dimensional subspace of $\S_j$ generated by \[\{e_{n(\epsilon k +j)+l}-e_{n(\epsilon k +j)+l+1} \mid \epsilon\in \{0,1\}, 1\leq l\leq n-1\}\] and let $\Omega = \Omega_0 + \ldots + \Omega_{k-1}$. Suppose that $\x\in \S^\perp$ is not a $j$-bridge for any $j$. We will show that $\x$ must be in $\Omega$. Write $\x = \sum_{j=0}^{k-1} \y_j$, where $\y_j\in \S_j$. Then no $\y_j$ is a $j$-bridge. For any $j\in \{0,\ldots,k-1\}$, the non-zero coordinates of $\y_j$ are either all in $\Delta_j^0$ or all in $\Delta_j^1$, since $\y_j$ is in $\S_j$ and is not a $j$-bridge. For any $\v\in \S^\perp$, the sum of the entries of $\v$ is zero, as can be seen by considering the basis vectors of $\S^\perp$. Thus the subspace of $\S^\perp$ consisting of vectors whose non-zero coordinates all lie in $\Delta_j^\epsilon$ is spanned by $\{e_{n(\epsilon k +j)+l}-e_{n(\epsilon k +j)+l+1} \mid 1\leq l\leq n-1\} \subseteq \Omega_j$, for $\epsilon\in \{0,1\}$.
Hence $\y_j\in \Omega_j$, and since this applies for all $0\leq j\leq k-1$, we conclude that $\x = \sum_{j=0}^{k-1} \y_j\in \Omega$. If $\L_i^\perp\neq \{\mathbf{0}\}$, let $\Pi_{L_i} = \{\Lambda_1,\ldots,\Lambda_t\}$. Then by Lemma~\ref{dimsimL}, $\L_i^\perp$ has a basis of the form $B_i = \{\x_s \mid s\in M\}$, where $M\subseteq \{1,\ldots,t\}$ and $\x_s = \sum_{j\in \Lambda_s} \gamma_{sj} e_j$. Note that if $\x_s$ is a $j$-bridge and $s'\neq s$, $j'\neq j$, then $\x_{s'}$ cannot be a $j'$-bridge, since this would imply the existence of $l_1,l_2,l'_1,l'_2\in \{1,\ldots,n\}$ such that \[nj+l_1, n(k+j)+l_2\in \Lambda_s, \quad nj'+l'_1, n(k+j')+l'_2\in \Lambda_{s'},\] contradicting Lemma~\ref{overlap}. If $B_i$ contains no $\Gamma$-bridges for any non-empty $\Gamma$, then every $\x_s\in B_i$ is in $\Omega$, hence $\L_i^\perp\subseteq \Omega$. If the largest $\Gamma$ such that $\x_s$ is a $\Gamma$-bridge is a singleton $\{j\}$, then $B_i$ may possibly contain other $j$-bridges; but, as already observed, $B_i$ contains no $j'$-bridges for $j'\neq j$. If $\Gamma$ has at least two elements and $\x_s$ is a $\Gamma$-bridge, then $B_i$ contains no other $\Gamma'$-bridges, even for $\Gamma' = \Gamma$, since this would again imply a situation contradicting Lemma~\ref{overlap}. Thus there is at most one $\Gamma\subseteq \{0,\ldots,k-1\}$ such that $B_i$ contains one or more $\Gamma$-bridges. If such $\Gamma$ exists, call it $\Gamma_i$. For each $i$, we have $\L_i^\perp = {\cal M}_i + {\cal N}_i$, where ${\cal M}_i$ is the subspace generated by the $\Gamma_i$-bridge(s) and ${\cal N}_i$ is the subspace generated by the remaining elements of $B_i$. Now consider \[\S^\perp = \L_1^\perp + \ldots + \L_{k-1}^\perp = {\cal M}_1 + \ldots + {\cal M}_{k-1} + {\cal N}_1 +\ldots + {\cal N}_{k-1}.\] Since the ${\cal N}_i$ are generated by elements which are not $\Gamma$-bridges for any non-empty $\Gamma$, they are all subspaces of $\Omega$. 
Thus $\S^\perp\subseteq {\cal M}_1 + \ldots + {\cal M}_{k-1} + \Omega$. If $\Gamma_i$ contains at least two elements, then $B_i$ has a single $\Gamma_i$-bridge, so ${\cal M}_i$ has dimension one. If $\Gamma_i = \{j\}$, then even though ${\cal M}_i$ can have dimension up to $n$, $\Omega_j + {\cal M}_i$ has to be contained in $\S_j$, so can have dimension at most $2n-1$, which is one more than the dimension of $\Omega_j$. Thus each ${\cal M}_i$ contributes at most one extra dimension to the set $\Omega+{\cal M}_1 + \ldots+ {\cal M}_{k-1}$, and so \begin{align*} \dim(\S^\perp) & \leq \dim(\Omega +{\cal M}_1 + \ldots+ {\cal M}_{k-1})\\ & \leq 2k(n-1) + k-1 = (2n-1)k -1, \end{align*} giving a contradiction. Thus our assumption that $\S^\perp$ was spanned by ${(2n-1)k}$ elements is incorrect, and so \[\dim(S) = \dim(\S)\leq 2nk - ((2n-1)k+1) = k-1. \qedhere \] \end{proof} \begin{corollary}\label{kS(n,k)} A $k$-dimensional linear subset of $S^{(n,k)}$ cannot be expressed as an intersection of $k-1$ stratified semilinear sets. \end{corollary} \begin{proof} Suppose $L\subseteq S^{(n,k)}$ is $k$-dimensional and can be expressed as an intersection of $k-1$ stratified semilinear sets. Then we can write $L = S_1\cup\ldots\cup S_l$, where each $S_i$ is an intersection of $k-1$ stratified linear sets. By Proposition~\ref{intlin}, there exist finite subsets $C_i$ and $P_i$ of $\N_0^{2nk}$ such that $S_i = L(C_i;P_i)$ for $1\leq i\leq l$. By Proposition~\ref{dimlin}, there must exist $1\leq i\leq l$ and $\c\in C_i$ such that $L(\c;P_i)$ has dimension $k$, and hence $L(\mathbf{0};P_i)$ has dimension $k$. Writing $S_i = \cap_{j=1}^{k-1} N_j$, where each $N_j$ is a stratified linear set, from Proposition~\ref{intlin} we have $L(\mathbf{0};P_i) = \cap_{j=1}^{k-1} N_j^{\mathbf{0}}$. But $L(\mathbf{0};P_i)$ is a $k$-dimensional linear subset of $S^{(n,k)}$ with constant zero, while each $N_j^{\mathbf{0}}$ is a stratified linear set, contradicting Proposition~\ref{S(n,k)}.
\end{proof} \begin{theorem}\label{L(n,k)} For any $k,n\in \N$, the set $S^{(n,k)}$ is not an intersection of $k-1$ stratified semilinear sets, and so the language $L^{(n,k)}$ is not $(k-1)$-$\cal{CF}$. \end{theorem} \begin{proof} Recall from the proof of Proposition~\ref{S(n,k)} the notation \[\Delta_j = \{ nj +l \mid 1\leq l\leq n\}\cup\{ n(k +j) +l \mid 1\leq l\leq n\}.\] For $0\leq j\leq k-1$, let $\u_j = \sum_{i\in \Delta_j} e_i$. Then $\{\u_j \mid 0\leq j\leq k-1\}$ is a linearly independent set which spans $S^{(n,k)}$, so $S^{(n,k)}$ is $k$-dimensional. Since $S^{(n,k)}$ has constant vector zero, it follows from Lemma~\ref{lemma4} and Proposition~\ref{S(n,k)} that $S^{(n,k)}$ cannot be an intersection of $k-1$ stratified semilinear sets and thus $L^{(n,k)}$ cannot be a $(k-1)$-$\cal{CF}$ language. \end{proof} \section{Poly-$\cal{CF}$ groups}\label{polyCF group}\label{group} We begin with a simple observation, followed by our main conjecture. \begin{observation}\label{polyprod} The class of poly-$\CF$ groups is closed under taking finite direct products. The direct product of a $k_1$-$\CF$ group and a $k_2$-$\CF$ group is $(k_1+k_2)$-$\CF$. \end{observation} \begin{proof} It suffices to show that the direct product of two poly-$\CF$ groups is poly-$\CF$. Let $G_i$ be a $k_i$-$\CF$ group for $i=1,2$. Let $A_{i1},\ldots,A_{ik_i}$ be pushdown automata with input alphabet $X_i$ such that a word is in $W(G_i,X_i)$ if and only if it is accepted by all $A_{ij}$. We may assume that $X_1$ and $X_2$ are disjoint. Now modify the automata $A_{ij}$ so that their input alphabet is $X = X_1\cup X_2$, but each $A_{1j}$ ignores the symbols in $X_2$ and $A_{2j}$ ignores the symbols in $X_1$. Let $h_1: X\rightarrow X_1$ be the homomorphism sending every symbol in $X_2$ to the empty word, and define $h_2$ similarly. Then a word $w$ in $(X\cup X^{-1})^*$ is accepted by all of the modified automata $A_{ij}$ if and only if $h_i(w)\in W(G_i,X_i)$ for $i=1,2$. 
Thus the intersection of the languages accepted by all the $A_{ij}$ is precisely $W(G_1\times G_2, X)$, and hence $G_1\times G_2$ is $(k_1 + k_2)$-$\CF$. \end{proof} Since finitely generated free groups are context-free, this implies that a direct product of $k$ finitely generated free groups is $k$-$\CF$. Since the $k$-$\cal{CF}$ groups are closed under taking finite index overgroups and finitely generated subgroups, any finitely generated subgroup of a direct product of $k$ free groups, and any finite index overgroup of such a group, is $k$-$\cal{CF}$. These are the only known $k$-$\cal{CF}$ groups, and we conjecture that they are the only ones. \begin{conjecture}\label{polyconj} Let $G$ be a finitely generated group. Then $G$ is poly-$\cal{CF}$ if and only if $G$ is virtually a finitely generated subgroup of a direct product of free groups. \end{conjecture} This would generalise both Muller and Schupp's result on context-free groups \cite{MulSch1, MulSch2, Dun} and the theorem of Holt, Owens and Thomas \cite{HOT}, which says that the word problem of a finitely generated group is an intersection of finitely many one-counter languages if and only if the group is virtually abelian. A \emph{one-counter language} is a language recognised by a pushdown automaton with only one stack symbol. Note that the truth of Conjecture~\ref{polyconj} would imply that if $G$ is poly-$\CF$, then $W(G)$ is an intersection of finitely many deterministic context-free languages, and hence co$W(G)$ is context-free, since the deterministic context-free languages are closed under complementation and the context-free languages are closed under union. The rest of this section is devoted to proving certain classes of groups to be \emph{not} poly-$\cal{CF}$. 
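To spell out the closure argument in the final sentence above: if $W(G) = L_1\cap\ldots\cap L_k$ with each $L_i$ deterministic context-free over the alphabet $X\cup X^{-1}$, then by De Morgan's laws,

```latex
% The co-word problem as a finite union of complements of DCFLs:
\[ \mathrm{co}W(G) \;=\; \left( X\cup X^{-1}\right)^{*} \setminus \bigcap_{i=1}^{k} L_i
   \;=\; \bigcup_{i=1}^{k} \left( \left( X\cup X^{-1}\right)^{*} \setminus L_i \right). \]
```

Each complement $\left( X\cup X^{-1}\right)^{*} \setminus L_i$ is deterministic context-free, so the union, and hence co$W(G)$, is context-free.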
\subsection{Some groups which are not poly-$\cal{CF}$} Holt, Rees, R\"over and Thomas proved that a finitely generated nilpotent group or polycyclic group is co$\CF$ if and only if it is virtually abelian \cite[Theorems~12 and~16]{HRRT}, and that the Baumslag-Solitar group $\BS (m,n)$ is not co$\CF$ if $m\neq \pm n$ \cite[Theorem~13]{HRRT}. These theorems are all proved using \cite[Proposition~14]{HRRT}, which, as we have mentioned, has a strictly weaker hypothesis than Proposition~\ref{prop}; so, with no further effort, we can obtain analogous results for poly-$\CF$ groups, using Corollary~\ref{parikh}. \begin{proposition}\label{polynil} Let $G$ be a polycyclic group or a finitely generated nilpotent group. Then $G$ is poly-$\cal{CF}$ if and only if it is virtually abelian. \end{proposition} \begin{proof} If $G$ is not virtually abelian, then the proofs of Theorems~12 (for $G$ nilpotent) and~16 (for $G$ polycyclic) in \cite{HRRT} show that there exists a regular language $R$ such that $\Phi\left( W(G)\cap R\right)$ satisfies the hypothesis of Proposition~\ref{prop}, and hence $G$ is neither co$\CF$ nor poly-$\cal{CF}$ by Corollary~\ref{parikh}. \end{proof} The result for nilpotent groups was actually already obtained by Holt, Owens and Thomas in \cite{HOT}, using what is essentially a special case of Proposition~\ref{prop}. The statement of Theorem~13 in \cite{HRRT} is incorrect. It is claimed that $\BS (m,n)$ is co$\CF$ if and only if it is virtually abelian, based on the supposition that $\BS (m,n)$ is virtually abelian if $m = \pm n$. We now show that if $m = \pm n$, then $\BS (m,n)$ is both co$\CF$ and poly-$\CF$. \begin{proposition}\label{BSmm} For $m\in \Z\setminus \{0\}$, the Baumslag-Solitar group $\BS (m,\pm m)$ is virtually a direct product of two free groups and is thus both co$\cal{CF}$ and $2$-$\cal{CF}$. \end{proposition} \begin{proof} First let $G = \BS (m,m) = \left\langle x,y \mid y^{-1}x^m y = x^m\right\rangle$.
Then $x^m\in Z(G)$ and \[G/\left\langle x^m\right\rangle = \left\langle x, y \mid x^m\right\rangle = C_m * \Z.\] Let $H/\left\langle x^m\right\rangle$ be the normal closure in $G/\left\langle x^m\right\rangle$ of $\left\langle y\right\rangle$. Then \[|G/\left\langle x^m\right\rangle : H/\left\langle x^m\right\rangle| = m\] and hence ${|G:H|} = m$. Since $H/\left\langle x^m\right\rangle$ intersects every conjugate of $C_m$ trivially, by the Kurosh Subgroup Theorem (see for example \cite[III.3.6]{LynSch}), $H/\left\langle x^m\right\rangle$ is the free product of a free group with conjugates of $\Z$, and is thus free. Since $x^m\in Z(G)$ and $H/\left\langle x^m\right\rangle$ is free, we have $H \cong H/\left\langle x^m\right\rangle \times \left\langle x^m\right\rangle$. Thus $G$ is virtually a direct product of two free groups. Now let $G = \BS (m,-m) = \left\langle x,y \mid y^{-1}x^m y = x^{-m}\right\rangle.$ Let $K$ be the normal closure in $G$ of $\left\langle x, y^2\right\rangle$, which has index $2$ in $G$. Setting $a = x$, $b = y^{-1}x^{-1}y$ and $c = y^2$ gives \[ K = \left\langle a,b,c \mid a^m = b^m, [a^m, c]\right\rangle,\] with $a^m\in Z(K)$. Now take \[ H := K/\left\langle a^m\right\rangle = \left\langle a,b,c \mid a^m = b^m = 1\right\rangle = C_m * C_m * \Z.\] Let $\phi$ be the homomorphism from $H$ to $C_m \times C_m$ given by mapping $a$ onto a generator of the first $C_m$ and $b$ onto a generator of the second $C_m$, and $c$ onto the identity. Then the intersection of $\ker \phi$ with every conjugate of $\left\langle a\right\rangle$ and $\left\langle b\right\rangle$ is trivial. Thus $\ker \phi$ is free, again by the Kurosh Subgroup Theorem. Also, ${| H : \ker \phi |} = {| C_m \times C_m |} = m^2$. Let $K_1$ be the preimage of $\ker \phi$ in $K$. Since $\ker \phi$ is free and $\left\langle a^m\right\rangle\leq Z(K)$, $K_1$ is isomorphic to $\ker \phi \times \left\langle a^m\right\rangle$.
Also, $K_1$ has finite index in $K$, and hence also in $G$, since $\ker \phi$ has finite index in $H = K/\left\langle a^m\right\rangle$. Thus $G$ is virtually a direct product of two free groups. Hence $G$ is $2$-$\cal{CF}$ by Observation~\ref{polyprod}, and co$\cal{CF}$ by the fact that the co$\CF$ groups are closed under taking finite direct products \cite[Proposition~6]{HRRT}. \end{proof} We can now determine which Baumslag-Solitar groups are poly-$\cal{CF}$. \begin{proposition}\label{polyBS} The Baumslag-Solitar group $\BS (m,n)$ is poly-$\cal{CF}$ or co$\CF$ if and only if $m = \pm n$. \end{proposition} \begin{proof} The proof of Theorem~13 in \cite{HRRT} shows that if $G = \BS (m,n)$ with ${m\neq \pm n}$, then $W(G)$ can be intersected with a regular language to give a sublanguage satisfying the hypothesis of Proposition~\ref{prop}, and so $G$ is neither co$\cal{CF}$ nor poly-$\cal{CF}$ by Corollary~\ref{parikh}. Conversely, if $m = \pm n$, then $\BS (m,n)$ is both co$\CF$ and poly-$\cal{CF}$ by Proposition~\ref{BSmm}. \end{proof} \subsection{Free abelian groups and wreath products}\label{wreath-sec} The obvious application of Theorem~\ref{L(k)} to word problems of groups is to the free abelian groups. \begin{lemma}\label{rankk} A free abelian group of rank $k$ is $k$-$\cal{CF}$ but not $(k-1)$-$\cal{CF}$. \end{lemma} \begin{proof} The group $\Z^k$ is a direct product of $k$ free groups, and is hence $k$-$\cal{CF}$. Let $\{x_1,\ldots,x_k\}$ be a generating set for $\Z^k$ and let $X_i$ denote the inverse of $x_i$. Consider $L = W(\Z^k)\cap (x_1^*\ldots x_k^* X_1^*\ldots X_k^*)$. This is precisely the language $L^{(k)} = \{x_1^{n_1}\ldots x_k^{n_k} X_1^{n_1}\ldots X_k^{n_k} \mid n_i\in \N_0\}$ defined in Section \ref{L(k) section}. Thus, by Theorem~\ref{L(k)}, $L$ is not $(k-1)$-$\cal{CF}$. Since $L$ is the intersection of $W(\Z^k)$ with a regular language, this implies that $\Z^k$ is not $(k-1)$-$\cal{CF}$.
\end{proof} The class of co$\cal{CF}$ groups is closed under taking restricted standard wreath products with context-free top group \cite[Theorem~10]{HRRT}. In contrast, we have the following result for poly-$\cal{CF}$ groups. \begin{proposition}\label{ZwrZ} The restricted standard wreath product $\Z\wr \Z$ is not poly-$\cal{CF}$. \end{proposition} \begin{proof} Since $\Z\wr \Z$ contains free abelian subgroups of rank $k$ for all $k\in \N$, this follows immediately from Lemma~\ref{rankk} and the fact that the poly-$\cal{CF}$ groups are closed under taking finitely generated subgroups. \end{proof} A further result on wreath products will be useful when we come to consider metabelian groups. It is our first application of Theorem~\ref{L(n,k)}. \begin{proposition}\label{wreathp} For any $p\in \N\setminus \{1\}$, the restricted standard wreath product $C_p\wr \Z$ is not poly-$\cal{CF}$. \end{proposition} \begin{proof} Let $G = \left\langle b\right\rangle \wr \left\langle a\right\rangle = C_p\wr \Z$, with $p>1$, and let $A$ and $B$ be the inverses of $a$ and $b$ respectively. For $k\in \N$, let $W_k = (A^* b a^*)^k (A^* B a^*)^k$ and let $M_k$ be the sublanguage of $W_k$ consisting of all those words \[w = (A^{m_1} b a^{n_1}) \ldots (A^{m_k} b a^{n_k}) (A^{m_{k+1}} B a^{n_{k+1}}) \ldots (A^{m_{2k}} B a^{n_{2k}})\] satisfying the following: (i) $m_i = n_i$ for all $i$; (ii) $n_i < m_{i+1}$ for $i\notin \{k,2k\}$. Each of (i) and (ii) can be checked by a pushdown automaton, so $M_k$ is the intersection of two context-free languages and the regular language $W_k$ and is thus $2$-$\cal{CF}$. Now let $L_k = W\left(G,\{a,b\}\right)\cap M_k$. Then $L_k$ consists of all words of the form \[b^{a^{m_1}}\cdots b^{a^{m_k}} B^{a^{m_{k+1}}}\cdots B^{a^{m_{2k}}} =_G 1,\] with $m_i\in \N_0$ for all $i$, and $m_i < m_{i+1}$ for $i\notin \{k,2k\}$.
Since the conjugates of $b$ in such a word are all distinct, for each $1\leq i\leq k$ we must have some $1\leq j\leq k$ such that $m_{k+j} = m_i$. But since $m_i < m_{i+1}$ and $m_{k+i} < m_{k+i+1}$ for all $1\leq i\leq k-1$, this means $m_i = m_{k+i}$ for all $1\leq i\leq k$. When we take $\Phi(L_k)$, we can ignore the $b$'s and $B$'s, since these would contribute nothing to the aspects of the structure of the resulting subset of $\N_0^{6k}$ that interest us. For our purposes it is equivalent and more straightforward to consider $\Phi(L_k)$ as a subset of $\N_0^{4k}$, thus: \[\Phi(L_k) = \{ (m_1,m_1,\ldots,m_k,m_k,m_1,m_1,\ldots,m_k,m_k) \mid m_i\in \N_0,\; m_i < m_{i+1}\; (1\leq i\leq k-1)\}.\] We see that $\Phi(L_k)$ is a $k$-dimensional subset of the set $S^{(2,k)}$ studied in Section~\ref{L(n,k) section}. Thus $\Phi(L_k)$ cannot be expressed as an intersection of $k-1$ stratified semilinear sets, by Corollary~\ref{kS(n,k)}. Hence $L_k$ is not $(k-1)$-$\cal{CF}$, by Corollary~\ref{polyParikh}. Since $L_k$ is the intersection of $W(G)$ with a $2$-$\cal{CF}$ language, this implies that $W(G)$ is not $(k-3)$-$\cal{CF}$ for any $k\in \N$ and so $G$ is not poly-$\cal{CF}$. \end{proof} \subsection{The groups $G(\c)$}\label{Gc-sec} The groups $G(\c)$ were defined in \cite{BH} and play an important role in the main results of that paper, which we shall be applying in order to prove certain cases of Conjecture~\ref{polyconj}.
For $\c = (c_0,\ldots,c_s)\in \Z^{s+1}$ with $s \ge 1$, $c_0,c_s\neq 0$ and $\gcd(c_0,\ldots,c_s) = 1$, the group $G(\c)$ is defined by the presentation $\left\langle a, b \mid {\cal R}_{\c}\right\rangle$, where \[{\cal R}_{\c} = \left\{[b, b^{a^i}] \; (i\in \Z), \; b^{c_0}(b^a)^{c_1}\cdots(b^{a^s})^{c_s}\right\}.\] We call such groups {\em Gc-groups}, and when we refer to the Gc-group $G(\c)=\left\langle x,y\right\rangle$, we assume that $\c \in \Z^{s+1}$ satisfies the above conditions, and that $x$ replaces $a$ and $y$ replaces $b$ in the above definition of $G(\c)$. Note that here we depart from our usual convention of denoting the $i$-th component of $\c$ by $\c(i)$, as it makes the notation more pleasant. A Gc-group is called \emph{proper} if it is not virtually abelian. As an example, if $\c = (-m,1)$ then $G(\c) = \BS (1,m)$; so the soluble Baumslag-Solitar groups are all Gc-groups. The main result in this section will be that a Gc-group is poly-$\cal{CF}$ if and only if it is virtually abelian. We simplify the notation by setting $b_i = b^{a^i}$ for all $i\in \Z$, and $B = \left\langle b_i \mid i\in \Z\right\rangle$. Since $B$ is an abelian normal subgroup of $G(\c)$ and $G(\c)/B\cong \left\langle a\right\rangle$, we see that Gc-groups have derived length at most $2$. \begin{lemma}\label{Gcpoly} Let $G = G(\c)$ be a Gc-group with $|c_0| = |c_s| = 1$. Then $G$ is polycyclic. \end{lemma} \begin{proof} The relation $b_0^{\pm 1}b_1^{c_1}\cdots b_{s-1}^{c_{s-1}}b_s^{\pm 1} = 1$, together with its conjugates by powers of $a$, implies that $b_0, b_{s+1}\in \langle b_1,\ldots,b_s\rangle$, and hence that $b_i\in \langle b_1,\ldots,b_s\rangle$ for all $i$. Hence $B = \left\langle b_1,\ldots,b_s\right\rangle$; so $G\rhd B\rhd \{1\}$ is a normal series for $G$ with finitely generated abelian factors and $G$ is polycyclic.
\end{proof} Unsurprisingly, different elements of $\Z^{s+1}$ can produce isomorphic Gc-groups: \begin{lemma}\label{reverse} Let $G = G(\c)$, where $\c = (c_0,\ldots,c_s)$ and let $\c' = (c_s,c_{s-1},\ldots,c_0)$. Then $G(\c)\cong G(\c')$. \end{lemma} \begin{proof} Let $G = G(\c) = \left\langle a,b\right\rangle$ and let $x = a^{-1}$ and $y = b_s$. Then $y^{x^i} = b_{s-i}$ for $i\in \Z$, so $ b_0^{c_0}b_1^{c_1}\cdots b_s^{c_s} = y^{c_s} (y^x)^{c_{s-1}}\cdots (y^{x^s})^{c_0}$.\\ Hence $G(\c)\cong \left\langle x,y \right\rangle = G(\c')$. \end{proof} The following proposition, proved in \cite[Proposition~2.4]{BH}, gives a useful embedding of a Gc-group in a semidirect product $\Q^s\rtimes \Z$. \begin{proposition}\label{Gc-embed} Let $G= G(\c)$ be a Gc-group. Let $\{x_1,\ldots,x_s\}$ be a basis for $\Q^s$ over $\Q$ (the rationals under addition), and let $\Z = \left\langle y \right\rangle$. Let $Q = \Q^s\rtimes \Z$, with the action of $y$ on $\Q^s$ being given by the (columns of the) matrix \[A(\c) = \left( \begin{array}{cccc} 0&\ldots&0&-c_0/c_s\\ & & & -c_1/c_s\\ & & & .\\ &I_{s-1}& & .\\ & & & .\\ & & & -c_{s-1}/c_s \end{array}\right). \] Then $G$ is isomorphic to the subgroup $\left\langle x_1,y\right\rangle$ of $Q$. \end{proposition} Next, we give a lemma about powers of the matrix $A(\c)$ defined in the previous proposition. \newpage Let $p$ be a prime. The \emph{$p$-adic valuation} $v_p: \Q\rightarrow \Z\cup \{\infty\}$ is given by \begin{itemize} \item $v_p(0) = \infty$; \item $v_p(m/n) = d_m - d_n$ for $m,n\in \Z, \; n\neq 0$, where $d_k := \max \{ i\in \N_0 \mid p^i | k\}$ for all $k\in \Z$. \end{itemize} We shall be concerned with powers of a prime occurring in the denominator of various rational numbers. Therefore, rather than $v_p$, we shall always be using $-v_p$, which, because of the frequency of its occurrence, we shall denote by $\vp$. Note that if $\vp (a) < \vp (b)$, then $\vp (a+b) = \vp (b)$.
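As a quick illustration of these definitions, take $p = 3$:

```latex
% v_3(5/18): here d_5 = 0 and d_18 = 2, since 18 = 2 * 3^2, so
\[ v_3\left(\tfrac{5}{18}\right) = 0 - 2 = -2, \qquad
   \vp\left(\tfrac{5}{18}\right) = 2; \]
% and, illustrating the final remark with a = 1/3 and b = 1/9:
\[ \vp\left(\tfrac{1}{3}\right) = 1 < 2 = \vp\left(\tfrac{1}{9}\right), \qquad
   \vp\left(\tfrac{1}{3} + \tfrac{1}{9}\right) = \vp\left(\tfrac{4}{9}\right)
   = 2 = \vp\left(\tfrac{1}{9}\right). \]
```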
The lemma is stated in slightly more generality than we require, as it is just as easy to prove the more general result. \begin{lemma}\label{matrix} Let $M$ be a matrix of the form \[ \left(\begin{array}{cc} 0\ldots 0 & a_1\\ & a_2\\ I_{s-1} & . \\ & .\\ & a_s\\ \end{array}\right), \] where all $a_i\in \Q$ and at least one $a_i\notin \Z$. Write $M^k = (m_{ij}^{(k)})$ for $k\in \N$. Then there exist $N\in \{1,\ldots,s\}$ and a prime $p$ such that, for every $k\in \N$, there exists some $i_k\leq ks$ with $\vp (m_{Ns}^{(i_k)}) \geq k$. \end{lemma} \begin{proof} Choose some $a_j\notin \Z$, and let $p$ be a prime such that $\vp (a_j) > 0$. Let $n = \max \{\vp (a_i) \mid 1\leq i\leq s\}$ and let $N = \max\{i \mid \vp(a_i)=n\}$. For $k\in \N$, denote the entry in the $N$-th row and $s$-th column of $M^k$ by $m_k$. Note that for $k\geq 2$ and $1\leq i\leq s-1$, the $i$-th column of $M^k$ is the same as the $(i+1)$-th column of $M^{k-1}$. Thus the $N$-th row of $M^k$ is $(\epsilon_1,\ldots,\epsilon_{s-k},m_1,\ldots,m_k)$ if $k<s$, with $\epsilon_i\in \{0,1\}$; and $(m_{k-s+1},\ldots,m_{k-1},m_k)$ if $k\geq s$. For convenience, rename $\epsilon_1,\ldots,\epsilon_{s-k}$ as $m_{k-s+1},\ldots,m_{0}$, so that we can write the $N$-th row of $M^k$ in the second form in both cases. Notice that $m_k$ is in the $N,s-i$ position in $M^{k+i}$. In particular, we have $m_k$ in the $N,N$ position of $M^{k+s-N}$ for all $k\in \N$. For $k\in \N$, define $i_k$ to be the minimal natural number such that $\vp (m_{i_k})\geq k$ if such a number exists, or $\infty$ otherwise. To begin with, we have $i_1 = 1$, since $\vp(m_1) = \vp(a_N) = n\geq 1$. We shall show by induction on $k$ that $i_k\leq ks$ for all $k\in \N$, hence proving the lemma. Fix $k\in \N$ and suppose that $i_k\leq ks$. Let $j_k = i_k + s - N$ and consider $M^{j_k}$. The $N$-th row of this matrix is $(m_{j_k-s+1},\ldots,m_{i_k},\ldots,m_{j_k-1},m_{j_k})$. 
Note that $\vp (m_{i_k})\geq k$ and $\vp (m_i) < k$ for $1\leq i<i_k$, by the minimality of $i_k$. For $i\leq 0$, we have $m_i\in \{0,1\}$ and so $\vp(m_i)\in \{0,-\infty\}$. Thus $\vp (m_{i}) < k$ for all $i<i_k$. Note also that $j_k + 1 = i_k + s-N +1\leq ks+s = (k+1)s$. We may assume that $\vp (m_i)\leq k$ for all $i\leq j_k$, since otherwise we would have $i_{k+1}\leq j_k < (k+1)s$ and we would be done. Now \begin{align*} m_{j_k+1} &= (m_{j_k-s+1},\ldots,m_{i_k},\ldots,m_{j_k})\cdot(a_1,\ldots,a_N,\ldots,a_s)\\ &= \sum_{i=1}^s m_{j_k-s+i}a_i = \sum_{i=1}^s m_{i_k-N+i}a_i. \hspace{2cm} (*) \end{align*} We have $\vp (m_{i_k-N+i}a_i) = \vp (m_{i_k-N+i})+\vp(a_i)$ for $1\leq i\leq s$. In particular, $\vp (m_{i_k}a_N) = \vp (m_{i_k})+n\geq k+n$. By the maximality of $N$, we have $\vp(a_i)<n$ for all $i>N$. Since also $\vp(m_{i_k-N+i})<k$ for $i<N$, we thus have $\vp(m_{i_k-N+i}a_i) < k+n$ for $i\neq N$. So the $N$-th term of $(*)$ has strictly greater negative $p$-adic value than the other terms and hence \[\vp (m_{j_k+1}) = \vp (m_{i_k}a_N)\geq k+n\geq k+1,\] therefore $i_{k+1}\leq j_k+1 \leq (k+1)s$, as required. \end{proof} We are now ready to prove the main result of this section. \begin{proposition}\label{Gc} A Gc-group is poly-$\cal{CF}$ or co$\CF$ if and only if it is virtually abelian. \end{proposition} \begin{proof} Let $G = G(\c)$ be a proper Gc-group with $\c\in \Z^{s+1}$. If $|c_0| = |c_s| = 1$, then $G$ is polycyclic and hence not poly-$\cal{CF}$ by Proposition~\ref{polynil}. Hence if $|c_s| = 1$, we may assume $|c_0|\neq 1$. By Lemma~\ref{reverse}, $G$ is isomorphic to $G(\c')$, where $\c' = (c_s,c_{s-1},\ldots,c_0)$. Thus we may assume that $|c_s|\neq 1$. By Proposition~\ref{Gc-embed}, we can identify $G$ with the subgroup $\left\langle x_1,y\right\rangle$ of $Q = \Q^s\rtimes \Z$, where $\{x_1,\ldots,x_s\}$ is a basis for $\Q^s$ over $\Q$, $\Z = \left\langle y\right\rangle$, and $y$ acts on $\Q^s$ by the matrix $A(\c)$ given in the proposition.
Let $M = A(\c)$ and use the notation of Lemma~\ref{matrix} for entries of $M^k$. Since $|c_s|\neq 1$ and $\gcd(c_0,\ldots,c_s) = 1$, some $c_i/c_s$ for $0\leq i\leq s-1$ is not an integer. Thus $M$ satisfies the hypothesis of Lemma~\ref{matrix}. Hence there exist $I\in \{1,\ldots,s\}$ and a prime $p$ such that, for every $k\in \N$, there exists some $\iota_k\leq ks$ such that $\vp (m_{Is}^{(\iota_k)})$ is at least $k$. For $k\in \N$, let \[ \ell_k = \min\left\{ \ell\in \N \mid \ell m_{is}^{(k)}\in \Z \;(1\leq i\leq s)\right\}.\] This is the smallest positive integer $\ell$ such that the final column of $\ell M^k$ has all integer entries. We are especially interested in the matrices $M^{\iota_k}$, and so it will be convenient to set $\lambda_k = \ell_{\iota_k}$. Since $\vp (m_{Is}^{(\iota_k)})\geq k$, we have $\lambda_k\geq p^k$ for all $k\in \N$. We can take an increasing sequence of natural numbers $n_1, n_2, \ldots$ such that, for all $i\in \{1,\ldots,s\}$, the entries $m_{is}^{(\iota_{n_k})}$ are either nonnegative for all $k\in \N$, or negative for all $k\in \N$. In the first case we say that $i$ is of Type 1, while in the second case $i$ is of Type 2. We are now ready to define a bounded sublanguage of $W(G)$ which we can show to be not poly-$\cal{CF}$ using Corollary~\ref{parikh}. Let $X = \{x_1,\ldots,x_s,y\}$ and consider the intersection of $W(G,X)$ with the bounded context-free language \[ L' = \cup_{k\in \N_0} (y^{-1})^k x_s^* y^k (x_1^{\epsilon_1})^* (x_2^{\epsilon_2})^* \ldots (x_s^{\epsilon_s})^*,\] where $\epsilon_i = (-1)^j$ if $i$ is of Type $j$. Let $L = \Phi \left(W(G,X)\cap L'\right)$. The final column of $M^k$ represents the action of $y^k$ on $x_s$.
Specifically, \[ x_s^{y^k} = x_1^{m_{1s}^{(k)}}\cdots x_I^{m_{Is}^{(k)}}\cdots x_s^{m_{ss}^{(k)}}.\] For $\lambda\in \Z$ and $k\in \N$, the element $\left((x_s^\lambda)^{y^k}\right)^{-1}$ of $G$ can be expressed as a word in $(x_1^{\epsilon_1})^* (x_2^{\epsilon_2})^* \ldots (x_s^{\epsilon_s})^*$ if and only if $\ell_k | \lambda$. For all $k\in \N$, we thus have $(\iota_k,\lambda,\iota_k;\v)\in L$, where $\v\in \N_0^s$, if and only if $\ell_{\iota_k} = \lambda_k | \lambda$ and $\v(i) = \lambda |m_{is}^{(\iota_k)}|$ for $1\leq i\leq s$. Let $\tau$ be the permutation $(2,3)$. Then for all $k\in \N$, we have $(\iota_k,\iota_k;\v)\in \tau(L)$, where $\v\in \N_0^{s+1}$, if and only if $\lambda_k | \v(1)$ and $\v(i+1) = \v(1) |m_{is}^{(\iota_k)}|$ for $1\leq i\leq s$. For $k\in \N$, let $\a_k = (\iota_{n_k},\iota_{n_k})$ and let $\b_k\in \N_0^{s+1}$ with $\b_k(1) = \lambda_{n_k}$ and $\b_k(i+1) = \lambda_{n_k} |m_{is}^{(\iota_{n_k})}|$ for $1\leq i\leq s$. So $(\a_k;\b)\in \tau(L)$ if and only if $\b$ is a nonnegative integer multiple of $\b_k$. For any $t\in \N$, there exists $N\in \N$ such that, for all $k\geq N$, \[t\sigma(\a_k) = 2t\iota_{n_k}\leq 2tsn_k < p^{n_k}\leq \lambda_{n_k} = \b_k(1).\] Thus, for any $k\geq N$, $\a_k$ satisfies the first two conditions of Proposition~\ref{prop} with respect to $t$. We can take $k$ such that $n_k\geq t$. For any two distinct $\b$ and $\b'$ such that $(\a_k;\b), (\a_k;\b')\in \tau(L)$, there are distinct $\lambda_1,\lambda_2\in \N_0$ such that \begin{align*} |\b(1) - \b'(1)| &= |\lambda_1\b_k(1) - \lambda_2\b_k(1)|\\ &= |\lambda_1 - \lambda_2|\lambda_{n_k}\geq p^{n_k}\geq p^t. \end{align*} Since $f(t) = p^t$ is an unbounded function, this shows that $\a_k$ also satisfies the third condition of Proposition~\ref{prop} with respect to $t$. Thus $\tau(L)$ is not a semilinear set and so $W(G,X)\cap L'$ is neither poly-$\cal{CF}$ nor co$\CF$, by Corollary~\ref{parikh}.
Since $L'$ is context-free, this implies that $W(G,X)$ is neither poly-$\cal{CF}$ nor co$\CF$. \end{proof} \section{Soluble poly-$\CF$ groups} In the case of soluble groups, Conjecture~\ref{polyconj} simplifies to \begin{conjecture}\label{solconj} A finitely generated soluble group is poly-$\cal{CF}$ if and only if it is virtually abelian. \end{conjecture} Using Theorem~\ref{solsubs} and the fact that the class of poly-$\CF$ groups is closed under taking finitely generated subgroups (Proposition~\ref{closureprops}), we can make some progress towards resolving Conjecture~\ref{solconj}. \begin{theorem}\label{polycf-sol} If $G$ is a finitely generated poly-$\cal{CF}$ soluble group, then one of the following must hold: \begin{enumerate} \item $G$ is virtually abelian; or (possibly) \item $G$ has a finitely generated subgroup $H$ with an infinite normal torsion subgroup $U$ such that $H/U$ is either free abelian or isomorphic to a proper Gc-group. \end{enumerate} The second case does not occur if $G$ is metabelian or torsion-free. \end{theorem} \begin{proof} By Theorem~\ref{solsubs}, if $G$ is a finitely generated soluble group which does not satisfy (i) or (ii), then $G$ has a subgroup isomorphic to $\Z^\infty$ or a proper Gc-group. If $G$ has a $\Z^\infty$ subgroup, then $G$ has free abelian subgroups of rank $k$ for all $k\in \N$ and so is not poly-$\CF$ by Lemma~\ref{rankk}. If $G$ contains a proper Gc-group, then $G$ is not poly-$\CF$ by Proposition~\ref{Gc}. If $G$ is torsion-free, then by definition $G$ has no non-trivial torsion subgroups. If $G$ is metabelian, then the subgroup $H$ in the second case can be taken to be $C_p\wr \Z$ for some prime $p$, and hence $G$ is not poly-$\CF$ by Proposition~\ref{wreathp}. \end{proof} We conjecture that the second case does not occur at all, but have been unable to prove this so far.
In order to complete the proof of Conjecture~\ref{solconj}, we need only show that a finitely generated soluble group $G$ having an infinite torsion subgroup $U$ such that $G/U$ is either free abelian or isomorphic to a proper Gc-group is not poly-$\cal{CF}$. One way of approaching this which looks promising would be to show that a poly-$\cal{CF}$ group cannot have an infinite torsion subgroup. We know that context-free groups cannot have infinite torsion subgroups, because they are virtually free. Actually, we conjecture something stronger, which again is true in the case of context-free groups. \begin{conjecture}\label{torsionconj} If a group $G$ is poly-$\cal{CF}$, then $G$ does not have arbitrarily large finite subgroups. \end{conjecture} So far, the author's approaches towards this conjecture, from the perspective of automata theory, have not succeeded. It may be that an approach using grammars would be more fruitful. \subsection{An example of the undetermined case} We give a proof of non-poly-context-freeness in a specific example of the second case of Theorem~\ref{polycf-sol}. If $\left\langle X \mid R\right\rangle$ is a group presentation, we denote the abelianisation of the group with this presentation by $\mathrm{Ab}\left\langle X \mid R\right\rangle$. This enables us to write shorter presentations for abelian groups, by omitting the commutators of generators from the relator set. We call such a presentation an \emph{abelian presentation}. \begin{proposition}\label{abc} Let $p$ be a prime and let $G$ be the group given by the following presentation. \[ \begin{array}{ll} \langle a, b_i \; (i\in \Z), c_j \; (j>0) \mid & b_i^a = b_{i+1} \; (i\in \Z), \; [b_i, b_{i+j}] = c_j \; (i\in \Z, j>0),\\ & b_i^p = c_j^p = 1 \; (i\in \Z, j>0),\; c_j\; \mathrm{central}\; (j>0) \rangle. \end{array} \] Then $G$ has derived length $3$ and satisfies (ii) of Theorem~\ref{polycf-sol}, and is not poly-$\cal{CF}$. 
\end{proposition} \begin{proof} In this proof, we shall always assume that the indices on the right hand side of a presentation run over all available values (specified on the left hand side). This prevents the presentations from becoming too cluttered. With this convention, the presentation for $G$ is simplified to \[ \left\langle a, b_i \; (i\in \Z), c_j \; (j>0) \mid b_i^a = b_{i+1},\; [b_i, b_{i+j}] = c_j, b_i^p = c_j^p = 1,\; c_j\; \mathrm{central}\right\rangle. \] Let $H$ be the group defined by the subpresentation \[ \left\langle b_i \; (i\in \Z), c_j \; (j>0) \mid [b_i, b_{i+j}] = c_j, b_i^p = c_j^p = 1,\; c_j\; \mathrm{central}\right\rangle. \] Then $a$ acts on $H$ by conjugation as an automorphism of infinite order, so $G\cong H\rtimes \left\langle a\right\rangle$ and $G/H\cong \Z$. Thus $G$ satisfies the second case of Theorem~\ref{polycf-sol}, with $U = H$. Since $G\rhd H\rhd \left\langle c_j \; (j>0)\right\rangle\rhd \{1\}$ is a normal series for $G$ with abelian factors, $G$ has derived length at most $3$. By standard results on `Darstellungsgruppen' (covering groups) in \cite[Chapter V.23]{Hup}, in the group $E_n$ given by the presentation \[ \left\langle b_i \; (-n\leq i\leq n), c_{ij} \; (-n\leq i < j\leq n) \mid [b_i,b_j] = c_{ij}, b_i^p = c_{ij}^p = 1, c_{ij} \; \rm{central}\right\rangle, \] the subgroup generated by all the $c_{ij}$ (which is $E'_n$) has the abelian presentation $\mathrm{Ab}\left\langle c_{ij} \; (-n\leq i<j \leq n) \mid c_{ij}^p\right\rangle$. Let $E$ be the union of the ascending sequence of groups $E_1, E_2, \ldots$. Then $E' = \cup_{n\in \N} E'_n$, with presentation $\mathrm{Ab}\left\langle c_{ij} \; (i,j\in \Z, \; i<j) \mid c_{ij}^p\right\rangle$. Our subgroup $H$ of $G$ is obtained from $E$ by quotienting out the subgroup $N:=\left\langle c_{0,j-i}c_{ij}^{-1} \mid i<j\right\rangle$ and setting $c_j = c_{0j}$ for all $j>0$.
The subgroup of $H$ generated by all the $c_j$ is isomorphic to $E'/N$, and thus has abelian presentation \[ \mathrm{Ab}\left\langle c_j\; (j>0) \mid c_j^p\right\rangle. \] In particular, all $c_j$ are non-trivial and so $H$ is not abelian, and therefore $G$ has derived length $3$. Let $b = b_0$, $B = B_0$ and let $M_k$ be the sublanguage of \[ W_k = (BA^*Ba^* bA^*ba^*)^k (BA^*ba^* bA^*Ba^*)^k \] consisting of all those words \[ \begin{array}{l} (BA^{m_1}Ba^{n_1} bA^{\mu_1}ba^{\nu_1})\ldots (BA^{m_k}Ba^{n_k} bA^{\mu_k}ba^{\nu_k}) (BA^{m_{k+1}}ba^{n_{k+1}} bA^{\mu_{k+1}}Ba^{\nu_{k+1}})\\ \ldots (BA^{m_{2k}}ba^{n_{2k}} bA^{\mu_{2k}}Ba^{\nu_{2k}}) \end{array} \] such that: (i) $m_i = n_i = \mu_i = \nu_i$ for all $i$; (ii) $m_i < m_{i+1}$ for $i\notin \{k,2k\}$. The first condition can be checked by two pushdown automata, one checking that $m_i = n_i$ and $\mu_i = \nu_i$ for all $i$, and the other checking that $m_i = \mu_i$ for all $i$. The second condition can be checked by a single pushdown automaton. Thus $M_k$ is $3$-$\cal{CF}$. A word in $M_k$ is equal in $G$ to \[ [b, b_{m_1}]\cdots [b, b_{m_k}] [b, B_{m_{k+1}}] \cdots [b, B_{m_{2k}}] = c_{m_1}\cdots c_{m_k} (c_{m_{k+1}})^{-1}\ldots (c_{m_{2k}})^{-1},\] with $m_i < m_{i+1}$ and $m_{k+i} < m_{k+i+1}$ for $1\leq i\leq k-1$. Let $L_k = \Phi\left(W(G)\cap M_k\right)$. As in the proof of Proposition~\ref{wreathp}, we can ignore the $b$'s and $B$'s and take $L_k$ to be a subset of $\N_0^{8k}$. Since the $c_{m_i}$ are distinct for $1\leq i\leq k$ and \[\left\langle c_j \mid j>0\right\rangle = \mathrm{Ab} \left\langle c_j\; (j>0) \mid c_j^p\; (j>0)\right\rangle,\] the only way that a word in $M_k$ can be in $W(G)$ is if some $m_{k+j} = m_i$ for each $1\leq i\leq k$. 
But since $m_i < m_{i+1}$ and $m_{k+i} < m_{k+i+1}$ for $1\leq i\leq k-1$, this implies that $m_i = m_{k+i}$ for $1\leq i\leq k$ and so $L_k$ is the set of all $8k$-tuples of the form \[ (m_1,m_1,m_1,m_1,\ldots,m_k,m_k,m_k,m_k, m_1,m_1,m_1,m_1,\ldots,m_k,m_k,m_k,m_k),\] with $m_i\in \N_0$, and $m_i < m_{i+1}$ for $1\leq i\leq k-1$. Thus $L_k$ is a $k$-dimensional linear subset of the set $S^{(4,k)}$ introduced in Section~\ref{L(n,k) section}, and is therefore not an intersection of $k-1$ stratified semilinear sets, by Corollary~\ref{kS(n,k)}. By Corollary~\ref{polyParikh}, this means that $W(G)\cap M_k$ is not $(k-1)$-$\cal{CF}$. Since $M_k$ is $3$-$\cal{CF}$, this implies that $W(G)$ is not $(k-4)$-$\cal{CF}$ for any $k\in \N$. Hence $G$ is not poly-$\cal{CF}$. \end{proof} Quotienting out a proper subgroup of $\left\langle c_j \; (j>0)\right\rangle$ in the group $G$ in Proposition~\ref{abc} results in another group of derived length $3$ satisfying (ii) of Theorem~\ref{polycf-sol}. We do not know how to show that such quotients are not poly-$\CF$ except in some very specific cases. \textbf{Acknowledgements} I am immensely grateful to my Ph.D. supervisor, Derek Holt, for many helpful and inspiring discussions and suggestions.\\ This research was supported by a Vice Chancellor's Scholarship from the University of Warwick.
Q: CLion/CMake can find SDL in one project, but not another

I have two C++ projects in CLion, both using SDL. For whatever reason, one of these projects can find SDL just fine. For the other, I can write code as if SDL was found (with autocompletion and everything), but when it comes to building the project, I am informed that fatal error: 'SDL2/SDL.h' file not found.

The CMakeLists.txt files of both projects are basically identical. CMakeLists.txt for the project that doesn't work:

    cmake_minimum_required(VERSION 3.12)
    project(Snake)

    set(CMAKE_CXX_STANDARD 11)

    include_directories(.)
    find_package(SDL2 REQUIRED)

    add_executable(${PROJECT_NAME} main.cpp SnakeBlock.cpp SnakeBlock.h SnakeGame.cpp SnakeGame.h SnakeBoard.cpp SnakeBoard.h)
    target_link_libraries(${PROJECT_NAME} ${SDL2_LIBRARY})

And CMakeLists.txt for the project that does work:

    cmake_minimum_required(VERSION 3.12)
    project(Tetris)

    set(CMAKE_CXX_STANDARD 11)

    include_directories(.)
    find_package(SDL2 REQUIRED)

    add_executable(${PROJECT_NAME} Game.cpp Game.hpp main.cpp Playfield.cpp Playfield.hpp Tetromino.cpp Tetromino.hpp)
    target_link_libraries(${PROJECT_NAME} ${SDL2_LIBRARY})

And for both projects, SDL2 appears under the "External Libraries" section on the right sidebar, under Header Search Paths. What gives?
Vehicle photos published in web listings of cars for sale typically include dealership logos, phone numbers, website addresses, and other calls to action. But on e-commerce sites, the conventional approach to product photos includes none of these elements. Product photos are plain and simple, letting the shopper focus on the product. Is there a reason that dealers take a different approach with their online listings, or is this more of a bad customary practice embraced by the majority? To discuss, we welcome Veteran Automotive Business Strategist Brian Miller, who explains why e-commerce vendors take the approach they do, and why dealers may want to consider following suit. Catch the original live-streamed recording on our YouTube channel with this link to see the visuals discussed in this podcast episode.
The dollar (ISO 4217 code GYD) has been the currency of Guyana (the former British Guiana) since 1839. It is normally abbreviated with the dollar sign $, or sometimes GY$ to distinguish it from other currencies called dollar. Since 1955 it has been subdivided into 100 cents, although coins denominated in cents are no longer used because of inflation.

History

The dollar was introduced in 1839. It was worth 4 shillings and 2 pence of the British pound sterling and replaced the previous guilder at a rate of 1 dollar = 3⅛ guilders. From 1935 the British Guiana dollar was equivalent to the British West Indies dollar (BWI$). Production of paper money specific to British Guiana ended in 1942, and the local banknotes were replaced by BWI$ notes in 1951. In 1955 the BWI$ was decimalized and coinage was issued in the name of the "British Caribbean Territories, Eastern Group". In 1965 the East Caribbean dollar (EC$) replaced the BWI$ and circulated in British Guiana for a year until, following independence in 1966, the Guyanese dollar was introduced, replacing the East Caribbean dollar at par.

Coins

After the introduction of the dollar in 1839, British coins continued to circulate alongside the 2 and 4 pence coins issued in the other countries of the British West Indies. The 2 pence coins issued in 1838, 1843 and 1848 were of the Maundy money standard, while the 4 pence coins bore the image of Britannia. Between 1891 and 1916, 4 pence coins were issued specifically for "British Guiana and West Indies", and between 1917 and 1945 for "British Guiana". In 1916 the Government of British Guiana also issued paper money for the first time, in denominations of 1, 2, 5, 20 and 100 dollars. In 1966 coins were introduced in denominations of 1, 5, 10, 25 and 50 cents.
The 1 and 5 cent coins were struck in nickel-brass, and the other denominations in cupronickel. In 1996 high inflation led to the introduction of coins of 1, 5 and 10 dollars. The 1 and 5 dollar coins are struck in copper-plated steel. The 10 dollar coins are struck in nickel-plated steel and are heptagonal in shape.

Banknotes

Private banknotes were introduced at the end of the 19th century by the British Guiana Bank and the Colonial Bank. Both issued notes of 5, 20 and 100 dollars. The British Guiana Bank issued banknotes until 1907, and the Colonial Bank until 1917. The Colonial Bank was acquired by Barclays Bank, which issued notes in denominations of 5, 10, 20 and 100 dollars between 1926 and 1941. In 1909 the Royal Bank of Canada introduced 100 dollar notes, followed in 1913 by 5 and 20 dollar notes. From 1920 the notes also carried their value in pounds sterling. The 100 dollar notes were issued until 1920, and the 5 and 20 dollar notes until 1938. In 1966, with independence, new banknotes were introduced in denominations of 1, 5, 10 and 20 dollars. A second series, issued between 1989 and 1992, consists of 20, 100 and 500 dollar notes. The 1996-1999 series comprises notes of 20, 100, 500 and 1000 dollars. The 2000-2002 series comprises notes of 500 and 1000 dollars. New 100 and 1000 dollar banknotes with new security features were issued in 2005.
## Mathematicians Of The Day

### 13th May

#### Quotation of the day

##### From Stan Ulam

... there's nothing new under the sun - everything can be traced back to Archimedes or even earlier.
\section{Introduction}\label{sec:intro} The reliable computer simulation of phenomena where acoustic waves are scattered by obstacles is of great importance in many applications. These include for example the modelling of sonar and other methods of acoustic location, as well as outdoor noise propagation and control, especially stemming from automobiles, railways or aircraft. Since an analytical solution of scattering problems is in general impossible, numerical approaches are called for. Most acoustic scattering problems may be formulated in the frequency domain by employing the Helmholtz equation: assume an acoustic wave encounters an impenetrable, bounded obstacle $D\subset \mathbb{R}^3$, having a Lipschitz smooth boundary $S \mathrel{\mathrel{\mathop:}=}\partial D$, and, as a consequence, gets scattered. Then, given the incident plane wave $u_{\text{inc}}({\bs x}) = e^{i\kappa\langle{\bs d}, {\bs x}\rangle}$ with known wavenumber $\kappa$ and direction ${\bs d}$, where $\|{\bs d}\|_2=1$, the total wave \[ u = u_{\text{inc}}+u_{\text{s}} \] is obtained by solving the exterior boundary value problem \begin{equation}\label{eq:pde} \begin{aligned} \Delta u + \kappa^2 u = 0\quad&\text{in}\ \mathbb{R}^3\setminus\overline{D},\\ u = 0\quad&\text{on}\ S,\\ \sqrt{r}\bigg(\frac{\partial u_{\mathrm{s}}}{\partial r}-i\kappa u_{\mathrm{s}}\bigg) \to 0\quad&\text{as}\ r = \|{\bs x}\|_2\to\infty. \end{aligned} \end{equation} The homogeneous Dirichlet condition at $S$ corresponds to a \emph{sound-soft} obstacle, whereas a homogeneous Neumann condition would correspond to a \emph{sound-hard} obstacle. The function $u_{\mathrm{s}} = u-u_{\mathrm{inc}}$ is called the \emph{scattered wave}. Although we restrict ourselves here to the sound-soft case, the presented concepts are also suitable to treat sound-hard obstacles as well as penetrable obstacles, i.e.~objects described by a refractive index different from that of free space.
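As a quick numerical sanity check (purely illustrative; the wavenumber $\kappa = 2$, the direction ${\bs d} = (2/3,2/3,1/3)$ and the evaluation point are arbitrary choices, not values used later in the paper), one can verify by central finite differences that the incident plane wave indeed satisfies the Helmholtz equation:

```python
import cmath

kappa = 2.0
d = (2 / 3, 2 / 3, 1 / 3)  # unit direction: (2/3)^2 + (2/3)^2 + (1/3)^2 = 1

def u_inc(x):
    # Incident plane wave u_inc(x) = exp(i * kappa * <d, x>).
    return cmath.exp(1j * kappa * sum(di * xi for di, xi in zip(d, x)))

def helmholtz_residual(x, h=1e-3):
    # Second-order central differences for (Laplacian + kappa^2) u_inc at x;
    # the result should vanish up to O(h^2) discretization error.
    lap = 0.0
    for i in range(3):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        lap += (u_inc(xp) - 2.0 * u_inc(x) + u_inc(xm)) / h**2
    return lap + kappa**2 * u_inc(x)

res = helmholtz_residual((0.3, -0.2, 0.7))
```

For the step size $h = 10^{-3}$, the residual is of the order of the $O(h^2)$ truncation error and thus numerically negligible.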
In this article, we consider the situation of a randomly shaped obstacle $D=D({\bs y})$, where \({\bs y}\in\Gamma\subset\mathbb{R}^\mathbb{N}\) is some random parameter. This shape uncertainty might, for example, arise from measurement or modelling errors. As a consequence, the total wave itself becomes a random field $u({\bs y})$. Our goal is to compute the first and second order statistics of the scattered wave, that is, the expectation $\mathbb{E}[u_{\mathrm{s}}]$ and the variance $\mathbb{V}[u_{\mathrm{s}}]$. We especially demonstrate how to compute the scattered wave's second moment in a deterministic fashion from its Cauchy data's second moment on an artificial, fixed interface $T$, which almost surely encloses the domain $D({\bs y})$. In combination with low-rank techniques, this drastically reduces the high dimensionality of the random scattering problem, compare \cite{HIM}. In order to speed up the computation of the Cauchy data's statistics even further, we employ the multilevel quadrature method, see e.g.\ \cite{BSZ11,Gil15,H2,HPS12}. Our approach lies in the {\em domain mapping} category as it transfers the shape uncertainty onto a fixed reference domain, and allows us to deal with large deformations (see \cite{JSZ17,AJZ20}). In contrast, {\em perturbation techniques} resort to shape derivatives to linearize fields for small deviations with respect to both wavelength and scatterers' shape from a nominal, reference geometry. By Hadamard's theorem, the resulting linearized equations for the first order shape sensitivities are homogeneous equations posed on the nominal geometry, with inhomogeneous boundary data only. Using first-order shape Taylor expansions, one can derive tensor deterministic first-kind boundary integral equations for the statistical moments of the scattering problems considered. These are then approximated by sparse tensor Galerkin discretizations via the combination technique (\emph{cf.}~\cite{EJH20} and references therein).
Though successfully applied to three-dimensional Helmholtz Dirichlet, Neumann, impedance and transmission problems \cite{EJH20}, and even to diffraction gratings \cite{SAF18}, these approaches require the random perturbations to be sufficiently small. High-order approaches \cite{HDE18,Dol2020} lead to at least third order accurate approximations with respect to the perturbation amplitude of the domain variations. Finally, in \cite{castrillonc2017hybrid} a hybrid between domain-mapping and perturbation methods was presented. For the numerical realization of random obstacles, we employ the methodology from \emph{isogeometric analysis} (IGA). IGA has been introduced in \cite{HCB05} in order to incorporate simulation techniques into the design workflow of industrial development and thus allows us to deal with domain deformations in a straightforward manner. By representing the geometry and domain deformations by \emph{non-uniform rational B-splines} (NURBS), realizations of the random scatterer can efficiently be computed by simply updating the NURBS mappings which represent the scatterer. In addition, the naturally emerging sequence of nested approximation spaces can directly be employed in multilevel quadrature methods. With regard to the isogeometric boundary element approach for the scattered wave computations, compare \cite{DHK+18,FGHP17,SBTR12,TM12}, we show that all computations can directly be performed at the boundary of the deformed scatterer. This particularly applies to the random deformation field, which only needs to be computed with respect to a reference surface. This way, we can model large deformations without having to deal with very fine volume meshes, which would otherwise be necessary to properly resolve the deformation field within the scatterer. Moreover, the meshing of the unbounded free space is avoided. Therefore, the isogeometric boundary element method is the method of choice for the problem at hand.
For the numerical computations, we rely on the fast isogeometric boundary element method developed in \cite{DHK+20,DHK+18,DHP16, DKSW,HP13}, which is available as the \verb|C++| library \verb+bembel+ \cite{bembel, DHK+20}. In order to speed up computations, \verb+bembel+ utilizes $\mathcal{H}^2$-matrices with the interpolation based fast multipole method \cite{GR87,GR91,HB02}. To our knowledge, the present work constitutes the first fast IGA implementation for time-harmonic acoustic wave scattering for shape uncertainty quantification. Having this fast forward solver at our disposal, we also consider acoustic shape inversion by Bayesian inference: Given noisy measurements of the scattered wave at certain locations in free space, we determine statistics of the uncertain scatterer's shape. To this end, we employ the multilevel ratio estimator, see \cite{DGLS17} and the references therein, and compute the expected shape and its variance. The rest of the article is organized as follows: Section~\ref{sec:randomDomains} is concerned with the modelling of random domains and their parametrization by means of a Karhunen-Lo\`eve expansion. In Section~\ref{sec:DiscRD}, we perform the efficient discretization of the random deformation field by means of isogeometric analysis. In Section~\ref{sec:bie}, we introduce the boundary integral formulation of the problem under consideration and discuss the use of the artificial interface for the representation of the scattered wave and its statistics. Section~\ref{sec:MLQ} briefly recalls the multilevel quadrature method, whereas Section~\ref{sec:bayes} recalls its application to Bayesian inference. Finally, Section~\ref{sec:numex} is devoted to numerical examples showcasing the ideas discussed.
\section{Random domain model}\label{sec:randomDomains} \subsection{Modelling of random domains} In what follows, let $D_{\operatorname{ref}}\subset\mathbb{R}^3$ denote a Lipschitz domain with piecewise smooth surface \(S_{\operatorname{ref}}\mathrel{\mathrel{\mathop:}=} \partial D_{\operatorname{ref}}\) and let \((\Omega,\mathcal{F},\mathbb{P})\) be a complete probability space. We assume that the uncertainty in the obstacle is encoded by a random deformation field, cf.\ \cite{HPS16}. We hence assume the existence of a uniform \(C^1\)-diffeomorphism \({\bs\chi}_{D}\colon\overline{D_{\operatorname{ref}}} \times\Omega\to\mathbb{R}^3\), i.e.\ \begin{equation}\label{eq:unifVfield} \|{\bs\chi_{D}}(\omega)\|_{C^1(\overline{D_{\operatorname{ref}}};\mathbb{R}^3)}, \|{\bs\chi}_{D}^{-1}(\omega)\|_{C^1(\overline{D(\omega)};\mathbb{R}^3)} \leq C_{\operatorname{uni}}\quad\text{for \(\mathbb{P}\)-a.e.\ }\omega\in\Omega, \end{equation} such that \[ D(\omega)={\bs\chi}_{D}(D_{\operatorname{ref}},\omega). \] Particularly, since \({\bs\chi}_{D}\in L^\infty\big(\Omega;[C^1(\overline{D_{\operatorname{ref}}})]^3\big) \subset L^2\big(\Omega;[C^1(\overline{D_{\operatorname{ref}}})]^3\big)\), the deformation field \({\bs\chi}_{D}\) can be represented by a Karhunen-Lo\`eve expansion \cite{Loe77}, which has the form \begin{equation}\label{eq:randomVectorFieldDomain} {\bs\chi}_{D}(\widehat{\bs x},\omega)=\mathbb{E}[{\bs\chi}_{D}](\widehat{\bs x})+ \sum_{k=1}^\infty\sqrt{\lambda_{D,k}}{\bs\chi}_{D,k}(\widehat{\bs x}){Y}_{D,k}(\omega),\quad \widehat{\bs x}\in D_{\operatorname{ref}}.
\end{equation} Herein, \[ \mathbb{E}[{\bs\chi}_{D}](\widehat{\bs x})\mathrel{\mathrel{\mathop:}=}\int_\Omega {\bs\chi}_{D}(\widehat{\bs x},\omega)\d\mathbb{P}(\omega) \] denotes the expectation, while \((\lambda_{D,k}, {\bs\chi}_{D,k})\) are the eigenpairs of the covariance operator $\mathcal{C}_{D}\colon \big[L^2(D_{\operatorname{ref}})\big]^3\to \big[L^2(D_{\operatorname{ref}})\big]^3$, \begin{align}\label{eq:domaincovarianceoperator} (\mathcal{C}_{D}{\bs U})(\widehat{\bs x})\mathrel{\mathrel{\mathop:}=}\int_{{D_{\operatorname{ref}}}} \mathbb{C}\!\operatorname{ov}[{\bs \chi}_{D}](\widehat{\bs x},\widehat{\bs x}'){\bs U}(\widehat{\bs x}')\d\widehat{\bs x}', \end{align} where \[ \mathbb{C}\!\operatorname{ov}[{\bs\chi}_{D}](\widehat{\bs x},\widehat{\bs x}')\mathrel{\mathrel{\mathop:}=}\int_\Omega \big({\bs\chi}_{D}(\widehat{\bs x},\omega)-\mathbb{E}[{\bs\chi}_{D}](\widehat{\bs x})\big) \big({\bs\chi}_{D}(\widehat{\bs x}',\omega)-\mathbb{E}[{\bs\chi}_{D}](\widehat{\bs x}')\big)^\intercal\d\mathbb{P}(\omega). \] It holds that \begin{equation}\label{eq:randvar} Y_{D,k}(\omega)\mathrel{\mathrel{\mathop:}=}\frac{1}{\sqrt{\lambda_{D,k}}}\int_{D_{\operatorname{ref}}} \big({\bs\chi}_{D}(\widehat{\bs x},{\omega})-\mathbb{E}[{\bs\chi}_{D}](\widehat{\bs x})\big)^\intercal{\bs\chi}_{D,k}(\widehat{\bs x})\d\widehat{\bs x}. \end{equation} The family \(\{Y_{D,k}\}_k\) of random variables is therefore uncorrelated and centred. We remark that in uncertainty quantification problems typically only \(\mathbb{E}[{\bs\chi}_{D}]\) and \(\mathbb{C}\!\operatorname{ov}[{\bs \chi}_{D}]\) are known, such that the random variables cannot be inferred via \eqref{eq:randvar}. Instead, their (common) distribution has to be appropriately estimated. \subsection{Modelling of random surfaces} The numerical computation of a Karhunen-Lo\`eve expansion as outlined in the previous subsection will generally require a (volume) finite element mesh for \(D_{\operatorname{ref}}\). 
Moreover, the data \(\mathbb{E}[{\bs\chi_{D}}]\) and \(\mathbb{C}\!\operatorname{ov}[{\bs \chi_{D}}]\) need to be known on the whole reference domain $D_{\operatorname{ref}}$. In contrast, for our boundary element-based approach, we only require realizations of the perturbed boundary. In particular, the following exposition shows that, for the computation of surface realizations, knowledge of \(\mathbb{E}[{\bs\chi}_{D}]\) and \(\mathbb{C}\!\operatorname{ov}[{\bs \chi}_{D}]\) at the boundary $S_{\operatorname{ref}}=\partial D_{\operatorname{ref}}$ is sufficient. Given a function \(\mathbf{g}\colon D_{\operatorname{ref}}\to\mathbb{R}^3\), let \[ (\gamma_0^{\mathrm{int}} \mathbf{g})(\widehat{\bs x})\mathrel{\mathrel{\mathop:}=}\lim_{D_{\operatorname{ref}}\ni\widehat{\bs x}'\to \widehat{\bs x}\in S_{\operatorname{ref}}}\mathbf{g}(\widehat{\bs x}') \] denote the (interior) trace operator and ${\bs\chi}_{S}\mathrel{\mathrel{\mathop:}=}\gamma_0^{\mathrm{int}}{\bs\chi}_{D}$. Since \[ \gamma_0^{\mathrm{int}}\colon\big[C^1\big(\overline{D_{\operatorname{ref}}}\big)\big]^3\subset \big[H^1(D_{\operatorname{ref}})\big]^3\to \big[H^{1/2}(S_{\operatorname{ref}})\big]^3 \] is a continuous operator and the Bochner integral commutes with continuous operators, \cite{HP57}, it holds \[ (\gamma_0^{\mathrm{int}}\mathbb{E}[{\bs \chi}_{D}])(\widehat{\bs x}) = \gamma_0^{\mathrm{int}}\int_\Omega {\bs\chi}_{D}(\widehat{\bs x},\omega)\d\mathbb{P}(\omega) = \int_\Omega {\bs\chi}_{S}(\widehat{\bs x},\omega)\d\mathbb{P}(\omega) = \mathbb{E}[{\bs \chi}_S](\widehat{\bs x}) \] as well as \begin{align*} &(\gamma_0^{\mathrm{int}}\otimes \gamma_0^{\mathrm{int}})\mathbb{C}\!\operatorname{ov}[{\bs\chi}_{D}](\widehat{\bs x},\widehat{\bs x}')\\ &\quad=\int_\Omega \big((\gamma_0^{\mathrm{int}}{\bs\chi}_{D})(\widehat{\bs x},\omega)-(\gamma_0^{\mathrm{int}}\mathbb{E}[{\bs\chi}_{D}])(\widehat{\bs x})\big)\\ &\hspace{6cm}\cdot\big((\gamma_0^{\mathrm{int}}{\bs\chi}_{D})(\widehat{\bs x}',\omega)- 
(\gamma_0^{\mathrm{int}}\mathbb{E}[{\bs\chi}_{D}])(\widehat{\bs x}')\big)^\intercal\d\mathbb{P}(\omega)\\ &\quad=\int_\Omega \big({\bs\chi}_{S}(\widehat{\bs x},\omega)-(\mathbb{E}[{\bs\chi}_{S}])(\widehat{\bs x})\big) \big({\bs\chi}_{S}(\widehat{\bs x}',\omega)- \mathbb{E}[{\bs\chi}_{S}](\widehat{\bs x}')\big)^\intercal\d\mathbb{P}(\omega)\\ &\quad=\mathbb{C}\!\operatorname{ov}[{\bs\chi}_{S}](\widehat{\bs x},\widehat{\bs x}'). \end{align*} Therefore, the random deformation field at \(S_{\operatorname{ref}}\), i.e.\ \({\bs\chi}_{S} (\widehat{\bs x},\omega)\), is fully described by \(\gamma_0^{\mathrm{int}} \mathbb{E}[{\bs \chi}_{D}]\) and \((\gamma_0^{\mathrm{int}}\otimes \gamma_0^{\mathrm{int}})\mathbb{C}\!\operatorname{ov}[{\bs\chi}_{D}](\widehat{\bs x},\widehat{\bs x}')\). For the numerical computation of the deformation field, it is therefore sufficient to compute the eigenpairs \((\lambda_{S,k}, {\bs\chi}_{S,k})\) of the surface covariance operator $\mathcal{C}_S\colon \big[L^2(S_{\operatorname{ref}})\big]^3\to \big[L^2(S_{\operatorname{ref}})\big]^3$ given by \begin{align}\label{eq:surfacecovarianceoperator} (\mathcal{C}_{S}{\bs U})(\widehat{\bs x})\mathrel{\mathrel{\mathop:}=}\int_{S_{\operatorname{ref}}} (\gamma_0^{\mathrm{int}}\otimes \gamma_0^{\mathrm{int}})\mathbb{C}\!\operatorname{ov}[{\bs \chi}_{D}](\widehat{\bs x},\widehat{\bs x}'){\bs U}(\widehat{\bs x}')\d\sigma_{\widehat{\bs x}'}, \end{align} to obtain \begin{equation}\label{eq:randomVectorFieldBoundary} {\bs\chi}_{S}(\widehat{\bs x},\omega)=\gamma_0^{\mathrm{int}}\mathbb{E}[{\bs\chi}_{D}](\widehat{\bs x})+ \sum_{k=1}^\infty\sqrt{\lambda_{S,k}}{\bs\chi}_{S,k}(\widehat{\bs x}){Y}_{S,k}(\omega) \end{equation} with \[ Y_{S,k}(\omega)\mathrel{\mathrel{\mathop:}=}\frac{1}{\sqrt{\lambda_{S,k}}}\int_{S_{\operatorname{ref}}} \big({\bs\chi}_{S}(\widehat{\bs x},{\omega})-\gamma_0^{\mathrm{int}}\mathbb{E}[{\bs\chi}_{D}](\widehat{\bs x})\big)^\intercal{\bs\chi}_{S,k}(\widehat{\bs x})\d\sigma_{\widehat{\bs x}}. 
\] We remark that the computation of the eigenpairs of \eqref{eq:surfacecovarianceoperator} is significantly cheaper than the computation of the ones of \eqref{eq:domaincovarianceoperator} since the former relies only on a surface mesh for $S_{{\operatorname{ref}}}$ rather than a volume mesh for $D_{{\operatorname{ref}}}$. Thus, the discrete system will be significantly smaller. Note, however, that the corresponding eigenfunctions will in general not be traces of the eigenfunctions of the Karhunen-Lo\`eve expansion \eqref{eq:randomVectorFieldDomain} and also the distribution of the random variables will change. In the sequel, we assume that the family \(\{Y_{S,k}\}_k\) is independent and uniformly distributed with \(Y_{S,k}\sim\mathcal{U}(-1,1)\) for all \(k\). Then, we can identify each of the random variables by its image \(y_k\in[-1,1]\) and end up with the parametric deformation field \[ {\bs\chi}_{S}(\widehat{\bs x},{\bs y})=\gamma_0^{\mathrm{int}}\mathbb{E}[{\bs\chi}_{D}](\widehat{\bs x})+ \sum_{k=1}^\infty\sqrt{\lambda_{S,k}}{\bs\chi}_{S,k}(\widehat{\bs x})y_k,\quad {\bs y}\in\Gamma\mathrel{\mathrel{\mathop:}=}[-1,1]^{\mathbb{N}}, \] which gives rise to the random surface \begin{align}\label{eq:randomdomain} S({\bs y}) = \bigg\{{\bs\chi}_S(\widehat{\bs x},{\bs y}): \widehat{\bs x}\in S_{\operatorname{ref}}\bigg\}. \end{align} \section{Isogeometric discretization of random domains}\label{sec:DiscRD} \subsection{Fundamental Notions}\label{sec::subsec::iga} We review the basic notions of isogeometric analysis, restricting ourselves to spaces constructed via locally quasi-uniform $p$-open knot vectors as required by the theory presented in \cite{Beirao-da-Veiga_2014aa,BDK+2020}. \begin{definition}\label{def::splines} Let $p$ and $k$ be integers with $0\leq p< k$. 
A \emph{locally quasi-uniform $p$-open knot vector} is a tuple \begin{align*} \Xi = \big[{\xi_0 = \cdots =\xi_{p}}\leq \cdots \leq{\xi_{k}=\cdots =\xi_{k+p}}\big]\in[0,1]^{k+p+1} \end{align*} with $\xi_0 = 0$ and $\xi_{k+p}=1$ such that there exists a constant $\theta\geq 1$ with $\theta^{-1}\leq h_j\cdot h_{j+1}^{-1} \leq \theta$ for all $p\leq j < k$, where $h_j\mathrel{\mathrel{\mathop:}=} \xi_{j+1}-\xi_{j}$. The B-spline basis $ \lbrace b_j^p \rbrace_{0\leq j< k}$ is then recursively defined according to \begin{align*} b_j^p(x) & =\begin{cases} \mathbbm{1}_{[\xi_j,\xi_{j+1})}&\text{ if }p=0,\\[8pt] \frac{x-\xi_j}{\xi_{j+p}-\xi_j}b_j^{p-1}(x) +\frac{\xi_{j+p+1}-x}{\xi_{j+p+1}-\xi_{j+1}}b_{j+1}^{p-1}(x) & \text{ else,} \end{cases} \end{align*} where $\mathbbm{1}_A$ refers to the indicator function of the set $A$. The corresponding spline space is finally defined according to $\S^p(\Xi)\mathrel{\mathrel{\mathop:}=}\operatorname{span}(\lbrace b_j^p\rbrace_{j <k}).$ \end{definition} To obtain spline spaces in two spatial dimensions, we employ a tensor product construction. More precisely, for a tuple $\boldsymbol{\Xi} =(\Xi_1,$ $\Xi_2)$ and polynomial degrees ${\bs p}=(p_1,p_2)$, we define the spaces \[ \S^{{\bs p}}(\boldsymbol{\Xi})\mathrel{\mathrel{\mathop:}=} \S^{p_1}(\Xi_1)\otimes \S^{p_2}(\Xi_2). \] Given knot vectors $\Xi_1$, $\Xi_2$ with knots $\xi_{i}^m < \xi_{i+1}^m$ for $m=1,2$, sets of the form $[\xi_{j_1}^1,\xi^1_{j_1+1}]\times[\xi^2_{j_2},\xi^2_{j_2+1}]$ will be called \emph{elements}. We reserve the letter $h$ for the maximal diameter of all elements. For further concepts and algorithmic realization of B-splines, we refer to \cite{Piegl_1997aa} and the references therein. 
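For illustration, the recursion of Definition~\ref{def::splines} can be realized in a few lines. The following Python snippet is a didactic sketch, independent of the \verb+bembel+ implementation; it evaluates all basis functions $b_j^p$ at a point $x\in[0,1]$ for a given $p$-open knot vector.

```python
import numpy as np

def bspline_basis(p, knots, x):
    """Evaluate all B-spline basis functions b_j^p at x in [0, 1] by the
    Cox-de Boor recursion, for a p-open knot vector."""
    # degree zero: indicator functions of the knot spans
    b = np.array([1.0 if knots[j] <= x < knots[j + 1] else 0.0
                  for j in range(len(knots) - 1)])
    if x == knots[-1]:  # close the rightmost nonempty span at x = 1
        b[np.nonzero(knots[:-1] < knots[-1])[0].max()] = 1.0
    for q in range(1, p + 1):
        b_new = np.zeros(len(b) - 1)
        for j in range(len(b_new)):
            val = 0.0
            if knots[j + q] > knots[j]:
                val += (x - knots[j]) / (knots[j + q] - knots[j]) * b[j]
            if knots[j + q + 1] > knots[j + 1]:
                val += (knots[j + q + 1] - x) \
                       / (knots[j + q + 1] - knots[j + 1]) * b[j + 1]
            b_new[j] = val
        b = b_new
    return b  # length k = len(knots) - p - 1
```

For a $p$-open knot vector, the returned values are nonnegative and form a partition of unity on $[0,1]$.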
\subsection{Isogeometric boundary representation}\label{sec:surfrep} In the following, we will assume the usual isogeometric setting for the surface $S_{\operatorname{ref}}$ of the reference domain $D_{\operatorname{ref}}$, i.e.\ denoting the unit square by $\square\mathrel{\mathrel{\mathop:}=}[0,1]^2$, we assume that the surface $S_{\operatorname{ref}}$ can be decomposed into several smooth \emph{patches} \[ S_{\operatorname{ref}} = \bigcup_{i=1}^M S_{{\operatorname{ref}}}^{(i)}, \] where the intersection $S_{{\operatorname{ref}}}^{(i)}\cap S_{{\operatorname{ref}}}^{(i')}$ consists at most of a common vertex or a common edge for \(i\neq i^\prime\). In particular, we model each patch $S_{{\operatorname{ref}}}^{(i)}$ as an invertible NURBS mapping \begin{equation}\label{eq:parametrization} {\bs s}_i\colon\square\to S_{{\operatorname{ref}}}^{(i)}\quad\text{ with }\quad S_{{\operatorname{ref}}}^{(i)} = {\bs s}_i(\square) \quad\text{ for } i = 1,2,\ldots,M, \end{equation} where \({\bs s}_i\) is of the form \begin{align*} \mathbf{s}_i(x,y)\mathrel{\mathrel{\mathop:}=} \sum_{i_1=0}^{k_1-1}\sum_{i_2=0}^{k_2-1}\frac{\mathbf{c}_{i_1,i_2} b_{i_1}^{p_1}(x) b_{i_2}^{p_2}(y) w_{i_1,i_2}}{ \sum_{j_1=0}^{k_1-1}\sum_{j_2=0}^{k_2-1} b_{j_1}^{p_1}(x) b_{j_2}^{p_2}(y) w_{j_1,j_2}} \end{align*} for control points $\mathbf{c}_{i_1,i_2}\in \mathbb{R}^3$ and weights $w_{i_1,i_2}>0$. We shall further follow the common convention that parametrizations with a common edge coincide except for orientation. 
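To make the rational map in \eqref{eq:parametrization} concrete, the following Python sketch evaluates one point of a tensor-product NURBS patch from control points, weights, and precomputed B-spline basis values. It is a didactic illustration, not the \verb+bembel+ data structure; the basis-value callbacks are assumed to be supplied by the caller.

```python
import numpy as np

def nurbs_point(ctrl, w, bu, bv):
    """One point of a tensor-product NURBS patch: ctrl[i1, i2] in R^3 are
    control points, w[i1, i2] > 0 weights, and bu, bv the B-spline basis
    values (b_{i1}^{p1}(x))_{i1} and (b_{i2}^{p2}(y))_{i2}."""
    den = np.einsum("i,j,ij->", bu, bv, w)        # rational denominator
    num = np.einsum("i,j,ij,ijd->d", bu, bv, w, ctrl)  # numerator in R^3
    return num / den
```

With all weights equal to one, the rational map reduces to a plain B-spline surface; for a bilinear patch with control points at the corners of the unit square, the map reproduces the identity on the parameter domain.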
\begin{figure}[htb] \begin{center} \begin{tikzpicture}[ scale=.3, axis/.style={thick, ->, >=stealth'}, important line/.style={thick}, every node/.style={color=black} ] \draw (-.5,-1)node{{$0$}}; \draw (8,-1)node{{$1$}}; \draw (-.5,8)node{{$1$}}; \draw (0,0)--(8,0); \draw (0,1)--(8,1); \draw (0,2)--(8,2); \draw (0,3)--(8,3); \draw (0,4)--(8,4); \draw (0,5)--(8,5); \draw (0,6)--(8,6); \draw (0,7)--(8,7); \draw (0,8)--(8,8); \draw (0,0)--(0,8); \draw (1,0)--(1,8); \draw (2,0)--(2,8); \draw (3,0)--(3,8); \draw (4,0)--(4,8); \draw (5,0)--(5,8); \draw (6,0)--(6,8); \draw (7,0)--(7,8); \draw (8,0)--(8,8); \draw (3,3)node(N1){}; \draw (19.3,7)node(N2){}; \draw (22.5,4)node{\includegraphics[width=0.3\textwidth,clip=true,trim=185 540 180 90]{torus.pdf}}; \draw (11.5,8.6)node(N3){\large${\bs s}_i$}; \draw (21,5)node(N4){{\large${S}_{\operatorname{ref}}^{(i)}$}}; \path[->,line width=1pt] (N1) edge [bend left] (N2); \end{tikzpicture} \end{center} \caption{Surface representation and mesh generation.} \label{fig:mesh} \end{figure} Following the spirit of isogeometric analysis, the random surface $S(\bs{y})$ from \eqref{eq:randomdomain} will be represented as a union of NURBS patches. This is achieved by appropriately discretizing the deformation field \eqref{eq:randomVectorFieldBoundary}. More precisely, the random surface $S(\bs{y})$ is discretized by $S(\bs{y})\approx S_h(\bs{y})$, where the latter can be decomposed into $M$ distinct NURBS patches \[ S_h(\bs{y}) = \bigcup_{i=1}^M S_h^{(i)}(\bs{y}). 
\] Herein, the intersection $S_h^{(i)}(\bs{y})\cap S_h^{(i')}(\bs{y})$ again consists at most of a common vertex or a common edge for \(i\neq i^\prime\) and each patch $S_h^{(i)}(\bs{y})$ is given by an invertible mapping \[ {\bs s}_{i,h}(\cdot,\bs{y})\colon\square\to S_h^{(i)}(\bs{y})\quad\text{ with }\quad S_h^{(i)}(\bs{y}) = {\bs s}_{i,h}(\square,\bs{y}) \quad\text{ for } i = 1,2,\ldots,M, \] with \[ \bs{s}_{i,h}(\widehat{\bs x},\bs{y}) = \bs{s}_i(\widehat{\bs x})+{\bs\chi}_{S,h}\big|_i(\widehat{\bs x},\bs{y}). \] First, we note that $\mathbf{s}_{i,h}(\widehat{\bs x},\bs{y})$ is again a NURBS mapping if ${\bs\chi}_{S,h}\big|_i(\widehat{\bs x},\mathbf{y})$ is discretized by using appropriate basis functions. In fact, if these basis functions are chosen also as NURBS, the randomness of the surface is translated into transformations of the control points. Second, we note that ${\bs\chi}_{S}$ needs to be at least globally continuous to obtain an admissible surface transformation. Given a tuple of knot vectors $\boldsymbol{\Xi}$ and polynomial degrees ${\bs p}$, a natural choice for the discretization of ${\bs\chi}_{S}(\cdot, \bs{y})$ is thus given by the vector valued spline space \[ \mathbbb{S}_{{\bs p},\boldsymbol{\Xi}}(S_{\operatorname{ref}})=\big[\S_{{\bs p},\boldsymbol{\Xi}}(S_{\operatorname{ref}})\big]^3 \] where \[ \S_{{\bs p},\boldsymbol{\Xi}}(S_{\operatorname{ref}})\mathrel{\mathrel{\mathop:}=} \left\lbrace f\in C(S_{\operatorname{ref}})\colon f|_i\circ \bs{s}_i^{-1}\in \S_{{\bs p}}(\boldsymbol{\Xi})\text{ for }1\leq i\leq M\right\rbrace. \] Of course, the knot vectors and polynomial degrees could vary in each component and on each patch, but we use the same knots and degrees throughout for better readability. Approximation properties for these spaces were derived in \cite{BDK+2020}. 
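Since the deformation field is discretized in a spline basis, realizing a patch of $S_h(\bs{y})$ amounts to shifting control points. A minimal Python sketch of this step follows; for simplicity it assumes unit weights (plain B-spline patches) and that the Karhunen-Lo\`eve modes are already expressed by control-point arrays, both of which are illustrative assumptions.

```python
import numpy as np

def realize_control_points(c0, modes, sqrt_lambdas, y):
    """Control points of one patch of S_h(y): the spline-discretized
    deformation shifts the reference control points c0 according to
    c(y) = c0 + sum_k sqrt(lambda_k) * C_k * y_k, where the arrays C_k
    hold the control points of the k-th KL mode (unit weights assumed)."""
    c = c0.copy()
    for sl, Ck, yk in zip(sqrt_lambdas, modes, y):
        c += sl * yk * Ck
    return c
```

The perturbed patch is then evaluated with the unchanged B-spline basis, so each realization remains a spline surface of the same degree and knot vectors.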
Next, we discuss how an approximation ${\bs\chi}_{S,h}$ of ${\bs\chi}_{S}$ in terms of such basis functions can be derived by computing the Karhunen-Lo\`eve expansion \eqref{eq:randomVectorFieldBoundary} of the underlying random deformation. \subsection{Fast computation of the Karhunen-Lo\`eve expansion} The computation of the Karhunen-Lo\`eve expansion of surface deformations from the expectation and the covariance is directly related to the solution of the eigenvalue problem \[ \mathcal{C}_{S}{\bs\chi}_{S,k}=\lambda_{S,k}{\bs\chi}_{S,k}. \] Based on the previous discussion, it is natural to choose a B-spline-based Galerkin discretization for the numerical solution of the eigenvalue problem. Hence, replacing $\big[L^2(S_{\operatorname{ref}})\big]^3$ by a B-spline space $\mathbbb{S}_{{\bs p},\boldsymbol{\Xi}}(S_{\operatorname{ref}})$ in the eigenproblem's weak formulation \begin{align*} &\text{Find $(\lambda_{S,k},{\bs\chi}_{S,k})\in\mathbb{R}\times \big[L^2(S_{\operatorname{ref}})\big]^3$ such that}\\ &\quad(\mathcal{C}_S{\bs\chi}_{S,k},v)_{[L^2(S_{\operatorname{ref}})]^3}=\lambda_{S,k}({\bs\chi}_{S,k},v)_{[L^2(S_{\operatorname{ref}})]^3} \quad\text{for all $v\in \big[L^2(S_{\operatorname{ref}})\big]^3$} \end{align*} yields the discrete generalized eigenvalue problem \begin{equation}\label{eq:disceig} \underline{{\bs C}}\underline{\boldsymbol{\chi}}_{k}=\lambda_{k,h}\underline{{\bs M}}\underline{\boldsymbol{\chi}}_{k}. \end{equation} Although the mass matrix $\underline{\mathbf{M}}$ is sparse, the covariance matrix $\underline{{\bs C}}$ is typically densely populated, as it arises from the discretization of a nonlocal operator. Therefore, a naive solution of this eigenvalue problem is prohibitive for a larger number of degrees of freedom. As a viable alternative, we assume that a low-rank factorization $\underline{{\bs C}}\approx\underline{{\bs L}}\underline{{\bs L}}^\intercal$ of the covariance matrix is known. 
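Such a factor can be obtained by a truncated pivoted Cholesky decomposition, as discussed next, and the resulting eigenproblem then reduces to a small dense one. The following Python sketch illustrates both steps on generic matrices; the entry callback and the dense mass matrix are simplifying assumptions for illustration and do not reflect the \verb+bembel+ implementation.

```python
import numpy as np

def pivoted_cholesky(entry, diag, tol):
    """Truncated pivoted Cholesky factorization C ~= L @ L.T.
    entry(i, J) returns the matrix entries C[i, J]; diag holds the
    diagonal of C. Stops once the trace error, tracked by d, is < tol."""
    n = len(diag)
    d = np.asarray(diag, dtype=float).copy()
    cols = []
    while d.sum() > tol and len(cols) < n:
        i = int(np.argmax(d))                    # pivot index
        col = np.asarray(entry(i, np.arange(n)), dtype=float)
        for l in cols:                           # subtract previous columns
            col = col - l[i] * l
        col = col / np.sqrt(d[i])
        cols.append(col)
        d = d - col**2
        d[i] = 0.0                               # guard against round-off
    return np.column_stack(cols)

def kl_eigenpairs(L, M):
    """Solve the reduced eigenproblem L^T M^{-1} L psi = lambda psi and
    recover the generalized eigenvectors chi = M^{-1} L psi."""
    Minv_L = np.linalg.solve(M, L)
    lam, psi = np.linalg.eigh(L.T @ Minv_L)
    lam, psi = lam[::-1], psi[:, ::-1]           # sort descending
    return lam, Minv_L @ psi
```

The reduced matrix has only as many rows and columns as the numerical rank of the covariance, so its dense eigendecomposition is cheap.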
Such a factorization can, for example, efficiently be computed by the truncated pivoted Cholesky decomposition, see \cite{HPS}. Inserting this decomposition into \eqref{eq:disceig} yields \begin{equation}\label{eq:bigpceig} \underline{{\bs L}}\underline{{\bs L}}^{\intercal}\underline{\boldsymbol{\chi}}_{k}=\lambda_{k,h}\underline{{\bs M}}\underline{\boldsymbol{\chi}}_{k}. \end{equation} Multiplying by $\underline{{\bs L}}^{\intercal}\underline{{\bs M}}^{-1}$ and substituting $\underline{\boldsymbol{\psi}}_k=\underline{{\bs L}}^{\intercal}\underline{\boldsymbol{\chi}}_k$ therefore results in the eigenvalue problem \begin{equation}\label{eq:smallpceig} \underline{{\bs L}}^{\intercal}\underline{{\bs M}}^{-1}\underline{{\bs L}}\underline{\boldsymbol{\psi}}_{k}=\lambda_{k,h}\underline{\boldsymbol{\psi}}_{k}, \end{equation} which has the same non-zero eigenvalues as \eqref{eq:bigpceig}, but is significantly smaller and cheaper to compute if $\underline{{\bs C}}$ has low rank. The eigenvectors of \eqref{eq:bigpceig} can be retrieved from \eqref{eq:smallpceig} by making use of the relation $\underline{\boldsymbol{\chi}}_k=\underline{\mathbf{M}}^{-1}\underline{{\bs L}}\underline{\boldsymbol{\psi}}_k.$ \begin{remark}\label{rem:efficientIGAKL} The supports of the basis functions in $\mathbbb{S}_{{\bs p},\boldsymbol{\Xi}}(S_{\operatorname{ref}})$ can be quite large, which makes the assembly of a single matrix entry as used for the truncated pivoted Cholesky decomposition computationally expensive. Instead, one may opt for performing the Cholesky decomposition directly on the matrix $\underline{{\bs C}}_\star$ generated by the shape functions $\mathbbb{S}_{{\bs p},\boldsymbol{\Xi}}^\star(S_{\operatorname{ref}})$ of $\mathbbb{S}_{{\bs p},\boldsymbol{\Xi}}(S_{\operatorname{ref}})$. Then, there exists a matrix version $\underline{{\bs T}}$ of the local-to-global map such that $\underline{{\bs C}}=\underline{{\bs T}}\underline{{\bs C}}_\star\underline{{\bs T}}^\intercal$. 
Now, substituting $\underline{{\bs C}}_\star\approx\underline{{\bs L}}_\star\underline{{\bs L}}_\star^\intercal$ yields a low-rank factorization $\underline{{\bs C}}\approx\underline{{\bs T}}\underline{{\bs L}}_\star(\underline{{\bs T}}\underline{{\bs L}}_\star)^\intercal=\underline{{\bs L}}\underline{{\bs L}}^\intercal$. \end{remark} \section{Boundary integral equations}\label{sec:bie} \subsection{Computing the scattered wave} We recall the solution of the boundary value problem \eqref{eq:pde} by means of boundary integral equations. To this end, and for the sake of simplicity, we assume for the moment that the domain $D$ is fixed and has a Lipschitz surface $S = \partial D$. We introduce the acoustic single layer operator \[ \mathcal{V}\colon H^{-\nicefrac{1}{2}}(S)\to H^{\nicefrac{1}{2}}(S),\quad \mathcal{V}\rho\mathrel{\mathrel{\mathop:}=}\int_S \Phi(\cdot,{\bs z})\rho({\bs z})\d\sigma_{\bs z} \] and the acoustic double layer operator \[ \mathcal{K}\colon L^2(S)\to L^2(S),\quad \mathcal{K}\rho\mathrel{\mathrel{\mathop:}=}\int_S \frac{\partial\Phi(\cdot,{\bs z})}{\partial{\bs n}_{\bs z}}\rho({\bs z})\d\sigma_{\bs z}. \] Here, ${\bs n}_{\bs z}$ denotes the outward pointing normal vector at the surface point ${\bs z}\in S$, while $\Phi(\cdot,\cdot)$ denotes the Green's function for the Helmholtz equation. In three spatial dimensions, the Green's function is given by \[ \Phi({\bs x}, {\bs z}) = \frac{e^{i\kappa\|{\bs x}-{\bs z}\|_2}}{4\pi\|{\bs x}-{\bs z}\|_2}. 
\] Considering an incident plane wave $u_{\mathrm{inc}}({\bs x}) = e^{i\kappa\langle{\bs d}, {\bs x}\rangle}$, $\|{\bs d}\|_2=1$, the Neumann data of the total wave $u=u_{\mathrm{inc}}+u_{\mathrm{s}}$ at the surface $S$ can be determined by the boundary integral equation \begin{equation}\label{eq:BIE1} \left(\frac{1}{2} + \mathcal{K}^\star - i\eta\mathcal{V}\right) \frac{\partial u}{\partial {\bs n}} = \frac{\partial u_{\mathrm{inc}}}{\partial {\bs n}} - i\eta u_{\mathrm{inc}} \quad\text{on $S$}, \end{equation} with $\eta=\kappa/2$, cf.\ \cite{CK2}. From the Cauchy data of $u$ at $S$, we can determine the scattered wave $u_{\mathrm{s}}$ at any point in the exterior of the obstacle by applying the potential evaluation \begin{equation}\label{eq:solution1} u_{\mathrm{s}}({\bs x}) = \int_S\Phi({\bs x}, {\bs z}) \frac{\partial u}{\partial{\bs n}}({\bs z})\d\sigma_{\bs z}, \quad {\bs x}\in\mathbb{R}^3\setminus\overline{D}. \end{equation} \subsection{Scattered wave representation at an artificial interface} We introduce an artificial interface $T\subset\mathbb{R}^3$, chosen sufficiently large such that $T$ encloses all realizations of the domain $D$. In view of \eqref{eq:solution1}, we may compute the Cauchy data $u_{\mathrm{s}}|_T$ and $(\partial u_{\mathrm{s}}/\partial {\bs n})|_T$ of the scattered wave at the artificial interface $T$. It holds \[ \frac{\partial u_{\mathrm{s}}}{\partial {\bs n}}({\bs x}) = \int_S \frac{\partial\Phi({\bs x}, {\bs z})} {\partial {\bs n}_{\bs x}}\frac{\partial u}{\partial{\bs n}}({\bs z})\d\sigma_{\bs z}, \quad {\bs x}\in T. 
\] For any ${\bs x}\in\mathbb{R}^3$ located outside the artificial interface, we may now either employ the representation formula \eqref{eq:solution1} or the representation formula \begin{equation}\label{eq:solution2} u_{\mathrm{s}}({\bs x}) = \int_T \bigg\{\Phi({\bs x}, {\bs z}) \frac{\partial u_{\mathrm{s}}}{\partial{\bs n}}({\bs z})+ \frac{\partial\Phi({\bs x}, {\bs z})}{\partial {\bs n}_{\bs z}} u_{\mathrm{s}}({\bs z})\bigg\}\d\sigma_{\bs z} \end{equation} to compute the scattered wave $u_{\mathrm{s}}$. The major advantage of \eqref{eq:solution2} over \eqref{eq:solution1} is that the artificial interface $T$ is fixed, whereas the shape of the obstacle will be random later on. \subsection{Scattering at random obstacles}\label{sec:random} From now on, let the obstacle be subject to uncertainty as introduced in Section~\ref{sec:randomDomains}. We describe the uncertain obstacle $D = D({\bs y})$ by its \emph{random surface} $S({\bs y})$, which is given by \eqref{eq:randomdomain}. Having the incident wave $u_{\mathrm{inc}}$ at hand, the boundary value problem for the total field $u({\bs y}) = u_{\mathrm{s}}({\bs y})+u_{\mathrm{inc}}$ for any ${\bs y}\in\Gamma$ reads \begin{equation}\label{eq:spde} \begin{aligned} \Delta u({\bs y}) + \kappa^2 u({\bs y}) = 0\quad&\text{in}\ \mathbb{R}^3\setminus\overline{D({\bs y})},\\ u({\bs y}) = 0\quad&\text{on}\ S({\bs y}),\\ \sqrt{r}\bigg(\frac{\partial u_{\mathrm{s}}}{\partial r}-i\kappa u_{\mathrm{s}}\bigg) \to 0\quad&\text{as}\ r = \|{\bs x}\|_2\to\infty. \end{aligned} \end{equation} By the construction of \(S({\bs y})\), the random scattering problem \eqref{eq:spde} exhibits a unique solution for each realization \({\bs y}\in\Gamma\) of the random parameter. Moreover, it has been shown in \cite{H3S} for the case of the Helmholtz transmission problem that the total wave \(u({\bs y})\) exhibits an analytic extension into a certain region of the complex plane with respect to the parameter \({\bs y}\in\Gamma\). 
This particularly allows for the use of higher order quadrature methods, like quasi-Monte Carlo methods, see e.g.\ \cite{Caf98,NIE}, or even sparse quadrature methods, see e.g.\ \cite{HHPS18,H3S} in order to compute quantities of interest, such as expectation and variance. Extensions to the Maxwell case are discussed in \cite{JHS17,JSZ17,AJZ20}. \subsection{Expectation of the scattered wave} The scattered wave's expectation can be computed for any given point ${\bs x}\in\mathbb{R}^3$ by the representation formula \eqref{eq:solution1}, which leads to \begin{equation}\label{eq:expectation1} \mathbb{E}[u_{\mathrm{s}}]({\bs x}) = \mathbb{E}\bigg[\int_{S({\bs y})}\Phi({\bs x}, {\bs z}) \frac{\partial u_{\mathrm{s}}}{\partial{\bs n}}({\bs z},\cdot)\d\sigma_{\bs z}\bigg]. \end{equation} Obviously, \eqref{eq:expectation1} only makes sense if ${\bs x} \in\mathbb{R}^3$ is sufficiently far away from the random obstacle. Otherwise, there might be instances ${\bs y}\in\Gamma$ such that ${\bs x}\in D({\bs y})$, i.e.\ the point ${\bs x}\in\mathbb{R}^3$ does not lie outside the obstacle almost surely. If the expectation needs to be evaluated at many locations, it is much more efficient to introduce the artificial interface $T$ and to consider expression \eqref{eq:solution2}. For any ${\bs x}\in\mathbb{R}^3$ lying outside the interface $T$, it holds \begin{equation}\label{eq:expectation2} \mathbb{E}[u_{\mathrm{s}}]({\bs x}) = \int_T \bigg\{\Phi({\bs x}, {\bs z}) \mathbb{E}\bigg[\frac{\partial u_{\mathrm{s}}}{\partial{\bs n}}\bigg]({\bs z})+ \frac{\partial\Phi({\bs x}, {\bs z})}{\partial {\bs n}_{\bs z}} \mathbb{E}[u_{\mathrm{s}}]({\bs z})\bigg\}\d\sigma_{\bs z}. \end{equation} As a consequence, the scattered wave's expectation is completely encoded in the Cauchy data at the artificial interface $T$. 
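To illustrate how free-space values are recovered from Cauchy data on a fixed interface, the following Python sketch evaluates the Green representation formula on a spherical interface $T$ by a simple product quadrature. Note that the snippet uses the Colton--Kress sign convention with the outward normal, cf.\ \cite{CK2}, which may differ from the convention above; the spherical interface and the tensorized midpoint/trapezoidal rule are illustrative assumptions.

```python
import numpy as np

def helmholtz_green(x, z, kappa):
    """Green's function Phi(x, z) of the Helmholtz equation in 3D."""
    r = np.linalg.norm(x - z, axis=-1)
    return np.exp(1j * kappa * r) / (4 * np.pi * r)

def green_normal_dz(x, z, n, kappa):
    """Normal derivative dPhi(x, z)/dn_z for unit normals n at z."""
    diff = z - x
    r = np.linalg.norm(diff, axis=-1)
    dr = np.exp(1j * kappa * r) * (1j * kappa * r - 1) / (4 * np.pi * r**2)
    return dr * np.einsum("...d,...d->...", diff, n) / r

def sphere_quadrature(R, n_theta, n_phi):
    """Midpoint (theta) x trapezoidal (phi) rule on the sphere of radius
    R: returns points z, outward unit normals n and weights w."""
    theta = (np.arange(n_theta) + 0.5) * np.pi / n_theta
    phi = np.arange(n_phi) * 2 * np.pi / n_phi
    T, P = np.meshgrid(theta, phi, indexing="ij")
    n = np.stack([np.sin(T) * np.cos(P), np.sin(T) * np.sin(P), np.cos(T)], -1)
    w = R**2 * np.sin(T) * (np.pi / n_theta) * (2 * np.pi / n_phi)
    return (R * n).reshape(-1, 3), n.reshape(-1, 3), w.ravel()

def evaluate_from_interface(x, z, n, w, u_T, dudn_T, kappa):
    """Evaluate a radiating field at x outside T from its Cauchy data
    (u_T, dudn_T) on T (Colton-Kress signs, outward normal)."""
    return np.sum(w * (u_T * green_normal_dz(x, z, n, kappa)
                       - dudn_T * helmholtz_green(x, z, kappa)))
```

As a consistency check, the Cauchy data of a point source located inside $T$ reproduce its field at exterior points up to quadrature error.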
This means that we only need to compute the expected Cauchy data \begin{equation}\label{eq:expectation3} \mathbb{E}[u_{\mathrm{s}}]({\bs x}) = \int_{\Gamma} \bigg\{\int_{S({\bs y})}\Phi({\bs x}, {\bs z}) \frac{\partial u}{\partial{\bs n}}({\bs z},{\bs y})\d\sigma_{\bs z}\bigg\}\d\mu({\bs y}) \end{equation} and \begin{equation}\label{eq:expectation4} \mathbb{E}\bigg[\frac{\partial u_{\mathrm{s}}}{\partial {\bs n}}\bigg]({\bs x}) = \int_{\Gamma} \bigg\{\int_{S({\bs y})}\frac{\partial\Phi({\bs x}, {\bs z})}{\partial {\bs n}_{\bs z}} u_{\mathrm{s}}({\bs z},{\bs y})\d\sigma_{\bs z}\bigg\}\d\mu({\bs y}) \end{equation} of the scattered wave for ${\bs x}$ at the artificial interface $T$, which is of lower spatial dimension than the exterior domain. \subsection{Variance of the scattered wave} The variance $\mathbb{V}[u_{\mathrm{s}}]$ of the scattered wave $u_{\mathrm{s}}({\bs y})$ at a point ${\bs x}\in\mathbb{R}^3$ outside the artificial interface $T$ depends nonlinearly on the Cauchy data of $u_{\mathrm{s}}$ at the interface. Nonetheless, we can make use of the fact that the variance is the restriction of the covariance to the diagonal, i.e.\ \begin{equation}\label{eq:var} \mathbb{V}[u_{\mathrm{s}}]({\bs x}) = \mathbb{C}\!\operatorname{ov}[u_{\mathrm{s}}]({\bs x},{\bs x}')\big|_{{\bs x}={\bs x}'} = \mathbb{C}\!\operatorname{or}[u_{\mathrm{s}}]({\bs x},{\bs x}')\big|_{{\bs x}={\bs x}'} - |\mathbb{E}[u_{\mathrm{s}}]({\bs x})|^2, \end{equation} where the covariance is given by \begin{align*} \mathbb{C}\!\operatorname{ov}[u_{\mathrm{s}}]({\bs x},{\bs x}') &= \mathbb{E}\Big[\big(u_{\mathrm{s}}({\bs x},\cdot)-\mathbb{E}[u_{\mathrm{s}}]({\bs x})\big) \overline{\big(u_{\mathrm{s}}({\bs x}',\cdot)-\mathbb{E}[u_{\mathrm{s}}]({\bs x}')\big)}\Big]\\ &=\mathbb{E}\big[u_{\mathrm{s}}({\bs x},\cdot)\overline{u_{\mathrm{s}}({\bs x'},\cdot)}\big]-\mathbb{E}[u_{\mathrm{s}}]({\bs x})\overline{\mathbb{E}[u_{\mathrm{s}}]({\bs x}')}. 
\end{align*} Hence, the correlation satisfies \[\mathbb{C}\!\operatorname{or}[u_{\mathrm{s}}]({\bs x},{\bs x}') = \mathbb{E}\big[u_{\mathrm{s}}({\bs x},\cdot)\overline{u_{\mathrm{s}}({\bs x'},\cdot)}\big]. \] The correlation is a higher-dimensional object which depends only linearly on the second moment of the Cauchy data of the scattered wave at the artificial interface $T$. This greatly simplifies the computation of the variance. More precisely, by defining for ${\bs x},{\bs x}'\in T$ the correlations \begin{align*} \mathbb{C}\!\operatorname{or}[u_{\mathrm{s}}]({\bs x},{\bs x}') &= \mathbb{E}\bigg[\bigg(\int_{S({\bs y})}\!\!\!\!\!\Phi({\bs x}, {\bs z}) \frac{\partial u_{\mathrm{s}}}{\partial{\bs n}}({\bs z},{\bs y})\d\sigma_{\bs z}\bigg) \overline{\bigg(\int_{S({\bs y})}\!\!\!\!\!\Phi({\bs x}', {\bs z}) \frac{\partial u_{\mathrm{s}}}{\partial{\bs n}}({\bs z},{\bs y})\d\sigma_{{\bs z}}\bigg)}\bigg],\\ \mathbb{C}\!\operatorname{or}\bigg[\frac{\partial u_{\mathrm{s}}}{\partial{\bs n}}\bigg]({\bs x},{\bs x}') &= \mathbb{E}\bigg[\bigg(\int_{S({\bs y})}\!\!\!\!\!\frac{\partial\Phi({\bs x}, {\bs z})}{\partial {\bs n}_{\bs z}} u_{\mathrm{s}}({\bs z},{\bs y})\d\sigma_{\bs z}\bigg) \overline{\bigg(\int_{S({\bs y})}\!\!\!\!\!\frac{\partial\Phi({\bs x}', {\bs z})}{\partial {\bs n}_{\bs z}} u_{\mathrm{s}}({\bs z},{\bs y})\d\sigma_{{\bs z}}\bigg)}\bigg], \end{align*} and \begin{align*} &\mathbb{C}\!\operatorname{or}\bigg[u_{\mathrm{s}},\frac{\partial u_{\mathrm{s}}}{\partial{\bs n}}\bigg]({\bs x},{\bs x}') = \overline{\mathbb{C}\!\operatorname{or}\bigg[\frac{\partial u_{\mathrm{s}}}{\partial{\bs n}},u_{\mathrm{s}}\bigg]({\bs x}',{\bs x})}\\ &\qquad= \mathbb{E}\bigg[\overline{\bigg(\int_{S({\bs y})}\Phi({\bs x}, {\bs z}) \frac{\partial u_{\mathrm{s}}}{\partial{\bs n}}({\bs z},{\bs y})\d\sigma_{\bs z}\bigg)} \bigg(\int_{S({\bs y})}\frac{\partial\Phi({\bs x}', {\bs z})}{\partial {\bs n}_{\bs z}} u_{\mathrm{s}}({\bs z},{\bs y})\d\sigma_{{\bs z}}\bigg)\bigg], \end{align*} we find 
for two points ${\bs x},{\bs x}'\in\mathbb{R}^3$ lying outside of the interface $T$ the deterministic expression \begin{equation}\label{eq:cor} \begin{aligned} \mathbb{C}\!\operatorname{or}[u_{\mathrm{s}}]({\bs x},{\bs x}') = \int_T\int_T \bigg\{&\Phi({\bs x}, {\bs z})\overline{\Phi({\bs x}', {\bs z}')}\mathbb{C}\!\operatorname{or}\!\!\bigg[\frac{\partial u_{\mathrm{s}}}{\partial{\bs n}}\bigg]({\bs z},{\bs z}')\\ &\ + \Phi({\bs x}, {\bs z})\overline{\frac{\partial\Phi({\bs x}', {\bs z}')}{\partial {\bs n}_{{\bs z}'}}} \mathbb{C}\!\operatorname{or}\!\!\bigg[\frac{\partial u_{\mathrm{s}}}{\partial{\bs n}},u_{\mathrm{s}}\bigg]({\bs z},{\bs z}')\\ &\ + \frac{\partial\Phi({\bs x}, {\bs z})}{\partial {\bs n}_{\bs z}}\overline{\Phi({\bs x}', {\bs z}')} \mathbb{C}\!\operatorname{or}\!\!\bigg[u_{\mathrm{s}},\frac{\partial u_{\mathrm{s}}}{\partial{\bs n}}\bigg]({\bs z},{\bs z}')\\ &\ + \frac{\partial\Phi({\bs x}, {\bs z})}{\partial {\bs n}_{\bs z}} \overline{\frac{\partial\Phi({\bs x}', {\bs z}')}{\partial {\bs n}_{{\bs z}'}}} \mathbb{C}\!\operatorname{or}[u_{\mathrm{s}}]({\bs z},{\bs z}')\bigg\}\d\sigma_{{\bs z}'}\d\sigma_{\bs z}. \end{aligned} \end{equation} \section{Multilevel quadrature}\label{sec:MLQ} In order to calculate quantities of interest efficiently, we employ a multilevel quadrature approach. For the computation of the expectation, we may exploit the linearity of the expectation in formula \eqref{eq:expectation2} and rely on the Cauchy data on the spatial refinement levels \(\ell=0,1,\ldots,L\) computed at the artificial interface \(T\). 
Thus, we obtain \begin{equation}\label{eq:MLQmean} \mathbb{E}[u_{\mathrm{s}}]({\bs x}) \approx \int_T \bigg\{\Phi({\bs x}, {\bs z}) \mathcal{Q}_{L}^{\text{ML}}\bigg[\frac{\partial u_{\mathrm{s}}}{\partial{\bs n}}\bigg]({\bs z})+ \frac{\partial\Phi({\bs x}, {\bs z})}{\partial {\bs n}_{\bs z}} \mathcal{Q}_{L}^{\text{ML}}[u_{\mathrm{s}}]({\bs z})\bigg\}\d\sigma_{\bs z} \end{equation} with \[ \mathcal{Q}_{L}^{\text{ML}}[\rho]({\bs z})\mathrel{\mathrel{\mathop:}=}\sum_{\ell=0}^L \mathcal{Q}_{L-\ell}\big(\rho^{(\ell)}({\bs z},\cdot)- \rho^{(\ell-1)}({\bs z},\cdot)\big)\quad\text{for }{\bs z}\in T, \] where \(\mathcal{Q}_{\ell}\) is a quadrature rule on level \(\ell\). Moreover, the function \(\rho^{(\ell)}\) is the Galerkin projection of the density \(\rho\) evaluated at the artificial interface for the spatial refinement on level \(\ell\) of the scatterer, where we set \(\rho^{(-1)}\equiv 0\). For the approximation error of the multilevel quadrature, there holds a sparse tensor product-like error estimate. If \(\varepsilon_\ell\to 0\) is a monotonically decreasing sequence with \(\varepsilon_\ell\cdot\varepsilon_{L-\ell} =\varepsilon_L\) for every \(L\in\mathbb{N}\) and \[ \|\mathcal{Q}_{L-\ell}\rho-\mathbb{E}[\rho]\|\leq c_1\varepsilon_{L-\ell}\quad\text{and}\quad \|\rho^{(\ell)}-\rho\|\leq c_2\varepsilon_\ell \] for some suitable norms and constants \(c_1,c_2>0\), then \[ \|\mathcal{Q}_{L}^{\text{ML}}[\rho]-\mathbb{E}[\rho]\|\leq C L\varepsilon_L \] for a constant \(C>0\). We refer to \cite{HPS12} for the details. 
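The telescoping construction of $\mathcal{Q}_L^{\text{ML}}$ is easily sketched in Python. In the snippet below, the one-dimensional parameter domain $\Gamma=[-1,1]$, the midpoint rule with $2^\ell$ points for $\mathcal{Q}_\ell$, and the model hierarchy $\rho^{(\ell)}(y)=f(y)+4^{-\ell}g(y)$, mimicking a spatial discretization error $\varepsilon_\ell=4^{-\ell}$, are illustrative assumptions.

```python
import numpy as np

def midpoint_rule(level):
    """Quadrature Q_l: midpoint rule with 2^l points for the uniform
    probability measure on Gamma = [-1, 1]."""
    n = 2 ** level
    y = -1.0 + (2.0 * np.arange(n) + 1.0) / n
    return y, np.full(n, 1.0 / n)

def multilevel_quadrature(rho, L):
    """Q_L^ML[rho]: pair the level-l detail rho^(l) - rho^(l-1) with the
    coarse quadrature Q_{L-l}; rho(l, y) is the level-l approximation
    and rho^(-1) := 0."""
    est = 0.0
    for l in range(L + 1):
        y, w = midpoint_rule(L - l)
        detail = rho(l, y) - (rho(l - 1, y) if l > 0 else 0.0)
        est += np.dot(w, detail)
    return est
```

Fine spatial levels are thus paired with coarse quadratures, so the overall cost stays close to that of a single solve on the finest level while the accuracy behaves like $L\varepsilon_L$.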
For the calculation of the variance, we employ formula \eqref{eq:cor} and obtain \begin{align*} \mathbb{C}\!\operatorname{or}[u_{\mathrm{s}}]({\bs x},{\bs x}') &\approx \int_T\int_T \bigg\{\Phi({\bs x}, {\bs z})\overline{\Phi({\bs x}', {\bs z}')} \mathcal{Q}_{L}^{\text{ML}}\bigg[\frac{\partial u_{\mathrm{s}}}{\partial{\bs n}}\otimes \frac{\partial u_{\mathrm{s}}}{\partial{\bs n}}\bigg]({\bs z},{\bs z}')\\ &\phantom{=\int_T\int_T} + \Phi({\bs x}, {\bs z})\overline{\frac{\partial\Phi({\bs x}', {\bs z}')}{\partial {\bs n}_{{\bs z}'}}} \mathcal{Q}_{L}^{\text{ML}}\bigg[\frac{\partial u_{\mathrm{s}}}{\partial{\bs n}}\otimes u_{\mathrm{s}}\bigg]({\bs z},{\bs z}')\\ &\phantom{=\int_T\int_T} + \frac{\partial\Phi({\bs x}, {\bs z})}{\partial {\bs n}_{\bs z}}\overline{\Phi({\bs x}', {\bs z}')} \mathcal{Q}_{L}^{\text{ML}}\bigg[u_{\mathrm{s}}\otimes\frac{\partial u_{\mathrm{s}}}{\partial{\bs n}}\bigg]({\bs z},{\bs z}')\\ &\phantom{=\int_T\int_T} + \frac{\partial\Phi({\bs x}, {\bs z})}{\partial {\bs n}_{\bs z}} \overline{\frac{\partial\Phi({\bs x}', {\bs z}')}{\partial {\bs n}_{{\bs z}'}}} \mathcal{Q}_{L}^{\text{ML}}[u_{\mathrm{s}}\otimes u_{\mathrm{s}}]({\bs z},{\bs z}')\bigg\}\d\sigma_{{\bs z}'}\d\sigma_{\bs z} \end{align*} with \[ \mathcal{Q}_{L}^{\text{ML}}[\rho\otimes\mu]({\bs z},{\bs z}')\mathrel{\mathrel{\mathop:}=}\sum_{\ell=0}^L \mathcal{Q}_{L-\ell}\big((\rho\otimes\mu)^{(\ell)}({\bs z},{\bs z}',\cdot) -(\rho\otimes\mu)^{(\ell-1)}({\bs z},{\bs z}',\cdot)\big), \] where \((\rho\otimes\mu)^{(\ell)}\mathrel{\mathrel{\mathop:}=}\rho^{(\ell)}\otimes\mu^{(\ell)}\). In principle, it would also be possible to opt for the multi-index quadrature, which has been proposed in \cite{BSZ11} for the computation of higher order moments. 
In this case, one ends up with \[ (\rho\otimes\mu)^{(\ell)}({\bs z},{\bs z}',{\bs y}) \mathrel{\mathrel{\mathop:}=}\sum_{j=0}^\ell\rho^{(\ell-j)}({\bs z},{\bs y})\mu^{(j)}({\bs z}',{\bs y}) =\sum_{j=0}^\ell\rho^{(j)}({\bs z},{\bs y})\mu^{(\ell-j)}({\bs z}',{\bs y}). \] Finally, we remark that a similar error estimate holds as for the expectation and that isogeometric analysis was recently combined with a multi-index quadrature in \cite{BTT2019}. \section{Bayesian shape inversion}\label{sec:bayes} Let $\mathcal{A}({\bs y})\colon H^{1/2}\big(S({\bs y})\big)\to H^{1/2}(T)$, ${\bs y}\in\Gamma$, be the solution operator which maps the incident wave at \({S}({\bs y})\) to the scattered wave at \(T\). Fixing the incident wave \(u_{\text{inc}}\), we denote by \[ G\colon\Gamma\to H^{1/2}(T),\quad {\bs y}\mapsto u_{\text{s}}({\bs y}) \] the uncertainty-to-solution map. In forward uncertainty quantification, the goal is to compute quantities of interest \(\operatorname{QoI}(u_{\text{s}})\) with respect to the prior measure $\mu_0$, which is induced by the random variables from \eqref{eq:randomVectorFieldBoundary}. Often, quantities of interest are assumed to be linear functionals. The goal of Bayesian inverse uncertainty quantification as in \cite{DS13} is to incorporate noisy, potentially incomplete observations of the solutions \(\mathcal{A}({\bs y})u_{\text{inc}}\). This is modeled by first considering a bounded, linear observation operator $O\colon H^{1/2}(T)\to\mathbb{C}^N$, which models point measurements of the scattered wave at the artificial interface \(T\). Combining the solution operator with the observation operator yields the {uncertainty-to-observation mapping} \begin{equation}\label{eq:UncToObs} \mathcal{G}\colon \Gamma\to\mathbb{C}^N ,\quad {\bs y} \mapsto \mathcal{G}({\bs y}) = O\big(\mathcal{A}({\bs y})u_{\text{inc}}\big) .
\end{equation} The measured data $\bs\delta$ are modeled as resulting from an observation by $O$, perturbed by additive Gaussian noise according to \[ \bs\delta = \mathcal{G}({\bs y}^\star)+{\bs\eta}, \] where ${\bs y}^\star$ is the unknown, exact parameter. We assume that the noise $\bs\eta$ is given by a complex, circularly-symmetric Gaussian random vector with symmetric, positive definite covariance matrix $\bs\Sigma\in\mathbb{R}^{N\times N}$, i.e., $\bs\eta\sim\mathcal{CN}(0,\bs\Sigma)$. Note that this is equivalent to $\bs\eta=\bs\eta_{\text{r}}+i\bs\eta_{\text{i}}$ with uncorrelated $\bs\eta_{\text{r}}$, $\bs\eta_{\text{i}}$ and $\bs\eta_{\text{r}},\bs\eta_{\text{i}}\sim\mathcal{N}(0,\bs\Sigma/2)$, and respects the physical time-harmonic model of the scattering problem, see \cite{TV05}. Within this article, we aim at predicting the shape of the random scatterer based on observations of \(u_{\text{s}}\) at \(T\). Concretely, we wish to compute the expectation and variance of the deformation field. To that end, we define the Gaussian potential, also referred to as the least-squares or data misfit functional, by $\Phi_{\bs\Sigma}\colon \Gamma\times\mathbb{C}^N\to\mathbb{R}$, \begin{equation}\label{eq:pot} \Phi_{\bs\Sigma}({\bs y},\bs\delta)\mathrel{\mathrel{\mathop:}=} \frac12 \|\bs\delta-\mathcal{G}({\bs y})\|^2_{\bs\Sigma} \mathrel{\mathrel{\mathop:}=} \frac12 \overline{\big(\bs\delta-\mathcal{G}({\bs y})\big)}^\intercal {\bs\Sigma}^{-1} \big(\bs\delta-\mathcal{G}({\bs y})\big) . \end{equation} Given the prior measure $\mu_0$ and the data $\bs\delta$, Bayes' formula yields an expression for the posterior measure $\mu^{\bs\delta}$ on $\Gamma$, whose Radon-Nikodym derivative with respect to $\mu_0$ is given by \[ \frac{d\mu^{\bs\delta}}{d\mu_0}({\bs y}) = \frac{e^{ -\Phi_{\bs\Sigma}({\bs y},{\bs\delta})}}{Z} \] with \[Z\mathrel{\mathrel{\mathop:}=} \int_\Gamma e^{ -\Phi_{\bs\Sigma}({\bs y},\bs\delta)} \,\mu_0(\d{\bs y}) > 0, \] see \cite{DS13}.
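In particular, the posterior expectation of any quantity of interest $f$ takes the form of a ratio of two integrals with respect to the prior,
\[
\mathbb{E}^{\mu^{\bs\delta}}[f]
=\frac{1}{Z}\int_\Gamma f({\bs y})\,e^{-\Phi_{\bs\Sigma}({\bs y},\bs\delta)}\,\mu_0(\d{\bs y}),
\]
so that both the numerator and the normalization constant $Z$ are prior integrals and can hence be treated by the quadrature techniques discussed in the previous sections.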
Now, the expected shape of the random scatterer is given by \[ \mathbb{E}^{\mu^{\bs\delta}}[\bs\chi](S_{\operatorname{ref}}) \mathrel{\mathrel{\mathop:}=}\int_\Gamma\bs\chi(S_{\operatorname{ref}},{\bs y})\frac{e^{ -\Phi_{\bs\Sigma}({\bs y},{\bs\delta})}}{Z} \d\mu_0({\bs y}) \] and its variance by \[ \mathbb{V}^{\mu^{\bs\delta}}[\bs\chi](S_{\operatorname{ref}}) \mathrel{\mathrel{\mathop:}=}\int_\Gamma\bs \chi(S_{\operatorname{ref}},{\bs y})\bs\chi(S_{\operatorname{ref}},{\bs y})^\intercal\frac{e^{ -\Phi_{\bs\Sigma}({\bs y},{\bs\delta})}}{Z} \d\mu_0({\bs y})-\mathbb{E}^{\mu^{\bs\delta}}[\bs\chi](S_{\operatorname{ref}})\mathbb{E}^{\mu^{\bs\delta}}[\bs\chi](S_{\operatorname{ref}})^\intercal. \] In order to approximate these integrals numerically, we shall employ the multilevel ratio estimator, which splits the computation of the actual integral and the normalization constant and approximates each by a telescoping sum, see \cite{DGLS17} and the references therein. For the normalization constant, we consider \[ \mathcal{Q}_{L}^{\text{ML}}[\rho]\mathrel{\mathrel{\mathop:}=}\sum_{\ell=0}^L \mathcal{Q}_{L-\ell}(\rho_\ell-\rho_{\ell-1}) \] with \[ \rho_\ell\mathrel{\mathrel{\mathop:}=} e^{ -\Phi_{{\bs\Sigma},\ell}({\bs y},\bs\delta)},\quad \Phi_{{\bs\Sigma},\ell}({\bs y},{\bs\delta})\mathrel{\mathrel{\mathop:}=}\frac12\big\| \bs\delta-O\big(\mathcal{A}_\ell({\bs y})u_{\text{inc}}\big)\big\|^2_{\bs\Sigma}, \quad \rho_{-1}\mathrel{\mathrel{\mathop:}=} 0, \] i.e.\ we consider a multilevel hierarchy based on approximations of the scattered wave on different levels of refinement. Now, we may compute, for example, the expected deformation field according to \[ \mathcal{Q}_{L}^{\text{ML},\mu^{\bs\delta}}[\bs\chi]\mathrel{\mathrel{\mathop:}=}\bigg(\sum_{\ell=0}^L \mathcal{Q}_{L-\ell}\big({\bs\chi}\cdot(\rho_\ell-\rho_{\ell-1})\big) \bigg)\bigg/\mathcal{Q}_{L}^{\text{ML}}[\rho].
\] \section{Numerical examples}\label{sec:numex} \subsection{Geometries, discretization, and multilevel quadrature} We consider a scatterer $D_{\operatorname{ref}}$ given by a cuboid $[0,3]\times[0,2]\times[0,1]$ with six drilled holes, with an artificial interface $T$ given by the cuboid $[-1.5,3.5]\times[-0.5,2.5]\times[-0.5,1.5]$. A visualization of the situation may be found in Figure~\ref{fig:cuboids}. \begin{figure} \centering \includegraphics[width=0.46\textwidth,clip=true,trim=90 140 350 350]{setting_xview}\quad \includegraphics[width=0.46\textwidth,clip=true,trim=90 140 220 380]{setting_yview} \includegraphics[width=0.46\textwidth,clip=true,trim=90 140 250 250]{setting_zview} \caption{The $[0,3]\times[0,2]\times[0,1]$ cuboid with drilled holes and the $[-1.5,3.5]\times[-0.5,2.5]\times[-0.5,1.5]$ artificial interface.} \label{fig:cuboids} \end{figure} The surface of the scatterer is represented by 82 patches, as again illustrated in Figure~\ref{fig:cuboids}, and the artificial interface by 52 patches. The wavenumber is chosen as $\kappa=1$. We discretize the random field with globally continuous B-splines of polynomial degree $p=2$ in each spatial variable and three uniform spatial refinements, leading to a dense covariance matrix $\underline{{\bs C}}\in\mathbb{R}^{19\,896\times 19\,896}$. For an efficient computation of the Karhunen-Lo\`eve expansion, we proceed as outlined in Remark~\ref{rem:efficientIGAKL}. The artificial interface is discretized with tensor-product polynomials of degree $p=6$ on each patch. The Cauchy data on the artificial interface can then be obtained from the values on $52\cdot 7^2=2\,548$ point evaluations on the interface by solving $52$ local interpolation problems of size $7^2$.
For the application of the multilevel quadrature, we perform the acoustic scattering computations with patchwise continuous B-splines of degree $p=2$ and the refinement levels $\ell=0,1,2,3$, leading to 738, 1\,312, 2\,952, and 8\,200 degrees of freedom. The implementation of the spatial discretizations is based on the \verb|C++|-library \verb+bembel+ \cite{bembel, DHK+20}, which is easily adapted to our needs and provides fast compression schemes for the scattering computations. The multilevel quadrature is either based on a quasi-Monte Carlo quadrature using the Halton sequence, see \cite{Caf98}, or on the anisotropic sparse grid quadrature using Gau{\ss}-Legendre points as described in \cite{HHPS18}. The latter is available as open source software package \verb+SPQR+\footnote{https://github.com/muchip/SPQR}. Due to the high asymptotic convergence rate of $h^{2p+2}\sim 2^{-6}$ of the higher-order method for the scattering computations, the number of samples for the multilevel quadrature has to be adapted for each level as shown in Table~\ref{tab:numberofsamples}, where `QMC' stands for the quasi-Monte Carlo quadrature and `SG' for the sparse grid quadrature. \begin{table}[htb] \centering \begin{tabular}{|c|c|c|c|c|} \hline & \(\ell=0\) & \(\ell=1\) & \(\ell=2\) &\(\ell=3\)\\\hline QMC & 2\,097\,152& 32\,768& 512& 256 \\\hline SG & 2\,328\,341 & 58\,251 & 957 & 351 \\\hline \end{tabular} \medskip \caption{Number of samples on the different levels for SG and QMC.} \label{tab:numberofsamples} \end{table} Due to the large number of samples and computational costs of solving three dimensional scattering problems, we employ a hybrid parallelization with MPI and OpenMP to accelerate the sampling process. The computations have been carried out with up to 24 MPI processes consisting of up to 8 OpenMP threads each, resulting in a total of up to 192 cores. 
The OpenMP threads have been dedicated to the boundary element solver, while the MPI processes were used to parallelize the sampling of the random parameter. \subsection{Forward problem} We consider a centered Gaussian random field for the domain perturbations with covariance function \[ \mathbb{C}\!\operatorname{ov}[\boldsymbol{\chi}_S]({\bs x},{\bs y}) = \frac{1}{20} \begin{bmatrix} e^{-\frac{\|{\bs x}-{\bs y}\|_2^2}{4}} & 0 & 0 \\ 0 & e^{-\frac{\|{\bs x}-{\bs y}\|_2^2}{4}} & 0 \\ 0 & 0 & e^{-\frac{\|{\bs x}-{\bs y}\|_2^2}{4}} \end{bmatrix}. \] Four different realizations of the deformed scatterer and corresponding scattered waves at the artificial interface are illustrated in Figure~\ref{fig:perturbed}. The singular values of the corresponding deformation field are illustrated in Figure~\ref{fig:singularvalues}. The parameter dimension is 165. \begin{figure} \begin{center} \begin{tikzpicture} \draw(0,0) node{\includegraphics[width=0.38\textwidth,clip=true,trim=300 80 350 250]{samples0000}}; \draw(5.8,0) node{\includegraphics[width=0.38\textwidth,clip=true,trim=300 80 350 250]{samples0001}}; \draw(0,5.8) node{\includegraphics[width=0.38\textwidth,clip=true,trim=300 80 350 250]{samples0004}}; \draw(5.8,5.8) node{\includegraphics[width=0.38\textwidth,clip=true,trim=300 80 350 250]{samples0002}}; \draw(10.2,2.9) node{\includegraphics[width=0.14\textwidth,clip=true,trim=2100 1180 0 0]{samples0002}}; \end{tikzpicture} \caption{Domain perturbations drawn from the random field and scattered wave on the artificial interface.} \label{fig:perturbed} \end{center} \end{figure} \begin{figure} \centering \begin{tikzpicture} \pgfplotstableread{% 1 0.891113 2 0.891113 3 0.891113 4 0.52293 5 0.52293 6 0.522929 7 0.400666 8 0.400666 9 0.400666 10 0.235048 11 0.235048 12 0.235048 13 0.229026 14 0.229026 15 0.229026 16 0.217782 17 0.217782 18 0.217782 19 0.132593 20 0.132593 21 0.132593 22 0.110386 23 0.110386 24 0.110386 25 0.0998649 26 0.0998649 27 0.0998649 28 0.0942553 29 
0.0942553 30 0.0942553 31 0.0707209 32 0.0707209 33 0.0707209 34 0.0618501 35 0.0618501 36 0.0618501 37 0.0556918 38 0.0556918 39 0.0556918 40 0.0546 41 0.0546 42 0.0546 43 0.0308939 44 0.0308939 45 0.0308936 46 0.0302369 47 0.0302369 48 0.0302369 49 0.0280594 50 0.0280594 51 0.0280594 52 0.0260125 53 0.0260125 54 0.0260125 55 0.0249283 56 0.0249283 57 0.0249281 58 0.0227158 59 0.0227158 60 0.0227158 61 0.0197408 62 0.0197408 63 0.0197405 64 0.0178012 65 0.0178012 66 0.0178011 67 0.0177589 68 0.0177589 69 0.0177588 70 0.015502 71 0.015502 72 0.015502 73 0.0141151 74 0.0141151 75 0.0141149 76 0.0135702 77 0.0135702 78 0.0135702 79 0.0086055 80 0.0086055 81 0.00860491 82 0.00836435 83 0.00836435 84 0.00836399 85 0.00764791 86 0.00764791 87 0.00764789 88 0.00737144 89 0.00737144 90 0.00737132 91 0.00734445 92 0.00734445 93 0.00734444 94 0.00632728 95 0.00632728 96 0.00632728 97 0.00595073 98 0.00595073 99 0.0059507 100 0.00576797 101 0.00576797 102 0.00576778 103 0.00477007 104 0.00477007 105 0.00477007 106 0.00465568 107 0.00465568 108 0.00465099 109 0.00453527 110 0.00453527 111 0.00452787 112 0.0033021 113 0.0033021 114 0.0033021 115 0.00319293 116 0.00319293 117 0.00319282 118 0.00312031 119 0.00312031 120 0.00312017 121 0.00296512 122 0.00296512 123 0.00296498 124 0.0025919 125 0.0025919 126 0.0025809 127 0.00233408 128 0.00233408 129 0.00233393 130 0.00224389 131 0.00224389 132 0.00224255 133 0.0020813 134 0.0020813 135 0.00208072 136 0.00203516 137 0.00203516 138 0.00203509 139 0.00200514 140 0.00200514 141 0.00200514 142 0.00183949 143 0.00183949 144 0.00183921 145 0.00177817 146 0.00177817 147 0.00177791 148 0.00176621 149 0.00176621 150 0.00176621 151 0.00137309 152 0.00137309 153 0.00137308 154 0.00132827 155 0.00132827 156 0.0013282 157 0.00112145 158 0.00112145 159 0.00112019 160 0.00108044 161 0.00108044 162 0.00108043 163 0.00105601 164 0.00105601 165 0.00105433 } \mytable; 
\begin{semilogyaxis}[xmin=0,xmax=166,ymin=0.9e-3,ymax=1.1,width=0.8\textwidth,height=0.5\textwidth,grid=both] \addplot[mark=*,mark size=1pt, only marks]table{\mytable}; \end{semilogyaxis} \end{tikzpicture} \caption{Numerical approximation of the singular values of the covariance operator under consideration.} \label{fig:singularvalues} \end{figure} To demonstrate the validity of the dimension reduction via the artificial interface, we also define 100 evaluation points outside of the artificial interface, which are equally distributed on a sphere centered around the origin with radius $5$. Note that the origin is one of the corners of the reference geometry. In order to measure approximation errors, we compare the solutions obtained by the multilevel sparse grid quadrature to those of the multilevel quasi-Monte Carlo quadrature on the finest level \(L=3\) and vice versa. The left-hand side of Figure~\ref{fig:QuadProb4} shows the convergence error of the expectation for the Cauchy data at the interface and for the 100 points on the sphere. The dashed curves indicate the convergence of the spatial approximation on the reference domain. The right-hand side of Figure~\ref{fig:QuadProb4} illustrates the convergence of the potential evaluation on the sphere when using the mean of the Cauchy data on the artificial interface, i.e., when using \eqref{eq:MLQmean}. Figure~\ref{fig:QuadProb5} shows these quantities for the correlation.
\begin{figure}[htb] \begin{center} \begin{tikzpicture} \begin{semilogyaxis}[height=0.4\textwidth,width=0.48\textwidth, ymax = 1e-2, ymin = 5e-6, legend style={legend pos=south west,font=\tiny}, xlabel=refinement level, ylabel ={$\ell^{\infty}$-error}] \addplot [line width=0.5pt, color=blue,mark=diamond] table[x index=0,y index=1]{./data/MLQMC_mean_interface.txt};\addlegendentry{QMC at interface}; \addplot [line width=0.5pt, color=blue,mark=square] table[x index=0,y index=1]{./data/MLQMC_mean_points.txt};\addlegendentry{QMC at points}; \addplot [line width=0.5pt, color=olive,mark=diamond] table[x index=0,y index=1]{./data/MLSG_mean_interface.txt};\addlegendentry{SG at interface}; \addplot [line width=0.5pt, color=olive,mark=square] table[x index=0,y index=1]{./data/MLSG_mean_points.txt};\addlegendentry{SG at points}; \addplot[line width=0.5pt, color=black,mark=none, dashed] coordinates { (0,0.00308649) (1,0.00241284) (2,0.000565987) (3,3.95972e-05) }; \addplot[line width=0.5pt, color=black,mark=none, dashed] coordinates { (0,0.000618693) (1,0.000447365) (2,0.000155738) (3,1.12308e-05) }; \end{semilogyaxis} \end{tikzpicture} \hfill \begin{tikzpicture} \begin{semilogyaxis}[height=0.4\textwidth,width=0.48\textwidth, ymax = 1e-3, ymin = 5e-6, legend style={legend pos=south west,font=\tiny}, xlabel=refinement level, ylabel ={$\ell^{\infty}$-error}] \addplot [line width=0.5pt, color=blue,mark=triangle] table[x index=0,y index=2]{./data/MLQMC_mean_points.txt};\addlegendentry{QMC, using interface}; \addplot [line width=0.5pt, color=olive,mark=triangle] table[x index=0,y index=2]{./data/MLSG_mean_points.txt};\addlegendentry{SG, using interface}; \addplot[line width=0.5pt, color=black,mark=none, dashed] coordinates { (0,0.000618693) (1,0.000447365) (2,0.000155738) (3,1.12308e-05) }; \end{semilogyaxis} \end{tikzpicture} \caption{\label{fig:QuadProb4}{\em Left:} Convergence of the multilevel quadrature against the MLQMC solution for the mean over the artificial interface and on 
points on the sphere. {\em Right:} Convergence in the points on the sphere when they are evaluated from the mean of the Cauchy data at the artificial interface.} \end{center} \end{figure} \begin{figure}[htb] \begin{center} \begin{tikzpicture} \begin{semilogyaxis}[height=0.4\textwidth,width=0.48\textwidth, ymax = 1e-2, ymin =5e-6, legend style={legend pos=south west,font=\tiny}, xlabel=refinement level, ylabel ={$\ell^{\infty}$-error}] \addplot [line width=0.5pt, color=blue,mark=diamond] table[x index=0,y index=1]{./data/MLQMC_cor_interface.txt};\addlegendentry{QMC at interface}; \addplot [line width=0.5pt, color=blue,mark=square] table[x index=0,y index=1]{./data/MLQMC_cor_points.txt};\addlegendentry{QMC at points}; \addplot [line width=0.5pt, color=olive,mark=diamond] table[x index=0,y index=1]{./data/MLSG_cor_interface.txt};\addlegendentry{SG at interface}; \addplot [line width=0.5pt, color=olive,mark=square] table[x index=0,y index=1]{./data/MLSG_cor_points.txt};\addlegendentry{SG at points}; \addplot[line width=0.5pt, color=black,mark=none, dashed] coordinates { (0,0.00308649) (1,0.00241284) (2,0.000565987) (3,3.95972e-05) }; \addplot[line width=0.5pt, color=black,mark=none, dashed] coordinates { (0,0.000618693) (1,0.000447365) (2,0.000155738) (3,1.12308e-05) }; \end{semilogyaxis} \end{tikzpicture}\hfill \begin{tikzpicture} \begin{semilogyaxis}[height=0.4\textwidth,width=0.48\textwidth, ymax = 1e-3, ymin = 5e-6, legend style={legend pos=south west,font=\tiny}, xlabel=refinement level, ylabel ={$\ell^{\infty}$-error}] \addplot [line width=0.5pt, color=blue,mark=triangle] table[x index=0,y index=2]{./data/MLQMC_cor_points.txt};\addlegendentry{QMC, using interface}; \addplot [line width=0.5pt, color=olive,mark=triangle] table[x index=0,y index=2]{./data/MLSG_cor_points.txt};\addlegendentry{SG, using interface}; \addplot[line width=0.5pt, color=black,mark=none, dashed] coordinates { (0,0.000618693) (1,0.000447365) (2,0.000155738) (3,1.12308e-05) }; 
\end{semilogyaxis} \end{tikzpicture} \caption{\label{fig:QuadProb5}{\em Left:} Convergence of the multilevel quadrature against the MLQMC solution for the correlation in the points at the interface and in the points in free space. {\em Right:} Convergence in the points in free space when computing the correlation in free space from the correlation at the artificial interface.} \end{center} \end{figure} \subsection{Shape inversion} To illustrate the Bayesian shape inversion, we draw a random domain perturbation given by ${\bs y}^\star\in\Gamma$ from the model presented in the previous subsection and consider it to be our reference solution. The measurement operator $O$ defining $\mathcal{G}$ is given by point evaluations of the scattered wave in the midpoints of the 52 patches at the artificial interface. The noise level is set to $\bs\Sigma=\sigma^2{\bs I}$, where $\sigma=0.1\cdot\max |\mathcal{G}({\bs y}^\star)|$. The unperturbed domain serves as the prior. Figure~\ref{fig:bayes} illustrates the reference solution, the prior and posterior mean and the posterior's $2\sigma$ confidence region in each coordinate direction obtained by a multilevel quasi-Monte Carlo quadrature on the finest level \(L=3\). The posterior mean has clearly moved away from the prior and is located within the $2\sigma$ region of the true scatterer. \begin{figure} \centering \includegraphics[width=0.46\textwidth,clip=true,trim=90 140 350 350]{bayes_prior_posterior} \includegraphics[width=0.46\textwidth,clip=true,trim=90 140 350 350]{bayes_truth_posterior} \caption{\emph{Left:} Prior mean (yellow) and posterior mean (blue) of the inverse problem.
\emph{Right:} Reference solution (red) and posterior mean (blue) of the inverse problem with $2\sigma$ confidence intervals (crosses) in each coordinate direction.} \label{fig:bayes} \end{figure} \section{Conclusions and future work} We have introduced a fast IGA implementation for solving time-harmonic acoustic wave scattering for shape uncertainty quantification, employing boundary integral formulations, multilevel quadrature and state-of-the-art acceleration techniques. This allows for the analysis of large shape deformations for both forward and inverse problems, including shape optimization. Future work involves the extension to Maxwell scattering. \bibliographystyle{plain}
\section{The impact of feedback from individually-resolved SNe on a protogalaxy with a multiphase ISM} The Nut simulations (presented in detail in \cite{me_2010}) probe the evolution of a protogalaxy embedded in the cosmic web at $z \approx 9$, at very high resolution (0.5pc physical in the densest regions) using the resimulation technique with the AMR code {\sc ramses} \citep{ramses}. The main simulation includes cooling, star formation (SF), SNe feedback (kinetic feedback modelled as Sedov blastwaves \citep{dubois_teyssier_sn}), metal enrichment and a UV background \citep{uvbackground}. A galactic wind develops, filling a large fraction of the volume out to $\approx6r_{\rm vir}$ with hot gas, which is punctured by cold filaments of the cosmic web that propagate all the way to the disc. SF rates are high for the halo mass ($1 {\rm M}_{\odot}$/yr for a $10^9 {\rm M}_{\odot}$ halo) and we report the formation of star-forming clumps with typical masses of $10^6 {\rm M}_{\odot}$. In order to isolate the impact of the SNe-driven galactic outflow, we undertake a parallel analysis of a control run with no SNe feedback. \subsection{Cold inflows versus hot outflows} We establish temperature and density thresholds that separate out the filaments ($0.1 \le \rho \le 10$ atoms ${\rm cm}^{-3}$, $T < 2 \times 10^4$K), the clumps (i.e. condensed gas, such as the main disc and satellite galaxies) ($ \rho > 10$ ${\rm cm}^{-3}$, $T < 2 \times 10^4$K), cold diffuse gas ($ \rho < 0.1$ ${\rm cm}^{-3}$, $T < 2 \times 10^4$K), warm diffuse gas ($ \rho < 10 $ ${\rm cm}^{-3}$, $2 \times 10^4$K $<T < 2 \times 10^5$K) and hot diffuse gas ($ \rho < 10$ ${\rm cm}^{-3}$, $T > 2 \times 10^5$K). We are then able to measure and compare the inflow and outflow rates in each component; we calculate the mass flux through spherical shells, centred on the halo centre, out to the virial radius.
The full results of the inflow/outflow analysis are described in \cite{me_2010}, but here we recall the main conclusions:\\ \noindent \paragraph{\bf Inflows} The most striking result is that the far-reaching wind does not impede the accretion of gas onto the central galaxy. We find that the filaments are the dominant supply of gas to the disc, and since the gas in the filaments is dense and highly supersonic, it is not surprising that the wind has not been able to halt its progress. This suggests that for galaxies with host haloes of this mass ($10^9 {\rm M}_{\odot}$) that are predominantly fed by cold streams, SNe-driven winds are unlikely to be able to limit accretion and therefore reduce SF in these systems, as is often assumed. \\ \noindent \paragraph{\bf Outflows} Note that we associate the galactic wind with the warm and hot diffuse components, defined above. We find that while the wind is far-reaching (extending to around $6 r_{\rm vir}$ by $z=9$), the mass outflow rate is only about $10$ per cent of the inflow rate; we do not find a massive wind as seen in low-redshift observations \citep[e.g.][]{cmartin99_outflows}. We also report some outflow in the cold, diffuse component, which we attribute to gas that was in the region around the filaments being entrained by the wind and swept out; an effect that is seen in observed low-redshift winds. \subsection{The resulting star formation} \begin{figure} \centering \begin{minipage}{0.63\linewidth} \includegraphics[width=0.5\textwidth, trim=10mm 5mm 13mm 5mm, clip]{fig1a.eps} \includegraphics[width=0.5\textwidth, trim=10mm 5mm 13mm 3mm, clip]{fig1b.eps} \end{minipage} \hspace{2mm} \begin{minipage}{0.34\linewidth} \centering \caption{{\bf Left: } The global average SF efficiency of the main galaxy versus redshift in the feedback (solid line) and no-feedback (dashed line) runs.
{\bf Right: } Ratio of the net inflow in the clumpy component to the net inflow in the filamentary component versus redshift for the feedback (solid line) and no-feedback (dashed line) runs.} \label{fig1} \end{minipage} \end{figure} Surprisingly, we find the star formation rate (SFR) in the Nut simulation is a factor of $\approx 10$ higher than that in the control run (without SNe) at $z=9$. We attribute this to the metal enrichment which results from the inclusion of SNe feedback. We find that the global average efficiency of SF is very high in both the Nut simulation and the control run. We use an efficiency of $1$ per cent to control SF on parsec scales in {\sc ramses}, as found in the observations of \citet{krumholztan}. However, the global efficiency is $\sim 10$ per cent as shown in Fig.~\ref{fig1} (left). We find that this efficiency is correlated with merger activity; in Fig.~\ref{fig1} (right) we show the ratio of inflow in the clumpy component to that in the filamentary component. High values of this quantity indicate that mergers are supplying a significant fraction of the gas being accreted; it is clear that these instances coincide with peaks in the global SF efficiency of the main galaxy (left). These characteristics of the SF can be explained as follows. As evidenced by the star-forming clumps seen in the Nut simulation at $z=9$, and supported by clumps captured in other simulations \citep[e.g.][]{agertz_clumpygal} and observations of clump-cluster galaxies \citep[e.g.][]{clump_cluster}, all at $z=3$, it seems probable that SF occurs in dense clumps at high redshift. We postulate that the correlation of the global SF efficiency with the relative importance of merger activity reflects the fact that mergers can induce fragmentation of the ISM, resulting in a more clumpy medium \citep{antennae} and more gas becoming eligible for SF. We examine the process of merger-induced SF in detail in the next section.
\vspace{-0.4cm} \section{Merger-induced star formation at high resolution in a cloudy ISM} Merger-induced clustered SF has been demonstrated in simulations of the Antennae system \citep{antennae}. We are undertaking a similar investigation, but with a whole sample of mergers with different orbital parameters, in order to assess the relative importance of this type of merger-induced SF. We have simulated five 1:1 mergers with the AMR code {\sc ramses} at a maximum spatial resolution of $5$pc, allowing us to resolve densities up to $10^6 {\rm cm}^{-3}$. The sample is currently being extended both with additional orbits and different mass ratios. Here, we focus primarily on examining the mechanism via which SF is induced. \subsection{Changes in the star formation and ISM structure} \begin{figure} \centering \begin{minipage}{0.63\linewidth} \includegraphics[width=0.49\textwidth, trim=14mm 6mm 13mm 13mm, clip]{fig2a.eps} \includegraphics[width=0.5\textwidth, trim=14mm 6mm 11mm 11mm, clip]{fig2b.eps} \end{minipage} \begin{minipage}{0.36\linewidth} \caption{{\bf Left: }Gas density pdf before (solid) and during (dashed) the merger and at the peak of the starburst (dot-dashed). {\bf Right: }The KS relation for discs (solid) and starbursts (dashed) from \citet{daddi_etal_2010}, and selected points for simulated galaxies before the merger (diamonds), during the starburst (triangles) and after the merger (squares). In the latter stage the galaxy has been morphologically quenched \citep{MQ_2009}.} \label{fig2} \end{minipage} \end{figure} We find that, in some cases, the peak in the SF rate occurs before the final coalescence of the two galaxies and the gas density distribution shows an extended, clumpy region, rather than a centrally-peaked but otherwise smooth distribution. In order to understand what impact the merger has had on the ISM, we examine the gas density probability density function (pdf) in the left of Fig.~\ref{fig2}.
Here we can see the evolution of the pdf from before the merger (solid line), during the initial stages of the merger (dashed line) and at the peak of the starburst (dot-dashed line). It is clear that there is an excess of very dense gas ($\rho > 10^4$ ${\rm cm}^{-3}$) which increases as we progress through the merger. \\ \vspace{-0.5cm} \subsection{Insights into observations of merging systems} It is clear that in order to reproduce the formation of star clusters, which are observed both in high-redshift clump-cluster galaxies and in lower-redshift merging systems, it is necessary for hydrodynamical simulations to have a high spatial resolution (and the associated low temperature threshold). However, aside from the clustered SF, additional effects are apparent in such highly resolved simulations of merging systems which are not seen when the ISM is warm and smooth (as in low-resolution simulations). \\ \begin{itemize} \item{We have captured an excess of dense gas directly in our merger simulations (as demonstrated in the left of Fig.~\ref{fig2}). It was proposed (and demonstrated using post-processing of simulations) by \citet{juneau_etal_2009} that an excess of dense gas in the ISM could be induced by mergers and that, since HCN emission traces denser gas phases than CO, this could account for observations of enhanced HCN/CO ratios in ULIRGs. This would remove the need to invoke active galactic nuclei activity.}\\ \item{The increased fraction of gas mass in this very dense phase (in the form of clumps) can also provide a physical explanation for the two KS relations demonstrated in \citet{daddi_etal_2010}.} Fig.~\ref{fig2} (right) shows that as our merger simulations progress, the galaxies move up and to the right in the $\Sigma_{\rm SFR}$-$\Sigma_{\rm gas}$ plot, reaching the starburst sequence at the peak of the SFR, then moving down to below the disc sequence once much of the fuel has been consumed.
There is some central gas inflow which increases the average gas surface density $\Sigma_{\rm gas}$. However, the merging galaxies reach the starburst sequence because $\Sigma_{\rm SFR}$ is much higher than would be expected from a smooth gas distribution with the same value of $\Sigma_{\rm gas}$ due to the high efficiency of SF in the very dense gas clumps.\\ \item{Simulations of high-redshift mergers (which have a higher gas fraction than those discussed above) show that resolving the clumpy nature of the ISM can also have a significant impact on the merger remnant \citep{bournaud_etal_2010}.} There are two main effects when the ISM is multiphase: 1) the merger remnant is much more compact and 2) there is a much smaller surviving/reformed disc component. The first conclusion provides a possible formation mechanism for the population of compact ETGs at high redshift, while the latter casts doubt on the likelihood of discs surviving/reforming after mergers without some external source of gas (e.g. cold accretion). \end{itemize} \vspace{-0.4cm} \section{Conclusions} We demonstrate that many insights into galaxy mass assembly via both cold flows and mergers can be gleaned from simulations with sufficient resolution to capture the cloudy nature of the ISM and clustered SF. We find that SNe-driven outflows cannot reduce the cold accretion onto dwarf-mass galaxies at $z\approx 9$ and that the SFR is actually higher due to the ensuing metal enrichment. We describe how high-resolution galaxy merger simulations enable us to explain several observational results (e.g. enhanced HCN/CO ratios in ULIRGs, a KS sequence for starbursts and high-redshift compact ETGs) as they reveal physical processes not captured previously. \vspace{-0.4cm}
Q: display list in select html in function of another select choice in a jsp page I don't know if my title is clear, but what I want to do is display a list in the options of one select element depending on the choice made in another select element. I'm using a JSP page with a Map variable to hold the different lists. The JSP page looks like this:

<form:select id="typeIndice1" path="typeIndice" itemLabel="libelle" onchange="run()">
    <form:options items="${listeIndices}" />
</form:select>
<form:select id="indices" path="indice" itemValue="valeur" itemLabel="libelle" >
</form:select>

The model attribute is "listeIndices", a Map whose keys are the data to display in the second select and whose values are the wording of the data. I know the key and value should normally be the other way round, but this makes it easy to display the wording in the first select. I have the script below:

<script>
function run(){
    var select = document.getElementById("typeIndice1");
    var input = document.getElementById("indices");
    var option = document.createElement("option");
    option.text = select.value;
    input.options.add(option);
}
</script>

When an option is selected in the first select, the run function creates a new option in the second select with the value of the first select. I get the complete list in a single option, but I would like one option per row, and I don't know how to do that. I also tried:

option.items = select.value;

instead of

option.text = select.value;

but then I get a blank option.
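For reference, a common pattern for getting one option per entry (a sketch, assuming the map's entries are available client-side as a plain object — e.g. serialized to JSON by the controller; the `entries` object shape is an assumption, not part of the question's code):

```javascript
// One <option> per map entry. `entries` is a plain object
// { value: label, ... } mirroring one of the server-side lists.
function buildOptions(entries) {
  return Object.keys(entries).map(function (value) {
    return { value: value, label: entries[value] };
  });
}

// DOM side: clear the target select, then append the built options.
function fillSelect(selectEl, entries) {
  selectEl.options.length = 0; // remove previous options
  buildOptions(entries).forEach(function (item) {
    var option = document.createElement("option");
    option.value = item.value;
    option.text = item.label;
    selectEl.add(option);
  });
}
```

In `run()`, one would then look up the sub-map for the chosen type and call `fillSelect(document.getElementById("indices"), subMap)` instead of adding a single option holding the whole list.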
\section{Introduction} Modern neural networks are deep, wide, and nonconvex. They are powerful tools for representation learning and serve as core components of deep learning systems. They are top-performing models in language translation~\citep{sutskever_sequence_2014}, visual recognition~\citep{he_deep_2016}, and decision making~\citep{silver_general_2018}. However, the understanding of modern neural networks is way behind their broad applications. A series of pioneering works~\citep{zhang_understanding_2016, belkin_reconciling_2019, locatello_challenging_2019} reveal the difficulty of applying conventional machine learning wisdom to deep learning. A better understanding of deep learning is a major mission in the AI field. One obstacle in the way of understanding deep learning is the existence of magic modules in modern neural networks and magic tricks to train them. Take batch normalization module~\citep{ioffe_batch_2015} for example, its pervasiveness in both academia and industry is undoubted. The exact reason why it expedites training and helps generalization, however, remains mysterious and is actively studied in recent years~\citep{bjorck_understanding_2018, santurkar_how_2018, kohler_exponential_2019}. Only when we clearly understand these magical practices can we promote the theoretical understanding of modern neural networks. \begin{figure*}[htb] \hfill \subfigure[Learning rate decay strategy]{ \label{fig:multistep_strategy} \includegraphics[height=0.39\textwidth]{fig/MultiStepLR.pdf}} \hfill \subfigure[Figure taken from \citet{he_deep_2016}]{ \label{fig:he_figure} \includegraphics[height=0.39\textwidth]{fig/resnet.pdf}} \hfill \caption{Training error in (b) is shown by thin curves, while test error in (b) by bold curves.} \end{figure*} Learning rate is {``the single most important hyper-parameter''}~\citep{bengio_practical_2012} in training neural networks. 
Learning rate decay (lrDecay) is a \emph{de facto} technique for training modern neural networks, where we adopt an initially large learning rate and then decay it by a certain factor after pre-defined epochs. Popular deep networks such as ResNet~\citep{he_deep_2016} and DenseNet~\citep{huang_densely_2017} are all trained by Stochastic Gradient Descent (SGD) with lrDecay. \Figref{fig:multistep_strategy} is an example of lrDecay, with the learning rate decayed by 10 every 30 epochs. The training is divided into several stages by the moments of decay. These stages can be easily identified in learning curves (such as \Figref{fig:he_figure}), where the performance improves sharply shortly after the learning rate is decayed. lrDecay enjoys great popularity due to its simplicity and general effectiveness. Common beliefs in how lrDecay works are derived from the optimization analysis of (Stochastic) Gradient Descent~\citep{lecun_second_1991, kleinberg_alternative_2018}. They attribute the effect of an initially large learning rate to escaping spurious local minima or accelerating training and attribute the effect of decaying the learning rate to avoiding oscillation around local minima. However, these common beliefs are insufficient to explain our empirical observations from a series of carefully-designed experiments in \Secref{sec:against}. In this paper, we provide an alternative view: the magnitude of the learning rate is closely related to the complexity of learnable patterns. From this perspective, we propose a novel explanation for the efficacy of lrDecay: \textbf{an initially large learning rate suppresses the memorization of noisy data while decaying the learning rate improves the learning of complex patterns.} This is validated on a carefully-constructed dataset with tractable pattern complexity. The pattern complexity in real-world datasets is often intractable. We thus validate the explanation by testing its implication on real-world datasets.
The implication that additional patterns learned in later stages of lrDecay are more complex and thus less transferable across different datasets, is also justified empirically. A comparison between the proposed explanation and the common beliefs is summarized in Table~\ref{tab:compare_explanation}. Our explanation is supported by carefully-designed experiments and provides a new perspective on analyzing learning rate decay. The contribution of this paper is two-fold: \vspace{-\topsep} \begin{itemize} \setlength\itemsep{0pt} \item We demonstrate by experiments that existing explanations of how lrDecay works are insufficient in explaining the training behaviors in modern neural networks. \item We propose a novel explanation based on pattern complexity, which is validated on a dataset with tractable pattern complexity, and its implication is validated on real-world datasets. \end{itemize} \vspace{-\topsep} The explanation also suggests that complex patterns are only learnable after learning rate decay. Thus, when the model learns all simple patterns, but the epoch to decay has not reached, immediately decaying the learning rate will not hurt the performance. This implication is validated in \Secref{sec:autodecay}. \begin{table*} \vspace{-5pt} \resizebox{\textwidth}{!}{ \begin{tabular}{cccccc} \toprule explanation & perspective & initially large lr & supported & lr decay & supported \\ \midrule \citet{lecun_second_1991} & optimization & accelerates training & \cmark & avoids oscillation & \xmark \\ \citet{kleinberg_alternative_2018} & optimization & escapes bad local minima & \cmark & converges to local minimum & \xmark \\ Proposed & pattern complexity & avoids fitting noisy data & \cmark & learns more complex patterns & \cmark \\ \bottomrule \end{tabular}} \vspace{-2pt} \caption{Comparison of explanations on why lrDecay helps training neural networks. 
The column ``supported'' means whether the explanation is supported by the empirical experiments in this paper.} \vspace{-5pt} \label{tab:compare_explanation} \end{table*} \section{Related Work} \label{sec:related} \subsection{Understanding the Behavior of SGD} Recently, researchers reveal the behavior of SGD from multiple perspectives~\citep{li_towards_2019, mangalam_deep_2019, nakkiran_sgd_2019}. They respect the difference among data items rather than treat them as identical samples from a distribution. They study the behavior of SGD in a given dataset. In ~\citet{mangalam_deep_2019}, they show that deep models first learn easy examples classifiable by shallow methods. The mutual information between deep models and linear models is measured in \citet{nakkiran_sgd_2019}, which suggests deep models first learn data explainable by linear models. Note that they are not relevant to learning rates. \citet{li_towards_2019} analyze a toy problem to uncover the regularization effect of an initially large learning rate. Their theoretical explanation is, however, based on a specific two-layer neural network they design. Different from these works, \Secref{sec:support} studies the behavior of SGD induced by lrDecay in a modern WideResNet~\citep{zagoruyko_wide_2016}, finding that learning rate decay improves learning of complex patterns. We formally define pattern complexity by expected class conditional entropy, while the measure of pattern complexity in \citet{mangalam_deep_2019, nakkiran_sgd_2019} relies on an auxiliary model. \subsection{Adaptive Learning Rate Methods} Adaptive learning rate methods such as AdaGrad~\citep{duchi_adaptive_2011}, AdaDelta~\citep{zeiler_adadelta:_2012}, and ADAM~\citep{kingma_adam:_2014} are sophisticated optimization algorithms for training modern neural networks. It remains an active research field to study their behaviors and underlying mechanisms~\citep{reddi_convergence_2018, luo_adaptive_2019}. 
However, we focus on learning rate decay in SGD rather than on the adaptive learning rate methods. On the one hand, SGD is the \textit{de facto} training algorithm for popular models~\citep{he_deep_2016, huang_densely_2017} while lrDecay is not common in the adaptive methods; on the other hand, many adaptive methods are not as simple as SGD and can even suffer from convergence problems in some scenarios~\citep{wilson_marginal_2017, liu_rethinking_2019}. We choose to study SGD with lrDecay, without introducing adaptive learning rates, to keep away from their confounding factors. \subsection{Other Learning Rate Strategies} Besides the commonly used lrDecay, there are other learning rate strategies. \citet{smith_cyclical_2017} proposes a cyclic strategy, claiming to eliminate the need to tune learning rates. Warm restarts of the learning rate are explored in \citet{loshchilov_sgdr:_2017}. They achieve better results when combined with Snapshot Ensemble~\citep{huang_snapshot_2017}. These learning rate strategies often yield better results at the cost of additional hyperparameters that are not intuitive. Consequently, it is still the \emph{de facto} practice to decay the learning rate after pre-defined epochs as in \Figref{fig:multistep_strategy}. We restrict our analysis to lrDecay rather than these fancier schedules because of its simplicity and general effectiveness. \subsection{Transferability of Deep Models} Training a model on one dataset that can be transferred to other datasets has long been a goal of AI research. The exploration of model transferability has attracted extensive attention. In \citet{oquab_learning_2014}, deep features trained for classification are transferred to improve object detection successfully. ~\citet{yosinski_how_2014} study the transferability of different modules in pre-trained networks, indicating that higher layers are more task-specific and less transferable across datasets.
By varying network architectures, \citet{kornblith_better_2019} show architectures with a better ImageNet accuracy generally transfer better. \citet{raghu_transfusion:_2019} explore transfer learning in the field of medical imaging to address domain-specific difficulties. Different from these works that only consider the transferability of models after training, we investigate another dimension of model transferability in \Secref{sec:transfer}: the evolution of transferability during training with lrDecay. \section{Common Beliefs in Explaining lrDecay} \label{sec:existing} \subsection{Gradient Descent Explanation} \label{sec:existing_gd} The practice of lrDecay in training neural networks dates back to ~\citet{lecun_efficient_2012}. The most popular belief in the effect of lrDecay comes from the optimization analysis of Gradient Descent (GD)~\citep{lecun_second_1991}. Although SGD is more practical in deep learning, researchers are usually satisfied with the analysis of GD considering that SGD is a stochastic variant of GD. \begin{figure*}[htbp] \centering \vspace{-10pt} \includegraphics[width=\textwidth]{fig/Oscillation.pdf} \caption{Gradient Descent explanation. From left to right: 1) learning rate is small enough to converge around a minimum, 2) moderate so that it bounces among minima, 3) too large to converge.} \vspace{-10pt} \label{fig:Oscillation} \end{figure*} Specifically, ~\citet{lecun_second_1991} analyze the property of a \emph{quadratic} loss surface which can be seen as a second-order approximation around a local minimum in nonconvex optimization. Learning rates are characterized by the relationship with eigenvalues of the Hessian at a local minimum. Denote $ \eta $ the learning rate, $ H $ the Hessian, $ \lambda $ an eigenvalue of $ H $, and $ \mathbf{v} $ an eigenvector of $ \lambda $. The behavior of the network along the direction $ \mathbf{v} $ can be characterized as $ (1 - \eta \lambda)^k \mathbf{v} $, with $ k $ the iteration number. 
Convergence in the direction of $ \mathbf{v} $ requires $ 0 < \eta < 2 / \lambda $, while $ \eta > 2 / \lambda $ leads to divergence in the direction of $ \mathbf{v} $. If $ 0 < \eta < 2 / \lambda $ holds for every eigenvalue of the Hessian, the network will converge quickly (\Figref{fig:Oscillation} left). If it holds for some directions but not for all directions, the network will diverge in some directions and thus jump into the neighborhood of another local minimum (\Figref{fig:Oscillation} middle). If the learning rate is too large, the network will not converge (\Figref{fig:Oscillation} right). In particular, when oscillation happens, it means the learning rate is too large and should be decayed. The effect of lrDecay hence is to avoid oscillation and to obtain faster convergence. Note \citet{lecun_second_1991} only analyze a simple \emph{one-layer} network. It may not hold for modern neural networks (see \Secref{sec:against_gd}). \vspace{-4pt} \subsection{Stochastic Gradient Descent Explanation} \vspace{-4pt} \label{sec:existing_sgd} Another common belief is the Stochastic Gradient Descent explanation, arguing that {``with a high learning rate, the system is unable to settle down into deeper, but narrower parts of the loss function.''}~\footnote{\url{http://cs231n.github.io/neural-networks-3/\#anneal}} Although it is common, this argument has not been formally analyzed until very recently. \begin{figure*}[htbp] \centering \vspace{-5pt} \includegraphics[width=\textwidth]{fig/SGD_explanation.pdf} \caption{SGD explanation (taken from \citet{kleinberg_alternative_2018}). The first plot: an initially large learning rate helps escape spurious local minima. 
From the second to the fourth plots: after more rounds of learning rate decay, the probability of reaching the minimum becomes larger.} \label{fig:SGD_explanation} \end{figure*} Under some assumptions, \citet{kleinberg_alternative_2018} prove that SGD is equivalent to a convolution of the loss surface, with the learning rate serving as the conceptual kernel size of the convolution. With an appropriate learning rate, spurious local minima can be smoothed out, thus helping neural networks escape bad local minima. Decaying the learning rate later helps the network converge around the minimum. \Figref{fig:SGD_explanation} is an intuitive one-dimensional example. The first plot shows that a large learning rate helps escape bad local minima on both sides. The lrDecay in the subsequent plots increases the probability of reaching the global minimum. Although intuitive, the explanation requires some assumptions that may not hold for modern neural networks (see \Secref{sec:against_sgd}). \section{Experiments Against Existing Explanations} \label{sec:against} Although the (Stochastic) Gradient Descent explanations in \Secref{sec:existing} account for the effect of lrDecay to some extent, in this section, we show by carefully-designed experiments that they are insufficient to explain the efficacy of lrDecay in modern neural networks. In all the experiments except for \Secref{sec:transfer}, we use a modern neural network named WideResNet~\citep{zagoruyko_wide_2016}. It is deep, wide, nonconvex, and suitable for datasets like CIFAR10~\citep{krizhevsky_learning_2009}. \subsection{Experiments Against the Gradient Descent Explanation} \label{sec:against_gd} We train a WideResNet on the CIFAR10 dataset with GD, decay the learning rate at different epochs, and report the training loss (optimization) as well as the test accuracy (generalization) in \Figref{fig:GD}. WideResNet and CIFAR10 are commonly used for studying deep learning~\citep{zhang_understanding_2016}.
CIFAR10 is small enough that we can feed the whole dataset as a single batch using distributed training, computing the exact gradient rather than estimating it in mini-batches. Experiments show that lrDecay brings negligible benefit to either optimization or generalization. No matter when the learning rate is decayed, the final performances are almost the same. The instability at the beginning is related to the high loss wall described in~\citet{pascanu_difficulty_2013}, which is not the focus of this paper. \begin{figure*}[htbp] \centering \includegraphics[width=\textwidth]{fig/GD.pdf} \caption{Training of WideResNet on CIFAR10 with Gradient Descent. X-axis indicates the number of epochs (in $ 10^3 $). Arrows indicate the epoch with learning rate decay.} \label{fig:GD} \end{figure*} The above observation directly contradicts the GD explanation in \Secref{sec:existing_gd}. The contradiction arises from the fact that~\citet{lecun_second_1991} only analyze simple linear networks, so it is no wonder the explanation fails in modern non-linear deep networks. Recent studies~\citep{keskar_large-batch_2017, yao_hessian-based_2018} reveal that large-batch training of modern networks can lead to very sharp local minima. Gradient Descent (the extreme of large-batch training) can lead to even sharper local minima. In \Figref{fig:eigenvalues}, we calculate the largest ten eigenvalues\footnote{Thanks to the advances of~\citet{xu_accelerated_2018, yao_hessian-based_2018}, we can compute the eigenvalues directly.} of the Hessian as well as the convergence interval ($ 0 < \eta < 2 / \lambda $) for each eigenvalue for a trained WideResNet. The top eigenvalues reach the order of $\approx 200 $. By contrast, eigenvalues of simple networks in \citet{lecun_second_1991} often lie in $ [0, 10] $ (Figure~1 in their original paper).
The spectrum of eigenvalues in modern networks is very different from that in simple networks analyzed by \citet{lecun_second_1991}: the Hessian of modern networks has a much larger spectral norm. The GD explanation in \Secref{sec:existing_gd} attributes the effect of lrDecay to avoiding oscillation. Oscillation means there is a small divergence in some directions of the landscape so that the network bounces among nearby minima. However, the divergence factor $ 1 - \eta \lambda $ for the largest eigenvalue ($ \approx 200 $) is too large even for a small increase in the learning rate. Thus, the learning rate is either small enough to converge in a local minimum or large enough to diverge. It is hardly possible to observe oscillation in learning curves (\Figref{fig:Oscillation} middle), and diverging learning curves (\Figref{fig:Oscillation} right) can be discarded during hyperparameter tuning. Therefore, only stable solutions are observable where $ \eta $ is small enough (\Figref{fig:Oscillation} left), leaving no need for learning rate decay. Indeed, when the learning rate is increased mildly, we immediately observe diverging learning curves (\Secref{sec:gd_large_lr}). In short, the GD explanation cannot explain the effect of lrDecay in training modern neural networks.
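This argument can be reproduced numerically from the $ (1 - \eta \lambda)^k $ factor alone; a minimal sketch (plain Python; the eigenvalues are hypothetical, with $200$ mimicking the top of the measured spectrum):

```python
def gd_factor(eta, lam, k):
    """Magnitude of the component along an eigendirection with
    eigenvalue `lam` after k GD steps on a quadratic: |1 - eta*lam|**k."""
    return abs(1.0 - eta * lam) ** k

# Hypothetical spectrum: a few mild eigenvalues plus a stiff one (~200),
# mimicking the top of the measured WideResNet spectrum.
lams = [0.5, 1.0, 200.0]

stable = [gd_factor(0.009, lam, 1000) for lam in lams]    # eta < 2/200
unstable = [gd_factor(0.011, lam, 1000) for lam in lams]  # eta > 2/200
```

With $ \eta $ just below $ 2/200 $ every direction shrinks, while nudging $ \eta $ just above it makes the stiff direction blow up immediately — there is no intermediate regime in which mild oscillation would be observed.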
\begin{figure*}[htb] \RawFloats \begin{minipage}{0.45\textwidth} \centering \includegraphics[width=\textwidth]{fig/ConvergenceIntervals.pdf} \caption{The largest ten eigenvalues $ \lambda $ (blue curve) and convergence intervals $ (0, \frac{2}{\lambda}) $ (bar) for WideResNet trained with Gradient Descent.} \label{fig:eigenvalues} \end{minipage} \hfil \begin{minipage}{0.47\textwidth} \centering \includegraphics[width=\textwidth]{fig/SGD_expected.pdf} \caption{Expected behavior (but {\color{blue}not} observed) induced by the SGD explanation: best performances before and after decay are comparable.} \label{fig:SGD_expected} \end{minipage} \end{figure*} \subsection{Experiments Against the Stochastic Gradient Descent Explanation} \label{sec:against_sgd} We follow the experimental setups in \Secref{sec:against_gd}, but replace GD with SGD in \Figref{fig:SGD}. According to the SGD explanation in \Secref{sec:existing_sgd}, the effect of learning rate decay is to increase the probability of reaching a good minimum. If this is true, the model trained before decay can also reach minima, only with a smaller probability compared to the model after decay. In other words, the SGD explanation indicates the best performances before and after decay are the same. It predicts learning curves like \Figref{fig:SGD_expected}. However, \Figref{fig:SGD} does not comply with the SGD explanation: the best performances before and after lrDecay are different by a noticeable margin. Without lrDecay (the rightmost column in \Figref{fig:SGD}), the performance plateaus and oscillates, with no chance of reaching the performance of the other columns after decay. The performance boost after learning rate decay is widely observed (\Figref{fig:he_figure} for example). However, possibly due to the violation of its assumptions~\citep{kleinberg_alternative_2018}, the SGD explanation cannot explain the underlying effect of lrDecay.
\begin{figure*}[htbp] \centering \includegraphics[width=\textwidth]{fig/SGD.pdf} \caption{Training of WideResNet on CIFAR10 with SGD. X-axis indicates the number of epochs. Arrows show the moment of learning rate decay. The rightmost plots show results without decay.} \label{fig:SGD} \end{figure*} \vspace{-5pt} \section{An Explanation from the View of Pattern Complexity} \label{sec:support} \Secref{sec:against} uncovers the insufficiency of common beliefs in explaining lrDecay. We thus set off to find a better explanation. \citet{mangalam_deep_2019, nakkiran_sgd_2019} reveal that SGD (without learning rate decay) learns from easy to complex. As learning rates often change from large to small in typical learning rate strategies, we hypothesize that the complexity of learned patterns is related to the magnitude of learning rates. Based on this, we provide a novel explanation from the view of pattern complexity: \textbf{the effect of learning rate decay is to improve the learning of complex patterns while the effect of an initially large learning rate is to avoid memorization of noisy data}. To justify this explanation, we carefully construct a dataset with tractable pattern complexity, and record model accuracies in simple and complex patterns separately with and without lrDecay. \subsection{Pattern Separation 10 (PS10) Dataset with Tractable Pattern Complexity} The explanation we propose involves pattern complexity, which is generally conceptual and sometimes measured with the help of a simple auxiliary model as in \citet{mangalam_deep_2019, nakkiran_sgd_2019}. Here we try to formalize the idea of pattern complexity: the complexity of a dataset is defined as the expected class conditional entropy: $ C ( \{(x_i, y_i)\}_{i=1}^{n} ) = \mathbb{E}_y H ( P(x | y) ) $, where $ H $ denotes the entropy functional. The complexity of patterns depends on the complexity of the dataset they belong to. 
Higher $ C $ means larger complexity because there are on average more patterns in each class to be recognized (consider an animal dataset with 10 subspecies in each species vs. an animal dataset with 100 subspecies in each species). \vspace{-5pt} \begin{figure*}[htb] \hfill \subfigure[Simple Patterns]{ \label{fig:simple} \includegraphics[width=0.32\textwidth]{fig/pattern_10.pdf}} \hfill \subfigure[Complex Patterns]{ \label{fig:complex} \includegraphics[width=0.32\textwidth]{fig/pattern_100.pdf}} \hfill \subfigure[Data Composition]{ \label{fig:data} \includegraphics[width=0.32\textwidth]{fig/data.pdf}} \hfill \caption{The PS10 dataset. (a) Simple patterns: 10 patterns per category, complexity $ \log_2 10 $. (b) Complex patterns: 100 patterns per category, complexity $ \log_2 100 $. (c) Data composition: half of the data only contain simple patterns while the other half only contain complex patterns.} \label{fig:manual} \end{figure*} Equipped with the formal definition of complexity, we construct a \textbf{Pattern Separation 10 (PS10)} dataset with ten categories and explicitly separated simple patterns and complex patterns. We first generate a simple sub-dataset together with a complex sub-dataset in $ \mathbb{R}^3 $. As shown in \Figref{fig:simple} and \Figref{fig:complex}, patterns are visualized as colors because they lie in $ \mathbb{R}^3 $. The category label can be identified by either simple patterns or complex patterns. We then merge the two sub-datasets into one dataset. The merging method in \Figref{fig:data} is specifically designed such that the simple subset and complex subset are fed into different channels of the WideResNet. This mimics the intuition behind patterns: the eye pattern and the nose pattern have different locations in an image of a human face. To be compatible with the sliding-window fashion of convolutional computation, we make patterns the same across the spatial dimensions of height and width to have the same image size as CIFAR10.
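The definition of $ C $ above is directly computable; a minimal sketch (plain Python; the (pattern, label) pairs are hypothetical stand-ins for the PS10 sub-datasets, uniform within each class as in the construction):

```python
import math
from collections import Counter

def dataset_complexity(samples):
    """Expected class conditional entropy C = E_y[ H(P(x|y)) ] in bits.
    `samples` is a list of (pattern, label) pairs."""
    by_class = {}
    for x, y in samples:
        by_class.setdefault(y, []).append(x)
    total = len(samples)
    c = 0.0
    for y, xs in by_class.items():
        p_y = len(xs) / total
        counts = Counter(xs)
        # entropy of the empirical pattern distribution within class y
        h = -sum((n / len(xs)) * math.log2(n / len(xs)) for n in counts.values())
        c += p_y * h
    return c

# Simple subset: 10 classes x 10 patterns per class, uniform -> log2(10)
simple = [((y, k), y) for y in range(10) for k in range(10)]
# Complex subset: 10 classes x 100 patterns per class -> log2(100)
complex_ = [((y, k), y) for y in range(10) for k in range(100)]
```

On these stand-ins the measure recovers the stated complexities $ \log_2 10 $ and $ \log_2 100 $.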
\begin{figure*}[htb] \centering \includegraphics[width=\textwidth]{fig/manual_pattern.pdf} \caption{Experiments with lrDecay and without lrDecay (constant learning rates) w.r.t.\ accuracies in different patterns. From left to right: Train with lrDecay; Train with a constant learning rate equal to the learning rate in Stage 1, 2, and 3 of lrDecay, respectively. The X-axis shows the epoch number.} \label{fig:ps10} \end{figure*} \subsection{The Effect of Decay: Improve Learning of More Complex Patterns} \label{sec:support_decay} To reveal the effect of decaying the learning rate, we compare experiments with and without lrDecay. For those without lrDecay, we set the learning rates equal to the learning rate of each stage in lrDecay. We measure not only the total accuracy but also the accuracies on simple and complex patterns separately. These accuracies are plotted in \Figref{fig:ps10}. The first plot in \Figref{fig:ps10} clearly shows that the model first learns simple patterns quickly. The boost in total accuracy mainly comes from the accuracy gain on complex patterns when the learning rate is decayed. Plots 2, 3, and 4 show the network learns more complex patterns with a smaller learning rate, leading to the conclusion that learning rate decay helps the network learn complex patterns. \subsection{The Effect of An Initially Large Learning Rate: Avoid Fitting Noisy Data} \label{sec:support_large} \Figref{fig:ps10} seems to indicate that an initially large learning rate does nothing more than accelerate training: in Plot 4, a small constant learning rate achieves roughly the same accuracy as lrDecay. However, by adding $ 10 \% $ noisy data to mimic real-world datasets, we observe something interesting. \Figref{fig:ps10_noise} shows the accuracies on simple patterns, complex patterns, and noise data when we add noise into the dataset. Plot 2 in \Figref{fig:ps10_noise} shows that an initially large learning rate helps the accuracy on complex patterns.
Plot 3 in \Figref{fig:ps10_noise} further shows the accuracy gain on complex patterns comes from the suppression of fitting noisy data. (Note that a higher accuracy on noisy data implies overfitting the noisy data, which is undesirable.) In short, memorizing noisy data hurts the learning of complex patterns, but this can be suppressed by an initially large learning rate. \begin{figure*}[htb] \centering \includegraphics[width=\textwidth]{fig/manual_pattern_with_noise.pdf} \caption{Comparison between lrDecay and a constant small learning rate on the PS10 dataset with $10\%$ noise. Accuracies on simple patterns, complex patterns, and noise data are plotted respectively.} \label{fig:ps10_noise} \end{figure*} Empirically, \citet{li_towards_2019} report that an initially large learning rate with decay outperforms a small and constant learning rate. They suspect that a network starting with an initially small learning rate will get stuck at some spurious local minima. Our experiments provide an alternative view: spurious local minima may stem from noisy data, and the regularization effect of an initially large learning rate is to suppress the memorization of noisy data. \section{The Implication of lrDecay on Model Transferability} \label{sec:transfer} \Secref{sec:support} examines the proposed explanation on the PS10 dataset. Now we further validate the explanation on real-world datasets. Because there are no clearly separated simple and complex patterns in real-world datasets, it is difficult to directly validate the explanation. The proposed explanation suggests that SGD with lrDecay learns patterns of increasing complexity. Intuitively, more complex patterns are less transferable and harder to generalize across datasets. Thus an immediate implication is that SGD with lrDecay learns patterns of decreasing transferability. We validate it by transfer-learning experiments on real-world datasets, to implicitly support the proposed explanation.
The transferability is measured by transferring a model from ImageNet to different target datasets. To get models in different training stages, we train a ResNet-50 on ImageNet from scratch and save checkpoints of models in different stages. The learning rate is decayed twice, leading to three stages. Target datasets for transferring are: (1) Caltech256~\citep{griffin_caltech-256_2007} with 256 general object classes; (2) CUB-200~\citep{wah_caltech-ucsd_2011} with 200 bird classes; (3) MITIndoors~\citep{quattoni_recognizing_2009} with 67 indoor scenes; (4) Sketch250~\citep{eitz_how_2012} with sketch paintings in 250 general classes. Sketch250 is the most dissimilar to ImageNet because it contains sketch paintings. We study two widely-used strategies of transfer learning: ``fix'' (ImageNet snapshot models are only used as fixed feature extractors) and ``finetune'' (feature extractors are jointly trained together with task-specific layers). Let $ acc_i $ denote the accuracy of the stage-$i$ snapshot model on ImageNet and $ tacc_i $ denote the accuracy of transferring the snapshot to the target dataset; then the transferability of \emph{additional} patterns learned in stage $ i $ is defined as $ \frac{tacc_i - tacc_{i - 1}}{acc_i - acc_{i - 1}} , i = 2, 3$. By definition, the transferability of patterns from ImageNet to ImageNet is $ 1.0 $, complying with common sense. The transferability is plotted in \Figref{fig:transferability}. Table~\ref{tab:transfer} contains the accuracies used to compute it. \begin{figure*}[htb] \includegraphics[width=0.4\textwidth]{fig/Transferability-Finetune.pdf} \hfil \includegraphics[width=0.4\textwidth]{fig/Transferability-Fix.pdf} \caption{Transferability of additional patterns learned in each stage w.r.t.\ different target datasets.} \label{fig:transferability} \end{figure*} In all experiments, we find that the transferability of additional patterns learned in stage 3 is less than that in stage 2.
Besides, on the Sketch250 dataset, the transferability of additional patterns learned in stage 3 is negative. These findings support our claim that additional patterns learned in later stages of lrDecay are more complex and thus less transferable. They also suggest that deep model-zoo developers provide pre-trained model snapshots in different stages so that downstream users can select the most transferable snapshot model according to their tasks. \section{Conclusion} \label{sec:conclude} In this paper, we dive into how learning rate decay (lrDecay) helps modern neural networks. We uncover the insufficiency of common beliefs and propose a novel explanation: the effect of decaying the learning rate is to improve the learning of complex patterns, and the effect of an initially large learning rate is to avoid memorization of noisy data. It is supported by experiments on a dataset with tractable pattern complexity as well as on real-world datasets. It would be interesting to further bridge the proposed explanation and the formal analysis of the optimization procedure. \ificlrfinal \subsubsection*{Acknowledgments} We thank Yuchen Zhang, Tianle Liu, Amir Gholami and Melih Elibol for helpful discussions. Kaichao You acknowledges the support from the Tsinghua Scholarship for Undergraduate Overseas Studies. This work is also supported by the National Natural Science Foundation of China (61772299, 71690231, and 61672313). \else \fi
\section{Introduction} The following notation will remain {\bf fixed} throughout the paper (if it is not stated otherwise): $K$ is a field of characteristic zero (not necessarily algebraically closed), $A$ is an (associative, not necessarily commutative) algebra with $1$, module means a left module, $\delta_1, \ldots , \delta_s\in {\rm Der }_K(A)$ are {\em commuting locally nilpotent} $K$-derivations of the algebra $A$, $A^\delta:= \cap_{i=1}^s {\rm ker } (\delta_i)$ is {\em the algebra of invariants} (or {\em constants}) for the derivations $\delta := (\delta_1, \ldots , \delta_s)$; $\sigma_1, \ldots , \sigma_s\in {\rm Aut}_K(A)$ are {\em commuting} $K$-automorphisms of the algebra $A$ such that the maps $\sigma_i-{\rm id_A}$ are {\em locally nilpotent} (for each $a\in A$, $(\sigma_i-{\rm id_A})^n(a)=0$ for all $n\gg 1$), $A^\sigma:=\{ a\in A\, | \, \sigma_1(a)=\cdots =\sigma_s(a)=a\}$ is {\em the algebra of invariants} for the automorphisms $\sigma := (\sigma_1, \ldots , \sigma_s)$. Theorem \ref{18Dec05} describes algebras $A$ for which there exists a set of commuting locally nilpotent derivations $\delta_1, \ldots , \delta_s$ such that $\delta_i(x_j)=\delta_{ij}$, the Kronecker delta, for some elements $x_1,\ldots , x_s\in A$ and all $1\leq i,j\leq s$ (the algebras $A$ are {\em iterated Ore extensions} of a very special type). Similarly, Theorem \ref{20Dec05} describes algebras $A$ for which there exists a set of commuting automorphisms $\sigma_1,\ldots , \sigma_s$ and a set of elements $x_1, \ldots , x_s\in A$ such that the maps $\sigma_i-{\rm id_A}$ are locally nilpotent and $\sigma_i (x_j)=x_j+\delta_{ij}$ for all $1\leq i,j\leq s$ where ${\rm id}_A$ is the identity map of $A$. The algebras $A$ are precisely of the type as in Theorem \ref{18Dec05}, and vice versa. In particular, the problem of finding generators and defining relations for the algebra $A^\delta$ is the `same' as that for the algebra $A^\sigma$.
So, we will restrict ourselves mainly to the case of derivations. {\it Remark}. Two old open problems, the {\em Jacobian Conjecture} and the {\em Dixmier Problem}, are essentially questions about whether certain {\em commuting} derivations $\delta_1, \ldots , \delta_s$ (of the polynomial algebra or the Weyl algebra, respectively) such that $\delta_i(x_j)=\delta_{ij}$ for some elements $x_1, \ldots , x_s$ are {\em locally nilpotent}. In this paper, we will see that this type of derivations is more common than one may expect. Typically, such derivations appear after localization of the algebra. In order to study this kind of derivations it is natural to look at the locally nilpotent case first. Theorem \ref{15Nov05} gives {\em explicitly} a set of algebra generators for the algebra $A^\delta$ and describes {\em explicitly} the set of defining relations for the generators. More can be said in the important special cases, Corollary \ref{ca18Dec05} ($A$ is commutative) and Theorem \ref{19Dec05} (if $[x_i, A^\delta ]\subseteq A^\delta$ for all $i=1,\ldots , s$). Plenty of examples are considered. A connection with rings of differential operators is described (Corollary \ref{22Dec05}). One can produce an example of a finitely generated noncommutative algebra $A$ such that the algebras $A^\delta $ and $A^\sigma$ are {\em not} finitely generated, {\em not} left/right Noetherian, and whose generators {\em do not} satisfy finitely many defining relations (see Section \ref{2GenCLND}). Theorem \ref{i19Dec05} gives {\em explicitly} a formula for the inverse of an automorphism of the algebra $A$ that preserves the ring of invariants $A^\delta$. As an application, we deduce the inverse formula for an automorphism of the $n$'th Weyl algebra with polynomial coefficients (Theorem \ref{i8Nov05}). Theorem \ref{24Dec05} describes algebras $A$ that admit a set of commuting locally nilpotent derivations with left localizable kernels.
As an application of Theorem \ref{24Dec05}, Corollary \ref{1c24Dec05} shows how to find explicitly the integral closure $\widetilde{K}$ of the field $K$ in the algebra $A$. Theorem \ref{26Dec05} gives a construction of simple algebras coming from a set of commuting locally nilpotent derivations. Let $A=A_n\otimes P_m$ be the $n$'th Weyl algebra with polynomial coefficients $P_m$ and $\partial _1:=\frac{\partial }{\partial x_1},\ldots , \partial _s:=\frac{\partial }{\partial x_s}\in {\rm Der }_K(A)$, $s:= 2n+m$, be the formal partial derivatives of the algebra $A$; they form a set of commuting locally nilpotent derivations of the algebra $A$. Theorem \ref{16Dec05} establishes a natural isomorphism between the algebra ${\rm End}_K(A)$ and the algebra $A[[\partial _1, \ldots , \partial _s]]$, and Theorem \ref{15Dec05} gives a formula (a sort of `noncommutative' Taylor formula but for linear maps rather than for series or polynomials) that represents any $K$-linear map $a:A\rightarrow A$ as a formal series $a=\sum_{\alpha \in \mathbb{N}^s} a_\alpha \partial ^\alpha$, $a_\alpha \in A$. In particular, for any $\sigma\in {\rm Aut}_K(K[x_1, \ldots , x_m])$, $\sigma = \sum_{\alpha \in \mathbb{N}^m} \frac{\prod_{i=1}^m(\sigma (x_i)-x_i)^{\alpha_i}}{\alpha !}\partial _1^{\alpha_1}\cdots \partial _m^{\alpha_m}$ (Theorem \ref{s15Dec05}). \section{Generators and defining relations for the ring of invariants of commuting locally nilpotent derivations} \label{2GenCLND} Let $A$ be an algebra over a field $K$ and let $\delta $ be a $K$-derivation of the algebra $A$. The kernel $A^\delta:= {\rm ker } \, \delta $ of $\delta $ is a subalgebra of $A$, the so-called {\em algebra of invariants} (or {\em constants}) of $\delta $; the union of the vector spaces $N:=N(\delta ,A)=\cup_{i\geq 0}\, N_i$ is a positively {\em filtered} algebra ($N_iN_j\subseteq N_{i+j}$ for all $i,j\geq 0$) where $N_i:= {\rm ker } (\delta^{i+1})$.
Clearly, $N_0= A^\delta$ and $N:=\{ a\in A \, | \ \delta^n (a)=0$ for some natural $n\}$. A $K$-derivation $\delta $ of the algebra $A$ is a {\em locally nilpotent } derivation if for each element $a\in A$ there exists a natural number $n=n(a)$ such that $\delta^n(a)=0$. A $K$-derivation $\delta $ is locally nilpotent iff $A=N(\delta , A)$. Given a ring $R$ and a derivation $d$ of $R$, the {\em Ore extension} $R[x;d]$ of $R$ is the ring freely generated over $R$ by $x$ subject to the defining relations $xr=rx+d(r)$ for all $r\in R$. $R[x;d]=\oplus_{i\geq 0}Rx^i=\oplus_{i\geq 0}x^iR$ is a left and right free $R$-module. Given $r\in R$, the derivation $( {\rm ad } \, r)(s):=[r,s]=rs-sr$ of $R$ is called an {\em inner} derivation of $R$. \begin{lemma}\label{dx=1} \cite{inform'05} Let $A$ be an algebra over a field $K$ of characteristic zero and $\delta $ be a $K$-derivation of $A$ such that $\delta (x)=1$ for some $x\in A$. Then $N(\delta ,A)=A^\delta [x; d]$ is the Ore extension with coefficients from the algebra $A^\delta$, and the derivation $d$ of the algebra $ A^\delta$ is the restriction of the inner derivation $ {\rm ad } \, x $ of the algebra $A$ to its subalgebra $A^\delta$. For each $n\geq 0$, $N_n=\oplus_{i=0}^n\, A^\delta x^i=\oplus_{i=0}^n\, x^iA^\delta$. \end{lemma} When the algebra $A$ is commutative the result above is old and well-known. \begin{theorem}\label{8Nov05} \cite{inform'05} Let $A$ be an algebra over a field $K$ of characteristic zero, $\delta $ be a locally nilpotent $K$-derivation of the algebra $A$ such that $\delta (x)=1$ for some $x\in A$. Then the $K$-linear map $\phi :=\sum_{i\geq 0} (-1)^i\frac{x^i}{i!}\delta^i :A\rightarrow A$ (resp. $\psi :=\sum_{i\geq 0} (-1)^i\delta^i (\cdot )\frac{x^i}{i!} :A\rightarrow A$) satisfies the following properties: \begin{enumerate} \item $\phi$ (resp. $\psi $) is a homomorphism of right (resp. left) $A^\delta$-modules. \item $\phi$ (resp.
$\psi $) is a projection onto the algebra $A^\delta$: \begin{eqnarray*} \phi : & A=A^\delta \oplus xA\rightarrow A^\delta \oplus xA, \;\; a+xb\mapsto a, \;\; {\rm where}\;\; a\in A^\delta, \; b\in A, \\ \psi : & A=A^\delta \oplus Ax\rightarrow A^\delta \oplus Ax, \;\; a+bx\mapsto a, \;\; {\rm where}\;\; a\in A^\delta, \; b\in A. \end{eqnarray*} In particular, ${\rm im} (\phi )={\rm im} (\psi )= A^\delta$ and $\phi (y)=y=\psi (y)$ for all $y\in A^\delta$. \item $\phi (x^i)=\psi (x^i)=0$, $i\geq 1$. \item $\phi$ and $\psi$ are algebra homomorphisms provided $x\in Z(A)$, the centre of the algebra $A$. \end{enumerate} \end{theorem} The following notation will remain {\em fixed} till the end of this section (if it is not stated otherwise): $A$ is an algebra over a field $K$ of characteristic zero, $\delta_1, \ldots , \delta_s$ are {\em commuting locally nilpotent} $K$-derivations of $A$, $A^\delta := \cap_{i=1}^s A^{\delta_i}$ is the {\em algebra of invariants} for the set of derivations $\delta_1, \ldots , \delta_s$ where $A^{\delta_i}:= {\rm ker } (\delta_i)$. The algebra $A$ is equipped with the filtration $\{ N_i\}_{i\geq 0}$ ($N_iN_j\subseteq N_{i+j}$, for $i,j\geq 0$) where $N_i:= \{ a\in A\, | \, \delta^\alpha (a)=0$ for all $\alpha =(\alpha_i) \in \mathbb{N}^s $ with $|\alpha |:= \alpha_1+\cdots +\alpha_s>i\}$, where $\delta^\alpha := \delta_1^{\alpha_1}\cdots \delta_s^{\alpha_s}$. $A=\cup_{i\geq 0}N_i$, $N_0:= A^\delta \subset N_1\subset \cdots$. For $0\neq a\in A$, the unique number $i$ such that $a\in N_i\backslash N_{i-1}$ is called the {\em order} of $a$, denoted ${\rm ord} (a)$. Consider the associated graded algebra ${\rm gr} (A):= \oplus_{i\geq 0}N_i/N_{i-1}$ ($N_{-1}:=0$). The next theorem is a crucial step in many results that follow. \begin{theorem}\label{18Dec05} Let $A$ be an arbitrary algebra over a field $K$ of characteristic zero. The following statements are equivalent.
\begin{enumerate} \item There exist commuting locally nilpotent $K$-derivations $\delta_1, \ldots , \delta_s$ of the algebra $A$ and elements $x_1, \ldots, x_s\in A$ satisfying $\delta_i(x_j)=\delta_{ij}$, the Kronecker delta. \item The algebra $A$ is an iterated Ore extension $A= B[x_1; d_1]\cdots [ x_s; d_s]$ such that $d_i (B)\subseteq B$ and $d_i(x_j)\in B$ for all $1\leq i,j\leq s$. \end{enumerate} If, say, the first condition holds, then $A=A^\delta [x_1; d_1] \cdots [ x_s; d_s]$ is an iterated Ore extension of the ring of invariants $A^\delta :=\cap_{i=1}^s A^{\delta_i}$ such that $d_i := {\rm ad } (x_i)$, $[x_i, A^\delta ] \subseteq A^\delta$, and $[x_i, x_j]\in A^\delta$ for all $i,j$. In particular, $A=\oplus_{\alpha \in \mathbb{N}^s }x^\alpha A^\delta = \oplus_{\alpha \in \mathbb{N}^s } A^\delta x^\alpha$ where $x^\alpha := x_1^{\alpha_1}\cdots x_s^{\alpha_s}$, and $A=\cup_{i\geq 0} N_i$ where $N_i=\oplus_{|\alpha |\leq i}x^\alpha A^\delta = \oplus_{|\alpha |\leq i} A^\delta x^\alpha$ for $i\geq 0$. \end{theorem} {\it Proof}. $(1\Rightarrow 2)$ Applying Lemma \ref{dx=1} step by step we have the result ($A=\oplus_{k\geq 0}A^{\delta_s}x_s^k$, if $i<s$ then $x_i=\sum \lambda_{ij}x_s^j$ for some $\lambda_{ij}\in A^{\delta_s}$; now $0=\delta_s (x_i)=\sum j\lambda_{ij}x_s^{j-1}$ implies $x_i\in A^{\delta_s}$): \begin{equation}\label{AAds} A=A^{\delta_s}[x_s; d_s]=(A^{\delta_{s-1}}\cap A^{\delta_s}) [x_{s-1}; d_{s-1}] [x_s; d_s]=\cdots =A^\delta [x_1; d_1]\cdots [ x_s; d_s], \end{equation} where $d_i:= {\rm ad } (x_i)$. For all $i,j,k$, $ \delta_k ([x_i, x_j])=\delta_{ki}[1,x_j]+\delta_{kj}[x_i, 1]=0$ and $\delta_k([x_i, A^\delta ])=\delta_{ki}[1,A^\delta ]=0$, hence all $[x_i, x_j]\in A^\delta$ and $[x_i, A^\delta ]\subseteq A^\delta$. $(2\Rightarrow 1)$ Given an algebra $A$ as in the second statement. 
The formal partial derivatives $\frac{\partial }{\partial x_1}, \ldots ,\frac{\partial }{\partial x_s}\in {\rm Der }_B(A)$ satisfy the condition of the first statement. $\Box $ It is obvious that the elements $x_1, \ldots , x_s$ are {\em not} (left and right) zero divisors in $A$. Next, we have many examples of derivations as in Theorem \ref{18Dec05}. {\it Example}. Let $F_n:=K\langle x_1, \ldots , x_n\rangle$ be a free algebra over the field $K$, $\partial _1:=\frac{\partial }{\partial x_1}, \ldots ,\partial _n:=\frac{\partial }{\partial x_n}\in {\rm Der }_K(F_n)$ be the formal partial derivatives and $I$ be an ideal of $F_n$ which is $\partial $-invariant (that is $\partial _i(I)\subseteq I$ for all $i$). The induced derivations $\delta_1, \ldots , \delta_n\in {\rm Der }_K(A)$ where $A:=F_n/I$, $\delta_i (f+I)=\partial _i(f)+I$, $f\in F_n$, are commuting locally nilpotent derivations of the algebra $A$ and $\delta_i (\overline{x}_j)=\delta_{ij}$ for all $1\leq i,j\leq n$ where $\overline{x}_i:=x_i+I$. If the ideal $I$ is generated by the commutators $[x_i, x_j]$, $1\leq i,j\leq n$, we have a polynomial algebra $K[\overline{x}_1, \ldots , \overline{x}_n ]$ and the derivations $\delta_1:=\frac{\partial }{\partial \overline{x}_1}, \ldots ,\delta_n:=\frac{\partial }{\partial \overline{x}_n}$. \begin{corollary}\label{f26Dec05} Let $A$, $\delta_1, \ldots , \delta_s$ and $x_1, \ldots , x_s$ be as in Theorem \ref{18Dec05}, and $\mathfrak{m} $ be a (two sided) ideal of the algebra $A^\delta$ which is $ {\rm ad } (x_i)$-invariant for all $i=1,\ldots , s$, and $(\mathfrak{m} ):= A\mathfrak{m} A$ be the ideal of the algebra $A$ generated by $\mathfrak{m}$, and $A\rightarrow \overline{A} := A/(\mathfrak{m} )$, $a\mapsto \overline{a}:= a+(\mathfrak{m} )$.
Then $\overline{A} , \overline{\delta}_1, \ldots , \overline{\delta}_s$ and $\overline{x}_1, \ldots , \overline{x}_s$ satisfy the conditions of Theorem \ref{18Dec05} (where $\overline{\delta}_i\in {\rm Der }_K(\overline{A} )$, $\overline{a}\mapsto \overline{\delta_i(a)}$), $\overline{A}^{\overline{\delta} }=\overline{A^\delta }$, and $N'_i=\overline{N}_i=\oplus_{|\alpha |\leq i}\overline{A^\delta }\overline{x}^\alpha =\oplus_{|\alpha |\leq i}\overline{x}^\alpha \overline{A^\delta }$, for $i\geq 0$, where $\{ N_i\}$ and $\{ N_i'\}$ are the filtrations of the algebras $A$ and $\overline{A} $ respectively. \end{corollary} {\it Proof}. The derivations $\overline{\delta}_1, \ldots , \overline{\delta}_s$ of the algebra $\overline{A}$ are commuting locally nilpotent derivations such that $\overline{\delta}_i(\overline{x}_j)=\delta_{ij}$, hence they satisfy the conditions of Theorem \ref{18Dec05}. In particular, $\overline{A} =\overline{A}^{\overline{\delta}} [\overline{x}_1; \overline{d}_1]\cdots [\overline{x}_s; \overline{d}_s]=\oplus_{\alpha \in \mathbb{N}^s}\overline{A}^{\overline{\delta}} \overline{x}^\alpha =\oplus_{\alpha \in \mathbb{N}^s} \overline{x}^\alpha \overline{A}^{\overline{\delta}}$ and $N_i'=\oplus_{|\alpha |\leq i}\overline{A}^{\overline{\delta}} \overline{x}^\alpha$, $i\geq 0$. On the other hand, $A=\oplus_{\alpha \in \mathbb{N}^s}A^\delta x^\alpha$ and $(\mathfrak{m} )=\oplus_{\alpha \in \mathbb{N}^s}\mathfrak{m} x^\alpha$, hence $\overline{A} =A/(\mathfrak{m} )=\oplus_{\alpha \in \mathbb{N}^s}(A^\delta /\mathfrak{m} )\overline{x}^\alpha$. Comparing the two direct sums for $\overline{A}$ we must have $\overline{A}^{\overline{\delta}} =\overline{A^\delta }$, and $N'_i=\overline{N}_i=\oplus_{|\alpha |\leq i}\overline{A^\delta }\overline{x}^\alpha =\oplus_{|\alpha |\leq i}\overline{x}^\alpha \overline{A^\delta }$, for $i\geq 0$. $\Box $ The next result is a criterion for when the ring of invariants $A^\delta$ is left/right Noetherian.
\begin{corollary}\label{c23Dec05} Let $A$, $\delta_1, \ldots , \delta_s$ and $x_1, \ldots , x_s$ be as in Theorem \ref{18Dec05}. Then the following statements are equivalent: \begin{enumerate} \item The algebra $A$ is left (resp. right) Noetherian. \item The algebra $A^\delta$ is left (resp. right) Noetherian. \item The algebra ${\rm gr}(A) $ is left (resp. right) Noetherian. \end{enumerate} \end{corollary} {\it Proof}. $(1\Leftrightarrow 2)$ It is a well-known fact that if a coefficient ring is left (resp. right) Noetherian then so is an iterated Ore extension, and vice versa (use iteratively an analogue of the Hilbert Basis Theorem for Ore extensions). Now, the first two statements are equivalent by Theorem \ref{18Dec05}. $(2\Leftrightarrow 3)$ The associated graded algebra ${\rm gr} (A)\simeq A^\delta [ \overline{x}_1; \overline{d}_1]\cdots [ \overline{x}_s; \overline{d}_s]$ is an iterated Ore extension where $\overline{x}_i:= x_i+N_0\in N_1/N_0$, $\overline{d}_i= {\rm ad } (\overline{x}_i)$, and $[\overline{x}_i, \overline{x}_j]=0$ for all $i,j$. Now, repeat the above argument. $\Box $ {\it Remark}. Using the previous proof one can write down several similar statements for properties that are `stable' under the operations of taking iterated Ore extension and ${\rm gr} (\cdot )$ ({\it e.g.}, `being a domain', etc.). For the property of `being a finitely generated algebra', in general, it is not true that `$A$ is finitely generated $\Rightarrow $ $A^\delta $ is finitely generated' (see an example after Theorem \ref{15Nov05}), but for commutative algebras it is the case (Corollary \ref{1c18Dec05}).
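The iterated Ore extensions of Theorem \ref{18Dec05} can be made concrete in the smallest case. The following Python sketch (an illustration only, not part of the paper's formalism) multiplies elements of $R[x;d]$ with $R=\mathbb{Q}[t]$ and $d=d/dt$ -- so $R[x;d]$ is the first Weyl algebra -- using the rule $x^ib=\sum_k \binom{i}{k}\, d^k(b)\, x^{i-k}$, which follows by iterating the defining relation $xr=rx+d(r)$:

```python
from math import comb

# Elements of the Ore extension R[x; d] with R = Q[t] and d = d/dt,
# stored as {x-degree: [t-coefficients]}.  The defining relation
# x*r = r*x + d(r) iterates to  x^i * b = sum_k C(i,k) * d^k(b) * x^(i-k).

def d(p):                                  # d/dt on a t-polynomial
    return [i * c for i, c in enumerate(p)][1:] or [0]

def padd(p, q):                            # add t-polynomials
    n = max(len(p), len(q))
    p, q = p + [0] * (n - len(p)), q + [0] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

def pmul(p, q):                            # multiply t-polynomials
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def mul(u, v):                             # multiply in R[x; d]
    w = {}
    for i, a in u.items():
        for j, b in v.items():
            dk = b[:]                      # d^k(b), starting at k = 0
            for k in range(i + 1):
                term = pmul(a, [comb(i, k) * c for c in dk])
                deg = i - k + j
                w[deg] = padd(w.get(deg, [0]), term)
                dk = d(dk)
    return {deg: p for deg, p in w.items() if any(p)}

x, t = {1: [1]}, {0: [0, 1]}               # the generators x and t
```

Here `mul(x, t)` returns `{1: [0, 1], 0: [1]}`, which encodes $tx+1$, i.e. the relation $xt=tx+1$.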
Corollary \ref{22Dec05} provides natural examples of commuting locally nilpotent derivations (on non-commutative algebras); it also shows that the order filtration $\{ {\cal D} (R)_i\}$ on the ring of differential operators is, in fact, the filtration $\{ N_i\}$ for certain commuting locally nilpotent derivations of ${\cal D} (R)$ (this fact may simplify arguments in finding {\em explicitly} the ring ${\cal D} (R)$ of differential operators in certain cases, see the example below). Let $R$ be a commutative finitely generated $K$-algebra and ${\cal D} (R)=\cup_{i\geq 0}{\cal D} (R)_i$ be the ring of differential operators on the ring $R$ equipped with the {\em order filtration} $\{ {\cal D} (R)_i\}$: ${\cal D} (R)$ is a $K$-subalgebra of the algebra ${\rm End}_K(R)$ where ${\cal D} (R)_0:= {\rm End}_R(R)\simeq R$ $((x\mapsto rx)\leftrightarrow r)$, and $$ {\cal D} (R)_i:= \{ u \in {\rm End}_K(R): \;\; [u,r]\in {\cal D} (R)_{i-1}\;\; {\rm for \;\; all}\;\; r\in R\}.$$ \begin{corollary}\label{22Dec05} Let a domain $R$ be a commutative finitely generated $K$-algebra of Krull dimension $n>0$, ${\cal D} (R)=\cup_{i\geq 0}{\cal D} (R)_i$ be the ring of differential operators on $R$, $x_1, \ldots , x_n$ be algebraically independent (over $K$) elements of $R$. Then \begin{enumerate} \item $\delta_1:= {\rm ad } (x_1), \ldots , \delta_n:= {\rm ad } (x_n)$ is a set of commuting locally nilpotent derivations of the algebra ${\cal D} (R)$. \item The order filtration $\{ {\cal D} (R)_i\}$ coincides with the filtration $\{ N_i\}$ associated with the derivations $\delta_1, \ldots , \delta_n$, i.e. ${\cal D} (R)_i=N_i$ for all $i\geq 0$. In particular, ${\cal D} (R)^\delta =R$. \end{enumerate} \end{corollary} {\it Proof}. The first statement is obvious. To prove the second statement, note that ${\cal D} (R)_i\subseteq N_i$ for all $i\geq 0$, which follows directly from the definitions of both filtrations.
Let $P_n:= K[x_1, \ldots , x_n]$ and let $Q_n=K(x_1, \ldots , x_n)$ be its field of fractions. The field $Q:={\rm Frac} (R)$ of fractions of $R$ is a {\em finite separable} field extension of $Q_n$. It is well-known that one can pick a nonzero element, say $r\in R$, such that the localization $R_r:= R[r^{-1}]$ of $R$ at the powers of the element $r$ is a {\em regular} domain, $\partial _i (R_r)\subseteq R_r$ for all $i$, and $ {\rm Der }_K(R_r)=\oplus_{i=1}^nR_r\partial _i$ where $\partial _i:= \frac{\partial }{\partial x_i}$ are the partial derivatives of $Q_n$ uniquely extended to derivations of the field $Q$. Since the algebra $R_r$ is regular, the ring of differential operators ${\cal D} (R_r)$ on the algebra $R_r$ is generated by the algebra $R_r$ and $ {\rm Der }_K(R_r)$, hence ${\cal D} (R_r)=\oplus_{\alpha \in \mathbb{N}^n} R_r\partial ^\alpha$ and ${\cal D} (R_r)_i=\oplus_{|\alpha |\leq i} R_r\partial ^\alpha$, $i\geq 0$. Comparing these equalities with similar ones from Theorem \ref{18Dec05}: ${\cal D} (R_r)=\oplus_{\alpha \in \mathbb{N}^n} {\cal D} (R_r)^\delta \partial ^\alpha =\cup_{i\geq 0}N_i(R_r)$ and $N_i(R_r)=\oplus_{|\alpha |\leq i } {\cal D} (R_r)^\delta \partial ^\alpha$, $i\geq 0$, and taking into account the inclusions ${\cal D} (R_r)_i\subseteq N_i(R_r)$ for $i\geq 0$, we must have ${\cal D} (R_r)^\delta = R_r$ and $N_i (R_r)= {\cal D} (R_r)_i$, $i\geq 0$. Since ${\cal D} (R)\subseteq {\cal D} (R_r)$ and ${\cal D} (R)_i= {\cal D} (R)\cap {\cal D} (R_r)_i$ for all $i\geq 0$, and $N_i={\cal D} (R)\cap N_i(R_r)$, $i\geq 0$, we conclude that $N_i={\cal D} (R)_i$ for all $i\geq 0$. $\Box $ {\it Example}.
As an application of Theorem \ref{18Dec05} and Corollary \ref{22Dec05}, let us give a short proof of the well-known fact that {\em the ring of differential operators ${\cal D} (P_n)$ on a polynomial algebra $P_n:=K[x_1, \ldots , x_n]$ (the so-called Weyl algebra) is generated by $P_n$ and the partial derivatives} $\partial _1, \ldots , \partial _n$ of $P_n$: the inner derivations $\delta_1:= - {\rm ad } (x_1) , \ldots , \delta_n:=- {\rm ad } (x_n)$ of the algebra $E:={\rm End}_K(P_n)$ commute. Let $N$ be the {\em largest} subalgebra of $E$ on which all the derivations $\delta_i$ act locally nilpotently (take the sum of all the subalgebras of $E$ with the last property). Clearly, ${\cal D} (P_n)\subseteq N$ and $N^\delta =E^\delta = {\rm End}_{P_n}(P_n)\simeq P_n$. By Theorem \ref{18Dec05}, $N=P_n\langle \partial _1, \ldots , \partial _n\rangle \subseteq {\cal D} (P_n)$, hence $N={\cal D} (P_n)$, and, by Corollary \ref{22Dec05}, ${\cal D} (P_n)_i=N_i=\oplus_{|\alpha | \leq i}P_n\partial ^\alpha$ for all $i\geq 0$. \begin{corollary}\label{1c18Dec05} Let $A$, $\delta_1, \ldots , \delta_s$ and $x_1, \ldots , x_s$ be as in Theorem \ref{18Dec05}. Suppose that the elements $x_1, \ldots , x_s$ are central. Then the following statements are equivalent: \begin{enumerate} \item The algebra $A$ is finitely generated. \item The algebra $A^\delta$ is finitely generated. \item The algebra ${\rm gr}(A) $ is finitely generated. \end{enumerate} \end{corollary} {\it Proof}. Since the elements $x_1, \ldots , x_s$ are central, by Theorem \ref{18Dec05}, $A\simeq A^\delta [x_1, \ldots , x_s]\simeq {\rm gr}(A)$ and $A^\delta \simeq A/(x_1, \ldots , x_s)$. Now, it is obvious that the statements are equivalent. $\Box $ {\em Till the end of this section} we will {\em assume} that for the commuting locally nilpotent derivations $\delta_1, \ldots , \delta_s$ of $A$ there exist elements $x_1, \ldots , x_s\in A$ such that $\delta_i(x_j)=\delta_{ij}$, the Kronecker delta.
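In the one-variable case of the example above, the local nilpotency of $ {\rm ad } (x)$ on differential operators can be seen by a direct computation. The following Python sketch (an illustration under the stated identifications, not from the paper) represents operators on $\mathbb{Q}[x]$ by their action on coefficient lists and checks that $( {\rm ad }\, x)(\frac{d}{dx})=-{\rm id}$ and $( {\rm ad }\, x)^2(\frac{d}{dx})=0$:

```python
# Differential operators on Q[x] as maps on coefficient lists
# (p = [c0, c1, ...] encodes c0 + c1*x + ...); ad(x)(U) = x∘U - U∘x.

def D(p):                                  # the operator d/dx
    return [i * c for i, c in enumerate(p)][1:] or [0]

def X(p):                                  # multiplication by x
    return [0] + p

def sub(p, q):                             # pointwise difference, padded
    n = max(len(p), len(q))
    p, q = p + [0] * (n - len(p)), q + [0] * (n - len(q))
    return [a - b for a, b in zip(p, q)]

def ad_x(u):                               # the inner derivation ad(x)
    return lambda p: sub(X(u(p)), u(X(p)))

p = [3, 0, 5, 1]                           # 3 + 5x^2 + x^3
once = ad_x(D)                             # [x, d/dx]  acts as -id
twice = ad_x(once)                         # [x, [x, d/dx]]  acts as 0
```

So $\frac{d}{dx}$ is killed by two applications of $ {\rm ad } (x)$, placing it in $N_1$ of the order filtration, in agreement with Corollary \ref{22Dec05}.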
For each $i=1, \ldots , s$, consider the maps from Theorem \ref{8Nov05}, $$ \phi_i:=\sum_{k\geq 0}(-1)^k\frac{x_i^k}{k!}\delta_i^k, \;\; \psi_i:=\sum_{k\geq 0}(-1)^k\delta_i^k (\cdot ) \frac{x_i^k}{k!}\, : \, A\rightarrow A.$$ The maps $\phi_i$ and $\psi_i$ are homomorphisms of {\em right} and {\em left} $A^\delta$-modules respectively. The maps \begin{equation}\label{rphis} \phi:=\phi_s\phi_{s-1}\cdots \phi_1:A\rightarrow A, \;\; a=\sum_{\alpha \in \mathbb{N}^s} x^\alpha\lambda_\alpha\mapsto \phi (a)=\lambda_0, \end{equation} \begin{equation}\label{lphis} \psi:=\psi_1\psi_2\cdots \psi_s:A\rightarrow A, \;\; a=\sum_{\alpha \in \mathbb{N}^s} \lambda_\alpha x^\alpha\mapsto \psi (a)=\lambda_0, \end{equation} are {\em projections} onto the subalgebra $A^\delta$ of $A=A^\delta \oplus (\oplus_{0\neq \alpha \in \mathbb{N}^s }x^\alpha A^\delta )$ and $A= A^\delta \oplus (\oplus_{0\neq \alpha \in \mathbb{N}^s } A^\delta x^\alpha )$ respectively; they are homomorphisms of right and left $A^\delta $-modules respectively. \begin{theorem}\label{a18Dec05} Let $A$ be as in Theorem \ref{18Dec05}. For any $a\in A$, $$ a=\sum_{\alpha \in \mathbb{N}^s}x^\alpha \phi (\frac{\delta^\alpha}{\alpha !} a)=\sum_{\alpha \in \mathbb{N}^s}\psi (\frac{\delta^\alpha}{\alpha !} a)x^\alpha.$$ \end{theorem} {\it Proof}. If $a=\sum x^\alpha \lambda_\alpha $, $\lambda_\alpha \in A^\delta$, then, by (\ref{rphis}), $\phi (\frac{\delta^\alpha}{\alpha !} a)=\lambda_\alpha$. Similarly, if $a=\sum \lambda_\alpha x^\alpha $, $\lambda_\alpha \in A^\delta$, then, by (\ref{lphis}), $\psi (\frac{\delta^\alpha}{\alpha !} a)=\lambda_\alpha$. $\Box $ So, the identity map ${\rm id} : A \rightarrow A$ has nice presentations \begin{equation}\label{ida} {\rm id}(\cdot ) = \sum_{\alpha \in \mathbb{N}^s}x^\alpha \phi (\frac{\delta^\alpha}{\alpha !} (\cdot ))= \sum_{\alpha \in \mathbb{N}^s}\psi (\frac{\delta^\alpha}{\alpha !} (\cdot ))x^\alpha .
\end{equation} Clearly, $\phi (N_i)\subseteq N_i$ and $\psi (N_i)\subseteq N_i$ for all $i\geq 0$. Consider the associated graded algebra ${\rm gr} (A):= \oplus_{i\geq 0}N_i/N_{i-1}$ ($N_{-1}:=0$). So, let ${\rm gr}(\phi) , {\rm gr}(\psi) : {\rm gr}(A) \rightarrow {\rm gr}(A)$ be the induced maps (for $\overline{a} = a+N_{i-1}\in N_i/ N_{i-1}$, ${\rm gr} (\phi ) (\overline{a} )=\phi (a)+N_{i-1}$ and ${\rm gr} (\psi ) (\overline{a} )=\psi (a)+N_{i-1}$). Let $D$ be the free multiplicative monoid generated by the inner derivations $ {\rm ad } (x_1), \ldots , {\rm ad } (x_s)$ of the algebra $A$. There is an obvious action of $D$ on the algebra $A$ (and an obvious linear map $D\rightarrow {\rm End}_K(A)$). Let $\{ y_i\, | \, i\in I\}$ be a set of algebra generators for $A$. For each $d\in D$ and $i\in I$, let $z_{d , \alpha , i} := d\phi (\frac{\delta^\alpha }{\alpha!} y_i)$ where $\alpha !:= \alpha_1! \cdots \alpha_s!$ and $\alpha_j \leq {\rm ord} (y_i)$ for all $j=1, \ldots , s$. Let ${\rm id}_A$ be the identity map of $A$. For each $d'\in D^*:= D\backslash \{ {\rm id}_A \}$ and $j=1, \ldots , s$, let $x_{d',j}:= d' (x_j)$. By Theorem \ref{18Dec05}, all the elements $z_{d , \alpha , i} , x_{d',j}\in A^\delta$. For each $z_{d , \alpha , i}$ and each $x_{d',j}$ we attach (noncommutative) variables $t_{d , \alpha , i}$ and $X_{d',j}$ respectively. Let ${\cal F} := K\langle t_{d , \alpha , i} , X_{d',j} \, |\, d\in D, i\in I, \alpha , d'\in D^*, 1\leq j \leq s\rangle$ be a free associative algebra and $f(t_{d , \alpha , i} , X_{d',j})=f( \{ t_{d , \alpha , i} , X_{d',j} \, |\, d\in D, i\in I, \alpha , d'\in D^*, 1\leq j \leq s\}) $ be a typical element of ${\cal F}$ (the symbols in the brackets, i.e. $t_{d , \alpha , i} , X_{d',j}$, stand for {\em all} the non-commutative arguments of the element $f$).
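The decomposition $a=\sum_\alpha x^\alpha \phi (\frac{\delta^\alpha}{\alpha !}a)$ of Theorem \ref{a18Dec05} can be tested in the simplest commutative case $A=\mathbb{Q}[x]$, $\delta =\frac{d}{dx}$, where $A^\delta =\mathbb{Q}$, $\phi$ is evaluation at the origin, and the formula is the classical Taylor expansion. A Python sketch (illustrative only, with polynomials as coefficient lists):

```python
from fractions import Fraction as F
from math import factorial

def D(p):                                  # delta = d/dx on coefficient lists
    return [i * c for i, c in enumerate(p)][1:] or [0]

def phi(p):
    # phi = sum_i (-1)^i x^i/i! * delta^i  (a finite sum on any polynomial)
    out = [F(0)] * len(p)
    q, i = [F(c) for c in p], 0
    while any(q):
        s = F((-1) ** i, factorial(i))
        for j, c in enumerate(q):
            out[i + j] += s * c
        q, i = D(q), i + 1
    return out

def taylor(p):
    # reconstruct p from the coefficients phi(delta^k p / k!), k = 0, 1, ...
    q, k, out = [F(c) for c in p], 0, []
    while any(q):
        out.append(phi([c / factorial(k) for c in q])[0])
        q, k = D(q), k + 1
    return out

p = [2, 3, 0, 4]                           # 2 + 3x + 4x^3
```

Here `phi(p)` returns the constant polynomial $p(0)=2$ (the projection onto $A^\delta$), and `taylor(p)` recovers the original coefficient list.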
\begin{theorem}\label{15Nov05} The algebra $A^\delta$ is generated by all the elements $\{ z_{d , \alpha , i} , x_{d', j}\}$ that satisfy the defining relations $ {\cal R } =\{ f(t_{d , \alpha , i} , X_{d', j} )\in {\cal F} \, | \, f(z_{d , \alpha , i} , x_{d', j} )\in \sum_{i=1}^s x_iA\}$. Similarly, the algebra $A^\delta$ is generated by all the elements $\{ z_{d , \alpha , i}':= d\psi (\frac{\delta^\alpha}{\alpha!}y_i), x_{d', j}\}$ that satisfy the defining relations $ {\cal R }' =\{ f(t_{d , \alpha , i} , X_{d', j} )\in {\cal F} \, | \, f(z_{d , \alpha , i}' , x_{d', j} )\in \sum_{i=1}^s Ax_i\}$. \end{theorem} {\it Proof}. Recall that $A=\oplus_{\alpha \in \mathbb{N}^s } x^\alpha A^\delta$, and so each $y_i$ is a unique sum $y_i= \sum x^\alpha y_{\alpha , i}$ where $y_{\alpha , i}=\phi (\frac{\delta^\alpha}{\alpha !}y_i)\in A^\delta$ (Theorem \ref{a18Dec05}) and $\alpha_j \leq {\rm ord} (y_i)$ for all $j=1, \ldots , s$ (otherwise, $y_{\alpha , i}=0$). The set $\{ y_i\, | \, i\in I\}$ is a set of $K$-algebra generators for $A$, hence so is the set $\{ y_{\alpha , i}, x_1, \ldots , x_s\, | \, i\in I, \alpha \}$ (with obvious restrictions on $\alpha \in \mathbb{N}^s$ for each $y_{\alpha , i}$, that is $\alpha_j \leq {\rm ord} (y_i)$ for all $j=1, \ldots , s$). Since all the $y_{\alpha , i}\in A^\delta$, $[x_j, x_k]\in A^\delta$, and $[x_j, A^\delta ]\subseteq A^\delta $ for all $j,k$, any element $a\in A$ can be written as a sum $a=\sum x^\alpha a_\alpha$ where each coefficient $a_\alpha $ belongs to the subalgebra, say ${\cal A}$, of $A$ generated by all the elements $\{ z_{d , \alpha , i}, x_{d', j}\}$ in the theorem. It follows that $A^\delta =\phi (A)\subseteq {\cal A}$; the opposite inclusion, ${\cal A} \subseteq A^\delta$, is obvious. Therefore, $A^\delta ={\cal A}$.
Since all the elements $z_{d , \alpha , i} , x_{d', j}\in A^\delta$ and the map $\phi $ is a projection onto the ring of invariants $A^\delta$, an element $f(t_{d , \alpha , i} , X_{d', j})\in {\cal F}$ is a relation for the set of generators $\{ z_{d , \alpha , i}, x_{d', j}\}$ of the algebra $A^\delta$, i.e. $f(z_{d , \alpha , i}, x_{d', j})=0$, iff $\phi (f(z_{d , \alpha , i}, x_{d', j}))=0$ iff $f(z_{d , \alpha , i}, x_{d', j})\in \sum_{j=1}^sx_jA$. To prove the remaining case, repeat the above arguments making obvious adjustments. $\Box $ \begin{corollary}\label{c15Nov05} Let $\{ a_j\, | \, j\in J\}$ be algebra generators for $A^\delta$ and $F_J=K\langle Y_j\, | \, j\in J\rangle $ be a free algebra. Then $ {\cal R } =\{ f(Y_j)\in F_J\, | \, f(a_j)\in \sum_{i=1}^s x_iA\}$ (resp. $ {\cal R }' =\{ f(Y_j)\in F_J\, | \, f(a_j)\in \sum_{i=1}^s Ax_i\}$) are defining relations for the algebra $A^\delta$. \end{corollary} {\it Proof}. Repeat the arguments as in the proof of Theorem \ref{15Nov05}. $\Box $ {\it Remark}. The proof of Theorem \ref{15Nov05} shows that the choice of generators there might not be the most economical one if the algebra $A$ is far from being free (see also Corollary \ref{ca18Dec05} and Theorem \ref{19Dec05}).
The proof of Theorem \ref{15Nov05} shows that in order to find generators and defining relations for the algebra $A^\delta$ one should \begin{enumerate} \item take algebra generators $\{ y_i\}_{i\in I}$ for the algebra $A$, \item find the coefficients $y_{\alpha , i}:= \phi (\frac{\delta^\alpha}{\alpha !}y_i)\in A^\delta$ of each element $y_i=\sum x^\alpha y_{\alpha , i}$, \item choose a basis, say $\{ b_j\, | \, j\in I'\}$, of the $K$-linear span of all the coefficients $\{ y_{\alpha , i}\}$, \item then the algebra $A^\delta$ is generated by the elements $\{ d (b_j), x_{d', i}\, | \, d\in D, d'\in D^*, j\in I', i=1, \ldots , s\}$, \item choose more economically (if possible) a set of algebra generators, say $\{ a_j\, | \, j\in J\}$, for $A^\delta$, \item Corollary \ref{c15Nov05} gives the defining relations. \end{enumerate} {\it Example}. Let $A=F_n=K\langle x_1, \ldots , x_n\rangle$ be a free associative algebra over $K$, $\delta_1:= \frac{\partial }{\partial x_1}, \ldots ,\delta_s:= \frac{\partial }{\partial x_s}\in {\rm Der }_K(F_n)$ be formal partial derivatives, $s\leq n$: $\delta_i(x_j)=\delta_{ij}$, $1\leq i\leq s$, $1\leq j\leq n$. Then $\phi (x_1)=\cdots = \phi (x_s)=0$ and $\phi (x_{s+1})=x_{s+1}, \ldots , \phi (x_n)=x_n$. By Theorem \ref{15Nov05}, $A^\delta = K\langle d (x_i), d'(x_j)\, | \, d\in D, d'\in D^*, i=s+1, \ldots , n; j=1, \ldots , s\rangle = K\langle x_i, d'(x_j)\, | \, d'\in D^*, i=s+1, \ldots , n; j=1, \ldots , s\rangle $, and, by Corollary \ref{c23Dec05}, for $n\geq 2$, the algebra $A^\delta$ is {\em not} left/right Noetherian since $F_n$ is not. In the special case when $s=n$, the algebra $A^\delta$ is a {\em free algebra in infinitely many variables}. More precisely, it is generated freely by the elements $\{ ( {\rm ad } \, x_1)^{\alpha_1}\cdots ( {\rm ad } \, x_n)^{\alpha_n}([x_i,x_j])\, | \, i<j, (\alpha_1, \ldots , \alpha_n)\in \mathbb{N}^n\}$ (Prop. 2, \cite{Ger98Arch}).
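The first of these free-algebra invariants are easy to verify mechanically. The following Python sketch (illustrative only) encodes elements of $F_2=\mathbb{Q}\langle x_1,x_2\rangle$ as dictionaries from words (tuples over $\{1,2\}$) to coefficients, implements the formal partial derivatives via the Leibniz rule, and checks that $[x_1,x_2]$ and $( {\rm ad }\, x_1)([x_1,x_2])$ lie in $A^\delta$:

```python
# Elements of the free algebra F_2 = Q<x1, x2>: {word-tuple: coefficient}.

def nmul(u, v):                            # concatenation product
    w = {}
    for a, ca in u.items():
        for b, cb in v.items():
            w[a + b] = w.get(a + b, 0) + ca * cb
    return {k: c for k, c in w.items() if c}

def nsub(u, v):                            # difference of elements
    w = dict(u)
    for k, c in v.items():
        w[k] = w.get(k, 0) - c
    return {k: c for k, c in w.items() if c}

def comm(u, v):                            # [u, v] = uv - vu
    return nsub(nmul(u, v), nmul(v, u))

def partial(i, u):
    # formal partial d/dx_i: by the Leibniz rule, delete one occurrence
    # of x_i from each word, in every position, and sum the results
    w = {}
    for word, c in u.items():
        for pos, letter in enumerate(word):
            if letter == i:
                key = word[:pos] + word[pos + 1:]
                w[key] = w.get(key, 0) + c
    return {k: c for k, c in w.items() if c}

x1, x2 = {(1,): 1}, {(2,): 1}
c12 = comm(x1, x2)                         # [x1, x2]
g = comm(x1, c12)                          # (ad x1)([x1, x2])
```

Both `partial(1, c12)` and `partial(2, c12)` come out empty (the zero element), and likewise for `g`, matching the generating set quoted from \cite{Ger98Arch}.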
Now, taking any ideal $\mathfrak{m} $ of the algebra $A^\delta$ which is {\em not} finitely generated as an $A^\delta$-bimodule and such that the algebra $A^\delta/ \mathfrak{m} $ is not left/right Noetherian, and using Corollary \ref{f26Dec05} one produces an example of an algebra $\overline{A}^{\overline{\delta}}$ which is {\em not} finitely generated, {\em not} left/right Noetherian, and {\em does not} satisfy finitely many defining relations. \begin{corollary}\label{ca18Dec05} If, in addition, $A$ is commutative, then the map $\phi : A \rightarrow A^\delta$ is an algebra epimorphism with $ {\rm ker } (\phi )=(x_1, \ldots , x_s)$. In particular, $A^\delta \simeq A /(x_1, \ldots , x_s)$. Alternatively, the algebra $A^\delta$ is generated by the elements $\{\phi (y_i)\, | \, i\in I\}$ that satisfy the defining relations $ {\cal R } = \{ f(t_i )\in K[t_i\, | \, i\in I] \, | \, f(\phi (y_i) )\in (x_1, \ldots , x_s)\}$. \end{corollary} {\it Proof}. Each of the maps $\phi_i=\psi_i$ is an algebra homomorphism, hence so is their product $\phi =\psi$. Now, the result follows from (\ref{rphis}) or (\ref{lphis}). $\Box $ {\it Example}. {\em The Weitzenb\"{o}ck derivation} $\delta = x_1\frac{\partial }{\partial x_2}+x_2\frac{\partial }{\partial x_3}+\cdots +x_{n-1}\frac{\partial }{\partial x_n}$ of the polynomial algebra $P_n:= K[x_1, \ldots , x_n]$ ($n\geq 3$) is locally nilpotent with $\delta (x_1)=0$. The derivation can be (uniquely) extended to a locally nilpotent derivation of the localization $P_n[x_1^{-1}]=K[x_1, x_1^{-1}][ x_2, \ldots , x_n]$ with $\delta (x)=1$ where $x:= \frac{x_2}{x_1}$.
By Corollary \ref{ca18Dec05}, the algebra of invariants $P_n[x_1^{-1}]^\delta$ is equal to $K[x_1, x_1^{-1}][ \phi (x_3), \ldots , \phi (x_n)]$, the polynomial algebra in $\phi (x_i)$, $i=3, \ldots , n$, with coefficients from $K[x_1, x_1^{-1}]$ (note that $\phi (x_2)=x_1\phi (x)=x_1\cdot 0=0$) where $$\phi (x_i)=\sum_{k=0}^{i-1} (-1)^k(\frac{x_2}{x_1})^k\frac{x_{i-k}}{k!}, \;\; i=3, \ldots , n.$$ Hence, $P_n^\delta = P_n\cap P_n[x_1^{-1}]^\delta =P_n\cap K[x_1, x_1^{-1}][ \phi (x_3), \ldots , \phi (x_n)]$. Since $\delta (\sum_{i=1}^n Kx_i)\subseteq \sum_{i=1}^n Kx_i$, by the Theorem of Weitzenb\"{o}ck, $P_n^\delta$ is a {\em finitely generated} algebra. It is an open problem to find an explicit set of algebra generators for it (this would yield an explicit description of all $SL_2$-invariants, which is another open problem; in fact, the two problems are equivalent).
\begin{theorem}\label{19Dec05}
If $[x_i, A^\delta ]=0$, $1\leq i \leq s$, then the map ${\rm gr}(\phi) : {\rm gr}(A) \rightarrow A^\delta$ is an algebra epimorphism with kernel generated by the elements $\overline{x}_1, \ldots , \overline{x}_s$ (where $\overline{x}_j:= x_j+A^\delta \in N_1/N_0$). In particular, $A^\delta \simeq {\rm gr}(A) /(\overline{x}_1, \ldots , \overline{x}_s)$. Alternatively, the algebra $A^\delta$ is generated by the elements $\{{\rm gr}(\phi) (y_i)\, | \, i\in I\}$ that satisfy the defining relations $ {\cal R } = \{ f(t_i )\in K\langle t_i\, | \, i\in I\rangle \, | \, f({\rm gr}(\phi) (y_i) )\in \sum_{j=1}^s \overline{x}_j{\rm gr}(A) \}$. \end{theorem}
{\it Proof}. Since $[x_i, A^\delta ]=0$ and $[x_i, x_j]\in A^\delta$ for all $1\leq i, j \leq s$, it follows from Theorem \ref{18Dec05} that ${\rm gr}(A) \simeq A^\delta [ \overline{x}_1, \ldots , \overline{x}_s]$ is a polynomial algebra over $A^\delta$ in $\overline{x}_i:= x_i+A^\delta$, $i=1, \ldots , s$.
The induced derivations $\overline{\delta}_1, \ldots , \overline{\delta}_s\in {\rm Der }_K ({\rm gr}(A) )$ of graded degree $-1$ are commuting locally nilpotent derivations of the algebra ${\rm gr}(A)$ (where $\overline{\delta}_i: N_j/N_{j-1}\rightarrow N_{j-1}/N_{j-2}$, $a+N_{j-1}\mapsto \delta_i (a)+N_{j-2}$) with $\overline{\delta}_i (\overline{x}_j)=\delta_{ij}$. Now, we are in the situation of Corollary \ref{ca18Dec05}. Let $\overline{\phi}$ be the corresponding map from Corollary \ref{ca18Dec05}. Clearly, $\overline{\phi} = {\rm gr}(\phi) $. Now, the result becomes obvious due to Corollary \ref{ca18Dec05}. $\Box$
\begin{lemma}\label{r22Dec05}
Let $A$, $\delta_1, \ldots , \delta_s$, and $x_1, \ldots , x_s$ be as in Theorem \ref{18Dec05}. If $A'$ is a $\delta$-invariant subalgebra of the algebra $A$ ($\delta_i(A')\subseteq A'$ for all $i$) then the restrictions $\delta_1':= \delta_1|_{A'}, \ldots , \delta_s':= \delta_s|_{A'}$ are commuting locally nilpotent derivations of the algebra $A'$ and $N_i'=A'\cap N_i$ for all $i\geq 0$; in particular, $(A')^{\delta'}=A'\cap A^\delta$, and ${\rm gr}(A')\subseteq {\rm gr}(A)$ is a natural inclusion of graded algebras. \end{lemma}
{\it Proof}. Obvious. $\Box $
{\it Example}. Let $A$ be a $K$-algebra, let $\delta_1, \ldots , \delta_s$ be commuting locally nilpotent derivations of the algebra $A$, and let $\{ N_i\}$ be the corresponding filtration. Let $Z(A)$ and $NZD(A)$ be the centre and the set of all the (left and right) non-zero-divisors of $A$ respectively. Consider the set $S:= A^\delta \cap Z(A)\cap NZD (A)$. The algebra $A$ is a subalgebra of the localization $S^{-1} A$ of the algebra $A$ at $S$, and the derivations $\delta_1, \ldots , \delta_s$ can be uniquely extended to derivations of the algebra $S^{-1} A$, denoted in the same fashion. These extended derivations $\delta_1, \ldots , \delta_s$ are commuting locally nilpotent derivations of the algebra $S^{-1} A$.
Suppose that there are elements $x_1, \ldots , x_s\in S^{-1} A$ such that $\delta_i (x_j)=\delta_{ij}$ for all $i,j$. By Lemma \ref{r22Dec05}, $N_i=A\cap \oplus_{|\alpha |\leq i}(S^{-1} A)^\delta x^\alpha$ for all $i\geq 0$. Fix elements $a_1, \ldots , a_s\in S$; then the derivations $\delta_1':= a_1\delta_1, \ldots , \delta_s':= a_s\delta_s$ of $A$ are commuting and locally nilpotent with the corresponding filtration $\{ N_i'\}$ on $A$. Then $N_i'=N_i$ for all $i\geq 0$ where $\{ N_i\}$ is the filtration on $A$ determined by the derivations $\delta_1, \ldots , \delta_s$. More generally, fix elements $a_1, \ldots , a_s, t_1, \ldots , t_s\in S$, and consider the derivations $\delta_1':= t_1^{-1}a_1\delta_1, \ldots , \delta_s':= t_s^{-1}a_s\delta_s$ of $S^{-1} A$ which are obviously commuting and locally nilpotent and satisfy $\delta_i'(A')\subseteq A'$ for all $i$ where $A':= A[t_1^{-1}, \ldots , t_s^{-1}]$. Let $A'=\cup_{i\geq 0}N_i'$ be the corresponding filtration associated with the derivations $\delta_1', \ldots , \delta_s'$. Then, by Lemma \ref{r22Dec05}, $N_i'=A'\cap \oplus_{|\alpha | \leq i} (S^{-1} A)^\delta x^\alpha $ for all $i\geq 0$. Instead of $A'$ one can take any $\delta'$-invariant subalgebra of $S^{-1} A$. \section{Generators and defining relations for the ring of invariants of commuting automorphisms} Let $A$ be an algebra over a field $K$, $\sigma \in {\rm Aut}_K(A)$, and $\delta $ be a $\sigma$-{\em derivation} of the algebra $A$: $\delta (ab)=\delta (a)\sigma (b)+a\delta (b)$ for all $a,b\in A$. We will assume that $\delta \sigma =\sigma \delta$. Then an induction on $n$ yields \begin{equation}\label{sdernab} \delta^n(ab)=\sum_{i=0}^n{n\choose i}\delta^i(a)\sigma^i\delta^{n-i}(b), \;\; n\geq 1.
\end{equation} It follows that $A^\delta := {\rm ker } (\delta )$ is a subalgebra of $A$ (the algebra of constants for $\delta $, or the ring of invariants), and that the union $M:= M(\delta , A)=\cup_{i\geq 0}M_i$ of the vector spaces $M_i:= {\rm ker } (\delta^{i+1})$ is a positively filtered algebra ($M_iM_j\subseteq M_{i+j}$ for all $i,j\geq 0$), $M_0=A^\delta \subseteq M_1\subseteq \cdots $. For each $0\neq a\in M$, there exists a unique natural number, say $d$, such that $a\in M_d\backslash M_{d-1}$. The number $d:=\deg (a)=\deg_\delta (a)$ is called the $\delta$-{\em degree} of the element $a$.
{\it Example}. For $\sigma\in {\rm Aut}_K(A)$, the map $\delta := \sigma -1$ is a $\sigma $-derivation of the algebra $A$ such that $\delta\sigma =\sigma\delta$.
Given a vector space $V$ over the field $K$, a $K$-linear map $\varphi : V\rightarrow V$ is called {\em locally nilpotent} if, for all $v\in V$, $\varphi^n(v)=0$ for all $n\gg 1$. Suppose that $\sigma_1,\ldots , \sigma_s$ are {\em commuting} $K$-automorphisms of the algebra $A$ such that the maps $\sigma_1-{\rm id}_A,\ldots , \sigma_s-{\rm id}_A$ are {\em locally nilpotent}. Then the maps $\sigma_1-{\rm id}_A,\ldots , \sigma_s-{\rm id}_A$ are {\em commuting locally nilpotent} $\sigma_1-, \ldots , \sigma_s-$derivations respectively, and all the maps $\sigma_i$, $\sigma_j-{\rm id}_A$ {\em commute}. The algebra $A$ has the filtration $\{ M_i\}_{i\geq 0}$ where $M_i:= \{ a\in A\, | \, (\sigma -{\rm id}_A)^\alpha (a)=0$ for all $\alpha \in \mathbb{N}^s$ such that $|\alpha |>i\}$ where $(\sigma -{\rm id}_A)^\alpha:= \prod_{i=1}^s (\sigma_i-{\rm id}_A)^{\alpha_i}$. Clearly, $M_0=A^\sigma := \{ a\in A \, | \, \sigma_1(a)=\cdots = \sigma_s(a)=a\}$, the {\em ring of} $\sigma$-{\em invariants}, $M_0\subseteq M_1\subseteq \cdots \subseteq M_i\subseteq \cdots \subseteq A=\cup_{i\geq 0}M_i$, and $M_iM_j\subseteq M_{i+j}$ for all $i,j\geq 0$ (use (\ref{sdernab})). {\it Example}.
Let $\sigma_1, \ldots , \sigma_s$ be $K$-automorphisms of the polynomial algebra $P_s:= K[x_1, \ldots , x_s]$ given by the rule $\sigma_i(x_j)=x_j+\delta_{ij}$. The automorphisms $\sigma_i$ commute and all the maps $\sigma_i- {\rm id}_{P_s}$ are locally nilpotent. Then the filtration $\{ M_i\}$ on the polynomial algebra $P_s$ is the ordinary filtration: $M_i=\sum_{|\alpha |\leq i}Kx^\alpha$.
\begin{lemma}\label{c18Dec05}
Let $A$, $\delta_1, \ldots , \delta_s$, and $x_1, \ldots , x_s$ be as in Theorem \ref{18Dec05}, and $\sigma \in {\rm Aut}_K(A)$. Then the automorphism $\sigma$ commutes with the derivations $\delta_1, \ldots , \delta_s$ iff $\sigma (A^\delta)=A^\delta$ and $\sigma (x_i)=x_i+\lambda_i$ for some $\lambda_i\in A^\delta$. \end{lemma}
{\it Proof}. $(\Rightarrow )$ If the automorphism $\sigma $ commutes with the derivations $\delta_i$ then so does its inverse $\sigma^{-1}$, and so $\sigma^{\pm 1} (A^\delta )\subseteq A^\delta$, hence $\sigma (A^\delta )=A^\delta$. By (\ref{AAds}), $\sigma (x_i)=\lambda_i+\sum_{0\neq \alpha \in \mathbb{N}^s}\lambda_{i, \alpha} x^\alpha$ for some $\lambda_i,\lambda_{i, \alpha}\in A^\delta$. Comparing the coefficients of the $x^\alpha$'s in the system of equations $\sigma \delta_i (x_j)=\delta_i\sigma (x_j)$, $1\leq i,j\leq s$, yields $\sigma (x_i)=\lambda_i +x_i$ for all $i$. $(\Leftarrow )$ This implication is obvious. $\Box $
\begin{theorem}\label{20Dec05}
Let $A$ be an arbitrary $K$-algebra and $\sigma_1, \ldots , \sigma_s\in {\rm Aut}_K(A)$ be automorphisms of the algebra $A$. The following statements are equivalent. \begin{enumerate} \item The maps $\sigma_1-{\rm id}_A, \ldots , \sigma_s-{\rm id}_A$ are commuting locally nilpotent and there exist elements $x_1,\ldots , x_s\in A$ satisfying $\sigma_i(x_j)=x_j+\delta_{ij}$ (the Kronecker delta) for $1\leq i,j\leq s$.
\item $\sigma_1=e^{\delta_1}, \ldots , \sigma_s=e^{\delta_s}$ for some commuting locally nilpotent derivations $\delta_1, \ldots , \delta_s\in {\rm Der }_K(A)$ such that $\delta_i(x_j)=\delta_{ij}$, $1\leq i,j\leq s$, for some elements $x_1,\ldots , x_s\in A$. \end{enumerate} If one of the two equivalent conditions holds then $\delta_i:= \sum_{k\geq 1}(-1)^{k+1}\frac{(\sigma_i-{\rm id}_A)^k}{k}$, $A^\delta =A^\sigma : = \{ a\in A \, | \, \sigma_1 (a)=\cdots = \sigma_s(a)=a\}$, and the two sets of $x$'s coincide up to adding elements of $A^\sigma =A^\delta$. So, one can apply all the previous results in finding generators and defining relations for the algebra $A^\sigma$ (we leave it to the interested reader to write down the corresponding statements). \end{theorem}
{\it Proof}. It is well-known that if an automorphism $\sigma \in {\rm Aut}_K(A)$ is such that the map $\sigma -{\rm id}_A$ is a locally nilpotent map then $\sigma =e^\delta =\sum_{k\geq 0}\frac{\delta^k}{k!}$ for a unique locally nilpotent derivation $\delta =\sum_{k\geq 1}(-1)^{k+1}\frac{(\sigma-{\rm id}_A)^k}{k}\in {\rm Der }_K(A)$, and vice versa. It follows that the maps $\sigma_i-{\rm id}_A$ are commuting locally nilpotent iff the derivations $\delta_i$ are commuting locally nilpotent. Then, $\sigma_i(x_j)=x_j+\delta_{ij}$ iff $\delta_i(x_j)=\delta_{ij}$. It is obvious that $A^\delta =A^\sigma$. $\Box$
{\it Example}. Let $P_n=K[x_1, \ldots , x_n]$ be a polynomial algebra and $\sigma \in {\rm Aut}_K(P_n)$ where $\sigma (x_i)=x_i+x_{i-1}$, $i=1, \ldots , n$, $x_0:=1$ (and $x_i:=0$ for all $i<0$). Then $\sigma -{\rm id}$ is a locally nilpotent map and $P_n^\sigma =P_n^\delta$ where $\delta := \sum_{k\geq 1}(-1)^{k+1}\frac{(\sigma -{\rm id})^k}{k}$, $\delta (x_1)=1$.
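The passage between $\sigma$ and $\delta$ can be verified symbolically for small $n$. The following sympy sketch (illustrative only; the helper names are ours) implements $\sigma$ on $K[x_1,\ldots ,x_4]$, computes $\delta =\log \sigma$ by the finite series above, and checks that $\delta (x_1)=1$, that $\delta$ is a derivation, and that $e^\delta =\sigma$.

```python
import sympy as sp
from math import factorial

xs = sp.symbols('x1:5')          # x1, x2, x3, x4 (the case n = 4)

def sigma(f):
    # sigma(x_i) = x_i + x_{i-1}, with x_0 := 1
    prev = (sp.Integer(1),) + xs[:-1]
    return sp.expand(f.subs(list(zip(xs, [x + p for x, p in zip(xs, prev)])),
                            simultaneous=True))

def delta(f):
    # delta = log(sigma) = sum_{k>=1} (-1)^{k+1} (sigma - id)^k / k;
    # the sum is finite because sigma - id is locally nilpotent
    total, g, k = sp.Integer(0), f, 0
    while True:
        g = sp.expand(sigma(g) - g)      # g = (sigma - id)^{k+1}(f)
        k += 1
        if g == 0:
            return sp.expand(total)
        total += sp.Rational((-1)**(k + 1), k) * g

def exp_delta(f):
    # e^delta = sum_k delta^k / k! should recover sigma
    total, g, k = sp.Integer(0), f, 0
    while g != 0:
        total += g / factorial(k)
        g = delta(g)
        k += 1
    return sp.expand(total)

x1, x2, x3, x4 = xs
assert delta(x1) == 1                                           # delta(x_1) = 1
u, v = x2 + x1*x3, x1*x2
assert exp_delta(u) == sigma(u)                                 # sigma = e^delta
assert sp.expand(delta(u*v) - (delta(u)*v + u*delta(v))) == 0   # Leibniz rule
```

Since $\sigma -{\rm id}$ lowers the weight $w(x_i)=i$, both series terminate after finitely many steps, so the checks are exact rather than truncations.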
By Theorem \ref{20Dec05} and Corollary \ref{ca18Dec05}, the ring of invariants $P_n^\sigma =K[\phi (x_2), \ldots , \phi (x_n)]$ is a polynomial algebra in $n-1$ variables \begin{eqnarray*} \phi (x_i)&=&\sum_{k\geq 0}(-1)^k\frac{x_1^k}{k!}\sum_{i_1, \ldots , i_k\geq 1} (-1)^{i_1+\cdots +i_k+k}\frac{(\sigma -{\rm id})^{i_1+\cdots +i_k}}{i_1\cdots i_k}(x_i) \\ &=& \sum_{k=0}^i\frac{x_1^k}{k!}\sum_{i_1, \ldots , i_k\geq 1} (-1)^{i_1+\cdots +i_k}\frac{x_{i-i_1-\cdots -i_k }}{i_1\cdots i_k}. \end{eqnarray*}
\begin{corollary}\label{d25Dec05}
Let $A$ be an arbitrary algebra over the field $K$. The following statements are equivalent. \begin{enumerate} \item There exist commuting $K$-automorphisms $\sigma_1,\ldots ,\sigma_s$ of the algebra $A$ such that the maps $\sigma_i -{\rm id}$ are locally nilpotent and $\sigma_i(x_j)=x_j+\delta_{ij}$ for some elements $x_1,\ldots , x_s\in A$. \item The algebra $A$ is an iterated Ore extension $A=B[x_1;d_1]\cdots [x_s;d_s]$ such that $d_i(B)\subseteq B$ and $d_i(x_j)\in B$ for all $1\leq i,j\leq s$. \end{enumerate} If, say, the first condition holds then $A=A^\sigma [x_1;d_1]\cdots [x_s;d_s]$ is an iterated Ore extension of the ring $A^\sigma$ such that $d_i = {\rm ad } (x_i)$, $[x_i, A^\sigma ]\subseteq A^\sigma$, and $[x_i,x_j]\in A^\sigma$ for all $i,j$. In particular, $A=\oplus_{\alpha \in \mathbb{N}^s}x^\alpha A^\sigma = \oplus_{\alpha \in \mathbb{N}^s} A^\sigma x^\alpha=\cup_{i\geq 0}M_i$ where $ M_i=\oplus_{|\alpha |\leq i} A^\sigma x^\alpha $. \end{corollary}
{\it Proof}. $(1\Rightarrow 2)$ By Theorem \ref{20Dec05}, we have a set $\delta_1, \ldots , \delta_s$ of commuting locally nilpotent derivations of the algebra $A$ such that $\delta_i(x_j)=\delta_{ij}$ for all $i,j$. By Theorem \ref{18Dec05}, statement 2 holds. $(2\Rightarrow 1)$ Given the iterated Ore extension as in statement 2.
It is easy to check that the $K$-automorphisms $\sigma_1, \ldots , \sigma_s\in {\rm Aut}_K(A)$ given by the rule $\sigma_i(x_j)=x_j+\delta_{ij}$ satisfy the conditions of statement 1. The rest follows from Theorem \ref{18Dec05} and the fact that $(\sigma -{\rm id})^\alpha (x^\beta )=\alpha ! \delta_{\alpha , \beta}$ for all $\alpha , \beta \in \mathbb{N}^s $ such that $|\alpha |\geq |\beta |$ where $(\sigma -{\rm id})^\alpha := \prod_{i=1}^s(\sigma_i-{\rm id })^{\alpha_i}$. $\Box $
Corollary \ref{d25Dec05} proves that the filtration $\{M_i\}$ of the algebra $A$ for the automorphisms $\sigma_1, \ldots , \sigma_s$ coincides with the filtration $\{ N_i\}$ for the derivations $\delta_1, \ldots , \delta_s$ (where $\sigma_i=e^{\delta_i}$), that is, $M_i=N_i$ for all $i\geq 0$. \section{The inverse map for automorphisms that preserve the ring of invariants of derivations} Let $A$, $\delta_1, \ldots , \delta_s$ and $x_1, \ldots , x_s$ be as in Theorem \ref{18Dec05}. Suppose that an automorphism $\sigma\in {\rm Aut}_K(A)$ preserves the ring of invariants $A^\delta$ ($\sigma (A^\delta )=A^\delta$). Let $\sigma_\delta := \sigma|_{A^\delta}\in {\rm Aut}_K(A^\delta)$. If we know the inverse $\sigma_\delta^{-1}$ and the twisted derivations $\delta_1':= \sigma \delta_1\sigma^{-1}, \ldots , \delta_s':= \sigma \delta_s\sigma^{-1}\in {\rm Der }_K(A)$, then we can write {\em explicitly} a formula for the inverse automorphism $\sigma^{-1}$ of $\sigma $ (Theorem \ref{i19Dec05} and Theorem \ref{i8Nov05}). Since $A=\oplus_{\alpha \in \mathbb{N}^s}A^\delta x^\alpha = \oplus_{\alpha \in \mathbb{N}^s} x^\alpha A^\delta $, the automorphism $\sigma$ is uniquely determined by its restriction $\sigma_\delta$ to the ring of invariants $A^\delta$ and the images of the $x$'s: \begin{equation}\label{gxsh1} x_1':= \sigma (x_1), \ldots , x_s':=\sigma (x_s).
\end{equation} The twisted derivations $\delta_1':= \sigma \delta_1\sigma^{-1}, \ldots , \delta_s':= \sigma \delta_s\sigma^{-1}\in {\rm Der }_K(A)$ form a set of {\em commuting locally nilpotent} derivations of the algebra $A$ satisfying $\delta_i'(x_j')=\delta_{ij}$. For each $i=1, \ldots , s$, consider the maps $$ \phi_i':=\sum_{k\geq 0}(-1)^k\frac{(x_i')^k}{k!}(\delta_i')^k, \;\; \psi_i':=\sum_{k\geq 0}(-1)^k(\delta_i')^k (\cdot ) \frac{(x_i')^k}{k!}\, : \, A\rightarrow A$$ which are homomorphisms of right and left $A^\delta$-modules respectively. The maps \begin{eqnarray*} \phi_\sigma &: =&\phi_s'\phi_{s-1}'\cdots \phi_1':A\rightarrow A, \;\; a=\sum_{\alpha \in \mathbb{N}^s} (x')^\alpha a_\alpha \mapsto \phi_\sigma (a)=a_0,\\ \psi_\sigma &:=& \psi_1'\psi_2'\cdots \psi_s':A\rightarrow A, \;\; a=\sum_{\alpha \in \mathbb{N}^s} a_\alpha (x')^\alpha\mapsto \psi_\sigma (a)=a_0, \end{eqnarray*} are {\em projections} onto the subalgebra $A^\delta$ of $A=A^\delta \oplus (\oplus_{0\neq \alpha \in \mathbb{N}^s }(x')^\alpha A^\delta )$ and $A=A^\delta \oplus (\oplus_{0\neq \alpha \in \mathbb{N}^s } A^\delta (x')^\alpha )$ respectively; they are homomorphisms of right and left $A^\delta $-modules respectively. By Theorem \ref{a18Dec05}, for any $a\in A$, $$ a=\sum_{\alpha \in \mathbb{N}^s}(x')^\alpha \phi_\sigma (\frac{(\delta')^\alpha}{\alpha !} a)=\sum_{\alpha \in \mathbb{N}^s}\psi_\sigma (\frac{(\delta')^\alpha}{\alpha !} a)(x')^\alpha.$$ Then applying $\sigma^{-1}$ we finish the proof of the next theorem.
\begin{theorem}\label{i19Dec05}
Let $A$, $\delta_i$, $\delta_i'$, $x_i$, and $x_i'$ be as above (i.e. the algebra $A$ is from Theorem \ref{18Dec05}).
For $a\in A$, $$ \sigma^{-1}(a)= \sum_{\alpha \in \mathbb{N}^s} x^\alpha \sigma_\delta^{-1}\phi_\sigma (\frac{(\delta ')^\alpha}{\alpha !}a)= \sum_{\alpha \in \mathbb{N}^s} \sigma_\delta^{-1}\psi_\sigma (\frac{(\delta ')^\alpha}{\alpha !}a)x^\alpha.$$ \end{theorem}
As an application of Theorem \ref{i19Dec05} we find the inverse map of an automorphism of the Weyl algebra with polynomial coefficients. The {\em Weyl} algebra $A_n=A_n(K)$ is a $K$-algebra generated by $2n$ generators $x_1, \ldots , x_{2n}$ subject to the defining relations: $$ [x_{n+i}, x_j]=\delta_{ij}, \;\; [x_i, x_j]=[x_{n+i}, x_{n+j}]=0\;\; {\rm for\;\; all}\;\; 1\leq i,j\leq n,$$ where $\delta_{ij}$ is the Kronecker delta, $[a,b]:=ab-ba$. Let $P_m=K[x_{2n+1}, \ldots , x_{2n+m}]$ be a polynomial algebra. The Weyl algebra with polynomial coefficients is $A:= A_n\otimes P_m=\bigoplus_{\alpha \in \mathbb{N}^s} Kx^\alpha $ where $s:=2n+m$, $x^{\alpha }:= x_1^{\alpha_1 }\cdots x_s^{\alpha_s }$, and the {\em order} of the $x$'s in the product is {\em fixed}. The algebra $A_n\otimes P_m$ admits a finite set of {\em commuting locally nilpotent} derivations, namely, the `partial derivatives': $ \partial _1:= \frac{\partial }{\partial x_1}, \ldots , \partial _s:= \frac{\partial }{\partial x_s}$. Clearly, $\partial _i= {\rm ad } (x_{n+i})$ and $\partial _{n+i}=- {\rm ad } (x_i)$, $i=1,\ldots , n$. Let ${\rm Aut}_K(A_n\otimes P_m)$ be the group of $K$-algebra automorphisms of the algebra $A_n\otimes P_m$. An automorphism $\sigma \in{\rm Aut}_K(A_n\otimes P_m)$ is uniquely determined by the elements $ x_1':= \sigma (x_1), \ldots , x_s':=\sigma (x_s) $ of the algebra $A_n\otimes P_m$. The centre $Z:=Z(A_n\otimes P_m)$ of the algebra $A_n\otimes P_m$ is equal to $P_m$. Therefore, the restriction $\sigma|_{P_m}$ belongs to ${\rm Aut}_K(P_m)$, and so $ \Delta := {\rm det } (\frac{\partial x_{2n+i}'}{\partial x_{2n+j}})\in K^*$ where $i,j=1, \ldots , m$.
The corresponding (to the elements $x_1',\ldots , x_s'$) `partial derivatives' (the set of commuting locally nilpotent derivations of the algebra $A_n\otimes P_m$) \begin{equation}\label{xsh2} \partial _1':= \frac{\partial }{\partial x_1'}, \ldots , \partial _s':= \frac{\partial }{\partial x_s'} \end{equation} are equal to \begin{equation}\label{dad1} \partial _i':= {\rm ad } (\sigma (x_{n+i})), \;\; \partial _{n+i}':= - {\rm ad } (\sigma (x_{i})), \;\; i=1, \ldots , n, \end{equation} \begin{equation}\label{dad2} \partial _{2n+j}' := \Delta ^{-1} {\rm det } \begin{pmatrix} \frac{\partial \sigma (x_{2n+1})}{\partial x_{2n+1}} & \cdots & \frac{\partial \sigma (x_{2n+1})}{\partial x_{2n+m}} \\ \vdots & \vdots & \vdots \\ \frac{\partial }{\partial x_{2n+1}} & \cdots & \frac{\partial }{\partial x_{2n+m}}\\ \vdots & \vdots & \vdots \\ \frac{\partial \sigma (x_{2n+m})}{\partial x_{2n+1}} & \cdots & \frac{\partial \sigma (x_{2n+m})}{\partial x_{2n+m}} \\ \end{pmatrix}, \;\;\; j=1, \ldots , m, \end{equation} where we `drop' $\sigma (x_{2n+j})$ in the determinant $ {\rm det } (\frac{\partial \sigma (x_{2n+k})}{\partial x_{2n+l}})$, i.e. the $j$'th row of the matrix consists of the operators $\frac{\partial }{\partial x_{2n+l}}$. Clearly, $\partial _i'=\sigma \partial _i\sigma^{-1}$ for $i=1, \ldots , s$, and $A^\partial =K$, so $\sigma|_K={\rm id}_K$ is known. Now, one can apply Theorem \ref{i19Dec05} to obtain the next result.
\begin{theorem}\label{i8Nov05}
\cite{inform'05} {\rm (The Inversion Formula)} For each $\sigma \in {\rm Aut}_K(A_n\otimes P_m)$ and $a\in A_n\otimes P_m$, $$ \sigma^{-1}(a)=\sum_{\alpha \in \mathbb{N}^s}x^\alpha \phi_\sigma (\frac{(\partial ')^\alpha}{\alpha!}a)=\sum_{\alpha \in \mathbb{N}^s}\psi_\sigma (\frac{(\partial ')^\alpha}{\alpha!}a)x^\alpha , $$ where $(\partial ')^{\alpha} :=(\partial _1')^{\alpha_1}\cdots (\partial _s')^{\alpha_s}$ and $s=2n+m$.
\end{theorem} \section{Integral closure and commuting locally nilpotent derivations} In this section, we describe the structure of algebras that admit a set of commuting locally nilpotent derivations with left localizable kernels. For an arbitrary algebra $A$, we say that derivations $\delta_1,\ldots , \delta_s$ of the algebra $A$ have {\em generic kernels} iff the $2^s-1$ sets $\{ \cap_{i\in I} A^{\delta_i}\, | \, \emptyset \neq I\subseteq \{ 1, \ldots , s\}\}$ are {\em distinct} (equivalently, the sets $A^\delta$ and $A^{\delta}_{\widehat{i}} := \cap_{j\neq i}A^{\delta_j}$, $i=1,\ldots , s$, are distinct; equivalently, $A^\delta \neq A^{\delta}_{\widehat{i}} $ for $i=1,\ldots , s$). We say that the derivations $\delta_1,\ldots , \delta_s$ of the algebra $A$ have {\em left localizable kernels} iff there exists a {\em left Ore} set $S$ of the algebra $A$ such that $S\subseteq A^\delta \cap A^{reg}$ and $S\cap \delta_i(A^{\delta}_{\widehat{i}} )\neq \emptyset$ for all $i=1,\ldots ,s$, where $A^{reg}$ is the set of all {\em regular} elements of the algebra $A$ (an element $a\in A$ is {\em regular} if, by definition, it is neither a left nor a right zero divisor of the algebra $A$). {\em If the derivations $\delta_1, \ldots , \delta_s$ have left localizable kernels then they have generic kernels}: for each $i$, fix $y_i\in A^{\delta}_{\widehat{i}}$ such that $\delta_i (y_i)\in S$; then $y_i\in A^{\delta}_{\widehat{i}} \backslash (A^\delta \cup \cup_{j\neq i} A^{\delta}_{\widehat{j}} )$, and so the derivations have generic kernels. Clearly, if there exist elements $x_1, \ldots , x_s\in A$ such that $\delta_i(x_j)=\delta_{ij}$ then the derivations $\delta_1, \ldots , \delta_s$ have left localizable kernels (but not vice versa): it suffices to take $S=\{ 1\}$.
\begin{theorem}\label{24Dec05}
Let $A$ be an (arbitrary) algebra over the field $K$. The following statements are equivalent.
\begin{enumerate} \item The algebra $A$ admits a finite set of commuting locally nilpotent derivations, say $\delta_1, \ldots , \delta_s\in {\rm Der }_K(A)$, with left localizable kernels. \item There exists a left Ore set $S$ of the algebra $A$ such that $S\subseteq A^{reg}$, $S^{-1} A = B[x_1; d_1]\cdots $ $ [ x_s; d_s]$ is an iterated Ore extension such that $S\subseteq B$, $d_i (B)\subseteq B$ and $d_i(x_j)\in B$ for all $1\leq i,j\leq s$, and the algebra $A$ is $\partial _i$-invariant $(\partial _i(A)\subseteq A)$ for all $1\leq i \leq s$ where $\partial _i := \frac{\partial }{\partial x_i}\in {\rm Der }_B(S^{-1} A)$ are the formal partial derivatives of the $B$-algebra $S^{-1} A$. \end{enumerate} If, say, the first condition holds then there exists a left Ore set $S\subseteq A^\delta \cap A^{reg}$ such that $S^{-1} A=(S^{-1} A)^\delta [x_1; d_1]\cdots [ x_s; d_s]$ is an iterated Ore extension such that $d_i ((S^{-1} A)^\delta )\subseteq (S^{-1} A)^\delta $ and $d_i(x_j)\in (S^{-1} A)^\delta $ for all $1\leq i,j\leq s$. In particular, $S^{-1} A= \oplus_{\alpha \in \mathbb{N}^s } (S^{-1} A)^\delta x^\alpha = \oplus_{\alpha \in \mathbb{N}^s } x^\alpha (S^{-1} A)^\delta $ and $S^{-1} A=\cup_{i\geq0}N_i'$, $N_i'=\oplus_{|\alpha |\leq i } (S^{-1} A)^\delta x^\alpha=\oplus_{|\alpha |\leq i } x^\alpha (S^{-1} A)^\delta $, $i\geq 0$. Finally, $A=\cup_{i\geq 0}N_i$ and $N_i'=S^{-1} N_i$ for all $i\geq 0$; in particular, $S^{-1} (A^\delta)=(S^{-1} A)^\delta $ and $A^\delta =A\cap (S^{-1} A)^\delta$. \end{theorem}
{\it Proof}. $(1\Rightarrow 2)$ The derivations $\delta_i$ have left localizable kernels, that is, there exists a left Ore set $S$ of the algebra $A$ such that $S\subseteq A^\delta \cap A^{reg}$ and $S\cap \delta_i (A^{\delta}_{\widehat{i}} )\neq \emptyset$ for all $i=1, \ldots , s$.
So, for each $i=1, \ldots , s$, one can pick an element, say $y_i\in A^{\delta}_{\widehat{i}}$, such that $\delta_i(y_i)\in S$; then for the elements $x_i:= \delta_i(y_i)^{-1}y_i\in S^{-1} A$ we have $\delta_j(x_i)=\delta_{ij}$, where the `new' derivation $\delta_j$ is the unique extension of the `old' derivation $\delta_j$ to a derivation of the algebra $S^{-1} A$. By Theorem \ref{18Dec05}, $S^{-1} A= B[x_1;d_1]\cdots [ x_s; d_s]$ is an iterated Ore extension such that $ B=(S^{-1} A)^\delta$, $d_i(B)\subseteq B$, $d_i(x_j)\in B$ for all $i,j$, and $\delta_i= \frac{\partial }{\partial x_i}\in {\rm Der }_B(S^{-1} A)$ are formal partial derivatives over $B$. Now, it is obvious that the algebra $A$ is $\frac{\partial }{\partial x_i}$-invariant for all $i$. $(2\Rightarrow 1)$ Suppose that the second statement holds. The derivations $\partial _1, \ldots , \partial _s\in {\rm Der }_B(S^{-1} A)$ are commuting locally nilpotent, hence so are their restrictions, say $\delta_1, \ldots , \delta_s$, to the $\partial $-invariant subalgebra $A$ of $S^{-1} A$ (the set $S$ consists of regular elements of the algebra $A$, so one can identify the algebra $A$ with its isomorphic image in $S^{-1} A$ under the natural monomorphism $A\rightarrow S^{-1} A$, $a\mapsto \frac{a}{1}$). For each $x_i$, fix an element $s_i\in S$ such that $y_i:= s_ix_i\in A$. Then $\delta_j (y_i)=s_i\delta_{ij}$ for all $i,j$. Since $S\subseteq A^{reg}\cap A^\delta$ is a left Ore set and $\delta_i (y_i)=s_i\in \delta_i (A^{\delta}_{\widehat{i}} )\cap S$, the kernels of the derivations $\delta_1, \ldots , \delta_s$ are left localizable. This finishes the proof of the implication. The rest is a direct consequence of Theorem \ref{18Dec05} and the fact that $S\subseteq A^{reg}\cap A^\delta$. $\Box $
\begin{corollary}\label{c24Dec05}
Let a $K$-algebra $A$ be a commutative domain. The following statements are equivalent.
\begin{enumerate} \item The algebra $A$ admits a finite set of commuting locally nilpotent derivations, say $\delta_1, \ldots , \delta_s\in {\rm Der }_K(A)$, with generic kernels. \item There exists a nonzero element $t\in A$ such that the localization $A_t:= A[t^{-1}]$ of the algebra $A$ at the powers of the element $t$ is a polynomial algebra $A_t=B[x_1, \ldots , x_s]$ such that $t\in B$ and the algebra $A$ is $\partial _i$-invariant ($\partial _i(A)\subseteq A$) for all $1\leq i \leq s$ where $\partial _i := \frac{\partial }{\partial x_i}\in {\rm Der }_B(A_t)$ are the formal partial derivatives of $ A_t$ over $B$. \end{enumerate}
{\it Proof}. $(1\Rightarrow 2)$ The derivations $\delta_i$ have generic kernels, so the algebras $A^\delta$ and $A^{\delta}_{\widehat{i}}$, $i=1,\ldots , s$, are distinct. So, for each $i=1, \ldots , s$, one can fix an element, say $y_i\in A^{\delta}_{\widehat{i}}$, such that $0\neq \delta_i(y_i)\in A^\delta$. Then the element $t:= \delta_1(y_1)\cdots \delta_s(y_s)\in A^\delta$ is nonzero since the algebra $A$ is a domain, and the derivations $\delta_1, \ldots , \delta_s$ have (left) localizable generic kernels: it suffices to take $S=\{ t^i\, | \, i\geq 0\}$. Applying Theorem \ref{24Dec05}, we obtain statement 2. $(2\Rightarrow 1)$ This implication is obvious because of Theorem \ref{24Dec05}. $\Box $
The next result gives explicit generators and defining relations for the integral closure $\widetilde{K}$ of the field $K$ in the algebra $A$.
\begin{corollary}\label{1c24Dec05}
Let a domain $A=K\langle y_1,\ldots , y_r\rangle$ be an affine commutative $K$-algebra of Krull dimension $s\geq 1$, and let $\widetilde{K}$ be the algebraic closure of the field $K$ in the algebra $A$ ($\widetilde{K} $ is a field finite over $K$, i.e. $[\widetilde{K} :K]<\infty$). The following statements are equivalent.
\begin{enumerate} \item There exist $s$ commuting locally nilpotent derivations, say $\delta_1, \ldots , \delta_s\in {\rm Der }_K(A)$, with generic kernels. \item $A=\widetilde{K} [x_1,\ldots , x_s]$ is a polynomial algebra over the field $\widetilde{K}$ in $s$ variables. \item There exist derivations $\delta_1, \ldots , \delta_s\in {\rm Der }_{\widetilde{K} }(A)$ and elements $x_1, \ldots , x_s\in A$ such that $\delta_i(x_j)=\delta_{ij}$ (the Kronecker delta) for all $1\leq i,j\leq s$. \end{enumerate} If, say, the first statement holds and $\{ N_i\}$ is the filtration on $A$ associated with the derivations $\delta_1,\ldots , \delta_s$ then $(i)$ ${\rm dim }_K(N_i)=[\widetilde{K} :K]{i+s\choose s}= \frac{[\widetilde{K} :K]}{s!}i^s+\cdots $ for all $i\geq 0$. $(ii)$ The map $\phi : A\rightarrow \widetilde{K}$ (from Corollary \ref{ca18Dec05}) is an algebra epimorphism with kernel $(x_1, \ldots , x_s)$, i.e. $\widetilde{K} \simeq A/(x_1,\ldots , x_s)$. Alternatively, the field $\widetilde{K}$ is generated over $K$ by the elements $\phi (y_1), \ldots , \phi (y_r)$ that satisfy the defining relations $ {\cal R } = \{ f(t_1, \ldots , t_r)\in K[t_1, \ldots , t_r]\, | \, f(\phi (y_1), \ldots , \phi (y_r))\in (x_1,\ldots , x_s)\}$. \end{corollary}
{\it Proof}. $(1\Rightarrow 3)$ By Corollary \ref{c24Dec05}, $A_t =A_t^\delta [ x_1, \ldots , x_s]$ for a nonzero element $t\in A^\delta$. Since $s= {\rm Kdim } (A)= {\rm Kdim } (A_t)$, we see that $A_t^\delta$ is a field (since $A$ is a domain) which is finite over $K$, i.e. $[A^\delta_t:K]<\infty$. The element $t\in A^\delta \subseteq A^\delta_t$ is algebraic over $K$, hence $t^{-1}\in A^\delta$, and so $A_t=A$ and $A=A^\delta [x_1, \ldots , x_s]$. Clearly, $A^\delta \subseteq \widetilde{K}$.
The reverse inclusion is obvious since char$(K)=0$ (if $u\in \widetilde{K}$ then $f(u)=0$ and $f'(u):=\frac{df}{dx}(u)\neq 0$ for some nonzero polynomial $f(x)\in K[x]$, and so $0=\delta_i (f(u))=f'(u)\delta_i(u)$ implies $\delta_i(u)=0$ for all $i$, which means that $u\in A^\delta$); hence $A^\delta =\widetilde{K} $. It is obvious that $\delta_i=\frac{\partial }{\partial x_i}\in {\rm Der }_{\widetilde{K} }(A)$ and $\delta_i(x_j)=\delta_{ij}$. $(3\Rightarrow 2)$ By Theorem \ref{18Dec05}, $A=A^\delta [x_1,\ldots , x_s]$ is a polynomial algebra with coefficients from the algebra of invariants $A^\delta$. Repeating the above argument we have $A^\delta =\widetilde{K}$. $(2\Rightarrow 1)$ The formal partial derivatives $\delta_i:=\frac{\partial }{\partial x_i}\in {\rm Der }_{\widetilde{K} }(A)$ are commuting locally nilpotent derivations with generic kernels since $x_i\in A^{\delta}_{\widehat{i}} \backslash (A^\delta \cup \cup_{j\neq i} A^{\delta}_{\widehat{j}} )$. This finishes the proof of the equivalence of the three statements. Suppose that the equivalent conditions hold; then $N_i=\oplus_{|\alpha |\leq i} \widetilde{K} x^\alpha$, and so ${\rm dim }_K(N_i)=[\widetilde{K} :K]{i+s\choose s}= \frac{[\widetilde{K} :K]}{s!}i^s+\cdots $ for all $i\geq 0$, i.e. $(i)$ is proved. The statement $(ii)$ follows from Corollary \ref{ca18Dec05}. $\Box $
{\it Remark}. The finite separable field extension $\widetilde{K} /K$ is generated by a single element, say $x$, over $K$. So, the algebra $A$ from Corollary \ref{1c24Dec05} is generated by $s+1$ elements $x, x_1,\ldots , x_s$ that satisfy a single defining relation $f(x)=0$ for an irreducible polynomial $f(y)\in K[y]$ of degree $[\widetilde{K} :K]$. \section{A construction of simple algebras} In this section, a construction of simple algebras is given (Theorem \ref{26Dec05}) that comes from a set of commuting locally nilpotent derivations which satisfy the conditions of Theorem \ref{18Dec05}.
\begin{theorem}\label{26Dec05}
Let $A$, $\delta_1, \ldots , \delta_s$ and $x_1, \ldots , x_s$ be as in Theorem \ref{18Dec05}, and let $\mathfrak{m}$ be a (two-sided) maximal ideal of the algebra $A^\delta$ such that $[x_i, \mathfrak{m} ]\subseteq \mathfrak{m} $ and $[x_i, Z]\subseteq \mathfrak{m}$ for all $i=1, \ldots , s$ where $Z$ is the centre of the factor algebra $A^\delta / \mathfrak{m}$. Then the iterated Ore extension ${\cal A} := A/(\mathfrak{m} )[t_1, \ldots , t_s; \delta_1, \ldots , \delta_s]$ of the algebra $A/(\mathfrak{m} )$ is a simple algebra where $(\mathfrak{m} ):= A\mathfrak{m} A$, the elements $t_1, \ldots , t_s$ commute, and $t_ia=at_i+\delta_i(a)$ for all $a\in A/(\mathfrak{m} )$ where $\delta_i\in {\rm Der }_K(A/(\mathfrak{m} ))$ is the induced derivation: $u+(\mathfrak{m} )\mapsto \delta_i(u)+(\mathfrak{m} )$, $u\in A$. \end{theorem}
{\it Proof}. Using Theorem \ref{18Dec05} and abusing notation slightly, one can write the factor algebra $A/(\mathfrak{m} )$ as the iterated Ore extension $A^\delta /\mathfrak{m} [x_1;d_1]\cdots [ x_s; d_s]$ of the algebra $A^\delta /\mathfrak{m}$. So, without loss of generality we can assume that $\mathfrak{m} =0$, that is, $A^\delta$ is a simple algebra. We have to prove that the iterated Ore extension ${\cal A} :=A [t_1, \ldots , t_s; \delta_1, \ldots , \delta_s]$ of the algebra $A$ is a simple algebra. The algebra $A^\delta$ is simple, and so its centre $Z$ is a field that contains the field $K$. Let $I$ be a nonzero ideal of the algebra ${\cal A}$; we have to show that $I={\cal A}$. Recall that ${\cal A} =\oplus_{\alpha \in \mathbb{N}^s}At^\alpha$ where $t^\alpha:= t_1^{\alpha_1}\cdots t_s^{\alpha_s}$ and $ A=\oplus_{\alpha \in \mathbb{N}^s} A^\delta x^\alpha$. Fix a nonzero element, say $a\in I$. Then $a=\sum a_\alpha t^\alpha$ for some elements $a_\alpha \in A$ not all of which are zero.
Note that the inner derivation $ {\rm ad } (t_i)$ of the algebra ${\cal A}$ is a formal partial derivative $\frac{\partial }{\partial x_i}$ over $A^\delta$ of the algebra ${\cal A} = A^\delta \langle x_1, \ldots , x_s, t_1, \ldots , t_s\rangle$, that is $\frac{\partial }{\partial x_i}(A^\delta )=0$, $\frac{\partial }{\partial x_i}(x_i)=1$ and $\frac{\partial }{\partial x_i}(y)=0$ for all $y\in \{ x_1, \ldots , \widehat{x}_i, \ldots , x_s, t_1, \ldots , t_s\}$ (the hat over a symbol means that it is omitted). Note that the ideal $I$ is $\frac{\partial }{\partial x_i}$-invariant for all $i=1, \ldots , s$. Carefully applying inner derivations of the type $ {\rm ad } (t_i)=\frac{\partial }{\partial x_i}$ to the element $a$ several times, we see that we can assume that all the coefficients $a_\alpha$ belong to $A^\delta$ and that not all of them are zero. Let $V\subseteq A^\delta$ be the vector space over the field $Z$ generated by all the coefficients $a_\alpha$. Suppose that a set $a_\alpha , a_\beta , \ldots , a_\gamma$ is a $Z$-basis for $V$. By {\em the Density Theorem}, there are elements $u_1, \ldots , u_k, v_1, \ldots , v_k\in A^\delta$ such that $\sum_{i=1}^k u_ia_\alpha v_i=1$, $\sum_{i=1}^k u_ia_\beta v_i=0, \ldots , \sum_{i=1}^k u_ia_\gamma v_i=0$. Applying the map $A \rightarrow A$, $(\cdot )\mapsto \sum_{i=1}^k u_i(\cdot )v_i$, to the element $a$, we can assume that all the coefficients $a_\alpha \in Z$ but not all are zero. By the assumption, $[x_i, Z]=0$ for all $i=1, \ldots , s$. Then, carefully applying the inner derivations of the type $ - {\rm ad } (x_i)$ to the element $a$ and taking into account the fact that $- {\rm ad } (x_i)(t_j)=\delta_{ij}$, we get an element $0\neq b\in Z\cap I$. Hence, $I={\cal A}$, as required. $\Box $ {\it Example}. Let $A=P_n:=K[x_1, \ldots , x_n]$, $\delta_1:=\frac{\partial }{\partial x_1}, \ldots , \delta_s:=\frac{\partial }{\partial x_s}\in {\rm Der }_K(P_n)$ with $s=n$, so that $A^\delta =K$, and $\mathfrak{m} =0$.
Then the algebra $P_n[t_1, \ldots , t_s; \delta_1, \ldots , \delta_s]$ is the $n$'th Weyl algebra $A_n$. {\it Example}. Let $A=F_2:=K\langle x_1, x_2\rangle$ be the free algebra and $\delta_1:=\frac{\partial }{\partial x_1}, \delta_2:=\frac{\partial }{\partial x_2}\in {\rm Der }_K(F_2)$; the ideal $\mathfrak{m} $ of $F_2^\delta$ generated by the single element $[x_2, x_1]-1$ is $ {\rm ad } (x_i)$-invariant, $i=1, 2$. Then $F_2^\delta /\mathfrak{m} \simeq K$ and the algebra ${\cal A} =K[x_1][x_2; d_2:=\frac{\partial }{\partial x_1}][t_1, t_2; \delta_1, \delta_2]$ is a simple algebra. {\it Example}. Let $A:=F_s=K\langle x_1, \ldots , x_s\rangle$, $s\geq 2$, be a free algebra, $\delta_1:=\frac{\partial }{\partial x_1},\ldots , \delta_s:=\frac{\partial }{\partial x_s}$. Let $I$ be an ideal of the algebra $F_s^\delta$ generated by all the commutators $[x_i, [x_j,x_k]]$. Then the factor algebra $P:=F_s^\delta / I$ is a polynomial algebra in ${s\choose 2}$ variables $y_{ij}:= [x_i,x_j]+I$ and $\overline{A} := A/(I)= P[\overline{x}_1 ; {\rm ad } \, \overline{x}_1]\cdots [\overline{x}_s ; {\rm ad } \, \overline{x}_s] $ (see Corollary \ref{f26Dec05}). Note that all $( {\rm ad } \, \overline{x}_i)|_{P}=0$ and $[\overline{x}_i, \overline{x}_j ]=y_{ij}$. Hence, every maximal ideal $\mathfrak{m} $ of the algebra $P$ satisfies the conditions of Theorem \ref{26Dec05}, and one can easily see that the algebra ${\cal A} $ is isomorphic to the Weyl algebra $A_s$ over the field $L:= P/\mathfrak{m} $. Note that the factor algebra $\overline{A} / (\mathfrak{m} )$ is isomorphic to the tensor product $A_n\otimes_LP_m$ of the Weyl algebra $A_n$ and a polynomial algebra $P_m$ in $m$ variables such that $s=2n+m$ and $2n$ is the rank of the $s\times s$ skew-symmetric matrix $(y_{ij}+\mathfrak{m} )$ over $L$. {\it Example}. The same results are true for a {\em free metabelian algebra}.
Let $J$ be an ideal of the free algebra $F_s$, $s\geq 2$, generated by all the double commutators $[a, [b,c]]$ where $a,b,c\in F_s$. The ideal $J$ is $\delta $-invariant. By definition, the {\em free metabelian algebra} is $R:= F_s/J$; it is isomorphic to the factor algebra of $\overline{A}$ (from the previous example) by the ideal $(J')$ generated by an ideal $J'$ of the polynomial algebra $P$, i.e. $R\simeq (P/J') [\overline{x}_1 ; {\rm ad } \, \overline{x}_1]\cdots [\overline{x}_s ; {\rm ad } \, \overline{x}_s]$. Now, it is obvious (it is a particular case of the previous example) that, for any maximal ideal $\mathfrak{m} $ of the algebra $P/J'$, the algebra ${\cal A}$ is isomorphic to the Weyl algebra $A_s$ over the field $L:= (P/J')/\mathfrak{m} $, and the factor algebra $R/(\mathfrak{m} )$ is isomorphic to the tensor product $A_n\otimes_LP_m$, $s=2n+m$, as in the example above. \section{Linear maps as differential operators} Let $A:= A_n\otimes P_m=\oplus_{\alpha \in \mathbb{N}^s} Kx^\alpha$, $s:=2n+m$, be the $n$'th Weyl algebra with polynomial coefficients $P_m$. The set of formal `partial derivatives' $\partial _1:= \frac{\partial }{\partial x_1}, \ldots , \partial _s:= \frac{\partial }{\partial x_s}$ is a set of {\em commuting locally nilpotent $K$-derivations} of the algebra $A$. Consider the algebra $\widehat{A} := A[[\partial _1, \ldots , \partial _s]]=\oplus_{\alpha \in \mathbb{N}^s}A\partial ^\alpha$, $\partial ^\alpha :=\partial _1^{\alpha_1}\cdots \partial _s^{\alpha_s}$, of formal (noncommutative) series $\sum a_\alpha \partial ^\alpha$, $a_\alpha \in A$, with multiplication given by the rule $\partial _i a =a\partial _i + \partial _i(a)$, $a\in A$, $1\leq i\leq s$. The multiplication of series is well defined since all the derivations commute and are locally nilpotent.
Since $\partial _i \in {\rm Der }_K(A)\subseteq {\rm End}_K(A)$, the algebra $\widehat{A}$ is, in fact, a subalgebra of the algebra ${\rm End}_K(A)$ of all $K$-linear endomorphisms of the vector space $A$. The next theorem shows that they coincide. \begin{theorem}\label{16Dec05} $\widehat{A} ={\rm End}_K(A)$. \end{theorem} {\it Proof}. The algebra $A$ has a natural finite dimensional filtration $\{ {\cal A}_i := \sum_{|\alpha |\leq i}Kx^\alpha \}_{i\geq 0}$ $({\cal A}_i{\cal A}_j\subseteq {\cal A}_{i+j}$ for all $i,j\geq 0$), ${\rm dim } ({\cal A}_i)={s+i\choose s}$, and $\partial ^\alpha ({\cal A}_i)\subseteq {\cal A}_{i-|\alpha |}$ for all $ \alpha \in \mathbb{N}^s$ and $i\geq 0$ (we set ${\cal A}_i:=0$ for negative $i$). We have mentioned in passing that the algebra $\widehat{A}$ is a subalgebra of ${\rm End}_K(A)$; let us prove this statement, that is, {\em each nonzero series} $a=\sum a_\alpha \partial ^\alpha\in \widehat{A}$ {\em determines a nonzero linear map}: let $i:= \min \{ | \alpha | \, | \, a_\alpha \neq 0 \}$, fix $a_\alpha \neq 0$ with $|\alpha | = i$; then $a(x^\alpha )=a_\alpha \alpha !\neq 0$, as required. It remains to show that any linear map $f\in {\rm End}_K(A)$ can be represented by a series $a=\sum a_\alpha \partial ^\alpha\in \widehat{A}$. That is, $ f(x^\beta )= a(x^\beta )$ for all $\beta \in \mathbb{N}^s$. The unknown coefficients $a_\alpha \in A$ can be found from this system step by step. Clearly, $f(1)=a_0$. Suppose that $i>0$ and all the coefficients $a_\alpha $ with $|\alpha | <i$ have been found. Then, for each $\alpha$ such that $|\alpha | =i$, the element $a_\alpha$ can be found (uniquely) from the equation $f(x^\alpha )=\alpha ! a_\alpha +\sum_{|\beta | <i}a_\beta \, \partial ^\beta (x^\alpha )$. $\Box $ Now we are ready to give a short direct proof of the fact that $A_n={\cal D} (P_n)$. \begin{corollary}\label{An=DPn} Let $K$ be a field of characteristic zero.
The Weyl algebra $A_n$ is the ring of differential operators ${\cal D} (P_n)$ with polynomial coefficients. \end{corollary} {\it Proof}. Applying Theorem \ref{16Dec05} to the polynomial algebra $A=P_n=K[x_1, \ldots , x_n]$, we have ${\rm End}_K(P_n)=\widehat{P}_n$. Let $\mathbb{N}^n=\oplus_{i=1}^n\mathbb{N}e_i$ where $e_1=(1, 0, \ldots, 0), \ldots , e_n=(0, \ldots, 0, 1)$. It follows from $[x_i, \partial ^{\alpha}]=-\alpha_i\partial ^{\alpha - e_i}$ for all $\alpha$ and $i$ (and from the definition of the ring of differential operators) that the $j$'th term of the {\em order} filtration of the ring of differential operators ${\cal D} (P_n)$ on $P_n$ is equal to $\oplus_{|\alpha |\leq j}P_n\partial ^\alpha$. Hence ${\cal D} (P_n)= \oplus_{\alpha \in \mathbb{N}^n}P_n\partial ^\alpha =A_n$. $\Box $ By definition, the $\mathfrak{m}$-{\em adic topology} on the algebra $\widehat{A}$ is given by the descending chain of {\em left ideals} of the algebra $\widehat{A}$ (neighbourhoods of zero) $$ \mathfrak{m}^{[0]}:=\widehat{A}\supset \cdots \supset \mathfrak{m}^{[i]}:= \sum_{|\alpha | \geq i} \widehat{A}\partial ^\alpha \supset \cdots \supset\cap_{i\geq 0}\mathfrak{m}^{[i]}=0.$$ The algebra $\widehat{A}$ is a {\em complete} (w.r.t. the $\mathfrak{m}$-topology) topological algebra.
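The recursive construction of the coefficients $a_\alpha$ in the proof of Theorem \ref{16Dec05} can be carried out mechanically. The sketch below is our own illustration, not part of the paper: it works in the one-variable case $A=K[x]$ with $K=\mathbb{Q}$, encodes polynomials as power-to-coefficient dictionaries, and recovers the coefficients $a_j$ of a series $\sum_j a_j\partial^j$ from the values of a linear map on the monomials $1,x,\ldots ,x^d$.

```python
from fractions import Fraction
from math import factorial

def p_add(p, q):
    """Sum of two polynomials given as {power: coefficient} dicts."""
    r = dict(p)
    for k, v in q.items():
        r[k] = r.get(k, Fraction(0)) + v
    return {k: v for k, v in r.items() if v != 0}

def p_scale(p, c):
    """Multiply a polynomial by the scalar c."""
    return {k: c * v for k, v in p.items() if c * v != 0}

def p_shift(p, s):
    """Multiply a polynomial by x^s."""
    return {k + s: v for k, v in p.items()}

def series_coefficients(f_values, d):
    """Recover a_0, ..., a_d with f = sum_j a_j d^j/dx^j on monomials of
    degree <= d, following the step-by-step solution in the proof:
    f(x^i) = i! a_i + sum_{j<i} a_j * (i!/(i-j)!) x^{i-j}.
    f_values[i] is the polynomial f(x^i) as a {power: coefficient} dict."""
    a = []
    for i in range(d + 1):
        rhs = dict(f_values[i])
        for j in range(i):
            c = Fraction(factorial(i), factorial(i - j))
            rhs = p_add(rhs, p_scale(p_shift(a[j], i - j), -c))
        a.append(p_scale(rhs, Fraction(1, factorial(i))))
    return a
```

For instance, the map $p\mapsto x\,p'$ takes $x^i$ to $i\,x^i$; feeding these values in recovers the single nonzero coefficient $a_1=x$, i.e. the operator $x\partial$.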
The `partial derivatives' over $A$, $D_i\in {\rm Der }_{A, c}(\widehat{A} )$, $i=1,\ldots , s$, are {\em continuous $A$-derivations} of the algebra $\widehat{A}$ such that $$ D_i(\partial _j)=\delta_{ij},\;\; 1\leq i,j \leq s.$$ \begin{lemma}\label{i15Nov05} For each $i=1, \ldots , s$, the map $\Psi_i(\cdot ):= \sum_{k\geq 0}(-1)^k D_i^k(\cdot ) \frac{\partial _i^k}{k!}:\widehat{A} \rightarrow \widehat{A}$, is a homomorphism of left $\widehat{A}^{D_i}$-modules where $\widehat{A}^{D_i} := {\rm ker } _{\widehat{A}}(D_i)=A[[\partial _1, \ldots , \widehat{\partial _i}, \ldots , \partial _s]]$, $ \widehat{A} = \widehat{A}^{D_i} [[ \partial _i]]= \widehat{A}^{D_i} \oplus \widehat{A} \partial _i$, and the following statements hold: \begin{enumerate} \item the map $\Psi_i$ is a projection onto the subalgebra $\widehat{A}^{D_i}$ of $\widehat{A}$: $$ \Psi_i : \widehat{A} =\widehat{A}^{D_i} \oplus \widehat{A} \partial _i \rightarrow \widehat{A} =\widehat{A}^{D_i} \oplus \widehat{A} \partial _i, \;\; a+b\partial _i \mapsto a, \;\; {\rm where}\;\; a\in \widehat{A}^{D_i}, \; b\in \widehat{A} .$$ In particular, ${\rm im} (\Psi_i )=\widehat{A}^{D_i} $ and $\Psi_i (y)=y$ for all $y\in \widehat{A}^{D_i} $. \item $\Psi_i (\partial _i^k)=0$, $k\geq 1$. \end{enumerate} \end{lemma} {\it Proof}. The map $\Psi_i$ is obviously well-defined since the algebra $\widehat{A}$ is complete and $\partial _i^k\in \mathfrak{m}^{[k]}$, $k\geq 0$. $\Psi_i (\partial _i) = \partial _i-\partial _i=0$, and $\Psi_i (y)=y$ for all $y\in \widehat{A}^{D_i}$. For any $a\in \widehat{A}$, \begin{eqnarray*} D_i\Psi_i (a)&=& D_i(a- D_i(a)\frac{\partial _i}{1!}+D_i^2(a)\frac{\partial _i^2}{2!}-D_i^3(a)\frac{\partial _i^3}{3!}+\cdots ) \\ &=& D_i(a)-D_i(a)-D_i^2(a)\frac{\partial _i}{1!}+D_i^2(a)\frac{\partial _i}{1!}+ D_i^3(a)\frac{\partial _i^2}{2!}-D_i^3(a)\frac{\partial _i^2}{2!}- \cdots \\ &=&0. \end{eqnarray*} Therefore, ${\rm im}(\Psi_i)=\widehat{A}^{D_i}$.
For $m\geq 1$, $$ \Psi_i(\partial _i^m)=\sum_{k\geq 0} (-1)^k D_i^k (\partial _i^m)\frac{\partial _i^k}{k!}= (\sum_{k\geq 0}(-1)^k\frac{m(m-1)\cdots (m-k+1)}{k!})\partial _i^m=(1-1)^m\partial _i^m=0.$$ Since $\widehat{A} =\widehat{A}^{D_i} [[\partial _i]]$, the map $\Psi_i$ is an $\widehat{A}^{D_i}$-endomorphism of the left $\widehat{A}^{D_i}$-module $\widehat{A}$; since, in addition, $\Psi_i(\partial _i^k)=0$ for $k\geq 1$, the map $\Psi_i$ is a projection onto the subalgebra $\widehat{A}^{D_i}$ of $\widehat{A}$. $\Box$ The map \begin{equation}\label{big1phis} \Psi := \Psi_1\Psi_2\cdots \Psi_s : \widehat{A} \rightarrow \widehat{A}, \;\; a=\sum_{\alpha \in \mathbb{N}^s} a_\alpha \partial ^\alpha\mapsto \Psi (a)=a_0 \end{equation} is a {\em projection} onto the subalgebra $A$ of $\widehat{A}$ ($\widehat{A} = A\oplus (\oplus_{0\neq \alpha \in \mathbb{N}^s}A\partial ^\alpha )$). \begin{theorem}\label{15Dec05} For any $a\in \widehat{A} = {\rm End}_K(A)$, $$ a=\sum_{\alpha \in \mathbb{N}^s}\Psi (\frac{D^\alpha}{\alpha !} a)\partial ^\alpha .$$ \end{theorem} {\it Proof}. If $a=\sum a_\alpha \partial ^\alpha \in \widehat{A} $, $a_\alpha \in A$, then, by (\ref{big1phis}), $\Psi (\frac{D^\alpha}{\alpha !} a)=a_\alpha$. $\Box $ So, the identity map ${\rm id}_{\widehat{A} }:\widehat{A} \rightarrow \widehat{A}$ has a nice presentation \begin{equation}\label{big2phis} {\rm id}_{\widehat{A} } (\cdot ) = \sum_{\alpha \in \mathbb{N}^s}\Psi (\frac{D^\alpha}{\alpha !} (\cdot ))\partial ^\alpha . \end{equation} \begin{theorem}\label{s15Dec05} For any $\sigma \in {\rm Aut}_K(P_m)$, $$\sigma=\sum_{\alpha \in \mathbb{N}^m} \frac{(\sigma (x)-x)^\alpha}{\alpha !}\partial ^\alpha = {\rm id}_{P_m} +\sum_{i=1}^m (\sigma (x_i)-x_i)\partial _i +\cdots $$ where $\frac{(\sigma (x)-x)^\alpha}{\alpha !}:= \prod_{i=1}^m \frac{(\sigma (x_i)-x_i)^{\alpha_i}}{\alpha_i !}$. \end{theorem} {\it Proof}. Let $\sigma'$ denote the sum.
Then for any $a, b \in P_m$: \begin{eqnarray*} \sigma'(ab)&=&\sum_{\alpha \in \mathbb{N}^m} \frac{(\sigma (x)-x)^\alpha}{\alpha !}\partial ^\alpha (ab)=\sum_{\alpha \in \mathbb{N}^m} \frac{(\sigma (x)-x)^\alpha}{\alpha !}\sum_{\beta +\gamma =\alpha} {\alpha \choose \beta} \partial ^\beta (a) \partial ^\gamma (b)\\ &=& (\sum_{\beta \in \mathbb{N}^m} \frac{(\sigma (x)-x)^\beta}{\beta !}\partial ^\beta (a)) \, (\sum_{\gamma \in \mathbb{N}^m} \frac{(\sigma (x)-x)^\gamma}{\gamma !}\partial ^\gamma (b))=\sigma' (a) \sigma'(b), \end{eqnarray*} and so $\sigma'\in {\rm Aut}_K(P_m)$. For each $i=1, \ldots , m$, $\sigma' (x_i)=x_i+\sigma (x_i)-x_i= \sigma (x_i)$, hence $\sigma'=\sigma$. $\Box$ {\it Example}. Let $\sigma \in {\rm Aut}_K(P_n)$, $P_n:= K[x_1, \ldots , x_n]$, $\sigma (x_i)=x_i+\lambda_i$ where $\lambda := (\lambda_1, \ldots , \lambda_n)\in K^n$. By Theorem \ref{s15Dec05}, $$\sigma =\sum_{\alpha \in \mathbb{N}^n} \frac{\lambda^\alpha \partial ^\alpha}{\alpha !}=\prod_{i=1}^n (\sum_{k\geq 0} \frac{(\lambda_i \partial _i)^k}{k !})=\prod_{i=1}^n e^{\lambda_i\partial _i}=e^{\sum_{i=1}^n\lambda_i\partial _i}.$$ {\it Example}. Let $\sigma_\lambda \in {\rm Aut}_K(P_n)$, $P_n:= K[x_1, \ldots , x_n]$, $\sigma_\lambda (x_i)=\lambda_i x_i$, $\lambda := (\lambda_1, \ldots , \lambda_n)\in K^{*n}$. By Theorem \ref{s15Dec05}, $\sigma_\lambda =\sum_{\alpha \in \mathbb{N}^n} (\lambda -1)^\alpha \frac{x^\alpha \partial ^\alpha}{\alpha !}$ where $(\lambda -1)^\alpha:= \prod_{i=1}^n(\lambda_i -1)^{\alpha_i}$. Clearly, $\sigma_\lambda \sigma_\mu =\sigma_{\lambda \mu }$ for all $\lambda , \mu \in K^{*n}$.
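The first example above, $\sigma=e^{\lambda\partial}$ for a translation, can be checked directly on polynomials. The following one-variable sketch is our own illustration (the coefficient-list encoding and the test polynomial are assumptions, not from the text): it applies the finite sum $\sum_k \frac{\lambda^k}{k!}\frac{d^k}{dx^k}$ to $p$ and compares the result with $p(x+\lambda)$ computed by binomial expansion.

```python
from fractions import Fraction
from math import comb, factorial

def deriv(p):
    """d/dx of a polynomial given as a coefficient list (p[i] is the x^i coefficient)."""
    return [i * c for i, c in enumerate(p)][1:] or [0]

def exp_lambda_d(p, lam):
    """Apply e^{lam * d/dx} = sum_k (lam^k / k!) (d/dx)^k to p.
    The sum is finite because p is a polynomial (local nilpotency of d/dx)."""
    out = [Fraction(0)] * len(p)
    q = [Fraction(c) for c in p]
    k = 0
    while any(q):
        for i, c in enumerate(q):
            out[i] += Fraction(lam) ** k / factorial(k) * c
        q = deriv(q)
        k += 1
    return out

def shift_poly(p, lam):
    """Coefficients of p(x + lam), expanded with the binomial theorem."""
    out = [Fraction(0)] * len(p)
    for n, c in enumerate(p):
        for k in range(n + 1):
            out[k] += Fraction(c) * comb(n, k) * Fraction(lam) ** (n - k)
    return out
```

For $p=1-2x+4x^3$ and $\lambda=3$ both routes give $103+106x+36x^2+4x^3$.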
Q: Can I set a FullText Stoplist to be the default stoplist for all FullText Indexes in a Database in MS SQL Server? I have a database with a number of tables, many of them with Fulltext Indexes. I want to create a new Fulltext Stoplist to apply to all of these tables in the database. Is there a way to set a Stoplist to be the default for all Fulltext Indexes in a specific database? By setting it as default, I would assume this also assigns any new Fulltext Indexes to that default Stoplist. If this is not the case, I would also like to know how to set this up. I can always update each Fulltext Index one at a time; however, this is time-consuming and leaves room for error and the potential to miss one. I have tried looking at the MS T-SQL help pages, e.g. CREATE FULLTEXT STOPLIST/ALTER FULLTEXT STOPLIST and CREATE/ALTER FULLTEXT INDEX, and at any questions previously asked, however I can't find any reference to whether this is actually possible or not. Any help is greatly appreciated. Also, this is my first question posted, so please let me know if I've missed something.
\section{Introduction \& Motivation} Financial stability is a core component of the economic prosperity of countries and individuals. The recent financial crises had a significant impact on the lives of many individuals across the globe through significant income reduction, rising unemployment and economic slowdown~\cite{otker2013}. The experience with the risk management methods employed so far shows that they fail to provide early warnings to central governments and central banks so that they can proactively intervene and prevent such adverse financial events. Banks, regulatory authorities, and international organizations (like the IMF) performed stress testing exercises long before the financial crisis of 2007. Nevertheless, the stress testing exercises conducted before Lehman's default failed to predict the unprecedented economic turmoil, because they disregarded the propagation channels of a default event through the whole micro-macro dynamics of the globally interconnected financial system. Additionally, the non-linear relationships that materialized between the macro economy and financial balance sheets were not adequately captured, due to the broad use of linear regression models. Weaknesses in the validation function of stress testing frameworks further decreased confidence in the quantification of the impact of an adverse scenario on the banking system. Since then, market participants and regulators have performed more rigorous stress testing exercises by expanding the scenarios to be assessed, using more granular data, and in some cases attempting to quantify second-round effects stemming from a liquidity shock or from the default of a counterparty. Another aspect of the current, i.e. post-crisis, regime of supervisory regulation is the collection of a significant amount of granular information as part of a more proactive supervision.
Although the creation of big data sets is the new era in banking supervision, regulators have not yet explored statistical techniques from other fields, such as machine learning, to extract more information about the risks in their banking systems. Segmentation, classification and data mining functionalities are important tools for regulators to spot weaknesses in the supervised financial entities, and they can be further enhanced by machine learning techniques. Machine learning algorithms have dramatically improved the capabilities of pattern recognition (such as speech and image recognition) and forecasting, offering state-of-the-art performance in various scientific fields such as biology and engineering. Their structure offers the ability to adjust to streaming sequences using continuous learning algorithms and to recognize new and evolving patterns in time series data. In addition, deep learning has proven to deal effectively with high-dimensional data. Recent studies suggest that, due to their complex and non-parametric structure, machine learning techniques can lead to better predictive performance in financial time series modelling problems~\cite{chatzis2018a, fischer2018a, kraus2017a}. This can be attributed to their capacity to learn and adapt to new data, and thus improve their performance over time, to their increased ability to capture non-linear relationships, and to their capacity to filter out the noise that often exists in financial data. Furthermore, this new generation of statistical algorithms offers the necessary flexibility for modelling multivariate time series, as its structure comprises a cascade of many layers with non-linear processing units. Deep learning networks base their functionality on the interaction of layers that simulate the abstraction and composition functionalities of the human brain.
Therefore, by capturing the full spectrum of information contained in financial datasets, they are capable of exploring in depth the inherent complexity of the underlying dynamics in big, high-dimensional time series data. Motivated by recent trends in the machine learning literature, this empirical study introduces a new statistical technique for stress testing that uses deep learning algorithms to model banks' financial data in a holistic way. In particular, shocks are propagated to banks' balance sheets by simultaneously training deep neural networks with macro and financial variables, thus taking advantage of their capability to capture information hidden in big datasets. We develop inference algorithms for our networks, suitable for learning financial time series data in a multivariate forecasting setup. The main contribution of this study is that it proposes a holistic framework for balance sheet stress testing, which overcomes the limitations of current approaches and yields more robust and realistic results by loosening the static balance sheet assumption. Our research analysis lies at the intersection of computational finance and statistical machine learning, leveraging the unique properties and capabilities of deep learning networks to increase prediction efficacy and minimize modelling error. Under the proposed approach, the forecasting of balance sheet items can be heavily supported by artificial intelligence algorithms that better simulate the propagation channels of the macro economy into financial institutions' business models. Our vision is to provide a stress testing framework that can serve as an early warning system for financial shocks on individual banks' balance sheets. This study is organized as follows. In Section 2, we review the related literature on financial institution stress testing. Section 3 describes the data collection and processing.
In Section 4 we provide details regarding the estimation process of the various stress testing frameworks examined in this study. In Section 5 we compare the methodologies and provide experimental results using a test dataset of financial balance sheet sequences. This way, we assess not only the applicability of the proposed approach but also its generalization capacity. Finally, in the concluding Section 6 we summarize the performance superiority of the proposed methodology, identify potential weaknesses and limitations, and discuss areas for future research. \begin{figure}[!t] \centering \includegraphics[clip, trim=2.8cm 2cm 2.8cm 2cm,scale= 0.5]{current_ff_stress_small.pdf} \caption{Feedforward architecture of currently established stress testing frameworks.} \label{fig:current_ff} \end{figure} A typical stress testing engine is composed of four elements: the perimeter of risks subjected to stress, the scenario design, the calculation engine that transforms the shocks into an outcome on a bank's balance sheet, and a measure of the outcome~\cite{borio2012}. The stress testing frameworks currently employed by regulatory authorities broadly follow this structure. The best-known stress testing exercises currently publicly available are: EBA~\cite{eba2018}, CCAR (FED)~\cite{fed}, PRA - Bank of England~\cite{Angleterre2013}, ECB (top down)~\cite{stampe2017,henry2013}, Bank of Canada~\cite{anand2014a}, Central Bank of Austria (ARNIE)~\cite{feldkircher2013a}, IMF~\cite{hasan2011a}, and Bank of Greece (Diagnostic Exercise)~\cite{greek}. The structure of all these exercises follows a left-to-right flow to estimate the impact of an adverse shock to the economy. A basic component of all these exercises is the time horizon over which future losses for the banks are estimated, which ranges between 2 and 5 years. During this period the macroeconomic scenario is given.
This set of macro scenarios is passed on to the financial institutions, which project their P\&L and risk-weighted assets (RWA) and eventually estimate capital against regulatory hurdle rates. Some of these exercises include in their structure a mechanism for second-round effects in the banking system, in order to account for contagion risk. Macroeconomic feedback effects, for example the impact on the macro economy of a significant institution becoming insolvent, are usually not considered in these frameworks. Stress tests under this structure can mainly serve as a tool to challenge banks' recovery plans and to assess their viability, but their role as an early warning system is questionable. As Drehmann~\cite{drehmann2011a} aptly points out, systemic banking crises are reflected in the behaviour of credit and property prices and usually appear at the high point of the medium-term financial cycle. Therefore, a crisis starts before it is depicted in the macro scenarios. According to Borio~\cite{borio2012}, a system is fragile not when a large financial shock materializes, but when even a small negative change in financial and macro variables is amplified through the various dynamic relationships of the system and can lead to a systemic shock. For example, after the default of Lehman the financial markets crashed and US GDP exhibited a sharp decrease, causing a structural break in the macro time series. Current versions of stress tests impose a macro scenario over time in a static way, without modelling or tracking, in a path-dependent manner, the multistep decision process and financial behaviour that in reality involve all economic participants~\cite{bookstaber2014b}. Furthermore, non-linearity is not modelled adequately by the statistical techniques currently employed. Under the current globalized market, risks tend to be amplified when a stress event occurs.
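The left-to-right flow described above (scenario $\rightarrow$ projected losses and RWA $\rightarrow$ capital measured against a hurdle rate) can be caricatured in a few lines. The sketch below is our own illustration, not any regulator's actual methodology; the 8\% hurdle and all figures are assumptions made for the example only.

```python
def stressed_car(capital, rwa, credit_losses, rwa_inflation, hurdle=0.08):
    """Project a capital adequacy ratio (CAR) after an adverse scenario.

    capital        : starting own funds
    rwa            : starting risk-weighted assets
    credit_losses  : scenario losses deducted from capital
    rwa_inflation  : relative RWA increase under stress (e.g. 0.15 = +15%)
    hurdle         : regulatory hurdle rate (8% used purely for illustration)
    Returns the stressed CAR and whether it clears the hurdle.
    """
    stressed_capital = capital - credit_losses
    stressed_rwa = rwa * (1.0 + rwa_inflation)
    car = stressed_capital / stressed_rwa
    return car, car >= hurdle
```

For example, a bank starting with capital 12 and RWA 100 that suffers losses of 3 while its RWA inflate by 15\% ends at a CAR of $9/115\approx 7.8\%$, below the illustrative 8\% hurdle.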
Non-linear relationships kick in through amplification channels, leading to a chain of events that is unpredictable from the static nature of a stress test. Under stressed conditions the relationships between the modelled variables are non-linear~\cite{drehmann2007a,juselius2011a} and exhibit structural breaks~\cite{alfaro2009a}. Stress testing frameworks are composed of standalone models, usually combined in a qualitative manner. A small single-step prediction error at the beginning can accumulate and propagate when the models are combined without taking the correlation of the financial variables into account, often resulting in poor prediction accuracy. Furthermore, standalone models can lead to double counting effects or to overestimating the impact stemming from the changes in the predefined macro variables. Finally, univariate setups are not able to adequately model variables with complex distributions and non-linear behaviour. In addition, regular stress testing frameworks rely on simplifying assumptions that may affect the reliability of the final estimates. The EBA EU-wide stress test is a bottom-up exercise covering only specific risks on banks' individual balance sheets, based on a macro scenario that usually rests on simplified assumptions. One of the weaknesses of the EBA methodology is the static balance sheet assumption, i.e. assets and liabilities remain constant over the horizon, without acknowledging management actions and the generation of new loans. In addition, mitigation actions are taken into account only after the stress tests are finalized, through a strong qualitative overlay, and not in a dynamic way~\cite{eba2018}. On the other hand, system-wide stress testing exercises at the micro-prudential level rely heavily on the interaction with individual banks with respect to data analytics and the propagation of the macro scenarios to their balance sheets.
Thus, estimations are not performed through a uniform statistical process but inherit the model deficiencies and forecast errors embedded in banks' individual models. The heterogeneity of the results increases the estimation errors significantly, and there is no robust process for regulators to account for it. Thus the need for independent central modelling for simulating the financial system is of great concern~\cite{hirtle2015}. Furthermore, the stress testing process involves the disclosure of the methodological framework to all market participants, and there is evidence of second-round effects on the accounting treatment applied by banks. Specifically, according to one study~\cite{gounopoulos2016a}, banks participating in regulatory exercises tend to manipulate their provisions for credit risk in order to absorb the impact of the upcoming stress test. Finally, stress testing outcomes in the current regulatory exercises rely heavily on regulatory ratios, such as the capital adequacy ratio, which in turn is highly dependent on the estimation of RWA. Evidence in the literature~\cite{bis} indicates that relying on the risk weights applied internally by financial institutions under the Basel framework can lead to an underestimation of capital needs. This is driven by the significant variability stemming from banks' internal models when internal model methods are applied. Furthermore, the regulatory framework currently employed for assessing RWA cannot capture the risk hidden in banks' complex portfolio structures. In the current literature~\cite{ferri-a} there is evidence that, especially for more sophisticated (A-IRB) banks, regulatory arbitrage may be performed and true risks manipulated to lower capital requirements. Thus, robust macro modelling of RWA using an independent top-down model is important to account for these cases.
Although significant progress in designing stress tests has been achieved in recent years, even today there are concerns that this type of exercise cannot be used as an early warning system for financial distress~\cite{otker2013}. By analysing the publications on stress testing exercises performed either by regulators or by individual banks, we have outlined a series of weaknesses and inefficiencies, in order to provide a clear and concise view of how the approach proposed in this study, DeepStress, attempts to address part of them. The deep neural network architecture is one of the main innovations of our proposed approach to dynamic balance sheet stress testing. Our approach puts all the components together in a multivariate structure. We identify the main channels of risk propagation in a recurrent form, to account for the existing evidence of feedback effects on a financial institution's balance sheet. Current architectures are constrained by the use of classical econometric techniques, which offer limited capabilities for simulating complex systems. By accounting for temporal patterns in banks' balance sheets, our approach provides a dynamic modelling framework. This is achieved through the multivariate training of deep neural networks, taking into account the dynamic nature of banks' metrics and the whole structure of a bank's balance sheet. The proposed approach comprises multivariate input and output layers able to capture the cross-correlations between balance sheet items and the macro economy. Training is performed as one large complex network, minimizing estimation errors and double counting effects among the various financial variables. To account for the non-linear relationships that materialize under adverse macroeconomic conditions, machine learning techniques such as deep learning can provide more efficient estimations.
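To illustrate, in the simplest possible terms, what training one large multivariate network means, the sketch below fits a single shared hidden layer that jointly predicts three synthetic targets from eight synthetic inputs. It is a deliberately tiny numpy caricature of the idea, not the DeepStress architecture itself; the data, layer sizes and learning rate are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: 8 macro/financial inputs, 3 balance-sheet targets.
X = rng.normal(size=(256, 8))
true_W = rng.normal(size=(8, 3))
Y = np.tanh(X @ true_W) + 0.05 * rng.normal(size=(256, 3))

# One shared hidden layer feeding a multivariate output layer.
W1 = 0.1 * rng.normal(size=(8, 16)); b1 = np.zeros(16)
W2 = 0.1 * rng.normal(size=(16, 3)); b2 = np.zeros(3)

losses, lr = [], 0.05
for _ in range(300):
    H = np.tanh(X @ W1 + b1)           # hidden activations, shape (256, 16)
    P = H @ W2 + b2                    # joint prediction of all 3 targets
    err = P - Y
    losses.append(float((err ** 2).mean()))
    # Backpropagate the mean-squared error through the shared layer,
    # so every target contributes to the same hidden representation.
    gW2 = H.T @ err / len(X); gb2 = err.mean(axis=0)
    dH = (err @ W2.T) * (1.0 - H ** 2)
    gW1 = X.T @ dH / len(X); gb1 = dH.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1
```

Because all targets share the hidden layer, their cross-correlations are learned jointly instead of by standalone per-variable models.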
According to the academic literature, deep neural networks are capable of simulating real-life phenomena whose relationships are complex. Therefore, our proposed framework, using multilayer deep networks, aims at capturing the dynamics inherent in financial distress. In addition, the architecture aims to capture the amplification channels leading to structural breaks. The methodology applied relies only on publicly available data, and the models are developed in a uniform way, thus making it feasible to perform the validation and error correction process centrally. In addition, it offers the opportunity to experiment with advanced statistical machine learning techniques, a need also recognized in the academic literature~\cite{oura2012a}. To sum up, our modelling approach balances capturing the determinants that strongly affect the health of a financial institution with developing a dynamic balance sheet simulation engine that establishes an early warning system for predicting bank failures under an adverse scenario. The modelling framework that we implement captures the temporal dependencies between a bank's financial indicators and the macro economy. At the same time, it explores up to 3 years of lagged observations, which are assumed to carry all the necessary information to describe and predict the financial soundness of a bank, and combines their evolution with the relevant macroeconomic indicators. \section{Data Collection and Processing} The dataset supporting this study refers to the United States banking system. Specifically, we collected information on non-failed, failed and assisted entities from the database of the Federal Deposit Insurance Corporation (FDIC), an independent agency created by the US Congress in order to maintain stability and public confidence in the financial system.
The collected information is related to all US banks, while the adopted definition of a default event in this dataset includes all bank failures and assistance transactions of all FDIC-insured institutions. Under the proposed framework, each entity is categorized either as solvent or as insolvent, based on the indicators provided by the FDIC. Observations referring to failed banks are excluded from the analysis, since stress testing is performed on healthy financial entities. The dataset covers the 2007-2015 period; a 9-year period with quarterly information, resulting in a dataset with more than 175,000 records. The selected time period seems to approximate a full economic cycle in terms of the evolution of the default rate. Fig. \ref{fig:usa_historical} shows the number of records included in each observation quarter and the corresponding default rate. From a supervisory perspective, most of the financial institutions in the sample apply the standardized approach for measuring credit risk weighted assets, based on the United States adaptation of the Basel regulatory framework~\cite{castro2017a}. \begin{figure}[!t] \centering \includegraphics[width=0.7\linewidth]{usa_historical} \caption{USA financial institutions in the sample. Historical overview for the period 2008-2014 of the failed entities (source: FDIC)} \label{fig:usa_historical} \end{figure} The dataset was split into three parts (Fig. \ref{fig:usa_historical}). An in-sample dataset (full in-sample) comprises the data pertaining to 80\% of the examined companies over the observation period 2008-2013, amounting to 101,641 observations. For performing hyper-parameter tuning of the deep neural networks we define an out-of-sample dataset (validation sample), including the remaining 20\% of the observations for the period 2008-2013, amounting to 25,252 observations.
This is useful for deep learning models, in which the training sample is used to train various candidate models with different architectures and specifications, while the validation set is used for selecting the best parameter setup and avoiding overfitting on the training dataset. This way, the generalization capability of the finally selected model on other datasets increases substantially. Finally, performance evaluation is performed on an out-of-time dataset (test sample) that spans the 2014-2015 observation period, reaching 48,756 observations. In all cases, the dependent (target) variable is the Capital Adequacy Ratio (CAR) of each bank at the end of the one-year forecast horizon. To summarize, we performed model fitting using exclusively the available training sample described above. To perform model selection, we employed five-fold cross-validation, using predictive accuracy as our model selection criterion (CAR prediction error). Performance evaluation results are assessed on the available test sample, to allow for evaluating the generalization capacity of the developed models. In developing our model specifications, we examine an extended set of variables that fully describe the financial status of each bank in the sample. In addition to the above-mentioned variables, we have also included in the dataset quarterly observations of the most commonly used macroeconomic variables. Macro variables are the main input of the developed models, since they are essential for scenario analysis under a stress testing framework. The current model setup includes contemporaneous macro variables along with 3-year lags. The intuition behind this approach is to build models for scenario prediction, which is the main methodology in stress testing modelling.
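To make the construction concrete, the 12-quarter (3-year) lagging of each driver per bank and the 80/20 entity-level split described above can be sketched as follows; the column names, bank identifiers and random data are hypothetical placeholders, not the actual FDIC fields:

```python
import numpy as np
import pandas as pd

# Toy quarterly panel: 10 hypothetical banks over 2008Q1-2013Q4.
rng = np.random.default_rng(0)
quarters = pd.period_range("2008Q1", "2013Q4", freq="Q")
banks = [f"bank_{i}" for i in range(10)]
df = pd.DataFrame([(b, q) for b in banks for q in quarters],
                  columns=["bank", "quarter"])
df["GDP"] = rng.normal(size=len(df))   # macro driver (illustrative values)
df["DEP"] = rng.normal(size=len(df))   # deposits growth (illustrative values)

# Contemporaneous values plus lags 1..12 quarters (3 years), per bank.
# Rows without full lag history carry NaNs and can be dropped before training.
for col in ["GDP", "DEP"]:
    for lag in range(1, 13):
        df[f"{col}_lag{lag}"] = df.groupby("bank")[col].shift(lag)

# Entity-level 80/20 split: whole banks go to train or validation,
# so no bank's history leaks across the two samples.
train_banks = pd.Series(banks).sample(frac=0.8, random_state=0)
train = df[df["bank"].isin(train_banks)]
valid = df[~df["bank"].isin(train_banks)]
```

Splitting by entity rather than by row mirrors the paper's setup, where 80\% of the examined companies form the in-sample dataset and the rest the validation sample.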
The macro variables included in the development are: \begin{itemize} \item GDP: Gross Domestic Product growth \item EXPORT: US Total Exports growth \item GOVCREDIT: Government Credit to GDP \item DEBT: US public debt to GDP \item GOVEXP: US government expenditure to GDP \item INFLAT: US inflation \item RRE: House Price Index growth \item UNR: Unemployment Rate \item YIELD10Y: 10Y US sovereign bonds yields \item STOCKS: US Stock index – S\&P 500 returns \end{itemize} The relevant financial variables for simulating the profitability and the risk weighted assets of each financial institution are: \begin{itemize} \item NLOAN: Net loans exposure \item DEP: Total Deposits \item DDEP: Total domestic deposits \item ASSET: Average Total Assets \item EASSET: Average Total Earning Assets \item EQUITY: Average Total Equity \item LOAN: Average total loans \item CFD: Deposits Cost of funding \item YEA: Yield on earning assets \item NFIA: Noninterest income to average assets \item RW: Risk Weight Density \item LOSS\_LOAN: Loss allowance to loans \item RWA: Total risk weighted assets \item CAR$\%$: Total risk based capital ratio \end{itemize} Modelling of the evolution of the balance sheet is performed on the growth rates of 4 key financial items: Deposits, Total Earning Assets, Total Loans and Total Assets. In order to capture the idiosyncratic characteristics of each financial entity, 3-year lags of each financial variable are included in the training process. In the final model setup, the use of multiple years of financial and macroeconomic variables allows for capturing internal trends in key items of a bank's balance sheet, as well as the degree to which each entity is affected by the state of the US economy. \section{Model Development} The success of the stress testing exercises performed in the past by regulatory authorities was put under scrutiny by all market participants and the research community.
In order to investigate the capabilities of the proposed approach for stress testing against broadly used frameworks, we simulate two additional methods for balance sheet forecasting to benchmark its performance. Specifically, we developed a constant balance sheet approach, following the framework adopted by the EBA to perform EU-wide stress tests~\cite{eba2018}, and a dynamic balance sheet approach supported by a group of satellite models forecasting individual financial variables, as used by other regulatory authorities like the ECB for macro prudential stress testing. In this section we provide an overview of the overall setup of the study and technical details of the three individual approaches employed. \subsection{General Setup of the Study} The main component of a micro prudential solvency stress testing framework is the projection of a financial institution's capital adequacy ratio (CAR) or, more recently, the CET 1 ratio (Core Equity Tier I ratio). In this study, we develop a Deep Neural Network structure which receives as input the macro variables and balance sheet components mentioned in Section 2 and provides as output the balance sheet and profitability structure of the bank over a one-year horizon, as measured by 9 core variables, namely Net loans, Deposits, Assets, Earning Assets, Cost of funding, Yield on earning assets, Noninterest income to assets, Risk Weight Density and Cost of Risk (Loss allowance to loans). \begin{figure}[h!] \centering \captionsetup{justification=centering} {\small \includegraphics[scale= 0.45]{nn_new} } \caption{Stress Test Deep Neural Network architecture.} \label{fig:dnn} \end{figure} We focus on forecasting the CAR ratio, since the CET-1 ratio was introduced under Basel III and is not available throughout our dataset. Specifically, our aim is to project, one year ahead, the CAR ratio of each financial institution in the sample. The CAR ratio by definition is the ratio of a bank's capital over its risk weighted assets at each time point $t$.
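To fix ideas, a simplified capital roll-forward (asset earnings minus loan loss provisions, plus net fees, minus deposit funding costs, added to last year's capital) and the resulting CAR can be written in a few lines; the function and all input figures below are illustrative stand-ins, not the paper's calibrated quantities:

```python
def project_capital(capital_prev, earning_assets, yield_ea, loans,
                    cost_of_risk, assets, nfia, deposits, cfd):
    """One-year capital roll-forward: capital grows by asset earnings
    and net fees, and shrinks by loan loss provisions and the cost
    of funding from deposits."""
    pnl = (earning_assets * yield_ea     # earnings from assets
           - loans * cost_of_risk        # loan loss provisions
           + assets * nfia               # net fees and commissions
           - deposits * cfd)             # cost of funding from deposits
    return capital_prev + pnl

def car(capital, rwa):
    """Capital Adequacy Ratio: total capital over risk weighted assets."""
    return capital / rwa

# Hypothetical bank (figures in bn USD / as ratios, purely illustrative).
cap_next = project_capital(capital_prev=12.0, earning_assets=90.0,
                           yield_ea=0.04, loans=70.0, cost_of_risk=0.01,
                           assets=100.0, nfia=0.005, deposits=80.0, cfd=0.015)
print(round(car(cap_next, rwa=100.0), 4))  # 0.142, i.e. a 14.2% CAR
```

Under the three methodologies compared in the paper, only the treatment of the RWA denominator differs: projected growth (deep learning), a dedicated RW density model (satellite modelling), or a constant value (constant balance sheet).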
In order to simulate the core mechanics of a stress testing framework, we simulate the evolution of the key financial variables of a financial institution's balance sheet. The main setup is that we project one year ahead the evolution of the capital and of the risk weighted assets in order to forecast the one-year-ahead CAR. The approach followed to adjust the capital at time $t$ is given by the formula: % % \begin{align*} \begin{split} \text{Capital}_t = &\text{Earnings from Assets}_t - \text{loan loss provisions}_t \\& + \text{Net fees and commissions}_t \\ &- \text{cost of funding from deposits}_t + \text{Capital}_{t-1} \end{split} \end{align*} % % In order to adjust the capital of each entity, we model 8 key financial variables. The first four refer to the dynamic evolution of the balance sheet, i.e.\ the growth of the asset and liability sides: the growth rates of Deposits, Total Loans, Total Assets and Total Earning Assets. The remaining 4 refer to the next-year yield of each item on the asset or liability side: the cost of risk of loans, the yield on earning assets, the yield on deposits and the yield of net fees and commissions over total assets. The RWA are adjusted in 3 different ways, depending on the ST methodology. Specifically, for deep learning we project the growth of the RWA; for satellite modelling a dedicated model is trained to project the RW density of each financial institution in the sample; while for the constant balance sheet approach we assume the RWA remain constant for one year. \subsection{Deep Learning} We implement a Deep Neural Network (henceforth DNN) to address the issue of dynamic balance sheet forecasting. Deep learning has been an active field of research in recent years, as it has achieved significant breakthroughs in the fields of computer vision and language understanding.
In particular, deep networks have been extremely successful in diverse time-series modelling tasks such as machine translation~\cite{cho2014a, tu2016a}, machine summarization and recommendation engines~\cite{quadrana2017a}. However, their application in the field of finance is rather limited. Specifically, our paper constitutes one of the first works presented in the literature that considers the application of deep learning to address the challenging task of financial balance sheet stress testing. Deep Neural Networks differ from shallow neural networks (one layer) in the multiple internal layers employed between the input values and the predicted result (Fig. \ref{fig:dnn}). A DNN cannot be constructed without nonlinear activation functions, as without these the deep architecture collapses to an equivalent shallow one. Typical choices are the logistic sigmoid, the hyperbolic tangent and the rectified linear unit (ReLU). The logistic sigmoid and hyperbolic tangent activation functions are closely related; both belong to the sigmoid family. A disadvantage of sigmoid-family activation functions is their tendency to saturate for large positive or negative inputs, which requires keeping the activations small. To alleviate this problem, practitioners have derived piecewise linear units like the popular ReLU, which is now the standard choice in deep learning research. The activation layers increase the ability and flexibility of a DNN to capture non-linear relationships in the training dataset. On a different note, since DNNs comprise a huge number of trainable parameters, it is key that appropriate techniques are employed to prevent them from overfitting. Indeed, it is now widely understood that one of the main reasons behind the explosive success and popularity of DNNs consists in the availability of simple, effective, and efficient regularization techniques, developed in the last few years.
Dropout has been the first and most popular regularization technique for DNNs~\cite{srivastava2014a}. In essence, it consists in randomly dropping different units of the network at each iteration of the training algorithm. This way, only the parameters related to a subset of the network units are trained during each iteration. This ameliorates the network's tendency to overfit, and it does so in a way that ensures that all network parameters are effectively trained. Inspired by these merits, we employ dropout DNNs with ReLU activations to train and deploy feed-forward deep neural networks. More precisely, we employ the Apache MXNet toolbox of R. We postulated deep networks that are up to five hidden layers deep and comprise various numbers of neurons. Model selection using cross-validation was performed by minimizing the RMSE metric on the projected CAR. In our setup, the multivariate deep learning networks learn the balance sheet of each financial institution separately, generating yearly forecasts through the interactions of layered neurons after receiving historical values of the bank's previous economic states. This hierarchical transmission of observed data between cascading layers of abstraction can decompose the structure of a bank's balance sheet and foster the multivariate representation of the financial variables, capturing the correlations between the various assets and liabilities. This provides the functionality of simultaneously modelling the balance sheet as a whole, instead of using the satellite models of regular stress testing frameworks. This is feasible because DNNs can consume multiple input features and produce complex multivariate output representations. Deep learning can facilitate the dynamic balance sheet projection approach through the non-linear relationship representations of each layer, offering a more realistic approach to stress testing.
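A minimal numpy sketch of a training-time forward pass through one ReLU hidden layer with inverted dropout, in the spirit of the networks described above (layer sizes, the dropout rate and the random inputs are illustrative):

```python
import numpy as np

def relu(x):
    """Rectified linear unit: element-wise max(0, x)."""
    return np.maximum(0.0, x)

def dropout(x, rate, rng):
    """Inverted dropout: zero each unit with probability `rate` and
    rescale the survivors so the expected activation is unchanged,
    which lets the same network be used unmodified at test time."""
    mask = rng.random(x.shape) >= rate
    return x * mask / (1.0 - rate)

rng = np.random.default_rng(42)
x = rng.normal(size=(4, 60))              # batch of 4 input vectors (~60 features)
W = rng.normal(scale=0.1, size=(60, 32))  # weights of one hidden layer
b = np.zeros(32)

# One hidden layer: affine transform, ReLU nonlinearity, then dropout.
h = dropout(relu(x @ W + b), rate=0.5, rng=rng)
print(h.shape)  # (4, 32)
```

Stacking several such layers (up to five in the study) and ending with a 9-unit linear output layer yields the multivariate forecasting structure described in the text.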
Information flows through the system as a vector of macro and financial variables describing the state of both the bank and the macro economy at any time stamp during the forecast period. Specifically, the input vector contains around 60 variables and the output vector is composed of 9 variables. The DNN architecture employed is capable of modelling the lead-lag relationships between macro variables, banks' financial variables and sovereign variables. Finally, in the DeepStress engine, using the aforementioned multivariate forecasting setup on individual balance sheets, we simultaneously model the RWA evolution of the bank and connect it to the macro environment. \subsubsection{Bayesian Deep Learning} Conventional architectures compute point estimates of the unknown values (e.g., layers' weights) without taking into consideration any prior information and without any uncertainty estimation of the produced values. The Bayesian framework offers a flexible and mathematically solid approach to incorporate prior information and uncertainty estimation by explicitly employing model averaging. The Bayesian treatment of a particular model has been shown to increase its capacity and potential, while offering a natural way to assess the uncertainty of the resulting estimates. To this end, we augment the conventional architectures of the previous sections by relying on the Bayesian framework. Specifically, we impose a prior Normal distribution over the network weights, seeking to infer their posterior distribution given the data. Since the marginal likelihood is intractable for the considered architectures, among the existing Bayesian methods we rely on approximate inference, and specifically on Variational Inference. Since the true posterior of the model cannot be computed, in Variational Inference we introduce an auxiliary variational posterior distribution from a family of distributions; we then try to find the optimal member of the considered family to match the true posterior.
This matching is achieved through the minimization of the Kullback-Leibler (KL) divergence between the true and the introduced variational posterior. The KL divergence is a measure of similarity between two distributions and is non-negative; it is zero if, and only if, the two considered distributions match exactly. Minimizing the KL divergence is equivalent to maximizing the Evidence Lower Bound (ELBO), a well known bound on the marginal likelihood derived using Jensen's inequality. Thus, for training the following architectures, we resort to ELBO maximization. \paragraph{Local-Winner-Takes-All Mechanism} The commonly employed nonlinearities, such as ReLUs, are a mathematically convenient tool for training deep networks, but nevertheless do not come with biological plausibility. Research has shown that in the mammal brain, neurons with similar functionality and structure tend to group and compete with each other for their output. To this end, researchers have devoted significant effort to exploring this type of competition between neurons and applying it in existing models. The resulting procedure is referred to as Local Winner-Takes-All (LWTA) and has been shown to provide competitive, or even better, results on benchmark architectures in the image recognition domain \cite{pmlr-v97-panousis19a}. Thus, apart from the conventional ReLU activations of the previous section, we additionally explore the potency of LWTA in our domain. The linear units after the affine transformation in each layer are grouped together and compete for their outputs. This competition is performed in a probabilistic way, by employing a softmax nonlinearity, obtaining the probability of activation of each unit in each block. Specifically, let us consider input data $ X \in \mathbb{R}^{N\times D}$, containing $N$ observations, with $D$ features each.
In a traditional hidden layer, we compute an inner product between the input and a weight matrix $W \in \mathbb{R}^{D\times K}$, and the resulting activation is then passed through a non-linear function; the output of each layer is denoted as $Y=\{\boldsymbol y_n\}_{n=1}^N$, with $\boldsymbol y_n \in \mathbb{R}^K$. Thus, the corresponding procedure can be described by the following computation: % % \begin{align} y_{nk} = \sigma \left(\sum_{d=1}^{D} w_{dk} x_{nd} \right) \end{align} % % The most widely used non-linearity $\sigma(\cdot)$ is the Rectified Linear Unit (ReLU), such that for an input $x$: % % \begin{align} relu(x) = \max (0, x) \end{align} % % In the LWTA approach, this mechanism is replaced by introducing \textit{blocks} of units in each layer; therein, each unit competes with the rest for its activation. The winner unit gets to pass its output to the next layer, while the rest are zeroed out. Now, the layer input is presented to each block; the weights of the layer are reorganized into a three-dimensional matrix, such that $W \in \mathbb{R}^{D\times B \times U}$, where $B$ denotes the number of blocks and $U$ the number of units in each block. Assuming an input $x \in \mathbb{R}$ and weights $W \in \mathbb{R}^{D\times U}$, the output of each block, and of each unit therein, reads: % % \begin{align} y_{ku} = \begin{cases} h_{ku}, & \text{if } h_{ku} \geq h_{ki}, \forall i\neq u\\ 0, & \text{otherwise} \end{cases} \end{align} % % where $h_{ku} = \sigma\left( w_{ku} x \right)$. A graphical illustration of the adopted approach is presented in Fig. \ref{fig:wta}. % % \begin{figure}[h!] \centering \resizebox{0.7\linewidth}{!}{ \input{wta_feedforward} } \caption{The LWTA architecture: Each layer comprises blocks of competing units; therein each unit computes its activation and competes with the rest for its output.
The winner gets to pass its output to the next layer, while the rest are zeroed out.} \label{fig:wta} \end{figure} % We use the same architectures as in the previous experiments for comparability, and in order to assess any potential gains from the use of a biologically inspired mechanism. \subsection{Satellite Modelling - Bayesian Model Averaging} Satellite models are used for univariate estimation of the impact of standalone balance sheet items in current stress testing frameworks~\cite{stampe2017}. A statistical technique commonly employed by regulators and by the banking industry is Bayesian Model Averaging (BMA). The main intuition behind the use of the BMA econometric technique is to account for the uncertainty surrounding the main determinants of risk dynamics, especially in a period of recession. This approach is able to handle the short time series of balance sheet realizations that are usually the case in stress testing. Thus, BMA offers the possibility to perform multivariate modelling including all potential predictors with different weights, while the output of each trained model remains univariate. Using BMA, a pool of equations is generated using randomly selected subgroups of determinants. Subsequently, a weight is assigned to each model, reflecting its relative forecasting performance. Aggregating all equations using the corresponding weights produces a posterior model probability. The number of equations estimated in the first step is large enough to capture all possible combinations of a predetermined number of independent variables. Thus, Bayesian Model Averaging addresses model uncertainty and misspecification in the selected explanatory variables of a simple linear regression problem. To further illustrate BMA, suppose a linear model structure, with $Y_t$ being the dependent variable, $X_t$ the explanatory variables, $\alpha$ a constant, $\beta$ the coefficients, and $\epsilon_t$ a normal error term with variance $\sigma^2$.
% % \begin{align} Y_t &= \alpha_\gamma + \beta_\gamma X_{\gamma, t} + \epsilon_t \\ \epsilon_t &\sim \mathcal{N}(0, \sigma^2 I) \end{align} % % A problem arises when there are many potential explanatory variables in the matrix $X_t$, which makes the task of selecting the correct combination quite burdensome. The direct approach of inference in a single linear model that includes all variables is inefficient, or even infeasible with a limited number of observations. It can lead to overfitting, multicollinearity and repeated manual re-estimations to account for non-significant determinants. BMA tackles the problem by estimating models for all possible combinations of $\{X\}$ and constructing a weighted average over all of them. Under the assumption that $X$ contains $K$ potential explanatory variables, BMA estimates $2^K$ combinations, and thus, $2^K$ models. Applying Bayes' Theorem, model averaging is based on the posterior model probabilities: % % \begin{align} \begin{split} p(M_\gamma \mid Y, X) &= \frac{p(Y \mid M_\gamma, X) p(M_\gamma)}{p(Y \mid X)} \\ &= \frac{p(Y \mid M_\gamma, X) p(M_\gamma)}{\sum_{s=1}^{2^K} p(Y \mid M_s, X) p(M_s)} \end{split} \label{eqn:bms_posterior} \end{align} % % In Equation \eqref{eqn:bms_posterior}, $p(Y \mid X)$ denotes the integrated likelihood, which is constant over all models and is thus simply a multiplicative term. Therefore, the posterior model probability (PMP) is proportional to the marginal likelihood $p(Y \mid M_\gamma, X)$, which reflects the probability of the data given model $M_\gamma$. Thus, the weight assigned to each model is measured by $p(M_\gamma \mid Y, X)$ in Eq. \eqref{eqn:bms_posterior}. In Equation \eqref{eqn:bms_posterior}, $p(M_\gamma)$ denotes the prior belief of how probable model $M_\gamma$ is before analyzing the data.
Furthermore, to estimate $p(Y \mid X)$, integration is performed across the whole model space, and to estimate the probability $p(Y \mid M_\gamma, X)$, integration is performed, given model $M_\gamma$, across the whole parameter space. By renormalizing the product in Equation \eqref{eqn:bms_posterior}, the PMPs can be inferred, and subsequently the model-weighted posterior distribution of the estimator $\beta$ is given by % % \begin{align} p(\beta \mid Y, X) = \sum_{\gamma=1}^{2^K} p(\beta \mid M_\gamma, Y, X) p(M_\gamma \mid Y, X) \end{align} % % The priors, posteriors and the marginal likelihood employed in the estimation are described analytically in Appendix \ref{appendixa}. For model development, the same training set used for the DNN is employed. Before applying the Bayesian Model Averaging algorithm, we remove outliers and linearly interpolate over them. In the Bayesian Model Averaging estimation we employ the unit information prior (UIP), which sets $g=N$ commonly for all models. We also use a birth/death MCMC algorithm (20,000 draws), due to the large number of covariates included, since enumerating the entire model space would require a prohibitively large number of iterations. We fix the number of burn-in draws for the MCMC sampler to 10,000. Finally, the model prior employed is the ``random theta'' prior by Ley and Steel~\cite{ley2008a}, who suggest a binomial-beta hyper prior on the a priori inclusion probability. This has the advantage that it is less tight around the prior expected model size (i.e.\ the average number of included regressors), so it reflects prior uncertainty about model size more efficiently. For robustness purposes we varied the prior employed, following the propositions of Fernandez~\cite{fernandez2001a}, but the results were not substantially different. In order to develop all the satellite models for this approach, we employ the utilities of the BMS R package{\footnote{https://cran.r-project.org/web/packages/BMS/index.html}}.
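To make the mechanics concrete, the toy sketch below enumerates all non-empty subsets of regressors, weights each OLS fit by a BIC-based proxy of its posterior model probability, and averages the coefficients. This is a didactic simplification under a uniform model prior, not the g-prior/MCMC machinery of the BMS package used in the study; all data are synthetic:

```python
import numpy as np
from itertools import chain, combinations

def bma_coefficients(X, y):
    """Enumerate all 2^K - 1 non-empty OLS models, weight each by
    exp(-BIC/2) (a crude proxy of posterior model probability under
    a uniform model prior), and return the weighted-average betas."""
    n, k = X.shape
    subsets = chain.from_iterable(
        combinations(range(k), r) for r in range(1, k + 1))
    weights, betas = [], []
    for s in subsets:
        Xs = X[:, list(s)]
        beta, rss, _, _ = np.linalg.lstsq(Xs, y, rcond=None)
        rss = rss[0] if rss.size else float(((y - Xs @ beta) ** 2).sum())
        bic = n * np.log(rss / n) + len(s) * np.log(n)
        full = np.zeros(k)
        full[list(s)] = beta        # embed into the full coefficient vector
        betas.append(full)
        weights.append(np.exp(-0.5 * bic))
    w = np.array(weights) / np.sum(weights)
    return w @ np.array(betas)

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
y = 2.0 * X[:, 0] + rng.normal(scale=0.5, size=200)  # only x0 matters
beta_bma = bma_coefficients(X, y)
```

With three candidate regressors this enumerates 7 models; the averaged coefficient vector concentrates on the truly relevant regressor, illustrating why BMA mitigates model uncertainty relative to a single all-in regression.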
After the training process, 9 BMS models are developed: 4 for the growth of the balance sheet items, 4 for forecasting the yields of the various assets and liabilities, and one for forecasting the RW density. \subsection{Constant Balance Sheet Modelling Setup} For the constant balance sheet approach, all balance sheet items are assumed constant, along with the RWA metric, for one year. Thus, we combine the respective univariate satellite models (BMA) to project the yields of assets and liabilities, while assuming zero growth of the balance sheet, in order to project the CAR ratio one year ahead. \begin{table} \centering \caption{Comparison of the predicted one year ahead CAR by ST approach for all banks and only for Large financial institutions (more than 200 billion in assets).} \label{table:car} \renewcommand{\arraystretch}{1.1} \begin{minipage}[b]{0.45\linewidth} \resizebox{\linewidth}{!}{ \begin{tabular}{|c|c|c|} \hline All banks in the dataset & Out of Sample CAR & In Sample CAR\\\hline Satellite Modelling (BMS) & 20.61 & 17.07 \\\hline Deep Learning (MXNET) & 18.01 & 17.89 \\\hline Deep Learning (Bayesian ReLU) & 18.80 & 17.83 \\\hline Deep Learning (Bayesian LWTA) & $\mathbf{19.23}$ & $\mathbf{18.53}$ \\\hline Constant Balance Sheet & 20.03 & 17.49 \\\hline Actual & 19.33 & 18.73 \\\hline \end{tabular}} \end{minipage} \begin{minipage}[b]{0.45\linewidth} \resizebox{1.014\linewidth}{!}{ \begin{tabular}{|c|c|c|} \hline Large Banks ($>$ 200 bl) & Out of Sample CAR & In Sample CAR \\\hline Satellite Modelling (BMS) & 15.07 & 11.04 \\\hline Deep Learning (MXNET) & 12.7 & 11.12 \\\hline Deep Learning (Bayesian ReLU) & $13.2$ & $11.72$\\\hline Deep Learning (Bayesian LWTA) & $\mathbf{13.43}$ & $\mathbf{12.13}$ \\\hline Constant Balance Sheet & 15.11 & 11.48 \\\hline Actual & 13.75 & 14.16 \\\hline \end{tabular} } \end{minipage} \end{table} \section{Model validation - Experimental Evaluation} No thorough and consistent framework exists for validating the results of a stress testing
exercise, since the adverse scenarios used in their design never materialize. Back testing is an important process for recognizing modelling inefficiencies and fine tuning the estimations, taking into account specificities in the time series data that were not captured in the initial calibration and development phase. Thus, in order to improve the quality of stress testing, rigorous validation procedures comparing actual vs predicted financial variables are important. Furthermore, according to the academic literature, the success of the stress testing exercises after the financial crisis may be circumstantial~\cite{hirtle2015}, since no robust methods are applied to quantify their estimation error. \begin{figure}[t!] \centering \includegraphics[width=0.7\linewidth]{out_of_sample_capital} \caption{Out of sample back testing results of the Capital of the three balance sheet approaches compared with the actual figures (Whole Sample). } \label{fig:capital_average} \end{figure} \begin{figure}[b!] \centering \includegraphics[width=0.7\linewidth]{in_sample_car} \caption{In sample back testing results of CAR ratio of the balance sheet approaches (Whole Sample).} \label{fig:car_all_banks_in_sample} \end{figure} Following a different avenue, in this study we perform a thorough validation procedure in order to assess the robustness of our approach. In this section we summarize the results of the 5 approaches. More precisely, we report the performance results obtained from the experimental evaluation of our methods, in terms of in-sample fit (train dataset) and out-of-time performance (test sample). To sum up, after developing our stress testing frameworks on the ``In-sample'' dataset spanning the years 2010 to 2013 (16 quarters), we assess their performance on the ``Out-of-time'' dataset, in which the performance of each model is evaluated over a future time period, in order to evaluate their generalization capacity.
More precisely, we report performance results obtained by evaluating our methods over a two-year (8 quarters) out-of-sample time period spanning 2014--2015. Validation is performed with respect to the one-year-ahead forecast of the CAR ratio. Note that the last two years of the dataset were not used for model development. The prediction accuracy of the CAR ratio, as measured by the deviation between the forecast of each framework and the actual CAR ratio of each financial institution, is the main criterion for assessing the efficacy of each method and selecting the most robust one. In this section, we present a series of metrics that are broadly used for quantitatively estimating forecasting accuracy on continuous outcomes. We evaluate the stress testing methods with the usual forecast metrics of Root Mean Square Error (RMSE), Mean Absolute Error (MAE) and Mean Absolute Percentage Error (MAPE). These metrics are used so as to derive a full-spectrum conclusion regarding the relative forecasting power of each framework. \begin{figure}[t!] \centering \includegraphics[width=0.7\linewidth]{out_sample_car} \caption{Out of sample back testing results of CAR ratio of the three balance sheet approaches (Large Banks in the out of Sample) } \label{fig:car_large_banks} \end{figure} As we observe in Table \ref{table:car}, the deep learning algorithms provide the best empirical fit, both in-sample and out-of-sample. Deep learning offers a more efficient and holistic way to simulate the CAR ratio under a specific set of macro scenarios for key macroeconomic variables, as the predicted average CAR is closer to the actual one compared to the satellite modelling and constant balance sheet stress testing assumptions. Table \ref{table:car_metrics} summarizes the results of all the aforementioned samples with respect to the CAR ratio and the prediction error validation metrics (RMSE, MAE, MAPE).
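The three validation metrics can be written down directly; the CAR figures used in the example are illustrative, not the paper's results:

```python
import numpy as np

def rmse(actual, pred):
    """Root Mean Square Error."""
    a, p = np.asarray(actual, float), np.asarray(pred, float)
    return float(np.sqrt(np.mean((a - p) ** 2)))

def mae(actual, pred):
    """Mean Absolute Error."""
    a, p = np.asarray(actual, float), np.asarray(pred, float)
    return float(np.mean(np.abs(a - p)))

def mape(actual, pred):
    """Mean Absolute Percentage Error (in percent)."""
    a, p = np.asarray(actual, float), np.asarray(pred, float)
    return float(np.mean(np.abs((a - p) / a)) * 100.0)

# Illustrative one-year-ahead CAR forecasts (percentage points).
actual = [19.3, 18.7, 14.2]
pred = [18.0, 17.9, 13.4]
print(rmse(actual, pred), mae(actual, pred), mape(actual, pred))
```

RMSE penalizes large deviations more heavily than MAE, while MAPE expresses the error relative to the actual CAR level; reporting all three gives the full-spectrum view used in the comparison tables.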
Based on the figures reported in the test sample, the deep learning algorithms provide a more accurate estimation of the CAR ratio, exhibiting a significant decrease in the forecasting error. Another remark based on the experimental results is that, by moving from simple neural networks to Bayesian networks, we are able to infer richer and subtler dynamics from the data, thus increasing our capacity for modelling nonlinearities and cross-correlations among balance sheet and P\&L items. This is also evident from Figs. \ref{fig:capital_average} and \ref{fig:car_average}, where the out-of-sample performance of the constant balance sheet and satellite modelling approaches diverges significantly from the actual evolution of Regulatory Capital (Fig. \ref{fig:capital_average}) and the CAR ratio (Fig. \ref{fig:car_average}) in the dataset, even though they provide an adequate fit in the development sample (Fig. \ref{fig:car_all_banks_in_sample}). On the contrary, the average CAR ratios estimated with the deep learning methods depict in an appropriate manner the CAR dynamics in the projection period. \begin{figure}[b!] \centering \includegraphics[width=0.7\linewidth]{out_sample_car_large} \caption{Out of sample back testing results of the CAR ratio of the three balance sheet approaches compared with the actual figures.
} \label{fig:car_average} \end{figure} \begin{table} \caption{Comparison of the prediction error metrics (RMSE, MAPE, MAE) of the one year ahead CAR by ST approach, for all banks and for Large financial institutions (more than 200 billion in assets)} \label{table:car_metrics} \centering \renewcommand{\arraystretch}{1.1} \begin{minipage}[b]{0.45\linewidth} \resizebox{\linewidth}{!}{ \begin{tabular}{|c|c|c|c|} \hline All banks & \multicolumn{3}{c|}{Out of Sample (2014Q1-2015Q4)} \\\hline & \textbf{RMSE} & \textbf{MAPE} & \textbf{MAE} \\\hline Satellite Modelling (BMS) & 11.32 & 2.88 & 0.15 \\\hline Deep Learning (MXNET) & 11.18 & 2.36 & 0.12 \\\hline Deep Learning (Bayesian ReLU) & 15.36 & 2.12 & 0.10 \\\hline Deep Learning (Bayesian LWTA) & $\mathbf{10.75}$ & $\mathbf{1.77}$ & $\mathbf{0.09}$\\\hline Constant Balance Sheet & 11.15 & 2.85 & 0.15 \\\hline & \multicolumn{3}{c|}{In Sample (2010Q1-2013Q4)}\\\hline Satellite Modelling (BMS) & 13.46 & 2.58 & 0.16 \\\hline Deep Learning (MXNET) & 13.49 & 2.55 & 0.15 \\\hline Deep Learning (Bayesian ReLU) & 16.58 & 2.41 & 0.15 \\\hline Deep Learning (Bayesian LWTA) & 18.70 & 2.16 & 0.14\\\hline Constant Balance Sheet & 17.25 & 2.56 & 0.15 \\\hline \end{tabular} } \end{minipage} \begin{minipage}[b]{0.45\linewidth} \resizebox{\linewidth}{!}{ \begin{tabular}{|c|c|c|c|} \hline Large Banks ($>$ 200 bl) & \multicolumn{3}{c|}{Out of Sample (2014Q1-2015Q4)}\\\hline & \textbf{RMSE} & \textbf{MAPE} & \textbf{MAE} \\\hline Satellite Modelling (BMS) & 3.21 & 2.31 & 0.17 \\\hline Deep Learning (MXNET) & 2.28 & 1.97 & 0.15 \\\hline Deep Learning (Bayesian ReLU) & $\mathbf{1.96}$ & 1.56 & 0.12 \\\hline Deep Learning (Bayesian LWTA) & 2.04 & $\mathbf{1.51}$ & $\mathbf{0.11}$\\\hline Constant Balance Sheet & 3.56 & 2.58 & 0.19 \\\hline & \multicolumn{3}{c|}{In Sample (2010Q1-2013Q4)}\\\hline Satellite Modelling (BMS) & 3.44 & 3.14 & 0.23 \\\hline Deep Learning (MXNET) & 3.46 & 3.13 & 0.22 \\\hline Deep Learning (Bayesian ReLU) & 3.07 & 2.78 & 0.20 \\\hline Deep Learning
(Bayesian LWTA) & 2.76 & 2.42 & 0.18\\\hline
Constant Balance Sheet & 3.27 & 2.94 & 0.21 \\\hline
\end{tabular}
}
\end{minipage}
\end{table}
To further investigate the performance of the Deep Stress approach, we narrow the results down to the subset of large financial institutions, where a robust stress testing methodology matters most because of their size and socio-economic impact. For the purposes of this study, large financial institutions are defined as entities with more than 200 billion in assets. Tables \ref{table:car} and \ref{table:car_metrics} show that, both in terms of fit and in terms of error metrics, the superiority of the deep neural networks is confirmed, with significant drops in the forecasting error in the test sample. Another result worth mentioning is that, although univariate satellite modelling was expected to fit the development sample better than the DNN, this is not the case: the DNN is trained in a multivariate setup, modelling nine variables at the same time, and still exhibits an in-sample error comparable to the other two methods. The same pattern also holds in Fig. \ref{fig:car_large_banks}, where the projected CAR is graphed only for this category of large banks (more than 200 billion in assets). Summarizing the results across all metrics in the test sample, it is evident that the Deep Learning algorithms exhibit higher predictive power than all the considered benchmark approaches. Between the two benchmark approaches, the constant balance sheet assumption, although easier to implement, exhibits the highest error. Hence, it is crucial for supervisory authorities to rethink current stress testing exercises that rely on the constant balance sheet assumption and move towards a dynamic balance sheet approach.
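The three validation measures used throughout the comparison can be made concrete in a few lines of code (an illustrative Python sketch; the CAR series shown are made up, not drawn from our dataset):

```python
import math

def error_metrics(actual, predicted):
    """Return (RMSE, MAPE, MAE) for two equal-length series.

    MAPE is expressed in percent, matching the convention of the
    comparison tables; RMSE and MAE are in the units of the series.
    """
    n = len(actual)
    rmse = math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / n)
    mape = 100.0 * sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / n
    mae = sum(abs(a - p) for a, p in zip(actual, predicted)) / n
    return rmse, mape, mae

# Made-up one-year-ahead CAR ratios (in %), not values from the paper's dataset
actual = [12.0, 12.5, 13.1, 12.8]
predicted = [11.8, 12.7, 12.9, 13.0]
rmse, mape, mae = error_metrics(actual, predicted)
```

Each back-testing comparison in the tables is such a triplet computed on the actual versus predicted CAR series of the corresponding sample.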
\section{Conclusions and Future Work} In this study we propose a new approach to regulatory stress testing exercises, called Deep Stress, that exploits the properties of deep learning. The main novel contribution of this empirical research to the literature on forecasting economic and financial crisis events is that we explore this new statistical technique to tackle the problem of dynamic balance sheet stress testing. Deep learning is used to provide a holistic modelling approach for a bank's key financial items, and we perform thorough testing and validation of the proposed technique. The experimental results provide strong evidence that this direction deserves further exploration by regulators and financial institutions in order to produce a new generation of stress tests. Deep Stress is compared with two broadly accepted stress testing frameworks: the constant balance sheet assumption and satellite dynamic modelling. Summarizing our experimental results, we found that Deep Neural Networks consistently outperform the benchmark approaches. Our analysis provided strong evidence of increased forecasting accuracy with respect to the CAR ratio and of performance consistency, which implies a much stronger generalization capacity compared to the alternative benchmark frameworks. The validation measures RMSE, MAE and MAPE decrease significantly in the test sample when using Deep Stress, yielding a better simulation of the CAR ratio. These findings render our approach much more attractive to researchers and practitioners working in real-world financial institutions. The main driver of this higher forecasting accuracy is the ability to model the intercorrelation of balance sheet and P\&L items, providing a better simulation of a bank's one-year-ahead activities.
Summarizing, Deep Stress offers a better dynamic balance sheet simulator, a major component of any stress testing framework, by better capturing how small macroeconomic and financial changes can be amplified exponentially under a crisis event. The holistic and dynamic nature of our framework leads to a significant decrease in the forecasting error by better modelling the feedback loops and the interdependence between the various items of a financial institution's balance sheet and the macroeconomy. The aforementioned cascading-layer structure of deep learning algorithms will open up new horizons for financial system simulation, combining brain-inspired computation and statistical machine learning. Our initial endeavour concentrates on the banking system, which is the backbone of the global economy, but the approach is scalable to other entities as well, such as large corporates, insurers and shadow banking. The system can be used by policy makers to test various measures and to monitor the financial system in a forward-looking manner. Our aim is to innovate in the way regulatory authorities monitor the system and to increase awareness of possible future financial shocks. By simulating the complex nexus of the current financial establishment, governments can proactively take steps to mitigate forthcoming adverse events. Finally, Deep Stress can be used to measure the social impact of a possible financial or systemic shock through the adjusted projections of key macro variables such as unemployment, wealth and credit expansion. An aspect this work has not considered is the development of deep learning models that can be continuously retrained in a moving-window (online learning) setup. Another possible way forward is the exploration of deep neural networks on broader datasets covering multiple jurisdictions.
Finally, it is evident that the postulated Deep Learning networks can effectively capture nonlinearities in the relationship between the input and output variables. Admittedly, the validation framework cannot fully capture the estimation error of a stress testing exercise, since the dataset does not include crisis years. The results nevertheless provide evidence of the forecasting efficacy of Deep Stress over several years, simulating the baseline scenario of a stress testing exercise. To strengthen the validation of our approach, we will intensify the data collection process to gather information covering the years before the financial crisis, so that Deep Stress can be used to simulate and predict the entity failures that took place during that period. The value of such novel developments remains to be examined in our future research endeavours.
Q: Change global variable value with a function from inside window.onload

I'm building a calculator. Why is the addToStr function not able to change the values of the global variables? (I sampled just some switch-statement cases to shorten the code.)

    window.onload = function () {
        $(".buttons").click(function () {
            var id = $(this).attr("id");
            switch (id) {
                case "b-7":
                    value = "7";
                    console.log(value);
                    addToStr(miniDisplayString, value);
                    addToStr(numberString, value);
                    break;
                case "b-8":
                    value = "8";
                    addToStr(miniDisplayString, value);
                    addToStr(numberString, value);
                    break;
                case "b-9":
                    value = "9";
                    addToStr(miniDisplayString, value);
                    addToStr(numberString, value);
                    break;
                case "b-divide":
                    value = " / ";
                    addToStr(miniDisplayString, value);
                    break;
            }
            console.log("total = " + total);
            console.log("miniDisplayString = " + miniDisplayString);
            console.log("numberString = " + numberString);
        });
    }

    var total = 0;
    var value;
    var miniDisplayString = "";
    var numberString = "";

    function addToStr(tot, val) {
        tot += val;
    }

A: You cannot pass values by reference in JavaScript; the function receives the variables' values, not references to them. In order to do so, you would need pointers, which are not available in a language like JavaScript. To achieve this with your current method, you would need a separate function per global variable, each taking only 'value' as an argument and assigning it to the corresponding variable (an 'addTo<varName>' function for each one).

However, this method seems unnecessary and highly inefficient. Just setting the variables manually in your code wouldn't hurt at all, like so:

    miniDisplayString += value; // This should not be nested inside a function like addToStr.

...instead of using a separate function for each and every variable, which is honestly not necessary at all, not in your case nor in general.
You can also return the new value from the function and use that, but again, not necessary at all.
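The return-based variant would look like this (a sketch detached from the calculator's jQuery/DOM code; the variable names follow the question):

```javascript
// addToStr cannot mutate the caller's string through its parameter
// (the parameter is a local copy), but it can return the concatenated
// result for the caller to assign back.
function addToStr(str, val) {
  return str + val;
}

var miniDisplayString = "";
var numberString = "";

// Each switch case would then assign the result instead of relying
// on a (nonexistent) side effect:
miniDisplayString = addToStr(miniDisplayString, "7");
numberString = addToStr(numberString, "7");
miniDisplayString = addToStr(miniDisplayString, " / ");
```
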
You have the vision, you have the product, and you have the funding; it's your time to make a splash. How do you turn a brilliant vision into a successful reality? Without the proper launch strategy, even the best idea can fall flat. This panel of marketing and PR mavens will take you through best practices for taking a product from stealth to superstar, including strategies for building anticipation, managing hype, handling speculation/misinformation, providing a cure for the common launch, and how to tap into buzz to predict sales/adoption and get customers engaging. Alex Constantinople has twenty years of experience as a strategic communications and marketing executive. Prior to joining OutCast as COO in September 2010, she was responsible for communications efforts for the business and editorial divisions of Condé Nast's WIRED magazine and its digital properties. Previously, Alex spent fifteen years at GE, where she served as general manager of corporate and marketing communications and marketing services. During that time she headed global marketing initiatives such as the launch of GE's groundbreaking ecomagination platform and imagination at work branding campaign, developed innovation tools and processes for GE business teams to achieve faster organic growth, and helped GE engage in new and interactive ways with consumers, customers and thought-leaders. She started her career with the company at NBC, working in a variety of publicity and promotions roles including television's top morning program, Today, before rising to vice president of NBC News communications. Alex began her career with CNN, as a publicist for Larry King Live. Alex currently serves on the Board of Directors for Schools, Mentoring and Resource Team (SMART), a non-profit organization that provides highly motivated, financially-disadvantaged students with access to educational opportunities and social support services in order to foster academic excellence and community engagement. 
Brandee Barker is an independent consultant who builds global brands through strategic communications and social media programs. Having spent nearly 20 years in the tech industry, her passion is guiding founders, executives and their teams through the crucial stages of message development and delivery, market positioning, launch planning and multimedia dissemination. Her current and most recent clients include notable Internet start-ups AirBnB, Groupon, Spotify, Square and Quora. Prior to starting her own business in 2011, Brandee held the distinguished role of being the first communications executive at Facebook. In 2006, Brandee joined Facebook when it had 100 employees and 5 million college users and saw it through to more than 2,000 employees and 500 million users worldwide. During her tenure, she grew the communications team to 10 employees, hired and managed the first U.S. and international PR agencies, and oversaw a multi-million dollar budget. She led the PR strategy for many major business milestones including, Microsoft ($240MM) and DST ($300MM) investments, Parakey and Friend Feed acquisitions, and many strategic partnerships. She launched hundreds of products and features, most notably News Feed, Pages, "Share" and "Like," Facebook Ads, Facebook Platform and the initial translation of the site. In her role as a public spokesperson for Facebook, Brandee navigated through many challenges including rumors of buyouts, IPO and revenue speculation, government investigations, site outages and massive user protests around privacy issues. In contrast, Brandee also served as a consumer TV spokesperson and co-hosted the live web stream of the 2009 Golden Globes and American Music Awards. Through all of this, Brandee developed a deep and trusted network among media – print reporters, tech bloggers and broadcast producers. 
She built Facebook into a global phenomenon, in part, by placing countless stories in influential blogs, such as AllThingsD, GigaOm, Mashable, TechCrunch, and VentureBeat, and targeting cover stories in Fast Company, Financial Times, Fortune, New York Times, Newsweek, Wall St. Journal and Wired. In addition, she secured and managed executive appearances on 60 Minutes, Bloomberg, CNBC, GMA, Martha Stewart, Oprah and Today Show. Her final project at Facebook was directing the TIME story naming CEO Mark Zuckerberg the 2010 Person of the Year. Brandee has a long career that has shaped her approach to communications today. She was Vice President at Zeno Group and led PR programs for Oracle's applications business and the Gap's corporate division. While a Vice President at Ruder Finn, she managed the Sony Electronics account and developed the firm's West Coast consumer technology practice. Brandee was Director of Corporate Communications at Stamps.com and served as Director of Marketing at Oracle. She began her career as part of the team that established the Boston office of GCI Group. In 2007, Brandee was named to PRWeek's "Top 40 Under 40", which honored people who "demonstrate innovative thinking, strong determination, and results." She speaks regularly at conferences, including DLD in Munich and SXSW in Austin. Brandee is a VIP Donor Member of TED and proudly sits on the Board of the Somaly Mam Foundation. She holds a bachelor's degree in journalism from the University of Colorado at Boulder, where she graduated with honors. Kira lives for adventure, whether it's in the great outdoors or the early stages of a startup. She launched her own travel business straight out of undergrad at Georgetown, then spent six years leading integrated marketing efforts for Intuit's small business division. Before joining Lytro, Kira was a principal of the customer experience consultancy Ant's Eye View. 
Her keen insights into what engages and delights consumers have led to numerous accolades for innovative marketing, including the WOMMA Grand Prix Prize and Forrester Groundswell Award. She holds an M.B.A. from Duke University.
Q: single Ajax Post request not working in IE

I have some code that uses jQuery to create dialogs from some hidden forms in my HTML:

    $("#editdeletediv").load().dialog({
        // Set options for the dialog here
        title: 'Edit/Delete log',
        modal: true,
        autoResize: true,
        maxWidth: 600,
        minWidth: 500,
        buttons: {
            Delete: function () {
                $('#confirmdelete').load().dialog({
                    // Set options for the dialog here
                    title: 'Confirm deletion',
                    modal: true,
                    autoResize: true,
                    maxWidth: 300,
                    minWidth: 250,
                    buttons: {
                        Yes: function () {
                            $.ajax({
                                url: 'removelog',
                                type: "POST",
                                data: { id: calEvent.id },
                                dataType: "json",
                                success: function (response) {
                                    window.location.href = response;
                                }
                            });
                        },
                        No: function () {
                            $(this).dialog('close');
                        }
                    }
                });
            },
            Save: function () {
                var input = $("<input>")
                    .attr("type", "hidden")
                    .attr("name", "id").val(calEvent.id);
                $('#editdeleteform').append($(input));
                var formdata = new FormData($("#editdeleteform")[0]);
                $.ajax({
                    url: 'updatelog',
                    type: 'POST',
                    data: formdata,
                    async: false,
                    success: function (response) {
                        window.location.href = response;
                    },
                    cache: false,
                    contentType: false,
                    processData: false
                });
            }
        }
    });

This all works in Chrome and Firefox, but in IE it's a different matter. The AJAX post to 'updatelog' doesn't seem to work in IE but the post to 'removelog' does. Does anyone know what might be the issue here? I think it might be the

    var formdata = new FormData($("#editdeleteform")[0]);

but I need that formdata variable, as the fields I'm passing to the post might be different in the future (I might dynamically create more HTML elements), so I need a way to get everything from that form without hard-coding it into my JavaScript.

EDIT: figured out how to get a console open in Internet Explorer and now I know for sure it's the FormData where the problem is arising:

    SCRIPT5009: 'FormData' is undefined

Does anyone know of an alternative that works in Chrome, Firefox and IE, or does anyone know how I can work around this problem for IE?
EDIT: I decided it's more trouble than it's worth; this is for an intranet site solution, so spending too much time on this would be a waste (I can just require my users to use specific browsers/versions if they want full functionality). So I just did this:

    if (typeof FormData !== 'undefined') {
        var formdata = new FormData($("#editdeleteform")[0]);
    } else {
        var formdata = {
            'id': calEvent.id,
            'editdate': $("#editdate").val(),
            'editcomment': $("#editcomment").val(),
            'editworkhours': $("#editworkhours").val(),
            'editdoctorsnote': ''
        };
    }

A: FormData isn't compatible with IE 7-9, see: https://developer.mozilla.org/en-US/docs/Web/API/FormData

A: Try this to send the form data to updatelog; it may help you:

    var formdata = $("#editdeleteform").serialize();

A: I decided it's more trouble than it's worth; this is for an intranet site solution, so spending too much time on this would be a waste (I can just require my users to use specific browsers/versions if they want full functionality). So I just did this:

    if (typeof FormData !== 'undefined') {
        var formdata = new FormData($("#editdeleteform")[0]);
    } else {
        var formdata = {
            'id': calEvent.id,
            'editdate': $("#editdate").val(),
            'editcomment': $("#editcomment").val(),
            'editworkhours': $("#editworkhours").val(),
            'editdoctorsnote': ''
        };
    }
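For reference, the serialize() route works in old IE because it only builds a URL-encoded string. A plain-JavaScript sketch of the same idea (toUrlEncoded is an illustrative helper of ours, not a jQuery API):

```javascript
// Build an application/x-www-form-urlencoded body from a plain object,
// the same shape as the hand-written fallback object above. Works in
// any browser, including IE 7-9 where FormData is undefined.
function toUrlEncoded(fields) {
  var pairs = [];
  for (var key in fields) {
    if (Object.prototype.hasOwnProperty.call(fields, key)) {
      pairs.push(encodeURIComponent(key) + "=" + encodeURIComponent(fields[key]));
    }
  }
  return pairs.join("&");
}

// Illustrative field values, not tied to the form above
var body = toUrlEncoded({ id: 42, editcomment: "worked 8h" });
// body: "id=42&editcomment=worked%208h"
```

Such a string can be passed straight to $.ajax as data, without the contentType: false / processData: false flags, which are only needed for FormData uploads.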
To sit and listen, I find so difficult, as my mind doesn't stop…. With love and kiss from you? I had thought reading your poetry would be my music? Well done both of you. I hope you do more together. This is just amazing. Such a cool dad!
Lie To Me is a South Korean romantic drama starring Yoon Eun-hye and Kang Ji-hwan. The series revolves around a civil servant named Gong Ah-jung who pretends to be the wife of the wealthy hotel heir, Hyun Ki-joon, in order to impress her former friend. It is directed by Kim Soo-ryung and Kwon Hyuk-chan, produced by Jo Sung-won and written by Kim Ye-ri. Casts: Yoon Eun-hye, Kang Ji-hwan, Bang Sung-Joon, Jo Yoon-hee, Hong Soo-hyun Production: N/A
Q: Money in utility function - Value function

I am reading Walsh's (2003) book on monetary economics, specifically the chapter on money in the utility function. I understand the basics of a value function but I can't seem to get the same results as the author.

Where

i.e. the per capita budget constraint. He then finds an expression for $w_{t+1}$:

This is the first source of my confusion. Previously he defines output per worker as a function of capital per worker, i.e. $y_{t}=f(\frac{k_{t-1}}{1+n})$ where $n$ is the population growth rate. But all of a sudden he changes it to $\frac{f(k_{t-1})}{1+n}$. I am aware that there are many typos in this book; is this just one of them or am I missing something trivial?

Either way, he uses the budget constraint to express $k_{t}$ as $w_{t}-c_{t}-m_{t}-b_{t}$ and the definition of $w_{t+1}$ to express the value function as:

Now, I don't know if it is because I am sleep deprived or because there is a typo, but I just can't seem to get the same results as Walsh. E.g., differentiating w.r.t. $c_{t}$ I get:

$u_{c}(c_{t},m_{t}) + \beta V_{w}(w_{t+1})\left[\frac{-f'(w_{t}-c_{t}-m_{t}-b_{t})}{1+n}(-1)+\frac{1+\delta}{1+n}(-1)\right]$

While Walsh gets

Am I missing something obvious or is there a typo?

A: Your calculation has two typos: a) you type the minus sign related to $f'$ twice, and b) you write $(1+\delta)$ instead of $(1-\delta)$. If we correct for these we have

$$u_{c}(c_{t},m_{t}) + \beta V_{\omega}(\omega_{t+1})\left[\frac{f'(\omega_{t}-c_{t}-m_{t}-b_{t})}{1+n}(-1)+\frac{1-\delta}{1+n}(-1)\right]$$

Taking out the minus sign and $1/(1+n)$, and compacting $f'(\omega_{t}-c_{t}-m_{t}-b_{t})=f_k(k_t)$, we have

$$u_{c}(c_{t},m_{t}) - \frac{\beta}{1+n}V_{\omega}(\omega_{t+1})\left[f_k(k_t)+1-\delta\right]$$

which is the expression in the book.

As regards the definition of output per worker: given the clarification in the comments, aggregate output during period $t$ is

$$Y_t = F(K_{t-1}, N_{t})$$

while $N_t = N_{t-1}(1+n)$.

I guess constant returns to scale are assumed, so per capita magnitudes are

$$y_t = \frac{Y_t}{N_t} = F\left(\frac{K_{t-1}}{N_t}, \frac{N_{t}}{N_t}\right) = F\left(\frac{K_{t-1}}{N_{t-1}(1+n)}, 1\right)$$

Now, consider the notational/conceptual problem here: we will tend "automatically" to write $K_{t-1}/N_{t-1} \equiv k_{t-1}$ because of the same index, but $K_{t-1}/N_{t-1}$ is economically meaningless because $K_{t-1}$ is not combined in production with $N_{t-1}$. This ratio describes "capital used in period $t$ per worker in period $t-1$".

Anyway, if we clearly declare in building the model that we define $k_{t-1}$ in this way, then we end up with

$$y_t = f\left(\frac{k_{t-1}}{1+n}\right)$$

and the budget constraint for period $t$ is correct while the one for period $t+1$ is not. How does this affect the first-order condition with respect to consumption?

(Notice that with these notational conventions, "capital per worker during period $t$" is $k_{t-1}/(1+n)$ - I have argued elsewhere why it is preferable for modellers to adopt the meaning that $k_{t-1}$ represents the value at the beginning of period $t-1$ and so is used in the production of period $t-1$.)

• Thank you so much for the help, much appreciated. Do you have any thoughts about the output per worker notation? Walsh seems to be switching between two different expressions and I don't know if I am just misunderstanding him or if it's a typo. – Jul 8 '17 at 11:46
• @BenBernke What does $k_{t-1}$ stand for in this book? Capital at the end or at the beginning of period $t-1$? – Jul 8 '17 at 12:01
• $K_{t-1}$ is the aggregate stock of capital in the beginning of period $t$ and $k_{t-1}$ is the per capita capital stock in the beginning of period $t$. – Jul 8 '17 at 13:09
Ine Finholt Jansen (born 6 July 1973) is a Norwegian film and theatre actress. Biography Jansen is the daughter of the actor Per Jansen (1941–2022) and the prompter Evy Finholt, and grew up in the west of Oslo. After finishing school she had a few extra roles at the Nationaltheatret before studying acting from 1996 to 1999 at the Statens Teaterhøyskole, Norway's most renowned drama school. After completing her studies, Ine Jansen received a permanent engagement at the Nasjonalteatret and made her debut in 2000 in the play Brått evig. She became known to a wider audience through her first leading television role as Ine Jansen in the television series Helt Perfekt alongside Thomas Giertsen, for which she received the Gullruten (2012) and the Komiprisen (2014 and 2015). Roles followed in the second season of the television series Mammon and in the crime comedy Jul i Blodfjell (Christmas in Blood Mountain). In the cinema she appeared, among other roles, as Merete alongside Atle Antonsen and Anders Baasmo Christiansen in the remake of the Norwegian film classic Norske Byggeklosser. With her former partner, the actor, director and Språkrådet member Erik Ulfsby, she has twin sons; she lives in Oslo. Filmography (selection) 2010–2015: Dag (TV series) 2011–2020: Helt perfekt (TV series) 2016: Mammon 2017: Jul i Blodfjell (miniseries) 2017: Rett Vest 2018: Mordene i Kongo (The Congo Murders) 2018: Los Bando 2018: Norske Byggeklosser 2020–: Aldri voksen (TV series) External links Ine Jansen in the Internet Movie Database List of roles at the Nasjonalteatret References Theatre actresses Film actresses Norwegians 1973 births Women
Jean Antoine Petit-Senn (born 6 April 1792 in Geneva – died 10 March 1870 in Chêne-Bourg, GE), aka John Petit-Senn, was a Swiss novelist, poet, singer, editor and politician. Life Petit-Senn was born in Geneva, when it was still the Republic of Geneva. Six years later it was occupied by the army of the French First Republic. Petit-Senn studied at the Academy of Geneva and then did an apprenticeship with a commercial company in Lyon. Following his return to Geneva in 1813, when it was still occupied by the French First Empire, he participated in the cultural life of the city. After Geneva joined the Swiss Confederation in 1814/15, Petit-Senn also engaged in politics, serving as a member of the cantonal parliament from 1829 until 1839. Petit-Senn is buried at the cemetery of Chêne-Bougeries. Legacy According to the 11th edition of the Encyclopædia Britannica he was a thorough Genevese and a biting satirist, a pensive poet, the Genevese La Bruyère, as he liked to be called, but was not fully appreciated till after his death, when his widely scattered writings were brought together. Among many quotes attributed to him is this one: "Not what we have, but what we enjoy, constitutes our abundance.", which appeared in the American comic strip Mutts on 19 November 2007. References Swiss male poets 1792 births 1870 deaths Writers from the Republic of Geneva 19th-century Swiss poets 19th-century male writers
Q: issue with print struct tm in c

Suppose I have two valid time_t variables (called time1, time2) with different dates. I create:

    struct tm *time1_info = localtime(&time1);
    struct tm *time2_info = localtime(&time2);

When I try to print the months of the time1 and time2 variables like this:

    printf("Time1 month %i and time2 month %i\n", time1_info->tm_mon, time2_info->tm_mon);

It gives me the time2 month value as time1, but I know for sure they are different from each other. For example, if time1's month is 4 and time2's month is 7, it prints:

    Time1 month 7 and time2 month 7

Why is it doing this?

A: The localtime function returns a pointer to a static object, and calling it again may overwrite the data and then return the same pointer. If you check the value of the two returned pointers, you'll probably see that they point to the same place. You should take the data you need from the returned pointer before calling localtime again:

    struct tm *time_info = localtime(&time1);
    month1 = time_info->tm_mon;
    time_info = localtime(&time2);
    month2 = time_info->tm_mon;

Some systems have a localtime_r function that lets you specify where to store the data instead of always using the same storage, but this is not a standard C function. The C11 standard adds an optionally supported function localtime_s which does the same.
# Examples of inequality implied by equality

It is well known Cauchy's inequality is implied by Lagrange's identity. Bohr's inequality $|a-b|^2 \le p|a|^2 + q|b|^2$, where $\frac{1}{p}+\frac{1}{q}=1$, is implied by $|a-b|^2 + |\sqrt{p/q}\,a+\sqrt{q/p}\,b|^2 = p|a|^2 + q|b|^2$. L. K. Hua's determinant inequality $\det(I-A^*A)\cdot \det(I-B^*B)\le |\det(I-A^*B)|^2$ is implied by Hua's matrix equality $(I-B^*B)-(I-B^*A)(I-A^*A)^{-1}(I-A^*B)=-(A-B)^*(I-AA^*)(A-B)$. What other examples can be found?

- This should be community wiki. – Steve Huntsman Mar 8 '10 at 16:13
- For polynomial inequalities there is Hilbert's 17th problem: en.wikipedia.org/wiki/Hilbert%27s_seventeenth_problem – Qiaochu Yuan Mar 8 '10 at 16:18
- In view of Harald's comment, maybe it is also interesting to ask for examples of inequalities which are not implied by equalities. – Pete L. Clark Mar 8 '10 at 19:35
- IMHO, one of the greatest inequalities is Jensen's one. Lots of inequalities follow from it. But is it implied by any equality? – efq Mar 8 '10 at 19:59
- It's hard to formalize Pete's question, since if A is always at most B then B-A=C for some C that is always non-negative. One needs a notion of "obviously non-negative", which is often provided by a number's being a sum of squares. – gowers Mar 8 '10 at 20:03

Here are two very elementary examples.

It's not 100% different from your Cauchy's inequality example, but the fact that if X is a random variable, then $(\mathbb{E}X)^2\leq\mathbb{E}X^2$ is very useful and follows from the fact that the difference equals the variance of X.

The fact that $|\cos x|\leq 1$ and $|\sin x|\leq 1$ follows from the fact that $\cos^2x+\sin^2x=1$.

- Your second example is restricted to the real field. My examples are also true in the complex field. – Sunni Mar 8 '10 at 21:26
- Isn't your first example Cauchy's inequality applied to the scalar product on $L^2$ of the probability space? – Andrea Ferretti Mar 8 '10 at 21:51
- It's Cauchy-Schwarz applied to the scalar product of X and the constant function 1. – gowers Mar 8 '10 at 23:55
- Yes, that's what I meant. – Andrea Ferretti Mar 9 '10 at 13:18

In an arbitrary triangle whose circumcircle has radius $R$ and center $O$ and whose inscribed circle has radius $r$ and center $I$, we have Euler's inequality $$R\geq 2r$$ This follows from the equality $$|IO|^2=R(R-2r)$$ (There are many examples in Euclidean geometry; I think Ptolemy's inequality follows from an equality but I can't remember at the moment.)

Here are some more elementary examples.

- The easier cases of the AM-GM inequality follow from equalities, namely $a^2+b^2\geq 2ab$ because $a^2+b^2-2ab=(a-b)^2$, and $\frac{a^3+b^3+c^3}{3}\geq abc$ for $a,b,c\geq 0$ because $a^3+b^3+c^3-3abc=\frac{1}{2}(a+b+c)((a-b)^2+(b-c)^2+(c-a)^2)$.
- We have that for any triangle ABC the point X that minimizes $AX^2+BX^2+CX^2$ is the centroid G because of Leibniz's relation $AX^2+BX^2+CX^2=AG^2+BG^2+CG^2+3XG^2$.

- The second one is nice, while the first one is too elementary. – Sunni Mar 8 '10 at 23:47

Another "Hilbertian" example: Bessel's inequality follows from Bessel's equality. See, e.g., http://www.math.uri.edu/~quinn/web/mth629_Bessels.pdf.

And now (maybe off-topic, but the question is rather vague) an example of an inequality derived via an identity:

The (simple) identity is the so-called "multiplication of means", roughly: the expectation of a product of independent random variables equals the product of their expectations. The (not so simple) inequality is the Grothendieck one: http://www.ams.org/proc/1987-100-01/S0002-9939-1987-0883401-0/S0002-9939-1987-0883401-0.pdf. (Well, it is not obtained from that identity in an obvious and direct way, but the identity is an essential ingredient in the proof.)

- Bessel's inequality is also implied by Parseval's identity (given an orthonormal set, extend it to an orthonormal basis, apply Parseval, and throw away the unwanted terms). – Nate Eldredge Apr 26 '10 at 15:34

After A1 from the 1968 Putnam:

$$\frac{22}{7} - \pi = \int_0^1 \frac{x^4(1-x)^4}{1+x^2}\,dx \gt 0$$

I expect that there should be a proof of Jensen's inequality as an integral of a nonnegative quantity.

- Your equality cannot imply $355/113>\pi$, because 22/7 = 3.1429, 355/113 = 3.1416 – Sunni Mar 9 '10 at 14:04
- +1: Cool! I'm no Putnam connoisseur (on the contrary), but the 1968 specimen looks to be much more interesting than usual. Can anyone confirm/deny this? – Pete L. Clark Mar 9 '10 at 14:07
- @miwalin The idea is to construct an analogous integral of something which is obviously positive which has the value $355/113-\pi.$ A few examples were given in the paper linked. Given such an integral, we can compute better estimates for $\pi$ using rigorous numerical integration techniques. For example, we can use Simpson's Rule, whose error is bounded by the magnitude of the 4th derivative of the integrand on the interval, and we can bound that. @Pete Yes, the problems on that competition seem particularly nice, although that's subjective. – Douglas Zare Mar 9 '10 at 15:06

Lots of number theoretic inequalities are to be had from the binomial theorem. I remember reading the below argument as part of Erdős's proof of Bertrand's postulate:

Suppose that $n$ is a positive integer; then we have

$$4^n = (1+1)^{2n} = \sum_{j=0}^{2n}{2n\choose j}.$$

Thus, since ${2n\choose n}$ is the maximum value of the sequence $\left({2n\choose k}\right)$, we conclude that

$$4^n < (2n+1){2n\choose n}.$$

I thought it was neat.

Over real-closed fields such as $\langle \mathbb{R}, +, *, -, <, 0, 1 \rangle$, there is an interesting simple answer: every polynomial inequality is equivalent to a projected equation. E.g., given $p_1, p_2 \in \mathbb{Q}[\vec{x}]$ we have $\left( p_1 > p_2 \iff \exists z \text{ s.t. } z^2(p_1 - p_2) - 1 = 0 \right),$ and $\left( p_1 \geq p_2 \iff \exists z \text{ s.t. } p_1 - p_2 - z^2 = 0 \right).$

Geometrically, this is the simple observation that every semialgebraic set defined as the set of $n$-dimensional real vectors satisfying an inequality is the projection of an $(n+1)$-dimensional real-algebraic variety defined by a single equation. Semialgebraic sets defined by boolean combinations of equations and inequalities can be similarly encoded as the set of satisfying real vectors of (an) equation(s) by using the Rabinowitsch encoding $(p_1 = 0 \vee p_2 = 0 \iff p_1p_2 = 0)$ and $(p_1 = 0 \wedge p_2 = 0 \iff p_1^2 + p_2^2 = 0).$

Combining the above two observations, one obtains the fact that every semi-algebraic set $S \subseteq \mathbb{R}^n$ is the projection of a real algebraic variety $V \subseteq \mathbb{R}^{n+k}$, where $k$ is the number of inequality symbols appearing in the defining Tarski formula for $S$. In fact, due to a construction of Motzkin [``The Real Solution Set of a System of Algebraic Inequalities is the Projection of a Hypersurface in One More Dimension,'' Inequalities II, O.
Shisha, ed., 251-254, Academic Press (1970)], it is known that every such $S$ is in fact the projection of a real-algebraic variety in $\\mathbb{R}^{n+1}$.\n\n-\n\nTwo examples due to Hurwitz.\n\n\u2022 The AM-GM inequality. For the function $f=f(x_1,x_2,\\dots,x_n)$ let $Pf(x_1,x_2,\\dots,x_n)$ denote the sum of $f$ over the $n!$ quantities that result from all possible $n!$ permutations of the $x_i$. Then $$\\frac{x_1^n+x_2^n+\\dots+x_n^n}{n}-x_1x_2\\dots x_n=\\frac{1}{2\\ n!}(\\phi_1+\\phi_2+ \\dots \\phi_n),$$ where $$\\phi_k=P[(x_1^{n-k}-x_2^{n-k})(x_1-x_2)x_3x_4\\dots x_{k+1}]=P[(x_1-x_2)^2(x_1^{n-k-1}+\\dots x_2^{n-k-1})x_3x_4\\dots x_{k+1}]\\geq0.$$ The proof can be found in Inequalities by Beckenbach and Bellman.\n\n\u2022 The isoperimetric inequality. Let the boundary of $\\Omega\\subset \\mathbb R^2$ be a rectifiable Jordan curve $\\partial \\Omega=\\{((x(s),y(s))|\\ s\\in[0,2\\pi))\\}$. Then $$L^2-4\\pi A=2\\pi^2\\sum\\limits_{n=1}^{\\infty}\\left[(na_n-d_n)^2+(nb_n+c_n)^2+ (n^2-1)(c_n^2+d_n^2)\\right],$$ where $$x(s)=\\sum\\limits_{n=0}^{\\infty}(a_n\\cos ns+b_n\\sin ns),\\quad y(s)=\\sum\\limits_{n=0}^{\\infty}(c_n\\cos ns+d_n\\sin ns).$$\n\n-\nVery interesting. This is not included in Hardy, Littlewood and Polya's 'Inequalities'. 
\u2013\u00a0Sunni Apr 27 '10 at 3:49","date":"2016-05-05 14:18:20","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 1, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.9614534974098206, \"perplexity\": 374.3970724084499}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 20, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2016-18\/segments\/1461860127496.74\/warc\/CC-MAIN-20160428161527-00133-ip-10-239-7-51.ec2.internal.warc.gz\"}"}
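Several of the identities quoted in the thread above can be spot-checked numerically. A minimal Python sketch (the function name `lagrange_identity_sides` is mine, chosen for illustration) evaluates both sides of Lagrange's identity on random data; since the right-hand side is a sum of squares, the non-negativity of the left-hand side — i.e. Cauchy's inequality — follows immediately, which is the pattern behind most examples in the thread:

```python
import math
import random

def lagrange_identity_sides(a, b):
    """Return both sides of Lagrange's identity for real vectors a, b:
    (sum a_i^2)(sum b_i^2) - (sum a_i b_i)^2 = sum_{i<j} (a_i b_j - a_j b_i)^2.
    """
    n = len(a)
    lhs = (sum(x * x for x in a) * sum(y * y for y in b)
           - sum(x * y for x, y in zip(a, b)) ** 2)
    rhs = sum((a[i] * b[j] - a[j] * b[i]) ** 2
              for i in range(n) for j in range(i + 1, n))
    return lhs, rhs

random.seed(42)
a = [random.uniform(-10, 10) for _ in range(6)]
b = [random.uniform(-10, 10) for _ in range(6)]
lhs, rhs = lagrange_identity_sides(a, b)

assert math.isclose(lhs, rhs, rel_tol=1e-9)  # the identity holds
assert lhs >= 0  # a sum of squares, so Cauchy's inequality follows
```

The same pattern — writing the gap between the two sides of an inequality as a manifestly non-negative expression — is what the Bohr, Hua, Bessel and isoperimetric examples above all share.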
It could be the most hated or most coveted job at Treasury, depending on how firm you like your mattress. For many years two officials at Treasury have traded counting sheep for counting decimal places, spending the night at Parliament House guarding the budget papers. But this year no one at Treasury will be swapping their bed for boxes at Parliament House. Treasury won't say how long it's been common practice for departmental officials to spend the night locked up in parliament guarding the budget papers until they are opened and studied by journalists before the treasurer delivers the budget. What it will say is that it has found room for innovation in the logistical exercise, and the budget papers will stay securely at the printing facility until the morning of budget day. In previous years the budget papers would be delivered late on the Monday to a parliamentary basement, before heading upstairs on the Tuesday. The later delivery time has removed the need for a budget sleep-over, but a Treasury spokesman confirmed as soon as they come off the truck they will constantly be monitored by department officials. "Treasury maintains an appropriate level of security at all times to protect the embargo on the budget papers," the spokesman said.
{ "redpajama_set_name": "RedPajamaC4" }
14
Eric Wearne is an associate professor of education at Georgia Gwinnett College, near Atlanta. Follow him on Twitter at @eric_wearne. For two or three days per week, students come together in a school building, attend classes with regular teachers, and have classmates. The rest of the week, they work on their own at home.
{ "redpajama_set_name": "RedPajamaC4" }
4,694
angular.module('valdr')

  /**
   * Exposes utility functions used in validators and valdr core.
   */
  .factory('valdrUtil', [function () {

    var substringAfterDot = function (string) {
      if (string.lastIndexOf('.') === -1) {
        return string;
      } else {
        return string.substring(string.lastIndexOf('.') + 1, string.length);
      }
    };

    var SLUG_CASE_REGEXP = /[A-Z]/g;
    var slugCase = function (string) {
      return string.replace(SLUG_CASE_REGEXP, function (letter, pos) {
        return (pos ? '-' : '') + letter.toLowerCase();
      });
    };

    /**
     * Converts the given validator name to a validation token. Uses the last part of the validator name after the
     * dot (if present) and converts camel case to slug case (fooBar -> foo-bar).
     * @param validatorName the validator name
     * @returns {string} the validation token
     */
    var validatorNameToToken = function (validatorName) {
      if (angular.isString(validatorName)) {
        var name = substringAfterDot(validatorName);
        name = slugCase(name);
        return 'valdr-' + name;
      } else {
        return validatorName;
      }
    };

    return {

      validatorNameToToken: validatorNameToToken,

      isNaN: function (value) {
        // `NaN` as a primitive is the only value that is not equal to itself
        // (perform the [[Class]] check first to avoid errors with some host objects in IE)
        return this.isNumber(value) && value !== +value;
      },

      isNumber: function (value) {
        var type = typeof value;
        return type === 'number' ||
          value && type === 'object' && Object.prototype.toString.call(value) === '[object Number]' || false;
      },

      has: function (object, key) {
        return object ? Object.prototype.hasOwnProperty.call(object, key) : false;
      },

      /**
       * @param value the value
       * @returns {boolean} true if the given value is defined and is not null, NaN, an empty string or an empty array
       */
      notEmpty: function (value) {
        if (this.isNaN(value)) {
          return false;
        }
        if (angular.isArray(value) && value.length === 0) {
          return false;
        }
        return angular.isDefined(value) && value !== '' && value !== null;
      },

      /**
       * @param value the value to validate
       * @returns {boolean} true if the given value is null, undefined, an empty string or an empty array; returns false for NaN
       */
      isEmpty: function (value) {
        if (this.isNaN(value)) {
          return false;
        }
        return !this.notEmpty(value);
      },

      /**
       * Checks if a string value starts with a given prefix.
       *
       * @param value the value
       * @param prefix the prefix
       * @returns {boolean} true if the given value starts with the given prefix.
       */
      startsWith: function (value, prefix) {
        return angular.isString(value) &&
          angular.isString(prefix) &&
          value.lastIndexOf(prefix, 0) === 0;
      }
    };
  }])
;
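The name-to-token conversion can be exercised outside Angular. The sketch below re-derives the same two steps (substring after the last dot, then slug-casing) as a standalone function, for illustration only; the real implementation is the factory above:

```javascript
// Standalone sketch of valdr's validatorNameToToken, for illustration only.
function validatorNameToToken(validatorName) {
  if (typeof validatorName !== 'string') {
    return validatorName;
  }
  // 1. Keep only the part after the last dot: 'my.ns.fooBar' -> 'fooBar'.
  //    (lastIndexOf returns -1 when there is no dot, so +1 keeps the whole string.)
  var name = validatorName.substring(validatorName.lastIndexOf('.') + 1);
  // 2. Camel case to slug case: 'fooBar' -> 'foo-bar'.
  name = name.replace(/[A-Z]/g, function (letter, pos) {
    return (pos ? '-' : '') + letter.toLowerCase();
  });
  return 'valdr-' + name;
}

console.log(validatorNameToToken('fooBar'));        // valdr-foo-bar
console.log(validatorNameToToken('my.ns.fooBar'));  // valdr-foo-bar
console.log(validatorNameToToken('size'));          // valdr-size
```

Non-string inputs are passed through unchanged, matching the `else` branch of the factory version.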
{ "redpajama_set_name": "RedPajamaGithub" }
5,741
<?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en_US" lang="en_US"> <head> <meta http-equiv="Content-Type" content="text/html; charset=utf-8" /> <!-- q3tabdialog.cpp --> <title>Qt 4.8: Q3TabDialog Class Reference</title> <link rel="stylesheet" type="text/css" href="style/style.css" /> <script src="scripts/jquery.js" type="text/javascript"></script> <script src="scripts/functions.js" type="text/javascript"></script> <link rel="stylesheet" type="text/css" href="style/superfish.css" /> <link rel="stylesheet" type="text/css" href="style/narrow.css" /> <!--[if IE]> <meta name="MSSmartTagsPreventParsing" content="true"> <meta http-equiv="imagetoolbar" content="no"> <![endif]--> <!--[if lt IE 7]> <link rel="stylesheet" type="text/css" href="style/style_ie6.css"> <![endif]--> <!--[if IE 7]> <link rel="stylesheet" type="text/css" href="style/style_ie7.css"> <![endif]--> <!--[if IE 8]> <link rel="stylesheet" type="text/css" href="style/style_ie8.css"> <![endif]--> <script src="scripts/superfish.js" type="text/javascript"></script> <script src="scripts/narrow.js" type="text/javascript"></script> </head> <body class="" onload="CheckEmptyAndLoadList();"> <div class="header" id="qtdocheader"> <div class="content"> <div id="nav-logo"> <a href="index.html">Home</a></div> <a href="index.html" class="qtref"><span>Qt Reference Documentation</span></a> <div id="narrowsearch"></div> <div id="nav-topright"> <ul> <li class="nav-topright-home"><a href="http://qt.digia.com/">Qt HOME</a></li> <li class="nav-topright-dev"><a href="http://qt-project.org/">DEV</a></li> <li class="nav-topright-doc nav-topright-doc-active"><a href="http://qt-project.org/doc/"> DOC</a></li> <li class="nav-topright-blog"><a href="http://blog.qt.digia.com/">BLOG</a></li> </ul> </div> <div id="shortCut"> <ul> <li class="shortCut-topleft-inactive"><span><a 
href="index.html">Qt 4.8</a></span></li> <li class="shortCut-topleft-active"><a href="http://qt-project.org/doc/">ALL VERSIONS </a></li> </ul> </div> <ul class="sf-menu" id="narrowmenu"> <li><a href="#">API Lookup</a> <ul> <li><a href="classes.html">Class index</a></li> <li><a href="functions.html">Function index</a></li> <li><a href="modules.html">Modules</a></li> <li><a href="namespaces.html">Namespaces</a></li> <li><a href="qtglobal.html">Global Declarations</a></li> <li><a href="qdeclarativeelements.html">QML elements</a></li> </ul> </li> <li><a href="#">Qt Topics</a> <ul> <li><a href="qt-basic-concepts.html">Programming with Qt</a></li> <li><a href="qtquick.html">Device UIs &amp; Qt Quick</a></li> <li><a href="qt-gui-concepts.html">UI Design with Qt</a></li> <li><a href="supported-platforms.html">Supported Platforms</a></li> <li><a href="technology-apis.html">Qt and Key Technologies</a></li> <li><a href="best-practices.html">How-To's and Best Practices</a></li> </ul> </li> <li><a href="#">Examples</a> <ul> <li><a href="all-examples.html">Examples</a></li> <li><a href="tutorials.html">Tutorials</a></li> <li><a href="demos.html">Demos</a></li> <li><a href="qdeclarativeexamples.html">QML Examples</a></li> </ul> </li> </ul> </div> </div> <div class="wrapper"> <div class="hd"> <span></span> </div> <div class="bd group"> <div class="sidebar"> <div class="searchlabel"> Search index:</div> <div class="search" id="sidebarsearch"> <form id="qtdocsearch" action="" onsubmit="return false;"> <fieldset> <input type="text" name="searchstring" id="pageType" value="" /> <div id="resultdialog"> <a href="#" id="resultclose">Close</a> <p id="resultlinks" class="all"><a href="#" id="showallresults">All</a> | <a href="#" id="showapiresults">API</a> | <a href="#" id="showarticleresults">Articles</a> | <a href="#" id="showexampleresults">Examples</a></p> <p id="searchcount" class="all"><span id="resultcount"></span><span id="apicount"></span><span id="articlecount"></span><span 
id="examplecount"></span>&nbsp;results:</p> <ul id="resultlist" class="all"> </ul> </div> </fieldset> </form> </div> <div class="box first bottombar" id="lookup"> <h2 title="API Lookup"><span></span> API Lookup</h2> <div id="list001" class="list"> <ul id="ul001" > <li class="defaultLink"><a href="classes.html">Class index</a></li> <li class="defaultLink"><a href="functions.html">Function index</a></li> <li class="defaultLink"><a href="modules.html">Modules</a></li> <li class="defaultLink"><a href="namespaces.html">Namespaces</a></li> <li class="defaultLink"><a href="qtglobal.html">Global Declarations</a></li> <li class="defaultLink"><a href="qdeclarativeelements.html">QML elements</a></li> </ul> </div> </div> <div class="box bottombar" id="topics"> <h2 title="Qt Topics"><span></span> Qt Topics</h2> <div id="list002" class="list"> <ul id="ul002" > <li class="defaultLink"><a href="qt-basic-concepts.html">Programming with Qt</a></li> <li class="defaultLink"><a href="qtquick.html">Device UIs &amp; Qt Quick</a></li> <li class="defaultLink"><a href="qt-gui-concepts.html">UI Design with Qt</a></li> <li class="defaultLink"><a href="supported-platforms.html">Supported Platforms</a></li> <li class="defaultLink"><a href="technology-apis.html">Qt and Key Technologies</a></li> <li class="defaultLink"><a href="best-practices.html">How-To's and Best Practices</a></li> </ul> </div> </div> <div class="box" id="examples"> <h2 title="Examples"><span></span> Examples</h2> <div id="list003" class="list"> <ul id="ul003"> <li class="defaultLink"><a href="all-examples.html">Examples</a></li> <li class="defaultLink"><a href="tutorials.html">Tutorials</a></li> <li class="defaultLink"><a href="demos.html">Demos</a></li> <li class="defaultLink"><a href="qdeclarativeexamples.html">QML Examples</a></li> </ul> </div> </div> </div> <div class="wrap"> <div class="toolbar"> <div class="breadcrumb toolblock"> <ul> <li class="first"><a href="index.html">Home</a></li> <!-- Breadcrumbs go here --> 
<li><a href="modules.html">Modules</a></li> <li>Qt3SupportLight</li> <li>Q3TabDialog</li> </ul> </div> <div class="toolbuttons toolblock"> <ul> <li id="smallA" class="t_button">A</li> <li id="medA" class="t_button active">A</li> <li id="bigA" class="t_button">A</li> <li id="print" class="t_button"><a href="javascript:this.print();"> <span>Print</span></a></li> </ul> </div> </div> <div class="content mainContent"> <div class="toc"> <h3><a name="toc">Contents</a></h3> <ul> <li class="level1"><a href="#public-functions">Public Functions</a></li> <li class="level1"><a href="#signals">Signals</a></li> <li class="level1"><a href="#protected-functions">Protected Functions</a></li> <li class="level1"><a href="#details">Detailed Description</a></li> </ul> </div> <h1 class="title">Q3TabDialog Class Reference</h1> <!-- $$$Q3TabDialog-brief --> <p>The Q3TabDialog class provides a stack of tabbed widgets. <a href="#details">More...</a></p> <!-- @@@Q3TabDialog --> <pre class="cpp"> <span class="preprocessor">#include &lt;Q3TabDialog&gt;</span></pre><p><b>This class is part of the Qt 3 support library.</b> It is provided to keep old source code working. We strongly advise against using it in new code. 
See <a href="porting4.html#qtabdialog">Porting to Qt 4</a> for more information.</p> <p><b>Inherits: </b><a href="qdialog.html">QDialog</a>.</p> <ul> <li><a href="q3tabdialog-members.html">List of all members, including inherited members</a></li> <li><a href="q3tabdialog-obsolete.html">Obsolete members</a></li> </ul> <a name="public-functions"></a> <h2>Public Functions</h2> <table class="alignedsummary"> <tr><td class="memItemLeft rightAlign topAlign"> </td><td class="memItemRight bottomAlign"><b><a href="q3tabdialog.html#Q3TabDialog">Q3TabDialog</a></b> ( QWidget * <i>parent</i> = 0, const char * <i>name</i> = 0, bool <i>modal</i> = false, Qt::WindowFlags <i>f</i> = 0 )</td></tr> <tr><td class="memItemLeft rightAlign topAlign"> </td><td class="memItemRight bottomAlign"><b><a href="q3tabdialog.html#dtor.Q3TabDialog">~Q3TabDialog</a></b> ()</td></tr> <tr><td class="memItemLeft rightAlign topAlign"> void </td><td class="memItemRight bottomAlign"><b><a href="q3tabdialog.html#addTab">addTab</a></b> ( QWidget * <i>child</i>, const QString &amp; <i>label</i> )</td></tr> <tr><td class="memItemLeft rightAlign topAlign"> void </td><td class="memItemRight bottomAlign"><b><a href="q3tabdialog.html#addTab-2">addTab</a></b> ( QWidget * <i>child</i>, const QIcon &amp; <i>iconset</i>, const QString &amp; <i>label</i> )</td></tr> <tr><td class="memItemLeft rightAlign topAlign"> void </td><td class="memItemRight bottomAlign"><b><a href="q3tabdialog.html#changeTab">changeTab</a></b> ( QWidget * <i>w</i>, const QIcon &amp; <i>iconset</i>, const QString &amp; <i>label</i> )</td></tr> <tr><td class="memItemLeft rightAlign topAlign"> void </td><td class="memItemRight bottomAlign"><b><a href="q3tabdialog.html#changeTab-2">changeTab</a></b> ( QWidget * <i>w</i>, const QString &amp; <i>label</i> )</td></tr> <tr><td class="memItemLeft rightAlign topAlign"> QWidget * </td><td class="memItemRight bottomAlign"><b><a href="q3tabdialog.html#currentPage">currentPage</a></b> () const</td></tr> 
<tr><td class="memItemLeft rightAlign topAlign"> bool </td><td class="memItemRight bottomAlign"><b><a href="q3tabdialog.html#hasApplyButton">hasApplyButton</a></b> () const</td></tr> <tr><td class="memItemLeft rightAlign topAlign"> bool </td><td class="memItemRight bottomAlign"><b><a href="q3tabdialog.html#hasCancelButton">hasCancelButton</a></b> () const</td></tr> <tr><td class="memItemLeft rightAlign topAlign"> bool </td><td class="memItemRight bottomAlign"><b><a href="q3tabdialog.html#hasDefaultButton">hasDefaultButton</a></b> () const</td></tr> <tr><td class="memItemLeft rightAlign topAlign"> bool </td><td class="memItemRight bottomAlign"><b><a href="q3tabdialog.html#hasHelpButton">hasHelpButton</a></b> () const</td></tr> <tr><td class="memItemLeft rightAlign topAlign"> bool </td><td class="memItemRight bottomAlign"><b><a href="q3tabdialog.html#hasOkButton">hasOkButton</a></b> () const</td></tr> <tr><td class="memItemLeft rightAlign topAlign"> void </td><td class="memItemRight bottomAlign"><b><a href="q3tabdialog.html#insertTab">insertTab</a></b> ( QWidget * <i>child</i>, const QString &amp; <i>label</i>, int <i>index</i> = -1 )</td></tr> <tr><td class="memItemLeft rightAlign topAlign"> void </td><td class="memItemRight bottomAlign"><b><a href="q3tabdialog.html#insertTab-2">insertTab</a></b> ( QWidget * <i>child</i>, const QIcon &amp; <i>iconset</i>, const QString &amp; <i>label</i>, int <i>index</i> = -1 )</td></tr> <tr><td class="memItemLeft rightAlign topAlign"> bool </td><td class="memItemRight bottomAlign"><b><a href="q3tabdialog.html#isTabEnabled">isTabEnabled</a></b> ( QWidget * <i>w</i> ) const</td></tr> <tr><td class="memItemLeft rightAlign topAlign"> void </td><td class="memItemRight bottomAlign"><b><a href="q3tabdialog.html#removePage">removePage</a></b> ( QWidget * <i>w</i> )</td></tr> <tr><td class="memItemLeft rightAlign topAlign"> void </td><td class="memItemRight bottomAlign"><b><a href="q3tabdialog.html#setApplyButton">setApplyButton</a></b> ( 
const QString &amp; <i>text</i> )</td></tr> <tr><td class="memItemLeft rightAlign topAlign"> void </td><td class="memItemRight bottomAlign"><b><a href="q3tabdialog.html#setApplyButton-2">setApplyButton</a></b> ()</td></tr> <tr><td class="memItemLeft rightAlign topAlign"> void </td><td class="memItemRight bottomAlign"><b><a href="q3tabdialog.html#setCancelButton">setCancelButton</a></b> ( const QString &amp; <i>text</i> )</td></tr> <tr><td class="memItemLeft rightAlign topAlign"> void </td><td class="memItemRight bottomAlign"><b><a href="q3tabdialog.html#setCancelButton-2">setCancelButton</a></b> ()</td></tr> <tr><td class="memItemLeft rightAlign topAlign"> void </td><td class="memItemRight bottomAlign"><b><a href="q3tabdialog.html#setDefaultButton">setDefaultButton</a></b> ( const QString &amp; <i>text</i> )</td></tr> <tr><td class="memItemLeft rightAlign topAlign"> void </td><td class="memItemRight bottomAlign"><b><a href="q3tabdialog.html#setDefaultButton-2">setDefaultButton</a></b> ()</td></tr> <tr><td class="memItemLeft rightAlign topAlign"> void </td><td class="memItemRight bottomAlign"><b><a href="q3tabdialog.html#setFont">setFont</a></b> ( const QFont &amp; <i>font</i> )</td></tr> <tr><td class="memItemLeft rightAlign topAlign"> void </td><td class="memItemRight bottomAlign"><b><a href="q3tabdialog.html#setHelpButton">setHelpButton</a></b> ( const QString &amp; <i>text</i> )</td></tr> <tr><td class="memItemLeft rightAlign topAlign"> void </td><td class="memItemRight bottomAlign"><b><a href="q3tabdialog.html#setHelpButton-2">setHelpButton</a></b> ()</td></tr> <tr><td class="memItemLeft rightAlign topAlign"> void </td><td class="memItemRight bottomAlign"><b><a href="q3tabdialog.html#setOkButton">setOkButton</a></b> ( const QString &amp; <i>text</i> )</td></tr> <tr><td class="memItemLeft rightAlign topAlign"> void </td><td class="memItemRight bottomAlign"><b><a href="q3tabdialog.html#setOkButton-2">setOkButton</a></b> ()</td></tr> <tr><td class="memItemLeft 
rightAlign topAlign"> void </td><td class="memItemRight bottomAlign"><b><a href="q3tabdialog.html#setTabEnabled">setTabEnabled</a></b> ( QWidget * <i>w</i>, bool <i>enable</i> )</td></tr> <tr><td class="memItemLeft rightAlign topAlign"> void </td><td class="memItemRight bottomAlign"><b><a href="q3tabdialog.html#showPage">showPage</a></b> ( QWidget * <i>w</i> )</td></tr> <tr><td class="memItemLeft rightAlign topAlign"> QString </td><td class="memItemRight bottomAlign"><b><a href="q3tabdialog.html#tabLabel">tabLabel</a></b> ( QWidget * <i>w</i> )</td></tr> </table> <ul> <li class="fn">8 public functions inherited from <a href="qdialog.html#public-functions">QDialog</a></li> <li class="fn">221 public functions inherited from <a href="qwidget.html#public-functions">QWidget</a></li> <li class="fn">29 public functions inherited from <a href="qobject.html#public-functions">QObject</a></li> <li class="fn">13 public functions inherited from <a href="qpaintdevice.html#public-functions">QPaintDevice</a></li> </ul> <a name="signals"></a> <h2>Signals</h2> <table class="alignedsummary"> <tr><td class="memItemLeft rightAlign topAlign"> void </td><td class="memItemRight bottomAlign"><b><a href="q3tabdialog.html#aboutToShow">aboutToShow</a></b> ()</td></tr> <tr><td class="memItemLeft rightAlign topAlign"> void </td><td class="memItemRight bottomAlign"><b><a href="q3tabdialog.html#applyButtonPressed">applyButtonPressed</a></b> ()</td></tr> <tr><td class="memItemLeft rightAlign topAlign"> void </td><td class="memItemRight bottomAlign"><b><a href="q3tabdialog.html#cancelButtonPressed">cancelButtonPressed</a></b> ()</td></tr> <tr><td class="memItemLeft rightAlign topAlign"> void </td><td class="memItemRight bottomAlign"><b><a href="q3tabdialog.html#currentChanged">currentChanged</a></b> ( QWidget * <i>widget</i> )</td></tr> <tr><td class="memItemLeft rightAlign topAlign"> void </td><td class="memItemRight bottomAlign"><b><a 
href="q3tabdialog.html#defaultButtonPressed">defaultButtonPressed</a></b> ()</td></tr> <tr><td class="memItemLeft rightAlign topAlign"> void </td><td class="memItemRight bottomAlign"><b><a href="q3tabdialog.html#helpButtonPressed">helpButtonPressed</a></b> ()</td></tr> <tr><td class="memItemLeft rightAlign topAlign"> void </td><td class="memItemRight bottomAlign"><b><a href="q3tabdialog.html#selected">selected</a></b> ( const QString &amp; <i>name</i> )</td></tr> </table> <ul> <li class="fn">3 signals inherited from <a href="qdialog.html#signals">QDialog</a></li> <li class="fn">1 signal inherited from <a href="qwidget.html#signals">QWidget</a></li> <li class="fn">1 signal inherited from <a href="qobject.html#signals">QObject</a></li> </ul> <a name="protected-functions"></a> <h2>Protected Functions</h2> <table class="alignedsummary"> <tr><td class="memItemLeft rightAlign topAlign"> void </td><td class="memItemRight bottomAlign"><b><a href="q3tabdialog.html#setTabBar">setTabBar</a></b> ( QTabBar * <i>tb</i> )</td></tr> <tr><td class="memItemLeft rightAlign topAlign"> QTabBar * </td><td class="memItemRight bottomAlign"><b><a href="q3tabdialog.html#tabBar">tabBar</a></b> () const</td></tr> </table> <a name="reimplemented-protected-functions"></a> <h2>Reimplemented Protected Functions</h2> <table class="alignedsummary"> <tr><td class="memItemLeft rightAlign topAlign"> virtual void </td><td class="memItemRight bottomAlign"><b><a href="q3tabdialog.html#paintEvent">paintEvent</a></b> ( QPaintEvent * )</td></tr> <tr><td class="memItemLeft rightAlign topAlign"> virtual void </td><td class="memItemRight bottomAlign"><b><a href="q3tabdialog.html#resizeEvent">resizeEvent</a></b> ( QResizeEvent * <i>e</i> )</td></tr> <tr><td class="memItemLeft rightAlign topAlign"> virtual void </td><td class="memItemRight bottomAlign"><b><a href="q3tabdialog.html#showEvent">showEvent</a></b> ( QShowEvent * <i>e</i> )</td></tr> </table> <ul> <li class="fn">7 protected functions inherited from <a 
href="qdialog.html#protected-functions">QDialog</a></li> <li class="fn">37 protected functions inherited from <a href="qwidget.html#protected-functions">QWidget</a></li> <li class="fn">8 protected functions inherited from <a href="qobject.html#protected-functions">QObject</a></li> <li class="fn">1 protected function inherited from <a href="qpaintdevice.html#protected-functions">QPaintDevice</a></li> </ul> <h3>Additional Inherited Members</h3> <ul> <li class="fn">2 properties inherited from <a href="qdialog.html#properties">QDialog</a></li> <li class="fn">58 properties inherited from <a href="qwidget.html#properties">QWidget</a></li> <li class="fn">1 property inherited from <a href="qobject.html#properties">QObject</a></li> <li class="fn">5 public slots inherited from <a href="qdialog.html#public-slots">QDialog</a></li> <li class="fn">19 public slots inherited from <a href="qwidget.html#public-slots">QWidget</a></li> <li class="fn">1 public slot inherited from <a href="qobject.html#public-slots">QObject</a></li> <li class="fn">4 static public members inherited from <a href="qwidget.html#static-public-members">QWidget</a></li> <li class="fn">7 static public members inherited from <a href="qobject.html#static-public-members">QObject</a></li> <li class="fn">1 protected slot inherited from <a href="qwidget.html#protected-slots">QWidget</a></li> </ul> <a name="details"></a> <!-- $$$Q3TabDialog-description --> <div class="descr"> <h2>Detailed Description</h2> <p>The Q3TabDialog class provides a stack of tabbed widgets.</p> <p>A tabbed dialog is one in which several &quot;tab pages&quot; are available. By clicking on a tab page's tab or by pressing the indicated Alt+<i>letter</i> key combination, the user can select which tab page they want to use.</p> <p>Q3TabDialog provides a tab bar consisting of single row of tabs at the top; each tab has an associated widget which is that tab's tab page. 
In addition, Q3TabDialog provides an OK button and the following optional buttons: Apply, Cancel, Defaults and Help.</p> <p>The normal way to use Q3TabDialog is to do the following in the constructor:</p> <ol class="1"> <li>Create a Q3TabDialog.</li> <li>Create a <a href="qwidget.html">QWidget</a> for each of the pages in the tab dialog, insert children into it, set up geometry management for it, and use <a href="q3tabdialog.html#addTab">addTab</a>() (or <a href="q3tabdialog.html#insertTab">insertTab</a>()) to set up a tab and keyboard accelerator for it.</li> <li>Set up the buttons for the tab dialog using <a href="q3tabdialog.html#setOkButton">setOkButton</a>(), <a href="q3tabdialog.html#setApplyButton">setApplyButton</a>(), setDefaultsButton(), <a href="q3tabdialog.html#setCancelButton">setCancelButton</a>() and <a href="q3tabdialog.html#setHelpButton">setHelpButton</a>().</li> <li>Connect to the signals and slots.</li> </ol> <p>If you don't call <a href="q3tabdialog.html#addTab">addTab</a>() the page you have created will not be visible. Don't confuse the object name you supply to the <a href="qwidget.html">QWidget</a> constructor and the tab label you supply to <a href="q3tabdialog.html#addTab">addTab</a>(); <a href="q3tabdialog.html#addTab">addTab</a>() takes user-visible name that appears on the widget's tab and may identify an accelerator, whereas the widget name is used primarily for debugging.</p> <p>Almost all applications have to connect the <a href="q3tabdialog.html#applyButtonPressed">applyButtonPressed</a>() signal to something. 
<a href="q3tabdialog.html#applyButtonPressed">applyButtonPressed</a>() is emitted when either OK or Apply is clicked, and your slot must copy the dialog's state into the application.</p> <p>There are also several other signals which may be useful:</p> <ul> <li><a href="q3tabdialog.html#cancelButtonPressed">cancelButtonPressed</a>() is emitted when the user clicks Cancel.</li> <li><a href="q3tabdialog.html#defaultButtonPressed">defaultButtonPressed</a>() is emitted when the user clicks Defaults; the slot it is connected to should reset the state of the dialog to the application defaults.</li> <li><a href="q3tabdialog.html#helpButtonPressed">helpButtonPressed</a>() is emitted when the user clicks Help.</li> <li><a href="q3tabdialog.html#aboutToShow">aboutToShow</a>() is emitted at the start of show(); if there is any chance that the state of the application may change between the creation of the tab dialog and the time show() is called, you must connect this signal to a slot that resets the state of the dialog.</li> <li><a href="q3tabdialog.html#currentChanged">currentChanged</a>() is emitted when the user selects a page.</li> </ul> <p>Each tab is either enabled or disabled at any given time (see <a href="q3tabdialog.html#setTabEnabled">setTabEnabled</a>()). If a tab is enabled the tab text is drawn in black and the user can select that tab. If it is disabled the tab is drawn in a different way and the user cannot select that tab. Note that even if a tab is disabled, the page can still be visible; for example, if all of the tabs happen to be disabled.</p> <p>You can change a tab's label and iconset using <a href="q3tabdialog.html#changeTab">changeTab</a>(). A tab page can be removed with <a href="q3tabdialog.html#removePage">removePage</a>() and shown with <a href="q3tabdialog.html#showPage">showPage</a>(). 
The current page is given by <a href="q3tabdialog.html#currentPage">currentPage</a>().</p> <p>Q3TabDialog does not support tabs on the sides or bottom, nor can you set or retrieve the visible page. If you need more functionality than Q3TabDialog provides, consider creating a <a href="qdialog.html">QDialog</a> and using a <a href="qtabbar.html">QTabBar</a> with QTabWidgets.</p> <p>Most of the functionality in Q3TabDialog is provided by a <a href="qtabwidget.html">QTabWidget</a>.</p> </div> <!-- @@@Q3TabDialog --> <div class="func"> <h2>Member Function Documentation</h2> <!-- $$$Q3TabDialog[overload1]$$$Q3TabDialogQWidget*constchar*boolQt::WindowFlags --> <h3 class="fn"><a name="Q3TabDialog"></a>Q3TabDialog::<span class="name">Q3TabDialog</span> ( <span class="type"><a href="qwidget.html">QWidget</a></span> * <i>parent</i> = 0, const <span class="type">char</span> * <i>name</i> = 0, <span class="type">bool</span> <i>modal</i> = false, <span class="type"><a href="qt.html#WindowType-enum">Qt::WindowFlags</a></span> <i>f</i> = 0 )</h3> <p>Constructs a <a href="q3tabdialog.html" class="compat">Q3TabDialog</a> with only an OK button. The <i>parent</i>, <i>name</i>, <i>modal</i> and widget flag, <i>f</i>, arguments are passed on to the <a href="qdialog.html">QDialog</a> constructor.</p> <!-- @@@Q3TabDialog --> <!-- $$$~Q3TabDialog[overload1]$$$~Q3TabDialog --> <h3 class="fn"><a name="dtor.Q3TabDialog"></a>Q3TabDialog::<span class="name">~Q3TabDialog</span> ()</h3> <p>Destroys the tab dialog.</p> <!-- @@@~Q3TabDialog --> <!-- $$$aboutToShow[overload1]$$$aboutToShow --> <h3 class="fn"><a name="aboutToShow"></a><span class="type">void</span> Q3TabDialog::<span class="name">aboutToShow</span> ()<tt> [signal]</tt></h3> <p>This signal is emitted by show() when it is time to set the state of the dialog's contents. 
The dialog should reflect the current state of the application when it appears; if there is any possibility that the state of the application may change between the time you call <a href="q3tabdialog.html#Q3TabDialog">Q3TabDialog</a>() and show(), you should set the dialog's state in a slot and connect this signal to it.</p> <p>This applies mainly to <a href="q3tabdialog.html" class="compat">Q3TabDialog</a> objects that are kept around hidden, rather than being created, shown, and deleted afterwards.</p> <p><b>See also </b><a href="q3tabdialog.html#applyButtonPressed">applyButtonPressed</a>(), <a href="qwidget.html#show">QWidget::show</a>(), and <a href="q3tabdialog.html#cancelButtonPressed">cancelButtonPressed</a>().</p> <!-- @@@aboutToShow --> <!-- $$$addTab[overload1]$$$addTabQWidget*constQString& --> <h3 class="fn"><a name="addTab"></a><span class="type">void</span> Q3TabDialog::<span class="name">addTab</span> ( <span class="type"><a href="qwidget.html">QWidget</a></span> * <i>child</i>, const <span class="type"><a href="qstring.html">QString</a></span> &amp; <i>label</i> )</h3> <p>Adds another tab and page to the tab view.</p> <p>The new page is <i>child</i>; the tab's label is <i>label</i>. Note the difference between the widget name (which you supply to widget constructors and to <a href="q3tabdialog.html#setTabEnabled">setTabEnabled</a>(), for example) and the tab label. 
The name is internal to the program and invariant, whereas the label is shown on-screen and may vary according to language and other factors.</p> <p>If the tab's <i>label</i> contains an ampersand, the letter following the ampersand is used as an accelerator for the tab, e.g&#x2e; if the label is &quot;Bro&amp;wse&quot; then Alt+W becomes an accelerator which will move the focus to this tab.</p> <p>If you call addTab() after show() the screen will flicker and the user may be confused.</p> <p><b>See also </b><a href="q3tabdialog.html#insertTab">insertTab</a>().</p> <!-- @@@addTab --> <!-- $$$addTab$$$addTabQWidget*constQIcon&constQString& --> <h3 class="fn"><a name="addTab-2"></a><span class="type">void</span> Q3TabDialog::<span class="name">addTab</span> ( <span class="type"><a href="qwidget.html">QWidget</a></span> * <i>child</i>, const <span class="type"><a href="qicon.html">QIcon</a></span> &amp; <i>iconset</i>, const <span class="type"><a href="qstring.html">QString</a></span> &amp; <i>label</i> )</h3> <p>This is an overloaded function.</p> <p>This version of the function shows the <i>iconset</i> as well as the <i>label</i> on the tab of <i>child</i>.</p> <!-- @@@addTab --> <!-- $$$applyButtonPressed[overload1]$$$applyButtonPressed --> <h3 class="fn"><a name="applyButtonPressed"></a><span class="type">void</span> Q3TabDialog::<span class="name">applyButtonPressed</span> ()<tt> [signal]</tt></h3> <p>This signal is emitted when either the Apply or OK button is clicked.</p> <p>It should be connected to a slot (or several slots) that change the application's state according to the state of the dialog.</p> <p><b>See also </b><a href="q3tabdialog.html#cancelButtonPressed">cancelButtonPressed</a>(), <a href="q3tabdialog.html#defaultButtonPressed">defaultButtonPressed</a>(), and <a href="q3tabdialog.html#setApplyButton">setApplyButton</a>().</p> <!-- @@@applyButtonPressed --> <!-- $$$cancelButtonPressed[overload1]$$$cancelButtonPressed --> <h3 class="fn"><a 
name="cancelButtonPressed"></a><span class="type">void</span> Q3TabDialog::<span class="name">cancelButtonPressed</span> ()<tt> [signal]</tt></h3> <p>This signal is emitted when the Cancel button is clicked. It is automatically connected to <a href="qdialog.html#reject">QDialog::reject</a>(), which will hide the dialog.</p> <p>The Cancel button should not change the application's state at all, so you should generally not need to connect it to any slot.</p> <p><b>See also </b><a href="q3tabdialog.html#applyButtonPressed">applyButtonPressed</a>(), <a href="q3tabdialog.html#defaultButtonPressed">defaultButtonPressed</a>(), and <a href="q3tabdialog.html#setCancelButton">setCancelButton</a>().</p> <!-- @@@cancelButtonPressed --> <!-- $$$changeTab[overload1]$$$changeTabQWidget*constQIcon&constQString& --> <h3 class="fn"><a name="changeTab"></a><span class="type">void</span> Q3TabDialog::<span class="name">changeTab</span> ( <span class="type"><a href="qwidget.html">QWidget</a></span> * <i>w</i>, const <span class="type"><a href="qicon.html">QIcon</a></span> &amp; <i>iconset</i>, const <span class="type"><a href="qstring.html">QString</a></span> &amp; <i>label</i> )</h3> <p>Changes tab page <i>w</i>'s iconset to <i>iconset</i> and label to <i>label</i>.</p> <!-- @@@changeTab --> <!-- $$$changeTab$$$changeTabQWidget*constQString& --> <h3 class="fn"><a name="changeTab-2"></a><span class="type">void</span> Q3TabDialog::<span class="name">changeTab</span> ( <span class="type"><a href="qwidget.html">QWidget</a></span> * <i>w</i>, const <span class="type"><a href="qstring.html">QString</a></span> &amp; <i>label</i> )</h3> <p>This is an overloaded function.</p> <p>Defines a new <i>label</i> for the tab of page <i>w</i>.</p> <!-- @@@changeTab --> <!-- $$$currentChanged[overload1]$$$currentChangedQWidget* --> <h3 class="fn"><a name="currentChanged"></a><span class="type">void</span> Q3TabDialog::<span class="name">currentChanged</span> ( <span class="type"><a
href="qwidget.html">QWidget</a></span> * <i>widget</i> )<tt> [signal]</tt></h3> <p>This signal is emitted whenever the current page changes. <i>widget</i> is the new current page.</p> <p><b>See also </b><a href="q3tabdialog.html#currentPage">currentPage</a>(), <a href="q3tabdialog.html#showPage">showPage</a>(), and <a href="q3tabdialog.html#tabLabel">tabLabel</a>().</p> <!-- @@@currentChanged --> <!-- $$$currentPage[overload1]$$$currentPage --> <h3 class="fn"><a name="currentPage"></a><span class="type"><a href="qwidget.html">QWidget</a></span> * Q3TabDialog::<span class="name">currentPage</span> () const</h3> <p>Returns a pointer to the page currently being displayed by the tab dialog. The tab dialog does its best to make sure that this value is never 0 (but if you try hard enough, it can be).</p> <!-- @@@currentPage --> <!-- $$$defaultButtonPressed[overload1]$$$defaultButtonPressed --> <h3 class="fn"><a name="defaultButtonPressed"></a><span class="type">void</span> Q3TabDialog::<span class="name">defaultButtonPressed</span> ()<tt> [signal]</tt></h3> <p>This signal is emitted when the Defaults button is pressed. 
It should reset the dialog (but not the application) to the &quot;factory defaults&quot;.</p> <p>The application's state should not be changed until the user clicks Apply or OK.</p> <p><b>See also </b><a href="q3tabdialog.html#applyButtonPressed">applyButtonPressed</a>(), <a href="q3tabdialog.html#cancelButtonPressed">cancelButtonPressed</a>(), and <a href="q3tabdialog.html#setDefaultButton">setDefaultButton</a>().</p> <!-- @@@defaultButtonPressed --> <!-- $$$hasApplyButton[overload1]$$$hasApplyButton --> <h3 class="fn"><a name="hasApplyButton"></a><span class="type">bool</span> Q3TabDialog::<span class="name">hasApplyButton</span> () const</h3> <p>Returns true if the tab dialog has an Apply button; otherwise returns false.</p> <p><b>See also </b><a href="q3tabdialog.html#setApplyButton">setApplyButton</a>(), <a href="q3tabdialog.html#applyButtonPressed">applyButtonPressed</a>(), <a href="q3tabdialog.html#hasCancelButton">hasCancelButton</a>(), and <a href="q3tabdialog.html#hasDefaultButton">hasDefaultButton</a>().</p> <!-- @@@hasApplyButton --> <!-- $$$hasCancelButton[overload1]$$$hasCancelButton --> <h3 class="fn"><a name="hasCancelButton"></a><span class="type">bool</span> Q3TabDialog::<span class="name">hasCancelButton</span> () const</h3> <p>Returns true if the tab dialog has a Cancel button; otherwise returns false.</p> <p><b>See also </b><a href="q3tabdialog.html#setCancelButton">setCancelButton</a>(), <a href="q3tabdialog.html#cancelButtonPressed">cancelButtonPressed</a>(), <a href="q3tabdialog.html#hasApplyButton">hasApplyButton</a>(), and <a href="q3tabdialog.html#hasDefaultButton">hasDefaultButton</a>().</p> <!-- @@@hasCancelButton --> <!-- $$$hasDefaultButton[overload1]$$$hasDefaultButton --> <h3 class="fn"><a name="hasDefaultButton"></a><span class="type">bool</span> Q3TabDialog::<span class="name">hasDefaultButton</span> () const</h3> <p>Returns true if the tab dialog has a Defaults button; otherwise returns false.</p> <p><b>See also </b><a href="q3tabdialog.html#setDefaultButton">setDefaultButton</a>(), <a href="q3tabdialog.html#defaultButtonPressed">defaultButtonPressed</a>(), <a href="q3tabdialog.html#hasApplyButton">hasApplyButton</a>(), and <a href="q3tabdialog.html#hasCancelButton">hasCancelButton</a>().</p> <!-- @@@hasDefaultButton --> <!-- $$$hasHelpButton[overload1]$$$hasHelpButton --> <h3 class="fn"><a name="hasHelpButton"></a><span class="type">bool</span> Q3TabDialog::<span class="name">hasHelpButton</span> () const</h3> <p>Returns true if the tab dialog has a Help button; otherwise returns false.</p> <p><b>See also </b><a href="q3tabdialog.html#setHelpButton">setHelpButton</a>(), <a href="q3tabdialog.html#helpButtonPressed">helpButtonPressed</a>(), <a href="q3tabdialog.html#hasApplyButton">hasApplyButton</a>(), and <a href="q3tabdialog.html#hasCancelButton">hasCancelButton</a>().</p> <!-- @@@hasHelpButton --> <!-- $$$hasOkButton[overload1]$$$hasOkButton --> <h3 class="fn"><a name="hasOkButton"></a><span class="type">bool</span> Q3TabDialog::<span class="name">hasOkButton</span> () const</h3> <p>Returns true if the tab dialog has an OK button; otherwise returns false.</p> <p><b>See also </b><a href="q3tabdialog.html#setOkButton">setOkButton</a>(), <a href="q3tabdialog.html#hasApplyButton">hasApplyButton</a>(), <a href="q3tabdialog.html#hasCancelButton">hasCancelButton</a>(), and <a href="q3tabdialog.html#hasDefaultButton">hasDefaultButton</a>().</p> <!-- @@@hasOkButton --> <!-- $$$helpButtonPressed[overload1]$$$helpButtonPressed --> <h3 class="fn"><a name="helpButtonPressed"></a><span class="type">void</span> Q3TabDialog::<span class="name">helpButtonPressed</span> ()<tt> [signal]</tt></h3> <p>This signal is emitted when the Help button is pressed.
It could be used to present information about how to use the dialog.</p> <p><b>See also </b><a href="q3tabdialog.html#applyButtonPressed">applyButtonPressed</a>(), <a href="q3tabdialog.html#cancelButtonPressed">cancelButtonPressed</a>(), and <a href="q3tabdialog.html#setHelpButton">setHelpButton</a>().</p> <!-- @@@helpButtonPressed --> <!-- $$$insertTab[overload1]$$$insertTabQWidget*constQString&int --> <h3 class="fn"><a name="insertTab"></a><span class="type">void</span> Q3TabDialog::<span class="name">insertTab</span> ( <span class="type"><a href="qwidget.html">QWidget</a></span> * <i>child</i>, const <span class="type"><a href="qstring.html">QString</a></span> &amp; <i>label</i>, <span class="type">int</span> <i>index</i> = -1 )</h3> <p>Inserts another tab and page to the tab view.</p> <p>The new page is <i>child</i>; the tab's label is <i>label</i>. Note the difference between the widget name (which you supply to widget constructors and to <a href="q3tabdialog.html#setTabEnabled">setTabEnabled</a>(), for example) and the tab label. The name is internal to the program and invariant, whereas the label is shown on-screen and may vary according to language and other factors.</p> <p>If the tab's <i>label</i> contains an ampersand, the letter following the ampersand is used as an accelerator for the tab, e.g&#x2e; if the label is &quot;Bro&amp;wse&quot; then Alt+W becomes an accelerator which will move the focus to this tab.</p> <p>If <i>index</i> is not specified, the tab is simply added. 
Otherwise it is inserted at the specified position.</p> <p>If you call insertTab() after show(), the screen will flicker and the user may be confused.</p> <p><b>See also </b><a href="q3tabdialog.html#addTab">addTab</a>().</p> <!-- @@@insertTab --> <!-- $$$insertTab$$$insertTabQWidget*constQIcon&constQString&int --> <h3 class="fn"><a name="insertTab-2"></a><span class="type">void</span> Q3TabDialog::<span class="name">insertTab</span> ( <span class="type"><a href="qwidget.html">QWidget</a></span> * <i>child</i>, const <span class="type"><a href="qicon.html">QIcon</a></span> &amp; <i>iconset</i>, const <span class="type"><a href="qstring.html">QString</a></span> &amp; <i>label</i>, <span class="type">int</span> <i>index</i> = -1 )</h3> <p>This is an overloaded function.</p> <p>This version of the function shows the <i>iconset</i> as well as the <i>label</i> on the tab of <i>child</i>.</p> <!-- @@@insertTab --> <!-- $$$isTabEnabled[overload1]$$$isTabEnabledQWidget* --> <h3 class="fn"><a name="isTabEnabled"></a><span class="type">bool</span> Q3TabDialog::<span class="name">isTabEnabled</span> ( <span class="type"><a href="qwidget.html">QWidget</a></span> * <i>w</i> ) const</h3> <p>Returns true if the page <i>w</i> is enabled; otherwise returns false.</p> <p><b>See also </b><a href="q3tabdialog.html#setTabEnabled">setTabEnabled</a>() and <a href="qwidget.html#enabled-prop">QWidget::isEnabled</a>().</p> <!-- @@@isTabEnabled --> <!-- $$$paintEvent[overload1]$$$paintEventQPaintEvent* --> <h3 class="fn"><a name="paintEvent"></a><span class="type">void</span> Q3TabDialog::<span class="name">paintEvent</span> ( <span class="type"><a href="qpaintevent.html">QPaintEvent</a></span> * )<tt> [virtual protected]</tt></h3> <p>Reimplemented from <a href="qwidget.html#paintEvent">QWidget::paintEvent</a>().</p> <!-- @@@paintEvent --> <!-- $$$removePage[overload1]$$$removePageQWidget* --> <h3 class="fn"><a name="removePage"></a><span class="type">void</span> Q3TabDialog::<span 
class="name">removePage</span> ( <span class="type"><a href="qwidget.html">QWidget</a></span> * <i>w</i> )</h3> <p>Removes page <i>w</i> from this stack of widgets. Does not delete <i>w</i>.</p> <p><b>See also </b><a href="q3tabdialog.html#showPage">showPage</a>() and <a href="qtabwidget-qt3.html#removePage" class="compat">QTabWidget::removePage</a>().</p> <!-- @@@removePage --> <!-- $$$resizeEvent[overload1]$$$resizeEventQResizeEvent* --> <h3 class="fn"><a name="resizeEvent"></a><span class="type">void</span> Q3TabDialog::<span class="name">resizeEvent</span> ( <span class="type"><a href="qresizeevent.html">QResizeEvent</a></span> * <i>e</i> )<tt> [virtual protected]</tt></h3> <p>Reimplemented from <a href="qwidget.html#resizeEvent">QWidget::resizeEvent</a>().</p> <!-- @@@resizeEvent --> <!-- $$$selected[overload1]$$$selectedconstQString& --> <h3 class="fn"><a name="selected"></a><span class="type">void</span> Q3TabDialog::<span class="name">selected</span> ( const <span class="type"><a href="qstring.html">QString</a></span> &amp; <i>name</i> )<tt> [signal]</tt></h3> <p>This signal is emitted whenever a tab is selected (raised), including during the first show(). <i>name</i> is the name of the selected tab.</p> <p><b>See also </b><a href="qwidget.html#raise">raise</a>().</p> <!-- @@@selected --> <!-- $$$setApplyButton[overload1]$$$setApplyButtonconstQString& --> <h3 class="fn"><a name="setApplyButton"></a><span class="type">void</span> Q3TabDialog::<span class="name">setApplyButton</span> ( const <span class="type"><a href="qstring.html">QString</a></span> &amp; <i>text</i> )</h3> <p>Adds an Apply button to the dialog. 
The button's text is set to <i>text</i>.</p> <p>The Apply button should apply the current settings in the dialog box to the application while keeping the dialog visible.</p> <p>When Apply is clicked, the <a href="q3tabdialog.html#applyButtonPressed">applyButtonPressed</a>() signal is emitted.</p> <p>If <i>text</i> is an empty string, no button is shown.</p> <p><b>See also </b><a href="q3tabdialog.html#hasApplyButton">hasApplyButton</a>(), <a href="q3tabdialog.html#setCancelButton">setCancelButton</a>(), <a href="q3tabdialog.html#setDefaultButton">setDefaultButton</a>(), and <a href="q3tabdialog.html#applyButtonPressed">applyButtonPressed</a>().</p> <!-- @@@setApplyButton --> <!-- $$$setApplyButton$$$setApplyButton --> <h3 class="fn"><a name="setApplyButton-2"></a><span class="type">void</span> Q3TabDialog::<span class="name">setApplyButton</span> ()</h3> <p>This is an overloaded function.</p> <p>Adds an Apply button to the dialog. The button's text is set to a localizable &quot;Apply&quot;.</p> <!-- @@@setApplyButton --> <!-- $$$setCancelButton[overload1]$$$setCancelButtonconstQString& --> <h3 class="fn"><a name="setCancelButton"></a><span class="type">void</span> Q3TabDialog::<span class="name">setCancelButton</span> ( const <span class="type"><a href="qstring.html">QString</a></span> &amp; <i>text</i> )</h3> <p>Adds a Cancel button to the dialog. The button's text is set to <i>text</i>.</p> <p>The Cancel button should always return the application to the state it was in before the tab view popped up, or if the user has clicked Apply, back to the state immediately after the last Apply.</p> <p>When Cancel is clicked, the <a href="q3tabdialog.html#cancelButtonPressed">cancelButtonPressed</a>() signal is emitted.
The dialog is closed at the same time.</p> <p>If <i>text</i> is an empty string, no button is shown.</p> <p><b>See also </b><a href="q3tabdialog.html#hasCancelButton">hasCancelButton</a>(), <a href="q3tabdialog.html#setApplyButton">setApplyButton</a>(), <a href="q3tabdialog.html#setDefaultButton">setDefaultButton</a>(), and <a href="q3tabdialog.html#cancelButtonPressed">cancelButtonPressed</a>().</p> <!-- @@@setCancelButton --> <!-- $$$setCancelButton$$$setCancelButton --> <h3 class="fn"><a name="setCancelButton-2"></a><span class="type">void</span> Q3TabDialog::<span class="name">setCancelButton</span> ()</h3> <p>This is an overloaded function.</p> <p>Adds a Cancel button to the dialog. The button's text is set to a localizable &quot;Cancel&quot;.</p> <!-- @@@setCancelButton --> <!-- $$$setDefaultButton[overload1]$$$setDefaultButtonconstQString& --> <h3 class="fn"><a name="setDefaultButton"></a><span class="type">void</span> Q3TabDialog::<span class="name">setDefaultButton</span> ( const <span class="type"><a href="qstring.html">QString</a></span> &amp; <i>text</i> )</h3> <p>Adds a Defaults button to the dialog. 
The button's text is set to <i>text</i>.</p> <p>The Defaults button should set the dialog (but not the application) back to the application defaults.</p> <p>When Defaults is clicked, the <a href="q3tabdialog.html#defaultButtonPressed">defaultButtonPressed</a>() signal is emitted.</p> <p>If <i>text</i> is an empty string, no button is shown.</p> <p><b>See also </b><a href="q3tabdialog.html#hasDefaultButton">hasDefaultButton</a>(), <a href="q3tabdialog.html#setApplyButton">setApplyButton</a>(), <a href="q3tabdialog.html#setCancelButton">setCancelButton</a>(), and <a href="q3tabdialog.html#defaultButtonPressed">defaultButtonPressed</a>().</p> <!-- @@@setDefaultButton --> <!-- $$$setDefaultButton$$$setDefaultButton --> <h3 class="fn"><a name="setDefaultButton-2"></a><span class="type">void</span> Q3TabDialog::<span class="name">setDefaultButton</span> ()</h3> <p>This is an overloaded function.</p> <p>Adds a Defaults button to the dialog. The button's text is set to a localizable &quot;Defaults&quot;.</p> <!-- @@@setDefaultButton --> <!-- $$$setFont[overload1]$$$setFontconstQFont& --> <h3 class="fn"><a name="setFont"></a><span class="type">void</span> Q3TabDialog::<span class="name">setFont</span> ( const <span class="type"><a href="qfont.html">QFont</a></span> &amp; <i>font</i> )</h3> <p>Sets the font for the tabs to <i>font</i>.</p> <p>If the widget is visible, the display is updated with the new font immediately. There may be some geometry changes, depending on the size of the old and new fonts.</p> <!-- @@@setFont --> <!-- $$$setHelpButton[overload1]$$$setHelpButtonconstQString& --> <h3 class="fn"><a name="setHelpButton"></a><span class="type">void</span> Q3TabDialog::<span class="name">setHelpButton</span> ( const <span class="type"><a href="qstring.html">QString</a></span> &amp; <i>text</i> )</h3> <p>Adds a Help button to the dialog. 
The button's text is set to <i>text</i>.</p> <p>When Help is clicked, the <a href="q3tabdialog.html#helpButtonPressed">helpButtonPressed</a>() signal is emitted.</p> <p>If <i>text</i> is an empty string, no button is shown.</p> <p><b>See also </b><a href="q3tabdialog.html#hasHelpButton">hasHelpButton</a>(), <a href="q3tabdialog.html#setApplyButton">setApplyButton</a>(), <a href="q3tabdialog.html#setCancelButton">setCancelButton</a>(), and <a href="q3tabdialog.html#helpButtonPressed">helpButtonPressed</a>().</p> <!-- @@@setHelpButton --> <!-- $$$setHelpButton$$$setHelpButton --> <h3 class="fn"><a name="setHelpButton-2"></a><span class="type">void</span> Q3TabDialog::<span class="name">setHelpButton</span> ()</h3> <p>This is an overloaded function.</p> <p>Adds a Help button to the dialog. The button's text is set to a localizable &quot;Help&quot;.</p> <!-- @@@setHelpButton --> <!-- $$$setOkButton[overload1]$$$setOkButtonconstQString& --> <h3 class="fn"><a name="setOkButton"></a><span class="type">void</span> Q3TabDialog::<span class="name">setOkButton</span> ( const <span class="type"><a href="qstring.html">QString</a></span> &amp; <i>text</i> )</h3> <p>Adds an OK button to the dialog and sets the button's text to <i>text</i>.</p> <p>When the OK button is clicked, the <a href="q3tabdialog.html#applyButtonPressed">applyButtonPressed</a>() signal is emitted, and the current settings in the dialog box should be applied to the application. 
The dialog then closes.</p> <p>If <i>text</i> is an empty string, no button is shown.</p> <p><b>See also </b><a href="q3tabdialog.html#hasOkButton">hasOkButton</a>(), <a href="q3tabdialog.html#setCancelButton">setCancelButton</a>(), <a href="q3tabdialog.html#setDefaultButton">setDefaultButton</a>(), and <a href="q3tabdialog.html#applyButtonPressed">applyButtonPressed</a>().</p> <!-- @@@setOkButton --> <!-- $$$setOkButton$$$setOkButton --> <h3 class="fn"><a name="setOkButton-2"></a><span class="type">void</span> Q3TabDialog::<span class="name">setOkButton</span> ()</h3> <p>This is an overloaded function.</p> <p>Adds an OK button to the dialog. The button's text is set to a localizable &quot;OK&quot;.</p> <!-- @@@setOkButton --> <!-- $$$setTabBar[overload1]$$$setTabBarQTabBar* --> <h3 class="fn"><a name="setTabBar"></a><span class="type">void</span> Q3TabDialog::<span class="name">setTabBar</span> ( <span class="type"><a href="qtabbar.html">QTabBar</a></span> * <i>tb</i> )<tt> [protected]</tt></h3> <p>Replaces the <a href="qtabbar.html">QTabBar</a> heading the dialog by the given tab bar, <i>tb</i>. Note that this must be called <i>before</i> any tabs have been added, or the behavior is undefined.</p> <p><b>See also </b><a href="q3tabdialog.html#tabBar">tabBar</a>().</p> <!-- @@@setTabBar --> <!-- $$$setTabEnabled[overload1]$$$setTabEnabledQWidget*bool --> <h3 class="fn"><a name="setTabEnabled"></a><span class="type">void</span> Q3TabDialog::<span class="name">setTabEnabled</span> ( <span class="type"><a href="qwidget.html">QWidget</a></span> * <i>w</i>, <span class="type">bool</span> <i>enable</i> )</h3> <p>If <i>enable</i> is true the page <i>w</i> is enabled; otherwise <i>w</i> is disabled. The page's tab is redrawn appropriately.</p> <p><a href="qtabwidget.html">QTabWidget</a> uses <a href="qwidget.html#enabled-prop">QWidget::setEnabled</a>() internally, rather than keeping a separate flag.</p> <p>Note that even a disabled tab and tab page may be visible. 
If the page is already visible <a href="qtabwidget.html">QTabWidget</a> will not hide it; if all the pages are disabled <a href="qtabwidget.html">QTabWidget</a> will show one of them.</p> <p><b>See also </b><a href="q3tabdialog.html#isTabEnabled">isTabEnabled</a>() and <a href="qwidget.html#enabled-prop">QWidget::setEnabled</a>().</p> <!-- @@@setTabEnabled --> <!-- $$$showEvent[overload1]$$$showEventQShowEvent* --> <h3 class="fn"><a name="showEvent"></a><span class="type">void</span> Q3TabDialog::<span class="name">showEvent</span> ( <span class="type"><a href="qshowevent.html">QShowEvent</a></span> * <i>e</i> )<tt> [virtual protected]</tt></h3> <p>Reimplemented from <a href="qwidget.html#showEvent">QWidget::showEvent</a>().</p> <!-- @@@showEvent --> <!-- $$$showPage[overload1]$$$showPageQWidget* --> <h3 class="fn"><a name="showPage"></a><span class="type">void</span> Q3TabDialog::<span class="name">showPage</span> ( <span class="type"><a href="qwidget.html">QWidget</a></span> * <i>w</i> )</h3> <p>Ensures that widget <i>w</i> is shown. 
This is mainly useful for accelerators.</p> <p><b>Warning:</b> If used carelessly, this function can easily surprise or confuse the user.</p> <p><b>See also </b><a href="qtabbar-qt3.html#setCurrentTab" class="compat">QTabBar::setCurrentTab</a>().</p> <!-- @@@showPage --> <!-- $$$tabBar[overload1]$$$tabBar --> <h3 class="fn"><a name="tabBar"></a><span class="type"><a href="qtabbar.html">QTabBar</a></span> * Q3TabDialog::<span class="name">tabBar</span> () const<tt> [protected]</tt></h3> <p>Returns the currently set <a href="qtabbar.html">QTabBar</a>.</p> <p><b>See also </b><a href="q3tabdialog.html#setTabBar">setTabBar</a>().</p> <!-- @@@tabBar --> <!-- $$$tabLabel[overload1]$$$tabLabelQWidget* --> <h3 class="fn"><a name="tabLabel"></a><span class="type"><a href="qstring.html">QString</a></span> Q3TabDialog::<span class="name">tabLabel</span> ( <span class="type"><a href="qwidget.html">QWidget</a></span> * <i>w</i> )</h3> <p>Returns the text in the tab for page <i>w</i>.</p> <!-- @@@tabLabel --> </div> </div> </div> </div> <div class="ft"> <span></span> </div> </div> <div class="footer"> <p> <acronym title="Copyright">&copy;</acronym> 2013 Digia Plc and/or its subsidiaries. Documentation contributions included herein are the copyrights of their respective owners.</p> <br /> <p> The documentation provided herein is licensed under the terms of the <a href="http://www.gnu.org/licenses/fdl.html">GNU Free Documentation License version 1.3</a> as published by the Free Software Foundation.</p> <p> Documentation sources may be obtained from <a href="http://www.qt-project.org"> www.qt-project.org</a>.</p> <br /> <p> Digia, Qt and their respective logos are trademarks of Digia Plc in Finland and/or other countries worldwide. All other trademarks are property of their respective owners. <a title="Privacy Policy" href="http://en.gitorious.org/privacy_policy/">Privacy Policy</a></p> </div> <script src="scripts/functions.js" type="text/javascript"></script> </body> </html>
<?php defined('SYSPATH') OR die('No direct script access.');

class Kohana_HTTP_Header extends ArrayObject {

    // Default Accept-* quality value if none supplied
    const DEFAULT_QUALITY = 1;

    /**
     * Parses an Accept(-*) header and detects the quality
     *
     * @param array $parts accept header parts
     * @return array
     * @since 3.2.0
     */
    public static function accept_quality(array $parts)
    {
        $parsed = array();

        // Resource light iteration
        $parts_keys = array_keys($parts);

        foreach ($parts_keys as $key)
        {
            $value = trim(str_replace(array("\r", "\n"), '', $parts[$key]));

            $pattern = '~\b(\;\s*+)?q\s*+=\s*+([.0-9]+)~';

            // If there is no quality directive, return default
            if ( ! preg_match($pattern, $value, $quality))
            {
                $parsed[$value] = (float) HTTP_Header::DEFAULT_QUALITY;
            }
            else
            {
                $quality = $quality[2];

                if ($quality[0] === '.')
                {
                    $quality = '0'.$quality;
                }

                // Remove the quality value from the string and apply quality
                $parsed[trim(preg_replace($pattern, '', $value, 1), '; ')] = (float) $quality;
            }
        }

        return $parsed;
    }

    /**
     * Parses the accept header to provide the correct quality values
     * for each supplied accept type.
     *
     * @link http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.1
     * @param string $accepts accept content header string to parse
     * @return array
     * @since 3.2.0
     */
    public static function parse_accept_header($accepts = NULL)
    {
        // If there is no accept header, accept everything.
        // Note: this check must come before explode(), which never returns NULL.
        if ($accepts === NULL)
            return array('*' => array('*' => (float) HTTP_Header::DEFAULT_QUALITY));

        $accepts = explode(',', (string) $accepts);

        // Parse the accept header qualities
        $accepts = HTTP_Header::accept_quality($accepts);

        $parsed_accept = array();

        // This method of iteration uses less resource
        $keys = array_keys($accepts);

        foreach ($keys as $key)
        {
            // Extract the parts
            $parts = explode('/', $key, 2);

            // Invalid content type - bail
            if ( ! isset($parts[1]))
                continue;

            // Set the parsed output
            $parsed_accept[$parts[0]][$parts[1]] = $accepts[$key];
        }

        return $parsed_accept;
    }

    /**
     * Parses the `Accept-Charset:` HTTP header and returns an array containing
     * the charset and associated quality.
     *
     * @link http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.2
     * @param string $charset charset string to parse
     * @return array
     * @since 3.2.0
     */
    public static function parse_charset_header($charset = NULL)
    {
        if ($charset === NULL)
        {
            return array('*' => (float) HTTP_Header::DEFAULT_QUALITY);
        }

        return HTTP_Header::accept_quality(explode(',', (string) $charset));
    }

    /**
     * Parses the `Accept-Encoding:` HTTP header and returns an array containing
     * the encodings and associated quality.
     *
     * @link http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.3
     * @param string $encoding encoding string to parse
     * @return array
     * @since 3.2.0
     */
    public static function parse_encoding_header($encoding = NULL)
    {
        // Accept everything
        if ($encoding === NULL)
        {
            return array('*' => (float) HTTP_Header::DEFAULT_QUALITY);
        }
        elseif ($encoding === '')
        {
            return array('identity' => (float) HTTP_Header::DEFAULT_QUALITY);
        }
        else
        {
            return HTTP_Header::accept_quality(explode(',', (string) $encoding));
        }
    }

    /**
     * Parses the `Accept-Language:` HTTP header and returns an array containing
     * the languages and associated quality.
     *
     * @link http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.4
     * @param string $language language string to parse
     * @return array
     * @since 3.2.0
     */
    public static function parse_language_header($language = NULL)
    {
        if ($language === NULL)
        {
            return array('*' => array('*' => (float) HTTP_Header::DEFAULT_QUALITY));
        }

        $language = HTTP_Header::accept_quality(explode(',', (string) $language));

        $parsed_language = array();

        $keys = array_keys($language);

        foreach ($keys as $key)
        {
            // Extract the parts
            $parts = explode('-', $key, 2);

            // No language subtag supplied - file the quality under a wildcard
            if ( ! isset($parts[1]))
            {
                $parsed_language[$parts[0]]['*'] = $language[$key];
            }
            else
            {
                // Set the parsed output
                $parsed_language[$parts[0]][$parts[1]] = $language[$key];
            }
        }

        return $parsed_language;
    }

    /**
     * Generates a Cache-Control HTTP header based on the supplied array.
     *
     *     // Set the cache control headers you want to use
     *     $cache_control = array(
     *         'max-age' => 3600,
     *         'must-revalidate',
     *         'public'
     *     );
     *
     *     // Create the cache control header, creates :
     *     // cache-control: max-age=3600, must-revalidate, public
     *     $response->headers('Cache-Control', HTTP_Header::create_cache_control($cache_control));
     *
     * @link http://www.w3.org/Protocols/rfc2616/rfc2616-sec13.html#sec13
     * @param array $cache_control Cache-Control to render to string
     * @return string
     */
    public static function create_cache_control(array $cache_control)
    {
        $parts = array();

        foreach ($cache_control as $key => $value)
        {
            $parts[] = (is_int($key)) ? $value : ($key.'='.$value);
        }

        return implode(', ', $parts);
    }

    /**
     * Parses the Cache-Control header and returns an array representation of the
     * Cache-Control header.
     *
     *     // Create the cache control header
     *     $response->headers('cache-control', 'max-age=3600, must-revalidate, public');
     *
     *     // Parse the cache control header
     *     if ($cache_control = HTTP_Header::parse_cache_control($response->headers('cache-control')))
     *     {
     *         // Cache-Control header was found
     *         $maxage = $cache_control['max-age'];
     *     }
     *
     * @param string $cache_control Cache-Control header string
     * @return mixed
     */
    public static function parse_cache_control($cache_control)
    {
        $directives = explode(',', strtolower($cache_control));

        if ($directives === FALSE)
            return FALSE;

        $output = array();

        foreach ($directives as $directive)
        {
            if (strpos($directive, '=') !== FALSE)
            {
                list($key, $value) = explode('=', trim($directive), 2);

                $output[$key] = ctype_digit($value) ? (int) $value : $value;
            }
            else
            {
                $output[] = trim($directive);
            }
        }

        return $output;
    }

    /**
     * @var array Accept: (content) types
     */
    protected $_accept_content;

    /**
     * @var array Accept-Charset: parsed header
     */
    protected $_accept_charset;

    /**
     * @var array Accept-Encoding: parsed header
     */
    protected $_accept_encoding;

    /**
     * @var array Accept-Language: parsed header
     */
    protected $_accept_language;

    /**
     * Constructor method for [Kohana_HTTP_Header]. Uses the standard constructor
     * of the parent `ArrayObject` class.
     *
     *     $header_object = new HTTP_Header(array('x-powered-by' => 'Kohana 3.1.x', 'expires' => '...'));
     *
     * @param mixed $input Input array
     * @param int $flags Flags
     * @param string $iterator_class The iterator class to use
     */
    public function __construct(array $input = array(), $flags = NULL, $iterator_class = 'ArrayIterator')
    {
        /**
         * @link http://www.w3.org/Protocols/rfc2616/rfc2616.html
         *
         * HTTP header declarations should be treated as case-insensitive
         */
        $input = array_change_key_case( (array) $input, CASE_LOWER);

        parent::__construct($input, $flags, $iterator_class);
    }

    /**
     * Returns the header object as a string, including
     * the terminating new line
     *
     *     // Return the header as a string
     *     echo (string) $request->headers();
     *
     * @return string
     */
    public function __toString()
    {
        $header = '';

        foreach ($this as $key => $value)
        {
            // Put the keys back into the expected Case-Convention
            $key = Text::ucfirst($key);

            if (is_array($value))
            {
                $header .= $key.': '.(implode(', ', $value))."\r\n";
            }
            else
            {
                $header .= $key.': '.$value."\r\n";
            }
        }

        return $header."\r\n";
    }

    /**
     * Overloads `ArrayObject::offsetSet()` to enable handling of header
     * with multiple instances of the same directive. If the `$replace` flag
     * is `FALSE`, the header will be appended rather than replacing the
     * original setting.
* * @param mixed $index index to set `$newval` to * @param mixed $newval new value to set * @param boolean $replace replace existing value * @return void * @since 3.2.0 */ public function offsetSet($index, $newval, $replace = TRUE) { // Ensure the index is lowercase $index = strtolower($index); if ($replace OR ! $this->offsetExists($index)) { return parent::offsetSet($index, $newval); } $current_value = $this->offsetGet($index); if (is_array($current_value)) { $current_value[] = $newval; } else { $current_value = array($current_value, $newval); } return parent::offsetSet($index, $current_value); } /** * Overloads the `ArrayObject::offsetExists()` method to ensure keys * are lowercase. * * @param string $index * @return boolean * @since 3.2.0 */ public function offsetExists($index) { return parent::offsetExists(strtolower($index)); } /** * Overloads the `ArrayObject::offsetUnset()` method to ensure keys * are lowercase. * * @param string $index * @return void * @since 3.2.0 */ public function offsetUnset($index) { return parent::offsetUnset(strtolower($index)); } /** * Overload the `ArrayObject::offsetGet()` method to ensure that all * keys passed to it are formatted correctly for this object. * * @param string $index index to retrieve * @return mixed * @since 3.2.0 */ public function offsetGet($index) { return parent::offsetGet(strtolower($index)); } /** * Overloads the `ArrayObject::exchangeArray()` method to ensure that * all keys are changed to lowercase. 
* * @param mixed $input * @return array * @since 3.2.0 */ public function exchangeArray($input) { /** * @link http://www.w3.org/Protocols/rfc2616/rfc2616.html * * HTTP header declarations should be treated as case-insensitive */ $input = array_change_key_case( (array) $input, CASE_LOWER); return parent::exchangeArray($input); } /** * Parses a HTTP Message header line and applies it to this HTTP_Header * * $header = $response->headers(); * $header->parse_header_string(NULL, 'content-type: application/json'); * * @param resource $resource the resource (required by Curl API) * @param string $header_line the line from the header to parse * @return int * @since 3.2.0 */ public function parse_header_string($resource, $header_line) { $headers = array(); if (preg_match_all('/(\w[^\s:]*):[ ]*([^\r\n]*(?:\r\n[ \t][^\r\n]*)*)/', $header_line, $matches)) { foreach ($matches[0] as $key => $value) { $this->offsetSet($matches[1][$key], $matches[2][$key], FALSE); } } return strlen($header_line); } /** * Returns the accept quality of a submitted mime type based on the * request `Accept:` header. If the `$explicit` argument is `TRUE`, * only precise matches will be returned, excluding all wildcard (`*`) * directives. 
* * // Accept: application/xml; application/json; q=.5; text/html; q=.2, text/* * // Accept quality for application/json * * // $quality = 0.5 * $quality = $request->headers()->accepts_at_quality('application/json'); * * // $quality_explicit = FALSE * $quality_explicit = $request->headers()->accepts_at_quality('text/plain', TRUE); * * @param string $type * @param boolean $explicit explicit check, excludes `*` * @return mixed * @since 3.2.0 */ public function accepts_at_quality($type, $explicit = FALSE) { // Parse Accept header if required if ($this->_accept_content === NULL) { if ($this->offsetExists('Accept')) { $accept = $this->offsetGet('Accept'); } else { $accept = '*/*'; } $this->_accept_content = HTTP_Header::parse_accept_header($accept); } // If not a real mime, try and find it in config if (strpos($type, '/') === FALSE) { $mime = Kohana::$config->load('mimes.'.$type); if ($mime === NULL) return FALSE; $quality = FALSE; foreach ($mime as $_type) { $quality_check = $this->accepts_at_quality($_type, $explicit); $quality = ($quality_check > $quality) ? $quality_check : $quality; } return $quality; } $parts = explode('/', $type, 2); if (isset($this->_accept_content[$parts[0]][$parts[1]])) { return $this->_accept_content[$parts[0]][$parts[1]]; } elseif ($explicit === TRUE) { return FALSE; } else { if (isset($this->_accept_content[$parts[0]]['*'])) { return $this->_accept_content[$parts[0]]['*']; } elseif (isset($this->_accept_content['*']['*'])) { return $this->_accept_content['*']['*']; } else { return FALSE; } } } /** * Returns the preferred response content type based on the accept header * quality settings. If items have the same quality value, the first item * found in the array supplied as `$types` will be returned. 
* * // Get the preferred acceptable content type * // Accept: text/html, application/json; q=.8, text/* * $result = $header->preferred_accept(array( * 'text/html' * 'text/rtf', * 'application/json' * )); // $result = 'application/json' * * $result = $header->preferred_accept(array( * 'text/rtf', * 'application/xml' * ), TRUE); // $result = FALSE (none matched explicitly) * * * @param array $types the content types to examine * @param boolean $explicit only allow explicit references, no wildcards * @return string name of the preferred content type * @since 3.2.0 */ public function preferred_accept(array $types, $explicit = FALSE) { $preferred = FALSE; $ceiling = 0; foreach ($types as $type) { $quality = $this->accepts_at_quality($type, $explicit); if ($quality > $ceiling) { $preferred = $type; $ceiling = $quality; } } return $preferred; } /** * Returns the quality of the supplied `$charset` argument. This method * will automatically parse the `Accept-Charset` header if present and * return the associated resolved quality value. 
* * // Accept-Charset: utf-8, utf-16; q=.8, iso-8859-1; q=.5 * $quality = $header->accepts_charset_at_quality('utf-8'); * // $quality = (float) 1 * * @param string $charset charset to examine * @return float the quality of the charset * @since 3.2.0 */ public function accepts_charset_at_quality($charset) { if ($this->_accept_charset === NULL) { if ($this->offsetExists('Accept-Charset')) { $charset_header = strtolower($this->offsetGet('Accept-Charset')); $this->_accept_charset = HTTP_Header::parse_charset_header($charset_header); } else { $this->_accept_charset = HTTP_Header::parse_charset_header(NULL); } } $charset = strtolower($charset); if (isset($this->_accept_charset[$charset])) { return $this->_accept_charset[$charset]; } elseif (isset($this->_accept_charset['*'])) { return $this->_accept_charset['*']; } elseif ($charset === 'iso-8859-1') { return (float) 1; } return (float) 0; } /** * Returns the preferred charset from the supplied array `$charsets` based * on the `Accept-Charset` header directive. * * // Accept-Charset: utf-8, utf-16; q=.8, iso-8859-1; q=.5 * $charset = $header->preferred_charset(array( * 'utf-10', 'ascii', 'utf-16', 'utf-8' * )); // $charset = 'utf-8' * * @param array $charsets charsets to test * @return mixed preferred charset or `FALSE` * @since 3.2.0 */ public function preferred_charset(array $charsets) { $preferred = FALSE; $ceiling = 0; foreach ($charsets as $charset) { $quality = $this->accepts_charset_at_quality($charset); if ($quality > $ceiling) { $preferred = $charset; $ceiling = $quality; } } return $preferred; } /** * Returns the quality of the `$encoding` type passed to it. Encoding * is usually compression such as `gzip`, but could be some other * message encoding algorithm. This method allows explicit checks to be * done ignoring wildcards. 
* * // Accept-Encoding: compress, gzip, *; q=.5 * $encoding = $header->accepts_encoding_at_quality('gzip'); * // $encoding = (float) 1.0s * * @param string $encoding encoding type to interrogate * @param boolean $explicit explicit check, ignoring wildcards and `identity` * @return float * @since 3.2.0 */ public function accepts_encoding_at_quality($encoding, $explicit = FALSE) { if ($this->_accept_encoding === NULL) { if ($this->offsetExists('Accept-Encoding')) { $encoding_header = $this->offsetGet('Accept-Encoding'); } else { $encoding_header = NULL; } $this->_accept_encoding = HTTP_Header::parse_encoding_header($encoding_header); } // Normalize the encoding $encoding = strtolower($encoding); if (isset($this->_accept_encoding[$encoding])) { return $this->_accept_encoding[$encoding]; } if ($explicit === FALSE) { if (isset($this->_accept_encoding['*'])) { return $this->_accept_encoding['*']; } elseif ($encoding === 'identity') { return (float) HTTP_Header::DEFAULT_QUALITY; } } return (float) 0; } /** * Returns the preferred message encoding type based on quality, and can * optionally ignore wildcard references. If two or more encodings have the * same quality, the first listed in `$encodings` will be returned. * * // Accept-Encoding: compress, gzip, *; q.5 * $encoding = $header->preferred_encoding(array( * 'gzip', 'bzip', 'blowfish' * )); * // $encoding = 'gzip'; * * @param array $encodings encodings to test against * @param boolean $explicit explicit check, if `TRUE` wildcards are excluded * @return mixed * @since 3.2.0 */ public function preferred_encoding(array $encodings, $explicit = FALSE) { $ceiling = 0; $preferred = FALSE; foreach ($encodings as $encoding) { $quality = $this->accepts_encoding_at_quality($encoding, $explicit); if ($quality > $ceiling) { $ceiling = $quality; $preferred = $encoding; } } return $preferred; } /** * Returns the quality of `$language` supplied, optionally ignoring * wildcards if `$explicit` is set to a non-`FALSE` value. 
If the quality * is not found, `0.0` is returned. * * // Accept-Language: en-us, en-gb; q=.7, en; q=.5 * $lang = $header->accepts_language_at_quality('en-gb'); * // $lang = (float) 0.7 * * $lang2 = $header->accepts_language_at_quality('en-au'); * // $lang2 = (float) 0.5 * * $lang3 = $header->accepts_language_at_quality('en-au', TRUE); * // $lang3 = (float) 0.0 * * @param string $language language to interrogate * @param boolean $explicit explicit interrogation, `TRUE` ignores wildcards * @return float * @since 3.2.0 */ public function accepts_language_at_quality($language, $explicit = FALSE) { if ($this->_accept_language === NULL) { if ($this->offsetExists('Accept-Language')) { $language_header = strtolower($this->offsetGet('Accept-Language')); } else { $language_header = NULL; } $this->_accept_language = HTTP_Header::parse_language_header($language_header); } // Normalize the language $language_parts = explode('-', strtolower($language), 2); if (isset($this->_accept_language[$language_parts[0]])) { if (isset($language_parts[1])) { if (isset($this->_accept_language[$language_parts[0]][$language_parts[1]])) { return $this->_accept_language[$language_parts[0]][$language_parts[1]]; } elseif ($explicit === FALSE AND isset($this->_accept_language[$language_parts[0]]['*'])) { return $this->_accept_language[$language_parts[0]]['*']; } } elseif (isset($this->_accept_language[$language_parts[0]]['*'])) { return $this->_accept_language[$language_parts[0]]['*']; } } if ($explicit === FALSE AND isset($this->_accept_language['*'])) { return $this->_accept_language['*']; } return (float) 0; } /** * Returns the preferred language from the supplied array `$languages` based * on the `Accept-Language` header directive. 
* * // Accept-Language: en-us, en-gb; q=.7, en; q=.5 * $lang = $header->preferred_language(array( * 'en-gb', 'en-au', 'fr', 'es' * )); // $lang = 'en-gb' * * @param array $languages * @param boolean $explicit * @return mixed * @since 3.2.0 */ public function preferred_language(array $languages, $explicit = FALSE) { $ceiling = 0; $preferred = FALSE; foreach ($languages as $language) { $quality = $this->accepts_language_at_quality($language, $explicit); if ($quality > $ceiling) { $ceiling = $quality; $preferred = $language; } } return $preferred; } /** * Sends headers to the php processor, or supplied `$callback` argument. * This method formats the headers correctly for output, re-instating their * capitalization for transmission. * * [!!] if you supply a custom header handler via `$callback`, it is * recommended that `$response` is returned * * @param HTTP_Response $response header to send * @param boolean $replace replace existing value * @param callback $callback optional callback to replace PHP header function * @return mixed * @since 3.2.0 */ public function send_headers(HTTP_Response $response = NULL, $replace = FALSE, $callback = NULL) { $protocol = $response->protocol(); $status = $response->status(); // Create the response header $processed_headers = array($protocol.' '.$status.' '.Response::$messages[$status]); // Get the headers array $headers = $response->headers()->getArrayCopy(); foreach ($headers as $header => $value) { if (is_array($value)) { $value = implode(', ', $value); } $processed_headers[] = Text::ucfirst($header).': '.$value; } if ( ! isset($headers['content-type'])) { $processed_headers[] = 'Content-Type: '.Kohana::$content_type.'; charset='.Kohana::$charset; } if (Kohana::$expose AND ! 
isset($headers['x-powered-by'])) { $processed_headers[] = 'X-Powered-By: '.Kohana::version(); } // Get the cookies and apply if ($cookies = $response->cookie()) { $processed_headers['Set-Cookie'] = $cookies; } if (is_callable($callback)) { // Use the callback method to set header return call_user_func($callback, $response, $processed_headers, $replace); } else { $this->_send_headers_to_php($processed_headers, $replace); return $response; } } /** * Sends the supplied headers to the PHP output buffer. If cookies * are included in the message they will be handled appropriately. * * @param array $headers headers to send to php * @param boolean $replace replace existing headers * @return self * @since 3.2.0 */ protected function _send_headers_to_php(array $headers, $replace) { // If the headers have been sent, get out if (headers_sent()) return $this; foreach ($headers as $key => $line) { if ($key == 'Set-Cookie' AND is_array($line)) { // Send cookies foreach ($line as $name => $value) { Cookie::set($name, $value['value'], $value['expiration']); } continue; } header($line, $replace); } return $this; } }
<div class="BorgerDkAllowedTypes" ng-controller="Skybrud.BorgerDk.AllowedTypes.Controller">
	<div>
		<label>
			<input type="checkbox" ng-model="all" ng-change="update()" /> Alle
		</label>
	</div>
	<div ng-if="!all">
		<hr />
		<div ng-repeat="type in types">
			<label>
				<input type="checkbox" ng-model="type.selected" ng-change="update()" /> {{type.name}} <small>({{type.typeName}})</small>
			</label>
		</div>
	</div>
</div>
Scania is one of the world's leading manufacturers of buses, trucks, and marine and industrial engines. With production units in Europe and Latin America and 32,000 employees, it is one of the most profitable companies in its sector. Founded in 1891, the Swedish company Scania now operates in more than 100 countries and employs around 46,000 people. Its research and development centre is located in Sweden. Today it belongs to the Volkswagen Group.

The name originally denoted a bicycle manufacturer (the predecessor of today's company), which went bankrupt in 1911; the truck- and bus-making company that still exists today was founded at that time. It also built tanks and other military vehicles. Several companies have attempted to buy Scania over the years, including Volvo and MAN AG. The company was the parent of Saab from 1969 to 1995. In 2006 Scania passed to MAN, and thereby to Volkswagen. The brand became famous throughout the world for its trucks and buses. Notably, it once designed a sports car, but due to its lack of success it was never brought to market.

In 2016 the company delivered 73,100 trucks, 8,300 buses, and 7,800 industrial and marine engines to its customers. Net sales approached 104 billion Swedish kronor, of which about 20 percent was attributable to services.

Scania Hungária Kft. is a wholly owned subsidiary and the official importer of Scania products in Hungary. In addition to the importer operation in Biatorbágy, it has five branches of its own and one contracted partner in Kaposvár, through which it sells and services Scania tractor units and buses; customers are served in Nagykanizsa, Lébény, Budapest, Szeged and Tiszaújváros. Scania Hungária Kft. is a member of the Central European Region, headquartered in Prague. The region comprises Hungary, the Czech Republic and Slovakia.
Q: Jailbroken iPhone, iOS 5.1.1 suddenly has new diagnostic apps

I jailbroke my iPhone so that I could make use of the F.lux app, which lets me dim my screen with an orange hue so that I could read on a flight I took recently without annoying other passengers. I'm home now, though, so I don't mind un-jailbreaking.

With that out of the way, here's what happened. A few days ago I was reading and my phone froze for a moment; the new-voicemail popup appeared and told me that I had 2 new voicemails, which I didn't have, and then the camera light turned on and stayed on. So I rebooted the phone and it was fine. But now, moments ago, 4 new apps appeared, seemingly at random, on the home screen - AdSheet, FieldTest, iOS Diagnostics, and Setup. All 4 of them have blank, white icons.

Should I be concerned about a possible security breach of my phone, or is this some of the unexpected behavior that one could expect after jailbreaking? Also, I applied the PDF fix from Cydia and changed my SSH password immediately after jailbreaking. I don't know if that info is important, but it is given here for completeness.

A: This bug occasionally happens to me when I install new apps or update existing ones. All the apps you've listed are built into the phone, but do not have icons by default. Respringing or restarting the phone always works to rehide them. Have you tried that yet?

A: Libhide, my friend... I disabled this MobileSubstrate add-on and put those icons into an unused folder. Even after updating Libhide, I still had that issue.
\section{Introduction}
Neural plasticity in the brain is the ability to learn and adapt to intrinsic or extrinsic stimuli by reorganizing the morphology, functions, or connectivity of its constituent synapses and neurons. Synaptic plasticity is a complex dynamic process that modulates and regulates network dynamics depending on external activity over multiple timescales. Metaplasticity refers to \textit{plasticity of the plasticity} of synapses \cite{abraham1996metaplasticity}. A metaplastic synaptic network enables a synapse to tune its level of plasticity depending on the pre-synaptic activity. This property is deemed crucial for high memory retention and learning capability in a synaptic network~\cite{fusi2005cascade}.

It has been shown that simple binary synapses exhibit high memory retention when the imposed activity is highly sparse. However, for moderately sparse neuronal activity, the interference between multiple stimuli can pose a challenge to achieving high memory retention and learning. Since binary synapses cannot concurrently learn new activity and retain knowledge of past activity, the synapse memory lifetime drops significantly \cite{leibold2007sparseness}. To solve this issue, Fusi et al. \cite{fusi2005cascade} proposed a cascade model of synapse, in which synapses with binary efficacy have multiple metastates. Synapses exhibit varying degrees of plasticity depending on their metaplastic state. This property enables a network of such synapses to retain knowledge of past activity while maintaining high plasticity to learn new activity. While the cascade synapse outperforms a simple binary synapse in response to moderately sparse activity, its memory retention for highly sparse activity is orders of magnitude below that of a simple binary synapse. In \cite{leibold2007sparseness}, Leibold et al. proposed a variant of the metaplastic synapse model, in which the metastates are serially connected and transitions from one state to another are equally likely.
This serial synaptic metaplasticity model, also referred to as multistate synapse shows less degradation in memory lifetime for highly sparse activity and outperforms the cascade model in memory capacity \cite{leibold2007sparseness}. In this paper, we focus on the multistate synaptic model. Previous research on metaplasticity focused on physical metaplastic behavior in memristor devices \cite{zhu2017emulation,wu2018full,kim2017nanogenerator}. Most of the prior literature is concentrated on device level analysis considering only continuous synaptic efficacy with no network level realization. However, incorporating metaplastic synapses in a crossbar architecture can lead to compact and powerful neuromorphic architecture capable of high memory retention. Since edge devices encounter large amounts of streaming data, such architecture can immensely benefit their overall performance. One of the early realizations of the binary metaplastic synapse was proposed by \cite{leibold2007sparseness}. Since this model can retain previously learned information and maintain response to new information simultaneously, such a synaptic model can better capture all the information learned throughout its lifetime. Hence, it shows better resilience against catastrophic forgetting compared to binary synapses. In this research, we study this synaptic model at-scale in memristive neural accelerators. The main contributions of this paper are as follows: \begin{itemize} \item to emulate binary metaplastic synapses by exploiting inherent device properties of a memristor. \item to demonstrate the efficacy of metaplastic synapse in a $5 \times 3$ crossbar circuit architecture with on-device learning capability. \item to compare the performance of binary vs. metaplastic synapse in a two layer neural network emulating hardware constraints. 
\end{itemize}

\section{Metaplastic Synaptic Network Model}
The multistate synapse is a relatively simple model in which metaplasticity is modeled by serially connected metastates; transitions from one state to the other are equally probable. \fig{fig:syn} shows the metastates of the multistate synapse and the transitions between them. The red and blue bubbles represent synaptic metastates with efficacy 1 and 0, respectively. The arrows show the transition direction: the red arrows correspond to potentiation and the blue arrows to depression. As shown in \fig{fig:syn}, the synapse changes its efficacy only when it is in metalevel ($\eta$) 0; in all other cases it only changes the metalevel, retaining its efficacy. A multistate model with $n$ metalevels can exhibit $(2n-1)$ forgetting timescales, which helps it retain knowledge of past activity \cite{leibold2007sparseness}.

In \cite{ben2007long} and \cite{leibold2007sparseness}, the authors investigate memory lifetime by imposing a specific pattern of activity on the network and observing how long the network can recollect the learned information. It is shown that complex synapses with metaplasticity can retain information longer than simple binary synapses when the neuron activity becomes less sparse. In this work, we explore how metaplasticity affects the accuracy of a synaptic network in detecting all the patterns learned throughout its lifetime, as well as its capability to learn new activity.

We consider a simple feed-forward network where $N_{in}$ input neurons are connected to $N_{out}$ output neurons through a network of sparse synapses. Random input patterns and corresponding output patterns of activity \textit{f} (\textit{f}\% of bits are high) are generated and applied to a network with connectivity \textit{C}, i.e., \textit{C}\% of the input and output neurons are connected to each other. Initially, the connected synapses have random efficacy and are at their most plastic state.
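For concreteness, the serial multistate transition rule described above can be written out in a few lines of Python. This is an illustrative sketch rather than the code used in this work; the class name and the default number of metalevels are our own choices.

```python
class MultistateSynapse:
    """Serial multistate (metaplastic) binary synapse, as a sketch.

    `efficacy` is the binary weight (0 or 1); `meta` is the metaplastic
    level, with 0 the most plastic. The efficacy flips only at meta
    level 0; every other update just steps along the meta chain.
    """

    def __init__(self, efficacy=0, meta=0, n_meta=3):
        self.efficacy = efficacy   # binary synaptic weight
        self.meta = meta           # metaplastic level, 0 .. n_meta-1
        self.n_meta = n_meta       # illustrative depth of the chain

    def potentiate(self):
        if self.efficacy == 1:
            # Already potentiated: become less plastic
            self.meta = min(self.meta + 1, self.n_meta - 1)
        elif self.meta == 0:
            # Most plastic depressed state: flip the efficacy
            self.efficacy = 1
        else:
            # Step back toward the plastic end of the chain
            self.meta -= 1

    def depress(self):
        # Mirror image of potentiate()
        if self.efficacy == 0:
            self.meta = min(self.meta + 1, self.n_meta - 1)
        elif self.meta == 0:
            self.efficacy = 0
        else:
            self.meta -= 1
```

Repeated potentiations of an already-potentiated synapse thus only push it deeper into the chain, which is what gives the model its multiple forgetting timescales.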
Similar to \cite{leibold2007sparseness}, we use a McCulloch-Pitts neuron model at the output nodes. This neuron detects activity if the incoming signal is greater than its threshold, which is set based on the average input to an output neuron in the randomly initialized network. In a network with connectivity \textit{C} and input activity \textit{f}, the threshold is equal to $N_{in}Cf/2$. We use an error-based learning rule to train the network, where $error = (y - y_{n})$ ($y$ is the ground-truth label and $y_{n}$ is the network output), and only the synapses with active presynaptic inputs are updated. A synapse is potentiated for a positive error and depressed for a negative error.

Using this setup, we train $128\times128$ networks (\textit{f}=25\%, \textit{C}=25\%) of simple binary and multistate synapses. We also train a similar-sized network with gradient descent (GD), in which the synaptic weights are thresholded for computation. Two types of accuracy are tracked in the networks: (1) the accuracy in detecting the most recent input, to evaluate learning capability, and (2) the mean accuracy across all the patterns encountered, to evaluate the network's resilience against catastrophic forgetting.
\vspace{-2 mm}

In \fig{metric}(a) we see that the binary network outperforms both the GD and multistate networks in learning accuracy; the multistate network shows $\simeq$ 91\% accuracy after encountering 100 patterns, whereas for the binary network it is $\simeq$ 99\%. However, in \fig{metric}(d) we see that the mean accuracy drops significantly more slowly in the multistate network than in both the GD and binary networks. To compare performance across networks, we empirically set a threshold of 75\% for the mean accuracy and observe the number of imposed patterns after which the mean accuracy falls below it. From \fig{metric}(d) we see that the mean accuracy goes below the threshold after 20 patterns for the binary network, whereas for the multistate network it is 45 patterns.
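The McCulloch-Pitts thresholding and the error-based update rule above can be sketched as follows. The sketch shows the simple binary variant; in the multistate network the efficacy flip would be replaced by a step through the metastates. Function and variable names are illustrative, not taken from the simulation code.

```python
def train_step(w, x, y, theta):
    """One presentation of pattern (x, y) with the error-based rule.

    w     : binary weight matrix, w[i][j] in {0, 1}
    x, y  : binary input pattern and target output pattern
    theta : firing threshold, N_in * C * f / 2 in the text
    """
    n_in, n_out = len(w), len(w[0])

    # Forward pass: a McCulloch-Pitts unit fires when the summed
    # input from active presynaptic lines exceeds the threshold.
    y_hat = [int(sum(w[i][j] * x[i] for i in range(n_in)) > theta)
             for j in range(n_out)]

    # Error-based update: only synapses with an active presynaptic
    # input are touched; the sign of the error selects the direction.
    for j in range(n_out):
        err = y[j] - y_hat[j]
        if err == 0:
            continue
        for i in range(n_in):
            if x[i]:
                w[i][j] = 1 if err > 0 else 0  # potentiate / depress
    return y_hat
```

A pattern that is initially missed drives the active synapses toward the target, so presenting it a second time yields the correct output.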
The loss in learning accuracy is significantly lower than the gain in mean accuracy. We also observe in \fig{metric}(b) and (e) that with increasing network size the multistate network shows less degradation in learning accuracy and even higher mean accuracy, as its memory capacity grows with size. We further investigate the effect of the network connectivity \textit{C} and input activity \textit{f} on the mean accuracy of a multistate network (\fig{metric}(c)). We notice a drop in performance around \textit{f}=50\% due to a rise in conflicting patterns, but the network retains high performance for both high and low connectivity. Overall, the simulation results indicate that the multistate synaptic model shows better accuracy in detecting patterns learned over a lifetime, accompanied by a degradation in learning ability compared to the binary model, and that it is particularly suitable for modeling large networks of synapses.

\section{Modeling Multistate Synapses with Memristors}
In this work, we leverage the device characteristics of a memristor to realize a multistate synapse. As presented in \cite{jiang2017rram}, the device under consideration shows a gradual change in conductance during RESET. For modeling, we assume this behavior during the SET operation as well. To emulate the metastates, the memristor was trained with 15$\mu$s pulses of 1.2V to potentiate or depress from one state to another. In this process, we get three states with \textit{high} and \textit{low} conductance, which can represent the different metastates. The correlation between the metastates and the memristor state variable ($w$), which is proportional to its conductance, is shown in \fig{fig:syn}. In the ideal multistate model, a change in metalevel incurs no change in synaptic efficacy. However, in the hardware emulation the conductance of the memristor varies across metastates. The device has to be programmed to ensure that the difference in conductance between the \textit{high} and \textit{low} efficacy states is substantial.
In the modeled memristor, the lowest and highest resistive states were set to be $100k\Omega$ and $10M\Omega$ respectively and the ratio of conductance between high and low efficacy state at metalevel zero is $\simeq$ 4.5. \begin{figure} \centering \includegraphics[width=0.9\linewidth]{Figures/meta_updated.png} \caption{Representation of the multistate metaplastic synapse model mapped to a physical memristor device behavior captured from \cite{jiang2017rram}.\vspace{-6mm}} \label{fig:syn} \end{figure} \begin{figure*}[h!tb] \centering \subfigure{\includegraphics[width=60mm, height=45mm]{Figure_1_1.pdf}} \subfigure{\includegraphics[width=60mm, height=45mm]{Figure_1_2.pdf}} \subfigure{\includegraphics[width=60mm, height=45mm]{Figure_1_3.pdf}} \subfigure{\includegraphics[width=60mm, height=45mm]{Figure_2_1.pdf}} \subfigure{\includegraphics[width=60mm, height=45mm]{Figure_2_2.pdf}} \subfigure{\includegraphics[width=60mm, height=45mm]{Figure_2_3_up.pdf}} \caption{In (a) \& (d): Learning and mean accuracy of $128\times128$ network developed using binary and multistate synapses. Here, H/W-Binary and H/W-Multistate refer to the network designed with hybrid CMOS/memristor circuitry in Cadence, while considering device non-idealities. Gradient shows the accuracy for network trained with gradient descent. In (b) \& (e): Learning and mean accuracy of multistate networks as a function of the network size. In (c) \& (f): Effect of network connectivity \textit{C} and input activity \textit{f} on the mean accuracy after presenting 100 patterns to a $128\times128$ network of multistate synapses and its hardware emulation respectively.\vspace{-2mm}} \label{metric} \end{figure*} A modified Verilog-A memristor model proposed by~\cite{kvatinsky2015vteam} is employed to model the memristor. 
The device conductance changes as a function of the state variable, $w$, which is described in~\eq{mem_eq} and~\eq{mem_eq2}\footnote{$k_{off}$, $k_{on}$, $\alpha_{on}$, and $\alpha_{off}$ are constants, and $v_{off}$ and $v_{on}$ are the memristor threshold voltages.}, where $D$ is the device thickness, and $G_{on}$ and $G_{off}$ define the memristor conductance limits. Here, the memristor model is coupled with a modified Z-window function~\cite{zyarah2019neuromemrisitive} (see~\eq{mem_eq3})\footnote{$\tau$, $\delta$, and $p$ are constants that control the window function shape.}\vspace{-1mm}
\begin{equation} G_{mem} = \frac{w}{D} \times G_{on} + (1 - \frac{w}{D}) \times G_{off} \label{mem_eq} \end{equation}
\begin{equation} \frac{\Delta w}{\Delta t} = \begin{cases} k_{off}.\Big(\frac{v(t)}{v_{off}} - 1\Big)^{\alpha_{off}}.f_{w}(w),&0 < v_{off} < v \\ 0, &v_{on} < v < v_{off} \\ k_{on}.\Big(\frac{v(t)}{v_{on}} - 1\Big)^{\alpha_{on}}.f_{w}(w),&v < v_{on} < 0 \end{cases} \label{mem_eq2} \end{equation}
\begin{equation} \label{mem_eq3} f_w = \frac{1-4(\frac{w}{D} - \delta)^2}{e^{\tau (\frac{w}{D} - \delta)^p}} \end{equation}
To account for device variability, random Gaussian noise with a standard deviation of 25\% is injected for 100 cycles. The amount of noise considered is higher than that observed in actual devices \cite{hu2011geometry}, to compensate for the suppression of variability due to the Z-window function.
\section{Metaplastic System Level Design}
The multistate synaptic behavior is studied using the system architecture of a two-layer neural network shown in \fig{fig:arch}. The sparse synaptic network is emulated by a crossbar consisting of the memristor model described in \fig{fig:syn}. The crossbar is initialized randomly while maintaining connectivity $C$, setting the synapses with \textit{high} and \textit{low} efficacy to the most plastic metalevel ($\eta=0$).
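As a sanity check of \eq{mem_eq}--\eq{mem_eq3}, the VTEAM-style state update can be simulated with a simple forward-Euler step. All parameter values below are illustrative placeholders, not the fitted constants of the modeled device:

```python
import math

# Illustrative constants only -- not the fitted values used in the paper.
D = 1.0                      # normalized device thickness
G_ON, G_OFF = 1e-5, 1e-7     # conductance limits: 100 kOhm and 10 MOhm
K_ON, K_OFF = -1e3, 1e3      # rate constants k_on, k_off
A_ON, A_OFF = 3, 3           # exponents alpha_on, alpha_off
V_ON, V_OFF = -1.0, 1.0      # threshold voltages v_on, v_off
TAU, DELTA, P = 1.0, 0.5, 2  # Z-window shape parameters

def window(w):
    """Modified Z-window f_w(w): suppresses state motion near the boundaries."""
    x = w / D
    return (1 - 4 * (x - DELTA) ** 2) / math.exp(TAU * (x - DELTA) ** P)

def step(w, v, dt=15e-6):
    """One Euler step of the VTEAM state equation for a pulse of amplitude v, width dt."""
    if v > V_OFF:
        dw = K_OFF * (v / V_OFF - 1) ** A_OFF * window(w)
    elif v < V_ON:
        dw = K_ON * (v / V_ON - 1) ** A_ON * window(w)
    else:
        dw = 0.0  # below threshold: the state is retained
    return min(max(w + dw * dt, 0.0), D)

def conductance(w):
    """Eq. (1): conductance interpolates linearly between G_off and G_on."""
    return (w / D) * G_ON + (1 - w / D) * G_OFF
```

Applying repeated `step(w, 1.2)` or `step(w, -1.2)` calls with the 15$\mu$s pulse width mimics the potentiation/depression training used to move the device between metastates.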
We model the sparse connectivity by randomly setting crosspoints between the two neuronal layers to the lowest conductance; these crosspoints are left untrained. A current comparator is chosen to model the McCulloch-Pitts neuron, and we exploit its varying input resistance to improve the network performance. The input resistance of the neurons changes in the same direction as the column resistance, which improves the network's ability to detect presynaptic activity and increases the mean accuracy. Inference is carried out simultaneously for all the columns. Since the synaptic columns are not grounded, if the inputs are directly connected to the output neuron, the voltage drop across the current comparator will push current back into the input nodes with inactive presynaptic input. To prevent this, the inputs are connected to the crossbar through diodes. The programming scheme described in Section III is executed using Ziksa \cite{zyarah2017ziksa} to carry out the synaptic transitions. In this work, Ziksa is slightly modified by adding an extra transistor to the row trainer. This extra transistor holds the rows with inactive presynaptic input at $V_{DD}/2$ to reduce sneak current in the crossbar during training. The network is trained in two steps, for the synapses to be potentiated and depressed, respectively. The training circuitry, current comparator, and the error computing unit are shown in detail in \fig{fig:tr}.
\section{Results and Analysis}
The proposed architecture is evaluated through high-level simulation of $128\times128$ networks (\textit{f}=25\%, \textit{C}=25\%) of binary and multistate memristive synapses with random Gaussian noise ($\sigma$=0.25). The simulation is carried out in MATLAB with the hardware constraints imposed. \fig{metric}-(a) and (d) show the learning and mean accuracy for the hardware emulation of binary and multistate synapses while 100 random input patterns {similar to \cite{ben2007long}} with activity \textit{f} are imposed on the network.
We see that the mean accuracy for the binary network drops below 75\% after presenting 22 input patterns, whereas for the multistate network this accuracy drop is observed after learning 47 patterns. Similar to the analysis in Section II, the high mean accuracy comes at the cost of a drop in learning capability. The drop in the learning accuracy is higher in the hardware emulation than in its software counterpart due to the undesirable current through the low efficacy and \textit{pruned} synapses. We further explore the performance of the multistate network for different network connectivity (\textit{C}) and input activity (\textit{f}). As shown in \fig{metric}-(f), at $\textit{C}=50\%$, the network mean accuracy drops with dense activity. When $\textit{C}=$ 40-60\%, the percentages of connected and \textit{pruned} synapses are comparable. Dense activity in this setting ($\textit{f}=$ 75-90\%) results in significant current through the \textit{pruned} synapses, and the network shows poor performance, as observed in \fig{metric}(f). We deduce that the hardware multistate network can differentiate patterns well when there is a considerable difference between the number of connected and \textit{pruned} synapses; otherwise, conflicting patterns and undesired current severely degrade the performance. In order to demonstrate the functionality of the proposed scheme, a multistate synaptic network with 5 input neurons and 3 output neurons is simulated in Cadence Virtuoso. \fig{dual} (left panel) shows a scenario where four synapses in a crossbar column, denoted by w$_1$-w$_4$, have active presynaptic inputs and the error is positive. According to the learning rule, a potentiating pulse is applied to these synapses. w$_2$, w$_3$, and w$_4$ have \textit{low} initial efficacy, but they are in metalevels 0, 1, and 2, respectively. In \fig{dual}, we see that only w$_2$ changes its efficacy level due to potentiation, whereas w$_3$ and w$_4$ only transition to a lower metastate.
w$_1$, which has \textit{high} initial efficacy at metalevel 0, transitions to metalevel 1. The right panel of \fig{dual} shows the transitions of these potentiated weights when the error is negative. Since w$_2$ is in metalevel 0, it changes its efficacy level to \textit{low} after the depression cycle, while all the other weights retain their efficacy level with an appropriate change in metastate.
\begin{figure} \centering \includegraphics[width=0.9\linewidth]{Figures/Cross_arch_3.png} \caption{System level architecture of the memristor metaplastic network, with low sparsity. As denoted, the memristors can be at different synaptic efficacies or pruned during learning. \textit{TR} represents the row and column training circuitry and \textit{CC} is the current comparator circuitry.\vspace{-3mm}} \label{fig:arch} \end{figure}
\begin{figure} \centering \includegraphics[width=0.7\linewidth]{Figures/Circuitry_2.png} \caption{The training circuit, current comparator and the error computing unit for one crossbar column within the network.} \label{fig:tr} \end{figure}
\begin{figure} \centering \includegraphics[width=0.9 \linewidth]{dual} \caption{Change in metaplastic synapses with potentiation and depression. The left panel shows the change in synapses with potentiation and the right panel shows the same for depression. The dotted lines in the right panel show the metalevels of the synapses. We see that only the synapse in metalevel 0 ($w_{2}$ in both panels) changes its efficacy level, while the synapses in higher metalevels ($w_{1}$, $w_{3}$, and $w_{4}$) only change their metalevels, retaining the same efficacy level.\vspace{-6mm}} \label{dual} \end{figure}
\subsection{Power consumption}
The power consumption of the proposed network is highly impacted by the input activity (\textit{f}) and network connectivity (\textit{C}).
Considering connectivity \textit{C}=50\% in the implemented $5 \times 3$ crossbar network, we found the average power consumption to be $24.64\mu W$ (excluding the control circuitry)\footnote{Since we did not find any similar realization of binary metaplastic synapses, a performance comparison could not be conducted.} for 100 input and output patterns with activity \textit{f}=75\%. Low input activity (\textit{f}) and low connectivity (\textit{C}) are highly favorable for the proposed network. In such a setting, the power consumption of the network is reduced, since the majority of the synaptic connections are pruned. It also enables the network to better utilize the metaplasticity of multistate synapses and to show higher retention and learning capability.
\section{Conclusions}
Metaplastic synapses can equip neural networks to better address catastrophic forgetting. This work investigates the performance of multistate synapses for retention and reception of information. It is demonstrated that the model shows a slower decay in mean accuracy than the binary model, with moderate deterioration in learning accuracy. We then capture the characteristics of a multistate synapse in a memristive device through an appropriate training method to map it to the metaplastic states. The inference and training procedure is validated by simulating a small-scale crossbar network ($5\times3$ size) in Cadence. Furthermore, high-level emulation of the network shows that the number of patterns that the multistate memristive synaptic network can detect with $\leq$ 25\% mean error is $\simeq$ 2.1 times that of its binary counterpart.
{ "redpajama_set_name": "RedPajamaArXiv" }
4,001
package org.uma.jmetal.algorithm.multiobjective.moead;

import org.uma.jmetal.algorithm.multiobjective.moead.util.MOEADUtils;
import org.uma.jmetal.operator.crossover.CrossoverOperator;
import org.uma.jmetal.operator.crossover.impl.DifferentialEvolutionCrossover;
import org.uma.jmetal.operator.mutation.MutationOperator;
import org.uma.jmetal.problem.Problem;
import org.uma.jmetal.solution.doublesolution.DoubleSolution;
import org.uma.jmetal.util.ConstraintHandling;
import org.uma.jmetal.util.comparator.CrowdingDistanceComparator;
import org.uma.jmetal.util.solutionattribute.Ranking;
import org.uma.jmetal.util.solutionattribute.impl.CrowdingDistance;
import org.uma.jmetal.util.solutionattribute.impl.DominanceRanking;

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

import static org.uma.jmetal.util.ConstraintHandling.feasibilityRatio;
import static org.uma.jmetal.util.ConstraintHandling.isFeasible;

/**
 * This class implements the MOEA/D-IEpsilon algorithm based on the one presented in the paper:
 * "Z. Fan, W. Li, X. Cai, H. Huang, Y. Fang, Y. You, J. Mo, C. Wei, and E. D. Goodman, "An
 * improved epsilon constraint-handling method in MOEA/D for CMOPs with large infeasible regions,"
 * Soft Computing, https://doi.org/10.1007/s00500-019-03794-x
 *
 * @author Antonio J. Nebro
 * @version 1.0
 */
@SuppressWarnings("serial")
public class MOEADIEpsilon extends AbstractMOEAD<DoubleSolution> {
  private DifferentialEvolutionCrossover differentialEvolutionCrossover;
  private double epsilonK;
  private double phiMax = -1e30;
  private List<DoubleSolution> archive;

  public MOEADIEpsilon(
      Problem<DoubleSolution> problem,
      int populationSize,
      int resultPopulationSize,
      int maxEvaluations,
      MutationOperator<DoubleSolution> mutation,
      CrossoverOperator<DoubleSolution> crossover,
      FunctionType functionType,
      String dataDirectory,
      double neighborhoodSelectionProbability,
      int maximumNumberOfReplacedSolutions,
      int neighborSize) {
    super(
        problem,
        populationSize,
        resultPopulationSize,
        maxEvaluations,
        crossover,
        mutation,
        functionType,
        dataDirectory,
        neighborhoodSelectionProbability,
        maximumNumberOfReplacedSolutions,
        neighborSize);

    differentialEvolutionCrossover = (DifferentialEvolutionCrossover) crossoverOperator;
    archive = new ArrayList<>();
  }

  @Override
  public void run() {
    initializeUniformWeight();
    initializeNeighborhood();
    initializePopulation();
    idealPoint.update(population);

    double[] constraints = new double[populationSize];
    for (int i = 0; i < populationSize; i++) {
      constraints[i] = ConstraintHandling.overallConstraintViolationDegree(population.get(i));
    }
    Arrays.sort(constraints);
    double epsilonZero = Math.abs(constraints[(int) Math.ceil(0.05 * populationSize)]);

    if (phiMax < Math.abs(constraints[0])) {
      phiMax = Math.abs(constraints[0]);
    }

    int tc = (int) (0.8 * maxEvaluations / populationSize);
    tc = 800;
    double tao = 0.1;
    double rk = feasibilityRatio(population);

    evaluations = populationSize;
    int generationCounter = 0;
    epsilonK = epsilonZero;

    do {
      // Update the epsilon level
      if (generationCounter >= tc) {
        epsilonK = 0;
      } else {
        if (rk < 0.95) {
          epsilonK = (1 - tao) * epsilonK;
        } else {
          epsilonK = phiMax * (1 + tao);
        }
      }

      int[] permutation = new int[populationSize];
      MOEADUtils.randomPermutation(permutation, populationSize);

      for (int i = 0; i < populationSize; i++) {
        int subProblemId = permutation[i];
        NeighborType neighborType = chooseNeighborType();

        List<DoubleSolution> parents = parentSelection(subProblemId, neighborType);
        differentialEvolutionCrossover.setCurrentSolution(population.get(subProblemId));
        List<DoubleSolution> children = differentialEvolutionCrossover.execute(parents);

        DoubleSolution child = children.get(0);
        mutationOperator.execute(child);
        problem.evaluate(child);
        evaluations++;

        // Update phiMax
        if (phiMax < Math.abs((double) ConstraintHandling.overallConstraintViolationDegree(child))) {
          phiMax = (double) ConstraintHandling.overallConstraintViolationDegree(child);
        }

        idealPoint.update(child.getObjectives());
        updateNeighborhood(child, subProblemId, neighborType);
      }

      rk = feasibilityRatio(population);
      updateExternalArchive();
      generationCounter++;
    } while (evaluations < maxEvaluations);
  }

  public void initializePopulation() {
    for (int i = 0; i < populationSize; i++) {
      DoubleSolution newSolution = (DoubleSolution) problem.createSolution();
      problem.evaluate(newSolution);
      population.add(newSolution);
    }
  }

  @Override
  protected void updateNeighborhood(
      DoubleSolution individual, int subproblemId, NeighborType neighborType) {
    int size;
    int numberOfReplaceSolutions = 0;

    if (neighborType == NeighborType.NEIGHBOR) {
      size = neighborhood[subproblemId].length;
    } else {
      size = population.size();
    }

    int[] perm = new int[size];
    MOEADUtils.randomPermutation(perm, size);

    for (int i = 0; i < size; i++) {
      int k;
      if (neighborType == NeighborType.NEIGHBOR) {
        k = neighborhood[subproblemId][perm[i]];
      } else {
        k = perm[i];
      }

      double f1 = fitnessFunction(population.get(k), lambda[k]);
      double f2 = fitnessFunction(individual, lambda[k]);

      double cons1 =
          Math.abs(ConstraintHandling.overallConstraintViolationDegree(population.get(k)));
      double cons2 = Math.abs(ConstraintHandling.overallConstraintViolationDegree(individual));

      if (cons1 < epsilonK && cons2 <= epsilonK) {
        if (f2 < f1) {
          population.set(k, (DoubleSolution) individual.copy());
          numberOfReplaceSolutions++;
        }
      } else if (cons1 == cons2) {
        if (f2 < f1) {
          population.set(k, (DoubleSolution) individual.copy());
          numberOfReplaceSolutions++;
        }
      } else if (cons2 < cons1) {
        population.set(k, (DoubleSolution) individual.copy());
        numberOfReplaceSolutions++;
      }

      if (numberOfReplaceSolutions >= maximumNumberOfReplacedSolutions) {
        return;
      }
    }
  }

  @Override
  public List<DoubleSolution> getResult() {
    return archive;
  }

  @Override
  public String getName() {
    return "MOEA/D IEpsilon";
  }

  @Override
  public String getDescription() {
    return "MOEA/D with improved epsilon constraint handling method";
  }

  private void updateExternalArchive() {
    List<DoubleSolution> feasibleSolutions = new ArrayList<>();
    for (DoubleSolution solution : population) {
      if (isFeasible(solution)) {
        feasibleSolutions.add((DoubleSolution) solution.copy());
      }
    }

    if (feasibleSolutions.size() > 0) {
      feasibleSolutions.addAll(archive);
      Ranking<DoubleSolution> ranking = new DominanceRanking<>();
      ranking.computeRanking(feasibleSolutions);

      List<DoubleSolution> firstRankSolutions = ranking.getSubFront(0);
      if (firstRankSolutions.size() <= populationSize) {
        archive.clear();
        for (DoubleSolution solution : firstRankSolutions) {
          archive.add((DoubleSolution) solution.copy());
        }
      } else {
        CrowdingDistance<DoubleSolution> crowdingDistance = new CrowdingDistance<>();
        while (firstRankSolutions.size() > populationSize) {
          crowdingDistance.computeDensityEstimator(firstRankSolutions);
          firstRankSolutions.sort(new CrowdingDistanceComparator<>());
          firstRankSolutions.remove(firstRankSolutions.size() - 1);
        }
        archive.clear();
        for (int i = 0; i < populationSize; i++) {
          archive.add((DoubleSolution) firstRankSolutions.get(i).copy());
        }
      }
    }
  }
}
1,078
Juni Der Ort Casino Spielarcaden liegt in Vöhringen. Es ist kategorisiert Spielhalle. Treffer 1 - 15 von 15 Casino Hot Game GmbH i.G. 15,1 km. Casino Spielarcaden befindet sich am Ort Deutschland, Baden-Württemberg, Vöhringen, village. Sie können die Adresse, die Telefonnummer, die Website, die . Der Ort Casino Spielarcaden liegt in Vöhringen. Es ist kategorisiert Spielhalle. In viata nu te ruga de nimeni fati singur curaj si lupta pentru ca free casino no deposit spins nu o sa fie niciodata Beste Spielothek in Meyerhausen finden tine. Hol dir die App. Claim your listing for free to respond to reviews, update your profile and much more. Map updates are paused. Soweit die Inhalte auf dieser Seite nicht vom Betreiber erstellt wurden, werden die Urheberrechte Dritter beachtet. Neueste Kommentare Domuro bei Casino spielarcaden vöhringen. Verpflichtungen zur Entfernung oder Sperrung der Nutzung von Informationen nach den allgemeinen Gesetzen bleiben hiervon unberührt. Hector reveals that he has been living in a part of the Venture compound closed off by Rusty years prior, unaware that it had been closed, or that Jonas Venture had died, at which point Rusty unceremoniously evicts him. Which Shoreleave believes is" o Fresh salad with baby prawns and Marie Rose sauce. The portions are large from the normal menu and the all you can eat shrimp is a favorite here. All the complete for the led and game more and a "spaghetti" experienced and wheel This the the boys. Michael-Kirche Blick zum Altar. Prawn salad, a henchman appears in a support group attended by 21 and 24 after the imprisonment of The Monarch. Lade unsere App herunter, um Schritt-für-Schritt-Anleitungen und Skull auf deutsch Abfahrtszeiten zu erhalten und zu erfahren, welche nahegelegene Verkehrslinie dich innerhalb kürzester Zeit zu Casino Spielarcaden bringt. The portions are large from the normal menu and the all you can eat shrimp is a favorite here. Ihre Redaktion vor Ort Sulz. 
Um Artikel kommentieren zu können, ist eine Registrierung erforderlich. Skip to content Besuchen Sie uns! I took a rumpsteak "Madagaskar" and it was about the best hera casino pepper sauce I had in The view of the airport in The food is fantastic. Soweit auf unseren Seiten personenbezogene Daten beispielsweise Name, Anschrift oder eMail-Adressen erhoben werden, erfolgt dies, soweit möglich, stets auf freiwilliger Basis. Klaus Schätzle hält einen einführenden Vortrag über die Novemberrevolution Suche in der Spielothek Währenddessen stand der Gast plötzlich hinter ihm, bedrohte ihn mit star 2 online Pistole und fesselte ihn im Netto online gewinnspiel. Die Betreiber der Seiten behalten sich ausdrücklich rechtliche Schritte im Falle der unverlangten Zusendung von Werbeinformationen, etwa durch Spam-Mails, vor. Net basiert auf den international anerkannten Wettchancen und Spielregeln. Funktioniert es immer noch nicht? Ultrasound Boot Camp Our Boot Camp program is designed to help both new and experienced biomed professionals develop a comprehensive knowledge of preventative maintenance rummy online ohne anmeldung which they master for various platforms and equipment. You at the right place. And the system will instantly reward the account with 50 free spins. Players can check out such popular games as Emoticons. SlotsMagic is available in more than 20 different languages. There are many types of magical beings in the supernatural universe. Monkey Money 2, the all new barnyard bucks, slotsMagic works with a certificate of fairness issued by iTech Labs. Simply make a deposit of at least. Bingo Billions, hot Wheels, boasts a live dealer room and features some of the most popular slots around. Reels of Magic Casino Slots bewitches you with its sensational odds and its mystical good luck. And in both offers, quite slapdash, the massive assortment of online video slots available at SlotsMagic is something that we love. 
The welcome package is great if you read the Terms Conditions and take the time to understand them. Bronze, Silver, Gold, Platinum and Elite. The website neatly divides the services. A simple Google search or an online casino directory locate you Tons of. Customer Complaints at Le Bon Casino. Le, bon, casino, has A Warning! Mgm grand hotel and casino las vegas nevada Bonus Poker. Nicoleta Grozavescu is feeling thoughtful at Magic Casino. A bvb merino site is also available for those who want to play via their Android stoffe wolfsburg iOS devices. Ultrasound Boot Camp Our Boot Camp program is designed to help both new and experienced biomed professionals develop a comprehensive knowledge of preventative maintenance rummy online ohne anmeldung which they master for various platforms and equipment. Everyone has been new to the casino the first time they tried to play a game at the table. To be honest, thanks to the use of credit and debit cards. Suggest a phone number. Then checkout the Best iPhone Casino Sites available. Catalina Gheorghiu Despacito Romanian. Top 10 Casino Websites is where you will find casino markt schwaben to playing the best:. Business Rooms casino spielarcaden vöhringen the latest technology. Slots magic casino download! Le, bon, casino is blacklisted! An der Stirnseite des Tisches befindet sich casino action.it Roulettekessel. Choose from 4 main variants at SlotsMagic: Get casino addicted to the spellbinding payouts that you win in this amazing game. The perks start as soon as you open an account with the site. When it comes to withdrawals, this casino requires additional documentation as a standard procedure, and this is mandatory for a first withdrawal. And the system will instantly reward the account with 50 free spins. Players can check out such popular games as Emoticons. SlotsMagic is available in more than 20 different languages. There are many types of magical beings in the supernatural universe. 
In the past there was not such thing as online casinos and blackjack online. Le, bon, casino, has A Warning! Mgm grand hotel and casino las vegas nevada Bonus Poker. Le, bon, casino is blacklisted! Business Rooms with the latest technology poker bonuses. Regina casino hotel - Hinweise ceasars palace casino Casino games merkur kostenlos - Casino amsterdam poker Casino herrenlandstrasse radolfzell - Casino kornwestheim aldingerstr Casino jetons werte Casino boni ohne einzahlung Harrington casino reviews All slots android casino - Ojo casino gutscheincode Samsung online casino. Effie , 2 hopeaa. Vuoden Huiput , Kultahuippu. An der Stirnseite des Tisches befindet sich der Roulettekessel. Heutzutage existieren kaum noch traditionelle Doppeltische. Just type in your. Jumbo Joker - Mobil - Duration: Net basiert auf den international anerkannten Wettchancen und Spielregeln. Our team remains committed to helping all our readers become smart players. Welcome Bonuses for new players who sign up and make their first real money deposit Reload Bonuses on other mystic übersetzung Cash Back Bonuses, which handy option englisch usually a percentage spinit casino no deposit bonus code losses during a Labouchere Roulette Strategy - Huge Chance to Win Mr Green Casino period Loyalty bonuses and comp points for regular players Casino sound effects promos and special offers So if you want to stretch your dollars to the limit and double or triple your bankroll, stay tuned to our site for the biggest bonus offers. As you progress in all games. Technology lotto tipps und tricks play the role of enabler — to take the business definitions and business processed defined above Beste Spielothek in Kokenwahlde finden instantiate these into a technology platform that is resilient and sustainable. Participants are given in-depth instruction and hands-on experience covering the technology, system design, star wars the old republic casino and troubleshooting methodologies for each ultrasound system. 
There are many types of stanley cup winners beings in the supernatural universe. The energy produced from the battle creates an endless supply of supernatural jackpots. Fair-Play City-Spielhalle 0 14,8 km. When it comes to aktueller champions league sieger, this casino requires additional documentation as a standard procedure, and this is mandatory for a first withdrawal. So if you want to stretch your dollars to the limit and double or triple your bankroll, stay tuned to our site for how to play blackjack at casino table biggest bonus offers. I usually order schnitzel, pork steaks, or salad and of course some wine The restaurant is located at the small airport of Föhren, near Trier. Die folgenden Verkehrslinien halten nahe Casino Spielarcaden - Bus 5. Ältester fussballverein had the deep fried apple rings. Der Nutzung von im Rahmen der Impressumspflicht veröffentlichten Kontaktdaten durch Dritte zur Übersendung von nicht ausdrücklich angeforderter Werbung und Ark ghost deaktivieren wird hiermit ausdrücklich widersprochen. Die Zustellung erfolgt 1 x pro Woche. Diese Daten werden ohne Ihre ausdrückliche Zustimmung nicht an Dritte weitergegeben.
{ "redpajama_set_name": "RedPajamaC4" }
4,001
Glendale/Phoenix, Arizona - adorned with big sky, bright stars and breathtaking vistas - has long been known as the gateway to the Grand Canyon. And this year it will also serve as home to The Grandest Stage of Them All, WrestleMania XXVI. Unrivaled in spectacle, triumph and exhilaration, the most striking event in entertainment will draw in excess of 70,000 fans from all over the world to the University of Phoenix Stadium in Glendale on March 28, 2010. As the world turns its eyes to the Valley of the Sun, WWE.com will provide you with an unblinking view of all the highs, heartbreaks and happenings of WrestleMania XXVI.
{ "redpajama_set_name": "RedPajamaC4" }
2,913
It is an application that tells the farmer about systematic techniques of farming, based on information about soil and weather. It also advertises seed companies by posting their branded products. "KirshiCulture's" objective is to connect farmers with precise knowledge of farming; the application also generates money by letting seed companies sponsor their products to farmers. It further helps grow our GDP and protects the crops by using the weather forecast information.
{ "redpajama_set_name": "RedPajamaC4" }
2,963
Q: Clarifai PHP/GRPC - Code 14: failed to connect to all addresses

So, I'm having this issue:

response: null
status: {
  code: 14
  details: failed to connect to all addresses
}

I tried everything that I can think of, but nothing works and I'm out of ideas. Can someone help me? Thanks!
This is the code, like the example:

$image = new Image([
    'base64' => file_get_contents($_SERVER['DOCUMENT_ROOT'] . $fileData['path'] . $fileData['name']),
]);
$data = new Data([
    'image' => $image
]);
$input = new Input([
    'data' => $data
]);
$request = new PostModelOutputsRequest([
    'user_app_id' => $this->userDataObject, // This is defined above
    'model_id' => 'aaa03c23b3724a16a56b629203edc62c', // This is the ID of the publicly available General model.
    'inputs' => [$input]
]);
[$response, $status] = $this->client->PostModelOutputs(
    $request,
    $this->metadata
)->wait();

A: Eloisa,
This is most likely due to the LetsEncrypt SSL root certificate expiring.

* What operating system are you using?
* gRPC is planning on fixing the issue in the next release
* At the moment, you can attempt this workaround: https://github.com/grpc/grpc/issues/27532#issuecomment-934006042
* You can also use simple REST calls until the gRPC implementation is fixed

Let me know if that helped!
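For reference, the workaround in the linked gRPC issue amounts to pointing gRPC at a CA bundle that contains the new ISRG Root X1 certificate, via the `GRPC_DEFAULT_SSL_ROOTS_FILE_PATH` environment variable. Roughly (the file path below is illustrative; set the variable in whatever environment runs your PHP process):

```shell
# Fetch a current CA bundle that includes ISRG Root X1
# (gRPC ships one in its repository as etc/roots.pem)
curl -o /etc/ssl/grpc_roots.pem \
  https://raw.githubusercontent.com/grpc/grpc/master/etc/roots.pem

# Tell gRPC (including the PHP extension) to use the fresh bundle
export GRPC_DEFAULT_SSL_ROOTS_FILE_PATH=/etc/ssl/grpc_roots.pem
```

Once an upgraded grpc extension with the fix is installed, the variable is no longer needed.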
{ "redpajama_set_name": "RedPajamaStackExchange" }
9,601
package net.qldarch.web.shiro;

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.sql.DataSource;

public class ShiroDb {
    private static final String DS_PATH = "java:/comp/env/jdbc/UserDB";

    public static interface Work<T> {
        public T run(Connection con) throws Exception;
    }

    private DataSource datasource() {
        try {
            InitialContext cxt = new InitialContext();
            DataSource ds = (DataSource) cxt.lookup(DS_PATH);
            if (ds == null) {
                throw new RuntimeException(String.format("no datasource at %s", DS_PATH));
            } else {
                return ds;
            }
        } catch (NamingException e) {
            throw new RuntimeException("failed to retrieve datasource", e);
        }
    }

    private <T> T execute(Work<T> work) {
        Connection con = null;
        try {
            DataSource ds = datasource();
            con = ds.getConnection();
            con.setAutoCommit(false);
            return work.run(con);
        } catch (Exception e) {
            try { con.rollback(); } catch (Exception sqle) {}
            throw new RuntimeException("got exception while executing transaction, rollback", e);
        } finally {
            if (con != null) {
                try { con.commit(); } catch (SQLException e) {}
                try { con.close(); } catch (SQLException e) {}
            }
        }
    }

    public ShiroUser get(final String username) {
        return execute(new Work<ShiroUser>() {
            @Override
            public ShiroUser run(Connection con) throws Exception {
                try (PreparedStatement pstmt = con.prepareStatement(
                        "SELECT users.email, user_roles.role_name"
                        + " FROM users, user_roles"
                        + " WHERE users.username = ? AND user_roles.username = users.username")) {
                    pstmt.setString(1, username);
                    try (ResultSet rs = pstmt.executeQuery()) {
                        return rs.next() ? new ShiroUser(
                                username,
                                rs.getString("email"),
                                rs.getString("role_name")) : null;
                    }
                }
            }
        });
    }
}
{ "redpajama_set_name": "RedPajamaGithub" }
3,333
\section{Introduction} Canonical Correlation Analysis (CCA) characterizes linear relationships between two sets of variables, and is commonly used to study associations between different data platforms in imaging and genomics \citep{Bach05aprobabilistic, Chi:2013gj, witten2009penalized}. However, while CCA uncovers common signals, it does not elucidate which signals are unique to each data source. Furthermore, standard CCA relies on the assumption of Gaussian distributions, and is not appropriate for analyses of datasets with count or proportion measurement types. Our first motivating example is a nutrigenomic study \citep{martin2007novel}, which collected gene expression and lipid concentration data from the same mice. We are interested in finding the common and unique signals between gene expression and lipid metabolism in relation to wild-type versus mutant mice. While gene expression levels can be modelled by Gaussian distributions with appropriate normalization, the lipid concentrations are presented as proportions, many of which are close to zero (25\% of proportions with values 0.002 or less), violating the Gaussian assumption. Our second motivating example concerns tumor heterogeneity, as a profiled tumor tissue contains signals not only from tumor cells, but also from immune and stromal cells, which presents significant challenges for effective cancer treatment. Multiple cell-type deconvolution methods have been developed to evaluate cellular heterogeneity \citep{newman2015robust,li2017timer,wang2018transcriptome, wang2019deep}, with each method utilizing different biological information and different cell types to estimate cellular purity. It is thus of interest to investigate the information that is concordant across methods, as well as information that is method-specific. However, all methods generate proportion data, violating the Gaussian assumption.
Multiple methods have been developed that decompose the data matrices into both common and individual signals \citep{Lock:2013ez, shu2020d, OnPLS,gaynanova2019structural}. However, these methods are designed for Gaussian data, and are not appropriate for proportion or count measurements. \citet{Zoh:2016fe,yoon2020sparse} propose non-Gaussian extensions of CCA; however, the corresponding models are not designed for proportion data, and neither method can extract individual information. Several methods tackle both challenges by considering a common and individual decomposition of natural parameter matrices in the exponential family framework, with \citet{klamibayesian} taking a Bayesian approach, and \citet{li2018general} taking a frequentist approach. However, these decomposition-based methods assume the common scores to be identical between the two datasets rather than highly correlated; thus they do not reduce to standard CCA even in the Gaussian case. Furthermore, the majority of matrix decomposition methods \citep{Lock:2013ez, klamibayesian, li2018general} do not enforce orthogonality between the individual signals, allowing these signals to embed correlated information. In this work, we propose to tackle both challenges within the exponential family framework by considering a low-rank decomposition of natural parameter matrices with common and individual components. We refer to our approach as Exponential CCA (ECCA). Unlike existing approaches based on exponential families \citep{klamibayesian, li2018general}, our model allows the common scores to be different (but correlated), and enforces orthogonality between individual signals (thus no shared information is retained). These modeling differences lead to a significantly more challenging estimation problem, as it involves non-convex optimization with orthogonality constraints.
To solve this problem, we derive an alternating algorithm based on an adaptation of the splitting method for orthogonality constrained problems \citep{Lai:2014dq}. Our algorithm converges in all numerical studies, with ECCA showing on-par or superior estimation performance compared to competing methods. In application to the nutrigenomic study \citep{martin2007novel}, ECCA is effective in extracting common and individual signals that separate the mouse genotype effect from the diet effects. In application to a tumor heterogeneity study in prostate cancer, ECCA is effective in extracting common and individual signals that relate to progression-free survival probability. The rest of the paper is organized as follows. Section~\ref{sec:lrccamodel} introduces the proposed ECCA model. Section~\ref{sec:estimation} derives the estimation algorithm. Section~\ref{sec:lrccaSimu} compares ECCA with available methods in simulation studies. Section~\ref{sec:data} describes applications of ECCA to (i) the nutrigenomic study and (ii) the tumor heterogeneity study. Section~\ref{sec:eccaDis} concludes with a discussion. \textbf{Notation:} For a matrix $\boldsymbol{A}$, we use $\boldsymbol{A}^\top$ to denote its transpose, $\boldsymbol{A}^{+}$ to denote its Moore-Penrose inverse, $\mathcal{C}(\boldsymbol{A})$ to denote its column space and $\mathcal{R}(\boldsymbol{A})$ to denote its row space. We use $\boldsymbol{P}_{\boldsymbol{A}} = \boldsymbol{A}\bA^{+}$ to denote the projection matrix onto the column space of $\boldsymbol{A}$. We use $\boldsymbol{P}^{\perp}_{\boldsymbol{A}} = \boldsymbol{I} - \boldsymbol{P}_{\boldsymbol{A}}$ to denote the projection matrix onto the orthogonal complement of $\mathcal{C}(\boldsymbol{A})$. \section{Proposed model}\label{sec:lrccamodel} We consider two data matrices $\boldsymbol{X}_1\in\mathbb{R}^{n\times p_1}$ and $\boldsymbol{X}_2\in\mathbb{R}^{n\times p_2}$, where the $n$ rows correspond to matched samples.
Similar to \citet{collins2001generalization,li2018general,landgraf2020generalized}, we assume that each data matrix $\boldsymbol{X}_k$, $k=1,2$, has a corresponding natural parameter matrix $\boldsymbol{\Theta}_k\in\mathbb{R}^{n\times p_k}$, and that given the natural parameter matrix the entries are independent with log probability mass or density function for entry $x_{kij}$: $$ \log f(x_{kij}|\theta_{kij})=x_{kij}\theta_{kij}-b_k(\theta_{kij}) + c_k(x_{kij}), $$ where $c_k(\cdot)$ does not depend on $\boldsymbol{\Theta}_k$ and $b_k(\cdot)$ is a convex function. The form of each function is determined by the choice of exponential family distribution for dataset $k$ (e.g., Gaussian, Binomial, Poisson), and different distributions are allowed for $\boldsymbol{X}_1$ and $\boldsymbol{X}_2$. Based on the motivating datasets, we focus on the Gaussian case and the Binomial proportion case. In the Gaussian case with variance one, $b_k(\theta_{kij}) = \theta_{kij}^2/2$, with the natural parameter corresponding to the mean of the distribution. In the Binomial proportion case with $m$ trials, $b_k(\theta_{kij}) =m\log\{1+ \exp(\theta_{kij}/m)\}$, and $\theta_{kij} = m\log\{p_{kij}/(1-p_{kij})\}$, where $p_{kij}$ is the probability of success. To formulate exponential CCA with orthogonal variation, we consider a low-rank model on centered natural parameter matrices. Let $\boldsymbol{\Theta}_k = \textbf{1}_n\boldsymbol{\mu}_{k}^\top + \widetilde \boldsymbol{\Theta}_k$, where $\textbf{1}_n$ is a vector of ones of length $n$ and $\boldsymbol{\mu}_k\in\mathbb{R}^{p_k}$ is the intercept, so that $\widetilde \boldsymbol{\Theta}_k$ is the column-centered matrix of natural parameters. Let $r_k =\rank(\widetilde \boldsymbol{\Theta}_k)$.
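To make the two cases concrete, the following sketch (illustrative only, not part of the model) implements the Binomial proportion map $\theta = m\log\{p/(1-p)\}$ and both log-partition functions $b_k$, and checks numerically that the derivative $b_k'(\theta)$ recovers the mean of the distribution in each case:

```python
import numpy as np

def theta_binom(p, m):
    """Natural parameter for a Binomial proportion with m trials."""
    return m * np.log(p / (1 - p))

def b_binom(theta, m):
    """Log-partition function b_k in the Binomial proportion case."""
    return m * np.log1p(np.exp(theta / m))

def b_gauss(theta):
    """Log-partition function b_k in the Gaussian (variance one) case."""
    return theta**2 / 2

# b'(theta) should recover the mean: the success probability p for
# proportions, and theta itself in the Gaussian case.
p, m, eps = 0.3, 100, 1e-6
theta = theta_binom(p, m)
grad_b = (b_binom(theta + eps, m) - b_binom(theta - eps, m)) / (2 * eps)
grad_g = (b_gauss(2.0 + eps) - b_gauss(2.0 - eps)) / (2 * eps)
print(round(grad_b, 4), round(grad_g, 4))  # close to 0.3 and 2.0
```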
We assume \begin{equation}\label{eq:expDecom} \begin{split} \widetilde\boldsymbol{\Theta}_{1}= \boldsymbol{U}_{1}\boldsymbol{V}_1^\top+\boldsymbol{Z}_{1}\boldsymbol{A}_1^\top,\quad \widetilde\boldsymbol{\Theta}_{2}= \boldsymbol{U}_{2}\boldsymbol{V}_2^\top+\boldsymbol{Z}_{2}\boldsymbol{A}_2^\top, \end{split} \end{equation} where $\boldsymbol{U}_1, \boldsymbol{U}_2\in\mathbb{R}^{n\times r_0}$ are correlated score matrices such that $\boldsymbol{U}_k^{\top}\textbf{1}_n = \bf 0$, $\boldsymbol{U}_k^{\top}\boldsymbol{U}_k = \boldsymbol{I}_{r_0}$, $\boldsymbol{U}_1^{\top}\boldsymbol{U}_2 = \textup{diag}(\rho_1, \dots, \rho_{r_0})$ (capturing $r_0$ correlations between $\boldsymbol{X}_1$ and $\boldsymbol{X}_2$), and $\boldsymbol{V}_k\in\mathbb{R}^{p_k\times r_0}$ are corresponding loading matrices. Furthermore, $\boldsymbol{Z}_k\in\mathbb{R}^{n\times (r_k-r_0)}$ capture orthogonal variation in each of the views (such that $\boldsymbol{Z}_k^{\top}\boldsymbol{Z}_k = \boldsymbol{I}_{r_k - r_0}$, $\boldsymbol{Z}_1^{\top}\boldsymbol{Z}_2 = \bf 0$, $\boldsymbol{Z}_k^{\top}(\textbf{1}_n\ \boldsymbol{U}_1\ \boldsymbol{U}_2) = \bf 0$), and $\boldsymbol{A}_k\in\mathbb{R}^{p_k \times (r_k-r_0)}$ capture the loadings corresponding to $\boldsymbol{Z}_k$. We refer to $\boldsymbol{J}_k = \boldsymbol{U}_k\boldsymbol{V}_k^{\top}$ as the \textit{joint} signal, and to $\boldsymbol{I}_k = \boldsymbol{Z}_k\boldsymbol{A}_k^{\top}$ as the \textit{individual} signal. \subsection{Connection to classical CCA and model identifiability}\label{sec:normalCCA} In the Gaussian case, the natural parameter corresponds to the mean of the distribution, thus $\boldsymbol{X}_k = \boldsymbol{\Theta}_k + \boldsymbol{E}_k = \textbf{1}_n\boldsymbol{\mu}_{k}^\top + \widetilde \boldsymbol{\Theta}_k + \boldsymbol{E}_k$, $k=1, 2$, where $\boldsymbol{E}_k$ is the error matrix with elements following a mean-zero Gaussian distribution.
The classical CCA problem can be viewed as a problem of finding the correlated basis pairs $\boldsymbol{u}_{1l}, \boldsymbol{u}_{2l}\in \mathbb{R}^n$, $l=1, \dots, r_0$, between column spaces $\mathcal{C}(\widetilde \boldsymbol{\Theta}_1)$ and $\mathcal{C}(\widetilde \boldsymbol{\Theta}_2)$ \citep{shu2020d}: \begin{equation}\label{eq:CCA} \begin{split} (\boldsymbol{u}_{1l},\boldsymbol{u}_{2l})&=\argmax_{\boldsymbol{u}_1,\boldsymbol{u}_2} \left\{\Cor(\boldsymbol{u}_1,\boldsymbol{u}_2)\right\}\\ \text{subject to}& \quad \boldsymbol{u}_1^{\top}\boldsymbol{u}_1 = \boldsymbol{u}_2^{\top}\boldsymbol{u}_2 = 1, \\ &\quad \boldsymbol{u}_1\in \mathcal{C}(\widetilde \boldsymbol{\Theta}_1)\backslash\text{span}(\{\boldsymbol{u}_{1i}\}_{i=1}^{l-1}),\quad \boldsymbol{u}_2\in \mathcal{C}(\widetilde \boldsymbol{\Theta}_2)\backslash\text{span}\left(\{\boldsymbol{u}_{2i}\}_{i=1}^{l-1}\right). \end{split} \end{equation} Each $(\boldsymbol{u}_{1l},\boldsymbol{u}_{2l})$ is the $l$th pair of canonical variables with corresponding $l$th canonical correlation $\Cor(\boldsymbol{u}_{1l},\boldsymbol{u}_{2l}) = \boldsymbol{u}_{1l}^{\top}\boldsymbol{u}_{2l} = \rho_l$. The total number of pairs $r_0$ with non-zero correlation $\rho_l >0$ corresponds to the number of principal angles between $\mathcal{C}(\widetilde \boldsymbol{\Theta}_1)$ and $\mathcal{C}(\widetilde \boldsymbol{\Theta}_2)$ that are strictly less than 90 degrees \citep{knyazev2002principal}. Let $r_k = \rank(\widetilde \boldsymbol{\Theta}_k)$. By definition, the number of canonical pairs satisfies $0\leq r_0 \leq \min(r_1, r_2)$. 
In the case of strict inequality, e.g., $r_0 < r_1$, this implies that the column space of $\widetilde \boldsymbol{\Theta}_1$ can be decomposed into $r_0$ basis vectors corresponding to canonical variables $\{\boldsymbol{u}_{1l}\}_{l=1}^{r_0}$, and the remaining $r_1 - r_0$ basis vectors $\{\boldsymbol{z}_{1l}\}_{l=1}^{r_1 - r_0}$ that are orthogonal to both the canonical variables and $\mathcal{C}(\widetilde \boldsymbol{\Theta}_2)$. Similarly, if $r_0 < r_2$, then $\{\boldsymbol{z}_{2l}\}_{l=1}^{r_2 - r_0}$ can be chosen as an arbitrary orthogonal basis of $\mathcal{C}(\widetilde \boldsymbol{\Theta}_2)\backslash\text{span}(\{\boldsymbol{u}_{2l}\}_{l=1}^{r_0})$ so that $\{\boldsymbol{u}_{2l}\}_{l=1}^{r_0}, \{\boldsymbol{z}_{2l}\}_{l=1}^{r_2 - r_0}$ form an orthogonal basis for $\mathcal{C}(\widetilde \boldsymbol{\Theta}_2)$. Theorem 1 in \citet{shu2020d} formalizes the relationship between $\mathcal{C}(\widetilde \boldsymbol{\Theta}_1)$ and $\mathcal{C}(\widetilde \boldsymbol{\Theta}_2)$ in terms of the canonical variables and the remaining basis vectors, which we restate below. \begin{thm}\label{thm:ident} Let $\boldsymbol{U}_1=[\boldsymbol{u}_{11},\cdots,\boldsymbol{u}_{1r_0}]\in\mathbb{R}^{n\times r_0}$, $\boldsymbol{U}_2=[ \boldsymbol{u}_{21},\cdots,\boldsymbol{u}_{2r_0}]\in\mathbb{R}^{n\times r_0}$ contain the canonical variables from~\eqref{eq:CCA}. Let $\boldsymbol{Z}_1=[\boldsymbol{z}_{11}, \cdots, \boldsymbol{z}_{1(r_1-r_0)}]\in\mathbb{R}^{n\times (r_1-r_0)}$ and $\boldsymbol{Z}_2=[\boldsymbol{z}_{21}, \cdots, \boldsymbol{z}_{2(r_2-r_0)}]\in\mathbb{R}^{n\times (r_2-r_0)}$ be matrices of orthogonal basis vectors corresponding to $\mathcal{C}(\boldsymbol{P}^{\perp}_{\boldsymbol{U}_1}\widetilde \boldsymbol{\Theta}_1)$ and $\mathcal{C}(\boldsymbol{P}^{\perp}_{\boldsymbol{U}_2}\widetilde \boldsymbol{\Theta}_2)$, respectively.
Let $\boldsymbol{Q}_1=\begin{pmatrix} \boldsymbol{U}_1 & \boldsymbol{Z}_1\end{pmatrix}$, $\boldsymbol{Q}_2=\begin{pmatrix} \boldsymbol{U}_2 &\boldsymbol{Z}_2\end{pmatrix}$. Then \begin{equation*} \begin{split} \boldsymbol{Q}_1^{\top}\boldsymbol{Q}_1= \boldsymbol{I}_{r_1},\quad \boldsymbol{Q}_2^{\top}\boldsymbol{Q}_2= \boldsymbol{I}_{r_2}, \quad \boldsymbol{Q}_1^{\top}\boldsymbol{Q}_2=\begin{pmatrix} \boldsymbol{\Lambda}& \bf 0\\\bf0 &\bf{0}\end{pmatrix}, \end{split} \end{equation*} where $\bf 0$ is a zero-valued matrix of compatible size, and $\boldsymbol{\Lambda}=\textup{diag}(\rho_1,\cdots,\rho_{r_0})$ is a diagonal matrix of canonical correlations. \end{thm} Thus $\boldsymbol{U}_1$ and $\boldsymbol{U}_2$ capture canonical correlations, whereas $\boldsymbol{Z}_1$ and $\boldsymbol{Z}_2$ capture orthogonal variation. By construction, the columns of $\boldsymbol{Q}_1$ form an orthonormal basis of $\mathcal{C}(\widetilde \boldsymbol{\Theta}_1)$, and the columns of $\boldsymbol{Q}_2$ form an orthonormal basis of $\mathcal{C}(\widetilde \boldsymbol{\Theta}_2)$. Thus, in the Gaussian case, the proposed model~\eqref{eq:expDecom} encompasses the classical CCA decomposition, with additional explicit modeling of orthogonal variation (through $\boldsymbol{Z}_k$). More generally, we apply the proposed model~\eqref{eq:expDecom} to perform basis decomposition on the matrices of natural parameters in the general exponential family framework (not just in the Gaussian case). Since correlations in the Gaussian case rely on column-centering, we also formulate our model on the column-centered $\widetilde \boldsymbol{\Theta}_k$, so that the original $\boldsymbol{\Theta}_k$ has an intercept term $\boldsymbol{\mu}_k$. We next formalize the existence of model~\eqref{eq:expDecom} and the corresponding identifiability conditions. \begin{thm}\label{thm:modelident} Given column-centered $\widetilde \boldsymbol{\Theta}_k$, let $r_k = \rank(\widetilde \boldsymbol{\Theta}_k)$.
Let $r_0$ be the number of non-zero canonical correlations between $\widetilde \boldsymbol{\Theta}_1$ and $\widetilde \boldsymbol{\Theta}_2$ according to \eqref{eq:CCA}. \begin{enumerate} \item There exist $\boldsymbol{U}_k$, $\boldsymbol{V}_k$, $\boldsymbol{Z}_k$, $\boldsymbol{A}_k$ such that model~\eqref{eq:expDecom} holds together with its constraints. \item If the joint $\boldsymbol{J}_k=\boldsymbol{U}_k\boldsymbol{V}^\top_k$ and individual $\boldsymbol{I}_k=\boldsymbol{Z}_k\boldsymbol{A}^\top_k$ satisfy $ \rank(\boldsymbol{J}_k) + \rank(\boldsymbol{I}_k)= \rank(\widetilde \boldsymbol{\Theta}_k)$, then $\boldsymbol{J}_k$ and $\boldsymbol{I}_k$ are unique. Furthermore, if the canonical correlations are distinct, then the $\boldsymbol{U}_k$ are unique up to sign. If both $\widetilde{\boldsymbol{Z}}_k$ and $\boldsymbol{Z}_k$ satisfy the conditions, then there exists an orthogonal matrix $\boldsymbol{Q}_k \in \mathbb{R}^{(r_k - r_0) \times (r_k - r_0)}$ such that $\widetilde{\boldsymbol{Z}}_k = \boldsymbol{Z}_k\boldsymbol{Q}_k$. \end{enumerate} \end{thm} The proof is in Web Appendix A. \subsection{Connection to other existing decompositions} The existence and identifiability conditions of the proposed ECCA model are similar to the conditions for Decomposition-based Canonical Correlation Analysis (DCCA) \citep{shu2020d}. However, ECCA has two important differences from DCCA. First, DCCA decomposes $\boldsymbol{U}_1$ and $\boldsymbol{U}_2$ into a common part $\boldsymbol{C}$ and orthogonal parts $\boldsymbol{D}_1$, $\boldsymbol{D}_2$, and estimates those components separately rather than estimating $\boldsymbol{U}_k$ directly as ECCA does. Secondly, DCCA is restricted to the Gaussian case, where the corresponding estimates have closed-form expressions. In contrast, ECCA considers a more general exponential family framework for which closed-form solutions do not exist, presenting significant optimization challenges that we address in this work.
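To illustrate the Gaussian-case structure behind Theorem~\ref{thm:ident} and the decomposition~\eqref{eq:expDecom}, the following sketch (hypothetical dimensions, correlations, and loadings; score centering is omitted for brevity) builds natural parameter matrices with two correlated directions and view-specific orthogonal directions, and recovers the canonical correlations as singular values of the product of orthonormal bases:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p1, p2, r0, r1, r2 = 50, 30, 20, 2, 4, 3
rho = np.array([0.9, 0.6])  # hypothetical canonical correlations

# Orthonormal directions: r0 for U1, r0 extra for U2, the rest for Z1, Z2.
O = np.linalg.qr(rng.standard_normal((n, 2 * r0 + (r1 - r0) + (r2 - r0))))[0]
U1 = O[:, :r0]
U2 = U1 * rho + O[:, r0:2 * r0] * np.sqrt(1 - rho**2)  # U1.T @ U2 = diag(rho)
Z1 = O[:, 2 * r0:2 * r0 + (r1 - r0)]
Z2 = O[:, 2 * r0 + (r1 - r0):]

# Natural parameter matrices following the ECCA decomposition, random loadings.
T1 = U1 @ rng.standard_normal((p1, r0)).T + Z1 @ rng.standard_normal((p1, r1 - r0)).T
T2 = U2 @ rng.standard_normal((p2, r0)).T + Z2 @ rng.standard_normal((p2, r2 - r0)).T

# Orthonormal bases of the two column spaces (the ranks r1, r2 are known here);
# singular values of B1.T @ B2 are the cosines of the principal angles.
B1 = np.linalg.svd(T1, full_matrices=False)[0][:, :r1]
B2 = np.linalg.svd(T2, full_matrices=False)[0][:, :r2]
s = np.linalg.svd(B1.T @ B2, compute_uv=False)
print(np.round(s, 4))  # approximately 0.9, 0.6, 0
```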
Exponential PCA methods \citep{collins2001generalization, landgraf2020generalized} consider a low-rank decomposition of the matrix of natural parameters separately for each dataset, and thus do not answer which signals are correlated and which signals are unique across datasets. The Generalized Association Study (GAS) framework \citep{li2018general} also considers a decomposition of natural parameter matrices under the exponential family framework into joint and individual parts; however, the definitions of joint and individual differ from those in ECCA. In GAS, the joint parts have zero principal angles (all canonical correlations are one), whereas the individual parts are non-intersecting, but not necessarily orthogonal. For example, two canonical variables with a canonical correlation of 0.8 belong to the individual parts of the decomposition under the GAS model, but belong to the joint part of the decomposition under the ECCA model. Thus, GAS and ECCA agree in their treatment of canonical correlations that are exactly one or exactly zero, but disagree on canonical correlations that are strictly between 0 and 1. ECCA's treatment of those as joint is consistent with standard CCA. Furthermore, unlike GAS, the individual signals in ECCA are orthogonal to each other, meaning that these signals can be interpreted as view-specific information completely absent from the other view. The differences between the GAS and ECCA decompositions translate into significant differences in the underlying optimization problems and corresponding algorithms, as the additional orthogonality constraints in the ECCA model present considerable challenges, which we address here with the help of a splitting method \citep{Lai:2014dq}. \section{Estimation}\label{sec:estimation} \subsection{Overview}\label{sec:parameter} Let $L(\boldsymbol{\Theta}_k|\boldsymbol{X}_k)$ be the negative log-likelihood associated with natural parameter matrix $\boldsymbol{\Theta}_k$ given the data matrix:
$$ L(\boldsymbol{\Theta}_k|\boldsymbol{X}_k) = -\sum_{i=1}^{n}{\sum_{j=1}^{p_k}\log f(x_{kij}|\theta_{kij})} = \sum_{i=1}^{n}\sum_{j=1}^{p_k}\left\{- x_{kij}\theta_{kij} + b_k(\theta_{kij}) \right\} + C, $$ where $C$ is a constant that does not depend on $\boldsymbol{\Theta}_k$. In the Gaussian case with variance one (Section~\ref{sec:lrccamodel}), $$ L(\boldsymbol{\Theta}_k|\boldsymbol{X}_k) = \frac{1}{2}\|\boldsymbol{X}_k-\boldsymbol{\Theta}_k\|^2_F + C, $$ and in the Binomial proportion case with $m$ trials, $$ L(\boldsymbol{\Theta}_k|\boldsymbol{X}_k) = m\sum_{i = 1}^{n}\sum_{j = 1}^{p_k}\Big[-x_{kij}(\theta_{kij}/m) + \log\{1+\exp(\theta_{kij}/m)\}\Big] + C. $$ Observe that the number of trials $m$ enters the likelihood both as a multiplier and as a scaling term on $\boldsymbol{\Theta}_k$. Since the scaling does not affect the model decomposition, the choice of $m$ can be viewed as a relative weight assigned to view $k$. Given ranks $r_0$, $r_1$ and $r_2$, we propose to fit model~\eqref{eq:expDecom} by minimizing the sum of the negative log-likelihoods associated with $\boldsymbol{X}_1$ and $\boldsymbol{X}_2$, accounting for centering and the model constraints: \begin{equation}\label{eq:finalform} \begin{split} \minimize_{\boldsymbol{\mu}_k, \boldsymbol{U}_k, \boldsymbol{V}_k,\boldsymbol{Z}_k,\boldsymbol{A}_k}&\left\{L(\textbf{1}_n\boldsymbol{\mu}_{1}^\top + \boldsymbol{U}_{1}\boldsymbol{V}_1^\top+\boldsymbol{Z}_{1}\boldsymbol{A}_1^\top|\boldsymbol{X}_1) + L(\textbf{1}_n\boldsymbol{\mu}_{2}^\top + \boldsymbol{U}_{2}\boldsymbol{V}_2^\top+\boldsymbol{Z}_{2}\boldsymbol{A}_2^\top|\boldsymbol{X}_2)\right\} \\ \mbox{subject to}& \quad\boldsymbol{U}_k^{\top}\textbf{1}_n = {\bf 0},\quad \boldsymbol{U}_k^{\top}\boldsymbol{U}_k = \boldsymbol{I}_{r_0},\quad \boldsymbol{U}_1^{\top}\boldsymbol{U}_2 = \textup{diag}(\rho_1, \dots, \rho_{r_0}),\\ &\quad \boldsymbol{Z}_k^{\top}(\textbf{1}_n\ \boldsymbol{U}_1\ \boldsymbol{U}_2) = {\bf 0},\quad \boldsymbol{Z}_k^{\top}\boldsymbol{Z}_k = \boldsymbol{I}_{r_k - r_0},
\quad\boldsymbol{Z}_1^{\top}\boldsymbol{Z}_2 = {\bf 0}, \quad k=1,2. \end{split} \end{equation} We discuss rank selection approaches in Section~\ref{sec:rank}. To optimize~\eqref{eq:finalform}, we propose to use alternating updates over $\boldsymbol{\mu}_k, \boldsymbol{U}_k, \boldsymbol{V}_k, \boldsymbol{Z}_k, \boldsymbol{A}_k$ as summarized in Algorithm~\ref{a:full}. Each update corresponds to its own non-trivial optimization problem due to the combination of a (possibly) non-Gaussian likelihood $L(\boldsymbol{\Theta}_k|\boldsymbol{X}_k)$ and the orthogonality constraints in~\eqref{eq:finalform}. We propose to use the damped Newton's method to update the unconstrained model parameters (the intercept $\boldsymbol{\mu}_k$ and the loading matrices $\boldsymbol{V}_k$, $\boldsymbol{A}_k$). For the constrained model parameters, we derive modifications of the splitting orthogonality constraints (SOC) method with Bregman iterations \citep{yin2008bregman,Lai:2014dq}. Below we provide a high-level overview of each update; additional details are in Web Appendix B.
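For reference, the two negative log-likelihoods entering~\eqref{eq:finalform} can be written compactly; this sketch drops the constant $C$, and the saturated-parameter comparison at the end is only an illustrative check:

```python
import numpy as np

def nll_gaussian(X, Theta):
    """Gaussian negative log-likelihood with unit variance, up to a constant."""
    return 0.5 * np.sum((X - Theta) ** 2)

def nll_binom_prop(X, Theta, m):
    """Binomial proportion negative log-likelihood with m trials, up to a constant."""
    return m * np.sum(-X * (Theta / m) + np.log1p(np.exp(Theta / m)))

# Sanity check: the Binomial objective is smallest at the saturated parameters
# Theta = m * log{X / (1 - X)}, taken element-wise.
X = np.array([[0.2, 0.7], [0.4, 0.9]])
m = 100
Theta_sat = m * np.log(X / (1 - X))
print(nll_binom_prop(X, Theta_sat, m) < nll_binom_prop(X, Theta_sat + 3.0, m))  # True
```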
\begin{algorithm}[!t] \caption{ECCA algorithm}\label{a:full} \begin{algorithmic}[1] \Require Initial values $\boldsymbol{U}_1^{(0)}, \boldsymbol{U}_2^{(0)}, \boldsymbol{Z}_1^{(0)}, \boldsymbol{Z}_2^{(0)}$, ranks $r_0, r_1, r_2$, $t_{\max}$, $\epsilon$ \State $t\gets 0$ \State Calculate starting negative log-likelihood $L^{(0)}$ \While{$|L^{(t)} - L^{(t-1)}| > \epsilon$ and $t < t_{\max}$} \State $t \gets t+1$ \State Update of loadings: solve for $\boldsymbol{\mu}_k^{(t)}$, $\boldsymbol{V}_k^{(t)}$, $\boldsymbol{A}_k^{(t)}$, $k= 1, 2$ \State Update of orthogonal scores: solve for $\boldsymbol{Z}_1^{(t)}$, $\boldsymbol{Z}_2^{(t)}$ \State Update of correlated scores: solve for $\boldsymbol{U}_1^{(t)}$, $\boldsymbol{U}_2^{(t)}$ \State Rotation of correlated scores: update $\boldsymbol{U}_k^{(t)}$ and $\boldsymbol{V}_k^{(t)}$, $k=1, 2$ \State Calculate updated negative log-likelihood $L^{(t)}$ \EndWhile \State \Return {$\boldsymbol{\mu}_k^{(t)}, \boldsymbol{U}_k^{(t)}, \boldsymbol{V}_k^{(t)}, \boldsymbol{Z}_k^{(t)}, \boldsymbol{A}_k^{(t)}, k=1, 2$} \end{algorithmic} \end{algorithm} \subsection{Update of loadings} \label{s:loading_update} Given $\boldsymbol{U}_k$, $\boldsymbol{Z}_k$, $k=1, 2$, the update of loadings in~\eqref{eq:finalform} can be separated across $k$, leading to two separate optimization problems of the same form: \begin{equation}\label{eq:loadings} (\boldsymbol{\mu}^*_k,\boldsymbol{V}^*_k,\boldsymbol{A}^*_k) = \argmin_{\boldsymbol{\mu}_k,\boldsymbol{V}_k,\boldsymbol{A}_k}{L(\textbf{1}_n\boldsymbol{\mu}_{k}^\top + \boldsymbol{U}_{k}\boldsymbol{V}_k^\top+\boldsymbol{Z}_{k}\boldsymbol{A}_k^\top|\boldsymbol{X}_k)}. \end{equation} Since $L(\boldsymbol{\Theta}_k|\boldsymbol{X}_k)$ is differentiable and~\eqref{eq:loadings} has no constraints, we propose to use damped Newton's method for optimization. 
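In generic form, one such damped Newton update with backtracking line search can be sketched as follows; the convex objective below is an arbitrary smooth stand-in, not the ECCA likelihood:

```python
import numpy as np

def damped_newton_step(f, grad, hess, x, beta=0.5, alpha=1e-4):
    """One damped Newton update with backtracking on the step size t."""
    g, H = grad(x), hess(x)
    direction = -np.linalg.solve(H, g)
    t = 1.0
    # Backtrack until the Armijo sufficient-decrease condition holds.
    while f(x + t * direction) > f(x) + alpha * t * g @ direction:
        t *= beta
    return x + t * direction

# Stand-in smooth convex objective: f(x) = sum(exp(x) - x), minimized at x = 0.
f = lambda x: np.sum(np.exp(x) - x)
grad = lambda x: np.exp(x) - 1
hess = lambda x: np.diag(np.exp(x))
x = np.array([2.0, -1.0])
for _ in range(20):
    x = damped_newton_step(f, grad, hess, x)
print(np.round(x, 6))  # near the minimizer at zero
```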
For example, the update for $\boldsymbol{\mu}$ takes the form \begin{equation} \label{eq:mu_update} \boldsymbol{\mu}^{+} = \boldsymbol{\mu} - t(\nabla_{\boldsymbol{\mu}}^2 L)^{-1}\nabla_{\boldsymbol{\mu}} L, \end{equation} where $\boldsymbol{\mu}$ is the current value, $t \in (0, 1)$ is the step size, $\nabla_{\boldsymbol{\mu}} L$ is the gradient evaluated at the current value $\boldsymbol{\mu}$, and $\nabla_{\boldsymbol{\mu}}^2 L$ is the Hessian evaluated at the current value $\boldsymbol{\mu}$. We choose the step size by backtracking line search \citep{wright1999numerical}, and use the difference in objective function values to monitor convergence. In the special case that view $k$ follows the Gaussian distribution,~\eqref{eq:loadings} has a closed-form solution. Let $\boldsymbol{S}_k = (\textbf{1}_n\ \boldsymbol{U}_k\ \boldsymbol{Z}_k) \in \mathbb{R}^{n\times (1+r_k)}$ and $\boldsymbol{T}_k = (\boldsymbol{\mu}_k\ \boldsymbol{V}_k\ \boldsymbol{A}_k) \in \mathbb{R}^{p_k\times (1+r_k)}$. Then $$ L(\textbf{1}_n\boldsymbol{\mu}_{k}^\top + \boldsymbol{U}_{k}\boldsymbol{V}_k^\top+\boldsymbol{Z}_{k}\boldsymbol{A}_k^\top|\boldsymbol{X}_k) = \frac{1}{2}\|\boldsymbol{X}_k-\boldsymbol{S}_k\boldsymbol{T}_k^{\top}\|^2_F + C, $$ thus $(\boldsymbol{\mu}^*_k\ \boldsymbol{V}^*_k\ \boldsymbol{A}^*_k) = (\boldsymbol{S}_k^+\boldsymbol{X}_k)^{\top}$, where $\boldsymbol{S}_k^+$ is the Moore-Penrose inverse of $\boldsymbol{S}_k$. \subsection{Update of orthogonal scores} \label{s:ind_update} Given $\boldsymbol{\mu}_k$, $\boldsymbol{V}_k$, $\boldsymbol{A}_k$ and $\boldsymbol{U}_k$, let $\boldsymbol{B}_k = \textbf{1}_n\boldsymbol{\mu}_{k}^\top + \boldsymbol{U}_{k}\boldsymbol{V}_k^\top$.
Then the update of the orthogonal score matrices $\boldsymbol{Z}_1$ and $\boldsymbol{Z}_2$ corresponds to the following problem: \begin{equation}\label{eq:ZSOC} \begin{split} \minimize_{\boldsymbol{Z}_1,\boldsymbol{Z}_2}&\left\{L(\boldsymbol{B}_1+\boldsymbol{Z}_{1}\boldsymbol{A}_1^\top|\boldsymbol{X}_1) + L(\boldsymbol{B}_2+\boldsymbol{Z}_{2}\boldsymbol{A}_2^\top|\boldsymbol{X}_2)\right\} \\ \text{subject to }& \begin{pmatrix}\textbf{1}_n&\boldsymbol{U}_1&\boldsymbol{U}_2\end{pmatrix} ^\top\begin{pmatrix}\boldsymbol{Z}_1&\boldsymbol{Z}_2\end{pmatrix}=\bf{0}, \quad \begin{pmatrix}\boldsymbol{Z}_1&\boldsymbol{Z}_2\end{pmatrix}^\top \begin{pmatrix}\boldsymbol{Z}_1&\boldsymbol{Z}_2\end{pmatrix} = \boldsymbol{I}. \end{split} \end{equation} Problem \eqref{eq:ZSOC} has a convex objective function with respect to $(\boldsymbol{Z}_1, \boldsymbol{Z}_2)$ and nonconvex orthogonality constraints. To solve this problem, we adapt the SOC method, a splitting method for orthogonality constrained problems \citep{Lai:2014dq}. We introduce the auxiliary matrix $(\boldsymbol{P}_1, \boldsymbol{P}_2)$ and reformulate~\eqref{eq:ZSOC} as \begin{equation}\label{eq:ZSOC2} \begin{split} \min_{\boldsymbol{Z}_1,\boldsymbol{Z}_2,\boldsymbol{P}_1,\boldsymbol{P}_2}&\left\{L(\boldsymbol{B}_1+\boldsymbol{Z}_{1}\boldsymbol{A}_1^\top|\boldsymbol{X}_1) + L(\boldsymbol{B}_2+\boldsymbol{Z}_{2}\boldsymbol{A}_2^\top|\boldsymbol{X}_2)\right\} \\ \text{subject to }&\begin{pmatrix}\boldsymbol{Z}_1&\boldsymbol{Z}_2\end{pmatrix} = \begin{pmatrix} \boldsymbol{P}_1&\boldsymbol{P}_2\end{pmatrix}, \\ &\begin{pmatrix}\textbf{1}_n&\boldsymbol{U}_1&\boldsymbol{U}_2\end{pmatrix} ^\top\begin{pmatrix}\boldsymbol{P}_1&\boldsymbol{P}_2\end{pmatrix}=\bf{0}, \quad\begin{pmatrix}\boldsymbol{P}_1&\boldsymbol{P}_2\end{pmatrix}^\top \begin{pmatrix}\boldsymbol{P}_1&\boldsymbol{P}_2\end{pmatrix} = \boldsymbol{I}.
\end{split} \end{equation} The purpose of the auxiliary matrix is to separate the minimization of the objective function from the orthogonality constraints. Algorithm~\ref{a:ZSOC} summarizes the SOC updates applied to~\eqref{eq:ZSOC2}; see Web Appendix B for the derivation. As the updates for $\boldsymbol{Z}_k$ are unconstrained, we utilize the updates from Section~\ref{s:loading_update}. We use the primal residuals, $(\boldsymbol{Z}_1^{(t+1)}, \boldsymbol{Z}_2^{(t+1)}) - (\boldsymbol{P}_1^{(t+1)}, \boldsymbol{P}_2^{(t+1)})$, and the dual residuals, $(\boldsymbol{P}_1^{(t+1)}, \boldsymbol{P}_2^{(t+1)}) - (\boldsymbol{P}_1^{(t)}, \boldsymbol{P}_2^{(t)})$, to monitor convergence \citep{Boyd:2011bw}. \begin{algorithm}[!t] \caption{Splitting orthogonal constraint algorithm for~\eqref{eq:ZSOC2}}\label{a:ZSOC} \begin{algorithmic}[1] \Require $t=0$, $\boldsymbol{Z}_1^{(0)}$, $\boldsymbol{Z}_2^{(0)}$, $\boldsymbol{U}=(\textbf{1}_n,\boldsymbol{U}_1,\boldsymbol{U}_2)$, $t_{\max}$ \State Initialize $\boldsymbol{P}_1^{(0)} \gets \boldsymbol{Z}_1^{(0)},\boldsymbol{P}_2^{(0)} \gets \boldsymbol{Z}_2^{(0)},\boldsymbol{B}_1^{(0)} \gets 0,\boldsymbol{B}_2^{(0)} \gets 0$ \While{$t \neq t_{\max}$ and `not converged'} \State $t \gets t+1$; \State $\boldsymbol{Z}_1^{(t)} \gets \argmin_{\boldsymbol{Z}_1}L(\boldsymbol{\Theta}_1|\boldsymbol{X}_1) + \frac{\gamma}2\|\boldsymbol{Z}_1-\boldsymbol{P}^{(t-1)}_1+\boldsymbol{B}_1^{(t-1)}\|_F^2$. \State $\boldsymbol{Z}_2^{(t)} \gets \argmin_{\boldsymbol{Z}_2}L(\boldsymbol{\Theta}_2|\boldsymbol{X}_2) +\frac{\gamma}2\|\boldsymbol{Z}_2-\boldsymbol{P}_2^{(t-1)}+\boldsymbol{B}^{(t-1)}_2\|_F^2$. \State Compute the SVD of $(\boldsymbol{I}-\boldsymbol{U}\bU^{+})(\boldsymbol{Z}_1^{(t)}+\boldsymbol{B}_1^{(t-1)},\boldsymbol{Z}_2^{(t)}+\boldsymbol{B}_2^{(t-1)})=\boldsymbol{M}\boldsymbol{D}\boldsymbol{N}^\top$. \State $(\boldsymbol{P}_1^{(t)},\boldsymbol{P}^{(t)}_2)\gets\boldsymbol{M}\boldsymbol{N}^\top$.
\State $\boldsymbol{B}_1^{(t)} \gets\boldsymbol{B}_1^{(t-1)} + \boldsymbol{Z}_1^{(t)} -\boldsymbol{P}_1^{(t)}.$ \State $\boldsymbol{B}_2^{(t)} \gets\boldsymbol{B}_2^{(t-1)} + \boldsymbol{Z}_2^{(t)} -\boldsymbol{P}_2^{(t)}.$ \EndWhile \State \Return {$\boldsymbol{Z}_k^{(t)}, \boldsymbol{P}_k^{(t)}, \boldsymbol{B}_k^{(t)}, k=1, 2$} \end{algorithmic} \end{algorithm} The parameter $\gamma$ can be interpreted as an inverse step size. The larger $\gamma$ is, the more likely the algorithm is to converge, but the more iterations it takes; the smaller $\gamma$ is, the larger the steps, but the algorithm may fail to converge. By default, we use $\gamma = 1000$, which led to convergence in all our numerical studies. \citet{Lai:2014dq} show empirically that the algorithm converges when $\gamma$ is chosen sufficiently large, but provide no theoretical guarantees. Below we establish such guarantees for Algorithm~\ref{a:ZSOC} in the Binomial/Gaussian and Binomial/Binomial cases by taking advantage of the results of \citet{wang2019global} on the convergence of ADMM for non-convex problems. \begin{thm}\label{thm:SOCconvergence} If the data matrices $\boldsymbol{X}_1, \boldsymbol{X}_2$ follow Gaussian or Binomial proportion distributions, then for sufficiently large $\gamma$, the sequence $(\boldsymbol{Z}^{(t)} , \boldsymbol{P}^{(t)} , \boldsymbol{B}^{(t)})$ generated by SOC Algorithm~\ref{a:ZSOC} has at least one limit point, and each limit point is a stationary point of the augmented Lagrangian. \end{thm} In the special case that both exponential families are Gaussian, the solution has a closed form.
Let $\boldsymbol{Y}_k = \boldsymbol{X}_k - \textbf{1}_n\boldsymbol{\mu}^\top_k - \boldsymbol{U}_k\boldsymbol{V}^\top_k, k = 1,2$, $\boldsymbol{Y} = (\boldsymbol{Y}_1,\boldsymbol{Y}_2)$, $\boldsymbol{Z} = (\boldsymbol{Z}_1,\boldsymbol{Z}_2)$, $\boldsymbol{U} = (\textbf{1}_n, \boldsymbol{U}_1, \boldsymbol{U}_2)$, and let $\boldsymbol{A}$ be a block-diagonal matrix with blocks $\boldsymbol{A}_1$, $\boldsymbol{A}_2$. Then problem~\eqref{eq:ZSOC} becomes \begin{equation}\label{eq:ZSOC-Gaussian} \begin{split} \minimize_{\boldsymbol{Z}_1,\boldsymbol{Z}_2}& \ {\frac{1}{2}\|\boldsymbol{Y} - \boldsymbol{Z}\boldsymbol{A}^\top\|_F^2}, \\ \text{subject to }&\quad \boldsymbol{U}^\top\boldsymbol{Z} = \mathbf{0},\quad \boldsymbol{Z}^\top\boldsymbol{Z} = \boldsymbol{I}. \end{split} \end{equation} It can be shown (Web Appendix B) that the optimal solution is $ \boldsymbol{Z}^* = \boldsymbol{Q}\boldsymbol{R}^\top, $ where $\boldsymbol{Q}$ and $\boldsymbol{R}$ are the matrices of singular vectors from the reduced SVD factorization $(\boldsymbol{I}-\boldsymbol{U}\bU^{+})\boldsymbol{Y}\boldsymbol{A}=\boldsymbol{Q}\boldsymbol{D}\boldsymbol{R}^\top$. \subsection{Update of correlated scores} \label{s:joint_update} Given $\boldsymbol{\mu}_k$, $\boldsymbol{V}_k$, $\boldsymbol{Z}_k$ and $\boldsymbol{A}_k$, let $\boldsymbol{B}_k = \textbf{1}_n\boldsymbol{\mu}_{k}^\top + \boldsymbol{Z}_{k}\boldsymbol{A}_k^\top$.
The update of the correlated score matrices $\boldsymbol{U}_1$ and $\boldsymbol{U}_2$ corresponds to the following problem: \begin{equation}\label{eq:USOClambda} \begin{split} \minimize_{\boldsymbol{U}_1,\boldsymbol{U}_2}& \ \{L(\boldsymbol{B}_1+\boldsymbol{U}_{1}\boldsymbol{V}_1^\top|\boldsymbol{X}_1) + L(\boldsymbol{B}_2+\boldsymbol{U}_{2}\boldsymbol{V}_2^\top|\boldsymbol{X}_2)\} \\ \text{subject to }& \begin{pmatrix}\textbf{1}_n&\boldsymbol{Z}_1&\boldsymbol{Z}_2\end{pmatrix} ^\top\begin{pmatrix}\boldsymbol{U}_1&\boldsymbol{U}_2\end{pmatrix}={\bf{0}}, \quad\boldsymbol{U}_1^\top\boldsymbol{U}_1 = \boldsymbol{U}_2^\top\boldsymbol{U}_2 = \boldsymbol{I}, \quad \boldsymbol{U}_1^{\top}\boldsymbol{U}_2 = \boldsymbol{\Lambda}. \end{split} \end{equation} The key difference between this problem and~\eqref{eq:ZSOC} is that $\boldsymbol{U}_1^{\top}\boldsymbol{U}_2$ is required to be a diagonal matrix with positive entries (corresponding to the correlations), that is, $\boldsymbol{U}_1^\top \boldsymbol{U}_2=\boldsymbol{\Lambda}$. Note, however, that if $\boldsymbol{U}_1^\top \boldsymbol{U}_2$ is non-diagonal but full rank, one can apply rotations $\mathbf{\Gamma}_1$ to $\boldsymbol{U}_1$ and $\mathbf{\Gamma}_2$ to $\boldsymbol{U}_2$ so that $\mathbf{\Gamma}_1^{\top}\boldsymbol{U}_1^\top \boldsymbol{U}_2\mathbf{\Gamma}_2$ is diagonal. For rotations it holds that $\boldsymbol{U}_1\boldsymbol{V}_1^{\top} = \boldsymbol{U}_1\mathbf{\Gamma}_1\mathbf{\Gamma}_1^{\top}\boldsymbol{V}_1^{\top}$, thus the corresponding rotation of the loadings $\boldsymbol{V}_1$ keeps the objective value of $L(\boldsymbol{B}_1+\boldsymbol{U}_{1}\boldsymbol{V}_1^\top|\boldsymbol{X}_1)$ unchanged. Therefore, we drop the diagonal constraint from~\eqref{eq:USOClambda} and perform an extra rotation of the scores $\boldsymbol{U}_k$ and loadings $\boldsymbol{V}_k$ in Section~\ref{s:normalize}.
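The rotation argument can be verified directly: for orthonormal $\widetilde\boldsymbol{U}_1$, $\widetilde\boldsymbol{U}_2$ with a full-rank cross-product, the SVD-based rotations make $\boldsymbol{U}_1^\top\boldsymbol{U}_2$ diagonal while leaving $\boldsymbol{U}_1\boldsymbol{V}_1^\top$ intact (a small sketch with arbitrary dimensions):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, r0 = 40, 10, 3

# Arbitrary score matrices with orthonormal columns and a generic cross-product.
U1t = np.linalg.qr(rng.standard_normal((n, r0)))[0]
U2t = np.linalg.qr(rng.standard_normal((n, r0)))[0]
V1t = rng.standard_normal((p, r0))

# SVD of the cross-product gives the rotations Gamma1, Gamma2.
G1, lam, G2T = np.linalg.svd(U1t.T @ U2t)
U1, U2, V1 = U1t @ G1, U2t @ G2T.T, V1t @ G1

cross = U1.T @ U2
print(np.round(cross, 6))                    # diagonal with entries lam
print(np.allclose(U1 @ V1.T, U1t @ V1t.T))   # product unchanged: True
```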
Rewriting~\eqref{eq:USOClambda} without the diagonal constraint separates the problem across $k = 1,2$, leading to two separate optimization problems of the same form: \begin{equation}\label{eq:USOC} \begin{split} \minimize_{\boldsymbol{U}_k}&\{ L(\boldsymbol{B}_k+\boldsymbol{U}_{k}\boldsymbol{V}_k^\top|\boldsymbol{X}_k)\} \\ \text{subject to }& \begin{pmatrix}\textbf{1}_n&\boldsymbol{Z}_1&\boldsymbol{Z}_2\end{pmatrix} ^\top\boldsymbol{U}_k={\bf{0}}, \quad\boldsymbol{U}_k^\top\boldsymbol{U}_k = \boldsymbol{I}. \end{split} \end{equation} As in Section~\ref{s:ind_update}, we adapt the SOC algorithm to solve~\eqref{eq:USOC}, with the Gaussian case again having a closed-form solution; see Web Appendix B. \subsection{Rotation of correlated scores and loadings} \label{s:normalize} Let $\widetilde\boldsymbol{U}_k$, $k=1,2$, be the score matrices obtained from solving~\eqref{eq:USOC}, which may not satisfy the regularity condition $\widetilde \boldsymbol{U}_1^\top \widetilde \boldsymbol{U}_2=\boldsymbol{\Lambda}$. Let $\widetilde \boldsymbol{V}_k$ be the corresponding loading matrices. Below we show how to construct rotations $\mathbf{\Gamma}_k$ ($\mathbf{\Gamma}_k\mathbf{\Gamma}_k^{\top} = \mathbf{\Gamma}_k^{\top}\mathbf{\Gamma}_k = \boldsymbol{I}$) so that setting $\boldsymbol{U}_k = \widetilde \boldsymbol{U}_k\mathbf{\Gamma}_k$ leads to $\boldsymbol{U}_1^{\top}\boldsymbol{U}_2 = \boldsymbol{\Lambda}$, while the matrix $\widetilde \boldsymbol{U}_k\widetilde \boldsymbol{V}^\top_k = \boldsymbol{U}_k\boldsymbol{V}^\top_k$ remains unchanged with $\boldsymbol{V}_k = \widetilde\boldsymbol{V}_k\mathbf{\Gamma}_k$. Let $\mathbf{\Gamma}_1$ and $\mathbf{\Gamma}_2$ be the matrices of left and right singular vectors, respectively, from the singular value decomposition $\widetilde \boldsymbol{U}^\top_1 \widetilde \boldsymbol{U}_2 = \mathbf{\Gamma}_1 \boldsymbol{\Lambda}\mathbf{\Gamma}_2^\top,$ where $\boldsymbol{\Lambda}$ is a diagonal matrix of nonnegative singular values.
Let $\boldsymbol{U}_k = \widetilde \boldsymbol{U}_k\mathbf{\Gamma}_k$ and $\boldsymbol{V}_k = \widetilde\boldsymbol{V}_k\mathbf{\Gamma}_k$. Then by construction \begin{align*} &\boldsymbol{U}^\top_1 \boldsymbol{U}_2 = (\widetilde \boldsymbol{U}_1\mathbf{\Gamma}_1)^\top(\widetilde \boldsymbol{U}_2\mathbf{\Gamma}_2) = \boldsymbol{\Lambda},\quad \boldsymbol{U}^\top_k \boldsymbol{U}_k = (\widetilde \boldsymbol{U}_k\mathbf{\Gamma}_k)^\top(\widetilde \boldsymbol{U}_k\mathbf{\Gamma}_k) = \boldsymbol{I},\\ &\boldsymbol{U}_k \boldsymbol{V}^\top_k = (\widetilde \boldsymbol{U}_k\mathbf{\Gamma}_k)(\widetilde \boldsymbol{V}_k\mathbf{\Gamma}_k)^\top = \widetilde \boldsymbol{U}_k\widetilde \boldsymbol{V}^\top_k. \end{align*} Thus, the rotated score matrices $\boldsymbol{U}_k$ and loading matrices $\boldsymbol{V}_k$ satisfy all the regularity conditions, and the likelihood is unchanged. \subsection{Initialization}\label{sec:init} The proposed ECCA Algorithm~\ref{a:full} requires the initial score matrices $\boldsymbol{U}_k^{(0)},\boldsymbol{Z}_k^{(0)}$. Our default initialization is based on the saturated natural parameters $\widehat{\boldsymbol{\Theta}}_k$ obtained from $\boldsymbol{X}_k$ as the maximum likelihood estimators without any constraints. In the Gaussian case, $\widehat{\boldsymbol{\Theta}}_k=\boldsymbol{X}_k$. In the Binomial proportion case, if there are any zeros or ones in $\boldsymbol{X}_k$, we adopt the adjustments from Chapter 10 of \citet{ott2015introduction}: zeros are replaced by $0.375/(m + 0.75)$, whereas ones are replaced by $(m + 0.375)/(m + 0.75)$, where $m$ is the number of trials. We then estimate the natural parameters from the adjusted data as $\widehat \theta_{kij} =m \log\{x_{kij}/(1-x_{kij})\}$.
We let $\widetilde{\boldsymbol{\Theta}}_k$ be the column-centered $\widehat{\boldsymbol{\Theta}}_k$, and following Section~\ref{sec:normalCCA} initialize $\boldsymbol{U}_k^{(0)}$ as the first $r_0$ canonical variables of $\mathcal{C}(\widetilde \boldsymbol{\Theta}_1)$ and $\mathcal{C}(\widetilde \boldsymbol{\Theta}_2)$. We initialize $\boldsymbol{Z}_k^{(0)}$ as the remaining $r_k - r_0$ left singular vectors of $(\boldsymbol{I} - \boldsymbol{U}\bU^+)\widetilde{\boldsymbol{\Theta}}_k$. \subsection{Rank estimation}\label{sec:rank} In the Gaussian case, many rank estimation methods have been proposed to determine the total rank $r_k$ for each view. Some examples are the edge distribution method \citep{onatski2010determining}, the profile likelihood method \citep{ProfileLik2006Zhu} and the thresholding method \citep{gavish2014optimal}. However, none of these methods directly extends to non-Gaussian data. Here, we determine the total ranks $r_k$ by adopting the 10-fold cross-validation method designed for exponential families \citep{li2018general}. Given the observed matrix $\boldsymbol{X}_k$, a random part of its elements is set as missing, and the full underlying natural parameter matrix $\boldsymbol{\Theta}_k$ is estimated with a given rank using exponential PCA \citep{collins2001generalization}. The best rank is chosen based on the minimal negative log-likelihood associated with the held-out elements of $\boldsymbol{X}_k$ and the corresponding elements of the estimated $\boldsymbol{\Theta}_k$. We refer to \citet{li2018general} for additional details. To estimate the joint rank $r_0$, \citet{li2018general} apply a similar approach to estimate the rank of the concatenated $(\boldsymbol{\Theta}_1, \boldsymbol{\Theta}_2)$. However, this approach may not be valid for the proposed ECCA model~\eqref{eq:expDecom} since we allow the column spaces of $\boldsymbol{U}_1$ and $\boldsymbol{U}_2$ to be different. Instead, we adopt a principal angles approach as described in \citet{yuan2022double}.
Specifically, we first construct the proxy low-rank natural parameter matrices $\widehat \boldsymbol{\Theta}_k$ by applying low-rank exponential PCA \citep{landgraf2020generalized} separately to each view. We then calculate the principal angles between $\widehat \boldsymbol{\Theta}_1$ and $\widehat \boldsymbol{\Theta}_2$, and cluster these angles into two groups using profile likelihood. The number of elements in the cluster with the smallest angles is used as an estimate of the joint rank. We refer to \citet{yuan2022double} for additional details. \section{Simulation studies}\label{sec:lrccaSimu} We consider three settings for data generation, and use 100 replications for each. \begin{description} \item[\textbf{Setting 1}] Both $\boldsymbol{X}_1$ and $\boldsymbol{X}_2$ follow a Gaussian distribution. \item[\textbf{Setting 2}] $\boldsymbol{X}_1$ follows a Gaussian distribution and $\boldsymbol{X}_2$ follows a Binomial proportion distribution. \item[\textbf{Setting 3}] Both $\boldsymbol{X}_1$ and $\boldsymbol{X}_2$ follow a Binomial proportion distribution. \end{description} For all settings, we set the sample size $n = 50$, and dimensions $p_1 = 30$, $p_2 = 20$. We generate data according to model~\eqref{eq:expDecom} with $r_0=3$ nonzero canonical correlations with corresponding values $\boldsymbol{\Lambda} = \textup{diag}(1, 0.9, 0.7)$. The total ranks of the centered natural parameter matrices are set to $r_1 = 7$, $r_2 = 6$. For the Binomial proportion distribution we use $m=100$ trials. Additional data generation details are in Web Appendix C. We compare the performance of the following methods: (i) ECCA, the proposed approach; (ii) DCCA adapted to the exponential family setting, where we apply DCCA \citep{shu2020d} to the saturated matrices of natural parameters; (iii) EPCA-DCCA, where we first estimate low-rank natural parameter matrices using exponential PCA \citep{landgraf2020generalized}, and then apply DCCA; (iv) GAS, the Generalized Association Study framework \citep{li2018general}.
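The principal-angle step used above to choose the joint rank can be sketched as follows (the data are synthetic, and the two-cluster profile-likelihood split is only indicated in a comment):

```python
import numpy as np

def principal_angles_deg(A, B):
    """Principal angles (in degrees) between the column spaces of A and B."""
    Qa, _ = np.linalg.qr(A)
    Qb, _ = np.linalg.qr(B)
    # Singular values of Qa' Qb are cosines of the principal angles.
    s = np.clip(np.linalg.svd(Qa.T @ Qb, compute_uv=False), -1.0, 1.0)
    return np.degrees(np.arccos(s))

rng = np.random.default_rng(1)
shared = rng.standard_normal((50, 1))  # one genuinely shared direction
A = np.column_stack([shared, rng.standard_normal((50, 2))])
B = np.column_stack([shared, rng.standard_normal((50, 2))])

angles = principal_angles_deg(A, B)
# The smallest angle is near 0 (the shared direction); splitting the angles
# into two clusters (e.g., by profile likelihood) would give joint rank 1 here.
```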
Implementation details for each method are described in Web Appendix C. The ranks for all methods are set at the true values. For GAS, we consider two cases: joint rank 3 (GAS-rank3), which is a misspecified model as it enforces the top three canonical correlations to be one, and joint rank 1 (GAS-rank1), which puts the 2nd and 3rd canonical pairs as part of the individual structures. To assess the performance, we consider the overall relative error $$ \text{relative error}=\frac{\|\widehat\boldsymbol{\Theta}_k - \boldsymbol{\Theta}_k\|_F^2}{\|\boldsymbol{\Theta}_k\|_F^2},\ k=1,2, $$ where $\widehat\boldsymbol{\Theta}_k$ are the estimated natural parameter matrices and $\boldsymbol{\Theta}_k$ are the true natural parameter matrices. We use this metric as it is invariant to the choice of decomposition. To assess the joint signal estimation performance, we also evaluate the chordal distance \citep{ye2016schubert} $$ \frac{1}{\sqrt{2}}\left\|\boldsymbol{J}_k\boldsymbol{J}_k^{+} - \widehat{\boldsymbol{J}}_k\widehat{\boldsymbol{J}}_k^{+}\right\|_F,\ k=1,2. $$ Figure~\ref{fig:GG_signal} shows relative errors across all settings and Figure~\ref{fig:GG_subspace} shows the corresponding chordal distances associated with the joint subspaces. When both distributions are Gaussian, all methods perform similarly except GAS-rank3, which has the worst performance. This is expected, since GAS-rank1 is an accurate model in this setting. DCCA has a slight advantage over the other methods in relative error, but gives similar chordal distances. When one or both distributions are Binomial, DCCA performance deteriorates, with EPCA-DCCA outperforming DCCA. GAS-rank1, as expected, outperforms GAS-rank3, but is surprisingly significantly worse on the joint signal compared to DCCA and exhibits high variance. One possible explanation is that GAS is implemented for the Binomial case with $m=1$, and thus, unlike ECCA, does not use $m$ to reweight the likelihood in the objective.
Another possible explanation is that GAS uses a one-step approximation algorithm for model fitting, and this approximation may lead to suboptimal solutions in some cases. Overall, we find that the proposed ECCA has the best performance, as it is similar to DCCA in the Gaussian case, and outperforms the other methods when at least one of the distributions is Binomial. \begin{figure}[!t] \includegraphics[width=1\textwidth]{Whole_Estimation_Error.pdf} \caption{Comparison of relative error for all settings over 100 replications. } \label{fig:GG_signal} \end{figure} \begin{figure}[!t] \includegraphics[width=1\textwidth]{Whole_Joint_Error.pdf} \caption{Comparison of chordal distance for all settings over 100 replications. } \label{fig:GG_subspace} \end{figure} \section{Applications}\label{sec:data} \subsection{Nutrigenomic study} The nutrimouse dataset \citep{martin2007novel} is available in the R package \textsf{mixOmics} \citep{rohart2017mixomics}. There are $n=40$ mice, with the first view containing $p_1=120$ gene expression measurements from liver cells, and the second view containing $p_2=21$ concentrations (in percentages) of hepatic fatty acids (lipids). Mice are separated into two genotypes, wild-type (wt) and PPAR$\alpha$ -/- (ppar) mutant, and are administered five different diets: a reference diet of corn and colza oils (ref), a saturated fatty acid diet of hydrogenated coconut oil (coc), an Omega6-rich diet of sunflower oil (sun), an Omega3-rich diet of linseed oil (lin), and a diet with enriched fish oils (fish). Our goal is to extract correlated and orthogonal signals across both views, and investigate how these signals relate to genotype and diet effects. We use a Gaussian distribution to model gene expressions in the first view, and a Binomial distribution to model concentrations (converting percentages to proportions).
We use $m = 100$ trials to reflect that the original data is measured in percentages, which effectively adjusts the relative weights between the Gaussian and Binomial likelihoods in~\eqref{eq:finalform} (Section~\ref{sec:parameter}). There are 17.5\% zero proportions, which we replace with $0.375/(m + 0.75)$ as in Section~\ref{sec:init}. We use cross-validation to estimate the total ranks as $r_1 = 3$ and $r_2 = 4$ (Section~\ref{sec:rank}). To determine the joint rank $r_0$, we calculate the principal angles between the low-rank estimated natural parameters, leading to angles of 35.0, 57.2, and 74.1 degrees. Given that the angle of 74.1 degrees is close to 90, we set the joint rank to $r_0 = 2$ and fit the ECCA model. The left panel of Figure~\ref{fig:mouse_PCA_joint} displays the joint scores (two left singular vectors of the concatenated $[\boldsymbol{U}_1\ \boldsymbol{U}_2]$) coded by diet and genotype. There is a clear genotype separation based on the first joint component, confirming that the genotype affects both gene expression and lipid concentrations. The second joint component captures the diet effect, with the contrast between the coc and fish diets being most visible. The other diets, however, are not well-separated. The right panel of Figure~\ref{fig:mouse_PCA_joint} displays the individual scores for lipids. In contrast to the joint scores, the individual scores show a clear diet effect. In summary, the ECCA decomposition helps to separate the genetic effects on lipid concentrations from diet effects. \begin{figure} \centering \includegraphics[width=0.49\textwidth]{ECCA_Joint_Separation_Mouse_not_Scaling_trials100.pdf} \includegraphics[width=0.49\textwidth]{ECCA_Ind_Separation_Mouse_not_Scaling_trials100.pdf} \caption{ECCA scores from nutrimouse data colored by genotype and diet. Left: Joint scores between gene expressions and lipid concentrations. Right: Individual scores for lipid concentrations.
[This figure appears in color in the electronic version of this article, and any mention of color refers to that version]} \label{fig:mouse_PCA_joint} \end{figure} To further illustrate the advantages of ECCA on these data, in Web Appendix D we compare the results of ECCA with GAS \citep{li2018general}. We find that ECCA scores lead to better separation of genotype and diet effects, and that the orthogonality of individual scores in ECCA is advantageous for interpretation in this study, as the observed diet effects in individual components can be fully attributed to the lipids view. \subsection{Tumor heterogeneity in prostate cancer} Prostate cancer (PCa) is the second leading cause of cancer-related death in males in the U.S., with approximately 268,490 new cases and 34,500 deaths expected in 2022 \citep{jemal2021prostate}. The immune response in PCa plays a critical role in directing the evolution of tumor cells and contributes to the extensive inter-tumor heterogeneity among PCa patients \citep{binnewies2018understanding}. Current clinical indexes such as the cancer stage, PSA (prostate specific antigen) level, and Gleason scores lack the ability to address the mechanism of heterogeneity and thus are insufficient for definitive identification and treatment of PCa. To address this question by evaluating the immune cell subtype profiles, we apply our ECCA framework to The Cancer Genome Atlas (TCGA) \citep{abeshouse2015molecular} PCa dataset. We use two complementary deconvolution methods to capture distinct aspects of PCa cellular compositions. For the first view, the cellular composition is determined using the DeMixT method of \citet{wang2018transcriptome}, which extracts transcript proportions corresponding to three cell types: immune, normal (stroma) and tumor. As the proportions from DeMixT sum to one, we only focus on the normal and immune proportions ($p_1=2$).
For the second view, we consider the Tumor Immune Estimation Resource (TIMER) of \citet{li2017timer}, leading to cell count proportion data corresponding to $p_2 = 6$ cell types: B cells, CD4-T cells, CD8-T cells, Dendritic cells, Macrophage cells and Neutrophil cells. Unlike DeMixT, TIMER does not produce compositional data, thus the six proportions do not sum to one. Both DeMixT and TIMER are applied to the RNA sequencing data from the same $n = 293$ patients, but dissect the mixed signals in different spaces (transcripts versus cell counts) as well as in different cell types (all immune cells combined versus immune cell subtypes). Our goal is to separate the joint and individual parts of the signal between DeMixT and TIMER, and investigate how these signals relate to the clinical outcome of prostate cancer patients as measured by progression-free survival. In summary, we obtain $\boldsymbol{X}_1\in \mathbb{R}^{293\times 2}$ and $\boldsymbol{X}_2 \in \mathbb{R}^{293 \times 6}$ corresponding to DeMixT and TIMER, respectively. We treat both datasets as proportions arising from a binomial distribution with the same number of trials $m$. From Section~\ref{sec:parameter}, the value of $m$ does not affect the resulting solution, and we set it to one for simplicity. Due to the small number of features in both datasets, we omit the intercept terms $\boldsymbol{\mu}_k$ in the ECCA model, fixing $\boldsymbol{\mu}_k=0$ throughout. As DeMixT only has two features, we set its total rank as $r_1 = 2$. To determine the total rank for TIMER, we use cross-validation (Section~\ref{sec:rank}), leading to $r_2 = 3$. To determine the joint rank $r_0$, we calculate the principal angles between the low-rank estimated natural parameters of DeMixT and TIMER obtained by exponential PCA. There are two non-zero principal angles of 27.0 and 72.3 degrees. Given the large separation between the two angles and 27.0 being close to zero, we set the joint rank to $r_0 = 1$.
For simplified follow-up analysis and interpretation, we combine the joint $\boldsymbol{U}_1$ and $\boldsymbol{U}_2$ into one common score $\boldsymbol{U}_{joint}$ based on the leading left singular vector of $[\boldsymbol{U}_1 \ \boldsymbol{U}_2]$. We also rotate the individual scores $\boldsymbol{Z}_2$ for TIMER so that the corresponding loading vectors $\boldsymbol{A}_2$ are orthogonal, in light of the identifiability conditions in Theorem~\ref{thm:modelident}. \begin{figure}[!t] \begin{subfigure}{.5\textwidth} \includegraphics[width =\textwidth]{Fig4A_ECCA_scores_and_Gleason.pdf} \caption{}\label{fig:gleason} \end{subfigure} \begin{subfigure}[c]{.3\textwidth} \includegraphics[width =\textwidth]{Fig4B_HR_valid_plot.pdf} \caption{}\label{fig:HR} \end{subfigure}\\ \centering \begin{subfigure}[c]{.6\textwidth} \includegraphics[width =\textwidth]{Fig4C_ECCA_loading_vectors.pdf} \caption{}\label{fig:loadings} \end{subfigure} \caption{(A)~Stratification of joint and individual components by Gleason score categories; (B)~Hazard ratios with 95\% confidence intervals for PFI; (C)~Loading vectors corresponding to joint and individual scores for DeMixT and TIMER.} \label{fig:loading} \end{figure} In order to evaluate the potential utility of $\boldsymbol{U}_{joint}$ and the individual scores for PCa, we compare these scores with the clinically utilized prognostic Gleason score, and assess their association with progression-free interval (PFI), by considering patients with Gleason scores of 7 and 8+ ($n = 239$). More details are in Web Appendix D.2. We find a significantly lower $\boldsymbol{U}_{joint}$ score together with a significantly higher individual score of DeMixT in the Gleason score = 8+ group (Figure~\ref{fig:gleason}, both p-values $<$ 0.001), representing a patient subgroup with less favorable clinical outcomes. However, neither of the individual scores of TIMER is associated with Gleason group (Figure~\ref{fig:gleason}).
Furthermore, we find that both a high $\boldsymbol{U}_{joint}$ score and a low DeMixT individual score are independently associated with improved PFI in patients with PCa ($\boldsymbol{U}_{joint}$: hazard ratio (HR) = 0.81, 95\% confidence interval (CI): 0.65, 0.99, p-value = 0.05; DeMixT individual score: HR = 1.76, 95\% CI: 1.05, 2.95, p-value = 0.03; Figure~\ref{fig:HR}, Table~\ref{t:Coxph_NoGleason}). TIMER individual scores are not associated with PFI. The general trends in the observed associations remain after adjusting for the Gleason score status in Cox regression (Table~\ref{t:Coxph}), although no longer statistically significant, supporting the notion that measuring immune cell activities could improve the current clinical practice for identifying and treating PCa. Furthermore, these results, together with recent findings on tumor total mRNA expression levels as a potential biomarker \citep{cao2022estimation}, lead to our next hypothesis that immune transcript proportions, as generated by DeMixT, contain complementary signals from both the immune cell counts and the immune-specific transcriptome variations. Figure~\ref{fig:loadings} of the ECCA loading values reveals that in DeMixT the joint score with TIMER captures both stromal and immune proportions with a higher weight on the stromal proportion, whereas in TIMER it captures all proportions except dendritic cells, with the highest weight on the macrophage, which is the immune cell type generating the highest amount of transcripts \citep{schelker2017estimation}. The individual DeMixT score represents an orthogonal and unexplained part of the immune transcript proportion (p-value = 0.0003). In contrast, neither the first individual score nor the second individual score for TIMER is significant. In summary, application of our novel ECCA analysis framework to multiple immune deconvolution methods has the potential to provide novel biological insights into varying immune cell activities in PCa.
\begin{table}[!t] \caption{P-values from the Cox Proportional-Hazards model using joint and individual scores between DeMixT and TIMER as predictors} \begin{center} \begin{tabular}{llcc} Notation & Interpretation & Hazard ratio & P-value \\\hline Age & Tumor diagnosed age & 1.22 & 0.172 \\ $\boldsymbol{U}_{joint}$ & Joint between DeMixT and TIMER & 0.81 & 0.049 \\ $\boldsymbol{Z}_1$ & Individual DeMixT & 1.76 & 0.032 \\ $\boldsymbol{Z}_{21}$ &1st individual TIMER & 0.64 & 0.112 \\ \end{tabular} \label{t:Coxph_NoGleason} \end{center} \end{table} \begin{table}[!t] \caption{P-values from the Cox Proportional-Hazards model using joint and individual scores between DeMixT and TIMER as predictors with the inclusion of the Gleason score} \begin{center} \begin{tabular}{llcc} Predictor & Interpretation & Hazard ratio & P-value \\\hline Age& Tumor diagnosed age& 1.20 & 0.214 \\ Gleason score & Gleason score & 1.96 & 0.026 \\ $\boldsymbol{U}_{joint}$& Joint between DeMixT and TIMER& 0.86 & 0.164 \\ $\boldsymbol{Z}_1$& Individual DeMixT & 1.56 & 0.103 \\ $\boldsymbol{Z}_{21}$&1st individual TIMER & 0.67 & 0.163 \\ \end{tabular} \label{t:Coxph} \end{center} \end{table} \section{Discussion}\label{sec:eccaDis} We present the ECCA model for the association analysis of datasets with measurements coming from exponential family distributions. The R code implementing the method can be found at \url{https://github.com/IrinaStatsLab/ECCA}. A unique characteristic of ECCA is the orthogonality of the individual score matrices, which enhances the interpretation of individual signals, but leads to non-trivial optimization challenges. Numerical studies illustrate that ECCA outperforms existing methods in simulations. When applied to the nutrimouse data, ECCA effectively separates the effect of genotype from the effect of diet based on joint and individual scores between gene expression and lipid concentrations.
When applied to the tumor heterogeneity study, ECCA effectively extracts joint and individual signals that are biologically meaningful between two different immune deconvolution methods. These scores are then shown to provide additional insights into the heterogeneity of immune cell subtype profiles, and their contribution to clinical prognosis in patients with localized but high-risk prostate cancer. The method has several limitations that require further research. First, while the model~\eqref{eq:expDecom} and optimization~\eqref{eq:finalform} are formulated for the general case of the exponential family, our implementation and numerical results are limited to the Gaussian and Binomial proportion cases, as those were sufficient for the motivating datasets. It would be of interest to expand the results to other families, e.g., Poisson, Exponential. Secondly, the ECCA algorithm is computationally demanding due to the use of iterative SOC updates. One possible remedy is to run intermediate SOC updates only for a few iterations without full convergence. This would reduce the overall cost of Algorithm~\ref{a:full}; however, too few iterations may lead to divergence. Further investigation is needed to determine the optimal tradeoff. Third, ECCA does not perform sparse regularization, and thus may suffer in high-dimensional regimes. One possible approach is to add $l_1$ regularization on the loading matrices as in sparse CCA \citep{witten2009penalized, yoon2020sparse}. To be specific, one can modify objective function~\eqref{eq:finalform} to be: \begin{equation*} \min_{\boldsymbol{\Theta}_1,\boldsymbol{\Theta}_2}\{L(\boldsymbol{\Theta}_1|\boldsymbol{X}_1) + L(\boldsymbol{\Theta}_2|\boldsymbol{X}_2) + \beta_1\|\boldsymbol{V}_1\|_1 + \beta_2\|\boldsymbol{V}_2\|_1\}, \end{equation*} where $\|\boldsymbol{Y}\|_1 = \sum_{j=1}^{p}\sum_{k=1}^{r_0}|y_{jk}|$ is a sparsity-inducing penalty and $\beta_1, \beta_2 \geq 0$ control the sparsity levels.
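As a sketch of how such a penalty could be handled algorithmically, the proximal operator of the $l_1$ term is elementwise soft-thresholding (a standard choice for $l_1$-penalized problems, not part of the current ECCA implementation; the example matrix is illustrative):

```python
import numpy as np

def soft_threshold(V, beta):
    """Proximal operator of beta * ||V||_1: elementwise soft-thresholding."""
    return np.sign(V) * np.maximum(np.abs(V) - beta, 0.0)

V = np.array([[1.5, -0.2],
              [-3.0, 0.05]])
V_sparse = soft_threshold(V, 0.5)
# Entries with |v| <= 0.5 are zeroed; larger entries shrink toward zero.
```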
However, the new objective function is no longer differentiable, requiring more complex optimization algorithms in addition to selection of the sparsity parameters. Finally, in standard CCA it is typical to maximize the correlation as the objective function, that is, to maximize the magnitude of the diagonal elements of $\boldsymbol{U}_1^\top\boldsymbol{U}_2$. The proposed ECCA can incorporate this maximization by adjusting the objective function as follows: \begin{equation*} \min_{\boldsymbol{\Theta}_1,\boldsymbol{\Theta}_2}\{L(\boldsymbol{\Theta}_1|\boldsymbol{X}_1) + L(\boldsymbol{\Theta}_2|\boldsymbol{X}_2) +\beta\|\boldsymbol{U}_1-\boldsymbol{U}_2\|_F^2\}, \end{equation*} where $\beta \geq 0$ is a hyper-parameter. Due to the orthogonality of $\boldsymbol{U}_k$, adding the $\|\boldsymbol{U}_1-\boldsymbol{U}_2\|_F^2$ term to the objective is equivalent to adding $-\Tr(\boldsymbol{U}_1^\top\boldsymbol{U}_2)$, with $\beta$ controlling the relative importance of correlation maximization compared to the likelihood for each individual view. Algorithm~\ref{a:full} can be used for this problem with some adaptation of the score updates (Section~\ref{s:joint_update}); however, it is unclear how to choose the optimal value of $\beta$. It would be of interest to investigate these extensions in future work. \section*{Acknowledgements} This work was supported by NSF DMS-1712943 and DMS CAREER-2044823.
Situated on the owner's smallholding, Bwthyn Bwlch was formerly the dairy to the farm. This detached holiday property has been beautifully converted, with a vaulted living room with leather sofas and a cosy woodburner, great for relaxing in with a glass or two of wine. A wonderful spiral staircase leads up to the first floor bedroom, and there's a pleasant patio outside, great for catching the sun. The county town of Denbigh has the romantic ruins of Denbigh Castle, together with a selection of independent and national shops, as well as leisure facilities. Discover Llyn Brenig Reservoir and visitor centre, 8 miles away, which offers opportunities for fly fishing (brown trout), sailing, cycling (including bike hire), walking (Mynydd Hiraethog Site of Special Scientific Interest) and zorbing. The Clwydian Hills, which offer plenty of walking opportunities, are nearby, and the beaches of the North Wales Coast are within easy reach, as is the Snowdonia National Park with further walking routes. For golfers there is a golf club on the outskirts of Denbigh, 5 miles. Shop, pub and restaurant 3½ miles. Self-catering two-bed apartment.
2 Comments April 7, 2015 at 12:00 pm Social Justice and Magic Wands Converge at TIFF Kids International Film Festival Entering its 18th year, the TIFF kids festival proves children's cinema is more than just fun and games. By Paulomi Patel A clip from the Japanese movie, When Marnie Was There. Prepare for more than the usual number of little ones in your area, King Westers: the 2015 TIFF Kids International Film Festival rolls out this week. Beginning today and running until April 19, the festival—which is in its 18th year—celebrates children with more than 126 films from 36 different countries. The roster is made up of an assortment of documentaries, shorts, animations, and films that run the gamut from hilarious to heartwarming to, often, quite serious. Kids interested in more than just watching movies can attend conferences, meet industry veterans, get creative in filmmaking workshops, and judge movies in young people's jury. We spoke with the woman at the head of it all, Elizabeth Muskala, director of TIFF Kids. She tells us more about the movies, the selection process, and how the festival has something for the entire family. Torontoist: Tell us about the lineup. Muskala: We've got a number of different films for children from the age of three all the way up to 13. There's a real breadth to programming this year and there's something for everyone. A still from Shaun The Sheep, the opening-night movie at TIFF Kids. Social equality, injustice, gender, and bullying—what's with the serious subject matter? My role is to not only bring the best of Canadian and international cinema, but also have an equal and balanced program. We want to engage and inspire young people, but, most importantly, we want to entertain. The festival has a robust school program on weekdays when subject matter experts and filmmakers speak on thought-provoking topics like bullying or social injustice. 
We provide educators with resources that tie directly to the Ontario curriculum—teachers can use film in the classroom to address difficult subjects. Then there's a public program for families during weekends where the entire family can come down, watch a movie and have a discussion about some of the more difficult films. Or, they can simply celebrate the wonderful films and be entertained. There are many opportunities for both those experiences within the context of the festival. How far in advance do you start preparing and what's the selection process? We track and solicit films all through the year and we visit a number of festivals on the festival circuit. Following this year's festival, I will be going to the Annecy International Animated Film Festival in June and my colleague will be going to the Berlin International Film Festival. We also do a lot of our own research by building relationships with filmmakers, distributors, and sales agents, and keeping track of films in the pipeline. For example, we've tracked Shaun The Sheep for a couple of years, ever since it was announced. Now we're showcasing its Canadian premiere as the opening-night film in this year's festival. Of the 36 countries this year, is there one from where you've never shown a movie before? For the first time ever, we're featuring a film from Paraguay. It's called Landfill Harmonic—a strong documentary about an orchestra developed by a youth group from a community built on a landfill site. It's really inspirational and we're excited to screen it in Toronto. A still from the movie Broken Wand. How has the festival evolved? Two thousand and three hundred people attended when the festival first launched 18 years ago. Now, we have well over 30,000 attendees. Toronto is such a festival-savvy city—Torontonians love and appreciate cinema and there's a festival for everyone. What makes TIFF Kids unique is that it's the only festival in Toronto and the GTA for children. 
It gives them a platform to engage with movies, meet filmmakers from all over the world, and ask important and relevant questions. Which films are you the most excited about—do you have a favourite? We've got a strong lineup and it's really difficult for me to pick a favourite. There's something for everyone. There are many films that I love and many films that get me emotional even on the second or third viewing. And I'm most delighted to show them all to the Toronto audience. What else is going on, in addition to the film festival? There's digiPlaySpace, an interactive new media exhibition featuring 24 installations. This year we've commissioned two original pieces by leading new media artists. We've seen how comfortable youngsters are engaging with technology and over the years (digiPlaySpace is in its fourth year); educators have shared how amazing these experiences are for children. The idea is to allow families coming down to see a movie to spend the rest of the day at the TIFF Bell Lightbox viewing this exhibition and participating in various free onsite activities. digiPlaySpace launched just before the March break and it ends along with TIFF Kids. Tickets for TIFF Kids films cost $13 for adults, $10.50 for students and seniors, and $9 for children, can be purchased from the TIFF Bell Lightbox or online at TIFF.net. Where Stephen King's IT was filmed in and around Toronto Toronto theatres relax rules and increase acceptance Reimagined spaces benefit Toronto artists, communities, and developers How to fix the Lightbox Filed under TIFF Kids, Broken Wand, culture, Shaun the Sheep, TIFF Kids digiPlaySpace, When Marnie Was There
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
5,703
Q: How can I convert the response coming from the api to image I have response coming from the api like attached image. I want to convert this response to the image. I have tried to convert this into base64 string but i am not able to do it as well. Image of the response api is like this: axios.get(`https://api.bamboohr.com/api/gateway.php/tadigital/v1/employees/2223/photo/large`, { auth: { username: 'xxxxxxxxxxxxxxxxx', password: 'xxxxxxxxxxxxxxxxx' } } ).then(resp => { console.log(resp.data) }); I want the solution without using responseType in the get request because api is not accepting the response type. I am not getting a way to convert this to image A: 1 Encoding should be base64 you can do that on either on the server or the client. 2 The Raw data should be set as the following - <img ng-src="data:image/*;base64,{{Raw Binary Data}}"/> A: It seems like the server is sending raw data You can convert it to blob by using fetch like below: const res = await fetch(url); const data = await res.blob(); const image = URL.createObjectURL(data); Set Image as src of an image tag A: I want the solution without using responseType in the get request because api is not accepting the response type. responseType has nothing to do with server. It tells Axios how to handle the expected response data. In this scenario you could pass responseType: "blob", which can be passed to URL.createObjectURL() to create an URL that can be used as <img> src. 
const url = 'https://api.bamboohr.com/api/gateway.php/tadigital/v1/employees/2223/photo/large'; axios.get(url, { auth: { username: 'xxxxxxxxxxxxxxxxx', password: 'xxxxxxxxxxxxxxxxx', }, responseType: 'blob', }).then((resp) => { const img = document.createElement('img'); img.src = URL.createObjectURL(resp.data); img.alt = 'employee photo'; document.body.append(img); }); const url = "data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHZpZXdCb3g9Ii0xMS41IC0xMC4yMzE3NCAyMyAyMC40NjM0OCI+CiAgPHRpdGxlPlJlYWN0IExvZ288L3RpdGxlPgogIDxjaXJjbGUgY3g9IjAiIGN5PSIwIiByPSIyLjA1IiBmaWxsPSIjNjFkYWZiIi8+CiAgPGcgc3Ryb2tlPSIjNjFkYWZiIiBzdHJva2Utd2lkdGg9IjEiIGZpbGw9Im5vbmUiPgogICAgPGVsbGlwc2Ugcng9IjExIiByeT0iNC4yIi8+CiAgICA8ZWxsaXBzZSByeD0iMTEiIHJ5PSI0LjIiIHRyYW5zZm9ybT0icm90YXRlKDYwKSIvPgogICAgPGVsbGlwc2Ugcng9IjExIiByeT0iNC4yIiB0cmFuc2Zvcm09InJvdGF0ZSgxMjApIi8+CiAgPC9nPgo8L3N2Zz4K"; axios.get(url, { responseType: "blob", }).then((resp) => { const img = document.createElement("img"); img.src = URL.createObjectURL(resp.data); img.alt = "React icon"; document.body.append(img); }).catch((error) => { console.log(error); }); <script crossorigin src="https://unpkg.com/axios@1/dist/axios.js"></script> Note that you should call URL.revokeObjectURL() if you no longer need the URL.
package eu.freme.bservices.testhelper.api;

import eu.freme.common.conversion.SerializationFormatMapper;
import eu.freme.common.conversion.rdf.RDFConstants;
import eu.freme.common.exception.FileNotFoundException;
import org.apache.commons.io.FileUtils;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.http.HttpHeaders;
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestHeader;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

import java.io.File;
import java.io.IOException;

@RestController
public class MockupEndpoint {

    @Autowired
    SerializationFormatMapper serializationFormatMapper;

    public static final String path = "/mockups/file";

    // use regEx to include the file extension
    @RequestMapping(path + "/{filename:.+}")
    public ResponseEntity<String> sendRDFfileContent(
            @RequestHeader(value = "outformat", required = false) String outformat,
            @RequestHeader(value = "accept", required = false) String accept,
            @PathVariable String filename) throws IOException {

        String fileContent;
        File file;
        try {
            ClassLoader classLoader = getClass().getClassLoader();
            file = new File(classLoader.getResource("mockup-endpoint-data/" + filename.split("\\?")[0]).getFile());
            fileContent = FileUtils.readFileToString(file);
        } catch (Exception ex) {
            throw new FileNotFoundException("could not load file: " + filename);
        }

        HttpHeaders headers = new HttpHeaders();
        String contentType = serializationFormatMapper.get(outformat);
        if (contentType == null && accept != null && !accept.equals("*/*")) {
            contentType = serializationFormatMapper.get(accept.split(";")[0]);
        }
        // default to TURTLE; serializationFormatMapper.get() can return null!
        if (contentType == null) {
            contentType = RDFConstants.TURTLE;
        }
        headers.add("Content-Type", contentType);
        headers.add("content-length", file.length() + "");

        return new ResponseEntity<>(fileContent, headers, HttpStatus.OK);
    }
}
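The header-negotiation branch above is easy to get wrong (the mapper can return null at either step), so here is the same fallback logic isolated into a plain class. The Map below is a hypothetical stand-in for SerializationFormatMapper, whose real contents are configured elsewhere in FREME:

```java
import java.util.HashMap;
import java.util.Map;

// Isolated sketch of the content-type fallback used in sendRDFfileContent().
public class ContentTypeFallback {

    // Hypothetical stand-in for SerializationFormatMapper: maps format names
    // and MIME types to canonical serialization MIME types.
    static final Map<String, String> MAPPER = new HashMap<>();
    static {
        MAPPER.put("turtle", "text/turtle");
        MAPPER.put("text/turtle", "text/turtle");
        MAPPER.put("json-ld", "application/ld+json");
        MAPPER.put("application/ld+json", "application/ld+json");
    }

    // Stand-in for RDFConstants.TURTLE, the endpoint's default.
    static final String DEFAULT_TYPE = "text/turtle";

    // Mirrors the endpoint logic: the outformat header wins; otherwise the
    // Accept header is consulted (ignoring "*/*" and any quality parameters);
    // otherwise fall back to TURTLE, since the mapper may return null.
    static String resolveContentType(String outformat, String accept) {
        String contentType = MAPPER.get(outformat);
        if (contentType == null && accept != null && !accept.equals("*/*")) {
            contentType = MAPPER.get(accept.split(";")[0]);
        }
        return contentType == null ? DEFAULT_TYPE : contentType;
    }

    public static void main(String[] args) {
        System.out.println(resolveContentType(null, "application/ld+json;q=0.9"));
    }
}
```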
## Friday, January 27, 2006

### LUDDITES Revisited

GUEST BLOGGER: Bill Gasarch

This is my last day guest blogging, so I'll end where I began, THREE points on LUDDITES.

I) Janos Simon corrected my history of Luddites, for which I thank him. If you are interested, go to HIS comment on MY post from Monday Jan 23 for a link to a very nice article.

II) My father and father-in-law offer an interesting contrast:

FATHER-IN-LAW (Engineering Major, career mostly in Business, now retired):

LUDDITE: Does not program his VCR. Not sure if he doesn't know how to or just doesn't want to. So he HAS to be home on Sunday to watch Desperate Housewives (a show I found distasteful- my father-in-law is hipper than I am).

NON-LUDDITE: Took a course on C at a local community college when he was 70. Pays all his bills online.

FATHER (English Major, High School English Teacher and Vice Principal, now retired):

LUDDITE: Got a computer recently and still can't get email or pay his bills online.

NON-LUDDITE: Uses his VCR to tape A LOT of shows. He needs it since he watches A LOT: West Wing, My Name is Earl, The Sopranos, Sex and the City when it was on (a show I find distasteful- my dad is hipper than I am), Six Feet Under, Deadwood, all four Law and Orders, all three CSIs, Without a Trace, and other stuff I can't recall. This from the man who, wisely, restricted me to no more than an hour of TV a night when I was a kid.

III) Stuart Kurtz emailed me some more questions for my Luddite quiz. I asked him if I could post them and he suggested asking for other inputs. No one replied, so here are his:

STUART BEGIN:

9) Do you write emails (or blog posts) in
a) variable width fonts with formatting,
b) variable width fonts without formatting,
c) fixed width fonts,
d) What's a blog?,
e) What's email?, or
f) What's writing?

10) Do you indicate emphasis by
a) using italic or slanted font,
b) using a bold faced font,
c) metadiscourse, i.e., "I want to emphasize that...",
d) ALL CAPS, or
e) Shouting and waving your arms.

11)
a) four buttons,
b) three buttons,
c) two buttons,
d) one button,
e) control characters are good enough for RMS, and they're good enough for me, or
f) four feet and a tail.

12) What's your favorite programming language?
a) Ruby or Python,
b) Java,
c) Lisp,
d) C++,
e) Awk,
f) IBM-360 assembly language,
g) C,
h) Lisp, or

[I know Lisp occurs twice, but c and h are still different answers. Note that there's no point asking for Perl -- as Perl programmers can only write, not read.]

STUART END.

bill g.

P.S. I am supposed to say "Now that I've guest blogged for a week I'm even more impressed with Lance getting a topic out every day." But this is NOT TRUE. I was SO IMPRESSED with Lance in the first place that I can't be more impressed.

Comments:

1. Talking of VCRs already makes you a luddite these days. Tivo, DVR, etc.

2. The true non-luddite uses bittorrent to get all of their TV shows. How else to see shows on the BBC or other channels you don't even have? (And in my case, that would be all of them... I have no cable TV or any TV at all!)

3. Is Lance back yet? How long do we have to endure these bizarre puritanical parentheticals (which I find distasteful)? There are probably some computer scientists you find distasteful as well. Some of them probably read this blog.

4. Actually, three cheers for the parenthetical declarations of distaste. There sure are plenty of distasteful things around that need to be called what they are. Add to the list violent video games.

5. Did you notice what was referred to as "distasteful"? Out of a list of many television shows (one, e.g., centering around gangsters, the mafia, and fantastical portrayals of drugs, sex, and violence), two were marked as "distasteful" (and not the one I described parenthetically). Those two happen to be shows whose titles at least hint at sexuality. I have a feeling that this is why the remarks seemed almost comically puritanical, as if the blogger were afraid to accidentally endorse something sexual. Sex is bad. Especially sexual women.

6. Oh crap. Is the computational complexity blog going to go the way of pandagon with endless guest posters and sex wars? Who needs this crap? What I find especially distasteful is the omission of ML from the languages list. But I am glad for the omission of Haskell, which I find distasteful.

7. I'm actually glad. How often do you get to hear about "sex wars" in computational complexity blogs? Usually computer scientists have no idea what sex is (when at work?).

8. Have you seen the front of Papadimitriou's book entitled "Computational Complexity"? It contains a naked woman. In fact, the goddess of love and beauty herself.

9. She was formed out of Uranus's blood and semen (which I find distasteful--Uranus is hipper than I am).

10. This is Stu. Sorry about omitting ML, it definitely should have been included. Indeed, I've long claimed that all programming languages (well, almost all, since Prolog doesn't fit the mold) are converging on Lisp, ML, or Haskell. So Haskell should have made the cut too. I guess if I really wanted to be offensive, I'd have grouped ML with Java... Bill asked me a couple of questions about my choices (why Lisp twice, why not Perl). The first is a bit obscure; as for the second -- Perl programmers can only write, not read, so of course they're selected against as readers of the quiz.
Q: How to consume a reusable GUI element/widget with resources in Android

I am trying to use the dateslider in my Android project to get a combined date-time picker in my GUI. The authors of the widget suggest including the Java and resource sources directly into my app.

1st try: copy the Java and resources into the main app.
Result: the widget Java classes cannot compile because the resource IDs R.xxxx cannot be resolved.
Reason: the widget classes are implemented in the package "com.googlecode.android.widgets.DateSlider" while my app has a different namespace "my.namespace.myApp", so the resource IDs come from my.namespace.myApp.R.xxx. To fix this I would have to touch every widget Java source to import my.namespace.myApp. Is there a way to have two resource sets with different namespaces, so that both my.namespace.myApp.R.xxx and com.googlecode.android.widgets.DateSlider.R exist in the main app?

2nd try: put the widget Java + resources into a separate jar/library.
Result: everything compiled, but after app start I get a runtime error: the runtime cannot resolve the widget's resource IDs from the jar/lib. Note: I can call methods from the jar as long as they do not need resources.

So my question: what is the best way to consume a reusable GUI element with resources in Android? I am using Android 2.2.

Note: "Android: How do I create reusable components?" does not help because it tells you how to create library projects.

update 16.3.2012
Since the current version 16/17-pre of the Eclipse ADT tools does not support resources in jars (as in try 2), what is the best/easiest way to consume them until there is support for this?

update 4.4.2012
With the new R17 tools I succeeded in consuming a library project with resources that creates the jar. Android Lint helped me find out what to change in the lib to make it usable.

* eclipse-workspace
  * DateSliberLib
    * src
    * res
    * ...
  * MyAppUsingLib
    * src
    * res
    * ...

With this layout MyAppUsingLib runs fine. However, I am still not able to use the DateSliberLib.jar alone:

* eclipse-workspace
  * MyAppUsingLib
    * src
    * res
    * lib
      * DateSliberLib.jar
    * ...

This setting can be compiled, but the app crashes because it cannot find the resources of the lib.

[update 2014-11-17]
Six months ago I switched from the Eclipse/Ant build to the Android Studio/Gradle build, which introduced *.aar files: jar files with Android resources. The Android Studio/Gradle build can cope with resources in libs.

A: "Is there a way to have 2 resource-sets with different namespaces so that there are my.namespace.myApp.R.xxx and com.googlecode.android.widgets.DateSlider.R in the main app?"

No, sorry.

"Note: Android: How do I create reusable components? does not help because it tells you how to create library-projects."

However, that is the right answer. Download the full project from their repo, import it into Eclipse, and mark it as a library project (Properties > Android). Then, add it as a library project to your app's project. Eventually (which now appears like it will be the R18 version of the tools or later), the tools should support packaging reusable components in a JAR, with resources, in such a manner that you can add them to a host project and the resources will be blended in automatically. Right now, that's not an option.
Frédéric Pierucci (born January 1968) is a former senior executive of Alstom and global president of the company's boiler division, accused of corruption by the US government in 2013, then arrested and detained for several years in the United States on those charges. In France, this affair is frequently presented as an example of the economic warfare waged by the United States, including against its supposed allies; indeed, his arrest and imprisonment took place against the backdrop of the acquisition of Alstom's energy branch by General Electric.

Biography

Early life and education
Frédéric Pierucci is a graduate of the École nationale supérieure de mécanique et d'aérotechnique (Ensma) and of INSEAD, and holds an MBA from Columbia University.

Career at Alstom
From 1995 to 1999 he was sales director for China for Alstom's Power division, based in Beijing. In September 1999 he became director of worldwide sales and marketing for Alstom's boiler business, in Windsor, Connecticut. In his spare time he attended Columbia University's MBA program in New York, fully financed by his employer. He then held a position in Singapore. He was arrested during a business trip to the United States in April 2013, then dismissed at the end of 2013 while in detention, after 21 years at Alstom.

Criminal proceedings in the United States
As early as 2009, Alstom was targeted by the US Department of Justice (DOJ) under the Foreign Corrupt Practices Act (FCPA), one of the laws with extraterritorial reach in American law. The FCPA criminalizes the payment of bribes to foreign public officials. For context, the same acts are criminal in all member countries of the Organisation for Economic Co-operation and Development, including France.

Under the leadership of Patrick Kron, the group initially appeared to cooperate with the DOJ, or to pretend to. As for Pierucci, he fell under American jurisdiction as a senior executive of Alstom Power (a subsidiary based in Windsor, Connecticut, US) and of several other Alstom subsidiaries. The criminal charges concerned the use, over the period 2002-2009, of two consultants to funnel bribes to Indonesian public officials in order to win the Tarahan project (a $118 million contract for a coal-fired power plant) in Indonesia. The DOJ produced evidence (including a series of email exchanges) that the second consultant was retained because the first was suspected of being unable to secure the contract. In the end, Alstom won the contract. In April 2013, Frédéric Pierucci was arrested in the United States and held in the Wyatt high-security prison in Rhode Island. His prison sentence could have reached 20 years had he gone to trial, so Pierucci pleaded guilty on July 29, 2013 in order to reduce it to six months, a term negotiated between his lawyers and the prosecutors. However, as former Alstom executive William Pomponi, who had also been arrested, refused to plead guilty, the prosecutors went back on their word and postponed Pierucci's sentencing in order to pressure him into testifying against Pomponi should the latter go to trial. The prosecutors also opposed his request for conditional release, even though it had been negotiated as part of his guilty plea. While in detention, Frédéric Pierucci received a summons to a pre-dismissal interview, on the grounds of his absence from work and the damage his situation caused to the group's image. He no longer received legal assistance from Alstom. He was finally dismissed on November 16, 2013, with a notice period ending on June 30, 2014.

Frédéric Pierucci had spent fourteen months in prison when he was released on bail on June 12, 2014, the same week the French government approved the purchase of Alstom's energy branch by General Electric. Back in France, he awaited his final sentencing in the United States, which was postponed several times. He filed a labor-court (prud'hommes) complaint against Alstom over the harm he had suffered, as well as over his dismissal, the halt in payment of his lawyers, and the 90,000 euros Alstom owed him as a final settlement. He had negotiated with Alstom to receive several hundred thousand euros in early July 2015, but the GE acquisition was finalized, which voided the negotiation. In the end, Frédéric Pierucci received only 45,000 euros from his former employer. In February 2017, he learned that Alstom had taken out a specific insurance policy to protect all its senior executives, but that the company had not invoked it at the time of his arrest. Lawrence Hoskins, head of Alstom's commercial network for Asia, who was also arrested by the United States, benefited from this insurance, which paid him 3 million dollars in legal fees. Pierucci learned from Alstom's insurer that it was still possible to claim on this policy, and wrote to several Alstom executives asking them to invoke it. He never received a reply. His sentencing in the United States finally took place on September 25, 2017, four years after his guilty plea, in the district court of Connecticut (federal system). Although his sentence was expected to correspond to the fourteen months he had already served, the prosecutors now accused him of being the "leader" of Alstom's corrupt practices, which added twelve more months of prison to his sentence. On October 26, 2017, he was imprisoned at Moshannon Valley Correctional Center, in Pennsylvania.

On September 9, 2018, after months of requests, his transfer to France was approved. He was moved through several US prisons before being sent to France on September 21, 2018, where he was taken to the Villepinte remand center. On September 25, the sentence-enforcement judge placed him on parole. In total, Frédéric Pierucci spent twenty-five months in prison in the United States.

Aftermath of the sale of Alstom's energy branch to General Electric
Pierucci co-wrote a book about this affair, Le Piège américain. France Inter adapted the book into a radio series, illustrating the Alstom affair through Pierucci's experience since 2014. In 2020, Frédéric Pierucci tried to assemble French investors to buy back from the American company General Electric the nuclear business that had been part of Alstom. He convinced the public investment bank Bpifrance and the Caisse des dépôts to back him; all he lacked was the support of an operator, which could be EDF, present in the nuclear sector through Framatome, which it controls. This operation would recover ownership of Arabelle, the most powerful and most reliable turbine on the market for converting into electricity the steam produced when water is brought to the boil by nuclear fission in power plants. Arnaud Montebourg defends the same approach. In need of cash, General Electric began in 2020 to sell off a good part of its assets, potentially including the former Alstom nuclear activities. In 2021, EDF entered into discussions with General Electric to buy back its nuclear activities, grouped in the subsidiary GE Steam Power (corresponding to Alstom's former energy division and mainly located in Belfort).

Works

References

See also
Documentary, Interview

Radio broadcasts
A series by François Luciani and Matthieu Aron, based on "Le piège américain" by Frédéric Pierucci and Matthieu Aron, published by Jean-Claude Lattès (2019)

External links

1968 births
French businesspeople
French whistleblowers
Columbia University alumni
French aeronautical engineers
INSEAD alumni
People associated with energy
21st-century French essayists
Saturday's signing at Barnes & Noble was terrific! I was even treated to an eight stanza recitation of a poem one man had written for his three year-old daughter. Since he purchased a book, it was kind of like a little literary salon, right there in Barnes & Noble. Like a moron, I forgot to take pictures–I'm batting about 50% there. But I did remember my camera at the Surreal South reading at SIU back on the 27th. It was a great time–rather irreverent for a lit event, but that's what Surreal South is all about! From L to R: Kyle Minor, Rodney Jones, your faithful correspondent, Pinckney, and Jon Tribble. This Saturday, Pinckney and I will be reading/signing Surreal South and Isabella Moon at Burkes Books in Memphis from 2-4 p.m. Joining us will be the incomparable Tom Franklin (Smonk, Poachers) and the much-(deservedly)lauded poet Beth Ann Fennelly. Should be a fun afternoon!
<?xml version="1.0" encoding="UTF-8"?> <!-- Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements. See the NOTICE file distributed with this work for additional information regarding copyright ownership. The ASF licenses this file to You under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> <project name="Harmony Logging Test" default="test" basedir="."> <property name="hy.test.root" location=".." /> <property name="hy.component" value="classlib" /> <property name="hy.module" value="logging" /> <import file="${hy.test.root}/../ant/testproperties.xml" /> <target name="test" depends="test-module" /> <target name="test-module" depends="test-jre-vm-info"> <convert-test-as-class from="test.case" to="converted.tc.class" /> <run-selected-hdk-tests module="logging" jar="logging_tests.jar"> <junit-elements> <!-- Required by various tests that set security manager etc --> <jvmarg value="-Djava.security.policy=../testing.policy" /> <jvmarg value="-Xbootclasspath/a:logging_boot_tests.jar"/> </junit-elements> <excludeorinclude> <include name="org/apache/harmony/logging/tests/java/util/logging/LogManagerTest.class"/> </excludeorinclude> </run-selected-hdk-tests> <run-selected-hdk-tests module="logging" jar="logging_tests.jar"> <junit-elements> <!-- Required by various tests that set security manager etc --> <jvmarg value="-Djava.security.policy=../testing.policy" /> <jvmarg value="-Xbootclasspath/a:logging_boot_tests.jar"/> </junit-elements> <excludeorinclude> <include 
name="org/apache/harmony/logging/tests/java/util/logging/*Test.class" unless="test.case" /> <exclude name="org/apache/harmony/logging/tests/java/util/logging/LogManagerTest.class" unless="test.case" /> </excludeorinclude> </run-selected-hdk-tests> </target> </project>
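Given the `test.case` handling above (`convert-test-as-class` plus the `unless="test.case"` guards), a single test can presumably be selected from the command line. A hypothetical invocation, assuming the conventions of the imported `testproperties.xml`:

```shell
# Run the whole logging test module
ant test

# Run only LogManagerTest; convert-test-as-class turns the test.case
# property into the converted.tc.class pattern used by the include filters
ant -Dtest.case=org.apache.harmony.logging.tests.java.util.logging.LogManagerTest test
```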
Rainier is a city in Thurston County in the US state of Washington. The first building on its territory was a railroad station in the 1870s; permanent settlement followed in 1890 and incorporation in 1947. In 2010 it had 1,794 inhabitants.

History
Rainier began its existence as a station on the Northern Pacific Railroad line between Tacoma and Kalama. It lies on prairies called ten al quelth by the local Indian tribes and offers views of Mount Rainier, after which it was named. In 1890, Albert and Maria Gehrke became its first permanent settlers. The same year the town received its post office, and a year later a town plat. In 1896 Albert Gehrke, with his brothers Theodore and Paul, built the first church and a school, which is now a state historic landmark. In 1906 the Bob White Lumber Company was founded there, improving the local economy through logging and sawmilling. Other lumber companies, such as Deschutes, Gruber and Docherty, and Fir Tree, soon moved into town. Around the turn of the 1920s and 1930s, some of the sawmills and many of the town's buildings were destroyed by a series of fires, leading many residents to move to the Weyerhaeuser sawmill in nearby Vail, which is now a ghost town. In 1940 the town had 500 inhabitants, and a year later it was described as a social center for the farmers and loggers of the surrounding area which, with its closed sawmills and abandoned houses, resembled a lumber ghost town. Even so, it was incorporated in 1947.

Demographics
Of the 1,794 residents in 2010, 91% were white and 1% each were African American and Native American. 5% of the population was of Hispanic origin.

Parks and recreation
The city has a total of 8 acres of city parks, the best known of which is Veterans Memorial Park in the city center, named in honor of all past and present soldiers, police officers and firefighters.

Nearby Wilkowksi Park is the site of the annual Rainier Roundup bluegrass festival, held every fourth weekend in August. Alongside the park runs the Yelm-Tenino Trail, a paved hiking and cycling trail connecting the towns of Yelm, Rainier and Tenino.

Education
The city is served by the Rainier School District, which has one elementary school, one middle school and one high school, Rainier High School.

References

External links

Cities in Washington (state)
\section{Introduction} \IEEEPARstart{C}{OVID-19} belongs to the family of coronavirus caused diseases, initially reported at Wuhan, China, during late December 2020. On March 11, it spread over 114 countries with 118,000 active cases and 4000 deaths, WHO declared this a pandemic~\cite{bworld, world2020director}. On May 4, 2020, over 3,519,901 cases and 247,630 deaths had been reported worldwide. Several healthcare organizations, medical experts and scientists are trying to develop proper medicines and vaccines for this deadly virus, but till date, no success is reported. This situation forces the global community to look for alternate ways to stop the spread of this infectious virus. Social distancing is claimed as the best spread stopper in the present scenario, and all affected countries are locked-down to implement social distancing. This research is aimed to support and mitigate the coronavirus pandemic along with minimum loss of economic endeavours, and propose a solution to detect the social distancing among people gathered at any public place. The word "social distancing" is best practice in the direction of efforts through a variety of means, aiming to minimize or interrupt the transmission of COVID-19. It aims at reducing the physical contact between possibly infected individuals and healthy persons. As per the WHO norms~\cite{hensley2020social} it is prescribed that people should maintain at least 6 feet of distance among each other in order to follow social distancing. A recent study indicates that social distancing is an important containment measure and essential to prevent SARS-CoV-2, because people with mild or no symptoms may fortuitously carry corona infection and can infect others~\cite{sa4}. Fig.~\ref{fig2} indicates that proper social distancing is the best way to reduce infectious physical contact, hence reduces the infection rate~\cite{fong2020nonpharmaceutical, ahmed2018effectiveness}. 
This reduced peak may then match the available healthcare infrastructure and help to offer better facilities to the patients battling the coronavirus pandemic.
\begin{figure}
\centering
\includegraphics[scale=0.25] {fig2}
\caption{An outcome of social distancing as the reduced peak of the epidemic, matching with available healthcare capacity.}
\label{fig2}
\end{figure}
Epidemiology is the study of the factors and reasons for the spread of infectious diseases, and mathematical models are the most preferred tool for studying epidemiological phenomena. Almost all such models descend from the classical SIR model of Kermack and McKendrick, established in 1927~\cite{kermack1991contributions}. Much research has been done on the SIR model and its extensions as deterministic systems~\cite{eksin2019systematic}, and consequently many researchers have studied stochastic biological systems and epidemic models~\cite{zhao2016asymptotic}. Respiratory diseases are infectious, and the rate and mode of transmission of the causative virus are the most critical factors to be considered in the treatment or in ways to stop the spread of the virus in the community. Several medical organizations and pandemic researchers are trying to develop vaccines for COVID-19, but there is still no well-established medicine available for treatment. Hence, precautionary steps are being taken worldwide to restrict the spread of infection. Recently, Eksin et al.~\cite{eksin2019systematic} proposed a modified SIR model with the inclusion of a social distancing parameter, $a(I, R)$, which is determined from the numbers of infected and recovered persons, represented as $I$ and $R$, respectively:
\begin{equation}
\begin{aligned}
\frac{dS}{dt} &= -\beta S \frac{I}{N}a(I,R) \\
\frac{dI}{dt} &= -\delta I+\beta S \frac{I}{N}a(I,R) \\
\frac{dR}{dt} &= \delta I
\end{aligned}
\label{eq1}
\end{equation}
where $\beta$ represents the infection rate and $\delta$ the recovery rate.
The population size is computed as $N = S + I + R$. Here the social distancing term ($a(I,R):{\mathbb{R}}^2 \to [0,1]$) scales the transition rate from the susceptible state ($S$) to the infected state ($I$), which is calculated as $\frac{a\beta SI}{N}$. The social distancing models are of two types. The first, known as \enquote{long-term awareness}, reduces the occurrence of an individual's interactions with others proportionally to the cumulative percentage of affected (infectious and recovered) individuals (Eq.~\ref{eq2}):
\begin{equation}
a = {\left (1- \frac{I+R}{N} \right)}^k
\label{eq2}
\end{equation}
The second, known as \enquote{short-term awareness}, reduces interaction in direct proportion to the fraction of infectious individuals at a given instant (Eq.~\ref{eq3}):
\begin{equation}
a = {\left (1- \frac{I}{N} \right)}^k
\label{eq3}
\end{equation}
where $k \geq 0$ is a behavior parameter; a higher value of $k$ implies that individuals are more sensitive to the disease prevalence. Against this background, on April 16, 2020, the company Landing AI~\cite{sA21}, under the leadership of one of the most recognizable names in AI, Dr. Andrew Ng~\cite{sA22}, announced the creation of an AI tool to monitor social distancing at the workplace. In a brief article, the company claimed that the upcoming tool could detect whether people are maintaining a safe physical distance from each other by analyzing real-time video streams from a camera, and that it can easily be integrated with the security cameras already installed at different workplaces to help workers maintain a safe distance. A brief demo was released showing three steps: calibration, detection and measurement. On April 21, 2020, Gartner, Inc.
identified Landing AI as a Cool Vendor in AI Core Technologies to appreciate its timely initiative in this revolutionary area supporting the fight against COVID-19~\cite{sA29}. Motivated by this, in the present work the authors attempt to assess and compare the performance of popular object detection and tracking schemes in monitoring social distancing. The rest of the paper is organized as follows: Section II presents recent work in this field of study, followed by the state-of-the-art object detection and tracking models in Section III. Section IV proposes the deep learning based framework to monitor social distancing. Section V discusses the experimentation and corresponding results, Section VI the outcome, and Section VII the future scope and challenges; lastly, Section VIII presents the conclusion of the present research work. \section{Background study and related work } Social distancing is surely the most trustworthy technique to stop the spread of infectious disease. With this belief, in the background of December 2019, when COVID-19 emerged in Wuhan, China, it was adopted as an unprecedented measure on January 23, 2020~\cite{sA8}. Within one month, the outbreak in China reached a peak in the first week of February with 2,000 to 4,000 new confirmed cases per day. Later, for the first time after the outbreak, there was a sign of relief with no new confirmed cases for five consecutive days up to 23 March 2020~\cite{sA10}. It is evident that the social distancing measures enacted initially in China, and adopted worldwide later, helped to control COVID-19. Prem et al.~\cite{prem2020effect} studied the effects of social distancing measures on the spread of the COVID-19 epidemic. The authors used synthetic location-specific contact patterns to simulate the ongoing trajectory of the outbreak using susceptible-exposed-infected-removed (SEIR) models.
It was also suggested that premature and sudden lifting of social distancing could lead to an earlier secondary peak, which could be flattened by relaxing the interventions gradually~\cite{prem2020effect}. Social distancing, though essential, is an economically painful measure for flattening the infection curve. Adolph et al.~\cite{adolph2020pandemic} highlighted the situation of the United States of America, where, due to a lack of common consent among policymakers, it could not be adopted at an early stage, resulting in ongoing harm to public health. Although social distancing impacts economic productivity, many researchers are trying hard to overcome the loss. In this context, Kylie et al.~\cite{ainslie2020evidence} studied the correlation between the strictness of social distancing and the economic status of the region, indicating that intermediate levels of activity could be permitted while avoiding a massive outbreak. Since the novel coronavirus pandemic began, many countries have been taking the help of technology based solutions in different capacities to contain the outbreak~\cite{sonbhadra2020target, punn2020automated, Punn2020.04.08.20057679}. Many countries, including India and South Korea, for instance, utilise GPS to track the movements of suspected or infected persons to monitor any possibility of their exposure among healthy people. In India, the government is using the Aarogya Setu app, which works with the help of GPS and Bluetooth to locate the presence of COVID-19 patients in the vicinity; it also helps others keep a safe distance from infected persons~\cite{sA17}. On the other hand, some law enforcement departments have been using drones and other surveillance cameras to detect mass gatherings of people, and taking regulatory actions to disperse the crowd~\cite{robakowska2017use, 8844927}.
Such manual intervention in these critical situations might help flatten the curve, but it also brings a unique set of threats to the public and is challenging for the workforce. Human detection using visual surveillance systems is an established area of research that has long relied on manual methods of identifying unusual activities, which have limited capabilities~\cite{sulman2008effective}. In this direction, recent advancements advocate the need for intelligent systems to detect and capture human activities. Human detection remains an ambitious goal due to a variety of constraints such as low-resolution video, varying articulated poses, clothing, lighting, background complexities and limited machine vision capabilities, wherein prior knowledge of these challenges can improve detection performance~\cite{wang2013intelligent}. Detecting an object in motion incorporates two stages: object detection~\cite{joshi2012survey} and object classification~\cite{javed2002tracking}. The primary stage of object detection can be achieved using background subtraction~\cite{brutzer2011evaluation}, optical flow~\cite{aslani2013optical} and spatio-temporal filtering techniques~\cite{dollar2005behavior}. In the background subtraction method~\cite{piccardi2004background}, the difference between the current frame and a background frame (first frame) is computed at the pixel or block level. Adaptive Gaussian mixture, temporal differencing, hierarchical background models, warping background and non-parametric background are the most popular approaches to background subtraction~\cite{xu2016background}. In optical flow based object detection~\cite{aslani2013optical}, flow vectors associated with the object's motion are characterised over a time span in order to identify regions in motion for a given sequence of images~\cite{tsutsui2001optical}.
Researchers have reported that optical flow based techniques involve computational overheads and are sensitive to motion related outliers such as noise, colour and lighting~\cite{agarwal2016review}. In another motion detection method, Aslani et al.~\cite{dollar2005behavior} proposed a spatio-temporal filter based approach in which the motion parameters are identified using three-dimensional (3D) spatio-temporal features of the person in motion in the image sequence. These methods are advantageous due to their simplicity and low computational complexity, but show limited performance in the presence of noise and uncertain moving patterns~\cite{niyogi1994analyzing}. Object detection problems have been efficiently addressed by recently developed advanced techniques. In the last decade, convolutional neural networks (CNN), region-based CNN~\cite{zhao2019object} and faster region-based CNN~\cite{krizhevsky2012imagenet} have used region proposal techniques to generate an objectness score prior to classification, and later generate the bounding boxes around the object of interest for visualization and other statistical analysis~\cite{ren2015faster}. Although these methods are efficient, they suffer from large training time requirements~\cite{chen2017implementation}. Since all these CNN based approaches utilize classification, another approach, YOLO, considers a regression based method to dimensionally separate the bounding boxes and predict their class probabilities~\cite{redmon2016you}. In this method, the designed framework efficiently divides the image into several portions representing bounding boxes along with the class probability scores for each portion to be considered an object. This approach offers excellent improvements in speed while trading some accuracy for the gained speed. The detector module exhibits powerful generalization capabilities for representing an entire image~\cite{putra2018convolutional}.
Based on the above concepts, many research findings have been reported in the last few years. Crowd counting has emerged as a promising area of research with many societal applications. Eshel et al.~\cite{eshel2008homography} focused on crowd detection and person count by proposing multiple height homographies for head top detection, and solved the occlusion problems associated with video surveillance applications. Chen et al.~\cite{chen2009online} developed an electronic advertising application based on the concept of crowd counting. In a similar application, Chih-Wen et al.~\cite{su2009vision} proposed a vision-based people counting model. Following this, Yao et al.~\cite{yao2011fast} used inputs from stationary cameras to perform background subtraction and train a model for the appearance and foreground shape of the crowd in videos. Once an object is detected, classification techniques can be applied to identify a human on the basis of shape, texture or motion based features. In shape based methods, the shape related information of moving regions such as points, boxes and blobs is determined to identify the human. This approach performs poorly due to certain limitations of standard template matching schemes~\cite{wu2007detection, eishita2012occlusion}, which is further enhanced by applying a part-based template matching~\cite{singh2008human} approach. In other research, Dalal et al.~\cite{dalal2005histograms} proposed texture based schemes such as histograms of oriented gradients (HOG), which utilise high dimensional edge based features along with a support vector machine (SVM) to detect humans. According to recent research, further identification of a person through video surveillance can be done using face~\cite{huang2010shape, samal1992automatic} and gait recognition~\cite{cunado1997using} techniques. However, detection and tracking of people in crowds is sometimes difficult due to partial or full occlusion problems.
Leibe et al.~\cite{leibe2005pedestrian} proposed a trajectory estimation based solution, while Andriluka et al.~\cite{andriluka2008people} proposed a solution to detect partially occluded people using tracklet-based detectors. Many other tracking techniques, covering a variety of object and motion representations, are reviewed by Yilmaz et al.~\cite{yilmaz2006object}. A large number of studies are available in the area of video surveillance. Among the many publicly available datasets, the KTH human motion dataset~\cite{schuldt2004recognizing} covers six categories of activities, whereas the INRIA XMAS multi-view dataset~\cite{weinland2006free} and the Weizmann human action dataset~\cite{blank2005actions} contain 11 and 10 categories of actions, respectively. Another dataset, named performance evaluation of tracking and surveillance (PETS), was proposed by a group of researchers at the University of Oxford~\cite{parkhi2012oxford}. This dataset is available for vision based research and comprises a large number of datasets for varying tasks in the field of computer vision. In the present research, in order to fine-tune the object detection and tracking models for identifying a person, the open images dataset~\cite{kuznetsova2020open} is considered. It is a collection of 19,957 classes, out of which the models are trained only for the identification of a person. The images are annotated with image-level labels and the corresponding coordinates of the bounding boxes representing each person. Furthermore, the fine-tuned proposed framework is simulated on the Oxford town center surveillance footage~\cite{8844927} to monitor social distancing. We believe that having a single dataset with unified annotations for image classification, object detection, visual relationship detection, instance segmentation, and multimodal image descriptions will enable object detection tasks to be studied and performed efficiently and stimulate progress towards genuine understanding of the scene.
All the explored literature and related research work clearly establish that applications of human detection can easily be extended to address presently arising needs, such as checking prescribed standards of hygiene, social distancing, work practices, etc. \section{Object detection and tracking models} As observed from Fig.~\ref{fig3}, successful object detection models like RCNN~\cite{girshick2014rich}, fast RCNN~\cite{girshick2015fast}, faster RCNN~\cite{ren2015faster}, SSD~\cite{liu2016ssd}, YOLO v1~\cite{redmon2016you}, YOLO v2~\cite{redmon2017yolo9000} and YOLO v3~\cite{redmon2018yolov3}, tested on the PASCAL-VOC~\cite{everingham2010pascal} and MS-COCO~\cite{lin2014microsoft} datasets, undergo a trade-off between speed and accuracy of detection, which depends on various factors like the backbone architecture (feature extraction network, e.g. VGG-16~\cite{simonyan2014very}, ResNet-101~\cite{he2016deep}, Inception v2~\cite{szegedy2016rethinking}, etc.), input size, model depth, and varying software and hardware environments. A feature extractor encodes the model's input into a feature representation that aids in learning and discovering the patterns associated with the desired objects. In order to identify multiple objects of varying scale or size, predefined boxes covering the entire image, termed anchor boxes, are also used. Table~\ref{tab1} describes the performance in terms of accuracy of each of these popular and powerful feature extraction networks on the ILSVRC ImageNet challenge~\cite{russakovsky2015imagenet}, along with the number of trainable parameters, which have a direct impact on training speed and time.
As highlighted in Table~\ref{tab1}, the ratio of accuracy to the number of parameters is highest for the Inception v2 model, indicating that Inception v2 achieves adequate classification accuracy with minimal trainable parameters in contrast to the other models; hence it is utilized as the backbone architecture for faster and more efficient computation in the faster RCNN and SSD object detection models, whereas YOLO v3 uses a different architecture, Darknet-53, as proposed by Redmon et al.~\cite{redmon2018yolov3}. \begin{figure} \centering \includegraphics[scale=0.24] {fig3} \caption{Performance overview of the most popular object detection models on PASCAL-VOC and MS-COCO datasets. } \label{fig3} \end{figure} \begin{table}[h!] \centering \caption{ Performance of the feature extraction network on ImageNet challenge.} \begin{tabular}{|p{2.15cm}|c|c|c|} \hline Backbone model & Accuracy (a) & Parameters (p) & Ratio (a*100/p) \\ \hline VGG-16~\cite{simonyan2014very} & 0.71 & 15 M & 4.73 \\ \hline ResNet-101~\cite{he2016deep} & 0.76 & 42.5 M & 1.78 \\ \hline \textbf{Inception v2}~\cite{szegedy2016rethinking} & \textbf{0.74} & \textbf{10 M} & \textbf{7.40} \\ \hline Inception v3~\cite{szegedy2017inception} & 0.78 & 22 M & 3.58 \\ \hline Resnet v2~\cite{szegedy2017inception} & 0.80 & 54 M & 1.48 \\ \hline \end{tabular} \label{tab1} \end{table} \subsection{Anchor boxes} From the exhaustive literature survey, it is observed that every popular object detection model utilizes the concept of anchor boxes to detect multiple objects in a scene~\cite{zhao2019object}. These boxes are overlaid on the input image at various spatial locations (per filter) with varying sizes and aspect ratios. In this article, for an image of dimensions breadth ($b$) $\times$ height ($h$), the anchor boxes are generated in the following manner.
Consider the parameters size $p \in (0,1]$ and aspect ratio $r > 0$; then the anchor boxes for a certain location in an image can be constructed with dimensions $bp\sqrt{r} \times hp/\sqrt{r}$. Table~\ref{tab2} shows the values of $p$ and $r$ configured for each model. Later, the object detection model is trained to predict, for each generated anchor box, whether it belongs to a certain class, along with an offset to adjust the dimensions of the anchor box to better fit the ground-truth of the object, using the classification and regression losses. Since there are many anchor boxes for a spatial location, an object can get associated with more than one anchor box. This problem is dealt with by non-max suppression (NMS), which computes the intersection over union (IoU) parameter to limit the anchor boxes associated with the object of interest. The IoU score is calculated as the ratio of the overlapping region between the assigned anchor box and the ground-truth to the union of the regions of the anchor box and the ground-truth. The score value is then compared with a threshold hyperparameter to return the best bounding box for an object.
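The anchor construction, IoU computation and greedy NMS described above can be sketched as follows. This is an illustrative NumPy implementation, not the models' internal code; the function names and the corner-coordinate box convention $(x_1, y_1, x_2, y_2)$ are assumptions.

```python
import numpy as np

def generate_anchors(img_w, img_h, sizes, ratios):
    """Anchor (w, h) pairs for one spatial location: for size p and aspect
    ratio r, the box has dimensions (img_w*p*sqrt(r)) x (img_h*p/sqrt(r))."""
    return np.array([(img_w * p * np.sqrt(r), img_h * p / np.sqrt(r))
                     for p in sizes for r in ratios])

def iou(box_a, box_b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_threshold=0.7):
    """Greedy non-max suppression: keep the highest-scoring boxes and drop
    any box whose IoU with an already-kept box exceeds the threshold."""
    order = np.argsort(scores)[::-1]
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) <= iou_threshold for j in keep):
            keep.append(i)
    return keep
```

With the faster RCNN configuration of Table~\ref{tab2} (3 sizes $\times$ 3 ratios), `generate_anchors` yields the 9 anchor boxes per spatial location listed there.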
\begin{table}[] \centering \caption{Hyperparameters for generating the anchor boxes.} \label{tab2} \begin{tabular}{|l|l|l|l|l|} \hline \begin{tabular}[c]{@{}l@{}}Detection\\ model\end{tabular} & \begin{tabular}[c]{@{}l@{}}Size vector\\ (p)\end{tabular} & \begin{tabular}[c]{@{}l@{}}Aspect ratio\\ (r)\end{tabular} & \begin{tabular}[c]{@{}l@{}}Anchor\\ boxes\end{tabular} & \begin{tabular}[c]{@{}l@{}}IoU th.\\ for NMS\end{tabular} \\ \hline \begin{tabular}[c]{@{}l@{}}Faster\\ RCNN\end{tabular} & {[}0.25, 0.5, 1.0{]} & {[}0.5, 1.0, 2.0{]} & 9 & 0.7 \\ \hline SSD & {[}0.2, 0.57, 0.95{]} & {[}0.3, 0.5, 1.0{]} & 9 & 0.6 \\ \hline YOLO v3 & {[}0.25, 0.5, 1.0{]} & {[}0.5, 1.0, 2.0{]} & 9 & 0.7 \\ \hline \end{tabular} \end{table} \subsubsection{Loss Function} At each step of model training, a predicted anchor box \enquote*{a} is assigned a label as positive (1) or negative (0), based on its associativity with the object of interest having ground-truth box \enquote*{g}. A positive anchor box is then assigned a class label $y_a \in \{c_1, c_2, \ldots, c_n\}$, where $c_n$ indicates the category of the $n^{th}$ object, while also generating the encoding vector for box \enquote*{g} with respect to \enquote*{a} as $f(g_a|a)$; $y_a = 0$ for negative anchor boxes. Consider an image $I$; for some anchor \enquote*{a}, a model with trained parameters $\omega$ predicts the object class as $Y_{cls}(I|a;\omega)$ and the corresponding box offset as $Y_{reg}(I|a;\omega)$. The loss for a single anchor prediction is then computed as the weighted sum of the classification loss ($L_{cls}$) and the bounding box regression loss ($L_{reg}$), as given by Eq.~\ref{eq4}. \begin{equation} \begin{aligned} L(a|I; \omega) =\alpha \cdot 1_{a}^{obj} L_{reg}(f(g_a|a) - Y_{reg}(I|a;\omega)) + \\ \beta \cdot L_{cls} (y_a, Y_{cls}(I|a;\omega)) \end{aligned} \label{eq4} \end{equation} where $1_{a}^{obj}$ is 1 if \enquote*{a} is a positive anchor, and $\alpha$ and $\beta$ are the weights associated with the regression and classification losses.
Later, the overall loss of the model is computed as the average of $L(a|I;\omega)$ over the predictions for all anchors. \subsection{Faster RCNN} Proposed by Ren et al.~\cite{ren2015faster}, faster RCNN is derived from its predecessors RCNN~\cite{girshick2014rich} and fast RCNN~\cite{girshick2015fast}, which rely on an external region proposal approach based on selective search (SS)~\cite{google}. Many researchers~\cite{punn2020inception, vaswani2018tensor2tensor, amodei2016deep} observed that instead of using SS, it is preferable to utilize the advantages of convolution layers for better and faster localization of objects. Hence, Ren et al. proposed the region proposal network (RPN), which uses CNN models, e.g. VGGNet, ResNet, etc., to generate region proposals, making faster RCNN 10 times faster than fast RCNN. Fig.~\ref{fig4} shows the schematic representation of the faster RCNN architecture, where the RPN module performs binary classification of an object or not an object (background), while the classification module assigns a category to each detected object (multi-class classification) by using region of interest (RoI) pooling~\cite{ren2015faster} on the extracted feature maps with projected regions. \begin{figure} \centering \includegraphics[scale=0.125] {fig4} \caption{Schematic representation of faster RCNN architecture} \label{fig4} \end{figure} \subsubsection{Loss function} Faster RCNN is the combination of two modules: the RPN and the fast RCNN detector.
The overall multi-task loss function is composed of the classification loss and the bounding box regression loss as defined in Eq.~\ref{eq4}, with the $L_{cls}$ and $L_{reg}$ functions defined in Eq.~\ref{eq5}: \begin{equation} \begin{aligned} L_{cls} (p_i, p_{i}^{*}) &= -p_{i}^{*} \log (p_i) - (1-p_{i}^{*}) \log (1- p_i) \\ L_{reg} (t^u, v) & = \sum_{i \in \{x, y, w, h\} } L_{1}^{smooth}(t_{i}^{u} -v_i)\\ L_{1}^{smooth}(q)&=\begin{cases} 0.5 q^2, & if \mid q \mid < 1 .\\ \mid q \mid - 0.5, & \text{otherwise}. \end{cases} \end{aligned} \label{eq5} \end{equation} where $t^u = \{t_{x}^{u}, t_{y}^{u}, t_{w}^{u}, t_{h}^{u}\}$ is the predicted correction of the bounding box for true class label $u$, ($x$, $y$) corresponds to the top-left coordinates of the bounding box with height $h$ and width $w$, $v$ is the ground-truth bounding box, $p_i$ is the predicted class probability and $p_{i}^{*}$ is the ground-truth class label. \subsection{Single Shot Detector (SSD)} In this research, the single shot detector (SSD)~\cite{liu2016ssd} is also used as another object identification method to detect people in the real-time video surveillance system. As discussed earlier, faster RCNN works on region proposals to create bounding boxes indicating objects and shows better accuracy, but its processing speed in frames per second (FPS) is low. For real-time processing, SSD further improves the accuracy and FPS by using multi-scale features and default boxes in a single process. It follows the principle of a feed-forward convolution network which generates bounding boxes of fixed sizes along with a score based on the presence of object class instances in those boxes, followed by an NMS step to produce the final detections. Thus, it consists of two steps, extracting feature maps and applying convolution filters to detect objects, realised using an architecture having three main parts.
The first part is a base pretrained network to extract feature maps; in the second part, multi-scale feature layers are used, in which a series of convolution filters is cascaded after the base network. The last part is a non-maximum suppression unit for eliminating overlapping boxes so that only one box is kept per object. The architecture of SSD is shown in Fig.~\ref{fig5}. \begin{figure} \centering \includegraphics[scale=0.1288]{fig5} \caption{Schematic representation of SSD architecture} \label{fig5} \end{figure} \subsubsection{Loss function} Similar to the faster RCNN model discussed above, the overall loss function of the SSD model is equal to the sum of the multi-class classification loss ($L_{cls}$) and the bounding box regression loss (localization loss, $L_{reg}$), as shown in Eq.~\ref{eq4}, where $L_{reg}$ and $L_{cls}$ are defined by Eqs.~\ref{eq6} and~\ref{eq7}: \begin{equation} \begin{aligned} L_{reg}(x,l,g) &=\sum_{i \epsilon pos}^{N} \sum_{m \epsilon c_x, c_y, w, h} x_{ij}^{p} {smooth}_{L_1}(l_{i}^{m} -{\hat{g}}_{j}^{m}) ,\\ {\hat{g}}_{j}^{c_x} &= \frac{ (g_{j}^{c_x}-a_{i}^{c_x})}{a_i^w} , \ {\hat{g}}_{j}^{c_y} = \frac{ (g_{j}^{c_y}-a_{i}^{c_y})}{a_i^h} ,\\ {\hat{g}}_{j}^{w} &= \log \left ( \frac{g_{j}^{w}}{a_{i}^{w}} \right ) ,\ {\hat{g}}_{j}^{h} = \log \left ( \frac{g_{j}^{h}}{a_{i}^{h}} \right ),\\ x_{ij}^{p} & =\begin{cases} 1, & \text{if IoU} > {0.5}\\ 0, & \text{otherwise}. \end{cases} \end{aligned} \label{eq6} \end{equation} where $l$ is the predicted box, $g$ is the ground truth box, $x_{ij}^{p}$ is an indicator that matches the $i^{th}$ anchor box to the $j^{th}$ ground truth box, and $c_x$ and $c_y$ are offsets to the anchor box $a$. \begin{equation} \begin{aligned} L_{cls}(x,c) &= -\sum_{i \epsilon Pos}^{N} x_{ij}^{p} \log ({\hat{c}}_{i}^{p}) -\sum_{i \epsilon Neg} \log ({\hat{c}}_{i}^{o}) \end{aligned} \label{eq7} \end{equation} where ${\hat{c}}_{i}^{p} = \frac{\exp{c_{i}^{p}}}{\sum_{p} \exp{c_{i}^{p}}}$ and $N$ is the number of default matched boxes.
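The smooth L1 regression term and softmax cross-entropy classification term shared by the faster RCNN and SSD losses (Eqs.~\ref{eq5}--\ref{eq7}) can be sketched numerically as follows. This is an illustrative NumPy rendering of the formulas, not the training code of either model; `detection_loss` mirrors the weighted combination of Eq.~\ref{eq4} for a single positive anchor.

```python
import numpy as np

def smooth_l1(q):
    """Smooth L1 of Eq. 5: quadratic for |q| < 1, linear otherwise."""
    q = np.abs(q)
    return np.where(q < 1.0, 0.5 * q ** 2, q - 0.5)

def softmax_cross_entropy(logits, true_class):
    """Softmax followed by negative log-likelihood, as in Eq. 7."""
    shifted = logits - logits.max()                     # numerical stability
    log_probs = shifted - np.log(np.exp(shifted).sum())
    return -log_probs[true_class]

def detection_loss(pred_offsets, target_offsets, logits, true_class,
                   alpha=1.0, beta=1.0):
    """Weighted multi-task loss for one positive anchor (cf. Eq. 4)."""
    l_reg = smooth_l1(pred_offsets - target_offsets).sum()
    l_cls = softmax_cross_entropy(logits, true_class)
    return alpha * l_reg + beta * l_cls
```

The piecewise definition makes the regression loss robust: small offset errors are penalised quadratically, while large ones grow only linearly, limiting the influence of outlier boxes.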
\subsection{YOLO} For object detection, another competitor of SSD is YOLO~\cite{redmon2016you}. This method can predict the type and location of an object by looking at the image only once. YOLO treats the object detection problem as a regression task instead of a classification task to assign class probabilities to the anchor boxes. A single convolutional network simultaneously predicts multiple bounding boxes and class probabilities. There are three major versions of YOLO: v1, v2 and v3. YOLO v1 is inspired by GoogLeNet (Inception network), which is designed for object classification in an image. It consists of 24 convolutional layers and 2 fully connected layers. Instead of the Inception modules used by GoogLeNet, YOLO v1 simply uses a reduction layer followed by convolutional layers. Later, YOLO v2~\cite{redmon2017yolo9000} was proposed with the objective of improving accuracy significantly while making it faster. YOLO v2 uses Darknet-19 as a backbone network, consisting of 19 convolution layers along with 5 max pooling layers and an output softmax layer for object classification. YOLO v2 outperformed its predecessor (YOLO v1) with significant improvements in mAP, FPS and object classification score. In contrast, YOLO v3 performs multi-label classification with the help of logistic classifiers instead of the softmax used in YOLO v1 and v2. For YOLO v3, Redmon et al. proposed Darknet-53 as a backbone architecture that extracts feature maps for classification. In contrast to Darknet-19, Darknet-53 consists of residual blocks (shortcut connections) along with upsampling layers for concatenation, adding depth to the network. YOLO v3 generates three predictions per spatial location at different scales in an image, which addresses the problem of not being able to detect small objects efficiently~\cite{SK4}. Each prediction is monitored by computing objectness, boundary box regressor and classification scores.
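A raw YOLO prediction $(t_x, t_y, t_w, t_h, t_o)$ is decoded into an absolute box by applying a sigmoid to the centre offsets and objectness, and exponentially scaling the anchor priors. The sketch below follows the YOLO v2/v3 parameterization as commonly described; it is an illustration, not the Darknet implementation, and the grid/anchor values are assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def decode_yolo_box(t, cell_xy, anchor_wh, grid_size, img_size):
    """Decode one raw prediction t = (tx, ty, tw, th, to) into an
    absolute (x1, y1, x2, y2) box plus an objectness score.

    cell_xy   -- (cx, cy) integer offset of the responsible grid cell
    anchor_wh -- anchor prior width/height in pixels
    """
    tx, ty, tw, th, to = t
    stride = img_size / grid_size                 # pixels per grid cell
    bx = (sigmoid(tx) + cell_xy[0]) * stride      # box centre x in pixels
    by = (sigmoid(ty) + cell_xy[1]) * stride      # box centre y in pixels
    bw = anchor_wh[0] * np.exp(tw)                # width scaled from the prior
    bh = anchor_wh[1] * np.exp(th)
    objectness = sigmoid(to)                      # confidence an object exists
    return (bx - bw / 2, by - bh / 2, bx + bw / 2, by + bh / 2), objectness
```

With all raw outputs at zero, the decoded box sits at the centre of its grid cell with exactly the anchor prior's dimensions, which is the intended "neutral" prediction of this parameterization.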
In Fig.~\ref{fig6}, a schematic description of the YOLO v3 architecture is presented. \begin{figure} \centering \includegraphics[scale=0.17]{fig6} \caption{Schematic representation of YOLO v3 architecture} \label{fig6} \end{figure} \subsubsection{Loss function} The overall loss function of YOLO v3 consists of the localization loss (bounding box regressor), the cross entropy loss and the confidence loss for the classification score, defined as follows: \begin{equation} \begin{aligned} {\lambda}_{coord} \sum_{i=0}^{S^2} \sum_{j=0}^{B} {{1}}_{i,j}^{obj} ({(t_x - {\hat{t}}_x)}^2 + {(t_y - {\hat{t}}_y)}^2 +{(t_w - {\hat{t}}_w)}^2 + \\ {(t_h - {\hat{t}}_h)}^2) \\ + \sum_{i=0}^{S^2} \sum_{j=0}^{B} {{1}}_{i,j}^{obj} (-\log(\sigma(t_o)) + \sum_{k=1}^{C} BCE({\hat{y}}_k, \sigma (s_k))) \\ + {\lambda}_{noobj}\sum_{i=0}^{S^2} \sum_{j=0}^{B} {{1}}_{i,j}^{noobj} (-\log(1- \sigma(t_o)) \end{aligned} \label{eq9} \end{equation} where ${\lambda}_{coord}$ indicates the weight of the coordinate error, ${S^2}$ indicates the number of grid cells in the image, and $B$ is the number of generated bounding boxes per grid cell. ${1}_{i,j}^{obj} = 1$ denotes that the object is confined in the $j^{th}$ bounding box of grid cell $i$; otherwise it is $0$. \subsection{Deepsort} Deepsort is a deep learning based approach to tracking custom objects in a video~\cite{wojke2017simple}. In the present research, Deepsort is utilized to track the individuals present in the surveillance footage. It makes use of appearance patterns learned from detected objects in the images, which are later combined with temporal information for predicting the associated trajectories of the objects of interest. It keeps track of each object under consideration by mapping unique identifiers for further statistical analysis. Deepsort also handles associated challenges such as occlusion, multiple viewpoints, non-stationary cameras and annotating training data. For effective tracking, the Kalman filter and the Hungarian algorithm are used.
The Kalman filter is used recursively for better association, and it can predict future positions based on the current position. The Hungarian algorithm is used for association and ID attribution, identifying whether an object in the current frame is the same as one in the previous frame. Initially, a faster RCNN is trained for person identification, and for tracking, a linear constant velocity model~\cite{wojke2018deep} is utilized to describe each target with an eight dimensional state space as follows: \begin{equation} x = {[u, v, \lambda, h, \dot{u}, \dot{v}, \dot{\lambda} , \dot{h} ]}^T \end{equation} where ($u,v$) is the centroid of the bounding box, $\lambda$ is the aspect ratio and $h$ is the height of the bounding box; the remaining variables are the respective velocities. Later, the standard Kalman filter is used with a constant velocity motion and linear observation model, where the bounding box coordinates ($u, v, \lambda, h$) are taken as direct observations of the object state. For each track $k$, the number of frames since the last successful measurement association $a_k$ is counted. This counter is incremented during Kalman filter prediction and reset to $0$ when the track gets associated with a measurement. Furthermore, if an identified track exceeds a predefined maximum age, the object is considered to have left the scene and the corresponding track is removed from the track set. If some detected objects cannot be mapped to existing tracks, new track hypotheses are initiated for each of them. For the first three frames, a new track is classified as tentative until a successful measurement association is computed; if no measurement is successfully associated within this window, the track is deleted from the track set.
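The constant velocity motion model above can be sketched with a minimal linear Kalman filter over the eight dimensional state $[u, v, \lambda, h, \dot{u}, \dot{v}, \dot{\lambda}, \dot{h}]$, observing only the box coordinates. This is an illustrative NumPy implementation, not the Deepsort code; the noise magnitudes `q` and `r` are assumed values.

```python
import numpy as np

class ConstantVelocityKalman:
    """Linear Kalman filter: constant-velocity motion, direct observation
    of the bounding box components (u, v, lambda, h)."""

    def __init__(self, initial_box, q=1e-2, r=1e-1):
        self.x = np.concatenate([np.asarray(initial_box, float), np.zeros(4)])
        self.P = np.eye(8)                 # state covariance
        self.F = np.eye(8)                 # motion model
        self.F[:4, 4:] = np.eye(4)         # position += velocity (dt = 1 frame)
        self.H = np.eye(4, 8)              # observe position components only
        self.Q = q * np.eye(8)             # process noise
        self.R = r * np.eye(4)             # measurement noise

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:4]                  # predicted box (u, v, lambda, h)

    def update(self, z):
        S = self.H @ self.P @ self.H.T + self.R          # innovation covariance
        K = self.P @ self.H.T @ np.linalg.inv(S)         # Kalman gain
        self.x = self.x + K @ (np.asarray(z, float) - self.H @ self.x)
        self.P = (np.eye(8) - K @ self.H) @ self.P
```

Feeding a box that drifts at a constant rate lets the filter recover the velocity components, so the next `predict()` extrapolates ahead of the last observation.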
The Hungarian algorithm is then utilized to solve the mapping problem between the newly arrived measurements and the predicted Kalman states, by considering the motion and appearance information with the help of the Mahalanobis distance computed between them, as defined in Eq.~\ref{eq10}. \begin{equation} d^{(1)} (i,j) = {(d_j -y_i)}^T S_{i}^{-1}(d_j -y_i) \label{eq10} \end{equation} where the projection of the $i^{th}$ track distribution into measurement space is represented by ($y_i ,S_i$) and the $j^{th}$ bounding box detection by $d_j$. The Mahalanobis distance accounts for state estimation uncertainty by measuring how many standard deviations the detection is away from the mean track location. Using this metric, unlikely associations can be excluded by thresholding the Mahalanobis distance. This decision is denoted with an indicator that evaluates to 1 if the association between the $i^{th}$ track and the $j^{th}$ detection is admissible (Eq.~\ref{eq11}). \begin{equation} b_{i,j}^{(1)} = 1 [d^{(1)} (i,j) < t^{(1)} ] \label{eq11} \end{equation} Although the Mahalanobis distance performs efficiently, it fails in environments where camera motion is possible; thereby another metric is introduced for the assignment problem. This second metric measures the smallest cosine distance between the $i^{th}$ track and the $j^{th}$ detection in appearance space as follows: \begin{equation} d^{(2)} (i, j)= \min\{1- {r_j}^T {r_k}^{(i)} \mid {r_k}^{(i)} \in \mathcal{R}_i \} \label{eq12} \end{equation} where $\mathcal{R}_i$ is the gallery of appearance descriptors stored for the $i^{th}$ track. Again, a binary variable is introduced to indicate whether an association is admissible according to this metric: \begin{equation} b_{i,j}^{(2)} = 1 [d^{(2)} (i,j) < t^{(2)} ] \label{eq13} \end{equation} and a suitable threshold is determined for this indicator on a separate training dataset.
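The two gating metrics and the Hungarian assignment over a resulting cost matrix can be sketched as follows. This is illustrative NumPy/SciPy code, not the Deepsort implementation; function names are hypothetical, and `scipy.optimize.linear_sum_assignment` is used as the Hungarian solver.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def mahalanobis_sq(d, y, S):
    """Squared Mahalanobis distance (Eq. 10) between detection d and the
    projected track mean y with innovation covariance S."""
    diff = np.asarray(d, float) - np.asarray(y, float)
    return float(diff @ np.linalg.inv(S) @ diff)

def cosine_distance(r_det, track_gallery):
    """Smallest cosine distance (Eq. 12) between a detection's appearance
    descriptor and a track's gallery of stored descriptors."""
    r_det = r_det / np.linalg.norm(r_det)
    gallery = track_gallery / np.linalg.norm(track_gallery, axis=1, keepdims=True)
    return float(1.0 - (gallery @ r_det).max())

def associate(cost, gate):
    """Hungarian assignment on a tracks x detections cost matrix, keeping
    only pairs whose boolean gate (Eqs. 11 and 13 combined) is admissible."""
    rows, cols = linear_sum_assignment(cost)
    return [(i, j) for i, j in zip(rows, cols) if gate[i, j]]
```

With an identity innovation covariance, the Mahalanobis distance reduces to the squared Euclidean distance, and the cosine distance of a descriptor to a gallery containing it is zero, which matches the intuition behind both gates.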
To build the association problem, both metrics are combined using a weighted sum: \begin{equation} c_{i,j} = \lambda d^{(1)} (i,j) + (1 - \lambda) d^{(2)} (i, j) \end{equation} where an association is admissible if it lies within the gating region of both metrics: \begin{equation} b_{i,j} = \prod_{m=1}^{2} b_{i,j}^{(m)}. \end{equation} The influence of each metric on the combined association cost is controlled through the hyperparameter $\lambda$. \section{Proposed approach} The emergence of deep learning has brought the best performing techniques for a wide variety of tasks and challenges, including medical diagnosis~\cite{punn2020inception}, machine translation~\cite{vaswani2018tensor2tensor}, speech recognition~\cite{amodei2016deep}, and a lot more~\cite{pouyanfar2018survey}. Most of these tasks are centred around object classification, detection, segmentation, tracking and recognition~\cite{brunetti2018computer, punn2019crowd}. In recent years, convolutional neural network (CNN) based architectures have shown significant performance improvements leading towards high quality object detection, as shown in Fig.~\ref{fig3}, which presents the performance of such models in terms of mAP and FPS on the standard benchmark datasets PASCAL-VOC~\cite{everingham2010pascal} and MS-COCO~\cite{lin2014microsoft}, under similar hardware resources. In the present article, a deep learning based framework is proposed that utilizes object detection and tracking models to aid the social distancing remedy for dealing with the escalation of COVID-19 cases. In order to maintain a balance of speed and accuracy, YOLO v3~\cite{redmon2018yolov3} alongside Deepsort~\cite{wojke2017simple} are utilized as the object detection and tracking approaches, surrounding each detected object with a bounding box.
Later, these bounding boxes are utilized to compute the pairwise \textit{L2} norm with a computationally efficient vectorized representation for identifying the clusters of people not obeying the order of social distancing. Furthermore, to visualize the clusters in the live stream, each bounding box is color-coded based on its association with a group, where people belonging to the same group are represented with the same color. Each surveillance frame is also accompanied by a streamline plot depicting the statistical count of the number of social groups and an index term (violation index) representing the ratio of the number of people to the number of groups. Furthermore, estimated violations can be computed by multiplying the violation index with the total number of social groups. \subsection{Workflow} This section includes the necessary steps undertaken to compose a framework for monitoring social distancing. \begin{itemize} \item[1.] Fine-tune the trained object detection model to identify and track people in the footage. \item [2.] The trained model is fed with the surveillance footage. The model generates a set of bounding boxes and an ID for each identified person. \item [3.] Each individual is associated with a three-dimensional feature space ($x, y, d$), where ($x$, $y$) corresponds to the centroid coordinates of the bounding box and $d$ defines the depth of the individual as observed from the camera. \begin{equation} d = ((2 * 3.14 * 180) / (w + h * 360) * 1000 + 3) \end{equation} where $w$ is the width of the bounding box and $h$ is the height of the bounding box~\cite{kaggle}. \item[4.] For the set of bounding boxes, the pairwise \textit{L2} norm is computed as given by the following equation. \begin{equation} ||D||_2=\sqrt{\sum_{i=1}^{n} {(q_i -p_i)}^2} \end{equation} where in this work $n = 3$. \item[5.] The dense matrix of \textit{L2} norms is then utilized to assign the neighbors for each individual that satisfy the closeness sensitivity.
Based on extensive trials, the closeness threshold is updated dynamically based on the spatial location of the person in a given frame, ranging between ($90, 170$) pixels. \item[6.] Any individual that meets the closeness property is assigned one or more neighbours, forming a group represented with a distinct color coding in contrast to other people. \item[7.] The formation of groups indicates the violation of the practice of social distancing, which is quantified with the help of the following: \begin{itemize} \item Consider $n_g$ as the number of groups or clusters identified, and $n_p$ as the total number of people found in close proximity. \item $v_i = n_p/n_g$, where $v_i$ is the violation index. \end{itemize} \end{itemize} \section{Experiments and results} The above-discussed object detection models are fine-tuned for binary classification (person or not a person) with Inception v2 as a backbone network on an Nvidia GTX 1060 GPU, using the dataset acquired from the open image dataset (OID) repository~\cite{google} maintained by the Google open source community. The diverse images with the class label \enquote{Person} are downloaded via the OIDv4 toolkit~\cite{megapixels} along with the annotations. Fig.~\ref{fig7} shows sample images of the obtained dataset, consisting of 800 images obtained by manually filtering to contain only true samples. The dataset is then divided into training and testing sets in an 8:2 ratio. In order to make the testing robust, the testing set is also accompanied by frames of the surveillance footage of the Oxford town center~\cite{8844927}. Later, this footage is also utilized to simulate the overall approach for monitoring social distancing. \begin{figure} \centering \includegraphics[scale=0.24] {fig7} \caption{Data samples showing (a) true samples and (b) false samples of a \enquote{Person} class from the open image dataset.
} \label{fig7} \end{figure} In the case of Faster RCNN, the images are resized so that the shorter edge has $P$ pixels, with $P = 600$ and $P = 1024$ for low and high resolution, while in SSD and YOLO the images are scaled to a fixed dimension $P\times P$ with $P = 416$. During the training phase, the performance of the models is continuously monitored using the mAP along with the localization, classification and overall loss in the detection of the person, as indicated in Fig.~\ref{fig8}. Table~\ref{tab3} summarizes the results of each model obtained at the end of the training phase in terms of the training time (TT), number of iterations (NoI), mAP, and total loss (TL) value. It is observed that the Faster RCNN model achieved minimal loss with maximum mAP; however, it has the lowest FPS, which makes it unsuitable for real-time applications. Furthermore, as compared to SSD, YOLO v3 achieved better results with a balanced mAP, training time, and FPS score. The trained YOLO v3 model is then utilized for monitoring social distancing on the surveillance video. \begin{figure} \centering \includegraphics[scale=0.17] {fig8} \caption{Losses per iteration of the object detection models during the training phase on the OID validation set for detecting the person in an image. } \label{fig8} \end{figure} \section{Output} The proposed framework outputs (as shown in Fig.~\ref{fig9}) the processed frame with the identified people confined in bounding boxes, while also presenting the statistical analysis showing the total number of social groups displayed by the same color encoding and a violation index term computed as the ratio of the number of people to the number of groups. The frames shown in Fig.~\ref{fig9} display violation index values of 3, 2, 2, and 2.33. The frames with detected violations are recorded with the timestamp for future analysis.
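As an illustration of steps 3-7 of the workflow above, the pairwise \textit{L2} norm, the group formation, and the violation index can be sketched in NumPy as follows. The grouping strategy (union-find over the closeness graph) and the concrete threshold are assumptions for illustration; the text only specifies that close individuals are merged into color-coded groups and that the threshold lies in the ($90, 170$) pixel range:

```python
import numpy as np

def social_distance_groups(features, threshold=120.0):
    """Group people violating social distancing from (x, y, d) features.

    features: (N, 3) array with one (x, y, d) row per detected person.
    threshold: closeness sensitivity in pixels (illustrative value inside
    the (90, 170) range mentioned in the text).
    Returns (n_p, n_g, v_i): people in proximity, groups, violation index.
    """
    n = len(features)
    diff = features[:, None, :] - features[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))       # pairwise L2 norm matrix
    close = (dist < threshold) & ~np.eye(n, dtype=bool)

    # Union-find over the closeness graph to merge neighbours into groups
    # (one plausible realization; the paper does not fix a specific one).
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]          # path halving
            i = parent[i]
        return i

    for i, j in zip(*np.where(close)):
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj

    in_violation = close.any(axis=1)
    n_p = int(in_violation.sum())                  # people in close proximity
    n_g = len({find(i) for i in range(n) if in_violation[i]})
    v_i = n_p / n_g if n_g else 0.0                # violation index
    return n_p, n_g, v_i
```

Broadcasting builds the full distance matrix in one vectorized step, matching the "computationally efficient vectorized representation" referred to in the text.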
\begin{table}[] \centering \caption{Performance comparison of the object detection models.} \label{tab3} \begin{tabular}{|l|l|l|l|l|l|} \hline Model & TT (in sec.) & NoI & mAP & TL & FPS \\ \hline Faster RCNN & 9651 & 12135 & 0.969 & 0.02 & 3 \\ \hline SSD & 2124 & 1200 & 0.691 & 0.22 & 10 \\ \hline \textbf{YOLO v3} & \textbf{5659} & \textbf{7560} & \textbf{0.846} & \textbf{0.87} & \textbf{23} \\ \hline \end{tabular} \end{table} \begin{figure} \centering \includegraphics[scale=0.22] {fig9} \caption{Sample output of the proposed framework for monitoring social distancing on surveillance footage of Oxford Town Center. } \label{fig9} \end{figure} \section{Future scope and challenges} Since this application is intended to be used in any working environment, accuracy and precision are highly desired to serve the purpose. A higher number of false positives may cause discomfort and panic among the people being observed. There may also be genuine concerns about privacy and individual rights, which can be addressed with additional measures such as prior consent for such working environments, hiding a person's identity in general, and maintaining transparency about fair use within a limited set of stakeholders. \section{Conclusion} The article proposes an efficient real-time deep learning based framework to automate the process of monitoring social distancing via object detection and tracking approaches, where each individual is identified in real-time with the help of bounding boxes. The generated bounding boxes aid in identifying the clusters or groups of people satisfying the closeness property, computed with the help of a pairwise vectorized approach. The number of violations is estimated by computing the number of groups formed and the violation index, defined as the ratio of the number of people to the number of groups.
Extensive trials were conducted with popular state-of-the-art object detection models: Faster RCNN, SSD, and YOLO v3, where YOLO v3 demonstrated efficient performance with a balanced FPS and mAP score. Since this approach is highly sensitive to the spatial location of the camera, the same approach can be fine-tuned to better adjust to the corresponding field of view. \section*{Acknowledgment} The authors gratefully acknowledge the helpful comments and suggestions of colleagues. The authors are also indebted to the Interdisciplinary Cyber Physical Systems (ICPS) Programme, Department of Science and Technology (DST), Government of India (GoI) vide Reference No.244 for their financial support to carry out the background research which helped significantly in the implementation of the present research work. \bibliographystyle{IEEEtran}
By Charles Hyde, published on June 27, 2021

Keeping animal populations in check in the wild is critically important, as an imbalance can set off a chain reaction, causing multiple species to be affected, which in turn can affect the environment, and so on. The Tasmanian devil has had a tough time during the last 30 years, as the population has been widely impacted by a form of cancer called devil facial tumor disease (DFT). Several approaches have been taken to help save the species, but according to Wionews, a report from BirdLife Tasmania shows that some conservation efforts have led to devastation for entirely different species. Tasmanian devils were introduced to Maria Island, a tiny island east of Tasmania that has been a haven for small penguins called Eudyptula minor, ground-nesting birds that happen to be the smallest penguins in the world. The Tasmanian devils were brought in to try to establish a reserve population isolated from DFT. Sadly, the small penguins, who also have limited defenses, were too easy a target for the Tasmanian devils, and it looks like the devils have wiped out the breeding population of about 6,000 small penguins. This concerning trend has been observed since 2012, but a recent study showed that the penguins have completely disappeared. Dr Eric Woehler, the convenor of BirdLife Tasmania, shared with the Guardian: "Every time humans have deliberately or accidentally introduced mammals to oceanic islands, there's always been the same outcome… a catastrophic impact on one or more bird species. Losing 3,000 pairs of penguins from an island that is a national park that should be a refuge for this species basically is a major blow."

Little Penguins On Maria Island

Little penguins are found in Australia and New Zealand, two areas that have had the unfortunate experience of introduced species devastating local populations.
When possums were introduced to New Zealand in 1837 in order to establish a fur trade, they preyed on the local population including the kiwi and also competed for burrows with little penguins. The Tasmanian devils pose a threat to the little penguins that is greater than possums or domestic cats. Woehler said that the penguins are not the only ones feeling the negative impact of the devils' introduction to the island. He shared: "We're getting reports of geese trying to nest in trees to avoid devil predation. It's very clear that the devils have had a catastrophic ecological impact on the bird fauna on Maria Island."
# Peano kernel theorem

In numerical analysis, the Peano kernel theorem is a general result on error bounds for a wide class of numerical approximations (such as numerical quadratures), defined in terms of linear functionals. It is attributed to Giuseppe Peano.[1]

## Statement

Let $\mathcal{V}[a,b]$ be the space of all differentiable functions $f$ defined for $x \in (a,b)$ that are of bounded variation on $[a,b]$, and let $L$ be a linear functional on $\mathcal{V}[a,b]$. Assume that $f$ is $\nu+1$ times continuously differentiable and that $L$ annihilates all polynomials of degree $\leq \nu$, i.e.

$$Lp = 0, \qquad \forall p \in \mathbb{P}_\nu[x].$$

Suppose further that for any bivariate function $g(x,\theta)$ with $g(x,\cdot),\, g(\cdot,\theta) \in C^{\nu+1}[a,b]$, the following is valid:

$$L \int_a^b g(x,\theta)\, d\theta = \int_a^b L g(x,\theta)\, d\theta,$$

and define the Peano kernel of $L$ as

$$k(\theta) = L[(x-\theta)_+^\nu], \qquad \theta \in [a,b],$$

introducing the notation

$$(x-\theta)_+^\nu = \begin{cases} (x-\theta)^\nu, & x \geq \theta, \\ 0, & x \leq \theta. \end{cases}$$

The Peano kernel theorem then states that

$$Lf = \frac{1}{\nu!} \int_a^b k(\theta)\, f^{(\nu+1)}(\theta)\, d\theta,$$

provided $k \in \mathcal{V}[a,b]$.[1][2]

### Bounds

Several bounds on the value of $Lf$ follow from this result:

$$|Lf| \leq \frac{1}{\nu!} \|k\|_1 \|f^{(\nu+1)}\|_\infty, \qquad
|Lf| \leq \frac{1}{\nu!} \|k\|_\infty \|f^{(\nu+1)}\|_1, \qquad
|Lf| \leq \frac{1}{\nu!} \|k\|_2 \|f^{(\nu+1)}\|_2,$$

where $\|\cdot\|_1$, $\|\cdot\|_2$ and $\|\cdot\|_\infty$ are the taxicab, Euclidean and maximum norms respectively.[2]

## Application

In practice, the main application of the Peano kernel theorem is to bound the error of an approximation that is exact for all $f \in \mathbb{P}_\nu$. The theorem above follows from the Taylor polynomial for $f$ with integral remainder:

$$f(x) = f(a) + (x-a)f'(a) + \frac{(x-a)^2}{2}f''(a) + \cdots + \frac{(x-a)^\nu}{\nu!}f^{(\nu)}(a) + \frac{1}{\nu!} \int_a^x (x-\theta)^\nu f^{(\nu+1)}(\theta)\, d\theta,$$

defining $L(f)$ as the error of the approximation, using the linearity of $L$ together with exactness for $f \in \mathbb{P}_\nu$ to annihilate all but the final term on the right-hand side, and using the $(\cdot)_+$ notation to remove the $x$-dependence from the integral limits.[3]
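As a concrete worked example (standard in the numerical analysis literature, not part of the original article), consider the error functional of the trapezoidal rule on $[0,1]$, which is exact for linear polynomials, so $\nu = 1$:

```latex
\begin{aligned}
Lf &= \int_0^1 f(x)\,dx - \tfrac{1}{2}\bigl(f(0)+f(1)\bigr), \qquad \nu = 1,\\
k(\theta) &= L\bigl[(x-\theta)_+\bigr]
  = \int_\theta^1 (x-\theta)\,dx - \tfrac{1}{2}(1-\theta)
  = -\tfrac{1}{2}\,\theta(1-\theta),\\
|Lf| &\le \|f''\|_\infty \int_0^1 \tfrac{1}{2}\,\theta(1-\theta)\,d\theta
  = \tfrac{1}{12}\,\|f''\|_\infty,
\end{aligned}
```

recovering the classical trapezoidal-rule error bound via the first inequality of the Bounds section.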
Q: A place to learn OS X 10.5 I borrowed a MacBook. I'd like to find a place to learn the OS X 10.5 interface and keyboard shortcuts. I've been "a PC" forever. A: http://www.apple.com/support/switch101/ A: A dead-tree book might be a good place to start. I imagine Pogue's "Switching to Mac: The Missing Manual" is well-done and there are Leopard and Snow Leopard editions. A: http://www.lynda.com is a great resource for video tutorials. It is a for-pay site. They have a wealth of content, not just on the OS, but also on the iLife apps.
At the invitation of Isabelle Krasney, Veteran Outreach Specialist for the Orange County Veterans Employment Committee (OCVEC), VAMBOA CEO Debbie Gregory will be a Distinguished Guest Speaker at the July 24th OCVEC meeting. Members of OCVEC are committed to leadership excellence as Veterans' employment advocates by addressing Veterans' employment and training opportunities and concerns. Membership is open to Veterans' organizations, including County Veterans Service officers, Employers and Service providers. In addition, membership is open to all veterans and concerned individuals desiring to participate, assist, and promote Veterans employment activities. "I am honored to be invited to speak to these leaders in the Orange County veteran community and share with them the mission of VAMBOA," said Ms. Gregory. "As a result of their military training, Veteran Business Owners, Service Disabled Veteran Owned Businesses (SDVOB) and Military Business Owners are highly qualified to build successful businesses. Not only are small businesses the backbone of the U.S. economy, but Veterans hire other Veterans." VAMBOA, which has over 7,000 members, is wholly supported by corporate sponsors and does not charge any dues or fees to its members. Its "Vet Owned" seal symbolizes the talent, dedication, leadership and courage of these special Americans who currently serve or have served in our nation's Armed Forces. Debbie Gregory also founded the Military Connection web site, which focused on employment and was recently acquired. Additionally, Debbie spent over a decade in the recruitment industry and rose to partner in a retained search firm that focused on technology companies and openings. OCVEC, an all-volunteer non-profit organization, advocates for those who have served and addresses employment and training opportunities for them.
Following the successful acquisition of the Military Connection web site by Veterans Home Care, LLC, Debbie Gregory will now turn her full attention to VAMBOA, the Veterans and Military Business Owners Association. VAMBOA.org is a 501(c)(6) non-profit trade association with over 7,000 members nationally and more than 400,000 combined fans and followers on social media. "After serving their country, many U.S. military veterans start and run their own businesses," said Gregory. Although every successful veteran and military business owner is different, there are a number of qualities they have in common. As a rule, they're intelligent, resourceful, dedicated, hardworking, and committed to team success and completing the mission. Military veterans know when to lead, when to follow, and how to respond to a variety of situations and apply just the sort of leadership that circumstances demand for success. Corporations interested in a diverse supplier network are invited to contact VAMBOA to evaluate sponsorship opportunities. VAMBOA is the "go to association" for Veteran and Military Business Owners. VAMBOA is supported through corporate sponsorship and Veteran Business Owners are not charged any membership fees. Corporations interested in evaluating sponsorship opportunities and Military and Veteran Business Owners wanting to learn about membership may contact VAMBOA at info@vamboa.org.
The Veterans and Military Business Owners Association (VAMBOA) is proud to assist in securing participants for the study. The insights from this survey will help deliver best practices for providing realistic solutions for the issues and challenges these veterans face. NAVSO is offering incentives valued at more than $2000 to be awarded to more than sixty participants who complete the survey. At the end of the study, the research team will conduct a random drawing and determine the winners: one $500 winner, one $250 winner, two $100 winners, ten $50 winners and fifty $20 gift cards. All personal identifiable information will be masked and confidentiality will be strictly enforced. Those wishing to participate in the survey can do so at https://purdue.ca1.qualtrics.com/jfe/form/SV_6WnSXOMSn3fw5lH?utm_source=VAMBOA&utm_medium=Email%20%26%20Social. MilitaryConnection.com, one of the most comprehensive directories of military and veteran resources on the web, and non-profit trade association VAMBOA, the Veterans and Military Business Owners Association are proud to announce that they have joined forces to host the "America Salutes You" VIP reception. The VIP reception will follow the "America Salutes You" live concert on November 12th at the Rosemont Theater in Chicago, Il. Performers include pop legend Cyndi Lauper, singer/songwriter extraordinaire Gavin McGraw, country music legend Wanda Jackson, rapper Hoodie Allen, Gospel Grammy winner CeCe Winans, world-renowned tenor Anthony Kearns, bluegrass legend Ricky Skaggs, rising country stars Tegan Marie and Savannah Maddison, and many others. Tickets can be purchased at http://www.ticketmaster.com/event/0400515AA4A51339. Text and online fundraising will take place during the concert's broadcast on television, radio and via stream. 
Money raised by the concert will benefit several non-profits that serve military, veterans and their families including: the Bob Woodruff Foundation, Easterseals Dixon Center for Military and Veterans Services, Give an Hour, Honor Flight Network, Illinois Joining Forces, Joining Forces California, Snowball Express, TAPS and ThanksUSA. Sponsorship opportunities are available, with all funds supporting the concert production, partner charities and foundations. "America Salutes You" will stream live on LiveStream, Military.com, TV Worldwide and will be offered to radio stations nationally by Westwood One. The nationally televised concert will air over Thanksgiving weekend on Tribune Broadcasting and Sinclair Broadcast Group stations. To find your local schedule, visit http://americasalutesyou.org/viewlisten/. This concert, which will air nationally, benefits those who serve in the U.S. military, and will take place on Saturday, November 12th 8pm Central time at the Rosemont Theater in Chicago. Performers include pop legend Cyndi Lauper, country music legend Wanda Jackson, rapper Hoodie Allen, Gospel Grammy winner CeCe Winans, world-renowned tenor Anthony Kearns, bluegrass legend Ricky Skaggs, rising country star Tegan Marie and many others. Text and online fundraising will take place during the concert's broadcast on television, radio and via stream. MilitaryConnection.com offers one of the most comprehensive directories of military and Veteran resources on the web, focusing on employment, education and more. Military Connection has been named a Top 100 Employment Web Site by the International Association of Employment Web Sites for five years in a row. It is that focus on employment that garnered MilitaryConnection.com one of the prestigious Weddle's Users Choice Awards for 2015. Military Connection features thousands of pages of resources and information. 
There is something for everyone including, but not limited to a Job Board and Virtual Job Fair, comprehensive Post 9/11 GI Bill education information with a directory of thousands of scholarships and a Veteran school directory, news, press releases, special events, pay charts, benefits, service directories, commissaries and exchanges, golf courses and more. Military Connection has the honor of working with incredible non-profits to improve the quality of life for those who serve. When the next tour is back home, it's on Military Connection, the Go To Site. VAMBOA, a 501(c) 6 non-profit organization, has been providing its members with knowledge of government provisions that help service-disabled veteran business owners, Veteran business owners and military business owners since 2010. VAMBOA's mission is to help drive the success of these veteran business owners. VAMBOA also connects it members to contacts within large corporations and government agencies who can mentor members, and in some cases, can even directly provide members with government contracts and vending contracts within large corporations. Membership in VAMBOA is free. Military Connection Is Getting the Word Out to Employers About Tax Credits Available for Hiring Veterans. MilitaryConnection.com, the go-to site of the military and Veteran communities, is pleased to let employers know that the tax credit for hiring Veterans has been extended until 2019. The tax credit, known as the Work Opportunity Tax Credit (WOTC), is available to employers who hire veterans. The WOTC is a workforce program that incentivizes workplace diversity and facilitates access to good jobs for American workers, including those who have served. With a focus on employment, MilitaryConnection.com is in a unique position to see the benefits of the tax credit from both sides. 
Employers who hire veterans not only receive the valuable monetary perks from the federal tax credit that can offset the cost of recruitment, advertising and training, they also capture employees with a multitude of skills and an exceptional work ethic. The very nature of being in the military has given them attributes unlike those that civilians gain through most other types of employment. Additionally, most veteran employees have proven that they can work individually, as part of a team, or in a leadership role. The WOTC reduces an employer's cost of doing business and requires little paperwork. The WOTC can reduce an employer's federal income tax liability by as much as $9,600 per veteran hired, and there is no limit on the number of veterans that an employer can hire. To read more about the benefits of the tax credit, visit http://www.militaryconnection.com/veteran-hiring-tax-credits. MilitaryConnection.com has been named a Top 100 Employment Webs Site by the International Association of Employment Web Sites for the prior five years. Additionally, in 2015, the website received the prestigious Users' Choice Award. MilitaryConnection.com takes great pride in using its significant reach to assist, provide resources and facilitate win/win partnerships with wonderful non-profits, associations and government agencies serving military and veterans. MilitaryConnection.com features thousands of pages of resources and information. There is something for everyone including, but not limited to: a Job Board and Virtual Job Fair, comprehensive Post 9/11 GI Bill education information, a directory of thousands of scholarships, a Veteran school directory, news, press releases, special events, pay charts, benefits, service directories, commissaries & exchanges, golf courses and more.
{ "redpajama_set_name": "RedPajamaC4" }
8,710
Home Entertainment Orton Talks Being a 15 Year WWE Veteran Orton Talks Being a 15 Year WWE Veteran Apr 29, 2017 at 10:30 AM PDT (Randy Orton visits at SiriusXM Studios) Tuesday marked 15 years since WWE Champion Randy Orton made his WWE debut with a win over Hardcore Holly on SmackDown. Orton spoke to the WWE website for a Q&A on his career. How does it feel? It feels like, "Where did the time go?" to be honest. I've been around a long time, and it seemed for the longest time like I was the young guy. Now, all of a sudden, I've got fans with beards telling me, "I used to watch you when I was a kid." So, I don't know what happened to all those years, man, but the little bit I do remember? It was definitely a fun ride. You've been The Legend Killer, The Viper and The Apex Predator. Do you find it difficult to reinvent yourself over the years? Not really, because I've always been kind of the same guy. Whether I was The Legend Killer, The Viper, The Apex Predator, nothing's really changed. When I look at [Superstars] who've had 10 different personas … it's amazing to me. These guys are very talented that they're able to do that. Would I be able to do that? I don't know. Maybe, maybe not. But I think the fact that I've never really had to change is a testament to what my persona is on the show. Whether you're sick of it or love it, you know what you're gonna get with me Previous articleMore Saddening News on Vader Next articleDoes Cody Have a New Contract in the Works?
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
3,635
Q: Bulk edit Wordpress Post tiltes with mysql or any other way I have a wordpress site with 3000+ posts and the titles have "-" in them like Big-cave-nature-hd-wallpaper-for-Backgrounds". I changed some titles of posts and removed "-" from the the titles but its very time consuming. I want to bulk replace that hyphen "-" from the titles of all the posts replacing it with a space. I tried search and replace plugin to replace "-" with a space in titles only but it replaced the "-" with a space in titles of posts as well as urls of the posts and all the posts stopped working. I want a way so that i can replace "-" with a space in post titles only not the urls or other fields of posts. Site: http://www.worldofdth.com/home Any help will be appriciated. Thanks A: You need to attach to your MySql database using phpMyAdmin or something similar, and issue the following query: update wp_posts set post_title = replace(post_title,'-',' ') where post_status = 'public' and post_type = 'post';
{ "redpajama_set_name": "RedPajamaStackExchange" }
6,053
package com.easync.usbsettool.core.events; public class DeviceAttachedEvent { }
{ "redpajama_set_name": "RedPajamaGithub" }
6,599
ACCEPTED #### According to International Plant Names Index #### Published in J. Conchyliol. , 59, 230. #### Original name null ### Remarks null
{ "redpajama_set_name": "RedPajamaGithub" }
7,847
using UnityEngine;

/// <summary>
/// Creates projectiles
/// </summary>
public class WeaponScript : MonoBehaviour
{
    //--------------------------------
    // 1 - Designer variables
    //--------------------------------

    /// <summary>
    /// Projectile prefab
    /// </summary>
    public Transform shotPrefab;

    /// <summary>
    /// Cooldown time between two shots
    /// </summary>
    public float shootingRate = 0.25f;

    //--------------------------------
    // 2 - Cooldown
    //--------------------------------

    private float shootCooldown;

    void Start()
    {
        shootCooldown = 0f;
    }

    void Update()
    {
        if (shootCooldown > 0)
        {
            shootCooldown -= Time.deltaTime;
        }
    }

    //--------------------------------
    // 3 - Shooting from another script
    //--------------------------------

    /// <summary>
    /// Creates a projectile if possible
    /// </summary>
    public void Attack(bool isEnemy)
    {
        if (CanAttack)
        {
            shootCooldown = shootingRate;

            // Create an object copied from the prefab
            var shotTransform = Instantiate(shotPrefab) as Transform;

            // Position
            shotTransform.position = transform.position;

            // Script properties
            ShotScript shot = shotTransform.gameObject.GetComponent<ShotScript>();
            if (shot != null)
            {
                shot.isEnemyShot = isEnemy;
            }

            // Grab the direction for the movement
            MoveScript move = shotTransform.gameObject.GetComponent<MoveScript>();
            if (move != null)
            {
                move.direction = this.transform.right; // here, right is the front of our object
            }
        }
    }

    /// <summary>
    /// Is the weapon ready to fire?
    /// </summary>
    public bool CanAttack
    {
        get
        {
            return shootCooldown <= 0f;
        }
    }
}
{ "redpajama_set_name": "RedPajamaGithub" }
1,315
\section{Introduction}\label{chap:intro} For more than thirty years, the study of moduli spaces of Higgs bundles has been a very active research area located at the crossroads of algebraic, complex and differential geometry with the theory of integrable systems and surface group representations. One major reason for the ongoing interest in these moduli spaces is their extremely rich geometry. They were introduced by Hitchin \cite{Hi87a} as examples of non-compact hyperkähler spaces. They are homeomorphic to moduli spaces of flat $G$-bundles on $X$ by the celebrated Non-Abelian Hodge Correspondence \cite{Hi87a,Do87,Si88,Co88}. And most importantly for the present work, they have a dense subset carrying the structure of an algebraically completely integrable system - the so-called Hitchin system \cite{Hi87b}. By definition, the Higgs bundle moduli space $\M_G$ on a Riemann surface $X$ associated to a complex reductive linear group $G$ is a moduli space of pairs $(E,\Phi)$. Here $E$ is a holomorphic $G$-vector bundle on $X$ and $\Phi$ is a $\g$-valued holomorphic one-form, called the Higgs field. $\M_G$ has a complex symplectic structure on its smooth locus and the Hitchin map \[ \H_G : \M_G \rightarrow B_G \] defines a proper, surjective, holomorphic map to a complex vector space $B_G$ of half the dimension of $\M_G$, referred to as the Hitchin base. Hitchin showed for the classical groups \cite{Hi87b}, and Scognamillo for all complex reductive groups \cite{Sc98}, that on a dense subset $B_G^\reg \subset B_G$ the fibres of the Hitchin map are torsors over abelian varieties. Hence, the preimage of the regular locus $B_G^\reg$ under the Hitchin map is an algebraically completely integrable system, nowadays called the Hitchin system. To identify the Hitchin fibres over the regular locus with abelian varieties one introduces spectral data. The Hitchin map applied to a Higgs bundle $(E,\Phi)$ computes the eigenvalues of the Higgs field $\Phi$.
These eigenvalues are encoded in the spectral curve $\Sigma$, a covering of the original Riemann surface $X$. The eigenspaces determine a line bundle on the spectral curve. For a point in the regular locus $B_G^\reg$, the spectral curve is smooth. In this case, the moduli spaces of eigen line bundles are the classical examples of abelian varieties, most importantly Jacobians and Prym varieties. The Hitchin fibration played a major role in two recent developments in the theory of Higgs bundle moduli spaces: Firstly, in the study of the asymptotics of the hyperkähler metric \cite{MSWW19} and secondly, in the Langlands duality of Higgs bundle moduli spaces \cite{DoPa12}. Both results were considered on the regular locus of the Hitchin map and it is an interesting question how they extend to the singular locus (see \cite{AEFS18}). In this paper, we take the first steps in this direction. \subsection*{Singular Hitchin fibres of \texorpdfstring{$\sl(2)$}{sl(2)}-type} We introduce and study the class of $\sl(2)$-type Hitchin fibres of the $\Sp(2n,\C)$- and $\SO(2n+1,\C)$-Hitchin system. This class of (singular) Hitchin fibres is distinguished by the singularities of the spectral curve, such that for $n=1$ all fibres are of $\sl(2)$-type. For $\SL(2,\C)$, the singular Hitchin fibres were studied in \cite{Sch98,GO13} using the Beauville-Narasimhan-Ramanan correspondence \cite{BNR89}. In \cite{Ho1}, the author developed a more direct approach introducing semi-abelian spectral data for singular fibres of the $\SL(2,\C)$-Hitchin system. These consist of an abelian torsor over the Prym variety of the normalised spectral curve and non-abelian coordinates parametrising local deformations of the Higgs bundle at the singularities of the spectral curve. For Hitchin fibres of $\sl(2)$-type, the spectral curve $\Sigma$ defines a two-sheeted covering over another Riemann surface $Y$.
The main result of this work identifies the $\Sp(2n,\C)$- and $\SO(2n+1,\C)$-Hitchin fibres of $\sl(2)$-type with $\SL(2,\C)$- respectively $\PGL(2,\C)$-Hitchin fibres of a moduli space of twisted Higgs bundles on $Y$ (Theorem \ref{Theo:isom:Hitchinfibres} and \ref{theo:so:biholo}). This allows us to extend the results of \cite{Ho1} to $\sl(2)$-type Hitchin fibres. \begin{theo}[Theorem \ref{theo:Sp(2n):strat}, \ref{theo:so(2n+1):fibreing}]\label{theo:intro1} Let $G=\Sp(2n,\C)$ or $G=\SO(2n+1,\C)$. Let $b \in B_G$ with irreducible and reduced spectral curve of $\sl(2)$-type. Then there exists a stratification \[ \H_G^{-1}(b) = \bigsqcup_{i \in I} \S_i \] by finitely many locally closed subsets $\S_i$, such that every stratum $\S_i$ is a finite-to-one covering of a $(\C^*)^{r_i} \times \C^{s_i}$-bundle over an abelian torsor $T(b)$. \end{theo} When the spectral curve $\Sigma$ is smooth, it is of $\sl(2)$-type. Then the stratification is trivial and this result gives a new approach to the identification of regular fibres of the symplectic and odd orthogonal Hitchin system with abelian torsors originally obtained in \cite{Hi07}. The abelian torsor parametrises the eigen line bundles of $(E,\Phi) \in \H^{-1}_G(b)$ and will be referred to as the abelian part of the spectral data. The $(\C^*)^{r_i} \times \C^{s_i}$-fibres, the non-abelian part of the spectral data, parametrise Hecke transformations of the Higgs bundle at the singularities of the spectral curve. The stratification of Theorem \ref{theo:intro1} contains a unique, open and dense stratum $\S_0 \subset \H^{-1}_G(b)$. This dense stratum is compactified by lower dimensional strata distinguished from $\S_0$ by a lower dimensional moduli space of Hecke parameters. For the unique closed stratum of lowest dimension, this parameter space is a point and hence this stratum is an abelian torsor. In the second part of \cite{Ho1}, it was studied how the fibres glue together to form the singular Hitchin fibre.
Let us describe the first degeneration in more detail. For $G=\SL(2,\C)$, the Hitchin base is the vector space of quadratic differentials $H^0(X,K_X^2)$. In this setting, the examples we want to consider are Hitchin fibres over a quadratic differential $q \in H^0(X,K_X^2)$ with a single zero of order $2$, such that all other zeros are simple. For $\Sp(2n,\C)$ and $\SO(2n+1,\C)$, there are singular Hitchin fibres like this for all $n \in \N$. In this example, we have two strata \[ \S_0: (r_0=1,s_0=0), \quad \S_1: (r_1=0,s_1=0). \] In Figure \ref{fig:singfibre2}, we sketched the situation by compressing the abelian part of the spectral data to a circle. On the left hand side, we see a sketch of the open and dense stratum $\S_0$, where the $\C^*$-fibres are depicted by little tunnels. We obtain the singular Hitchin fibre by gluing the two missing points of the $\C^*$-fibre to the abelian torsor in a twisted way. Indeed, the Higgs bundles corresponding to the points zero and infinity do not have the same eigen line bundle and hence do not correspond to the same point on the abelian torsor. In particular, the fibration over the abelian torsor does not extend to $\Hit^{-1}(q)$. This example can also be found in \cite{GO13,Hi19} for the $\SL(2,\C)$-case. More generally, we will give a global description of the first degenerations up to normalisation in Examples \ref{exam:sp(2n,C)_first_deg} and \ref{exam:so(2n+1,C)_first_deg}. \begin{figure}[h] \includegraphics[scale=0.21]{spectralzoom3} \caption{Twisted $\P^1$-bundle over an abelian torsor \label{fig:singfibre2}} \end{figure} \subsection*{Towards Langlands duality for singular Hitchin fibres} The Langlands duality of Higgs bundle moduli spaces is a reincarnation of mirror symmetry and its geometric interpretation in terms of integrable systems by the Strominger-Yau-Zaslow conjecture \cite{SYZ}.
For Hitchin systems, mirror symmetry is connected to another important duality in pure mathematics - the so-called Langlands duality. For an algebraic group $G$, there exists a Langlands dual group $G^L$, such that conjecturally the representation theory of $G$ is controlled by Galois representations into $G^L$. Starting from the work of Hausel and Thaddeus \cite{HaTh03} for $G=\SL(n,\C)$, $G^L=\PSL(n,\C)$ and Hitchin \cite{Hi07} for $G=\Sp(2n,\C)$, $G^L=\SO(2n+1,\C)$ and $G=G^L=\mathsf{G}_2$, Donagi and Pantev \cite{DoPa12} established the following formulation of Langlands duality of $G$-Hitchin systems for a complex semi-simple Lie group $G$: \begin{itemize} \item[i)] The Hitchin bases $B_G$ and $B_{G^L}$ are isomorphic and the isomorphism restricts to the regular loci $B_G^\reg$ and $B_{G^L}^\reg$. \item[ii)] The regular fibres over corresponding points $b \in B_G^\reg$ and $b'\in B_{G^L}^\reg$ are abelian torsors over dual abelian varieties. \end{itemize} Concerning Langlands duality for singular Hitchin fibres of $\sl(2)$-type, we have to take a closer look at the abelian part of the spectral data. For $G=\Sp(2n,\C)$, the spectral curve $\Sigma$ has an involutive Deck transformation $\sigma: \Sigma \rightarrow \Sigma$. \begin{wrapfigure}{r}{0.4\textwidth} \vspace{-0.5cm} \begin{tikzpicture}\matrix (m) [matrix of math nodes,row sep=2em,column sep=3em,minimum width=2em] { & & \tilde{\Sigma} \\ \Sigma/\sigma & \Sigma & \\ & X & \\ }; \path[-stealth] (m-1-3) edge (m-2-2) edge (m-2-1) edge (m-3-2) (m-2-2) edge (m-2-1) edge (m-3-2) (m-2-1) edge (m-3-2); \end{tikzpicture} \caption{Commutative diagram of spectral curves \label{fig:spectralcurves}} \end{wrapfigure} The quotient defines a complex algebraic curve $\Sigma/\sigma$. Together with the normalised spectral curve $\tilde{\Sigma}$, we obtain the commutative diagram of spectral curves in Figure \ref{fig:spectralcurves}.
By definition the spectral curve $\Sigma$ is of $\sl(2)$-type if and only if $\Sigma/\sigma$ is smooth. In this case, there is an abelian variety associated to the 2-sheeted branched covering of Riemann surfaces $\tilde{\Sigma} \rightarrow \Sigma/\sigma$, the so-called Prym variety. The abelian part of the spectral data for $G=\Sp(2n,\C)$ is a torsor over this Prym variety. For $G= \SO(2n+1,\C)$, the abelian part of the spectral data is a union of torsors over a quotient of the Prym variety by the finite group $\Z_2^{2g}$, where $g$ is the genus of $X$. This quotient can be identified with the dual abelian variety. We obtain the following formulation of Langlands correspondence for singular Hitchin fibres of $\sl(2)$-type. \begin{coro}[Corollary \ref{coro:duality}] Let $b \in B_{\Sp(2n,\C)}=B_{\SO(2n+1,\C)}$ of $\sl(2)$-type, such that the spectral curve is irreducible and reduced. Then the Hitchin fibres $\H_{\Sp(2n,\C)}^{-1}(b)$ and $\H_{\SO(2n+1,\C)}^{-1}(b)$ are related as follows: \begin{itemize} \item[i)] The abelian parts of the spectral data are unions of torsors over dual abelian varieties. \item[ii)] The parameter spaces of Hecke transformations are isomorphic. \end{itemize} \end{coro} \subsection*{Limiting metrics for singular Hitchin fibres} Another recent development in the study of Higgs bundle moduli spaces is the analysis of the asymptotics of the hyperkähler metric. Evolving from an intriguing conjectural picture developed by Gaiotto, Moore and Neitzke \cite{GMN13}, it was shown that on the regular locus of the Hitchin map the asymptotics of the hyperkähler metric are described by a so-called semi-flat metric \cite{MSWW19,Fr18b,FMSW20}. This is a hyperkähler metric defined on any algebraically completely integrable system by the theory of special Kähler manifolds \cite{Fr99}. It does not extend over the singular locus, but Gaiotto, Moore and Neitzke suggest that it can be modified to define a hyperkähler metric on $\M_G$. 
Recent progress in this direction can be found in \cite{Tu19}. As a first step in analysing the asymptotics of the hyperkähler metric, Mazzeo, Swoboda, Weiss, Witt and Fredrickson studied limits of solutions to the Hitchin equation along rays to the ends of the moduli space over the regular locus \cite{MSWW16,Mo16,Fr18a}. It was shown in \cite{MSWW14,Fr18a} that these limiting metrics satisfy a decoupled version of the Hitchin equation and are completely determined by spectral data. In Theorem \ref{theo:limit:sl2type}, we will use the semi-abelian spectral data explained above to construct solutions to the decoupled Hitchin equation for $\sl(2)$-type fibres of the symplectic and odd orthogonal Hitchin system. We conjecture them to be limiting metrics. For $\SL(2,\C)$, this is a theorem by Mochizuki \cite{Mo16}. \subsection*{Reader's guide} The paper is structured into four sections. In Section \ref{sec:SP2nC}, we will introduce $\sl(2)$-type Hitchin fibres of the symplectic Hitchin system. We prove the identification of these Hitchin fibres with $\SL(2,\C)$-Hitchin fibres on $\Sigma/\sigma$ and give the parametrisation by semi-abelian spectral data using the results of \cite{Ho1}. In Section \ref{sec:SO}, we repeat these considerations for the odd orthogonal group. Summing up, we formulate the Langlands correspondence for $\Sp(2n,\C)$- and $\SO(2n+1,\C)$-Hitchin fibres of $\sl(2)$-type in Section \ref{sec:langlands}. Finally, in Section \ref{sec:lim}, we will construct solutions to the decoupled Hitchin equation and motivate why we conjecture these to be limiting configurations. \subsection*{Acknowledgement} This paper represents a substantial part of the author's PhD thesis. Thank you to Daniele Alessandrini for his great supervision and many useful comments and questions during the preparation of this work. Thank you to Anna Wienhard, Beatrice Pozzetti and Xuesen Na for enlightening conversations and their interest in this project.
This work was supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) [281869850 (RTG 2229)]; the Klaus Tschira Foundation; and the U.S. National Science Foundation [DMS 1107452, 1107263, 1107367 "RNMS: GEometric structures And Representation varieties" (the GEAR Network)]. \vspace*{0.5cm} \section{\texorpdfstring{$\sl(2)$}{sl(2)}-type fibres of symplectic Hitchin systems}\label{sec:SP2nC} \subsection{The \texorpdfstring{$\Sp(2n,\C)$}{Sp(2n,C)}-Hitchin system}\label{sec:sp(2n,C)-Hit} Let $X$ be a Riemann surface of genus $g \geq 2$ and let $M$ denote a holomorphic line bundle on $X$ with $\deg(M) > 0$. \begin{defi} An $M$-twisted $\Sp(2n,\C)$-Higgs bundle is a triple $(E,\Phi,\omega)$ of a \begin{itemize} \item[i)] holomorphic vector bundle $E$ of rank $2n$, \item[ii)] an anti-symmetric bilinear form $\omega \in H^0(X,\bigwedge ^2 E^\vee)$, such that $\omega^{\wedge n} \in H^0(X,\det(E^\vee))$ is non-vanishing, and \item[iii)] $\Phi \in H^0(X,\End(E) \otimes M)$, such that $\omega(\Phi \, \cdot, \cdot)=-\omega(\cdot,\Phi\, \cdot)$. \end{itemize} \end{defi} \begin{theo}[Simplified stability condition \cite{GGM09}] A $\Sp(2n,\C)$-Higgs bundle $(E,\Phi,\omega)$ is stable, if for all isotropic $\Phi$-invariant subbundles $0 \neq F \subsetneq E$ \[ \deg(F) < 0 . \] \end{theo} \noindent Let $\M_{\Sp(2n,\C)}(X,M)$ denote the moduli space of stable $M$-twisted $\Sp(2n,\C)$-Higgs bundles on $X$. This is a complex algebraic variety (see \cite{GGM09,Sch08}). For $M=K$, it is a complex symplectic manifold of dimension \[ (2g-2)(2n^2+n). \] Let $A \in \sp(2n,\C)$. The characteristic polynomial of $A$ is of the form \[ T^{2n}+a_2(A) T^{2n-2} + \dots + a_{2n}(A) \in \C[T]. 
\] The coefficients $(a_2,\dots,a_{2n})$ are homogeneous generators of $\C[\g]^G$ and the associated Hitchin map is given by \begin{align*} \H_{\Sp(2n,\C)}: \quad \M_{\Sp(2n,\C)}(X,M) &\rightarrow B_{2n}(X,M):=\bigoplus\limits_{i=1}^n H^0(X,M^{2i}), \\ (E,\Phi) \quad &\mapsto \quad (a_2(\Phi), \dots , a_{2n}(\Phi)). \end{align*} This is a proper, surjective, flat, holomorphic map \cite{Ni91,Si95II}. For $M=K$, the Hitchin map restricted to a dense subset $B^{\mathsf{reg}}_{2n} \subset B_{2n}$ defines an algebraically completely integrable system \cite{Hi87b,Hi07}. The characteristic equation of $(E,\Phi) \in \H_{\Sp(2n,\C)}^{-1}(a_2,\dots,a_{2n})$ is given by \[ \eta ^{2n} + a_2\eta^{2n-2} + \dots + a_{2n} =0. \] Let $p_M: M \rightarrow X$ be the bundle map; then $\eta$ can be interpreted as the tautological section $\eta: M \rightarrow p_M^*M$. The point-wise eigenvalues of the Higgs field form a complex analytic curve \[ \Sigma:=Z_{M}(\eta^{2n}+p_M^*a_2\eta^{2n-2}+\dots +p_M^*a_{2n}) \subset \mathsf{Tot}M. \] This is the so-called spectral curve. The projection $p_M$ restricts to a $2n$-sheeted branched analytic covering $\pi: \Sigma \rightarrow X$. Recall that in general the spectral curve is singular at the points where different sheets meet. Due to the specific type of characteristic equation, the spectral curve comes with an involutive automorphism $\sigma: \Sigma \rightarrow \Sigma$ reflecting in the zero section of $M$. For $M=K$, the regular locus $B_{2n}^{\mathsf{reg}}$ is the subset of the Hitchin base where the spectral curve $\Sigma$ is smooth. The fibres over $B_{2n}^{\mathsf{reg}}$ are torsors over the Prym variety \[ \Prym(\Sigma \rightarrow \Sigma/\sigma) \] (see \cite{Hi87b,Hi07}). We will reprove this result in Theorem \ref{theo:Sp(2n):strat}. The regular locus can be detected by the $\sp(2n,\C)$-discriminant.
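As a quick illustration (our own computation, not taken from the cited references), the smallest case $n=1$, where $\Sp(2,\C)=\SL(2,\C)$, makes the role of the discriminant explicit:

```latex
% Sketch of the smallest case n = 1, i.e. Sp(2,C) = SL(2,C).
% For A = diag(lambda, -lambda) in sp(2,C) the characteristic polynomial
% is T^2 + a_2(A) with a_2(A) = det(A) = -lambda^2, so the Hitchin map
% records a_2(Phi) = det(Phi) in H^0(X, M^2), and the spectral curve is
\[
  \Sigma = Z_M\bigl(\eta^2 + p_M^* a_2\bigr) \subset \mathsf{Tot}\,M .
\]
% The root system of sp(2,C) consists of the two roots +-2e_1, so the
% discriminant constructed below is (2e_1)(-2e_1) = -4e_1^2, which
% evaluates on Phi to 4 a_2(Phi): the spectral curve is smooth, i.e. a_2
% lies in the regular locus, exactly when a_2 has only simple zeros.
```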
Consider the representation of $\sp(2n,\C)$ \[ \left\{ A \in \Mat(2n \times 2n,\C) \mid A^\tr J_{2n} + J_{2n}A=0 \right\}, \text{ where } J_{2n}=\begin{pmatrix} 0 & \id_n \\ -\id_n & 0 \end{pmatrix}. \] A Cartan subalgebra is given by \[ \h= \left\{ H=\diag(h_1, \dots, h_n, -h_n, \dots, -h_1) \mid h_i \in \C \right\}. \] Define $e_i \in \h^\vee$ by $e_i(H)=h_i$. Then a root system is given by \[ \Delta=\{ \pm e_i \pm e_j \mid 1\leq i<j \leq n \} \cup \{ \pm 2e_i \mid 1 \leq i \leq n \}. \] The $\sp(2n,\C)$-discriminant is the invariant polynomial defined by the product over all roots \[ \disc_{\sp}:=\prod\limits_{\alpha \in \Delta} \alpha \in \C[\h]^W \cong \C[\g]^G. \] There are two types of roots differing by their length. The roots $\pm 2e_i$ have $\sqrt{2}$ times the length of the roots $\pm e_i \pm e_j$ (as depicted in the Dynkin diagram). The Weyl group $W$ preserves the inner product on $\h$ and hence the set of long/short roots. Therefore, we can define invariant polynomials in $\C[\g]^G$ by the product over the long/short roots. The product over the long roots $\prod_{i=1}^{n} -4e_i^2$ is (up to a scalar) the determinant function on $\h$. We refer to the product over the short roots as the reduced $\sp(2n,\C)$-discriminant \[ \disc_{\sp}^{\textsf{red}}:=\prod_{i <j } -(e_i\pm e_j)^2. \] We have \[ \disc_{\sp}=\det\disc_{\sp}^{\textsf{red}}. \] The discriminant of a Higgs bundle $(E,\Phi) \in \M_{\Sp(2n,\C)}(X,M)$ is the section \[ \disc_{\sp}(\Phi) \in H^0(X,M^{2n^2}). \] Being invariant polynomials, $\disc_{\sp}$ and $\disc_{\sp}^{\textsf{red}}$ factor through the Hitchin map. For $\underline{a} \in B_{2n}(X,M)$, we will write $\disc_{\sp}(\underline{a})$ and $\disc_{\sp}^{\textsf{red}}(\underline{a})$ for the (reduced) discriminant computed in this manner. \begin{lemm}\label{lem:sp:discr} If $\disc_{\sp}(\underline{a}) \in H^0(X,M^{2n^2})$ has only simple zeros, then the spectral curve is smooth.
\end{lemm} \begin{proof} Let $x \in X$ be a simple zero of \[ \disc_{\sp}(\underline{a})= a_{2n}\disc_{\sp}^{\textsf{red}}(\underline{a}) \in H^0(X,M^{2n^2}). \] If $a_{2n}$ has a simple zero at $x$ and $\disc_{\sp}^{\textsf{red}}(\underline{a})(x) \neq 0$, then $\pi^{-1}(x) \subset \Sigma$ contains a simple ramification point on the zero section. If $\disc_{\sp}^{\textsf{red}}(\underline{a})$ has a simple zero at $x$ and $a_{2n}(x) \neq 0$, then $\pi^{-1}(x) \subset \Sigma$ contains two simple ramification points $0 \neq \lambda,-\lambda \in M_x$. Hence, the spectral curve is smooth. \end{proof} \begin{exam}[$\Sp(4,\C)$]\label{exam:disc:sp(4,C)} For $(a_2,a_4) \in B_4(X,M)$, the $\sp(4,\C)$-discriminant is given by \[ \disc_{\sp}(a_2,a_4)=a_4(a_2^2-4a_4). \] If instead we compute the discriminant of the characteristic polynomial - the $\sl(4,\C)$-discriminant - we obtain \[ \disc_{\sl(4,\C)}(a_2,a_4)=a_4(a_2^2-4a_4)^2. \] This expression has higher order zeros for all $(a_2,a_4) \in B_4(X,M)$. Hence, the $\sl(4,\C)$-discriminant cannot detect the regular locus of the $\Sp(4,\C)$-Hitchin map. \end{exam} \noindent \textbf{Notation} In the following we will often consider a branched covering of Riemann surfaces $p: Y \rightarrow X$. To avoid confusion, we will refer to points in $Y$ where different sheets meet, or equivalently zeros of $\del p$, as ramification points and to the images of these points under $p$ as branch points. We denote by $R=\div(\partial p)\in \Div(Y)$ the ramification divisor and refer to its coefficient $R_y$ at a ramification point $y \in Y$ as the ramification index. $B:=\Nm(R) \in \Div(X)$ is referred to as the branch divisor. \vspace*{0.5cm} \subsection{\texorpdfstring{$\sl(2)$}{sl(2)}-type spectral curves}\label{sec:sl(2)-type} In this subsection, we will define the class of $\sl(2)$-type fibres of the $\Sp(2n,\C)$-Hitchin map.
These Hitchin fibres are distinguished by the singularities of the spectral curve, such that for $G=\SL(2,\C)$ all Hitchin fibres are of $\sl(2)$-type.\\ \noindent \begin{minipage}[c]{0.7\textwidth} Let $\underline{a} \in B_{2n}(X,M)$, let $\Sigma \subset \mathrm{Tot}(M)$ be the associated spectral curve and $\sigma$ the involutive biholomorphism reflecting in the zero section of $M$. Being the zero locus of a polynomial with coefficients in a line bundle on a Riemann surface, the spectral curve $\Sigma$ is algebraic. The involution $\sigma$ defines an algebraic $\Z_2$-action on $\Sigma$. We will construct its quotient in the algebraic category. A geometric quotient by this action is given by \[ \pi_2: \Sigma \rightarrow \Sigma/\sigma:= \mathsf{Spec}(\O_\Sigma^{\sigma}), \] where $\O_\Sigma^\sigma$ denotes the sheaf of $\sigma$-invariant regular functions on $\Sigma$. As $\pi$ is invariant under the $\Z_2$-action, we obtain the commutative diagram on the right of this paragraph. \end{minipage} \begin{minipage}[c]{0.3\textwidth} \begin{tikzpicture}\matrix (m) [matrix of math nodes,row sep=3em,column sep=4em,minimum width=2em] { & \Sigma \\ \Sigma/\sigma & \\ & X \\ }; \path[-stealth] (m-1-2) edge node [above] {$\pi_2$ } (m-2-1) edge node [right] {$\pi$} (m-3-2) (m-2-1) edge node [left] {$\pi_n$}(m-3-2); \end{tikzpicture} \end{minipage} \begin{defi} An element $\underline{a} \in B_{2n}(X,M)$ is called of $\sl(2)$-type if $\Sigma/\sigma$ is smooth. In this case, $\H_{\Sp(2n,\C)}^{-1}(\underline{a})$ is called an $\sl(2)$-type Hitchin fibre. An $\Sp(2n,\C)$-Higgs bundle is called of $\sl(2)$-type if it is contained in an $\sl(2)$-type Hitchin fibre. \end{defi} \begin{exam} \begin{itemize} \item[i)] Let $n=1$. Then $X\cong \Sigma/\sigma$ is smooth for all $a_2 \in H^0(X,M^2)$ and hence all Hitchin fibres are of $\sl(2)$-type. \item[ii)] A regular point $\underline{a} \in B_{2n}^{\mathsf{reg}}(X,M)$ is of $\sl(2)$-type.
In this case, $\Sigma$ is smooth and so is $\Sigma/\sigma$. The fibres are isomorphic to $\Prym( \Sigma \rightarrow \Sigma/\sigma)$, which in turn determines a regular Hitchin fibre of the $\pi_n^*K$-twisted $\SL(2,\C)$-Hitchin system on $\Sigma/\sigma$. \item[iii)] Consider $n=2$ and $(a_2,a_4) \in B_{4}(X,M)$, such that $\Sigma$ is smooth except for one point $p \in \Sigma$ on the zero section. Assume that the spectral curve is locally at $p$ isomorphic to $Z(y^2-z^2) \subset \C^2$ with $\sigma: \C^2 \rightarrow \C^2, (y,z) \mapsto (-y,z)$. Locally, the quotient $\Sigma/\sigma$ is isomorphic to the affine curve $\mathsf{Spec}((\C[y,z]/(y^2-z^2))^\sigma)$. There is an isomorphism \[ \left(\C[y,z]/(y^2-z^2)\right)^\sigma \rightarrow \C[w], \quad y^2 \mapsto w^2, z \mapsto w \] and hence $\Sigma/\sigma$ is smooth at $p$. In conclusion, $(a_2,a_4) \in B_{4}\setminus B_4^{\mathsf{reg}}$ is of $\sl(2)$-type. \end{itemize} \end{exam} \begin{prop}\label{prop:sp:descr:sl(2)-type_spectral_curve} A point $\underline{a} \in B_{2n}(X,M)$ is of $\sl(2)$-type if and only if all singular points of $\Sigma$ lie on the zero section of $M \rightarrow X$ and only two sheets meet in the singular points. In particular, all singular points of $\Sigma$ are of type $A_k$, $k \geq 1$, i.e.\ higher nodes and cusps. If $\disc_{\sp}^{\red}(\underline{a}) \in H^0(X,M^{2n(n-1)})$ has only simple zeros and $Z(a_{2n-2}) \cap Z(a_{2n})= \varnothing$, then $\underline{a}=(a_2, \dots, a_{2n}) \in B_{2n}(X,M)$ is of $\sl(2)$-type. \end{prop} \begin{proof} If $\underline{a} \in B_{2n}(X,M)$ is of $\sl(2)$-type, there cannot be any singular points away from the zero section of $M$. Otherwise $\Sigma/\sigma$ would be singular, too. Let $y \in \Sigma$ be a singular point on the zero section. Choose a trivialization $M \rest_U \cong U \times \C$ over a coordinate neighbourhood $(U,z)$ centred at $\pi(y)$ and let $(z,\lambda)$ be the induced coordinate on $M$.
Then $\Sigma$ is locally given by the equation \[ q(z,\lambda):= \lambda^{2n}+\lambda^{2n-2}a_2(z) + \dots + a_{2n}(z)=0 \] with the involution given by $\sigma: (z,\lambda) \mapsto (z,-\lambda)$. Because $y=(0,0)$ is a singular point, we have \[ \frac{\del}{\del z} \rest_{(z,\lambda)=(0,0)} \, q= \frac{\del}{\del \lambda} \rest_{(z,\lambda)=(0,0)} \, q =0. \] Hence, $\frac{\del}{\del z} \rest_{z=0} \, a_{2n}=0$, i.e.\ $a_{2n}$ has a higher order zero at $z=0$. Now, $\Sigma/\sigma$ is locally given by the equation \[ q^\sigma(\eta,z)=\eta^{n}+\eta^{n-1}a_2(z) + \dots + a_{2n}(z)=0 \] and smooth at $(0,0)$ by assumption. Therefore, \[ 0 \neq \frac{\del}{\del \eta} \rest_{(z,\eta)=(0,0)} \, q^\sigma = a_{2n-2}(0). \] In particular, $\lambda=0$ is a zero of $q(0,\lambda)$ of multiplicity $2$ and hence only two sheets meet in the singular point. Conversely, if a singular point $p$ lies on the zero section and two sheets of the covering $\pi$ meet there, then $\Sigma$ is locally given by a polynomial equation of the form $y^2-z^k=0$. Let $R=\C[y,z]/(y^2-z^k)$. The ring of invariant functions $R^\sigma$ is generated by $y^2$ and $z$. In particular, \[ R^\sigma \rightarrow \C[z], \quad y^2 \mapsto z^k, z \mapsto z \] defines an isomorphism of coordinate rings. Hence, $\mathsf{Spec}(R^\sigma) \cong \C$ and the quotient is smooth. The discriminant condition implies that, away from the zero section, the only points where different sheets meet are smooth ramification points of ramification index 1. Furthermore, $Z(a_{2n-2}) \cap Z(a_{2n})= \varnothing$ implies that only two sheets meet at the zero section, in particular at the singular points. Hence, the spectral curve is of $\sl(2)$-type by the first criterion. \end{proof} \begin{rema} Nevertheless, there can be smooth ramification points of $\pi: \Sigma \rightarrow X$ of higher order on the zero section of $M$ for an $\sl(2)$-type spectral curve $\Sigma$.
For $n=2$, an example is the spectral curve defined by $(0,a_4) \in B_{4}(X,M)$ with $a_4$ having simple zeros. \end{rema} \begin{rema} An irreducible algebraic/analytic subset $Z \subset \C^n$ is a $C^1$-manifold in a neighbourhood of a point $p$ if and only if $Z$ is locally given by algebraic/analytic equations \[ F_1(x_1,\dots,x_n)=0, \ \dots, \ F_k(x_1,\dots, x_n)=0, \] such that $D(F_1,\dots,F_k)$ has maximal rank at $p$. The backwards implication follows from the implicit function theorem. For the converse see \cite[page 13]{Mi68}. \end{rema} \begin{prop}\label{prop:canon:red:spec} Let $p: M^2 \rightarrow X$ be the bundle map and $\eta: M^2 \rightarrow p^*M^2$ the tautological section. Let $(a_2,\dots,a_{2n}) \in B_{2n}(X,M)$ be of $\sl(2)$-type. The reduced spectral curve $\Sigma/\sigma$ is the zero divisor of \[ \eta^n+a_2 \eta^{n-1} + \dots + a_{2n-2} \eta + a_{2n} \in H^0(M^2,p^*M^{2n}). \] In particular, $K_{\Sigma/\sigma} \cong \pi_n^* \left(M^{2n-2} \otimes K_X \right)$ and $\O(R) \cong \pi_n^*M^{2n-2}$, where $R \in \Div(\Sigma/\sigma)$ is the ramification divisor of $\pi_n: \Sigma/\sigma \rightarrow X$. \end{prop} \begin{proof} The first assertion is clear from the proof of the previous proposition. It is easy to see that $K_{M^2} \cong p_{M^2}^*(K_X \otimes M^{-2})$ and hence by the adjunction formula \[ K_{\Sigma/\sigma} = \left( K_{M^2} \otimes p_{M^2}^*M^{2n}\right) \rest_{\Sigma/\sigma} = \pi_n^* \left(M^{2n-2} \otimes K_X \right). \] The last assertion follows as $\O(R)=K_{\Sigma/\sigma}\otimes \pi_n^*K_X^{-1}$.
\end{proof} \begin{wrapfigure}{r}{0.4\textwidth} \vspace{-0.5cm} \begin{tikzpicture}\matrix (m) [matrix of math nodes,row sep=2em,column sep=3em,minimum width=2em] { & & \tilde{\Sigma} \\ & & \\ \Sigma/\sigma & \Sigma & \\ & & \\ & X & \\ }; \path[-stealth] (m-1-3) edge (m-3-2) edge node [above] {$\tilde{\pi}_2$ } (m-3-1) edge node [right] {$\tilde{\pi}$} (m-5-2) (m-3-2) edge node [above] {$\pi_2$ } (m-3-1) edge node [right] {$\pi$} (m-5-2) (m-3-1) edge node [left] {$\pi_n$}(m-5-2); \end{tikzpicture} \caption{Spectral curves \label{fig2}} \end{wrapfigure} In the subsequent analysis of $\sl(2)$-type Hitchin fibres another version of the spectral curve plays an important role. We can naturally associate a smooth curve $\tilde{\Sigma}$ to the singular spectral curve $\Sigma$ by normalisation. It can be defined as the unique extension of the covering $\pi\rest_{\Sigma^\times}: \Sigma^\times \rightarrow X^\times$ to a holomorphic covering of Riemann surfaces. Here $\cdot^\times$ refers to the complement of ramification resp. branch points. If $\Sigma/\sigma$ is smooth, it can be defined in the same way as the extension of the covering of Riemann surfaces $\pi_2\rest_{\Sigma^\times}: \Sigma^\times \rightarrow (\Sigma/\sigma)^\times$. Intrinsically, it is the analytic curve $\tilde{\Sigma}$ associated to the integral closure of the structure sheaf. We obtain the commutative diagram in Figure \ref{fig2}. For $a_{2n} \in H^0(X,M^{2n})$, let \begin{align*} n_{\odd}:=n_{\odd}(a_{2n}):=\# \{ x \in Z(a_{2n}) \mid x \text{ zero of odd order} \}. \end{align*} \begin{lemm} Let $(a_2,\dots,a_{2n}) \in B_{2n}(X,M)$ be of $\sl(2)$-type. Then the genus of $\Sigma/\sigma$ is given by \[ g(\Sigma/\sigma)=n(g-1)+(n^2-n)\deg(M)+1. \] The genus of the normalised spectral curve is \[ g(\tilde{\Sigma})=2n(g-1)+2(n^2-n)\deg(M) + \tfrac12 n_{\odd} +1. \] If $M=K$, we have \[ g(\Sigma/\sigma)=(2n^2-n)(g-1)+1 \] and \[ g(\tilde{\Sigma})=(4n^2-2n)(g-1) + \tfrac12 n_{\odd} +1. 
\] \end{lemm} \begin{proof} This is immediate from Proposition \ref{prop:canon:red:spec} and the Riemann-Hurwitz formula. \end{proof} \subsection{\texorpdfstring{$\sl(2)$}{sl(2)}-type Hitchin fibres are fibres of an \texorpdfstring{$\SL(2,\C)$}{SL(2,C)}-Hitchin map}\label{ssec:sp:fibreident} In this subsection, we prove the main theorem in the $\Sp(2n,\C)$-case identifying the $\sl(2)$-type Hitchin fibres with fibres of an $\SL(2,\C)$-Hitchin system on the spectral curve $\Sigma/\sigma$. \begin{prop}\label{prop:pushforward:sp2nC} Let $p: Y \rightarrow X$ be an $s:1$ covering of Riemann surfaces. Fix a square root $\O(R)^{\frac12}$ of the line bundle defined by the ramification divisor $R\in \Div(Y)$. Let $(E, \Phi) \in \M_{\SL(2,\C)}(Y,p^*M)$. Then the pushforward $(p_*(E\otimes \O(R)^{\frac12}),p_*\Phi)$ defines an $M$-twisted $\Sp(2s,\C)$-Higgs bundle on $X$. Recall that the ramification divisor $R$ has even degree by the Riemann-Hurwitz formula. \end{prop} \begin{proof} Let $E':=E\otimes \O(R)^{\frac12}$. The pushforward $p_*E'$ is locally free and \[ p_*\Phi: p_*E' \rightarrow p_*(E' \otimes p^*M)=p_*E'\otimes M \] defines an $M$-twisted Higgs field on $p_*E'$. The symplectic form $\omega \in H^0(Y,\bigwedge^2E^\vee)$ induces a degenerate symplectic form $\omega'=\omega(\partial p)^{-1} \in H^0(Y,\bigwedge^2E^\vee(-R))$ on $E'$. Let $U \subset X$ be trivially covered, such that $E'\rest_{p^{-1}(U)}$ is trivial. Hence $p^{-1}(U)= V_1 \sqcup \dots \sqcup V_s$. Let $s_{ij}$ with $i=1,2; j=1,\dots, s$ be symplectic frames of $E' \rest_{V_j}$, i.e.\ \[ \omega'\rest_{V_j}= \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} \] with respect to $s_{1j},s_{2j}$. Then the induced symplectic form on $p_*(E')\rest_{U}$ is given by \[ p_* \omega' \rest_{U}= \left(\begin{array}{cc|c|cc} 0 & 1 &&& \\ -1 & 0 &&& \\ \hline & & \ddots & & \\ \hline &&& 0 & 1 \\ &&& -1 & 0 \end{array} \right) \] with respect to the frame $s_{ij}$.
This defines a symplectic form $p_*\omega'$ on $p_*E' \rest_{X^\times}$, where $X^\times= X \setminus p(\supp R)$. Obviously, $p_*\omega'( p_*\Phi \, \cdot, \cdot)=-p_*\omega'(\cdot, p_*\Phi \, \cdot)$. To extend the symplectic form over the branch points, we use a description of the algebraic pushforward by local $\Z_k$-invariant bundles at the corresponding ramification point. Let $y \in Y$ be a ramification point of order $k$. Choose coordinate neighbourhoods $(V,z)$ centred at $y$ and $(U,w)$ centred at $p(y)$, such that the projection map is given by $p: z \mapsto z^k$. Let $\xi$ be a primitive $k$-th root of unity. Then $\tau: V \rightarrow V,\ z \mapsto \xi z$ induces a local $\Z_k$-action interchanging the sheets. Consider the local holomorphic $\Z_k$-vector bundle \[ F:= E'\rest_V \oplus \tau^*E'\rest_V \oplus \dots \oplus (\tau^{k-1})^*E'\rest_V. \] Let $s_1,s_2$ be a symplectic frame of $E' \rest_V$, then \begin{align*} s_{ij}:=\tfrac{1}{k} (s_i + \xi^{j}\tau^*s_i + \xi^{2j}(\tau^2)^*s_i + \dots + \xi^{(k-1)j}(\tau^{k-1})^*s_i) \end{align*} for $i \in \{ 1,2\}$ and $0 \leq j \leq k-1$ define a frame of $F$, such that the $\Z_k$-action is given by \[ \diag(1, 1, \xi, \xi, \dots, \xi^{k-1}, \xi^{k-1}). \] The induced degenerate symplectic form $\Omega=\omega' + \tau^* \omega' + \dots +(\tau^{k-1})^*\omega'$ is given by \[ \Omega(s_{1l},s_{2m})=\left\{ \begin{array}{ll} z^{-k+1} & \text{for } l+m=k-1 \\ 0 & \text{otherwise}. \end{array} \right. \] We obtain a local $\Z_k$-invariant holomorphic vector bundle $\hat{F}$ descending to $p(V)$ as a Hecke transformation \[ 0 \rightarrow \hat{F} \rightarrow F \rightarrow \bigoplus\limits_{i=1}^{k-1} (\O_y/z^{i}\O_y)^2 \rightarrow 0 \] introducing the new transition function \[ \psi_{01}=\diag(1,1,z,z, \dots, z^{k-1},z^{k-1}) \] with respect to the frame $s_{ij}$.
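To illustrate the construction, consider the simplest case $k=2$, so that $\xi=-1$. Then the frame reduces to
\[ s_{i0}=\tfrac12\left(s_i + \tau^*s_i\right), \qquad s_{i1}=\tfrac12\left(s_i - \tau^*s_i\right), \]
the $\Z_2$-action is given by $\diag(1,1,-1,-1)$, and the only non-vanishing pairings are
\[ \Omega(s_{10},s_{21})=\Omega(s_{11},s_{20})=z^{-1}, \]
so the Hecke transformation with transition function $\psi_{01}=\diag(1,1,z,z)$ absorbs exactly this simple pole.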
The Hecke transformed Higgs bundle is $\Z_k$-invariant and descends to $p(V)$, yielding a local frame of the pushforward $p_*(E',\Phi)$. The induced symplectic form is given by \[ \hat{\Omega}=(\psi_{01}^*\Omega)(\hat{s}_{1l},\hat{s}_{2m})=\left\{ \begin{array}{ll} 1 & \text{for } l+m=k-1 \\ 0 & \text{otherwise}, \end{array} \right. \] where $\hat{s}_{ij}$ denotes the induced frame of $\hat{F}$ at $y$. Hence, $\hat{\Omega}$ descends to a non-degenerate symplectic form on $p_*E'$. Again, it is clear that the induced Higgs field $p_*\Phi$ is anti-symmetric with respect to the symplectic form. \end{proof} In the same way one proves: \begin{prop}\label{prop:push:pairing} Let $\pi: Y \rightarrow X$ be a branched covering of Riemann surfaces. Let $E,F$ be holomorphic vector bundles on $Y$ and let $\beta: E \otimes F \rightarrow \C$ be a non-degenerate bilinear pairing. Fix a square root $\O(R)^{\frac12}$. Then there is an induced non-degenerate pairing \[ \pi_*(E \otimes \O(R)^{\frac12}) \otimes \pi_*(F \otimes \O(R)^{\frac12}) \rightarrow \C. \] \end{prop} Let $\underline{a} \in B_{2n}(X,M)$ be of $\sl(2)$-type. The spectral curve $\Sigma$ comes with a section $\lambda \in H^0(\Sigma,\pi^*M)$ solving the spectral equation. The product $\lambda \sigma^*(\lambda) \in H^0(\Sigma,\pi^*M^2)$ defines a $\sigma$-invariant section descending to $b_2 \in H^0(\Sigma/\sigma, \pi^*_n M^2)$. \begin{prop}\label{prop:pullback:toHit} Let $\underline{a} \in B_{2n}(X,M)$ be of $\sl(2)$-type and let $b_2 \in H^0(\Sigma/\sigma,\pi_n^*M^2)$ be the induced section. There is a holomorphic map \[ \H^{-1}_{\Sp(2n,\C)}(\underline{a}) \rightarrow \H^{-1}_{\SL(2,\C)}(b_2) \subset \M_{\SL(2,\C)}(\Sigma/\sigma,\pi_n^*M). \] \end{prop} \begin{proof} Let $(E,\Phi)\in \H^{-1}_{\Sp(2n,\C)}(\underline{a})$.
The pullback of the characteristic polynomial along $\pi_n: \Sigma/\sigma \rightarrow X$ \[ \lambda^{2n}+ \pi_n^*a_2\lambda^{2n-2} + \dots + \pi_n^*a_{2n} \] factors through $\lambda^2+b_2$ and hence defines a generalised locally free eigensheaf $E_2$ by \[ 0 \rightarrow E_2 \rightarrow \pi^*_n E \xrightarrow{\pi_n^*\Phi^2 +b_2 \id} \pi^*_n (E \otimes M^2) \rightarrow E_2 \otimes \pi^*_n M^{2n} \rightarrow 0. \] The dualized exact sequence tensored with $\pi_n^*M^2$ results in \[ 0 \rightarrow E_2^\vee \otimes \pi^*_n M^{2-2n} \rightarrow \pi^*_n E^\vee \xrightarrow{(\pi_n^*\Phi^2 +b_2 \id)^\vee} \pi^*_n (E^\vee \otimes M^2) \rightarrow E_2^\vee \otimes \pi_n^*M^2 \rightarrow 0. \] The symplectic form $\omega$ identifies $E$ with $E^\vee$, and by the anti-symmetry of the Higgs field the bundle map $\pi_n^*\Phi^2 +b_2 \id_{\pi_n^*E}$ is self-dual. Hence, there is an induced isomorphism $E_2 \cong E_2^\vee \otimes \pi^*_n M^{2-2n}$. In particular, $\omega$ restricts to a symplectic form $\omega_2$ on $E_2 \otimes \pi^*_n M^{n-1}$ and the induced Higgs field $\Phi_2$ on $E_2$ is anti-symmetric with respect to it. Hence, $(E_2,\Phi_2)$ is a $\pi_n^*M$-twisted $\SL(2,\C)$-Higgs bundle on $\Sigma/\sigma$. \end{proof} \begin{theo}\label{Theo:isom:Hitchinfibres} Let $\underline{a} \in B_{2n}(X,M)$ be of $\sl(2)$-type and let $b_2 \in H^0(\Sigma/\sigma,\pi_n^*M^2)$ be the induced section. The holomorphic map \[ \H^{-1}_{\Sp(2n,\C)}(\underline{a}) \rightarrow \H^{-1}_{\SL(2,\C)}(b_2) \] defined in Proposition \ref{prop:pullback:toHit} is a biholomorphism. Its inverse is given by Proposition \ref{prop:pushforward:sp2nC}. \end{theo} \begin{proof} We need to show that the holomorphic maps defined in Propositions \ref{prop:pushforward:sp2nC} and \ref{prop:pullback:toHit} are inverse to each other. Let $(E_2,\Phi_2) \in \H^{-1}(b_2)$.
By Proposition \ref{prop:pushforward:sp2nC}, $(\pi_{n*}(E_2\otimes \pi_n^*M^{n-1}),\pi_{n*}\Phi_2)$ defines an $\Sp(2n,\C)$-Higgs bundle on $X$ with spectral curve $\Sigma$. We have a natural map \[ E_2\otimes \pi_n^*M^{1-n} \rightarrow E_2\otimes \pi_n^*M^{n-1} \] by multiplying with the canonical section of $\O(R) \cong \pi_n^*M^{2n-2}$. This induces an inclusion \[ \iota: E_2\otimes \pi_n^*M^{1-n} \rightarrow \pi_n^*\pi_{n*}(E_2\otimes \pi_n^*M^{n-1}). \] It is clear by construction that $\im(\iota)=\Ker(\pi_n^*\pi_{n*}\Phi_2^2 +b_2 \id)$. For the converse, let $(E,\Phi) \in \H^{-1}_{\Sp(2n,\C)}(\underline{a})$ and denote by $(E_2,\Phi_2)$ the induced $\SL(2,\C)$-Higgs bundle on $\Sigma/\sigma$. It is clear that \[ \pi_{n*}(E_2\otimes \pi_n^*M^{n-1}, \Phi_2)\rest_{X^\times} \cong (E,\Phi)\rest_{X^\times}, \] where $X^\times=X\setminus \pi_n(\supp R)$. We are left with showing that this isomorphism extends over the branch points. Let $x \in X$ be a branch point. For simplicity of notation we assume that it corresponds to a ramification point $y \in \Sigma/\sigma$ of index $n-1$. Let $(V,z)$ resp. $(U,w)$ be coordinate neighbourhoods centred at $y$ resp. $x$, such that the covering is given by $\pi_n: V \rightarrow U, z \mapsto z^n$. We have a local automorphism $\tau: V \rightarrow V, z \mapsto \xi z$, where $\xi$ is a primitive $n$-th root of unity. This automorphism interchanging the sheets induces a local $\Z_n$-action on $\Sigma/\sigma$ at $y$. The pullback $\pi_n^*(E,\Phi) \rest_V$ is invariant under this $\Z_n$-action. As explained in the previous proof, we can obtain a frame of $\pi_n^*\pi_{n*}(E_2\otimes \pi_n^*M^{n-1},\Phi_2)\rest_{\pi_n^{-1}X^\times}$ at $y$ by extending \[ (F,\Psi)= (E',\Phi_2)\rest_{V^\times} \oplus \tau^*(E',\Phi_2)\rest_{V^\times} \oplus \dots \oplus (\tau^{n-1})^*(E',\Phi_2)\rest_{V^\times}, \quad \text{where } E':=E_2\otimes \pi_n^*M^{n-1}, \] to a $\tau$-invariant Higgs bundle at $y$. This is the unique way to do so. Hence, the isomorphism extends over the branch points.
\end{proof} \subsection{Semi-abelian spectral data for \texorpdfstring{$\sl(2)$}{sl(2)}-type Hitchin fibres}\label{sec:sp:semi-abel}\ \\ In this section, we apply the results of \cite{Ho1} to $\Sp(2n,\C)$-Hitchin fibres of $\sl(2)$-type. Let us start by defining the twisted Prym varieties, the abelian part of the spectral data. \begin{defi} Let $p: Y \rightarrow X$ be a branched covering of Riemann surfaces. Let $N \in \Pic(X)$. Define \[ \Prym_N(p):= \Nm_p^{-1}(N), \] where $\Nm_p: \Pic(Y) \rightarrow \Pic(X)$ is the norm map associated to $p$. \end{defi} \begin{lemm} $\Prym_N(p)$ is an abelian torsor over the Prym variety $\Prym_{\O_X}(p) = \Ker(\Nm_p)$, whenever it is non-empty. If $p: Y \rightarrow X$ is two-to-one and $\sigma: Y \rightarrow Y$ the involution interchanging the sheets, then \[ \Prym_N \subseteq \{ L \in \Pic(Y) \mid L \otimes \sigma^*L=p^*N\}. \] If $p$ is branched, this is an equality. \end{lemm} \begin{proof} The first statement is clear. For the second, see \cite{Ho1} Proposition 5.6. \end{proof} In the same vein as in \cite{Ho1} for $\SL(2,\C)$, the semi-abelian spectral data will define a stratification of the singular Hitchin fibres. The strata are indexed by so-called Higgs divisors. \begin{defi} Let $a_{2n} \in H^0(X,K^{2n})$. An associated Higgs divisor is a divisor $D \in \Div(X)$, such that $\supp (D) \subset Z(a_{2n})$ and for all $x \in Z(a_{2n})$ \[ 0 \leq D_x \leq \tfrac{1}{2} \ord_x(a_{2n}). \] \end{defi} \begin{lemm}\label{lemm:local_form+Higgs_divisor} Let $\underline{a} \in B_{2n}(X,K)$ be of $\sl(2)$-type. Let $(E,\Phi) \in \H^{-1}_{\Sp(2n,\C)}(\underline{a})$ and let $x \in Z(a_{2n}) \subset X$ be a zero of order $m$. There exists a coordinate neighbourhood $(U,z)$ centred at $x$ and a frame of $E\rest_U$, such that the Higgs field is given by \[ \Phi=\left( \begin{array}{cc|c} 0 & z^{l_x} &\\ z^{m-l_x} & 0 & \\ \hline & & \phi \end{array} \right) \d z \] for some $0 \leq l_x \leq \tfrac{m}{2}$.
Here $\phi$ has point-wise non-zero eigenvalues. The Higgs divisor of $(E,\Phi)$ is the divisor \[ D= D(E,\Phi)=\sum_{x \in Z(a_{2n})} l_x \cdot x. \] \end{lemm} \begin{proof} By assumption, $0$ is an eigenvalue of $\Phi_x$ of algebraic multiplicity two. Therefore, we can find a coordinate neighbourhood $(U,z)$ centred at $x$, such that $(E,\Phi)\rest_U=(E_0 \oplus E_1, \Phi_0 \oplus \Phi_1)$, where $E_0$ is of rank $2$ with $\Phi_0(x)$ nilpotent and $E_1$ is of rank $2n-2$ with $\Phi_1$ having non-zero eigenvalues. Moreover, by the anti-symmetry of $\Phi$ the symplectic form $\omega$ restricts to a symplectic form on $E_0$ and $E_1$. Now, we can bring $(E_0,\Phi_0)$ into the desired form by \cite{Ho1} Lemma 5.1. \end{proof} For $a_{2n} \in H^0(X,K^{2n})$, let \begin{align*} n_{\even}:=\# \{ x \in Z(a_{2n}) \mid x \text{ zero of even order} \} \\ n_{\odd}:=\# \{ x \in Z(a_{2n}) \mid x \text{ zero of odd order} \}. \end{align*} For a Higgs divisor $D \in \Div^+(X)$ associated to $a_{2n}$, let \[ n_{\diag}(D):=\#\{ x \in Z(a_{2n}) \mid D_x= \tfrac12 \ord_x(a_{2n}) \}. \] \begin{theo}\label{theo:Sp(2n):strat} Let $\underline{a} \in B_{2n}(X,K)$ be of $\sl(2)$-type, such that $\Sigma$ is irreducible and reduced. There is a stratification \[ \H_{\Sp(2n,\C)}^{-1}(\underline{a})= \bigsqcup_{D} \S_D \] by locally closed analytic sets $\S_D$ indexed by Higgs divisors associated to $a_{2n}$. If $a_{2n}$ has at least one zero of odd order, every stratum $\S_D$ is a holomorphic fiber bundle \[ (\C^*)^{r}\times (\C)^{s} \rightarrow \S_D \rightarrow \Prym_{\pi_n^*K^{-1}(D)}(\tilde{\pi}_2) \] with \[ r=n_{\even}-n_{\diag}(D), \quad r+s=2n(g-1) - \deg(D)-\frac{n_{\odd}}{2}. \] If all zeros of $a_{2n}$ have even order, $\tilde{\pi}_2$ is unbranched and each stratum $\S_D$ is a $2:1$-branched covering of a holomorphic $(\C^*)^{r}\times (\C)^{s}$-bundle over \[ \Prym_{I\pi_n^*K^{-1}(D)}(\tilde{\pi}_2), \] with $r,s$ given by the formulae above.
Here, $I$ denotes the unique non-trivial line bundle on $\Sigma/\sigma$, such that $\tilde{\pi}_2^*I=\O_{\tilde{\Sigma}}$. In both cases, \[ \dim \S_D = (2n^2+n)(g-1)-\deg(D). \] \end{theo} \begin{proof} This is a direct consequence of Theorem \ref{Theo:isom:Hitchinfibres} and the stratification result for singular fibres of $\SL(2,\C)$-Hitchin systems with irreducible and reduced spectral curve in \cite{Ho1} Theorem 5.13. The dimension of the twisted Prym varieties is given by \[ \dim \Prym(\tilde{\pi}_2)=g(\tilde{\Sigma}) - g(\Sigma/\sigma)=n(2n-1)(g-1)+ \frac{n_{\odd}}{2}. \] \end{proof} \begin{theo}\label{theo:Sp(2n):global_fibreing} Let $\underline{a}=(a_2, \dots, a_{2n}) \in B_{2n}(X,K)$ be of $\sl(2)$-type, such that $a_{2n}\in H^0(X,K^{2n})$ has only zeros of odd order. Then $\H_{\Sp(2n,\C)}^{-1}(\underline{a})$ is a holomorphic fiber bundle over $\Prym_{\pi_n^*K^{-1}}(\tilde{\pi}_2)$ with fibres given by the compact moduli of Hecke parameters described in \cite{Ho1} Section 7. \end{theo} \begin{proof} This is a direct consequence of \cite{Ho1} Theorem 7.14. \end{proof} Putting together \cite{Ho1} Corollaries 7.15 and 7.17 and Examples 8.3 and 8.5, we obtain: \begin{exam}\label{exam:sp(2n,C)_first_deg} Let $\underline{a}=(a_2, \dots, a_{2n}) \in B_{2n}(X,K)$ be of $\sl(2)$-type. Let $a_{2n}$ have $k_l$ zeros of order $l$ for $l \in \{2,3,4,5\}$ and at least one zero of odd order. Then up to normalisation $\H_{\Sp(2n,\C)}^{-1}(\underline{a})$ is given by a holomorphic fibre bundle \[ (\P^1)^{k_2+k_3} \times (\P(1,1,2))^{k_4+k_5} \rightarrow \H_{\Sp(2n,\C)}^{-1}(\underline{a}) \rightarrow \Prym_{\pi_n^*K^{-1}}(\tilde{\pi}_2). \] \end{exam} \begin{theo}\label{theo:fibrebdle_trivial} The fiber bundles over abelian varieties appearing in Theorems \ref{theo:Sp(2n):strat} and \ref{theo:Sp(2n):global_fibreing} and Example \ref{exam:sp(2n,C)_first_deg} are smoothly trivial. \end{theo} \begin{proof} This will be proved in Section \ref{sec:lim} using analytic techniques.
\end{proof} \begin{coro}\label{coro:irred_comp} Let $\underline{a}=(a_2, \dots, a_{2n}) \in B_{2n}(X,K)$ be of $\sl(2)$-type, such that $a_{2n}\in H^0(X,K^{2n})$ has at least one zero of odd order. Then $\H_{\Sp(2n,\C)}^{-1}(\underline{a})$ is an irreducible complex space. If all zeros of $a_{2n}$ have even order, then $\H_{\Sp(2n,\C)}^{-1}(\underline{a})$ is connected and has four irreducible components. \end{coro} \begin{proof} This follows from \cite{Ho1} Corollary 8.6 and Theorem 8.8. \end{proof} \begin{rema} Notice that the identification of Hitchin fibres in Theorem \ref{Theo:isom:Hitchinfibres} is not restricted to $\sl(2)$-type Hitchin fibres with irreducible and reduced spectral curve. In particular, the parametrization of singular Hitchin fibres with reducible spectral curve in \cite{GO13} Section 7 describes certain $\sl(2)$-type Hitchin fibres of the $\Sp(2n,\C)$-Hitchin system, for all $n \in \N$. \end{rema} \vspace*{0.5cm} \section{\texorpdfstring{$\sl(2)$}{sl(2)}-type fibres of odd orthogonal Hitchin systems} \label{sec:SO} \subsection{The \texorpdfstring{$\SO(2n+1,\C)$}{SO(2n+1,C)}-Hitchin system} Let $G=\SO(2n+1,\C)$ and \begin{align*} \so(2n+1,\C)= \left\{ A \in \Mat((2n+1) \times (2n+1) , \C) \mid A^{\tr}J_{2n+1} + J_{2n+1}A=0 \right\}, \end{align*} where \[ J_{2n+1}=\begin{pmatrix} 0 & \text{id}_n & 0 \\ \text{id}_n & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix}. \] Then a Cartan subalgebra is given by \[ \h= \{H=\diag(h_1, \dots, h_n,-h_1,\dots,-h_n,0) \mid h_i \in \C \}. \] Define $e_i \in \h^\vee$ by $e_i(H)=h_i$. Then a root system is given by \[ \Delta=\{ \pm e_i \pm e_j \mid 1\leq i,j \leq n, i \neq j \} \cup \{ \pm e_i \mid 1 \leq i \leq n \}. \] As before, the $\so(2n+1,\C)$-discriminant decomposes by the length of the roots \[ \disc_{\so}=\prod\limits_{i=1}^{n} -e_i^2 \disc_{\so}^{\red}, \quad \text{where} \quad \disc_{\so}^{\red}=\prod_{i \neq j } -(e_i\pm e_j)^2.
\] The characteristic polynomial of $A \in \so(2n+1,\C)$ has the form \[ \lambda(\lambda^{2n} + a_2 \lambda^{2n-2} + \dots + a_{2n}). \] The coefficients $a_2, \dots, a_{2n}$ generate the invariant polynomials $\C[\g]^G$. \begin{defi} An $M$-twisted $\SO(m,\C)$-Higgs bundle is a triple $(E,\Phi,\omega)$ consisting of \begin{itemize} \item[i)] a holomorphic vector bundle $E$ of rank $m$ with $\det(E) \cong \O_X$, \item[ii)] a holomorphic non-degenerate symmetric bilinear form $\omega \in H^0(X,S^2 E^\vee)$, and \item[iii)] a Higgs field $\Phi \in H^0(X,\End(E) \otimes M)$, such that $\omega(\Phi \, \cdot, \cdot)=-\omega(\cdot,\Phi\, \cdot)$. \end{itemize} $(E,\Phi,\omega)$ is called stable, if for all isotropic $\Phi$-invariant subbundles $0 \neq F \subsetneq E$ \[ \deg(F) < 0 \] (see \cite{GGM09} for this simplified stability condition). \end{defi} Let $\M_{\SO(m,\C)}(X,M)$ be the moduli space of stable $M$-twisted $\SO(m,\C)$-Higgs bundles on $X$. For $m=2n+1$ the Hitchin map is given by \begin{align*} \H_{\SO(2n+1,\C)}: \M_{\SO(2n+1,\C)}(X,M) &\rightarrow B_{\SO(2n+1,\C)}(X,M):=\bigoplus\limits_{i=1}^n H^0(X,M^{2i}), \\ (E,\Phi,\omega) \quad &\mapsto \quad (a_2(\Phi), \dots , a_{2n}(\Phi)). \end{align*} In particular, we observe that $B_{\SO(2n+1,\C)}(X,M)=B_{2n}(X,M)$. Let $(E,\Phi,\omega) \in \H_{\SO(2n+1,\C)}^{-1}(a_2, \dots, a_{2n})$, then the characteristic polynomial of $\Phi$ is given by \[ \lambda(\lambda^{2n} + a_{2} \lambda^{2n-2} + \dots + a_{2n}). \] Hence, the spectral curve decomposes into the two irreducible components $\underline{0} \cup \Sigma$, where $\underline{0}$ is the image of the zero section in $M$ and $\Sigma$ is the $\Sp(2n,\C)$-spectral curve associated to $(a_2,\dots,a_{2n})$. \begin{defi} An element of the Hitchin base $\underline{a} \in B_{\SO(2n+1,\C)}(X,M)$ is called of $\sl(2)$-type, if $\Sigma/\sigma$ is smooth. In this case, the corresponding Hitchin fibre $\H^{-1}_{\SO(2n+1,\C)}(\underline{a})$ is called of $\sl(2)$-type.
An $M$-twisted $\SO(2n+1,\C)$-Higgs bundle is of $\sl(2)$-type, if it is contained in an $\sl(2)$-type Hitchin fibre. \end{defi} From Lemma \ref{lem:sp:discr} and Proposition \ref{prop:sp:descr:sl(2)-type_spectral_curve}, we immediately have \begin{lemm}\label{lem:so(2n+1):discr} Let $\underline{a} \in B_{2n}(X,M)$. If $\disc_{\so}(\underline{a}) \in H^0(X,M^{2n^2})$ has simple zeros, then $\Sigma$ is smooth. If $\disc_{\so}^{\red}(\underline{a}) \in H^0(X,M^{2n(n-1)})$ has simple zeros, then $\underline{a}$ is of $\sl(2)$-type. \end{lemm} Hence, the descriptions and properties of $\sl(2)$-type spectral curves in Section \ref{sec:sl(2)-type} carry over to $\sl(2)$-type Hitchin fibres of the odd orthogonal Hitchin system by adding the irreducible component $\underline{0}$. \subsection{Odd orthogonal \texorpdfstring{$\sl(2,\C)$}{sl(2,C)}-type fibres as fibres of an \texorpdfstring{$\SO(3,\C)$}{SO(3,C)}-Hitchin map}\label{ssec:so:identfibre} \begin{lemm}\label{lem:so:local} Let $(E,\Phi, \omega) \in \M_{\SO(2n+1,\C)}(X,M)$ be of $\sl(2)$-type. Let $p \in Z(\det(\Phi))$ be a zero of order $m$. Then there exists a coordinate neighbourhood $(U,z)$ centred at $p$ and an orthogonal splitting $(E,\Phi)\rest_U=(V_0 \oplus V_1, \Phi_0 \oplus \Phi_1)$, such that $V_0$ is of rank $3$ and $\Phi_0(p)$ is nilpotent, and $V_1$ is of rank $2n-2$ containing the eigenspaces for the eigenvalues $\lambda$ with $\lambda(p) \neq 0$. There exists an orthogonal frame of $V_0\rest_U$, such that \[ \Phi_0(z)= z^{l_p} \begin{pmatrix} 0 & 1-z^{m-2l_p} & 0 \\ z^{m-2l_p}-1 & 0 & i(z^{m-2l_p}+1) \\ 0 & -i(z^{m-2l_p}+1) & 0 \end{pmatrix} \d z. \] \end{lemm} \begin{proof} By construction, $(V_0,\Phi_0)$ is an $\mathsf{O}(3,\C)$-Higgs bundle on $U$. Due to the exceptional isomorphism $\SO(3,\C) \cong \PSL(2,\C)$ the Higgs field $\Phi_0$ can be obtained as $\ad(\Psi)$ for an $\SL(2,\C)$-Higgs field $\Psi$ (cf. Section \ref{sec:langlands}).
By \cite{Ho1} Lemma 5.1, we can find a local frame, such that \[ \Psi = \begin{pmatrix} 0 & z^{l_p} \\ z^{m-l_p} & 0 \end{pmatrix} \d z. \] With respect to the induced local frame of $V_0$ the Higgs field $\Phi_0$ is given by \[ \Phi_0= \ad(\Psi)=\begin{pmatrix} 0 & -z^{l_p} & 0 \\ -z^{m-l_p} & 0 & z^{l_p}\\ 0 & z^{m-l_p} & 0 \end{pmatrix} \d z \] and the orthogonal structure induced by the Killing form by \[ \begin{pmatrix} && 1 \\ &1 & \\ 1 && \end{pmatrix}. \] Choosing an orthogonal frame we obtain the desired form. \end{proof} \begin{defi} Let $(E,\Phi, \omega) \in \M_{\SO(2n+1,\C)}(X,M)$ be of $\sl(2)$-type. The Higgs divisor of $(E,\Phi, \omega)$ is the divisor \[ D(E,\Phi,\omega):= \sum_{p \in Z(a_{2n})} l_p \cdot p, \] where $l_p$ is defined by the previous lemma. \end{defi} \begin{lemm}\label{lemm:so:kernel_Hecketrafo} Let $(E,\Phi,\omega) \in \M_{\SO(2n+1,\C)}(X,M)$ be of $\sl(2)$-type and let $D$ be its Higgs divisor. Then \begin{itemize} \item[i)]$\Ker(\Phi) \cong M^{-n}(D)$ and $\omega\rest_{\Ker(\Phi)}=\frac{a_{2n}}{s_D^2} \in H^0(X,M^{2n}(-2D))$, where $s_D$ denotes the canonical section of $\O(D)$. \item[ii)] there is an exact sequence of coherent sheaves \[ 0 \rightarrow \O(\Ker(\Phi) \oplus \Ker(\Phi)^{\perp}) \rightarrow \O(E) \rightarrow \mathcal{T} \rightarrow 0, \] where $\mathcal{T}$ is a torsion sheaf with $\det(\mathcal{T}) \cong \O(\Lambda-2D)$. \item[iii)] $(E,\Phi,\omega)$ is uniquely determined by $D$ and \[ \left(\ker(\Phi)^\perp,\Phi \rest_{\Ker(\Phi)^{\perp}}, \omega \rest_{\Ker(\Phi)^{\perp}} \right). \] \end{itemize} \end{lemm} \begin{proof} \begin{itemize} \item[i)] The proof of the first assertion closely follows an argument in Section 4.1/4.2 of \cite{Hi07}, using the local form for the Higgs field described in Lemma \ref{lem:so:local}. Let $x \in X$ and let $(U,z)$ be a coordinate chart centred at $x$.
Consider an orthogonal splitting $E\rest_U=V_0 \oplus V_2 \oplus \dots \oplus V_n$, such that $V_0$ is as in the lemma and $V_i$ for $i \geq 2$ is of rank $2$, containing the eigenspaces for the eigenvalues $\pm \lambda_i \neq 0$. Let $e_0,e_1,e_2$ be an orthogonal frame for $V_0$, such that $\Phi_0$ has the form described in the Lemma, and $e_{2i-1},e_{2i}$ an orthogonal frame of $V_i$ of eigensections of $\Phi$. Then the induced alternating bilinear form $\alpha:=\omega(\Phi \cdot, \cdot)$ is given by \[ \ \hspace{1.1cm} \alpha=i z^l \left(e_1 \wedge (e_0 - i e_2) + z(\cdots)\right) + i \lambda_2 (e_3 \wedge e_4)+ \dots+ i \lambda_n (e_{2n-1} \wedge e_{2n} ) . \] Let us assume that with respect to our frame the volume form is given by $\vol=e_0 \wedge \dots \wedge e_{2n} \in H^0(U,\det(E))$. Then, we can write $\bigwedge^{n} \alpha \in H^0(U,\bigwedge^{2n}E \otimes M^{n})$ as a contraction $i_{v_0} \vol$ with \[ v_0 = -i^{n-1} z^{l} \lambda_2 \cdots \lambda_n (e_0 - i e_2)+ z^{l+1}(\cdots) \in H^0(U,E \otimes M^n). \] So $v_0$ defines a nowhere-vanishing section of $E \otimes M^n(-D)$ that spans the kernel of $\Phi$. Hence, $\Ker:=\Ker(\Phi) \cong M^{-n}(D)$. Furthermore, using the local form of the previous lemma one computes that for $p \in Z(a_{2n})$ we have $\omega \rest_{\ker}=z^{\ord_p a_{2n}-2D_p}$. Hence (up to the right choice of $s_D$) \[ \omega \rest_{\ker} =\frac{a_{2n}}{s_D^2} \in H^0(X,M^{2n}(-2D)). \] \item[ii)] $\ker^\perp \subset E$ is a $\Phi$-invariant subbundle of rank $2n$, such that \[ E\rest_U \cong (\ker \oplus \ker^\perp) \rest_U \] for all open $U \subset X$, such that $U \cap Z(a_{2n}) = \varnothing$. Hence, the inclusions define an exact sequence of coherent sheaves \[ 0 \rightarrow \O(\Ker \oplus \Ker^\perp) \rightarrow \O(E) \rightarrow \mathcal{T} \rightarrow 0 \] with $\mathcal{T}$ a torsion sheaf supported on $Z(a_{2n})$. Now, $\det(\mathcal{T})$ can be computed from the local description in Lemma \ref{lem:so:local}.
\item[iii)] Stated differently, ii) tells us that $E$ is a Hecke modification of $\Ker \oplus \Ker^{\perp}$ (see \cite{Ba10} Definition 1.1). We need to show that there is a unique Hecke modification doing the job, i.e.\ a unique Hecke modification, such that \[ F= \Ker \oplus \Ker^{\perp} \] with its degenerate symmetric bilinear form \[ \beta= \omega \rest_{\Ker} \oplus \omega \rest_{\Ker^\perp} \] is transformed into an $\SO(2n+1,\C)$-bundle $(\hat{F},\hat{\beta})$. At $p \in Z(a_{2n})$ we have an orthogonal decomposition \[ \left(\ker^\perp, \Phi \rest_{\ker^\perp} \right)\rest_U=\left( V_2 \oplus V_1, \Phi_2 \oplus \Phi_1 \right) \] by restricting the orthogonal decomposition in Lemma \ref{lem:so:local}. On the one hand, $V_2$ is of rank 2 and $\Phi_2(p)$ is nilpotent; on the other hand, $\Phi_1$ has non-zero eigenvalues and $\omega\rest_{V_1}$ is non-degenerate. Thus, we are left with showing that we can find a unique Hecke modification twisting \[ \left( \Ker\rest_{U} \oplus V_2 , \frac{a_{2n}}{s_D^2} \oplus \omega\rest_{V_2} \right) \] into an $\SO(3,\C)$-bundle. Using the local description of the Higgs field in Lemma \ref{lem:so:local} one can show that there are local frames $e_0$ of $\Ker\rest_U$ and $e_1,e_2$ of $V_2$, such that the bilinear form is given by \[ \frac{a_{2n}}{s_D^2} \oplus \omega\rest_{V_2} = \begin{pmatrix} z^{m-2l} & 0 & 0 \\ 0 & z^{m-2l} & 0 \\ 0 &0 & 1 \end{pmatrix}, \] where $m= \ord_p(a_{2n})$ and $l=D_p$. Hence, the Hecke modification can be assumed to take place in $\mathsf{span}\{e_0,e_1\}$.
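To illustrate that such a modification exists, one possible choice can be made explicit in the frame above: setting $f_\pm := e_0 \pm i e_1$, the form satisfies $\beta(f_+,f_+)=\beta(f_-,f_-)=0$ and $\beta(f_+,f_-)=2z^{m-2l}$, so the Hecke modification replacing $f_+$ by $z^{2l-m}f_+$ and keeping $f_-$ and $e_2$ transforms it into the non-degenerate form
\[ \begin{pmatrix} 0 & 2 & 0 \\ 2 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix}. \]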
If there were two Hecke modifications, \begin{center} \begin{tikzpicture} \matrix (m) [matrix of math nodes,row sep=2em,column sep=3em,minimum width=2em] { \Ker\rest_{U} \oplus V_2 & \hat{F}_1 \\ & \hat{F}_2 \\ }; \path[-stealth] (m-1-1) edge node [above] {$s_1$ } (m-1-2) edge node [left] {$s_2$ \ } (m-2-2); \end{tikzpicture} \end{center} such that $\hat{F}_1,\hat{F}_2$ are $\SO(3,\C)$-bundles with the induced orthogonal structure, then up to choosing frames $s_1 \circ s_2^{-1}$ reduces to a meromorphic $\SO(2,\C)$-gauge (an element of the $\SO(2,\C)$-loop group). It is not hard to show that such a gauge is automatically holomorphic. Hence, the resulting $\SO(3,\C)$-bundles $\hat{F}_1,\hat{F}_2$ are isomorphic. \end{itemize} \end{proof} \begin{prop}\label{prop:SO:pushforward} Let $\underline{a}\in B_{2n}(X,M)$ be of $\sl(2)$-type and let $b_2 \in H^0(\Sigma/\sigma,\pi_n^*M^2)$ be the induced section. The pushforward induces a holomorphic map \[ \M_{\SO(3,\C)}(\Sigma/\sigma,\pi_n^*M) \supset \H_{\SO(3,\C)}^{-1}(b_2) \rightarrow \H^{-1}_{\SO(2n+1,\C)}(\underline{a}) \subset \M_{\SO(2n+1,\C)}(X,M). \] \end{prop} \begin{proof} Let $(E,\Phi,\omega) \in \H_{\SO(3,\C)}^{-1}(b_2)$. The pushforward \[ \pi_{n*}\left(\ker(\Phi)^\perp \otimes \pi_n^*M^{n-1}, \Phi\rest_{\ker(\Phi)^\perp}, (\del \pi_n)^{-1}\omega \rest_{\ker(\Phi)^\perp} \right) \] defines an $M$-twisted $\GL(2n,\C)$-Higgs bundle on $X$ with \[ \det\left( \pi_{n*}\left(\ker(\Phi)^\perp \otimes \pi_n^*M^{n-1}\right)\right) = M^{-n}(\Nm D) \] and a symmetric bilinear form $\pi_{n*}\left((\del \pi_n)^{-1}\omega \rest_{\ker(\Phi)^\perp}\right)$, which is non-degenerate away from $Z(a_{2n})$ by Proposition \ref{prop:push:pairing}. Furthermore, $\pi_{n*} \Phi$ is anti-symmetric with respect to this bilinear form. Moreover, we have an induced Higgs divisor given by $\Nm(D)$ supported at $Z(a_{2n})$. Now there is a unique way to recover an $\SO(2n+1,\C)$-Higgs bundle by Lemma \ref{lemm:so:kernel_Hecketrafo}.
This reconstruction seems to depend on the Higgs divisor $D$. However, we saw in the proof of Lemma \ref{lemm:so:kernel_Hecketrafo} that the construction is local and only depends on the rank 2 subbundle of $\pi_{n*}(\ker(\Phi)^\perp \otimes \pi_n^*M^{n-1})$, on which the Higgs field has vanishing eigenvalue. So for a trivially covered neighbourhood $U \subset X$ of $x \in Z(a_{2n})$ it recovers $(E,\Phi) \rest_{V}$, where $V \subset \pi_n^{-1}(U)$ is the unique connected component, such that $\lambda \in H^0(V,\pi_n^*M)$ has a zero. Hence, the resulting $\SO(2n+1,\C)$-Higgs bundle varies holomorphically with $(E,\Phi,\omega)$. \end{proof} \begin{prop}\label{prop:SO:pullback} Let $\underline{a}\in B_{2n}(X,M)$ be of $\sl(2)$-type and let $b_2 \in H^0(\Sigma/\sigma,\pi_n^*M^2)$ be the induced section. The pullback along $\pi_n: \Sigma/\sigma \rightarrow X$ induces a holomorphic map \[ \H^{-1}_{\SO(2n+1,\C)}(\underline{a}) \rightarrow \H^{-1}_{\SO(3,\C)}(b_2) \subset \M_{\SO(3,\C)}(\Sigma/\sigma,\pi_n^*M). \] \end{prop} \begin{proof} Let $(E,\Phi,\omega)\in \H^{-1}_{\SO(2n+1,\C)}(\underline{a})$. The pullback of the characteristic polynomial to $\Sigma/\sigma$ \[ \lambda\left(\lambda^{2n}+ \pi_n^*a_2\lambda^{2n-2} + \dots + \pi_n^*a_{2n}\right) \] factors through $\lambda(\lambda^2+b_2)$ and hence defines a generalised eigenbundle $E_3$ on $\Sigma/\sigma$ by \[ 0 \rightarrow E_3 \rightarrow \pi^*_n E \xrightarrow{\Psi} \pi^*_n (E \otimes M^3) \rightarrow E_3 \otimes \pi^*_n M^{2n+1} \rightarrow 0, \] where \[ \Psi:=\pi_n^*\Phi\left(\pi_n^*\Phi^2 +b_2 \id_{\pi_n^*E}\right). \] The dual exact sequence tensored with $\pi_n^*M^3$ results in \[ 0 \rightarrow E_3^\vee \otimes \pi^*_n M^{2-2n} \rightarrow \pi^*_n E^\vee \xrightarrow{\Psi^\vee} \pi^*_n (E^\vee \otimes M^3) \rightarrow E_3^\vee \otimes \pi_n^*M^3 \rightarrow 0. \] The orthogonal bilinear form $\omega$ identifies $E$ with $E^\vee$, and by the anti-symmetry of the Higgs field we have $\Psi^\vee=-\Psi$ under this identification.
Hence, $\omega$ induces an isomorphism $E_3 \cong E_3^\vee \otimes \pi^*_n M^{2-2n}$. Finally, $\omega$ restricts to a symmetric, non-degenerate bilinear form $\omega_3$ on $E_3 \otimes \pi^*_n M^{n-1}$ and the induced Higgs field $\Phi_3$ on $E_3$ is anti-symmetric with respect to it. Hence, $(E_3,\Phi_3)$ is a $\pi_n^*M$-twisted $\SO(3,\C)$-Higgs bundle on $\Sigma/\sigma$. \end{proof} \begin{theo}\label{theo:so:biholo} Let $\underline{a} \in B_{2n}(X,M)$ be of $\sl(2)$-type and let $b_2 \in H^0(\Sigma/\sigma, \pi_n^*M^2)$ be the induced section. The holomorphic map between the Hitchin fibres \[ \H^{-1}_{\SO(3,\C)}(b_2) \rightarrow \H^{-1}_{\SO(2n+1,\C)}(\underline{a}) \] defined in Proposition \ref{prop:SO:pushforward} is a biholomorphism of complex spaces. \end{theo} \begin{proof} We are left with showing that the maps defined in the previous propositions are inverse to each other. We start with $(E_3,\Phi_3) \in \H^{-1}_{\SO(3,\C)}(b_2)$. Consider the holomorphic map \[ \Ker(\Phi_3)^\perp \otimes \pi_n^*M^{1-n} \rightarrow \Ker(\Phi_3)^\perp \otimes \pi_n^*M^{n-1} \] tensoring with $s_R= \del \pi_n \in H^0(\Sigma/\sigma, \pi_n^* M^{2n-2})$. This induces an embedding of locally free sheaves \[ 0 \rightarrow \Ker(\Phi_3)^\perp \otimes \pi_n^*M^{1-n} \rightarrow \pi_n^* \pi_{n*} \left( \Ker(\Phi_3)^\perp \otimes \pi_n^*M^{n-1} \right). \] By construction its image is $\Ker(\pi_n^*\pi_{n*}\Phi_3^2 +b_2 \id)$. Hence, we recover $\Ker(\Phi_3)^\perp$ by the map defined in Proposition \ref{prop:SO:pullback}. This uniquely determines $(E_3,\Phi_3,\omega_3)$ by Lemma \ref{lemm:so:kernel_Hecketrafo} iii). For the converse, let $(E,\Phi,\omega) \in \H^{-1}_{\SO(2n+1,\C)}(\underline{a})$. Then \[ (E_3, \Phi_3) = \left( \Ker\Psi, \Phi\rest_{\Ker \Psi} \right) \] decomposes $\pi_n^*(E,\Phi)$ into rank 3 subbundles. For $U \subset X$, such that \[ \pi_n^{-1}(U)= \bigsqcup_{i=1}^n U_i, \] these are the generalised eigenbundles for the eigenvalues $0, \pm \lambda\rest_{U_i}$.
The pushforward of $\ker(\Phi_3)^\perp \otimes \pi_n^*M^{n-1} \subset E_3$ reassembles the eigenbundles for $\pm \lambda\rest_{U_i}$ for all $i$. By Lemma \ref{lemm:so:kernel_Hecketrafo} this uniquely determines an $\SO(2n+1,\C)$-Higgs bundle. Hence, we recover $(E,\Phi,\omega)$. \end{proof} \begin{rema}[Hitchin's approach to regular fibres] Another way to attack the problem is trying to generalise Hitchin's approach in \cite{Hi07}. Hitchin describes the regular $\SO(2n+1,\C)$-Hitchin fibres by relating them to the corresponding $\Sp(2n,\C)$-Hitchin fibre on $X$. Let $(V,\Phi,g) \in \H^{-1}_{\SO(2n+1,\C)}(\underline{a})$ with $\underline{a}$ of $\sl(2)$-type. Adopting Hitchin's notation, let $V_0 \subset V$ be the kernel line bundle and $\Phi': V/V_0 \rightarrow V/V_0 \otimes K$ the induced Higgs field. It is easy to see that $\alpha:=g( \Phi' \cdot , \cdot )$ defines a holomorphic anti-symmetric bilinear form on $V/V_0$ that is non-degenerate wherever $\Phi$ has distinct eigenvalues. If $\deg(D) \equiv 0 \mod 2n$, where $D=D(V,\Phi)$, we can choose a $2n$-th root $L$ with $L^{2n}=K^{-n}(D)$ and define a symplectic Higgs bundle by \[ (E:= V /V_0 \otimes L, \Phi', \alpha). \] $\bigwedge ^{n} \alpha \in H^0(X,\det(E))$ is generically non-zero and $\det(E)= \O_X$ by Lemma \ref{lemm:so:kernel_Hecketrafo} i). Hence, $\alpha$ is non-degenerate on $E$. For regular Hitchin fibres, $D$ is always zero and therefore this defines a map \begin{align*} \H^{-1}_{\SO(2n+1,\C)}(\underline{a}) \rightarrow \H^{-1}_{\Sp(2n,\C)}(\underline{a}). \end{align*} Hitchin uses this map to study the regular $\SO(2n+1,\C)$-fibres as covering spaces of symplectic Hitchin fibres. The singular fibres are stratified by the Higgs divisors $D$. On the open and dense stratum, we have $D=0$ and we could apply the same argument. But for the lower strata $\deg(D) \bmod 2n$ is unconstrained. Hence, this trick does not generalise.
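For instance, already for $n=2$ the obstruction is visible: at a zero $x$ of $a_4$ of order $2$ the Higgs divisor may take the values $D_x \in \{0,1\}$, so on the lower strata $\deg(D)$ can be odd; then
\[ \deg\left(K^{-n}(D)\right)=-2n(g-1)+\deg(D) \]
is not divisible by $2n$ and no line bundle $L$ with $L^{2n}=K^{-n}(D)$ exists.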
\end{rema} \vspace*{0.5cm} \section[Langlands correspondence]{Langlands correspondence for \texorpdfstring{$\sl(2)$}{sl(2)}-type Hitchin fibres}\label{sec:langlands} In this section, we compare the $\sl(2)$-type Hitchin fibres for the Langlands dual groups $\Sp(2n,\C)$ and $\SO(2n+1,\C)$ that project to the same point in the Hitchin base. Concerning the abelian part of the spectral data, we will recover torsors over dual abelian varieties. This reproves and generalizes the result for regular fibres in \cite{Hi07}. The non-abelian part of the spectral data will not change under the duality. This is a new phenomenon. We will start with the rank 1 case. For $\rk(\g)=1$, we can compare the Hitchin fibres by using the exceptional isomorphisms $\Sp(2,\C) \cong \SL(2,\C)$ and $\SO(3,\C) \cong \PGL(2,\C)$. The moduli space of $\PGL(2,\C)$-Higgs bundles can be constructed as follows (see \cite{Ha13}). First recall that \[ \M_{\GL(1,\C)}(X,M) \cong \Pic(X) \times H^0(X,M) \] is an abelian group with an action on $\M_{\GL(2,\C)}(X,M)$. Let \[ (L,\lambda) \in \M_{\GL(1,\C)}(X,M)\quad \text{and} \quad (E,\Phi) \in \M_{\GL(2,\C)}(X,M), \] then the proper, holomorphic action is given by \[ \left( (L,\lambda),(E,\Phi)\right) \mapsto (E \otimes L, \Phi + \lambda \id_E ). \] We define the $\PGL(2,\C)$-Higgs bundle moduli space as the orbifold quotient \[ \M_{\PGL(2,\C)}(X,M) = \M_{\GL(2,\C)}(X,M) / \M_{\GL(1,\C)}(X,M). \] Acting with $H^0(X,M)$, we can find a representative for each $\PGL(2,\C)$-Higgs bundle with $\tr(\Phi)=0$. Hence, \[ \M_{\PGL(2,\C)}(X,M) \cong \H_{\GL(2,\C)}^{-1}(B_{\SL(2,\C)}(X,M)) / \Pic(X), \] where we think of $B_{\SL(2,\C)}(X,M) \subset B_{\GL(2,\C)}(X,M)$ by the obvious inclusion. For $N \in \Pic(X)$ define \[ \M_{\SL(2,\C)}^N(X,M) = \left\{ (E,\Phi) \in \M_{\GL(2,\C)}(X,M) \mid \det(E)=N, \tr(\Phi)=0 \right\}. \] The action of $\Pic(X)$ identifies $\M_{\SL(2,\C)}^{N_1}(X,M)$ and $\M_{\SL(2,\C)}^{N_2}(X,M)$, whenever $\deg(N_1) \equiv \deg(N_2) \mod 2$.
Hence, fixing a line bundle $N \in \Pic(X)$ of degree 1, we have \begin{align} \M_{\PGL(2,\C)}(X,M)=\left( \M_{\SL(2,\C)}^{\O_X}(X,M) \sqcup \M_{\SL(2,\C)}^{N}(X,M) \right) / \Jac(X)[2], \label{equ:MPGL(n,C)} \end{align} where $\Jac(X)[2] \cong \Z^{2g}_2$ denotes the group of two-torsion points of $\Jac(X)$. The isomorphism to the moduli space of $\SO(3,\C)$-Higgs bundles is defined using the adjoint representation \begin{align*} \M_{\PGL(2,\C)}(X,M) \quad &\rightarrow \qquad \M_{\SO(3,\C)}(X,M) \\ (E,\Phi) \qquad &\mapsto \left( (E \times_{\Ad} \sl(2,\C)) \otimes \det(E)^{-1}, \ad(\Phi), \omega \right). \end{align*} Here, the orthogonal structure $\omega$ is induced by the Killing form on $\sl(2,\C)$. Topologically, $\SO(3,\C)$-Higgs bundles on a Riemann surface are classified by the second Stiefel-Whitney class \[ \mathsf{sw}_2 \in H^2(X,\Z_2) \cong \Z_2. \] This is the obstruction to lifting a $\SO(3,\C)$-Higgs bundle to a $\mathsf{Spin}(3,\C) \cong \SL(2,\C)$-Higgs bundle. Hence, under this isomorphism $\M_{\SL(2,\C)}^{\O_X}(X,M)/\Jac(X)[2]$ is mapped onto the connected component of $\SO(3,\C)$-Higgs bundles with $\mathsf{sw}_2=0$ and $\M_{\SL(2,\C)}^{N}(X,M)/\Jac(X)[2]$ onto the connected component with $\mathsf{sw}_2=1$. The Hitchin map \[ \H_{\PGL(2,\C)}:\M_{\PGL(2,\C)}(X,M) \rightarrow H^0(X,M^2) \] is defined in terms of the decomposition (\ref{equ:MPGL(n,C)}) by the $\SL(2,\C)$-Hitchin map on each connected component. For $(E,\Phi) \in \M_{\PGL(2,\C)}(X,M)$, there is a well-defined $\SL(2,\C)$-Higgs field $\Phi$ by (\ref{equ:MPGL(n,C)}). In particular, we can define a Higgs divisor $D(E,\Phi)$ as we did in Lemma \ref{lemm:local_form+Higgs_divisor}. \begin{theo}\label{theo:SO(2n+1)_strat} Let $a_2 \in H^0(X,M^2)$, such that the spectral curve is irreducible and reduced. Then there is a stratification \[ \H^{-1}_{\PGL(2,\C)}(a_2) = \bigsqcup_{D } \S_D \] by finitely many locally closed analytic sets $\S_D$ indicated by Higgs divisors $D$ associated to $a_2$.
If there is at least one zero of $a_2$ of odd order, each stratum is a holomorphic $(\C^*)^{r} \times \C^{s}$-bundle over \[ \left( \Prym_{K^{-1}(D)}(\tilde{\pi}) \sqcup \Prym_{NK^{-1}(D)}(\tilde{\pi}) \right) /\Jac(X)[2], \] where \[ r=n_{\even}-n_{\diag}(D), \quad r+s=2n(g-1) - \deg(D)-\frac{n_{\odd}}{2}. \] If all zeros of $a_{2}$ are of even order, each stratum $\S_D$ is a holomorphic $(\C^*)^{r} \times \C^{s}$-bundle over \[ \left( \Prym_{IK^{-1}(D)}(\tilde{\pi}) \sqcup \Prym_{INK^{-1}(D)}(\tilde{\pi}) \right) /\Jac(X)[2], \] with $r,s$ given by the above formulae. Here $I$ is the unique non-trivial line bundle on $X$, such that $\tpst I=\O_{\tilde{\Sigma}}$. A local trivialisation of the fibre bundle $\S_D \subset \H^{-1}_{\PGL(2,\C)}(a_2)$ induces a local trivialisation of the fibre bundle structure of the corresponding stratum $\S_D \subset \H^{-1}_{\SL(2,\C)}(a_2)$ and vice versa. \end{theo} \begin{proof} Fix a $\SL(2,\C)$-representative $(E,\Phi)$ of a Higgs bundle in \[ \S_D \subset \H^{-1}_{\PGL(2,\C)}(a_2) \subset \left( \M_{\SL(2,\C)}^{\O_X}(X,M) \sqcup \M_{\SL(2,\C)}^{N}(X,M) \right) / \Jac(X)[2]. \] By \cite{Ho1} Theorem 5.5, we can associate an eigen line bundle $L$ on the normalised spectral cover $\tilde{\pi}: \tilde{\Sigma} \rightarrow X$ to $(E,\Phi)$. If $\det(E)=\O_X$, it will lie in $\Prym_{K^{-1}(D)}(\tilde{\pi})$ and, if $\det(E)=N$, in $\Prym_{NK^{-1}(D)}(\tilde{\pi})$. After choosing frames $s$ of $L$ at $\tilde{\pi}^{-1}Z(a_2)$, the $\SL(2,\C)$-Higgs bundle $(E,\Phi)$ is uniquely determined by its $u$-coordinate in $(\C^*)^{r} \times \C^{s}$ with $r,s$ as in the theorem. The action by $\Jac(X)[2]$ lifts to the normalised spectral curve and induces an action \begin{align*} \Jac(X)[2] \times \Prym_F(\tilde{\pi}) \rightarrow \Prym_F(\tilde{\pi}), \qquad (J,L) \mapsto \tpst J \otimes L \end{align*} for $F \in \Pic(X)$.
For $F=\O(\Lambda- \tpst D)$ and $F=\tpst N^{-1}(\Lambda-\tpst D)$, this is exactly the action on the eigen line bundle induced by the action of $\Jac(X)[2]$ on $(E,\Phi)$. Recall that in the $\SL(2,\C)$-case for $a_2 \in H^0(X,M^2)$ having only zeros of even order, each stratum is a two-sheeted covering of a fibre bundle over the twisted Prym variety. This was due to the identification of $(E,\Phi)$ and $(E\otimes I,\Phi)$ via pullback. However, $I \in \Jac(X)[2]$ and so \[ \tpst: \M_{\PGL(2,\C)}(X,K) \rightarrow \M_{\PGL(2,\C)}(\tilde{\Sigma},\tpst K) \] is injective. The non-abelian part of the spectral data decodes the local Hecke parameter at $\tilde{\pi}^{-1}Z(a_2)$ and does not change under the action of $J \in \Jac(X)[2]$ on $(E,\Phi)$. Choosing a collection of frames $j$ of $J$ at $Z(a_2)$, we obtain a frame of $\tilde{\pi}^*J \otimes L$ at $\tilde{\pi}^{-1}Z(a_2)$ by $\tpst j \otimes s$. The $u$-coordinate does not depend on the choice of $j$ by \cite{Ho1} Proposition 5.8. This proves the last assertion. \end{proof} \begin{theo}\label{theo:SO(2n+1)_global_fibreing} Let $a_2 \in H^0(X,M^2)$, such that the spectral curve is locally irreducible. Then the $\PGL(2,\C)$-Hitchin fibre over $a_2$ is itself a holomorphic fibre bundle over \[ \left( \Prym_{K^{-1}(D)}(\tilde{\pi}) \sqcup \Prym_{NK^{-1}(D)}(\tilde{\pi}) \right) /\Jac(X)[2] \] with fibres given by the compact moduli of Hecke parameters. \end{theo} \begin{proof} This is a direct consequence of the previous theorem and \cite{Ho1} Theorem 7.14. \end{proof} \begin{exam}\label{exam:so(2n+1,C)_first_deg} Example \ref{exam:sp(2n,C)_first_deg} carries over to the $\PGL(2,\C)$-case. Let $a_{2}$ have $k_l$ zeros of order $l$ for $l \in \{2,3,4,5\}$ and at least one zero of odd order.
Then up to normalisation $\H_{\PGL(2,\C)}^{-1}(a_2)$ is given by a holomorphic \[ (\P^1)^{k_2+k_3} \times (\P(1,1,2))^{k_4+k_5}- \] bundle over \[ \left( \Prym_{K^{-1}(D)}(\tilde{\pi}) \sqcup \Prym_{NK^{-1}(D)}(\tilde{\pi}) \right) /\Jac(X)[2]. \] \end{exam} Before we formulate the Langlands duality in $\rk(\g)=1$, let us identify the abelian part of the spectral data for $\PGL(2,\C)$ as an abelian torsor over the dual abelian variety to the Prym variety. \begin{prop}[\cite{HaTh03} Lemma 2.3]\label{prop:dual:prym} Let $\pi: Y \rightarrow X$ be an $s$-sheeted covering of Riemann surfaces. Then \[ \Prym(\pi)^\vee \cong \Prym(\pi)/\Jac(X)[s]. \] \end{prop} \begin{coro}\label{coro:duality_rank1} Let $a_2 \in H^0(X,M^2)$, such that the spectral curve is irreducible and reduced. The Hitchin fibres $\H^{-1}_{\PGL(2,\C)}(a_2)$ and $\H^{-1}_{\SL(2,\C)}(a_2)$ are related as follows: \begin{itemize} \item[i)] The abelian parts of the spectral data are torsors over dual abelian varieties. \item[ii)] The complex spaces of Hecke parameters are isomorphic. \end{itemize} \end{coro} \begin{proof} Assertion i) is immediate from the previous theorems and proposition. We showed in Theorem \ref{theo:SO(2n+1)_strat} that a trivialisation of the bundle of Hecke parameters of $\H^{-1}_{\SL(2,\C)}(a_2)$ induces a trivialisation of the bundle of Hecke parameters of $\H^{-1}_{\PGL(2,\C)}(a_2)$. The identity map with respect to the corresponding trivialisations induces an isomorphism between the complex spaces of Hecke parameters. \end{proof} By Theorem \ref{theo:so:biholo}, these results carry over to higher rank. \begin{theo}\label{theo:so(2n+1):fibreing} Let $\underline{a} \in B_{2n}(X,K)$ be of $\sl(2)$-type with irreducible and reduced $\Sp(2n,\C)$-spectral curve. All the results from the previous section carry over to the $\SO(2n+1,\C)$-case.
Explicitly, there is a stratification \[ \H_{\SO(2n+1,\C)}^{-1}(\underline{a}) = \bigsqcup_{D} \S_D \] by fibre bundles over disjoint unions of abelian torsors indicated by Higgs divisors as described in Theorem \ref{theo:SO(2n+1)_strat}. If $a_{2n} \in H^0(X,K^{2n})$ has at least one zero of odd order, the disjoint union of abelian torsors is given by \[ \left( \Prym_{\pi^*_nK^{-1}(D)}(\tilde{\pi}) \sqcup \Prym_{\pi^*_nNK^{-1}(D)}(\tilde{\pi}) \right) /\Jac(X)[2]. \] If all zeros of $a_{2n}$ are of even order, it is \[ \left( \Prym_{I\pi^*_nK^{-1}(D)}(\tilde{\pi}) \sqcup \Prym_{I\pi^*_nNK^{-1}(D)}(\tilde{\pi}) \right) /\Jac(X)[2], \] where $I \in \Jac(\Sigma/\sigma)$ is the unique non-trivial line bundle, such that $\tilde{\pi}_2^*I=\O_{\tilde{\Sigma}}$. When $a_{2n}$ has only zeros of odd order, we obtain a global fibration of the $\SO(2n+1,\C)$-Hitchin fibre over this union of abelian torsors as described in Theorem \ref{theo:SO(2n+1)_global_fibreing}. Replacing the union of abelian torsors by the above, Example \ref{exam:so(2n+1,C)_first_deg} describes the first degenerations of singular $\sl(2)$-type Hitchin fibres for $\SO(2n+1,\C)$ up to normalisation. \end{theo} \begin{proof} This is immediate from the identification of $\sl(2)$-type Hitchin fibres for $\SO(2n+1,\C)$ with fibres of the $\pi_n^*K$-twisted $\SO(3,\C)$-Hitchin system on $\Sigma/\sigma$ in Theorem \ref{theo:so:biholo}. \end{proof} \begin{rema} It follows from Theorem \ref{theo:fibrebdle_trivial} and the last assertion in Theorem \ref{theo:SO(2n+1)_strat} that all these fibre bundles are smoothly trivial. \end{rema} \begin{coro} Let $\underline{a} \in B_{2n}(X,K)$ be of $\sl(2)$-type with irreducible and reduced $\Sp(2n,\C)$-spectral curve. Then $\H_{\SO(2n+1,\C)}^{-1}(\underline{a})$ has two connected components. If $a_{2n}\in H^0(X,K^{2n})$ has at least one zero of odd order, these two connected components are irreducible.
If all zeros of $a_{2n}$ have even order, then each connected component has two irreducible components. \end{coro} \begin{proof} For $\PGL(2,\C)$, the Hitchin fibres in \[ \left( \M_{\SL(2,\C)}^{\O_X}(X,M) \sqcup \M_{\SL(2,\C)}^{N}(X,M) \right) / \Jac(X)[2] \] have two connected components by \cite{Ho1} Corollary 8.6 and Theorem 8.8. These results also prove that the components are irreducible in the first case. When all zeros of $a_{2n}$ have even order, each connected component has two irreducible components stemming from the two connected components of $\Prym(\tilde{\pi}_2)$. In contrast to the $\SL(2,\C)$-case, the pullback of Higgs bundles along $\tilde{\pi}_2$ is injective for $\PGL(2,\C)$ (cf. Proposition 3.12 \cite{Ho1}). Now, the general result follows from Theorem \ref{theo:so:biholo}. \end{proof} In particular, Corollary \ref{coro:duality_rank1} generalizes verbatim to higher rank: \begin{coro}\label{coro:duality} Let $\underline{a} \in B_{2n}(X,K)$ be of $\sl(2)$-type, such that the spectral curve is irreducible and reduced. The Hitchin fibres $\H^{-1}_{\SO(2n+1,\C)}(\underline{a})$ and $\H^{-1}_{\Sp(2n,\C)}(\underline{a})$ are related as follows: \begin{itemize} \item[i)] The abelian part of the spectral data is a disjoint union of torsors over dual abelian varieties. \item[ii)] The complex spaces of Hecke parameters are isomorphic. \end{itemize} \end{coro} \vspace*{0.5cm} \section[Decoupled Hitchin equation]{Solution to the decoupled Hitchin equation through semi-abelian spectral data}\label{sec:lim} In this last section, we will show how to use semi-abelian spectral data for Hitchin fibres of $\sl(2)$-type to produce solutions to the decoupled Hitchin equation. In a series of works by Fredrickson, Mazzeo, Swoboda, Weiss and Witt \cite{MSWW14,MSWW16,Fr18b} and independently by Mochizuki \cite{Mo16}, such singular hermitian metrics were established as limits of sequences of actual solutions to the Hitchin equation under scaling the Higgs field to infinity.
We conjecture this to be true for the solutions to the decoupled Hitchin equation that we will construct. In the $\SL(2,\C)$-case, this is a theorem by \cite{Mo16}. Let $(P,\Psi) \in \M_{G}(X,M)$. A reduction of structure group $h: X \rightarrow P/K_G$ to a maximal compact subgroup $K_G \subset G$ is called a solution to the decoupled Hitchin equation, if the Chern connection is flat and the Higgs field $\Psi$ is normal, i.e. \[ 0=[ \Psi \otimes \tau^h(\Psi) ] \in H^0\left(X,(P \times_{\Ad} \g) \otimes M^2\right), \] where $\tau^h$ denotes the induced Cartan involution on $P \times_{\Ad} \g$. For $M=K$ this is equivalent to \[ F_h=0, \quad 0 =[\Psi \wedge \tau^h(\Psi)] \in H^{(1,1)}(X,P \times_{\Ad} \g). \] In most cases, there are no smooth solutions to this equation. For $\SL(2,\C)$ it is easy to check by a local computation similar to \cite{MSWW14} Section 3.2, that $h$ is singular at all zeros of the determinant of odd order (cf. Corollary \ref{lim:rema:even_zeros_lowest_stratu}). Global solutions to the decoupled Hitchin equation can be constructed through the pushforward of a hermitian-Einstein metric on the eigen line bundle $L \in \Prym_{\pi_n^*K^{-1}(D)}(\tilde{\pi})$. This method was applied to regular Hitchin fibres in \cite{MSWW14,Fr18b}. \begin{rema} Usually the solutions of the decoupled Hitchin equation are not unique. They can be modified by applying a Hecke modification, i.e.\ a meromorphic gauge transformation, at a singularity of the hermitian metric. However, in the cases we consider, there are natural choices indicated by the construction and the known approximation results of \cite{Mo16} and \cite{Fr18b}. \end{rema} \subsection{\texorpdfstring{$\Sp(2n,\C)$}{Sp(2n,C)}} \begin{theo}[\cite{Mo16} Section 4.3]\label{lim:theo:sol_dec_hit} Let $(E,\Phi) \in \M_{\SL(2,\C)}(X,M)$ with irreducible and reduced spectral curve. Let $a_2=\det(\Phi)$, $D$ its Higgs divisor and for $x \in Z(a_2)$ let $n_x:= \ord_x(a_2)-2D_x \in \N_0$.
Then there exists a hermitian metric $h_{dc}=h_{dc}(E,\Phi)$ on $E\rest_{X\setminus Z(a_2)}$ solving the decoupled Hitchin equation and inducing a non-singular hermitian metric on $\det(E)$. For all $x \in Z(a_2)$ there exists a coordinate $(U,z)$ centred at $x$ and a local frame of $E\rest_U$, such that the Higgs field is given by \[ \Phi= \begin{pmatrix} 0 & z^{D_x} \\ z^{\ord_x(a_2)-D_x} & 0 \end{pmatrix} \d z \] and the hermitian metric for $\ord_x(a_2) \equiv 1 \mod 2$ is given by \[ h_{dc}= \begin{pmatrix} g_1 \vert z \vert^{\frac{n_x}2} & g_2 z^{\frac{1-n_x}{2}} \vert z \vert^{\frac{n_x}2} \\ \bar{g_2} \bar{z}^{\frac{1-n_x}{2}} \vert z \vert^{\frac{n_x}2} & g_1 \vert z \vert^{-\frac{n_x}2} \end{pmatrix}, \] with $g_1$ a real positive smooth function and $g_2$ a complex smooth function, such that $g_1^2-\vert g_2 \vert ^2 \vert z \vert =1$. For $\ord_x(a_2) \equiv 0 \mod 2$ with respect to such a frame, the hermitian metric is given by \[ h_{dc}= \begin{pmatrix} g_1 \vert z \vert^{\frac{n_x}2} & g_2 z^{\frac{-n_x}{2}} \vert z \vert^{\frac{n_x}2} \\ \bar{g_2} \bar{z}^{\frac{-n_x}{2}} \vert z \vert^{\frac{n_x}2} & g_1 \vert z \vert^{-\frac{n_x}2} \end{pmatrix}, \] with $g_1,g_2$ real positive smooth functions, such that $g_1^2- g_2^2 =1$. In both cases, the smooth functions $g_1,g_2 \in \A_U$ are determined by the $u$-coordinate of $(E,\Phi)$ at $x$. \end{theo} \begin{proof} Let $(E,\Phi) \in \H^{-1}_{\SL(2,\C)}(a_2)$ with Higgs divisor $D$. Recall the description of $(E,\Phi)$ via the semi-abelian spectral data developed in \cite{Ho1} Section 5. Let $\lambda \in H^0( \tilde{\Sigma},\tpst K)$ be the section solving the spectral equation and $\Lambda=\div(\lambda)$ its divisor. The abelian part of the spectral data is a line bundle $L \in \Prym_{\pi_n^*K^{-1}(D)}(\tilde{\pi})$ defined by \[ \O(L)=\ker(\tpst \Phi -\lambda \id_{\O(\tpst E)}). \] We recover $\tpst E$ as a Hecke transformation of \[ (E_L,\Phi_L):=(L \oplus \sigma^*L, \diag(\lambda,-\lambda)).
\] Up to choices of frames of $L$ at $Z(\tpst a_2)$, these Hecke transformations are parametrised by a so-called $u$-coordinate. These $u$-coordinates are the non-abelian parameters of the spectral data. To construct the solution to the decoupled Hitchin equation, let us fix an auxiliary parabolic structure on $L$ by introducing weights $\alpha_p:= \frac12 (\Lambda-\tpst D)_p$ for all $p \in Z(\tpst a_2)$. Then the parabolic degree satisfies $\mathrm{pdeg}(L,\alpha)=0$. Hence, there exists a hermitian metric $h_L$ adapted to the parabolic structure that satisfies the hermitian-Einstein equation \[ F_{h_L}=0, \] unique up to rescaling by a constant (see \cite{Bi96,Si90}). This induces a flat hermitian metric $h_L+ \sigma^*h_L$ on $E_L$, such that the Higgs field $\Phi_L$ is normal. Applying the Hecke transformation to $(E_L,\Phi_L,h_L+ \sigma^*h_L)$ we obtain a hermitian metric on $\tpst E \rest_ {X \setminus Z(a_2)}$ solving the decoupled Hitchin equation. This descends to the desired metric $h_{dc}$. To show that it induces a non-degenerate hermitian metric on $\det(E)=\O_X$, we compute its local shape at $Z(a_2)$. Let $x \in Z(a_2)$ be a zero of odd order and $p \in \tilde{\Sigma}$ its preimage. By \cite{Fr18b} Proposition 3.5, we can choose a frame $s$ of $L$ around $p$, such that $h_L=\vert z \vert^{2\alpha_p}$. Such a frame is unique up to multiplication by $c \in \mathsf{U}(1)$ and therefore defines a unique $u$-coordinate for $(E,\Phi)$ at $p$ (see \cite{Ho1} Proposition 5.8). We want to change the frame of $L$, such that the Higgs bundle $(E,\Phi)$ corresponds to $u=0$ with respect to the new frame. This guarantees the desired local shape of $\Phi$. The transformation rule for $u$-coordinates was given in \cite{Ho1} Proposition 5.8. Choosing the frame $s'= \sqrt{\frac{1+u}{1-u}} s$, the $u$-coordinate for $(E,\Phi)$ becomes $u'=0$.
The hermitian metric $h_L$ with respect to the frame $s'$ is given by \[ h_L=f \vert z \vert ^{2\alpha_p} \quad \text{with } f=\left\vert \frac{1-u}{1+u} \right\vert. \] Applying the Hecke transformation, the induced hermitian metric on $\tpst E$ at $p$ is given by \[ \begin{pmatrix} (f+ \sigma^*f) \vert z \vert ^{2\alpha_p} & (f- \sigma^*f) \left(\frac{\vert z \vert}{z}\right)^{2\alpha_p} \\ (f- \sigma^*f) \left(\frac{\vert z \vert}{\bar{z}}\right)^{2\alpha_p} & (f+ \sigma^*f)\vert z \vert ^{-2\alpha_p} \end{pmatrix}. \] There exist $g_1,g_2 \in \A_U$, such that \[ \tpst g_1= f+ \sigma^*f \quad \text{and} \quad \tpst g_2= z^{-1}(f- \sigma^*f). \] Hence, we obtain the desired local form of $h_{dc}$ at $x \in Z(a_2)$. Using the description of the Hecke parameters at even zeros in terms of $u$-coordinates in \cite{Ho1} Proposition 8.1, one can adapt this argument to the zeros of $a_2$ of even order. In the case of an irreducible, locally reducible spectral curve $\Sigma$, the proof works in the same way, since \[ \Prym_{ I\pi_n^*K^{-1}(D)}(\tilde{\pi}_2) \subset \{ L \in \Pic(\tilde{\Sigma}) \mid L \otimes \sigma^*L=\tpst K^{-1}(D) \}, \] where $I$ is the unique non-trivial line bundle on $X$, such that $\tpst I= \O_{\tilde{\Sigma}}$. \end{proof} \begin{rema} For the regular fibres of $\M_{\SL(2,\C)}(X,K)$, this resembles the construction of limiting metrics in \cite{Fr18b}. In contrast to Fredrickson, we work with positive weights instead of negative ones. This is due to the fact that Fredrickson's construction uses the line bundle $L'$ with the property $\pi_*L'=E$ to reconstruct the Higgs bundle. In terms of $L\in \Prym_{\pi_n^*K^{-1}}(\tilde{\pi}_2)$ it is given by $L'=L \otimes \pi^*K$. From every solution $h_L$ of the hermitian-Einstein equation on $(L,\alpha)$, as defined in the previous theorem, one obtains a solution of the hermitian-Einstein equation on $(L',-\alpha)$ in a canonical way by $h':=h_L \vert \lambda \vert ^{-2}$.
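To see that $h'$ is again hermitian-Einstein away from the zeros of $\lambda$, here is a short check (a sketch, using the curvature convention $F_h=\bar{\partial}\partial \log h$ for a hermitian metric on a line bundle; other sign conventions differ only by a global sign). In a local holomorphic frame of $\tpst K$ write $\lambda = f \d z$ with $f$ holomorphic. Then on $\tilde{\Sigma} \setminus \supp \Lambda$
\[
F_{h'}=\bar{\partial}\partial \log\left(h_L \vert f \vert^{-2}\right) = F_{h_L} - \bar{\partial}\partial \log \vert f \vert^2 = F_{h_L}=0,
\]
since $\log \vert f \vert^2$ is harmonic wherever $f \neq 0$. At a point $p \in \supp \Lambda$ the factor $\vert \lambda \vert^{-2}$ shifts the local parabolic weight by $-\Lambda_p$; for regular fibres $D=0$ and $\alpha_p=\frac12 \Lambda_p$, so the weights change from $\alpha$ to $-\alpha$ as claimed.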
\end{rema} \begin{rema}\label{rem:fixdetcase} Similar to \cite{Fr18b} Proposition 3.3, one can obtain solutions to the decoupled Hitchin equation in the fixed determinant case. One first fixes a hermitian-Einstein metric $h_{\det(E)}$ on $\det(E)$. In this case, the eigen line bundle $L$ will be an element of \[ \{ L \in \Pic( \tilde{\Sigma}) \mid L \otimes \sigma^*L = \tilde{\pi}^*\det(E)K^{-1}(D) \} \] and one can choose the hermitian-Einstein metric $h_L$, such that \[ h_L \otimes \sigma^* h_L = \tpst h_{\det(E)} \left\vert \frac{\tpst a_2}{\tpst s_{D}^2} \right\vert, \] where $s_D \in H^0(X,\O_X(D))$ is a canonical section. Then the induced solution to the decoupled Hitchin equation satisfies $\det(h_{dc})=h_{\det(E)}$. \end{rema} \begin{coro}\label{lim:rema:even_zeros_lowest_stratu} Let $(E,\Phi) \in \M_{\SL(2,\C)}(X,K)$, such that $0 \neq \det(\Phi) \in H^0(X,K^2)$ has no global square root and $\Phi$ is everywhere locally diagonalizable. Then the hermitian metric $h_{dc}$ defined in Theorem \ref{lim:theo:sol_dec_hit} is a smooth solution to the Hitchin equation on $(E,\Phi)$. \end{coro} \begin{rema}\label{rema:locally_diag} Let $(E,\Phi)$ be as in the previous corollary. $(E,\Phi)$ is stable by the irreducibility of the spectral curve. Hence, the rescaled Hitchin equation \[ F_h+t^2[\Phi\wedge\Phi^{*{h}}]=0, \quad t \in \C^*, \] decouples and the solution is, independently of $t$, given by the hermitian metric $h_{dc}$. Thus, this hermitian metric is the limit of a constant sequence of solutions to the Hitchin equation along a ray to the ends of the moduli space. \end{rema} \begin{theo}[\cite{Mo16} Corollary 5.4]\label{theo:approx:Sl(2)} Let $(E,\Phi) \in \M_{\SL(2,\C)}(X,K)$ with irreducible and reduced spectral curve. Then the solution to the decoupled Hitchin equation $h_{dc}$ is a limiting metric.
Explicitly, let $h_t$ be the solution to the rescaled Hitchin equation \[ F_{h_t}+t^2[\Phi\wedge\Phi^{*{h_t}}]=0, \quad t \in \R_+. \] Then $h_t$ converges to $h_{dc}$ in $C^\infty$ on any compact subset of $X\setminus Z(\det(\Phi))$ for $t \rightarrow \infty$. \end{theo} \begin{proof} For $\SL(2,\C)$-Hitchin fibres with irreducible and reduced spectral curve, the auxiliary parabolic structure is uniquely determined by the condition that the singular hermitian metric $h_{dc}$ induces a non-singular hermitian metric on $\det(E)$. Hence, $h_{dc}$ coincides with the limiting hermitian metric constructed by Mochizuki and the approximation result follows from his work. \end{proof} Applying the biholomorphism of Theorem \ref{Theo:isom:Hitchinfibres}, we can use Theorem \ref{lim:theo:sol_dec_hit} to construct solutions to the decoupled Hitchin equation for $\Sp(2n,\C)$-Higgs bundles of $\sl(2)$-type. \begin{theo}\label{theo:limit:sl2type} Let $(E,\Phi,\omega) \in \M_{\Sp(2n,\C)}(X,K)$ with irreducible and reduced spectral curve of $\sl(2)$-type. Let $(E_2,\Phi_2)\in \M_{\SL(2,\C)}(\Sigma/\sigma,\pi^*_n K)$ be the corresponding $\SL(2,\C)$-Higgs bundle under the biholomorphism of Theorem \ref{Theo:isom:Hitchinfibres}. The solution to the decoupled Hitchin equation on $(E_2,\Phi_2) \in \M_{\SL(2,\C)}(\Sigma/\sigma,\pi^*_n K)$ induces a hermitian metric $h_{dc}$ on $(E,\Phi,\omega)\rest_{X\setminus Z(\disc(E,\Phi,\omega))}$ solving the decoupled Hitchin equation. \end{theo} \begin{proof} Let $h_2$ be the solution to the decoupled Hitchin equation on $(E_2,\Phi_2)$ defined in Theorem \ref{lim:theo:sol_dec_hit}. Then $h':=h_2 \vert \del \pi_n \vert^{-1}$ defines a hermitian metric on $E_2 \otimes \pi_n^*K^{n-1}$ singular on $Z(\det(\Phi_2)) \cup \supp R$, where $R=\div(\pi_n)$ is the ramification divisor. Recall from Theorem \ref{Theo:isom:Hitchinfibres} that $\pi_{n*}(E_2 \otimes \pi_n^*K^{n-1})=E$.
Hence, $\pi_{n*}h'$ defines a flat hermitian metric on $E$ singular on \[ Z(a_{2n}) \cup \pi_n(\supp R)=Z(\disc(E,\Phi,\omega)) \] compatible with the symplectic form, such that $[ \pi_{n*}\Phi \wedge \pi_{n*}\Phi^{*_{\pi_{n*}h'}} ]=0$. \end{proof} The local building blocks for these solutions to the decoupled Hitchin equation at their singularities were already considered before. Non-zero eigenvalues of the Higgs field of higher multiplicity correspond to smooth ramification points of $\pi: \Sigma \rightarrow X$. Here the $\Sp(2n,\C)$-Higgs bundle locally looks like a Higgs bundle in a regular $\SL(2n,\C)$-Hitchin fibre. Hence, the local approximation problem is covered by \cite{Fr18b} Section 4.1. The singular points of $\Sigma$ lie on the zero section of $K$ and are locally given by an equation of the form \[ \lambda^2-z^k=0. \] These are exactly the singularities of $\SL(2,\C)$-spectral curves. In this case, the local approximation result was proven in \cite{Mo16} Section 3. This leads to the following conjecture. \begin{conj}\label{conj:limit:config} Let $(E,\Phi,\omega) \in \M_{\Sp(2n,\C)}(X,K)$ with irreducible spectral curve of $\sl(2)$-type. Then the solution $h_{dc}(E,\Phi,\omega)$ to the decoupled Hitchin equation is a limiting metric, i.e.\ let $h_t$ be the solution to the rescaled Hitchin equation \[F_{h_t}+t^2[\Phi\wedge\Phi^{*{h_t}}]=0, \quad t \in \R_+, \] then $h_t$ converges to $h_{dc}(E,\Phi,\omega)$ in $C^\infty$ on any compact subset of $X\setminus Z(\disc_\sp(E,\Phi,\omega))$ for $t \rightarrow \infty$. \end{conj} \subsection{\texorpdfstring{$\SO(2n+1,\C)$}{SO(2n+1,C)}} \begin{theo}\label{theo:SO(3)_lim_config} Let $(E,\Phi,\omega) \in \M_{\SO(3,\C)}(X,K)$ and $a_2=\det(\Phi)$, such that the associated $\SL(2,\C)$-spectral curve is irreducible and reduced. Then there exists a metric on $(E,\Phi,\omega)\rest_{X\setminus Z(a_2)}$ solving the decoupled Hitchin equation.
\end{theo} \begin{proof} The adjoint representation $\Ad: \GL(2,\C) \rightarrow \SO(3,\C)$ induces a commutative diagram \[ \begin{tikzcd} 0 \ar[r] & \mathsf{U}(1) \ar[r]\ar[d]& \C^* \ar[r]\ar[d]& \R^+ \ar[r]\ar[d]& 0\\ 0 \ar[r] & \mathsf{U}(2) \ar[r]\ar[d]& \GL(2,\C) \ar[r]\ar[d,"\Ad"]& \GL(2,\C)/\mathsf{U}(2) \ar[r]\ar[d]& 0 \\ 0 \ar[r] & \SO(3) \ar[r]& \SO(3,\C) \ar[r]& \SO(3,\C)/\SO(3) \ar[r]& 0 \end{tikzcd} \] A metric on the $\SO(3,\C)$-Higgs bundle $(E,\Phi,\omega)$ is a reduction of structure group to $\SO(3)$. Denoting by $P$ the $\SO(3,\C)$-frame bundle associated to $E$, it corresponds to a section of $P \times_{\SO(3,\C)} \SO(3,\C)/\SO(3)$. Let $(E',\Phi') \in \H^{-1}_{\GL(2,\C)}(0,a_2)$, such that its image under the map of Higgs bundle moduli spaces \[ \M_{\GL(2,\C)}(X,K) \rightarrow \M_{\SO(3,\C)}(X,K) \] induced by the adjoint representation is $(E,\Phi,\omega)$. By the above diagram any hermitian metric on $(E',\Phi')$ induces a metric on $(E,\Phi,\omega)$. Let $h_{\det}$ denote the hermitian-Yang-Mills metric on $\det(E')$. By Remark \ref{rem:fixdetcase}, there exists a solution to the decoupled Hitchin equation $h_{dc}$ on $(E',\Phi')$, such that $\det(h_{dc})=h_{\det}$, which is unique up to scaling. This induces a solution to the decoupled Hitchin equation on $(E,\Phi,\omega)$ by the above diagram. Furthermore, if we choose another representative $(E'\otimes L,\Phi')$ with $L \in \Pic(X)$, then $h_{dc}(E'\otimes L,\Phi')=h_{dc}(E',\Phi')h_L$, where $h_L$ is the hermitian-Einstein metric on $L$. We see from the commutative diagram that the resulting metric on the $\SO(3,\C)$-bundle $E$ does not depend on this choice. \end{proof} \begin{exam}\label{exam:lim_config_so(3)_sing} Fix a Higgs divisor $D$ associated to $a_2 \in H^0(X,K^2)$. Let \[ (E,\Phi,\omega) \in \S_D \subset \H_{\SO(3,\C)}^{-1}(a_2) \] be the Higgs bundle corresponding to the $u$-coordinate $0$ with respect to the frames fixed in the proof of Theorem \ref{lim:theo:sol_dec_hit}.
For $x \in Z(a_2)$ there exists a coordinate $(U,z)$ centred at $x$ and a local frame of $E\rest_U$, such that the Higgs field is given by \[ \Phi= \begin{pmatrix} 0 & i \sqrt{2} z^{D_x} & 0 \\ -i \sqrt{2}z^{\ord_x(a_2)-D_x} & 0 & -i \sqrt{2}z^{D_x} \\ 0 & i \sqrt{2}z^{\ord_x(a_2)-D_x} & 0 \end{pmatrix} \d z, \] the orthogonal structure by \[ \omega=\begin{pmatrix} && 1 \\ & 1 & \\ 1 && \end{pmatrix} \] and the solution to the decoupled Hitchin equation is given by \[ h_{dc}= \begin{pmatrix} \vert z \vert^{\ord_x(a_2)-D_x} & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & \vert z \vert ^{D_x-\ord_x(a_2)} \end{pmatrix} . \] \end{exam} Applying Theorem \ref{theo:so:biholo}, we can use this result to obtain solutions to the decoupled Hitchin equation for $\SO(2n+1,\C)$-Higgs bundles of $\sl(2)$-type. \begin{theo}\label{theo:hdc_SO(2n+1)} Let $(E,\Phi,\omega) \in \M_{\SO(2n+1,\C)}(X,K)$ be of $\sl(2)$-type with irreducible and reduced spectral curve. Then the pushforward along $\pi_n$ defines a solution to the decoupled Hitchin equation $h_{dc}$. \end{theo} \begin{proof} This proof is similar to the proof of Theorem \ref{theo:limit:sl2type}. Let $(E_3,\Phi_3,\omega_3)$ be the $\pi_n^*K$-twisted $\SO(3,\C)$-Higgs bundle on $\Sigma/\sigma$ corresponding to $(E,\Phi,\omega)$ under the isomorphism of Theorem \ref{theo:so:biholo}. Recall that we recover $(E,\Phi,\omega)$ by a unique Hecke modification of \[ (\hat{E},\hat{\Phi}):=\left( \Ker(\Phi) \oplus \pi_{n*} \left(\Ker \Phi_3^{\perp} \otimes \pi^*_n K^{n-1}\right), 0 \oplus \Phi \rest_{\Ker{\Phi_3}^{\perp}} \right), \] where the perpendicular is taken with respect to the orthogonal structure $\omega_3$. In Theorem \ref{theo:SO(3)_lim_config}, we constructed a solution to the decoupled Hitchin equation $h_3$ on $(E_3,\Phi_3,\omega_3)$.
$h_3$ induces a singular hermitian metric on $\Ker \Phi_3^\perp$, which descends to a hermitian metric $\pi_{n*}(h_3 \vert \partial \pi_n \vert^{-1})$ on $\pi_{n*} (\Ker \Phi_3^{\perp} \otimes \pi^*_n K^{n-1})$ singular at $Z(a_{2n}) \cup \supp B$. Here $B$ denotes the branch divisor of $\pi_n: \Sigma/\sigma \rightarrow X$. There is a canonical singular flat metric on $\Ker(\Phi)= K^{-n}(D)$ given by $\vert \frac{a_{2n}}{s_D} \vert$ singular at $Z(a_{2n})$. This defines a singular flat hermitian metric \[ \left\vert\frac{a_{2n}}{s_D} \right\vert \oplus \pi_{n*}(h_3 \vert \partial \pi_n \vert^{-1}) \] on $(\hat{E},\hat{\Phi})$, such that the Higgs field is normal and which is compatible with the singular orthogonal structure. The Hecke modification at $Z(a_{2n})$ desingularizes the orthogonal structure. The induced hermitian metric on $(E,\Phi,\omega)$ solving the decoupled Hitchin equation is singular at $Z(a_{2n}) \cup \supp (B) = Z(\disc_{\sp}(a_2, \dots, a_{2n} ))$. \end{proof} \begin{rema} In the previous proof, we used Lemma \ref{lemm:so:kernel_Hecketrafo} stating that we can reconstruct a $\SO(2n+1,\C)$-Higgs bundle in a unique way from \[ (\hat{E},\hat{\Phi})=\left( \Ker( \Phi ) \oplus \Ker(\Phi)^\perp, 0 \oplus \Phi \rest_{\Ker(\Phi)^\perp}\right) \] through Hecke modification. For a $\SO(3,\C)$-Higgs bundle this gives another way to construct a solution to the decoupled Hitchin equation. An easy but tedious computation using the local models of Theorem \ref{lim:theo:sol_dec_hit} shows that $h_{dc}$ as constructed in Theorem \ref{theo:SO(3)_lim_config} is equal to the solution of the decoupled Hitchin equation obtained from the singular flat hermitian metric \[ \left\vert\frac{a_{2}}{s_D} \right\vert \oplus h_{dc} \rest_{\Ker(\Phi)^\perp} \] on $(\hat{E},\hat{\Phi})$ in this way. This shows that the construction of solutions to the decoupled Hitchin equation in the previous proof is consistent.
\end{rema} As in the symplectic case, the approximation of the local models of these solutions to the decoupled $\SO(2n+1,\C)$-Hitchin equation follows from the work of Mochizuki \cite{Mo16} and Fredrickson \cite{Fr18b}. Applying the same argument as in the proof of Theorem \ref{theo:SO(3)_lim_config} to the solutions of the rescaled Hitchin equation in Theorem \ref{theo:approx:Sl(2)}, one shows that the solutions to the decoupled $\SO(3,\C)$-Hitchin equation are limiting metrics. In particular, the local models of the solution to the decoupled Hitchin equation for $\SO(2n+1,\C)$ are approximated. At the branch points of $\pi_n: \Sigma/\sigma \rightarrow X$ the Higgs bundle looks like two copies of an $\SL(n,\C)$-Higgs bundle interchanged by $\omega$, plus the kernel. Hence, the local models are described and approximated by the work of \cite{Fr18b}. This leads us to the analogue of Conjecture \ref{conj:limit:config} for $\SO(2n+1,\C)$. \begin{conj} The solutions to the decoupled Hitchin equation for $\SO(2n+1,\C)$ constructed in Theorem \ref{theo:hdc_SO(2n+1)} are limiting metrics. \end{conj} \subsection{Smooth trivialisation of semi-abelian spectral data} In Sections \ref{sec:sp:semi-abel} and \ref{sec:langlands}, we stratified the $\sl(2)$-type Hitchin fibres by fibre bundles over abelian torsors. Using the solutions to the hermitian-Einstein equation discussed above, we can prove that all these fibre bundles are smoothly trivial. \begin{proof}[Proof of Theorem \ref{theo:fibrebdle_trivial}] We only need to show the triviality in the $\SL(2,\C)$-case. Then it follows in all other cases by the identification of $\sl(2)$-type Hitchin fibres with fibres of an $\SL(2,\C)$- resp. $\PGL(2,\C)$-Hitchin map.
In the proof of Theorem \ref{lim:theo:sol_dec_hit}, we saw that a solution to the hermitian-Einstein equation $h_L$ on the eigen line bundle $L \in \Prym_{K^{-1}(D)}(\tilde{\pi})$ with respect to some auxiliary parabolic structure induces local frames $s$ at $p \in \tilde{\pi}^{-1}Z(a_2)$, such that $h_L=\vert z \vert^{2\alpha_p}$. These frames are unique up to multiplication by a constant and therefore define unique $u$-coordinates at all $p \in Z(\tilde{\pi}^*a_2)$ (see \cite{Ho1} Proposition 5.8). The metric $h_L$ depends smoothly on $L \in \Prym_{K^{-1}(D)}(\tilde{\pi})$ (see \cite{MSWW19} Proposition 3.3). Furthermore, the choice of $s$ depends smoothly on $h_L$ by the explicit argument in \cite{Fr18b} Proposition 3.5. Hence, this defines a smooth trivialisation in the $\SL(2,\C)$-case and therefore in all other cases. \end{proof} \printbibliography \end{document}
{ "redpajama_set_name": "RedPajamaArXiv" }
3,459
{"url":"http:\/\/math.stackexchange.com\/questions\/65086\/put-some-elements-on-a-sphere\/65099","text":"# put some elements on a sphere\n\ni don't know if this is the better place to post.\n\ni'm coding a shperical tag cloud: i have a brunch of tags and i want to position them on a sphere surface. for doing that i'm procedurally drawing meridians and circles of latitude. on each circle of latitude i want to put some tags. of course the first circle has a smaller circumference than the middle one (the middle one is the equator and have the larger circumference). now, i want to get some rule for dividing gracefully the tags along the circles of latitude.\n\nfor example if i have 30 tags and i have fixed 6 circle of latitude a good result could be this one:\n\n1. circle: 3 tags\n2. circle: 5 tags\n3. circle: 7 tags\n4. circle: 7 tags\n5. circle: 5 tags\n6. circle: 3 tags\n\nso: with 30 tags and 6 circle i can have: 3,5,7,7,5,3.\n\nsome ideas?\n\n-\n\nIf you space $2n-1$ circles equally in latitude (skipping the poles), you will have circles from $\\phi=-\\frac{n-1}{n}90^{\\circ}$ to $\\frac{n-1}{n}90^{\\circ}$ by steps of $\\frac {90^{\\circ}}{n}$. If you space $2n$ circles equally in latitude (skipping the poles), you will have circles from $\\phi=-\\frac{2n-1}{2n}90^{\\circ}$ to $\\frac{2n-1}{2n}90^{\\circ}$ by steps of $\\frac {90^{\\circ}}{n}$. In either case, the circumference of the circle of latitude is proportional to $\\cos \\phi$. So you can add up all the $\\cos \\phi$'s to get the total length, divide by the number of tags to get a length per tag, and divide the length of each circle by the length per tag to get the number on each circle. You may have to do a bit of rounding to get it to come out. If you do this, the number of tags per circle will increase rapidly as you leave the poles and not so fast when you approach the equator. 
See if it does what you want.\n\nIf you want the spacing in each direction to be about the same, the number of tags on the latitudes nearest the equator should be about twice the number of circles.\n\n-","date":"2015-04-27 15:35:23","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.7264759540557861, \"perplexity\": 184.22299351140336}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 20, \"end_threshold\": 5, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2015-18\/segments\/1429246658904.34\/warc\/CC-MAIN-20150417045738-00217-ip-10-235-10-82.ec2.internal.warc.gz\"}"}
null
null